Abstract

In this paper, the reproducing kernel Hilbert space method is applied to approximate the solution of two-point boundary value problems for fourth-order Fredholm-Volterra integrodifferential equations. The analytical solution is calculated in the form of a convergent series in the space , with easily computable components. In the proposed method, the n-term approximation is obtained and is proved to converge to the analytical solution. Meanwhile, the error of the approximate solution is monotone decreasing in the sense of the norm of the underlying space. The proposed technique is applied to several examples to illustrate the accuracy, efficiency, and applicability of the method.

1. Introduction

Boundary value problems (BVPs) of fourth-order integrodifferential equations (IDEs), which are a combination of differential and integral equations, arise very frequently in many branches of applied mathematics and physics, such as fluid dynamics, biological models, chemical kinetics, biomechanics, electromagnetics, elasticity, electrodynamics, heat and mass transfer, and oscillation theory [1–4]. If the BVPs for fourth-order Fredholm-Volterra IDEs cannot be solved analytically, which is the usual case, then recourse must be made to numerical and approximate methods.

This paper discusses and investigates the analytical approximate solution, using the reproducing kernel Hilbert space (RKHS) method, of the fourth-order Fredholm-Volterra IDE which is as follows: subject to the boundary conditions where , , are real finite constants, is an unknown function to be determined, , , are continuous functions on , , , are continuous terms in as , , , , , and depend on the problem discussed, and , are two reproducing kernel spaces.
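For orientation only, since the displays for (1) and (2) are not reproduced here, a representative member of this class of problems is a fourth-order Fredholm-Volterra IDE of the form
$$ u^{(4)}(x) = f\big(x, u(x)\big) + \int_{a}^{b} k_1(x,t)\,G_1\big(u(t)\big)\,dt + \int_{a}^{x} k_2(x,t)\,G_2\big(u(t)\big)\,dt, \qquad a \le x \le b, $$
with two boundary conditions prescribed at each endpoint $x=a$ and $x=b$. This display is a generic stand-in written for the reader's convenience, not a quotation of equation (1); the kernels $k_1$, $k_2$ and the nonlinearities $f$, $G_1$, $G_2$ are placeholders for the continuous data described above.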

Recently, many authors have discussed the numerical solvability of BVPs for fourth-order Volterra IDEs, which are special cases of (1) and (2). To mention a few, in [5], the authors have discussed the Adomian decomposition method for solving IDEs (1) and (2) when , , and , where . The differential transform method has been applied to solve the same equations when , , and , where , as described in [6]. Furthermore, the homotopy perturbation method is carried out in [7] for the aforementioned IDEs in the case that , , and , where . Recently, the homotopy analysis method for solving (1) and (2) when , , and , where , is proposed in [8]. On other related aspects, the numerical solvability of differential and integral equations of various types and orders can be found in [9–15] and the references therein.

Research on two-point BVPs for fourth-order Fredholm-Volterra IDEs is scarce; to the best of our knowledge, no method for obtaining approximate solutions of this type of equation has been given in the literature, and none of the previous studies proposes a systematic way of solving these equations. Moreover, the methods of previous studies require more effort to achieve the results, are of limited accuracy, and are usually developed for special cases of (1) and (2). The new method is accurate, needs less effort to achieve the results, and is developed especially for the nonlinear case. Meanwhile, the proposed technique has the advantage that it is possible to pick any point in the interval of integration, and the approximate solution and all its derivatives up to order four will be applicable as well.

Reproducing kernel theory has important applications in numerical analysis, differential equations, integral equations, probability and statistics, and so forth [16–18]. In recent years, extensive work has been done using the RKHS method, which provides numerical approximations for linear and nonlinear equations. This method has been implemented for several operator, differential, integral, and integrodifferential equations, such as nonlinear operator equations [19], nonlinear systems of second-order BVPs [20], nonlinear fourth-order BVPs [21], nonlinear systems of initial value problems [22], nonlinear second-order singular BVPs [23], nonlinear partial differential equations [24], nonlinear Fredholm-Volterra integral equations [25], nonlinear fourth-order Volterra IDEs [26], nonlinear Fredholm-Volterra IDEs [27], and others.

The paper is organized in the following form. In the next section, two reproducing kernel spaces are described. In Section 3, a linear operator, a complete orthonormal system, and some essential results are introduced. Also, a method for the existence of solutions for (1) and (2) based on the reproducing kernel space is described. In Section 4, we give an iterative method to solve (1) and (2) numerically in the space . Numerical examples are presented in Section 5. Section 6 ends this paper with a brief conclusion.

2. Two Reproducing Kernel Spaces

In this section, the two reproducing kernels needed to solve (1) and (2) using the RKHS method are constructed. Before the construction, we recall the reproducing kernel concept. Throughout this paper, is the set of complex numbers, , and .

An abstract set is supposed to have elements, each of which has no structure, and is itself supposed to have no internal structure, except that the elements can be distinguished as equal or unequal, and to have no external structure except for the number of elements.

Definition 1 (see [23]). Let be a nonempty abstract set. A function is a reproducing kernel of the Hilbert space if (1) for each , ; (2) for each and , .

The second condition in Definition 1 is called “the reproducing property”: the value of the function at the point is reproduced by the inner product of with . A Hilbert space which possesses a reproducing kernel is called an RKHS [23].
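In standard notation (a restatement of the definition above, not specific to this paper), a Hilbert space $H$ of functions on a set $E$ has reproducing kernel $K$ if
$$ K(\cdot, y) \in H \quad \text{for every } y \in E, \qquad \big\langle u(\cdot), K(\cdot, y) \big\rangle_{H} = u(y) \quad \text{for every } u \in H \text{ and } y \in E. $$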

Next, we first construct the space in which every function satisfies the boundary conditions (2) and then utilize the space .

Definition 2 (see [21]). ,   are absolutely continuous real-valued functions on , , and . The inner product and the norm in are defined as and , respectively, where .

Definition 3 (see [19]). is an absolutely continuous real-valued function on and . The inner product and the norm in are defined as and , respectively, where .

Remark 4. In [19], the authors have proved that the space is a complete reproducing kernel space and its reproducing kernel is
The Hilbert space is called a reproducing kernel space if for each fixed and any , there exist (simply ) and such that . The next theorem formulates the reproducing kernel function on the space .
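As a concrete low-order illustration of the reproducing property, consider the textbook space $W_2^1[0,1]$ with inner product $\langle u, v\rangle = u(0)v(0) + \int_0^1 u'(x)v'(x)\,dx$ and kernel $R(x,y) = 1 + \min(x,y)$; these particular choices are standard in the RKHS literature but are assumptions made here for illustration only, not the spaces and kernels of this paper. The sketch below checks numerically that $\langle u, R(\cdot,y)\rangle = u(y)$ up to quadrature error:

```python
# Numerical check of the reproducing property for the (assumed) toy space W_2^1[0,1]
# with <u, v> = u(0) v(0) + int_0^1 u'(x) v'(x) dx and kernel R(x, y) = 1 + min(x, y).
import numpy as np

def trapezoid(values, grid):
    # explicit trapezoidal rule, to avoid depending on a particular NumPy version
    return float(np.sum(0.5 * (values[1:] + values[:-1]) * np.diff(grid)))

def inner_product(u, du, v, dv, n=20001):
    x = np.linspace(0.0, 1.0, n)
    return u(0.0) * v(0.0) + trapezoid(du(x) * dv(x), x)

R     = lambda x, y: 1.0 + np.minimum(x, y)        # kernel of W_2^1[0,1]
dR_dx = lambda x, y: np.where(x < y, 1.0, 0.0)     # d/dx R(x, y), almost everywhere

u  = lambda x: np.sin(x) + x**2                    # arbitrary test function in W_2^1[0,1]
du = lambda x: np.cos(x) + 2.0 * x

for y in (0.2, 0.5, 0.9):
    lhs = inner_product(u, du, lambda x: R(x, y), lambda x: dR_dx(x, y))
    print(f"y = {y}:  <u, R(., y)> = {lhs:.6f},  u(y) = {u(y):.6f}")
```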

Theorem 5. The Hilbert space is a reproducing kernel space, and its reproducing kernel function is given by where and are unknown coefficients of .

Proof. Through several integrations by parts for (3), one can obtain . Since , it follows that , . Again, since , one obtains , . Thus, if , , , and , , then . Now, for each , if also satisfies the formula , where is the Dirac delta function, then . Obviously, is the reproducing kernel of the space .
Next, we give the expression of the reproducing kernel function . The characteristic equation of is , and its characteristic value is , a root of multiplicity 10. So, let the kernel be as defined in (5).
On the other hand, let satisfy , . Integrating from to with respect to and letting , we obtain the jump condition of at , given by . From the above conditions, the unknown coefficients in (5) can be obtained. This completes the proof.
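The mechanics of this construction can be seen on a lower-order analogue. The sketch below (an illustration under assumed data, not the kernel used in this paper) builds the reproducing kernel of $W_2^2[0,1]$ with the inner product $\langle u, v\rangle = u(0)v(0) + u'(0)v'(0) + \int_0^1 u''(x)v''(x)\,dx$: away from $x=y$ the kernel is a piecewise cubic, and its eight coefficients are fixed by the boundary conditions produced by integration by parts, smoothness up to order two at $x=y$, and a unit jump in the third derivative there. The same steps, applied in the proof above, produce a piecewise polynomial of degree nine with a jump in the ninth derivative.

```python
# Constructing a reproducing kernel the way the proof of Theorem 5 does, but for the
# lower-order (assumed) space W_2^2[0,1], so that the linear system stays small.
import sympy as sp

x, y = sp.symbols('x y')
a = sp.symbols('a0:4')                      # polynomial coefficients for x <= y
b = sp.symbols('b0:4')                      # polynomial coefficients for x >  y
Rl = sum(a[i] * x**i for i in range(4))     # left piece of the kernel R(., y)
Rr = sum(b[i] * x**i for i in range(4))     # right piece of the kernel R(., y)
d = lambda f, k: sp.diff(f, x, k)

eqs = [
    # boundary conditions obtained by integrating <u, R(., y)> by parts
    sp.Eq(Rl.subs(x, 0) + d(Rl, 3).subs(x, 0), 0),
    sp.Eq(d(Rl, 1).subs(x, 0) - d(Rl, 2).subs(x, 0), 0),
    sp.Eq(d(Rr, 2).subs(x, 1), 0),
    sp.Eq(d(Rr, 3).subs(x, 1), 0),
    # smoothness of R, R', R'' at x = y and a unit jump of R''' there
    sp.Eq(Rl.subs(x, y), Rr.subs(x, y)),
    sp.Eq(d(Rl, 1).subs(x, y), d(Rr, 1).subs(x, y)),
    sp.Eq(d(Rl, 2).subs(x, y), d(Rr, 2).subs(x, y)),
    sp.Eq(d(Rr, 3).subs(x, y) - d(Rl, 3).subs(x, y), 1),
]
sol = sp.solve(eqs, a + b, dict=True)[0]
print(sp.expand(Rl.subs(sol)))              # expect 1 + x*y + x**2*y/2 - x**3/6  (x <= y)
print(sp.expand(Rr.subs(sol)))              # expect 1 + x*y + x*y**2/2 - y**3/6  (x >  y)
```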

Remark 6. Without loss of generality and by using the Mathematica 7.0 software package, the two expressions and of the reproducing kernel function in (5) are obtained at and in (1) and (2) and are given, respectively, as

The following corollary summarizes some important properties of the reproducing kernel function .

Corollary 7. The reproducing kernel function is symmetric and unique, and for any fixed .

Proof. By the reproducing property, we have for each and . Now, let and both be reproducing kernels of the space ; then . Finally, we note that .

3. Main Results and the Structure of Solution

In this section, the representation of the analytical solution of (1) and (2) and the implementation method are given in the reproducing kernel space . After that, we construct an orthogonal function system of based on the Gram-Schmidt orthogonalization process.

To do this, we define a differential operator as such that After homogenization of the boundary conditions (2), (1) and (2) can be converted into the equivalent form as follows: such that , , and , where and for , , ,  ,  . It is easy to show that is a bounded linear operator from into .

Next, we construct an orthogonal function system of . Put and , where is dense on and is the adjoint operator of . In terms of the properties of the reproducing kernel function , one obtains , .

Remark 8. The orthonormal function system of the space can be derived from the Gram-Schmidt orthogonalization process of as follows: where are the orthogonalization coefficients and are given by the following subroutine: such that , , and is the orthonormal system in the space .
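In matrix form, the coefficients $\beta_{ik}$ of Remark 8 can be generated from the Gram matrix $G_{ij} = \langle\psi_i, \psi_j\rangle$ of the nonorthogonal system. The sketch below is an illustration of this linear-algebra view, not the paper's subroutine; the Gram matrix is assumed to be supplied by the kernel computations described above. It uses the Cholesky factorization $G = LL^{T}$, whose inverse $B = L^{-1}$ is lower triangular with positive diagonal and satisfies $BGB^{T} = I$, which is exactly the Gram-Schmidt requirement.

```python
# Gram-Schmidt coefficients from the Gram matrix G[i, j] = <psi_i, psi_j>:
# with G = L L^T (Cholesky), the rows of beta = L^{-1} give orthonormal combinations
# psi_bar_i = sum_{k <= i} beta[i, k] * psi_k, since beta G beta^T = I.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def orthogonalization_coefficients(G):
    """Return the lower-triangular array beta[i, k] (k <= i)."""
    L = cholesky(np.asarray(G, dtype=float), lower=True)          # G = L L^T
    return solve_triangular(L, np.eye(L.shape[0]), lower=True)    # beta = L^{-1}

# Small check with an artificial symmetric positive definite Gram matrix.
G = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.3],
              [0.1, 0.3, 1.0]])
beta = orthogonalization_coefficients(G)
print(np.allclose(beta @ G @ beta.T, np.eye(3)))                  # True
```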

In the next theorem, the subscript on the operator ( ) indicates that the operator acts on the function of .

Theorem 9. If is dense on , then is a complete function system of and .

Proof. Clearly, . Now, for each fixed , let , . In other words, . Note that is dense on , and therefore . It follows that from the existence of . So, the proof of the theorem is complete.

Lemma 10. If , then there exists such that , , where .

Proof. For any , we have , . By the expression of , it follows that , . Thus, , . Hence, , , where .

The structure of the next theorem is as follows. First, we give the representation of the exact solution of (1) and (2) in the space . After that, the convergence of the approximate solution to the analytic solution is proved.

Theorem 11. For each in the space , the series is convergent in the sense of the norm of . On the other hand, if is dense on and the solution of (1) and (2) is unique, then (i) the exact solution of (9) can be represented by ; (ii) the approximate solution of (9) and , converge uniformly to the exact solution and all its derivatives as , respectively.

Proof. For the first part, let be the solution of (9) in the space . Since belongs to the Hilbert space and is the Fourier series about the orthonormal system , the series is convergent in the sense of the norm of . On the other hand, using (10), we have Therefore, the form of (12) is the exact solution of (9).
For the second part, it is easy to see by Lemma 10 that, for any , On the other hand, where , are positive constants. Hence, if as , the approximate solution and , converge uniformly to the exact solution and all its derivatives, respectively. So, the proof of the theorem is complete.

Remark 12. We mention here that the approximate solution in (13) can be obtained by taking finitely many terms in the series representation of in (12).
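In the notation customary for this family of RKHS methods, and writing $F(x_k)$ for the full right-hand side of (9) evaluated at the node $x_k$ and at the corresponding values of the solution and of its Fredholm and Volterra integral terms (an assumed shorthand, since the original displays are not reproduced here), representations (12) and (13) typically take the form
$$ u(x) = \sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\,F(x_k)\,\bar{\psi}_i(x), \qquad u_n(x) = \sum_{i=1}^{n}\sum_{k=1}^{i}\beta_{ik}\,F(x_k)\,\bar{\psi}_i(x), $$
that is, the $n$-term approximation is the truncation of the series after its first $n$ terms.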

4. Procedure of Constructing Iterative Method

In this section, an iterative method of obtaining the solution of (9) is presented in the reproducing kernel space for both linear and nonlinear cases. Initially, we will mention the following remark about the exact and approximate solutions of (1) and (2).

Remark 13. In order to apply the RKHS technique to solve (1) and (2), we have the following two cases based on the structure of the functions , , and .

Case 1. If (1) is linear, then the exact and approximate solutions can be obtained directly from (12) and (13), respectively.

Case 2. If (1) is nonlinear, then in this case the exact and approximate solutions can be obtained by using the following iterative algorithm.

Algorithm 14. According to (12), the representation of the analytical solution of (1) can be written as the following series: where . In fact, , in (17) are unknown; we will approximate them using known quantities. For numerical computations, we define the initial function , put , and define the n-term approximation to by where the coefficients of , are given as follows:
Here, note that in the iterative process of (18), we can guarantee that the approximation satisfies the boundary conditions (2). Now, the approximate solution can be obtained by taking finitely many terms in the series representation of and
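The structure of this iteration can be summarized in a few lines of code. The sketch below is schematic, not the authors' implementation: the orthonormal system `psi_bar`, the coefficient array `beta` (produced, for instance, by a Gram-Schmidt step such as the one sketched after Remark 8), and the right-hand side `f` are hypothetical inputs, and the zero initial guess is one common choice.

```python
# Skeleton of the iterative scheme (18)-(19): starting from a zero initial guess, the
# coefficient B_i at step i uses the previously computed iterate inside the right-hand side.
import numpy as np

def rkhs_iterative_solution(psi_bar, beta, f, nodes):
    """n-term approximation u_n(x) = sum_i B[i] * psi_bar[i](x).

    psi_bar : list of callables, the orthonormal system {psi_bar_i} (assumed given)
    beta    : array of Gram-Schmidt coefficients, beta[i][k] for k <= i
    f       : right-hand side of the operator equation Lu = f(x, u); the Fredholm and
              Volterra terms are assumed to be folded into f for brevity
    nodes   : dense sequence of points {x_k} in [a, b]
    """
    n = len(nodes)
    B = np.zeros(n)

    def u_m(t, m):                       # approximation built from the first m coefficients
        return sum(B[i] * psi_bar[i](t) for i in range(m))

    for i in range(n):
        # B_i = sum_{k <= i} beta_{ik} f(x_k, u_{k-1}(x_k)); every needed B_j is already known
        B[i] = sum(beta[i][k] * f(nodes[k], u_m(nodes[k], k)) for k in range(i + 1))

    return lambda t: sum(B[i] * psi_bar[i](t) for i in range(n))
```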
Next, we will prove that in the iterative formula (18) converges to the exact solution of (1); in fact, this result is fundamental in RKHS theory and its applications. The following two lemmas are collected for future use in order to prove the main theorem of this section.

Lemma 15. If in the sense of the norm of , as , and is continuous in with respect to for and , then as .

Proof. Firstly, we will prove that in the sense of , since By the reproducing property of , we have and . Thus, . From the symmetry of , it follows that as . Hence, as soon as . On the other hand, by Theorem 11 part (ii), for any , it holds that as . Therefore, in the sense of as and .
Thus, by means of the continuity of , , and , it is obtained that , , and as . This shows that and as . Hence, the continuity of gives the result.

Lemma 16. For , one has

Proof. The proof of
will be obtained by induction as follows:
If , then
Using the orthogonality of yields that
Now, if , then
Again, if , then
Thus, . It is easy to see by mathematical induction that .
On the other hand, from Theorem 11, converges uniformly to . It follows that, on taking limits in (18), . Therefore, , where is the orthogonal projector from onto Span . Thus, = .

Theorem 17. If is dense on and is bounded, then the n-term approximate solution in the iterative formula (18) converges to the exact solution of (9) in the space and , where are given by (19).

Proof. The proof consists of the following three steps. Firstly, we will prove that the sequence in (18) is monotone increasing in the sense of . By Theorem 9, is a complete orthonormal system in . Hence, we have = . Therefore, is monotone increasing.
Secondly, we will prove the convergence of . From (18), we have . From the orthogonality of , it follows that . The sequence is monotone increasing, and since is bounded, is convergent as . Then, there exists a constant such that . It implies that . On the other hand, since , it follows for that Furthermore, . Consequently, as , we have . Considering the completeness of , there exists a such that as in the sense of .
Thirdly, we will prove that is the solution of (9). Since is dense on , for any , there exists a subsequence such that as . From Lemma 16, it is clear that . Hence, letting and using Lemma 15 and the continuity of , we have . That is, satisfies (1). Also, since , clearly satisfies the boundary conditions (2). In other words, is the solution of (1) and (2), where and are given by (19). The proof is complete.

It is obvious that if we let denote the exact solution of (9), the approximate solution obtained by the RKHS method as given by (18), and the difference between and , where , then and , or . In fact, this is just the proof of the following theorem.

Theorem 18. The difference function is monotone decreasing in the sense of the norm of .
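When (1) is linear, so that $u_n$ is the orthogonal projection of the exact solution $u$ onto $\mathrm{span}\{\bar{\psi}_1,\dots,\bar{\psi}_n\}$, Theorem 18 amounts to the familiar statement about tails of a Fourier series (the notation is the assumed one used in the sketches above):
$$ \|u - u_n\|^2 = \Big\|\sum_{i=n+1}^{\infty}\langle u, \bar{\psi}_i\rangle\,\bar{\psi}_i\Big\|^2 = \sum_{i=n+1}^{\infty}\big|\langle u, \bar{\psi}_i\rangle\big|^2, $$
which is clearly nonincreasing in $n$.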

5. Numerical Outcomes

In this section, we present a few numerical simulations, implemented in the Mathematica 7.0 software package, for solving some specific examples of (1) and (2). We apply the techniques described in the previous sections to some linear and nonlinear test examples in order to demonstrate the efficiency, accuracy, and applicability of the proposed method. The results obtained by the method are compared with the analytical solution of each example by computing the absolute and relative errors and are found to be in good agreement.
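Throughout the tables, the errors are measured pointwise in the usual way: writing $u$ for the exact solution and $u_n$ for the $n$-term RKHS approximation, the absolute and relative errors at a grid point $x_i$ are taken to be
$$ E_{\mathrm{abs}}(x_i) = \big|u(x_i) - u_n(x_i)\big|, \qquad E_{\mathrm{rel}}(x_i) = \frac{\big|u(x_i) - u_n(x_i)\big|}{\big|u(x_i)\big|}. $$
These are the standard definitions and are stated here for the reader's convenience; the original tables are assumed to use them.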

Example 19. Consider the following linear Fredholm-Volterra IDE: subject to the boundary conditions where . The exact solution is .

Using the RKHS method, taking , with the reproducing kernel function on , the approximate solution is calculated by (13). The numerical results at some selected grid points for are given in Table 1.

Example 20. Consider the following nonlinear Fredholm-Volterra IDE: subject to the boundary conditions where . The exact solution is .
Using the RKHS method, taking , with the reproducing kernel function on , the approximate solution is calculated by (20). The numerical results at some selected grid points for and are given in Table 2.

Example 21. Consider the following nonlinear Fredholm-Volterra IDE: subject to the boundary conditions where . The exact solution is .
Using the RKHS method, taking , with the reproducing kernel function on , the approximate solution is calculated by (20). The numerical results at some selected grid points for and are given in Table 3.

Example 22. Consider the following nonlinear Fredholm-Volterra IDE: subject to the boundary conditions where . The exact solution is .
Using the RKHS method, taking , with the reproducing kernel function on , the approximate solution is calculated by (20). The numerical results at some selected grid points for and are given in Table 4.

Example 23. Consider the following nonlinear Fredholm-Volterra IDE: subject to the boundary conditions where . The exact solution is .
Using the RKHS method, taking , with the reproducing kernel function on , the approximate solution is calculated by (20). The numerical results at some selected grid points for and are given in Table 5.

As we mentioned earlier, it is possible to pick any point in , and the approximate solution and all its derivatives up to order four will be applicable as well. Next, new numerical results for Example 23, which include the absolute errors at some selected nodes in for , , where , in which and , are given in Table 6.
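Because the $n$-term approximation is a finite linear combination of the smooth kernel-based functions $\bar{\psi}_i$, its derivatives are available by differentiating the representation term by term. Writing the approximation as $u_n(x) = \sum_{i=1}^{n} B_i\,\bar{\psi}_i(x)$ (an assumed notation matching the sketches above, since the original display (18) is not reproduced here), the tabulated derivative values correspond to
$$ u_n^{(j)}(x) = \sum_{i=1}^{n} B_i\, \bar{\psi}_i^{(j)}(x), \qquad j = 0, 1, 2, 3, 4. $$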

6. Conclusions

The main concern of this work has been to propose an efficient algorithm for the solution of two-point BVPs for fourth-order Fredholm-Volterra IDEs (1) and (2). The main goal has been achieved by introducing the RKHS method to solve this class of IDEs. We can conclude that the RKHS method is a powerful and efficient technique for finding approximate solutions of linear and nonlinear problems. In the proposed algorithm, the solution and the approximate solution are represented in the form of series in . Moreover, the approximate solution and all its derivatives converge uniformly to the exact solution and all its derivatives up to order four, respectively. There is an important point to make here: the results obtained by the RKHS method are very effective and convenient in both the linear and nonlinear cases, with fewer iteration steps and less computational work and time. This confirms our belief that the efficiency of our technique gives it much wider applicability in the future for general classes of linear and nonlinear problems.