Mathematical Problems in Engineering, Volume 2014, Article ID 147079, 7 pages. http://dx.doi.org/10.1155/2014/147079
Research Article

## Polynomial Least Squares Method for the Solution of Nonlinear Volterra-Fredholm Integral Equations

Bogdan Căruntu and Constantin Bota

Department of Mathematics, “Politehnica” University of Timişoara, P-ta Victoriei 2, 300006 Timişoara, Romania

Received 24 January 2014; Accepted 26 August 2014; Published 27 October 2014

Copyright © 2014 Bogdan Căruntu and Constantin Bota. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents the application of the polynomial least squares method to nonlinear integral equations of the mixed Volterra-Fredholm type. For this type of equations, accurate approximate polynomial solutions are obtained in a straightforward manner, and numerical examples are given to illustrate the validity and the applicability of the method. A comparison with previous results is also presented and emphasizes the accuracy of the method.

#### 1. Introduction

In this paper, we consider nonlinear integral Volterra-Fredholm equations of the general form
$$u(x) = f(x) + \lambda_1 \int_a^x K_1(x,s)\, G_1(u(s))\, ds + \lambda_2 \int_a^b K_2(x,s)\, G_2(u(s))\, ds, \quad x \in [a, b], \tag{1}$$
where $\lambda_1$, $\lambda_2$, $a$, and $b$ are constants and $f$, $K_1$, $K_2$, $G_1$, and $G_2$ are functions assumed to have suitable derivatives on the interval $[a, b]$.

Equations of this type are frequently used to model applications from various fields of science, such as elasticity, electricity and magnetism, fluid dynamics, population dynamics, and mathematical economics.

In general, the exact solution of these nonlinear integral equations cannot be found, and thus it is often necessary to compute approximate solutions. In this regard, many approximation techniques have been employed over the years, including the following (see the examples in Section 3):

(i) the rationalized Haar functions method [1, 2];
(ii) the Chebyshev polynomials method [3, 4];
(iii) the triangular functions (TF) method [5];
(iv) the Sinc approximation method [6];
(v) collocation methods [7];
(vi) the optimal control method [8];
(vii) the radial basis functions method [9];
(viii) the Bernoulli matrix method [10];
(ix) the homotopy analysis method [11].

In the next section, we present the polynomial least squares method (PLSM), which allows us to determine analytical approximate polynomial solutions for nonlinear integral equations. In the third section, we compare approximate solutions obtained using PLSM with approximate solutions computed recently for several test problems. If the exact solution of the test problem is polynomial, PLSM is able to find the exact solution. If not, PLSM allows us to obtain approximations whose error relative to the exact solution is smaller than the errors obtained using other methods. In most cases, the approximate solutions obtained not only are more precise but also have simpler expressions than previous ones.

#### 2. The Polynomial Least Squares Method

We consider the following operator, corresponding to (1):
$$D(u)(x) = u(x) - f(x) - \lambda_1 \int_a^x K_1(x,s)\, G_1(u(s))\, ds - \lambda_2 \int_a^b K_2(x,s)\, G_2(u(s))\, ds. \tag{2}$$

We also consider the so-called remainder associated to (1), defined as the error obtained by replacing the exact solution $u$ with an approximate solution $\tilde{u}$:
$$R(x, \tilde{u}) = D(\tilde{u})(x), \quad x \in [a, b]. \tag{3}$$

Before we present the actual steps of the method, we introduce the following types of solutions.

Definition 1. One calls an $\epsilon$-approximate polynomial solution of (1) an approximate polynomial solution $\tilde{u}$ whose remainder (3) satisfies $|R(x, \tilde{u})| < \epsilon$ for all $x \in [a, b]$.

Definition 2. One calls a weak $\epsilon$-approximate polynomial solution of (1) an approximate polynomial solution $\tilde{u}$ satisfying the relation
$$\int_a^b R^2(x, \tilde{u})\, dx \le \epsilon. \tag{4}$$

One also considers the following type of convergence.

Definition 3. One considers the sequence of polynomials $P_m(x) = c_0 + c_1 x + \cdots + c_m x^m$, $m \in \mathbb{N}$. One calls the sequence of polynomials $P_m$ convergent to the solution of (1) if $\lim_{m \to \infty} D(P_m)(x) = 0$.

The aim of PLSM is to find a weak $\epsilon$-approximate polynomial solution of the type
$$\tilde{u}(x) = \sum_{k=0}^{m} c_k x^k. \tag{5}$$

The values of the constants $c_0, c_1, \ldots, c_m$ are calculated using the following steps.

Step 1. By substituting the approximate solution (5) in (1), we obtain the expression
$$R(x, c_0, \ldots, c_m) = D(\tilde{u})(x). \tag{6}$$

Step 2. Next, we attach to (1) the following real functional:
$$\mathcal{J}(c_0, \ldots, c_m) = \int_a^b R^2(x, c_0, \ldots, c_m)\, dx. \tag{7}$$

Step 3. We compute the values $c_0^0, \ldots, c_m^0$ which give the minimum of the functional (7). We remark that this minimization can be performed in many ways; some examples are presented in the next section.

Step 4. Using the constants $c_0^0, \ldots, c_m^0$ determined in the previous step, we consider the polynomial
$$T_m(x) = \sum_{k=0}^{m} c_k^0 x^k. \tag{8}$$

The following convergence theorem holds.

Theorem 4. The necessary condition for (1) to admit a sequence of polynomials $P_m$ convergent to the solution of this equation is
$$\lim_{m \to \infty} \int_a^b R^2(x, T_m)\, dx = 0.$$
Moreover, for every $\epsilon > 0$ there exists $m_0 \in \mathbb{N}$ such that, for every $m > m_0$, it follows that $T_m$ is a weak $\epsilon$-approximate polynomial solution of (1).

Proof. Based on the way the coefficients of the polynomial $T_m$ are computed and taking into account the relations (5)–(8), the following inequality holds:
$$0 \le \int_a^b R^2(x, T_m)\, dx \le \int_a^b R^2(x, P_m)\, dx, \quad \forall m \in \mathbb{N}.$$
It follows that
$$0 \le \lim_{m \to \infty} \int_a^b R^2(x, T_m)\, dx \le \lim_{m \to \infty} \int_a^b R^2(x, P_m)\, dx = 0.$$
We obtain
$$\lim_{m \to \infty} \int_a^b R^2(x, T_m)\, dx = 0.$$
From this limit, we obtain that for every $\epsilon > 0$ there exists $m_0 \in \mathbb{N}$ such that, for every $m > m_0$, $T_m$ is a weak $\epsilon$-approximate polynomial solution of (1).

Step 5. Taking into account the fact that any $\epsilon$-approximate polynomial solution of (1) is also a weak $\epsilon$-approximate polynomial solution (but the opposite is not always true), it follows that the set of weak approximate solutions of (1) also contains the $\epsilon$-approximate solutions of the equation. As a consequence, in order to find $\epsilon$-approximate polynomial solutions of (1) by PLSM, we first compute weak approximate polynomial solutions $T_m$. If $|R(x, T_m)| < \epsilon$ for all $x \in [a, b]$, then $T_m$ is also an $\epsilon$-approximate polynomial solution of the problem.
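As an illustration of Steps 1–4, the sketch below applies PLSM to a toy linear Fredholm equation of our own choosing (not one of the test problems of Section 3), $u(x) = x + \int_0^1 x t\, u(t)\, dt$, whose exact solution is $u(x) = \tfrac{3}{2}x$; SciPy stands in here for the SAGE commands used in the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Toy linear Fredholm equation (our own illustration, not a test problem from
# the paper):  u(x) = x + \int_0^1 x*t*u(t) dt  on [0, 1];  exact: u(x) = 1.5*x.

def functional(c):
    """Steps 1-2: remainder R(x, c) of the trial polynomial and J = int R^2 dx."""
    u = lambda t: np.polyval(c[::-1], t)              # trial u~(t) = c0 + c1*t + ...
    integral, _ = quad(lambda t: t * u(t), 0.0, 1.0)  # the Fredholm integral term
    remainder = lambda x: u(x) - x - x * integral     # R(x, c0, ..., cm)
    value, _ = quad(lambda x: remainder(x) ** 2, 0.0, 1.0)
    return value

# Step 3: minimize the functional over the coefficients of a degree-1 trial.
result = minimize(functional, x0=np.array([1.0, 1.0]), method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-14})
c0, c1 = result.x   # Step 4: T_1(x) = c0 + c1*x, expected close to (0, 1.5)
```

Because the exact solution is itself a polynomial, the minimizer drives the functional to (numerically) zero and recovers it, which is exactly the behavior PLSM exhibits on the polynomial test problems below.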

#### 3. Applications

In this section, we compute approximate polynomial solutions for several test problems previously solved using other methods and compare the results.

##### 3.1. Application 1

Our first application is a simple nonlinear Fredholm integral equation ([6, 10]), obtained from (1) by a suitable choice of the constants and of the functions.

The exact solution of (13) presented in [6, 10] is . In [6], approximate solutions of (13) were computed using two Sinc-collocation-type methods, and, in [10], the exact solution was determined using a Bernoulli matrix method.

Since the solution is a polynomial, we expected that, by using PLSM, we would be able to find, if not the exact solution, at least a very accurate approximation.

In the following, in order to obtain our approximation, we perform the steps described in the previous section. The computations were performed using the open-source software SAGE (v5.5, available at http://www.sagemath.org/).

We choose the polynomial (5) as

In Step 1, the expression (6) is

The corresponding functional (7) from Step 2 is

In Step 3, we must compute the minimum of the functional with respect to its coefficients. As mentioned in the previous section, the minimization can be performed in more than one way. The algorithms used in our computations include the following three possible approaches.

###### 3.1.1. Minimization Based on the Exact Computation of Critical Points

For relatively simple problems such as (13), it is possible to compute directly the critical points of and subsequently select the value corresponding to the minimum.

The critical points corresponding to a functional of the type (7) are the solutions of the system
$$\frac{\partial \mathcal{J}}{\partial c_k} = 0, \quad k = 0, 1, \ldots, m. \tag{17}$$

For the problem (13), the system (17) becomes

Using the “solve” command in SAGE and excluding the complex solutions, we find the critical points:

In order to find the minimum, we use the second partial derivative test, which is straightforward to implement in SAGE, and find that both critical points are local minima.

Moreover, we find that the functional vanishes at both minima, which means that, by using PLSM, we found not one but two exact solutions of (13):

We remark that the exact solutions can be found this way even if the initial polynomial has a degree greater than one. For example, for the given problem (13), using a third-degree polynomial leads to two local minima corresponding to the same pair of exact solutions.

Generally speaking, if the degree of the trial polynomial is too high or if the problem studied is too complicated (e.g., strongly nonlinear), then the solutions of (17) cannot be found exactly. In the case of SAGE, the “solve” command fails to find the solutions and exits with an error message.

In this situation, it is still possible to find good approximations of the solutions of the problem by solving the system (17) numerically.
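To make the exact-minimization route concrete, the following sketch uses SymPy (as a stand-in for SAGE's symbolic engine) on a toy linear Fredholm equation of our own, $u(x) = x + \int_0^1 x t\, u(t)\, dt$, with a first-degree trial polynomial: it builds the functional symbolically, solves the critical-point system (the analogue of (17)) exactly, and applies the second partial derivative test.

```python
import sympy as sp

c0, c1, x, t = sp.symbols('c0 c1 x t', real=True)

# Toy linear Fredholm equation (our own illustration, not the paper's (13)):
#   u(x) = x + \int_0^1 x*t*u(t) dt,  trial polynomial u~ = c0 + c1*x.
u = c0 + c1 * t
R = (c0 + c1 * x) - x - x * sp.integrate(t * u, (t, 0, 1))  # remainder
J = sp.expand(sp.integrate(R**2, (x, 0, 1)))                # the functional

# Critical points: solve the system dJ/dc0 = dJ/dc1 = 0.
sols = sp.solve([sp.diff(J, c0), sp.diff(J, c1)], [c0, c1], dict=True)

# Second partial derivative test: the Hessian must be positive definite.
H = sp.hessian(J, (c0, c1))
minima = [s for s in sols
          if H.subs(s)[0, 0] > 0 and H.subs(s).det() > 0]
```

For this toy equation the single critical point $(c_0, c_1) = (0, \tfrac{3}{2})$ passes the test, recovering the exact solution $u(x) = \tfrac{3}{2}x$.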

###### 3.1.2. Minimization Based on Approximate Computation of Critical Points

In this subsection, we find approximate solutions for the given problem (13) by solving (17) with a SAGE implementation of the well-known Newton method. For the sake of simplicity and clarity, we use the same problem (13) and the same initial polynomial, even though we have already found the exact solutions.

As the following results will show, the Newton method is able to find approximate solutions of (17) which can lead to highly accurate approximate solutions of the problem (13).

In order for the sequence of approximations given by Newton's formula to converge to the solution(s) of the system (17), the starting point of the sequence successively takes values on a given grid, constructed as a product of divisions of an interval.

In the case of the problem (13) and of the polynomial , the grid is large enough in the sense that, if the starting point scans it, we can obtain, using Newton's formula, approximations for both solutions.

More precisely, we found the following approximate solutions:

The absolute errors for these approximations, computed as the differences in absolute value between the approximate solutions and the corresponding exact solutions, are of the order of .

We remark that, for polynomials of higher degree, in principle the grid presented above can become quite large. However, in practice, we observed that, in all the examples tested, two or three values in each division were enough to arrive at the approximation sought.
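The grid-scan idea can be sketched as follows, with SciPy's `fsolve` (a MINPACK hybrid Newton-type solver) in place of the paper's SAGE implementation of Newton's method, on a toy nonlinear equation of our own rather than (13): for $u(x) = \tfrac{3}{16} + \int_0^1 u(t)^2\, dt$ with a constant trial solution $\tilde{u} = c$, the functional $J(c) = (c - \tfrac{3}{16} - c^2)^2$ has two minima, and scanning the starting point over a small grid recovers both.

```python
from scipy.optimize import fsolve

# Toy nonlinear Fredholm equation (our own illustration, not equation (13)):
#   u(x) = 3/16 + \int_0^1 u(t)^2 dt,  constant trial solution u~ = c.
# Remainder R(c) = c - 3/16 - c^2, functional J(c) = R(c)^2, with gradient
# J'(c) = 2*R(c)*(1 - 2*c); its roots are c = 1/4, 1/2, and 3/4.
J = lambda c: (c - 3/16 - c**2) ** 2
grad_J = lambda c: [2 * (c[0] - 3/16 - c[0]**2) * (1 - 2 * c[0])]

# Scan a small grid of starting points and collect the critical points found.
critical = set()
for start in [0.0, 0.4, 0.6, 0.8]:
    root, _, ok, _ = fsolve(grad_J, [start], full_output=True)
    if ok == 1:
        critical.add(round(float(root[0]), 6))

# Keep only the minima (J ~ 0): these approximate the two exact constant
# solutions u = 1/4 and u = 3/4.
minima = sorted(c for c in critical if J(c) < 1e-10)
```

The filter on $J$ discards the critical point at $c = \tfrac12$, which the second partial derivative test would likewise reject, leaving the two solutions, just as the scan described above recovers both solutions of (13).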

###### 3.1.3. Minimization Based on a Dedicated Solver

A third approach to finding the minimum of the functional (7), and probably the most convenient one, is the use of a specialized optimization package. In SAGE, we can use the “minimize” command, which is based on the well-known open-source SciPy/NumPy libraries (http://www.scipy.org/). The “minimize” command allows us to choose the minimization algorithm used in the computation, the possible choices including, among others, the Nelder-Mead method, Powell's method, the conjugate gradient method, and simulated annealing.

In the case of the problem (13) and of the polynomial , choosing Powell's method in the “minimize” command, we obtain the following approximations: The absolute errors corresponding to these approximations are of the order of .
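Since SAGE's “minimize” wraps SciPy, the same machinery can be called directly. The sketch below uses a toy functional of our own (not the paper's problem (13)): $J(c) = (c - \tfrac{3}{16} - c^2)^2$, which arises from the constant trial solution $\tilde{u} = c$ of $u(x) = \tfrac{3}{16} + \int_0^1 u(t)^2\, dt$ and has two global minima, at $c = \tfrac14$ and $c = \tfrac34$; bounding the search on either side of the ridge at $c = \tfrac12$ isolates each one.

```python
from scipy.optimize import minimize

# Toy least-squares functional (our own illustration, not the paper's problem):
# J(c) = (c - 3/16 - c^2)^2 has two global minima, at c = 1/4 and c = 3/4,
# mirroring the two exact solutions found for problem (13).
J = lambda c: (c[0] - 3/16 - c[0] ** 2) ** 2

# Powell's derivative-free method; bounding the search on either side of the
# ridge at c = 1/2 makes each run converge to a different minimum.
low = minimize(J, x0=[0.2], method="Powell", bounds=[(0.0, 0.5)])
high = minimize(J, x0=[0.8], method="Powell", bounds=[(0.5, 1.0)])
```

Each run drives the functional to (numerically) zero, so both constant solutions of the toy equation are recovered, one per bounded search.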

To conclude this first application, we remark that, in the following applications, depending on the problem and on the precision sought for the approximate solution, we use one of the three approaches presented above. If the known solution of the problem is a polynomial, we search for the exact solution. If the solution is not polynomial, we present whichever of the other two approaches gave the more accurate approximation, as in the case of Application 4.

##### 3.2. Application 2

Our second application is a nonlinear Volterra integral equation ([3, 4]), obtained from (1) by a suitable choice of the constants and of the functions.

The exact solution of (23) is . In [3] and [4], approximate solutions of (23) were computed using approximation methods based on Chebyshev polynomials. The absolute errors of the approximate solutions obtained are of the order of in [3] and of in [4].

In the following, we will compute the exact solution of the problem (23) using PLSM. We choose the polynomial (5) as

The critical points of the corresponding functional (7) are the solutions of the system:

Using the “solve” command in SAGE and excluding the complex solutions, we obtain the following critical points:

Using the second partial derivative test, we deduce that only the first two critical points are minimum points. Computing the values of the functional for these two minimum points, we see that the global minimum is obtained for , , and , and thus the solution obtained using PLSM is in fact the exact solution of (23):

##### 3.3. Application 3

The third application is a nonlinear mixed Volterra-Fredholm integral equation ([2, 5, 9]):

The exact solution of (28) is . In [2], approximate solutions of (28) were computed using a rationalized Haar functions method; in [5], using a triangular functions method; in [9], using a radial basis functions method; and, in [8], using an optimal control method. The values of the absolute errors of the approximate solutions obtained varied from to , but none of these methods could find the exact solution.

We will compute the exact solution of the problem (28) using PLSM. We again choose the polynomial (5) as

The critical points of the corresponding functional (7) are the solutions of the system:

Using the “solve” command in SAGE and again excluding the complex solutions, we obtain the following critical points:

Using the second partial derivative test, it follows that only the first two critical points are minimum points, and, by computing the values of the functional for these two minimum points, we see that the global minimum is obtained for , , and , and the solution obtained using PLSM is the exact solution of (28):

##### 3.4. Application 4

The next application is the nonlinear Volterra-Fredholm integral equation ([1, 7]):

The exact solution of (33) is . Approximate solutions for this equation were computed in [1] using the rationalized Haar functions method (RHM) and in [7] using a composite collocation method (CCM), a hybrid of block-pulse functions and Lagrange polynomials. The solution in [1] contains sixteen terms, while the one in [7] is a piecewise polynomial solution consisting of two polynomials of fourth degree.

Using PLSM, we computed a seventh-degree polynomial approximate solution of (33). We used the second approach described in Application 1, solving the corresponding system (17) by means of Newton's method. We obtained the approximate solution:

Table 1 presents the comparison of the absolute errors corresponding to the three approximate solutions: the one from [1], the one from [7], and the one given by PLSM. The approximate solution given by PLSM is much closer to the exact solution and has a simpler form.

Table 1: Comparison of the absolute errors of the approximate solutions for Problem (33).

#### 4. Conclusions

The paper presents the computation of approximate polynomial solutions for nonlinear integral equations of mixed Volterra-Fredholm type by using the polynomial least squares method, a straightforward and efficient method.

The test problems solved clearly illustrate the accuracy of the method, since, in all of the cases, we were able to compute better approximations than the ones computed in previous papers, and, in most cases, the exact solutions were found. Moreover, the expressions of the approximations computed by PLSM are also simpler than the expressions of the approximations computed by using other methods.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

1. Y. Ordokhani, “Solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via rationalized Haar functions,” Applied Mathematics and Computation, vol. 180, no. 2, pp. 436–443, 2006.
2. Y. Ordokhani and M. Razzaghi, “Solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via a collocation method and rationalized Haar functions,” Applied Mathematics Letters, vol. 21, no. 1, pp. 4–9, 2008.
3. K. Maleknejad, S. Sohrabi, and Y. Rostami, “Numerical solution of nonlinear Volterra integral equations of the second kind by using Chebyshev polynomials,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 123–128, 2007.
4. C. Yang, “Chebyshev polynomial solution of nonlinear integral equations,” Journal of the Franklin Institute. Engineering and Applied Mathematics, vol. 349, no. 3, pp. 947–956, 2012.
5. K. Maleknejad, H. Almasieh, and M. Roodaki, “Triangular functions (TF) method for the solution of nonlinear Volterra-Fredholm integral equations,” Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 11, pp. 3293–3298, 2010.
6. K. Maleknejad and K. Nedaiasl, “Application of sinc-collocation method for solving a class of nonlinear Fredholm integral equations,” Computers & Mathematics with Applications, vol. 62, no. 8, pp. 3292–3303, 2011.
7. H. R. Marzban, H. R. Tabrizidooz, and M. Razzaghi, “A composite collocation method for the nonlinear mixed Volterra-Fredholm-Hammerstein integral equations,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 3, pp. 1186–1194, 2011.
8. M. A. El-Ameen and M. El-Kady, “A new direct method for solving nonlinear Volterra-Fredholm-Hammerstein integral equations via optimal control problem,” Journal of Applied Mathematics, vol. 2012, Article ID 714973, 10 pages, 2012.
9. K. Parand and J. A. Rad, “Numerical solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via collocation method based on radial basis functions,” Applied Mathematics and Computation, vol. 218, no. 9, pp. 5292–5309, 2012.
10. A. H. Bhrawy, E. Tohidi, and F. Soleymani, “A new Bernoulli matrix method for solving high-order linear and nonlinear Fredholm integro-differential equations with piecewise intervals,” Applied Mathematics and Computation, vol. 219, no. 2, pp. 482–497, 2012.
11. B. Ghanbari, “The convergence study of the homotopy analysis method for solving nonlinear Volterra-Fredholm integrodifferential equations,” The Scientific World Journal, vol. 2014, Article ID 465951, 7 pages, 2014.