Abstract

A new approach based on the Reproducing Kernel Hilbert Space Method is proposed to approximate the solution of second-kind nonlinear integral equations. In this approach, the Gram-Schmidt process is replaced by another procedure, so that satisfactory results are obtained. The solution is expressed in the form of a series. Furthermore, the convergence of the proposed technique is proved. In order to illustrate the effectiveness and efficiency of the method, four sample integral equations arising in electromagnetics are solved via the given algorithm.

1. Introduction

Electromagnetics, the study of electric and magnetic fields and their interactions, is one of the most important branches of science. Textbook treatments based on exterior calculus are given in [1, 2], and an approach to teaching electromagnetics via differential forms is presented in [3]. Electromagnetic field problems have long been reformulated as linear and nonlinear integral equations (NIE), and solutions of these equations are useful in the field. In methods such as block-pulse functions (BPFs), Galerkin, and collocation, the key ingredients are the basis functions and an appropriate projection. In this work, an approach based on the Reproducing Kernel Hilbert Space method is developed to solve some electromagnetic problems.

Nonlinear integral equations are encountered in many fields of science and in numerous applications, such as elasticity, plasticity, heat and mass transfer, oscillation theory, fluid dynamics, filtration theory, electrostatics, electrodynamics, biomechanics, game theory, control, queuing theory, electrical engineering, economics, and medicine, among others. Most types of NIE cannot be solved explicitly, so their solutions must be approximated numerically.

Therefore, many researchers have studied and focused on numerical techniques for solving these integral equations. For instance, in [4, 5], the authors presented the homotopy analysis method for solving nonlinear Fredholm and Volterra integral equations of the second kind. Linear multistep techniques were applied in [6] to obtain the numerical solution of a singular nonlinear Volterra integral equation. In [7], an asymptotic technique was applied to numerically approximate the solution of the nonlinear Abel-Volterra integral equation.

Reproducing Kernel Hilbert Space (RKHS) theory was introduced by Minggen et al. [8, 9], and it has been developed in different areas, including approximation theory, statistics, machine learning theory, group representation theory, and various areas of complex analysis. The Reproducing Kernel Hilbert Space Method (RKHSM) is a kernel-based approximation method which has been applied to nonlinear boundary value problems [7–12], generalized singular nonlinear Lane-Emden type equations [13], integrodifferential equations [14–16], fractional integrodifferential equations [17], Bratu's problem [18], and so forth.

Consider the following nonlinear integral equation:
$$u(x) = f(x) + \int_a^b k(x,t)\,N(u(t))\,dt, \qquad a \le x \le b, \quad (1)$$
where $a$, $b$ are real constants, $u(x)$ is an unknown function to be determined, $f(x)$ is a continuous function on $[a,b]$, $k(x,t)$ is a continuous function on $[a,b]\times[a,b]$, $N(u(x))$ is a continuous nonlinear term with $u \in W_2^m[a,b]$, $a \le x \le b$, and $W_2^m[a,b]$ is the Reproducing Kernel Space. Equation (1) has a continuous solution on $[a,b]$ [19]. The existence and uniqueness conditions of the solution of (1) were discussed in [19–23]. We assume that the solution of (1) is unique.

Over several decades, numerical methods for electromagnetic problems have been one of the most important subjects of extensive research [14]. On the other hand, many problems in electromagnetics can be modeled by the integral equations mentioned in [24–26], for example, the electric field integral equation (EFIE) and the magnetic field integral equation (MFIE). In recent years, several numerical methods for solving linear and nonlinear integral equations have been presented. Applicable equations of electromagnetics are considered in the present paper.

In previous works such as [13–15], the Gram-Schmidt orthogonalization process has been used to implement the RKHSM. Since this process is numerically unstable and may make the algorithm time-consuming, here we discard it and proceed in a different way. Our approach combines the methods mentioned in [13–17]. More specifically, in contrast to [13–15], the RKHSM is applied successfully to the nonlinear problem (1) without using the orthogonalization process.

The structure of this paper is as follows. In Section 2, the basic definitions, assumptions, and preliminaries of RKHS are described. The main idea and the convergence of the proposed scheme are discussed in Section 3. Section 4 contains the numerical experiments. Finally, Section 5 is dedicated to a brief conclusion.

2. Preliminaries

In this section, some basic definitions and important properties of Reproducing Kernel Hilbert Spaces (RKHS) are mentioned [8, 9, 27–29].

Definition 1. A Hilbert Space $H$ is an inner product space that is complete and separable with respect to the norm defined by the inner product. Completeness of the space holds provided that every Cauchy sequence of points in $H$ has a limit that is also in $H$, and separability of $H$ means that $H$ admits a countable orthonormal basis.

Definition 2. For an abstract set $E$, let $H$ be a Hilbert Space of real- or complex-valued functions on the set $E$. We say $H$ is a Reproducing Kernel Hilbert Space if, for every $x \in E$, the evaluation functional $L_x(f) = f(x)$ is linear and bounded over $H$, or, equivalently, if for every $x \in E$ there exists a constant $M_x > 0$ such that $|f(x)| \le M_x \|f\|_H$ for all $f \in H$.

The Riesz Representation Theorem implies that for all $x$ in $E$ there exists a unique function $K_x$ of $H$ with the reproducing property $f(x) = \langle f, K_x\rangle_H$ for each $f \in H$, where $\langle \cdot, \cdot\rangle_H$ represents the inner product of the Hilbert Space $H$.

Definition 3. The space $W_2^m[a,b]$ is interpreted as
$$W_2^m[a,b] = \{u(x) \mid u^{(m-1)}(x) \text{ is absolutely continuous on } [a,b],\ u^{(m)}(x) \in L^2[a,b]\}.$$
The inner product and the norm in $W_2^m[a,b]$ are of the forms
$$\langle u, v\rangle_{W_2^m} = \sum_{i=0}^{m-1} u^{(i)}(a)\,v^{(i)}(a) + \int_a^b u^{(m)}(x)\,v^{(m)}(x)\,dx, \qquad \|u\|_{W_2^m} = \sqrt{\langle u, u\rangle_{W_2^m}}.$$
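For instance, in the illustrative case $m = 1$ on $[0,1]$ (chosen here only as an example and not necessarily the space used in the experiments), these definitions read $W_2^1[0,1] = \{u \mid u \text{ is absolutely continuous on } [0,1],\ u' \in L^2[0,1]\}$ with $\langle u, v\rangle_{W_2^1} = u(0)v(0) + \int_0^1 u'(x)\,v'(x)\,dx$.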

Lemma 4 (see [9, 29]). The functional space $W_2^m[a,b]$ is an inner product space.

Theorem 5 (see [9, 29]). The functional space $W_2^m[a,b]$ is a Hilbert Space.

Theorem 6 (see [9, 29]). The functional space $W_2^m[a,b]$ is a Reproducing Kernel Hilbert Space.

Now, the expression form of the Reproducing Kernel function $K_y(x)$ of $W_2^m[a,b]$ is derived.

Based on [9, 29], it is easy to prove that $K_y(x)$ is the solution of the following generalized differential equation:
$$(-1)^m \frac{d^{2m}}{dx^{2m}} K_y(x) = \delta(x - y),$$
subject to boundary conditions determined by the inner product of $W_2^m[a,b]$, where $\delta$ is Dirac's delta function. While $x \neq y$, $K_y(x)$ is the solution of the following constant-coefficient linear homogeneous differential equation of $2m$th order:
$$\frac{d^{2m}}{dx^{2m}} K_y(x) = 0, \quad (7)$$
with the boundary conditions mentioned above. The characteristic equation of (7) is $\lambda^{2m} = 0$, whose root $\lambda = 0$ has multiplicity $2m$. Then the general solution of Equation (7) is
$$K_y(x) = \begin{cases} \sum_{i=1}^{2m} c_i(y)\, x^{i-1}, & x \le y, \\ \sum_{i=1}^{2m} d_i(y)\, x^{i-1}, & x > y, \end{cases}$$
where the coefficients $c_i(y)$ and $d_i(y)$, $i = 1, \ldots, 2m$, could be calculated by solving the linear system formed by the boundary conditions together with the continuity conditions $K_y^{(i)}(y^-) = K_y^{(i)}(y^+)$, $i = 0, \ldots, 2m-2$, and the jump condition $K_y^{(2m-1)}(y^+) - K_y^{(2m-1)}(y^-) = (-1)^m$. Subsequently, the representation of the Reproducing Kernel of $W_2^m[a,b]$ is obtained.
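As a concrete illustration, for $m = 1$ on $[0,1]$ with the inner product $\langle u, v\rangle = u(0)v(0) + \int_0^1 u'(x)v'(x)\,dx$, this construction yields the well-known kernel $K_y(x) = 1 + \min(x, y)$. The short Python sketch below (an added illustration under these assumptions, independent of the MAPLE code used for the experiments in Section 4) verifies the reproducing property $f(y) = \langle f, K_y\rangle$ numerically.

```python
import numpy as np

# Assumed illustrative case (m = 1 on [0,1]): the reproducing kernel of W_2^1[0,1]
# with <u,v> = u(0)v(0) + int_0^1 u'(x) v'(x) dx is K_y(x) = 1 + min(x, y).
def kernel(x, y):
    return 1.0 + np.minimum(x, y)

def inner_product(u0, u_prime, v0, v_prime, n=20001):
    # <u,v> = u(0)v(0) + int_0^1 u'(x) v'(x) dx, trapezoidal rule on a uniform grid
    x = np.linspace(0.0, 1.0, n)
    g = u_prime(x) * v_prime(x)
    integral = np.sum((g[:-1] + g[1:]) / 2.0) * (x[1] - x[0])
    return u0 * v0 + integral

f = lambda x: np.sin(x) + x**2          # smooth test function in W_2^1[0,1]
f_prime = lambda x: np.cos(x) + 2.0 * x

y = 0.37
K_y_prime = lambda x: np.where(x < y, 1.0, 0.0)   # d/dx K_y(x) = 1 for x < y, 0 for x > y

print("f(y)     =", f(y))
print("<f, K_y> =", inner_product(f(0.0), f_prime, kernel(0.0, y), K_y_prime))
```

Up to the quadrature error, the two printed values coincide, illustrating the reproducing property used throughout the rest of the paper.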

3. Main Idea and Theoretical Discussion

The uniqueness conditions for nonlinear problems exist in [21–23]. The unique solution of (1) is assumed in this paper. The solution of (1) is sought in the space $W_2^m[a,b]$. We consider (1) as
$$(Lu)(x) = F(x, u(x)), \quad (12)$$
where $(Lu)(x) = u(x)$ and $F(x, u(x)) = f(x) + \int_a^b k(x,t)\,N(u(t))\,dt$. It is obvious that $L$ is a bounded linear operator from $W_2^m[a,b]$ to $W_2^1[a,b]$. Put $\varphi_i(x) = \bar{K}_{x_i}(x)$ and $\psi_i(x) = (L^*\varphi_i)(x)$, where $\bar{K}_{x_i}$ is the reproducing kernel of $W_2^1[a,b]$, $\{x_i\}_{i=1}^{\infty}$ is a countable set of points in $[a,b]$, and $L^*$ is the adjoint operator of $L$. In fact, for $u \in W_2^m[a,b]$, we have $\psi_i \in W_2^m[a,b]$ and $\langle u, \psi_i\rangle_{W_2^m} = \langle Lu, \varphi_i\rangle_{W_2^1} = (Lu)(x_i)$.

The orthonormal system $\{\bar{\psi}_i(x)\}_{i=1}^{\infty}$ of the space $W_2^m[a,b]$ can be derived from the Gram-Schmidt orthogonalization process applied to $\{\psi_i(x)\}_{i=1}^{\infty}$:
$$\bar{\psi}_i(x) = \sum_{k=1}^{i} \beta_{ik}\,\psi_k(x), \qquad i = 1, 2, \ldots,$$
where $\beta_{ik}$ are the coefficients of orthogonalization.
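For readers who want to see the process itself, the following Python sketch (a generic illustration with plain Euclidean vectors standing in for the functions $\psi_i$; it is not the MAPLE code used for the experiments) implements classical Gram-Schmidt with an arbitrary inner product and shows how orthogonality is lost in floating-point arithmetic when the inputs are nearly linearly dependent. This numerical instability is precisely the reason the orthogonalization step is avoided later in this section.

```python
import numpy as np

def classical_gram_schmidt(vectors, inner):
    # Orthonormalize `vectors` with respect to the inner product `inner(u, v)`.
    # This is the textbook (classical) process that would produce the beta_ik.
    ortho = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in ortho:
            w = w - inner(w, q) * q       # remove components along earlier directions
        w = w / np.sqrt(inner(w, w))      # normalize
        ortho.append(w)
    return ortho

# Nearly linearly dependent inputs (columns of a Hilbert-type matrix) expose
# the instability: orthogonality of the computed system degrades badly.
n = 12
V = [1.0 / (np.arange(1, n + 1) + j) for j in range(n)]
dot = lambda u, v: float(u @ v)           # ordinary Euclidean inner product

Q = classical_gram_schmidt(V, dot)
G = np.array([[dot(qi, qj) for qj in Q] for qi in Q])
print("max deviation from orthonormality:", np.max(np.abs(G - np.eye(n))))
```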

Definition 7. In a topological space $X$, a subset $S$ of $X$ is called dense in $X$ if the closure of $S$ is equal to $X$.

Theorem 8. If $\{x_i\}_{i=1}^{\infty}$ is dense on $[a,b]$, then $\{\psi_i(x)\}_{i=1}^{\infty}$ is the complete function system of the space $W_2^m[a,b]$ and $\psi_i(x) = L_t K_x(t)\big|_{t=x_i}$, where the subscript $t$ in the operator $L_t$ indicates that the operator $L$ applies to the function of $t$.

Proof. We have $\psi_i(x) = (L^*\varphi_i)(x) = \langle L^*\varphi_i, K_x\rangle = \langle \varphi_i, L_t K_x(t)\rangle = L_t K_x(t)\big|_{t=x_i}$. Clearly $\psi_i(x) \in W_2^m[a,b]$. For each fixed $u(x) \in W_2^m[a,b]$, let $\langle u, \psi_i\rangle = 0$, $i = 1, 2, \ldots$, which means $\langle u, L^*\varphi_i\rangle = \langle Lu, \varphi_i\rangle = (Lu)(x_i) = 0$. Assume that $\{x_i\}_{i=1}^{\infty}$ is dense on $[a,b]$, and so $(Lu)(x) \equiv 0$. It follows that $u \equiv 0$ from the existence of $L^{-1}$. Now, the theorem is proved.

Theorem 9. If $\{x_i\}_{i=1}^{\infty}$ is dense on $[a,b]$ and the solution of (12) is unique, then the solution of (12) is
$$u(x) = \sum_{i=1}^{\infty}\sum_{k=1}^{i} \beta_{ik}\,F(x_k, u(x_k))\,\bar{\psi}_i(x).$$

Proof. Using (13), we have
$$u(x) = \sum_{i=1}^{\infty} \langle u, \bar{\psi}_i\rangle\,\bar{\psi}_i(x) = \sum_{i=1}^{\infty}\sum_{k=1}^{i} \beta_{ik}\,\langle Lu, \varphi_k\rangle\,\bar{\psi}_i(x) = \sum_{i=1}^{\infty}\sum_{k=1}^{i} \beta_{ik}\,(Lu)(x_k)\,\bar{\psi}_i(x) = \sum_{i=1}^{\infty}\sum_{k=1}^{i} \beta_{ik}\,F(x_k, u(x_k))\,\bar{\psi}_i(x).$$
On the other hand, the coefficients $\langle u, \bar{\psi}_i\rangle$, $i = 1, 2, \ldots$, are the Fourier coefficients of $u$ with respect to the normal orthogonal system $\{\bar{\psi}_i(x)\}_{i=1}^{\infty}$, and $W_2^m[a,b]$ is a Hilbert Space. Thus the series is convergent in the sense of $\|\cdot\|_{W_2^m}$ and the proof is complete.

Now the approximate solution $u_n(x)$ can be obtained by the $n$-term truncation of the exact solution $u(x)$, and
$$u_n(x) = \sum_{i=1}^{n}\sum_{k=1}^{i} \beta_{ik}\,F(x_k, u(x_k))\,\bar{\psi}_i(x).$$

Theorem 10. If $u \in W_2^m[a,b]$, then there exists a constant $M > 0$ such that $|u(x)| \le M\,\|u\|_{W_2^m}$ for all $x \in [a,b]$.

Proof. We have $u(x) = \langle u, K_x\rangle$ for any $x \in [a,b]$. We know $\|K_x\|_{W_2^m} \le M$ for some constant $M > 0$. Thus $|u(x)| = |\langle u, K_x\rangle| \le \|u\|_{W_2^m}\,\|K_x\|_{W_2^m} \le M\,\|u\|_{W_2^m}$.

Theorem 11. The approximate solution $u_n(x)$ is uniformly convergent to the exact solution $u(x)$.

Proof. Assuming $\|u_n - u\|_{W_2^m} \to 0$ as $n \to \infty$, by Theorems 9 and 10, it can be proved that
$$|u_n(x) - u(x)| = |\langle u_n - u, K_x\rangle| \le \|K_x\|_{W_2^m}\,\|u_n - u\|_{W_2^m} \le M\,\|u_n - u\|_{W_2^m} \longrightarrow 0$$
uniformly for all $x \in [a,b]$.

In the sequel, a new iterative method to obtain the solution of (12) is presented. If we set
$$A_i = \sum_{k=1}^{i} \beta_{ik}\,F(x_k, u(x_k)),$$
then (16) can be written as
$$u(x) = \sum_{i=1}^{\infty} A_i\,\bar{\psi}_i(x).$$
Now suppose, for some initial approximation $u_0(x)$, the value $u_0(x_1)$ is known. There is no problem if we assume $u_0(x_1) = u(x_1)$. We put $F(x_1, u_0(x_1)) = F(x_1, u(x_1))$ and define the $n$-term approximation to $u(x)$ by
$$u_n(x) = \sum_{i=1}^{n} B_i\,\bar{\psi}_i(x), \quad (22)$$
where
$$B_i = \sum_{k=1}^{i} \beta_{ik}\,F(x_k, u_{k-1}(x_k)). \quad (23)$$
In the following, it is proven that the approximate solution obtained from the iterative scheme (22) converges uniformly to the exact solution of (12).
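As a small worked instance (using only the notation introduced above), unwinding (22)-(23) for the first two terms gives
$$u_1(x) = \beta_{11}\,F(x_1, u_0(x_1))\,\bar{\psi}_1(x), \qquad u_2(x) = u_1(x) + \big[\beta_{21}\,F(x_1, u_0(x_1)) + \beta_{22}\,F(x_2, u_1(x_2))\big]\,\bar{\psi}_2(x),$$
so each new term only requires evaluating $F$ at the new node with the previously computed approximation.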

Theorem 12. Suppose that $\|u_n\|_{W_2^m}$ is bounded in (22). If $\{x_i\}_{i=1}^{\infty}$ is dense on $[a,b]$, then the $n$-term approximate solution $u_n(x)$ in the iterative scheme (22) converges to the exact solution $u(x)$ of (12) and $u(x) = \sum_{i=1}^{\infty} B_i\,\bar{\psi}_i(x)$, where $B_i$ is given by (23).

Proof. First of all, the convergence of $u_n(x)$ from (22) is proven. We infer
$$u_{n+1}(x) = u_n(x) + B_{n+1}\,\bar{\psi}_{n+1}(x).$$
Since the system $\{\bar{\psi}_i\}$ is orthonormal, it yields that
$$\|u_{n+1}\|^2 = \|u_n\|^2 + B_{n+1}^2.$$
It is obvious that the sequence $\{\|u_n\|\}$ is monotonically increasing. Because $\{\|u_n\|\}$ is bounded, it is convergent, and this implies that $\sum_{i=1}^{\infty} B_i^2 < \infty$.
If $p > n$, then
$$\|u_p - u_n\|^2 = \sum_{i=n+1}^{p} B_i^2.$$
So $\|u_p - u_n\| \to 0$ as $n, p \to \infty$. Consequently, by the completeness of $W_2^m[a,b]$, there exists $\hat{u} \in W_2^m[a,b]$ such that $u_n \to \hat{u}$ as $n \to \infty$. Now we can prove that $\hat{u}$ is the solution of (12).
If we take the limit in (22), we will have $\hat{u}(x) = \sum_{i=1}^{\infty} B_i\,\bar{\psi}_i(x)$, so $(L\hat{u})(x_j) = \sum_{i=1}^{\infty} B_i\,\langle \bar{\psi}_i, \psi_j\rangle$. Let $n$ be fixed; then
$$\sum_{j=1}^{n} \beta_{nj}\,(L\hat{u})(x_j) = \sum_{i=1}^{\infty} B_i\,\Big\langle \bar{\psi}_i, \sum_{j=1}^{n} \beta_{nj}\,\psi_j \Big\rangle = \sum_{i=1}^{\infty} B_i\,\langle \bar{\psi}_i, \bar{\psi}_n\rangle = B_n. \quad (29)$$
From (23) and (29), it is concluded that $(L\hat{u})(x_n) = F(x_n, u_{n-1}(x_n))$.
$\{x_i\}_{i=1}^{\infty}$ is dense on $[a,b]$, so for each $y \in [a,b]$ there exists a subsequence $\{x_{n_j}\}$ such that $x_{n_j} \to y$ as $j \to \infty$. Hence, letting $j \to \infty$ and using the continuity of $F$ together with Theorems 10 and 11, we have $(L\hat{u})(y) = F(y, \hat{u}(y))$, which indicates that $\hat{u}(x)$ is the solution of (12).

The scheme mentioned above is an efficient method for solving nonlinear equations [31–33]. However, in implementing this algorithm on a computer, the system $\{\bar{\psi}_i\}$ is not quite orthogonal, due to rounding errors. In other words, the Gram-Schmidt process is numerically unstable and the computational cost of the algorithm is high. Therefore, the following process, similar to the one used for linear problems in [20, 34], is suggested. This is the subject of the next theorem.

Theorem 13. The approximate solution obtained from (22) can be found as follows:
$$u_n(x) = \sum_{k=1}^{n} c_k\,\psi_k(x),$$
where the coefficients $c_1, \ldots, c_n$ satisfy the linear system
$$\sum_{k=1}^{n} c_k\,\langle \psi_k, \psi_j\rangle = F(x_j, u_{j-1}(x_j)), \qquad j = 1, \ldots, n.$$

Proof. Suppose that $u_n(x) = \sum_{k=1}^{n} c_k\,\psi_k(x)$. Since $\psi_1, \ldots, \psi_n$ and $\bar{\psi}_1, \ldots, \bar{\psi}_n$ are linearly independent and span the same subspace, the approximate solution (22) can therefore be written in the basis $\{\psi_k\}_{k=1}^{n}$. Equation (12), through the iterative scheme (22)-(23), implies $(Lu_n)(x_j) = F(x_j, u_{j-1}(x_j))$ for $j = 1, \ldots, n$. For $j \le n$ we have
$$\langle u_n, \psi_j\rangle = \langle Lu_n, \varphi_j\rangle = (Lu_n)(x_j).$$
Both sides of this identity provide
$$\sum_{k=1}^{n} c_k\,\langle \psi_k, \psi_j\rangle = (Lu_n)(x_j) = F(x_j, u_{j-1}(x_j)), \qquad j = 1, \ldots, n,$$
which is the stated linear system and proves the theorem.

Algorithm 14. The following steps approximate the solution without applying the Gram-Schmidt orthogonalization process (a simplified illustration in Python is sketched after the algorithm):
Step 1. Fix and . If , set . Else set .
Step 2. For set . Set .
Step 3. Set .
Step 4. For set .
Step 5. .
Step 6. Set .
Step 7. Set .
Step 8. If then set and go to Step 6. Else stop.
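Because the step details above depend on quantities defined earlier, the following self-contained Python sketch may help fix ideas. It solves a simple model problem in the same spirit as Algorithm 14, namely kernel-basis collocation combined with a fixed-point treatment of the nonlinearity and no Gram-Schmidt orthogonalization. Everything in it is an assumption made for illustration: the model equation and its data, the $W_2^1[0,1]$ kernel $K(x,y) = 1 + \min(x,y)$, the trapezoidal quadrature, and the identification $\psi_j(x) = K(x, x_j)$. It is not the MAPLE implementation used for the reported results, and the model equation is not one of the four examples of Section 4.

```python
import numpy as np

# Model problem (assumed example):
#   u(x) = f(x) + int_0^1 k(x,t) u(t)^2 dt,  k(x,t) = x t,
#   f(x) = 0.75 x,  exact solution u(x) = x on [0,1].
f = lambda x: 0.75 * x
k = lambda x, t: x * t
exact = lambda x: x

# Assumed reproducing kernel of W_2^1[0,1]: K(x,y) = 1 + min(x,y).
K = lambda x, y: 1.0 + np.minimum(x, y)

n = 20                                    # number of collocation points / basis functions
xs = np.linspace(0.0, 1.0, n)             # collocation nodes, dense in [0,1] as n grows
tq = np.linspace(0.0, 1.0, 2001)          # quadrature grid for the integral term
wq = np.full(tq.size, tq[1] - tq[0])      # trapezoid weights
wq[0] = wq[-1] = 0.5 * (tq[1] - tq[0])

A = K(xs[:, None], xs[None, :])           # A[i, j] = psi_j(x_i) = K(x_i, x_j); no orthogonalization

c = np.zeros(n)                           # coefficients of u_n(x) = sum_j c_j K(x, x_j)
for it in range(50):                      # fixed-point (Picard-type) iteration on the nonlinearity
    u_old = K(tq[:, None], xs[None, :]) @ c                       # current approximation on quad grid
    rhs = f(xs) + (k(xs[:, None], tq[None, :]) * u_old**2) @ wq   # right-hand side at the nodes
    c_new = np.linalg.solve(A, rhs)       # collocation system, solved directly
    if np.max(np.abs(c_new - c)) < 1e-12:
        c = c_new
        break
    c = c_new

xe = np.linspace(0.0, 1.0, 11)
u_approx = K(xe[:, None], xs[None, :]) @ c
print("max abs error:", np.max(np.abs(u_approx - exact(xe))))
```

With these choices the iteration converges in a few dozen steps and the printed maximum absolute error is governed by the quadrature and stopping tolerances.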

Algorithm 15. The following steps approximate the solution by applying the Gram-Schmidt orthogonalization process:
Step 1. Fix and . If , set . Else set .
Step 2. For set . Set .
Step 3. For and , if then set and . Else . Else .
Step 4. For set .
Step 5. Set .
Step 6. Set .
Step 7. Set .
Step 8. Set .
Step 9. If then set and go to Step 7. Else stop.

4. Numerical Experiments

In this section, four numerical examples are solved in order to show the potency and utility of the present method. All computations are performed with the MAPLE package. The results obtained by this method show good agreement with the exact solutions and illustrate the stability, consistency, and reliability of the presented scheme.

Example 1. As the first applicable instance, we consider a nonlinear Fredholm integral equation [26, 30], given in (38). The exact solution of this equation is known. According to (38), an initial approximation is assumed. The numerical results are given in Table 1, where a comparison between the absolute errors of our method and those of the Haar wavelet method [30] is presented. Figure 1 shows the approximate solution and its errors.

Example 2. As the second applicable example, an electromagnetic problem is solved via the presented method. It is modeled by the nonlinear Volterra integral equation given in (39) [26, 30]. The exact solution of this equation is known. According to (39), an initial approximation is assumed. The numerical results are given in Table 2, where a comparison between the absolute errors of the proposed method and those of the BPFs method [26] is presented. Figure 2 shows the approximate solution and its errors. In Table 3, a comparison of the execution times of Algorithms 14 and 15 is given.

Example 3. As another applicable example, an electromagnetic problem is solved via our method. It is modeled by the nonlinear Fredholm integral equation given in (40) [26, 30]. The exact solution of this equation is known. According to (40), an initial approximation is chosen. The numerical results are given in Table 4, where a comparison between the absolute errors of the proposed method and those of the BPFs method [26] is presented. Figure 3 shows the approximate solution and its error. In Table 5, a comparison of the execution times of Algorithms 14 and 15 is given.

Example 4. For this applicable example, the nonlinear Fredholm integral problem given in (41) [26, 30] is solved via our method. The exact solution of this equation is known. According to (41), an initial approximation is considered. The numerical results are given in Table 6, where a comparison between the absolute errors of the proposed method and those of the Haar wavelet method [30] is presented. Figure 4 shows the approximate solution and its error.

5. Conclusion

In this paper, a supplementary iterative Reproducing Kernel Hilbert Space Method was introduced and applied to obtain the approximate solution of some nonlinear integral equations. In this method, unlike other similar methods, the orthogonalization process is not used; although the execution time increases, the accuracy also increases. The main point of this paper is that Algorithm 14 has a higher execution time than Algorithm 15, but the approximate solution obtained with Algorithm 14 is more accurate than that of Algorithm 15. The uniform convergence of the current method is stated and proved. The obtained numerical results confirm that the method is a good candidate for solving nonlinear integral equations.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

The authors would like to thank Professor Haipeng Peng for instructive comments and recommendations to improve the quality of this work and also the Islamic Azad University, Science and Research Branch, Tehran, for supporting this project.