Abstract

We investigate the effectiveness of the reproducing kernel method (RKM) in solving partial differential equations. Based on reproducing kernel theory, we propose a reproducing kernel method for solving the telegraph equation with initial and boundary conditions. The exact solution is represented as a series in a reproducing kernel Hilbert space. Several numerical examples are given to demonstrate the accuracy of this method, and the results are compared with the exact solutions and with other methods. The results show that this method is simple, effective, and easy to use.

1. Introduction

Hyperbolic partial differential equations model the vibrations of structures (e.g., buildings, beams, and machines) and underlie fundamental equations of atomic physics. In this paper, we consider the telegraph equation (1) with initial conditions (2) and appropriate boundary conditions (3), solved by the reproducing kernel method (RKM). In recent years, much attention has been given in the literature to the development, analysis, and implementation of stable methods for the numerical solution of (1)–(3) [1–3]. Mohanty developed a new technique to solve the linear one-space-dimensional hyperbolic equation (1) [4]. A high-order accurate method for solving the linear hyperbolic equation is presented in [5]. A fourth-order compact finite difference approximation for discretizing the spatial derivative of the linear hyperbolic equation, combined with a collocation method for the time component, is used in [6]. A numerical scheme based on collocation points and thin plate spline radial basis functions is developed for the one-dimensional hyperbolic telegraph equation in [7]; several test problems were given, and the results of numerical experiments were compared with analytical solutions to confirm the good accuracy of that scheme. Yao [8] investigated a nonlinear hyperbolic telegraph equation with an integral condition in a reproducing kernel space. Yousefi presented a numerical method for solving the one-dimensional hyperbolic telegraph equation using a Legendre multiwavelet Galerkin method [9]. Dehghan and Lakestani presented a numerical technique for the solution of the second-order one-dimensional linear hyperbolic equation [10]. Lakestani and Saray used interpolating scaling functions for solving (1)–(3) [11]. Dehghan solved the second-order one-dimensional hyperbolic telegraph equation using the dual reciprocity boundary integral equation (DRBIE) method [12].
The problem has an explicit solution that can be obtained by the method of separation of variables [13].
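The displayed equations did not survive reproduction here. For reference, the second-order one-dimensional telegraph problem studied in this literature typically takes the following form; the symbol names (α, β, φ, ψ, g₀, g₁) are assumptions for this sketch, not necessarily the paper's own notation:

```latex
u_{tt} + 2\alpha\,u_t + \beta^2 u = u_{xx} + f(x,t),
    \qquad 0 \le x \le 1,\ 0 \le t \le T,            % (1)
u(x,0) = \varphi(x), \qquad u_t(x,0) = \psi(x),      % (2)
u(0,t) = g_0(t), \qquad u(1,t) = g_1(t).             % (3)
```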

In this paper, the problem is solved easily and elegantly by the RKM. The technique has many advantages over classical techniques: it avoids discretization and provides an efficient numerical solution with high accuracy, minimal calculation, and no physically unrealistic assumptions. In the following sections, we describe the procedure of this method.

The theory of reproducing kernels was first used at the beginning of the 20th century by Zaremba in his work on boundary value problems for harmonic and biharmonic functions [14]. Reproducing kernel theory has important applications in numerical analysis, differential equations, probability, and statistics. Recently, using the RKM, several authors have discussed the telegraph equation [15], Troesch's problem [16], MHD Jeffery-Hamel flow [17], Bratu's problem [18], the KdV equation [19], fractional differential equations [20], a nonlinear oscillator with discontinuity [21], nonlinear two-point boundary value problems [22], integral equations [23], and nonlinear partial differential equations [24].

The paper is organized as follows. Section 2 introduces several reproducing kernel spaces. The solution representation and a linear operator are presented in Section 3. Section 4 provides the main results: the exact and approximate solutions of (1)–(3) and an iterative method are developed for this kind of problem in the reproducing kernel space, and we prove that the approximate solution converges uniformly to the exact solution. Numerical experiments are illustrated in Section 5. Some conclusions are given in Section 6.

2. Reproducing Kernel Spaces

In this section, some useful reproducing kernel spaces are defined.

Definition 1 (reproducing kernel function). Let E be a nonempty set and let H be a Hilbert space of functions on E. A function K : E × E → ℝ is called a reproducing kernel function of the Hilbert space H if and only if (a) K(·, y) ∈ H for all y ∈ E, (b) ⟨u, K(·, y)⟩ = u(y) for all y ∈ E and all u ∈ H. The last condition is called "the reproducing property": the value of the function u at the point y is reproduced by the inner product of u with K(·, y).
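The reproducing property (b) can be checked numerically. The sketch below uses the classical space W₂¹[0,1] with inner product ⟨u, v⟩ = ∫₀¹ (uv + u′v′) dx, whose reproducing kernel has a known closed form; this space and kernel are a textbook example chosen only to make the sketch self-contained, and they differ from the kernels used later in the paper.

```python
import numpy as np

# Reproducing kernel of W_2^1[0,1] under <u,v> = ∫_0^1 (u v + u' v') dx:
#   K(x, y) = cosh(min(x,y)) * cosh(1 - max(x,y)) / sinh(1).

def K(x, y):
    lo, hi = np.minimum(x, y), np.maximum(x, y)
    return np.cosh(lo) * np.cosh(1.0 - hi) / np.sinh(1.0)

def Kx(x, y):
    # piecewise partial derivative of K with respect to x
    return np.where(x <= y,
                    np.sinh(x) * np.cosh(1.0 - y),
                    -np.cosh(y) * np.sinh(1.0 - x)) / np.sinh(1.0)

def inner(u, up, y, n=20001):
    # trapezoidal approximation of <u, K(., y)> = ∫_0^1 (u K + u' K_x) dx
    x = np.linspace(0.0, 1.0, n)
    f = u(x) * K(x, y) + up(x) * Kx(x, y)
    return (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * (x[1] - x[0])

u = lambda x: x**2        # a test function in W_2^1[0,1]
up = lambda x: 2.0 * x    # its derivative
print(inner(u, up, 0.5))  # reproduces u(0.5) = 0.25
```

The inner product of u with the kernel section at y = 0.5 returns the point value u(0.5), which is exactly the reproducing property.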

Definition 2. A Hilbert function space H is a reproducing kernel space if and only if, for any fixed y, the evaluation functional u ↦ u(y) is bounded [25, page 5].

Definition 3. We define the space by The inner product and the norm in are defined by

Lemma 4. The space is a reproducing kernel space, and its reproducing kernel function is given by [25, page 123]
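Kernel formulas such as the one in Lemma 4 are typically derived by integration by parts, as in the proof of Theorem 8 below. As a simplified one-dimensional analogue (an illustration, not the paper's own space), consider W₂¹[0,1] with ⟨u, v⟩ = ∫₀¹ (uv + u′v′) dx:

```latex
\langle u, K_y \rangle
  = \int_0^1 \left( u\,K_y + u'\,K_y' \right) dx
  = \int_0^1 u \left( K_y - K_y'' \right) dx
  + \left[ u\,K_y' \right]_0^1 .
% Requiring  K_y - K_y'' = \delta(\cdot - y)  and  K_y'(0) = K_y'(1) = 0
% forces  \langle u, K_y \rangle = u(y);  solving this ODE piecewise yields
K(x, y) = \frac{\cosh(\min(x,y))\,\cosh(1 - \max(x,y))}{\sinh(1)} .
```

The kernels of the higher-order spaces used in the paper arise from the same procedure, with more integrations by parts and more boundary conditions.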

Definition 5. We define the space by The inner product and the norm in are defined by

Lemma 6. The space is a reproducing kernel space, and its reproducing kernel function is given by [25, page 148]

Definition 7. We define the space by The inner product and the norm in are defined by The space is a reproducing kernel space, and its reproducing kernel function is given by the following theorem.

Theorem 8. The space is a reproducing kernel space, and its reproducing kernel function is given by where

Proof. Let and let . By Definition 7 and integrating by parts two times, we obtain that After substituting the values of , , , , , , and into the above equation, we get thus we obtain that By Definition 7, we have . So This completes the proof.

Definition 9. We define the binary space by The inner product and the norm in are defined by

Lemma 10. is a reproducing kernel space, and its reproducing kernel function is given by [25, page 148]

Definition 11. We define the binary space by The inner product and the norm in are defined by

Lemma 12. is a reproducing kernel space, and its reproducing kernel function is given as [25, page 148]

Remark 13. Hilbert spaces can be completely classified: up to isomorphism, there is exactly one Hilbert space for every cardinality of its orthonormal basis. Since finite-dimensional Hilbert spaces are fully understood in linear algebra, and since morphisms of Hilbert spaces can typically be reduced to morphisms of spaces of countably infinite dimension, functional analysis mostly deals with the unique separable infinite-dimensional Hilbert space and its morphisms. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace; many special cases of this invariant subspace problem have already been proven [26].

3. Solution Representation in

In this section, the solution of (1) is given in the reproducing kernel space. After homogenizing the initial and boundary conditions and defining a suitable linear operator L, model problem (1)–(3) changes to problem (26), where for convenience we keep the same notation for the transformed unknown.
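Assuming the standard telegraph form u_tt + 2αu_t + β²u = u_xx + f (the symbols α and β are assumed names for this sketch), one natural way to write the operator and the homogenized problem is:

```latex
(Lu)(x,t) = u_{tt} + 2\alpha\,u_t + \beta^2 u - u_{xx},
\qquad
\begin{cases}
(Lu)(x,t) = f(x,t), & (x,t) \in [0,1] \times [0,T], \\
u(x,0) = u_t(x,0) = 0, \\
u(0,t) = u(1,t) = 0,
\end{cases}
```

where the right-hand side f absorbs the original initial and boundary data after homogenization.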

Lemma 14. is a bounded linear operator.

Proof. Let and let . By Lemma 10, we have and thus Hence there exist such that Therefore, This completes the proof.

Now, choose a countable dense subset of points in the domain and define ψᵢ = L*φᵢ, where L* is the adjoint operator of L and φᵢ is the reproducing kernel of the target space at the i-th point. The orthonormal system {ψ̄ᵢ} can be derived from the process of Gram-Schmidt orthogonalization of {ψᵢ} as
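In computations, this Gram-Schmidt step produces both the orthonormal system and the triangular coefficients β (with ψ̄ᵢ = Σₖ βᵢₖψₖ) that enter the series solution below. The following sketch uses sampled function values and the Euclidean dot product as a stand-in for the W-space inner product, an assumption made only to keep the sketch self-contained.

```python
import numpy as np

def gram_schmidt(psi):
    """Orthonormalize the rows psi[i] (discretized functions) with
    modified Gram-Schmidt; returns (psi_bar, beta) such that
    psi_bar[i] = sum_k beta[i, k] * psi[k]."""
    n = len(psi)
    psi_bar = np.array(psi, dtype=float)
    beta = np.eye(n)
    for i in range(n):
        for j in range(i):
            c = psi_bar[j] @ psi_bar[i]    # projection coefficient
            psi_bar[i] -= c * psi_bar[j]   # remove the j-th component
            beta[i] -= c * beta[j]         # track it in the coefficients
        nrm = np.linalg.norm(psi_bar[i])
        psi_bar[i] /= nrm
        beta[i] /= nrm
    return psi_bar, beta

# quick check on random "functions" sampled at 10 points
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 10))
Q, B = gram_schmidt(P)
print(np.allclose(Q @ Q.T, np.eye(4)))  # rows are orthonormal
print(np.allclose(B @ P, Q))            # psi_bar = beta · psi holds
```

The lower-triangular matrix B corresponds to the orthogonalization coefficients appearing in the paper's series representation of the solution.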

Theorem 15. Suppose that is dense in . Then is a complete system in , and

Proof. We have Clearly, . For each fixed , if then Note that is dense in . Hence, . From the existence of , it follows that . The proof is completed.

Theorem 16. If is dense in , then the solution of (26) is given by

Proof. By Theorem 15, is a complete system in . Thus, This completes the proof.

Now the approximate solution can be obtained from the n-term truncation of the exact solution series. Obviously,

Theorem 17. If , then Moreover, a sequence is monotonically decreasing in .

Proof. From (38) and (40), it follows that Thus, In addition Clearly, is monotonically decreasing in .

4. The Method Implementation

(i) If (26) is linear, then the analytical solution of (26) can be obtained directly from (38).
(ii) If (26) is nonlinear, then the solution of (26) can be obtained by the following iterative method.

We construct an iterative sequence by the iterative formula (46), with coefficients given by (47). Next we prove that the sequence given by (46) converges to the exact solution.

Theorem 18. Suppose that defined by (46) is bounded and (26) has a unique solution. If is dense in , then converges to the analytical solution of (26), and where is given by (47).

Proof. First, we prove the convergence of . From (46) and the orthonormality of , we infer that By (49), is nondecreasing, and by the assumption, is bounded. Thus is convergent. By (49), there exists a constant such that This implies that If , then The completeness of shows that there exists such that as . Now, we prove that solves (26). Taking limits in (40), we get Note that In view of (47), we have Since is dense in , for each , there exists a subsequence such that We know that Let . By the continuity of , we have which indicates that satisfies (26).

Remark 19. In the same manner, it can be proved that where and is given by (47).
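The implementation pattern of this section, a direct series in the linear case and a Picard-type iteration that freezes the nonlinearity at the previous iterate in the nonlinear case, can be illustrated on a toy problem. The sketch below uses a manufactured nonlinear two-point boundary value problem and a finite-difference solve in place of the kernel-based series; the equation, grid, and solver are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# Toy nonlinear BVP:  u'' = u^3 + g(x),  u(0) = u(1) = 0,
# manufactured so that u_exact(x) = 0.1 * sin(pi * x).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u_exact = 0.1 * np.sin(np.pi * x)
g = -0.1 * np.pi**2 * np.sin(np.pi * x) - u_exact**3   # g = u'' - u^3

# second-difference matrix on the interior points (Dirichlet BCs)
m = n - 2
A = (np.diag(-2.0 * np.ones(m)) +
     np.diag(np.ones(m - 1), 1) +
     np.diag(np.ones(m - 1), -1)) / h**2

u = np.zeros(n)              # initial guess u_0 = 0
for _ in range(30):          # Picard iteration: freeze u^3 at the iterate
    rhs = u[1:-1]**3 + g[1:-1]
    u[1:-1] = np.linalg.solve(A, rhs)

print(np.max(np.abs(u - u_exact)))  # small: the iteration converged
```

Each pass solves a linear problem with the nonlinear term evaluated at the previous iterate, exactly the structure of the sequence defined by (46); in the paper the linear solve is performed via the orthonormal kernel expansion rather than finite differences.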

5. Numerical Results

To test the accuracy of the present method, some numerical experiments are presented in this section. Using our method, we chose points in the domain and obtained the approximate solution. The comparison between the interpolating scaling function method [11] and the RKM for different parameter values is given in Tables 4 and 5. We solve these examples for a set of sample points. In Tables 7 and 10 we calculate the RMS error. It can be seen from Tables 4, 5, and 7–10 that the results obtained by the RKM are more accurate than those obtained by the methods in [10, 11], which indicates that the RKM is a reliable method. The CPU time is given in Tables 1–10. Numerical solutions are also described in an extended domain. The comparison of RMS error is given for our method and the Chebyshev method.
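The RMS error formula referenced above did not survive reproduction; in the usual convention (an assumption here) it is the root of the mean squared pointwise error over the N sample points, which can be computed as follows:

```python
import numpy as np

def rms_error(u_approx, u_exact):
    """Root-mean-square error over the sample points:
    sqrt( (1/N) * sum_i (u_approx_i - u_exact_i)^2 )."""
    d = np.asarray(u_approx, dtype=float) - np.asarray(u_exact, dtype=float)
    return float(np.sqrt(np.mean(d**2)))

print(rms_error([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(rms_error([0.0, 0.0], [3.0, 4.0]))            # sqrt((9 + 16)/2)
```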

Example 20. Consider the telegraph equation (64) with initial and boundary conditions, whose exact solution is given in [11]. Applying the method to (64) yields equation (68). After homogenizing the initial and boundary conditions and using the above method, we obtain Tables 1–4 and Figure 1.

Example 21. Consider the telegraph equation (70) with initial and boundary conditions, whose exact solution is given in [11]. Applying the method to (70) yields equation (74). After homogenizing the initial and boundary conditions and using the above method, we obtain Tables 5–7 and Figures 2 and 3.

Example 22. Consider the telegraph equation (76) with initial and boundary conditions, whose exact solution is given in [10]. Applying the method to (76) yields equation (80). After homogenizing the initial and boundary conditions and using the above method, we obtain Tables 8–10 and Figure 4.

Remark 23. In Tables 1–9, we abbreviate the approximate solution and the exact solution by AS and ES, respectively. AE stands for the absolute error, that is, the absolute value of the difference between the exact and approximate solutions, while RE indicates the relative error, that is, the absolute error divided by the absolute value of the exact solution.

6. Conclusion

In this study, a second-order one-dimensional telegraph equation with initial and boundary conditions was solved by the reproducing kernel Hilbert space method. We described the method and applied it to several test examples to show its applicability and validity in comparison with exact solutions and other numerical methods. The obtained results show that this approach solves the problem effectively and requires few computations. The satisfactory results we obtained were compared with those of [10, 11]. Numerical experiments on the test examples show that the proposed scheme is of high accuracy and supports the theoretical results; as shown in Tables 7 and 10, our results are better than those obtained in [10]. According to these results, it is possible to apply the RKM to linear and nonlinear differential equations with initial and boundary conditions. It has been shown that the obtained approximations converge uniformly and that the operator used is a bounded linear operator. These results suggest that the RKM can be applied to high-dimensional partial differential equations, integral equations, and fractional differential equations without any transformation or discretization, with good results.