Recent Developments in Integral Transforms, Special Functions, and Their Extensions to Distributions Theory
Mustafa Inc, Ali Akgül, Adem Kılıçman, "Numerical Solutions of the Second-Order One-Dimensional Telegraph Equation Based on Reproducing Kernel Hilbert Space Method", Abstract and Applied Analysis, vol. 2013, Article ID 768963, 13 pages, 2013. https://doi.org/10.1155/2013/768963
Numerical Solutions of the Second-Order One-Dimensional Telegraph Equation Based on Reproducing Kernel Hilbert Space Method
Abstract
We investigate the effectiveness of the reproducing kernel method (RKM) in solving partial differential equations. Based on reproducing kernel theory, we propose an RKM for solving the telegraph equation with initial and boundary conditions. The exact solution is represented as a series in a reproducing kernel Hilbert space. Several numerical examples are given to demonstrate the accuracy of the method, and the results are compared with exact solutions and with other methods. The numerical examples show that the method is simple, effective, and easy to use.
1. Introduction
Hyperbolic partial differential equations model the vibrations of structures (e.g., buildings, beams, and machines) and underlie fundamental equations of atomic physics. In this paper, we consider the telegraph equation (1) with initial conditions (2) and appropriate boundary conditions (3), solved by the reproducing kernel method (RKM). In recent years, much attention has been given in the literature to the development, analysis, and implementation of stable methods for the numerical solution of (1)–(3) [1–3]. Mohanty developed a new technique to solve the linear one-space-dimensional hyperbolic equation (1) [4]. A high-order accurate method for solving linear hyperbolic equations is presented in [5]. A fourth-order compact finite difference approximation for discretizing the spatial derivative of the linear hyperbolic equation, combined with a collocation method for the time component, is used in [6]. A numerical scheme based on collocation points, approximating the solution with thin-plate-spline radial basis functions, is developed for the one-dimensional hyperbolic telegraph equation in [7]; several test problems were solved, and the numerical results were compared with analytical solutions to confirm the good accuracy of that scheme. Yao [8] investigated a nonlinear hyperbolic telegraph equation with an integral condition in a reproducing kernel space. Yousefi presented a numerical method for solving the one-dimensional hyperbolic telegraph equation using a Legendre multiwavelet Galerkin method [9]. Dehghan and Lakestani presented a numerical technique for the second-order one-dimensional linear hyperbolic equation [10]. Lakestani and Saray used interpolating scaling functions to solve (1)–(3) [11]. Dehghan solved the second-order one-dimensional hyperbolic telegraph equation using the dual reciprocity boundary integral equation (DRBIE) method [12].
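The displayed equations were not preserved in this text. For orientation, the second-order one-dimensional telegraph equation is commonly written in the cited literature in the following form; the symbol names (α, β, f, g, h) are our notational assumptions, not necessarily those of the paper:

```latex
u_{tt}(x,t) + 2\alpha\,u_t(x,t) + \beta^{2}u(x,t) = u_{xx}(x,t) + f(x,t),
\quad 0 < x < 1,\ t > 0, \tag{1}
```
```latex
u(x,0) = g_1(x), \qquad u_t(x,0) = g_2(x), \tag{2}
```
```latex
u(0,t) = h_1(t), \qquad u(1,t) = h_2(t). \tag{3}
```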
The problem has an explicit solution that can be obtained by the method of separation of variables [13].
In this paper, the problem is solved easily and elegantly by the RKM. The technique has many advantages over classical techniques: it avoids discretization and provides an efficient numerical solution with high accuracy, minimal computation, and no physically unrealistic assumptions. In the next section, we describe the procedure of this method.
The theory of reproducing kernels was first used at the beginning of the twentieth century by Zaremba in his work on boundary value problems for harmonic and biharmonic functions [14]. Reproducing kernel theory has important applications in numerical analysis, differential equations, probability, and statistics. Recently, using the RKM, several authors have treated the telegraph equation [15], Troesch's problem [16], MHD Jeffery-Hamel flow [17], Bratu's problem [18], the KdV equation [19], fractional differential equations [20], a nonlinear oscillator with discontinuity [21], nonlinear two-point boundary value problems [22], integral equations [23], and nonlinear partial differential equations [24].
The paper is organized as follows. Section 2 introduces several reproducing kernel spaces. The solution representation and a linear operator are presented in Section 3. Section 4 provides the main results: the exact and approximate solutions of (1)–(3) and an iterative method are developed for this kind of problem in the reproducing kernel space, and we prove that the approximate solution converges uniformly to the exact solution. Numerical experiments are reported in Section 5. Conclusions are given in Section 6.
2. Reproducing Kernel Spaces
In this section, some useful reproducing kernel spaces are defined.
Definition 1 (reproducing kernel function). Let . A function is called a reproducing kernel function of the Hilbert space if and only if (a) for all , (b) for all and all . The last condition is called “the reproducing property” as the value of the function at the point is reproduced by the inner product of with .
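In symbols, with H a Hilbert space of functions on a set E and kernel K (the letter choices here are ours), the two conditions of the definition read:

```latex
\text{(a)}\quad K(\cdot,\,y) \in H \quad \text{for every } y \in E,
\qquad
\text{(b)}\quad f(y) = \langle f,\ K(\cdot,\,y)\rangle_{H} \quad \text{for every } f \in H \text{ and } y \in E.
```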
Definition 2. Hilbert function space is a reproducing kernel space if and only if for any fixed , the linear functional is bounded [25, page 5].
Definition 3. We define the space by The inner product and the norm in are defined by
Lemma 4. The space is a reproducing kernel space, and its reproducing kernel function is given by [25, page 123]
Definition 5. We define the space by The inner product and the norm in are defined by
Lemma 6. The space is a reproducing kernel space, and its reproducing kernel function is given by [25, page 148]
Definition 7. We define the space by The inner product and the norm in are defined by The space is a reproducing kernel space, and its reproducing kernel function is given by the following theorem.
Theorem 8. The space is a reproducing kernel space, and its reproducing kernel function is given by where
Proof. Let and let . By Definition 7 and integrating by parts two times, we obtain that After substituting the values of , , , , , , and into the above equation, we get thus we obtain that By Definition 7, we have . So This completes the proof.
Definition 9. We define the two-variable function space by The inner product and the norm in are defined by
Lemma 10. is a reproducing kernel space, and its reproducing kernel function is given by [25, page 148]
Definition 11. We define the two-variable function space by The inner product and the norm in are defined by
Lemma 12. is a reproducing kernel space, and its reproducing kernel function is given as [25, page 148]
Remark 13. Hilbert spaces can be completely classified: there is a unique Hilbert space, up to isomorphism, for every cardinality of its orthonormal basis. Since finite-dimensional Hilbert spaces are fully understood by linear algebra, and since morphisms of Hilbert spaces can always be divided into morphisms of spaces of aleph-null dimensionality, functional analysis of Hilbert spaces mostly deals with the unique Hilbert space of aleph-null dimensionality and its morphisms. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace; many special cases of this invariant subspace problem have already been proven [26].
3. Solution Representation in
In this section, the solution of (1) is given in the reproducing kernel space . Defining the linear operator by , and after homogenizing the initial and boundary conditions, model problem (1)–(3) becomes the problem where, for convenience, we again write instead of in (26).
Lemma 14. is a bounded linear operator.
Proof. Let and let . By Lemma 10, we have and thus Hence there exist such that Therefore, This completes the proof.
Now, choose a countable dense subset in and define where is the adjoint operator of . The orthonormal system of can be derived by applying the Gram-Schmidt orthogonalization process to as
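The Gram-Schmidt step above can be sketched numerically. The snippet below orthonormalizes a family of sample vectors, which stand in for the functions ψ_i evaluated on a grid; the vectors and the Euclidean inner product are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def gram_schmidt(vectors):
    """Modified Gram-Schmidt: return an orthonormal family spanning
    the same space as the input vectors (discrete stand-ins for psi_i)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for b in basis:               # subtract projections onto earlier vectors
            w -= np.dot(w, b) * b
        norm = np.linalg.norm(w)
        if norm > 1e-12:              # skip (numerically) dependent vectors
            basis.append(w / norm)
    return basis

# Illustrative use: orthonormalize three vectors in R^3.
psi = [np.array([1.0, 1.0, 0.0]),
       np.array([1.0, 0.0, 1.0]),
       np.array([0.0, 1.0, 1.0])]
psi_bar = gram_schmidt(psi)
```

The modified (rather than classical) variant is used because subtracting projections from the running residual is numerically more stable.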
Theorem 15. Suppose that is dense in . Then is a complete system in , and
Proof. We have Clearly, . For each fixed , if then Note that is dense in . Hence, . From the existence of , it follows that . The proof is completed.
Theorem 16. If is dense in , then the solution of (26) is given by
Proof. By Theorem 15, is a complete system in . Thus, This completes the proof.
Now the approximate solution can be obtained by truncating the series representation of the exact solution, and Obviously
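The monotone decrease of the error asserted in Theorem 17 below is a general property of truncated orthonormal expansions, which a small experiment can illustrate. Here a discrete sine basis on a midpoint grid stands in for the orthonormal system of the paper, and f(x) = x(1-x) is an arbitrary test function; all of these choices are ours.

```python
import numpy as np

# Approximate f(x) = x(1-x) on [0,1] by a truncated expansion in a
# discretely orthonormal sine basis; the L2 error cannot increase with n.
m = 400
x = (np.arange(m) + 0.5) / m                       # midpoint grid on [0, 1]
f = x * (1 - x)
basis = [np.sqrt(2) * np.sin(k * np.pi * x) for k in range(1, 11)]

errors = []
u_n = np.zeros(m)
for b in basis:
    u_n = u_n + (np.dot(f, b) / m) * b             # add one more term A_k * psi_k
    errors.append(np.sqrt(np.mean((f - u_n) ** 2)))  # discrete L2 error
```

Because each step enlarges the projection subspace, the recorded error sequence is nonincreasing, mirroring the monotonicity statement in the theorem.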
Theorem 17. If , then Moreover, the sequence is monotonically decreasing in .
Proof. From (38) and (40), it follows that Thus, In addition Clearly, is monotonically decreasing in .
4. The Method Implementation
(i) If (26) is linear, then the analytical solution of (26) can be obtained directly by (38).
(ii) If (26) is nonlinear, then the solution of (26) can be obtained by the following iterative method.
We construct an iterative sequence , putting where Next we will prove that given by the iterative formula (46) converges to the exact solution.
Theorem 18. Suppose that defined by (46) is bounded and (26) has a unique solution. If is dense in , then converges to the analytical solution of (26), and where is given by (47).
Proof. First, we prove the convergence of . From (46) and the orthonormality of , we infer that By (49), is nondecreasing, and by the assumption, is bounded. Thus is convergent. By (49), there exists a constant such that This implies that If , then The completeness of shows that there exists such that as . Now, we prove that solves (26). Taking limits in (40), we get Note that In view of (47), we have Since is dense in , for each , there exists a subsequence such that We know that Let . By the continuity of , we have which indicates that satisfies (26).
Remark 19. In the same manner, it can be proved that where and is given by (47).
5. Numerical Results
To test the accuracy of the present method, some numerical experiments are presented in this section. Using our method, we chose points in and obtained the approximate solution . The comparison between the interpolating scaling function method [11] and the RKM for different values of , , and is given in Tables 4 and 5. We solve these examples for a set of points In Tables 7 and 10 we calculate the RMS error by the following formula: It can be seen from Tables 4, 5, and 7–10 that the results obtained by the RKM are more accurate than those obtained by the methods in [10, 11], which indicates that the RKM is a reliable method. The CPU time () is given in Tables 1–10. Numerical solutions are also described on the extended domain . The comparison of the RMS error is given for our method and the Chebyshev method.
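The RMS error formula referenced above did not survive in the text; the usual definition over N grid points, sketched here with made-up values (not the paper's data), is:

```python
import numpy as np

def rms_error(u_exact, u_approx):
    """Root-mean-square error over N grid points:
    sqrt( sum_i (u_exact_i - u_approx_i)^2 / N )."""
    u_exact = np.asarray(u_exact, dtype=float)
    u_approx = np.asarray(u_approx, dtype=float)
    return np.sqrt(np.mean((u_exact - u_approx) ** 2))

# Hypothetical illustration: identical arrays give zero RMS error.
print(rms_error([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # prints 0.0
```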

