Special Issue: Some Recent Developments in Applied Functional Analysis
Research Article | Open Access
Mustafa Inc, Ali Akgül, Adem Kiliçman, "Explicit Solution of Telegraph Equation Based on Reproducing Kernel Method", Journal of Function Spaces, vol. 2012, Article ID 984682, 23 pages, 2012. https://doi.org/10.1155/2012/984682
Explicit Solution of Telegraph Equation Based on Reproducing Kernel Method
We propose a reproducing kernel method for solving the telegraph equation with initial conditions, based on reproducing kernel theory. The exact solution is represented in the form of a series, and several numerical examples are studied to demonstrate the validity and applicability of the technique. The method is easy to implement and produces accurate results.
In this paper, we consider the telegraph equation of the following form:
$$u_{tt}(x,t) + 2\alpha\, u_t(x,t) + \beta^2 u(x,t) = u_{xx}(x,t) + f(x,t), \quad (x,t) \in [0,1] \times [0,T], \tag{1.1}$$
where $\alpha > 0$ and $\beta \geq 0$ are known constant coefficients, with initial conditions
$$u(x,0) = f_1(x), \qquad u_t(x,0) = f_2(x), \tag{1.2}$$
where $u(x,t)$ can be the voltage or the current through the wire at position $x$ and time $t$. In (1.1) we have $2\alpha = R/L + G/C$ and $\beta^2 = RG/(LC)$, where $G$ is the conductance of the resistor, $R$ is the resistance of the resistor, $L$ is the inductance of the coil, and $C$ is the capacitance of the capacitor. Here $f$ can be considered as a function depending on the distance $x$ and the time $t$, the constants $\alpha$ and $\beta$ depend on the given problem, and $f$, $f_1$, and $f_2$ are known continuous functions.
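As a quick sanity check on the structure of (1.1), the following sketch uses symbolic differentiation to compute the source term $f(x,t)$ forced by a chosen trial solution. The coefficient values $\alpha = 1/2$, $\beta = 1$ and the trial solution $u = e^{-t}\sin x$ are illustrative assumptions, not taken from the examples of Section 4.

```python
import sympy as sp

x, t = sp.symbols('x t')
alpha, beta = sp.Rational(1, 2), 1      # assumed sample coefficients
u = sp.exp(-t) * sp.sin(x)              # assumed trial solution

# Telegraph equation: u_tt + 2*alpha*u_t + beta**2*u = u_xx + f(x, t),
# so the source term f is the residual of the left side minus u_xx.
f = sp.simplify(sp.diff(u, t, 2) + 2*alpha*sp.diff(u, t)
                + beta**2*u - sp.diff(u, x, 2))
print(f)  # -> 2*exp(-t)*sin(x)
```

With the true coefficients of a given transmission line, the same few lines check the consistency between a candidate exact solution and its source term.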
Hyperbolic partial differential equations model the vibrations of structures (e.g., buildings, beams, and machines) and are the basis for fundamental equations of atomic physics. Equation (1.1), referred to as the second-order telegraph equation with constant coefficients, models a mixture of diffusion and wave propagation by introducing a term that accounts for the effects of finite velocity in the standard heat or mass transport equation. Moreover, (1.1) is commonly used in signal analysis for the transmission and propagation of electrical signals [2, 3].
In recent years, much attention has been given in the literature to the development, analysis, and implementation of stable methods for the numerical solution of second-order hyperbolic equations; see, for example, [4–11]. These methods are conditionally stable. Mohanty introduced a new technique to solve (1.1) which is unconditionally stable and of second-order accuracy in both the time and space variables. Mohebbi and Dehghan presented a high-order accurate method for solving one-space-dimensional linear hyperbolic equations, whose high-order accuracy is due to a fourth-order discretization of the spatial derivative, and proved its unconditional stability. A compact finite difference approximation has also been presented, combining a fourth-order discretization of the spatial derivatives of the linear hyperbolic equation with a collocation method for the time variable. In another approach, the solution is approximated by a polynomial at each grid point, whose coefficients are determined by solving a linear system of equations. A method based on collocation points, approximating the solution by thin plate spline radial basis functions, has been presented as well.
The Chebyshev cardinal functions have also been applied to this problem. Lakestani and Saray used interpolating scaling functions. Ding et al. constructed a class of new difference schemes based on a new nonpolynomial spline method to solve (1.1) and (1.2). Lakoud and Belakroum studied the existence and uniqueness of the solution with an integral condition by using the Rothe time-discretization method. Dehghan et al. used the variational iteration method to compute the solution of the linear, variable-coefficient, fractional-derivative, and multispace telegraph equations. Further, Biazar et al. obtained an approximate solution by using the variational iteration method. Recently, Yao and Lin investigated a nonlinear hyperbolic telegraph equation with an integral condition in a reproducing kernel space. Yousefi presented a numerical method based on the Legendre multiwavelet Galerkin method.
In this paper, the RKHSM [25–47] is used to investigate the telegraph equation (1.1). Much research has been devoted to the application of the RKHSM to a wide class of stochastic and deterministic problems involving fractional differential equations, nonlinear oscillators with discontinuity, singular nonlinear two-point periodic boundary value problems, integral equations, and nonlinear partial differential equations [27–41]. The method is well suited to physical problems.
The method has been used by many authors to investigate several scientific applications. Geng and Cui applied the RKHSM to handle second-order boundary value problems. Yao and Cui and Wang et al. investigated a class of singular boundary value problems by this method, with good results. Zhou et al. used the RKHSM effectively to solve second-order boundary value problems. The method has also been used to solve nonlinear infinite-delay differential equations. Wang and Chao, Li and Cui, and Zhou and Cui independently employed the RKHSM for variable-coefficient partial differential equations. Geng and Cui and Du and Cui investigated the approximate solution of the forced Duffing equation with integral boundary conditions by combining the homotopy perturbation method with the RKHSM. Lv and Cui presented a new algorithm to solve linear fifth-order boundary value problems. In [38, 39], the authors developed a new existence proof of solutions for nonlinear boundary value problems. Cui and Du obtained the representation of the exact solution of nonlinear Volterra-Fredholm integral equations by using the reproducing kernel space. Wu and Li applied an iterative reproducing kernel method to obtain an analytical approximate solution of a nonlinear oscillator with discontinuities. Recently, the method has been applied to fractional partial differential equations and multipoint boundary value problems [42–45]. For more details about the RKHSM, its modified forms, and their effectiveness, see [25–47].
The paper is organized as follows. Section 2 is devoted to several reproducing kernel spaces, and a linear operator is introduced. The solution representation in $W(\Omega)$ is presented in Section 3, where we also prove that the approximate solution converges uniformly to the exact solution. Some numerical examples are illustrated in Section 4. We provide some conclusions in the last section.
Hilbert spaces can be completely classified: there is a unique Hilbert space, up to isomorphism, for every cardinality of the orthonormal basis. Since finite-dimensional Hilbert spaces are fully understood in linear algebra, and since morphisms between Hilbert spaces can always be decomposed into morphisms of spaces with aleph-null (ℵ₀) dimensionality, functional analysis of Hilbert spaces mostly deals with the unique Hilbert space of dimensionality ℵ₀ and its morphisms. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace; many special cases of this invariant subspace problem have already been proven.
2.1. Reproducing Kernel Spaces
In this section, we define some useful reproducing kernel spaces.
Definition 2.1 (reproducing kernel). Let $E$ be a nonempty set. A function $K : E \times E \to \mathbb{C}$ is called a reproducing kernel of a Hilbert space $H$ of functions on $E$ if and only if (a) $K(\cdot, y) \in H$ for all $y \in E$; (b) $\langle \varphi, K(\cdot, y) \rangle = \varphi(y)$ for all $y \in E$ and all $\varphi \in H$.
The last condition is called “the reproducing property,” as the value of the function $\varphi$ at the point $y$ is reproduced by the inner product of $\varphi$ with $K(\cdot, y)$.
We now introduce some notation used in the development of the paper, defining several spaces together with inner products over those spaces. The space defined as is a Hilbert space. The inner product and the norm in it are defined by respectively. Thus the space is a reproducing kernel space; that is, for each fixed and any , there exists a function such that and similarly we define the space
The inner product and the norm in it are defined by respectively. The space is a reproducing kernel Hilbert space, and its reproducing kernel function is given as and the space is a Hilbert space, where the inner product and the norm are defined by respectively. The space is a reproducing kernel space, and its reproducing kernel function is given as
Similarly, the space defined by is a Hilbert space, and the inner product and the norm in it are defined by respectively. The space is a reproducing kernel space, and its reproducing kernel function is given as
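As a concrete instance of the reproducing property, consider the space $W_2^1[0,1]$ with inner product $\langle u,v\rangle = \int_0^1 (uv + u'v')\,dx$, whose reproducing kernel is the classical $K(x,y) = \cosh(\min(x,y))\cosh(\max(x,y)-1)/\sinh(1)$. This is a standard textbook example used here only for illustration (the spaces employed in this paper are those defined above). The sketch verifies $\langle f, K(\cdot,y)\rangle = f(y)$ numerically:

```python
import numpy as np

def K(x, y):
    # Reproducing kernel of W_2^1[0,1] under <u,v> = ∫_0^1 (uv + u'v') dx
    return np.cosh(np.minimum(x, y)) * np.cosh(np.maximum(x, y) - 1.0) / np.sinh(1.0)

def Kx(x, y):
    # ∂K/∂x, piecewise smooth across x = y
    return np.where(x < y,
                    np.sinh(x) * np.cosh(y - 1.0),
                    np.cosh(y) * np.sinh(x - 1.0)) / np.sinh(1.0)

f  = lambda x: np.sin(2.0 * x) + x**2            # arbitrary smooth test function
fp = lambda x: 2.0 * np.cos(2.0 * x) + 2.0 * x   # its derivative

y = 0.37                                         # arbitrary evaluation point
xs = np.linspace(0.0, 1.0, 200001)
g = f(xs) * K(xs, y) + fp(xs) * Kx(xs, y)        # integrand of <f, K(., y)>
inner = np.sum((g[1:] + g[:-1]) * np.diff(xs)) / 2.0  # trapezoidal rule
print(inner, f(y))                               # the two values agree
```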
Now we have the following theorem.
Theorem 2.2. The space is a complete reproducing kernel space whose reproducing kernel is given by where
Proof. Through iterative integration by parts applied to (2.15), we have
Note the property of the reproducing kernel as
Then by (2.16) we obtain when , therefore
Since we have
From (2.18) and (2.23), the unknown coefficients and can be obtained. Thus is given by
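The coefficient-matching procedure above (reproducing property, integration by parts, continuity, and jump conditions) can be illustrated on the one-dimensional model space $W_2^1[0,1]$ with inner product $\int_0^1 (uv + u'v')\,dx$: there the kernel satisfies $K - K_{xx} = 0$ on each side of $y$, $K_x(0,y) = K_x(1,y) = 0$, continuity at $x = y$, and the unit derivative jump $K_x(y^+,y) - K_x(y^-,y) = -1$. This is an illustrative stand-in for determining the coefficients in (2.18) and (2.23), not the space of Theorem 2.2 itself.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)   # with 0 < y < 1 understood
A, B = sp.symbols('A B')
left  = A * sp.cosh(x)        # solves K - K'' = 0 with K'(0) = 0
right = B * sp.cosh(x - 1)    # solves K - K'' = 0 with K'(1) = 0

# Continuity at x = y and unit jump in the derivative across x = y
eqs = [sp.Eq(left.subs(x, y), right.subs(x, y)),
       sp.Eq(sp.diff(right, x).subs(x, y) - sp.diff(left, x).subs(x, y), -1)]
sol = sp.solve(eqs, [A, B])
print(sp.simplify(sol[A]))    # equals cosh(y - 1)/sinh(1)
```

The recovered coefficients reproduce the piecewise kernel $K(x,y) = \cosh(\min(x,y))\cosh(\max(x,y)-1)/\sinh(1)$; the higher-order spaces in the theorem lead to larger but structurally identical linear systems.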
Now we note that the space given as is a reproducing kernel Hilbert space of functions of two variables. The inner product and the norm in it are defined by respectively.
Theorem 2.3. The space is a reproducing kernel space, and its reproducing kernel function is such that, for any ,
Similarly, the space is a reproducing kernel Hilbert space of functions of two variables. The inner product and the norm in it are defined as respectively. It is a reproducing kernel space, and its reproducing kernel function is
3. Solution Representation in W(Ω)
In this section, the solution of (1.1) is given in the reproducing kernel space . We define the linear operator as
Then the model problem (1.1) is converted into the following equivalent problem:
Lemma 3.1. The operator is a bounded linear operator.
Proof. By the continuity of , we have
Now similarly for , we obtain and then
Therefore we conclude
Now choose a countable dense subset $\{(x_1,t_1),(x_2,t_2),\ldots\}$ of $\Omega$ and define $\psi_i = L^{*}\varphi_i$, where $L^{*}$ is the adjoint operator of $L$. Then the orthonormal system $\{\bar{\psi}_i\}_{i=1}^{\infty}$ of $W(\Omega)$ can be derived from the Gram-Schmidt orthogonalization process applied to $\{\psi_i\}_{i=1}^{\infty}$:
$$\bar{\psi}_i(x,t)=\sum_{k=1}^{i}\beta_{ik}\,\psi_k(x,t),\qquad \beta_{ii}>0,\quad i=1,2,\ldots.$$
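The Gram-Schmidt step above can be sketched as follows. For illustration, the $W(\Omega)$ inner product is replaced by a discretized $L^2[0,1]$ inner product and the functions $\psi_i$ by monomials; both are stand-ins, since the actual method orthogonalizes $\psi_i = L^{*}\varphi_i$ in $W(\Omega)$.

```python
import numpy as np

def gram_schmidt(psis, inner):
    """Orthonormalize the functions psi_1, psi_2, ... with respect to `inner`."""
    basis = []
    for psi in psis:
        v = psi.astype(float)
        for b in basis:
            v = v - inner(v, b) * b             # remove the component along b
        basis.append(v / np.sqrt(inner(v, v)))  # normalize
    return basis

# Stand-in setting: monomials on [0,1] under a trapezoidal L2 inner product
xs = np.linspace(0.0, 1.0, 10001)
def inner(u, v):
    g = u * v
    return np.sum((g[1:] + g[:-1]) * np.diff(xs)) / 2.0

bars = gram_schmidt([xs**k for k in range(3)], inner)
```

Since the orthogonalization is carried out with respect to the same discrete inner product used for checking, the resulting system is orthonormal to machine precision.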
Then we have the following theorem.
Theorem 3.2. Suppose that $\{(x_i,t_i)\}_{i=1}^{\infty}$ is dense in $\Omega$. Then $\{\psi_i\}_{i=1}^{\infty}$ is a complete system in $W(\Omega)$, and
Proof. We have
Clearly $\psi_i \in W(\Omega)$. For each fixed $u \in W(\Omega)$, if $\langle u,\psi_i\rangle_{W(\Omega)}=0$ for $i=1,2,\ldots$, then
$$\langle u,\psi_i\rangle_{W(\Omega)}=\langle u,L^{*}\varphi_i\rangle_{W(\Omega)}=\langle Lu,\varphi_i\rangle=(Lu)(x_i,t_i)=0.$$
Note that $\{(x_i,t_i)\}_{i=1}^{\infty}$ is dense in $\Omega$; hence $(Lu)(x,t)=0$. It follows from the existence of $L^{-1}$ that $u\equiv 0$. So the proof is complete.
Theorem 3.3. If $\{(x_i,t_i)\}_{i=1}^{\infty}$ is dense in $\Omega$, then the solution of (1.5) is given as
$$u(x,t)=\sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\,f(x_k,t_k)\,\bar{\psi}_i(x,t).$$
Proof. Since $\{\bar{\psi}_i\}_{i=1}^{\infty}$ is a complete system in $W(\Omega)$, we have
$$u(x,t)=\sum_{i=1}^{\infty}\langle u,\bar{\psi}_i\rangle_{W(\Omega)}\bar{\psi}_i(x,t)=\sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\langle u,\psi_k\rangle_{W(\Omega)}\bar{\psi}_i(x,t)=\sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\langle Lu,\varphi_k\rangle\bar{\psi}_i(x,t)=\sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\,f(x_k,t_k)\,\bar{\psi}_i(x,t).$$
Now the approximate solution $u_n$ can be obtained as the $n$-term truncation of the exact solution:
$$u_n(x,t)=\sum_{i=1}^{n}\sum_{k=1}^{i}\beta_{ik}\,f(x_k,t_k)\,\bar{\psi}_i(x,t).$$
Of course, it is also easy to show that $\|u_n-u\|_{W(\Omega)}\to 0$ as $n\to\infty$.
3.1. Convergence Analysis
We assume that $\{(x_i,t_i)\}_{i=1}^{\infty}$ is dense in $\Omega$ and discuss the convergence of the approximate solutions constructed in Section 3. Let $u$ be the exact solution of (1.1) and $u_n$ the $n$-term approximate solution of (1.1). Then we have the following theorem.
Theorem 3.4. If $u_n$ is the approximate solution defined above, then $\|u_n-u\|_{W(\Omega)}\to 0$ as $n\to\infty$. Moreover, the sequence of errors $\{\|u_n-u\|_{W(\Omega)}\}$ is monotonically decreasing in $n$.
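The monotone decrease of the error is a general property of orthonormal expansions and is easy to observe numerically. In the sketch below, the $L^2[0,1]$ sine basis and a sample function stand in for $\{\bar{\psi}_i\}$ and the exact solution (illustrative choices, not the paper's basis); the error of the $n$-term partial sum never increases as terms are added.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 20001)
def inner(u, v):
    g = u * v
    return np.sum((g[1:] + g[:-1]) * np.diff(xs)) / 2.0  # trapezoidal L2 product

u = xs * (1.0 - xs)                       # stand-in for the exact solution
u_n = np.zeros_like(xs)
errs = []
for k in range(1, 9):
    phi = np.sqrt(2.0) * np.sin(k * np.pi * xs)  # orthonormal in L2[0,1]
    u_n = u_n + inner(u, phi) * phi       # add the next term of the series
    errs.append(np.sqrt(inner(u - u_n, u - u_n)))
print(errs)                               # non-increasing sequence
```

By the Pythagorean theorem in the discrete inner product, each added term can only remove energy from the residual, which is exactly the mechanism behind Theorem 3.4.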
4. Experimental Results for the Telegraph Equation
In this section, three numerical examples are provided to show the accuracy of the present method. All computations were performed with Maple 13. Since the RKHSM does not require discretization of the variables, that is, of time and space, it is not affected by discretization round-off errors and does not demand large computer memory or long computation times. The accuracy of the RKHSM for problem (1.1) is controllable, and the absolute errors are small with the present choice of parameters (see Tables 1, 2, 3, 4, 5, and 6). Thus the numerical results justify the advantage of this methodology.
Note that the solutions converge very rapidly when the RKHSM is utilized. Further, the series solution methodology can be applied to various types of linear or nonlinear single partial differential equations and systems of partial differential equations; see, for example, [25–30].
In Table 7, we compute the relative errors at the selected points. It is possible to refine the results by increasing the number of points.
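A relative error of the kind reported in Table 7 can be computed as follows; the sample values below are hypothetical and are not taken from the tables.

```python
import numpy as np

def relative_error(exact, approx):
    """Discrete relative error ||exact - approx||_2 / ||exact||_2."""
    exact, approx = np.asarray(exact, float), np.asarray(approx, float)
    return np.linalg.norm(exact - approx) / np.linalg.norm(exact)

# Hypothetical exact/approximate values at a few sample points (x_i, t_i)
exact  = [1.00, 0.80, 0.64, 0.51]
approx = [1.00, 0.80, 0.64, 0.52]
print(relative_error(exact, approx))
```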
We constructed figures for different values of and . The figures show that the speed of convergence decreases as the values of and increase.
Example 4.1. Consider the following telegraph equation with initial conditions:
Then the exact solution is given as
If we apply the method to (4.3), then we obtain the following equation: The resulting estimates are given in Table 1.
Example 4.2. Consider the following telegraph equation with initial conditions: The exact solution is . If we apply the method to (4.6), then we obtain Similarly, Table 3 reports the comparison between the exact and approximate solutions together with the error terms.
The resulting comparison is given in Table 4.
Example 4.3. Consider the following telegraph equation with initial conditions: