#### Abstract

We propose a reproducing kernel method, based on reproducing kernel theory, for solving the telegraph equation with initial conditions. The exact solution is represented in the form of a series, and several numerical examples are studied in order to demonstrate the validity and applicability of the technique. The method is easy to implement and produces accurate results.

#### 1. Introduction

In this paper, we consider the telegraph equation of the following form:

$$u_{tt}(x,t) + 2\alpha\, u_t(x,t) + \beta^2 u(x,t) = u_{xx}(x,t) + f(x,t), \qquad (x,t) \in \Omega, \tag{1.1}$$

over a region $\Omega$, where $\alpha$ and $\beta$ are known constant coefficients, with initial conditions

$$u(x,0) = \varphi(x), \qquad u_t(x,0) = \psi(x), \tag{1.2}$$

where $u(x,t)$ can be the voltage or the current through the wire at position $x$ and time $t$. In (1.1) the coefficients are determined by the line parameters (for instance, $2\alpha = G/C + R/L$ and $\beta^2 = GR/(CL)$ after scaling so that $LC = 1$), where $G$ is the conductance of the resistor, $R$ is the resistance of the resistor, $L$ is the inductance of the coil, and $C$ is the capacitance of the capacitor. The source term $f$ can be considered as a function depending on the distance $x$ and the time $t$, the constants $\alpha$ and $\beta$ depend on the given problem, and $\varphi$ and $\psi$ are known continuous functions.
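As a concrete illustration of the model problem (not of the RKHSM itself), the sketch below integrates the telegraph equation $u_{tt} + 2\alpha u_t + \beta^2 u = u_{xx} + f$ with a simple explicit finite-difference scheme. The domain $[0,\pi]$, the parameter values, and the manufactured forcing (chosen so that $e^{-t}\sin x$ is the exact solution) are our assumptions for this demonstration only.

```python
import numpy as np

# Explicit finite-difference sketch for the telegraph equation
#   u_tt + 2*alpha*u_t + beta**2 * u = u_xx + f(x, t)
# on [0, pi] with homogeneous Dirichlet boundary values.
alpha, beta = 1.0, 1.0

def f(x, t):
    # forcing manufactured so that u(x, t) = exp(-t) * sin(x) is exact
    return np.exp(-t) * np.sin(x)

def solve_telegraph(nx=51, dt=0.01, T=1.0):
    x = np.linspace(0.0, np.pi, nx)
    dx = x[1] - x[0]
    u_prev = np.sin(x)                       # u(x, 0)
    # first step by Taylor expansion: u_t(x,0) = -sin(x),
    # u_tt(x,0) = u_xx - 2*alpha*u_t - beta**2*u + f evaluated at t = 0
    u_tt0 = -np.sin(x) - 2*alpha*(-np.sin(x)) - beta**2*np.sin(x) + f(x, 0.0)
    u = u_prev + dt*(-np.sin(x)) + 0.5*dt**2*u_tt0
    t = dt
    while t < T - 1e-12:
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
        # centered differences in time for both u_tt and u_t
        rhs = ((2*u - u_prev)/dt**2 + alpha*u_prev/dt
               - beta**2*u + lap + f(x, t))
        u_next = rhs / (1.0/dt**2 + alpha/dt)
        u_next[0] = u_next[-1] = 0.0         # Dirichlet boundaries
        u_prev, u = u, u_next
        t += dt
    return x, u, t

x, u, t = solve_telegraph()
err = np.max(np.abs(u - np.exp(-t)*np.sin(x)))
print(f"max error at t={t:.2f}: {err:.2e}")
```

The time step satisfies the usual CFL-type restriction $\Delta t \le \Delta x$ for the unit wave speed, so the explicit scheme remains stable.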

Hyperbolic partial differential equations model the vibrations of structures (e.g., buildings, beams, and machines) and are the basis for fundamental equations of atomic physics. Equation (1.1), referred to as the second-order telegraph equation with constant coefficients, models a mixture of diffusion and wave propagation, since it adds to the standard heat or mass transport equation a term that accounts for the effects of a finite propagation velocity [1]. Moreover, (1.1) is commonly used in signal analysis for the transmission and propagation of electrical signals [2, 3].

In recent years, much attention has been given in the literature to the development, analysis, and implementation of stable methods for the numerical solution of second-order hyperbolic equations; see, for example, [4–11]. These methods are conditionally stable. In [12], Mohanty developed a new technique to solve (1.1) that is unconditionally stable and of second-order accuracy in both the time and space components. Mohebbi and Dehghan [13] presented a high-order accurate method for solving one-space-dimensional linear hyperbolic equations and proved its high-order accuracy, due to a fourth-order discretization of the spatial derivative, together with unconditional stability. A compact finite difference approximation was presented in [14], using a fourth-order discretization of the spatial derivatives of the linear hyperbolic equation and a collocation method for the time component. In [15], the solution is approximated by a polynomial at each grid point whose coefficients are determined by solving a linear system of equations. A method based on collocation points and approximation of the solution by thin plate spline radial basis functions was presented in [16].

In [17], the author used Chebyshev cardinal functions. Lakestani and Saray [18] used interpolating scaling functions. Ding et al. [19] constructed a class of new difference schemes, based on a new nonpolynomial spline method, to solve (1.1) and (1.2). Lakoud and Belakroum [20] studied the existence and uniqueness of the solution with an integral condition by using the Rothe time-discretization method. Dehghan et al. [21] computed the solution of the linear, variable-coefficient, fractional-derivative, and multispace telegraph equations by using the variational iteration method. Further, Biazar et al. [22] obtained an approximate solution by using the variational iteration method. Recently, Yao and Lin [23] investigated a nonlinear hyperbolic telegraph equation with an integral condition in a reproducing kernel space. In [24], Yousefi presented a numerical method based on the Legendre multiwavelet Galerkin method.

In this paper, the RKHSM [25–47] will be used to investigate the telegraph equation (1.1). Several studies have been devoted to the application of the RKHSM to a wide class of stochastic and deterministic problems involving fractional differential equations, nonlinear oscillators with discontinuities, singular nonlinear two-point periodic boundary value problems, integral equations, and nonlinear partial differential equations [27–41]. The method is well suited to physical problems.

The method has been used by many authors to investigate several scientific applications, demonstrating its efficiency. Geng and Cui [27] applied the RKHSM to handle second-order boundary value problems. Yao and Cui [28] and Wang et al. [29] investigated a class of singular boundary value problems by this method, and the results obtained were good. Zhou et al. [30] used the RKHSM effectively to solve second-order boundary value problems. In [31], the method was used to solve nonlinear infinite-delay-differential equations. Wang and Chao [32], Li and Cui [33], and Zhou and Cui [34] independently employed the RKHSM for variable-coefficient partial differential equations. Geng and Cui [35] and Du and Cui [36] investigated the approximate solution of the forced Duffing equation with integral boundary conditions by combining the homotopy perturbation method with the RKHSM. Lv and Cui [37] presented a new algorithm to solve linear fifth-order boundary value problems. In [38, 39], the authors developed new existence proofs of solutions for nonlinear boundary value problems. Cui and Du [40] obtained the representation of the exact solution of nonlinear Volterra-Fredholm integral equations by using the reproducing kernel space. Wu and Li [41] applied an iterative reproducing kernel method to obtain an analytical approximate solution of a nonlinear oscillator with discontinuities. Recently, the method has been applied to fractional partial differential equations and multipoint boundary value problems [42–45]. For more details about the RKHSM, its modified forms, and their effectiveness, see [25–47].

In the present work, we reduce (1.1) and (1.2) to an equivalent problem with homogeneous initial conditions: applying a suitable transformation to (1.1) and (1.2) yields the homogenized problem (1.5), with a correspondingly modified source term.
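A standard homogenizing substitution of the kind described above can be sketched as follows (a reconstruction assuming initial data $u(x,0) = \varphi(x)$, $u_t(x,0) = \psi(x)$; the paper's exact transformation may differ):

```latex
v(x,t) = u(x,t) - \varphi(x) - t\,\psi(x),
```

which converts (1.1)–(1.2) into a problem with homogeneous initial data,

```latex
v_{tt} + 2\alpha\, v_t + \beta^{2} v = v_{xx} + F(x,t),
\qquad v(x,0) = 0, \quad v_t(x,0) = 0,
```

with the modified source term $F(x,t) = f(x,t) + \varphi''(x) + t\,\psi''(x) - 2\alpha\,\psi(x) - \beta^{2}\bigl(\varphi(x) + t\,\psi(x)\bigr)$, obtained by substituting $u = v + \varphi + t\psi$ into (1.1).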

The paper is organized as follows. Section 2 is devoted to several reproducing kernel spaces, and a linear operator is introduced. The solution representation in $W(\Omega)$ is presented in Section 3, where we also prove that the approximate solution converges uniformly to the exact solution. Some numerical examples are illustrated in Section 4. We provide some conclusions in the last section.

#### 2. Preliminaries

Hilbert spaces can be completely classified: there is a unique Hilbert space, up to isomorphism, for every cardinality of the orthonormal basis. Since finite-dimensional Hilbert spaces are fully understood in linear algebra, and since morphisms of Hilbert spaces can always be divided into morphisms of spaces with Aleph-null ($\aleph_0$) dimensionality, functional analysis of Hilbert spaces mostly deals with the unique Hilbert space of dimensionality $\aleph_0$ and its morphisms. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace. Many special cases of this invariant subspace problem have already been proven [48].

##### 2.1. Reproducing Kernel Spaces

In this section, we define some useful reproducing kernel spaces.

*Definition 2.1 (reproducing kernel). *Let $X$ be a nonempty set. A function $K : X \times X \to \mathbb{C}$ is called a reproducing kernel of a Hilbert space $H$ of functions on $X$ if and only if (a) $K(\cdot, t) \in H$ for all $t \in X$, and (b) $\langle \varphi, K(\cdot, t)\rangle = \varphi(t)$ for all $t \in X$ and all $\varphi \in H$.

The last condition is called "the reproducing property": the value of the function $\varphi$ at the point $t$ is reproduced by the inner product of $\varphi$ with $K(\cdot, t)$.
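As a concrete, standard example of this property: on $W_2^1[0,1]$ with inner product $\langle u,v\rangle = u(0)v(0) + \int_0^1 u'(x)v'(x)\,dx$, the kernel $k(x,y) = 1 + \min(x,y)$ reproduces point evaluation. The snippet below checks this numerically; this simple space/kernel pair is our illustration and is not one of the higher-order spaces from [26] used later.

```python
import numpy as np

# Check <u, k(., y)> = u(y) for the kernel k(x, y) = 1 + min(x, y)
# in W^1_2[0,1] with <u,v> = u(0)v(0) + int_0^1 u'(x) v'(x) dx.

def inner(u, du, ky, dky, n=200_000):
    # u(0)*k(0,y) plus a midpoint-rule quadrature of u'(x) * d/dx k(x,y)
    x = (np.arange(n) + 0.5) / n
    return u(0.0)*ky(0.0) + np.mean(du(x)*dky(x))

y = 0.37
u  = lambda x: np.sin(3*x) + x**2            # an arbitrary test function
du = lambda x: 3*np.cos(3*x) + 2*x
ky  = lambda x: 1.0 + np.minimum(x, y)       # k(x, y) for fixed y
dky = lambda x: (x < y).astype(float)        # d/dx k(x, y): 1 for x < y, else 0

val = inner(u, du, ky, dky)
print(val, u(y))   # the two numbers should agree
```

The agreement follows because $\int_0^1 u'(x)\,\mathbf{1}_{\{x<y\}}\,dx = u(y) - u(0)$ and the $u(0)$ term is restored by the boundary part of the inner product.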

We next introduce some notation used in the development of the paper and define several reproducing kernel spaces together with their inner products and norms; the definitions and the corresponding reproducing kernel functions are taken from [26]. Each of these spaces is a Hilbert space, and each is a reproducing kernel space: for each fixed point $y$ and any element $u$ of the space, there exists a function $R_y$ in the space such that $\langle u, R_y\rangle = u(y)$.

Now we have the following theorem.

Theorem 2.2. *The space is a complete reproducing kernel space whose reproducing kernel is given by an explicit piecewise formula, with coefficients determined in the proof below.*

*Proof. *Through iterative integration by parts in (2.15), the inner product $\langle u, R_y\rangle$ can be written as an integral term plus boundary terms. By the reproducing property, this expression must equal $u(y)$ for every $u$ in the space. If the boundary terms vanish, then by (2.16) the kernel $R_y(x)$ satisfies, for $x \neq y$, a homogeneous linear ordinary differential equation with constant coefficients; therefore, on each of the subintervals $x \le y$ and $x > y$, $R_y$ is a linear combination of the fundamental solutions of that equation. Since $R_y$ and its lower-order derivatives are continuous at $x = y$, while the highest-order derivative has a prescribed jump there, we obtain a system of matching conditions. From (2.18) and (2.23), the unknown coefficients on the two subintervals can be obtained, and thus $R_y$ is given explicitly.

We now note that the space $W(\Omega)$ given in [26] is a binary (two-variable) reproducing kernel Hilbert space; its inner product and norm are defined in [26].

Theorem 2.3. *The space $W(\Omega)$ is a reproducing kernel space, and its reproducing kernel function $R_{(y,s)}$ satisfies $u(y,s) = \langle u, R_{(y,s)}\rangle$ for any $u \in W(\Omega)$.*

Similarly, a second binary reproducing kernel Hilbert space is defined, with inner product and norm as in [26]; it is likewise a reproducing kernel space with an explicit reproducing kernel function.

#### 3. Solution Representation in *W*(Ω)

In this section, the solution of (1.1) is given in the reproducing kernel space $W(\Omega)$. We define a linear operator $L$ on $W(\Omega)$ so that model problem (1.1) changes to the following operator-equation problem.
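In the notation of the homogenized problem, the operator and the model problem can be written as follows (a reconstruction using our symbols $v$ for the homogenized unknown and $F$ for the modified source; the paper's notation may differ):

```latex
(Lv)(x,t) \;=\; v_{tt} + 2\alpha\, v_t + \beta^{2} v - v_{xx},
\qquad (x,t) \in \Omega,
\\[4pt]
Lv = F(x,t), \qquad v(x,0) = 0, \qquad v_t(x,0) = 0.
```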

Lemma 3.1. *The operator $L$ is a bounded linear operator. *

*Proof. * Since $u(x,t) = \langle u, R_{(x,t)}\rangle$, by the continuity of the reproducing kernel we have $|u(x,t)| \le \|u\|\,\|R_{(x,t)}\| \le M_0\|u\|$. Now, similarly, for the derivatives of $u$ that appear in $Lu$, we obtain bounds of the form $M_i\|u\|$, and then each term of $Lu$ is bounded by a constant multiple of $\|u\|$. Therefore we conclude that $\|Lu\| \le M\|u\|$ for some constant $M > 0$; that is, $L$ is a bounded linear operator.

Now, if we choose a countable dense subset $\{(x_i,t_i)\}_{i=1}^{\infty}$ in $\Omega$ and define $\psi_i = L^{*}\varphi_i$, where $L^{*}$ is the adjoint operator of $L$, then the orthonormal system $\{\bar{\psi}_i\}_{i=1}^{\infty}$ of $W(\Omega)$ can be derived from the process of Gram-Schmidt orthogonalization of $\{\psi_i\}_{i=1}^{\infty}$, writing $\bar{\psi}_i = \sum_{k=1}^{i} \beta_{ik}\,\psi_k$.
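The Gram-Schmidt step that turns $\{\psi_i\}$ into an orthonormal system $\{\bar{\psi}_i\}$ with triangular coefficients $\beta_{ik}$ can be sketched as follows. Here the $\psi_i$ are represented as plain Euclidean vectors rather than elements of $W(\Omega)$, so this illustrates only the orthogonalization, not the operator $L^{*}$.

```python
import numpy as np

def gram_schmidt(psi):
    """Return orthonormalized rows psi_bar and triangular coefficients
    beta such that psi_bar[i] = sum_k beta[i, k] * psi[k]."""
    n = len(psi)
    psi_bar = np.zeros_like(psi, dtype=float)
    beta = np.zeros((n, n))
    for i in range(n):
        v = psi[i].astype(float)
        coeff = np.zeros(n)
        coeff[i] = 1.0
        for j in range(i):
            proj = np.dot(psi_bar[j], psi[i])   # component along psi_bar[j]
            v = v - proj * psi_bar[j]
            coeff = coeff - proj * beta[j]      # track expansion in psi
        nrm = np.linalg.norm(v)
        psi_bar[i] = v / nrm
        beta[i] = coeff / nrm
    return psi_bar, beta

psi = np.array([[1.0, 1.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
psi_bar, beta = gram_schmidt(psi)
print(np.round(psi_bar @ psi_bar.T, 12))   # rows are orthonormal
```

Because `beta` is lower triangular, each $\bar{\psi}_i$ depends only on $\psi_1, \dots, \psi_i$, exactly as in the expansion $\bar{\psi}_i = \sum_{k\le i} \beta_{ik}\psi_k$.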

Then we have the following theorem.

Theorem 3.2. *Suppose that $\{(x_i,t_i)\}_{i=1}^{\infty}$ is dense in $\Omega$; then $\{\psi_i\}_{i=1}^{\infty}$ is a complete system in $W(\Omega)$, and $\psi_i(x,t) = (L R_{(x,t)})(x_i,t_i)$.*

*Proof. *We have $\psi_i(x,t) = (L^{*}\varphi_i)(x,t) = \langle L^{*}\varphi_i, R_{(x,t)}\rangle = \langle \varphi_i, L R_{(x,t)}\rangle = (L R_{(x,t)})(x_i,t_i)$. Clearly $\psi_i \in W(\Omega)$. For each fixed $u \in W(\Omega)$, if $\langle u, \psi_i\rangle = 0$ for all $i$, then $\langle u, L^{*}\varphi_i\rangle = \langle Lu, \varphi_i\rangle = (Lu)(x_i,t_i) = 0$. Note that $\{(x_i,t_i)\}_{i=1}^{\infty}$ is dense in $\Omega$; hence $(Lu)(x,t) \equiv 0$. It follows that $u \equiv 0$ from the existence of $L^{-1}$. So the proof is complete.

Theorem 3.3. *If $\{(x_i,t_i)\}_{i=1}^{\infty}$ is dense in $\Omega$, then the solution of (1.5) is given as $u = \sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\,F(x_k,t_k)\,\bar{\psi}_i$.*

*Proof. *Since $\{\bar{\psi}_i\}_{i=1}^{\infty}$ is a complete system in $W(\Omega)$, we have
$u = \sum_{i=1}^{\infty}\langle u, \bar{\psi}_i\rangle\bar{\psi}_i = \sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\langle Lu, \varphi_k\rangle\bar{\psi}_i = \sum_{i=1}^{\infty}\sum_{k=1}^{i}\beta_{ik}\,F(x_k,t_k)\,\bar{\psi}_i$.

Now the approximate solution $u_n$ can be obtained as the $n$-term truncation of the exact solution, $u_n = \sum_{i=1}^{n}\sum_{k=1}^{i}\beta_{ik}\,F(x_k,t_k)\,\bar{\psi}_i$, and of course it is also easy to show that $u_n \to u$ as $n \to \infty$.

##### 3.1. Convergence Analysis

We assume that $\{(x_i,t_i)\}_{i=1}^{\infty}$ is dense in $\Omega$ and discuss the convergence of the approximate solutions constructed in Section 3. Let $u$ be the exact solution of (1.1) and $u_n$ its $n$-term approximate solution. Then we have the following theorem.

Theorem 3.4. *If $u \in W(\Omega)$, then $\|u_n - u\| \to 0$ as $n \to \infty$. Moreover, the sequence $\{\|u_n - u\|\}_{n=1}^{\infty}$ is monotonically decreasing in $n$. *

*Proof. *From (3.14) and (3.16), it follows that $u - u_n$ is the tail of the orthogonal series, so $\|u_n - u\| \to 0$ as $n \to \infty$. In addition, $\|u_n - u\|^2 = \sum_{i=n+1}^{\infty} \bigl|\langle u, \bar{\psi}_i\rangle\bigr|^2$, so clearly $\{\|u_n - u\|\}$ is monotonically decreasing in $n$.
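Theorem 3.4 can be illustrated with a concrete orthonormal system: the error of an $n$-term orthogonal projection is monotonically decreasing in $n$. The sketch below uses the Fourier sine basis of $L^2[0,1]$ in place of $\{\bar{\psi}_i\}$, so it is an analogy for the truncation behavior, not the $W(\Omega)$ expansion itself.

```python
import numpy as np

n = 20_000
x = (np.arange(n) + 0.5) / n                 # midpoint grid on [0, 1]

def integral(f):
    return np.mean(f)                        # midpoint rule, interval length 1

u = x * (1 - x) * np.exp(x)                  # a smooth target function
proj = np.zeros_like(x)                      # running n-term approximation
errors = []
for i in range(1, 9):
    phi = np.sqrt(2.0) * np.sin(i*np.pi*x)   # orthonormal in L2[0, 1]
    proj = proj + integral(u*phi) * phi      # add the i-th projection term
    errors.append(float(np.sqrt(integral((u - proj)**2))))

print([round(e, 6) for e in errors])         # non-increasing sequence
```

Each added term removes the squared coefficient $|\langle u, \phi_i\rangle|^2$ from the squared error, which is exactly the mechanism behind the monotonicity in Theorem 3.4.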

#### 4. Experimental Results for the Telegraph Equation

In this section, three numerical examples are provided to show the accuracy of the present method. All computations were performed with Maple 13. Since the RKHSM does not require discretization of the variables (time and space), it is less affected by computational round-off errors and does not demand large computer memory or long computation times. The accuracy of the RKHSM for problem (1.1) is controllable, and the absolute errors are small for the present choice of nodes (see Tables 1, 2, 3, 4, 5, and 6). Thus the numerical results justify the advantages of this methodology.

Note that the solutions obtained by the RKHSM converge very rapidly. Further, the series solution methodology can be applied to various types of linear or nonlinear systems of partial differential equations and single partial differential equations; see, for example, [25–30].

Using our method, we choose a set of nodes in $\Omega$. In Tables 2, 4, and 6 we compute the absolute errors at the chosen points.

In Table 7, we compute the relative errors at the same points. It is possible to refine the results by increasing the number of points.
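For concreteness, error measures of the kind reported in the tables can be computed as follows. The sample arrays here are hypothetical, and the relative-error definition shown (a Euclidean norm ratio) is one common choice, not necessarily the paper's.

```python
import numpy as np

# Hypothetical exact and approximate solution values at a few test points.
u_exact  = np.array([0.912, 0.541, 0.233, 0.067])
u_approx = np.array([0.9121, 0.5408, 0.2331, 0.0671])

abs_err = np.abs(u_exact - u_approx)                 # pointwise absolute error
rel_err = np.linalg.norm(u_exact - u_approx) / np.linalg.norm(u_exact)
print(abs_err.max(), rel_err)
```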

We constructed the figures for different parameter values. It can be concluded from the figures that the speed of convergence decreases as these values increase.

*Example 4.1. *Consider the following telegraph equation with initial conditions:
Then the exact solution is given as

If we apply the transformation to (4.3), then we obtain the following equation:
Then we have the estimates in Table 1.

Now if we compare [13, 21] and the present scheme we have Table 2.

We have Figures 1, 2, and 3 for this example, where ES = exact solution and AS = approximate solution.

*Example 4.2. *Consider the following telegraph equation with initial conditions:
The exact solution is as follows. If we apply the transformation to (4.6), then we obtain
the following equation; similarly, Table 3 gives the comparison of the exact and approximate solutions together with the error terms.

The comparison is given in Table 4.

We have Figures 4, 5, and 6 for this example.

*Example 4.3. *Consider the following telegraph equation with initial conditions:

The exact solution is as follows. If we apply the transformation
to (4.8), then (4.10) is obtained as follows.

We have Figures 7, 8, 9, 10, and 11 for this example.

*Remark 4.4. *Ding et al. [19] solved Examples 4.1–4.3 by using new polynomial spline methods. The numerical solutions obtained by our method are in good agreement with the analytical solutions (see Figures 1–11). In addition, the relative error is plotted for each of the three examples (see Figure 11).

One may consult Tables 1, 2, 3, 4, 5, and 6 for the reliability of the method and for the comparison with other methods. In Table 7, the computing time together with the relative error is given for each example.

#### 5. Conclusion

In this paper, the RKHSM was used to solve the telegraph equation with initial conditions. The approximate solutions were calculated by the RKHSM without any need for transformation techniques, linearization, or perturbation of the equations. In closing, the RKHSM avoids these difficulties and the associated massive computational work by determining analytic solutions directly. We compare our solutions with the exact solutions and with the results of [19].

A clear conclusion can be drawn from the numerical results: the RKHSM algorithm provides highly accurate numerical solutions, without spatial discretization, for nonlinear partial differential equations. It is also worth noting that the method exhibits fast convergence of the solutions. The illustrations show that the speed of convergence depends on the character and behavior of the solutions, just as for closed-form solutions.

#### Acknowledgments

The authors express their sincere thanks to the referees for the careful and detailed reading of the paper and the very helpful suggestions that improved the paper substantially. The third author gratefully acknowledges that this paper was partially supported by the University Putra Malaysia under the ERGS Grant Scheme having project no. 5527068.