Abstract

The nonlinear Klein-Gordon equation (KGE) models many nonlinear phenomena. In this paper, we propose a scheme for the numerical approximation of solutions of the one-dimensional nonlinear KGE. A common approach to finding a solution of a nonlinear system is to first linearize the equations by successive substitution or the Newton iteration method and then solve a linear least-squares problem. Here, we show that it can be advantageous to form a sum of squared residuals of the nonlinear problem and then find a zero of the gradient. Our scheme is based on the Sobolev gradient method and solves the nonlinear least-squares problem directly. The numerical results are compared with the Lattice Boltzmann Method (LBM). The $L_2$, $L_\infty$, and Root-Mean-Square (RMS) error values indicate better accuracy of the proposed method with less computational effort.

1. Introduction

Many nonlinear phenomena in solid state physics, plasma physics, electrostatics, chemical kinetics, and fluid dynamics can be modeled through partial differential equations (PDEs) [1]. One example of such a PDE is the nonlinear KGE, which arises in relativistic quantum mechanics and field theory [2]. This equation has attracted the attention of many scientists and has been widely used to study various laws related to the motion of elementary particles in condensed matter and particle physics [3]. The KGE models a wide class of scientific phenomena, including the propagation of dislocations in crystals and the behavior of elementary particles. The equation also arises in the study of solitons and perturbation theory [4-6].

The initial value problem for the one-dimensional nonlinear KGE is given by
$$u_{tt}(x,t) + \alpha u_{xx}(x,t) + g(u(x,t)) = f(x,t), \qquad a \le x \le b, \; t > 0,$$
where $u(x,t)$ is the wave displacement at position $x$ and time $t$, $\alpha$ is a constant, and $g(u)$ is the nonlinear force. The residual of the equation can be written as
$$r(u) = u_{tt} + \alpha u_{xx} + g(u) - f.$$
The nonlinear KGE also has a conserved Hamiltonian energy
$$E(t) = \frac{1}{2} \int_a^b \left( u_t^2 - \alpha u_x^2 + 2G(u) \right) dx,$$
where $G$ is an antiderivative of the nonlinear force, $G'(u) = g(u)$.
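For the general form written above, and for data such that the boundary terms vanish, a short calculation confirms that $E$ is conserved when $f = 0$:
$$\frac{dE}{dt} = \int_a^b \left( u_t u_{tt} - \alpha u_x u_{xt} + g(u)\, u_t \right) dx = \int_a^b u_t \left( u_{tt} + \alpha u_{xx} + g(u) \right) dx = \int_a^b u_t\, f \, dx,$$
where the middle term has been integrated by parts using the vanishing of $u_t$ at the endpoints, so $E$ is constant whenever $f = 0$.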

In this paper we compute numerical approximations of the nonlinear KGE with quadratic nonlinearity, $g(u) = \beta u + \gamma u^2$, and with cubic nonlinearity, $g(u) = \beta u + \gamma u^3$, where $\alpha$, $\beta$, and $\gamma$ are known constants.

A number of different numerical schemes have been proposed for the solution of the KGE. It is mentioned in [1] that many methods, like the inverse scattering method, the Bäcklund transformation, the auxiliary equation method [7, 8], the pseudospectral method, and Riccati equation expansion methods, have been used to solve these types of equations (see [1] and the references therein). Other methods, like spectral and pseudospectral approaches, have recently been presented in [9]. In [10] spectral methods using Legendre polynomials (SMLP) as basis functions are considered to solve the KGE. The solution of the nonlinear KGE using a $\theta$-weighted finite difference discretization with radial basis functions (RBF) is discussed in [11]. Many difference schemes are given in [12]. In these difference schemes, undesirable results such as instability and loss of spatial symmetry were observed for a large number of parameter choices in the initial conditions.

Another approach to solving a boundary value problem is to formulate it as the minimization of a least-squares functional representing the residuals of the equation. The least-squares functional acts as a measure of the error, and the boundary conditions are conveniently satisfied in this gradient descent approach.

One such technique is the Sobolev gradient method. Sobolev gradients have been used in the minimization of Ginzburg-Landau energy functionals [13] related to phase separation and ordering in finite difference settings [14], in electrostatic potential equations [15], nonlinear elliptic problems [16], inverse problems in elasticity [17], groundwater modeling [18], and the simulation of Bose-Einstein condensates [19].

The paper is organized as follows. In Section 2 the Sobolev gradient method is discussed and some existence results for the proposed method are presented. In Section 3 we apply the method to the nonlinear KGE. In Section 4 the results of numerical experiments on several test problems are presented, together with comparisons against other standard methods such as spectral and finite difference schemes. Section 5 contains the summary and conclusions. References are provided at the end.

All numerical experiments in this paper were carried out on an Intel(R) Core i3 1.70 GHz processor with 4 GB RAM. All codes were written and all graphs produced in MATLAB; they are available upon request.

2. Sobolev Gradient Method

In this section we briefly review Sobolev gradients and steepest descent. A comprehensive treatment of Sobolev gradients is given in [13], and the theory presented here follows [13].

We first state the Riesz representation theorem, which is used in our work.

Theorem 1 (Riesz representation). Every bounded linear functional $\phi$ on a Hilbert space $H$ can be represented in terms of the inner product; namely,
$$\phi(x) = \langle x, z \rangle \quad \text{for all } x \in H,$$
where $z$ depends on $\phi$, is uniquely determined by $\phi$, and has norm $\|z\| = \|\phi\|$.

Let $n$ be a positive integer and let $F$ be a differentiable real-valued function on $\mathbb{R}^n$. Then the gradient $\nabla F$ is defined by
$$F'(x)h = \lim_{t \to 0} \frac{F(x + th) - F(x)}{t} = \langle h, \nabla F(x) \rangle_{\mathbb{R}^n}, \qquad x, h \in \mathbb{R}^n.$$
For $F$ as above but with an inner product $\langle \cdot, \cdot \rangle_S$ different from the standard inner product $\langle \cdot, \cdot \rangle_{\mathbb{R}^n}$, there is a function $\nabla_S F : \mathbb{R}^n \to \mathbb{R}^n$ so that
$$F'(x)h = \langle h, \nabla_S F(x) \rangle_S, \qquad x, h \in \mathbb{R}^n.$$
Since every linear functional defined on a finite-dimensional vector space is bounded, by Theorem 1 the linear functional $F'(x)$ can be represented using any inner product on $\mathbb{R}^n$. We say that $\nabla_S F$ is the gradient of $F$ with respect to the inner product $\langle \cdot, \cdot \rangle_S$; it is worth noticing that $\nabla_S F$ has the same set of properties as $\nabla F$.

From linear algebra there is a symmetric positive definite linear transformation $A : \mathbb{R}^n \to \mathbb{R}^n$ by which these two inner products are related,
$$\langle x, y \rangle_S = \langle x, A y \rangle_{\mathbb{R}^n}, \qquad x, y \in \mathbb{R}^n,$$
and some reflection leads to
$$\nabla_S F(x) = A^{-1} \nabla F(x), \qquad x \in \mathbb{R}^n.$$
More generally, every point $x$ may possess its own inner product $\langle \cdot, \cdot \rangle_x$ on $\mathbb{R}^n$; therefore, for $x \in \mathbb{R}^n$, define $\nabla_x F(x)$ such that
$$F'(x)h = \langle h, \nabla_x F(x) \rangle_x, \qquad h \in \mathbb{R}^n.$$
We thus have a collection of gradients whose numerical properties can differ tremendously, for the same function $F$, depending upon the choice of metric.
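The relation $\nabla_S F = A^{-1} \nabla F$ is easy to verify numerically. The following MATLAB fragment is a minimal sketch of this relation with an illustrative symmetric positive definite matrix $A$ (not one used in the later sections):

% Minimal sketch: gradient of F(x) = 0.5*||x||^2 with respect to a
% weighted inner product <x,y>_S = <x, A*y>; A below is illustrative.
n  = 8;
e  = ones(n,1);
A  = spdiags([-e, 3*e, -e], -1:1, n, n);   % a symmetric positive definite metric
x  = randn(n,1);
gE = x;                                    % Euclidean gradient of 0.5*||x||^2
gS = A \ gE;                               % S-gradient: A^{-1} times the Euclidean gradient
h  = randn(n,1);
disp(abs(h'*gE - h'*(A*gS)))               % F'(x)h = <h, gS>_S holds up to round-off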

The gradient of a function defined on a finite- or infinite-dimensional Sobolev space is known as a Sobolev gradient.

The method of steepest descent can be classified into two categories: continuous steepest descent and discrete steepest descent.

Consider a Hilbert space $H$ with inner product $\langle \cdot, \cdot \rangle_H$, a real-valued function $F$ defined on $H$, and its gradient $\nabla F$.

Discrete steepest descent generates a sequence of points
$$x_{k+1} = x_k - \lambda_k \nabla F(x_k), \qquad k = 0, 1, 2, \ldots,$$
where the initial guess $x_0$ is given and $\lambda_k$ is selected so that, if possible, it minimizes
$$F\bigl(x_k - \lambda_k \nabla F(x_k)\bigr).$$
Continuous steepest descent is a process of finding a function $z : [0, \infty) \to H$ such that
$$z(0) = x_0, \qquad z'(t) = -\nabla F(z(t)), \quad t \ge 0.$$
Continuous steepest descent is regarded as a limiting case of discrete steepest descent, and the discrete iteration can be used as a numerical technique to extract solutions of the continuous problem.
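As an illustration of the discrete iteration, the following MATLAB sketch applies steepest descent with a simple backtracking choice of $\lambda_k$ to an illustrative functional; the functional and the tolerance are placeholders, not those used later in the paper.

% Discrete steepest descent with a backtracking choice of the step size.
F     = @(x) 0.25*sum(x.^4) + 0.5*sum(x.^2);   % an illustrative smooth functional
gradF = @(x) x.^3 + x;                          % its Euclidean gradient
x   = 2*ones(10,1);                             % initial guess x_0
lam = 1;                                        % initial trial step
for k = 1:200
    g = gradF(x);
    if norm(g, inf) < 1e-10, break; end         % gradient (nearly) zero: stop
    while F(x - lam*g) >= F(x)                  % shrink lambda until F decreases
        lam = lam/2;
    end
    x = x - lam*g;                              % descent step
    lam = 2*lam;                                % let the step grow again
end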

To prove the convergence of discrete steepest descent, we extract a theoretical initial point by using continuous steepest descent.

In the discrete case we desire $u = \lim_{k \to \infty} x_k$ such that $\nabla F(u) = 0$; in the continuous case we desire $u = \lim_{t \to \infty} z(t)$ satisfying the same condition.

The Sobolev gradient technique is a process in which we discretize a gradient in a Sobolev space rather than in a Euclidean space. For the solution of a PDE, we represent the PDE by an error functional in a least-squares formulation. We try to find a $u$ in the domain of the PDE such that the residual $r(u)$ is zero, while the functional $F$ and its gradient are given by
$$F(u) = \frac{1}{2} \|r(u)\|^2, \qquad F'(u)h = \langle h, \nabla F(u) \rangle.$$
The following theorem gives existence and convergence of continuous steepest descent for several linear and nonlinear forms of $F$.

Theorem 2. Let $F$ be a nonnegative $C^1$ function (a differentiable function whose derivative is continuous) on a Hilbert space $H$ which has a locally Lipschitzian gradient. Then, for each $x \in H$, there exists a unique function $z : [0, \infty) \to H$ with
$$z(0) = x, \qquad z'(t) = -\nabla F(z(t)), \quad t \ge 0.$$

Another observation which is useful in our work is as follows.

Theorem 3. Suppose all the conditions in the hypothesis of the above theorem are satisfied and
$$u = \lim_{t \to \infty} z(t)$$
exists; then $\nabla F(u) = 0$.

3. The Nonlinear Klein-Gordon Equation

Consider the problem
$$u_{tt}(x,t) + \alpha u_{xx}(x,t) + \beta u(x,t) + \gamma u^k(x,t) = f(x,t), \qquad a \le x \le b, \; t > 0,$$
with initial conditions
$$u(x,0) = \phi(x), \qquad u_t(x,0) = \psi(x), \qquad a \le x \le b,$$
and Dirichlet boundary conditions
$$u(a,t) = h_1(t), \qquad u(b,t) = h_2(t), \qquad t > 0,$$
where $\alpha$, $\beta$, and $\gamma$ are constants and $f$, $\phi$, $\psi$, $h_1$, and $h_2$ are known functions while $u$ is unknown. For $k = 2$ and $k = 3$ we have quadratic and cubic nonlinearity, respectively.

To solve the problem numerically, we work in a finite-dimensional vector space on a uniform grid of $m$ nodes. We denote by $\mathbb{R}^m$ the vector space of real $m$-vectors equipped with the usual inner product $\langle u, v \rangle = \sum_{i=1}^{m} u_i v_i$. The operators $D_0$, $D_1$, and $D_2$ are defined by
$$(D_0 u)_i = \frac{u_{i-1} + u_{i+1}}{2}, \qquad (D_1 u)_i = \frac{u_{i+1} - u_{i-1}}{2\, dx}, \qquad (D_2 u)_i = \frac{u_{i+1} - 2 u_i + u_{i-1}}{dx^2},$$
for $i = 2, \ldots, m-1$, where $dx$ is the spacing between the nodes. $D_0$ is the average operator, while $D_1$ and $D_2$ are standard central difference formulas for approximating the first and second derivatives. The choice of difference formula is not important to the theoretical development of this paper; other choices would also work.
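In MATLAB these operators can be assembled as sparse matrices; the following brief sketch is consistent with the interior-node formulas above, with an illustrative grid size and interval:

% Sparse averaging and central difference operators on a uniform grid.
m  = 101;  a = -1;  b = 1;
dx = (b - a)/(m - 1);
e  = ones(m,1);
D0 = spdiags([e/2, zeros(m,1), e/2], -1:1, m, m);               % average of neighbours
D1 = spdiags([-e/(2*dx), zeros(m,1), e/(2*dx)], -1:1, m, m);    % first derivative
D2 = spdiags([e/dx^2, -2*e/dx^2, e/dx^2], -1:1, m, m);          % second derivative
% The first and last rows are not used as difference formulas; boundary
% values are imposed directly through the Dirichlet conditions.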

The numerical version of the problem of evolving from a time $t$ to $t + dt$ is to solve
$$\frac{u - 2 u^{(1)} + u^{(0)}}{dt^2} + \alpha D_2 u + \beta u + \gamma u^k = f,$$
where $u^{(0)}$ and $u^{(1)}$ in the equation are the values of $u$ at the previous and present time steps and $u$ is the desired value at the next time. We can put the solution of this problem in terms of minimizing a functional via steepest descent. Define
$$r(u) = \frac{u - 2 u^{(1)} + u^{(0)}}{dt^2} + \alpha D_2 u + \beta u + \gamma u^k - f,$$
where $r(u)$ represents the residual of the equation; when it approaches zero we have the desired $u$. The least-squares functional associated with the residual is given by
$$F(u) = \frac{1}{2} \sum_{i} \bigl( r(u) \bigr)_i^2 \, dx.$$
$F$ has a minimum of nearly zero when $r(u)$ approaches zero, so we will look for the minimum of this functional.
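In code, the residual and the functional for the cubic case ($k = 3$) can be written as function handles; this is a sketch under the discretization written above, with the two known time levels and the right-hand side passed in as vectors:

% Residual and least-squares functional for one implicit time step (k = 3).
% u0, u1 : solution at the previous and present time levels (m-by-1 vectors)
% fn     : right-hand side f at the new time level
residual = @(u, u0, u1, fn, D2, dt, alpha, beta, gamma) ...
    (u - 2*u1 + u0)/dt^2 + alpha*(D2*u) + beta*u + gamma*u.^3 - fn;
Ffun = @(r, dx) 0.5*dx*sum(r.^2);    % discrete least-squares functional F(u)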

3.1. Gradients and Minimization

The gradient of a functional $F$ in $L^2$ (that is, with respect to the Euclidean inner product) is found by solving
$$F'(u)v = \langle v, \nabla F(u) \rangle$$
for test functions $v$. The gradient points in the direction of greatest increase of the functional; the direction of greatest decrease is $-\nabla F(u)$. This is the cornerstone of steepest descent algorithms.

We replace $u$ by $u - \lambda \nabla F(u)$ to reduce $F(u)$, where $\lambda$ is a positive number. This process is repeated until $F(u)$ or $\|\nabla F(u)\|$ is reduced by a fair margin. To satisfy the boundary conditions in the finite-dimensional analogue of the original problem we use a projection $P$ which projects vectors in $\mathbb{R}^m$ onto the subspace in which the first and last entries are zero. We use $P \nabla F(u)$ instead of $\nabla F(u)$. In this particular case, $P \nabla F(u)$ gives the desired gradient in $\mathbb{R}^m$.
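A minimal MATLAB sketch of this projected update follows; the iterate, the gradient, and the step size are placeholders for quantities computed in the actual solver:

% Projection that zeroes the boundary entries, and one projected descent step.
m      = 101;
u      = zeros(m,1);                 % current iterate (placeholder values)
gE     = randn(m,1);                 % Euclidean gradient of F at u (placeholder)
lambda = 1e-3;                       % positive step size
P      = speye(m);  P(1,1) = 0;  P(m,m) = 0;    % keep Dirichlet values fixed
u      = u - lambda*(P*gE);          % descent step that does not disturb the boundary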

For a nonlinear problem like the KGE the Euclidean ($L^2$) gradient is not a good choice. Standard steepest descent using this gradient becomes inefficient when the spacing of the grid is refined and also when the dimensions of the problem are increased, as it suffers from a CFL-type restriction: the step size should not be taken bigger than some computable quantity which depends on the discretization.

Rather than abandoning steepest descent, the gradient is reconsidered in a different Sobolev space. We consider the Sobolev space $H^1$, which is $\mathbb{R}^m$ equipped with the inner product
$$\langle u, v \rangle_S = \langle u, v \rangle + \langle D_1 u, D_1 v \rangle.$$
It can be shown that the space defined by this inner product is complete. The desired Sobolev gradient $\nabla_S F(u)$ in $H^1$ is found by solving
$$F'(u)v = \langle v, \nabla_S F(u) \rangle_S$$
for test functions $v$; equivalently, $(I + D_1^{T} D_1)\, \nabla_S F(u) = \nabla F(u)$.
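Computationally, each descent step therefore needs one sparse linear solve. The sketch below assumes the inner product written above, so that the metric matrix is $A = I + D_1^{T} D_1$:

% Sobolev gradient obtained from the Euclidean gradient by one sparse solve.
m  = 101;  dx = 2/(m - 1);
e  = ones(m,1);
D1 = spdiags([-e/(2*dx), zeros(m,1), e/(2*dx)], -1:1, m, m);
A  = speye(m) + D1'*D1;      % symmetric positive definite metric of the Sobolev space
gE = randn(m,1);             % Euclidean gradient of F (placeholder)
gS = A \ gE;                 % Sobolev gradient: the smoothed search direction

Because the solve with $A$ damps the highly oscillatory components of $\nabla F$, the resulting direction typically tolerates a much larger step size $\lambda$, consistent with the behaviour reported in Section 4.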

4. Test Problems and Numerical Results

Numerical experiments for the solution of the nonlinear KGE were performed as follows. A system of $m$ nodes was set up with internodal spacing $dx$. The function $u$ was then evolved in time. The programs were terminated when the infinity norm of the residual function fell below a prescribed tolerance.
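Structurally, the experiments follow the loop sketched below; residual and sobolev_gradient stand for the computations described in Section 3, and the tolerance, step size, and number of time steps shown are placeholders rather than the values used in the experiments:

% Outline of the evolution: one nonlinear least-squares minimization per time step.
tol = 1e-8;  lambda = 1;  nsteps = 5;
for n = 1:nsteps
    u = 2*u1 - u0;                     % extrapolated starting guess for the new level
    r = residual(u, u0, u1);           % residual of the discrete equation
    while norm(r, inf) >= tol
        gS = sobolev_gradient(u, r);   % projected Sobolev gradient of F at u
        u  = u - lambda*gS;            % steepest descent update
        r  = residual(u, u0, u1);
    end
    u0 = u1;  u1 = u;                  % shift the time levels and continue
end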

Example 4. In this example we consider the nonlinear KGE with cubic nonlinearity ($k = 3$). The constants $\alpha$, $\beta$, and $\gamma$ and the initial conditions are chosen so that the problem has a known analytical solution, from which the initial and boundary data are taken.

Example 4 is solved by using both the Euclidean and the Sobolev gradients, with a fixed time increment $dt$. For five time steps, the results are summarized in Table 1. Here the reported step size is the maximum $\lambda$ for which convergence could be obtained with the Euclidean and with the Sobolev gradient, Iterations denotes the total number of minimization steps needed to reach convergence, and CPU(s) denotes the time in seconds taken to reach convergence.

From Table 1 it can be seen that, as the number of nodes increases, the Euclidean gradient becomes increasingly inefficient, but this is not the case for the Sobolev gradient.

The graphs of the analytical solution and the numerical solution obtained by the Sobolev gradient method are shown in Figures 1 and 2. Figures 3 and 4 show surface plots of the obtained solution.

In Tables 2 and 3, a comparison is shown between the Sobolev gradient method and the LBM [20]. The $L_2$, $L_\infty$, and Root-Mean-Square (RMS) errors are reported at several time levels.

The results in Tables 2 and 3 show that the proposed method is more accurate than the LBM.

In Table 4 the RMS errors obtained by the Sobolev gradient method are compared with those of the RBF and SMLP methods given in [11, 21]. The values of the energy $E(t)$ obtained by the proposed method at different time levels are also given.

Table 4 shows that, in terms of accuracy, the results obtained by the proposed method are comparable with those of other numerical schemes, and the energy remains invariant across time levels.

Example 5. Consider the nonlinear KGE with quadratic nonlinearity ($k = 2$). The constants $\alpha$, $\beta$, and $\gamma$, the initial conditions, and the right-hand side function $f$ are chosen so that the problem has a known analytical solution, and the boundary function is obtained from this exact solution.

The graphs of the approximate and exact solutions at a fixed time are shown in Figure 5, and the space-time graph obtained with the Sobolev gradient method is shown in Figure 6.

Table 5 shows the comparison between the Sobolev gradient method and the LBM [20]. The $L_2$, $L_\infty$, and Root-Mean-Square (RMS) errors are also reported at several time levels.

Once again the results show that the proposed method is more accurate than the LBM.

5. Summary and Conclusions

In this paper, we proposed a numerical scheme to solve the nonlinear KGE using Sobolev gradients. Table 1 shows that standard steepest descent using the Euclidean gradient compels us to choose a very small step size $\lambda$, leading to a huge number of iterations and sometimes a failure to reach convergence.

Steepest descent defined in an appropriate Sobolev space, on the other hand, allows a much larger $\lambda$ and requires fewer iterations. The Sobolev gradient technique is therefore preferable to the usual steepest descent technique as the spacing of the grid is made finer.

Sobolev gradient techniques may offer definite benefits in certain cases; for example, the choice of initial guess does not affect convergence of the method. The results of two examples are compared with the LBM of [20]. Our scheme uses a large time step compared with the LBM and gives better numerical results in terms of accuracy. In these two examples the results show that, for larger values of the time $t$, the Sobolev gradient approach performs better than the LBM. The choice of an optimal metric in the Sobolev space could further improve the numerical results, but how to choose such a metric remains an important open question.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The first author is thankful to the University of Punjab for providing a research grant via D/605/Est. 1 dated 24-02-2015.