Abstract

Critical points of singularly perturbed reaction-diffusion models are calculated using a weighted Sobolev gradient method in a finite element setting. The performance of different Sobolev gradients is discussed for varying values of the diffusion coefficient. A comparison between the weighted and unweighted Sobolev gradients is presented in two and three dimensions. The superiority of the method is also demonstrated by a comparison with Newton's method.

1. Introduction

Many problems in biology and chemistry can be represented in terms of partial differential equations (PDEs), and reaction-diffusion problems form one of the important classes of such models. Much attention has been devoted to the solution of these problems. The literature shows that numerical solutions can be computed provided the diffusion coefficients, reaction excitations, and initial and boundary data are given in a deterministic way. The solution of these PDEs is extremely challenging when they exhibit singularly perturbed behavior. In this paper, we discuss the numerical solution of a problem of the form

$\epsilon \Delta u = f(u) \quad \text{in } \Omega,$ (1)

where $\epsilon$ is a small and strictly positive parameter called the diffusion coefficient, $f$ is a nonlinear reaction term, and $\Omega$ is some two- or three-dimensional region. Dirichlet boundary conditions are used to solve the equation. Numerous numerical algorithms have been designed to solve these kinds of systems [1, 2]. We also use numerical techniques based on the Sobolev gradient methods to solve these systems. A weighted Sobolev gradient approach is presented, which provides an iterative method for nonlinear elliptic problems.

Weighted Sobolev gradients [3] have been used for the solution of nonlinear singular differential equations. It is shown in [4] that significant improvement can be achieved by careful consideration of the weighting. Numerous Sobolev norms can be used as preconditioning strategies. In Sobolev gradient methods, linear operators are formed to improve the condition number in the steepest descent minimization process. The efficiency of Sobolev gradient methods has been demonstrated in many situations, for example, in physics [4–11], image processing [12, 13], geometric modelling [14], material sciences [15–20], differential-algebraic equations (DAEs) [21], and the solution of integro-differential equations [22].

We refer the reader to [23] for motivation and background on Sobolev gradients. For some applications and open problems in this area, an interesting article has been written by Renka and Neuberger [24]. For the computational comparisons, an Intel 1.4 GHz Core i3 machine with 1 GB RAM was used. All the programs were written in FreeFem++ [25], which is freely available software for solving these kinds of problems.

The paper is organized as follows. In Section 2, we introduce the basic Sobolev gradient approach; this section contains some important results on the existence and convergence of the solution. In Section 3, we derive expressions for the Sobolev and weighted Sobolev gradients. Section 4 is composed of numerical results, and Section 5 presents a comparison with Newton's method. The summary and conclusions are given in Section 6.

2. Sobolev Gradient Approach

This section is devoted to showing how Sobolev gradient methods work. A detailed analysis of the construction of Sobolev gradients can be found in [23], and the outline below follows that reference.

Suppose that $n$ is a positive integer and $\phi$ is a real-valued $C^1$ function on $\mathbb{R}^n$. The gradient $\nabla\phi$ is defined by

$\phi'(x)h = \lim_{t \to 0} \frac{\phi(x + th) - \phi(x)}{t} = \langle h, (\nabla\phi)(x) \rangle_{\mathbb{R}^n}, \quad x, h \in \mathbb{R}^n.$ (2)

For $\phi$ as in (2) but with an inner product $\langle \cdot, \cdot \rangle_S$ on $\mathbb{R}^n$ different from the usual inner product $\langle \cdot, \cdot \rangle_{\mathbb{R}^n}$, there is a function $\nabla_S \phi$ which satisfies

$\phi'(x)h = \langle h, (\nabla_S \phi)(x) \rangle_S, \quad x, h \in \mathbb{R}^n.$ (3)

The linear functional $\phi'(x)$ can be represented using any inner product on $\mathbb{R}^n$. We say that $\nabla_S \phi$ is the gradient of $\phi$ with respect to the inner product $\langle \cdot, \cdot \rangle_S$ and note that $\nabla_S \phi$ has the same properties as $\nabla\phi$.

The two inner products are related by a positive definite linear transformation $A$:

$\langle x, y \rangle_S = \langle x, Ay \rangle_{\mathbb{R}^n}, \quad x, y \in \mathbb{R}^n,$ (4)

and consequently the two gradients are related by

$(\nabla_S \phi)(x) = A^{-1} (\nabla\phi)(x), \quad x \in \mathbb{R}^n.$ (5)

More generally, to each Hilbert space $H$ an inner product $\langle \cdot, \cdot \rangle_H$ is associated. Therefore, for a differentiable $\phi$ on $H$, one can define $\nabla_H \phi$ such that

$\phi'(x)h = \langle h, (\nabla_H \phi)(x) \rangle_H, \quad x, h \in H.$ (7)

So, corresponding to each metric, we can define a gradient of a function $\phi$, and these gradients may have vastly different numerical properties. Therefore, the choice of metric plays a vital role in the steepest descent minimization process. A gradient of a function is called a Sobolev gradient when the underlying space is a Sobolev space. Readers who are unfamiliar with Sobolev spaces are referred to [26]. Steepest descent can be considered both in discrete and in continuous versions.
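As a concrete illustration (our own, not taken from [23]), let $n = 2$ and weight the second coordinate:

$A = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}, \quad \phi(x) = x_1^2 + x_2^2, \quad (\nabla\phi)(x) = (2x_1, 2x_2)^T, \quad (\nabla_S \phi)(x) = A^{-1}(\nabla\phi)(x) = (2x_1, \tfrac{1}{2}x_2)^T.$

One checks directly that $\langle h, (\nabla_S \phi)(x) \rangle_S = 2x_1 h_1 + 2x_2 h_2 = \phi'(x)h$, so both vectors represent the same derivative, yet they point in different directions and generate different descent paths.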

Suppose $\phi$ is a real-valued function defined on a Hilbert space $H$ and $\nabla_S \phi$ is its gradient with respect to the inner product $\langle \cdot, \cdot \rangle_S$ defined on $H$. Discrete steepest descent is a procedure of establishing a sequence $\{x_k\}$ such that, for a given $x_0$, we have

$x_{k+1} = x_k - \delta_k (\nabla_S \phi)(x_k), \quad k = 0, 1, 2, \ldots,$ (8)

where, for each $k$, $\delta_k > 0$ is chosen in a way that it minimizes, if possible,

$\phi\bigl(x_k - \delta_k (\nabla_S \phi)(x_k)\bigr).$

In continuous steepest descent, a function $z : [0, \infty) \to H$ is constructed so that

$z(0) = x_0, \quad z'(t) = -(\nabla_S \phi)(z(t)), \quad t \ge 0.$ (10)

Under suitable conditions on $\phi$, $u = \lim_{t \to \infty} z(t)$ exists, and $\phi(u)$ is the minimum value of $\phi$.
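A minimal example of our own makes the two versions concrete. Take $\phi(x) = \frac{1}{2}x^2$ on $H = \mathbb{R}$, so that $(\nabla\phi)(x) = x$:

$z'(t) = -z(t) \ \Rightarrow\ z(t) = x_0 e^{-t} \to 0, \qquad x_{k+1} = (1 - \delta_k) x_k \ \text{with optimal } \delta_k = 1 \ \Rightarrow\ x_1 = 0,$

so continuous descent flows to the minimizer, while discrete descent with an exact line search reaches it in one step.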

So we conclude that discrete steepest descent (8) can be considered a numerical method for approximating solutions of (10). Continuous steepest descent provides a theoretical starting point for proving convergence of discrete steepest descent.

Using (8), one seeks $u = \lim_{k \to \infty} x_k$ so that

$(\nabla_S \phi)(u) = 0.$ (11)

Using (10), we seek $u = \lim_{t \to \infty} z(t)$ so that (11) holds.

For the solution of PDEs, we can formulate a functional $\phi$ by a variational principle such that $u$ satisfies the given PDE if and only if $u$ is a critical point of $\phi$.

To find a zero of the gradient of $\phi$ we use the steepest descent minimization process. The energy functional associated with (1) is given by

$E(u) = \int_\Omega \Bigl( \frac{\epsilon}{2} |\nabla u|^2 + F(u) \Bigr) \, dx, \quad \text{where } F' = f.$ (12)

For the solution of PDEs, other functionals are also possible, and one of the prime examples in this direction is the least-squares formulation; such functionals are shown in [11, 18]. The existence and convergence of $z(t)$ for different linear and nonlinear forms of $\phi$ are discussed in [23].

Here we quote a result from [23] on the convergence of continuous steepest descent under a convexity assumption.

Theorem 1 (see [23]). Let $H$ be a Hilbert space and let $\phi$ be a nonnegative $C^1$ function defined on $H$ for which there is $c > 0$ such that

$\|(\nabla\phi)(x)\| \ge c \, \phi(x)^{1/2}, \quad x \in H.$ (13)

Suppose also that (10) holds with $z(0) = x_0 \in H$. Then $u = \lim_{t \to \infty} z(t)$ exists and $\phi(u) = 0$.
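The role of the gradient inequality (13) can be seen from a short standard estimate, which we sketch here for completeness. Along the trajectory of (10),

$\frac{d}{dt} \phi(z(t)) = \langle (\nabla\phi)(z(t)), z'(t) \rangle = -\|(\nabla\phi)(z(t))\|^2 \le -c^2 \, \phi(z(t)),$

so $\phi(z(t)) \le \phi(x_0) e^{-c^2 t} \to 0$; the remaining work in [23] is to show that $z(t)$ itself converges.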

In this paper, only discrete steepest descent is used, and for the numerical computations, discretized versions of the functionals are considered.

3. Gradients and Minimization

To each inner product there corresponds a gradient and hence a descent direction. The performance of the steepest descent minimization process can be improved by defining gradients in a suitably chosen Sobolev space. How to choose the inner product best suited to a given problem is still an open question.

The Sobolev space $H^1(\Omega)$ is defined as

$H^1(\Omega) = \{ u \in L^2(\Omega) : D^\alpha u \in L^2(\Omega), \ |\alpha| \le 1 \},$ (14)

where $D^\alpha u$ denotes the weak derivative of $u$ of order $\alpha$. $H^1(\Omega)$ is a Hilbert space with the norm defined by

$\|u\|_{H^1} = \Bigl( \int_\Omega |u|^2 + |\nabla u|^2 \, dx \Bigr)^{1/2}.$ (15)

Inspired by Mahavier's work on weighted Sobolev gradients [3], we define a new norm

$\|u\|_{H^1_\epsilon} = \Bigl( \int_\Omega |u|^2 + \epsilon |\nabla u|^2 \, dx \Bigr)^{1/2}.$ (16)

This norm takes care of the parameter $\epsilon$, which multiplies the derivative term in (24). We call the Sobolev space with the norm (16) a weighted Sobolev space.
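Both norms are straightforward to evaluate in FreeFem++ by quadrature. The following is a minimal sketch of our own (the mesh, the value of eps, and the test function are illustrative placeholders, not the settings used for the experiments):

    // Minimal sketch: unweighted (15) and weighted (16) Sobolev norms
    mesh Th = square(20, 20);              // unit square with a 20x20 grid
    fespace Vh(Th, P1);                    // piecewise linear elements
    Vh u = sin(pi*x)*sin(pi*y);            // any test function
    real eps = 1e-3;                       // illustrative diffusion coefficient
    real nrmH1  = sqrt( int2d(Th)( u*u + dx(u)*dx(u) + dy(u)*dy(u) ) );
    real nrmH1e = sqrt( int2d(Th)( u*u + eps*(dx(u)*dx(u) + dy(u)*dy(u)) ) );
    cout << "H1 norm = " << nrmH1 << ", weighted H1 norm = " << nrmH1e << endl;

For small $\epsilon$ the weighted norm is dominated by its $L^2$ part, which is one way to see why the weighted metric tolerates the steep layers that arise in singularly perturbed problems.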

We now show that the weighted Sobolev space with the norm defined by (16), denoted by $H^1_\epsilon(\Omega)$, is a Hilbert space. First we show that it satisfies the properties of an inner product space, and then we show that it is complete.

Let us denote by $\langle \cdot, \cdot \rangle_{H^1_\epsilon}$ the inner product associated with the norm defined by (16):

$\langle u, v \rangle_{H^1_\epsilon} = \langle u, v \rangle + \epsilon \langle \nabla u, \nabla v \rangle,$ (17)

where $\langle \cdot, \cdot \rangle$ is the usual inner product in $L^2(\Omega)$. Bilinearity and symmetry are immediate, and since $\epsilon > 0$, if $\langle u, u \rangle_{H^1_\epsilon} = 0$, then $u = 0$. Let $\{u_n\}$ be a Cauchy sequence in $H^1_\epsilon(\Omega)$. Then $\{u_n\}$ and $\{\nabla u_n\}$ are Cauchy sequences in $L^2(\Omega)$; since $L^2(\Omega)$ is complete, there exists $u \in L^2(\Omega)$ such that $u_n \to u$ in $L^2(\Omega)$, and also $\nabla u_n$ converges in $L^2(\Omega)$ to the weak gradient $\nabla u$.

Therefore $u_n \to u$ in $H^1_\epsilon(\Omega)$. Hence, $H^1_\epsilon(\Omega)$ is complete with respect to the norm defined by (16).

To incorporate the Dirichlet boundary conditions, we define a subspace of perturbations

$H_0 = \{ u \in H : u = 0 \text{ on } \partial\Omega \},$

where $\partial\Omega$ denotes the boundary of the domain $\Omega$. The perturbation subspaces related to $H^1(\Omega)$ and $H^1_\epsilon(\Omega)$ are denoted by $H^1_0(\Omega)$ and $H^1_{\epsilon,0}(\Omega)$, respectively.

We need to find the gradient of a functional associated with the original problem and then to find a zero of that gradient.

Given an energy functional $E$, the Fréchet derivative of $E$ at $u$ is the bounded linear functional $E'(u)$ defined by

$E'(u)h = \lim_{t \to 0} \frac{E(u + th) - E(u)}{t}.$ (23)

The energy functional in our case is given by

$E(u) = \int_\Omega \Bigl( \frac{\epsilon}{2} |\nabla u|^2 + F(u) \Bigr) \, dx;$ (24)

then, according to (23), we have

$E'(u)h = \lim_{t \to 0} \frac{1}{t} \int_\Omega \Bigl( \frac{\epsilon}{2} \bigl( |\nabla(u + th)|^2 - |\nabla u|^2 \bigr) + F(u + th) - F(u) \Bigr) \, dx.$

Simplifying and applying the limit, we have

$E'(u)h = \int_\Omega \bigl( \epsilon \, \nabla u \cdot \nabla h + f(u) h \bigr) \, dx.$ (26)

Let $\nabla E$, $\nabla_{H^1} E$, and $\nabla_{H^1_\epsilon} E$ denote the gradients in $L^2(\Omega)$, $H^1(\Omega)$, and $H^1_\epsilon(\Omega)$, respectively. By using (7) we can write

$E'(u)h = \langle h, (\nabla E)(u) \rangle_{L^2}.$ (27)

Thus, integrating (26) by parts, the gradient in $L^2(\Omega)$ is

$(\nabla E)(u) = -\epsilon \Delta u + f(u).$

To satisfy the boundary conditions, we look for gradients that are zero on the boundary of $\Omega$. So, instead of using $\nabla_{H^1} E$, we use $\pi \nabla_{H^1} E$, where $\pi$ is a projection that sets the values of a test function to zero on the boundary of the system. For implementing the Sobolev gradient method, the software FreeFem++ [25] is used, which is designed to solve PDEs using the finite element method. Therefore, to find $\pi (\nabla_{H^1} E)(u)$ we need to solve the following: for $u$ in $H^1(\Omega)$, find $g$ in $H^1_0(\Omega)$ such that

$\int_\Omega \bigl( g h + \nabla g \cdot \nabla h \bigr) \, dx = \int_\Omega \bigl( \epsilon \, \nabla u \cdot \nabla h + f(u) h \bigr) \, dx \quad \text{for all } h \in H^1_0(\Omega).$
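In FreeFem++ this weak problem takes only a few lines. The sketch below is our own illustration, not the authors' code: the circular domain, the value of eps, the step size, and the cubic reaction term f(u) = u*u*u - u are all assumptions made for the example.

    // Sketch: one unweighted Sobolev (H1) gradient step for a reaction-diffusion energy
    border outer(t=0, 2*pi){ x = cos(t); y = sin(t); label = 1; }
    mesh Th = buildmesh(outer(100));       // circular domain, 100 boundary nodes
    fespace Vh(Th, P1);
    Vh u = 0, g, h;                        // iterate, gradient, test function
    real eps = 1e-3;                       // assumed diffusion coefficient
    macro f(w) (w*w*w - w) //              // assumed cubic reaction term
    // <h,g>_H1 = E'(u)h for all h vanishing on the boundary
    problem H1grad(g, h) =
          int2d(Th)( g*h + dx(g)*dx(h) + dy(g)*dy(h) )               // <h,g>_H1
        - int2d(Th)( eps*(dx(u)*dx(h) + dy(u)*dy(h)) + f(u)*h )      // E'(u)h
        + on(1, g = 0);                    // g = 0 on the boundary (projection)
    H1grad;                                // solve for the gradient g
    g[] *= 0.1;                            // assumed fixed step size
    u[] -= g[];                            // one steepest descent update

The on(1, g = 0) clause is what realizes the projection $\pi$ onto the perturbation subspace.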

This gradient suffers from CFL-type step size restrictions, something well known to numerical analysts. By using (15) and (27), we can relate the unweighted Sobolev gradient and the $L^2$ gradient in the weak form as

$\langle h, (\nabla_{H^1} E)(u) \rangle_{H^1} = \langle h, (\nabla E)(u) \rangle_{L^2} \quad \text{for all } h \in H^1_0(\Omega).$

Similarly, using (16) and (27), the weighted Sobolev gradient can be related to the $L^2$ gradient:

$\langle h, (\nabla_{H^1_\epsilon} E)(u) \rangle_{H^1_\epsilon} = \langle h, (\nabla E)(u) \rangle_{L^2} \quad \text{for all } h \in H^1_{\epsilon,0}(\Omega).$

In order to find the gradients, we represent the above systems in weak formulation as

$\int_\Omega \bigl( g h + \epsilon \, \nabla g \cdot \nabla h \bigr) \, dx = \int_\Omega \bigl( \epsilon \, \nabla u \cdot \nabla h + f(u) h \bigr) \, dx \quad \text{for all } h \in H^1_{\epsilon,0}(\Omega),$ (35)

where $g = (\nabla_{H^1_\epsilon} E)(u)$; the unweighted case is obtained by replacing $\epsilon$ with 1 on the left-hand side.
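Relative to the unweighted problem sketched above, the only change is the factor eps on the derivative term of the inner product. Reusing the definitions from the previous listing:

    // Weighted Sobolev gradient: weighted inner product, same right-hand side
    problem H1epsGrad(g, h) =
          int2d(Th)( g*h + eps*(dx(g)*dx(h) + dy(g)*dy(h)) )         // weighted <h,g>
        - int2d(Th)( eps*(dx(u)*dx(h) + dy(u)*dy(h)) + f(u)*h )      // E'(u)h
        + on(1, g = 0);

Because the same $\epsilon$ scales the derivative terms on both sides of (35), the resulting descent direction stays well scaled as $\epsilon \to 0$, which is one way to see why the weighting helps for small diffusion coefficients.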

It is seen that, when using the weighted Sobolev gradient, the step size does not have to be reduced as the numerical grid becomes finer, and the number of minimization steps remains reasonable. At the same time, the conceptual simplicity and elegance of the steepest descent algorithm are retained.

4. Numerical Results

Let us consider a two-dimensional domain in the form of a circle with centre at the origin, from which an elliptic region with border $\Gamma$ is removed. To start the minimization process, we specify an initial state $u_0$, and the boundary condition fixes one constant value of $u$ on the outer boundary and another on the inner boundary. We also fix the diffusion coefficient $\epsilon$ and the time step $\Delta t$.

For the implementation in FreeFem++, one needs to specify the number of nodes on each border. After that, a mesh is formed by triangulating the region.

FreeFem++ then solves equations of the type given by (35), which are used to compute the gradients in $H^1(\Omega)$ and $H^1_\epsilon(\Omega)$. We performed numerical experiments by specifying different numbers of nodes on each border. For each time step, the functional defined by (24) was minimized using steepest descent with both $\nabla_{H^1} E$ and $\nabla_{H^1_\epsilon} E$ until the infinity norm of the gradient was less than the set tolerance. The system was evolved over the prescribed number of time steps. The results are reported in Table 1.
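A condensed driver in the spirit of these experiments is sketched below, reusing H1epsGrad, u, and g from the earlier listings; the tolerance and iteration cap are placeholders of ours, not the values used for the tables.

    // Sketch of the minimization driver: descend until the gradient is small
    real tol = 1e-6;                       // placeholder stopping tolerance
    int maxSteps = 100000;                 // safety cap on descent steps
    for (int k = 0; k < maxSteps; k++) {
        H1epsGrad;                         // recompute the weighted gradient g
        if (g[].linfty < tol) break;       // stop when ||g||_inf is small
        g[] *= 0.1;                        // fixed illustrative step size
        u[] -= g[];                        // steepest descent update
    }
    plot(u, wait=1);                       // inspect the computed state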

Table 1 shows that, as the mesh is refined, the weighted Sobolev gradient becomes more and more efficient compared with the unweighted Sobolev gradient. It is also observed that, as $\epsilon$ decreases, the performance of steepest descent in $H^1_\epsilon(\Omega)$ is better than in $H^1(\Omega)$.

Let us now consider a three-dimensional domain in the form of a cube with center at the origin. To start the minimization process, we specify an initial state, and boundary values were set on the top and bottom faces and on the front, back, left, and right faces of the cube. We also fix $\epsilon$ and the time step. Once again, the functional was minimized using steepest descent with both $\nabla_{H^1} E$ and $\nabla_{H^1_\epsilon} E$ until the infinity norm of the gradient was less than the set tolerance, and the same software was used. Numerical experiments were performed by varying the number of nodes specified on each border. The results are recorded in Table 2.
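The 3D case needs only the msh3 tools and int3d in place of int2d. A sketch under the same illustrative assumptions as before follows (note that the default face labels of cube are an assumption here and should be checked against the FreeFem++ documentation):

    // 3D variant (sketch): weighted Sobolev gradient on a cube
    load "msh3"
    mesh3 Th3 = cube(20, 20, 20);          // unit cube; face labels 1..6 assumed
    fespace Wh(Th3, P1);
    Wh u3 = 0, g3, h3;                     // iterate, gradient, test function
    real eps3 = 1e-3;                      // assumed diffusion coefficient
    macro fr(w) (w*w*w - w) //             // assumed reaction term, as in 2D
    macro Grad(w) [dx(w), dy(w), dz(w)] // // 3D gradient of a scalar field
    problem H1epsGrad3(g3, h3) =
          int3d(Th3)( g3*h3 + eps3*(Grad(g3)'*Grad(h3)) )
        - int3d(Th3)( eps3*(Grad(u3)'*Grad(h3)) + fr(u3)*h3 )
        + on(1, 2, 3, 4, 5, 6, g3 = 0);    // zero on all six faces
    H1epsGrad3;                            // one weighted gradient evaluation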

By refining the mesh, the efficiency of the weighted Sobolev gradient, in terms of both the minimization steps taken for convergence and the CPU time, increases. As the value of $\epsilon$ is reduced, the advantage of steepest descent in $H^1_\epsilon(\Omega)$ over $H^1(\Omega)$ becomes more pronounced.

5. Comparison with Newton's Method

In this section, a comparison is given between Newton's method and the weighted Sobolev gradient method. Newton's method is considered one of the optimal methods for solving these kinds of problems, but its convergence requires a good initial guess. For the comparison between the two methods, we therefore take a good initial guess when implementing Newton's method, so that the two methods can be compared fairly. In variational form, the given nonlinear problem can be written as

$E'(u)h = \int_\Omega \bigl( \epsilon \, \nabla u \cdot \nabla h + f(u) h \bigr) \, dx = 0 \quad \text{for all } h \in H^1_0(\Omega).$ (36)

To apply Newton's method, we need to find the Gâteaux derivative, which is defined by

$E''(u)(w, h) = \int_\Omega \bigl( \epsilon \, \nabla w \cdot \nabla h + f'(u) \, w h \bigr) \, dx.$ (37)

A linear solver is required at each step. Thus Newton's iteration scheme reads

$u_{k+1} = u_k - w_k, \quad \text{where } w_k \in H^1_0(\Omega) \text{ solves } E''(u_k)(w_k, h) = E'(u_k)h \ \text{for all } h \in H^1_0(\Omega).$ (38)
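A FreeFem++ sketch of this iteration, under the same illustrative assumptions as before (cubic reaction term, so f'(w) = 3w^2 - 1; Th, Vh, u, eps, and f as in the 2D listings):

    // Newton's method: solve E''(u)(w,v) = E'(u)v for the update w
    Vh w, v;
    macro df(s) (3.0*s*s - 1.0) //         // derivative of the assumed f
    problem newtonStep(w, v) =
          int2d(Th)( eps*(dx(w)*dx(v) + dy(w)*dy(v)) + df(u)*w*v )   // E''(u)(w,v)
        - int2d(Th)( eps*(dx(u)*dx(v) + dy(u)*dy(v)) + f(u)*v )      // E'(u)v
        + on(1, w = 0);
    for (int k = 0; k < 50; k++) {
        newtonStep;                        // one linear solve per iteration
        u[] -= w[];                        // u_{k+1} = u_k - w_k
        if (w[].linfty < 1e-10) break;     // placeholder stopping criterion
    }

Each pass performs one linear solve for the update, which is where the cost of assembling and inverting the Hessian, noted in Section 6, enters.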

An example is solved in the two-dimensional case. We let $\Omega$ be in the form of a circle with centre at the origin, from which an elliptic region with border $\Gamma$ is removed. The initial state and the boundary values on the outer and inner boundaries were chosen as before. The value of the time step was fixed, and the system was evolved over the time steps. The minimization was performed until the infinity norm of the gradient was less than a set tolerance. We specify a fixed number of nodes on each border to obtain the results.

Results for different values of $\epsilon$ are recorded in Table 3; the entry NC denotes no convergence. From the table, one can observe that Newton's method initially performs better than steepest descent in $H^1_\epsilon(\Omega)$, but for a strict stopping criterion Newton's method does not converge: it is superior when the minimization process starts but is not able to handle low-frequency errors. On the other hand, steepest descent in $H^1_\epsilon(\Omega)$ takes more iterations, but it continues to converge even for very tight stopping criteria. As the diffusion coefficient $\epsilon$ decreases, steepest descent in $H^1_\epsilon(\Omega)$ always manages to converge, whereas this is not the case for Newton's method.

6. Summary and Conclusions

In this paper, a minimization scheme based on the Sobolev gradient methods [23] is developed for the solution of the problem concerned. The performance of descent in $H^1_\epsilon(\Omega)$ is better than that of descent in $H^1(\Omega)$ as the spacing of the numerical grid is refined. We point out a systematic way to choose the underlying space and demonstrate the importance of weighting for developing efficient codes.

The convergence of Newton's method depends on a good initial guess that is sufficiently close to a local minimum. To implement Newton's method, we need to evaluate the inverse of a Hessian matrix, which can be expensive. A failure of Newton's method has been shown in [11, 18] by taking a large jump discontinuity and by refining the grid, but this is not the case for the steepest descent method. So a broader range of problems can be addressed by using steepest descent in an appropriate Sobolev space.

One of the advantages of the Sobolev gradient methods is that they still manage to converge even if we take a rough initial guess. The weighted Sobolev gradient offers a robust alternative for problems which are highly nonlinear and have large discontinuities. An interesting project would be to compare the performance of Sobolev gradient methods with some multigrid technique in a finite element setting.

Acknowledgments

The authors are thankful for the precise and kind remarks of the learned referees. The first author acknowledges the Higher Education Commission (HEC), Pakistan, for providing a research grant through a postdoctoral fellowship.