Abstract and Applied Analysis

Volume 2015 (2015), Article ID 539652, 12 pages

http://dx.doi.org/10.1155/2015/539652

## A Strongly A-Stable Time Integration Method for Solving the Nonlinear Reaction-Diffusion Equation

Wenyuan Liao

Department of Mathematics and Statistics, University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N 1N4

Received 28 July 2014; Accepted 17 October 2014

Academic Editor: Santanu Saha Ray

Copyright © 2015 Wenyuan Liao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The semidiscrete ordinary differential equation (ODE) system resulting from compact higher-order finite difference spatial discretization of a nonlinear parabolic partial differential equation, for instance, the reaction-diffusion equation, is highly stiff. Therefore numerical time integration methods with stiff stability, such as implicit Runge-Kutta methods and implicit multistep methods, are required to solve the large-scale stiff ODE system. However, those methods are computationally expensive, especially for nonlinear problems. The Rosenbrock method is efficient since it is iteration-free; however, it suffers from order reduction when applied to nonlinear parabolic partial differential equations. In this work we construct a new fourth-order Rosenbrock method to solve the nonlinear parabolic partial differential equation supplemented with a Dirichlet or Neumann boundary condition. We successfully resolve the phenomenon of order reduction, so the new method is fourth-order in time when applied to nonlinear parabolic partial differential equations. Moreover, it is shown that the Rosenbrock method is strongly A-stable, hence suitable for the stiff ODE system obtained from compact finite difference discretization of the nonlinear parabolic partial differential equation. Several numerical experiments are conducted to demonstrate the efficiency, stability, and accuracy of the new method.

#### 1. Introduction

Let us consider the following parabolic partial differential equation:

$$u_t = \alpha u_{xx} + f(u), \quad a < x < b, \ 0 < t \le T, \tag{1}$$

with the initial condition

$$u(x, 0) = \phi(x), \quad a \le x \le b, \tag{2}$$

where $\alpha$ is a positive constant describing the diffusion property and $f(u)$ is a function representing the reaction term, which is nonlinear in $u$. The unknown function $u(x,t)$ represents, depending on the application, variables such as mass concentration in chemical reaction processes, temperature in heat conduction, neutron flux in nuclear reactors, and population density in population dynamics. On the boundary, either the Dirichlet condition

$$u(a, t) = g_1(t), \quad u(b, t) = g_2(t), \tag{3}$$

or the Neumann condition

$$u_x(a, t) = q_1(t), \quad u_x(b, t) = q_2(t), \tag{4}$$

is specified, where $g_1$, $g_2$, $q_1$, and $q_2$ are sufficiently smooth functions. In this paper we restrict our attention to Dirichlet and Neumann boundary conditions, while the developed techniques can be easily extended to Robin boundary conditions.

Efficient and accurate numerical methods for solving (1) have attracted great attention from scientists and engineers, as for many application problems in science and engineering it is preferable to use high-order compact numerical algorithms to compute accurate solutions. In the past several decades a great deal of work has been done on the development of efficient, accurate, and robust numerical algorithms for solving such problems. For more details, the reader is referred to [1–4].

Since both temporal and spatial derivatives are involved in the equation, we discuss the numerical treatments in time and space separately. Here we first apply the high-order compact finite difference approximation to the spatial derivative, so a semidiscrete ODE system is obtained, which is then solved by a fourth-order Rosenbrock method that will be discussed later.

Recently there have been attempts to develop high-order compact schemes for the spatial derivatives. In [5], a three-point combined compact difference scheme was proposed to approximate the first and second derivatives for problems with periodic boundary conditions. The resulting scheme has up to sixth-order accuracy at all grid points, including the boundary nodes, for periodic boundaries; however it is only fourth-order accurate for nonperiodic boundary conditions.

Because the semidiscrete ODE system obtained from spatial discretization (such as the method of lines) of the nonlinear parabolic partial differential equation is highly stiff, the choices of time integration methods are limited to implicit methods. An explicit algorithm is cheap in a single time step but suffers from a strict step size restriction, which makes it less efficient overall. An implicit method, on the other hand, is more expensive in a single step, but its unconditional stability allows the use of larger time steps; hence the overall computational efficiency can be significantly improved. One issue, however, is that the iteration is usually slow when a large step size is used. Also, due to the stiffness of the ODE system, only Newton-type iterative methods are applicable to solve the nonlinear algebraic system. Furthermore, strong A-stability or L-stability of the time integration method is necessary for error damping. A great deal of work has been done on the development of efficient time stepping methods for stiff ODE systems. In [6] explicit exponential Rosenbrock methods of order five were constructed to solve large-scale stiff ODE systems. Through the derivation of stiff order conditions, new pairs of embedded methods of higher order can be obtained. Similarly, fifth-order explicit exponential Runge-Kutta methods were constructed to efficiently integrate semilinear stiff problems in [7]. The authors also showed that there does not exist an explicit exponential Runge-Kutta method of order 5 with 6 or fewer stages; therefore the resulting methods are 8-stage methods. In [8] a fourth-order time stepping method, which is a modification of the exponential time-differencing fourth-order Runge-Kutta method, was developed for stiff ODEs. These methods are efficient and accurate. However, A-stability of the time stepping method is not sufficient for highly stiff problems.
To overcome these difficulties, it is desirable to construct new algorithms with strong A-stability or L-stability that are free of solving nonlinear equations. It turns out that the Rosenbrock method, first reported by Rosenbrock [9] and then improved by Haines [10], addresses these issues with considerably satisfying and promising results.
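The step size restriction for explicit methods can be seen already on the scalar test problem $y' = \lambda y$ with $\lambda \ll 0$. The following sketch (illustrative only, not part of the paper's algorithm) contrasts forward Euler, which blows up once $h > 2/|\lambda|$, with backward Euler, which damps the solution for any $h$:

```python
# Stiffness in practice: forward Euler on y' = lam*y is stable only for
# h < 2/|lam|, while backward Euler is stable for any h. (Illustrative sketch.)
lam = -1000.0
h = 0.01                      # violates the explicit stability bound 2/|lam|
y_fe = y_be = 1.0
for _ in range(100):
    y_fe = y_fe + h * lam * y_fe          # forward Euler: |1 + h*lam| = 9 > 1
    y_be = y_be / (1.0 - h * lam)         # backward Euler: damping factor 1/11
print(abs(y_fe) > 1e6, abs(y_be) < 1e-6)  # → True True
```

Both methods cost one evaluation per step here, yet only the implicit one produces a usable answer at this step size.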

The objective herein is to develop a strongly A-stable Rosenbrock method to solve the semidiscrete stiff ODEs resulting from a compact high-order finite difference approximation of a semilinear parabolic partial differential equation. The rest of the paper is organized as follows. In Section 2, we discretize the spatial derivatives of the semilinear parabolic partial differential equation using a fourth-order compact finite difference scheme, which is then combined with a newly proposed compact fourth-order boundary condition treatment to form the semidiscrete ODE system. In Section 3 we focus on the development of a fourth-order strongly A-stable Rosenbrock method and the stability analysis. Several numerical examples are used to demonstrate the accuracy and efficiency of the new algorithm in Section 4, which is followed by conclusions and possible future work.

#### 2. Compact Fourth-Order Spatial Discretization

For the sake of simplicity, we assume that the 1D spatial domain $[a, b]$ is divided into $N$ subintervals of equal length $h = (b - a)/N$. Let $x_i = a + ih$, $i = 0, 1, \dots, N$, be the grid points. A variety of compact high-order discretizations can be utilized to approximate the second derivative in (1).

Here we introduce a compact finite difference scheme to approximate $u_{xx}$, such that the resulting semidiscrete ODE system is an accurate and compact approximation to the original semilinear parabolic partial differential equation. This operator-approximation based method has been widely used to solve various multidimensional problems. We first define the central finite difference operator as

$$\delta_x^2 u_i = \frac{u_{i+1} - 2u_i + u_{i-1}}{h^2}; \tag{5}$$

then $\delta_x^2 u_i$ gives a second-order accurate approximation to $u_{xx}(x_i)$. Using Taylor series to expand all terms on the right-hand side of (5), under the assumption that $u$ is sufficiently smooth, we have

$$\delta_x^2 u_i = u_{xx}(x_i) + \frac{h^2}{12} u_{xxxx}(x_i) + O(h^4). \tag{6}$$

To improve the above finite difference approximation to fourth-order accuracy, one just needs to eliminate the second-order error term. Applying $\delta_x^2$ to both sides of (6), we have

$$\delta_x^2\left(\delta_x^2 u_i\right) = u_{xxxx}(x_i) + O(h^2). \tag{7}$$

Combining (6) with (7) and neglecting the $O(h^4)$ terms, we obtain the following fourth-order accurate approximation to $u_{xx}$ at node $x_i$:

$$u_{xx}(x_i) = \left(1 - \frac{h^2}{12}\delta_x^2\right)\delta_x^2 u_i + O(h^4). \tag{8}$$

The drawback is that a five-point stencil is required; therefore the compactness is destroyed and the method becomes less efficient. Further investigation shows that the difference between $\left(1 - \frac{h^2}{12}\delta_x^2\right)\delta_x^2$ and $\frac{\delta_x^2}{1 + \frac{h^2}{12}\delta_x^2}$ is $O(h^4)$, so a natural way is to approximate $u_{xx}(x_i)$ by $\frac{\delta_x^2}{1 + \frac{h^2}{12}\delta_x^2} u_i$, which is fourth-order accurate and compact.
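The fourth-order accuracy of the compact approximation can be checked numerically. The sketch below (an illustration with hypothetical helper names, assuming the second derivative is known at the two boundary nodes) solves the tridiagonal system $(1 + \frac{h^2}{12}\delta_x^2) w_i = \delta_x^2 u_i$ by the Thomas algorithm and measures the convergence rate for $u = \sin x$ on $[0, \pi]$:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, super-diagonals a, b, c."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def compact_d2(u, h, d2_left, d2_right):
    """Fourth-order compact second derivative w at interior nodes:
    (w[i-1] + 10 w[i] + w[i+1]) / 12 = (u[i+1] - 2 u[i] + u[i-1]) / h^2,
    with u_xx assumed known at the two boundary nodes."""
    n = len(u) - 2
    rhs = [(u[i + 1] - 2 * u[i] + u[i - 1]) / h**2 for i in range(1, n + 1)]
    rhs[0] -= d2_left / 12
    rhs[-1] -= d2_right / 12
    return thomas([1 / 12] * n, [10 / 12] * n, [1 / 12] * n, rhs)

errs = []
for N in (16, 32):
    h = math.pi / N
    u = [math.sin(i * h) for i in range(N + 1)]
    w = compact_d2(u, h, -math.sin(0.0), -math.sin(math.pi))
    errs.append(max(abs(w[i - 1] + math.sin(i * h)) for i in range(1, N)))
print(errs[0] / errs[1])   # ratio ≈ 2^4 = 16, confirming fourth order
```

Halving $h$ reduces the maximum error by a factor close to $16$, consistent with the $O(h^4)$ claim; only a single tridiagonal solve is needed per evaluation.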

Applying the fourth-order Padé approximation to $u_{xx}$ in (1), we obtain the following ODE system:

$$\frac{du_i}{dt} = \alpha \frac{\delta_x^2}{1 + \frac{h^2}{12}\delta_x^2} u_i + f(u_i), \quad i = 1, \dots, N - 1, \tag{9}$$

which is a fourth-order accurate approximation (in space) to the original semilinear parabolic partial differential equation defined in (1).

However, the above algorithm is difficult to implement, so we multiply $\left(1 + \frac{h^2}{12}\delta_x^2\right)$ on both sides to obtain the following implicit ODE system:

$$\left(1 + \frac{h^2}{12}\delta_x^2\right)\frac{du_i}{dt} = \alpha \delta_x^2 u_i + \left(1 + \frac{h^2}{12}\delta_x^2\right) f(u_i), \tag{10}$$

which can be written in vector form as

$$B\frac{dU}{dt} = AU + F(U), \tag{11}$$

where $U(t)$ is the discrete solution of (1) at time $t$, with $u_i(t) \approx u(x_i, t)$, $B$ is an $(N-1)\times(N-1)$ tridiagonal matrix, and $F$ is a vector-valued function defined through (10). To complete the ODE system, we need the boundary conditions at $x = a$ and $x = b$, which can be derived from the original boundary conditions defined in (3) or (4).

First, if the Dirichlet boundary condition (3) is specified, one can add the following two ODEs to (9):

$$\frac{du_0}{dt} = g_1'(t), \quad \frac{du_N}{dt} = g_2'(t).$$

Consequently the first and last rows of the matrix $B$ are modified, and the vector-valued function $F$ is modified accordingly.

Alternatively, we can incorporate the boundary condition by replacing $u_0$ and $u_N$ in (10) with $g_1(t)$ and $g_2(t)$, respectively, so the ODE system (11) has only $N - 1$ equations.

As one can imagine, the situation is more complicated when the Neumann boundary condition (4) is specified. To complete the ODE system and maintain the overall high-order accuracy, a compact fourth-order approximation of the Neumann boundary condition is needed. Let us use the boundary condition at $x = a$ as an example to demonstrate the idea of the new algorithm.

Unlike the Dirichlet boundary condition, which specifies the solution at the boundary points explicitly, the Neumann boundary condition defines $u_x$ at the boundary points; thus $u_0$ and $u_N$ need to be calculated along with the solution at the interior grid points. Consequently, the range for $i$ in (10) should be changed to $0 \le i \le N$, so $B$ is an $(N+1)\times(N+1)$ matrix. To approximate the derivative at $x = a$, we introduce a ghost point $x_{-1} = a - h$ and assume that (1) holds and the solution is sufficiently smooth on the extended domain $[a - h, b]$. Let $u_{-1}$ denote the solution at $x_{-1}$ and then apply the second-order central finite difference approximation to $u_x(a, t)$:

$$u_x(a, t) = \frac{u_1 - u_{-1}}{2h} - \frac{h^2}{6} u_{xxx}(a, t) + O(h^4). \tag{15}$$

Taking the partial derivative with respect to $x$ on both sides of (1), we have

$$u_{xt} = \alpha u_{xxx} + f'(u) u_x. \tag{16}$$

Letting $x = a$ in (16) and then applying the Neumann boundary condition (4), we obtain

$$u_{xxx}(a, t) = \frac{q_1'(t) - f'(u_0)\, q_1(t)}{\alpha}. \tag{17}$$

Combining (15) with (17), we obtain the following fourth-order compact approximation for $u_{-1}$:

$$u_{-1} = u_1 - 2h\, q_1(t) - \frac{h^3}{3} \cdot \frac{q_1'(t) - f'(u_0)\, q_1(t)}{\alpha}, \tag{18}$$

which involves $u_0$ and $u_1$ only, so the compact structure is preserved.
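The ghost-point closure rests on the Taylor identity $u(a-h) = u(a+h) - 2h\,u_x(a) - \frac{h^3}{3}u_{xxx}(a) + O(h^5)$, where the scheme supplies $u_x(a)$ from the Neumann data and $u_{xxx}(a)$ from the differentiated PDE (17). A quick numerical check of the identity itself, using $u = \sin x$ as an assumed test function:

```python
import math

# Check: u(a-h) = u(a+h) - 2h*u'(a) - (h^3/3)*u'''(a) + O(h^5).
# In the scheme, u'(a) comes from the Neumann data and u'''(a) from the
# differentiated PDE; here we supply the exact derivatives of u = sin(x).
u   = math.sin
du  = math.cos
d3u = lambda x: -math.cos(x)

a = 0.7
errs = []
for h in (0.1, 0.05):
    ghost = u(a + h) - 2 * h * du(a) - (h**3 / 3) * d3u(a)
    errs.append(abs(ghost - u(a - h)))
print(errs[0] / errs[1])   # ratio ≈ 2^5 = 32: the remainder is O(h^5)
```

Halving $h$ shrinks the error by roughly $32$, confirming the $O(h^5)$ remainder and hence the fourth-order accuracy of the resulting one-sided closure.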

Similarly, the fourth-order compact approximation for $u_{N+1}$ can be derived as

$$u_{N+1} = u_{N-1} + 2h\, q_2(t) + \frac{h^3}{3} \cdot \frac{q_2'(t) - f'(u_N)\, q_2(t)}{\alpha}. \tag{19}$$

Now the matrix $A$ involves $f'(u_0)$ and $f'(u_N)$, and its first and last rows are modified as in (20).

Consequently the first and last components of $F$ are modified as well. Finally, the ODE system is written in the form $B\frac{dU}{dt} = AU + F(U)$. Apparently, the matrix $A$ preserves the tridiagonal structure, but it depends on $u_0$ and $u_N$; hence the development of the Rosenbrock method becomes difficult. Fortunately, we notice that $u_0$ and $u_N$ are involved in two entries of $A$ only. Further investigation shows that the extra terms in (20) can be isolated. We eliminate these two extra terms by incorporating them into the vector $F$; therefore the first and last components of $F$ are modified as in (24) and (25).

Using (1), we obtain the identities (26) and (27).

Inserting (26) into (24) and then ignoring the fourth-order error term $O(h^4)$, we obtain (28). Similarly, inserting (27) into (25) and then ignoring the fourth-order error term, we obtain (29). We then obtain the closed ODE system $B\frac{dU}{dt} = AU + F(U)$, where the boundary components of the vector-valued function $F$ are given by (28) and (29), and $B$ is a nonsingular constant matrix. If the Robin boundary condition is specified, a similar numerical technique can be used to derive the semidiscrete ODE system.

Here we mention, without theoretical proof, that the resulting semidiscrete ODE system (9) is a fourth-order accurate approximation to the original semilinear parabolic partial differential equation given in (1), supplemented with either Dirichlet or Neumann boundary conditions. Interested readers can find a similar theorem and proof in [11].

#### 3. Fourth-Order Strongly A-Stable Rosenbrock Method

Various numerical methods can be used to solve the ODE system in (11). However, due to the stiffness of the problem, only stiffly stable methods are applicable; thus the choices are limited to the subclass of implicit methods such as implicit linear multistep methods and implicit Runge-Kutta methods. It is known that A-stability is necessary for stiff problems, and in general strongly A-stable or even L-stable methods are preferred. A-stability was first introduced and defined by Dahlquist [12] as follows.

*Definition 1. *A numerical method is called A-stable if there is no stability restriction on the step size when it is applied to solve the test equation $y' = \lambda y$, where $\operatorname{Re}(\lambda) < 0$.

For a single-step method such as a Runge-Kutta method, which applied to the test equation can be written as $y_{n+1} = R(h\lambda)\, y_n$, A-stability is equivalent to the condition $|R(z)| \le 1$ for any $z$ with $\operatorname{Re}(z) \le 0$, where $R(z)$ is called the stability function of the method. Although successfully used in various applications, an A-stable linear multistep method has the highest order of 2. In fact, the second-order A-stable linear multistep method with optimal error constant is the Trapezium rule [12]. Further, Gourlay [13] pointed out that A-stability of a method is necessary but not sufficient for very stiff systems, as the method may have an incorrect damping rate. For example, the widely used Trapezium rule has stability function satisfying $|R(z)| \le 1$ for any $\operatorname{Re}(z) \le 0$, but its damping rate $|R(z)|$ converges to $1$ as $z \to -\infty$. To overcome this difficulty, strong A-stability was introduced.

*Definition 2. *A numerical method is called strongly A-stable if it is A-stable and $|R(\infty)| < 1$.
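The distinction can be made concrete by evaluating stability functions. The Trapezium rule is A-stable but not strongly A-stable, whereas backward Euler (used here purely for illustration) satisfies $R(\infty) = 0$:

```python
# Stability functions R(z) of one step y_{n+1} = R(h*lam)*y_n on y' = lam*y.
def R_trapezium(z):        # A-stable, but |R(z)| -> 1 as z -> -infinity
    return (1 + z / 2) / (1 - z / 2)

def R_backward_euler(z):   # strongly A-stable (indeed L-stable): R(-inf) = 0
    return 1 / (1 - z)

for z in (-1.0, -10.0, -1000.0):
    print(z, abs(R_trapezium(z)), abs(R_backward_euler(z)))
# As z -> -infinity, |R_trapezium(z)| -> 1 (stiff components are barely
# damped and oscillate in sign), while |R_backward_euler(z)| -> 0.
```

This is exactly Gourlay's observation: both methods are A-stable, but only the one with $|R(\infty)| < 1$ damps the very stiff components.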

It has been shown that a numerical method with strong A-stability is effective in damping numerical oscillations for highly stiff system. For more details about the description and comparison of A-stability and strong A-stability, the readers are referred to [14].

An implicit Runge-Kutta method is usually unconditionally stable but suffers from high computational complexity, especially for nonlinear ODE systems. For example, if an $s$-stage implicit Runge-Kutta method is used to solve an ODE system with $m$ equations, an algebraic system with $sm$ unknown variables needs to be solved during each time step. Therefore, fully implicit Runge-Kutta methods are too computationally expensive to be useful for large-scale problems. In the past several decades, efforts have been made to reduce the computational cost, resulting in various modified implicit Runge-Kutta methods, such as the diagonally implicit Runge-Kutta method, the singly diagonally implicit Runge-Kutta method, and the explicit-implicit Runge-Kutta method, to name a few. For more details of these methods, the reader is referred to [15–19].

To completely avoid solving a nonlinear algebraic system, the Rosenbrock method, which is a special class of Runge-Kutta method, has been proposed. Since the spatial discretization is fourth-order, our aim herein is to develop a strongly A-stable fourth-order Rosenbrock method for solving the ODE system (11), so that the new algorithm is fourth-order accurate in both the temporal and spatial dimensions.

##### 3.1. Rosenbrock Method for Scalar Equation

We first derive the Rosenbrock method for an autonomous scalar equation $y' = f(y)$, for which the initial condition is given as $y(t_0) = y_0$. Nonautonomous equations can be converted to autonomous form by adding an extra equation to the system. Some previous research [20] suggested that it is unlikely to find a 3-stage fourth-order Rosenbrock method with strong A-stability or L-stability, so herein we focus on the development of a 4-stage Rosenbrock method. Suppose the numerical solution at time $t_n$ is known as $y_n$; the 4-stage Rosenbrock method calculates the numerical solution at $t_{n+1}$ as

$$k_i = h f\Big(y_n + \sum_{j=1}^{i-1} \alpha_{ij} k_j\Big) + h J \sum_{j=1}^{i} \gamma_{ij} k_j, \quad i = 1, \dots, 4,$$
$$y_{n+1} = y_n + \sum_{i=1}^{4} b_i k_i, \tag{31}$$

where $\alpha_{ij}$, $\gamma_{ij}$, and $b_i$ are coefficients to be determined and $J = f'(y_n)$.
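For illustration only (this is not the paper's 4-stage method), the one-stage member of the Rosenbrock family, i.e. the linearized implicit Euler method, shows how each step costs a single linear solve instead of a Newton iteration:

```python
def rosenbrock1(f, dfdy, y0, t0, t1, n):
    """One-stage Rosenbrock method (linearized implicit Euler):
        (1 - h*J) k1 = h*f(y_n),   y_{n+1} = y_n + k1,   J = f'(y_n).
    Iteration-free: one Jacobian evaluation and one (here scalar)
    linear solve per step."""
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        J = dfdy(y)
        k1 = h * f(y) / (1.0 - h * J)   # the linear solve (scalar case)
        y = y + k1
    return y

# Stiff test problem: y' = lam*(y - 1), exact solution y(t) = 1 + exp(lam*t).
lam = -1000.0
y = rosenbrock1(lambda y: lam * (y - 1.0), lambda y: lam, 2.0, 0.0, 1.0, 10)
print(y)   # h = 0.1 >> 2/|lam|, yet the method is stable and y ≈ 1
```

Even at a step size 50 times larger than the explicit stability bound, the method remains stable and lands on the steady state, which is the practical appeal of the Rosenbrock family for stiff semidiscrete systems.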

To extend the above algorithm to the nonautonomous problem $y' = f(y, t)$, we first convert it to autonomous form by adding the new equation $t' = 1$ and then apply algorithm (31) to the augmented system. Note that the components corresponding to the last variable can be computed explicitly; thus we can derive the modified algorithm (32), in which the additional quantities are defined in (33).

A Rosenbrock method of order $p$ is obtained by choosing the coefficients in (31)-(32) so that the local error satisfies $y(t_{n+1}) - y_{n+1} = O(h^{p+1})$. This can be done either by using the so-called *Butcher series* [14] or by straightforward differentiation. Here we derive the order conditions for the fourth-order Rosenbrock method in a different way, using Taylor series. First, both $y(t_{n+1})$ and $y_{n+1}$ are expanded as Taylor series so that their difference can be expressed as a Taylor series, and its coefficients of the terms up to order $h^4$ are set to $0$, which results in a set of equations involving these coefficients.

Given $y' = f(y, t)$, the Taylor series of $y(t_{n+1})$ at $t_n$ is expanded as in (35), where all derivatives of $f$ are evaluated at $(y_n, t_n)$. For the sake of simplicity, we suppress the arguments and write $f$ for $f(y_n, t_n)$.

Letting $i = 1$ in (32), we obtain the Taylor series (36) of $k_1$.

Similarly, letting $i = 2$ in (32), we obtain an expression for $k_2$.

Combining it with the Taylor series of $k_1$ in (36), we obtain the Taylor series of $k_2$.

Following the same approach, we obtain the Taylor series of $k_3$ and $k_4$.

Inserting the four Taylor series into (35) and matching the coefficients of $h^p$, $p = 1, 2, 3, 4$, on both sides of (31), we obtain the order conditions (40)–(47). Note that the set of order conditions obtained by using the Butcher series [14, page 108] is a special case of the conditions here.

It is also worth pointing out that if $f$ is a scalar function of a single variable, conditions (45) and (46) can be combined into one single condition. However, since the method here is developed for ODE systems, both conditions should be satisfied individually.

Apparently there are 12 degrees of freedom in determining the method defined in (32), since there are 20 parameters while only 8 constraints are given by (40)–(47). To simplify the procedure of determining these parameters and to reduce the number of matrix inversions, we assume that $\gamma_{ii} = \gamma$ for $i = 1, \dots, 4$, so that only one matrix inversion is required to solve for all $k_i$ during each time step. After this simplification, there are 17 parameters left, and the order conditions are simplified as well.

It is known that the Rosenbrock method suffers from order reduction when applied to nonlinear parabolic partial differential equations; see [20, 21]. In order to avoid such a reduction of accuracy, the following two extra order conditions, simplified after taking into account the previous order conditions, should be satisfied [22]: (48)–(49), which imply (50). Solving (50) results in three distinct real roots of $\gamma$, while stability analysis shows that only one of them can ensure A-stability for the Rosenbrock method. More details regarding stability will be provided later.

To perform step size control and error estimation, we need a third-order embedded formula [23], which uses the same values of the $k_i$'s as the Rosenbrock method in (31) but with different weight coefficients. To ensure the existence of such an embedded formula, a rank condition on a square matrix formed from the method's coefficients must hold.

Taking into account the order conditions, we obtain a simplified condition, which unfortunately contradicts the order condition given in (47). Thus one can either sacrifice efficiency by relaxing the condition for the existence of a third-order embedded formula or simply skip the condition and forgo step size control and error estimation. In this work we are interested in the efficiency and order reduction of the Rosenbrock method, so the condition for step size control is bypassed.

Upon the determination of $\gamma$, there are 17 free parameters remaining, while the order conditions simplify to (52)–(58), which are equivalent to those given in [14, page 108].

Since $\gamma$ is a root of (50), several coefficient combinations vanish, and the order conditions in (52)–(58) can be further simplified.

We now choose the coefficients with the purpose of reducing the number of function evaluations. In particular, a suitable choice of coefficients makes (55)-(56) imply that two of the internal stage values coincide, so the number of function evaluations can be reduced by 1.

The remaining coefficients are then determined by choosing four of them as free parameters and solving the simplified order conditions (52)–(58) for the rest.

In summary, the coefficients of the Rosenbrock method are listed in Table 1.