Abstract

The semidiscrete ordinary differential equation (ODE) system resulting from compact higher-order finite difference spatial discretization of a nonlinear parabolic partial differential equation, such as the reaction-diffusion equation, is highly stiff. Therefore, numerical time integration methods with stiff stability, such as implicit Runge-Kutta methods and implicit multistep methods, are required to solve the large-scale stiff ODE system. However, those methods are computationally expensive, especially for nonlinear cases. The Rosenbrock method is efficient because it is iteration-free; however, it suffers from order reduction when applied to nonlinear parabolic partial differential equations. In this work we construct a new fourth-order Rosenbrock method to solve the nonlinear parabolic partial differential equation supplemented with Dirichlet or Neumann boundary conditions. We resolve the order-reduction phenomenon, so the new method is fourth-order in time when applied to nonlinear parabolic partial differential equations. Moreover, it is shown that the Rosenbrock method is strongly A-stable and hence suitable for the stiff ODE system obtained from compact finite difference discretization of the nonlinear parabolic partial differential equation. Several numerical experiments are conducted to demonstrate the efficiency, stability, and accuracy of the new method.

1. Introduction

Let us consider the following parabolic partial differential equation:

$$u_t = \alpha u_{xx} + f(u, x, t), \quad a < x < b, \ 0 < t \le T, \qquad (1)$$

with the initial condition

$$u(x, 0) = \phi(x), \quad a \le x \le b, \qquad (2)$$

where $\alpha$ is a positive constant describing the diffusion property and $f(u, x, t)$ is a function representing the reaction term, which is nonlinear in $u$. The unknown function $u(x, t)$ represents, depending on the application, variables such as mass concentration in chemical reaction processes, temperature in heat conduction, neutron flux in nuclear reactors, and population density in population dynamics. On the boundary, either the Dirichlet condition

$$u(a, t) = g_1(t), \quad u(b, t) = g_2(t), \qquad (3)$$

or the Neumann condition

$$u_x(a, t) = q_1(t), \quad u_x(b, t) = q_2(t), \qquad (4)$$

is specified, where $g_1$, $g_2$, $q_1$, and $q_2$ are sufficiently smooth functions. In this paper we restrict our attention to Dirichlet and Neumann boundary conditions, while the developed techniques can easily be extended to the Robin boundary condition.

Efficient and accurate numerical methods for solving (1) have attracted great attention from scientists and engineers, as for many application problems in science and engineering it is preferable to use high-order compact numerical algorithms to compute accurate solutions. In the past several decades a great deal of work has been done on the development of efficient, accurate, and robust numerical algorithms for solving such problems. For more details, the reader is referred to [1–4].

Since both temporal and spatial derivatives are involved in the equation, we discuss the numerical treatments in time and space separately. Here we first apply the high-order compact finite difference approximation to the spatial derivative, so a semidiscrete ODE system is obtained, which is then solved by a fourth-order Rosenbrock method that will be discussed later.

Recently there have been attempts to develop high-order compact schemes for the spatial derivative. In [5], a three-point combined compact difference scheme was proposed to approximate the first and second derivatives for problems with periodic boundary conditions. The resulting scheme has up to sixth-order accuracy at all grid points, including the boundary nodes, for periodic boundaries; however, it is only fourth-order accurate for nonperiodic boundary conditions.

Because the semidiscrete ODE system obtained from spatial discretization (such as the method of lines) of the nonlinear parabolic partial differential equation is highly stiff, the choices of time integration methods are limited to implicit methods only. Explicit algorithms are cheap within a single time step but suffer from strict step size restrictions, which makes them less efficient overall. Implicit methods, on the other hand, are more expensive per step, but their unconditional stability allows the use of larger time steps; hence the overall computational efficiency can be significantly improved. One issue, however, is that the iteration usually converges slowly when a large step size is used. Also, due to the stiffness of the ODE system, only Newton-type iterative methods are applicable for solving the nonlinear algebraic system. Furthermore, strong A-stability or L-stability of the time integration method is necessary for error damping. A great deal of work has been done on the development of efficient time stepping methods for stiff ODE systems. In [6] explicit exponential Rosenbrock methods of order five were constructed to solve large-scale stiff ODE systems. Through the derivation of stiff order conditions, new pairs of embedded methods of higher order can be obtained. Similarly, fifth-order explicit exponential Runge-Kutta methods were constructed in [7] to efficiently integrate semilinear stiff problems. The authors also showed that there is no explicit exponential Runge-Kutta method of order 5 with 6 or fewer stages; therefore the resulting methods have 8 stages. In [8] a fourth-order time stepping method, which is a modification of the exponential time-differencing fourth-order Runge-Kutta method, was developed for stiff ODEs. These methods are efficient and accurate. However, A-stability of the time stepping method is not sufficient for highly stiff problems. To overcome these difficulties, it is desirable to construct new algorithms with strong A-stability or L-stability that are free of solving nonlinear equations. It turns out that the Rosenbrock method, which was first reported by Rosenbrock [9] and later improved by Haines [10], addresses these issues with considerably satisfying and promising results.

The objective herein is to develop a strongly A-stable Rosenbrock method to solve the semidiscrete stiff ODEs resulting from compact high-order finite difference approximation of a semilinear parabolic partial differential equation. The rest of the paper is organized as follows. In Section 2, we discretize the spatial derivatives of the semilinear parabolic partial differential equation using a fourth-order compact finite difference scheme, which is then combined with a newly proposed compact fourth-order boundary condition treatment to form the semidiscrete ODE system. In Section 3 we focus on the development of a fourth-order strongly A-stable Rosenbrock method and its stability analysis. Several numerical examples are used to demonstrate the accuracy and efficiency of the new algorithm in Section 4, which is followed by conclusions and possible future work.

2. Compact Fourth-Order Spatial Discretization

For the sake of simplicity, we assume that the 1D spatial domain $[a, b]$ is divided into $N$ subintervals of equal length $h = (b - a)/N$. Let $x_i = a + ih$, $i = 0, 1, \ldots, N$, be the grid points. A variety of compact high-order discretizations can be utilized to approximate the second derivative in (1).

Here we introduce a compact finite difference scheme to approximate $u_{xx}$, such that the resulting semidiscrete ODE system is an accurate and compact approximation to the original semilinear parabolic partial differential equation. This operator-approximation based method has been widely used to solve various multidimensional problems. We first define the central finite difference operator as

$$\delta_x^2 u_i = \frac{u_{i-1} - 2u_i + u_{i+1}}{h^2}; \qquad (5)$$

then $\delta_x^2 u_i$ gives a second-order accurate approximation to $u_{xx}(x_i)$. Using Taylor series to expand all terms on the right-hand side of (5), under the assumption that $u$ is sufficiently smooth, we have

$$\delta_x^2 u_i = \frac{\partial^2 u}{\partial x^2}\bigg|_{x_i} + \frac{h^2}{12}\,\frac{\partial^4 u}{\partial x^4}\bigg|_{x_i} + O(h^4). \qquad (6)$$

To improve the above finite difference approximation to fourth-order accuracy, one just needs to eliminate the second-order error term. Applying $\delta_x^2$ to both sides of (6), we have

$$\delta_x^2\left(\delta_x^2 u_i\right) = \frac{\partial^4 u}{\partial x^4}\bigg|_{x_i} + O(h^2). \qquad (7)$$

Combining (6) with (7) and neglecting the $O(h^4)$ terms, we obtain the following fourth-order accurate approximation to $u_{xx}$ at node $x_i$:

$$\frac{\partial^2 u}{\partial x^2}\bigg|_{x_i} \approx \left(1 - \frac{h^2}{12}\delta_x^2\right)\delta_x^2 u_i. \qquad (8)$$

The drawback is that a five-point stencil is required; therefore the compactness is destroyed and the method becomes less efficient. Further investigation shows that the difference between $\left(1 - \frac{h^2}{12}\delta_x^2\right)\delta_x^2$ and $\dfrac{\delta_x^2}{1 + \frac{h^2}{12}\delta_x^2}$ is $O(h^4)$, so a natural way is to approximate $u_{xx}(x_i)$ as $\dfrac{\delta_x^2}{1 + \frac{h^2}{12}\delta_x^2}\,u_i$, which is fourth-order accurate and compact.
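To make the compact approximation concrete, the following Python sketch (names such as `compact_second_derivative` are ours, not from the paper) applies the operator $\frac{\delta_x^2}{1 + \frac{h^2}{12}\delta_x^2}$ to a smooth test function and verifies the fourth-order convergence; for simplicity it assumes the second derivative of the test function vanishes at both boundary nodes, so the boundary coupling of the tridiagonal system can be ignored.

```python
import numpy as np

def compact_second_derivative(u, h):
    """Fourth-order compact (Pade) approximation of u_xx at the interior nodes.

    Solves (1 + h^2/12 * delta_x^2) v = delta_x^2 u for v, where v approximates
    u_xx.  Boundary values of u_xx are assumed to vanish, so their contribution
    to the first and last rows is dropped (illustrative sketch only).
    """
    n = len(u) - 2                                   # number of interior nodes
    d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2    # delta_x^2 u at interior nodes
    A = (np.diag(np.full(n, 10.0 / 12.0))            # 1 + (h^2/12) * delta_x^2
         + np.diag(np.full(n - 1, 1.0 / 12.0), 1)
         + np.diag(np.full(n - 1, 1.0 / 12.0), -1))
    return np.linalg.solve(A, d2u)

# Convergence check with u(x) = sin(pi x): u_xx = -pi^2 sin(pi x), which is
# zero at x = 0 and x = 1, so the dropped boundary terms do not matter here.
for n in (16, 32, 64, 128):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    err = np.max(np.abs(compact_second_derivative(np.sin(np.pi * x), h)
                        + np.pi**2 * np.sin(np.pi * x[1:-1])))
    print(f"n = {n:4d}  max error = {err:.3e}")
```

The printed errors should decrease by roughly a factor of 16 each time the grid is refined, consistent with the $O(h^4)$ accuracy claimed above.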

Applying the fourth-order Padé approximation to $u_{xx}$ in (1), we obtain the following ODE system:

$$\frac{du_i}{dt} = \alpha\,\frac{\delta_x^2}{1 + \frac{h^2}{12}\delta_x^2}\,u_i + f(u_i, x_i, t), \quad i = 1, 2, \ldots, N - 1, \qquad (9)$$

which is a fourth-order accurate approximation (in space) to the original semilinear parabolic partial differential equation defined in (1).

However, the above system is difficult to implement directly, so we multiply both sides by $1 + \frac{h^2}{12}\delta_x^2$ to obtain the following implicit ODE system:

$$\left(1 + \frac{h^2}{12}\delta_x^2\right)\frac{du_i}{dt} = \alpha\,\delta_x^2 u_i + \left(1 + \frac{h^2}{12}\delta_x^2\right) f(u_i, x_i, t), \quad i = 1, 2, \ldots, N - 1, \qquad (10)$$

which can be written in vector form as

$$B\,\frac{dU}{dt} = F(U, t), \qquad (11)$$

where $U(t)$ is the discrete solution of (1) at time $t$, with $U_i(t) \approx u(x_i, t)$, $B$ is a tridiagonal matrix, and $F$ is a vector-valued function defined through (10). To complete the ODE system, we need the boundary conditions at $x = a$ and $x = b$, which can be derived from the original boundary conditions defined in (3) or (4).

First, if the Dirichlet boundary condition (3) is specified, one can add the following two ODEs to (9):

$$\frac{du_0}{dt} = g_1'(t), \qquad \frac{du_N}{dt} = g_2'(t). \qquad (12)$$

Consequently the matrix $B$ is modified accordingly, while the vector-valued function $F$, after modification, incorporates the boundary data $g_1$ and $g_2$.

Alternatively, we can incorporate the boundary condition by replacing $u_0$ and $u_N$ in (10) with $g_1(t)$ and $g_2(t)$, respectively, so the ODE system (11) has only $N - 1$ equations.

As one can imagine, the situation is more complicated when the Neumann boundary condition (4) is specified. To complete the ODE system and maintain the high-order overall accuracy, a compact fourth-order approximation of the Neumann boundary condition is needed. Let us use the boundary condition at $x = a$ as an example to demonstrate the idea of the new algorithm.

Unlike the Dirichlet boundary condition, which specifies the solution at the boundary points explicitly, the Neumann boundary condition prescribes $u_x$ at the boundary points; thus $u_0$ and $u_N$ need to be calculated along with the solution at the interior grid points. Consequently, the range for $i$ in (10) should be changed to $0 \le i \le N$, so $B$ becomes an $(N+1)\times(N+1)$ matrix. To approximate the derivative at $x = a$, we introduce a ghost point $x_{-1} = a - h$ and assume that (1) holds and the solution is sufficiently smooth on the extended domain $[a - h, b + h]$. Let $u_{-1}$ denote the solution at $x_{-1}$ and then apply the second-order central finite difference approximation to $u_x(a, t)$:

$$u_x(a, t) = \frac{u_1 - u_{-1}}{2h} - \frac{h^2}{6}\,\frac{\partial^3 u}{\partial x^3}(a, t) + O(h^4). \qquad (15)$$

Taking the partial derivative with respect to $x$ on both sides of (1), we have

$$\frac{\partial^2 u}{\partial t\,\partial x} = \alpha\,\frac{\partial^3 u}{\partial x^3} + \frac{\partial f}{\partial u}\,\frac{\partial u}{\partial x} + \frac{\partial f}{\partial x}. \qquad (16)$$

Letting $x = a$ in (16) and then applying the Neumann boundary condition (4), we obtain

$$\frac{\partial^3 u}{\partial x^3}(a, t) = \frac{1}{\alpha}\left[q_1'(t) - \frac{\partial f}{\partial u}(u_0, a, t)\,q_1(t) - \frac{\partial f}{\partial x}(u_0, a, t)\right]. \qquad (17)$$

Combining (15) with (17), we obtain the following fourth-order compact approximation for the ghost value $u_{-1}$:

$$u_{-1} = u_1 - 2h\,q_1(t) - \frac{h^3}{3\alpha}\left[q_1'(t) - \frac{\partial f}{\partial u}(u_0, a, t)\,q_1(t) - \frac{\partial f}{\partial x}(u_0, a, t)\right], \qquad (18)$$

which involves $u_0$ and $u_1$ only, so the compact structure is preserved.

Similarly, the fourth-order compact approximation for the ghost value $u_{N+1}$ at $x = b$ can be derived as

$$u_{N+1} = u_{N-1} + 2h\,q_2(t) + \frac{h^3}{3\alpha}\left[q_2'(t) - \frac{\partial f}{\partial u}(u_N, b, t)\,q_2(t) - \frac{\partial f}{\partial x}(u_N, b, t)\right]. \qquad (19)$$

Now the matrix $B$ involves $u_0$ and $u_N$, and its first and last rows are modified as in (20).

Consequently, the first and last components of $F$ are modified as well. Finally, the ODE system is written in the form $B\,\frac{dU}{dt} = F(U, t)$. Apparently, the matrix $B$ preserves the tridiagonal structure, but it depends on $u_0$ and $u_N$; hence the development of the Rosenbrock method becomes difficult. Fortunately, we notice that $u_0$ and $u_N$ enter only two entries of $B$. Further investigation identifies the two extra terms in (20) responsible for this dependence. We can eliminate these two extra terms by incorporating them into the vector $F$; therefore the first and last components of $F$ are modified as in (24) and (25).

Using (1), we obtain (26) and (27).

Inserting (26) into (24) and then ignoring the fourth-order error term, we obtain (28). Similarly, inserting (27) into (25) and then ignoring the fourth-order error term, we obtain (29). We then obtain the closed ODE system $B\,\frac{dU}{dt} = F(U, t)$, where the vector-valued function $F$ incorporates the modified boundary components and $B$ is a nonsingular constant tridiagonal matrix given in (30). If the Robin boundary condition is specified, a similar numerical technique can be used to derive the semidiscrete ODE system.

Here we mention, without theoretical proof, that the resulting semidiscrete ODE system (9) is a fourth-order accurate approximation to the original semilinear parabolic partial differential equation given in (1), supplemented with either Dirichlet or Neumann boundary conditions. Interested readers can find a similar theorem and proof in [11].

3. Fourth-Order Strongly A-Stable Rosenbrock Method

Various numerical methods can be used to solve the ODE system in (11). However, due to the stiffness of the problem, only stiffly stable methods are applicable; thus the choices are limited to the subclass of implicit methods, such as implicit linear multistep methods and implicit Runge-Kutta methods. It is known that A-stability is necessary for stiff problems, and in general strongly A-stable or even L-stable methods are preferred. A-stability was first introduced and defined by Dahlquist [12] as follows.

Definition 1. A numerical method is called A-stable if there is no restriction on the step size when it is applied to solve the test equation $y' = \lambda y$, where $\mathrm{Re}(\lambda) < 0$.

For a single-step method, such as a Runge-Kutta method, that can be written as $y_{n+1} = R(\lambda\,\Delta t)\,y_n$ when applied to the test equation, A-stability is equivalent to the condition $|R(z)| \le 1$ for any $\mathrm{Re}(z) \le 0$, where $R(z)$ is called the stability function of the method. Although successfully used in various applications, an A-stable linear multistep method has order at most two. In fact, the second-order A-stable linear multistep method with the smallest error constant is the trapezium rule [12]. Further, Gourlay [13] pointed out that A-stability is necessary but not sufficient for very stiff systems, as an A-stable method may have an incorrect damping rate. For example, the widely used trapezium rule has stability function $R(z) = (1 + z/2)/(1 - z/2)$, which satisfies $|R(z)| < 1$ for any $\mathrm{Re}(z) < 0$, but its damping rate $|R(z)|$ converges to $1$ as $\mathrm{Re}(z) \to -\infty$. To overcome this difficulty, strong A-stability was introduced.
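As a quick illustration of the damping issue just described (independent of the coefficients derived later in this paper), the short sketch below evaluates the stability function of the trapezium rule for increasingly negative real $z$: $|R(z)|$ stays below one but approaches one, so very stiff components are hardly damped at all.

```python
def R_trapezium(z):
    """Stability function of the trapezium (Crank-Nicolson) rule."""
    return (1.0 + z / 2.0) / (1.0 - z / 2.0)

# A-stable: |R(z)| < 1 for Re(z) < 0, yet |R(z)| -> 1 as z -> -infinity,
# which is the "incorrect damping rate" noted by Gourlay for very stiff modes.
for z in (-1.0, -1e1, -1e2, -1e4, -1e6):
    print(f"z = {z:10.1e}   |R(z)| = {abs(R_trapezium(z)):.6f}")
```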

Definition 2. A numerical method is called strongly A-stable if it is A-stable and $\lim_{\mathrm{Re}(z) \to -\infty} |R(z)| < 1$.

It has been shown that a numerical method with strong A-stability is effective in damping numerical oscillations for highly stiff systems. For more details on the description and comparison of A-stability and strong A-stability, the reader is referred to [14].

Implicit Runge-Kutta methods are usually unconditionally stable but suffer from high computational complexity, especially for nonlinear ODE systems. For example, during each time step an algebraic system with $sM$ unknowns needs to be solved if an $s$-stage implicit Runge-Kutta method is used to solve an ODE system with $M$ equations. Therefore, fully implicit Runge-Kutta methods are too computationally expensive to be useful for large-scale problems. In the past several decades, efforts have been made to reduce the computational cost, which has resulted in various modified implicit Runge-Kutta methods, such as the diagonally implicit Runge-Kutta method, the singly diagonally implicit Runge-Kutta method, and the explicit-implicit Runge-Kutta method, to name a few. For more details of these methods, the reader is referred to [15–19].

To completely avoid solving a nonlinear algebraic system, the Rosenbrock method, which is a special class of Runge-Kutta methods, has been proposed. Since the spatial discretization is fourth-order, our aim herein is to develop a strongly A-stable fourth-order Rosenbrock method for solving the ODE system (11), so that the new algorithm is fourth-order accurate in both the temporal and spatial dimensions.

3.1. Rosenbrock Method for Scalar Equation

We first derive the Rosenbrock method based on an autonomous scalar equation $y' = f(y)$, for which the initial condition is given as $y(t_0) = y_0$. Nonautonomous equations can be converted to autonomous form by adding an extra equation to the system. Previous research [20] suggested that it is unlikely to find a 3-stage fourth-order Rosenbrock method with strong A-stability or L-stability, so herein we focus on the development of a 4-stage Rosenbrock method. Suppose the numerical solution at time $t_n$ is known as $y_n$; the 4-stage Rosenbrock method then calculates the numerical solution at $t_{n+1} = t_n + \Delta t$ through the four stages given in (31), where the stage coefficients are to be determined.
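The following sketch shows the generic structure of one step of a 4-stage Rosenbrock method for an autonomous scalar equation $y' = f(y)$: each stage requires only a linear solve with the factor $1 - \Delta t\,\gamma_{ii} f'(y_n)$, not a nonlinear iteration. The coefficient arrays `alpha`, `gamma`, and the weights `b` are placeholders; the actual values derived below (Table 1) are not reproduced here.

```python
import numpy as np

def rosenbrock4_step(f, dfdy, y, dt, alpha, gamma, b):
    """One step of a generic 4-stage Rosenbrock method for autonomous y' = f(y).

    alpha[i][j], gamma[i][j] (j < i) and gamma[i][i] are the method coefficients
    and b are the weights; all of them are placeholders here, not the values of
    Table 1.  The Jacobian f'(y) is evaluated once per step and each stage is a
    scalar linear solve, so the step is iteration-free.
    """
    J = dfdy(y)
    k = np.zeros(4)
    for i in range(4):
        yi = y + sum(alpha[i][j] * k[j] for j in range(i))
        rhs = dt * f(yi) + dt * J * sum(gamma[i][j] * k[j] for j in range(i))
        k[i] = rhs / (1.0 - dt * gamma[i][i] * J)
    return y + sum(b[i] * k[i] for i in range(4))
```

When all diagonal entries $\gamma_{ii}$ are equal, the same factor $1 - \Delta t\,\gamma f'(y_n)$ serves all four stages, which is exactly the simplification adopted later so that a single LU factorization suffices in the system case.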

To extend the above algorithm to the nonautonomous problem $y' = f(y, t)$, we first convert it to autonomous form by adding the new equation $t' = 1$ and then apply the algorithm (31) to the augmented system. Note that the components corresponding to the added variable can be computed explicitly; thus we can derive the modified algorithm (32), where the auxiliary coefficients are defined through (33) and (34).

A Rosenbrock method of order $p$ is obtained by choosing the coefficients in (31)-(32) so that the local error satisfies $y(t_{n+1}) - y_{n+1} = O(\Delta t^{p+1})$. This can be done either by using the so-called Butcher series [14] or by straightforward differentiation. Here we derive the order conditions for the fourth-order Rosenbrock method in a different way using Taylor series. First, both $y(t_{n+1})$ and $y_{n+1}$ are expanded as Taylor series in $\Delta t$ so that their difference can be expressed as a Taylor series whose coefficients of the terms up to $\Delta t^4$ are set to zero, which results in a set of equations involving these coefficients.

Given $y_n$, the Taylor series of $y(t_n + \Delta t)$ about $t_n$ is expanded as in (35), where $f$ and its derivatives are evaluated at $y_n$; for the sake of simplicity, we use shorthand notation for these evaluations.

Letting $i = 1$ in (32), we obtain the Taylor expansion (36) of the first stage.

Similarly, letting $i = 2$ in (32), we have (37).

Combining it with the Taylor series of the first stage in (36), we obtain (38).

Proceeding in the same way, we obtain the Taylor series of the remaining two stages.

Inserting the four Taylor series into (35) and matching the coefficients of $\Delta t^m$, $m = 1, 2, 3, 4$, on both sides of (31), we obtain the order conditions (40)–(47). Note that the set of order conditions obtained by using Butcher series [14, page 108] is a special case of the conditions derived here.

It is also worth pointing out that if $f$ is a scalar function of a single variable, conditions (45) and (46) can be combined into a single condition. However, since the method here is developed for ODE systems, both conditions should be satisfied individually.

Apparently there are 12 degrees of freedom in determining the method defined in (32), since there are 20 parameters while only 8 constraints are given by (40)–(47). To simplify the procedure of determining these parameters and to reduce the number of matrix inversions, we assume that $\gamma_{ii} = \gamma$ for $i = 1, 2, 3, 4$, so that only one matrix inversion is required to solve for all stages during each time step. After this simplification, there are 17 parameters left, and the order conditions are simplified as well.

It is known that the Rosenbrock method suffers from order reduction when applied to nonlinear parabolic partial differential equations; see [20, 21]. In order to avoid such a reduction of accuracy, two extra order conditions, given in (48)-(49) and simplified after taking into account the previous order conditions, should be satisfied [22]; they imply the equation (50) for $\gamma$. Solving (50) results in three distinct real roots, of which stability analysis shows that only one ensures A-stability of the Rosenbrock method. More details regarding stability will be provided later.

To perform step size control and error estimation, we need a third-order embedded formula [23], defined as in (51), which uses the same stage values as the Rosenbrock method in (31) but with different weights. To ensure the existence of such an embedded formula, a solvability condition involving a square matrix built from the method coefficients must hold.

Taking into account the order conditions, we obtain a simplified form of this condition, which unfortunately contradicts the order condition given in (47). Thus one can either sacrifice efficiency by relaxing the conditions so that a third-order embedded formula exists or simply skip this requirement and forgo step size control and error estimation. In this work we are interested in the efficiency and the order reduction of the Rosenbrock method, so the condition for step size control is bypassed.

Upon the determination of $\gamma$, there are 17 free parameters remaining, while the order conditions are simplified to (52)–(58), which are equivalent to those given in [14, page 108].

Since $\gamma$ is a root of (50), the order conditions in (52)–(58) can be further simplified.

We now choose the coefficients with the purpose of reducing the number of function evaluations. In particular, (55)-(56) allow the free parameters to be chosen so that the arguments of two of the stage function evaluations coincide; the corresponding function value can then be reused, which reduces the number of function evaluations by one.

The remaining coefficients are then determined by successively choosing four of them as free parameters and solving the simplified order conditions for the others.

In summary, the coefficients of the Rosenbrock method are listed in Table 1.

Note that one can also use these free parameters to eliminate some fifth-order truncation error terms, so the truncation error constant can be further optimized.

3.2. Extension of the Rosenbrock Method to ODE Systems

We now extend the Rosenbrock algorithm to the ODE system $B\,\frac{dU}{dt} = F(U, t)$. Suppose that $U_n$ is the numerical solution of (10) at time $t_n$; then $U_{n+1}$ is calculated by the vector analogue of (31)-(32), where $J = \partial F/\partial U$ is the Jacobian matrix evaluated at $(U_n, t_n)$ and the auxiliary quantities are defined through (34).

Obviously, each stage of this method consists of solving a linear system of equations. Since all diagonal coefficients $\gamma_{ii}$ are equal to $\gamma$, the four stages share the same coefficient matrix, and only one LU-decomposition is required during each time step.
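A structural sketch of one step for the system case is given below. It assumes the semidiscrete system has been written as $B\,\frac{dU}{dt} = F(U, t)$ with a constant nonsingular $B$, takes the stage matrix to be $B - \Delta t\,\gamma\,J$ with $J = \partial F/\partial U$ (our assumption for the implicit-ODE form), omits the explicit time-derivative terms of the nonautonomous formulation (32) for brevity, and leaves the coefficient arrays as placeholders for the values in Table 1.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rosenbrock4_system_step(F, JF, t, U, dt, B, alpha, gamma, b, c):
    """One Rosenbrock step for the implicit ODE system B dU/dt = F(U, t).

    Sketch only: alpha, gamma, b, c are placeholder coefficient arrays and the
    explicit time-derivative terms of the nonautonomous formulation are omitted.
    Because all diagonal entries gamma[i][i] equal gamma, a single LU
    factorization of (B - dt*gamma*J) is reused by all four stages.
    """
    J = JF(U, t)                                    # Jacobian of F at (U, t)
    lu, piv = lu_factor(B - dt * gamma[0][0] * J)   # factored once per step
    k = []
    for i in range(4):
        Ui = U + sum((alpha[i][j] * k[j] for j in range(i)), np.zeros_like(U))
        gsum = sum((gamma[i][j] * k[j] for j in range(i)), np.zeros_like(U))
        rhs = dt * F(Ui, t + c[i] * dt) + dt * (J @ gsum)
        k.append(lu_solve((lu, piv), rhs))
    return U + sum(b[i] * k[i] for i in range(4))
```

In the actual implementation both $B$ and $J$ are tridiagonal, so the dense LU factorization above could be replaced by a banded solver (for example, `scipy.linalg.solve_banded`) to keep the cost per step proportional to the number of grid points.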

3.3. Stability of the Rosenbrock Method

We now study the stability of the Rosenbrock method defined by (31)-(32) with the coefficients given in Table 1. Applying the Rosenbrock method to the linear test equation $y' = \lambda y$, $\mathrm{Re}(\lambda) < 0$, and taking into account the order condition (49), we obtain the stability function of the Rosenbrock method in the form $R(z) = P(z)/(1 - \gamma z)^4$, where $z = \lambda\,\Delta t$ and $P(z)$ is a polynomial of degree at most four determined by the method coefficients.

In order for the Rosenbrock method to be strongly A-stable, we need

$$|R(z)| \le 1 \quad \text{for all } \mathrm{Re}(z) \le 0, \qquad (67)$$

$$\lim_{\mathrm{Re}(z) \to -\infty} |R(z)| < 1. \qquad (68)$$

Apparently, the limit computed in (69) is strictly less than one for the coefficients in Table 1, so (68) is satisfied.

To prove (67), one can substitute $z = \mathrm{i}y$ and show that $|R(\mathrm{i}y)| \le 1$ for any real $y$, which, by the maximum principle, implies (67). However, such a proof is technically tedious, although possible. Here we use the boundary-locus method to draw the region of absolute stability of the method, which is shown in Figure 1. Obviously, the region of absolute stability contains the whole left half-plane $\mathrm{Re}(z) \le 0$, so the Rosenbrock method is A-stable. Combined with (69), this proves that the Rosenbrock method is strongly A-stable.
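Figure 1 can be reproduced with a short script such as the one below, which traces the level set $|R(z)| = 1$ (equivalent in output to a boundary-locus plot). The stability function itself must be supplied by the user, since its numerator depends on the Table 1 coefficients, which are not reproduced here; for the method of this section it has the form $P(z)/(1 - \gamma z)^4$.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_stability_region(R, xlim=(-10.0, 10.0), ylim=(-10.0, 10.0), n=800):
    """Shade {z : |R(z)| <= 1} and draw its boundary |R(z)| = 1.

    R is the stability function of the method (a callable accepting complex
    arrays); the actual rational function for the new method is determined by
    the Table 1 coefficients, which are not reproduced here.
    """
    x = np.linspace(xlim[0], xlim[1], n)
    y = np.linspace(ylim[0], ylim[1], n)
    X, Y = np.meshgrid(x, y)
    Z = np.abs(R(X + 1j * Y))
    plt.contourf(X, Y, (Z <= 1.0).astype(float), levels=[0.5, 1.5], alpha=0.3)
    plt.contour(X, Y, Z, levels=[1.0], colors="k")        # boundary locus
    plt.axhline(0.0, lw=0.5, color="gray")
    plt.axvline(0.0, lw=0.5, color="gray")
    plt.xlabel("Re(z)")
    plt.ylabel("Im(z)")
    plt.title("Region of absolute stability")
    plt.show()
```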

4. Numerical Examples and Discussions

We solve three numerical examples to demonstrate the accuracy and efficiency of the new algorithm. The first and second examples are reaction-diffusion equations supplemented with Dirichlet and Neumann boundary conditions, respectively. These two examples are solved by the new method to demonstrate the fourth-order accuracy in time and space. The third example is used to demonstrate that the new algorithm is free of order reduction and more efficient than several other existing fourth-order Rosenbrock methods; for comparison, the rate of convergence is also reported for several fourth-order methods that suffer from order reduction. In what follows, we use HOC-ROSB4 to denote the new algorithm developed in this paper and HOC-GRK4A to denote the fourth-order method that combines the high-order compact finite difference scheme with the fourth-order GRK4A method [23]. HOC-SHAMP denotes the algorithm that combines the high-order compact finite difference scheme with the fourth-order Rosenbrock method of Shampine [24]. HOC-LSTAB denotes the algorithm that combines the high-order compact finite difference scheme in space with the L-stable fourth-order Rosenbrock method of [14] in time. Finally, we use HOC-VELD to denote the algorithm that combines the high-order compact finite difference scheme in space with the fourth-order Rosenbrock method proposed in [25].

4.1. Example  1

Consider a reaction-diffusion equation of the form (1) supplemented with Dirichlet boundary conditions, for which the analytical solution is known in closed form.

We first show that the compact finite difference scheme for the spatial discretization and the compact boundary condition treatment are fourth-order accurate. To do so, we fix a very small time step $\Delta t$ so that the truncation error from the time discretization is negligible. The results included in Table 2 clearly demonstrate fourth-order convergence in space, as the maximal error is reduced by a factor of 16 (roughly) when the grid size $h$ is halved.
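The observed order reported in Tables 2–4 can be recovered from successive maximum errors as $\log_2\!\big(E(h)/E(h/2)\big)$; a minimal helper (our naming, not the paper's) is:

```python
import numpy as np

def observed_order(errors):
    """Observed convergence rates log2(E_coarse / E_fine) for a sequence of
    maximum errors obtained on successively halved grids; values close to 4
    indicate fourth-order accuracy."""
    errors = np.asarray(errors, dtype=float)
    return np.log2(errors[:-1] / errors[1:])

# Example usage: observed_order([e_h, e_h_over_2, e_h_over_4]) with the
# maximum errors taken from Table 2.
```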

We then show the fourth-order accuracy in time by fixing a very fine spatial grid. The data in Table 3 confirm that the Rosenbrock method is fourth-order accurate in time, so the new algorithm is free of order reduction.

Since the method is fourth-order accurate in both the temporal and spatial dimensions, we can write the leading term of the truncation error as $C_1\,\Delta t^4 + C_2\,h^4$, where $C_1$ and $C_2$ are two constants. To obtain optimal performance, we adjust the ratio $\Delta t / h$ to balance the two error terms, that is, we let $C_1\,\Delta t^4 = C_2\,h^4$, from which the optimal ratio is estimated as $\Delta t / h = (C_2 / C_1)^{1/4}$. To estimate $C_2$, we solve the example using a very small $\Delta t$ so that the spatial error dominates; $C_1$ can be estimated in the same way using a very small $h$.

A simple calculation based on the numerical results from Tables 2 and 3 gives the optimal ratio $\Delta t / h$ for this example. We then solve the reaction-diffusion equation using this optimal ratio to show that the new algorithm is fourth-order accurate in both temporal and spatial dimensions. The numerical results in Table 4 indicate that the new algorithm is indeed fourth-order accurate in both dimensions, as the maximal error is reduced by a factor of 16 (roughly) when $\Delta t$ and $h$ are halved simultaneously. We also notice that the algorithm is very efficient, as the maximal error drops to a very small level while $\Delta t$ and $h$ are still reasonably large.

4.2. Example  2

We solve a semilinear parabolic partial differential equation of the form (1) supplemented with Neumann boundary conditions, for which the analytical solution is known in closed form.

Notice that the boundary conditions are approximated by the compact fourth-order boundary scheme given in (18)-(19), so the compactness of the resulting linear system is preserved; consequently, in each time step only a tridiagonal system is solved for each stage, so the solution procedure is very efficient. However, we point out that some tedious work is needed to form the semidiscrete ODE system, for instance, the first and last rows of the matrix $B$ in (20) and the first and last components of $F$ in (28)-(29).

To obtain the optimal performance, we again choose the ratio $\Delta t / h$ so as to balance the two error terms. The data in Table 5 show that the method is fourth-order accurate in both time and space, as the maximum error is reduced by a factor of 16 (roughly) when $\Delta t$ and $h$ are halved.

It is worth mentioning that one can also use a one-sided formula to approximate the Neumann boundary condition, which reduces the effort needed to derive the semidiscrete ODE system but destroys the tridiagonal structure of the matrix and consequently affects the efficiency of the algorithm. Another side effect of using a one-sided approximation on the boundary is a possible stability issue: the modifications to the matrix may result in eigenvalues with positive real parts in the semidiscrete operator.

4.3. Example  3

In this example we compare the new algorithm with several other fourth-order Rosenbrock methods in terms of the rate of convergence in time and of overall efficiency. The nonlinear parabolic partial differential equation to be solved is again of the form (1), and its exact solution is known in closed form.

The data in Table 6 show that HOC-ROSB4 is fourth-order accurate and thus free of order reduction, while the other four fourth-order Rosenbrock methods, which clearly suffer from order reduction, show rates of convergence ranging from 3.0 to 3.25. Note that $h$ is fixed and the same spatial discretization is used for all five methods, so the demonstrated rate of convergence is in time. One can see that the HOC-LSTAB method has the smallest error constant, while the HOC-ROSB4 method has the largest one. However, due to the freedom in determining the coefficients of the Rosenbrock method, a better error constant can be achieved by eliminating several fifth-order truncation error terms.

Finally, we show that HOC-ROSB4 is the most efficient method. For consistency, we adjust $\Delta t$ and $h$ for each method so that the same error level is reached and record the average CPU time over 5 simulation runs. Since all five methods use the same spatial discretization, the same $h$ is used for all of them; hence we adjust $\Delta t$ only. The comparison included in Table 7 clearly indicates that the new method is the most efficient one. The higher efficiency apparently results from the fact that the new method is free of order reduction; hence a larger time step can be used to reach the same error level.

5. Conclusion

An efficient fourth-order numerical algorithm that combines the Padé approximation in space with a fourth-order accurate Rosenbrock method in time is proposed in this paper. Our investigation shows that many widely used fourth-order Rosenbrock methods (A-stable or L-stable) suffer from order reduction when they are used to solve nonlinear parabolic partial differential equations. To avoid order reduction, extra order conditions are required, which are imposed in this paper to develop the new algorithm. Also, it has been shown [26] that the extra condition resolving order reduction contradicts the L-stability of the Rosenbrock method, so there is no L-stable Rosenbrock method that is also free of order reduction. The new method can be used to solve nonlinear parabolic partial differential equations with all types of boundary conditions; however, for Neumann or Robin boundary conditions, extra effort is needed to form the semidiscrete ODE system. Two numerical examples are solved to demonstrate that the new method is fourth-order accurate in both time and space, while a third example shows that the new method is free of order reduction and is very efficient. In the future, we plan to extend the new method to multidimensional problems supplemented with various types of boundary conditions.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Natural Sciences & Engineering Research Council of Canada (NSERC) through the individual Discovery Grant program. The author gratefully acknowledges the financial support from NSERC.