Abstract and Applied Analysis
Volume 2015, Article ID 539652, 12 pages
http://dx.doi.org/10.1155/2015/539652
Research Article

A Strongly A-Stable Time Integration Method for Solving the Nonlinear Reaction-Diffusion Equation

Department of Mathematics and Statistics, University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N 1N4

Received 28 July 2014; Accepted 17 October 2014

Academic Editor: Santanu Saha Ray

Copyright © 2015 Wenyuan Liao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The semidiscrete ordinary differential equation (ODE) system resulting from compact higher-order finite difference spatial discretization of a nonlinear parabolic partial differential equation, for instance, the reaction-diffusion equation, is highly stiff. Therefore, numerical time integration methods with stiff stability, such as implicit Runge-Kutta methods and implicit multistep methods, are required to solve the large-scale stiff ODE system. However, those methods are computationally expensive, especially for nonlinear problems. The Rosenbrock method is efficient because it is iteration-free; however, it suffers from order reduction when applied to nonlinear parabolic partial differential equations. In this work we construct a new fourth-order Rosenbrock method to solve the nonlinear parabolic partial differential equation supplemented with a Dirichlet or Neumann boundary condition. We successfully resolve the phenomenon of order reduction, so the new method is fourth-order in time when applied to nonlinear parabolic partial differential equations. Moreover, we show that the Rosenbrock method is strongly A-stable and hence suitable for the stiff ODE system obtained from compact finite difference discretization of the nonlinear parabolic partial differential equation. Several numerical experiments are conducted to demonstrate the efficiency, stability, and accuracy of the new method.

1. Introduction

Let us consider the following parabolic partial differential equation: with the initial condition: where is a positive constant describing the diffusion property and is a function representing the reaction term, which is nonlinear in . The unknown function represents, depending on the application, quantities such as mass concentration in a chemical reaction process, temperature in heat conduction, neutron flux in nuclear reactors, and population density in population dynamics. On the boundary, either the Dirichlet condition or the Neumann condition is specified, where , , , and are sufficiently smooth functions. In this paper we restrict our attention to Dirichlet and Neumann boundary conditions, although the developed techniques can easily be extended to the Robin boundary condition.

Efficient and accurate numerical methods for solving (1) have attracted great attention from scientists and engineers, since for many application problems in science and engineering it is preferable to use high-order compact numerical algorithms to compute accurate solutions. In the past several decades a great deal of work has been done on the development of efficient, accurate, and robust numerical algorithms for solving such problems. For more details, the reader is referred to [1–4].

Since both temporal and spatial derivatives are involved in the equation, we discuss the numerical treatments in time and space separately. Here we first apply the high-order compact finite difference approximation to the spatial derivative, so a semidiscrete ODE system is obtained, which is then solved by a fourth-order Rosenbrock method that will be discussed later.

Recently there have been attempts to develop high-order compact schemes for the spatial derivative. In [5], a three-point combined compact difference scheme was proposed to approximate the first and second derivatives for problems with periodic boundary conditions. The resulting scheme has up to sixth-order accuracy at all grid points, including the boundary nodes, for periodic boundaries; however, it is only fourth-order accurate for nonperiodic boundary conditions.

Because the semidiscrete ODE system obtained from spatial discretization, such as the method of lines, of the nonlinear parabolic partial differential equation is highly stiff, the choices of time integration methods are limited to implicit methods only. Explicit algorithms are efficient within a single time step but suffer from a strict step size restriction, which makes them less efficient overall. Implicit methods, on the other hand, are less efficient within a single step, but their unconditional stability allows larger time steps; hence the overall computational efficiency can be significantly improved. One issue, however, is that the iteration usually converges slowly when a large step size is used. Also, due to the stiffness of the ODE system, only Newton-type iterative methods are applicable for solving the nonlinear algebraic system. Furthermore, strong A-stability or L-stability of the time integration method is necessary for error damping. A great deal of work has been done on the development of efficient time stepping methods for stiff ODE systems. In [6], explicit exponential Rosenbrock methods of order five were constructed to solve large-scale stiff ODE systems. Through the derivation of stiff order conditions, new pairs of embedded methods of higher order can be obtained. Similarly, fifth-order explicit exponential Runge-Kutta methods were constructed to efficiently integrate semilinear stiff problems in [7]. The authors also showed that there is no explicit exponential Runge-Kutta method of order five with six or fewer stages; therefore the resulting methods are 8-stage methods. In [8], a fourth-order time stepping method, a modification of the exponential time-differencing fourth-order Runge-Kutta method, was developed for stiff ODEs. These methods are efficient and accurate. However, A-stability of the time stepping method is not sufficient for highly stiff problems.
To overcome these difficulties, it is desirable to construct new algorithms with strong A-stability or L-stability that are free of solving nonlinear equations. It turns out that the Rosenbrock method, first proposed by Rosenbrock [9] and later improved by Haines [10], addresses these issues with satisfying and promising results.
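To see the stiffness concretely, consider the eigenvalues of the standard second-difference matrix arising from a method-of-lines discretization on (0, 1) with Dirichlet boundaries. The sketch below is illustrative (it uses the plain second-order difference, not the paper's compact scheme), but the stiffness behaviour is the same:

```python
import numpy as np

# Eigenvalues of the 1D Dirichlet second-difference matrix (1/h^2) tridiag(1, -2, 1):
# lambda_k = -(4/h^2) * sin^2(k*pi*h/2), k = 1..n, with h = 1/(n+1).
# The stiffness ratio |lambda|_max / |lambda|_min grows like 1/h^2, which is why
# explicit integrators need dt = O(h^2) while stiffly stable implicit ones do not.
def stiffness_ratio(n):
    h = 1.0 / (n + 1)
    k = np.arange(1, n + 1)
    lam = -(4.0 / h**2) * np.sin(k * np.pi * h / 2) ** 2
    return np.abs(lam).max() / np.abs(lam).min()

for n in (10, 20, 40, 80):
    print(n, stiffness_ratio(n))   # ratio roughly quadruples as h is halved
```

Doubling the number of grid points roughly quadruples the stiffness ratio, so refining the spatial grid rapidly worsens the step size restriction of any explicit method.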

The objective here is to develop a strongly A-stable Rosenbrock method to solve the semidiscrete stiff ODEs resulting from compact high-order finite difference approximation of a semilinear parabolic partial differential equation. The rest of the paper is organized as follows. In Section 2, we discretize the spatial derivatives of the semilinear parabolic partial differential equation using a fourth-order compact finite difference scheme, which is then combined with a newly proposed compact fourth-order boundary condition treatment to form the semidiscrete ODE system. In Section 3 we focus on the development of a fourth-order strongly A-stable Rosenbrock method and its stability analysis. Several numerical examples are used to demonstrate the accuracy and efficiency of the new algorithm in Section 4, which is followed by conclusions and possible future work.

2. Compact Fourth-Order Spatial Discretization

For the sake of simplicity, we assume that the 1D spatial domain is divided into subintervals with equal length . Let , , be the grid points. A variety of compact high-order discretizations can be utilized to approximate the second derivative in (1).

Here we introduce a compact finite difference scheme to approximate , such that the resulting semidiscrete ODE system is an accurate and compact approximation to the original semilinear parabolic partial differential equation. This operator-approximation based method has been widely used to solve various multidimensional problems. We first define the central finite difference operator as ; then gives a second-order accurate approximation to . Expanding all terms on the right-hand side of (5) in Taylor series, under the assumption that is sufficiently smooth, we have . To improve the above finite difference approximation to fourth-order accuracy, one only needs to eliminate the second-order error term. Applying to both sides of (6), we have

Combining (6) with (7) and neglecting the higher-order terms, we obtain the following fourth-order accurate approximation to at node : . The drawback is that a five-point stencil is required; the compactness is therefore destroyed and the method becomes less efficient. Further investigation shows that the difference between and is , so a natural alternative is to approximate as , which is fourth-order accurate and compact.

Applying the fourth-order Padé approximation to in (1), we obtain the following ODE system: , which is a fourth-order accurate approximation (in space) to the original semilinear parabolic partial differential equation defined in (1).

However, the above algorithm is difficult to implement directly, so we multiply both sides by to obtain the following implicit ODE system: , which can be written in vector form as , where is the discrete solution of (1) at time , with , is an tridiagonal matrix, and is a vector-valued function defined through (10). To complete the ODE system, we need the boundary conditions at and , which can be derived from the original boundary conditions defined in (3) or (4).
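The tridiagonal structure noted above is what makes the implicit ODE system cheap to handle: each mass-matrix solve costs O(n) via the Thomas algorithm. A hedged numpy sketch with the standard Padé coefficients (interior rows only; the paper's boundary rows would replace the first and last equations):

```python
import numpy as np

# The Pade "mass" matrix B = tridiag(1/12, 10/12, 1/12) multiplying du/dt is
# tridiagonal, so it can be inverted in O(n) with the Thomas algorithm instead
# of a dense solve.
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = subdiagonal, b = diagonal, c = superdiagonal."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
g = np.arange(1.0, n + 1)
x = thomas(np.full(n, 1 / 12), np.full(n, 10 / 12), np.full(n, 1 / 12), g)

# cross-check against a dense solve with the explicitly assembled B
B = (np.diag(np.full(n, 10 / 12)) +
     np.diag(np.full(n - 1, 1 / 12), 1) + np.diag(np.full(n - 1, 1 / 12), -1))
ok = np.allclose(x, np.linalg.solve(B, g))
print(ok)   # True
```

In a production code the same O(n) solve is applied at every right-hand-side evaluation, so the compact discretization costs barely more than the noncompact one.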

First, if the Dirichlet boundary condition (3) is specified, one can add the following two ODEs to (9): . Consequently, the matrix is modified as , while the vector-valued function , after modification, is defined as

Alternatively, we can incorporate the boundary condition by replacing and in (10) with and , respectively, so the ODE system equation (11) has only equations.

As one can imagine, the situation is more complicated when the Neumann boundary condition (4) is specified. To complete the ODE system and maintain the higher-order overall accuracy, a compact fourth-order approximation of the Neumann boundary condition is needed. Let us use the boundary condition at as the example to demonstrate the idea of the new algorithm.

Unlike the Dirichlet boundary condition, which specifies the solution at the boundary points explicitly, the Neumann boundary condition defines at the boundary points; thus and need to be calculated along with the solution at the interior grid points. Consequently, the range of in (10) should be changed to , so is an matrix. To approximate the derivative at , we introduce a ghost point and assume that (1) holds and the solution is sufficiently smooth on the extended domain . Let denote the solution at ; we then apply the second-order central finite difference approximation to ,

Taking the partial derivative with respect to on both sides of (1), we have . Letting in (16) and then applying the Neumann boundary condition (4), we obtain . Combining (15) with (17), we obtain the following fourth-order compact approximation for : , which involves and only, so the compact structure is preserved.

Similarly, the fourth-order compact approximation for can be derived as

Now the matrix involves and , and the first and last rows are modified as

Consequently, the first and last components of are . Finally, the ODE system is written in the form . Apparently, the matrix preserves the tridiagonal structure, but it depends on and ; hence the development of the Rosenbrock method becomes difficult. Fortunately, we notice that and are involved in only two entries, and . Further investigation shows that the extra terms in (20) are , respectively. We can eliminate these two extra terms by incorporating them into the vector ; therefore and are modified as

Using (1), we have

Inserting (26) into (24) and then ignoring the fourth-order error term , we obtain . Similarly, inserting (27) into (25) and then ignoring the fourth-order error term, we obtain . We then obtain the closed ODE system , where the vector-valued function is given as , and is a nonsingular constant matrix given as . If the Robin boundary condition is specified, a similar numerical technique can be used to derive the semidiscrete ODE system.

Here we mention, without theoretical proof, that the resulting semidiscrete ODE system (9) is a fourth-order accurate approximation to the original semilinear parabolic partial differential equation given in (1), supplemented with either Dirichlet or Neumann boundary conditions. Interested readers can find a similar theorem and proof in [11].

3. Fourth-Order Strongly A-Stable Rosenbrock Method

Various numerical methods can be used to solve the ODE system in (11). However, due to the stiffness of the problem, only stiffly stable methods are applicable; thus the choices are limited to the subclass of implicit methods, such as implicit linear multistep methods and implicit Runge-Kutta methods. It is known that A-stability is necessary for stiff problems, and in general strongly A-stable or even L-stable methods are preferred. A-stability was first introduced and defined by Dahlquist [12] as follows.

Definition 1. A numerical method is called A-stable if there is no restriction on the step size when it is applied to solve the test equation y′ = λy, where Re(λ) < 0.

For a single-step method such as a Runge-Kutta method, which applied to the test equation can be written as y_{n+1} = R(hλ) y_n, A-stability is equivalent to the condition |R(z)| ≤ 1 for any z with Re(z) ≤ 0, where R(z) is called the stability function of the method. Although successfully used in various applications, an A-stable linear multistep method has order at most 2. In fact, the second-order A-stable linear multistep method with optimal error constant is the Trapezium rule [12]. Further, Gourlay [13] pointed out that A-stability is necessary but not sufficient for very stiff systems, as an A-stable method may have an incorrect damping rate. For example, the widely used Trapezium rule has a stability function satisfying |R(z)| ≤ 1 for any Re(z) ≤ 0, but its damping factor |R(z)| converges to 1 as z → −∞. To overcome this difficulty, strong A-stability was introduced.
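The damping behaviour is easy to check numerically. The sketch below compares the trapezium rule, R(z) = (1 + z/2)/(1 - z/2), whose damping factor tends to 1 as z → −∞, with the θ-method for θ > 1/2, whose limit (1 − θ)/θ is strictly below 1 and which is therefore strongly A-stable:

```python
import numpy as np

# Trapezium rule: R(z) = (1 + z/2) / (1 - z/2); A-stable, but |R(z)| -> 1 as
# z -> -infinity, so very stiff components are barely damped.
def R_trap(z):
    return (1 + z / 2) / (1 - z / 2)

# theta-method: R(z) = (1 + (1-theta) z) / (1 - theta z); for theta > 1/2,
# |R(-infinity)| = (1 - theta) / theta < 1, giving strong A-stability.
def R_theta(z, theta=0.55):
    return (1 + (1 - theta) * z) / (1 - theta * z)

for z in (-1e2, -1e4, -1e6):
    print(abs(R_trap(z)), abs(R_theta(z)))
# |R_trap| -> 1 (no damping); |R_theta| -> 0.45/0.55, about 0.818 (damped)
```

This is exactly the defect Gourlay points to: the trapezium rule is A-stable yet propagates the stiffest error components with amplification factor arbitrarily close to 1.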

Definition 2. A numerical method is called strongly A-stable if it is A-stable and lim_{z→−∞} |R(z)| < 1.

It has been shown that a numerical method with strong A-stability is effective in damping numerical oscillations for highly stiff systems. For more details on the description and comparison of A-stability and strong A-stability, the reader is referred to [14].

An implicit Runge-Kutta method is usually unconditionally stable but suffers from high computational complexity, especially for nonlinear ODE systems. For example, if an s-stage fully implicit Runge-Kutta method is used to solve an ODE system with N equations, an algebraic system with sN unknowns must be solved during each time step. Therefore, fully implicit Runge-Kutta methods are too computationally expensive to be useful for large-scale problems. In the past several decades, efforts have been made to reduce the computational cost, resulting in various modified implicit Runge-Kutta methods, such as the diagonally implicit Runge-Kutta method, the singly diagonally implicit Runge-Kutta method, and the explicit-implicit Runge-Kutta method, to name a few. For more details of these methods, the reader is referred to [15–19].

To completely avoid solving a nonlinear algebraic system, the Rosenbrock method, a special class of Runge-Kutta method, has been proposed. Since the spatial discretization is fourth-order, our aim here is to develop a strongly A-stable fourth-order Rosenbrock method for solving the ODE system (11), so that the new algorithm is fourth-order accurate in both the temporal and spatial dimensions.

3.1. Rosenbrock Method for Scalar Equation

We first derive the Rosenbrock method based on an autonomous scalar equation , for which the initial condition is given as . Nonautonomous equations can be converted to autonomous form by adding an extra equation to the system. Previous research [20] suggests that it is unlikely that a 3-stage fourth-order Rosenbrock method with strong A-stability or L-stability exists, so here we focus on the development of a 4-stage Rosenbrock method. Suppose the numerical solution at time is known as ; the 4-stage Rosenbrock method calculates the numerical solution at as , where , and are coefficients to be determined and .
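The defining feature of (31), one linear solve per stage in place of a Newton iteration, can be illustrated with a classical two-stage, second-order, L-stable Rosenbrock scheme (ROS2 with gamma = 1 + 1/sqrt(2)). This is a hedged sketch of the general mechanism, not the paper's four-stage fourth-order method:

```python
import numpy as np

# Illustrative two-stage, second-order, L-stable Rosenbrock scheme (ROS2,
# gamma = 1 + 1/sqrt(2)); NOT the paper's four-stage method, whose coefficients
# are given in Table 1. Each stage is one *linear* solve with the Jacobian;
# no nonlinear iteration is needed.
GAMMA = 1.0 + 1.0 / np.sqrt(2.0)

def ros2_step(f, jac, y, h):
    L = 1.0 - GAMMA * h * jac(y)            # scalar problem: L is just a number
    k1 = h * f(y) / L
    k2 = (h * f(y + k1) - 2.0 * k1) / L
    return y + 1.5 * k1 + 0.5 * k2

def integrate(f, jac, y0, t_end, n):
    h, y = t_end / n, y0
    for _ in range(n):
        y = ros2_step(f, jac, y, h)
    return y

# convergence check on y' = -y^2, y(0) = 1, exact solution 1/(1+t)
f = lambda y: -y * y
jac = lambda y: -2.0 * y
e1 = abs(integrate(f, jac, 1.0, 1.0, 40) - 0.5)
e2 = abs(integrate(f, jac, 1.0, 1.0, 80) - 0.5)
print(e1 / e2)   # ≈ 4 for a second-order method
```

The paper's 4-stage method has the same shape, two extra stages and coefficients chosen so that the order conditions below yield order four.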

To extend the above algorithm to the nonautonomous problem , we first convert it to autonomous form by adding a new equation and then apply algorithm (31) to the augmented system. Note that the components corresponding to the last variable can be computed explicitly; thus we can derive the modified algorithm as follows: , where

A Rosenbrock method of order is obtained by choosing the coefficients in (31)-(32) so that the local error satisfies . This can be done either by using the so-called Butcher series [14] or by straightforward differentiation. Here we derive the order conditions for the fourth-order Rosenbrock method in a different way, using Taylor series. First, both and are expanded as Taylor series so that the difference can be expressed as a Taylor series; its coefficients of the terms up to are set to zero, which results in a set of equations involving the coefficients.

Given , the Taylor series of at is expanded aswhere , , , and . For the sake of simplicity, let and .

Letting in (32) we have

Similarly, letting in (32) we have

Combining it with the Taylor series of in (36), we obtain

Proceeding in the same way, we have

Inserting the four Taylor series into (35) and matching the coefficients of for on both sides of (31), we obtain the following order conditions: . Note that the set of order conditions obtained by using Butcher series [14, page 108] is a special case of these when .

It is also worth pointing out that if is a scalar function of a single variable, then , so conditions (45) and (46) can be combined into a single condition. However, since the method developed here is for ODE systems, both conditions must be satisfied individually.

Apparently there are 12 degrees of freedom in determining the method defined in (32), since there are 20 parameters while only 8 constraints are given by (40)–(47). To simplify the procedure of determining these parameters and to reduce the number of matrix inversions, we assume that for , so that only one matrix inversion is required to solve for all during each time step. After this simplification, there are 17 parameters left, and the order conditions are simplified as well.

It is known that the Rosenbrock method suffers from order reduction when applied to nonlinear parabolic partial differential equations; see [20, 21]. In order to avoid such a reduction of accuracy, the following two extra order conditions, simplified after taking into account the previous order conditions, should be satisfied [22]: , which imply . Solving (50) results in three distinct real roots, while stability analysis shows that only one of them ensures A-stability of the Rosenbrock method. More details regarding stability are provided later.

To perform step size control and error estimation, we need a third-order embedded formula [23], defined as , which uses the same values of the ’s as the Rosenbrock method in (31) but with different coefficients for the ’s. To ensure the existence of such an embedded formula, we need , where is a square matrix defined as , and .

Taking into account the order conditions, we obtain the simplified condition , which unfortunately contradicts the order condition given in (47). Thus one can either sacrifice efficiency by relaxing the condition for the existence of a third-order embedded formula or simply skip the condition and forgo step size control and error estimation. In this work, we are interested in the efficiency and order reduction of the Rosenbrock method, so the condition for step size control is bypassed.

Upon the determination of , there are 17 free parameters remaining, while the order conditions simplify to the following: , which are equivalent to those given in [14, page 108], where .

Since is a root of (50), , but , thus . Then the order conditions in (52)–(58) can be further simplified.

We now choose the coefficients with the aim of reducing the number of function evaluations. In particular, letting , (55)-(56) imply that , which further implies , so . Letting and , we can reduce the number of function evaluations by one.

We choose as a free parameter, so

By choosing as a free parameter, we have

Letting be a free parameter, we obtain . Finally, choosing as a free parameter, we have and

In summary, the coefficients of the Rosenbrock method are listed in Table 1.

Table 1: Coefficients of the 4th-order Rosenbrock method.

Note that one can also use these free parameters to eliminate some fifth-order truncation error terms, so the truncation error constant can be further optimized.

3.2. Extending the Rosenbrock Method to ODE Systems

We now extend the Rosenbrock algorithm to the ODE system . Suppose that is the numerical solution of (10) at time ; then is calculated as , where is the Jacobian matrix and the are defined through (34).

Obviously, each stage of this method consists of solving a linear system of equations. Since , only one LU-decomposition is required during each time step.
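A minimal sketch of the system version, again using the illustrative two-stage ROS2 scheme rather than the paper's four-stage method: because all diagonal coefficients are equal, the matrix I - gamma*h*J is formed once per step and reused by every stage solve (here a dense inverse stands in for the LU factorization a real code would use):

```python
import numpy as np

# Vector version of the illustrative two-stage ROS2 scheme (not the paper's
# method): with equal diagonal coefficients, the matrix I - gamma*h*J is
# built and "factorized" once per step and reused in every stage.
GAMMA = 1.0 + 1.0 / np.sqrt(2.0)

def ros2_system_step(f, J, y, h):
    n = len(y)
    M = np.eye(n) - GAMMA * h * J(y)      # one factorization per step...
    Minv = np.linalg.inv(M)               # (stand-in for an LU factorization)
    k1 = Minv @ (h * f(y))                # ...reused in every stage solve
    k2 = Minv @ (h * f(y + k1) - 2.0 * k1)
    return y + 1.5 * k1 + 0.5 * k2

# stiff linear test system y' = A y; h*|lambda_max| = 5 here, a step size at
# which explicit Euler would blow up
A = np.array([[-100.0, 1.0], [0.0, -0.5]])
f = lambda y: A @ y
J = lambda y: A
y = np.array([1.0, 1.0])
h, n = 0.05, 20
for _ in range(n):
    y = ros2_system_step(f, J, y, h)
print(y)   # stays bounded despite the stiffness
```

For the tridiagonal Jacobian arising from the compact discretization, the dense inverse above would be replaced by a single banded LU factorization, so each time step costs only a handful of O(n) solves.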

3.3. Stability of the Rosenbrock Method

We now study the stability of the Rosenbrock method defined by (31)-(32) with the coefficients given in Table 1. Applying the Rosenbrock method to the linear test equation , , and taking into account order condition (49), we obtain the stability function of the Rosenbrock method as , where .

In order for the Rosenbrock method to be strongly A-stable, we need

Apparently, , so (68) is satisfied.

To prove (67), one could substitute for and then show that for any . Although possible, such a proof is technically tedious. Here we instead use the boundary-locus method to draw the region of absolute stability of the method, which is shown in Figure 1. Clearly, the region of absolute stability contains the whole left half-plane , so the Rosenbrock method is A-stable. Combined with (69), this proves that the Rosenbrock method is strongly A-stable.
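The boundary-locus idea can also be mimicked numerically: for a stability function R analytic in the left half-plane, A-stability reduces by the maximum principle to checking |R(iy)| ≤ 1 along the imaginary axis, plus the limit at −∞ for strong A-stability. A sketch using the known stability function of the illustrative ROS2 scheme; the paper's own R(z) from Section 3.3 would be checked in exactly the same way:

```python
import numpy as np

# Numerical (strong) A-stability check for a rational stability function.
# For R analytic in Re(z) < 0, A-stability is equivalent to |R(iy)| <= 1 on
# the imaginary axis. Shown for the ROS2 stability function
#   R(z) = (1 + (1 - 2*gamma)*z) / (1 - gamma*z)**2,  gamma = 1 + 1/sqrt(2),
# an illustrative stand-in for the paper's fourth-order R(z).
gamma = 1.0 + 1.0 / np.sqrt(2.0)

def R(z):
    return (1.0 + (1.0 - 2.0 * gamma) * z) / (1.0 - gamma * z) ** 2

ys = np.linspace(-1e3, 1e3, 200001)
boundary_max = np.abs(R(1j * ys)).max()
print(boundary_max)        # at most 1 (attained at y = 0): A-stable
print(abs(R(-1e9)))        # tends to 0: L-stable, hence strongly A-stable
```

For this particular R the check can also be done by hand, since |1 + (1 - 2γ)iy|² equals 1 + 2γ²y² while the denominator contributes an extra γ⁴y⁴; the numerical scan is the practical substitute when R is a quartic rational function as in (66).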

Figure 1: Absolute stable region of the Rosenbrock method.

4. Numerical Examples and Discussions

We solve three numerical examples to demonstrate the accuracy and efficiency of the new algorithm. The first and second examples are reaction-diffusion equations supplemented with Dirichlet and Neumann boundary conditions, respectively; they are solved by the new method to demonstrate its fourth-order accuracy in time and space. The third example demonstrates that the new algorithm is free of order reduction and more efficient than several other existing fourth-order Rosenbrock methods; its rate of convergence is compared with those of several fourth-order methods which suffer from order reduction. In what follows, we use HOC-ROSB4 to denote the new algorithm developed in this paper and HOC-GRK4A to denote the method that combines the high-order compact finite difference scheme with the fourth-order method GRK4A [23]. HOC-SHAMP denotes the algorithm that combines the high-order compact finite difference scheme with the fourth-order Rosenbrock method of Shampine [24]. HOC-LSTAB denotes the algorithm that combines the high-order compact finite difference scheme in space with the L-stable fourth-order Rosenbrock method [14] in time. Finally, we use HOC-VELD to denote the algorithm that combines the high-order compact finite difference scheme in space with the fourth-order Rosenbrock method proposed in [25].

4.1. Example  1

Consider the following reaction-diffusion equation: , for which the analytical solution is .

We first show that the compact finite difference scheme for the spatial discretization and the compact boundary condition treatment are fourth-order accurate. To do so, we fix so that the truncation error from the time discretization is negligible. The results in Table 2 clearly demonstrate fourth-order convergence in space, as the maximal error is reduced by a factor of (roughly) 16 when the grid size is halved.

Table 2: Numerical results of example 1 by HOC-ROSB4 with .

We then show the fourth-order accuracy in time by fixing . The data in Table 3 confirm that the Rosenbrock method is fourth-order accurate in time, so the new algorithm is free of order reduction.

Table 3: Numerical results of example 1 by HOC-ROSB4 with .

Since the method is fourth-order accurate in both the temporal and spatial dimensions, we can write the leading term of the truncation error as , where and are two constants. To obtain optimal performance, we can adjust the ratio to balance the two error terms. To do so, we adjust and so that the two error terms are balanced, letting , from which we can estimate the optimal ratio as . To estimate , we solve the example using a small and then . Similarly, can be estimated using the same method.
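The bookkeeping in this paragraph can be condensed into two one-line helpers; the numbers below are hypothetical and serve only to illustrate the arithmetic:

```python
import numpy as np

# For a method of order p, the observed order from two runs with halved step is
# log2(e_coarse / e_fine); balancing the error C1*h^4 + C2*tau^4 gives the
# optimal step ratio tau/h = (C1/C2)**(1/4). Values below are hypothetical.
def observed_order(e_coarse, e_fine):
    return np.log2(e_coarse / e_fine)

def optimal_ratio(C1, C2):
    return (C1 / C2) ** 0.25

print(observed_order(1.6e-5, 1.0e-6))   # ≈ 4: consistent with a fourth-order method
print(optimal_ratio(16.0, 1.0))         # 2.0: tau = 2*h balances the two error terms
```

The same two formulas produce the observed orders quoted for Tables 2 to 7 and the optimal ratios used in Examples 1 and 2.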

A simple calculation based on the numerical results in Tables 2 and 3 suggests that the optimal ratio for this example is . We then solve the reaction-diffusion equation using this optimal ratio to show that the new algorithm is fourth-order accurate in both the temporal and spatial dimensions. The numerical results in Table 4 indicate fourth-order accuracy in both dimensions, as the maximal error is reduced by a factor of (roughly) 16 when and are halved simultaneously. We also note that the algorithm is very efficient, as the maximal error drops to the level of while and are still reasonably large.

Table 4: Numerical results of example 1 by HOC-ROSB4 with .
4.2. Example  2

We solve the following semilinear parabolic partial differential equation with Neumann boundary conditions: , for which the analytical solution is .

Notice that the boundary conditions are approximated by the compact fourth-order boundary scheme given in (18)-(19), so the compactness of the resulting linear system is preserved; consequently, in each time step only a tridiagonal system is solved for each , and the solution procedure is very efficient. However, we point out that some tedious work is needed to form the semidiscrete ODE system, for instance, the first and last rows of the matrix in (20) and the first and last components of in (28)-(29).

To obtain optimal performance, we choose the optimal ratio so as to balance the two error terms. The data in Table 5 show that the method is fourth-order accurate in both time and space, as the maximum error is reduced by a factor of (roughly) 16 when and are halved.

Table 5: Numerical results of example 2 by HOC-ROSB4 with .

It is worth mentioning that one could also use a one-sided formula to approximate the Neumann boundary condition, which would reduce the effort needed to derive the semidiscrete ODE system but would destroy the tridiagonal structure of the matrix and consequently harm the efficiency of the algorithm. Another side effect of a one-sided approximation on the boundary is a potential stability issue: the modifications to the matrix may introduce positive eigenvalues of .

4.3. Example  3

In this example we compare the new algorithm with several other fourth-order Rosenbrock methods in terms of rate of convergence in time and efficiency. The nonlinear parabolic partial differential equation to be solved is defined as , for which the exact solution is .

The data in Table 6 show that HOC-ROSB4 is fourth-order accurate and thus free of order reduction, while the other four fourth-order Rosenbrock methods, which clearly suffer from order reduction, show rates of convergence ranging from 3.0 to 3.25. Note that is fixed and the same spatial discretization is used for all five methods, so the demonstrated rate of convergence is in time. One can see that the HOC-LSTAB method has the smallest error constant, while the HOC-ROSB4 method has the largest. However, owing to the freedom in choosing the coefficients of a Rosenbrock method, a better error constant can be achieved by eliminating several fifth-order truncation error terms.

Table 6: Comparison of the rate of convergence among the five fourth-order Rosenbrock methods for example 3 with . () represents the maximum error of the numerical solution obtained by using .

Finally, we show that HOC-ROSB4 is the most efficient method. For consistency, we adjust and for each method to reach the same error level and record the average CPU time over 5 simulation runs. Since all five methods use the same spatial discretization, the same is used for all of them; hence we adjust only. The comparison results in Table 7 clearly indicate that the new method is the most efficient. The higher efficiency evidently stems from the fact that the new method is free of order reduction; hence a larger time step can be used to reach the same error level.

Table 7: Comparison of the efficiency among the five fourth-order Rosenbrock methods for example 3.

5. Conclusion

An efficient fourth-order numerical algorithm that combines the Padé approximation in space with a fourth-order accurate Rosenbrock method in time has been proposed in this paper. Our investigation shows that many widely used fourth-order Rosenbrock methods (A-stable or L-stable) suffer from order reduction when they are used to solve nonlinear parabolic partial differential equations. To avoid order reduction, extra order conditions are required, which are implemented in this paper to develop the new algorithm. It has also been shown [26] that the extra condition that resolves order reduction contradicts the L-stability of the Rosenbrock method, so there is no L-stable Rosenbrock method that is also free of order reduction. The new method can be used to solve nonlinear parabolic partial differential equations with all types of boundary conditions; however, for Neumann or Robin boundary conditions, extra effort is needed to form the semidiscrete ODE system. Two numerical examples were solved to demonstrate that the new method is fourth-order accurate in both time and space, while the third example shows that the new method is free of order reduction and very efficient. In the future, we plan to extend the new method to multidimensional problems supplemented with various types of boundary conditions.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the individual Discovery Grant program. The author gratefully acknowledges the financial support from NSERC.

References

  1. Y. Adam, “Highly accurate compact implicit methods and boundary conditions,” Journal of Computational Physics, vol. 24, no. 1, pp. 10–22, 1977.
  2. A. R. Mitchell and D. F. Griffiths, The Finite Difference Method in Partial Differential Equations, John Wiley & Sons, New York, NY, USA, 1980.
  3. J. I. Ramos, “Linearization methods for reaction-diffusion equations: multidimensional problems,” Applied Mathematics and Computation, vol. 88, no. 2-3, pp. 225–254, 1997.
  4. J. I. Ramos, “Implicit, compact, linearized θ-methods with factorization for multidimensional reaction-diffusion equations,” Applied Mathematics and Computation, vol. 94, no. 1, pp. 17–43, 1998.
  5. P. C. Chu and C. Fan, “A three-point combined compact difference scheme,” Journal of Computational Physics, vol. 140, no. 2, pp. 370–399, 1998.
  6. V. T. Luan and A. Ostermann, “Exponential Rosenbrock methods of order five—construction, analysis and numerical comparisons,” Journal of Computational and Applied Mathematics, vol. 255, no. 1, pp. 417–431, 2014.
  7. V. T. Luan and A. Ostermann, “Explicit exponential Runge-Kutta methods of high order for parabolic problems,” Journal of Computational and Applied Mathematics, vol. 256, pp. 168–179, 2014.
  8. A.-K. Kassam and L. N. Trefethen, “Fourth-order time-stepping for stiff PDEs,” SIAM Journal on Scientific Computing, vol. 26, no. 4, pp. 1214–1233, 2005.
  9. H. Rosenbrock, “Some general implicit processes for the numerical solution of differential equations,” The Computer Journal, vol. 5, pp. 329–330, 1963.
  10. C. F. Haines, “Implicit integration processes with error estimate for the numerical solution of differential equations,” The Computer Journal, vol. 12, no. 2, pp. 183–188, 1969.
  11. W. Liao and Y. Yan, “Singly diagonally implicit Runge-Kutta method for time-dependent reaction-diffusion equation,” Numerical Methods for Partial Differential Equations, vol. 27, no. 6, pp. 1423–1441, 2011.
  12. G. G. Dahlquist, “A special stability problem for linear multistep methods,” BIT Numerical Mathematics, vol. 3, pp. 27–43, 1963.
  13. A. R. Gourlay, “A note on trapezoidal methods for the solution of initial value problems,” Mathematics of Computation, vol. 24, pp. 629–633, 1970.
  14. E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer, Berlin, Germany, 2nd edition, 1996.
  15. R. Alexander, “Diagonally implicit Runge-Kutta methods for stiff O.D.E.’s,” SIAM Journal on Numerical Analysis, vol. 14, no. 6, pp. 1006–1021, 1977. View at Publisher · View at Google Scholar · View at MathSciNet
  16. K. Burrage, J. C. Butcher, and F. H. Chipman, “An implementation of singly-implicit Runge-Kutta methods,” BIT Numerical Mathematics, vol. 20, no. 3, pp. 326–340, 1980. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  17. H. Claus, “Singly-implicit Runge-Kutta methods for retarded and ordinary differential equations,” Computing, vol. 43, no. 3, pp. 209–222, 1990. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  18. W. Liniger and R. A. Willoughby, “Efficient integration methods for stiff systems of ordinary differential equations,” SIAM Journal on Numerical Analysis, vol. 7, pp. 47–66, 1970. View at Publisher · View at Google Scholar · View at MathSciNet
  19. J. G. Verwer, E. J. Spee, J. G. Blom, and W. Hundsdorfer, “A second-order Rosenbrock method applied to photochemical dispersion problems,” SIAM Journal on Scientific Computing, vol. 20, no. 4, pp. 1456–1480, 1999. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus
  20. J. Lang and J. Verwer, “ROS3P—an accurate third-order Rosenbrock solver designed for parabolic problems,” BIT Numerical Mathematics, vol. 41, no. 4, pp. 731–738, 2001. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  21. W. H. Hundsdorfer, “Stability and B-convergence of linearly implicit Runge-Kutta methods,” Numerische Mathematik, vol. 50, no. 1, pp. 83–95, 1986. View at Publisher · View at Google Scholar
  22. C. Lubich and A. Ostermann, “Linearly implicit time discretization of non-linear parabolic equations,” IMA Journal of Numerical Analysis, vol. 15, no. 4, pp. 555–583, 1995. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  23. P. Kaps and P. Rentrop, “Generalized Runge-Kutta methods of order four with stepsize control for stiff ordinary differential equations,” Numerische Mathematik, vol. 33, no. 1, pp. 55–68, 1979. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  24. L. F. Shampine, “Implementation of Rosenbrock methods,” ACM Transactions on Mathematical Software, vol. 8, no. 2, pp. 93–113, 1982. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  25. M. van Veldhuizen, “D-stability and Kaps-Rentrop-methods,” Computing. Archives for Scientific Computing, vol. 32, no. 3, pp. 229–237, 1984. View at Publisher · View at Google Scholar · View at MathSciNet · View at Scopus
  26. T. D. Bui, “On an L-stable method for stiff differential equations,” Information Processing Letters, vol. 6, no. 5, pp. 158–161, 1977. View at Google Scholar · View at MathSciNet · View at Scopus