Research Article | Open Access
Firdaus E. Udwadia, Artin Farahani, "Accelerated Runge-Kutta Methods", Discrete Dynamics in Nature and Society, vol. 2008, Article ID 790619, 38 pages, 2008. https://doi.org/10.1155/2008/790619
Accelerated Runge-Kutta Methods
Standard Runge-Kutta methods are explicit, one-step, and generally constant step-size numerical integrators for the solution of initial value problems. Such integration schemes of orders 3, 4, and 5 require 3, 4, and 6 function evaluations per time step of integration, respectively. In this paper, we propose a set of simple, explicit, and constant step-size Accelerated Runge-Kutta methods that are two-step in nature. For orders 3, 4, and 5, they require only 2, 3, and 5 function evaluations per time step, respectively. Therefore, they are more computationally efficient at achieving the same order of local accuracy. We present here the derivation and optimization of these accelerated integration methods. We include the proof of convergence and stability under certain conditions, as well as stability regions for finite step sizes. Several numerical examples are provided to illustrate the accuracy, stability, and efficiency of the proposed methods in comparison with standard Runge-Kutta methods.
Most of the ordinary differential equations (ODEs) that model systems in nature and society are nonlinear and, as a result, are often extremely difficult, or sometimes impossible, to solve analytically with presently available mathematical methods. Therefore, numerical methods are often very useful for understanding the behavior of their solutions. A very important class of ODEs is the initial value problem (IVP)

$\dot{x} = f(x, t), \quad x(t_0) = x_0,$  (1.1)

where $x$, $\dot{x}$, and $x_0$ are $n$-dimensional vectors, and $t$ denotes the independent variable, time.
A constant step numerical integrator approximates the solution of the IVP defined by (1.1) at the point $t_1 = t_0 + h$ by $x_1$. This is referred to as “taking a step.” Similarly, at step $i+1$, the numerical integrator approximates the solution at the point $t_{i+1} = t_i + h$ by $x_{i+1}$. By taking steps successively, the approximate solutions $x_1, x_2, \ldots$ at the points $t_1, t_2, \ldots$ are generated. The accuracy of a numerical integrator is primarily determined by the order of the local error it generates in taking a step. The local error at the end of step $i+1$ is the difference $\tilde{x}(t_{i+1}) - x_{i+1}$, where $\tilde{x}$ is the local solution that satisfies the local IVP

$\dot{\tilde{x}} = f(\tilde{x}, t), \quad \tilde{x}(t_i) = x_i.$  (1.2)
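As a concrete illustration of “taking a step” and of the local error, the sketch below implements the simplest possible step function (forward Euler, used here purely as a stand-in, not a method from this paper) for a scalar IVP, and measures the local error of a single step against the exact local solution.

```python
import math

# A minimal sketch of fixed-step integration, assuming a scalar IVP
# x' = f(x, t), x(t0) = x0. Forward Euler stands in for the step function;
# it is used only to illustrate "taking a step" and the local error.

def euler_step(f, t, x, h):
    """Advance the approximation from t to t + h with one Euler step."""
    return x + h * f(x, t)

def integrate(f, t0, x0, h, n_steps, step=euler_step):
    """Take n_steps successive steps; return the approximations x_1, ..., x_N."""
    t, x, xs = t0, x0, []
    for _ in range(n_steps):
        x = step(f, t, x, h)
        t += h
        xs.append(x)
    return xs

# Local error of a single step for x' = x, x(0) = 1: the local solution over
# [0, h] is e^h, so Euler's local error is e^h - (1 + h), about h^2 / 2.
f = lambda x, t: x
h = 1e-3
local_err = math.exp(h) - euler_step(f, 0.0, 1.0, h)
```

Since Euler's local error is $O(h^2)$, it is a first-order method in the sense defined above; the RK and ARK methods of this paper reduce the local error to $O(h^4)$ through $O(h^6)$ at the cost of more function evaluations per step.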
Standard Runge-Kutta (RK) methods are a class of one-step numerical integrators. That is, the approximate solution $x_{i+1}$ is calculated using $x_i$ and the function $f$. An RK method that has a local error of $O(h^{p+1})$ is said to be of order $p$ and is denoted by RK$p$. It has been established that RK3, RK4, and RK5 require 3, 4, and 6 function evaluations per time step of integration, respectively [1–3].
In this paper, we propose a new and simple set of numerical integrators named the accelerated Runge-Kutta (ARK) methods. We derive these methods for the autonomous version of (1.1) given by

$\dot{x} = f(x), \quad x(t_0) = x_0,$  (1.3)

where $x$, $\dot{x}$, and $x_0$ are $n$-dimensional vectors, and $t$ denotes the independent variable, time. We assume that $f$ is a (sufficiently smooth) Lipschitz continuous function, and hence the solution of the IVP given by (1.3) is unique.
Accelerated Runge-Kutta methods are inspired by standard Runge-Kutta methods but are two-step in nature. That is, the approximate solution $x_{i+1}$ is calculated using $x_i$, $x_{i-1}$, and the function $f$. ARK$p$ denotes an ARK method whose local error is of $O(h^{p+1})$. We will see in the next section that ARK3, ARK4, and ARK5 require 2, 3, and 5 function evaluations per time step of integration, respectively. Since function evaluations are often the most computationally expensive part of numerically solving differential equations (see Section 4 for further details), ARK methods are expected to be more computationally efficient than standard RK methods.
Various authors have attempted to increase the efficiency of RK methods. As a result, numerous variations of two-step explicit RK methods exist today. Byrne and Lambert proposed 3rd- and 4th-order two-step Pseudo RK methods. Byrne later proposed a set of methods that minimize a conservative bound on the error, and Gruttke proposed a 5th-order Pseudo RK method. Costabile introduced the Pseudo RK methods of II species, which involve many parameters to improve local accuracy. Jackiewicz et al. considered the two-step Runge-Kutta (TSRK) methods up to order 4. Jackiewicz and Tracogna studied a more general class of these TSRK methods and derived order conditions up to order 5 using Albrecht's method. Recently, Wu has proposed a set of two-step methods which make use of the derivative of the differential equation.
The accelerated Runge-Kutta (ARK) methods presented here are along the lines first proposed in earlier work and are simple and straightforward. They could be considered special cases of the more general TSRK methods. The simplicity of the ARK methods' construction not only reduces computational overhead, but also leads to a simpler and more elegant set of order equations that can be analyzed directly and more precisely. The ARK methods also benefit from a more precise and effective optimization technique, which transforms them into higher-order methods for some problems. In addition, we present here a complete analysis of all aspects of the ARK methods, including their motivation, derivation, accuracy, speedup, and stability.
In Section 2, we explain the central idea of ARK methods, followed by a description of how to derive and optimize ARK3, ARK4, ARK4-4 (a more accurate ARK4), and ARK5 in Sections 2.1, 2.2, 2.3, and 2.4, respectively. In Section 3, we use seven standard initial value problems to compare the accuracy of ARK and RK methods. In Section 4, we present the computational speedup of ARK methods over the RK methods using some of the same standard problems. The hallmark of our ARK method is its simplicity. Tracogna and Welfert, building on earlier methods, use a more sophisticated approach based on B-series to develop a two-step TSRK algorithm for numerical integration. In Section 5, we compare the numerical performance of our method with theirs using the above-mentioned standard initial value problems. Section 6.1 deals with stability and convergence of ARK methods at small step sizes, and Section 6.2 compares the stability regions of ARK and RK methods at finite step sizes. We end with our conclusions and findings, and include in the appendices the description of the standard initial value problems used and the Maple code that derives the ARK3 method.
2. Central Idea of Accelerated Runge-Kutta Methods
The idea behind the ARK methods proposed herein is perhaps best conveyed by looking at RK3 and ARK3. An example of the RK3 method solving the IVP given by (1.1) computes the approximate solution $x_{i+1}$ at $t_{i+1}$ from the approximate solution $x_i$ at $t_i$ along with 3 function evaluations $k_1$, $k_2$, and $k_3$. Hence, RK3 is a one-step method with a computational cost of 3 function evaluations per time step.
An example of the accelerated RK3 (ARK3) method solving the IVP given by (1.3) computes the approximate solution $x_{i+1}$ at time $t_{i+1}$ from the approximate solutions $x_i$ and $x_{i-1}$ at times $t_i$ and $t_{i-1}$, respectively, along with 2 function evaluations $k_1$ and $k_2$ and 2 reused function evaluations $\bar{k}_1$ and $\bar{k}_2$. In each step of integration from point $t_i$ to point $t_{i+1}$, only $k_1$ and $k_2$ are calculated, while $\bar{k}_1$ and $\bar{k}_2$ are reused from the previous step's $k_1$ and $k_2$. Figure 1 illustrates this idea. The reuse of previously calculated data reduces the number of function evaluations at each time step of integration from 3 for RK3 to 2 for ARK3, making ARK3 more computationally efficient than RK3. This increased efficiency is shared by ARK4 and ARK5 when compared to RK4 and RK5, respectively, and is the central idea behind the ARK methods.
It is important to note that at the first step, there is no previous step. Therefore, ARK methods cannot be self-starting. A one-step method must supply the approximate solution $x_1$ at the end of the first step. The one-step method must be of sufficient order to ensure that the difference $x(t_1) - x_1$ is of order $O(h^p)$ or higher; in this case, the ARK$p$ method will be convergent of order $p$ (see Section 6.1, Theorem 6.4). For example, the ARK3 method can be started by the RK3 or RK4 methods. This extra computation occurs only in the first step of integration; for all subsequent steps, ARK methods bear less computational cost than their RK counterparts.
The general form of the ARK methods presented in this paper combines $x_i$, $x_{i-1}$, and a set of staged function evaluations. Here, $s$ denotes the number of function evaluations performed at each time step and increases with the order of local accuracy of the ARK method. In each step of integration, only the current step's function evaluations are computed, while those of the previous step are reused.
To determine the appropriate values of the method's coefficient parameters, a three-part process is performed. (The first and second parts of this process are shown in detail for ARK3.) First, the ARK expression (2.5) is expanded using the Taylor series expansion. Second, after some algebraic manipulation, this expansion is equated to the Taylor series expansion of the local solution at $t_{i+1}$ (see (1.2)). This results in the system of nonlinear algebraic order equations (2.8a)–(2.13a). Third, we attempt to solve as many order equations as possible, because the highest power of $h$ for which all of the order equations are satisfied determines the order of the resulting ARK method. The above process requires a great deal of algebraic and numeric calculation, which was mainly performed using Maple (see Appendix B).
In the following, we will show what order of accuracy an ARK method can achieve with a given number of function evaluations $s$. A second-order method, ARK2, is possible, but it is omitted here because such a low order of accuracy is of limited use in accurately solving differential equations; we therefore consider the cases of larger $s$ below.
For $s = 2$, from (2.5), we have 6 parameters. The derived order equations are (2.8a)–(2.8f). Clearly, (2.8f) cannot be satisfied, so it is impossible to achieve a 4th-order method with 2 function evaluations per time step. It is, however, possible to achieve a 3rd-order method ARK3 with 2 function evaluations per time step by satisfying (2.8a)–(2.8d). We solve these equations in terms of four of the parameters and let the remaining two become free parameters.
We use sets of free parameters to optimize the accuracy of an ARK method by minimizing its local error. The local error is represented by the higher-order equations (order 4 and above in the case of ARK3). The closer these order equations are to being satisfied, the smaller the local error is. Therefore, we try to minimize the difference between the left- and right-hand sides of these order equations. This can be done in several different ways. One popular technique is to minimize a general bound on the error. A second technique is to minimize a norm of the error. A third technique is to satisfy as many individual order equations as possible. In general, there is no way of knowing a priori which technique yields the most accurate ARK method, since the error ultimately depends on the problem at hand. We have chosen the last technique for two reasons. One, it is straightforward to implement exactly. Two, for some classes of problems, the differential term multiplying an unsolved higher-order equation may vanish, which can effectively increase the order of the method. For instance, the differential term multiplying (2.8f) may vanish, and provided (2.8e) is already satisfied, the resulting ARK method (with 2 function evaluations) becomes a 4th-order method.
Using the two free parameters, we can solve two higher-order equations: (2.8e) and any one of the two 5th-order equations. However, we cannot choose (2.9b), because proving stability requires condition (6.16) to hold. Therefore, we choose (2.8e) and (2.9a), which leads to Set 2 of Table 1. We also present a set in which one of the parameters is set to zero, because doing so can increase the stability (see Section 6) and speed (see Section 4) of an ARK method; solving (2.8a)–(2.8e) under this restriction leads to Set 3 of Table 1. Set 1 of Table 1 is the only set of parameters here that is not found by solving all possible higher-order equations. It is found by fixing one parameter to zero, solving (2.8a)–(2.8d) for three of the remaining parameters, and searching for a value of the last free parameter that leads to a highly accurate ARK method in solving some standard IVPs (see Section 3 and Appendix A).
For $s = 3$, we have 8 parameters, along with the 10 order equations (2.10a)–(2.10j).
Since there are 10 order equations and only 8 parameters, we cannot achieve a 5th-order method. However, we can achieve a 4th-order method ARK4 by solving the first 6 order equations (2.10a)–(2.10f). Therefore, ARK4 requires 3 function evaluations per time step. We have chosen to solve these equations for six of the parameters while keeping two as free parameters, because all the order equations are linear in the solved-for parameters, which makes it straightforward to solve for them.
We use the two free parameters to optimize ARK4's accuracy by solving two of the higher-order equations. We choose (2.10g) and (2.10h) because they were used in deriving the other equations. Set 2 and Set 3 of Table 2 are the two solutions. Set 1 of Table 2 is a set in which one parameter is set to zero; it is produced by solving (2.10b)–(2.10g). It is a set of approximate values and naturally contains some round-off error. The effects of this round-off error (in parameter values) can be significant if the error produced by the ARK methods is small enough to be of the same order. Therefore, to measure the accuracy of the ARK methods correctly (see Section 3), we have calculated the parameter values using 25 digits of computational precision in Maple. Note, however, that such high precision is not required for the ARK methods to be effective.
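The effect of round-off in tabulated parameter values can be demonstrated numerically. The sketch below (which uses Kutta's classical RK3 tableau as a stand-in, since the ARK parameter tables are not reproduced here) perturbs one integrator weight by $10^{-4}$: the perturbation acts like a consistency error and leaves an error floor of roughly that magnitude, which no reduction of the step size can remove.

```python
import math

# Illustration (with a stand-in RK3 tableau, not the ARK tables): a small
# perturbation of one weight leaves an irreducible error floor.
def rk3_step(f, t, x, h, w1=1.0 / 6.0):
    # Kutta's third-order method; the weight on k1 is exactly 1/6.
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x - h * k1 + 2.0 * h * k2, t + h)
    return x + h * (w1 * k1 + (4.0 / 6.0) * k2 + (1.0 / 6.0) * k3)

def final_error(w1, h):
    """Integrate x' = x from t = 0 to 1 and return the error at t = 1."""
    f = lambda x, t: x
    t, x = 0.0, 1.0
    for _ in range(round(1.0 / h)):
        x = rk3_step(f, t, x, h, w1)
        t += h
    return abs(x - math.e)

exact_err = final_error(1.0 / 6.0, 1e-3)             # ~ h^3: very small
perturbed_err = final_error(1.0 / 6.0 + 1e-4, 1e-3)  # floor ~ 1e-4 * (e - 1)
```

With the exact weight the global error is at the $h^3$ level, while the perturbed weight stalls near $10^{-4}$, which is why coefficients computed to high precision matter once the method's own error becomes comparably small.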
For $s = 4$, we have 10 parameters. Since there are exactly 10 order equations, it would seem that we can achieve a 5th-order method. However, the only two solutions of these order equations both produce unstable methods (see condition (6.16)).
Therefore, we set one of the parameters to zero, which leaves us with 8 parameters. We solve (2.11b)–(2.11f) for five of them and solve three of the four 5th-order equations using the three remaining free parameters. The result is a 4th-order method called ARK4-4 that is more accurate than ARK4 for most of the IVPs investigated herein. The “-4” indicates that this 4th-order method requires 4 function evaluations, unlike ARK4, which requires only 3.
Set 1 of Table 3 satisfies the higher-order equations (2.11g), (2.11h), and (2.11j). Set 3 of Table 3 satisfies the higher-order equations (2.11g), (2.11h), and (2.11i). Set 2 of Table 3 satisfies the higher-order equations (2.11g), (2.11j), and (2.12b). To understand where (2.12b) comes from, we first need to explain a unique property of the order equations.
The 5th-order equations are fundamentally different from the lower-order equations. If the 5th-order equations for ARK4-4 are derived with a two-dimensional coupled ODE system, we get the four order equations presented above, that is, (2.11g)–(2.11j). However, if the derivation is done with a one-dimensional ODE or with a two-dimensional uncoupled ODE system, we get only three equations. Hence, coupling of the ODE system affects the 5th-order equations but not the lower-order ones. The difference between the two sets of equations is that (2.11h) and (2.11i) are replaced by their weighted combination (2.12b) = 2 × (2.11h) + (2.11i). Therefore, a parameter set that satisfies (2.11h) and (2.11i) also automatically satisfies (2.12b), but not vice versa. To ensure that the order equations are valid for all ODE systems, we have considered a two-dimensional coupled ODE system in our derivations.
Although we have not derived the order equations based on a 3- or higher-dimensional coupled ODE system, we hypothesize that the 5th-order equations would remain the same as (2.11g)–(2.11j). We base this hypothesis on two observations. One, unlike moving from a one-dimensional system to a two-dimensional system, where the coupling property emerges, no new property emerges when we move from a two-dimensional system to a three-dimensional system, so there is no perceivable reason for the order equations to change. Two, using a two-dimensional coupled ODE system, we have derived the same standard RK order equations that others have reported.
With that explanation, it might seem counterintuitive to use a parameter set that satisfies (2.12b); after all, this equation only applies to uncoupled systems. However, the thinking is that for uncoupled systems, the resulting ARK4-4 becomes a 5th-order method, while for coupled systems it yields a very accurate 4th-order method, because it satisfies two of the four 5th-order equations as well as a weighted average of the other two. This is confirmed by our experiments in solving standard problems (see Section 3), since the accuracy of Set 2 is comparable to that of Set 1 and Set 3.
For $s = 5$, we have 12 parameters, along with the order equations (2.13a)–(2.13l).
Solving the first 10 equations, (2.13a)–(2.13j), gives us a 5th-order method ARK5, which requires 5 function evaluations per time step. We solve the first 8 equations for eight of the parameters and solve equations (2.13i)–(2.13l) for the remaining four. Among the multiple solutions of this system, we have found Set 3 in Table 4 to be the most accurate in solving the standard problems given in Appendix A. Setting one of the parameters to zero results in 10 parameters and 9 order equations; we use the one free parameter to solve (2.13k), which leads to numerous solutions. Among them, Set 1 and Set 2 in Table 4 are the most accurate in solving the standard initial value problems in Section 3.