Research Article  Open Access
Jui-Ling Yu, "Adaptive Optimal Stage Runge-Kutta Methods for Solving Reaction-Diffusion-Chemotaxis Systems", Journal of Applied Mathematics, vol. 2011, Article ID 389207, 25 pages, 2011. https://doi.org/10.1155/2011/389207
Adaptive Optimal Stage Runge-Kutta Methods for Solving Reaction-Diffusion-Chemotaxis Systems
Abstract
We present a class of numerical methods for the reaction-diffusion-chemotaxis system, which is significant for biological and chemical pattern formation problems. Efficient and reliable numerical algorithms are essential for generating patterns from reaction-diffusion-chemotaxis systems. Along with the method of lines, implicit or semi-implicit schemes are the typical time-stepping solvers used to relax the time step constraints imposed by the stability condition. However, these two classes of schemes are usually difficult to implement. In this paper, we propose an adaptive optimal time-stepping strategy for the explicit stage Runge-Kutta method to solve reaction-diffusion-chemotaxis systems. Instead of relying on empirical approaches to control the time step size, variable time step sizes are given explicitly. Moreover, theorems about the stability and convergence of the algorithm are provided to analyze its robustness and efficiency. Numerical results on a testing problem and a real application problem are presented.
1. Introduction and Background
The reaction-diffusion-chemotaxis system is the most common model used to describe chemical and biological processes and the most successful model for describing the generation of pattern formations. Numerical simulation of such systems is necessary because complicated chemotaxis terms and highly nonlinear reaction terms make analytical solutions difficult to find for realistic systems. Even though some of the early chemotaxis models admitted analytical solutions or one-dimensional simulations with reduced models [1, 2], the development of numerical methods for solving chemotaxis systems does not seem sufficient compared with other types of PDE systems. Since interesting patterns may emerge and evolve before they reach steady state [3–8], particular heed must be paid to the numerical methods to ensure that the numerically simulated patterns faithfully obey the underlying model equations [5, 8].
Chemotaxis is a directed motion exhibited by motile organisms, which move toward or away from certain chemicals or environments. It has been observed in microorganisms, plants, and animals, and even in the blood vessels of cancer tumors. For bacteria, chemotaxis was first demonstrated in the laboratory by Wilhelm Pfeffer and his associates in E. coli in 1884 [9]. Besides their random motion, bacteria also show directed random movements in response to higher concentrations of oxygen. Keller and Segel [1] constructed the first differential chemotaxis models. They extended the reaction-diffusion model with a convection term to simulate chemotaxis. Based on their work, many models have been constructed, especially for the development of bacterial colonies [3–8, 10].
The general form of the reaction-diffusion-chemotaxis system is given by [11].
Here, represents the microorganism (bacteria) density; stands for the chemoattractant concentration and is given or obtained from another equation. is the reaction term, which models the interactions between the chemicals. The diffusion term, , imitates the random motion of particles with positive constant diffusivity . describes the directed random movement of particles modulated by the concentration gradient of the chemoattractant. The chemotaxis parameter function, , indicates the strength of the chemotaxis and is given by the Keller-Segel model, where and are positive constants and is an integer power. Observe that , which indicates that particles tend to stay where they are when the local concentration of chemoattractant is large.
The basic idea of illustrating patterns with reaction-diffusion-chemotaxis systems is to demonstrate that the eigenvalues of the local linearization that are responsible for growth (decay) can have positive (negative) real parts. This can lead to growth at some places and decay at others, resulting in spatially inhomogeneous patterns [12, 13]. Based on this idea, we consider the method of lines (MOL) approach in this paper, because it is not only the first step in the construction of many numerical methods but also provides a good explanation of the generation of patterns through the positive (growth modes) and negative (decay modes) eigenvalues. Other numerical methods used on the chemotaxis model include the fractional step method [14], the Alternating Direction Implicit (ADI) method [15, 16], and the optimal two-stage scheme [5, 8], listed here for readers' reference.
The MOL approach reduces the partial differential equations to a system of ordinary differential equations (ODEs) by approximating the spatial derivatives with one of numerous spatial discretization methods, for example, finite difference, finite element, or spectral methods. The solution can then be obtained by applying a suitable time integrator. Depending on the features of the spatial discretization, the computational work can be reduced by developing more sophisticated, robust, and efficient adaptive time integrators [5, 8].
In this paper, we use standard finite differences: central differences for the Laplacian operator and upwind differences for the chemotaxis term. The reason for adopting upwind differences is stability. When the chemotaxis term dominates, the system behaves more like a hyperbolic system, for which upwind differences are the more stable choice [17]. It is worth mentioning that the following time-stepping scheme can also be applied to other discretization methods whenever the domain containing the eigenvalues is known or can be estimated. The general form of the resulting ODE system after spatial discretization reads where is a time-dependent matrix and is a reaction vector which also depends on time.
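The spatial discretization just described can be sketched in code. The function below assembles the semi-discrete right-hand side for a one-dimensional model of the form u_t = D u_xx - (χ(c) u c_x)_x, using a central second difference for diffusion and a first-order upwind flux for chemotaxis. The periodic grid, the averaging to half-points, and the particular sensitivity function χ are illustrative assumptions on our part, not the paper's exact setup.

```python
import numpy as np

def semidiscrete_rhs(u, c, D, chi, dx):
    """MOL right-hand side for u_t = D u_xx - (chi(c) u c_x)_x on a
    periodic 1D grid.  A sketch: grid, boundary condition, and the
    sensitivity function chi are illustrative assumptions."""
    # Central second difference for the diffusion term D u_xx.
    diffusion = D * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

    # Chemotactic "velocity" v = chi(c) c_x at cell centres, then
    # averaged to the half-points i+1/2.
    cx = (np.roll(c, -1) - np.roll(c, 1)) / (2.0 * dx)
    v = chi(c) * cx
    v_half = 0.5 * (v + np.roll(v, -1))

    # First-order upwind flux F_{i+1/2}: take u from the upwind side of v.
    F = np.where(v_half > 0.0, v_half * u, v_half * np.roll(u, -1))
    chemotaxis = -(F - np.roll(F, 1)) / dx  # -(v u)_x in conservative form

    return diffusion + chemotaxis
```

Because both terms are written in conservative (telescoping) form, the semi-discrete system conserves total mass on a periodic grid, which is a useful sanity check on the discretization.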
Typically, time integrators for (1.3) can be classified as implicit or explicit. Generally, the numerical scheme for an implicit method is more complicated than for an explicit one, since it often involves solving systems of algebraic equations or dealing with matrix inverses. Although implicit methods allow larger step sizes than explicit ones when the spatial discretization matrices have only eigenvalues with negative real parts, the computational cost may still be high due to the complicated implementation of implicit methods. Most importantly, the time step constraint persists if the spatial matrices contain eigenvalues with large positive real parts; even for implicit methods, very small time steps are then unavoidable in order to meet the stability condition. This situation is likely to occur in pattern formation problems, for which the spatial discretization matrices have eigenvalues with both positive and negative real parts [5, 8]. Moreover, for pattern formation problems, the positive real eigenvalues are the unpredictable ones, evolving along with time. Bearing these factors in mind, explicit methods are actually better suited to reaction-diffusion-chemotaxis systems.
In the present paper, we start by reviewing the explicit stage Runge-Kutta methods and then construct their adaptive optimal time steps to enhance the computational efficiency for the chemotaxis system. The optimal time step formulas are given in closed form and are easy to calculate. Only simple estimates of the bounds for the largest and smallest eigenvalues of the spatial discretization matrices are required at each time step. Numerical simulations on test and realistic problems are provided. A comparison is made on the testing problem to demonstrate that our method is not only stable but also more efficient than other similar methods, as measured by CPU time and the number of iterations.
2. Optimal Time Steps for Explicit RungeKutta Methods
2.1. Time-Independent ODE Systems
Since interesting patterns are caused by the interaction between growth and decay modes, to analyze system (1.3) and relate it to the pattern formation model (1.1), one can locally linearize the right-hand side of (1.3). The positive and negative real eigenvalues of the linearization matrix then correspond, respectively, to the growth and decay modes. For this reason, we consider the corresponding ODEs induced from the chemotaxis system and introduce our idea by starting with the constant-coefficient case, and then generalize it to the time-dependent case. The general formulation of the stage Runge-Kutta method is where is the stage number and . The subscripts and denote the iteration level. For simplicity, we first look at a uniform time step size for all . Then is the numerical solution at the th step. Equations for the unknowns and can be obtained by matching terms with the Taylor expansion.
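In standard notation, the general explicit $s$-stage Runge-Kutta step for $y' = f(t, y)$ takes the following form; we supply it here as a reference point, with the coefficient symbols $a_{ij}$, $b_i$, $c_i$ being the conventional choice rather than the paper's own (stripped) notation:

```latex
\begin{aligned}
k_i &= f\Bigl(t_n + c_i\,\Delta t,\; y_n + \Delta t \sum_{j=1}^{i-1} a_{ij}\,k_j\Bigr),
\qquad i = 1, \dots, s,\\
y_{n+1} &= y_n + \Delta t \sum_{i=1}^{s} b_i\,k_i .
\end{aligned}
```

The coefficients are determined, as stated above, by matching terms with the Taylor expansion of the exact solution.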
We choose single-step explicit methods, which are basically Runge-Kutta methods, because they are easy to implement and especially efficient for ODE systems derived from PDEs. The method is set to be explicit since the range of eigenvalues with negative real parts is measurable for chemotaxis systems. In [5, 8], we developed an adaptive two-stage time-stepping scheme with optimal time step sizes for solving the chemotaxis system and verified that it is reliable and more economical than other similar methods. We conjecture that the strategy used in [5, 8] could also be used to derive optimal adaptive three- and four-stage methods. In the two-stage case, the size of the optimal step is determined by the root of a first-order equation; hence it is absolutely optimal within one time step. If our conjecture is correct, the sizes of the optimal steps for higher-stage methods should be the roots of some higher-order equations. But are these adaptive step sizes absolutely optimal? And are they as easy to find as in the two-stage case?
To answer these questions, we begin our study with the three-stage Runge-Kutta method and set in (2.1). As in [5, 8], we first fix all the intermediate coefficients at for the three-stage Runge-Kutta method. The general time-dependent case is treated in the next section. Denoting and applying (2.2) to the three-stage Runge-Kutta method with all the intermediate values fixed at , we arrive at where , are combinations of the coefficients and . Since and assuming is exact up to order , we have , , . Similar arguments also hold for RK methods with other stage numbers. Thus, if we treat it as a standard iterative method, (2.4) is a one-step method [18]:
In the case of , (2.5) becomes
Denote by the solution obtained by solving the system with the perturbed initial condition; we have
From (2.6) and (2.7), the accumulation error can be written as
Note that is called the amplification factor, where is an eigenvalue of . It is obtained when a scalar test problem, , is applied to the singlestep method.
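For the scalar test problem just mentioned, the amplification factor of an explicit $m$-stage method of order $m$ (valid for $m \le 4$, where stage number and order coincide) has the standard closed form

```latex
y_{n+1} = R(z)\, y_n, \qquad
R(z) = \sum_{k=0}^{m} \frac{z^k}{k!}, \qquad z = \lambda\,\Delta t ,
```

that is, the degree-$m$ Taylor truncation of $e^{z}$. The stability notions below are statements about the size of $|R(z)|$.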
Suppose that . For pattern formation problems, what we are interested in is the longterm behavior of the solution vectors. It is important to ensure that observed patterns are accurate in the sense that they are not coming from the accumulation error of numerical schemes. We therefore make the following definition of stability.
Definition 2.1. An explicit single-step method is said to be stable if when and when .
Here means terms of higher order in . Our definition is designed to guarantee appropriate numerical solutions for both the decay and growth cases. Based on this definition, we obtain restrictions on the step size. Typically, time steps are restricted either when the scheme is explicit and or when the scheme is implicit and [5, 8]. Since the amplification factors are functions of complex numbers, they are sometimes difficult to analyze. The following theorem shows that the amplification factor can be approximated by simpler functions of plus some error terms, instead of functions of the complex eigenvalues . Observe that the order of the error terms indicated in Theorem 2.2 is the same as the truncation error of the stage Runge-Kutta method. Therefore, we can work with this simpler function without losing the order of accuracy.
2.1.1. Stability Analysis and the Optimization Method
The following theorem states that the stage Runge-Kutta method satisfies the stability condition automatically for but needs constraints on when .
Theorem 2.2. , where .
Proof. The proof for is straightforward, and the proof for is in [5, 8]. For ,
For ,
The geometric interpretation is that, for spatial matrices , the eigenvalues with negative real parts are assumed to be contained in a left half-plane ellipse. Let us denote as . To examine whether numerically generated patterns truly obey the original models, one needs to verify that the negative modes are diminished and the positive modes grow accordingly. That is, when negative modes occur, is less than one, and is greater than one when the spatial matrix happens to have positive modes. From , we find that when , is definitely greater than unity, hence the stability condition is automatically satisfied; when , can be less than unity. It is then necessary to limit the time steps to ensure that the stability condition is preserved. Moreover, the negative modes should be diminished in an effective way so that the computational cost can be reduced. Under these considerations, we derive the optimal time step, which not only diminishes decay modes efficiently but also guarantees the stability conditions. Before moving on to the theorems showing how we obtain the optimal time steps for the stage Runge-Kutta method, let us recall the Gerschgorin theorem [19]. It says that for matrices with real coefficients, the distribution of the real parts of the eigenvalues can be estimated simply by discs centered at the diagonal elements. So we may assume that the range of the negative real parts of the eigenvalues of the discretized spatial matrices is known. Here, only negative real eigenvalues are needed, as they are the only ones responsible for the stability restriction on the step sizes of explicit schemes. We are now ready to give the relevant theorems for constructing optimal step sizes by effectively damping the negative modes.
Theorem 2.3. For and being even, and is concave up for all .
Proof. For , it is a seconddegree polynomial with a positive leading coefficient. The proof is straightforward. Note that .
Assume for all even . For being even and , we have
By mathematical induction, for all even . Hence, is concave up with respect to if is even.
Moreover, for all even , because for even .
Corollary 2.4. For and being an odd number, and is strictly decreasing for all .
Proof. For , the proof is trivial. For and
since and is an even number.
Therefore, we conclude for all odd and . Moreover, is strictly decreasing with respect to .
Theorem 2.5 (Case for being odd). Assume that , where are two positive numbers. Then the solution of the following minmax problem occurs at in which satisfies , where is an odd number. The minmax problem is stated as
Proof. Note that if and . For any fixed , the maximum of is obtained at . So the minmax occurs when . Since is continuous, , and as , the solution exists by the Intermediate Value Theorem. Furthermore, from Corollary 2.4, is monotone decreasing with respect to , so the solution is unique.
Theorem 2.6 (Case for being even). Assume that , where are two positive numbers. Then the solution of the following minmax problem occurs at in which satisfies , where is an even number . The minmax problem is stated as
Proof. For each fixed , is a quadratic-like polynomial function in . Then for any fixed , the maximum of is obtained at , and at , . So the minmax occurs when , that is, . This leads to the solution.
Let us denote . The following theorem states that the optimal time step is unique for being even. For being odd, it is shown in Theorem 2.5.
Theorem 2.7. If is even, has only one positive real solution.
Proof.
From (2.17) and the fact that , to find the zeros of it is enough to solve . Since and as tends to infinity, a positive real solution exists for by the Intermediate Value Theorem. Moreover, is strictly decreasing (Figures 1 and 2). Therefore, the solution is unique.
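The minmax characterization of Theorems 2.5-2.7 can be checked numerically. The sketch below assumes the stability polynomial $p_m(x) = \sum_{k=0}^{m} x^k/k!$ of an $m$-stage method of order $m$ and finds, by brute-force grid search, the step size that minimizes the worst-case amplification over eigenvalue real parts in $[-b,-a]$; the paper's closed-form root conditions are the efficient alternative to this search, and the search ranges used here are our own assumptions.

```python
import numpy as np
from math import factorial

def stability_poly(m, x):
    """p_m(x) = sum_{k=0}^{m} x^k / k! (order-m explicit RK, m <= 4)."""
    return sum(x ** k / factorial(k) for k in range(m + 1))

def optimal_step(m, a, b, n_dt=4000, n_lam=400):
    """Grid-search solution of the minmax problem: minimize over dt the
    maximum of |p_m(-dt * lam)| for lam in [a, b].  A numerical sketch;
    the paper characterizes the minimizer as a polynomial root instead."""
    lams = np.linspace(a, b, n_lam)
    dts = np.linspace(1e-6, 3.0 * m / b, n_dt)
    worst = [np.max(np.abs(stability_poly(m, -dt * lams))) for dt in dts]
    return dts[int(np.argmin(worst))]
```

For the one-stage (forward Euler) case this search recovers the classical minmax step size 2/(a+b), which is a useful consistency check on the formulation.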
2.2. Time-Dependent ODE Systems
In the previous section, we derived simpler forms of the amplification factors as well as theorems for setting optimal time steps for constant-coefficient ODE systems. The optimal time step was obtained under the assumption that the intermediate values of the Runge-Kutta methods are fixed at the current time. With the same stability considerations, we continue our study by searching for optimal time steps and relevant schemes when the ODE system and the Runge-Kutta methods are time dependent. The general form of the time-dependent system is
Derivation of the Adaptive Optimal Time Step for the Three-Stage Runge-Kutta Method
We start with the three-stage Runge-Kutta method. As before, we first set the reaction vector to zero.
Here, . Expand at by Taylor expansion, then , where .
Therefore,
Since by consistency,
Followed by (2.20) and the consistency condition, (2.19) can be simplified as
Now, we assume
Then, we can rewrite (2.22) as
In this way, we arrive at the following desirable form through rearrangement:
Let
where
Then the previous equation implies
The above result shows that the time-dependent three-stage Runge-Kutta method can be treated as a single-step method without losing the order of accuracy. In addition, it has the same form as (2.5) if is replaced by . Next, we derive a similar argument for the four-stage Runge-Kutta method.
Derivation of the Adaptive Optimal Time Step for the Four-Stage Runge-Kutta Method
Applying the four-stage Runge-Kutta method to (2.18), we get
Since ,
By consistency of the method,
Thus,
We further assume
The previous equation hence can be written as
That is,
where
We now turn to examine the more general situation in which the reaction term is not zero, that is, . In this case, we have
Similar to our discussion in the previous section, the accumulation error for (2.37) is
We remark again that the amplification factor in (2.38) at the th local step has the same form as (2.5), except that is replaced by . Therefore, we can apply the strategy demonstrated in Theorems 2.5 and 2.6 to find the optimal time steps for the Runge-Kutta schemes. The and appearing in the optimal step size formulas then represent, respectively, the greatest and smallest real parts of the eigenvalues of the matrix . Notice that, by an argument similar to those in the previous proofs, stability is preserved with this step size.
An Explicit Adaptive Optimal Time Stepping Scheme for Solving (2.18)
(1) Set up the initial condition and the initial step size .
(2) Set in accordance with the stage number , and apply the stage Runge-Kutta scheme.
(3) Use the Gerschgorin theorem to estimate the interval for the negative real parts of the eigenvalues of .
(4) Place ,
(i) ,
(ii) ,
(iii) ,
and then let . When an is equal to or close to zero, simply choose a reasonable step size for .
Note that the time step for can be obtained with mathematical software such as Maple from the following equation:
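As an illustration of steps (1)-(4), the sketch below advances a linear MOL system $y' = Ay$ (frozen here as a constant matrix for simplicity) with an $m$-stage step whose intermediate values are fixed at the current time, re-estimating the eigenvalue range by Gerschgorin discs at each step. The step-size rule shown uses the classical real-axis stability limits with a safety factor as a stand-in for the paper's optimal formulas; that substitution, and all names, are our assumptions.

```python
import numpy as np

# Approximate real-axis stability limits of order-m explicit RK methods.
REAL_AXIS_LIMIT = {1: 2.0, 2: 2.0, 3: 2.513, 4: 2.785}

def adaptive_rk_solve(A, y0, t_end, m=4):
    """Advance y' = A y to t_end with an adaptive m-stage scheme.
    The step size is re-chosen at every step from a Gerschgorin estimate
    of the most negative eigenvalue real part (a sketch: a stability-limit
    rule stands in for the paper's optimal step-size formula)."""
    y, t = np.asarray(y0, dtype=float), 0.0
    while t < t_end:
        # Step 3: Gerschgorin bound b >= |Re(lambda)| for the decay modes
        # (re-estimated each step; trivial here since A is constant).
        d = np.diag(A)
        radii = np.abs(A).sum(axis=1) - np.abs(d)
        b = max(1e-12, -np.min(d - radii))
        # Step 4 (simplified): stable step from the real-axis limit.
        dt = min(0.9 * REAL_AXIS_LIMIT[m] / b, t_end - t)
        # m-stage RK with intermediate values frozen at the current time:
        # y_{n+1} = sum_{k=0}^{m} (dt A)^k / k!  applied to y.
        term, new_y, fact = y.copy(), y.copy(), 1.0
        for k in range(1, m + 1):
            term = dt * (A @ term)
            fact *= k
            new_y = new_y + term / fact
        y, t = new_y, t + dt
    return y
```

For stiff decay modes the Gerschgorin bound shrinks the step automatically, which is the behavior the scheme above is designed to exploit.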
2.2.1. The "Best" Optimal Step Size and Time-Dependent Homogeneous Systems
We point out that, for eigenvalues with negative real parts, one may also assume that they are enclosed in an ellipse with one vertex lying at the origin, that is, . In this way, only are needed in forming the optimal time step formulas, and the time steps in step 4 become
(i) ,
(ii) ,
(iii) ,
where is defined as before in step 2. We remark that when is specified by the above values, equals one, so the stability condition does not seem to be satisfied. However, this hidden problem is avoided by using the Gerschgorin theorem, or an equivalent theorem, in step 3 to estimate the distribution of the real parts of the eigenvalues. This is because the theorem overestimates the distribution of the negative real parts of the eigenvalues. As a result, is less than one. Hence, it is safe to use the above time step.
In particular, when the reaction vector is negligible, we can further write the stage Runge-Kutta method in equivalent stage formulas while the order of accuracy remains invariant. For example, for , we have
For ,
Of course, is defined according to their stage number.
3. Numerical Examples
3.1. Overview
In this section, we demonstrate the implementation and efficiency of our optimal adaptive time-stepping method on a one-dimensional testing problem and by simulating problems from a series of real biological experiments. The 1D testing problem, with a known analytic solution, is used to show the implementation process of our scheme. It is also designed to show the rate of convergence as well as the computational cost. Those results are illustrated through comparison with the standard stage Runge-Kutta methods and the Runge-Kutta-Fehlberg (RKF) method. To test our numerical methods on real biological experiments, we apply our optimal adaptive two-stage Runge-Kutta method to a two-dimensional reaction-diffusion-chemotaxis model of a series of biological experiments on bacterial chemotaxis reported in [9]. Some results are shown here.
3.2. Testing Problems
Consider where and . The three terms on the righthand side represent diffusion, chemotaxis, and reaction, respectively. Assume the periodic boundary condition: and the initial condition: . The analytic solution of (3.1) is
To apply MOL, we use finite differences for the spatial discretization. The standard second-order difference is applied to the diffusion term. The upwind scheme is employed for the chemotaxis term for stability reasons [17]. After the discretization, we apply the stage Runge-Kutta method with the adaptive optimal time-stepping scheme as the time integration strategy. Denoting , , , and the numerical solution by , the resulting ODE system is where . Generally, the actual eigenvalues of the spatial matrix are not available. One may use the power method and the inverse power method [20] to estimate the eigenvalues of largest and smallest magnitude, but these methods are usually time consuming. Moreover, only the negative real parts of the eigenvalues are needed for our scheme. Therefore, we suggest using the Gerschgorin theorem to estimate the distribution of the eigenvalues. As we show shortly, it is a simple and inexpensive way to find the range of the real parts of the eigenvalues and is therefore adequate for implementing our scheme. In fact, if a formula for the distribution of the eigenvalues, or the actual eigenvalues, can be found ahead of time, then it is not necessary to use the theorem to estimate the range at each time step. A simple version of the Gerschgorin theorem is stated as follows.
Theorem 3.1 (Gerschgorin). The union of all discs contains all eigenvalues of the matrix .
The theorem states that if is a real matrix, the range of the real parts of the eigenvalues can easily be found by checking the intersection of these discs with the real axis.
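This estimate is straightforward to implement; the sketch below (function and variable names are ours) returns the real-axis interval covered by the Gerschgorin discs of a real matrix:

```python
import numpy as np

def gerschgorin_real_range(A):
    """Return (lo, hi) containing the real parts of all eigenvalues of
    the real matrix A: discs centred at the diagonal entries, with radii
    equal to the off-diagonal absolute row sums, intersected with the
    real axis."""
    d = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(d)
    return float(np.min(d - radii)), float(np.max(d + radii))
```

For the standard central-difference Laplacian with diffusivity D and mesh width dx, this yields the familiar interval [-4D/dx^2, 0], so the cost of the estimate is a single pass over the matrix rows.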
From Gerschgorin Theorem, we obtain where
Thus, for stages, where , the optimal time steps at the current step are
Figure 3 shows the relative error between the numerical and analytic solutions. Observe that the relative error for the standard two-stage Runge-Kutta method increases dramatically around , but the optimal time-stepping stage Runge-Kutta methods still converge to the analytic solution.
Tables 1, 2, and 3 compare the number of iterations for our scheme (Optimal stage RK) with those of two other methods: the standard stage Runge-Kutta method with fixed step size (Std stage RK) and the Runge-Kutta-Fehlberg method (RKF). These tables indicate that while the optimal time-stepping scheme always ensures convergence, the standard stage Runge-Kutta methods and other adaptive methods like RKF either diverge or converge more slowly, since they require a larger number of iterations. Similar results are also illustrated by the CPU times in Tables 4, 5, and 6. The computations were done on a Dell Latitude D620 2600.
Tables 4β6 illustrate the computational efficiency for our method with CPU time.
3.3. Further Discussion on the Efficiency of the Optimal RK Method
We continue our discussion of the computational efficiency of the optimal RK methods with the time steps mentioned in Section 2.2. We are interested in measuring the efficiency difference between RKF and our method. To make the point concrete, we take the optimal four-stage RK method as an example. Let us start with the "best" optimal step size as in Section 2.2.1. Then the current time step for the optimal four-stage RK is
To get the "worst" case, set . Then
Now, let us assign to be the average of (3.9) and (3.10). We obtain
Table 7 shows that even with the "medium" optimal time step (3.11), our numerical scheme still performs better than RKF 45. In addition, the computational efficiency of RKF 45 is approximately equal to that of the optimal four-stage RK when the time step is chosen as (3.11).
3.4. Computer Simulations of Chemotaxis in Bacteria
Our numerical experiment is based on the biological experiments reported in [9] and the corresponding mathematical model. Interesting patterns develop when Escherichia coli or Salmonella typhimurium is inoculated on intermediates of the tricarboxylic acid (TCA) cycle. Bacteria form symmetrical rings of spots or stripes if they are inoculated in semisolid agar on intermediates of the TCA cycle. When placed in a liquid medium and fed on intermediates of the TCA cycle, they secrete aspartate as a chemoattractant, and the cells then arrange themselves into high-density aggregates. In this case, aggregates form randomly as time evolves and fade within a time interval shorter than a bacterial generation. Here, the attractant chemicals are excreted by the bacteria themselves instead of being added to the Petri dish artificially.
A nondimensionalized mathematical model was proposed for the liquid-medium experiments [9]. The model is a classical chemotaxis system, which reads as