Abstract

A family of Enright’s second derivative formulas with trigonometric basis functions is derived using a multistep collocation method. The continuous schemes obtained are used to generate complementary methods. The stability properties of the methods are discussed. The methods, which can be applied in predictor-corrector form, are implemented in block form as simultaneous numerical integrators over nonoverlapping intervals. Numerical results obtained using the proposed block form reveal that the new methods are efficient and highly competitive with existing methods in the literature.

1. Introduction

Many real-life processes in areas such as chemical kinetics, the biological sciences, circuit theory, economics, and reactions in physical systems can be transformed into systems of ordinary differential equations (ODEs), which are generally formulated as initial value problems (IVPs). Some classes of IVPs are stiff and/or highly oscillatory, as described by a model problem whose coefficient matrix $A$ is a real matrix with at least one eigenvalue having a very negative real part and/or a very large imaginary part, respectively (see Fatunla [1]). Many conventional methods cannot solve these types of problems effectively.

Stiff systems have been solved by several authors, including Lambert [2, 3], Gear [4, 5], Hairer [6], and Hairer and Wanner [7], and different methods, including the Backward Differentiation Formula (BDF), have been used to solve them. Second derivative methods with polynomial basis functions were proposed to overcome the Dahlquist [8] barrier theorem, whereby the conventional linear multistep method is modified by incorporating the second derivative term in the derivation process in order to increase the order of the method while preserving good stability properties (see Gear [9], Gragg and Stetter [10], and Butcher [11]).

Many classical numerical methods including Runge-Kutta methods, higher derivative multistep schemes, and block methods have been constructed for solving oscillatory initial value problems (see Butcher [11, 12], Brugnano and Trigiante [13, 14], Ozawa [15], Nguyen et al. [16], Berghe and van Daele [17], Vigo-Aguiar and Ramos [18], and Calvo et al. [19]). Many methods for solving oscillatory IVPs require knowledge of the system under consideration in advance.

Obrechkoff [20] proposed a general multiderivative method for solving systems of ordinary differential equations. Special cases of the Obrechkoff method have been developed by many others, including Cash [21] and Enright [22]. The methods by Enright [22] have order $k+2$ for a $k$-step method.

In this paper, we propose a numerical integration formula that copes more effectively with stiff and/or oscillatory IVPs. We will construct a continuous form of the second derivative multistep method (CSDMM) using a multistep collocation technique such that Enright’s second derivative methods (ESDM) are recovered from the derived continuous methods. The aim of this paper is to derive a family of Enright’s second derivative formulas with trigonometric basis functions using a multistep collocation method. Many methods for solving IVPs are implemented in a step-by-step fashion in which, on the partition $a = x_0 < x_1 < \cdots < x_N = b$, an approximation is obtained at $x_{n+1}$ only after an approximation at $x_n$ has been computed, where $x_{n+1} = x_n + h$, $h$ is the step size, $N$ is a positive integer, and $n$ is the grid index. We implement the ESDM in block form.

In Section 2, we present a derivation of the family of Enright methods. Error analysis and stability are discussed in Section 3. The implementation of the ESDM and numerical examples to show the accuracy and efficiency of the ESDM are given in Section 4. Finally, we conclude in Section 5.

2. Derivation of the Family of Methods

We consider the first-order initial value problem
$$y' = f(x, y), \quad y(x_0) = y_0, \qquad (2)$$
where $f$ is assumed to satisfy the conditions that guarantee the existence of a unique solution of the initial value problem.

2.1. CSDMM

In what follows, we state the CSDMM, which has the ability to produce the ESDM, in terms of continuous coefficients $\alpha_j(x)$, $\beta_j(x)$, and $\gamma_j(x)$. We assume that $y_{n+j}$ is the numerical approximation to the analytical solution $y(x_{n+j})$, that $f_{n+j} = f(x_{n+j}, y_{n+j})$ is the numerical approximation to $y'(x_{n+j})$, and that $g_{n+j}$ is an approximation to $y''(x_{n+j})$, for $j = 0, 1, \ldots, k$.

We now define the following vectors and matrix used in the following theorem:where , , , and .

Remark 1. In the derivation of the ESDM, the bases with , , , and are chosen because they are simple to analyze. Other possible bases (see Nguyen et al. [16] and Nguyen et al. [23]) include the following: (1);(2);(3);(4);(5);(6).

Theorem 2. Let the quantities defined above satisfy the stated conditions and let the associated matrix be invertible; then method (3) admits an equivalent representation. The proof of the above theorem can be found in Jator et al. [24].

Through interpolation of $y(x)$ at the point $x_{n+k-1}$, collocation of $y'(x)$ at the points $x_{n+j}$, $j = 0, 1, \ldots, k$, and collocation of $y''(x)$ at the point $x_{n+k}$, we obtain a system of equations. To solve this system, we require that method (3) be defined by the assumed basis functions, whose expansion constants are to be determined.
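To make the collocation construction concrete, the following minimal sketch (not part of the original derivation) sets up and solves, in floating point, the linear system arising from one interpolation condition and the collocation conditions for the case $k = 2$, using an assumed trigonometrically augmented basis $\{1, x, x^2, \sin\omega x, \cos\omega x\}$; the basis, the step number, and the point layout are illustrative assumptions.

```python
import numpy as np

def collocation_coeffs(x_nodes, y_interp, f_vals, g_last, omega):
    """Solve the multistep collocation system for k = 2 with the assumed
    basis {1, x, x^2, sin(w x), cos(w x)} (an illustrative choice).

    Conditions (k + 3 = 5 of them):
      y(x_{n+1})   = y_interp                (interpolation)
      y'(x_{n+j})  = f_vals[j], j = 0, 1, 2  (collocation of y')
      y''(x_{n+2}) = g_last                  (collocation of y'')
    """
    w = omega
    phi   = lambda x: np.array([1.0, x,   x**2, np.sin(w*x),      np.cos(w*x)])
    dphi  = lambda x: np.array([0.0, 1.0, 2*x,  w*np.cos(w*x),   -w*np.sin(w*x)])
    ddphi = lambda x: np.array([0.0, 0.0, 2.0, -w*w*np.sin(w*x), -w*w*np.cos(w*x)])

    x0, x1, x2 = x_nodes
    M   = np.vstack([phi(x1), dphi(x0), dphi(x1), dphi(x2), ddphi(x2)])
    rhs = np.array([y_interp, f_vals[0], f_vals[1], f_vals[2], g_last])
    return np.linalg.solve(M, rhs), phi   # expansion constants of the continuous scheme

# Data taken from y(x) = sin(x), so the fit reproduces y at x_{n+2} exactly.
h, omega = 0.1, 1.0
xs = np.array([0.0, h, 2*h])
c, phi = collocation_coeffs(xs, np.sin(xs[1]), np.cos(xs), -np.sin(xs[2]), omega)
print(c @ phi(xs[2]), np.sin(xs[2]))
```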

2.2. ESDM

The general second derivative formula for solving (2) using the $k$-step second derivative linear multistep method is of the form
$$\sum_{j=0}^{k} \alpha_j y_{n+j} = h \sum_{j=0}^{k} \beta_j f_{n+j} + h^2 \sum_{j=0}^{k} \gamma_j g_{n+j}, \qquad (7)$$
where $f_{n+j} = f(x_{n+j}, y_{n+j})$ and $g_{n+j}$ approximates $y''(x_{n+j})$; $x_{n+j}$ is a discrete point at node $n+j$; and $\alpha_j$, $\beta_j$, and $\gamma_j$ are parameters to be determined. It is worth noting that Enright’s method is a special case of (7). We solve (6) to get the coefficients $\alpha_j$, $\beta_j$, and $\gamma_j$ in (7), which are then used to obtain the continuous multistep method of Enright in the form (9). Evaluating (9) at $x = x_{n+k}$ yields Enright’s second derivative multistep method (10), whereas the complementary methods (11) are obtained by evaluating (9) at the points $x_{n+j}$, $j = 0, 1, \ldots, k-2$.
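For orientation, the discrete Enright-type main formula $y_{n+k} = y_{n+k-1} + h\sum_{j=0}^{k}\beta_j f_{n+j} + h^2\gamma_k g_{n+k}$ can be recovered in its classical (polynomial-basis) form by solving the order conditions numerically; the sketch below does this for $k = 1, 2$. It is a cross-check under the assumption of a polynomial basis and does not reproduce the trigonometrically fitted coefficients derived in this paper.

```python
import numpy as np
from fractions import Fraction

def enright_poly_coeffs(k):
    """Coefficients of the classical (polynomial-basis) Enright formula
        y_{n+k} = y_{n+k-1} + h*sum_j beta_j*f_{n+j} + h^2*gamma_k*g_{n+k},
    obtained by requiring exactness for x, x^2, ..., x^{k+2} (order k + 2).
    These are the non-fitted counterparts of the coefficients in this paper."""
    m = k + 2                                      # unknowns: beta_0..beta_k, gamma_k
    A, b = np.zeros((m, m)), np.zeros(m)
    for p in range(1, m + 1):                      # exactness for y = x^p, nodes x_j = j (h = 1)
        A[p - 1, :k + 1] = [p * j**(p - 1) for j in range(k + 1)]       # beta_j * y'(x_j)
        A[p - 1, k + 1]  = p * (p - 1) * k**(p - 2) if p >= 2 else 0.0  # gamma_k * y''(x_k)
        b[p - 1] = k**p - (k - 1)**p                                    # y(x_k) - y(x_{k-1})
    c = np.linalg.solve(A, b)
    return c[:k + 1], c[k + 1]

for k in (1, 2):                                   # k = 1 gives beta = (1/3, 2/3), gamma = -1/6
    beta, gamma = enright_poly_coeffs(k)
    print(k, [str(Fraction(float(x)).limit_denominator(10**6)) for x in beta],
          Fraction(float(gamma)).limit_denominator(10**6))
```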

We note that, in order to avoid the cancellations which might occur when $u = \omega h$ is small, the use of the power series expansions of the trigonometric functions appearing in the coefficients is preferable (see Simos [25]).
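A minimal sketch of this safeguard, for a representative building block of the kind that appears in such coefficients (the specific function and the switching tolerance are illustrative assumptions):

```python
import math

def u_minus_sin_over_u3(u, tol=1e-3):
    """Evaluate (u - sin u)/u**3, switching to the power series
    1/6 - u^2/120 + u^4/5040 - ... for small |u| to avoid cancellation."""
    if abs(u) < tol:
        u2 = u * u
        return 1.0/6.0 - u2/120.0 + u2*u2/5040.0
    return (u - math.sin(u)) / u**3

print(u_minus_sin_over_u3(1e-5))            # series branch, close to 1/6
print((1e-5 - math.sin(1e-5)) / 1e-15)      # naive evaluation loses most digits
```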

Case . This case has only the main method given by (10) with the coefficients defined by

Case . The coefficients of the main method (10) and the complementary method (11) are, respectively, defined by

Case . The coefficients of the main method (10) and the complimentary methods (11) are, respectively, defined by

Case . The coefficients of the main method (10) and the complementary methods (11) are, respectively, defined by expressions whose auxiliary quantities are given in parts A, B, C, and D of the Appendix of the supplementary material (see Supplementary Material available online at http://dx.doi.org/10.1155/2015/343295).

2.3. Block Specification and Implementation of the Methods

We consider a general procedure for the block implementation of the methods in matrix form (see Fatunla [26]). First we define the vectors that collect the solution values, the first derivative values, and the second derivative values over one block. The integration over the entire block can then be written compactly as a single system, which is nonlinear because of the implicit nature of the methods; hence we employ Newton iteration to evaluate the approximate solutions. Applying Newton’s approach for the implementation of the implicit schemes yields the solution of the block. The matrices of block coefficients are defined as follows.

Case . The entries of the block matrices are given by the coefficients defined in methods (13) and (14).

Case . The entries of the block matrices are given by the coefficients defined in methods (15), (16), and (17).

Case . The entries of the block matrices are given by the coefficients defined in methods (18) through (21).
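The block Newton solve described in this subsection can be sketched as follows, assuming a generic residual $F(Y) = 0$ in the stacked block unknowns and a finite-difference Jacobian; the demonstration residual is the classical $k = 1$ member, not any of the fitted block systems above.

```python
import numpy as np

def newton_block(residual, Y0, tol=1e-12, maxit=25):
    """Solve the implicit block system residual(Y) = 0 by Newton iteration
    with a forward-difference Jacobian (an illustrative stand-in)."""
    Y = np.array(Y0, dtype=float)
    for _ in range(maxit):
        F = residual(Y)
        if np.linalg.norm(F, np.inf) < tol:
            break
        J, eps = np.empty((Y.size, Y.size)), 1e-8
        for i in range(Y.size):               # Jacobian column by column
            Yp = Y.copy()
            Yp[i] += eps
            J[:, i] = (residual(Yp) - F) / eps
        Y -= np.linalg.solve(J, F)
    return Y

# Demonstration: one block of the classical k = 1 member applied to y' = lam*y,
# for which g = y'' = lam^2*y; the residual has a single unknown y_{n+1}.
lam, h, yn = -1.0, 0.1, 1.0
res = lambda Y: np.array([Y[0] - yn - h*(lam*yn/3 + 2*lam*Y[0]/3) + h*h*lam*lam*Y[0]/6])
print(newton_block(res, [yn]), np.exp(lam*h))   # ~0.90484 vs exact 0.90484
```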

3. Error Analysis and Stability

3.1. Local Truncation Error (LTE)

Suppose that method (10) is associated with the linear difference operator $\mathcal{L}[y(x); h]$, where $y(x)$ is an arbitrary smooth function. Then $\mathcal{L}[y(x_n); h]$ is called the local truncation error at $x_n$ if $y(x)$ represents the solution of the IVP (2). By Taylor series expansions of $y(x_{n+j})$, $y'(x_{n+j})$, and $y''(x_{n+j})$ about $x_n$, we obtain an expansion of the local truncation error in powers of the step size $h$.

Method (10) is said to be of order $p$ if, in this expansion, the coefficients of $h^0, h^1, \ldots, h^{p}$ vanish and the coefficient of $h^{p+1}$ is nonzero (see [3]).

Theorem 3. The $k$-step method (10) (ESDM) has a local truncation error (LTE) of order $O(h^{k+3})$.

Proof. We consider Taylor series expansions of $y(x_{n+j})$, $y'(x_{n+j})$, and $y''(x_{n+j})$ about $x_n$. Then, by substituting these into method (10) and simplifying, we obtain the stated LTE, where the values of the error constants are given in Table 1.

The local truncation error of the block method (23) is defined analogously via a linear difference operator. Assuming that $y(x)$ is sufficiently differentiable, we can expand the terms in (23) as Taylor series about $x_n$ to obtain the expression for the local truncation error, whose constant coefficients are obtained as in Ehigie et al. [27].

Definition 4. The block method (23) has algebraic order $p \geq 1$, provided there exists a constant $C$ such that the local truncation error $L_n$ satisfies $\|L_n\| \leq C h^{p+1}$, where $\|\cdot\|$ is the maximum norm.

Remark 5. (i) The local truncation error constants of the block method (23) are presented in Table 1. (ii) From the computation of the local truncation error constants, it follows that the order of the method (23) is $p = k + 2$.
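As an independent check of the stated order, the difference operator of the classical $k = 1$ member can be expanded symbolically; the sketch below (using the non-fitted coefficients as an assumption) recovers a leading error term proportional to $h^4$, that is, order $p = 3 = k + 2$.

```python
import sympy as sp

x, h = sp.symbols('x h')
y = sp.Function('y')

def T(shift, deriv, terms=6):
    """Taylor expansion of y^(deriv)(x + shift*h) about x, truncated after `terms` terms."""
    return sum(sp.diff(y(x), x, deriv + m) * (shift**m) * h**m / sp.factorial(m)
               for m in range(terms))

# Difference operator of the classical k = 1 formula
#   L[y; h] = y_{n+1} - y_n - h*(f_n/3 + 2*f_{n+1}/3) + h^2*g_{n+1}/6
L = sp.expand(T(1, 0) - T(0, 0)
              - h*(sp.Rational(1, 3)*T(0, 1) + sp.Rational(2, 3)*T(1, 1))
              + sp.Rational(1, 6)*h**2*T(1, 2))
print([sp.simplify(L.coeff(h, p)) for p in range(1, 5)])
# -> [0, 0, 0, Derivative(y(x), (x, 4))/72]: LTE = (1/72) h^4 y'''' + O(h^5), so p = 3 = k + 2.
```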

3.2. Stability

Definition 6. The block method (23) is zero-stable, provided the roots of the first characteristic polynomial have modulus less than or equal to one and those of modulus one are simple (see [2]).

Remark 7. Observe that the first characteristic polynomial of the block method (23) is specified by the determinant of its block coefficient matrices. Its roots $R_j$ satisfy $|R_j| \leq 1$, and those roots with $|R_j| = 1$ are simple; hence the block method is zero-stable.

Definition 8. The block method (23) is consistent if it has order $p \geq 1$ (see [26]).

Remark 9. The block method (23) is consistent, since it has order $p \geq 1$, and it is zero-stable; hence it is convergent, since zero-stability + consistency $\Rightarrow$ convergence.

Proposition 10. The block method (23) applied to the test equations $y' = \lambda y$ and $y'' = \lambda^2 y$ yields an update whose amplification factor is the rational function $R(z, u)$, where $z = \lambda h$ and $u = \omega h$.

Proof. We begin by applying (23) to the test equations $y' = \lambda y$ and $y'' = \lambda^2 y$; letting $z = \lambda h$ and $u = \omega h$, we obtain a linear system which is solved for the block values, and the stated form follows from (23) as a consequence.

Remark 11. The rational function $R(z, u)$ is called the stability function; it determines the stability of the method.

Definition 12. A region of stability is a region of the $z$–$u$ plane in which $|R(z, u)| \leq 1$.

Corollary 13. Method (23) has the stability function $R(z, u)$ specified in the Appendix of the supplementary material.

Remark 14. In the $z$–$u$ plane, the ESDM (23) is stable for the ranges of $z$ and $u$ indicated in the figures, since $|R(z, u)| \leq 1$ there.

Remark 15. Figures 1, 3, 5, and 7 are plots of the stability region of $R(z, u)$ for the respective cases. We note from these figures that the stability region of $R(z, u)$ includes the entire left half of the complex plane. Figures 2, 4, 6, and 8 show the respective zeros and poles of $R(z, u)$.
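Stability plots of this kind can be reproduced by scanning a grid in the complex $z$-plane and shading the points where the stability function has modulus at most one; the sketch below uses the stability function of the classical $k = 1$ member as a placeholder, since the fitted $R(z, u)$ is given in the supplementary material.

```python
import numpy as np
import matplotlib.pyplot as plt

def R(z):
    """Stability function of the classical k = 1 member applied to y' = lam*y,
    y'' = lam^2*y (a placeholder for the fitted R(z, u) of the paper)."""
    return (1 + z/3) / (1 - 2*z/3 + z**2/6)

re, im = np.meshgrid(np.linspace(-10, 10, 400), np.linspace(-10, 10, 400))
Z = re + 1j*im
plt.contourf(re, im, (np.abs(R(Z)) <= 1).astype(float), levels=[0.5, 1.5])
plt.xlabel('Re(z)'); plt.ylabel('Im(z)'); plt.title('|R(z)| <= 1 (shaded)')
plt.show()
```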

3.3. Implementation

The ESDM (10) is implemented in the spirit of Ngwane et al. [28, 29] to solve (2) without requiring starting values and predictors. For instance, if we let $n = 0$ in (10), then $(y_1, \ldots, y_k)$ are obtained on the subinterval $[x_0, x_k]$, as $y_0$ is known from the IVP. If $n = k$, then $(y_{k+1}, \ldots, y_{2k})$ are obtained on the subinterval $[x_k, x_{2k}]$, as $y_k$ is known from the previous computation, and so on, until we reach the final subinterval. Note that, for linear problems, we solve (2) directly using a built-in linear solver in Mathematica, while for nonlinear problems we use Newton’s method enhanced by Mathematica’s built-in root-finding feature.
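A minimal sketch of this sequential advance, assuming the classical $k = 1$ member (so each block produces a single new value) and using a library root finder as a stand-in for the Newton solve:

```python
import numpy as np
from scipy.optimize import fsolve

def advance_block(yn, xn, h, f, fx_fy):
    """Advance one block of the illustrative k = 1 member (classical coefficients)
        y_{n+1} = y_n + h*(f_n/3 + 2*f_{n+1}/3) - h^2*g_{n+1}/6,
    where g = y'' = f_x + f_y*f along the solution; fsolve stands in for Newton."""
    fn = f(xn, yn)
    def res(Y):
        y1 = Y[0]
        f1 = f(xn + h, y1)
        fx, fy = fx_fy(xn + h, y1)
        g1 = fx + fy*f1
        return np.array([y1 - yn - h*(fn/3 + 2*f1/3) + h*h*g1/6])
    return fsolve(res, [yn])[0]

# Block-by-block integration of y' = -y + sin(x), y(0) = 0, over [0, 1].
f     = lambda x, y: -y + np.sin(x)
fx_fy = lambda x, y: (np.cos(x), -1.0)
x, ynum, h = 0.0, 0.0, 0.1
while x < 1.0 - 1e-12:
    ynum = advance_block(ynum, x, h, f, fx_fy)
    x += h
print(ynum, 0.5*(np.sin(1.0) - np.cos(1.0) + np.exp(-1.0)))   # numerical vs exact
```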

4. Numerical Illustration

In this section we consider some standard problems, stiff, oscillatory, linear, and nonlinear systems, that appear in the literature, to experimentally illustrate the accuracy and efficiency of the ESDM (10), which is implemented in block form. The ESDM for $k = 1, 2, 3$, and $4$, as stated earlier, are denoted by EM1, EM2, EM3, and EM4, respectively. Our numerical examples test this family of methods. We include examples of second-order IVPs, and it is pertinent to mention here that there are methods specifically designed for this type of problem. In this paper, all the numerical experiments are carried out with a fixed step size $h$ and a fixed frequency $\omega$, assuming that $\omega$ is known. This allows us to compute the coefficients of the ESDM once for the entire integration. Some of the methods of orders four and six in the literature have been compared to EM2 and EM4, respectively. We find the approximate solution on a partition of the integration interval, and we give the errors at the end point, calculated as $\mathrm{Error} = |y(x_N) - y_N|$. We denote the maximum absolute error by Err, the number of steps by $N$, and the number of function evaluations by NFEs. Errors are written in abbreviated exponent form in the tables.

Example 16. We consider the following inhomogeneous IVP by Simos [25]: $y'' = -100 y + 99 \sin x$, $y(0) = 1$, $y'(0) = 11$, where the analytic solution is given by $y(x) = \cos(10x) + \sin(10x) + \sin(x)$.

EM2 is of fourth order and hence comparable to the exponentially fitted method by Simos [25], which is also of fourth order. PC1 and PC2 denote the predictor-corrector mode for $k = 1$ and $k = 2$, respectively. The efficiency curves in Figure 9 show the computational efficiency of the two methods (Simos’s method and EM2) by considering the NFEs over the integration steps for each method. Hence, for this example, EM2 performs better than Simos’s method. We see from Table 2 that the ESDM is efficient for each case.

Example 17. We consider the nonlinear Duffing equation which was also solved by Ixaru and Vanden Berghe [30]. The analytic solution is given by a trigonometric series whose coefficients and parameters are those listed in [30].

We compare the errors produced by our EM2 with those of the fourth-order methods of Ixaru and Vanden Berghe [30]. We see from Table 3 that the results produced by the ESDM on Example 17 are very good. In fact, EM2 produces results that are better than those of Simos’s method as used in [30]: it yields smaller errors while using fewer steps and fewer function evaluations. This example once more shows that the ESDM produces good results and, in particular, that EM2 is very competitive with the method used by Ixaru and Vanden Berghe [30].

Example 18 (a nearly sinusoidal problem). We consider the following IVP from the study by Nguyen et al. [16]. Two values of the stiffness parameter are chosen in order to illustrate the phenomenon of stiffness. Given the initial conditions, the exact solution is independent of the stiffness parameter.

We choose this example to demonstrate the performance of the ESDM on stiff problems. We compute the solutions to Example 18 and compare EM4, which is of order six, to the method by Nguyen et al. [16], which is also of order six. For both values of the stiffness parameter, EM4 clearly obtains better absolute errors than the method of Nguyen et al. [16], and this accuracy is achieved using fewer steps and fewer function evaluations (see Table 4).

Example 19. Consider the two-body problem solved by Ozawa [15], where $\varepsilon$ denotes the eccentricity. The exact solution of this problem is $y_1 = \cos u - \varepsilon$, $y_2 = \sqrt{1 - \varepsilon^2}\, \sin u$, where $u$ is the solution of Kepler’s equation $u = x + \varepsilon \sin u$.

Table 5 contains the results obtained using the ESDM for this problem. These results are compared with the explicit singly diagonally implicit Runge-Kutta (ESDIRK) and the functionally fitted ESDIRK (FESDIRK) methods given in Ozawa [15]. In terms of accuracy, Table 5 clearly shows that the ESDM performs better than the methods in Ozawa [15].

4.1. A Predictor-Corrector Mode Implementation of ESDM

We can implement our ESDM in a predictor-corrector (PC) mode. The predictor for each case is given below, while the corrector is given by the main method (10). PC1 and PC2 denote the PC mode for $k = 1$ and $k = 2$, respectively. The coefficients of the predictor for $k = 1$ are given below, while those for $k = 2$ are given in Appendix 3 of the supplementary material. We observe that the PC implementation performs poorly relative to the block implementation; see Table 2.
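A minimal sketch of the PC mode, with an assumed Taylor-type predictor and the classical $k = 1$ corrector coefficients standing in for the fitted ones given above and in the supplementary material:

```python
import numpy as np

def pc_step(yn, xn, h, f, fx_fy, m=2):
    """One P(EC)^m step of an illustrative k = 1 pair:
    predictor: second-order Taylor expansion (assumed stand-in),
    corrector: y_{n+1} = y_n + h*(f_n/3 + 2*f_{n+1}/3) - h^2*g_{n+1}/6,
    with g = y'' = f_x + f_y*f along the solution."""
    fn = f(xn, yn)
    fx, fy = fx_fy(xn, yn)
    y1 = yn + h*fn + 0.5*h*h*(fx + fy*fn)                    # predict
    for _ in range(m):                                       # evaluate-correct sweeps
        f1 = f(xn + h, y1)
        gx, gy = fx_fy(xn + h, y1)
        y1 = yn + h*(fn/3 + 2*f1/3) - h*h*(gx + gy*f1)/6     # correct
    return y1

# One step on y' = -y; the exact value is exp(-h).
print(pc_step(1.0, 0.0, 0.1, lambda x, y: -y, lambda x, y: (0.0, -1.0)), np.exp(-0.1))
```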

4.2. Estimating the Frequency

Though we are mainly interested in problems where $\omega$ is taken as the exact frequency of the analytical solution and is known in advance, it is important to note that the exact frequency may be unknown for some problems. Preliminary testing indicates that a good estimate of the frequency can be obtained by demanding that the leading term of the LTE in Theorem 3 equal zero and solving for the frequency; that is, we solve for $\omega$ from the equation obtained by setting the leading LTE term, which involves derivatives of the solution, to zero. We used this procedure to estimate $\omega$ for the problem given in Example 16 and obtained a value that approximately matches the known frequency $\omega = 10$. Hence, this procedure is interesting and will be seriously considered in our future research.
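This idea can be prototyped with numerically approximated derivatives: take $y''' + \omega^2 y' = 0$ as the factor that trigonometric fitting typically annihilates (an assumption made for illustration; the actual leading term is the one in Theorem 3) and solve it for $\omega$ at a sample point.

```python
import numpy as np

def estimate_frequency(y, x, h=1e-3):
    """Estimate the fitting frequency from w**2 = -y'''(x)/y'(x), using central
    differences for the derivatives; the factor y''' + w^2*y' is an assumed
    proxy for the leading LTE term of Theorem 3."""
    d1 = (y(x + h) - y(x - h)) / (2*h)
    d3 = (y(x + 2*h) - 2*y(x + h) + 2*y(x - h) - y(x - 2*h)) / (2*h**3)
    return np.sqrt(-d3 / d1)

# An oscillatory test function with dominant frequency 10, of the kind in Example 16;
# the estimate lands near 10, its quality varying with the sampling point.
y = lambda x: np.cos(10*x) + np.sin(10*x) + np.sin(x)
print(estimate_frequency(y, 0.3))
```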

If a problem has multiple frequencies, then $\omega$ is approximately calculated so that it is an indicative frequency (see Nguyen et al. [16]). We note that estimating the frequency when it is unknown, as well as finding the frequency for problems in which the frequency varies over time, is very challenging. This challenge, and the choice of the frequency in trigonometrically fitted methods, has attracted growing interest. Existing references on how to estimate the frequency and on the choice of the frequency include Vanden Berghe et al. [31] and Ramos and Vigo-Aguiar [32].

5. Conclusion

In this paper, we have proposed a family of Enright methods using trigonometric bases for solving stiff and oscillatory IVPs. The ESDM is zero-stable and produces good results on stiff IVPs. The method has the advantages of being self-starting and having good accuracy properties. The ESDM has order $k + 2$, the same as the corresponding methods in Enright [22]. We have presented representative numerical examples that are linear, nonlinear, stiff, and highly oscillatory. The requirement that the frequency be known in advance might be a shortcoming, yet these examples show that the ESDM is not only promising but also more accurate and efficient than the methods in Nguyen et al. [16], Simos [25], Ixaru and Vanden Berghe [30], and Ozawa [15]. Details of the numerical results are displayed in Tables 2, 3, 4, and 5, and the efficiency curves are presented in Figures 9, 10, 11, 12, and 13. Our future research will incorporate a technique for accurately estimating the frequency, as suggested in Section 4.2, as well as an implementation of the method in a variable step mode.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research was funded by the RISE Grant no. 176601332896 of the University of South Carolina, Columbia, SC, USA.

Supplementary Materials

The variables in the numerators of equations (17)–(20) are defined in Appendix 1 of the supplementary material.
