Mathematical Problems in Engineering

Volume 2014, Article ID 565137, 10 pages

http://dx.doi.org/10.1155/2014/565137

## Solving Nonstiff Higher Order ODEs Using Variable Order Step Size Backward Difference Directly

^{1}Institute for Mathematical Research, UPM, Selangor Darul Ehsan, 43400 Serdang, Malaysia

^{2}Department of Mathematics, Faculty of Science, UPM, Selangor Darul Ehsan, 43400 Serdang, Malaysia

Received 11 March 2014; Revised 3 July 2014; Accepted 22 July 2014; Published 19 August 2014

Academic Editor: Alessandro Palmeri

Copyright © 2014 Ahmad Fadly Nurullah Rasedee et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Current numerical techniques for solving systems of higher order ordinary differential equations (ODEs) directly calculate the integration coefficients at every step. Here, we propose a method that solves higher order ODEs directly by calculating the integration coefficients only once at the beginning of the integration and, if required, once more at the end. The formulae are derived in terms of backward differences in a constant step size formulation. The method developed is validated by solving some higher order ODEs directly with variable order and step size. To simplify the evaluation of the integration coefficients, we find the relationship between coefficients of various orders. The results presented confirm our hypothesis.

#### 1. Introduction

In this paper, we will focus only on nonstiff ODEs of the form given where in the interval , is the order of the ODE. We impose the condition that is continuous. This implies that solution for exists.

Many science and engineering problems take the form of higher order ordinary differential equation (ODE) initial value problems (IVPs). Authors such as Gear [1], Suleiman [2, 3], Hall and Suleiman [4], and Omar [5] have suggested approaches for solving higher order ODEs directly. An algorithm was designed by Suleiman [2] to solve stiff and nonstiff higher order ODEs directly without reducing the order of the problems to first order; he called it the direct integration method. The drawback of the methods described by Omar [5], Suleiman [3], Gear [1], and Lambert [6] is the tedious calculation of the divided differences and recurrence relations in computing the integration coefficients.

Here, a new and efficient algorithm for solving IVPs of higher order ODEs directly using a variable order step size method in its backward difference formulation (the MSBD method) is developed. In contrast to Suleiman’s [3] direct integration (DI) method, which recalculates the integration coefficients at every step, the MSBD code calculates them once at the start and once more when calculating the last point. The simpler nature of backward differences compared to divided differences gives the MSBD an added advantage over the DI method when formulating and coding the method, and it produces more elegant error formulae. The DI method in Suleiman [2] has been proven to converge. We note that the DI method is formulated in terms of divided differences while the proposed MSBD technique is based on backward differences; therefore, the convergence of the MSBD can be proven in a similar way. A brief explanation of the DI code is given below.

Let be the integrating polynomial of degree ; the interpolating points are denoted by and obtained through divided differences.

We define to be the -fold integral and the integration coefficients obtained are denoted as follows: For the case , where .

Let be given explicitly by which are the predicted derivatives.

Using the coefficients, the predictor and corrector are given by

#### 2. Derivation of th Order Explicit and Implicit Backward Differences Method

In this section, the integration coefficients of the backward difference formulae are derived in order to obtain a relationship between the explicit and implicit formulae. This will make the computation easier.

Integrating (1) once yields Let be the interpolating polynomial which interpolates the values ; then Next, approximating in (8) with and letting give us where Let the generating function for the coefficients be defined as follows: Substituting (12) in gives which leads to Hence, the coefficients of the backward difference formulation, , are given by To derive the th order generating function, we integrate (1) number of times and mathematical induction gives This gives the generalized explicit coefficients which are denoted by Similarly, deriving the generalized implicit formulae gives the following relationship for the generating functions: where the coefficients are given by and is the Lagrange coefficient.
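The displayed equations in this derivation did not survive extraction, but for the first-order case the coefficients produced by this generating-function construction are the classical explicit (Adams-Bashforth) backward-difference coefficients. As a hedged sketch, they can be computed from the standard textbook recursion (function name and formulation are ours, not code from the paper):

```python
from fractions import Fraction

def explicit_coeffs(n):
    """First n explicit (Adams-Bashforth) backward-difference integration
    coefficients gamma_k, from the standard recursion
    gamma_m + gamma_{m-1}/2 + ... + gamma_0/(m+1) = 1."""
    g = []
    for m in range(n):
        g.append(Fraction(1) - sum(g[j] / Fraction(m + 1 - j) for j in range(m)))
    return g

# gamma_0..gamma_4 = 1, 1/2, 5/12, 3/8, 251/720
coeffs = explicit_coeffs(5)
```

Exact rational arithmetic (`Fraction`) avoids the round-off that would otherwise accumulate in the recursion for higher orders.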

#### 3. The Relationship between the Explicit and Implicit Integration Coefficients

Calculating the integration coefficients directly is time consuming when a large number of integrations is involved. By obtaining a recursive relationship between the coefficients, we can compute the implicit integration coefficients more efficiently. The relationship between the explicit and implicit coefficients is discussed below.

For the first order coefficients, It can be written as By substituting into (22), we have Solving the equation above gives the recursive relationship For the second order coefficients, It can be written as Substituting (24) into the equation above gives or which can be simplified as Solving the equation above gives the recursive relationship For the th order coefficients, using mathematical induction, we have In a similar manner to the case of the second order coefficients, we obtain the general th order coefficients as follows:
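For the first-order case, the recursive relationship in question is the classical relation between Adams-Moulton and Adams-Bashforth coefficients, gamma*_0 = gamma_0 and gamma*_m = gamma_m − gamma_{m−1}. A minimal sketch (standard textbook formulation, not the paper's generalized higher-order coefficients):

```python
from fractions import Fraction

def explicit_coeffs(n):
    # Explicit (Adams-Bashforth) coefficients from the standard recursion.
    g = []
    for m in range(n):
        g.append(Fraction(1) - sum(g[j] / Fraction(m + 1 - j) for j in range(m)))
    return g

def implicit_coeffs(n):
    # Implicit (Adams-Moulton) coefficients obtained recursively from the
    # explicit ones instead of from scratch:
    # gamma*_0 = gamma_0 and gamma*_m = gamma_m - gamma_{m-1} for m >= 1.
    g = explicit_coeffs(n)
    return [g[0]] + [g[m] - g[m - 1] for m in range(1, n)]

# gamma*_0..gamma*_4 = 1, -1/2, -1/12, -1/24, -19/720
implicit = implicit_coeffs(5)
```

The recursion turns an O(n^2) direct computation of each implicit coefficient into a single subtraction once the explicit ones are known, which is the efficiency point made above.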

#### 4. Order and Step Size Selection

An important decision to be made is whether to accept the result of an integration step. Although the efficiency of an algorithm is affected by the strategy for selecting the order and step size, its reliability is affected by the acceptance criteria used.

The literature on variable order step size codes shows numerous strategies for order and step size selection. Varying the order in a multistep method is very easy, as the order depends directly on the back values stored. After the completion of any step, the order can be increased by one if none of the back values used in the previous step are discarded; it can also be decreased to any desired value by discarding the appropriate number of back values. Experience suggests that order strategies which are not biased toward selecting a lower order are efficient for nonstiff problems when implemented in an Adams code. After considering certain criteria, we adopted order strategies similar to those used by Shampine and Gordon [7].

Let be the calculated step size and the final step size (see Suleiman [3]). In practice, we multiply by a safety factor such that , to give a more conservative estimate for and hence reduce the number of rejected steps. For our code, we take the safety factor to be 0.8. Shampine and Gordon [7] discuss a practical approach to convergence and stability when using a variable step size; their experience suggests that the ratio of successive step sizes needs to be restricted to ensure stability. For this reason, and for the reasons of economy discussed in the section on generating the algorithm, the use of a constant step size would appear desirable. Opposing this view, however, is the fact that using the maximum possible step size minimizes the number of steps taken and hence the number of derivative evaluations.

#### 5. Error Estimation

Adopting the strategies of Hall and Watt [8] for higher order ODEs, the estimates of the local error at each step of the integration are given below.

The predictor takes the form of where the constant is independent of . In the algorithm, the corrector can be written as follows: where denotes the th backward difference using for . It can be shown that where (36) can be simplified for computation to

The Milne error estimate states that the local truncation error (LTE) is simply the difference between the two possibilities of and , giving The selection of an appropriate , for , to control the order and step size is as described in Suleiman [3].
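The Milne device can be sketched in its simplest form as follows (a generic illustration; the pair-dependent error-constant factor is taken as 1 here, since the paper's constants were lost in extraction):

```python
def milne_lte(y_pred, y_corr, c=1.0):
    # Milne device: the local truncation error is estimated from the
    # difference between the predicted and corrected values; c is the
    # pair-dependent ratio of error constants (assumed 1 here).
    return c * abs(y_corr - y_pred)
```

The attraction of the device is that both values are already available from the predictor-corrector step, so the estimate costs nothing extra.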

When reducing the order, the estimated error would be Since a different predictor, of order one less, would have been used, from the mean value theorem, we are able to write In practice, to avoid the extra derivative evaluation required for (40), the decision to reduce the order is based on the estimate which is immediately available.

To consider the increase in order, the estimated error is where the predicted difference is based on a predictor with an extra term included. In estimating , we obtain leading to which establishes the asymptotic validity of using

#### 6. Changing the Step Size

We implement the step size changing technique mentioned in Lambert [6]. Implementing the Adams-Bashforth methods as predictors and the Adams-Moulton methods as correctors, commonly known as the ABM method, in PECE mode in backward difference form, we then adopt the algorithm for doubling or halving the step size from Krogh [9], which was derived for the ABM method.
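As an illustration of the ABM pair in PECE mode, the following is a first-order scalar sketch with a fixed step using the second-order pair (our simplification, not the paper's backward-difference MSBD code; Krogh's doubling/halving logic is omitted):

```python
def pece_step(f, x, y, f_prev, h):
    """One PECE step of the 2nd-order Adams-Bashforth-Moulton pair."""
    fn = f(x, y)
    y_p = y + h * (1.5 * fn - 0.5 * f_prev)  # P: Adams-Bashforth predictor
    f_p = f(x + h, y_p)                      # E: evaluate at the predicted value
    y_c = y + 0.5 * h * (fn + f_p)           # C: Adams-Moulton corrector
    return y_c, fn                           # fn becomes f_prev for the next step

def integrate(f, x0, y0, h, nsteps):
    # Start-up: one Euler step supplies the single back value the pair needs.
    f0 = f(x0, y0)
    x, y, f_prev = x0 + h, y0 + h * f0, f0
    for _ in range(nsteps - 1):
        y, f_prev = pece_step(f, x, y, f_prev, h)
        x += h
    return x, y

# y' = -y, y(0) = 1 on [0, 1]; the exact solution at x = 1 is e^{-1} ~ 0.3679
x, y = integrate(lambda t, u: -u, 0.0, 1.0, 0.01, 100)
```

The final E of PECE mode is implicit here: the next step's `fn` is evaluated at the corrected value, so the propagated derivatives are the corrected ones.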

#### 7. Last Step for Solving Higher Order ODEs

As mentioned previously, the proposed method calculates the integration coefficients only once. In general, however, the final step will not fit our current step size strategy of , and . Here, we need to calculate the coefficients once more for the last step, when the step size is of the form , . The following are the generating functions for the explicit coefficients: and the implicit coefficients: The following yields the relationship between the explicit and implicit coefficients: Tables 1 and 2 contain values for the coefficients when .

#### 8. The MSBD Algorithm

The preceding sections were devoted to the calculation of the integration coefficients for the explicit formulae, which form the basis of the predicted values, and of the implicit coefficients, which are obtained from the explicit ones.

For the sake of clarity, we present the algorithm for the MSBD.

*Step 1.* The integration coefficients are calculated from (16), (18), and (34).

*Step 2.* Use the back values to obtain the predictor in (30).

*Step 3.* Calculate the corrected values in terms of the predicted values given in (35).

*Step 4.* The errors , and are obtained from (39), (42), and (46).

*Step 5.* Determine whether the estimated error satisfies the local accuracy requirement, which we take to be TOL, where one choice of the weights gives an absolute error test, another gives a relative error test, and a third gives a mixed error test.
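The three error tests in Step 5 can be sketched with the usual weight convention (the specific weight values below are the standard choice, assumed here because the paper's values were lost in extraction):

```python
def error_norm(e, y, A, B):
    """Weighted error measure max_i |e_i| / (A + B * |y_i|).
    (A, B) = (1, 0): absolute test; (0, 1): relative test; (1, 1): mixed test."""
    return max(abs(ei) / (A + B * abs(yi)) for ei, yi in zip(e, y))
```

The mixed test behaves like an absolute test for components near zero and like a relative test for large components, which is why it is the default for all the test problems below.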

*Step 6.* The order is lowered by one if, for , and . If is available, we lower the order if and . The order is raised by one only after a successful step at constant step size such that, for , and, for . We restrict to the range .

*Step 7.* Step size selection is also based on the local accuracy requirements. The algorithm for doubling and halving the step size is that derived by Krogh [9].

*Step 8.* If , where is the current step size and denotes the end of the interval, repeat Steps 2–7. Otherwise, determine where . We then calculate the new integration coefficients as given in (47), (48), and (50). Using the new coefficients, repeat Steps 2–7 and exit the program.

#### 9. Test Problems and Numerical Results

##### 9.1. Numerical Results

Tables 3, 4, 5, and 6 show the numerical results for Problems 1–4 when solved with direct integration, backward difference, and also by reduction to first order systems. Problems 1 and 2 are well-behaved problems whereas Problems 3 and 4 are without exact solutions. For the first two problems, we evaluate the maximum and average values of the error in the computed solution. The definition of the error is as defined above. The following notation is used:

- TIME: the execution time taken in microseconds
- FS: the number of failed steps
- STEPS: the total number of steps
- MTD: the name of the method
- SUCSTEP: the number of successful steps
- MAXE: the maximum error
- AVER: the average error
- TOL: the tolerance used
- DI: direct integration
- D1: direct integration with reduction to a first order system
- 1PBDVSO: backward difference
- B1: backward difference with reduction to a first order system

The errors calculated are defined as with as the th component of . correspond to the absolute error test, correspond to the mixed error test, and correspond to the relative error test. The mixed error test is used for all test problems. The maximum and average errors are given by where is the number of equations in the systems.
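The MAXE and AVER statistics can be sketched as a straightforward reading of the definitions above (the nested-list layout of the per-step, per-component errors is our assumption):

```python
def max_and_average_error(errors):
    """errors[t][i] is the weighted error of component i at step t.
    MAXE is the largest entry over all steps and components; AVER is
    the mean over all steps and all equations in the system."""
    flat = [abs(e) for step in errors for e in step]
    return max(flat), sum(flat) / len(flat)
```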

The results of the backward difference method (1PBDVSO) are measured against the D1, B1, and DI methods on the same problems. The variable order step size codes are implemented for solving the problems. We begin computing the solution using a higher tolerance ().

*Problem 1. *Consider
Solution is as follows:
First order system is as follows:
Solution is as follows:
For source, see Shampine and Gordon [7].

*Problem 2. *Consider
Solution is as follows:
First order system is as follows:
Solution is as follows:
For source, see Suleiman [2].

*Problem 3. *Van der Pol’s equation is as follows:
First order system is as follows:
For source, see Suleiman [2].

*Problem 4. *Control theory is as follows:
First order system is as follows:
For source, see Enright et al. [10].

#### 10. Comments on Numerical Results and Conclusion

Results for the first problem in Table 3, whose solution has a trigonometric form, show that the 1PBDVSO is quicker in terms of execution time than the other three methods. This is not surprising because divided differences involve division whereas backward differences do not. Since the solution involves trigonometric functions, the divided differences can be small and can magnify round-off errors; this can be seen clearly as the tolerance is decreased. The solution to Problem 2 has a positive exponential form, so the difference in accumulated errors between divided differences and backward differences is small. This is reflected in the graph of accuracy against execution time in Figure 2, where the 1PBDVSO and DI curves almost overlap.

In Problem 3, the reduction to a first order system using divided differences cannot solve the Van der Pol equation, while the backward difference reduction can; the DI direct method and the 1PBDVSO are both successful in solving this equation. This conclusion is judged by the closeness of the solutions at . It also indicates the instability of the problem when reduced to first order and solved using divided differences. For Problem 4, which has no given solution, a similar behavior pattern occurs. This indicates that high order problems are preferably solved directly rather than reduced to first order. Overall, the tables show that the accuracy of the 1PBDVSO is better than that of the other methods. To see this more clearly, we present graphs of execution time against accuracy, in which, at any particular abscissa (execution time), lower ordinates (accuracy) represent more accurate points. In the graph in Figure 1, the 1PBDVSO is the most accurate method since its curve lies below the others at all the abscissae chosen.

The numerical results show that the 1PBDVSO is better than the DI method in terms of accuracy, competitive in terms of execution time, and more stable when tested on more demanding problems. We are inclined to choose the 1PBDVSO because, for some problems, it is more accurate. This can be seen from the figures above, where the undermost curve is the more efficient one: by this simple performance index, the point that is lowest at a particular time is the more accurate. These results show that the variable order step size method gives acceptable solutions and is suitable for solving higher order ODEs directly.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

The researchers would like to thank the Ministry of Higher Education (MOHE) for its MyBrain15 (MyPhD) scholarship.

#### References

1. C. W. Gear, “The numerical integration of ordinary differential equations,” *Mathematics of Computation*, vol. 21, pp. 146–156, 1967.
2. M. B. Suleiman, *Generalised Multistep Adams and Backward Differentiation Methods for the Solution of Stiff and Non-Stiff Ordinary Differential Equations*, Ph.D. thesis, University of Manchester, Manchester, UK, 1979.
3. M. B. Suleiman, “Solving nonstiff higher order ODEs directly by the direct integration method,” *Applied Mathematics and Computation*, vol. 33, no. 3, pp. 197–219, 1989.
4. G. Hall and M. B. Suleiman, “Stability of Adams-type formulae for second-order ordinary differential equations,” *IMA Journal of Numerical Analysis*, vol. 1, no. 4, pp. 427–428, 1981.
5. Z. B. Omar, *Parallel Block Methods for Solving Higher Order Ordinary Differential Equations Directly*, Ph.D. thesis, Universiti Putra Malaysia, 1999.
6. J. D. Lambert, *Computational Methods in Ordinary Differential Equations*, John Wiley & Sons, New York, NY, USA, 1973.
7. L. F. Shampine and M. K. Gordon, *Computer Solution of Ordinary Differential Equations: The Initial Value Problem*, W. H. Freeman, San Francisco, Calif, USA, 1975.
8. G. Hall and J. M. Watt, Eds., *Modern Numerical Methods for Ordinary Differential Equations*, Clarendon Press, Oxford, UK, 1976.
9. F. T. Krogh, “Algorithms for changing the step size,” *SIAM Journal on Numerical Analysis*, vol. 10, no. 5, pp. 949–965, 1973.
10. W. H. Enright, T. E. Hull, and B. Lindberg, “Comparing numerical methods for systems of ordinary differential equations,” Tech. Rep. 69, Department of Computer Science, University of Toronto, 1974.