Research Article | Open Access
A Reliable Analytical Method for Solving Higher-Order Initial Value Problems
In this article, a new analytical method has been devised to solve higher-order initial value problems for ordinary differential equations. This method was implemented to construct a series solution for higher-order initial value problems in the form of a rapidly convergent series with easily computable components using symbolic computation software. The proposed method is based on the Taylor series expansion which constructs an analytical solution in the form of a polynomial and reproduces the exact solution when the solution is polynomial. This technique is applied to a few test examples to illustrate the accuracy, efficiency, and applicability of the method. The results reveal that the method is very effective, straightforward, and simple.
The study of nonlinear problems is of crucial importance not only in mathematics but also in physics, engineering, economics, and other disciplines, since most phenomena in our world are essentially nonlinear and are described by nonlinear equations. Nonlinear problems are very difficult to solve and, in general, it is often harder to obtain an analytic approximation than a numerical one for a given nonlinear problem. In this paper, we focus on finding approximate solutions to higher-order initial value problems (IVPs), that is, higher-order ordinary differential equations subject to given initial conditions. Accurate and fast numerical solution of higher-order IVPs is of great importance due to its wide application in scientific and engineering research.
In the fields of engineering and science, we come across physical and natural phenomena which, when represented by mathematical models, turn out to be differential equations. For example, simple harmonic motion, the equation of motion, and the deflection of a beam are all represented by differential equations. Hence, the solution of differential equations is a necessity in such studies. A number of differential equations studied in calculus admit closed-form solutions, but not all differential equations possess closed-form or finite-form solutions, and even when they do, a method for obtaining them may not be known. In such situations, we resort to numerical solutions of differential equations. In research, especially since the advent of the computer, numerical solution of differential equations has become much easier to carry out.
In this work, we introduce a new analytical method, which we call the residual power series (RPS) method [1, 2], to find series solutions to linear and strongly nonlinear higher-order IVPs. The RPS method is effective and easy to use for solving higher-order IVPs without linearization, perturbation, or discretization. This method constructs an analytical approximate solution in the form of a polynomial. The RPS method is different from the traditional higher-order Taylor series approach, which is computationally expensive for large orders; it is an alternative procedure for obtaining an analytic Taylor series solution of higher-order IVPs. By using the residual error concept, we obtain a series solution, in practice a truncated series solution. As we will see later, the exact solution is recovered when the solution is a polynomial. Moreover, the solution and all its derivatives are available at every point of the given interval. On the other hand, the RPS method does not require converting the higher-order equation to a first-order system; as a result, the method can be applied directly to the given problem by choosing appropriate values for the initial guess approximation.
The purpose of this paper is to explain the idea of the RPS method and to test its efficiency and applicability. To reach our goal, we will construct a symbolic approximate RPS solution for higher-order ordinary differential equations of the following form:
y^(n)(x) = f(x, y(x), y'(x), ..., y^(n-1)(x)), x_0 ≤ x ≤ x_0 + a, (1)
subject to the initial conditions
y^(j)(x_0) = y_j, j = 0, 1, ..., n - 1, (2)
where n ≥ 1, f is a nonlinear function of x, y, y', ..., y^(n-1), y is an unknown function of the independent variable x to be determined, and x_0, y_0, y_1, ..., y_{n-1}, and a are real finite constants with a > 0. Throughout this paper, we assume that f and the solution y are analytic functions on the given interval.
In recent years, several methods have been used to find approximate solutions to higher-order IVPs, such as the homotopy analysis method, the differential transformation method, the Adomian decomposition method, the Jacobi-Gauss collocation method, the Schauder bases method, and other techniques [8–12]. On another front, applications of other versions of series solutions to linear and nonlinear problems can be found in [13–18], and for the numerical solvability of different categories of differential equations, one can consult [19, 20].
The outline of the paper is as follows. In the next section, we present the basic idea of the RPS method. In Section 3, a convergence theorem and an error analysis are discussed in order to capture the behavior of the solution. In Section 4, numerical examples are given to illustrate the capability of the proposed method. The paper ends in Section 5 with some concluding remarks.
2. The Idea of the RPS Method
In this section, we employ our RPS algorithm to find a series solution for a higher-order ordinary differential equation subject to given initial conditions. We first formulate and then analyze the RPS method for solving such equations.
The RPS method consists in expressing the solution of (1) and (2) as a power series expansion about the initial point x = x_0 [1, 2]. To achieve our goal, we suppose that this solution takes the form
y(x) = Σ_{k=0}^{∞} f_k(x), (3)
where the terms of approximation f_k(x) are given as f_k(x) = c_k (x - x_0)^k.
Obviously, for k = 0, 1, ..., n - 1, the coefficients are c_k = y_k / k!, since the terms f_k(x) must satisfy the initial conditions (2). As a result, we have the initial guess approximation of y(x), which is as follows:
y_init(x) = Σ_{k=0}^{n-1} (y_k / k!) (x - x_0)^k. (4)
Prior to applying the RPS method for solving (1) and (2), we substitute the mth-truncated series
y_m(x) = Σ_{k=0}^{m} c_k (x - x_0)^k (6)
into (1) to obtain the following definition of the mth residual function:
Res_m(x) = y_m^(n)(x) - f(x, y_m(x), y_m'(x), ..., y_m^(n-1)(x)), (7)
and the following definition of the ∞th residual function: Res_∞(x) = lim_{m→∞} Res_m(x).
It is clear that Res_∞(x) = 0 for each x in the given interval. This shows that the residual function Res_∞(x) is infinitely many times differentiable at x = x_0. On the other hand, it is easy to see that (d^k/dx^k) Res_∞(x_0) = (d^k/dx^k) Res_m(x_0) = 0 for each k = 0, 1, ..., m. In fact, this relation is a fundamental rule in the RPS method and its applications [1, 2].
To obtain the nth coefficient of the approximate solution, we put m = n and require Res_n(x_0) = 0. That is, we substitute x = x_0 into (7) to obtain an algebraic equation for c_n.
Similarly, to find the (n+1)th coefficient, we put m = n + 1 and require Res'_{n+1}(x_0) = 0. In other words, we differentiate both sides of (7) with respect to x and substitute x = x_0. Thus, using the fact that Res'_{n+1}(x_0) = 0 yields the value of c_{n+1}. Hence, the (n+1)th approximation of (1) and (2) can be written as y_{n+1}(x) = Σ_{k=0}^{n+1} c_k (x - x_0)^k.
This procedure can be repeated until the coefficients of the RPS solution for (1) and (2) are obtained to arbitrary order. Moreover, higher accuracy can be achieved by evaluating more components of the solution, in other words, by choosing m large in the truncated series (6).
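The coefficient-by-coefficient procedure above can be sketched in a few lines of symbolic code. The sketch below is our illustration, using SymPy rather than the Maple code of the article, applied to the toy problem y'' + y = 0, y(0) = 0, y'(0) = 1; each new coefficient is determined from the condition that the (k - n)th derivative of the residual vanishes at the initial point:

```python
import sympy as sp

x = sp.symbols('x')

def rps_solve(residual, ics, order):
    """Build y(x) = sum c_k x**k term by term by the RPS rule.

    residual(y) returns the ODE residual as an expression in x; ics is
    [y(0), y'(0), ...], fixing the first len(ics) coefficients.
    """
    n = len(ics)  # order of the ODE
    coeffs = [sp.sympify(ics[j]) / sp.factorial(j) for j in range(n)]
    for k in range(n, order + 1):
        c = sp.Symbol('c')
        y_trial = sum(coeffs[j] * x**j for j in range(k)) + c * x**k
        # RPS condition: the (k - n)th derivative of the residual
        # vanishes at the initial point x = 0.
        eq = sp.diff(residual(y_trial), x, k - n).subs(x, 0)
        coeffs.append(sp.solve(eq, c)[0])
    return sum(coeffs[j] * x**j for j in range(order + 1))

# Toy problem: y'' + y = 0, y(0) = 0, y'(0) = 1 (exact solution sin x).
approx = rps_solve(lambda y: sp.diff(y, x, 2) + y, [0, 1], 7)
print(sp.expand(approx))  # the Taylor polynomial of sin(x) through x**7
```

As expected, the computed truncation reproduces the degree-7 Taylor polynomial of sin x, which illustrates why the RPS series coincides with the Taylor series of the exact solution.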
3. Convergence Theorem and Error Analysis
Numerical techniques are widely used by scientists and engineers to solve their problems. A major advantage of numerical techniques is that a numerical answer can be obtained even when a problem has no analytical solution. However, a result from numerical analysis is, in general, an approximation, which can be made as accurate as desired. The reliability of a numerical result depends on an error estimate or bound; therefore, the analysis of errors and their sources is a critically important part of the study of numerical techniques. In this section, we study the convergence of the RPS method.
Taylor’s theorem allows us to represent fairly general functions exactly in terms of polynomials with a known, specified, and boundable error. The next theorem guarantees convergence of the RPS method to the exact analytic solution of (1) and (2).
Theorem 1. Suppose that the exact solution y(x) of (1) and (2) is analytic at x = x_0. Then the RPS series solution coincides with the Taylor series expansion of y(x) about x = x_0.
Proof. Assume that the approximate solution of (1) and (2) is
y(x) = Σ_{k=0}^{∞} c_k (x - x_0)^k. (13)
In order to prove the theorem, it is enough to show that the coefficients in (13) take the form
c_k = y^(k)(x_0) / k!, k = 0, 1, 2, ..., (14)
where y(x) is the exact solution of (1) and (2). It is clear that, for k = 0, 1, ..., n - 1, (4) gives the values c_k = y_k / k! = y^(k)(x_0) / k!, so (14) holds for the first n coefficients.
Moreover, for k = n, we substitute x = x_0 into (1) to obtain y^(n)(x_0) = f(x_0, y_0, y_1, ..., y_{n-1}).
On the other hand, from (13) and the initial conditions, we can write the nth-truncated approximation as y_n(x) = Σ_{j=0}^{n-1} (y_j / j!)(x - x_0)^j + c_n (x - x_0)^n.
Also, by substituting this truncation into (1) and then setting x = x_0, we get n! c_n = f(x_0, y_0, y_1, ..., y_{n-1}), and hence c_n = y^(n)(x_0) / n!.
Further, for k = n + 1, differentiating both sides of (1) with respect to x and then substituting x = x_0 expresses y^(n+1)(x_0) in terms of x_0, y_0, ..., y_{n-1}, and y^(n)(x_0). By substituting the (n+1)th-truncated approximation into the differentiated equation and setting x = x_0, we obtain the same expression with (n + 1)! c_{n+1} in place of y^(n+1)(x_0). Finally, by comparing the two expressions, we can conclude that c_{n+1} = y^(n+1)(x_0) / (n + 1)!. By continuing the above procedure, we can easily prove (14) for every k ≥ n. So, the proof of the theorem is complete.
Corollary 2. If the exact solution of (1) and (2) is a polynomial, then the RPS method obtains the exact solution.
It will be convenient to have a notation for the error in the approximation. Accordingly, we will let Rem_n(x) denote the difference between y(x) and its nth Taylor polynomial y_n(x) obtained from the RPS method; that is,
Rem_n(x) = y(x) - y_n(x) = Σ_{k=n+1}^{∞} c_k (x - x_0)^k.
The functions Rem_n(x) are called the nth remainders of the RPS approximation of y(x). In fact, the remainders typically become smaller and smaller, approaching zero, as n gets larger.
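Since the RPS series is the Taylor series of the exact solution, the standard Lagrange remainder estimate makes this claim quantitative. The bound below is the classical Taylor result, stated in our notation; the constant M_{n+1} is any bound on the (n+1)th derivative over the interval of interest:

```latex
\left|\mathrm{Rem}_n(x)\right|
  \le \frac{M_{n+1}}{(n+1)!}\,\left|x - x_0\right|^{\,n+1},
\qquad
M_{n+1} \ge \sup_{t \in [x_0,\, x]} \left| y^{(n+1)}(t) \right|.
```

In particular, for an analytic solution the remainder tends to zero on any interval inside the radius of convergence as n grows.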
4. Numerical Results and Discussion
Numerical methods tend to emphasize the implementation of algorithms. The aim of numerical methods is therefore to provide systematic techniques for solving problems in numerical form. The process of solving a problem generally involves starting from initial data, using high-precision digital computers, following the steps of an algorithm, and finally obtaining the results. Often the numerical data and the methods used are approximate. In this section, we consider six examples to demonstrate the performance and efficiency of the present technique. Throughout this paper, all symbolic and numerical computations were performed using the Maple 13 software package.
First of all, error functions are introduced to study the accuracy and efficiency of the method. Actually, continuous approximations to the solution will be obtained. To show the accuracy of the present method for our problems, we report four types of error. The first one is the residual error, Res, defined as
Res_k(x) = |y_k^(n)(x) - f(x, y_k(x), y_k'(x), ..., y_k^(n-1)(x))|,
while the exact, Ext, relative, Rel, and consecutive, Con, errors are defined, respectively, by
Ext_k(x) = |y(x) - y_k(x)|, Rel_k(x) = |y(x) - y_k(x)| / |y(x)|, Con_k(x) = |y_{k+1}(x) - y_k(x)|,
for x in the given interval, where y_k(x) is the kth-order approximation of y(x) obtained by the RPS method and y(x) is the exact solution.
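These four error measures are straightforward to evaluate once an approximation is in hand. The sketch below is our illustration on a stand-in problem, not one of the article's examples: the 5th- and 7th-order truncations of y'' + y = 0, y(0) = 0, y'(0) = 1, whose exact solution is sin x:

```python
import sympy as sp

x = sp.symbols('x')

# Stand-in data (an illustrative assumption): 5th- and 7th-order
# approximations of y'' + y = 0, y(0) = 0, y'(0) = 1; exact solution sin x.
y5 = x - x**3/6 + x**5/120
y7 = y5 - x**7/5040
exact = sp.sin(x)

def errors_at(t):
    res = abs((sp.diff(y7, x, 2) + y7).subs(x, t))  # residual error
    ext = abs((exact - y7).subs(x, t))              # exact error
    rel = ext / abs(exact.subs(x, t))               # relative error
    con = abs((y7 - y5).subs(x, t))                 # consecutive error
    return [sp.N(e, 5) for e in (res, ext, rel, con)]

for t in (sp.Rational(1, 4), sp.Rational(1, 2), 1):
    print(t, errors_at(t))
```

All four indicators are small on [0, 1] and shrink toward the initial point, which is the qualitative behavior reported in the tables below.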
The proposed method provides an analytical approximate solution in terms of an infinite power series. In practice, there is a need to evaluate this solution and to obtain numerical values from the infinite power series. The series is therefore truncated, and a practical procedure is carried out to transform the otherwise analytical results into numerical values evaluated to a finite degree of accuracy.
In most real-life situations, the differential equation that models the problem is too complicated to solve exactly, and there is a practical need to approximate the solution. In the next example, the exact solution cannot be found analytically.
Example 1. Consider the Genesio equation:
x'''(t) + a x''(t) + b x'(t) + c x(t) - x^2(t) = 0, t ≥ 0, (26)
subject to the initial conditions
x(0) = x_0, x'(0) = x_1, x''(0) = x_2, (27)
where a, b, and c are positive real numbers satisfying ab < c.
The Genesio equation was presented as a third-order IVP that includes a simple square part and depends on three positive real parameters. The Genesio equation is one of the paradigms of chaos, since it captures many features of chaotic systems. The reader is kindly requested to go through [21–25] in order to learn more details about applications of the Genesio equation and methods of solution.
As we mentioned earlier, if we select the initial guess approximation according to the initial conditions (27), then the mth-truncated series takes the form
Here, the first three coefficients are fixed directly by the initial conditions (27). In order to find the values of the remaining coefficients of (28), we employ our RPS algorithm. Therefore, we construct the residual function as follows:
Now, to find the 3rd approximate solution, we put m = 3 in (29) and use the fact that the residual function vanishes at the initial point to conclude
For numerical results, fixed values of the parameters a, b, and c and of the initial data are considered. By continuing in the fashion discussed above, the 10th-order RPS solution leads to the following truncated series:
Numerical comparisons are studied next. Figure 1 shows a comparison between the numerical solutions of (26) and (27) obtained by the 10th-order RPS solution, the fourth-order Runge-Kutta method (RKM), and the fourth-order Predictor-Corrector method (PCM). Throughout this figure, the step size for the RKM and PCM is fixed at 0.01, while the starting values of the PCM are obtained from the fourth-order RKM. It is demonstrated that the RPS solution agrees very well with the solutions obtained by the RKM and PCM.
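A comparison of this kind can be reproduced in a few lines of pure Python by checking a truncated series against a classical RK4 integration of the Genesio system. The parameter values and initial data below are illustrative assumptions, not necessarily those used in the article:

```python
# Cross-check of a truncated series against RK4 for the Genesio equation
# written as a first-order system:
#   x' = y,  y' = z,  z' = -c*x - b*y - a*z + x**2.
# Parameter values and initial data are illustrative assumptions.
a, b, c = 1.2, 2.92, 6.0
x0, y0, z0 = 0.2, -0.3, 0.1

def f(state):
    x, y, z = state
    return (y, z, -c * x - b * y - a * z + x**2)

def rk4(state, h, steps):
    """Classical fourth-order Runge-Kutta with fixed step h."""
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                      for s, a1, a2, a3, a4 in zip(state, k1, k2, k3, k4))
    return state

# Third-order Taylor polynomial about t = 0: the t**3 coefficient comes
# from evaluating the ODE at the initial data.
z0_prime = -c * x0 - b * y0 - a * z0 + x0**2
def taylor3(t):
    return x0 + y0 * t + z0 / 2 * t**2 + z0_prime / 6 * t**3

t = 0.1
x_rk4 = rk4((x0, y0, z0), 0.01, 10)[0]
print(abs(taylor3(t) - x_rk4))  # small near t = 0
```

Even this very short truncation tracks the RK4 reference closely near the initial point; higher-order truncations extend the agreement over a wider interval.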
Example 2. Consider the Lienard equation:
y''(x) + a y(x) + b y^3(x) + c y^5(x) = 0 (36)
subject to the initial conditions
y(0) = y_0, y'(0) = y_1, (37)
where a, b, and c are real coefficients.
The Lienard equation in its general form, y'' + f(y) y' + g(y) = 0, can be regarded not only as a generalization of the damped pendulum equation or a damped spring-mass system, but is also used as a nonlinear model in many physically significant fields. For example, the choices f(y) = ε(y^2 - 1) and g(y) = y lead to the Van der Pol equation, which serves as a nonlinear model of electronic oscillation [26, 27]. In the general case, it is commonly believed that it is very difficult to find the exact solution of the Lienard equation by usual means. However, the special case (36) and (37) of the Lienard equation has been studied in [29, 30]. The reader is asked to refer to [26–33] in order to learn more details about the Lienard equation, including its history, its variants, and methods of solution.
Hence, as in the previous example, the first few truncated series approximations of the RPS solution for (36) and (37) are obtained, and so on. For numerical results, fixed values of the coefficients a, b, and c are considered. If we collect and extend the above results according to these parameters, then the RPS solution of y(x) can be truncated as follows:
In this manner, the components of the RPS solution are obtained as far as we like. In fact, as one can verify, this series agrees to the last computed term with the Taylor series of the exact closed-form solution:
Let us now carry out the error analysis of the RPS method for this example. Figure 2 shows the exact solution, the initial guess approximation, and four iterates of the RPS approximation. These graphs exhibit the convergence of the approximate solutions to the exact solution as the order of the approximation increases.
In Figure 3(a), we plot the exact error functions, which approach the horizontal axis as the number of iterations increases. These graphs show that the exact errors become smaller as the order of the solution increases, in other words, as we progress through more iterations. On the other hand, Figure 3(b) shows the residual error functions for the obtained approximations. These error indicators confirm the convergence of the RPS method with respect to the order of the approximation.
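The qualitative behavior just described, error curves approaching the axis as the order grows, can be reproduced for any analytic stand-in solution. The sketch below uses sin x as an assumed stand-in, not the Lienard solution itself:

```python
import math

# Illustration of convergence with order: maximum exact error of the
# k-th Taylor truncation of sin(x) on [0, 1] for increasing k.
# (sin x is an assumed stand-in, not the article's Lienard solution.)
def taylor_sin(t, k):
    return sum((-1)**j * t**(2 * j + 1) / math.factorial(2 * j + 1)
               for j in range(k + 1))

grid = [i / 100 for i in range(101)]
for k in range(1, 5):
    err = max(abs(math.sin(t) - taylor_sin(t, k)) for t in grid)
    print(k, err)
```

Each additional term reduces the maximum error by several orders of magnitude, mirroring the behavior of the error curves in Figure 3.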
Example 3. Consider Bratu’s equation:
u''(x) - 2e^{u(x)} = 0, 0 ≤ x ≤ 1, (42)
subject to the initial conditions
u(0) = 0, u'(0) = 0. (43)
The standard Bratu problem was used to model a combustion problem in a numerical slab. The Bratu model appears in a number of applications such as the fuel ignition of the thermal combustion theory, the model of the thermal reaction process, the Chandrasekhar model of the expansion of the universe, questions in geometry and relativity concerning the Chandrasekhar model, chemical reaction theory, and radiative heat transfer and nanotechnology [34–38]. The reader is kindly requested to go through [39–44] in order to know more details about applications of Bratu problem, its history and kinds, its method of solutions, and so forth.
If we calculate more terms and then collect the above results, the 10th-truncated RPS solution for (42) and (43) is obtained; it consists of the first ten terms of the series expansion of the exact solution. Thus, the solution of (42) and (43) has the general series form coinciding with the exact solution.
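Agreement between a truncated series and a closed-form solution can be checked symbolically. The sketch below works with one common Bratu-type IVP, u'' - 2e^u = 0, u(0) = u'(0) = 0, whose closed-form solution is u(x) = -2 ln(cos x); treating this particular variant as an assumption, SymPy confirms that the 10th-degree truncation leaves only a high-order residual:

```python
import sympy as sp

x = sp.symbols('x')

# Assumed Bratu-type IVP (a common variant in the literature, not
# necessarily the exact one solved in the article):
#   u''(x) - 2*exp(u(x)) = 0,  u(0) = u'(0) = 0,
# with closed-form solution u(x) = -2*ln(cos(x)).
exact = -2 * sp.log(sp.cos(x))

# 10th-degree Taylor polynomial of the closed form about x = 0.
u10 = sp.series(exact, x, 0, 11).removeO()
print(u10)

# Substituting the truncation into the ODE leaves a residual whose
# series expansion has no terms below order x**10.
residual = sp.diff(u10, x, 2) - 2 * sp.exp(u10)
print(sp.series(residual, x, 0, 10))
```

The vanishing low-order residual is exactly the RPS criterion: every derivative of the residual up to the truncation order is zero at the initial point.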
We mention here that this RPS solution is the same as the Adomian decomposition solution, the homotopy perturbation solution, the variational iteration solution, and the perturbation-iteration solution obtained in the literature.
Example 4. Consider the following second-order nonlinear equation (47), subject to the initial conditions (48).
As in the previous examples, if we select the initial guess approximation according to the initial conditions (48), then the RPS expansion of the solution takes the form
It is easy to discover that the higher-order coefficients in the expansion (49) all vanish; in other words, the series terminates, in full agreement with Corollary 2. Thus, the analytic approximate solution of (47) and (48) agrees with the exact solution.
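Corollary 2 can be seen concretely on a toy problem with a polynomial solution (our illustrative choice, not the article's Example 4): for y'' - 2 = 0, y(0) = 1, y'(0) = 0, with exact solution y(x) = 1 + x^2, every RPS coefficient beyond the quadratic term vanishes:

```python
import sympy as sp

x = sp.symbols('x')

# Toy problem with a polynomial solution (an illustrative assumption,
# not the article's Example 4):  y'' - 2 = 0, y(0) = 1, y'(0) = 0,
# whose exact solution is y(x) = 1 + x**2.
coeffs = [sp.Integer(1), sp.Integer(0)]  # fixed by the initial data
for k in range(2, 8):
    c = sp.Symbol('c')
    y_trial = sum(coeffs[j] * x**j for j in range(k)) + c * x**k
    res = sp.diff(y_trial, x, 2) - 2
    # RPS condition: (k-2)th derivative of the residual vanishes at 0.
    coeffs.append(sp.solve(sp.diff(res, x, k - 2).subs(x, 0), c)[0])

print(coeffs)  # [1, 0, 1, 0, 0, 0, 0, 0]: the series terminates
```

The truncated RPS series is therefore exactly 1 + x^2, the polynomial exact solution, as Corollary 2 predicts.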
A numerical experiment was carried out to verify the mathematical results, and the theoretical statements for the solution are supported by its outcome. Next, we present more complicated IVPs in order to show the efficiency and applicability of the present method.
Example 5. Consider the following fourth-order nonlinear equation, subject to the given initial conditions.
Example 6. Consider the following third-order nonlinear equation (57), subject to the initial conditions (58).
It is easy to see that the 10th-truncated RPS solution above can be separated in the form of the following series, which agrees well with the general form:
Clearly, the RPS method provides the approximate solution in terms of an infinite power series. Now, we collect and transform this series in order to discover the exact solution. Indeed, it is not difficult to see that the Taylor series of the exact closed-form solution of (57) and (58) is
Our next goal is to show how the value of m in the truncated series (6) affects the approximate solutions. To determine this effect, an error analysis is performed. We substitute the RPS approximations for various values of m into the exact error formula and obtain the exact error values. The maximum and average errors for (57) and (58) have been listed in Table 1.
In Table 2, the exact, relative, and residual errors have been calculated for various values of x in the solution interval to measure the extent of agreement between the 10th-order RPS solution and the exact solution. From the table, it can be seen that the RPS method provides an accurate approximate solution for (57) and (58). We can also note that the RPS solution is more accurate near the beginning of the solution interval.