International Journal of Differential Equations
Volume 2018, Article ID 6043936, 10 pages
Research Article

Convergent Power Series of sech(x) and Solutions to Nonlinear Differential Equations

U. Al Khawaja1 and Qasem M. Al-Mdallal2

1Physics Department, United Arab Emirates University, P.O. Box 15551, Al Ain, UAE
2Department of Mathematical Sciences, United Arab Emirates University, P.O. Box 15551, Al Ain, UAE

Correspondence should be addressed to U. Al Khawaja.

Received 25 September 2017; Revised 7 December 2017; Accepted 8 January 2018; Published 13 February 2018

Academic Editor: Jaume Giné

Copyright © 2018 U. Al Khawaja and Qasem M. Al-Mdallal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


It is known that the power series expansion of certain functions, such as sech(x), diverges beyond a finite radius of convergence. We present here an iterative power series (IPS) method to obtain a power series representation of sech(x) that is convergent for all x. The convergent series is a sum of the Taylor series of sech(x) and a complementary series that cancels the divergence of the Taylor series for |x| ≥ π/2. The method is general and can be applied to other functions known to have a finite radius of convergence, such as tan(x). A straightforward application of this method is to solve nonlinear differential equations analytically, which we also illustrate here. The method also provides a robust and very efficient numerical algorithm for solving nonlinear differential equations numerically. A detailed comparison with the fourth-order Runge-Kutta method and an extensive analysis of the behavior of the error and CPU time are performed.

1. Introduction

It is well-known that the Taylor series of some functions diverge beyond a finite radius of convergence [1]. For instance, the Taylor series of sech(x) and tan(x) diverge for |x| ≥ π/2. Increasing the number of terms in the power series does not increase the radius of convergence; it only makes the divergence sharper. The radius of convergence can be increased only slightly via some functional transforms [2]. Among the many different methods of solving nonlinear differential equations [3–9], the power series is the most straightforward and efficient [10]. It has been used as a powerful numerical scheme for many problems [11–19] including chaotic systems [20–23]. Many numerical algorithms and codes have been developed based on this method [10–12, 20–24]. However, the above-mentioned finiteness of the radius of convergence is a serious problem that hinders the use of this method for a wide class of differential equations, in particular the nonlinear ones. For instance, the nonlinear Schrödinger equation (NLSE) with cubic nonlinearity has the sech soliton as a solution. Using the power series method to solve this equation produces the power series of sech(x), which is valid only for |x| < π/2.

A review of the literature reveals that the power series expansion was exploited by several researchers [10–12, 20–24] to develop powerful numerical methods for solving nonlinear differential equations. This paper is motivated by a desire to extend these attempts into a numerical scheme with systematic control of the accuracy and error. Specifically, two main advances are presented in this paper: a method of constructing a convergent power series representation of a given function with an arbitrarily large radius of convergence, and a method of obtaining an analytic power series solution of a given nonlinear differential equation that is free from the finite radius of convergence. Throughout this paper, we show the robustness and efficiency of the method via a number of examples, including the chaotic Lorenz system [25] and the NLSE. Solving the problem of the finite radius of convergence thus opens the door for applying the power series method to a much larger class of differential equations, particularly the nonlinear ones.

It is worth mentioning that the literature includes several semianalytical methods for solving nonlinear differential equations, such as the homotopy analysis method (HAM), the homotopy perturbation method (HPM), and the Adomian decomposition method (ADM); for more details see [26–29] and the references therein. Essentially, these methods iteratively generate a series solution for the nonlinear system, where a linear differential equation has to be solved at each iteration. Although these methods prove effective in solving most nonlinear differential equations and in obtaining a convergent series solution, they have a few disadvantages, such as the rapid growth of the number of terms in the solution as the number of iterations increases. One of the most important advantages of the present technique is the simplicity of transforming the nonlinear differential equation into a set of simple algebraic difference equations which can be easily solved.

The paper is thus divided into two seemingly separate, but actually connected, main parts. In the first (Section 2), we show, for a given function, how a convergent power series is constructed out of the nonconverging one. In the second part (Section 3.1), we use this idea to solve nonlinear differential equations. In Section 3.2, we investigate the robustness and efficiency of the method by studying the behavior of its error and CPU time versus the parameters of the method. We summarise our results in Section 4.

2. Iterative Power Series Method

This section describes how to obtain a power series for a given function f(x) that converges for all x, even when its Taylor series does not. In brief, the method is as follows. We expand the function in a power series as usual, say around x = 0. Then we reexpress the higher-order coefficients, aₙ with n ≥ 2, in terms of the lowest-order ones, a₀ and a₁; this establishes a recursion relation, and the power series is thus written in terms of only these two coefficients. Then the series and its derivative are calculated at x = Δx, where Δx is much less than the radius of convergence of the power series. A new power series expansion of f(x) is then performed around Δx, and its higher-order coefficients are similarly reexpressed in terms of the lowest-order coefficients. The values of the previous series and its derivative calculated at Δx are then assigned to a₀ and a₁ of the new series, respectively. Then a new expansion around 2Δx is performed with the lowest-order coefficients taken from the previous series, and so on. This iterative process is repeated N times. The final series corresponds to a convergent series at x = NΔx.

Here is a detailed description of the method. The function f(x) is expanded in a Taylor series around x = 0. The infinite Taylor series is an exact representation of f(x) for |x| < x_c, where x_c is the radius of convergence; for |x| ≥ x_c the series diverges. We assume that the interval between 0 and x is divided into N small subintervals of width Δx such that Δx ≪ x_c. Expanding around the beginning of each subinterval, we obtain N convergent Taylor series representing f(x) in each subinterval, T_j(x̃) = Σₙ a_n^(j) x̃ⁿ with a_n^(j) = f⁽ⁿ⁾(x_j)/n!, where T_j denotes the Taylor series expansion of f around x_j = jΔx and f⁽ⁿ⁾(x_j) is the nth derivative of f calculated at x_j. It is noted that we use x̃ as the independent variable for the jth Taylor series expansion to distinguish it from x. However, these series cannot be combined into a single series since their ranges of applicability are different and do not overlap. To obtain a single convergent power series out of the set of series T_j, we put forward two new ideas, which constitute the basis of our method; namely:

Reexpress the higher-order coefficients a_n^(j), n ≥ 2, in terms of the lowest-order ones as a_n^(j) = F_n(a_0^(j), a_1^(j)), where the functional F_n is determined by differentiating f directly n times and then reexpressing the result in terms of f and its first derivative only. We conjecture that this is possible for a wide class of functions, if not all; at least for the two specific functions considered here, it turned out to be possible. Equation (1) then takes the form of a series whose coefficients, renamed for a reason to become obvious in the next section, are for all n ≥ 2 determined only by a_0^(j) and a_1^(j).

Calculate a_0^(j+1) from T_j at x̃ = Δx, which amounts to assigning the value of the Taylor series at the end of one subinterval to a₀ of the consecutive one. Equation (3) captures the essence of the recursive feature of our method; a_0^(N) is calculated recursively from a_0^(0) by repeated action of the right-hand side on a_0^(0). While T_j represents the function within a subinterval of width Δx, the sequence a_0^(j) corresponds to the values of the function at the ends of the subintervals. In the limit N → ∞, or equivalently Δx → 0, the discrete sets of values a_0^(j) and x_j render the continuous function f(x) and its independent variable x, respectively. Formally, the convergent power series expansion of f around x = 0 is thus given by the Nth iteration of this map.

As an illustrative example, we apply the method to sech(x). The infinite Taylor series expansion of this function diverges sharply at |x| = π/2. The first step is to determine the coefficients aₙ of the Taylor series of sech(x) around x = 0. The next step is to reexpress the higher-order coefficients, aₙ with n ≥ 2, in terms of the lowest-order coefficients. The identity tanh²(x) = 1 − sech²(x) is used to that end. It is noticed, however, that while it is possible to express the even-n coefficients in terms of a₀ only, the odd-n coefficients can only be expressed in terms of both a₀ and a₁. In the context of solving differential equations using the power series method, this reflects the fact that the solution is expressed in terms of two independent parameters (initial conditions). The sech function is indeed a solution of a second-order differential equation, which is solved using this method in the next section. Calculating the series T_j at the end of its interval of applicability, Δx, defines the “recursion” coefficients. Finally, the result is assigned to a_0^(j+1). The second independent coefficient, a_1^(j+1), is determined by the derivative of T_j calculated at Δx. While, in the limit Δx → 0, the sequence a_0^(j) corresponds to the function sech(x), the sequence a_1^(j) corresponds to its derivative. Therefore, the power series expansion of sech(x) and its first derivative are given by iterating a two-component map, where the superscript N of the matrix on the right-hand side denotes the Nth iteration of the matrix; the superscript j has been removed since the functional form of the recursion coefficients does not depend on j. The procedure of calculating the power series recursively is described as follows. First, a₀ = sech(0) = 1 and a₁ = sech′(0) = 0 are substituted in the right-hand side of the last equation. Then the result of the upper element is taken as the updated value of a₀ and, similarly, the lower element updates a₁. The two updated values are then resubstituted back in the right-hand side. The process is repeated N times. To obtain an explicit form of the series, we truncate the Taylor series at M = 4 and use N = 2 iterations.
The resulting expansion takes the form of a polynomial in x. It is noted that the higher-order coefficients, which correspond to ratios of large integers, are represented as real numbers for convenience. Already with such a small number of iterations, the number of terms is considerably larger than that of the truncated Taylor series, and it grows rapidly with N and M. The series contains only even powers of x due to the fact that sech(x) is an even function.
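The numerical version of this iteration is compact. The sketch below is our own illustration, not the authors' code (function and variable names are ours): it generates the coefficients from a₀ and a₁ using the equivalent differential relation sech″ = sech − 2 sech³, evaluates the truncated series and its derivative at the end of each subinterval Δx, and feeds these values back in as the new a₀ and a₁, exactly as described above.

```python
import math

def sech_coeffs(a0, a1, M):
    """Taylor coefficients about the current point, given y = a0 and y' = a1,
    using the identity sech'' = sech - 2*sech**3 (so y'' = y - 2*y**3)."""
    a = [a0, a1] + [0.0] * (M - 1)
    for n in range(M - 1):
        # cubic Cauchy product: c_n = sum over i+j+k = n of a_i * a_j * a_k
        c = sum(a[i] * a[j] * a[n - i - j]
                for i in range(n + 1) for j in range(n - i + 1))
        a[n + 2] = (a[n] - 2.0 * c) / ((n + 2) * (n + 1))
    return a

def ips_sech(x, N=100, M=12):
    """Evaluate sech(x) by re-expanding the truncated series N times."""
    dx = x / N
    a0, a1 = 1.0, 0.0                  # sech(0), sech'(0)
    for _ in range(N):
        a = sech_coeffs(a0, a1, M)
        a0 = sum(a[n] * dx**n for n in range(M + 1))
        a1 = sum(n * a[n] * dx**(n - 1) for n in range(1, M + 1))
    return a0

# x = 3 lies well beyond the Taylor radius pi/2 ~ 1.571:
print(ips_sech(3.0))                   # close to 1/cosh(3)
```

Evaluating at x = 3, well beyond the radius of convergence of the plain Taylor series, the iterated value agrees with 1/cosh(3) to high accuracy.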

It is also noted that the series (13) is composed of the Taylor expansion of sech(x) around zero, represented by the first three terms, and a series of higher-order terms generated by the nonlinearity in the recursion relations. In fact, we prove in the next section that this property holds for any M, N, and function f(x), provided that the Taylor series of the latter exists. Therefore, the power series expansion of sech(x), given by (12), can be put in the suggestive form S(x) = T(x) + C(x), where T(x) is the Taylor series of sech(x) about x = 0 and C(x) is a complementary series. It turns out that the complementary series renders S(x) convergent for |x| ≥ π/2. For finite N, the effect of C(x) is to shift the radius of convergence to a larger value x_c′ such that S(x) represents sech(x) for |x| < x_c′. In Figure 1 we plot the convergent power series obtained by the present method as given by (12) for finite M and N. The curve is indistinguishable from the plot of sech(x). Both the Taylor series expansion, T(x), and the complementary series, C(x), diverge sharply at |x| = π/2. Since C(x) is essentially zero for |x| < π/2, it does not affect the sum S(x) there. However, its major role is to cancel the divergence for |x| ≥ π/2. In the limit N → ∞, T(x) is an exact representation of sech(x) for |x| < π/2, and C(x) equals zero in the same interval. For |x| ≥ π/2, the divergences in T(x) and C(x) cancel each other with a remainder that is an exact representation of sech(x). In this manner, S(x) represents sech(x) for all x.
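The decomposition S(x) = T(x) + C(x) can be made explicit by carrying out the same iteration with exact polynomial arithmetic. The sketch below is our own construction (names are ours; we use rational arithmetic and the small values M = 4, N = 2 quoted above): it builds S(x) symbolically and confirms that its coefficients through order M reproduce the Taylor series of sech, 1 − x²/2 + 5x⁴/24, while everything above order M forms the complementary series C(x).

```python
from fractions import Fraction as F

DEG = 12                      # truncate all polynomials in x at this degree

def pmul(p, q):
    """Product of two polynomials, truncated at degree DEG."""
    r = [F(0)] * (DEG + 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if qj and i + j <= DEG:
                    r[i + j] += pi * qj
    return r

def padd(p, q):
    return [a + b for a, b in zip(p, q)]

def pscale(p, s):
    return [s * c for c in p]

def coeffs(a0, a1, M):
    """Polynomial form of the recursion a_{n+2} = (a_n - 2 c_n)/((n+2)(n+1))."""
    a = [a0, a1]
    for n in range(M - 1):
        c = [F(0)] * (DEG + 1)
        for i in range(n + 1):
            for j in range(n - i + 1):
                c = padd(c, pmul(pmul(a[i], a[j]), a[n - i - j]))
        a.append(pscale(padd(a[n], pscale(c, F(-2))), F(1, (n + 2) * (n + 1))))
    return a

def ips_series(N, M):
    """The IPS polynomial S(x) for sech, N iterations of order M, dx = x/N."""
    one = [F(1)] + [F(0)] * DEG
    h = [F(0), F(1, N)] + [F(0)] * (DEG - 1)     # dx = x/N as a polynomial
    a0, a1 = one, [F(0)] * (DEG + 1)
    for _ in range(N):
        a = coeffs(a0, a1, M)
        pw = [one]
        for _ in range(M):
            pw.append(pmul(pw[-1], h))           # powers of dx
        s0 = [F(0)] * (DEG + 1)
        s1 = [F(0)] * (DEG + 1)
        for n in range(M + 1):
            s0 = padd(s0, pmul(a[n], pw[n]))                        # S at dx
            if n:
                s1 = padd(s1, pscale(pmul(a[n], pw[n - 1]), F(n)))  # S' at dx
        a0, a1 = s0, s1
    return a0

S = ips_series(N=2, M=4)
print([str(S[k]) for k in range(5)])   # ['1', '0', '-1/2', '0', '5/24']
```

The low-order coefficients match the Taylor series of sech exactly, and the nonzero coefficients beyond order M are precisely the complementary series C(x).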

Figure 1: The solid curve corresponds to the convergent power series S(x) obtained by the present method as given by (12) for finite M and N. It is indistinguishable from the plot of sech(x). The dashed curve corresponds to the Taylor series expansion, T(x), and the dotted curve corresponds to the complementary series, C(x).

For finite values of M and N, the series is an approximate representation of sech(x). Truncating the Taylor series at order M introduces an error of order Δx^(M+1). This error is magnified N times by the recursive substitutions, so the total error is estimated by ε ∼ N Δx^(M+1). For the parameters used in Figure 1, this error is small and can be reduced to extremely small values by increasing M. However, the number of terms in the series then becomes extremely large, which hinders analytical manipulations.
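This error estimate can be checked numerically. In the sketch below (our own check, with the stepper repeated so the snippet is self-contained), the observed error at a fixed x = 3 is compared against the N Δx^(M+1) estimate for a few truncation orders M.

```python
import math

def ips_sech(x, N, M):
    """Iterative power series for sech, using y'' = y - 2*y**3."""
    dx = x / N
    a0, a1 = 1.0, 0.0
    for _ in range(N):
        a = [a0, a1] + [0.0] * (M - 1)
        for n in range(M - 1):
            c = sum(a[i] * a[j] * a[n - i - j]
                    for i in range(n + 1) for j in range(n - i + 1))
            a[n + 2] = (a[n] - 2.0 * c) / ((n + 2) * (n + 1))
        a0 = sum(a[n] * dx**n for n in range(M + 1))
        a1 = sum(n * a[n] * dx**(n - 1) for n in range(1, M + 1))
    return a0

x, N = 3.0, 100
exact = 1.0 / math.cosh(x)
for M in (4, 8, 12):
    err = abs(ips_sech(x, N, M) - exact)
    estimate = N * (x / N) ** (M + 1)      # the N * dx**(M+1) estimate
    print(f"M={M:2d}  error={err:.2e}  estimate={estimate:.2e}")
```

The observed error falls rapidly with M, consistent with the estimate, until it reaches the floor set by double-precision roundoff.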

As another example, we consider tan(x), with Taylor series diverging at |x| = π/2. Much of the formulation we followed for the previous case holds here; the specifics of the function alter only the recursion relations, (9), which are replaced by (16)–(18). The convergent power series is obtained by using these recursion relations in (12). Plots similar to those of Figure 1 are obtained.
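A sketch of the same iteration for tan(x) is shown below (our own illustration; names and parameter values are ours). Here the coefficients follow from a₀ alone, via the equivalent differential relation tan′ = 1 + tan², so the map is one-dimensional. We verify it inside the first singularity at π/2; stepping across the pole itself requires extra care, since the underlying function blows up there.

```python
import math

def tan_coeffs(a0, M):
    """Taylor coefficients of tan about a point where tan = a0, via y' = 1 + y**2."""
    a = [a0] + [0.0] * M
    for n in range(M):
        conv = sum(a[i] * a[n - i] for i in range(n + 1))   # Cauchy product (y*y)_n
        a[n + 1] = ((1.0 if n == 0 else 0.0) + conv) / (n + 1)
    return a

def ips_tan(x, N=200, M=12):
    """Evaluate tan(x) by re-expanding the truncated series N times."""
    dx = x / N
    a0 = 0.0                                                # tan(0)
    for _ in range(N):
        a0 = sum(c * dx**k for k, c in enumerate(tan_coeffs(a0, M)))
    return a0

# x = 1.4 is inside the singularity at pi/2, but close enough that the
# plain truncated Taylor series about 0 converges very slowly there.
print(ips_tan(1.4), math.tan(1.4))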

We now present a proof that the convergent power series produced by the recursive procedure always regenerates the Taylor series together with a complementary series.

Proposition 1. If we expand f(x) in a Taylor series around x = 0, truncated at order M, and use the recursive procedure as described above, the resulting convergent power series always takes the form S(x) = T(x) + C(x), where C(x) is a power series of orders larger than M. This is true for any number of iterations, N, any maximum power of the Taylor series, M, and all functions f(x) that satisfy the general differential equation (19), whose right-hand side is an analytic real functional of f that does not contain derivatives.

Proof. It is trivial to prove this for a specific case, such as f(x) = sech(x). For the general case, we first prove it for M = 4 and N = 2. The Taylor series expansion of f(x) around x = 0 is given by (20). In our recursive procedure, this is put in the equivalent form (21), where the approximation stems from using finite N and M, with Δx = x/N. For N = 1, (20) is regenerated. However, in our recursive procedure a₀ and a₁ are kept as variables since they are substituted for at each recursive step; only at the last step are their numerical values inserted. For N = 2, we resubstitute in the last equation for a₀ and a₁ their updated expressions. Substituting the updated expressions for a₀ and a₁ in (21), we find that the last equation gives the Taylor expansion, that is, (21) with N = 1. The complementary series, C(x), is absent here since we have terminated the expansions at order M. For N > 2, another step of resubstituting updated expressions is needed, and so on.
Now, we present the proof for the more general case, namely, when f(x) is unspecified but is a solution of (19). We start with the Taylor series expansion of f(x) and its derivative, (24). Substituting on the right-hand side for a₀ and a₁ their updated values, we obtain expressions involving partial derivatives of the recursion coefficients. These partial derivatives cannot be calculated unless the functional forms of the recursion coefficients are known. One possibility is to specify the function being expanded, as we did at the start of this proof. The other possibility is to exploit the differential equation that f(x) is a solution of, namely, (19). Substituting the Taylor expansion of f(x) from (24) in (19) and expanding up to the fourth power in x, we obtain relations between the coefficients, which lead to the required partial derivatives. Using these equations in (28), we find that the last equation regenerates the Taylor series of f(x), namely, (24) with N = 1, and this completes the proof.

3. Application to Nonlinear Differential Equations

The method described in the previous section can be used as a powerful solver and integrator of nonlinear differential equations, both analytically and numerically. In Section 3.1, we apply the method to a number of well-known problems. In Section 3.2, we show the power of the method through a detailed analysis of the error and CPU time.

3.1. Examples

For the sake of demonstration, we consider the following nonlinear differential equation, the stationary form of the cubic NLSE: y″(x) = y(x) − 2y³(x). The reason for selecting this equation is that sech(x) is one of its exact solutions. Substituting the power series expansion y = Σₙ aₙxⁿ, we obtain, as usual, recursion relations in which a₀ and a₁ turn out to be independent parameters corresponding, in the present case, to the initial conditions on the solution and its first derivative. It is not surprising that these recursion relations are identical to those we found for the sech function in the previous section, (9). Therefore, substituting the above recursion relations in the series, we obtain the Taylor series expansion, T(x), of sech(x). Removing the divergence in T(x) follows exactly the same steps as in the previous section, and thus an exact solution in terms of a convergent power series is obtained, as also plotted in Figure 1.
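Used as an ODE integrator, the same recursion accepts arbitrary initial conditions. The sketch below is our own (it assumes the stationary form y″ = y − 2y³ discussed above); it integrates the equation for non-soliton initial data and monitors the first integral E = y′² − y² + y⁴, which this equation conserves, as a consistency check we chose ourselves.

```python
def step_coeffs(a0, a1, M):
    """Taylor coefficients for y'' = y - 2*y**3 from local data (a0, a1)."""
    a = [a0, a1] + [0.0] * (M - 1)
    for n in range(M - 1):
        c = sum(a[i] * a[j] * a[n - i - j]
                for i in range(n + 1) for j in range(n - i + 1))
        a[n + 2] = (a[n] - 2.0 * c) / ((n + 2) * (n + 1))
    return a

def ips_solve(y0, dy0, x, N=400, M=10):
    """Integrate y'' = y - 2*y**3 from 0 to x by iterated re-expansion."""
    dx = x / N
    y, dy = y0, dy0
    for _ in range(N):
        a = step_coeffs(y, dy, M)
        y = sum(a[n] * dx**n for n in range(M + 1))
        dy = sum(n * a[n] * dx**(n - 1) for n in range(1, M + 1))
    return y, dy

# Non-soliton initial data give a bounded oscillatory solution; the first
# integral E = y'**2 - y**2 + y**4 must stay constant along it.
y, dy = ips_solve(0.5, 0.0, 6.0)
E0 = 0.0**2 - 0.5**2 + 0.5**4          # E at x = 0, equals -0.1875
print(dy**2 - y**2 + y**4, E0)
```

With y(0) = 1, y′(0) = 0 the same routine recovers the sech solution instead.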

For tan(x), the relevant differential equation is y′(x) = 1 + y²(x). Substituting the power series expansion in this equation, the recursion relations are given by (16)–(18). Similarly, the convergent series solution is obtained, as in the previous section.

3.2. Numerical Method

As a numerical method, the power series is very powerful and efficient [10]. The power series method with M = 4, denoted by IPS4, is used to solve the NLSE, (29), and the error is calculated as the difference between the numerical solution and the exact sech solution. The equation is then resolved using the fourth-order Runge-Kutta (RK4) method. In Figure 2, we plot the error of both methods, which turns out to be of the same order. Using the iterative power series method with M = 12 (IPS12), the error drops to infinitesimally low values. Neither the CPU time nor the memory requirements for IPS12 are much larger than those for IPS4; the upgrade to higher orders is straightforward and leads to ultrahigh efficiency. This is verified by Tables 1–4, where we compute the solution at a fixed point using both the RK4 method and the iterative power series method and show the corresponding CPU time. For the same discretization, Tables 1 and 2 show that both RK4 and IPS4 produce the first 16 digits of the exact value (underlined numbers in the last row) and consume almost the same CPU time. Table 3 shows that, for the same discretization, IPS12 reproduces the first 49 digits of the exact value. The CPU time needed for such ultrahigh accuracy is only about 3 times that of RK4 and IPS4. Of course, the accuracy can be increased arbitrarily by increasing N or, more efficiently, M. For IPS12 to produce only the first 16 digits, as in RK4 and IPS4, only a very small number of iterations is needed, as shown in Table 4. The CPU time in this case is about 100 times less than that of RK4 and IPS4, highlighting the high efficiency of the power series method.
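The comparison can be reproduced in a few lines. The sketch below contains our own minimal versions of both integrators (not the authors' code): it solves y″ = y − 2y³ with sech initial data by RK4 and by the iterative power series with M = 12, and compares the errors at x = 3.

```python
import math

def rhs(u):
    """y'' = y - 2*y**3 written as a first-order system (y, v)."""
    y, v = u
    return (v, y - 2.0 * y**3)

def rk4(x, N):
    h = x / N
    y, v = 1.0, 0.0                      # sech(0), sech'(0)
    for _ in range(N):
        k1 = rhs((y, v))
        k2 = rhs((y + h / 2 * k1[0], v + h / 2 * k1[1]))
        k3 = rhs((y + h / 2 * k2[0], v + h / 2 * k2[1]))
        k4 = rhs((y + h * k3[0], v + h * k3[1]))
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

def ips(x, N, M):
    dx = x / N
    a0, a1 = 1.0, 0.0
    for _ in range(N):
        a = [a0, a1] + [0.0] * (M - 1)
        for n in range(M - 1):
            c = sum(a[i] * a[j] * a[n - i - j]
                    for i in range(n + 1) for j in range(n - i + 1))
            a[n + 2] = (a[n] - 2.0 * c) / ((n + 2) * (n + 1))
        a0 = sum(a[n] * dx**n for n in range(M + 1))
        a1 = sum(n * a[n] * dx**(n - 1) for n in range(1, M + 1))
    return a0

exact = 1.0 / math.cosh(3.0)
err_rk4 = abs(rk4(3.0, 1000) - exact)
err_ips = abs(ips(3.0, 1000, 12) - exact)
print(f"RK4 error: {err_rk4:.2e}   IPS12 error: {err_ips:.2e}")
```

With the same number of steps, the high-order series stepper is limited essentially by roundoff, while RK4 is limited by its fourth-order truncation error.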

Table 1: RK4 solution of the NLSE, (29), computed at a fixed point.
Table 2: IPS4 solution of the NLSE, (29), computed at the same point.
Table 3: IPS12 solution of the NLSE, (29), computed at the same point.
Table 4: IPS12 solution of the NLSE, (29), computed at the same point, but with far fewer iterations than in Table 3.
Figure 2: Error defined by the difference between the numerical and exact soliton solutions of the NLSE, (29). (a) The dashed line corresponds to RK4 and the dotted line to IPS4. (b) IPS12.

A more challenging test of the power series method is the chaotic Lorenz system [25], x′(t) = σ(y − x), y′(t) = ρx − y − xz, z′(t) = xy − βz, where we take the usual values σ = 10, ρ = 28, and β = 8/3 with given initial conditions. It is straightforward to generalise the method to three coupled differential equations; therefore, we do not show the details of the calculation. In Figure 3, the results of solving the Lorenz system using RK4 and IPS12 are shown. For the same discretization, RK4 reaches stability up to a certain time; that is, the curve up to that time is unchanged by further refinement, but beyond it the curve keeps changing. In comparison, IPS12 remains stable up to a considerably later time. In chaotic systems, it is quite challenging to go that deep into the chaotic region; hence the need for such high-accuracy methods.
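For the Lorenz system, the Taylor coefficients of x, y, and z follow from simple Cauchy-product recurrences, which is what makes the power series stepping cheap. The sketch below is our own (the paper's initial conditions were not preserved in this copy, so the values here are illustrative):

```python
def lorenz_step(u, h, M, s=10.0, r=28.0, b=8.0 / 3.0):
    """One Taylor-series step of size h for the Lorenz system."""
    X, Y, Z = [u[0]], [u[1]], [u[2]]
    for n in range(M):
        xz = sum(X[i] * Z[n - i] for i in range(n + 1))   # Cauchy product (x*z)_n
        xy = sum(X[i] * Y[n - i] for i in range(n + 1))   # Cauchy product (x*y)_n
        X.append(s * (Y[n] - X[n]) / (n + 1))             # from x' = s*(y - x)
        Y.append((r * X[n] - Y[n] - xz) / (n + 1))        # from y' = r*x - y - x*z
        Z.append((xy - b * Z[n]) / (n + 1))               # from z' = x*y - b*z
    def ev(c):
        return sum(c[n] * h**n for n in range(M + 1))
    return (ev(X), ev(Y), ev(Z))

def lorenz_integrate(u, t, N, M):
    h = t / N
    for _ in range(N):
        u = lorenz_step(u, h, M)
    return u

# Illustrative initial conditions; refining both the step and the order
# should leave the short-time trajectory unchanged.
u = lorenz_integrate((10.0, 10.0, 10.0), 1.0, 800, 16)
print(u)
```

Comparing a run against one with a finer step and higher order gives a practical self-consistency check of the integration, in the same spirit as the stability criterion used in Figure 3.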

Figure 3: Numerical solution of the chaotic Lorenz system, (32), using the RK4 (dashed red line) and IPS (solid black line) methods.

Achieving higher accuracy requires more CPU time. Therefore, it is important to investigate how the CPU time depends on the main parameters of the method, namely, N and M. A typical plot is shown in Figure 4, where we plot on a log-log scale the CPU time versus the error. The linear relationship indicates a power law, with exponent given by the slope of the line joining the points in Figure 4. The error can be calculated in two ways: (i) as the difference between the numerical solution and the exact one, and (ii) from the theoretical estimate, (15). Both are shown in the figure and have the same slope. However, as expected, the error defined by (15), which is actually an upper limit on the error, is always larger than the first one. To find how the CPU time depends explicitly on N and M, we argue that it should be linearly proportional to the number of terms computed. The number of terms computed increases linearly with the number of iterations N. The number of terms in the power series is linearly proportional to M; when substituted in the NLSE with cubic nonlinearity, the resulting number of terms, and thus the CPU time, grows as a power of M. In Figure 5, it is shown that the ratio of the CPU time to this scaling form indeed saturates asymptotically to a constant for large N and M, since the scaling behaviors mentioned here apply in that regime. The proportionality constant is very small and corresponds to the CPU time of calculating one term. It depends on the machine, the programming optimization [10], and the number of digits used. The CPU time increases with the number of digits, as shown in Figure 6, where it is noticed that the CPU time is almost constant below a certain number of digits.

Figure 4: CPU time versus error on a log-log scale for the iterative power series solution of the NLSE, (29). Black filled circles correspond to the error calculated from the difference between the power series solution and the exact solution. Red empty circles correspond to the error estimated by formula (15).
Figure 5: CPU time versus (a) N and (b) M for the NLSE, (29).
Figure 6: CPU time versus the number of digits for the power series solution of the NLSE, (29). The inset shows a zoom on a linear scale. The red dashed line is an asymptote of slope equal to 1.

4. Conclusions

We have presented an iterative power series method that solves the problem of the finite radius of convergence. We have proved that the iterative power series is always composed of the sum of the usual power series of the function and a complementary series that cancels the divergence. The method is divided into two schemes: in the first, we find a convergent power series for a given function, and in the second, we solve a given nonlinear differential equation. The resulting iterative power series expansion of sech(x) is remarkably convergent, with arbitrary radius of convergence and accuracy, as shown by Figures 1 and 2 and Tables 1–4. Extremely high accuracy can be obtained by using a higher-order iterative power series, that is, by increasing M, with relatively low CPU time usage. The robustness and efficiency of the method have been shown by solving the chaotic Lorenz system and the NLSE. An extensive analysis of the error and CPU time characterising the method has been performed. Although we have focused on the localised solution of the NLSE, all other solitary wave solutions (cnoidal waves) can be obtained using the present method, simply by choosing the appropriate initial conditions.

The method can be generalised to partial and fractional differential equations, making its domain of applicability even wider.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


  1. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover Publications, New York, NY, USA, 1965.
  2. R. E. Scraton, “A note on the summation of divergent power series,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 66, pp. 109–114, 1969.
  3. A. D. Polyanin and V. F. Zaitsev, Handbook of Nonlinear Partial Differential Equations, Chapman and Hall, New York, NY, USA, 2003.
  4. C. S. Gardner, J. M. Greene, M. D. Kruskal, and R. M. Miura, “Method for solving the Korteweg-deVries equation,” Physical Review Letters, vol. 19, no. 19, pp. 1095–1097, 1967.
  5. P. D. Lax, “Integrals of nonlinear equations of evolution and solitary waves,” Communications on Pure and Applied Mathematics, vol. 21, pp. 467–490, 1968.
  6. V. B. Matveev and M. A. Salle, Darboux Transformations and Solitons, Springer-Verlag, Berlin, Germany, 1991.
  7. R. Hirota, in Solitons, Topics in Current Physics 17, R. K. Bullough and P. J. Caudrey, Eds., Springer-Verlag, Berlin, Germany, 1980.
  8. G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic, Dordrecht, The Netherlands, 1994.
  9. S. Liao and Y. Tan, “A general approach to obtain series solutions of nonlinear differential equations,” Studies in Applied Mathematics, vol. 119, no. 4, pp. 297–354, 2007.
  10. R. Barrio, M. Rodríguez, A. Abad, and F. Blesa, “Breaking the limits: the Taylor series method,” Applied Mathematics and Computation, vol. 217, no. 20, pp. 7940–7954, 2011.
  11. G. Corliss and Y. F. Chang, “Solving ordinary differential equations using Taylor series,” ACM Transactions on Mathematical Software, vol. 8, no. 2, pp. 114–144, 1982.
  12. Y. F. Chang and G. Corliss, “ATOMFT: solving ODEs and DAEs using Taylor series,” Computers & Mathematics with Applications, vol. 28, no. 10–12, pp. 209–233, 1994.
  13. J. D. Pryce, “Solving high-index DAEs by Taylor series,” Numerical Algorithms, vol. 19, no. 1–4, pp. 195–211, 1998.
  14. R. Barrio, “Performance of the Taylor series method for ODEs/DAEs,” Applied Mathematics and Computation, vol. 163, no. 2, pp. 525–545, 2005.
  15. N. S. Nedialkov and J. D. Pryce, “Solving differential-algebraic equations by Taylor series (I): computing Taylor coefficients,” BIT Numerical Mathematics, vol. 45, no. 3, pp. 561–591, 2005.
  16. N. S. Nedialkov and J. D. Pryce, “Solving differential-algebraic equations by Taylor series (II): computing the system Jacobian,” BIT Numerical Mathematics, vol. 47, no. 1, pp. 121–135, 2007.
  17. N. S. Nedialkov and J. D. Pryce, “Solving differential-algebraic equations by Taylor series (III): the DAETS code,” Journal of Numerical Analysis, Industrial and Applied Mathematics, vol. 3, no. 1-2, pp. 61–80, 2008.
  18. N. S. Nedialkov, K. R. Jackson, and G. F. Corliss, “Validated solutions of initial value problems for ordinary differential equations,” Applied Mathematics and Computation, vol. 105, no. 1, pp. 21–68, 1999.
  19. W. Tucker, “A rigorous ODE solver and Smale's 14th problem,” Foundations of Computational Mathematics, vol. 2, no. 1, pp. 53–117, 2002.
  20. R. Barrio and F. Blesa, “Systematic search of symmetric periodic orbits in 2DOF Hamiltonian systems,” Chaos, Solitons & Fractals, vol. 41, no. 2, pp. 560–582, 2009.
  21. R. Barrio, F. Blesa, and S. Serrano, “Periodic, escape and chaotic orbits in the Copenhagen and the (n+1)-body ring problems,” Communications in Nonlinear Science and Numerical Simulation, vol. 14, no. 5, pp. 2229–2238, 2009.
  22. R. Barrio, “Painting chaos: a gallery of sensitivity plots of classical problems,” International Journal of Bifurcation and Chaos, vol. 16, no. 10, pp. 2777–2798, 2006.
  23. R. Barrio and S. Serrano, “A three-parametric study of the Lorenz model,” Physica D: Nonlinear Phenomena, vol. 229, no. 1, pp. 43–51, 2007.
  24. Á. Jorba and M. Zou, “A software package for the numerical integration of ODEs by means of high-order Taylor methods,” Experimental Mathematics, vol. 14, no. 1, pp. 99–117, 2005.
  25. E. N. Lorenz, “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, vol. 20, pp. 130–141, 1963.
  26. F. M. Allan, “Derivation of the Adomian decomposition method using the homotopy analysis method,” Applied Mathematics and Computation, vol. 190, no. 1, pp. 6–14, 2007.
  27. Q. M. Al-Mdallal, M. I. Syam, and P. D. Ariel, “Extended homotopy perturbation method and the axisymmetric flow past a porous stretching sheet,” International Journal for Numerical Methods in Fluids, vol. 69, no. 5, pp. 909–925, 2012.
  28. M. A. Hajji and K. Al-Khaled, “Analytic studies and numerical simulations of the generalized Boussinesq equation,” Applied Mathematics and Computation, vol. 191, no. 2, pp. 320–333, 2007.
  29. S. Liao, “On the homotopy analysis method for nonlinear problems,” Applied Mathematics and Computation, vol. 147, no. 2, pp. 499–513, 2004.