Research Article  Open Access
Shin Min Kang, Arif Rafiq, Young Chel Kwun, "A New Second-Order Iteration Method for Solving Nonlinear Equations", Abstract and Applied Analysis, vol. 2013, Article ID 487062, 4 pages, 2013. https://doi.org/10.1155/2013/487062
A New Second-Order Iteration Method for Solving Nonlinear Equations
Abstract
We establish a new second-order iteration method for solving nonlinear equations. The efficiency index of the method is 1.4142, which is the same as that of the Newton-Raphson method. Using some examples, the efficiency of the method is also discussed. It is worth noting that (i) our method performs very well in comparison to the fixed point method and the method discussed in Babolian and Biazar (2002) and (ii) our method is simple to apply in comparison to the method discussed in Babolian and Biazar (2002), and it involves only the first-order derivative while showing second-order convergence; this is not the case in Babolian and Biazar (2002), where the method requires the computation of higher-order derivatives of the nonlinear operator involved in the functional equation.
1. Introduction
Our problem, to recall, is solving equations in one variable. We are given a function f and would like to find at least one solution x* to the equation f(x) = 0. Note that, a priori, we do not put any restrictions on the function f; we need to be able to evaluate it; otherwise, we cannot even check that a given candidate x* is a true solution, that is, that f(x*) = 0. In reality, the mere ability to evaluate the function does not suffice. We need to assume some kind of "good behavior." The more we assume, the more potential we have, on the one hand, to develop fast algorithms for finding the root. At the same time, the more we assume, the fewer functions will satisfy our assumptions! This is a fundamental paradigm in numerical analysis.
We know that one of the fundamental algorithms for solving nonlinear equations is the so-called fixed-point iteration method [1].
In the fixed-point iteration method for solving the nonlinear equation f(x) = 0, the equation is usually rewritten as x = g(x), where (i) there exists an interval [a, b] such that g(x) ∈ [a, b] for all x ∈ [a, b], and (ii) there exists a constant L < 1 such that |g'(x)| ≤ L for all x ∈ [a, b].
Considering the iteration scheme x_{n+1} = g(x_n), n = 0, 1, 2, …, and starting with a suitable initial approximation x_0, we build up a sequence of approximations, say {x_n}, for the solution of the nonlinear equation, say α. The scheme will converge to the root α, provided that (i) the initial approximation x_0 is chosen in the interval [a, b], (ii) g has a continuous derivative on (a, b), (iii) |g'(x)| < 1 for all x ∈ [a, b], and (iv) a ≤ g(x) ≤ b for all x ∈ [a, b] (see [1]).
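A minimal sketch of this fixed-point scheme in Python (the test equation x = cos(x) is a hypothetical stand-in, not one of the paper's examples; it satisfies |g'(x)| = |sin(x)| < 1 near its root):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Fixed point method (FPM): iterate x_{n+1} = g(x_n) until
    successive approximations agree to within tol."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n  # converged approximation and iteration count
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Hypothetical test equation (not from the paper): x = cos(x),
# i.e. f(x) = x - cos(x) = 0.
root, its = fixed_point(math.cos, 0.5)
```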
The order of convergence for the sequence of approximations derived from an iteration method is defined in the literature as follows.
Definition 1. Let {x_n} converge to α. If there exist an integer constant p ≥ 1 and a real positive constant C such that lim_{n→∞} |x_{n+1} − α| / |x_n − α|^p = C, then p is called the order and C the constant of convergence.
To determine the order of convergence of the sequence {x_n}, let us consider the Taylor expansion of g(x_n) about α: g(x_n) = g(α) + g'(α)(x_n − α) + (g''(α)/2!)(x_n − α)² + ⋯. Using x_{n+1} = g(x_n) and g(α) = α, we have x_{n+1} − α = g'(α)(x_n − α) + (g''(α)/2!)(x_n − α)² + ⋯, and we can state the following result [1].
Theorem 2 (see [2]). Suppose that g ∈ C^p[a, b]. If g^{(k)}(α) = 0 for k = 1, 2, …, p − 1 and g^{(p)}(α) ≠ 0, then the sequence {x_n} is of order p.
It is well known that the fixed point method has first-order convergence whenever g'(α) ≠ 0.
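This first-order behavior can be checked numerically. The sketch below estimates the order p of Definition 1 from three successive errors; g(x) = cos(x) and its root are hypothetical stand-ins, not examples from the paper:

```python
import math

# Estimate the order of convergence p from successive errors e_n = |x_n - α|,
# using p ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}).
alpha = 0.7390851332151607   # root of x = cos(x), assumed known here
xs = [0.5]
for _ in range(20):
    xs.append(math.cos(xs[-1]))          # fixed-point iteration
errs = [abs(x - alpha) for x in xs]
p_est = math.log(errs[-1] / errs[-2]) / math.log(errs[-2] / errs[-3])
# p_est is close to 1, consistent with first-order convergence (g'(α) ≠ 0).
```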
Over the years, numerical techniques for solving nonlinear equations have been successfully applied (see, e.g., [2–4] and the references therein).
In [4], Babolian and Biazar modified the standard Adomian decomposition method for solving the nonlinear equation f(x) = 0 to derive a sequence of approximations to the solution, with nearly superlinear convergence. However, their method requires the computation of higher-order derivatives of the nonlinear operator involved in the functional equation.
In this paper, a new iteration method extracted from the fixed point method is proposed to solve nonlinear equations. The proposed method has second-order convergence and is then applied to some problems in order to assess its validity and accuracy. It is worth mentioning that our method involves only the first-order derivative but shows second-order convergence.
2. New Iteration Method
Consider the nonlinear equation
f(x) = 0. (6)
We assume that α is a simple zero of f and that x_0 is an initial guess sufficiently close to α. Equation (6) is usually rewritten as
x = g(x). (7)
Following the approach of [4], for θ ≠ −1, we can modify (7) by adding θx to both sides as follows:
(1 + θ)x = g(x) + θx, (8)
which implies that
x = (g(x) + θx) / (1 + θ). (9)
In order for (9) to be efficient, we can choose θ such that the derivative of its right-hand side vanishes at the root, that is, g'(x) + θ = 0, which yields
θ = −g'(x), (10)
so that (9) takes the form
x = (g(x) − x g'(x)) / (1 − g'(x)). (11)
This formulation allows us to suggest the following iteration method for solving the nonlinear equation (6).
Algorithm 3. For a given x_0, we calculate the approximate solution x_{n+1} by the iteration scheme
x_{n+1} = (g(x_n) − x_n g'(x_n)) / (1 − g'(x_n)), n = 0, 1, 2, …, g'(x_n) ≠ 1. (12)
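A minimal sketch of Algorithm 3, assuming g and its derivative g' are available as callables (the test equation x = cos(x) is a hypothetical stand-in, not one of the paper's examples):

```python
import math

def algorithm3(g, dg, x0, tol=1e-12, max_iter=50):
    """Algorithm 3: x_{n+1} = (g(x_n) - x_n g'(x_n)) / (1 - g'(x_n)),
    defined whenever g'(x_n) != 1."""
    x = x0
    for n in range(1, max_iter + 1):
        d = dg(x)
        x_new = (g(x) - x * d) / (1.0 - d)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("iteration did not converge")

# Hypothetical test equation: x = cos(x), with g'(x) = -sin(x).
root, its = algorithm3(math.cos, lambda x: -math.sin(x), 0.5)
```

One can check that this scheme coincides with Newton's method applied to x − g(x) = 0, which is consistent with the second-order convergence established in Section 3.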
3. Convergence Analysis
Now we discuss the convergence analysis of Algorithm 3.
Theorem 4. Let f : I → ℝ for an open interval I, and consider that the nonlinear equation f(x) = 0 (or x = g(x)) has a simple root α ∈ I, where g is sufficiently smooth in a neighborhood of the root α; then the order of convergence of Algorithm 3 is at least 2.
Proof. The iteration scheme is given by
x_{n+1} = (g(x_n) − x_n g'(x_n)) / (1 − g'(x_n)).
Let α be a simple zero of f, so that g(α) = α, and write x_n = α + e_n, where e_n is the error term involved at the nth step of Algorithm 3.
By Taylor's expansion, we have
g(x_n) = α + g'(α) e_n + (g''(α)/2!) e_n² + O(e_n³),
g'(x_n) = g'(α) + g''(α) e_n + O(e_n²).
Substituting these expansions into the scheme and using x_{n+1} − α = [g(x_n) − α − g'(x_n)(x_n − α)] / (1 − g'(x_n)), we obtain
e_{n+1} = −(g''(α) / (2(1 − g'(α)))) e_n² + O(e_n³);
this shows that Algorithm 3 converges with order at least 2, provided that g'(α) ≠ 1.
This completes the proof.
Remark 5. For the iteration function
G(x) = (g(x) − x g'(x)) / (1 − g'(x)) (14)
and using the software Maple, we can easily deduce that G(α) = α, G'(α) = 0, and G''(α) = −g''(α)/(1 − g'(α)) ≠ 0 in general. Hence, according to Theorem 2 with p = 2, Algorithm 3 has second-order convergence.
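The asymptotic error relation e_{n+1} ≈ C e_n² with C = −g''(α)/(2(1 − g'(α))) can also be checked numerically. The sketch below compares the observed and predicted constants for the hypothetical test equation x = cos(x) (not one of the paper's examples):

```python
import math

alpha = 0.7390851332151607          # root of x = cos(x)
g, dg = math.cos, lambda x: -math.sin(x)

x = 0.5
errs = [abs(x - alpha)]
for _ in range(3):                  # three steps of Algorithm 3
    d = dg(x)
    x = (g(x) - x * d) / (1.0 - d)
    errs.append(abs(x - alpha))

# Observed constant e_{n+1}/e_n^2 versus predicted -g''(α)/(2(1 - g'(α)));
# here g''(α) = -cos(α) and g'(α) = -sin(α), so the predicted value is positive.
C_obs = errs[3] / errs[2] ** 2
C_pred = math.cos(alpha) / (2.0 * (1.0 + math.sin(alpha)))
```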
4. Applications
Now we present some examples [4] to illustrate the efficiency of the developed method, namely, Algorithm 3. We compare the fixed point method (FPM) with Algorithm 3.
Example 6. Consider the equation . We have and . The exact solution of this equation is . Take ; then the comparison of the two methods is shown in Table 1 correct up to four decimal places.

Example 7. Consider the equation . We have and . The graphical solution of this equation is (4D). Take ; then the comparison of the two methods is shown in Table 2 correct up to four decimal places.

Example 8. Consider the equation . We have and . The graphical solution of this equation is (5D). Take ; then the comparison of the two methods is shown in Table 3 correct up to five decimal places.

Example 9. Consider the equation . We have and . The graphical solution of this equation is (2D). Take ; then the comparison of the two methods is shown in Table 4 correct up to five decimal places.
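Since the example equations and tables are not reproduced above, the sketch below runs the same style of comparison on a hypothetical stand-in equation, x = cos(x), counting how many iterations each method needs to reach the same tolerance:

```python
import math

def iterations_to_tol(step, x0, tol=1e-8, max_iter=1000):
    """Count iterations of x_{n+1} = step(x_n) until successive
    approximations differ by less than tol."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = step(x)
        if abs(x_new - x) < tol:
            return n
        x = x_new
    return max_iter

g, dg = math.cos, lambda x: -math.sin(x)
fpm_step = g                                               # fixed point method
alg3_step = lambda x: (g(x) - x * dg(x)) / (1.0 - dg(x))   # Algorithm 3

n_fpm = iterations_to_tol(fpm_step, 0.5)
n_alg3 = iterations_to_tol(alg3_step, 0.5)
# Algorithm 3 reaches the tolerance in far fewer iterations than FPM.
```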

5. Conclusions
A new iteration method for solving nonlinear equations has been established. Using some examples, the performance of the method is also discussed. The method performs very well in comparison to the fixed point method and the method discussed in [4]. The method can be studied for functional equations and can be extended to systems of nonlinear equations.
Acknowledgments
The authors would like to thank the editor and referees for useful comments and suggestions. This study was supported by research funds from DongA University.
References
E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, NY, USA, 1966.
E. Babolian and J. Biazar, "On the order of convergence of Adomian method," Applied Mathematics and Computation, vol. 130, no. 2-3, pp. 383–387, 2002.
S. Abbasbandy, "Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method," Applied Mathematics and Computation, vol. 145, no. 2-3, pp. 887–893, 2003.
E. Babolian and J. Biazar, "Solution of nonlinear equations by modified Adomian decomposition method," Applied Mathematics and Computation, vol. 132, no. 1, pp. 167–172, 2002.
Copyright
Copyright © 2013 Shin Min Kang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.