Research Article  Open Access
Gustavo Fernández-Torres and Juan Vásquez-Aquino, "Three New Optimal Fourth-Order Iterative Methods to Solve Nonlinear Equations", Advances in Numerical Analysis, vol. 2013, Article ID 957496, 8 pages, 2013. https://doi.org/10.1155/2013/957496
Three New Optimal Fourth-Order Iterative Methods to Solve Nonlinear Equations
Abstract
We present new modifications of Newton's method for solving nonlinear equations. The analysis of convergence shows that these methods have fourth-order convergence. Each of the three methods uses three functional evaluations; thus, according to Kung-Traub's conjecture, they are optimal methods. Using the same ideas, we extend the analysis to functions with multiple roots. Several numerical examples illustrate that the presented methods perform better than Newton's classical method and other recently published methods of fourth-order convergence.
1. Introduction
One of the most important problems in numerical analysis is solving nonlinear equations. To solve these equations, we can use iterative methods such as Newton's method and its variants. Newton's classical method for a single nonlinear equation $f(x) = 0$, where $\alpha$ is a simple root, is written as $x_{n+1} = x_n - f(x_n)/f'(x_n)$, which converges quadratically in some neighborhood of $\alpha$.
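As a concrete illustration (not taken from the paper; the function names and tolerance are our own choices), Newton's iteration can be sketched in a few lines of Python:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n).

    Converges quadratically to a simple root when x0 is close enough
    and f'(x) != 0 in a neighborhood of the root.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)  # Newton step
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```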
Writing $y_n = x_n - f(x_n)/f'(x_n)$ for the Newton step, many modifications of Newton's method have recently been published. In [1], Noor and Khan presented an optimal fourth-order method that uses three functional evaluations.
In [2], Cordero et al. proposed an optimal fourth-order method that also uses three functional evaluations.
Chun [3] presented a third-order iterative formula, built from an arbitrary second-order iterative function, that uses three functional evaluations.
Li et al. [4] presented a fifth-order iterative formula that uses five functional evaluations.
The main goal and motivation in developing new methods is to obtain better computational efficiency. In other words, it is advantageous to attain the highest possible convergence order with a fixed number of functional evaluations per iteration. For multipoint methods without memory, this demand is closely connected with the optimal order considered in Kung-Traub's conjecture.
Kung-Traub's Conjecture (see [5]). Multipoint iterative methods (without memory) requiring $n + 1$ functional evaluations per iteration have order of convergence at most $2^n$.
Multipoint methods which satisfy Kung-Traub's conjecture (still unproved) are usually called optimal methods; consequently, $2^n$ is the optimal order.
The computational efficiency of an iterative method of order $p$, requiring $n$ functional evaluations per iteration, is most frequently measured by Ostrowski-Traub's efficiency index [6], $E = p^{1/n}$.
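The index $E = p^{1/n}$ is simple to evaluate; the following minimal sketch (our own helper name) compares Newton's method with an optimal fourth-order three-evaluation method:

```python
def efficiency_index(p, n):
    """Ostrowski-Traub efficiency index E = p**(1/n):
    order of convergence p, n functional evaluations per iteration."""
    return p ** (1.0 / n)

# Newton: order 2, two evaluations (f and f') -> 2**(1/2) ~ 1.414
newton_ei = efficiency_index(2, 2)
# Optimal fourth-order method: order 4, three evaluations -> 4**(1/3) ~ 1.587
optimal4_ei = efficiency_index(4, 3)
```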
In the case of multiple roots, the quadratically convergent modified Newton's method [7] is $x_{n+1} = x_n - m\,f(x_n)/f'(x_n)$, where $m$ is the multiplicity of the root.
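The modified Newton step can be sketched as follows (a minimal illustration with our own names and tolerances, assuming the multiplicity $m$ is known in advance):

```python
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=100):
    """Modified Newton's method for a root of known multiplicity m:
    x_{n+1} = x_n - m * f(x_n)/f'(x_n).
    The factor m restores quadratic convergence at a multiple root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - m * fx / df(x)
    return x

# f(x) = (x - 1)^2 has a double root at x = 1
r = modified_newton(lambda x: (x - 1) ** 2, lambda x: 2 * (x - 1), 2.0, m=2)
```

Note that plain Newton on the same problem would converge only linearly; the factor $m = 2$ makes the step land on the root.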
For this case, several methods to approximate the root of the function have been presented recently. For example, the cubically convergent Halley's method [8] is a special case of Hansen-Patrick's method [9], and Osada [10] has developed a third-order method that uses the second derivative.
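For reference, a commonly cited multiple-root form of Halley's method can be sketched as below; the formula is stated here from standard accounts of the Hansen-Patrick family, not from this paper, and the function names and tolerances are our own:

```python
def halley_multiple(f, df, d2f, x0, m, tol=1e-12, max_iter=100):
    """A commonly stated multiple-root form of Halley's method
    (a special case of the Hansen-Patrick family), for multiplicity m:
    x_{n+1} = x_n - f / ((m+1)/(2m) * f' - f * f'' / (2 * f'))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        denom = (m + 1) / (2 * m) * dfx - fx * d2f(x) / (2 * dfx)
        x = x - fx / denom
    return x

# f(x) = (x - 2)^3 has a triple root at x = 2
r = halley_multiple(lambda x: (x - 2) ** 3,
                    lambda x: 3 * (x - 2) ** 2,
                    lambda x: 6 * (x - 2),
                    3.0, m=3)
```

This sketch also shows the cost the paper seeks to avoid: every step evaluates $f$, $f'$, and the second derivative $f''$.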
Another third-order method [11] is based on King's fifth-order method for simple roots [12]; a further example is the Euler-Chebyshev method of order three.
Recently, Chun and Neta [13] have developed a third-order method that also uses the second derivative.
All of the previous methods use the second derivative of the function to obtain a higher order of convergence. The objective of the new methods is to avoid the use of the second derivative.
The new methods are based on a mixture of Lagrange's and Hermite's interpolations, that is, not on Hermite's interpolation alone; this is the novelty of the new methods. Interpolation is a conventional tool for constructing iterative methods; see [5, 7]. However, it has recently been applied in several new ways. For example, in [14], Cordero and Torregrosa presented a family of Steffensen-type methods of fourth-order convergence for solving smooth nonlinear equations, using a linear combination of divided differences to achieve a better approximation to the derivative. Zheng et al. [15] proposed a general family of Steffensen-type methods with optimal order of convergence, using Newton's iteration for the direct Newtonian interpolation. In [16], Petković et al. investigated a general way to construct multipoint methods for solving nonlinear equations by using inverse interpolation. In [17], Džunić et al. presented a new family of three-point derivative-free methods with a self-correcting parameter, calculated by applying the secant-type method in three different ways together with Newton's interpolatory polynomial of second degree.
The three new methods (for simple roots) in this paper use three functional evaluations and have fourth-order convergence; thus, they are optimal methods, and their efficiency index is $4^{1/3} \approx 1.587$, which is greater than the efficiency index $2^{1/2} \approx 1.414$ of Newton's method. In the case of multiple roots, the method developed here is cubically convergent and uses three functional evaluations without the second derivative of the function. Thus, with efficiency index $3^{1/3} \approx 1.442$, the method performs better than the modified Newton's method and the methods above.
2. Development of the Methods
In this paper, we consider iterative methods to find a simple root $\alpha$ of a nonlinear equation $f(x) = 0$, where $f$ is a scalar function on an open interval $I$. We suppose that $f$ is sufficiently differentiable and that $f'(x) \neq 0$ for $x \in I$; since $\alpha$ is a simple root, the inverse function of $f$ is well defined on $f(I)$. Taking $x_0$ close to $\alpha$ and supposing that $x_n$ has been chosen, we define $y_n = x_n - f(x_n)/f'(x_n)$.
2.1. First Method FAM1
Consider the polynomial (12) with the conditions (13). Solving the conditions (13) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation, we find the coefficient (15), and the polynomial (12) can be written as (16). Setting $y = 0$ in (16), we obtain a new iterative method, FAM1, given by (17). It can be written as (18), which uses three functional evaluations and has fourth-order convergence.
2.2. Second Method FAM2
Consider the polynomial (19) with the conditions (20). Solving the conditions (20) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation, we find the coefficient (21), and the polynomial (19) can be written as (23). Then, setting $y = 0$ in (23), we obtain our second iterative method, FAM2, given by (24). It can be written as (25), which uses three functional evaluations and has fourth-order convergence.
2.3. Third Method FAM3
Consider the polynomial (26) with the conditions (27) and (28), where we have used an approximation from [2]. Solving the conditions (27) and (28) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation, we obtain (29). Thus, the polynomial (26) can be written as (31). Setting $y = 0$ in (31), we obtain our third iterative method, FAM3, given by (32). It can be written as (33), which uses three functional evaluations and has fourth-order convergence.
2.4. Method FAM4 (Multiple Roots)
Consider a polynomial, where $m$ is the multiplicity of the root, that satisfies interpolation conditions at $x_n$ and $y_n$.
Solving the system, we obtain the coefficients of the polynomial, from which the iterative method FAM4 follows; it uses three functional evaluations and has third-order convergence.
3. Analysis of Convergence
Theorem 1. Let $f$ be a sufficiently differentiable function, and let $\alpha$ be a simple zero of $f$ in an open interval $I$, with $f'(x) \neq 0$ on $I$. If $x_0$ is sufficiently close to $\alpha$, then the methods FAM1, FAM2, and FAM3, as defined by (18), (25), and (33), have fourth-order convergence.
Proof. Following a procedure analogous to the derivation of the error in Lagrange's and Hermite's interpolations, the polynomials (12), (19), and (26) in FAM1, FAM2, and FAM3, respectively, carry an interpolation error term, for some $\xi$ in the interval of interpolation, involving the coefficient (15), (21), or (29) that appears in the polynomial (12), (19), or (26), respectively.
Then, substituting the Newton step into the error term and writing $e_n = x_n - \alpha$, since $y_n$ was obtained from Newton's method, we know that $y_n - \alpha = O(e_n^2)$.
Now, in FAM1, the chosen interpolation nodes give $e_{n+1} = O(e_n^4)$; in FAM2 and FAM3, the corresponding choices give the same estimate. Thus, FAM1, FAM2, and FAM3 have fourth-order convergence.
Theorem 2. Let $f$ be a sufficiently differentiable function, and let $\alpha$ be a zero of $f$ with multiplicity $m$ in an open interval $I$. If $x_0$ is sufficiently close to $\alpha$, then the method FAM4 is cubically convergent.
Proof. The proof is based on the error of Lagrange's interpolation. Suppose that $x_n$ has been chosen. The interpolation error formula holds for some $\xi$ in the interval of interpolation. Expanding $f$ around $\alpha$ and using the multiplicity $m$ of the root, the dominant error terms cancel up to third order, so that $x_{n+1} - \alpha = O\big((x_n - \alpha)^3\big)$. Therefore, FAM4 has third-order convergence.
Note that the leading coefficient of the interpolating polynomial is not zero for a root of multiplicity $m$, and this fact guarantees the convergence of the method.
4. Numerical Analysis
In this section, we use numerical examples to compare the new methods introduced in this paper with Newton's classical method (NM) and recent methods of fourth-order convergence, namely, Noor's method (NOM) [1], Cordero's method (CM) [2], Chun's third-order method (CHM) [3], and Li's fifth-order method (ZM) [4], in the case of simple roots. For multiple roots, we compare the method developed here with the quadratically convergent modified Newton's method (NMM) and with the cubically convergent Halley's method (HM), Osada's method (OM), Euler-Chebyshev's method (ECM), and Chun-Neta's method (CNM). Tables 2 and 4 show the number of iterations (IT) and the number of functional evaluations (NOFE). The results show that the methods presented in this paper are more efficient.
All computations were done using MATLAB 2010. We accept an approximate solution rather than the exact root, depending on the precision of the computer. We use the following stopping criteria for the computer programs: (i) $|x_{n+1} - x_n| < \epsilon$, (ii) $|f(x_{n+1})| < \epsilon$. Thus, when a stopping criterion is satisfied, $x_{n+1}$ is taken as the computed root $\alpha$. For the numerical illustrations in this section, a fixed tolerance $\epsilon$ was used.
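The two stopping criteria can be wired into a generic driver, sketched below in Python (the driver name, the tolerance value, and the example function are our own choices, not the paper's):

```python
def iterate_with_stops(step, x0, eps=1e-15, max_iter=100, f=None):
    """Generic fixed-point driver using the two stopping criteria:
    (i)  |x_{n+1} - x_n| < eps
    (ii) |f(x_{n+1})| < eps
    `step` maps x_n to x_{n+1}. Returns (approximation, iterations used)."""
    x = x0
    for n in range(max_iter):
        x_new = step(x)
        if abs(x_new - x) < eps or (f is not None and abs(f(x_new)) < eps):
            return x_new, n + 1
        x = x_new
    return x, max_iter

# Example: Newton's step for f(x) = x^3 - 2 (real cube root of 2)
f = lambda x: x ** 3 - 2
step = lambda x: x - f(x) / (3 * x ** 2)
root, its = iterate_with_stops(step, 1.0, f=f)
```

The same driver can host any of the compared one-point or multipoint steps; only `step` changes, which keeps iteration counts directly comparable.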
We used the functions in Tables 1 and 3.
The computational results presented in Tables 2 and 4 show that, in almost all cases, the presented methods converge more rapidly than Newton's method, the modified Newton's method, and the previously published methods, for both simple and multiple roots. The new methods require fewer functional evaluations, which means they are computationally more efficient than Newton's method and the other methods considered; furthermore, the method FAM3 produces the best results. For most of the functions we tested, the new methods perform at least as well as the known methods of the same order.
5. Conclusions
In this paper, we introduce three new optimal fourth-order iterative methods to solve nonlinear equations. The analysis of convergence shows that the three new methods have fourth-order convergence; they use three functional evaluations, and thus, according to Kung-Traub's conjecture, they are optimal methods. In the case of multiple roots, the method developed here is cubically convergent and uses three functional evaluations without the second derivative. The numerical analysis shows that these methods perform better than Newton's classical method, the modified Newton's method, and other recent methods of third-order (multiple roots) and fourth-order (simple roots) convergence.
Acknowledgments
The authors wish to acknowledge the valuable participation of Professor Nicole Mercier and Professor Joelle Ann Labrecque in the proofreading of this paper. This paper is the result of the research project "Análisis Numérico de Métodos Iterativos Óptimos para la Solución de Ecuaciones No Lineales" developed at the Universidad del Istmo, Campus Tehuantepec, by Researcher-Professor Gustavo Fernández-Torres.
References
[1] M. A. Noor and W. A. Khan, "Fourth-order iterative method free from second derivative for solving nonlinear equations," Applied Mathematical Sciences, vol. 6, no. 93-96, pp. 4617-4625, 2012.
[2] A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, "New modifications of Potra-Pták's method with optimal fourth and eighth orders of convergence," Journal of Computational and Applied Mathematics, vol. 234, no. 10, pp. 2969-2976, 2010.
[3] C. Chun, "A geometric construction of iterative formulas of order three," Applied Mathematics Letters, vol. 23, no. 5, pp. 512-516, 2010.
[4] Z. Li, C. Peng, T. Zhou, and J. Gao, "A new Newton-type method for solving nonlinear equations with any integer order of convergence," Journal of Computational Information Systems, vol. 7, no. 7, pp. 2371-2378, 2011.
[5] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, pp. 643-651, 1974.
[6] A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, NY, USA, 1966.
[7] A. Ralston and P. Rabinowitz, A First Course in Numerical Analysis, McGraw-Hill, 1978.
[8] E. Halley, "A new, exact and easy method of finding the roots of equations generally and that without any previous reduction," Philosophical Transactions of the Royal Society of London, vol. 18, pp. 136-148, 1694.
[9] E. Hansen and M. Patrick, "A family of root finding methods," Numerische Mathematik, vol. 27, no. 3, pp. 257-269, 1977.
[10] N. Osada, "An optimal multiple root-finding method of order three," Journal of Computational and Applied Mathematics, vol. 51, no. 1, pp. 131-133, 1994.
[11] H. D. Victory and B. Neta, "A higher order method for multiple zeros of nonlinear functions," International Journal of Computer Mathematics, vol. 12, no. 3-4, pp. 329-335, 1983.
[12] R. F. King, "A family of fourth order methods for nonlinear equations," SIAM Journal on Numerical Analysis, vol. 10, pp. 876-879, 1973.
[13] C. Chun and B. Neta, "A third-order modification of Newton's method for multiple roots," Applied Mathematics and Computation, vol. 211, no. 2, pp. 474-479, 2009.
[14] A. Cordero and J. R. Torregrosa, "A class of Steffensen type methods with optimal order of convergence," Applied Mathematics and Computation, vol. 217, no. 19, pp. 7653-7659, 2011.
[15] Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 23, pp. 9592-9597, 2011.
[16] M. S. Petković, J. Džunić, and B. Neta, "Interpolatory multipoint methods with memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 218, no. 6, pp. 2533-2541, 2011.
[17] J. Džunić, M. S. Petković, and L. D. Petković, "Three-point methods with and without memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 218, no. 9, pp. 4917-4927, 2012.
Copyright
Copyright © 2013 Gustavo Fernández-Torres and Juan Vásquez-Aquino. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.