Advances in Numerical Analysis
Volume 2013 (2013), Article ID 957496, 8 pages
Three New Optimal Fourth-Order Iterative Methods to Solve Nonlinear Equations
Gustavo Fernández-Torres and Juan Vásquez-Aquino
1Petroleum Engineering Department, UNISTMO, 70760 Tehuantepec, OAX, Mexico
2Applied Mathematics Department, UNISTMO, 70760 Tehuantepec, OAX, Mexico
Received 21 November 2012; Revised 24 January 2013; Accepted 2 February 2013
Academic Editor: Michael Ng
Copyright © 2013 Gustavo Fernández-Torres and Juan Vásquez-Aquino. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We present new modifications to Newton's method for solving nonlinear equations. The analysis of convergence shows that these methods have fourth-order convergence. Each of the three methods uses three functional evaluations; thus, according to Kung-Traub's conjecture, they are optimal methods. Using the same ideas, we extend the analysis to functions with multiple roots. Several numerical examples are given to illustrate that the presented methods perform better than Newton's classical method and other recently published fourth-order methods.
1. Introduction
One of the most important problems in numerical analysis is solving nonlinear equations. To solve them, we can use iterative methods such as Newton's method and its variants. Newton's classical method for a single nonlinear equation $f(x) = 0$, where $\alpha$ is a simple root, is written as
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},$$
which converges quadratically in some neighborhood of $\alpha$.
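As a concrete reference point for the comparisons below, a minimal sketch of the classical Newton iteration (the function name `newton` and the tolerance are illustrative choices, not from the paper):

```python
def newton(f, fprime, x0, tol=1e-15, max_iter=100):
    """Classical Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n).
    Converges quadratically to a simple root when x0 is close enough."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:   # stop when the update is negligible
            return x_new
        x = x_new
    return x
```

Each iteration costs two functional evaluations ($f$ and $f'$), which is what fixes Newton's efficiency index at $2^{1/2}$ in the discussion below.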
Taking $y_n = x_n - f(x_n)/f'(x_n)$ as the first step, many modifications of Newton's method have been published recently. In [1], Noor and Khan presented an optimal fourth-order method that uses three functional evaluations.
In [2], Cordero et al. proposed an optimal fourth-order method that also uses three functional evaluations.
In [3], Chun presented a third-order iterative formula that uses three functional evaluations, where the first step may be any iterative function of second order.
In [4], Li et al. presented a fifth-order iterative formula that uses five functional evaluations.
The main motivation in the development of new methods is to obtain better computational efficiency; in other words, it is advantageous to obtain the highest possible convergence order with a fixed number of functional evaluations per iteration. In the case of multipoint methods without memory, this demand is closely connected with the optimal order considered in Kung-Traub's conjecture.
Kung-Traub's Conjecture (see [5]). Multipoint iterative methods (without memory) requiring $d$ functional evaluations per iteration have order of convergence at most $2^{d-1}$.
Multipoint methods which attain this bound (the conjecture is still unproved) are usually called optimal methods; consequently, $2^{d-1}$ is the optimal order.
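For intuition, one classical method that is optimal in this sense is Ostrowski's two-step scheme (given here only as an illustration of the conjecture; it is not one of the FAM methods derived below), which reaches order four with $d = 3$ evaluations:

```python
def ostrowski(f, fprime, x0, tol=1e-15, max_iter=50):
    """Ostrowski's two-step method: order four with d = 3 evaluations
    per iteration (f(x), f'(x), f(y)), hence optimal under the
    Kung-Traub conjecture since 4 = 2**(3-1)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x               # landed exactly on a root
        dfx = fprime(x)
        y = x - fx / dfx           # Newton predictor
        fy = f(y)
        x_new = y - fy * fx / (dfx * (fx - 2.0 * fy))  # weighted corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```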
The computational efficiency of an iterative method of order $p$, requiring $d$ function evaluations per iteration, is most frequently measured by Ostrowski-Traub's efficiency index [6], $E = p^{1/d}$.
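The index is a one-line computation; the sketch below evaluates it for the two cases compared throughout the paper:

```python
def efficiency_index(p, d):
    """Ostrowski-Traub efficiency index E = p**(1/d) for a method of
    order p that uses d functional evaluations per iteration."""
    return p ** (1.0 / d)

# Newton: order 2 with 2 evaluations      -> E = 2**(1/2) ~ 1.414
# Optimal fourth order with 3 evaluations -> E = 4**(1/3) ~ 1.587
```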
In the case of multiple roots, the quadratically convergent modified Newton's method [7] is
$$x_{n+1} = x_n - m\,\frac{f(x_n)}{f'(x_n)},$$
where $m$ is the multiplicity of the root.
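A minimal sketch of this iteration (names and tolerances are illustrative), assuming the multiplicity $m$ is known in advance:

```python
def modified_newton(f, fprime, x0, m, tol=1e-12, max_iter=100):
    """Modified Newton iteration x_{n+1} = x_n - m*f(x_n)/f'(x_n) for a
    root of known multiplicity m; this restores quadratic convergence,
    which plain Newton loses at a multiple root."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # residual small enough: accept x
            return x
        step = m * fx / fprime(x)
        x -= step
        if abs(step) < tol:        # update negligible: accept x
            return x
    return x
```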
For this case, several methods have recently been presented to approximate the root. For example, the cubically convergent Halley's method [8],
$$x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{m+1}{2m} f'(x_n) - \dfrac{f(x_n) f''(x_n)}{2 f'(x_n)}},$$
is a special case of Hansen-Patrick's method [9]. Osada [10] developed a third-order method using the second derivative,
$$x_{n+1} = x_n - \frac{1}{2} m (m+1)\, u_n + \frac{1}{2}(m-1)^2 \frac{f'(x_n)}{f''(x_n)},$$
where $u_n = f(x_n)/f'(x_n)$.
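The multiple-root variant of Halley's method can be sketched as follows (function names and tolerances are illustrative; this is the classical scheme, shown to highlight its dependence on $f''$):

```python
def halley_multiple(f, fprime, fsecond, x0, m, tol=1e-12, max_iter=50):
    """Halley's method adapted to a root of multiplicity m (a special
    case of the Hansen-Patrick family): cubically convergent, but it
    requires the second derivative, which the FAM4 method avoids."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = fprime(x)
        denom = ((m + 1.0) / (2.0 * m)) * dfx - fx * fsecond(x) / (2.0 * dfx)
        x_new = x - fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```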
Recently, Chun and Neta [13] have developed another third-order method that also uses the second derivative.
All previous methods use the second derivative of the function to obtain a greater order of convergence. The objective of the new method is to avoid the use of the second derivative.
The new methods are based on a mixture of Lagrange's and Hermite's interpolations, that is, not on Hermite's interpolation alone; this is the novelty of the new methods. Interpolation is a conventional tool for constructing iterative methods; see [5, 7]. However, this tool has recently been applied in several ways. For example, in [14], Cordero and Torregrosa presented a family of fourth-order Steffensen-type methods for solving smooth nonlinear equations, using a linear combination of divided differences to better approximate the derivative. Zheng et al. [15] proposed a general family of Steffensen-type methods with optimal order of convergence by using Newton's iteration for the direct Newtonian interpolation. In [16], Petković et al. investigated a general way to construct multipoint methods for solving nonlinear equations by using inverse interpolation. In [17], Džunić et al. presented a new family of three-point derivative-free methods by using a self-correcting parameter, calculated by applying the secant-type method in three different ways, together with Newton's interpolatory polynomial of second degree.
The three new methods for simple roots in this paper use three functional evaluations and have fourth-order convergence; thus, they are optimal methods, and their efficiency index is $4^{1/3} \approx 1.587$, which is greater than the efficiency index of Newton's method, $2^{1/2} \approx 1.414$. In the case of multiple roots, the method developed here is cubically convergent and uses three functional evaluations without requiring the second derivative of the function. Thus, the method performs better than Newton's modified method and the methods above, with efficiency index $3^{1/3} \approx 1.442$.
2. Development of the Methods
In this paper, we consider iterative methods to find a simple root $\alpha$ of a nonlinear equation $f(x) = 0$, where $f: I \subset \mathbb{R} \to \mathbb{R}$ is a scalar function on an open interval $I$. We suppose that $f$ is sufficiently differentiable and that $f'(x) \neq 0$ for $x \in I$; since $\alpha$ is a simple root, the inverse function $f^{-1}$ is defined in a neighborhood of $0 = f(\alpha)$. Taking $x_0$ close to $\alpha$ and supposing that $x_n$ has been chosen, we define
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}.$$
2.1. First Method FAM1
Consider the polynomial (12) with the conditions (13). Solving the conditions (13) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation, we find the coefficients (15), and the polynomial (12) can be written as (16). Setting $y = 0$ in (16), we obtain a new iterative method (FAM1), which can be written as (18); it uses three functional evaluations and has fourth-order convergence.
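The general shape of this construction can be sketched in code: fit a quadratic $P(y)$ to the inverse data with a Hermite (double) node at $y = f(x_n)$ and a Lagrange node at $y = f(y_n)$, then take $x_{n+1} = P(0)$. This is a sketch of the construction under those assumptions, not the paper's exact FAM1 formula:

```python
def inverse_hermite_step(f, fprime, x):
    """One step of an inverse Hermite/Lagrange interpolation scheme in
    the spirit of FAM1 (a sketch of the construction, not the paper's
    exact formula): fit the quadratic P(t) with P(fx) = x,
    P'(fx) = 1/f'(x), P(fy) = y, and return the root estimate P(0)."""
    fx = f(x)
    if fx == 0.0:
        return x                       # already at a root
    dfx = fprime(x)
    y = x - fx / dfx                   # Newton predictor y_n
    fy = f(y)
    if fy == fx:                       # degenerate nodes: keep Newton step
        return y
    # P(t) = x + (t - fx)/dfx + c*(t - fx)**2, with c fixed by P(fy) = y
    c = (y - x - (fy - fx) / dfx) / (fy - fx) ** 2
    return x - fx / dfx + c * fx ** 2  # evaluate P at t = 0
```

Each step costs three functional evaluations ($f(x_n)$, $f'(x_n)$, $f(y_n)$); the interpolation error at $y = 0$ carries the factor $f(x_n)^2 f(y_n) = O((x_n - \alpha)^4)$, which is where the fourth order comes from.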
2.2. Second Method FAM2
Consider the polynomial (19) with the conditions (20). Solving the conditions (20) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation, we find the coefficients (21), and the polynomial (19) can be written as (23). Setting $y = 0$ in (23), we obtain our second iterative method (FAM2), which can be written as (25); it uses three functional evaluations and has fourth-order convergence.
2.3. Third Method FAM3
Consider the polynomial (26) with the conditions (27) and (28), where an approximation of the derivative has been used. Solving the conditions (27) and (28) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation, we obtain the coefficients (29). Thus, the polynomial (26) can be written as (31). Setting $y = 0$ in (31), we obtain a third iterative method (FAM3), which can be written as (33); it uses three functional evaluations and has fourth-order convergence.
2.4. Method FAM4 (Multiple Roots)
Consider the polynomial (34), where $m$ is the multiplicity of the root, subject to the stated conditions.
Solving the system, we obtain the coefficients, and thus an iterative method (FAM4) that uses three functional evaluations and has third-order convergence.
3. Analysis of Convergence
Theorem 1. Let $f: I \to \mathbb{R}$ be a sufficiently differentiable function, and let $\alpha \in I$ be a simple zero of $f$ in an open interval $I$, with $f'(x) \neq 0$ on $I$. If $x_0$ is sufficiently close to $\alpha$, then the methods FAM1, FAM2, and FAM3, as defined by (18), (25), and (33), have fourth-order convergence.
Proof. Following a procedure analogous to the derivation of the error in Lagrange's and Hermite's interpolations, the polynomials (12), (19), and (26) in FAM1, FAM2, and FAM3, respectively, have an interpolation error involving, for some $\xi$ in the interval of interpolation, the coefficient in (15), (21), and (29) that appears in the polynomial in (12), (19), and (26), respectively.
Since $y_n$ was obtained from Newton's method, we know that $y_n - \alpha = O\big((x_n - \alpha)^2\big)$, and hence $f(y_n) = O\big((x_n - \alpha)^2\big)$. Evaluating the interpolation error at $y = 0$, the product of node factors is of total order $(x_n - \alpha)^4$ in each of FAM1, FAM2, and FAM3. Thus, FAM1, FAM2, and FAM3 have fourth-order convergence.
Theorem 2. Let $f: I \to \mathbb{R}$ be a sufficiently differentiable function, and let $\alpha \in I$ be a zero of $f$ with multiplicity $m$ in an open interval $I$. If $x_0$ is sufficiently close to $\alpha$, then the method FAM4 defined by (12), (20) is cubically convergent.
Proof. The proof is based on the error of Lagrange's interpolation. Suppose that $x_n$ has been chosen; the interpolation error formula holds for some $\xi$ in the interval of interpolation.
Expanding $f$ around $\alpha$, where $\alpha$ is a zero of multiplicity $m$, and substituting into the interpolation error, the leading error term is of order $(x_n - \alpha)^3$. Therefore, FAM4 has third-order convergence.
Note that the denominator of the iteration is not zero for $x_n$ near $\alpha$, and this fact guarantees that the iteration is well defined and convergent.
4. Numerical Analysis
In this section, we use numerical examples to compare the new methods introduced in this paper with Newton's classical method (NM) and recent methods of fourth-order convergence: Noor's method (NOM) [1], Cordero's method (CM) [2], Chun's third-order method (CHM) [3], and Li's fifth-order method (ZM) [4] in the case of simple roots. For multiple roots, we compare the method developed here with the quadratically convergent modified Newton's method (NMM) and with the cubically convergent Halley's method (HM), Osada's method (OM), Euler-Chebyshev's method (ECM), and Chun-Neta's method (CNM). Tables 2 and 4 show the number of iterations (IT) and the number of functional evaluations (NOFE). The results show that the methods presented in this paper are more efficient.
All computations were done using MATLAB 2010. We accept an approximate solution rather than the exact root, depending on the precision of the computer. We use the following stopping criteria for the computer programs: (i) $|x_{n+1} - x_n| < \epsilon$, (ii) $|f(x_{n+1})| < \epsilon$. When a stopping criterion is satisfied, $x_{n+1}$ is taken as the computed root $\alpha$. For the numerical illustrations in this section, we used a fixed stopping tolerance $\epsilon$.
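The stopping logic can be sketched as follows, shown with a Newton update for concreteness; the tolerance value and stopping on either criterion are assumptions, since the paper's exact settings are not reproduced here:

```python
def solve(f, fprime, x0, eps=1e-15, max_iter=100):
    """Root search with the two stopping criteria of this section:
    (i) |x_{n+1} - x_n| < eps and (ii) |f(x_{n+1})| < eps.
    A Newton update is used for concreteness; eps = 1e-15 and stopping
    on either criterion are assumed, not the paper's exact settings."""
    x = x0
    for it in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < eps or abs(f(x_new)) < eps:
            return x_new, it           # computed root and iteration count
        x = x_new
    return x, max_iter
```

Counting iterations this way is how the IT and NOFE columns of the comparison tables are produced: NOFE is IT times the evaluations per step (two for Newton, three for the FAM methods).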
The computational results presented in Tables 2 and 4 show that, in almost all cases, the presented methods converge more rapidly than Newton's method, Newton's modified method, and the other methods considered, for both simple and multiple roots. The new methods require fewer functional evaluations, which means they are computationally more efficient than the other methods; furthermore, the method FAM3 produces the best results. For most of the functions we tested, the new methods perform at least as well as the other known methods of the same order.
5. Conclusions
In this paper, we introduced three new optimal fourth-order iterative methods for solving nonlinear equations. The analysis of convergence shows that the three new methods have fourth-order convergence; they use three functional evaluations, and thus, according to Kung-Traub's conjecture, they are optimal methods. In the case of multiple roots, the method developed here is cubically convergent and uses three functional evaluations without requiring the second derivative. The numerical analysis shows that these methods perform better than Newton's classical method, Newton's modified method, and other recent methods of third-order (multiple roots) and fourth-order (simple roots) convergence.
Acknowledgments
The authors wish to acknowledge the valuable participation of Professor Nicole Mercier and Professor Joelle Ann Labrecque in the proofreading of this paper. This paper is the result of the research project "Análisis Numérico de Métodos Iterativos Óptimos para la Solución de Ecuaciones No Lineales" ("Numerical Analysis of Optimal Iterative Methods for the Solution of Nonlinear Equations"), developed at the Universidad del Istmo, Campus Tehuantepec, by Researcher-Professor Gustavo Fernández-Torres.
References
- [1] M. A. Noor and W. A. Khan, “Fourth-order iterative method free from second derivative for solving nonlinear equations,” Applied Mathematical Sciences, vol. 6, no. 93–96, pp. 4617–4625, 2012.
- [2] A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, “New modifications of Potra-Pták's method with optimal fourth and eighth orders of convergence,” Journal of Computational and Applied Mathematics, vol. 234, no. 10, pp. 2969–2976, 2010.
- [3] C. Chun, “A geometric construction of iterative formulas of order three,” Applied Mathematics Letters, vol. 23, no. 5, pp. 512–516, 2010.
- [4] Z. Li, C. Peng, T. Zhou, and J. Gao, “A new Newton-type method for solving nonlinear equations with any integer order of convergence,” Journal of Computational Information Systems, vol. 7, no. 7, pp. 2371–2378, 2011.
- [5] H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” Journal of the Association for Computing Machinery, vol. 21, pp. 643–651, 1974.
- [6] A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, NY, USA, 1966.
- [7] A. Ralston and P. Rabinowitz, A First Course in Numerical Analysis, McGraw-Hill, 1978.
- [8] E. Halley, “A new, exact and easy method of finding the roots of equations generally and that without any previous reduction,” Philosophical Transactions of the Royal Society of London, vol. 18, pp. 136–148, 1694.
- [9] E. Hansen and M. Patrick, “A family of root finding methods,” Numerische Mathematik, vol. 27, no. 3, pp. 257–269, 1977.
- [10] N. Osada, “An optimal multiple root-finding method of order three,” Journal of Computational and Applied Mathematics, vol. 51, no. 1, pp. 131–133, 1994.
- [11] H. D. Victory and B. Neta, “A higher order method for multiple zeros of nonlinear functions,” International Journal of Computer Mathematics, vol. 12, no. 3-4, pp. 329–335, 1983.
- [12] R. F. King, “A family of fourth order methods for nonlinear equations,” SIAM Journal on Numerical Analysis, vol. 10, pp. 876–879, 1973.
- [13] C. Chun and B. Neta, “A third-order modification of Newton's method for multiple roots,” Applied Mathematics and Computation, vol. 211, no. 2, pp. 474–479, 2009.
- [14] A. Cordero and J. R. Torregrosa, “A class of Steffensen type methods with optimal order of convergence,” Applied Mathematics and Computation, vol. 217, no. 19, pp. 7653–7659, 2011.
- [15] Q. Zheng, J. Li, and F. Huang, “An optimal Steffensen-type family for solving nonlinear equations,” Applied Mathematics and Computation, vol. 217, no. 23, pp. 9592–9597, 2011.
- [16] M. S. Petković, J. Džunić, and B. Neta, “Interpolatory multipoint methods with memory for solving nonlinear equations,” Applied Mathematics and Computation, vol. 218, no. 6, pp. 2533–2541, 2011.
- [17] J. Džunić, M. S. Petković, and L. D. Petković, “Three-point methods with and without memory for solving nonlinear equations,” Applied Mathematics and Computation, vol. 218, no. 9, pp. 4917–4927, 2012.