Research Article  Open Access
On a New Efficient Steffensen-Like Iterative Class by Applying a Suitable Self-Accelerator Parameter
Abstract
We present an efficient derivative-free class of Steffensen-like methods for solving nonlinear equations. To this end, we first construct an optimal eighth-order three-step uniparametric class of iterative methods without memory. The self-accelerator parameter is then estimated using Newton's interpolation in such a way that the convergence order improves from 8 to 12 without any extra function evaluation. Consequently, the efficiency index increases from 8^{1/4} ≈ 1.682 to 12^{1/4} ≈ 1.861, which is the main feature of this class. To show the applicability of the proposed methods, some numerical illustrations are presented.
1. Introduction
Kung and Traub were pioneers in constructing optimal general multistep methods without memory. They devised two general multistep classes based on interpolation. Moreover, they conjectured that any multipoint method without memory using n function evaluations per iteration may reach a convergence order of at most 2^{n-1} [1]. Accordingly, many authors during recent years, especially the past four years, have attempted to construct iterative methods without memory which support this conjecture with optimal order [1–22].
Although the construction of optimal methods without memory is still an active field, much attention has not been paid to developing methods with memory. To the best of our knowledge, Traub introduced the first method with memory in his book. The main feature of such methods is that they improve the convergence order, and hence the efficiency index, without any new function evaluations. Indeed, Traub modified Steffensen's method slightly as follows (see [18, pp. 185–187]):

w_k = x_k + γ_k f(x_k),   x_{k+1} = x_k − f(x_k)/f[x_k, w_k],   γ_{k+1} = −1/f[x_k, x_{k+1}],   k = 0, 1, 2, …,   (1)

where γ_0 is a given nonzero initial parameter. The parameter γ_k is called a self-accelerator, and method (1) has convergence order 1 + √2 ≈ 2.41. It is still possible to increase the convergence order by means of a better self-accelerator parameter based on a higher-degree Newton interpolation. Being derivative-free can be considered another virtue of (1).
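To make the scheme concrete, here is a minimal Python sketch of a self-accelerating Steffensen iteration in the spirit of (1). It assumes the classical form w_k = x_k + γ_k f(x_k), x_{k+1} = x_k − f(x_k)/f[x_k, w_k], with the accelerator updated as γ_{k+1} = −1/f[x_k, x_{k+1}]; the function name, default γ_0, and stopping rule are our own illustrative choices.

```python
def traub_steffensen(f, x0, gamma0=0.01, tol=1e-14, max_iter=50):
    """Self-accelerating Steffensen iteration in the spirit of Traub's method (1)."""
    x, gamma = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx                 # auxiliary point w_k = x_k + gamma_k f(x_k)
        dd = (f(w) - fx) / (w - x)         # first-order divided difference f[x_k, w_k]
        x_new = x - fx / dd                # Steffensen-like step
        fx_new = f(x_new)
        if fx_new != fx:                   # update the self-accelerator gamma_{k+1}
            gamma = -(x_new - x) / (fx_new - fx)   # = -1/f[x_k, x_{k+1}]
        x = x_new
    return x
```

For f(x) = x^2 − 2 and x_0 = 1.5 this converges to √2 in a handful of iterations, each costing only two evaluations of f (the updated γ reuses values already computed).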
In this work, motivated by Traub's method (1), we construct a new class of methods with memory. To this end, we first devise a new optimal derivative-free three-step class of iterative methods without memory with eighth order of convergence, using merely four function evaluations per iteration. In other words, our first step is the same as Traub's method (1). The second and third steps combine Steffensen-like methods with the weight-function idea so that we achieve an optimal class of methods without memory. Finally, we apply a self-accelerator parameter to extend it to the with-memory case. We emphasize the two main features of this work: increasing the efficiency index without any new function evaluations, and avoiding derivatives of the given function.
We use the symbols o, O, and ∼ according to the following conventions [18]: if lim g(t_k)/h(t_k) = 0, we write g(t_k) = o(h(t_k)); if |g(t_k)/h(t_k)| remains bounded, we write g(t_k) = O(h(t_k)); and if lim g(t_k)/h(t_k) = C, where C is a nonzero constant, we write g(t_k) ∼ C h(t_k). Let f be a function defined on an interval I, where I is the smallest interval containing the distinct nodes x_0, x_1, …, x_k. The divided difference of kth order is defined recursively by

f[x_0] = f(x_0),   f[x_0, x_1, …, x_k] = (f[x_1, …, x_k] − f[x_0, …, x_{k−1}]) / (x_k − x_0).

Moreover, we recall the definition of the efficiency index (EI) as EI = p^{1/n}, where p is the order of convergence and n is the total number of function evaluations per iteration.
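The recursive definition of divided differences and the efficiency index can be checked with a short Python sketch; the function names here are our own, chosen for illustration.

```python
def divided_difference(f, nodes):
    """k-th order divided difference f[x_0, ..., x_k], computed recursively:
    f[x_0] = f(x_0);  f[x_0,...,x_k] = (f[x_1,...,x_k] - f[x_0,...,x_{k-1}]) / (x_k - x_0)."""
    if len(nodes) == 1:
        return f(nodes[0])
    return (divided_difference(f, nodes[1:]) -
            divided_difference(f, nodes[:-1])) / (nodes[-1] - nodes[0])

def efficiency_index(p, n):
    """EI = p**(1/n): convergence order p achieved with n function evaluations per iteration."""
    return p ** (1.0 / n)
```

For instance, the second-order divided difference of f(x) = x^2 equals 1 at any three distinct nodes, and efficiency_index(12, 4) ≈ 1.861 exceeds efficiency_index(8, 4) ≈ 1.682, quantifying the gain promised by the with-memory extension.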
This work is organized as follows: Section 2 presents the construction and error analysis of the optimal three-step class without memory. Section 3 is devoted to the with-memory extension. Numerical results are demonstrated in Section 4. We sum up this work in Section 5.
2. Derivative-Free Three-Point Method
This section concerns the construction of a new class of three-step derivative-free methods without memory for solving nonlinear equations. In the next section, it is extended to its with-memory version. To this end, let us first start with the following three-step Steffensen-type [23] scheme: This scheme is not optimal in the sense of Kung and Traub [1], as it is of fourth-order convergence while using four function evaluations per iteration. In other words, its error equation has the form Therefore, some modifications based on the weight-function idea must be applied so that scheme (3) becomes optimal. Accordingly, we put forward the following iterative plan: where , , , .
The main contribution of this section lies in the following theorem, which provides sufficient conditions for obtaining an optimal three-step class of iterations without memory.
Theorem 1. Let , , and be differentiable two-variable functions that satisfy the conditions If the initial approximation is sufficiently close to the zero of the function , then the convergence order of the family (5) is eight.
Proof. Let , , , , and , . Using Taylor’s expansion and taking into account , we have
Substituting these into the first step of (5) gives
Setting , , and , and expanding about , yields
Substituting (9) into (5), we can assert that
where
To achieve fourth-order convergence in the first two steps of (5), we annihilate the coefficient of in (10). For this purpose, it suffices to set
Define , , , and , . Taylor's series for about are
Under the conditions stated in (13), and substituting these Taylor series into the third step of (5), we obtain
where
Fix ; then .
Under these conditions, (15) becomes
and to get , it is sufficient to put .
In the same manner, we can see that the coefficient of is
To annihilate the coefficient of , set , , and we conclude similarly that
As in the above cases, choosing , , and gives .
On account of the above conditions, we see that
Some simple but efficient weight functions satisfying the conditions of Theorem 1 are where .
In the next section we introduce a new three-step method with memory. The efficiency index of the optimal class (5) is 8^{1/4} ≈ 1.682. We extend the proposed class (5) to its with-memory version, using an accelerator parameter, which improves the efficiency index to 12^{1/4} ≈ 1.861.
3. A New Method with Memory
Looking at the error equation (20) of the class (5) reveals that we can increase the convergence order of this class if the crucial element vanishes. This can be done if . Although this is true theoretically, it is not possible in practice since is unknown. Fortunately, during the iterative process (5), finer and finer approximations to are generated by the sequence , and therefore we try to obtain a good approximation for . At each iteration, , , , and are accessible, except at the initial step. Hence, we can interpolate using these nodes. It is natural to seek the best interpolator, and as a result we consider the Newton interpolating polynomial as follows:
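In practice the self-accelerator is taken as γ = −1/N'(x), where N is the Newton interpolating polynomial through the currently available nodes. The following Python sketch computes this estimate; it assumes the nodes and their function values are already at hand, and the helper names are our own.

```python
def newton_poly_derivative(xs, ys, t):
    """Derivative at t of the Newton interpolating polynomial through (xs[i], ys[i])."""
    n = len(xs)
    coef = list(ys)
    # build divided-difference coefficients in place: coef[j] = f[x_0, ..., x_j]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    # evaluate N(t) and N'(t) simultaneously via a Horner-like recurrence
    p, dp = coef[-1], 0.0
    for i in range(n - 2, -1, -1):
        dp = dp * (t - xs[i]) + p
        p = p * (t - xs[i]) + coef[i]
    return dp

def self_accelerator(xs, ys, t):
    """gamma = -1 / N'(t): the accelerator estimate built from available iterates."""
    return -1.0 / newton_poly_derivative(xs, ys, t)
```

As a sanity check, for f(x) = x^2 with nodes 0, 1, 3 the derivative of the interpolant at t = 2 is exactly f'(2) = 4, since the quadratic interpolant reproduces f.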
In the next theorem we prove that if , then the convergence order of the proposed class in Theorem 1 improves to 12.
Theorem 2. Suppose that is an approximation to a simple zero of ; then the order of convergence of the three-point method (5) with memory is at least 12.
Proof. Suppose that an iterative method generates a sequence approximating a zero of , and that the error ratio tends to the asymptotic error constant as ; so
Assume that the iterative sequences , , and have orders , , and , respectively; that is,
On the other hand, based on error analysis of the Theorem 1, we have
where and are explicit from (20) and depend on iteration index since is recalculated in each step.
By (23) and the order of interpolatory iteration functions (see Section 4.2 in [18]), we can also conclude that
Since , then
Combining (26) with (28), we infer that
Equating the powers on the right-hand sides of relations (25) and (29), we form the following system of equations:
A nontrivial solution of this system is , , , and . Therefore, the order of the methods with memory (5), under the assumptions of Theorem 1 and when , is at least 12.
Remark 3. If we use a lower-degree Newton interpolation, we achieve a correspondingly lower order.
4. Numerical Results
In this section, we test our proposed methods and compare their results with some other methods of the same order of convergence. First, we introduce some concrete methods based on the proposed class in this work.
Considering the weight functions (21)–(22), we obtain the following concrete methods: Concrete method 1: Concrete method 2: Concrete method 3: Concrete method 4:
For comparison purposes, we consider the following methods: Three-point method by Sharma et al. [20]: where , and . Three-point method by Kung and Traub [1]: Three-point method by Zheng et al. [21]: Three-point method by Soleymani et al. [22]:
By we denote the approximations to the zero , stands for , and COC denotes the computational order of convergence. Here, COC is defined by [16]:

COC ≈ ln(|x_{k+1} − α| / |x_k − α|) / ln(|x_k − α| / |x_{k−1} − α|).

The following test functions are also used: Tables 1 and 2 show numerical results for the various optimal without-memory methods (31)–(38). It is clear that all these methods behave very well in practice and confirm the relevant theory.
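The COC values reported in the tables can be reproduced with a few lines of Python; this sketch assumes the α-based definition above and uses the last three iterates of a run.

```python
import math

def coc(iterates, alpha):
    """Computational order of convergence estimated from the last three iterates."""
    e = [abs(x - alpha) for x in iterates[-3:]]   # absolute errors |x_k - alpha|
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])
```

For errors 10^{-1}, 10^{-2}, 10^{-4} (each error roughly the square of the previous one) this returns 2, as expected for a quadratically convergent sequence.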


Tables 3 and 4 present numerical results for the various with-memory methods (31)–(38). It is also clear that all these methods behave very well in practice and confirm the relevant theory. They all attain 12th-order convergence asymptotically without any new function evaluations.


5. Conclusions
In this work we proposed a new optimal class of methods without and with memory for computing a simple root of a nonlinear equation. Its without-memory and with-memory methods attain orders of convergence 8 and 12, respectively, using only four function evaluations per iteration. The class is derivative-free, which can be considered another of its virtues. Altogether, we managed to increase the efficiency index from 8^{1/4} ≈ 1.682 for the methods without memory to 12^{1/4} ≈ 1.861 by means of a suitable self-accelerator parameter based on Newton interpolation.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
First of all, the authors express their sincere appreciation to the referees for their valuable comments. This research was supported by Islamic Azad University, Hamedan Branch, as a research plan entitled “A new class of iterative methods with and without memory.”
References
[1] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, no. 4, pp. 643–651, 1974.
[2] R. Behl, V. Kanwar, and K. K. Sharma, "Another simple way of deriving several iterative functions to solve nonlinear equations," Journal of Applied Mathematics, vol. 2012, Article ID 294086, 22 pages, 2012.
[3] F. Soleimani, F. Soleymani, and S. Shateyi, "Some iterative methods free from derivatives and their basins of attraction for nonlinear equations," Discrete Dynamics in Nature and Society, vol. 2013, Article ID 301718, 10 pages, 2013.
[4] G. Fernandez-Torres and J. Vásquez-Aquino, "Three new optimal fourth-order iterative methods to solve nonlinear equations," Advances in Numerical Analysis, vol. 2013, Article ID 957496, 8 pages, 2013.
[5] H. Montazeri, F. Soleymani, S. Shateyi, and S. S. Motsa, "On a new method for computing the numerical solution of systems of nonlinear equations," Journal of Applied Mathematics, vol. 2012, Article ID 751975, 15 pages, 2012.
[6] F. Soleymani, "Novel computational iterative methods with optimal order for nonlinear equations," Advances in Numerical Analysis, vol. 2011, Article ID 270903, 10 pages, 2011.
[7] F. Soleymani, S. Karimi Vanani, and A. Afghani, "A general three-step class of optimal iterations for nonlinear equations," Mathematical Problems in Engineering, vol. 2011, Article ID 469512, 10 pages, 2011.
[8] F. Soleymani, M. Sharifi, and S. Mousavi, "An improvement of Ostrowski's and King's techniques with optimal convergence order eight," Journal of Optimization Theory and Applications, vol. 153, no. 1, pp. 225–236, 2012.
[9] F. Soleymani, S. Karimi Vanani, and M. Jamali Paghaleh, "A class of three-step derivative-free root solvers with optimal convergence order," Journal of Applied Mathematics, vol. 2012, Article ID 568740, 15 pages, 2012.
[10] F. Soleymani, S. Karimi Vanani, M. Khan, and M. Sharifi, "Some modifications of King's family with optimal eighth order of convergence," Mathematical and Computer Modelling, vol. 55, no. 3-4, pp. 1373–1380, 2012.
[11] R. Thukral, "Further development of Jarratt method for solving nonlinear equations," Advances in Numerical Analysis, vol. 2012, Article ID 493707, 9 pages, 2012.
[12] R. Thukral, "Eighth-order iterative methods without derivatives for solving nonlinear equations," ISRN Applied Mathematics, vol. 2011, Article ID 693787, 12 pages, 2011.
[13] R. Thukral, "New eighth-order derivative-free methods for solving nonlinear equations," International Journal of Mathematics and Mathematical Sciences, vol. 2012, Article ID 493456, 12 pages, 2012.
[14] R. Thukral, "A new eighth-order iterative method for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 1, pp. 222–229, 2010.
[15] S. M. Kang, A. Rafiq, and Y. C. Kwun, "A new second-order iteration method for solving nonlinear equations," Abstract and Applied Analysis, vol. 2013, Article ID 487062, 4 pages, 2013.
[16] F. Soleymani, "A new method for solving ill-conditioned linear systems," Opuscula Mathematica, vol. 33, no. 2, pp. 337–344, 2013.
[17] F. Soleymani, "A rapid numerical algorithm to compute matrix inversion," International Journal of Mathematics and Mathematical Sciences, vol. 2012, Article ID 134653, 11 pages, 2012.
[18] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
[19] J. Wang, "He's max-min approach for coupled cubic nonlinear equations arising in packaging system," Mathematical Problems in Engineering, vol. 2013, Article ID 382509, 4 pages, 2013.
[20] J. R. Sharma, R. K. Guha, and P. Gupta, "Some efficient derivative free methods with memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 219, pp. 699–707, 2012.
[21] Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 23, pp. 9592–9597, 2011.
[22] F. Soleymani, D. K. R. Babajee, S. Shateyi, and S. S. Motsa, "Construction of optimal derivative-free techniques without memory," Journal of Applied Mathematics, vol. 2012, Article ID 497023, 24 pages, 2012.
[23] J. F. Steffensen, "Remarks on iteration," Skandinavisk Aktuarietidskrift, vol. 16, pp. 64–72, 1933.
Copyright
Copyright © 2014 Taher Lotfi and Elahe Tavakoli. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.