International Journal of Mathematics and Mathematical Sciences

Volume 2012 (2012), Article ID 932420, 18 pages

http://dx.doi.org/10.1155/2012/932420

## Optimized Steffensen-Type Methods with Eighth-Order Convergence and High Efficiency Index

F. Soleymani

Department of Applied Mathematics, School of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran

Received 21 March 2012; Revised 23 May 2012; Accepted 6 June 2012

Academic Editor: V. R. Khalilov

Copyright © 2012 F. Soleymani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Steffensen-type methods are practical for solving nonlinear equations, since such schemes need no derivative evaluation per iteration. This work contributes two new multistep classes of Steffensen-type methods for finding a solution of the nonlinear equation $f(x) = 0$. The new techniques can be regarded as generalizations of the one-step method of Steffensen. Theoretical proofs of the main theorems are furnished to establish the eighth-order convergence. Per computing step, the derived methods require only four function evaluations. Experimental results are also given to support the underlying theory of this paper and to allow conclusions on the efficiency of the developed classes.

#### 1. Introduction

Finding the zeros of nonlinear functions rapidly and accurately is a common problem in various areas of numerical analysis. This problem has fascinated mathematicians for centuries, and the literature is full of ingenious methods and discussions of their merits and shortcomings [1–13].

Over the last decades, a large number of different methods have been developed for finding the solution of nonlinear equations, either iteratively or simultaneously. Thus far, the better modified methods for finding roots of nonlinear equations build mainly on the pioneering work of Kung and Traub [14] or on the efficient procedures for constructing high-order methods discussed in [15–18].

In this paper, we consider the problem of finding a simple root of the nonlinear equation
$$f(x) = 0 \tag{1.1}$$
by derivative-free high-order iterations without memory. We note that higher orders of convergence are only possible through employing multistep methods. In what follows, we first give two definitions concerning iterative processes.

*Definition 1.1.* Let the sequence $\{x_n\}$ tend to $\alpha$ such that
$$\lim_{n \to \infty} \frac{x_{n+1} - \alpha}{(x_n - \alpha)^p} = C \neq 0. \tag{1.2}$$
Then $p$ is the order of convergence of the sequence, and $C$ is known as the asymptotic error constant. If $p = 1$, $p = 2$, or $p = 3$, the sequence is said to converge linearly, quadratically, or cubically, respectively.
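Definition 1.1 also suggests a practical diagnostic: from three successive errors one can estimate the computational order of convergence as $p \approx \ln|e_{n+1}/e_n| \,/\, \ln|e_n/e_{n-1}|$. The following Python snippet is a minimal sketch of this estimate (not taken from the paper), applied to the secant method, whose theoretical order is the golden ratio $\approx 1.618$:

```python
import math

def secant(f, x0, x1, steps):
    """Return the iterates of the secant method starting from x0, x1."""
    xs = [x0, x1]
    for _ in range(steps):
        f0, f1 = f(xs[-2]), f(xs[-1])
        # secant step: replace f' in Newton's method by a difference quotient
        xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
    return xs

def coc(xs, root):
    """Computational order of convergence from the last three errors:
    p ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|."""
    e = [abs(x - root) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])
```

For example, `coc(secant(lambda x: x*x - 2, 1.0, 2.0, 5), math.sqrt(2))` gives a value near 1.6–1.7, consistent with the secant method's superlinear order.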

*Definition 1.2.* The efficiency of a method [2] is measured by the concept of efficiency index, defined by
$$E = p^{1/n}, \tag{1.3}$$
where $p$ is the order of the method and $n$ is the whole number of (functional) evaluations per iteration. Note that in (1.3) we consider all function and derivative evaluations to have the same computational cost. Moreover, we recall the Kung-Traub conjecture [14]: an iterative scheme without memory for solving nonlinear equations, using $n$ evaluations per iteration, has the optimal rate (speed) of convergence $2^{n-1}$ and hence the optimal efficiency index $2^{(n-1)/n}$.
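The efficiency index of (1.3) is straightforward to tabulate. A small illustrative computation (assuming, as in the text, that all evaluations cost the same):

```python
def efficiency_index(p, n):
    """Efficiency index E = p**(1/n): order p per cycle, n evaluations per cycle."""
    return p ** (1.0 / n)

# Kung-Traub optimal without-memory methods attain order 2**(n-1) with n evaluations.
newton   = efficiency_index(2, 2)  # one f and one f' per step, ~1.414
optimal4 = efficiency_index(4, 3)  # three evaluations, ~1.587
optimal8 = efficiency_index(8, 4)  # four evaluations, ~1.682
```

The ordering $2^{1/2} < 4^{1/3} < 8^{1/4}$ quantifies why optimal eighth-order methods with four evaluations per cycle are preferred.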

The so-called method of Newton is the basic scheme for solving (1.1). It is an optimal one-step one-point iteration without memory and is, in fact, the essence of all improvements of root solvers. We recall that iterations are divided into two main categories: derivative-involved methods (in which at least one derivative evaluation is needed to proceed; see, e.g., [19, 20]) and derivative-free methods (with no direct derivative evaluation per iteration [21–23]), the latter being more economical in terms of derivative evaluation. Clearly, the real problems arising in science and engineering do not normally allow users to calculate first or second derivatives, since they involve hard structures that make finding the derivatives difficult or time consuming. For this reason, derivative-free methods have come to attention as a way of solving the related problems as easily as possible. For further reading, one may refer to [24–28].

This paper tries to overcome the matter of derivative evaluation for nonlinear solvers by giving two general classes of three-step eighth-order methods, which have the optimal efficiency index, the optimal order, and accurate performance on numerical examples, and which are totally free from derivative calculation per full cycle. Hence, after this short introduction, the rest of the paper is organized as follows. Section 2 briefly presents one of the most applicable uses of root solvers in Mechanical Engineering in order to manifest their applicability. This is followed by Section 3, where some of the available derivative-free root-finding schemes are discussed. Section 4 gives the heart of this paper by contributing two novel classes of three-step derivative-free iterations without memory. Theoretical proofs of the main theorems are given therein to show that each member of our classes attains as much of the efficiency index as possible while using as few evaluations as possible. In Section 5, we give a large number of numerical experiments to reveal the accuracy of the methods obtained from the proposed classes. Finally, Section 6 contains a short conclusion of this study.

#### 2. Describing an Application of Nonlinear Equations Solvers

In Mechanical Engineering, a trunnion (a cylindrical protrusion used as a mounting and/or pivoting point; on a cannon, the trunnions are two projections cast just forward of the center of mass and fixed to a two-wheeled movable gun carriage) has to be cooled before it is shrink fitted into a steel hub.

The equation that gives the temperature to which the trunnion has to be cooled to attain the desired contraction is given by

Clearly, (2.1) can be solved using nonlinear equation solvers. This is one application of nonlinear equation solving by iterative processes in the scalar case. Now consider that the nonlinear scalar equations obtained in other problems are complicated, so their first and second derivatives are not at hand, and moreover better accuracy with low elapsed time is needed. Consequently, a close look should be paid to derivative-free high-order methods by considering multistep iterations. Also note that such iterative schemes can be extended to solve systems of nonlinear equations, which have many applications in engineering problems; see [29] for more.

#### 3. Available Derivative-Free Methods in the Literature

For the first time, Steffensen in [30] coined the following derivative-free form of Newton's method:
$$x_{n+1} = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)}, \tag{3.1}$$
which possesses the same rate of convergence and efficiency index as Newton's.
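As a hedged sketch of Steffensen's iteration (3.1) in Python (the function names, tolerance, and iteration cap are illustrative choices, not from the paper):

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen's method (3.1): a derivative-free variant of Newton's method.
    The quotient (f(x + f(x)) - f(x)) / f(x) plays the role of f'(x),
    so only two function evaluations are needed per step."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx
        if denom == 0:          # guard against a degenerate difference
            break
        x = x - fx * fx / denom  # the Steffensen step
    return x
```

For instance, `steffensen(lambda x: x*x - 2, 1.0)` converges quadratically to $\sqrt{2}$.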

We recall here the well-known family of derivative-free three-step methods given by Kung and Traub [14], with nonzero real parameter $\beta$. This family of one-parameter methods possesses eighth-order convergence while utilizing four pieces of information per cycle. Therefore, its classical efficiency index is $8^{1/4} \approx 1.682$. Note that the first two steps of (3.2) form an optimal fourth-order derivative-free uniparametric family, which can also be given in the form (3.3).

Another uniparametric three-step derivative-free iteration, (3.4), was presented recently by Zheng et al. in [21],
where $f[\cdot,\cdot]$ denotes the divided difference of $f$. We recall that divided differences can be defined recursively via
$$f[x_i, x_j] = \frac{f(x_j) - f(x_i)}{x_j - x_i},$$
and, for $k \geq 2$, via
$$f[x_0, x_1, \ldots, x_k] = \frac{f[x_1, \ldots, x_k] - f[x_0, \ldots, x_{k-1}]}{x_k - x_0}.$$
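The recursive definition of divided differences translates directly into code. The following sketch (illustrative, not taken from [21]) computes $f[x_0, \ldots, x_k]$:

```python
def divided_difference(f, xs):
    """Divided difference f[x0, ..., xk], defined recursively:
    f[x] = f(x);  f[x0..xk] = (f[x1..xk] - f[x0..x_{k-1}]) / (xk - x0)."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) -
            divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])
```

As a check, for $f(x) = x^2$ one has $f[1,2] = 3$ and $f[1,2,3] = 1$ (the leading coefficient of the quadratic).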

#### 4. Main Contributions

As usual when building high-order iterations, we must consider a multipoint cycle. In order to contribute, we pay heed to the following three-step scheme, in which the first step is Steffensen's while the second and third steps are Newton's iterations. This procedure is completely inefficient, since it possesses $8^{1/6} = \sqrt{2} \approx 1.414$ as its efficiency index, which is the same as Steffensen's and Newton's:

To annihilate the derivative evaluations of the structure (4.1) and also keep the order at eight, we must reconstruct the scheme so that the first two steps provide the optimal order four using three evaluations, and the full cycle subsequently reaches eighth-order convergence consuming four function evaluations in total. Toward this end, we make use of the weight function approach, wherein seven real-valued weight functions are involved; they should be chosen such that the order of convergence arrives at the optimal level eight. Theorem 4.1 indicates the way of selecting the weight functions in order to reach the optimal efficiency index using the smallest possible number of function evaluations.

Theorem 4.1. *Let $\alpha$ be a simple root of the nonlinear equation (1.1) in the domain $D$, and assume that $f$ is sufficiently smooth in a neighborhood of the root. Then the derivative-free iterative class without memory defined by (4.2) is of optimal order eight when the weight functions satisfy the conditions (4.3).*

*Proof.* Using Taylor series and symbolic computations, we determine the asymptotic error constant of the three-step uniparametric class (4.2)-(4.3). Let $e_n = x_n - \alpha$ denote the error in the $n$th iterate, and write $c_k = f^{(k)}(\alpha)/(k!\,f'(\alpha))$ for all $k \geq 2$. We now expand $f$ around the simple zero $\alpha$. Hence, we have
By considering (4.4) and the first step of (4.2), we attain
In the same vein, by considering (4.3) and (4.5), we obtain for the second step that
and . We also have . Using symbolic computation in the last step of (4.2) and considering (4.3) and (4.6), we have
Furthermore using (4.3), we have
Finally, Taylor series expansion around the simple root in the last step, using (4.3) and the above relations (the values of the higher-order derivatives of the weight functions, not explicitly given in (4.9), can be arbitrary at the point 0), results in
which shows that the three-step class without memory (4.2)-(4.3) reaches the order of convergence eight by using only four pieces of information. This completes the proof.

A simple computational example from the class (4.2)-(4.3) is the following method, (4.10), whose error equation comes next.

*Remark 4.2.* Although the structure of (4.10) is involved, it can be coded easily, because the repeated factors need to be computed only once per computing step and their values can then be reused throughout; thus, implementing (4.10) requires only a modest number of operational calculations.

The contributed class (4.2)-(4.3) uses a forward finite difference approximation in its first step. If we use a backward finite difference estimation instead, then another novel three-step eighth-order derivative-free iteration without memory can be constructed. Accordingly, our second general derivative-free class is given by (4.13), wherein six real-valued weight functions are involved; they must satisfy the conditions (4.14).

The scheme (4.13)-(4.14) defines a new family of multipoint methods. To obtain a solution of (1.1) by the new derivative-free class, we must choose a particular initial guess $x_0$, ideally close to the simple root. In numerical analysis it is very useful and essential to know the behavior of an approximate method. Therefore, we will prove the order of convergence of the new eighth-order class.

Theorem 4.3. *Let $\alpha$ be a simple root of the nonlinear equation (1.1) in the domain $D$, and assume that $f$ is sufficiently smooth in a neighborhood of the root. Then the derivative-free iterative class defined by (4.13)-(4.14) is of optimal order eight.*

*Proof.* Applying Taylor series and symbolic computations, we determine the asymptotic error constant of the three-step uniparametric family (4.13)-(4.14). Let $e_n = x_n - \alpha$ denote the error in the $n$th iterate, and write $c_k = f^{(k)}(\alpha)/(k!\,f'(\alpha))$ for all $k \geq 2$. The procedure of this proof is similar to that of Theorem 4.1; thus, we give below only the final error equation of (4.13)-(4.14):
This shows that the contributed class (4.13)-(4.14) achieves the optimal order eight by using only four pieces of information. The proof is complete.

A very efficient example from our novel class (4.13)-(4.14) is the following iteration without memory, (4.16),
where
and its *very* simple error equation comes next.
Mostly, and according to the assumptions made in the proof of Theorem 4.1, this quantity should appear in the denominator of the asymptotic error constant of optimal eighth-order methods, and if one obtains stronger such forms, then the derived methods will mostly be finer than the other existing forms of optimal eighth-order methods.

*Remark 4.4.* Each method from the proposed derivative-free classes in this paper reaches the efficiency index $8^{1/4} \approx 1.682$, which is greater than the $4^{1/3} \approx 1.587$ of the optimal fourth-order techniques and the $2^{1/2} \approx 1.414$ of optimal one-point methods without memory.

Some other examples from the class (4.13)-(4.14) can easily be constructed by changing the involved weight functions in (4.14); two such iterations, together with their error equations, come next.

#### 5. Numerical Examples

We check the effectiveness of the novel derivative-free classes of iterative methods (4.2)-(4.3) and (4.13)-(4.14) in this section. To do this, we choose (4.10) as the representative of the optimal class (4.2)-(4.3) and (4.16) as the representative of the novel three-step class (4.13)-(4.14). We have compared (4.10) and (4.16) with Steffensen's method (3.1), the fourth-order family of Kung and Traub (3.3), the eighth-order technique of Kung and Traub (3.2), and the optimal eighth-order family of Zheng et al. (3.4), using the examples listed in Table 1.

The results of the comparisons are given in Table 2 in terms of the number of significant digits for each test function after the specified number of iterations; an entry shows, for example, that the absolute value of the given nonlinear function, after three iterations, is zero up to 1190 decimal places. For numerical comparisons, the stopping criterion is . In Table 2, IT and TNE stand for the number of iterations and the total number of (function) evaluations, respectively. We used four different initial guesses to analyze the behavior of the methods thoroughly. In Table 2, F stands for failure, for instance when the iteration for the particular initial guess diverges, finds another root, or needs too many computing steps to find an acceptable solution of the nonlinear equation.

It can be observed from Table 2 that in most cases our contributed methods from the classes of derivative-free iterations without memory are superior in solving nonlinear equations. Numerical computations have been carried out using variable precision arithmetic in MATLAB 7.6.

We remark here that eighth-order iterative methods such as (4.10) and (4.16) multiply the number of correct digits in the convergence phase for the simple roots of (1.1) by a factor of eight per iteration; in order to show this, and also the asymptotic error constant, we have applied 1200-digit floating point arithmetic.
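The factor-of-eight digit growth is the eighth-order analogue of a familiar phenomenon: for a second-order method, the number of correct digits roughly doubles per step. Since the full weight functions of (4.10) and (4.16) are not reproduced here, the sketch below illustrates the principle with Newton's method on $f(x) = x^2 - 2$, using Python's standard `decimal` module for high-precision arithmetic (the precision and step count are illustrative choices):

```python
from decimal import Decimal, getcontext

getcontext().prec = 200  # high-precision arithmetic, in the spirit of the paper's 1200 digits

def newton_sqrt2_digits(steps):
    """Run Newton's method for f(x) = x**2 - 2 from x0 = 1.5 and
    return the count of correct decimal digits after each iteration."""
    true_root = Decimal(2).sqrt()
    x = Decimal(3) / 2
    digits = []
    for _ in range(steps):
        x = x - (x * x - 2) / (2 * x)          # Newton step
        err = abs(x - true_root)
        # correct decimal digits ~ -log10(|error|)
        digits.append(int(-err.log10()) if err != 0 else getcontext().prec)
    return digits
```

The returned digit counts grow roughly geometrically with ratio two; for the eighth-order methods of this paper the corresponding ratio is eight, which is why 1200-digit arithmetic is exhausted after only a few iterations.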

Note that experimental results show that the smaller the value of the free parameter, the more reliable the output results of solving nonlinear equations. As a matter of fact, the contributed methods from the three-step derivative-free classes (4.2)-(4.3) and (4.13)-(4.14) give better feedback in most cases, for instance better accuracy than that illustrated in Table 2, when very small values are chosen for the parameter; by doing this, the error equation is narrowed. Also note that if we approximate the parameter by an iteration through the data of the first step per cycle, then *with memory* iterations from the suggested classes are attained, which is the topic of forthcoming papers in this field.

A simple glance at Table 2 reveals that (4.16) is mostly better than its competitors. The reason is that the finer the error equation, the better the numerical results attained. The error equation corresponding to (4.16), namely (4.18), is very small; the expression in its denominator clearly shows these refinements. In general, and according to the assumptions made in the proof of Theorem 4.1, this quantity should lie in the denominator of the error constants of optimal eighth-order methods, and if one obtains stronger such forms, then the derived methods will mostly be finer than the other existing forms of optimal eighth-order methods.

#### 6. Conclusions

Multipoint methods without memory are methods that use new information at a number of points. An extensive literature on multipoint Newton-like methods for functions of one variable, together with their convergence analysis, can be found in Traub [2].

In this paper, two novel classes of iterations without memory were discussed fully. We have shown that each member of our contributions reaches the optimal order of convergence eight while consuming only four function evaluations per full cycle. Thus, our classes support the optimality conjecture of Kung and Traub for building optimal iterations without memory. Our classes can be regarded as generalizations of the well-cited derivative-free method of Steffensen. We have given many numerical examples in Section 5 to support the underlying theory developed in this paper. The numerical results were completely in harmony with the theory, and accordingly, the contributions hit the target.

#### Acknowledgment

The author cheerfully acknowledges the interesting comments of the reviewer, which have led to the improvement of this paper.

#### References

1. A. Iliev and N. Kyurkchiev, *Nontrivial Methods in Numerical Analysis (Selected Topics in Numerical Analysis)*, Lambert Academic Publishing, 2010.
2. J. F. Traub, *Iterative Methods for the Solution of Equations*, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
3. F. Soleymani, R. Sharma, X. Li, and E. Tohidi, "An optimized derivative-free form of the Potra-Ptak method," *Mathematical and Computer Modelling*, vol. 56, pp. 97–104, 2012.
4. F. Soleymani, "Optimal fourth-order iterative methods free from derivative," *Miskolc Mathematical Notes*, vol. 12, pp. 255–264, 2011.
5. F. Soleymani and B. S. Mousavi, "On novel classes of iterative methods for solving nonlinear equations," *Computational Mathematics and Mathematical Physics*, vol. 52, pp. 203–210, 2012.
6. F. Soleymani and S. K. Khattri, "Finding simple roots by seventh- and eighth-order derivative-free methods," *International Journal of Mathematical Models and Methods in Applied Sciences*, vol. 6, pp. 45–52, 2012.
7. F. Soleymani, "On a novel optimal quartically class of methods," *Far East Journal of Mathematical Sciences*, vol. 58, pp. 199–206, 2011.
8. F. Soleymani, "Optimal eighth-order simple root-finders free from derivative," *WSEAS Transactions on Information Science and Applications*, vol. 8, pp. 293–299, 2011.
9. F. Soleymani and F. Soleimani, "Novel computational derivative-free methods for simple roots," *Fixed Point Theory, Computation and Applications*, vol. 13, pp. 247–258, 2012.
10. F. Soleymani, S. K. Vanani, and A. Afghani, "A general three-step class of optimal iterations for nonlinear equations," *Mathematical Problems in Engineering*, vol. 2011, Article ID 469512, 10 pages, 2011.
11. W.-X. Wang, Y.-L. Shang, W.-G. Sun, and Y. Zhang, "Finding the roots of system of nonlinear equations by a novel filled function method," *Abstract and Applied Analysis*, vol. 2011, Article ID 209083, 9 pages, 2011.
12. B. Ignatova, N. Kyurkchiev, and A. Iliev, "Multipoint algorithms arising from optimal in the sense of Kung-Traub iterative procedures for numerical solution of nonlinear equations," *General Mathematics Notes*, vol. 6, no. 2, 2011.
13. S. Kumar, V. Kanwar, S. K. Tomar, and S. Singh, "Geometrically constructed families of Newton's method for unconstrained optimization and nonlinear equations," *International Journal of Mathematics and Mathematical Sciences*, vol. 2011, Article ID 972537, 9 pages, 2011.
14. H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," *Journal of the Association for Computing Machinery*, vol. 21, pp. 643–651, 1974.
15. P. Sargolzaei and F. Soleymani, "Accurate fourteenth-order methods for solving nonlinear equations," *Numerical Algorithms*, vol. 58, no. 4, pp. 513–527, 2011.
16. M. S. Petković, J. Džunić, and B. Neta, "Interpolatory multipoint methods with memory for solving nonlinear equations," *Applied Mathematics and Computation*, vol. 218, no. 6, pp. 2533–2541, 2011.
17. F. Soleymani, S. Karimi Vanani, and M. Jamali Paghaleh, "A class of three-step derivative-free root-solvers with optimal convergence order," *Journal of Applied Mathematics*, vol. 2012, Article ID 568740, 15 pages, 2012.
18. M. Sharifi, D. K. R. Babajee, and F. Soleymani, "Finding the solution of nonlinear equations by a class of optimal methods," *Computers & Mathematics with Applications*, vol. 63, pp. 764–774, 2012.
19. F. Soleymani, S. Karimi Vanani, M. Khan, and M. Sharifi, "Some modifications of King's family with optimal eighth order of convergence," *Mathematical and Computer Modelling*, vol. 55, pp. 1373–1380, 2012.
20. F. Soleymani, M. Sharifi, and B. Somayeh Mousavi, "An improvement of Ostrowski's and King's techniques with optimal convergence order eight," *Journal of Optimization Theory and Applications*, vol. 153, no. 1, pp. 225–236, 2012.
21. Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," *Applied Mathematics and Computation*, vol. 217, no. 23, pp. 9592–9597, 2011.
22. F. Soleymani, "On a new class of optimal eighth-order derivative-free methods," *Mathematica*, vol. 3, pp. 169–180, 2011.
23. F. Soleymani and S. Karimi Vanani, "Optimal Steffensen-type methods with eighth order of convergence," *Computers & Mathematics with Applications*, vol. 62, no. 12, pp. 4619–4626, 2011.
24. F. Soleymani, "New optimal iterative methods in solving nonlinear equations," *International Journal of Pure and Applied Mathematics*, vol. 72, pp. 195–202, 2011.
25. F. Soleymani, "On a bi-parametric class of optimal eighth-order derivative-free methods," *International Journal of Pure and Applied Mathematics*, vol. 72, no. 1, pp. 27–37, 2011.
26. F. Soleymani, S. K. Khattri, and S. Karimi Vanani, "Two new classes of optimal Jarratt-type fourth-order methods," *Applied Mathematics Letters*, vol. 25, pp. 847–853, 2012.
27. F. Soleymani, "Novel computational iterative methods with optimal order for nonlinear equations," *Advances in Numerical Analysis*, vol. 2011, Article ID 270903, 10 pages, 2011.
28. B. I. Yun, "A non-iterative method for solving non-linear equations," *Applied Mathematics and Computation*, vol. 198, no. 2, pp. 691–699, 2008.
29. D. K. R. Babajee, *Analysis of higher order variants of Newton's method and their applications to differential and integral equations and in ocean acidification* [Ph.D. thesis], University of Mauritius, 2010.
30. J. F. Steffensen, "Remarks on iteration," *Scandinavian Actuarial Journal*, vol. 16, no. 1, pp. 64–72, 1933.