Research Article  Open Access
F. Soleymani, "Optimized Steffensen-Type Methods with Eighth-Order Convergence and High Efficiency Index", International Journal of Mathematics and Mathematical Sciences, vol. 2012, Article ID 932420, 18 pages, 2012. https://doi.org/10.1155/2012/932420
Optimized Steffensen-Type Methods with Eighth-Order Convergence and High Efficiency Index
Abstract
Steffensen-type methods are practical in solving nonlinear equations, since such schemes do not need any derivative evaluation per iteration. Hence, this work contributes two new multistep classes of Steffensen-type methods for finding a solution of the nonlinear equation \(f(x) = 0\). The new techniques can be viewed as generalizations of the one-step method of Steffensen. Theoretical proofs of the main theorems are furnished to reveal the eighth-order convergence. Per computing step, the derived methods require only four function evaluations. Experimental results are also given to lend further support to the underlying theory of this paper and to allow a conclusion on the efficiency of the developed classes.
1. Introduction
Finding the zeros of nonlinear functions rapidly and accurately is a common problem in various areas of numerical analysis. This problem has fascinated mathematicians for centuries, and the literature is full of ingenious methods and discussions of their merits and shortcomings [1–13].
Over the last decades, a large number of different methods have been developed for finding the solutions of nonlinear equations, either iteratively or simultaneously. Thus far, the better modified methods for finding roots of nonlinear equations build mainly on the pioneering work of Kung and Traub [14] or on the efficient ways to construct high-order methods through the procedures discussed in [15–18].
In this paper, we consider the problem of finding a simple root of the nonlinear equation \(f(x) = 0\) (1.1) by derivative-free high-order iterations without memory. We note here that a higher order of convergence is only possible through employing multistep methods. In what follows, we first give two definitions concerning iterative processes.
Definition 1.1. Let the sequence \(\{x_n\}\) tend to \(\alpha\) such that \(\lim_{n\to\infty} (x_{n+1}-\alpha)/(x_n-\alpha)^p = C \neq 0\) for some \(p \geq 1\). Then the order of convergence of the sequence is \(p\), and \(C\) is known as the asymptotic error constant. If \(p = 1\), \(p = 2\), or \(p = 3\), the sequence is said to converge linearly, quadratically, or cubically, respectively.
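The order in Definition 1.1 can be estimated numerically from three consecutive errors (the so-called computational order of convergence). A minimal sketch in Python, using Newton's method on the illustrative equation \(x^2 - 2 = 0\) only to generate a test sequence (function names and the test equation are ours, not the paper's):

```python
import math

def newton_step(f, df, x):
    """One Newton iteration (used here only to generate a test sequence)."""
    return x - f(x) / df(x)

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
alpha = math.sqrt(2.0)  # the exact simple root

# Generate a few iterates of Newton's method from x0 = 1.5.
xs = [1.5]
for _ in range(3):
    xs.append(newton_step(f, df, xs[-1]))

# Computational order of convergence:
#   p ~ ln(|e_{n+1}| / |e_n|) / ln(|e_n| / |e_{n-1}|)
e = [abs(x - alpha) for x in xs]
p = math.log(e[3] / e[2]) / math.log(e[2] / e[1])
print(round(p, 2))  # close to 2, as expected for a quadratic method
```

The same three-error formula applies to any of the iterations discussed below; an eighth-order method would produce an estimate near 8.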
Definition 1.2. The efficiency of a method [2] is measured by the concept of the efficiency index, defined by \(E = p^{1/n}\) (1.3), where \(p\) is the order of the method and \(n\) is the whole number of (functional) evaluations per iteration. Note that in (1.3) we consider that all function and derivative evaluations have the same computational cost. Moreover, we should recall the Kung-Traub conjecture [14]: an iterative scheme without memory for solving nonlinear equations, consuming \(n\) evaluations per iteration, has the optimal rate (speed) of convergence \(2^{n-1}\) and hence the optimal efficiency index \(2^{(n-1)/n}\).
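Definition 1.2 and the Kung-Traub bound can be made concrete with a few lines of arithmetic (a sketch; the helper names are ours):

```python
def efficiency_index(p, n):
    """Efficiency index E = p**(1/n): order p, n evaluations per iteration."""
    return p ** (1.0 / n)

def optimal_efficiency(n):
    """Kung-Traub conjecture: an optimal n-evaluation method without
    memory has order 2**(n-1), hence efficiency index 2**((n-1)/n)."""
    return efficiency_index(2 ** (n - 1), n)

print(round(efficiency_index(2, 2), 3))  # Newton / Steffensen: 2**(1/2)
print(round(optimal_efficiency(3), 3))   # optimal fourth order: 4**(1/3)
print(round(optimal_efficiency(4), 3))   # optimal eighth order: 8**(1/4)
```

These three values (about 1.414, 1.587, and 1.682) are exactly the efficiency indices compared throughout the paper.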
The so-called method of Newton, \(x_{n+1} = x_n - f(x_n)/f'(x_n)\), is basically taken into account to solve (1.1). This scheme is an optimal one-step one-point iteration without memory and is in fact the essence of all the improvements of root solvers. We should note that iterations are themselves divided into two main categories: derivative-involved methods (in which at least one derivative evaluation is needed to proceed; see, e.g., [19, 20]) and derivative-free methods (which do not have a direct derivative evaluation per iteration [21–23]) and which are therefore more economical in terms of derivative evaluation. Clearly, the real problems arising in science and engineering do not normally let the users calculate their first or second derivatives, since such problems involve hard structures, which make the procedure of finding the derivatives difficult or time consuming. Due to this, derivative-free methods have now come to attention for solving the related problems as easily as possible. For further reading, one may refer to [24–28].
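For contrast with the derivative-free schemes that follow, Newton's method with an explicit derivative can be sketched as follows (a minimal illustration; the test function is ours):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n).
    Note the cost: one f and one f' evaluation per iteration,
    and the user must supply the derivative df explicitly."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the real root of x**3 - x - 2 (approximately 1.5214).
root = newton(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, x0=1.5)
```

The requirement of `df` in the signature is precisely what the derivative-free methods of this paper remove.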
This paper tries to overcome the matter of derivative evaluation for nonlinear solvers by giving two general classes of three-step eighth-order convergent methods, which have the optimal efficiency index, optimal order, and accurate performance in solving numerical examples, and which are totally free from derivative calculation per full cycle. Hence, after this short introduction, the rest of the paper is organized as follows. Section 2 briefly presents one of the most applicable usages of root solvers in Mechanical Engineering in order to manifest their applicability. This section is followed by Section 3, where some of the available derivative-free schemes in root-finding are discussed and presented. Section 4 gives the heart of this paper by contributing two novel classes of three-step derivative-free iterations without memory. The theoretical proofs of the main theorems are given therein to show that each member of our classes attains as high as possible an efficiency index by using as small as possible a number of evaluations. In Section 5, we give a large number of numerical experiments to reveal the accuracy of the methods obtained from the proposed classes. Finally, Section 6 contains a short conclusion of this study.
2. Describing an Application of Nonlinear Equations Solvers
In Mechanical Engineering, a trunnion (a cylindrical protrusion used as a mounting and/or pivoting point; in a cannon, the trunnions are two projections cast just forward of the center of mass of the cannon and fixed to a two-wheeled movable gun carriage) has to be cooled before it is shrink fitted into a steel hub.
The equation that gives the temperature to which the trunnion has to be cooled to attain the desired contraction is given by (2.1).
Clearly, (2.1) could be solved using nonlinear equation solvers. This was one application of nonlinear equation solving by iterative processes in the scalar case. Now consider that the nonlinear scalar equations obtained in other problems are complicated, so that their first and second derivatives are not at hand, and that, moreover, better accuracy with low elapsed time is needed. Consequently, close attention should be paid to derivative-free high-order multistep iterations. Also note that such iterative schemes can be extended to solve systems of nonlinear equations, which have many applications in engineering problems; see [29] for more.
3. Available DerivativeFree Methods in the Literature
For the first time, Steffensen in [30] coined the following derivative-free form of Newton's method, \(x_{n+1} = x_n - f(x_n)^2/\big(f(x_n + f(x_n)) - f(x_n)\big)\) (3.1), which possesses the same rate of convergence and efficiency index as Newton's.
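Steffensen's scheme (3.1) can be sketched in a few lines; note that each iteration costs exactly two function evaluations, \(f(x_n)\) and \(f(x_n + f(x_n))\), and no derivative (the guard clauses are ours):

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen's method (3.1): Newton's iteration with f'(x_n)
    replaced by the forward difference (f(x + f(x)) - f(x)) / f(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        denom = f(x + fx) - fx
        if denom == 0.0:           # degenerate difference; give up
            break
        x_new = x - fx * fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the square root of 2 as the positive root of x**2 - 2.
root = steffensen(lambda x: x * x - 2.0, x0=1.5)
```

Like Newton's method, the iteration is quadratically convergent near a simple root, so the efficiency index is \(2^{1/2}\) in both cases.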
We recall here the well-known family of derivative-free three-step methods given by Kung and Traub [14] as (3.2). This family of one-parameter methods possesses eighth-order convergence utilizing four pieces of information per iteration; therefore, its classical efficiency index is \(8^{1/4} \approx 1.682\). Note that the first two steps of (3.2) form an optimal fourth-order derivative-free uniparametric family, which could also be written as (3.3).
Another uniparametric three-step derivative-free iteration was presented recently by Zheng et al. in [21] as (3.4), written in terms of divided differences of \(f\). We recall that divided differences can be defined recursively via \(f[x_i, x_j] = (f(x_j) - f(x_i))/(x_j - x_i)\) and, for higher orders, via \(f[x_0, \ldots, x_k] = (f[x_1, \ldots, x_k] - f[x_0, \ldots, x_{k-1}])/(x_k - x_0)\).
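The recursive definition of divided differences translates directly into code; a short sketch (recursive and unoptimized, purely for illustration):

```python
def divided_difference(f, xs):
    """Divided difference f[x0, ..., xk], defined recursively:
       f[x0]          = f(x0)
       f[x0, ..., xk] = (f[x1, ..., xk] - f[x0, ..., x_{k-1}]) / (xk - x0)"""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:])
            - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

# First-order divided difference, the slope used in derivative-free steps:
g = lambda x: x * x
slope = divided_difference(g, [1.0, 3.0])  # (9 - 1) / (3 - 1) = 4.0
```

The first-order case is exactly the finite-difference slope that replaces \(f'\) in Steffensen-type methods such as (3.1) and (3.4).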
4. Main Contributions
As usual when building high-order iterations, we must consider a multipoint cycle. In order to contribute, we pay heed to the three-step scheme (4.1), in which the first step is Steffensen's method, while the second and the third steps are Newton's iterations. This procedure is completely inefficient, since it possesses \(8^{1/6} = 2^{1/2} \approx 1.414\) as its efficiency index, which is the same as Steffensen's and Newton's.
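The cycle just described, one Steffensen step followed by two Newton corrections, can be sketched as follows (a minimal illustration with our own names; it shows why the order is \(2 \times 2 \times 2 = 8\) but the cost is six evaluations):

```python
def composed_step(f, df, x):
    """One cycle of structure (4.1): a Steffensen step followed by two
    Newton steps. Order eight, but six evaluations per cycle (two of f
    in the first step, then f and f' in each Newton step), so the
    efficiency index is only 8**(1/6) = 2**(1/2)."""
    fx = f(x)
    y = x - fx * fx / (f(x + fx) - fx)   # Steffensen: 2 evaluations of f
    z = y - f(y) / df(y)                 # Newton: f and f'
    return z - f(z) / df(z)              # Newton: f and f'

# Example: three cycles on x**2 - 2 from x0 = 1.5.
x = 1.5
for _ in range(3):
    x = composed_step(lambda t: t * t - 2.0, lambda t: 2.0 * t, x)
```

The classes derived below keep the eighth order of this cycle while cutting the cost from six evaluations to four, all of them evaluations of \(f\) alone.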
To annihilate the derivative evaluations of structure (4.1) and also keep the order at eight, we must reconstruct the scheme so that the first two steps provide the optimal order four using three evaluations, and so that eighth-order convergence is subsequently attained by consuming four function evaluations in total. Toward this end, we make use of the weight function approach in (4.2), wherein seven real-valued weight functions, evaluated at the indicated arguments without the index \(n\), should be chosen such that the order of convergence arrives at the optimal level eight. Theorem 4.1 indicates the way of selecting the weight functions in order to reach the optimal efficiency index by using the smallest possible number of function evaluations.
Theorem 4.1. Let \(\alpha\) be a simple root of the nonlinear equation \(f(x) = 0\) in the domain \(D\), and assume that \(f\) is sufficiently smooth in the neighborhood of the root. Then the derivative-free iterative class without memory defined by (4.2) is of optimal order eight when the weight functions satisfy (4.3).
Proof. Using Taylor series and symbolic computations, we can determine the asymptotic error constant of the three-step uniparametric class (4.2)-(4.3). Let \(e_n = x_n - \alpha\) be the error in the \(n\)th iterate and set \(c_k = f^{(k)}(\alpha)/(k!\, f'(\alpha))\) for all \(k \geq 2\). We now expand \(f(x_n)\) around the simple zero \(\alpha\) and obtain (4.4). By considering (4.4) and the first step of (4.2), we attain (4.5). In the same vein, by considering (4.3) and (4.5), we obtain (4.6) for the second step, together with the corresponding expansions of the remaining quantities. Using symbolic computation in the last step of (4.2) and considering (4.3) and (4.6), we obtain a further expansion, and using (4.3) once more, another. Finally, Taylor series expansion around the simple root in the last iterate, by using (4.3) and the above relations (values of higher-order derivatives of the weight functions not explicitly constrained in (4.9) can be arbitrary at the point 0), results in the error equation (4.9), which shows that the three-step without-memory class (4.2)-(4.3) reaches order of convergence eight by using only four pieces of information. This completes the proof.
A simple computational example from the class (4.2)-(4.3) is the method (4.10), whose error equation is given next.
Remark 4.2. Although the structure of (4.10) is complicated, it can be coded easily, because the repeated factors need to be computed only once per computing step, and their values can then be used throughout; we thus save some operational calculations in implementing (4.10).
The contributed class (4.2)-(4.3) uses a forward finite difference approximation in its first step. If we use a backward finite difference estimation instead, then another novel three-step eighth-order derivative-free iteration without memory can be constructed. For this reason, our second general derivative-free class is given as (4.13), wherein six real-valued weight functions, evaluated at the indicated arguments without the index \(n\), must satisfy (4.14).
The scheme (4.13)-(4.14) defines a new family of multipoint methods. To obtain a solution of (1.1) by the new derivative-free class, we must choose a particular initial guess \(x_0\), ideally close to the simple root. In numerical analysis, it is very useful and essential to know the behavior of an approximate method. Therefore, we will prove the order of convergence of the new eighth-order class.
Theorem 4.3. Let \(\alpha\) be a simple root of the nonlinear equation \(f(x) = 0\) in the domain \(D\), and assume that \(f\) is sufficiently smooth in the neighborhood of the root. Then the derivative-free iterative class defined by (4.13)-(4.14) is of optimal order eight.
Proof. Applying Taylor series and symbolic computations, we can determine the asymptotic error constant of the three-step uniparametric family (4.13)-(4.14). Let \(e_n = x_n - \alpha\) be the error in the \(n\)th iterate and set \(c_k = f^{(k)}(\alpha)/(k!\, f'(\alpha))\) for all \(k \geq 2\). The procedure of this proof is similar to that of Theorem 4.1; thus, we give below only the final error equation of (4.13)-(4.14). This shows that the contributed class (4.13)-(4.14) achieves the optimal order eight by using only four pieces of information. The proof is complete.
A very efficient example from our novel class (4.13)-(4.14) is the following iteration without memory, (4.16), together with its very simple error equation. According to the assumptions made in the proof of Theorem 4.1, such a factor should mostly appear in the denominator of the asymptotic error constant of optimal eighth-order methods, and when the error equation takes one of these simple forms, the derived methods will mostly be finer than the other existing forms of optimal eighth-order methods.
Remark 4.4. Each method from the derivative-free classes proposed in this paper reaches the efficiency index \(8^{1/4} \approx 1.682\), which is greater than \(4^{1/3} \approx 1.587\) of the optimal fourth-order techniques and \(2^{1/2} \approx 1.414\) of the optimal one-point methods without memory.
Some other examples from the class (4.13)-(4.14) can easily be constructed by changing the involved weight functions in (4.14); two further members, each with its corresponding error equation, are given next.
5. Numerical Examples
We check the effectiveness of the novel derivative-free classes of iterative methods (4.2)-(4.3) and (4.13)-(4.14) in this section. To do so, we choose (4.10) as the representative of the optimal class (4.2)-(4.3) and (4.16) as the representative of the novel three-step class (4.13)-(4.14). We have compared (4.10) and (4.16) with Steffensen's method (3.1), the fourth-order family of Kung and Traub (3.3), the eighth-order technique of Kung and Traub (3.2), and the optimal eighth-order family of Zheng et al. (3.4), using the examples listed in Table 1.

The results of the comparisons are given in Table 2 in terms of the number of significant digits for each test function after the specified number of iterations; an entry shows, for example, that the absolute value of the given nonlinear function, after three iterations, is zero up to 1190 decimal places. For the numerical comparisons, a fixed stopping criterion on the residual was used. In Table 2, IT and TNE stand for the number of iterations and the total number of (function) evaluations. We used four different initial guesses to analyze the behavior of the methods fully. In Table 2, F stands for failure, for instance, when the iteration for the particular initial guess diverges, finds another root, or needs more numerical computing steps to find an acceptable solution of the nonlinear equation.
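The "number of significant digits" measure used in Table 2 can be reproduced in spirit with multiprecision arithmetic. A sketch using Python's standard decimal module and Steffensen's method (3.1) on the illustrative equation \(f(x) = x^2 - 2\) (not one of the paper's test functions; an eighth-order method would reach far more digits per iteration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 200  # work with 200 significant digits

def f(x):
    return x * x - Decimal(2)

# Steffensen's method (3.1) carried out in high precision.
x = Decimal("1.5")
for _ in range(7):
    fx = f(x)
    if fx == 0:
        break
    x = x - fx * fx / (f(x + fx) - fx)

# Number of zero decimal places of |f(x)|, i.e. roughly -log10 |f(x)|;
# Decimal.adjusted() gives the exponent of the most significant digit.
digits = -f(x).copy_abs().adjusted() - 1
print(digits)
```

With quadratic convergence the digit count roughly doubles per iteration, which is why the high-order methods in Table 2 reach four-digit accuracies in only three iterations.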
