Abstract
Steffensen-type methods are practical for solving nonlinear equations because such schemes need no derivative evaluation per iteration. This work contributes two new multistep classes of Steffensen-type methods for finding solutions of the nonlinear equation $f(x)=0$. The new techniques can be regarded as generalizations of the one-step method of Steffensen. Theoretical proofs of the main theorems are furnished to establish the eighth-order convergence. Per computing step, the derived methods require only four function evaluations. Experimental results are also given to support the underlying theory of this paper and to substantiate the efficiency of the developed classes.
1. Introduction
Rapidly and accurately finding the zeros of nonlinear functions is a common problem in various areas of numerical analysis. This problem has fascinated mathematicians for centuries, and the literature is full of ingenious methods and discussions of their merits and shortcomings [1–13].
Over the last decades, a large number of different methods have been developed for finding solutions of nonlinear equations, either iteratively or simultaneously. Among the most effective are the pioneering work of Kung and Traub [14] and the procedures for building high-order methods discussed in [15–18].
In this paper, we consider the problem of finding a numerical procedure to approximate a simple root $\alpha$ of the nonlinear equation
$$f(x)=0, \tag{1.1}$$
by derivative-free high-order iterations without memory. We note that higher orders of convergence are only possible through multistep methods. In what follows, we first give two definitions concerning iterative processes.
Definition 1.1. Let the sequence $\{x_n\}$ tend to $\alpha$ such that
$$\lim_{n\to\infty}\frac{x_{n+1}-\alpha}{(x_n-\alpha)^p}=C\neq 0.$$
Then the order of convergence of the sequence is $p$, and $C$ is known as the asymptotic error constant. If $p=1$, $p=2$, or $p=3$, the sequence is said to converge linearly, quadratically, or cubically, respectively.
Definition 1.2. The efficiency of a method [2] is measured by the concept of the efficiency index, defined by
$$E=p^{1/n}, \tag{1.3}$$
where $p$ is the order of the method and $n$ is the whole number of (functional) evaluations per iteration. Note that in (1.3) we consider all function and derivative evaluations to have the same computational cost. Moreover, we recall the Kung-Traub conjecture [14]: an iterative scheme without memory for solving nonlinear equations using $n$ evaluations per iteration has the optimal rate (speed) of convergence $2^{n-1}$ and the optimal efficiency index $2^{(n-1)/n}$.
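As a small illustration (not from the paper), the efficiency index (1.3) can be tabulated directly; the helper name `efficiency_index` is ours:

```python
def efficiency_index(p, n):
    """Efficiency index E = p**(1/n): p = convergence order,
    n = (functional) evaluations per iteration."""
    return p ** (1.0 / n)

# Newton/Steffensen: order 2 with two evaluations -> 2**(1/2)
print(round(efficiency_index(2, 2), 3))  # 1.414
# Optimal three-step schemes: order 8 with four evaluations -> 8**(1/4)
print(round(efficiency_index(8, 4), 3))  # 1.682
```

These two values are exactly the ones the Kung-Traub conjecture predicts for optimal one-step and three-step methods without memory.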
Newton's method is the classical scheme for solving (1.1). It is an optimal one-step, one-point iteration without memory and is in fact the essence of most improvements of root solvers. We recall that iterations divide into two main categories: derivative-involved methods, in which at least one derivative evaluation is needed to proceed (see, e.g., [19, 20]), and derivative-free methods, which require no direct derivative evaluation per iteration [21–23] and are therefore more economical in this respect. Clearly, the real problems arising in science and engineering are not normally of a type that lets users calculate first or second derivatives: they involve hard structures that make finding the derivatives difficult or time consuming. For this reason, derivative-free methods have come to attention as a way to solve the related problems as easily as possible. For further reading, one may refer to [24–28].
This paper addresses the matter of derivative evaluation for nonlinear solvers by giving two general classes of three-step eighth-order methods, which possess the optimal efficiency index and optimal order, perform accurately on numerical examples, and are totally free of derivative calculation per full cycle. After this short introduction, the rest of the paper is organized as follows. Section 2 briefly describes one of the most practical uses of root solvers in mechanical engineering, to illustrate their applicability. Section 3 reviews some of the available derivative-free root-finding schemes. Section 4 gives the heart of this paper by contributing two novel classes of three-step derivative-free iterations without memory; theoretical proofs of the main theorems are given therein to show that each member of our classes attains the highest possible efficiency index using the smallest possible number of evaluations. Section 5 presents a large number of numerical experiments that reveal the accuracy of the methods obtained from the proposed classes. Finally, Section 6 contains a short conclusion of this study.
2. Describing an Application of Nonlinear Equations Solvers
In mechanical engineering, a trunnion (a cylindrical protrusion used as a mounting and/or pivoting point; in a cannon, the trunnions are two projections, cast just forward of the center of mass, that fix the cannon to a two-wheeled movable gun carriage) has to be cooled before it is shrink fitted into a steel hub.
The equation that gives the temperature to which the trunnion has to be cooled to attain the desired contraction is (2.1).
Clearly, (2.1) can be solved using nonlinear equation solvers. This is one application of nonlinear equation solving by iterative processes in the scalar case. Now consider that the nonlinear scalar equations obtained in other problems are complicated, so that their first and second derivatives are not at hand, while better accuracy at low elapsed time is needed. Consequently, a close look should be paid to derivative-free high-order multistep iterations. Also note that such iterative schemes can be extended to solve systems of nonlinear equations, which have many applications in engineering problems; see [29] for more.
3. Available Derivative-Free Methods in the Literature
For the first time, Steffensen in [30] coined the following derivative-free form of Newton's method:
$$x_{n+1}=x_n-\frac{f(x_n)^2}{f\left(x_n+f(x_n)\right)-f(x_n)},\qquad n=0,1,2,\ldots, \tag{3.1}$$
which possesses the same rate of convergence and efficiency index as Newton's method.
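A minimal sketch of Steffensen's iteration (3.1) in Python (the function name and stopping rule are our own choices, not from the paper):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free scheme (3.1):
    x_{n+1} = x_n - f(x_n)**2 / (f(x_n + f(x_n)) - f(x_n))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # residual-based stopping rule
            return x
        denom = f(x + fx) - fx     # finite-difference surrogate for f'(x)*f(x)
        if denom == 0.0:
            raise ZeroDivisionError("degenerate finite-difference step")
        x = x - fx * fx / denom
    return x

root = steffensen(lambda x: x ** 2 - 2.0, 1.5)
print(root)  # close to sqrt(2) = 1.4142135623...
```

Like Newton's method, it converges quadratically, but it uses two evaluations of $f$ instead of one of $f$ and one of $f'$.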
We recall the well-known family of derivative-free three-step methods given by Kung and Traub [14], scheme (3.2), with a nonzero free parameter. This family of one-parameter methods possesses eighth-order convergence utilizing four pieces of information, namely four function evaluations per iteration; therefore its classical efficiency index is $8^{1/4}\approx 1.682$. Note that the first two steps of (3.2) form an optimal fourth-order derivative-free uniparametric family, which can also be given independently as (3.3).
Another uniparametric three-step derivative-free iteration, scheme (3.4), was presented recently by Zheng et al. in [21]; in its structure, $f[x,y]$ denotes the divided difference of $f$. We recall that divided differences can be defined recursively via
$$f[x_i]=f(x_i),\qquad f[x_i,x_{i+1}]=\frac{f[x_{i+1}]-f[x_i]}{x_{i+1}-x_i},$$
and, for $k\ge 2$, via
$$f[x_i,\ldots,x_{i+k}]=\frac{f[x_{i+1},\ldots,x_{i+k}]-f[x_i,\ldots,x_{i+k-1}]}{x_{i+k}-x_i}.$$
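The divided-difference recursion above can be evaluated in place over a table of lower-order differences; this short sketch (our own helper, not from the paper) computes the highest-order difference:

```python
def divided_difference(xs, fs):
    """Highest-order divided difference f[x_0, ..., x_k] computed via
    the standard recursion, overwriting a table of lower-order values."""
    table = list(fs)                     # order-0 differences f[x_i] = f(x_i)
    for order in range(1, len(xs)):
        for i in range(len(xs) - order):
            # table[i+1] still holds the (order-1)-difference at this point
            table[i] = (table[i + 1] - table[i]) / (xs[i + order] - xs[i])
    return table[0]

# For f(x) = x**2, the second-order divided difference equals the
# leading coefficient of the quadratic, i.e., 1.
print(divided_difference([0.0, 1.0, 3.0], [0.0, 1.0, 9.0]))  # 1.0
```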
4. Main Contributions
As usual in building high-order iterations, we must consider a multipoint cycle. To begin the contribution, we pay heed to the following three-step scheme (4.1), in which the first step is Steffensen's, while the second and third steps are Newton's iterations. This procedure is completely inefficient: with six evaluations per cycle for order eight, it possesses the efficiency index $8^{1/6}=\sqrt{2}\approx 1.414$, the same as Steffensen's and Newton's:
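This naive composition can be sketched as follows (a sketch only; it assumes the true derivative $f'$ is available, which is exactly what the paper seeks to avoid). Counting $f(x_n)$, $f(x_n+f(x_n))$, $f(y_n)$, $f'(y_n)$, $f(z_n)$, and $f'(z_n)$ gives the six evaluations per cycle mentioned above:

```python
def naive_three_step(f, df, x):
    """One cycle of the inefficient scheme: a Steffensen step followed
    by two Newton steps (six evaluations in total per cycle)."""
    fx = f(x)
    y = x - fx * fx / (f(x + fx) - fx)   # Steffensen step (order 2)
    z = y - f(y) / df(y)                 # Newton step (order doubles to 4)
    return z - f(z) / df(z)              # Newton step (order doubles to 8)

x = 1.5
for _ in range(2):
    x = naive_three_step(lambda t: t ** 2 - 2.0, lambda t: 2.0 * t, x)
print(x)  # converges rapidly toward sqrt(2)
```

The goal of the weight-function construction below is to keep this eighth order while cutting the cost to four function evaluations and no derivatives.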
To annihilate the derivative evaluations of the structure (4.1) and also keep the order at eight, we must reconstruct the scheme so that the first two steps provide the optimal order four using three evaluations, and eighth-order convergence is subsequently reached by consuming four function evaluations in total. Toward this end, we make use of the weight function approach, replacing the derivatives by suitable approximations, which yields the class (4.2), wherein the seven real-valued weight functions, evaluated without the index $n$, should be chosen such that the order of convergence arrives at the optimal level eight. Theorem 4.1 indicates how to select the weight functions in order to reach the optimal efficiency index using the smallest possible number of function evaluations.
Theorem 4.1. Let $\alpha\in D$ be a simple root of the nonlinear equation $f(x)=0$ in the domain $D$, and assume that $f$ is sufficiently smooth in a neighborhood of the root. Then the derivative-free iterative class without memory defined by (4.2) is of optimal order eight when the weight functions satisfy the conditions (4.3).
Proof. Using Taylor series and symbolic computation, we determine the asymptotic error constant of the three-step uniparametric class (4.2)-(4.3). Let $e_n=x_n-\alpha$ be the error in the $n$th iterate, and set $c_j=f^{(j)}(\alpha)/(j!\,f'(\alpha))$ for all $j\ge 2$. Expanding $f$ around the simple zero $\alpha$ gives (4.4). By considering (4.4) and the first step of (4.2), we attain (4.5). In the same vein, by considering (4.3) and (4.5), we obtain (4.6) for the second step. Using symbolic computation in the last step of (4.2) and considering (4.3) and (4.6), we have (4.7), and furthermore, using (4.3), we have (4.8). Finally, a Taylor expansion around the simple root in the last iterate, using (4.3) and the above relations (the values of higher-order derivatives of the weight functions, not explicitly given in (4.9), can be arbitrary at the point 0), results in the error equation (4.9), which shows that the three-step class without memory (4.2)-(4.3) reaches order of convergence eight by using only four pieces of information. This completes the proof.
A simple computational example from the class (4.2)-(4.3) is the scheme (4.10), with weight functions chosen accordingly; its error equation comes next.
Remark 4.2. Although the structure of (4.10) looks involved, it can be coded easily, because the needed factors should be computed only once per computing step and their values reused throughout; thus, only a few operational calculations are required in implementing (4.10).
The contributed class (4.2)-(4.3) uses a forward finite difference approximation in its first step. If we use a backward finite difference estimation instead, another novel three-step eighth-order derivative-free iteration without memory can be constructed. Accordingly, our second general derivative-free class is given by (4.13), wherein the six real-valued weight functions, evaluated without the index $n$, must satisfy the conditions (4.14).
The scheme (4.13)-(4.14) defines a new family of multipoint methods. To obtain a solution of (1.1) by the new derivative-free class, we must choose a particular initial guess $x_0$, ideally close to the simple root. In numerical analysis, it is very useful and essential to know the behavior of an approximate method; therefore, we now prove the order of convergence of the new eighth-order class.
Theorem 4.3. Let $\alpha\in D$ be a simple root of the nonlinear equation $f(x)=0$ in the domain $D$, and assume that $f$ is sufficiently smooth in a neighborhood of the root. Then the derivative-free iterative class defined by (4.13)-(4.14) is of optimal order eight.
Proof. Applying Taylor series and symbolic computation, we determine the asymptotic error constant of the three-step uniparametric family (4.13)-(4.14). Let $e_n=x_n-\alpha$ be the error in the $n$th iterate and adopt the notation of the proof of Theorem 4.1. The procedure is similar to that proof; thus, we give below only the final error equation of (4.13)-(4.14). This shows that the contributed class (4.13)-(4.14) achieves the optimal order eight by using only four pieces of information. The proof is complete.
A very efficient example from our novel class (4.13)-(4.14) is the following iteration without memory, (4.16), and its very simple error equation comes next. Generally, and according to the assumptions made in the proof of Theorem 4.1, a denominator appears in the asymptotic error constant of optimal eighth-order methods, and if one obtains a large-magnitude form of this denominator, then the derived methods will mostly be finer than the other existing forms of optimal eighth-order methods.
Remark 4.4. Each method from the derivative-free classes proposed in this paper reaches the efficiency index $8^{1/4}\approx 1.682$, which is greater than the $4^{1/3}\approx 1.587$ of optimal fourth-order techniques and the $2^{1/2}\approx 1.414$ of optimal one-point methods without memory.
Some other examples from the class (4.13)-(4.14) can easily be constructed by changing the weight functions involved in (4.14). One further scheme, together with its weight functions and error equation, is given below; another such construction and its error equation follow as well.
5. Numerical Examples
We check the effectiveness of the novel derivative-free classes of iterative methods (4.2)-(4.3) and (4.13)-(4.14) in this section. In order to do this, we choose (4.10) as the representative of the optimal class (4.2)-(4.3) and (4.16) as the representative of the novel three-step class (4.13)-(4.14). We have compared (4.10) and (4.16) with Steffensen's method (3.1), the fourth-order family of Kung and Traub (3.3), the eighth-order technique of Kung and Traub (3.2), and the optimal eighth-order family of Zheng et al. (3.4), each taken with a fixed value of its free parameter, using the examples listed in Table 1.
The results of the comparisons are given in Table 2 in terms of the number of significant digits for each test function after the specified number of iterations; for example, an entry of 1190 after three iterations shows that the absolute value of the given nonlinear function is zero up to 1190 decimal places. For the numerical comparisons, the stopping criterion is a prescribed tolerance on the residual. In Table 2, IT and TNE stand for the number of iterations and the total number of (function) evaluations. We used four different initial guesses to analyze the behavior of the methods thoroughly. In Table 2, F stands for failure, for instance, when the iteration for a particular initial guess diverges, finds another root, or needs many more computing steps to find an acceptable solution of the nonlinear equation.
It can be observed from Table 2 that, in most cases, our contributed methods from the classes of derivative-free iterations without memory are superior in solving nonlinear equations. Numerical computations were carried out using variable precision arithmetic in MATLAB 7.6.
We remark that eighth-order iterative methods such as (4.10) and (4.16) improve the number of correct digits in the convergence phase for the simple roots of (1.1) by a factor of eight per iteration; to show this, and also the asymptotic error constant, we used 1200-digit floating point arithmetic.
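The paper's multiprecision methodology can be mimicked in miniature with Python's standard `decimal` module: run a method at high precision, record the errors, and estimate the computational order of convergence. The sketch below (our own, using Steffensen's method (3.1) rather than the eighth-order schemes, whose formulas are not reproduced here) recovers order two:

```python
from decimal import Decimal, getcontext

getcontext().prec = 200  # emulate multiprecision arithmetic

def f(x):
    return x * x - Decimal(2)

root = Decimal(2).sqrt()
x = Decimal("1.5")
errors = []
for _ in range(5):
    fx = f(x)
    x = x - fx * fx / (f(x + fx) - fx)   # Steffensen's method (3.1)
    errors.append(abs(x - root))

# Computational order of convergence:
# COC ~ ln(e_{n+1} / e_n) / ln(e_n / e_{n-1})
l1, l2, l3 = (float(e.ln()) for e in errors[1:4])
coc = (l3 - l2) / (l2 - l1)
print(round(coc, 2))  # approximately 2 for this quadratic scheme
```

Applying the same estimator to an eighth-order method would show the error exponent multiplying by roughly eight per iteration, as claimed above.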
Note that experimental results show that the smaller the magnitude of the free parameter, the more reliable the output results of solving nonlinear equations. As a matter of fact, the methods contributed from the three-step derivative-free classes (4.2)-(4.3) and (4.13)-(4.14) can give even better results than those illustrated in Table 2 by choosing very small values of the free parameter; doing so narrows the error equation. Also note that if we approximate the free parameter by an iteration through the data of the first step per cycle, then with-memory iterations from the suggested classes are obtained; this is the topic of forthcoming papers in this field.
A simple glance at Table 2 reveals that (4.16) mostly outperforms its competitors. The reason is that the finer the error equation, the better the numerical results attained. The error equation corresponding to (4.16), namely (4.18), is very small; in fact, its denominator contains a large factor, which clearly explains these refinements. In general, and according to the assumptions made in the proof of Theorem 4.1, such a denominator appears in the asymptotic error constant of optimal eighth-order methods, and if one obtains a large-magnitude form of it, then the derived methods will mostly be finer than the other existing forms of optimal eighth-order methods.
6. Conclusions
Multipoint methods without memory are methods that use new information at a number of points. An extensive literature on multipoint Newton-like methods for functions of one variable and their convergence analysis can be found in Traub [2].
In this paper, two novel classes of iterations without memory were discussed in full. We have shown that each member of our contributions reaches the optimal order of convergence eight by consuming only four function evaluations per full cycle; thus, our classes support the Kung-Traub conjecture on building optimal iterations without memory. Our classes can be regarded as generalizations of the well-cited derivative-free method of Steffensen. The numerical examples given in Section 5 were completely in harmony with the theory developed in this paper, and accordingly, the contributions hit the target.
Acknowledgment
The author gratefully acknowledges the insightful comments of the reviewer, which have led to the improvement of this paper.