Research Article  Open Access
F. Soleymani, S. Shateyi, "Two Optimal Eighth-Order Derivative-Free Classes of Iterative Methods", Abstract and Applied Analysis, vol. 2012, Article ID 318165, 14 pages, 2012. https://doi.org/10.1155/2012/318165
Two Optimal Eighth-Order Derivative-Free Classes of Iterative Methods
Abstract
Optimization problems defined by (objective) functions whose derivatives are unavailable or costly to evaluate are emerging in computational science. Accordingly, the main aim of this paper is to attain as high a local convergence order as possible with a fixed number of (functional) evaluations, so as to find efficient solvers for one-variable nonlinear equations, while keeping the procedure entirely free of derivatives. To this end, we build on the fourth-order uniparametric family of Kung and Traub to suggest and demonstrate two classes of three-step derivative-free methods using only four pieces of information per full iteration, reaching the optimal order eight and the optimal efficiency index 1.682. Moreover, a large number of numerical tests confirm the applicability and efficiency of the methods produced from the new classes.
1. Introduction
This paper focuses on finding approximate solutions of nonlinear, sufficiently smooth scalar equations by derivative-free methods. Techniques such as the false position method for the root-finding of a nonlinear equation require bracketing of the zero by two guesses. Such schemes are called bracketing methods. They are almost always convergent, since they are based on shrinking the interval between the two guesses so as to zero in on the root of the equation. In Newton's method, the zero is not bracketed; in fact, only one initial guess of the solution is needed to start the iterative process that finds the zero. The method hence falls in the category of open methods.
Convergence in open methods is not guaranteed, but when such a method does converge, it does so much faster than the bracketing methods [1]. Although Newton's iteration has been widely discussed and improved in the literature, see for example [2, 3], one of its main drawbacks, the need to evaluate the first derivative, occasionally restricts the application of this method or its variants. On the other hand, when improving iterative methods without memory for solving one-variable nonlinear equations, the conjecture of Kung and Traub [4] is automatically taken into consideration. It should be remarked that, according to the unproved conjecture of Kung and Traub [4], a without-memory method using n (functional) evaluations per iteration has convergence order at most 2^(n-1), so the index of efficiency will not pass the maximum level 2^((n-1)/n), wherein n is the total number of (functional) evaluations.
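The conjectured bound can be made concrete with a short numerical sketch (the helper names below are ours, not from the paper): for n functional evaluations per iteration, the conjectured optimal order is 2^(n-1), and the efficiency index is the order raised to the power 1/n.

```python
def optimal_order(n_evals: int) -> int:
    """Kung-Traub conjecture: maximal order for n functional evaluations."""
    return 2 ** (n_evals - 1)

def efficiency_index(order: float, n_evals: int) -> float:
    """Ostrowski efficiency index p**(1/n) for order p and n evaluations."""
    return order ** (1.0 / n_evals)

# Optimal order and efficiency index for n = 2..5 evaluations per iteration.
for n in range(2, 6):
    p = optimal_order(n)
    print(n, p, round(efficiency_index(p, n), 3))
```

For n = 4 this reproduces the order eight and the index 8^(1/4) ≈ 1.682 claimed for the new classes.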
Besides involving the first derivative, Newton's iteration uses the second derivative of the function when it is applied to optimization problems to find local minima. Its use is therefore limited, especially when the first and second derivatives of the function are expensive to evaluate. Consequently, derivative-free algorithms come to attention.
For the first time, Steffensen in [5] gave the following derivative-free form of Newton's iteration, which possesses the same rate of convergence and efficiency index as Newton's: x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n)), n = 0, 1, 2, .... (1.1)
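A minimal sketch of Steffensen's scheme follows; the test function f(x) = x^2 - 2 and the starting guess are illustrative choices of ours, not the paper's test problems.

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free iteration:
    x_{n+1} = x_n - f(x_n)**2 / (f(x_n + f(x_n)) - f(x_n))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx  # equals fx * f[x, x + fx], a divided difference
        if denom == 0:
            raise ZeroDivisionError("degenerate divided difference")
        x = x - fx * fx / denom
    return x

# Illustrative run: the positive root of x**2 - 2.
root = steffensen(lambda x: x * x - 2.0, 1.5)
print(root)
```

Like Newton's method, this uses two functional evaluations per iteration and converges quadratically, but no derivative is required.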
In this work, we suggest novel classes of three-step four-point iterative methods that are without memory, derivative-free, optimal, and therefore well suited to hard problems. The contents of the paper unfold as follows. Section 2 presents our main contribution as some generalizations of the fourth-order uniparametric family of Kung and Traub [4]. Subsequently, Section 3 gives a very short discussion of the derivative-free methods available in the literature. Section 4 discusses the implementation of the new methods produced from our classes on a large number of numerical examples. Finally, a short conclusion is given in Section 5.
2. Main Outcome
To construct a class of derivative-free methods with a high efficiency index, we take the optimal two-step fourth-order uniparametric family of Kung and Traub [4] as the first two steps of a three-step cycle in which Newton's iteration is performed in the last step, which yields the scheme (2.1).
This scheme includes four evaluations of the function and one of its first-order derivative to reach order eight, with 1.516 as its efficiency index. To improve the efficiency index, we first use the same approximation as in the second step of (2.1) to annihilate the derivative evaluation, and then take advantage of the weight function approach; this yields the class (2.2), in which four real-valued weight functions, subject to the conditions (2.3), force the order to reach the maximum level eight with the fixed number of evaluations per cycle. Theorem 2.1 shows that (2.2) arrives at local eighth-order convergence using only four function evaluations per full cycle. This reveals that any method from our proposed class possesses 1.682 as its efficiency index, which is optimal according to the Kung and Traub conjecture, while it is fully free of derivative evaluations.
Theorem 2.1. Let α be a simple zero of a sufficiently differentiable function f, and suppose f'(α) ≠ 0. If the initial guess x0 is sufficiently close to α, then (i) the local order of convergence of the class of methods without memory defined in (2.2) is eight, when the weight functions satisfy the conditions (2.3), and (ii) the solution obeys the error equation (2.4).
Proof. We expand every term of (2.2) about the simple root α at the n-th iterate, taking into account f(α) = 0 and f'(α) ≠ 0, and writing the error as e_n = x_n - α. Expanding the first two steps accordingly, and similarly the remaining function values, and then using symbolic computation in the last step of (2.2), we attain the expansion (2.7). Furthermore, by considering (2.7) and the real-valued weight functions as in (2.3), we obtain the error equation (2.4). This shows that the proposed class of derivative-free methods (2.2)-(2.3) reaches the optimal eighth-order convergence using only four function evaluations per full iteration. This ends the proof.
Now, any optimal three-step four-point derivative-free method without memory can be produced by using (2.2)-(2.3). As an instance, with a particular choice of the weight functions we obtain the method (2.8), whose error equation reads (2.9).
We should recall here that, in each computing step of any method from the new class, the required function values are computed only once, and these values are then reused throughout the rest of the cycle wherever required.
Many nonlinear functions arise from solving complex environmental engineering problems, where the objective depends on the output of a numerical simulation of a physical process. These simulators are expensive to evaluate because they numerically solve systems of partial differential equations governing the underlying physical phenomena. Function evaluation thus remains the dominant expense in such optimization problems, since savings in computing time are often offset by demands for increased simulation accuracy. For these reasons, algorithms like (2.2)-(2.3), which need no derivative evaluations, have optimal order, and possess a high efficiency index, are important for hard problems.
Before proceeding to the next sections, we discuss another, similar class of derivative-free methods, obtainable by choosing a different approximation in the first step of (2.1) together with somewhat different weight functions in the last step. With these changes in (2.1), we obtain the derivative-free class without memory (2.10), whose order is shown in Theorem 2.2 to be eight for appropriately chosen weight functions. Note that the auxiliary quantities are defined as before, and four real-valued weight functions are again employed.
Theorem 2.2. Let α be a simple zero of a sufficiently differentiable function f. If the initial guess x0 is sufficiently close to α, then (i) the local order of convergence of the class of methods without memory defined in (2.10) is eight, when the weight functions satisfy the conditions (2.11), and (ii) the solution obeys the corresponding error equation.
Proof. The proof of this theorem is similar to the proof of Theorem 2.1. It is hence omitted.
Now, by using (2.10)-(2.11), we can obtain other efficient optimal eighth-order derivative-free methods without memory, each with its own error equation; two such examples from the new class of optimal iterations (2.10)-(2.11), together with their error equations, are produced in this way.
3. A Brief Look at the Literature
In this section, we briefly present some of the well-known high-order derivative-free techniques for finding the simple zeros of nonlinear equations, for the sake of comparison. Kung and Traub in [4] introduced the two-step iteration without memory (3.1), which in fact forms the first two steps of our novel class (2.2)-(2.3) in this paper. They moreover gave the four-point eighth-order iterative scheme (3.2). Soleymani in [6] suggested the three-step derivative-free seventh-order method (3.3). For further reading, one should consult the papers [7-10] and the references therein.
Remark 3.1. From a computational point of view, the efficiency index of our classes of derivative-free methods without memory, (2.2)-(2.3) and (2.10)-(2.11), is greater than the 1.414 of Newton's and Steffensen's methods, the 1.565 of the sixth-order derivative-free technique given in [11], the 1.587 of (3.1), and the 1.626 of (3.3), and it equals the 1.682 of the family (3.2).
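The indices quoted in the remark follow from the formula p^(1/n) for order p and n functional evaluations per iteration; the evaluation counts below are inferred from the quoted index values and can be checked directly:

```python
# (order p, evaluations n) inferred from the efficiency indices in Remark 3.1.
methods = {
    "Newton / Steffensen":             (2, 2),  # 2**(1/2)  ~ 1.414
    "sixth-order method of [11]":      (6, 4),  # 6**(1/4)  ~ 1.565
    "Kung-Traub family (3.1)":         (4, 3),  # 4**(1/3)  ~ 1.587
    "Soleymani (3.3)":                 (7, 4),  # 7**(1/4)  ~ 1.626
    "Kung-Traub family (3.2)":         (8, 4),  # 8**(1/4)  ~ 1.682
    "new classes (2.2)-(2.3), (2.10)-(2.11)": (8, 4),
}
for name, (p, n) in methods.items():
    print(f"{name}: {p ** (1 / n):.3f}")
```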
4. Numerical Experiments
We now check the effectiveness of the novel derivative-free classes of iterative methods. To do this, we choose (2.8) as the representative of our class (2.2)-(2.3). Note that the methods derived from the class (2.10)-(2.11) can also be used.
We have compared (2.8) with Steffensen's method (1.1), the fourth-order family of Kung and Traub (3.1) with a chosen value of its free parameter, the seventh-order technique of Soleymani (3.3), and the optimal eighth-order family of Kung and Traub (3.2) with a chosen value of its free parameter, using the examples listed in Table 1. The results of the comparisons are given in Table 2 in terms of the number of significant digits for each test function after the specified number of iterations; for example, an entry of 207 after nine iterations of (1.1) means that the absolute value of the given nonlinear function is zero up to 207 decimal places.
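Tables 1 and 2 are not reproduced here, but the style of the experiment can be sketched with high-precision decimal arithmetic; the test function x^2 - 2, the starting guess, and the iteration count below are illustrative stand-ins of ours, and the computational order of convergence is estimated from three consecutive errors.

```python
from decimal import Decimal, getcontext

getcontext().prec = 200  # work with 200 significant decimal digits

f = lambda x: x * x - Decimal(2)
root = Decimal(2).sqrt()

# Steffensen iterates from x0 = 1.5, recording the errors |x_n - sqrt(2)|.
x = Decimal("1.5")
errors = []
for _ in range(7):
    fx = f(x)
    x = x - fx * fx / (f(x + fx) - fx)
    errors.append(abs(x - root))

# Computational order of convergence from the last three errors:
# rho ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})
e1, e2, e3 = errors[-3], errors[-2], errors[-1]
coc = (e3 / e2).ln() / (e2 / e1).ln()
print(f"estimated order ~ {float(coc):.2f}")  # close to 2 for Steffensen
```

Counting the decimal places for which |f(x_n)| vanishes, as in Table 2, follows the same pattern: the digit count roughly doubles per iteration for a second-order method and multiplies by eight for the new eighth-order schemes.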

