Abstract

A class of derivative-free methods without memory for approximating a simple zero of a nonlinear equation is presented. The proposed class uses four function evaluations per iteration and has convergence order eight; it is therefore an optimal three-step scheme without memory in the sense of the Kung–Traub conjecture. Moreover, the proposed class contains an accelerator parameter with the property that it can increase the convergence rate from eight to twelve without any new function evaluations. Thus, we construct a method with memory that increases the efficiency index considerably, from $8^{1/4} \approx 1.682$ to $12^{1/4} \approx 1.861$. Illustrations are also included to support the underlying theory.

1. Introduction

The first attempts to classify iterative root-finding methods were made by Traub [1]. He divided iterative methods for finding zeros of a function into two sets: one-point and multipoint methods. As Traub investigated in his book [1], and as Kung and Traub mentioned in [2], the construction of one-point methods is not a fruitful task. In other words, to construct a one-point method with convergence order $p$, we need $p$ functional evaluations per iteration, while an optimal multipoint method without memory having convergence order $2^{n-1}$ requires only $n$ function evaluations.

To be more precise, constructing an optimal one-point method with eighth-order convergence needs eight function evaluations, whereas constructing an optimal three-point method without memory with the same convergence order requires only four functional evaluations, since $8 = 2^{4-1}$. As a result, many researchers have paid much attention to constructing optimal multipoint iterations without memory, based on the unproved conjecture due to Kung and Traub: any multipoint iteration without memory using $n$ function evaluations can reach at most the optimal order $2^{n-1}$.

This work pursues two main goals: first, developing a new optimal three-step derivative-free class of methods without memory, and second, extending the proposed class to methods with memory. In this way, it reaches convergence order 12 without any new functional evaluations. Because of the derivative-free property of the proposed class, it can be used for finding zeros not only of smooth functions but also of nonsmooth ones. Moreover, as pointed out above, we can reach convergence order 12 using the same functional evaluations as the three-step iterations without memory; this increase of the convergence order is the other contribution of this work.

Note that in most test problems for nonlinear equations, computing derivatives is an easy exercise. However, for some practical problems computing the derivative might be a cumbersome task, and we have to rely on methods free of derivatives. For further reading on this topic, one may refer to [3–6].

The paper is organized as follows. First, a new family without memory of optimal order eight, consuming four function evaluations per iteration, is proposed in Section 2 by means of two weight functions. A different way to compute the order of convergence for iterative methods that use divided differences instead of derivatives is presented in Section 3, where we derive a method with memory. The significant increase of convergence speed is achieved without additional function evaluations, which is the main advantage of such methods. Section 4 is devoted to numerical results for the methods with and without memory. Finally, concluding remarks are drawn in Section 5.

2. Construction of a New Three-Step Class

Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be a scalar function with $f(\alpha) = 0$ and $f'(\alpha) \neq 0$. In other words, $\alpha$ is a simple zero of $f$. In this section, we start with the three-step scheme

$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(y_n)}{f'(y_n)}, \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}, \tag{1}$$

where $n = 0, 1, 2, \ldots$ is the iteration index. The order of convergence of (1) is eight, but its computational efficiency is low, since it consumes six function evaluations per iteration. We substitute the derivatives in all three steps by suitable approximations that use the available data; thus we introduce the following approximations:

in the first, second, and third steps of (1), where $H$ and $G$ are weight functions. The following iterative family of three-step methods is obtained:
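Before stating the main theorem, a hedged Python sketch may help fix the structure: each derivative of (1) is replaced by a first-order divided difference over already-computed points, so one full cycle costs exactly four function evaluations. The auxiliary node $w_n = x_n + \gamma f(x_n)$ and the particular difference choices below are standard assumptions made for illustration only; the optimal eighth-order class (3) additionally involves the weight functions $H$ and $G$.

```python
# A generic three-step derivative-free skeleton (illustrative only): every
# derivative of (1) is replaced by a first-order divided difference, and the
# auxiliary point w = x + gamma*f(x) is an assumed, standard choice. The
# optimal class (3) further corrects the last two steps with the weight
# functions H and G; those corrections are omitted here.

def dd(a, fa, b, fb):
    """First divided difference f[a, b], reusing the known values fa, fb."""
    return (fa - fb) / (a - b)

def three_step_derivative_free(f, x, gamma=0.01):
    """One iteration; costs exactly four evaluations: f(x), f(w), f(y), f(z).

    Returns the new approximation and the intermediate points (w, y, z),
    which a with-memory variant can reuse (see Section 3)."""
    fx = f(x)
    w = x + gamma * fx                    # auxiliary node replacing f'(x)
    fw = f(w)
    y = x - fx / dd(x, fx, w, fw)         # Steffensen-like first step
    fy = f(y)
    z = y - fy / dd(y, fy, x, fx)         # secant-type second step
    fz = f(z)
    x_new = z - fz / dd(z, fz, y, fy)     # secant-type third step
    return x_new, (w, y, z)

# usage: one step toward the positive root of x**2 - 2
print(three_step_derivative_free(lambda t: t * t - 2, 1.5)[0])
```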

In the following theorem, we state suitable conditions for deriving a new optimal class without memory according to the Kung–Traub conjecture [2] (also known as the K-T hypothesis).

Theorem 1. Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be a scalar function which has a simple root $\alpha$ in the open interval $D$, and let the initial approximation $x_0$ be sufficiently close to this simple zero. Then, the three-step iterative method (3) is of eighth order under suitable conditions on the weight functions $H$ and $G$ and their first derivatives at zero, and it satisfies the following error equation:

Proof. By using the Taylor expansion of $f$ about $\alpha$ and taking into account that $f(\alpha) = 0$, we obtain the expansions (5)–(7) of $f$ at the points involved, where $c_k = f^{(k)}(\alpha)/(k!\, f'(\alpha))$ for $k = 2, 3, \ldots$ and $e_n = x_n - \alpha$. Note that we truncate the series at $O(e_n^9)$ in order to avoid writing further terms of the Taylor expansions produced by the symbolic computations. By using (5) and (7), we obtain (8). Dividing (5) by (8) gives the error of the first step, and, proceeding through the second and third substeps in the same manner, we attain, according to the above analysis, the general error equation (14), so that the proof of the theorem is finished.
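The symbolic computations referred to in the proof can be reproduced with a computer algebra system. As a minimal, hedged analogue, the following SymPy snippet derives the error equation of a single Newton step using the same expansion technique (series in $e_n$ with the coefficients $c_k$ and $f'(\alpha)$ normalized to 1); the computation for (3), truncated at $O(e_n^9)$, proceeds identically but with more terms.

```python
# Small-scale analogue of the proof's symbolic work: the error equation of
# one Newton step, e_new = c2*e^2 + O(e^3), derived with SymPy.
import sympy as sp

e, c2, c3 = sp.symbols('e c2 c3')

# f(alpha + e) with f(alpha) = 0 and f'(alpha) normalized to 1:
f  = e + c2 * e**2 + c3 * e**3
fp = sp.diff(f, e)                        # f'(alpha + e)

e_new = sp.series(e - f / fp, e, 0, 4).removeO()
print(sp.expand(e_new))                   # c2*e**2 + (2*c3 - 2*c2**2)*e**3
```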

We provide some specific weight functions that satisfy the conditions of Theorem 1 as follows:

We consider these weight functions in the methods without and with memory, (3) and (18), in the forthcoming sections.

There are some measures for comparing various iterative techniques. Traub [1] introduced the informational efficiency and the efficiency index, which can be expressed in terms of the order ($p$) of the method and the number of function (and derivative) evaluations ($n$) per iteration. In fact, the efficiency index (or computational efficiency) is given by $E = p^{1/n}$.

Clearly, the efficiency index of the proposed optimal class of methods is $8^{1/4} \approx 1.682$, which is optimal in the sense of the K-T hypothesis and is higher than that of one- or two-step methods without memory.
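As a quick numerical check of these indices, the snippet below evaluates $E = p^{1/n}$ for several representative cases (the first two entries, Newton's method and a generic optimal two-step method, appear only for comparison):

```python
# Efficiency indices E = p**(1/n): p = convergence order, n = function
# evaluations per iteration.
methods = {
    "Newton (p=2, n=2)":                  (2, 2),
    "optimal two-step (p=4, n=3)":        (4, 3),
    "proposed, without memory (p=8, n=4)": (8, 4),
    "proposed, with memory (p=12, n=4)":  (12, 4),
}
for name, (p, n) in methods.items():
    print(f"{name}: E = {p ** (1 / n):.3f}")
# -> 1.414, 1.587, 1.682, 1.861
```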

It is worth emphasizing that the maximal order of convergence is not the only goal in constructing root-finding methods and, consequently, not the ultimate measure of efficiency of the designed method. The complexity of the formulae involved, often called the combinatorial cost, is another important parameter which should be taken into account. Hence, we wish to construct a new method with memory possessing the high order 12 while requiring only 4 functional evaluations (just like (3)).

In the next section, we modify the proposed method and introduce a new one, using an accelerator parameter to increase the order of convergence significantly.

3. Construction of a Method with Memory

Error equation (14) indicates that the order of convergence of class (3) is equal to eight. This section is concerned with extracting an efficient method with memory from (3), since its error equation contains the parameter $\gamma$, which can be approximated in such a way that the local order of convergence increases.

We set $\gamma = \gamma_n$ as the iteration proceeds, using the formula $\gamma_n = -1/\overline{f'(\alpha)}$ for $n = 1, 2, \ldots$, where $\overline{f'(\alpha)}$ is an approximation of $f'(\alpha)$. We obtain a method via the following forms of $\overline{f'(\alpha)}$:

The key idea that provides the order acceleration lies in a special form of the error relation and a convenient choice of a free parameter. We define a self-accelerating parameter, which is calculated during the iterative process using Newton’s interpolating polynomial.

Hence, we consider Newton's interpolation as the method for approximating $f'(\alpha)$, taking $\gamma_n = -1/N_4'(x_n)$, where $N_4(t)$ is Newton's interpolating polynomial of fourth degree, set through the five available approximations $x_n, z_{n-1}, y_{n-1}, w_{n-1}, x_{n-1}$, as follows:
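A hedged Python sketch of this accelerating step follows; the helper builds the divided-difference table of the interpolating polynomial and differentiates its Newton form at $x_n$. Re-evaluating $f$ at the nodes (done below for brevity) is avoided in practice by reusing the stored function values from the previous iteration, which is what keeps the accelerator free of new function evaluations.

```python
# Sketch of the self-accelerating step: gamma_n = -1 / N4'(x_n), where N4
# interpolates f at the five available approximations.

def newton_poly_derivative(nodes, values, t):
    """Derivative at t of the Newton interpolating polynomial through
    (nodes[i], values[i]), built from a divided-difference table."""
    k = len(nodes)
    table = [list(values)]                # table[j][i] = f[x_i, ..., x_{i+j}]
    for j in range(1, k):
        table.append([(table[j - 1][i + 1] - table[j - 1][i])
                      / (nodes[i + j] - nodes[i]) for i in range(k - j)])
    # N(t) = sum_j table[j][0] * prod_{i<j}(t - nodes[i]); differentiate
    # each product by the product rule.
    deriv = 0.0
    for j in range(1, k):
        s = 0.0
        for skip in range(j):             # drop one factor at a time
            prod = 1.0
            for i in range(j):
                if i != skip:
                    prod *= (t - nodes[i])
            s += prod
        deriv += table[j][0] * s
    return deriv

def accelerator(f, x_n, prev_pts):
    """gamma_n = -1 / N4'(x_n), with nodes {x_n} plus 4 previous points."""
    nodes = [x_n] + list(prev_pts)
    values = [f(u) for u in nodes]        # stored values are reused in practice
    return -1.0 / newton_poly_derivative(nodes, values, x_n)
```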

Here, the with-memory development of (3) can be presented as follows:

We now prove that the method with memory (18) has convergence order twelve, provided that we use the accelerator $\gamma_n$ as in (16). It should be remarked that we apply Herzberger's matrix method [7].

Theorem 2. If an initial approximation $x_0$ is sufficiently close to the zero $\alpha$ of $f$ and the parameter $\gamma_n$ in the iterative scheme (18) is recursively calculated by the forms given in (16), then the order of convergence of (18) is twelve.

Proof. We will use Herzberger's matrix method to determine the order of convergence. Note that the lower bound of the order of a single-step $m$-point method is the spectral radius of a matrix $M^{(m)} = (m_{ij})$ associated to the method, with elements $m_{1,j}$ equal to the amount of information required at the point $x_{n-j+1}$, $j = 1, \ldots, m$; $m_{i,i-1} = 1$ for $i = 2, \ldots, m$; and $m_{i,j} = 0$ otherwise.
On the other hand, the lower bound of the order of an $s$-step method is the spectral radius of the product $M = M_1 M_2 \cdots M_s$ of the matrices associated with its steps.
We can express each approximation $x_{n+1}$, $z_n$, $y_n$, and $w_n$ as a function of the available information $f(x_n)$, $f(w_n)$, $f(y_n)$, and $f(z_n)$ from the $n$th iteration and of the corresponding information from the previous iteration, depending on the accelerating technique. Now, we determine the order of convergence of (18).
Method (N4). We use the following matrices to express informational dependence:
Hence, it is easy to derive $M = M_1 M_2 M_3 M_4$, whose spectral radius is twelve. Consequently, the order of the method with memory (18)-(N4) is at least twelve. The proof of Theorem 2 is finished.
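The spectral-radius computation itself is mechanical. As a hedged illustration of the technique, the snippet below carries it out for the classical secant method, whose Herzberger matrix $\begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}$ yields the well-known order $(1+\sqrt{5})/2 \approx 1.618$; for (18)-(N4) one forms the four matrices of the informational dependence and checks that $\rho(M_1 M_2 M_3 M_4) = 12$ in the same way.

```python
# Herzberger's technique on the classical secant example: the matrix
# [[1, 1], [1, 0]] has spectral radius (1 + sqrt(5))/2, the secant R-order.
import numpy as np

M_secant = np.array([[1, 1],
                     [1, 0]])
print(max(abs(np.linalg.eigvals(M_secant))))    # -> 1.618033...
```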

Clearly, the proposed scheme with memory possesses a high computational efficiency index $12^{1/4} \approx 1.861$, which makes it interesting for practical problems.

4. Numerical Examples

In this section, we test our proposed methods and compare their results with some other methods of the same order of convergence. The errors $|x_k - \alpha|$ of the approximations to the sought zeros are displayed, and $A(-h)$ stands for $A \times 10^{-h}$. Moreover, coc indicates the computational order of convergence and is computed by

$$\mathrm{coc} \approx \frac{\ln\left(\left|x_{k+1} - \alpha\right| / \left|x_k - \alpha\right|\right)}{\ln\left(\left|x_k - \alpha\right| / \left|x_{k-1} - \alpha\right|\right)}.$$

The calculated value coc estimates the theoretical order of convergence well when no "pathological behavior" of the iterative method occurs (for instance, slow convergence at the beginning of the implemented iterative method, "oscillating" behavior of the approximations, etc.).
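A minimal sketch of this estimate, assuming the exact zero $\alpha$ is known (as it is in the test problems), is:

```python
import math

def coc(e_km1, e_k, e_kp1):
    """coc ~ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}), with e_j = |x_j - alpha|."""
    return math.log(e_kp1 / e_k) / math.log(e_k / e_km1)

# hypothetical error magnitudes decaying with order 8, as for class (3):
print(coc(1e-2, 1e-16, 1e-128))   # -> 8.0
```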

We have used 1000-digit floating point arithmetic so as to minimize the effect of round-off errors.
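Such precision is available in any multiple-precision environment; for instance, one possible setup with Python's mpmath library is:

```python
from mpmath import mp, mpf, sqrt

mp.dps = 1000                     # work with 1000 significant decimal digits
alpha = sqrt(mpf(2))              # e.g., the exact zero of f(x) = x**2 - 2
print(str(alpha)[:30])            # prints the first digits of sqrt(2)
```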

By using the weight functions (15), we introduce some concrete methods based on the proposed class. Note that the initial estimate $x_0$ should be chosen before starting the iterative process and that $\gamma_0$ is also given suitably; a driver sketch combining the earlier code fragments follows the list of concrete methods below.

Concrete method 1:

Concrete method 2:

Concrete method 3:

Concrete method 4:

Concrete method 5:

Concrete method 6:

Concrete method 7:

Concrete method 8:
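For completeness, a hedged driver coupling the illustrative three-step skeleton of Section 2 with the accelerator of Section 3 (both sketches, not the concrete methods (23)–(30)) might look as follows:

```python
def solve_with_memory(f, x0, gamma0=0.01, tol=1e-10, itmax=20):
    """Iterate the illustrative three-step scheme, updating gamma each
    cycle from the five available approximations, as in Section 3."""
    x, gamma = x0, gamma0
    for _ in range(itmax):
        x_new, (w, y, z) = three_step_derivative_free(f, x, gamma)
        if abs(f(x_new)) < tol:
            return x_new
        gamma = accelerator(f, x_new, (z, y, w, x))   # N4 through 5 points
        x = x_new
    return x

print(solve_with_memory(lambda t: t * t - 2, 1.5))    # -> ~1.41421356...
```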

Several iterative methods (IM) of optimal order eight, which also require four function evaluations per iteration, have been chosen for comparison with our proposed methods.

Three-step method by Wang et al. [8]:

where the functions involved are defined as follows:

Three-step method by Lotfi and Tavakoli [9]:

where

Derivative-free Kung–Traub family [2]:

Three-step method of Zheng et al. [10]:

In Tables 1, 3, and 5, our proposed methods without memory with the different weight functions (23)–(30) are compared with the optimal three-point methods (34) and (35), and we observe that all these methods behave very well in practice and confirm their theoretical results.

Tables 2, 4, and 6 present numerical results for our classes with memory (23)–(30). It is also clear that all these methods behave very well in practice and confirm the relevant theory: they all provide a computational order of convergence of about twelve.

5. Concluding Remarks

We have constructed a class of methods without and with memory. Our proposed methods do not need any derivative and are therefore applicable to nonsmooth functions too. Another advantage of the proposed methods is that their versions without memory are optimal in the sense of the K-T conjecture. In addition, the class contains an accelerator parameter which raises the convergence order from eight to twelve without any new functional evaluations. In other words, the efficiency index of the class with memory is $12^{1/4} \approx 1.861$.

We finalize this work by suggesting some outlines for future research: first, developing the proposed methods for computing multiple roots; second, exploring their dynamics or basins of attraction; and finally, using adaptive-precision arithmetic in each step of the iterative method instead of a fixed precision, since the highest precision is only necessary in the last steps of the iterative process.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The first three authors thank Islamic Azad University, Hamedan Branch, for the financial support of the present research. Also, the authors would like to express their sincere appreciation to Dr. Soleymani and the respected referees for their constructive comments regarding the improvement of this work.