#### Abstract

Two families of derivative-free methods without memory for approximating a simple zero of a nonlinear equation are presented. The proposed schemes include an accelerator parameter that can increase the convergence rate without any new functional evaluations. In this way, we construct a method with memory that considerably increases the efficiency index, from $8^{1/4} \approx 1.682$ to $12^{1/4} \approx 1.861$. Numerical examples and comparisons with existing methods are included to confirm the theoretical results and the high computational efficiency.

#### 1. Preliminaries

The main goal and motivation in constructing iterative methods for solving nonlinear equations is to obtain the highest possible order of convergence with minimal computational cost (see, e.g., [1–3]). Hence, many researchers (see, e.g., [4, 5]) have paid much attention to constructing optimal multipoint methods without memory based on the unproved conjecture of Kung and Traub [6], which states that any multipoint iterative method without memory using $n$ functional evaluations can reach at most the optimal order $2^{n-1}$.
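For concreteness, the conjectured optimal order can be tabulated with a small Python sketch (the function name `optimal_order` is ours, introduced only for illustration):

```python
def optimal_order(n_evals: int) -> int:
    """Conjectured Kung-Traub optimal convergence order of a multipoint
    method without memory using n_evals functional evaluations per step."""
    return 2 ** (n_evals - 1)

# Three evaluations allow order 4; four evaluations allow order 8,
# matching the two families constructed in this paper.
print(optimal_order(3), optimal_order(4))  # 4 8
```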

Let $\alpha$ be a simple real zero of a real function $f$ and let $x_0$ be an initial approximation to $\alpha$. In many practical situations, it is preferable to avoid calculating the derivatives of $f$. This makes the construction of derivative-free iterative methods important [7].

This paper pursues two main goals: first, to develop optimal derivative-free families of methods without memory; second, to extend the proposed families to methods with memory that reach R-orders of convergence 6 and 12 without any new functional evaluations.

The main idea in methods with memory is based on the use of suitable iteration functions and the variation of a free parameter in each iterative step. This parameter is calculated using information from the current and the previous iteration, so that the developed methods may be regarded as methods with memory following Traub’s classification [8]. A supplemental motivation for studying methods with memory stems from the surprising fact that such classes of iterative methods have been considered very rarely in the literature despite their high efficiency indices.

The paper is organized as follows. In Section 2, two families of optimal methods of orders four and eight, consuming three and four function evaluations per iteration, respectively, are proposed. In Section 3, we derive methods with memory of very high computational efficiency; the increase of convergence speed is achieved without additional function evaluations, which is the main advantage of methods with memory. Section 4 presents numerical results on the order of the methods with and without memory. Concluding remarks are given in Section 5.

#### 2. Construction of New Families

This section deals with constructing new multipoint methods for solving nonlinear equations; the discussion is divided into two subsections. Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be a scalar function with $f(\alpha) = 0$ and $f'(\alpha) \neq 0$; that is, $\alpha$ is a simple zero of $f$.

##### 2.1. New Two-Step Methods without Memory

In this subsection, we start with the two-step scheme:

Note that we omit the iteration index for the sake of simplicity. The order of convergence of scheme (1) is four, but it requires derivative evaluations. We therefore replace the derivatives in the first and second steps by suitable approximations that use only the available data; thus, we introduce the approximations as follows.

Using the points $x$ and $w = x + \gamma f(x)$ (where $\gamma$ is a nonzero real constant), we can apply Lagrange’s interpolatory polynomial to approximate $f'(x)$: Differentiating and setting $t = x$, we have

Also, Lagrange’s interpolatory polynomial at the points $x$, $w$, and $y$ for approximating $f'(y)$ can be given as follows: We obtain Finally, we substitute the above approximations into the denominators of (1), and so our derivative-free two-step iterative method is derived as follows:
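To illustrate the mechanism behind these approximations, the following SymPy sketch (our own illustration, using generic symbols rather than the exact quantities of (1)–(6)) differentiates the degree-one Lagrange interpolant through $x$ and $w$ and recovers the familiar first-order divided difference:

```python
import sympy as sp

t, x, gamma = sp.symbols('t x gamma')
f = sp.Function('f')

# Steffensen-type second node: w = x + gamma*f(x), gamma a nonzero constant.
w = x + gamma * f(x)

# Degree-one Lagrange interpolant of f through (x, f(x)) and (w, f(w)).
L1 = f(x) * (t - w) / (x - w) + f(w) * (t - x) / (w - x)

# Its derivative is the first-order divided difference f[x, w].
approx = sp.simplify(sp.diff(L1, t))
divided_difference = (f(w) - f(x)) / (w - x)
print(sp.simplify(approx - divided_difference))  # 0
```

This is exactly why differentiating a low-degree interpolant yields a derivative-free derivative estimate: only the function values at the nodes appear.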

To check the convergence order of (6), we avoid retyping the widely practiced Taylor-expansion approach from the literature; instead, the error equation was derived by a short symbolic computation in Mathematica.


This symbolic computation shows that the order of convergence of the family (6) is four, so we can state the following convergence theorem.
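Since the displayed formulas of (6) are not reproduced here, we illustrate the technique behind such symbolic programs with an equivalent SymPy computation for the classical Newton method, whose well-known error equation $e_{k+1} = (c_2/c_1)\,e_k^2 + O(e_k^3)$ it recovers (a stand-in sketch, not the authors' code):

```python
import sympy as sp

e = sp.symbols('e')                  # e = x - alpha, the current error
c1, c2, c3 = sp.symbols('c1 c2 c3')  # c_j = f^(j)(alpha)/j!, with c1 != 0

# Truncated Taylor expansion of f about its simple root alpha.
f = c1*e + c2*e**2 + c3*e**3
fp = sp.diff(f, e)

# One Newton step x_new = x - f/f', expanded in powers of the error:
# the constant and linear terms cancel, leaving a quadratic error equation.
e_new = sp.expand(sp.series(e - f/fp, e, 0, 4).removeO())
print(e_new)
```

The same pattern, applied step by step to a multipoint scheme, produces its full error equation and hence its convergence order.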

Theorem 1. *If an initial approximation $x_0$ is sufficiently close to a simple zero $\alpha$ of $f$, then the convergence order of the two-step approach (6) is equal to four, and its error equation is given by
*

##### 2.2. New Three-Step Family

Now, we construct a three-step uniparametric family of methods based on the two-step method (6). We start from a three-step scheme in which the first two steps are given by (6) and the third step is Newton’s method; that is, The derivative in the third step of (8) should be replaced by a suitable approximation in order to attain the highest possible order of convergence and to make the scheme optimal. To provide this approximation, we apply Lagrange’s interpolatory polynomial at the points $x$, $w$, $y$, and $z$; that is, It is obvious that this polynomial interpolates $f$ at these four points. By differentiating (9) and setting $t = z$, we obtain By substituting this approximation in (8), we have where the earlier derivative approximation is defined by (5) and the new one is given by (10).

In the following theorem, we state suitable conditions for deriving an optimal three-step scheme without memory.

Theorem 2. *Let $f$ be a scalar function which has a simple root $\alpha$ in the open interval $D$, and let the initial approximation $x_0$ be sufficiently close to $\alpha$. The three-step iterative method (11) is of eighth order and satisfies the following error equation:
*

*Proof. *We employ symbolic computation in the software package Mathematica. By introducing abbreviations for the quantities appearing in (11), a short Mathematica program, analogous to the one for the two-step family, yields the convergence order of (11).


The resulting error equation shows that the method possesses eighth order, which finishes the proof of the theorem.

Error equations (7) and (12) indicate that the orders of methods (6) and (11) are four and eight, respectively.

In the next section, we will modify the proposed methods and introduce new methods with memory. With the use of accelerator parameters, the order of convergence will significantly increase.

#### 3. Extension to Methods with Memory

This section is concerned with the extraction of efficient methods with memory from (6) and (11) through a careful inspection of their error equations, which contain the parameter $\gamma$; this parameter can be approximated in such a way that the local order of convergence increases.

Toward this goal, we set $\gamma = \gamma_k$ as the iteration proceeds, where $\gamma_k$ is computed from an approximation of $f'(\alpha)$; namely, we use $\gamma_k = -1/N_3'(x_k)$ for (6) and $\gamma_k = -1/N_4'(x_k)$ for (11). Here, $N_3(t)$ is Newton’s interpolating polynomial of third degree, set through the four available approximations $x_k$, $y_{k-1}$, $w_{k-1}$, $x_{k-1}$, and $N_4(t)$ is Newton’s interpolating polynomial of fourth degree, set through the five available approximations $x_k$, $z_{k-1}$, $y_{k-1}$, $w_{k-1}$, $x_{k-1}$.

Note that a divided difference of order $n$, defined recursively by $f[t_0] = f(t_0)$ and $f[t_0, t_1, \ldots, t_n] = \left(f[t_1, \ldots, t_n] - f[t_0, \ldots, t_{n-1}]\right)/(t_n - t_0)$, has been used throughout this paper. Hence, the with-memory developments of (6) and (11) can be presented as follows:
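The recursive divided-difference table and the derivative of a Newton interpolating polynomial, the two ingredients of the accelerators above, can be sketched as follows (a generic illustration with a hypothetical test function, not the formulas (14)–(17) themselves):

```python
import sympy as sp

def divided_differences(nodes, values):
    """Top edge of the divided-difference table: f[t0], f[t0,t1], ..."""
    table = list(values)
    coeffs = [table[0]]
    for level in range(1, len(nodes)):
        table = [(table[i + 1] - table[i]) / (nodes[i + level] - nodes[i])
                 for i in range(len(table) - 1)]
        coeffs.append(table[0])
    return coeffs

def newton_poly(nodes, values, t):
    """Newton's interpolating polynomial through the given data."""
    coeffs = divided_differences(nodes, values)
    poly, basis = 0, 1
    for c, node in zip(coeffs, nodes):
        poly += c * basis
        basis *= (t - node)
    return sp.expand(poly)

# Sanity check with the hypothetical test function f(t) = t**3:
# a degree-3 Newton interpolant of a cubic is exact, so its derivative
# must equal 3*t**2 everywhere.
t = sp.symbols('t')
nodes = [sp.Rational(1), sp.Rational(3, 2), sp.Rational(2), sp.Rational(5, 2)]
vals = [n**3 for n in nodes]
N3 = newton_poly(nodes, vals, t)
print(sp.expand(sp.diff(N3, t)))  # 3*t**2
```

In the methods with memory, the nodes are the approximations produced by the previous and current iterations, so evaluating the derivative of this polynomial costs no new function evaluations.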

*Remark 3. *Accelerating methods obtained by a recursively calculated free parameter $\gamma_k$ may also be called self-accelerating methods. The initial value $\gamma_0$ should be chosen before starting the iterative process, for example, using one of the ways proposed in [8].

Here, we attempt to prove that the methods with memory (19) and (20) have convergence orders six and twelve, respectively, provided that the accelerator $\gamma_k$ is computed as in (14) and (15). For ease of the continuing analysis, we introduce the following convenient notation: if the sequence $\{x_k\}$ converges to the zero $\alpha$ of $f$ with the order $p$, then we write $e_{k+1} \sim e_k^p$, where $e_k = x_k - \alpha$. The following lemma plays a crucial role in improving the convergence order of the methods with memory proposed in this paper.

Lemma 4. *If , , and , then the following relation holds:
*

*Proof. *Following the same terminology as in Theorem 2, relation (21) is readily obtained by symbolic computation in Mathematica.


The proof is complete.

In order to obtain the R-order of convergence [9] of the method with memory (19), we establish the following theorem.

Theorem 5. *If an initial approximation $x_0$ is sufficiently close to the zero $\alpha$ of $f$ and the parameter $\gamma_k$ in the iterative scheme (19) is recursively calculated by the form given in (14), then the R-order of convergence for (19) is at least six.*

*Proof. *Let $\{x_k\}$ be a sequence of approximations generated by the iterative method with memory (19). If this sequence converges to the zero $\alpha$ of $f$ with the order $p$, then we write
Thus
Moreover, assume that the iterative sequences $\{w_k\}$ and $\{y_k\}$ have the orders $p_1$ and $p_2$, respectively. Then, (22) gives
Since
using Lemma 4 and (27), we deduce
Matching the powers of $e_{k-1}$ on the right-hand sides of (24)–(29), (25)–(30), and (23)–(31), one can obtain
The nontrivial solution of this system is $p = 6$, which completes the proof.

Using symbolic computations and Taylor expansions, it is easy to derive the following lemma.

Lemma 6. *Assuming (15) and (17), we have
**
where , , , and .*

*Proof. *The proof of this lemma is similar to Lemma 4. It is hence omitted.

Similarly for the three-step method with memory (20), we have the following theorem.

Theorem 7. *If an initial approximation $x_0$ is sufficiently close to the zero $\alpha$ of $f$ and the parameter $\gamma_k$ in the iterative scheme (20) is recursively calculated by the form given in (15), then the R-order of convergence for (20) is at least twelve.*

*Proof. *Let $\{x_k\}$ be a sequence of approximations generated by the iterative method with memory (20). If this sequence converges to the zero $\alpha$ of $f$ with the order $p$, then we write
So
Moreover, assume that the iterative sequences $\{w_k\}$, $\{y_k\}$, and $\{z_k\}$ have the orders $p_1$, $p_2$, and $p_3$, respectively. Then, (34) gives
Since
by Lemma 6 and (40), we obtain
Matching the powers of $e_{k-1}$ on the right-hand sides of (36)–(43), (37)–(44), (38)–(45), and (35)–(46), one can obtain
The nontrivial solution of this system is $p = 12$, and the proof is complete.

*Remark 8. *The advantage of the proposed methods lies in their higher computational efficiency indices. We emphasize that the increase of the R-order of convergence has been obtained without any additional function evaluations, which points to very high computational efficiency. Indeed, the efficiency index of the proposed three-step twelfth-order method with memory is $12^{1/4} \approx 1.861$, which is higher than the efficiency index $6^{1/3} \approx 1.817$ of (19), $8^{1/4} \approx 1.682$ of the optimal three-point method (11), and $4^{1/3} \approx 1.587$ of (6).
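These comparisons follow from the definition of the efficiency index, $p^{1/n}$, for order $p$ and $n$ function evaluations per iteration, and can be verified with a few lines of Python:

```python
# Efficiency index p**(1/n): order p, n function evaluations per iteration.
def efficiency_index(p: float, n: int) -> float:
    return p ** (1.0 / n)

ei_mem3 = efficiency_index(12, 4)  # three-step method with memory (20)
ei_mem2 = efficiency_index(6, 3)   # two-step method with memory (19)
ei_opt8 = efficiency_index(8, 4)   # optimal three-point method (11)
ei_opt4 = efficiency_index(4, 3)   # optimal two-point method (6)

for name, ei in [("(20)", ei_mem3), ("(19)", ei_mem2),
                 ("(11)", ei_opt8), ("(6)", ei_opt4)]:
    print(name, round(ei, 3))
```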

*Remark 9. *We observe that the methods with memory (19) and (20) are considerably accelerated (up to 50%) compared with the corresponding methods (6) and (11) without memory.

#### 4. Numerical Experiments

In this section, we test our proposed methods and compare the results with those of some existing methods of the same order of convergence. The results are obtained with the programming package Mathematica 8 in a multiple-precision arithmetic environment. We have used 1000-digit floating-point arithmetic so as to minimize round-off errors as much as possible. The errors $|x_k - \alpha|$ denote the distances of the approximations from the sought zeros, and $A(-h)$ stands for $A \times 10^{-h}$. Moreover, $r_c$ indicates the computational order of convergence, computed by
$$ r_c \approx \frac{\ln\left(|x_{k+1}-\alpha| / |x_k-\alpha|\right)}{\ln\left(|x_k-\alpha| / |x_{k-1}-\alpha|\right)}. $$
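As a minimal illustration of how such a computational order of convergence is estimated in practice, the following Python sketch runs Newton's method on the hypothetical test equation $x^2 - 2 = 0$ (not one of the tabulated test problems) and evaluates a formula of this type:

```python
import math

# Newton iteration for f(x) = x**2 - 2, whose simple root is sqrt(2).
alpha = math.sqrt(2.0)
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

errors = [abs(x - alpha) for x in xs]  # |x_k - alpha|

# Computational order of convergence from the last three errors.
coc = math.log(errors[3] / errors[2]) / math.log(errors[2] / errors[1])
print(round(coc, 2))  # close to the theoretical order 2
```

In the tables, the same quantity is computed from high-precision iterates, so it matches the theoretical orders four, six, eight, and twelve of the tested methods.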

It is assumed that the initial value $\gamma_0$ is chosen before starting the iterative process and that the initial estimate $x_0$ is given suitably.

Several iterative methods of optimal orders four and eight have been chosen for comparison with our proposed methods, as follows.

Derivative-free Kung-Traub’s two-step method (KT4) [6] is as follows:

Two-step method by Zheng et al. (ZLH4) [10] is as follows:

Two-step method by Lotfi and Tavakoli (LT4) [11] is as follows, with the remaining quantities as defined in [11]:

Derivative-free Kung-Traub’s three-step method (KT8) [6] is as follows:

Three-step method developed by Zheng et al. (ZLH8) [10] is as follows:

Three-step method by Lotfi and Tavakoli (LT8) [11] is as follows, where the remaining quantities and weight functions are as defined in [11]:

In Tables 1, 2, and 3, our proposed two-step classes (6) and (19) are compared with the optimal two-step methods KT4, ZLH4, and LT4. We observe that all these methods behave very well in practice and confirm the theoretical results.

Tables 4, 5, and 6 present numerical results for our three-step classes (11) and (20) and the methods KT8, ZLH8, and LT8. It is clear that these methods also behave very well in practice and confirm the relevant theory.

We remark on the importance of the choice of initial guesses. If they are chosen sufficiently close to the sought roots, then the expected (theoretical) convergence speed is reached in practice; otherwise, all iterative root-finding methods converge more slowly, especially at the beginning of the iterative process. Hence, special attention should be paid to finding good initial approximations. Efficient ways to determine highly accurate initial approximations were discussed thoroughly in [12–14].

#### 5. Conclusions

In this paper, we have constructed two families of derivative-free iterative methods without memory which are optimal in the sense of the Kung–Traub conjecture.

In addition, they contain an accelerator parameter which raises the convergence order without any new functional evaluations; in this way, the efficiency index of the three-step method with memory reaches $12^{1/4} \approx 1.861$.

We conclude this work by suggesting some directions for future research: first, developing the proposed methods for matrix functions, such as those in [15, 16]; second, exploring their dynamics and basins of attraction; and lastly, extending the developed methods with memory using two or three accelerators.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The first author thanks Islamic Azad University, Hamedan Branch, for the financial support of the present research. The authors are also thankful for the insightful comments of three referees, who helped improve the readability and reliability of the present paper. The fourth author also gratefully acknowledges that this research was partially supported by the University Putra Malaysia under the GP-IBT Grant Scheme, project number GP-IBT/2013/9420100.