Abstract

This work extends the method proposed by Soleymani et al. (2012) to a method with memory. To this end, a free parameter is calculated using Newton's interpolatory polynomial of the third degree, so that the R-order of convergence increases from 4 to 6 without any new function evaluations. The extended method is examined numerically and compared with existing methods having similar properties.

1. Introduction

Root finding is a fundamental task in mathematics, both historically and practically. It has attracted the attention of great mathematicians such as Gauss and Newton, and because of its many real applications it remains an active research field.

Kung and Traub's conjecture is the basic tool for constructing optimal multipoint methods without memory [1]. On the other hand, multipoint methods with memory can increase the efficiency index of an optimal method without memory without consuming any new function evaluations, merely by using accelerator parameter(s). This power of methods with memory received little attention until very recently, which motivated us to extend the modified Potra-Pták method [2] to a method with memory.

Traub in his book [3] introduced methods with and without memory for the first time. Moreover, he constructed a Steffensen-type method with memory using a secant approach; in fact, he increased the order of convergence of Steffensen's method [4] from 2 to 2.41. To the best of our knowledge, this is the first method with memory. In other words, Traub changed Steffensen's method slightly as follows (see [3, pages 185–187]):
$$w_n = x_n + \gamma_n f(x_n), \qquad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \qquad \gamma_{n+1} = -\frac{x_{n+1} - x_n}{f(x_{n+1}) - f(x_n)}, \quad n = 0, 1, 2, \ldots. \tag{1}$$
The parameter $\gamma_n$ is called a self-accelerator, and method (1) has convergence order $1 + \sqrt{2} \approx 2.41$. It is still possible to increase the convergence order by using a better self-accelerating parameter based on Newton interpolation of higher degree. Being derivative-free is another virtue of (1).
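To make scheme (1) concrete, the following is a minimal Python sketch of Traub's accelerated Steffensen iteration, under the reconstruction above; the test function, the starting point, and the initial value gamma0 are illustrative assumptions, not taken from the paper.

    # Traub's Steffensen-type method with memory (R-order about 1 + sqrt(2)).
    def traub_with_memory(f, x0, gamma0=-0.01, tol=1e-12, max_iter=50):
        x, gamma = x0, gamma0
        fx = f(x)
        for _ in range(max_iter):
            w = x + gamma * fx                 # auxiliary point w_n
            dd = (f(w) - fx) / (w - x)         # divided difference f[x_n, w_n]
            x_new = x - fx / dd                # Steffensen-type step
            fx_new = f(x_new)
            # self-accelerating parameter from the secant through the last two iterates
            gamma = -(x_new - x) / (fx_new - fx)
            x, fx = x_new, fx_new
            if abs(fx) < tol:
                break
        return x

    # Example: the simple zero 2**(1/3) of f(x) = x**3 - 2.
    print(traub_with_memory(lambda x: x**3 - 2, 1.5))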

We use the symbols $\to$, $O$, and $\sim$ according to the following conventions [3]. If $\lim_{n \to \infty} f(x_n) = C$, we write $f(x_n) \to C$ or $f \to C$. If $\lim_{x \to x_0} g(x) = C$, we write $g(x) \to C$ or $g \to C$. If $f/g \to C$, where $C$ is a nonzero constant, we write $f = O(g)$ or $f \sim Cg$. Let $f$ be a function defined on an interval $I$, where $I$ is the smallest interval containing $k+1$ distinct nodes $x_0, x_1, \ldots, x_k$. The divided difference of $k$th order is defined recursively by $f[x_i] = f(x_i)$ and
$$f[x_0, x_1, \ldots, x_k] = \frac{f[x_1, \ldots, x_k] - f[x_0, \ldots, x_{k-1}]}{x_k - x_0}.$$
Moreover, we recall the definition of the efficiency index (EI) as $p^{1/\theta}$, where $p$ is the order of convergence and $\theta$ is the total number of function evaluations per iteration.
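To make the recursive definition concrete, here is a small Python helper (an illustrative sketch, not part of the paper) that evaluates a kth-order divided difference and the efficiency index:

    # Recursive kth-order divided difference f[x_0, ..., x_k].
    def divided_difference(f, nodes):
        if len(nodes) == 1:
            return f(nodes[0])
        return (divided_difference(f, nodes[1:]) -
                divided_difference(f, nodes[:-1])) / (nodes[-1] - nodes[0])

    # Efficiency index EI = p**(1/theta).
    def efficiency_index(p, theta):
        return p ** (1.0 / theta)

    print(divided_difference(lambda x: x**2, [1.0, 2.0, 3.0]))  # -> 1.0 (leading coefficient of x**2)
    print(efficiency_index(6, 3))  # with-memory method of Section 3: ~1.817
    print(efficiency_index(4, 3))  # optimal two-point method without memory: ~1.587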

This paper is organized as follows. Section 2 reviews the modified Potra-Pták method and remodifies it slightly; the error equation for our modification is provided. In Section 3, the development of the method with memory is carried out along with a discussion of its R-order. Numerical examples and comparisons are presented in the last section.

2. Remodified Optimal Derivative-Free Potra-Pták Method

In this section, our primary goal is to modify the method of Soleymani et al. slightly so that its error equation takes a form better suited to the with-memory case. In fact, we prove that our modified method reaches order of convergence 6, while theirs reaches order 5.2, when equipped with memory.

Derivative-free iterative methods for solving a nonlinear equation $f(x) = 0$ are important because in many practical situations it is preferable to avoid computing the derivative of $f$. One such scheme is
$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \qquad w_n = x_n + \gamma f(x_n), \quad \gamma \in \mathbb{R} \setminus \{0\}, \tag{3}$$
which is obtained from Newton's method by approximating the derivative $f'(x_n)$ by the quotient $(f(x_n + \gamma f(x_n)) - f(x_n))/(\gamma f(x_n))$. Scheme (3) defines a one-parameter family of methods with the same order and efficiency index as Newton's method [3, 4].
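A short Python sketch of the one-parameter family (3) with a fixed parameter; the value gamma = -0.01 and the test function are illustrative choices, not prescribed by the paper.

    # Steffensen-type family (3): Newton's method with f'(x_n) replaced by f[x_n, w_n].
    def steffensen_family(f, x0, gamma=-0.01, tol=1e-12, max_iter=100):
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                break
            # x - f(x)/f[x, w] with w = x + gamma*f(x) simplifies to:
            x = x - gamma * fx**2 / (f(x + gamma * fx) - fx)
        return x

    # Quadratic convergence, like Newton's method, but derivative-free:
    print(steffensen_family(lambda x: x**3 - 2, 1.5))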

Recently, based on scheme (3), Soleymani et al. [2] have extended the idea of this family and presented a Potra-Pták derivative-free family of two-point methods without memory, denoted by (5). Moreover, they have proved the following.

Theorem 1 (see [2]). Let $\alpha$ be a simple root of the sufficiently differentiable function $f$ in an open interval $D$. If $x_0$ is sufficiently close to $\alpha$, then (5) is of local fourth order and satisfies the error equation (6), where $e_n = x_n - \alpha$ and $c_k = f^{(k)}(\alpha)/(k!\,f'(\alpha))$, $k = 2, 3, \ldots$.

As can be seen, the order of convergence is 4. The error equation (6) contains the linear factor $1 + \gamma f'(\alpha)$; it is better to correct approach (5) in such a way that its error equation contains the quadratic factor $(1 + \gamma f'(\alpha))^2$. As we prove later, this factor increases the convergence order of the with-memory variant up to 6. To this end, it is enough to correct the second step of (5), which yields the scheme (8).

Hence, the method without memory (8) is still optimal, and in the following theorem we establish its error equation.

Theorem 2. Let $\alpha$ be a simple root of the sufficiently differentiable function $f$ in an open interval $D$. If $x_0$ is sufficiently close to $\alpha$, then (8) is of local fourth order and satisfies the error equation (9), where $e_n = x_n - \alpha$, $e_{n,w} = w_n - \alpha$, $e_{n,y} = y_n - \alpha$, and $c_k = f^{(k)}(\alpha)/(k!\,f'(\alpha))$.

Proof. We provide the Taylor expansion of each term involved in (8). Expanding $f(x_n)$ about the simple root $\alpha$ at the $n$th iterate and using the first step of (8), we obtain (12). We then expand $f(w_n)$ and $f(y_n)$ about the root, taking (12) into consideration, which yields (13)–(15). Using (12)–(15) in the last step of (8) finally provides the error equation (9), which shows that (8) is a derivative-free family of two-step methods with optimal convergence rate 4. This completes the proof.

3. Development and Construction of the Family with Memory

This section concerns the extension of (8) to a method with memory, since its error equation contains the parameter $\gamma$, which can be approximated in such a way that the local order of convergence increases. So, as the iteration proceeds, we set $\gamma = \gamma_n$ by the formula $\gamma_n = -1/\widetilde{f'(\alpha)}$ for $n = 1, 2, \ldots$, where $\widetilde{f'(\alpha)}$ is an approximation of $f'(\alpha)$. We obtain a method with memory through the following form of $\gamma_n$:
$$\gamma_n = -\frac{1}{N_3'(x_n)}, \tag{17}$$
where $N_3(t)$ is Newton's interpolatory polynomial of third degree, set through the four available approximations $x_n$, $y_{n-1}$, $w_{n-1}$, and $x_{n-1}$, and
$$N_3(t) = f(x_n) + f[x_n, y_{n-1}](t - x_n) + f[x_n, y_{n-1}, w_{n-1}](t - x_n)(t - y_{n-1}) + f[x_n, y_{n-1}, w_{n-1}, x_{n-1}](t - x_n)(t - y_{n-1})(t - w_{n-1}). \tag{18}$$
By using Taylor's expansion of $f$ around the root $\alpha$ at the interpolation nodes, we obtain the expansion (19), where $e_{n-1,w} = w_{n-1} - \alpha$ and $e_{n-1,y} = y_{n-1} - \alpha$. By using (18) and (19), we calculate $N_3'(x_n)$, and according to this and (17) we find
$$1 + \gamma_n f'(\alpha) = O(e_{n-1}\, e_{n-1,w}\, e_{n-1,y}). \tag{21}$$
For the general case one can consult [3].
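Formula (17) can be evaluated with plain divided differences. The following Python sketch is an illustrative reconstruction under the formulas above; the function name gamma_update is our own, and the f-values are recomputed here for simplicity, whereas in the actual method they are already stored from the previous iteration, so no new function evaluations are consumed.

    # gamma_n = -1 / N3'(x_n), with N3 the Newton polynomial of degree three
    # through the nodes x_n, y_{n-1}, w_{n-1}, x_{n-1} (formulas (17)-(18)).
    def gamma_update(f, x_n, y_prev, w_prev, x_prev):
        nodes = [x_n, y_prev, w_prev, x_prev]
        fv = [f(t) for t in nodes]  # stored values in the actual with-memory method
        d1 = [(fv[i+1] - fv[i]) / (nodes[i+1] - nodes[i]) for i in range(3)]
        d2 = [(d1[i+1] - d1[i]) / (nodes[i+2] - nodes[i]) for i in range(2)]
        d3 = (d2[1] - d2[0]) / (nodes[3] - nodes[0])
        # every term of N3 containing the factor (t - x_n) vanishes at t = x_n:
        n3_prime = (d1[0] + d2[0] * (x_n - y_prev)
                    + d3 * (x_n - y_prev) * (x_n - w_prev))
        return -1.0 / n3_prime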

In order to obtain the order of convergence of the family of two-point methods with memory (8), where $\gamma = \gamma_n$ is calculated using formula (17), we use the concept of the R-order of convergence [3]. We can now state the following convergence theorem.

Theorem 3. If an initial approximation $x_0$ is sufficiently close to the zero $\alpha$ of $f$ and the parameter $\gamma_n$ in the iterative scheme (8) is recursively calculated by the form given in (17), then the R-order of convergence is at least 6.

Proof. Let $\{x_n\}$ be a sequence of approximations generated by an iterative method with memory (IM). If this sequence converges to the zero $\alpha$ of $f$ with the R-order $O_R(\mathrm{IM}, \alpha) \geq r$ of IM, then we write
$$e_{n+1} \sim D_{n,r}\, e_n^{r}, \tag{22}$$
where $D_{n,r}$ tends to the asymptotic error constant $D_r$ of IM when $n \to \infty$. Thus
$$e_{n+1} \sim D_{n,r}\,\bigl(D_{n-1,r}\, e_{n-1}^{r}\bigr)^{r} = D_{n,r}\, D_{n-1,r}^{\,r}\, e_{n-1}^{\,r^{2}}. \tag{23}$$
In the sequel we obtain the R-order of convergence of family (8) when approach (17) is applied to the calculation of $\gamma_n$.
Assume that the iterative sequences $\{w_n\}$ and $\{y_n\}$ have the R-orders $q$ and $p$, respectively; that is, $e_{n,w} \sim e_n^{q}$ and $e_{n,y} \sim e_n^{p}$. Since $e_{n,w} = (1 + \gamma_n f'(\alpha))\, e_n\,(1 + O(e_n))$, bearing in mind (21)–(23) we obtain
$$e_{n,w} \sim (1 + \gamma_n f'(\alpha))\, e_n \sim e_{n-1}^{\,1+p+q}\, e_{n-1}^{\,r} = e_{n-1}^{\,r+1+p+q}, \qquad e_{n,w} \sim e_n^{q} \sim e_{n-1}^{\,rq},$$
and then, comparing exponents of $e_{n-1}$, we obtain $rq = r + 1 + p + q$. Similarly, the first step of (8) gives
$$e_{n,y} \sim (1 + \gamma_n f'(\alpha))\, e_n^{2} \sim e_{n-1}^{\,2r+1+p+q}, \qquad e_{n,y} \sim e_{n-1}^{\,rp},$$
and hence $rp = 2r + 1 + p + q$. Finally, since the error equation (9) of method (8) contains the quadratic factor $(1 + \gamma f'(\alpha))^{2}$, bearing in mind (23) we obtain
$$e_{n+1} \sim (1 + \gamma_n f'(\alpha))^{2}\, e_n^{4} \sim e_{n-1}^{\,4r+2(1+p+q)}, \qquad e_{n+1} \sim e_{n-1}^{\,r^{2}}.$$
Combining the exponents of $e_{n-1}$ on the right-hand sides, we form the nonlinear system of three equations in $p$, $q$, and $r$:
$$rq = r + 1 + p + q, \qquad rp = 2r + 1 + p + q, \qquad r^{2} = 4r + 2(1 + p + q).$$
The nontrivial solution of this system is $p = 3$, $q = 2$, and $r = 6$, and we conclude that the lower bound of the R-order of the method with memory (8) with (17) is 6. This completes the proof.
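The exponent system above can be checked symbolically; the following sympy sketch (an illustrative verification, not part of the paper's computations) confirms that the only nontrivial positive solution is $(p, q, r) = (3, 2, 6)$:

    from sympy import symbols, solve

    p, q, r = symbols('p q r', positive=True)
    system = [r*q - (r + 1 + p + q),        # exponents of e_{n-1} in e_{n,w}
              r*p - (2*r + 1 + p + q),      # exponents of e_{n-1} in e_{n,y}
              r**2 - (4*r + 2*(1 + p + q))] # exponents of e_{n-1} in e_{n+1}
    print(solve(system, [p, q, r]))  # -> [(3, 2, 6)]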

Similarly, one can prove the following.

Theorem 4. If an initial approximation $x_0$ is sufficiently close to the zero $\alpha$ of $f$ and the parameter $\gamma_n$ in the iterative scheme (5) is recursively calculated by the form given in (17), then the R-order of convergence is at least $3 + \sqrt{5} \approx 5.2$.

4. Numerical Examples

To examine the practical aspects of the proposed modified Potra-Pták method without and with memory, we implement it here in action. In other words, we demonstrate the convergence behavior of the method with memory (8), where $\gamma_n$ is calculated by (17). For comparison purposes, we pick the methods of Kung and Traub [1] and Zheng et al. [5], with and without memory. We use the following notation: $|x_n - \alpha|$ denotes the error of the approximation to the sought zero, and $A(-h)$ stands for $A \times 10^{-h}$. Moreover, $r_c$ indicates the computational order of convergence and is computed by [2]
$$r_c = \frac{\ln\left|f(x_n)/f(x_{n-1})\right|}{\ln\left|f(x_{n-1})/f(x_{n-2})\right|}.$$
The software Mathematica 8, with 1000-digit arbitrary-precision arithmetic, has been used in our computations. The results, alongside the test functions, are given in Tables 1 and 2. From Tables 1 and 2 we conclude that our methods work numerically well and compete successfully with the existing methods. Indeed, the last columns of these tables show that the numerical and theoretical results support each other.
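The computational order of convergence $r_c$ defined above is straightforward to evaluate; a small Python helper follows (the residuals in the example are hypothetical values for illustration only, not the paper's data):

    import math

    # r_c = ln|f(x_n)/f(x_{n-1})| / ln|f(x_{n-1})/f(x_{n-2})|
    def coc(fx_n, fx_nm1, fx_nm2):
        return (math.log(abs(fx_n / fx_nm1)) /
                math.log(abs(fx_nm1 / fx_nm2)))

    # Hypothetical residuals shrinking with order ~6:
    print(coc(1e-180, 1e-30, 1e-5))  # -> 6.0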

For comparison purposes, we consider the following methods.

Two-Point Method by Kung and Traub [1]:
$$w_n = x_n + \gamma f(x_n), \qquad y_n = x_n - \frac{\gamma f(x_n)^2}{f(w_n) - f(x_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)\, f(w_n)}{\bigl(f(w_n) - f(y_n)\bigr)\, f[x_n, y_n]}.$$

Two-Point Method by Zheng et al. [5]:
$$w_n = x_n + \gamma f(x_n), \qquad y_n = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \qquad x_{n+1} = y_n - \frac{f(y_n)}{f[y_n, x_n] + f[y_n, w_n] - f[x_n, w_n]}.$$
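Under the reconstructions above, one step of each comparison method can be sketched in Python as follows; the fixed value gamma = -0.01, the function names, and the test function are illustrative assumptions.

    # Fourth-order two-point methods used for comparison (fixed gamma, no memory).
    def kung_traub(f, x, gamma=-0.01):
        fx = f(x)
        w = x + gamma * fx
        fw = f(w)
        y = x - gamma * fx**2 / (fw - fx)
        fy = f(y)
        fxy = (fy - fx) / (y - x)        # divided difference f[x, y]
        return y - fy * fw / ((fw - fy) * fxy)

    def zheng_li_huang(f, x, gamma=-0.01):
        fx = f(x)
        w = x + gamma * fx
        fw = f(w)
        fxw = (fw - fx) / (w - x)        # f[x, w]
        y = x - fx / fxw
        fy = f(y)
        fyx = (fx - fy) / (x - y)        # f[y, x]
        fyw = (fw - fy) / (w - y)        # f[y, w]
        return y - fy / (fyx + fyw - fxw)

    # One step from x0 = 1.5 toward the zero 2**(1/3) of x**3 - 2:
    f = lambda x: x**3 - 2
    print(kung_traub(f, 1.5), zheng_li_huang(f, 1.5))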

From Tables 1 and 2 it can be seen that our modified method without memory performs as expected; moreover, its with-memory variant competes with the existing methods. To sum up, Potra and Pták [6] constructed a two-point method without memory with convergence order 3, which is not optimal in the sense of Kung and Traub. Cordero et al. [7] made it optimal; in other words, they introduced optimal two- and three-point methods with orders of convergence 4 and 8, respectively. Though their methods are optimal, they are not derivative-free. Recently, Soleymani et al. [2] derived two two-point methods without memory from the Potra-Pták method: one is derivative-free and the other is not. In addition, their method with derivatives reduces to the two-step method of Cordero et al. [7] for a particular parameter choice (see (5)). In this work, we first modified their derivative-free method and then generalized it to a method with memory with efficiency index $6^{1/3} \approx 1.817$; see more about the efficiency index in [7]. Therefore, a two-step method with memory can achieve performance even better than four-step methods without memory, whose efficiency index is $16^{1/5} \approx 1.741$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by the University of Venda and the Islamic Azad University, Hamedan Branch.