Abstract

Based on the optimal eighth-order iterative methods without memory proposed by Thukral (2012), some iterative methods with memory and high efficiency index are presented. We show that the order of convergence is increased without any additional function evaluations. Numerical comparisons are made to show the performance of the presented methods.

1. Introduction

Multipoint iterative methods for solving nonlinear equations are of great practical importance, since they overcome the theoretical limits of one-point methods concerning convergence order and computational efficiency. The main goal and motivation in constructing root-solvers is to achieve a convergence order as high as possible while consuming as few function evaluations as possible. Let $f$ be a sufficiently smooth function of a single variable in some neighborhood of $\alpha$, where $\alpha$ satisfies $f(\alpha) = 0$. Traub [1] considered the derivative-free iterative function of order two
$$x_{k+1} = x_k - \frac{\gamma f(x_k)^2}{f(x_k + \gamma f(x_k)) - f(x_k)},$$
where $\gamma$ is a real constant. The choice $\gamma = 1$ produces the well-known Steffensen method [2]. To improve the local order of convergence, many modified methods have been proposed in the open literature; see [3–11] and the references therein. Thukral [12] developed a scheme of optimal order of convergence eight, constructed with weight functions and involving a real free parameter. These methods belong to the class of methods without memory.

In this paper, we use the optimal multipoint method without memory by Thukral as the base for constructing considerably faster methods that employ information from the current and previous iteration without any additional evaluations of the function. Following Traub's classification (see [1, pp. 8-9]), this class of root-finders is called methods with memory. Our main goal is to present multipoint methods with high efficiency index for approximating a root of the nonlinear equation $f(x) = 0$. The acceleration of the convergence rate is attained by a suitable variation of one free parameter in each iterative step. This self-accelerating parameter is calculated using information from the current and previous iteration by applying Newton's interpolating polynomials. Since considerable acceleration of convergence is obtained without additional function evaluations, the computational efficiency of the improved multipoint methods is significantly increased. The efficacy of the methods is tested on a number of numerical examples.
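To make the base step concrete, the following is a minimal Python sketch of Traub's one-parameter Steffensen-type iteration described above; the test function, starting point, and tolerance are illustrative choices of ours, not taken from the paper.

# Traub's derivative-free one-parameter iteration of order two;
# gamma = 1 recovers the classical Steffensen method.
def steffensen(f, x0, gamma=1.0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + gamma * fx                    # auxiliary point w_k = x_k + gamma*f(x_k)
        denom = f(w) - fx
        if denom == 0.0:                      # divided difference broke down
            break
        x_next = x - gamma * fx * fx / denom  # x_{k+1} = x_k - gamma*f(x_k)^2 / (f(w_k) - f(x_k))
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: the simple real root of x^3 + 4x^2 - 10, a common test problem
print(steffensen(lambda x: x**3 + 4*x**2 - 10, 1.3))  # ~1.3652300134140969

Note that $\gamma$ stays fixed here; the methods with memory studied below obtain their speed-up precisely by updating this parameter at every step.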

2. Multipoint Methods with Memory

In [13], Sharma et al. have presented three methods through the forms (2), (3), and (4) of the underlying scheme (5). The self-accelerating parameter is computed by one of the formulas (6), (7), and (8), where it serves as an approximation to $-1/f'(\alpha)$: in (6) and (7) it is obtained from Newton's interpolatory polynomials of the third degree, set through four available approximations, respectively, and in (8) from Newton's interpolatory polynomial of the fourth degree, set through five available approximations; the required derivatives of these polynomials are evaluated at the current approximation. Now, we replace the constant parameter in the iterative formula (5) by the varying self-accelerating parameter defined by (6), (7), and (8). Then, the multipoint methods with memory, following from (5), take the form (12), where the underlying scheme is defined by (2), (3), and (4), respectively.
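Since the displayed formulas are not reproduced here, the following Python sketch shows one plausible realization of the accelerating mechanism, under the assumption that the parameter is reset to minus the reciprocal of the derivative of a Newton interpolating polynomial at the current approximation; the node bookkeeping (which iterates are interpolated, and the polynomial degree) is fixed by formulas (6)–(8) and is left generic here.

def newton_poly_derivative(nodes, values, t):
    """Derivative at t of the Newton interpolating polynomial through (nodes, values)."""
    n = len(nodes)
    dd = list(values)                        # in-place divided-difference table
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (nodes[i] - nodes[i - j])
    # N(t) = dd[0] + dd[1](t-x0) + dd[2](t-x0)(t-x1) + ...; accumulate N' by the product rule
    dp, w, dw = 0.0, 1.0, 0.0
    for j in range(1, n):
        dw = dw * (t - nodes[j - 1]) + w
        w = w * (t - nodes[j - 1])
        dp += dd[j] * dw
    return dp

def next_gamma(nodes, fvalues, x_k):
    # Self-accelerating parameter gamma_{k+1} = -1/N'(x_k). The pairs (nodes, fvalues)
    # were already computed during the current and previous steps, so no additional
    # function evaluations are required.
    return -1.0 / newton_poly_derivative(nodes, fvalues, x_k)

Because $N'(x_k)$ converges to $f'(\alpha)$ as the interpolation nodes approach the root, the updated parameter approaches $-1/f'(\alpha)$, which is exactly the value that annihilates the leading term of the error relation (see the sketch after the proof of Theorem 1).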

3. Convergence Theorems

Theorem 1. Let the function $f$ be sufficiently differentiable in a neighborhood of its simple zero $\alpha$. If an initial approximation $x_0$ is sufficiently close to $\alpha$ and the parameter in (12) is recursively calculated by the forms given in (6)–(8), where the underlying scheme is defined by (2), then the R-order of convergence of the Steffensen-like method with memory (12), with the corresponding expressions (6)–(8) of the parameter, is at least 10.7202, 11, and 11.2915, respectively.

Proof. Let $\{x_k\}$ be a sequence of approximations generated by an iterative method with memory (12). If this sequence converges to the zero $\alpha$ of $f$ with the R-order $r$ of (12), then we write [11] $e_{k+1} \sim D_{k,r}\, e_k^{r}$, $e_k = x_k - \alpha$ (13), where $D_{k,r}$ tends to the asymptotic error constant $D_r$ of (12) when $k \to \infty$. Therefore, $e_{k+1} \sim D_{k,r}\bigl(D_{k-1,r}\, e_{k-1}^{r}\bigr)^{r} = D_{k,r}\, D_{k-1,r}^{\,r}\, e_{k-1}^{r^2}$ (14). Assume that the iterative sequences of intermediate approximations appearing in (12) have the R-orders $p$, $q$, and $s$, respectively; then, bearing in mind (13), we obtain the relations (15)–(18). Let the underlying scheme be defined by (2). We now obtain the order of convergence of the methods with memory (12), where the parameter is calculated from (6). The error relations of (12) involving the self-accelerating parameter are given in (19)–(21). In order to find the final error relation for (12), we need the asymptotic expression for the factor containing the self-accelerating parameter. Using symbolic software such as Mathematica 8 together with (6), we attain this expression in (22). According to (13), (16), (17), (19), and (22), we obtain (23). Similarly, by (13), (16), (17), (20), and (22), we can write (24). Combining (13), (16), (17), (21), and (22) yields (25). Equating the exponents of the error $e_{k-1}$ in the pairs of relations (19) and (23), (20) and (24), and then (21) and (25), we arrive at the system of equations (26). The positive solution of this system yields, in particular, $r = 10.7202$. Therefore, the R-order of the methods with memory (12), when the parameter is calculated by (6), is at least 10.7202.
Now, using symbolic software such as Mathematica 8 together with (7), we attain the corresponding expression (27). Combining (13), (15), (16), (17), (18), and (27), we obtain (28). In a similar way, we find the error relations (29)–(31). Comparing the exponents of $e_{k-1}$ on the right-hand sides of (18) and (28), (19) and (29), (20) and (30), and then (21) and (31), we arrive at the system of equations (32). The positive solution of this system yields, in particular, $r = 11$. Therefore, the R-order of the methods with memory (12), when the parameter is calculated by (7), is at least 11.
Using symbolic software such as Mathematica 8 together with (8), we attain the expression (33). Using (33) and the previously derived relations, we obtain the error relations (34)–(36) for the intermediate approximations. In a similar fashion, we find that the final error relation (21) takes the form (37). Comparing the exponents of $e_{k-1}$ on the right-hand sides of (18) and (34), (19) and (35), (20) and (36), and then (21) and (37), we arrive at the system of equations (38). The positive solution of this system yields, in particular, $r = 11.2915$. Therefore, the R-order of the methods with memory (12), when the parameter is calculated by (8), is at least 11.2915.
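The kind of symbolic manipulation invoked in the proof can be reproduced with any computer algebra system. The following SymPy sketch is ours and treats only the basic one-parameter Steffensen-type step from Section 1, not the full scheme (12); it derives the classical error relation $e_{k+1} \sim (1 + \gamma f'(\alpha))\, c_2\, e_k^2$ with $c_2 = f''(\alpha)/(2 f'(\alpha))$, exhibiting the factor $1 + \gamma f'(\alpha)$ whose annihilation by the self-accelerating parameter drives the increase of the R-order.

import sympy as sp

e, gamma, f1, c2, c3 = sp.symbols('e gamma f1 c2 c3')  # e = x_k - alpha, f1 = f'(alpha)

def F(t):
    # truncated Taylor model of f about its simple zero: f(alpha + t) = f1*(t + c2*t^2 + c3*t^3 + ...)
    return f1 * (t + c2 * t**2 + c3 * t**3)

w = e + gamma * F(e)                           # w_k - alpha
e_next = e - gamma * F(e)**2 / (F(w) - F(e))   # e_{k+1} for the Steffensen-type step

print(sp.factor(sp.series(e_next, e, 0, 3).removeO()))
# -> c2*e**2*(f1*gamma + 1): the quadratic term vanishes when gamma -> -1/f'(alpha)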

Theorem 2. Let the function $f$ be sufficiently differentiable in a neighborhood of its simple zero $\alpha$. If an initial approximation $x_0$ is sufficiently close to $\alpha$ and the parameter in (12) is recursively calculated by the forms given in (6)–(8), where the underlying scheme is defined by (3), then the R-order of convergence of the Steffensen-like method with memory (12), with the corresponding expressions (6)–(8) of the parameter, is at least 10, 10.2426, and 10.4721, respectively.

Proof. The proof of this theorem is similar to the proof of Theorem 1; hence, it is omitted.

Theorem 3. Let the function $f$ be sufficiently differentiable in a neighborhood of its simple zero $\alpha$. If an initial approximation $x_0$ is sufficiently close to $\alpha$ and the parameter in (12) is recursively calculated by the forms given in (6)–(8), where the underlying scheme is defined by (4), then the R-order of convergence of the Steffensen-like method with memory (12), with the corresponding expressions (6)–(8) of the parameter, is at least 10.7202, 11, and 11.2915, respectively.

Proof. The proof of this theorem is similar to the proof of Theorem 1; hence, it is omitted.

4. Numerical Results

In this section, we demonstrate the convergence behavior of the methods with memory (12), where the self-accelerating parameter is calculated by one of the formulas (6)–(8) and the underlying scheme is defined by (2)–(4), respectively. The numerical computations reported here have been carried out in the Mathematica 8.0 environment. Tables 1 and 2 show the differences $|x_k - \alpha|$ between the approximations $x_k$ and the root $\alpha$, where $\alpha$ is the exact root computed with 800 significant digits. To check the theoretical order of convergence, we calculate the computational order of convergence $r_c$ using the following formula [14]:
$$r_c \approx \frac{\ln\bigl(|x_{k+1}-\alpha|/|x_k-\alpha|\bigr)}{\ln\bigl(|x_k-\alpha|/|x_{k-1}-\alpha|\bigr)},$$
taking into consideration the last three approximations in the iterative process; a small multiprecision helper implementing this computation is sketched below. The test examples are selected from [11]. It is obvious from the tables that the recursive calculation by the Newton interpolation (8) gives the best results.
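As an illustration, the computational order of convergence can be evaluated with a few lines of code. The helper below is ours: it implements the error-based formula above, with mpmath supplying the required multiprecision arithmetic.

from mpmath import mp, log, fabs

mp.dps = 800  # match the 800 significant digits used for the reference root

def coc(x_km1, x_k, x_kp1, alpha):
    # computational order of convergence from the last three approximations
    # and the high-precision reference root alpha
    e0, e1, e2 = fabs(x_km1 - alpha), fabs(x_k - alpha), fabs(x_kp1 - alpha)
    return log(e2 / e1) / log(e1 / e0)

Feeding it the last three approximations produced by (12) should reproduce the computational orders reported in Tables 1 and 2.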

5. Conclusion

In this paper, Newton's interpolatory polynomials of the third and fourth degrees are applied for constructing considerably faster methods that employ information from the current and previous iteration without any additional evaluations of the function. An efficiency index of $11.2915^{1/4} \approx 1.833$ (with four function evaluations per iteration, as in the underlying optimal eighth-order scheme) has been obtained as the highest computational efficiency index for the new methods with memory. The efficacy of the methods is tested on a number of numerical examples. The results show that these methods are very useful for finding acceptable approximations of the exact solutions of nonlinear equations.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author is grateful to the referees for their comments and suggestions that helped to improve the paper.