Discrete Dynamics in Nature and Society

Volume 2015, Article ID 938606, 7 pages

http://dx.doi.org/10.1155/2015/938606

## Two Bi-Accelerator Improved with Memory Schemes for Solving Nonlinear Equations

J. P. Jaiswal

Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh 462051, India

Received 16 October 2014; Accepted 27 December 2014

Academic Editor: Giuseppe Izzo

Copyright © 2015 J. P. Jaiswal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The present paper is devoted to improving the R-order of convergence of the derivative-free with memory methods presented by Lotfi et al. (2014) without any new evaluation. To achieve this aim, one more self-accelerating parameter is inserted, which is calculated with the help of Newton's interpolatory polynomial. It is first proved theoretically that the R-orders of convergence of the proposed schemes increase from 6 to 7 and from 12 to 14, respectively, without adding any extra evaluation. Smooth as well as nonsmooth examples are discussed to confirm the theoretical results and the superiority of the proposed schemes.

#### 1. Introduction

Finding the root of a nonlinear equation frequently occurs in scientific computation. Newton's method is the most well-known method for solving nonlinear equations and has quadratic convergence. However, the existence of the derivative in a neighborhood of the required root is compulsory for the convergence of Newton's method, which restricts its applications in practice. To overcome this difficulty, Steffensen replaced the first derivative of the function in Newton's iterate by a forward finite difference approximation. This method also possesses quadratic convergence and the same efficiency as Newton's method. Kung and Traub are pioneers in constructing optimal general multistep methods without memory. Moreover, they conjectured that any multistep method without memory using $n$ function evaluations may reach a convergence order of at most $2^{n-1}$ [1]. Thus both Newton's and Steffensen's methods are optimal in the sense of Kung and Traub. But the superiority of Steffensen's method over Newton's method is that it is derivative free, so it can also be applied to nondifferentiable equations. To compare iterative methods theoretically, Ostrowski [2] introduced the idea of the efficiency index $E = p^{1/n}$, where $p$ is the order of convergence and $n$ the number of function evaluations per iteration; in other words, an iterative method with a higher efficiency index is more efficient. To improve the convergence order as well as the efficiency index without adding any new function evaluations, Traub in his book introduced methods with memory. In fact, he changed Steffensen's method slightly as follows (see [3, pp. 185–187]).
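As a concrete illustration of the two quadratic methods just mentioned, the following minimal sketch (ours, not from the paper) compares Newton's iterate with Steffensen's derivative-free variant on the sample equation $x^2 - 2 = 0$:

```python
# Minimal sketch (ours, not from the paper): Newton vs. Steffensen
# on the sample equation x^2 - 2 = 0.

def newton(f, df, x, steps=8):
    for _ in range(steps):
        x = x - f(x) / df(x)          # classical Newton iterate
    return x

def steffensen(f, x, steps=8):
    for _ in range(steps):
        fx = f(x)
        if fx == 0.0:                 # already at the root
            break
        # forward difference f[x, x + f(x)] replaces f'(x)
        x = x - fx * fx / (f(x + fx) - fx)
    return x

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

print(newton(f, df, 1.5))             # approaches sqrt(2)
print(steffensen(f, 1.5))             # same root, no derivative needed
```

Both iterations use the same number of new evaluations per step in the asymptotic sense, which is why their efficiency indices coincide.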

$x_0$, $\gamma_0$ are given suitably:
$$w_n = x_n + \gamma_n f(x_n), \qquad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \qquad \gamma_{n+1} = -\frac{1}{f[x_n, w_n]}, \tag{1}$$
where $f[x_n, w_n] = (f(w_n) - f(x_n))/(w_n - x_n)$. The parameter $\gamma_n$ is called a self-accelerator, and method (1) has R-order of convergence $1 + \sqrt{2} \approx 2.414$. The possibility of increasing the convergence order further by using more suitable parameters cannot be denied. Over the last few years many researchers have been trying to construct iterative methods without memory that support the Kung–Traub conjecture; see, for example, [4–13]. Although the construction of optimal methods without memory is still an active field, during the last year many authors have shifted their attention to developing more efficient methods with memory.
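Traub's self-accelerating idea can be sketched as follows. This is a hedged illustration: the update $\gamma_{n+1} = -1/f[x_n, w_n]$ matches the scheme above, but the test function and starting values are ours:

```python
# Hedged sketch of Traub's method with memory (1): gamma is recomputed
# from the last divided difference, lifting the R-order of the
# derivative-free iteration from 2 to about 1 + sqrt(2) = 2.414.

def traub_with_memory(f, x, gamma=0.01, steps=10):
    for _ in range(steps):
        fx = f(x)
        w = x + gamma * fx                 # auxiliary Steffensen point
        if fx == 0.0 or w == x:            # converged to machine precision
            break
        dd = (f(w) - fx) / (w - x)         # divided difference f[x, w]
        x = x - fx / dd                    # Steffensen-type step
        gamma = -1.0 / dd                  # self-accelerating update
    return x

f = lambda x: x * x - 2.0
print(traub_with_memory(f, 1.5))           # approaches sqrt(2)
```

Because $\gamma_{n+1}$ reuses the divided difference already computed, the acceleration costs no extra function evaluation.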

In the convergence analysis of the new methods we employ the notation used in Traub's book [3]: if $\{f_k\}$ and $\{g_k\}$ are null sequences and $f_k/g_k \to C$, where $C$ is a nonzero constant, we write $f_k = O(g_k)$ or $f_k \sim C g_k$. We also use the concept of R-order of convergence introduced by Ortega and Rheinboldt [14]. Let $\{x_k\}$ be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a zero $\alpha$ of the function $f$ with R-order at least $r$, we write $e_{k+1} \sim D_{k,r}\, e_k^{\,r}$, where $e_k = x_k - \alpha$ and $D_{k,r}$ tends to the asymptotic error constant of the iterative method (IM) as $k \to \infty$.

The rest of the paper is organized as follows: in Section 2 we describe the existing two- and three-point derivative-free with memory schemes, and then their convergence orders are accelerated from six to seven and from twelve to fourteen, respectively, without any extra evaluation. The proposed methods are obtained by imposing one more suitable iterative parameter, which is calculated using Newton's interpolatory polynomial.

The numerical study is also presented in the next section to confirm the theoretical results. Finally, we give the concluding remarks.

#### 2. Brief Literature Review and Improving with Memory Schemes

Two-step (double) and three-step (triple) Newton's methods can, respectively, be written as schemes (3) and (4). The orders of convergence of schemes (3) and (4) are four and eight, respectively, but neither improves on the efficiency of the original Newton's method. A further drawback of these schemes is that they still involve derivatives. To obtain schemes that are both efficient and derivative free, Lotfi et al. [15] approximated the derivatives $f'(x_n)$, $f'(y_n)$, and $f'(z_n)$ by Lagrange interpolatory polynomials of degrees one, two, and three, respectively; the resulting modified versions of schemes (3) and (4) are schemes (7) and (8). The authors showed that the without memory methods (7) and (8) preserve the convergence orders four and eight with a reduced number of function evaluations, with the corresponding error expressions in which $c_k = f^{(k)}(\alpha)/(k!\, f'(\alpha))$. These two without memory schemes are optimal in the sense of Kung and Traub.

Traub first showed that the convergence order of without memory methods can be increased, without adding any evaluation, by using information from the current and previous iterations; such methods are known as with memory methods. To obtain an increased order of convergence, the authors of [15] replaced the fixed parameter $\gamma$ by a self-accelerating parameter $\gamma_n$ approximating $-1/f'(\alpha)$, where $\alpha$ is the exact root and $x_n$ its approximation. In addition, they used the approximation $\gamma_n = -1/N_3'(x_n)$ for method (7) and $\gamma_n = -1/N_4'(x_n)$ for method (8), where $N_3$ and $N_4$ are Newton's interpolatory polynomials of degrees three and four, respectively. (Here a single prime denotes the first derivative; a double prime will later denote the second derivative.) The one-parametric versions of methods (7) and (8) can then be written as schemes (14) and (15). The authors showed that the convergence orders of methods (14) and (15) increase from 4 to 6 and from 8 to 12, respectively. The aim of this paper is to find more efficient methods using the same number of evaluations.
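Since the displayed schemes are not reproduced here, the following hedged sketch illustrates the construction principle only: a two-step derivative-free method in which $f'(y_n)$ is replaced by the derivative of the quadratic interpolant through $x_n$, $w_n = x_n + \gamma f(x_n)$, and $y_n$. It is one common variant of this family and is not claimed to be identical to scheme (7):

```python
# Hedged illustration (one common variant, not claimed to be the paper's
# scheme (7)): a two-step derivative-free method where f'(y) is replaced
# by the derivative of the quadratic interpolant through x, w, and y.

def dd(f, a, b):
    return (f(a) - f(b)) / (a - b)        # first-order divided difference

def two_step_derivative_free(f, x, gamma=0.01, steps=6):
    for _ in range(steps):
        fx = f(x)
        w = x + gamma * fx
        if fx == 0.0 or w == x:           # converged to machine precision
            break
        y = x - fx / dd(f, x, w)          # Steffensen-type first step
        if y == x or y == w:
            break
        # derivative of the Newton interpolant through x, w, y at t = y:
        dfy = dd(f, y, x) + dd(f, y, w) - dd(f, x, w)
        x = y - f(y) / dfy                # Newton-like second step
    return x

f = lambda x: x ** 3 + 4.0 * x ** 2 - 10.0   # a classical test equation
root = two_step_derivative_free(f, 1.0)
print(root)                                   # near 1.36523001...
```

With three function evaluations per iteration ($f(x_n)$, $f(w_n)$, $f(y_n)$) such a scheme reaches the optimal order four, matching the efficiency claim for the derivative-free construction.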
For this purpose we introduce one more iterative parameter, $p_n$, in the above methods. The modified with memory methods are then given by (16), with its error expression (17), and by (18), with its error expression (19). Since these error equations contain both iterative parameters $\gamma_n$ and $p_n$, we should approximate these parameters in such a way that they increase the convergence order. To this end, we approximate $\gamma_n$ and $p_n$ as follows.

For method (16) the parameters are updated by (20), and for method (18) by (21), where $N_3$, $N_4$, $N_4$, and $N_5$ are Newton's interpolatory polynomials of degrees three, four, four, and five, respectively. Now we denote $e_n = x_n - \alpha$, where $\alpha$ is the exact root. Before proving the main result, we state the following two lemmas, which can be obtained by using the error expression of Newton's interpolation, in the same manner as in [16].

*Lemma 1.* *If and , then the following estimates hold:* (i) ; (ii) .

*Lemma 2.* *If and , then the following estimates hold:* (i) ; (ii) .
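The lemmas rest on properties of Newton's interpolatory polynomial. As a hedged aside, its derivative (the quantity from which self-accelerating parameters such as $\gamma_n = -1/N'(x_n)$ are built) can be computed from a divided-difference table; the node set and degree below are illustrative only:

```python
# Hedged sketch: derivative of Newton's interpolatory polynomial,
# the ingredient from which self-accelerating parameters are built.

def divided_differences(xs, fs):
    # c[k] becomes the divided difference f[x_0, ..., x_k]
    c = list(fs)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_poly_derivative(xs, fs, t):
    # Evaluates N'(t) with a Horner-like recurrence on the Newton form
    c = divided_differences(xs, fs)
    p, dp = c[-1], 0.0
    for k in range(len(xs) - 2, -1, -1):
        dp = dp * (t - xs[k]) + p
        p = p * (t - xs[k]) + c[k]
    return dp

# For a cubic, the degree-3 interpolant is exact, so N'(t) = 3 t^2:
xs = [0.0, 0.5, 1.0, 2.0]
fs = [x ** 3 for x in xs]
print(newton_poly_derivative(xs, fs, 1.2))   # 3 * 1.2**2 = 4.32
```

In the with memory methods the interpolation nodes are the available iterates, so evaluating $N'$ (and later $N''$) costs no new function evaluation.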

The theoretical proof of the order of convergence of the proposed methods is given by the following theorem.

*Theorem 3.* *If an initial approximation $x_0$ is sufficiently close to a simple zero $\alpha$ of $f$ and the parameters $\gamma_n$ and $p_n$ in the iterative schemes (16) and (18) are recursively calculated by the forms given in (20) and (21), respectively, then the R-orders of convergence of the with memory schemes (16) and (18) are at least seven and fourteen, respectively.*

*Proof.* First, we assume that the R-orders of convergence of the sequences $\{x_n\}$, $\{w_n\}$, $\{y_n\}$, and $\{z_n\}$ are at least $r$, $r_1$, $r_2$, and $r_3$, respectively. Hence

Similarly
Now we will prove the results in two parts, first for method (16) and then for method (18).

*Modified Method I.* For method (16), it can be derived that

Using the results of Lemma 1 in (28), we have
Now, comparing the equal powers of $e_{n-1}$ in the three pairs (29)-(25), (30)-(26), and (31)-(24), we get the following nonlinear system:
After solving these equations, we get the values of $r_1$, $r_2$, and $r = 7$. This confirms the seventh-order convergence of method (16) and completes the first part.

*Modified Method II.* For method (18), it can be derived that
Now using the results of Lemma 2 in (33), we have

Comparing the equal powers of $e_{n-1}$ in the four pairs (34)-(25), (35)-(26), (36)-(27), and (37)-(24), we get the following nonlinear system:
After solving these equations, we get the values of $r_1$, $r_2$, $r_3$, and $r = 14$, and thus the proof is completed.

*Note 1.* The efficiency index of the proposed method (16) along with (20) is $7^{1/3} \approx 1.913$, which is more than the $6^{1/3} \approx 1.817$ of method (14).

*Note 2.* The efficiency index of the proposed method (18) along with (21) is $14^{1/4} \approx 1.934$, which is more than the $12^{1/4} \approx 1.861$ of method (15).
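The efficiency indices quoted in Notes 1 and 2 can be checked directly. The evaluation counts ($n = 3$ per iteration for the two-step schemes, $n = 4$ for the three-step schemes) are assumed from the derivative-free construction described above:

```python
# Efficiency index E = p**(1/n): order p per iteration, n function
# evaluations. Counts n = 3 (two-step) and n = 4 (three-step) are
# assumed from the derivative-free construction described above.
for p, n in [(6, 3), (7, 3), (12, 4), (14, 4)]:
    print(f"p = {p:2d}, n = {n}: E = {p ** (1.0 / n):.4f}")
```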

#### 3. Numerical Examples and Conclusion


In this section the proposed derivative-free methods are applied to solve smooth as well as nonsmooth nonlinear equations and are compared with the existing with memory methods. High-order methods are important nowadays because numerical applications use high precision in their computations; for this reason the numerical tests have been carried out using variable precision arithmetic in MATHEMATICA 8 with 700 significant digits. The computational order of convergence (COC) is defined by [17, 18]
$$\mathrm{COC} \approx \frac{\ln\left|\left(x_{n+1}-\alpha\right)/\left(x_n-\alpha\right)\right|}{\ln\left|\left(x_n-\alpha\right)/\left(x_{n-1}-\alpha\right)\right|}.$$
To test the performance of the new methods, consider the following three nonlinear functions (taken from [5, 15]): (1) , (2) , (3) . The absolute errors for the first three iterations are given in Table 1; in the table, $a(-h)$ stands for $a \times 10^{-h}$. Note that a large number of three-step derivative-free (with and without memory) methods are available in the literature, but methods that have been tested on nonsmooth functions are rare, which underlines the significance of this paper.
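The COC above can be estimated from three consecutive errors. A minimal sketch (our own test setup, not one of the paper's examples) demonstrates this on Newton's method, whose COC should approach 2:

```python
# Hedged sketch (our own test setup): computational order of convergence
# from three consecutive errors, demonstrated on Newton's method for
# x^2 - 2 = 0, whose COC should approach 2.
import math

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
alpha = math.sqrt(2.0)

xs = [1.5]
for _ in range(4):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))

e = [abs(x - alpha) for x in xs]
coc = math.log(e[3] / e[2]) / math.log(e[2] / e[1])
print(coc)   # close to 2 for a quadratically convergent method
```

In practice only a few iterations can be used, since the errors quickly drop below the working precision; this is why the paper's tests use 700 significant digits.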
To test the performance of new method consider the following three nonlinear functions (which are taken from [5, 15]): (1),(2),(3)The absolute errors for the first three iterations are given in Table 1. In the table stands for . Note that a large number of three-step derivative-free (with and without memory) methods are available in the literature. But the methods which have been tested for nonsmooth functions are rare and this clearly proves the significance of this paper.*