Mathematical Problems in Engineering

Volume 2014, Article ID 795628, 6 pages

http://dx.doi.org/10.1155/2014/795628

## Solving Nondifferentiable Nonlinear Equations by New Steffensen-Type Iterative Methods with Memory

Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh 462051, India

Received 5 August 2014; Revised 15 November 2014; Accepted 26 November 2014; Published 21 December 2014

Academic Editor: Alessandro Palmeri

Copyright © 2014 J. P. Jaiswal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents two derivative-free Steffensen-type methods with memory for solving nonlinear equations. By making use of a suitable self-accelerator parameter in existing optimal fourth- and eighth-order methods without memory, the order of convergence is increased without any extra function evaluation. The efficiency index is therefore also increased, which is the main contribution of this paper. The self-accelerator parameters are estimated using Newton's interpolation. To show the applicability of the proposed methods, some numerical illustrations are presented.

#### 1. Introduction

Finding the root of a nonlinear equation occurs frequently in scientific computation. Newton's method is the best-known method for solving nonlinear equations and has quadratic convergence. However, the existence of the derivative in a neighborhood of the required root is compulsory for the convergence of Newton's method, which restricts its applications in practice. To overcome this difficulty, Steffensen replaced the first derivative of the function in Newton's iterate by a forward finite-difference approximation. This method also possesses quadratic convergence and the same efficiency index as Newton's method. Some nice applications of iterative methods can be found in the literature; see [1–8].
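Steffensen's derivative-free replacement can be sketched as follows; this is a minimal illustration (the test function, tolerance, and starting point are chosen for demonstration and are not from the paper):

```python
# Steffensen's method: Newton's iterate with f'(x_n) replaced by the
# forward finite difference (f(x_n + f(x_n)) - f(x_n)) / f(x_n),
# so no derivative is ever evaluated.
def steffensen(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx      # roughly f'(x) * f(x)
        x = x - fx * fx / denom     # Steffensen update
    return x
```

For example, `steffensen(lambda x: x*x - 2.0, 1.5)` converges quadratically to the root $\sqrt{2}$.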

Kung and Traub are pioneers in constructing optimal general multistep methods without memory. They discussed two general $n$-point methods based on interpolation. Moreover, they conjectured that any multistep method without memory using $n+1$ function evaluations per iteration can reach a convergence order of at most $2^n$ [9]. Thus, both Newton's and Steffensen's methods are optimal in the sense of Kung and Traub. The superiority of Steffensen's method over Newton's method is that it is derivative-free, so it can also be applied to nondifferentiable equations. To compare iterative methods theoretically, Ostrowski [10] introduced the idea of the efficiency index, given by $E = p^{1/n}$, where $p$ is the order of convergence and $n$ is the number of function evaluations per iteration. In other words, an iterative method with a higher efficiency index is more efficient. To improve the convergence order as well as the efficiency index without any new function evaluations, Traub in his book introduced the method with memory. In fact, he changed Steffensen's method slightly as follows (see [11, pp. 185–187]):
$$w_n = x_n + \gamma_n f(x_n), \qquad x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, w_n]}, \qquad \gamma_{n+1} = -\frac{1}{f[x_{n+1}, x_n]}, \qquad n = 0, 1, 2, \ldots, \qquad (1)$$
where $x_0$ and $\gamma_0$ are given suitably. The parameter $\gamma_n$ is called the self-accelerator, and method (1) has $R$-order of convergence $1 + \sqrt{2} \approx 2.414$. The possibility of increasing the convergence order further by using more suitable parameters cannot be ruled out. Many authors during the last few years have tried to construct optimal iterative methods without memory that support this conjecture; see [12–21] and many more. Although the construction of optimal methods without memory is still an active field, authors have recently been shifting their attention to developing more efficient methods with memory; for example, see [22–25].
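Traub's accelerated scheme (1) can be sketched as follows; this is a hedged illustration in which the self-accelerator is updated from already-computed values via a divided difference, at no extra function evaluation (the starting value $\gamma_0$ and the stopping rule are illustrative choices):

```python
# Traub-type Steffensen method with memory:
#   w_n = x_n + g_n * f(x_n),  x_{n+1} = x_n - f(x_n) / f[x_n, w_n],
# with self-accelerator g_{n+1} = -1 / f[x_{n+1}, x_n], which tends to
# -1/f'(alpha) and lifts the R-order from 2 to about 2.414.
def traub_with_memory(f, x0, g0=0.01, tol=1e-12, max_iter=50):
    x, g = x0, g0
    fx = f(x)
    for _ in range(max_iter):
        if abs(fx) < tol:
            break
        w = x + g * fx
        dd = (f(w) - fx) / (w - x)        # divided difference f[x_n, w_n]
        x_new = x - fx / dd
        fx_new = f(x_new)
        if fx_new == fx:                  # guard against a zero denominator
            x = x_new
            break
        g = -(x_new - x) / (fx_new - fx)  # -1 / f[x_{n+1}, x_n]
        x, fx = x_new, fx_new
    return x
```

Note that each iteration still uses only the two evaluations $f(x_n)$ and $f(w_n)$; the accelerator reuses $f(x_{n+1})$, which the next iteration needs anyway.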

In the convergence analysis of the new methods, we employ the notation used in Traub's book [11]: if $\{f_k\}$ and $\{g_k\}$ are null sequences and $f_k / g_k \to C$, where $C$ is a nonzero constant, we will write $f_k = O(g_k)$ or $f_k \sim C g_k$. We also use the concept of $R$-order of convergence introduced by Ortega and Rheinboldt [5]. Let $\{x_k\}$ be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a zero $\alpha$ of the function $f$ with the $R$-order $r$, we will write $$x_{k+1} - \alpha \sim D_{k,r} \left(x_k - \alpha\right)^{r},$$ where $D_{k,r}$ tends to the asymptotic error constant of the iterative method (IM) when $k \to \infty$.

The rest of the paper is organized as follows. In Section 2, we describe the existing two- and three-point Steffensen-type iterative schemes and then accelerate their convergence orders from four to six and from eight to twelve, respectively, without any extra evaluation. The proposed methods are obtained by using previously computed values together with a suitable parameter, which is calculated using Newton's interpolation. A numerical study is presented in Section 3 to confirm the theoretical results. Finally, we give the concluding remarks.

#### 2. Development and Construction with Memory

The two-step and three-step repeated Newton's methods are given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)}, \qquad (3)$$
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(y_n)}{f'(y_n)}, \qquad x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}. \qquad (4)$$
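The two-step scheme (3) simply applies a full Newton sub-step twice per iteration; a minimal sketch (derivative supplied explicitly, test settings illustrative):

```python
# Two-step repeated Newton method (3): two Newton sub-steps per iteration,
# giving fourth-order convergence at four evaluations per iteration
# (f and f' at both x_n and y_n).
def two_step_newton(f, df, x0, tol=1e-12, max_iter=25):
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        y = x - f(x) / df(x)    # first Newton sub-step
        x = y - f(y) / df(y)    # second Newton sub-step
    return x
```

The three-step scheme (4) appends one more identical sub-step, raising the order to eight at six evaluations per iteration.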
The orders of convergence of methods (3) and (4) are four and eight, respectively. By repeating in the same way, one can get still higher-order methods. These methods certainly have higher convergence order than the standard Newton's method, but there is no improvement in the efficiency index. For example, the efficiency indexes of the above two methods are $4^{1/4} \approx 1.414$ and $8^{1/6} \approx 1.414$, which are the same as the efficiency index of the original Newton's method, $2^{1/2} \approx 1.414$. To improve the efficiency index, Cordero et al. [26] reduced the number of function evaluations by approximating the derivatives in terms of function values. First, they approximated the derivative $f'(x_n)$ by the forward difference
$$f'(x_n) \approx f[x_n, w_n] = \frac{f(w_n) - f(x_n)}{w_n - x_n}, \qquad w_n = x_n + f(x_n).$$
Then, to approximate the other two derivatives, they used rational approximations. In fact, $f'(y_n)$ is approximated by a rational approximation of the first degree and $f'(z_n)$ by a rational approximation of the second degree, and they derived that
where
By using these approximations of the derivatives in (3) and (4), they showed that their methods retain the same orders of convergence with a reduced number of function evaluations, and thus the efficiency index is increased: it becomes $4^{1/3} \approx 1.587$ and $8^{1/4} \approx 1.682$, respectively. For more detail, one can see [26]. One more advantage of these methods is that they can also be applied to nonsmooth functions. Now one natural question arises: *Is it possible to find still more efficient methods?* The main aim of our paper is to answer this question.
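The efficiency-index arithmetic above can be checked directly; a quick sketch:

```python
# Efficiency index E = p**(1/n): order p per iteration at n function
# evaluations. Removing derivative evaluations raises E above Newton's value.
newton = 2 ** (1 / 2)          # order 2, 2 evaluations per iteration
fourth = 4 ** (1 / 3)          # derivative-free fourth order, 3 evaluations
eighth = 8 ** (1 / 4)          # derivative-free eighth order, 4 evaluations
print(round(newton, 3), round(fourth, 3), round(eighth, 3))  # 1.414 1.587 1.682
```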

For this purpose, we first introduce a nonzero real parameter $\gamma$ and consider the following two methods together with their error expressions.

*Method I*. For suitably given $x_0$ and $\gamma$,
and its error expression is given by

*Method II*. For suitably given $x_0$ and $\gamma$,
and its error expression is given by
where , , , and are defined as in (7) and .

Now we are concerned with extending the above schemes to methods with memory, since their error equations contain the parameter $\gamma$, which can be approximated in such a way that the local order of convergence increases. For this purpose, we replace $\gamma$ by an iteratively updated $\gamma_n$ approximating $-1/f'(\alpha)$. Here, the two approximations of $-1/f'(\alpha)$ used in (8) and (10) are given by
$$\gamma_n = -\frac{1}{N_3'(x_n)}, \qquad \gamma_n = -\frac{1}{N_4'(x_n)},$$
where $N_3(t)$ and $N_4(t)$ are Newton's interpolatory polynomials of degrees three and four, respectively. Now, the theoretical order of convergence of the methods is given by the following theorem.
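The derivative $N'(x_n)$ of a Newton interpolatory polynomial can be computed directly from divided differences; the following sketch (with illustrative node data, not the paper's exact node choice) shows the mechanics of forming a self-accelerator estimate $\gamma_n = -1/N'(x_n)$:

```python
def divided_differences(xs, fs):
    # Triangular computation of the Newton-form coefficients c_0, ..., c_n.
    c = list(fs)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_poly_derivative(xs, fs, t):
    # Horner-type evaluation of N'(t) on the Newton form
    # N(t) = c0 + c1 (t - x0) + c2 (t - x0)(t - x1) + ...
    c = divided_differences(xs, fs)
    p, dp = c[-1], 0.0
    for k in range(len(xs) - 2, -1, -1):
        dp = dp * (t - xs[k]) + p
        p = p * (t - xs[k]) + c[k]
    return dp

# Illustrative nodes sampled from f(t) = t**2; N interpolates f exactly here,
# so N'(2) = 4 and the accelerator estimate is -1/4.
xs, fs = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
gamma = -1.0 / newton_poly_derivative(xs, fs, xs[-1])
```

In the actual methods the interpolation nodes are the saved iterates from the current and previous steps, so no new function evaluations are spent.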

Theorem 1. *If an initial approximation $x_0$ is sufficiently close to a simple zero $\alpha$ of $f$ and the parameter $\gamma_n$ in the iterative schemes (8) and (10) is recursively calculated by the forms given in (12) and (13), respectively, then the $R$-order of convergence of methods (8) and (10) with memory is at least six and twelve, respectively.*

*Proof*. Before proving the main result, we first establish the following two claims, which will be used later.

*Claim I*. Consider
*Claim II*. Consider
For this, suppose that there are nodes from the interval , where is the minimum and is the maximum of these nodes, respectively. Then, for some , the error expression of Newton’s interpolation polynomial of degree is given by
For , the above equation assumes the form (keeping in mind , , , and )
Differentiating (18) with respect to and putting , we get
Now,
Similarly,
Using these relations in (19) and simplifying, we get
And thus
or
which shows the first part. Similarly, taking in (17) and proceeding in the same manner, we can prove the second relation. Now, we will prove the main result. To do this, we first assume that the $R$-order of convergence of the sequences , , , and is at least , , , and , respectively. Hence,
Similarly,
*Method I*. For method (8), it can be derived that
Using the expression of Claim (I) in (29), we have
Now comparing the equal powers of in (26)–(30), (27)–(31), and (25)–(32), we get the following nonlinear system:
After solving these equations, we get , , and , which confirms the convergence of method (8).

*Method II*. For method (10), it can be derived that
Using the expression of Claim (II) in (34), we have
Again comparing the equal powers of in (26)–(35), (27)–(36), and (28)–(37) and (25)–(38), we get the following nonlinear system:
After solving these equations, we get , , , and and hence we have the second part. Thus, the proof is completed.

#### 3. Application to Nonlinear Equations

In this section, we apply the proposed methods to solve some smooth as well as nonsmooth nonlinear equations and demonstrate the convergence behavior of the methods with and without memory. The numerical computations reported here have been carried out in a Mathematica 8.0 environment. Table 1 shows the absolute value of the difference between the exact root $\alpha$ and the approximate root $x_n$, where the exact root is computed with 1000 significant digits. To check the theoretical order of convergence, we calculate the computational order of convergence (COC) using the formula
$$\mathrm{COC} \approx \frac{\ln\left|\left(x_{n+1}-\alpha\right)/\left(x_{n}-\alpha\right)\right|}{\ln\left|\left(x_{n}-\alpha\right)/\left(x_{n-1}-\alpha\right)\right|}.$$
For this purpose, we consider the following three test functions (taken from [25, 26]): In the table, $a(-b)$ stands for $a \times 10^{-b}$, and "Ind." means that the value is indeterminate. It is obvious from the table that the proposed methods give better results.
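The COC formula above can be evaluated from any three consecutive errors; a minimal sketch (demonstrated on plain Newton iterates for $f(x) = x^2 - 2$, which should give a COC close to the theoretical order 2):

```python
import math

def coc(e_prev, e_cur, e_next):
    # COC ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}| for errors e_k = |x_k - alpha|.
    return math.log(abs(e_next / e_cur)) / math.log(abs(e_cur / e_prev))

# Newton iterates for f(x) = x^2 - 2 starting from x0 = 1.5.
root = math.sqrt(2.0)
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
errs = [abs(x - root) for x in xs]
```

In the paper's tables the errors come from the high-precision Mathematica runs rather than double-precision floats, so the COC can be observed for the higher-order methods as well.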