Abstract

We present and analyze some second-derivative-free variants of Cauchy's method for obtaining simple roots of nonlinear equations. The convergence analysis of the methods is given, and it is established that the methods have convergence order three. Each iteration of the new methods requires two function evaluations and one first-derivative evaluation. Numerical examples show that the new methods are comparable with well-known existing methods and give better numerical results in many cases.

1. Introduction

In this paper, we consider iterative methods to find a simple root $\alpha$, that is, $f(\alpha) = 0$ and $f'(\alpha) \neq 0$, of a nonlinear equation

$$f(x) = 0, \qquad (1.1)$$

where $f : D \subset \mathbb{R} \to \mathbb{R}$ for an open interval $D$ is a scalar function.

Finding the simple roots of the nonlinear equation (1.1) is one of the most important problems in the numerical analysis of science and engineering, and iterative methods are usually used to approximate a solution of such equations. It is well known that Newton's method is an important and basic method for solving nonlinear equations [1, 2]; its formulation is given by

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, \ldots, \qquad (1.2)$$

and this method converges quadratically.
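
As an illustration of (1.2), a minimal Python sketch of the Newton iteration is given below. The function names, the tolerance, and the sample equation are illustrative choices, not taken from the original text.

def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    # Newton's method (1.2): x_{n+1} = x_n - f(x_n)/f'(x_n) for a simple root.
    x = x0
    for n in range(max_iter):
        dfx = fprime(x)
        if dfx == 0.0:
            raise ZeroDivisionError("f'(x_n) = 0; the Newton step is undefined")
        x_new = x - f(x) / dfx
        # stop when both the step length and the residual are small
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

# Example: f(x) = x^3 + 4x^2 - 10, a standard test equation in this literature.
root, its = newton(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.0)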

The classical Cauchy's method [2] is expressed as

$$x_{n+1} = x_n - \frac{2}{1 + \sqrt{1 - 2 L_f(x_n)}} \, \frac{f(x_n)}{f'(x_n)}, \qquad (1.3)$$

where $L_f(x_n) = f(x_n) f''(x_n) / f'(x_n)^2$. The family of methods given by (1.3) is well known and of third order. However, the methods depend on the second derivative in the computing process, and therefore their practical applications are severely restricted. In recent years, several second-derivative-free methods have been developed; see [3–9] and the references therein.
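
To make the cost of (1.3) concrete, the following Python sketch implements the classical Cauchy iteration as reconstructed above; note that it evaluates the second derivative fpp at every step, which is precisely the evaluation the variants derived below avoid. Names and safeguards are illustrative assumptions.

import math

def cauchy(f, fp, fpp, x0, tol=1e-12, max_iter=100):
    # Classical Cauchy's method (1.3): needs f, f', and f'' at each iterate.
    x = x0
    for n in range(max_iter):
        fx, dfx = f(x), fp(x)
        L = fx * fpp(x) / dfx**2            # L_f(x_n) = f f'' / (f')^2
        disc = 1.0 - 2.0 * L
        if disc < 0.0:
            raise ValueError("1 - 2*L_f(x_n) < 0: the square root in (1.3) fails")
        x_new = x - 2.0 / (1.0 + math.sqrt(disc)) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter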

In this paper, we improve the family defined by (1.3) and obtain a three-parameter family of second-derivative-free variants of Cauchy's method. The rest of the paper is organized as follows. In Section 2, we describe the new variants of Cauchy's method and analyze their order of convergence. In Section 3, we obtain several different iterative methods by choosing particular parameter values. In Section 4, numerical tests confirm the theoretical results and show that the new methods are comparable with other known methods and give better results in many cases. Finally, Section 5 draws some conclusions.

2. Development of Methods and Convergence Analysis

Consider approximating the equation $f(x) = 0$ around the point $x_n$ by a quadratic equation of the following form: We impose the tangency conditions where $x_n$ is the $n$th iterate. By using the tangency conditions from (2.2), the value of is determined in terms of as follows: From (2.1), we have Substituting (2.4) into (2.5) yields Using (2.6), we can approximate We define Using this approximation in place of the second-derivative term, we obtain a new three-parameter family of methods free from second derivatives, where . Similar to the classical Cauchy's method, a square root is required in (2.9). However, this may be computationally expensive and may even fail when the expression under the square root is negative. In order to avoid the calculation of square roots, we derive some square-root-free forms by means of a Taylor approximation [10].
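
The family (2.9) itself is not reproduced above, so the following Python sketch only illustrates the general device described in this section: the second-derivative term $L_f(x_n)$ in a Cauchy-type step is estimated from one extra function value at the Newton point $y_n = x_n - f(x_n)/f'(x_n)$, so that each iteration uses two function evaluations and one derivative evaluation. The particular estimate $L \approx 2 f(y_n)/f(x_n)$ is a common choice in this literature and is an assumption here; it is not necessarily the expression obtained in (2.8).

import math

def cauchy_sdf(f, fp, x0, tol=1e-12, max_iter=100):
    # Second-derivative-free Cauchy-type step: two f-evaluations and one
    # f'-evaluation per iteration, matching the cost of the methods in this paper.
    x = x0
    for n in range(max_iter):
        fx, dfx = f(x), fp(x)
        y = x - fx / dfx                  # Newton point
        L = 2.0 * f(y) / fx               # illustrative estimate of L_f(x_n), not (2.8) itself
        disc = 1.0 - 2.0 * L
        if disc < 0.0:
            # this is the failure mode that the square-root-free forms (2.11) and (2.13) avoid
            raise ValueError("negative radicand in the Cauchy-type step")
        x_new = x - 2.0 / (1.0 + math.sqrt(disc)) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter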

It is easy to see that the Taylor approximation of the square-root term is given by (2.10). Using (2.10) in (2.9), we can obtain the following form (2.11), where , .
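
The precise expression expanded in (2.10) is not recoverable from the text above. For reference, the standard expansions used in this kind of derivation, with $t$ a quantity that is small near a simple root (for example $t = 2L_f(x_n)$), are

$$\sqrt{1 - t} = 1 - \frac{t}{2} - \frac{t^2}{8} - \frac{t^3}{16} + O(t^4), \qquad \frac{2}{1 + \sqrt{1 - t}} = 1 + \frac{t}{4} + \frac{t^2}{8} + O(t^3).$$

Truncating either expansion after the quadratic term removes the square root while retaining the error behaviour needed for third-order convergence, which is the idea behind the square-root-free form (2.11).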

On the other hand, it is clear that Then, using (2.12) in (2.9), we can also construct a new family of iterative methods, given by (2.13), where , .

We now give the convergence analysis of the methods defined by (2.13).

Theorem 2.1. Let $\alpha \in D$ be a simple zero of a sufficiently differentiable function $f : D \subset \mathbb{R} \to \mathbb{R}$ for an open interval $D$. If $x_0$ is sufficiently close to $\alpha$, then, for , the methods defined by (2.13) are at least cubically convergent; as particular cases, if or , the methods have convergence order four.

Proof. Let $e_n = x_n - \alpha$; we use the following Taylor expansions: where . Furthermore, we have Dividing (2.14) by (2.15) gives From (2.16), we get Expanding in Taylor series and using (2.17), we get From (2.14) and (2.18), we have Because of (2.15), we obtain From (2.20) and (2.21), we have Because of (2.15) and (2.20), we get From (2.20), (2.22), and (2.23), we also easily obtain Because of (2.14) and (2.18), we get Furthermore, we have Because of (2.21) and (2.27), we have From (2.14) and (2.15), we also easily have By a simple manipulation with (2.26) and (2.29), we obtain Substituting (2.25), (2.29), and (2.30) into the denominator, we obtain Dividing (2.24) by (2.31) gives where Since If we consider , then from (2.12) we obtain Because of (2.13), we have From , we have This means that if , the methods defined by (2.13) are at least of order three for any . Furthermore, if , then the methods defined by (2.13) converge with order four. From (2.37) and (2.38), it is obvious that the methods defined by (2.13) are of order four by taking .
If we consider , then from (2.32) we have From (2.13), we can obtain From (2.40) and , we have
This means that the methods defined by (2.13) are at least of order three for any . Furthermore, if , then the methods defined by (2.13) converge with order four. From (2.41) and (2.42), it is obvious that the methods defined by (2.13) are of order four by taking .

By an argument similar to the proof of Theorem 2.1, we can prove that, for , the methods defined by (2.9) and (2.11) are also at least cubically convergent; as particular cases, if , , or , , the methods have convergence order four.
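
The detailed Taylor expansions of the proof are not reproduced above. For the reader's orientation, proofs of this kind are written in the standard notation (assumed here, since the original symbols are not legible)

$$e_n = x_n - \alpha, \qquad c_k = \frac{f^{(k)}(\alpha)}{k!\, f'(\alpha)}, \quad k = 2, 3, \ldots,$$

and they conclude with an error equation of the form

$$e_{n+1} = C\, e_n^{\,p} + O\bigl(e_n^{\,p+1}\bigr),$$

so that "at least cubically convergent" means $e_{n+1} = O(e_n^3)$, while the particular parameter choices in Theorem 2.1 make the coefficient of $e_n^3$ vanish and raise the order to four.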

3. Some Special Cases

(1°) If , from (2.8) we obtain For , we obtain from (2.13) a third-order method (LM1). For , we obtain from (2.13) a third-order method (LM2).

(2°) If , , from (2.8) we obtain For , we obtain from (2.13) a fourth-order method [11]. For , we obtain from (2.13) a third-order method (LM3).

(3°) If , from (2.8) we obtain For , we obtain from (2.13) a third-order method (LM4). For , we also obtain the fourth-order method which was obtained in [10].

For , we obtain a fourth-order method as follows [11]: For , we obtain the new fourth-order method.

(4°) If , , for , we obtain a fourth-order method from (2.11) and (3.4). For , we obtain from (2.11) a third-order method.

(5°) If , , from (2.11) we obtain some iterative methods as follows:

For , we obtain a third-order method (LM5) where is defined by (3.7).

For , we obtain a fourth-order method (LM6). For , we obtain the fourth-order method as follows [10]:

4. Numerical Examples

In this section, we first present, in Table 1, some numerical results on the number of iterations for several cubically convergent iterative methods. The following methods were compared: Newton's method (NM), the method of Weerakoon and Fernando [12] (WF), Halley's method (HM), Chebyshev's method (CHM), Super-Halley's method (SHM), and our new methods (3.2) (LM1), (3.3) (LM2), (3.6) (LM3), (3.8) (LM4), and (3.14) (LM5).

Secondly, we employ our new fourth-order method defined by (3.15) (LM6) and the super-cubically convergent method defined by (3.2) (LM1) to solve some nonlinear equations and compare them with Newton's method (NM), the Newton-secant method [13] (NSM), and Ostrowski's method [14] (OM). Displayed in Table 2 are the number of iterations and the number of function evaluations (NFE), counted as the sum of the number of evaluations of the function itself and the number of evaluations of the derivative.

All computations were carried out in Matlab 7.1. We accept an approximate solution rather than the exact root, depending on the precision of the computer. We used the following stopping criteria for the computer programs: , and we used the fixed stopping criterion . In the tables, "−" denotes divergence.
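
A minimal Python harness consistent with the testing described above is sketched below; the tolerance, the iteration cap, and the names run_method and newton_step are illustrative assumptions, and step stands for any of the one-point methods compared in the tables.

def run_method(step, f, fp, x0, eps=1e-14, max_iter=200):
    # Drive a one-point iteration, counting iterations and function evaluations
    # (NFE = evaluations of f plus evaluations of f'), as reported in Table 2.
    x, it, nfe = x0, 0, 0
    for _ in range(max_iter):
        x_new, evals = step(f, fp, x)     # each method reports its own per-step cost
        it += 1
        nfe += evals
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            return x_new, it, nfe         # f(x_new) here is used only for the stopping test
        x = x_new
    return None, it, nfe                  # reported as "-" (divergence) in the tables

def newton_step(f, fp, x):
    # one f-evaluation and one f'-evaluation per step
    return x - f(x) / fp(x), 2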

We used the following test functions and display the computed approximate zero [15]:

5. Conclusions

In this paper, we presented some second-derivative-free variants of Cauchy's method for solving nonlinear equations. Each iteration of the methods requires two function evaluations and one first-derivative evaluation. The methods have at least third-order convergence; if , , or , , the methods have convergence order four, respectively, and if , , the method has super-cubic convergence. We observed from the numerical examples that the proposed methods are efficient and show equal or better performance compared with other well-known methods.