#### Abstract

We present a family of fourteenth-order convergent iterative methods for solving nonlinear equations. The family is built on a specific step which, when combined with any two-step iterative method of convergence order $p$, raises the order of convergence of the combined scheme; in particular, combined with a fourth-order two-step method it yields fourteenth-order convergence. The new class requires four evaluations of the function and one evaluation of the first derivative per iteration. Therefore, the efficiency index of this family is $14^{1/5} \approx 1.6952$. Several numerical examples are given to show that the new methods of this family are comparable with the existing methods.

#### 1. Introduction

Let us consider the equation $f(x) = 0$, where $f : D \subseteq \mathbb{R} \rightarrow \mathbb{R}$ is a real-valued univariate nonlinear function.

Many higher-order convergent iterative methods have been developed in the recent past. In recent years, the problem of locating zeros of such functions has been addressed mainly through methods that combine a high order of convergence with low computational effort. Methods are therefore usually compared in terms of their computational order of convergence, efficiency index, and computational efficiency index. The conjecture of Kung and Traub [1] has also guided the development of computationally efficient classes of iterative methods for solving nonlinear equations.

*Definition 1 (computational order of convergence). *Let $\alpha$ be a root of the function $f$ and suppose that $x_{n-1}$, $x_n$, $x_{n+1}$ are three consecutive iterates close to the root $\alpha$. Then, the computational order of convergence (COC) can be approximated using the following formula:
$$\mathrm{COC} \approx \frac{\ln\left|\left(x_{n+1}-\alpha\right)/\left(x_{n}-\alpha\right)\right|}{\ln\left|\left(x_{n}-\alpha\right)/\left(x_{n-1}-\alpha\right)\right|}.$$
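In double-precision arithmetic, the approximation in Definition 1 can be computed directly from three consecutive iterates. The following sketch is our own illustration, not part of the original paper; the function and iterate names are arbitrary choices.

```python
import math

def coc(x_prev, x_curr, x_next, alpha):
    """Computational order of convergence estimated from three
    consecutive iterates x_{n-1}, x_n, x_{n+1} and the known root alpha."""
    e_prev = abs(x_prev - alpha)
    e_curr = abs(x_curr - alpha)
    e_next = abs(x_next - alpha)
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)
```

For Newton iterates on $f(x) = x^2 - 2$, for example, the returned value is close to 2, as expected for a second-order method.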

*Definition 2 (efficiency index). *The efficiency index of an iterative method is given by
$$E = p^{1/n},$$
where $p$ is the order of convergence and $n$ is the computational cost, that is, the number of function (and derivative) evaluations per iteration.
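Definition 2 amounts to a one-line computation. The sketch below (our own illustration, assuming the standard formula $E = p^{1/n}$) reproduces the indices quoted later in the paper.

```python
def efficiency_index(order, evals):
    """Efficiency index E = p**(1/n): p is the order of convergence,
    n the number of evaluations per iteration."""
    return order ** (1.0 / evals)

# Indices discussed in this paper:
# Newton's method:        efficiency_index(2, 2)  ~ 1.414
# King's family [2]:      efficiency_index(4, 3)  ~ 1.587
# Bi et al. [5]:          efficiency_index(8, 4)  ~ 1.682
# New family / SP [7]:    efficiency_index(14, 5) ~ 1.695
# SS [8]:                 efficiency_index(15, 5) ~ 1.718
```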

Conjecture 3 (optimal order of convergence). *An optimal iterative method without memory based on $n$ evaluations could achieve an optimal convergence order of $2^{n-1}$.*

In 1973, King [2] developed a one-parameter family of fourth-order convergent iterative methods for nonlinear equations, given by
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)},$$
$$x_{n+1} = y_n - \frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)} \cdot \frac{f(y_n)}{f'(x_n)},$$
where $\beta$ is a constant. It is a two-step family in which the second step adds only one more function evaluation, and the order of convergence increases from two to four, with efficiency index $4^{1/3} \approx 1.587$. Ostrowski's method [3, 4] is the member of this family for $\beta = 0$, given as
$$x_{n+1} = y_n - \frac{f(x_n)}{f(x_n) - 2 f(y_n)} \cdot \frac{f(y_n)}{f'(x_n)}.$$
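A minimal double-precision sketch of King's family (our own illustration; the test function mentioned below is not from the paper) makes the two-step structure explicit; $\beta = 0$ recovers Ostrowski's method.

```python
def king_step(f, fprime, x, beta=0.0):
    """One iteration of King's fourth-order family.
    beta = 0 gives Ostrowski's method."""
    fx = f(x)
    dfx = fprime(x)
    y = x - fx / dfx                      # Newton predictor
    fy = f(y)
    # King corrector: one extra function evaluation raises the order to four
    return y - (fx + beta * fy) / (fx + (beta - 2.0) * fy) * fy / dfx
```

Two iterations on $f(x) = x^3 - 2$ from $x_0 = 1.2$ already reach the root to machine precision, consistent with fourth-order convergence.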

In 2009, Bi et al. [5] presented a family of three-step eighth-order convergent iterative methods for solving nonlinear equations. The scheme is based on King's fourth-order convergent iterative methods and on the family of sixth-order iterative methods developed by Chun and Ham [6]. The first two steps are King's fourth-order method, and the third step is built with a real-valued weight function similar to the one introduced by Chun and Ham [6].

In the third step, only one additional function evaluation increases the order of convergence from four to eight. The efficiency index of this family is $8^{1/4} \approx 1.682$, which corresponds to the optimal order of convergence for four function evaluations.

In 2011, Sargolzaei and Soleymani [7] presented new fourteenth-order convergent iterative methods to approximate the simple roots of nonlinear equations. Their construction extends a three-step eighth-order variant of Newton's method, using Hermite interpolation to reduce the number of function evaluations.

This method requires five evaluations per iteration, so its efficiency index is $14^{1/5} \approx 1.6952$.

Also in 2011, Soleymani and Sharifi [8] presented an efficient class of iterative methods, obtained by adding a fourth step to optimal eighth-order convergent iterative methods. In the added step, the derivative is approximated by divided differences of the values already computed during the iteration.

The first three steps are the eighth-order convergent iterative method of [9]. The method requires five evaluations per iteration, so the efficiency index of this class of methods is $15^{1/5} \approx 1.718$. This index is only slightly lower than that of optimal sixteenth-order methods, $16^{1/5} \approx 1.741$.

In this paper, using a divided difference decomposition of the derivative involved in the third step, we develop a specific step which, when combined with any two-step iterative method of convergence order $p$, raises the order of convergence of the combined scheme. Specifically, we take King's fourth-order convergent iterative methods, which, combined with this third step, give a new family of methods of convergence order fourteen with efficiency index $14^{1/5} \approx 1.6952$. In Section 2, we present the general family of fourteenth-order convergent iterative methods and its convergence analysis. The convergence analysis of two specific methods and their comparison with existing higher-order convergent and computationally efficient iterative methods are also given.

#### 2. Development of the New Class

In this section, we first construct a general class of three-step fourteenth-order convergent iterative methods by combining a new derivative-free step with a general fourth-order convergent iterative method. In the suggested step, the derivative of a Newton-type correction is replaced by a divided difference approximation built from the function values already computed.

This approximation of the derivative is taken from [8].
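The exact approximation of [8] is not reproduced here. As a generic illustration of the underlying idea (replacing a derivative by divided differences of already computed nodes, at no extra evaluation cost), one can differentiate the Newton interpolating polynomial through three available points; the function and names below are our own.

```python
def divided_diff_derivative(f, x, y, z):
    """Approximate f'(z) using only function values at x, y, z:
    the derivative at z of the quadratic Newton interpolant.
    Exact for polynomials of degree <= 2; error O((z - x)(z - y))."""
    f_xy = (f(x) - f(y)) / (x - y)    # first divided difference f[x, y]
    f_yz = (f(y) - f(z)) / (y - z)    # first divided difference f[y, z]
    f_xyz = (f_xy - f_yz) / (x - z)   # second divided difference f[x, y, z]
    return f_yz + f_xyz * (z - y)
```

The approximation is exact for quadratics and improves rapidly as the nodes cluster near the root, which is what makes such steps attractive inside high-order schemes.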

Now the general three-step fourteenth-order convergent iterative method (16) has the following form: its first two steps are any two-step fourth-order convergent iterative method, and its third step is the new derivative-free step.

The order of convergence of the iterative method (16), computed with Maple 7.0, is given in the following theorem.

Theorem 4. *Let $\alpha \in I$ be a simple zero of a sufficiently differentiable function $f : I \subseteq \mathbb{R} \rightarrow \mathbb{R}$ for an open interval $I$. If $x_0$ is sufficiently close to $\alpha$, then the iterative method (16) is fourteenth-order convergent, with error equation of the form $e_{n+1} = C e_n^{14} + O(e_n^{15})$, where $e_n = x_n - \alpha$ and the asymptotic error constant $C$ depends on the derivatives of $f$ at $\alpha$.*

*Proof. *Let the first two steps constitute a fourth-order convergent method, so that its output $z_n$ satisfies $z_n - \alpha = O(e_n^4)$.

Let $\alpha$ be the zero of $f$ and let $e_n = x_n - \alpha$ be the error at the $n$th iterate. By Taylor's expansion about $\alpha$, we have
$$f(x_n) = f'(\alpha)\left(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \cdots\right),$$
$$f'(x_n) = f'(\alpha)\left(1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + \cdots\right),$$
where $c_k = f^{(k)}(\alpha) / \left(k!\, f'(\alpha)\right)$.
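The Taylor bookkeeping used throughout the proof can be mechanized. As a small illustration (done here for Newton's method rather than for method (16), whose much longer expansions the paper carries out in Maple), the following sketch manipulates truncated power series in $e_n$ with exact rational coefficients and reproduces the classical Newton error equation $e_{n+1} = c_2 e_n^2 + 2(c_3 - c_2^2) e_n^3 + \cdots$; the sample values chosen for $c_2$, $c_3$ are arbitrary.

```python
from fractions import Fraction

N = 4  # keep terms up to e**3

def mul(p, q):
    """Product of two truncated power series (coefficient lists)."""
    r = [Fraction(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

def inv(p):
    """Reciprocal of a truncated series with nonzero constant term."""
    r = [Fraction(0)] * N
    r[0] = 1 / p[0]
    for k in range(1, N):
        r[k] = -sum(p[i] * r[k - i] for i in range(1, k + 1)) / p[0]
    return r

# f(alpha + e) = e + c2*e^2 + c3*e^3 + ..., normalized so f'(alpha) = 1,
# with sample numeric values for the coefficients c2, c3.
c2, c3 = Fraction(1, 2), Fraction(1, 6)
f_s = [Fraction(0), Fraction(1), c2, c3]
fp_s = [Fraction(1), 2 * c2, 3 * c3, Fraction(0)]   # termwise derivative
e_s = [Fraction(0), Fraction(1), Fraction(0), Fraction(0)]
# Newton error: e_{n+1} = e_n - f(x_n)/f'(x_n)
newton_error = [a - b for a, b in zip(e_s, mul(f_s, inv(fp_s)))]
```

The computed coefficients confirm that the linear term cancels and the leading error term is $c_2 e_n^2$, which is the pattern the proof tracks through all three steps of (16).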

For the calculation of the derivative appearing in the new step, we have used the divided difference approximation given by

The simplified form of (21) is given by

Now we have
where

Using (23), we have

Thus, by using (25) we get

Also, for this calculation we have used the divided difference approximation of the derivative as follows:

Using this approximation we have found that

Finally, combining (26) and (28), we get
and the error equation is given by
where

The error equation shows that the order of convergence of the new family of iterative methods is fourteen.

#### 3. The Iterative Methods

In this section we give two special cases of formula (16) as follows.

*Method 1. *Specifically, we take the two-step fourth-order convergent iterative method to be King's fourth-order family of methods [2] with a fixed value of the parameter $\beta$, so
where the remaining quantities are given by (13), (14), and (15), respectively.

*Method 2. *Take the two-step fourth-order convergent iterative method to be Ostrowski's fourth-order method [3]. Therefore, Method 2 has the following formula:
where the remaining quantities are given by (13), (14), and (15), respectively.

The methods (32) and (33) achieve fourteenth-order convergence. Per iteration, the presented methods require four evaluations of the function and one evaluation of the first derivative. We note that the new class of methods (16) has an efficiency index of $14^{1/5} \approx 1.6952$, which is better than the $2^{1/2} \approx 1.414$ of Newton's method, the $4^{1/3} \approx 1.587$ of King's method [2], and the $8^{1/4} \approx 1.682$ of the three-step iterative method with eighth-order convergence [5]. The fourteenth-order convergent iterative method of Sargolzaei and Soleymani [7] has the same efficiency index, and that of the fifteenth-order convergent iterative method of Soleymani and Sharifi [8], $15^{1/5} \approx 1.718$, is greater; yet it can be observed from Table 2 that our methods are comparable with these methods.

#### 4. Numerical Examples

In this section, we consider some numerical examples to demonstrate the performance of the newly developed iterative methods. All computations for the above-mentioned methods are performed in Maple 7 with 500-digit precision, and a fixed tolerance $\epsilon$ is used. The following criterion is used for estimating the zeros: (i) $\left|f(x_n)\right| < \epsilon$.

Thus, for the convergence criterion it is required that the absolute value of the function at the approximate root be less than the tolerance $\epsilon$. Here, $x_0$ represents the initial guess and $\alpha$ the exact zero of the nonlinear function $f$.
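A double-precision sketch of this stopping rule (our own illustration; the paper's Maple environment and tolerance are not reproduced) might look as follows, here driving Newton's method, which costs two evaluations per iteration:

```python
import math

def solve(step, f, x0, tol=1e-10, max_iter=100, evals_per_iter=2):
    """Iterate x_{n+1} = step(x_n) until |f(x_{n+1})| < tol.
    Returns the approximate root, the iteration count, and the total
    number of function evaluations (NFE = iterations * evals_per_iter)."""
    x = x0
    for n in range(1, max_iter + 1):
        x = step(x)
        if abs(f(x)) < tol:
            return x, n, n * evals_per_iter
    raise RuntimeError("no convergence within max_iter iterations")

# Illustrative run: Newton's method on f(x) = cos(x) - x,
# a test function of our own choosing.
f = lambda x: math.cos(x) - x
newton = lambda x: x - f(x) / (-math.sin(x) - 1.0)
root, iters, nfe = solve(newton, f, 1.0)
```

Reporting the triple (root, iterations, NFE) mirrors the columns of the comparison tables used in this section.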

The numerical examples used for comparison are given in Table 1, most of which are taken from [5, 7, 8].

In Table 2, we compare the classical Newton's method (NM), the eighth-order convergent iterative method (BI) of Bi et al. [5], the fourteenth-order convergent iterative method (SP) of Sargolzaei and Soleymani [7], the fifteenth-order convergent iterative method (SS) of Soleymani and Sharifi [8], and the newly developed fourteenth-order convergent iterative methods (M1) and (M2). The methods are compared in terms of the number of iterations, the total number of function evaluations (NFE), the computational order of convergence (COC), and the absolute value of the function at the approximated root.

#### 5. Conclusion

The new methods are tested on a wide range of nonlinear functions, both polynomial and transcendental. Table 2 shows that the newly developed fourteenth-order methods are comparable with the existing methods of this domain in terms of the number of iterations and the number of function evaluations per iteration. In many examples, the newly developed methods perform better than the existing methods. For example, the method (BI) diverges for two of the test functions and the method (SS) diverges for one of them, but the new methods (M1) and (M2) converge in all these cases. Similarly, for one test function the method (BI) approximates the root in three iterations, while the new methods need only two iterations with ten evaluations. For another, the method (BI) gives the approximate root in nine iterations with thirty-six evaluations, while the methods (M1) and (M2) give the approximate root in three and two iterations, respectively. For yet another, the method (SS) approximates the root in six iterations, while the method (M1) needs four iterations and the method (M2) three. It may be noted that for almost all functions the new methods (M1) and (M2) are correct to more significant decimal places than the method (BI), and in some cases than the method (SP). The performance of the fourteenth-order method (SP) [7] is the same as that of the new methods of the family. It is also clear from the tables that the performance of the new methods is good whether the initial guess is near to or far from the exact root. Although the order and efficiency index of the method (SS) are greater than those of the new methods, it can be seen from the comparison tables that the new methods are comparable with it in terms of the number of iterations and in some cases perform better.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.