Abstract

A family of derivative-free methods of seventh-order convergence for solving nonlinear equations is suggested. In the proposed methods, several linear combinations of divided differences are used in order to obtain a good estimation of the derivative of the given function at the different steps of the iteration. The efficiency indices of the members of this family are equal to $7^{1/4} \approx 1.6266$. Numerical examples are used to show the performance of the presented methods on smooth and nonsmooth equations, and to compare them with other derivative-free methods, including some that are optimal fourth-order ones in the sense of Kung-Traub's conjecture.

1. Introduction

Finding iterative methods for solving nonlinear equations is an important area of research in numerical analysis, with interesting applications in various branches of science and engineering. In this study, we describe new iterative methods to find a simple root $\alpha$ of a nonlinear equation $f(x) = 0$, where $f : I \subseteq \mathbb{R} \to \mathbb{R}$ is a scalar function on an open interval $I$. The well-known Newton's method for finding $\alpha$ uses the iterative expression

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,$$

which converges quadratically in some neighborhood of $\alpha$. If the derivative $f'(x_n)$ is replaced by the forward-difference approximation

$$f'(x_n) \approx \frac{f(x_n + f(x_n)) - f(x_n)}{f(x_n)},$$

Newton's method becomes

$$x_{n+1} = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)},$$

which is the well-known Steffensen's method (SM); see [1]. Newton's and Steffensen's methods are both of second order and require two functional evaluations per step, but in contrast to Newton's method, Steffensen's method is derivative-free.
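Steffensen's scheme can be sketched in a few lines. The following minimal Python implementation (a sketch, not code from the paper; the function name and tolerances are illustrative) iterates until $|f(x_n)|$ is small:

```python
# Minimal sketch of Steffensen's method (SM): Newton's step with f'(x_n)
# replaced by the forward divided difference (f(x_n + f(x_n)) - f(x_n)) / f(x_n).
def steffensen(f, x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx  # derivative-free slope estimate
        x = x - fx / g             # Steffensen step
    return x

root = steffensen(lambda t: t * t - 2.0, 1.5)  # approximates sqrt(2)
```

Like Newton's method, the iteration is locally quadratically convergent; far from the root the slope estimate can be poor or vanish, which is one motivation for the controlled variants discussed later.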

The procedure of removing the derivatives usually increases the number of functional evaluations per iteration. In the literature, the efficiency of an iterative method is commonly measured by the efficiency index, defined as $p^{1/d}$ (see [2]), where $p$ is the order of convergence and $d$ is the total number of functional evaluations per step. Kung and Traub conjectured in [3] that the order of convergence of any multipoint method cannot exceed the bound $2^{d-1}$ (called the optimal order). Thus, the optimal order for methods with 3 or 4 functional evaluations per step would be 4 or 8, respectively.

To improve the convergence properties, many variants of Steffensen's method have been proposed in recent years. Some of these methods use forward or central divided differences to approximate the derivatives. For example, by composing Steffensen's and Newton's methods and using a particular approximation of the first derivative, Liu et al. derived in [4] an optimal fourth-order method, which we denote by LZM, with three functional evaluations per step. The iterative expression is where is the approximation given by Steffensen's method, and is the divided difference of order one.

Dehghan and Hajarian [5] proposed a variant of Steffensen's method (DHM), which is written as where . The method is obtained by replacing the forward-difference approximation in Steffensen's method with the central-difference approximation. However, it is still a third-order method and requires four functional evaluations per iteration.

The authors have also presented in [6] a one-parameter family of optimal fourth-order derivative-free methods, denoted by CTM, which will be used in this paper as the basis for achieving higher orders of convergence. The iterative expression of this family is where the parameters and must satisfy . In the numerical section, we work with the element of this family obtained by taking and .

In this paper, the technique used to improve the local order of convergence consists of composing two iterative methods of orders $p$ and $q$, respectively, to obtain a method of order $pq$ (see [1]). Specifically, we compose Newton's method and the CTM family (1.6). In addition, particular approximations of the derivative are made in order to obtain a Steffensen-type method. As we will show, the resulting family of methods has seventh-order convergence and requires four evaluations of the function $f$ per step; therefore, this class of methods has efficiency index $7^{1/4} \approx 1.6266$, which is higher than the index $2^{1/2} \approx 1.4142$ of Steffensen's method, $3^{1/4} \approx 1.3161$ of the DHM method (1.5), and $4^{1/3} \approx 1.5874$ of the LZM (1.4) and CTM (1.6) methods. Hence, although the methods of the new family are not optimal in the sense of Kung-Traub's conjecture, they are competitive from the point of view of the efficiency index.
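The efficiency indices follow directly from the definition $p^{1/d}$. A quick sketch, assuming the orders and evaluation counts stated in the text:

```python
# Efficiency index p**(1/d): p = order of convergence,
# d = functional evaluations per step (values as stated in the text).
def efficiency_index(p, d):
    return p ** (1.0 / d)

indices = {
    "SM":  efficiency_index(2, 2),  # Steffensen's method
    "DHM": efficiency_index(3, 4),
    "LZM": efficiency_index(4, 3),
    "CTM": efficiency_index(4, 3),
    "M7":  efficiency_index(7, 4),  # proposed seventh-order family
}
```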

Recently, some seventh-order methods have appeared in the literature: for example, Hu and Fang designed in [7] a Jarratt-type scheme of seventh-order convergence. Its iterative expression is where is the Newton iteration. We denote this scheme by HFM. Note that this method is not derivative-free and uses five functional evaluations, so its efficiency index is $7^{1/5} \approx 1.4758$.

Noor et al. in [8, Algorithm 2.5] present a seventh-order iterative method free from second derivatives, with five functional evaluations. Its efficiency index is $7^{1/5} \approx 1.4758$ and its iterative expression is where . We denote this scheme by NM.

Soleymani and Khattri in [9, Theorem 1] designed a derivative-free seventh-order method with four functional evaluations. Its iterative expression is where . We denote this method by SKM. Its efficiency index is $7^{1/4} \approx 1.6266$.

The rest of the paper is organized as follows: in Section 2, we describe our family of methods and analyze its convergence order for smooth equations. In Section 3, different numerical tests confirm the theoretical results and allow us to compare this family with other known methods mentioned in this section. We also analyze in this numerical section the behavior of the new family on nonsmooth equations.

2. The Methods and Analysis of Convergence

By direct composition of the CTM family (1.6) and Newton's method, it is easy to see that the scheme where , has eighth-order convergence. In order to avoid the evaluation of the first derivative in the last step, we extend the estimation used in (1.6) by replacing with a linear combination of several divided differences:

where are parameters. We are going to prove that, for some values of these parameters, the family of methods described by has seventh-order convergence.

Theorem 2.1. Let be a simple zero of a sufficiently differentiable function in an open interval . If is sufficiently close to , then the iterative method defined by (2.3) has seventh-order convergence for , , , , , , and , and it satisfies the error equation where and , and .

Proof. By using Taylor’s expansion around , it is easy to observe (see [6]) that if then, and then,
Now, the approximation of , , can be written as and calculating the last step of the iterative process (2.3), we have
It is necessary to assign the following values to the parameters in order to ensure sixth-order convergence: , , , and . Then, the error equation can be expressed as
Finally, if and , we have

In terms of computational cost, the methods of this family require only four functional evaluations per step, so they have efficiency index $7^{1/4} \approx 1.6266$. If we denote by M7 any element of this family, we can establish $I_{M7} = I_{SKM} > I_{LZM} = I_{CTM} > I_{HFM} = I_{NM} > I_{SM} > I_{DHM}$.

In the next section, we use the element of family (2.3) obtained by choosing (for simplicity) and, therefore, , , and . So, the resulting iterative expression of the method is where and .

It is well known that if $\alpha$ is a multiple zero of $f$, then $\alpha$ is a simple zero of $f/f'$. In a similar way, for Steffensen-type methods, it is easy to prove that an analogous derivative-free transformation turns the problem of finding multiple roots of $f$ into one of finding simple roots. Theoretically, this idea improves the order of convergence, but in practice the results are not satisfactory.

As we have seen, the method (2.12) has seventh-order convergence for smooth equations, but what is its behavior for nonsmooth equations? As we will see in the next section, for this class of equations our method in general loses its seventh-order convergence, and stability problems appear. For nonsmooth functions, Amat and Busquier presented in [10] a strategy to control the approximation of the derivative and the stability of the iteration. They applied this idea to Steffensen's method, obtaining a new scheme (STM): where the parameters allow one to control the approximation of the derivative. This procedure can be applied to any other derivative-free scheme. The authors showed second-order convergence of (2.13) for nonsmooth functions and noted that, in order to control the stability in practice, the parameters should satisfy

where is related to the computer precision and is a free parameter.
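The exact control strategy and parameter conditions are those of [10]. As a rough illustration only, one possible realization (our own hypothetical sketch, not the STM scheme of [10]; the clamping rule and the parameter name `delta` are ours) clamps the step used in the divided difference between a floor tied to machine precision and a free upper bound:

```python
import sys

# Hypothetical sketch of one Steffensen step with a controlled derivative
# approximation, in the spirit of STM [10]; the exact scheme and parameter
# conditions in [10] differ. The step h used in the divided difference is
# clamped between sqrt(machine epsilon) and a free parameter delta.
def controlled_steffensen_step(f, x, delta=1e-4):
    eps = sys.float_info.epsilon              # related to the computer precision
    fx = f(x)
    h = min(max(abs(fx), eps ** 0.5), delta)  # controlled step size
    g = (f(x + h) - fx) / h                   # controlled slope estimate
    return x - fx / g

x = 1.5
for _ in range(20):  # converges to sqrt(2)
    x = controlled_steffensen_step(lambda t: t * t - 2.0, x)
```

Keeping the step away from both zero and large values avoids the loss of significance (and the wild slope estimates) that plain Steffensen steps can produce on nonsmooth functions.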

In the following section, we will apply this strategy to our proposed method M7, obtaining a modified scheme that will be denoted by M7mod. Then, we will analyze how this new scheme improves the behavior in nonsmooth cases, although the order of convergence at singular points decreases to four.

3. Numerical Results

This numerical section is divided into two parts: one devoted to comparing the different methods on smooth equations, and another in which we analyze the behavior of our method on nonsmooth test functions.

In the first part of this section, we use some test functions to check the effectiveness of the new high-order method (2.12), comparing it with the classical Steffensen's method SM, the method DHM, and the optimal fourth-order methods LZM and CTM with and . These methods are employed to find the zeros of some nonlinear functions, specifically,(i),(ii),(iii),(iv),(v),(vi),(vii),(viii),(ix),(x).

The complexity of the iterative expressions plays an important role in the computational efficiency of the different methods, so some authors use another index to compare iterative methods, one that also takes into account the number of products and quotients involved in each step of the iterative process. The computational efficiency index is defined as $p^{1/(d+op)}$ (see [11]), where $p$ is the order of convergence, $d$ is the number of functional evaluations, and $op$ is the number of products and quotients per iteration. From the point of view of this index, the relationship between the schemes used in this section is

Nowadays, high-order methods are important because numerical applications require high precision in their computations; for this reason, the numerical computations have been carried out using variable precision arithmetic in Matlab 7.12 (R2011a) with 500 significant digits, on an Intel(R) Core(TM) i5-2500 CPU @ 3.30 GHz with 16.00 GB of RAM. The stopping criterion used is or . The information shown in Tables 1 and 2 is, for every method, the number of iterations needed to reach the required tolerance (denoted by "nc" if the method does not converge), the last values of and , and the approximated computational order of convergence (ACOC) $\rho$, defined by the authors in [12]:

$$\rho_k \approx \frac{\ln\left(|x_{k+1} - x_k| / |x_k - x_{k-1}|\right)}{\ln\left(|x_k - x_{k-1}| / |x_{k-1} - x_{k-2}|\right)}.$$

By means of (3.2), a vector is obtained from the different iterations calculated in the process. The value of $\rho$ that appears in Tables 1 to 4 is the last coordinate of this vector when the variation between its components is small. Note that when the approximated order of convergence is not stable (when the difference between two consecutive values is bigger than one unit), we denote it by '—'.
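The ACOC can be computed from the stored iterates. A short sketch (in Python rather than the Matlab used in the experiments), using the ratio of logarithms of consecutive differences given in [12]:

```python
import math

# ACOC estimate [12]: for each k,
#   rho_k = ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
def acoc(xs):
    d = [abs(b - a) for a, b in zip(xs, xs[1:])]  # consecutive differences
    return [math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)
            if d[k - 1] > 0 and d[k] > 0 and d[k + 1] > 0]

# Newton iterates for f(x) = x**2 - 2 (second order): the ACOC approaches 2
xs = [2.0]
for _ in range(5):
    xs.append(xs[-1] - (xs[-1] ** 2 - 2.0) / (2.0 * xs[-1]))
rhos = acoc(xs)
```

In practice one reports the last entry of `rhos` once the variation between consecutive entries is small, as done in the tables.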

On the other hand, Tables 1 and 2 also show the mean elapsed time (e-time) over 100 runs of the program, calculated by means of the Matlab command "cputime". It can be observed that, in most cases, the elapsed time taken by M7 to obtain the solution is lower than the corresponding times of the other methods involved. In terms of computational effort, the efficiency of the proposed method is not lower than that of the optimal fourth-order methods. These elapsed times are in agreement with the computational efficiency index of each method.

Numerical results in Table 1 confirm the theoretical statements developed in this paper, showing that the estimated order of convergence coincides with the theoretical one, except in case : the second derivative of this nonlinear function vanishes at the solution, so the order of convergence increases, from second to third in SM, from third to fifth in DHM, from fourth to fifth in LZM and CTM, and from seventh to ninth in M7. Nevertheless, in this case the e-time of the new method M7 does not improve on the other ones; in fact, in case the best time is obtained by Steffensen's method, followed by CTM and DHM. In general, it can be stated that the new high-order scheme improves the results obtained by other known methods, even optimal fourth-order methods such as LZM and CTM.

In Table 2, we compare the new method M7 with the other known seventh-order schemes described in the introduction: HFM, SKM, and NM. It can be observed that M7 performs better than SKM and NM, but HFM is more precise than the rest of the methods.

Now, we carry out some numerical tests to check how the methods SM and M7 behave in nonsmooth cases. Moreover, we apply the -procedure to both methods in order to avoid some stability problems. In these cases, the numerical computations have been carried out using single precision arithmetic, so , and the stopping criterion used has been or . From a sufficiently small , we use the following algorithm to compute the different :

The first test has been made on the function:

that can be found in [13]. We use three initial estimations in order to approximate the three different roots of the equation, . In Table 3, we show, for each initial estimation and every method, the exact absolute error at the first and last iterations, the absolute difference between the two last iterations , the value of at the last iteration , and the ACOC. From Table 3 it can be inferred that the order of convergence of the M7 method decreases to five and stability problems appear when it is applied to nonsmooth equations. Nevertheless, it usually performs better than or equal to Steffensen's method and its modifications by the procedure. Indeed, when this strategy is applied to the seventh-order method (M7mod), the stability of the method is improved and it yields more precise estimations with fewer iterations. In this example, the ACOC is not stable in some cases.

Finally, we consider the nonsmooth function

The numerical experiments made on this function are summarized in Table 4. In this case, the advantages of the modified methods over the original Steffensen's and M7 methods are more evident when the initial estimation is far from the zero of the function. When the initial estimation is good enough, it is clear that M7mod improves on lower-order methods in terms of precision and number of iterations.

4. Conclusions

A new seventh-order family of derivative-free iterative methods for solving nonlinear equations has been presented. As only four functional evaluations are required per iteration, the efficiency index of each member of this family is equal to $7^{1/4} \approx 1.6266$. In addition, these methods use a small number of products and quotients and are derivative-free, which allows us to apply them also to nonsmooth equations, with positive and promising results.

The generalization of these methods to nonlinear systems is similar to that of the classical Steffensen's method (see [1]):

$$x^{(k+1)} = x^{(k)} - [x^{(k)}, z^{(k)}; F]^{-1} F(x^{(k)}), \quad z^{(k)} = x^{(k)} + F(x^{(k)}),$$

where $[x, y; F]$ is a linear operator satisfying $[x, y; F](x - y) = F(x) - F(y)$, which is called a divided difference.
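As an illustration (our own sketch, not a scheme from the paper), the classical multidimensional Steffensen iteration $x^{(k+1)} = x^{(k)} - [x^{(k)}, z^{(k)}; F]^{-1} F(x^{(k)})$ with $z^{(k)} = x^{(k)} + F(x^{(k)})$ can be coded for a $2 \times 2$ system, using the standard first-order divided difference operator:

```python
# First-order divided difference operator for F: R^n -> R^n,
#   [x, y; F]_{ij} = (F_i(x_1..x_j, y_{j+1}..y_n) - F_i(x_1..x_{j-1}, y_j..y_n)) / (x_j - y_j),
# which satisfies [x, y; F](x - y) = F(x) - F(y) by telescoping.
def divided_difference(F, x, y):
    n = len(x)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        hi = list(x[:j + 1]) + list(y[j + 1:])  # arguments switch from x to y after j
        lo = list(x[:j]) + list(y[j:])
        Fhi, Flo = F(hi), F(lo)
        for i in range(n):
            M[i][j] = (Fhi[i] - Flo[i]) / (x[j] - y[j])  # assumes x[j] != y[j]
    return M

def solve2(M, b):
    # Cramer's rule for the 2x2 linear system M t = b
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def steffensen_system(F, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        z = [xi + fi for xi, fi in zip(x, Fx)]        # z_k = x_k + F(x_k)
        t = solve2(divided_difference(F, x, z), Fx)   # solve [x, z; F] t = F(x)
        x = [xi - ti for xi, ti in zip(x, t)]
    return x

# Example system: x1^2 + x2^2 = 4, x1 * x2 = 1
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
root = steffensen_system(F, [1.9, 0.5])
```

The divided difference plays the role of the Jacobian, so each step only evaluates $F$; for larger systems the $2 \times 2$ solver would be replaced by a general linear solver.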

Acknowledgments

The authors would like to thank the referees for their valuable comments and for their suggestions to improve the readability of the paper. This research was supported by Ministerio de Ciencia y Tecnología MTM2011-28636-C02-02 and by Vicerrectorado de Investigación, Universitat Politècnica de València PAID-06-2010-2285.