Abstract

The construction of iterative processes without memory that are both optimal in the sense of the Kung-Traub hypothesis and derivative-free is considered in this paper. To this end, techniques with four and five function evaluations per iteration, which reach the optimal orders eight and sixteen, respectively, are discussed theoretically. These schemes can be viewed as generalizations of the recent optimal derivative-free family of Zheng et al. (2011). The procedure also provides an $n$-step family using $n + 1$ function evaluations per full cycle to possess the optimal order $2^n$. Analytical proofs of the main contributions are given, and numerical examples are included to confirm the outstanding convergence speed of the presented iterative methods using only a few function evaluations. The second aim of this work is fulfilled by proposing a hybrid algorithm for capturing all the zeros in an interval. The novel algorithm can deal with nonlinear functions having finitely many zeros in an interval.

1. Introduction

The purpose of this study is to present some generalizations of the celebrated second-order Steffensen's method [1], advanced by the Danish mathematician Johan Frederik Steffensen (1873-1961) as
$x_{k+1} = x_k - \frac{f(x_k)^2}{f(x_k + f(x_k)) - f(x_k)}, \quad k = 0, 1, 2, \ldots,$
with higher orders of convergence and optimal efficiency indices, and also to extend the family of methods in [2]. In 1974, Kung and Traub [3] conjectured on the optimality of multipoint iterative schemes without memory as comes next. An iterative method without memory for solving the single variable nonlinear equation $f(x) = 0$, with $n$ evaluations per iteration, reaches at most the order of convergence $2^{n-1}$ and the optimal efficiency index $2^{(n-1)/n}$. Consequently, the efficiency of a $p$th-order method could be given by $p^{1/n}$, where $n$ is the whole number of (functional) evaluations per iteration. Note that Steffensen's method possesses $2^{1/2} \approx 1.414$ as its efficiency index. By considering this conjecture, the techniques contributed in this research should reach order 8 with four evaluations and order 16 with five evaluations per iteration.
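For illustration (a sketch of our own, not taken from the comparisons below), the following Mathematica lines apply Steffensen's method to the hypothetical equation $x^2 - 2 = 0$ and display the doubling of correct digits at each step:

f[t_] := t^2 - 2;                        (* hypothetical test equation *)
x = SetAccuracy[3/2, 200];               (* initial guess near Sqrt[2] *)
Do[
  x = x - f[x]^2/(f[x + f[x]] - f[x]);   (* Steffensen step: derivative-free *)
  Print[N[Abs[x - Sqrt[2]], 5]],
  {5}]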

The paper unfolds the contents as follows. A collection of pointers to the known literature on derivative-free techniques is presented in Section 2. This is followed by Section 3, where the central results are given, and by Section 4, where an optimal sixteenth-order derivative-free technique is constructed. Results and discussion on the comparisons with other well-known derivative-free methods are presented in Section 5, entitled Computational Tests. The only difficulty of iterative methods of this type is the choice of the initial guess. Using the programming package MATHEMATICA 8, we therefore provide a hybrid algorithm in Section 6 that captures all the real solutions of a nonlinear equation quickly. The conclusions are finally drawn in Section 7.

2. Selections from the Literature

The literature related to the present paper is substantial, and we do not present a comprehensive survey; the references of the papers we cite should be consulted for further reading. Consider iterative techniques for finding a simple root $\alpha$ of the nonlinear equation $f(x) = 0$, where $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is a scalar function on an open interval $D$ and is sufficiently smooth in a neighborhood of $\alpha$. The design of formulas for solving such equations is an absorbing task in numerical analysis [4]. The most frequent approach to solving nonlinear equations consists of the implementation of rapidly convergent iterative methods, starting from a reasonably good initial guess to the sought zero of $f$.

Many iterative methods have been improved by using various techniques, such as quadrature formulas and weight functions; see, for example, [5, 6]. All these developments aim at increasing the local order of convergence with a special view to increasing the efficiency indices. Derivative-involved methods are well discussed in the literature; see, for example, [7-9] and the references therein. It should be mentioned, however, that for many particular choices of the function $f$, especially in hard problems, the calculation of derivatives is impossible or takes a great deal of time.

Thus, in some situations the considered function behaves improperly, or its derivative is close to 0, which causes the applied iterative processes to fail. That is why higher-order derivative-free methods are better root solvers and have recently been in focus. The most important merit of Steffensen's method is that it has quadratic convergence like Newton's method; that is, both techniques estimate roots of the equation just as quickly. Here, quickly means that the number of correct digits in the newly obtained value doubles with each iteration for both. But the formula for Newton's method requires a separate function for the derivative, whereas Steffensen's method does not. Now let us review some derivative-free techniques.

In 2010, an optimal fourth-order derivative-free method [10] was introduced by Liu et al. in the following form:
$y_k = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \quad w_k = x_k + f(x_k),$
$x_{k+1} = y_k - \frac{f(y_k)}{f[x_k, y_k] + f[y_k, w_k] - f[x_k, w_k]}. \quad (2.1)$
This technique consists of three evaluations of the function per iteration to obtain fourth-order convergence. We here remark that $f[x_k, y_k] = (f(x_k) - f(y_k))/(x_k - y_k)$, and so forth, are divided differences. This scheme has $4^{1/3} \approx 1.587$ as its efficiency index. The notation of divided differences will be used throughout this paper.

In 2011, Cordero et al. [11] proposed a sixth-order method, which is free from derivatives and includes five evaluations of the function per iteration to reach the efficiency index $6^{1/5} \approx 1.431$.

Zheng et al. in [2] presented the following eighth-order derivative-free family without memory:
$w_k = x_k + \gamma f(x_k), \quad y_k = x_k - \frac{f(x_k)}{f[x_k, w_k]},$
$z_k = y_k - \frac{f(y_k)}{f[x_k, y_k] + f[y_k, x_k, w_k](y_k - x_k)},$
$x_{k+1} = z_k - \frac{f(z_k)}{f[z_k, y_k] + f[z_k, y_k, x_k](z_k - y_k) + f[z_k, y_k, x_k, w_k](z_k - y_k)(z_k - x_k)}, \quad (2.3)$
which is optimal in the sense of Kung-Traub.

Soleymani [12] proposed an optimal three-step iteration without memory, including four function evaluations just like (2.3), by using the weight function approach; this is the scheme (2.4) used in our comparisons.

To see a recent paper including derivative-free methods with memory, refer to [13]. For further reading on this topic, refer to [14-19].

3. Construction of a New Eighth-Order Derivative-Free Class

In this paper, we derive a new way of constructing multistep methods of orders eight, sixteen, and so forth, requiring four, five, and so forth, function evaluations per iteration, respectively. This means that the proposed techniques support the Kung-Traub hypothesis. Besides, the novel schemes do not use any derivative of the function whose zeros are sought, which is another advantage, since it is preferable to avoid calculations of derivatives of $f$ in some situations. Let us consider a two-step cycle in which we have Steffensen's method in the first step and Newton's method in the second step as follows:
$y_k = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \quad w_k = x_k + \beta f(x_k),$
$x_{k+1} = y_k - \frac{f(y_k)}{f'(y_k)}, \quad (3.1)$
with four evaluations, that is, three evaluations of the function, $f(x_k)$, $f(w_k)$, and $f(y_k)$, and one derivative evaluation $f'(y_k)$ per iteration. For simplicity, we assume that $\beta = 1$; that is, $w_k = x_k + f(x_k)$. At this time, the main challenge is to approximate $f'(y_k)$ as efficiently as possible, such that the fourth-order convergence does not decrease and the efficiency index increases to $4^{1/3} \approx 1.587$ at the same time. To fulfill this aim, we must use all of the three past known data, that is, $f(x_k)$, $f(w_k)$, and $f(y_k)$. Now take into account the following approximation of $f$ in the domain $D$ [20]:
$f(t) \approx a_0 + a_1 (t - x_k) + a_2 (t - x_k)^2 + \gamma (t - x_k)^3, \quad (3.2)$
where its first derivative takes the form $f'(t) \approx a_1 + 2 a_2 (t - x_k) + 3 \gamma (t - x_k)^2$. Clearly, the three unknown parameters $a_0$, $a_1$, and $a_2$ will be obtained by substituting the known values $f(x_k)$, $f(w_k)$, and $f(y_k)$ in (3.2). Note that $\gamma$ is a free real parameter. Hence, solving the resulting linear system for $a_0$, $a_1$, and $a_2$ yields the approximation of $f'(y_k)$ that we denote by (3.3).

Accordingly, by considering (3.3) in the derivative form of (3.2), we attain the following two-step derivative-free technique:
$y_k = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \quad w_k = x_k + f(x_k),$
$x_{k+1} = y_k - \frac{f(y_k)}{f[x_k, y_k] + f[y_k, w_k] - f[x_k, w_k] + \gamma (y_k - x_k)(y_k - w_k)}, \quad (3.4)$
wherein there are only three evaluations of the function per iteration. Theorem 3.1 indicates that the order of (3.4) is four; hence, it is an optimal derivative-free class with the efficiency index $4^{1/3} \approx 1.587$. Note that (3.4) is similar to the method given by Ren et al. in [20].
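A minimal Mathematica sketch of one step of (3.4) follows; the test function and the starting point are hypothetical choices for illustration, and gamma is the free parameter of the class:

f[t_] := Exp[t] - 3 t;   (* hypothetical test function with a root near 0.619 *)
gamma = 1;
step[x_] := Module[{w, y, fd},
   w = x + f[x];
   y = x - f[x]^2/(f[w] - f[x]);                        (* Steffensen substep *)
   fd = (f[x] - f[y])/(x - y) + (f[y] - f[w])/(y - w) -
        (f[x] - f[w])/(x - w) + gamma (y - x) (y - w);  (* approximation (3.3) *)
   y - f[y]/fd];
NestList[step, 0.6, 4]   (* four iterations from the hypothetical guess 0.6 *)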

Theorem 3.1. Assume $f$ to be a sufficiently continuous real function in the domain $D$. Then the sequence $\{x_k\}$ generated by (3.4) converges to the simple root $\alpha$ with fourth-order convergence, and it satisfies the error equation (3.5), wherein $e_k = x_k - \alpha$ and $c_j = f^{(j)}(\alpha)/(j!\, f'(\alpha))$.

Proof. See [20].
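Although the proof is referred to [20], the fourth order of (3.4) can be checked symbolically in a few lines; the sketch below is our own illustration, using a normalized Taylor model of $f$ about $\alpha = 0$ and a symbol g for the free parameter:

f[t_] = fp (t + c2 t^2 + c3 t^3 + c4 t^4 + c5 t^5);   (* f(0) = 0, f'(0) = fp *)
w[e_] = e + f[e];
y[e_] = e - f[e]^2/(f[w[e]] - f[e]);
fd[e_] = (f[e] - f[y[e]])/(e - y[e]) + (f[y[e]] - f[w[e]])/(y[e] - w[e]) -
   (f[e] - f[w[e]])/(e - w[e]) + g (y[e] - e) (y[e] - w[e]);
Series[y[e] - f[y[e]]/fd[e], {e, 0, 3}] // Simplify
(* all terms through e^3 vanish, confirming fourth-order convergence *)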

The given approximation (3.3) of the first derivative of the function in the second step of our cycle can be applied to any optimal second-order derivative-free method to provide a new optimal fourth-order method. For example, if one chooses a two-step cycle in which the first derivative of the function in the first step is estimated by a backward finite difference, and then applies the presented approximation in the second step, another novel optimal fourth-order method is obtained. This will be discussed further in the rest of the work.

Remark 3.2. The simplified form of the approximation (3.3) for the derivative in the quotient of Newton's iteration at the second step of our cycle is as follows:
$f'(y_k) \approx f[x_k, y_k] + f[y_k, w_k] - f[x_k, w_k] + \gamma (y_k - x_k)(y_k - w_k). \quad (3.6)$
Inspired by the above approach, we now construct new high-order derivative-free methods, which are optimal as well and can be considered as generalizations of the methods in [2, 20, 21]. To construct such techniques, first we should consider an optimal fourth-order method in the first and second steps of a multistep cycle. Toward this end, we take into consideration the following three-step cycle, in which Newton's method is applied in the third step:
$w_k = x_k + \beta f(x_k), \quad y_k = x_k - \frac{f(x_k)}{f[x_k, w_k]},$
$z_k = y_k - \frac{f(y_k)}{f[x_k, y_k] + f[y_k, w_k] - f[x_k, w_k]},$
$x_{k+1} = z_k - \frac{f(z_k)}{f'(z_k)}. \quad (3.7)$
It is crystal clear that scheme (3.7) is an eighth-order method with five evaluations per iteration. The main question is whether there is any way to keep the order at eight but reduce the number of evaluations while the method remains free from derivatives. Fortunately, by using all four past known data, a very powerful approximation of $f'(z_k)$ can be obtained. Therefore, we approximate $f$ in the domain $D$ by a new polynomial approximation of degree four with a free parameter $\gamma$, as follows:
$f(t) \approx b_0 + b_1 (t - x_k) + b_2 (t - x_k)^2 + b_3 (t - x_k)^3 + \gamma (t - x_k)^4, \quad (3.8)$
where its first derivative takes the following form:
$f'(t) \approx b_1 + 2 b_2 (t - x_k) + 3 b_3 (t - x_k)^2 + 4 \gamma (t - x_k)^3. \quad (3.9)$
Hopefully, we have four known values , and . Thus, by substituting these values in (3.8), we obtain the following linear system of four equations with four unknowns: and consequently we can find the unknown parameters by solving linear system (3.10). We attain
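
The coefficients in (3.11) can be produced symbolically; the following sketch, a lower-degree analogue of Algorithm 1 below with symbolic nodes w, y, z (our own illustration), solves (3.10) for $b_1$, $b_2$, $b_3$ after eliminating $b_0 = f(x)$:

LinearSolve[
  {{w - x, (w - x)^2, (w - x)^3},
   {y - x, (y - x)^2, (y - x)^3},
   {z - x, (z - x)^2, (z - x)^3}},
  {f[w] - f[x] - gamma (w - x)^4,
   f[y] - f[x] - gamma (y - x)^4,
   f[z] - f[x] - gamma (z - x)^4}] // FullSimplify
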
Now, an efficient, accurate, and optimal eighth-order family, which is free from any derivative, can be obtained in the following form:

Remark 3.3. The simplified form of the approximation for the derivative in the quotient of Newton's iteration at the third step of our cycle is as comes next:
$f'(z_k) \approx f[x_k, z_k] + \left( f[w_k, x_k, y_k] - f[w_k, x_k, z_k] - f[x_k, y_k, z_k] \right)(x_k - z_k) + \gamma (z_k - x_k)(z_k - w_k)(z_k - y_k), \quad (3.13)$
wherein, for example, $f[w_k, x_k, y_k] = (f[w_k, x_k] - f[x_k, y_k])/(w_k - y_k)$ denotes the second-order divided difference.
Using Remark 3.3, we can propose the following optimal eighth-order derivative-free iteration with the free parameters $\beta \neq 0$ and $\gamma$:
$w_k = x_k + \beta f(x_k), \quad y_k = x_k - \frac{f(x_k)}{f[x_k, w_k]},$
$z_k = y_k - \frac{f(y_k)}{f[x_k, y_k] + f[y_k, w_k] - f[x_k, w_k]},$
$x_{k+1} = z_k - \frac{f(z_k)}{f[x_k, z_k] + (f[w_k, x_k, y_k] - f[w_k, x_k, z_k] - f[x_k, y_k, z_k])(x_k - z_k) + \gamma (z_k - x_k)(z_k - w_k)(z_k - y_k)}, \quad (3.14)$
and it consists of four evaluations per iteration to reach the efficiency index $8^{1/4} \approx 1.682$. Theorem 3.4 illustrates its error equation and order of convergence. The main contribution of this section lies in the following theorem.
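To make the third-step estimate concrete, the sketch below codes (3.13) via divided differences and compares it with the true derivative at clustered hypothetical nodes; dd, dd3, and est are helper names of our own:

dd[f_, a_, b_] := (f[a] - f[b])/(a - b);                     (* first-order divided difference *)
dd3[f_, a_, b_, c_] := (dd[f, a, b] - dd[f, b, c])/(a - c);  (* second-order divided difference *)
est[f_, x_, w_, y_, z_, gamma_] := dd[f, x, z] +
   (dd3[f, w, x, y] - dd3[f, w, x, z] - dd3[f, x, y, z]) (x - z) +
   gamma (z - x) (z - w) (z - y);
g[t_] := Sin[t] + t^2;                                        (* hypothetical smooth function *)
est[g, 1., 1.02, 0.99, 1.005, 0] - g'[1.005]                  (* tiny interpolation error *)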

Theorem 3.4. Assume that the function $f : D \subseteq \mathbb{R} \to \mathbb{R}$ has a simple root $\alpha \in D$, where $D$ is an open interval. Then the convergence order of the derivative-free iterative method defined by (3.14) is eight, and it satisfies the error equation (3.15).

Proof. To find the asymptotic error constant of (3.14), wherein $e_k = x_k - \alpha$ and $c_j = f^{(j)}(\alpha)/(j!\, f'(\alpha))$, we expand all the terms of (3.14) around the simple root $\alpha$ in the $k$th iterate. Thus, we write the Taylor expansion of $f(x_k)$, which gives (3.16). Accordingly, we attain the expansion (3.17) of $f(w_k)$ around the simple root by using (3.16). Using (3.17) and the first step of (3.14), we attain the error expansion (3.18) for $y_k$. Now, the Taylor expansion of the second step of (3.14), using (3.18), gives us (3.19). A similar Taylor expansion is required for continuing; hence, we write the Taylor expansion of $f(z_k)$, which yields (3.20). Subsequently, for the approximation (3.13) of $f'(z_k)$, we obtain the expansion (3.21). Using (3.21) and the last step of (3.14), we arrive at the error equation (3.15). This manifests that (3.14) is of optimal order eight with four function evaluations per iteration. Hence, the proof is complete, and (3.14) reaches the efficiency index $8^{1/4} \approx 1.682$.
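Expansions of this kind are conveniently reproduced with symbolic software. As a small self-contained check (our own sketch, using a normalized Taylor model of $f$ about $\alpha = 0$), the following Mathematica lines recover the well-known error equation $e_{k+1} = (1 + f'(\alpha)) c_2 e_k^2 + O(e_k^3)$ of Steffensen's method; the same pattern, carried to higher order, verifies (3.15):

f[t_] = fp t + fp c2 t^2 + fp c3 t^3 + fp c4 t^4;    (* f(0) = 0, f'(0) = fp *)
step[e_] = e - f[e]^2/(f[e + f[e]] - f[e]);          (* one Steffensen step *)
Series[step[e], {e, 0, 2}] // FullSimplify           (* leading term: (1 + fp) c2 e^2 *)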

Noting that (3.14) is an extension of the Steffensen and the Ren et al. methods, we should also draw attention to the fact that (3.14) is a generalization of the family given by Zheng et al. in [2]. As a matter of fact, our scheme (3.14) includes two free parameters (more than that of Zheng et al.), which shows the generality of our technique. Clearly, choosing $\gamma = 0$ (and identifying $\beta$ with the parameter of [2]) results in the Zheng et al. family (2.3); thus, it is only a special element of the family (3.14).

Remark 3.5. The introduced approximation (3.13) for the first derivative of the function in the third step of (3.7) can be implemented on any optimal derivative-free fourth-order method without memory to present new optimal eighth-order techniques free from derivatives. Namely, using (2.1) in the first two steps together with (3.13) in the third step results in an optimal eighth-order derivative-free method, which we denote by (3.22), with a corresponding error equation. Moreover, if one chooses any of the derivative-free third-order methods without memory in the first two steps and applies the introduced estimation (3.13), then a sixth-order derivative-free technique is attained. This also shows that the given approximation doubles the convergence rate.

To show the generality of the proposed class, and by using Remark 3.5, in what follows we give some other optimal three-step four-point root solvers without memory. For the first example, by using (11) in [13] in the first two steps, we obtain one such scheme. Using a different fourth-order method according to [13], we get another one, together with its error relation.

We can also propose a general family of three-step iterations using [22] in the first two steps, again corrected in the third step by the estimate (3.13).

Remark 3.6. Using a backward finite difference approximation for the first derivative of the function in the first step of our cycles, that is, taking $w_k = x_k - \beta f(x_k)$, ends in other new methods along the same lines, together with their corresponding error equations.

4. Higher-Order Optimal Schemes

In this section, we pay special heed to generalizing the novel scheme (3.14) by using the same idea. To increase the local order of convergence and the efficiency index further, we should consider cycles in which there are four, five, and so forth, steps. Here, we consider a four-step cycle in which (3.14) constitutes the first three steps and Newton's method is applied in the fourth step; denoting by $s_k$ the outcome of the third step of (3.14), the fourth step reads $x_{k+1} = s_k - f(s_k)/f'(s_k)$, and we call the resulting cycle (4.1).

It is easy to check that (4.1) is a sixteenth-order method with six evaluations per iteration, and it has the efficiency index $16^{1/6} \approx 1.587$. To make (4.1) an optimal derivative-free multipoint technique with $16^{1/5} \approx 1.741$ as its efficiency index, it is required to approximate $f'(s_k)$ effectively. To do this, we estimate $f$ in the domain $D$ by a novel polynomial of degree five, like the similar cases in (3.2) and (3.8), as follows:
$f(t) \approx d_0 + d_1 (t - x_k) + d_2 (t - x_k)^2 + d_3 (t - x_k)^3 + d_4 (t - x_k)^4 + r_5 (t - x_k)^5, \quad (4.2)$
where its first derivative takes the form $f'(t) \approx d_1 + 2 d_2 (t - x_k) + 3 d_3 (t - x_k)^2 + 4 d_4 (t - x_k)^3 + 5 r_5 (t - x_k)^4$. Substituting the known values $f(x_k)$, $f(w_k)$, $f(y_k)$, $f(z_k)$, and $f(s_k)$ in (4.2) gives us the unknown coefficients by solving the system of linear equations (4.3).

Note that, for the case of multipoint methods, it is more desirable to apply symbolic calculations using one of the modern mathematical software packages. Using Mathematica 8 gives us the unknown parameters by the command LinearSolve, as in Algorithm 1.

LinearSolve[
  {{k - x, (k - x)^2, (k - x)^3, (k - x)^4},
   {y - x, (y - x)^2, (y - x)^3, (y - x)^4},
   {z - x, (z - x)^2, (z - x)^3, (z - x)^4},
   {w - x, (w - x)^2, (w - x)^3, (w - x)^4}},
  {f[k] - f[x] - r5 (k - x)^5, f[y] - f[x] - r5 (y - x)^5,
   f[z] - f[x] - r5 (z - x)^5, f[w] - f[x] - r5 (w - x)^5}] // FullSimplify

Here, to cut a long story short, we just provide the following theorem.

Theorem 4.1. The four-step method (4.4), whose first three steps are the optimal eighth-order method (3.14) and whose fourth step replaces $f'(s_k)$ by the derivative of (4.2), converges to the simple root $\alpha$ of $f(x) = 0$ in the domain $D$ with local sixteenth order of convergence; the coefficients $d_1$, $d_2$, $d_3$, and $d_4$ are given by solving the linear system (4.3).

Proof. The proof of this theorem is similar to the proofs of Theorems 3.1 and 3.4. Hence, it is omitted.

Remark 4.2. The simplified form of the approximation for the derivative in the quotient of Newton's iteration at the fourth step of our cycle (without the index $k$, and with the letters $x$, $k$, $y$, $z$, $w$ as in Algorithm 1) is as comes next:
$f'(w) \approx f[w, z] + f[w, z, y](w - z) + f[w, z, y, x](w - z)(w - y) + f[w, z, y, x, k](w - z)(w - y)(w - x) + r_5 (w - x)(w - k)(w - y)(w - z).$

Remark 4.3. Note that if we choose another optimal derivative-free eighth-order method in the first three steps of (4.4) and then implement the introduced approximation in the fourth step, we will obtain a new technique of optimal sixteenth-order convergence rate for solving nonlinear scalar equations. Some such optimal 16th-order schemes can be constructed along the same lines.

Remark 4.4. If one considers an $n$-step derivative-free construction in which the first $n - 1$ steps are any of the existing optimal derivative-free methods of order $2^{n-1}$, then, by considering a polynomial approximation of degree $n + 1$ (with one free parameter, as in the idea above), an optimal derivative-free method of order $2^n$ with the optimal efficiency index $2^{n/(n+1)}$ will be attained.
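The resulting optimal indices approach 2 as $n$ grows, as the following one-liner (our own illustration) tabulates:

Table[{n, N[2^(n/(n + 1)), 4]}, {n, 1, 6}]
(* {{1, 1.414}, {2, 1.587}, {3, 1.682}, {4, 1.741}, {5, 1.782}, {6, 1.811}} *)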

Now, let us list the efficiency indices of some of the well-known methods together with the schemes contributed in this study. The comparisons are made in Table 1. As it shows, our schemes are better than the existing methods in the literature in terms of the efficiency index.

5. Computational Tests

The practical utility of the main contributions of this paper is shown in this section by solving a couple of numerical examples and comparing with other well-known methods of different orders. The main purpose of demonstrating the new derivative-free methods for nonlinear equations is to illustrate the accuracy of the approximate solution, the stability of the convergence, and the consistency of the results, and to determine the efficiency of the new iterative techniques.

We have computed the root of each test function for an initial approximation in a neighborhood of it. We remark that we have chosen for comparison only the methods which do not require the computation of derivatives of the function to carry out iterations. We have used the fourth-order scheme of Ren et al. (RM4), the eighth-order derivative-free family of Zheng et al. (2.3), the eighth-order derivative-free method of Soleymani (2.4), and our novel contributed techniques (3.14), (3.22), and (3.30), each with suitable choices of the free parameters. We here also recall the eighth-order member of the family of derivative-free methods given by Kung and Traub (KT8) [3], which we have used in the numerical comparisons. It is completely clear that (4.4) performs better than any of the other methods; hence, we do not include it in the numerical comparisons. The test functions and their roots are given in Table 2.

The results are summarized in Tables 3 and 4 in terms of the number of iterations required to obtain an approximation of the root. In our numerical comparisons, we have used 2000-digit floating-point arithmetic.
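When such tables are produced, the computational order of convergence is a useful companion statistic; a sketch of the standard three-error formula (our own addition, not one of the paper's tables) is:

(* approximate computational order of convergence from three successive errors *)
coc[e1_, e2_, e3_] := Log[e3/e2]/Log[e2/e1];
coc[10.^-2, 10.^-16, 10.^-128]   (* returns 8., as expected for an eighth-order method *)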

As Tables 3 and 4 manifest, our contributed methods are accurate and efficient in contrast with the other high-order schemes in terms of the required number of iterations. It is well known that the convergence of an iteration formula is guaranteed only when the initial approximation is sufficiently near to the root; this issue forms the main body of the next section. Moreover, our class of derivative-free methods shows a better convergence behavior in contrast to the existing high-order derivative-free methods in the literature.

6. A Hybrid Algorithm to Capture All the Solutions

An important aspect of implementing iterative methods for the solution of nonlinear equations and systems relies on the choice of the initial approximation. There are a few known ways in the literature [23] to extract a starting point for the solutions of nonlinear functions. In practice, users need to find robust approximations for all the zeros in an interval. Thus, to remedy this and to respond to this need, we provide a way to extract all the real zeros of a nonlinear function in the interval $[a, b]$. We use the command Reduce in MATHEMATICA 8 [24, 25]. Hence, we give a hybrid algorithm including two main steps, a predictor and a corrector. In the predictor step, we extract initial approximations for all the zeros in the interval up to 8 decimal places. Then, in the corrector step, an eighth-order method, for example (3.28), is used to boost the accuracy of the starting points up to any tolerance. We also give some significant cautions for applying the algorithm to different test functions.

In what follows, we keep going by choosing an oscillatory function in the domain $D = [0, 15]$.

Let us define the function and the domain for imposing the command Reduce, as in Algorithm 2.

ClearAll[f, x, rts, initialValues, tol, a, b, j,
  NumberOfGuesses, FD, nMax, beta, gamma, y, z, k,
  fx, fk, fy, fz]
f[x_] := Log[x/7] - Cos[x^2 - 2] + 1/10
a = 0; b = 15;
Plot[f[x], {x, a, b}, Background -> LightBlue,
  PlotStyle -> {Brown, Thick}, PlotRange -> All,
  PerformanceGoal -> "Quality"]
rts = Reduce[f[x] == 0 && a < x < b, x];

One may note that Reduce works with functions in exact arithmetic. Hence, if a nonlinear function is given in floating-point arithmetic, that is, it has inexact coefficients, we should rewrite it in exact arithmetic (for instance, by using the command Rationalize) when we enter it into the above piece of code.
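For instance, a hypothetical function with the inexact coefficient 0.1 can be made exact before calling Reduce:

g[x_] := Rationalize[0.1, 0] - Cos[x];   (* 0.1 becomes the exact 1/10 *)
Reduce[g[x] == 0 && 0 < x < 10, x]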

Now we store the list of initial approximations in initialValues by the following piece of code, which also sorts the initial points. The tolerance tol specifies that each member of the provided sequence is correct up to at most tol decimal places (Algorithm 3).

tol = 8;
initialValues = Sort[N[x /. {ToRules[rts]}, tol]];
NumberOfGuesses = Length[initialValues]
Accuracy[initialValues]

It is obvious that $f$ is quite oscillatory, and by the above predictor piece of Mathematica code we find that it has 69 real solutions. Note that the graph of the function has been drawn in Figure 1. Now it is time to use one of the derivative-free methods given in Section 2 or 3 for correcting the elements of the obtained list. We use the method (3.28) in Algorithm 4.

nMax = 2; beta = SetAccuracy[1, 2000];
gamma = SetAccuracy[0, 2000];
For[j = 1, j <= NumberOfGuesses, j++,
  x = SetAccuracy[initialValues[[j]], 2000];
  Do[
    fx = SetAccuracy[f[x], 2000];
    k = SetAccuracy[x + beta fx, 2000];
    fk = SetAccuracy[f[k], 2000];
    FD = SetAccuracy[(fk - fx)/(beta fx), 2000];
    y = SetAccuracy[x - fx/FD, 2000];
    fy = SetAccuracy[f[y], 2000];
    z = SetAccuracy[y - (fy/FD)*(1 + ((2 + beta FD)/(1
          + beta FD)) (fy/fx)), 2000];
    fz = SetAccuracy[f[z], 2000];
    FDxz = SetAccuracy[(fz - fx)/(z - x), 2000];
    FDxy = SetAccuracy[(fx - fy)/(x - y), 2000];
    FDxk = SetAccuracy[(fk - fx)/(k - x), 2000];
    x = SetAccuracy[z - fz/(FDxz + ((FDxk - FDxy)/(k - y)
         - (FDxk - FDxz)/(k - z)
         - (FDxy - FDxz)/(y - z))*(x - z)
         + gamma (z - x)*(z - k)*(z - y)), 2000];
    , {n, nMax}];
  Print[Column[{
     N[x, 128],
     N[f[x]]},
    Background -> {LightGreen, LightGray},
    Frame -> True]]
];

In Algorithm 4, we have the ability to choose the nonzero free parameter beta, as well as the parameter gamma. Note that for optimal eighth-order methods and with such an accurate list of initial points, we believe that only two full iterations are required; hence, we have selected nMax = 2. If a user needs much more accuracy, a higher number of steps should be taken. It should be remarked that, in order to work with such high accuracy, we must choose more than 2000 decimal places of arithmetic in our calculations.

Due to page limitations, we cannot list the results for all 69 zeros. However, running the above algorithm captures all the real zeros of the nonlinear function. We also remark on three points. First, for many oscillatory functions or for nonsmooth functions, the best way is to first divide the whole interval into some subintervals and then find all the zeros of the function on each subinterval. Second, in case of a root cluster, that is, when the zeros are concentrated in a very small area, it is better to increase the first tolerance of our algorithm in the predictor step, so as to find reliable starting points, and then start the process. And last, if the nonlinear function has an exact solution, that is to say, if an integer is the solution, then the first step of our algorithm finds this exact solution, and an error-like message will be generated by applying our second step.
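As a sketch of the first remark (our own illustration, with hypothetical cut points for the interval of Algorithm 2), the predictor can be run piecewise and the partial lists merged:

ClearAll[x];   (* x may hold a value after Algorithm 4 *)
pieces = {{0, 5}, {5, 10}, {10, 15}};
redList = Reduce[f[x] == 0 && #[[1]] < x < #[[2]], x] & /@ pieces;
initialValues = Sort[Flatten[
    N[x /. {ToRules[#]}, 8] & /@ Select[redList, # =!= False &]]]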

For instance, the second test function we considered, whose plot is given in Figure 2, has 319 real solutions on its interval, one of which, that is, $x = 2$, is an exact one. Thus, the first step of the previously mentioned algorithm finds a very efficient list of starting points, in which 2 is the exact solution.

Now we are able to solve nonlinear equations with finitely many roots in an interval and find all the real zeros quickly. Finding robust ways to capture the complex solutions when working with complex nonlinear functions can be taken into account as a future work.

7. Concluding Comments

In a word, a general way for producing optimal methods based upon the still unproved hypothesis of Kung and Traub [3] has been discussed fully. In these methods, it is necessary to begin with one initial approximation $x_0$. The convergence results for the iterative sequences generated by the new algorithms were provided. Our contributed techniques without memory reach the optimal efficiency indices $8^{1/4} \approx 1.682$ and $16^{1/5} \approx 1.741$.

In the limiting case, based on a polynomial of degree $n + 1$ for a method with $n$ steps, we can obtain an iterative scheme with efficiency index $2^{n/(n+1)}$, which tends to 2, the highest possible efficiency in this field of study for methods without memory. By the numerical experiments above, we verify that the new methods are effective and accurate. We conclude that the new methods presented in this paper are competitive with the other recognized efficient equation solvers.

Note that the construction of $n$-step Steffensen-type methods with memory, with order up to $3 \cdot 2^{n-1}$ (an improvement of 50%) requiring the same computational cost and applying an elegant and quite natural accelerating procedure, could be considered as a future work.

Note that the second aim of this paper was achieved by presenting a robust algorithm for solving nonlinear equations with finitely many roots in an interval. In fact, a way for extracting very robust initial approximations, as the seeds to start the iterative Steffensen-type methods, was presented. Now we are able to capture all the solutions with any desired number of decimal places.

Acknowledgment

The second author is an IEEE member.