Abstract

The problem considered in this paper is to approximate the simple zeros of a nonlinear function by iterative processes. An optimal 16th order class is constructed. The class is built by considering any of the optimal three-step derivative-involved methods in the first three steps of a four-step cycle, in which the first derivative of the function at the fourth step is estimated by a combination of already known values. Per iteration, each method of the class reaches the efficiency index $16^{1/5} \approx 1.741$ by carrying out four evaluations of the function and one evaluation of the first derivative. The error equation for one technique of the class is furnished analytically. Some methods of the class are tested against existing high-order methods. The interval Newton's method is given as a tool for extracting sufficiently accurate initial approximations with which to start such high-order methods. The obtained numerical results show that the derived methods are accurate and efficient.

1. Introduction

Consider a sufficiently differentiable scalar function $f : D \subseteq \mathbb{R} \to \mathbb{R}$. We assume that $\alpha$ is a simple zero of this nonlinear function, that is, $f(\alpha) = 0$ and $f'(\alpha) \neq 0$. There is a vast literature on approximating simple zeros of such nonlinear functions by iterative processes; see, for example, [1-3]. In 1974, Kung and Traub [4] conjectured that a multipoint iterative method without memory for solving single-variable nonlinear equations, consisting of $n + 1$ evaluations per iteration, has the maximal convergence rate $2^n$ and, subsequently, the maximal efficiency index $2^{n/(n+1)}$. Taking this concept into consideration, many authors, see, for example, [5-7], have tried to produce optimal multistep methods.
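As a quick check of these bounds, the following short Mathematica computation (a sketch using only built-in functions; the table itself is not part of the original text) tabulates the conjectured maximal order $2^n$ and efficiency index $2^{n/(n+1)}$ for $n + 1 = 2, \ldots, 5$ evaluations per iteration:

(* conjectured maximal order and efficiency index for n+1 evaluations *)
Table[{n + 1, 2^n, N[2^(n/(n + 1)), 4]}, {n, 1, 4}]
(* -> {{2, 2, 1.414}, {3, 4, 1.587}, {4, 8, 1.682}, {5, 16, 1.741}} *)

The last row corresponds to the five evaluations per cycle used by the class derived in this paper.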

Kung and Traub in [4] presented an optimal $n$-point iterative process consisting of $n + 1$ evaluations per full cycle: starting with an initial point $x_0$, each new substep is obtained by evaluating at zero the inverse Hermite interpolatory polynomial fitted to all of the previously computed data.
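In outline, inverse interpolation rests on the identity $\alpha = f^{-1}(0)$: a polynomial $R$ agreeing with $f^{-1}$ at the available data yields the next substep upon evaluation at zero. A minimal sketch of this idea, written in our own notation rather than that of [4], reads

% R matches the inverse function at the computed points, plus one
% Hermite condition from the single derivative evaluation at y_0
R\bigl(f(y_j)\bigr) = y_j \quad (j = 0, 1, \ldots, k), \qquad
R'\bigl(f(y_0)\bigr) = \frac{1}{f'(y_0)}, \qquad
y_{k+1} = R(0) \approx f^{-1}(0) = \alpha .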

Recently, Neta and Petković [8] again applied the concept of inverse interpolation to approximate the first derivative of the function in the third and fourth steps of three- and four-step cycles. They obtained an optimal 8th order technique of the following structure:

They also presented an optimal 16th order technique, referred to below as (1.5), of a similar structure consisting of four evaluations of the function and one evaluation of the first derivative, wherein the weights and abbreviations (without the index $n$) are as given in [8].

For further reading, one may refer to [9, 10]. The contents of the paper are summarized in what follows. In the next section, our novel contribution is constructed by considering an optimal eighth-order method in the first three steps of a four-step cycle, in which the derivative in the quotient of the new Newton's iteration is estimated such that the order remains at 16, that is, optimal according to the Kung-Traub hypothesis. The new class of methods is supported by a detailed proof in this section to verify the construction theoretically.

Section 3 then reports numerical experiments; it will be observed that the computed results listed in Table 2 completely support the convergence theory and the efficiency analysis discussed in the paper. Section 4 recalls the well-known interval Newton method, implemented in the programming package Mathematica 8, as a way to extract sufficiently accurate initial guesses with which to start the process. Finally, a short conclusion is given in the last section.

2. Main Results

In this section, we derive our main results by providing a class of four-step iterative methods which agrees with the Kung-Traub hypothesis. In order to build the class, we consider the following four-step four-point iteration, in which the first three steps are any of the optimal three-step three-point derivative-involved methods without memory:
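In outline (a reconstruction consistent with the description above; the paper's (2.1) fixes the precise substeps), the skeleton reads

% w_n is produced by any optimal three-step eighth-order method,
% consuming f(x_n), f'(x_n), f(y_n), f(z_n); the fourth step is Newton's
\begin{aligned}
w_n &= \psi_8(x_n, y_n, z_n),\\
x_{n+1} &= w_n - \frac{f(w_n)}{f'(w_n)}.
\end{aligned}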

It can be seen that, per full cycle, the structure (2.1) includes four evaluations of the function and two evaluations of the first derivative to reach the convergence rate 16. We should therefore approximate the first derivative of the function at the fourth step in such a way that the convergence order does not decrease. In order to do this effectively, we have to use all of the known data from the past steps, that is, $f(x_n)$, $f'(x_n)$, $f(y_n)$, $f(z_n)$, and $f(w_n)$. Herein, we take into account the nonlinear fraction $m(t)$ given by (without the index $n$)

This nonlinear fraction is inspired, in essence, by the Padé approximant. The approximating function should meet the interpolation conditions $m(x_n) = f(x_n)$, $m'(x_n) = f'(x_n)$, $m(y_n) = f(y_n)$, $m(z_n) = f(z_n)$, and $m(w_n) = f(w_n)$. Note that the first derivative of (2.2) takes the following form:

Substituting the known data in (2.2) and (2.3) results in the five unknown parameters. The condition at $w_n$ determines one parameter immediately; hence we obtain the following system of four linear equations with four unknowns: Solving (2.4) and simplifying (without the index $n$ and using divided differences of the available data) yields wherein the abbreviations denote the corresponding divided differences. Now we have a powerful approximation of the first derivative of the function in the fourth step of (2.1), which doubles the convergence rate of the optimal 8th order methods. Therefore, we attain the following class, in which we have four evaluations of the function and one evaluation of the first-order derivative:
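The construction just described can be reproduced symbolically in Mathematica. The sketch below assumes, purely for illustration, a fraction with a linear numerator and a cubic denominator; (2.2) fixes the particular form used in the paper, so the formula produced here is an analogue of (2.5), not a transcription of it. The symbols x, y, z, w, fx, dfx, fy, fz, fw stand for $x_n, y_n, z_n, w_n, f(x_n), f'(x_n), f(y_n), f(z_n), f(w_n)$:

(* assumed Pade-type form: linear numerator over cubic denominator *)
num[t_] := a1 + a2 (t - w);
den[t_] := 1 + a3 (t - w) + a4 (t - w)^2 + a5 (t - w)^3;
(* the five interpolation conditions in denominator-cleared, linear form;
   the derivative condition uses m'(x) den(x) = num'(x) - f(x) den'(x) *)
conds = {num[x] == fx den[x],
   num'[x] == dfx den[x] + fx den'[x],
   num[y] == fy den[y],
   num[z] == fz den[z],
   num[w] == fw den[w]}; (* the last condition reduces to a1 == fw *)
sol = First@Solve[conds, {a1, a2, a3, a4, a5}];
(* m'(w) stands in for f'(w_n) in the fourth, Newton-type step *)
dfw = Simplify[(num'[w] den[w] - num[w] den'[w])/den[w]^2 /. sol]

Since m(w) = a1, one parameter is fixed at once, and the remaining four conditions are linear in the other parameters, exactly as in (2.4).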

Now, using any of the optimal three-point three-step methods, we can obtain a novel 16th order technique which satisfies the Kung-Traub hypothesis as well. Using the optimal 8th order method given by Wang and Liu in [11] results in the following four-step method:

Theorem 2.1. Assume that the scalar function $f$ is sufficiently smooth in the real open domain $D$, and let $\alpha \in D$ be a simple zero of $f$. Then, for an initial approximation $x_0$ sufficiently close to $\alpha$ and $n \in \mathbb{N}$, the set of natural numbers, the iterative method (2.7) has the optimal order of convergence 16 and satisfies the following error equation:

Proof. Writing $e_n = x_n - \alpha$ and $c_j = f^{(j)}(\alpha)/(j!\,f'(\alpha))$, the Taylor series expansion around the simple root in the $n$th iterate [12] results in $f(x_n) = f'(\alpha)[e_n + c_2 e_n^2 + c_3 e_n^3 + \cdots]$ and also $f'(x_n) = f'(\alpha)[1 + 2 c_2 e_n + 3 c_3 e_n^2 + \cdots]$. Applying these expansions in the first step of (2.7), we have Note that, to keep the prerequisites to a minimum, we only mention the simplified error equations at the end of the first, second, and third steps, since we already know that the first three steps form an eighth-order technique. Taylor series expansion in the second step of (2.7), by applying (2.9), yields Using (2.10) and the third step of (2.7) gives us Now, expanding the function value at the fourth point, we obtain Subsequently, in the last step we have Finally, using (2.12) and (2.13) in the last step of (2.7) ends in which shows that (2.7) is a 16th order method consuming only five evaluations per iteration. This completes the proof.
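The mechanics of such a proof are easily automated with symbolic computation. The following Mathematica sketch illustrates the technique on the plain Newton step (order 2) rather than on (2.7), whose full expansion is very lengthy; here e stands for the error $e_n = x_n - \alpha$, fpa for $f'(\alpha)$, and c2, c3, c4 for the quotients $c_j = f^{(j)}(\alpha)/(j!\,f'(\alpha))$:

(* truncated Taylor expansions of f and f' about the simple root *)
fs[e_] := fpa (e + c2 e^2 + c3 e^3 + c4 e^4);
dfs[e_] := fpa (1 + 2 c2 e + 3 c3 e^2 + 4 c4 e^3);
(* error after one Newton step, expanded in powers of e_n *)
Series[e - fs[e]/dfs[e], {e, 0, 4}] // Simplify
(* leading term c2 e^2 confirms second-order convergence *)

Replacing the Newton step by the substeps of (2.7) and expanding to order 16 reproduces, step by step, the error equations quoted in the proof.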

Remark 2.2. If we choose any other optimal 8th order derivative-involved method in the first three steps of (2.6), then a novel optimal 16th order technique will be obtained. For instance, using the optimal 8th order method given by J. R. Sharma and R. Sharma in [13] results in the following four-step optimal method: This method satisfies the following error equation:

Remark 2.3. Any method of the developed class carries out five evaluations per full cycle to reach the optimal order 16. Hence, the efficiency index of our class is $16^{1/5} \approx 1.741$, which is greater than that of the optimal 4th order techniques, $4^{1/3} \approx 1.587$, and of the optimal 8th order techniques, $8^{1/4} \approx 1.682$.

3. Computational Experiments

The analytical outcomes given in the last section are fully supported by numerical experiments here. Two methods of the class (2.6), namely (2.7), denoted S-S 16 (I), and (2.15), denoted S-S 16 (II), are compared with some existing high-order methods.

For numerical comparisons, we use the 16th-order method of Geum and Kim [14], given as follows: wherein the weight functions and parameters (without the index $n$) are as specified in [14]. This method is denoted by G-K 16. We also used the 16th order method (1.5) obtained by Neta and Petković, denoted by N-P 16. We do not include schemes of other orders; one may refer to [15-21] for further information about other recent methods. Among many numerical examples, we have selected ten instances, computed using multiprecision arithmetic. The test functions are displayed in Table 1.

For the comparisons, 4000-digit floating-point arithmetic has been used. As Table 2 shows, any method of the presented class is more efficient than the existing high-order methods in the literature. The results in Table 2 manifest the applicability of the new scheme on the test problems. Also notice that F. stands for failure, that is, the starting value is not sufficiently close to the zero for the iterative method to converge.

We have checked in our numerical work that the sequence of iterations converges to an approximation of the solution of each nonlinear equation. We note that the important problem of determining good starting points appears when applying iterative methods for solving nonlinear equations. Quick convergence, one of the advantages of multipoint methods, can be attained only if the initial approximations are sufficiently close to the sought roots; otherwise, it is not possible to realize the expected convergence speed in practice. To achieve such a goal, we recall the well-known interval Newton's method as a tool for obtaining robust initial approximations in the next section.

4. All the Zeros

In this section, we pay close attention to the matter of finding all the (simple) zeros of a nonlinear function in an interval using a hybrid algorithm. In practice, one wishes to find all the zeros at the same time. On the other hand, an iterative scheme such as (2.7) is quite sensitive to the choice of the initial guess. Generally speaking, no method is the best iterative scheme on all test problems; mostly, this depends on the test function and on the accuracy of the initial approximation.

With a sufficiently sharp initial estimate, one may start the process as efficiently as possible and observe the computational order of convergence correctly, that is, the rate at which the number of correct decimal places of the approximation grows. However, obtaining such an initial guess is not always an easy task for practical problems.
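The computational order of convergence can be estimated from four successive iterates by the standard formula from the literature (not specific to this paper); a one-line Mathematica helper reads

(* COC estimated from four successive iterates x1, x2, x3, x4 *)
coc[x1_, x2_, x3_, x4_] :=
  Log[Abs[(x4 - x3)/(x3 - x2)]]/Log[Abs[(x3 - x2)/(x2 - x1)]]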

In what follows, we recall a well-known scheme based on [22], also known as interval Newton's method, for finding sufficiently accurate initial approximations to all the zeros of a nonlinear function in an interval. Fortunately, Mathematica 8 gives users a friendly environment for working with lists (sequences); that is to say, the obtained list can then be corrected using the new 16th order schemes of this paper.

This procedure was efficiently coded in [22]; with some changes, we provide it in what follows:

intervalnewton::rec = "MaxRecursion exceeded.";

intnewt[f_, df_, x_, {a_, b_}, eps_, n_] :=
  Block[{xmid, int = Interval[{a, b}]},
   (* stop once the enclosing interval is narrower than the tolerance *)
   If[b - a < eps, Return[int]];
   If[n == 0, Message[intervalnewton::rec]; Return[int]];
   xmid = Interval[SetAccuracy[(a + b)/2, 16]];
   (* one interval Newton step: intersect int with xmid - f(xmid)/f'(int) *)
   int = IntervalIntersection[int,
     SetAccuracy[xmid - N[f /. x -> xmid]/N[df /. x -> int], 16]];
   (* recurse on each connected component of the intersection *)
   (intnewt[f, df, x, #, eps, n - 1]) & /@ (List @@ int)];

Options[intervalnewton] = {MaxRecursion -> 2000};

The above piece of code takes the nonlinear function $f$, its derivative, and the lower and upper bounds of the working interval, and applies the interval form of Newton's iteration with the maximum number of recursions set to 2000, together with the command SetAccuracy[exp, 16]. Note that in this case, as for Newton's method, the function should have a first-order derivative on the interval. The driver function implementing the above piece of code can then be given by

intervalnewton[f_, x_, int_Interval, eps_, opts___] :=
  Block[{df, n},
   n = MaxRecursion /. {opts} /. Options[intervalnewton];
   df = D[f, x]; (* symbolic derivative used inside the interval step *)
   (* keep only the subintervals whose image under f contains zero *)
   IntervalUnion @@ Select[
     Flatten[(intnewt[f, df, x, #, eps, n]) & /@ (List @@ int)],
     IntervalMemberQ[f /. x -> #, 0] &]];

The tolerance can be chosen arbitrarily within machine precision. To illustrate the procedure further, we consider the oscillatory nonlinear function $f(x) = e^{\sin(\ln(x)\cos(20x))} - 2$, whose plot is given in Figure 1, on the interval $[2, 10]$, with the tolerance set to 0.0001, as follows:

f[x_] := Exp[Sin[Log[x]*Cos[20 x]]] - 2;
IntervalSol = intervalnewton[f[x], x, Interval[{2., 10.}], .0001];
setInitial = N[Mean /@ List @@ IntervalSol] (* midpoints as initial guesses *)
NumberOfGuesses = Length[setInitial];

The implementation of the above part of the code shows that the number of zeros is 51, while the list of initial approximations is {2.18856, 2.2125, 2.48473, 2.54372, 2.791, 2.86524, 3.10023, 3.18401, 3.41091, 3.50142, 3.72247, 3.81803, 4.03459, 4.13411, 4.3471, 4.44984, 4.65989, 4.7653, 4.9729, 5.08056, 5.28606, 5.39566, 5.59937, 5.71065, 5.91277, 6.02554, 6.22625, 6.34035, 6.53981, 6.65509, 6.85342, 6.96977, 7.16709, 7.28441, 7.4808, 7.599, 7.79455, 7.91356, 8.10833, 8.22809, 8.42214, 8.54259, 8.73598, 8.85707, 9.04983, 9.17152, 9.36371, 9.48595, 9.67761, 9.80037, 9.99152}.

We remark that, by now applying the new optimal 16th order methods from the class (2.6), one may enrich the accuracy of these initial guesses up to the desired tolerance whenever high-precision computing is needed.
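As a minimal sketch of this refinement, the snippet below polishes each extracted guess at 200 significant digits; a plain Newton update stands in for a 16th order method of the class (2.6), and newtonPolish is our own hypothetical helper, not part of the code of [22]:

(* refine each guess by k Newton steps at high working precision;
   substitute a method from class (2.6) for the update as desired *)
newtonPolish[g_, x_, x0_, k_] :=
  Module[{dg = D[g, x]},
   Nest[# - (g /. x -> #)/(dg /. x -> #) &, SetPrecision[x0, 200], k]];
refined = newtonPolish[f[x], x, #, 8] & /@ setInitial;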

5. Conclusions

A class of four-point four-step iterative methods has been developed for solving nonlinear equations. The analytical proof for one method of the class was written to clarify the 16th order convergence. The class was attained by approximating the first derivative of the function in the fourth step of a cycle in which the first three steps are any of the optimal eighth-order derivative-involved methods.

Per full cycle, the methods of the class consist of four evaluations of the function and one evaluation of the first derivative, which results in the efficiency index $16^{1/5} \approx 1.741$. The presented class attains the bound of the still unproved conjecture of Kung and Traub on multipoint iterations without memory.

The accuracy and efficiency of two methods from the class were illustrated by solving a number of numerical examples. The numerical work also attests to the theoretical results given in the paper and shows the fast rate of convergence.

Acknowledgment

The authors gratefully acknowledge the remarks of the two reviewers on an earlier version of this paper.