
Journal of Applied Mathematics

Volume 2012 (2012), Article ID 958020, 13 pages

http://dx.doi.org/10.1155/2012/958020

## Computing Simple Roots by an Optimal Sixteenth-Order Class

^{1}Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan, Iran

^{2}Department of Mathematics, University of Venda, Private Bag X5050, Thohoyandou 0950, South Africa

^{3}Department of Civil Engineering, Islamic Azad University, Zahedan Branch, Zahedan, Iran

Received 21 August 2012; Revised 6 October 2012; Accepted 7 October 2012

Academic Editor: Changbum Chun

Copyright © 2012 F. Soleymani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The problem considered in this paper is the approximation of simple zeros of a nonlinear function by iterative processes. An optimal sixteenth-order class is constructed. The class is built by taking any of the optimal three-step derivative-involved methods as the first three steps of a four-step cycle, in which the first derivative of the function at the fourth step is estimated by a combination of already known values. Per iteration, each method of the class reaches the efficiency index $16^{1/5}\approx 1.741$ by carrying out four evaluations of the function and one evaluation of the first derivative. The error equation for one technique of the class is furnished analytically. Some methods of the class are tested against existing high-order methods. The interval Newton method is given as a tool for extracting sufficiently accurate initial approximations to start such high-order methods. The obtained numerical results show that the derived methods are accurate and efficient.

#### 1. Introduction

Let $f$ be a sufficiently differentiable scalar function and let $\alpha$ be a simple zero of $f$, that is, $f(\alpha)=0$ and $f'(\alpha)\neq 0$. There is a vast literature on approximating simple zeros of such nonlinear functions by iterative processes; see, for example, [1–3]. In 1974, Kung and Traub [4] conjectured that a multipoint iterative method without memory for solving single-variable nonlinear equations, consisting of $n$ evaluations per iteration, has maximal convergence rate $2^{n-1}$ and, consequently, maximal efficiency index $2^{(n-1)/n}$. Taking this concept into consideration, many authors, see, for example, [5–7], have tried to produce optimal multistep methods.
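As a quick numerical illustration of the conjecture (a minimal Python sketch of our own, not part of the original paper), the conjectured optimal order and efficiency index can be tabulated for small numbers of evaluations:

```python
# Kung-Traub conjecture: an n-evaluation multipoint method without memory
# has convergence order at most 2**(n - 1), so its efficiency index is
# at most 2**((n - 1)/n) = (2**(n - 1))**(1/n).
def optimal_order(n_evals):
    return 2 ** (n_evals - 1)

def efficiency_index(n_evals):
    return optimal_order(n_evals) ** (1.0 / n_evals)

for n in range(2, 6):
    print(n, optimal_order(n), round(efficiency_index(n), 3))

# n = 5 (four function values plus one derivative, as in the class
# derived in this paper) gives order 16 and efficiency index
# 16**(1/5), about 1.741.
```
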

Kung and Traub in [4] presented an optimal multipoint iterative process in which, at each step, the new approximation is obtained from the inverse Hermite interpolatory polynomial built on all previously computed values.

This technique is defined by starting with an initial point $x_0$. Recently, Neta and Petković [8] again applied the concept of inverse interpolation to approximate the first derivative of the function in the third and fourth steps of a three- and four-step cycle, respectively. They obtained an optimal eighth-order technique in this way.

They also presented an optimal sixteenth-order technique of the same structure, consisting of four evaluations of the function and one evaluation of the first derivative per full cycle.

For further reading, one may refer to [9, 10]. The contents of the paper are summarized as follows. In the next section, our novel contribution is constructed by considering an optimal eighth-order method in the first three steps of a four-step cycle, in which the derivative in the quotient of the new Newton-type step is estimated so that the order remains 16, that is, optimal according to the Kung-Traub hypothesis. The new class of methods is supported with a detailed proof in that section to verify the construction theoretically.

Then, in Section 3, it will be observed that the computed results listed in Table 2 fully support the convergence and efficiency analyses discussed in the paper. Section 4 recalls the well-known *interval Newton method*, implemented in the programming package Mathematica 8, as a tool for extracting sufficiently accurate initial guesses to start the process. Finally, a short conclusion is given in the last section.

#### 2. Main Results

In this section, we derive our main results by providing a class of four-step iterative methods which agrees with the Kung-Traub hypothesis. In order to build the class, we consider the following four-step four-point iteration, in which the first three steps are any of the optimal three-step three-point derivative-involved methods without memory:

It can be seen that each full cycle of the structure (2.1) includes four evaluations of the function and two evaluations of the first derivative to reach the convergence rate 16. We should therefore approximate the first derivative of the function at the fourth step in such a way that the convergence order does not decrease. To do this effectively, we have to use all of the known data from the previous steps, that is, the four function values and the one derivative value already computed. Herein, we take into account the following nonlinear fraction (without the index $n$):

This nonlinear fraction is inspired in essence by Padé approximants. The approximating function should meet the five interpolation conditions given by the known function values and the known derivative value. Note that the first derivative of (2.2) takes the following form:

Substituting the known data in (2.2) and (2.3) results in the five unknown parameters. One parameter is obtained immediately; hence we obtain the following system of four linear equations in four unknowns: Solving (2.4) and simplifying (without the index $n$ and using divided differences) yields the remaining coefficients. Now we have a powerful approximation of the first derivative of the function in the fourth step of (2.1), which doubles the convergence rate of the optimal eighth-order methods. Therefore, we attain the following class, in which we have four evaluations of the function and one evaluation of the first-order derivative:
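The paper's specific rational (Padé-type) fraction is not reproduced here, but the underlying idea — recovering the derivative at the newest node from already computed values via divided differences — can be sketched with an illustrative polynomial analogue in Python. This is a hypothetical helper of our own: it assumes the nodes are listed oldest to newest, with the derivative known only at the first node.

```python
import math

def dd_fprime_at_last(nodes, fvals, fp0):
    """Estimate f'(nodes[-1]) from four function values and one derivative
    value, using a generalized (Hermite) divided-difference table: the
    first node is repeated so that the confluent entry encodes fp0."""
    z = [nodes[0]] + list(nodes)              # [x, x, y, z, w]
    v = [fvals[0]] + list(fvals)
    n = len(z)
    table = [v[:]]
    for k in range(1, n):
        row = []
        for i in range(n - k):
            if z[i + k] == z[i]:              # confluent entry (k == 1 only)
                row.append(fp0)
            else:
                row.append((table[k - 1][i + 1] - table[k - 1][i])
                           / (z[i + k] - z[i]))
        table.append(row)
    # Differentiate the Newton form P(t) = sum_k c_k * prod_{j<k}(t - z_j)
    # at t = nodes[-1] via the product rule; c_k = table[k][0].
    t = nodes[-1]
    deriv = 0.0
    for k in range(1, n):
        s = 0.0
        for omit in range(k):
            prod = 1.0
            for j in range(k):
                if j != omit:
                    prod *= t - z[j]
            s += prod
        deriv += table[k][0] * s
    return deriv

# Example: estimate sin'(1.12) from values at 1.0, 1.05, 1.1, 1.12.
pts = [1.0, 1.05, 1.1, 1.12]
est = dd_fprime_at_last(pts, [math.sin(p) for p in pts], math.cos(1.0))
```

For clustered nodes, such an estimate of the derivative at the newest point is high-order accurate, which is exactly what lets the fourth step of the class avoid a second derivative evaluation.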

Now, using any of the optimal three-point three-step methods, we can obtain a novel sixteenth-order technique which satisfies the Kung-Traub hypothesis as well. Using the optimal eighth-order method given by Wang and Liu in [11] yields the following four-step method:

Theorem 2.1.
*Assume that the scalar function $f$ is sufficiently smooth in a real open domain $D$, and let $\alpha \in D$ be a simple zero of $f$. Then, for an initial approximation sufficiently close to $\alpha$, the iterative method (2.7) has the optimal order of convergence 16 and satisfies the following error equation:*

*Proof.*
Expanding $f$ in Taylor series around the simple root at the $n$th iterate [12] and applying the resulting expansions in the first step of (2.7), we have
Note that, to keep the prerequisites to a minimum, we only state the simplified error equations at the end of the first, second, and third steps, since we already know that the first three steps form an eighth-order technique. Expanding the second step of (2.7) in Taylor series and applying (2.9) yields
Using (2.10) and the third step of (2.7) gives us
Expanding the function value used in the fourth step, we obtain
Subsequently, in the last step we have
Finally, using (2.12) and (2.13) in the last step of (2.7) yields
which shows that (2.7) is a sixteenth-order method consuming only five evaluations per iteration. This completes the proof.
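Error equations of this kind can also be checked numerically on a simple example. For Newton's method the analogous equation is $e_{n+1} = c_2 e_n^2 + O(e_n^3)$ with $c_2 = f''(\alpha)/(2f'(\alpha))$, and a short Python sketch of ours (not the sixteenth-order scheme (2.7) itself) confirms the leading coefficient:

```python
import math

# f(x) = exp(x) - 1 has the simple root alpha = 0, with
# c2 = f''(0) / (2 f'(0)) = 1/2, so e_{n+1} / e_n**2 should be near 0.5
# for a small initial error.
def newton_step(x):
    fx = math.exp(x) - 1.0
    return x - fx / math.exp(x)

e0 = 0.01                 # initial error (alpha = 0)
e1 = newton_step(e0)      # error after one Newton step
ratio = e1 / e0**2
print(ratio)              # close to c2 = 0.5
```

The same idea, carried out symbolically to order sixteen, is what the Taylor-series bookkeeping in the proof accomplishes.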

*Remark 2.2. *
If we choose any of the other optimal eighth-order derivative-involved methods for the first three steps of (2.6), then another novel optimal sixteenth-order technique is obtained. For instance, using the optimal eighth-order method given by J. R. Sharma and R. Sharma in [13] results in the following four-step optimal method:
which satisfies the following error equation:

*Remark 2.3. *
Any method of the developed class carries out five evaluations per full cycle to reach the optimal order 16. Hence, the efficiency index of our class is $16^{1/5}\approx 1.741$, which is greater than that of optimal fourth-order techniques, $4^{1/3}\approx 1.587$, and optimal eighth-order techniques, $8^{1/4}\approx 1.682$.

#### 3. Computational Experiments

The analytical outcomes given in the last section are fully supported by numerical experiments here. Two methods of the class (2.6), namely, methods (2.7) (S-S 16 (I)) and (2.15) (S-S 16 (II)), are compared with some existing high-order methods.

For numerical comparisons, we use the sixteenth-order method of Geum and Kim [14], wherein the weight functions and parameters are as defined in [14] (without the index $n$). This method is denoted by (G-K 16). We also use the sixteenth-order method (1.5) of Neta and Petković, denoted by (N-P 16). We do not include schemes of other orders; one may refer to [15–21] for further information on other recent methods. Among many numerical examples, we have selected ten instances, computed with multiprecision arithmetic. The test functions are displayed in Table 1.

For the comparisons, 4000-digit floating-point arithmetic has been used. As Table 2 shows, every method of the presented class is more efficient than the existing high-order methods in the literature, and the results manifest the applicability of the new scheme on the test problems. Also notice that F. stands for failure, that is, the starting value is not sufficiently close to the zero for the iterative method to converge.

We have checked that the sequences of iterates converge to an approximation of the solution of the nonlinear equations in our numerical work. We note that the important problem of determining good starting points appears when applying iterative methods for solving nonlinear equations. Quick convergence, one of the advantages of multipoint methods, can be attained only if the initial approximations are sufficiently close to the sought roots; otherwise, the expected convergence speed cannot be realized in practice. Toward this goal, we recall the well-known interval Newton's method as a tool to obtain robust initial approximations in the next section.

#### 4. All the Zeros

In this section, we pay close attention to the matter of finding all the (simple) zeros of a nonlinear function in an interval using a hybrid algorithm. In practice, one wishes to find all the zeros at the same time. On the other hand, an iterative scheme such as (2.7) is quite sensitive to the choice of the initial guess. Generally speaking, no method is the best iterative scheme on all test problems; mostly, this depends on the test function and the accuracy of the initial approximation.

With a sufficiently sharp initial estimate, one may start the process as efficiently as possible and observe the computational order of convergence correctly, that is, the rate at which the number of correct decimal places of the approximation grows. However, obtaining such an initial guess is not always an easy task for practical problems.
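The computational order of convergence can be estimated from three consecutive errors via $\rho \approx \ln(e_{n+1}/e_n)/\ln(e_n/e_{n-1})$. A small Python sketch of our own (using plain Newton's method on a toy equation, so the estimate should approach 2):

```python
import math

def coc(errs):
    """Estimate the computational order of convergence from the last
    three consecutive errors |x_k - alpha|."""
    e0, e1, e2 = errs[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton's method on f(x) = x**2 - 2 (simple root sqrt(2));
# the estimated order should approach 2.
alpha = math.sqrt(2.0)
x, errs = 3.0, []
for _ in range(5):
    x -= (x * x - 2.0) / (2.0 * x)
    errs.append(abs(x - alpha))
print(round(coc(errs), 2))
```

For a sixteenth-order method the same formula should return values near 16, provided the arithmetic carries enough digits, which is why the experiments of Section 3 use multiprecision arithmetic.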

In what follows, we recall a well-known scheme based on [22], also known as interval Newton's method, for finding sufficiently accurate initial approximations to all the zeros of a nonlinear function in an interval. Fortunately, Mathematica 8 gives users a friendly environment for working with lists (sequences); the obtained list can then be refined using the new sixteenth-order schemes of this paper.

This procedure was efficiently coded in [22], and with some changes we provide it in what follows:

```mathematica
intervalnewton::rec = "MaxRecursion exceeded.";
intnewt[f_, df_, x_, {a_, b_}, eps_, n_] :=
  Block[{xmid, int = Interval[{a, b}]},
    If[b - a < eps, Return[int]];
    If[n == 0, Message[intervalnewton::rec];
      Return[int]];
    xmid = Interval[SetAccuracy[(a + b)/2, 16]];
    int = IntervalIntersection[int,
      SetAccuracy[xmid - N[f /. x -> xmid]/N[df /. x -> int], 16]];
    (intnewt[f, df, x, #, eps, n - 1]) & /@ (List @@ int)];
Options[intervalnewton] = {MaxRecursion -> 2000};
```

The above piece of code takes the nonlinear function $f$ and the lower and upper bounds of the working interval, and applies the interval form of Newton's iteration with the maximum number of recursions set to 2000, using the command SetAccuracy[exp,16]. Note that, as for Newton's method itself, the function must have a first-order derivative on the interval. The call-out function to run the above piece of code can then be given by

```mathematica
intervalnewton[f_, x_, int_Interval, eps_, opts___] :=
  Block[{df, n},
    n = MaxRecursion /. {opts} /. Options[intervalnewton];
    df = D[f, x];
    IntervalUnion @@ Select[
      Flatten[(intnewt[f, df, x, #, eps, n]) & /@ (List @@ int)],
      IntervalMemberQ[f /. x -> #, 0] &]];
```

The tolerance can be chosen arbitrarily within machine precision. To illustrate the procedure further, we consider an oscillatory nonlinear function, whose plot is given in Figure 1, on the interval $[2, 10]$ with the tolerance set to 0.0001, as follows:

```mathematica
f[x_] := Exp[Sin[Log[x]*Cos[20]]] - 2;
IntervalSol = intervalnewton[f[x], x, Interval[{2., 10.}], .0001];
setInitial = N[Mean /@ List @@ IntervalSol]
NumberOfGuesses = Length[setInitial];
```

The implementation of the above part of code shows that the number of zeros is 51 while the list of initial approximations is {2.18856, 2.2125, 2.48473, 2.54372, 2.791, 2.86524, 3.10023, 3.18401, 3.41091, 3.50142, 3.72247, 3.81803, 4.03459, 4.13411, 4.3471, 4.44984, 4.65989, 4.7653, 4.9729, 5.08056, 5.28606, 5.39566, 5.59937, 5.71065, 5.91277, 6.02554, 6.22625, 6.34035, 6.53981, 6.65509, 6.85342, 6.96977, 7.16709, 7.28441, 7.4808, 7.599, 7.79455, 7.91356, 8.10833, 8.22809, 8.42214, 8.54259, 8.73598, 8.85707, 9.04983, 9.17152, 9.36371, 9.48595, 9.67761, 9.80037, and 9.99152}.
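For comparison with the interval approach, a much simpler (non-guaranteed) way to generate such a list of starting points is a sign-change sweep over a fine grid followed by bisection. The following Python sketch of ours runs on the toy function $\sin x$ over $[2, 10]$ rather than the paper's test function:

```python
import math

def bracket_roots(f, a, b, n=2000):
    """Scan [a, b] on a uniform grid and bisect each sign change.
    Only simple zeros crossed by the grid are found; unlike interval
    Newton's method, there is no guarantee that all zeros are located."""
    roots = []
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    for lo, hi in zip(xs, xs[1:]):
        if f(lo) * f(hi) < 0.0:
            for _ in range(60):  # bisection to roughly double precision
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

roots = bracket_roots(math.sin, 2.0, 10.0)
print(len(roots), roots)  # zeros of sin near pi, 2*pi, 3*pi
```

The interval method remains preferable when a guarantee of finding every zero is required, since a grid sweep can miss zeros between grid points.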

We remark that, by considering the new optimal sixteenth-order methods from the class (2.6), one may now refine the accuracy of these initial guesses up to the desired tolerance when high-precision computing is needed.

#### 5. Conclusions

A class of four-point four-step iterative methods has been developed for solving nonlinear equations. An analytical proof was given for one method of the class to establish its sixteenth-order convergence. The class was obtained by approximating the first derivative of the function in the fourth step of a cycle whose first three steps are any of the optimal eighth-order derivative-involved methods.

Per full cycle, the methods of the class consist of four evaluations of the function and one evaluation of the first derivative, which results in the efficiency index $16^{1/5}\approx 1.741$. The presented class is optimal in the sense of the still unproved conjecture of Kung and Traub on multipoint iterations without memory.

The accuracy and efficiency of two methods of the class were illustrated by solving a number of numerical examples. The numerical results also attest to the theoretical findings given in the paper and show the fast rate of convergence.

#### Acknowledgment

The authors gratefully acknowledge the remarks of the two reviewers on an earlier version of this paper.


#### References

1. T. Sauer, *Numerical Analysis*, Pearson, Boston, Mass, USA, 2nd edition, 2011.
2. F. Soleymani, “Optimized Steffensen-type methods with eighth-order convergence and high efficiency index,” *International Journal of Mathematics and Mathematical Sciences*, vol. 2012, Article ID 932420, 18 pages, 2012.
3. M. Heydari, S. M. Hosseini, and G. B. Loghmani, “Convergence of a family of third-order methods free from second derivatives for finding multiple roots of nonlinear equations,” *World Applied Sciences Journal*, vol. 11, pp. 507–512, 2010.
4. H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” *Journal of the Association for Computing Machinery*, vol. 21, pp. 643–651, 1974.
5. F. Soleymani, “Optimal fourth-order iterative methods free from derivatives,” *Miskolc Mathematical Notes*, vol. 12, no. 2, pp. 255–264, 2011.
6. F. Soleymani, R. Sharma, X. Li, and E. Tohidi, “An optimized derivative-free form of the Potra-Ptak method,” *Mathematical and Computer Modelling*, vol. 56, pp. 97–104, 2012.
7. F. Soleymani and B. S. Mousavi, “On novel classes of iterative methods for solving nonlinear equations,” *Computational Mathematics and Mathematical Physics*, vol. 52, no. 2, pp. 214–221, 2012.
8. B. Neta and M. S. Petković, “Construction of optimal order nonlinear solvers using inverse interpolation,” *Applied Mathematics and Computation*, vol. 217, no. 6, pp. 2448–2455, 2010.
9. Y. H. Geum and Y. I. Kim, “A family of optimal sixteenth-order multipoint methods with a linear fraction plus a trivariate polynomial as the fourth-step weighting function,” *Computers & Mathematics with Applications*, vol. 61, no. 11, pp. 3278–3287, 2011.
10. F. Soleymani, M. Sharifi, and B. S. Mousavi, “An improvement of Ostrowski's and King's techniques with optimal convergence order eight,” *Journal of Optimization Theory and Applications*, vol. 153, no. 1, pp. 225–236, 2012.
11. X. Wang and L. Liu, “New eighth-order iterative methods for solving nonlinear equations,” *Journal of Computational and Applied Mathematics*, vol. 234, no. 5, pp. 1611–1620, 2010.
12. S. Wolfram, *The Mathematica Book*, Wolfram Media, 5th edition, 2003.
13. J. R. Sharma and R. Sharma, “A new family of modified Ostrowski's methods with accelerated eighth order convergence,” *Numerical Algorithms*, vol. 54, no. 4, pp. 445–458, 2010.
14. Y. H. Geum and Y. I. Kim, “A biparametric family of optimally convergent sixteenth-order multipoint methods with their fourth-step weighting function as a sum of a rational and a generic two-variable function,” *Journal of Computational and Applied Mathematics*, vol. 235, no. 10, pp. 3178–3188, 2011.
15. Y. H. Geum and Y. I. Kim, “A biparametric family of four-step sixteenth-order root-finding methods with the optimal efficiency index,” *Applied Mathematics Letters*, vol. 24, no. 8, pp. 1336–1342, 2011.
16. H. Montazeri, F. Soleymani, S. Shateyi, and S. S. Motsa, “On a new method for computing the numerical solution of systems of nonlinear equations,” *Journal of Applied Mathematics*, vol. 2012, Article ID 751975, 15 pages, 2012.
17. Y. I. Kim and C. Chun, “New twelfth-order modifications of Jarratt's method for solving nonlinear equations,” *Studies in Nonlinear Sciences*, vol. 1, pp. 14–18, 2010.
18. Y. I. Kim, C. Chun, and W. Kim, “Some third-order curvature based methods for solving nonlinear equations,” *Studies in Nonlinear Sciences*, vol. 1, pp. 72–76, 2010.
19. F. Soleymani and S. K. Khattri, “Finding simple roots by seventh- and eighth-order derivative-free methods,” *International Journal of Mathematical Models and Methods in Applied Sciences*, vol. 6, pp. 45–52, 2012.
20. F. Soleymani, “Optimal eighth-order simple root-finders free from derivative,” *WSEAS Transactions on Information Science and Applications*, vol. 8, pp. 293–299, 2011.
21. F. Soleymani and F. Soleimani, “Novel computational derivative-free methods for simple roots,” *Fixed Point Theory*, vol. 13, no. 1, pp. 247–258, 2012.
22. J. B. Keiper, “Interval arithmetic in Mathematica,” *The Mathematica Journal*, vol. 5, pp. 66–71, 1995.
