Special Issue: Recent Developments on Sequence Spaces and Compact Operators with Applications
Fiza Zafar, Nawab Hussain, Zirwah Fatimah, Athar Kharal, "Optimal Sixteenth-Order Convergent Method Based on Quasi-Hermite Interpolation for Computing Roots", The Scientific World Journal, vol. 2014, Article ID 410410, 18 pages, 2014. https://doi.org/10.1155/2014/410410
Optimal Sixteenth-Order Convergent Method Based on Quasi-Hermite Interpolation for Computing Roots
Abstract
We present a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed by using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation per iteration, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other recently developed sixteenth-order methods. The interval Newton method is also used to find sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeros of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.
1. Introduction
Let us consider the problem of approximating a simple root α of the nonlinear equation f(x) = 0, where f : D ⊆ R → R is a nonlinear univariate function. Newton's method and its variants have always remained widely used one-point, one-step methods without memory for solving (1). However, the use of single-point, one-step methods puts a limit on the order of convergence and on the computational efficiency, measured by the efficiency index E = p^(1/θ_f), where p is the order of convergence of the iterative method and θ_f is the cost of evaluating f and its derivatives.
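As a concrete baseline, classical Newton iteration x_{k+1} = x_k − f(x_k)/f′(x_k) uses two evaluations per step (f and f′) and converges quadratically; a minimal sketch (the test function x³ − 2 and the function names are illustrative, not from the paper):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical one-point Newton iteration: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative example: the cube root of 2 as the root of f(x) = x**3 - 2.
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.5)

# Efficiency index E = p**(1/theta_f): order p = 2 with theta_f = 2
# evaluations per step gives E = 2**0.5, well below the 16**(1/5) of an
# optimal five-evaluation method.
efficiency = 2 ** (1 / 2)
```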
To overcome the drawbacks of one-point, one-step methods, many multipoint, multistep higher-order convergent methods have been introduced in the recent past, using inverse, Hermite, and rational interpolation [1, 2]. In developing these methods, the conjecture of Kung and Traub has so far remained the focus of attention. It states the following.
Conjecture 1. An optimal iterative method without memory based on n evaluations achieves an optimal convergence order of 2^(n−1) and, hence, a computational efficiency of 2^((n−1)/n).
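Under the conjecture, the attainable order doubles with each extra evaluation per iteration; a quick tabulation of the resulting orders and efficiency indices (plain arithmetic, not code from the paper):

```python
# Kung-Traub: an optimal method without memory using n evaluations per
# iteration attains order 2**(n - 1) and efficiency index (2**(n-1))**(1/n).
orders = {n: 2 ** (n - 1) for n in range(2, 6)}
efficiency = {n: p ** (1.0 / n) for n, p in orders.items()}

# n = 5 (four function evaluations plus one derivative) is the regime of
# this paper: order 16 and efficiency index 16**(1/5), about 1.7411.
```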
In [3, 4], Petković presented a general optimal n-point iterative scheme without memory, in which x_k denotes the approximation of the root at the kth iteration, the first step y_k = x_k − f(x_k)/f′(x_k) is Newton's method, and the next step is an arbitrary fourth-order, two-point method requiring three function evaluations. The derivative at the final step is approximated through a quasi-Hermite interpolatory polynomial of suitable degree.
Using this approach, Sargolzaei and Soleymani [5] presented a three-step optimal eighth-order iterative method. However, because the derivative at the fourth step was approximated by a Hermite interpolatory polynomial of degree three, the four-step method given by Sargolzaei and Soleymani has order of convergence fourteen with five function evaluations, which is not optimal in the sense of Kung and Traub.
In this paper, we present an optimal four-step, four-point sixteenth-order convergent method by using quasi-Hermite interpolation from the general class of Petković [3, 4]. The interpolation is carried out using the Newtonian formulation given by Traub [6]. Numerical comparisons with recent optimal sixteenth-order convergent methods based on rational interpolants are given in Section 4. Since the first step of our method is Newton's method, to overcome its drawbacks we compute, in Section 5, the sufficiently accurate initial guesses required for the convergence of the method for some oscillatory functions.
2. Construction of Method
We define the following four-step scheme:

y_k = x_k − f(x_k)/f′(x_k),
z_k = φ_4(x_k, y_k),
w_k = φ_8(x_k, y_k, z_k),

where φ_4 and φ_8 are any arbitrary fourth- and eighth-order multipoint methods. We now approximate f′(w_k) with a quasi-Hermite interpolatory polynomial H_4(t) of degree four satisfying

H_4(x_k) = f(x_k), H_4′(x_k) = f′(x_k), H_4(y_k) = f(y_k), H_4(z_k) = f(z_k), H_4(w_k) = f(w_k).

To construct the interpolatory polynomial H_4(t) satisfying the above conditions, we apply the Newtonian representation of the interpolatory polynomial, which Traub [6, p. 243] gives in terms of confluent divided differences. In particular,

f[x_i, x_j] = (f(x_j) − f(x_i))/(x_j − x_i)

is the usual divided difference, and a repeated node contributes the derivative value, f[x_i, x_i] = f′(x_i). Here, we take the nodes w_k, z_k, y_k, and the doubled node x_k. Writing the Newton form anchored at w_k,

H_4(t) = f(w_k) + (t − w_k) f[w_k, z_k] + (t − w_k)(t − z_k) f[w_k, z_k, y_k] + (t − w_k)(t − z_k)(t − y_k) f[w_k, z_k, y_k, x_k] + (t − w_k)(t − z_k)(t − y_k)(t − x_k) f[w_k, z_k, y_k, x_k, x_k],

differentiating with respect to "t" and substituting t = w_k, we obtain

H_4′(w_k) = f[w_k, z_k] + (w_k − z_k) f[w_k, z_k, y_k] + (w_k − z_k)(w_k − y_k) f[w_k, z_k, y_k, x_k] + (w_k − z_k)(w_k − y_k)(w_k − x_k) f[w_k, z_k, y_k, x_k, x_k].    (15)

Using representation (15) of H_4′(w_k) in place of f′(w_k) at the fourth step, the new four-step iterative method is obtained as

y_k = x_k − f(x_k)/f′(x_k),
z_k = φ_4(x_k, y_k),
w_k = φ_8(x_k, y_k, z_k),
x_{k+1} = w_k − f(w_k)/H_4′(w_k),    (14)

where φ_4 and φ_8 are any fourth- and eighth-order convergent methods, respectively, and H_4′(w_k) is given by (15).
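The confluent divided-difference table and the evaluation of H_4′(w_k) above can be sketched numerically; a minimal implementation (the function name and the cubic test values are mine, for illustration):

```python
def quasi_hermite_dprime(x, y, z, w, fx, fy, fz, fw, dfx):
    """Derivative at w of the degree-4 polynomial interpolating f at
    w, z, y and (with derivative value dfx) at the doubled node x,
    via a confluent divided-difference table over nodes [w, z, y, x, x]."""
    nodes = [w, z, y, x, x]
    table = [fw, fz, fy, fx, fx]   # zeroth-order column (x repeated)
    coeffs = [table[0]]            # Newton coefficients f[w], f[w,z], ...
    for order in range(1, 5):
        new = []
        for i in range(5 - order):
            lo, hi = nodes[i], nodes[i + order]
            if lo == hi:
                new.append(dfx)    # confluent entry: f[x, x] = f'(x)
            else:
                new.append((table[i + 1] - table[i]) / (hi - lo))
        table = new
        coeffs.append(table[0])
    # Differentiate the Newton form anchored at w and evaluate at t = w.
    return (coeffs[1]
            + coeffs[2] * (w - z)
            + coeffs[3] * (w - z) * (w - y)
            + coeffs[4] * (w - z) * (w - y) * (w - x))

# Exactness check: for a cubic f the degree-4 interpolant reproduces f,
# so H_4'(w) equals f'(w) exactly (up to rounding).
f = lambda t: t**3
df = lambda t: 3.0 * t * t
x, y, z, w = 1.0, 1.2, 1.3, 1.35
h4p = quasi_hermite_dprime(x, y, z, w, f(x), f(y), f(z), f(w), df(x))
```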
Theorem 2. Let α ∈ D be a root of the nonlinear equation (1), and assume that f is sufficiently differentiable in a neighbourhood of the root. Then the iterative method defined by (14) is of optimal order sixteen, with an error equation of the form e_{k+1} = C e_k^{16} + O(e_k^{17}), where e_k = x_k − α and the asymptotic error constant C depends on the coefficients c_j = f^{(j)}(α)/(j! f′(α)), j ≥ 2.
Proof. We write the Taylor series expansion of the function f about the simple root α at the kth iteration. Let e_k = x_k − α. Then

f(x_k) = f′(α)[e_k + c_2 e_k^2 + c_3 e_k^3 + ⋯],
f′(x_k) = f′(α)[1 + 2c_2 e_k + 3c_3 e_k^2 + ⋯].

Using these two expressions, the Taylor expansion of the first step gives y_k − α = c_2 e_k^2 + O(e_k^3), and f(y_k) is expanded accordingly. In the second step, we take a general fourth-order convergent method, so that z_k − α = O(e_k^4), and we expand each divided difference used at the third step. In the third step, we take a general eighth-order convergent method, so that w_k − α = O(e_k^8), with the corresponding Taylor expansion of f(w_k). Finally, expanding the divided differences used at the last step and substituting into H_4′(w_k), the fourth step defined in (14) yields x_{k+1} − α = O(e_k^{16}), which shows that (14) is a four-step iterative method of optimal order of convergence sixteen, consuming four function evaluations and one derivative evaluation.
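The first expansion in the proof, y_k − α = c_2 e_k^2 + O(e_k^3) for the Newton step, can be checked numerically; a sketch with a test function of my own choosing (f(x) = x + x², root α = 0, so c_2 = f″(α)/(2f′(α)) = 1):

```python
# Numerical check: for the Newton step, (y - alpha)/e**2 -> c2 as e -> 0,
# where c2 = f''(alpha) / (2 * f'(alpha)).
f = lambda x: x + x**2
df = lambda x: 1 + 2 * x

alpha, c2 = 0.0, 1.0
ratios = []
for e in (1e-3, 1e-4, 1e-5):
    x = alpha + e                      # perturb the root by e
    y = x - f(x) / df(x)               # one Newton step
    ratios.append((y - alpha) / e**2)  # should approach c2 = 1
```

The ratios tend to 1 as e shrinks, confirming the quadratic leading term.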
Remark 3. It follows from Theorem 2 that the new sixteenth-order convergent iterative method (14) for solving nonlinear equations satisfies the conjecture of Kung and Traub: a multipoint method without memory using four function evaluations and one derivative evaluation can achieve an optimal sixteenth order of convergence and an efficiency index of 16^(1/5) ≈ 1.741.
3. Some Particular Methods
In this section, we consider some particular methods from the newly developed family of sixteenth-order convergent iterative methods.
3.1. Iterative Method M1
Here, we take φ_4 as the two-step fourth-order convergent method defined by Geum and Kim [7], and the third step is replaced by the third step of the eighth-order convergent method given in [5] using Hermite interpolation. Hence, our four-step method becomes (28), where H_4′(w_k) is given by (15).
3.2. Iterative Method M2
Here, we define φ_4 as King's two-step fourth-order convergent method [8],

y_k = x_k − f(x_k)/f′(x_k),
z_k = y_k − (f(y_k)/f′(x_k)) · (f(x_k) + βf(y_k))/(f(x_k) + (β − 2)f(y_k)),

for a suitable choice of the parameter β. Hence, our four-step iterative method becomes (30), where H_4′(w_k) is given by (15).
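A full iteration with the structure of (14) can be sketched end to end. This is not the paper's M1 or M2: as a stand-in φ_4 I use Ostrowski's classical two-step fourth-order method, the eighth-order step uses the degree-3 Hermite interpolant of [5], and the final step uses H_4′(w_k) from (15); the test function x³ − 2 is mine:

```python
def step16(f, df, x):
    """One iteration of a four-step scheme with the structure of (14):
    Newton -> fourth-order step -> eighth-order step via a degree-3
    Hermite interpolant -> sixteenth-order step via the degree-4
    quasi-Hermite interpolant.  Costs 4 f-evaluations + 1 derivative."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                  # Newton, order 2
    fy = f(y)
    # Stand-in phi_4: Ostrowski's fourth-order method (not the paper's choice).
    z = y - fy * fx / (dfx * (fx - 2.0 * fy))
    fz = f(z)
    # Confluent divided differences over the nodes z, y and the doubled x.
    f_yx = (fx - fy) / (x - y)
    f_zy = (fy - fz) / (y - z)
    f_yxx = (dfx - f_yx) / (x - y)
    f_zyx = (f_yx - f_zy) / (x - z)
    f_zyxx = (f_yxx - f_zyx) / (x - z)
    # Eighth-order step: derivative at z of the degree-3 Hermite interpolant.
    h3p = f_zy + (z - y) * f_zyx + (z - y) * (z - x) * f_zyxx
    w = z - fz / h3p
    fw = f(w)
    # Extend the table with node w for the degree-4 quasi-Hermite interpolant.
    f_wz = (fz - fw) / (z - w)
    f_wzy = (f_zy - f_wz) / (y - w)
    f_wzyx = (f_zyx - f_wzy) / (x - w)
    f_wzyxx = (f_zyxx - f_wzyx) / (x - w)
    # (15): H_4'(w) from the Newton form anchored at w.
    h4p = (f_wz + (w - z) * f_wzy
           + (w - z) * (w - y) * f_wzyx
           + (w - z) * (w - y) * (w - x) * f_wzyxx)
    return w - fw / h4p

# Illustrative run: one iteration on f(x) = x**3 - 2 from x0 = 1.5 already
# reaches the cube root of 2 to roughly double precision.
f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
root = step16(f, df, 1.5)
```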
4. Numerical Results and Computational Cost
In this section, we compare our newly constructed family of optimal sixteenth-order iterative methods, M1 and M2, defined in (28) and (30), respectively, with some well-known equation solvers. For the sake of comparison, we consider the fourteenth-order convergent method (PF) given by Sargolzaei and Soleymani [5] and the optimal sixteenth-order convergent methods (JRP) and (FSH) given by Sharma et al. [1] and Soleymani et al. [2], respectively. All computations are done using the software Maple 13 with tolerance ε and 4000-digit precision. The stopping criterion is |x_{k+1} − α| < ε. Here, α is the exact zero of the function f and x_0 is the initial guess. In Tables 1–9, the columns show the number of iterations N in which the method converges to α and the absolute value of the function, |f(x_k)|, at the kth step. The numerical examples are taken from [1, 2].
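When reproducing such tables, the computational order of convergence offers a quick sanity check on the reported order; a small helper (my own, not from the paper) using three successive errors:

```python
import math

def coc(errors):
    """Computational order of convergence from three successive errors
    e_{k-1}, e_k, e_{k+1} = |x_j - alpha|:
    rho ~ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Illustrative check: exactly quadratic decay e_{k+1} = e_k**2 gives rho = 2;
# a sixteenth-order method would give rho close to 16.
errs = [1e-2, 1e-4, 1e-8]
rho = coc(errs)
```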