Abstract

We present a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed by using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other newly developed sixteenth-order methods. Interval Newton's method is also used for finding sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeroes of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.

1. Introduction

Let us consider the problem of approximating a simple root $\alpha$ of the nonlinear equation
$$f(x) = 0, \tag{1}$$
involving a nonlinear univariate function $f \colon D \subseteq \mathbb{R} \to \mathbb{R}$. Newton's method and its variants have always remained widely used one-point, one-step methods without memory for solving (1). However, the use of single-point, one-step methods puts a limit on the order of convergence and on the computational efficiency, given as $E = p^{1/\theta}$, where $p$ is the order of convergence of the iterative method and $\theta$ is the cost of evaluating $f$ and its derivatives.
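For concreteness, here is a minimal sketch of the classical Newton iteration $x_{k+1} = x_k - f(x_k)/f'(x_k)$; the function names, tolerance, and test equation are illustrative choices, not taken from the paper.

```python
# A minimal sketch of Newton's method x_{k+1} = x_k - f(x_k)/f'(x_k).
# Names, tolerance, and the test equation are illustrative assumptions.

def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Approximate a simple root of f starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # converged: |f(x)| small enough
            return x
        dfx = fprime(x)
        if dfx == 0.0:             # derivative vanishes: iteration breaks down
            raise ZeroDivisionError("f'(x) = 0; choose a different x0")
        x = x - fx / dfx           # one Newton step (2 evaluations per step)
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # 1.4142135623...
```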

To overcome the drawbacks of one-point, one-step methods, many multipoint, multistep, higher-order convergent methods have been introduced in the recent past by using inverse, Hermite, and rational interpolation [1, 2]. In developing these methods, so far, the conjecture of Kung and Traub has remained the focus of attention. It states the following.

Conjecture 1. An optimal iterative method without memory based on $n$ evaluations would achieve an optimal convergence order of $2^{n-1}$ and, hence, a computational efficiency of $2^{(n-1)/n}$.
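For the method developed in this paper, $n = 5$ (four function evaluations and one derivative evaluation per full step), so the conjectured optimal bounds work out as
$$2^{n-1} = 2^{4} = 16, \qquad 2^{(n-1)/n} = 16^{1/5} \approx 1.741.$$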

In [3, 4], Petković presented a general optimal $n$-point iterative scheme without memory, in which $x_k$ is the approximation of the root at the $k$th iteration and the scheme is built upon an arbitrary fourth-order, two-point method requiring three function evaluations, whose first step is Newton's method. The derivative at each subsequent step is approximated through a quasi-Hermite interpolatory polynomial of suitable degree.

Using this approach, Sargolzaei and Soleymani [5] presented a three-step optimal eighth-order iterative method. However, because the authors approximated the derivative at the fourth step by using a Hermite interpolatory polynomial of degree three, the four-step method given by Sargolzaei and Soleymani has order of convergence fourteen with five function evaluations, which is not optimal in the sense of Kung and Traub.

In this paper, we present an optimal four-step, four-point sixteenth-order convergent method by using quasi-Hermite interpolation from the general class of Petković [3, 4]. The interpolation is done by using the Newtonian formulation given by Traub [6]. Numerical comparisons are given in Section 4 with recent optimal sixteenth-order convergent methods based on rational interpolants. Since the first step of our method is Newton's method, we calculate in Section 5 the accurate initial guesses required for the convergence of this method for some oscillatory functions, thereby overcoming the drawbacks of Newton's method.

2. Construction of Method

We define the following four-step scheme:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = \phi_4(x_n, y_n), \qquad w_n = \phi_8(x_n, y_n, z_n), \qquad x_{n+1} = w_n - \frac{f(w_n)}{f'(w_n)},$$
where $\phi_4$ and $\phi_8$ are any arbitrary fourth- and eighth-order multipoint methods. We now approximate $f'(w_n)$ with a quasi-Hermite interpolatory polynomial $p_4(t)$ of degree four satisfying
$$p_4(x_n) = f(x_n), \quad p_4'(x_n) = f'(x_n), \quad p_4(y_n) = f(y_n), \quad p_4(z_n) = f(z_n), \quad p_4(w_n) = f(w_n).$$
To construct the interpolatory polynomial $p_4(t)$ satisfying the above conditions, we apply the Newtonian representation of the interpolatory polynomial satisfying conditions of this type, which Traub [6, p. 243] has given as follows:
$$p_m(t) = \sum_{j=0}^{m} f[t_0, \ldots, t_j] \prod_{i=0}^{j-1} (t - t_i). \tag{8}$$
The confluent divided differences involved here are defined as the limits of ordinary divided differences when nodes coalesce; in particular, $f[t_0, t_0] = f'(t_0)$, while $f[t_0, t_1] = \frac{f(t_1) - f(t_0)}{t_1 - t_0}$ is the usual divided difference, and higher orders follow the recursion $f[t_0, \ldots, t_k] = \frac{f[t_1, \ldots, t_k] - f[t_0, \ldots, t_{k-1}]}{t_k - t_0}$. Here, we take $t_0 = t_1 = x_n$, $t_2 = y_n$, $t_3 = z_n$, and $t_4 = w_n$; hence, $f[t_0, t_1] = f'(x_n)$, and the remaining divided differences use only the values $f(x_n)$, $f(y_n)$, $f(z_n)$, and $f(w_n)$. Expanding (8), we get
$$p_4(t) = f(x_n) + f'(x_n)(t - x_n) + f[x_n, x_n, y_n](t - x_n)^2 + f[x_n, x_n, y_n, z_n](t - x_n)^2 (t - y_n) + f[x_n, x_n, y_n, z_n, w_n](t - x_n)^2 (t - y_n)(t - z_n). \tag{11}$$
Differentiating (11) with respect to $t$, we obtain
$$p_4'(t) = f'(x_n) + 2 f[x_n, x_n, y_n](t - x_n) + f[x_n, x_n, y_n, z_n]\bigl((t - x_n)^2 + 2(t - x_n)(t - y_n)\bigr) + f[x_n, x_n, y_n, z_n, w_n]\bigl(2(t - x_n)(t - y_n)(t - z_n) + (t - x_n)^2 (t - z_n) + (t - x_n)^2 (t - y_n)\bigr). \tag{12}$$
Using representation (12) of $p_4'(t)$ in place of $f'(w_n)$ at the fourth step, the new four-step iterative method is obtained as
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = \phi_4(x_n, y_n), \qquad w_n = \phi_8(x_n, y_n, z_n), \qquad x_{n+1} = w_n - \frac{f(w_n)}{p_4'(w_n)}, \tag{14}$$
where $\phi_4$ and $\phi_8$ are any fourth- and eighth-order convergent methods, respectively, and
$$p_4'(w_n) = f'(x_n) + 2 f[x_n, x_n, y_n](w_n - x_n) + f[x_n, x_n, y_n, z_n](w_n - x_n)\bigl(2(w_n - y_n) + (w_n - x_n)\bigr) + f[x_n, x_n, y_n, z_n, w_n](w_n - x_n)\bigl(2(w_n - y_n)(w_n - z_n) + (w_n - x_n)(w_n - z_n) + (w_n - x_n)(w_n - y_n)\bigr). \tag{15}$$
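As an illustration of how the fourth-step derivative estimate can be organized computationally, the following sketch builds the confluent divided-difference table on the nodes $(x_n, x_n, y_n, z_n, w_n)$ and evaluates the Newton-form derivative (15). All names are illustrative, and this is a plain floating-point sketch rather than the multiprecision Maple setting used in the paper.

```python
# Sketch of the derivative estimate p4'(w) from the quasi-Hermite interpolant,
# built with confluent divided differences on the nodes (x, x, y, z, w).
# fx, fy, fz, fw are f at the nodes; dfx = f'(x). Names are illustrative.

def p4_prime_at_w(x, y, z, w, fx, fy, fz, fw, dfx):
    # First-order divided differences (the pair [x, x] is confluent: f[x, x] = f'(x)).
    f_xx = dfx
    f_xy = (fy - fx) / (y - x)
    f_yz = (fz - fy) / (z - y)
    f_zw = (fw - fz) / (w - z)
    # Higher-order divided differences via the usual recursion.
    f_xxy = (f_xy - f_xx) / (y - x)
    f_xyz = (f_yz - f_xy) / (z - x)
    f_yzw = (f_zw - f_yz) / (w - y)
    f_xxyz = (f_xyz - f_xxy) / (z - x)
    f_xyzw = (f_yzw - f_xyz) / (w - x)
    f_xxyzw = (f_xyzw - f_xxyz) / (w - x)
    # Derivative of the Newton-form interpolant evaluated at t = w; cf. (15).
    return (f_xx
            + 2.0 * f_xxy * (w - x)
            + f_xxyz * (w - x) * (2.0 * (w - y) + (w - x))
            + f_xxyzw * (w - x) * (2.0 * (w - y) * (w - z)
                                   + (w - x) * (w - z)
                                   + (w - x) * (w - y)))
```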

Theorem 2. Let $\alpha$ be a root of nonlinear equation (1) in the domain $D$, and assume that $f$ is sufficiently differentiable in the neighbourhood of the root. Then the iterative method defined by (14) is of optimal order sixteen and satisfies an error equation of the form
$$e_{n+1} = K e_n^{16} + O(e_n^{17}),$$
where $e_n = x_n - \alpha$ and the asymptotic error constant $K$ depends on the constants $c_j$, for $j \geq 2$, defined by
$$c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}.$$

Proof. We write the Taylor series expansion of the function $f$ about the simple root $\alpha$ at the $n$th iteration. Let $e_n = x_n - \alpha$. Therefore, we have
$$f(x_n) = f'(\alpha)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + \cdots\right].$$
Also, we obtain
$$f'(x_n) = f'(\alpha)\left[1 + 2 c_2 e_n + 3 c_3 e_n^2 + \cdots\right].$$
Now, we find the Taylor expansion of the first step, $y_n = x_n - f(x_n)/f'(x_n)$, by using the above two expressions. Hence, we have
$$y_n - \alpha = c_2 e_n^2 + (2 c_3 - 2 c_2^2) e_n^3 + \cdots.$$
Also, we need the Taylor expansion of $f(y_n)$; that is,
$$f(y_n) = f'(\alpha)\left[(y_n - \alpha) + c_2 (y_n - \alpha)^2 + \cdots\right].$$
In the second step, we take a general fourth-order convergent method, so that
$$z_n - \alpha = O(e_n^4).$$
Now, we find the Taylor expansion of each divided difference used at the third step. In the third step, we take a general eighth-order convergent method, so that
$$w_n - \alpha = O(e_n^8),$$
and the Taylor expansion of $f(w_n)$ follows in the same way. Now, we find the Taylor expansions of the divided differences used at the last step; substituting them into (15) shows that $p_4'(w_n)$ matches $f'(w_n)$ up to an error of order $e_n^8$. Hence, our fourth step defined in (14) becomes
$$e_{n+1} = x_{n+1} - \alpha = O(e_n^{16}),$$
which manifests that (14) is a four-step iterative method of optimal order of convergence sixteen, consuming four function evaluations and one derivative evaluation.
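The sixteenth-order claim can also be checked numerically by estimating the computational order of convergence from three successive errors; a brief sketch follows, with illustrative names, using mpmath for the high precision that such rapidly shrinking errors require.

```python
# Sketch: estimate the computational order of convergence (COC) from three
# successive errors; for the method (14) the estimate should approach 16.
# High precision is needed because sixteenth-order errors shrink extremely fast.

from mpmath import mp, log

mp.dps = 4000  # match the 4000-digit precision used in Section 4

def coc(e_prev, e_curr, e_next):
    """rho ~ log(e_next/e_curr) / log(e_curr/e_prev)."""
    return log(e_next / e_curr) / log(e_curr / e_prev)

# Example with artificial errors obeying e_{k+1} = e_k^16 (illustrative only):
e0 = mp.mpf("1e-2")
e1, e2 = e0**16, (e0**16)**16
print(coc(e0, e1, e2))  # prints 16.0
```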

Remark 3. It is concluded from Theorem 2 that the new sixteenth-order convergent iterative method (14) for solving nonlinear equations satisfies the conjecture of Kung and Traub: a multipoint method without memory with four function evaluations and one derivative evaluation can achieve an optimal sixteenth order of convergence and an efficiency index of $16^{1/5} \approx 1.741$.

3. Some Particular Methods

In this section, we consider some particular methods from the newly developed family of the sixteenth-order convergent iterative methods.

3.1. Iterative Method M1

Here, we take $\phi_4$ to be the two-step fourth-order convergent method defined by Geum and Kim [7], and $\phi_8$ to be the third step of the eighth-order convergent method given in [5], which uses Hermite interpolation. Hence, our four-step method (28) is obtained from (14) with these choices, where $p_4'(w_n)$ is given by (15).

3.2. Iterative Method M2

Here, we define $\phi_4$ as King's two-step fourth-order convergent method [8], with parameter $\beta$, as
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_n = y_n - \frac{f(y_n)}{f'(x_n)} \cdot \frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)}.$$
Hence, our four-step iterative method (30) is obtained from (14) with these choices, where $p_4'(w_n)$ is given by (15).
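The first two steps of M2 admit a direct implementation; the following hedged sketch keeps King's parameter $\beta$ as a free argument (any fixed value gives fourth order) and uses illustrative names.

```python
# Sketch of one step of King's two-step fourth-order family [8].
# Evaluations per step: f(x), f'(x), f(y) -- three in total; beta is free.

def king_step(f, fprime, x, beta):
    fx = f(x)
    dfx = fprime(x)
    y = x - fx / dfx                          # Newton predictor
    fy = f(y)
    # King corrector: weight f(y)/f'(x) by (f(x)+beta*f(y))/(f(x)+(beta-2)*f(y)).
    return y - (fy / dfx) * (fx + beta * fy) / (fx + (beta - 2.0) * fy)
```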

4. Numerical Results and Computational Cost

In this section, we compare our newly constructed iterative methods of optimal sixteenth order, M1 and M2, defined in (28) and (30), respectively, with some well-known equation solvers. For the sake of comparison, we consider the fourteenth-order convergent method (PF) given by Sargolzaei and Soleymani [5] and the optimal sixteenth-order convergent methods (JRP) and (FSH) given by Sharma et al. [1] and Soleymani et al. [2], respectively. All the computations are done using the software Maple 13 with tolerance $\epsilon$ and 4000 digits of precision. The stopping criterion is
$$|x_n - \alpha| < \epsilon,$$
where $\alpha$ is the exact zero of the function and $x_0$ is the initial guess. In Tables 1–7, the columns show the number of iterations $n$ in which the method converges to $\alpha$ and the absolute value of the function at the $n$th step, $|f(x_n)|$. The numerical examples are taken from [1, 2].

We now give the numerical results of our new schemes in comparison with Newton's method for three oscillatory nonlinear functions: the first having 69 zeroes in its domain, the second having 320 zeroes on its interval, and the third having 51 zeroes in its domain, using the same precision, stopping criterion, and tolerance as given above. The first two functions are taken from [9] and the third is taken from [2]. Table 8 shows the importance of accurate initial guesses for the convergence of Newton's method (NM) for these types of highly oscillatory functions. The results include the number of iterations $n$, the absolute value of each function at the $n$th iterate, and the root to which the methods converge.

Table 9 shows the cost of executing each method for solving a nonlinear equation. The table clearly shows that, with the exception of the fourteenth-order convergent method of Sargolzaei and Soleymani (PF) [5], all other methods in this class require more computational effort than our methods M1 and M2.

5. Newton’s Method and Zeroes of Functions

The new sixteenth-order iterative method developed in this paper includes Newton's method as the first step. Although Newton's method is one of the most widely used methods, it still has several drawbacks. A proper initial guess plays a crucial role in its convergence: an initial guess that is not close enough to the root may lead to divergence, as shown in Table 8. Another drawback is the involvement of the derivative, which may not exist at some points of the domain. To overcome these two main drawbacks of Newton's method, Moore et al. ([10], Chapter 9) gave a method, called interval Newton's method, which can generate safe initial guesses ensuring the convergence of Newton's method in the vicinity of a root. However, interval Newton's method has a restriction: if the initial interval $X_0$ contains a zero $x^*$ of the function $f$, then every iterate $X_k$ contains that zero for all $k$, but $\{X_k\}$ forms a nested sequence converging to $x^*$ only if the interval extension of the derivative satisfies $0 \notin F'(X_0)$; otherwise the method fails. To remove this restriction, and to allow the range of values of the derivative to contain zero, Moore et al. ([10], Chapter 5) gave an extension of this method which splits the quotient occurring in interval Newton's method into two subintervals, each of which may contain a zero of the function but excludes the zero of the derivative of $f$. This method is known as extended interval Newton's method. We, herein, find the intervals enclosing all the zeroes of a function by using the extended interval Newton's method defined in [10]. The endpoints of these subintervals are approximated up to 10 decimal places and may serve as initial guesses good enough to ensure convergence to all the zeroes of oscillatory nonlinear functions.
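As a rough illustration of the splitting just described, the sketch below performs one extended interval Newton step on an interval $X = [\underline{x}, \overline{x}]$, assuming the caller supplies an interval extension of $f'$. Intervals are modeled as plain tuples, and all names are illustrative rather than the Maple routines used in the paper.

```python
# One extended interval Newton step. When 0 lies in the derivative enclosure
# F'(X), the quotient f(m)/F'(X) splits into two half-lines, each intersected
# with X, so the step can return zero, one, or two subintervals.

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def extended_newton_step(f, Fprime, X):
    lo, hi = X
    m = 0.5 * (lo + hi)                      # midpoint of X
    fm = f(m)
    dlo, dhi = Fprime(X)                     # interval enclosure of f'(X)
    if dlo > 0 or dhi < 0:                   # 0 not in F'(X): ordinary step
        q = sorted([fm / dlo, fm / dhi])
        piece = intersect(X, (m - q[1], m - q[0]))
        return [piece] if piece else []
    if fm == 0:                              # f(m) == 0: m is (numerically) a zero
        return [X]
    pieces = []                              # 0 in F'(X): extended division
    if dhi > 0:
        pieces.append(intersect(X, (lo, m - fm / dhi)) if fm > 0
                      else intersect(X, (m - fm / dhi, hi)))
    if dlo < 0:
        pieces.append(intersect(X, (m - fm / dlo, hi)) if fm > 0
                      else intersect(X, (lo, m - fm / dlo)))
    return [p for p in pieces if p]
```

Iterating this step on each returned piece, and bisecting any interval on which no progress is made, yields lists of enclosing subintervals of the kind reported below.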

By using Maple, we find the subintervals for the three oscillatory functions defined above in Section 4. For the first function, 69 subintervals are calculated as follows:

Likewise, for the second nonlinear function, its interval is subdivided into 320 subintervals, given as

Similarly, we find that the third function has 51 zeroes in the subintervals

The graphs of the first and tenth iterations of the extended interval Newton's method for each function, obtained by using Maple, are shown in Figures 1, 2, and 3, representing the enclosure of the exact zeroes in each interval.

6. Basins of Attraction

We consider complex polynomials as test functions. To generate basins of attraction, we use two different techniques. In the first technique, we take a square box in the complex plane. For every initial guess $z_0$ in the box, we assign a colour according to the root to which the iterative method converges; for divergence, we assign the colour dark blue. The stopping criterion for convergence is that an iterate lies within a fixed tolerance of a root, and the maximum number of iterations is 30. In the second technique, we take the same scale, but we assign each initial guess a colour depending upon the number of iterations in which the iterative method converges to any of the roots of the given function. The maximum number of iterations taken here is 25; the stopping criterion is the same as given earlier. If an iterative method does not converge within the maximum number of iterations, we consider the method divergent for that initial guess, and the point is represented by the colour black.
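A hedged sketch of the first colouring technique follows, using Newton's method on $z^3 - 1$ as a stand-in iteration. The box $[-2,2] \times [-2,2]$, the grid resolution, the tolerance value, and the colour encoding are illustrative assumptions; only the root-based colouring and the 30-iteration cap follow the text.

```python
# Sketch of the first technique: each starting point z0 on a grid is coloured
# by the root it converges to (index 0 = divergence). Newton's method on
# p(z) = z^3 - 1 stands in for the paper's methods.

import numpy as np
import matplotlib.pyplot as plt

roots = np.array([1.0, -0.5 + 0.8660254j, -0.5 - 0.8660254j])  # zeros of z^3 - 1

def basin_index(z, tol=1e-3, max_iter=30):
    for _ in range(max_iter):
        dz = 3.0 * z * z
        if dz == 0:
            return 0                        # derivative vanished: treat as divergent
        z = z - (z**3 - 1.0) / dz           # one Newton step
        hit = np.abs(roots - z) < tol
        if hit.any():
            return int(np.argmax(hit)) + 1  # colour by the root reached
    return 0                                # no convergence within 30 iterations

n = 200                                     # grid resolution (illustrative)
xs = np.linspace(-2.0, 2.0, n)
grid = np.array([[basin_index(complex(a, b)) for a in xs] for b in xs])
plt.imshow(grid, extent=[-2, 2, -2, 2], origin="lower")
plt.show()
```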

To obtain basins of attraction, we take four complex polynomials as test examples. The first has three roots, one of which is 1.0; the second has three roots; the third has five roots, one of which is 1.0; and the fourth has seven roots, one of which is 1.0.

We compare the results of our newly constructed method M1 with those of the methods PF [5], JRP [1], and FSH [2] described in Section 4 (see Figures 4–19).

7. Conclusions

A general four-step, four-point iterative method without memory has been given for solving nonlinear equations. The method is obtained by approximating the first derivative of the function at the fourth step by quasi-Hermite interpolation. An analytic proof of the order of convergence was given, which demonstrates that the method has an optimal order of sixteen. The number of function evaluations is five per full step, so the efficiency index of the method is $16^{1/5} \approx 1.741$. Numerical comparisons in Tables 2, 3, 4, 5, 6, and 7 with the methods based on rational interpolation, that is, methods with comparably higher arithmetic cost as shown in Table 9, reveal the robust performance of this method compared to existing methods in this class. Moreover, the extended interval Newton's method is also introduced, which is very effective in finding sufficiently accurate initial guesses for nonlinear functions having finitely many zeroes in an interval. The basins of attraction show that our new method requires fewer iterations to converge to a root compared to the methods of [2, 5].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This paper was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. Therefore, Dr. Nawab Hussain acknowledges DSR, KAU, for the financial support. The publication cost is covered by Dr. Nawab’s institution.