Research Article | Open Access
Mohammed Barrada, Mariya Ouaissa, Yassine Rhazali, Mariyam Ouaissa, "A New Class of Halley’s Method with Third-Order Convergence for Solving Nonlinear Equations", Journal of Applied Mathematics, vol. 2020, Article ID 3561743, 13 pages, 2020. https://doi.org/10.1155/2020/3561743
A New Class of Halley’s Method with Third-Order Convergence for Solving Nonlinear Equations
In this paper, we present a new family of methods for finding simple roots of nonlinear equations. The convergence analysis shows that the order of convergence of all these methods is three. The originality of this family lies in the fact that the sequences are defined by an explicit expression which depends on a parameter p, where p is a nonnegative integer. A first study of the global convergence of these methods is performed. The power of this family is illustrated analytically by showing that, under certain conditions, the convergence speed of the methods increases with the parameter p. The family's efficiency is tested on a number of numerical examples. It is observed that our new methods require fewer iterations than many other third-order methods. In comparison with sixth- and eighth-order methods, the new ones behave similarly in the examples considered.
Many problems in science and engineering [1–3] can be expressed in the form of the nonlinear scalar equation $f(x) = 0$, where $f$ is a real analytic function. To approximate the solution α of Equation (1), assumed to be simple, we can use a fixed-point iteration method in which we find a function $F$, called an iteration function (I.F.) for $f$, and, from a starting value $x_0$ [4–6], define the sequence $x_{n+1} = F(x_n)$, $n = 0, 1, 2, \ldots$
A point $x^*$ is called a fixed point of $F$ if $F(x^*) = x^*$. Under suitable conditions, the convergence of the sequence $(x_n)$ towards α can be guaranteed.
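The fixed-point scheme can be sketched in code as follows (an illustrative Python version, not from the original article; Newton's iteration function serves as the I.F., and the function names and tolerance are our choices):

```python
def fixed_point(F, x0, tol=1e-12, max_iter=100):
    # Iterate x_{n+1} = F(x_n) until two successive iterates are close.
    x = x0
    for _ in range(max_iter):
        x_new = F(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Newton's iteration function for f(x) = x^2 - 2 is an I.F. whose
# fixed point is the simple root sqrt(2).
F = lambda x: x - (x * x - 2.0) / (2.0 * x)
print(fixed_point(F, 1.5))
```

Any other I.F. with the root as its fixed point can be passed in the same way.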
In order to improve the order of convergence of Newton's method, several third-order techniques have been developed [1–29]. For example, Halley [1, 4, 7, 10–15], super-Halley [12, 16–20], Chebyshev [1, 6, 7, 12, 13, 29], Euler [1, 3, 5, 11, 13], Chun, Sharma [17, 22], Amat, Traub, Barrada et al. [23, 24], Chun and Neta, Jaiswal, Liu and Li, and Singh and Jaiswal are interesting methods.
Furthermore, considerable efforts have been made to construct higher-order methods. Jaiswal, Kim and Geum, and Thukral proposed some fourth-order method families. Fang et al. constructed some fifth-order convergent iterative methods. Thukral, Kou et al., and Chun and Ham presented three sixth-order methods. Soleymani et al. [35, 36], Bi et al., Lotfi et al., and Cordero et al. proposed some families of eighth-order convergent methods. Soleymani et al., Lotfi et al., and Artidiello et al. developed some new sixteenth-order methods.
The purpose of this paper is to construct, from Halley's method and Taylor's polynomial, a new family of methods with cubic convergence for finding simple roots of nonlinear equations. We will show that the weight functions of these methods have particular expressions which depend on a parameter p, where p is a nonnegative integer, and that, if certain conditions are verified, the convergence speed of these sequences improves as p increases. Moreover, we will carry out a first study of the global convergence of these methods. Finally, the efficacy of some methods of the proposed family will be tested on a number of numerical examples, and a comparison with many third-order methods will be made.
2. Development of New Family of Halley’s Method
One of the best-known third-order methods is Halley's method, given by

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{2}{2 - L_f(x_n)}$, where $L_f(x_n) = \frac{f(x_n)\, f''(x_n)}{f'(x_n)^2}$.
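A minimal implementation of Halley's method might look as follows (an illustrative Python sketch; the helper names, tolerance, and test equation $x^3 + 4x^2 - 10 = 0$ are our choices):

```python
def halley(f, df, ddf, x0, tol=1e-14, max_iter=50):
    # Halley's method: x_{n+1} = x_n - (f/f') * 2 / (2 - L_f),
    # where L_f = f * f'' / f'^2.
    x = x0
    for _ in range(max_iter):
        fx, dfx, ddfx = f(x), df(x), ddf(x)
        if fx == 0.0:
            return x
        L = fx * ddfx / (dfx * dfx)
        step = (fx / dfx) * 2.0 / (2.0 - L)
        x -= step
        if abs(step) < tol:
            return x
    return x

# A common test equation with a simple real root near 1.365.
f = lambda x: x**3 + 4.0 * x**2 - 10.0
df = lambda x: 3.0 * x**2 + 8.0 * x
ddf = lambda x: 6.0 * x + 8.0
print(halley(f, df, ddf, 1.0))
```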
Using the second-order Taylor polynomial of $f$ at $x_n$, we obtain

$f(x) \approx f(x_n) + f'(x_n)(x - x_n) + \frac{1}{2} f''(x_n)(x - x_n)^2$,

where $x_n$ is an approximate value of α. The graph of this polynomial intersects the $x$-axis at some point $x_{n+1}$, which is the solution of the equation:

$f(x_n) + f'(x_n)(x_{n+1} - x_n) + \frac{1}{2} f''(x_n)(x_{n+1} - x_n)^2 = 0.$
Factoring $(x_{n+1} - x_n)$ from the last two terms, we obtain

$f(x_n) + (x_{n+1} - x_n)\left[f'(x_n) + \frac{1}{2} f''(x_n)(x_{n+1} - x_n)\right] = 0,$

so

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) + \frac{1}{2} f''(x_n)(x_{n+1} - x_n)}.$
This scheme is implicit because it does not express $x_{n+1}$ directly as a function of $x_n$. It can be made explicit by replacing the difference $(x_{n+1} - x_n)$ remaining in the right-hand side of Equation (9) with Halley's correction given in Equation (4); we get

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\left[1 + \frac{L_f(x_n)}{2\,(1 - L_f(x_n))}\right]$, where $L_f(x_n) = \frac{f(x_n)\, f''(x_n)}{f'(x_n)^2}$.

This method is known as super-Halley's method.
By repeating the same scenario many times and by replacing $(x_{n+1} - x_n)$, each time, with the last correction found, we derive the following iterative process, which represents a general family of Halley's method for finding simple roots, where p is a parameter, which is a nonnegative integer.
We can show that this correction can be explicitly expressed as a function of $L_f(x_n)$ on the interval in question, as follows:
Finally, the general family of Halley’s method is generated by
This scheme is simple and interesting because it regenerates both well-known and new methods. For example:
For p = 0, formula (11) corresponds to the classical Halley method.
For p = 1, formula (11) corresponds to the famous super-Halley method.
For p = 2, 3, 4, and 5, we obtain new methods given, respectively, by the following sequences:
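The construction above can be sketched in code. The following Python sketch (ours; the function names and tolerances are illustrative, and the seeding is our reading of the repeated-substitution scheme of Section 2) substitutes the last correction into the implicit scheme (9), starting from Newton's correction, so that p = 0 reproduces Halley's method and p = 1 super-Halley's:

```python
def family_step(f, df, ddf, x, p):
    # Implicit scheme (9): c = -f / (f' + 0.5 * f'' * c).
    # Seed with Newton's correction c = -f/f'; each substitution
    # raises the parameter by one (p = 0 -> Halley, p = 1 -> super-Halley).
    fx, dfx, ddfx = f(x), df(x), ddf(x)
    c = -fx / dfx  # Newton's correction
    for _ in range(p + 1):
        c = -fx / (dfx + 0.5 * ddfx * c)
    return x + c

def solve(f, df, ddf, x0, p, tol=1e-13, max_iter=50):
    # Run the method of parameter p until successive iterates agree.
    x = x0
    for _ in range(max_iter):
        x_new = family_step(f, df, ddf, x, p)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
ddf = lambda x: 2.0
for p in range(4):
    print(p, solve(f, df, ddf, 1.5, p))
```

All four members converge to the same simple root; only the per-step correction differs.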
3. Analysis of Convergence
3.1. Order of Convergence
Theorem 1. Let p be a parameter, where p is a nonnegative integer. We suppose that the function f has at least two continuous derivatives in the neighborhood of a zero α. Further, assuming that $f'(\alpha) \neq 0$ and that the initial point $x_0$ is sufficiently close to α, the methods defined by Equation (16) converge cubically and satisfy an error equation of the form $e_{n+1} = C_p e_n^3 + O(e_n^4)$, where $e_n = x_n - \alpha$ is the error at the nth iteration and $C_p$ is a constant depending on p and on the derivatives of f at α.
Proof. Let α be a simple root of f and $e_n = x_n - \alpha$ be the error in approximating α by $x_n$. We use the Taylor expansions of $f(x_n)$ and $f'(x_n)$ about α:

$f(x_n) = f'(\alpha)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + O(e_n^4)\right]$,
$f'(x_n) = f'(\alpha)\left[1 + 2 c_2 e_n + 3 c_3 e_n^2 + O(e_n^3)\right]$,

where $c_k = \frac{f^{(k)}(\alpha)}{k!\, f'(\alpha)}$, $k = 2, 3, \ldots$
Using (23), we obtain
Using Taylor’s series expansion  of about leads to
Knowing that , so
Thus, formula (30) becomes
Using Equation (27), we get
Finally, using Equation (31), we obtain the stated error equation, which completes the proof of the theorem.
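Theorem 1's cubic order can also be checked numerically. The sketch below (ours; the precision and test equation are illustrative choices) runs Halley's method, the p = 0 member of the family, on $f(x) = x^2 - 2$ in high-precision arithmetic and estimates the order of convergence from three consecutive errors:

```python
from decimal import Decimal, getcontext

# High precision keeps the tiny third-order errors resolvable,
# in the spirit of the very-high-precision computations of Section 5.
getcontext().prec = 100

def halley_step(x):
    # Halley's method applied to f(x) = x^2 - 2 simplifies to
    # x_{n+1} = x (x^2 + 6) / (3 x^2 + 2).
    return x * (x * x + 6) / (3 * x * x + 2)

root = Decimal(2).sqrt()
x = Decimal("1.5")
errors = []
for _ in range(3):
    x = halley_step(x)
    errors.append(abs(x - root))

# Order estimate from three consecutive errors:
# rho ~ ln(e3/e2) / ln(e2/e1); for a third-order method it is near 3.
e1, e2, e3 = errors
rho = (e3 / e2).ln() / (e2 / e1).ln()
print(float(rho))
```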
3.2. Global Convergence of the Methods of the Halley Family
We will make a first study of the global convergence of six selected methods from the proposed family (p = 0, 1, …, 5), in the case where they converge towards the root in a monotone fashion [6, 11, 13–15, 17, 19, 20].
Lemma 2. Let us write the I.F. associated with the sequence given by Equation (16).
Then, the derivatives of this I.F. are given by
3.2.1. Monotonic Convergence of the Sequences
Let p be a parameter, where p is an integer between 0 and 5. We consider the functions of a real variable defined on the interval by: where and Knowing that and that the derivatives of the I.F. are given by (39), we deduce that in . So, . By induction, we obtain that for all
Theorem 3. Let , , and on an interval containing the root of f. Then, the sequences given by Equation (16) are decreasing (resp., increasing) and converge to α from any point satisfying (resp. ).
Proof. Let us consider the case where ; then . Applying the mean value theorem gives:
Furthermore, from Equation (11) we have where
As , then , so for all .
In addition, we have , from which we deduce that . Now, it is easy to prove by induction that for all n.
So, and consequently the sequence converges. As α is the unique root of f in the interval, the limit is α. This completes the proof of Theorem 3.
Corollary 4. Let , , and on an interval containing the root of f. The sequences given by Equations (4), (10), (17), (18), (19), and (20) are decreasing (resp., increasing) and converge to α from any point satisfying (resp. ).
Proof. Taking into account that for every integer , it follows that the condition of Theorem 3 is satisfied since . By applying Theorem 3, we obtain the result (for example, see Section 5.1).
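The monotone convergence established by Theorem 3 and Corollary 4 can be illustrated numerically. In the sketch below (ours; the test function and starting point are illustrative), Halley's method is applied to $f(x) = x^2 - 2$, for which f, f′, and f″ are all positive to the right of the root; starting above the root, the iterates decrease monotonically towards $\sqrt{2}$:

```python
def halley_step(x):
    # Halley's method for f(x) = x^2 - 2, written in closed form:
    # x_{n+1} = x (x^2 + 6) / (3 x^2 + 2).
    return x * (x * x + 6) / (3 * x * x + 2)

xs = [3.0]  # starting point to the right of the root sqrt(2)
for _ in range(3):
    xs.append(halley_step(xs[-1]))

# The iterates decrease monotonically and stay above the root.
print(xs)
```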
4. Principal Advantage of the New Family
As the family is governed by formula (16), which depends on the parameter p, where p is a nonnegative integer, it is interesting to determine for which values of p, and under which conditions, the convergence is fastest.
Lemma 5. Let . Let and be defined, respectively, by the sequences and , given by formula (16).
Then, we have
Proof. We have the following:
We know that
We deduce that
Then, , which completes the proof of the lemma.
Theorem 6. Let . Let and be defined, respectively, by the sequences and given by Equation (16), with , , , and on an interval containing the root of f. Starting from the same initial point $x_0$, the convergence rate of the sequence is higher than that of the sequence ().
Proof. Suppose that the initial value satisfies , so . According to Corollary 4 given above, we know that if and in , the sequences () and () are decreasing and converge to α from any point .
Let and be defined, respectively, by () and (). Since and the two sequences are decreasing, we expect that for all n. This can be shown by induction. Let ; then from (46), where .
We know that and for
So, . As and , then .
Now, we assume that . Since, under the above hypotheses, is an increasing function in , we obtain
On the other hand, we have
We deduce that . So and the induction is complete. The case is similar to the previous one.
Consequently, the power of the proposed family has been shown analytically by proving that, under certain conditions, the convergence speed of its methods increases with the parameter p, where p is a nonnegative integer. As the methods of Halley and super-Halley are particular cases of this family with the smallest parameters, their convergence rates are lower than those of the new methods with higher parameters.
5. Numerical Results
For the numerical results, we use a fixed stopping criterion of and . The computations were performed using MATLAB R2015b.
In order to compare two methods, we give the number of iterations required to satisfy the stopping criterion, the number d of function (and derivative) evaluations per step, and the order of convergence q of the method. Based on q and d, an efficiency measure is defined by $E = q^{1/d}$ (the efficiency index). For example, a third-order method requiring three function evaluations per step has $E = 3^{1/3} \approx 1.442$.
Unfortunately, for methods of the same order q that demand the same number of function evaluations d, the efficiency index is the same. In this case, the comparison is made on the basis of the number of iterations. This number depends on how far the starting point $x_0$ is from α and on the value of the asymptotic constant. For two methods of the same order q, the one with the smaller asymptotic constant will converge faster than the one with the larger asymptotic constant, for a starting point sufficiently close to α. But if $x_0$ is too far from α (while still in the basin of attraction of α), it is possible that a method with a larger asymptotic constant converges faster. Thus, in order to make the comparison more realistic and fairer, it is preferable to use an approximate asymptotic constant at step n, defined by $C_n \approx \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^q}$, where $x_n$ and $x_{n+1}$ are two consecutive iterations. In general, choosing $x_0$ close enough to α and a very high precision (300 significant digits or more), and then taking $x_n$ and $x_{n+1}$ closer to the root, $C_n$ will tend towards the theoretical asymptotic constant.
Furthermore, we cannot compare two methods of different orders q demanding the same number of function evaluations d on the basis of the asymptotic constant. It is quite obvious that the method with the highest q is the fastest, for a starting point sufficiently close to the solution. But, if $x_0$ is too far from α (while still in the basin of attraction of α), the "order" of convergence is not necessarily q, especially in the first iterations of the method. Thus, it is more correct and judicious to use the computational order of convergence at step n, which can be approximated using the formula $\rho_n \approx \frac{\ln\left(|x_{n+1} - \alpha| / |x_n - \alpha|\right)}{\ln\left(|x_n - \alpha| / |x_{n-1} - \alpha|\right)}$, where $x_{n-1}$, $x_n$, and $x_{n+1}$ are three consecutive iterations. In general, choosing $x_0$ close enough to α and a very high precision (300 significant digits or more), and then taking $x_{n-1}$, $x_n$, and $x_{n+1}$ closer to the root, $\rho_n$ will tend towards the theoretical order of convergence q.
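The computational order of convergence described above can be sketched as follows (an illustrative Python version, ours; Newton's method, whose theoretical order is two, serves as the test method):

```python
import math

def newton_step(f, df, x):
    # One step of Newton's method: x - f(x)/f'(x).
    return x - f(x) / df(x)

def coc(x_prev, x_mid, x_next, root):
    # Computational order of convergence from three consecutive
    # iterations, using the error ratios
    # rho ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}) with e_k = |x_k - root|.
    e_prev, e_mid, e_next = (abs(v - root) for v in (x_prev, x_mid, x_next))
    return math.log(e_next / e_mid) / math.log(e_mid / e_prev)

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
root = math.sqrt(2.0)

xs = [1.5]
for _ in range(3):
    xs.append(newton_step(f, df, xs[-1]))

# Newton's method is second order, so the estimate should be near 2.
print(coc(xs[1], xs[2], xs[3], root))
```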
Here, the approximate asymptotic constant and the computational order of convergence will be calculated by using the same total number of function evaluations (or, if that is not possible, the same total number of iterations) for all methods.