Abstract

In this paper, we present a new family of methods for finding simple roots of nonlinear equations. The convergence analysis shows that the order of convergence of all these methods is three. The originality of this family lies in the fact that the sequences are defined by an explicit expression depending on a parameter $p$, where $p$ is a nonnegative integer. A first study of the global convergence of these methods is performed. The power of this family is illustrated analytically by proving that, under certain conditions, the convergence speed of the methods increases with the parameter $p$. The efficiency of this family is tested on a number of numerical examples. It is observed that our new methods require fewer iterations than many other third-order methods. In comparison with methods of the sixth and eighth order, the new ones behave similarly in the examples considered.

1. Introduction

Many problems in science and engineering [1–3] can be expressed in the form of the following nonlinear scalar equation:

  f(x) = 0,  (1)

where $f$ is a real analytic function. To approximate the solution $\alpha$, supposed simple, of Equation (1), we can use a fixed-point iteration method in which we find a function $g$, called an iteration function (I.F.) for Equation (1), and, from a starting value $x_0$ [4–6], we define a sequence

  x_{n+1} = g(x_n), \quad n = 0, 1, 2, \ldots  (2)

A point $\alpha$ is called a fixed point of $g$ if $g(\alpha) = \alpha$. Under suitable conditions, the convergence of the sequence $(x_n)$ towards $\alpha$ can be guaranteed.

One of the most famous and widely used methods to solve Equation (1) is the second-order Newton method, given by [3, 7–9]:

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.  (3)
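As a minimal sketch of the fixed-point scheme (2) with Newton's I.F. (3) (the test function, derivative, starting value, and tolerance below are illustrative choices, not taken from the paper):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Fixed-point iteration (2) with Newton's iteration function (3)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)      # Newton step (3)
        if abs(x_new - x) < tol:      # simple stopping criterion
            return x_new
        x = x_new
    return x

# Example: f(x) = x**2 - 2 has the simple root alpha = sqrt(2)
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))
```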

In order to improve the order of convergence of Newton's method, several third-order techniques have been elaborated [1–29]. For example, Halley [1, 4, 7, 10–15], super-Halley [12, 16–20], Chebyshev [1, 6, 7, 12, 13, 29], Euler [1, 3, 5, 11, 13], Chun [16], Sharma [17, 22], Amat [12], Traub [7], Barrada et al. [23, 24], Chun and Neta [25], Jaiswal [26], Liu and Li [27], and Singh and Jaiswal [28] are interesting methods.

Furthermore, considerable efforts have been made to construct higher order methods. Jaiswal [26], Kim and Geum [30], and Thukral [31] proposed some fourth-order method families. Fang et al. [32] constructed some fifth-order convergent iterative methods. Thukral [31], Kou et al. [33], and Chun and Ham [34] presented three sixth-order methods. Soleymani et al. [35, 36], Bi et al. [37], Lotfi et al. [38], and Cordero et al. [39] proposed some families of eighth-order convergent methods. Soleymani et al. [40], Lotfi et al. [38], and Artidiello et al. [41] developed some new methods of the sixteenth order.

The purpose of this paper is to construct, from Halley's method and Taylor's polynomial, a new family of methods with cubic convergence for finding simple roots of nonlinear equations. We will show that the weight functions of its methods have particular expressions which depend on a parameter $p$, where $p$ is a nonnegative integer, and that, if certain conditions are satisfied, the convergence speed of these sequences improves as $p$ increases. Moreover, we will carry out a first study of the global convergence of these methods. Finally, the efficacy of some methods of the proposed family will be tested on a number of numerical examples, and a comparison with many third-order methods will be carried out.

2. Development of a New Family of Halley's Methods

One of the best-known third-order methods is Halley's method, given by

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{2}{2 - L_f(x_n)},  (4)

where

  L_f(x) = \frac{f(x)\, f''(x)}{f'(x)^2}.  (5)

Using the second-order Taylor polynomial of $f$ at $x_n$, we obtain

  f(x) \approx f(x_n) + f'(x_n)(x - x_n) + \frac{1}{2} f''(x_n)(x - x_n)^2,  (6)

where $x_n$ is an approximate value of $\alpha$. The graph of this parabola intersects the $x$-axis at some point $x_{n+1}$, which is a solution of the equation:

  f(x_n) + f'(x_n)(x_{n+1} - x_n) + \frac{1}{2} f''(x_n)(x_{n+1} - x_n)^2 = 0.  (7)

Factoring $(x_{n+1} - x_n)$ from the last two terms, we obtain

  f(x_n) + (x_{n+1} - x_n)\left[ f'(x_n) + \frac{1}{2} f''(x_n)(x_{n+1} - x_n) \right] = 0,  (8)

so

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) + \frac{1}{2} f''(x_n)(x_{n+1} - x_n)}.  (9)

This scheme is implicit because it does not directly give $x_{n+1}$ as a function of $x_n$. It can be made explicit by replacing the $x_{n+1}$ remaining in the right-hand side of Equation (9) by Halley's correction given in Equation (4); we get

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{2 - L_f(x_n)}{2\left(1 - L_f(x_n)\right)},  (10)

where $L_f$ is defined in Equation (5). This scheme is known as super-Halley's method.

By repeating the same scenario many times and by replacing $x_{n+1}$ in the right-hand side of Equation (9), each time, with the last correction found, we derive the following iterative process, which represents a general family of Halley's method for finding simple roots:

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\, w_p\!\left(L_f(x_n)\right),  (11)

where the weight functions $w_p$ are generated by the recursion

  w_{-1}(t) = 1, \qquad w_p(t) = \frac{1}{1 - \frac{t}{2}\, w_{p-1}(t)},  (12)

and where $p$ is a parameter, which is a nonnegative integer.

We can show that $w_p$ can be explicitly expressed as a function of $t = L_f(x_n)$ on the interval $(-\infty, 1/2]$ as follows:

  w_p(t) = \frac{2\left[(1 + s)^{p+2} - (1 - s)^{p+2}\right]}{(1 + s)^{p+3} - (1 - s)^{p+3}},  (13)

where

  s = \sqrt{1 - 2t}.  (14)
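As a quick consistency check of this closed form against the recursion (a verification sketch of the two expressions written above; the sampled values of $t$ are arbitrary):

```python
import math

def w_rec(p, t):
    """Weight from the recursion (12), starting at w_{-1}(t) = 1."""
    w = 1.0
    for _ in range(p + 1):
        w = 1.0 / (1.0 - 0.5 * t * w)
    return w

def w_closed(p, t):
    """Weight from the closed form (13)-(14)."""
    s = math.sqrt(1.0 - 2.0 * t)
    return 2.0 * ((1 + s)**(p + 2) - (1 - s)**(p + 2)) / \
                 ((1 + s)**(p + 3) - (1 - s)**(p + 3))

for p in range(6):
    for t in (0.0, 0.2, 0.45):
        assert abs(w_rec(p, t) - w_closed(p, t)) < 1e-12
print("recursion (12) and closed form (13)-(14) agree")
```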

Finally, the general family of Halley's method is generated by

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{2\left[(1 + s_n)^{p+2} - (1 - s_n)^{p+2}\right]}{(1 + s_n)^{p+3} - (1 - s_n)^{p+3}}, \qquad s_n = \sqrt{1 - 2 L_f(x_n)}.  (16)

This scheme is simple and interesting because it regenerates both well-known and new methods. For example:

For $p = 0$, formula (11) corresponds to the classical Halley method $H_0$ given by Equation (4).

For $p = 1$, formula (11) corresponds to the famous super-Halley method $H_1$ given by Equation (10).

For $p = 2, 3, 4, 5$, we obtain the methods $H_2$, $H_3$, $H_4$, and $H_5$ given, respectively, by the following sequences (writing $L_n = L_f(x_n)$):

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{4(1 - L_n)}{L_n^2 - 6L_n + 4},  (17)

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{L_n^2 - 6L_n + 4}{3L_n^2 - 8L_n + 4},  (18)

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{2\left(3L_n^2 - 8L_n + 4\right)}{8 - 20L_n + 12L_n^2 - L_n^3},  (19)

  x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{8 - 20L_n + 12L_n^2 - L_n^3}{4\left(2 - 6L_n + 5L_n^2 - L_n^3\right)}.  (20)
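The following is a minimal, self-contained sketch of the whole family based on the closed-form weight (13)-(14); the test function $f(x) = x^3 + 4x^2 - 10$ and the starting point are illustrative choices, not taken from the paper's tables:

```python
import math

def halley_family_step(f, df, d2f, x, p):
    """One step of the family (16); requires L_f(x) <= 1/2."""
    fx, dfx = f(x), df(x)
    L = fx * d2f(x) / dfx**2                      # L_f(x), Equation (5)
    s = math.sqrt(1.0 - 2.0 * L)                  # Equation (14)
    w = 2.0 * ((1 + s)**(p + 2) - (1 - s)**(p + 2)) / \
              ((1 + s)**(p + 3) - (1 - s)**(p + 3))
    return x - fx / dfx * w

def halley_family(f, df, d2f, x0, p, tol=1e-14, max_iter=50):
    x = x0
    for n in range(1, max_iter + 1):
        x_new = halley_family_step(f, df, d2f, x, p)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

f   = lambda x: x**3 + 4 * x**2 - 10              # simple root near 1.3652
df  = lambda x: 3 * x**2 + 8 * x
d2f = lambda x: 6 * x + 8
for p in range(6):                                # H_0 (Halley) ... H_5
    root, n = halley_family(f, df, d2f, 1.5, p)
    print(f"p={p}: root={root:.15f}, iterations={n}")
```

For $p = 0$ and $p = 1$ this reproduces Halley's and super-Halley's iterations, which is a convenient sanity check on the weight function.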

3. Analysis of Convergence

3.1. Order of Convergence

Theorem 1. Let $p$ be a parameter, where $p$ is a nonnegative integer. We suppose that the function $f$ has at least two continuous derivatives in the neighborhood of a zero $\alpha$ of $f$. Further, assuming that $f'(\alpha) \neq 0$ and that $x_0$ is sufficiently close to $\alpha$, the methods defined by Equation (16) converge cubically and satisfy the error equation

  e_{n+1} = C_p e_n^3 + O(e_n^4),  (21)

where $e_n = x_n - \alpha$ is the error at the $n$th iteration and $C_p$ is the asymptotic error constant, expressed in terms of $c_j = f^{(j)}(\alpha)/(j!\, f'(\alpha))$.

Proof. Let $\alpha$ be a simple root of $f$ and $e_n = x_n - \alpha$ be the error in approximating $\alpha$ by $x_n$. We use the Taylor expansions [22] about $\alpha$:

  f(x_n) = f'(\alpha)\left[ e_n + c_2 e_n^2 + c_3 e_n^3 + O(e_n^4) \right],
  f'(x_n) = f'(\alpha)\left[ 1 + 2 c_2 e_n + 3 c_3 e_n^2 + O(e_n^3) \right],
  f''(x_n) = f'(\alpha)\left[ 2 c_2 + 6 c_3 e_n + O(e_n^2) \right],

where $c_j = f^{(j)}(\alpha)/(j!\, f'(\alpha))$. Dividing the first two expansions, we obtain

  \frac{f(x_n)}{f'(x_n)} = e_n - c_2 e_n^2 + \left(2 c_2^2 - 2 c_3\right) e_n^3 + O(e_n^4).

Using Taylor's series expansion [22] of $L_f$ about $\alpha$ leads to

  L_f(x_n) = 2 c_2 e_n + \left(6 c_3 - 6 c_2^2\right) e_n^2 + O(e_n^3).

Knowing that the weights satisfy the recursion (12), their expansions about $t = 0$ are

  w_p(t) = 1 + \frac{t}{2} + b_p t^2 + O(t^3), \qquad b_0 = \frac{1}{4}, \quad b_p = \frac{1}{2} \ (p \ge 1),

so

  w_p\!\left(L_f(x_n)\right) = 1 + c_2 e_n + \left(3 c_3 - 3 c_2^2 + 4 b_p c_2^2\right) e_n^2 + O(e_n^3).

Substituting the last two expansions in formula (11), we obtain the error equation:

  e_{n+1} = e_n - \frac{f(x_n)}{f'(x_n)}\, w_p\!\left(L_f(x_n)\right) = \left[\left(2 - 4 b_p\right) c_2^2 - c_3\right] e_n^3 + O(e_n^4).

Finally, $C_0 = c_2^2 - c_3$ (Halley's classical constant) and $C_p = -c_3$ for $p \ge 1$ (super-Halley's classical constant), which completes the proof of the theorem.
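As a quick numerical check of these constants (our own verification sketch, reusing the closed-form weight reconstructed in Section 2; the test function $f(x) = e^x - 1$ is an arbitrary choice with $\alpha = 0$, $c_2 = 1/2$, and $c_3 = 1/6$), one step from a tiny error $e_0$ should give $e_1 / e_0^3 \approx C_p$:

```python
from mpmath import mp, mpf, sqrt, exp

mp.dps = 60  # high precision so that the O(e^4) term is negligible

def family_step(f, df, d2f, x, p):
    L = f(x) * d2f(x) / df(x)**2                  # L_f(x), Equation (5)
    s = sqrt(1 - 2 * L)
    w = 2 * ((1 + s)**(p + 2) - (1 - s)**(p + 2)) / \
            ((1 + s)**(p + 3) - (1 - s)**(p + 3))
    return x - f(x) / df(x) * w

# Predicted: C_0 = c2^2 - c3 = 1/12 ~ 0.0833, C_p = -c3 = -1/6 for p >= 1.
f, df, d2f = (lambda x: exp(x) - 1), exp, exp
e0 = mpf('1e-8')                                  # alpha = 0: iterate = error
for p in (0, 1, 4):
    e1 = family_step(f, df, d2f, e0, p)
    print(p, mp.nstr(e1 / e0**3, 8))
```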

3.2. Global Convergence of the Halley Family's Methods

We will make a first study of the global convergence of six selected methods from the proposed family ($H_0$ to $H_5$), in the case where they converge towards the root in a monotone fashion [6, 11, 13–15, 17, 19, 20].

Lemma 2. Let us write the I.F. of $H_p$, from the sequence (16), as

  \Phi_p(x) = x - \frac{f(x)}{f'(x)}\, w_p\!\left(L_f(x)\right).  (38)

Then, the derivative of the I.F. of $H_p$ is given by

  \Phi_p'(x) = 1 - \left(1 - L_f(x)\right) w_p\!\left(L_f(x)\right) - \frac{f(x)}{f'(x)}\, w_p'\!\left(L_f(x)\right) L_f'(x).  (39)

3.2.1. Monotonic Convergence of the Sequences

Let $p$ be a parameter, where $p$ is an integer between 0 and 5. We consider the functions $w_p$ of a real variable $t$ defined on the interval $(-\infty, 1/2]$ by (13), where $s = \sqrt{1 - 2t}$ and $w_{-1}(t) = 1$. Knowing that $w_0(t) = 2/(2 - t) \in [1, 4/3]$ on $[0, 1/2]$ and that the derivatives of the I.F. are given by (39), we deduce that $0 \le \frac{t}{2}\, w_{p-1}(t) \le \frac{1}{2}$ in $[0, 1/2]$. So, $1 \le w_p(t) \le 2$. By induction, we obtain that $1 \le w_p(t) \le 2$ for all $p$ and all $t \in [0, 1/2]$.

Theorem 3. Let $f' > 0$, $f'' > 0$, and $f''' \le 0$ (resp., $f'' < 0$ and $f''' \ge 0$) on an interval $I$ containing the root $\alpha$ of $f$. Then, the sequences given by Equation (16) are decreasing (resp., increasing) and converge to $\alpha$ from any point $x_0 \in I$ satisfying $x_0 > \alpha$ (resp., $x_0 < \alpha$).

Proof. Let us consider the case where $x_0 > \alpha$; then $f(x_0) > 0$. The application of the mean value theorem gives:

  f(x_n) = f(x_n) - f(\alpha) = f'(\xi_n)(x_n - \alpha) > 0, \qquad \xi_n \in (\alpha, x_n),

whenever $x_n > \alpha$.

Furthermore, from Equation (11) we have

  x_{n+1} - x_n = -\frac{f(x_n)}{f'(x_n)}\, w_p\!\left(L_f(x_n)\right),

where $w_p$ is the weight function (13).

As $f''' \le 0$, the function $g = f'^2 - 2 f f''$ is nondecreasing on $[\alpha, x_n]$ (since $g' = -2 f f''' \ge 0$) and $g(\alpha) = f'(\alpha)^2 > 0$; then $0 \le L_f(x_n) \le 1/2$, so $w_p(L_f(x_n)) \ge 1$ and $x_{n+1} < x_n$ for all $x_n > \alpha$.

In addition, we have $w_p(t) \le 2/(1 + \sqrt{1 - 2t})$ on $[0, 1/2]$; we deduce that $x_{n+1}$ is not smaller than the root of the second-order Taylor polynomial (6), which in turn is not smaller than $\alpha$ because $f''' \le 0$. Now, it is easy to prove by induction that $\alpha \le x_{n+1} \le x_n$ for all $n \ge 0$.

Thereby, the sequences (16) are decreasing and bounded below; hence, they converge towards a limit $l$, where $l \ge \alpha$. So, by calculating the limit of Equation (16), we obtain $f(l)\, w_p(L_f(l)) = 0$. We have $f'(l) > 0$ and $w_p(t) \ge 1$, for every real $t \in [0, 1/2]$.

So, $f(l) = 0$, and consequently $l$ is a root of $f$. As $\alpha$ is the unique root of $f$ in $I$, therefore $l = \alpha$. This completes the proof of Theorem 3.

Corollary 4. Let $f' > 0$, $f'' > 0$, and $f''' \le 0$ (resp., $f'' < 0$ and $f''' \ge 0$) on an interval $I$ containing the root $\alpha$ of $f$. The sequences given by Equations (4), (10), (17), (18), (19), and (20) are decreasing (resp., increasing) and converge to $\alpha$ from any point $x_0 \in I$ satisfying $x_0 > \alpha$ (resp., $x_0 < \alpha$).

Proof. Taking into account that the sequences given by Equations (4), (10), (17), (18), (19), and (20) correspond to Equation (16) with $p = 0, 1, 2, 3, 4, 5$, for every such integer $p$, it follows that the conditions of Theorem 3 are well satisfied. By applying Theorem 3, we obtain the thesis (for example, see Section 5.1).
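As an illustration of this monotone behavior (our own example and code sketch, not one of the paper's test cases): $f(x) = x^{3/2} - 8$ satisfies $f' > 0$, $f'' > 0$, and $f''' \le 0$ on $[4, 9]$, with root $\alpha = 4$, so the iterates should decrease toward $\alpha$ from $x_0 = 9$:

```python
import math

def step(f, df, d2f, x, p):
    # one iteration of the family (16), via the closed-form weight (13)-(14)
    L = f(x) * d2f(x) / df(x)**2
    s = math.sqrt(1.0 - 2.0 * L)
    w = 2.0 * ((1 + s)**(p + 2) - (1 - s)**(p + 2)) / \
              ((1 + s)**(p + 3) - (1 - s)**(p + 3))
    return x - f(x) / df(x) * w

f   = lambda x: x**1.5 - 8
df  = lambda x: 1.5 * x**0.5
d2f = lambda x: 0.75 / x**0.5
for p in (0, 1, 5):
    x, seq = 9.0, []
    for _ in range(4):
        x = step(f, df, d2f, x, p)
        seq.append(round(x, 10))
    print(f"p={p}: {seq}")   # each sequence decreases monotonically to 4
```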

4. Principal Advantage of the New Family

As the family is governed by formula (16), depending on the parameter $p$, where $p$ is a nonnegative integer, it would be interesting to look for which values of $p$, and under which conditions, the convergence is faster.

Lemma 5. Let $q > p \ge 0$ be two integers, and let $w_p$ and $w_q$ be the weight functions of the sequences $H_p$ and $H_q$ given by formula (16).

Then, for every $t \in [0, 1/2]$, we have

  1 \le w_p(t) \le w_q(t) \le 2.

Proof. We have the following recursion, from (12): $w_{p+1}(t) = 1 / \left(1 - \frac{t}{2}\, w_p(t)\right)$.

We know that, for fixed $t \in [0, 1/2]$, the map $x \mapsto 1/\left(1 - \frac{t}{2} x\right)$ is increasing on $[1, 2]$, and that $w_{-1}(t) = 1 \le w_0(t)$.

We deduce, by induction on $p$, that $w_p(t) \le w_{p+1}(t)$ for every $p \ge -1$.

As $1 \le w_p(t) \le 2$ on $[0, 1/2]$ (Section 3.2.1), then $1 \le w_p(t) \le w_q(t) \le 2$, which completes the proof of the lemma.

Theorem 6. Let $q > p \ge 0$ be two integers. Let $(x_n)$ and $(y_n)$ be defined, respectively, by the sequences $H_p$ and $H_q$ given by Equation (16), with $f' > 0$, $f'' > 0$, and $f''' \le 0$ on an interval $I$ containing the root $\alpha$ of $f$. Starting from the same initial point $x_0 = y_0 \in I$, the convergence rate of the sequence $H_q$, $(y_n)$, is higher than that of the sequence $H_p$, $(x_n)$.

Proof. Suppose that the initial value satisfies $x_0 > \alpha$, so $f(x_0) > 0$. According to Corollary 4 given above, we know that if $f' > 0$, $f'' > 0$, and $f''' \le 0$ in $I$, the sequences $(x_n)$ and $(y_n)$ are decreasing and converge to $\alpha$ from any point $x_0 \in I$ with $x_0 > \alpha$.

Since $x_0 = y_0$ and the two sequences are decreasing, we expect that $\alpha \le y_n \le x_n$ for all $n \ge 0$. This can be shown by induction. Let $n = 0$; then, from (11),

  x_1 - y_1 = \frac{f(x_0)}{f'(x_0)} \left[ w_q(t_0) - w_p(t_0) \right],

where $t_0 = L_f(x_0)$.

We know that $f(x_0) > 0$ and $f'(x_0) > 0$, and that $0 \le t_0 \le 1/2$.

So, by Lemma 5, $w_q(t_0) \ge w_p(t_0)$. As $f(x_0)/f'(x_0) > 0$, then $y_1 \le x_1$.

Now, we assume that $\alpha \le y_n \le x_n$. Since, under the above hypotheses, the I.F. $\Phi_q$ is an increasing function in $I$, we obtain $y_{n+1} = \Phi_q(y_n) \le \Phi_q(x_n)$.

On the other hand, we have $\Phi_q(x_n) \le \Phi_p(x_n) = x_{n+1}$, again by Lemma 5.

We deduce that $y_{n+1} \le x_{n+1}$. So $\alpha \le y_{n+1} \le x_{n+1}$, and the induction is completed. The case $x_0 < \alpha$ is similar to the previous one.

Consequently, the power of the proposed family has been shown analytically by proving that, under certain conditions, the convergence speed of its methods increases with the parameter $p$, where $p$ is a nonnegative integer. As the methods of Halley and super-Halley are particular cases of this family with the smallest parameters ($p = 0$ and $p = 1$), their convergence rates are lower than those of the other new methods with higher parameters.

5. Numerical Results

For the numerical results, we use a fixed stopping criterion on the difference between consecutive iterates, $|x_{n+1} - x_n|$, and on the residual $|f(x_{n+1})|$. The computations were performed using MATLAB R2015b.

In order to compare two methods, we give the number of iterations ($N$) required to satisfy the stopping criterion, the number of function (and derivative) evaluations per step ($\theta$), and the order of convergence $\rho$ of the method. Based on $\rho$ and $\theta$, there is an efficiency measure defined by $E = \rho^{1/\theta}$ (efficiency index).
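As a worked instance of this index (using the fact, visible in formula (16), that each step of the family evaluates $f$, $f'$, and $f''$ once):

  \rho = 3, \quad \theta = 3 \quad \Longrightarrow \quad E = \rho^{1/\theta} = 3^{1/3} \approx 1.442.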

Unfortunately, for methods of the same order that demand the same number of function evaluations $\theta$, the efficiency index is the same. In this case, the comparison is made on the basis of the number of iterations ($N$). This number depends on how far the starting point $x_0$ is from $\alpha$ and on the value of the asymptotic constant. For two methods of the same order $\rho$, the one having the smaller asymptotic constant will converge faster than the one having the larger asymptotic constant, for a starting point sufficiently close to $\alpha$. But if $x_0$ is too far from $\alpha$ (and in the basin of attraction of $\alpha$), it is possible that a method with a larger asymptotic constant converges faster [18]. Thus, in order to make the comparison more realistic and fairer, it is preferable to use an approximate asymptotic constant at step $n$, defined by

  \hat{C}_n = \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^3},  (53)

where $x_n$ and $x_{n+1}$ are two consecutive iterations. In general, choosing $x_0$ close enough to $\alpha$ and a very high precision (300 significant digits or more), then taking $x_n$ and $x_{n+1}$ closer to the root, $\hat{C}_n$ will tend towards the theoretical asymptotic constant.

Furthermore, we cannot compare two methods of different order $\rho$ demanding the same number of function evaluations $\theta$ on the basis of the asymptotic constant. It is quite obvious that the method with the highest $\rho$ is the fastest, for a starting point sufficiently close to the solution. But, if $x_0$ is too far from $\alpha$ (and in the basin of attraction of $\alpha$), the "order" of convergence is not necessarily $\rho$, especially for the first iterations of the method [10]. Thus, it is more correct and judicious to use the computational order of convergence at step $n$, which can be approximated using the formula [37]:

  \rho_n \approx \frac{\ln\left(|x_{n+1} - \alpha| / |x_n - \alpha|\right)}{\ln\left(|x_n - \alpha| / |x_{n-1} - \alpha|\right)},  (54)

where $x_{n-1}$, $x_n$, and $x_{n+1}$ are three consecutive iterations. In general, choosing $x_0$ close enough to $\alpha$ and a very high precision (300 significant digits or more), then taking $x_{n-1}$, $x_n$, and $x_{n+1}$ closer to the root, $\rho_n$ will tend towards the theoretical order of convergence $\rho$.
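A small sketch of both diagnostics (assuming the forms of (53) and (54) as written above, with the root $\alpha$ known, as in the paper's tables; the 300-digit precision follows the text):

```python
from mpmath import mp, fabs, log

mp.dps = 300  # very high working precision, as suggested in the text

def diagnostics(xs, alpha, order=3):
    """Approximate asymptotic constants (53) and COC values (54).

    xs    : list of consecutive high-precision iterates x_0, x_1, ...
    alpha : the known root (as displayed in Table 2)
    """
    e = [fabs(x - alpha) for x in xs]
    C   = [e[n + 1] / e[n]**order for n in range(len(xs) - 1)]    # (53)
    rho = [log(e[n + 1] / e[n]) / log(e[n] / e[n - 1])            # (54)
           for n in range(1, len(xs) - 1)]
    return C, rho

# Usage idea: collect iterates of a method (for instance, the family sketch
# from Section 2 run in mpmath arithmetic), then check that rho tends to 3
# and that C stabilizes near the theoretical asymptotic constant.
```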

Here, the values of $\hat{C}_n$ and $\rho_n$ will be calculated by using the same total number of function evaluations (or, if not possible, the same total number of iterations) for all methods.

The test functions used in Tables 1–7, and their roots, are displayed in Table 2.

5.1. Numerical Comparison of the Six Proposed Methods

Example: For the test function considered on its interval (see Table 2), we have $f' > 0$, $f'' > 0$, and $f''' \le 0$ in this interval. It is easy to prove these inequalities there. Since, in addition, the starting point lies in this interval, the conditions of Theorem 6 are satisfied for the sequences given by Equations (4), (10), (17), (18), (19), and (20). By taking $x_0 > \alpha$, we have $\alpha \le x_{n+1} \le x_n$ for all $n \ge 0$ and all six methods.

In this case, it is not easy to compare the chosen methods because they have the same order of convergence ($\rho = 3$), the same number of function evaluations per step ($\theta = 3$), the same efficiency index $E = 3^{1/3}$, and the same theoretical asymptotic constant (see formula (21)). In Table 1, the comparison will thus be made on the basis of the number of iterations ($N$), the approximate asymptotic constants $\hat{C}_n$ defined by (53), and/or the computational order of convergence $\rho_n$ defined by (54), as we know that the method with the smallest values of $N$ and $\hat{C}_n$ and/or the highest values of $\rho_n$ will be locally faster.

We note in Table 1 that:

(i) All the sequences are decreasing and converge to the solution of the equation in the interval considered.

(ii) By increasing the value of the parameter $p$, where $p$ is a nonnegative integer, the values of $N$ and $\hat{C}_n$ decrease and the value of $\rho_n$ increases. Thus, the convergence speed of these methods increases with the parameter $p$.

(iii) The convergence rates of the new methods, in which the parameter $p$ is high, are higher than those of Halley's method ($H_0$) and super-Halley's method ($H_1$), in which the parameter is low.

The given example illustrates the great importance of Theorem 6, which stipulates that, under certain conditions, the higher the parameter $p$ is ($p$ being a nonnegative integer), the faster the convergence of the methods becomes.

5.2. Comparison with Other Third-Order Methods

In Table 3, we present the numerical results obtained by employing various cubically convergent iterative methods and Newton's method. We compare Newton's method (NM) defined by formula (3), Chebyshev's method (CB) defined by (16) in [16], Chun's method (CH) defined by Equation (30) in [16], Sharma's method (SM) defined by Equation (26) in [17], Halley's method (H0) defined by Equation (4) given before, and super-Halley's method (H1) given by Equation (10) above, with the four new methods designated as H2, H3, H4, and H5 and defined above, respectively, by Equations (17), (18), (19), and (20).

In Table 3, all the third-order methods have the same order of convergence ($\rho = 3$) and require the same number of function evaluations per step. Consequently, they have the same efficiency index. Thus, the comparison in Table 3 can be made on the basis of the number of iterations ($N$) and the approximate asymptotic constants $\hat{C}_n$ defined by (53). We know that the method with the smallest values of $N$ and $\hat{C}_n$ is locally faster.

From the numerical results given in Table 3, we see that the four proposed methods of the new family appear more interesting and effective than the other chosen third-order methods because, in the majority of the selected examples, our methods converge with fewer iterations and smaller approximate asymptotic constants.

5.3. Comparison with Higher Order Methods

Now, we compare the four selected methods of the new family with some higher order methods: B, an eighth-order method, denotes the method of Bi et al. [37] (formula (36)); FG and NR, two fifth-order methods, denote, respectively, the methods of Fang et al. [32] (formula (2)) and Muhammad et al. [42] (Algorithm 2.4); CC represents the Chun and Ham method [34] (formulas (12), (13), (14)) and K denotes the method of Kou [33] (first formula), these last two methods being of sixth order; WG, a fourth-order iterative method, denotes the method of Wang [43] (formula (23)).

Table 4 shows the number of iterations ($N$) and the number of function evaluations required to approximate the root.

In Table 5, we exhibit the absolute value of the error $|x_n - \alpha|$, the difference between consecutive iterations $|x_{n+1} - x_n|$, the absolute value of the function $|f(x_n)|$, the computational order of convergence $\rho_n$, and the efficiency index $E$.

The comparison in Table 4 with several fifth-, sixth-, and even eighth-order methods confirms the efficiency and power of the newly proposed family. In fact, in most of the considered examples, Table 4 shows that our methods behave similarly to the higher order ones, as they require an equal or smaller number of function evaluations. Table 5 confirms the power of higher order methods, which generally show higher values of the computational order of convergence ($\rho_n$) if the starting point is sufficiently close to $\alpha$. However, for the first case, in which the starting point is far from the root $\alpha$, the higher order methods (FG, CC, and K) show low values of the computational order of convergence, contrary to our methods. This remark leads us to test some functions in order to study the variation of $\rho_n$ as a function of $x_0$. These results are displayed in Tables 6 and 7, which show that, for our methods, the choice of an $x_0$ far from the root $\alpha$ results in only a small variation of $\rho_n$ compared to its theoretical value ($\rho = 3$). On the contrary, for the high-order methods, the further $x_0$ is from $\alpha$, the more the value of $\rho_n$ decreases during the first iterations. This leads us to think that these high-order methods start the first iterations with a speed lower than the nominal one and then, over the course of the iterations, progressively regain speed, reaching their maximum in the last iterations. Thus, the delay incurred in the first iterations can lead to a decrease in the average speed of convergence and, consequently, to an increase in the number of iterations $N$. This would explain why, in some cases, our methods, which are of order 3, show values of $N$ and numbers of function evaluations similar to, or even smaller than, those of higher order methods, contrary to the predictions. Having said that, the power of high-order methods is confirmed on several levels, and it manifests itself especially when $x_0$ is close enough to the root and the required precision is very high.

6. Conclusion

In this paper, we have built a new family of Halley's methods with third-order convergence for solving nonlinear equations with simple roots. The proposed scheme is interesting because it regenerates Halley's method, super-Halley's method, and an infinity of new methods. The originality of this family lies, on the one hand, in the fact that its sequences are governed by a single formula depending on a parameter $p$, where $p$ is a nonnegative integer, and, on the other hand, in the fact that, under certain conditions, the convergence speed of its methods improves when the value of $p$ increases. In order to reveal the quality of the new family, we focused on four of its methods. A first study of the global convergence of these selected methods was carried out. To test the new techniques, several numerical examples were produced. The performance of our methods was compared with well-known methods of similar or higher order. The numerical results clearly illustrate the efficiency of the techniques of the new family proposed in this article.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank Dr. Bruno Carpentieri and all the members of the Editorial Board who were responsible for dealing with this paper, for improving its content. We would also like to thank Miss Chaymae Salhi for her help and contribution during the writing and correction of the paper.