Journal of Applied Mathematics
Research Article | Open Access
Volume 2020 | Article ID 3561743 | 13 pages | https://doi.org/10.1155/2020/3561743

A New Class of Halley's Method with Third-Order Convergence for Solving Nonlinear Equations

Mohammed Barrada, Mariya Ouaissa, Yassine Rhazali, Mariyam Ouaissa

Academic Editor: Bruno Carpentieri
Received: 05 Mar 2020
Revised: 04 Jun 2020
Accepted: 12 Jun 2020
Published: 02 Jul 2020

Abstract

In this paper, we present a new family of methods for finding simple roots of nonlinear equations. The convergence analysis shows that the order of convergence of all these methods is three. The originality of this family lies in the fact that these sequences are defined by an explicit expression which depends on a parameter $p$, where $p$ is a nonnegative integer. A first study on the global convergence of these methods is performed. The power of this family is illustrated analytically by justifying that, under certain conditions, the convergence speed of the methods increases with the parameter $p$. The family's efficiency is tested on a number of numerical examples. It is observed that our new methods take fewer iterations than many other third-order methods. In comparison with methods of the sixth and eighth order, the new ones behave similarly in the examples considered.

1. Introduction

Many problems in science and engineering [1–3] can be expressed in the form of the following nonlinear scalar equation:
$$f(x) = 0,$$
where $f$ is a real analytic function. To approximate a solution $\alpha$ of Equation (1), assumed to be simple, we can use a fixed-point iteration method in which we find a function $\Phi$, called an iteration function (I.F.) for $f$, and, from a starting value $x_0$ [4–6], we define the sequence
$$x_{n+1} = \Phi(x_n), \quad n = 0, 1, 2, \ldots$$

A point $x^*$ is called a fixed point of $\Phi$ if $\Phi(x^*) = x^*$. Under suitable conditions, the convergence of the sequence $(x_n)$ towards $\alpha$ can be guaranteed.
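As a minimal sketch of such an iteration (our own illustration, not from the paper; the iteration function, starting value, and tolerance are assumptions):

```python
# Minimal fixed-point iteration x_{n+1} = phi(x_n).
import math

def fixed_point(phi, x0, tol=1e-12, max_iter=200):
    """Iterate phi from x0 until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: phi = cos has the fixed point 0.7390851332151607,
# which also appears among the roots listed in Table 2.
print(fixed_point(math.cos, 1.0))
```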

One of the most famous and widely used methods for solving Equation (1) is the second-order Newton method, given by [3, 7–9]:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
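A minimal runnable sketch of the Newton iteration (our own illustration; the sample equation and all names are assumptions, not the paper's):

```python
# Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n).

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate Newton's method until |f(x)| < tol or max_iter is reached."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)    # Newton correction
    return x

# Example: the simple root of x^3 - 2 = 0 is alpha = 2**(1/3).
print(newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5))  # ~1.2599210498948732
```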

In order to improve the order of convergence of Newton's method, several third-order techniques have been elaborated [1–29]. For example, Halley [1, 4, 7, 10–15], super-Halley [12, 16–20], Chebyshev [1, 6, 7, 12, 13, 29], Euler [1, 3, 5, 11, 13], Chun [16], Sharma [17, 22], Amat [12], Traub [7], Barrada et al. [23, 24], Chun and Neta [25], Jaiswal [26], Liu and Li [27], and Singh and Jaiswal [28] are interesting methods.

Furthermore, considerable efforts have been made to construct higher-order methods. Jaiswal [26], Kim and Geum [30], and Thukral [31] proposed some fourth-order method families. Fang et al. [32] constructed some fifth-order convergent iterative methods. Thukral [31], Kou et al. [33], and Chun and Ham [34] presented three sixth-order methods. Soleymani et al. [35, 36], Bi et al. [37], Lotfi et al. [38], and Cordero et al. [39] proposed some families of eighth-order convergent methods. Soleymani et al. [40], Lotfi et al. [38], and Artidiello et al. [41] developed some new methods of the sixteenth order.

The purpose of this paper is to construct, from Halley's method and Taylor's polynomial, a new family of methods with cubic convergence for finding simple roots of nonlinear equations. We will show that the weight functions of its methods have particular expressions which depend on a parameter $p$, where $p$ is a nonnegative integer, and that, if certain conditions are satisfied, the convergence speed of these sequences improves as $p$ increases. From this study, we will also examine the global convergence of these methods. Finally, the efficacy of some methods of the proposed family will be tested on a number of numerical examples, and a comparison with many third-order methods will be carried out.

2. Development of New Family of Halley’s Method

One of the best-known third-order methods is Halley's method, given by
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{1}{1 - L_n/2},$$
where
$$L_n = \frac{f(x_n)\, f''(x_n)}{f'(x_n)^2}.$$

Using the second-order Taylor polynomial of $f$ at $x_n$, we obtain
$$f(x) \approx f(x_n) + f'(x_n)(x - x_n) + \frac{f''(x_n)}{2}(x - x_n)^2,$$
where $x_n$ is an approximate value of $\alpha$. The graph of this parabola intersects the $x$-axis at some point $x_{n+1}$, which is the solution of the equation:
$$f(x_n) + f'(x_n)(x_{n+1} - x_n) + \frac{f''(x_n)}{2}(x_{n+1} - x_n)^2 = 0.$$

Factoring $(x_{n+1} - x_n)$ from the last two terms, we obtain
$$f(x_n) + (x_{n+1} - x_n)\left[f'(x_n) + \frac{f''(x_n)}{2}(x_{n+1} - x_n)\right] = 0,$$
so
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) + \frac{f''(x_n)}{2}(x_{n+1} - x_n)}.$$

This scheme is implicit because it does not directly give $x_{n+1}$ as a function of $x_n$. It can be made explicit by replacing the $x_{n+1}$ remaining in the right-hand side of Equation (9) with Halley's correction given in Equation (4); we get
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \cdot \frac{1 - L_n/2}{1 - L_n},$$
where $L_n = f(x_n) f''(x_n)/f'(x_n)^2$. This scheme is simply known as super-Halley's method.
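This substitution step can be checked symbolically. The following sketch (our own, using sympy; the symbol L stands for $L_n$ with $f$, $f'$, $f''$ frozen at $x_n$) confirms that one substitution turns Halley's weight into super-Halley's:

```python
# Substituting the Halley correction c = -(f/f') * H into the implicit
# scheme x = x_n - f / (f' + (f''/2) c) and dividing through by f'
# gives a new weight 1 / (1 - (L/2) * H). Starting from Halley's weight
# H = 1/(1 - L/2), this should reproduce super-Halley's (1 - L/2)/(1 - L).
import sympy as sp

L = sp.symbols('L')
halley = 1 / (1 - L / 2)
super_halley = sp.simplify(1 / (1 - (L / 2) * halley))
print(super_halley)   # a form equivalent to (1 - L/2)/(1 - L)
```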

By repeating the same scenario many times and by replacing $x_{n+1}$, each time, with the last correction found, we derive the following iterative process, which represents a general family of Halley's method for finding simple roots:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\, H_p(L_n),$$
where $H_p$ denotes the weight function obtained after $p + 1$ substitution steps and $p$ is a parameter, which is a nonnegative integer.

We can show that $H_p$ can be explicitly expressed as a function of $L_n$ on the interval on which it is defined.

Finally, the general family of Halley's method is generated by
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\, H_p(L_n), \quad n = 0, 1, 2, \ldots$$

This scheme is simple and interesting because it regenerates both well-known and new methods (a runnable sketch of the family follows this list). For example,

For $p = 0$, the formula (11) corresponds to the classical Halley method $H_0$.

For $p = 1$, the formula (11) corresponds to the famous super-Halley method $H_1$.

For $p = 2$ and $p = 3$, we obtain the new methods $H_2$ and $H_3$, given, respectively, by the sequences (17) and (18).
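Below is a minimal runnable sketch of the family. The recurrence for $H_p$ is our reconstruction of the repeated-substitution process described above (it reproduces Halley at $p = 0$ and super-Halley at $p = 1$); the function names, test equation, and tolerances are our own choices, not the paper's:

```python
# Sketch of the family x_{n+1} = x_n - (f/f') * H_p(L_n), where
# L_n = f(x_n) f''(x_n) / f'(x_n)^2. The weight H_p is built by repeating
# the substitution step: H_{-1} = 1 (Newton's correction) and
# H_p = 1 / (1 - (L/2) H_{p-1}); p = 0 gives Halley, p = 1 super-Halley.

def weight(L, p):
    """Weight function H_p(L), obtained by p + 1 substitution steps."""
    H = 1.0                       # H_{-1} = 1 corresponds to Newton
    for _ in range(p + 1):
        H = 1.0 / (1.0 - 0.5 * L * H)
    return H

def halley_family(f, f1, f2, x0, p, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - (f/f') * H_p(L_n) from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d1 = f1(x)
        L = fx * f2(x) / d1**2    # L_n = f f'' / f'^2
        x = x - fx / d1 * weight(L, p)
    return x

# Halley (p = 0) and H_3 (p = 3) on the illustrative equation x^3 - 2 = 0:
f, f1, f2 = (lambda x: x**3 - 2), (lambda x: 3 * x**2), (lambda x: 6 * x)
print(halley_family(f, f1, f2, 1.5, p=0))   # ~1.2599210498948732
print(halley_family(f, f1, f2, 1.5, p=3))   # same root
```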

3. Analysis of Convergence

3.1. Order of Convergence

Theorem 1. Let $p$ be a parameter, where $p$ is a nonnegative integer. We suppose that the function $f$ has at least two continuous derivatives in the neighborhood of a zero $\alpha$ of $f$. Further, assuming that $f'(\alpha) \neq 0$ and that $x_0$ is sufficiently close to $\alpha$, the methods defined by Equation (16) converge cubically and satisfy the error equation:
$$e_{n+1} = K_p\, e_n^3 + O(e_n^4),$$
where $e_n = x_n - \alpha$ is the error at the $n$th iteration and $K_p$ is the asymptotic error constant of the method $H_p$.

Proof. Let $\alpha$ be a simple root of $f$ and $e_n = x_n - \alpha$ be the error in approximating $\alpha$ by $x_n$. We use the Taylor expansions [22] about $\alpha$:
$$f(x_n) = f'(\alpha)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + O(e_n^4)\right],$$

where $c_k = f^{(k)}(\alpha)/(k!\, f'(\alpha))$ for $k = 2, 3, \ldots$

Using (23), we obtain

Using Taylor’s series expansion [22] of about leads to

Knowing that , so

We have

Thus, formula (30) becomes

Using Equation (27), we get

Substituting Equations (28) and (33) into formula (11), we obtain the error equation:

Finally, using Equation (31), we obtain the stated error equation, which completes the proof of the theorem.
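As a quick empirical illustration of Theorem 1 (our own check, not part of the paper; the test function, the parameter choice $p = 2$, and the working precision are assumptions), the ratios $e_{n+1}/e_n^3$ stabilize near a constant, consistent with cubic convergence:

```python
# Empirical check that e_{n+1} / e_n^3 stabilizes (cubic convergence) for
# the reconstructed family, here with p = 2 on the illustrative equation
# x^3 - 2 = 0; 50-digit arithmetic via mpmath keeps the small errors visible.
from mpmath import mp, mpf, cbrt

mp.dps = 50
alpha = cbrt(2)                   # the simple root

def step(x, p):
    fx, d1, d2 = x**3 - 2, 3 * x**2, 6 * x
    L = fx * d2 / d1**2
    H = mpf(1)
    for _ in range(p + 1):        # weight H_p by repeated substitution
        H = 1 / (1 - L / 2 * H)
    return x - fx / d1 * H

x = mpf('1.4')
for n in range(3):
    x_new = step(x, p=2)
    print(n, (x_new - alpha) / (x - alpha)**3)   # ratios approach a constant K_2
    x = x_new
```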

3.2. Global Convergence of the Halley Family's Methods

We will make a first study of the global convergence of six selected methods from the proposed family ($p = 0, 1, \ldots, 5$), in the case where they converge towards the root in a monotone fashion [6, 11, 13–15, 17, 19, 20].

Lemma 2. Let us write the I.F. of the method $H_p$, derived from the sequence (16), as

Then, the derivatives of the I.F. of $H_p$ are given by

3.2.1. Monotonic Convergence of the Sequences

Let $p$ be a parameter, where $p$ is an integer between 0 and 5. We consider the functions of a real variable defined on the interval by: where and . Knowing that , and that the derivatives of the I.F. are given by (39), we deduce that in . So . By induction, we obtain that for all .

Theorem 3. Let , , and on an interval containing the root $\alpha$ of $f$. Then, the sequences given by Equation (16) are decreasing (resp., increasing) and converge to $\alpha$ from any point satisfying (resp., ).

Proof. Let us consider the case where ; then . Applying the mean value theorem gives:

Furthermore, from Equation (11) we have where

As , then , so for all .

In addition, we have , from which we deduce that . Now, it is easy to prove by induction that for all .

Thus, the sequences (16) are decreasing and converge towards a limit , where . So, by taking the limit in Equation (16), we obtain . We have and , for every real .

So , and consequently . As is the unique root of $f$ in , it follows that . This completes the proof of Theorem 3.

Corollary 4. Let , , and on an interval containing the root $\alpha$ of $f$. The sequences given by Equations (4), (10), (17), (18), (19), and (20) are decreasing (resp., increasing) and converge to $\alpha$ from any point satisfying (resp., ).

Proof. Taking into account that for every integer , it follows that the condition of Theorem 3 is satisfied, since . By applying Theorem 3, we obtain the result (see, for example, Section 5.1).

4. Principal Advantage of the New Family

As the family is governed by formula (16), which depends on the parameter $p$, where $p$ is a nonnegative integer, it is natural to ask for which values of $p$, and under which conditions, the convergence is fastest.

Lemma 5. Let . Let and be defined, respectively, by the sequences and , given by formula (16).

Then, we have

Proof. We have the following:

We know that

We deduce that

As

Then, , which completes the proof of the lemma.

Theorem 6. Let . Let and be defined, respectively, by the sequences and given by Equation (16), with , , , and on an interval containing the root $\alpha$ of $f$. Starting from the same initial point $x_0$, the convergence rate of the sequence is higher than that of the sequence ().

Proof. Suppose that the initial value satisfies , so . According to Corollary 4 above, we know that if and in , the sequences () and () are decreasing and converge to $\alpha$ from any point .

Let and be defined, respectively, by () and (). Since and the two sequences are decreasing, we expect that for all . This can be shown by induction. Let ; then, from (46), where .

We know that and for

So . As and , then .

Now, we assume that . Since, under the above hypotheses, is an increasing function in , we obtain.

On the other hand, we have

We deduce that . So , and the induction is complete. The case is similar to the previous one.

Consequently, the power of the proposed family has been shown analytically by proving that, under certain conditions, the convergence speed of its methods increases with the parameter $p$, where $p$ is a nonnegative integer. As the methods of Halley and super-Halley are particular cases of this family with the smallest parameters ($p = 0$ and $p = 1$), their convergence rates are lower than those of the other, higher-parameter new methods.

5. Numerical Results

For the numerical results, we use a fixed stopping criterion of and . The computations were performed using MATLAB R2015b.

In order to compare two methods, we give the number of iterations (IT) required to satisfy the stopping criterion, the number $d$ of function (and derivative) evaluations per step, and the order of convergence $q$ of the method. Based on $q$ and $d$, there is an efficiency measure defined by $E = q^{1/d}$ (the efficiency index). For instance, a third-order method using three function evaluations per step has $E = 3^{1/3} \approx 1.44225$.

Unfortunately, for methods of the same order that demand the same number of function evaluations per step, the efficiency index is the same. In this case, the comparison is made on the basis of the number of iterations (IT). This number depends on how far the starting point $x_0$ is from $\alpha$ and on the value of the asymptotic constant. For two methods of the same order $q$, the one having the smaller asymptotic constant will converge faster than the one having the higher asymptotic constant, for a starting point sufficiently close to $\alpha$. But if $x_0$ is too far from $\alpha$ (while still in the basin of attraction of $\alpha$), it is possible that a method with a higher asymptotic constant converges faster [18]. Thus, in order to make the comparison more realistic and fairer, it is preferable to use an approximate asymptotic constant at step $n$, defined by
$$C_n = \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^q},$$
where $x_n$ and $x_{n+1}$ are two consecutive iterates. In general, choosing $x_0$ close enough to $\alpha$ and a very high precision (300 significant digits or more), and then taking $x_n$ and $x_{n+1}$ closer to the root, $C_n$ will tend towards the theoretical asymptotic constant.
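Computationally, this constant can be evaluated as in the following sketch (our reading of the definition above; mpmath is used because the text recommends 300 or more significant digits):

```python
# Approximate asymptotic constant C_n = |x_{n+1} - alpha| / |x_n - alpha|**q
# from two consecutive iterates and the known root alpha; q is the
# theoretical order of the method being measured.
from mpmath import mp

mp.dps = 300   # very high precision, as recommended in the text

def asymptotic_constant(x_n, x_np1, alpha, q=3):
    return abs(x_np1 - alpha) / abs(x_n - alpha)**q
```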

Furthermore, we cannot compare two methods of different orders demanding the same number of function evaluations per step on the basis of the asymptotic constant. It is quite obvious that the method with the highest order $q$ is the fastest for a starting point sufficiently close to the solution. But if $x_0$ is too far from $\alpha$ (while still in the basin of attraction of $\alpha$), the "order" of convergence is not necessarily $q$, especially during the first iterations of the method [10]. Thus, it is more correct and judicious to use the computational order of convergence at step $n$, which can be approximated using the formula [37]:
$$\rho_n = \frac{\ln\left(|x_{n+1} - \alpha|/|x_n - \alpha|\right)}{\ln\left(|x_n - \alpha|/|x_{n-1} - \alpha|\right)},$$
where $x_{n-1}$, $x_n$, and $x_{n+1}$ are three consecutive iterates. In general, choosing $x_0$ close enough to $\alpha$ and a very high precision (300 significant digits or more), and then taking $x_{n-1}$, $x_n$, and $x_{n+1}$ closer to the root, $\rho_n$ will tend towards the theoretical order of convergence $q$.
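Likewise, the computational order of convergence can be evaluated from three consecutive iterates (a sketch of the standard approximation, which is our reading of the formula cited from [37]):

```python
# Computational order of convergence rho_n from three consecutive iterates
# x_{n-1}, x_n, x_{n+1} and the known root alpha.
from mpmath import mp, log

mp.dps = 300   # very high precision, as recommended in the text

def coc(x_nm1, x_n, x_np1, alpha):
    return (log(abs(x_np1 - alpha) / abs(x_n - alpha)) /
            log(abs(x_n - alpha) / abs(x_nm1 - alpha)))
```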

Here, the values of $C_n$ and $\rho_n$ will be calculated using the same total number of function evaluations (or, when this is not possible, the same total number of iterations) for all methods.

The test functions used in Tables 1–7, together with their roots, are displayed in Table 2.


[Table 1. Computational order of convergence $\rho_n$, number of iterations, and approximate asymptotic constants for the methods of the family up to $H_7$; the computed orders all tend to the theoretical value 3.0.]


[Table 2. Test functions and their roots $\alpha$ (the expressions of the test functions are not recoverable here). Roots: 2.365230013414097; 2.057103549994738; -0.7211373399241717; 1.714805912362778; 1.085982678007472; 3.000000000000000; -1.000000000000000; 0.5235987755982989; 3.000000000000000; 2.154434690031884; 1.895494267033981; 2.000000000000000; 0.7390851332151607; 2.947530902542285.]


[Table: number of iterations and approximate asymptotic constants $C_n$ for the methods NM, CB, CH, SH, and $H_0$–$H_5$ on the test functions, for various starting points $x_0$.]


[Table: number of iterations (IT) and number of function evaluations (NFE) for the methods FG, CC, KB and the new methods $H_2$–$H_5$, for each test function and starting point $x_0$; entries marked D indicate divergence.]



[Table: comparison with methods of orders 5, 6, 6, and 8 (efficiency indices 1.49535, 1.56508, 1.56508, 1.68179) against four members of the new third-order family (efficiency index 1.44225 each): absolute errors over the first iterations and the computational order of convergence $\rho_n$ on four test functions.]