The asymptotic form of the Taylor-Lagrange remainder is used to derive some new, efficient, high-order methods to iteratively locate the root, simple or multiple, of a nonlinear function. Also derived are superquadratic methods that converge contrarily and superlinear and supercubic methods that converge alternatingly, enabling us not only to approach, but also to bracket the root.

1. The Asymptotic Form of the Taylor-Lagrange Remainder

The Taylor-Lagrange theorem is a corollary of extended Rolle’s theorem, which for the sparse polynomial function $f(x) = (x-a)^n(x-b)$, such that $f(a) = f'(a) = \cdots = f^{(n-1)}(a) = 0$ and $f(b) = 0$, places the intermediate point $\xi$ with $f^{(n)}(\xi) = 0$ exactly at $\xi = a + (b-a)/(n+1)$.

As $n$, the degree of osculation at point $a$, increases, $\xi = a + (b-a)/(n+1)$ inexorably edges leftwards toward that point.

The Taylor-Lagrange theorem states that if function $f(x)$ is continuous in the closed interval $[x, x+h]$, is $n$ times differentiable there, and $f^{(n+1)}$ exists in the open interval $(x, x+h)$, then $f(x+h) = T_n(h) + \frac{h^{n+1}}{(n+1)!} f^{(n+1)}(x + \theta h)$, $0 < \theta < 1$, with the osculating polynomial part of the formula being $T_n(h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \cdots + \frac{h^n}{n!} f^{(n)}(x)$.

As with extended Rolle’s theorem, as $n$ increases, the intermediate point $x + \theta h$ moves gradually closer to point $x$. More precisely,

Theorem 1. If, in addition to the required conditions on the function $f$ in the Taylor-Lagrange formula, $f^{(n+2)}$ also exists in $(x, x+h)$ and is continuous from the right at $x$, then $\theta$ in (4) is such that $\theta = 1/(n+2) + O(h)$, or nearly $\theta = 1/(n+2)$ if $h$ is nearly zero, implying that the intermediate point is asymptotically $\xi = x + \theta h \approx x + h/(n+2)$.

We will give here an explicit elementary proof of this theorem for $n = 1$ only; for its generalization, see [1, 2].

We write the Taylor-Lagrange formula for $n = 1$, so as to have $f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x + \theta h)$, with $0 < \theta < 1$, and seek $\theta = \theta(h)$ so that the equality holds.

Using L'Hôpital's rule, we determine that $\lim_{h \to 0} \theta(h) = 1/3$. We take $\theta = 1/3$ and establish that the asymptotic error with this fixed $\theta$ is $O(h^4)$.

For example

2. From Newton to Halley via the Asymptotic Remainder

We write the $n = 0$ version of the Taylor-Lagrange formula, $f(x+h) = f(x) + h f'(x + \theta h)$, and obtain from it, by setting $f(x_0 + h) = 0$ and $\theta = 0$, the classical Newton method $x_1 = x_0 - f(x_0)/f'(x_0)$, which is, as is well known, a second order method provided that $f'(a) \neq 0$ at root $a$. In the above equation $x_0$ is the input and $x_1$ is the output of the iterative process.

The difficult problem of finding the root of the nonlinear function $f(x)$ is replaced in Newton’s method by the easy task of repeatedly finding the root of the approximating tangent line. This is the essence of all other higher-order methods as well: to supplant the finding of the root of the original function by the repeated finding of the root of an approximating polynomial.
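The basic cycle can be sketched in a few lines; the test function $f(x) = x^2 - 2$ and the starting value below are illustrative choices, not taken from the text.

```python
# Newton's method x1 = x0 - f(x0)/f'(x0), iterated from a starting guess.
def newton(f, fp, x0, steps=6):
    """Return the successive Newton approximations, x0 included."""
    xs = [x0]
    for _ in range(steps):
        x0 = x0 - f(x0) / fp(x0)
        xs.append(x0)
    return xs

# illustrative: the root sqrt(2) of f(x) = x^2 - 2
xs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Each cycle roughly squares the error, the hallmark of a second order method.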

Having computed $x_1$ by (16), it occurs to us to return and replace the initial $f'(x_0)$ by the asymptotic $f'(x_0 + (x_1 - x_0)/2)$ to have the two-step, mid-point method $x_1 = x_0 - f(x_0)/f'\big(x_0 - f(x_0)/(2f'(x_0))\big)$, which is cubic or third order. See also Traub [3, page 164].
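A minimal sketch of the mid-point cycle, again on the assumed test function $f(x) = x^2 - 2$ (our choice): the derivative is re-evaluated halfway along the Newton step before the step is actually taken.

```python
# Two-step mid-point method: evaluate f' at the midpoint of the Newton step.
def midpoint_step(f, fp, x0):
    half = x0 - f(x0) / (2.0 * fp(x0))  # midpoint of the Newton step
    return x0 - f(x0) / fp(half)        # full step with the shifted slope

f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
x = 1.0
for _ in range(4):
    x = midpoint_step(f, fp, x)
```

Two derivative evaluations per cycle buy third order convergence.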

The linearization $f'(x_0 - f/(2f')) \approx f' - \frac{f}{2f'}f''$ reproduces out of (19) classical Halley’s method $x_1 = x_0 - \frac{2ff'}{2f'^2 - ff''}$, which is cubic as well, but requires the second derivative $f''(x)$.

Power series expansion modifies rational Halley’s method into the polynomial-in-$f$ form $x_1 = x_0 - \frac{f}{f'}\left(1 + \frac{ff''}{2f'^2}\right)$, which is still cubic, provided that $f'(a) \neq 0$. See also Traub [3, page 205].
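Both cubic variants can be compared side by side; the test function $f(x) = x^2 - 2$ is again an illustrative choice, not from the text.

```python
# Halley's method (rational form) and its polynomial-in-f variant.
def halley_step(f, fp, fpp, x):
    # x1 = x - 2 f f' / (2 f'^2 - f f'')
    return x - 2.0 * f(x) * fp(x) / (2.0 * fp(x) ** 2 - f(x) * fpp(x))

def polynomial_step(f, fp, fpp, x):
    # x1 = x - (f/f') * (1 + f f'' / (2 f'^2))
    u = f(x) / fp(x)
    return x - u * (1.0 + f(x) * fpp(x) / (2.0 * fp(x) ** 2))

f, fp, fpp = (lambda x: x * x - 2.0), (lambda x: 2.0 * x), (lambda x: 2.0)
xh = xc = 1.0
for _ in range(4):
    xh = halley_step(f, fp, fpp, xh)
    xc = polynomial_step(f, fp, fpp, xc)
```

Both sequences settle on $\sqrt{2}$ at a cubic rate; the polynomial form avoids the rational denominator at the cost of a slightly larger error constant.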

We write the equation of the osculating parabola to $f(x)$ at $x_0$, $y(x) = f(x_0) + (x - x_0)f'(x_0) + \frac{1}{2}(x - x_0)^2 f''(x_0)$, and seek its intersection with the $x$-axis. The smaller root, $x_1$, of $y(x) = 0$, expanded in powers of $f$, reproduces Halley’s method of (24).

3. Construction of High-Order Iterations by Generalized Undetermined Coefficients

Halley’s method or, for that matter, any other higher-order method such as that in (24) can be derived ab initio by writing $x_1 = x_0 + h$, with $h$ a power series, or merely a polynomial, in $f$, as in $h = af + bf^2 + \cdots$, and then progressively fixing the undetermined coefficients $a$ and $b$, which eventually need not remain constant, so as to achieve the highest possible order of convergence.

Thus, at first, we have from (28) a method linear in $f$. We substitute variable $a = a(x)$ for the constant $a$ in (29) and try $a = -1/f'$. With this we have next a method quadratic in $f$, and we set $b = -f''/(2f'^3)$, with which the polynomial variant of Halley’s method in (24) is regained.

Doing the same to the rational method $x_1 = x_0 + \frac{af}{1 + bf}$, we verify that cubic convergence is achieved with $a = -1/f'$ and $b = -f''/(2f'^2)$, as in classical Halley’s method in (22).

For other interesting applications of the method of undetermined coefficients, see [4, 5].

4. High-Order Iterative Methods Derived from the Weighted Fixed Point Iteration

Consider the fixed point iteration $x_{n+1} = g(x_n)$ for fixed point $a = g(a)$. We write $x_n = a + e_n$ and have the power series expansion $e_{n+1} = g'(a)e_n + \frac{1}{2}g''(a)e_n^2 + \cdots$. Hence, if $0 < |g'| < 1$ around $a$, then the fixed point iteration converges linearly; and if $g'(a) = 0$, then the fixed point iteration converges quadratically, and so on.
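The dichotomy is easy to observe numerically. In the sketch below, $g_1$ is a relaxation iteration with $g_1'(a) \neq 0$ and $g_2$ is Newton's iteration function with $g_2'(a) = 0$; the fixed point $\sqrt{2}$ and the relaxation factor $0.3$ are illustrative choices.

```python
import math

a = math.sqrt(2.0)
g1 = lambda x: x - 0.3 * (x * x - 2.0)  # g1'(a) = 1 - 0.6*a, about 0.151
g2 = lambda x: 0.5 * (x + 2.0 / x)      # Newton for x^2 = 2: g2'(a) = 0

x1 = x2 = 1.0
for _ in range(5):
    x1, x2 = g1(x1), g2(x2)
err1, err2 = abs(x1 - a), abs(x2 - a)
# err1 shrinks by the constant factor ~0.151 per cycle (linear);
# err2 is already at machine precision (quadratic)
```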

Suppose now that we are seeking a simple root $a$ of $f(x) = 0$. We rewrite $f(x) = 0$ as the equivalent fixed point problem $x = g(x) = x + w(x)f(x)$ for weight function $w(x) \neq 0$, and seek to fix $w$ to our advantage. For a quadratic method, $w$ needs to be such that $g' = 1 + w'f + wf' = 0$ for $x$ near $a$. Since $f(a) = 0$, we choose to ignore $w'f$ in the previous equation and are left with $w = -1/f'$ and Newton’s method.

From $g' = 1 + w'f + wf'$ and $g'' = w''f + 2w'f' + wf''$, ignoring $w''f$ in the second equation, we obtain the system $1 + w'f + wf' = 0$, $2w'f' + wf'' = 0$, which we solve for $w$ as $w = -2f'/(2f'^2 - ff'')$ and arrive at Halley’s method of (22). Higher-order methods are systematically generated in similar fashion. See also [6].

5. The Recursive Generation of the High-Order Iteration Function

Let $g(x)$ be the fixed point iteration function of the recursion $x_{n+1} = g(x_n)$. By dint of $g(x) = x - f(x)/f'(x)$, for which $g'(a) = 0$ at a simple root $a$, the iterative method is quadratic.

The recursively constructed iteration function $g_3(x) = g_2(x) - f(g_2(x))/f'(x)$, with $g_2(x) = x - f(x)/f'(x)$, assures a third order convergence.

For an example of such a recursively generated iteration function, see Traub [3].

Indeed, taking $g_2(x) = x - f(x)/f'(x)$ as the quadratic iteration function, we have with $g_3$ in (45) a cubic error equation in terms of the derivatives of $f$ at the root.

For more on such recursive formulas, see Traub [3, Section 8.3] and Petković et al. [7, Theorem 2 and Remark 1].

6. A Finite-Difference Approximation

Wishing to avoid the possibly computationally costly additional derivative evaluation in (19), we propose to approximate it by the central difference scheme $f'\big((x_0 + x_1)/2\big) \approx \frac{f(x_1) - f(x_0)}{x_1 - x_0}$. Taking $x_1 = x_0 - f(x_0)/f'(x_0)$ leaves us with a slope built entirely from already computed values, by which (19) becomes the cubic chord, or two-step, method $x_2 = x_0 - f(x_0)\frac{x_1 - x_0}{f(x_1) - f(x_0)}$. See also Traub [3, page 180]. We return to this method in the next section.

An analogous finite-difference approximation of the second derivative leads to the same result.

7. A Cubic, One-Sided, Two-Step, Secant Method

Having computed $x_1$ by Newton’s method $x_1 = x_0 - f(x_0)/f'(x_0)$, we propose to proceed and predict the next $x_2$ by the pseudo-Newton method $x_2 = x_1 - f(x_1)/f'(x_0)$, skirting the computation of a new $f'(x_1)$. In (53) the derivative is held frozen at its initial value $f'(x_0)$. We write (53) variously, with all three variant methods being cubic. Notice the extra term in the last method of (55), added to recover the factor 1/4 in the last of error equations (56).

Convergence is here one sided: if $x_0 > a$, then the subsequent approximations remain above root $a$; and if $x_0 < a$, then they remain below.

For example, starting once above and once below the root, we obtain by the first method of (55) two oppositely or contrarily converging sequences by which root $a$ is bounded from both sides.
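The one-sided behavior of the frozen-derivative cycle is easy to observe; the test function $f(x) = x^3 - 2$ and the starting value are illustrative choices, not the text's example.

```python
# Two-step cycle: a Newton step, then a pseudo-Newton step reusing f'(x0).
def two_step(f, fp, x0):
    d = fp(x0)
    x1 = x0 - f(x0) / d   # Newton step
    return x1 - f(x1) / d  # pseudo-Newton step with the frozen derivative

f = lambda x: x ** 3 - 2.0
fp = lambda x: 3.0 * x * x
x = 1.5
steps = [x]
for _ in range(4):
    x = two_step(f, fp, x)
    steps.append(x)
# starting above the root 2**(1/3), every iterate stays above it
```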

Method (55) is also obtained from the secant line passing through the two points $(x_0, f(x_0))$ and $(x_1, f(x_1))$, and then taking the root of this line as the next $x_2$.

Including $f'(x_0)$ in the polynomial interpolation and passing a parabola through the available data $f(x_0)$, $f'(x_0)$, and $f(x_1)$ should allow us to obtain a better approximation for $x_2$, and with it a higher-order method, as we will see next.

8. Quartic Two-Step Methods

Seeking a possibly higher-order method, we write the slope estimate of (53) with an undetermined coefficient and then advantageously determine it, to have $x_2 = x_1 - \frac{f(x_1)}{f'(x_0)}\,\frac{f(x_0)}{f(x_0) - 2f(x_1)}$, which is the celebrated quartic method of Ostrowski [8]. See also Traub [3, page 184] and King [9]. Quartic method (62) is also obtained by an appropriate replacement in the slope estimate of (53).
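Ostrowski's two-step cycle costs one extra function value over Newton's method yet doubles the order; the test function $f(x) = x^3 - 2$ below is an assumed, illustrative choice.

```python
# Ostrowski's quartic method: Newton predictor, then a weighted
# frozen-derivative corrector with weight f(x0) / (f(x0) - 2 f(x1)).
def ostrowski(f, fp, x0):
    f0, d = f(x0), fp(x0)
    x1 = x0 - f0 / d                               # Newton predictor
    f1 = f(x1)
    return x1 - (f1 / d) * f0 / (f0 - 2.0 * f1)    # weighted corrector

f = lambda x: x ** 3 - 2.0
fp = lambda x: 3.0 * x * x
x = 1.5
for _ in range(3):
    x = ostrowski(f, fp, x)
```

Three cycles, hence six function-or-derivative evaluations in all, already drive the residual to machine precision.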

A kindred method is quartic as well but with a simpler error equation.

Power series expansion changes rational method (62) into the polynomial-in-$f$ method $x_2 = x_1 - \frac{f(x_1)}{f'(x_0)}\left(1 + 2\frac{f(x_1)}{f(x_0)}\right)$, which is still quartic.

The quartic method of (66) is also obtained from the parabola passing through the data $f(x_0)$, $f'(x_0)$, and $f(x_1)$, by taking its predicted slope $s$ near $x_1$ in the pseudo-Newton step $x_2 = x_1 - f(x_1)/s$.

Replacing the slope in (61) by a slightly perturbed slope turns the method into a supercubic, alternatingly converging method.

9. Quintic, Sextic, Septic, and Octic Three-Step Methods

We continue with higher-order multistep methods requiring only the sole derivative $f'(x_0)$ at initial point $x_0$.

Appending to the quartic method of (62) the pseudo-Newton step $x_3 = x_2 - f(x_2)/f'(x_0)$ yields a three-step method that is quintic and one sided.

A weighted variant of this three-step cycle is sextic; a further weighted variant is septic.

The fully weighted three-step method, with weights built from the already computed function values, is octic.

See also [7, 10–16].

Octic method (78) is also obtained from a cubic polynomial interpolating the data $f(x_0)$, $f'(x_0)$, $f(x_1)$, and $f(x_2)$, its root taken as the next approximation.

However, if the root repeats, that is, if $f'(a) = 0$, then the order of convergence of the method plummets from eighth order to first order.

10. Estimates for the Root Multiplicity Index

In this section we derive both first and second order estimates for the root multiplicity index $m$. Also derived is an estimate for the relative size of the second term in the Taylor expansion of function $f(x)$ at root point $a$.

Assuming that the power series expansion of function $f(x)$, whose root $a$ we are seeking, is of the form $f(x) = a_m(x-a)^m\big(1 + b_1(x-a) + b_2(x-a)^2 + \cdots\big)$, we obtain from it the first order estimate for the root multiplicity index, $m \approx \frac{f'^2}{f'^2 - ff''}$, as well as a second order estimate for $m$.

For example, for ,  , we compute, at , the first order and second order approximations respectively. For and , we compute from (82) respectively.

We have also that

From the first of the previous equations we have, for $x$ near $a$, an expression which, with a Padé rational approximation, becomes a first order estimate for the multiplicity index $m$.

For example, for , , (89) yields the approximations for the exact .

Traub [3, Section 7.8] has a pointwise approximation for the multiplicity index which yields, for the same example, less precise estimates.

Suppose that $m > 1$. Application of Newton’s method to a root of multiplicity greater than one reduces the method to first order, $e_1 = (1 - 1/m)e_0 + O(e_0^2)$. Ignoring the higher-order terms and eliminating root $a$ from a pair of successive approximations leave us with the discrete, first order approximation $m \approx \frac{x_1 - x_0}{(x_1 - x_0) - (x_2 - x_1)}$ to the multiplicity index of the root, an approximation which is the discrete counterpart to that in (81).
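Since consecutive Newton steps shrink by the factor $1 - 1/m$ near a multiple root, two raw steps suffice to estimate $m$. A minimal sketch, with the illustrative choice $f(x) = (x-1)^3$, a root of multiplicity 3:

```python
# Estimate the multiplicity index m from two successive Newton steps:
# near the root the steps contract by 1 - 1/m, so m ~ s0 / (s0 - s1).
f = lambda x: (x - 1.0) ** 3
fp = lambda x: 3.0 * (x - 1.0) ** 2

x = 2.0
s = []
for _ in range(2):
    step = -f(x) / fp(x)  # raw Newton step
    s.append(step)
    x += step
m_est = s[0] / (s[0] - s[1])  # discrete first order estimate of m
```

For this pure power the step ratio is exactly $2/3$, so the estimate recovers $m = 3$ exactly.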

Now $x_1 = x_0 - m f(x_0)/f'(x_0)$, with $m$ estimated as above, is quadratic with no need for prior knowledge of the multiplicity index.

For example, using method (94), with the updated $m$ employed in the computation of $x_1$ in each cycle, we generate alternatingly converging sequences for the root and for the multiplicity index, so that successively computed values in the sequence bracket the root.
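One way to realize the idea, re-estimating $m$ from two probing Newton steps in every cycle before taking the corrected step; the exact update schedule of (94) may differ, and the test function $f(x) = (x-1)^3(x+2)$, with a root of multiplicity 3 at $x = 1$, is an illustrative choice.

```python
# Multiplicity-corrected Newton iteration with a running estimate of m.
f = lambda x: (x - 1.0) ** 3 * (x + 2.0)
fp = lambda x: 3.0 * (x - 1.0) ** 2 * (x + 2.0) + (x - 1.0) ** 3

x = 1.5
for _ in range(4):
    s0 = -f(x) / fp(x)                    # raw Newton step from x
    if s0 == 0.0:                          # already at the root
        break
    s1 = -f(x + s0) / fp(x + s0)           # raw Newton step from trial point
    m = s0 / (s0 - s1)                     # running multiplicity estimate
    x = x + m * s0                         # corrected, near-quadratic step
```

Each cycle spends two evaluations of $f$ and $f'$: one pair to probe the step contraction, one to advance.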

For a high-order method to realize its full speed of convergence, it is necessary that the estimated $m$ be appropriately accurate. For example, using the estimate for $m$ from (90) in the modified Newton method $x_1 = x_0 - mf(x_0)/f'(x_0)$, we obtain sequences whose convergence is barely above the linear.

11. Correction for Multiple Roots by Undetermined Coefficients

We rewrite Newton’s method as $x_1 = x_0 - c\,f(x_0)/f'(x_0)$ for the undetermined coefficient $c$ and have that near a root of multiplicity $m$, $e_1 = (1 - c/m)e_0 + O(e_0^2)$. Quadratic convergence is restored, as is well known, with $c = m$. In the previous equation, $a_m$ and $b_1$ are the coefficients in the power series expansion of $f(x)$ in (80).

With coefficient $c$ suitably offset from $m$, the modified Newton method of (96) is reduced to an alternating superlinear method. For example, starting near the root, we generate an alternating sequence that brackets it.

Next, we rewrite the method in (50) with undetermined correction coefficients and seek to adjust them so that convergence remains cubic even to a root of multiplicity $m$. By power series expansion we determine the coefficient values that uphold cubic convergence, the error equation involving $a_m$, $b_1$, and $b_2$ of (80). Method (99)-(100) is found in [17]. See also [18].

The method is similarly cubic

No such correction to account for multiple roots exists for the quartic two-step method of (62).

12. Correction of Halley’s Method for Multiple Roots

We parametrize Halley’s method of (22) with the undetermined coefficients $\alpha$ and $\beta$ as $x_1 = x_0 - \frac{f}{\alpha f' - \beta ff''/f'}$ and determine by power series expansion that for $\alpha = (m+1)/(2m)$ and $\beta = 1/2$ convergence remains cubic even to a root of any multiplicity $m$. Method (104)-(105) is found in Hansen and Patrick [19].

Method (24) becomes here, for a root of multiplicity $m$, $x_1 = x_0 - \frac{m(3-m)}{2}\frac{f}{f'} - \frac{m^2}{2}\frac{f^2f''}{f'^3}$, with error equation (106).

13. From a Cubic to a Quartic Method by Taylor’s Formula

We write the second order, $n = 2$, version of the Taylor-Lagrange formula and take $f(x_0 + h) = 0$ to obtain the iterative method $x_1 = x_0 + h$, with increment $h$ solving the quadratic equation $f + hf' + \frac{1}{2}h^2f'' = 0$. We propose to approximate the solution of the quadratic increment equation or, for that matter, any such higher-order algebraic equation, by the power series $h = c_1f + c_2f^2 + \cdots$ and have upon substitution and collection of powers of $f$, by annulling lower order terms, that $c_1 = -1/f'$, $c_2 = -f''/(2f'^3)$, and so on.

The methods, whether by the quadratic root formula or by the truncated series $x_1 = x_0 - \frac{f}{f'} - \frac{f^2f''}{2f'^3}$, are both cubic provided that $f'(a) \neq 0$.

As here the coefficients vary with $x$, we take next an updated argument, recalculate $c_2$, and verify that the second method in (114) is elevated thereby to quartic.

14. Contrarily Converging Superquadratic Methods

We write $x_1 = x_0 + wf(x_0)$ for undetermined coefficient $w$ and expand the error in powers of $e_0$. We request that the quadratic error term come out equal in magnitude but opposite in sign to Newton’s, or, in view of (51), that $w$ be fixed accordingly, by which the iterative method in (119) becomes superquadratic. This superquadratic method converges from above if the leading error coefficient is positive, and from below if it is negative.

The interest in the method is that it ultimately converges oppositely to Newton’s method as seen by comparing (125) with (17).

For example, starting both iterations from the same initial value, we compute one sequence from Newton’s method and another from method (124), and root $a$ is bounded, or bracketed, thereby between the two sequences.

The average of Newton’s method and the method of (124) is cubic

The modified Halley method is also superquadratic and one sided

According to error equation (130), if the leading error coefficient is positive, then convergence is at least asymptotically from above; if it is negative, convergence is from below.

15. Alternating Superlinear and Supercubic Methods

We start by modifying Newton’s method with an overrelaxation factor, $x_1 = x_0 - (1+\epsilon)f(x_0)/f'(x_0)$, to have a superlinear method that ultimately converges alternatingly if $\epsilon > 0$, since then $e_1 = -\epsilon e_0 + O(e_0^2)$.

For example, for a small positive $\epsilon$, we compute by method (131) an alternating sequence allowing us to bracket root $a$.

For a higher-order method, we start with the originally quartic method of (66) and perturb it so that the leading error term becomes cubic and of alternating sign. This supercubic method converges alternatingly if the perturbation parameter is positive.

For example, we generate by methods (135)-(136) an alternating sequence by which root $a$ is bracketed between successive approximations.

16. Still Higher-Order Taylor Methods

Starting with the third order, $n = 3$, version of the Taylor-Lagrange formula we obtain the iterative method $x_1 = x_0 + h$, where increment $h$ is expanded in powers of $f$ with coefficients determined as in Section 13.

The two resulting methods are both quartic provided that $f'(a) \neq 0$.

Recalculating the coefficients at the updated point elevates the method to quintic.

17. Repeated Fourth-Order Method

The repeated Newton method, two full Newton steps $x_1 = x_0 - f(x_0)/f'(x_0)$ and $x_2 = x_1 - f(x_1)/f'(x_1)$ per cycle, is also quartic.
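Composing two quadratic steps squares the order per cycle. A minimal sketch; the test function $f(x) = xe^x - 1$, whose root is the omega constant, is an illustrative choice, not from the text.

```python
import math

# Repeated Newton: two full Newton steps, each with its own derivative,
# make one quartic cycle.
f = lambda x: x * math.exp(x) - 1.0
fp = lambda x: (x + 1.0) * math.exp(x)

def newton_step(x):
    return x - f(x) / fp(x)

x = 1.0
for _ in range(3):
    x = newton_step(newton_step(x))  # one quartic cycle = two Newton steps
```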

Similarly, the repeated modified Newton method, with $x \mapsto x - mf(x)/f'(x)$ applied twice per cycle, remains quartic even near a root of any multiplicity $m$.

The repeated-step method not requiring prior knowledge of the multiplicity index of the root is also quartic

The two single-step methods converge contrarily. Their average is a cubic method.

For instance, we compute by the two methods in (153) two contrarily converging sequences that bracket the root.

18. Stacked Higher-Order Methods and Simple Root

Higher-order, single-step methods can be written as a built-up power series of $f$.