Abstract

We present new high-order optimal iterative methods for solving a nonlinear equation $f(x)=0$ by using Padé-like approximants. We compose optimal methods of order 4 with Newton's step and replace the derivative with an appropriate rational approximant, obtaining optimal methods of order 8. In the same way, by increasing the degree of the approximant, we obtain optimal methods of order 16. We also perform several numerical tests that confirm the theoretical results.

1. Introduction

Many applied problems in different fields of science and technology require finding the solution of a nonlinear equation, and iterative methods are used to approximate these solutions. The performance of an iterative method can be measured by the efficiency index introduced by Ostrowski in [1]. In this sense, Kung and Traub conjectured in [2] that a multistep method without memory performing $d$ functional evaluations per iteration can have convergence order at most $2^{d-1}$, in which case it is said to be optimal.
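For completeness, both notions can be written down explicitly; $p$ denotes the convergence order and $d$ the number of functional evaluations per iteration (standard definitions, stated here in our own notation):

```latex
% Efficiency index (Ostrowski) of a method of order p that uses d
% functional evaluations per iteration:
\mathrm{EI} = p^{1/d}.
% Kung-Traub conjecture: a multistep method without memory performing d
% functional evaluations per iteration satisfies
p \le 2^{d-1},
% and the method is called optimal when equality holds.
```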

Recently, different optimal eighth-order methods, with 4 functional evaluations per step, have been published. A very interesting survey can be found in [3]. Some of them are generalizations of the well-known optimal fourth-order method of Ostrowski [4–7]. In [8] the authors start from a third-order method due to Potra and Pták, combine this scheme with Newton's method using a "frozen" derivative, and estimate the new functional evaluation. The procedure designed in [9] uses weight functions and a "frozen" derivative for the development of the schemes. As far as we know, beyond the family described by Kung and Traub in [2], a general technique to obtain new optimal methods has been presented only in [10]; the authors use inverse interpolation, and methods of sixteenth order have also been obtained.

While computational engineering has achieved significant maturity, computational costs can be extremely large when high accuracy simulations are required. The development of a practical high-order solution method could diminish this problem by significantly decreasing the computational time required to achieve an acceptable error level (see, e.g., [11]).

The existence of an extensive literature on higher-order methods (see, e.g., [3, 12] and the references therein) reveals that they are limited only by the nature of the problem to be solved: in particular, the numerical solutions of nonlinear equations and systems are needed in the study of dynamical models of chemical reactors [13] and in radiative transfer [14]. Moreover, many numerical applications require high precision in their computations: in [15], high-precision calculations are used to solve interpolation problems in Astronomy; in [16] the authors describe the use of arbitrary-precision computations to improve the results obtained in climate simulations. The results of these numerical experiments show that high-order methods combined with multiprecision floating-point arithmetic are very useful, because they yield a clear reduction in the number of iterations. A motivation for arbitrary precision in interval methods can be found in [17], in particular for the calculation of zeros of nonlinear functions.

The objective of this paper is to present a general procedure to obtain optimal methods of order $2n$ starting from optimal methods of order $n$. The procedure consists of composing optimal methods of order 4, which use two evaluations of the function and one of the derivative, with Newton's step, and approximating the derivative in this last step by an adequate rational function; this doubles the convergence order while introducing only one new functional evaluation per iteration.
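Schematically, and in notation of our own choosing ($\Phi_4$ for the fourth-order scheme and $m$ for the rational approximant), the composition just described reads:

```latex
z_k = \Phi_4(x_k), \qquad
x_{k+1} = z_k - \frac{f(z_k)}{m'(z_k)},
```

where $m$ is built only from values of $f$ already computed, so that the corrector step adds the single new evaluation $f(z_k)$.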

In Section 2, we describe the process to generate the new eighth-order methods and establish their convergence order. In Section 3, the same procedure is used to obtain sixteenth-order methods by increasing the approximant degree. Finally, in Section 4, we collect several optimal methods of order 4 that are the starting point for our new methods and present numerical experiments that confirm the theoretical results.

2. Optimal Methods of Order 8

In this section, we describe a procedure that allows us to obtain new optimal methods of order 8, starting from optimal schemes of order 4. Let us denote by the set of iteration functions corresponding to optimal methods of order .

Consider the three-step method given by where

In order to simplify the notation, we will omit the argument in the iterative process, so that we will write as and .

Obviously, this three-step method has order 8, since it is a composition of schemes of orders 4 and 2, respectively (see [2], Th. 2.4); however, the method is not optimal because it introduces two new functional evaluations in the last step.

Thus, to maintain the optimality, we substitute with the derivative of the second-degree approximant verifying the conditions

From the first condition one has . Substituting in (4)–(6), we obtain the following linear system: where, as usual, denotes the divided difference of order 1, . Applying Gaussian elimination, the following reduced system is obtained:

In divided differences with a repeated argument, the derivative replaces the otherwise undetermined quotient; that is, the convention $f[x, x] = f'(x)$ is used. The coefficients of the approximant are obtained by backward substitution. Then, the derivative of the approximant in is
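The convention just mentioned can be illustrated as follows; the function names are ours, not the paper's:

```python
def divided_difference(f, df, a, b):
    """First-order divided difference f[a, b] = (f(b) - f(a)) / (b - a).

    When the two arguments coincide, the quotient is undetermined (0/0),
    and the usual convention f[a, a] = f'(a) is applied instead.
    """
    if a == b:
        return df(a)
    return (f(b) - f(a)) / (b - a)


f = lambda x: x * x          # f(x) = x^2
df = lambda x: 2.0 * x       # f'(x) = 2x

print(divided_difference(f, df, 1.0, 3.0))  # (9 - 1) / 2 = 4.0
print(divided_difference(f, df, 2.0, 2.0))  # f'(2) = 4.0
```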

Substituting by this value, we obtain an iterative method, , defined by where

This method uses only 4 functional evaluations per iteration. By showing that it is of order 8, we will prove that it is optimal in the sense of Kung and Traub.

Theorem 1. Let be a simple root of a function sufficiently differentiable in an open interval . For an initial estimate close enough to the root, the method defined by (11)–(13) has optimal convergence order 8.

Proof. Let be the error of ; that is, , , for . Then, by the definition of each step of the iterative method, we have

Consider the expansion of around , where , for ; then

so that, using (11) and (14),

Substituting (19) in the expansion of around , we get

Using in (15) that , we write

for some constants . Substituting (21) in Taylor's expansion of , we obtain

Using (17), (18), (20), and (22) in the determination of the coefficients of the rational approximant and in the expression of its derivative (9) gives

Now, Taylor's expansion of in gives , and the fact that is of fourth order allows us to establish . Using this expression and (23), one can write

The order of the method is obtained by computing . Using (26), we have

From (19) it can be deduced that

so it is clear that

By substituting (30) in (28) and using that , one has , which proves that the method has optimal order 8.

3. Optimal Methods of Order 16

The idea of this section is to extend the former process, performing a new step, to obtain optimal methods of order 16 starting from optimal methods of order 8. The method can be defined as follows:

with

where

and

(see [2, 5, 6, 8, 14, 16, 18] for some optimal eighth-order methods).
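Since the displayed formulas were lost, the following sketch uses our own symbols: $t_k$ denotes the result of the first three steps (an optimal eighth-order approximation) and $\widetilde{m}$ the third-degree rational approximant of this section; the final step then mirrors the eighth-order construction:

```latex
% Newton-type correction whose derivative comes from the degree-three
% rational approximant built from the previously computed values of f:
x_{k+1} = t_k - \frac{f(t_k)}{\widetilde{m}'(t_k)}.
```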

Then, we start from a method that performs 4 functional evaluations in its first three steps and one additional evaluation in the last step, which allows us to construct the following rational approximant:

The coefficients are determined by imposing the following conditions: Similarly to the former case, . Substituting in (36)–(39) we obtain the linear system

The remaining coefficients are obtained by reducing the system to triangular form and solving it by backward substitution:

The derivative of the rational approximant in is

As in the previous case, this expression allows us to establish that

and, taking into account the fact that , we get

Similarly to the eighth-order case, it follows from this expression that the method has optimal convergence order 16.

4. Numerical Experiments

First of all, we consider some optimal fourth-order methods that we have used for developing high-order methods with the procedure described; all of them use Newton's step as a predictor and another evaluation of the function :
(1) Ostrowski's method (see [1]);
(2) the family of King's methods (see [18]);
(3) an optimal variant of Potra-Pták's method (see [8]);
(4) Maheshwari's method (see [19]).
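As a sanity check of the fourth-order building block, the sketch below implements Ostrowski's method (item (1) above) in multiprecision arithmetic and estimates its computational convergence order; the test equation $x^3 - 8 = 0$, the starting point, and all identifiers are our own choices, not taken from the paper's examples:

```python
from decimal import Decimal, getcontext

getcontext().prec = 400  # multiprecision arithmetic, 400 significant digits

def ostrowski_step(f, df, x):
    """One iteration of Ostrowski's optimal fourth-order method:
    a Newton predictor followed by a corrector that reuses f'(x)."""
    fx = f(x)
    y = x - fx / df(x)                            # Newton predictor
    fy = f(y)
    return y - fy * fx / (df(x) * (fx - 2 * fy))  # Ostrowski corrector

f = lambda x: x**3 - 8    # test equation with simple root alpha = 2
df = lambda x: 3 * x**2
alpha = Decimal(2)

x = Decimal("2.5")
errors = []
for _ in range(4):
    x = ostrowski_step(f, df, x)
    errors.append(abs(x - alpha))

# Computational convergence order from the last three errors:
# rho ~ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}), which should approach 4.
rho = (errors[3] / errors[2]).ln() / (errors[2] / errors[1]).ln()
print(rho)
```

With the precision set high enough that the last error does not vanish, the printed estimate settles very close to 4, in agreement with the theoretical order of the scheme.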

Now we check the performance of the methods and generated by (5) and (32), taking the different methods described above.

We have chosen the following examples:
(a) ;
(b) .

We have performed the computations in MATLAB in variable precision arithmetic with 1000 digits of mantissa.

Tables 1 and 2 show the distance for the first three iterations of the new methods of orders 8 and 16, respectively. When the exact solution is known, as in example (a), the last column depicts the computational convergence order (see [20]); for example (b), we compute the approximated computational convergence order (ACOC, see [21]).
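For reference, the two estimators referred to above are standard; with $\alpha$ the exact root, the computational convergence order of [20] and the ACOC of [21] are usually written as:

```latex
p \approx \frac{\ln\left(|x_{k+1}-\alpha| / |x_{k}-\alpha|\right)}
              {\ln\left(|x_{k}-\alpha| / |x_{k-1}-\alpha|\right)},
\qquad
\hat{p} \approx \frac{\ln\left(|x_{k+1}-x_{k}| / |x_{k}-x_{k-1}|\right)}
                     {\ln\left(|x_{k}-x_{k-1}| / |x_{k-1}-x_{k-2}|\right)}.
```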

The results in Tables 3 and 4 correspond to an equation without known exact solution, so the distance between consecutive iterates is computed instead of the actual error. In both cases, the numerical results support the optimality of the new methods, in accordance with the proven theoretical results.

5. Conclusions

In this paper, we develop high-order iterative methods to solve nonlinear equations. The procedure to obtain the iteration functions is rigorously deduced and can be generalized. There are numerous applications where these schemes are needed because high precision is required in the computations, as occurs in dynamical models of chemical reactors, in radiative transfer, and in high-precision interpolation problems in Astronomy, among others. Moreover, the methods presented are optimal in terms of efficiency, which makes them very competitive.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work has been supported by Ministerio de Ciencia e Innovación de España MTM2014-52016-C2-02-P and Generalitat Valenciana PROMETEO/2016/089.