Journal of Applied Mathematics, Volume 2014, Article ID 591638
Special Issue: Iterative Methods for Nonlinear Equations or Systems and Their Applications 2014
Research Article | Open Access

S. Artidiello, A. Cordero, Juan R. Torregrosa, M. P. Vassileva, "Optimal High-Order Methods for Solving Nonlinear Equations", Journal of Applied Mathematics, vol. 2014, Article ID 591638, 9 pages, 2014.

Optimal High-Order Methods for Solving Nonlinear Equations

Academic Editor: Ioannis K. Argyros
Received: 04 Feb 2014
Accepted: 07 Apr 2014
Published: 05 May 2014


A class of optimal iterative methods for solving nonlinear equations is extended up to sixteenth order of convergence. We design the methods by using the weight function technique, with functions of three variables. Some numerical tests are performed in order to confirm the theoretical results and to compare the new methods with other known ones.

1. Introduction

The rapid advances in the development of digital computers have established the need to design new methods with higher computational efficiency for solving problems of practical relevance in applied mathematics, engineering, biology, and so forth. A variety of problems in different fields of science and technology require finding the solution of a nonlinear equation, and iterative methods for approximating such solutions are the most widely used technique. Interest in multipoint iterative methods has been renewed in the first decade of the 21st century, as they are of great practical importance: they surpass the theoretical limits of one-point methods regarding order of convergence and computational efficiency.

Throughout this paper we consider multipoint iterative methods to find a simple root of a nonlinear equation f(x) = 0, where f: I ⊆ R → R, restricted to real functions with a unique solution inside an open interval I. Many modified schemes of Newton’s method, probably the most widely used iterative method, have been proposed over the last years to improve the local order of convergence and the efficiency index. The efficiency index, introduced by Ostrowski in [1] as I = p^(1/d), where p is the order of convergence and d the number of functional evaluations per step, establishes the effectiveness of the iterative method. In this sense, Kung and Traub conjectured in [2] that a multipoint iterative scheme without memory, requiring d functional evaluations per iteration, has order of convergence at most 2^(d−1). The schemes which achieve this bound are called optimal methods.
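These two quantities are straightforward to tabulate; the sketch below (the function names are ours, not from the paper) computes the efficiency index p^(1/d) and the Kung–Traub bound 2^(d−1) for a few evaluation counts.

```python
# Ostrowski's efficiency index I = p**(1/d) for a method of order p that
# uses d functional evaluations per step, and the Kung-Traub bound 2**(d-1)
# on the order of an optimal multipoint method without memory.
def efficiency_index(p, d):
    return p ** (1.0 / d)

def kung_traub_bound(d):
    return 2 ** (d - 1)

# Optimal methods: order 4 with d = 3, order 8 with d = 4, order 16 with
# d = 5; the efficiency index grows with each optimal step.
for d in (2, 3, 4, 5):
    p = kung_traub_bound(d)
    print(d, p, round(efficiency_index(p, d), 4))
```

In particular, an optimal sixteenth-order method with five functional evaluations reaches an efficiency index of 16^(1/5) ≈ 1.741, above the 8^(1/4) ≈ 1.682 of optimal eighth-order schemes.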

A common way to increase the convergence order in multipoint methods is to use weight functions, which are applied to construct families of iterative methods for nonlinear equations; see, for example, the text by Petković et al. [3] and the references therein. The main goal and motivation in the construction of new methods is to attain computational efficiency as high as possible. Optimal methods of order four were discussed, for example, in [4, 5]. Many optimal methods of order eight have been suggested and compared in the literature; see, for instance, the recent results obtained by Kim in [6], Khan et al. in [7], Džunić and Petković in [8], and Soleymani et al. in [9]. Recently, some sixteenth-order iterative schemes constructed by the weight function method have also been published; see [10, 11].

The outline of the paper is as follows. In Section 2 the families of optimal sixteenth-order methods are constructed and their convergence analysis is discussed. In Section 3 numerical experiments are performed, and the proposed methods of order sixteen are compared with the mentioned sixteenth-order schemes on academic test functions. Finally, in Section 4, the problem of preliminary orbit determination of artificial satellites is studied by using the classical fixed point method; numerical experiments on the modified Gaussian preliminary orbit determination are performed, and the proposed methods are compared with recent optimal known schemes.

2. Description of the Family of Optimal Multipoint Methods

Our starting point is Traub’s scheme (see [12]), also known as Potra–Pták’s method, whose iterative expression is x_{k+1} = x_k − (f(x_k) + f(y_k))/f′(x_k), where y_k = x_k − f(x_k)/f′(x_k) is Newton’s step. This method has order three, but it requires three functional evaluations per iteration, so it is not optimal according to the Kung–Traub conjecture, and our purpose is to design optimal methods.
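The Potra–Pták scheme admits a minimal implementation; the sketch below uses its standard form from the literature (the test equation f(x) = x³ − 2 is illustrative, not one of the paper’s tests).

```python
# Potra-Ptak (Traub) scheme:
#   y_k     = x_k - f(x_k)/f'(x_k)            (Newton's step)
#   x_{k+1} = x_k - (f(x_k) + f(y_k))/f'(x_k)
# Three functional evaluations per step, order of convergence three.
def potra_ptak(f, df, x0, tol=1e-14, maxit=50):
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx              # Newton's step
        x = x - (fx + f(y)) / dfx     # Potra-Ptak correction
    return x

# Illustrative equation: f(x) = x**3 - 2, with simple root 2**(1/3).
root = potra_ptak(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
```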

So, we begin the process from the iterative scheme (see [13]) where is a real parameter and is a real function with .

The method defined by (2) has order four if and a function is chosen so that the conditions , , and are fulfilled. Some known iterative schemes are obtained as particular cases of this family. Choosing , we obtain the fourth-order method described by Kung and Traub in [2]. King’s family [14] of fourth-order methods is obtained when we choose . Also, if we take , we obtain the family of fourth-order methods defined by Zhao et al. in [15].
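As an illustration of one of these particular cases, King’s family [14] admits a compact implementation in its standard form from the literature; the function names and the test equation below are ours.

```python
import math

# King's family of optimal fourth-order methods (three functional
# evaluations per step), parameterized by beta:
#   y_k     = x_k - f(x_k)/f'(x_k)
#   x_{k+1} = y_k - f(y_k)/f'(x_k) * (f(x_k) + beta*f(y_k))
#                                  / (f(x_k) + (beta - 2)*f(y_k))
def king(f, df, x0, beta=0.0, tol=1e-14, maxit=50):
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx
        fy = f(y)
        x = y - fy / dfx * (fx + beta * fy) / (fx + (beta - 2.0) * fy)
    return x

# beta = 0 recovers Ostrowski's method; illustrative test on f(x) = cos(x) - x.
root = king(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, x0=1.0)
```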

Recently, taking (2) with as the first two steps and adding a new step, Džunić et al. in [16] designed the following three-step method: where is Newton’s step and is a function of two variables: and .

They proved in [16] that the method defined by (3) has optimal eighth order of convergence if sufficiently differentiable functions and are chosen so that the conditions and are satisfied. The iterative method resulting from introducing these conditions and the simplest forms for and , obtained by using the Taylor polynomials of the functions and , is denoted by .

Now, we wonder whether it is possible to find a sixteenth-order iterative method by adding a new step with the same settings, accompanied by a weight function that depends on three variables , , and , where is the last step of the eighth-order method (3). The iterative expression of the new scheme is where and are the same steps as in method (3). The following result, which establishes the sixteenth order of convergence of family (5), can now be proved.

Theorem 1. Let be a simple zero of a sufficiently differentiable function in an open interval and an initial guess close to . The method defined by (5) has optimal sixteenth-order convergence if sufficiently differentiable functions , , and are chosen so that the conditions on method (3) (proved in [16]) and the following requirements are satisfied: , and . The error equation of the method is where , , , and depend on the partial derivatives of order one, two, and three of the weight functions and at zero.

Proof. The proof is based on Taylor’s expansion of the elements appearing in the iterative expression (5). We only show the elements needed to determine the conditions that yield the stated order of convergence. The Taylor expansions of the weight functions are developed around zero but, for the sake of simplicity, we will omit the zero in the expansions of , , and .
By using Taylor’s expansion about , we have , where and . By substituting this expression in the first step of (5), we obtain , where , , and . Using Taylor’s expansion again, we obtain and we calculate and , where we impose the conditions and . This allows us to obtain the (fourth-order) error equation for the second step : , where . We use Taylor’s expansion about again to obtain , calculate , , introduce the known conditions ([16]) , , , , and , and obtain the Taylor series of : where and . So, using Taylor’s expansion about once more, we obtain and use it to get Taylor’s expression of and . Finally, we obtain the error equation of the proposed iterative scheme (5): , where . If , then and , where . Taking , we obtain , where . If we impose and , we ensure that the order of convergence is at least eleven. The error equation in this case takes the following form: and . Taking and , we obtain the new expression where If and , the order of convergence is at least thirteen. The solution of these four equations determines that , , and , and the error equation is where To obtain order of convergence at least fourteen, it is necessary that and . This gives us the conditions , , and , and the error equation is where
If , then . Now, if we demand , the order of convergence is at least fifteen, and the necessary conditions are , , , and . Taking these conditions into account, the error equation is where . By taking and simplifying the error equation, we obtain and ; we have
By solving the system, we obtain , , and . Finally, the error equation is . This finishes the proof.

A particular element of family (5), denoted by M16, is obtained by choosing the weight functions: which we will use in the following sections.

3. Numerical Tests for Sixteenth-Order Methods

The proposed iterative scheme with order of convergence sixteen, M16, is employed to estimate the simple solution of some particular nonlinear equations, and it is compared with some known methods existing in the literature. In particular, the iterative expression of the sixteenth-order scheme designed by Thukral in [10] is , where is Steffensen’s step and , , , , , , , and . We will denote this scheme by T16.
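For reference, Steffensen’s step replaces the derivative in Newton’s step by the forward divided difference f[x_k, x_k + f(x_k)], making the first substep of T16 derivative-free. A minimal sketch of the classical Steffensen iteration built on this step (the test equation is ours, not one of the paper’s tests):

```python
# Steffensen's step:
#   y_k = x_k - f(x_k)**2 / (f(x_k + f(x_k)) - f(x_k))
# i.e. Newton's step with f'(x_k) replaced by the divided difference
# f[x_k, x_k + f(x_k)], so no derivative evaluation is required.
def steffensen(f, x0, tol=1e-14, maxit=100):
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx * fx / (f(x + fx) - fx)
    return x

# Illustrative equation: f(x) = x**2 - 2, with simple root sqrt(2).
root = steffensen(lambda x: x**2 - 2, x0=1.5)
```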

We will also use the sixteenth-order procedure designed by Sharma et al. in [11] that will be denoted by S16, whose iterative expression is where is Newton’s step and

The numerical behavior will be analyzed by means of the test functions and the corresponding simple roots listed below:
(a) , ;
(b) , ;
(c) , .

All the computations have been carried out by using variable precision arithmetic with 4000 digits of mantissa. The exact solution of each nonlinear equation is known, so the exact absolute errors of the first three iterations of each procedure are listed in Table 1, together with the computational order of convergence (see [17]), for different initial estimations .
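The computational order of convergence can be approximated from three consecutive increments of the iterates; the sketch below (the ACOC formula as commonly stated in the literature, checked on Newton’s method for an illustrative equation of ours) should yield a value close to 2.

```python
import math

# Approximated computational order of convergence (ACOC) from three
# consecutive increments of the iterates:
#   rho ~ ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
def acoc(xs):
    e = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Newton iterates for f(x) = x**2 - 2 starting at 1.5; since Newton's
# method has order two, the ACOC should approach 2.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
print(round(acoc(xs), 2))
```

Note that only a few iterations are used here: in double precision the increments reach machine epsilon very quickly, after which the formula degenerates, which is why the paper works with 4000-digit arithmetic.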

Table 1: Absolute errors of the first three iterations and computational order of convergence for T16, S16, and M16 on the test functions, for the initial estimations 1, −2, and −3.
From the results shown in Table 2, it can be deduced that the proposed scheme is at least as competitive as recently published methods of the same order of convergence, and better in some cases.


FP 1.002
K8 7.001
S8 8.001
M8 8.000
T16 NaN