Mathematical Problems in Engineering
Volume 2011, Article ID 469512, 10 pages
Research Article

A General Three-Step Class of Optimal Iterations for Nonlinear Equations

1Young Researchers Club, Islamic Azad University, Zahedan Branch, Zahedan 98168, Iran
2Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan 98168, Iran

Received 7 August 2011; Accepted 16 September 2011

Academic Editor: Hung Nguyen-Xuan

Copyright © 2011 Fazlollah Soleymani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Many engineering problems reduce to solving a nonlinear equation numerically, and as a result, special attention has been paid in the literature to designing efficient and accurate root solvers. Inspired and motivated by the research going on in this area, this paper establishes an efficient general class of root solvers in which, per computing step, three evaluations of the function and one evaluation of the first-order derivative are used to achieve the optimal order of convergence eight. The without-memory methods from the developed class possess the optimal efficiency index 1.682. In order to show the applicability and validity of the class, some numerical examples are discussed.

1. Introduction

Numerical solution of nonlinear scalar equations plays a crucial role in many optimization and engineering problems. For example, many engineering systems can be modeled as neutral delay differential equations (NDDEs), which involve a time delay in the highest-order derivative, in contrast to retarded delay differential equations (RDDEs), which do not. To illustrate, consider a system consisting of a mass mounted on a linear spring to which a pendulum is attached via a hinged massless rod; such a system is used to predict the dynamic response of structures to external forces through a set of actuators, and it is modeled as an NDDE when the delay in the actuators is taken into consideration [1]. On the other hand, the stability of a delay differential equation can be investigated on the basis of the root location of its characteristic function. This simple example shows the importance of numerical root solvers in engineering problems.

There are numerical methods that find one root at a time, such as Newton's iteration and its variants, and schemes that find all the roots at once, namely, simultaneous methods such as the Weierstrass method. Recently, many journals, such as Numerical Algorithms, Mathematical Problems in Engineering, and Applied Mathematics and Computation, have published new findings in this active topic of study; see, for example, [2–5] and the references therein. To briefly present some of the newest findings in this field, we mention the following.

Noor et al. [3] developed the following quartically convergent iterative scheme, which consists of three steps and uses eight evaluations per full iteration:

$$
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad
z_n = y_n - \frac{4 f(y_n)}{f'(x_n) + 2 f'\!\left(\frac{x_n + y_n}{2}\right) + f'(y_n)}, \quad
x_{n+1} = z_n - \frac{4 f(z_n)}{f'(x_n) + 2 f'\!\left(\frac{x_n + z_n}{2}\right) + f'(z_n)}. \tag{1.1}
$$
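As a quick illustration, scheme (1.1) can be transcribed in a few lines; the test equation and the starting point below are illustrative choices, not taken from the paper.

```python
# A minimal sketch of scheme (1.1); f, fp, and the starting point are
# illustrative choices, not taken from the paper.
def noor_step(f, fp, x):
    y = x - f(x) / fp(x)
    z = y - 4 * f(y) / (fp(x) + 2 * fp((x + y) / 2) + fp(y))
    return z - 4 * f(z) / (fp(x) + 2 * fp((x + z) / 2) + fp(z))

f = lambda t: t ** 3 - 2      # simple test equation with root 2**(1/3)
fp = lambda t: 3 * t ** 2
x = 1.5
for _ in range(4):
    x = noor_step(f, fp, x)
print(abs(x - 2 ** (1 / 3)))  # the error shrinks rapidly toward zero
```

Note that each call to `noor_step` spends eight evaluations, which is why the efficiency of (1.1) is low compared with the optimal schemes discussed later.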

In 2010, an eighth-order method was provided in [6], using Ostrowski's method in the first two steps of a three-step cycle:

$$
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{f(x_n)}{f(x_n) - 2 f(y_n)}, \quad
x_{n+1} = z_n - \left(1 + \frac{f(z_n)}{f(x_n)} + \left(\frac{f(z_n)}{f(x_n)}\right)^{2}\right)\frac{f[x_n, y_n]\, f(z_n)}{f[x_n, z_n]\, f[y_n, z_n]}, \tag{1.2}
$$

wherein $f[x_0, x_1, \ldots, x_k]$ denotes the divided difference of the function $f$.
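With the same hedging as before, a direct transcription of (1.2) with first-order divided differences can look as follows; the test problem is again an illustrative choice.

```python
# A sketch of the scheme (1.2); the test equation is an illustrative
# choice, not from the paper.
def dd(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)

def sharma_step(f, fp, x):
    fx = f(x)
    y = x - fx / fp(x)
    z = y - f(y) / fp(x) * fx / (fx - 2 * f(y))
    fz = f(z)
    w = 1 + fz / fx + (fz / fx) ** 2      # weight of the third step
    return z - w * dd(f, x, y) * fz / (dd(f, x, z) * dd(f, y, z))

f = lambda t: t ** 3 - 2
fp = lambda t: 3 * t ** 2
x = 1.5
for _ in range(2):
    x = sharma_step(f, fp, x)
print(abs(x - 2 ** (1 / 3)))              # error near machine precision
```

Only four evaluations ($f(x_n)$, $f'(x_n)$, $f(y_n)$, $f(z_n)$) are spent per cycle, which is the optimal budget for order eight.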

Soleymani and Mousavi [7] suggested a without-memory iterative scheme including three steps and only four functional evaluations per iteration:

$$
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad
z_n = x_n - \frac{\left(f(x_n)^2 - f(x_n) f(y_n) + f(y_n)^2\right) f(x_n)}{f'(x_n)\left(f(x_n) - f(y_n)\right)^{2}}, \quad
x_{n+1} = z_n - \frac{f(z_n)\left(1 + \left(\frac{f(z_n)}{f(y_n)}\right)^{2}\right)\left(1 + 2\,\frac{f(z_n)}{f(x_n)}\right)\left(1 - 6\left(\frac{f(y_n)}{f(x_n)}\right)^{3} + \mathcal{A}\right)\mathcal{B}}{f[z_n, y_n] + f[z_n, x_n, x_n](z_n - y_n)}, \tag{1.3}
$$

where $\mathcal{A}$ denotes $9(f(y_n)/f(x_n))^4$, and $\mathcal{B}$ denotes $(1 + (f(z_n)/f(x_n))^2)(1 + (f(y_n)/f(x_n))^3)$.

For further reading, one may consult [8], where a complete review of the methods given in the literature from 2000 to 2010 is furnished, and also [9] for a background on the applications of such root solvers. We remark here that the efficiency of different methods can be assessed by the efficiency index, defined as $p^{1/n}$, wherein $p$ is the order of convergence and $n$ is the total number of evaluations per iteration. Now, we should recall that Kung and Traub [10] conjectured that an iterative scheme without memory using $n$ evaluations per cycle can arrive at the maximum order of convergence $2^{n-1}$. Any without-memory iteration that attains this bound is called an optimal method in the literature.
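For concreteness, the efficiency indices just mentioned can be computed directly; this is a trivial check, with the method labels chosen here purely for illustration.

```python
# Efficiency index p**(1/n): order p reached with n evaluations per iteration.
methods = {
    "Newton (p=2, n=2)": 2 ** (1 / 2),
    "optimal 4th order (p=4, n=3)": 4 ** (1 / 3),
    "optimal 8th order (p=8, n=4)": 8 ** (1 / 4),
}
for name, index in methods.items():
    print(f"{name}: {index:.3f}")
```

The last entry, $8^{1/4} \approx 1.682$, reproduces the index claimed for the class developed in this paper.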

After providing a short background of this research in this section, we give the main contribution in Section 2. The convergence study of our general three-step class is also furnished therein. We will also produce different optimal three-step iterations from the contributed class. Section 3 discusses some numerical comparisons with the existing methods in literature, and finally Section 4 draws a conclusion of this research paper.

2. New Class of Iteration Methods

In order to give a general class of methods consistent with the optimality conjecture of Kung and Traub, an eighth-order iterative scheme without memory should be constructed in this section such that only four evaluations per computing step are used. Such schemes are also known as predictor-corrector methods, in which the first step (Newton's step) is the predictor, while the other two steps correct the obtained solution. To achieve our goal, we consider the following three-step scheme, in which the first two steps are King's fourth-order family with one free real parameter $\beta$:

$$
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)}, \quad
x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}. \tag{2.1}
$$

Clearly, in (2.1) the evaluation $f'(z_n)$ should be annihilated so that the order of convergence remains at the highest level with the smallest number of evaluations per iteration. Toward this end, we approximate it by a polynomial of degree two that fits $f'(x_n)$, $f(y_n)$, and $f(z_n)$. Therefore, we take into account $f(t) \approx A(t) = a_0 + a_1(t - y_n) + a_2(t - y_n)^2$, where $A'(t) = a_1 + 2 a_2 (t - y_n)$. Subsequently, by imposing $f'(x_n) = A'(x_n)$, $f(y_n) = A(y_n)$, and $f(z_n) = A(z_n)$, we attain $a_0 = f(y_n)$ and

$$
a_1 + 2 a_2 (x_n - y_n) = f'(x_n), \qquad
a_1 + a_2 (z_n - y_n) = \frac{f(z_n) - f(y_n)}{z_n - y_n} = f[z_n, y_n]. \tag{2.2}
$$

Solving the system of two linear equations (2.2) in the two unknowns gives us $a_1$ and $a_2$. Using the obtained relations in the approximation $f'(z_n) \approx A'(z_n) = a_1 + 2 a_2 (z_n - y_n)$ and simplifying, we have

$$
f'(z_n) \approx \frac{2 f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n)}{2 x_n - z_n - y_n}. \tag{2.3}
$$
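The closed form (2.3) can be checked numerically by solving (2.2) for $a_1$ and $a_2$ and comparing $A'(z_n)$ with the right-hand side of (2.3); the sample function and the nodes below are arbitrary illustrative choices.

```python
import math

# Sample data (illustrative): f = sin, with nodes x, y, z near each other.
f, fp = math.sin, math.cos
x, y, z = 1.0, 0.9, 0.88

fzy = (f(z) - f(y)) / (z - y)            # divided difference f[z, y]
a2 = (fp(x) - fzy) / (2 * x - y - z)     # from subtracting the equations (2.2)
a1 = fzy - a2 * (z - y)
a_prime_z = a1 + 2 * a2 * (z - y)        # A'(z)

closed_form = (2 * fzy * (x - z) + (z - y) * fp(x)) / (2 * x - z - y)
print(abs(a_prime_z - closed_form))      # the two agree up to rounding error
print(abs(a_prime_z - fp(z)))            # and both approximate f'(z)
```

The agreement between `a_prime_z` and `closed_form` is exact up to rounding, while the distance to the true $f'(z)$ shrinks as the three nodes approach the root.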

Considering (2.3) in (2.1) and using the weight function approach, we obtain the following general class of three-step without-memory iterations:

$$
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)}, \quad
x_{n+1} = z_n - \frac{f(z_n)(2 x_n - z_n - y_n)}{2 f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n)}\,\{G(t) + H(\tau) + Q(\gamma)\}, \tag{2.4}
$$

wherein $G(t)$, $H(\tau)$, and $Q(\gamma)$ are three real-valued weight functions with $t = f(z)/f(y)$, $\tau = f(z)/f(x)$, and $\gamma = f(y)/f(x)$ (without the index $n$), which should be chosen such that the order of convergence arrives at the optimal level eight. We summarize this in the following theorem.

Theorem 2.1. Let $\alpha \in D$ be a simple zero of a sufficiently differentiable function $f : D \to \mathbb{R}$ on an open interval $D$ that contains $x_0$ as an initial approximation of $\alpha$. Then the three-step iteration (2.4), which includes four evaluations per full cycle, has the optimal convergence rate eight when

$$
G(0) = 1, \quad G'(0) = 0, \quad |G''(0)| < \infty, \qquad
H(0) = 0, \quad H'(0) = \tfrac{9}{6}, \quad |H''(0)| < \infty, \qquad
Q(0) = Q'(0) = Q''(0) = 0, \quad Q'''(0) = -(9 + 18\beta), \quad |Q^{(4)}(0)| < \infty, \tag{2.5}
$$

and it satisfies the error equation below:

$$
e_{n+1} = -\frac{1}{24}\, c_2 \left(c_2^2 (1 + 2\beta) - c_3\right)\left[6 c_2 (9 c_2 c_3 - 4 c_4) + 24 c_2^4 \left(5 + (4 - 3\beta)\beta\right) + 12 \left(c_3 - c_2^2 (1 + 2\beta)\right)^{2} G''(0) + c_2^4 Q^{(4)}(0)\right] e_n^8 + O(e_n^9). \tag{2.6}
$$

Proof. Define $e_n = x_n - \alpha$ as the error of the iterative scheme in the $n$th iterate. Applying Taylor's series expansion and taking into account $f(\alpha) = 0$, we have

$$
f(x_n) = f'(\alpha)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + c_7 e_n^7 + c_8 e_n^8\right] + O(e_n^9), \tag{2.7}
$$

where $c_k = (1/k!)\, f^{(k)}(\alpha)/f'(\alpha)$, $k \geq 2$. Furthermore, we have

$$
f'(x_n) = f'(\alpha)\left[1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + 5 c_5 e_n^4 + 6 c_6 e_n^5 + 7 c_7 e_n^6 + 8 c_8 e_n^7\right] + O(e_n^8). \tag{2.8}
$$

Dividing (2.7) by (2.8) gives us $f(x_n)/f'(x_n) = e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + (7 c_2 c_3 - 4 c_2^3 - 3 c_4) e_n^4 + \cdots + O(e_n^8)$. By substituting this relation in the first step of (2.4) and writing the Taylor expansion of $f(y_n)$, we obtain, respectively,

$$
y_n = \alpha + c_2 e_n^2 + 2(c_3 - c_2^2) e_n^3 + (4 c_2^3 - 7 c_2 c_3 + 3 c_4) e_n^4 + \cdots + O(e_n^8), \qquad
f(y_n) = f'(\alpha)\left[c_2 e_n^2 + 2(c_3 - c_2^2) e_n^3 + (5 c_2^3 - 7 c_2 c_3 + 3 c_4) e_n^4 + \cdots\right] + O(e_n^8). \tag{2.9}
$$

Furthermore, we find

$$
z_n - \alpha = c_2\left(c_2^2 (1 + 2\beta) - c_3\right) e_n^4 - 2\left(c_3^2 + c_2 c_4 - 2 c_2^2 c_3 (2 + 3\beta) + c_2^4 (2 + \beta(6 + \beta))\right) e_n^5 + \cdots + O(e_n^8). \tag{2.10}
$$

Similarly, we have

$$
\frac{f(z_n)(2 x_n - z_n - y_n)}{2 f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n)} = c_2\left(c_2^2 (1 + 2\beta) - c_3\right) e_n^4 - 2\left(c_3^2 + c_2 c_4 - 2 c_2^2 c_3 (2 + 3\beta) + c_2^4 (2 + \beta(6 + \beta))\right) e_n^5 + \left(7 c_3 c_4 + 6 c_2^2 c_4 (2 + 3\beta) - 2 c_2^3 c_3\left(15 + 42\beta + 8\beta^2\right) + 3 c_2\left(c_5 + c_3^2 (6 + 8\beta)\right) + 2 c_2^5\left(5 + \beta(22 + \beta(7 + \beta))\right)\right) e_n^6 + \cdots + O(e_n^8), \tag{2.11}
$$

$$
z_n - \frac{f(z_n)(2 x_n - z_n - y_n)}{2 f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n)} = \alpha - \frac{3}{2}\, c_2^2 c_3\left(c_2^2 (1 + 2\beta) - c_3\right) e_n^7 + \cdots + O(e_n^9). \tag{2.12}
$$

Moreover, by using (2.11) and the conditions (2.5), we attain

$$
\frac{f(z_n)(2 x_n - z_n - y_n)}{2 f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n)}\left\{G\!\left(\frac{f(z_n)}{f(y_n)}\right) + H\!\left(\frac{f(z_n)}{f(x_n)}\right) + Q\!\left(\frac{f(y_n)}{f(x_n)}\right)\right\} = c_2\left(c_2^2 (1 + 2\beta) - c_3\right) e_n^4 - 2\left(c_3^2 + c_2 c_4 - 2 c_2^2 c_3 (2 + 3\beta) + c_2^4 (2 + \beta(6 + \beta))\right) e_n^5 + \cdots + O(e_n^8).
$$

Considering this relation together with (2.12) and (2.5) in the last step of (2.4) results in

$$
e_{n+1} = x_{n+1} - \alpha = -\frac{1}{24}\, c_2\left(c_2^2 (1 + 2\beta) - c_3\right)\left[6 c_2 (9 c_2 c_3 - 4 c_4) + 24 c_2^4\left(5 + (4 - 3\beta)\beta\right) + 12\left(c_3 - c_2^2 (1 + 2\beta)\right)^{2} G''(0) + c_2^4 Q^{(4)}(0)\right] e_n^8 + O(e_n^9). \tag{2.13}
$$

This concludes the proof and shows that the suggested general class of three-step without-memory methods (2.4)-(2.5) possesses the eighth order of convergence.

Remark 2.2. The class of three-step methods (2.4)-(2.5) requires four evaluations per iteration and has the order of convergence eight. Therefore, this class is of optimal order and supports the Kung-Traub conjecture [10]. Hence, the efficiency index of the eighth-order derivative-involved methods from the class is $\sqrt[4]{8} \approx 1.682$.

Some efficient methods from the contributed optimal three-step class are given below. Per computing step, these methods are free from second- or higher-order derivative computations. The new contributed methods are

$$
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{2 f(x_n) - f(y_n)}{2 f(x_n) - 5 f(y_n)}, \quad
x_{n+1} = z_n - \frac{f(z_n)(2 x_n - z_n - y_n)}{2 f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n)}\left[1 + \left(\frac{f(z_n)}{f(y_n)}\right)^{3} + \frac{9}{6}\,\frac{f(z_n)}{f(x_n)} - \frac{9}{4}\left(\frac{f(y_n)}{f(x_n)}\right)^{4}\right], \tag{2.14}
$$

where $e_{n+1} = (1/4)\, c_2^2 c_3 (9 c_2 c_3 - 4 c_4) e_n^8 + O(e_n^9)$, and

$$
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad
z_n = y_n - \frac{f(y_n)}{f'(x_n)}\,\frac{f(x_n)}{f(x_n) - 2 f(y_n)}, \quad
x_{n+1} = z_n - \frac{f(z_n)(2 x_n - z_n - y_n)}{2 f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n)}\left[1 + \left(\frac{f(z_n)}{f(y_n)}\right)^{3} + \frac{9}{6}\,\frac{f(z_n)}{f(x_n)} - \frac{9}{6}\left(\frac{f(y_n)}{f(x_n)}\right)^{3} - 5\left(\frac{f(y_n)}{f(x_n)}\right)^{4}\right], \tag{2.15}
$$

where $e_{n+1} = (1/4)\, c_2^2 (c_3 - c_2^2)(9 c_2 c_3 - 4 c_4) e_n^8 + O(e_n^9)$ is its error equation.

We also mention some typical forms of the weight functions $G(t)$, $H(\tau)$, and $Q(\gamma)$ in iteration (2.4) that satisfy (2.5) and thus make the order optimal; these forms are listed in Table 1. Other than the very efficient methods (2.14) and (2.15) of optimal order eight, many more three-step without-memory iterations can be constructed using Table 1, that is, by applying (2.5) in (2.4) with different values of the free parameter $\beta$. Thus, in order to save space while still giving some of the other optimal eighth-order methods according to (2.4) and (2.5), we list the interesting ones in Table 2. Note that we first require the weight functions to satisfy (2.5), and then derive the corresponding error equations from the available data in (2.13).
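As a sanity check, method (2.14) can be run in double precision; the implementation below is a minimal sketch (the test equation and the starting point are illustrative, and no safeguard against a vanishing $f(y_n)$ is included).

```python
def f(t):
    return t ** 3 - 2          # illustrative test equation, root 2**(1/3)

def fp(t):
    return 3 * t ** 2

def step_214(x):
    """One full iteration of method (2.14)."""
    fx = f(x)
    y = x - fx / fp(x)
    fy = f(y)
    z = y - fy / fp(x) * (2 * fx - fy) / (2 * fx - 5 * fy)
    fz = f(z)
    dd = (fz - fy) / (z - y)   # divided difference f[z, y]
    base = fz * (2 * x - z - y) / (2 * dd * (x - z) + (z - y) * fp(x))
    weight = 1 + (fz / fy) ** 3 + (9 / 6) * (fz / fx) - (9 / 4) * (fy / fx) ** 4
    return z - base * weight

x = 1.5
for _ in range(2):             # two steps already exhaust double precision
    x = step_214(x)
print(abs(x - 2 ** (1 / 3)))
```

Double precision cannot display the eighth-order rate, which is why the multiprecision experiments of Section 3 are needed; still, two iterations suffice to reach the machine accuracy here.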
Future research in this field can now turn to finding optimal sixteenth-order four-step without-memory iterations based on the general class (2.4)-(2.5). Furthermore, producing with-memory iterations according to this class can also be of interest for future studies.

Table 1: Typical forms of the weight functions satisfying (2.5).
Table 2: Interesting choices of $\beta$, $G''(0)$, and $Q^{(4)}(0)$ in (2.13), which provide very efficient optimal root solvers.

3. Computational Examples

The contribution given in Section 2 is supported here through numerical experiments. We check the effectiveness of the novel methods (2.14) and (2.15) from our class of methods. For this purpose, we have compared our new methods with Newton's method (NM), (1.1), (1.2), and (1.3). The nonlinear test functions are furnished in Table 3. The results of the comparisons are given in Table 4 in terms of the number of significant digits for each test function after a specified number of iterations.

Table 3: Test functions, their roots, and the starting points.
Table 4: Comparison of different methods for finding the simple roots of test functions.

All computations in this paper were performed in MATLAB 7.6 using variable precision arithmetic (VPA) to increase the number of significant digits. We have considered the stopping criterion $|f(x_n)| \leq 10^{-800}$. In Table 4, an entry such as $0.2e{-448}$ means that the absolute value of the given nonlinear function after three iterations is zero up to 448 decimal places. In Table 4, IN and TNE stand for the iteration number and the total number of evaluations. As shown in Table 4, the proposed method (2.14) is preferable to Newton's method and to some methods of fourth and eighth orders of convergence. It is evident that (2.14) is more robust than the other competitors of various orders. We also recall an important concern in using multipoint iterations: high-order root solvers are very sensitive to initial guesses far from the root, while they are very powerful for starting points in the vicinity of the sought zero.
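The effect of multiprecision arithmetic can be reproduced outside MATLAB as well; the sketch below uses Python's standard `decimal` module in place of VPA and tracks how Newton's method roughly doubles the number of correct digits per iteration for $f(x) = x^2 - 2$ (an illustrative equation, not one of the test functions in Table 3).

```python
from decimal import Decimal, getcontext

getcontext().prec = 220                 # work with about 220 significant digits
root = Decimal(2).sqrt()
x = Decimal("1.5")
for k in range(1, 8):
    x -= (x * x - 2) / (2 * x)          # Newton step for f(x) = x^2 - 2
    correct = -abs(x - root).log10()    # correct decimal digits so far
    print(f"iteration {k}: about {int(correct)} correct digits")
```

An optimal eighth-order scheme such as (2.14) multiplies the digit count roughly by eight per iteration instead of two, which is why the entries of Table 4 grow so quickly for the proposed methods.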

Remark 3.1. If we need to solve a lot of equations from a large system of boundary-value problems, then the cost of function evaluations becomes important. Therefore, the proposed class (2.4)-(2.5) is valuable for solving such problems.

4. Concluding Remarks

In recent years, numerous works have focused on the development of more advanced and efficient methods for nonlinear scalar equations. Many methods have been developed that improve the convergence rate of Newton's method; still, one practical drawback of many of them is their slow rate of convergence. This paper has developed and established a rapid class of eighth-order iteration methods. Per iteration, the methods from our class require three evaluations of the function and one evaluation of its first derivative; therefore, the efficiency index of the methods equals $\sqrt[4]{8} \approx 1.682$, which is better than that of the classical Newton's method. Kung and Traub [10] conjectured that a multipoint iteration without memory based on $n$ evaluations of $f$ or its derivatives could achieve the optimal convergence order $2^{n-1}$. Newton's method is an example that agrees with the Kung-Traub conjecture for $n = 2$, and the class of methods (2.4)-(2.5) is another example, agreeing with the Kung-Traub hypothesis for $n = 4$. Thus, the suggested class (2.4)-(2.5) is effective and should attract the attention of researchers.


  1. Z. H. Wang, “Numerical stability test of neutral delay differential equations,” Mathematical Problems in Engineering, vol. 2008, Article ID 698043, 10 pages, 2008.
  2. N. A. Mir, R. Muneer, and I. Jabeen, “Some families of two-step simultaneous methods for determining zeros of nonlinear equations,” ISRN Applied Mathematics, Article ID 817174, 11 pages, 2011.
  3. M. A. Noor, K. I. Noor, E. Al-Said, and M. Waseem, “Some new iterative methods for nonlinear equations,” Mathematical Problems in Engineering, Article ID 198943, 12 pages, 2010.
  4. P. Sargolzaei and F. Soleymani, “Accurate fourteenth-order methods for solving nonlinear equations,” Numerical Algorithms, vol. 58, pp. 513–527, 2011.
  5. F. Soleymani, M. Sharifi, and B. S. Mousavi, “An improvement of Ostrowski's and King's techniques with optimal convergence order eight,” Journal of Optimization Theory and Applications. In press.
  6. J. R. Sharma and R. Sharma, “A new family of modified Ostrowski's methods with accelerated eighth order convergence,” Numerical Algorithms, vol. 54, no. 4, pp. 445–458, 2010.
  7. F. Soleymani and B. S. Mousavi, “A novel computational technique for finding simple roots of nonlinear equations,” International Journal of Mathematical Analysis, vol. 5, pp. 1813–1819, 2011.
  8. A. Iliev and N. Kyurkchiev, Nontrivial Methods in Numerical Analysis (Selected Topics in Numerical Analysis), Lambert Academic Publishing, 2010.
  9. G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic, Boston, Mass, USA, 1994.
  10. H. T. Kung and J. F. Traub, “Optimal order of one-point and multipoint iteration,” Journal of the Association for Computing Machinery, vol. 21, pp. 643–651, 1974.