Abstract

We present a new fourth order method for finding simple roots of a nonlinear equation $f(x) = 0$. In terms of computational cost, the method uses one evaluation of the function and two evaluations of its first derivative per iteration. Therefore, the method has optimal order with efficiency index $4^{1/3} \approx 1.587$, which is better than the efficiency index $2^{1/2} \approx 1.414$ of Newton method and the same as that of Jarratt method and King's family. Numerical examples are given to show that the method thus obtained is competitive with other similar robust methods. The conjugacy maps and extraneous fixed points of the presented method and of other existing fourth order methods are discussed, and their basins of attraction are given to demonstrate their dynamical behavior in the complex plane.

1. Introduction

Solving nonlinear equations is a common and important problem in science and engineering [1, 2]. Analytic methods for solving such equations are almost nonexistent and therefore it is only possible to obtain approximate solutions by relying on numerical methods based on iterative procedures. With the advancement of computers, the problem of solving nonlinear equations by numerical methods has gained more importance than before.

In this paper, we consider the problem of finding a simple root $\alpha$ of a nonlinear equation $f(x) = 0$, where $f$ is a continuously differentiable function. Newton method is probably the most widely used algorithm for finding simple roots; it starts with an initial approximation $x_0$ close to the root and generates a sequence of successive iterates converging quadratically to simple roots (see [3]). It is given by

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots \qquad (1)$$
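For concreteness, here is a minimal Python sketch of the Newton iteration (1); the stopping tolerance, iteration cap, and test function are illustrative choices and not taken from the paper.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        if dfx == 0:                      # derivative vanished: cannot proceed
            raise ZeroDivisionError("f'(x) = 0 encountered")
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:          # successive iterates agree to tolerance
            return x_new
        x = x_new
    return x

# Example: the simple root of f(x) = x**3 - 2 near x0 = 1.5
print(newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5))  # ~1.2599210498948732
```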

In order to improve the order of convergence of Newton method, many higher order multistep methods [4] have been proposed and analyzed by various researchers at the expense of additional evaluations of functions and derivatives and of changes in the points of iteration. An extensive survey of the literature dealing with these methods of improved order is found in [3, 5, 6] and the references therein. The Euler and Chebyshev methods (see Traub [3]), the method of Weerakoon and Fernando [7], Ostrowski's square root method [5], Halley's method [8], the method of Hansen and Patrick [9], and so forth are well-known third order methods requiring the evaluation of $f$, $f'$, and $f''$ per step. The famous Ostrowski method [5] is an important example of fourth order multipoint methods without memory. The method requires two evaluations of $f$ and one evaluation of $f'$ per step and is more efficient than the classical Newton method. Another well-known example of fourth order multipoint methods with the same number of evaluations is King's family of methods [10], which contains Ostrowski's method as a special case. Chun et al. [11–13], Cordero et al. [14], and Kou et al. [15, 16] have also proposed fourth order methods requiring two evaluations of $f$ and one evaluation of $f'$ per iteration. Jarratt [17] proposed fourth order methods requiring one evaluation of $f$ and two evaluations of $f'$ per iteration. All of these methods are multistep methods in which a Newton or weighted-Newton step is followed by a faster Newton-like step.

Through this work, we contribute a little more to the theory of iterative methods by developing a formula of optimal order four for computing simple roots of a nonlinear equation. The algorithm is based on the composition of two weighted-Newton steps and uses three function evaluations per iteration, namely, one evaluation of $f$ and two evaluations of $f'$.

On the other hand, we analyze the behavior of this method in the complex plane using some tools from complex dynamics. Several authors have applied these techniques to different iterative methods. In this sense, Curry et al. [18] and Vrscay and Gilbert [19, 20] described the dynamical behavior of some well-known iterative methods. The complex dynamics of various other known iterative methods, such as King's and Chebyshev-Halley's families and the Jarratt method, has also been analyzed by various researchers; see, for example, [13, 21–26].

The paper is organized as follows. Some basic definitions relevant to the present work are given in Section 2. In Section 3, the method is developed and its convergence behavior is analyzed. In Section 4, we compare the presented method with some existing fourth order methods on a series of numerical examples. In Section 5, we obtain the conjugacy maps and possible extraneous fixed points of these methods to compare them from a dynamical point of view. In Section 6, the methods are compared in the complex plane using their basins of attraction. Concluding remarks are given in Section 7.

2. Basic Definitions

Definition 1. Let $\alpha$ be a simple root and let $x_n$, $n = 0, 1, 2, \ldots$, be a sequence of real or complex numbers converging to $\alpha$. Then, one says that the order of convergence of the sequence is $p$ if there exists $p \in \mathbb{R}^{+}$ such that

$$\lim_{n \to \infty} \frac{x_{n+1} - \alpha}{(x_n - \alpha)^{p}} = C \qquad (2)$$

for some $C \neq 0$; $C$ is known as the asymptotic error constant.

Definition 2. Let $e_n = x_n - \alpha$ be the error in the $n$th iteration; one calls the relation

$$e_{n+1} = C e_n^{p} + O\left(e_n^{p+1}\right) \qquad (3)$$

the error equation. If the error equation can be obtained for an iterative method, then the value of $p$ is its order of convergence.

Definition 3. Let $d$ be the number of new pieces of information required by a method, where a "piece of information" typically is any evaluation of the function or one of its derivatives. The efficiency of the method is measured by the concept of efficiency index [27], defined by

$$E = p^{1/d}, \qquad (4)$$

where $p$ is the order of the method.
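A one-line computation reproduces the efficiency indices quoted in the abstract; the helper name below is ours.

```python
# Efficiency index E = p**(1/d): order p achieved with d evaluations per iteration.
def efficiency_index(order, evaluations):
    return order ** (1.0 / evaluations)

print(efficiency_index(2, 2))  # Newton method:           2**(1/2) ≈ 1.414
print(efficiency_index(4, 3))  # optimal 4th order method: 4**(1/3) ≈ 1.587
```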

Definition 4. Suppose that $x_{n-1}$, $x_n$, and $x_{n+1}$ are three successive iterations close to the root $\alpha$. Then, the computational order of convergence (see [7]) is approximated by using (3) as

$$\rho \approx \frac{\ln\left|\left(x_{n+1} - \alpha\right)/\left(x_n - \alpha\right)\right|}{\ln\left|\left(x_n - \alpha\right)/\left(x_{n-1} - \alpha\right)\right|}. \qquad (5)$$
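A small sketch showing how (5) can be evaluated in practice once the root is known; the function name and the sample Newton iterates are illustrative.

```python
import math

def computational_order(x_prev, x_curr, x_next, alpha):
    """Approximate rho from three successive iterates and the known root alpha, cf. (5)."""
    num = math.log(abs((x_next - alpha) / (x_curr - alpha)))
    den = math.log(abs((x_curr - alpha) / (x_prev - alpha)))
    return num / den

# Example with three Newton iterates for f(x) = x**2 - 2 (alpha = sqrt(2)):
xs = [1.5, 1.4166666666666667, 1.4142156862745099]
print(computational_order(*xs, math.sqrt(2.0)))  # close to 2, the order of Newton method
```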

3. Development of the Method

Let us consider a two-step weighted-Newton iteration scheme of the type (6), in which four constants are to be determined. A natural question arises: is it possible to choose these constants such that the iteration method (6) attains maximum order of convergence? The answer to this question is affirmative and is proved in the following theorem.

Theorem 5. Let $f$ be a real or complex function. Assuming that $f$ is sufficiently differentiable in an interval $D$, if $f$ has a simple root $\alpha \in D$ and $x_0$ is sufficiently close to $\alpha$, then (6) has fourth order convergence provided the four constants take the values determined in the proof below.

Proof. Let $e_n = x_n - \alpha$ be the error at the $n$th iteration; then $x_n = \alpha + e_n$. Expanding $f(x_n)$ and $f'(x_n)$ about $\alpha$ and using the fact that $f(\alpha) = 0$ and $f'(\alpha) \neq 0$, we have

$$f(x_n) = f'(\alpha)\left[e_n + c_2 e_n^{2} + c_3 e_n^{3} + c_4 e_n^{4} + O\left(e_n^{5}\right)\right], \qquad (7)$$

$$f'(x_n) = f'(\alpha)\left[1 + 2 c_2 e_n + 3 c_3 e_n^{2} + 4 c_4 e_n^{3} + O\left(e_n^{4}\right)\right], \qquad (8)$$

where $c_k = f^{(k)}(\alpha) / \left(k!\, f'(\alpha)\right)$, $k = 2, 3, \ldots$.
Using (7) and (8) and then simplifying, we obtain (9). Substituting (9) in the first substep of (6), we get (10). Expanding $f'(y_n)$ about $\alpha$ and using (10), we have (11). From (8) and (11), we have (12). Using (9) and (12) in the second substep of (6), we obtain the error relation (13), with coefficients given by (14). In order to achieve fourth order convergence, the coefficients of the lower order error terms in (13) must vanish. Setting them equal to zero and solving the resulting system yields the required values of the four constants. With these values, error equation (13) turns out to be (15). Thus, (15) establishes the fourth order convergence of iterative scheme (6). This completes the proof of the theorem.

Hence, with these values of the constants, the proposed scheme is given by (16).

We denote this method by SBM.

Thus, we have derived the fourth order method (16) for finding simple roots of a nonlinear equation. It is clear that this method requires three evaluations per iteration, namely, one evaluation of $f$ and two evaluations of $f'$, and therefore it is of optimal order.
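The coefficients defining (16) are not reproduced above, so the sketch below is not an implementation of SBM; instead it implements the classical Jarratt scheme mentioned in Section 1, which is also of fourth order and also uses one evaluation of $f$ and two of $f'$ per iteration, to illustrate the two-step weighted-Newton structure and cost pattern shared by such optimal methods.

```python
def jarratt(f, fprime, x0, tol=1e-12, max_iter=20):
    """Classical Jarratt scheme: fourth order, one f and two f' evaluations per iteration.
    Shown here only to illustrate the evaluation pattern shared by SBM (16)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        y = x - (2.0 / 3.0) * fx / dfx               # weighted-Newton predictor step
        dfy = fprime(y)                              # second f' evaluation of the iteration
        x_new = x - fx / dfx * (3.0 * dfy + dfx) / (6.0 * dfy - 2.0 * dfx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(jarratt(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5))  # ~1.2599210498948732
```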

4. Numerical Results

In this section, we present the numerical results obtained by employing the presented method SBM, given by (16), to solve some nonlinear equations. We compare the presented method with the quadratically convergent Newton method (NM) defined by (1) and with some existing fourth order iterative methods, namely, the method proposed by Chun et al. [13], the method of Cordero et al. [14], King's family of methods [10], and the method of Kou et al. [16]. These methods are given as follows.

Chun et al. method (CLM), Cordero et al. method (CM), and King's family of methods (KM) are given by their respective iteration formulas; King's family involves a free parameter.

Kou et al. method (KLM) is given by its corresponding iteration formula.

Test functions along with their roots, correct up to 28 decimal places, are displayed in Table 1. Table 2 shows the values of the initial approximation $x_0$, chosen on both sides of the root, together with the errors obtained when each method is allotted the same total number of function evaluations (NFE). Table 3 displays the computational order of convergence $\rho$ defined by (5). The NFE is counted as the sum of the number of evaluations of the function and the number of evaluations of its derivative. In the calculations, the NFE used for all methods is 12; that is, for NM the error is calculated at the sixth iteration, whereas for the remaining methods it is calculated at the fourth iteration.
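The bookkeeping behind this comparison is simple arithmetic, shown here only to confirm the iteration counts quoted above.

```python
# NM costs 2 evaluations per iteration (f and f'); the fourth order methods cost 3.
nfe_budget = 12
for name, evals_per_iteration in [("NM", 2), ("fourth order methods", 3)]:
    print(name, "->", nfe_budget // evals_per_iteration, "iterations")
# NM -> 6 iterations; fourth order methods -> 4 iterations
```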

The results in Table 3 show that the computational order of convergence is in accordance with the theoretical order of convergence.

The numerical results in Table 2 clearly show that the presented method is competitive with the other existing fourth order methods considered for solving nonlinear equations. It can also be observed that there is no clear winner among the fourth order methods, in the sense that one behaves better in one situation while others take the lead in other situations. All computations are performed in Mathematica [28] using a large number of significant digits.

Since the present approach uses one evaluation of $f$ and two evaluations of $f'$ per iteration, the newly developed method is particularly useful in applications in which the derivative $f'$ can be evaluated more rapidly than $f$ itself. Examples of this kind occur when $f$ is defined by a polynomial or by an integral.
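To make the remark about integral-defined functions concrete, here is a small hypothetical example (not taken from the paper): $f$ is defined through a quadrature, so each evaluation of $f$ requires numerical integration, while $f'$ follows from the fundamental theorem of calculus and is cheap.

```python
import math
from scipy.integrate import quad

# Hypothetical example: evaluating f requires a numerical quadrature ...
def f(x):
    value, _ = quad(lambda t: math.exp(-t * t), 0.0, x)
    return value - 0.5

# ... while f'(x) = exp(-x**2) costs a single elementary function call.
def fprime(x):
    return math.exp(-x * x)

print(f(0.6), fprime(0.6))  # one quadrature versus one exponential evaluation
```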

5. Corresponding Conjugacy Maps for Quadratic Polynomials

In this section, we discuss the rational map arising from various methods applied to a generic polynomial with simple roots.

Theorem 6 (Newton method). The rational map $R_p(z)$ arising from Newton method applied to $p(z) = (z - a)(z - b)$, $a \neq b$, is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to $S(z) = z^{2}$.

Proof. Let $p(z) = (z - a)(z - b)$, $a \neq b$, and let $M$ be the Möbius transformation given by $M(z) = (z - a)/(z - b)$ with its inverse $M^{-1}(z) = (zb - a)/(z - 1)$, which may be considered as a map from $\mathbb{C} \cup \{\infty\}$ onto itself. We then get

$$S(z) = M \circ R_p \circ M^{-1}(z) = z^{2}.$$
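This computation can be verified symbolically; the short SymPy sketch below (variable names are ours) builds the Newton map for $p(z) = (z - a)(z - b)$, conjugates it by $M(z) = (z - a)/(z - b)$, and recovers the map $z \mapsto z^{2}$.

```python
import sympy as sp

z, a, b, x = sp.symbols('z a b x')
p = (x - a) * (x - b)
newton_map = sp.cancel(x - p / sp.diff(p, x))    # Newton iteration function for p

M = lambda w: (w - a) / (w - b)                  # Mobius transformation
M_inv = (b * z - a) / (z - 1)                    # its inverse

conjugated = sp.simplify(M(newton_map.subs(x, M_inv)))
print(conjugated)  # z**2: Newton method on a quadratic is conjugate to z -> z**2
```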

Similar results can be obtained for the other methods, as follows.

Theorem 7 (KM). The rational map $R_p(z)$ arising from KM applied to $p(z) = (z - a)(z - b)$, $a \neq b$, is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to a map of the form $S(z) = z^{4} h(z)$.

Theorem 8 (KLM). The rational map $R_p(z)$ arising from KLM applied to $p(z) = (z - a)(z - b)$, $a \neq b$, is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to a map of the form $S(z) = z^{4} h(z)$.

Theorem 9 (CM). The rational map $R_p(z)$ arising from CM applied to $p(z) = (z - a)(z - b)$, $a \neq b$, is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to a map of the form $S(z) = z^{4} h(z)$.

Theorem 10 (CLM). The rational map $R_p(z)$ arising from CLM applied to $p(z) = (z - a)(z - b)$, $a \neq b$, is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to a map of the form $S(z) = z^{4} h(z)$.

Theorem 11 (SBM). The rational map $R_p(z)$ arising from SBM applied to $p(z) = (z - a)(z - b)$, $a \neq b$, is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to a map of the form $S(z) = z^{4} h(z)$.

Remark. All the maps obtained above are of the form $S(z) = z^{m} h(z)$, where $h(z)$ is either unity or a rational function and $m$ is the order of the method.

5.1. Extraneous Fixed Points

The methods discussed above can be written in the fixed point iteration form

$$x_{n+1} = x_n - H_f(x_n)\,\frac{f(x_n)}{f'(x_n)}, \qquad (28)$$

where the corresponding weight functions $H_f$ of the methods are listed in Table 4.

Clearly, the root $\alpha$ of $f$ is a fixed point of the method. However, the points at which $H_f$ vanishes while $f$ does not are also fixed points of the method, since, with $H_f = 0$, the second term on the right side of (28) vanishes. These points are called extraneous fixed points (see [20]). Moreover, a fixed point $x^{*}$ is called attracting if $|R'(x^{*})| < 1$, repelling if $|R'(x^{*})| > 1$, and neutral otherwise, where $R$ denotes the iteration function. In addition, if $R'(x^{*}) = 0$, the fixed point is superattracting.
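As a small illustration of this classification (not tied to any particular method in Table 4), the sketch below estimates the derivative of an iteration function by a central difference and labels a fixed point accordingly; the finite-difference step and the Newton example are our choices.

```python
def classify_fixed_point(iteration_map, z0, h=1e-6):
    """Classify a fixed point z0 of an iteration function R by the magnitude of R'(z0),
    estimated here with a central difference."""
    derivative = (iteration_map(z0 + h) - iteration_map(z0 - h)) / (2 * h)
    magnitude = abs(derivative)
    if magnitude < 1:
        return "attracting"
    if magnitude > 1:
        return "repelling"
    return "neutral"

# Example: the Newton map for p(z) = z**2 - 1; its roots +1 and -1 are attracting
# (in fact superattracting, since the derivative of the map vanishes there).
newton_map = lambda z: z - (z * z - 1) / (2 * z)
print(classify_fixed_point(newton_map, 1.0))   # attracting
```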

In this section, we discuss the extraneous fixed points of each method applied to a model polynomial.

Theorem 12. There are no extraneous fixed points for Newton method.

Proof. For Newton method (1), the weight function is $H_f(x) = 1$. This function does not vanish, and therefore there are no extraneous fixed points.

Theorem 13 (KM). There are four extraneous fixed points for KM (for a fixed value of the free parameter).

Proof. The extraneous fixed points for KM are the roots of $H_f(z) = 0$. For the chosen value of the parameter, this yields four fixed points. These fixed points are repelling (the derivative of the iteration function at these points has magnitude greater than 1).

Theorem 14 (KLM). There are four extraneous fixed points for KLM.

Proof. The extraneous fixed points for KLM are the roots of $H_f(z) = 0$, which yields four such points. These fixed points are repelling (the derivative of the iteration function at these points has magnitude greater than 1).

Theorem 15 (CM). There are six extraneous fixed points for CM.

Proof. The extraneous fixed points for CM are given by the roots of $H_f(z) = 0$. On solving, we get six extraneous fixed points. These fixed points are repelling (the derivative of the iteration function at these points has magnitude greater than 1).

Theorem 16 (CLM). There are four extraneous fixed points for CLM.

Proof. The extraneous fixed points for CLM are those for which $H_f(z) = 0$. On substitution, we obtain an equation whose four roots are the extraneous fixed points. These fixed points are repelling (the derivative of the iteration function at these points has magnitude greater than 1).

Theorem 17 (SBM). There are four extraneous fixed points for SBM.

Proof. The extraneous fixed points for SBM are the roots of $H_f(z) = 0$, which yields four such points. These fixed points are repelling (the derivative of the iteration function at these points has magnitude greater than 1).

In the next section, we give the basins of attraction of these iterative methods in the complex plane.

6. Basins of Attraction

To study the dynamical behavior, we generate basins of attraction for two different polynomials using the above-mentioned methods. Following [29], we take a square region of the complex plane that contains all the roots of the nonlinear equation concerned, discretize it into a grid of points, and apply each iterative method starting from every point $z_0$ in the square. We assign a color to each point according to the simple root to which the corresponding orbit of the iterative method, starting from $z_0$, converges. If the orbit does not reach any root of the polynomial, within the prescribed tolerance, in a maximum of 25 iterations, we mark that point with black color.
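The following sketch reproduces this procedure for Newton method applied to $z^{3} - 1$; the grid resolution, the square $[-2, 2] \times [-2, 2]$, the tolerance value, and the colormap are illustrative choices (the paper's own grid size and tolerance are not reproduced here), while the cap of 25 iterations follows the text.

```python
import numpy as np
import matplotlib.pyplot as plt

def basins(roots, iterate, n=400, box=2.0, max_iter=25, tol=1e-6):
    """Assign each starting point in the square [-box, box]^2 the index of the root
    its orbit converges to; points that do not converge keep -1 (non-convergent)."""
    xs = np.linspace(-box, box, n)
    grid = np.full((n, n), -1, dtype=int)
    for i, re in enumerate(xs):
        for j, im in enumerate(xs):
            z = complex(re, im)
            for _ in range(max_iter):
                try:
                    z = iterate(z)
                except ZeroDivisionError:          # derivative vanished along the orbit
                    break
                hits = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hits:
                    grid[j, i] = hits[0]
                    break
    return grid

# Newton iteration for p(z) = z**3 - 1, whose three roots are the cube roots of unity.
newton = lambda z: z - (z**3 - 1) / (3 * z**2)
roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]
plt.imshow(basins(roots, newton), extent=[-2, 2, -2, 2], origin="lower")
plt.show()
```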

For the first test problem, Figure 1 clearly shows that the proposed method SBM (Figure 1(f)) seems to produce larger basins of attraction than CLM (Figure 1(e)) and CM (Figure 1(d)), basins almost comparable to those of KLM (Figure 1(c)), and smaller basins of attraction than NM (Figure 1(a)) and KM (Figure 1(b)).

For the next problem, we have taken a cubic polynomial; the results are given in Figure 2. Again, the proposed method SBM (Figure 2(f)) seems to produce larger basins of attraction than KM (Figure 2(b)), CM (Figure 2(d)), and CLM (Figure 2(e)), basins almost comparable to those of KLM (Figure 2(c)), and smaller basins of attraction than NM (Figure 2(a)).

7. Conclusion

In this work, we have proposed an optimal fourth order method for finding simple roots of nonlinear equations. An advantage of the proposed method is that it does not require the evaluation of the second order derivative. The numerical results confirm the robustness and efficiency of the proposed method. The presented basins of attraction also show the good performance of the proposed method as compared to other existing fourth order methods in the literature.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

One of the authors acknowledges I. K. Gujral Punjab Technical University, Kapurthala, Punjab, for providing research support to her.