Abstract

We revisit the necessary and sufficient conditions for linear and high order of convergence of fixed point and Newton’s methods in the complex plane. Schröder’s processes of the first and second kind are revisited and extended. Examples and numerical experiments are included.

1. Introduction

In this paper, we revisit fixed point and Newton's methods for finding a simple root of a nonlinear equation in the complex plane. This paper is an adapted version of [1] for complex-valued functions; we present proofs only for the theorems that must be modified compared to the real case. We present necessary and sufficient conditions for the convergence of fixed point and Newton's methods. Based on these conditions, we show how to obtain direct processes to recursively increase the order of convergence. For the fixed point method, we present a generalization of Schröder's process of the first kind. Two methods are also presented to increase the order of convergence of Newton's method. One of them coincides with Schröder's process of the second kind, which appears in several forms in the literature. The link between the two Schröder processes can be found in [2]. As in the real case, methods can be combined to obtain, for example, the super-Halley process and other possible higher-order generalizations of it. We refer to [1] for details on this subject.

The plan of the paper is as follows. In Section 2, we recall Taylor's expansions for analytic functions and the error term for truncated expansions. In Section 3 we consider the fixed point method and its necessary and sufficient conditions for convergence. These results lead to a generalization of Schröder's process of the first kind. Section 4 is devoted to Newton's method. Based on the necessary and sufficient conditions, we propose two ways to increase the order of convergence of Newton's method. Examples and numerical experiments are included in Section 5.

2. Analytic Function

Since we are working with complex numbers, we will be dealing with analytic functions. Supposing $f(z)$ is an analytic function and $z^*$ is in its domain, we can write
$$f(z)=\sum_{k=0}^{\infty}\frac{f^{(k)}(z^*)}{k!}(z-z^*)^k$$
for any $z$ such that $|z-z^*|<\rho$, where $\rho$ is the radius of convergence. Then, for $m\ge 0$ we have
$$f(z)=\sum_{k=0}^{m}\frac{f^{(k)}(z^*)}{k!}(z-z^*)^k+(z-z^*)^{m+1}E_{f,m+1}(z),$$
where $E_{f,m+1}(z)$ is the analytic function
$$E_{f,m+1}(z)=\sum_{k=m+1}^{\infty}\frac{f^{(k)}(z^*)}{k!}(z-z^*)^{k-m-1}.$$
Moreover, the series for $f(z)$ and $E_{f,m+1}(z)$ have the same radius of convergence for any $m\ge 0$, and
$$E_{f,m+1}(z^*)=\frac{f^{(m+1)}(z^*)}{(m+1)!}$$
for $m\ge 0$.

3. Fixed Point Method

A fixed point method uses an iteration function (IF) $g(z)$, which is an analytic function mapping its domain of definition into itself. Using an IF $g(z)$ and an initial value $x_0$, we are interested in the convergence of the sequence $x_{n+1}=g(x_n)$ for $n=0,1,2,\dots$. It is well known that if the sequence converges, it converges to a fixed point $z^*$ of $g(z)$, that is, a point such that $z^*=g(z^*)$.
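As a minimal sketch (ours, not from the paper), such an iteration can be coded directly; the stopping rule, the function name, and the example IF (Newton's IF for $f(z)=z^3-1$, whose fixed points are the cube roots of unity) are our own choices.

```python
# A minimal sketch (not from the paper) of iterating an IF g on a
# complex starting value x0.
def fixed_point_iteration(g, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates are close."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: g(z) = (2z + 1/z**2)/3 is Newton's IF for f(z) = z**3 - 1;
# its fixed points are the cube roots of unity.
root = fixed_point_iteration(lambda z: (2*z + 1/z**2) / 3, 1.0 + 0.5j)
```

Starting from $1+0.5j$, the sequence settles on a cube root of unity within a handful of steps.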

Let $g(z)$ be an IF, $p\ge 1$ be a positive integer, and $z^*$ be such that the following limit exists:
$$C_p(g;z^*)=\lim_{z\to z^*}\frac{g(z)-z^*}{(z-z^*)^p}.$$
Let us observe that for $p=1$ we have $C_1(g;z^*)=g^{(1)}(z^*)$. We say that the convergence of the sequence $\{x_n\}_{n=0}^{\infty}$ to $z^*$ is of (integer) order $p$ if and only if $C_p(g;z^*)\ne 0$, and $|C_p(g;z^*)|$ is called the asymptotic constant. We also say that $g(z)$ is of order $p$. If the limit exists but is zero, we can say that $g(z)$ is of order at least $p$.

From a numerical point of view, since $z^*$ is not known, it is useful to define the ratio
$$\widehat{C}_{p,n}=\frac{|x_{n+1}-x_n|}{|x_n-x_{n-1}|^{p}}.\tag{7}$$

Following [3], it can be shown that, for a method of order $p$,
$$\lim_{n\to\infty}\widehat{C}_{p,n}=|C_p(g;z^*)|.$$
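A small numerical sketch (our own) of this estimation, using Newton's IF on $f(z)=z^3-1$, for which the order is $p=2$ and the asymptotic constant at the root $1$ is $|f^{(2)}(1)/(2f^{(1)}(1))|=1$:

```python
# Estimate the asymptotic constant from three successive iterates,
# here for Newton's IF on f(z) = z**3 - 1 (order p = 2; the constant
# at the root 1 is |f''(1)/(2 f'(1))| = 1).
def estimated_constant(xs, p):
    """|x_{n+1} - x_n| / |x_n - x_{n-1}|^p from the last three iterates."""
    return abs(xs[-1] - xs[-2]) / abs(xs[-2] - xs[-3]) ** p

newton = lambda z: (2 * z + 1 / z**2) / 3   # Newton's IF for z**3 - 1
xs = [1.2]
for _ in range(4):
    xs.append(newton(xs[-1]))
estimate = estimated_constant(xs, 2)
```

With the starting value $1.2$ the estimate is already close to the exact constant $1$ after four iterations.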

We say that $z^*$ is a root of $f(z)$ of multiplicity $m\ge 1$ if and only if $f^{(k)}(z^*)=0$ for $k=0,\dots,m-1$, and $f^{(m)}(z^*)\ne 0$. Moreover, $z^*$ is a root of $f(z)$ of multiplicity $m$ if and only if there exists an analytic function $h(z)$ such that $f(z)=(z-z^*)^m h(z)$ and $h(z^*)\ne 0$.

We will use the big $O$ notation $u(z)=O(v(z))$ and the small $o$ notation $u(z)=o(v(z))$, around $z^*$, respectively, when there exists a constant $K>0$ such that $|u(z)|\le K|v(z)|$ for all $z$ in a neighborhood of $z^*$, and when $\lim_{z\to z^*}u(z)/v(z)=0$.

For a root $z^*$ of multiplicity $m$ of $f(z)$, it is equivalent to write $f(z)=O((z-z^*)^m)$ or $(z-z^*)^m=O(f(z))$. Observe also that if $z^*$ is a simple root of $f(z)$, then $z^*$ is a root of multiplicity $m$ of $f^m(z)$. Hence $O(f^m(z))$ is equivalent to $O((z-z^*)^m)$.

The first result concerns the necessary and sufficient conditions for achieving linear convergence.

Theorem 1. Let $g(z)$ be an IF with fixed point $z^*$, and let $g^{(1)}(z)$ stand for its first derivative (although the first derivative is usually denoted by $g'(z)$, one writes $g^{(1)}(z)$ to maintain uniformity throughout the text).
(i) If $|g^{(1)}(z^*)|<1$, then there exists a neighborhood of $z^*$ such that for any $x_0$ in that neighborhood the sequence $x_{n+1}=g(x_n)$ converges to $z^*$.
(ii) If there exists a neighborhood of $z^*$ such that for any $x_0$ in that neighborhood the sequence $x_{n+1}=g(x_n)$ converges to $z^*$, and $x_n\ne z^*$ for all $n$, then $|g^{(1)}(z^*)|\le 1$.
(iii) For any such sequence $\{x_n\}_{n=0}^{\infty}$ ($x_n\ne z^*$) which converges to $z^*$, the limit $\lim_{n\to\infty}(x_{n+1}-z^*)/(x_n-z^*)$ exists and equals $g^{(1)}(z^*)$.

Proof. (i) By continuity of $g^{(1)}(z)$, there is a closed disk $B(z^*;\delta)=\{z:|z-z^*|\le\delta\}$ such that $K=\max_{z\in B(z^*;\delta)}|g^{(1)}(z)|<1$. Then if $x_n\in B(z^*;\delta)$, we have
$$|x_{n+1}-z^*|=|g(x_n)-g(z^*)|\le K|x_n-z^*|,$$
and $x_{n+1}\in B(z^*;\delta)$. Moreover
$$|x_n-z^*|\le K^n|x_0-z^*|,$$
and the sequence converges to $z^*$ because $K<1$.
(ii) If $|g^{(1)}(z^*)|>1$, write $g(z)-z^*=(z-z^*)h(z)$ with $h$ analytic and $h(z^*)=g^{(1)}(z^*)$; by continuity there exists a disk $B(z^*;\delta)$, with $\delta>0$, such that $K=\min_{z\in B(z^*;\delta)}|h(z)|>1$. Let us suppose that the sequence is such that $x_n\ne z^*$ for all $n$. If $x_n$ and $x_{n+1}$ are in $B(z^*;\delta)$, then we have
$$|x_{n+1}-z^*|=|x_n-z^*|\,|h(x_n)|\ge K|x_n-z^*|.$$
Because $K>1$, the errors grow and the iterates eventually leave the disk. Then the infinite sequence cannot converge to $z^*$.
(iii) For any sequence $x_{n+1}=g(x_n)$ which converges to $z^*$ we have
$$\lim_{n\to\infty}\frac{x_{n+1}-z^*}{x_n-z^*}=\lim_{n\to\infty}\frac{g(x_n)-g(z^*)}{x_n-z^*}=g^{(1)}(z^*).$$
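Part (iii) is easy to observe numerically. A small sketch (our own example, not from the paper): the IF $g(z)=(z^2+2)/3$ has the fixed point $z^*=1$ with $g^{(1)}(1)=2/3$, so the error ratio tends to $2/3$.

```python
# Our own illustration of Theorem 1(iii): g(z) = (z**2 + 2)/3 has the
# fixed point z* = 1 with g'(1) = 2/3, so successive errors shrink
# linearly with a ratio tending to 2/3.
def error_ratio(g, x0, z_star, n=40):
    """Return |x_{n+1} - z*| / |x_n - z*| after n iterations of g."""
    x = x0
    for _ in range(n):
        x = g(x)
    return abs(g(x) - z_star) / abs(x - z_star)

ratio = error_ratio(lambda z: (z**2 + 2) / 3, 0.5 + 0.3j, 1.0)
```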

For higher order convergence we have the following result about necessary and sufficient conditions.

Theorem 2. Let $p\ge 2$ be an integer and let $g(z)$ be an analytic function such that $g(z^*)=z^*$. The IF $g(z)$ is of order $p$ if and only if $g^{(k)}(z^*)=0$ for $k=1,\dots,p-1$, and $g^{(p)}(z^*)\ne 0$. Moreover, the asymptotic constant is given by
$$|C_p(g;z^*)|=\frac{|g^{(p)}(z^*)|}{p!}.$$

Proof. (i) The (local) convergence is given by part (i) of Theorem 1, since $g^{(1)}(z^*)=0$. Moreover, by the Taylor expansion of Section 2 we have
$$g(z)-z^*=(z-z^*)^p E_{g,p}(z),$$
and hence
$$C_p(g;z^*)=\lim_{z\to z^*}\frac{g(z)-z^*}{(z-z^*)^p}=E_{g,p}(z^*)=\frac{g^{(p)}(z^*)}{p!}\ne 0.$$
(ii) If the IF is of order $p$, assume that $g^{(k)}(z^*)=0$ for $k=1,\dots,l-1$ with $l<p$ and $g^{(l)}(z^*)\ne 0$. We have
$$g(z)-z^*=(z-z^*)^l E_{g,l}(z),$$
where
$$E_{g,l}(z^*)=\frac{g^{(l)}(z^*)}{l!}\ne 0.$$
But
$$\frac{g(z)-z^*}{(z-z^*)^p}=\frac{E_{g,l}(z)}{(z-z^*)^{p-l}},$$
and hence the limit $C_p(g;z^*)$ would not exist, a contradiction. So $g^{(k)}(z^*)=0$ for $k=1,\dots,p-1$, and $g^{(p)}(z^*)\ne 0$ because the order is exactly $p$.

It follows that, for an analytic IF $g(z)$ and $p\ge 2$, the limit $C_p(g;z^*)$ exists if and only if $g^{(k)}(z^*)=0$ for $k=1,\dots,p-1$.

As a consequence, for an analytic IF $g(z)$ we can say that (a) $g(z)$ is of order $p$ if and only if $g(z)-z^*=O((z-z^*)^p)$ and $(z-z^*)^p=O(g(z)-z^*)$, and (b) if $z^*$ is a simple root of $f(z)$, then $g(z)$ is of order $p$ if and only if $g(z)-z^*=O(f^p(z))$ and $f^p(z)=O(g(z)-z^*)$.

Schröder's process of the first kind is a systematic and recursive way to construct an IF of arbitrary order $p$ to find a simple zero $z^*$ of $f(z)$. The IF has to fulfill at least the sufficient condition of Theorem 2. Let us present a generalization of this process.

Theorem 3 (see [1]). Let $z^*$ be a simple root of $f(z)$, and let $g_0(z)$ be an analytic function such that $g_0(z^*)=z^*$. Let $g(z)$ be the IF defined by the finite series
$$g(z)=g_0(z)+\sum_{l=1}^{p-1} b_l(z)\,f^l(z),\tag{22}$$
where the analytic coefficients $b_l(z)$ are such that $g^{(k)}(z^*)=0$ for $k=1,\dots,p-1$. Then $g(z)$ is of order $p$, and its asymptotic constant is
$$|C_p(g;z^*)|=\frac{|g^{(p)}(z^*)|}{p!}.$$

For $g_0(z)=z$ in (22), we recover Schröder's process of the first kind of order $p$ [4–7], which is also associated with Chebyshev and Euler [8–10]. The first term $g_0(z)$ could be seen as a preconditioning to decrease the asymptotic constant of the method, but its choice is not obvious.
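For $p=3$ the first-kind process gives Chebyshev's method, $z-u(1+L/2)$ with $u=f/f^{(1)}$ and $L=f f^{(2)}/[f^{(1)}]^2$. A hedged sketch (our own implementation, applied to the cube roots of unity used in Section 5):

```python
# Schröder's first-kind IF of order 3 (Chebyshev's method):
# z - u*(1 + L/2), with u = f/f' and L = f*f''/f'**2.
def chebyshev_step(z, f, df, d2f):
    """One Chebyshev step toward a simple root of f."""
    u = f(z) / df(z)
    L = f(z) * d2f(z) / df(z) ** 2
    return z - u * (1 + L / 2)

# Cube roots of unity: f(z) = z**3 - 1.
z = 1.2 + 0.1j
for _ in range(6):
    z = chebyshev_step(z, lambda w: w**3 - 1, lambda w: 3*w**2, lambda w: 6*w)
```

The cubic convergence is visible in practice: from $1.2+0.1j$ the residual $|z^3-1|$ drops below machine precision in three or four steps.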

4. Newton’s Iteration Function

Considering $p=2$ and $g_0(z)=z$ in (22), we obtain
$$g(z)=z-\frac{f(z)}{f^{(1)}(z)},$$
which is Newton's IF of order $2$ to solve $f(z)=0$. The sufficiency and the necessity of the conditions for high-order convergence of Newton's method are presented in the next result.

Theorem 4. Let $p\ge 2$ and let $F(z)$ be an analytic function such that $F(z^*)=0$ and $F^{(1)}(z^*)\ne 0$. The Newton iteration
$$g(z)=z-\frac{F(z)}{F^{(1)}(z)}$$
is of order $p$ if and only if $F^{(k)}(z^*)=0$ for $k=2,\dots,p-1$, and $F^{(p)}(z^*)\ne 0$. Moreover, the asymptotic constant is
$$|C_p(g;z^*)|=\frac{p-1}{p!}\left|\frac{F^{(p)}(z^*)}{F^{(1)}(z^*)}\right|.$$

Proof. (i) If $F^{(k)}(z^*)=0$ for $k=2,\dots,p-1$, and $F^{(p)}(z^*)\ne 0$, we have
$$F(z)=F^{(1)}(z^*)(z-z^*)+(z-z^*)^p E_{F,p}(z),\qquad E_{F,p}(z^*)=\frac{F^{(p)}(z^*)}{p!}\ne 0.$$
But
$$F^{(1)}(z)=F^{(1)}(z^*)+p(z-z^*)^{p-1}E_{F,p}(z)+(z-z^*)^p E_{F,p}^{(1)}(z).$$
It follows that
$$g(z)-z^*=(z-z^*)-\frac{F(z)}{F^{(1)}(z)}=(p-1)\frac{E_{F,p}(z^*)}{F^{(1)}(z^*)}(z-z^*)^p+o\big((z-z^*)^p\big),$$
so
$$C_p(g;z^*)=\frac{p-1}{p!}\,\frac{F^{(p)}(z^*)}{F^{(1)}(z^*)}\ne 0.$$
(ii) Conversely, if $g(z)$ is of order $p$ we have $g^{(k)}(z^*)=0$ for $k=1,\dots,p-1$, and $g^{(p)}(z^*)\ne 0$. Hence $z^*$ is a root of multiplicity $p$ of $g(z)-z^*$ and we can write
$$g(z)-z^*=(z-z^*)^p E_{g,p}(z),\qquad E_{g,p}(z^*)\ne 0.$$
We also have
$$\frac{F(z)}{F^{(1)}(z)}=(z-z^*)-(g(z)-z^*)=(z-z^*)\big[1-(z-z^*)^{p-1}E_{g,p}(z)\big].$$
But then
$$\frac{F^{(1)}(z)}{F(z)}=\frac{1}{z-z^*}\Big[1+(z-z^*)^{p-1}E_{g,p}(z)+o\big((z-z^*)^{p-1}\big)\Big],$$
so, integrating the logarithmic derivative, we obtain
$$F(z)=F^{(1)}(z^*)(z-z^*)\big[1+(z-z^*)^{p-1}w(z)\big],$$
where $w(z)$ is analytic and $w(z^*)=E_{g,p}(z^*)/(p-1)\ne 0$. It follows that $z^*$ is a root of multiplicity $p$ of $F(z)-F^{(1)}(z^*)(z-z^*)$. Hence $F^{(k)}(z^*)=0$ for $k=2,\dots,p-1$, and $F^{(p)}(z^*)\ne 0$.

We can look for a recursive method to construct a function $F(z)$ which satisfies the conditions of Theorem 4. A consequence will be that the Newton iteration $g(z)=z-F(z)/F^{(1)}(z)$ will be of order $p$. A first method has been presented in [11, 12]. The technique can also be based on Taylor's expansion as indicated in [13].

Theorem 5 (see [11]). Let $f(z)$ be analytic such that $f(z^*)=0$ and $f^{(1)}(z^*)\ne 0$. If $F_p(z)$ is defined by
$$F_2(z)=f(z),\qquad F_{p+1}(z)=\frac{F_p(z)}{\big[F_p^{(1)}(z)\big]^{1/p}}\quad(p\ge 2),\tag{36}$$
then $F_p(z^*)=0$, $F_p^{(1)}(z^*)\ne 0$, and $F_p^{(k)}(z^*)=0$ for $k=2,\dots,p-1$. It follows that the Newton iteration $z-F_p(z)/F_p^{(1)}(z)$ is of order at least $p$.

Let us observe that in this theorem the method seems to depend on the choice of a branch for the $p$th root function. In fact, the Newton iterative function does not depend on this choice, because for $F_{p+1}(z)=F_p(z)/[F_p^{(1)}(z)]^{1/p}$ we have
$$z-\frac{F_{p+1}(z)}{F_{p+1}^{(1)}(z)}=z-\frac{p\,F_p(z)\,F_p^{(1)}(z)}{p\big[F_p^{(1)}(z)\big]^2-F_p(z)\,F_p^{(2)}(z)},$$
in which the $p$th root no longer appears. In fact, the next theorem shows that a branch for the $p$th root function is not necessary.
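As a quick numerical check (our own, not from the paper), one Newton step on $F_3(z)=f(z)/\sqrt{f^{(1)}(z)}$ agrees with one Halley step on $f$, whichever branch of the square root is used, as long as it is used consistently:

```python
import cmath

def halley_step(z, f, df, d2f):
    """Halley's method: z - 2 f f' / (2 f'^2 - f f'')."""
    return z - 2*f(z)*df(z) / (2*df(z)**2 - f(z)*d2f(z))

def newton_on_F3_step(z, f, df, d2f):
    """Newton step on F_3 = f/sqrt(f'); the sqrt branch cancels out."""
    s = cmath.sqrt(df(z))                 # principal branch of sqrt(f')
    F = f(z) / s
    dF = s - f(z) * d2f(z) / (2 * s**3)   # derivative of f/sqrt(f')
    return z - F / dF

f, df, d2f = lambda z: z**3 - 1, lambda z: 3*z**2, lambda z: 6*z
z0 = 1.2 + 0.5j
```

Both step functions return the same complex number at `z0` up to rounding, illustrating the branch independence stated above.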

Theorem 6 (see [12]). Let $F_p(z)$ be given by (36); the corresponding Newton iteration $z-F_p(z)/F_p^{(1)}(z)$ can also be written recursively in a form that involves only $f(z)$ and its derivatives, with no $p$th root.

Unfortunately, no general formula for $F_p^{(p)}(z^*)$, and hence for the asymptotic constant, exists. However, the asymptotic constant can be numerically estimated with (7).

A second method to construct a function $F(z)$ which satisfies the conditions of Theorem 4 is given in the next theorem.

Theorem 7 (see [1]). Let $z^*$ be a simple root of $f(z)$. Let $F(z)$ be defined by (42), where $G(z)$ and $H(z)$ are two analytic functions chosen such that $F^{(k)}(z^*)=0$ for $k=2,\dots,p-1$. Then the Newton iteration $z-F(z)/F^{(1)}(z)$ is of order $p$.

Let us observe that if we set $F(z)$ using the IF $g(z)$ given by (22), then $F(z)$ verifies the assumptions of Theorem 7.

Remark 8. For a given pair $G(z)$ and $H(z)$ in Theorem 7, the linearity of expression (42) with respect to $G(z)$ and $H(z)$ allows us to decompose the computation for the pair $(G,H)$ into two computations, one for the pair $(G,0)$ and the other for the pair $(0,H)$, and then add the two results hence obtained.

5. Examples

Let us consider the problem of finding the cube roots of unity, $z^3=1$, for which we have $f(z)=z^3-1$. Hence we would like to solve $f(z)=0$ for $z\in\mathbb{C}$. As examples of the preceding results, we present methods of several orders obtained from Theorems 3, 5, and 7. For each method, we also present the basins of attraction of the roots.

The drawing process for the basins of attraction follows Varona [14]. For the upcoming figures, on a square region of the complex plane we assign a color to each basin of attraction of each root. That is, we color a point depending on which root we approach, within a prescribed tolerance, after at most a fixed number of iterations (here 25). If after 25 iterations we do not lie within the tolerance of any root, we assign to the point a very dark shade of purple. The more dark purple a figure contains, the more starting points have failed to achieve the required precision within the predetermined number of iterations.
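A minimal sketch (our own) of this drawing procedure: the 25-iteration cap follows the text, while the grid extent, the resolution, and the $10^{-4}$ tolerance are our own choices.

```python
import cmath

# Classify each grid point by the root its orbit reaches (or -1 if
# none is reached within max_iter steps): this is the color map used
# to draw the basins of attraction.
def basin_grid(step, roots, n=200, extent=2.0, max_iter=25, tol=1e-4):
    """Root index reached from each grid point, or -1 on failure."""
    grid = []
    for i in range(n):
        row = []
        for j in range(n):
            z = complex(-extent + 2 * extent * j / (n - 1),
                        -extent + 2 * extent * i / (n - 1))
            color = -1                      # dark purple: no convergence
            for _ in range(max_iter):
                try:
                    z = step(z)
                except ZeroDivisionError:   # e.g. Newton hits f'(z) = 0
                    break
                hits = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hits:
                    color = hits[0]
                    break
            row.append(color)
        grid.append(row)
    return grid

# Newton's IF for f(z) = z**3 - 1 and the three cube roots of unity.
newton = lambda z: (2 * z + 1 / z**2) / 3
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
```

The resulting integer grid can then be rendered with any image library, one color per root index.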

5.1. Examples for Theorem 3

We start with iterative methods of order 2. From Theorem 3, we first need $g^{(1)}(z^*)=0$. We observe that the simplest choice of coefficient function is the one leading to Newton's method and to the Chebyshev family of iterative methods; it has the advantage that its derivatives of order higher than 2 vanish, thus simplifying further computations. It is, however, generally possible to consider different choices of functions, although not all of them are numerically convenient, as we will illustrate here. In the examples that follow we consider several such functions.

In Table 1, we have considered functions of this kind and developed explicit expressions for the resulting IFs. Figure 1 presents the basins of attraction for these methods. We observe that some of them have many purple points.

Now let us consider methods of order 3 obtained from Theorem 3, for which we obtain an explicit IF and its asymptotic constant for each choice of parameter. Examples of basins of attraction are given in Figure 2 for several parameter values; among these, one value yields the smallest asymptotic constant.

5.2. Examples for Theorem 5

Gerlach's process described in Theorems 5 and 6 leads to Newton's method for $p=2$ and to Halley's method for $p=3$. For our problem $f(z)=z^3-1$ we have
$$z-\frac{f(z)}{f^{(1)}(z)}=\frac{2z^3+1}{3z^2},\qquad z-\frac{2f(z)f^{(1)}(z)}{2\big[f^{(1)}(z)\big]^2-f(z)f^{(2)}(z)}=\frac{z(z^3+2)}{2z^3+1}.$$
These are well-known standard methods. For comparison, their basins of attraction are given in Figure 3.
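A small experiment of our own comparing the two: counting the iterations needed from the same starting point until $|f(z)|<10^{-12}$, Halley (order 3) should need no more steps than Newton (order 2).

```python
# Count iterations of a step function until |z**3 - 1| < tol,
# for the specific problem f(z) = z**3 - 1.
def iterations_to_converge(step, z0, tol=1e-12, max_iter=50):
    """Number of steps until |z**3 - 1| < tol (or max_iter)."""
    z = z0
    for k in range(1, max_iter + 1):
        z = step(z)
        if abs(z**3 - 1) < tol:
            return k
    return max_iter

newton = lambda z: (2 * z + 1 / z**2) / 3           # Newton for z**3 - 1
halley = lambda z: z * (z**3 + 2) / (2 * z**3 + 1)  # Halley for z**3 - 1

n_k = iterations_to_converge(newton, 1.5)
h_k = iterations_to_converge(halley, 1.5)
```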

5.3. Examples for Theorem 7

To illustrate Theorem 7, we fix the functions $G(z)$ and $H(z)$ and consider methods of two different orders to solve $f(z)=z^3-1=0$. Table 2 presents the quantities needed to build these methods for this example.

We observe that the asymptotic constant of one of the lower-order methods is zero; this means that this method is of an order of convergence higher than its nominal order, and in fact it corresponds to Halley's method, which is of order 3. We also observe that two of the higher-order methods both correspond to Halley's method for our specific problem. Examples of basins of attraction are given in Figure 4 for the lower-order methods and in Figure 5 for the higher-order methods.

6. Concluding Remarks

In this paper we have presented fixed point and Newton's methods to compute a simple root of a nonlinear analytic function in the complex plane. We have pointed out that the usual sufficient conditions for convergence are also necessary. Based on these conditions for high-order convergence, we have revisited and extended Schröder's processes of the first and second kind. Numerical examples are given to illustrate the basins of attraction when we compute the cube roots of unity. It might be interesting to study the relationship, if any, between the asymptotic constant and the basin of attraction for such methods.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been financially supported by an individual discovery grant from NSERC (Natural Sciences and Engineering Research Council of Canada) and a grant from ISM (Institut des Sciences Mathématiques).