Journal of Complex Analysis

Volume 2018, Article ID 7289092, 11 pages

https://doi.org/10.1155/2018/7289092

## Fixed Point and Newton’s Methods in the Complex Plane

Correspondence should be addressed to Calvin Gnang; calvin.gnang@usherbrooke.ca

Received 29 August 2017; Accepted 12 December 2017; Published 29 January 2018

Academic Editor: Daniel Girela

Copyright © 2018 François Dubeau and Calvin Gnang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We revisit the necessary and sufficient conditions for linear and high-order convergence of fixed point and Newton’s methods in the complex plane. Schröder’s processes of the first and second kind are revisited and extended. Examples and numerical experiments are included.

#### 1. Introduction

In this paper, we revisit fixed point and Newton’s methods for finding a simple solution of a nonlinear equation in the complex plane. This paper is an adapted version of [1] for complex-valued functions; we present only the proofs of theorems that must be modified compared to the real case. We present necessary and sufficient conditions for the convergence of fixed point and Newton’s methods. Based on these conditions, we show how to obtain direct processes that recursively increase the order of convergence. For the fixed point method, we present a generalization of Schröder’s method of the first kind. Two methods are also presented to increase the order of convergence of Newton’s method; one of them coincides with Schröder’s process of the second kind, which appears in several forms in the literature. The link between the two Schröder processes can be found in [2]. As in the real case, we can combine methods to obtain, for example, the super-Halley process and other possible higher-order generalizations of it. We refer to [1] for details on this subject.

The plan of the paper is as follows. In Section 2, we recall Taylor’s expansions for analytic functions and the error term for truncated expansions. In Section 3 we consider the fixed point method and its necessary and sufficient conditions for convergence. These results lead to a generalization of Schröder’s process of the first kind. Section 4 is devoted to Newton’s method. Based on the necessary and sufficient conditions, we propose two ways to increase the order of convergence of Newton’s method. Examples and numerical experiments are included in Section 5.

#### 2. Analytic Function

Since we are working with complex numbers, we will be dealing with analytic functions. Supposing $f$ is an analytic function and $z_0$ is in its domain, we can write
$$f(z) = \sum_{j=0}^{+\infty}\frac{f^{(j)}(z_0)}{j!}(z-z_0)^j$$
for any $z$ in a disk around $z_0$. Then, for $k\ge 0$ we have
$$f(z) = \sum_{j=0}^{k}\frac{f^{(j)}(z_0)}{j!}(z-z_0)^j + (z-z_0)^{k+1}E_{k+1}(z;z_0),$$
where $E_{k+1}(z;z_0)$ is the analytic function
$$E_{k+1}(z;z_0) = \sum_{j=0}^{+\infty}\frac{f^{(k+1+j)}(z_0)}{(k+1+j)!}(z-z_0)^{j}.$$
Moreover, the series for $f$ and $E_{k+1}$ have the same radius of convergence for any $k$, and
$$E_{k+1}(z_0;z_0) = \frac{f^{(k+1)}(z_0)}{(k+1)!}$$
for every $k\ge 0$.
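As a quick numerical companion to the truncated expansion, the following sketch (the function $\exp$, the expansion point, and the truncation index are our illustrative choices, not the paper’s) checks that the remainder shrinks like $|z-z_0|^{k+1}$:

```python
# Truncated Taylor expansion of exp about z0; the remainder after the
# terms up to degree k should shrink like |z - z0|**(k + 1).
import cmath
import math

def taylor_exp(z, z0, k):
    """Sum of the Taylor terms of exp about z0 up to degree k."""
    return sum(cmath.exp(z0) * (z - z0)**j / math.factorial(j)
               for j in range(k + 1))

z0, k = 0.3 + 0.2j, 4
errs = {}
for h in (0.1, 0.01):
    z = z0 + h
    errs[h] = abs(cmath.exp(z) - taylor_exp(z, z0, k))
# Dividing h by 10 should divide the error by roughly 10**(k + 1).
```

Here the error for $h=0.01$ is smaller than the error for $h=0.1$ by roughly a factor $10^{5}$, as predicted by the $(z-z_0)^{k+1}$ remainder term.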

#### 3. Fixed Point Method

A fixed point method uses an iteration function (IF) $g$, which is an analytic function mapping its domain of definition into itself. Using an IF $g$ and an initial value $z_0$, we are interested in the convergence of the sequence $\{z_{n+1}=g(z_n)\}_{n=0}^{+\infty}$. It is well known that if the sequence converges, it converges to a fixed point $z^*$ of $g$, that is, a point such that $g(z^*)=z^*$.
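As a minimal sketch of such an iteration (the IF, tolerance, and starting point below are our illustrative choices, not the paper’s):

```python
# Minimal fixed point iteration in the complex plane (illustrative sketch).

def fixed_point(g, z0, tol=1e-12, max_iter=100):
    """Iterate z_{n+1} = g(z_n) until successive iterates are within tol."""
    z = z0
    for _ in range(max_iter):
        z_next = g(z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Example IF: g(z) = (2*z + 1/z**2)/3 is Newton's IF for f(z) = z**3 - 1,
# so its fixed points are the cube roots of unity.
root = fixed_point(lambda z: (2*z + 1/z**2) / 3, 1.0 + 1.0j)
```

Starting from $1+i$, this particular iteration converges to the root $1$.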

Let $g$ be an IF, $p$ be a positive integer, and $\{z_{n+1}=g(z_n)\}_{n=0}^{+\infty}$ be a sequence converging to $z^*$ and such that the following limit exists:
$$\Gamma_p = \lim_{n\to\infty}\frac{z_{n+1}-z^*}{(z_n-z^*)^p}.$$
Let us observe that for $0<q<p$ we have
$$\lim_{n\to\infty}\frac{z_{n+1}-z^*}{(z_n-z^*)^{q}} = \lim_{n\to\infty}\frac{z_{n+1}-z^*}{(z_n-z^*)^{p}}\,(z_n-z^*)^{p-q} = 0.$$
We say that the convergence of the sequence to $z^*$ is of (integer) order $p$ if and only if $\Gamma_p\ne 0$, and $\Gamma_p$ is called the asymptotic constant. We also say that $g$ is of order $p$. If the limit $\Gamma_p$ exists but is zero, we can say that $g$ is of order at least $p$.

From a numerical point of view, since $z^*$ is not known, it is useful to define the ratio
$$\widehat{\Gamma}_{p,n} = \frac{z_{n+1}-z_n}{\left(z_n-z_{n-1}\right)^p}. \tag{7}$$

Following [3], it can be shown that
$$\lim_{n\to\infty}\widehat{\Gamma}_{p,n} = (-1)^{p-1}\,\Gamma_p,$$
so the modulus of $\widehat{\Gamma}_{p,n}$ provides a computable estimate of $|\Gamma_p|$.
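A hedged numerical companion to this idea: the order $p$ can be estimated from successive differences of the iterates, with no knowledge of $z^*$ (here using Newton’s IF for $z^3-1$, an illustrative choice):

```python
# Estimate the order of convergence from successive iterates, using
# logarithms of successive differences (the root itself is not needed).
import math

def newton_step(z):
    # Newton's IF for f(z) = z**3 - 1.
    return z - (z**3 - 1) / (3 * z**2)

zs = [0.8 + 0.3j]
for _ in range(6):
    zs.append(newton_step(zs[-1]))

d = [abs(zs[k + 1] - zs[k]) for k in range(len(zs) - 1)]
# Consecutive differences shrink like d[n] ~ C * d[n-1]**p,
# so the ratio of logarithms estimates p.
p_est = math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])
```

For Newton’s method the estimate comes out close to $2$, consistent with quadratic convergence.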

We say that $z^*$ is a root of $f$ of multiplicity $m$ if and only if $f^{(j)}(z^*)=0$ for $j=0,\dots,m-1$, and $f^{(m)}(z^*)\ne 0$. Moreover, $z^*$ is a root of $f$ of multiplicity $m$ if and only if there exists an analytic function $h$ such that $f(z)=(z-z^*)^m h(z)$ and $h(z^*)\ne 0$.

We will use the big $O$ notation, $f(z)=O\big((z-z^*)^m\big)$, and the small $o$ notation, $f(z)=o\big((z-z^*)^m\big)$, around $z^*$: respectively, when
$$\limsup_{z\to z^*}\left|\frac{f(z)}{(z-z^*)^m}\right| < +\infty,$$
and when
$$\lim_{z\to z^*}\frac{f(z)}{(z-z^*)^m} = 0.$$

For a root $z^*$ of multiplicity $m$ of $f$, it is equivalent to write $f(z)=O\big((z-z^*)^m\big)$ or $f(z)=o\big((z-z^*)^{m-1}\big)$. Observe also that if $z^*$ is a simple root of $f$, then $z^*$ is a root of multiplicity $m$ of $f^m(z)$. Hence $O\big(f^m(z)\big)$ is equivalent to $O\big((z-z^*)^m\big)$.

The first result concerns the necessary and sufficient conditions for achieving linear convergence.

Theorem 1. *Let $g$ be an IF, and let $g^{(1)}$ stand for its first derivative. Observe that although the first derivative is usually denoted by $g'$, one will write $g^{(1)}$ to maintain uniformity throughout the text.* (i) *If $|g^{(1)}(z^*)|<1$, then there exists a neighborhood of $z^*$ such that for any $z_0$ in that neighborhood the sequence $\{z_{n+1}=g(z_n)\}_{n=0}^{+\infty}$ converges to $z^*$.* (ii) *If there exists a neighborhood of $z^*$ such that for any $z_0$ in that neighborhood the sequence converges to $z^*$, and $z_n\ne z^*$ for all $n$, then $|g^{(1)}(z^*)|\le 1$.* (iii) *For any such sequence which converges to $z^*$, the limit $\Gamma_1$ exists and $\Gamma_1=g^{(1)}(z^*)$.*

*Proof.* (i) By continuity, there is a closed disk $D(z^*;\rho)=\{z:|z-z^*|\le\rho\}$ such that
$$K=\max_{z\in D(z^*;\rho)}\big|g^{(1)}(z)\big| < 1.$$
Then if $z_n\in D(z^*;\rho)$, we have
$$|z_{n+1}-z^*| = |g(z_n)-g(z^*)| = \left|\int_0^1 g^{(1)}\big(z^*+t(z_n-z^*)\big)\,dt\right|\,|z_n-z^*| \le K|z_n-z^*|,$$
and $z_{n+1}\in D(z^*;\rho)$. Moreover
$$|z_n-z^*| \le K^n|z_0-z^*|,$$
and the sequence converges to $z^*$ because $0\le K<1$.

(ii) If $|g^{(1)}(z^*)|>1$, there exists a disk $D(z^*;\rho)$, with $\rho>0$, such that
$$L = \big|g^{(1)}(z^*)\big| - \max_{z\in D(z^*;\rho)}\big|g^{(1)}(z)-g^{(1)}(z^*)\big| > 1.$$
Let us suppose that the sequence $\{z_n\}_{n=0}^{+\infty}$ is such that $z_n\ne z^*$ for all $n$. If $z_n\in D(z^*;\rho)$ and $z_{n+1}=g(z_n)$, then we have
$$|z_{n+1}-z^*| = |g(z_n)-g(z^*)| \ge L|z_n-z^*|.$$
Let $n_0$ be any index, and suppose $z_{n_0},\dots,z_n$ are in $D(z^*;\rho)$. Because
$$|z_n-z^*| \ge L^{\,n-n_0}|z_{n_0}-z^*|,$$
eventually $|z_n-z^*|>\rho$ and $z_n\notin D(z^*;\rho)$. Then the infinite sequence cannot converge to $z^*$.

(iii) For any sequence $\{z_n\}_{n=0}^{+\infty}$ which converges to $z^*$ we have
$$\Gamma_1 = \lim_{n\to\infty}\frac{z_{n+1}-z^*}{z_n-z^*} = \lim_{n\to\infty}\frac{g(z_n)-g(z^*)}{z_n-z^*} = g^{(1)}(z^*).$$

For higher-order convergence we have the following result about necessary and sufficient conditions.

Theorem 2. *Let $p\ge 2$ be an integer and let $g$ be an analytic function such that $g(z^*)=z^*$. The IF $g$ is of order $p$ if and only if $g^{(j)}(z^*)=0$ for $j=1,\dots,p-1$, and $g^{(p)}(z^*)\ne 0$. Moreover, the asymptotic constant is given by
$$\Gamma_p = \frac{g^{(p)}(z^*)}{p!}.$$*

*Proof.* (i) Since $g^{(1)}(z^*)=0$, the (local) convergence is given by part (i) of Theorem 1. Moreover, by Taylor’s expansion around $z^*$, we have
$$z_{n+1}-z^* = g(z_n)-g(z^*) = \frac{g^{(p)}(z^*)}{p!}(z_n-z^*)^p + o\big((z_n-z^*)^p\big),$$
and hence
$$\Gamma_p = \lim_{n\to\infty}\frac{z_{n+1}-z^*}{(z_n-z^*)^p} = \frac{g^{(p)}(z^*)}{p!}\ne 0.$$
(ii) If the IF $g$ is of order $p$, assume that $g^{(j)}(z^*)=0$ for $j=1,\dots,q-1$ with $q<p$ and $g^{(q)}(z^*)\ne 0$. We have
$$\frac{z_{n+1}-z^*}{(z_n-z^*)^p} = \left[\frac{g^{(q)}(z^*)}{q!}+o(1)\right](z_n-z^*)^{q-p},$$
where $q-p<0$. But $z_n\to z^*$, and hence the limit $\Gamma_p$ cannot exist, a contradiction. So $g^{(j)}(z^*)=0$ for $j=1,\dots,p-1$, and then, by part (i), $g^{(p)}(z^*)=p!\,\Gamma_p\ne 0$.

It follows that, for an analytic IF $g$ and $p\ge 2$, the limit $\Gamma_p$ exists if and only if $g^{(j)}(z^*)=0$ for $j=1,\dots,p-1$.

As a consequence, for an analytic IF $g$ we can say that (a) $g$ is of order at least $p$ if and only if $g(z)-z^*=O\big((z-z^*)^p\big)$, or, equivalently, if $g(z^*)=z^*$ and $g(z)-z^*=o\big((z-z^*)^{p-1}\big)$, and (b) if $z^*$ is a simple root of $f$, then $g$ is of order at least $p$ if and only if $g(z)-z^*=O\big(f^p(z)\big)$, or, equivalently, if $g(z^*)=z^*$ and $g(z)-z^*=o\big(f^{p-1}(z)\big)$.

Schröder’s process of the first kind is a systematic and recursive way to construct an IF of arbitrary order $p$ to find a simple zero $z^*$ of $f$. The IF has to fulfill at least the sufficient condition of Theorem 2. Let us present a generalization of this process.

Theorem 3 (see [1]). *Let $z^*$ be a simple root of $f$, and let $w$ be an analytic function. Let $g_p$ be the IF defined by the finite series
$$g_p(z) = z + w(z)f^p(z) + \sum_{j=1}^{p-1}c_j(z)\,f^j(z), \tag{22}$$
where the analytic functions $c_j$ are such that $g_p^{(j)}(z^*)=0$ for $j=1,\dots,p-1$. Then $g_p$ is of order $p$, and its asymptotic constant is
$$\Gamma_p = \frac{g_p^{(p)}(z^*)}{p!}.$$*

For a vanishing first term in (22), we recover Schröder’s process of the first kind of order $p$ [4–7], which is also associated with the names of Chebyshev and Euler [8–10]. The first term could be seen as a preconditioning to decrease the asymptotic constant of the method, but its choice is not obvious.

#### 4. Newton’s Iteration Function

Considering $p=2$ and a vanishing first term in (22), we obtain
$$g_2(z) = z - \frac{f(z)}{f^{(1)}(z)},$$
which is Newton’s IF, of order 2, to solve $f(z)=0$. The sufficiency and the necessity of the conditions for high-order convergence of Newton’s method are presented in the next result.

Theorem 4. *Let $p\ge 2$ and let $F$ be an analytic function such that $F(z^*)=0$ and $F^{(1)}(z^*)\ne 0$. The Newton iteration
$$g(z) = z - \frac{F(z)}{F^{(1)}(z)}$$
is of order $p$ if and only if $F^{(j)}(z^*)=0$ for $j=2,\dots,p-1$, and $F^{(p)}(z^*)\ne 0$. Moreover, the asymptotic constant is
$$\Gamma_p = \frac{(p-1)}{p!}\,\frac{F^{(p)}(z^*)}{F^{(1)}(z^*)}.$$*

*Proof.* (i) If $F^{(j)}(z^*)=0$ for $j=2,\dots,p-1$, and $F^{(p)}(z^*)\ne 0$, we have
$$F(z) = F^{(1)}(z^*)(z-z^*) + \frac{F^{(p)}(z^*)}{p!}(z-z^*)^p + o\big((z-z^*)^p\big).$$
But
$$F^{(1)}(z) = F^{(1)}(z^*) + \frac{F^{(p)}(z^*)}{(p-1)!}(z-z^*)^{p-1} + o\big((z-z^*)^{p-1}\big).$$
It follows that
$$\frac{F(z)}{F^{(1)}(z)} = (z-z^*)\left[1 - \frac{(p-1)}{p!}\,\frac{F^{(p)}(z^*)}{F^{(1)}(z^*)}(z-z^*)^{p-1} + o\big((z-z^*)^{p-1}\big)\right],$$
so
$$g(z)-z^* = (z-z^*) - \frac{F(z)}{F^{(1)}(z)} = \frac{(p-1)}{p!}\,\frac{F^{(p)}(z^*)}{F^{(1)}(z^*)}(z-z^*)^p + o\big((z-z^*)^p\big).$$
(ii) Conversely, if $g$ is of order $p$ we have $g^{(j)}(z^*)=0$ for $j=1,\dots,p-1$, and $g^{(p)}(z^*)\ne 0$. Hence $z^*$ is a root of multiplicity $p$ of $g(z)-z^*$ and we can write
$$g(z)-z^* = (z-z^*)^p h(z)$$
with $h$ analytic and $h(z^*)\ne 0$. We also have
$$\frac{F(z)}{F^{(1)}(z)} = (z-z^*) - (z-z^*)^p h(z) = (z-z^*)\left[1-(z-z^*)^{p-1}h(z)\right].$$
But
$$\frac{F^{(1)}(z)}{F(z)} = \frac{1}{z-z^*} + (z-z^*)^{p-2}\,\widetilde{h}(z),$$
where $\widetilde{h}$ is analytic with $\widetilde{h}(z^*)=h(z^*)\ne 0$, and hence, integrating this logarithmic derivative,
$$F(z) = (z-z^*)\left[F^{(1)}(z^*) + O\big((z-z^*)^{p-1}\big)\right].$$
It follows that $z^*$ is a root of multiplicity $p$ of $F(z)-F^{(1)}(z^*)(z-z^*)$. Hence $F^{(j)}(z^*)=0$ for $j=2,\dots,p-1$, and $F^{(p)}(z^*)\ne 0$.

We can look for a recursive method to construct a function $F$ which will satisfy the conditions of Theorem 4. A consequence will be that the Newton iteration applied to $F$ will be of order $p$. A first method has been presented in [11, 12]. The technique can also be based on Taylor’s expansion as indicated in [13].

Theorem 5 (see [11]). *Let $f$ be analytic such that $f(z^*)=0$ and $f^{(1)}(z^*)\ne 0$. If $F_p$ is defined by
$$F_2(z)=f(z), \qquad F_{p+1}(z)=\frac{F_p(z)}{\big(F_p^{(1)}(z)\big)^{1/p}} \quad (p\ge 2), \tag{36}$$
then $F_p(z^*)=0$, $F_p^{(1)}(z^*)\ne 0$, and $F_p^{(j)}(z^*)=0$ for $j=2,\dots,p-1$. It follows that the Newton iteration applied to $F_p$ is of order at least $p$.*

Let us observe that in this theorem it seems that the method depends on a choice of a branch for the $p$th root function. In fact the Newton iterative function does not depend on this choice because we have
$$\frac{F_{p+1}(z)}{F_{p+1}^{(1)}(z)} = \frac{F_p(z)}{F_p^{(1)}(z)-\dfrac{1}{p}\,\dfrac{F_p(z)F_p^{(2)}(z)}{F_p^{(1)}(z)}},$$
in which the factor $\big(F_p^{(1)}(z)\big)^{-1/p}$ cancels. In fact the next theorem shows that a branch for the $p$th root function is not necessary.

Theorem 6 (see [12]). *Let $F_{p+1}$ be given by (36); one can also write
$$z-\frac{F_{p+1}(z)}{F_{p+1}^{(1)}(z)} = z-N_{p+1}(z),$$
where
$$N_2(z)=\frac{f(z)}{f^{(1)}(z)}, \qquad N_{p+1}(z)=\frac{N_p(z)}{1-\dfrac{1}{p}\,N_p(z)\,\dfrac{F_p^{(2)}(z)}{F_p^{(1)}(z)}},$$
and the logarithmic derivative $F_p^{(2)}/F_p^{(1)}$ is a rational expression in $f$ and its derivatives, so that no branch of the $p$th root is needed.*

Unfortunately, there exists no general closed-form expression for the asymptotic constant of this method, even though the asymptotic constant exists. However, it can be numerically estimated with (7).
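As an illustrative special case of this first acceleration method: Newton’s method applied to $f/\sqrt{f^{(1)}}$ is the classical Halley method, of third order. The function $z^3-1$ and the starting point below are our choices, not the paper’s:

```python
# Newton's method applied to F(z) = f(z)/sqrt(f'(z)) is Halley's method,
# a classical third-order instance of this acceleration idea.

def f(z):   return z**3 - 1
def fp(z):  return 3 * z**2
def fpp(z): return 6 * z

def halley_step(z):
    # Newton step for F = f/sqrt(f'), simplified to the branch-free form
    # z - f*f' / (f'**2 - f*f''/2); the square root has cancelled out.
    return z - f(z) * fp(z) / (fp(z)**2 - 0.5 * f(z) * fpp(z))

z = 0.8 + 0.3j
for _ in range(5):
    z = halley_step(z)
```

Note that the square root disappears from the final formula, illustrating the branch-independence discussed above.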

A second method to construct a function $F$ which will satisfy the conditions of Theorem 4 is given in the next theorem.

Theorem 7 (see [1]). *Let $z^*$ be a simple root of $f$. Let $F$ be defined by
$$F(z) = u(z)f(z)+v(z)f^2(z), \tag{42}$$
where $u$ and $v$ are two analytic functions such that $u(z^*)\ne 0$ and $F^{(j)}(z^*)=0$ for $j=2,\dots,p-1$. Then the Newton iteration applied to $F$ is of order $p$, with
$$\Gamma_p = \frac{(p-1)}{p!}\,\frac{F^{(p)}(z^*)}{u(z^*)f^{(1)}(z^*)}.$$*

Let us observe that if we set $F(z)=z-g_p(z)$ with $g_p$ given by (22), then $F$ verifies the assumptions of Theorem 7.

*Remark 8.* For a given pair of functions $u$ and $v$ in Theorem 7, the linearity of expression (42) with respect to $u$ and $v$ allows us to decompose the computation of $F$ into two computations, one for the pair $(u,0)$ and the other for the pair $(0,v)$, and then add the two $F$’s hence obtained.

#### 5. Examples

Let us consider the problem of finding the $\lambda$th roots of unity, $z^\lambda=1$, for which we have the solutions $z_k=e^{2\pi i k/\lambda}$, $k=0,\dots,\lambda-1$. Hence we would like to solve
$$f(z)=0 \quad\text{for}\quad f(z)=z^\lambda-1.$$
As examples of the preceding results, we present methods of various orders obtained from Theorems 3, 5, and 7. For each method, we also present the basins of attraction of the roots.

The drawing process for the basins of attraction follows Varona [14]. Typically for the upcoming figures, in squares of the complex plane, we assign a color to each attraction basin of each root. That is, we color a point depending on whether, within a fixed number of iterations (here 25), we lie within a fixed precision of a given root. If after 25 iterations we do not lie within this precision of any given root, we assign the point a very dark shade of purple. The more dark shades of purple there are, the more points have failed to achieve the required precision within the predetermined number of iterations.
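A sketch of this drawing process for Newton’s method on $z^3-1$, computing the classification that an image would then color (the grid size, square, and tolerance below are our illustrative choices):

```python
# Varona-style basin classification for Newton's method on f(z) = z**3 - 1
# over a square grid; plotting/coloring is left to any graphics library.
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
MAX_ITER, TOL = 25, 1e-3

def classify(z):
    """Return the index of the root whose basin contains z, or -1 (purple)."""
    for _ in range(MAX_ITER):
        try:
            z = z - (z**3 - 1) / (3 * z**2)  # Newton step
        except ZeroDivisionError:
            return -1
        for k, r in enumerate(ROOTS):
            if abs(z - r) < TOL:
                return k
    return -1

# Classify a small grid on the square [-2, 2] x [-2, 2].
N = 41
grid = [[classify(complex(-2 + 4 * i / (N - 1), -2 + 4 * j / (N - 1)))
         for i in range(N)] for j in range(N)]
```

Points classified as `-1` (for example the origin, where the Newton step is undefined) correspond to the dark purple shade described above.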

##### 5.1. Examples for Theorem 3

We start with iterative methods of low order. From Theorem 3, we first need the auxiliary functions appearing in (22). We observe that the simplest choice is a polynomial of degree 2; such a choice has the advantage that derivatives of order higher than 2 of this function will be 0, thus simplifying further computations. This is in fact the choice of function which leads to Newton’s method and to the Chebyshev family of iterative methods. We observe, however, that it is generally possible to consider different choices of functions, although not all of them are numerically convenient, as we will illustrate here. In the examples that follow we will look at several such functions.
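For concreteness, here is a sketch of the Chebyshev iteration (third order), a member of the family mentioned above, applied to $f(z)=z^3-1$; the starting point is our illustrative choice:

```python
# Chebyshev's method (third order) for f(z) = z**3 - 1:
# z_{n+1} = z - u * (1 + u*f''/(2*f')), with u = f/f'.

def chebyshev_step(z):
    f, fp, fpp = z**3 - 1, 3 * z**2, 6 * z
    u = f / fp
    return z - u * (1 + 0.5 * u * (fpp / fp))

z = 1.2 + 0.4j
for _ in range(6):
    z = chebyshev_step(z)
```

Starting in the basin of the root $1$, a handful of third-order steps already reaches machine precision.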

In Table 1, we have considered several functions of this kind and developed explicit expressions for the corresponding IF. Figure 1 presents different graphs of the basins of attraction for these methods. We observe that some of them have a lot of purple points.