Journal of Complex Analysis
Volume 2018, Article ID 7289092, 11 pages
https://doi.org/10.1155/2018/7289092
Research Article

Fixed Point and Newton’s Methods in the Complex Plane

François Dubeau and Calvin Gnang

Département de Mathématiques, Faculté des Sciences, Université de Sherbrooke, 2500 Boul. de l’Université, Sherbrooke, QC, Canada J1K 2R1

Correspondence should be addressed to Calvin Gnang; calvin.gnang@usherbrooke.ca

Received 29 August 2017; Accepted 12 December 2017; Published 29 January 2018

Academic Editor: Daniel Girela

Copyright © 2018 François Dubeau and Calvin Gnang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We revisit the necessary and sufficient conditions for linear and high-order convergence of fixed point and Newton’s methods in the complex plane. Schröder’s processes of the first and second kind are revisited and extended. Examples and numerical experiments are included.

1. Introduction

In this paper, we revisit fixed point and Newton’s methods to find a simple solution of a nonlinear equation in the complex plane. This paper is an adapted version of [1] for complex-valued functions. We present proofs only for the theorems that must be modified compared to the real case. We present necessary and sufficient conditions for the convergence of fixed point and Newton’s methods. Based on these conditions, we show how to obtain direct processes to recursively increase the order of convergence. For the fixed point method, we present a generalization of Schröder’s process of the first kind. Two methods are also presented to increase the order of convergence of Newton’s method. One of them coincides with Schröder’s process of the second kind, which has several forms in the literature. The link between the two Schröder processes can be found in [2]. As in the real case, we can combine methods to obtain, for example, the super-Halley process and other possible higher-order generalizations of this process. We refer to [1] for details about this subject.

The plan of the paper is as follows. In Section 2, we recall Taylor’s expansion for analytic functions and the error term for truncated expansions. In Section 3, we consider the fixed point method and its necessary and sufficient conditions for convergence. These results lead to a generalization of Schröder’s process of the first kind. Section 4 is devoted to Newton’s method. Based on the necessary and sufficient conditions, we propose two ways to increase the order of convergence of Newton’s method. Examples and numerical experiments are included in Section 5.

2. Analytic Function

Since we are working with complex numbers, we will be dealing with analytic functions. Supposing $f$ is an analytic function and $a$ is in its domain, we can write
$$f(z) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(z-a)^{k}$$
for any $z$ in the disk of convergence centered at $a$. Then, for any integer $n \geq 0$ we have
$$f(z) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(z-a)^{k} + (z-a)^{n+1} E_{n+1}(f;a)(z),$$
where $E_{n+1}(f;a)$ is the analytic function
$$E_{n+1}(f;a)(z) = \sum_{k=n+1}^{\infty} \frac{f^{(k)}(a)}{k!}(z-a)^{k-n-1}.$$
Moreover, the series for $f$ and $E_{n+1}(f;a)$ have the same radius of convergence for any $n \geq 0$, and
$$E_{n+1}(f;a)(a) = \frac{f^{(n+1)}(a)}{(n+1)!}$$
for $n \geq 0$.
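As a quick numerical illustration of the remainder factor $E_{n+1}(f;a)$, the following Python sketch checks that $E_{n+1}(f;a)(z)$ approaches $f^{(n+1)}(a)/(n+1)!$ as $z \to a$. The function $f(z)=e^{z}$, the point $a$, and the truncation order $n$ are arbitrary choices made only for this check and are not taken from the examples of this paper.

```python
# Numerical check that the remainder factor E_{n+1}(f; a)(z) tends to
# f^{(n+1)}(a)/(n+1)! as z -> a.  Illustrative choices (not from the paper):
# f(z) = exp(z), a = 0.3 + 0.1j, n = 4; every derivative of exp is exp itself.
import cmath
from math import factorial

f = cmath.exp
a, n = 0.3 + 0.1j, 4
target = f(a) / factorial(n + 1)        # f^{(n+1)}(a)/(n+1)!

def E(z):
    """E_{n+1}(f; a)(z) computed from the truncated Taylor expansion."""
    partial = sum(f(a) / factorial(k) * (z - a) ** k for k in range(n + 1))
    return (f(z) - partial) / (z - a) ** (n + 1)

for h in (1e-1, 3e-2, 1e-2):
    z = a + h * (1 + 1j)
    print(h, abs(E(z) - target))        # decreases roughly linearly in h
```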

3. Fixed Point Method

A fixed point method uses an iteration function (IF) $g$, which is an analytic function mapping its domain of definition into itself. Using an IF $g$ and an initial value $z_0$, we are interested in the convergence of the sequence $\{z_k\}_{k=0}^{\infty}$ generated by $z_{k+1} = g(z_k)$. It is well known that if the sequence converges, it converges to a fixed point $z^*$ of $g$, that is, a point such that $g(z^*) = z^*$.

Let $g$ be an IF, let $p$ be a positive integer, and let $z^*$ be such that the following limit exists:
$$\Lambda_p = \lim_{k \to \infty} \frac{|z_{k+1} - z^*|}{|z_k - z^*|^{p}}.$$
Let us observe that if $\Lambda_p \in (0, +\infty)$, then for $q < p$ (resp. $q > p$) the corresponding limit is $0$ (resp. $+\infty$). We say that the convergence of the sequence $\{z_k\}_{k=0}^{\infty}$ to $z^*$ is of (integer) order $p$ if and only if $\Lambda_p \neq 0$, and $\Lambda_p$ is called the asymptotic constant. We also say that $g$ is of order $p$. If the limit exists but is zero, we can say that $g$ is of order at least $p$.

From a numerical point of view, since $z^*$ is not known, it is useful to define the computable ratio
$$\widehat{\Lambda}_{p,k} = \frac{|z_{k+1} - z_k|}{|z_k - z_{k-1}|^{p}}. \qquad (7)$$

Following [3], it can be shown that, when $g$ is of order $p$,
$$\lim_{k \to \infty} \widehat{\Lambda}_{p,k} = \Lambda_p.$$
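As an illustration of how this ratio can be used in practice, the following Python sketch estimates the asymptotic constant of an iteration from its iterates alone. The iteration chosen here (Newton’s method for $z^{2} - 2 = 0$, of order $2$) and the starting point are arbitrary choices made only for the illustration.

```python
# Sketch: estimate the asymptotic constant from the iterates alone, using the
# computable ratio |z_{k+1} - z_k| / |z_k - z_{k-1}|^p for a trial order p.
# Illustrative iteration: Newton's method for z**2 - 2 = 0 (order 2).

def newton_step(z):
    return z - (z * z - 2) / (2 * z)

z = 1.0 + 0.5j
iterates = [z]
for _ in range(6):
    z = newton_step(z)
    iterates.append(z)

p = 2                                   # trial order
for k in range(1, len(iterates) - 1):
    den = abs(iterates[k] - iterates[k - 1]) ** p
    if den > 0:
        ratio = abs(iterates[k + 1] - iterates[k]) / den
        print(k, ratio)                 # stabilizes near the asymptotic constant
# (the last ratios degrade once the iterates agree to machine precision)
```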

We say that $z^*$ is a root of $f$ of multiplicity $m$ if and only if $f^{(j)}(z^*) = 0$ for $j = 0, \dots, m-1$, and $f^{(m)}(z^*) \neq 0$. Moreover, $z^*$ is a root of $f$ of multiplicity $m$ if and only if there exists an analytic function $h$ such that $f(z) = (z - z^*)^{m} h(z)$ and $h(z^*) \neq 0$.

We will use the big $O$ notation $w(z) = O((z - z^*)^{m})$ and the small $o$ notation $w(z) = o((z - z^*)^{m})$, around $z^*$, respectively, when $|w(z)|/|z - z^*|^{m}$ remains bounded as $z \to z^*$ and when $|w(z)|/|z - z^*|^{m} \to 0$ as $z \to z^*$.

For a root $z^*$ of multiplicity $m$ of $f$, it is equivalent to write $f(z) = O((z - z^*)^{m})$ with $f(z) \neq o((z - z^*)^{m})$, or $f(z) = (z - z^*)^{m} h(z)$ with $h(z^*) \neq 0$. Observe also that if $z^*$ is a simple root of $f$, then $z^*$ is a root of multiplicity $m$ of $f^{m}$. Hence $O(f^{m}(z))$ is equivalent to $O((z - z^*)^{m})$.

The first result concerns the necessary and sufficient conditions for achieving linear convergence.

Theorem 1. Let $g$ be an IF with fixed point $z^*$, and let $g^{(1)}$ stand for its first derivative. Observe that although the first derivative is usually denoted by $g'$, one will write $g^{(1)}$ to maintain uniformity throughout the text.
(i) If $|g^{(1)}(z^*)| < 1$, then there exists a neighborhood of $z^*$ such that for any $z_0$ in that neighborhood the sequence $\{z_k\}_{k=0}^{\infty}$ converges to $z^*$.
(ii) If there exists a neighborhood of $z^*$ such that for any $z_0$ in that neighborhood the sequence $\{z_k\}_{k=0}^{\infty}$ converges to $z^*$, and $z_k \neq z^*$ for all $k$, then $|g^{(1)}(z^*)| \leq 1$.
(iii) For any such sequence $\{z_k\}_{k=0}^{\infty}$ which converges to $z^*$, the limit $\Lambda_1$ exists and $\Lambda_1 = |g^{(1)}(z^*)|$.

Proof. (i) By continuity, there is a closed disk $D(z^*;\delta) = \{z : |z - z^*| \leq \delta\}$ such that $K = \max_{z \in D(z^*;\delta)} |g^{(1)}(z)| < 1$. Then if $z_k \in D(z^*;\delta)$, we have
$$|z_{k+1} - z^*| = |g(z_k) - g(z^*)| \leq K |z_k - z^*|,$$
and $z_{k+1} \in D(z^*;\delta)$. Moreover
$$|z_k - z^*| \leq K^{k} |z_0 - z^*|,$$
and the sequence converges to $z^*$ because $K < 1$.
(ii) If $|g^{(1)}(z^*)| > 1$, there exists a disk $D(z^*;\delta)$, with $\delta > 0$, such that $|g(z) - g(z^*)| \geq K |z - z^*|$ on $D(z^*;\delta)$ for some constant $K > 1$. Let us suppose that the sequence $\{z_k\}_{k=0}^{\infty}$ is such that $z_k \neq z^*$ for all $k$. If $z_k \in D(z^*;\delta)$ and $z_{k+1} = g(z_k)$, then we have
$$|z_{k+1} - z^*| \geq K |z_k - z^*|.$$
Let $j \geq 0$, and suppose $z_j, z_{j+1}, \dots, z_{j+l}$ are in $D(z^*;\delta)$. Because $|z_{j+l} - z^*| \geq K^{l} |z_j - z^*|$ and $K > 1$, eventually $K^{l} |z_j - z^*| > \delta$ and the iterates must leave $D(z^*;\delta)$. Then the infinite sequence cannot converge to $z^*$.
(iii) For any sequence $\{z_k\}_{k=0}^{\infty}$ which converges to $z^*$ we have
$$\lim_{k \to \infty} \frac{|z_{k+1} - z^*|}{|z_k - z^*|} = \lim_{k \to \infty} \frac{|g(z_k) - g(z^*)|}{|z_k - z^*|} = |g^{(1)}(z^*)|.$$
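To illustrate part (iii) of Theorem 1 numerically, the following Python sketch follows a linearly convergent iteration and prints the successive error ratios. The particular IF $g(z) = z^{2}/4 + z/2$, with fixed point $z^{*} = 0$ and $g^{(1)}(0) = 1/2$, is only an illustrative choice.

```python
# Sketch for Theorem 1(iii): for an IF g with |g'(z*)| < 1, the error ratios
# |z_{k+1} - z*| / |z_k - z*| tend to |g'(z*)|.
# Illustrative IF: g(z) = z**2/4 + z/2, fixed point z* = 0, g'(0) = 1/2.

g = lambda z: z * z / 4 + z / 2
z_star = 0.0 + 0.0j
z = 0.3 + 0.2j                      # starting point in the basin of attraction
prev_err = abs(z - z_star)
for k in range(10):
    z = g(z)
    err = abs(z - z_star)
    print(k, err / prev_err)        # tends to |g'(0)| = 0.5
    prev_err = err
```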

For higher order convergence we have the following result about necessary and sufficient conditions.

Theorem 2. Let $p \geq 2$ be an integer and let $g$ be an analytic function such that $g(z^*) = z^*$. The IF $g$ is of order $p$ if and only if $g^{(j)}(z^*) = 0$ for $j = 1, \dots, p-1$, and $g^{(p)}(z^*) \neq 0$. Moreover, the asymptotic constant is given by
$$\Lambda_p = \frac{|g^{(p)}(z^*)|}{p!}.$$

Proof. (i) The (local) convergence is given by part (i) of Theorem 1. Moreover, if $g^{(j)}(z^*) = 0$ for $j = 1, \dots, p-1$ and $g^{(p)}(z^*) \neq 0$, we have
$$z_{k+1} - z^* = g(z_k) - g(z^*) = (z_k - z^*)^{p} E_p(g; z^*)(z_k),$$
and hence
$$\lim_{k \to \infty} \frac{|z_{k+1} - z^*|}{|z_k - z^*|^{p}} = |E_p(g; z^*)(z^*)| = \frac{|g^{(p)}(z^*)|}{p!} \neq 0.$$
(ii) If the IF $g$ is of order $p$, assume that $g^{(j)}(z^*) \neq 0$ for some $j$ with $1 \leq j \leq p-1$, and let $q$ be the smallest such index. We have
$$z_{k+1} - z^* = g(z_k) - g(z^*) = (z_k - z^*)^{q} E_q(g; z^*)(z_k),$$
where
$$E_q(g; z^*)(z^*) = \frac{g^{(q)}(z^*)}{q!} \neq 0.$$
But
$$\frac{|z_{k+1} - z^*|}{|z_k - z^*|^{p}} = \frac{|E_q(g; z^*)(z_k)|}{|z_k - z^*|^{p-q}},$$
and hence the limit is $+\infty$ because $p > q$, a contradiction. So $g^{(j)}(z^*) = 0$ for $j = 1, \dots, p-1$, and $g^{(p)}(z^*) \neq 0$ because otherwise $\Lambda_p = 0$ and $g$ would not be of order $p$.

It follows that, for an analytic IF $g$ and $p \geq 2$, the limit $\Lambda_p$ exists if and only if $g^{(j)}(z^*) = 0$ for $j = 1, \dots, p-1$.

As a consequence, for an analytic IF $g$ we can say that (a) $g$ is of order $p$ if and only if $g(z) - z^* = O((z - z^*)^{p})$ and $g(z) - z^* \neq o((z - z^*)^{p})$ or, equivalently, if $g(z) - z^* = O((z - z^*)^{p})$ and $g^{(p)}(z^*) \neq 0$, and (b) if $z^*$ is a simple root of $f$, then $g$ is of order $p$ if and only if $g(z) - z^* = O(f^{p}(z))$ and $g(z) - z^* \neq o(f^{p}(z))$ or, equivalently, if $g(z) - z^* = O(f^{p}(z))$ and $g^{(p)}(z^*) \neq 0$.
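The conditions of Theorem 2 are easy to verify symbolically for a concrete IF. The following sketch uses the illustrative function $f(z) = z^{3} - 1$ and its simple root $z^{*} = 1$ (this is also the test problem of Section 5); it checks that Newton’s IF satisfies $g^{(1)}(z^{*}) = 0$ and $g^{(2)}(z^{*}) \neq 0$ and evaluates the asymptotic constant $|g^{(2)}(z^{*})|/2!$. The use of SymPy is an illustrative choice.

```python
# Sketch: symbolic check of the conditions of Theorem 2 for Newton's IF
# g(z) = z - f(z)/f'(z), with the illustrative choice f(z) = z**3 - 1, z* = 1.
import sympy as sp

z = sp.symbols('z')
f = z**3 - 1
g = z - f / sp.diff(f, z)
z_star = 1

g1 = sp.simplify(sp.diff(g, z)).subs(z, z_star)       # expected: 0 (order at least 2)
g2 = sp.simplify(sp.diff(g, z, 2)).subs(z, z_star)    # expected: nonzero
print(g1, g2)                                         # 0 and 2
print(sp.Abs(g2) / sp.factorial(2))                   # asymptotic constant = 1
```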

Schröder’s process of the first kind is a systematic and recursive way to construct an IF of arbitrary order $p$ to find a simple zero $z^*$ of $f$. The IF has to fulfill at least the sufficient condition of Theorem 2. Let us present a generalization of this process.

Theorem 3 (see [1]). Let $z^*$ be a simple root of $f$, and let $w_1$ be an analytic function such that $w_1(z^*) = 1/f^{(1)}(z^*)$. Let $g_p$ be the IF defined by the finite series
$$g_p(z) = z - \sum_{l=1}^{p-1} w_l(z) f^{l}(z), \qquad (22)$$
where the analytic functions $w_l$, for $l = 2, \dots, p-1$, are such that $g_p^{(j)}(z^*) = 0$ for $j = 2, \dots, p-1$. Then $g_p$ is of order at least $p$, and its asymptotic constant is
$$\Lambda_p = \frac{|g_p^{(p)}(z^*)|}{p!}.$$

For the classical choice of the functions $w_l$ in (22), we recover Schröder’s process of the first kind of order $p$ [4–7], which is also associated with Chebyshev and Euler [8–10]. The first term $w_1$ could be seen as a preconditioning to decrease the asymptotic constant of the method, but its choice is not obvious.
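As a concrete example of the kind of IF covered by Theorem 3, the classical third-order member of this family (often attributed to Chebyshev) is $g_3(z) = z - f/f^{(1)} - f^{(2)} f^{2}/(2 (f^{(1)})^{3})$. The following sketch applies it to the illustrative function $f(z) = z^{3} - 1$; the starting point is an arbitrary choice near the root $z^{*} = 1$.

```python
# Sketch of the classical third-order method of the Chebyshev/Schroeder family,
#     z_{k+1} = z_k - f/f' - f'' f^2 / (2 f'^3),
# applied to the illustrative problem f(z) = z**3 - 1 from a start near z* = 1.

def chebyshev_step(z):
    f, f1, f2 = z**3 - 1, 3 * z**2, 6 * z
    return z - f / f1 - f2 * f * f / (2 * f1**3)

z = 0.9 + 0.2j
for k in range(6):
    z = chebyshev_step(z)
    print(k, z, abs(z**3 - 1))      # rapid (third-order) convergence to the root 1
```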

4. Newton’s Iteration Function

Considering $p = 2$ and $w_1(z) = 1/f^{(1)}(z)$ in (22), we obtain
$$g_2(z) = z - \frac{f(z)}{f^{(1)}(z)},$$
which is Newton’s IF of order $2$ to solve $f(z) = 0$. The sufficiency and the necessity of the conditions for high-order convergence of Newton’s method are presented in the next result.
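For completeness, here is a minimal sketch of Newton’s IF in the complex plane, applied to the illustrative function $f(z) = z^{3} - 1$ used in Section 5; the starting points are arbitrary and are simply attracted to different cube roots of unity.

```python
# Minimal sketch of Newton's IF g(z) = z - f(z)/f'(z) in the complex plane,
# for the illustrative problem f(z) = z**3 - 1.

def newton(z, iters=20):
    for _ in range(iters):
        z = z - (z**3 - 1) / (3 * z**2)
    return z

for z0 in (1.0 + 0.1j, -1.0 + 1.0j, -1.0 - 1.0j):
    print(z0, "->", newton(z0))     # approx. 1, -0.5+0.866j, -0.5-0.866j
```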

Theorem 4. Let $p \geq 2$ and let $F$ be an analytic function such that $F(z^*) = 0$ and $F^{(1)}(z^*) \neq 0$. The Newton iteration
$$g(z) = z - \frac{F(z)}{F^{(1)}(z)}$$
is of order $p$ if and only if $F^{(j)}(z^*) = 0$ for $j = 2, \dots, p-1$, and $F^{(p)}(z^*) \neq 0$. Moreover, the asymptotic constant is
$$\Lambda_p = \frac{p-1}{p!}\,\frac{|F^{(p)}(z^*)|}{|F^{(1)}(z^*)|}.$$

Proof. (i) If $F^{(j)}(z^*) = 0$ for $j = 2, \dots, p-1$, and $F^{(p)}(z^*) \neq 0$, we have
$$F(z) = F^{(1)}(z^*)(z - z^*) + \frac{F^{(p)}(z^*)}{p!}(z - z^*)^{p} + O((z - z^*)^{p+1}).$$
But
$$F^{(1)}(z) = F^{(1)}(z^*) + \frac{F^{(p)}(z^*)}{(p-1)!}(z - z^*)^{p-1} + O((z - z^*)^{p}).$$
It follows that
$$\frac{F(z)}{F^{(1)}(z)} = (z - z^*) - \frac{p-1}{p!}\frac{F^{(p)}(z^*)}{F^{(1)}(z^*)}(z - z^*)^{p} + O((z - z^*)^{p+1}),$$
so
$$g(z) - z^* = (z - z^*) - \frac{F(z)}{F^{(1)}(z)} = \frac{p-1}{p!}\frac{F^{(p)}(z^*)}{F^{(1)}(z^*)}(z - z^*)^{p} + O((z - z^*)^{p+1}).$$
(ii) Conversely, if $g$ is of order $p$ we have $g^{(j)}(z^*) = 0$ for $j = 1, \dots, p-1$, and $g^{(p)}(z^*) \neq 0$. Hence $z^*$ is a root of multiplicity $p$ of $g(z) - z^*$ and we can write
$$g(z) - z^* = (z - z^*)^{p} h(z)$$
with $h$ analytic and $h(z^*) \neq 0$. We also have
$$g(z) - z^* = (z - z^*) - \frac{F(z)}{F^{(1)}(z)}.$$
But
$$\frac{F(z)}{F^{(1)}(z)} = (z - z^*)\frac{q(z)}{q(z) + (z - z^*) q^{(1)}(z)},$$
where $q$ is the analytic function such that $F(z) = (z - z^*) q(z)$ and $q(z^*) = F^{(1)}(z^*) \neq 0$, so we obtain
$$\frac{(z - z^*) q^{(1)}(z)}{q(z) + (z - z^*) q^{(1)}(z)} = (z - z^*)^{p-1} h(z).$$
It follows that $z^*$ is a root of multiplicity $p-2$ of $q^{(1)}$, hence that $q(z) = q(z^*) + O((z - z^*)^{p-1})$ with a nonzero coefficient for $(z - z^*)^{p-1}$. Hence $F^{(j)}(z^*) = 0$ for $j = 2, \dots, p-1$, and $F^{(p)}(z^*) \neq 0$.

We can look for a recursive method to construct a function $F$ which will satisfy the conditions of Theorem 4. A consequence will be that Newton’s method applied to $F$ will be of order $p$ and that $F$ will have the same simple zero $z^*$ as $f$. A first method has been presented in [11, 12]. The technique can also be based on Taylor’s expansion as indicated in [13].

Theorem 5 (see [11]). Let $f$ be analytic such that $f(z^*) = 0$ and $f^{(1)}(z^*) \neq 0$. If $F_p$ is defined by
$$F_2(z) = f(z), \qquad F_{p+1}(z) = \frac{F_p(z)}{\left[F_p^{(1)}(z)\right]^{1/p}} \quad (p \geq 2), \qquad (36)$$
then $F_p(z^*) = 0$, $F_p^{(1)}(z^*) \neq 0$, and $F_p^{(j)}(z^*) = 0$ for $j = 2, \dots, p-1$. It follows that the Newton iteration applied to $F_p$ is of order at least $p$.

Let us observe that in this theorem it seems that the method depends on the choice of a branch for the $p$th root function. In fact the Newton iteration function does not depend on this choice because we have
$$z - \frac{F_{p+1}(z)}{F_{p+1}^{(1)}(z)} = z - \frac{F_p(z) F_p^{(1)}(z)}{\left[F_p^{(1)}(z)\right]^{2} - \frac{1}{p} F_p(z) F_p^{(2)}(z)}.$$
The next theorem shows that a branch for the $p$th root function is not necessary.

Theorem 6 (see [12]). Let $F_p$ be given by (36); one can also write
$$z - \frac{F_p(z)}{F_p^{(1)}(z)} = z - R_p(z),$$
where
$$R_2(z) = \frac{f(z)}{f^{(1)}(z)}, \qquad R_{p+1}(z) = \frac{p\, R_p(z)}{(p-1) + R_p^{(1)}(z)} \quad (p \geq 2).$$

Unfortunately, there exists no general formula for the asymptotic constant of this method. However, the asymptotic constant can be numerically estimated with (7).
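To illustrate Theorems 5 and 6 at the first step of the recursion, the following sketch verifies symbolically, for a generic analytic $f$, that Newton’s correction for $F_3(z) = f(z)/[f^{(1)}(z)]^{1/2}$ coincides with Halley’s correction $2 f f^{(1)}/(2 (f^{(1)})^{2} - f f^{(2)})$, so that no branch of the square root enters the final formula. The use of SymPy is an illustrative choice.

```python
# Symbolic check (first step of the recursion in Theorem 5): Newton's correction
# for F_3 = f / sqrt(f') equals Halley's correction 2 f f' / (2 f'^2 - f f''),
# so the square-root branch cancels out.
import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')(z)
f1, f2 = sp.diff(f, z), sp.diff(f, z, 2)

F3 = f / sp.sqrt(f1)
newton_corr = sp.simplify(F3 / sp.diff(F3, z))
halley_corr = 2 * f * f1 / (2 * f1**2 - f * f2)

print(sp.simplify(newton_corr - halley_corr))   # expected: 0
```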

A second method to construct a function $F$ which will satisfy the conditions of Theorem 4 is given in the next theorem.

Theorem 7 (see [1]). Let $z^*$ be a simple root of $f$. Let $F_p$ be defined by
$$F_p(z) = u(z) f(z) + v(z) f^{2}(z), \qquad (42)$$
where $u$ and $v$ are two analytic functions such that $u(z^*) \neq 0$ and $F_p^{(j)}(z^*) = 0$ for $j = 2, \dots, p-1$. Then the Newton iteration applied to $F_p$ is of order at least $p$, with
$$\Lambda_p = \frac{p-1}{p!}\,\frac{|F_p^{(p)}(z^*)|}{|F_p^{(1)}(z^*)|}.$$

Let us observe that if we set $u(z) = w_1(z)$ and $v(z) = \sum_{l=2}^{p-1} w_l(z) f^{l-2}(z)$, with the $w_l$ given by (22), then $F_p(z) = z - g_p(z)$ verifies the assumptions of Theorem 7.

Remark 8. For a given pair of functions $u$ and $v$ in Theorem 7, the linearity of expression (42) with respect to $u$ and $v$ for computing the $F_p$’s allows us to decompose the computation of $F_p$ in two computations, one for the pair $(u, 0)$ and the other for the pair $(0, v)$, and then add the two functions hence obtained.

5. Examples

Let us consider the problem of finding the cube roots of unity, $z^{3} = 1$, for which we have the three roots $z_l^* = e^{2\pi i l/3}$, $l = 0, 1, 2$. Hence we would like to solve $f(z) = 0$ for $f(z) = z^{3} - 1$. As examples of the preceding results, we present methods of orders $2$, $3$, and $4$ obtained from Theorems 3, 5, and 7. For each method, we also present the basins of attraction of the roots.

The drawing process for the basins of attraction follows Varona [14]. Typically, for the upcoming figures, on a square region of the complex plane we assign a color to each attraction basin of each root. That is, we color a point depending on whether, within a fixed number of iterations (here 25), the iterates lie within a certain precision of a given root. If after 25 iterations we do not lie within this precision of any given root, we assign to the point a very dark shade of purple. The more dark shades of purple there are, the more points have failed to achieve the required precision within the predetermined number of iterations.
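A minimal version of this drawing process, for Newton’s method applied to $f(z) = z^{3} - 1$, could look as follows. The square $[-2, 2] \times [-2, 2]$, the grid resolution, and the tolerance $10^{-6}$ are illustrative choices; only the cap of 25 iterations follows the description above.

```python
# Sketch of the basin-of-attraction coloring described above, for Newton's method
# applied to f(z) = z**3 - 1.  Points not within the tolerance of any root after
# 25 iterations keep the index -1 and get the darkest color of the map.
import numpy as np
import matplotlib.pyplot as plt

roots = np.array([1.0, np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)])

x = np.linspace(-2, 2, 600)
y = np.linspace(-2, 2, 600)
Z = x[None, :] + 1j * y[:, None]

index = np.full(Z.shape, -1)
for _ in range(25):                                  # iteration cap from the text
    Z = Z - (Z**3 - 1) / (3 * Z**2)                  # Newton step on the whole grid
    for k, r in enumerate(roots):
        index[(np.abs(Z - r) < 1e-6) & (index == -1)] = k

plt.imshow(index, extent=(-2, 2, -2, 2), origin="lower")
plt.title("Basins of attraction, Newton's method for $z^3 - 1$")
plt.show()
```

Coloring each point by the index of the attained root, and reserving the darkest color for points that never meet the tolerance, reproduces the qualitative behavior described above; the exact palette is immaterial.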

5.1. Examples for Theorem 3

We start with iterative methods of order $2$. From Theorem 3, we first have to choose the free function of the process. We observe that the simplest admissible choice has the advantage that its derivatives of order higher than $2$ are $0$, thus simplifying further computations. This is in fact the choice of function which leads to Newton’s method and to the Chebyshev family of iterative methods. We observe, however, that it is generally possible to consider different choices of functions, although most might not be as numerically convenient, as we will illustrate here. In the examples that follow we will look at several such functions.

In Table 1, we have considered several functions of this kind and have developed explicit expressions for the corresponding IFs. Figure 1 presents the different basins of attraction for these methods. We observe that some of them have a lot of purple points.

Table 1: Methods of order 2 based on Theorem 3.
Figure 1: Basins of attraction for the methods of order 2 of Table 1.

Now let us consider methods of order 3 obtained from Theorem 3 and depending on a parameter. In this case we obtain an explicit expression for the IF and for its asymptotic constant. Examples of basins of attraction are given in Figure 2 for several values of the parameter. The smallest asymptotic constant is obtained for one of these values.

Figure 2: Methods of order 3 for computing the cube roots of unity, for several values of the parameter.
5.2. Examples for Theorem 5

Gerlach’s process described in Theorems 5 and 6 leads to Newton’s method for $p = 2$ and Halley’s method for $p = 3$. For our problem we have
$$z_{k+1} = \frac{2 z_k^{3} + 1}{3 z_k^{2}} \quad \text{(Newton)}, \qquad z_{k+1} = \frac{z_k (z_k^{3} + 2)}{2 z_k^{3} + 1} \quad \text{(Halley)}.$$
These are well-known standard methods. For comparison, their basins of attraction are given in Figure 3.

Figure 3: First two methods (Newton and Halley) for computing the cube roots of unity with Theorem 5.
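The two explicit maps of Section 5.2 are easy to compare numerically; the following sketch counts the iterations each one needs to reach a prescribed accuracy from a common starting point. The starting point and the tolerance are arbitrary choices made only for the illustration.

```python
# Sketch comparing the two explicit maps for f(z) = z**3 - 1:
#   Newton: z -> (2 z^3 + 1) / (3 z^2)      Halley: z -> z (z^3 + 2) / (2 z^3 + 1).

newton = lambda z: (2 * z**3 + 1) / (3 * z**2)
halley = lambda z: z * (z**3 + 2) / (2 * z**3 + 1)

def count_iterations(step, z, tol=1e-12, cap=50):
    for k in range(cap):
        if abs(z**3 - 1) < tol:
            return k
        z = step(z)
    return cap

z0 = 0.8 + 0.9j
print("Newton:", count_iterations(newton, z0), "iterations")
print("Halley:", count_iterations(halley, z0), "iterations")
```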
5.3. Examples for Theorem 7

To illustrate Theorem 7, we choose functions $u$ and $v$, depending on parameters, for our problem, and we consider methods of two different orders to solve $f(z) = 0$. Table 2 presents the relevant quantities for this example.

Table 2: Methods of the two orders considered, based on Theorem 7.

We observe that the asymptotic constant of one of the lower-order methods is zero; this means that this method is of an order of convergence higher than the order guaranteed by the construction, and in fact it corresponds to Halley’s method, which is of order 3. We also observe that, for two of the parameter values considered, the resulting methods both correspond to Halley’s method for our specific problem. Examples of basins of attraction are given in Figure 4 for the methods of the lower order and in Figure 5 for the methods of the higher order.

Figure 4: Methods of the lower order considered to illustrate Theorem 7.
Figure 5: Methods of the higher order considered to illustrate Theorem 7.

6. Concluding Remarks

In this paper we have presented fixed point and Newton’s methods to compute a simple root of a nonlinear analytic function in the complex plane. We have pointed out that the usual sufficient conditions for convergence are also necessary. Based on these conditions for high-order convergence, we have revisited and extended Schröder’s methods of the first and second kind. Numerical examples are given to illustrate the basins of attraction when we compute the cube roots of unity. It might be interesting to study the relationship, if there is any, between the asymptotic constant and the basins of attraction for such methods.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been financially supported by an individual discovery grant from NSERC (Natural Sciences and Engineering Research Council of Canada) and a grant from ISM (Institut des Sciences Mathématiques).

References

  1. F. Dubeau and C. Gnang, “Fixed point and Newton's methods for solving a nonlinear equation: from linear to high-order convergence,” SIAM Review, vol. 56, no. 4, pp. 691–708, 2014.
  2. F. Dubeau, “Polynomial and rational approximations and the link between Schröder's processes of the first and second kind,” Abstract and Applied Analysis, vol. 2014, Article ID 719846, 5 pages, 2014.
  3. F. Dubeau, “On comparisons of Chebyshev-Halley iteration functions based on their asymptotic constants,” International Journal of Pure and Applied Mathematics, vol. 85, no. 5, pp. 965–981, 2013.
  4. E. Schröder, “Ueber unendlich viele Algorithmen zur Auflösung der Gleichungen,” Mathematische Annalen, vol. 2, no. 2, pp. 317–365, 1870.
  5. E. Schröder, “On infinitely many algorithms for solving equations,” translated by G. W. Stewart, Institute for Advanced Computer Studies, University of Maryland, pp. 92–121, 1992.
  6. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, USA, 1964.
  7. A. S. Householder, The Numerical Treatment of a Single Nonlinear Equation, McGraw-Hill, New York, NY, USA, 1970.
  8. E. Bodewig, “On types of convergence and on the behavior of approximations in the neighborhood of a multiple root of an equation,” Quarterly of Applied Mathematics, vol. 7, pp. 325–333, 1949.
  9. M. Shub and S. Smale, “Computational complexity. On the geometry of polynomials and a theory of cost,” Annales Scientifiques de l'École Normale Supérieure. Quatrième Série, vol. 18, no. 1, pp. 107–142, 1985.
  10. M. Petković and D. Herceg, “On rediscovered iteration methods for solving equations,” Journal of Computational and Applied Mathematics, vol. 107, no. 2, pp. 275–284, 1999.
  11. J. Gerlach, “Accelerated convergence in Newton's method,” SIAM Review, vol. 36, no. 2, pp. 272–276, 1994.
  12. W. F. Ford and J. A. Pennline, “Accelerated convergence in Newton's method,” SIAM Review, vol. 38, no. 4, pp. 658–659, 1996.
  13. F. Dubeau, “On the modified Newton's method for multiple root,” Journal of Mathematical Analysis, vol. 4, no. 2, pp. 9–15, 2013.
  14. J. L. Varona, “Graphic and numerical comparison between iterative methods,” The Mathematical Intelligencer, vol. 24, no. 1, pp. 37–46, 2002.