Abstract

We study the local convergence properties of the inexact Newton-Gauss method for singular systems of equations. Unified estimates of the radii of convergence balls for a class of singular systems of equations with constant rank derivatives are obtained. An application to the Smale point estimate theory is provided, and some important known results are extended and/or improved.

1. Introduction

Consider the following system of nonlinear equations:
$$F(x) = 0, \tag{1}$$
where $F$ is a nonlinear operator with its Fréchet derivative denoted by $F'$, defined on a domain $D$ that is open and convex. In the case when the numbers of equations and unknowns coincide and $F'(x)$ is invertible for each $x \in D$, Newton's method is a classical numerical method for finding an approximate solution of such a system. There are many results that improve, generalize, or extend the convergence of Newton's method for solving (1). We refer the reader to the works of Deuflhard and Heindl [1], Smale [2], Wang [3], Ferreira [4], Argyros et al. [5], and the references therein. If $x_n$ is an approximation of a zero of this system, then Newton's method is defined as follows:
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n), \quad n = 0, 1, \ldots. \tag{2}$$
When $F'(x_n)$ is not invertible, we use its Moore-Penrose inverse $F'(x_n)^{\dagger}$ instead of its classical inverse and obtain Gauss-Newton's method:
$$x_{n+1} = x_n - F'(x_n)^{\dagger} F(x_n), \quad n = 0, 1, \ldots. \tag{3}$$
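For readers who wish to experiment numerically, the following is a minimal Python sketch of the Gauss-Newton iteration (3), using numpy.linalg.pinv for the Moore-Penrose inverse; the test problem and tolerances are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def gauss_newton(F, dF, x0, max_iter=20, tol=1e-12):
    """Gauss-Newton iteration (3): x_{n+1} = x_n - pinv(F'(x_n)) @ F(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.pinv(dF(x)) @ F(x)  # Moore-Penrose inverse of the Jacobian
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative square system (an assumption, not from the paper):
# zero at (1/sqrt(2), 1/sqrt(2)); with an invertible Jacobian, (3) coincides with (2).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
dF = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(gauss_newton(F, dF, [1.0, 0.5]))  # approx [0.70710678, 0.70710678]
```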

Let $A$ be a linear operator (or a matrix). Recall that an operator (or matrix) $A^{\dagger}$ is the Moore-Penrose inverse of $A$ if it satisfies the following four equations:
$$A A^{\dagger} A = A, \quad A^{\dagger} A A^{\dagger} = A^{\dagger}, \quad (A A^{\dagger})^{*} = A A^{\dagger}, \quad (A^{\dagger} A)^{*} = A^{\dagger} A,$$
where $A^{*}$ denotes the adjoint of $A$. Let $\ker A$ and $\operatorname{im} A$ denote the kernel and image of $A$, respectively. For a subspace $E$, we use $\Pi_{E}$ to denote the projection onto $E$. Then, it is clear that
$$A^{\dagger} A = \Pi_{(\ker A)^{\perp}}, \quad A A^{\dagger} = \Pi_{\operatorname{im} A}.$$
In particular, in the case when $A$ is full row rank (or, equivalently, when $A$ is surjective), $A A^{\dagger} = I$; when $A$ is full column rank (or, equivalently, when $A$ is injective), $A^{\dagger} A = I$.
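As a quick numerical sanity check (not part of the original text), the following NumPy snippet verifies the four Penrose equations and the projection identities for a random rank-deficient matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 5x4 matrix of rank 2 (rank deficient, so neither projection below is the identity).
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
Ap = np.linalg.pinv(A)

# The four Penrose equations.
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose((A @ Ap).T, A @ Ap)   # A A^+ is self-adjoint
assert np.allclose((Ap @ A).T, Ap @ A)   # A^+ A is self-adjoint

# A^+ A projects onto (ker A)^perp (the row space); A A^+ projects onto im A.
assert np.allclose((Ap @ A) @ A.T, A.T)  # the row space is fixed by A^+ A
assert np.allclose((A @ Ap) @ A, A)      # the column space is fixed by A A^+
print("All Penrose identities verified.")
```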

One of the disadvantages of Newton's method (2) is that it requires solving exactly the following linear equation at each step:
$$F'(x_n) s_n = -F(x_n).$$
To overcome this disadvantage, Dembo et al. presented in [6] the following iterative process, called the inexact Newton method ($x_0$ is an initial guess):
$$F'(x_n) s_n = -F(x_n) + r_n, \quad x_{n+1} = x_n + s_n, \quad n = 0, 1, \ldots, \tag{7}$$
where the residual $r_n$ satisfies the control
$$\|r_n\| \leq \eta_n \|F(x_n)\| \tag{8}$$
and $\{\eta_n\}$ is a sequence of forcing terms such that $0 \leq \eta_n < 1$. In [6], it was shown that if $\eta_n \leq \eta < 1$, then there exists $\varepsilon > 0$ such that, for any initial guess $x_0$ with $\|x_0 - x^{*}\| \leq \varepsilon$, the sequence $\{x_n\}$ is well defined and converges to the solution $x^{*}$. Moreover, the rate of convergence of $\{x_n\}$ to $x^{*}$ is characterized by the rate of convergence of $\{\eta_n\}$ to 0.
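The following is a minimal sketch of the inexact Newton iteration (7) with the residual control (8). For illustration only, the residual is generated explicitly by perturbing the right-hand side; in practice it would come from an inner iterative solver stopped early. The test system, the forcing sequence, and all tolerances are assumptions, not taken from [6].

```python
import numpy as np

def inexact_newton(F, dF, x0, eta=0.1, max_iter=50, tol=1e-10):
    """Inexact Newton method (7): F'(x_n) s_n = -F(x_n) + r_n, x_{n+1} = x_n + s_n,
    with the residual control (8): ||r_n|| <= eta_n ||F(x_n)||."""
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(1)
    for n in range(max_iter):
        Fx, Jx = F(x), dF(x)
        if np.linalg.norm(Fx) < tol:
            break
        eta_n = eta / (n + 1)                                 # forcing terms tending to 0
        r = rng.standard_normal(Fx.shape)
        r *= eta_n * np.linalg.norm(Fx) / np.linalg.norm(r)   # enforce (8) with equality
        s = np.linalg.solve(Jx, -Fx + r)                      # inexact Newton step
        x = x + s
    return x

# Illustrative square system (an assumption): zeros at (0, 3) and (3, 0).
F = lambda x: np.array([x[0] + x[1] - 3.0, x[0]**2 + x[1]**2 - 9.0])
dF = lambda x: np.array([[1.0, 1.0], [2.0 * x[0], 2.0 * x[1]]])
print(inexact_newton(F, dF, [1.0, 5.0]))  # converges to approx (0, 3)
```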

Note that the residual control (8) is not affine invariant (see [1] for more details about affine invariance). To address this, Ypma used in [7] an affine invariant residual control of the form
$$\|F'(x_n)^{-1} r_n\| \leq \eta_n \|F'(x_n)^{-1} F(x_n)\| \tag{9}$$
to study the local convergence of the inexact Newton method (7), and a convergence radius result was also obtained.

To study the local convergence of the inexact Newton method and inexact Newton-like methods (called inexact methods for short below), Morini presented in [8] the following variation of the residual controls:
$$\|P_n r_n\| \leq \eta_n \|P_n F(x_n)\|, \tag{10}$$
where $\{P_n\}$ is a sequence of invertible operators and $\eta_n$ is the forcing term. If $P_n = I$ and $P_n = F'(x_n)^{-1}$ for each $n$, (10) reduces to (8) and (9), respectively. Both proposed inexact methods are linearly convergent under a Lipschitz condition. It is worth noting that the residual controls (10) arise in iterative methods when preconditioning is applied and lead to a relaxation of the forcing terms. However, the results obtained in [8] do not make clear how large the radius of the convergence ball is. To this end, Chen and Li [9] obtained local convergence properties of inexact methods for (1) under a weak Lipschitz condition, which was first introduced by Wang in [10] to study the local convergence behavior of Newton's method (2). The results in [9] readily provide an estimate of the convergence ball for the inexact methods. Furthermore, Ferreira and Gonçalves presented in [11] a new local convergence analysis for inexact Newton-like methods under a so-called majorant condition, which is equivalent to the preceding weak Lipschitz condition.

Under the assumption that the derivative of the operator satisfies a Hölder condition, the radius of the convergence ball of inexact Newton-like methods with a new type of residual control was estimated by Li and Shen [12], and a superlinear convergence property was proved, which extends the corresponding result in [8]. In addition, as an application of the local convergence result, they presented a slight modification of the inexact Newton-like method of [13] for solving inverse eigenvalue problems and showed that it can be regarded equivalently as one of the inexact methods considered in [12].

Recent attention has focused on the study of finding zeros of singular nonlinear systems by Gauss-Newton's method (3). For example, Shub and Smale extended in [14] the Smale point estimate theory (including α-theory and γ-theory) to Gauss-Newton's method for underdetermined analytic systems with surjective derivatives. For overdetermined systems, Dedieu and Shub studied in [15] the local linear convergence properties of Gauss-Newton's method for analytic systems with injective derivatives and provided estimates of the radii of convergence balls for Gauss-Newton's method. Dedieu and Kim in [16] generalized both the results of the underdetermined case and the overdetermined case to the case where the derivative is of constant rank (not necessarily full rank); these results have been improved by Xu and Li in [17, 18], Ferreira et al. in [19], Argyros and Hilout in [20], and Gonçalves and Oliveira in [21].

In recent years, several authors have studied the convergence behavior of inexact versions of Gauss-Newton's method for singular nonlinear systems. For example, Chen [22] employed the ideas of [9] to study the local convergence properties of several inexact Gauss-Newton type methods in which a scaled relative residual control is performed at each iteration under weak Lipschitz conditions. Ferreira et al. presented in their recent paper [23] a local convergence analysis of an inexact version of Gauss-Newton's method for solving nonlinear least squares problems. Moreover, the radii of the convergence balls under the corresponding conditions were estimated in these two papers.

In the present paper, we study the local convergence of the inexact Newton-Gauss method for singular systems with constant rank derivatives under the hypotheses that the derivatives satisfy Lipschitz conditions with L average and that the residuals satisfy several control conditions. Unified estimates for the radii of convergence balls of the inexact Newton-Gauss method are obtained. As an application to Smale's approximate zeros, we obtain a gamma-type theorem which gives an estimate of the size of the convergence ball of the inexact Newton-Gauss method about a zero.

The rest of this paper is organized as follows. In Section 2, we introduce some preliminary notions and properties of the majorizing function. The main results about the local convergence are stated in Section 3. And finally, in Section 4, we prove the local convergence results given in Section 3.

2. Preliminaries

For and a positive number , throughout the whole paper, we use to stand for the open ball with radius and center and let denote its closure.

Throughout this paper, we assume that is a positive nondecreasing function on , where . Let with . The majorizing function corresponding to is defined by Note that, in the case when , (11) reduces to Obviously, Moreover, we have and is convex and strictly increasing. Set

For the convergence analysis, we need the following useful lemma about elementary convex analysis.

Lemma 1 (see [4]). Let . If is continuously differentiable and convex, then (i), for all and ,(ii), for all , and .

The next two lemmas show that the constants and defined in (14) and (15), respectively, are positive.

Lemma 2. The constant defined in (14) is positive and , for all .

Proof. Since , there exists such that for all . Then, we get . Because is strictly increasing, is strictly convex. It follows from Lemma 1(i) that Note that and , for all . Thus, the inequality follows.

Lemma 3. The constant defined in (15) is positive. As a consequence, , for all .

Proof. On one hand, by Lemma 2, it is clear that , for . On the other hand, we can obtain from Lemma 1(i) that Then, we conclude that there exists such that Therefore, is positive.

Let where and are given in (15) and (16), respectively. For any starting point , let denote the sequence generated by

Lemma 4. The sequence given by (21) is well defined, is strictly decreasing, is contained in , and converges to .

Proof. Since , using Lemma 3, one has that is well defined, strictly decreasing, and contained in . Thus, there exists such that ; that is, we have If , it follows from Lemma 3 that This is a contradiction. So as . This completes the proof.

The notion of the L-average Lipschitz condition for semilocal convergence analysis was introduced by Li and Ng in [24]; it is a modification of the one first introduced by Wang in [3], where the terminology of “the center Lipschitz condition in the inscribed sphere with L average” was used. This notion was used to study the semilocal convergence of Newton’s method (2) for solving singular systems of equations with constant rank derivatives by Xu and Li in [18] and Li et al. in [25]. For the local convergence analysis, we can introduce a similar definition.

Definition 5. Let be such that . Then, is said to satisfy the L-average Lipschitz condition on if for any and .

This definition is a modification of the one in [10], where the terminology of “the radius Lipschitz condition with L average” was used. In the case when the derivative is not surjective (see [15, 16]), some of the relevant information may be lost. Accordingly, we need to modify the above notion to suit the case when the derivative is not surjective.

Definition 6. Let be such that . Then, is said to satisfy the modified L-average Lipschitz condition on if for any and .

The notion of the γ-condition for operators in Banach spaces was introduced in [26] by Wang and Han to study the Smale point estimate theory. Definition 7 about the γ-condition and the related Lemma 8 are taken from [25].

Definition 7 (see [25]). Suppose that and has a continuous second derivative. Let be such that . Then, is said to satisfy the γ-condition (resp., the modified γ-condition) on if (26) (resp., (27)) holds as follows:

Lemma 8 (see [25]). Suppose that and has a continuous second derivative. Let be such that . Then, satisfies the γ-condition (resp., the modified γ-condition) on if and only if satisfies the L-average Lipschitz condition (resp., the modified L-average Lipschitz condition) on with .
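For orientation, the classical γ-condition of Wang and Han [26], stated for an operator with invertible derivative, reads as follows (this is quoted as general background; the singular-system versions (26) and (27) of [25] involve the Moore-Penrose inverse and are not reproduced here):
$$\left\| F'(x_0)^{-1} F''(x) \right\| \leq \frac{2\gamma}{\left(1 - \gamma \|x - x_0\|\right)^{3}}, \qquad \|x - x_0\| < \frac{1}{\gamma}.$$
The correspondence asserted in Lemma 8 is the one associated with the choice $L(u) = 2\gamma/(1 - \gamma u)^{3}$ for $0 \leq u < 1/\gamma$.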

3. Local Convergence for Inexact Newton-Gauss Method

In this section, we state our main results of local convergence for the inexact Newton-Gauss method (7). Recall that the system (1) is a surjective-underdetermined (resp., injective-overdetermined) system if the number of equations is less (resp., greater) than the number of unknowns and the derivative is of full rank for each . Note that, for surjective-underdetermined systems, the fixed points of the Newton operator are the zeros of , while, for injective-overdetermined systems, the fixed points of are the least squares solutions of , which, in general, are not necessarily the zeros of ; a small numerical illustration is given below.
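To make the distinction concrete, here is a small illustrative computation (an assumption chosen for exposition, not an example from the paper): for the injective-overdetermined system with components x - 1 and 2x - 1, the Gauss-Newton operator has the least squares solution x = 3/5 as its fixed point, although this point is not a zero of the system.

```python
import numpy as np

# Injective-overdetermined system (illustrative assumption): two equations, one unknown.
F = lambda x: np.array([x - 1.0, 2.0 * x - 1.0])
dF = lambda x: np.array([[1.0], [2.0]])   # full column rank, hence injective

x = 0.0
for _ in range(5):
    x = x - (np.linalg.pinv(dF(x)) @ F(x))[0]   # Gauss-Newton step (3)

print(x)      # 0.6 = 3/5, the least squares minimizer of ||F(x)||^2
print(F(x))   # [-0.4  0.2] != 0: the fixed point is not a zero of F
```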

Our first result concerns the local convergence properties of the inexact Newton-Gauss method for general singular systems with constant rank derivatives.

Theorem 9. Let be a continuously Fréchet differentiable nonlinear operator, where is open and convex. Suppose that , and that satisfies the modified L-average Lipschitz condition (25) on , where is given in (20). In addition, one assumes that , for any , and that where the constant satisfies . Let be the sequence generated by the inexact Newton-Gauss method with any initial point and the following conditions on the residual and the forcing term : where denotes the condition number of . Then, converges to a zero of in . Moreover, one has the following estimate: where the sequence is defined by (21).

Remark 10. If we take (in this case, and ) in Theorem 9, we obtain the local convergence of Newton’s method for solving singular systems, which has been studied by Dedieu and Kim in [16] for analytic singular systems with constant rank derivatives and by Li et al. in [25] for some special singular systems with constant rank derivatives. We now obtain that the convergence ball satisfies

If is full column rank for every , then we have . Thus, that is, . We immediately have the following corollary.

Corollary 11. Suppose that and that for any . Suppose that and that satisfies the modified L-average Lipschitz condition (25). Let be the sequence generated by the inexact Newton-Gauss method with any initial point and the condition (29) for the residual and the forcing term . Then, converges to a zero of in . Moreover, one has the following estimate: where the sequence is defined by (21) for .

In the case when the derivative is full row rank, the modified L-average Lipschitz condition (25) can be replaced by the L-average Lipschitz condition (24).

Theorem 12. Suppose that is full row rank, and satisfies the L-average Lipschitz condition (24) on , where is given in (20). In addition, one assumes that for any and that condition (28) holds. Let be the sequence generated by the inexact Newton-Gauss method with any initial point and the following conditions on the residual and the forcing term : Then, converges to a zero of in . Moreover, one has the following estimate: where the sequence is defined by (21).

Theorem 13. Suppose that is full row rank, and satisfies the L-average Lipschitz condition (24) on , where is given in (20). In addition, one assumes that for any and that condition (28) holds. Let be the sequence generated by the inexact Newton-Gauss method with any initial point and the following conditions on the control residual and the forcing term : Then, converges to a zero of in . Moreover, one has the following estimate: where the sequence is defined by (21).

Remark 14. In the case when is invertible in Theorem 13, we obtain the local convergence results of the inexact Newton-Gauss method for nonsingular systems, and the convergence ball in this case satisfies
In particular, taking , the convergence ball determined in (39) reduces to the one given by Wang in [10], and the value is the optimal radius of the convergence ball when the equality holds. We can therefore conclude that, with vanishing residuals, Theorem 13 merges into the theory of Newton’s method.

The result below is an extension of Smale's approximate zeros. We first recall the notion of an approximate zero of an analytic operator from a domain in a Banach space to another. In [2], Smale proposed two kinds of the notion: the first kind (in the sense of ) and the second kind (in the sense of ) of an approximate zero. A more reasonable definition for the second kind was presented in [27]; see also [28]. The notion of the approximate zero in the sense of was defined in [29], which is equivalent to the first kind (see [3]). The following unified definition is taken from [3].

Definition 15 (see [3]). Let be such that the sequence generated by Newton's method (2) is well defined and satisfies (40), where denotes some measurement of the approximation degree between and the zero point . Then, is called an approximate zero of in the sense of .
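As orientation, for the first kind (the sense of the distance to the zero) the estimate (40) is Smale's classical quadratic-convergence bound, quoted here as background:
$$\|x_n - x^{*}\| \leq \left(\frac{1}{2}\right)^{2^{n} - 1} \|x_0 - x^{*}\|, \qquad n = 0, 1, 2, \ldots;$$
the other kinds replace the quantity on the left and the measurement of the approximation degree accordingly.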

The concepts of an approximate zero for the Gauss-Newton method (3) for solving singular systems of equations and for the inexact Newton method (7) for solving nonsingular systems of equations were proposed in [25, 30], respectively. We now extend the notion of approximate zeros to the inexact Newton-Gauss method for solving singular systems of equations.

Definition 16. Let be such that the sequence generated by the inexact Newton-Gauss method (7) converges to a zero of (resp., ) and satisfies (40). Then, is called an INM-approximate solution (resp., approximate zero) of in the sense of .

For the remainder of this subsection, let be the function defined by Then, and the constants and defined in (14) and (15), respectively, have the following concrete forms:

To state our gamma-type theorem for the inexact Newton-Gauss method (7), we introduce some more notation. Let Since and , there exists at least one zero in . The smallest positive zero of in is denoted by . Recall that ; here and are given in (44) and (16), respectively. Let where is given by

Theorem 17. Suppose that is full row rank, and satisfies the γ-condition (26) on . Assume that and that for any . Let be the sequence generated by the inexact Newton-Gauss method with any initial point and the following conditions on the control residual and the forcing term : Then, converges to a zero of in and is an approximate zero of in the sense of .

One typical and important class of examples satisfying the γ-conditions is that of analytic functions. Following Smale’s idea in [2], Shub and Smale introduced in [14] the following invariant for analytic underdetermined systems with surjective derivatives: For the case when the derivative is not surjective, due to the loss of information on , Dedieu and Shub introduced in [15] the following invariant for analytic overdetermined systems: By [25, Proposition 5.2], one has that an analytic operator satisfies the γ-condition and the modified γ-condition. So, the conclusions of Theorem 17 still hold when the operator is analytic.
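For the reader's convenience, the invariant of Shub and Smale [14] for the surjective case has the following well-known form (quoted here as background; the corresponding invariant for the overdetermined case in [15] is defined analogously and is not reproduced here):
$$\gamma(F, x) = \sup_{k \geq 2} \left\| F'(x)^{\dagger} \, \frac{F^{(k)}(x)}{k!} \right\|^{\frac{1}{k-1}}.$$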

4. Proofs

In this section, we prove our main results of local convergence for inexact Newton-Gauss method (7) given in Section 3.

4.1. Proof of Theorem 9

The following lemma gives a perturbation bound for Moore-Penrose inverse, which is stated in [31, Corollary 7.1.1 and Corollary 7.1.2].

Lemma 18 (see [31]). Let and be matrices and let . Suppose that , and . Then, and
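As general background (and under the assumption that it matches the precise form used in [31], which is not reproduced above), the classical perturbation bound for the Moore-Penrose inverse states that if $\operatorname{rank}(B) = \operatorname{rank}(A)$ and $\|A^{\dagger}\| \|B - A\| < 1$, then
$$\|B^{\dagger}\| \leq \frac{\|A^{\dagger}\|}{1 - \|A^{\dagger}\| \|B - A\|}.$$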

Lemma 19. Suppose that satisfies the modified L-average Lipschitz condition on and that , where , , and are defined in (20), (15), and (14), respectively. Then, and

Proof. Since , we have It follows from Lemma 18 that and

Proof of Theorem 9. We will prove by induction that is the majorizing sequence for ; that is, Because , thus (56) holds, for . Now, we assume that , for some . For the case , we first notice that By using the modified -average Lipschitz condition (25), Lemma 19, the inductive hypothesis (56), and Lemma 1, one has that Thanks to (29), Since combining Lemmas 1 and 19, the modified -average Lipschitz condition (25), the inductive hypothesis (56), and the condition (29), we have Combining (28), (58), (59), and (62), we can obtain that By the definition of , we have . Then, we can obtain that Note that , for any , and Thus, in view of the definition of given in (21), one has that which implies . Therefore, the proof by induction is complete. Since converges to 0 (by Lemma 4), it follows from (56) that converges to and the estimate (30) holds for all . This completes the proof.

4.2. Proof of Theorem 12

Lemma 20. Suppose that is full row rank, and satisfies the L-average Lipschitz condition (24) on . Then, for any , one has and

Proof. Since , we have It follows from the Banach lemma that exists and Since is full row rank, we have and which implies that is full row rank; that is, .
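For completeness, the Banach lemma invoked in the preceding proof is the standard perturbation result: if a linear operator $E$ satisfies $\|E\| < 1$, then $I - E$ is invertible and
$$\left\| (I - E)^{-1} \right\| \leq \frac{1}{1 - \|E\|}.$$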

Proof of Theorem 12. Let be defined by with residual . Since one has that coincides with the sequence generated by the inexact Newton-Gauss method (7) for . In addition, we have and so Because , we have . Therefore, by (24), we can obtain that That is, satisfies the modified L-average Lipschitz condition (25) on . So, Theorem 9 is applicable and the sequence converges. Note that and ; it follows that the limit is a zero of .

4.3. Proof of Theorem 13

Lemma 21. Suppose that is full row rank, and satisfies the L-average Lipschitz condition (24) on . Then, one has

Proof. Since is full row rank, we have . It follows that By Lemma 20, is invertible for any . Thus, in view of the equality , for any matrix , one has that Therefore, Lemma 20 is applicable to conclude that

Proof of Theorem 13. Using Lemma 21, the L-average condition (24), and the residual condition (35) in place of Lemma 19, the modified L-average condition (25), and condition (29), respectively, one can complete the proof of Theorem 13 along the same lines as the proof of Theorem 9.

4.4. Proof of Theorem 17

Proof. Recall that the majorizing sequence is defined by (21) and the majorizing function is defined by (42). By Lemma 4, is strictly decreasing and converges to 0. We first note that (57) gives Using Lemma 8, the -condition (26), and Lemma 21, one has that Thanks to (60), we use the -condition (26), Lemma 21, and the condition (49) to obtain that Note that Then, it follows from (49) and (82) that Thus, we can obtain by combining (81), (84), and (48) that It is clear that the function is increasing monotonically with respect to in . Hence, we have Consequently, to show that is an approximate zero of , it suffices to prove . In fact, in view of the definition of given in (46), for any , we have . Consequently, we get that which is equivalent to . The proof is complete.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by Quzhou City Science and Technology Bureau Project of Zhejiang Province of China (Grant no. 20111046).