Abstract

The minimization of a quadratic function within an ellipsoidal trust region is an important subproblem in many nonlinear programming algorithms. When the number of variables is large, one of the most widely used strategies is to project the original problem onto a lower dimensional subspace. In this paper, we introduce an algorithm for solving nonlinear least squares problems. The algorithm constructs a basis for the Krylov subspace in conjunction with a model trust region technique to choose the step. The computed step on the lower dimensional subspace lies inside the trust region, and the Krylov subspace is terminated when a condition that allows the gradient to be decreased on it is satisfied. A convergence theory for this algorithm is presented, and the algorithm is shown to be globally convergent.

1. Introduction

Nonlinear least squares (NLS) problems are unconstrained optimization problems with special structure. These problems arise in many applications, such as the solution of overdetermined systems of nonlinear equations, some scientific experiments, pattern recognition, and maximum likelihood estimation. For more details about these problems, see [1]. The general formulation of the NLS problem is to determine a minimizer $x^* \in \mathbb{R}^n$ of the function
$$f(x) = \tfrac{1}{2}\,\|R(x)\|^2 = \tfrac{1}{2}\sum_{i=1}^{m} r_i^2(x), \qquad (1.1)$$
where $R(x) = (r_1(x), \dots, r_m(x))^T$, $R : \mathbb{R}^n \to \mathbb{R}^m$, and $m \ge n$.
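As a concrete illustration of formulation (1.1), the following minimal Python sketch (the exponential-fit residual and all names are invented for illustration) evaluates $R(x)$, the objective $f$, and the gradient $g(x) = J(x)^T R(x)$.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20)
data = np.exp(-1.5 * t)                    # synthetic observations (illustrative)

def residual(x):
    # Residual vector R(x) in R^m for a two-parameter exponential model, m = 20, n = 2.
    return np.exp(-x[0] * t) + x[1] - data

def jacobian(x):
    # Jacobian J(x) = R'(x), an m-by-n matrix.
    return np.column_stack((-t * np.exp(-x[0] * t), np.ones_like(t)))

def f(x):
    # NLS objective f(x) = 0.5 * ||R(x)||^2, as in (1.1).
    r = residual(x)
    return 0.5 * r @ r

def grad_f(x):
    # Gradient g(x) = J(x)^T R(x).
    return jacobian(x).T @ residual(x)

print(f(np.array([1.0, 0.0])), grad_f(np.array([1.0, 0.0])))
```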

There are two general types of algorithms for solving the NLS problem (1.1). The first type is most closely related to solving systems of nonlinear equations [2], and it leads to the Gauss-Newton and Levenberg-Marquardt methods [3]. The second type is closely related to unconstrained minimization techniques [4]. Of course, the two types of algorithms are closely related to each other [5].

The presented algorithm is a Newton-Krylov type algorithm. It requires a fixed-size limited storage proportional to the size of the problem and relies only upon matrix-vector products. It is based on the implicitly restarted Arnoldi method (IRAM) to construct a basis for the Krylov subspace and to reduce the Jacobian into a Hessenberg matrix, in conjunction with a trust region strategy to control the step on that subspace [6].

Trust region methods for unconstrained minimization are blessed with both strong theoretical convergence properties and accurate results in practice. The trial computational step in these methods is to find an approximate minimizer of some model of the true objective function within a trust region, in which a suitable norm of the correction lies inside a given bound. This restriction is known as the trust region constraint, and the bound on the norm is its radius. The radius is adjusted so that the successive quadratic models adequately predict the behavior of the true objective function within the trust region [7].

The trust region subproblem is the problem of finding $s_*$ so that
$$\psi(s_*) = \min\{\psi(s) : \|s\| \le \Delta\}, \qquad (1.2)$$
where $\Delta$ is some positive constant, $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^n$, and
$$\psi(s) = g^T s + \tfrac{1}{2}\, s^T H s,$$
where $g$ and $H$ are, respectively, the gradient vector and the Hessian matrix of $f$ or their approximations.

There are two different approaches to solve (1.2). These approaches are based on either solving several linear systems [8] or approximating the solution curve $s(\Delta)$ by a piecewise linear approximation, the so-called "dogleg strategies" [9]. In large-scale optimization, solving linear systems is computationally expensive. Moreover, it is not clear how to define the dogleg curves when the matrix $H$ is singular [10].
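For concreteness, the following Python sketch implements one common single-dogleg strategy for (1.2); it assumes the model matrix `B` is symmetric positive definite, which is exactly the assumption that fails in the singular case mentioned above, and it is only an illustration of the piecewise linear idea, not the method analyzed in this paper.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Approximate solution of min g^T s + 0.5 s^T B s subject to ||s|| <= delta,
    assuming B is symmetric positive definite (illustrative sketch only)."""
    s_newton = -np.linalg.solve(B, g)              # unconstrained (Newton) minimizer
    if np.linalg.norm(s_newton) <= delta:
        return s_newton
    # Cauchy point: minimizer of the model along the steepest descent direction.
    s_cauchy = -((g @ g) / (g @ B @ g)) * g
    if np.linalg.norm(s_cauchy) >= delta:
        return -(delta / np.linalg.norm(g)) * g    # steepest descent step to the boundary
    # Walk from the Cauchy point toward the Newton point until hitting the boundary.
    d = s_newton - s_cauchy
    a, b, c = d @ d, 2.0 * (s_cauchy @ d), s_cauchy @ s_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return s_cauchy + tau * d
```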

Several authors have studied inexact Newton methods for solving NLS problems [11]. Xiaofang et al. introduced stable factorized quasi-Newton methods for solving large-scale NLS problems [12]. Dennis et al. proposed a convergence theory for the structured BFGS secant method with an application to NLS [13]. The Newton-Krylov method is an example of inexact Newton methods. Krylov techniques inside a Newton iteration in the context of systems of equations have been proposed in [14]. The recent work of Sorensen provides an algorithm which is based on recasting the trust region subproblem into a parameterized eigenvalue problem. This algorithm provides a superlinearly convergent scheme to adjust the parameter and find the optimal solution from the eigenvector of the parameterized problem, as long as the hard case does not occur [15].

This contribution is organized as follows. In Section 2, we introduce the structure of the NLS problem and a general view about the theory of the trust region strategies and their convergence. The statement of the algorithm and its properties is developed in Section 3. Section 4 states the assumptions and presents the role of restarting mechanism to control the dimension of the Krylov subspace. The global convergence analysis is presented in Section 5. Concluding remarks and future ideas are given in the last section.

2. Structure of the Problem

The subspace technique plays an important role in solving the NLS problem (1.1). Assume that $x_c$ is the current iterate and $\mathcal{S}_c \subset \mathbb{R}^n$ is a Krylov subspace. Denote the dimension of $\mathcal{S}_c$ by $p$ and let $\{v_1, \dots, v_p\}$ be a set of linearly independent vectors in $\mathcal{S}_c$. The next iterate is $x_+ = x_c + s_c$ such that the increment $s_c \in \mathcal{S}_c$. Thus, we have $s_c = V_c z$ and $x_+ = x_c + V_c z$, where the matrix $V_c = [v_1, \dots, v_p]$ and $z \in \mathbb{R}^p$. The second-order Taylor expansion of $f$ at $x_c$ is
$$f(x_c + V_c z) \approx f(x_c) + (V_c^T g_c)^T z + \tfrac{1}{2}\, z^T \big(V_c^T \nabla^2 f(x_c) V_c\big) z, \qquad (2.1)$$
where the gradient $g_c$ is defined by $g_c = \nabla f(x_c) = J(x_c)^T R(x_c)$ and the Jacobian matrix is $J(x_c) = R'(x_c)$. If we consider $\hat g_c = V_c^T g_c$, we get the quadratic model
$$\hat\psi_c(z) = f(x_c) + \hat g_c^T z + \tfrac{1}{2}\, z^T \hat B_c z, \qquad (2.2)$$
where $\hat B_c$ approximates the reduced matrix $V_c^T \nabla^2 f(x_c) V_c$.
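A minimal Python sketch of this projection, assuming an orthonormal basis matrix `V` (columns spanning the subspace) and an explicit Hessian `H` are available; the reduced quantities play the roles of $\hat g_c$ and $\hat B_c$ in (2.2).

```python
import numpy as np

def reduced_model(f_c, g, H, V):
    """Build the reduced quadratic model psi(z) = f_c + (V^T g)^T z + 0.5 z^T (V^T H V) z.
    V is n-by-p with orthonormal columns; H is the (approximate) Hessian."""
    g_hat = V.T @ g                 # reduced gradient, a vector of dimension p
    B_hat = V.T @ H @ V             # reduced (approximate) Hessian, a p-by-p matrix
    psi = lambda z: f_c + g_hat @ z + 0.5 * (z @ B_hat @ z)
    return psi, g_hat, B_hat

# A step z in R^p on the subspace corresponds to the full-space step s = V @ z.
```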

The first-order necessary condition for a minimizer of (2.1) is $V_c^T g_c + V_c^T \nabla^2 f(x_c) V_c\, z = 0$. Thus, a solution of the quadratic model is a solution to an equation of the form
$$\hat B_c z = -\hat g_c.$$

The model trust region algorithm generates a sequence of points $\{x_k\}$, and at the $k$th stage of the iteration the quadratic model of problem (1.2) has the form
$$\psi_k(s) = g_k^T s + \tfrac{1}{2}\, s^T B_k s.$$
At this stage, an initial value for the trust region radius $\Delta_k$ is also available. An inner iteration is then performed which consists of using the current trust region radius, $\Delta_k$, and the information contained in the quadratic model to compute a step, $s_k$. Then a comparison of the actual reduction of the objective function
$$ared_k = f(x_k) - f(x_k + s_k)$$
and the reduction predicted by the quadratic model
$$pred_k = -\psi_k(s_k)$$
is performed. If there is a satisfactory reduction, then the step can be taken, or a possibly larger trust region can be used. If not, then the trust region is reduced and the inner iteration is repeated. For now, we leave unspecified what algorithm is used to form the step $s_k$ and how the trust region radius is changed. We also leave unspecified the selection of $B_k$, except to restrict it to be symmetric. Details on these points will be addressed in our forthcoming paper.
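The acceptance test and radius update can be sketched as follows; the thresholds and the expansion/contraction factors here are common illustrative choices, not values prescribed by the paper.

```python
def update_trust_region(ared, pred, delta, step_norm,
                        eta1=0.25, eta2=0.75, shrink=0.5, grow=2.0):
    """Compare the actual and predicted reductions and update the radius.
    Returns (accept_step, new_delta); the constants are illustrative."""
    rho = ared / pred if pred > 0.0 else -1.0
    if rho < eta1:
        # Poor agreement: reject the step and shrink the trust region.
        return False, shrink * step_norm
    if rho < eta2:
        # Acceptable agreement: accept the step and keep the current radius.
        return True, delta
    # Very good agreement: accept the step and enlarge the trust region.
    return True, max(delta, grow * step_norm)
```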

The solution of the quadratic model (2.2) is a solution to an equation of the form
$$(\hat B_c + \lambda I)\, z = -\hat g_c,$$
with $\lambda \ge 0$ and $\hat B_c + \lambda I$ positive semidefinite. This represents the first-order necessary conditions concerning the pair $z$ and $\lambda$, where $z$ is a solution to model (2.2) and $\lambda$ is the Lagrange multiplier associated with the constraint $\|z\| \le \Delta$. The sufficient conditions that ensure $z$ to be a solution to (2.2) are established in the following lemma, whose proof may be found in [16].

Lemma 2.1. Let $\lambda \ge 0$ and $z$ satisfy $(\hat B_c + \lambda I) z = -\hat g_c$ with $\hat B_c + \lambda I$ positive semidefinite. If $\|z\| \le \Delta$ and $\lambda(\Delta - \|z\|) = 0$, then $z$ is a solution to (2.2). Of course, if $\hat B_c + \lambda I$ is positive definite, then $z$ is the unique solution.

The main result which is used to prove that the sequence of gradients tends to zero for modified Newton methods is
$$\hat\psi_c(0) - \hat\psi_c(z) \;\ge\; \tfrac{1}{2}\,\|\hat g_c\|\,\min\!\left(\Delta,\; \frac{\|\hat g_c\|}{\|\hat B_c\|}\right),$$
where $z$ is a solution to (2.2). The geometric interpretation of this inequality is that, for a quadratic function, any solution to (2.2) produces a decrease that is at least as much as the decrease along the steepest descent direction $-\hat g_c$. A proof of this result may be found in [17].
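A brief sketch of the standard argument behind this bound, written in the notation of (2.2) (the display below is our own summary rather than a formula from the paper): along the steepest descent direction $z(t) = -t\,\hat g_c$ one has
$$\hat\psi_c(0) - \hat\psi_c(z(t)) = t\,\|\hat g_c\|^2 - \tfrac{t^2}{2}\,\hat g_c^T \hat B_c \hat g_c \;\ge\; t\,\|\hat g_c\|^2 - \tfrac{t^2}{2}\,\|\hat B_c\|\,\|\hat g_c\|^2,$$
and maximizing the right-hand side over $0 \le t \le \Delta/\|\hat g_c\|$ gives a decrease of at least $\tfrac{1}{2}\,\|\hat g_c\|\min(\Delta, \|\hat g_c\|/\|\hat B_c\|)$; since a solution of (2.2) decreases the model at least as much as the best steepest descent step inside the trust region, the inequality follows.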

3. Algorithmic Framework

The algorithm we discuss here requires that $f$ be twice differentiable at any point in its domain. This algorithm involves two levels. In the first level, we use the IRA method to reduce the Hessian to a tridiagonal matrix and to construct an orthonormal basis for the invariant subspace of the Hessian matrix. The second level is used to compute the step and update the trust region radius for the reduced local model (a model defined on the subspace). The desired properties of this algorithm are the following:

(1) the algorithm should be well defined for a sufficiently general class of functions, and it should be globally convergent;
(2) the algorithm should be invariant under linear affine scalings of the variables; that is, if we replace $x$ by $\hat x = Ux + b$, where $U$ is an orthonormal matrix and $b$ is a fixed vector, then applying the iteration to $\hat f(\hat x) = f(x)$ with an initial guess satisfying $\hat x_0 = Ux_0 + b$ should produce a sequence $\{\hat x_k\}$ related to the sequence $\{x_k\}$ by $\hat x_k = Ux_k + b$, where $\{x_k\}$ is produced by applying the algorithm to $f$ with initial guess $x_0$;
(3) the algorithm should provide a decrease that is at least as large as a given multiple of the minimum decrease that would be provided by a quadratic search along the steepest descent direction;
(4) the algorithm should give as good a decrease of the quadratic model as the direction of the negative gradient when the Hessian $\nabla^2 f(x_k)$ is indefinite, and should force the direction to be equal to the Newton direction when $\nabla^2 f(x_k)$ is symmetric positive definite.

The following describes a full iteration of a truncated Newton type method. Some of the previous characteristics will be obvious and the other ones will be proved in the next section.

Algorithm 3.1.

Step 0 (initialization). Given $x_0$, compute $f(x_0)$ and $g_0 = \nabla f(x_0)$. Choose $\Delta_0 > 0$, constants $0 < \mu_1 < \mu_2 < 1$, and a tolerance $\eta \in (0, 1)$.

Step 1 (construction of a basis for the Krylov subspace).
(a) Choose $z_0$, an initial guess which can be chosen to be zero, form $r_0 = -g_k - H_k z_0$, where $H_k = \nabla^2 f(x_k)$, and compute $\beta = \|r_0\|$ and $v_1 = r_0/\beta$.
(b) For $j = 1, 2, \dots$ until convergence, do:
(i) Form $w = H_k v_j$ and orthogonalize it against the previous $v_1, \dots, v_j$: $h_{ij} = w^T v_i$, $w = w - \sum_{i=1}^{j} h_{ij} v_i$, $h_{j+1,j} = \|w\|$, $v_{j+1} = w / h_{j+1,j}$.
(ii) Compute the residual norm $\rho_j$ of the solution that would be obtained if we stopped at this step.
(iii) If $\rho_j \le \eta\,\|g_k\|$, where $\eta < 1/\kappa_k$ and $\kappa_k$ is an approximation to the condition number of $H_k$, then set $p = j$ and go to Step 2; else continue.

Step 2 (compute the trial step).
(i) Construct the local model
$$\hat\psi_k(z) = f(x_k) + \hat g_k^T z + \tfrac{1}{2}\, z^T \hat B_k z,$$
where $\hat g_k = V_k^T g_k$ and $\hat B_k = V_k^T H_k V_k$.
(ii) Compute the solution $z_k$ to the problem $\min\{\hat\psi_k(z) : \|z\| \le \Delta_k\}$ and set $s_k = V_k z_k$.

Step 3 (test the step and update $\Delta_k$).
(i) Evaluate $f(x_k + s_k)$, $ared_k = f(x_k) - f(x_k + s_k)$, and $pred_k = f(x_k) - \hat\psi_k(z_k)$.
(ii) If $ared_k / pred_k < \mu_1$, then do not accept the step $s_k$, reduce the trust region radius $\Delta_k$, and go to Step 2 to compute a new step. Else, if $\mu_1 \le ared_k / pred_k < \mu_2$, then accept the step and keep the previous trust region radius. Else, accept the step and increase the trust region radius.

Step 4. Set $x_{k+1} = x_k + s_k$, $k = k + 1$, and go to Step 1.
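The following Python sketch corresponds to Step 1 (the basis construction with the residual-based stopping test). It assumes a routine computing exact Hessian-vector products and replaces the condition-number-based tolerance of Step 1(b)(iii) with a simple fixed relative tolerance `tol`; the function name and these simplifications are ours, not the paper's. For a symmetric Hessian the Arnoldi process below reduces to the Lanczos recurrence with full reorthogonalization.

```python
import numpy as np

def krylov_basis(hess_vec, g, tol=1e-3, max_dim=30):
    """Build an orthonormal basis V for the Krylov subspace generated by the
    (negative) gradient, together with the projected matrix T = V^T H V,
    stopping when the residual of the projected Newton equations is small
    relative to ||g||. hess_vec(v) must return H @ v (illustrative sketch)."""
    n = g.shape[0]
    beta = np.linalg.norm(g)
    V = np.zeros((n, max_dim + 1))
    H = np.zeros((max_dim + 1, max_dim))          # Hessenberg (here tridiagonal) matrix
    V[:, 0] = -g / beta                           # start from the steepest descent direction
    for j in range(max_dim):
        w = hess_vec(V[:, j])
        for i in range(j + 1):                    # orthogonalize against previous vectors
            H[i, j] = w @ V[:, i]
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        T = H[:j + 1, :j + 1]
        # Galerkin condition: T y = beta * e_1 corresponds to the Newton equations
        # H s = -g with s = V y (assumes T nonsingular; see Section 4).
        y = np.linalg.solve(T, beta * np.eye(j + 1)[:, 0])
        res = H[j + 1, j] * abs(y[-1])            # classical Arnoldi residual identity
        if res <= tol * beta or H[j + 1, j] == 0.0:   # termination test or soft breakdown
            return V[:, :j + 1], T
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :max_dim], H[:max_dim, :max_dim]
```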

4. The Restarting Mechanism

In this section, we discuss the important role of the restarting mechanism in handling a possible failure of nonsingularity of the Hessian matrix, and we introduce the assumptions under which we prove global convergence.

Let the sequence of iterates generated by Algorithm 3.1 be $\{x_k\}$; for this sequence we make the following global assumptions:

(G1) for all $k$, $x_k \in \Omega$ and $x_k + s_k \in \Omega$, where $\Omega \subset \mathbb{R}^n$ is a convex set;
(G2) $f$ is twice continuously differentiable on $\Omega$ and $g_k \ne 0$ for all $k$;
(G3) $\nabla^2 f(x)$ is bounded in norm on $\Omega$ and $H_k$ is nonsingular for all $k$;
(G4) for all $k$ such that $g_k \ne 0$, the termination condition $\|H_k s_k + g_k\| \le \eta_k \|g_k\|$, with $\eta_k < 1$, is satisfied.

An immediate consequence of the global assumptions is that the Hessian matrix is bounded; that is, there exists a constant $\beta_1 > 0$ such that $\|\nabla^2 f(x)\| \le \beta_1$ for all $x \in \Omega$. Therefore, the condition numbers $\kappa(H_k)$ of the Hessian are bounded.

Assumption G3, that $H_k$ is nonsingular, is necessary since the IRA iteration is only guaranteed to converge for nonsingular systems. We first discuss the two ways in which the IRA method can break down. This can happen either if $h_{j+1,j} = 0$, so that the next basis vector $v_{j+1}$ cannot be formed, or if the projected matrix is singular when the maximum number of IRA steps has been taken, so that the final iterate cannot be formed. The first situation has often been referred to as a soft breakdown, since $h_{j+1,j} = 0$ implies that the projected matrix is nonsingular and the current subspace solution is the exact solution [18]. The second case is more serious in that it causes a convergence failure of the Arnoldi process. One possible recourse is to hope that the projected matrix is nonsingular for some smaller subspace dimension. If such a dimension exists, we can compute the corresponding step and then restart the algorithm using the resulting point as the new initial guess. It may not be possible to do this.

To handle the problem of a singular projected matrix, we can use a technique similar to the one used in the full dimensional space. If $\hat B_k$ is singular, or its condition number is greater than $\varepsilon^{-1/2}$, where $\varepsilon$ is the machine epsilon, then we perturb the quadratic model to
$$\hat\psi_k(z) = f(x_k) + \hat g_k^T z + \tfrac{1}{2}\, z^T (\hat B_k + \mu_k I) z,$$
where $\mu_k = \sqrt{p\,\varepsilon}\,\|\hat B_k\|$. The condition number of $\hat B_k + \mu_k I$ is roughly $\varepsilon^{-1/2}$. For more details about this technique, see [4].
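A sketch of this safeguard in Python; the threshold and the size of the shift follow the description above, but the exact constants should be treated as illustrative assumptions rather than the paper's prescription.

```python
import numpy as np

def safeguard_reduced_matrix(B_hat):
    """Perturb the reduced matrix B_hat to B_hat + mu*I when it is singular or
    too ill conditioned, so that the quadratic model remains numerically usable."""
    eps = np.finfo(float).eps
    p = B_hat.shape[0]
    cond = np.linalg.cond(B_hat)
    if not np.isfinite(cond) or cond > 1.0 / np.sqrt(eps):
        mu = np.sqrt(p * eps) * np.linalg.norm(B_hat, 1)   # illustrative choice of shift
        return B_hat + mu * np.eye(p), mu
    return B_hat, 0.0
```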

We have discussed the possibility of stagnation in the linear IRA method, which results in a breakdown of the nonlinear iteration. Sufficient conditions under which stagnation of the linear iteration never occurs are the following:

(1) we have to ensure that the steepest descent direction $-g_k$ belongs to the subspace $\mathcal{S}_k$. Indeed, in Algorithm 3.1 the Krylov subspace will contain the steepest descent direction, because $v_1 = -g_k/\|g_k\|$ when $z_0 = 0$ and the Hessian is nonsingular;
(2) there is no difficulty if one requires the Hessian matrix to be positive definite for all iterates: then the termination condition can be satisfied, and so convergence of the sequence of iterates will follow. This will be the case for some problems, but requiring the Hessian to be positive definite everywhere is very restrictive.

One of the main restrictions of most Newton-Krylov schemes is that the subspace onto which a given Newton step is projected must solve the Newton equations with a certain accuracy, which is monitored by the termination condition (assumption G4). This condition is enough to essentially guarantee convergence of the trust region algorithm. Practically, the main difficulty is that one does not know in advance whether the subspace chosen for projection will be good enough to guarantee this condition. Thus, the subspace dimension $p$ can be chosen large enough that the termination condition will eventually be satisfied; but when $p$ is too large, the computational cost and the storage become too high. An alternative is to restart the algorithm, keeping the projected matrix nonsingular and the steepest descent direction inside the subspace. Moreover, preconditioning and scaling are essential for the successful application of these schemes [19].
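Assuming the termination condition of G4 takes the usual inexact Newton form $\|H_k s_k + g_k\| \le \eta_k\|g_k\|$, it can be monitored directly from the full-space residual of the projected step, as in the following sketch (the forcing term `eta` is an assumed illustrative value):

```python
import numpy as np

def termination_satisfied(H, g, V, z, eta=0.1):
    """Check the inexact Newton condition ||H (V z) + g|| <= eta * ||g|| for a
    step z computed on the subspace spanned by the columns of V."""
    s = V @ z                                   # full-space step corresponding to z
    return np.linalg.norm(H @ s + g) <= eta * np.linalg.norm(g)
```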

5. Global Convergence Analysis

In this section, we establish some convergence properties possessed by Algorithm 3.1. The major differences between the proposed results and the preexisting ones arise from the fact that a lower dimensional quadratic model is used, rather than the full dimensional model used in the preexisting results [10].

Lemma 5.1. Let the global assumptions hold. Then for any , one has

Proof. Let be the residual associated with , so that and . Then , and . So, Hence, Next, implies , which gives Introducing (5.4) into (5.3), we obtain the following: Inequality (5.1) can be written as Condition (5.6) tells us that at each iteration of Algorithm 3.1 we require that the termination condition holds. Since the condition numbers are bounded from above, condition (5.6) gives where is the angle between and , and is the upper bound of the condition numbers. Inequality (5.7) shows that the acute angle is bounded away from $\pi/2$.

The following lemma shows that the termination condition of assumption G4 implies that the cosine of the angle between the gradient and the Krylov subspace is bounded below.

Lemma 5.2. Let the global assumptions hold. Then

Proof. Let be a vector of satisfying the termination condition . From Lemma 5.1 and the fact that , we obtain From the Cauchy-Schwarz inequality, we obtain Combining formulas (5.9) and (5.10), we obtain the result.

The following lemma establishes that Algorithm 3.1 achieves a decrease in the quadratic model on the lower dimensional space that is at least equal to the decrease in the quadratic model on the full dimensional space.

Lemma 5.3. Let the global assumptions hold and let be a solution of the minimization problem . Then where .

Proof. Since . Thus . From Step 3 of Algorithm 3.1, we obtain We have to convert the lower bound of this inequality to one involving rather than . Using Lemma 5.2, we obtain , but . Substituting in (5.14), we obtain which completes the proof.

The following two facts will be used in the remainder of the proof.

Fact 1.
By Taylor's theorem, for any and , we have where , with . Thus,

Fact 2.
For any sequence generated by an algorithm satisfying the global assumptions, the related sequence of function values is monotonically decreasing and bounded from below; that is, it converges as .

The next result establishes that every limit point of the sequence of iterates satisfies the first-order necessary conditions for a minimum.

Lemma 5.4. Let the global assumptions hold and let Algorithm 3.1 be applied to , generating a sequence . Then .

Proof. Since for all , we have Consider any with , . Thus, if , then . Let and . There are two cases: either for all , or eventually leaves the ball . Suppose for all ; then for all . Thus, by Lemma 5.3, we have Put , introduce inequality (5.19) into (5.17), and since , we obtain This gives, for sufficiently small and , that In addition, we have Inequalities (5.21) and (5.22) tell us that, for sufficiently small, we cannot get a decrease in in Step 3(ii) of Algorithm 3.1. It follows that is bounded away from 0, but since and since is bounded from below, we must have , which is a contradiction. Therefore, must be outside for some .
Let be the first term after that does not belong to . From inequality (5.23), we get If for all , we have If there exists at least one with , we get In either case, we have Since is bounded below and is a monotonically decreasing sequence, it converges. Hence, as . The proof is completed by using Lemma 5.2.

The following lemma proves that, under the global assumptions, if each member of the sequence of iterates generated by Algorithm 3.1 satisfies the termination condition of the algorithm, then this sequence converges to one of its limit points.

Lemma 5.5. Let the global assumptions hold, assume that for all , is Lipschitz continuous with , and let be a limit point of the sequence with positive definite. Then converges q-superlinearly to .

Proof. Let be a subsequence of the sequence that converges to , where is a limit point of . From Lemma 5.4, . Since is positive definite and is continuous, there exists such that if , then is positive definite, and if , then . Let .
Since , we can find with and for all .
Find such that and . Consider any with .
We claim that , which implies that the entire sequence beyond remains in . Suppose does not belong to . Since , does not belong to either. So, the truncated-Newton step is within the trust region, and we obtain Since , it follows that , which is a contradiction. Hence, for all , and so, since is a strictly decreasing sequence and is the unique minimizer of in , we have that .

The following lemma establishes that the rate of convergence of the sequence of iterates generated by Algorithm 3.1 is q-superlinear.

Lemma 5.6. Let the assumptions of Lemma 5.5 hold. Then the rate of convergence of the sequence is q-superlinear.

Proof. To show that the convergence rate is superlinear, we will show that eventually will always be less than , and hence the truncated-Newton step will always be taken. Since is positive definite, it follows that the convergence rate of to is superlinear.
To show that eventually the truncated-Newton step is always shorter than the trust region radius, we need a particular lower bound on . By assumption, for all large enough, is positive definite. Hence, either the truncated-Newton step is longer than the trust region radius, or is the truncated-Newton step. In either case and so it follows that . By Lemma 5.3, for all large enough, we have Thus, So, by the continuity of , for all large enough, we obtain Finally, by the argument leading up to Fact 1 and the Lipschitz continuity assumption, we have Thus, for any and large enough, we obtain Thus, by Step 3(ii) of Algorithm 3.1, there is a such that if , then will be less than only if . It follows from the superlinear convergence of the truncated-Newton method that, for close enough to and large enough, Now, if is bounded away from for all large , then we are done. Otherwise, if for an arbitrarily large , is reduced, that is, , then we have and so the full truncated-Newton step is taken. Inductively, this occurs for all subsequent iterates, and superlinear convergence follows.

The next result shows that under the global assumptions every limit point of the sequence of iterates satisfies the second-order necessary conditions for a minimum.

Lemma 5.7. Let the global assumptions hold, and assume that for all the symmetric matrix satisfies , , and for some . If , then is positive semidefinite, where is the smallest eigenvalue of the matrix .

Proof. Suppose, to the contrary, that . Then there exists such that if , then . For all and for all , we obtain Using inequality (5.17), we obtain Since the last quantity goes to zero as , and since a Newton step is never taken for , it follows from Step 3(ii) of Algorithm 3.1 that, for sufficiently small, cannot be decreased further. Thus, is bounded below, but Since is bounded below, , so , which is a contradiction. Hence . This completes the proof of the lemma.

6. Concluding Remarks

In this paper, we have shown that the implicitly restarted Arnoldi method can be combined with the Newton iteration and trust region strategy to obtain a globally convergent algorithm for solving large-scale NLS problems.

The main restriction of this scheme is that the subspace onto which a given Newton step is projected must solve the Newton equations with a certain accuracy which is monitored by the termination condition. This is enough to guarantee convergence.

This theory is sufficiently general that it holds for any algorithm that projects the problem onto a lower dimensional subspace. The convergence results indicate promise for this research in solving large-scale NLS problems. Our next step is to investigate the performance of this algorithm on some NLS problems. The results will be reported in our forthcoming paper.

Acknowledgment

The authors would like to thank the editor-in-chief and the anonymous referees for their comments and useful suggestions in the improvement of the final form of this paper.