Abstract

New convergence properties of the proximal augmented Lagrangian method are established for continuous nonconvex optimization problems with both equality and inequality constraints. In particular, the multiplier sequences are not required to be bounded. Different convergence results are discussed, depending on whether the iterative sequence generated by the algorithm converges or diverges. Furthermore, under a certain convexity assumption, we show that every accumulation point of the iterative sequence is either a degenerate point or a KKT point of the primal problem. Numerical experiments are presented at the end.

1. Introduction

In this paper, we consider the following nonlinear programming problem:

$$(P)\qquad \min\ f(x)\quad \text{s.t.}\quad g_i(x)\le 0,\ i=1,\dots,m,\qquad h_j(x)=0,\ j=1,\dots,l,\qquad x\in X,$$

where $f$, $g_i$ for each $i=1,\dots,m$, and $h_j$ for each $j=1,\dots,l$ are all continuously differentiable functions, and $X$ is a nonempty closed set in $\mathbb{R}^n$. Denote by $F$ the feasible region and by $S$ the solution set of (P).

Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods, a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous augmented Lagrangian algorithm, based on the Powell-Hestenes-Rockafellar (PHR) formula [13], has been successfully used to define practical nonlinear programming algorithms [4–7]. At each iteration, a minimization problem with simple constraints is approximately solved, whereas Lagrange multipliers and penalty parameters are updated in the master routine. The advantage of the augmented Lagrangian approach over other methods is that the subproblems can be solved using algorithms able to handle a very large number of variables without using matrix factorizations of any kind.

An indispensable assumption in most existing global convergence analyses for augmented Lagrangian methods is that the multiplier sequence generated by the algorithm is bounded. This restrictive assumption limits the application of augmented Lagrangian methods in many practical situations. Important work in this direction includes [8], where global convergence of modified augmented Lagrangian methods was established for nonconvex optimization with equality constraints, and the studies of Andreani et al. [4] and Birgin et al. [9], who investigated augmented Lagrangian methods using safeguarding strategies for nonconvex constrained problems. Recently, for inequality-constrained global optimization, Luo et al. [10] established the convergence properties of primal-dual methods based on four types of augmented Lagrangian functions without the boundedness assumption on the multiplier sequence. More information can be found in [5, 11, 12].

In this paper, for the optimization problem (P) with both equality and inequality constraints, we further study the convergence properties of the proximal augmented Lagrangian method without requiring boundedness of the multiplier sequences. The main contribution of this paper lies in the following three aspects. First, more general constraints are considered: we do not restrict attention to inequality constraints only, as in [10, 13], or require boundedness of the multiplier sequence, as in [9]. Second, an essential assumption for the global convergence properties given in [4–7, 9, 10] is that the iterative sequence be convergent in advance; here, we further discuss the case when the iterative sequence $\{x^k\}$ is divergent and develop a necessary and sufficient condition for $\{f(x^k)\}$ to converge to the optimal value of the primal problem. Third, the definition of degeneracy in [9, 10] is extended from inequality constraints to both inequality and equality constraints.

This paper is organized as follows. In Section 2, we propose the multiplier algorithm and study its global convergence properties. Preliminary numerical results are reported in Section 3. The conclusion is drawn in Section 4.

2. Multiplier Algorithms

The primal augmented Lagrangian function for (P), based on the PHR formula [13], is

$$L_c(x,\lambda,\mu) = f(x) + \sum_{j=1}^{l}\Big(\lambda_j h_j(x) + \frac{c}{2}\,h_j(x)^2\Big) + \frac{1}{2c}\sum_{i=1}^{m}\Big(\max\{0,\ \mu_i + c\,g_i(x)\}^2 - \mu_i^2\Big), \qquad (2.1)$$

where $(\lambda,\mu,c)\in\mathbb{R}^l\times\mathbb{R}^m_+\times\mathbb{R}_{++}$, and $\mathbb{R}_{++}$ denotes the set of all positive real scalars, that is, $\mathbb{R}_{++}=\{t\in\mathbb{R} : t>0\}$.

Given $(\lambda,\mu,c)\in\mathbb{R}^l\times\mathbb{R}^m_+\times\mathbb{R}_{++}$, the augmented Lagrangian relaxation problem associated with the augmented Lagrangian $L_c$ is defined by

$$\min_{x\in X}\ L_c(x,\lambda,\mu). \qquad (2.2)$$

Given $\epsilon \ge 0$, the $\epsilon$-optimal solution set of (2.2), denoted by $X_\epsilon(\lambda,\mu,c)$, is defined as

$$X_\epsilon(\lambda,\mu,c) = \Big\{x\in X : L_c(x,\lambda,\mu)\ \le\ \inf_{y\in X} L_c(y,\lambda,\mu) + \epsilon\Big\}.$$

If $X$ is closed and bounded, then a global optimal solution of (2.2) exists. However, if $X$ is unbounded, then (2.2) may be unsolvable. To overcome this difficulty, we assume throughout this paper that $f$ is bounded on $X$ from below, that is,

$$\inf_{x\in X} f(x) > -\infty. \qquad (2.3)$$

This assumption is rather mild in optimization, because otherwise the objective function can be replaced by, for instance, $e^{f(x)}$, which has the same minimizers and is bounded below. It ensures that the $\epsilon$-optimal solution set $X_\epsilon(\lambda,\mu,c)$ with $\epsilon > 0$ always exists, since $L_c(\cdot,\lambda,\mu)$ is bounded from below on $X$ by (2.1) and (2.3).
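Both assertions can be made precise by completing the square in (2.1): for each $j$ and $i$,

$$\lambda_j h_j(x) + \frac{c}{2}\,h_j(x)^2 = \frac{c}{2}\Big(h_j(x) + \frac{\lambda_j}{c}\Big)^2 - \frac{\lambda_j^2}{2c} \ \ge\ -\frac{\lambda_j^2}{2c}, \qquad \max\{0,\ \mu_i + c\,g_i(x)\}^2 - \mu_i^2 \ \ge\ -\mu_i^2,$$

so that $L_c(x,\lambda,\mu) \ge f(x) - (\|\lambda\|^2 + \|\mu\|^2)/(2c)$ for all $x \in X$. Moreover, at any feasible point $y$, the equality terms vanish and $0 \le \max\{0,\ \mu_i + c\,g_i(y)\} \le \mu_i$, so that $L_c(y,\lambda,\mu) \le f(y)$. These two elementary bounds are used repeatedly in the proofs below.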

Recall that a vector $\bar x \in X$ is said to be a KKT point of (P) if there exist $\lambda_j \in \mathbb{R}$ for each $j = 1,\dots,l$ and $\mu_i \ge 0$ for each $i = 1,\dots,m$ such that

$$0 \in \nabla f(\bar x) + \sum_{j=1}^{l}\lambda_j \nabla h_j(\bar x) + \sum_{i=1}^{m}\mu_i \nabla g_i(\bar x) + N_X(\bar x), \qquad \mu_i\, g_i(\bar x) = 0,\ \ i=1,\dots,m, \qquad (2.4)$$

where $N_X(\bar x)$ denotes the normal cone of $X$ at $\bar x$. The set of all $\lambda$ and $\mu$ satisfying (2.4) is denoted by $M(\bar x)$.
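To make the KKT system (2.4) concrete, the following small Python helper illustrates how its violation can be measured in the simplest case $X = \mathbb{R}^n$, where $N_X(x) = \{0\}$. It is an illustration of ours, not code from the paper; the function-handle arguments (grad_f, g, jac_g, h, jac_h) are hypothetical names.

```python
import numpy as np

def kkt_residuals(x, lam, mu, grad_f, g, jac_g, h, jac_h):
    """Residuals of the KKT system (2.4) when X = R^n, so N_X(x) = {0}.
    jac_g(x) is the (m, n) Jacobian of g; jac_h(x) is the (l, n) Jacobian of h."""
    stationarity = grad_f(x) + jac_h(x).T @ lam + jac_g(x).T @ mu
    return {
        "stationarity":    np.linalg.norm(stationarity),
        "primal_eq":       np.abs(h(x)).max(initial=0.0),           # h_j(x) = 0
        "primal_ineq":     np.maximum(g(x), 0.0).max(initial=0.0),  # g_i(x) <= 0
        "dual":            np.maximum(-mu, 0.0).max(initial=0.0),   # mu_i >= 0
        "complementarity": np.abs(mu * g(x)).max(initial=0.0),      # mu_i g_i(x) = 0
    }
```

A point $\bar x$ is (approximately) a KKT point precisely when all five residuals are (near) zero for some multipliers $(\lambda, \mu)$.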

The multiplier algorithm based on the primal augmented Lagrangian $L_c$ is proposed below. One of its main features is that the Lagrange multipliers associated with the equality and inequality constraints are not restricted to be bounded, which makes the algorithm applicable to many practical problems.

Algorithm 2.1 (Multiplier algorithm based on $L_c$).
Step 1. Select an initial point $x^0 \in X$, initial multipliers $\lambda^0 \in \mathbb{R}^l$ and $\mu^0 \in \mathbb{R}^m_+$, an initial penalty parameter $c_0 > 0$, and an initial tolerance $\epsilon_0 \ge 0$. Set $k := 0$.
Step 2. Compute the multipliers $\lambda^k$ and $\mu^k$, the tolerance $\epsilon_k$, and the penalty parameter $c_k$ via (2.5)-(2.8).
Step 3. Find $x^k \in X_{\epsilon_k}(\lambda^k, \mu^k, c_k)$.
Step 4. If $x^k$ is feasible for (P) and, together with $(\lambda^k, \mu^k)$, satisfies the KKT condition (2.4), then STOP; otherwise, set $k := k+1$ and go back to Step 2.
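For readability, we also give a minimal Python sketch of a PHR-type outer loop in the spirit of Algorithm 2.1. It rests on assumptions beyond the text: $X = \mathbb{R}^n$, the classical first-order updates $\lambda \leftarrow \lambda + c\,h(x)$ and $\mu \leftarrow \max\{0, \mu + c\,g(x)\}$ in place of the exact rules (2.5)-(2.8), a geometric penalty increase, and scipy's BFGS as the subproblem solver; it is a sketch, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def phr_lagrangian(x, lam, mu, c, f, g, h):
    """Value of the PHR augmented Lagrangian (2.1) at x (assumed standard form)."""
    gx, hx = g(x), h(x)
    eq_term = lam @ hx + 0.5 * c * (hx @ hx)
    ineq_term = (np.maximum(0.0, mu + c * gx) ** 2 - mu ** 2).sum() / (2.0 * c)
    return f(x) + eq_term + ineq_term

def multiplier_method(f, g, h, x0, n_eq, n_ineq,
                      c0=1.0, rho=10.0, eps0=1.0, max_iter=50, tol=1e-6):
    """Outer loop of a PHR-type multiplier method (illustrative sketch, X = R^n)."""
    x = np.asarray(x0, dtype=float)
    lam, mu = np.zeros(n_eq), np.zeros(n_ineq)
    c, eps = c0, eps0
    for k in range(max_iter):
        # Step 3: approximately solve the relaxation problem (2.2).
        res = minimize(phr_lagrangian, x, args=(lam, mu, c, f, g, h),
                       method="BFGS", options={"gtol": eps})
        x = res.x
        gx, hx = g(x), h(x)
        # Step 4: stop once the iterate is numerically feasible.
        if np.abs(hx).max(initial=0.0) <= tol and gx.max(initial=-np.inf) <= tol:
            break
        # Step 2 (next iteration): first-order multiplier updates, geometric
        # penalty growth, and a subproblem tolerance driven to zero.
        lam = lam + c * hx
        mu = np.maximum(0.0, mu + c * gx)
        c *= rho
        eps = max(eps / 10.0, 1e-12)
    return x, lam, mu
```

Note that the multipliers are deliberately not safeguarded or projected onto a bounded set, which matches the setting of this paper: the sequences $\{\lambda^k\}$ and $\{\mu^k\}$ may be unbounded.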

The iterative formula for $\epsilon_k$ given in (2.7) is only used to guarantee its convergence to zero. In fact, in practical numerical experiments, we can choose $\epsilon_k$ in other ways to improve the convergence of the algorithm. The following lemma gives the relationship between the penalty parameter $c_k$ and the multipliers $\lambda^k$ and $\mu^k$.

Lemma 2.2. Let $\{(\lambda^k, \mu^k, c_k)\}$ be given as in Algorithm 2.1. Then the terms $\lambda^k_j / c_k$, $(\lambda^k_j)^2 / c_k$, $\mu^k_i / c_k$, and $(\mu^k_i)^2 / c_k$ all approach zero as $k \to \infty$.

Proof. This follows immediately from (2.8).

To establish the convergence properties of Algorithm 2.1, we first consider the perturbation analysis of (P). Given $\delta \ge 0$, define the perturbation of the feasible region as

$$F_\delta = \{x \in X : g_i(x) \le \delta,\ i = 1,\dots,m,\ \ |h_j(x)| \le \delta,\ j = 1,\dots,l\}.$$

It is clear that $F_0$ coincides with the feasible set $F$ of (P). The corresponding perturbation function is given as

$$v(\delta) = \inf\{f(x) : x \in F_\delta\}, \qquad (2.12)$$

and the perturbation of the level set as $W_\delta = \{x \in F_\delta : f(x) \le v(0) + \delta\}$.

The following result shows that the perturbation function is upper semicontinuous at zero from the right.

Lemma 2.3. The perturbation function $v$ is upper semicontinuous at zero from the right.

Proof. Since $F \subseteq F_\delta$ for any $\delta > 0$, we have $v(\delta) \le v(0)$ by definition (2.12). This implies that $\limsup_{\delta \downarrow 0} v(\delta) \le v(0)$.

Lemma 2.4. Let $\{x^k\}$ be given as in Algorithm 2.1. For any $\delta > 0$, one has $f(x^k) \le v(0) + \delta$ whenever $k$ is sufficiently large.

Proof. For any given $\delta > 0$, note that $x^k \in X_{\epsilon_k}(\lambda^k, \mu^k, c_k)$, so for every feasible $y$ we have $L_{c_k}(x^k, \lambda^k, \mu^k) \le L_{c_k}(y, \lambda^k, \mu^k) + \epsilon_k \le f(y) + \epsilon_k$, while completing the square in (2.1) gives $f(x^k) \le L_{c_k}(x^k, \lambda^k, \mu^k) + (\|\lambda^k\|^2 + \|\mu^k\|^2)/(2c_k)$. Therefore, $f(x^k) \le v(0) + \epsilon_k + (\|\lambda^k\|^2 + \|\mu^k\|^2)/(2c_k)$, and it follows from (2.7) and Lemma 2.2 that $f(x^k) \le v(0) + \delta$ when $k$ is large enough.

Lemma 2.5. Let $\{x^k\}$ be given as in Algorithm 2.1. For any $\delta > 0$, one has $x^k \in F_\delta$ whenever $k$ is sufficiently large.

Proof. We prove this result by way of contradiction. Suppose that we can find a $\delta_0 > 0$ and an infinite subsequence $K$ such that $x^k \notin F_{\delta_0}$ for all $k \in K$, as in (2.17). It follows from (2.17) that (2.19) holds along $K$.
Since the violation in (2.17) must come from an inequality or an equality constraint, we need to consider the following two cases.
Case 1. There exist an index $i_0$ and an infinite subsequence $K_1 \subseteq K$ such that $g_{i_0}(x^k) > \delta_0$ for all $k \in K_1$. It then follows from (2.19) that (2.20) holds along $K_1$.
Using Lemma 2.2 and the nonnegativity of $\mu^k_{i_0}$ gives the required growth estimate whenever $k$ is sufficiently large. This, together with (2.20), yields a contradiction by letting $k$ approach $\infty$ in $K_1$.
Case 2. There exist an index $j_0$ and an infinite subsequence $K_2 \subseteq K$ such that $|h_{j_0}(x^k)| > \delta_0$ for all $k \in K_2$. It follows from (2.19) that the corresponding quadratic penalty term diverges, where the last step is due to Lemma 2.2, since $\lambda^k_{j_0}/c_k \to 0$ and $c_k \to \infty$. Taking limits in the resulting inequality again yields a contradiction. This completes the proof.

Lemma 2.6. Let $\{x^k\}$ be given as in Algorithm 2.1. For any $\delta > 0$, one has $v(\delta) \le f(x^k)$ whenever $k$ is sufficiently large.

Proof. For an arbitrary $\delta > 0$, Lemma 2.5 gives $x^k \in F_\delta$ for all sufficiently large $k$, and hence $v(\delta) \le f(x^k)$ by (2.12). The proof is complete.

With these preparations, the global convergence property of Algorithm 2.1 can be given. It shows that if the algorithm terminates in finitely many steps, then we obtain a KKT point of (P); otherwise, every limit point of the iterative sequence is an optimal solution of (P).

Theorem 2.7. Let $\{x^k\}$ be the iterative sequence generated by Algorithm 2.1. If Algorithm 2.1 terminates in finitely many steps, then one obtains a KKT point of (P); otherwise, every limit point of $\{x^k\}$ belongs to $S$.

Proof. According to the construction of Algorithm 2.1, the first part is clear. It remains to prove the second part. Let $\delta > 0$ be given. It follows from Lemmas 2.4-2.6 that, when $k$ is large enough, we have $v(\delta) \le f(x^k) \le v(0) + \delta$ and $x^k \in F_\delta$. Thus, $x^k \in W_\delta$, as in (2.26). Note that $F_\delta$ and $W_\delta$ are closed, due to the continuity of $f$, $g_i$ for all $i$, and $h_j$ for all $j$, and the closedness of $X$. Taking the limit in (2.26) yields that every limit point $\bar x$ of $\{x^k\}$ satisfies $\bar x \in W_\delta$, which further shows that $\bar x \in F$ and $f(\bar x) \le v(0)$, since $\delta$ is arbitrary; that is, $\bar x \in S$. The proof is complete.

The foregoing result applies when $\{x^k\}$ has at least one accumulation point. However, a natural question arises: how does the algorithm perform when $\{x^k\}$ is divergent? The following theorem gives an answer.

Theorem 2.8. Let $\{x^k\}$ be an iterative sequence generated by Algorithm 2.1. Then $\lim_{k \to \infty} f(x^k) = v(0)$ if and only if $v$ is lower semicontinuous at zero from the right.

Proof. We first show the sufficiency. According to the proof of Theorem 2.7 (recall (2.26)), we know that $v(\delta) \le f(x^k) \le v(0) + \delta$ whenever $k$ is sufficiently large; see (2.28). Since $v$ is lower semicontinuous at zero from the right, taking the lower limit in (2.28) and letting $\delta \downarrow 0$ yields $v(0) \le \liminf_{k \to \infty} f(x^k) \le \limsup_{k \to \infty} f(x^k) \le v(0)$, that is, $\lim_{k \to \infty} f(x^k) = v(0)$. We now show the necessity. Suppose on the contrary that $v$ is not lower semicontinuous at zero from the right; then there exist $\epsilon_0 > 0$ and $\delta_n \downarrow 0$ such that $v(\delta_n) \le v(0) - \epsilon_0$ for all $n$, as in (2.31). For each $n$, we can choose a point $y^n \in F_{\delta_n}$ satisfying $f(y^n) \le v(\delta_n) + \epsilon_0/2$. In addition, since $\delta_n \downarrow 0$, we can pass to a subsequence along which the penalty contribution of the nearly feasible points $y^n$ to $L_{c_k}$ is negligible, which further implies, by (2.31), that $L_{c_k}(y^n, \lambda^k, \mu^k) \le v(0) - \epsilon_0/2 + o(1)$. Therefore, comparing with the $\epsilon_k$-optimality of $x^k$ gives (2.33), where the last step is due to the near-feasibility of $y^n$ and Lemma 2.2. Taking limits on both sides of (2.33) and using (2.7), (2.27), and Lemma 2.2, we get $\lim_{k \to \infty} f(x^k) \le v(0) - \epsilon_0/2$, which leads to a contradiction. The proof is complete.
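The lower semicontinuity assumption in Theorem 2.8 is not vacuous, as the following simple one-dimensional instance (constructed here for illustration; it is not taken from the paper's examples) shows:

$$X = \mathbb{R}, \qquad f(x) = -\arctan(x), \qquad h(x) = x\,e^{-x^2}, \qquad \text{no inequality constraints}.$$

The only feasible point is $x = 0$, so $v(0) = f(0) = 0$. For every $\delta > 0$, however, all sufficiently large $x$ satisfy $|h(x)| \le \delta$, whence $v(\delta) = -\pi/2$. Thus $v$ fails to be lower semicontinuous at zero from the right, and by Theorem 2.8 the values $f(x^k)$ cannot converge to the optimal value $v(0)$ for this instance.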

Note that in many practical cases the set $X$ stands for a rather simple constraint, for example, a box or a bounded polytope [7]. Hence, we conclude this section by considering the case where $X$ is a bounded, closed, and convex subset of $\mathbb{R}^n$. In this case, a global optimal solution of the augmented Lagrangian relaxation problem (2.2) always exists. Hence, we choose $\epsilon_0 = 0$ in Step 1 of Algorithm 2.1, which in turn implies that $\epsilon_k = 0$ for all $k$ by (2.7). First, however, we need to extend the definition of degeneracy from the inequality-constrained setting of [10] to both inequality and equality constraints.

Definition 2.9. A point $\bar x \in X$ is said to be degenerate if there exist $\lambda \in \mathbb{R}^l$ and $\mu \in \mathbb{R}^m_+$, not both zero, such that

$$\bar x = P_X\Big(\bar x - \sum_{j=1}^{l}\lambda_j \nabla h_j(\bar x) - \sum_{i=1}^{m}\mu_i \nabla g_i(\bar x)\Big), \qquad \mu_i = 0 \ \text{ for } i \notin I(\bar x),$$

where $P_X(\cdot)$ denotes the projection onto $X$ and $I(\bar x) = \{i : g_i(\bar x) \ge 0\}$.
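The projection condition in Definition 2.9 is equivalent to a normal-cone inclusion via the following standard characterization for a closed convex set $X$, which is also used repeatedly in the proof below:

$$\text{for } \bar x \in X \text{ and } d \in \mathbb{R}^n: \qquad d \in N_X(\bar x) \iff P_X(\bar x + d) = \bar x.$$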

Theorem 2.10. Suppose that $X$ is a bounded, closed, and convex subset of $\mathbb{R}^n$. Let $\epsilon_0 = 0$ and $\{x^k\}$ be the iterative sequence generated by Algorithm 2.1. Then every accumulation point of $\{x^k\}$, say $\bar x$, is either a degenerate point or a KKT point of (P).

Proof. Noting that $\epsilon_k = 0$ for all $k$ by $\epsilon_0 = 0$ and (2.7), $x^k$ is a global optimal solution of the relaxation problem (2.2) by Step 3 in Algorithm 2.1. Applying the well-known optimality condition for constrained optimization to the augmented Lagrangian relaxation problem (2.2) yields (2.36), where $N_X(x^k)$ is the normal cone of $X$ at $x^k$. This, together with (2.5) and (2.6), means that (2.37) holds, where we have used the basic property of the normal cone of a convex set. Let $K$ be an infinite subsequence such that $x^k \to \bar x$ as $k \to \infty$ in $K$. Consider now the following two cases.
Case 1. Either $\{\lambda^k\}$ or $\{\mu^k\}$ is unbounded. In this case, we must have $t_k = \|(\lambda^k, \mu^k)\| \to \infty$ along a further subsequence. Since $\{\lambda^k / t_k\}$ and $\{\mu^k / t_k\}$ are bounded, we can assume, by passing to a subsequence if necessary, that they converge to limits $\lambda$ and $\mu$, as in (2.39). Clearly, $\lambda$ and $\mu$ are not both zero. On the other hand, since $N_X(x^k)$ is a cone, it follows from (2.36) that the inclusion remains valid after scaling by $1/t_k$, from which, using the basic property of the normal cone of a convex set, we further have (2.41). Since $x^k \to \bar x$ and $t_k \to \infty$ as $k \to \infty$ in $K$, we obtain (2.42) from (2.39) and (2.41), where we have used the continuity of the projection operator.
If $g_i(\bar x) < 0$ for some $i$, then $g_i(x^k) \to g_i(\bar x) < 0$ as $k \to \infty$ in $K$. Using (2.5) and Lemma 2.2, we obtain that $\mu^k_i = 0$ for all sufficiently large $k \in K$, which, together with (2.39), implies that $\mu_i = 0$ for all $i \notin I(\bar x)$. Therefore, we obtain from (2.42) that the conditions of Definition 2.9 hold at $\bar x$, so $\bar x$ is degenerate.
Case 2. Both $\{\lambda^k\}$ and $\{\mu^k\}$ are bounded. In this case, we can assume without loss of generality that $(\lambda^k, \mu^k) \to (\lambda, \mu)$ as $k \to \infty$ in $K$. Taking limits in (2.37) gives rise to (2.46), which is equivalent to (2.47).
We claim that $\bar x$ is a feasible point. In fact, if $g_i(\bar x) > 0$ for some $i$, then $c_k g_i(x^k) \to \infty$ as $k \to \infty$ in $K$. From (2.5), we must have $\mu^k_i \to \infty$, contradicting the boundedness of $\{\mu^k\}$. Note that (2.6) can be rewritten as $h_j(x^k) = (\lambda^{k+1}_j - \lambda^k_j)/c_k$. Taking limits on both sides and using the boundedness of $\{\lambda^k\}$, we obtain $h_j(\bar x) = 0$ for all $j$. Thus, $\bar x$ is a feasible solution of (P), as claimed.
If $g_i(\bar x) < 0$, that is, $i \notin I(\bar x)$, then, following almost the same argument as in Case 1, we can show that $\mu_i = 0$ (cf. (2.43)). Therefore, complementary slackness holds. This, together with (2.47), implies that $\bar x$ is a KKT point of (P) and that $\lambda$, $\mu$ are the corresponding Lagrange multipliers.

3. Numerical Reports

To give some insight into the behavior of the proposed algorithm, we solve the following nonlinear programming problems. The tests were performed on a Pentium 4 PC with a 2.8 GHz CPU and 1.99 GB of memory, and the computer codes were written in MATLAB 7.0. Numerical results are reported in Tables 1–4, where $k$ is the number of iterations, $c_k$ is the penalty parameter, $x^k$ is the iterate found by the algorithm, and $f(x^k)$ is the corresponding objective value.

Example 3.1 (see [14]).

Example 3.2 (see [14]).

Example 3.3.

Example 3.4.
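As a usage illustration of the multiplier_method sketch given after Algorithm 2.1 (a hypothetical toy instance of ours, not one of Examples 3.1-3.4):

```python
import numpy as np

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1,  x1 >= 0.2.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: np.array([0.2 - x[0]])          # g(x) <= 0 encodes x1 >= 0.2
h = lambda x: np.array([x[0] + x[1] - 1.0])   # equality constraint

x, lam, mu = multiplier_method(f, g, h, x0=np.zeros(2), n_eq=1, n_ineq=1)
print(x)  # expected to approach (0.5, 0.5), where the inequality is inactive
```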

4. Conclusions

Augmented Lagrangian methods are useful tools for solving many practical nonconvex optimization problems. In this paper, new convergence properties of a proximal augmented Lagrangian algorithm are established without requiring boundedness of the multiplier sequences. It is proved that if the algorithm terminates in finitely many steps, then we obtain a KKT point of the primal problem; otherwise, every limit point of the iterative sequence generated by the algorithm is an optimal solution. Even if the iterative sequence is divergent, we present a necessary and sufficient condition for the convergence of its objective values to the optimal value. Moreover, under suitable assumptions, we show that every accumulation point of the iterative sequence generated by the algorithm is either a degenerate point or a KKT point of the primal problem. As future work, one interesting and important topic is whether these nice properties can be extended to more general cone programming, for example, nonlinear semidefinite programming or second-order cone programming.

Acknowledgments

The authors would like to thank the referees for their valuable comments, which greatly improved the presentation of the paper. Research of the first author was partly supported by the National Natural Science Foundation of China (11101248, 11026047) and the Shandong Province Natural Science Foundation (ZR2010AQ026). Research of the second author was partly supported by the National Natural Science Foundation of China (11171247). Research of the fourth author was partly supported by the National Natural Science Foundation of China (10971118).