Abstract and Applied Analysis

Volume 2011, Article ID 902131, 14 pages

http://dx.doi.org/10.1155/2011/902131

## New Convergence Properties of the Primal Augmented Lagrangian Method

Department of Mathematics, School of Science, Shandong University of Technology, Zibo 255049, China

Received 23 August 2011; Revised 25 November 2011; Accepted 26 November 2011

Academic Editor: Simeon Reich

Copyright © 2011 Jinchuan Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

New convergence properties of the primal augmented Lagrangian method are established for continuous nonconvex optimization problems with both equality and inequality constraints. In particular, the multiplier sequences are not required to be bounded. Different convergence results are discussed, depending on whether the iterative sequence generated by the algorithm is convergent or divergent. Furthermore, under a certain convexity assumption, we show that every accumulation point of the iterative sequence is either a degenerate point or a KKT point of the primal problem. Finally, numerical experiments are presented.

#### 1. Introduction

In this paper, we consider the following nonlinear programming problem:

$$\min f(x) \quad \text{subject to} \quad g_i(x) \le 0, \ i = 1, \ldots, m, \qquad h_j(x) = 0, \ j = 1, \ldots, l, \qquad x \in X,$$

where $f$, $g_i$ for each $i$, and $h_j$ for each $j$ are all continuously differentiable functions and $X$ is a nonempty closed set in $\mathbb{R}^n$. Denote by $S$ the feasible region and by $S^*$ the solution set.

Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods, a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous augmented Lagrangian algorithm, based on the Powell-Hestenes-Rockafellar formula [1–3], has been successfully used for defining practical nonlinear programming algorithms [4–7]. At each iteration, a minimization problem with simple constraints is approximately solved, while Lagrange multipliers and penalty parameters are updated in the master routine. The advantage of the augmented Lagrangian approach over other methods is that the subproblems can be solved using algorithms that handle a very large number of variables without resorting to matrix factorizations of any kind.
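As a concrete, hedged illustration of this outer loop, the sketch below applies Powell-Hestenes-Rockafellar-style multiplier and penalty updates to a small hypothetical problem; the objective, constraints, grid-based inner "solver", and all constants are illustrative assumptions, not the algorithm analyzed in this paper:

```python
import numpy as np

# Hypothetical toy problem (an assumption for illustration):
#   minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2
#   subject to g(x) = x0 + x1 - 2 <= 0 and h(x) = x0 - x1 = 0.
def f(x): return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
def g(x): return x[0] + x[1] - 2.0
def h(x): return x[0] - x[1]

def phr_lagrangian(x, lam, mu, r):
    """PHR augmented Lagrangian with multipliers lam (equality) and mu (inequality)."""
    return (f(x)
            + lam * h(x) + (r / 2.0) * h(x) ** 2
            + (r / 2.0) * max(0.0, g(x) + mu / r) ** 2 - mu ** 2 / (2.0 * r))

def solve_subproblem(lam, mu, r, grid):
    # A crude grid search stands in for the inner (sub)solver.
    best = min(grid, key=lambda x: phr_lagrangian(np.asarray(x), lam, mu, r))
    return np.asarray(best)

lam, mu, r = 0.0, 0.0, 1.0
grid = [(a, b) for a in np.linspace(-1, 3, 81) for b in np.linspace(-1, 3, 81)]
for k in range(10):
    x = solve_subproblem(lam, mu, r, grid)   # approximate inner minimization
    lam = lam + r * h(x)                     # equality-multiplier update
    mu = max(0.0, mu + r * g(x))             # inequality-multiplier update
    r *= 2.0                                 # increase the penalty parameter
# x approaches the constrained minimizer (1, 1) as r grows.
```

The subproblem is solved globally over a grid only to keep the sketch self-contained; in practice one would use a bound-constrained solver, which is precisely the setting in which augmented Lagrangian methods avoid matrix factorizations.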

An indispensable assumption in most existing global convergence analyses for augmented Lagrangian methods is that the multiplier sequence generated by the algorithm is bounded. This restrictive assumption limits the application of augmented Lagrangian methods in many practical situations. Important work in this direction includes [8], where global convergence of modified augmented Lagrangian methods for nonconvex optimization with equality constraints was established; Andreani et al. [4] and Birgin et al. [9] investigated augmented Lagrangian methods using safeguarding strategies for nonconvex constrained problems. Recently, for inequality-constrained global optimization, Luo et al. [10] established the convergence properties of the primal-dual method based on four types of augmented Lagrangian functions without the boundedness assumption on the multiplier sequence. More information can be found in [5, 11, 12].

In this paper, for the optimization problem () with both equality and inequality constraints, we further study the convergence properties of the primal augmented Lagrangian method without requiring the boundedness of the multiplier sequences. The main contribution of this paper lies in the following three aspects. First, more general constraints are considered, without being restricted to inequality constraints as in [10, 13] or requiring boundedness of the multiplier sequence as in [9]. Second, an essential assumption for the global convergence properties given in [4–7, 9, 10] is that the iterative sequence be convergent in advance; here, we further discuss the case when the iterative sequence is divergent and develop a necessary and sufficient condition for convergence to the optimal value of the primal problem. Third, the definition of *degeneration* in [9, 10] is extended from inequality constraints to both inequality and equality constraints.

This paper is organized as follows. In Section 2, we propose the multiplier algorithm and study its global convergence properties. Preliminary numerical results are reported in Section 3. The conclusion is drawn in Section 4.

#### 2. Multiplier Algorithms

The primal augmented Lagrangian function for () is given below, where the penalty parameter ranges over the set of all positive real scalars, that is, $\{r \in \mathbb{R} : r > 0\}$.

Given the multipliers and a penalty parameter, the *augmented Lagrangian relaxation problem* associated with the augmented Lagrangian is defined by
Given $\varepsilon > 0$, the $\varepsilon$-optimal solution set of () is defined as

If the constraint set is closed and bounded, then a global optimal solution of () exists. However, if it is unbounded, then () may be unsolvable. To overcome this difficulty, we assume throughout this paper that the objective function is bounded from below on the constraint set. This assumption is rather mild in optimization because, otherwise, the objective function can be replaced by a suitable monotone transformation of itself (e.g., its exponential). It ensures that the $\varepsilon$-optimal solution set with $\varepsilon > 0$ always exists, since the augmented Lagrangian is bounded from below by (2.1) and (2.3).

Recall that a vector is said to be a KKT point of () if there exist multipliers, one for each inequality constraint and one for each equality constraint, such that the conditions in (2.4) hold, where the last term denotes the normal cone of the constraint set at the given point. The collection of all multiplier pairs satisfying (2.4) is denoted accordingly.
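For concreteness, the KKT conditions can be checked numerically when the underlying set is all of $\mathbb{R}^2$, so the normal cone term vanishes. The sketch below does this for a hypothetical two-variable problem; the functions, candidate point, and multiplier values are illustrative assumptions:

```python
import numpy as np

# Assumed toy data: min (x0-2)^2 + (x1-1)^2  s.t.  x0 + x1 - 2 <= 0, x0 - x1 = 0.
def kkt_residual(x, lam, mu):
    """Maximum violation of the KKT conditions at x with multipliers lam, mu."""
    grad_f = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
    grad_g = np.array([1.0, 1.0])    # gradient of g(x) = x0 + x1 - 2
    grad_h = np.array([1.0, -1.0])   # gradient of h(x) = x0 - x1
    g_val = x[0] + x[1] - 2.0
    h_val = x[0] - x[1]
    stationarity = grad_f + mu * grad_g + lam * grad_h
    return max(np.linalg.norm(stationarity),  # stationarity
               abs(mu * g_val),               # complementarity
               max(0.0, g_val), abs(h_val),   # primal feasibility
               max(0.0, -mu))                 # dual feasibility (mu >= 0)

res = kkt_residual(np.array([1.0, 1.0]), lam=1.0, mu=1.0)  # residual 0: a KKT point
```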

The multiplier algorithm based on the primal augmented Lagrangian is proposed below. One of its main features is that the Lagrange multipliers associated with the equality and inequality constraints are not restricted to be bounded, which makes the algorithm applicable to many practical problems.

*Algorithm 2.1* (multiplier algorithm based on the primal augmented Lagrangian).

*Step 1.* Select an initial point and the algorithmic parameters, and initialize the iteration counter.

*Step 2.* Compute the updated multipliers and penalty parameter.

*Step 3.* Find an approximate ($\varepsilon$-optimal) solution of the augmented Lagrangian relaxation problem.

*Step 4.* If the stopping criteria are satisfied, then STOP; otherwise, increase the iteration counter and go back to Step 2.

The iterative formula given in (2.7) is used merely to guarantee convergence to zero. In fact, in practical numerical experiments, this quantity can be chosen more flexibly to improve the convergence of the algorithm. The following lemma gives the relationship between the penalty parameter and the multipliers.

Lemma 2.2. *Let the sequences be given as in Algorithm 2.1. Then the following terms
all approach zero as $k \to \infty$.*

*Proof. *This follows immediately from (2.8).

To establish the convergence properties of Algorithm 2.1, we first carry out a perturbation analysis of (). Given a perturbation parameter, define the perturbation of the feasible region as and the perturbation of the level set as It is clear that the unperturbed set coincides with the feasible set of (). The corresponding perturbation function is given as
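To make the perturbation construction concrete, the sketch below evaluates a perturbation function for a hypothetical one-dimensional instance; the objective, constraint, set, and grid discretization are all illustrative assumptions:

```python
import numpy as np

# Assumed toy instance: minimize f(x) = x^2 over X = [-2, 2]
# subject to the single inequality g(x) = 1 - x <= 0.
def v(u, grid=np.linspace(-2.0, 2.0, 4001)):
    """Perturbation value v(u) = inf { f(x) : x in X, g(x) <= u } (grid estimate)."""
    feasible = grid[1.0 - grid <= u]          # points with g(x) <= u
    return np.inf if feasible.size == 0 else float(np.min(feasible ** 2))

# Relaxing the constraint enlarges the feasible region, so v(u) <= v(0) for
# u >= 0; this is the upper semicontinuity from the right shown in Lemma 2.3.
vals = [v(u) for u in (0.0, 0.1, 0.5)]        # roughly 1.0, 0.81, 0.25
```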

The following result shows that the perturbation value function is upper semicontinuous at zero.

Lemma 2.3. *The perturbation function is upper semicontinuous at zero from the right.*

*Proof. *For any positive perturbation, the feasible set of () is contained in the perturbed feasible region, so by definition (2.12) the perturbation function never exceeds its value at zero. This implies the claimed upper semicontinuity from the right.

Lemma 2.4. *Let the sequences be given as in Algorithm 2.1. For any $\varepsilon > 0$, one has
whenever $k$ is sufficiently large.*

*Proof. *For any given tolerance, it follows from (2.7) and Lemma 2.2 that, when $k$ is large enough, we have
Therefore, for ,

Lemma 2.5. *Let the sequences be given as in Algorithm 2.1. For any $\varepsilon > 0$, one has
whenever $k$ is sufficiently large.*

*Proof. *We prove this result by way of contradiction. Suppose that we can find a tolerance and an infinite subsequence such that
but
It follows from (2.17) that

Accordingly, we need to consider the following two cases.*Case 1. *There exist an index and an infinite subsequence such that . It then follows from (2.19) that

Using Lemma 2.2 and the fact that gives us
whenever $k$ is sufficiently large. This, together with (2.20), yields a contradiction by taking limits along the subsequence.*Case 2. *There exist an index and an infinite subsequence such that . It follows from (2.19) that
where the last step is due to Lemma 2.2, since and . Taking limits in the above inequality yields , which is a contradiction. This completes the proof.

Lemma 2.6. *Let the sequences be given as in Algorithm 2.1. For any $\varepsilon > 0$, one has
*

*Proof. *For an arbitrarily chosen point, we have
The proof is complete.

With these preparations, the global convergence property of Algorithm 2.1 can be given. It shows that if the algorithm terminates in finitely many steps, then we obtain a KKT point of (); otherwise, every limit point of the iterative sequence is an optimal solution of ().

Theorem 2.7. *Let the iterative sequence be generated by Algorithm 2.1. If the algorithm terminates in finitely many steps, then one obtains a KKT point of (); otherwise, every limit point of the iterative sequence belongs to the solution set.*

*Proof. *According to the construction of Algorithm 2.1, the first part is clear. It remains to prove the second part. Let $\varepsilon > 0$ be given. It follows from Lemmas 2.4–2.6 that, when $k$ is large enough, we have
Thus,
Note that the sets involved are closed, due to the continuity of the constraint functions and the closedness of the constraint set. Taking the limit in (2.26) yields the desired inclusion; since the tolerance is arbitrary, the limit point belongs to the solution set. The proof is complete.

The foregoing result is applicable when the iterative sequence has at least one accumulation point. However, a natural question arises: how does the algorithm perform when the iterative sequence is divergent? The following theorem gives an answer.

Theorem 2.8. *Let the iterative sequence be generated by Algorithm 2.1. Then (2.27) holds if and only if the perturbation function is lower semicontinuous at zero from the right.*

*Proof. *We first show the sufficiency. According to the proof of Theorem 2.7 (recall (2.26)), we know that
whenever $k$ is sufficiently large. Since the perturbation function is lower semicontinuous at zero from the right, taking the lower limit in (2.28) yields
that is,
We now show the necessity. Suppose, on the contrary, that the perturbation function is not lower semicontinuous at zero from the right; then there exist constants such that
For any given index, we can choose a subsequence satisfying
In addition, let with , which further implies by (2.31). Therefore,
where the last step is due to the facts noted above. Taking limits on both sides of (2.33) and using (2.7), (2.27), and Lemma 2.2, we get
which leads to a contradiction. The proof is complete.

Note that in many practical cases, the constraint set is a simpler set, for example, a box or a bounded polytope [7]. Hence, we conclude this section by considering the case where the constraint set is a bounded, closed, and convex subset of $\mathbb{R}^n$. In this case, a global optimal solution of the augmented Lagrangian relaxation problem always exists. Hence, we may take the tolerance equal to zero in Step 1 of Algorithm 2.1, which by (2.7) implies that it remains zero for all $k$. First, however, we need to extend the definition of degeneracy from inequality constraints, as in [10], to both inequality and equality constraints.

*Definition 2.9. *A point is said to be *degenerate* if there exist multipliers, not all zero, such that
where the operator in the condition denotes the projection onto the constraint set.
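The projection operator in Definition 2.9 is cheap to evaluate when the underlying set is simple. As a hedged illustration, assume the set is the box $[0,1]^2$ (not necessarily the set used in this paper); its Euclidean projection is a componentwise clip, and a fixed point of the projected-step map signals stationarity:

```python
import numpy as np

# Assumed illustrative set: the box [0, 1]^2.
def project_box(y, lo=0.0, hi=1.0):
    """Euclidean projection onto [lo, hi]^n, computed componentwise."""
    return np.clip(y, lo, hi)

x = np.array([1.0, 0.5])            # a boundary point of the box
d_out = np.array([-1.0, 0.0])       # descent direction pointing out of the box
d_in = np.array([0.7, -0.3])        # descent direction pointing into the box

fixed = project_box(x - d_out)      # equals x: the projected step cannot move
moved = project_box(x - d_in)       # differs from x: x is not stationary here
```

A point $x$ with $x = P_X(x - d)$ for the relevant direction $d$ is exactly the fixed-point form in which projection-based conditions such as (2.35) are typically written.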

Theorem 2.10. *Suppose that the constraint set is a bounded, closed, and convex subset of $\mathbb{R}^n$. Let the iterative sequence be generated by Algorithm 2.1. Then every accumulation point of the iterative sequence is either a degenerate point or a KKT point of ().*

*Proof. *Noting that the tolerance vanishes for all $k$ by (2.7), each iterate is a global optimal solution of the relaxation problem by Step 3 of Algorithm 2.1. Applying the well-known first-order optimality condition to the augmented Lagrangian relaxation problem () yields
where the normal cone of the constraint set at the iterate appears. This, together with (2.5) and (2.6), means that
where we have used the basic property of the normal cone of a convex set. Choose an infinite subsequence along which the iterates converge. Consider now the following two cases.*Case 1. *At least one of the multiplier sequences is unbounded. In this case, we must have
Since and are bounded, we can assume by passing a subsequence if necessary that
Clearly, the limits are not all zeros. On the other hand, since the normal cone is a cone, it follows from (2.36) that
from which and using the basic property of normal cone of convex set, we further have
Since and as , we obtain from (2.39) and (2.41)
where we have used the continuity of the projection operator.

If , then . Since , we have as . Using (2.5) and Lemma 2.2, we obtain
which, together with (2.39), implies that for all . Therefore, we obtain from (2.42) that
Thus the accumulation point is degenerate.*Case 2. *Both multiplier sequences are bounded. In this case, we can assume without loss of generality that
Taking limits in (2.37) gives rise to
which is equivalent to

We claim that is a feasible point. In fact, if for some , then as . From (2.5), we must have , contradicting the boundedness of . Note that (2.6) can be rewritten as
Taking limits in both sides and using the boundedness of , we obtain that for all . Thus, is a feasible solution of () as claimed.

If , that is, , then following almost the same argument as in Case 1, we can show that (cf. (2.43)). Therefore,
This, together with (2.47), implies that the accumulation point is a KKT point of () with the corresponding Lagrange multipliers.

#### 3. Numerical Reports

To give some insight into the behavior of the proposed algorithm, we solve the following nonlinear programming problems. The tests were run on a Pentium 4 PC with a 2.8 GHz CPU and 1.99 GB of memory, and the computer codes were written in MATLAB 7.0. Numerical results are reported in Tables 1–4, which list the number of iterations, the penalty parameter, the iterative point found by the algorithm, and the objective value.

*Example 3.1 (see [14]). *Consider

*Example 3.2 (see [14]). *Consider

*Example 3.3. *Consider

*Example 3.4. *Consider

#### 4. Conclusions

Augmented Lagrangian methods are useful tools for solving many practical nonconvex optimization problems. In this paper, new convergence properties of the primal augmented Lagrangian algorithm are established without requiring the boundedness of the multiplier sequences. It is proved that if the algorithm terminates in finitely many steps, then a KKT point of the primal problem is obtained; otherwise, the iterative sequence generated by the algorithm converges to an optimal solution. Even when the iterative sequence is divergent, we present a necessary and sufficient condition for convergence to the optimal value. Moreover, under suitable assumptions, we show that every accumulation point of the iterative sequence generated by the algorithm is either a degenerate point or a KKT point of the primal problem. As future work, an interesting and important topic is whether these properties can be extended to more general cone programming, for example, nonlinear semidefinite programming or second-order cone programming.

#### Acknowledgments

The authors would like to thank the referees for their valuable comments, which greatly improved the presentation of the paper. Research of the first author was partly supported by the National Natural Science Foundation of China (11101248, 11026047) and the Shandong Province Natural Science Foundation (ZR2010AQ026). Research of the second author was partly supported by the National Natural Science Foundation of China (11171247). Research of the fourth author was partly supported by the National Natural Science Foundation of China (10971118).

#### References

1. M. R. Hestenes, “Multiplier and gradient methods,” *Journal of Optimization Theory and Applications*, vol. 4, pp. 303–320, 1969.
2. M. J. D. Powell, “A method for nonlinear constraints in minimization problems,” in *Optimization*, R. Fletcher, Ed., Academic Press, New York, NY, USA, 1969.
3. R. T. Rockafellar, “Augmented Lagrange multiplier functions and duality in nonconvex programming,” *SIAM Journal on Control and Optimization*, vol. 12, pp. 268–285, 1974.
4. R. Andreani, E. G. Birgin, J. M. Martínez, and M. L. Schuverdt, “On augmented Lagrangian methods with general lower-level constraints,” *SIAM Journal on Optimization*, vol. 18, no. 4, pp. 1286–1309, 2007.
5. E. G. Birgin, C. A. Floudas, and J. M. Martínez, “Global minimization using an augmented Lagrangian method with variable lower-level constraints,” *Mathematical Programming*, vol. 125, no. 1, pp. 139–162, 2010.
6. E. G. Birgin and J. M. Martínez, “Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization,” *Computational Optimization and Applications*. In press.
7. E. G. Birgin, D. Fernández, and J. M. Martínez, “On the boundedness of penalty parameters in an augmented Lagrangian method with lower level constraints,” *Optimization Methods and Software*. In press.
8. A. R. Conn, N. Gould, A. Sartenaer, and P. L. Toint, “Convergence properties of an augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints,” *SIAM Journal on Optimization*, vol. 6, no. 3, pp. 674–703, 1996.
9. E. G. Birgin, R. A. Castillo, and J. M. Martínez, “Numerical comparison of augmented Lagrangian algorithms for nonconvex problems,” *Computational Optimization and Applications*, vol. 31, no. 1, pp. 31–55, 2005.
10. H. Z. Luo, X. L. Sun, and D. Li, “On the convergence of augmented Lagrangian methods for constrained global optimization,” *SIAM Journal on Optimization*, vol. 18, no. 4, pp. 1209–1230, 2007.
11. X. X. Huang and X. Q. Yang, “A unified augmented Lagrangian approach to duality and exact penalization,” *Mathematics of Operations Research*, vol. 28, no. 3, pp. 524–532, 2003.
12. R. M. Lewis and V. Torczon, “A globally convergent augmented Lagrangian pattern search algorithm for optimization with general constraints and simple bounds,” *SIAM Journal on Optimization*, vol. 12, no. 4, pp. 1075–1089, 2002.
13. C.-Y. Wang and D. Li, “Unified theory of augmented Lagrangian methods for constrained global optimization,” *Journal of Global Optimization*, vol. 44, no. 3, pp. 433–458, 2009.
14. R. N. Gasimov, “Augmented Lagrangian duality and nondifferentiable optimization methods in nonconvex programming,” *Journal of Global Optimization*, vol. 24, no. 2, pp. 187–203, 2002.