Mathematical Problems in Engineering

Volume 2010 (2010), Article ID 324812, 13 pages

http://dx.doi.org/10.1155/2010/324812

## Existence of Local Saddle Points for a New Augmented Lagrangian Function

Department of Mathematics, School of Science, Shandong University of Technology, Zibo 255049, China

Received 27 March 2010; Revised 14 July 2010; Accepted 13 September 2010

Academic Editor: Joaquim J. Júdice

Copyright © 2010 Wenling Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We give a new class of augmented Lagrangian functions for nonlinear programming problem with both equality and inequality constraints. The close relationship between local saddle points of this new augmented Lagrangian and local optimal solutions is discussed. In particular, we show that a local saddle point is a local optimal solution and the converse is also true under rather mild conditions.

#### 1. Introduction

Consider the nonlinear optimization problem
$$(P)\qquad \min f(x)\quad \text{s.t.}\quad g_i(x)\le 0,\ i=1,\dots,m,\qquad h_j(x)=0,\ j=1,\dots,l,\qquad x\in X,$$
where $f$, $g_i$ for $i=1,\dots,m$, and $h_j$ for $j=1,\dots,l$ are twice continuously differentiable functions and $X\subseteq\mathbb{R}^n$ is a nonempty closed subset.

The classical Lagrangian function associated with $(P)$ is defined as
$$L(x,\lambda,\mu)=f(x)+\sum_{i=1}^{m}\lambda_i g_i(x)+\sum_{j=1}^{l}\mu_j h_j(x),$$
where $\lambda\in\mathbb{R}^m_+$ and $\mu\in\mathbb{R}^l$.
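As a concrete illustration, the classical Lagrangian can be evaluated directly from its standard form; the toy problem below (objective, constraints, and multiplier values) is a hypothetical choice of ours, not taken from the paper:

```python
# Classical Lagrangian L(x, lam, mu) = f(x) + lam*g(x) + mu*h(x)
# for a hypothetical toy problem (our own choice, not from the paper):
#   min f(x) = x1^2 + x2^2
#   s.t. g(x) = 1 - x1 <= 0  (inequality),  h(x) = x1 + x2 - 2 = 0  (equality).

def f(x): return x[0]**2 + x[1]**2
def g(x): return 1.0 - x[0]
def h(x): return x[0] + x[1] - 2.0

def lagrangian(x, lam, mu):
    # lam >= 0 multiplies the inequality constraint, mu the equality constraint
    return f(x) + lam * g(x) + mu * h(x)

# At a feasible point where both constraints are active, the multiplier terms
# vanish and the Lagrangian coincides with the objective:
x_feas = (1.0, 1.0)                   # g(x_feas) = 0, h(x_feas) = 0
print(lagrangian(x_feas, 2.0, -2.0))  # equals f(x_feas) = 2.0
```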

The Lagrangian dual problem $(D)$ is
$$(D)\qquad \max_{\lambda\in\mathbb{R}^m_+,\ \mu\in\mathbb{R}^l}\ \theta(\lambda,\mu),\qquad\text{where}\quad \theta(\lambda,\mu)=\inf_{x\in X}L(x,\lambda,\mu).$$
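For a convex toy problem the dual function can be computed in closed form and its maximum matches the primal optimal value; the problem and the coarse grid search below are our own illustrative choices:

```python
# Dual function theta(mu) = inf_x L(x, mu) for the hypothetical convex problem
#   min x^2  s.t.  x - 1 = 0   (primal optimum: x* = 1, f(x*) = 1).
# Here L(x, mu) = x^2 + mu*(x - 1); the inner infimum is attained at x = -mu/2.

def theta(mu):
    x = -mu / 2.0                      # argmin of x^2 + mu*(x - 1)
    return x**2 + mu * (x - 1.0)       # = -mu^2/4 - mu

# Maximize theta by a coarse grid search over mu in [-5, 5]:
best_mu = max((theta(m), m) for m in [i * 0.01 - 5 for i in range(1001)])[1]
print(round(theta(best_mu), 6))        # 1.0 -> matches the primal value
```

For this convex problem the dual optimal value equals the primal one, i.e., the duality gap is zero; the nonconvex case discussed next is exactly where this can fail.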

Lagrange multiplier theory not only plays a key role in many issues of mathematical programming, such as sensitivity analysis, optimality conditions, and numerical algorithms, but also has important applications, for example, in scheduling, resource allocation, engineering design, and matching problems. Both analysis and experiments indicate that it performs substantially better than classical methods on medium-sized and large engineering problems.

Roughly speaking, the augmented Lagrangian method approximates the optimal solution of the original problem by solving a sequence of unconstrained optimization problems constructed from the Lagrange multipliers. For this approach to work, the zero duality gap property must hold between the primal and dual problems. Saddle point theory has therefore received much attention, due to its equivalence with the zero duality gap property. It is well known that, for convex programming problems, the zero duality gap holds with the classical Lagrangian function above. For nonconvex optimization problems, however, a nonzero duality gap may appear. The main reason is that the classical Lagrangian function is linear with respect to the Lagrange multiplier. To overcome this drawback, various types of nonlinear Lagrangian functions and augmented Lagrangian functions have been developed in recent years. For example, Hestenes [1] and Powell [2] independently proposed augmented Lagrangian methods for solving equality constrained problems by incorporating a quadratic penalty term into the classical Lagrangian function. Rockafellar [3] extended this approach to constrained optimization problems with both equality and inequality constraints. A convex augmented function and the corresponding augmented Lagrangian with the zero duality gap property were introduced by Rockafellar and Wets in [4]. Huang and Yang further extended this work by removing the convexity assumption imposed on the augmented functions in [4]; see [5, 6] for details. Wang et al. [7] proposed two classes of augmented Lagrangian functions, simpler than those given in [4, 5], and discussed the existence of saddle points. For other kinds of augmented Lagrangian methods, see [8–16]; for saddle point theory and multiplier methods, see [17–20].
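The Hestenes–Powell multiplier method mentioned above can be sketched on a toy equality-constrained problem; the problem, the fixed penalty parameter, and the closed-form inner minimization are our own illustrative choices (this is the classical quadratic augmented Lagrangian, not the new function proposed in this paper):

```python
# Hestenes-Powell multiplier method for the hypothetical toy problem
#   min x^2  s.t.  h(x) = x - 1 = 0,
# using the quadratic augmented Lagrangian
#   L_c(x, mu) = x^2 + mu*h(x) + (c/2)*h(x)^2.

c = 10.0            # penalty parameter (kept fixed for simplicity)
mu = 0.0            # multiplier estimate
for _ in range(30):
    # The inner minimization is available in closed form here:
    # dL/dx = 2x + mu + c*(x - 1) = 0  =>  x = (c - mu) / (c + 2)
    x = (c - mu) / (c + 2.0)
    mu = mu + c * (x - 1.0)           # update mu_{k+1} = mu_k + c*h(x_k)

print(round(x, 6), round(mu, 6))      # 1.0 -2.0  (KKT point: 2x* + mu* = 0)
```

Note that the multiplier iterates converge geometrically here, with contraction factor 2/(c+2), so a larger penalty parameter speeds up convergence at the cost of a more ill-conditioned inner problem.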
It should be noted that the sufficient conditions given in the above papers for the existence of local saddle points of augmented Lagrangian functions all require the standard second-order sufficient conditions. A natural question therefore arises: can local saddle points be guaranteed under milder assumptions than the standard second-order sufficient conditions? Motivated by this question, in this paper we propose a new augmented Lagrangian function and establish the close relationship between its local saddle points and local optimal solutions of the original problem. In particular, we show that this relationship holds under weak second-order sufficient conditions.

The paper is organized as follows. After introducing some basic notation and definitions, we present sufficient conditions for the existence of a local saddle point and discuss the close relationship between local saddle points and local optimal solutions of the original problem. Finally, we give an example illustrating our results.

#### 2. Notation and Definitions

We first introduce some basic notation and definitions, which will be used in the sequel. Let $\mathbb{R}^m_+=\{\lambda\in\mathbb{R}^m:\lambda_i\ge 0,\ i=1,\dots,m\}$ be the nonnegative orthant. For notational simplification, let

*Definition 2.1. *A pair $(\bar x,(\bar\lambda,\bar\mu))$ is said to be a global saddle point of the augmented Lagrangian $L_r$ for some $r>0$ if
$$L_r(\bar x,\lambda,\mu)\le L_r(\bar x,\bar\lambda,\bar\mu)\le L_r(x,\bar\lambda,\bar\mu)$$
whenever $x\in X$, $\lambda\in\mathbb{R}^m_+$, and $\mu\in\mathbb{R}^l$. If there exists some positive scalar $\delta$ such that the above inequality holds for all $x\in X\cap B(\bar x,\delta)$, where $B(\bar x,\delta)$ denotes the open ball with center $\bar x$ and radius $\delta$, then $(\bar x,(\bar\lambda,\bar\mu))$ is said to be a local saddle point of $L_r$ for $r$.
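The two saddle-point inequalities can be checked numerically. The sketch below uses the classical Lagrangian of a hypothetical convex problem (our own choice, not the augmented Lagrangian of this paper), for which the saddle point is known in closed form:

```python
# Numerical check of the saddle-point inequalities
#   L(x*, lam) <= L(x*, lam*) <= L(x, lam*)
# for the classical Lagrangian of the hypothetical convex problem
#   min x^2  s.t.  g(x) = 1 - x <= 0,  with saddle point x* = 1, lam* = 2.

def L(x, lam):
    return x**2 + lam * (1.0 - x)

x_star, lam_star = 1.0, 2.0
mid = L(x_star, lam_star)                       # = 1.0

# Left inequality over a grid of multipliers lam >= 0
# (g(x*) = 0, so L(x*, lam) is constant in lam here):
left_ok = all(L(x_star, 0.1 * k) <= mid + 1e-12 for k in range(100))
# Right inequality over a grid of primal points x
# (L(x, 2) = (x - 1)^2 + 1 >= 1):
right_ok = all(mid <= L(-3.0 + 0.1 * k, lam_star) + 1e-12 for k in range(100))

print(left_ok and right_ok)    # True
```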

*Definition 2.2 (weak second-order sufficient conditions). *Let $\bar x$ be a feasible solution. (1) Suppose that the KKT conditions hold at $\bar x$; that is, there exist scalars $\bar\lambda_i\ge 0$ for $i=1,\dots,m$ and $\bar\mu_j$ for $j=1,\dots,l$ such that
$$\nabla f(\bar x)+\sum_{i=1}^{m}\bar\lambda_i\nabla g_i(\bar x)+\sum_{j=1}^{l}\bar\mu_j\nabla h_j(\bar x)=0,\qquad \bar\lambda_i g_i(\bar x)=0,\ i=1,\dots,m.$$
(2) The Hessian matrix
$$\nabla^2_{xx}L(\bar x,\bar\lambda,\bar\mu)=\nabla^2 f(\bar x)+\sum_{i=1}^{m}\bar\lambda_i\nabla^2 g_i(\bar x)+\sum_{j=1}^{l}\bar\mu_j\nabla^2 h_j(\bar x)$$
is positive definite on the cone
where

Clearly, the cone above is contained in the cone involved in the standard second-order sufficient conditions. Hence, we refer to the conditions above as weak second-order sufficient conditions.
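The distinction matters because a Hessian need not be positive definite on the whole space, only on the relevant cone of directions. The following sketch, with hypothetical numbers of our own, illustrates a matrix that is indefinite on $\mathbb{R}^2$ yet positive definite on a cone of constraint-compatible directions:

```python
# An indefinite matrix can still be positive definite on a cone of
# constraint-compatible directions -- the situation second-order
# conditions describe. Hypothetical numbers, for illustration only:

H = [[3.0, 0.0],
     [0.0, -1.0]]       # indefinite Hessian: eigenvalues 3 and -1

def quad(H, d):
    # d^T H d for a 2x2 matrix
    return (H[0][0]*d[0] + H[0][1]*d[1]) * d[0] + \
           (H[1][0]*d[0] + H[1][1]*d[1]) * d[1]

# Suppose the cone consists of directions with d1 + d2 = 0 (e.g. tangent
# to an equality constraint x1 + x2 = 0). On that cone d = t*(1, -1),
# and d^T H d = t^2 * 2 > 0, even though H is not positive definite:
print(quad(H, (1.0, -1.0)) > 0, quad(H, (0.0, 1.0)) > 0)   # True False
```

The smaller the cone on which positive definiteness is required, the weaker the condition, which is why the weak second-order sufficient conditions cover more problems than the standard ones.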

#### 3. Existence of Local Saddle Points

For inequality constrained optimization, Sun et al. [21] introduced a class of generalized augmented Lagrangian functions, where is defined as The function satisfies the following assumptions:
(A1) , and for all ;
(A2) for each , is nondecreasing on and nonincreasing on ;
(A3) is continuously differentiable and ;
(A4) is twice continuously differentiable in a neighborhood of and for all nonzero .

We extend this function to treat optimization problems with both equality and inequality constraints. Consider a new augmented Lagrangian function, where is defined as above and the function satisfies (A1)–(A4). Several important augmented functions satisfy these assumptions, for example:

*Example 3.1. *.

*Example 3.2. *.

Under the weak second-order sufficient conditions (instead of the standard second-order sufficient conditions), we show that a local optimal solution is also a local saddle point of the augmented Lagrangian function.

Theorem 3.3. *Let $\bar x$ be a local optimal solution to problem (P). If the weak second-order sufficient conditions are satisfied at $\bar x$, then there exist multipliers $(\bar\lambda,\bar\mu)$ and $\bar r>0$ such that for any $r\ge\bar r$, the pair $(\bar x,(\bar\lambda,\bar\mu))$ is a local saddle point of $L_r$; that is, the saddle-point inequality (2.2) holds locally.*

*Proof. *Since is a feasible solution to problem (), then and
It follows from (A1) and (2.4) that
Combining the last two inequalities yields . Hence,
We obtain the left inequality of (2.2) as desired.

To show the right inequality of (2.2), it suffices to prove that it holds for some and , since is nondecreasing in . Suppose to the contrary that no such and exist. Then for each positive integer , there exists such that and
Define as follows:
where
For , we have

Three cases may be considered.
*Case 1. *When , take . Since , the origin is a minimizer of
*Case 2. *When , taking into account the fact that the function is decreasing on in , is a minimizer of
*Case 3. *When , let be the *i*th component of vector , for any . We get from (A3) that
that is, , and this implies that is decreasing on in . So is a minimizer of
Therefore,
Since , then
Hence
Set , which is bounded; hence we may assume without loss of generality that converges to with . It follows from (3.18) that
Let be the smallest eigenvalue of . Then by assumption. We claim that converges to zero. Suppose to the contrary that for , we have
Taking limits in the above inequality as , the right-hand side converges to , which contradicts (3.19). So as claimed. Noting that (3.9) amounts to
then
So for any , taking limits in (3.19) as approaches , we must have
For , there is an infinite index set such that for all . So
where lies in the line segment between and . Combining (3.24)–(3.26) implies that . We get by Definition 2.2, which contradicts (3.23). So the right inequality of (2.2) holds. The proof is complete.

The converse of Theorem 3.3 is given below.

Theorem 3.4. *If is a local saddle point of for some , then is a local optimal solution to problem ().*

*Proof. *Let be a local saddle point of for some . Then
whenever . We first show that is a feasible solution. If not, there must exist for some or for some .
*Case 1. *There exists for some . Note that
Choose and for all . Then we get from (A1) that
Taking the limit as yields , which contradicts (2.2). So we have for all .
*Case 2. *There exists for some . Choose and with the same sign, and let approach ; this contradicts (2.2). So we have for all , and is a feasible solution as claimed.
Since is feasible, we have and for . In particular, for , we have
Substituting (3.30) into (2.2) yields
On the other hand, for any feasible and ,
which, together with (3.31) and (3.32), implies that
So
Finally, for any feasible , we have
which means that is a local optimal solution of ().

*Example 3.5. *Consider the nonconvex programming problem

The optimal solutions of the above problem are and , with objective value . Setting and , we get from the KKT conditions that
The Hessian matrix is positive definite, so the weak second-order sufficient conditions are satisfied at . By Theorem 3.3, is a local saddle point for , and hence, by Theorem 3.4, and are the optimal solutions to ().

Based on the above discussion, we know that, if is a saddle point of , then , where denotes the optimal value of problem () and the problem on the right-hand side. Note that the latter problem has only nonnegativity constraints, which are rather simple. Hence, we successfully convert the original nonconvex problem into an optimization problem with simple constraints by using the augmented Lagrangian function. Furthermore, many efficient algorithms for unconstrained optimization, such as gradient-type algorithms, can be adapted to solve it. Therefore, our results, obtained under weaker conditions, provide new insight and a theoretical foundation for the use of augmented Lagrangian functions in constrained optimization problems.
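A gradient-type method for a problem with only nonnegativity constraints can be sketched as projected gradient descent; the smooth objective below is a hypothetical stand-in of ours, not the dual problem of this paper:

```python
# Projected gradient descent over the nonnegative orthant for the
# hypothetical smooth objective
#   phi(lam) = (lam1 - 3)^2 + (lam2 + 1)^2,   lam >= 0.
# The unconstrained minimizer is (3, -1); projecting onto lam >= 0
# moves the solution to (3, 0).

def grad(lam):
    return (2.0 * (lam[0] - 3.0), 2.0 * (lam[1] + 1.0))

lam = [0.0, 5.0]      # arbitrary nonnegative starting point
step = 0.1
for _ in range(200):
    g = grad(lam)
    # gradient step followed by projection onto the nonnegative orthant
    lam = [max(lam[i] - step * g[i], 0.0) for i in range(2)]

print([round(v, 6) for v in lam])   # [3.0, 0.0]
```

The projection onto the nonnegative orthant is just a componentwise `max` with zero, which is why simple constraints of this kind are cheap to handle inside any gradient-type scheme.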

#### Acknowledgments

The authors would like to give their sincere thanks to the anonymous referees for their helpful suggestions and valuable comments which improved the presentation of this paper. This research was supported by the National Natural Science Foundation of China (10971118, 10701047, 10901096).

#### References

- M. R. Hestenes, “Multiplier and gradient methods,” *Journal of Optimization Theory and Applications*, vol. 4, pp. 303–320, 1969.
- M. J. D. Powell, “A method for nonlinear constraints in minimization problems,” in *Optimization*, R. Fletcher, Ed., pp. 283–298, Academic Press, London, UK, 1969.
- R. T. Rockafellar, “Augmented Lagrange multiplier functions and duality in nonconvex programming,” *SIAM Journal on Control and Optimization*, vol. 12, pp. 268–285, 1974.
- R. T. Rockafellar and R. J.-B. Wets, *Variational Analysis*, vol. 317 of *Grundlehren der Mathematischen Wissenschaften*, Springer, Berlin, Germany, 1998.
- X. X. Huang and X. Q. Yang, “A unified augmented Lagrangian approach to duality and exact penalization,” *Mathematics of Operations Research*, vol. 28, no. 3, pp. 533–552, 2003.
- Q. Liu, W. M. Tang, and X. M. Yang, “Properties of saddle points for generalized augmented Lagrangian,” *Mathematical Methods of Operations Research*, vol. 69, no. 1, pp. 111–124, 2009.
- C. Wang, J. Zhou, and X. Xu, “Saddle points theory of two classes of augmented Lagrangians and its applications to generalized semi-infinite programming,” *Applied Mathematics and Optimization*, vol. 59, no. 3, pp. 413–434, 2009.
- A. Ben-Tal and M. Zibulevsky, “Penalty/barrier multiplier methods for convex programming problems,” *SIAM Journal on Optimization*, vol. 7, no. 2, pp. 347–366, 1997.
- D. P. Bertsekas, *Constrained Optimization and Lagrange Multiplier Methods*, Computer Science and Applied Mathematics, Academic Press, New York, NY, USA, 1982.
- X. X. Huang and X. Q. Yang, “Approximate optimal solutions and nonlinear Lagrangian functions,” *Journal of Global Optimization*, vol. 21, no. 1, pp. 51–65, 2001.
- X. X. Huang and X. Q. Yang, “Further study on augmented Lagrangian duality theory,” *Journal of Global Optimization*, vol. 31, no. 2, pp. 193–210, 2005.
- E. Polak and J. O. Royset, “On the use of augmented Lagrangians in the solution of generalized semi-infinite min-max problems,” *Computational Optimization and Applications*, vol. 31, no. 2, pp. 173–192, 2005.
- R. Polyak, “Modified barrier functions: theory and methods,” *Mathematical Programming*, vol. 54, no. 2, pp. 177–222, 1992.
- A. Rubinov and X. Yang, *Lagrange-Type Functions in Constrained Non-Convex Optimization*, vol. 85 of *Applied Optimization*, Kluwer Academic Publishers, Boston, Mass, USA, 2003.
- P. Tseng and D. P. Bertsekas, “On the convergence of the exponential multiplier method for convex programming,” *Mathematical Programming*, vol. 60, no. 1, pp. 1–19, 1993.
- Y. Y. Zhou and X. Q. Yang, “Augmented Lagrangian function, non-quadratic growth condition and exact penalization,” *Operations Research Letters*, vol. 34, no. 2, pp. 127–134, 2006.
- D. Li, “Zero duality gap for a class of nonconvex optimization problems,” *Journal of Optimization Theory and Applications*, vol. 85, no. 2, pp. 309–324, 1995.
- D. Li, “Saddle point generation in nonlinear nonconvex optimization,” *Nonlinear Analysis: Theory, Methods and Applications*, vol. 30, no. 7, pp. 4339–4344, 1997.
- D. Li and X. L. Sun, “Existence of a saddle point in nonconvex constrained optimization,” *Journal of Global Optimization*, vol. 21, no. 1, pp. 39–50, 2001.
- D. Li and X. L. Sun, “Convexification and existence of a saddle point in a pth-power reformulation for nonconvex constrained optimization,” *Nonlinear Analysis: Theory, Methods and Applications*, vol. 47, no. 8, pp. 5611–5622, 2001.
- X. L. Sun, D. Li, and K. I. M. McKinnon, “On saddle points of augmented Lagrangians for constrained nonconvex optimization,” *SIAM Journal on Optimization*, vol. 15, no. 4, pp. 1128–1146, 2005.