Abstract

We give a new class of augmented Lagrangian functions for nonlinear programming problems with both equality and inequality constraints. The close relationship between local saddle points of this new augmented Lagrangian and local optimal solutions is discussed. In particular, we show that a local saddle point is a local optimal solution and that the converse is also true under rather mild conditions.

1. Introduction

Consider the nonlinear optimization problem
$$\mathrm{(P)}\qquad \min f(x)\quad \text{s.t.}\quad g_i(x)\le 0,\ i=1,\dots,m,\quad h_j(x)=0,\ j=1,\dots,l,\quad x\in X,$$
where $f$, $g_i$ for $i=1,\dots,m$, and $h_j$ for $j=1,\dots,l$ are twice continuously differentiable functions from $\mathbb{R}^n$ to $\mathbb{R}$, and $X\subseteq\mathbb{R}^n$ is a nonempty closed subset.

The classical Lagrangian function associated with (P) is defined as
$$L(x,\lambda,\mu)=f(x)+\sum_{i=1}^{m}\lambda_i g_i(x)+\sum_{j=1}^{l}\mu_j h_j(x),$$
where $\lambda=(\lambda_1,\dots,\lambda_m)^T\in\mathbb{R}^m_+$ and $\mu=(\mu_1,\dots,\mu_l)^T\in\mathbb{R}^l$.

The Lagrangian dual problem (D) is
$$\mathrm{(D)}\qquad \max_{\lambda\in\mathbb{R}^m_+,\ \mu\in\mathbb{R}^l}\ \theta(\lambda,\mu),\qquad\text{where}\quad \theta(\lambda,\mu)=\inf_{x\in X}\,L(x,\lambda,\mu).$$
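As a simple illustration of (D) (a convex toy instance of our own, not taken from the original), take $\min\{x^2:\ 1-x\le 0\}$ with $X=\mathbb{R}$. Then
$$\theta(\lambda)=\inf_{x\in\mathbb{R}}\{x^2+\lambda(1-x)\}=\lambda-\frac{\lambda^2}{4},\qquad \max_{\lambda\ge 0}\theta(\lambda)=\theta(2)=1=\min\{x^2:\ 1-x\le 0\},$$
so the duality gap is zero, as expected in the convex case.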

Lagrange multiplier theory not only plays a key role in many issues of mathematical programming, such as sensitivity analysis, optimality conditions, and numerical algorithms, but also has important applications in, for example, scheduling, resource allocation, engineering design, and matching problems. Both analysis and experiments indicate that it performs substantially better than classical methods on some engineering problems, especially medium- or large-scale ones.

Roughly speaking, the augmented Lagrangian method approximates an optimal solution of the original problem by solving a sequence of unconstrained optimization problems constructed from the Lagrange multipliers. Toward this end, we must ensure that the zero duality gap property holds between the primal and dual problems. Saddle point theory has therefore received much attention, due to its equivalence with the zero duality gap property. It is well known that, for convex programming problems, the zero duality gap holds with the classical Lagrangian function above. However, a nonzero duality gap may appear for nonconvex optimization problems. The main reason is that the classical Lagrangian function is linear with respect to the Lagrange multiplier. To overcome this drawback, various types of nonlinear Lagrangian functions and augmented Lagrangian functions have been developed in recent years. For example, Hestenes [1] and Powell [2] independently proposed augmented Lagrangian methods for solving equality constrained problems by incorporating a quadratic penalty term into the classical Lagrangian function. This was extended by Rockafellar [3] to constrained optimization problems with both equality and inequality constraints. A convex augmented function and the corresponding augmented Lagrangian with the zero duality gap property were introduced by Rockafellar and Wets in [4]. This was further extended by Huang and Yang, who removed the convexity assumption imposed on the augmented functions in [4]; see [5, 6] for the details. Wang et al. [7] proposed two classes of augmented Lagrangian functions, which are simpler than those given in [4, 5], and discussed the existence of saddle points. For other kinds of augmented Lagrangian methods, refer to [8-16]; for saddle point theory and multiplier methods, refer to [17-20]. It should be noted that the sufficient conditions given in the above papers for the existence of local saddle points of augmented Lagrangian functions all require the standard second-order sufficient conditions. A natural question therefore arises: can we obtain local saddle points under rather mild assumptions, other than the standard second-order sufficient conditions? Motivated by this, in this paper we propose a new augmented Lagrangian function and establish the close relationship between local saddle points and local optimal solutions of the original problem. In particular, we show that this relationship holds under weak second-order sufficient conditions.
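For concreteness, the following minimal sketch implements the classical Hestenes-Powell multiplier iteration for a single equality constraint (a toy instance of our own; SciPy's general-purpose minimizer stands in for whatever unconstrained solver one prefers):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = x1^2 + x2^2
#   subject to h(x) = x1 + x2 - 1 = 0   (equality constraint)
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1.0

def augmented_lagrangian_method(x, mu=0.0, r=1.0, tol=1e-8):
    """Classical multiplier iteration: minimize the quadratic augmented
    Lagrangian in x, then update the multiplier mu and the penalty r."""
    for _ in range(50):
        # Unconstrained subproblem built from the current multiplier.
        L_r = lambda x: f(x) + mu * h(x) + 0.5 * r * h(x)**2
        x = minimize(L_r, x).x
        if abs(h(x)) < tol:
            break
        mu += r * h(x)          # first-order multiplier update
        r *= 2.0                # mild penalty increase
    return x, mu

x_opt, mu_opt = augmented_lagrangian_method(np.zeros(2))
print(x_opt, mu_opt)  # approx [0.5, 0.5] and mu approx -1.0
```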

The paper is organized as follows. After introducing some basic notation and definitions, we present sufficient conditions for the existence of a local saddle point and discuss the close relationship between local saddle points and local optimal solutions of the original problem. Finally, an example is given to illustrate our results.

2. Notation and Definitions

We first introduce some basic notation and definitions, which will be used in the sequel. Let $\mathbb{R}^m_+$ denote the nonnegative orthant of $\mathbb{R}^m$. For notational simplicity, write $g(x)=(g_1(x),\dots,g_m(x))^T$ and $h(x)=(h_1(x),\dots,h_l(x))^T$, and let $S$ denote the feasible set of (P).

Definition 2.1. A pair $(x^*,\lambda^*,\mu^*)\in X\times\mathbb{R}^m_+\times\mathbb{R}^l$ is said to be a global saddle point of $L_r$ for some $r>0$ if
$$L_r(x^*,\lambda,\mu)\le L_r(x^*,\lambda^*,\mu^*)\le L_r(x,\lambda^*,\mu^*)$$
whenever $x\in X$, $\lambda\in\mathbb{R}^m_+$, and $\mu\in\mathbb{R}^l$. If there exists some positive scalar $\delta$ such that the above inequality holds for all $x\in X\cap B(x^*,\delta)$, where $B(x^*,\delta)$ denotes the closed ball of radius $\delta$ centered at $x^*$, then $(x^*,\lambda^*,\mu^*)$ is said to be a local saddle point of $L_r$.
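For intuition, here is a toy instance of our own (with the classical Lagrangian in place of $L_r$): for $\min\{x^2:\ 1-x\le 0\}$, the pair $(x^*,\lambda^*)=(1,2)$ is a global saddle point of $L(x,\lambda)=x^2+\lambda(1-x)$, since
$$L(1,\lambda)=1=L(1,2)\quad\text{for all }\lambda\ge 0,\qquad L(x,2)=(x-1)^2+1\ge 1=L(1,2)\quad\text{for all }x\in\mathbb{R},$$
so both saddle inequalities hold.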

Definition 2.2 (weak second-order sufficient conditions). Let $x^*$ be a feasible solution. (1) Suppose that the KKT conditions hold at $x^*$; that is, there exist scalars $\lambda_i^*\ge 0$ for $i=1,\dots,m$ and $\mu_j^*$ for $j=1,\dots,l$ such that
$$\nabla f(x^*)+\sum_{i=1}^{m}\lambda_i^*\nabla g_i(x^*)+\sum_{j=1}^{l}\mu_j^*\nabla h_j(x^*)=0,\qquad \lambda_i^* g_i(x^*)=0,\quad i=1,\dots,m.$$
(2) The Hessian matrix $\nabla^2_{xx}L(x^*,\lambda^*,\mu^*)$ is positive definite on the cone
$$\{d\neq 0:\ \nabla g_i(x^*)^T d=0,\ i\in I(x^*),\ \nabla h_j(x^*)^T d=0,\ j=1,\dots,l\},$$
where $I(x^*)=\{i:\ g_i(x^*)=0\}$ is the index set of active inequality constraints.

Clearly, the above cone is contained in the critical cone involved in the standard second-order sufficient condition, so positive definiteness is required on a smaller set of directions. Hence, we refer to the above conditions as weak second-order sufficient conditions.
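To illustrate Definition 2.2 on a toy instance of our own: consider $\min\{x_1^2+x_2^2:\ 1-x_1^2\le 0\}$ at $x^*=(1,0)$ with multiplier $\lambda^*=1$. The Hessian of the Lagrangian is
$$\nabla^2_{xx}L(x^*,\lambda^*)=\begin{pmatrix}2&0\\0&2\end{pmatrix}+1\cdot\begin{pmatrix}-2&0\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\0&2\end{pmatrix},$$
which is not positive definite on all of $\mathbb{R}^2$; yet every nonzero $d$ with $\nabla g(x^*)^T d=-2d_1=0$ satisfies $d^T\nabla^2_{xx}L(x^*,\lambda^*)d=2d_2^2>0$, so the weak second-order sufficient conditions hold at $x^*$.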

3. Existence of Local Saddle Points

For inequality constrained optimization, Sun et al. [21] introduced a class of generalized augmented Lagrangian functions built from an augmenting function $\sigma:\mathbb{R}\to\mathbb{R}$. The function $\sigma$ satisfies the following assumptions:
(A1) $\sigma(0)=0$ and $\sigma(t)\ge 0$ for all $t\in\mathbb{R}$;
(A2) $\sigma$ is nondecreasing on $[0,+\infty)$ and nonincreasing on $(-\infty,0]$;
(A3) $\sigma$ is continuously differentiable and $\sigma'(0)=0$;
(A4) $\sigma$ is twice continuously differentiable in a neighborhood of $0$ and $\sigma''(0)t^2>0$ for all nonzero $t\in\mathbb{R}$.

We extend this function to treat optimization problems with both equality and inequality constraints. Consider the new augmented Lagrangian function $L_r(x,\lambda,\mu)$ obtained by augmenting both the inequality and the equality constraint terms through $\sigma$, where $\sigma$ is defined as above and satisfies (A1)-(A4). Several important augmenting functions satisfy the above assumptions, for example:

Example 3.1. $\sigma(t)=t^2$.

Example 3.2. $\sigma(t)=\cosh t-1$.
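The assumptions (A1)-(A4) are straightforward to check for such functions. The following sketch verifies them numerically for the two examples above (under the univariate reading of (A1)-(A4) given earlier; the derivative formulas are supplied by hand):

```python
import numpy as np

# Candidate augmenting functions: (sigma, sigma', sigma'').
sigmas = {
    "t^2":       (lambda t: t**2,         lambda t: 2*t,        lambda t: 2*np.ones_like(t)),
    "cosh(t)-1": (lambda t: np.cosh(t)-1, lambda t: np.sinh(t), lambda t: np.cosh(t)),
}

t = np.linspace(-5, 5, 1001)
for name, (s, ds, dds) in sigmas.items():
    a1 = s(np.array(0.0)) == 0 and np.all(s(t) >= 0)             # sigma(0)=0, sigma>=0
    a2 = np.all(np.diff(s(t[t >= 0])) >= 0) and \
         np.all(np.diff(s(t[t <= 0])) <= 0)                      # monotone on half-lines
    a3 = abs(ds(np.array(0.0))) < 1e-12                          # sigma'(0) = 0
    a4 = dds(np.array(0.0)) > 0                                  # sigma''(0) > 0
    print(name, a1, a2, a3, a4)                                  # all True
```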

Under the weak second-order sufficient conditions (instead of the standard second-order sufficient conditions), we show that a local optimal solution is also a local saddle point of the augmented Lagrangian function.

Theorem 3.3. Let $x^*$ be a local optimal solution to problem (P). If the weak second-order sufficient conditions are satisfied at $x^*$ with multipliers $(\lambda^*,\mu^*)$, then there exist $r^*>0$ and $\delta>0$ such that, for any $r\ge r^*$,
$$L_r(x^*,\lambda,\mu)\le L_r(x^*,\lambda^*,\mu^*)\le L_r(x,\lambda^*,\mu^*)$$
for all $x\in X\cap B(x^*,\delta)$, $\lambda\in\mathbb{R}^m_+$, and $\mu\in\mathbb{R}^l$; that is, $(x^*,\lambda^*,\mu^*)$ is a local saddle point of $L_r$.

Proof. Since $x^*$ is a feasible solution to problem (P), we have $g_i(x^*)\le 0$ for $i=1,\dots,m$ and $h_j(x^*)=0$ for $j=1,\dots,l$. It then follows from (A1) and (2.4) that $L_r(x^*,\lambda,\mu)\le L_r(x^*,\lambda^*,\mu^*)$ for all $\lambda\in\mathbb{R}^m_+$ and $\mu\in\mathbb{R}^l$. This is the left inequality of (2.2), as desired.
To show the right inequality of (2.2), it suffices to prove that it holds for some $r^*>0$ and $\delta>0$, since $L_r$ is nondecreasing in $r$. Suppose, to the contrary, that no such $r^*$ and $\delta$ exist. Then for each positive integer $k$ there exists $x_k$ with $\|x_k-x^*\|\le 1/k$ such that $L_k(x_k,\lambda^*,\mu^*)<L_k(x^*,\lambda^*,\mu^*)$. Define the auxiliary function as in (3.9); for this function, we have the estimates below.
Three cases may be considered.
Case 1. In the first case, the corresponding term vanishes, so the origin is a minimizer of the function in question.
Case 2. In the second case, taking into account the fact that the function is decreasing on the relevant interval, the point under consideration is a minimizer.
Case 3. In the third case, consider the $i$th component of the vector in question for each $i$. We get from (A3) that the corresponding derivative is nonpositive, which implies that the function is decreasing on the relevant interval; so the point under consideration is again a minimizer.
Combining the three cases yields (3.18). The resulting sequence is bounded, so we may assume without loss of generality that it converges to a limit $d$ with $\|d\|=1$. Let $\alpha$ be the smallest of the relevant eigenvalues; then $\alpha>0$ by assumption. We claim that the remainder terms converge to zero. Suppose, to the contrary, that they do not; then, taking limits as $k\to\infty$ in the resulting inequality, the right-hand side converges to a finite quantity, which contradicts (3.19). So the claim holds. Noting that (3.9) can be rewritten accordingly, and taking limits in (3.19) as $k\to\infty$, we must have (3.23). Furthermore, there is an infinite index set over which the same constraints remain active, so, by the mean value theorem, the constraint values can be expanded at intermediate points lying on the line segments between $x_k$ and $x^*$. Putting (3.24)-(3.26) together implies that $d$ belongs to the cone of Definition 2.2. By Definition 2.2 we then get $d^T\nabla^2_{xx}L(x^*,\lambda^*,\mu^*)d>0$, which contradicts (3.23). So the right inequality of (2.2) holds. The proof is complete.

The converse of Theorem 3.3 is given below.

Theorem 3.4. If $(x^*,\lambda^*,\mu^*)$ is a local saddle point of $L_r$ for some $r>0$, then $x^*$ is a local optimal solution to problem (P).

Proof. Let $(x^*,\lambda^*,\mu^*)$ be a local saddle point of $L_r$ for some $r>0$. Then
$$L_r(x^*,\lambda,\mu)\le L_r(x^*,\lambda^*,\mu^*)\le L_r(x,\lambda^*,\mu^*)$$
whenever $x\in X\cap B(x^*,\delta)$, $\lambda\in\mathbb{R}^m_+$, and $\mu\in\mathbb{R}^l$. We first show that $x^*$ is a feasible solution. If not, then either $g_i(x^*)>0$ for some $i$ or $h_j(x^*)\ne 0$ for some $j$.
Case 1. Suppose that $g_i(x^*)>0$ for some $i$. Choose $\lambda_i$ arbitrarily large while keeping all other multipliers fixed. Then we get from (A1) that $L_r(x^*,\lambda,\mu)$ is unbounded above; taking the limit as $\lambda_i\to+\infty$ thus contradicts (2.2). So $g_i(x^*)\le 0$ for all $i$.
Case 2. Suppose that $h_j(x^*)\ne 0$ for some $j$. Choose $\mu_j$ with the same sign as $h_j(x^*)$ and let $\mu_j$ approach infinity, which again contradicts (2.2). So $h_j(x^*)=0$ for all $j$. Then $x^*$ is a feasible solution, as claimed.
Since $x^*$ is feasible, we have $g_i(x^*)\le 0$ for all $i$ and $h_j(x^*)=0$ for all $j$; in particular, (3.30) holds. Substituting (3.30) into (2.2) yields (3.31). On the other hand, (3.32) holds for any feasible $x$ and any $\lambda\in\mathbb{R}^m_+$ and $\mu\in\mathbb{R}^l$, which, together with (3.31) and (3.32), implies (3.33). So, finally, for any feasible $x$ with $\|x-x^*\|\le\delta$, we have $f(x^*)\le f(x)$, which means that $x^*$ is a local optimal solution of (P).

Example 3.5. Consider the following nonconvex programming problem.

The above problem has two optimal solutions, which share the same objective value. Setting the multipliers according to the KKT conditions, the Hessian matrix is positive definite, so the weak second-order sufficient conditions are satisfied at each optimal solution $x^*$. By Theorem 3.3, $(x^*,\lambda^*,\mu^*)$ is a local saddle point of $L_r$ for all sufficiently large $r$, and hence, by Theorem 3.4, the corresponding points are indeed local optimal solutions of (P).
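Verifications of this kind reduce to a KKT solve plus a positive definiteness check on the cone of Definition 2.2, so they are easy to automate. The following minimal sketch does this numerically for the toy problem used after Definition 2.2 (our own instance, not the problem of Example 3.5):

```python
import numpy as np

# Toy nonconvex problem: min f(x) = x1^2 + x2^2  s.t.  g(x) = 1 - x1^2 <= 0.
# Local optima: x* = (1, 0) and (-1, 0), both with objective value 1.
x_star = np.array([1.0, 0.0])

grad_f = lambda x: np.array([2*x[0], 2*x[1]])
grad_g = lambda x: np.array([-2*x[0], 0.0])
hess_f = np.diag([2.0, 2.0])
hess_g = np.diag([-2.0, 0.0])

# KKT: grad_f + lam * grad_g = 0  =>  lam = 1 at x*.
lam = 1.0
print(grad_f(x_star) + lam * grad_g(x_star))   # [0. 0.] -- KKT holds

# Hessian of the Lagrangian is diag(0, 2): NOT positive definite on R^2,
# so the check is done on the cone of Definition 2.2:
# C = { d != 0 : grad_g(x*)^T d = 0 } = { d : d1 = 0 }.
H = hess_f + lam * hess_g
for d2 in np.linspace(-1, 1, 11):
    d = np.array([0.0, d2])                    # directions in the cone
    if np.linalg.norm(d) > 0:
        assert d @ H @ d > 0                   # weak SOSC holds at x*
print("weak second-order sufficient conditions verified at", x_star)
```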

Based on the above discussion, we know that, if $(x^*,\lambda^*,\mu^*)$ is a saddle point of $L_r$, then the optimal value of problem (P) equals the optimal value of the dual-type problem
$$\max_{\lambda\in\mathbb{R}^m_+,\ \mu\in\mathbb{R}^l}\ \min_{x\in X}\,L_r(x,\lambda,\mu).$$
Note that this problem involves only nonnegativity constraints on $\lambda$ (rather simple constraints). Hence, we successfully convert the original nonconvex problem into a simply constrained optimization problem by using the augmented Lagrangian function. Furthermore, many efficient algorithms for solving unconstrained optimization problems, such as gradient-type algorithms, can be adapted to solve it (see the sketch below). Therefore, our results, obtained under weaker conditions, provide new insight and a theoretical foundation for the use of augmented Lagrangian functions in constrained optimization problems.
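For instance, the nonnegativity constraints on $\lambda$ can be handled by a simple projection. The following sketch (an illustrative scheme built on the classical Rockafellar augmented Lagrangian for inequality constraints, not on the new function proposed here) runs a projected multiplier ascent on the toy problem above:

```python
import numpy as np
from scipy.optimize import minimize

def dual_projected_gradient(f, g, x0, lam0, r=10.0, iters=20):
    """Sketch: maximize the dual of the classical (Rockafellar) augmented
    Lagrangian by a projected multiplier update, keeping lam >= 0."""
    x, lam = x0, lam0
    for _ in range(iters):
        # Inner subproblem: minimize the augmented Lagrangian in x.
        L = lambda x: f(x) + (1.0 / (2 * r)) * np.sum(
            np.maximum(0.0, lam + r * g(x))**2 - lam**2)
        x = minimize(L, x).x
        # Multiplier step, projected onto the nonnegative orthant.
        lam = np.maximum(0.0, lam + r * g(x))
    return x, lam

# Usage on the toy problem above: min x1^2 + x2^2  s.t.  1 - x1^2 <= 0.
f = lambda x: x[0]**2 + x[1]**2
g = lambda x: np.array([1.0 - x[0]**2])
x, lam = dual_projected_gradient(f, g, np.array([2.0, 0.5]), np.array([0.0]))
print(x, lam)   # approx [1, 0] and lam approx [1]
```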

Acknowledgments

The authors would like to give their sincere thanks to the anonymous referees for their helpful suggestions and valuable comments which improved the presentation of this paper. This research was supported by the National Natural Science Foundation of China (10971118, 10701047, 10901096).