New Exact Penalty Functions for Nonlinear Constrained Optimization Problems
For two kinds of nonlinear constrained optimization problems, we propose two simple penalty functions by augmenting the dimension of the primal problem with a variable that controls the weight of the penalty terms. Both penalty functions enjoy improved smoothness. Under mild conditions, we prove that both penalty functions are exact, in the sense that local minimizers of the associated penalty problem are precisely the local minimizers of the original constrained problem.
Penalty functions have long played an important role in solving constrained optimization problems in fields such as industrial design and management science. A penalty function is traditionally constructed for a nonlinear program by adding penalty or barrier terms for the constraints to the objective function or to a corresponding Lagrange function; the result can then be optimized by unconstrained or bound-constrained optimization software or by sequential quadratic programming (SQP) techniques. Whichever technique is employed, the penalty function depends on a parameter that must be driven to a limit. As the parameter approaches this limit, the minimizer of the penalty function, such as that of a barrier function or of the quadratic penalty function, converges to a minimizer of the original problem. With an exact penalty function (see [2–7]), by contrast, the minimizer of the corresponding penalty problem is already a minimizer of the original problem once the penalty parameter is suitably chosen. There are also nonsmooth penalty functions for nonsmooth optimization problems, such as an exact penalty function built from the distance function for nonsmooth variational inequality problems in Hilbert spaces. The convergence of lower-order exact penalization for constrained scalar set-valued optimization problems has likewise been established under sufficient conditions that are easy to verify.
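The limiting behavior just described can be made concrete on a toy problem. The problem, symbols, and function names below are illustrative assumptions, not taken from this paper:

```python
# Quadratic penalty on a toy problem: minimize f(x) = x^2 subject to
# h(x) = x - 1 = 0. The penalty P_rho(x) = x^2 + rho*(x - 1)^2 has the
# closed-form minimizer x(rho) = rho / (1 + rho), which reaches the true
# solution x* = 1 only in the limit rho -> infinity: the quadratic
# penalty is not exact.

def penalty_minimizer(rho):
    """Closed-form unconstrained minimizer of x^2 + rho*(x - 1)^2."""
    return rho / (1.0 + rho)

for rho in (1.0, 10.0, 1000.0):
    print(rho, penalty_minimizer(rho))  # approaches 1 as rho grows
```

Setting the derivative 2x + 2ρ(x − 1) to zero gives x(ρ) = ρ/(1 + ρ), so feasibility is only approached asymptotically, in contrast with the exact penalty functions discussed next.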
Traditional exact penalty functions are typically nonsmooth. When such a function is used as a merit function to accept a new iterate in an SQP method, it may cause the Maratos effect. On the other hand, a traditional smooth penalty function such as the quadratic penalty function cannot be exact, so a sequence of minimization subproblems must be solved as the parameter is driven to its limit. Ill-conditioning may then occur when the penalty parameter becomes too large or too small, which also makes the computation difficult. In [11, 12], augmented Lagrangian penalty functions have been proposed that are exact under strong conditions. Exact penalty functions based on the regularized gap function for variational inequalities have also been given, and the exactness of, and algorithms for, an objective penalty function for inequality constrained optimization have been studied. All these functions enjoy some smoothness, but exploiting it requires second- or third-order derivative information of the problem functions, which is difficult to obtain in practice. Moreover, all of the above penalty functions (see [11, 15–18] for a summary) may be unbounded below even when the constrained problem is bounded, which may make it difficult to locate a minimizer.
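The ill-conditioning mentioned above can be observed directly. In the following toy sketch (an illustrative assumption, not an example from this paper), the Hessian of a quadratic penalty becomes ever more ill-conditioned as the parameter grows:

```python
# For P_rho(x, y) = x^2 + y^2 + rho*(x - 1)^2 the Hessian is the
# diagonal matrix diag(2 + 2*rho, 2), so its condition number is
# (2 + 2*rho) / 2 = 1 + rho, which blows up with the penalty parameter
# and degrades Newton-type steps on the subproblems.

def hessian_condition(rho):
    """Condition number of the Hessian diag(2 + 2*rho, 2)."""
    return (2.0 + 2.0 * rho) / 2.0

for rho in (1.0, 1e3, 1e6):
    print(rho, hessian_condition(rho))
```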
Most results in the exact penalization literature are concerned with finding conditions under which a solution of the constrained optimization problem solves an unconstrained penalized optimization problem; the reverse property has rarely been studied, although it has received some attention. In this paper, two modified simple exact penalty functions are proposed for two kinds of constrained nonlinear programming problems, where the term simple means that the penalty function, constructed in the primal variable space, contains only the original objective and constraint functions of the constrained optimization problem and not the information of their differentials or of multipliers. A traditional exact penalty function of this kind can be expressed in the form F(x) = f(x) + ρq(x), where q(x) = 0 if x is feasible and q(x) > 0 otherwise. A simple exact penalty function of this kind is the classical l1 penalty function, which is known to be nonsmooth. Penalty functions without multipliers have been given in [12, 17, 20, 21]; under mild conditions these penalty functions are exact and smooth, but since they involve the differentials of the objective and constraint functions, they are not simple ones by our definition. More recently, a new exact penalty function has been constructed by adding a new finite-dimensional, or even one-dimensional, decision variable to control the penalty terms. Under mild conditions, it is proved that for a sufficiently large penalty parameter, every local minimizer of that penalty problem with finite penalty function value corresponds to a local minimizer of the original problem.
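On the toy problem of minimizing x^2 subject to x = 1 (an illustrative assumption, not an example from this paper), the exactness of the l1-type choice q(x) = |x − 1| can be verified numerically: once the penalty parameter exceeds the Lagrange multiplier (here 2), the unconstrained minimizer coincides with the constrained one.

```python
# l1-type exact penalty: F_rho(x) = x^2 + rho*|x - 1|. For rho > 2 the
# unconstrained minimizer is exactly the constrained solution x* = 1;
# for rho < 2 it sits at x = rho/2, short of feasibility.

def l1_penalty(x, rho):
    return x * x + rho * abs(x - 1.0)

def argmin_on_grid(rho, lo=-2.0, hi=2.0, n=40001):
    """Brute-force minimizer of the penalty on a uniform grid."""
    step = (hi - lo) / (n - 1)
    best_x, best_v = lo, l1_penalty(lo, rho)
    for i in range(1, n):
        x = lo + i * step
        v = l1_penalty(x, rho)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

print(argmin_on_grid(1.0))  # rho too small: minimizer near x = 0.5
print(argmin_on_grid(3.0))  # rho > 2: minimizer at the solution x = 1
```

The nonsmoothness of |x − 1| precisely at the solution is what the smooth constructions of this paper aim to avoid.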
Inspired by this idea, in this paper we augment the dimension of the program with a variable and propose a simple exact penalty function for the equality constrained mathematical program and a simple exact barrier-penalty function for the inequality constrained mathematical program, respectively. Our new penalty function for the equality constrained program differs from the earlier construction: there, the added variable is controlled by an auxiliary function that must satisfy several structural properties, none of which are needed for our function. Moreover, in that approach the penalty function for the inequality constrained program requires first converting the original problem into an equality constrained one. For the inequality constrained mathematical program, we instead propose a new simple exact log-type barrier-penalty function, which differs from the classical log-barrier function and has a broader feasible region.
2. A Modified Simple Exact Penalty Function for Equality Constrained Optimization Problems
We are now ready to propose a simple exact penalty function for equality constrained mathematical programs.
We consider the following problem: minimize f(x) subject to h_i(x) = 0, i = 1, ..., m, and x ∈ X, where X is a bounded open set in the n-dimensional Euclidean space and f and the h_i are all continuously differentiable on X. We assume that f is bounded below on X.
We then consider a new penalty function built by augmenting the problem with a one-dimensional variable: it combines the objective, a constraint violation measure, and penalty terms whose weights are controlled by the new variable through three integer exponents, two of which are even, together with a penalty parameter; particular small values of these exponents, and of a preset constant, are admissible choices. Compared with [22, 23], we remove the restriction imposed there that the controlling function be bounded and positive.
Based on the penalty function (3), we establish the corresponding penalty problem of minimizing it over the augmented variable space.
We next consider the smoothness of the penalty function. Computing its gradient on the region where the augmented variable is nonzero, and examining the limiting behavior as this variable tends to zero, one sees that the penalty function is continuously differentiable on the corresponding set.
We now discuss the exactness of the penalty function.
Theorem 1. Suppose one is given a sequence of local minimizers of the penalty problem with finite penalty values, along which the penalty parameters tend to infinity and the iterates converge. If the gradients of the constraint functions have full rank at the limit point, then the augmented variable tends to zero along the sequence and the limit point is feasible for the original problem.
Proof. Since each point of the sequence is a local minimizer of the penalty problem with finite penalty value, the first-order optimality condition (8) holds there; its component in the augmented variable gives (9), which can be rearranged into an equivalent equality. Suppose the augmented variable did not tend to zero along the sequence. Then in this equality the first and second terms on the left-hand side remain finite while the third tends to infinity, which is impossible; hence the augmented variable tends to zero. Moreover, passing to the limit in (8) and using the stated assumptions, the limit multipliers combine the constraint gradients to zero; since these gradients have full rank at the limit point, the constraint values must vanish there, so the limit point is feasible. This completes the proof.
Theorem 2. Suppose there exists a convergent sequence of local minimizers of the penalty problem such that the constraint gradients have full rank at the limit point, the penalty parameters form a strictly increasing unbounded sequence, and the limiting penalty value is finite. Then there exists a threshold such that, whenever the penalty parameter exceeds it, the corresponding local minimizer of the penalty problem yields a local minimizer of (2).
Proof. We first show that there exists a sufficiently large threshold beyond which the points of the sequence are feasible. If this were not the case, there would exist a subsequence, still denoted in the same way without loss of generality, along which each point is a local minimizer of the penalty problem with finite value yet remains infeasible. By (9), rearranging the optimality condition yields (17). By Theorem 1, the augmented variable tends to zero along this subsequence; hence the third term on the left-hand side of (17) tends to infinity while the second tends to zero, so the first term must diverge as well. Normalizing the associated multiplier vectors, which may be assumed to converge to a nonzero unit limit, and passing to the limit in (8), the constraint gradients at the limit point combine to zero against this nonzero multiplier. Since the constraint gradients have full rank, the multiplier must vanish, a contradiction. Hence there is a threshold beyond which the points are feasible. Since the limiting penalty value is finite, the definition of the penalty function shows that the limit point attains the original objective value; again by the definition of a local minimizer, there is a neighborhood of the limit point, with sufficiently small radius, in which every feasible point has an objective value no smaller, and thus the limit point is a local minimizer of (2).
Theorems 1 and 2 show that, under a constraint qualification, each local minimizer of the penalty problem corresponds to a local minimizer of the original problem; thus our penalty function for the equality constrained mathematical program is exact.
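A minimal numerical sketch of the augmented-variable mechanism is given below. The specific form F(x, eps) = f(x) + h(x)^2/eps + sigma*eps is our own illustrative assumption, not the function of this paper; it only shows how joint minimization over one extra variable can reproduce an exact l1-type penalty while remaining smooth wherever eps > 0.

```python
import itertools

# Augment min x^2 s.t. x - 1 = 0 with a variable eps > 0 and minimize
#   F(x, eps) = x^2 + (x - 1)^2 / eps + sigma * eps
# jointly in (x, eps). Eliminating eps analytically (eps* = |x-1|/sqrt(sigma))
# gives min_eps F = x^2 + 2*sqrt(sigma)*|x - 1|, an l1-type exact penalty,
# while F itself is smooth at every point with eps > 0.

def F(x, eps, sigma=9.0):
    h = x - 1.0
    return x * x + h * h / eps + sigma * eps

xs = [i * 0.01 for i in range(201)]              # x in [0, 2]
eps_grid = [1e-4 * 1.3 ** k for k in range(30)]  # eps roughly in [1e-4, 0.26]
x_best, eps_best = min(itertools.product(xs, eps_grid),
                       key=lambda p: F(*p))
print(x_best, eps_best)  # x settles at the constrained solution, eps at the grid floor
```

With sigma = 9 the induced penalty weight 2*sqrt(sigma) = 6 exceeds the Lagrange multiplier 2 of this toy problem, so the joint minimizer is exact in x while eps is driven toward zero.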
3. A New Simple Exact Barrier-Penalty Function for Inequality Constrained Optimization Problems
We now construct a class of simple smooth exact penalty functions for the inequality constrained optimization problem: minimize f(x) subject to g_i(x) ≤ 0, i = 1, ..., m, where f and the g_i are all continuously differentiable functions. Throughout this section, we assume that the feasible region is a nonempty and bounded set. Earlier work transforms the inequality constrained problem into a kind of equality constrained optimization problem by adding parameters to control the constraints. In this section, we instead give a new smooth and exact barrier-penalty function directly.
For problem (23), the classical barrier function augments the objective with a logarithmic barrier term weighted by a parameter, and the corresponding barrier problem is solved for a decreasing sequence of parameter values. Because the logarithmic term erects a barrier wall at the boundary points where some constraint is active, the barrier problem is equivalent to minimizing over the strict interior of the feasible region. The operative set is thus the interior of the feasible region, which implies that an interior point method needs a strictly interior starting point.
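The interior-point restriction can be seen on a toy instance (an illustrative assumption, not an example from this paper): the classical log-barrier is defined only on the strict interior, and its minimizer reaches the boundary solution only as the barrier parameter vanishes.

```python
import math

# Classical log-barrier for: minimize x^2 subject to x >= 1.
# B_mu(x) = x^2 - mu*log(x - 1) is defined only for x > 1 (the strict
# interior). Setting B'(x) = 2x - mu/(x - 1) = 0 gives the closed-form
# minimizer x(mu) = (1 + sqrt(1 + 2*mu)) / 2, which tends to the
# constrained solution x* = 1 only as mu -> 0.

def barrier_minimizer(mu):
    """Closed-form minimizer of x^2 - mu*log(x - 1) on x > 1."""
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

for mu in (1.0, 0.01, 1e-6):
    print(mu, barrier_minimizer(mu))  # strictly > 1 for every mu > 0
```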
In this section, our penalty function is constructed by augmenting the problem with a variable. Problem (23) is first rewritten in an equivalent augmented form, and the penalty function is then built from the objective, logarithmic barrier-penalty terms for the constraints, and a penalty parameter. This penalty function belongs to a class of logarithmic barrier-penalty functions, and its operative set can be enlarged to a set that contains the feasible region of the original problem.
Computing the partial derivatives of the penalty function on the region where the augmented variable is positive, and checking their limits as this variable tends to zero, one sees that the penalty function is continuously differentiable on a set determined by a constant bound. Consider the corresponding penalty problem; assuming the operative set is bounded, this problem is equivalent to minimizing the penalty function over that set.
In the following we discuss the exactness of the penalty function.
Theorem 3. Suppose there exists a sequence satisfying the following conditions: (1) each point is a local minimizer of the penalty problem and the limiting penalty value is finite; (2) the penalty parameters tend to infinity and the iterates converge; (3) the extended Mangasarian-Fromovitz constraint qualification (EMFCQ) holds at the limit point, that is, there exists a direction whose inner product with the gradient of every constraint that is active or violated at that point is negative. Then the augmented variable tends to zero along the sequence and the limit point is feasible.
Proof. By the assumptions, the first-order optimality conditions (36) hold at each point of the sequence, and for each penalty parameter the finiteness of the penalty value yields inequality (37), with equality if and only if the point is feasible. Suppose the limit point were infeasible. By (37) and the divergence of the penalty parameters, the barrier-penalty terms remain bounded, because the iterates themselves are bounded. Consequently, for all sufficiently large parameters, at least one constraint value stays bounded away from zero; collect the set of such indices. Then, by (36) and the boundedness of the remaining terms, the gradients of these constraints combine in the limit so that no direction can have a negative inner product with all of them, contradicting the assumption that EMFCQ is satisfied at the limit point. Thus the limit point is feasible, and since the penalty terms drive the augmented variable to zero, the conclusion follows.
Theorem 4. Assume the operative set is bounded, and suppose there exists a convergent sequence of local minimizers of the penalty problem such that EMFCQ holds at the limit point, the penalty parameters form an increasing sequence, and the limiting penalty value is finite. Then there exists a sufficiently large threshold such that, whenever the penalty parameter exceeds it, the corresponding point is a local minimizer of the original problem.
Proof. We first show that there exists a sufficiently large threshold beyond which the points of the sequence are feasible. If not, there exists a subsequence, assumed without loss of generality to be the sequence itself, along which each point is a local minimizer of the penalty problem with finite limiting value but remains infeasible. Then, by Theorem 3 and inequality (37), the same limiting relations as in the proof of Theorem 3 hold, and they contradict the assumption that EMFCQ is satisfied at the limit point. Thus there is a threshold beyond which the points are feasible. Since the limiting penalty value is finite, the definition of the penalty function shows that the limit point attains the original objective value; by the definition of a local minimizer, there is a neighborhood of the limit point, with sufficiently small radius, in which every feasible point has an objective value no smaller, and thus the limit point is a local minimizer of the original problem.
Theorem 4 shows that, under a constraint qualification, a local minimizer of the penalty problem corresponds to a local minimizer of the original problem when the penalty parameter is sufficiently large; thus the penalty function (28) is an exact penalty function. Since (28) contains a barrier term, an interior point method can still be applied to problem (23). Note that the starting point may be taken as any interior point of the enlarged operative set.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors wish to thank the anonymous referees for their efforts and valuable comments. The authors would also like to thank Professor Zhang Liansheng for some very helpful comments on a preliminary version of this paper. This research was supported by the National Natural Science Foundation of China under Grants 11271233 and 11101248, the Shandong Natural Science Foundation under Grant ZR2012AM016, and Foundation 4041-409012 of Shandong University of Technology.
N. Maratos, Exact penalty function algorithms for finite dimensional and control optimization problems [Ph.D. thesis], University of London, 1978.
D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, NY, USA, 1982.
M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New York, NY, USA, 2nd edition, 1993.
R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, New York, NY, USA, 2nd edition, 1987.