Journal of Applied Mathematics
Volume 2012 (2012), Article ID 620949, 12 pages
http://dx.doi.org/10.1155/2012/620949
Research Article

On the Convergence of a Smooth Penalty Algorithm without Computing Global Solutions

1School of Science, Shandong University of Technology, Zibo 255049, China
2Institute of Operations Research, Qufu Normal University, Qufu 273165, China

Received 18 September 2011; Accepted 9 November 2011

Academic Editor: Yeong-Cheng Liou

Copyright © 2012 Bingzhuang Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We consider a smooth penalty algorithm for solving nonconvex optimization problems, based on a family of smooth functions that approximate the usual $l_1$ exact penalty function. At each iteration the algorithm only requires a stationary point of the smooth penalty function, so the difficulty of computing a global solution is avoided. Under a generalized Mangasarian-Fromovitz constraint qualification (GMFCQ) that is weaker and more general than the traditional MFCQ, we prove that the sequence generated by the algorithm enters the feasible set of the primal problem after finitely many iterations, and that every accumulation point of the iterate sequence, if one exists, is a Karush-Kuhn-Tucker (KKT) point. Furthermore, we obtain stronger convergence results for convex optimization problems.

1. Introduction

Consider the following nonconvex optimization problem
$$\min f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \; i = 1, \ldots, m, \; x \in R^n, \tag{NP}$$
where $f, g_i : R^n \to R$, $i = 1, \ldots, m$, are all continuously differentiable functions. Without loss of generality, we suppose throughout this paper that $\inf_{x \in R^n} f(x) \ge 0$, because otherwise we can substitute $\exp(f(x))$ for $f(x)$. Let $\Omega_\varepsilon = \{x \in R^n : g_i(x) \le \varepsilon, \; i = 1, \ldots, m\}$ be the relaxed feasible set for $\varepsilon > 0$. Then $\Omega_0$ is the feasible set of (NP).

The classical $l_1$ exact penalty function [1] is
$$f_\beta(x) = f(x) + \beta \sum_{i=1}^m g_i(x)_+, \tag{1.1}$$
where $\beta > 0$ is a penalty parameter and
$$g_i(x)_+ = \max\{0, g_i(x)\}, \quad i = 1, \ldots, m. \tag{1.2}$$
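As a concrete illustration, (1.1)–(1.2) can be transcribed directly into code; the sample problem below (minimize $x^2$ subject to $1 - x \le 0$) is our own illustration, not taken from the paper.

```python
# Classical l1 exact penalty: f_beta(x) = f(x) + beta * sum_i max(0, g_i(x)).
def l1_penalty(f, gs, beta):
    return lambda x: f(x) + beta * sum(max(0.0, g(x)) for g in gs)

f = lambda x: x[0] ** 2
g = lambda x: 1.0 - x[0]          # feasible set: x >= 1
f_beta = l1_penalty(f, [g], beta=10.0)
print(f_beta([0.0]))   # 0 + 10 * max(0, 1)  = 10.0 (infeasible point is penalized)
print(f_beta([2.0]))   # 4 + 10 * max(0, -1) = 4.0  (feasible point is unchanged)
```

Note the kink of $\max\{0, \cdot\}$ at $g_i(x) = 0$: it is exactly this nonsmoothness that the next sections smooth away.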

The obvious advantage of traditional exact penalty functions such as the $l_1$ exact penalty function is that when the penalty parameter is sufficiently large, their global optimal solutions exist and are optimal solutions of (NP). But they also have an obvious disadvantage, namely their nonsmoothness, which prevents the use of many efficient unconstrained optimization algorithms (such as gradient-type or Newton-type methods). Therefore the study of smooth approximations of exact penalty functions has attracted broad interest [2–8]. In recent years, several smooth penalty methods based on such approximations have been given to solve (NP). For example, [9] gives a smooth penalty method based on approximating the $l_1$ exact penalty function. Under the assumptions that the optimal solution satisfies MFCQ and the iterate sequence is bounded, it is proved there that the iterate sequence enters the feasible set and every accumulation point is an optimal solution of (NP). In [10, 11], smooth penalty methods based on approximating low-order exact penalty functions are considered. Reference [10] proves results similar to those of [9] under rather strict conditions (some of which are hard to verify). The conditions for convergence of the smooth penalty algorithm in [11] are weaker than those in [10], but [11] only proves that every accumulation point of the iterate sequence is a Fritz John (FJ) point of (NP).

In the algorithms given in [9–11], a global optimal solution of the smooth penalty problem is needed at each iteration. As is well known, it is very difficult to find a global optimal point of a nonconvex function. To avoid this difficulty, in this paper we give a smooth penalty algorithm based on a smooth approximation of the $l_1$ exact penalty function. The feature of this algorithm is that only a stationary point of the penalty function needs to be computed at each iteration. To prove the convergence of the algorithm, we first establish a generalized Mangasarian-Fromovitz constraint qualification (GMFCQ) that is weaker and more general than the traditional MFCQ. Under this condition, we prove that the iterate sequence of the algorithm enters the feasible set of (NP). Moreover, we prove that if the iterate sequence has accumulation points, then each of them is a KKT point of (NP). Finally, we apply the algorithm to convex optimization and obtain stronger convergence results.

The rest of this paper is organized as follows. In the next section, we give a family of smooth penalty functions. In Section 3, based on the smooth penalty functions given in Section 2, we propose an algorithm for (NP) and analyze its convergence under the GMFCQ condition. At the end of Section 3 we give an example that satisfies GMFCQ.

2. Smooth Approximation to the $l_1$ Exact Penalty Function

In this section we give a family of smooth penalty functions that decreasingly approximate the $l_1$ exact penalty function. We first consider a class of smooth functions $\phi : R \to R_+$ with the following properties:

(I) $\phi(\cdot)$ is a continuously differentiable convex function with $\phi(0) > 0$;
(II) $\lim_{t \to -\infty} \phi(t) = a$, where $a$ is a nonnegative constant;
(III) $\phi(t) \ge t$, for any $t > 0$;
(IV) $\lim_{t \to +\infty} \phi(t)/t = 1$.

From (I)–(IV), it follows that $\phi$ satisfies:

(V) $0 \le \phi'(t) \le 1$ for any $t \in R$, and $\lim_{t \to -\infty} \phi'(t) = 0$, $\lim_{t \to +\infty} \phi'(t) = 1$;
(VI) $r\phi(t/r)$ increases with respect to $r > 0$, for any $t \in R$;
(VII) $r\phi(t/r) \to t_+$ $(r \to 0^+)$, for any $t \in R$.

The following functions are often used in smooth approximations of the $l_1$ exact penalty function and satisfy properties (I)–(IV):

(1) $\phi(t) = \log(1 + e^t)$;
(2) $\phi(t) = (t + \sqrt{t^2 + 4})/2$;
(3) $\phi(t) = e^t$ for $t \le 0$, and $\phi(t) = t + 1$ for $t > 0$.
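Properties (I)–(IV) can be spot-checked numerically for the three listed functions; the following sketch (our own illustration) evaluates each $\phi$ at a few sample points.

```python
import math

# The three smoothing functions above; phi1 is written in the numerically
# stable form log(1 + e^t) = max(t, 0) + log(1 + e^{-|t|}).
def phi1(t):
    return max(t, 0.0) + math.log1p(math.exp(-abs(t)))

def phi2(t):
    return (t + math.sqrt(t * t + 4.0)) / 2.0

def phi3(t):
    return math.exp(t) if t <= 0 else t + 1.0

# Numerical spot-checks of properties (I)-(IV):
for phi in (phi1, phi2, phi3):
    assert phi(0.0) > 0.0                              # (I): phi(0) > 0
    assert phi(-1e6) < 1e-3                            # (II): phi(t) -> a (= 0 here)
    assert all(phi(t) >= t for t in (0.5, 2.0, 10.0))  # (III): phi(t) >= t for t > 0
    assert abs(phi(1e6) / 1e6 - 1.0) < 1e-3            # (IV): phi(t)/t -> 1
print("properties (I)-(IV) hold at the sampled points")
```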

We now use $\phi(\cdot)$ to construct the smooth penalty function
$$f_{\beta,r}(x) = f(x) + r \sum_{i=1}^m \phi\left(\frac{\beta g_i(x)}{r}\right), \tag{2.1}$$
where $\beta \ge 1$ is a penalty parameter and $r > 0$ is a smoothing parameter.

By (VII), we see that as $r \to 0^+$, $f_{\beta,r}(x)$ decreasingly converges to $f_\beta(x)$, that is,
$$f_{\beta,r}(x) = f(x) + r \sum_{i=1}^m \phi\left(\frac{\beta g_i(x)}{r}\right) \;\downarrow\; f(x) + \beta \sum_{i=1}^m g_i(x)_+. \tag{2.2}$$
Therefore $f_{\beta,r}(x)$ smoothly approximates the $l_1$ exact penalty function, and decreasing $r$ improves the precision of the approximation. It is worth noting that the smooth function $\phi(\cdot)$ and the penalty function $f_{\beta,r}(\cdot)$ given in this paper substantively improve on the corresponding functions given in [9]. This gives $f_{\beta,r}(\cdot)$ better convergence properties (refer to (2.2) and Theorem 3.9).
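The monotone convergence $r\phi(t/r) \downarrow t_+$ behind (2.2) can be observed numerically; a small check (our own sketch) using $\phi(t) = (t + \sqrt{t^2+4})/2$:

```python
import math

def phi(t):
    # the second smoothing function from the list above
    return (t + math.sqrt(t * t + 4.0)) / 2.0

# r * phi(t/r) decreases to t_+ = max(0, t) as r -> 0+ (properties (VI)-(VII)).
for t in (-1.0, 0.0, 2.0):
    approxs = [r * phi(t / r) for r in (1.0, 0.1, 0.01, 0.001)]
    # monotonically decreasing as r shrinks ...
    assert all(a >= b for a, b in zip(approxs, approxs[1:]))
    # ... toward max(0, t)
    assert abs(approxs[-1] - max(0.0, t)) < 1e-2
    print(t, approxs)
```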

3. The Algorithm and Its Convergence

In this section we propose a penalty algorithm for (NP) based on computing a stationary point of $f_{\beta,r}(\cdot)$. We assume that for any $\beta \ge 1$ and $0 < r \le 1$, $f_{\beta,r}(\cdot)$ always has a stationary point.

Algorithm

Step 0. Choose $x^0 \in R^n$, $\beta_1 = 1$, $r_1 = 1$, $0 < \eta_1 < 1$, and $\eta_2 > 1$. Let $k = 1$.

Step 1. Find $x^k$ such that
$$\nabla f_{\beta_k, r_k}(x^k) = 0. \tag{3.1}$$

Step 2. Put $r_{k+1} = \eta_1 r_k$ and
$$\beta_{k+1} = \begin{cases} \beta_k & \text{if } x^k \in \Omega_0, \\ \eta_2 \beta_k & \text{otherwise.} \end{cases} \tag{3.2}$$

Step 3. Let $k = k + 1$ and return to Step 1.
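A minimal numerical sketch of Steps 0–3, under stated assumptions: we take $\phi(t) = (t + \sqrt{t^2+4})/2$ as the smoothing function, use plain gradient descent as the (unspecified) stationary-point solver in Step 1, and apply the algorithm to a small convex problem of our own choosing, $\min x_1^2 + x_2^2$ subject to $1 - x_1 - x_2 \le 0$.

```python
import math

def phi_prime(t):
    # derivative of phi(t) = (t + sqrt(t^2 + 4))/2
    return 0.5 * (1.0 + t / math.sqrt(t * t + 4.0))

def penalty_algorithm(iters=8, eta1=0.5, eta2=2.0):
    # Step 0: initial point and parameters
    x, beta, r = [0.0, 0.0], 1.0, 1.0
    for _ in range(iters):
        # Step 1: stationary point of f_{beta,r} by gradient descent;
        # step size is 1/L for a Lipschitz bound L on grad f_{beta,r}
        # specific to this problem (L <= 2 + beta^2/(2r)).
        step = 1.0 / (2.0 + beta * beta / (2.0 * r))
        for _ in range(20000):
            g = 1.0 - x[0] - x[1]                      # constraint value
            w = beta * phi_prime(beta * g / r)         # chain rule on r*phi(beta*g/r)
            grad = [2.0 * x[0] - w, 2.0 * x[1] - w]    # grad f + beta*phi'*grad g
            if max(abs(c) for c in grad) < 1e-9:
                break
            x = [x[0] - step * grad[0], x[1] - step * grad[1]]
        # Step 2: shrink r; enlarge beta only when x^k is infeasible
        r *= eta1
        if 1.0 - x[0] - x[1] > 0.0:
            beta *= eta2
        # Step 3: next iteration
    return x

sol = penalty_algorithm()
print(sol)  # close to the true solution (0.5, 0.5)
```

Consistent with Theorem 3.6 below, the iterates become feasible after the first few outer iterations and approach the KKT point as $r_k$ shrinks.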

Let $\{x^k\}$ be the iterate sequence generated by the algorithm. We shall use the following assumption:

$(A_1)$ the penalty function value sequence $\{f_{\beta_k, r_k}(x^k)\}$ is bounded.

Lemma 3.1. Suppose that assumption $(A_1)$ holds. Then for any $\varepsilon > 0$, there exists $k_0 \in N = \{1, 2, \ldots\}$ such that for all $k \ge k_0$,
$$x^k \in \Omega_\varepsilon. \tag{3.3}$$

Proof. Suppose to the contrary that there exist an $\varepsilon_0 > 0$ and an infinite subsequence $K \subset N$ such that for any $k \in K$,
$$x^k \notin \Omega_{\varepsilon_0}. \tag{3.4}$$
By the algorithm, we know that
$$\beta_k \to +\infty \quad (k \to \infty). \tag{3.5}$$
It follows from (3.4) that there exist a subsequence $K_0 \subset K$ and an index $i_0 \in I = \{1, \ldots, m\}$ such that for any $k \in K_0$,
$$g_{i_0}(x^k) > \varepsilon_0. \tag{3.6}$$
Thus, from the assumptions on $f(\cdot)$, the properties of $\phi(\cdot)$, (3.5), and (3.6), it follows that
$$f_{\beta_k, r_k}(x^k) = f(x^k) + r_k \sum_{i=1}^m \phi\left(\frac{\beta_k g_i(x^k)}{r_k}\right) \ge f(x^k) + r_k \phi\left(\frac{\beta_k \varepsilon_0}{r_k}\right) = f(x^k) + \frac{\phi(\beta_k \varepsilon_0 / r_k)}{\beta_k \varepsilon_0 / r_k}\, \beta_k \varepsilon_0 \to +\infty \quad (k \in K_0, \; k \to \infty). \tag{3.7}$$
This contradicts $(A_1)$.

Lemma 3.2. Suppose that assumption $(A_1)$ holds, and let $\bar{x}$ be any accumulation point of $\{x^k\}$. Then $\bar{x} \in \Omega_0$, that is, $\bar{x}$ is a feasible solution of (NP).

Proof. By Lemma 3.1, for any $\varepsilon > 0$ and every sufficiently large $k$, $x^k \in \Omega_\varepsilon$. Let $\bar{x}$ be an accumulation point of $\{x^k\}$; then there exists a subsequence $\{x^k\}_{k \in K}$ such that $x^k \to \bar{x}$ $(k \in K, \; k \to \infty)$. Therefore
$$\bar{x} \in \Omega_\varepsilon. \tag{3.8}$$
By the arbitrariness of $\varepsilon > 0$, we have $\bar{x} \in \Omega_0$.

Given $\bar{x} \in \Omega_0$, we denote $I(\bar{x}) = \{i \in I : g_i(\bar{x}) = 0\}$.

Definition 3.3 (see [12]). We say that $\bar{x} \in \Omega_0$ satisfies MFCQ if there exists an $h \in R^n$ such that
$$\nabla g_i(\bar{x})^T h < 0 \quad \text{for any } i \in I(\bar{x}). \tag{3.9}$$

In the following we propose a kind of generalized Mangasarian-Fromovitz constraint qualification (GMFCQ).

Let $K \subset N$ be a subsequence, and for a sequence $\{z^k\}_{k \in K}$ in $R^n$ define the two index sets
$$I_+(K) = \left\{ i \in I : \limsup_{k \in K, \, k \to \infty} g_i(z^k) \ge 0 \right\}, \qquad I_0(K) = \left\{ i \in I : \limsup_{k \in K, \, k \to \infty} g_i(z^k) < 0 \right\}. \tag{3.10}$$

Definition 3.4. We say that the sequence $\{z^k\}_{k \in K}$ satisfies GMFCQ if there exist a subsequence $K_0 \subset K$ and a vector $h \in R^n$ such that
$$\limsup_{k \in K_0, \, k \to \infty} \nabla g_i(z^k)^T h < 0 \quad \text{for any } i \in I_+(K_0). \tag{3.11}$$

Under some circumstances, the sequence $\{x^k\}$ may satisfy $\|x^k\| \to +\infty$ $(k \to \infty)$, as in the example at the end of this section. In this case MFCQ cannot be applied, but GMFCQ can. The following proposition shows that Definition 3.4 is a substantive generalization of Definition 3.3.

Proposition 3.5. Suppose that $\{z^k\}_{k \in K}$ satisfies
$$\lim_{k \in K, \, k \to \infty} z^k = \bar{z} \in \Omega_0. \tag{3.12}$$
If $\bar{z}$ satisfies MFCQ, then $\{z^k\}_{k \in K}$ satisfies GMFCQ.

Proof. By (3.12), we know that $\limsup_{k \in K, \, k \to \infty} g_i(z^k) \ge 0$ if and only if
$$\lim_{k \in K, \, k \to \infty} g_i(z^k) = g_i(\bar{z}) = 0. \tag{3.13}$$
Thus $I_+(K) = I(\bar{z})$. By the assumption, there exists an $h \in R^n$ such that $\nabla g_i(\bar{z})^T h < 0$ for any $i \in I(\bar{z})$; since each $\nabla g_i$ is continuous,
$$\limsup_{k \in K, \, k \to \infty} \nabla g_i(z^k)^T h = \nabla g_i(\bar{z})^T h < 0 \quad \text{for any } i \in I_+(K). \tag{3.14}$$
Hence $\{z^k\}_{k \in K}$ satisfies GMFCQ (with $K_0 = K$).

We need two more assumptions in the following:

$(A_2)$ the sequences $\{\nabla f(x^k)\}$ and $\{\nabla g_i(x^k)\}$, $i = 1, \ldots, m$, are all bounded;

$(A_3)$ any subsequence of $\{x^k\}$ satisfies GMFCQ.

Theorem 3.6. Suppose that assumptions $(A_1)$, $(A_2)$, and $(A_3)$ hold. Then:
(1) there exists a $k_0$ such that for any $k \ge k_0$,
$$x^k \in \Omega_0; \tag{3.15}$$
(2) any accumulation point of $\{x^k\}$ is a KKT point of (NP).

Proof. Suppose that (1) does not hold, that is, there exists a subsequence $K \subset N$ such that for any $k \in K$,
$$x^k \notin \Omega_0. \tag{3.16}$$
By the algorithm, we know that
$$\lim_{k \to \infty} \beta_k = +\infty. \tag{3.17}$$
From assumption $(A_3)$ and (3.16), it follows that there exist $K_0 \subset K$ and $h \in R^n$ such that
$$\limsup_{k \in K_0, \, k \to \infty} \nabla g_i(x^k)^T h < 0 \quad \text{for any } i \in I_+(K_0), \tag{3.18}$$
and, passing to a further subsequence if necessary, the index set
$$\bar{I}(K_0) = \{ i \in I : g_i(x^k) > 0 \text{ for any } k \in K_0 \} \ne \emptyset, \qquad \bar{I}(K_0) \subset I_+(K_0). \tag{3.19}$$
By (3.18) and the definition of $I_0(K_0)$, there exists a $\delta > 0$ such that for all sufficiently large $k \in K_0$,
$$\nabla g_i(x^k)^T h \le -\delta \quad \text{for any } i \in I_+(K_0), \tag{3.20}$$
$$g_i(x^k) \le -\delta \quad \text{for any } i \in I_0(K_0). \tag{3.21}$$
From the algorithm, we know that $x^k$ satisfies
$$\nabla f(x^k) + \sum_{i=1}^m \beta_k \phi'\left(\frac{\beta_k g_i(x^k)}{r_k}\right) \nabla g_i(x^k) = 0. \tag{3.22}$$
For $k \in K_0$, multiplying (3.22) by $h$ and dividing by $\beta_k$, we obtain
$$\frac{\nabla f(x^k)^T h}{\beta_k} + \sum_{i \in I_0(K_0)} \phi'\left(\frac{\beta_k g_i(x^k)}{r_k}\right) \nabla g_i(x^k)^T h + \sum_{i \in I_+(K_0)} \phi'\left(\frac{\beta_k g_i(x^k)}{r_k}\right) \nabla g_i(x^k)^T h = 0. \tag{3.23}$$
We now analyze the three terms on the left side of (3.23).
(a) By (3.17) and $(A_2)$,
$$\lim_{k \in K_0, \, k \to \infty} \frac{\nabla f(x^k)^T h}{\beta_k} = 0. \tag{3.24}$$
(b) By (3.21), for any $i \in I_0(K_0)$, we have
$$\lim_{k \in K_0, \, k \to \infty} \frac{\beta_k g_i(x^k)}{r_k} = -\infty. \tag{3.25}$$
From the properties of $\phi(\cdot)$ and $(A_2)$, the second term satisfies
$$\lim_{k \in K_0, \, k \to \infty} \sum_{i \in I_0(K_0)} \phi'\left(\frac{\beta_k g_i(x^k)}{r_k}\right) \nabla g_i(x^k)^T h = 0. \tag{3.26}$$
(c) From (3.19), (3.20), and the properties of $\phi(\cdot)$ (in particular, $\phi'$ is nonnegative and nondecreasing), it follows that
$$\sum_{i \in I_+(K_0)} \phi'\left(\frac{\beta_k g_i(x^k)}{r_k}\right) \nabla g_i(x^k)^T h \le -\delta \sum_{i \in \bar{I}(K_0)} \phi'\left(\frac{\beta_k g_i(x^k)}{r_k}\right) \le -\delta \, |\bar{I}(K_0)| \, \phi'(0), \tag{3.27}$$
where $|\bar{I}|$ denotes the number of elements in $\bar{I}$.
Now, letting $k \to \infty$, $k \in K_0$, and taking the limit on both sides of (3.23), we obtain from (a)–(c) that
$$-\delta \, |\bar{I}(K_0)| \, \phi'(0) \ge 0. \tag{3.28}$$
But by (3.19) and the properties of $\phi(\cdot)$, $\delta \, |\bar{I}(K_0)| \, \phi'(0) > 0$. This contradiction completes the proof of (1).
By (1), there exists a $k_0$ such that if $k \ge k_0$, then $x^k \in \Omega_0$. Thus by the algorithm, when $k \ge k_0$ we have
$$\beta_k = \beta_{k_0}. \tag{3.29}$$
Suppose that $\bar{x}$ is an accumulation point of $\{x^k\}$; then there exists a subsequence $\{x^k\}_{k \in K}$ such that
$$\lim_{k \in K, \, k \to \infty} x^k = \bar{x}. \tag{3.30}$$
By Lemma 3.2, $\bar{x}$ is a feasible point of (NP), that is, $\bar{x} \in \Omega_0$. Thus by (3.22), we obtain
$$\nabla f(x^k) + \sum_{i \in I \setminus I(\bar{x})} \beta_{k_0} \phi'\left(\frac{\beta_{k_0} g_i(x^k)}{r_k}\right) \nabla g_i(x^k) + \sum_{i \in I(\bar{x})} \beta_{k_0} \phi'\left(\frac{\beta_{k_0} g_i(x^k)}{r_k}\right) \nabla g_i(x^k) = 0. \tag{3.31}$$
In the second term of (3.31), because $i \in I \setminus I(\bar{x})$ implies $g_i(\bar{x}) < 0$, by (3.30) and the properties of $\phi(\cdot)$ we have
$$\lim_{k \in K, \, k \to \infty} \phi'\left(\frac{\beta_{k_0} g_i(x^k)}{r_k}\right) = 0. \tag{3.32}$$
In the third term of (3.31), from the properties of $\phi(\cdot)$, the sequences $\{\beta_{k_0} \phi'(\beta_{k_0} g_i(x^k)/r_k)\}$, $i \in I(\bar{x})$, are nonnegative and bounded. Thus, there exists a subsequence $K_0 \subset K$ such that
$$\lim_{k \in K_0, \, k \to \infty} \beta_{k_0} \phi'\left(\frac{\beta_{k_0} g_i(x^k)}{r_k}\right) = \lambda_i(\bar{x}) \ge 0 \quad \text{for any } i \in I(\bar{x}). \tag{3.33}$$
Finally, letting $k \to \infty$, $k \in K_0$, and taking the limit on both sides of (3.31), we obtain from (3.30), (3.32), and (3.33) that
$$\nabla f(\bar{x}) + \sum_{i \in I(\bar{x})} \lambda_i(\bar{x}) \nabla g_i(\bar{x}) = 0, \tag{3.34}$$
that is, $\bar{x}$ is a KKT point of (NP).

By Lemma 3.2, Proposition 3.5, and Theorem 3.6, we obtain the following conclusion.

Corollary 3.7. Suppose that $(A_1)$ holds, $\{x^k\}$ is bounded, and every accumulation point $\bar{x}$ of $\{x^k\}$ satisfies MFCQ. Then:
(1) there exists a $k_0$ such that for any $k \ge k_0$,
$$x^k \in \Omega_0; \tag{3.35}$$
(2) any accumulation point of $\{x^k\}$ is a KKT point of (NP).

When (NP) is a convex programming problem, that is, when the functions $f$ and $g_i$, $i \in I$, are all convex, the algorithm has better convergence properties.

Theorem 3.8. Suppose that (NP) is a convex programming problem. Then every accumulation point of $\{x^k\}$ is an optimal solution of (NP).

Proof. Since $f(\cdot)$ and $g_i(\cdot)$, $i \in I$, are convex and $\phi(\cdot)$ is convex and nondecreasing, for any $\beta > 0$ and $r > 0$ the function $f_{\beta,r}(\cdot)$ is convex. Thus $\nabla f_{\beta_k, r_k}(x^k) = 0$ is equivalent to
$$x^k \in \arg\min_{x \in R^n} f_{\beta_k, r_k}(x). \tag{3.36}$$
Therefore, by (3.36) and the properties of $\phi(\cdot)$, we have for any $x \in \Omega_0$,
$$f_{\beta_k, r_k}(x^k) = f(x^k) + r_k \sum_{i=1}^m \phi\left(\frac{\beta_k g_i(x^k)}{r_k}\right) \le f(x) + r_k \sum_{i=1}^m \phi\left(\frac{\beta_k g_i(x)}{r_k}\right) \le f(x) + r_k m \phi(0). \tag{3.37}$$
From (3.37), the arbitrariness of $x \in \Omega_0$, and the nonnegativity of $\phi(\cdot)$, it follows that
$$f(x^k) \le \inf_{x \in \Omega_0} f(x) + r_k m \phi(0). \tag{3.38}$$
Suppose that $\bar{x}$ is an accumulation point of $\{x^k\}$; then there exists a subsequence $K \subset N$ such that $\lim_{k \in K, \, k \to \infty} x^k = \bar{x}$. Thus, by (3.38), we have
$$f(\bar{x}) \le \inf_{x \in \Omega_0} f(x). \tag{3.39}$$
On the other hand, (3.37) implies that $(A_1)$ holds. Then from Lemma 3.2, we know that $\bar{x} \in \Omega_0$, so $\bar{x}$ is an optimal solution of (NP).

Theorem 3.9. Suppose that (NP) is a convex programming problem and that assumptions $(A_2)$ and $(A_3)$ hold. Then:
(1) there exists a $k_0$ such that for $k \ge k_0$, $\{f_{\beta_k, r_k}(x^k)\}$ decreases to $\inf_{x \in \Omega_0} f(x)$;
(2) $\lim_{k \to \infty} f(x^k) = \inf_{x \in \Omega_0} f(x)$.

Proof. Note that for convex (NP), $(A_1)$ holds by (3.37). By Theorem 3.6 there exists a $k_0$ such that $x^k \in \Omega_0$ when $k \ge k_0$. Therefore, from the algorithm, we have $\beta_k = \beta_{k_0}$ for any $k \ge k_0$. By (3.36) and property (VI) of $\phi(\cdot)$, when $k \ge k_0$,
$$f_{\beta_{k_0}, r_{k+1}}(x^{k+1}) \le f_{\beta_{k_0}, r_{k+1}}(x^k) = f(x^k) + r_{k+1} \sum_{i=1}^m \phi\left(\frac{\beta_{k_0} g_i(x^k)}{r_{k+1}}\right) \le f(x^k) + r_k \sum_{i=1}^m \phi\left(\frac{\beta_{k_0} g_i(x^k)}{r_k}\right) = f_{\beta_{k_0}, r_k}(x^k). \tag{3.40}$$
Since $x^k \in \Omega_0$ $(k \ge k_0)$, by (3.37) and the properties of $\phi(\cdot)$, we have for $k \ge k_0$ that
$$\inf_{x \in \Omega_0} f(x) \le f(x^k) \le f_{\beta_{k_0}, r_k}(x^k) \le \inf_{x \in \Omega_0} f(x) + r_k m \phi(0). \tag{3.41}$$
Combining (3.40) with (3.41), we obtain the conclusion.

Example 3.10. Consider
$$\min_{x \in \Omega_0} f(x) = \frac14 (x_1 - x_2)^2, \qquad \Omega_0 = \{x \in R^2 : g(x) = x_1 - x_2 \le 0\}.$$
This is a convex problem. Denote its optimal solution set by $\Omega_0^* = \{x \in \Omega_0 : x_1 - x_2 = 0\}$ and let $\phi(t) = (t + \sqrt{t^2 + 4})/2$. We consider $f_{\beta,r}(\cdot)$, that is,
$$f_{\beta,r}(x) = \frac14 (x_1 - x_2)^2 + \frac{r}{2} \left( \sqrt{\frac{\beta^2}{r^2}(x_1 - x_2)^2 + 4} + \frac{\beta}{r}(x_1 - x_2) \right). \tag{3.42}$$
Because $f_{\beta,r}(\cdot)$ is convex, $\nabla f_{\beta,r}(x) = 0$ if and only if $x \in \arg\min_{x \in R^2} f_{\beta,r}(x)$. By the algorithm, we can obtain stationary points of the form
$$x^k = (k, k + \alpha_k)^T, \quad k = 0, 1, \ldots, \tag{3.43}$$
where $\alpha_k > 0$ and $\lim_{k \to \infty} \alpha_k = 0$. Here $\{x^k\}$ has no accumulation point, since $\|x^k\| \to +\infty$ $(k \to \infty)$. Thus MFCQ is not appropriate as a constraint qualification in the convergence analysis of this example. But for any $k \in N$, we have $\nabla f(x^k) = (-\frac12 \alpha_k, \frac12 \alpha_k)^T$ and $\nabla g(x^k) = (1, -1)^T$, which implies that assumption $(A_2)$ is satisfied. We can also check that $\{x^k\}$ satisfies GMFCQ. In fact, choosing $h = (-1, 1)^T$, we have
$$\lim_{k \to \infty} g(x^k) = \lim_{k \to \infty} (-\alpha_k) = 0, \qquad \lim_{k \to \infty} \nabla g(x^k)^T h = (1, -1)(-1, 1)^T = -2 < 0. \tag{3.44}$$
On the other hand, by the algorithm we have $x^k \in \Omega_0$ and $\beta_k = 1$ for all $k$. Letting $k \to \infty$, we get $f_{\beta_k, r_k}(x^k) \to 0$ and $f(x^k) \to 0$. So the algorithm generates a feasible solution sequence that is also asymptotically optimal.
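The stationary points in (3.43) can be computed numerically: since $f_{\beta,r}$ in (3.42) depends on $x$ only through $u = x_1 - x_2$, the stationarity condition reduces (with $\beta_k = 1$) to $u/2 + \phi'(u/r) = 0$, which the following sketch (our own) solves by bisection for $r_k = 0.5^k$.

```python
import math

def phi_prime(t):
    # derivative of phi(t) = (t + sqrt(t^2 + 4))/2
    return 0.5 * (1.0 + t / math.sqrt(t * t + 4.0))

def stationary_u(r):
    # solve u/2 + phi'(u/r) = 0 for u < 0 by bisection
    h = lambda u: 0.5 * u + phi_prime(u / r)
    lo, hi = -4.0, 0.0                  # h(lo) < 0 < h(hi) for r <= 1/2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

for k in range(1, 12):
    u = stationary_u(0.5 ** k)          # u = -alpha_k < 0, so x^k is feasible
    print(k, -u, 0.25 * u * u)          # alpha_k -> 0 and f(x^k) -> 0
```

The printed values show $\alpha_k \to 0$ and $f(x^k) \to 0$, in agreement with the discussion above.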

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants 10971118, 10701047, and 10901096). The authors would like to thank the editor and the anonymous referees for their valuable suggestions and comments.

References

1. W. I. Zangwill, “Non-linear programming via penalty functions,” Management Science, vol. 13, pp. 344–358, 1967.
2. A. Auslender, R. Cominetti, and M. Haddou, “Asymptotic analysis for penalty and barrier methods in convex and linear programming,” Mathematics of Operations Research, vol. 22, no. 1, pp. 43–62, 1997.
3. A. Ben-Tal and M. Teboulle, “A smoothing technique for non-differentiable optimization problems,” in Optimization, vol. 1405 of Lecture Notes in Mathematics, pp. 1–11, Springer, Berlin, Germany, 1989.
4. C. H. Chen and O. L. Mangasarian, “Smoothing methods for convex inequalities and linear complementarity problems,” Mathematical Programming, vol. 71, no. 1, pp. 51–69, 1995.
5. C. Chen and O. L. Mangasarian, “A class of smoothing functions for nonlinear and mixed complementarity problems,” Computational Optimization and Applications, vol. 5, no. 2, pp. 97–138, 1996.
6. M. Herty, A. Klar, A. K. Singh, and P. Spellucci, “Smoothed penalty algorithms for optimization of nonlinear models,” Computational Optimization and Applications, vol. 37, no. 2, pp. 157–176, 2007.
7. M. Ç. Pinar and S. A. Zenios, “On smoothing exact penalty functions for convex constrained optimization,” SIAM Journal on Optimization, vol. 4, no. 3, pp. 486–511, 1994.
8. I. Zang, “A smoothing-out technique for min-max optimization,” Mathematical Programming, vol. 19, no. 1, pp. 61–77, 1980.
9. C. C. Gonzaga and R. A. Castillo, “A nonlinear programming algorithm based on non-coercive penalty functions,” Mathematical Programming, vol. 96, no. 1, pp. 87–101, 2003.
10. Z. Y. Wu, F. S. Bai, X. Q. Yang, and L. S. Zhang, “An exact lower order penalty function and its smoothing in nonlinear programming,” Optimization, vol. 53, no. 1, pp. 51–68, 2004.
11. Z. Meng, C. Dang, and X. Yang, “On the smoothing of the square-root exact penalty function for inequality constrained optimization,” Computational Optimization and Applications, vol. 35, no. 3, pp. 375–398, 2006.
12. O. L. Mangasarian and S. Fromovitz, “The Fritz John necessary optimality conditions in the presence of equality and inequality constraints,” Journal of Mathematical Analysis and Applications, vol. 17, pp. 37–47, 1967.