Journal of Applied Mathematics, Volume 2012 (2012), Article ID 620949, 12 pages. http://dx.doi.org/10.1155/2012/620949
Research Article

## On the Convergence of a Smooth Penalty Algorithm without Computing Global Solutions

1School of Science, Shandong University of Technology, Zibo 255049, China
2Institute of Operations Research, Qufu Normal University, Qufu 273165, China

Received 18 September 2011; Accepted 9 November 2011

Copyright © 2012 Bingzhuang Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider a smooth penalty algorithm for solving nonconvex optimization problems, based on a family of smooth functions that approximate the usual exact penalty function. At each iteration the algorithm only requires a stationary point of the smooth penalty function, so the difficulty of computing a global solution is avoided. Under a generalized Mangasarian-Fromovitz constraint qualification (GMFCQ) that is weaker and more general than the traditional MFCQ, we prove that the sequence generated by this algorithm enters the feasible set of the primal problem after finitely many iterations, and that if the sequence of iteration points has an accumulation point, then it must be a Karush-Kuhn-Tucker (KKT) point. Furthermore, we obtain stronger convergence results for convex optimization problems.

#### 1. Introduction

Consider the following nonconvex optimization problem where , are all continuously differentiable functions. Without loss of generality, we suppose throughout this paper that , because otherwise we can substitute by . Let be the relaxed feasible set for . Then is the feasible set of (NP).

The classical 𝑙1 exact penalty function [1] is where is a penalty parameter, and
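Written out in its standard form for a problem with inequality constraints $g_i(x) \le 0$, $i = 1, \dots, m$, the classical $l_1$ penalty of [1] reads:

```latex
F(x,\rho) \;=\; f(x) \;+\; \rho \sum_{i=1}^{m} \max\{0,\, g_i(x)\},
\qquad \rho > 0 .
```

Each constraint violation is charged linearly, which makes the penalty exact but nonsmooth at points where some $g_i(x) = 0$.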

The obvious advantage of traditional exact penalty functions, such as the 𝑙1 exact penalty function, is that when the penalty parameter is sufficiently large, their global optimal solutions exist and are optimal solutions of (NP). But they also have an obvious disadvantage, namely their nonsmoothness, which prevents the use of many efficient unconstrained optimization algorithms (such as gradient-type or Newton-type algorithms). Therefore the study of smooth approximations of exact penalty functions has attracted broad interest [2–8]. In recent years, several smooth penalty methods based on smooth approximations of the exact penalty function have been given to solve (NP). For example, [9] gives a smooth penalty method based on approximating the exact penalty function. Under the assumptions that the optimal solution satisfies MFCQ and the iterate sequence is bounded, it is proved there that the iterative sequence enters the feasible set and every accumulation point is an optimal solution of (NP). In [10, 11], smooth penalty methods based on approximating low-order exact penalty functions are considered. Reference [10] proves results similar to those of [9] under very strict conditions (some of which are not easy to verify). The conditions for convergence of the smooth penalty algorithm in [11] are weaker than those in [10], but [11] only proves that an accumulation point of the iterate sequence is a Fritz-John (FJ) point of (NP).

In the algorithms given in [9–11], a global optimal solution of the smooth penalty problem is needed at each iteration. As is well known, it is very difficult to find a global optimal point of a nonconvex function. To avoid this difficulty, in this paper we give a smooth penalty algorithm based on a smooth approximation of the 𝑙1 exact penalty function. The feature of this algorithm is that only a stationary point of the penalty function needs to be computed at each iteration. To prove the convergence of this algorithm, we first establish a generalized Mangasarian-Fromovitz constraint qualification (GMFCQ) that is weaker and more general than the traditional MFCQ. Under this condition, we prove that the iterative sequence of the algorithm enters the feasible set of (NP). Moreover, we prove that if the iterative sequence has accumulation points, then each of them is a KKT point of (NP). Finally, we apply this algorithm to convex optimization problems and obtain stronger convergence results.

The rest of this paper is organized as follows. In the next section, we give a family of smooth penalty functions. In Section 3, based on the smooth penalty functions given in Section 2, we propose an algorithm for (NP) and analyze its convergence under the GMFCQ condition. An example that satisfies GMFCQ is given at the end of that section.

#### 2. Smooth Approximation to 𝑙1 Exact Penalty Function

In this section we give a family of penalty functions, which decreasingly approximate the 𝑙1 exact penalty function. First we consider a class of smooth functions with the following properties:
(I) is a continuously differentiable convex function with ;
(II) , where is a nonnegative constant;
(III) , for any ;
(IV) .
From (I)–(IV), it follows that satisfies:
(V) , for any , and ;
(VI) increases with respect to , for any ;
(VII) , for any .

The following functions are often used in the smooth approximation of the 𝑙1 exact penalty function and satisfy properties (I)–(IV):
(1) .
(2) .
(3) .
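As a concrete illustration, three smoothings of max{0, t} that are standard in this literature [2–8] can be sketched as follows. The specific instances below (exponential, CHKS, and piecewise-quadratic smoothing) are representative examples, not necessarily the paper's exact choices:

```python
import math

def p_exp(t, eps):
    """Exponential (Chen-Mangasarian) smoothing: eps*log(1 + exp(t/eps)),
    written in a numerically stable form to avoid overflow for large t/eps."""
    z = t / eps
    if z > 0:
        return eps * (z + math.log1p(math.exp(-z)))
    return eps * math.log1p(math.exp(z))

def p_chks(t, eps):
    """CHKS smoothing: (sqrt(t^2 + 4*eps^2) + t) / 2."""
    return (math.sqrt(t * t + 4.0 * eps * eps) + t) / 2.0

def p_quad(t, eps):
    """Piecewise-quadratic smoothing: agrees with max{0, t}
    exactly for t <= 0 and t >= eps."""
    if t <= 0.0:
        return 0.0
    if t <= eps:
        return t * t / (2.0 * eps)
    return t - eps / 2.0
```

Each of these is convex and continuously differentiable in t and converges to max{0, t} as eps decreases to 0.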

We now use to construct the smooth penalty function where is a penalty parameter.

By (VII), we easily see that decreasingly converges to as , that is, Therefore smoothly approximates the 𝑙1 exact penalty function, with decreasing to improve the precision of the approximation. It is worth noting that the smooth function and penalty function given in this paper are a substantive improvement over the corresponding functions given in [9], which leads to better convergence properties (see (2.2) and Theorem 3.9).
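This monotone approximation can be checked numerically on a one-variable toy instance. The smoothing function below is an exponential-type smoothing of max{0, t} chosen purely for illustration; the penalty construction F(x, rho, eps) = f(x) + rho * sum_i p(g_i(x), eps) follows the pattern described in this section:

```python
import math

def p(t, eps):
    # Illustrative exponential smoothing of max{0, t} (stable form).
    z = t / eps
    if z > 0:
        return eps * (z + math.log1p(math.exp(-z)))
    return eps * math.log1p(math.exp(z))

def smooth_penalty(f, gs, x, rho, eps):
    """F(x, rho, eps) = f(x) + rho * sum_i p(g_i(x), eps)."""
    return f(x) + rho * sum(p(g(x), eps) for g in gs)

def l1_penalty(f, gs, x, rho):
    """The exact l1 penalty that F approximates as eps -> 0."""
    return f(x) + rho * sum(max(0.0, g(x)) for g in gs)

# Toy instance: f(x) = x^2 with one constraint g(x) = 1 - x <= 0.
f = lambda x: x * x
gs = [lambda x: 1.0 - x]
# F(0, 10, eps) decreases toward the exact penalty value 10 as eps -> 0.
values = [smooth_penalty(f, gs, 0.0, 10.0, eps) for eps in (1.0, 0.1, 0.01)]
```

The computed values decrease monotonically toward l1_penalty(f, gs, 0.0, 10.0), mirroring the decreasing convergence stated in (2.2).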

#### 3. The Algorithm and Its Convergence

We propose a penalty algorithm for (NP) in this section based on computing a stationary point of . We assume that for any and , always has a stationary point.

Algorithm
Step 0. Given , and . Let .
Step 1. Find such that
Step 2. Put ,
Step 3. Let and return to Step 1.
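A minimal runnable sketch of an algorithm of this shape, on a one-variable toy problem (minimize x^2 subject to 1 - x <= 0, with optimal solution x* = 1). The exponential smoothing, the inner gradient-descent solver, and the halving updates for eps and the stationarity tolerance are all illustrative assumptions, not the paper's specification:

```python
import math

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0.0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (x* = 1).
# With the smoothing p(t, eps) = eps*log(1 + exp(t/eps)):
#   F(x, rho, eps)  = x^2 + rho * p(1 - x, eps)
#   F'(x, rho, eps) = 2*x - rho * sigmoid((1 - x) / eps)

def stationary_point(x, rho, eps, tol):
    """Step 1: gradient descent until |F'(x)| <= tol.  The step size is
    1/L, where L bounds the Lipschitz constant of F'."""
    step = 1.0 / (2.0 + rho / (4.0 * eps))
    for _ in range(500000):
        grad = 2.0 * x - rho * sigmoid((1.0 - x) / eps)
        if abs(grad) <= tol:
            break
        x -= step * grad
    return x

def penalty_algorithm(x0=0.0, rho=10.0, eps=1.0, tol=1e-2, outer_iters=8):
    x = x0
    for _ in range(outer_iters):
        x = stationary_point(x, rho, eps, tol)  # Step 1: approximate stationarity
        eps *= 0.5   # Steps 2-3: tighten the smoothing ...
        tol *= 0.5   # ... and the stationarity tolerance, then repeat
    return x
```

On this instance the later iterates are feasible (x >= 1) and approach x* = 1, consistent with the behavior proved below for the convex case.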

Let be the iterative sequence generated by the algorithm. We shall use the following assumption: the penalty function value sequence is bounded.

Lemma 3.1. Suppose that the assumption holds. Then for any , there exists such that for ,

Proof. Suppose to the contrary that there exist an and an infinite sequence such that for any , By the algorithm, we know that It follows from (3.4) that there exist a subsequence and an index such that for any , Thus, from the assumptions on , the properties of , (3.5), and (3.6), it follows that This contradicts .

Lemma 3.2. Suppose that the assumption holds and let be any accumulation point of . Then , that is, is a feasible solution of (NP).

Proof. By Lemma 3.1, we obtain that for any and every sufficiently large . Let be an accumulation point of ; then there exists a subsequence such that . Therefore By the arbitrariness of , we have that .

Given , we denote that .

Definition 3.3 (see [12]). We say that satisfies MFCQ if there exists a such that
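For a feasible point $\bar{x}$ of a purely inequality-constrained problem, the standard statement of MFCQ (cf. [12]) is:

```latex
\exists\, d \in \mathbb{R}^{n} \ \text{such that} \quad
\nabla g_i(\bar{x})^{\mathsf{T}} d < 0
\quad \text{for all } i \in I(\bar{x}) := \{\, i : g_i(\bar{x}) = 0 \,\},
```

that is, there is a single direction along which every active constraint strictly decreases to first order.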

In the following we propose a kind of generalized Mangasarian-Fromovitz constraint qualification (GMFCQ).

Let be a subsequence, and for a sequence in denote two index sets as

Definition 3.4. We say that the sequence satisfies GMFCQ, if there exist a subsequence and a vector such that

Under some circumstances, the sequence may satisfy , as can be seen from the example in the last part of this section. In this case MFCQ cannot be applied, but GMFCQ can. The following proposition shows that Definition 3.4 is a substantive generalization of Definition 3.3.

Proposition 3.5. Suppose that satisfies If satisfies MFCQ, then satisfies GMFCQ.

Proof. By (3.12), we know that if and only if Thus, . By the assumption, there exists a such that

We need two assumptions in the following: the sequences and are both bounded; any subsequence of satisfies GMFCQ.

Theorem 3.6. Suppose that the assumptions , , and hold. Then:
(1) there exists a such that for any , ;
(2) any accumulation point of is a KKT point of (NP).

Proof. If (1) does not hold, then there exists a subsequence such that for any , it holds that By the algorithm, we know that From the assumption and (3.16), it follows that there exist and such that By (3.18) and the definition of , there exists a such that for all , From the algorithm, we know that satisfies Let ; from (3.22) we obtain that We now analyze the three terms on the left side of (3.23).
(a) By (3.17) and ,
(b) By (3.21), for any , we have From the properties of and , we have that the second term satisfies
(c) From (3.19), (3.20), and the properties of , it follows that where denotes the number of elements in .
Now, letting and taking the limit on both sides of (3.23), we obtain from (a)–(c) that But by (3.19) and the properties of . This contradiction completes the proof of (1).
By (1) we know that there exists a such that if , then . Thus by the algorithm, when , we have that Suppose that is an accumulation point of ; then there exists a subsequence such that By Lemma 3.2, is a feasible point of (NP), that is, . Thus by (3.22), we obtain that For the second term of (3.31), since , by (3.30) and the properties of we have For the third term of (3.31), from the properties of , the sequence is nonnegative and bounded. Thus, there exists a subsequence such that Finally, letting and taking the limit on both sides of (3.31), we obtain from (3.30), (3.32), and (3.33) that

By Lemma 3.2, Proposition 3.5, and Theorem 3.6, we obtain the following conclusion.

Corollary 3.7. Suppose that holds, is bounded, and any accumulation point of satisfies MFCQ. Then:
(1) there exists a such that for any , ;
(2) any accumulation point of is a KKT point of (NP).

When (NP) is a convex programming problem, that is, when the functions and of (NP) are all convex, the algorithm has stronger convergence properties.

Theorem 3.8. Suppose that (NP) is a convex programming problem. Then every accumulation point of is an optimal solution of (NP).

Proof. Since are convex and is increasing, for any and , is convex. Thus is equivalent to Therefore by (3.36) and the properties of , we have for any , From (3.37), the arbitrariness of , and the nonnegativity of , it follows that Suppose that is an accumulation point of ; then there exists a subsequence such that . Thus, by (3.38), we have On the other hand, (3.37) implies that holds. Then from Lemma 3.2, we know .

Theorem 3.9. Suppose that (NP) is a convex programming problem and that the assumptions , hold. Then:
(1) there exists a such that for any , decreases to ;
(2) .

Proof. Note that since (NP) is convex, holds. By Theorem 3.6 there exists a such that when . Therefore from the algorithm, we have for any . By (3.36) and property (VI) of , when , Noticing that , by (3.37) and the properties of , we have for that Combining (3.40) with (3.41), we obtain the conclusion.

Example 3.10. Consider that .
This is a convex problem. Denote its optimal solution by and let . We consider , that is, Because is convex, if and only if . By the algorithm, we get the stationary points where and . Here has no accumulation point, that is, . Thus MFCQ is not appropriate as a constraint qualification in the convergence analysis of this example. But for any , we have , , which implies that assumption is satisfied. We can also check that satisfies GMFCQ. In fact, choosing , we have On the other hand, by the algorithm, we have and for all . Letting , we get and . So the algorithm yields a feasible solution sequence that is also optimal.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants 10971118, 10701047, and 10901096). The authors thank the editor and the anonymous referees for their valuable suggestions and comments.

#### References

1. W. I. Zangwill, “Non-linear programming via penalty functions,” Management Science, vol. 13, pp. 344–358, 1967.
2. A. Auslender, R. Cominetti, and M. Haddou, “Asymptotic analysis for penalty and barrier methods in convex and linear programming,” Mathematics of Operations Research, vol. 22, no. 1, pp. 43–62, 1997.
3. A. Ben-Tal and M. Teboulle, “A smoothing technique for non-differentiable optimization problems,” in Optimization, vol. 1405 of Lecture Notes in Mathematics, pp. 1–11, Springer, Berlin, Germany, 1989.
4. C. H. Chen and O. L. Mangasarian, “Smoothing methods for convex inequalities and linear complementarity problems,” Mathematical Programming, vol. 71, no. 1, pp. 51–69, 1995.
5. C. Chen and O. L. Mangasarian, “A class of smoothing functions for nonlinear and mixed complementarity problems,” Computational Optimization and Applications, vol. 5, no. 2, pp. 97–138, 1996.
6. M. Herty, A. Klar, A. K. Singh, and P. Spellucci, “Smoothed penalty algorithms for optimization of nonlinear models,” Computational Optimization and Applications, vol. 37, no. 2, pp. 157–176, 2007.
7. M. Ç. Pinar and S. A. Zenios, “On smoothing exact penalty functions for convex constrained optimization,” SIAM Journal on Optimization, vol. 4, no. 3, pp. 486–511, 1994.
8. I. Zang, “A smoothing-out technique for min-max optimization,” Mathematical Programming, vol. 19, no. 1, pp. 61–77, 1980.
9. C. C. Gonzaga and R. A. Castillo, “A nonlinear programming algorithm based on non-coercive penalty functions,” Mathematical Programming, vol. 96, no. 1, pp. 87–101, 2003.
10. Z. Y. Wu, F. S. Bai, X. Q. Yang, and L. S. Zhang, “An exact lower order penalty function and its smoothing in nonlinear programming,” Optimization, vol. 53, no. 1, pp. 51–68, 2004.
11. Z. Meng, C. Dang, and X. Yang, “On the smoothing of the square-root exact penalty function for inequality constrained optimization,” Computational Optimization and Applications, vol. 35, no. 3, pp. 375–398, 2006.
12. O. L. Mangasarian and S. Fromovitz, “The Fritz John necessary optimality conditions in the presence of equality and inequality constraints,” Journal of Mathematical Analysis and Applications, vol. 17, pp. 37–47, 1967.