Journal of Applied Mathematics
Volume 2013 (2013), Article ID 568316, 7 pages
http://dx.doi.org/10.1155/2013/568316
Research Article

Smoothing Approximation to the Square-Order Exact Penalty Functions for Constrained Optimization
Shujun Lian and Jinli Han

College of Operations and Management, Qufu Normal University, Rizhao, Shandong 276826, China

Received 13 May 2013; Accepted 18 September 2013

Academic Editor: Zhihua Zhang

Copyright © 2013 Shujun Lian and Jinli Han. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A method is proposed to smooth the square-order exact penalty function for inequality constrained optimization. It is shown that, under some conditions, an approximately optimal solution of the original problem can be obtained by searching for an approximately optimal solution of the smoothed penalty problem. An algorithm based on the smoothed penalty function is given and is shown to be convergent under mild conditions. Two numerical examples suggest that the algorithm is efficient.

1. Introduction

Consider the following nonlinear constrained optimization problem:

$(P)\qquad \min f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \; i = 1, \ldots, m,$

where $f:\mathbb{R}^n \to \mathbb{R}$ and $g_i:\mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, m$, are twice continuously differentiable functions. Let $G_0 = \{x \in \mathbb{R}^n \mid g_i(x) \le 0, \; i = 1, \ldots, m\}$ denote the feasible set of $(P)$.

To solve $(P)$, many penalty function methods have been proposed in the literature. One of the popular penalty functions is

$F_q(x) = f(x) + q \sum_{i=1}^{m} \left(\max\{0, g_i(x)\}\right)^2,$

where $q > 0$ is the penalty parameter. Obviously, $F_q$ is continuously differentiable, but it is not an exact penalty function. A penalty function is called exact if, whenever the penalty parameter is large enough, each minimum of the penalty problem is a minimum of the original problem and each minimum of the original problem is a minimum of the penalty problem.
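The inexactness of the quadratic penalty can be seen on a toy problem. The following sketch (an illustration, not the paper's code) minimizes $f(x) + q(\max\{0, g(x)\})^2$ for $\min (x-2)^2$ s.t. $x - 1 \le 0$: the analytic penalty minimizer is $(2+q)/(1+q)$, which only tends to the constrained solution $x = 1$ as $q \to \infty$.

```python
# Toy illustration of the classical quadratic penalty
#   f(x) + q * max(0, g(x))**2  for  min (x - 2)^2  s.t.  x - 1 <= 0.
# The true solution is x = 1; the penalty minimizer is (2 + q)/(1 + q),
# which only approaches 1 as q -> infinity, so the penalty is not exact.

def penalty(x, q):
    return (x - 2.0) ** 2 + q * max(0.0, x - 1.0) ** 2

def minimize(q, x=0.0, step=1e-3, iters=20000, h=1e-6):
    # crude gradient descent with a finite-difference gradient
    for _ in range(iters):
        grad = (penalty(x + h, q) - penalty(x - h, q)) / (2.0 * h)
        x -= step * grad
    return x

for q in (1.0, 10.0, 100.0):
    print(q, minimize(q))  # drifts toward 1 as q grows
```

Even at $q = 100$ the minimizer remains slightly infeasible, which is the practical motivation for exact penalties.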

In Zangwill [1], the classical $l_1$ exact penalty function is defined as follows:

$F_1(x) = f(x) + q \sum_{i=1}^{m} \max\{0, g_i(x)\}.$

After Zangwill's development, exact penalty functions have attracted considerable attention (see, e.g., [2–6]). It is known from the theory of constrained optimization that this penalty function is a better candidate for penalization. However, it is not a smooth function, and it causes numerical instability in its implementation when the value of the penalty parameter becomes large. Several methods for smoothing the exact penalty function have been developed (see, e.g., [7–14]).
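The exactness of the $l_1$ penalty, in contrast to the quadratic one, can be checked on the same toy problem $\min (x-2)^2$ s.t. $x - 1 \le 0$, whose KKT multiplier is $2$. The sketch below (illustrative, not from the paper) locates the penalty minimizer by a dense grid search, which is robust at the kink where derivative-based methods struggle.

```python
# Toy illustration of Zangwill's exact penalty
#   f(x) + q * max(0, g(x))  for  min (x - 2)^2  s.t.  x - 1 <= 0.
# The KKT multiplier here is 2, so for any finite q > 2 the penalty
# minimizer is exactly the constrained solution x = 1; the price is
# that the penalty function is nonsmooth at x = 1.

def l1_penalty(x, q):
    return (x - 2.0) ** 2 + q * max(0.0, x - 1.0)

def argmin_on_grid(q, lo=-1.0, hi=3.0, n=400001):
    # dense grid search: robust to the kink where gradients fail
    step = (hi - lo) / (n - 1)
    xs = (lo + i * step for i in range(n))
    return min(xs, key=lambda x: l1_penalty(x, q))
```

For $q = 5 > 2$, `argmin_on_grid(5.0)` returns $x = 1$ up to the grid spacing, while for $q = 1 < 2$ the minimizer $x = 1.5$ is infeasible: the penalty is exact only above the multiplier threshold.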

In [15, 16], the square-order penalty function

$F_q^{1/2}(x) = f(x) + q \sum_{i=1}^{m} \left(\max\{0, g_i(x)\}\right)^{1/2}$

has been introduced and investigated. This penalty function is exact but not smooth. Its smoothing has been investigated in [15, 16], so that it can be applied to solve the problem via a gradient-type or a Newton-type method.

In this paper, a new smoothing function for the square-order penalty function is investigated. The rest of this paper is organized as follows. In Section 2, a new smoothing function for the square-order penalty function is introduced, and some fundamental properties of the smoothing function are discussed. In Section 3, an algorithm based on the smoothed penalty function is presented to compute an approximate solution to $(P)$ and is shown to be convergent. In Section 4, two numerical examples are given to show the applicability of the algorithm. In Section 5, we conclude the paper with some remarks.

2. Smoothing Exact Lower Order Penalty Function

Consider the following lower order penalty problem:

$(P_q)\qquad \min_{x} F_q^{1/2}(x) = f(x) + q \sum_{i=1}^{m} \left(\max\{0, g_i(x)\}\right)^{1/2}.$

In this paper, we say that a pair $(x^*, \lambda^*)$ satisfies the KKT condition if

$\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla g_i(x^*) = 0, \quad \lambda_i^* g_i(x^*) = 0, \quad \lambda_i^* \ge 0, \quad g_i(x^*) \le 0, \quad i = 1, \ldots, m,$

and that the pair $(x^*, \lambda^*)$ satisfies the second-order sufficiency condition [17, page 169] if

$d^{T} \nabla_{xx}^{2} L(x^*, \lambda^*)\, d > 0 \quad \text{for all } d \in \Omega(x^*), \; d \ne 0,$

where $L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x)$ and $\Omega(x^*)$ is the cone of directions $d$ satisfying $\nabla g_i(x^*)^{T} d = 0$ for the active constraints with $\lambda_i^* > 0$ and $\nabla g_i(x^*)^{T} d \le 0$ for the remaining active constraints. In order to establish the exact penalization, we need the following assumptions.

Assumption 1. $f(x)$ satisfies the following coercive condition: $\lim_{\|x\| \to +\infty} f(x) = +\infty$.

Under Assumption 1, there exists a box $X$ such that $G^* \subset \operatorname{int}(X)$, where $G^*$ is the set of global minima of problem $(P)$ and $\operatorname{int}(X)$ denotes the interior of the set $X$. Consider the following problem:

$(P')\qquad \min f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \; i = 1, \ldots, m, \; x \in X.$

Let $\bar{G}^*$ denote the set of global minima of problem $(P')$. Then $\bar{G}^* = G^*$.

Assumption 2. The set $G^*$ is a finite set.

Then we consider the penalty problem of the form

$\min_{x \in X} F_q^{1/2}(x).$

Let $p(t) = (\max\{0, t\})^{1/2}$, so that $F_q^{1/2}(x) = f(x) + q \sum_{i=1}^{m} p(g_i(x))$. For any $\epsilon > 0$, let $p_\epsilon(t)$ be the smoothing function of $p(t)$. It is easy to see that $p_\epsilon(t)$ is continuously differentiable on $\mathbb{R}$. Furthermore, we can obtain that $p_\epsilon(t) \to p(t)$ as $\epsilon \to 0^+$.

Figure 1 shows the behavior of $p(t)$ (represented by the solid line) and of its smoothing approximations $p_\epsilon(t)$ for several values of $\epsilon$ (represented by the line with plus signs, the dash-dot line, and the dashed line).

Figure 1: The behavior of $p(t)$ and $p_\epsilon(t)$.

Let $F_{q,\epsilon}(x) = f(x) + q \sum_{i=1}^{m} p_\epsilon(g_i(x))$. Then $F_{q,\epsilon}(x)$ is continuously differentiable on $\mathbb{R}^n$. Consider the following smoothed optimization problem:

$(P_{q,\epsilon})\qquad \min_{x \in X} F_{q,\epsilon}(x).$

Lemma 3. For any $x \in X$, $q > 0$, and $\epsilon > 0$, the difference $F_q^{1/2}(x) - F_{q,\epsilon}(x)$ is nonnegative and tends to $0$ uniformly on $X$ as $\epsilon \to 0^+$.

Proof. Note that
When , we have
Then

As a direct result of Lemma 3, we have the following result.

Theorem 4. Let $\{\epsilon_j\}$ be a sequence of positive numbers with $\epsilon_j \to 0$ as $j \to \infty$, and assume that $x^j$ is a solution to $(P_{q,\epsilon_j})$ for some $q > 0$. Let $\bar{x}$ be an accumulation point of the sequence $\{x^j\}$. Then $\bar{x}$ is an optimal solution to $\min_{x \in X} F_q^{1/2}(x)$.

Proof. Because $x^j$ is a solution to $(P_{q,\epsilon_j})$, we have $F_{q,\epsilon_j}(x^j) \le F_{q,\epsilon_j}(x)$ for all $x \in X$. By Lemma 3, letting $j \to \infty$, it follows that $F_q^{1/2}(\bar{x}) \le F_q^{1/2}(x)$ for all $x \in X$.
We complete the proof.

Theorem 5. Let $x^*$ be an optimal solution of problem $\min_{x \in X} F_q^{1/2}(x)$ and $\bar{x}$ an optimal solution of problem $(P_{q,\epsilon})$ for some $q > 0$ and $\epsilon > 0$. Then the optimal values $F_q^{1/2}(x^*)$ and $F_{q,\epsilon}(\bar{x})$ differ at most by the error bound of Lemma 3. If both $x^*$ and $\bar{x}$ are feasible to $(P)$, then the same estimate holds for the objective values $f(x^*)$ and $f(\bar{x})$.

Proof. By Lemma 3, we have Specially, if both and are feasible, we have by .
It follows that On the other hand, by (14), (15), (17), and (19), we have
We complete the proof.

Theorem 6. Suppose that Assumptions 1 and 2 hold and that, for any $x^* \in G^*$, there exists a $\lambda^*$ such that the pair $(x^*, \lambda^*)$ satisfies the second-order sufficiency condition (8). Let $\bar{x}$ be a global solution of problem $(P_{q,\epsilon})$ and $x^*$ a global solution of problem $(P)$ for $q > q^*$. Then there exists $\epsilon_0 > 0$ such that, for any $\epsilon \in (0, \epsilon_0)$, $f(\bar{x})$ differs from $f(x^*)$ at most by the error bound of Theorem 5, where $q^*$ is defined in Corollary 2.3 in [16].

Proof. By Corollary 2.3 in [16], $x^*$ is a global solution of problem $\min_{x \in X} F_q^{1/2}(x)$. Then, by Theorem 5 and the fact that $\epsilon \in (0, \epsilon_0)$, the conclusion follows.
We complete the proof.

Theorems 4 and 5 mean that an approximate solution to $(P_{q,\epsilon})$ is also an approximate solution to the nonsmooth penalty problem. Furthermore, by Theorem 6, an optimal solution to $(P_{q,\epsilon})$ is an approximately optimal solution to $(P)$. We now present a penalty function algorithm to solve $(P)$.

3. A Smoothing Method

We propose the following algorithm to solve $(P)$.

Algorithm 7. Consider the following.
Step  1. Choose an initial point $x^0$. Given $\epsilon_0 > 0$, $q_0 > 0$, $N > 1$, and $0 < \sigma < 1$, let $j = 0$, and go to Step  2.
Step  2. Use $x^j$ as the starting point to solve $(P_{q_j,\epsilon_j})$. Let $x^{j+1}$ be the optimal solution obtained ($x^{j+1}$ is obtained by a quasi-Newton method and a finite difference gradient). Go to Step  3.
Step  3. Let $q_{j+1} = N q_j$, $\epsilon_{j+1} = \sigma \epsilon_j$, and $j = j + 1$; then go to Step  2.
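The loop structure of Algorithm 7 can be sketched as follows. This is a hypothetical illustration, not the paper's Fortran code: a generic smoothing of $\max\{0, t\}$ (in the spirit of the smoothing functions of [8]) stands in for the paper's smoothing of the square-order term, a golden-section search on a fixed bracket replaces the quasi-Newton subproblem solver, and the toy problem is $\min (x-2)^2$ s.t. $x - 1 \le 0$ with solution $x = 1$.

```python
import math

def p_eps(t, eps):
    # hypothetical smooth approximation of max{0, t}; the paper's own
    # smoothing function for the square-order term is different
    return 0.5 * (t + math.sqrt(t * t + 4.0 * eps * eps))

def smoothed_penalty(x, q, eps, f, gs):
    # smoothed penalty: f(x) + q * sum of smoothed constraint violations
    return f(x) + q * sum(p_eps(g(x), eps) for g in gs)

def golden_min(F, a, b, tol=1e-10):
    # golden-section search for the 1-D smooth, convex subproblem
    r = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if F(c) < F(d):
            b, d = d, c
            c = b - r * (b - a)
        else:
            a, c = c, d
            d = a + r * (b - a)
    return 0.5 * (a + b)

def algorithm7_like(f, gs, q=1.0, eps=1.0, N=10.0, sigma=0.1, rounds=6):
    x = None
    for _ in range(rounds):
        x = golden_min(lambda y: smoothed_penalty(y, q, eps, f, gs), -1.0, 3.0)
        q *= N        # Step 3: enlarge the penalty parameter
        eps *= sigma  # Step 3: tighten the smoothing parameter
    return x

x_star = algorithm7_like(lambda x: (x - 2.0) ** 2, [lambda x: x - 1.0])
```

As $q$ grows and $\epsilon$ shrinks, the subproblem minimizers approach the constrained solution, mirroring the convergence statement of Theorem 9.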

Remark 8. From $N > 1$ and $0 < \sigma < 1$, we can easily obtain that the sequence $\{\epsilon_j\}$ is decreasing to $0$ and the sequence $\{q_j\}$ is increasing to $+\infty$ as $j \to \infty$.
Now we prove the convergence of the algorithm under mild conditions.

Theorem 9. Suppose that, for any $q > 0$ and $\epsilon > 0$, the solution set of $(P_{q,\epsilon})$ is nonempty. Let $\{x^j\}$ be the sequence generated by Algorithm 7. If $\{x^j\}$ has a limit point, then any limit point of $\{x^j\}$ is a solution of $(P)$.

Proof. Let $\bar{x}$ be any limit point of $\{x^j\}$. Then there exists a natural number set $J$ such that $x^j \to \bar{x}$, $j \in J$. If we can prove that (i) $\bar{x}$ is feasible and (ii) $\bar{x}$ attains the optimal value of $(P)$, then $\bar{x}$ is the optimal solution of $(P)$.
(i) Suppose, to the contrary, that ; then there exist , , and the subset such that for any .
If , it follows from Step  2 in Algorithm 7 and (15) that for any , which contradicts and .
If or , it follows from Step  2 in Algorithm 7 and (15) that for any , which contradicts and .
Then we have .
(ii) For any , it holds that then holds.
This completes the proof.

4. Numerical Examples

In this section, we solve two numerical examples in Fortran to show the applicability of Algorithm 7.

Example 1 (see [18, Example 4.1]). Consider the following problem:
With the starting point and parameter values listed, we obtain the results by Algorithm 7 shown in Table 1.
Furthermore, the algorithms based on the penalty function (3) or the exact penalty function (4) are described as follows.

Table 1: Numerical results for Example 1 by Algorithm 7.

Algorithm 10. Consider the following.
Step  1. Choose a point , and a stopping tolerance . Given , , , and , let , and go to Step  2.
Step  2. Use as the starting point to solve . Let be the optimal solution obtained ( is obtained by a quasi-Newton method and a finite difference gradient). Go to Step  3.
Step  3. Let , , , and ; then go to Step  2.

Algorithm 11. Consider the following.
Step  1. Choose a point and a stopping tolerance . Given , , , and , let , and go to Step  2.
Step  2. Use as the starting point to solve . Let be the optimal solution obtained ( is obtained by a quasi-Newton method and a finite difference gradient). Go to Step  3.
Step  3. Let , , , and ; then go to Step  2.

Let , , , , and ; numerical results by Algorithm 10 are shown in Table 2.

Table 2: Numerical results for Example 1 by Algorithm 10.

Let , , , , and ; numerical results by Algorithm 11 are shown in Table 3.

Table 3: Numerical results for Example 1 by Algorithm 11.

This example is a nonconvex problem with 22 local optimal solutions in the interior of the feasible region. By Sun and Li [18], the global minimum and the global optimal value are known. It is clear from Table 1 that the approximately optimal solution obtained by Algorithm 7 has corresponding objective function value 1.837684.

From Tables 1–3, one can see that Algorithm 11 converges faster than Algorithms 7 and 10, but the solution generated by Algorithm 11 is the worst. Algorithm 10 is the slowest one, and the solution generated by Algorithm 10 is worse than the solution generated by Algorithm 7.

Example 2 (see the Rosen–Suzuki problem in [15]). Consider the following problem:
Let , , , , and ; the results by Algorithm 7 are shown in Table 4.
Let , , , , and ; numerical results by Algorithm 10 are shown in Table 5.
Let , , , , and ; the results by Algorithm 11 are shown in Table 6.

Table 4: Numerical results for Example 2 by Algorithm 7.
Table 5: Numerical results for Example 2 by Algorithm 10.
Table 6: Numerical results for Example 2 by Algorithm 11.

It is clear from Table 4 that the approximately optimal solution obtained by Algorithm 7 has corresponding objective function value −44.22965, while the approximately optimal solution obtained in [15] has corresponding objective function value −44.233582.

From Tables 4–6, one can see that Algorithm 11 converges faster than Algorithms 7 and 10, but the solution generated by Algorithm 11 is the worst. Algorithm 10 is the slowest one, and the solution generated by Algorithm 10 is worse than the solution generated by Algorithm 7.

From Tables 1–6, one can see that Algorithm 7 yields approximate solutions to $(P)$ with better objective function values in comparison with Algorithms 10 and 11.

5. Conclusion

In this paper, we propose a method for smoothing the nonsmooth square-order exact penalty function for inequality constrained optimization. Error estimates are obtained among the optimal objective function values of the smoothed penalty problem, the nonsmooth penalty problem, and the original optimization problem. The algorithm based on the smoothed penalty function is shown to be convergent under mild conditions.

According to the numerical results given in Section 4, one may conclude that the smoothing penalty function yields better convergence results for computing an approximate solution to $(P)$ than the penalty functions (3) and (4).

Finally, we give some advice on how to choose the parameters in the algorithm. According to our experience, the initial value of $q$ may be 0.1, 1, 5, 10, 100, 1000, or 10000, with $N$ = 2, 5, 10, or 100 and the iteration formula $q_{j+1} = N q_j$. The initial value of $\epsilon$ may be 10, 5, 1, 0.5, or 0.1, with $\sigma$ = 0.5, 0.1, 0.05, or 0.01 and the iteration formula $\epsilon_{j+1} = \sigma \epsilon_j$.
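The parameter advice above amounts to geometric schedules. The following hypothetical helper (an illustration, not part of the paper) generates the sequences $q_{j+1} = N q_j$ and $\epsilon_{j+1} = \sigma \epsilon_j$ for inspection before a run:

```python
# Hypothetical helper reflecting the parameter advice: geometric updates
#   q_{j+1} = N * q_j  and  eps_{j+1} = sigma * eps_j.
def parameter_schedule(q0=10.0, N=2.0, eps0=1.0, sigma=0.1, rounds=5):
    q, eps = q0, eps0
    history = [(q, eps)]
    for _ in range(rounds):
        q, eps = N * q, sigma * eps
        history.append((q, eps))
    return history

sched = parameter_schedule()  # e.g. q0 = 10, N = 2, eps0 = 1, sigma = 0.1
```

Plotting or printing such a schedule makes it easy to check that the penalty parameter does not grow faster than the subproblem solver can tolerate.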

Acknowledgments

This work is supported by the National Natural Science Foundation of China (10971118 and 71371107) and the Foundation of Shandong Province (J10LG04 and ZR2012AL07).

References

  1. W. I. Zangwill, “Non-linear programming via penalty functions,” Management Science, vol. 13, pp. 344–358, 1967.
  2. M. S. Bazaraa and J. J. Goode, “Sufficient conditions for a globally exact penalty function without convexity,” Mathematical Programming Studies, vol. 19, pp. 1–15, 1982.
  3. S. P. Han and O. L. Mangasarian, “Exact penalty functions in nonlinear programming,” Mathematical Programming, vol. 17, no. 1, pp. 251–269, 1979.
  4. O. L. Mangasarian, “Sufficiency of exact penalty minimization,” SIAM Journal on Control and Optimization, vol. 23, no. 1, pp. 30–37, 1985.
  5. G. D. Pillo, “Exact penalty methods,” in Algorithms for Continuous Optimization, E. Spedicato, Ed., pp. 209–253, Kluwer Academic, New York, NY, USA, 1994.
  6. C. Yu, K. L. Teo, L. Zhang, and Y. Bai, “A new exact penalty function method for continuous inequality constrained optimization problems,” Journal of Industrial and Management Optimization, vol. 6, no. 4, pp. 895–910, 2010.
  7. F. S. Bai and X. Y. Luo, “Modified lower order penalty functions based on quadratic smoothing approximation,” Operations Research Transactions, vol. 16, no. 2, pp. 9–22, 2012.
  8. C. Chen and O. L. Mangasarian, “Smoothing methods for convex inequalities and linear complementarity problems,” Mathematical Programming B, vol. 71, no. 1, pp. 51–69, 1995.
  9. S. J. Lian, “Smoothing approximation to l1 exact penalty function for inequality constrained optimization,” Applied Mathematics and Computation, vol. 219, no. 6, pp. 3113–3121, 2012.
  10. Z. Q. Meng and S. Gao, “Smoothed square-root penalty function for nonlinear constrained optimization,” Operations Research Transactions, vol. 17, no. 2, pp. 70–80, 2013.
  11. M. Pinar and S. Zenios, “On smoothing exact penalty functions for convex constrained optimization,” SIAM Journal on Optimization, vol. 4, pp. 468–511, 1994.
  12. C. Y. Wang, W. L. Zhao, J. C. Zhou, and S. J. Lian, “Global convergence and finite termination of a class of smooth penalty function algorithms,” Optimization Methods and Software, vol. 28, no. 1, pp. 1–25, 2013.
  13. X. S. Xu, Z. Q. Meng, J. W. Sun, L. G. Huang, and R. Shen, “A second-order smooth penalty function algorithm for constrained optimization problems,” Computational Optimization and Applications, vol. 55, no. 1, pp. 155–172, 2013.
  14. X. Q. Yang, Z. Q. Meng, X. X. Huang, and G. T. Y. Pong, “Smoothing nonlinear penalty functions for constrained optimization problems,” Numerical Functional Analysis and Optimization, vol. 24, no. 3-4, pp. 351–364, 2003.
  15. Z. Meng, C. Dang, and X. Yang, “On the smoothing of the square-root exact penalty function for inequality constrained optimization,” Computational Optimization and Applications, vol. 35, no. 3, pp. 375–398, 2006.
  16. Z. Y. Wu, F. S. Bai, X. Q. Yang, and L. S. Zhang, “An exact lower order penalty function and its smoothing in nonlinear programming,” Optimization, vol. 53, no. 1, pp. 51–68, 2004.
  17. M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New York, NY, USA, 2nd edition, 1993.
  18. X. L. Sun and D. Li, “Value-estimation function method for constrained global optimization,” Journal of Optimization Theory and Applications, vol. 102, no. 2, pp. 385–409, 1999.