Abstract

A method is proposed to smooth the square-order exact penalty function for inequality constrained optimization. It is shown that, under some conditions, an approximately optimal solution of the original problem can be obtained by searching for an approximately optimal solution of the smoothed penalty problem. An algorithm based on the smoothed penalty function is given and is shown to be convergent under mild conditions. Two numerical examples suggest that the algorithm is efficient.

1. Introduction

Consider the following nonlinear constrained optimization problem:
$$(P)\qquad \min f(x) \quad \text{s.t.}\quad g_i(x) \le 0,\ i = 1, \dots, m, \tag{1}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $g_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, m$, are twice continuously differentiable functions. Let
$$G_0 = \{x \in \mathbb{R}^n \mid g_i(x) \le 0,\ i = 1, \dots, m\} \tag{2}$$
denote the feasible set of $(P)$.

To solve $(P)$, many penalty function methods have been proposed in the literature. One of the most popular penalty functions is the quadratic penalty function
$$F_2(x, \sigma) = f(x) + \sigma \sum_{i=1}^{m} \max\{g_i(x), 0\}^2, \tag{3}$$
where $\sigma > 0$ is the penalty parameter. Obviously, $F_2$ is continuously differentiable, but it is not an exact penalty function. A penalty function is called exact if, whenever the penalty parameter is sufficiently large, each minimum of the penalty problem is a minimum of the original problem or each minimum of the original problem is a minimum of the penalty problem.

In Zangwill [1], the classical $l_1$ exact penalty function is defined as follows:
$$F_1(x, \sigma) = f(x) + \sigma \sum_{i=1}^{m} \max\{g_i(x), 0\}. \tag{4}$$
After Zangwill's development, exact penalty functions have attracted much attention (see, e.g., [2–6]). It is known from the theory of ordinary constrained optimization that the penalty function $F_1$ is a better candidate for penalization. However, it is not a smooth function, and it causes numerical instability in its implementation when the value of the penalty parameter becomes large. Some methods for smoothing the exact penalty function have been developed (see, e.g., [7–14]).

In [15, 16], the square-order penalty function
$$F_{1/2}(x, \sigma) = f(x) + \sigma \sum_{i=1}^{m} \max\{g_i(x), 0\}^{1/2} \tag{5}$$
has been introduced and investigated. This penalty function is exact but not smooth, and its smoothing has been investigated in [15, 16]. Once smoothed, it can be applied to solve the problem via a gradient-type or a Newton-type method.
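For concreteness, the following sketch (ours, not taken from the paper; the toy objective and constraint are assumptions) contrasts the three penalty terms (3)–(5) on a one-dimensional problem and shows why (5) calls for smoothing:

```python
# Toy problem (an assumption for illustration): min f(x) = x^2  s.t.  g(x) = 1 - x <= 0,
# with solution x* = 1. Each penalty adds a different term for the violation max{g(x), 0}.
f = lambda x: x ** 2
g = lambda x: 1.0 - x

F2 = lambda x, s: f(x) + s * max(g(x), 0.0) ** 2        # quadratic penalty (3): smooth, not exact
F1 = lambda x, s: f(x) + s * max(g(x), 0.0)             # l1 penalty (4): exact, nonsmooth
F_half = lambda x, s: f(x) + s * max(g(x), 0.0) ** 0.5  # square-order penalty (5): exact, nonsmooth

# Near x* = 1 the square-root term rises steeply on the infeasible side
# (its one-sided derivative is unbounded), which is what motivates smoothing it.
for x in (0.9, 0.99, 1.0, 1.1):
    print(f"{x:5.2f}  F2={F2(x, 10):8.4f}  F1={F1(x, 10):8.4f}  F1/2={F_half(x, 10):8.4f}")
```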

In this paper, a new smoothing of the square-order penalty function of the form (5) is investigated. The rest of this paper is organized as follows. In Section 2, a new smoothing function for the square-order penalty function is introduced, and some fundamental properties of the smoothing function are discussed. In Section 3, an algorithm based on the smoothed penalty function is presented to compute an approximate solution to $(P)$ and is shown to be convergent. In Section 4, two numerical examples are given to show the applicability of the algorithm. In Section 5, we conclude the paper with some remarks.

2. Smoothing of the Exact Lower Order Penalty Function

Consider the following lower order penalty problem:
$$(P_{1/2})\qquad \min_{x \in \mathbb{R}^n} F_{1/2}(x, \sigma). \tag{6}$$
In this paper, we say that the pair $(x^*, \mu^*)$ satisfies the KKT condition if
$$\nabla f(x^*) + \sum_{i=1}^{m} \mu_i^* \nabla g_i(x^*) = 0, \quad \mu_i^* g_i(x^*) = 0, \quad \mu_i^* \ge 0, \quad g_i(x^*) \le 0, \quad i = 1, \dots, m, \tag{7}$$
and that the pair $(x^*, \mu^*)$ satisfies the second-order sufficiency condition [17, page 169] if
$$d^T \nabla_{xx}^2 L(x^*, \mu^*)\, d > 0 \quad \text{for any } d \ne 0 \text{ with } \nabla g_i(x^*)^T d = 0,\ i \in I(x^*), \tag{8}$$
where $L(x, \mu) = f(x) + \sum_{i=1}^{m} \mu_i g_i(x)$ and $I(x^*) = \{i \mid g_i(x^*) = 0\}$. In order to establish the exact penalization, we need the following assumptions.

Assumption 1. $f(x)$ satisfies the following coercive condition:
$$\lim_{\|x\| \to +\infty} f(x) = +\infty.$$

Under Assumption 1, there exists a box $X$ such that $G^* \subset \operatorname{int} X$, where $G^*$ is the set of global minima of problem $(P)$ and $\operatorname{int} X$ denotes the interior of the set $X$. Consider the following problem:
$$(P')\qquad \min f(x) \quad \text{s.t.}\quad g_i(x) \le 0,\ i = 1, \dots, m,\ x \in X.$$
Let $\bar{G}^*$ denote the set of global minima of problem $(P')$. Then $\bar{G}^* = G^*$.

Assumption 2. The set $G^*$ is a finite set.

Then we consider the penalty problem of the form
$$(P_{1/2}')\qquad \min_{x \in X} F_{1/2}(x, \sigma).$$

Let $p(t) = \max\{t, 0\}^{1/2}$; that is,
$$p(t) = \begin{cases} t^{1/2}, & t > 0, \\ 0, & t \le 0, \end{cases}$$
then $F_{1/2}(x, \sigma) = f(x) + \sigma \sum_{i=1}^{m} p(g_i(x))$. For any $\epsilon > 0$, let $p_\epsilon(t)$ be the corresponding smoothing function. It is easy to see that $p_\epsilon(t)$ is continuously differentiable on $\mathbb{R}$. Furthermore, we can obtain that $p_\epsilon(t) \to p(t)$ as $\epsilon \to 0^+$.

Figure 1 shows the behavior of $p(t)$ (solid line) together with $p_\epsilon(t)$ for three decreasing values of $\epsilon$ (solid line with plus signs, dash-dot line, and dashed line, respectively).

Let
$$F_{1/2}(x, \sigma, \epsilon) = f(x) + \sigma \sum_{i=1}^{m} p_\epsilon(g_i(x)).$$
Then $F_{1/2}(x, \sigma, \epsilon)$ is continuously differentiable on $\mathbb{R}^n$. Consider the following smoothed optimization problem:
$$(P_{1/2,\epsilon}')\qquad \min_{x \in X} F_{1/2}(x, \sigma, \epsilon).$$
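As an illustration of such a smoothing, here is a minimal sketch with one possible choice of $p_\epsilon$ (our assumption for illustration, not necessarily the function defined in this paper): $\max\{t, 0\}$ is replaced by the smooth majorant $(t + \sqrt{t^2 + 4\epsilon^2})/2$ before taking the square root, which gives an infinitely differentiable function with $|p_\epsilon(t) - p(t)| \le \sqrt{\epsilon}$ for all $t$.

```python
import numpy as np

def p(t):
    """Nonsmooth kernel p(t) = max(t, 0) ** 0.5 of the square-order penalty."""
    return np.sqrt(np.maximum(t, 0.0))

def p_eps(t, eps):
    """Example smoothing (an assumption, not the paper's p_eps): replace
    max(t, 0) by (t + sqrt(t^2 + 4 eps^2)) / 2, which exceeds max(t, 0)
    by at most eps, so |p_eps - p| <= sqrt(eps) everywhere."""
    return np.sqrt(0.5 * (t + np.sqrt(t * t + 4.0 * eps * eps)))

t = np.linspace(-1.0, 1.0, 201)
for eps in (0.1, 0.01, 0.001):
    print(eps, np.max(np.abs(p_eps(t, eps) - p(t))))  # error decreases with eps
```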

Lemma 3. For any $x \in X$ and any $\epsilon > 0$, the difference $F_{1/2}(x, \sigma) - F_{1/2}(x, \sigma, \epsilon)$ is bounded in absolute value by a quantity that tends to 0 as $\epsilon \to 0$, uniformly in $x \in X$.

Proof. The claim follows from the definitions of $p(t)$ and $p_\epsilon(t)$ by estimating $p(t) - p_\epsilon(t)$ separately in the cases $t \le 0$ and $t > 0$ and then summing over the $m$ constraint terms.

As a direct consequence of Lemma 3, we have the following result.

Theorem 4. Let $\{\epsilon_k\}$ be a sequence of positive numbers with $\epsilon_k \to 0$ as $k \to \infty$, and assume that $x^k$ is a solution to $(P_{1/2,\epsilon_k}')$ for some fixed $\sigma > 0$. Let $\bar{x}$ be an accumulation point of the sequence $\{x^k\}$. Then $\bar{x}$ is an optimal solution to $(P_{1/2}')$.

Proof. Because $x^k$ is a solution to $(P_{1/2,\epsilon_k}')$, we have $F_{1/2}(x^k, \sigma, \epsilon_k) \le F_{1/2}(x, \sigma, \epsilon_k)$ for any $x \in X$. By Lemma 3, $F_{1/2}(\cdot, \sigma, \epsilon_k)$ converges to $F_{1/2}(\cdot, \sigma)$ uniformly on $X$ as $\epsilon_k \to 0$. Letting $k \to \infty$ along the subsequence converging to $\bar{x}$, we obtain $F_{1/2}(\bar{x}, \sigma) \le F_{1/2}(x, \sigma)$ for any $x \in X$.
This completes the proof.

Theorem 5. Let $x^*$ be an optimal solution of problem $(P_{1/2}')$ and $\bar{x}_\epsilon$ an optimal solution of problem $(P_{1/2,\epsilon}')$ for some $\sigma > 0$ and $\epsilon > 0$. Then the optimal values of the two problems differ by at most the uniform bound of Lemma 3. If, in addition, both $x^*$ and $\bar{x}_\epsilon$ are feasible, then the same estimate holds between $f(x^*)$ and $f(\bar{x}_\epsilon)$.

Proof. By Lemma 3, the optimal values of the two problems differ by at most the uniform smoothing bound. In particular, if both $x^*$ and $\bar{x}_\epsilon$ are feasible, then $F_{1/2}(x^*, \sigma) = f(x^*)$ and $F_{1/2}(\bar{x}_\epsilon, \sigma) = f(\bar{x}_\epsilon)$, since the penalty terms vanish at feasible points.
It follows that the estimate relates $f(x^*)$ and $f(\bar{x}_\epsilon)$ directly. On the other hand, by (14), (15), (17), and (19), the reverse estimate also holds.
This completes the proof.

Theorem 6. Suppose that Assumptions 1 and 2 hold and that, for any $x^* \in G^*$, there exists $\mu^*$ such that the pair $(x^*, \mu^*)$ satisfies the second-order sufficiency condition (8). Let $\bar{x}_\epsilon$ be a global solution of problem $(P_{1/2,\epsilon}')$ and $x^*$ a global solution of problem $(P')$ for some $\epsilon > 0$. Then there exists $\sigma^* > 0$ such that, for any $\sigma \ge \sigma^*$, the estimate of Theorem 5 holds between $f(\bar{x}_\epsilon)$ and $f(x^*)$, where $\sigma^*$ is defined in Corollary 2.3 in [16].

Proof. By Corollary 2.3 in [16], we have that $x^*$ is a global solution of problem $(P_{1/2}')$ for any $\sigma \ge \sigma^*$. Then, by Theorem 5, the estimate holds between the optimal values of $(P_{1/2}')$ and $(P_{1/2,\epsilon}')$.
Since $x^*$ is feasible, $F_{1/2}(x^*, \sigma) = f(x^*)$, and the conclusion follows.
This completes the proof.

Theorems 4 and 5 mean that an approximate solution to $(P_{1/2,\epsilon}')$ is also an approximate solution to $(P_{1/2}')$. Furthermore, by Theorem 6, an optimal solution to $(P_{1/2,\epsilon}')$ is an approximately optimal solution to $(P')$. Now we present a penalty function algorithm to solve $(P)$.

3. A Smoothing Method

We propose the following algorithm to solve $(P)$.

Algorithm 7. Consider the following.
Step  1. Choose a point $x^0 \in X$. Given $\epsilon_0 > 0$, $\sigma_0 > 0$, $0 < \eta < 1$, and $c > 1$, let $k = 0$, and go to Step  2.
Step  2. Use $x^k$ as the starting point to solve $\min_{x \in X} F_{1/2}(x, \sigma_k, \epsilon_k)$. Let $x^{k+1}$ be the optimal solution obtained ($x^{k+1}$ is obtained by a quasi-Newton method with a finite-difference gradient). Go to Step  3.
Step  3. Let $\epsilon_{k+1} = \eta \epsilon_k$, $\sigma_{k+1} = c \sigma_k$, and $k = k + 1$; then go to Step  2.
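A minimal sketch of Algorithm 7 in Python (ours; the paper's experiments are implemented in Fortran, and the smoothing `p_eps` below is the illustrative example from Section 2, not the paper's own definition):

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_penalty(f, gs, sigma, eps):
    """F(x, sigma, eps) = f(x) + sigma * sum_i p_eps(g_i(x))."""
    def p_eps(t):  # illustrative smoothing of sqrt(max(t, 0)); an assumption
        return np.sqrt(0.5 * (t + np.sqrt(t * t + 4.0 * eps * eps)))
    return lambda x: f(x) + sigma * sum(p_eps(g(x)) for g in gs)

def algorithm7(f, gs, x0, eps0=1.0, sigma0=10.0, eta=0.1, c=10.0, iters=8):
    x, eps, sigma = np.asarray(x0, dtype=float), eps0, sigma0
    for _ in range(iters):
        # Step 2: quasi-Newton (BFGS) with a finite-difference gradient,
        # warm-started from the previous iterate.
        x = minimize(smoothed_penalty(f, gs, sigma, eps), x, method="BFGS").x
        # Step 3: shrink eps (eta < 1), grow sigma (c > 1).
        eps, sigma = eta * eps, c * sigma
    return x
```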

Remark 8. From $0 < \eta < 1$ and $c > 1$, we can easily see that the sequence $\{\epsilon_k\}$ decreases to 0 and the sequence $\{\sigma_k\}$ increases to $+\infty$ as $k \to \infty$.
Now we prove the convergence of the algorithm under mild conditions.

Theorem 9. Suppose that, for any $\sigma > 0$ and $\epsilon > 0$, the solution set of $(P_{1/2,\epsilon}')$ is nonempty. Let $\{x^k\}$ be the sequence generated by Algorithm 7. If $\{x^k\}$ has a limit point, then any limit point of $\{x^k\}$ is an optimal solution of $(P)$.

Proof. Let $\bar{x}$ be any limit point of $\{x^k\}$. Then there exists an infinite index set $K \subseteq \mathbb{N}$ such that $x^k \to \bar{x}$ for $k \in K$. If we can prove that (i) $\bar{x}$ is feasible for $(P)$ and (ii) $f(\bar{x}) \le f(x)$ for any feasible $x$, then $\bar{x}$ is an optimal solution of $(P)$.
(i) Suppose, to the contrary, that $\bar{x}$ is not feasible; then there exist an index $i_0$, a constant $\delta > 0$, and a subset $K' \subseteq K$ such that $g_{i_0}(x^k) \ge \delta$ for any $k \in K'$.
Considering the cases in (15) separately, it follows from Step  2 in Algorithm 7 that $F_{1/2}(x^k, \sigma_k, \epsilon_k)$ is unbounded along $K'$ as $\sigma_k \to +\infty$, which contradicts the minimality of $x^k$ in Step  2.
Then we have that $\bar{x}$ is feasible.
(ii) For any feasible $x \in X$, it holds by Step  2 that $F_{1/2}(x^k, \sigma_k, \epsilon_k) \le F_{1/2}(x, \sigma_k, \epsilon_k)$; letting $k \to \infty$ along $K$ then yields $f(\bar{x}) \le f(x)$.
This completes the proof.

4. Numerical Examples

In this section, we solve two numerical examples in Fortran to show the applicability of Algorithm 7.

Example 1 (see [18, Example 4.1]). Consider the following problem:
With the starting point $x^0$ and the parameters $\epsilon_0$, $\sigma_0$, $\eta$, and $c$ chosen for Algorithm 7, we obtain the results shown in Table 1.
Furthermore, the algorithms based on the quadratic penalty function (3) and the classical exact penalty function (4) are described as follows.

Algorithm 10. Consider the following.
Step  1. Choose a point $x^0 \in X$ and a stopping tolerance $\varepsilon > 0$. Given $\sigma_0 > 0$ and $c > 1$, let $k = 0$, and go to Step  2.
Step  2. Use $x^k$ as the starting point to solve $\min_{x \in X} F_2(x, \sigma_k)$. Let $x^{k+1}$ be the optimal solution obtained ($x^{k+1}$ is obtained by a quasi-Newton method with a finite-difference gradient). Go to Step  3.
Step  3. Let $\sigma_{k+1} = c \sigma_k$ and $k = k + 1$; then go to Step  2.

Algorithm 11. Consider the following.
Step  1. Choose a point $x^0 \in X$ and a stopping tolerance $\varepsilon > 0$. Given $\sigma_0 > 0$ and $c > 1$, let $k = 0$, and go to Step  2.
Step  2. Use $x^k$ as the starting point to solve $\min_{x \in X} F_1(x, \sigma_k)$. Let $x^{k+1}$ be the optimal solution obtained ($x^{k+1}$ is obtained by a quasi-Newton method with a finite-difference gradient). Go to Step  3.
Step  3. Let $\sigma_{k+1} = c \sigma_k$ and $k = k + 1$; then go to Step  2.
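Under the same assumptions as the sketch after Algorithm 7, Algorithms 10 and 11 can share one driver and differ only in the penalty term (and in having no smoothing parameter); a possible sketch:

```python
import numpy as np
from scipy.optimize import minimize

def penalty_method(f, gs, x0, term, sigma0=10.0, c=10.0, iters=8):
    """Driver shared by Algorithms 10 and 11; `term` is the penalty kernel."""
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(iters):
        F = lambda y, s=sigma: f(y) + s * sum(term(g(y)) for g in gs)
        x = minimize(F, x, method="BFGS").x  # quasi-Newton, finite-difference gradient
        sigma *= c
    return x

quadratic_term = lambda t: max(t, 0.0) ** 2  # Algorithm 10: penalty function (3)
l1_term = lambda t: max(t, 0.0)              # Algorithm 11: penalty function (4)
```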

With the starting point $x^0$, the parameters $\sigma_0$ and $c$, and the tolerance $\varepsilon$ chosen for Algorithm 10, numerical results are shown in Table 2.

With the starting point $x^0$, the parameters $\sigma_0$ and $c$, and the tolerance $\varepsilon$ chosen for Algorithm 11, numerical results are shown in Table 3.

This example is a nonconvex problem with 22 local optimal solutions in the interior of the feasible region. By Sun and Li [18], the global minimum and the global optimal value are known. It is clear from Table 1 that the approximately optimal solution obtained by Algorithm 7 attains the objective function value 1.837684.

From Tables 1–3, one can see that Algorithm 11 converges faster than Algorithms 7 and 10, but the solution generated by Algorithm 11 is the worst of the three. Algorithm 10 is the slowest, and the solution generated by Algorithm 10 is worse than the solution generated by Algorithm 7.

Example 2 (see the Rosen-Suzuki problem in [15]). Consider the following problem:
$$\begin{aligned} \min\quad & f(x) = x_1^2 + x_2^2 + 2x_3^2 + x_4^2 - 5x_1 - 5x_2 - 21x_3 + 7x_4 \\ \text{s.t.}\quad & g_1(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_1 - x_2 + x_3 - x_4 - 8 \le 0, \\ & g_2(x) = x_1^2 + 2x_2^2 + x_3^2 + 2x_4^2 - x_1 - x_4 - 10 \le 0, \\ & g_3(x) = 2x_1^2 + x_2^2 + x_3^2 + 2x_1 - x_2 - x_4 - 5 \le 0. \end{aligned}$$
With the starting point $x^0$ and the parameters $\epsilon_0$, $\sigma_0$, $\eta$, and $c$ chosen for Algorithm 7, the results are shown in Table 4.
With the corresponding choices for Algorithm 10, numerical results are shown in Table 5.
With the corresponding choices for Algorithm 11, the results are shown in Table 6.
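Assuming the standard form of the Rosen-Suzuki problem given above, the experiment can be sketched with the `algorithm7` function from the sketch in Section 3; the zero starting point below is an illustrative assumption.

```python
import numpy as np

# Rosen-Suzuki test problem; the known global minimum is f(0, 1, 2, -1) = -44.
f = lambda x: (x[0]**2 + x[1]**2 + 2*x[2]**2 + x[3]**2
               - 5*x[0] - 5*x[1] - 21*x[2] + 7*x[3])
gs = [
    lambda x: x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 + x[0] - x[1] + x[2] - x[3] - 8,
    lambda x: x[0]**2 + 2*x[1]**2 + x[2]**2 + 2*x[3]**2 - x[0] - x[3] - 10,
    lambda x: 2*x[0]**2 + x[1]**2 + x[2]**2 + 2*x[0] - x[1] - x[3] - 5,
]

x = algorithm7(f, gs, x0=np.zeros(4))  # algorithm7 from the sketch in Section 3
print(x, f(x))  # should approach (0, 1, 2, -1) and a value near -44
```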

It is clear from Table 4 that the approximately optimal solution obtained by Algorithm 7 attains the objective function value −44.22965, while the approximately optimal solution obtained in [15] attains the objective function value −44.233582.

From Tables 4–6, one can see that Algorithm 11 converges faster than Algorithms 7 and 10, but the solution generated by Algorithm 11 is the worst of the three. Algorithm 10 is the slowest, and the solution generated by Algorithm 10 is worse than the solution generated by Algorithm 7.

From Tables 1–6, one can see that Algorithm 7 yields approximate solutions to $(P)$ with better objective function values than those produced by Algorithms 10 and 11.

5. Conclusion

In this paper, we propose a method for smoothing the nonsmooth square-order exact penalty function for inequality constrained optimization. Error estimates are obtained among the optimal objective function values of the smoothed penalty problem, the nonsmooth penalty problem, and the original optimization problem. The algorithm based on the smoothed penalty function is shown to be convergent under mild conditions.

According to the numerical results given in Section 4, one may conclude that the smoothed penalty function $F_{1/2}(x, \sigma, \epsilon)$ yields better convergence for computing an approximate solution to $(P)$ than the penalty functions $F_2$ and $F_1$.

Finally, we give some advice on how to choose the parameters in the algorithm. According to our experience, the initial value $\sigma_0$ may be 0.1, 1, 5, 10, 100, 1000, or 10000, with $c$ = 2, 5, 10, or 100 and the iteration formula $\sigma_{k+1} = c \sigma_k$. The initial value $\epsilon_0$ may be 10, 5, 1, 0.5, or 0.1, with $\eta$ = 0.5, 0.1, 0.05, or 0.01 and the iteration formula $\epsilon_{k+1} = \eta \epsilon_k$.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (10971118 and 71371107) and the Foundation of Shandong Province (J10LG04 and ZR2012AL07).