Abstract

For the inequality constrained minimization problem, we first propose a new exact nonsmooth objective penalty function and then apply a smoothing technique to make it smooth. It is shown that any minimizer of the smoothed objective penalty function is an approximate solution of the original problem. Based on this, we develop a solution method for the inequality constrained minimization problem and prove its global convergence. Numerical experiments are provided to show the efficiency of the proposed method.

1. Introduction

Consider the following inequality constrained minimization problem (P): minimize f(x) subject to g_i(x) ≤ 0, i = 1, ..., m, where f, g_1, ..., g_m : R^n → R are continuously differentiable functions. Throughout this paper, we use S = {x ∈ R^n : g_i(x) ≤ 0, i = 1, ..., m} to denote the feasible solution set.

The problem finds applications in fields such as economics, mathematical programming, transportation, and regional science [1–5], and it has received much attention from researchers; see, e.g., [6–16].

Due to the involvement of the inequality constraints, the problem is very hard to solve directly. Hence, some researchers turn to indirect methods such as the penalty function method, the SQP method, and the feasible direction method [17]. Among these methods, the penalty function method is a popular one. Its main idea is to combine the objective function and the constraints into a penalty function and then attack the problem by solving a sequence of unconstrained problems. Generally, if a solution of the original problem is a solution of the penalty problem, or a solution of the penalty problem is a solution of the original problem, then the penalty function is called exact [17]. Along this line, Zangwill [18] proposed the classical exact penalty function F(x, ρ) = f(x) + ρ ∑_{i=1}^{m} max{g_i(x), 0}, where ρ > 0 is a penalty parameter.
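
To illustrate the idea, the following minimal sketch minimizes Zangwill's penalty for an increasing sequence of penalty parameters on a hypothetical two-variable problem; f, g, and the parameter values below are placeholders, not the test problems used later in this paper, and a derivative-free inner solver is used because the penalty is nonsmooth.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder problem data: minimize f(x) subject to g_i(x) <= 0.
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):
    # Two inequality constraints g_1(x) <= 0 and g_2(x) <= 0.
    return np.array([x[0] ** 2 - x[1], x[0] + x[1] - 2.0])

def zangwill_penalty(x, rho):
    # f(x) + rho * sum_i max{g_i(x), 0}: exact but nonsmooth where g_i(x) = 0.
    return f(x) + rho * np.sum(np.maximum(g(x), 0.0))

# Attack the constrained problem through a sequence of unconstrained problems.
x0 = np.zeros(2)
for rho in (1.0, 10.0, 100.0):
    res = minimize(lambda x: zangwill_penalty(x, rho), x0, method="Nelder-Mead")
    x0 = res.x  # warm start the next subproblem
    print(rho, res.x, f(res.x), np.maximum(g(res.x), 0.0))
```

The nonsmoothness visible in zangwill_penalty is exactly what prevents the direct use of gradient-type or Newton-type solvers and motivates the smoothing developed in this paper.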

Obviously, the penalty function given above is not smooth, and researchers have therefore considered its smoothed versions [19–25]. In [26], the lower-order penalty function f(x) + ρ ∑_{i=1}^{m} (max{g_i(x), 0})^k with k ∈ (0, 1) was introduced, and its exactness and its smoothing were investigated in [27, 28].

To improve the performance of the penalty function when solving the inequality constrained optimization problem, an objective penalty function, governed by an objective penalty parameter, was introduced in [29, 30]. Assume that the original problem has an optimal solution with a corresponding optimal objective function value. For this function, it was shown that the minimizers of the penalty problem tend to the optimal solution when a convergent sequence of objective penalty parameters tends to the optimal value.

Later, Meng et al. [31] considered another objective penalty function involving an objective penalty parameter. This objective penalty function is smooth, and its exactness was proved.

By combining the objective penalty and the constraint penalty, Li et al. [32] proposed an objective penalty function for solving min-max programming problems with equality and inequality constraints.

In this paper, we propose a new exact nonsmooth objective penalty function which is different from the functions defined by (3) and (4). Then, motivated by the smoothing technique of the exact penalty function in [23–25], we apply a smoothing technique to the nonsmooth objective penalty function so that it can be minimized numerically by gradient-type or Newton-type methods.

The remainder of this paper is organized as follows. In Section 2, we propose a new exact nonsmooth objective penalty function and then construct a second-order smoothing approximation to it. Error bounds between the optimal objective values of the nonsmooth objective penalty problem and the smoothed objective penalty problem are presented. Based on the twice continuously differentiable smoothed objective penalty function, we develop a solution method in Section 3 and prove its global convergence. Numerical experiments are reported in Section 4 to show the efficiency of the proposed method.

2. A New Objective Penalty Function and Its Smoothing

In this section, we consider the following objective penalty function: Correspondingly, the associated optimization problem is as follows: where .

For this problem, we have the following conclusion on the relationship between the optimal solutions of the original problem and the objective penalty problem.

Theorem 1. If is an optimal solution of problem , then is also an optimal solution of problem with .

Proof. Since is an optimal solution to and , it holds that It is easy to see that for any . Hence is an optimal solution to .

Theorem 2. Let be a connected and compact set and be a continuous function. Set and . Suppose is an optimal solution to for some . Then
(i) if , then is a feasible solution to and ;
(ii) if and , then .

Proof. (i) It follows from that , and , so . The conclusion is proved.
(ii) If , then . Since is continuous, there exists such that . Hence . On the other hand, since is optimal to , it holds that , which contradicts . Therefore, .

Theorem 3. Let be a connected and compact set, be a continuous function, and , and be an optimal solution to . Suppose is an optimal solution to for some , , and . Then
(i) if is not feasible to , then and ;
(ii) if is a feasible solution to , then is an optimal solution to and is an exact value of the objective penalty parameter.

Proof. (i) By (ii) in Theorem 2, . If , then . On the other hand, if , and from (6), one has Since and , it follows that . Hence, .
(ii) It follows from the assumption and (6) that Since is feasible to , by (ii) in Theorem 2, one has and . Hence, and . Then, Therefore, This means that is an optimal solution to .

Theorems 2 and 3 provide a way to solve the problem. However, the objective penalty function is not smooth. Now, we use a smoothing technique to make it twice continuously differentiable so that it can be minimized by Newton-type methods. The obtained smooth objective penalty function is quite different from the functions given in [23–25].

Let , and define Then and It is easy to see that this function is twice continuously differentiable.
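
As an illustration of how such a twice continuously differentiable approximation of the plus function max{t, 0} can be constructed (the specific smoothing function adopted in this paper may differ), consider the quintic spline

```latex
% Illustration only: one possible C^2 smoothing of t_+ = \max\{t,0\};
% the smoothing function used in the paper may have a different form.
p_\varepsilon(t)=
\begin{cases}
0, & t\le 0,\\[4pt]
\dfrac{6t^{3}}{\varepsilon^{2}}-\dfrac{8t^{4}}{\varepsilon^{3}}+\dfrac{3t^{5}}{\varepsilon^{4}}, & 0<t<\varepsilon,\\[4pt]
t, & t\ge \varepsilon,
\end{cases}
\qquad
0\le \max\{t,0\}-p_\varepsilon(t)\le \tfrac{16}{81}\,\varepsilon \ \text{ for all } t.
```

This function agrees with max{t, 0} outside (0, ε) and joins the two linear pieces with matching value, first, and second derivatives at t = 0 and t = ε, so it is twice continuously differentiable, and its approximation error is proportional to ε; error bounds of this type for the smoothed objective penalty function are established in Lemma 4 and Theorem 6 below.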

Based on this, we consider the following second-order smoothing approximation: where

The optimization problem corresponding to is as follows:

For problems and , we have the following conclusion.

Lemma 4. For any and , it holds that

Proof. From the definition of and , one has Hence, Thus, for any , it holds that which means that It follows from (6) and (16) that

Theorem 5. Suppose the positive sequence converges to 0 as , is a solution to , and is an accumulation point of the sequence . Then is an optimal solution to .

Proof. Since is a solution to , one has It follows from Lemma 4 that and From (23), (24), and (25), one has Letting yields Thus is an optimal solution to .

Theorem 6. Let be an optimal solution of and be an optimal solution of . Then

Proof. By Lemma 4 and the assumption, one has and Then

Theorem 6 means that the optimal solution to is also an approximately optimal solution to when is sufficiently small.

3. A Smoothing Method

In this section, we will propose an algorithm for solving problem based on the smoothed objective penalty function . The following algorithm is based on the relationship between and given in Theorems 2 and 3.

Algorithm 7.
Step 1. Take and . Let , .
Step 2. Solve starting at . Let be the global optimal solution. ( is obtained by a quasi-Newton method.)
Step 3. If , let , , , , and go to Step 2.
Step 4. If is not feasible to , let , , , , and go to Step 2. Otherwise, if is feasible to , is the approximate optimal solution to .
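
To make the structure concrete, the following is a minimal Python sketch of one possible implementation of this scheme. It assumes, rather than reproduces, the details of Steps 3 and 4: the objective penalty parameter is updated by bisection on the current bracket, feasibility of the inner minimizer decides which endpoint moves, and termination occurs once a feasible iterate is found with a sufficiently tight bracket. The smoothed objective penalty function F_eps must be supplied according to the definitions of Section 2; the inner solver is SciPy's BFGS quasi-Newton method, and all names below are placeholders rather than the paper's notation.

```python
import numpy as np
from scipy.optimize import minimize

def smoothing_algorithm(F_eps, g, x0, a0, b0, eps=1e-3, tol=1e-6, max_iter=50):
    """Sketch of the bisection scheme behind Algorithm 7 (details assumed).

    F_eps(x, M, eps) -- smoothed objective penalty function from Section 2
    g(x)             -- vector of constraint values; feasibility means g(x) <= 0
    [a0, b0]         -- initial bracket for the objective penalty parameter M
    """
    a, b = a0, b0
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        M = 0.5 * (a + b)  # assumed midpoint rule for the penalty parameter
        # Step 2: unconstrained minimization of the smoothed penalty by a
        # quasi-Newton (BFGS) method, warm-started at the previous iterate.
        res = minimize(lambda z: F_eps(z, M, eps), x, method="BFGS")
        x = res.x
        if np.any(g(x) > tol):
            a = M              # iterate infeasible: raise the lower end of the bracket
        elif b - a <= tol:
            return x, M        # feasible iterate with a tight bracket: accept it
        else:
            b = M              # feasible: shrink the bracket from above
    return x, 0.5 * (a + b)
```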

For Algorithm 7, we always assume that and can be satisfied. Under this condition, we can establish the global convergence of Algorithm 7.

Theorem 8. Suppose that and is an infinite sequence generated by Algorithm 7. Then the following hold:
(1) If the algorithm terminates at step , then is an optimal solution to .
(2) If the algorithm generates an infinite sequence , then it is bounded and any limit point of it is an optimal solution to .

Proof. First, we claim that the sequences and defined in Algorithm 7 are such that is an increasing sequence and is a decreasing sequence with and We prove this claim by induction.
For , it follows from Algorithm 7 that , . For the induction step, let the hypothesis hold for . For , we let , , and in Step 3. By , one has In Step 4, let , and . By , one has By induction, (32) holds for all .
Consider the next iteration. In Step 3, let , , then . In Step 4, let , , then . By induction, (33) holds for all .
From Algorithm 7, it is easy to see that is increasing and is decreasing. Then sequences and are both convergent. Let and . It follows from (32) and (33) that . Therefore, the sequence of objective penalty parameters also converges to the same limit.
Now we are in a position to prove the main conclusions of the theorem.
For (1), if Algorithm 7 terminates at some iteration, then it must terminate at Step 4; that is, the final iterate is feasible to the original problem. By Theorem 3, it is an optimal solution to the original problem.
For (2), we first show that the sequence is bounded. For the sake of contradiction, suppose that the sequence is unbounded.
Since is an optimal solution to , for any fixed , Due to and as , we conclude that there is some such that Since , we arrive at a contradiction, which shows that the sequence is bounded.
Let . Without loss of generality, we assume as . By Theorems 2 and 3 and Algorithm 7, we know that . It follows from (32) that . Let be an optimal solution to . Then . Note that Letting yields that which implies and . Therefore, is an optimal solution to .

4. Numerical Experiments

In this section, we report numerical experiments to show the efficiency of Algorithm 7. Based on different objective penalty functions, we give different algorithms for comparison. The algorithms based on the objective penalty functions (6) or (4) are described below.

Algorithm 9.
Step 1. Take , , and . Let , .
Step 2. Solve starting at . Let be the global optimal solution.
Step 3. If , let , , , and go to Step 2.
Step 4. If is not feasible to , let , , , and go to Step 2. Otherwise, if is feasible to , is the approximate optimal solution to .

Algorithm 10.
Step 1. Take , , and . Let , .
Step 2. Solve with starting at . Let be the global optimal solution.
Step 3. If , let , , , and go to Step 2.
Step 4. If is not feasible to , let , , , and go to Step 2. Otherwise, if is feasible to , is the approximate optimal solution to .

Example 11. Consider the following problem from [33]:

Let , , . The numerical results of Algorithm 7 on Example 11 with and different starting points are shown in Table 1.

The numerical results given in Table 1 show that all runs terminate after the first iteration and that the results of Algorithm 7 do not depend on the choice of starting point for this example.

Let The numerical results of Algorithms 9 and 10 on this example with different starting points are shown in Tables 2 and 3.

From the numerical results on Example 11, we can see that Algorithms 7, 9, and 10 can obtain almost the same approximate optimal solution. From the numerical results given in [33], we know that the optimal solution of Example 11 is with the objective function value . Hence, the numerical results show that Algorithm 7 is efficient in this example.

Example 12. Consider the following problem from [24]:

For this example, we let The numerical results of Algorithm 7 on Example 12 with and different starting points are shown in Table 4.

Let The numerical results of Algorithms 9 and 10 on Example 12 with different starting points are shown in Tables 5 and 6.

From Tables 4–6, we can see that Algorithm 7 has better numerical stability than Algorithms 9 and 10 in terms of the obtained optimal solution and objective function value for this example. In fact, the known solution of Example 12 is with the objective function value .

5. Concluding Remarks

In this paper, we proposed a new exact nonsmooth objective penalty function for inequality constrained optimization together with a method for smoothing it. Further, we established the global convergence of the resulting method under mild conditions. The reported numerical experiments demonstrate the efficiency of the proposed method.

Data Availability

The data used in our numerical experiments are taken from [24, 33], and all data reported in this paper can be used directly.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants 71371107, 61373027, and 11671228) and the Natural Science Foundation of Shandong Province (Grant ZR2016AM10).