Abstract

A generalized gradient projection filter algorithm for inequality constrained optimization is presented. It has three merits. First, the computational cost is low, since the gradient matrix needs to be computed only once at each iterate. Second, the filter technique is used instead of a penalty function to handle the constraints. Third, the algorithm is globally convergent and locally superlinearly convergent under mild conditions.

1. Introduction

Optimization problems arise frequently in management, engineering design, transportation, national defence, and many other fields, so efficient algorithms for them are important. We consider the following nonlinear inequality constrained optimization problem:

min f(x)  subject to  g_j(x) ≤ 0,  j ∈ J = {1, 2, …, m},   (1)

where x ∈ R^n; we assume that f: R^n → R and g_j: R^n → R (j ∈ J) are continuously differentiable.

In 2002, Fletcher and Leyffer [1] proposed a filter method for nonlinear inequality constrained optimization which does not require any penalty function. The main idea is that a trial point is accepted if it improves either the objective function or the constraint violation. Fletcher et al. [2, 3] and Gonzaga et al. [4] proved that the method is globally convergent. More recently, the filter technique has been extended by Wächter and Biegler [5, 6] and Chin [7] to line search methods and by Su [8] to the SQP method.

In this paper, we modify the method given by Wang et al. [9] and propose a generalized gradient projection filter algorithm for inequality constrained optimization that allows an arbitrary initial point. The paper is organized as follows. In Section 2, we first review the filter method and some definitions related to the generalized gradient projection and then state an algorithm for problem (1). The global convergence and the rate of convergence of the algorithm are discussed in Sections 3 and 4, respectively. In the last section, we report numerical tests.

2. Preliminaries and a Filter Algorithm

Let h(x) be the constraint violation function; that is,

h(x) = ‖g(x)⁺‖,  where g_j(x)⁺ = max{0, g_j(x)}, j ∈ J,   (2)

so that h(x) = 0 if and only if x is feasible.
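In code, h can be evaluated directly from the vector of constraint values; the sketch below assumes the Euclidean norm in (2):

```python
import numpy as np

def violation(g_vals):
    """Constraint violation h(x) = ||g(x)^+|| of (2), with the Euclidean
    norm assumed; g_j(x)^+ = max{0, g_j(x)}, so h(x) = 0 iff x is feasible."""
    return float(np.linalg.norm(np.maximum(np.asarray(g_vals, dtype=float), 0.0)))
```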

Definition 1. A pair (h_k, f_k) obtained at iteration k dominates another pair (h_l, f_l) if and only if h_k ≤ h_l and f_k ≤ f_l hold.

Definition 2. A filter is a list of pairs (h_l, f_l) such that no pair dominates any other. A pair (h_k, f_k) is said to be acceptable for the filter if it is not dominated by any pair in the filter.

We use F_k to denote the set of iteration indices j such that (h_j, f_j) is an entry in the current filter. A trial point x = x_k + t d_k, where t is the step size, is said to be “acceptable for the filter” if and only if

h(x) ≤ (1 − γ)h_j  or  f(x) ≤ f_j − γh_j   (3)

holds for all j ∈ F_k, where γ ∈ (0, 1) is close to zero. We may also “update the filter,” which means that the pair (h_k, f_k) is added to the list of pairs in the filter, and any pairs in the filter that are dominated by (h_k, f_k) are removed.
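Both operations are easy to state in code. The following Python sketch reuses violation() above; the envelope form in acceptable() mirrors (3), with γ = 10⁻⁴ as an assumed default:

```python
def acceptable(h_new, f_new, filter_set, gamma=1e-4):
    """Filter test (3): the pair (h_new, f_new) must improve either the
    violation or the objective, by the margin gamma, against every
    entry (h_j, f_j) currently in the filter."""
    return all(h_new <= (1.0 - gamma) * h_j or f_new <= f_j - gamma * h_j
               for (h_j, f_j) in filter_set)

def update_filter(filter_set, h_new, f_new):
    """Add (h_new, f_new) and remove every entry it dominates, i.e.,
    every (h_j, f_j) with h_new <= h_j and f_new <= f_j (Definition 1)."""
    kept = [(h_j, f_j) for (h_j, f_j) in filter_set
            if not (h_new <= h_j and f_new <= f_j)]
    kept.append((h_new, f_new))
    return kept
```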

However, relying solely on this criterion could result in convergence to a feasible but nonoptimal point. To prevent this, we employ the following sufficient reduction criterion.

We denote Δf_k = f(x_k) − f(x_k + d_k) and Δl_k = −∇f(x_k)ᵀd_k as the actual reduction and the linear reduction, respectively, at x_k. The sufficient reduction condition for d_k takes the form

Δf_k ≥ σΔl_k,   (4)

where σ ∈ (0, 1) is a preassigned parameter.
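Condition (4) is a one-line test in code; σ = 0.1 below is only an assumed value for the preassigned parameter:

```python
def sufficient_reduction(actual_red, linear_red, sigma=0.1):
    """Condition (4): the actual reduction f(x_k) - f(x_k + d_k) must be
    at least the fraction sigma of the predicted linear reduction."""
    return actual_red >= sigma * linear_red
```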

At the current iterate x_k, define the ε-active constraint index set and the corresponding matrix of constraint gradients; the search direction d_k is then given by (5), where H_k is a given symmetric positive definite matrix.

The correction direction, given by (6), is used whenever a trial point has been rejected.

The following is the algorithm.

Algorithm

(S0) Given a start point x_0 and the algorithm parameters, initialize the filter and H_0. Set k := 0.
(S1) Inner loop A:
 (S1.1) initialize the inner-loop quantities;
 (S1.2) if the inner-loop test is satisfied, record the resulting quantities and go to (S2);
 (S1.3) otherwise, update the inner-loop quantities and go to (S1.2).
(S2) Compute d_k by (5). If d_k = 0 and h(x_k) = 0, then stop.
(S3) Test the direction d_k:
 (S3.1) if the trial point x_k + d_k is acceptable for the filter, go to (S3.2); otherwise, go to (S4);
 (S3.2) if the first acceptance test holds, accept the trial point and go to (S7); otherwise, go to (S3.3);
 (S3.3) if d_k satisfies the sufficient reduction condition (4), accept the trial point and go to (S7); otherwise, go to (S4).
(S4) Compute the correction direction by (6) and reset the step size.
(S5) Inner loop B:
 (S5.1) if the current trial point is acceptable for the filter, go to (S5.2); otherwise, go to (S5.3);
 (S5.2) if the reduction test fails, go to (S5.3); otherwise, go to (S6);
 (S5.3) reduce the step size and go to (S5.1).
(S6) Accept the current trial point as x_{k+1}.
(S7) Update the filter to F_{k+1} and update H_k to H_{k+1} by a quasi-Newton method. Set k := k + 1 and go back to (S1).
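The overall control flow can be sketched as a small Python driver. The true directions (5) and (6) come from the generalized gradient projection formulas of [9] and are not reproduced here, so a plain steepest-descent direction is substituted purely to make the skeleton executable, and a single backtracking loop stands in for inner loop B; violation(), acceptable(), update_filter(), and sufficient_reduction() are the helpers defined above:

```python
def solve(x0, f, grad_f, g, max_iter=100, tol=1e-8, t_min=1e-12):
    """Schematic driver following (S0)-(S7).  The direction below is a
    steepest-descent stand-in, NOT the projection direction (5)."""
    x = np.asarray(x0, dtype=float)
    filter_set = [(violation(g(x)), f(x))]           # (S0) initialize the filter
    for _ in range(max_iter):
        d = -grad_f(x)                               # stand-in for (5)
        if np.linalg.norm(d) < tol and violation(g(x)) < tol:
            return x                                 # (S2) approximate KKT point
        t = 1.0
        while t >= t_min:                            # inner loop B, (S5)
            x_trial = x + t * d
            h_t, f_t = violation(g(x_trial)), f(x_trial)
            # linear reduction of the step t*d; here -grad_f(x)'(t*d) = t*d'd
            if (acceptable(h_t, f_t, filter_set)     # (S5.1) filter test
                    and sufficient_reduction(f(x) - f_t, t * float(d @ d))):
                break                                # accept the step, (S6)
            t *= 0.5                                 # (S5.3) shrink the step
        # If no acceptable step was found, a real implementation would switch
        # to the correction direction (6); here the last tiny step is taken.
        x = x + t * d
        filter_set = update_filter(filter_set, violation(g(x)), f(x))  # (S7)
    return x
```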

3. Global Convergence of the Algorithm

In this section, we assume that the following conditions hold.
(A1) For any x, the set of active constraint gradients {∇g_j(x) : j ∈ I(x)} is linearly independent, where I(x) = {j ∈ J : g_j(x) = 0}.
(A2) For any d ∈ R^n and any k, a‖d‖² ≤ dᵀH_k d ≤ b‖d‖² holds, where b ≥ a > 0 are constants.
(A3) The sequence {x_k} generated by the algorithm remains in a closed, bounded subset X ⊂ R^n.
(A4) f and g_j (j ∈ J) are twice continuously differentiable on X.

Similar to [9], the following theorem and lemma hold.

Theorem 3. If d_k = 0 and h(x_k) = 0 hold, then x_k is a KKT point of problem (1).
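Here x_k being a KKT point of problem (1) means, in the standard form for inequality constraints, that there exists a multiplier vector u ∈ R^m such that

```latex
\nabla f(x_k) + \sum_{j=1}^{m} u_j \nabla g_j(x_k) = 0, \qquad
u_j \ge 0, \quad g_j(x_k) \le 0, \quad u_j\, g_j(x_k) = 0, \quad j = 1, \dots, m.
```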

Lemma 4. Consider

According to [8], the following lemma holds.

Lemma 5. Inner loop A terminates in finitely many steps.

Lemma 6. If x_k is not a KKT point of problem (1), then d_k ≠ 0 and ∇f(x_k)ᵀd_k < 0.

Proof. Since x_k is not a KKT point, either the stationarity condition or the complementarity condition fails at x_k; thus d_k ≠ 0. From Lemma 4, we then obtain ∇f(x_k)ᵀd_k < 0, so both conclusions hold.

Lemma 7. Let x* be a cluster point of the sequence {x_k} generated by the algorithm. If x* is not a KKT point of problem (1), then there exists a constant such that the acceptance test holds whenever the step size is sufficiently small.

Proof. From the definition of d_k and assumption (A2), we obtain the first estimate. From it, we obtain the second estimate, and it follows that the conclusion holds when the step size is sufficiently small.

Lemma 8. Inner loop B terminates in finitely many steps.

Proof. From Lemma 7, the required inequality holds when the step size is sufficiently small. Suppose, by contradiction, that the conclusion is false; then the algorithm cycles indefinitely between (S5.1) and (S5.3), so the step size tends to zero and the trial point is never acceptable for the filter. We consider the following two cases.
Case 1 (h(x_k) = 0). From Lemma 6, we have d_k ≠ 0 and ∇f(x_k)ᵀd_k < 0. So, when the step size is sufficiently small, the trial point satisfies the acceptance test, which proves that it is acceptable for the filter.
Case 2 (h(x_k) > 0). Similarly, when the step size is sufficiently small, the corresponding estimate holds. Since x_k is acceptable for the filter, for all j ∈ F_k either the violation test or the objective test in (3) holds. Since the trial point is not acceptable for the filter, the opposite inequalities hold for some entry. If the first inequality fails, we obtain a contradiction with (17). If the second inequality fails, then, when the step size is sufficiently small, we obtain a contradiction with (18).
Based on the above analysis, the claim holds.

The above lemmas show that the algorithm is well defined. We now turn to proving its global convergence.

Theorem 9. Let assumptions (A1)–(A4) hold, and let x* be a cluster point of the sequence {x_k} generated by the algorithm. There are two possible cases: (i) the iteration terminates at a KKT point; (ii) every accumulation point of {x_k} is a KKT point.

Proof. We only need to prove case (ii). Since x* is a cluster point of the sequence generated by the algorithm, let {x_k}, k ∈ K, be any subsequence converging to x*.
We first show that x* is a feasible point. Assume, to the contrary, that h(x*) > 0. Let k and l be any two adjacent indices in K with k < l. Then the filter entries along the subsequence are bounded away from zero, and because each iterate is acceptable for the filter, the objective value decreases by a fixed amount at each such step. Since {f_k}, k ∈ K, is monotonically decreasing and f is bounded below on X, the total decrease is bounded above. However, summing the per-step decreases over all indices in K gives an unbounded quantity, which contradicts this bound. Thus h(x*) = 0, and hence x* is feasible.
Next we show that x* is a KKT point. By the construction of the algorithm, there are two cases: either infinitely many iterates are generated from the direction d_k, or infinitely many are generated from the correction step. We prove the claim in each case.
Case 1. Suppose that infinitely many points are generated from d_k. Then the sufficient reduction condition (4) yields a positive decrease of f at each such step. Since {f_k} is bounded below, these decreases tend to zero; thus d_k → 0 along the subsequence, which means that x* satisfies the first-order condition. Since x* is a feasible point, x* is a KKT point.
Case 2. Suppose that infinitely many points are generated from the correction step. A similar argument shows that the corresponding quantities tend to zero; since x* is a feasible point, x* is a KKT point.
Combining Cases 1 and 2, the claim holds.

4. The Rate of Convergence

In this section, we discuss the convergence rate of the algorithm. We need the following stronger assumptions.
(A5) The second-order sufficient conditions hold at the KKT pair (x*, u*) of problem (1); that is, dᵀ∇²ₓₓL(x*, u*)d > 0 for all d ≠ 0 with ∇g_j(x*)ᵀd = 0, j ∈ I(x*).
(A6) ‖(H_k − ∇²ₓₓL(x*, u*))d_k‖ / ‖d_k‖ → 0 as k → ∞.
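With the Lagrangian written out, (A5) takes the following standard form:

```latex
L(x, u) = f(x) + \sum_{j=1}^{m} u_j g_j(x), \qquad
d^{\top} \nabla_{xx}^{2} L(x^{*}, u^{*})\, d > 0
\quad \text{for all } d \neq 0 \text{ such that } \nabla g_j(x^{*})^{\top} d = 0,\ j \in I(x^{*}).
```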

Theorem 10. Suppose that assumptions (A1)–(A6) hold; then x_{k+1} = x_k + d_k for k large enough. Therefore the algorithm is superlinearly convergent.

Proof. Suppose that x_k is acceptable for the filter; we will show that, for k large enough, x_k + d_k is acceptable for the filter and satisfies the sufficient reduction condition.
First we prove that x_k + d_k is acceptable for the filter. If h(x_k + d_k) = 0, then x_k + d_k is already acceptable for the filter. Otherwise, we need to bound the constraint violation at the trial point. Expanding g along d_k, and using assumptions (A2), (A3), and (A5), the violation at x_k + d_k is of higher order in ‖d_k‖. Hence, for k large enough, x_k + d_k is acceptable for the filter.
Now we show that, when k is large enough, d_k satisfies the sufficient reduction condition (4). Expanding f along d_k and using assumptions (A3) and (A5), the actual reduction agrees with the linear reduction up to higher-order terms. Hence, for k large enough, d_k satisfies the sufficient reduction condition.

Based on Theorem 10, we see that, when k is large enough, the algorithm implements the Newton steps with unit step size; thus the algorithm is superlinearly convergent.

5. Numerical Tests

In this section, we give some numerical results for our algorithm. We update the matrix H_k by the BFGS formula, with the algorithm parameters set to the preassigned values.
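A sketch of such an update (the Powell damping safeguard is our addition to keep H_k positive definite when the curvature condition sᵀy > 0 fails; the text above only states that the BFGS formula is used):

```python
def bfgs_update(H, s, y):
    """BFGS update of H_k, with s = x_{k+1} - x_k and y the corresponding
    gradient difference.  Powell's damping (assumed safeguard) keeps the
    updated matrix positive definite when s'y is too small."""
    Hs = H @ s
    sHs = float(s @ Hs)
    sy = float(s @ y)
    if sy < 0.2 * sHs:                       # curvature too weak: damp y
        theta = 0.8 * sHs / (sHs - sy)
        y = theta * y + (1.0 - theta) * Hs
        sy = float(s @ y)
    return H - np.outer(Hs, Hs) / sHs + np.outer(y, y) / sy
```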

Example 11. Consider the first test problem; the algorithm terminates in 16 iterations.

Example 12 (see [8]). Consider the test problem from [8]; the algorithm terminates in 14 iterations.

Example 13 (see [10]). Consider the test problem from [10]. The stated point is a minimizer with the known objective value; starting from the chosen initial point, the algorithm terminates in 6 iterations.

Example 14 (see [11]). Consider the test problem from [11]. Starting from the chosen initial point, the algorithm reaches the minimizer with the known objective value in 40 iterations.
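As a smoke test of the schematic solver from Section 2 (this toy problem is ours, not one of Examples 11–14), the driver recovers the unconstrained minimizer, which happens to be feasible, so the steepest-descent stand-in direction suffices:

```python
# Toy problem: minimize f(x) = (x1 - 1)^2 + (x2 - 1)^2
# subject to g(x) = x1 + x2 - 4 <= 0.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 1.0)])
g = lambda x: np.array([x[0] + x[1] - 4.0])

x_star = solve(np.array([3.0, 0.0]), f, grad_f, g)
print(x_star)  # expected to approach (1.0, 1.0)
```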

Acknowledgment

This research was supported by the National Natural Science Foundation of China (no. 11271128).