Abstract

An improved filter-SQP algorithm with an active set for constrained finite minimax problems is proposed. First, an active constraint subset is obtained by a pivoting operation procedure. Then, a new quadratic programming (QP) subproblem is constructed based on this active constraint subset. The main search direction is obtained by solving this QP subproblem, which is feasible at each iteration point; owing to the filter technique, no penalty function is needed. Under suitable conditions, the global convergence of the algorithm is established. Finally, numerical results are reported to show the effectiveness of the proposed algorithm.

1. Introduction

Many real-life problems in engineering, economics, management, finance, and other fields can be described as minimax problems, which minimize the maximum of a finite collection of objective functions (see, e.g., [1, 2]). In this paper, we consider the following constrained minimax optimization problem:

    min F(x)   s.t.   g_j(x) <= 0,  j in J = {1, ..., m},        (1)

where F(x) = max{ f_i(x) : i in I = {1, ..., l} } and the functions f_i : R^n -> R, i in I, and g_j : R^n -> R, j in J, are continuously differentiable. Obviously, the objective function F is not necessarily differentiable even if all the f_i are differentiable. Consequently, the classical algorithms for smooth optimization problems may fail to reach an optimum if they are applied directly to the constrained minimax problem (1). Given the importance of minimax problems, many methods have been proposed for solving problem (1). For example, in [3, 4] the minimax problem is viewed as an unconstrained nonsmooth optimization problem, which can be solved by general methods such as subgradient methods, bundle methods, and cutting-plane methods. Another class of methods, the so-called smoothing methods, transforms the minimax problem (1) into the following equivalent smooth constrained nonlinear programming problem:

    min_{(x,z)} z   s.t.   f_i(x) <= z, i in I;   g_j(x) <= 0, j in J,        (4)

where z is an artificial variable. From problem (4), the Karush-Kuhn-Tucker (KKT) conditions of (1) can be stated as follows:

    sum_{i in I} lambda_i grad f_i(x) + sum_{j in J} mu_j grad g_j(x) = 0,
    sum_{i in I} lambda_i = 1,   lambda_i >= 0,   lambda_i (f_i(x) - F(x)) = 0,  i in I,        (5)
    g_j(x) <= 0,   mu_j >= 0,   mu_j g_j(x) = 0,  j in J,

where lambda = (lambda_i, i in I) and mu = (mu_j, j in J) are the corresponding multiplier vectors. In view of the equivalence between the KKT points of (4) and the stationary points of (1), many methods focus on finding a stationary point of problem (1), namely, solving (5), and a number of methods have been proposed to solve the minimax problem [5-13].
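As a concrete illustration of the smooth reformulation (4), the following Python sketch minimizes max{x^2, (x-2)^2} via the epigraph form min z s.t. f_i(x) <= z. The component functions are hypothetical examples, and scipy's general-purpose SLSQP solver merely stands in for a generic NLP solver; none of this reproduces the paper's own method.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical component functions f_i of the minimax objective.
f = [lambda x: x[0] ** 2, lambda x: (x[0] - 2.0) ** 2]

# Epigraph reformulation (4): variables y = (x, z), minimize z subject to
# f_i(x) <= z.  SLSQP expects inequality constraints in the form fun(y) >= 0.
objective = lambda y: y[-1]
constraints = [{"type": "ineq", "fun": (lambda y, fi=fi: y[-1] - fi(y[:-1]))}
               for fi in f]

result = minimize(objective, x0=np.array([0.0, 5.0]),
                  method="SLSQP", constraints=constraints)
# For this convex example the smooth solver should recover the minimax
# solution x = 1 with optimal value F(x) = 1.
```

The nondifferentiable max operator has disappeared: the solver only ever sees the smooth functions f_i and the linear objective z.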
For finding the minima of functions that are not necessarily differentiable, the algorithms in [5-8] combine a nonmonotone line search with a second-order correction technique, which can effectively avoid the Maratos effect; many other effective algorithms for solving minimax problems have also been presented, such as [10-13].

To solve the minimax problem efficiently, that is, to save time and reduce both the computational effort and the number of iterations, we aim for a fast convergent algorithm. It is well known that sequential quadratic programming (SQP) can be considered one of the best nonlinear programming methods for smooth constrained optimization problems (see, e.g., [14-20]); over a large number of test problems it outperforms other nonlinear programming methods in terms of efficiency, accuracy, and percentage of successful solutions. Hence, some authors have directly applied SQP techniques to minimax problems and obtained satisfactory results (see, e.g., [5, 9]). In a typical SQP method for the smooth minimization problem, the main step is to solve the following quadratic program:

    min  grad f(x)^T d + (1/2) d^T B d   s.t.   g_j(x) + grad g_j(x)^T d <= 0,  j in J,

where J is an index set and B is an approximation of the Hessian matrix of the Lagrangian function. Since the minimax objective function F contains the max operator, it is continuous but nondifferentiable even if every component function is differentiable; hence, this method may fail to reach an optimum for the minimax problem. In view of this, and in combination with (4), one considers, in a similar way to [5], the following quadratic program obtained by introducing an auxiliary variable z:

    min  z + (1/2) d^T H d   s.t.   f_i(x) + grad f_i(x)^T d <= z, i in I;   g_j(x) + grad g_j(x)^T d <= 0, j in J,        (8)

where H is a symmetric positive definite matrix. However, it is well known that the solution of (8) may not be a feasible descent direction and cannot avoid the Maratos effect. Recently, many researchers have extended the popular SQP scheme to minimax problems (see [21-25], etc.). Jian et al. [22] and Hu et al. [23] perform a pivoting operation to generate an epsilon-active constraint subset associated with the current iteration point. At each iteration of their proposed algorithms, a main search direction is obtained by solving a reduced quadratic program which always has a solution.

As an alternative to merit functions, Fletcher et al. [26] proposed in 2002 the filter-SQP method for inequality constrained optimization problems, in place of the classic merit-function SQP methods. The main idea of this method is that a trial point is accepted if it improves either the objective function or the constraint violation. Furthermore, the global and superlinear local convergence of a trust-region filter-SQP method was shown by Ulbrich [27]. In recent years, filter methods have attracted considerable attention, and many filter-based algorithms have been proposed [28-30].

In this paper, an improved filter-SQP algorithm with an active set for the constrained minimax problem (1) is proposed. At each iteration of our algorithm, an active constraint subset is first obtained by a pivoting operation procedure; then, a new quadratic programming (QP) subproblem is constructed based on this active constraint subset. To obtain a main search direction for (1), we only need to solve this QP subproblem, which is feasible at each iteration point; thanks to the filter technique, no penalty function is needed. Furthermore, under some mild conditions, the global convergence of our algorithm is established.

The remainder of this paper is organized as follows. An improved filter-SQP algorithm is proposed in Section 2. In Section 3, we prove that the algorithm is globally convergent. Some preliminary numerical tests are reported in Section 4, and concluding remarks are given in the last section.

2. Improved Filter-SQP Algorithm

As in the traditional filter technique, define the violation function

    h(x) = || g(x)_+ ||,   where   g(x)_+ = ( max{ g_j(x), 0 },  j in J ),

for some norm ||.||. It is easy to see that h(x) = 0 if and only if x is a feasible point.
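For instance, with the commonly used l1 form of the violation (an assumption here; the paper's exact norm is not reproduced), h can be computed as follows:

```python
def violation(g_vals):
    """l1 constraint violation h(x) = sum_j max(0, g_j(x)) for constraints
    g_j(x) <= 0; it vanishes exactly at feasible points."""
    return sum(max(0.0, g) for g in g_vals)

# h = 0 at a feasible point, positive otherwise.
```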

Definition 1. A pair (h(x), F(x)) is said to dominate another pair (h(y), F(y)) if and only if both h(x) <= h(y) and F(x) <= F(y).

Definition 2. A filter is a list of pairs such that no pair dominates any other pair. A pair is said to be acceptable for the filter if it is not dominated by any pair in the filter.

Let the current filter consist of pairs (h_j, F_j), indexed by the set of iteration indices whose pairs are entries in the filter. Then, we say that a point x is acceptable for the filter if and only if

    h(x) <= (1 - gamma) h_j   or   F(x) <= F_j - gamma h_j   for every pair (h_j, F_j) in the filter,

where gamma > 0 is close to zero. We may also update the filter, which means that the pair (h(x), F(x)) is added to the list of pairs in the filter, and any pairs in the filter that are dominated by it are removed.
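Definitions 1 and 2, together with the margin-based acceptability test, can be sketched in Python as follows; the margin parameter gamma and the exact inequality forms are assumptions consistent with standard filter methods, not a transcription of the paper's formulas.

```python
class Filter:
    """A filter of (h, f) pairs: h = constraint violation, f = objective."""

    def __init__(self, gamma=1e-5):
        self.gamma = gamma   # small margin keeping new entries off the envelope
        self.entries = []    # list of mutually non-dominated (h, f) pairs

    def acceptable(self, h, f):
        # (h, f) is acceptable if, for every entry, it improves either the
        # violation or the objective by the gamma-margin.
        return all(h < hj - self.gamma * hj or f < fj - self.gamma * hj
                   for hj, fj in self.entries)

    def add(self, h, f):
        # Drop entries dominated by the newcomer, then insert it.
        self.entries = [(hj, fj) for hj, fj in self.entries
                        if not (h <= hj and f <= fj)]
        self.entries.append((h, f))
```

A pair equal to an existing entry is rejected, while a pair that strictly improves either measure is accepted, matching the dominance rule of Definition 1.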

As a criterion for accepting or rejecting a trial step, we use the filter technique combined with the SQP method.

Algorithm A

Step 0. Given initial point , a symmetric positive definite matrix . Choose parameters , , , and . Set , and with some .

Step 1. Computation of an active constraint set is as follows.

Step  1.1. Set and .

Step  1.2. Generate an -active constraint subset , and matrix by

Step  1.3. If , set and go to Step 2; otherwise let , set , and repeat Step 1.2.

Step 2. Compute () by solving the quadratic problem (14) at . Let be the corresponding KKT multiplier vector. If , then stop.

Step 3. Compute by solving the quadratic problem (15), where , . Let be the corresponding KKT multiplier vector. If , set ; otherwise, let .

Step 4. Initial line search: set , .

Step 5. If is not acceptable for the filter, go to Step 6; otherwise let , , and add to the filter; go to Step 7.

Step 6. Set , , and go to Step 5.

Step 7. Update filter to , and obtain by updating the positive definite matrix using some quasi-Newton formulas. Set . Go back to Step 1.
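Steps 4-6 above amount to a backtracking line search governed by filter acceptability rather than by a merit function. The following is a minimal sketch; the trial-point form x + alpha*d0 + alpha^2*d1, with d1 a second-order correction, is an assumption in line with common SQP correction schemes, not the paper's exact update.

```python
import numpy as np

def filter_line_search(x, d0, d1, h, f, is_acceptable,
                       beta=0.5, alpha_min=1e-12):
    """Shrink the step size until the trial point is acceptable for the
    filter (Steps 4-6 of Algorithm A, sketched under the assumptions above).
    h and f evaluate the violation and the objective; is_acceptable queries
    the current filter."""
    alpha = 1.0                              # Step 4: start with a full step
    while alpha >= alpha_min:
        x_trial = x + alpha * d0 + alpha ** 2 * d1
        if is_acceptable(h(x_trial), f(x_trial)):
            return x_trial, alpha            # Step 5: trial point accepted
        alpha *= beta                        # Step 6: reduce step, retry Step 5
    raise RuntimeError("step size became too small")
```

Because acceptance only requires improving either the violation or the objective, a full step alpha = 1 is accepted far more often than under a penalty-function test, which is what lets the method avoid the Maratos effect.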

Remark 3. In Step 1, by using the pivoting operation POP, we obtain an active set . Based on this epsilon-active constraint subset, we construct a new QP (14), which is helpful in establishing the convergence of our algorithm.
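The pivoting operation POP itself is not reproduced here, but its effect, shrinking epsilon until the epsilon-active constraint gradients pass a regularity test, can be sketched as follows. The smallest-singular-value test is an assumption standing in for the paper's pivoting criterion on a Gram-type matrix, and the parameter values are illustrative.

```python
import numpy as np

def eps_active_set(g_vals, grads, eps0=0.5, shrink=0.5, sigma=1e-6):
    """Return an eps-active index set {j : g_j(x) >= -eps} whose constraint
    gradients are sufficiently independent, shrinking eps as in Steps 1.1-1.3."""
    eps = eps0
    while eps > 1e-12:
        active = [j for j, g in enumerate(g_vals) if g >= -eps]
        if not active:
            return active, eps
        A = np.array([grads[j] for j in active], dtype=float)
        # Regularity test: smallest singular value of the gradient matrix.
        if np.linalg.svd(A, compute_uv=False)[-1] >= sigma:
            return active, eps
        eps *= shrink                # tighten the active band and retry
    return active, eps
```

Working with this reduced index set instead of all m constraints is what keeps the QP subproblem (14) small and, by construction, consistent.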

Remark 4. Steps 1.1-1.3 and Steps 4-6 constitute the inner loop iterations, while Steps 1-7 constitute the outer loop steps.

3. Global Convergence of Algorithm

In this section, we analyze the convergence of the algorithm. The following general assumptions are supposed to hold throughout this paper.
(H1) The functions f_i, i in I, and g_j, j in J, are continuously differentiable.
(H2) At every point x, the gradients of the active constraints form a linearly independent set of vectors.
(H3) There exist constants 0 < a <= b such that a ||d||^2 <= d^T B_k d <= b ||d||^2 for all iterations k and all d in R^n.

Similar to Lemmas 2.1 and 2.3 in [22], the following lemma holds which describes some beneficial properties of the pivoting operation POP.

Lemma 5. Suppose that H1-H3 hold and let . Then
(1) the pivoting operation POP can be finished in a finite number of computations; that is, the loop between Step 1.2 and Step 1.3 cannot repeat infinitely often;
(2) if the sequence of points is bounded, then there exists a constant such that the associated sequence of parameters generated by POP satisfies for all .

Lemma 6. Suppose that H1-H3 hold, the matrix is symmetric positive definite, and is an optimal solution of (14). Then
(1) , ;
(2) if , then is a KKT point of problem (1);
(3) if , then ; moreover, is a descent direction of at the point .

Lemma 7. If , then Steps 4-6 of Algorithm A are well defined; that is, the inner loop between Step 5 and Step 6 terminates finitely.

Proof. By contradiction, if the conclusion is false, then Algorithm A will run infinitely between Step 5 and Step 6, so we have and the point is not acceptable for the filter. The following two cases need to be considered.
Case 1. Consider .
From the definition of , we can obtain Since is a solution of the problem (14), then Together with , there exists a constant , such that Moreover, for and , we have With (19) and (20), we conclude that must be acceptable for the filter and , which is a contradiction.
Case 2. Consider . By Taylor's formula, we have where denotes some point on the line segment from to . Since is acceptable for the filter, we have or . Similar to Case 1, we can also obtain the relation . By assumption, is not acceptable for the filter, and we have . For the point , if inequality (22) holds, then by and (18) we have which contradicts (25). If inequality (23) holds, then by and (18) we have which contradicts (26). From the above analysis, the desired conclusion holds.

Lemma 8. Suppose that infinitely many points are added to the filter; then .

In the remainder of this section, we establish the global convergence of the algorithm.

Theorem 9. Suppose that H1-H3 hold, and let be the sequence of iterates produced by Algorithm A. Then the algorithm either stops at a KKT point of problem (1) in a finite number of steps or generates an infinite sequence of points such that each accumulation point of is a KKT point of problem (1).

Proof. The first statement is easy to show, the only stopping point being in Step 2. Thus, assume that the algorithm generates an infinite sequence , and since is bounded under the above assumptions, we can assume without loss of generality that there exists an infinite index set such that Obviously, according to Lemma 6, it is only necessary to prove that .
Let ; two cases need to be considered.
Case 1. is an infinite index set. Suppose by contradiction that ; since in view of , , we obtain It is shown that the following corresponding quadratic programming subproblem (32) at has a nonempty feasible set. Moreover, from and Theorem 2.4 in [9], it is not difficult to show that is the unique solution of (32). So, it holds that Considering the KKT conditions of the problem (14), we have which contradicts the definition of .
Case 2. is a finite index set; that means holds for all sufficiently large . There exists a constant such that, for , we have Then, for some integer , we have That means . Therefore, is a KKT point of problem (1).

4. Numerical Experiments

In this section, we select some problems from [9, 10] to show the efficiency of the algorithm proposed in Section 2. The preliminary numerical experiments were carried out on an Intel(R) Celeron(R) 2.40 GHz computer. The code of the proposed algorithm is written in MATLAB 7.0, and the Optimization Toolbox is used to solve the quadratic programs (14) and (15). The results show that the proposed algorithm is efficient.

In the numerical experiments, the parameters are chosen as follows.
(1) The scalar parameters are fixed, and the initial matrix is taken to be the unit matrix.
(2) The matrix is updated by a BFGS formula similar to that in [15].
(3) In the implementation, the stopping criterion of Step 2 is replaced by a tolerance test.
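The update formula of item (2) is not reproduced above; as a hedged stand-in, the widely used Powell-damped BFGS update, which preserves positive definiteness of the iteration matrices as required by assumption H3, can be sketched as follows (the damping threshold 0.2 is the conventional choice, not a value taken from the paper):

```python
import numpy as np

def damped_bfgs(B, s, y, theta_min=0.2):
    """Powell-damped BFGS update: replaces y by a convex combination r of y
    and B s whenever s'y is too small, so the updated matrix stays positive
    definite.  s = x_{k+1} - x_k, y = difference of Lagrangian gradients."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= theta_min * sBs else (1.0 - theta_min) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs       # damped difference vector
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```

When s'y is safely positive the damping is inactive (theta = 1) and the formula reduces to the ordinary BFGS update.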

The algorithm has been tested on some problems from [9, 10]. The results are summarized in Tables 1 and 2. The columns of these tables have the following meanings: No. denotes the number of the test problem in [9, 10]; the next three columns give the dimension of the problem, the number of objective functions, and the number of inequality constraints; NT is the number of iterations; IP is the initial point; LWM denotes the proposed Algorithm A; XUE denotes the method in [9]; RNM denotes the method in [10]; ZZM denotes the method in [21]; and FV is the final value of the objective function.

In Table 2, the performance of the algorithm LWM is compared with the other algorithms. For problems 1 and 2, the results we obtain are slightly better than those in [9] when an appropriate initial point is chosen. From the iteration results for test problems 3 to 7, our method appears somewhat more efficient than those in [10, 21] in terms of the number of iterations.

5. Concluding Remarks

In this paper, we propose a filter method combined with a sequential quadratic programming algorithm for inequality constrained minimax problems. With the help of a pivoting operation procedure, an active constraint subset is first obtained. At each iteration, a main search direction is obtained by solving only one quadratic programming subproblem, which is feasible at each iteration point; owing to the filter technique, no penalty function is needed. Then, a correction direction is obtained by solving another quadratic program, so as to avoid the Maratos effect and to guarantee the global convergence properties under mild conditions. The preliminary numerical results also show that the proposed algorithm is effective.

However, to establish the global convergence of our algorithm, we impose some rather strong conditions, such as hypotheses H2-H3; we hope to remove them in future work. In addition, some problems are worthy of further study, such as extending the algorithm to problems with both inequality and equality constraints. Moreover, the main search direction could be obtained by other techniques, for example, the sequential systems of linear equations technique.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are deeply indebted to the editor Professor Wenyu Sun and the anonymous referees, whose insightful comments helped the authors a lot to improve the quality of the paper. The first author would also like to thank Professor Zhibin Zhu for valuable work on the numerical experiments. This research was supported by the Scientific Research Fund of Hunan Provincial Education Department (nos. 12A077, 12C0743, 13C453, and 14C0609).