Research Article  Open Access
Improved Filter-SQP Algorithm with Active Set for Constrained Minimax Problems
Abstract
An improved filter-SQP algorithm with an active set for constrained finite minimax problems is proposed. First, an active constraint subset is obtained by a pivoting operation procedure. Then, a new quadratic programming (QP) subproblem is constructed based on this active constraint subset. The main search direction is obtained by solving the QP subproblem, which is feasible at each iteration point; by using the filter technique, no penalty function is needed. Under suitable conditions, the global convergence of the algorithm is established. Finally, numerical results are reported to show the effectiveness of the proposed algorithm.
1. Introduction
Many real-life problems in engineering, economics, management, finance, and other fields can be described as minimax problems, which seek the minimum of the maximum of a family of functions (see, e.g., [1, 2]). In this paper, we consider the following constrained minimax optimization problem:

\[
\min_{x \in \mathbb{R}^n} F(x) \quad \text{s.t.} \quad g_j(x) \le 0, \ j \in J, \tag{1}
\]

where the functions $f_i, g_j : \mathbb{R}^n \to \mathbb{R}$, $i \in I$, $j \in J$, are continuously differentiable. For convenience, we denote

\[
F(x) = \max\{f_i(x),\ i \in I\}, \qquad I = \{1, \dots, l\}, \qquad J = \{1, \dots, m\}.
\]

Obviously, the objective function $F$ is not necessarily differentiable even if the $f_i$, $i \in I$, are all differentiable. Consequently, classical algorithms for smooth optimization problems may fail to reach an optimum if they are applied directly to the constrained minimax problem (1). In view of the value of minimax problems, many methods have been proposed for solving problem (1). For example, in [3, 4], the minimax optimization problem is viewed as an unconstrained nonsmooth optimization problem, which can be solved by general methods such as subgradient methods, bundle methods, and cutting plane methods. Another class of methods, the so-called smoothing methods, transforms the minimax problem (1) into an equivalent smooth constrained nonlinear programming problem:

\[
\min_{(x, z) \in \mathbb{R}^{n+1}} z \quad \text{s.t.} \quad f_i(x) - z \le 0, \ i \in I, \qquad g_j(x) \le 0, \ j \in J, \tag{4}
\]

where $z$ is an artificial variable. From problem (4), the Karush-Kuhn-Tucker (KKT) conditions of (1) can be stated as follows:

\[
\sum_{i \in I} \lambda_i \nabla f_i(x) + \sum_{j \in J} \mu_j \nabla g_j(x) = 0, \qquad \sum_{i \in I} \lambda_i = 1,
\]
\[
\lambda_i \ge 0, \quad \lambda_i \bigl(f_i(x) - F(x)\bigr) = 0, \quad i \in I, \tag{5}
\]
\[
\mu_j \ge 0, \quad \mu_j g_j(x) = 0, \quad g_j(x) \le 0, \quad j \in J,
\]

where $\lambda = (\lambda_i,\ i \in I)$ and $\mu = (\mu_j,\ j \in J)$ are the corresponding multiplier vectors. In view of the equivalence between the KKT points of (4) and the stationary points of (1), many methods focus on finding a stationary point of problem (1), namely, solving (5), and many such methods have been proposed [5–13].
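To make the epigraph reformulation (4) concrete, the following minimal Python sketch uses a made-up two-piece example (the functions f1 and f2 are our illustrative assumptions, not from the paper) to show that the smooth problem in (x, z) carries the same information as the nonsmooth objective F:

```python
# Minimal illustration of the minimax objective F(x) = max_i f_i(x)
# and its smooth epigraph reformulation (minimize z s.t. f_i(x) <= z).
# The pieces f1, f2 are hypothetical examples, not from the paper.

def f1(x):
    return (x - 1.0) ** 2

def f2(x):
    return (x + 1.0) ** 2

def F(x):
    # Nonsmooth minimax objective: continuous, but not differentiable
    # where the active piece switches (here at x = 0).
    return max(f1(x), f2(x))

def epigraph_feasible(x, z):
    # Feasibility in the smooth reformulation: z must dominate every piece.
    # At any feasible (x, z) we have z >= F(x), and at the optimum z = F(x),
    # so the two problems share their minimizers.
    return f1(x) <= z and f2(x) <= z

print(F(0.0))                        # both pieces equal 1.0 at the kink
print(epigraph_feasible(0.0, 1.0))   # True: z = F(x) is feasible
print(epigraph_feasible(0.0, 0.5))   # False: z lies below the max
```

The artificial variable z simply lifts the max operator into linear constraints, which is what makes standard smooth machinery (KKT conditions, SQP) applicable.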
For finding the minima of convex functions that are not necessarily differentiable, the methods in [5–8] combine a nonmonotone line search with a second-order correction technique, which can effectively avoid the Maratos effect; many other effective algorithms for solving minimax problems have also been presented, such as [10–13].
To solve the minimax problem efficiently, that is, to save time and to reduce both the computational cost and the number of iterations, we aim for a fast convergent algorithm. It is well known that sequential quadratic programming (SQP) can be considered one of the best nonlinear programming methods for smooth constrained optimization problems (see, e.g., [14–20]); it outperforms every other nonlinear programming method in terms of efficiency, accuracy, and percentage of successful solutions over a large number of test problems. Hence, some authors have directly applied SQP techniques to minimax problems and obtained satisfactory results (such as [5, 9]). In a typical SQP method, the main step for a minimization problem is to solve the following quadratic program:

\[
\min_{d} \ \nabla F(x_k)^T d + \frac{1}{2} d^T B_k d \quad \text{s.t.} \quad g_j(x_k) + \nabla g_j(x_k)^T d \le 0, \ j \in J,
\]

where $J$ is an index set and $B_k$ is an approximation of the Hessian matrix of the Lagrangian function. Since the objective function $F$ contains the max operator, it is continuous but nondifferentiable even if every component function is differentiable; hence, this method may fail to reach an optimum for the minimax problem. In view of this, and combining with (4), in a manner similar to [5], one considers the following quadratic program obtained by introducing an auxiliary variable $z$:

\[
\min_{(d, z)} \ z + \frac{1}{2} d^T H_k d \quad \text{s.t.} \quad f_i(x_k) + \nabla f_i(x_k)^T d \le z, \ i \in I, \qquad g_j(x_k) + \nabla g_j(x_k)^T d \le 0, \ j \in J, \tag{8}
\]

where $H_k$ is a symmetric positive definite matrix. However, it is well known that the solution of (8) may not be a feasible descent direction and cannot avoid the Maratos effect. Recently, many researchers have extended the popular SQP scheme to minimax problems (see [21–25], etc.). Jian et al. [22] and Hu et al. [23] perform a pivoting operation to generate an active constraint subset associated with the current iteration point. At each iteration of their proposed algorithms, a main search direction is obtained by solving a reduced quadratic program which always has a solution.
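The structure of subproblem (8) can be sketched numerically as follows. This is a minimal illustration under stated assumptions: a made-up one-dimensional minimax example, H_k taken as the identity, and scipy's general-purpose SLSQP solver used as a stand-in for a dedicated QP solver; it is not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical minimax data: F(x) = max(x^2, (x - 2)^2), no g_j constraints.
xk = 1.0                                     # current iterate (optimal here)
f  = np.array([xk**2, (xk - 2.0)**2])        # f_i(xk) = [1, 1]
gf = np.array([2.0 * xk, 2.0 * (xk - 2.0)])  # grad f_i(xk) = [2, -2]

# QP (8): minimize z + 0.5 * d^T H_k d over v = (d, z),
# subject to f_i(xk) + grad f_i(xk)^T d <= z for all i.
def obj(v):
    d, z = v
    return z + 0.5 * d * d      # H_k = identity (1x1 in this example)

# SLSQP's 'ineq' convention is fun(v) >= 0, i.e. z - f_i - gf_i * d >= 0.
cons = [{'type': 'ineq', 'fun': lambda v, i=i: v[1] - f[i] - gf[i] * v[0]}
        for i in range(2)]

res = minimize(obj, x0=np.array([0.0, 0.0]), constraints=cons, method='SLSQP')
d_k, z_k = res.x
# Since xk is a stationary point of this minimax example, the QP
# returns d_k close to 0 and z_k close to F(xk) = 1.
print(d_k, z_k)
```

In the paper's algorithm, the analogous QPs (14) and (15) are built only over the active constraint subset, which keeps the subproblems smaller and always consistent.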
As an alternative to merit functions, Fletcher et al. [26] proposed in 2002 the filter-SQP method for inequality constrained optimization problems in place of classical merit-function SQP methods. The main idea of this method is that a trial point is accepted if it improves either the objective function or the constraint violation. Furthermore, the global and superlinear local convergence of a trust-region filter-SQP method was shown by Ulbrich [27]. In recent years, the filter method has attracted considerable attention, and many filter-based algorithms have been proposed [28–30].
In this paper, an improved filter-SQP algorithm with an active set for the constrained minimax problem (1) is proposed. At each iteration of our algorithm, an active constraint subset is first obtained by a pivoting operation procedure; then, we construct a new quadratic programming (QP) subproblem based on this active constraint subset. To obtain a main search direction for (1), we only need to solve this QP subproblem, which is feasible at each iteration point; moreover, by using the filter technique, no penalty function is needed. Furthermore, under some mild conditions, the global convergence of our algorithm is established.
The remainder of this paper is organized as follows. An improved filter-SQP algorithm is proposed in Section 2. In Section 3, we prove that the algorithm is globally convergent. Some preliminary numerical tests are reported in Section 4, and concluding remarks are given in the last section.
2. Improved Filter-SQP Algorithm
As in the traditional filter technique, define the violation function as follows:

\[
h(x) = \|g(x)^+\|, \quad \text{where} \quad g_j(x)^+ = \max\{0,\ g_j(x)\}, \ j \in J.
\]

It is easy to see that $h(x) = 0$ if and only if $x$ is a feasible point.
Definition 1. A pair $(h(x_k), F(x_k))$ is said to dominate another pair $(h(x_l), F(x_l))$ if and only if both $h(x_k) \le h(x_l)$ and $F(x_k) \le F(x_l)$.
Definition 2. A filter is a list of pairs $(h(x_l), F(x_l))$ such that no pair dominates any other. A pair $(h(x_k), F(x_k))$ is said to be acceptable for the filter if it is not dominated by any pair in the filter.
We use $\mathcal{F}_k$ to denote the set of iteration indices $l$ such that $(h(x_l), F(x_l))$ is an entry in the current filter. Then, we say that a point $x$ is acceptable for the filter if and only if

\[
h(x) \le (1 - \gamma)\, h(x_l) \quad \text{or} \quad F(x) \le F(x_l) - \gamma\, h(x_l), \quad \text{for all } l \in \mathcal{F}_k,
\]

where $\gamma \in (0, 1)$ is close to zero. We may also update the filter, which means that the pair $(h(x_k), F(x_k))$ is added to the list of pairs in the filter, and any pairs in the filter that are dominated by $(h(x_k), F(x_k))$ are removed.
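The dominance test, acceptance test, and filter update described above can be sketched as follows (a minimal illustration; the list representation and the name of the margin parameter `gamma` are our assumptions):

```python
# A filter entry is a pair (h, F): constraint violation and objective value.

def dominates(p, q):
    """Pair p dominates pair q iff p is no worse in both components."""
    return p[0] <= q[0] and p[1] <= q[1]

def acceptable(pair, filt, gamma=1e-5):
    # A trial pair is acceptable if, against every filter entry, it
    # sufficiently improves either the violation h or the objective F;
    # the small margin gamma provides the sloping envelope.
    h, F = pair
    return all(h <= (1.0 - gamma) * hl or F <= Fl - gamma * hl
               for (hl, Fl) in filt)

def update(filt, pair):
    # Add the accepted pair and discard any entries it dominates.
    return [q for q in filt if not dominates(pair, q)] + [pair]

filt = [(0.5, 3.0), (0.1, 5.0)]
print(acceptable((0.2, 4.0), filt))  # True: improves on each entry somewhere
filt = update(filt, (0.2, 4.0))
print(filt)  # neither old entry is dominated, so all three pairs remain
```

Because acceptance only requires improving one of the two measures, the filter plays the role a penalty function would otherwise play, without needing a penalty parameter.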
As a criterion for accepting or rejecting a trial step, we use the filter technique combined with the SQP method.
Algorithm A
Step 0. Given initial point , a symmetric positive definite matrix . Choose parameters , , , and . Set , and with some .
Step 1. Computation of an active constraint set is as follows.
Step 1.1. Set and .
Step 1.2. Generate an active constraint subset , and matrix by
Step 1.3. If , set and go to Step 2; otherwise let , set , and repeat Step 1.2.
Step 2. Compute () by solving the quadratic problem (14) at . Consider Let be the corresponding KKT multiplier vector. If , then stop.
Step 3. Compute by solving the quadratic problem (15). Consider where , . Set as the corresponding KKT multiplier vector. If , set ; otherwise, let .
Step 4. Initial line search: set , .
Step 5. If is not acceptable for the filter, go to Step 6; otherwise let , , and add to the filter; go to Step 7.
Step 6. Set , , and go to Step 5.
Step 7. Update the filter to , and obtain by updating the positive definite matrix using some quasi-Newton formula. Set . Go back to Step 1.
Remark 3. In Step 1, by using the pivoting operation POP, we obtain an active set . Based on this active constraint subset, we construct a new QP (14), which is helpful for establishing the convergence of our algorithm.
Remark 4. Steps 1.1–1.3 and Steps 4–6 are called the inner loop iterations, while Steps 1–7 constitute the outer loop.
3. Global Convergence of Algorithm
In this section, we analyze the convergence of the algorithm. The following general assumptions hold throughout this paper.
(H1) The functions $f_i$, $i \in I$, and $g_j$, $j \in J$, are continuously differentiable.
(H2) At every point $x$, the gradients of the active constraints are linearly independent.
(H3) There exist constants $0 < a \le b$ such that $a\|d\|^2 \le d^T H_k d \le b\|d\|^2$ for all $k$ and all $d \in \mathbb{R}^n$.
Similar to Lemmas 2.1 and 2.3 in [22], the following lemma holds, which describes some useful properties of the pivoting operation POP.
Lemma 5. Suppose that H1–H3 hold and let . Then:
(1) the pivoting operation POP finishes in a finite number of computations; that is, the loop between Step 1.2 and Step 1.3 terminates after finitely many passes;
(2) if the sequence of points is bounded, then there exists a constant such that the associated sequence of parameters generated by POP satisfies for all .
Lemma 6. Suppose that H1–H3 hold, the matrix is symmetric positive definite, and is an optimal solution of (14). Then:
(1) , ;
(2) if , then is a KKT point of problem (1);
(3) if , then ; moreover, is a descent direction of at the point .
Lemma 7. If , Steps 4–6 of Algorithm A are well defined; that is, the inner loop between Steps 5 and 6 terminates after finitely many iterations.
Proof. By contradiction, if the conclusion is false, then Algorithm A runs infinitely between Steps 5 and 6, so we have and the point is not acceptable for the filter. The following two cases need to be considered.
Case 1. Consider .
From the definition of , we can obtain
Since is a solution of problem (14), we have
Together with , there exists a constant , such that
Moreover, for and , we have
With (19) and (20), we conclude that must be acceptable for the filter and , which is a contradiction.
Case 2. Consider . By Taylor’s formula, we have
where denotes some point on the line segment from to . Since is acceptable for the filter, we have
or
Similar to Case 1, we can also get the relation
From the assumption, is not acceptable for the filter, and we have
For the point , if (22) holds, then by and (18), we have
which contradicts (25). If inequality (23) holds, by and (18), we have
which contradicts (26). From the above analysis, the desired conclusion holds.
Lemma 8. Suppose that infinitely many points are added to the filter; then .
In the remainder of this section, we show the global convergence of the algorithm.
Theorem 9. Suppose that H1–H3 hold, and let be the sequence of iterates produced by Algorithm A. Then the algorithm either stops at a KKT point of problem (1) in a finite number of steps or generates an infinite sequence such that each accumulation point of is a KKT point of problem (1).
Proof. The first statement is easy to show, the only stopping point being in Step 2. Thus, assume that the algorithm generates an infinite sequence , and since is bounded under all of the above assumptions, we can assume without loss of generality that there exists an infinite index set such that
Obviously, according to Lemma 6, it is only necessary to prove that .
Let ; two cases need to be considered.
Case 1. is an infinite index set. Suppose by contradiction that ; since
in view of , , we obtain
This shows that the following quadratic programming subproblem (32) at
has a nonempty feasible set. Moreover, from and Theorem 2.4 in [9], it is not difficult to show that is the unique solution of (32). So, it holds that
Considering the KKT conditions of the problem (14), we have
which contradicts the definition of .
Case 2. is a finite index set. This means that it holds for all sufficiently large . There exists a constant such that, for , we have
Then, for some integer , we have
This means . Therefore, is a KKT point of problem (1).
4. Numerical Experiments
In this section, we select some problems from [9, 10] to show the efficiency of the algorithm proposed in Section 2. The preliminary numerical experiments were run on an Intel(R) Celeron(R) CPU 2.40 GHz computer. The proposed algorithm is coded in MATLAB 7.0, using the Optimization Toolbox to solve the quadratic programs (14) and (15). The results show that the proposed algorithm is efficient.
In the numerical experiments, the parameters are chosen as follows.
(1) Consider , , , , , and , the unit matrix.
(2) is updated by the BFGS formula similar to [15]. Consider where
(3) In the implementation, the stopping criterion of Step 2 is changed to STOP.
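The update formula in item (2) is not fully legible here; the sketch below shows the standard damped BFGS update in the style of Powell [15], under the assumption that "similar to [15]" refers to Powell's damping, which keeps the matrix positive definite even when the curvature condition fails (our reconstruction, not necessarily the paper's exact formula):

```python
import numpy as np

def damped_bfgs(H, s, y, theta_bound=0.2):
    """Powell-damped BFGS update of a symmetric positive definite H.

    s = x_{k+1} - x_k; y = difference of (Lagrangian) gradients.
    y is replaced by y_bar = theta*y + (1 - theta)*H s, with theta chosen
    so that s^T y_bar >= theta_bound * s^T H s, which preserves positive
    definiteness of the updated matrix.
    """
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    if sy >= theta_bound * sHs:
        theta = 1.0                                   # plain BFGS step
    else:
        theta = (1.0 - theta_bound) * sHs / (sHs - sy)  # damped step
    y_bar = theta * y + (1.0 - theta) * Hs
    return (H - np.outer(Hs, Hs) / sHs
              + np.outer(y_bar, y_bar) / (s @ y_bar))

# Even with negative curvature s^T y < 0, the damped update stays SPD:
H = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([-0.5, 0.2])            # s^T y = -0.5 < 0
H_new = damped_bfgs(H, s, y)
print(np.all(np.linalg.eigvalsh(H_new) > 0))   # remains positive definite
```

Keeping every H_k symmetric positive definite is exactly what assumption H3 requires of the algorithm's matrix sequence.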
The algorithm has been tested on some problems from [9, 10]. The results are summarized in Tables 1 and 2. The columns of these tables have the following meanings: No.: the number of the test problem in [9, 10]; : the dimension of the problem; : the number of objective functions; : the number of inequality constraints; NT: the number of iterations; IP: the initial point; LWM: the proposed Algorithm A; XUE: the method in [9]; RNM: the method in [10]; ZZM: the method in [21]; FV: the final value of the objective function.

In Table 2, the performance of Algorithm LWM is compared with the other algorithms. For problems 1 and 2, the results we obtain are slightly better than those in [9] when an appropriate initial point is chosen. From the iteration counts for test problems 3 to 7, our method appears somewhat more efficient than the methods in [10, 21].
5. Concluding Remarks
In this paper, we propose a filter method combined with a sequential quadratic programming algorithm for inequality constrained minimax problems. With the help of a pivoting operation procedure, an active constraint subset is first obtained. At each iteration, a main search direction is obtained by solving only one quadratic programming subproblem, which is feasible at each iteration point; by using the filter technique, no penalty function is needed. Then, a correction direction is obtained by solving another quadratic program to avoid the Maratos effect and to guarantee the global convergence properties under mild conditions. The preliminary numerical results also show that the proposed algorithm is effective.
However, to show that our algorithm is globally convergent, we assume some rather strong conditions, such as hypotheses H2 and H3, and we hope to remove them in future work. In addition, some problems remain worthy of further study, such as extending the algorithm to problems with both inequality and equality constraints. The main search direction could also be obtained by other techniques, for example, the sequential systems of linear equations technique.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors are deeply indebted to the editor Professor Wenyu Sun and the anonymous referees whose insightful comments helped the authors a lot to improve the quality of the paper. The first author would also like to thank Professor Zhibin Zhu for valuable work on numerical experiments. This research was supported by Scientific Research Fund of Hunan Provincial Education Department (nos. 12A077,12C0743,13C453, and 14C0609).
References
[1] A. Baums, “Minimax method in optimizing energy consumption in real-time embedded systems,” Automatic Control and Computer Sciences, vol. 43, no. 2, pp. 57–62, 2009.
[2] E. Y. Rapoport, “Minimax optimization of stationary states in systems with distributed parameters,” Journal of Computer and Systems Sciences International, vol. 52, no. 2, pp. 165–179, 2013.
[3] E. Polak, D. Q. Mayne, and J. E. Higgins, “Superlinearly convergent algorithm for min-max problems,” Journal of Optimization Theory and Applications, vol. 69, no. 3, pp. 407–439, 1991.
[4] R. Reemtsen, “A cutting plane method for solving minimax problems in the complex plane,” Numerical Algorithms, vol. 2, no. 3-4, pp. 409–436, 1992.
[5] J. L. Zhou and A. L. Tits, “Nonmonotone line search for minimax problems,” Journal of Optimization Theory and Applications, vol. 76, no. 3, pp. 455–476, 1993.
[6] L. Grippo, F. Lampariello, and S. Lucidi, “A nonmonotone line search technique for Newton's method,” SIAM Journal on Numerical Analysis, vol. 23, no. 4, pp. 707–716, 1986.
[7] Y. H. Yu and L. Gao, “Nonmonotone line search algorithm for constrained minimax problems,” Journal of Optimization Theory and Applications, vol. 115, no. 2, pp. 419–446, 2002.
[8] F. Wang and Y. Wang, “Nonmonotone algorithm for minimax optimization problems,” Applied Mathematics and Computation, vol. 217, no. 13, pp. 6296–6308, 2011.
[9] Y. Xue, “A SQP method for minimax problems,” Journal of System Science and Math Science, vol. 22, no. 3, pp. 355–364, 2002 (Chinese).
[10] B. Rustem and Q. Nguyen, “An algorithm for the inequality-constrained discrete min-max problem,” SIAM Journal on Optimization, vol. 8, no. 1, pp. 265–283, 1998.
[11] E. Obasanjo, G. Tzallas-Regas, and B. Rustem, “An interior-point algorithm for nonlinear minimax problems,” Journal of Optimization Theory and Applications, vol. 144, no. 2, pp. 291–318, 2010.
[12] B. Rustem, S. Žakovic, and P. Parpas, “An interior point algorithm for continuous minimax: implementation and computation,” Optimization Methods and Software, vol. 23, no. 6, pp. 911–928, 2008.
[13] Y. Feng, L. Hongwei, Z. Shuisheng, and L. Sanyang, “A smoothing trust-region Newton-CG method for minimax problem,” Applied Mathematics and Computation, vol. 199, no. 2, pp. 581–589, 2008.
[14] S. P. Han, “A globally convergent method for nonlinear programming,” Journal of Optimization Theory and Applications, vol. 22, no. 3, pp. 297–309, 1977.
[15] M. J. D. Powell, “A fast algorithm for nonlinearly constrained optimization calculations,” in Numerical Analysis, pp. 144–157, Springer, Berlin, Germany, 1978.
[16] G. He, Z. Gao, and Y. Lai, “New sequential quadratic programming algorithm with consistent subproblems,” Science in China A: Mathematics, vol. 40, no. 2, pp. 137–150, 1997.
[17] E. R. Panier and A. L. Tits, “A superlinearly convergent feasible method for the solution of inequality constrained optimization problems,” SIAM Journal on Control and Optimization, vol. 25, no. 4, pp. 934–950, 1987.
[18] Z. Wan, “A modified SQP algorithm for mathematical programs with linear complementarity constraints,” Acta Scientiarum Naturalium Universitatis Normalis Hunanensis, vol. 26, pp. 9–12, 2001.
[19] Z. Zhu, K. Zhang, and J. Jian, “An improved SQP algorithm for inequality constrained optimization,” Mathematical Methods of Operations Research, vol. 58, no. 2, pp. 271–282, 2003.
[20] Z. Luo, G. Chen, S. Luo, and Z. Zhu, “Improved feasible SQP algorithm for nonlinear programs with equality constrained subproblems,” Journal of Computers, vol. 8, no. 6, pp. 1496–1503, 2013.
[21] Z. Zhu and C. Zhang, “A superlinearly convergent sequential quadratic programming algorithm for minimax problems,” Journal of Numerical Methods and Applications, vol. 27, no. 4, pp. 15–32, 2005.
[22] J. Jian, R. Quan, and Q. Hu, “A new superlinearly convergent SQP algorithm for nonlinear minimax problems,” Acta Mathematicae Applicatae Sinica, vol. 23, no. 3, pp. 395–410, 2007.
[23] Q. Hu, Y. Chen, N. Chen, and X. Li, “A modified SQP algorithm for minimax problems,” Journal of Mathematical Analysis and Applications, vol. 360, no. 1, pp. 211–222, 2009.
[24] W. Xue, C. Shen, and D. Pu, “A new nonmonotone SQP algorithm for the minimax problem,” International Journal of Computer Mathematics, vol. 86, no. 7, pp. 1149–1159, 2009.
[25] J. Jian, X. Zhang, R. Quan, and Q. Ma, “Generalized monotone line search SQP algorithm for constrained minimax problems,” Optimization, vol. 58, no. 1, pp. 101–131, 2009.
[26] R. Fletcher, S. Leyffer, and P. L. Toint, “On the global convergence of a filter-SQP algorithm,” SIAM Journal on Optimization, vol. 13, no. 1, pp. 44–59, 2002.
[27] S. Ulbrich, “On the superlinear local convergence of a filter-SQP method,” Mathematical Programming, vol. 100, no. 1, Ser. B, pp. 217–245, 2004.
[28] K. Su and J. Che, “A modified SQP-filter method and its global convergence,” Applied Mathematics and Computation, vol. 194, no. 1, pp. 92–101, 2007.
[29] C. Shen, W. Xue, and X. Chen, “Global convergence of a robust filter SQP algorithm,” European Journal of Operational Research, vol. 206, no. 1, pp. 34–45, 2010.
[30] C. Gu and D. Zhu, “A nonmonotone line search multidimensional filter-SQP method for general nonlinear programming,” Numerical Algorithms, vol. 56, no. 4, pp. 537–559, 2011.
Copyright
Copyright © 2014 Zhijun Luo and Lirong Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.