The Scientific World Journal

Volume 2014 (2014), Article ID 159754, 9 pages

http://dx.doi.org/10.1155/2014/159754

## A Simple SQP Algorithm for Constrained Finite Minimax Problems

Lirong Wang^{1} and Zhijun Luo^{2}

^{1}Department of Information Science and Engineering, Hunan University of Humanities, Science and Technology, Loudi 417000, China

^{2}Department of Mathematics and Econometrics, Hunan University of Humanities, Science and Technology, Loudi 417000, China

Received 30 August 2013; Accepted 7 November 2013; Published 10 February 2014

Academic Editors: Z.-C. Deng, K. Skouri, and K.-C. Ying

Copyright © 2014 Lirong Wang and Zhijun Luo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A simple sequential quadratic programming (SQP) method is proposed to solve the constrained minimax problem. At each iteration, an auxiliary variable is introduced and the descent direction is obtained by solving only one quadratic programming subproblem. A high-order correction direction, obtained by solving a corresponding quadratic programming subproblem, avoids the Maratos effect. Furthermore, under some mild conditions, the global and superlinear convergence of the algorithm is established. Finally, some numerical results are reported which show that the proposed algorithm is successful.

#### 1. Introduction

Consider the following constrained minimax optimization problem:
$$\min_{x \in \mathbb{R}^n} F(x), \qquad F(x) = \max\{f_i(x),\; i \in I\}, \qquad \text{s.t.}\; g_j(x) \le 0,\; j \in J, \tag{1}$$
where $I = \{1, \dots, m\}$, $J = \{1, \dots, l\}$, and the functions $f_i : \mathbb{R}^n \to \mathbb{R}$ $(i \in I)$ and $g_j : \mathbb{R}^n \to \mathbb{R}$ $(j \in J)$ are continuously differentiable.

The minimax problem is one of the most important nondifferentiable optimization problems, and it is widely applied in many fields (such as [1–4]). In real life, many problems can be stated as minimax problems, for example in financial decision making and engineering design, where one wants to minimize the worst-case (maximum) value of a set of objective functions. Since the objective function is nondifferentiable, the classical methods for smooth optimization problems cannot be applied directly to solve such constrained optimization problems.

Generally speaking, many schemes have been proposed for solving minimax problems by converting problem (1) into the following equivalent smooth constrained optimization problem:
$$\min_{(x, z)} z \quad \text{s.t.}\; f_i(x) \le z,\; i \in I, \qquad g_j(x) \le 0,\; j \in J. \tag{2}$$
Obviously, from problem (2), the KKT conditions of (1) can be stated as follows:
$$\sum_{i \in I} \lambda_i \nabla f_i(x) + \sum_{j \in J} \mu_j \nabla g_j(x) = 0, \qquad \sum_{i \in I} \lambda_i = 1,$$
$$\lambda_i \ge 0, \quad \lambda_i \big(f_i(x) - F(x)\big) = 0, \; i \in I, \qquad \mu_j \ge 0, \quad \mu_j g_j(x) = 0, \; j \in J, \tag{3}$$
where $\lambda = (\lambda_1, \dots, \lambda_m)$ and $\mu = (\mu_1, \dots, \mu_l)$ are the corresponding multiplier vectors. Based on the equivalence between the KKT points of (2) and the stationary points of (1), many methods focus on finding a KKT point of (1), namely, on solving (3), and many algorithms have been proposed for minimax problems [5–15]. In [5–8], minimax problems are treated with a nonmonotone line search, which can effectively avoid the Maratos effect. Combining trust-region methods with line-search and curve-search methods, Wang and Zhang [9] proposed a hybrid algorithm for linearly constrained minimax problems. Many other effective algorithms for solving minimax problems have been presented, such as [11–15].
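The smooth reformulation (2) is exactly what a general-purpose NLP solver can consume. The following sketch illustrates this on a small invented instance (Python with SciPy purely for illustration; the paper's own experiments use MATLAB, and the functions $f_1, f_2$ below are not from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative minimax instance (not from the paper):
#   minimize F(x) = max(x^2, (x - 2)^2), with no constraints g_j.
# Epigraph reformulation (2): minimize z over (x, z) s.t. f_i(x) <= z.
f = [lambda v: v[0] ** 2, lambda v: (v[0] - 2.0) ** 2]

def objective(v):
    # v = (x, z); the smooth objective is just the auxiliary variable z
    return v[1]

constraints = [
    {"type": "ineq", "fun": (lambda v, fi=fi: v[1] - fi(v))} for fi in f
]

res = minimize(objective, x0=[0.0, 5.0], constraints=constraints, method="SLSQP")
x_star, z_star = res.x
print(x_star, z_star)
```

For this instance the unique minimax point is $x = 1$ with optimal value $F(x) = 1$, which the epigraph variable $z$ recovers at the solution.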

The sequential quadratic programming (SQP) method is one of the most efficient algorithms for solving smooth constrained optimization problems because of its fast convergence rate. Thus, it has been studied deeply and widely (see, e.g., [16–20]). In a typical SQP method, the standard search direction at the current iterate $x^k$ is obtained by solving the following quadratic programming:
$$\min_{d} \; \nabla F(x^k)^T d + \tfrac{1}{2} d^T H_k d \quad \text{s.t.}\; g_j(x^k) + \nabla g_j(x^k)^T d \le 0, \; j \in J, \tag{4}$$
where $H_k$ is a symmetric positive definite matrix. Since the objective function $F$ contains the max operator, it is continuous but nondifferentiable even if every component function is differentiable. Therefore, this method may fail to reach an optimum of the minimax problem. In view of this, and combining with (2), one considers the following quadratic programming obtained by introducing an auxiliary variable $z$:
$$\min_{(d, z)} \; z + \tfrac{1}{2} d^T H_k d \quad \text{s.t.}\; f_i(x^k) + \nabla f_i(x^k)^T d - F(x^k) \le z, \; i \in I, \qquad g_j(x^k) + \nabla g_j(x^k)^T d \le 0, \; j \in J. \tag{5}$$
However, it is well known that the solution of (5) may not be a feasible descent direction and cannot avoid the Maratos effect. Recently, many researchers have extended the popular SQP scheme to minimax problems (see [21–26], etc.). Jian et al. [22] and Q.-J. Hu and J.-Z. Hu [23] perform a pivoting operation to generate an active constraint subset associated with the current iteration point. At each iteration of their proposed algorithms, a main search direction is obtained by solving a reduced quadratic program which always has a solution.
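The direction-finding problem (5) is a small convex QP in $(d, z)$. The sketch below forms and solves it at one point of an invented one-dimensional instance (Python/SciPy; taking $H_k = I$ and the specific functions $f_1, f_2, g$ as illustrative assumptions, not data from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (not from the paper): f1 = x^2, f2 = (x - 2)^2, g = -x <= 0.
xk = 0.0
fk = np.array([xk ** 2, (xk - 2.0) ** 2])        # f_i(x^k) = [0, 4]
grad_f = np.array([2 * xk, 2 * (xk - 2.0)])      # [0, -4]
Fk = fk.max()                                    # F(x^k) = 4
gk, grad_g = -xk, -1.0                           # g(x^k) = 0, g'(x^k) = -1
H = 1.0                                          # H_k = identity (assumption)

def qp_obj(v):
    d, z = v
    return z + 0.5 * H * d * d                   # z + (1/2) d^T H_k d

cons = [
    # f_i(x^k) + grad f_i(x^k)^T d - F(x^k) <= z, i = 1, 2
    {"type": "ineq", "fun": lambda v: v[1] - (fk[0] + grad_f[0] * v[0] - Fk)},
    {"type": "ineq", "fun": lambda v: v[1] - (fk[1] + grad_f[1] * v[0] - Fk)},
    # g(x^k) + grad g(x^k)^T d <= 0
    {"type": "ineq", "fun": lambda v: -(gk + grad_g * v[0])},
]

res = minimize(qp_obj, x0=[0.0, 0.0], constraints=cons, method="SLSQP")
d_k, z_k = res.x
print(d_k, z_k)   # optimum of this instance: d = 1, z = -4
```

The negative optimal $z$ reflects the predicted worst-case decrease of the linearized model along $d$.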

The method of feasible directions (MFD) (see [27, 28], etc.) is another effective way to solve smooth constrained optimization problems. An advantage of MFD over the classical SQP method is that a feasible descent direction can be obtained by solving only one quadratic programming. In this paper, to obtain a feasible descent direction and reduce the computational cost, we construct a new quadratic programming subproblem. Suppose $x^k$ is the current iteration point; at each iteration, the descent direction is given by solving the quadratic programming subproblem (6), where $H_k$ is a symmetric positive definite matrix and $z$ is a nonnegative auxiliary variable. In order to avoid the Maratos effect, a high-order correction direction is computed from the corresponding quadratic programming (7). Under suitable conditions, the theoretical analysis shows that the convergence of our algorithm can be obtained.
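The paper's subproblem (6) is not reproduced above, but the guiding idea of MFD-type subproblems can be illustrated: by relaxing the linearized constraints with an auxiliary variable, a single QP produces a direction that both predicts descent for the worst-case linearization and moves strictly into the feasible region. The exact constraint form below (in which a negative optimal relaxation value certifies strict feasibility) is an assumption chosen for illustration, not the paper's (6):

```python
import numpy as np
from scipy.optimize import minimize

# Illustration of the feasible-direction idea only -- NOT the paper's exact
# subproblem (6).  The linearized constraint g is relaxed by the auxiliary
# variable z, so one QP solve yields a strictly feasible descent direction
# whenever z < 0 at the optimum.
# Invented instance: f1 = x^2, f2 = (x - 2)^2, g = x - 3 <= 0, at xk = 3.
xk = 3.0
fk = np.array([xk ** 2, (xk - 2.0) ** 2])    # [9, 1]
grad_f = np.array([2 * xk, 2 * (xk - 2.0)])  # [6, 2]
Fk = fk.max()                                # 9
gk, grad_g = xk - 3.0, 1.0                   # active constraint: g(xk) = 0

def qp_obj(v):
    d, z = v
    return z + 0.5 * d * d                   # H_k = identity (assumption)

cons = [
    {"type": "ineq", "fun": lambda v: v[1] - (fk[0] + grad_f[0] * v[0] - Fk)},
    {"type": "ineq", "fun": lambda v: v[1] - (fk[1] + grad_f[1] * v[0] - Fk)},
    # relaxed linearization: g(x^k) + grad g(x^k)^T d <= z  (illustrative form)
    {"type": "ineq", "fun": lambda v: v[1] - (gk + grad_g * v[0])},
]

res = minimize(qp_obj, x0=[0.0, 0.0], constraints=cons, method="SLSQP")
d_k, z_k = res.x
print(d_k, z_k)   # optimum of this instance: d = -1, z = -1
```

At this iterate the constraint $g(x) = x - 3 \le 0$ is active; the computed $z < 0$ certifies that $d$ moves strictly inside the feasible region while predicting descent, from a single QP solve.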

The plan of the paper is as follows. The algorithm is proposed in Section 2. In Section 3, we show that the algorithm is globally convergent, while the superlinear convergence rate is analyzed in Section 4. Finally, some preliminary numerical results are reported in Section 5.

#### 2. Description of the Algorithm

Now we state our algorithm as follows.

*Algorithm 1*

*Step 0.* Given an initial point $x^0$ and a symmetric positive definite matrix $H_0$, choose parameters $\alpha \in (0, \tfrac{1}{2})$, $\beta \in (0, 1)$, and a penalty parameter $c > 0$. Set $k = 0$.

*Step 1.* Compute $(d^k, z_k)$ by solving the quadratic subproblem (6) at $x^k$. Let $\lambda^k$ be the corresponding KKT multiplier vector. If $d^k = 0$, then STOP.

*Step 2.* Compute the correction direction $\tilde d^k$ by solving the quadratic subproblem (7). Let $\tilde\lambda^k$ be the corresponding KKT multiplier vector. If $\|\tilde d^k\| > \|d^k\|$, set $\tilde d^k = 0$.

*Step 3 (the line search).* A merit function is defined as in (8), where $c$ is a suitably large positive scalar. Compute $t_k$, the first number $t$ of the sequence $\{1, \beta, \beta^2, \dots\}$ satisfying the line search condition (9).

*Step 4 (update).* Obtain $H_{k+1}$ by updating the positive definite matrix $H_k$ using some quasi-Newton formula. Set $x^{k+1} = x^k + t_k d^k + t_k^2 \tilde d^k$. Set $k := k + 1$ and go back to Step 1.
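The steps above can be sketched in code as follows. This is a deliberately simplified model rather than a faithful reimplementation of Algorithm 1: the instance is invented and unconstrained, $H_k$ is frozen at the identity, the second-order correction $\tilde d^k$ is omitted, and $F$ itself serves as the merit function:

```python
import numpy as np
from scipy.optimize import minimize

# Simplified model of Algorithm 1 (assumptions: invented unconstrained
# instance, H_k = I, no second-order correction, merit function = F).
f = [lambda x: x ** 2, lambda x: (x - 2.0) ** 2]
df = [lambda x: 2.0 * x, lambda x: 2.0 * (x - 2.0)]
F = lambda x: max(fi(x) for fi in f)

def direction(xk):
    """Step 1: solve the direction-finding QP in (d, z) at xk."""
    Fk = F(xk)
    cons = [
        {"type": "ineq",
         "fun": (lambda v, fi=fi, dfi=dfi: v[1] - (fi(xk) + dfi(xk) * v[0] - Fk))}
        for fi, dfi in zip(f, df)
    ]
    res = minimize(lambda v: v[1] + 0.5 * v[0] ** 2, [0.0, 0.0],
                   constraints=cons, method="SLSQP")
    return res.x  # (d_k, z_k)

def armijo(xk, d, z, alpha=0.1, beta=0.5):
    """Step 3: first t in {1, beta, beta^2, ...} with sufficient decrease."""
    t = 1.0
    for _ in range(30):                      # bounded backtracking for safety
        if F(xk + t * d) <= F(xk) + alpha * t * z:
            break
        t *= beta
    return t

xk = 3.0
for _ in range(50):                          # Step 4: k := k + 1 and repeat
    d, z = direction(xk)
    if abs(d) < 1e-8:                        # stopping rule of Step 1
        break
    xk += armijo(xk, d, z) * d

print(xk)   # converges to the minimax point x = 1 of this instance
```

Even in this stripped-down form, the pattern of Algorithm 1 is visible: one QP per iteration for the direction, then an Armijo-type backtracking search on the merit function.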

#### 3. Global Convergence of the Algorithm

For convenience, we denote

In this section, we analyze the global convergence of the algorithm. The following general assumptions hold throughout this paper.

(H 3.1) The functions $f_i$ $(i \in I)$ and $g_j$ $(j \in J)$ are continuously differentiable.

(H 3.2) The feasible set of (1) is nonempty, and the set of gradient vectors of the active constraints at any feasible point is linearly independent.

(H 3.3) There exist constants $0 < a \le b$ such that $a\|d\|^2 \le d^T H_k d \le b\|d\|^2$ for all $k$ and all $d \in \mathbb{R}^n$.

Lemma 2. *Suppose that (H 3.1)–(H 3.3) hold, the matrix $H_k$ is symmetric positive definite, and $(d^k, z_k)$ is an optimal solution of (6). Then (1) $z_k \le 0$, and $z_k = 0$ implies $d^k = 0$; (2) if $d^k = 0$, then $x^k$ is a KKT point of problem (1).*

*Proof. *(1) Since $(d, z) = (0, 0)$ is a feasible solution of (6) and $H_k$ is positive definite, one has
$$z_k + \tfrac{1}{2} (d^k)^T H_k d^k \le 0, \qquad \text{hence } z_k \le -\tfrac{1}{2} (d^k)^T H_k d^k \le 0.$$
Further, if $z_k = 0$, then $d^k = 0$.

(2) Firstly, we prove that $z_k = 0$. If $d^k = 0$, then, by part (1) and the positive definiteness of $H_k$, we have $z_k \le 0$. On the other hand, if $d^k = 0$, in view of the constraints of (6)
we have $z_k \ge 0$. Combining $z_k \le 0$, we have $z_k = 0$.

Secondly, we show that $x^k$ is a KKT point of problem (1) when $d^k = 0$. From problem (6), the KKT conditions at the solution can be stated as in (14).
If $d^k = 0$, then $z_k = 0$, and according to the definition of $x^{k+1}$ in Step 4, we have $x^{k+1} = x^k$. Furthermore, it holds that the KKT conditions (3) are satisfied at $x^k$.
Thus the results hold.

From Lemma 2, it is obvious that, if $d^k \ne 0$, the line search in Step 3 can always be completed.

*Lemma 3. If and if satisfies and , the line search in Step 3 of the algorithm is well defined.*

*Proof. *Firstly, we consider the Taylor expansions at $x^k$ of the functions $f_i$ $(i \in I)$ and $g_j$ $(j \in J)$. Then, we obtain
where
is convex as a function of , and thus we have
From the definition of , and (1), it is easy to obtain
On the other hand, from the first equation of (14) we can get
Let . Since the third formula of (14) implies
then
For , we get
Thus, (22) implies
Substituting the above equality into (19), we obtain
It follows from (14) that
Considering satisfies and , then we have
Then, at , we have
Since , for small enough, it holds that
That is, the line search condition (9) is satisfied.

In the remainder of this section, we show the global convergence of the algorithm. Since $\{x^k\}$ is bounded under all the above-mentioned assumptions, we can assume without loss of generality that there exist an infinite index set and a constant such that (30) holds.

*Theorem 4. The algorithm either stops at a KKT point of problem (1) in a finite number of steps or generates an infinite sequence, any accumulation point of which is a KKT point of problem (1).*

*Proof. *The first statement is obvious, the only stopping point being in Step 1. Thus, assume that the algorithm generates an infinite sequence and (30) holds. The cases and are considered separately.*Case A *(). By Step 4, there exists an infinite index set , such that , , while, by Step 3, it holds that
So, the fact that implies that . Hence, from Lemma 2, it is clear that is a KKT point of (1). *Case B* (). Obviously, it suffices to prove that , . Suppose by contradiction that . Since
in view of , , we have
So, the following corresponding QP subproblem (6) at
has a nonempty feasible set. Moreover, it is not difficult to show that is the unique solution of (34). So, it holds that
For , , , it is clear, for , large enough, that
From (36), by imitating the proof of [17, Proposition 3.2], we know that the stepsize obtained by the line search is bounded away from zero on ; that is,
In addition, from (9) and Lemma 2, it follows that is monotonically decreasing. So, considering and hypothesis (H 3.1), we obtain
Hence, from (9) and (36)–(38), we get
This is a contradiction. So, . Therefore, according to Lemma 2, is a KKT point of problem (1).

#### 4. Rate of Convergence

In this section, we analyze the convergence rate of the algorithm. For this purpose, we add the following stronger regularity assumptions.

(H 4.1) The functions $f_i$ $(i \in I)$ and $g_j$ $(j \in J)$ are twice continuously differentiable.

(H 4.2) The sequence $\{x^k\}$ generated by the algorithm possesses an accumulation point $x^*$.

(H 4.3) The second-order sufficiency conditions with strict complementary slackness are satisfied at the KKT point $x^*$.

According to assumptions (H 4.1)–(H 4.3) and [21, Theorem 2], we have the following results.

*Lemma 5. The KKT point of problem (1) is isolated.*

*Lemma 6. The entire sequence converges to ; that is, , .*

*Proof. *The proof of this lemma is similar to that of [19, Lemma 4.1].

*Lemma 7. For large enough, it holds that(1) and ,(2), , and .*

*Lemma 8. For large enough, obtained by Step 2 satisfies(1)(2)*

*Proof. *(1) The result can be proven similarly to the proof of [5, Proposition 3.1] or [19, Lemma 4.3].

(2) We have
Analogously, the other result is not difficult to show.

To obtain the superlinear convergence rate of the proposed algorithm, the following additional assumption is necessary.

(H 4.4) The matrix sequence $\{H_k\}$ satisfies
where

According to Lemmas 6 and 8, it is easy to see that

*Lemma 9. For large enough, under the above-mentioned assumptions, .*

*Proof. *It is only necessary to prove that
From (6) and (14), we have
Hence,
Similarly, together with , it is easy to get
On the other hand, the facts that and imply that ( large enough). Thus, for , we have
By the definition of and Lemma 8, we have
Multiplying both sides of (52) by and adding them, combining with (53), we get
In addition, for large enough, we have
Combining the above equation with (54) we can obtain
The KKT condition (14) implies ; then we get
Thus,
For large enough, according to , it holds that

From Lemma 9 and the method of [29, Theorem 5.2], we obtain the following.

*Theorem 10. Under all stated assumptions, the algorithm is superlinearly convergent; that is, the sequence generated by the algorithm satisfies .*

#### 5. Numerical Experiments


In this section, several problems are selected to show the efficiency of the algorithm of Section 2. The preliminary numerical experiments were carried out on an Intel(R) Celeron(R) 2.40 GHz CPU. The proposed algorithm was coded in MATLAB 7.0, and the optimization toolbox was used to solve the quadratic programs (6) and (7). The results show that the proposed algorithm is efficient.

During the numerical experiments, the parameters are chosen as follows: , , , and $H_0$ is taken to be the unit matrix. $H_k$ is updated by the BFGS formula [16]. In the implementation, the stopping criterion of Step 1 is changed to a tolerance test: if $\|d^k\|$ is sufficiently small, STOP.
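Reference [16] is Powell's paper introducing the damped BFGS update, which keeps $H_{k+1}$ symmetric positive definite even when the curvature condition $s^T y > 0$ fails. A common sketch of that update follows (Python; the damping threshold $0.2$ is Powell's usual choice, and the details here are an assumption rather than the paper's exact implementation):

```python
import numpy as np

def damped_bfgs_update(H, s, y, damp=0.2):
    """Powell-damped BFGS update of the Hessian approximation H.

    s = x_{k+1} - x_k, y = difference of (Lagrangian) gradients.
    Damping replaces y by a convex combination with H s so that the
    updated matrix stays symmetric positive definite.
    """
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    if sy >= damp * sHs:
        theta = 1.0
    else:
        theta = (1.0 - damp) * sHs / (sHs - sy)
    ybar = theta * y + (1.0 - theta) * Hs          # damped substitute for y
    return H - np.outer(Hs, Hs) / sHs + np.outer(ybar, ybar) / (s @ ybar)

# Example: one update starting from the identity.
H0 = np.eye(2)
H1 = damped_bfgs_update(H0, s=np.array([1.0, 0.0]), y=np.array([0.5, 0.0]))
print(H1)   # stays symmetric positive definite
```

Starting from $H_0 = I$, every updated matrix remains symmetric positive definite, which matches the positive definiteness required of the sequence $\{H_k\}$ by the algorithm.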

This algorithm has been tested on some problems from [10, 11, 26]. The results are summarized in Table 1. The columns of this table have the following meanings: Number, the number of the test problem in [10, 11] or [26]; the dimension of the problem; the number of objective functions; the numbers of inequality and equality constraints; NT, the number of iterations; IP, the initial point; FV, the final value of the objective function.

#### 6. Concluding Remarks


In this paper, we propose a simple feasible sequential quadratic programming algorithm for inequality constrained minimax problems. With the help of the method of feasible directions, at each iteration a main search direction is obtained by solving only one reduced quadratic programming subproblem. Then a correction direction is obtained by solving another quadratic programming to avoid the Maratos effect and to guarantee superlinear convergence under mild conditions. The preliminary numerical results also show that the proposed algorithm is effective.

As further work, the main search direction could be obtained by other techniques, for example, the sequential systems of linear equations technique. One could also consider removing the strict complementarity assumption.

#### Conflict of Interests


The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments


The authors would like to thank the anonymous referee for the careful reading and for helpful comments and suggestions that led to an improved version of this paper. This project was supported by the Foundation of Hunan Provincial Education Department under Grants nos. 12A077 and 13C453 and by the Scientific Research Fund of Hunan University of Humanities, Science and Technology of China (no. 2012QN04).

#### References


1. X. Cai, K. L. Teo, X. Yang, and X. Y. Zhou, “Portfolio optimization under a minimax rule,” *Management Science*, vol. 46, no. 7, pp. 957–972, 2000.
2. A. R. Pankov, E. N. Platonov, and K. V. Semenikhin, “Minimax quadratic optimization and its application to investment planning,” *Automation and Remote Control*, vol. 62, no. 12, pp. 1978–1995, 2001.
3. A. Baums, “Minimax method in optimizing energy consumption in real-time embedded systems,” *Automatic Control and Computer Sciences*, vol. 43, no. 2, pp. 57–62, 2009.
4. E. Y. Rapoport, “Minimax optimization of stationary states in systems with distributed parameters,” *Journal of Computer and Systems Sciences International*, vol. 52, no. 2, pp. 165–179, 2013.
5. J. L. Zhou and A. L. Tits, “Nonmonotone line search for minimax problems,” *Journal of Optimization Theory and Applications*, vol. 76, no. 3, pp. 455–476, 1993.
6. L. Grippo, F. Lampariello, and S. Lucidi, “Nonmonotone line search technique for Newton's method,” *SIAM Journal on Numerical Analysis*, vol. 23, no. 4, pp. 707–716, 1986.
7. Y. H. Yu and L. Gao, “Nonmonotone line search algorithm for constrained minimax problems,” *Journal of Optimization Theory and Applications*, vol. 115, no. 2, pp. 419–446, 2002.
8. F. Wang and Y. Wang, “Nonmonotone algorithm for minimax optimization problems,” *Applied Mathematics and Computation*, vol. 217, no. 13, pp. 6296–6308, 2011.
9. F. Wang and K. Zhang, “A hybrid algorithm for nonlinear minimax problems,” *Annals of Operations Research*, vol. 164, no. 1, pp. 167–191, 2008.
10. Y. Xue, “A SQP method for minimax problems,” *Journal of System Science and Math Science*, vol. 22, pp. 355–364, 2002 (Chinese).
11. B. Rustem and Q. Nguyen, “An algorithm for the inequality-constrained discrete min-max problem,” *SIAM Journal on Optimization*, vol. 8, no. 1, pp. 265–283, 1998.
12. E. Obasanjo, G. Tzallas-Regas, and B. Rustem, “An interior-point algorithm for nonlinear minimax problems,” *Journal of Optimization Theory and Applications*, vol. 144, no. 2, pp. 291–318, 2010.
13. B. Rustem, S. Žakovic, and P. Parpas, “An interior point algorithm for continuous minimax: implementation and computation,” *Optimization Methods and Software*, vol. 23, no. 6, pp. 911–928, 2008.
14. Y. Feng, L. Hongwei, Z. Shuisheng, and L. Sanyang, “A smoothing trust-region Newton-CG method for minimax problem,” *Applied Mathematics and Computation*, vol. 199, no. 2, pp. 581–589, 2008.
15. L. H. Ma, Y. Zhang, C. N. Yang et al., “A neural network model for equality and inequality constrained minimax problems,” *Information Technology Journal*, vol. 11, no. 11, pp. 1655–1659, 2012.
16. M. J. D. Powell, “A fast algorithm for nonlinearly constrained optimization calculations,” in *Numerical Analysis*, pp. 144–157, Springer, Berlin, Germany, 1978.
17. E. R. Panier and A. L. Tits, “Superlinearly convergent feasible method for the solution of inequality constrained optimization problems,” *SIAM Journal on Control and Optimization*, vol. 25, no. 4, pp. 934–950, 1987.
18. Z. Wan, “A modified SQP algorithm for mathematical programs with linear complementarity constraints,” *Acta Scientiarum Naturalium Universitatis Normalis Hunanensis*, vol. 26, pp. 9–12, 2001.
19. Z. Zhu and S. Wang, “A superlinearly convergent numerical algorithm for nonlinear programming,” *Nonlinear Analysis*, vol. 13, no. 5, pp. 2391–2402, 2012.
20. Z. Luo, Z. Zhu, G. Chen et al., “A superlinearly convergent SQP algorithm for constrained optimization problems,” *Journal of Computational Information Systems*, vol. 9, no. 11, pp. 4443–4450, 2013.
21. Z. Zhu and C. Zhang, “A superlinearly convergent sequential quadratic programming algorithm for minimax problems,” *Chinese Journal of Numerical Mathematics and Applications*, vol. 27, no. 4, pp. 15–32, 2005.
22. J.-B. Jian, R. Quan, and Q.-J. Hu, “A new superlinearly convergent SQP algorithm for nonlinear minimax problems,” *Acta Mathematicae Applicatae Sinica*, vol. 23, no. 3, pp. 395–410, 2007.
23. Q.-J. Hu and J.-Z. Hu, “A sequential quadratic programming algorithm for nonlinear minimax problems,” *Bulletin of the Australian Mathematical Society*, vol. 76, no. 3, pp. 353–368, 2007.
24. Q.-J. Hu, Y. Chen, N.-P. Chen, and X.-Q. Li, “A modified SQP algorithm for minimax problems,” *Journal of Mathematical Analysis and Applications*, vol. 360, no. 1, pp. 211–222, 2009.
25. W. Xue, C. Shen, and D. Pu, “A new non-monotone SQP algorithm for the minimax problem,” *International Journal of Computer Mathematics*, vol. 86, no. 7, pp. 1149–1159, 2009.
26. J. B. Jian, X. L. Zhang, R. Quan, and Q. Ma, “Generalized monotone line search SQP algorithm for constrained minimax problems,” *International Journal of Control*, vol. 58, no. 1, pp. 101–131, 2009.
27. G. Zoutendijk, *Methods of Feasible Directions*, Elsevier, Amsterdam, The Netherlands, 1960.
28. M. M. Kostreva and X. Chen, “A superlinearly convergent method of feasible directions,” *Applied Mathematics and Computation*, vol. 116, no. 3, pp. 231–244, 2000.
29. F. Facchinei and S. Lucidi, “Quadratically and superlinearly convergent algorithms for the solution of inequality constrained minimization problems,” *Journal of Optimization Theory and Applications*, vol. 85, no. 2, pp. 265–289, 1995.
