Abstract

A simple sequential quadratic programming method is proposed for solving the constrained minimax problem. At each iteration, by introducing an auxiliary variable, a descent direction is obtained by solving only one quadratic programming subproblem. A high-order correction direction, which avoids the Maratos effect, is then obtained by solving a corresponding quadratic programming subproblem. Furthermore, under some mild conditions, the global and superlinear convergence of the algorithm is established. Finally, the reported numerical results show that the proposed algorithm is effective.

1. Introduction

Consider the following constrained minimax optimization problem:
$$\min_{x\in\mathbb{R}^n}\ F(x),\qquad \text{s.t.}\ g_j(x)\le 0,\ j\in J,\qquad F(x)=\max\{f_i(x):\ i\in I\},\tag{1}$$
where $f_i:\mathbb{R}^n\to\mathbb{R}$, $i\in I=\{1,\dots,m\}$, and $g_j:\mathbb{R}^n\to\mathbb{R}$, $j\in J=\{1,\dots,l\}$, are continuously differentiable.
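Throughout, it is convenient to keep a concrete instance in mind. The following sketch (purely illustrative) encodes a small instance of problem (1): the component objectives are the classical CB2 functions of Charalambous and Conn, while the ball constraint is a hypothetical addition chosen here for illustration; it is not one of this paper's test problems.

```python
import numpy as np

# Toy instance of problem (1).  The three component objectives f_i are the
# classical CB2 functions; the single constraint g_1(x) <= 0 is a
# hypothetical ball constraint added for illustration.
fs = [lambda x: x[0]**2 + x[1]**4,
      lambda x: (2.0 - x[0])**2 + (2.0 - x[1])**2,
      lambda x: 2.0 * np.exp(x[1] - x[0])]
gs = [lambda x: x[0]**2 + x[1]**2 - 10.0]

def F(x):
    """Minimax objective F(x) = max_i f_i(x): continuous but nonsmooth."""
    return max(f(x) for f in fs)
```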

The minimax problem is one of the most important nondifferentiable optimization problems, and it arises in many fields (see, e.g., [1-4]). In practice, many problems can be stated as minimax problems, for example in financial decision making and engineering design, where one wishes to minimize the maximum of a set of objective functions. Since the objective function $F$ is nondifferentiable, we cannot directly apply the classical methods for smooth optimization problems to solve such constrained optimization problems.

Generally speaking, many schemes for solving minimax problems convert problem (1) into the following smooth constrained optimization problem:
$$\min_{(x,z)\in\mathbb{R}^{n+1}}\ z,\qquad \text{s.t.}\ f_i(x)-z\le 0,\ i\in I,\qquad g_j(x)\le 0,\ j\in J.\tag{2}$$
From problem (2), the KKT conditions of (1) can be stated as follows:
$$\sum_{i\in I}\lambda_i\nabla f_i(x)+\sum_{j\in J}\mu_j\nabla g_j(x)=0,\qquad \sum_{i\in I}\lambda_i=1,$$
$$\lambda_i\ge 0,\quad \lambda_i\bigl(f_i(x)-F(x)\bigr)=0,\ i\in I,\qquad \mu_j\ge 0,\quad g_j(x)\le 0,\quad \mu_jg_j(x)=0,\ j\in J,\tag{3}$$
where $\lambda=(\lambda_i,\ i\in I)$ and $\mu=(\mu_j,\ j\in J)$ are the corresponding multiplier vectors. Based on the equivalence between the K-T points of (2) and the stationary points of (1), many methods focus on finding a K-T point of (1), namely, solving (3), and many algorithms have been proposed for the minimax problem [5-15]. For example, in [5-8] minimax problems are treated with nonmonotone line searches, which can effectively avoid the Maratos effect. Combining trust-region methods with line-search and curve-search methods, Wang and Zhang [9] propose a hybrid algorithm for linearly constrained minimax problems. Many other effective algorithms for solving minimax problems have been presented; see, e.g., [11-15].
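To make reformulation (2) concrete, the sketch below solves it for the toy instance above using SciPy's general-purpose SLSQP solver. This only illustrates the equivalence of (1) and (2); it is not the algorithm proposed in this paper.

```python
from scipy.optimize import minimize

def solve_reformulation(x0):
    """Solve the smooth reformulation (2) over y = (x, z) with SLSQP."""
    y0 = np.append(x0, F(x0))                                   # start with z = F(x0)
    # f_i(x) - z <= 0  and  g_j(x) <= 0, written as 'fun(y) >= 0' constraints.
    cons = [{'type': 'ineq', 'fun': lambda y, f=f: y[-1] - f(y[:-1])} for f in fs]
    cons += [{'type': 'ineq', 'fun': lambda y, g=g: -g(y[:-1])} for g in gs]
    res = minimize(lambda y: y[-1], y0, method='SLSQP', constraints=cons)
    return res.x[:-1], res.x[-1]                                # x* and z* = F(x*)

x_star, z_star = solve_reformulation(np.array([1.0, -0.1]))
```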

The sequential quadratic programming (SQP) method is one of the most efficient algorithms for solving smooth constrained optimization problems because of its fast convergence rate; thus, it has been studied deeply and widely (see, e.g., [16-20]). For a smooth objective $f$, the standard SQP search direction is obtained by solving the following quadratic programming:
$$\min_{d}\ \nabla f(x_k)^Td+\tfrac12 d^TH_kd,\qquad \text{s.t.}\ g_j(x_k)+\nabla g_j(x_k)^Td\le 0,\ j\in J,\tag{4}$$
where $H_k$ is a symmetric positive definite matrix. Since the objective function $F$ of (1) contains the max operator, it is continuous but nondifferentiable even if every component function is differentiable; therefore, this method may fail to reach an optimum of the minimax problem. In view of this, and combining with (2), one considers the following quadratic programming obtained by introducing an auxiliary variable $z$:
$$\min_{(d,z)}\ z+\tfrac12 d^TH_kd,\qquad \text{s.t.}\ f_i(x_k)+\nabla f_i(x_k)^Td-F(x_k)\le z,\ i\in I,\qquad g_j(x_k)+\nabla g_j(x_k)^Td\le 0,\ j\in J.\tag{5}$$
However, it is well known that the solution of (5) may not be a feasible descent direction and cannot avoid the Maratos effect. Recently, the popular SQP scheme has been extended to minimax problems in many works (see [21-26], etc.). Jian et al. [22] and Q.-J. Hu and J.-Z. Hu [23] perform a pivoting operation to generate an ε-active constraint subset associated with the current iterate; at each iteration of their proposed algorithms, a main search direction is obtained by solving a reduced quadratic program which always has a solution.
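For illustration, the sketch below assembles and solves subproblem (5) at a given iterate for the toy instance, reusing F, fs, and gs from above. SLSQP again stands in for a dedicated QP solver, and forward-difference gradients replace the analytic gradients a real implementation would supply.

```python
def num_grad(fun, x, h=1e-6):
    """Forward-difference gradient; analytic gradients would be used in practice."""
    e = np.eye(len(x))
    return np.array([(fun(x + h * e[i]) - fun(x)) / h for i in range(len(x))])

def sqp_subproblem(x, H):
    """Solve subproblem (5) over v = (d, z): min z + 0.5 d'Hd subject to the
    linearized f_i- and g_j-constraints at x."""
    Fx = F(x)
    lin_f = [(f(x) - Fx, num_grad(f, x)) for f in fs]   # rows f_i(x)-F(x) + grad'd <= z
    lin_g = [(g(x), num_grad(g, x)) for g in gs]        # rows g_j(x) + grad'd <= 0
    obj = lambda v: v[-1] + 0.5 * v[:-1] @ H @ v[:-1]
    cons = [{'type': 'ineq', 'fun': lambda v, c=c, a=a: v[-1] - (c + a @ v[:-1])}
            for c, a in lin_f]
    cons += [{'type': 'ineq', 'fun': lambda v, c=c, a=a: -(c + a @ v[:-1])}
             for c, a in lin_g]
    res = minimize(obj, np.zeros(len(x) + 1), method='SLSQP', constraints=cons)
    return res.x[:-1], res.x[-1]                        # direction d_k, auxiliary z_k
```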

The method of feasible directions (MFD) (see [27, 28], etc.) is another effective way of solving smooth constrained optimization problems. An advantage of MFD over the classical SQP method is that a feasible descent direction can be obtained by solving only one quadratic programming. In this paper, to obtain a feasible descent direction and reduce the computational cost, we construct a new quadratic programming subproblem. Let $x_k$ be the current iterate; at each iteration, the descent direction is obtained by solving the following quadratic programming subproblem:
$$\min_{(d,z)}\ z+\tfrac12 d^TH_kd,\qquad \text{s.t.}\ f_i(x_k)+\nabla f_i(x_k)^Td-F(x_k)\le z,\ i\in I,\qquad g_j(x_k)+\nabla g_j(x_k)^Td\le z,\ j\in J,\tag{6}$$
where $H_k$ is a symmetric positive definite matrix and $z$ is an auxiliary variable. In order to avoid the Maratos effect, a high-order correction direction is computed by solving a corresponding quadratic programming subproblem (7). Under suitable conditions, the theoretical analysis shows that the convergence of the algorithm can be established.
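The following is a minimal sketch of subproblem (6), assuming the form displayed above. It differs from (5) in a single point: the auxiliary variable $z$ also relaxes the linearized $g_j$-constraints, so $(d,z)=(0,0)$ is always feasible at a feasible iterate, and any solution with $d_k\ne 0$ yields $z_k<0$, that is, a feasible direction of descent.

```python
def feasible_descent_subproblem(x, H):
    """Solve subproblem (6) over v = (d, z): as (5), but every linearized
    constraint row is relaxed by z, so (d, z) = (0, 0) is always feasible
    at a feasible x, and a solution with d != 0 forces z < 0."""
    Fx = F(x)
    lin_f = [(f(x) - Fx, num_grad(f, x)) for f in fs]
    lin_g = [(g(x), num_grad(g, x)) for g in gs]
    obj = lambda v: v[-1] + 0.5 * v[:-1] @ H @ v[:-1]
    cons = [{'type': 'ineq', 'fun': lambda v, c=c, a=a: v[-1] - (c + a @ v[:-1])}
            for c, a in lin_f + lin_g]                  # every row relaxed by z
    res = minimize(obj, np.zeros(len(x) + 1), method='SLSQP', constraints=cons)
    return res.x[:-1], res.x[-1]
```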

The plan of the paper is as follows. The algorithm is proposed in Section 2. In Section 3, we show that the algorithm is globally convergent, while the superlinear convergence rate is analyzed in Section 4. Finally, some preliminary numerical results are reported in Section 5.

2. Description of the Algorithm

Now we state our algorithm as follows.

Algorithm 1.
Step 0. Choose an initial feasible point $x_0$ and a symmetric positive definite matrix $H_0$. Choose parameters $\alpha\in(0,\frac{1}{2})$, $\beta\in(0,1)$, and a suitably large penalty parameter $c>0$. Set $k=0$.
Step 1. Compute $(d_k,z_k)$ by solving the quadratic programming subproblem (6) at $x_k$. Let $(\lambda^k,\mu^k)$ be the corresponding KKT multiplier vector. If $d_k=0$, then STOP.
Step 2. Compute $(\tilde{d}_k,\tilde{z}_k)$ by solving the quadratic programming subproblem (7), and let $(\tilde{\lambda}^k,\tilde{\mu}^k)$ be the corresponding KKT multiplier vector. If $\|\tilde{d}_k\|>\|d_k\|$, set $\tilde{d}_k=0$.
Step 3 (the line search). A merit function is defined as follows:
$$\theta(x)=F(x)+c\sum_{j\in J}\max\{0,\ g_j(x)\},\tag{8}$$
where $c$ is a suitably large positive scalar.

Compute $t_k$, the first number $t$ in the sequence $\{1,\beta,\beta^2,\beta^3,\dots\}$ satisfying
$$\theta(x_k+td_k+t^2\tilde{d}_k)\le\theta(x_k)+\alpha tz_k.\tag{9}$$

Step 4 (update). Obtain $H_{k+1}$ by updating the positive definite matrix $H_k$ using some quasi-Newton formula. Set $x_{k+1}=x_k+t_kd_k+t_k^2\tilde{d}_k$ and $k:=k+1$. Go back to Step 1.
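Putting the pieces together, the driver below mirrors Steps 0-4 on the toy instance. It is a hedged sketch rather than the paper's implementation: the parameter values are arbitrary but admissible choices, the correction direction of Step 2 is simply set to zero (which preserves the global behavior but not the superlinear rate), and a plain BFGS update stands in for the unspecified quasi-Newton formula.

```python
def solve_minimax(x0, alpha=0.25, beta=0.5, c=10.0, tol=1e-8, max_iter=200):
    """Illustrative driver for Steps 0-4; parameter values are assumptions."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                                            # Step 0: H_0 = I
    theta = lambda y: F(y) + c * sum(max(0.0, g(y)) for g in gs)  # merit function (8)
    for _ in range(max_iter):
        d, z = feasible_descent_subproblem(x, H)                  # Step 1: solve (6)
        if np.linalg.norm(d) <= tol:                              # d_k = 0: K-T point
            break
        d_corr = np.zeros_like(d)    # Step 2 (correction via (7)) omitted in this sketch
        t = 1.0                                                   # Step 3: arc search (9)
        while t > 1e-12 and theta(x + t * d + t**2 * d_corr) > theta(x) + alpha * t * z:
            t *= beta
        x_new = x + t * d + t**2 * d_corr                         # Step 4: update iterate
        s, y = x_new - x, num_grad(F, x_new) - num_grad(F, x)     # crude secant pair for F
        if s @ y > 1e-12:                                         # keep H positive definite
            Hs = H @ s
            H = H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)
        x = x_new
    return x

x_approx = solve_minimax(np.array([1.0, -0.1]))
```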

3. Global Convergence of the Algorithm

For convenience, we denote the active index sets $I(x)=\{i\in I:\ f_i(x)=F(x)\}$ and $J(x)=\{j\in J:\ g_j(x)=0\}$.

In this section, we analyze the convergence of the algorithm. The following general assumptions hold throughout this paper.

(H 3.1) The functions $f_i$, $i\in I$, and $g_j$, $j\in J$, are continuously differentiable.

(H 3.2) At every feasible point $x$ of problem (1), the set of vectors $\{\nabla g_j(x):\ j\in J(x)\}$ is linearly independent.

(H 3.3) There exist constants $0<a\le b$ such that $a\|d\|^2\le d^TH_kd\le b\|d\|^2$ for all $k$ and all $d\in\mathbb{R}^n$.

Lemma 2. Suppose that (H 3.1)-(H 3.3) hold, the matrix $H_k$ is symmetric positive definite, and $(d_k,z_k)$ is an optimal solution of (6). Then (1) $z_k\le-\frac{1}{2}d_k^TH_kd_k\le 0$, and $z_k<0$ whenever $d_k\ne 0$; (2) if $d_k=0$, then $x_k$ is a K-T point of problem (1).

Proof. (1) Since $(d,z)=(0,0)$ is a feasible solution of (6) and $(d_k,z_k)$ is optimal, one has $z_k+\frac{1}{2}d_k^TH_kd_k\le 0$; since $H_k$ is positive definite, it follows that $z_k\le-\frac{1}{2}d_k^TH_kd_k\le 0$. Further, if $d_k\ne 0$, then $z_k<0$.
(2) Firstly, we prove $z_k=0$. If $d_k=0$, then part (1) gives $z_k\le 0$. On the other hand, choosing $i_0\in I$ with $f_{i_0}(x_k)=F(x_k)$, the corresponding constraint of (6) yields $z_k\ge f_{i_0}(x_k)-F(x_k)=0$. Combining the two inequalities, we have $z_k=0$.
Secondly, we show that $x_k$ is a K-T point of problem (1) when $d_k=0$. From problem (6), the K-T conditions at $(d_k,z_k)$ read
$$H_kd_k+\sum_{i\in I}\lambda_i^k\nabla f_i(x_k)+\sum_{j\in J}\mu_j^k\nabla g_j(x_k)=0,\qquad \sum_{i\in I}\lambda_i^k+\sum_{j\in J}\mu_j^k=1,$$
$$\lambda_i^k\ge 0,\quad \lambda_i^k\bigl(f_i(x_k)+\nabla f_i(x_k)^Td_k-F(x_k)-z_k\bigr)=0,\ i\in I,$$
$$\mu_j^k\ge 0,\quad \mu_j^k\bigl(g_j(x_k)+\nabla g_j(x_k)^Td_k-z_k\bigr)=0,\ j\in J.\tag{14}$$
If $d_k=0$ and $z_k=0$, then $\sum_{i\in I}\lambda_i^k>0$, since otherwise $\sum_{j\in J}\mu_j^k\nabla g_j(x_k)=0$ with $\sum_{j\in J}\mu_j^k=1$ would contradict (H 3.2); hence, after normalizing the multipliers by $\sum_{i\in I}\lambda_i^k$, conditions (14) reduce to the system (3). That is, the results hold.

From Lemma 2 it is obvious that, if $d_k\ne 0$, then $z_k<0$, so the line search in Step 3 is always meaningful; the next lemma shows that it is in fact well defined.
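As a quick numerical illustration of part (1) of Lemma 2 on the toy instance (using the sketch of subproblem (6) above), one can verify that at a feasible, nonoptimal point the auxiliary variable is negative and the computed direction decreases $F$; the step size used here is illustrative only.

```python
# Check Lemma 2(1) numerically: z_k < 0 and d_k is a descent direction for F.
x0 = np.array([1.0, -0.1])                     # feasible: g_1(x0) < 0
d0, z0 = feasible_descent_subproblem(x0, np.eye(2))
print(z0 < 0.0)                                # expect True, since d0 != 0
print(F(x0 + 1e-3 * d0) < F(x0))               # expect True: descent in F
```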

Lemma 3. If $d_k\ne 0$ and the correction direction $\tilde{d}_k$ produced by Step 2 satisfies the conditions stated there, then the line search in Step 3 of the algorithm is well defined.

Proof. Expand the functions $f_i$, $i\in I$, and $g_j$, $j\in J$, at $x_k$ by Taylor's formula, and note that the max operator is convex as a function of its arguments. Using the definition of $\theta$, part (1) of Lemma 2, and the first and third formulas of the K-T system (14), one can estimate the difference $\theta(x_k+td_k+t^2\tilde{d}_k)-\theta(x_k)$ from above by $tz_k+o(t)$. Since $\tilde{d}_k$ satisfies the conditions of Step 2 and $z_k<0$, it follows that, for $t>0$ small enough, $\theta(x_k+td_k+t^2\tilde{d}_k)-\theta(x_k)\le\alpha tz_k$; that is, the line search condition (9) is satisfied.

In the remainder of this section, we show the global convergence of the algorithm. Since $\{x_k\}$ is bounded under the above-mentioned assumptions, we can assume without loss of generality that there exist an infinite index set $K$ and a matrix $H_*$ such that
$$x_k\to x^*,\qquad H_k\to H_*,\qquad k\in K.\tag{30}$$

Theorem 4. The algorithm either stops at a KKT point of problem (1) in a finite number of steps or generates an infinite sequence $\{x_k\}$, every accumulation point of which is a KKT point of problem (1).

Proof. The first statement is obvious, the only stopping point being in Step 1. Thus, assume that the algorithm generates an infinite sequence $\{x_k\}$ and that (30) holds. Two cases are considered separately.
Case A. Suppose there exists an infinite index set $K'\subseteq K$ on which $d_k\to 0$. Then, by Step 3 and Lemma 2, $z_k\to 0$ on $K'$ as well, and from the continuity of the solution of (6) with respect to $(x_k,H_k)$ it is clear that $x^*$ is a K-T point of (1).
Case B. It only remains to rule out the case $\inf_{k\in K}\|d_k\|>0$. Suppose by contradiction that this holds. In view of (30), the QP subproblem (6) at $x^*$ has a nonempty feasible set; moreover, it is not difficult to show that it has a unique solution $(d^*,z^*)$ with $d^*\ne 0$, so that $z^*<0$ by Lemma 2. Hence, for $k\in K$ large enough, $z_k\le\frac{1}{2}z^*<0$. By imitating the proof of [17, Proposition 3.2], we know that the stepsize $t_k$ obtained by the line search is bounded away from zero on $K$. In addition, from (9) and Lemma 2, $\{\theta(x_k)\}$ is monotonically decreasing; by (30) and hypothesis (H 3.1) it is also bounded, hence convergent, so $\theta(x_{k+1})-\theta(x_k)\to 0$. But (9) gives $\theta(x_{k+1})-\theta(x_k)\le\alpha t_kz_k\le\frac{1}{2}\alpha t_kz^*$, which is bounded away from zero on $K$. This is a contradiction. So $d_k\to 0$, $k\in K$, and according to Case A, $x^*$ is a KKT point of problem (1).

4. Rate of Convergence

In this section, we show the convergence rate of the algorithm. For this purpose, we add the following stronger regularity assumptions.

(H 4.1) The functions $f_i$, $i\in I$, and $g_j$, $j\in J$, are twice continuously differentiable.

(H 4.2) The sequence $\{x_k\}$ generated by the algorithm possesses an accumulation point $x^*$ which, by Theorem 4, is a KKT point of (1) with associated multiplier vectors $\lambda^*$ and $\mu^*$.

(H 4.3) The second-order sufficiency conditions with strict complementary slackness are satisfied at the KKT point $(x^*,\lambda^*,\mu^*)$; that is, $\lambda_i^*>0$ for $i\in I(x^*)$, $\mu_j^*>0$ for $j\in J(x^*)$, and
$$d^T\nabla_{xx}^2L(x^*,\lambda^*,\mu^*)\,d>0$$
for every $d\ne 0$ such that $\nabla f_i(x^*)^Td$ takes a common value for all $i\in I(x^*)$ and $\nabla g_j(x^*)^Td=0$ for all $j\in J(x^*)$, where
$$L(x,\lambda,\mu)=\sum_{i\in I}\lambda_if_i(x)+\sum_{j\in J}\mu_jg_j(x)$$
is the Lagrangian of problem (1).

According to the stated assumptions (H 4.1)-(H 4.3) and [21, Theorem 2], we have the following results.

Lemma 5. The KKT point $x^*$ of problem (1) is isolated.

Lemma 6. The entire sequence $\{x_k\}$ converges to $x^*$; that is, $x_k\to x^*$ as $k\to\infty$.

Proof. The proof of this lemma is similar to that of [19, Lemma 4.1].

Lemma 7. For $k$ large enough, it holds that (1) $d_k\to 0$ and $z_k\to 0$; (2) the multiplier vectors $(\lambda^k,\mu^k)$ of subproblem (6) converge to $(\lambda^*,\mu^*)$, and the active sets identified by (6) coincide with $I(x^*)$ and $J(x^*)$.

Lemma 8. For $k$ large enough, the correction direction $\tilde{d}_k$ obtained by Step 2 satisfies (1) $\|\tilde{d}_k\|=O(\|d_k\|^2)$; (2) $g_j(x_k+d_k+\tilde{d}_k)=o(\|d_k\|^2)$ for $j\in J(x^*)$, together with an analogous estimate for the active objective functions $f_i$, $i\in I(x^*)$.

Proof. (1) The result can be proven similarly to the proof of [5, Proposition 3.1] or [19, Lemma 4.3].
(2) The estimate for the active constraints follows from a second-order Taylor expansion at $x_k+d_k$ together with part (1); the other result can be shown analogously.

To obtain the superlinear convergence rate of the proposed algorithm, the following additional assumption is necessary.

(H 4.4) The matrix sequence $\{H_k\}$ satisfies
$$\bigl\|P_k\bigl(H_k-\nabla_{xx}^2L(x^*,\lambda^*,\mu^*)\bigr)d_k\bigr\|=o(\|d_k\|),$$
where $P_k$ denotes the orthogonal projection onto the tangent subspace of the constraints active at $x^*$.

According to Lemmas 6 and 8, it is easy to see that $d_k\to 0$ and $\tilde{d}_k\to 0$ as $k\to\infty$.

Lemma 9. Under the above-mentioned assumptions, for $k$ large enough the step size equals one; that is, $t_k\equiv 1$.

Proof. It is only necessary to prove that the line search condition (9) is satisfied with $t=1$ for all $k$ large enough. Expanding $\theta(x_k+d_k+\tilde{d}_k)$ at $x_k$ by Taylor's formula, and using the K-T system (14) of subproblem (6), Lemma 8, and the strict complementarity of (H 4.3) (which keeps the multipliers of the active constraints bounded away from zero for $k$ large enough), one obtains
$$\theta(x_k+d_k+\tilde{d}_k)-\theta(x_k)\le\tfrac{1}{2}z_k+o(\|d_k\|^2),$$
where assumption (H 4.4) ensures that the curvature term $\frac{1}{2}d_k^TH_kd_k$ matches the second-order behavior of the Lagrangian up to $o(\|d_k\|^2)$. Since $z_k\le-\frac{1}{2}d_k^TH_kd_k$ by Lemma 2, assumption (H 3.3) gives $z_k\le-\frac{a}{2}\|d_k\|^2$, so the remainder term is dominated for $k$ large enough. Because $\alpha<\frac{1}{2}$, it follows that $\theta(x_k+d_k+\tilde{d}_k)-\theta(x_k)\le\alpha z_k$ for all $k$ large enough; that is, (9) holds with $t=1$.

From Lemma 9 and the method of [29, Theorem 5.2], we obtain the following result.

Theorem 10. Under all the stated assumptions, the algorithm is superlinearly convergent; that is, the sequence $\{x_k\}$ generated by the algorithm satisfies $\|x_{k+1}-x^*\|=o(\|x_k-x^*\|)$.

5. Numerical Experiments

In this section, we select several problems to show the efficiency of the algorithm in Section 2. Some preliminary numerical experiments were carried out on an Intel(R) Celeron(R) 2.40 GHz CPU. The proposed algorithm was coded in MATLAB 7.0, and the Optimization Toolbox was used to solve the quadratic programming subproblems (6) and (7). The results show that the proposed algorithm is efficient.

In the numerical experiments, the parameters are chosen as follows: admissible values of $\alpha\in(0,\frac{1}{2})$, $\beta\in(0,1)$, and $c>0$, and $H_0=I$, the unit matrix. $H_k$ is updated by the BFGS formula [16]. In the implementation, the stopping criterion of Step 1 is changed to: if $\|d_k\|$ is smaller than a prescribed tolerance, STOP.
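For completeness, the sketch below shows a BFGS update with Powell's damping, a common safeguard that keeps $H_k$ positive definite in SQP implementations. The paper only states that the BFGS formula of [16] is used, so the damping and the constant 0.2 are assumptions, not the paper's prescription.

```python
def damped_bfgs_update(H, s, y, phi=0.2):
    """BFGS update of the Hessian approximation H with Powell's damping.
    s = x_{k+1} - x_k, y = difference of (sub)gradients; damping enforces
    the curvature condition s'y > 0, so H stays positive definite.
    The damping constant phi = 0.2 is a conventional choice (an assumption)."""
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    if sy < phi * sHs:                        # poor curvature: pull y toward Hs
        t = (1.0 - phi) * sHs / (sHs - sy)
        y = t * y + (1.0 - t) * Hs            # guarantees s'y = phi * s'Hs > 0
        sy = s @ y
    return H - np.outer(Hs, Hs) / sHs + np.outer(y, y) / sy
```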

This algorithm has been tested on some problems from [10, 11, 26]. The results are summarized in Table 1. The columns of this table have the following meanings:  Number: the number of the test problem in [10, 11] or [26];  $n$: the dimension of the problem;  $m$: the number of objective functions;  $m_i$: the number of inequality constraints;  $m_e$: the number of equality constraints;  NT: the number of iterations;  IP: the initial point;  FV: the final value of the objective function.

6. Concluding Remarks

In this paper, we propose a simple feasible sequential quadratic programming algorithm for inequality constrained minimax problems. With the help of the method-of-feasible-directions technique, at each iteration a main search direction is obtained by solving only one reduced quadratic programming subproblem. A correction direction is then obtained by solving another quadratic programming subproblem, which avoids the Maratos effect and guarantees superlinear convergence under mild conditions. The preliminary numerical results also show that the proposed algorithm is effective.

As future work, the main search direction could be generated by other techniques, for example, the sequential-systems-of-linear-equations technique, and removing the strict complementarity assumption could also be considered.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous referee for the careful reading and for helpful comments and suggestions that led to an improved version of this paper. This project was supported by the Foundation of the Hunan Provincial Education Department under Grant nos. 12A077 and 13C453 and by the Scientific Research Fund of Hunan University of Humanities, Science and Technology of China (no. 2012QN04).