Generalized Minimax Programming with Nondifferentiable (G, β)-Invexity
We consider the generalized minimax programming problem (P), in which the functions involved are locally Lipschitz (G, β)-invex. We establish not only G-sufficient but also G-necessary optimality conditions for problem (P). Using the G-necessary optimality conditions and (G, β)-invexity, we construct a dual problem (DI) for the primal problem (P) and prove duality results between (P) and (DI). These results extend several known results to a wider class of programs.
1. Introduction
Convexity plays a central role in many aspects of mathematical programming, including stability analysis, sufficient optimality conditions, and duality. Under convexity assumptions, nonlinear programming problems can be solved efficiently. Many attempts have been made to weaken convexity assumptions in order to treat a broader range of practical problems; consequently, many concepts of generalized convex functions have been introduced and applied to mathematical programming problems in the literature. One of these concepts, invexity, was introduced by Hanson. Hanson showed that invexity shares with convexity the property that the Karush-Kuhn-Tucker conditions are sufficient for global optimality in nonlinear programming. Ben-Israel and Mond introduced the concept of preinvex functions, which is a special case of invexity.
Recently, Antczak extended invexity further to G-invexity for scalar differentiable functions and introduced new necessary optimality conditions for differentiable mathematical programming problems. Antczak also applied the G-invexity notion to develop sufficient optimality conditions and new duality results for differentiable mathematical programming problems. Furthermore, Antczak's definition of G-invexity was extended, in a natural way, to differentiable vector-valued functions: he defined vector G-invex (G-incave) functions with respect to η and applied this vector G-invexity to develop optimality conditions for differentiable multiobjective programming problems with both inequality and equality constraints. He also established the so-called G-Karush-Kuhn-Tucker necessary optimality conditions for differentiable vector optimization problems under the Kuhn-Tucker constraint qualification. With this vector G-invexity concept, Antczak proved new duality results for nonlinear differentiable multiobjective programming problems; a number of new vector dual problems, such as G-Mond-Weir, G-Wolfe, and G-mixed dual vector problems to the primal one, were also defined.
In the last few years, many concepts of generalized convexity, including G-invexity and a number of related notions together with their extensions, have been introduced and applied to different mathematical programming problems. In particular, they have also been applied to minimax programming; see [13–17] for details. However, we have not found a paper that deals with the generalized minimax programming problem under G-invexity assumptions or their generalizations.
Note that the function x ↦ sup_{y∈Y} f(x, y) may fail to be differentiable even if f is differentiable. Yuan et al. introduced the (G, β)-invexity concept for locally Lipschitz functions. This (G, β)-invexity extends Antczak's G-invexity concept to the nonsmooth case. In this paper, we deal with a nondifferentiable generalized minimax programming problem under the vector (G, β)-invexity. Here, the generalized minimax programming problem is presented as follows:

(P)  minimize sup_{y∈Y} f(x, y)  subject to  g_j(x) ≤ 0, j = 1, …, m,

where Y is a compact subset of ℝ^l, f : ℝ^n × Y → ℝ, and g = (g_1, …, g_m) : ℝ^n → ℝ^m. Let X be the set of feasible solutions of problem (P); in other words, X = {x ∈ ℝ^n : g_j(x) ≤ 0, j = 1, …, m}. For convenience, let us define the following sets for every x ∈ X:

Y(x) = {y ∈ Y : f(x, y) = sup_{z∈Y} f(x, z)},  J(x) = {j ∈ {1, …, m} : g_j(x) = 0}.
The rest of the paper is organized as follows. In Section 2, we present concepts regarding nondifferentiable vector (G, β)-invexity. In Section 3, we present not only G-sufficient but also G-necessary optimality conditions for problem (P). In Section 4, using the G-necessary optimality conditions and the (G, β)-invexity concept, we formulate the dual problem (DI) for the primal problem (P) and present duality results between them.
2. Notations and Preliminaries
In this section, we provide some definitions and results that we will use in the sequel. The following convention for equalities and inequalities will be used throughout the paper. For any x = (x_1, …, x_n), y = (y_1, …, y_n) ∈ ℝ^n, we define the following:

x = y if and only if x_i = y_i for all i = 1, …, n;
x ≦ y if and only if x_i ≤ y_i for all i = 1, …, n;
x ≤ y if and only if x ≦ y and x ≠ y;
x < y if and only if x_i < y_i for all i = 1, …, n.

Let f = (f_1, …, f_p) : X → ℝ^p be a vector-valued function defined on a subset X of ℝ^n. For our convenience, denote by I_{f_i}(X) the image of X under f_i for i = 1, …, p. Further, we recall some definitions and a lemma.
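For instance, in ℝ² these componentwise orderings compare as follows (a small illustration added here, assuming the standard conventions stated above):

```latex
x = (1, 2),\quad y = (1, 3):\qquad
x \leqq y \ \text{and}\ x \le y \ \text{hold},\ \text{but}\ x < y \ \text{fails, since } x_1 = y_1 .
```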
Definition 1 (see ). Let f be a locally Lipschitz function defined on a nonempty subset X of ℝ^n and let x ∈ X, v ∈ ℝ^n. If

f°(x; v) = limsup_{y → x, t ↓ 0} [f(y + tv) − f(y)] / t

exists, then f°(x; v) is called the Clarke derivative of f at x in the direction v. If this limit superior exists for all v ∈ ℝ^n, then f is called Clarke differentiable at x. The set

∂f(x) = {ξ ∈ ℝ^n : f°(x; v) ≥ ⟨ξ, v⟩ for all v ∈ ℝ^n}

is called the Clarke subdifferential of f at x.
Note that if a given function is locally Lipschitz, then the Clarke subdifferential exists.
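As a standard illustration of the Clarke derivative and subdifferential (an example added here, not taken from the original text), consider the absolute value function at its kink:

```latex
f(x) = |x| \ \text{on } \mathbb{R}: \qquad
f^{\circ}(0; v) \;=\; \limsup_{y \to 0,\, t \downarrow 0} \frac{|y + t v| - |y|}{t} \;=\; |v|,
\qquad
\partial f(0) \;=\; \{\, \xi \in \mathbb{R} : \xi v \le |v| \ \text{for all } v \,\} \;=\; [-1, 1].
```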
Lemma 2 (see ). Let f be a real-valued Lipschitz continuous function defined on ℝ^n and denote the image of ℝ^n under f by I_f(ℝ^n); let G : I_f(ℝ^n) → ℝ be a differentiable function such that G′ is continuous on I_f(ℝ^n) and G′(a) > 0 for each a ∈ I_f(ℝ^n). Then the chain rule holds for each x ∈ ℝ^n; therefore,

∂(G ∘ f)(x) = G′(f(x)) ∂f(x).
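For example (a standard computation added here for illustration), take G(t) = e^t, which is differentiable with continuous, positive derivative, and f(x) = |x|, which is Lipschitz with ∂f(0) = [−1, 1]; the chain rule then gives

```latex
\partial (G \circ f)(0) \;=\; G'\bigl(f(0)\bigr)\,\partial f(0) \;=\; e^{0}\,[-1,1] \;=\; [-1,1],
```

which matches a direct computation of the Clarke subdifferential of e^{|x|} at 0.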
Definition 3. Let f = (f_1, …, f_p) be a vector-valued locally Lipschitz function defined on a nonempty set X ⊆ ℝ^n and let u ∈ X. Consider the functions G_i : I_{f_i}(X) → ℝ, β_i : X × X → ℝ_+ \ {0}, and η : X × X → ℝ^n for i = 1, …, p. Moreover, G_i is strictly increasing on its domain for each i = 1, …, p. If

G_i(f_i(x)) − G_i(f_i(u)) ≥ (>) ⟨ξ_i, β_i(x, u) η(x, u)⟩ for every ξ_i ∈ ∂(G_i ∘ f_i)(u)

holds for all x ∈ X (x ≠ u) and i = 1, …, p, then f is said to be a (strictly) nondifferentiable vector (G, β)-invex function at u on X with respect to η (or, shortly, (G, β)-invex at u on X), where G = (G_1, …, G_p) and β = (β_1, …, β_p). If f is (strictly) nondifferentiable vector (G, β)-invex at u on X with respect to η for all u ∈ X, then f is (strictly) nondifferentiable vector (G, β)-invex on X with respect to η.
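If the defining (G, β)-invexity inequality takes the form G_i(f_i(x)) − G_i(f_i(u)) ≥ ⟨ξ_i, β_i(x, u) η(x, u)⟩ for all ξ_i ∈ ∂(G_i ∘ f_i)(u) (this exact form is an assumption of the reconstruction here), then the notion collapses to Antczak's differentiable G-invexity in the smooth case: taking each f_i differentiable, each G_i continuously differentiable with positive derivative, and β_i ≡ 1, the chain rule of Lemma 2 gives ∂(G_i ∘ f_i)(u) = {G_i′(f_i(u)) ∇f_i(u)}, and the inequality becomes

```latex
G_i\bigl(f_i(x)\bigr) - G_i\bigl(f_i(u)\bigr) \;\ge\; G_i'\bigl(f_i(u)\bigr)\,\nabla f_i(u)^{\mathsf T}\,\eta(x,u),
\qquad i = 1, \dots, p,
```

which is exactly the differentiable vector G-invexity inequality introduced by Antczak.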
3. Optimality Conditions
In this section, we first establish the G-necessary optimality conditions for problem (P), which involves functions that are locally Lipschitz with respect to the variable x. For this purpose, we will need some additional assumptions on problem (P).
Condition 4. Assume the following:
(a) the set Y is compact;
(b) f(x, ·) and ∂_x f(x, ·) are upper semicontinuous on Y;
(c) f(·, y) is locally Lipschitz in x, and this Lipschitz continuity is uniform for y in Y;
(d) f(·, y) is regular in x; that is, the Clarke derivative coincides with the usual one-sided directional derivative: f°(x; v) = f′(x; v);
(e) g_j, j = 1, …, m, are regular and locally Lipschitz at the point under consideration.
Condition 5 (constraint qualification). For each x ∈ X, the following implication holds: if μ_j ≥ 0 and ζ_j ∈ ∂g_j(x) for j ∈ J(x) satisfy Σ_{j∈J(x)} μ_j ζ_j = 0, then μ_j = 0 for all j ∈ J(x).
We will also use the following auxiliary programming problem (P_G):

minimize sup_{y∈Y} G_f(f(x, y))  subject to  G_{g_j}(g_j(x)) ≤ G_{g_j}(0), j = 1, …, m,

where G_f and G_{g_j}, j = 1, …, m, are strictly increasing functions defined on I_f(ℝ^n × Y) and I_{g_j}(ℝ^n), respectively. We denote by X_G the feasible set of (P_G) and by J_G(x) the set of active constraint indices of (P_G) at x. If the function G_{g_j} is strictly increasing on I_{g_j}(ℝ^n) for each j = 1, …, m, then X = X_G and J(x) = J_G(x). So, we represent the set of all feasible solutions and the set of active constraint indices for either (P) or (P_G) by the notations X and J(x), respectively.
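The equality of the two feasible sets rests only on monotonicity. Writing G_{g_j} for the strictly increasing function composed with the j-th constraint (notation assumed here), for every x we have

```latex
g_j(x) \le 0 \;\Longleftrightarrow\; G_{g_j}\bigl(g_j(x)\bigr) \le G_{g_j}(0),
\qquad j = 1, \dots, m,
```

so x is feasible for the original problem exactly when it is feasible for the auxiliary problem, and the corresponding sets of active indices coincide as well.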
The following necessary optimality conditions are presented in .
Theorem 6 (necessary optimality conditions). Let x* be an optimal solution of (P). One also assumes that Conditions 4 and 5 hold. Then there exist a positive integer s and vectors y_i ∈ Y(x*), ξ_i ∈ ∂_x f(x*, y_i), i = 1, …, s, and ζ_j ∈ ∂g_j(x*), j = 1, …, m, together with scalars λ_i ≥ 0, Σ_{i=1}^{s} λ_i = 1, and μ_j ≥ 0, j = 1, …, m, such that

Σ_{i=1}^{s} λ_i ξ_i + Σ_{j=1}^{m} μ_j ζ_j = 0,  μ_j g_j(x*) = 0, j = 1, …, m.

Furthermore, if s_1 is the number of nonzero λ_i and s_2 is the number of nonzero μ_j, then s_1 + s_2 ≤ n + 1.
Theorem 7 (G-necessary optimality conditions). Let problem (P) satisfy Conditions 4 and 5, and let x* be an optimal solution of problem (P). Assume that G_f is both continuously differentiable and strictly increasing on I_f(ℝ^n × Y). If G_{g_j} is both continuously differentiable and strictly increasing on I_{g_j}(ℝ^n) with G_{g_j}(0) = 0 for each j = 1, …, m, then there exist a positive integer s and vectors y_i ∈ Y(x*), ξ_i ∈ ∂_x f(x*, y_i), i = 1, …, s, and ζ_j ∈ ∂g_j(x*), j = 1, …, m, together with scalars λ_i ≥ 0 and μ_j ≥ 0 such that

(12) Σ_{i=1}^{s} λ_i G_f′(f(x*, y_i)) ξ_i + Σ_{j=1}^{m} μ_j G_{g_j}′(g_j(x*)) ζ_j = 0,
(13) μ_j G_{g_j}(g_j(x*)) = 0, j = 1, …, m,
(14) Σ_{i=1}^{s} λ_i = 1.
Proof. Since x* is an optimal solution to problem (P) and X = X_G, it is easy to see that x* is an optimal solution to problem (P_G). Considering problem (P_G), it is easy to check that it satisfies the assumptions of Theorem 6. Therefore, we can choose a positive integer s, vectors y_i ∈ Y(x*), and scalars λ_i ≥ 0 with Σ_{i=1}^{s} λ_i = 1 and μ_j ≥ 0, j = 1, …, m, such that they satisfy the conclusions of Theorem 6.
Now, for each y_i ∈ Y(x*), i = 1, …, s, we consider an associated scalar programming problem whose objective is x ↦ G_f(f(x, y_i)) and whose constraints are those of (P_G). It is easy to see that x* is an optimal solution to this scalar problem. Thus, there exist ξ̄_i ∈ ∂_x (G_f ∘ f)(x*, y_i) for i = 1, …, s and ζ̄_j ∈ ∂(G_{g_j} ∘ g_j)(x*) for j = 1, …, m such that the stationarity condition (15) holds. By Lemma 2, we have ∂_x (G_f ∘ f)(x*, y_i) = G_f′(f(x*, y_i)) ∂_x f(x*, y_i) and ∂(G_{g_j} ∘ g_j)(x*) = G_{g_j}′(g_j(x*)) ∂g_j(x*). Substituting these expressions into (15) yields (18). Now, from (18), we can deduce the required results.
Next, we derive G-sufficient optimality conditions for problem (P) under the (G, β)-invexity assumptions introduced above.
Theorem 8 (G-sufficient optimality conditions). Let x* ∈ X satisfy conditions (12)–(14), where y_i ∈ Y(x*), i = 1, …, s; let G_f be both continuously differentiable and strictly increasing on I_f(ℝ^n × Y); let G_{g_j} be both continuously differentiable and strictly increasing on I_{g_j}(ℝ^n) for each j = 1, …, m. Assume that f(·, y_i) is (G_f, β)-invex at x* on X for each i = 1, …, s and that g_j is (G_{g_j}, β)-invex at x* on X for each j = 1, …, m. Then x* is an optimal solution to (P).
Proof. Suppose, contrary to the result, that x* is not an optimal solution for problem (P). Hence, there exists a feasible point x̃ ∈ X whose objective value is strictly smaller than that of x*. By the monotonicity of G_f, this strict inequality is preserved after composing with G_f. Employing (13), (14), and the feasibility of x̃, we can bound the multiplier terms involving the constraints. By the generalized invexity assumptions on f and g, we obtain the invexity inequalities (23) and (24). Applying (24) to (23) yields an inequality which implies that the subgradient combination in (12) cannot vanish. This is a contradiction to condition (12).
Example 9. Let the data of problem (P) be defined so that f is (G_f, β)-invex at x* for each y ∈ Y, g is (G_g, β)-invex at x*, and conditions (12)–(14) hold at x*. Then, from Theorem 8, we can say that x* is an optimal solution to (P).
4. Duality
Making use of the optimality conditions of the preceding section, we present a dual problem (DI) to the primal problem (P) and establish G-weak, G-strong, and G-strict converse duality theorems. For convenience, we denote by W the set of all triplets (z, μ, s) satisfying the dual feasibility conditions of (DI):
Theorem 10 (G-weak duality). Let x and (z, μ, s) be (P)-feasible and (DI)-feasible, respectively; let G_f be both continuously differentiable and strictly increasing on I_f(ℝ^n × Y); let G_{g_j} be both continuously differentiable and strictly increasing on I_{g_j}(ℝ^n) for each j = 1, …, m. If f(·, y) is (G_f, β)-invex at z for each y ∈ Y and g_j is (G_{g_j}, β)-invex at z for each j = 1, …, m, then sup_{y∈Y} f(x, y) ≥ sup_{y∈Y} f(z, y).
Proof. Suppose to the contrary that sup_{y∈Y} f(x, y) < sup_{y∈Y} f(z, y). By the monotonicity assumption on G_f, the same strict inequality holds after composing with G_f. Again, by the monotonicity assumption on G_{g_j} and the feasibility of x, the dual constraint terms are nonpositive at x; hence (43) holds. Similar to the proof of Theorem 8, by (43) and the generalized invexity assumptions on f and g, we obtain an inequality that contradicts (34). So sup_{y∈Y} f(x, y) ≥ sup_{y∈Y} f(z, y).
Theorem 11 (G-strong duality). Let problem (P) satisfy Conditions 4 and 5, and let x* be an optimal solution of problem (P). Suppose that G_f is both continuously differentiable and strictly increasing on I_f(ℝ^n × Y) and that G_{g_j} is both continuously differentiable and strictly increasing on I_{g_j}(ℝ^n) with G_{g_j}(0) = 0 for each j = 1, …, m. If the hypothesis of Theorem 10 holds for all (DI)-feasible points, then there exist μ* and s* such that (x*, μ*, s*) is a (DI)-optimal solution, and the two problems (P) and (DI) have the same optimal values.
Proof. By Theorem 7, there exist μ* and s* satisfying the requirements specified in the theorem such that (x*, μ*, s*) is a (DI)-feasible solution; the optimality of this feasible solution for (DI) then follows from Theorem 10.
Theorem 12 (G-strict converse duality). Let x* and (z*, μ*, s*) be optimal solutions for (P) and (DI), respectively. Suppose that G_f is both continuously differentiable and strictly increasing on I_f(ℝ^n × Y) and that G_{g_j} is both continuously differentiable and strictly increasing on I_{g_j}(ℝ^n) for each j = 1, …, m. If f(·, y) is strictly (G_f, β)-invex at z* for each y ∈ Y and g_j is (G_{g_j}, β)-invex at z* for each j = 1, …, m, then x* = z*; that is, z* is a (P)-optimal solution and the two problems have the same optimal values.
Proof. Suppose to the contrary that x* ≠ z*. Similar to the arguments in the proof of Theorem 8, the strict invexity assumptions yield strict inequalities between the composed objective and constraint terms at x* and z*. Combining these with the dual feasibility conditions, we conclude that the optimal value of (P) strictly exceeds that of (DI); that is, (50) holds. On the other hand, we know from Theorem 10 that the opposite inequality holds. This contradicts (50).
5. Conclusions
In this paper, we have discussed the applications of (G, β)-invexity to a class of nonsmooth minimax programming problems (P). First, we established G-necessary optimality conditions for problem (P). Under the nondifferentiable (G, β)-invexity assumptions, we have also derived the sufficiency of the G-necessary optimality conditions for the same problem. Further, we have constructed a dual model (DI) and derived G-duality results between problems (P) and (DI). Note that many researchers are interested in minimax programming under generalized invexity assumptions; see [1, 10, 11, 14–17]. However, we have not found results for minimax programming problems under G-invexity or its extensions. Hence, this work extends the applications of G-invexity to generalized minimax programming as well as to the nonsmooth case.
Acknowledgments
The authors are grateful to the referee for valuable suggestions that helped to improve this paper. This research is supported by the Science Foundation of Hanshan Normal University (LT200801).
References
D. H. Yuan, X. L. Liu, S. Y. Yang, and G. M. Lai, "Nondifferential mathematical programming involving (G, β)-invexity," Journal of Inequalities and Applications, vol. 2012, article 256, 2012.
F. H. Clarke, Optimization and Nonsmooth Analysis, John Wiley & Sons, New York, NY, USA, 1983.