Abstract and Applied Analysis
Volume 2013 (2013), Article ID 131938, 8 pages
Optimality Conditions for Nonsmooth Generalized Semi-Infinite Programs
Zhangyou Chen and Zhe Chen
1Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
2Business School, Sichuan University, Chengdu 610064, China
Received 30 July 2013; Accepted 21 August 2013
Academic Editor: Jen-Chih Yao
Copyright © 2013 Zhangyou Chen and Zhe Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We consider a class of nonsmooth generalized semi-infinite programming (GSIP) problems. Applying results from parametric optimization to the lower level problems, we obtain estimates for their optimal value functions and thereby derive necessary optimality conditions for GSIP. We also establish new estimates for the lower level value functions in terms of generalized differentiation and use them to obtain further necessary optimality conditions.
1. Introduction
The generalized semi-infinite programming problem (GSIP) is of the form (1): minimize an objective function of the decision variable subject to a constraint that must hold for every index in a decision-dependent index set. GSIP differs from standard semi-infinite programming in that its index set depends on the decision variable.
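Since the displayed formulation was lost in extraction, the following is a hedged reconstruction of the standard GSIP format; the symbols f, g, v_j, Y, n, m, q are introduced here and may differ from the paper's own notation.

```latex
% Hedged reconstruction of problem (1); notation introduced here.
\[
  (1):\quad \min_{x\in\mathbb{R}^n} f(x)
  \quad\text{s.t.}\quad g(x,y)\le 0 \ \ \text{for all } y\in Y(x),
\]
\[
  Y(x) \;=\; \bigl\{\, y\in\mathbb{R}^m :\ v_j(x,y)\le 0,\ j=1,\dots,q \,\bigr\}.
\]
```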
The first systematic study of GSIP was Hettich and Still [1], where the reduction method was used to reduce GSIP to standard nonlinear programming problems and second-order optimality conditions were derived. Necessary optimality conditions at an optimal solution of (1) with differentiable data are of Fritz John type: there exist nonnegative numbers, not all zero, serving as multipliers for the objective and for finitely many active lower level indices, where each active index contributes its usual FJ-multipliers of the lower level problem at the corresponding lower level solution. This condition was first derived by Jongen et al. [2] in an elementary way, without any constraint qualification or reduction approach. They also proposed a constraint qualification under which the multiplier of the objective can be taken to be nonzero, and they discussed geometrical properties of the feasible set that do not appear in the standard semi-infinite case. The optimality conditions were further explored by Rückmann and Shapiro [3] and Stein [4].
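In symbols, such a Fritz John type condition can be written as follows; this is a hedged paraphrase in notation we introduce here (the symbols κ, λ_i, α_i, γ^i, y^i, p are not taken from the paper), and the precise statement is in [2].

```latex
% Hedged paraphrase of the first-order FJ-type condition for smooth GSIP (cf. [2]).
\[
  \kappa\,\nabla f(\bar x)
  + \sum_{i=1}^{p}\lambda_i\Bigl(\alpha_i\,\nabla_x g(\bar x,y^i)
      - \sum_{j=1}^{q}\gamma^i_j\,\nabla_x v_j(\bar x,y^i)\Bigr) = 0,
\]
% where y^1, ..., y^p are active indices (y^i in Y(xbar) with g(xbar, y^i) = 0),
% each pair (alpha_i, gamma^i) is an FJ multiplier of the lower level problem
% at y^i, and kappa, lambda_1, ..., lambda_p >= 0 are not all zero.
```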
GSIP exhibits complex and distinctive structural features, such as possible nonclosedness of the feasible set, nonconvexity, nonsmoothness, and a bilevel structure, and it is therefore a difficult problem to solve; see, for example, [2, 5, 6]. We also refer to [7–10] for recent studies on the structure of GSIP.
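To make the nonclosedness phenomenon concrete, consider the following toy instance; it is our illustration in the notation introduced above, not an example taken from the paper.

```latex
% Hedged illustrative GSIP instance: n = m = 1, f(x) = -x,
% g(x,y) = 1 - y, Y(x) = { y : 0 <= y <= x }.
\[
  \min_{x\in\mathbb{R}} \; -x
  \quad\text{s.t.}\quad 1-y\le 0 \ \ \forall\, y\in Y(x),
  \qquad Y(x)=\{\,y\in\mathbb{R}: 0\le y\le x\,\}.
\]
% For x < 0 the index set Y(x) is empty and the constraint holds vacuously;
% for x >= 0 the index y = 0 lies in Y(x) and violates 1 <= 0.  Hence the
% feasible set is (-infinity, 0), which is open, and the infimum 0 of -x
% over it is not attained.  The lower level value function equals 1 for
% x >= 0 and minus infinity for x < 0, so it is discontinuous as well.
```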
It is obvious that GSIP can be rewritten equivalently as a nonlinear programming problem with the single constraint that the optimal value of the lower level problem be nonpositive; GSIP can then be related to a min-max problem. On the other hand, GSIP can be related to the bilevel problem (6), which is a special bilevel optimization problem in that its upper level constraint function is the same as the objective function of its lower level problem. However, there is a slight difference between the GSIP problem (1) and problem (6): the feasible set of (6) corresponds to a subset of that of (1), since the feasible set of (1) is the union of (the projection of) the feasible set of (6) and the set of points at which the index set is empty. For more comparisons between GSIP and bilevel problems, see Stein and Still [11].
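Since the displayed reformulations were lost in extraction, here is a hedged reconstruction in notation we introduce: Q(x) for the lower level problem, φ for its optimal value function, and S(x) for its solution set.

```latex
% Hedged reconstruction of the value function and the two reformulations.
\[
  Q(x):\quad \varphi(x)\;=\;\sup_{y\in Y(x)} g(x,y),
  \qquad
  S(x)\;=\;\{\, y\in Y(x) : g(x,y)=\varphi(x) \,\},
\]
\[
  \text{GSIP (1)}\;\Longleftrightarrow\;\min_{x} f(x)\quad\text{s.t.}\quad \varphi(x)\le 0,
\]
\[
  (6):\quad \min_{x,y} f(x)\quad\text{s.t.}\quad g(x,y)\le 0,\;\; y\in S(x).
\]
```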
The bilevel problem (6) is equivalent to the single-level problem (7) obtained by expressing the lower level solution requirement through the lower level optimal value function. This approach was used by Dempe and Zemkoho [12] to study bilevel optimization problems. For general references on bilevel optimization, see [13].
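Continuing with the same hedged notation, the value function reformulation of (6), which we believe is what problem (7) denotes, reads:

```latex
% Hedged sketch of the value function reformulation (7) of problem (6).
\[
  (7):\quad \min_{x,y} f(x)
  \quad\text{s.t.}\quad
  g(x,y)\le 0,\qquad
  v_j(x,y)\le 0,\; j=1,\dots,q,\qquad
  \varphi(x)-g(x,y)\le 0.
\]
% The last constraint, together with y in Y(x), says exactly that y solves
% the lower level problem Q(x).
```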
In this paper we concentrate on optimality conditions for nonsmooth GSIP whose defining functions are Lipschitz continuous; related works are [14, 15]. We achieve this via the lower level optimal value function reformulation and derive necessary optimality conditions via generalized differentiation. One of the key steps is to estimate the generalized gradients of the lower level optimal value function, which involves parametric optimization. We consider two cases, with different approaches related to the two reformulations of GSIP mentioned above. First, we develop optimality conditions via the min-max formulation when the lower level optimal value function is Lipschitz. Second, we develop optimality conditions via the bilevel formulation under the assumption of partial calmness.
2. Preliminaries
In this section, we present some basic definitions and results from variational analysis [16, 17]: the regular normal cone and the (general, limiting) normal cone to a set at one of its points; for an extended-real-valued function finite at a given point, the regular subdifferential, the general (basic, limiting) subdifferential, and the singular subdifferential, all defined through the corresponding normal cones to the epigraph of the function; and the upper regular subdifferential and the upper subdifferential, defined symmetrically through the negative of the function.
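A hedged reconstruction of the standard formulas follows (cf. [16, 17]); the set Ω, the function ψ, and the point x̄ are names we introduce because the original displays were lost.

```latex
% Hedged reconstruction of the standard definitions (cf. [16, 17]).
\[
  \widehat N_\Omega(\bar x)=\Bigl\{v:\ \limsup_{x\xrightarrow{\Omega}\bar x}
    \frac{\langle v,x-\bar x\rangle}{\|x-\bar x\|}\le 0\Bigr\},
  \qquad
  N_\Omega(\bar x)=\limsup_{x\xrightarrow{\Omega}\bar x}\widehat N_\Omega(x),
\]
\[
  \widehat\partial\psi(\bar x)=\{v:(v,-1)\in\widehat N_{\operatorname{epi}\psi}(\bar x,\psi(\bar x))\},\qquad
  \partial\psi(\bar x)=\{v:(v,-1)\in N_{\operatorname{epi}\psi}(\bar x,\psi(\bar x))\},
\]
\[
  \partial^\infty\psi(\bar x)=\{v:(v,0)\in N_{\operatorname{epi}\psi}(\bar x,\psi(\bar x))\},\qquad
  \widehat\partial^{+}\psi(\bar x)=-\widehat\partial(-\psi)(\bar x),\quad
  \partial^{+}\psi(\bar x)=-\partial(-\psi)(\bar x).
\]
```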
The Clarke (convexified) normal cone can be defined in two different ways. On the one hand, it can be defined as the polar cone of Clarke's tangent cone, or equivalently via the generalized directional derivative of the (Lipschitzian) distance function; see Clarke [18]. On the other hand, it can be defined as the closed convex hull of the (general) normal cone. For the latter definition and the equivalence of the two definitions, see, for example, Rockafellar and Wets [16].
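In the same hedged notation, the two descriptions of the Clarke normal cone read as follows (cf. [16, 18]); T^c denotes the Clarke tangent cone.

```latex
% Hedged reconstruction of the two equivalent descriptions.
\[
  \overline N_\Omega(\bar x)\;=\;T^{\,c}_\Omega(\bar x)^{\circ}
  \;=\;\{\,v:\ \langle v,w\rangle\le 0\ \ \forall\,w\in T^{\,c}_\Omega(\bar x)\,\},
  \qquad
  \overline N_\Omega(\bar x)\;=\;\operatorname{cl}\operatorname{co}N_\Omega(\bar x).
\]
```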
The Clarke subgradients and Clarke horizon subgradients of a function at a point are defined via the Clarke normal cone to its epigraph in the same way as above. The relationship between the Clarke subdifferentials and the basic subdifferentials is discussed by Mordukhovich [17, Theorem 3.57].
Proposition 1 (see [17]). Let a function be proper and lower semicontinuous around a point at which it is finite. Then its Clarke subdifferential at that point is the closed convex hull of the sum of its basic and singular subdifferentials there. If, in particular, the function is Lipschitz continuous at the point, then its Clarke subdifferential is the convex hull of its basic subdifferential.
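In symbols, the content of Proposition 1 should be the following well-known relations (a hedged reconstruction, cf. [17, Theorem 3.57]):

```latex
% Hedged reconstruction of Proposition 1.
\[
  \overline\partial\psi(\bar x)=\operatorname{cl}\operatorname{co}\bigl[\partial\psi(\bar x)+\partial^\infty\psi(\bar x)\bigr],
  \qquad\text{and, if }\psi\text{ is Lipschitz near }\bar x,\qquad
  \overline\partial\psi(\bar x)=\operatorname{co}\,\partial\psi(\bar x).
\]
```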
The (general) normal cone enjoys the robustness property in finite dimensions [17, page 11]; that is, it is closed as a set-valued mapping of the base point. However, this is not true for the convexified (Clarke) normal cone; see, for example, Rockafellar [19]. In the two-dimensional example given there, the normal cone at the reference point is just a coordinate axis, while the convexified cone equals the whole plane at nearby points of the set, so the convexified cone mapping fails to be closed. The following proposition is from Rockafellar [19].
Proposition 2 (see [19]). If the set is convex, or if its convexified normal cone at the reference point is pointed, then the convexified normal cone mapping is closed at that point; that is, for every sequence of set points converging to the reference point and every convergent sequence of convexified normals at those points, the limit is a convexified normal at the reference point.
Proposition 3. The Clarke normal cone has the robustness property at a point of the set provided that the Clarke normal cone at that point is pointed.
Proof. It suffices to prove that the convex hull of the (general) normal cone is closed, so that the closure operation in the definition of the convexified cone can be dropped. Take a convergent sequence in the convex hull; since the sets involved are cones, each term of the sequence can be written as a sum of finitely many (at most the space dimension) normal vectors, with the scalars absorbed into the vectors. We first show that these summands remain bounded. Otherwise, after normalizing by the total norm and passing to subsequences, the normalized summands converge; their limits belong to the normal cone by its closedness, their sum is zero, and hence, by the pointedness assumption, all of them are zero. On the other hand, the normalized summands have total norm one, so their limits cannot all vanish. This is a contradiction, and thus the summands are bounded. By taking subsequences, we may assume they converge; their limits belong to the normal cone, and their sum equals the limit of the original sequence, which therefore lies in the convex hull. This completes the proof.
The following definitions are required for further development.
Definition 4 (see [17, Definition 1.63]). Let a set-valued mapping be given together with a point of its graph. (i) The mapping is inner semicontinuous at this point if, for every sequence of parameters in its domain converging to the base point, there is a sequence of images converging to the reference image point. (ii) The mapping is inner semicompact at the base point if, for every sequence of parameters in its domain converging to it, there is a sequence of images that contains a convergent subsequence. (iii) The mapping is μ-inner semicontinuous (respectively, μ-inner semicompact) at the point if, in the above two cases, the convergence of the parameters is additionally required to be accompanied by convergence of the values of the marginal function μ to its value at the base point.
Here the concept of μ-inner semicontinuity/semicompactness is important for our considerations: the value function of the lower level problem of GSIP is typically not continuous, and it may even take the value minus infinity, namely at points where the index set is empty.
Theorem 5 (subdifferentiation of maximum functions [17, Theorem 3.46]). Consider a maximum of finitely many functions. Let the functions that are active at the reference point be lower semicontinuous around it, and let the inactive ones be upper semicontinuous at it. Assume that the qualification condition (22) holds. Then the basic subdifferential of the maximum function at the reference point is contained in the union, over all convex multipliers supported on the active indices, of the basic subdifferentials of the corresponding nonnegative combinations of the active functions.
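In symbols, and with ψ, ψ_i, and I(x̄) our names for the maximum function, its components, and the active index set, the conclusion of Theorem 5 should read as follows (hedged reconstruction of [17, Theorem 3.46]):

```latex
% Hedged reconstruction of the maximum rule.
\[
  \psi(x)=\max_{1\le i\le n}\psi_i(x),\qquad
  I(\bar x)=\{\,i:\psi_i(\bar x)=\psi(\bar x)\,\},
\]
\[
  \partial\psi(\bar x)\;\subset\;
  \bigcup\Bigl\{\,\partial\Bigl(\sum_{i\in I(\bar x)}\lambda_i\psi_i\Bigr)(\bar x)
  :\ \lambda_i\ge 0,\ \sum_{i\in I(\bar x)}\lambda_i=1\,\Bigr\}.
\]
```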
Note that qualification (22) always holds if all related functions are locally Lipschitz.
The following two results concern continuity properties and estimates of subdifferentials of marginal functions, which are crucial to our analysis of GSIP problems.
Proposition 6 (limiting subgradients of marginal functions [20]). Consider the parametric optimization problem (24) of minimizing a cost over a parameter-dependent set described by finitely many inequality constraints, with marginal (optimal value) function μ and solution mapping M; for simplicity, the case with equality constraints is not considered. (i) Assume that the solution mapping is μ-inner semicontinuous at the reference point of its graph, that the cost and all constraint functions are Lipschitz continuous around this point, and that the qualification condition (25) is satisfied. Then one has upper estimates for the basic and singular subdifferentials of μ at the reference parameter in terms of multipliers and subgradients of the cost and constraint functions at the reference point. (ii) Assume that the solution mapping is μ-inner semicompact at the reference parameter, that the cost and all constraint functions are Lipschitz continuous around every point of the solution set, and that qualification (25) holds at every such point. Then the same estimates hold with the union taken additionally over all points of the solution set.
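The inclusions in question should be of the following type (a hedged reconstruction following [20], with θ_0, θ_i, μ, M, and Λ our local names for the cost, the constraints, the marginal function, the solution mapping, and the multiplier set):

```latex
% Hedged sketch of Proposition 6 (cf. [20]):
%   mu(x) = inf{ theta_0(x,y) : theta_i(x,y) <= 0, i = 1,...,m },
%   M(x)  = set of minimizers,
%   Lambda(x,y) = { lambda >= 0 : lambda_i * theta_i(x,y) = 0 for all i }.
\[
  \partial\mu(\bar x)\;\subset\;
  \bigcup_{\lambda\in\Lambda(\bar x,\bar y)}
  \Bigl\{\,x^*:\ (x^*,0)\in
     \partial\theta_0(\bar x,\bar y)
     +\sum_{i=1}^{m}\lambda_i\,\partial\theta_i(\bar x,\bar y)\Bigr\}
  \quad\text{(case (i))},
\]
% with an analogous estimate for the singular subdifferential and, in case
% (ii), an additional union over all ybar in M(xbar).
```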
Proposition 7 (Lipschitz continuity of marginal functions [21, Theorem 5.2]). Continue to consider the parametric problem (24) of Proposition 6. Then the following assertions hold. (i) Assume that the solution mapping is μ-inner semicontinuous at the reference point of its graph and that the cost function is locally Lipschitz around this point. Then μ is Lipschitz continuous around the reference parameter provided that it is lower semicontinuous around it and the constraint mapping is Lipschitz-like around the reference point. (ii) Assume that the solution mapping is μ-inner semicompact at the reference parameter and that the cost function is locally Lipschitz around every point of the solution set. Then μ is Lipschitz continuous around the reference parameter provided that it is lower semicontinuous around it and the constraint mapping is Lipschitz-like around every point of the solution set.
3. Main Results
Now we are prepared to develop optimality conditions for the GSIP problem (1). Given a local solution of problem (1), we associate with it the min-max problem whose objective is the maximum of the shifted objective of (1) and the lower level optimal value function. If a point solves GSIP (1), then it also solves this min-max problem, and thus, by the generalized Fermat rule (cf. [16, Theorem 10.1]), zero belongs to the basic subdifferential of this maximum function at the point; this is relation (31). So calculus for the maximum function and estimates of the subdifferentials involved are essential to proceed. From (31) and Theorem 5, there exists a convex multiplier such that (if the value function is Lipschitz) zero belongs to the corresponding combination of the subdifferentials of the objective and of the value function; this is relation (32). An elementary subdifferential rule for Lipschitz functions, recorded as (33), will also be used.
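In the hedged notation introduced above (φ the lower level value function, x̄ the local solution), the construction and relations (31) and (32) should read roughly as follows:

```latex
% Hedged reconstruction of the associated min-max problem and of (31)-(32).
\[
  \min_{x}\ \max\bigl\{\,f(x)-f(\bar x),\ \varphi(x)\,\bigr\},
\]
% xbar is a local solution of this problem with optimal value 0, so the
% generalized Fermat rule [16, Theorem 10.1] gives
\[
  0\in\partial\max\bigl\{\,f(\cdot)-f(\bar x),\ \varphi(\cdot)\,\bigr\}(\bar x),
\]
% and Theorem 5 (when phi is Lipschitz) yields some lambda in [0,1] with
\[
  0\in\lambda\,\partial f(\bar x)+(1-\lambda)\,\partial\varphi(\bar x),
\]
% where lambda = 1 whenever phi(xbar) < 0 (value function constraint inactive).
```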
Theorem 8 (optimality for GSIP with Lipschitz lower level optimal value function). Consider the GSIP problem (1) and let a point be its locally optimal solution. Assume that the objective, the semi-infinite constraint function, and the functions describing the index set are all Lipschitz continuous, that the lower level solution mapping is μ-inner semicompact at the optimal point, and that the index set mapping is Lipschitz-like around the corresponding points for all lower level solutions. Then there exist a scalar multiplier in the unit interval, finitely many lower level solutions, and associated nonnegative multipliers satisfying the complementarity conditions, such that the stationarity inclusion stated in the theorem holds. If, in addition, the data functions are regular at the corresponding points, then the optimality condition takes the sharper form stated in the theorem, in which the subdifferentials of sums split into sums of subdifferentials.
Proof. Under regularity and Lipschitz continuity, the sum rule for basic subgradients holds with equality (see [16, Corollary 10.11]), so the last two relations of the theorem follow directly from the first one. Note that the negative of the lower level objective plays the role of the cost function in Proposition 6. Under our assumptions, the value function is Lipschitz continuous by Proposition 7, and its subdifferential admits the estimate of Proposition 6; this is relation (37). If the point solves GSIP, then it also solves the associated min-max problem, and by (31), (32), and (33) there exists a scalar multiplier such that (38) holds. Combining (37) and (38), there exist a scalar multiplier, finitely many lower level solutions, and nonnegative multipliers satisfying the complementarity conditions such that (39) holds; after rescaling the multipliers, we obtain the desired result.
Corollary 9. In addition to the assumptions of Theorem 8, assume that the objective, the semi-infinite constraint function, and the functions describing the index set are continuously differentiable. Then the optimality condition at the optimal point is that there exist a scalar multiplier in the unit interval, finitely many lower level solutions, and nonnegative multipliers satisfying the complementarity conditions, such that the corresponding stationarity equation holds in terms of gradients, analogously to the first-order condition recalled in the introduction.
Next we consider the case where the lower level value function may fail to be Lipschitz. We give estimates for its subdifferentials and thus derive further optimality conditions for GSIP; however, this requires the use of Clarke subdifferentials.
Proposition 10. Consider the parametric optimization problem (24), with its marginal function and corresponding solution mapping, and fix a reference parameter. Assume that the following conditions hold: (i) the marginal function is lower semicontinuous at the reference parameter; (ii) the solution mapping is μ-inner semicompact there and the solution set is nonempty and compact; (iii) the only combination of nonnegative multipliers and horizon (singular) subgradients of the data that sums to zero in the manner specified in the proposition is the trivial one; (iv) the Clarke normal cones appearing in the estimate, at the reference parameter and at all points of the solution set, are pointed. Then the Clarke subdifferential of the marginal function at the reference parameter admits the upper estimate stated in the proposition.
Proof (sketch). The proof is divided into two parts. First we show that the set on the right-hand side of the required inclusion is closed. The second step is to justify that the Clarke subdifferential of the marginal function is contained in this set.
Take a limit point of the right-hand-side set together with a sequence converging to it, each term of which is represented as in (43) and (44) through multipliers, subgradients, and normal vectors associated with points of the solution set. We have to show that the limit also belongs to the set. Since the solution set is compact, we may assume that the associated solution points converge. We first show that the sequence of multipliers is bounded. Suppose, on the contrary, that it is unbounded. Dividing (44) by the norms of the multipliers and summing over the indices, we obtain (45). Note that, for each index, the normalized vectors still belong to the corresponding Clarke normal cones, by the definition of these cones. Passing to the limit and using assumption (iv) together with Proposition 3, the limiting vectors again belong to the respective normal cones, and combining (45) and (47) we arrive at a relation among these limits. Based on assumption (iii), all of the limiting multipliers and vectors must then vanish. This contradicts the fact that the normalized sequence has unit norm, and thus the sequence of multipliers is bounded. Consequently, the multipliers and the associated subgradients have convergent subsequences, and their limits provide a representation of the limit point in the form required by the right-hand-side set. That is to say, the set is closed.
Next, we justify the required inclusion. Take a Clarke subgradient of the marginal function at the reference parameter. Under the semicompactness assumption, invoking [17, Theorem 1.108], one obtains upper estimates for the basic and singular subdifferentials of the marginal function. Employing the sum rule from [17, Theorem 3.36] in the two estimates above leads to the required inclusion, which completes the proof.
Theorem 11 (convexified normal cone to an inequality system). Consider a set defined by finitely many inequality constraints. Let the constraint functions be Lipschitz and let the nonsmooth Mangasarian-Fromovitz constraint qualification (51) hold at the reference point. Then the convexified normal cone to the set at this point is contained in the set of nonnegative combinations of Clarke subgradients of the active constraint functions.
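A statement of this type can be sketched as follows (hedged, with Ω, ψ_i, and I(x̄) our local names for the set, the constraint functions, and the active index set; the paper's exact display may differ):

```latex
% Hedged sketch: Omega = { x : psi_i(x) <= 0, i = 1,...,m }, all psi_i Lipschitz.
% Nonsmooth MFCQ at xbar: 0 does not belong to the convex hull of the union,
% over i in I(xbar), of the Clarke subdifferentials of psi_i at xbar.  Then
\[
  \overline N_\Omega(\bar x)\;\subset\;
  \Bigl\{\,\sum_{i\in I(\bar x)}\lambda_i\,\xi_i:\ \lambda_i\ge 0,\
      \xi_i\in\overline\partial\psi_i(\bar x)\,\Bigr\}.
\]
```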
As mentioned, GSIP can be relaxed into the bilevel programming problem (6). The feasible set of this problem corresponds to a subset of the feasible set of (1). Thus, if a point solves GSIP, its lower level solution set is nonempty, and a lower level solution is chosen, then the resulting pair also solves problem (6). One then considers the perturbed version of this bilevel problem obtained by shifting the value function constraint by a parameter. Problem (6) is said to be partially calm at a solution [23] if, for all small perturbations, the violation of the value function constraint, scaled by a suitable constant, locally bounds the decrease of the objective. Under the partial calmness condition, problem (6) can be transformed, for some constant, into the penalized problem (56) obtained by adding to the objective the scaled violation of the value function constraint.
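In the hedged notation used above, and following Ye and Zhu [23], the perturbed problem, the partial calmness condition, and the penalized problem (56) can be sketched as follows (μ below denotes a penalty constant, not the marginal function of Section 2):

```latex
% Hedged sketch of the perturbed problem, partial calmness, and (56).
\[
  (6)_u:\quad \min_{x,y} f(x)\quad\text{s.t.}\quad
  g(x,y)\le 0,\ \ y\in Y(x),\ \ \varphi(x)-g(x,y)+u=0.
\]
% Partial calmness at (xbar, ybar): there exist mu > 0 and a neighborhood of
% (xbar, ybar, 0) such that every (x, y, u) feasible for (6)_u in this
% neighborhood satisfies
\[
  f(x)-f(\bar x)+\mu\,|u|\;\ge\;0.
\]
% The corresponding penalized problem is then
\[
  (56):\quad \min_{x,y}\ f(x)+\mu\bigl(\varphi(x)-g(x,y)\bigr)
  \quad\text{s.t.}\quad g(x,y)\le 0,\ \ y\in Y(x).
\]
```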
Theorem 12 (necessary optimality conditions for GSIP). Let a point be an optimal solution of GSIP whose lower level solution set is nonempty, and fix a lower level solution at this point. Let the data functions of (1) be Lipschitz and let the partial calmness condition (55) hold at the chosen pair for some penalty constant. Assume that the following conditions hold: (i) qualification (51) holds for the index set system at the chosen pair; (ii) the lower level solution mapping is μ-inner semicompact at the point and its value there is nonempty and compact; (iii) the condition of Proposition 10(iii), excluding nontrivial singular-type multipliers, is satisfied; (iv) the Clarke normal cones appearing in the estimates, at the point and at all lower level solutions, are pointed. Then there exist finitely many lower level solutions and nonnegative multipliers satisfying the complementarity conditions such that the stationarity inclusion stated in the theorem holds.
Proof. Choose a lower level solution at the optimal point. Under our assumptions, GSIP can be relaxed into problem (6), and the resulting pair solves (6). Due to the partial calmness of (6), the pair also solves the penalized problem (56). Under the qualification assumption, the necessary optimality condition for problem (56) yields a stationarity inclusion of Lagrangian type (see, e.g., [24, Theorem 5.1] or [22, Theorem 6.2]).
By the definitions of the subdifferentials involved, the subgradient appearing in this inclusion can be estimated termwise; combined with (58), this gives the intermediate estimate used in the next step.
Applying Proposition 10 to the lower level optimal value function, there exist finitely many lower level solutions and corresponding nonnegative multipliers, satisfying the complementarity conditions, such that the Clarke subgradient of the value function appearing above admits the representation stated there.
So, noting the relation between the Clarke and basic subdifferentials recorded in Proposition 1, we arrive at the asserted optimality condition.
This completes the proof.
Conflict of Interests
The authors declare that there is no conflict of interests.
Acknowledgments
This work is supported by the National Science Foundation of China (11001289) and the Key Project of the Chinese Ministry of Education (211151).
- R. Hettich and G. Still, “Second order optimality conditions for generalized semi-infinite programming problems,” Optimization, vol. 34, no. 3, pp. 195–211, 1995.
- H. Th. Jongen, J.-J. Rückmann, and O. Stein, “Generalized semi-infinite optimization: a first order optimality condition and examples,” Mathematical Programming A, vol. 83, no. 1, pp. 145–158, 1998.
- J. J. Rückmann and A. Shapiro, “First-order optimality conditions in generalized semi-infinite programming,” Journal of Optimization Theory and Applications, vol. 101, no. 3, pp. 677–691, 1999.
- O. Stein, “First-order optimality conditions for degenerate index sets in generalized semi-infinite optimization,” Mathematics of Operations Research, vol. 26, no. 3, pp. 565–582, 2001.
- G. Still, “Generalized semi-infinite programming: theory and methods,” European Journal of Operational Research, vol. 119, no. 2, pp. 301–313, 1999.
- O. Stein, Bi-Level Strategies in Semi-Infinite Programming, vol. 71 of Nonconvex Optimization and its Applications, Kluwer Academic, Boston, Mass, USA, 2003.
- H. Günzel, H. Th. Jongen, and O. Stein, “On the closure of the feasible set in generalized semi-infinite programming,” Central European Journal of Operations Research, vol. 15, no. 3, pp. 271–280, 2007.
- H. Günzel, H. Th. Jongen, and O. Stein, “Generalized semi-infinite programming: the symmetric reduction ansatz,” Optimization Letters, vol. 2, no. 3, pp. 415–424, 2008.
- F. Guerra-Vázquez, H. Th. Jongen, and V. Shikhman, “General semi-infinite programming: symmetric Mangasarian-Fromovitz constraint qualification and the closure of the feasible set,” SIAM Journal on Optimization, vol. 20, no. 5, pp. 2487–2503, 2010.
- H. Th. Jongen and V. Shikhman, “Generalized semi-infinite programming: the nonsmooth symmetric reduction ansatz,” SIAM Journal on Optimization, vol. 21, no. 1, pp. 193–211, 2011.
- O. Stein and G. Still, “On generalized semi-infinite optimization and bilevel optimization,” European Journal of Operational Research, vol. 142, no. 3, pp. 444–462, 2002.
- S. Dempe and A. B. Zemkoho, “The generalized Mangasarian-Fromowitz constraint qualification and optimality conditions for bilevel programs,” Journal of Optimization Theory and Applications, vol. 148, no. 1, pp. 46–68, 2011.
- K. Shimizu, Y. Ishizuka, and J. F. Bard, Nondifferentiable and Two-level Mathematical Programming, Kluwer Academic, Boston, Mass, USA, 1997.
- N. Kanzi and S. Nobakhtian, “Necessary optimality conditions for nonsmooth generalized semi-infinite programming problems,” European Journal of Operational Research, vol. 205, no. 2, pp. 253–261, 2010.
- N. Kanzi, “Lagrange multiplier rules for non-differentiable DC generalized semi-infinite programming problems,” Journal of Global Optimization, vol. 56, no. 2, pp. 417–430, 2013.
- R. T. Rockafellar and R. J. B. Wets, Variational Analysis, vol. 317 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer, Berlin, Germany, 1998.
- B. S. Mordukhovich, Variational Analysis and Generalized Differentiation. I: Basic Theory, vol. 330 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer, Berlin, Germany, 2006.
- F. H. Clarke, Optimization and Nonsmooth Analysis, Canadian Mathematical Society Series of Monographs and Advanced Texts, John Wiley & Sons, New York, NY, USA, 1983.
- R. T. Rockafellar, The Theory of Subgradients and Its Applications to Problems of Optimization. Convex and Nonconvex, vol. 1 of R & E, Heldermann, Berlin, Germany, 1981.
- B. S. Mordukhovich, N. M. Nam, and N. D. Yen, “Subgradients of marginal functions in parametric mathematical programming,” Mathematical Programming B, vol. 116, no. 1-2, pp. 369–396, 2009.
- B. S. Mordukhovich and N. M. Nam, “Variational stability and marginal functions via generalized differentiation,” Mathematics of Operations Research, vol. 30, no. 4, pp. 800–816, 2005.
- B. S. Mordukhovich, N. M. Nam, and H. M. Phan, “Variational analysis of marginal functions with applications to bilevel programming,” Journal of Optimization Theory and Applications, vol. 152, no. 3, pp. 557–586, 2012.
- J. J. Ye and D. L. Zhu, “Optimality conditions for bilevel programming problems,” Optimization, vol. 33, no. 1, pp. 9–27, 1995.
- S. Dempe, J. Dutta, and B. S. Mordukhovich, “New necessary optimality conditions in optimistic bilevel programming,” Optimization, vol. 56, no. 5-6, pp. 577–604, 2007.