Abstract

We consider a class of nonsmooth generalized semi-infinite programming problems whose defining functions are Lipschitz continuous. We apply results from parametric optimization to the lower level problems of generalized semi-infinite programming problems to obtain estimates for the value functions of the lower level problems and thereby derive necessary optimality conditions. We also derive some new estimates for the value functions of the lower level problems in terms of generalized differentiation and from these obtain further necessary optimality conditions.

1. Introduction

The generalized semi-infinite programming problem (GSIP) is of the form

\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g(x, y) \le 0 \quad \text{for all } y \in Y(x), \tag{1}
\]

where $Y(x) := \{ y \in \mathbb{R}^m : v_l(x, y) \le 0, \ l = 1, \dots, q \}$. GSIP differs from standard semi-infinite programming in that its index set $Y(x)$ depends on the decision variable $x$.
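For a concrete illustration (this toy instance is our own, not taken from the references), let $n = m = 1$ and

\[
f(x) = \Big(x - \tfrac{1}{2}\Big)^2, \qquad g(x, y) = y - x^2, \qquad Y(x) = \{ y \in \mathbb{R} : -1 - y \le 0, \ y - x \le 0 \} = [-1, x].
\]

For $x \ge -1$ the semi-infinite constraint amounts to $\max_{y \in [-1, x]} (y - x^2) = x - x^2 \le 0$, while every $x < -1$ is feasible vacuously because $Y(x) = \emptyset$. The feasible set is therefore $(-\infty, 0] \cup [1, +\infty)$: disconnected and nonconvex even though all defining functions are smooth. We return to this instance repeatedly below.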

The first systematic study of GSIP was Hettich and Still [1], where the reduction method was used to reduce GSIP to standard nonlinear programming problems and second-order optimality conditions were derived. Necessary optimality conditions at an optimal solution $\bar x$ of (1) with differentiable data are as follows: there exist nonnegative numbers $\kappa, \lambda_1, \dots, \lambda_p$, not all zero, such that

\[
\kappa \nabla f(\bar x) + \sum_{i=1}^{p} \lambda_i \nabla_x L_i(\bar x) = 0, \tag{2}
\]

where $L_i(x) := \gamma^i_0\, g(x, y^i) - \sum_{l=1}^{q} \gamma^i_l\, v_l(x, y^i)$ with points $y^i \in Y_0(\bar x) := \{ y \in Y(\bar x) : g(\bar x, y) = 0 \}$, and each $\gamma^i = (\gamma^i_0, \gamma^i_1, \dots, \gamma^i_q)$ is the usual FJ-multiplier of the lower level problem

\[
Q(\bar x)\colon \quad \max_{y} \ g(\bar x, y) \quad \text{subject to} \quad v_l(\bar x, y) \le 0, \ l = 1, \dots, q, \tag{3}
\]

at its optimal solution $y^i$. This condition was first derived by Jongen et al. [2] in an elementary way, without any constraint qualifications or any kind of reduction approach. They also proposed a constraint qualification under which it follows that $\kappa > 0$, and they discussed some geometrical properties of the feasible set which do not appear in the standard semi-infinite case. The optimality conditions were further explored by Rückmann and Shapiro [3] and Stein [4].

GSIP in itself has a complex and distinctive structure (nonclosedness of the feasible set, nonconvexity, nonsmoothness, and a bilevel structure) and is thus a difficult problem to solve; see, for example, [2, 5, 6]. We also refer to [7-10] for some recent studies on the structure of GSIP.

It is obvious that GSIP can be rewritten equivalently as the nonlinear programming problem

\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad \varphi(x) \le 0, \tag{4}
\]

where

\[
\varphi(x) := \sup_{y \in Y(x)} g(x, y) \tag{5}
\]

is the optimal value of the lower level problem $Q(x)$, that is, of (3) with $\bar x$ replaced by $x$ (with the convention $\varphi(x) = -\infty$ when $Y(x) = \emptyset$). Then we can relate GSIP to a min-max problem, which is made precise in Section 3; see [3] for more details. On the other hand, GSIP can be related to the following bilevel problem:

\[
\min_{x, y} f(x) \quad \text{subject to} \quad g(x, y) \le 0, \quad y \in S(x), \tag{6}
\]

where $S(x) := \{ y \in Y(x) : g(x, y) = \varphi(x) \}$ is the solution set of $Q(x)$. Problem (6) is a special bilevel optimization problem in that its upper level constraint function coincides with the objective function of its lower level problem. However, there is a slight difference between GSIP problem (1) and problem (6): the feasible set of (6) projects onto a subset of the feasible set of (1), since the feasible set of (1) is the combination of this projection with the complement of $\operatorname{dom} Y := \{ x : Y(x) \ne \emptyset \}$; every $x$ with $Y(x) = \emptyset$ is feasible for (1) vacuously. For more comparisons between GSIP and bilevel problems, see Stein and Still [11].
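The discrepancy between (1) and (6) is visible in the toy instance above (our computation): there $S(x) = \{x\}$ for $x \ge -1$ and $S(x) = \emptyset$ for $x < -1$, so

\[
\{ x : (x, y) \text{ is feasible for (6) for some } y \} = [-1, 0] \cup [1, +\infty) \subsetneq (-\infty, 0] \cup [1, +\infty),
\]

the missing piece $(-\infty, -1)$ being exactly the set where $Y(x) = \emptyset$.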

The bilevel problem (6) is equivalent to the following problem:

\[
\min_{x, y} f(x) \quad \text{subject to} \quad g(x, y) \le 0, \quad \varphi(x) - g(x, y) \le 0, \quad v_l(x, y) \le 0, \ l = 1, \dots, q, \tag{7}
\]

where $\varphi$ is the value function (5) and the constraints $\varphi(x) - g(x, y) \le 0$ and $v_l(x, y) \le 0$, $l = 1, \dots, q$, together express that $y \in S(x)$. This approach was used by Dempe and Zemkoho [12] to study bilevel optimization problems. For general references on bilevel optimization, see [13].

In this paper we concentrate on optimality conditions for nonsmooth GSIP whose defining functions are Lipschitz continuous; similar works are [14, 15]. We proceed via the lower level optimal value function reformulation and derive necessary optimality conditions by means of generalized differentiation. One of the key steps is to estimate the generalized gradients of the lower level optimal value function, which involves parametric optimization. We consider two cases, with different approaches, related to the two reformulations of GSIP mentioned above. First, we develop optimality conditions via the min-max formulation when the lower level optimal value function is Lipschitz. Second, we develop optimality conditions via the bilevel formulation under the assumption of partial calmness.

2. Preliminaries

In this section, we present some basic definitions and results from variational analysis [16, 17]. Given a set $\Omega \subset \mathbb{R}^n$, the regular normal cone of $\Omega$ at $\bar x \in \Omega$ is defined by

\[
\widehat N(\bar x; \Omega) := \Big\{ x^* \in \mathbb{R}^n : \limsup_{x \xrightarrow{\Omega} \bar x} \frac{\langle x^*, x - \bar x \rangle}{\| x - \bar x \|} \le 0 \Big\}, \tag{8}
\]

where $x \xrightarrow{\Omega} \bar x$ means $x \to \bar x$ with $x \in \Omega$. The (general) normal cone of $\Omega$ at $\bar x$ is defined by

\[
N(\bar x; \Omega) := \limsup_{x \xrightarrow{\Omega} \bar x} \widehat N(x; \Omega), \tag{9}
\]

the outer limit being taken in the Painlevé-Kuratowski sense. Given a function $f : \mathbb{R}^n \to \overline{\mathbb{R}} := [-\infty, +\infty]$ and a point $\bar x$ with $f(\bar x)$ finite, denote by $\operatorname{epi} f := \{ (x, \alpha) \in \mathbb{R}^{n+1} : \alpha \ge f(x) \}$ the epigraph of $f$. The regular subdifferential of $f$ at $\bar x$ is defined by

\[
\widehat\partial f(\bar x) := \{ x^* : (x^*, -1) \in \widehat N((\bar x, f(\bar x)); \operatorname{epi} f) \}. \tag{10}
\]

The general (basic, limiting) and singular subdifferentials of $f$ at $\bar x$ are defined, respectively, by

\[
\partial f(\bar x) := \{ x^* : (x^*, -1) \in N((\bar x, f(\bar x)); \operatorname{epi} f) \}, \qquad
\partial^\infty f(\bar x) := \{ x^* : (x^*, 0) \in N((\bar x, f(\bar x)); \operatorname{epi} f) \}. \tag{11}
\]

The upper regular subdifferential of $f$ at $\bar x$ is defined by $\widehat\partial^+ f(\bar x) := -\widehat\partial(-f)(\bar x)$, and the upper subdifferential of $f$ is defined by $\partial^+ f(\bar x) := -\partial(-f)(\bar x)$.
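For orientation, here is a standard one-dimensional computation (ours, not from [16, 17]) showing how these objects differ:

\[
f(x) = |x|: \quad \widehat\partial f(0) = \partial f(0) = [-1, 1], \quad \partial^\infty f(0) = \{0\};
\]
\[
f(x) = -|x|: \quad \widehat\partial f(0) = \emptyset, \quad \partial f(0) = \{-1, 1\}, \quad \partial^+ f(0) = [-1, 1].
\]

In particular, the basic subdifferential of a Lipschitz function may be nonconvex, and the singular subdifferential of any locally Lipschitz function reduces to $\{0\}$.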

The Clarke (convexified) normal cone can be defined by two different approaches. On the one hand, it can be defined as the polar cone of Clarke's tangent cone,

\[
N_C(\bar x; \Omega) := T_C(\bar x; \Omega)^{\circ}, \qquad \text{where} \quad T_C(\bar x; \Omega) := \liminf_{x \xrightarrow{\Omega} \bar x, \ t \downarrow 0} \frac{\Omega - x}{t}, \tag{12}
\]

or, equivalently, via the generalized directional derivative of the (Lipschitzian) distance function $d_\Omega(\cdot) := \inf_{u \in \Omega} \| \cdot - u \|$; see Clarke [18]. On the other hand, it can be defined as the closed convex hull of the (general) normal cone,

\[
N_C(\bar x; \Omega) = \operatorname{cl} \operatorname{co} N(\bar x; \Omega). \tag{13}
\]

For this definition and also for the equivalence of the two definitions, see, for example, Rockafellar and Wets [16].

The Clarke subgradients and Clarke horizon subgradients of $f$ at $\bar x$ are defined by

\[
\partial_C f(\bar x) := \{ x^* : (x^*, -1) \in N_C((\bar x, f(\bar x)); \operatorname{epi} f) \}, \qquad
\partial^\infty_C f(\bar x) := \{ x^* : (x^*, 0) \in N_C((\bar x, f(\bar x)); \operatorname{epi} f) \}. \tag{14}
\]

For the relationship between the Clarke subdifferentials and the basic subdifferentials, we refer to Mordukhovich [17, Theorem 3.57].

Proposition 1 (see [16]). Let $f$ be proper and lower semicontinuous around $\bar x$. Then

\[
\partial_C f(\bar x) = \operatorname{cl} \operatorname{co} \big[ \partial f(\bar x) + \partial^\infty f(\bar x) \big]. \tag{15}
\]

If, in particular, $f$ is Lipschitz continuous around $\bar x$, then $\partial_C f(\bar x) = \operatorname{co} \partial f(\bar x)$.
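Continuing the one-dimensional computation above, for $f(x) = -|x|$ Proposition 1 gives (our illustration)

\[
\partial_C f(0) = \operatorname{cl} \operatorname{co} \big[ \partial f(0) + \partial^\infty f(0) \big] = \operatorname{co} \{-1, 1\} = [-1, 1],
\]

so convexification may strictly enlarge the basic subdifferential $\{-1, 1\}$ and thus weaken any optimality condition expressed through $\partial_C$.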

The normal cone enjoys the robustness property

\[
N(\bar x; \Omega) = \limsup_{x \xrightarrow{\Omega} \bar x} N(x; \Omega), \tag{16}
\]

provided that the setting is finite dimensional [17, page 11]. However, this is not true for the convexified cone $N_C(\cdot\,; \Omega)$; see, for example, Rockafellar [19], who constructs a closed set $\Omega$ and a point $\bar x \in \Omega$ at which the normal cone $N(\bar x; \Omega)$, and hence $N_C(\bar x; \Omega)$, is just a coordinate axis, while $N_C(x; \Omega)$ is a full coordinate plane for all nearby $x \in \Omega$ with $x \ne \bar x$, so that robustness fails for $N_C$. The following proposition is from Rockafellar [19].

Proposition 2 (see [19]). If $\Omega$ is convex, or if $N_C(\bar x; \Omega)$ is pointed, then the multifunction $N_C(\cdot\,; \Omega)$ is closed at $\bar x$; that is, for all sequences $x_k \xrightarrow{\Omega} \bar x$ and $x_k^* \to x^*$ with $x_k^* \in N_C(x_k; \Omega)$, one has $x^* \in N_C(\bar x; \Omega)$.

Proposition 3. The Clarke normal cone $N_C(\bar x; \Omega)$ has the robustness property provided that the cone $N(\bar x; \Omega)$ is pointed.

Proof. It suffices to prove that $\limsup_{x \xrightarrow{\Omega} \bar x} N_C(x; \Omega) \subset N_C(\bar x; \Omega)$. Let $x^*$ belong to the left-hand side. Then there are $x_k \xrightarrow{\Omega} \bar x$ and $x_k^* \to x^*$ with $x_k^* \in N_C(x_k; \Omega) = \operatorname{cl} \operatorname{co} N(x_k; \Omega)$; adjusting each $x_k^*$ by at most $1/k$, we may take $x_k^* \in \operatorname{co} N(x_k; \Omega)$, so that, by Carathéodory's theorem, $x_k^* = \sum_{i=1}^{n+1} x_k^{*,i}$ with $x_k^{*,i} \in N(x_k; \Omega)$, the convex coefficients being absorbed into the vectors since the sets involved are cones. Let $t_k := \sum_{i=1}^{n+1} \| x_k^{*,i} \|$. Then $(t_k)$ is bounded, which is also to say that $(x_k^{*,i})$ are bounded for all $i$. Otherwise, $t_k \to \infty$ along a subsequence. That is, $0 = \lim_k x_k^* / t_k = \sum_{i=1}^{n+1} w^i$, where $w^i$ is the limit of $x_k^{*,i} / t_k$ for each $i$ (after passing to further subsequences) and $\sum_{i} \| w^i \| = 1$. Note that $w^i \in N(\bar x; \Omega)$ since $N(\cdot\,; \Omega)$ is robust. Thus $w^i = 0$ for all $i$ by the pointedness of $N(\bar x; \Omega)$. On the other hand, $\sum_{i} \| w^i \| = 1$. This is a contradiction, and thus the sequences $(x_k^{*,i})$ are bounded. By taking subsequences, we may assume that $x_k^{*,i} \to x^{*,i} \in N(\bar x; \Omega)$, again by robustness. Then $x^* = \sum_{i=1}^{n+1} x^{*,i} \in \operatorname{co} N(\bar x; \Omega) \subset N_C(\bar x; \Omega)$. This completes the proof.

The following definitions are required for further development.

Definition 4 (see [17, Definition 1.63]). Let $S : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ be a set-valued mapping with $\operatorname{dom} S := \{ x : S(x) \ne \emptyset \}$, the domain of $S$. (i) Given $(\bar x, \bar y) \in \operatorname{gph} S$, we say that the mapping $S$ is inner semicontinuous at $(\bar x, \bar y)$ if for every sequence $x_k \to \bar x$ with $x_k \in \operatorname{dom} S$ there is a sequence $y_k \in S(x_k)$ converging to $\bar y$ as $k \to \infty$. (ii) $S$ is inner semicompact at $\bar x$ if for every sequence $x_k \to \bar x$ with $x_k \in \operatorname{dom} S$ there is a sequence $y_k \in S(x_k)$ that contains a convergent subsequence as $k \to \infty$. (iii) Given a function $\mu : \mathbb{R}^n \to \overline{\mathbb{R}}$, $S$ is $\mu$-inner semicontinuous at $(\bar x, \bar y)$ ($\mu$-inner semicompact at $\bar x$) if, in the above two cases, $x_k \to \bar x$ is replaced by $x_k \xrightarrow{\mu} \bar x$, that is, $x_k \to \bar x$ with $\mu(x_k) \to \mu(\bar x)$.

Here the concept of $\mu$-inner semicontinuity/semicompactness is important for our considerations: it is typical that the value function of the lower level problem of GSIP is not continuous, and it may even take the value $-\infty$.
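In the toy instance of Section 1, for example (our computation),

\[
\varphi(x) = \sup_{y \in Y(x)} (y - x^2) =
\begin{cases}
x - x^2, & x \ge -1, \\
-\infty, & x < -1,
\end{cases}
\]

under the usual convention $\sup_{\emptyset} = -\infty$, so $\varphi$ jumps from $-2$ to $-\infty$ at $x = -1$. The $\mu$-variants of inner semicontinuity/semicompactness only track sequences along which the value function converges, which is what makes them usable in this setting.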

Theorem 5 (subdifferentiation of maximum functions [17, Theorem 3.46]). Consider the maximum function of the form

\[
(\max f_i)(x) := \max \{ f_i(x) : i = 1, \dots, s \}. \tag{21}
\]

Let $f_i$ be lower semicontinuous around $\bar x$ for $i \in I(\bar x) := \{ i : f_i(\bar x) = (\max f_i)(\bar x) \}$ and upper semicontinuous at $\bar x$ for $i \notin I(\bar x)$. Assume that the qualification condition holds:

\[
\Big[ \sum_{i \in I(\bar x)} x_i^* = 0, \ x_i^* \in \partial^\infty f_i(\bar x) \Big] \Longrightarrow x_i^* = 0 \ \text{for all } i \in I(\bar x). \tag{22}
\]

Then

\[
\partial (\max f_i)(\bar x) \subset \bigcup \Big\{ \partial \Big( \sum_{i \in I(\bar x)} \lambda_i f_i \Big)(\bar x) : (\lambda_1, \dots, \lambda_s) \in \Lambda(\bar x) \Big\}, \tag{23}
\]

where $\Lambda(\bar x) := \{ (\lambda_1, \dots, \lambda_s) : \lambda_i \ge 0, \ \sum_{i=1}^{s} \lambda_i = 1, \ \lambda_i = 0 \ \text{for } i \notin I(\bar x) \}$.
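As a quick sanity check of (23) (our own computation): take $s = 2$, $f_1(x) = x$, $f_2(x) = -x$, so that $(\max f_i)(x) = |x|$ and $I(0) = \{1, 2\}$. Then

\[
\bigcup_{\lambda \in \Lambda(0)} \partial(\lambda_1 f_1 + \lambda_2 f_2)(0) = \{ \lambda_1 - \lambda_2 : \lambda_1, \lambda_2 \ge 0, \ \lambda_1 + \lambda_2 = 1 \} = [-1, 1] = \partial |\cdot|(0),
\]

so the inclusion (23) holds with equality in this case.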

Note that qualification (22) always holds if all of the functions involved are locally Lipschitz, since then $\partial^\infty f_i(\bar x) = \{0\}$ for every $i$.

The following two results concern continuity properties and estimates of subdifferentials of marginal functions; they are crucial to our analysis of GSIP.

Proposition 6 (limiting subgradients of marginal functions [20]). Consider the parametric optimization problem

\[
\mu(x) := \inf \{ \psi(x, y) : y \in G(x) \}, \qquad G(x) := \{ y \in \mathbb{R}^m : v_l(x, y) \le 0, \ l = 1, \dots, q \}, \tag{24}
\]

and let $M(x) := \{ y \in G(x) : \psi(x, y) = \mu(x) \}$ denote its solution map. For simplicity, one does not consider the case with equality constraints involved.

(i) Assume that $M$ is $\mu$-inner semicontinuous at $(\bar x, \bar y) \in \operatorname{gph} M$ (the graph of $M$), that $\psi$ and all $v_l$ are Lipschitz continuous around $(\bar x, \bar y)$, and that the following qualification condition is satisfied:

\[
\Big[ \lambda_l \ge 0, \ \lambda_l v_l(\bar x, \bar y) = 0, \ (0, 0) \in \sum_{l=1}^{q} \lambda_l \partial v_l(\bar x, \bar y) \Big] \Longrightarrow \lambda_l = 0, \ l = 1, \dots, q. \tag{25}
\]

Then one has the inclusion

\[
\partial \mu(\bar x) \subset \Big\{ x^* : (x^*, 0) \in \partial \psi(\bar x, \bar y) + \sum_{l=1}^{q} \lambda_l \partial v_l(\bar x, \bar y), \ \lambda_l \ge 0, \ \lambda_l v_l(\bar x, \bar y) = 0 \Big\}. \tag{26}
\]

(ii) Assume that $M$ is $\mu$-inner semicompact at $\bar x$, that $\psi$ and all $v_l$ are Lipschitz continuous around $(\bar x, \bar y)$ for all $\bar y \in M(\bar x)$, and that qualification (25) holds for all $\bar y \in M(\bar x)$. Then

\[
\partial \mu(\bar x) \subset \bigcup_{\bar y \in M(\bar x)} \Big\{ x^* : (x^*, 0) \in \partial \psi(\bar x, \bar y) + \sum_{l=1}^{q} \lambda_l \partial v_l(\bar x, \bar y), \ \lambda_l \ge 0, \ \lambda_l v_l(\bar x, \bar y) = 0 \Big\}. \tag{27}
\]
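To see the estimate at work, consider the lower level data of the toy instance in minimized form (our computation): $\psi(x, y) = -g(x, y) = x^2 - y$ and $G(x) = Y(x) = [-1, x]$, so that $\mu(x) = x^2 - x$ near $\bar x = 0$ and $M(0) = \{0\}$. At $(\bar x, \bar y) = (0, 0)$ only $v_2(x, y) = y - x$ is active, qualification (25) holds since $\lambda_2 \nabla v_2(0, 0) = \lambda_2(-1, 1) = (0, 0)$ forces $\lambda_2 = 0$, and (26) reads

\[
\partial \mu(0) \subset \{ x^* : (x^*, 0) = (0, -1) + \lambda_2(-1, 1), \ \lambda_2 \ge 0 \} = \{-1\},
\]

which is exact here: $\mu'(0) = -1$.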

Proposition 7 (Lipschitz continuity of marginal functions [21, Theorem 5.2]). Continue to consider the parametric problem (24) of Proposition 6. Then the following assertions hold. (i) Assume that $M$ is $\mu$-inner semicontinuous at $(\bar x, \bar y) \in \operatorname{gph} M$ and $\psi$ is locally Lipschitz around this point. Then $\mu$ is Lipschitz continuous around $\bar x$ provided that it is lower semicontinuous around $\bar x$ and $G$ is Lipschitz-like around $(\bar x, \bar y)$. (ii) Assume that $M$ is $\mu$-inner semicompact at $\bar x$ and $\psi$ is locally Lipschitz around $(\bar x, \bar y)$ for all $\bar y \in M(\bar x)$. Then $\mu$ is Lipschitz continuous around $\bar x$ provided that it is lower semicontinuous around $\bar x$ and $G$ is Lipschitz-like around $(\bar x, \bar y)$ for all $\bar y \in M(\bar x)$.

3. Main Results

Now we are prepared to develop the optimality conditions for GSIP problem (1). Given a local solution $\bar x$ of problem (1), associate with it the following min-max problem:

\[
\min_{x \in \mathbb{R}^n} \Phi(x), \qquad \Phi(x) := \max \{ f(x) - f(\bar x), \ \varphi(x) \}, \tag{30}
\]

where $\varphi(x) = \sup_{y \in Y(x)} g(x, y)$ is the lower level optimal value function (5). Let $\mu(x) := -\varphi(x) = \inf_{y \in Y(x)} (-g(x, y))$. Denote by $S(x) := \{ y \in Y(x) : g(x, y) = \varphi(x) \}$ the solution map of the lower level problem; note that $S$ is also the solution map of the minimization problem defining $\mu$. If $\bar x$ solves GSIP (1), then $\bar x$ also solves problem (30), and thus, by the generalized Fermat rule (cf. [16, Theorem 10.1]), we have

\[
0 \in \partial \Phi(\bar x). \tag{31}
\]

So calculus for the maximum function and estimates of subdifferentials are essential in order to proceed. From (31) and Theorem 5, there exist $\lambda_1, \lambda_2 \ge 0$ with $\lambda_1 + \lambda_2 = 1$ such that (if $\varphi$ is Lipschitz)

\[
0 \in \partial \big( \lambda_1 (f - f(\bar x)) + \lambda_2 \varphi \big)(\bar x) \subset \lambda_1 \partial f(\bar x) + \lambda_2 \partial \varphi(\bar x). \tag{32}
\]

Note that for a Lipschitz continuous function $h$,

\[
\partial^\infty h(\bar x) = \{0\}, \tag{33}
\]

so that qualification (22) is automatic here.
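In the toy instance with $\bar x = 0$ (our computation), $f(x) - f(\bar x) = x^2 - x$ and $\varphi(x) = x - x^2$, so $\Phi(x) = |x - x^2| \ge 0 = \Phi(0)$ and both branches are active at $0$. Theorem 5 then gives

\[
\partial \Phi(0) \subset \{ \lambda_1 \cdot (-1) + \lambda_2 \cdot 1 : \lambda_1, \lambda_2 \ge 0, \ \lambda_1 + \lambda_2 = 1 \} = [-1, 1],
\]

and (31)-(32) are satisfied with $\lambda_1 = \lambda_2 = 1/2$.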

Theorem 8 (optimality for GSIP with Lipschitz lower level optimal value function). Consider the GSIP problem (1), and let $\bar x$ be a locally optimal solution with $\varphi(\bar x) = 0$. Assume that all the functions $f$, $g$, and $v_l$, $l = 1, \dots, q$, are Lipschitz continuous, that $S$ is $\varphi$-inner semicompact at $\bar x$, that $\varphi$ is lower semicontinuous around $\bar x$, that $Y$ is Lipschitz-like around $(\bar x, \bar y)$ for all $\bar y \in S(\bar x)$, and that qualification (25) holds at $(\bar x, \bar y)$ for all $\bar y \in S(\bar x)$. Then there are $\lambda_1, \lambda_2 \ge 0$ with $\lambda_1 + \lambda_2 = 1$, points $y^i \in S(\bar x)$ and scalars $\beta_i \ge 0$, $i = 1, \dots, n+1$, with $\sum_{i=1}^{n+1} \beta_i = 1$, and multipliers $\gamma^i \in \mathbb{R}^q_+$ with $\gamma^i_l v_l(\bar x, y^i) = 0$, such that

\[
0 \in \lambda_1 \partial f(\bar x) - \lambda_2 \sum_{i=1}^{n+1} \beta_i x_i^* \tag{34}
\]

and

\[
(x_i^*, 0) \in \partial(-g)(\bar x, y^i) + \sum_{l=1}^{q} \gamma^i_l \partial v_l(\bar x, y^i), \qquad i = 1, \dots, n+1. \tag{35}
\]

If, in addition, $g$ and all components of $v$ are regular at $(\bar x, y^i)$ for all $i$, then the optimality condition is of the form

\[
0 \in \lambda_1 \partial f(\bar x) + \lambda_2 \sum_{i=1}^{n+1} \beta_i z_i^*, \qquad (z_i^*, 0) \in \partial g(\bar x, y^i) - \sum_{l=1}^{q} \gamma^i_l \partial v_l(\bar x, y^i). \tag{36}
\]

Note that $\partial(-g)(\bar x, y^i) \subset -\partial_C g(\bar x, y^i) = -\partial g(\bar x, y^i)$ when $g$ is Lipschitz continuous and regular.

Proof. Under regularity and Lipschitz continuity, since the following calculus rule for basic subgradients holds (see [16, Corollary 10.11]),

\[
\partial (h_1 + h_2)(\bar x) = \partial h_1(\bar x) + \partial h_2(\bar x) \quad \text{for Lipschitz continuous regular } h_1, h_2,
\]

together with $\partial(-g)(\bar x, y^i) \subset -\partial g(\bar x, y^i)$ in the regular case, the last assertion (36) follows directly from (34) and (35) upon setting $z_i^* := -x_i^*$. Note that the function $-g$ is in the position of $\psi$ in Proposition 6, with $G = Y$, $\mu = -\varphi$, and $M = S$. Under our assumptions, by Proposition 7, $\varphi$ is Lipschitz continuous around $\bar x$, and the estimate of $\partial(-\varphi)(\bar x)$ supplied by Proposition 6(ii) is

\[
\partial(-\varphi)(\bar x) \subset \bigcup_{\bar y \in S(\bar x)} \Big\{ x^* : (x^*, 0) \in \partial(-g)(\bar x, \bar y) + \sum_{l=1}^{q} \gamma_l \partial v_l(\bar x, \bar y), \ \gamma_l \ge 0, \ \gamma_l v_l(\bar x, \bar y) = 0 \Big\}. \tag{37}
\]

If $\bar x$ solves GSIP, then it also solves (30). By (31), (32), and (33), there exist $\lambda_1, \lambda_2 \ge 0$ with $\lambda_1 + \lambda_2 = 1$ such that

\[
0 \in \lambda_1 \partial f(\bar x) + \lambda_2 \partial \varphi(\bar x) \subset \lambda_1 \partial f(\bar x) - \lambda_2 \operatorname{co} \partial(-\varphi)(\bar x), \tag{38}
\]

where the last inclusion holds because $\partial \varphi(\bar x) \subset \partial_C \varphi(\bar x) = -\partial_C(-\varphi)(\bar x) = -\operatorname{co} \partial(-\varphi)(\bar x)$ by Proposition 1 and the Lipschitz continuity of $\varphi$. Combining (37) and (38), by Carathéodory's theorem there are $y^i \in S(\bar x)$, $\beta_i \ge 0$, $i = 1, \dots, n+1$, with $\sum_{i=1}^{n+1} \beta_i = 1$, and $\gamma^i \in \mathbb{R}^q_+$ with $\gamma^i_l v_l(\bar x, y^i) = 0$, such that (34) and (35) hold. Letting $z_i^* := -x_i^*$ in the regular case, we obtain the desired result.

Corollary 9. In addition to the assumptions of Theorem 8, assume that $f$, $g$, and $v$ are continuously differentiable. Then the optimality condition at the optimal point $\bar x$ is that there exist $\lambda_1, \lambda_2 \ge 0$ with $\lambda_1 + \lambda_2 = 1$, points $y^i \in S(\bar x)$ and scalars $\beta_i \ge 0$, $i = 1, \dots, n+1$, with $\sum_{i=1}^{n+1} \beta_i = 1$, and multipliers $\gamma^i \in \mathbb{R}^q_+$ with $\gamma^i_l v_l(\bar x, y^i) = 0$, such that

\[
0 = \lambda_1 \nabla f(\bar x) + \lambda_2 \sum_{i=1}^{n+1} \beta_i \Big[ \nabla_x g(\bar x, y^i) - \sum_{l=1}^{q} \gamma^i_l \nabla_x v_l(\bar x, y^i) \Big] \tag{40}
\]

and

\[
\nabla_y g(\bar x, y^i) - \sum_{l=1}^{q} \gamma^i_l \nabla_y v_l(\bar x, y^i) = 0, \qquad i = 1, \dots, n+1. \tag{41}
\]
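For the toy instance at $\bar x = 0$ (our verification of Corollary 9), take $y^1 = 0 \in S(0)$ with $\beta_1 = 1$ and the remaining $\beta_i = 0$. Only $v_2(x, y) = y - x$ is active at $(0, 0)$, so $\gamma^1_1 = 0$, and (41) gives $\nabla_y g(0, 0) - \gamma^1_2 \nabla_y v_2(0, 0) = 1 - \gamma^1_2 = 0$, that is, $\gamma^1_2 = 1$. Then (40) becomes

\[
0 = \lambda_1 \nabla f(0) + \lambda_2 \big[ \nabla_x g(0, 0) - \gamma^1_2 \nabla_x v_2(0, 0) \big] = -\lambda_1 + \lambda_2,
\]

which holds with $\lambda_1 = \lambda_2 = 1/2$, consistently with the min-max computation above.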

Next we consider the case where the lower level value function $\varphi$ may fail to be Lipschitz continuous; we give estimates for the subdifferentials of $\varphi$ and from them derive optimality conditions for GSIP. This, however, requires the use of Clarke subdifferentials.

Proposition 10. Consider the parametric optimization problem, the same as (24),

\[
\mu(x) = \inf \{ \psi(x, y) : y \in G(x) \}, \qquad G(x) = \{ y : v_l(x, y) \le 0, \ l = 1, \dots, q \}, \tag{42}
\]

with corresponding solution mapping $M(x) = \{ y \in G(x) : \psi(x, y) = \mu(x) \}$. Let $\bar x \in \operatorname{dom} M$. Assume that the following conditions hold: (i) $\mu$ is lower semicontinuous at $\bar x$; (ii) $M$ is $\mu$-inner semicompact at $\bar x$, and $M(\bar x)$ is nonempty and compact; (iii) if $\bar y \in M(\bar x)$, $\lambda_l \ge 0$, $\lambda_l v_l(\bar x, \bar y) = 0$, $l = 1, \dots, q$, and $(0, 0) \in \sum_{l=1}^{q} \lambda_l \partial_C v_l(\bar x, \bar y)$, then $\lambda_l = 0$, $l = 1, \dots, q$; (iv) the cones $N((\bar x, \bar y, \psi(\bar x, \bar y)); \operatorname{epi} \psi)$ and $N((\bar x, \bar y, v_l(\bar x, \bar y)); \operatorname{epi} v_l)$, $l = 1, \dots, q$, are pointed for all $\bar y \in M(\bar x)$. Then one has the inclusion

\[
\partial_C \mu(\bar x) \subset \operatorname{cl} \operatorname{co} \bigcup_{\bar y \in M(\bar x)} \Big\{ x^* : (x^*, 0) \in \partial_C \psi(\bar x, \bar y) + \partial^\infty_C \psi(\bar x, \bar y) + \sum_{l=1}^{q} \lambda_l \partial_C v_l(\bar x, \bar y), \ \lambda_l \ge 0, \ \lambda_l v_l(\bar x, \bar y) = 0 \Big\}. \tag{43}
\]

Proof (sketch; $D$ denotes the set under the closed convex hull on the right-hand side of (43)). The proof is divided into two parts. The first is to show that the set $D$ is closed. The second step is to justify that $\partial \mu(\bar x) + \partial^\infty \mu(\bar x) \subset D$, whence (43) follows from Proposition 1.

Let $x_k^* \to x^*$ with $x_k^* \in D$; that is, there are $y_k \in M(\bar x)$ and $\lambda^k \in \mathbb{R}^q_+$ with $\lambda^k_l v_l(\bar x, y_k) = 0$, $l = 1, \dots, q$, such that

\[
(x_k^*, 0) \in \partial_C \psi(\bar x, y_k) + \partial^\infty_C \psi(\bar x, y_k) + \sum_{l=1}^{q} \lambda^k_l \partial_C v_l(\bar x, y_k). \tag{44}
\]

We have to show that $x^* \in D$. We may assume that $y_k \to \bar y \in M(\bar x)$, as $M(\bar x)$ is compact. We show first that the sequence $(\lambda^k)$ is bounded. Suppose on the contrary that $t_k := \sum_{l=1}^{q} \lambda^k_l \to \infty$. Multiplying (44) by $1/t_k$, we have

\[
\Big( \frac{x_k^*}{t_k}, 0 \Big) \in \frac{1}{t_k} \big[ \partial_C \psi(\bar x, y_k) + \partial^\infty_C \psi(\bar x, y_k) \big] + \sum_{l=1}^{q} \frac{\lambda^k_l}{t_k} \partial_C v_l(\bar x, y_k). \tag{45}
\]

Note that, for each $l$, by definition of $t_k$,

\[
\frac{\lambda^k_l}{t_k} \in [0, 1], \qquad \sum_{l=1}^{q} \frac{\lambda^k_l}{t_k} = 1. \tag{46}
\]

Let $\lambda_l := \lim_k \lambda^k_l / t_k$ (passing to a subsequence). Then, by assumption (iv) and Proposition 3, the Clarke cones involved are robust, and passing to the limit in (45) yields

\[
(0, 0) \in \sum_{l=1}^{q} \lambda_l \partial_C v_l(\bar x, \bar y), \qquad \lambda_l \ge 0, \quad \sum_{l=1}^{q} \lambda_l = 1, \quad \lambda_l v_l(\bar x, \bar y) = 0, \tag{47}
\]

where the complementarity follows from $\lambda^k_l v_l(\bar x, y_k) = 0$ and the continuity of $v_l$.

Based on assumption (iii), we have from (47) that $\lambda_l = 0$ for all $l$. This contradicts the fact that $\sum_{l} \lambda_l = 1$, and thus $(\lambda^k)$ is bounded. Then the subgradient sequences realizing (44) have convergent subsequences, whose limits belong to the corresponding Clarke subdifferentials at $(\bar x, \bar y)$ by robustness again. Thus $x^* \in D$ with the limiting multipliers; that is to say, $D$ is closed.

Next, we justify that $\partial \mu(\bar x) + \partial^\infty \mu(\bar x) \subset D$. Under the semicompactness assumption, invoking [17, Theorem 1.108], one gets that

\[
\partial \mu(\bar x) \subset \bigcup_{\bar y \in M(\bar x)} \big\{ x^* : (x^*, 0) \in \partial \big( \psi + \delta(\cdot\,; \operatorname{gph} G) \big)(\bar x, \bar y) \big\}, \tag{48}
\]

together with the analogous estimate for $\partial^\infty \mu(\bar x)$, where $\delta(\cdot\,; \operatorname{gph} G)$ is the indicator function of the graph of $G$.

Employing the sum rule from [17, Theorem 3.36] to the two estimates above, expressing the normal cone to $\operatorname{gph} G$ through the multipliers $\lambda_l$ under assumption (iii), and passing to the Clarke constructions via Proposition 1 leads to (43), which completes the proof.

Theorem 11 (convexified normal cone to an inequality system [22]). Consider $\Omega$ defined by the inequality system $\Omega := \{ z \in \mathbb{R}^d : h_i(z) \le 0, \ i = 1, \dots, r \}$. Let the $h_i$ be Lipschitz continuous, and let the following qualification (nonsmooth MFCQ) hold at $\bar z \in \Omega$:

\[
0 \notin \operatorname{co} \bigcup_{i \in I(\bar z)} \partial_C h_i(\bar z), \tag{51}
\]

where $I(\bar z) := \{ i : h_i(\bar z) = 0 \}$. Then

\[
N_C(\bar z; \Omega) \subset \Big\{ \sum_{i \in I(\bar z)} \eta_i z_i^* : z_i^* \in \partial_C h_i(\bar z), \ \eta_i \ge 0 \Big\}. \tag{52}
\]

As mentioned, GSIP can be relaxed into the bilevel programming problem (6),

\[
\min_{x, y} f(x) \quad \text{subject to} \quad g(x, y) \le 0, \quad y \in S(x).
\]

The feasible set of this problem projects onto a subset of the feasible set of (1). Thus, if $\bar x$ solves GSIP with $Y(\bar x) \ne \emptyset$ and $\bar y \in S(\bar x)$, then $(\bar x, \bar y)$ also solves problem (6). The perturbed version of the above bilevel problem is

\[
\min_{x, y} f(x) \quad \text{subject to} \quad g(x, y) \le 0, \quad \varphi(x) - g(x, y) = u, \quad v_l(x, y) \le 0, \ l = 1, \dots, q. \tag{54}
\]

Problem (6) is said to be partially calm at $(\bar x, \bar y)$ [23] if there exist $\rho > 0$ and a neighborhood $U$ of $(\bar x, \bar y, 0)$ such that

\[
f(x) - f(\bar x) + \rho |u| \ge 0 \quad \text{for all } (x, y, u) \in U \text{ feasible for (54)}. \tag{55}
\]

Under the partial calmness condition, problem (6) can be transformed into the problem below, for some constant $\rho > 0$:

\[
\min_{x, y} \ f(x) + \rho \big( \varphi(x) - g(x, y) \big) \quad \text{subject to} \quad g(x, y) \le 0, \quad v_l(x, y) \le 0, \ l = 1, \dots, q. \tag{56}
\]
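Partial calmness can be hard to verify in general, but in the toy instance it holds globally (our computation): at $(\bar x, \bar y) = (0, 0)$, every $(x, y, u)$ feasible for (54) satisfies $u = \varphi(x) - g(x, y) = x - y \ge 0$ and $g(x, y) = y - x^2 \le 0$, hence, with $\rho = 1$,

\[
f(x) - f(\bar x) + \rho |u| = (x^2 - x) + (x - y) = x^2 - y = -g(x, y) \ge 0,
\]

so (55) holds with $\rho = 1$.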

Theorem 12 (necessary conditions of optimality of GSIP). Let $\bar x$ be an optimal solution of GSIP with $Y(\bar x) \ne \emptyset$, $\varphi(\bar x) = 0$, and $\bar y \in S(\bar x)$. Let the data functions $f$, $g$, and $v$ be Lipschitz continuous and the partial calmness condition (55) hold at $(\bar x, \bar y)$ for some $\rho > 0$. Assume that the following conditions hold: (i) qualification (51) holds for the inequality system defining $\Omega := \{ (x, y) : g(x, y) \le 0, \ v_l(x, y) \le 0, \ l = 1, \dots, q \}$ at $(\bar x, \bar y)$; (ii) $S$ is $\varphi$-inner semicompact at $\bar x$, and $S(\bar x)$ is nonempty and compact; (iii) if $\tilde y \in S(\bar x)$, $\lambda_l \ge 0$, $\lambda_l v_l(\bar x, \tilde y) = 0$, and $(0, 0) \in \sum_{l=1}^{q} \lambda_l \partial_C v_l(\bar x, \tilde y)$, then $\lambda_l = 0$, $l = 1, \dots, q$; (iv) the cones $N((\bar x, \tilde y, -g(\bar x, \tilde y)); \operatorname{epi}(-g))$ and $N((\bar x, \tilde y, v_l(\bar x, \tilde y)); \operatorname{epi} v_l)$, $l = 1, \dots, q$, are pointed for all $\tilde y \in S(\bar x)$. Then there are points $y^i \in S(\bar x)$ and scalars $\beta_i \ge 0$, $i = 1, \dots, n+1$, with $\sum_{i=1}^{n+1} \beta_i = 1$, multipliers $\lambda^i \in \mathbb{R}^q_+$ with $\lambda^i_l v_l(\bar x, y^i) = 0$, and $\eta \ge 0$, $\alpha \in \mathbb{R}^q_+$ with $\eta\, g(\bar x, \bar y) = 0$ and $\alpha_l v_l(\bar x, \bar y) = 0$, such that

\[
(0, 0) \in \partial f(\bar x) \times \{0\} + \rho \sum_{i=1}^{n+1} \beta_i (x_i^*, 0) - \rho\, \partial_C g(\bar x, \bar y) + \eta\, \partial_C g(\bar x, \bar y) + \sum_{l=1}^{q} \alpha_l\, \partial_C v_l(\bar x, \bar y), \tag{57}
\]

where

\[
(x_i^*, 0) \in \partial_C g(\bar x, y^i) - \sum_{l=1}^{q} \lambda^i_l\, \partial_C v_l(\bar x, y^i), \qquad i = 1, \dots, n+1.
\]

Proof. Let $\Omega := \{ (x, y) : g(x, y) \le 0, \ v_l(x, y) \le 0, \ l = 1, \dots, q \}$. Under our assumptions, GSIP can be relaxed into problem (6), and $(\bar x, \bar y)$ solves (6). Due to the partial calmness of (6), we also have that $(\bar x, \bar y)$ solves (56). Under the qualification assumption (i), the necessary optimality condition for problem (56) is (see, e.g., [24, Theorem 5.1] or [22, Theorem 6.2])

\[
(0, 0) \in \partial_C \big( f + \rho (\varphi - g) \big)(\bar x, \bar y) + N_C((\bar x, \bar y); \Omega). \tag{58}
\]

If $x^* \in \partial_C \varphi(\bar x)$, then $(x^*, 0) \in \partial_C \hat\varphi(\bar x, \bar y)$ for $\hat\varphi(x, y) := \varphi(x)$. Indeed, by definition,

\[
\operatorname{epi} \hat\varphi = \{ (x, y, \alpha) : (x, \alpha) \in \operatorname{epi} \varphi \},
\]

so the normals to $\operatorname{epi} \hat\varphi$ at $(\bar x, \bar y, \varphi(\bar x))$ are exactly the vectors $(x^*, 0, -\lambda)$ with $(x^*, -\lambda)$ normal to $\operatorname{epi} \varphi$ at $(\bar x, \varphi(\bar x))$. Thus

\[
\partial_C \big( f + \rho (\varphi - g) \big)(\bar x, \bar y) \subset \partial f(\bar x) \times \{0\} + \rho\, \partial_C \varphi(\bar x) \times \{0\} - \rho\, \partial_C g(\bar x, \bar y), \tag{59}
\]

and from (58),

\[
(0, 0) \in \partial f(\bar x) \times \{0\} + \rho\, \partial_C \varphi(\bar x) \times \{0\} - \rho\, \partial_C g(\bar x, \bar y) + N_C((\bar x, \bar y); \Omega). \tag{60}
\]

Applying Proposition 10 to $-\varphi(x) = \inf \{ -g(x, y) : y \in Y(x) \}$, with $\psi = -g$, $G = Y$, and $M = S$, and noting that $\partial_C(-g) = -\partial_C g$ and $\partial^\infty_C(-g)(\bar x, \tilde y) = \{(0, 0)\}$ since $g$ is Lipschitz, every $x^* \in \partial_C \varphi(\bar x) = -\partial_C(-\varphi)(\bar x)$ can be written, by Carathéodory's theorem and the closedness established in the proof of Proposition 10, as $x^* = \sum_{i=1}^{n+1} \beta_i x_i^*$ with $y^i \in S(\bar x)$, $\beta_i \ge 0$, $\sum_{i=1}^{n+1} \beta_i = 1$, and $\lambda^i \in \mathbb{R}^q_+$, $\lambda^i_l v_l(\bar x, y^i) = 0$, such that

\[
(x_i^*, 0) \in \partial_C g(\bar x, y^i) - \sum_{l=1}^{q} \lambda^i_l\, \partial_C v_l(\bar x, y^i), \qquad i = 1, \dots, n+1. \tag{61}
\]

So, noting that by Theorem 11

\[
N_C((\bar x, \bar y); \Omega) \subset \Big\{ \eta\, \xi + \sum_{l=1}^{q} \alpha_l \zeta_l : \xi \in \partial_C g(\bar x, \bar y), \ \zeta_l \in \partial_C v_l(\bar x, \bar y), \ \eta \ge 0, \ \alpha_l \ge 0, \ \eta\, g(\bar x, \bar y) = 0, \ \alpha_l v_l(\bar x, \bar y) = 0 \Big\}, \tag{62}
\]

and combining (60), (61), and (62), we arrive at (57). This completes the proof.

Conflict of Interests

The authors declare that there is no conflict of interests.

Acknowledgments

This work is supported by the National Science Foundation of China (11001289) and the Key Project of Chinese Ministry of Education (211151).