Abstract

Sufficient optimality conditions and sensitivity properties of a parameterized min-max programming problem with a fixed feasible set are analyzed. Based on Clarke's subdifferential and Chaney's second-order directional derivative, sufficient optimality conditions for the parameterized min-max problem are established first. Then, under a convexity assumption on the objective function, a computational formula for the subdifferential of the marginal function is obtained. The assumptions are satisfied naturally in several applications, and the resulting formulae are concise and convenient for algorithmic purposes.

1. Introduction

In this paper, sufficient optimality conditions and a sensitivity analysis for a parameterized min-max programming problem are given. The paper is motivated by a local-reduction algorithmic strategy for solving the following nonsmooth semi-infinite min-max-min programming problem (SIM3P; see [1, 2], etc. for related applications):
\[
\min_x \; f(x) \quad \text{s.t.} \quad g(x) = \max_{y \in Y} \, \min_{1 \le i \le q} \{ g_i(x,y) \} \le 0. \tag{1.1}
\]
With the local-reduction technique, the SIM3P can first be rewritten as a bilevel program whose lower-level problem is the following parameterized min-max problem $P_x$ (see [3-5] for references on the local-reduction strategy):
\[
\min_y \; g(x,y) = \max_{1 \le i \le q} \{ -g_i(x,y) \} \quad \text{s.t.} \quad y \in Y. \tag{1.2}
\]
To make the bilevel strategy applicable to the SIM3P, it is essential to discuss second-order sufficient optimality conditions for $P_x$ and to give a sensitivity analysis of the parameterized minimizer $y(x)$ and the corresponding marginal function $g(x, y(x))$.
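To fix ideas, the structure of the lower-level problem $P_x$ can be prototyped numerically. The sketch below uses hypothetical data (two smooth pieces $g_1, g_2$ and $Y$ the unit disk in $R^2$): it evaluates the max-type objective of (1.2) and locates a minimizer by brute force on a polar grid. This is only an illustration of the problem structure, not a recommended solver.

```python
import numpy as np

# Hypothetical instance of the lower-level problem P_x: q = 2,
# Y = {y in R^2 : ||y|| <= 1}, g_i smooth in (x, y).
def g1(x, y): return (y[0] - x[0])**2 + y[1]
def g2(x, y): return y[0] + (y[1] - x[1])**2

def g(x, y):
    # Objective of P_x: g(x, y) = max_{1<=i<=q} {-g_i(x, y)}
    return max(-g1(x, y), -g2(x, y))

def solve_Px(x, n=120):
    # Brute-force P_x on a polar grid of Y; a practical implementation
    # would use a structure-exploiting nonsmooth method instead.
    best_y, best_v = None, np.inf
    for r in np.linspace(0.0, 1.0, n):
        for th in np.linspace(0.0, 2*np.pi, n, endpoint=False):
            y = np.array([r*np.cos(th), r*np.sin(th)])
            v = g(x, y)
            if v < best_v:
                best_y, best_v = y, v
    return best_y, best_v

y_star, v = solve_Px(np.array([0.5, 0.5]))
```

The returned pair approximates a minimizer $y(x)$ and the marginal value $g(x, y(x))$ for the chosen parameter $x$.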

Sensitivity analysis of optimization problems is an important topic in operations research and optimization. Under various assumptions, many results on parametric programming have been obtained ([6-9], etc.), including some conclusions on parameterized min-max problems like (1.2). For example, based on variational analysis, parameterized continuous programs with fixed constraints were discussed in [7], and problems like (1.2) can be seen as a special case: under an inf-compactness condition and the condition that the objective function is concave with respect to the parameter, a computational formula for the directional derivative of the marginal function of (1.2) follows directly. However, the concavity condition fails for many problems. Recently, a computational formula for the Fréchet subgradients of marginal functions of nondifferentiable programs in Asplund spaces was given in [9]; applying it to (1.2) yields a subgradient formula for the marginal function directly. But that formula is cumbersome: if it is used to construct the optimality system of (1.1), the resulting system is so complex that it is difficult to solve.

For computational convenience, the focus of this paper is to establish sufficient optimality conditions and a simple computational formula for the marginal function of (1.2). Based on Clarke's subdifferential and Chaney's second-order directional derivative, sufficient optimality conditions for the parameterized problem $P_x$ are given first. Then the Lipschitz continuity of the parameterized isolated minimizer $y(x)$ and of the marginal function $g(x, y(x))$ is discussed; moreover, a computational formula for the subdifferential of the marginal function is obtained.

2. Main Results

Let $Y$ in (1.2) be defined as $Y = \{ y \in R^m : h_i(y) \le 0, \ i = 1, \dots, l \}$, where the $h_i(\cdot)$, $i = 1, \dots, l$, are twice continuously differentiable functions on $R^m$, and the $g_i(\cdot,\cdot)$ in (1.2) are twice continuously differentiable functions on $R^n \times R^m$. In the following, we first give sufficient optimality conditions for (1.2) based on Clarke's subdifferential and Chaney's second-order directional derivative, and then carry out a sensitivity analysis of the parameterized problem $P_x$.

2.1. Sufficient Optimality Conditions of 𝑃π‘₯

Definition 2.1 (see [10]). For a given parameter $x$, a point $y^* \in Y$ is said to be a local minimum of problem $P_x$ if there exists a neighborhood $U$ of $y^*$ such that
\[
g(x, y) \ge g(x, y^*), \quad \forall y \in U \cap Y, \ y \ne y^*. \tag{2.1}
\]

Assumption 2.2. For a given parameter $x$, suppose that $P_x$ satisfies the following constraint qualification:
\[
\{ d \in R^m : \nabla h_i(y)^T d < 0, \ \forall i \in I_h(y) \} \ne \emptyset, \quad \forall y \in Y, \tag{2.2}
\]
where $I_h(y) = \{ i \in \{1, \dots, l\} : h_i(y) = 0 \}$.
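Assumption 2.2 asks for a direction that strictly decreases every active constraint at $y$, and this can be verified by a small linear program: maximize a slack $s$ subject to $\nabla h_i(y)^T d + s \le 0$ over a bounded box; the qualification holds iff the optimal $s$ is positive. A sketch (the active gradients passed in are hypothetical input data):

```python
import numpy as np
from scipy.optimize import linprog

def cq_holds(grad_active, tol=1e-9):
    """Check (2.2) at a point: does some d satisfy grad_h_i^T d < 0 for
    every active i?  Solve  max s  s.t.  grad_h_i^T d + s <= 0,
    -1 <= d <= 1, 0 <= s <= 1; the CQ holds iff the optimum s is > 0."""
    G = np.asarray(grad_active, dtype=float)   # rows: gradients of active h_i
    k, m = G.shape
    c = np.zeros(m + 1); c[-1] = -1.0          # variables z = (d, s); min -s
    A = np.hstack([G, np.ones((k, 1))])        # G d + s <= 0
    bounds = [(-1, 1)] * m + [(0, 1)]
    res = linprog(c, A_ub=A, b_ub=np.zeros(k), bounds=bounds)
    return res.status == 0 and -res.fun > tol
```

For instance, two active gradients $(1,0)$ and $(0,1)$ admit the strictly decreasing direction $(-1,-1)$, while opposite gradients $(1,0)$ and $(-1,0)$ do not admit any.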
For a given parameter $x$, denote the Lagrange function of $P_x$ by $L(x, y, \lambda) = g(x, y) + \sum_{i=1}^{l} \lambda_i h_i(y)$; then the following holds.

Theorem 2.3. For a given parameter $x$, if $y^*$ is a minimum of $P_x$ and Assumption 2.2 holds, then there exists $\lambda^* \in R^l_+$ such that $0 \in \partial_y L(x, y^*, \lambda^*)$, where $\partial_y L(x, y^*, \lambda^*)$ denotes Clarke's subdifferential of $L(x, \cdot, \lambda^*)$ at $y^*$. Specifically, the following system holds:
\[
0 \in \partial_y g(x, y^*) + \sum_{i=1}^{l} \lambda_i^* \nabla h_i(y^*), \tag{2.3}
\]
where $\partial_y g(x, y^*)$ denotes Clarke's subdifferential of $g(x, \cdot)$ at $y^*$; by the structure of (1.2) it can be computed as $\mathrm{co}\{ -\nabla_y g_i(x, y^*) : i \in I(x, y^*) \}$, where $\mathrm{co}\{\cdot\}$ denotes the convex hull of a set of elements and $I(x, y^*) = \{ i \in \{1, \dots, q\} : -g_i(x, y^*) = g(x, y^*) \}$.

Proof. The conclusion follows directly from Theorem 3.2.6 and Corollary 5.1.8 in [11].

Since $g(x, y) = \max_{1 \le i \le q} \{ -g_i(x, y) \}$ is a directionally differentiable function (Theorem 3.2.13 in [11]), the directional derivative of $g(x, y)$ with respect to $y$ in a direction $d$ can be computed as
\[
g'_y(x, y; d) = \max\{ \xi^T d : \xi \in \partial_y g(x, y) \}, \quad \forall d \in R^m. \tag{2.4}
\]
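Because $\partial_y g(x,y)$ is the convex hull of finitely many generators and $\xi \mapsto \xi^T d$ is linear, the maximum in (2.4) is attained at a generator, so the directional derivative reduces to a maximum over the active pieces. A sketch with hypothetical data ($q = 2$, the parameter $x$ fixed and absorbed into the $g_i$), cross-checked against a one-sided difference quotient:

```python
import numpy as np

# Hypothetical smooth pieces at a fixed parameter x.
def g1(y): return y[0]**2 + y[1]
def g2(y): return y[0] + y[1]**2
def grad_g1(y): return np.array([2*y[0], 1.0])
def grad_g2(y): return np.array([1.0, 2*y[1]])

def g(y):  # g(x, y) = max_i {-g_i(x, y)} at the fixed x
    return max(-g1(y), -g2(y))

def dir_deriv(y, d, tol=1e-9):
    # Formula (2.4): the max of xi^T d over co{-grad g_i : i active}
    # is attained at one of the generators -grad g_i.
    vals = [-g1(y), -g2(y)]
    active = [i for i, v in enumerate(vals) if v >= max(vals) - tol]
    gens = [-grad_g1(y), -grad_g2(y)]
    return max(gens[i] @ d for i in active)

y = np.array([1.0, 1.0])            # g1(y) = g2(y) = 2: both pieces active
d = np.array([1.0, 0.0])
fd = (g(y + 1e-7*d) - g(y)) / 1e-7  # one-sided difference approximates g'(y; d)
```

At this kink both pieces are active, and the difference quotient matches the vertex formula.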

Definition 2.4 (see [10]). Let $f$ be a locally Lipschitzian function on $R^n$ and let $u$ be a nonzero vector in $R^n$. Suppose that
\[
\zeta \in \partial_u f(x) = \{ \zeta \in R^n : \exists \{x_k\}, \{\zeta_k\} \text{ s.t. } x_k \to x \text{ in direction } u, \ \zeta_k \to \zeta, \ \zeta_k \in \partial f(x_k) \text{ for each } k \}. \tag{2.5}
\]
Define Chaney's lower second-order directional derivative as
\[
f''_-(x, \zeta, u) = \liminf \frac{f(x_k) - f(x) - \zeta^T (x_k - x)}{t_k^2}, \tag{2.6}
\]
taken over all triples of sequences $\{x_k\}$, $\{\zeta_k\}$, and $\{t_k\}$ for which (a) $t_k > 0$ for each $k$ and $x_k \to x$; (b) $t_k \to 0$ and $(x_k - x)/t_k$ converges to $u$; (c) $\zeta_k \to \zeta$ with $\zeta_k \in \partial f(x_k)$ for each $k$.
Similarly, Chaney's upper second-order directional derivative can be defined as
\[
f''_+(x, \zeta, u) = \limsup \frac{f(x_k) - f(x) - \zeta^T (x_k - x)}{t_k^2}, \tag{2.7}
\]
taken over all triples of sequences $\{x_k\}$, $\{\zeta_k\}$, and $\{t_k\}$ for which (a), (b), and (c) above hold.
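For a twice continuously differentiable $f$, the difference quotient in (2.6) and (2.7) converges to $\frac{1}{2} u^T \nabla^2 f(x) u$ along any admissible triple of sequences, so the lower and upper derivatives coincide there; it is the nonsmooth case that makes the min/max formulas of the next proposition necessary. A quick numerical illustration with a smooth, hypothetical $f$:

```python
import numpy as np

# Illustration of Definition 2.4 for a smooth function, where Chaney's
# lower and upper derivatives both equal (1/2) u^T Hess f(x) u.
def f(x): return x[0]**2 + 3*x[1]**2
def grad_f(x): return np.array([2*x[0], 6*x[1]])

x0 = np.array([0.0, 0.0])
u = np.array([1.0, 1.0]) / np.sqrt(2)
zeta = grad_f(x0)                     # the only subgradient at x0

quotients = []
for k in range(1, 30):
    t_k = 0.5**k
    x_k = x0 + t_k*u                  # x_k -> x0 in direction u
    q_k = (f(x_k) - f(x0) - zeta @ (x_k - x0)) / t_k**2
    quotients.append(q_k)
# For this f the limit is (1/2) u^T Hess f(x0) u = (1/2)(2*0.5 + 6*0.5) = 2.0
```

Every quotient along this sequence already equals the limiting value, reflecting the exact second-order Taylor expansion of a quadratic.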
For the parameterized max-type function $g(x, y) = \max_{1 \le i \le q} \{ -g_i(x, y) \}$, where $x$ is a given parameter, Chaney's lower and upper second-order directional derivatives can be computed as follows.

Proposition 2.5 (see [12]). For any given parameter $x$, Chaney's lower and upper second-order directional derivatives of $g(x, y)$ with respect to $y$ exist; moreover, for any given $0 \ne u \in R^m$ and $\zeta \in \partial_u g(x, y)$,
\[
g''_-(x, y; \zeta, u) = \min\Big\{ \frac{1}{2} \sum_{i=1}^{q} a_i \, u^T \big( {-\nabla^2_y g_i(x, y)} \big) u : a \in T_u(g, y, \zeta) \Big\},
\]
\[
g''_+(x, y; \zeta, u) = \max\Big\{ \frac{1}{2} \sum_{i=1}^{q} a_i \, u^T \big( {-\nabla^2_y g_i(x, y)} \big) u : a \in T_u(g, y, \zeta) \Big\}, \tag{2.8}
\]
where $T_u(g, y, \zeta)$ is the set of all $a \in R^q_+$ for which there exist sequences $\{y^{(k)}\}$, $\{a^{(k)}\}$, $\{\zeta^{(k)}\}$ such that
(1) $y^{(k)} \to y$ in direction $u$;
(2) $\zeta^{(k)} \to \zeta$ and $\zeta^{(k)} \in \partial_y g(x, y^{(k)})$, $k = 1, 2, \dots$;
(3) $a^{(k)} \to a$, $a^{(k)} \in E_q$, and $\zeta^{(k)} = \sum_{i=1}^{q} a^{(k)}_i \big( {-\nabla_y g_i(x, y^{(k)})} \big)$;
(4) $a^{(k)}_j = 0$ for $j \notin K_g(y^{(k)})$; \tag{2.9}
here $K_g(y^{(k)}) = \{ i \in Q : \exists\, y^{(n)} \in B(y^{(k)}, 1/n) \text{ with } -g_i(x, y^{(n)}) = g(x, y^{(n)}), \ \forall n \in N \}$, $E_q = \{ a \in R^q_+ : \sum_{i=1}^{q} a_i = 1 \}$, $Q = \{1, \dots, q\}$, and $B(y, 1/n)$ denotes the ball centered at $y$ with radius $1/n$.

Theorem 2.6 (sufficiency theorem). For a given parameter $x \in R^n$, suppose that Assumption 2.2 holds and that $y^* \in R^m$ satisfies (2.3) with multiplier $\lambda^* \in R^l_+$. Suppose further that every feasible direction $d \in R^m$ of $Y$ at $y^*$, that is, every $d$ with $\max\{ \nabla h_i(y^*)^T d : i \in I_h(y^*) \} \le 0$, satisfies one of the following conditions:
(1) $g'_y(x, y^*; d) \ne 0$;
(2) $g'_y(x, y^*; d) = 0$ and $\sum_{i=1}^{l} \lambda_i^* \nabla h_i(y^*)^T d = 0$, that is, $L'_y(x, y^*; d) = 0$, and
\[
\min\Big\{ \frac{1}{2} \sum_{i=1}^{q} a_i \, d^T \big( {-\nabla^2_y g_i(x, y^*)} \big) d : a \in E_q \Big\} + \sum_{i=1}^{l} \lambda_i^* \, d^T \nabla^2 h_i(y^*) d > 0. \tag{2.10}
\]
Then $y^*$ is a local minimum of $P_x$.

Proof. (1) Suppose, to the contrary, that $y^*$ is not a local minimum. Then there exist sequences $t_k \downarrow 0$, $d_k \to d$, and $y_k = y^* + t_k d_k \in Y$ such that
\[
g(x, y_k) < g(x, y^*). \tag{2.11}
\]
As a result, $g'_y(x, y^*; d) = \lim_{t \downarrow 0} (g(x, y^* + t d) - g(x, y^*))/t = \lim_{k \to +\infty} (g(x, y^* + t_k d_k) - g(x, y^*))/t_k \le 0$. If $g'_y(x, y^*; d) \ne 0$, then $g'_y(x, y^*; d) < 0$, and from (2.4) we know that $\xi^T d < 0$ for all $\xi \in \partial_y g(x, y^*)$. Hence, for the direction $d \in R^m$,
\[
\xi^T d + \sum_{i=1}^{l} \lambda_i^* \nabla h_i(y^*)^T d < 0, \quad \forall \xi \in \partial_y g(x, y^*). \tag{2.12}
\]
On the other hand, since $y^*$ satisfies (2.3), there exists $\xi \in \partial_y g(x, y^*)$ such that
\[
\xi^T d + \sum_{i=1}^{l} \lambda_i^* \nabla h_i(y^*)^T d = 0, \tag{2.13}
\]
which contradicts (2.12).
(2) From Theorem 4 in [10] and Proposition 2.5, the conclusion is direct.
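Condition (2.10) is easy to evaluate in practice: the first term is the minimum of a function linear in $a$ over the simplex $E_q$, hence it is attained at a vertex and equals the smallest of the $q$ vertex values. A sketch with hypothetical Hessian data (the matrices below play the roles of $-\nabla^2_y g_i(x, y^*)$ and $\nabla^2 h_i(y^*)$):

```python
import numpy as np

def lhs_2_10(neg_Hg, lam, Hh, d):
    # The minimum over E_q of (1/2) sum_i a_i d^T (-Hess_y g_i) d is
    # attained at a simplex vertex; add the multiplier-weighted
    # curvature of the constraints h_i.
    vertex_vals = [0.5 * d @ H @ d for H in neg_Hg]
    return min(vertex_vals) + sum(l * (d @ H @ d) for l, H in zip(lam, Hh))

# Hypothetical data: q = 2, l = 1, d a candidate critical direction.
neg_Hg = [2.0*np.eye(2), 4.0*np.eye(2)]   # -Hess_y g_i, PSD under Assumption 2.7
Hh = [np.eye(2)]
val = lhs_2_10(neg_Hg, [0.5], Hh, np.array([1.0, 0.0]))
```

Here the vertex values are $1$ and $2$, the constraint term contributes $0.5$, and the positive total certifies the second-order part of condition (2) for this direction.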

2.2. Sensitivity Analysis of Parameterized 𝑃π‘₯

In the following, we carry out a sensitivity analysis of the parameterized min-max problem $P_x$; that is, we study the variation of isolated local minimizers and of the corresponding marginal function under small perturbations of $x$.

For convenience of discussion, for any given parameter $x$, denote by $y^*(x)$ a minimizer of $P_x$ and by $\upsilon(x) = \min\{ g(x, y) : y \in Y \}$ the corresponding marginal function value, and make the following assumptions first.

Assumption 2.7. For a given $x \in R^n$, the parametric problem $P_x$ is a convex problem; specifically, $g_i(x, y)$, $i = 1, \dots, q$, are concave functions with respect to the variable $y$, and $h_j(y)$, $j = 1, \dots, l$, are convex functions.

Assumption 2.8. Let $I_h(y) = \{ i \in \{1, \dots, l\} : h_i(y) = 0 \}$; the gradients $\{ \nabla h_i(y) : i \in I_h(y) \}$ are linearly independent.

Definition 2.9 (see Definition 2.1, [13]). For a given $x$, a point $\bar{y} \in Y$ is said to be an isolated local minimum of order $i$ ($i = 1$ or $2$) of $P_x$ if there exist a real $m > 0$ and a neighborhood $V$ of $\bar{y}$ such that
\[
g(x, y) > g(x, \bar{y}) + \frac{1}{2} m \| y - \bar{y} \|^i, \quad \forall y \in V \cap Y, \ y \ne \bar{y}. \tag{2.14}
\]

Theorem 2.10. For a given $x \in R^n$, suppose that Assumptions 2.2, 2.7, and 2.8 hold. Then the following conclusions hold:
(1) if $y^*(x)$, with corresponding multiplier $\lambda^*$, is a solution of (2.3), then $y^*(x)$ is the unique first-order isolated minimizer of $P_x$;
(2) the minimizer $y^*(x)$ is locally Lipschitzian with respect to $x$; that is, there exist $L_1 > 0$ and $\delta > 0$ such that
\[
\| y^*(x_k) - y^*(x) \| \le L_1 \| x_k - x \|, \quad \forall x_k \in U(x, \delta), \ y^*(x_k) \in Y(x_k), \tag{2.15}
\]
where $Y(x_k)$ denotes the set of minimizers of $P_{x_k}$;
(3) the marginal function $\upsilon(x) = g(x, y^*(x))$ is also locally Lipschitzian with respect to $x$, and $\partial \upsilon(x) \subseteq S(x)$, where
\[
S(x) = \mathrm{co}\{ -\nabla_x g_i(x, y^*(x)) : i \in I(x, y^*(x)) \}, \tag{2.16}
\]
and $I(x, y^*(x)) = \{ i \in \{1, \dots, q\} : -g_i(x, y^*(x)) = g(x, y^*(x)) \}$. As a result,
\[
\partial \upsilon(x) = \Big\{ -\sum_{i \in I(x, y^*(x))} \lambda_i \nabla_x g_i(x, y^*(x)) : \lambda_i \ge 0, \ \sum_{i \in I(x, y^*(x))} \lambda_i = 1 \Big\}. \tag{2.17}
\]

Proof. (1) From Assumption 2.7, it is direct that $y^*(x)$ is a global minimizer of $P_x$; we only prove that $y^*(x)$ is a first-order isolated minimizer.
If the conclusion does not hold, then there exist a sequence $\{y_k\} \subset Y$ converging to $y^*(x)$ with $y_k \ne y^*(x)$ and a sequence $m_k > 0$ with $m_k \to 0$ such that
\[
g(x, y_k) \le g(x, y^*(x)) + \frac{1}{2} m_k \| y_k - y^*(x) \|, \quad y_k \in Y. \tag{2.18}
\]
Take $d_k = (y_k - y^*(x)) / \| y_k - y^*(x) \|$; for simplicity, suppose $d_k \to d$ with $\| d \| = 1$. Let $t_k = \| y_k - y^*(x) \|$; then from $y_k \in Y$, $d_k \to d$, and the compactness of $Y$, we have
\[
y^*(x) + t_k d \in Y, \quad t_k \to 0, \tag{2.19}
\]
that is,
\[
\nabla h_i(y^*(x))^T d \le 0, \quad \forall i \in I_h(y^*(x)). \tag{2.20}
\]
From Assumption 2.8, we know that $\sum_{i \in I_h(y^*(x))} \nabla h_i(y^*(x))^T d \ne 0$; as a result, $\sum_{i \in I_h(y^*(x))} \nabla h_i(y^*(x))^T d < 0$.
From (2.3), there exists $z \in \partial_y g(x, y^*(x))$ such that for this direction $d$, $z^T d = -\sum_{i \in I_h(y^*(x))} \lambda_i \nabla h_i(y^*(x))^T d > 0$. Hence,
\[
g'_y(x, y^*(x); d) = \max\{ \xi^T d : \xi \in \partial_y g(x, y^*(x)) \} \ge z^T d > 0. \tag{2.21}
\]
On the other hand, dividing (2.18) by $t_k$ and letting $k \to +\infty$ gives $g'_y(x, y^*(x); d) \le \lim_{k \to +\infty} m_k / 2 = 0$, which leads to a contradiction.
(2) From Assumption 2.8 and Theorem 3.1 in [13], the conclusion is direct.
(3) Since $g(x, y)$ is locally Lipschitzian with respect to $x$ and $y$, there exist $\delta > 0$, $\delta' > 0$, and $L_2 > 0$ such that for any $x_1 \in U(x, \delta)$ and $y \in U(y^*(x), \delta')$,
\[
| g(x_1, y^*(x)) - g(x, y^*(x)) | \le L_2 \| x_1 - x \|, \qquad | g(x, y) - g(x, y^*(x)) | \le L_2 \| y - y^*(x) \|. \tag{2.22}
\]
For $x_1 \in U(x, \delta)$, from conclusion (2), there exists $L_1 > 0$ such that $\| y^*(x_1) - y^*(x) \| \le L_1 \| x_1 - x \|$. As a result,
\[
\begin{aligned}
| \upsilon(x_1) - \upsilon(x) | &= | g(x_1, y^*(x_1)) - g(x, y^*(x)) | \\
&= | g(x_1, y^*(x_1)) - g(x_1, y^*(x)) + g(x_1, y^*(x)) - g(x, y^*(x)) | \\
&\le | g(x_1, y^*(x_1)) - g(x_1, y^*(x)) | + | g(x_1, y^*(x)) - g(x, y^*(x)) | \\
&\le L_2 \| y^*(x_1) - y^*(x) \| + L_2 \| x_1 - x \| \le L_2 (1 + L_1) \| x_1 - x \|.
\end{aligned} \tag{2.23}
\]
Hence, the marginal function $\upsilon(x)$ is locally Lipschitzian with respect to $x$.
Let $\hat{S}(x) = \{ -\nabla_x g_i(x, y^*(x)) : y^*(x) \in Y(x), \ i \in I(x, y^*(x)) \}$; then $S(x) = \mathrm{co}\{ \xi : \xi \in \hat{S}(x) \}$. We first prove that $\hat{S}(x)$ is closed; that is, for any sequences $\{x_k\} \subset R^n$ with $x_k \to x$ and $z_k \in \hat{S}(x_k)$ with $z_k \to z$, we have $z \in \hat{S}(x)$.
From $z_k \in \hat{S}(x_k)$, there exist $y_k \in Y(x_k)$ and $i_k \in I(x_k, y_k)$ such that $z_k = -\nabla_x g_{i_k}(x_k, y_k)$. Without loss of generality, suppose that $\{y_k\}$ converges to $y$ and that $i_k = i$ for all $k$ (the index set is finite). From Proposition 3.3 in [14], $y \in Y(x)$ and $i \in I(x, y)$; since $\nabla_x g_i(x, y)$ is continuous, $z = \lim_{k \to +\infty} z_k = \lim_{k \to +\infty} \big( {-\nabla_x g_{i_k}(x_k, y_k)} \big) = -\nabla_x g_i(x, y) \in \hat{S}(x)$. As a result, $\hat{S}(x)$ is a closed set.
From Theorem 3.2.16 in [11], for any $\xi \in \partial \upsilon(x)$ there exists a sequence $\{x_k\} \subset R^n$, $x_k \to x$, such that $\nabla \upsilon(x_k)$ exists and $\xi = \lim_{k \to +\infty} \nabla \upsilon(x_k)$. In addition, for arbitrary $d \in R^n$,
\[
\begin{aligned}
\nabla \upsilon(x_k)^T d = \upsilon'(x_k; d) &= \lim_{t \downarrow 0} \frac{\upsilon(x_k + t d) - \upsilon(x_k)}{t} \\
&= \lim_{t \downarrow 0} \frac{g(x_k + t d, y^*(x_k + t d)) - g(x_k, y^*(x_k))}{t} \\
&\le \lim_{t \downarrow 0} \frac{g(x_k + t d, y^*(x_k)) - g(x_k, y^*(x_k))}{t} \\
&= \max_{i \in I(x_k, y^*(x_k))} \big\{ {-\nabla_x g_i(x_k, y^*(x_k))^T d} \big\}.
\end{aligned} \tag{2.24}
\]
From the definition of $\hat{S}(x_k)$, there exists $z_k \in \hat{S}(x_k)$ such that $z_k^T d = \max_{i \in I(x_k, y^*(x_k))} \{ -\nabla_x g_i(x_k, y^*(x_k))^T d \}$. Hence $\nabla \upsilon(x_k)^T d \le z_k^T d$.
From $z_k \to z \in \hat{S}(x) \subset S(x)$, $\nabla \upsilon(x_k) \to \xi$, and $\nabla \upsilon(x_k)^T d \le z_k^T d$, we obtain $\xi^T d \le z^T d$; that is, for arbitrary $d \in R^n$ and $\xi \in \partial \upsilon(x)$, there exists $z \in S(x)$ such that $\xi^T d \le z^T d$.
If $\partial \upsilon(x) \subset S(x)$ does not hold, then there exists $\xi \in \partial \upsilon(x)$ with $\xi \notin S(x)$. Since $S(x)$ is a compact convex set, by the separation theorem ([15]) there exists $d \in R^n$ such that $\xi^T d > z^T d$ for all $z \in S(x)$, which contradicts the conclusion just obtained. As a result, $\partial \upsilon(x) \subset S(x)$ holds. From $\partial \upsilon(x) \subset S(x)$ and $S(x) = \mathrm{co}\{ -\nabla_x g_i(x, y^*(x)) : i \in I(x, y^*(x)) \}$, the computation formula (2.17) is direct.
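Formula (2.17) can be sanity-checked numerically. In the one-dimensional sketch below (hypothetical data: $q = 2$, $Y = [-1, 1]$, $g_1(x, y) = -(y - x)^2$ and $g_2(x, y) = -(y - x)^2/4$, both concave in $y$ as Assumption 2.7 requires), the active set at the minimizer is the singleton $\{1\}$, so (2.17) predicts $\nabla \upsilon(x) = -\nabla_x g_1(x, y^*(x)) = 2(x - y^*(x))$; a central difference on a brute-force marginal function agrees:

```python
import numpy as np

# g(x, y) = max{-g_1, -g_2} = max{(y-x)^2, (y-x)^2/4} = (y-x)^2.
Y = np.linspace(-1.0, 1.0, 200001)       # fine grid containing the endpoints

def upsilon(x):
    vals = np.maximum((Y - x)**2, 0.25*(Y - x)**2)
    k = np.argmin(vals)
    return vals[k], Y[k]                  # marginal value and minimizer y*(x)

x = 1.5
v, y_star = upsilon(x)                    # y*(x) = 1, the nearest endpoint
# Active set at (x, y*(x)) is {1}, since (y*-x)^2 > (y*-x)^2/4 here;
# formula (2.17) then gives the single subgradient -grad_x g_1 = 2(x - y*).
formula = 2.0*(x - y_star)
eps = 1e-5
fd = (upsilon(x + eps)[0] - upsilon(x - eps)[0]) / (2*eps)
```

The finite-difference slope of the marginal function matches the active-gradient prediction, illustrating conclusion (3).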

3. Discussion

In this paper, sufficient optimality conditions and a sensitivity analysis for a parameterized min-max programming problem are given, and a rule for computing the subdifferential of $\upsilon(x)$ is established. Though the assumptions in this paper are somewhat restrictive compared with some existing work, they hold naturally for a number of applications. Moreover, the obtained computation formula is simple, which is beneficial for establishing a concise first-order necessary optimality system for (1.1) and then constructing effective algorithms to solve the applications.

Acknowledgments

This research was supported by the National Natural Science Foundation of China no. 11001092 and the Fundamental Research Funds for the Central Universities no. 2011QC064.