Journal of Applied Mathematics
Volume 2012 (2012), Article ID 692325, 9 pages
http://dx.doi.org/10.1155/2012/692325
Research Article

Sufficient Optimality and Sensitivity Analysis of a Parameterized Min-Max Programming

1College of Science, Huazhong Agricultural University, Wuhan 430070, China
2School of Basic Science, East China Jiaotong University, Nanchang 330000, China

Received 4 June 2012; Accepted 17 July 2012

Academic Editor: Jian-Wen Peng

Copyright © 2012 Huijuan Xiong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Sufficient optimality and sensitivity of a parameterized min-max programming problem with a fixed feasible set are analyzed. Based on Clarke's subdifferential and Chaney's second-order directional derivative, sufficient optimality of the parameterized min-max problem is discussed first. Moreover, under a convexity assumption on the objective function, a subdifferential computation formula for the marginal function is obtained. The assumptions are satisfied naturally for some application problems, and the formulae based on them are concise and convenient for algorithmic purposes in solving the applications.

1. Introduction

In this paper, sufficient optimality conditions and a sensitivity analysis for a parameterized min-max program are given. The paper is motivated by a local reduction algorithmic strategy for solving the following nonsmooth semi-infinite min-max-min programming problem (SIM3P; see [1, 2], etc., for related applications):
$$\min_{x} f(x) \quad \text{s.t.} \quad g(x) = \max_{y \in Y}\, \min_{1 \le i \le q} g_i(x,y) \le 0. \tag{1.1}$$
With the local reduction technique, the SIM3P can first be rewritten as a bilevel program, in which the lower-level problem is the following parameterized min-max program $P_x$ (see [3–5] for references on the local reduction strategy):
$$\min_{y}\; g(x,y) = \max_{1 \le i \le q} g_i(x,y) \quad \text{s.t.} \quad y \in Y. \tag{1.2}$$
To make the bilevel strategy applicable to the SIM3P, it is essential to discuss the second-order sufficient optimality of $P_x$ and to give a sensitivity analysis of the parameterized minimizer $y(x)$ and the corresponding marginal function $g(x, y(x))$.
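As a concrete orientation (our illustration, not taken from the paper), the following Python sketch approximately solves the lower-level problem (1.2) by brute force over a discretized feasible set; the component functions g1, g2 and the set Y = [0, 1] are hypothetical choices made only to show the structure of $P_x$.

```python
# A minimal numerical sketch (illustrative only): approximately solve the
# lower-level min-max problem P_x of (1.2) by discretizing Y = [0, 1].
# The component functions below are hypothetical, not from the paper.
import numpy as np

def g_components(x, y):
    """Values of g_i(x, y), i = 1, 2 (toy choices)."""
    return np.array([(y - x) ** 2, y ** 2 + 0.1 * x])

def g(x, y):
    """Max-type objective of (1.2): g(x, y) = max_i g_i(x, y)."""
    return g_components(x, y).max()

def solve_P_x(x, n_grid=10001):
    """Approximate min_{y in Y} g(x, y) on a uniform grid over Y = [0, 1]."""
    ys = np.linspace(0.0, 1.0, n_grid)
    vals = np.array([g(x, y) for y in ys])
    k = vals.argmin()
    return ys[k], vals[k]  # approximate minimizer y(x) and marginal value

if __name__ == "__main__":
    for x in (0.2, 0.5, 0.8):
        y_x, v_x = solve_P_x(x)
        print(f"x = {x:.1f}:  y(x) ≈ {y_x:.4f},  v(x) ≈ {v_x:.6f}")
```

A reduction-based method would replace this brute-force search with a local solve of $P_x$ at each outer iterate; the grid search here is only meant to make the bilevel structure tangible.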

Sensitivity analysis of optimization problems is an important topic in operations research and optimization. Based on different assumptions, many results on various kinds of parametric programs have been obtained ([6–9], etc.). Among these, some conclusions on parameterized min-max programs like (1.2) have also been given. For example, based on variational analysis, parameterized continuous programs with fixed constraints were discussed in [7], and problems like (1.2) can be seen as a special case: under the inf-compactness condition and the condition that the objective function is concave with respect to the parameter, a directional-derivative computation formula for the marginal function of (1.2) can be obtained directly. However, the concavity condition fails for many problems. Recently, a computation formula for the Fréchet subgradients of marginal functions of nondifferentiable programs in Asplund spaces was given in [9]. By this formula, a subgradient formula for the marginal function of (1.2) is immediate, but it is tedious: if one uses it to construct the optimality system of (1.1), the resulting system is so complex that it is difficult to solve.

For computational convenience, the focus of this paper is to establish sufficient optimality conditions and a simple computation formula for the marginal function of (1.2). Based on Clarke's subdifferential and Chaney's second-order directional derivative, sufficient optimality of the parameterized problem $P_x$ is given first. Then the Lipschitz continuity of the parameterized isolated minimizer $y(x)$ and of the marginal function $g(x, y(x))$ is discussed; moreover, a subdifferential computation formula for the marginal function is obtained.

2. Main Results

Let $Y$ in (1.2) be defined as $Y = \{y \in R^m : h_i(y) \le 0,\; i = 1, \ldots, l\}$, where the $h_i(\cdot)$, $i = 1, \ldots, l$, are twice continuously differentiable functions on $R^m$, and the $g_i(\cdot,\cdot)$ in (1.2) are twice continuously differentiable functions on $R^n \times R^m$. In the following, we first give sufficient optimality conditions for (1.2) based on Clarke's subdifferential and Chaney's second-order directional derivative, and then carry out a sensitivity analysis of the parameterized problem $P_x$.

2.1. Sufficient Optimality Conditions of $P_x$

Definition 2.1 (see [10]). For a given parameter $x$, a point $y^* \in Y$ is said to be a local minimum of problem $P_x$ if there exists a neighborhood $U$ of $y^*$ such that
$$g(x,y) \ge g(x,y^*), \quad \forall y \in U \cap Y,\; y \ne y^*. \tag{2.1}$$

Assumption 2.2. For a given parameter $x$, suppose that $P_x$ satisfies the following constraint qualification:
$$\exists\, d \in R^m:\quad \nabla h_i(y)^T d < 0, \quad \forall i \in I(y),\; y \in Y, \tag{2.2}$$
where $I(y) = \{i \in \{1,\ldots,l\} : h_i(y) = 0\}$.
For a given parameter $x$, denote the Lagrange function of $P_x$ by $L(x,y,\lambda) = g(x,y) + \sum_{i=1}^{l} \lambda_i h_i(y)$; then the following holds.

Theorem 2.3. For a given parameter $x$, if $y^*$ is a minimum of $P_x$ and Assumption 2.2 holds, then there exists $\lambda \in R^l_+$ such that $0 \in \partial_y L(x,y^*,\lambda)$, where $\partial_y L(x,y^*,\lambda)$ denotes the Clarke subdifferential of $L(x,y^*,\lambda)$ with respect to $y$. Specifically, the following system holds:
$$0 \in \partial_y g(x,y^*) + \sum_{i=1}^{l} \lambda_i \nabla h_i(y^*), \tag{2.3}$$
where $\partial_y g(x,y^*)$ denotes the Clarke subdifferential of $g(x,y^*)$ with respect to $y$; it can be computed as $\mathrm{co}\{\nabla_y g_i(x,y^*) : i \in I(x,y^*)\}$, where $\mathrm{co}\{\cdot\}$ denotes the convex hull of the elements and $I(x,y^*) = \{i \in \{1,\ldots,q\} : g_i(x,y^*) = g(x,y^*)\}$.

Proof. The conclusion follows directly from Theorem 3.2.6 and Corollary 5.1.8 in [11].

Since $g(x,y) = \max_{1 \le i \le q}\{g_i(x,y)\}$ is a directionally differentiable function (Theorem 3.2.13 in [11]), the directional derivative of $g(x,y)$ with respect to $y$ in a direction $d$ can be computed as
$$g'_y(x,y;d) = \max\{\xi^T d : \xi \in \partial_y g(x,y)\}, \quad \forall d \in R^m. \tag{2.4}$$
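To make (2.3) and (2.4) concrete, the sketch below (our illustration on hypothetical quadratic pieces, not an algorithm from the paper) collects the gradients of the active pieces, which generate $\partial_y g(x,y)$, and evaluates the directional derivative (2.4) as a maximum over those generators.

```python
# Illustrative sketch of (2.3)-(2.4): for g(x, y) = max{g1, g2} with smooth
# pieces, partial_y g(x, y) = co{grad_y g_i(x, y) : i active}, and (2.4) is a
# maximum of linear functionals over that hull (attained at a generator).
import numpy as np

def components(x, y):
    """Toy pieces g_i(x, y) = 0.5 * ||y - c_i(x)||^2 with hypothetical centers."""
    centers = [np.array([x, 0.0]), np.array([0.0, x])]
    vals = [0.5 * np.dot(y - c, y - c) for c in centers]
    grads = [y - c for c in centers]        # grad_y g_i(x, y)
    return np.array(vals), grads

def active_gradients(x, y, tol=1e-9):
    """Generators of Clarke's subdifferential: grad_y g_i for i in I(x, y)."""
    vals, grads = components(x, y)
    return [grads[i] for i in range(len(vals)) if vals[i] >= vals.max() - tol]

def dir_derivative(x, y, d):
    """Formula (2.4): g'_y(x, y; d) = max{ xi^T d : xi in partial_y g(x, y) }."""
    return max(float(np.dot(xi, d)) for xi in active_gradients(x, y))

if __name__ == "__main__":
    x, y = 1.0, np.array([0.5, 0.5])        # both pieces active by symmetry
    d = np.array([1.0, -1.0])
    print("active gradients:", active_gradients(x, y))
    print("g'_y(x, y; d) =", dir_derivative(x, y, d))
```

The maximum in (2.4) over the convex hull of finitely many gradients is attained at one of the generators, which is why the code only scans the active gradients.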

Definition 2.4 (see [10]). Let $f(x)$ be a locally Lipschitzian function on $R^n$ and let $u$ be a nonzero vector in $R^n$. Define
$$\partial_u f(x) = \{\upsilon \in R^n : \exists\, x_k \to x \text{ in direction } u,\; \upsilon_k \to \upsilon, \text{ such that } \upsilon_k \in \partial f(x_k) \text{ for each } k\}, \tag{2.5}$$
and define Chaney's lower second-order directional derivative as
$$f^-(x,\upsilon,u) = \liminf_{k \to \infty} \frac{f(x_k) - f(x) - \upsilon^T (x_k - x)}{t_k^2}, \tag{2.6}$$
taken over all triples of sequences $\{x_k\}$, $\{\upsilon_k\}$, and $\{t_k\}$ for which (a) $t_k > 0$ for each $k$ and $\{x_k\} \to x$; (b) $t_k \to 0$ and $(x_k - x)/t_k$ converges to $u$; (c) $\{\upsilon_k\} \to \upsilon$ with $\upsilon_k \in \partial f(x_k)$ for each $k$.
Similarly, Chaney's upper second-order directional derivative is defined as
$$f^+(x,\upsilon,u) = \limsup_{k \to \infty} \frac{f(x_k) - f(x) - \upsilon^T (x_k - x)}{t_k^2}, \tag{2.7}$$
taken over all triples of sequences $\{x_k\}$, $\{\upsilon_k\}$, and $\{t_k\}$ for which (a), (b), and (c) above hold.
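As a sanity check on Definition 2.4 (our remark, not part of the paper): when $f$ is twice continuously differentiable, $\partial f(x_k) = \{\nabla f(x_k)\}$, so necessarily $\upsilon = \nabla f(x)$, and a second-order Taylor expansion along the admissible sequences gives both derivatives in closed form.

```latex
% Smooth special case of Definition 2.4: for f in C^2 and x_k -> x with
% (x_k - x)/t_k -> u, a second-order Taylor expansion yields
\[
  \frac{f(x_k) - f(x) - \nabla f(x)^T (x_k - x)}{t_k^2}
  \;\longrightarrow\; \frac{1}{2}\, u^T \nabla^2 f(x)\, u ,
\]
% so f^-(x, \nabla f(x), u) = f^+(x, \nabla f(x), u) = (1/2) u^T \nabla^2 f(x) u,
% which the formula in Proposition 2.5 below recovers in the case q = 1.
```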
For the parameterized max-type function $g(x,y) = \max_{1 \le i \le q}\{g_i(x,y)\}$, where $x$ is a given parameter, Chaney's lower and upper second-order directional derivatives can be computed as follows.

Proposition 2.5 (see [12]). For any given parameter $x$, Chaney's lower and upper second-order directional derivatives of $g(x,y)$ with respect to $y$ exist; moreover, for any given $0 \ne u \in R^m$ and $\upsilon \in \partial_u g(x,y)$, one has
$$g^-(x,y,\upsilon,u) = \min\Big\{\frac{1}{2}\sum_{i=1}^{q} a_i\, u^T \nabla^2_y g_i(x,y)\, u \;:\; a \in T_u(g,y,\upsilon)\Big\}, \qquad g^+(x,y,\upsilon,u) = \max\Big\{\frac{1}{2}\sum_{i=1}^{q} a_i\, u^T \nabla^2_y g_i(x,y)\, u \;:\; a \in T_u(g,y,\upsilon)\Big\}, \tag{2.8}$$
where $T_u(g,y,\upsilon)$ denotes the set of all $a \in R^q_+$ for which there exist sequences $\{y(k)\}$, $\{a(k)\}$, and $\{\upsilon(k)\}$ such that (1) $y(k) \to y$ in direction $u$; (2) $\upsilon(k) \to \upsilon$ and $\upsilon(k) \in \partial_y g(x,y(k))$, $k = 1, 2, \ldots$; (3) $a(k) \to a$, $a(k) \in E_q$, and $\upsilon(k) = \sum_{i=1}^{q} a_i(k)\, \nabla_y g_i(x,y(k))$; (4) $a_j(k) = 0$ for $j \notin K_g(y(k))$; (2.9)
here $K_g(y(k)) = \{i \in Q : g_i(x,y^{(n)}) = g(x,y^{(n)}),\; y^{(n)} \in B(y,1/n),\; n \in N\}$, $E_q = \{a \in R^q_+ : \sum_{i=1}^{q} a_i = 1\}$, $Q = \{1,\ldots,q\}$, and $B(y,1/n)$ denotes the ball centered at $y$ with radius $1/n$.

Theorem 2.6 (sufficiency theorem). For a given parameter $x \in R^n$, suppose that Assumption 2.2 holds and that there exists $y^* \in R^m$ with multiplier $\lambda \in R^l_+$ such that (2.3) holds. Moreover, suppose that every feasible direction $d \in R^m$ of $Y$, that is, every $d$ with $\max\{\nabla h_i(y^*)^T d : 1 \le i \le l\} \le 0$, satisfies one of the following conditions: (1) $g'_y(x,y^*;d) \ne 0$; (2) $g'_y(x,y^*;d) = 0$ and $\sum_{i=1}^{l} \lambda_i \nabla h_i(y^*)^T d = 0$, that is, $L'_y(x,y^*;d) = 0$, and
$$\min\Big\{\frac{1}{2}\sum_{i=1}^{q} a_i\, d^T \nabla^2_y g_i(x,y^*)\, d \;:\; a \in E_q\Big\} + \sum_{i=1}^{l} \lambda_i\, d^T \nabla^2 h_i(y^*)\, d > 0. \tag{2.10}$$
Then $y^*$ is a local minimum of $P_x$.

Proof. (1) If not, then there exist sequences $t_k \downarrow 0$, $d_k \to d$, and $y_k = y^* + t_k d_k \in Y$ such that
$$g(x,y_k) < g(x,y^*). \tag{2.11}$$
As a result, $g'_y(x,y^*;d) = \lim_{t \downarrow 0}(g(x,y^*+td) - g(x,y^*))/t = \lim_{k \to +\infty}(g(x,y^*+t_k d_k) - g(x,y^*))/t_k \le 0$. If $g'_y(x,y^*;d) \ne 0$, then $g'_y(x,y^*;d) < 0$. From (2.4), we know that $\xi^T d < 0$ for all $\xi \in \partial_y g(x,y^*)$. Hence, for the direction $d \in R^m$, since $\lambda_i \ge 0$ and $\nabla h_i(y^*)^T d \le 0$, we have
$$\xi^T d + \sum_{i=1}^{l} \lambda_i \nabla h_i(y^*)^T d < 0, \quad \forall \xi \in \partial_y g(x,y^*). \tag{2.12}$$
On the other hand, since $y^*$ satisfies (2.3), there exists $\xi \in \partial_y g(x,y^*)$ such that
$$\xi^T d + \sum_{i=1}^{l} \lambda_i \nabla h_i(y^*)^T d = 0, \tag{2.13}$$
which contradicts (2.12).
(2) From Theorem 4 in [10] and Proposition 2.5, the conclusion follows directly.
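A practical remark on checking (2.10) (ours, consistent with the statement of Theorem 2.6): the quantity minimized is linear in $a$, and $E_q$ is the unit simplex, so the minimum is attained at a vertex and (2.10) collapses to a test over the individual Hessians.

```latex
% The map a |-> (1/2) sum_i a_i d^T \nabla_y^2 g_i(x, y^*) d is linear in a,
% so its minimum over the simplex E_q is attained at a vertex e_i; hence
% condition (2.10) is equivalent to the easily checkable inequality
\[
  \frac{1}{2}\,\min_{1 \le i \le q}\; d^T \nabla_y^2 g_i(x, y^*)\, d
  \;+\; \sum_{i=1}^{l} \lambda_i\, d^T \nabla^2 h_i(y^*)\, d \;>\; 0 .
\]
```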

2.2. Sensitivity Analysis of the Parameterized Problem $P_x$

In the following, we carry out a sensitivity analysis of the parameterized min-max program $P_x$; that is, we study the variation of the isolated local minimizers and of the corresponding marginal function under small perturbations of $x$.

For convenience of discussion, for any given parameter $x$, denote by $y(x)$ a minimizer of $P_x$ and by $\upsilon(x) = \min\{g(x,y) : y \in Y\}$ the corresponding marginal function value, and make the following assumptions.

Assumption 2.7. For given $x \in R^n$, the parametric problem $P_x$ is a convex problem; specifically, $g_i(x,y)$, $i = 1, \ldots, q$, are convex functions with respect to the variable $y$, and $h_j(y)$, $j = 1, \ldots, l$, are convex functions.

Assumption 2.8. Let $I(y) = \{i \in \{1,\ldots,l\} : h_i(y) = 0\}$; the gradients $\{\nabla h_i(y) : i \in I(y)\}$ are linearly independent.

Definition 2.9 (see Definition 2.1, [13]). For a given $x$, $y^* \in Y$ is said to be an isolated local minimum of order $i$ ($i = 1$ or $2$) of $P_x$ if there exist a real $m > 0$ and a neighborhood $V$ of $y^*$ such that
$$g(x,y) > g(x,y^*) + \frac{1}{2} m \|y - y^*\|^i, \quad \forall y \in V \cap Y,\; y \ne y^*. \tag{2.14}$$
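For orientation (our example, not from [13]): on $Y = R$, the point $y^* = 0$ is a first-order isolated minimum of $g(y) = |y|$, while for the smooth function $g(y) = y^2$ it is only a second-order isolated minimum.

```latex
% Example: y^* = 0, g(y) = |y|, order i = 1 with m = 1 in (2.14):
\[
  |y| \;>\; |0| + \tfrac{1}{2}\cdot 1 \cdot |y - 0|^{1} = \tfrac{1}{2}|y|,
  \qquad \forall\, y \neq 0 ,
\]
% whereas for g(y) = y^2 no m > 0 satisfies y^2 > (m/2)|y| for all small
% y != 0, so 0 is an isolated minimum of order 2 but not of order 1.
```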

Theorem 2.10. For a given $x \in R^n$, suppose Assumptions 2.2–2.8 hold; then the following conclusions hold: (1) if $y(x)$ with corresponding multiplier $\lambda$ is a solution of (2.3), then $y(x)$ is a unique first-order isolated minimizer of $P_x$; (2) any minimizer $y(x)$ is a locally Lipschitzian function with respect to $x$; that is, there exist $L_1 > 0$ and $\delta > 0$ such that
$$\|y(x_k) - y(x)\| \le L_1 \|x_k - x\|, \quad \forall x_k \in U(x,\delta),\; y(x_k) \in Y(x_k), \tag{2.15}$$
where $Y(x_k)$ denotes the set of minimizers of $P_{x_k}$; (3) for any minimizer $y(x)$, the marginal function $\upsilon(x) = g(x,y(x))$ is also a locally Lipschitzian function with respect to $x$, and $\partial\upsilon(x) \subseteq S(x)$, where
$$S(x) = \mathrm{co}\{\nabla_x g_i(x,y(x)) : i \in I(x,y(x))\}, \tag{2.16}$$
and $I(x,y(x)) = \{i \in \{1,\ldots,q\} : g_i(x,y(x)) = g(x,y(x))\}$. As a result,
$$\partial\upsilon(x) = \Big\{\sum_{i \in I(x,y(x))} \lambda_i \nabla_x g_i(x,y(x)) \;:\; \lambda_i \ge 0,\; \sum_{i \in I(x,y(x))} \lambda_i = 1\Big\}. \tag{2.17}$$

Proof. (1) From Assumption 2.7, it is immediate that $y(x)$ is a global minimizer of $P_x$; we only prove that $y(x)$ is a first-order isolated minimizer.
If the conclusion does not hold, then there exist a sequence $\{y_k\} \subseteq Y$ converging to $y(x)$ with $y_k \ne y(x)$ and a sequence $m_k > 0$ with $m_k \to 0$ such that
$$g(x,y_k) \le g(x,y(x)) + \frac{1}{2} m_k \|y_k - y(x)\|, \quad y_k \in Y. \tag{2.18}$$
Take $d_k = (y_k - y(x))/\|y_k - y(x)\|$; for simplicity, suppose $d_k \to d$ with $\|d\| = 1$. Let $t_k = \|y_k - y(x)\|$; then from $y_k \in Y$, $d_k \to d$, and the compactness of $Y$, we have
$$y(x) + t_k d \in Y, \quad t_k \downarrow 0, \tag{2.19}$$
that is,
$$\nabla h_i(y(x))^T d \le 0, \quad i \in I(y(x)). \tag{2.20}$$
From Assumption 2.8, we know that $\sum_{i \in I(y(x))} \nabla h_i(y(x))^T d \ne 0$. As a result, $\sum_{i \in I(y(x))} \nabla h_i(y(x))^T d < 0$.
From (2.3), we know that there exists $z \in \partial_y g(x,y(x))$ with $z = -\sum_{i \in I(y(x))} \lambda_i \nabla h_i(y(x))$, so that for the direction $d$ above, $z^T d = -\sum_{i \in I(y(x))} \lambda_i \nabla h_i(y(x))^T d > 0$. Hence,
$$g'_y(x,y(x);d) = \max\{\xi^T d : \xi \in \partial_y g(x,y(x))\} \ge z^T d > 0. \tag{2.21}$$
On the other hand, since $y(x)$ is a minimizer, we have $g'_y(x,y(x);d) \le 0$, which is a contradiction.
(2) From Assumption 2.8 and Theorem 3.1 in [13], the conclusion follows directly.
(3) Since $g(x,y)$ is a locally Lipschitzian function with respect to $x$ and $y$, there exist $\delta > 0$, $\delta' > 0$, and $L_2 > 0$ such that for any $x_1 \in U(x,\delta)$ and $y \in U(y(x),\delta')$,
$$|g(x_1,y(x)) - g(x,y(x))| \le L_2 \|x_1 - x\|, \qquad |g(x,y) - g(x,y(x))| \le L_2 \|y - y(x)\|. \tag{2.22}$$
For $x_1 \in U(x,\delta)$, from conclusion (2), there exists $L_1 > 0$ such that $\|y(x_1) - y(x)\| \le L_1 \|x_1 - x\|$. As a result,
$$\begin{aligned} |\upsilon(x_1) - \upsilon(x)| &= |g(x_1,y(x_1)) - g(x,y(x))| \\ &= |g(x_1,y(x_1)) - g(x_1,y(x)) + g(x_1,y(x)) - g(x,y(x))| \\ &\le |g(x_1,y(x_1)) - g(x_1,y(x))| + |g(x_1,y(x)) - g(x,y(x))| \\ &\le L_2 \|y(x_1) - y(x)\| + L_2 \|x_1 - x\| \le L_2 (1 + L_1) \|x_1 - x\|. \end{aligned} \tag{2.23}$$
Hence, the marginal function $\upsilon(x)$ is a locally Lipschitzian function with respect to $x$.
Let $\widetilde{S}(x) = \{\nabla_x g_i(x,y(x)) : i \in I(x,y(x))\}$, so that $S(x) = \mathrm{co}\{\xi : \xi \in \widetilde{S}(x)\}$. We first prove that $\widetilde{S}(x)$ is closed; that is, we prove that for any sequences $\{x_k\} \subseteq R^n$ with $x_k \to x$ and $z_k \in \widetilde{S}(x_k)$ with $z_k \to z$, we have $z \in \widetilde{S}(x)$.
From $z_k \in \widetilde{S}(x_k)$, there exist $y_k \in Y(x_k)$ and $i_k \in I(x_k,y_k)$ such that $z_k = \nabla_x g_{i_k}(x_k,y_k)$. Without loss of generality, suppose that $\{y_k\}$ converges to $y$ and that $\{i_k\}$ converges to $i$ (the index set is finite, so $i_k = i$ for all $k$ large enough). From Proposition 3.3 in [14], we have $y \in Y(x)$ and $i \in I(x,y)$; since $\nabla_x g_i(x,y)$ is a continuous function, $z = \lim_{k \to +\infty} z_k = \lim_{k \to +\infty} \nabla_x g_{i_k}(x_k,y_k) = \nabla_x g_i(x,y) \in \widetilde{S}(x)$. As a result, $\widetilde{S}(x)$ is a closed set.
From Theorem 3.2.16 in [11], for any $\xi \in \partial\upsilon(x)$, there exists a sequence $x_k \in R^n$, $x_k \to x$, such that $\nabla\upsilon(x_k)$ exists and $\xi = \lim_{k \to +\infty}\nabla\upsilon(x_k)$. In addition, for arbitrary $d \in R^n$,
$$\nabla\upsilon(x_k)^T d = \upsilon'(x_k;d) = \lim_{t \downarrow 0}\frac{\upsilon(x_k + td) - \upsilon(x_k)}{t} = \lim_{t \downarrow 0}\frac{g(x_k + td,\, y(x_k + td)) - g(x_k,\, y(x_k))}{t} \le \lim_{t \downarrow 0}\frac{g(x_k + td,\, y(x_k)) - g(x_k,\, y(x_k))}{t} = \max_{i \in I(x_k,y(x_k))}\{\nabla_x g_i(x_k,y(x_k))^T d\}. \tag{2.24}$$
From the definition of $\widetilde{S}(x_k)$, there exists $z_k \in \widetilde{S}(x_k)$ such that $z_k^T d = \max_{i \in I(x_k,y(x_k))}\{\nabla_x g_i(x_k,y(x_k))^T d\}$. Hence, $\nabla\upsilon(x_k)^T d \le z_k^T d$.
From $z_k \to z \in \widetilde{S}(x) \subseteq S(x)$, $\nabla\upsilon(x_k) \to \xi$, and $\nabla\upsilon(x_k)^T d \le z_k^T d$, we obtain $\xi^T d \le z^T d$; that is, for arbitrary $d \in R^n$ and $\xi \in \partial\upsilon(x)$, there exists $z \in S(x)$ such that $\xi^T d \le z^T d$.
If $\partial\upsilon(x) \subseteq S(x)$ does not hold, then there exists $\xi \in \partial\upsilon(x)$ with $\xi \notin S(x)$. Since $S(x)$ is a compact convex set, by the separation theorem ([15]) there exists $d \in R^n$ such that $\xi^T d > 0$ and $z^T d \le 0$ for arbitrary $z \in S(x)$, which contradicts the conclusion of the preceding paragraph. As a result, $\partial\upsilon(x) \subseteq S(x)$ holds. From $\partial\upsilon(x) \subseteq S(x)$ and $S(x) = \mathrm{co}\{\nabla_x g_i(x,y(x)) : i \in I(x,y(x))\}$, the computation formula (2.17) is direct.
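As an informal numerical check of (2.16)–(2.17) (our sketch, reusing the hypothetical toy instance from Section 1, not a procedure from the paper), a finite-difference slope of the marginal function $\upsilon(x)$ should lie in the convex hull of the active gradients $\nabla_x g_i(x,y(x))$.

```python
# Informal check of (2.16)-(2.17) on the toy instance g1 = (y - x)^2,
# g2 = y^2 + 0.1 x, Y = [0, 1] (hypothetical data): a finite-difference
# slope of v(x) should lie in co{grad_x g_i(x, y(x)) : i active}.
import numpy as np

YS = np.linspace(0.0, 1.0, 200001)              # fine grid over Y = [0, 1]

def marginal(x):
    """v(x) = min_{y in Y} max{(y - x)^2, y^2 + 0.1 x} by grid search."""
    vals = np.maximum((YS - x) ** 2, YS ** 2 + 0.1 * x)
    k = vals.argmin()
    return YS[k], vals[k]

if __name__ == "__main__":
    x, eps = 0.8, 1e-3
    y_x, _ = marginal(x)
    fd = (marginal(x + eps)[1] - marginal(x - eps)[1]) / (2.0 * eps)
    grads = [-2.0 * (y_x - x), 0.1]             # grad_x g_i at (x, y(x)); both active here
    print(f"y(x) ≈ {y_x:.4f}, finite-difference v'(x) ≈ {fd:.4f}")
    print(f"per (2.17), v'(x) should lie in [{min(grads):.2f}, {max(grads):.2f}]")
```

At $x = 0.8$ both pieces are active at $y(x) \approx 0.35$, and the observed slope $\approx 0.45$ indeed lies between the two candidate gradients, illustrating the containment in (2.16).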

3. Discussion

In this paper, sufficient optimality conditions and a sensitivity analysis for a parameterized min-max program are given, and a rule for computing the subdifferential of $\upsilon(x)$ is established. Though the assumptions in this paper are somewhat restrictive compared with some existing work, they hold naturally for some applications. Moreover, the obtained computation formula is simple, which is beneficial for establishing a concise first-order necessary optimality system for (1.1) and then constructing effective algorithms to solve the applications.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (no. 11001092) and the Fundamental Research Funds for the Central Universities (no. 2011QC064).

References

  1. C. Kirjner-Neto and E. Polak, “On the conversion of optimization problems with max-min constraints to standard optimization problems,” SIAM Journal on Optimization, vol. 8, no. 4, pp. 887–915, 1998.
  2. E. Polak and J. O. Royset, “Algorithms for finite and semi-infinite min-max-min problems using adaptive smoothing techniques,” Journal of Optimization Theory and Applications, vol. 119, no. 3, pp. 421–457, 2003.
  3. G.-X. Liu, “A homotopy interior point method for semi-infinite programming problems,” Journal of Global Optimization, vol. 37, no. 4, pp. 631–646, 2007.
  4. O. Stein and G. Still, “Solving semi-infinite optimization problems with interior point techniques,” SIAM Journal on Control and Optimization, vol. 42, no. 3, pp. 769–788, 2003.
  5. O. Stein and A. Tezel, “The semismooth approach for semi-infinite programming under the reduction ansatz,” Journal of Global Optimization, vol. 41, no. 2, pp. 245–266, 2008.
  6. A. Auslender, “Stability in mathematical programming with nondifferentiable data,” SIAM Journal on Control and Optimization, vol. 22, no. 2, pp. 239–254, 1984.
  7. J. F. Bonnans and A. Shapiro, Perturbation Analysis of Optimization Problems, Springer, New York, NY, USA, 2000.
  8. J. F. Bonnans and A. Shapiro, “Optimization problems with perturbations: a guided tour,” SIAM Review, vol. 40, no. 2, pp. 228–264, 1998.
  9. B. S. Mordukhovich, N. M. Nam, and N. D. Yen, “Subgradients of marginal functions in parametric mathematical programming,” Mathematical Programming B, vol. 116, no. 1-2, pp. 369–396, 2009.
  10. R. W. Chaney, “Second-order sufficient conditions in nonsmooth optimization,” Mathematics of Operations Research, vol. 13, no. 4, pp. 660–673, 1988.
  11. M. M. Mäkelä and P. Neittaanmäki, Nonsmooth Optimization: Analysis and Algorithms with Application to Optimal Control, Utopia Press, Singapore, 1992.
  12. L. Huang and K. F. Ng, “Second-order optimality conditions for minimizing a max-function,” Science in China A, vol. 43, no. 7, pp. 722–733, 2000.
  13. B. S. Mordukhovich, “Sensitivity analysis in nonsmooth optimization,” in Theoretical Aspects of Industrial Design, D. A. Field and V. Komkov, Eds., pp. 32–46, SIAM, Philadelphia, Pa, USA, 1992.
  14. V. F. Demyanov and A. M. Rubinov, Constructive Nonsmooth Analysis, vol. 7, Peter Lang, Frankfurt am Main, Germany, 1995.
  15. R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, USA, 1970.