Journal of Applied Mathematics

Volume 2012, Article ID 692325, 9 pages

http://dx.doi.org/10.1155/2012/692325

## Sufficient Optimality and Sensitivity Analysis of a Parameterized Min-Max Programming

^{1}College of Science, Huazhong Agricultural University, Wuhan 430070, China

^{2}School of Basic Science, East China Jiaotong University, Nanchang 330000, China

Received 4 June 2012; Accepted 17 July 2012

Academic Editor: Jian-Wen Peng

Copyright © 2012 Huijuan Xiong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Sufficient optimality and sensitivity of a parameterized min-max programming problem with a fixed feasible set are analyzed. Based on Clarke's subdifferential and Chaney's second-order directional derivative, sufficient optimality of the parameterized min-max programming is discussed first. Then, under a convexity assumption on the objective function, a subdifferential computation formula for the marginal function is obtained. The assumptions are satisfied naturally by several application problems, and the formulae based on them are concise and convenient for algorithmic purposes in solving the applications.

#### 1. Introduction

In this paper, sufficient optimality conditions and a sensitivity analysis for a parameterized min-max programming problem are given. The paper is motivated by a local reduction algorithmic strategy for solving the following nonsmooth semi-infinite min-max-min programming problem (P; see [1, 2], etc., for related applications): With the local reduction technique, P can first be rewritten as a bilevel programming problem whose lower-level problem is the following parameterized min-max programming (see [3–5] for references on the local reduction strategy): To make the bilevel strategy applicable to P, it is essential to discuss the second-order sufficient optimality of and to give a sensitivity analysis of the parameterized minimum and the corresponding marginal function .
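The displayed problem formulations above were lost in transcription and cannot be recovered here. Purely for orientation, a generic parameterized min-max program with a fixed feasible set takes the following shape; all symbols ($x$, $y$, $f_i$, $X$, $v$) are illustrative placeholders, not necessarily the paper's notation:

```latex
% Illustrative shape only: x is the decision variable, y a parameter,
% f_1, ..., f_m smooth data functions, X a fixed feasible set.
(P_y):\quad \min_{x \in X} \; \varphi(x, y), \qquad
\varphi(x, y) := \max_{1 \le i \le m} f_i(x, y),
\qquad
v(y) := \min_{x \in X} \varphi(x, y) \quad \text{(marginal function)}.
```

Here $v$ plays the role of the marginal function whose sensitivity in $y$ is the subject of Section 2.2.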

Sensitivity analysis of optimization problems is an important aspect of operations research and optimization. Based on different assumptions, many results on various kinds of parametric programming have been obtained ([6–9], etc.). Among these, some conclusions on parameterized min-max programming like (1.2) have also been given. For example, based on variational analysis, parameterized continuous programming with a fixed constraint set was discussed in [7]; problems like (1.2) can be seen as a special case. Under the inf-compactness condition and the condition that the objective function is concave with respect to the parameter, a computational formula for the directional derivative of the marginal function of (1.2) can be obtained directly. However, the concavity condition fails for many problems. Recently, a Fréchet subgradient computation formula for marginal functions of nondifferentiable programming in Asplund spaces was given ([9]). Using this formula, a subgradient formula for the marginal function of (1.2) is immediate. But the formula is cumbersome: if it is used to construct the optimality system of (1.1), the resulting system is so complex that it is difficult to solve.

For more convenient computation, the focus of this paper is to establish sufficient optimality conditions and a simple computation formula for the marginal function of (1.2). Based on Clarke's subdifferential and Chaney's second-order directional derivative, sufficient optimality of the parameterized programming is given first. Then the Lipschitz continuity of the parameterized isolated minimizer and of the marginal function is discussed; moreover, a subdifferential computation formula for the marginal function is obtained.

#### 2. Main Results

Let in (1.2) be defined as , where and , are twice continuously differentiable functions on , and in (1.2) are twice continuously differentiable functions on . In the following, we first give the sufficient optimality conditions for (1.2) based on Clarke's subdifferential and Chaney's second-order directional derivative, and then carry out a sensitivity analysis of the parameterized problem .

##### 2.1. Sufficient Optimality Conditions of

*Definition 2.1 (see [10]). *For a given parameter , a point is said to be a local minimum of problem if there exists a neighborhood of such that

*Assumption 2.2. *For a given parameter , suppose that satisfies the following constraint qualification:
where .

For a given parameter , denote the Lagrangian of by ; then the following holds.

Theorem 2.3. *For a given parameter , if is a minimum of and Assumption 2.2 holds, then there exists a such that , where denotes the Clarke subdifferential of . Specifically, the following system holds:
**
where denotes the Clarke subdifferential of with respect to ; it can be computed as , where denotes taking the convex hull of the elements, .*
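The stationarity system of Theorem 2.3 did not survive extraction. For a max-type objective with smooth inequality constraints, a Clarke-stationarity system of this kind typically has the following form; every symbol here ($\bar x$, $\lambda_j$, $c_j$, $I$) is a hypothetical placeholder:

```latex
% Generic KKT-type system for  min_x max_i f_i(x,y)  s.t.  c_j(x,y) <= 0:
0 \in \operatorname{conv}\{\nabla_x f_i(\bar x, y) : i \in I(\bar x, y)\}
      + \sum_j \lambda_j \nabla_x c_j(\bar x, y),
\qquad
\lambda_j \ge 0, \quad \lambda_j \, c_j(\bar x, y) = 0,
```

where $I(\bar x, y)$ is the active index set of the max; the convex hull is exactly Clarke's subdifferential of a finite max of smooth functions.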

* Proof. *The conclusion follows directly from the corresponding theorem and corollary in [11].

Since is a directionally differentiable function (Theorem in [11]), the directional derivative of with respect to in the direction can be computed as follows:
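The displayed formula itself is missing above; the standard result for a finite max of smooth functions is that the directional derivative equals the largest inner product of an active gradient with the direction. The following sketch (with toy functions of our own choosing, not the paper's data) checks this formula against a one-sided finite difference:

```python
import numpy as np

def max_fun(x, funcs):
    """phi(x) = max_i f_i(x)."""
    return max(f(x) for f in funcs)

def dir_deriv_formula(x, d, funcs, grads, tol=1e-9):
    """phi'(x; d) = max over active indices i of <grad f_i(x), d>."""
    vals = [f(x) for f in funcs]
    top = max(vals)
    active = [i for i, v in enumerate(vals) if top - v <= tol]
    return max(float(np.dot(grads[i](x), d)) for i in active)

def dir_deriv_numeric(x, d, funcs, t=1e-6):
    """One-sided finite-difference approximation of phi'(x; d)."""
    return (max_fun(x + t * d, funcs) - max_fun(x, funcs)) / t

# Toy data: f1(x) = x0^2, f2(x) = x1; both are active at x = (1, 1).
funcs = [lambda x: x[0] ** 2, lambda x: x[1]]
grads = [lambda x: np.array([2 * x[0], 0.0]), lambda x: np.array([0.0, 1.0])]
x0 = np.array([1.0, 1.0])
d0 = np.array([1.0, 0.0])
exact = dir_deriv_formula(x0, d0, funcs, grads)   # max(<(2,0),d0>, <(0,1),d0>) = 2
approx = dir_deriv_numeric(x0, d0, funcs)
```

At the kink both pieces are active, so the formula picks the larger slope; the finite difference agrees to first order in the step size.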

*Definition 2.4 (see [10]). *Let be a locally Lipschitzian function on and let be a nonzero vector in . Suppose that
Chaney's lower second-order directional derivative is then defined as follows:
the limit inferior being taken over all triples of sequences , , and for which (a) for each and ; (b) and converges to ; (c) with for each .

Similarly, Chaney's upper second-order directional derivative can be defined as
the limit superior being taken over all triples of sequences , , and for which (a), (b), and (c) above hold.

For the parameterized max-type function , where is a given parameter, Chaney's lower and upper second-order directional derivatives can be computed as follows.

Proposition 2.5 (see [12]). *For any given parameter , Chaney's lower and upper second-order directional derivatives of with respect to exist; moreover, for any given , , one has
**
where**
where , , , and denotes the ball centered at with radius .*

Theorem 2.6 (sufficiency theorem). *For a given parameter , suppose Assumption 2.2 holds; then there exists such that (2.3) holds. Moreover, for any feasible direction of , that is, , if satisfies one of the following conditions: *(1)*; *(2)*, , that is, , and
**then is a local minimum of .*

* Proof. *(1) If not, then there exist sequences , , such that
As a result, . If , then . From (2.4), we know that . Hence, for the direction , we have
On the other hand, since satisfies (2.3), we know that there exists a such that
which contradicts (2.12).

(2) From Theorem 4 in [10] and Proposition 2.5, the conclusion is direct.

##### 2.2. Sensitivity Analysis of Parameterized

In the following, we carry out a sensitivity analysis of the parameterized min-max programming , that is, we study the variation of the isolated local minimizers and the corresponding marginal function under small perturbations of .

For convenience of discussion, for any given parameter , denote as a minimizer of and as the corresponding marginal function value, and make the following assumptions.

*Assumption 2.7. *For given , the parametric problem is a convex problem; specifically, and are concave functions with respect to the variables, and are convex functions.

*Assumption 2.8. *Suppose that , , are linearly independent.

*Definition 2.9 (see Definition 2.1, [13]). *For a given , is said to be an isolated local minimum of order ( = 1 or 2) of if there exist a real and a neighborhood of such that
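The defining inequality has dropped out here; the standard notion of an isolated local minimum of order $p$ reads, in illustrative notation (the symbols $\varphi$, $\bar x$, $c$, $U$ are placeholders):

```latex
% Isolated local minimum of order p (p = 1 or 2): there exist c > 0 and
% a neighborhood U of \bar{x} such that
\varphi(x, y) \;\ge\; \varphi(\bar{x}, y) + c \,\|x - \bar{x}\|^{p}
\qquad \text{for all } x \in U.
```

For $p = 1$ the objective grows at least linearly away from $\bar x$, which is the growth rate used in Theorem 2.10(1).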

Theorem 2.10. *For a given , suppose Assumptions 2.2–2.8 hold; then the following conclusions hold: *(1)*if with corresponding multiplier is the solution of (2.3), then is the unique first-order isolated minimizer of ;*(2)*every minimizer is locally Lipschitzian with respect to , that is, there exists a such that
where denotes the set of minima of ;*(3)*for every minimizer , the marginal function is also locally Lipschitzian with respect to , and , where
and . As a result,
*

* Proof. *(1) From Assumption 2.7, it is immediate that is a global minimizer of . We only prove that is a first-order isolated minimizer.

If the conclusion does not hold, then there exist a sequence converging to , , and a sequence , , converging to 0 such that
Take ; for simplicity, suppose , with . Let ; then from , and the compactness of , we have
that is,
From Assumption 2.8, we know that . As a result, we have .

From the first equation of (2.3), we know that there exists a such that for any feasible direction , . Hence,
On the other hand, since is a minimizer, we know that , which leads to a contradiction.

(2) From Assumption 2.8 and Theorem 3.1 in [13], the conclusion is direct.

(3) Since is locally Lipschitzian with respect to and , there exist , , and such that for any , , we have
As for , from conclusion (1), there exists a such that . As a result,
Hence, the marginal function is locally Lipschitzian with respect to .

Let ; then . We first prove that is closed, that is, that for any sequence , , , , we have .

From , there exist such that . Without loss of generality, suppose that converges to and converges to . From Proposition 3.3 in [14], we have and , and since is a continuous function, we have . As a result, is a closed set.

From Theorem in [11], for any , there exists such that exists and . In addition, for arbitrary , we have
From the definition of , there exists such that . Hence, we have .

From , , and , we have , that is, for arbitrary and , there exists such that .

If does not hold, then there exists a and . Since is a compact convex set, by the separation theorem ([15]) there exists a such that and for arbitrary , , which leads to a contradiction. As a result, holds. From and , computation formula (2.17) is direct.
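As a concrete illustration of conclusions (2) and (3), consider a toy problem entirely of our own choosing: $f_1 = (x-y)^2$, $f_2 = x^2$ on a fixed interval, whose marginal function works out to $v(y) = y^2/4$ near the origin (the max is minimized at $x = y/2$). A grid search recovers this value and exhibits the local Lipschitz behavior of $v$ numerically:

```python
import numpy as np

def marginal(y, xs):
    """v(y) = min_x max(f1(x, y), f2(x, y)), approximated over a grid xs."""
    vals = np.maximum((xs - y) ** 2, xs ** 2)  # f1 = (x - y)^2, f2 = x^2
    return float(vals.min())

xs = np.linspace(-2.0, 2.0, 40001)  # grid over the (fixed) feasible set

# Closed form for this toy problem: minimizer x* = y/2, v(y) = y^2 / 4,
# so |v'(y)| = |y|/2 <= 0.5 on [0, 1] -- a local Lipschitz constant.
v1 = marginal(1.0, xs)   # should be close to 0.25
v2 = marginal(0.9, xs)   # should be close to 0.2025
```

The difference $|v(1.0) - v(0.9)|$ stays below $0.5 \cdot 0.1$ (plus grid error), in line with the Lipschitz estimate of Theorem 2.10(3).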

#### 3. Discussion

In this paper, sufficient optimality conditions and a sensitivity analysis for a parameterized min-max programming problem are given. A rule for computing the subdifferential of is established. Although the assumptions in this paper are somewhat restrictive compared with some existing work, they hold naturally for some applications. Moreover, the obtained computation formula is simple, which is beneficial for establishing a concise first-order necessary optimality system for (1.1) and then constructing effective algorithms to solve the applications.

#### Acknowledgments

This research was supported by the National Natural Science Foundation of China (no. 11001092) and the Fundamental Research Funds for the Central Universities (no. 2011QC064).

#### References

1. C. Kirjner-Neto and E. Polak, "On the conversion of optimization problems with max-min constraints to standard optimization problems," *SIAM Journal on Optimization*, vol. 8, no. 4, pp. 887–915, 1998.
2. E. Polak and J. O. Royset, "Algorithms for finite and semi-infinite min-max-min problems using adaptive smoothing techniques," *Journal of Optimization Theory and Applications*, vol. 119, no. 3, pp. 421–457, 2003.
3. G.-X. Liu, "A homotopy interior point method for semi-infinite programming problems," *Journal of Global Optimization*, vol. 37, no. 4, pp. 631–646, 2007.
4. O. Stein and G. Still, "Solving semi-infinite optimization problems with interior point techniques," *SIAM Journal on Control and Optimization*, vol. 42, no. 3, pp. 769–788, 2003.
5. O. Stein and A. Tezel, "The semismooth approach for semi-infinite programming under the reduction ansatz," *Journal of Global Optimization*, vol. 41, no. 2, pp. 245–266, 2008.
6. A. Auslender, "Stability in mathematical programming with nondifferentiable data," *SIAM Journal on Control and Optimization*, vol. 22, no. 2, pp. 239–254, 1984.
7. J. F. Bonnans and A. Shapiro, *Perturbation Analysis of Optimization Problems*, Springer, New York, NY, USA, 2000.
8. J. F. Bonnans and A. Shapiro, "Optimization problems with perturbations: a guided tour," *SIAM Review*, vol. 40, no. 2, pp. 228–264, 1998.
9. B. S. Mordukhovich, N. M. Nam, and N. D. Yen, "Subgradients of marginal functions in parametric mathematical programming," *Mathematical Programming B*, vol. 116, no. 1-2, pp. 369–396, 2009.
10. R. W. Chaney, "Second-order sufficient conditions in nonsmooth optimization," *Mathematics of Operations Research*, vol. 13, no. 4, pp. 660–673, 1988.
11. M. M. Mäkelä and P. Neittaanmäki, *Nonsmooth Optimization: Analysis and Algorithms with Application to Optimal Control*, Utopia Press, Singapore, 1992.
12. L. Huang and K. F. Ng, "Second-order optimality conditions for minimizing a max-function," *Science in China A*, vol. 43, no. 7, pp. 722–733, 2000.
13. B. S. Mordukhovich, "Sensitivity analysis in nonsmooth optimization," in *Theoretical Aspects of Industrial Design*, D. A. Field and V. Komkov, Eds., pp. 32–46, SIAM, Philadelphia, Pa, USA, 1992.
14. V. F. Demyanov and A. M. Rubinov, *Constructive Nonsmooth Analysis*, vol. 7, Peter Lang, Frankfurt am Main, Germany, 1995.
15. R. T. Rockafellar, *Convex Analysis*, Princeton University Press, Princeton, NJ, USA, 1970.