Abstract
Sufficient optimality and sensitivity of a parameterized min-max programming problem with a fixed feasible set are analyzed. Based on Clarke's subdifferential and Chaney's second-order directional derivative, sufficient optimality conditions for the parameterized min-max problem are discussed first. Then, under a convexity assumption on the objective function, a subdifferential computation formula for the marginal function is obtained. The assumptions are satisfied naturally in several application problems, and the formulae derived from them are concise and convenient for algorithmic purposes.
1. Introduction
In this paper, sufficient optimality conditions and a sensitivity analysis for a parameterized min-max programming problem are given. The paper is motivated by a local reduction algorithmic strategy for solving the following nonsmooth semi-infinite min-max-min programming problem (P) (see [1, 2], etc., for related applications): With the local reduction technique, (P) can first be rewritten as a bilevel programming problem, where the lower-level problem is the following parameterized min-max programming problem (see [3–5] for references on the local reduction strategy): To make the bilevel strategy applicable to (P), it is essential to discuss the second-order sufficient optimality of and to give a sensitivity analysis of the parameterized minimum and the corresponding marginal function .
Sensitivity analysis of optimization problems is an important topic in operations research and optimization. Based on different assumptions, many results on various kinds of parametric programming have been obtained ([6–9], etc.). Among these, some conclusions on parameterized min-max programming problems like (1.2) have also been given. For example, based on variational analysis, parameterized continuous programming with a fixed constraint set was discussed in [7]; problems like (1.2) can be seen as a special case. Under the inf-compactness condition and the condition that the objective function is concave with respect to the parameter, a directional derivative computation formula for the marginal function of (1.2) can be obtained directly. However, the concavity condition cannot be satisfied for many problems. Recently, a Fréchet subgradient computation formula for marginal functions of nondifferentiable programming in Asplund spaces was given ([9]). By using the Fréchet subgradient formula in [9], a subgradient formula for the marginal function of (1.2) is immediate. But the formula is tedious: if it is used to construct the optimality system of (1.1), the resulting system is so complex that it is difficult to solve.
For computational convenience, the focus of this paper is to establish sufficient optimality conditions and a simple computation formula for the marginal function of (1.2). Based on Clarke's subdifferential and Chaney's second-order directional derivative, sufficient optimality conditions for the parameterized programming problem are given first. Then the Lipschitz continuity of the parameterized isolated minimizer and of the marginal function is discussed; moreover, a subdifferential computation formula for the marginal function is obtained.
2. Main Results
Let in (1.2) be defined as , where and , are twice continuously differentiable functions on , and in (1.2) are twice continuously differentiable functions on . In the following, we first give the sufficient optimality conditions of (1.2) based on Clarke's subdifferential and Chaney's second-order directional derivative, and then carry out a sensitivity analysis of the parameterized problem .
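As an illustration of the setting described in the preceding paragraph, one possible max-type formulation is recorded below; the symbols f, g_i, h_j, x, w, m, p are notation of our own choosing and need not match the paper's original display (1.2). The constraints are written independently of the parameter, consistent with the fixed feasible set mentioned in the abstract:

% illustrative notation only; not the paper's original display
\[
f(x,w)=\max_{1\le i\le m} g_i(x,w),\qquad
\min_{x\in\mathbb{R}^n} f(x,w)\quad \text{s.t.}\quad h_j(x)\le 0,\ \ j=1,\dots,p,
\]

with each g_i and h_j twice continuously differentiable, as assumed above.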
2.1. Sufficient Optimality Conditions of
Definition 2.1 (see [10]). For a given parameter , a point is said to be a local minimum of problem if there exists a neighborhood of such that
Assumption 2.2. For a given parameter , suppose that satisfies the following constraint qualification:
where .
For a given parameter , denote the Lagrange function of as ; then the following holds.
Theorem 2.3. For a given parameter , if is a minimum of and Assumption 2.2 holds, then there exists a such that , where denotes Clarke's subdifferential of . Specifically, the following system holds: where denotes Clarke's subdifferential of with respect to ; it can be computed as , where denotes the operation of taking the convex hull of the elements, .
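For a finite maximum of continuously differentiable functions, Clarke's partial subdifferential with respect to the decision variable has the standard convex-hull representation. In the illustrative notation sketched above (again ours, not necessarily the paper's), it reads

% standard max-function rule, written in illustrative notation
\[
\partial_x f(x,w)=\operatorname{conv}\bigl\{\nabla_x g_i(x,w)\,:\, i\in I(x,w)\bigr\},\qquad
I(x,w)=\bigl\{i\,:\,g_i(x,w)=f(x,w)\bigr\},
\]

so the stationarity condition of Theorem 2.3 requires a convex combination of the active gradients \nabla_x g_i(x,w), together with the multiplier terms coming from the constraint gradients, to vanish.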
Proof. The conclusion is direct from Theorem and Corollary in [11].
Since is a directionally differentiable function (Theorem in [11]), the directional derivative of with respect to in the direction can be computed as follows:
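In the same illustrative notation, the classical formula for the directional derivative of a finite maximum of C^1 functions in the decision variable is

% standard Danskin-type max formula, written in illustrative notation
\[
f'_x(x,w;d)=\max_{i\in I(x,w)} \nabla_x g_i(x,w)^{\mathsf T} d,
\]

with I(x,w) the active index set written out after Theorem 2.3.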
Definition 2.4 (see [10]). Let be a locally Lipschitzian function on and let be a nonzero vector in . Suppose that
define Chaney's lower second-order directional derivative as follows:
where the limit inferior is taken over all triples of sequences , , and for which (a) for each and ; (b) and converges to ; (c) with for each .
Similarly, Chaney's upper second-order directional derivative can be defined as
where the limit superior is taken over all triples of sequences , , and for which (a), (b), and (c) above hold.
For the parameterized max-type function , where is a given parameter, Chaney's lower and upper second-order directional derivatives can be computed as follows.
Proposition 2.5 (see [12]). For any given parameter , Chaney's lower and upper second-order directional derivatives of with respect to exist; moreover, for any given , , one has where , , , and denotes the ball centered at with radius .
Theorem 2.6 (sufficiency theorem). For a given parameter , suppose Assumption 2.2 holds; then there exists such that (2.3) holds. Moreover, if, for any feasible direction of , that is, , one of the following conditions is satisfied: (1) ; (2) , , that is, , and , then is a local minimum of .
Proof. (1) If not, then there exist sequences , , such that
As a result, . If , then . From (2.4), we know that . Hence, for the direction , we have
On the other hand, since satisfies (2.3), we know that there exists a such that
which contradicts (2.12).
(2) From Theorem 4 in [10] and Proposition 2.5, the conclusion is direct.
2.2. Sensitivity Analysis of Parameterized
In the following, we carry out a sensitivity analysis of the parameterized min-max programming problem , that is, we study the variation of the isolated local minimizers and the corresponding marginal function under small perturbations of .
For convenience of discussion, for any given parameter , denote as a minimizer of , as the corresponding marginal function value and make the following assumptions first.
Assumption 2.7. For a given , the parametric problem is a convex problem; specifically, and are concave functions with respect to the variables , and are convex functions.
Assumption 2.8. Let ; suppose that are linearly independent.
Definition 2.9 (see Definition 2.1, [13]). For a given , is said to be an isolated local minimum of order ( = 1 or 2) of if there exist a real number and a neighborhood of such that
Theorem 2.10. For a given , suppose Assumptions 2.2–2.8 hold; then the following conclusions hold: (1) if , with corresponding multiplier , is the solution of (2.3), then is a unique first-order isolated minimizer of ; (2) for any minimum , it is a locally Lipschitzian function with respect to , that is, there exists a such that , where denotes the set of minima of ; (3) for any minimum , the marginal function is also a locally Lipschitzian function with respect to , and , where and . As a result,
Proof. (1) From Assumption 2.7, it follows directly that is a global minimizer of . We only need to prove that is a first-order isolated minimizer.
If the conclusion does not hold, then there exist a sequence converging to , , and a sequence , , converging to 0 such that
Take ; for simplicity, we suppose , with . Let ; then, since and is compact, we have
that is,
From Assumption 2.8, we know that . As a result, we have .
From the first equation of (2.3), we know that there exists a such that for any feasible direction , . Hence,
On the other hand, since is a minimizer, we know that , which leads to a contradiction;
(2) from Assumption 2.8 and Theorem 3.1 in [13], the conclusion is direct;
(3) since is a locally Lipschitzian function with respect to and , there exist , , and such that for any , , we have
As to , from the conclusion in (1.2), there exists a such that . As a result,
Hence, the marginal function is a locally Lipschitzian function with respect to .
Let ; then . We first prove that is closed, that is, we prove that for any sequence , , , , we have .
From , there exist and such that . Without loss of generality, suppose that converges to and converges to . From Proposition 3.3 in [14], we have and ; since is a continuous function, we have . As a result, is a closed set.
From Theorem in [11], for any , there exists such that exists and . In addition, for arbitrary , we have
From the definition of , such that . Hence, we have .
From , and , we have , that is, for arbitrary and , there exists such that .
If does not hold, then there exist and . Since is a compact convex set, by the separation theorem ([15]) there exists a such that and for arbitrary , , which leads to a contradiction. As a result, holds. From and , the computation formula (2.17) follows directly.
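Conclusion (3) of Theorem 2.10 is what makes the marginal function cheap to differentiate once a lower-level minimizer and its multipliers are at hand. The short sketch below illustrates the idea numerically on a toy example of our own; the functions g1, g2, the parameter value, and the solver choices are assumptions made purely for illustration and are not taken from the paper. It minimizes the maximum of two smooth pieces over a scalar variable, recovers the convex weights on the (both active) pieces, and compares the multiplier-based derivative of the marginal function with a central finite-difference slope.

# Illustrative sketch only (not the paper's data): v(w) = min_x max{g1(x,w), g2(x,w)}
# and its derivative via the multiplier formula lam1*dg1/dw + lam2*dg2/dw.
from scipy.optimize import minimize_scalar

def g1(x, w):            # first smooth piece (hypothetical)
    return (x - w) ** 2

def g2(x, w):            # second smooth piece (hypothetical)
    return (x + w) ** 2 + 0.5

def v(w):
    """Marginal function value and lower-level minimizer."""
    res = minimize_scalar(lambda x: max(g1(x, w), g2(x, w)),
                          bounds=(-5.0, 5.0), method="bounded",
                          options={"xatol": 1e-10})
    return res.fun, res.x

def multiplier_derivative(w):
    """Derivative of v from the multipliers; both pieces are active at x(w) here."""
    _, x = v(w)
    g1_x, g2_x = 2.0 * (x - w), 2.0 * (x + w)     # d/dx of each piece
    lam1 = g2_x / (g2_x - g1_x)                   # solves lam1*g1_x + lam2*g2_x = 0
    lam2 = 1.0 - lam1                             # with lam1 + lam2 = 1
    g1_w, g2_w = -2.0 * (x - w), 2.0 * (x + w)    # d/dw of each piece
    return lam1 * g1_w + lam2 * g2_w

w0, h = 1.0, 1e-3
fd = (v(w0 + h)[0] - v(w0 - h)[0]) / (2.0 * h)
print("finite-difference slope :", fd)                        # approx 1.96875
print("multiplier-based value  :", multiplier_derivative(w0))

On this toy example the two printed values agree to several digits, which is consistent with the Lipschitz continuity and subdifferential conclusions of Theorem 2.10.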
3. Discussion
In this paper, sufficient optimality conditions and a sensitivity analysis for a parameterized min-max programming problem are given. A rule for computing the subdifferential of is established. Though the assumptions in this paper are somewhat restrictive compared with some existing work, they hold naturally for some applications. Moreover, the obtained computation formula is simple, which is beneficial for establishing a concise first-order necessary optimality system for (1.1) and then constructing effective algorithms to solve the applications.
Acknowledgments
This research was supported by the National Natural Science Foundation of China no. 11001092 and the Fundamental Research Funds for the Central Universities no. 2011QC064.