Abstract

We discuss and develop convex approximations for distributionally robust joint chance constraints under uncertainty in the first- and second-order moments. The robust chance constraints are approximated by Worst-Case CVaR constraints, which can be reformulated as semidefinite programs; the chance constrained problem can then be solved as a semidefinite program. We also show that this approximation of the robust joint chance constraint has an equivalent individual quadratic approximation form.

1. Introduction

Chance constraints, also called probabilistic constraints in the literature, have a long history in stochastic programming and are the most direct way to treat stochastic data uncertainty. A large class of such problems can be formulated in the following form:
$$\min_{x\in X}\ c^{\top}x \quad \text{s.t.}\quad \mathbb{P}\left\{y_i(x)^{\top}\xi\le 0,\ i=1,\dots,m\right\}\ \ge\ 1-\epsilon, \tag{1}$$
where $\xi\in\mathbb{R}^{k}$ is the uncertain vector, $x\in\mathbb{R}^{n}$ is the decision vector, $X\subseteq\mathbb{R}^{n}$ is a bounded, closed convex set which can be represented by a set of additional deterministic semidefinite constraints, and $c\in\mathbb{R}^{n}$ is the deterministic cost vector. The chance constraint in the above problem requires all the uncertainty-affected constraints to be jointly feasible with probability at least $1-\epsilon$, where $\epsilon\in(0,1)$ is a desired safety factor given by the decision-maker. Problem (1) is classified as an individual chance constrained problem when $m=1$ and as a joint chance constrained problem when $m>1$.

This problem has been considered by Charnes et al. [1], Miller and Wagner [2], and Prékopa [3]. Because the feasible set of problem (1) is typically nonconvex and sometimes even disconnected, and because full and accurate information about the probability distribution is rarely available, the problem did not find wide interest and application in theory and practice for a long time.

One interesting issue in chance constrained programming is to determine distributional conditions under which the problem can be reformulated as a tractable convex program. Alizadeh and Goldfarb [4], Calafiore and El Ghaoui [5], Erdoğan and Iyengar [8], and Zymler et al. [6] showed that the chance constraint can be reformulated as tractable convex and conic constraints under certain exact distributional information. However, the computation of chance constrained problems is intractable in general. Nemirovski and Shapiro [7] pointed out that computing the probability that a weighted sum of uniformly distributed variables is nonpositive is already NP-hard. The intractability of chance constrained problems under exact information has spurred recent interest in robust optimization, in which data uncertainties are modeled by several types of uncertainty sets [9, 10]. Moreover, robust optimization generally requires only partial information on the probability distribution, such as known support and covariance. Zymler et al. [6] gave an exact reformulation of the chance constrained problem in terms of linear matrix inequalities (LMIs), which can be computed by solving a tractable SDP under known first- and second-order moment information. Usually, in practice, one has only limited information about the probability distribution driving the uncertain parameters involved in the decision-making process, so exact moment information cannot be obtained. In this paper, we extend the framework of Zymler et al. [6] to the case of inexactly known moment information. We use the following two constraints, parameterized by $\gamma_1,\gamma_2\ge 0$, which rely on the empirical estimates $\mu_0$ and $\Sigma_0$ of the mean and covariance matrix of the random vector $\xi$, to construct the distributional information:
$$\left(\mathbb{E}_{\mathbb{P}}[\xi]-\mu_0\right)^{\top}\Sigma_0^{-1}\left(\mathbb{E}_{\mathbb{P}}[\xi]-\mu_0\right)\le\gamma_1,\qquad \mathbb{E}_{\mathbb{P}}\left[\left(\xi-\mu_0\right)\left(\xi-\mu_0\right)^{\top}\right]\preceq\gamma_2\,\Sigma_0. \tag{2}$$
The first constraint describes how likely $\xi$ is to be close to $\mu_0$, controlled by $\gamma_1$; at the same time, the parameters $\gamma_1$ and $\gamma_2$ provide a natural means of quantifying one's confidence in $\mu_0$ and $\Sigma_0$. In what follows, we consider the problem under the distributional set
$$\mathcal{P}=\left\{\mathbb{P}\in\mathcal{P}_0:\ \mathbb{P}\ \text{satisfies (2)}\right\},$$
where $\mathcal{P}_0$ is the set of all probability distributions on the measurable space $(\mathbb{R}^{k},\mathcal{B})$ and $\mathcal{B}$ is the Borel $\sigma$-algebra on $\mathbb{R}^{k}$.
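For intuition, the following sketch builds the empirical estimates $\mu_0$ and $\Sigma_0$ from samples and checks the two moment conditions in (2). The data and the values of `gamma1` and `gamma2` are hypothetical; in practice these parameters would be calibrated to one's confidence in the estimates.

```python
import numpy as np

# Hypothetical i.i.d. samples of the random vector xi.
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mean=[0.0, 0.0],
                                  cov=[[1.0, 0.2], [0.2, 1.0]], size=500)

mu0 = samples.mean(axis=0)                # empirical mean estimate
Sigma0 = np.cov(samples, rowvar=False)    # empirical covariance estimate

# gamma1 bounds how far the true mean may drift from mu0 (Mahalanobis);
# gamma2 scales Sigma0 as an upper bound on the centered second moment.
gamma1, gamma2 = 0.1, 1.5                 # assumed confidence parameters

def in_ambiguity_set(mean, centered_second_moment):
    """Check membership of a candidate moment pair in the set defined by (2)."""
    d = mean - mu0
    cond1 = d @ np.linalg.solve(Sigma0, d) <= gamma1
    cond2 = np.all(np.linalg.eigvalsh(gamma2 * Sigma0
                                      - centered_second_moment) >= -1e-9)
    return cond1 and cond2
```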

Now let us consider the following distributionally robust chance constraint:
$$\inf_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\left\{y_i(x)^{\top}\xi\le 0,\ i=1,\dots,m\right\}\ \ge\ 1-\epsilon.$$

It is easy to verify that the feasible set of the above inequality is a subset of the feasible set of problem (1). This yields the following distributionally robust chance constrained program:
$$\min_{x\in X}\ c^{\top}x \quad \text{s.t.}\quad \inf_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\left\{y_i(x)^{\top}\xi\le 0,\ i=1,\dots,m\right\}\ \ge\ 1-\epsilon, \tag{5}$$
which constitutes a conservative approximation for problem (1) in the sense that it has the same objective function but a smaller feasible set.

In this paper, we discuss approximations for distributionally robust joint chance constraints under inexact information on the first- and second-order moments, which extends the framework of Zymler et al. [6] from exactly known moments to inexact ones. We prove that the chance constraint can be approximated by a Worst-Case CVaR constraint that can be represented as a semidefinite program. Then, we show that the distributionally robust joint chance constrained problem has an equivalent quadratic approximation form. The advantage of the new framework lies in the limited information it requires about the distribution and in the tractability of the resulting convex approximation.

2. Distributionally Robust Joint Chance Constraints for LP

Let $\mathbb{P}\left\{y(x)^{\top}\xi\le 0\right\}\ge 1-\epsilon$ be the chance constraint with the decision vector $x$ and the random vector $\xi$. Now, we consider the general robust individual chance constraint
$$\inf_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\left\{y(x)^{\top}\xi\le 0\right\}\ \ge\ 1-\epsilon,$$
whose feasible set is denoted by $X_R$. Shapiro et al. [11] showed that the feasible set of a chance constraint is convex if the probability distribution is $\alpha$-concave and the constraint functions are jointly quasiconcave in both arguments. Unfortunately, the above chance constraint is not necessarily convex in the decision variable $x$.

It is well known that the CVaR method, popularized by Rockafellar and Uryasev [12], gives the tightest convex approximation to the general individual probabilistic constraint (see, e.g., [7, 13]). Using the CVaR method, we obtain a tractable convex approximation of the above individual chance constraint. The corresponding conditional value-at-risk is defined as follows:
$$\mathbb{P}\text{-CVaR}_{\epsilon}\left(L(x,\xi)\right)=\inf_{\beta\in\mathbb{R}}\left\{\beta+\frac{1}{\epsilon}\,\mathbb{E}_{\mathbb{P}}\left[\left(L(x,\xi)-\beta\right)^{+}\right]\right\}, \tag{7}$$
where $L(x,\xi)$ denotes the loss function of the constraint and $(z)^{+}=\max\{z,0\}$.

Next, we show that CVaR can be used to construct a convex approximation for the general chance constraint. By definition (7), we have
$$\mathbb{P}\text{-CVaR}_{\epsilon}\left(L(x,\xi)\right)\ \ge\ \mathbb{P}\text{-VaR}_{\epsilon}\left(L(x,\xi)\right):=\inf\left\{\beta\in\mathbb{R}:\ \mathbb{P}\left\{L(x,\xi)\le\beta\right\}\ge 1-\epsilon\right\}.$$
Then
$$\mathbb{P}\text{-CVaR}_{\epsilon}\left(L(x,\xi)\right)\le 0\ \Longrightarrow\ \mathbb{P}\left\{L(x,\xi)\le 0\right\}\ge 1-\epsilon. \tag{9}$$
Thus, from (9), we obtain
$$\sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\text{-CVaR}_{\epsilon}\left(L(x,\xi)\right)\le 0\ \Longrightarrow\ \inf_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\left\{L(x,\xi)\le 0\right\}\ge 1-\epsilon. \tag{10}$$
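A quick numerical illustration of this implication chain, as a sketch with synthetic losses (for a continuous distribution the infimum in (7) is attained at $\beta=\mathrm{VaR}_{\epsilon}$, which the snippet exploits):

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.normal(-2.5, 1.0, size=100_000)   # synthetic samples of L(x, xi)
eps = 0.05

beta = np.quantile(losses, 1 - eps)            # empirical VaR_eps
cvar = beta + np.mean(np.maximum(losses - beta, 0.0)) / eps

print(f"VaR_eps  = {beta:.3f}")
print(f"CVaR_eps = {cvar:.3f}")                # always CVaR >= VaR
if cvar <= 0:
    # CVaR_eps <= 0 guarantees P(L <= 0) >= 1 - eps, as in (9).
    print("P(L <= 0) =", np.mean(losses <= 0), ">= 1 - eps")
```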

Therefore, the worst-case CVaR constraint on the left-hand side of (10) constitutes a conservative approximation for the distributionally robust chance constraint on the right-hand side. The above discussion leads us to define the feasible set
$$X_{\mathrm{CVaR}}=\left\{x\in X:\ \sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\text{-CVaR}_{\epsilon}\left(L(x,\xi)\right)\le 0\right\}.$$

Theorem 1. The feasible set $X_{\mathrm{CVaR}}$ constitutes a conservative approximation for $X_R$; that is, $X_{\mathrm{CVaR}}\subseteq X_R$.

Lemma 2. For any fixed $x$, let $g(x,\xi)$ be a measurable function that is integrable with respect to every $\mathbb{P}\in\mathcal{P}$. We define the worst-case expectation problem as follows:
$$\theta^{*}=\sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{E}_{\mathbb{P}}\left[g(x,\xi)\right].$$
Consequently, the problem can be rewritten as follows:
$$\begin{aligned}
\theta^{*}=\min_{Q,q,r,t}\quad & r+t\\
\text{s.t.}\quad & r\ \ge\ g(x,\xi)-\xi^{\top}Q\,\xi-q^{\top}\xi\qquad\forall\,\xi\in\mathbb{R}^{k},\\
& t\ \ge\ \left(\gamma_2\Sigma_0+\mu_0\mu_0^{\top}\right)\bullet Q+\mu_0^{\top}q+\sqrt{\gamma_1}\,\left\lVert\Sigma_0^{1/2}\left(q+2Q\mu_0\right)\right\rVert,\\
& Q\succeq 0,
\end{aligned}$$
where $Q\in\mathbb{S}^{k}$, $q\in\mathbb{R}^{k}$, $r,t\in\mathbb{R}$, and $\bullet$ denotes the trace scalar product.

Proof. The worst-case expectation can equivalently be unfolded as
$$\begin{aligned}
\theta^{*}=\sup_{\mu\in\mathcal{M}_{+}}\quad & \int_{\mathbb{R}^{k}} g(x,\xi)\,\mu(d\xi)\\
\text{s.t.}\quad & \int_{\mathbb{R}^{k}}\mu(d\xi)=1,\\
& \left(\int_{\mathbb{R}^{k}}\xi\,\mu(d\xi)-\mu_0\right)^{\top}\Sigma_0^{-1}\left(\int_{\mathbb{R}^{k}}\xi\,\mu(d\xi)-\mu_0\right)\le\gamma_1,\\
& \int_{\mathbb{R}^{k}}\left(\xi-\mu_0\right)\left(\xi-\mu_0\right)^{\top}\mu(d\xi)\preceq\gamma_2\,\Sigma_0,
\end{aligned}$$
where $\mathcal{M}_{+}$ denotes the cone of nonnegative Borel measures on $\mathbb{R}^{k}$. We can formulate the Lagrangian dual problem of the above problem, which takes the form of the minimization problem in the statement of the lemma, where $r$, $q$, and $Q$ are the dual variables associated with the normalization, mean, and second-order moment constraints, respectively. It is obvious that the conditions $\gamma_1>0$ and $\gamma_2>0$ are sufficient to ensure that the Dirac measure $\delta_{\mu_0}$ lies in the relative interior of the feasible set of the above primal problem. This implies that $\theta^{*}$ must be equal to the optimal value of the dual problem. Rewriting the dual in the form of LMIs, we complete the proof by dividing the first inequality into simpler parts.
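As a sanity check, the dual program of Lemma 2 can be coded directly. The sketch below assumes a piecewise linear integrand $g(\xi)=\max_i\left(a_i^{\top}\xi+b_i\right)$, for which the semi-infinite constraint splits into one LMI per affine piece via a Schur-complement argument; the function name and data layout are illustrative, not from the paper.

```python
import numpy as np
import cvxpy as cp

def worst_case_expectation(A, b, mu0, Sigma0, gamma1, gamma2):
    """Value of the dual SDP in Lemma 2 for g(xi) = max_i (A[i] @ xi + b[i]).

    The semi-infinite constraint r >= g(xi) - xi'Q xi - q'xi is imposed
    piecewise, one LMI per affine piece. Assumes Sigma0 is positive definite.
    """
    n_pieces, k = A.shape
    Q = cp.Variable((k, k), PSD=True)
    q = cp.Variable(k)
    r = cp.Variable()
    t = cp.Variable()
    S_half = np.linalg.cholesky(Sigma0)     # Sigma0 = S_half @ S_half.T

    cons = []
    for i in range(n_pieces):
        col = cp.reshape((q - A[i]) / 2, (k, 1))
        cons.append(cp.bmat([[Q, col],
                             [col.T, cp.reshape(r - b[i], (1, 1))]]) >> 0)
    # ||Sigma0^{1/2} v|| = ||S_half.T @ v|| for the Cholesky factor S_half.
    cons.append(t >= cp.trace((gamma2 * Sigma0 + np.outer(mu0, mu0)) @ Q)
                     + mu0 @ q
                     + np.sqrt(gamma1) * cp.norm(S_half.T @ (q + 2 * Q @ mu0)))
    prob = cp.Problem(cp.Minimize(r + t), cons)
    prob.solve()
    return prob.value

# Example call: g(xi) = max(xi_1, xi_2), mu0 = 0, Sigma0 = I.
# worst_case_expectation(np.eye(2), np.zeros(2), np.zeros(2), np.eye(2), 0.1, 1.5)
```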

We define the feasible set of the distributionally robust joint chance constraint as
$$X_J=\left\{x\in X:\ \inf_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\left\{y_i(x)^{\top}\xi\le 0,\ i=1,\dots,m\right\}\ge 1-\epsilon\right\}. \tag{17}$$
A popular approximation for $X_J$ is based on Bonferroni's inequality: the violation probability $\epsilon$ is divided among the $m$ individual chance constraints, which are then handled separately. However, this method can be overly conservative even if $\epsilon$ is divided optimally among the individual chance constraints, and Chen et al. [14] give an example that highlights this shortcoming (see also the numerical sketch below). In order to mitigate the potential overconservatism of the Bonferroni approximation, we propose the following approximation based on a combined constraint: for a vector of strictly positive scaling parameters $\alpha\in A:=\{\alpha\in\mathbb{R}^{m}:\alpha>0\}$,
$$\inf_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\left\{\max_{i=1,\dots,m}\alpha_i\,y_i(x)^{\top}\xi\le 0\right\}\ \ge\ 1-\epsilon. \tag{18}$$
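A small Monte Carlo sketch of Bonferroni's conservatism, using synthetic, positively correlated constraint slacks (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m, eps, n, rho = 3, 0.05, 200_000, 0.9
common = rng.normal(size=(n, 1))                 # shared risk factor
Z = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.normal(size=(n, m))

# Joint requirement: P(max_i Z_i <= t) >= 1 - eps.
t_joint = np.quantile(Z.max(axis=1), 1 - eps)

# Bonferroni: each constraint at level eps/m; the union bound gives eps.
t_bonf = max(np.quantile(Z[:, i], 1 - eps / m) for i in range(m))

print(f"joint threshold      = {t_joint:.3f}")
print(f"Bonferroni threshold = {t_bonf:.3f}  (larger => more conservative)")
```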

With (18), the joint constraint becomes an individual chance constraint, which can be conservatively approximated by a Worst-Case CVaR constraint. We define the approximating set as follows:
$$X_{\mathrm{WC}}=\left\{x\in X:\ \exists\,\alpha\in A\ \text{such that}\ \sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\text{-CVaR}_{\epsilon}\left(\max_{i=1,\dots,m}\alpha_i\,y_i(x)^{\top}\xi\right)\le 0\right\},$$
which is a tight approximation for $X_J$. The following theorem shows that the set $X_{\mathrm{WC}}$ has a tractable reformulation in terms of LMIs and therefore yields a convex approximation for $X_J$.

Theorem 3. The feasible set $X_{\mathrm{WC}}$ can be written as
$$X_{\mathrm{WC}}=\left\{x\in X:\ \begin{aligned}
&\exists\,\alpha\in A,\ \beta,r,t\in\mathbb{R},\ q\in\mathbb{R}^{k},\ Q\in\mathbb{S}^{k}\ \text{such that}\\
&\beta+\frac{1}{\epsilon}\left(r+t\right)\le 0,\qquad Q\succeq 0,\\
&t\ \ge\ \left(\gamma_2\Sigma_0+\mu_0\mu_0^{\top}\right)\bullet Q+\mu_0^{\top}q+\sqrt{\gamma_1}\,\left\lVert\Sigma_0^{1/2}\left(q+2Q\mu_0\right)\right\rVert,\\
&\begin{bmatrix} Q & \frac{1}{2}q\\ \frac{1}{2}q^{\top} & r\end{bmatrix}\succeq 0,\qquad
\begin{bmatrix} Q & \frac{1}{2}\left(q-\alpha_i y_i(x)\right)\\ \frac{1}{2}\left(q-\alpha_i y_i(x)\right)^{\top} & r+\beta\end{bmatrix}\succeq 0,\quad i=1,\dots,m
\end{aligned}\right\}.$$

Proof. We find that the constraint in (18) coincides with the Worst-Case CVaR constraint
$$\sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\text{-CVaR}_{\epsilon}\left(L(x,\xi)\right)\le 0,\qquad \text{where}\ L(x,\xi)=\max_{i=1,\dots,m}\alpha_i\,y_i(x)^{\top}\xi.$$
Actually, we know that the function $(\beta,\mathbb{P})\mapsto\beta+\frac{1}{\epsilon}\mathbb{E}_{\mathbb{P}}\left[\left(L(x,\xi)-\beta\right)^{+}\right]$ is real valued, convex in $\beta$, and linear in $\mathbb{P}$, and $\mathcal{P}$ is weakly compact [15]. Then, interchanging the $\inf$ and $\sup$ operators leads to an equivalent formulation of the Worst-Case CVaR problem:
$$\sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\text{-CVaR}_{\epsilon}\left(L(x,\xi)\right)=\inf_{\beta\in\mathbb{R}}\left\{\beta+\frac{1}{\epsilon}\sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{E}_{\mathbb{P}}\left[\left(L(x,\xi)-\beta\right)^{+}\right]\right\}. \tag{22}$$
First, we deal with the subordinate maximization problem in (22), namely, the worst-case expectation problem
$$\sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{E}_{\mathbb{P}}\left[\left(L(x,\xi)-\beta\right)^{+}\right].$$
For any fixed $x$ and $\beta$, as shown in Lemma 2, this problem can be rewritten as the minimization of $r+t$ subject to $Q\succeq 0$, the bound on $t$, and the semi-infinite constraint
$$r\ \ge\ \left(\max_{i=1,\dots,m}\alpha_i\,y_i(x)^{\top}\xi-\beta\right)^{+}-\xi^{\top}Q\,\xi-q^{\top}\xi\qquad\forall\,\xi\in\mathbb{R}^{k}.$$
The last inequality constraint can be expanded into $m+1$ simpler inequalities, one for the zero piece and one for each affine piece of the max function; these are exactly the LMIs in the statement of the theorem. Substituting the resulting SDP for the subordinate worst-case expectation problem in (22), we obtain the stated representation of $X_{\mathrm{WC}}$. This completes the proof.
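Putting Lemma 2 and the interchange argument together, the Worst-Case CVaR test behind Theorem 3 can be prototyped as below. This is a sketch for fixed $x$ and $\alpha$, so the rows of `Y` collect the products $\alpha_i y_i(x)$; all names and the data layout are illustrative.

```python
import numpy as np
import cvxpy as cp

def worst_case_cvar_joint(Y, eps, mu0, Sigma0, gamma1, gamma2):
    """sup_P CVaR_eps(max_i Y[i] @ xi) over the moment ambiguity set.

    Implements (22): minimize beta + (1/eps) * worst-case E[(L - beta)^+],
    dualizing the inner expectation as in Lemma 2. The pieces of
    (L - beta)^+ are {0} and {Y[i] @ xi - beta}.
    """
    n_rows, k = Y.shape
    Q = cp.Variable((k, k), PSD=True)
    q = cp.Variable(k)
    r, t, beta = cp.Variable(), cp.Variable(), cp.Variable()
    S_half = np.linalg.cholesky(Sigma0)

    zero_col = cp.reshape(q / 2, (k, 1))
    cons = [cp.bmat([[Q, zero_col],
                     [zero_col.T, cp.reshape(r, (1, 1))]]) >> 0]
    for i in range(n_rows):
        col = cp.reshape((q - Y[i]) / 2, (k, 1))
        cons.append(cp.bmat([[Q, col],
                             [col.T, cp.reshape(r + beta, (1, 1))]]) >> 0)
    cons.append(t >= cp.trace((gamma2 * Sigma0 + np.outer(mu0, mu0)) @ Q)
                     + mu0 @ q
                     + np.sqrt(gamma1) * cp.norm(S_half.T @ (q + 2 * Q @ mu0)))
    prob = cp.Problem(cp.Minimize(beta + (r + t) / eps), cons)
    prob.solve()
    return prob.value   # x is feasible for X_WC iff this value is <= 0
```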

Remark 4. When $\gamma_1=0$ and $\gamma_2=1$, we have $\mathbb{E}_{\mathbb{P}}[\xi]=\mu_0$ and $\mathbb{E}_{\mathbb{P}}\left[(\xi-\mu_0)(\xi-\mu_0)^{\top}\right]\preceq\Sigma_0$, which implies that we have the exact first- and second-order moment information for the chance constrained problem. In this case we get an exact representation for $X_{\mathrm{WC}}$:
$$X_{\mathrm{WC}}=\left\{x\in X:\ \exists\,\alpha\in A,\ \beta\in\mathbb{R},\ M\in\mathbb{S}^{k+1}\ \text{s.t.}\ \beta+\frac{1}{\epsilon}\left\langle\Omega,M\right\rangle\le 0,\ M\succeq 0,\ M\succeq\begin{bmatrix} 0 & \frac{1}{2}\alpha_i y_i(x)\\ \frac{1}{2}\alpha_i y_i(x)^{\top} & -\beta\end{bmatrix},\ i=1,\dots,m\right\},$$
where we denote $\Omega=\begin{bmatrix}\Sigma_0+\mu_0\mu_0^{\top} & \mu_0\\ \mu_0^{\top} & 1\end{bmatrix}$. This representation is exactly the one in [6]; then we can obtain the exact optimal value of problem (5) by solving the resulting SDP.
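In the sketch above, the exact-moment case of Remark 4 corresponds to setting `gamma1 = 0` and `gamma2 = 1`. A hypothetical usage, reusing `worst_case_cvar_joint` from the previous block:

```python
import numpy as np

# gamma1 = 0, gamma2 = 1: exact first- and second-order moments, as in [6].
Y = np.array([[1.0, 0.5],
              [0.2, -1.0]])        # illustrative rows alpha_i * y_i(x)
mu0 = np.zeros(2)
Sigma0 = np.eye(2)
val = worst_case_cvar_joint(Y, eps=0.05, mu0=mu0, Sigma0=Sigma0,
                            gamma1=0.0, gamma2=1.0)
print("x feasible for X_WC:", val <= 0)
```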

By now, we can compute the feasible set $X_{\mathrm{WC}}$ by solving a tractable SDP. Many modern methods can solve such a convex problem to any precision in polynomial time, and in practice it can be solved with YALMIP [16].

Consider the robust individual chance constraint (18), which represents the robust joint chance constraint (17). In convex programming, one frequently approximates a convex function by the maximum of piecewise linear functions; here, conversely, we use a quadratic function to approximate the max function in the chance constraint (18). Specifically, with $\zeta=(\xi^{\top},1)^{\top}$, we take a quadratic function $\zeta^{\top}M\zeta$ with $M\in\mathbb{S}^{k+1}$ that satisfies
$$\zeta^{\top}M\zeta\ \ge\ \max_{i=1,\dots,m}\alpha_i\,y_i(x)^{\top}\xi\qquad\forall\,\xi\in\mathbb{R}^{k}, \tag{27}$$
so that the robust quadratic chance constraint
$$\inf_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\left\{\zeta^{\top}M\zeta\le 0\right\}\ \ge\ 1-\epsilon \tag{28}$$
implies (18). For better argumentation, we define
$$X_Q=\left\{x\in X:\ \exists\,\alpha\in A,\ M\in\mathbb{S}^{k+1}\ \text{such that (27) and (28) hold}\right\}.$$

Actually, it is easy to see that the feasible set $X_Q$ constitutes a conservative approximation for $X_J$, which means $X_Q\subseteq X_J$.

Theorem 5. The feasible set $X_Q$ can be written as
$$X_Q=\left\{x\in X:\ \begin{aligned}
&\exists\,\alpha\in A,\ M\in\mathbb{S}^{k+1},\ \beta,r,t\in\mathbb{R},\ q\in\mathbb{R}^{k},\ Q\in\mathbb{S}^{k}\ \text{such that}\\
&\beta+\frac{1}{\epsilon}\left(r+t\right)\le 0,\qquad Q\succeq 0,\\
&t\ \ge\ \left(\gamma_2\Sigma_0+\mu_0\mu_0^{\top}\right)\bullet Q+\mu_0^{\top}q+\sqrt{\gamma_1}\,\left\lVert\Sigma_0^{1/2}\left(q+2Q\mu_0\right)\right\rVert,\\
&\begin{bmatrix} Q & \frac{1}{2}q\\ \frac{1}{2}q^{\top} & r\end{bmatrix}\succeq 0,\qquad
\begin{bmatrix} Q & \frac{1}{2}q\\ \frac{1}{2}q^{\top} & r+\beta\end{bmatrix}\succeq M,\\
&M\ \succeq\ \begin{bmatrix} 0 & \frac{1}{2}\alpha_i y_i(x)\\ \frac{1}{2}\alpha_i y_i(x)^{\top} & 0\end{bmatrix},\quad i=1,\dots,m
\end{aligned}\right\}.$$
Moreover, we find $X_Q=X_{\mathrm{WC}}$.

Proof. By a similar discussion as before, we get that the robust quadratic chance constraint (28) is equivalent to the Worst-Case CVaR constraint
$$\sup_{\mathbb{P}\in\mathcal{P}}\ \mathbb{P}\text{-CVaR}_{\epsilon}\left(\zeta^{\top}M\zeta\right)\le 0.$$
By Lemma 2, the above inequality can be reformulated as the existence of $\beta,r,t\in\mathbb{R}$, $q\in\mathbb{R}^{k}$, and $Q\succeq 0$ with $\beta+\frac{1}{\epsilon}(r+t)\le 0$, the bound on $t$, and
$$\begin{bmatrix} Q & \frac{1}{2}q\\ \frac{1}{2}q^{\top} & r\end{bmatrix}\succeq 0,\qquad \begin{bmatrix} Q & \frac{1}{2}q\\ \frac{1}{2}q^{\top} & r+\beta\end{bmatrix}\succeq M.$$
Note that the constraints in (27) are equivalent to
$$M\ \succeq\ \begin{bmatrix} 0 & \frac{1}{2}\alpha_i y_i(x)\\ \frac{1}{2}\alpha_i y_i(x)^{\top} & 0\end{bmatrix},\qquad i=1,\dots,m.$$
Thus, we get the tractable form of $X_Q$ in Theorem 5. The LMIs in Theorem 5 can be combined into the chain
$$\begin{bmatrix} Q & \frac{1}{2}q\\ \frac{1}{2}q^{\top} & r+\beta\end{bmatrix}\ \succeq\ M\ \succeq\ \begin{bmatrix} 0 & \frac{1}{2}\alpha_i y_i(x)\\ \frac{1}{2}\alpha_i y_i(x)^{\top} & 0\end{bmatrix},\qquad i=1,\dots,m.$$
Finally, eliminating the middle matrix $M$ from this chain recovers exactly the LMIs of Theorem 3, so we get $X_Q=X_{\mathrm{WC}}$.

This theorem shows that approximating a distributionally robust joint chance constraint by a Worst-Case CVaR constraint is equivalent to approximating the max function implied by the joint chance constraint by a quadratic majorant.
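The majorization property (27) can be checked numerically. Below is a small sampling-based sketch (purely illustrative; `M` is constructed by hand so that the LMI of Theorem 5 holds for a single row $y$):

```python
import numpy as np

def majorizes_max(M, Y, xis, tol=1e-9):
    """Sample test of (27): zeta' M zeta >= max_i Y[i] @ xi, zeta = (xi, 1)."""
    for xi in xis:
        zeta = np.append(xi, 1.0)
        if zeta @ M @ zeta < np.max(Y @ xi) - tol:
            return False
    return True

# Single row y: M - [[0, y/2], [y'/2, 0]] = diag(I, 1) >= 0, so (27) holds.
y = np.array([1.0, -0.5])
M = np.block([[np.eye(2), y[:, None] / 2],
              [y[None, :] / 2, np.array([[1.0]])]])
xis = np.random.default_rng(3).normal(size=(1000, 2))
print(majorizes_max(M, y[None, :], xis))   # expected: True
```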

Let us now consider the case where $\xi$ has finite support $\{\xi_1,\dots,\xi_s\}$, which can be presented as the matrix $W=(\xi_1,\dots,\xi_s)$. Let $p_l$ and $p_u$ be two known vectors which give the bounds of the probability measure $p$, with $p_l\le p\le p_u$. The ambiguity distributional set can be defined as
$$\mathcal{P}_f=\left\{p\in\mathbb{R}^{s}:\ p_l\le p\le p_u,\ \mathbf{1}^{\top}p=1\right\}.$$

Considering the minimax problem (22) with the above ambiguity distributional set, we obtain the following result.

Theorem 6. The corresponding feasible set $X_f$ has the following tractable reformulation in terms of linear inequalities:
$$X_f=\left\{x\in X:\ \begin{aligned}
&\exists\,\alpha\in A,\ \beta,\lambda\in\mathbb{R},\ u,v\in\mathbb{R}^{s}_{+}\ \text{such that}\\
&\beta+\frac{1}{\epsilon}\left(\lambda+p_u^{\top}u-p_l^{\top}v\right)\le 0,\\
&\lambda+u_j-v_j\ \ge\ 0,\qquad \lambda+u_j-v_j\ \ge\ \alpha_i\,y_i(x)^{\top}\xi_j-\beta,\quad i=1,\dots,m,\ j=1,\dots,s
\end{aligned}\right\},$$
where $\mathbf{1}$ denotes the vector of ones.

Proof. Consider the subordinate maximization problem in (22). Under $\mathcal{P}_f$, the worst-case expectation problem reads
$$\sup_{p\in\mathcal{P}_f}\ \sum_{j=1}^{s} p_j\left(\max_{i=1,\dots,m}\alpha_i\,y_i(x)^{\top}\xi_j-\beta\right)^{+}.$$
For any fixed $x$ and $\beta$, this is a linear program in $p$, and we can formulate its Lagrangian dual. Actually, the standard LP duality theory guarantees that there is no duality gap between the above problem and its dual problem
$$\min_{\lambda,u,v}\left\{\lambda+p_u^{\top}u-p_l^{\top}v:\ \lambda\mathbf{1}+u-v\ge g,\ u\ge 0,\ v\ge 0\right\},$$
where $g_j=\left(\max_{i}\alpha_i\,y_i(x)^{\top}\xi_j-\beta\right)^{+}$. Expanding the componentwise inequalities on $g$ and substituting the dual value into (22) yields the representation in Theorem 6. This completes the proof.
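For the finite-support case, the inner worst-case expectation is a small LP and can be solved directly. A sketch, with the box bounds `lo`, `hi` and the scenario values `g` as assumed illustrative data:

```python
import numpy as np
import cvxpy as cp

def worst_case_expectation_finite(g_vals, p_lo, p_hi):
    """max_p g' p over {p : p_lo <= p <= p_hi, 1'p = 1}."""
    p = cp.Variable(len(g_vals))
    prob = cp.Problem(cp.Maximize(g_vals @ p),
                      [p >= p_lo, p <= p_hi, cp.sum(p) == 1])
    prob.solve()
    return prob.value

# Scenario values of (max_i alpha_i y_i(x)' xi_j - beta)^+ (illustrative).
g = np.array([0.0, 0.3, 1.2, 0.0])
lo = np.full(4, 0.05)
hi = np.full(4, 0.60)
print(worst_case_expectation_finite(g, lo, hi))
```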

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The author would like to thank Professor Nan-jing Huang for handling the review of this paper and two referees for helpful suggestions and comments. This work was supported by the Fundamental Research Funds for the Central Universities (2014NZYQN49).