Abstract

We investigate the stochastic linear complementarity problem whose data are affinely affected by uncertain parameters. Assuming that only limited information about the uncertain parameters is available, such as the first two moments, or the first two moments together with the support of the distribution, we formulate the stochastic linear complementarity problem as a distributionally robust optimization reformulation that minimizes the worst case of an expected complementarity measure subject to nonnegativity constraints and a distributionally robust joint chance constraint, which requires that the probability of the linear mapping being nonnegative is not less than a given probability level. Applying cone duality theory and the S-procedure, we show that the distributionally robust counterpart of the uncertain complementarity problem can be conservatively approximated by an optimization problem with bilinear matrix inequalities. Preliminary numerical results show that the solutions obtained by our method are desirable.

1. Introduction

The stochastic complementarity problem (SCP) is to find a vector $x \in \mathbb{R}^n$ such that
$$x \ge 0, \qquad F(x,\xi) \ge 0, \qquad x^{\mathsf T} F(x,\xi) = 0, \qquad \forall\, \xi \in \mathcal{U}, \qquad (1)$$
where $F(\cdot,\xi)$ is a vector-valued function affected by the uncertain data $\xi$ and $\mathcal{U}$ is the uncertainty set (or the support set when $\xi$ is viewed as a random vector). If $\mathcal{U}$ is a singleton, then problem (1) becomes the deterministic complementarity problem, which has been well studied during the past two decades owing to its wide range of practical applications in engineering and economic science, control theory, operations research, and game theory [1–4].
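As a concrete illustration, using the affine form of $F$ introduced later in this section, when the uncertainty set reduces to a singleton $\mathcal{U} = \{\bar\xi\}$, problem (1) is exactly the deterministic linear complementarity problem
$$x \ge 0, \qquad M(\bar\xi)\,x + q(\bar\xi) \ge 0, \qquad x^{\mathsf T}\bigl(M(\bar\xi)\,x + q(\bar\xi)\bigr) = 0,$$
that is, $\mathrm{LCP}\bigl(M(\bar\xi),\, q(\bar\xi)\bigr)$ in the usual notation.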

In the stochastic programming approach, the uncertain data $\xi$ are assumed to be random and, in the simplest case, to obey a probability distribution known in advance; more generally, the expected value of $F(x,\xi)$ can be estimated from the corresponding function values over a sample of the random data. In this way, one seeks deterministic approximate reformulations of the SCP (1). Typical deterministic reformulations in the SCP literature include the expected value method, the expected residual minimization (ERM) method, the stochastic mathematical program with equilibrium constraints reformulation, and the CVaR minimization reformulation (see [5–8] for details). Because the sample average approximation method is used to solve these deterministic reformulations, the resulting reformulations depend on the sample of the random data, which is itself random. In the robust optimization approach, the uncertain data are assumed to lie in tractable convex sets called uncertainty sets. In order to establish a reformulation that is immunized against the data uncertainty, [9] reformulates the SCP as a system of robust inequalities that serves as a relaxation of the robust counterpart of the SCP.

In contrast to the stochastic programming and classical robust optimization approaches, distributionally robust optimization (see [10–13] for references) seeks the best solution with respect to the worst case of a problem whose uncertain data follow a distribution that is only known to belong to a family of probability distributions. It captures both a moderate risk attitude of the decision maker (through the use of partial information about the distribution of the uncertain data rather than only the ranges of the uncertain data) and an aversion towards uncertainty (through the consideration of the worst probability distribution within a family of distributions consistent with the partial information). It has been argued that most decision makers have a low tolerance towards uncertainty in the distribution [14, 15]. It is therefore rational to make decisions with respect to the worst probability distribution that is deemed possible under the available information and thereby to obtain a model with out-of-sample guarantees. In this paper, viewing the uncertain data $\xi$ as a random vector and assuming that only partial information about the distribution $P$ of $\xi$ is known, we use the distributionally robust optimization approach to recast the SCP (1) as the distributionally robust optimization reformulation (DROR)
$$\min_{x \ge 0}\ \sup_{P \in \mathcal{P}} \mathbb{E}_P\bigl[f(x,\xi)\bigr] \quad \text{s.t.} \quad \inf_{P \in \mathcal{P}} P\bigl\{F(x,\xi) \ge 0\bigr\} \ge 1 - \varepsilon, \qquad (2)$$
where $f(x,\xi)$ is a measure of the complementarity in terms of a suitable norm and $\varepsilon \in (0,1)$ is the risk factor. The distribution $P$ is assumed to belong to a family $\mathcal{P}$ of possible distributions. We consider two cases of information about $\mathcal{P}$: the basic case in which only the first two moments of $\xi$ are given, so that $\mathcal{P}$ consists of all nonnegative Borel measures on $\mathbb{R}^k$ with total mass one and the prescribed mean and covariance, and the case in which, in addition, the support set $\mathcal{U}$ is known and is assumed to be one of the following convex sets: a box, a polyhedron, an ellipsoid, or an intersection of the sets mentioned above. In the rest of the paper, we assume that $F(x,\xi)$ depends affinely on $x$, that is,
$$F(x,\xi) = M(\xi)\,x + q(\xi),$$
and that the coefficients $M(\xi)$ and $q(\xi)$ are affinely affected by the random parameters $\xi_1,\dots,\xi_k$, that is,
$$M(\xi) = M^0 + \sum_{i=1}^{k} \xi_i M^i, \qquad q(\xi) = q^0 + \sum_{i=1}^{k} \xi_i q^i,$$
where the $\xi_i$ are scalars.
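For concreteness, the moment-based ambiguity set can be written as follows; the symbols $\mu$, $\Omega$, and $\mathcal{M}_+$ used here are our own notational choices for the mean, the covariance matrix, and the cone of nonnegative Borel measures, respectively:
$$\mathcal{P} \;=\; \Bigl\{\, P \in \mathcal{M}_+ \;:\; \int_{\mathcal{U}} \mathrm{d}P(\xi) = 1,\;\; \int_{\mathcal{U}} \xi\, \mathrm{d}P(\xi) = \mu,\;\; \int_{\mathcal{U}} \xi\xi^{\mathsf T}\, \mathrm{d}P(\xi) = \Omega + \mu\mu^{\mathsf T} \,\Bigr\},$$
with $\mathcal{U} = \mathbb{R}^k$ when no support information is available (Section 2) and $\mathcal{U}$ equal to one of the convex sets listed above otherwise (Section 3).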

The motivation for this reformulation comes from the observation that all reformulations of the SCP endeavor to trade off feasibility against complementarity when both cannot be satisfied simultaneously. We minimize the worst case of the complementarity measure while imposing the worst-case joint chance constraint, which requires that the nonnegativity of the linear mapping holds with probability not less than the prescribed level $1-\varepsilon$. It is worth mentioning that the DROR (2) contains two semi-infinite programs in the variable $P$ ranging over probability measures. Hence, the major difficulty in solving (2) lies in transforming it into a tractable problem. By investigating the dual of the inner problem in the objective and the structure of the feasible set of the worst-case joint chance constraint, we will show that the DROR can be conservatively approximated by an optimization problem with bilinear matrix inequalities (BMIs).

The rest of the paper is organized as follows. In Sections 2 and 3, we study the conservative approximations of the DROR without and with the information of the support, respectively. Numerical results on a simple stochastic complementarity problem using the DROR are reported in Section 4.

Notation. Denote by $\Omega$ the covariance matrix of $\xi$, and rewrite $F(x,\xi)$ as a mapping that is affine in $\xi$ with coefficients affine in $x$. For the case with support information, we associate with the support set $\mathcal{U}$ a linear mapping whose four instances correspond to the box, polyhedral, ellipsoidal, and intersection supports, respectively.
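Under the affine structure assumed above, the rewriting referred to in the notation can be spelled out as follows; the symbols $A(x)$ and $b(x)$ are our own labels for the coefficient matrix and the constant vector, both of which are affine in $x$:
$$F(x,\xi) \;=\; M(\xi)\,x + q(\xi) \;=\; b(x) + A(x)\,\xi, \qquad b(x) = M^0 x + q^0, \quad A(x) = \bigl[\,M^1 x + q^1,\;\dots,\;M^k x + q^k\,\bigr].$$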

2. The Conservative Approximation for the DROR without the Support Information

In this section, we assume that no information about the support is available. Noticing that the objective function and the constraints contain two semi-infinite programs in the variable $P$, we first reformulate the inner moment problem in its dual form and then use the fact that min-min operations can be performed jointly to approximate the DROR (2) as an optimization problem with BMIs.

We investigate these semi-infinite programs for every fixed decision variable $x$ by means of cone duality theory and the S-procedure, and we show that the DROR can be approximated conservatively by an optimization problem with BMIs. First, we recall the S-procedure, a generalization of the well-known S-lemma [16], which plays a crucial role in the proofs of this paper.

Lemma 1 (S-procedure). Let $g_0, g_1, \dots, g_m$ be quadratic functions of $\xi \in \mathbb{R}^k$. Then the implication
$$g_1(\xi) \ge 0,\ \dots,\ g_m(\xi) \ge 0 \ \Longrightarrow\ g_0(\xi) \ge 0$$
holds if there exist $\tau_1, \dots, \tau_m \ge 0$ such that
$$g_0(\xi) - \sum_{i=1}^{m} \tau_i\, g_i(\xi) \ge 0 \qquad \forall\, \xi \in \mathbb{R}^k.$$
If $m = 1$, the converse also holds as long as there exists $\bar\xi$ with $g_1(\bar\xi) > 0$.
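A standard textbook illustration (not taken from the paper) of the case $m=1$ is the following: for scalars $a$, $b$, $c$, the implication $\xi^2 \le 1 \Rightarrow a\xi^2 + b\xi + c \ge 0$ holds if and only if
$$\exists\, \tau \ge 0: \quad \begin{pmatrix} a+\tau & b/2 \\ b/2 & c-\tau \end{pmatrix} \succeq 0,$$
where the equivalence follows from the converse direction of Lemma 1, since the constraint $1-\xi^2 \ge 0$ is satisfied strictly at $\bar\xi = 0$.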

Another useful lemma is a result derived from cone duality theory (see [17]).

Lemma 2. Let $g(x,\xi)$ be a (possibly nonlinear) mapping of $\xi$ for each $x$, and let $\mathcal{P}$ be the moment-based family of distributions supported on $\mathcal{U}$. Then, for each fixed $x$, the worst-case expectation $\sup_{P \in \mathcal{P}} \mathbb{E}_P[g(x,\xi)]$ is equivalent to the following semi-infinite program:
$$\min_{Q,\, q,\, q_0}\ q_0 + \mu^{\mathsf T} q + \operatorname{tr}\bigl((\Omega + \mu\mu^{\mathsf T})\,Q\bigr) \quad \text{s.t.} \quad q_0 + q^{\mathsf T}\xi + \xi^{\mathsf T} Q\,\xi \ \ge\ g(x,\xi) \qquad \forall\, \xi \in \mathcal{U},$$
where $\operatorname{tr}(\cdot)$ represents the trace of a matrix.

Remark 3. We mention that the worst-case expectation problem in Lemma 2 is a moment problem and that the strong duality condition holds under the assumptions of Lemma 2. See Lemma 1 in [12] and Lemma A.1 in [13] for similar results.
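The direction of the duality that drives all later bounds is elementary and worth recording; the following display, written with the dual variables $Q$, $q$, $q_0$ of Lemma 2 as notated above, shows that every dual-feasible point yields an upper bound on the worst-case expectation:
$$\mathbb{E}_P\bigl[g(x,\xi)\bigr] \;\le\; \mathbb{E}_P\bigl[q_0 + q^{\mathsf T}\xi + \xi^{\mathsf T} Q\,\xi\bigr] \;=\; q_0 + \mu^{\mathsf T} q + \operatorname{tr}\bigl((\Omega + \mu\mu^{\mathsf T})\,Q\bigr) \qquad \text{for every } P \in \mathcal{P}.$$
Hence the dual optimal value always dominates $\sup_{P \in \mathcal{P}} \mathbb{E}_P[g(x,\xi)]$; Remark 3 states that, under the moment assumptions, the two values in fact coincide.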

2.1. The Equivalent Expression of the Worst Case of the Objective

Now we study the inner problem in the objective of the DROR without the support information.

Lemma 4. Suppose that $F(x,\xi)$ is affine in $x$ and is affinely affected by $\xi$, which is supported on $\mathbb{R}^k$. Then, for each fixed $x$, the problem (11) is equivalent to an optimization problem with a linear matrix inequality constraint, namely problem (12), where the function involved is quadratic in $\xi$.

Proof. By Lemma 2 and the strong duality noted in Remark 3, the problem (11) is equivalent to its dual problem. Noting that the function bounded in the semi-infinite inequality constraint of the dual is quadratic in $\xi$, this constraint can be equivalently written as the nonnegativity of a quadratic form in $\xi$ over $\mathbb{R}^k$, that is, as a positive semidefiniteness condition on the associated coefficient matrix. Then, using the Schur complement lemma, we can show that this condition is equivalent to the constraint in the problem (12), which completes the proof.
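The Schur complement step invoked at the end of the proof is the standard one; we record it here for completeness, in the form that suffices under the assumption that the relevant block is positive definite (the block labels $A$, $B$, $C$ are our own):
$$\text{if } C \succ 0, \quad \text{then} \quad \begin{pmatrix} A & B \\ B^{\mathsf T} & C \end{pmatrix} \succeq 0 \quad \Longleftrightarrow \quad A - B\,C^{-1}B^{\mathsf T} \succeq 0.$$
This is the device that converts the quadratic (in the dual variables) positive semidefiniteness condition into the linear matrix inequality appearing in (12).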

2.2. The Conservative Approximation of the Distributionally Robust Joint Chance Constraint

Denote the feasible set of the chance constraint in (2) by $X$. We now describe the structure of $X$ by a conservative approximation when no information about the support is available. Since the chance constraint in (2) is a distributionally robust joint chance constraint, it is difficult to describe the structure of its feasible set exactly. We therefore establish a conservative approximation in terms of a system of BMIs for the distributionally robust joint chance constraint when $F(x,\xi)$ depends affinely on $\xi$. To this end, we introduce an auxiliary set, denoted $\mathcal{B}$ in what follows. Employing Lemmas 1 and 2, we show that the structure of $\mathcal{B}$ can be equivalently described by a system of BMIs, as stated in Lemma 5.
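The bridge from the joint chance constraint to the worst-case expectation framework of Lemma 2 can be made explicit through the following elementary identity, which holds for any family of distributions $\mathcal{P}$ and any risk level $\varepsilon \in (0,1)$:
$$\inf_{P \in \mathcal{P}} P\bigl\{M(\xi)x + q(\xi) \ge 0\bigr\} \ge 1-\varepsilon \quad \Longleftrightarrow \quad \sup_{P \in \mathcal{P}} \mathbb{E}_P\Bigl[\mathbf{1}\bigl\{\exists\, i:\ (M(\xi)x + q(\xi))_i < 0\bigr\}\Bigr] \le \varepsilon.$$
The right-hand side is a worst-case expectation of the indicator of the violation event, to which Lemma 2 and the S-procedure can then be applied.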

Lemma 5. Suppose that $F(x,\xi)$ is affine in $x$ and is affinely affected by $\xi$, which is supported on $\mathbb{R}^k$, and suppose that, for each fixed $x$, there exists a point at which the relevant quadratic constraint is satisfied strictly. Then $\mathcal{B} \subseteq X$, and $\mathcal{B}$ can be equivalently expressed as the system of BMIs (17), which involves a fixed constant matrix.

Proof. It is clear from the definition of $\mathcal{B}$ that $\mathcal{B} \subseteq X$, so it suffices to show that $\mathcal{B}$ can be equivalently expressed as (17). Membership in $\mathcal{B}$ can be reexpressed as a worst-case expectation bound; then, using Lemma 2, this bound means that the corresponding dual variables satisfy a semi-infinite constraint. Note that this semi-infinite constraint is equivalent to (22), and that (22) can be expressed as a system of implications. These implications can in turn be reexpressed as implications between quadratic functions of $\xi$. Since $F(x,\xi)$ is affine in $\xi$ and in view of (21), Lemma 1 and the assumptions of this lemma imply that these implications are equivalent to the matrix inequalities (25), and hence we obtain (26). To complete the proof, it suffices to show that the degenerate multiplier value is impossible in (26); otherwise, if such a multiplier existed in (25), taking the trace of both sides of the BMIs above, multiplied by an appropriate matrix, would yield a contradiction.

2.3. The Conservative Approximation for the DROR

According to Lemma 4, the DROR (2) can be expressed equivalently as a problem whose objective is given by the linear matrix inequality reformulation (12). Notice that the set $\mathcal{B}$ is contained in the feasible set $X$, as stated in Lemma 5. So, by replacing $X$ by $\mathcal{B}$ in the problem above, we obtain a problem whose optimal value is not less than that of (2). Thus, putting Lemmas 4 and 5 together, we obtain a conservative approximation of the DROR (2) in terms of an optimization problem with BMIs, as stated in Theorem 6.

Theorem 6. Suppose that $F(x,\xi)$ is affine in $x$ and is also affine with respect to $\xi$, which is supported on $\mathbb{R}^k$, and suppose that the strict feasibility assumption of Lemma 5 holds for each fixed $x$. Then the DROR (2) can be conservatively approximated by an optimization problem with BMIs that combines the reformulated objective (12) with the constraint system (17).
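To fix ideas, under the notational assumptions made above (our symbols $Q$, $q$, $q_0$ for the dual variables of Lemma 2), the conservative approximation has the following overall shape, with the bilinearity typically arising from products between S-procedure multipliers and the decision-dependent data:
$$\min_{x \ge 0,\; Q,\; q,\; q_0,\; \text{multipliers}} \; q_0 + \mu^{\mathsf T} q + \operatorname{tr}\bigl((\Omega + \mu\mu^{\mathsf T})\,Q\bigr) \quad \text{s.t.} \quad \text{the LMI of (12)}, \;\; \text{the BMIs of (17)}.$$
The min-min structure noted in Section 2 allows the outer minimization over $x$ and the inner minimization over the dual variables to be performed jointly.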

3. The Conservative Approximation for the DROR with the Support Information

When information about the support is also available, we use an argument similar to the proof of Theorem 6 to establish a conservative approximation for this case as well. It is worth mentioning that there is a slight difference between the proof of Theorem 6 and that of Theorem 7: the proof of Theorem 6 uses the equivalence form of the S-procedure, whereas the proof of Theorem 7 uses only the implication (sufficiency) form, owing to the different assumptions on the support.

Theorem 7. Suppose that $F(x,\xi)$ is affine in $x$ and is also affine with respect to the uncertain data $\xi$, which is supported on the box, polyhedral, ellipsoidal, or intersection support set described in Section 1, respectively. Then, in each of these four cases, the DROR (2) can be conservatively approximated by an optimization problem with BMIs in which the support enters through the corresponding linear mapping defined in the Notation paragraph.

The proof for each of the four cases is similar to that of Lemmas 4 and 5 and is omitted here.

4. Numerical Experiments

In this section, we report preliminary numerical results for the proposed method on an example of a linear complementarity problem with a random parameter taken from [8].

Example 1. Suppose that we have no information about the support and that the problem data are taken from [8], where the random parameter has given mean and covariance. This stochastic linear complementarity problem has a unique solution for certain values of the parameter.

We solve the problem by the DROR, employing the PENLAB solver (see [18]) for nonlinear semidefinite programs, under two settings of the mean and covariance: a first case in which the SCP has a common solution for all realizations of the parameter and a second case in which it does not. For each case we run the codes with three values of the probability level. To compare the behavior of the DROR with a classical reformulation of the stochastic complementarity problem, we also run the ERM method, employing the MATLAB solver fmincon, with samples drawn from four specific distributions. The initial value for each run (for both methods, all probability levels, and both cases) is set to rand of appropriate dimension. We evaluate the obtained solutions on uniform and normal samples matching the given moments of each case, using the indices compl, feas, and pr, where compl measures the complementarity, feas measures the violation of nonnegativity, and pr measures the probability (over the samples) that nonnegativity holds. The comparison results are listed in Table 1.
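To make the evaluation procedure concrete, the following MATLAB-style sketch shows one way the three indices could be computed by sampling. It is an illustrative sketch only: the function name evaluate_solution, the assumption of a scalar random parameter, and the exact formulas for compl, feas, and pr are our own choices consistent with the description above, not the code used for the paper's experiments.

% Illustrative sketch (not the authors' code): Monte Carlo evaluation of a
% candidate solution x.  Assumes a scalar random parameter xi and the affine
% data F(x,xi) = (M0 + xi*M1)*x + (q0 + xi*q1).
function [compl, feas, pr] = evaluate_solution(x, M0, M1, q0, q1, xi_samples)
    N = numel(xi_samples);
    compl_vals = zeros(N, 1);
    feas_vals  = zeros(N, 1);
    nonneg     = false(N, 1);
    for s = 1:N
        xi = xi_samples(s);
        F  = (M0 + xi * M1) * x + (q0 + xi * q1);   % linear mapping F(x,xi)
        compl_vals(s) = norm(min(x, F));            % complementarity residual
        feas_vals(s)  = norm(min(F, 0));            % magnitude of violation of F >= 0
        nonneg(s)     = all(F >= 0);                % does nonnegativity hold at this sample?
    end
    compl = mean(compl_vals);   % average complementarity residual
    feas  = mean(feas_vals);    % average nonnegativity violation
    pr    = mean(nonneg);       % empirical probability that F(x,xi) >= 0
end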

From Table 1, we see that both the DROR and the ERM method obtain a good approximate solution when the SCP has a common solution for all realizations of the parameter, as in the first case. When the SCP has no common solution for all realizations, the DROR obtains a solution that guarantees the nonnegativity with a higher probability and a lower feas value, whereas the ERM method prefers a solution with a lower compl value, as in the second case. Moreover, the quality of the solution obtained by the DROR is independent of the specific distribution and relies only on the first two moments, whereas the quality of the solution obtained by the ERM method depends strongly on the specific distribution, as shown by the data for the different sample distributions.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The research was supported by the National Natural Science Foundation of China (11171051, 91230103).