Abstract

We consider the expected residual minimization method for a class of stochastic quasivariational inequality problems (SQVIP). The regularized gap function for a quasivariational inequality problem (QVIP) is in general not differentiable. We first show that the regularized gap function is differentiable and convex for a class of QVIPs under some suitable conditions. Then, we reformulate SQVIP as a deterministic minimization problem that minimizes the expected residual of the regularized gap function and solve it by the sample average approximation (SAA) method. Finally, we investigate the limiting behavior of the optimal solutions and stationary points.

1. Introduction

The quasivariational inequality problem is an important and powerful tool for the study of generalized equilibrium problems. It has been used to study and formulate generalized Nash equilibrium problems in which the strategy set of each player depends on the other players' strategies (see [13] for more details).

QVIP is to find a vector $x^* \in K(x^*)$ such that
$$\langle F(x^*), y - x^* \rangle \ge 0, \quad \forall y \in K(x^*), \tag{1.1}$$
where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a mapping, the symbol $\langle \cdot, \cdot \rangle$ denotes the inner product in $\mathbb{R}^n$, and $K: \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ is a set-valued mapping of which $K(x)$ is a closed convex set in $\mathbb{R}^n$ for each $x$. In particular, if $K$ is a closed convex set and $K(x) \equiv K$ for each $x$, then QVIP (1.1) becomes the classical variational inequality problem (VIP): find a vector $x^* \in K$ such that
$$\langle F(x^*), y - x^* \rangle \ge 0, \quad \forall y \in K. \tag{1.2}$$

In most important practical applications, the function $F$ involves some random factors or uncertainties. Let $(\Omega, \mathcal{F}, P)$ be a probability space. Taking the randomness into account, we get the stochastic quasivariational inequality problem (SQVIP): find an $x^* \in K(x^*)$ such that
$$\langle F(x^*, \omega), y - x^* \rangle \ge 0, \quad \forall y \in K(x^*), \ \text{a.s.}, \tag{1.3}$$
or equivalently,
$$P\big\{\omega \in \Omega : \langle F(x^*, \omega), y - x^* \rangle \ge 0, \ \forall y \in K(x^*)\big\} = 1, \tag{1.4}$$
where $F: \mathbb{R}^n \times \Omega \to \mathbb{R}^n$ is a mapping and “a.s.” is the abbreviation for “almost surely” under the given probability measure $P$.

Due to the introduction of randomness, SQVIP (1.4) becomes more practical and has attracted more and more attention in the recent literature [4–16]. However, to the best of our knowledge, most publications in the existing literature discuss stochastic complementarity problems and stochastic variational inequality problems, which are two special cases of (1.4). It is well known that quasivariational inequalities are more complicated than variational inequalities and complementarity problems and that they have wide applications. Therefore, it is meaningful and interesting to study the general problem (1.4).

Because of the existence of the random element $\omega$, we cannot generally find a vector $x^*$ such that (1.4) holds. That is, (1.4) is not well defined if we think of solving it before knowing the realization of $\omega$. Therefore, in order to obtain a reasonable solution, an appropriate deterministic reformulation for SQVIP becomes an important issue in the study of the considered problem.

Recently, one of the mainstream research approaches to the stochastic variational inequality problem is the expected residual minimization (ERM) method (see [4, 5, 7, 11–13, 16] and the references therein). Chen and Fukushima [5] formulated the stochastic linear complementarity problem (SLCP) as a minimization problem which minimizes the expectation of a gap function (also called a residual function) for SLCP. They regarded the optimal solution of this minimization problem as a solution to SLCP. Following the ideas of Chen and Fukushima [5], Zhang and Chen [16] considered stochastic nonlinear complementarity problems. Luo and Lin [12, 13] generalized the expected residual minimization method to solve the stochastic variational inequality problem.

In this paper, we focus on the ERM method for SQVIP. We first show that the regularized gap function for QVIP is differentiable and convex under some suitable conditions. Then, we formulate SQVIP (1.4) as an optimization problem and solve this problem by the SAA method.

The rest of this paper is organized as follows. In Section 2, some preliminaries and the reformulation for SQVIP are given. In Section 3, we give some suitable conditions under which the regularized gap function for QVIP is differentiable and convex. In Section 4, we show that the objective function of the reformulation problem is convex and differentiable under some suitable conditions. Finally, the convergence results of optimal solutions and stationary points are given in Section 5.

2. Preliminaries

Throughout this paper, we use the following notations. $\|\cdot\|$ denotes the Euclidean norm of a vector. For an $n \times n$ symmetric positive definite matrix $G$, $\|x\|_G$ denotes the $G$-norm defined by $\|x\|_G = \sqrt{\langle x, Gx \rangle}$ for $x \in \mathbb{R}^n$, and $\Pi_{K,G}(x)$ denotes the projection of the point $x$ onto the closed convex set $K$ with respect to the norm $\|\cdot\|_G$. For a mapping $F: \mathbb{R}^n \to \mathbb{R}^n$, $\nabla F(x)$ denotes the usual gradient of $F$ at $x$. It is easy to verify that
$$\sqrt{\lambda_{\min}(G)}\,\|x\| \le \|x\|_G \le \sqrt{\lambda_{\max}(G)}\,\|x\|, \quad \forall x \in \mathbb{R}^n, \tag{2.1}$$
where $\lambda_{\min}(G)$ and $\lambda_{\max}(G)$ are the smallest and largest eigenvalues of $G$, respectively.
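As a concrete illustration of this notation, the following Python sketch (our own, not from the original text: it assumes a box-shaped set $K$ and solves the underlying quadratic program with SciPy) evaluates the $G$-norm and the projection $\Pi_{K,G}$, and checks the eigenvalue bounds (2.1) numerically:

```python
import numpy as np
from scipy.optimize import minimize

def g_norm(x, G):
    """G-norm ||x||_G = sqrt(<x, Gx>) for a symmetric positive definite G."""
    return np.sqrt(x @ G @ x)

def project_G(x, G, lb, ub):
    """Projection of x onto the box K = {y : lb <= y <= ub} in the G-norm,
    i.e. argmin_{y in K} 0.5 * (y - x)^T G (y - x)."""
    fun = lambda y: 0.5 * (y - x) @ G @ (y - x)
    jac = lambda y: G @ (y - x)
    res = minimize(fun, x0=np.clip(x, lb, ub), jac=jac,
                   bounds=list(zip(lb, ub)), method="L-BFGS-B")
    return res.x

G = np.array([[2.0, 0.5], [0.5, 1.0]])
x = np.array([1.0, -2.0])
lam = np.linalg.eigvalsh(G)                     # eigenvalues in ascending order
assert np.sqrt(lam[0]) * np.linalg.norm(x) <= g_norm(x, G) \
       <= np.sqrt(lam[-1]) * np.linalg.norm(x)  # the bounds (2.1)
print(project_G(x, G, lb=np.array([0.0, 0.0]), ub=np.array([1.0, 1.0])))
```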

The regularized gap function for QVIP (1.1) is given as follows:
$$g(x) = \max_{y \in K(x)} \left\{ \langle F(x), x - y \rangle - \frac{\alpha}{2}\|x - y\|_G^2 \right\}, \tag{2.2}$$
where $\alpha > 0$ is a positive parameter. Let $S$ be defined by $S := \{x \in \mathbb{R}^n : x \in K(x)\}$. This is called the feasible set of QVIP (1.1). For the relationship between the regularized gap function (2.2) and QVIP (1.1), the following result has been shown in [17, 18].

Lemma 2.1. Let $g$ be defined by (2.2). Then $g(x) \ge 0$ for all $x \in S$. Furthermore, $x^* \in S$ and $g(x^*) = 0$ if and only if $x^*$ is a solution to QVIP (1.1). Hence, problem (1.1) is equivalent to finding a global optimal solution to the problem:
$$\min_{x \in S} g(x). \tag{2.3}$$

Though the regularized gap function is directionally differentiable under some suitable conditions (see [17, 18]), it is in general nondifferentiable.

The regularized gap function (or residual function) for SQVIP (1.4) is as follows:
$$g(x, \omega) = \max_{y \in K(x)} \left\{ \langle F(x, \omega), x - y \rangle - \frac{\alpha}{2}\|x - y\|_G^2 \right\}, \tag{2.4}$$
and the deterministic reformulation for SQVIP is
$$\min_{x \in S} \Theta(x) := \mathbb{E}[g(x, \omega)], \tag{2.5}$$
where $\mathbb{E}$ denotes the expectation operator.

Note that the objective function $\Theta(x)$ contains a mathematical expectation. Throughout this paper, we assume that $\mathbb{E}[g(x, \omega)]$ cannot be calculated in a closed form, so that we will have to approximate it through discretization. One of the most well-known discretization approaches is the sample average approximation (SAA) method. In general, for an integrable function $\phi: \Omega \to \mathbb{R}$, we approximate the expected value $\mathbb{E}[\phi(\omega)]$ with the sample average $\frac{1}{N_k}\sum_{\omega_i \in \Omega_k} \phi(\omega_i)$, where $\omega_1, \ldots, \omega_{N_k}$ are independently and identically distributed random samples of $\omega$ and $\Omega_k := \{\omega_1, \ldots, \omega_{N_k}\}$. By the strong law of large numbers, we get the following lemma.
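Before formalizing this, a minimal Python sketch (with a hypothetical integrand $\phi$ and standard normal samples of our own choosing) illustrates how the sample average approaches the expected value as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(omega):
    """A hypothetical integrable function of the random variable omega."""
    return omega ** 2 + 1.0

# For omega ~ N(0, 1), E[phi(omega)] = E[omega^2] + 1 = 2 exactly.
# The sample average (1/N) * sum_i phi(omega_i) converges to it
# with probability one, as asserted in Lemma 2.2 below.
for N in (10, 1_000, 100_000):
    samples = rng.standard_normal(N)
    print(N, np.mean(phi(samples)))
```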

Lemma 2.2. If $\phi(\omega)$ is integrable, then
$$\lim_{k \to \infty} \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \phi(\omega_i) = \mathbb{E}[\phi(\omega)] \tag{2.6}$$
holds with probability one.

Let
$$\Theta_k(x) := \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} g(x, \omega_i). \tag{2.7}$$
Applying the above techniques, we can get the following approximation of (2.5):
$$\min_{x \in S} \Theta_k(x). \tag{2.8}$$

3. Convexity and Differentiability of $g$

In the remainder of this paper, we restrict ourselves to a special case where $K(x) = m(x) + K$. Here, $K$ is a closed convex set in $\mathbb{R}^n$ and $m: \mathbb{R}^n \to \mathbb{R}^n$ is a mapping. In this case, we can show that $g$ is continuously differentiable whenever so are the functions $F$ and $m$. In order to get this result, we need the following lemma (see [19, Chapter 4, Theorem 1.7]).

Lemma 3.1. Let $K$ be a nonempty closed set and $O \subseteq \mathbb{R}^n$ an open set. Assume that $\phi: O \times K \to \mathbb{R}$ is continuous and that the gradient $\nabla_x \phi(x, y)$ is also continuous. If the problem $\max_{y \in K} \phi(x, y)$ is uniquely attained at $y(x)$ for any fixed $x \in O$, then the function $\theta(x) := \max_{y \in K} \phi(x, y)$ is continuously differentiable and its gradient is given by $\nabla \theta(x) = \nabla_x \phi(x, y(x))$.

For any $y \in K(x) = m(x) + K$, we can find a vector $z \in K$ such that $y = m(x) + z$. Thus, we can rewrite (2.2) as follows:
$$g(x) = \max_{z \in K} \left\{ \langle F(x), x - m(x) - z \rangle - \frac{\alpha}{2}\|x - m(x) - z\|_G^2 \right\}. \tag{3.1}$$
The maximization problem in (3.1) is essentially equivalent to the following minimization problem:
$$\min_{z \in K} \frac{1}{2} \left\| z - \left( x - m(x) - \frac{1}{\alpha} G^{-1} F(x) \right) \right\|_G^2. \tag{3.2}$$

It is easy to see that problem (3.2) has a unique optimal solution
$$z(x) = \Pi_{K,G}\!\left( x - m(x) - \frac{1}{\alpha} G^{-1} F(x) \right).$$
Thus, $z(x)$ is also the unique solution of problem (3.1). The following result is a natural extension of [20, Theorem 3.2].
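Under these assumptions, evaluating $g$ reduces to a single projection. The following Python sketch (illustrative data of our own: a box $K$ and a diagonal $G$, so that $\Pi_{K,G}$ is componentwise clipping, together with hypothetical affine $F$ and $m$) computes $z(x)$ and the value of (3.1):

```python
import numpy as np

# Illustrative data for the moving-set case K(x) = m(x) + K.
alpha = 1.0
G = np.diag([2.0, 1.0])                      # diagonal G: the G-projection
lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # onto a box is a clip

M, q = np.array([[3.0, 0.0], [0.0, 2.0]]), np.array([1.0, -1.0])
F = lambda x: M @ x + q                      # hypothetical affine F
m = lambda x: 0.1 * x                        # hypothetical affine moving map

def z_of_x(x):
    """Unique solution of (3.2): project x - m(x) - (1/alpha) G^{-1} F(x) onto K."""
    w = x - m(x) - np.linalg.solve(G, F(x)) / alpha
    return np.clip(w, lb, ub)

def gap(x):
    """Regularized gap function (3.1), evaluated at its unique maximizer z(x)."""
    u = x - m(x) - z_of_x(x)
    return F(x) @ u - 0.5 * alpha * u @ G @ u

x = np.array([0.5, 0.5])
print(z_of_x(x), gap(x))
```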

Theorem 3.2. If $K$ is a closed convex set in $\mathbb{R}^n$ and $F$ and $m$ are continuously differentiable, then the regularized gap function $g$ given by (2.2) is also continuously differentiable and its gradient is given by
$$\nabla g(x) = \nabla F(x) H(x) + \big(I - \nabla m(x)\big)\big(F(x) - \alpha G H(x)\big), \tag{3.3}$$
where $H(x) := x - m(x) - z(x)$ and $I$ denotes the $n \times n$ identity matrix.

Proof. Let us define the function $\phi: \mathbb{R}^n \times K \to \mathbb{R}$ by
$$\phi(x, z) := \langle F(x), x - m(x) - z \rangle - \frac{\alpha}{2}\|x - m(x) - z\|_G^2.$$
It is obvious that if $F$ and $m$ are continuous, then $\phi$ is continuous in $(x, z)$. If $F$ and $m$ are continuously differentiable, then $\nabla_x \phi$ is continuous in $(x, z)$. By (3.1), we have
$$g(x) = \max_{z \in K} \phi(x, z).$$
Since the maximum on the right-hand side is uniquely attained at $z(x)$, it follows from Lemma 3.1 that $g$ is differentiable and its gradient is given by
$$\nabla g(x) = \nabla_x \phi(x, z(x)) = \nabla F(x) H(x) + \big(I - \nabla m(x)\big)\big(F(x) - \alpha G H(x)\big).$$
This completes the proof.
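As a numerical sanity check on formula (3.3) (again with illustrative data of our own: affine $F$ and $m$, a box $K$, and a diagonal $G$), the gradient of Theorem 3.2 can be compared against central finite differences:

```python
import numpy as np

alpha, G = 1.0, np.diag([2.0, 1.0])
lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
M, q = np.array([[3.0, 0.5], [0.0, 2.0]]), np.array([1.0, -1.0])
D = 0.1 * np.eye(2)
F = lambda x: M @ x + q                      # hypothetical F(x) = Mx + q
m = lambda x: D @ x                          # hypothetical m(x) = Dx

def z_of_x(x):
    return np.clip(x - m(x) - np.linalg.solve(G, F(x)) / alpha, lb, ub)

def gap(x):
    u = x - m(x) - z_of_x(x)
    return F(x) @ u - 0.5 * alpha * u @ G @ u

def grad_gap(x):
    """The formula of Theorem 3.2 with H(x) = x - m(x) - z(x)."""
    H = x - m(x) - z_of_x(x)
    return M.T @ H + (np.eye(2) - D.T) @ (F(x) - alpha * G @ H)

x, eps = np.array([0.5, 0.5]), 1e-6
fd = np.array([(gap(x + eps * e) - gap(x - eps * e)) / (2 * eps)
               for e in np.eye(2)])
print(np.allclose(fd, grad_gap(x), atol=1e-4))   # expected: True
```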

Remark 3.3. When $m(x) \equiv c$ is a constant mapping, we have $\nabla m(x) = 0$ and so QVIP (1.1) reduces to VIP (1.2) with the constant set $c + K$. In this case,
$$\nabla g(x) = \nabla F(x) H(x) + F(x) - \alpha G H(x),$$
where $H(x) = x - c - \Pi_{K,G}\big(x - c - \frac{1}{\alpha}G^{-1}F(x)\big)$. Moreover, when $c = 0$, we have $H(x) = x - \Pi_{K,G}\big(x - \frac{1}{\alpha}G^{-1}F(x)\big)$, which is the same as [20, Theorem 3.2].

Now we investigate the conditions under which $g$ is convex.

Theorem 3.4. Suppose that $F(x) = Mx + q$ and $m(x) = Dx + d$, where $M$ and $D$ are $n \times n$ matrices and $q, d \in \mathbb{R}^n$ are vectors. Denote by $\lambda_{\min}$ and $\lambda_{\max}$ the smallest eigenvalue of $M + M^T$ and the largest eigenvalue of $G$, respectively. We have the following statements.
(i) If $M^T(I - D) + (I - D)^T M - \alpha (I - D)^T G (I - D)$ is positive semidefinite, then the function $g$ is convex. Moreover, if there exists a constant $\mu > 0$ such that $M^T(I - D) + (I - D)^T M - \alpha (I - D)^T G (I - D) \succeq \mu I$, then $g$ is strongly convex with modulus $\mu$.
(ii) If $D = 0$ and $\lambda_{\min} \ge \alpha \lambda_{\max}$, then the function $g$ is convex. Moreover, if $\lambda_{\min} > \alpha \lambda_{\max}$, then $g$ is strongly convex with modulus $\lambda_{\min} - \alpha \lambda_{\max}$.

Proof. Substituting $F(x) = Mx + q$ and $m(x) = Dx + d$ into (3.1), we have
$$g(x) = \max_{z \in K} \phi(x, z), \qquad \phi(x, z) := \langle Mx + q, (I - D)x - d - z \rangle - \frac{\alpha}{2}\|(I - D)x - d - z\|_G^2. \tag{3.11}$$
Noting that $g$ is the pointwise maximum of the family $\{\phi(\cdot, z) : z \in K\}$ and that the pointwise maximum of convex (respectively, strongly convex with modulus $\mu$) functions is convex (respectively, strongly convex with modulus $\mu$), it suffices to examine the Hessian of $\phi(\cdot, z)$. A direct computation gives, for any $z \in K$,
$$\nabla_x^2 \phi(x, z) = M^T(I - D) + (I - D)^T M - \alpha (I - D)^T G (I - D).$$
(i) If $M^T(I - D) + (I - D)^T M - \alpha (I - D)^T G (I - D) \succeq 0$, then the Hessian matrix is positive semidefinite and hence $\phi(\cdot, z)$ is convex in $x$ for any $z \in K$. In consequence, by (3.11), the regularized gap function $g$ is convex. Moreover, if $M^T(I - D) + (I - D)^T M - \alpha (I - D)^T G (I - D) \succeq \mu I$, then $\nabla_x^2 \phi(x, z) - \mu I \succeq 0$, which means that $\phi(\cdot, z)$ is strongly convex with modulus $\mu$ in $x$ for any $z \in K$. From (3.11), we know that the regularized gap function $g$ is strongly convex with modulus $\mu$.
(ii) If $D = 0$ and $\lambda_{\min} \ge \alpha \lambda_{\max}$, we have
$$\nabla_x^2 \phi(x, z) = M + M^T - \alpha G \succeq (\lambda_{\min} - \alpha \lambda_{\max}) I \succeq 0.$$
Thus, the regularized gap function $g$ is convex. Moreover, if $\lambda_{\min} > \alpha \lambda_{\max}$, then the regularized gap function $g$ is strongly convex with modulus $\lambda_{\min} - \alpha \lambda_{\max}$. This completes the proof.
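The sufficient condition in Theorem 3.4(i) is straightforward to test numerically: a short Python sketch (with hypothetical data of our own) forms the Hessian matrix appearing in the proof and inspects its eigenvalues:

```python
import numpy as np

# Check Q = M^T (I-D) + (I-D)^T M - alpha (I-D)^T G (I-D) >= 0,
# which is the Hessian of phi(., z) for every z in the proof above.
alpha = 0.5
M = np.array([[4.0, 1.0], [0.0, 3.0]])   # hypothetical data
D = 0.1 * np.eye(2)
G = np.eye(2)

ImD = np.eye(2) - D
Q = M.T @ ImD + ImD.T @ M - alpha * ImD.T @ G @ ImD
eigs = np.linalg.eigvalsh(Q)
print(eigs)      # all nonnegative => g is convex
print(eigs[0])   # a positive smallest eigenvalue is a strong-convexity modulus
```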

Remark 3.5. When $m(x) \equiv 0$, QVIP (1.1) reduces to VIP (1.2). Denote by $\lambda_{\min}$ and $\lambda_{\max}$ the smallest eigenvalue of $M + M^T$ and the largest eigenvalue of $G$, respectively. In this case, the function $g$ is convex when $F(x) = Mx + q$ and $\lambda_{\min} \ge \alpha \lambda_{\max}$.

Remark 3.6. When $\alpha = 1$ and $G = I$, we have $\lambda_{\max} = 1$. In this case, the function $g$ is convex when $m(x) \equiv 0$ and $\lambda_{\min} \ge 1$. This is consistent with [4, Theorem 2.1].

4. Properties of the Function $\Theta$

In this section, we consider the properties of the objective function $\Theta$ of problem (2.5). In what follows, we show that $\Theta$ is differentiable under some suitable conditions.

Theorem 4.1. Suppose that $F(x, \omega) = M(\omega)x + q(\omega)$, where $M: \Omega \to \mathbb{R}^{n \times n}$ and $q: \Omega \to \mathbb{R}^n$ with
$$\mathbb{E}\big[\|M(\omega)\|^2\big] < \infty, \qquad \mathbb{E}\big[\|q(\omega)\|^2\big] < \infty.$$
Let $m(x) = Dx + d$. Then the function $\Theta$ is differentiable and
$$\nabla \Theta(x) = \mathbb{E}\big[\nabla_x g(x, \omega)\big].$$

Proof. Since $F(x, \omega) = M(\omega)x + q(\omega)$, it is easy to see that
$$g(x, \omega) = \max_{z \in K} \phi(x, z, \omega),$$
where
$$\phi(x, z, \omega) := \langle M(\omega)x + q(\omega), x - m(x) - z \rangle - \frac{\alpha}{2}\|x - m(x) - z\|_G^2.$$
It follows from Lemma 2.1 that $g(x, \omega) \ge 0$, and so
$$0 \le g(x, \omega) \le \|M(\omega)x + q(\omega)\| \, \|H(x, \omega)\|,$$
where $H(x, \omega) := x - m(x) - z(x, \omega)$. Thus, by the integrability assumptions on $M(\omega)$ and $q(\omega)$, the function $\Theta(x) = \mathbb{E}[g(x, \omega)]$ is finite for every $x$.
In a similar way to Theorem 3.2, we can show that $g(\cdot, \omega)$ is differentiable with respect to $x$ and
$$\nabla_x g(x, \omega) = M(\omega)^T H(x, \omega) + \big(I - D^T\big)\big(F(x, \omega) - \alpha G H(x, \omega)\big).$$
It follows that $\|\nabla_x g(x, \omega)\|$ is locally bounded by an integrable function of $\omega$. By [21, Theorem 16.8], the function $\Theta$ is differentiable and $\nabla \Theta(x) = \mathbb{E}[\nabla_x g(x, \omega)]$. This completes the proof.

The following theorem gives some conditions under which $\Theta$ is convex.

Theorem 4.2. Suppose that the assumptions of Theorem 4.1 hold. Let
$$\bar{\lambda} := \inf_{\omega \in \Omega \setminus \Omega_0} \lambda_{\min}(\omega),$$
where $\Omega_0$ is a null subset of $\Omega$ and $\lambda_{\min}(\omega)$ denotes the smallest eigenvalue of $M(\omega) + M(\omega)^T$; as before, $\lambda_{\max}$ denotes the largest eigenvalue of $G$. We have the following statements.
(i) If $M(\omega)^T(I - D) + (I - D)^T M(\omega) - \alpha (I - D)^T G (I - D) \succeq 0$ almost surely, then the function $\Theta$ is convex. Moreover, if there exists $\mu > 0$ with $M(\omega)^T(I - D) + (I - D)^T M(\omega) - \alpha (I - D)^T G (I - D) \succeq \mu I$ almost surely, then $\Theta$ is strongly convex with modulus $\mu$.
(ii) If $D = 0$ and $\bar{\lambda} \ge \alpha \lambda_{\max}$, then the function $\Theta$ is convex. Moreover, if $\bar{\lambda} > \alpha \lambda_{\max}$, then $\Theta$ is strongly convex with modulus $\bar{\lambda} - \alpha \lambda_{\max}$.

Proof. Define
$$\phi(x, z, \omega) := \langle F(x, \omega), x - m(x) - z \rangle - \frac{\alpha}{2}\|x - m(x) - z\|_G^2.$$
Noting that $g(\cdot, \omega)$ is the pointwise maximum of $\{\phi(\cdot, z, \omega) : z \in K\}$, we have, for any $z \in K$,
$$\nabla_x^2 \phi(x, z, \omega) = M(\omega)^T(I - D) + (I - D)^T M(\omega) - \alpha (I - D)^T G (I - D),$$
where the matrix inequalities below hold almost surely.
(i) If $M(\omega)^T(I - D) + (I - D)^T M(\omega) - \alpha (I - D)^T G (I - D) \succeq 0$, then the Hessian matrix is positive semidefinite and hence $\phi(\cdot, z, \omega)$ is convex in $x$ for any $z \in K$. Since the regularized gap function $g(\cdot, \omega)$ is then convex almost surely, so is $\Theta = \mathbb{E}[g(\cdot, \omega)]$. Moreover, if $M(\omega)^T(I - D) + (I - D)^T M(\omega) - \alpha (I - D)^T G (I - D) \succeq \mu I$, then $g(\cdot, \omega)$ is strongly convex with modulus $\mu$ in $x$ almost surely. From the definitions of $\Theta$ and $g$, we know that $\Theta$ is strongly convex with modulus $\mu$.
(ii) If $D = 0$ and $\bar{\lambda} \ge \alpha \lambda_{\max}$, then
$$M(\omega) + M(\omega)^T - \alpha G \succeq (\bar{\lambda} - \alpha \lambda_{\max}) I \succeq 0 \quad \text{a.s.},$$
which implies that the regularized gap function $g(\cdot, \omega)$ is convex almost surely and so is $\Theta$. Moreover, if $\bar{\lambda} > \alpha \lambda_{\max}$, then $\Theta$ is strongly convex with modulus $\bar{\lambda} - \alpha \lambda_{\max}$. This completes the proof.

It is easy to verify that $S = \{x \in \mathbb{R}^n : (I - D)x - d \in K\}$ is a convex set when $m(x) = Dx + d$. Thus, Theorem 4.2 indicates that problem (2.5) is a convex program. From the proof of Theorem 4.2, we can also see that problem (2.8) is a convex program. Hence we can obtain a global optimal solution using existing solution methods.
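Since problem (2.8) is a convex program under the above conditions, off-the-shelf solvers suffice. The following self-contained Python sketch (all data hypothetical: a scalar random perturbation of $M$, $m(x) = \delta x$, a box $K$, and $G = I$) assembles $\Theta_k$ and its gradient from the formulas of Sections 3 and 4 and minimizes over $S$ with a bound-constrained quasi-Newton method:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
alpha, delta = 1.0, 0.1
M0, M1 = np.array([[4.0, 0.0], [0.0, 3.0]]), np.eye(2)
q = np.array([1.0, -1.0])
lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
omegas = rng.standard_normal(1000)           # i.i.d. samples of omega

def gap_and_grad(x, w):
    """g(x, omega) and its x-gradient for F(x, w) = (M0 + w*M1) x + q, G = I."""
    Mw = M0 + w * M1
    Fx = Mw @ x + q
    z = np.clip((1 - delta) * x - Fx / alpha, lb, ub)    # z(x, omega)
    H = (1 - delta) * x - z                              # H(x, omega)
    val = Fx @ H - 0.5 * alpha * H @ H
    grad = Mw.T @ H + (1 - delta) * (Fx - alpha * H)
    return val, grad

def theta_k(x):
    """Sample average Theta_k and its gradient over the drawn samples."""
    vals, grads = zip(*(gap_and_grad(x, w) for w in omegas))
    return np.mean(vals), np.mean(grads, axis=0)

# Here S = {x : (1 - delta) x in K} is itself a box.
res = minimize(theta_k, x0=np.zeros(2), jac=True,
               bounds=list(zip(lb / (1 - delta), ub / (1 - delta))))
print(res.x, res.fun)
```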

5. Convergence of Solutions and Stationary Points

In this section, we will investigate the limiting behavior of the optimal solutions and stationary points of (2.8).

Note that if the conditions of Theorem 4.1 are satisfied, then the set $S$ is closed, and
$$\mathbb{E}\big[\|M(\omega)\|^2 + \|q(\omega)\|^2\big] \le C,$$
where $C > 0$ is a constant.

Theorem 5.1. Suppose that the conditions of Theorem 4.1 are satisfied. Let $x_k$ be an optimal solution of problem (2.8) for each $k$. If $x^*$ is an accumulation point of $\{x_k\}$, then it is an optimal solution of problem (2.5).

Proof. Without loss of generality, we assume that $\{x_k\}$ itself converges to $x^*$ as $k$ tends to infinity. It is obvious that $x^* \in S$.
We first show that
$$\lim_{k \to \infty} \Theta_k(x_k) = \Theta(x^*). \tag{5.2}$$
It follows from the mean-value theorem that
$$g(x_k, \omega_i) - g(x^*, \omega_i) = \langle \nabla_x g(y_k^i, \omega_i), x_k - x^* \rangle,$$
where $y_k^i = x^* + t_k^i (x_k - x^*)$ and $t_k^i \in (0, 1)$. From the proof of Theorem 4.1, we have that $\|\nabla_x g(y, \omega)\|$ is bounded on bounded sets by an integrable function of $\omega$. Since $\{x_k\}$ converges, there exists a constant $\rho > 0$ such that $\|x_k\| \le \rho$ for each $k$. By the definition of $y_k^i$, we know that $\|y_k^i\| \le \rho + \|x^*\|$. Hence,
$$|\Theta_k(x_k) - \Theta_k(x^*)| \le \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \|\nabla_x g(y_k^i, \omega_i)\| \, \|x_k - x^*\| \to 0 \quad (k \to \infty).$$
It follows from Lemma 2.2 that $\Theta_k(x^*) \to \Theta(x^*)$ with probability one, which means that (5.2) holds.
Now, we show that $x^*$ is an optimal solution of problem (2.5). Since $x_k$ is an optimal solution of problem (2.8) for each $k$, we have that, for any $x \in S$,
$$\Theta_k(x_k) \le \Theta_k(x).$$
Letting $k \to \infty$ above, we get from (5.2) and Lemma 2.2 that
$$\Theta(x^*) \le \Theta(x), \quad \forall x \in S,$$
which means $x^*$ is an optimal solution of problem (2.5). This completes the proof.

In general, it is difficult to obtain a global optimal solution of problem (2.8), whereas computation of stationary points is relatively easy. Therefore, it is important to study the limiting behavior of stationary points of problem (2.8).

Definition 5.2. $x^* \in S$ is said to be stationary to problem (2.8) if
$$\langle \nabla \Theta_k(x^*), x - x^* \rangle \ge 0, \quad \forall x \in S,$$
and $x^* \in S$ is said to be stationary to problem (2.5) if
$$\langle \nabla \Theta(x^*), x - x^* \rangle \ge 0, \quad \forall x \in S.$$
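Recall the standard fixed-point characterization of stationarity, which is convenient for numerical verification: for any fixed $\beta > 0$,
$$\langle \nabla \Theta_k(x^*), x - x^* \rangle \ge 0 \ \ \forall x \in S \iff x^* = \Pi_S\big(x^* - \beta \nabla \Theta_k(x^*)\big),$$
where $\Pi_S$ denotes the Euclidean projection onto $S$, so the residual $\|x^* - \Pi_S(x^* - \beta \nabla \Theta_k(x^*))\|$ can serve as a numerical stationarity measure.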

Theorem 5.3. Let $x_k$ be stationary to problem (2.8) for each $k$. If the conditions of Theorem 4.1 are satisfied, then any accumulation point of $\{x_k\}$ is a stationary point of problem (2.5).

Proof. Without loss of generality, we assume that $\{x_k\}$ itself converges to $x^*$.
At first, we show that
$$\lim_{k \to \infty} z(x_k, \omega) = z(x^*, \omega). \tag{5.13}$$
It follows from (2.1) and the nonexpansivity of the projection operator $\Pi_{K,G}$ with respect to $\|\cdot\|_G$ that
$$\|z(x_k, \omega) - z(x^*, \omega)\| \le \sqrt{\frac{\lambda_{\max}(G)}{\lambda_{\min}(G)}} \left\| \left( I - D - \frac{1}{\alpha} G^{-1} M(\omega) \right) (x_k - x^*) \right\|.$$
Thus, $\|z(x_k, \omega) - z(x^*, \omega)\| \to 0$ as $k \to \infty$, which means that (5.13) is true.
Next, we show that
$$\lim_{k \to \infty} \nabla \Theta_k(x_k) = \nabla \Theta(x^*). \tag{5.16}$$
It follows from Lemma 2.2 and Theorem 4.1 that $\nabla \Theta_k(x^*) \to \nabla \Theta(x^*)$ with probability one. By (5.13), we have
$$\lim_{k \to \infty} \|\nabla \Theta_k(x_k) - \nabla \Theta_k(x^*)\| = 0,$$
which implies that (5.16) is true.
Now we show that $x^*$ is a stationary point of problem (2.5). Since $x_k$ is stationary to problem (2.8), we have, for any $x \in S$,
$$\langle \nabla \Theta_k(x_k), x - x_k \rangle \ge 0.$$
Letting $k \to \infty$ above, we get from (5.16) that
$$\langle \nabla \Theta(x^*), x - x^* \rangle \ge 0, \quad \forall x \in S.$$
Thus, $x^*$ is a stationary point of problem (2.5). This completes the proof.

Acknowledgments

This work was supported by the Key Program of NSFC (Grant no. 70831005) and the National Natural Science Foundation of China (111171237, 71101099).