Abstract

We consider a vector variational inequality in a finite-dimensional space. A new gap function is proposed, and an equivalent optimization problem for the vector variational inequality is also provided. Under some suitable conditions, we prove that the gap function is directionally differentiable and that any point satisfying the first-order necessary optimality condition for the equivalent optimization problem solves the vector variational inequality. As an application, we use the new gap function to reformulate a stochastic vector variational inequality as a deterministic optimization problem. We solve this optimization problem by employing the sample average approximation method. The convergence of optimal solutions of the approximation problems is also investigated.

1. Introduction

The vector variational inequality (VVI for short), which was first proposed by Giannessi [1], has been widely investigated by many authors (see [2–9] and the references therein). VVI can be used to model a range of vector equilibrium problems arising in economics, traffic networks, and migration (see [1]).

One approach for solving a VVI is to transform it into an equivalent optimization problem by using a gap function. Gap functions were first introduced to study optimization problems and have become a powerful tool in the study of convex optimization. Gap functions were later introduced for scalar variational inequalities: a gap function reformulates a scalar variational inequality as an equivalent optimization problem, so that effective solution methods and algorithms for optimization problems can be used to find solutions of variational inequalities. Recently, many authors have extended the theory of gap functions to VVI and vector equilibrium problems (see [2, 4, 6–9]). In this paper, we present a new gap function for VVI and reformulate VVI as an equivalent optimization problem. We also prove that the gap function is directionally differentiable and that any point satisfying the first-order necessary optimality condition for the equivalent optimization problem solves the VVI.

In many practical problems, the problem data involve uncertain factors. In order to reflect these uncertainties, stochastic vector variational inequalities are needed. Recently, stochastic scalar variational inequalities have received a lot of attention in the literature (see [10–20]). The ERM (expected residual minimization) method was proposed by Chen and Fukushima [11] in the study of stochastic complementarity problems. They formulated a stochastic linear complementarity problem (SLCP) as a minimization problem which minimizes the expectation of an NCP function (also called a residual function) of the SLCP and regarded a solution of the minimization problem as a solution of the SLCP. This method is the so-called expected residual minimization method. Following the ideas of Chen and Fukushima [11], Zhang and Chen [20] considered stochastic nonlinear complementarity problems. Luo and Lin [18, 19] generalized the expected residual minimization method to solve stochastic linear and nonlinear variational inequality problems. However, in comparison with stochastic scalar variational inequalities, there are very few results in the literature on stochastic vector variational inequalities. In this paper, we consider a deterministic reformulation of the stochastic vector variational inequality (SVVI). Our focus is on the expected residual minimization (ERM) method for the stochastic vector variational inequality. It is well known that a VVI is more complicated than a scalar variational inequality, while it models many practical problems. Therefore, it is meaningful and interesting to study stochastic vector variational inequalities.

The rest of this paper is organized as follows. In Section 2, some preliminaries are given. In Section 3, a new gap function for VVI is constructed and some suitable conditions are given to ensure that the new gap function is directionally differentiable and that any point satisfying the first-order necessary condition of optimality for the new optimization problem solves the vector variational inequality. In Section 4, the stochastic VVI is presented and the new gap function is used to reformulate SVVI as a deterministic optimization problem.

2. Preliminaries

In this section, we will introduce some basic notations and preliminary results.

Throughout this paper, denote by $x^T$ the transpose of a vector or matrix $x$, by $\|\cdot\|$ the Euclidean norm of a vector or matrix, and by $\langle\cdot,\cdot\rangle$ the inner product in $\mathbb{R}^n$. Let $S$ be a nonempty, closed, and convex subset of $\mathbb{R}^n$, $F_i:\mathbb{R}^n\to\mathbb{R}^n$ ($i=1,\ldots,m$) mappings, and $F=(F_1,\ldots,F_m)$. The vector variational inequality is to find a vector $x^*\in S$ such that
$$F(x^*)^T(y-x^*):=\big(F_1(x^*)^T(y-x^*),\ldots,F_m(x^*)^T(y-x^*)\big)^T\notin-\operatorname{int}\mathbb{R}^m_+,\quad\forall y\in S,\tag{1}$$
where $\mathbb{R}^m_+$ is the nonnegative orthant of $\mathbb{R}^m$ and $\operatorname{int}\mathbb{R}^m_+$ denotes the interior of $\mathbb{R}^m_+$. Denote by $S_{\mathrm{VVI}}$ the solution set of VVI (1) and by $S_\lambda$ the solution set of the following scalar variational inequality (VI$_\lambda$): find a vector $x^*\in S$ such that
$$F_\lambda(x^*)^T(y-x^*)\geq 0,\quad\forall y\in S,\tag{2}$$
where $F_\lambda(x):=\sum_{i=1}^m\lambda_iF_i(x)$ and $\lambda\in\Lambda:=\{\lambda\in\mathbb{R}^m_+:\sum_{i=1}^m\lambda_i=1\}$.

Definition 1. A function $\phi:\mathbb{R}^n\to\mathbb{R}$ is said to be a gap function for VI$_\lambda$ (2) if it satisfies the following properties: (i) $\phi(x)\geq 0$ for all $x\in S$; (ii) $\phi(x^*)=0$ and $x^*\in S$ if and only if $x^*$ solves (2).

Definition 2. A function $g:\mathbb{R}^n\to\mathbb{R}$ is said to be a gap function for VVI (1) if it satisfies the following properties: (i) $g(x)\geq 0$ for all $x\in S$; (ii) $g(x^*)=0$ and $x^*\in S$ if and only if $x^*$ solves VVI (1).

Suppose that $G$ is an $n\times n$ symmetric positive definite matrix. Let
$$\phi_\lambda(x)=\max_{y\in S}\Big\{F_\lambda(x)^T(x-y)-\tfrac{1}{2}(x-y)^TG(x-y)\Big\},\tag{3}$$
where $F_\lambda(x)=\sum_{i=1}^m\lambda_iF_i(x)$. Note that
$$\tfrac{\alpha}{2}\|x-y\|^2\leq\tfrac{1}{2}(x-y)^TG(x-y)\leq\tfrac{\beta}{2}\|x-y\|^2,\tag{4}$$
where $\alpha$ and $\beta$ are the smallest and largest eigenvalues of $G$, respectively. It was shown in [21] that the maximum in (3) is attained at
$$y_\lambda(x)=\Pi_{S,G}\big(x-G^{-1}F_\lambda(x)\big),\tag{5}$$
where $\Pi_{S,G}(z)$ is the projection of the point $z$ onto the closed convex set $S$ with respect to the norm $\|z\|_G:=\sqrt{z^TGz}$. Thus,
$$\phi_\lambda(x)=F_\lambda(x)^T\big(x-y_\lambda(x)\big)-\tfrac{1}{2}\big(x-y_\lambda(x)\big)^TG\big(x-y_\lambda(x)\big).\tag{6}$$
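To make the construction concrete, here is a minimal numerical sketch (not part of the original analysis): it evaluates the regularized gap function (3) for a fixed $\lambda$, taking $G=cI$ and a box-shaped $S$ so that the projection (5) reduces to componentwise clipping. The helper name `regularized_gap`, the affine map, and all data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the regularized gap function (3) for a fixed lambda:
#   phi(x) = max_{y in S} { F(x)^T (x - y) - 0.5 (x - y)^T G (x - y) },
# with G = c*I and S = [lo, hi]^n, so the projection (5) is componentwise clipping.

def regularized_gap(F, x, lo, hi, c=1.0):
    Fx = F(x)
    y = np.clip(x - Fx / c, lo, hi)        # y(x) = Pi_{S,G}(x - G^{-1} F(x)), cf. (5)
    d = x - y
    return Fx @ d - 0.5 * c * (d @ d)      # phi(x) >= 0 on S, and = 0 iff x solves the VI

# Illustrative affine map F(x) = M x + q on S = [0, 1]^2.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([-1.0, 0.3])
F = lambda x: M @ x + q

print(regularized_gap(F, np.array([0.2, 0.4]), 0.0, 1.0))
```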

Lemma 3. The projection operator $\Pi_{S,G}(\cdot)$ is nonexpansive with respect to the norm $\|\cdot\|_G$; that is,
$$\big\|\Pi_{S,G}(x)-\Pi_{S,G}(y)\big\|_G\leq\|x-y\|_G,\quad\forall x,y\in\mathbb{R}^n.\tag{7}$$

Lemma 4. The function $\phi_\lambda$ is a gap function for (2), and $x^*\in S$ solves (2) if and only if it solves the following optimization problem:
$$\min_{x\in S}\phi_\lambda(x).\tag{8}$$

The gap function $\phi_\lambda$ is also called the regularized gap function for VI$_\lambda$ (2). When $F_i$ ($i=1,\ldots,m$) are continuously differentiable, we have the following results.

Lemma 5. If $F_i$ ($i=1,\ldots,m$) are continuously differentiable, then $\phi_\lambda$ is also continuously differentiable in $x$, and its gradient is given by
$$\nabla\phi_\lambda(x)=F_\lambda(x)-\big(\nabla F_\lambda(x)-G\big)^T\big(y_\lambda(x)-x\big),\tag{9}$$
where $\nabla F_\lambda(x)$ denotes the Jacobian matrix of $F_\lambda$ at $x$.
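As a quick, purely illustrative sanity check of the gradient formula (9) (not taken from the paper), one can compare it with central finite differences for an affine map on a box; the matrix `M`, the vector `q`, and the evaluation point below are made-up assumptions.

```python
import numpy as np

# Illustrative check of the gradient formula (9),
#   grad phi(x) = F(x) - (J_F(x) - G)^T (y(x) - x),
# against central finite differences, for an affine F(x) = M x + q,
# G = c*I, and S = [0, 1]^2 (so J_F(x) = M and the projection is clipping).

c = 1.0
M = np.array([[2.0, 0.7],
              [0.1, 1.5]])                 # deliberately nonsymmetric Jacobian
q = np.array([-1.0, 0.3])
lo, hi = 0.0, 1.0

def proj(z):                               # Pi_{S,G} with G = c*I on a box
    return np.clip(z, lo, hi)

def phi(x):                                # regularized gap function, closed form (6)
    Fx = M @ x + q
    d = x - proj(x - Fx / c)
    return Fx @ d - 0.5 * c * (d @ d)

def grad_phi(x):                           # formula (9) with J_F(x) = M, G = c*I
    Fx = M @ x + q
    y = proj(x - Fx / c)
    return Fx - (M - c * np.eye(2)).T @ (y - x)

x = np.array([0.2, 0.4])
h = 1e-6
numeric = np.array([(phi(x + h * e) - phi(x - h * e)) / (2 * h) for e in np.eye(2)])
print(grad_phi(x), numeric)                # the two gradients should agree closely
```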

Lemma 6. Assume that $F_i$ ($i=1,\ldots,m$) are continuously differentiable and that the Jacobian matrices $\nabla F_i(x)$ are positive definite for all $x\in S$ ($i=1,\ldots,m$). If $x^*$ is a stationary point of problem (8), that is,
$$\nabla\phi_\lambda(x^*)^T(y-x^*)\geq 0,\quad\forall y\in S,\tag{10}$$
then it solves (2).

3. A New Gap Function for VVI and Its Properties

In this section, based on the regularized gap function $\phi_\lambda$ for (2), we construct a new gap function for VVI (1) and establish some of its properties under mild conditions.

Let
$$g(x):=\min_{\lambda\in\Lambda}\phi_\lambda(x).\tag{11}$$
Before showing that $g$ is a gap function for VVI (1), we first present a useful result.
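Continuing the illustrative sketch from Section 2 (it reuses the hypothetical `regularized_gap` helper and the toy data `M`, `q` defined there), the minimum over $\Lambda$ in (11) can be approximated for $m=2$ by scanning $\lambda=(t,1-t)$ over a grid; this only shows how (11) is assembled and is not a serious algorithm.

```python
# Sketch of the new gap function (11), g(x) = min_{lambda in Lambda} phi_lambda(x),
# for m = 2, approximating the minimum over the unit simplex by a grid on t in [0, 1].
# Reuses regularized_gap, M, q from the sketch in Section 2 (illustrative data only).

def vvi_gap(F_list, x, lo, hi, c=1.0, grid=201):
    best = np.inf
    for t in np.linspace(0.0, 1.0, grid):             # lambda = (t, 1 - t) in Lambda
        F_lam = lambda z, t=t: t * F_list[0](z) + (1.0 - t) * F_list[1](z)
        best = min(best, regularized_gap(F_lam, x, lo, hi, c))
    return best

F1 = lambda x: M @ x + q                               # toy component maps on S = [0, 1]^2
F2 = lambda x: x - np.array([0.5, 0.5])
print(vvi_gap([F1, F2], np.array([0.2, 0.4]), 0.0, 1.0))
```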

Lemma 7. The following assertion is true:
$$S_{\mathrm{VVI}}=\bigcup_{\lambda\in\Lambda}S_\lambda.\tag{12}$$

Proof. Suppose that $x^*\in\bigcup_{\lambda\in\Lambda}S_\lambda$. Then, there exists a $\lambda\in\Lambda$ such that $x^*\in S$ and
$$\sum_{i=1}^m\lambda_iF_i(x^*)^T(y-x^*)\geq 0,\quad\forall y\in S.$$
For any fixed $y\in S$, since $\lambda\in\Lambda$ and $\sum_{i=1}^m\lambda_iF_i(x^*)^T(y-x^*)\geq 0$, there exists an index $i_0$ such that $\lambda_{i_0}>0$ and $F_{i_0}(x^*)^T(y-x^*)\geq 0$, and so
$$F(x^*)^T(y-x^*)=\big(F_1(x^*)^T(y-x^*),\ldots,F_m(x^*)^T(y-x^*)\big)^T\notin-\operatorname{int}\mathbb{R}^m_+.$$
Thus, we have
$$F(x^*)^T(y-x^*)\notin-\operatorname{int}\mathbb{R}^m_+,\quad\forall y\in S.$$
This implies that $x^*\in S_{\mathrm{VVI}}$ and $\bigcup_{\lambda\in\Lambda}S_\lambda\subseteq S_{\mathrm{VVI}}$.
Conversely, suppose that $x^*\in S_{\mathrm{VVI}}$. Then, we have
$$F(x^*)^T(y-x^*)\notin-\operatorname{int}\mathbb{R}^m_+,\quad\forall y\in S,$$
and so $W\cap\big(-\operatorname{int}\mathbb{R}^m_+\big)=\emptyset$, where $W:=\{F(x^*)^T(y-x^*):y\in S\}$. Since $S$ is convex, $W$ is convex, and from Theorems 11.1 and 11.3 of [22] it follows that there exists a vector $\lambda=(\lambda_1,\ldots,\lambda_m)^T\neq 0$ such that
$$\sum_{i=1}^m\lambda_iF_i(x^*)^T(y-x^*)\geq\sum_{i=1}^m\lambda_id_i,\quad\forall y\in S,\ \forall d=(d_1,\ldots,d_m)^T\in-\operatorname{int}\mathbb{R}^m_+.$$
Moreover, we have $\lambda\in\mathbb{R}^m_+$. In fact, if $\lambda_{i_0}<0$ for some $i_0$, then we have
$$\sup_{d\in-\operatorname{int}\mathbb{R}^m_+}\sum_{i=1}^m\lambda_id_i=+\infty.$$
On the other hand, taking $y=x^*$ in the separation inequality yields
$$\sum_{i=1}^m\lambda_id_i\leq 0,\quad\forall d\in-\operatorname{int}\mathbb{R}^m_+,$$
which is a contradiction. Thus, $\lambda\in\mathbb{R}^m_+\setminus\{0\}$. This implies that, for any $y\in S$,
$$\sum_{i=1}^m\lambda_iF_i(x^*)^T(y-x^*)\geq\sup_{d\in-\operatorname{int}\mathbb{R}^m_+}\sum_{i=1}^m\lambda_id_i=0.$$
Taking $\bar\lambda=\lambda/\sum_{i=1}^m\lambda_i$, then $\bar\lambda\in\Lambda$ and
$$\sum_{i=1}^m\bar\lambda_iF_i(x^*)^T(y-x^*)\geq 0,\quad\forall y\in S,$$
which implies that $x^*\in S_{\bar\lambda}$ and $S_{\mathrm{VVI}}\subseteq\bigcup_{\lambda\in\Lambda}S_\lambda$.
This completes the proof.

Now, we can prove that $g$ is a gap function for VVI (1).

Theorem 8. The function $g$ given by (11) is a gap function for VVI (1). Hence, $x^*\in S$ solves VVI (1) if and only if it solves the following optimization problem:
$$\min_{x\in S}g(x).\tag{24}$$

Proof. Note that, for any $\lambda\in\Lambda$, $\phi_\lambda$ given by (3) is a gap function for (2). It follows from Definition 1 that $\phi_\lambda(x)\geq 0$ for all $x\in S$ and hence $g(x)=\min_{\lambda\in\Lambda}\phi_\lambda(x)\geq 0$ for all $x\in S$.
Assume that $g(x^*)=0$ for some $x^*\in S$. From (6), it is easy to see that $\phi_\lambda(x)$ is continuous in $(x,\lambda)$. Since $\Lambda$ is a closed and bounded set, there exists a vector $\bar\lambda\in\Lambda$ such that $\phi_{\bar\lambda}(x^*)=g(x^*)=0$, which implies that $x^*$ solves (2) with $\lambda=\bar\lambda$ and $x^*\in\bigcup_{\lambda\in\Lambda}S_\lambda$. It follows from Lemma 7 that $x^*$ solves VVI (1).
Suppose that $x^*\in S$ solves VVI (1). From Lemma 7, it follows that there exists a vector $\bar\lambda\in\Lambda$ such that $x^*\in S_{\bar\lambda}$, and so $\phi_{\bar\lambda}(x^*)=0$. Since $\phi_\lambda(x^*)\geq 0$ for all $\lambda\in\Lambda$, we have $g(x^*)=\min_{\lambda\in\Lambda}\phi_\lambda(x^*)=0$.
Thus, $g$ given by (11) is a gap function for VVI (1). The last assertion is obvious from the definition of a gap function.
This completes the proof.

Since $g$ is constructed from the regularized gap function $\phi_\lambda$ for VI$_\lambda$ (2), we call it the regularized gap function for VVI (1). Theorem 8 indicates that, in order to get a solution of VVI (1), we only need to solve problem (24). In what follows, we will discuss some properties of the regularized gap function $g$.

Theorem 9. If $F_i$ ($i=1,\ldots,m$) are continuously differentiable, then the regularized gap function $g$ is directionally differentiable in any direction $d\in\mathbb{R}^n$, and its directional derivative is given by
$$g'(x;d)=\min_{\lambda\in\Lambda(x)}\nabla\phi_\lambda(x)^Td,$$
where $\Lambda(x):=\{\lambda\in\Lambda:\phi_\lambda(x)=g(x)\}$.

Proof. It follows from the nonexpansiveness of the projection operator (Lemma 3) that $y_\lambda(x)$, and hence $\phi_\lambda(x)$, is continuous in $(x,\lambda)$. Thus, $\Lambda(x)$ is nonempty for any $x\in\mathbb{R}^n$. From Lemma 5, it follows that $\phi_\lambda(x)$ is continuously differentiable in $x$ and that $\nabla\phi_\lambda(x)$ is continuous in $(x,\lambda)$. It follows from Theorem 1 of [23] that $g$ is directionally differentiable in any direction $d\in\mathbb{R}^n$, and its directional derivative is given by
$$g'(x;d)=\min_{\lambda\in\Lambda(x)}\nabla\phi_\lambda(x)^Td.$$
This completes the proof.

By the directional differentiability of $g$ shown in Theorem 9, the first-order necessary condition of optimality for problem (24) can be stated as follows: find $x\in S$ such that
$$g'(x;y-x)\geq 0,\quad\forall y\in S.\tag{28}$$
If one wishes to solve VVI (1) via the optimization problem (24), one needs to obtain a global optimal solution. However, since the function $g$ is in general nonconvex, it is difficult to find a global optimal solution. Fortunately, we can prove that any point satisfying the first-order condition of optimality (28) is a solution of VVI (1). Thus, existing optimization algorithms can be used to find a solution of VVI (1).

Theorem 10. Assume that $F_i$ ($i=1,\ldots,m$) are continuously differentiable and that the Jacobian matrices $\nabla F_i(x)$ are positive definite for all $x\in S$ ($i=1,\ldots,m$). If $x^*\in S$ satisfies the first-order condition of optimality (28), then $x^*$ solves VVI (1).

Proof. It follows from Theorem 9 and (28) that, for some $\bar\lambda\in\Lambda(x^*)$,
$$\nabla\phi_{\bar\lambda}(x^*)^T(y-x^*)\geq 0$$
holds for any $y\in S$. This implies that $x^*$ is a stationary point of problem (8) with $\lambda=\bar\lambda$. It follows from Lemma 6 that $x^*$ solves (2) with $\lambda=\bar\lambda$. From Lemma 7, we see that $x^*$ solves VVI (1). This completes the proof.

From Theorems 8 and 10, it is easy to get the following corollary.

Corollary 11. Assume that the conditions in Theorem 10 are all satisfied. If $x^*\in S$ satisfies the first-order condition of optimality (28), then $x^*$ is a global optimal solution of problem (24).

4. Stochastic Vector Variational Inequality

In this section, we consider the stochastic vector variational inequality (SVVI). First, we present a deterministic reformulation of SVVI by employing the ERM method and the regularized gap function. Second, we solve this reformulation by the sample average approximation (SAA) method.

In many important practical applications, the functions $F_i$ ($i=1,\ldots,m$) involve some random factors or uncertainties. Let $(\Omega,\mathcal{F},P)$ be a probability space. Taking the randomness into account, we get a stochastic vector variational inequality problem (SVVI): find a vector $x^*\in S$ such that
$$F(x^*,\xi)^T(y-x^*):=\big(F_1(x^*,\xi)^T(y-x^*),\ldots,F_m(x^*,\xi)^T(y-x^*)\big)^T\notin-\operatorname{int}\mathbb{R}^m_+,\quad\forall y\in S,\ \text{a.s.},\tag{30}$$
where $F_i:\mathbb{R}^n\times\Omega\to\mathbb{R}^n$ ($i=1,\ldots,m$) are mappings and a.s. is the abbreviation for "almost surely" under the given probability measure $P$.

Because of the random element $\xi$, we cannot generally find a vector $x^*$ such that (30) holds almost surely. That is, (30) is not well defined if we think of solving (30) before knowing the realization of $\xi$. Therefore, in order to obtain a reasonable solution concept, an appropriate deterministic reformulation of SVVI becomes an important issue in the study of the considered problem. In this section, we will employ the ERM method to solve (30).

Define
$$g(x,\xi):=\min_{\lambda\in\Lambda}\phi_\lambda(x,\xi),\tag{31}$$
where
$$\phi_\lambda(x,\xi):=\max_{y\in S}\Big\{F_\lambda(x,\xi)^T(x-y)-\tfrac{1}{2}(x-y)^TG(x-y)\Big\}\tag{32}$$
and $F_\lambda(x,\xi):=\sum_{i=1}^m\lambda_iF_i(x,\xi)$. The maximum in (32) is attained at
$$y_\lambda(x,\xi)=\Pi_{S,G}\big(x-G^{-1}F_\lambda(x,\xi)\big),\tag{33}$$
and thus
$$\phi_\lambda(x,\xi)=F_\lambda(x,\xi)^T\big(x-y_\lambda(x,\xi)\big)-\tfrac{1}{2}\big(x-y_\lambda(x,\xi)\big)^TG\big(x-y_\lambda(x,\xi)\big).\tag{34}$$
The ERM reformulation is given as follows:
$$\min_{x\in S}\ \theta(x):=E\big[g(x,\xi)\big],\tag{35}$$
where $E$ is the expectation operator.

Note that the objective function in (35) contains a mathematical expectation. In practical applications, it is in general very difficult to calculate $E[g(x,\xi)]$ in a closed form. Thus, we will have to approximate it through discretization. One of the most popular discretization approaches is the sample average approximation (SAA) method. In general, for an integrable function $\psi:\Omega\to\mathbb{R}$, we approximate the expected value $E[\psi(\xi)]$ with the sample average $\frac{1}{N_k}\sum_{\xi^j\in\Omega_k}\psi(\xi^j)$, where $\xi^1,\ldots,\xi^{N_k}$ are independently and identically distributed random samples of $\xi$ and $\Omega_k:=\{\xi^1,\ldots,\xi^{N_k}\}$. By the strong law of large numbers, we get the following lemma.
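For illustration only (the distribution and the test function below are made-up assumptions), the sample average described above can be seen converging numerically:

```python
import numpy as np

# Sample average approximation of E[psi(xi)] by (1/N_k) * sum of psi over i.i.d. samples;
# here xi ~ N(0, 1) and psi(xi) = xi^2, so E[psi(xi)] = 1 (illustrative example).
rng = np.random.default_rng(0)
for N in (10, 1_000, 100_000):
    xs = rng.normal(size=N)
    print(N, np.mean(xs ** 2))     # tends to 1 with probability one as N grows
```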

Lemma 12. If $\psi(\xi)$ is integrable, then
$$\lim_{k\to\infty}\frac{1}{N_k}\sum_{\xi^j\in\Omega_k}\psi(\xi^j)=E\big[\psi(\xi)\big]$$
holds with probability one.

Let
$$\theta_k(x):=\frac{1}{N_k}\sum_{\xi^j\in\Omega_k}g(x,\xi^j).$$
Applying the above technique, we get the following approximation of (35):
$$\min_{x\in S}\ \theta_k(x).\tag{37}$$
In the rest of this section, we focus on the case where $F_i(x,\xi)=A_i(\xi)x+b_i(\xi)$ ($i=1,\ldots,m$), where $A_i:\Omega\to\mathbb{R}^{n\times n}$ and $b_i:\Omega\to\mathbb{R}^n$ are measurable functions such that
$$E\big[\|A_i(\xi)\|^2\big]<\infty,\qquad E\big[\|b_i(\xi)\|^2\big]<\infty,\qquad i=1,\ldots,m.$$
This condition implies that $E[\|A_i(\xi)\|]<\infty$, $E[\|b_i(\xi)\|]<\infty$, and $E[\|A_i(\xi)\|\,\|b_i(\xi)\|]<\infty$ and that, for any scalar $c\geq 0$,
$$E\Big[\big(c\,\|A_i(\xi)\|+\|b_i(\xi)\|\big)^2\Big]<\infty,\qquad E\Big[\|A_i(\xi)\|\big(c\,\|A_i(\xi)\|+\|b_i(\xi)\|\big)\Big]<\infty.$$
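As a rough illustration of how (37) might be solved in the affine case, the following sketch (again reusing the hypothetical `vvi_gap` helper and the toy data `M`, `q` from the earlier sketches) draws i.i.d. samples and hands the sample-average objective to `scipy.optimize.minimize` with box bounds; it is a naive stand-in for a proper solver, and all random data are illustrative assumptions.

```python
from scipy.optimize import minimize

# Naive sketch of problem (37): minimize theta_k(x) = (1/N_k) * sum_j g(x, xi_j) over
# S = [0, 1]^2, with affine F_i(x, xi) = A_i(xi) x + b_i(xi), m = 2, and i.i.d. samples.
# Reuses vvi_gap, M, q from the earlier sketches; all random data are illustrative.
rng = np.random.default_rng(1)
samples = rng.normal(scale=0.1, size=20)                    # i.i.d. samples of xi

def theta_k(x):
    vals = []
    for xi in samples:
        F1 = lambda z, xi=xi: (M + xi * np.eye(2)) @ z + q           # A_1(xi) z + b_1(xi)
        F2 = lambda z, xi=xi: z - np.array([0.5 + xi, 0.5 - xi])     # A_2(xi) z + b_2(xi)
        vals.append(vvi_gap([F1, F2], x, 0.0, 1.0))
    return np.mean(vals)

res = minimize(theta_k, x0=np.array([0.5, 0.5]), bounds=[(0.0, 1.0)] * 2)
print(res.x, res.fun)   # approximate SAA solution and its objective value
```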

The following results will be useful in the proof of the convergence result.

Lemma 13. Let $K$ be a nonempty compact set and let $f,h:K\to\mathbb{R}$ be continuous. Then
$$\Big|\min_{u\in K}f(u)-\min_{u\in K}h(u)\Big|\leq\max_{u\in K}\big|f(u)-h(u)\big|.$$

Proof. Without loss of generality, we assume that $\min_{u\in K}f(u)\geq\min_{u\in K}h(u)$. Let $\bar u$ minimize $f$ over $K$ and $\hat u$ minimize $h$ over $K$, respectively. Hence, $f(\bar u)\leq f(\hat u)$ and $h(\hat u)\leq h(\bar u)$. Thus
$$0\leq\min_{u\in K}f(u)-\min_{u\in K}h(u)=f(\bar u)-h(\hat u)\leq f(\hat u)-h(\hat u)\leq\max_{u\in K}\big|f(u)-h(u)\big|.$$
This completes the proof.

Lemma 14. When $F_i(x,\xi)=A_i(\xi)x+b_i(\xi)$ ($i=1,\ldots,m$), the function $\phi_\lambda(x,\xi)$ is continuously differentiable in $x$ almost surely, and its gradient is given by
$$\nabla_x\phi_\lambda(x,\xi)=F_\lambda(x,\xi)-\big(A_\lambda(\xi)-G\big)^T\big(y_\lambda(x,\xi)-x\big),$$
where $A_\lambda(\xi):=\sum_{i=1}^m\lambda_iA_i(\xi)$.

Proof. The proof is the same as that of Theorem 3.2 in [21], so we omit it here.

Lemma 15. For any $(x,\lambda,\xi)\in S\times\Lambda\times\Omega$, one has
$$\big\|x-y_\lambda(x,\xi)\big\|\leq\frac{2}{\alpha}\big\|F_\lambda(x,\xi)\big\|,$$
where $\alpha$ is the smallest eigenvalue of $G$.

Proof. For any fixed $\lambda\in\Lambda$ and $\xi\in\Omega$, $\phi_\lambda(\cdot,\xi)$ is the regularized gap function of the following scalar variational inequality: find a vector $x^*\in S$ such that
$$F_\lambda(x^*,\xi)^T(y-x^*)\geq 0,\quad\forall y\in S.$$
Hence, $\phi_\lambda(x,\xi)\geq 0$ for all $x\in S$. From (4) and (34), we have
$$F_\lambda(x,\xi)^T\big(x-y_\lambda(x,\xi)\big)\geq\tfrac{1}{2}\big(x-y_\lambda(x,\xi)\big)^TG\big(x-y_\lambda(x,\xi)\big)\geq\tfrac{\alpha}{2}\big\|x-y_\lambda(x,\xi)\big\|^2,$$
and so
$$\tfrac{\alpha}{2}\big\|x-y_\lambda(x,\xi)\big\|^2\leq\big\|F_\lambda(x,\xi)\big\|\,\big\|x-y_\lambda(x,\xi)\big\|.$$
It follows that $\|x-y_\lambda(x,\xi)\|\leq\frac{2}{\alpha}\|F_\lambda(x,\xi)\|$.
This completes the proof.

Now, we obtain the convergence of optimal solutions of problem (37) in the following theorem.

Theorem 16. Let $\{x_k\}$ be a sequence of optimal solutions of problem (37). Then, any accumulation point of $\{x_k\}$ is an optimal solution of problem (35).

Proof. Let $\bar x$ be an accumulation point of $\{x_k\}$. Without loss of generality, we assume that $\{x_k\}$ itself converges to $\bar x$ as $k$ tends to infinity. It is obvious that $\bar x\in S$. At first, we will show that
$$\lim_{k\to\infty}\theta_k(x_k)=\theta(\bar x).\tag{50}$$
From Lemma 12, it suffices to show that
$$\lim_{k\to\infty}\big|\theta_k(x_k)-\theta_k(\bar x)\big|=0.$$
It follows from Lemma 13 and the mean-value theorem that, for every $\xi^j\in\Omega_k$,
$$\big|g(x_k,\xi^j)-g(\bar x,\xi^j)\big|\leq\max_{\lambda\in\Lambda}\big|\phi_\lambda(x_k,\xi^j)-\phi_\lambda(\bar x,\xi^j)\big|=\max_{\lambda\in\Lambda}\big|\nabla_x\phi_\lambda(z_\lambda,\xi^j)^T(x_k-\bar x)\big|,$$
where $z_\lambda=\bar x+t_\lambda(x_k-\bar x)$ with $t_\lambda\in(0,1)$; note that $z_\lambda\in S$ since $S$ is convex. Because $\{x_k\}$ converges to $\bar x$, there exists a constant $C>0$ such that $\|x_k\|\leq C$ for each $k$ and $\|\bar x\|\leq C$; hence $\|z_\lambda\|\leq C$. By the definition of $F_\lambda$, we have $\|F_\lambda(z_\lambda,\xi^j)\|\leq\max_{1\leq i\leq m}\big(C\|A_i(\xi^j)\|+\|b_i(\xi^j)\|\big)$. Hence, for any $\lambda\in\Lambda$ and $\xi^j\in\Omega_k$, Lemmas 14 and 15 imply that
$$\big|\nabla_x\phi_\lambda(z_\lambda,\xi^j)^T(x_k-\bar x)\big|\leq\Big(\big\|F_\lambda(z_\lambda,\xi^j)\big\|+\big\|A_\lambda(\xi^j)-G\big\|\,\big\|z_\lambda-y_\lambda(z_\lambda,\xi^j)\big\|\Big)\|x_k-\bar x\|\leq\Big(1+\tfrac{2}{\alpha}\big(\max_{1\leq i\leq m}\|A_i(\xi^j)\|+\|G\|\big)\Big)\max_{1\leq i\leq m}\big(C\|A_i(\xi^j)\|+\|b_i(\xi^j)\|\big)\,\|x_k-\bar x\|,$$
where the second inequality is from Lemma 15. Thus,
$$\big|\theta_k(x_k)-\theta_k(\bar x)\big|\leq\frac{\|x_k-\bar x\|}{N_k}\sum_{\xi^j\in\Omega_k}\Big(1+\tfrac{2}{\alpha}\big(\max_{1\leq i\leq m}\|A_i(\xi^j)\|+\|G\|\big)\Big)\max_{1\leq i\leq m}\big(C\|A_i(\xi^j)\|+\|b_i(\xi^j)\|\big).$$
By the integrability conditions on $A_i(\xi)$ and $b_i(\xi)$ and Lemma 12, the sample average in the last inequality above converges with probability one to a finite limit, while $\|x_k-\bar x\|\to 0$, and so
$$\lim_{k\to\infty}\big|\theta_k(x_k)-\theta_k(\bar x)\big|=0.$$
Since $\lim_{k\to\infty}\theta_k(\bar x)=\theta(\bar x)$ by Lemma 12, (50) is true.
Now, we are in a position to show that $\bar x$ is an optimal solution of problem (35).
Since $x_k$ solves problem (37) for each $k$, we have that, for any $x\in S$,
$$\theta_k(x_k)\leq\theta_k(x).$$
Letting $k\to\infty$ above, we get from Lemma 12 and (50) that
$$\theta(\bar x)\leq\theta(x),\quad\forall x\in S,$$
which means that $\bar x$ is an optimal solution of problem (35). This completes the proof.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11171237, 11101069, and 71101099), by the Construction Foundation of Southwest University for Nationalities for the subject of applied economics (2011XWD-S0202), and by Sichuan University (SKQY201330).