Abstract

Monotone variational inequalities capture a variety of concrete applications arising in many areas. In this paper, we develop a new prediction-correction method for monotone variational inequalities with separable structure. The new method is easy to implement, and the main computational effort in each iteration is to evaluate the proximal mappings of the involved operators. At each iteration, the algorithm also allows the involved sub-variational inequalities to be solved in parallel. We establish the global convergence of the proposed method. Preliminary numerical results show that the new method can be competitive with Chen's proximal-based decomposition method in Chen and Teboulle (1994).

1. Introduction

The variational inequality problem (VI$(\Omega, F)$) in the finite-dimensional space $\mathbb{R}^n$ is to determine a vector $x^* \in \Omega$ such that
$$(x - x^*)^T F(x^*) \ge 0, \quad \forall x \in \Omega, \tag{1}$$
where $\Omega \subseteq \mathbb{R}^n$ is a nonempty closed convex subset and $F$ is a continuous mapping from $\mathbb{R}^n$ into itself. The VI$(\Omega, F)$ has found applications in a broad spectrum of areas such as traffic equilibrium [1] and network economic problems [2]. For solving (1), the proximal point algorithm (PPA), which was proposed by Martinet [3] and further studied by Rockafellar [4, 5], generates the new iterate $x^{k+1}$ via the following procedure: find $x^{k+1} \in \Omega$ such that
$$(x - x^{k+1})^T \left( F(x^{k+1}) + R(x^{k+1} - x^k) \right) \ge 0, \quad \forall x \in \Omega, \tag{2}$$
where $R$ is a symmetric positive definite matrix, playing the role of a proximal regularization parameter. Note that the PPA has to solve a system of nonlinear equations at each iteration. In many cases, solving these equations is quite difficult. This difficulty has inspired a burst of approximate versions of the PPA, which solve (2) only approximately under certain "relative error" criteria. These new methods include the well-known extragradient-type methods (EGM) as special cases. Assume that $F$ is Lipschitz continuous; that is, there is $L > 0$ such that
$$\|F(x) - F(y)\| \le L \|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \tag{3}$$
Then at each iteration EGM takes the following general form:
$$\tilde{x}^k = P_\Omega\left[x^k - \beta_k F(x^k)\right], \qquad x^{k+1} = P_\Omega\left[x^k - \beta_k F(\tilde{x}^k)\right], \tag{4}$$
where $P_\Omega[\cdot]$ denotes the projection onto $\Omega$ and $\beta_k \in (0, 1/L)$ is a step size.
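To make the prediction-correction pattern in (4) concrete, the following minimal MATLAB sketch runs EGM on a small affine VI over a box; the mapping $F$, the set $\Omega$, and all parameter values below are illustrative assumptions, not data from this paper.

    % Extragradient sketch for VI(Omega, F) with Omega a box; illustrative data.
    lb = -5*ones(2,1); ub = 5*ones(2,1);       % Omega = {x : lb <= x <= ub}
    F  = @(x) [1 0.5; -0.5 1]*x + [-1; 2];     % monotone affine mapping
    PO = @(x) min(max(x, lb), ub);             % projection onto the box
    x = zeros(2,1); beta = 0.5;                % beta < 1/L, L = ||[1 .5; -.5 1]||
    for k = 1:1000
        xt = PO(x - beta*F(x));                % prediction step of (4)
        if norm(x - xt) < 1e-10, break; end    % residual-based stopping rule
        x  = PO(x - beta*F(xt));               % correction step of (4)
    end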

In this paper, we consider the following variational inequalities: find a vector $u^* = (x^*, y^*) \in \Omega$ such that
$$(x - x^*)^T f(x^*) \ge 0, \qquad (y - y^*)^T g(y^*) \ge 0, \quad \forall (x, y) \in \Omega, \tag{5}$$
with
$$\Omega = \left\{ (x, y) \mid x \in X, \; y \in Y, \; Ax + By = b \right\}, \tag{6}$$
where $X \subseteq \mathbb{R}^{n_1}$ and $Y \subseteq \mathbb{R}^{n_2}$ are nonempty closed convex subsets, $A \in \mathbb{R}^{m \times n_1}$, $B \in \mathbb{R}^{m \times n_2}$, $b \in \mathbb{R}^m$, and $f: \mathbb{R}^{n_1} \to \mathbb{R}^{n_1}$ and $g: \mathbb{R}^{n_2} \to \mathbb{R}^{n_2}$ are monotone operators. Problem (5) is referred to as a structured variational inequality (SVI) [6].

By attaching a Lagrange multiplier vector $\lambda \in \mathbb{R}^m$ to the linear constraints $Ax + By = b$, the VI problem (5) is converted into the following form: find $w^* = (x^*, y^*, \lambda^*) \in \mathcal{W}$ such that
$$\begin{cases} (x - x^*)^T \left[ f(x^*) - A^T \lambda^* \right] \ge 0, \\ (y - y^*)^T \left[ g(y^*) - B^T \lambda^* \right] \ge 0, \\ (\lambda - \lambda^*)^T \left( Ax^* + By^* - b \right) \ge 0, \end{cases} \quad \forall w = (x, y, \lambda) \in \mathcal{W}, \tag{7}$$
where
$$\mathcal{W} = X \times Y \times \mathbb{R}^m. \tag{8}$$
The compact form is
$$(w - w^*)^T Q(w^*) \ge 0, \quad \forall w \in \mathcal{W}, \tag{9}$$
with
$$Q(w) = \begin{pmatrix} f(x) - A^T \lambda \\ g(y) - B^T \lambda \\ Ax + By - b \end{pmatrix}. \tag{10}$$

For solving (9), the proximal alternating directions method (PADM) generates the new iterate $w^{k+1} = (x^{k+1}, y^{k+1}, \lambda^{k+1})$ as follows [7, 8]: first find an $x^{k+1} \in X$ such that
$$(x' - x^{k+1})^T \left\{ f(x^{k+1}) - A^T \left[ \lambda^k - \beta (Ax^{k+1} + By^k - b) \right] + R(x^{k+1} - x^k) \right\} \ge 0, \quad \forall x' \in X. \tag{11}$$
Then find a $y^{k+1} \in Y$ such that
$$(y' - y^{k+1})^T \left\{ g(y^{k+1}) - B^T \left[ \lambda^k - \beta (Ax^{k+1} + By^{k+1} - b) \right] + S(y^{k+1} - y^k) \right\} \ge 0, \quad \forall y' \in Y. \tag{12}$$
Finally, update $\lambda^{k+1}$ via
$$\lambda^{k+1} = \lambda^k - \beta (Ax^{k+1} + By^{k+1} - b). \tag{13}$$
Here $R$ and $S$ are given symmetric positive semidefinite proximal matrices, and $\beta > 0$ is a given penalty parameter for the linear constraints. Note that when $R = 0$ and $S = 0$ in (11)-(12), the classical alternating directions method (ADM) is recovered. To make the PADM (11)–(13) more efficient and flexible, some strategies have been developed: for example, allowing $\beta$, $R$, and $S$ to vary from iteration to iteration according to certain strategies [8–10], or producing the new iterate via a minor correction to a predictor $\tilde{w}^k$. A simple and effective correction scheme is (see, e.g., [11, 12])
$$w^{k+1} = w^k - \alpha_k (w^k - \tilde{w}^k), \tag{14}$$
where $\alpha_k > 0$ is a chosen step size.
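As an illustration of (11)–(13), the following MATLAB sketch performs PADM sweeps for the special case $X = \mathbb{R}^{n_1}$, $Y = \mathbb{R}^{n_2}$ with affine maps $f(x) = Cx + c$ and $g(y) = Dy + d$, so that each subproblem reduces to a linear solve; all data and parameter values are illustrative assumptions.

    % PADM sweeps (11)-(13) when X, Y are the whole spaces and f, g are affine;
    % each VI subproblem then becomes a linear system. Illustrative data only.
    n1 = 8; n2 = 8; m = 4; rng(0);
    E = randn(n1); C = E'*E + eye(n1); c = randn(n1,1);   % f(x) = C*x + c, monotone
    E = randn(n2); D = E'*E + eye(n2); d = randn(n2,1);   % g(y) = D*y + d, monotone
    A = randn(m,n1); B = randn(m,n2); b = randn(m,1);
    beta = 1; R = eye(n1); S = eye(n2);                   % penalty and proximal terms
    x = zeros(n1,1); y = zeros(n2,1); lam = zeros(m,1);
    for k = 1:3000
        % (11): (C + beta*A'A + R)*x_new = A'*lam - beta*A'*(B*y - b) + R*x - c
        xn = (C + beta*(A'*A) + R) \ (A'*lam - beta*A'*(B*y - b) + R*x - c);
        % (12): uses the just-computed x_new (alternating order)
        yn = (D + beta*(B'*B) + S) \ (B'*lam - beta*B'*(A*xn - b) + S*y - d);
        ln = lam - beta*(A*xn + B*yn - b);                % (13)
        if norm([xn - x; yn - y; ln - lam]) < 1e-9, break; end
        x = xn; y = yn; lam = ln;
    end
    fprintf('PADM: iter %d, ||Ax+By-b|| = %.2e\n', k, norm(A*x + B*y - b));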

The PADM (11)–(13) is often easy to implement under the assumption that the decomposed subproblems have closed-form solutions or can be solved efficiently to high precision. However, in some cases the matrices $A$ and $B$ are not identity matrices, and the two subproblems (11)-(12) are difficult to solve because of the coupling terms $A^T A$ and $B^T B$ introduced by the augmented Lagrangian, whose presence can make the evaluation of the subproblem solutions costly. To overcome this difficulty, we propose a new implementable prediction-correction method for the SVI. At each iteration, we first decompose the problem into two small problems with respect to $x$ and $y$, respectively. The two subproblems are both easy to solve under the assumption that the resolvent operators of $f$ and $g$ are easy to evaluate, where the resolvent operator of a mapping $T$ is defined as
$$J_T^\rho(v) := (I + \rho T)^{-1}(v), \quad \rho > 0.$$
Then, we update the Lagrange multipliers and make a correction step to ensure the algorithm's convergence.
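For instance, when $T$ is an affine monotone operator $T(v) = Cv + c$, evaluating the resolvent amounts to a single linear solve, as in the following MATLAB sketch (the data are illustrative assumptions):

    % Resolvent J_T(v) = (I + rho*T)^{-1}(v) for the affine monotone T(v) = C*v + c.
    n = 4; rho = 0.8; rng(1);
    E = randn(n); C = E'*E + eye(n);        % C has positive definite symmetric part
    c = randn(n,1); v = randn(n,1);
    u = (eye(n) + rho*C) \ (v - rho*c);     % u solves u + rho*(C*u + c) = v
    fprintf('resolvent residual: %.2e\n', norm(u + rho*(C*u + c) - v));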

The SVI has been studied extensively both in theoretical frameworks and in applications. Recently, Han [13] proposed a hybrid entropic proximal decomposition method for the SVI, based on logarithmic-quadratic functions and combined with a self-adaptive strategy. He [14] presented a parallel splitting augmented Lagrangian method which can be extended to solve systems of equilibrium problems with three separable operators. Xu et al. [15] proposed two classes of correction methods for the SVI in which the underlying mapping does not have an explicit form. Besides, Xu and Wu [16] also studied a class of linearized proximal alternating direction methods and showed that the relaxation factor can have the same restriction region as for the general ADM. Yuan and Li [17] developed a logarithmic-quadratic-proximal- (LQP-) based decomposition method by applying LQP terms to regularize the ADM subproblems; then Bnouhachem et al. [18] studied a new inexact LQP alternating direction method by solving a series of related systems of nonlinear equations.

The rest of this paper is organized as follows. In Section 2, we review some preliminaries which are useful for further analysis. In Section 3, we present the new implementable prediction-correction method for SVI, and the global convergence result is established. Numerical experiments and some conclusions are addressed in Sections 4 and 5, respectively.

2. Preliminaries

In this section, we make some standard assumptions and summarize some basic properties of VI which will be used in the subsequent discussions.

Assumption. (A1) $X$ and $Y$ are simple closed convex sets. A set $\mathcal{K}$ is said to be simple if the projection onto it is easy to compute, where the projection of a point $v$ onto the closed convex set $\mathcal{K}$, denoted by $P_{\mathcal{K}}(v)$, is defined as the nearest point $u \in \mathcal{K}$ to $v$; that is,
$$P_{\mathcal{K}}(v) = \operatorname{argmin} \left\{ \|u - v\| \mid u \in \mathcal{K} \right\}. \tag{15}$$
(A2) The mapping $Q$ is point-to-point, monotone, and continuous. A mapping $F$ is said to be monotone on $\Omega$ if
$$(u - v)^T \left( F(u) - F(v) \right) \ge 0, \quad \forall u, v \in \Omega. \tag{16}$$
(A3) The solution set of SVI$(\mathcal{W}, Q)$, denoted by $\mathcal{W}^*$, is nonempty.

Properties. Let $G$ be a symmetric positive definite matrix; the $G$-norm of the vector $x$ is denoted by $\|x\|_G := \sqrt{x^T G x}$. In particular, when $G = I$, $\|x\|$ is the Euclidean norm of $x$. For a matrix $A$, $\|A\|$ denotes its norm $\|A\| := \max \left\{ \|Ax\| \mid \|x\| \le 1 \right\}$.

The following well-known properties of the projection operator will be used in the coming analysis.

Lemma 1. Let $\mathcal{K}$ be a nonempty closed convex set, and let $P_{\mathcal{K}, G}(\cdot)$ be the projection operator onto $\mathcal{K}$ under the $G$-norm. Then
$$\left( v - P_{\mathcal{K}, G}(v) \right)^T G \left( P_{\mathcal{K}, G}(v) - u \right) \ge 0, \quad \forall v, \; \forall u \in \mathcal{K}. \tag{17}$$
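A quick numerical sanity check of (17) in MATLAB for a box $\mathcal{K}$ and a diagonal positive definite $G$ (for diagonal $G$, the $G$-projection onto a box is still componentwise clipping); all data are illustrative assumptions:

    % Check (v - P(v))' * G * (P(v) - u) >= 0 for a box K and diagonal SPD G.
    n = 5; rng(2);
    G  = diag(1 + rand(n,1));
    lb = -ones(n,1); ub = ones(n,1);
    PK = @(v) min(max(v, lb), ub);              % G-projection onto the box (separable)
    v  = 3*randn(n,1); u = PK(10*randn(n,1));   % u is some point of K
    fprintf('inner product = %.3e (nonnegative)\n', (v - PK(v))'*G*(PK(v) - u));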

For an arbitrary positive scalar $\beta$ and $w \in \mathcal{W}$, let $e(w, \beta)$ denote the residual function associated with the mapping $Q$; that is,
$$e(w, \beta) = w - P_{\mathcal{W}} \left[ w - \beta Q(w) \right]. \tag{18}$$
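In MATLAB, the residual (18) can be evaluated blockwise, since $P_{\mathcal{W}}$ acts separately on the $x$-, $y$-, and $\lambda$-blocks of $\mathcal{W} = X \times Y \times \mathbb{R}^m$; the maps and sets below are illustrative stand-ins for the data in (10):

    % Evaluating the residual e(w, beta) of (18) at a given w = (x, y, lam).
    n1 = 3; n2 = 3; m = 2; rng(0); beta = 1;
    f = @(x) x + 1; g = @(y) 2*y - 1;                % simple monotone maps
    A = randn(m,n1); B = randn(m,n2); b = randn(m,1);
    projX = @(v) max(v, 0); projY = @(v) max(v, 0);  % X = Y = nonnegative orthant
    x = rand(n1,1); y = rand(n2,1); lam = randn(m,1);
    e = [x - projX(x - beta*(f(x) - A'*lam));        % x-block of e(w, beta)
         y - projY(y - beta*(g(y) - B'*lam));        % y-block of e(w, beta)
         beta*(A*x + B*y - b)];                      % lambda-block: P onto R^m is identity
    fprintf('||e(w,beta)|| = %.3e (zero iff w solves (7))\n', norm(e));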

Lemma 2. $w^*$ is a solution of the SVI$(\mathcal{W}, Q)$ if and only if $e(w^*, \beta) = 0$ for any given positive constant $\beta > 0$ (see [2, page 267]).

Lemma 3. Solving SVI$(\mathcal{W}, Q)$ (7) is equivalent to finding a zero point of the mapping
$$e(\cdot, \beta): \mathcal{W} \to \mathbb{R}^{n_1 + n_2 + m}, \qquad w \mapsto w - P_{\mathcal{W}} \left[ w - \beta Q(w) \right]. \tag{19}$$

3. The New Algorithm

In this section, we present a new prediction-correction method for SVI$(\mathcal{W}, Q)$ and show its global convergence. To begin with, in order to make the description of the algorithm more succinct, we first define some matrices:
$$M = \begin{pmatrix} r I_{n_1} & 0 & A^T \\ 0 & s I_{n_2} & B^T \\ 0 & 0 & \beta I_m \end{pmatrix}, \qquad G = \begin{pmatrix} r I_{n_1} & 0 & 0 \\ 0 & s I_{n_2} & 0 \\ 0 & 0 & \beta I_m \end{pmatrix}, \tag{20}$$
where $r, s, \beta > 0$ are parameters specified below.

Obviously, $G$ is a symmetric positive definite matrix whenever $r > 0$, $s > 0$, and $\beta > 0$, and we also have $\|w\|_G^2 = r\|x\|^2 + s\|y\|^2 + \beta\|\lambda\|^2$.

3.1. Description of the Algorithm

Algorithm 4. A prediction-correction-based algorithm for the SVI$(\mathcal{W}, Q)$.

Phase 1 (initialization step). Given a small number $\varepsilon > 0$, let $\gamma \in (0, 2)$; the matrices $M$ and $G$ are defined in (20). Take $w^0 = (x^0, y^0, \lambda^0) \in \mathcal{W}$; set $k = 0$. Choose the parameters $r$, $s$, and $\beta$ such that
$$r\beta > 2\|A\|^2, \qquad s\beta > 2\|B\|^2. \tag{21}$$

Phase 2 (prediction step). Generate the predictor $\tilde{x}^k$ via solving the following projection equation:
$$\tilde{x}^k = P_X \left\{ x^k - \frac{1}{r} \left[ f(\tilde{x}^k) - A^T \lambda^k \right] \right\}. \tag{22}$$
Then find a $\tilde{y}^k \in Y$ such that
$$\tilde{y}^k = P_Y \left\{ y^k - \frac{1}{s} \left[ g(\tilde{y}^k) - B^T \lambda^k \right] \right\}. \tag{23}$$
Finally, update $\tilde{\lambda}^k$ via
$$\tilde{\lambda}^k = \lambda^k - \frac{1}{\beta} \left( A\tilde{x}^k + B\tilde{y}^k - b \right). \tag{24}$$

Phase 3 (correction step). Correct the predictor, and generate the new iterate $w^{k+1}$ via
$$w^{k+1} = w^k - \gamma \alpha_k G^{-1} M (w^k - \tilde{w}^k), \tag{25}$$
where
$$\alpha_k = \frac{(w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k)}{\left\| G^{-1} M (w^k - \tilde{w}^k) \right\|_G^2}. \tag{26}$$

Phase 4 (convergence verification). If $\|w^k - \tilde{w}^k\|_\infty < \varepsilon$, stop; otherwise set $k := k + 1$; go to Phase 2.
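Under the formulas (22)–(26) as reconstructed above, and specializing to $X = \mathbb{R}^{n_1}$, $Y = \mathbb{R}^{n_2}$ with affine maps $f(x) = Cx + c$ and $g(y) = Dy + d$ (so that the implicit projection equations (22)-(23) reduce to linear solves), one possible MATLAB sketch of Algorithm 4 reads as follows; all data and parameter values are illustrative assumptions:

    % Sketch of Algorithm 4 for X = R^{n1}, Y = R^{n2}, f(x) = C*x + c, g(y) = D*y + d.
    n1 = 10; n2 = 10; m = 5; rng(1);
    E = randn(n1); C = E'*E + eye(n1); c = randn(n1,1);    % f monotone affine
    E = randn(n2); D = E'*E + eye(n2); d = randn(n2,1);    % g monotone affine
    A = randn(m,n1); B = randn(m,n2); b = randn(m,1);
    r = 2; s = 2; gamma = 1.8; eps0 = 1e-7;
    beta = 2*(norm(A)^2/r + norm(B)^2/s) + 1;              % enforces condition (21)
    x = zeros(n1,1); y = zeros(n2,1); lam = zeros(m,1);
    for k = 1:20000
        % prediction (22)-(24); with X = R^{n1}, (22) is (r*I + C)*xt = r*x + A'*lam - c
        xt = (r*eye(n1) + C) \ (r*x + A'*lam - c);
        yt = (s*eye(n2) + D) \ (s*y + B'*lam - d);         % (23), parallel to (22)
        lt = lam - (A*xt + B*yt - b)/beta;                 % (24)
        dx = x - xt; dy = y - yt; dl = lam - lt;
        if max(abs([dx; dy; dl])) < eps0, break; end       % Phase 4
        % correction (25)-(26): w = w - gamma*alpha*G^{-1}*M*(w - wt)
        Mx = r*dx + A'*dl; My = s*dy + B'*dl; Ml = beta*dl;  % blocks of M*(w - wt)
        phi = dx'*Mx + dy'*My + dl'*Ml;                    % (w - wt)'*M*(w - wt)
        den = (Mx'*Mx)/r + (My'*My)/s + (Ml'*Ml)/beta;     % ||G^{-1}*M*(w - wt)||_G^2
        alpha = phi/den;                                   % (26)
        x = x - gamma*alpha*(Mx/r);                        % (25), blockwise
        y = y - gamma*alpha*(My/s);
        lam = lam - gamma*alpha*(Ml/beta);
    end
    fprintf('iter %d, primal %.1e, dual %.1e\n', k, ...
        norm(A*x + B*y - b), norm(C*x + c - A'*lam));

Note that the fixed matrices $rI + C$ and $sI + D$ could be factorized once outside the loop, and that $\tilde{x}^k$ and $\tilde{y}^k$ can be computed in parallel, as pointed out in Remark 5 below.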

Remark 5. Note that (22) does not involve $\tilde{y}^k$ and that (23) is independent of the $\tilde{x}^k$ generated by (22). Hence the two projections (22) and (23) are eligible for parallel computation.

Remark 6. It is easy to check that $\tilde{w}^k$ is a solution of SVI$(\mathcal{W}, Q)$ if and only if $x^k = \tilde{x}^k$, $y^k = \tilde{y}^k$, and $\lambda^k = \tilde{\lambda}^k$. Thus, it is reasonable to take the magnitude of $\|w^k - \tilde{w}^k\|$ as the stopping criterion.

Remark 7. The strategy of choosing the step size $\alpha_k$ in the correction step, which coincides with the strategy in He's papers (see, e.g., [19]), will be explained in detail in the following subsection.

Remark 8. Our method and the methods proposed in [6, 15, 20] all fall in the prediction-correction algorithmic framework: at each iteration, a prediction step produces a predictor, and a correction step generates the new iterate by correcting this predictor.

3.2. Contractive Properties

Now, we start to prove some properties of the sequence $\{w^k\}$. The first lemma quantifies the discrepancy between the predictor $\tilde{w}^k$ and a solution point of SVI$(\mathcal{W}, Q)$.

Lemma 9. Let $\tilde{w}^k = (\tilde{x}^k, \tilde{y}^k, \tilde{\lambda}^k)$ be generated by (22)–(24), and let the matrix $M$ be given in (20). Then one has
$$(w - \tilde{w}^k)^T Q(\tilde{w}^k) \ge (w - \tilde{w}^k)^T M (w^k - \tilde{w}^k), \quad \forall w \in \mathcal{W}. \tag{27}$$

Proof. Note that $\tilde{x}^k$, $\tilde{y}^k$, and $\tilde{\lambda}^k$ generated by (22)–(24) are actually solutions of the following VIs:
$$(x - \tilde{x}^k)^T \left\{ f(\tilde{x}^k) - A^T \lambda^k + r(\tilde{x}^k - x^k) \right\} \ge 0, \quad \forall x \in X, \tag{28}$$
$$(y - \tilde{y}^k)^T \left\{ g(\tilde{y}^k) - B^T \lambda^k + s(\tilde{y}^k - y^k) \right\} \ge 0, \quad \forall y \in Y, \tag{29}$$
$$(\lambda - \tilde{\lambda}^k)^T \left\{ A\tilde{x}^k + B\tilde{y}^k - b + \beta(\tilde{\lambda}^k - \lambda^k) \right\} \ge 0, \quad \forall \lambda \in \mathbb{R}^m. \tag{30}$$
Combining (28)–(30) together, we have
$$(w - \tilde{w}^k)^T \left\{ Q(\tilde{w}^k) - \begin{pmatrix} r(x^k - \tilde{x}^k) + A^T(\lambda^k - \tilde{\lambda}^k) \\ s(y^k - \tilde{y}^k) + B^T(\lambda^k - \tilde{\lambda}^k) \\ \beta(\lambda^k - \tilde{\lambda}^k) \end{pmatrix} \right\} \ge 0, \quad \forall w \in \mathcal{W}. \tag{31}$$
Using the notations of $Q$ (see (10)) and $M$ (see (20)), the earlier inequality can be rewritten into
$$(w - \tilde{w}^k)^T Q(\tilde{w}^k) \ge (w - \tilde{w}^k)^T M (w^k - \tilde{w}^k), \quad \forall w \in \mathcal{W}. \tag{32}$$
The assertion (27) is thus proved.

The following lemma plays a key role in proving the convergence of the algorithm.

Lemma 10. Let the matrices $M$, $G$ be defined in (20). If the parameters $r$, $s$, and $\beta$ in (22)–(24) satisfy
$$r\beta > 2\|A\|^2, \qquad s\beta > 2\|B\|^2 \tag{33}$$
(i.e., condition (21)), then for the matrix $M$ in (27), one has
$$(w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k) \ge c_0 \|w^k - \tilde{w}^k\|_G^2, \tag{34}$$
with
$$c_0 := \min\left\{ \frac{1}{2}, \; 1 - \frac{\|A\|^2}{2r\beta} - \frac{\|B\|^2}{2s\beta} \right\} = \frac{1}{2}. \tag{35}$$

Proof. For any $w = (x, y, \lambda)$, we have
$$w^T M w = r\|x\|^2 + s\|y\|^2 + \beta\|\lambda\|^2 + \lambda^T A x + \lambda^T B y. \tag{36}$$
According to the Cauchy-Schwarz inequality, we get
$$\lambda^T A x \ge -\frac{r}{2}\|x\|^2 - \frac{\|A\|^2}{2r}\|\lambda\|^2, \qquad \lambda^T B y \ge -\frac{s}{2}\|y\|^2 - \frac{\|B\|^2}{2s}\|\lambda\|^2. \tag{37}$$
With the $c_0$ defined in (35), we have
$$\frac{r}{2} \ge c_0 r, \qquad \frac{s}{2} \ge c_0 s, \qquad \beta - \frac{\|A\|^2}{2r} - \frac{\|B\|^2}{2s} \ge c_0 \beta. \tag{38}$$
Substituting (38) into (37), combining (36), and setting $w = w^k - \tilde{w}^k$, the assertion (34) is proved.

Lemma 11. Suppose that $w^*$ is a solution point of (9) and the predictor $\tilde{w}^k$ is corrected by an undetermined step size denoted by $\alpha > 0$ instead of (26); that is,
$$w^{k+1}(\alpha) = w^k - \alpha G^{-1} M (w^k - \tilde{w}^k). \tag{39}$$
Then one has
$$\|w^k - w^*\|_G^2 - \|w^{k+1}(\alpha) - w^*\|_G^2 \ge \Theta(\alpha), \tag{40}$$
where
$$\Theta(\alpha) := 2\alpha (w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k) - \alpha^2 \left\| G^{-1} M (w^k - \tilde{w}^k) \right\|_G^2. \tag{41}$$

Proof. One can see that
$$\|w^{k+1}(\alpha) - w^*\|_G^2 = \|w^k - w^*\|_G^2 - 2\alpha (w^k - w^*)^T M (w^k - \tilde{w}^k) + \alpha^2 \left\| G^{-1} M (w^k - \tilde{w}^k) \right\|_G^2. \tag{42}$$
On the other hand, since $\tilde{w}^k \in \mathcal{W}$, using the monotonicity of $Q$ and Lemma 9 (with $w = w^*$), we have
$$(w^k - w^*)^T M (w^k - \tilde{w}^k) \ge (w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k). \tag{43}$$
Combining (42)-(43) together, we have
$$\|w^k - w^*\|_G^2 - \|w^{k+1}(\alpha) - w^*\|_G^2 \ge 2\alpha (w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k) - \alpha^2 \left\| G^{-1} M (w^k - \tilde{w}^k) \right\|_G^2 = \Theta(\alpha). \tag{44}$$
Thus, $\Theta(\alpha)$ is a lower bound of $\|w^k - w^*\|_G^2 - \|w^{k+1}(\alpha) - w^*\|_G^2$ for any $\alpha > 0$.

Remark 12. Note that $\Theta(\alpha)$ is a quadratic function of $\alpha$ and it reaches its maximum at
$$\alpha_k^* = \frac{(w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k)}{\left\| G^{-1} M (w^k - \tilde{w}^k) \right\|_G^2}. \tag{45}$$
Hence, it is reasonable to use the step size strategy (26). The parameter $\gamma$ in (25) plays the role of a relaxation or scaling parameter. We can easily see that $\gamma \in (0, 2)$ can ensure convergence.
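Indeed, substituting $\alpha = \gamma \alpha_k^*$ into the expression (41) for $\Theta$ and writing $\varphi_k := (w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k)$, so that $\alpha_k^* \left\| G^{-1} M (w^k - \tilde{w}^k) \right\|_G^2 = \varphi_k$ by (45), a one-line computation gives
$$\Theta(\gamma \alpha_k^*) = 2\gamma \alpha_k^* \varphi_k - \gamma^2 (\alpha_k^*)^2 \left\| G^{-1} M (w^k - \tilde{w}^k) \right\|_G^2 = \gamma (2 - \gamma) \, \alpha_k^* \varphi_k,$$
which is positive precisely for $0 < \gamma < 2$.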

Now, we prove the Fejér monotonicity of the iterative sequence $\{w^k\}$ generated by the algorithm.

Theorem 13. Suppose that $w^*$ is a solution point of (9) and the sequences $\{w^k\}$ and $\{\tilde{w}^k\}$ are generated by the algorithm. Then
$$\|w^{k+1} - w^*\|_G^2 \le \|w^k - w^*\|_G^2 - \frac{\gamma(2 - \gamma)}{4 c_1} \|w^k - \tilde{w}^k\|_G^2, \tag{46}$$
where $c_1 := \left\| G^{-1/2} M G^{-1/2} \right\|^2$.

Proof. According to Lemma 11 (with $\alpha = \gamma \alpha_k$),
$$\|w^{k+1} - w^*\|_G^2 \le \|w^k - w^*\|_G^2 - \Theta(\gamma \alpha_k). \tag{47}$$
Moreover, it follows from (26) that the step size
$$\alpha_k = \frac{(w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k)}{\left\| G^{-1} M (w^k - \tilde{w}^k) \right\|_G^2} \ge \frac{(w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k)}{c_1 \|w^k - \tilde{w}^k\|_G^2}. \tag{48}$$
Based on the conditions (33) and Lemma 10, we have
$$(w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k) \ge \frac{1}{2} \|w^k - \tilde{w}^k\|_G^2. \tag{49}$$
Hence,
$$\Theta(\gamma \alpha_k) = \gamma(2 - \gamma) \alpha_k (w^k - \tilde{w}^k)^T M (w^k - \tilde{w}^k) \ge \frac{\gamma(2 - \gamma)}{4 c_1} \|w^k - \tilde{w}^k\|_G^2. \tag{50}$$
Substituting (50) into (47), we have
$$\|w^{k+1} - w^*\|_G^2 \le \|w^k - w^*\|_G^2 - \frac{\gamma(2 - \gamma)}{4 c_1} \|w^k - \tilde{w}^k\|_G^2. \tag{51}$$
Thus, we obtain the assertion of this theorem.

Based on the earlier results, we are now ready to prove the global convergence of the algorithm.

Theorem 14. The sequence $\{w^k\}$ generated by the proposed algorithm converges to a solution of SVI$(\mathcal{W}, Q)$.

Proof. We prove the convergence of the proposed algorithm by following the standard analytic framework of contraction-type methods. It follows from (46) that $\{w^k\}$ is bounded, and we have that
$$\sum_{k=0}^{\infty} \frac{\gamma(2 - \gamma)}{4 c_1} \|w^k - \tilde{w}^k\|_G^2 \le \|w^0 - w^*\|_G^2 < +\infty. \tag{52}$$
Consequently,
$$\lim_{k \to \infty} \|w^k - \tilde{w}^k\| = 0. \tag{53}$$
Since (see (22) and (23)) the predictor satisfies
$$\tilde{x}^k = P_X \left\{ x^k - \frac{1}{r} \left[ f(\tilde{x}^k) - A^T \lambda^k \right] \right\}, \qquad \tilde{y}^k = P_Y \left\{ y^k - \frac{1}{s} \left[ g(\tilde{y}^k) - B^T \lambda^k \right] \right\}, \tag{54}$$
it follows from (53) that
$$\lim_{k \to \infty} \left\| \tilde{w}^k - P_{\mathcal{W}} \left[ \tilde{w}^k - Q(\tilde{w}^k) \right] \right\| = 0. \tag{55}$$
Because $\{\tilde{w}^k\}$ is also bounded, it has at least one cluster point. Let $w^\infty$ be a cluster point of $\{\tilde{w}^k\}$, and let $\{\tilde{w}^{k_j}\}$ be the subsequence converging to $w^\infty$. It follows from (55) that
$$\lim_{j \to \infty} \left\| \tilde{w}^{k_j} - P_{\mathcal{W}} \left[ \tilde{w}^{k_j} - Q(\tilde{w}^{k_j}) \right] \right\| = 0. \tag{56}$$
Consequently, using the continuity of $Q$ and the projection operator $P_{\mathcal{W}}$, we have
$$w^\infty = P_{\mathcal{W}} \left[ w^\infty - Q(w^\infty) \right], \quad \text{that is,} \quad e(w^\infty, 1) = 0, \tag{57}$$
and hence $w^\infty$ is a solution of SVI$(\mathcal{W}, Q)$ by Lemma 2.
On the other hand, since $w^\infty \in \mathcal{W}^*$, it follows from (46) that
$$\|w^{k+1} - w^\infty\|_G \le \|w^k - w^\infty\|_G, \quad \forall k \ge 0. \tag{58}$$
By taking limits over the subsequence $\{\tilde{w}^{k_j}\}$ and using (53), we have that $\{w^{k_j}\}$ also converges to $w^\infty$; combining this with (58), the whole sequence $\{w^k\}$ converges to $w^\infty$, which is a solution of SVI$(\mathcal{W}, Q)$.

4. Numerical Experiments

In this section, we present some numerical results to show the effectiveness of the proposed algorithm. The codes were run on a notebook computer with an Intel(R) Core(TM) 2 CPU (2.0 GHz) and 2.00 GB of RAM under MATLAB version 2009b.

We consider the following optimization problem:
$$\min \left\{ \frac{1}{2} x^T H_1 x + q_1^T x + \frac{1}{2} y^T H_2 y + q_2^T y \;\middle|\; Ax + By = b, \; x \in \mathbb{R}^{n_1}, \; y \in \mathbb{R}^{n_2} \right\}, \tag{59}$$
where $H_1 \in \mathbb{R}^{n_1 \times n_1}$ and $H_2 \in \mathbb{R}^{n_2 \times n_2}$ are symmetric positive semidefinite matrices, $A \in \mathbb{R}^{m \times n_1}$, $B \in \mathbb{R}^{m \times n_2}$, $b \in \mathbb{R}^m$, and $q_1 \in \mathbb{R}^{n_1}$, $q_2 \in \mathbb{R}^{n_2}$.

Using the KKT conditions, the problem (59) can be converted into the following variational inequality: find $w^* = (x^*, y^*, \lambda^*)$ such that
$$\begin{cases} H_1 x^* + q_1 - A^T \lambda^* = 0, \\ H_2 y^* + q_2 - B^T \lambda^* = 0, \\ A x^* + B y^* - b = 0; \end{cases} \tag{60}$$
that is, problem (9) with $f(x) = H_1 x + q_1$, $g(y) = H_2 y + q_2$, $X = \mathbb{R}^{n_1}$, and $Y = \mathbb{R}^{n_2}$. In this example, we randomly created the input data of the tested collection in the following manner: (i) $H_1$ and $H_2$ were generated randomly as symmetric positive semidefinite matrices with eigenvalues in a prescribed interval; (ii) $A$ and $B$ were generated randomly with bounded singular values, the maximum singular value being 3; (iii) $b$ was generated randomly. According to the data generation, we have $\|A\| = 3$ and $\|B\| = 3$.
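As one illustration, the following MATLAB sketch produces test data of this kind; the eigenvalue interval for $H_1$ and the entry range for $q_1$ are placeholders, not the paper's actual choices:

    % One way to generate H1 (symmetric PSD, eigenvalues in (0, 2)) and A (||A|| = 3).
    n1 = 100; m = 50; rng(0);
    [Q1, ~] = qr(randn(n1));
    H1 = Q1 * diag(2*rand(n1,1)) * Q1';          % eigenvalues drawn from (0, 2)
    [U, ~] = qr(randn(m)); [V, ~] = qr(randn(n1));
    sv = [3; 3*rand(m-1,1)];                     % singular values in (0, 3], max = 3
    A  = U * [diag(sv), zeros(m, n1-m)] * V';    % so that ||A|| = 3
    q1 = 10*rand(n1,1) - 5;                      % q1 with entries in (-5, 5)
    % H2, B, q2, and b are generated analogously.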

To apply (22)–(25) to solve (59), instead of choosing the parameters judiciously, we can simply choose them by taking $r = s$ with $r\beta > 18$ (since $\|A\| = \|B\| = 3$, we have $2\|A\|^2 = 2\|B\|^2 = 18$, so this choice satisfies the requirement (21)). Then, we obtain the following subproblems, which are all easy enough to have closed-form solutions:
$$\tilde{x}^k = (r I + H_1)^{-1} \left( r x^k + A^T \lambda^k - q_1 \right), \qquad \tilde{y}^k = (s I + H_2)^{-1} \left( s y^k + B^T \lambda^k - q_2 \right), \qquad \tilde{\lambda}^k = \lambda^k - \frac{1}{\beta} \left( A \tilde{x}^k + B \tilde{y}^k - b \right). \tag{61}$$

For comparison, we also solve the problem by the parallel decomposition method (denoted by PDM) that has been studied extensively in the literature (e.g., [21, 22]). For PDM, the restrictions on the proximal parameters are the same as for our algorithm. By applying PDM to (59), we obtain the following subproblems, which are also easy enough to have closed-form solutions:
$$x^{k+1} = (r I + H_1 + \beta A^T A)^{-1} \left[ r x^k + A^T \lambda^k - \beta A^T (B y^k - b) - q_1 \right],$$
$$y^{k+1} = (s I + H_2 + \beta B^T B)^{-1} \left[ s y^k + B^T \lambda^k - \beta B^T (A x^k - b) - q_2 \right],$$
$$\lambda^{k+1} = \lambda^k - \beta \left( A x^{k+1} + B y^{k+1} - b \right). \tag{62}$$

We report the numerical experiments by building performance profiles in terms of the number of iterations and the total computational time. Here, we take the same parameters $r$, $s$, $\beta$, and $\gamma$ for the two algorithms. We set the initial vector $w^0 = (x^0, y^0, \lambda^0) = 0$, and the stopping criterion is
$$\|w^k - \tilde{w}^k\|_\infty < \varepsilon. \tag{63}$$

The computational results are given in Table 1 for different choices of the problem dimensions $n_1$, $n_2$, and $m$. We report the number of iterations (Iter.) and the computing time in seconds (CPU(s)) when the mentioned stopping criterion is achieved.

The data in Table 1 indicate clearly that the proposed method is competitive with the classical PDM in [21, 22]: the iteration numbers and the CPU times of the two algorithms are almost the same.

5. Conclusions

In this paper, we proposed a new implementable algorithm for solving monotone variational inequalities with separable structure. At each iteration, the algorithm performs two easily implementable projection steps in parallel to produce a predictor and then makes a simple correction to generate the new iterate. Under some mild conditions, we proved the global convergence of the new method. Numerical experiments show that the proposed method is applicable and effective.