ISRN Mathematical Analysis

Volume 2013 (2013), Article ID 727892, 8 pages

http://dx.doi.org/10.1155/2013/727892

## An Improved Two-Step Method for Generalized Variational Inequalities

School of Management Science, Qufu Normal University, Rizhao, Shandong 276800, China

Received 22 July 2013; Accepted 30 August 2013

Academic Editors: R. D. Chen and Y. Dai

Copyright © 2013 Haibin Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We propose an improved two-step extragradient algorithm for pseudomonotone generalized variational inequalities. It requires only two projections at each iteration and allows different stepsize rules. Moreover, from a geometric point of view, it is shown that the new method takes a long stepsize, which guarantees that the distance from the next iterate to the solution set decreases substantially. Under mild conditions, we show that the method is globally convergent, and the $R$-linear convergence of the method is then proven when a projection-type error bound holds locally.

#### 1. Introduction

Let $F$ be a multivalued mapping from $\mathbb{R}^n$ into $2^{\mathbb{R}^n}$ with nonempty values, and let $K$ be a nonempty, closed, and convex subset of the Euclidean space $\mathbb{R}^n$. The generalized variational inequality, abbreviated as GVI, is to find a vector $x^* \in K$ such that there exists $\xi^* \in F(x^*)$ satisfying
$$\langle \xi^*, y - x^* \rangle \ge 0 \quad \text{for all } y \in K, \tag{1}$$
where $\langle \cdot, \cdot \rangle$ stands for the inner product of vectors in $\mathbb{R}^n$. The solution set of problem (1) is denoted by $S$. If the multivalued mapping $F$ is a single-valued mapping from $K$ to $\mathbb{R}^n$, then the GVI collapses to the classical variational inequality problem [1–4].

The GVI plays a significant role in economics, transportation equilibrium, engineering sciences, and related fields, and it has received considerable attention in the past decades [1, 2, 5–11]. Solution methods for the GVI have been studied extensively, and they can be roughly divided into two popular approaches to the solution existence problem. The first is the analytic approach: instead of solving the problem directly, it reformulates the GVI as a well-studied mathematical problem and then invokes an existence theorem for the latter [12]. The second is the constructive approach, in which existence is verified through the behavior of the proposed method; this is the approach considered in this paper.

To the best of our knowledge, the extragradient method [2, 13], proposed by Korpelevich [13], is a popular constructive approach. It has been proved that the method has a contraction property; that is, the sequence $\{x^k\}$ generated by the method satisfies $\|x^{k+1} - x^*\| \le \|x^k - x^*\|$ for any solution $x^*$ of the GVI. It should be noted that the proximal point algorithm also possesses this property [14].

In [15], the authors proposed a new type of extragradient projection method for variational inequalities (VI). The method in [15] requires only two projections at each iteration and allows different stepsize rules. Moreover, it was shown that this method takes a long stepsize, which guarantees that the distance from the next iterate to the solution set decreases substantially. Some elementary numerical experiments showed its efficiency. A question then arises naturally: since the GVI is an extension of the VI, can this theory be extended to the GVI? This constitutes the main motivation of the paper.

In this paper, inspired by [15], we present an improved extragradient method for the GVI. Under mild conditions, we first show that the sequence generated by the proposed method converges globally to a solution of the problem, and we then show that the method is $R$-linearly convergent if, in addition, a projection-type error bound holds locally. The rest of this paper is organized as follows. In Section 2, we give some related concepts and conclusions needed in the subsequent analysis. In Section 3, we present the designed algorithm and establish its convergence and convergence rate.

#### 2. Preliminaries

In this section, we first give some related concepts and conclusions which are useful in the subsequent analysis. Let $x \in \mathbb{R}^n$ and let $K$ be a nonempty closed convex set in $\mathbb{R}^n$. A point $z \in K$ is said to be the orthogonal projection of $x$ onto $K$ if it is the closest point to $x$ in $K$; that is, $z = \arg\min\{\|y - x\| : y \in K\}$, and we denote it by $P_K(x)$. The well-known properties of the projection operator are as follows.
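As a concrete illustration (not part of the original analysis), when $K$ is a box the orthogonal projection is a componentwise clip; the following sketch checks the variational characterization of the projection numerically on a few sample points.

```python
import numpy as np

def project_box(x, lo, hi):
    """Orthogonal projection onto the box K = [lo, hi]^n: componentwise clipping."""
    return np.clip(x, lo, hi)

x = np.array([2.0, -3.0, 0.5])
p = project_box(x, -1.0, 1.0)   # closest point to x in K = [-1, 1]^3

# Variational characterization: <x - p, y - p> <= 0 for every y in K,
# checked here on a few sample points of K.
for y in [np.zeros(3), np.array([1.0, -1.0, 1.0]), np.array([-1.0, 1.0, 0.0])]:
    assert np.dot(x - p, y - p) <= 1e-12
```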

Lemma 1 (see [16]). *Let $K$ be a nonempty, closed, and convex subset of $\mathbb{R}^n$. Then, for any $x, y \in \mathbb{R}^n$ and $z \in K$, the following statements hold:*
*(i) $\langle P_K(x) - x, z - P_K(x) \rangle \ge 0$;*
*(ii) $\|P_K(x) - P_K(y)\| \le \|x - y\|$;*
*(iii) $\|P_K(x) - z\|^2 \le \|x - z\|^2 - \|P_K(x) - x\|^2$;*
*(iv) $\|x - P_K(x)\| \le \|x - z\|$.*

*Remark 2. *In fact, (i) in Lemma 1 also provides a sufficient condition for a vector $z \in K$ to be the projection of the vector $x$; that is, $z = P_K(x)$ if and only if $\langle z - x, y - z \rangle \ge 0$ for all $y \in K$.

Lemma 3. *Let $K$ be a nonempty, closed, and convex subset of $\mathbb{R}^n$. For any $x \in \mathbb{R}^n$ and $d \in \mathbb{R}^n$, define
$$\phi(\beta) := \|x - P_K(x - \beta d)\|.$$
Then, $\phi(\beta)$ is nondecreasing for $\beta > 0$.*

Lemma 4. *Let $K$ be a nonempty, closed, and convex subset of $\mathbb{R}^n$. For any $x \in \mathbb{R}^n$ and $d \in \mathbb{R}^n$, define
$$\psi(\beta) := \frac{\|x - P_K(x - \beta d)\|}{\beta}.$$
Then, $\psi(\beta)$ is nonincreasing for $\beta > 0$.*

*Definition 5. *Let $K$ be a nonempty subset of $\mathbb{R}^n$. The multivalued mapping $F: K \to 2^{\mathbb{R}^n}$ is said to be
(i) monotone if and only if $\langle \xi - \eta, x - y \rangle \ge 0$ for any $x, y \in K$, $\xi \in F(x)$, $\eta \in F(y)$;
(ii) pseudomonotone if and only if, for any $x, y \in K$, $\xi \in F(x)$, $\eta \in F(y)$, $\langle \eta, x - y \rangle \ge 0$ implies $\langle \xi, x - y \rangle \ge 0$.
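For intuition (an illustrative example, not from the paper), the scalar map $F(x) = e^{-x}$ is pseudomonotone on $\mathbb{R}$ but not monotone, since it is everywhere positive yet strictly decreasing. A quick numerical check on random pairs:

```python
import numpy as np

F = lambda x: np.exp(-x)   # positive but strictly decreasing on R

rng = np.random.default_rng(0)
pairs = rng.uniform(-3.0, 3.0, size=(500, 2))

# Monotonicity <F(x) - F(y), x - y> >= 0 fails for some pair:
assert any((F(x) - F(y)) * (x - y) < 0 for x, y in pairs)

# Pseudomonotonicity holds: <F(y), x - y> >= 0  implies  <F(x), x - y> >= 0.
assert all(F(x) * (x - y) >= 0 for x, y in pairs if F(y) * (x - y) >= 0)
```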

To proceed, we need the following definition for a multivalued mapping $F$.

*Definition 6. *Let $K$ be a nonempty, closed, and convex subset of $\mathbb{R}^n$. A multivalued mapping $F: K \to 2^{\mathbb{R}^n}$ is said to be
(i) upper semicontinuous at $x \in K$ if, for every open set $V$ containing $F(x)$, there is an open set $U$ containing $x$ such that $F(y) \subset V$ for all $y \in K \cap U$;
(ii) lower semicontinuous at $x \in K$ if, given any sequence $\{x^k\} \subset K$ converging to $x$ and any $\xi \in F(x)$, there exists a sequence $\xi^k \in F(x^k)$ that converges to $\xi$;
(iii) continuous at $x \in K$ if it is both upper semicontinuous and lower semicontinuous at $x$.

To end this section, we state the assumptions needed in the subsequent analysis.

*Assumption 7. *Let $K$ be a nonempty, closed, and convex subset of $\mathbb{R}^n$, and assume that
(i) the solution set $S$ of problem (1) is nonempty;
(ii) the multivalued mapping $F$ is pseudomonotone and continuous on $K$ with compact convex values.

#### 3. Main Results

For any $x \in \mathbb{R}^n$, $\xi \in F(x)$, and $\mu > 0$, set
$$r_\mu(x, \xi) := x - P_K(x - \mu \xi).$$
Then the projection residue $r_\mu$ characterizes the solution set of the GVI [17].

Proposition 8. *Let $x \in K$ and $\mu > 0$. Then $x$ solves problem (1) if and only if there exists $\xi \in F(x)$ such that $r_\mu(x, \xi) = 0$.*
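Numerically, this characterization can be illustrated in the single-valued case (an illustrative sketch; the map $F$, the set $K$, and $\mu$ below are assumptions, not the paper's data): for $F(x) = x - c$ on a box $K$, the VI solution is $P_K(c)$, and the projection residue vanishes exactly there.

```python
import numpy as np

def residue(x, F, project, mu=1.0):
    """Projection residue r_mu(x) = x - P_K(x - mu * F(x)), single-valued case."""
    return x - project(x - mu * F(x))

c = np.array([2.0, -0.5])
F = lambda x: x - c                          # single-valued, strongly monotone
project = lambda z: np.clip(z, 0.0, 1.0)     # K = [0, 1]^2

x_star = project(c)                          # VI solution: P_K(c) = [1, 0]
assert np.allclose(residue(x_star, F, project), 0.0)   # residue vanishes at the solution
assert np.linalg.norm(residue(np.array([0.5, 0.5]), F, project)) > 0  # nonzero elsewhere
```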

The basic idea of the designed algorithm is as follows. At each step, compute the projection residue at the iterate $x^k$. If the residue vanishes, then stop with $x^k$ being a solution of the GVI; otherwise, find a trial point $y^k$ by a back-tracking search at $x^k$ along the residue direction, and obtain the new iterate by a projection. Repeat this process until the projection residue is a zero vector.

Now, we describe carefully our algorithmic framework for solving GVI.

*Algorithm 9. *Choose $x^0 \in K$, $\sigma \in (0, 1)$, $\gamma \in (0, 1)$.

*Step 1. *Given the current iterate , if for some , stop; else take any and compute
Let
where , with being the smallest nonnegative integer satisfying

*Step 2. *Let , where is chosen as in (14) and is chosen such that

*Step 3. * Set and go to Step 1.
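To make the two-projection pattern concrete, here is a minimal sketch of the classical extragradient iteration in the single-valued case with a fixed stepsize (Algorithm 9's multivalued selection and the back-tracking stepsize rules (14)–(16) are omitted; the map $F$, the set $K$, and the stepsize $\tau$ are illustrative assumptions):

```python
import numpy as np

def extragradient(F, project, x0, tau=0.5, tol=1e-10, max_iter=2000):
    """Korpelevich-style extragradient: two projections per iteration.
    y^k     = P_K(x^k - tau * F(x^k))   (trial point)
    x^{k+1} = P_K(x^k - tau * F(y^k))   (correction step)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = project(x - tau * F(x))
        x_new = project(x - tau * F(y))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# VI with F(x) = x - c on the box K = [0, 1]^2; the solution is P_K(c) = [1, 0].
c = np.array([2.0, -0.5])
sol = extragradient(lambda x: x - c, lambda z: np.clip(z, 0.0, 1.0),
                    x0=np.array([0.5, 0.5]))
assert np.allclose(sol, [1.0, 0.0], atol=1e-6)
```

Here the trial point and the correction step each cost one projection, matching the two-projection count of the method above.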

First, we give a conclusion which addresses the feasibility of the stepsize rule (14), that is, the existence of the trial point.

Lemma 10. *If is not a solution of the problem (1), then there exists the smallest nonnegative integer satisfying (14) under Assumption 7.*

*Proof. *By the definition of and Lemma 1, it follows that
which implies
Since , we get
Combining this with the fact that is lower semicontinuous, we know that there exists such that
Hence, by (18), one has
This completes the proof.

Now, for the sake of convenience, we define for . Then, we have the following result.

Lemma 11. *For the generated sequence in Algorithm 9, it holds that
**
under Assumption 7, where is a point in . *

*Proof. *By Lemma 1 (iii) and the iterative process of Algorithm 9, we have
Since , it follows that there exists such that
Combining this and the fact that is pseudomonotone, one has
On the other hand,
By (27), we have
It is obvious that
So, by the definition of , we obtain
and the proof is completed.

To prove the existence of the stepsize in Step 2 of Algorithm 9, we first consider the following optimization problem: which is needed for the feasibility proof of that stepsize. By Lemma 4 and the definition of , it follows that Note that and where the first inequality follows from (14). Then if the maximal value exists. By Lemma 3, we know that is nonincreasing and continuous for . So if is solvable on , then its solution coincides with the solution of the optimization problem. Next, from a geometric point of view, we will show that the equation is solvable on .

Lemma 12. *If the current iterate is not a solution of problem (1), then the equation is solvable under Assumption 7.*

*Proof. *For the sake of simplicity, we first define halfspaces as follows:
where is the same as in (14).

Since , by the iterative process of Algorithm 9, one has
and since
we know that , , and are all nonempty convex sets. Let
It is obvious that
By the fact that is nonincreasing for , we have
for .

Now, let be any point in and let be any point in . In the triangle formed by the points , and , we denote the inner angles at points and by and , respectively. By a geometric consideration, if is sufficiently large, we obtain
By the arbitrariness of and the definition of projection, there exists satisfying
On the other hand, by (39), it follows that
which implies that . Then, by the continuity of the projection operator, there exists such that
which means that
So , and the desired result follows.

In order to maintain consistency in the sequel, we denote the smallest positive solution to the equation by . Then, is the smallest positive solution to

Lemma 13. *Take in Algorithm 9; then satisfies (15) and (16) under Assumption 7.*

*Proof. *By the proof of Lemma 12, it is obvious that (16) holds. On the other hand, since and
one has
where the last inequality follows from (39). By the fact that the projection operator is nonexpansive, we have
which implies
and (15) holds. The desired result follows.

Since for all , we know that (15) and (16) hold for any . That is to say, we can take in Algorithm 9, which shows the feasibility and flexibility of the method. Of course, by Lemma 11, we know that in Algorithm 9 is a better stepsize in the sense that the distance between the next iterate and the solution set has a large decrease at each iteration, which shows the theoretical superiority of the method.

Theorem 14. *Suppose Assumption 7 holds. If Algorithm 9 generates an infinite sequence , then it converges to a solution of GVI and
*

*Proof. *For each iterative process, by the stepsize rule, we have
Combining this with Lemma 11, one has
where is chosen from . Hence, the sequence is nonincreasing and is bounded. Then, it follows that
from which we obtain
Since is continuous with compact values, Proposition 3.11 in [18] implies that is a bounded set, and so the sequence is bounded. Hence
By the iterative process of Algorithm 9 and since
we have
Without loss of generality, if , by (60), one has
and the desired result can be obtained.

On the other hand, suppose . Since the sequence is bounded, it has a convergent subsequence , whose limit is denoted by ; that is,
Therefore,
Since is continuous on , for all we know that there exist such that
and there exist such that
Observing the definition of and , it implies that
Let in (66), and we obtain
By Lemma 1 (iv), it follows that
Combining this with (67) we have
which implies that . By the fact that converges to zero and the whole sequence is nonincreasing, we obtain that ; that is, . The desired result holds.

The study of the following results is in the spirit of the convergence rate results in [19, 20], which are based on error bounds. The research on error bounds is a large topic in mathematical programming; one can refer to the survey [21] for sufficient conditions ensuring the existence of error bounds and for the roles played by error bounds in the convergence analysis of iterative algorithms.

Now, we first give the definition of Lipschitz continuity for a multivalued mapping.

*Definition 15. *A multivalued mapping $F: K \to 2^{\mathbb{R}^n}$ is said to be Lipschitz continuous if there exists a constant $L > 0$ such that
$$H(F(x), F(y)) \le L \|x - y\| \quad \text{for all } x, y \in K,$$
where $H(\cdot, \cdot)$ is the Hausdorff metric on the closed bounded subsets of $\mathbb{R}^n$, defined by
$$H(A, B) := \max\Big\{ \sup_{a \in A} \operatorname{dist}(a, B),\ \sup_{b \in B} \operatorname{dist}(b, A) \Big\}.$$
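For finite point sets the Hausdorff metric can be computed directly from the pairwise distance matrix; a small illustrative sketch (not from the paper):

```python
import numpy as np

def hausdorff(A, B):
    """H(A, B) = max{ max_{a in A} d(a, B), max_{b in B} d(b, A) } for finite point sets."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 2.0]])
assert hausdorff(A, B) == 2.0   # the point (0, 2) is distance 2 from A
```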

The convergence rate of projection methods for GVI has been considered by many researchers [20, 22], and the following assumption is needed.

*Assumption 16. *Assume there are two positive constants $c$ and $\delta$ satisfying
$$\operatorname{dist}(x, S) \le c\, \|r_\mu(x, \xi)\| \quad \text{whenever } x \in K,\ \xi \in F(x),\ \|r_\mu(x, \xi)\| \le \delta,$$
where $S$ is the solution set of problem (1) and $\operatorname{dist}(x, S)$ denotes the 2-norm distance from $x$ to $S$.

Theorem 17. *Let Assumptions 7 and 16 hold, and suppose the multivalued mapping $F$ is Lipschitz continuous with constant $L$. For the generated sequence , the following statements hold:*
*(i) there is a constant such that, for all sufficiently large ,*
*(ii) as , then*
*which means that the sequence converges $R$-linearly to a solution of the GVI.*

*Proof. *(i) By the proof of Theorem 14, we know that there exists a positive constant such that
From the fact that for all and the proof of Theorem 14, we have
Since
one has
Choosing such that
we obtain
where the third inequality follows from Assumption 16. By Lemma 6 in Chapter 2 of [16], there exists a positive constant such that
for all sufficiently large.

(ii) If , then the problem GVI reduces to the situation such that
Choosing such that
and by the fact that is Lipschitz continuous, we know that
Hence,
and it is obvious that the sequence converges $R$-linearly to the solution.

#### 4. Discussion

Certainly, the proposed extragradient method for the GVI has good theoretical properties: it requires only two projections at each iteration and allows different stepsize rules. Moreover, from a geometric point of view, it is shown that the new method takes a long stepsize, which guarantees that the distance from the next iterate to the solution set decreases substantially. However, the proposed algorithm is not easy to implement in practice, since the residue and the trial point may be expensive to compute. This is an interesting topic for further research.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (11171180) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (20113705110002).

#### References

- [1] A. Auslender and M. Teboulle, “Lagrangian duality and related multiplier methods for variational inequality problems,” *SIAM Journal on Optimization*, vol. 10, no. 4, pp. 1097–1115, 2000.
- [2] Y. Censor, A. Gibali, and S. Reich, “The subgradient extragradient method for solving variational inequalities in Hilbert space,” *Journal of Optimization Theory and Applications*, vol. 148, no. 2, pp. 318–335, 2011.
- [3] P. T. Harker and J.-S. Pang, “Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications,” *Mathematical Programming*, vol. 48, no. 2, pp. 161–220, 1990.
- [4] Y. J. Wang, N. H. Xiu, and J. Z. Zhang, “Modified extragradient method for variational inequalities and verification of solution existence,” *Journal of Optimization Theory and Applications*, vol. 119, no. 1, pp. 167–183, 2003.
- [5] A. Ben-Tal and A. Nemirovski, “Robust convex optimization,” *Mathematics of Operations Research*, vol. 23, no. 4, pp. 769–805, 1998.
- [6] F. Facchinei and J.-S. Pang, *Finite-Dimensional Variational Inequalities and Complementarity Problems*, Springer, New York, NY, USA, 2003.
- [7] C. Fang and Y. He, “A double projection algorithm for multi-valued variational inequalities and a unified framework of the method,” *Applied Mathematics and Computation*, vol. 217, no. 23, pp. 9543–9551, 2011.
- [8] Y. He, “Stable pseudomonotone variational inequality in reflexive Banach spaces,” *Journal of Mathematical Analysis and Applications*, vol. 330, no. 1, pp. 352–363, 2007.
- [9] N.-J. Huang, “Generalized nonlinear variational inclusions with noncompact valued mappings,” *Applied Mathematics Letters*, vol. 9, no. 3, pp. 25–29, 1996.
- [10] S. Li and G. Chen, “On relations between multiclass, multicriteria traffic network equilibrium models and vector variational inequalities,” *Journal of Systems Science and Systems Engineering*, vol. 15, no. 3, pp. 284–297, 2006.
- [11] R. Saigal, “Extension of the generalized complementarity problem,” *Mathematics of Operations Research*, vol. 1, no. 3, pp. 260–266, 1976.
- [12] Y. He, “The Tikhonov regularization method for set-valued variational inequalities,” *Abstract and Applied Analysis*, vol. 2012, Article ID 172061, 10 pages, 2012.
- [13] G. M. Korpelevich, “An extragradient method for finding saddle points and for other problems,” *Matecon*, vol. 12, no. 4, pp. 747–756, 1976.
- [14] E. Allevi, A. Gnudi, and I. V. Konnov, “The proximal point method for nonmonotone variational inequalities,” *Mathematical Methods of Operations Research*, vol. 63, no. 3, pp. 553–565, 2006.
- [15] Y. J. Wang, N. H. Xiu, and C. Y. Wang, “Unified framework of extragradient-type methods for pseudomonotone variational inequalities,” *Journal of Optimization Theory and Applications*, vol. 111, no. 3, pp. 641–656, 2001.
- [16] B. T. Polyak, *Introduction to Optimization*, Optimization Software, Inc., Publications Division, New York, NY, USA, 1987.
- [17] D. Kinderlehrer and G. Stampacchia, *An Introduction to Variational Inequalities and Their Applications*, Academic Press, New York, NY, USA, 1980.
- [18] J.-P. Aubin and I. Ekeland, *Applied Nonlinear Analysis*, John Wiley & Sons, New York, NY, USA, 1984.
- [19] Y. He, “A new double projection algorithm for variational inequalities,” *Journal of Computational and Applied Mathematics*, vol. 185, no. 1, pp. 166–173, 2006.
- [20] M. V. Solodov, “Convergence rate analysis of iterative algorithms for solving variational inequality problems,” *Mathematical Programming*, vol. 96, no. 3, pp. 513–528, 2003.
- [21] J.-S. Pang, “Error bounds in mathematical programming,” *Mathematical Programming*, vol. 79, no. 1–3, pp. 299–332, 1997.
- [22] F.-Q. Xia and N.-J. Huang, “A projection-proximal point algorithm for solving generalized variational inequalities,” *Journal of Optimization Theory and Applications*, vol. 150, no. 1, pp. 98–117, 2011.