Abstract and Applied Analysis
Volume 2013 (2013), Article ID 531912, 7 pages
An Extension of Subgradient Method for Variational Inequality Problems in Hilbert Space
College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China
Received 26 October 2012; Revised 30 January 2013; Accepted 1 February 2013
Academic Editor: Guanglu Zhou
Copyright © 2013 Xueyong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
An extension of the subgradient method for solving variational inequality problems is presented. A new iterative process, which relates the fixed point of a nonexpansive mapping to the current iterate, is generated. A weak convergence theorem is obtained for the three sequences generated by this process under some mild conditions.
1. Introduction
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $F: C \to H$ be a continuous mapping. The variational inequality problem, denoted by $VI(F, C)$, is to find a vector $x^* \in C$ such that
$$\langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1}$$
Throughout the paper, let $S^*$ be the solution set of $VI(F, C)$, which is assumed to be nonempty. In the special case when $C$ is the nonnegative orthant, (1) reduces to the nonlinear complementarity problem: find a vector $x^* \ge 0$ such that
$$F(x^*) \ge 0, \qquad \langle F(x^*), x^* \rangle = 0.$$
The variational inequality problem plays an important role in optimization theory and variational analysis. There are numerous applications of variational inequalities in mathematics as well as in equilibrium problems arising from engineering, economics, and other areas of real life; see [1–16] and the references therein. Many algorithms, which employ the projection onto the feasible set of the variational inequality or onto some related sets in order to iteratively reach a solution, have been proposed to solve (1). Korpelevich [2] proposed an extragradient method for finding the saddle point of some special cases of the equilibrium problem. Solodov and Svaiter [3] extended the extragradient algorithm by replacing the set $C$ with the intersection of two sets related to $C$. In each iteration of their algorithm, the new vector is calculated according to the following scheme. Given the current vector $x_k$, compute $r(x_k) = x_k - P_C(x_k - F(x_k))$; if $r(x_k) = 0$, stop; otherwise, compute
$$z_k = x_k - \eta_k r(x_k),$$
where $\eta_k = \gamma^{m_k}$, with $m_k$ being the smallest nonnegative integer $m$ satisfying
$$\langle F(x_k - \gamma^m r(x_k)), r(x_k) \rangle \ge \sigma \| r(x_k) \|^2,$$
and then compute
$$x_{k+1} = P_{C_k}(x_k), \quad \text{where } C_k = \{ x \in C : \langle F(z_k), x - z_k \rangle \le 0 \}.$$
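To make the projection-type iterations above concrete, here is a small numerical sketch of the classical two-projection extragradient step of Korpelevich [2]. The map $F$, the feasible set (the nonnegative orthant), and all parameter values are illustrative assumptions of this sketch, not data from the paper.

```python
import numpy as np

def proj_C(x):
    # Projection onto the nonnegative orthant C = {x : x >= 0}
    return np.maximum(x, 0.0)

def extragradient(F, x, step, tol=1e-10, max_iter=5000):
    """Korpelevich's two-projection iteration:
    y_k = P_C(x_k - t F(x_k)),  x_{k+1} = P_C(x_k - t F(y_k))."""
    for _ in range(max_iter):
        y = proj_C(x - step * F(x))
        x_next = proj_C(x - step * F(y))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy monotone map F(x) = Mx + q: M is identity plus a skew-symmetric
# part, so F is monotone and Lipschitz with L = ||M|| = sqrt(2).
M = np.array([[1.0, -1.0], [1.0, 1.0]])
q = np.array([-1.0, 0.0])
F = lambda v: M @ v + q
sol = extragradient(F, np.array([5.0, 5.0]), step=0.1)  # step < 1/L
```

For this data the complementarity conditions $x^* \ge 0$, $F(x^*) \ge 0$, $\langle F(x^*), x^* \rangle = 0$ pin down the solution $(1, 0)$, which the iteration approaches.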
On the other hand, Nadezhkina and Takahashi [11] obtained $x_{k+1}$ by the following iterative formula:
$$y_k = P_C(x_k - \lambda_k F(x_k)), \qquad x_{k+1} = \alpha_k x_k + (1 - \alpha_k) S P_C(x_k - \lambda_k F(y_k)),$$
where $\{\lambda_k\}$ is a sequence in $(0, 1/L)$, $\{\alpha_k\} \subset (0, 1)$ is a sequence, and $S: C \to C$ is a nonexpansive mapping. Denoting the fixed point set of $S$ by $Fix(S)$ and assuming $Fix(S) \cap S^* \ne \emptyset$, they proved that the sequence $\{x_k\}$ converges weakly to some $\hat{x} \in Fix(S) \cap S^*$.
The rest of this paper is organized as follows. In Section 2, we give some preliminaries and basic results. In Section 3, we present an extragradient algorithm and then discuss the weak convergence of the sequences generated by the algorithm. In Section 4, we modify the extragradient algorithm and give its convergence analysis.
2. Preliminaries and Basic Results
Let $H$ be a real Hilbert space with $\langle x, y \rangle$ denoting the inner product of the vectors $x, y$. Weak convergence and strong convergence of a sequence $\{x_k\}$ to a point $x$ are denoted by $x_k \rightharpoonup x$ and $x_k \to x$, respectively. The identity mapping from $H$ to itself is denoted by $I$.
For a vector $x \in H$, the orthogonal projection of $x$ onto $C$, denoted by $P_C(x)$, is defined as
$$P_C(x) = \arg\min \{ \| y - x \| : y \in C \}.$$
The following lemma states some well-known properties of the orthogonal projection operator.
Lemma 1. One has
(i) $\langle x - P_C(x), y - P_C(x) \rangle \le 0$, for all $x \in H$ and $y \in C$;
(ii) $\| P_C(x) - P_C(y) \| \le \| x - y \|$, for all $x, y \in H$;
(iii) $\| P_C(x) - y \|^2 \le \| x - y \|^2 - \| x - P_C(x) \|^2$, for all $x \in H$ and $y \in C$.
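These projection properties can be checked numerically. The sketch below uses the closed Euclidean ball as $C$, an assumption chosen only because its projection has a simple closed form.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    # Orthogonal projection onto the closed ball {y : ||y|| <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x = np.array([3.0, 4.0])
p = proj_ball(x)                 # p = (0.6, 0.8)

# Property (i): <x - P_C(x), y - P_C(x)> <= 0 for every y in C
rng = np.random.default_rng(0)
ys = [proj_ball(rng.normal(size=2)) for _ in range(100)]
assert all(np.dot(x - p, y - p) <= 1e-10 for y in ys)

# Property (ii): the projection operator is nonexpansive
a, b = rng.normal(size=2) * 5, rng.normal(size=2) * 5
assert np.linalg.norm(proj_ball(a) - proj_ball(b)) <= np.linalg.norm(a - b) + 1e-12
```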
A mapping $F: C \to H$ is called monotone if
$$\langle F(x) - F(y), x - y \rangle \ge 0, \quad \forall x, y \in C.$$
A mapping $F$ is called Lipschitz continuous if there exists an $L > 0$ such that
$$\| F(x) - F(y) \| \le L \| x - y \|, \quad \forall x, y \in C.$$
The graph of a set-valued mapping $T$, denoted by $G(T)$, is defined by
$$G(T) = \{ (x, y) : y \in T(x) \}.$$
A mapping $S: C \to C$ is called nonexpansive if
$$\| S(x) - S(y) \| \le \| x - y \|, \quad \forall x, y \in C,$$
and the fixed point set of a mapping $S$, denoted by $Fix(S)$, is defined by
$$Fix(S) = \{ x \in C : S(x) = x \}.$$
We denote the normal cone of $C$ at $x \in C$ by
$$N_C(x) = \{ w \in H : \langle w, y - x \rangle \le 0, \ \forall y \in C \}$$
and define the mapping $T$ as
$$T(x) = \begin{cases} F(x) + N_C(x), & x \in C, \\ \emptyset, & x \notin C. \end{cases}$$
Then $T$ is maximal monotone. It is well known that $0 \in T(x)$ if and only if $x \in S^*$. For more details, see, for example, [9] and the references therein. The following lemma is established in Hilbert space and is well known as the Opial condition.
Lemma 2. For any sequence $\{x_k\}$ that converges weakly to $x$, one has
$$\liminf_{k \to \infty} \| x_k - x \| < \liminf_{k \to \infty} \| x_k - y \|, \quad \forall y \ne x.$$
The next lemma was proposed in [10].
Lemma 3 (demiclosedness principle). Let $C$ be a closed, convex subset of a real Hilbert space $H$, and let $S: C \to C$ be a nonexpansive mapping. Then $I - S$ is demiclosed at $0$; that is, for any sequence $\{x_k\} \subset C$ such that $x_k \rightharpoonup x$ and $(I - S)x_k \to 0$, one has $(I - S)x = 0$.
3. An Algorithm and Its Convergence Analysis
In this section, we give our algorithm, and then discuss its convergence. First, we need the following definition.
Definition 4. For a vector $x \in C$, the projected residual function is defined as
$$r(x) = x - P_C(x - F(x)). \tag{20}$$
Obviously, we have that $x \in S^*$ if and only if $r(x) = 0$. Now we describe our algorithm.
Algorithm A. Step 0. Take $x_0 \in C$, $\gamma \in (0, 1)$, $\sigma \in (0, 1)$, and set $k = 0$.
Step 1. For the current iterate $x_k$, compute
$$z_k = x_k - \eta_k r(x_k),$$
where $\eta_k = \gamma^{m_k}$, with $m_k$ being the smallest nonnegative integer $m$ satisfying
$$\langle F(x_k - \gamma^m r(x_k)), r(x_k) \rangle \ge \sigma \| r(x_k) \|^2. \tag{23}$$
Compute
$$x_{k+1} = \alpha_k x_k + (1 - \alpha_k) S(P_{C_k}(x_k)),$$
where $C_k = \{ x \in C : \langle F(z_k), x - z_k \rangle \le 0 \}$, $\{\alpha_k\} \subset (0, 1)$, and $S: C \to C$ is a nonexpansive mapping.
Step 2. If $r(x_{k+1}) = 0$, stop; otherwise set $k := k + 1$ and go to Step 1.
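The structure of Algorithm A can be mimicked on a toy problem in a few lines of Python. Two simplifications in this sketch are assumptions of the illustration, not part of the paper: the exact projection $P_{C_k}$ onto $C \cap \{x : \langle F(z_k), x - z_k \rangle \le 0\}$ is replaced by a cheaper surrogate (halfspace projection followed by $P_C$), and the nonexpansive mapping $S$ is taken to be the projection onto a large ball, so that its fixed-point set contains the solution. The test map and starting point are likewise illustrative.

```python
import numpy as np

def proj_C(x):                               # C = nonnegative orthant
    return np.maximum(x, 0.0)

def proj_halfspace(x, g, z):
    # Closed-form projection onto the halfspace {y : <g, y - z> <= 0}
    v = np.dot(g, x - z)
    return x if v <= 0.0 else x - (v / np.dot(g, g)) * g

def S(x):
    # An arbitrary nonexpansive map whose fixed-point set contains the
    # solution: projection onto the ball of radius 10 about the origin
    n = np.linalg.norm(x)
    return x if n <= 10.0 else (10.0 / n) * x

def algorithm_A(F, x, gamma=0.5, sigma=0.3, alpha=0.5, tol=1e-9, max_iter=2000):
    for _ in range(max_iter):
        r = x - proj_C(x - F(x))             # projected residual (Definition 4)
        if np.linalg.norm(r) < tol:          # r(x) = 0 means x solves the VI
            break
        eta = 1.0                            # Armijo-type search (23)
        while np.dot(F(x - eta * r), r) < sigma * np.dot(r, r):
            eta *= gamma
        z = x - eta * r
        # surrogate for P_{C_k}: halfspace projection followed by P_C
        xbar = proj_C(proj_halfspace(x, F(z), z))
        x = alpha * x + (1.0 - alpha) * S(xbar)
    return x

# Monotone Lipschitz test map whose VI solution is the interior point (1, 1)
M = np.array([[1.0, -1.0], [1.0, 1.0]])
F = lambda v: M @ v + np.array([0.0, -2.0])
sol = algorithm_A(F, np.array([5.0, 5.0]))
```

The Armijo loop always terminates because $\langle F(x), r(x) \rangle \ge \| r(x) \|^2$ holds for every $x$, so the search criterion is satisfied for small enough $\gamma^m$.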
Remark 5. The iterate $x_{k+1}$ is well defined in Algorithm A and can be interpreted as follows: if (23) is well defined, then $x_{k+1}$ can be derived by the following iterative scheme: first compute $\bar{x}_k = P_{C_k}(x_k)$, and then set $x_{k+1} = \alpha_k x_k + (1 - \alpha_k) S(\bar{x}_k)$. For more details, see [3, 4].
Now we investigate the weak convergence property of our algorithm. First we recall the following result, which was proposed by Schu [17].
Lemma 6. Let $H$ be a real Hilbert space, let $\{\alpha_k\}$ be a sequence of real numbers with $0 < a \le \alpha_k \le b < 1$ for all $k$, and let $\{v_k\}, \{w_k\} \subset H$ be such that, for some $c \ge 0$,
$$\limsup_{k \to \infty} \| v_k \| \le c, \qquad \limsup_{k \to \infty} \| w_k \| \le c, \qquad \lim_{k \to \infty} \| \alpha_k v_k + (1 - \alpha_k) w_k \| = c.$$
Then one has $\lim_{k \to \infty} \| v_k - w_k \| = 0$.
The following theorem is crucial in proving the boundedness of the sequence $\{x_k\}$.
Theorem 7. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and let $x^* \in S^*$. Then for any sequence $\{x_k\}$ generated by Algorithm A, one has
$$\| P_{C_k}(x_k) - x^* \|^2 \le \| x_k - x^* \|^2 - \| P_{C_k}(x_k) - x_k \|^2.$$
Proof. Let $\bar{x}_k = P_{C_k}(x_k)$ and let $x^* \in S^*$. From (20)–(23) in Algorithm A, we get $z_k \in C$. Since $F$ is monotone, $\langle F(z_k) - F(x^*), z_k - x^* \rangle \ge 0$; combining this with (1), we obtain
$$\langle F(z_k), x^* - z_k \rangle \le \langle F(x^*), x^* - z_k \rangle \le 0,$$
which means $x^* \in C_k$. So, applying Lemma 1 (10) to the projection onto $C_k$, we obtain
$$\| \bar{x}_k - x^* \|^2 \le \| x_k - x^* \|^2 - \| \bar{x}_k - x_k \|^2,$$
which completes the proof.
Theorem 8. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and suppose $Fix(S) \cap S^* \ne \emptyset$. Then for any sequence $\{x_k\}$ generated by Algorithm A, one has that $\lim_{k \to \infty} \| x_k - x^* \|$ exists for every $x^* \in Fix(S) \cap S^*$; in particular, $\{x_k\}$ is bounded. Furthermore, $\lim_{k \to \infty} \| r(x_k) \| = 0$.
Proof. Using (22), we have
By the Cauchy-Schwarz inequality,
Hence, by (23)
Then we have
where the first inequality follows from the fact that $S$ is a nonexpansive mapping.
That means $\{x_k\}$ is bounded, and so is $\{z_k\}$. Since $F$ is continuous, there exists a constant $M > 0$ such that $\| F(z_k) \| \le M$ for all $k$, and hence $\lim_{k \to \infty} \eta_k \| r(x_k) \|^2 = 0$, which implies that $\liminf_{k \to \infty} \| r(x_k) \| = 0$ or $\liminf_{k \to \infty} \eta_k = 0$.
If $\liminf_{k \to \infty} \| r(x_k) \| = 0$, we get the conclusion.
If $\liminf_{k \to \infty} \eta_k = 0$, we can deduce that the inequality (23) in Algorithm A is not satisfied for $m_k - 1$; that is, there exists $k_0$ such that, for all $k \ge k_0$,
$$\langle F(x_k - \gamma^{m_k - 1} r(x_k)), r(x_k) \rangle < \sigma \| r(x_k) \|^2.$$
Applying (8) by setting $x = x_k$ leads to $\langle F(x_k), r(x_k) \rangle \ge \| r(x_k) \|^2$. Therefore, passing to the limit in (44) and (46) and using the continuity of $F$, since $\sigma < 1$, we obtain $\liminf_{k \to \infty} \| r(x_k) \| = 0$.
On the other hand, using the Cauchy-Schwarz inequality again, we have Therefore, Then we have Noting that , it easily follows that , which implies that By the triangle inequality, we have Passing to the limit in (51), we conclude The proof is complete.
Theorem 9. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and suppose $Fix(S) \cap S^* \ne \emptyset$. Then the sequences $\{x_k\}$, $\{z_k\}$, and $\{\bar{x}_k\}$ generated by Algorithm A converge weakly to the same point $\hat{x}$, where $\hat{x} \in Fix(S) \cap S^*$.
Proof. By Theorem 8, we know that $\{x_k\}$ is bounded, which implies that there exists a subsequence $\{x_{k_j}\}$ of $\{x_k\}$ that converges weakly to some point $\hat{x}$.
First, we show that $\hat{x} \in Fix(S)$.
Let $x^* \in Fix(S) \cap S^*$; since $S$ is a nonexpansive mapping, from (29) we have Passing to the limit in (53), we obtain Then by (25) we have From Lemma 6, it follows that By the triangle inequality, we have and then, passing to the limit in (57), we deduce that $\| S(x_k) - x_k \| \to 0$, which implies that $\hat{x} \in Fix(S)$ by Lemma 3.
Second, we show that $\hat{x} \in S^*$.
Since $\lim_{k \to \infty} \| r(x_k) \| = 0$, using Theorem 8 we claim that $\| x_k - \bar{x}_k \| \to 0$ and $\| x_k - z_k \| \to 0$.
Let $(v, w) \in G(T)$; then $w - F(v) \in N_C(v)$, and thus $\langle v - y, w - F(v) \rangle \ge 0$ for all $y \in C$. Applying (8) by letting $y = \bar{x}_k$, we have $\langle v - \bar{x}_k, w - F(v) \rangle \ge 0$; that is, $\langle v - \bar{x}_k, w \rangle \ge \langle v - \bar{x}_k, F(v) \rangle$. Note that $\langle v - \bar{x}_k, F(v) - F(\bar{x}_k) \rangle \ge 0$; then
$$\langle v - \bar{x}_k, w \rangle \ge \langle v - \bar{x}_k, F(\bar{x}_k) \rangle,$$
where the last inequality follows from the monotonicity of $F$.
Since $F$ is continuous, by (37), passing to the limit in (63) along the subsequence $\{x_{k_j}\}$, we obtain $\langle v - \hat{x}, w \rangle \ge 0$ for all $(v, w) \in G(T)$. As $T$ is maximal monotone, we have $0 \in T(\hat{x})$, which implies that $\hat{x} \in S^*$.
Finally, we show that such an $\hat{x}$ is unique.
Let $\{x_{k_i}\}$ be another subsequence of $\{x_k\}$, such that $x_{k_i} \rightharpoonup \bar{x}$. Then we conclude that $\bar{x} \in Fix(S) \cap S^*$. Suppose $\hat{x} \ne \bar{x}$; by Lemma 2 we have
$$\lim_{k \to \infty} \| x_k - \hat{x} \| = \liminf_{j \to \infty} \| x_{k_j} - \hat{x} \| < \liminf_{j \to \infty} \| x_{k_j} - \bar{x} \| = \lim_{k \to \infty} \| x_k - \bar{x} \| = \liminf_{i \to \infty} \| x_{k_i} - \bar{x} \| < \liminf_{i \to \infty} \| x_{k_i} - \hat{x} \| = \lim_{k \to \infty} \| x_k - \hat{x} \|,$$
which is a contradiction. Thus, $\hat{x} = \bar{x}$, and the proof is complete.
4. Further Study
In this section we propose an extension of Algorithm A, which is effective in practice. Similar to the investigation in Section 3, for a constant $\mu > 0$, we define a new projected residual function as follows:
$$r_\mu(x) = x - P_C(x - \mu F(x)). \tag{67}$$
It is clear that the new projected residual function (67) degenerates into (20) by setting $\mu = 1$.
Algorithm B. Step 0. Take $x_0 \in C$, $\gamma \in (0, 1)$, $\sigma \in (0, 1)$, $\mu > 0$, and set $k = 0$.
Step 1. For the current iterate $x_k$, compute
$$z_k = x_k - \eta_k r_\mu(x_k),$$
where $\eta_k = \gamma^{m_k}$, with $m_k$ being the smallest nonnegative integer $m$ satisfying
$$\langle F(x_k - \gamma^m r_\mu(x_k)), r_\mu(x_k) \rangle \ge \sigma \| r_\mu(x_k) \|^2.$$
Compute
$$x_{k+1} = \alpha_k x_k + (1 - \alpha_k) S(P_{C_k}(x_k)),$$
where $C_k = \{ x \in C : \langle F(z_k), x - z_k \rangle \le 0 \}$ and $\{\alpha_k\} \subset (0, 1)$.
Step 2. If $r_\mu(x_{k+1}) = 0$, stop; otherwise set $k := k + 1$ and go to Step 1.
In the rest of this section, we discuss the weak convergence property of Algorithm B.
Lemma 10. For any $x \in C$ and $\mu > 0$, one has: $x \in S^*$ if and only if $r_\mu(x) = 0$.
Therefore, solving the variational inequality is equivalent to finding a zero point of the projected residual function $r_\mu(\cdot)$. Meanwhile, we know that $r_\mu(\cdot)$ is a continuous function of $x$, as the projection mapping is nonexpansive.
Lemma 11. For any $x \in H$ and $\mu_2 \ge \mu_1 > 0$, it holds that
$$\| r_{\mu_1}(x) \| \le \| r_{\mu_2}(x) \|, \qquad \frac{\| r_{\mu_1}(x) \|}{\mu_1} \ge \frac{\| r_{\mu_2}(x) \|}{\mu_2}.$$
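The standard properties of the scaled residual can be checked numerically: $r_\mu$ vanishes exactly at a solution for every $\mu > 0$, $\| r_\mu(x) \|$ is nondecreasing in $\mu$, and $\| r_\mu(x) \| / \mu$ is nonincreasing in $\mu$. The map $F$, the set $C$, and the test points below are illustrative assumptions of this sketch.

```python
import numpy as np

def proj_C(x):
    # C = nonnegative orthant
    return np.maximum(x, 0.0)

def r_mu(x, F, mu):
    # Scaled projected residual r_mu(x) = x - P_C(x - mu F(x))
    return x - proj_C(x - mu * F(x))

M = np.array([[1.0, -1.0], [1.0, 1.0]])
F = lambda v: M @ v + np.array([-1.0, 0.0])

x_star = np.array([1.0, 0.0])       # solves the VI for this F over C
x = np.array([0.5, 2.0])            # an arbitrary non-solution

mus = (0.5, 1.0, 2.0, 4.0)
# Zero exactly at solutions, for every mu > 0 (cf. Lemma 10)
for mu in mus:
    assert np.allclose(r_mu(x_star, F, mu), 0.0)
    assert np.linalg.norm(r_mu(x, F, mu)) > 0.1

# ||r_mu(x)|| nondecreasing in mu, ||r_mu(x)||/mu nonincreasing (cf. Lemma 11)
norms = [np.linalg.norm(r_mu(x, F, mu)) for mu in mus]
assert norms == sorted(norms)
scaled = [n / mu for n, mu in zip(norms, mus)]
assert scaled == sorted(scaled, reverse=True)
```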
Theorem 12. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and let $x^* \in S^*$. Then for any sequence $\{x_k\}$ generated by Algorithm B, one has
$$\| P_{C_k}(x_k) - x^* \|^2 \le \| x_k - x^* \|^2 - \| P_{C_k}(x_k) - x_k \|^2.$$
Proof. The proof of this theorem is similar to that of Theorem 7, so we omit it.
Theorem 13. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and suppose $Fix(S) \cap S^* \ne \emptyset$. Then for any sequence $\{x_k\}$ generated by Algorithm B, one has that $\lim_{k \to \infty} \| x_k - x^* \|$ exists for every $x^* \in Fix(S) \cap S^*$. Furthermore, $\lim_{k \to \infty} \| r_\mu(x_k) \| = 0$.
Theorem 14. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and suppose $Fix(S) \cap S^* \ne \emptyset$. Then the sequences $\{x_k\}$, $\{z_k\}$, and $\{\bar{x}_k\}$ generated by Algorithm B converge weakly to the same point $\hat{x}$, where $\hat{x} \in Fix(S) \cap S^*$.
5. Conclusions
In this paper, we proposed an extension of the extragradient algorithm for solving monotone variational inequalities and established a weak convergence theorem for it. Algorithm B is effective in practice. Meanwhile, we pointed out that the limit point produced by our algorithm is also a fixed point of a given nonexpansive mapping.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (Grant 11171362) and the Fundamental Research Funds for the Central Universities (Grant CDJXS12101103). The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped to improve the paper.
References
- R. W. Cottle, J.-S. Pang, and R. E. Stone, The Linear Complementarity Problem, Academic Press, Boston, Mass, USA, 1992.
- G. M. Korpelevich, “The extragradient method for finding saddle points and other problems,” Matecon, vol. 12, pp. 747–756, 1976.
- M. V. Solodov and B. F. Svaiter, “A new projection method for variational inequality problems,” SIAM Journal on Control and Optimization, vol. 37, no. 3, pp. 765–776, 1999.
- M. V. Solodov, “Stationary points of bound constrained minimization reformulations of complementarity problems,” Journal of Optimization Theory and Applications, vol. 94, no. 2, pp. 449–467, 1997.
- Y. J. Wang, N. H. Xiu, and J. Z. Zhang, “Modified extragradient method for variational inequalities and verification of solution existence,” Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 167–183, 2003.
- W. Takahashi and M. Toyoda, “Weak convergence theorems for nonexpansive mappings and monotone mappings,” Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 417–428, 2003.
- E. H. Zarantonello, Projections on Convex Sets in Hilbert Space and Spectral Theory, Contributions to Nonlinear Functional Analysis, Academic press, New York, NY, USA, 1971.
- Z. Opial, “Weak convergence of the sequence of successive approximations for nonexpansive mappings,” Bulletin of the American Mathematical Society, vol. 73, pp. 591–597, 1967.
- R. T. Rockafellar, “On the maximality of sums of nonlinear monotone operators,” Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
- F. E. Browder, “Fixed-point theorems for noncompact mappings in Hilbert space,” Proceedings of the National Academy of Sciences of the United States of America, vol. 53, pp. 1272–1276, 1965.
- N. Nadezhkina and W. Takahashi, “Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings,” Journal of Optimization Theory and Applications, vol. 128, no. 1, pp. 191–201, 2006.
- B. C. Eaves, “On the basic theorem of complementarity,” Mathematical Programming, vol. 1, no. 1, pp. 68–75, 1971.
- B. S. He and L. Z. Liao, “Improvements of some projection methods for monotone nonlinear variational inequalities,” Journal of Optimization Theory and Applications, vol. 112, no. 1, pp. 111–128, 2002.
- Y. Censor, A. Gibali, and S. Reich, “The subgradient extragradient method for solving variational inequalities in Hilbert space,” Journal of Optimization Theory and Applications, vol. 148, no. 2, pp. 318–335, 2011.
- Y. Censor, A. Gibali, and S. Reich, “Two extensions of Korpelevich’s extragradient method for solving the variational inequality problem in Euclidean space,” Tech. Rep., 2010.
- Y. J. Wang, N. H. Xiu, and C. Y. Wang, “Unified framework of extragradient-type methods for pseudomonotone variational inequalities,” Journal of Optimization Theory and Applications, vol. 111, no. 3, pp. 641–656, 2001.
- J. Schu, “Weak and strong convergence to fixed points of asymptotically nonexpansive mappings,” Bulletin of the Australian Mathematical Society, vol. 43, no. 1, pp. 153–159, 1991.