Abstract and Applied Analysis, Volume 2012 (2012), Article ID 817436, 9 pages. http://dx.doi.org/10.1155/2012/817436
Research Article

## Strong Convergence of a Modified Extragradient Method to the Minimum-Norm Solution of Variational Inequalities

1Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
2Mathematics Department, COMSATS Institute of Information Technology, Islamabad 44000, Pakistan
3Mathematics Department, College of Science, King Saud University, Riyadh 11451, Saudi Arabia
4Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan

Received 18 August 2011; Accepted 14 October 2011

Academic Editor: Khalida Inayat Noor

Copyright © 2012 Yonghong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We suggest and analyze a modified extragradient method for solving variational inequalities which converges strongly to the minimum-norm solution of the variational inequality in an infinite-dimensional Hilbert space.

#### 1. Introduction

Let $C$ be a closed convex subset of a real Hilbert space $H$. A mapping $A: C \to H$ is called $\alpha$-inverse-strongly monotone if there exists a positive real number $\alpha$ such that
$$\langle Au - Av, u - v \rangle \ge \alpha \|Au - Av\|^2, \quad \forall u, v \in C. \tag{1.1}$$
The variational inequality problem is to find $u \in C$ such that
$$\langle Au, v - u \rangle \ge 0, \quad \forall v \in C. \tag{1.2}$$
The set of solutions of the variational inequality problem is denoted by $VI(C, A)$. It is well known that variational inequality theory has emerged as an important tool for studying a wide class of obstacle, unilateral, and equilibrium problems arising in several branches of pure and applied sciences in a unified and general framework. Several numerical methods have been developed for solving variational inequalities and related optimization problems; see [1–36] and the references therein.
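As a quick numerical sanity check of the definition (our own illustration, not from the paper): for an affine operator $A(u) = Mu + q$ with $M$ symmetric positive semidefinite, one has $\langle Au - Av, u - v \rangle = (u-v)^{T} M (u-v) \ge \frac{1}{\lambda_{\max}(M)} \|M(u-v)\|^2$, so $A$ is $\alpha$-inverse-strongly monotone with $\alpha = 1/\lambda_{\max}(M)$. The matrix, vector, and test points below are arbitrary choices for the demonstration.

```python
import numpy as np

# A(u) = M u + q with M symmetric positive semidefinite.
# For such A, <Au - Av, u - v> = (u-v)^T M (u-v) >= (1/lmax) ||M(u-v)||^2,
# so A is alpha-inverse-strongly monotone with alpha = 1/lmax(M).
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
M = B.T @ B                      # symmetric positive semidefinite
q = rng.standard_normal(3)
alpha = 1.0 / np.linalg.eigvalsh(M).max()

def A(u):
    return M @ u + q

# Verify the inverse-strong-monotonicity inequality on random pairs.
ok = True
for _ in range(1000):
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    lhs = (A(u) - A(v)) @ (u - v)
    rhs = alpha * np.linalg.norm(A(u) - A(v)) ** 2
    ok = ok and (lhs >= rhs - 1e-9)
print(ok)
```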

It is well known that the variational inequality (1.2) is equivalent to a fixed point problem: $u \in VI(C, A)$ if and only if $u = P_C(u - \lambda A u)$ for any $\lambda > 0$, where $P_C$ denotes the metric projection of $H$ onto $C$. This alternative formulation has been used to study the existence of solutions of the variational inequality and to develop several numerical methods. Using this equivalence, one can suggest the following iterative method.

Algorithm 1.1. For a given $u_0 \in C$, calculate the approximate solution $u_{n+1}$ by the iterative scheme
$$u_{n+1} = P_C(u_n - \lambda A u_n), \quad n \ge 0.$$
It is well known that the convergence of Algorithm 1.1 requires the operator $A$ to be both strongly monotone and Lipschitz continuous. These restrictive conditions rule out its application to several important problems. To overcome these drawbacks, Korpelevič suggested in [8] an algorithm of the form
$$y_n = P_C(u_n - \lambda A u_n), \qquad u_{n+1} = P_C(u_n - \lambda A y_n), \quad n \ge 0.$$
Noor [2] further suggested and analyzed the following iterative method for solving the variational inequality (1.2).

Algorithm 1.2. For a given $u_0 \in C$, calculate the approximate solution $u_{n+1}$ by the iterative scheme
$$y_n = P_C(u_n - \lambda A u_n), \qquad u_{n+1} = P_C(y_n - \lambda A y_n), \quad n \ge 0,$$
which is known as the modified extragradient method. For the convergence analysis of Algorithm 1.2, see Noor [1, 2] and the references therein. We would like to point out that Algorithm 1.2 is quite different from the method of Korpelevič [8]. However, Algorithm 1.2 fails, in general, to converge strongly in the setting of infinite-dimensional Hilbert spaces.
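To make the two schemes concrete, here is a small numerical sketch (our own illustration, not from the paper) comparing Korpelevič's extragradient method with a modified extragradient iteration of the Algorithm 1.2 type, on a toy variational inequality over a box with an affine monotone operator. The operator $A(u) = Mu + q$, the box $C = [0,1]^4$, and the step size are all assumptions chosen for the demo.

```python
import numpy as np

# Toy VI: A(u) = M u + q with M symmetric positive definite, C = [0, 1]^4.
# M, q, and the step size lam are illustrative choices, not from the paper.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
M = B.T @ B + np.eye(4)                    # symmetric positive definite
q = rng.standard_normal(4)
A = lambda u: M @ u + q
P_C = lambda x: np.clip(x, 0.0, 1.0)       # projection onto the box
lam = 1.0 / np.linalg.eigvalsh(M).max()    # small step size

# Korpelevič's extragradient method:
#   y_n = P_C(u_n - lam A u_n),  u_{n+1} = P_C(u_n - lam A y_n)
u = np.zeros(4)
for _ in range(5000):
    y = P_C(u - lam * A(u))
    u = P_C(u - lam * A(y))

# Modified extragradient (Algorithm 1.2 type, two forward steps):
#   y_n = P_C(v_n - lam A v_n),  v_{n+1} = P_C(y_n - lam A y_n)
v = np.zeros(4)
for _ in range(5000):
    y = P_C(v - lam * A(v))
    v = P_C(y - lam * A(y))

# Both limits satisfy the fixed-point characterization u = P_C(u - lam A u).
res_k = np.linalg.norm(u - P_C(u - lam * A(u)))
res_n = np.linalg.norm(v - P_C(v - lam * A(v)))
print(res_k, res_n)
```

Since $M$ here is positive definite, the VI has a unique solution and both iterations reach it; the difference between the two methods only becomes essential under weaker monotonicity and in infinite dimensions, which is exactly the setting the paper addresses.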
In this paper, we suggest and consider a very simple modified extragradient method which converges strongly to the minimum-norm solution of the variational inequality (1.2) in an infinite-dimensional Hilbert space. This new method includes the method of Noor [2] as a special case.

#### 2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, and let $C$ be a closed convex subset of $H$. It is well known that, for any $x \in H$, there exists a unique $\tilde{x} \in C$ such that
$$\|x - \tilde{x}\| = \inf\{\|x - y\| : y \in C\}.$$
We denote $\tilde{x} = P_C x$, where $P_C$ is called the metric projection of $H$ onto $C$. The metric projection $P_C$ of $H$ onto $C$ has the following basic properties:
(i) $\langle x - P_C x, y - P_C x \rangle \le 0$ for all $x \in H$ and $y \in C$;
(ii) $\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2$ for every $x, y \in H$;
(iii) $\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$ for all $x \in H$, $y \in C$.
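The two key properties can be checked numerically on a set where the projection has a closed form. Below is our own sketch using the closed unit ball, for which $P_C x = x$ if $\|x\| \le 1$ and $P_C x = x/\|x\|$ otherwise; the random test points are arbitrary.

```python
import numpy as np

# Metric projection onto the closed unit ball C = {x : ||x|| <= 1}.
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(2)
prop_i = prop_ii = True
for _ in range(1000):
    x, z = rng.standard_normal(3) * 2, rng.standard_normal(3) * 2
    y = proj_ball(rng.standard_normal(3))   # an arbitrary point of C
    px, pz = proj_ball(x), proj_ball(z)
    # (i) <x - P_C x, y - P_C x> <= 0 for every y in C
    prop_i = prop_i and ((x - px) @ (y - px) <= 1e-9)
    # (ii) ||P_C x - P_C z||^2 <= <x - z, P_C x - P_C z>  (firm nonexpansiveness)
    prop_ii = prop_ii and (np.linalg.norm(px - pz) ** 2 <= (x - z) @ (px - pz) + 1e-9)
print(prop_i and prop_ii)
```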

We need the following lemma for proving our main results.

Lemma 2.1 (see [15]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \delta_n \gamma_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that
(1) $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(2) $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} |\delta_n \gamma_n| < \infty$.
Then $\lim_{n \to \infty} a_n = 0$.
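To see the lemma quantitatively, here is a self-contained numerical illustration (our own; the particular sequences $\gamma_n = 1/(n+2)$ and $\delta_n = 1/(n+1)$ are chosen by us to satisfy conditions (1) and (2)).

```python
# Lemma 2.1 in action: a_{n+1} = (1 - g_n) a_n + g_n * d_n with
# g_n = 1/(n+2)  (so sum g_n diverges)  and  d_n = 1/(n+1)  (so limsup d_n <= 0).
a = 1.0
for n in range(100_000):
    g, d = 1.0 / (n + 2), 1.0 / (n + 1)
    a = (1.0 - g) * a + g * d
print(a)   # tends to 0 as the iteration count grows
```

In fact, with these choices one can check that $(n+1) a_n = a_0 + \sum_{k=1}^{n} 1/k$, so $a_n = O(\log n / n)$, which is consistent with the value printed.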

#### 3. Main Result

In this section we will state and prove our main result.

Theorem 3.1. Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $A: C \to H$ be an $\alpha$-inverse-strongly monotone mapping. Suppose that $VI(C, A) \neq \emptyset$. For $u_0 \in C$ given arbitrarily, define a sequence $\{u_n\}$ iteratively by
$$y_n = P_C[(1 - \alpha_n)(u_n - \lambda A u_n)], \qquad u_{n+1} = P_C(y_n - \lambda A y_n), \quad n \ge 0, \tag{3.1}$$
where $\{\alpha_n\}$ is a sequence in $(0, 1)$ and $\lambda \in (0, 2\alpha)$ is a constant. Assume the following conditions are satisfied:
(C1) $\lim_{n \to \infty} \alpha_n = 0$;
(C2) $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(C3) $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$.
Then the sequence $\{u_n\}$ generated by (3.1) converges strongly to $x^* = P_{VI(C,A)}(0)$, which is the minimum-norm element in $VI(C, A)$.
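The role of the vanishing damping factor $(1 - \alpha_n)$, which steers the iterates toward the minimum-norm solution, can be seen on a toy problem. The sketch below is our own illustration under assumed choices: $C = [0,3]^2$, the operator $A(u) = (u_1 - 1,\; 0)$ (which is $1$-inverse-strongly monotone), and the update written in the comments, a damped two-step scheme of the Theorem 3.1 type. For this problem the solution set is the whole segment $\{1\} \times [0, 3]$, and the minimum-norm solution is $(1, 0)$.

```python
import numpy as np

# Damped modified extragradient scheme (illustrative reading of (3.1)):
#   y_n     = P_C[(1 - alpha_n)(u_n - lam * A(u_n))]
#   u_{n+1} = P_C(y_n - lam * A(y_n))
# Toy VI on C = [0, 3]^2 with A(u) = (u_1 - 1, 0): A is 1-inverse-strongly
# monotone, VI(C, A) = {1} x [0, 3], and its minimum-norm element is (1, 0).
P_C = lambda x: np.clip(x, 0.0, 3.0)
A = lambda u: np.array([u[0] - 1.0, 0.0])

lam = 1.0                          # lam in (0, 2*alpha) with alpha = 1
u = np.array([3.0, 3.0])
for n in range(20_000):
    a_n = 1.0 / (n + 2)            # alpha_n -> 0 and sum alpha_n = infinity
    y = P_C((1.0 - a_n) * (u - lam * A(u)))
    u = P_C(y - lam * A(y))
print(u)   # approaches (1, 0), the minimum-norm solution
```

Without the $(1 - \alpha_n)$ factor the iteration would stop at whichever point of the segment $\{1\} \times [0, 3]$ it first reaches; the damping is what selects the element of smallest norm.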

We divide the detailed proof into several steps.

Proof. Take $x^* \in VI(C, A)$. First we record the following facts:
(1) $x^* = P_C(x^* - \lambda A x^*)$; in particular, $x^* = P_C[(I - \lambda A)x^*]$;
(2) for $\lambda \in (0, 2\alpha)$, the mapping $I - \lambda A$ is nonexpansive and
$$\|(I - \lambda A)u - (I - \lambda A)v\|^2 \le \|u - v\|^2 - \lambda(2\alpha - \lambda)\|Au - Av\|^2 \quad \text{for all } u, v \in C.$$
From (3.1), we have
$$\|y_n - x^*\| \le \|(1 - \alpha_n)\big[(I - \lambda A)u_n - (I - \lambda A)x^*\big] - \alpha_n (I - \lambda A)x^*\| \le (1 - \alpha_n)\|u_n - x^*\| + \alpha_n \|(I - \lambda A)x^*\|.$$
Thus,
$$\|u_{n+1} - x^*\| \le \|y_n - x^*\| \le \max\{\|u_n - x^*\|, \|(I - \lambda A)x^*\|\}.$$
Therefore, $\{u_n\}$ is bounded and so are $\{y_n\}$, $\{A u_n\}$, and $\{A y_n\}$.
From (3.1) and the nonexpansiveness of $P_C$ and $I - \lambda A$, we have
$$\|u_{n+1} - u_n\| \le \|y_n - y_{n-1}\| \le (1 - \alpha_n)\|u_n - u_{n-1}\| + M|\alpha_n - \alpha_{n-1}|,$$
where $M$ is a constant such that $M \ge \sup_n \|u_n - \lambda A u_n\|$. Hence, by Lemma 2.1 and condition (C3), we obtain
$$\lim_{n \to \infty} \|u_{n+1} - u_n\| = 0.$$
From the estimates above and the convexity of $\|\cdot\|^2$, we deduce
$$\|u_{n+1} - x^*\|^2 \le \|y_n - x^*\|^2 \le (1 - \alpha_n)\big[\|u_n - x^*\|^2 - \lambda(2\alpha - \lambda)\|A u_n - A x^*\|^2\big] + \alpha_n\|(I - \lambda A)x^*\|^2.$$
Therefore, we have
$$(1 - \alpha_n)\lambda(2\alpha - \lambda)\|A u_n - A x^*\|^2 \le (\|u_n - x^*\| + \|u_{n+1} - x^*\|)\|u_{n+1} - u_n\| + \alpha_n\|(I - \lambda A)x^*\|^2.$$
Since $\alpha_n \to 0$ and $\|u_{n+1} - u_n\| \to 0$ as $n \to \infty$, we obtain $\|A u_n - A x^*\| \to 0$ as $n \to \infty$.
By the property (ii) of the metric projection $P_C$, we have
$$\|y_n - x^*\|^2 \le \langle (1 - \alpha_n)(u_n - \lambda A u_n) - (x^* - \lambda A x^*),\; y_n - x^* \rangle.$$
It follows, by the identity $\langle a, b \rangle = \frac{1}{2}(\|a\|^2 + \|b\|^2 - \|a - b\|^2)$, that
$$2\|y_n - x^*\|^2 \le \|u_n - x^*\|^2 + \alpha_n M_1 + \|y_n - x^*\|^2 - \|u_n - y_n - \lambda(A u_n - A x^*) - \alpha_n(u_n - \lambda A u_n)\|^2,$$
where $M_1$ is a suitable constant (all sequences involved are bounded), and hence
$$\|y_n - x^*\|^2 \le \|u_n - x^*\|^2 - \|u_n - y_n\|^2 + 2\|u_n - y_n\|\big(\lambda\|A u_n - A x^*\| + \alpha_n M_2\big) + \alpha_n M_1$$
for a suitable constant $M_2$, which implies that
$$\|u_n - y_n\|^2 \le (\|u_n - x^*\| + \|u_{n+1} - x^*\|)\|u_{n+1} - u_n\| + 2\|u_n - y_n\|\big(\lambda\|A u_n - A x^*\| + \alpha_n M_2\big) + \alpha_n M_1.$$
Since $\|u_{n+1} - u_n\| \to 0$, $\alpha_n \to 0$, and $\|A u_n - A x^*\| \to 0$, we derive $\|u_n - y_n\| \to 0$ as $n \to \infty$.
Next we show that
$$\limsup_{n \to \infty} \langle -(I - \lambda A)x^*, y_n - x^* \rangle \le 0,$$
where $x^* = P_{VI(C,A)}(0)$. To show it, we choose a subsequence $\{y_{n_i}\}$ of $\{y_n\}$ such that
$$\limsup_{n \to \infty} \langle -(I - \lambda A)x^*, y_n - x^* \rangle = \lim_{i \to \infty} \langle -(I - \lambda A)x^*, y_{n_i} - x^* \rangle.$$
As $\{y_{n_i}\}$ is bounded, we may assume (passing to a further subsequence if necessary) that $y_{n_i}$ converges weakly to some $z \in C$; since $\|u_n - y_n\| \to 0$, also $u_{n_i} \rightharpoonup z$.
Next we show that $z \in VI(C, A)$. We define a mapping $T$ by
$$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C, \end{cases}$$
where $N_C v$ denotes the normal cone of $C$ at $v$. Then $T$ is maximal monotone (see [16]). Let $(v, w) \in G(T)$. Since $w - Av \in N_C v$ and $y_n \in C$, we have $\langle v - y_n, w - Av \rangle \ge 0$. On the other hand, from $y_n = P_C[(1 - \alpha_n)(u_n - \lambda A u_n)]$ and property (i), we have
$$\langle v - y_n,\; y_n - (1 - \alpha_n)(u_n - \lambda A u_n) \rangle \ge 0,$$
that is,
$$\left\langle v - y_n,\; \frac{y_n - u_n}{\lambda} + A u_n + \frac{\alpha_n}{\lambda}(u_n - \lambda A u_n) \right\rangle \ge 0.$$
Therefore, we have
$$\langle v - y_{n_i}, w \rangle \ge \langle v - y_{n_i}, Av \rangle \ge \langle v - y_{n_i}, Av \rangle - \left\langle v - y_{n_i},\; \frac{y_{n_i} - u_{n_i}}{\lambda} + A u_{n_i} + \frac{\alpha_{n_i}}{\lambda}(u_{n_i} - \lambda A u_{n_i}) \right\rangle.$$
Noting that $\|y_{n_i} - u_{n_i}\| \to 0$, $\alpha_{n_i} \to 0$, $y_{n_i} \rightharpoonup z$, and $A$ is monotone and Lipschitz continuous, we obtain $\langle v - z, w \rangle \ge 0$. Since $T$ is maximal monotone, we have $0 \in Tz$, and hence $z \in VI(C, A)$. Therefore,
$$\limsup_{n \to \infty} \langle -(I - \lambda A)x^*, y_n - x^* \rangle = \langle -x^*, z - x^* \rangle - \lambda \langle A x^*, z - x^* \rangle \le 0,$$
since $\langle -x^*, z - x^* \rangle \le 0$ by property (i) applied to $x^* = P_{VI(C,A)}(0)$, and $\langle A x^*, z - x^* \rangle \ge 0$ because $x^* \in VI(C, A)$ and $z \in C$. Finally, we prove $u_n \to x^*$. By the property (ii) of the metric projection $P_C$, we have
$$\|y_n - x^*\|^2 \le \langle (1 - \alpha_n)(u_n - \lambda A u_n) - (x^* - \lambda A x^*),\; y_n - x^* \rangle \le (1 - \alpha_n)\|u_n - x^*\|\,\|y_n - x^*\| + \alpha_n \langle -(I - \lambda A)x^*, y_n - x^* \rangle.$$
Hence,
$$\|u_{n+1} - x^*\|^2 \le \|y_n - x^*\|^2 \le \frac{1 - \alpha_n}{1 + \alpha_n}\|u_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n}\langle -(I - \lambda A)x^*, y_n - x^* \rangle.$$
We apply Lemma 2.1 to the last inequality, with $\gamma_n = 2\alpha_n/(1 + \alpha_n)$ and $\delta_n = \langle -(I - \lambda A)x^*, y_n - x^* \rangle$, to deduce that $u_n \to x^*$. This completes the proof.

Remark 3.2. Our algorithm (3.1) is similar to Noor's modified extragradient method; see [2]. However, our algorithm converges strongly in the setting of infinite-dimensional Hilbert spaces.

#### Acknowledgments

Y. Yao was supported in part by Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279 and NSFC 71161001-G0105. Y.-C. Liou was partially supported by the Program TH-1-3, Optimization Lean Cycle, of Sub-Projects TH-1 of Spindle Plan Four in Excellence Teaching and Learning Plan of Cheng Shiu University and was supported in part by NSC 100-2221-E-230-012.

#### References

1. M. Aslam Noor, “Some developments in general variational inequalities,” Applied Mathematics and Computation, vol. 152, no. 1, pp. 199–277, 2004.
2. M. A. Noor, “A class of new iterative methods for general mixed variational inequalities,” Mathematical and Computer Modelling, vol. 31, no. 13, pp. 11–19, 2000.
3. R. E. Bruck, “On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space,” Journal of Mathematical Analysis and Applications, vol. 61, no. 1, pp. 159–164, 1977.
4. J.-L. Lions and G. Stampacchia, “Variational inequalities,” Communications on Pure and Applied Mathematics, vol. 20, pp. 493–519, 1967.
5. W. Takahashi, “Nonlinear complementarity problem and systems of convex inequalities,” Journal of Optimization Theory and Applications, vol. 24, no. 3, pp. 499–506, 1978.
6. J. C. Yao, “Variational inequalities with generalized monotone operators,” Mathematics of Operations Research, vol. 19, no. 3, pp. 691–705, 1994.
7. H. K. Xu and T. H. Kim, “Convergence of hybrid steepest-descent methods for variational inequalities,” Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 185–201, 2003.
8. G. M. Korpelevič, “An extragradient method for finding saddle points and for other problems,” Èkonomika i Matematicheskie Metody, vol. 12, no. 4, pp. 747–756, 1976.
9. W. Takahashi and M. Toyoda, “Weak convergence theorems for nonexpansive mappings and monotone mappings,” Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 417–428, 2003.
10. L.-C. Ceng and J.-C. Yao, “An extragradient-like approximation method for variational inequality problems and fixed point problems,” Applied Mathematics and Computation, vol. 190, no. 1, pp. 205–215, 2007.
11. Y. Yao and J.-C. Yao, “On modified iterative method for nonexpansive mappings and monotone mappings,” Applied Mathematics and Computation, vol. 186, no. 2, pp. 1551–1558, 2007.
12. A. Bnouhachem, M. Aslam Noor, and Z. Hao, “Some new extragradient iterative methods for variational inequalities,” Nonlinear Analysis, vol. 70, no. 3, pp. 1321–1329, 2009.
13. M. A. Noor, “New extragradient-type methods for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 277, no. 2, pp. 379–394, 2003.
14. B. S. He, Z. H. Yang, and X. M. Yuan, “An approximate proximal-extragradient type method for monotone variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 300, no. 2, pp. 362–374, 2004.
15. H.-K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
16. R. T. Rockafellar, “Monotone operators and the proximal point algorithm,” SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.
17. Y. Censor, A. Motova, and A. Segal, “Perturbed projections and subgradient projections for the multiple-sets split feasibility problem,” Journal of Mathematical Analysis and Applications, vol. 327, no. 2, pp. 1244–1256, 2007.
18. Y. Censor, A. Gibali, and S. Reich, “The subgradient extragradient method for solving variational inequalities in Hilbert space,” Journal of Optimization Theory and Applications, vol. 148, no. 2, pp. 318–335, 2011.
19. Y. Yao and N. Shahzad, “Strong convergence of a proximal point algorithm with general errors,” Optimization Letters. In press.
20. Y. Yao and N. Shahzad, “New methods with perturbations for non-expansive mappings in Hilbert spaces,” Fixed Point Theory and Applications, vol. 2011, article 79, 2011.
21. Y. Yao, R. Chen, and H.-K. Xu, “Schemes for finding minimum-norm solutions of variational inequalities,” Nonlinear Analysis, vol. 72, no. 7-8, pp. 3447–3456, 2010.
22. M. Aslam Noor, “Some developments in general variational inequalities,” Applied Mathematics and Computation, vol. 152, no. 1, pp. 199–277, 2004.
23. M. A. Noor, “Projection-proximal methods for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 318, no. 1, pp. 53–62, 2006.
24. M. A. Noor, “Differentiable non-convex functions and general variational inequalities,” Applied Mathematics and Computation, vol. 199, no. 2, pp. 623–630, 2008.
25. M. Aslam Noor and Z. Huang, “Wiener-Hopf equation technique for variational inequalities and nonexpansive mappings,” Applied Mathematics and Computation, vol. 191, no. 2, pp. 504–510, 2007.
26. Y. Yao and M. A. Noor, “Convergence of three-step iterations for asymptotically nonexpansive mappings,” Applied Mathematics and Computation, vol. 187, no. 2, pp. 883–892, 2007.
27. Y. Yao and M. A. Noor, “On viscosity iterative methods for variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 325, no. 2, pp. 776–787, 2007.
28. Y. Yao and M. A. Noor, “On modified hybrid steepest-descent methods for general variational inequalities,” Journal of Mathematical Analysis and Applications, vol. 334, no. 2, pp. 1276–1289, 2007.
29. Y. Yao and M. A. Noor, “On convergence criteria of generalized proximal point algorithms,” Journal of Computational and Applied Mathematics, vol. 217, no. 1, pp. 46–55, 2008.
30. M. A. Noor and Y. Yao, “Three-step iterations for variational inequalities and nonexpansive mappings,” Applied Mathematics and Computation, vol. 190, no. 2, pp. 1312–1321, 2007.
31. Y. Yao and M. A. Noor, “On modified hybrid steepest-descent method for variational inequalities,” Carpathian Journal of Mathematics, vol. 24, no. 1, pp. 139–148, 2008.
32. Y. Yao, M. A. Noor, R. Chen, and Y.-C. Liou, “Strong convergence of three-step relaxed hybrid steepest-descent methods for variational inequalities,” Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 175–183, 2008.
33. Y. Yao, M. A. Noor, and Y.-C. Liou, “A new hybrid iterative algorithm for variational inequalities,” Applied Mathematics and Computation, vol. 216, no. 3, pp. 822–829, 2010.
34. Y. Yao, M. Aslam Noor, K. Inayat Noor, Y.-C. Liou, and H. Yaqoob, “Modified extragradient methods for a system of variational inequalities in Banach spaces,” Acta Applicandae Mathematicae, vol. 110, no. 3, pp. 1211–1224, 2010.
35. Y. Yao, M. A. Noor, K. I. Noor, and Y.-C. Liou, “On an iterative algorithm for variational inequalities in Banach spaces,” Mathematical Communications, vol. 16, no. 1, pp. 95–104, 2011.
36. M. A. Noor, E. Al-Said, K. I. Noor, and Y. Yao, “Extragradient methods for solving nonconvex variational inequalities,” Journal of Computational and Applied Mathematics, vol. 235, no. 9, pp. 3104–3108, 2011.