#### Abstract

We suggest and analyze a modified extragradient method for solving variational inequalities, which converges strongly to the minimum-norm solution of the variational inequality in an infinite-dimensional Hilbert space.

#### 1. Introduction

Let $C$ be a closed convex subset of a real Hilbert space $H$. A mapping $A : C \to H$ is called $\alpha$-inverse-strongly monotone if there exists a positive real number $\alpha$ such that
$$\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2, \quad \forall x, y \in C. \tag{1.1}$$
The variational inequality problem is to find $x^* \in C$ such that
$$\langle Ax^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.2}$$
The set of solutions of the variational inequality problem is denoted by $VI(C, A)$. It is well known that variational inequality theory has emerged as an important tool in studying a wide class of obstacle, unilateral, and equilibrium problems, which arise in several branches of pure and applied sciences in a unified and general framework. Several numerical methods have been developed for solving variational inequalities and related optimization problems; see [1–36] and the references therein.

It is well known that the variational inequality problem is equivalent to a fixed point problem. This alternative formulation has been used to study the existence of solutions of the variational inequality as well as to develop several numerical methods. Using this equivalence, one can suggest the following iterative method.
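In standard notation (an assumption of this sketch: $P_C$ denotes the metric projection onto $C$, introduced in Section 2, and $\lambda > 0$ is arbitrary), this equivalence reads:

```latex
x^* \in VI(C, A) \iff x^* = P_C[x^* - \lambda A x^*],
```

so solutions of the variational inequality are exactly the fixed points of the mapping $P_C(I - \lambda A)$.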

*Algorithm 1.1. *For a given $x_0 \in C$, calculate the approximate solution $x_{n+1}$ by the iterative scheme
$$x_{n+1} = P_C[x_n - \lambda A x_n], \quad n \ge 0.$$
It is well known that the convergence of Algorithm 1.1 requires that the operator $A$ be both strongly monotone and Lipschitz continuous. These restrictive conditions rule out its application to several important problems. To overcome these drawbacks, Korpelevič suggested in [8] the extragradient algorithm
$$y_n = P_C[x_n - \lambda A x_n], \qquad x_{n+1} = P_C[x_n - \lambda A y_n], \quad n \ge 0.$$
Noor [2] further suggested and analyzed the following iterative method for solving the variational inequality (1.2).

*Algorithm 1.2. *For a given $x_0 \in C$, calculate the approximate solution $x_{n+1}$ by the iterative scheme
$$y_n = P_C[x_n - \lambda A x_n], \qquad x_{n+1} = P_C[y_n - \lambda A y_n], \quad n \ge 0,$$
which is known as the modified extragradient method. For the convergence analysis of Algorithm 1.2, see Noor [1, 2] and the references therein. We would like to point out that Algorithm 1.2 is quite different from the method of Korpelevič [8]. However, Algorithm 1.2 fails, in general, to converge strongly in the setting of infinite-dimensional Hilbert spaces.
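As a concrete, hedged illustration (the toy problem, step size, and iteration counts below are our own choices, not taken from the paper), both Korpelevič's extragradient method and the modified extragradient method can be run on the variational inequality over the box $C = [0, 2]^2$ with $A(x) = (x_1 - 1, 0)$, which is $1$-inverse-strongly monotone; its solution set is $\{1\} \times [0, 2]$:

```python
# Toy illustration (our own example, not from the paper):
# VI over the box C = [0, 2]^2 with A(x) = (x1 - 1, 0).
# A is 1-inverse-strongly monotone; the solution set is {1} x [0, 2].

def proj_box(x, lo=0.0, hi=2.0):
    """Metric projection onto the box [lo, hi]^2."""
    return [min(max(xi, lo), hi) for xi in x]

def A(x):
    return [x[0] - 1.0, 0.0]

def korpelevich(x, lam=0.5, iters=60):
    """Korpelevich's extragradient: x_{n+1} = P_C[x_n - lam*A(y_n)]."""
    for _ in range(iters):
        y = proj_box([xi - lam * ai for xi, ai in zip(x, A(x))])
        x = proj_box([xi - lam * ai for xi, ai in zip(x, A(y))])
    return x

def modified_extragradient(x, lam=0.5, iters=60):
    """Algorithm 1.2: x_{n+1} = P_C[y_n - lam*A(y_n)]."""
    for _ in range(iters):
        y = proj_box([xi - lam * ai for xi, ai in zip(x, A(x))])
        x = proj_box([yi - lam * ai for yi, ai in zip(y, A(y))])
    return x

x_korp = korpelevich([2.0, 1.5])
x_noor = modified_extragradient([2.0, 1.5])
print(x_korp, x_noor)  # first coordinates approach 1; second stays at 1.5
```

On this example both methods drive the first coordinate to $1$ while leaving the second coordinate at its initial value: they converge to *some* solution, not necessarily the minimum-norm one, which motivates the modification studied in this paper.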

In this paper, we suggest and analyze a very simple modified extragradient method which converges strongly to the minimum-norm solution of the variational inequality (1.2) in an infinite-dimensional Hilbert space. This new method includes the method of Noor [2] as a special case.

#### 2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$, and let $C$ be a closed convex subset of $H$. It is well known that, for any $u \in H$, there exists a unique $u_0 \in C$ such that
$$\|u - u_0\| = \inf\{\|u - x\| : x \in C\}.$$
We denote $u_0 = P_C(u)$, where $P_C$ is called the *metric projection* of $H$ onto $C$. The metric projection $P_C$ of $H$ onto $C$ has the following basic properties:
(i) $\|P_C(x) - P_C(y)\| \le \|x - y\|$ for all $x, y \in H$;
(ii) $\langle x - P_C(x), y - P_C(x) \rangle \le 0$ for every $x \in H$ and $y \in C$;
(iii) $\langle x - y, P_C(x) - P_C(y) \rangle \ge \|P_C(x) - P_C(y)\|^2$ for all $x, y \in H$.
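These three properties can be checked numerically (our own illustration, not part of the paper) for the projection onto the closed unit ball in $\mathbb{R}^2$, which has the closed form $P_C(x) = x / \max(1, \|x\|)$:

```python
import math
import random

def proj_ball(x):
    """Metric projection onto the closed unit ball in R^2: x / max(1, ||x||)."""
    s = 1.0 / max(1.0, math.hypot(x[0], x[1]))
    return (x[0] * s, x[1] * s)

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

def inner(u, v):
    return u[0] * v[0] + u[1] * v[1]

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    y = (random.uniform(-3, 3), random.uniform(-3, 3))
    px, py = proj_ball(x), proj_ball(y)
    z = proj_ball((random.uniform(-3, 3), random.uniform(-3, 3)))  # a point of C
    # (i) nonexpansiveness
    assert dist(px, py) <= dist(x, y) + 1e-12
    # (ii) variational characterization: <x - P(x), z - P(x)> <= 0 for all z in C
    assert inner((x[0] - px[0], x[1] - px[1]), (z[0] - px[0], z[1] - px[1])) <= 1e-10
    # (iii) firm nonexpansiveness: <x - y, P(x) - P(y)> >= ||P(x) - P(y)||^2
    assert inner((x[0] - y[0], x[1] - y[1]), (px[0] - py[0], px[1] - py[1])) >= dist(px, py) ** 2 - 1e-10
print("all three projection properties hold on random samples")
```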

We need the following lemma for proving our main results.

Lemma 2.1 (see [15]). *Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that*
(1) *$\sum_{n=0}^{\infty} \gamma_n = \infty$;*
(2) *$\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} |\gamma_n \delta_n| < \infty$.*
*Then $\lim_{n \to \infty} a_n = 0$.*
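A quick numerical sanity check of the lemma (our own illustration): run the Xu-type recursion $a_{n+1} = (1 - \gamma_n) a_n + \gamma_n \delta_n$ with equality, choosing $\gamma_n = 1/(n+2)$ (so $\sum_n \gamma_n = \infty$) and $\delta_n = 1/(n+1)$ (so $\limsup_n \delta_n \le 0$):

```python
# Recursion a_{n+1} = (1 - g_n) a_n + g_n d_n with
# g_n = 1/(n+2)  (g_n in (0, 1), divergent sum) and
# d_n = 1/(n+1)  (d_n -> 0, so limsup d_n <= 0).
a = 5.0
for n in range(10_000):
    g = 1.0 / (n + 2)
    d = 1.0 / (n + 1)
    a = (1.0 - g) * a + g * d
print(a)  # decays toward 0, as the lemma predicts
```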

#### 3. Main Result

In this section we will state and prove our main result.

Theorem 3.1. *Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $A : C \to H$ be an $\alpha$-inverse-strongly monotone mapping. Suppose that $VI(C, A) \neq \emptyset$. For $x_0 \in C$ given arbitrarily, define a sequence $\{x_n\}$ iteratively by the scheme (3.1), where $\{\alpha_n\}$ is a sequence in $(0, 1)$ and $\lambda \in (0, 2\alpha)$ is a constant. Assume the following conditions are satisfied:*
(C1): *$\lim_{n \to \infty} \alpha_n = 0$;*
(C2): *$\sum_{n=0}^{\infty} \alpha_n = \infty$;*
(C3): *$\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$.*
*Then the sequence $\{x_n\}$ generated by (3.1) converges strongly to $\tilde{x} = P_{VI(C, A)}(0)$, which is the minimum-norm element of $VI(C, A)$.*
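To see how a vanishing damping factor can select the minimum-norm solution, here is a sketch of one plausible damped variant of Algorithm 1.2 (an assumption for illustration only, not necessarily the exact scheme (3.1)): $y_n = P_C[(1 - \alpha_n) x_n - \lambda A x_n]$, $x_{n+1} = P_C[y_n - \lambda A y_n]$, run on the toy problem $C = [0, 2]^2$, $A(x) = (x_1 - 1, 0)$, whose solution set is $\{1\} \times [0, 2]$ with minimum-norm element $(1, 0)$:

```python
# Hypothetical damped variant (an assumption, NOT necessarily the paper's (3.1)):
#   y_n     = P_C[(1 - a_n) x_n - lam * A(x_n)]
#   x_{n+1} = P_C[y_n - lam * A(y_n)]
# Toy problem: C = [0, 2]^2, A(x) = (x1 - 1, 0); solution set {1} x [0, 2],
# minimum-norm solution (1, 0).

def proj_box(x, lo=0.0, hi=2.0):
    return [min(max(xi, lo), hi) for xi in x]

def A(x):
    return [x[0] - 1.0, 0.0]

x = [2.0, 1.5]
lam = 0.5                      # lam in (0, 2*alpha) with alpha = 1
for n in range(20_000):
    a_n = 1.0 / (n + 2)        # a_n -> 0 and sum a_n = infinity
    y = proj_box([(1.0 - a_n) * xi - lam * ai for xi, ai in zip(x, A(x))])
    x = proj_box([yi - lam * ai for yi, ai in zip(y, A(y))])
print(x)  # approaches the minimum-norm solution (1, 0)
```

In contrast to the undamped method, the factor $(1 - \alpha_n)$ pulls the iterates toward the origin; since $\alpha_n \to 0$ and $\sum_n \alpha_n = \infty$, the second coordinate is driven to $0$ while the first still converges to $1$.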

We will divide the detailed proof into several steps.

*Proof. *Take $x^* \in VI(C, A)$. First we will use the following facts: (1) $VI(C, A) = \mathrm{Fix}(P_C(I - \lambda A))$; in particular, $x^* = P_C[x^* - \lambda A x^*]$; (2) for $\lambda \in (0, 2\alpha)$, the mapping $I - \lambda A$ is nonexpansive and
$$\|(I - \lambda A)x - (I - \lambda A)y\|^2 \le \|x - y\|^2 - \lambda(2\alpha - \lambda)\|Ax - Ay\|^2$$
for all $x, y \in C$.
From (3.1), we have
Thus,
Therefore, $\{x_n\}$ is bounded, and so are $\{y_n\}$ and $\{Ay_n\}$.

From (3.1), we have
where $M$ is an appropriate positive constant. Hence, by Lemma 2.1, we obtain $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$.
From (3.4), (3.5) and the convexity of the norm, we deduce
Therefore, we have
Since $\alpha_n \to 0$ and $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$, we obtain $\|Ax_n - Ax^*\| \to 0$ as $n \to \infty$.

By the property (ii) of the metric projection $P_C$, we have
It follows that
and hence
which implies that
Since $\alpha_n \to 0$, $\|x_{n+1} - x_n\| \to 0$, and $\|Ax_n - Ax^*\| \to 0$, we derive $\|x_n - y_n\| \to 0$.

Next we show that
$$\limsup_{n \to \infty} \langle -\tilde{x}, x_n - \tilde{x} \rangle \le 0,$$
where $\tilde{x} = P_{VI(C, A)}(0)$. To show it, we choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that
$$\lim_{i \to \infty} \langle -\tilde{x}, x_{n_i} - \tilde{x} \rangle = \limsup_{n \to \infty} \langle -\tilde{x}, x_n - \tilde{x} \rangle.$$
As $\{x_{n_i}\}$ is bounded, we have that a subsequence $\{x_{n_{i_j}}\}$ of $\{x_{n_i}\}$ converges weakly to some point $z$.

Next we show that $z \in VI(C, A)$. We define a mapping $T$ by
$$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C, \end{cases}$$
where $N_C v$ denotes the normal cone to $C$ at $v \in C$.
Then $T$ is maximal monotone (see [16]). Let $(v, w) \in G(T)$. Since $w - Av \in N_C v$ and $x_n \in C$, we have $\langle v - x_n, w - Av \rangle \ge 0$. On the other hand, from the definition of $y_n$ and the property (ii) of $P_C$, we have
that is,
Therefore, we have
Noting that $\|x_{n_i} - y_{n_i}\| \to 0$, $\alpha_{n_i} \to 0$, and $A$ is Lipschitz continuous, we obtain $\langle v - z, w \rangle \ge 0$. Since $T$ is maximal monotone, we have $0 \in Tz$, and hence $z \in VI(C, A)$. Therefore,
$$\limsup_{n \to \infty} \langle -\tilde{x}, x_n - \tilde{x} \rangle = \langle -\tilde{x}, z - \tilde{x} \rangle \le 0.$$
Finally, we prove $x_n \to \tilde{x}$. By the property (ii) of the metric projection $P_C$, we have
Hence,
Therefore,
We apply Lemma 2.1 to the last inequality to deduce that $x_n \to \tilde{x}$. This completes the proof.

*Remark 3.2. *Our algorithm (3.1) is similar to Noor’s modified extragradient method; see [2]. However, our algorithm converges strongly in the setting of infinite-dimensional Hilbert spaces.

#### Acknowledgments

Y. Yao was supported in part by Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279 and NSFC 71161001-G0105. Y.-C. Liou was partially supported by the Program TH-1-3, Optimization Lean Cycle, of Sub-Projects TH-1 of Spindle Plan Four in Excellence Teaching and Learning Plan of Cheng Shiu University and was supported in part by NSC 100-2221-E-230-012.