Abstract

An extension of the subgradient method for solving variational inequality problems is presented. A new iterative process, which combines the current iterate with the image of a nonexpansive mapping, is generated. A weak convergence theorem is obtained for the three sequences generated by the iterative process under some mild conditions.

1. Introduction

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $F: C \to H$ be a continuous mapping. The variational inequality problem, denoted by $\mathrm{VI}(F, C)$, is to find a vector $x^* \in C$ such that
$$\langle F(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C. \quad (1)$$
Throughout the paper, let $S^*$ denote the solution set of $\mathrm{VI}(F, C)$, which is assumed to be nonempty. In the special case when $C$ is the nonnegative orthant, (1) reduces to the nonlinear complementarity problem: find a vector $x^* \ge 0$ such that
$$F(x^*) \ge 0, \quad \langle F(x^*), x^* \rangle = 0.$$
The variational inequality problem plays an important role in optimization theory and variational analysis. There are numerous applications of variational inequalities in mathematics as well as in equilibrium problems arising from engineering, economics, and other areas of real life; see [1–16] and the references therein. Many algorithms, which employ the projection onto the feasible set of the variational inequality or onto some related sets in order to reach a solution iteratively, have been proposed to solve (1). Korpelevich [2] proposed an extragradient method for finding the saddle points of some special cases of the equilibrium problem. Solodov and Svaiter [3] extended the extragradient algorithm by replacing the set $C$ with the intersection of two sets related to $C$. In each iteration of their algorithm, the new vector is calculated according to the following scheme. Given the current vector $x_n$, compute $r(x_n) = x_n - P_C(x_n - F(x_n))$; if $r(x_n) = 0$, stop; otherwise, compute
$$z_n = x_n - \eta_n r(x_n),$$
where $\eta_n = \gamma^{m_n}$, with $\gamma \in (0,1)$, $\sigma \in (0,1)$, and $m_n$ being the smallest nonnegative integer satisfying
$$\langle F(x_n - \gamma^{m_n} r(x_n)), r(x_n) \rangle \ge \sigma \|r(x_n)\|^2,$$
and then compute
$$x_{n+1} = P_{C \cap H_n}(x_n),$$
where $H_n = \{ x \in H : \langle F(z_n), x - z_n \rangle \le 0 \}$.
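For concreteness, the scheme above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it assumes a box-shaped feasible set (so the projection is cheap) and an affine monotone mapping, and it replaces the exact projection onto $C \cap H_n$ with the common simplification $P_C(P_{H_n}(x_n))$; all function names and parameter values are ours.

```python
import numpy as np

def ss_step(x, F, proj_C, gamma=0.5, sigma=0.3, tol=1e-12):
    """One Solodov-Svaiter-style iteration: residual, line search, hyperplane step."""
    r = x - proj_C(x - F(x))                 # projected residual r(x_n)
    if np.linalg.norm(r) < tol:              # r(x_n) = 0: x_n solves VI(F, C)
        return x, True
    m = 0                                    # smallest m satisfying the line search
    while F(x - gamma**m * r) @ r < sigma * (r @ r):
        m += 1
    z = x - gamma**m * r                     # z_n = x_n - eta_n * r(x_n)
    Fz = F(z)
    if Fz @ Fz < tol:                        # F(z_n) ~ 0: z_n (a point of C) solves the VI
        return proj_C(z), True
    # Project onto the half-space H_n = {u : <F(z_n), u - z_n> <= 0}, then onto C.
    x_half = x - max(0.0, Fz @ (x - z)) / (Fz @ Fz) * Fz
    return proj_C(x_half), False

# Illustrative data: F(x) = Ax + b with A positive definite, C = [0, 5]^2.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 1.0])
F = lambda x: A @ x + b
proj_C = lambda x: np.clip(x, 0.0, 5.0)
x = np.array([4.0, 4.0])
for _ in range(100):
    x, done = ss_step(x, F, proj_C)
    if done:
        break
```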

On the other hand, Nadezhkina and Takahashi [11] generated $\{x_n\}$ by the following iterative formula:
$$y_n = P_C(x_n - \lambda_n F(x_n)), \qquad x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S\bigl(P_C(x_n - \lambda_n F(y_n))\bigr),$$
where $\{\lambda_n\}$ is a sequence in $(0, 1/L)$, $\{\alpha_n\}$ is a sequence in $(0, 1)$, and $S: C \to C$ is a nonexpansive mapping. Denoting the fixed point set of $S$ by $\mathrm{Fix}(S)$ and assuming $\mathrm{Fix}(S) \cap S^* \neq \emptyset$, they proved that the sequence $\{x_n\}$ converges weakly to some $x^* \in \mathrm{Fix}(S) \cap S^*$.
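For comparison, one Nadezhkina–Takahashi iteration is a double projection followed by averaging with the nonexpansive mapping. The sketch below is illustrative only; the step size `lam` and weight `alpha` are assumed constants inside the stated ranges:

```python
import numpy as np

def nt_step(x, F, proj_C, S, lam=0.1, alpha=0.5):
    """One Nadezhkina-Takahashi iteration:
       y_n     = P_C(x_n - lam * F(x_n))
       x_{n+1} = alpha * x_n + (1 - alpha) * S(P_C(x_n - lam * F(y_n))),
       with lam in (0, 1/L) for an L-Lipschitz F and alpha in (0, 1)."""
    y = proj_C(x - lam * F(x))
    t = proj_C(x - lam * F(y))
    return alpha * x + (1.0 - alpha) * S(t)
```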

Motivated and inspired by the extragradient methods in [2, 3], in this paper we study further extragradient methods and analyze the weak convergence of the three sequences generated by our method.

The rest of this paper is organized as follows. In Section 2, we give some preliminaries and basic results. In Section 3, we present an extragradient algorithm and then discuss the weak convergence of the sequences generated by the algorithm. In Section 4, we modify the extragradient algorithm and give its convergence analysis.

2. Preliminary and Basic Results

Let $H$ be a real Hilbert space, with $\langle x, y \rangle$ denoting the inner product of the vectors $x, y \in H$ and $\|x\| = \sqrt{\langle x, x \rangle}$ the induced norm. Weak convergence and strong convergence of a sequence $\{x_n\}$ to a point $x \in H$ are denoted by $x_n \rightharpoonup x$ and $x_n \to x$, respectively. The identity mapping from $H$ to itself is denoted by $I$.

For a vector $x \in H$, the orthogonal projection of $x$ onto $C$, denoted by $P_C(x)$, is defined as
$$P_C(x) = \operatorname{argmin} \{ \|y - x\| : y \in C \}.$$
The following lemma states some well-known properties of the orthogonal projection operator.

Lemma 1. For any $x, y \in H$ and $z \in C$, one has
$$\langle x - P_C(x), z - P_C(x) \rangle \le 0; \quad (8)$$
$$\|P_C(x) - P_C(y)\| \le \|x - y\|; \quad (9)$$
$$\|P_C(x) - z\|^2 \le \|x - z\|^2 - \|P_C(x) - x\|^2. \quad (10)$$
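Property (8) can be sanity-checked numerically for a simple set. The sketch below is our illustration, with $C$ a Euclidean ball; it verifies that $\langle x - P_C(x), z - P_C(x) \rangle \le 0$ for random points $z \in C$:

```python
import numpy as np

def project_ball(x, center, radius):
    # Orthogonal projection onto the closed ball C = B(center, radius).
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + (radius / nd) * d

rng = np.random.default_rng(0)
center, radius = np.zeros(3), 1.0
x = 5.0 * rng.normal(size=3)                # a point typically outside C
p = project_ball(x, center, radius)         # P_C(x)
for _ in range(1000):
    z = project_ball(rng.normal(size=3), center, radius)   # a point of C
    assert (x - p) @ (z - p) <= 1e-10       # property (8)
```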

A mapping $F: C \to H$ is called monotone if
$$\langle F(x) - F(y), x - y \rangle \ge 0, \quad \forall x, y \in C.$$
A mapping $F$ is called Lipschitz continuous if there exists an $L > 0$ such that
$$\|F(x) - F(y)\| \le L \|x - y\|, \quad \forall x, y \in C.$$
The graph of a mapping $T$, denoted by $G(T)$, is defined by $G(T) = \{(x, y) : y \in T(x)\}$. A mapping $S: C \to C$ is called nonexpansive if
$$\|S(x) - S(y)\| \le \|x - y\|, \quad \forall x, y \in C,$$
and the fixed point set of a mapping $S$, denoted by $\mathrm{Fix}(S)$, is defined by $\mathrm{Fix}(S) = \{x \in C : S(x) = x\}$. We denote the normal cone of $C$ at $v \in C$ by
$$N_C(v) = \{w \in H : \langle w, y - v \rangle \le 0, \ \forall y \in C\}$$
and define the mapping $T$ as
$$T(v) = \begin{cases} F(v) + N_C(v), & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$
Then $T$ is maximal monotone. It is well known that $0 \in T(v)$ if and only if $v \in S^*$. For more details, see, for example, [9] and the references therein. The following lemma is established in Hilbert space and is well known as the Opial condition.

Lemma 2. For any sequence $\{x_n\} \subset H$ that converges weakly to $x$ ($x_n \rightharpoonup x$), one has
$$\liminf_{n \to \infty} \|x_n - x\| < \liminf_{n \to \infty} \|x_n - y\|, \quad \forall y \neq x.$$

The next lemma was proposed in [10].

Lemma 3 (Demiclosedness principle). Let $C$ be a closed, convex subset of a real Hilbert space $H$, and let $S: C \to C$ be a nonexpansive mapping. Then $I - S$ is demiclosed at $0$; that is, for any sequence $\{x_n\} \subset C$ such that $x_n \rightharpoonup x$ and $(I - S)x_n \to 0$, one has $(I - S)x = 0$.

3. An Algorithm and Its Convergence Analysis

In this section, we give our algorithm, and then discuss its convergence. First, we need the following definition.

Definition 4. For a vector $x \in C$, the projected residual function is defined as
$$r(x) = x - P_C(x - F(x)). \quad (20)$$
Obviously, we have that $x \in S^*$ if and only if $r(x) = 0$. Now we describe our algorithm.
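In code, the projected residual (20) is a one-line function; `proj_C` stands for any orthogonal projection onto $C$, as in the earlier sketches (an illustrative helper, not part of the paper):

```python
def residual(x, F, proj_C):
    # r(x) = x - P_C(x - F(x)); by Definition 4, r(x) = 0 iff x solves VI(F, C).
    return x - proj_C(x - F(x))
```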

Algorithm A. Step 0. Take $x_0 \in C$, $\gamma \in (0, 1)$, $\sigma \in (0, 1)$, and set $n = 0$.
Step 1. For the current iterate $x_n$, compute
$$z_n = x_n - \eta_n r(x_n), \quad (21)$$
where
$$\eta_n = \gamma^{m_n} \quad (22)$$
and $m_n$ is the smallest nonnegative integer satisfying
$$\langle F(x_n - \gamma^{m_n} r(x_n)), r(x_n) \rangle \ge \sigma \|r(x_n)\|^2. \quad (23)$$
Compute
$$y_n = P_{C \cap H_n}(x_n), \quad \text{with } H_n = \{ x \in H : \langle F(z_n), x - z_n \rangle \le 0 \}, \quad (24)$$
$$x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S(y_n), \quad (25)$$
where $\{\alpha_n\} \subset [c, d]$ for some $c, d \in (0, 1)$ and $S: C \to C$ is a nonexpansive mapping.
Step 2. If $r(x_{n+1}) = 0$, stop; otherwise, set $n := n + 1$ and go to Step 1.
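The following is a runnable sketch of the whole of Algorithm A, under the same illustrative assumptions as before: a box feasible set, an affine monotone $F$, the nonexpansive mapping $S$ realized as a projection onto a second box, a constant $\alpha_n \equiv \alpha \in (0,1)$, and the projection onto $C \cap H_n$ approximated by $P_C \circ P_{H_n}$. It sketches the structure of the iteration and is not a verified implementation:

```python
import numpy as np

def algorithm_A(x0, F, proj_C, S, gamma=0.5, sigma=0.3, alpha=0.5,
                max_iter=500, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = x - proj_C(x - F(x))              # projected residual (20)
        if np.linalg.norm(r) < tol:           # Step 2: stop when r(x_n) = 0
            break
        m = 0                                 # smallest m satisfying (23)
        while F(x - gamma**m * r) @ r < sigma * (r @ r):
            m += 1
        z = x - gamma**m * r                  # z_n, (21)-(22)
        Fz = F(z)
        if Fz @ Fz < tol:                     # F(z_n) ~ 0: z_n (in C) solves the VI
            return proj_C(z)
        # y_n = P_{C ∩ H_n}(x_n), approximated by projecting onto H_n, then C (24).
        y = proj_C(x - max(0.0, Fz @ (x - z)) / (Fz @ Fz) * Fz)
        x = alpha * x + (1.0 - alpha) * S(y)  # x_{n+1}, (25)
    return x

# Illustrative data: F monotone affine; C and the box defining S are simple sets.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 1.0])
F = lambda x: A @ x + b
proj_C = lambda x: np.clip(x, 0.0, 5.0)
S = lambda x: np.clip(x, 0.0, 3.0)            # a projection, hence nonexpansive
sol = algorithm_A(np.array([4.0, 4.0]), F, proj_C, S)
```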

Remark 5. The iterate $y_n$ is well defined in Algorithm A according to [3]: the line search (23) terminates after finitely many trials, and $C \cap H_n$ is a nonempty closed convex set (it contains $S^*$; see the proof of Theorem 7), so the projection in (24) exists and is unique. Moreover, the projection onto the half-space $H_n$ admits the explicit formula
$$P_{H_n}(x) = x - \frac{\max\{0, \langle F(z_n), x - z_n \rangle\}}{\|F(z_n)\|^2} F(z_n),$$
which can be combined with the projection onto $C$ to compute $y_n$. For more details, see [3, 4].

Now we investigate the weak convergence property of our algorithm. First we recall the following result, which was proposed by Schu [17].

Lemma 6. Let $H$ be a real Hilbert space, let $\{\alpha_n\}$ be a sequence of real numbers with $0 < a \le \alpha_n \le b < 1$ for all $n$, and let $\{v_n\}, \{w_n\} \subset H$ be such that $\limsup_{n \to \infty} \|v_n\| \le c$, $\limsup_{n \to \infty} \|w_n\| \le c$, and $\lim_{n \to \infty} \|\alpha_n v_n + (1 - \alpha_n) w_n\| = c$ for some $c \ge 0$. Then one has $\lim_{n \to \infty} \|v_n - w_n\| = 0$.

The following theorem is crucial in proving the boundedness of the sequence $\{x_n\}$.

Theorem 7. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, $\mathrm{Fix}(S) \cap S^* \neq \emptyset$, and $x^* \in S^*$. Then for any sequence $\{x_n\}$ generated by Algorithm A, with $y_n = P_{C \cap H_n}(x_n)$ given by (24), one has
$$\|y_n - x^*\|^2 \le \|x_n - x^*\|^2 - \|y_n - x_n\|^2.$$

Proof. Let $x^* \in S^*$ and $y_n = P_{C \cap H_n}(x_n)$. It follows from Lemma 1 (10), applied to the closed convex set $C \cap H_n$, that
$$\|y_n - x^*\|^2 \le \|x_n - x^*\|^2 - \|y_n - x_n\|^2,$$
provided that $x^* \in C \cap H_n$. From (20)–(23) in Algorithm A, we get that $z_n$ is a convex combination of $x_n$ and $P_C(x_n - F(x_n))$, which means $z_n \in C$. Since $F$ is monotone, connecting with (1), we obtain
$$\langle F(z_n), x^* - z_n \rangle \le \langle F(x^*), x^* - z_n \rangle \le 0,$$
which means $x^* \in H_n$. Thus $x^* \in C \cap H_n$, the inequality above holds, and the proof is complete.

Theorem 8. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and let $\mathrm{Fix}(S) \cap S^* \neq \emptyset$. Then for any sequences $\{x_n\}$, $\{y_n\}$ generated by Algorithm A, one has
$$\lim_{n \to \infty} \|x_n - y_n\| = 0.$$
Furthermore,
$$\lim_{n \to \infty} r(x_n) = 0.$$

Proof. Let $x^* \in \mathrm{Fix}(S) \cap S^*$. Using (25), the convexity of $\|\cdot\|^2$, and Theorem 7, we have
$$\|x_{n+1} - x^*\|^2 \le \alpha_n \|x_n - x^*\|^2 + (1 - \alpha_n) \|S(y_n) - x^*\|^2 \le \alpha_n \|x_n - x^*\|^2 + (1 - \alpha_n) \|y_n - x^*\|^2 \le \|x_n - x^*\|^2 - (1 - \alpha_n) \|y_n - x_n\|^2,$$
where the second inequality follows from the fact that $S$ is a nonexpansive mapping with $S(x^*) = x^*$.
That means $\lim_{n \to \infty} \|x_n - x^*\|$ exists, $\{x_n\}$ is bounded, and, since $\alpha_n \le d < 1$, $\|x_n - y_n\| \to 0$ as $n \to \infty$. Since $F$ is continuous and $\{z_n\}$ is bounded, there exists a constant $M > 0$ such that $\|F(z_n)\| \le M$ for all $n$. Using (21)–(23), the Cauchy–Schwarz inequality, and the fact that $y_n \in H_n$, we have
$$\|x_n - y_n\| \ge \operatorname{dist}(x_n, H_n) = \frac{\langle F(z_n), x_n - z_n \rangle}{\|F(z_n)\|} \ge \frac{\sigma \eta_n \|r(x_n)\|^2}{M}.$$
So we know that $\lim_{n \to \infty} \eta_n \|r(x_n)\|^2 = 0$, which implies that $\lim_{n \to \infty} \|r(x_n)\| = 0$ or $\liminf_{n \to \infty} \eta_n = 0$.
If $\liminf_{n \to \infty} \eta_n > 0$, we get the conclusion.
If $\liminf_{n \to \infty} \eta_n = 0$, choose a subsequence $\{x_{n_k}\}$ with $\eta_{n_k} \to 0$. By the minimality of $m_{n_k}$, the inequality (23) in Algorithm A is not satisfied for $m_{n_k} - 1$; that is, for all $k$ large enough,
$$\langle F(x_{n_k} - \gamma^{m_{n_k} - 1} r(x_{n_k})), r(x_{n_k}) \rangle < \sigma \|r(x_{n_k})\|^2.$$
Applying (8) with $x = x_{n_k} - F(x_{n_k})$ and $z = x_{n_k}$ leads to
$$\langle F(x_{n_k}), r(x_{n_k}) \rangle \ge \|r(x_{n_k})\|^2.$$
Since $\{r(x_{n_k})\}$ is bounded and $\gamma^{m_{n_k} - 1} = \eta_{n_k}/\gamma \to 0$, the Lipschitz continuity of $F$ gives $\langle F(x_{n_k}), r(x_{n_k}) \rangle \le \sigma \|r(x_{n_k})\|^2 + \varepsilon_k$ with $\varepsilon_k \to 0$. Combining the two inequalities yields $(1 - \sigma) \|r(x_{n_k})\|^2 \le \varepsilon_k$; since $\sigma \in (0, 1)$, we obtain $\|r(x_{n_k})\| \to 0$. Applying this argument to an arbitrary subsequence shows that every subsequence of $\{r(x_n)\}$ admits a further subsequence converging to $0$; therefore $\lim_{n \to \infty} r(x_n) = 0$.
Finally, noting that $z_n = x_n - \eta_n r(x_n)$, it easily follows that $\|x_n - z_n\| = \eta_n \|r(x_n)\| \to 0$ as well. The proof is complete.

Theorem 9. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and let $\Gamma \neq \emptyset$. Then the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ generated by Algorithm A converge weakly to the same point $\bar{x} \in \Gamma$, where $\Gamma = \mathrm{Fix}(S) \cap S^*$.

Proof. By Theorem 8, we know that $\{x_n\}$ is bounded, which implies that there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ that converges weakly to some point $\bar{x}$.
First, we show that $\bar{x} \in \mathrm{Fix}(S)$.
Let $x^* \in \Gamma$ and set $c = \lim_{n \to \infty} \|x_n - x^*\|$, which exists by the proof of Theorem 8. Since $S$ is a nonexpansive mapping, from Theorem 7 we have
$$\limsup_{n \to \infty} \|S(y_n) - x^*\| \le \limsup_{n \to \infty} \|y_n - x^*\| \le c.$$
Passing to the limit in (25), $\|\alpha_n (x_n - x^*) + (1 - \alpha_n)(S(y_n) - x^*)\| = \|x_{n+1} - x^*\| \to c$. From Lemma 6 (with $v_n = x_n - x^*$ and $w_n = S(y_n) - x^*$), it follows that
$$\lim_{n \to \infty} \|S(y_n) - x_n\| = 0.$$
By the triangle inequality, we have
$$\|S(x_n) - x_n\| \le \|S(x_n) - S(y_n)\| + \|S(y_n) - x_n\| \le \|x_n - y_n\| + \|S(y_n) - x_n\|,$$
and passing to the limit we deduce that $\|S(x_n) - x_n\| \to 0$, which implies that $\bar{x} \in \mathrm{Fix}(S)$ by Lemma 3.
Second, we show that $\bar{x} \in S^*$.
Since $\lim_{n \to \infty} r(x_n) = 0$ by Theorem 8, the points $w_{n_k} := x_{n_k} - r(x_{n_k}) = P_C(x_{n_k} - F(x_{n_k}))$ belong to $C$ and also converge weakly to $\bar{x}$.
Let $(v, w) \in G(T)$; then $w - F(v) \in N_C(v)$, and thus
$$\langle v - y, w - F(v) \rangle \ge 0, \quad \forall y \in C.$$
Taking $y = w_{n_k}$, we have
$$\langle v - w_{n_k}, w \rangle \ge \langle v - w_{n_k}, F(v) \rangle \ge \langle v - w_{n_k}, F(w_{n_k}) \rangle,$$
where the last inequality follows from the monotonicity of $F$. Applying (8) with $x = x_{n_k} - F(x_{n_k})$ and $z = v$, we obtain $\langle v - w_{n_k}, F(x_{n_k}) - r(x_{n_k}) \rangle \ge 0$, and hence
$$\langle v - w_{n_k}, w \rangle \ge \langle v - w_{n_k}, F(w_{n_k}) - F(x_{n_k}) \rangle + \langle v - w_{n_k}, r(x_{n_k}) \rangle.$$
Since $F$ is Lipschitz continuous, $\|w_{n_k} - x_{n_k}\| = \|r(x_{n_k})\| \to 0$, and $\{v - w_{n_k}\}$ is bounded, both terms on the right-hand side tend to $0$. Passing to the limit, we obtain $\langle v - \bar{x}, w \rangle \ge 0$. As $T$ is maximal monotone, we have $0 \in T(\bar{x})$, which implies that $\bar{x} \in S^*$.
At last, we show that such $\bar{x}$ is unique.
Let $\{x_{m_j}\}$ be another subsequence of $\{x_n\}$, such that $x_{m_j} \rightharpoonup \hat{x}$. Then we conclude that $\hat{x} \in \Gamma$ as well. Suppose $\bar{x} \neq \hat{x}$; since $\lim_{n \to \infty} \|x_n - z\|$ exists for every $z \in \Gamma$, by Lemma 2 we have
$$\lim_{n \to \infty} \|x_n - \bar{x}\| = \liminf_{k \to \infty} \|x_{n_k} - \bar{x}\| < \liminf_{k \to \infty} \|x_{n_k} - \hat{x}\| = \lim_{n \to \infty} \|x_n - \hat{x}\| = \liminf_{j \to \infty} \|x_{m_j} - \hat{x}\| < \liminf_{j \to \infty} \|x_{m_j} - \bar{x}\| = \lim_{n \to \infty} \|x_n - \bar{x}\|,$$
and this is a contradiction. Thus $\bar{x} = \hat{x}$, so the whole sequence $\{x_n\}$ converges weakly to $\bar{x}$; since $\|x_n - y_n\| \to 0$ and $\|x_n - z_n\| \to 0$, the sequences $\{y_n\}$ and $\{z_n\}$ converge weakly to the same point, and the proof is complete.

4. Further Study

In this section, we propose an extension of Algorithm A, which is effective in practice. Similar to the investigation in Section 3, for a constant $\mu > 0$, we define a new projected residual function as follows:
$$r_\mu(x) = x - P_C(x - \mu F(x)). \quad (67)$$
It is clear that the new projected residual function (67) degenerates into (20) by setting $\mu = 1$.

Algorithm B. Step 0. Take $x_0 \in C$, $\mu > 0$, $\gamma \in (0, 1)$, $\sigma \in (0, 1)$, and set $n = 0$.
Step 1. For the current iterate $x_n$, compute
$$z_n = x_n - \eta_n r_\mu(x_n),$$
where $\eta_n = \gamma^{m_n}$ and $m_n$ is the smallest nonnegative integer satisfying
$$\langle F(x_n - \gamma^{m_n} r_\mu(x_n)), r_\mu(x_n) \rangle \ge \frac{\sigma}{\mu} \|r_\mu(x_n)\|^2.$$
Compute
$$y_n = P_{C \cap H_n}(x_n), \qquad x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S(y_n),$$
where $H_n = \{ x \in H : \langle F(z_n), x - z_n \rangle \le 0 \}$ and $\{\alpha_n\} \subset [c, d]$ for some $c, d \in (0, 1)$.
Step 2. If $r_\mu(x_{n+1}) = 0$, stop; otherwise, set $n := n + 1$ and go to Step 1.
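Since Algorithm B differs from Algorithm A only through the scaled residual $r_\mu$, a sketch needs only to thread $\mu$ through the residual and the line search. As before, the names and the $P_C \circ P_{H_n}$ shortcut are our illustrative assumptions:

```python
import numpy as np

def residual_mu(x, F, proj_C, mu):
    # r_mu(x) = x - P_C(x - mu * F(x)); reduces to r(x) of (20) when mu = 1.
    return x - proj_C(x - mu * F(x))

def algorithm_B_step(x, F, proj_C, S, mu=0.5, gamma=0.5, sigma=0.3, alpha=0.5):
    """One step of Algorithm B: Algorithm A with r replaced by r_mu."""
    r = residual_mu(x, F, proj_C, mu)
    m = 0                                     # line search with the 1/mu scaling
    while F(x - gamma**m * r) @ r < (sigma / mu) * (r @ r):
        m += 1
    z = x - gamma**m * r
    Fz = F(z)
    y = proj_C(x - max(0.0, Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return alpha * x + (1.0 - alpha) * S(y)
```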
In the rest of this section, we discuss the weak convergence properties of Algorithm B.

Lemma 10. For any $\mu > 0$, one has $x \in S^*$ if and only if $r_\mu(x) = 0$.

Therefore, solving the variational inequality (1) is equivalent to finding a zero point of the projected residual function $r_\mu(\cdot)$. Meanwhile, we know that $r_\mu(x)$ is a continuous function of $x$, as the projection mapping is nonexpansive.

Lemma 11. For any $x \in C$ and $\mu > 0$, it holds that
$$\langle F(x), r_\mu(x) \rangle \ge \frac{1}{\mu} \|r_\mu(x)\|^2.$$

Theorem 12. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, $\mathrm{Fix}(S) \cap S^* \neq \emptyset$, and $x^* \in S^*$. Then for any sequence $\{x_n\}$ generated by Algorithm B, one has
$$\|y_n - x^*\|^2 \le \|x_n - x^*\|^2 - \|y_n - x_n\|^2.$$

Proof. The proof of this theorem is similar to that of Theorem 7, so we omit it.

Theorem 13. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and let $\mathrm{Fix}(S) \cap S^* \neq \emptyset$. Then for any sequences $\{x_n\}$, $\{y_n\}$ generated by Algorithm B, one has
$$\lim_{n \to \infty} \|x_n - y_n\| = 0.$$
Furthermore,
$$\lim_{n \to \infty} r_\mu(x_n) = 0.$$

Proof. The proof of this theorem is similar to that of Theorem 8. The only difference is that the inequality $\langle F(x_{n_k}), r(x_{n_k}) \rangle \ge \|r(x_{n_k})\|^2$ is substituted by
$$\langle F(x_{n_k}), r_\mu(x_{n_k}) \rangle \ge \frac{1}{\mu} \|r_\mu(x_{n_k})\|^2,$$
which follows from Lemma 11 with $x = x_{n_k}$.

Theorem 14. Let $C$ be a nonempty, closed, and convex subset of $H$, let $F: C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and let $\Gamma = \mathrm{Fix}(S) \cap S^* \neq \emptyset$. Then the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ generated by Algorithm B converge weakly to the same point $\bar{x} \in \Gamma$.

5. Conclusions

In this paper, we proposed an extension of the extragradient algorithm for solving monotone variational inequalities and established a weak convergence theorem for it. Algorithm B is effective in practice. Meanwhile, we pointed out that the point obtained by our algorithm is not only a solution of the variational inequality but also a fixed point of a given nonexpansive mapping.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant no. 11171362) and the Fundamental Research Funds for the Central Universities (Grant no. CDJXS12101103). The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped to improve the paper.