Abstract

We present a projection algorithm for finding a solution of a variational inclusion problem in a real Hilbert space. Furthermore, we prove that the proposed iterative algorithm converges strongly to a solution of the variational inclusion problem which also solves a certain variational inequality.

1. Introduction

Let $H$ be a real Hilbert space, let $A: H \to H$ be a single-valued nonlinear mapping, and let $B: H \to 2^{H}$ be a set-valued mapping. We are concerned with the following variational inclusion, which is to find a point $u \in H$ such that
$$\theta \in A(u) + B(u), \tag{1.1}$$
where $\theta$ is the zero vector in $H$. The set of solutions of problem (1.1) is denoted by $I(A, B)$. If $B = N_{K}$, the normal cone mapping of a nonempty closed convex set $K \subset H$, then problem (1.1) becomes the generalized equation introduced by Robinson [1]. If $A = 0$, then problem (1.1) becomes the inclusion problem introduced by Rockafellar [2]. It is known that (1.1) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas, including mathematical programming, complementarity, variational inequalities, optimal control, mathematical economics, equilibria, game theory, and so forth. Various types of variational inclusion problems have also been extended and generalized. Recently, Zhang et al. [3] introduced a new iterative scheme for finding a common element of the set of solutions to problem (1.1) and the set of fixed points of nonexpansive mappings in Hilbert spaces. Peng et al. [4] introduced another iterative scheme, by the viscosity approximation method, for finding a common element of the set of solutions of a variational inclusion with a set-valued maximal monotone mapping and an inverse strongly monotone mapping, the set of solutions of an equilibrium problem, and the set of fixed points of a nonexpansive mapping. For some related works, see [5–28] and the references therein.

Inspired and motivated by the works in the literature, in this paper we present a projection algorithm for finding a solution of a variational inclusion problem in a real Hilbert space. Furthermore, we prove that the proposed iterative algorithm converges strongly to a solution of the variational inclusion problem which also solves a certain variational inequality.

2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$. Recall that a mapping $A: C \to H$ is said to be $\alpha$-inverse strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^{2} \quad \text{for all } x, y \in C.$$
A mapping $F: H \to H$ is strongly positive on $H$ if there exists a constant $\bar{\gamma} > 0$ such that
$$\langle Fx, x \rangle \ge \bar{\gamma}\|x\|^{2} \quad \text{for all } x \in H.$$
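The inverse strong monotonicity above can be checked numerically for a concrete linear mapping: $A(x) = Mx$ with $M$ symmetric positive semidefinite is $(1/L)$-inverse strongly monotone, where $L$ is the largest eigenvalue of $M$ (a standard fact; the sketch below, with its random matrix, test points, and tolerance, is purely illustrative and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A(x) = M x with M symmetric positive semidefinite is (1/L)-inverse
# strongly monotone, L = largest eigenvalue of M; spot-check on random points.
Q = rng.standard_normal((3, 3))
M = Q.T @ Q                        # symmetric positive semidefinite
L = np.linalg.eigvalsh(M)[-1]      # eigvalsh returns ascending eigenvalues
alpha = 1.0 / L

def A(x):
    return M @ x

ok = True
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = (A(x) - A(y)) @ (x - y)                      # <Ax - Ay, x - y>
    rhs = alpha * np.linalg.norm(A(x) - A(y)) ** 2     # alpha ||Ax - Ay||^2
    ok &= lhs >= rhs - 1e-8                            # small numerical slack
print(ok)
```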

For any $x \in H$, there exists a unique nearest point in $C$, denoted by $P_{C}x$, such that
$$\|x - P_{C}x\| \le \|x - y\| \quad \text{for all } y \in C.$$
Such a $P_{C}$ is called the metric projection of $H$ onto $C$. We know that $P_{C}$ is nonexpansive. Further, for $x \in H$ and $z \in C$,
$$z = P_{C}x \iff \langle x - z, z - y \rangle \ge 0 \quad \text{for all } y \in C.$$
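Both the nonexpansiveness and the characterization of the metric projection can be illustrated on the closed unit ball, where $P_{C}$ has a closed form (an illustrative Python sketch; the sampling scheme and tolerances are arbitrary choices, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Metric projection onto the closed unit ball C = {y : ||y|| <= 1}.
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

x = rng.standard_normal(4) * 3.0
z = proj_ball(x)

# Characterization: z = P_C(x) iff <x - z, z - y> >= 0 for all y in C.
# Spot-check against random points y in C.
ok = all((x - z) @ (z - proj_ball(rng.standard_normal(4) * 2)) >= -1e-10
         for _ in range(500))

# Nonexpansiveness: ||P_C u - P_C v|| <= ||u - v||.
u, v = rng.standard_normal(4), rng.standard_normal(4)
ok &= np.linalg.norm(proj_ball(u) - proj_ball(v)) <= np.linalg.norm(u - v) + 1e-12

print(ok)
```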

A set-valued mapping $B: H \to 2^{H}$ is called monotone if, for all $x, y \in H$, $f \in Bx$ and $g \in By$ imply $\langle x - y, f - g \rangle \ge 0$. A monotone mapping $B$ is maximal if its graph $\operatorname{Graph}(B)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $B$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g \rangle \ge 0$ for every $(y, g) \in \operatorname{Graph}(B)$ implies $f \in Bx$.

Let the set-valued mapping $B: H \to 2^{H}$ be maximal monotone. We define the resolvent operator $J_{B,\lambda}$ associated with $B$ and $\lambda$ as follows:
$$J_{B,\lambda}(u) = (I + \lambda B)^{-1}(u), \quad u \in H,$$
where $\lambda$ is a positive number. It is worth mentioning that the resolvent operator $J_{B,\lambda}$ is single-valued, nonexpansive, and 1-inverse strongly monotone (i.e., firmly nonexpansive), and that $u$ is a solution of problem (1.1) if and only if $u$ is a fixed point of the operator $J_{B,\lambda}(I - \lambda A)$ for all $\lambda > 0$; see for instance [29].
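For a concrete instance, when $B = \partial|\cdot|$ is the subdifferential of the absolute value, the resolvent $J_{B,\lambda}$ is the well-known soft-thresholding map, and the fixed-point characterization of problem (1.1) can be verified directly on a one-dimensional example (an illustrative sketch; the datum $b = 2.5$ is an arbitrary choice):

```python
import numpy as np

# For B = the subdifferential of |.|, the resolvent J_{B,lam} = (I + lam*B)^{-1}
# is the soft-thresholding map (a standard fact).
def resolvent(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Toy inclusion 0 in A(u) + B(u) with A(u) = u - b (1-inverse strongly
# monotone): the optimality condition of min_u 0.5*(u - b)^2 + |u|,
# whose solution is u* = soft(b, 1).
b = 2.5
u_star = resolvent(b, 1.0)

# A solution of the inclusion is a fixed point of u -> J_{B,lam}(u - lam*A(u))
# for every lam > 0.
for lam in (0.1, 0.5, 1.0, 1.9):
    assert abs(resolvent(u_star - lam * (u_star - b), lam) - u_star) < 1e-12

print(u_star)
```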

Lemma 2.1 (see [30]). Let $B: H \to 2^{H}$ be a maximal monotone mapping and let $A: H \to H$ be a monotone, Lipschitz-continuous mapping. Then the mapping $A + B: H \to 2^{H}$ is maximal monotone.

Lemma 2.2 (see [8]). Let $\{x_{n}\}$ and $\{z_{n}\}$ be bounded sequences in a Banach space $E$ and let $\{\beta_{n}\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty}\beta_{n} \le \limsup_{n\to\infty}\beta_{n} < 1$. Suppose that
$$x_{n+1} = (1 - \beta_{n})z_{n} + \beta_{n}x_{n}$$
for all integers $n \ge 0$ and
$$\limsup_{n\to\infty}\big(\|z_{n+1} - z_{n}\| - \|x_{n+1} - x_{n}\|\big) \le 0.$$
Then $\lim_{n\to\infty}\|z_{n} - x_{n}\| = 0$.

Lemma 2.3 (see [31]). Assume $\{a_{n}\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_{n})a_{n} + \delta_{n}, \quad n \ge 0,$$
where $\{\gamma_{n}\}$ is a sequence in $(0, 1)$ and $\{\delta_{n}\}$ is a sequence such that
(1) $\sum_{n=1}^{\infty}\gamma_{n} = \infty$;
(2) $\limsup_{n\to\infty}\delta_{n}/\gamma_{n} \le 0$ or $\sum_{n=1}^{\infty}|\delta_{n}| < \infty$.
Then $\lim_{n\to\infty}a_{n} = 0$.
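The conclusion of Lemma 2.3 can be observed numerically for a simple admissible choice of parameters (illustrative only; $\gamma_{n} = 1/(n+1)$ and $\delta_{n} = \gamma_{n}/(n+1)$ are arbitrary sequences satisfying conditions (1) and (2)):

```python
# Numerical illustration of Lemma 2.3: with gamma_n = 1/(n+1) (so the series
# sum gamma_n diverges) and delta_n = gamma_n/(n+1) (so delta_n/gamma_n -> 0),
# the recursion a_{n+1} = (1 - gamma_n) a_n + delta_n drives a_n to 0
# (here a_n decays roughly like (log n)/n).
a = 1.0
for n in range(100000):
    gamma = 1.0 / (n + 1)
    delta = gamma / (n + 1)
    a = (1 - gamma) * a + delta
print(a < 1e-3)
```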

3. Main Result

In this section, we prove our main result. First, we give some assumptions on the operators and the parameters. Subsequently, we introduce our iterative algorithm for finding solutions of the variational inclusion (1.1). Finally, we show that the proposed algorithm converges strongly.

In the sequel, we will assume that:
(A1) $C$ is a nonempty closed convex subset of a real Hilbert space $H$;
(A2) $F: H \to H$ is a strongly positive bounded linear operator with coefficient $\bar{\gamma} > 0$, $B: H \to 2^{H}$ is a maximal monotone mapping, and $A: C \to H$ is an $\alpha$-inverse strongly monotone mapping;
(A3) $\lambda$ is a constant satisfying $0 < \lambda < 2\alpha$.

Now we introduce the following iterative algorithm.

Algorithm 3.1. For $x_{0} \in C$ given arbitrarily, compute the sequence $\{x_{n}\}$ as follows:
$$x_{n+1} = \beta_{n}x_{n} + (1 - \beta_{n})P_{C}\big[(I - \alpha_{n}F)J_{B,\lambda}(x_{n} - \lambda Ax_{n})\big], \quad n \ge 0, \tag{3.1}$$
where $\{\alpha_{n}\}$ and $\{\beta_{n}\}$ are two real sequences in $(0, 1)$.
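A numerical sketch of an iteration of this projection-resolvent type, on a toy inclusion whose solution is known in closed form, is given below (here $F = I$, $B = \partial\|\cdot\|_{1}$, $C$ a large ball, and the parameter sequences are illustrative choices, not prescribed by the paper):

```python
import numpy as np

# Sketch of a projection-resolvent iteration of the form
#   x_{n+1} = beta_n x_n + (1 - beta_n) P_C[(I - alpha_n F) J_{B,lam}(x_n - lam A x_n)]
# on the toy inclusion 0 in (u - b) + d||u||_1, whose solution is soft(b, 1).
def soft(x, t):                       # resolvent of B = subdifferential of ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proj_C(x, r=10.0):                # P_C, C = closed ball of radius r
    n = np.linalg.norm(x)
    return x if n <= r else r * x / n

b = np.array([2.0, -3.0])
A = lambda u: u - b                   # 1-inverse strongly monotone
lam = 1.0                             # lam in (0, 2*alpha) with alpha = 1
x = np.zeros(2)
for n in range(2000):
    alpha_n = 1.0 / (n + 2)           # alpha_n -> 0, sum alpha_n = infinity
    beta_n = 0.5                      # liminf and limsup strictly inside (0, 1)
    y = soft(x - lam * A(x), lam)     # J_{B,lam}(x_n - lam A x_n)
    x = beta_n * x + (1 - beta_n) * proj_C((1 - alpha_n) * y)   # F = I
print(x)
```

The iterates approach the closed-form solution $\operatorname{soft}(b, 1) = (1, -2)$.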

Now we study the strong convergence of algorithm (3.1).

Theorem 3.2. Suppose that $I(A, B) \ne \emptyset$. Assume that the following conditions are satisfied:
(i) $\lim_{n\to\infty}\alpha_{n} = 0$;
(ii) $\sum_{n=0}^{\infty}\alpha_{n} = \infty$;
(iii) $0 < \liminf_{n\to\infty}\beta_{n} \le \limsup_{n\to\infty}\beta_{n} < 1$.
Then the sequence $\{x_{n}\}$ generated by (3.1) converges strongly to $\tilde{x} \in I(A, B)$, which solves the following variational inequality:
$$\langle F\tilde{x}, x - \tilde{x} \rangle \ge 0 \quad \text{for all } x \in I(A, B).$$

Proof. Take $x^{*} \in I(A, B)$. It is clear that $x^{*} = J_{B,\lambda}(x^{*} - \lambda Ax^{*})$. We divide our proof into the following five steps:
(1) the sequence $\{x_{n}\}$ is bounded;
(2) $\lim_{n\to\infty}\|x_{n+1} - x_{n}\| = 0$;
(3) $\lim_{n\to\infty}\|Ax_{n} - Ax^{*}\| = 0$;
(4) $\limsup_{n\to\infty}\langle -F\tilde{x}, x_{n} - \tilde{x} \rangle \le 0$, where $\tilde{x} = P_{I(A,B)}(I - F)\tilde{x}$;
(5) $x_{n} \to \tilde{x}$.

Proof of (1). Since $A$ is $\alpha$-inverse strongly monotone, we have, for all $x, y \in C$,
$$\|(I - \lambda A)x - (I - \lambda A)y\|^{2} = \|x - y\|^{2} - 2\lambda\langle x - y, Ax - Ay \rangle + \lambda^{2}\|Ax - Ay\|^{2} \le \|x - y\|^{2} + \lambda(\lambda - 2\alpha)\|Ax - Ay\|^{2}.$$
It is clear that if $\lambda \in (0, 2\alpha]$, then $I - \lambda A$ is nonexpansive; in particular, this holds under (A3). Set $y_{n} = J_{B,\lambda}(x_{n} - \lambda Ax_{n})$ for all $n \ge 0$. It follows that
$$\|y_{n} - x^{*}\| = \|J_{B,\lambda}(x_{n} - \lambda Ax_{n}) - J_{B,\lambda}(x^{*} - \lambda Ax^{*})\| \le \|x_{n} - x^{*}\|.$$
Since $F$ is a linear bounded self-adjoint operator on $H$, then
$$\|F\| = \sup\{|\langle Fu, u \rangle| : u \in H, \|u\| = 1\}.$$
Observe that, for $\alpha_{n} \le \|F\|^{-1}$ (which we may assume for all $n$ since $\alpha_{n} \to 0$),
$$\langle (I - \alpha_{n}F)u, u \rangle = \|u\|^{2} - \alpha_{n}\langle Fu, u \rangle \ge 1 - \alpha_{n}\|F\| \ge 0 \quad \text{for } \|u\| = 1,$$
that is to say, $I - \alpha_{n}F$ is positive. It follows that
$$\|I - \alpha_{n}F\| = \sup\{\langle (I - \alpha_{n}F)u, u \rangle : \|u\| = 1\} = \sup\{1 - \alpha_{n}\langle Fu, u \rangle : \|u\| = 1\} \le 1 - \alpha_{n}\bar{\gamma}.$$
From (3.1), we deduce that
$$\|x_{n+1} - x^{*}\| \le \beta_{n}\|x_{n} - x^{*}\| + (1 - \beta_{n})\big[\|(I - \alpha_{n}F)(y_{n} - x^{*})\| + \alpha_{n}\|Fx^{*}\|\big] \le \big[1 - (1 - \beta_{n})\alpha_{n}\bar{\gamma}\big]\|x_{n} - x^{*}\| + (1 - \beta_{n})\alpha_{n}\bar{\gamma}\,\frac{\|Fx^{*}\|}{\bar{\gamma}} \le \max\Big\{\|x_{n} - x^{*}\|, \frac{\|Fx^{*}\|}{\bar{\gamma}}\Big\}.$$
By induction, $\{x_{n}\}$ is bounded, and so are $\{y_{n}\}$ and $\{Fy_{n}\}$.

Proof of (2). Set $z_{n} = P_{C}[(I - \alpha_{n}F)y_{n}]$ for all $n \ge 0$, with $y_{n} = J_{B,\lambda}(x_{n} - \lambda Ax_{n})$, so that $x_{n+1} = \beta_{n}x_{n} + (1 - \beta_{n})z_{n}$. Then we have
$$\|z_{n+1} - z_{n}\| \le \|(I - \alpha_{n+1}F)y_{n+1} - (I - \alpha_{n}F)y_{n}\| \le \|y_{n+1} - y_{n}\| + \alpha_{n+1}\|Fy_{n+1}\| + \alpha_{n}\|Fy_{n}\|.$$
Note that
$$\|y_{n+1} - y_{n}\| = \|J_{B,\lambda}(x_{n+1} - \lambda Ax_{n+1}) - J_{B,\lambda}(x_{n} - \lambda Ax_{n})\| \le \|x_{n+1} - x_{n}\|.$$
Substituting the latter estimate into the former, we get
$$\|z_{n+1} - z_{n}\| - \|x_{n+1} - x_{n}\| \le \alpha_{n+1}\|Fy_{n+1}\| + \alpha_{n}\|Fy_{n}\|.$$
Therefore, since $\alpha_{n} \to 0$ and $\{Fy_{n}\}$ is bounded,
$$\limsup_{n\to\infty}\big(\|z_{n+1} - z_{n}\| - \|x_{n+1} - x_{n}\|\big) \le 0.$$
This together with Lemma 2.2 implies that $\lim_{n\to\infty}\|z_{n} - x_{n}\| = 0$. Hence,
$$\lim_{n\to\infty}\|x_{n+1} - x_{n}\| = \lim_{n\to\infty}(1 - \beta_{n})\|z_{n} - x_{n}\| = 0.$$

Proof of (3). Since $I - \lambda A$ satisfies the estimate in the proof of (1) and $J_{B,\lambda}$ is nonexpansive, with $y_{n} = J_{B,\lambda}(x_{n} - \lambda Ax_{n})$ we get
$$\|y_{n} - x^{*}\|^{2} \le \|x_{n} - x^{*}\|^{2} + \lambda(\lambda - 2\alpha)\|Ax_{n} - Ax^{*}\|^{2}.$$
By (3.1), we obtain
$$\|x_{n+1} - x^{*}\|^{2} \le \beta_{n}\|x_{n} - x^{*}\|^{2} + (1 - \beta_{n})\|y_{n} - x^{*}\|^{2} + \alpha_{n}M,$$
where $M$ is some constant satisfying $M \ge \sup_{n}\big\{2\|Fy_{n}\|\|y_{n} - x^{*}\| + \alpha_{n}\|Fy_{n}\|^{2}\big\}$. From the last two inequalities, we have
$$(1 - \beta_{n})\lambda(2\alpha - \lambda)\|Ax_{n} - Ax^{*}\|^{2} \le \|x_{n} - x^{*}\|^{2} - \|x_{n+1} - x^{*}\|^{2} + \alpha_{n}M \le \big(\|x_{n} - x^{*}\| + \|x_{n+1} - x^{*}\|\big)\|x_{n+1} - x_{n}\| + \alpha_{n}M.$$
Thus, since $\alpha_{n} \to 0$, $\|x_{n+1} - x_{n}\| \to 0$, and $\liminf_{n\to\infty}(1 - \beta_{n})\lambda(2\alpha - \lambda) > 0$, we conclude that
$$\lim_{n\to\infty}\|Ax_{n} - Ax^{*}\| = 0.$$

Proof of (4). Since $J_{B,\lambda}$ is 1-inverse strongly monotone, we have
$$\|y_{n} - x^{*}\|^{2} \le \langle (x_{n} - \lambda Ax_{n}) - (x^{*} - \lambda Ax^{*}), y_{n} - x^{*} \rangle = \frac{1}{2}\Big[\|(x_{n} - \lambda Ax_{n}) - (x^{*} - \lambda Ax^{*})\|^{2} + \|y_{n} - x^{*}\|^{2} - \|(x_{n} - y_{n}) - \lambda(Ax_{n} - Ax^{*})\|^{2}\Big],$$
which implies that
$$\|y_{n} - x^{*}\|^{2} \le \|x_{n} - x^{*}\|^{2} - \|x_{n} - y_{n}\|^{2} + 2\lambda\|x_{n} - y_{n}\|\|Ax_{n} - Ax^{*}\|.$$
Substituting this into the estimate for $\|x_{n+1} - x^{*}\|^{2}$ in the proof of (3), we get
$$(1 - \beta_{n})\|x_{n} - y_{n}\|^{2} \le \|x_{n} - x^{*}\|^{2} - \|x_{n+1} - x^{*}\|^{2} + 2\lambda\|x_{n} - y_{n}\|\|Ax_{n} - Ax^{*}\| + \alpha_{n}M.$$
Then, by (2) and (3), we derive
$$\lim_{n\to\infty}\|x_{n} - y_{n}\| = 0.$$
We note that $P_{I(A,B)}(I - F)$ is a contraction. As a matter of fact,
$$\|(I - F)x - (I - F)y\| \le (1 - \bar{\gamma})\|x - y\| \quad \text{for all } x, y \in H$$
(here we may assume $\bar{\gamma} \le \|F\| \le 1$ without loss of generality, by rescaling). Hence $P_{I(A,B)}(I - F)$ has a unique fixed point, say $\tilde{x}$. That is, $\tilde{x} = P_{I(A,B)}(I - F)\tilde{x}$. This implies that
$$\langle -F\tilde{x}, x - \tilde{x} \rangle \le 0 \quad \text{for all } x \in I(A, B).$$
Next, we prove that
$$\limsup_{n\to\infty}\langle -F\tilde{x}, x_{n} - \tilde{x} \rangle \le 0.$$
First, we note that there exists a subsequence $\{x_{n_{i}}\}$ of $\{x_{n}\}$ such that
$$\limsup_{n\to\infty}\langle -F\tilde{x}, x_{n} - \tilde{x} \rangle = \lim_{i\to\infty}\langle -F\tilde{x}, x_{n_{i}} - \tilde{x} \rangle.$$
Since $\{x_{n_{i}}\}$ is bounded, there exists a subsequence of $\{x_{n_{i}}\}$ which converges weakly to some $\hat{x}$. Without loss of generality, we can assume that $x_{n_{i}} \rightharpoonup \hat{x}$.
Next, we show that $\hat{x} \in I(A, B)$. In fact, since $A$ is $\alpha$-inverse strongly monotone, $A$ is a $(1/\alpha)$-Lipschitz-continuous monotone mapping. It follows from Lemma 2.1 that $A + B$ is maximal monotone. Let $(v, g) \in \operatorname{Graph}(A + B)$, that is, $g - Av \in Bv$. Again, since $y_{n_{i}} = J_{B,\lambda}(x_{n_{i}} - \lambda Ax_{n_{i}})$, we have $x_{n_{i}} - \lambda Ax_{n_{i}} \in (I + \lambda B)y_{n_{i}}$, that is,
$$\frac{1}{\lambda}\big(x_{n_{i}} - y_{n_{i}} - \lambda Ax_{n_{i}}\big) \in By_{n_{i}}.$$
By virtue of the monotonicity of $B$, we have
$$\Big\langle v - y_{n_{i}}, g - Av - \frac{1}{\lambda}\big(x_{n_{i}} - y_{n_{i}} - \lambda Ax_{n_{i}}\big) \Big\rangle \ge 0,$$
and so, using the monotonicity of $A$,
$$\langle v - y_{n_{i}}, g \rangle \ge \langle v - y_{n_{i}}, Av - Ay_{n_{i}} \rangle + \langle v - y_{n_{i}}, Ay_{n_{i}} - Ax_{n_{i}} \rangle + \frac{1}{\lambda}\langle v - y_{n_{i}}, x_{n_{i}} - y_{n_{i}} \rangle \ge \langle v - y_{n_{i}}, Ay_{n_{i}} - Ax_{n_{i}} \rangle + \frac{1}{\lambda}\langle v - y_{n_{i}}, x_{n_{i}} - y_{n_{i}} \rangle.$$
It follows from $\|x_{n} - y_{n}\| \to 0$, the Lipschitz continuity of $A$, and $y_{n_{i}} \rightharpoonup \hat{x}$ that
$$\langle v - \hat{x}, g \rangle \ge 0.$$
It follows from the maximal monotonicity of $A + B$ that $\theta \in (A + B)(\hat{x})$, that is, $\hat{x} \in I(A, B)$. Therefore,
$$\limsup_{n\to\infty}\langle -F\tilde{x}, x_{n} - \tilde{x} \rangle = \lim_{i\to\infty}\langle -F\tilde{x}, x_{n_{i}} - \tilde{x} \rangle = \langle -F\tilde{x}, \hat{x} - \tilde{x} \rangle \le 0.$$

Proof of (5). First, we note that $\tilde{x} \in C$, so that $\tilde{x} = P_{C}\tilde{x}$; then, with $z_{n} = P_{C}[(I - \alpha_{n}F)y_{n}]$, the characterization of $P_{C}$ gives, for all $n \ge 0$,
$$\|z_{n} - \tilde{x}\|^{2} \le \langle (I - \alpha_{n}F)y_{n} - \tilde{x}, z_{n} - \tilde{x} \rangle = \langle (I - \alpha_{n}F)(y_{n} - \tilde{x}), z_{n} - \tilde{x} \rangle + \alpha_{n}\langle -F\tilde{x}, z_{n} - \tilde{x} \rangle \le \frac{1 - \alpha_{n}\bar{\gamma}}{2}\big(\|x_{n} - \tilde{x}\|^{2} + \|z_{n} - \tilde{x}\|^{2}\big) + \alpha_{n}\langle -F\tilde{x}, z_{n} - \tilde{x} \rangle,$$
that is,
$$\|z_{n} - \tilde{x}\|^{2} \le (1 - \alpha_{n}\bar{\gamma})\|x_{n} - \tilde{x}\|^{2} + \frac{2\alpha_{n}}{1 + \alpha_{n}\bar{\gamma}}\langle -F\tilde{x}, z_{n} - \tilde{x} \rangle.$$
So,
$$\|x_{n+1} - \tilde{x}\|^{2} \le \beta_{n}\|x_{n} - \tilde{x}\|^{2} + (1 - \beta_{n})\|z_{n} - \tilde{x}\|^{2} \le (1 - \gamma_{n})\|x_{n} - \tilde{x}\|^{2} + \delta_{n},$$
where $\gamma_{n} = (1 - \beta_{n})\alpha_{n}\bar{\gamma}$ and $\delta_{n} = \dfrac{2(1 - \beta_{n})\alpha_{n}}{1 + \alpha_{n}\bar{\gamma}}\langle -F\tilde{x}, z_{n} - \tilde{x} \rangle$. It is easy to see that $\sum_{n}\gamma_{n} = \infty$ and, by (4) together with $\|z_{n} - x_{n}\| \to 0$, that $\limsup_{n\to\infty}\delta_{n}/\gamma_{n} \le 0$. Hence, by Lemma 2.3, we conclude that the sequence $\{x_{n}\}$ converges strongly to $\tilde{x}$. This completes the proof.

4. Conclusion

The results proved in this paper may be extended for multivalued variational inclusions and related optimization problems.

Acknowledgment

This research was partially supported by Youth Foundation of Taizhou University (2011QN11).