Abstract

We introduce a new general iterative method for finding a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of a generalized mixed equilibrium problem, and the set of solutions of a variational inclusion for a β-inverse-strongly monotone mapping in a real Hilbert space. We prove that the sequence converges strongly to a common element of the above three sets under some mild conditions. Our results improve and extend the corresponding results of Marino and Xu (2006), Su et al. (2008), Klin-eam and Suantai (2009), Tan and Chang (2011), and some other authors.

1. Introduction

Let $C$ be a closed convex subset of a real Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $F$ be a bifunction of $C\times C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers, $B: C\to H$ a mapping, and $\varphi: C\to\mathbb{R}$ a real-valued function. The generalized mixed equilibrium problem is to find $x\in C$ such that
$$F(x,y)+\langle Bx, y-x\rangle+\varphi(y)-\varphi(x)\ge 0, \quad \forall y\in C. \tag{1.1}$$
The set of solutions of (1.1) is denoted by GMEP, that is,
$$\mathrm{GMEP}=\{x\in C : F(x,y)+\langle Bx, y-x\rangle+\varphi(y)-\varphi(x)\ge 0, \ \forall y\in C\}.$$
If $F=0$, problem (1.1) reduces to the mixed variational inequality of Browder type [1], which is to find $x\in C$ such that
$$\langle Bx, y-x\rangle+\varphi(y)-\varphi(x)\ge 0, \quad \forall y\in C. \tag{1.3}$$
The set of solutions of (1.3) is denoted by MVI.

If $B=0$ and $\varphi=0$, problem (1.1) reduces to the equilibrium problem [2], which is to find $x\in C$ such that
$$F(x,y)\ge 0, \quad \forall y\in C. \tag{1.4}$$
The set of solutions of (1.4) is denoted by EP. This problem contains fixed point problems and includes as special cases numerous problems in physics, optimization, and economics. Some methods have been proposed to solve the equilibrium problem; see [3–5].

If $F=0$ and $\varphi=0$, problem (1.1) reduces to the Hartman-Stampacchia variational inequality [6], which is to find $x\in C$ such that
$$\langle Bx, y-x\rangle\ge 0, \quad \forall y\in C. \tag{1.5}$$
The set of solutions of (1.5) is denoted by $\mathrm{VI}(C, B)$. The variational inequality has been extensively studied in the literature [7].

If $F=0$ and $B=0$, problem (1.1) reduces to the minimization problem of finding $x\in C$ such that
$$\varphi(y)\ge\varphi(x), \quad \forall y\in C. \tag{1.6}$$
The set of solutions of (1.6) is denoted by $\arg\min\varphi$.

Iterative methods for nonexpansive mappings have recently been applied to solve convex minimization problems. Convex minimization problems have a great impact and influence on the development of almost all branches of pure and applied sciences. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space $H$:
$$\min_{x\in F(S)}\frac12\langle Ax, x\rangle-\langle x, b\rangle,$$
where $A$ is a linear bounded operator, $F(S)$ is the fixed point set of a nonexpansive mapping $S$, and $b$ is a given point in $H$ [8].

Recall that a mapping $S: C\to C$ is said to be nonexpansive if $\|Sx-Sy\|\le\|x-y\|$ for all $x, y\in C$. If $C$ is bounded, closed, and convex and $S$ is a nonexpansive mapping of $C$ into itself, then the fixed point set $F(S)=\{x\in C : Sx=x\}$ is nonempty [9]. We denote weak convergence and strong convergence by the notations $\rightharpoonup$ and $\to$, respectively. A mapping $B$ of $C$ into $H$ is called monotone if $\langle Bx-By, x-y\rangle\ge 0$ for all $x, y\in C$. A mapping $B$ of $C$ into $H$ is called $\beta$-inverse-strongly monotone if there exists a positive real number $\beta$ such that $\langle Bx-By, x-y\rangle\ge\beta\|Bx-By\|^2$ for all $x, y\in C$. It is obvious that any $\beta$-inverse-strongly monotone mapping is monotone and Lipschitz continuous. A linear bounded operator $A$ is strongly positive if there exists a constant $\bar\gamma>0$ with the property $\langle Ax, x\rangle\ge\bar\gamma\|x\|^2$ for all $x\in H$. A self-mapping $f: C\to C$ is a contraction on $C$ if there exists a constant $\alpha\in(0,1)$ such that $\|f(x)-f(y)\|\le\alpha\|x-y\|$ for all $x, y\in C$. We use $\Pi_C$ to denote the collection of all contractions on $C$. Note that each $f\in\Pi_C$ has a unique fixed point in $C$.
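As a standard illustration of inverse-strong monotonicity (the examples below are classical facts, not results of this paper, and use the notation just introduced): if $g: H\to\mathbb{R}$ is convex and differentiable and $\nabla g$ is Lipschitz continuous with constant $L>0$, then $\nabla g$ is $\frac1L$-inverse-strongly monotone (Baillon-Haddad theorem); likewise, if $S$ is nonexpansive, then $I-S$ is $\frac12$-inverse-strongly monotone:
$$\langle\nabla g(x)-\nabla g(y), x-y\rangle\ge\frac1L\|\nabla g(x)-\nabla g(y)\|^2, \qquad \langle(I-S)x-(I-S)y, x-y\rangle\ge\frac12\|(I-S)x-(I-S)y\|^2.$$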

Let $B: H\to H$ be a single-valued nonlinear mapping and $M: H\to 2^H$ a set-valued mapping. The variational inclusion problem is to find $x\in H$ such that
$$\theta\in B(x)+M(x), \tag{1.13}$$
where $\theta$ is the zero vector in $H$. The set of solutions of problem (1.13) is denoted by $I(B, M)$. The variational inclusion has been extensively studied in the literature; see, for example, [10–13] and the references therein.

A set-valued mapping $M: H\to 2^H$ is called monotone if, for all $x, y\in H$, $u\in Mx$ and $v\in My$ imply $\langle x-y, u-v\rangle\ge 0$. A monotone mapping $M$ is maximal if its graph $G(M)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $M$ is maximal if and only if, for $(x, u)\in H\times H$, $\langle x-y, u-v\rangle\ge 0$ for all $(y, v)\in G(M)$ implies $u\in Mx$.
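A standard example, recorded here for orientation (notation assumed): the subdifferential $\partial\phi$ of a proper convex lower semicontinuous function $\phi: H\to(-\infty, +\infty]$, defined by
$$\partial\phi(x)=\{w\in H : \phi(y)\ge\phi(x)+\langle w, y-x\rangle, \ \forall y\in H\},$$
is a maximal monotone mapping (Rockafellar).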

Let $B$ be an inverse-strongly monotone mapping of $C$ into $H$, let $N_Cv$ be the normal cone to $C$ at $v\in C$, that is, $N_Cv=\{w\in H : \langle v-u, w\rangle\ge 0, \ \forall u\in C\}$, and define
$$Tv=\begin{cases}Bv+N_Cv, & v\in C,\\ \emptyset, & v\notin C.\end{cases}$$
Then $T$ is maximal monotone and $\theta\in Tv$ if and only if $v\in\mathrm{VI}(C, B)$ [14].

Let $M: H\to 2^H$ be a set-valued maximal monotone mapping; then the single-valued mapping $J_{M,\lambda}: H\to H$ defined by
$$J_{M,\lambda}(x)=(I+\lambda M)^{-1}(x), \quad x\in H,$$
is called the resolvent operator associated with $M$, where $\lambda$ is any positive number and $I$ is the identity mapping. It is worth mentioning that the resolvent operator $J_{M,\lambda}$ is nonexpansive and 1-inverse-strongly monotone and that a solution of problem (1.13) is a fixed point of the operator $J_{M,\lambda}(I-\lambda B)$ for all $\lambda>0$ [15].
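Two classical identifications may help fix ideas (they are standard facts, stated in the notation assumed above): when $M=\partial\phi$ for a proper convex lower semicontinuous $\phi$, the resolvent is the proximal mapping of $\phi$, and when $M=N_C$ is the normal cone mapping of $C$, the resolvent reduces to the metric projection, so the fixed-point characterization of (1.13) recovers Lemma 2.1 below:
$$J_{\partial\phi,\lambda}(x)=\arg\min_{y\in H}\Big\{\phi(y)+\frac{1}{2\lambda}\|y-x\|^2\Big\}, \qquad J_{N_C,\lambda}(x)=P_Cx, \qquad x=J_{N_C,\lambda}(x-\lambda Bx)\iff x\in\mathrm{VI}(C, B).$$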

In 2000, Moudafi [16] introduced the viscosity approximation method for nonexpansive mappings and proved that, if $H$ is a real Hilbert space, the sequence $\{x_n\}$ defined by the iterative method below, with the initial guess $x_0\in C$ chosen arbitrarily,
$$x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)Sx_n, \quad n\ge 0,$$
where $\{\alpha_n\}\subset(0,1)$ satisfies certain conditions, converges strongly to a fixed point of $S$ (say $\tilde{x}\in C$) which is the unique solution of the following variational inequality:
$$\langle(I-f)\tilde{x}, x-\tilde{x}\rangle\ge 0, \quad \forall x\in F(S).$$

In 2006, Marino and Xu [8] introduced a general iterative method for nonexpansive mappings. They defined the sequence $\{x_n\}$ generated by the algorithm, with $x_0\in H$ chosen arbitrarily:
$$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)Sx_n, \quad n\ge 0, \tag{1.18}$$
where $\{\alpha_n\}\subset(0,1)$ and $A$ is a strongly positive linear bounded operator. They proved that, if $0<\gamma<\bar\gamma/\alpha$ and the sequence $\{\alpha_n\}$ satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.18) converges strongly to a fixed point of $S$ (say $\tilde{x}\in H$) which is the unique solution of the following variational inequality:
$$\langle(A-\gamma f)\tilde{x}, x-\tilde{x}\rangle\ge 0, \quad \forall x\in F(S),$$
which is the optimality condition for the minimization problem
$$\min_{x\in F(S)}\frac12\langle Ax, x\rangle-h(x),$$
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for $x\in H$).
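For completeness, here is the usual one-line justification of this optimality condition, in the notation assumed above: with $g(x)=\frac12\langle Ax, x\rangle-h(x)$, $h'=\gamma f$, and $A$ self-adjoint, the Gâteaux derivative of $g$ is $g'(x)=Ax-\gamma f(x)$, and the first-order condition for a minimizer $\tilde{x}$ of $g$ over the closed convex set $F(S)$ is
$$\langle g'(\tilde{x}), x-\tilde{x}\rangle=\langle(A-\gamma f)\tilde{x}, x-\tilde{x}\rangle\ge 0, \quad \forall x\in F(S),$$
which is exactly the variational inequality displayed above.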

For finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of variational inequalities, let $P_C$ be the projection of $H$ onto $C$. In 2005, Iiduka and Takahashi [17] introduced the following iterative process, for $x_0\in C$:
$$x_{n+1}=\alpha_nx+(1-\alpha_n)SP_C(x_n-\lambda_nBx_n), \quad n\ge 0, \tag{1.21}$$
where $x\in C$, $\{\alpha_n\}\subset(0,1)$, and $\{\lambda_n\}\subset[a, b]$ for some $a, b$ with $0<a<b<2\beta$. They proved that, under certain appropriate conditions imposed on $\{\alpha_n\}$ and $\{\lambda_n\}$, the sequence $\{x_n\}$ generated by (1.21) converges strongly to a common element of the set of fixed points of the nonexpansive mapping and the set of solutions of the variational inequality for the inverse-strongly monotone mapping (say $\tilde{x}\in C$), which solves the variational inequality
$$\langle\tilde{x}-x, y-\tilde{x}\rangle\ge 0, \quad \forall y\in F(S)\cap\mathrm{VI}(C, B).$$

In 2008, Su et al. [18] introduced the following iterative scheme by the viscosity approximation method in a real Hilbert space:
$$F(u_n, y)+\frac{1}{r_n}\langle y-u_n, u_n-x_n\rangle\ge 0, \quad \forall y\in C, \qquad x_{n+1}=\alpha_nf(x_n)+(1-\alpha_n)SP_C(u_n-\lambda_nBu_n)$$
for all $n$, where $\{\alpha_n\}\subset[0,1]$ and $\{r_n\}\subset(0,\infty)$ satisfy some appropriate conditions. Furthermore, they proved that $\{x_n\}$ and $\{u_n\}$ converge strongly to the same point $z$, where $z=P_{F(S)\cap\mathrm{VI}(C,B)\cap\mathrm{EP}(F)}f(z)$.

In 2011, Tan and Chang [12] introduced the following iterative process for $\{S_n\}$, a sequence of nonexpansive mappings. Let $\{x_n\}$ be the sequence defined by where , , and . The sequence defined by (1.24) converges strongly to a common element of the set of fixed points of nonexpansive mappings, the set of solutions of the variational inequality, and the set of solutions of the generalized equilibrium problem.

In this paper, we modify the iterative methods (1.18), (1.23), and (1.24) by proposing the following new general viscosity iterative method: , for all , where , , and satisfy some appropriate conditions. The purpose of this paper is to show that, under some control conditions, the sequence converges strongly to a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of the generalized mixed equilibrium problem, and the set of solutions of the variational inclusion in a real Hilbert space.
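As orientation only, and without reproducing the exact parameters of (1.25), schemes of this type typically combine three building blocks introduced above: an auxiliary mixed equilibrium step, a resolvent (forward-backward) step for the inclusion, and a Marino-Xu type viscosity step. A representative structure, written in the notation assumed above with $B_1$ and $B_2$ standing for two inverse-strongly monotone mappings (our labels), is
$$u_n=T_r(x_n-rB_1x_n), \qquad v_n=J_{M,\lambda}(u_n-\lambda B_2u_n), \qquad x_{n+1}=\alpha_n\gamma f(x_n)+\beta_nx_n+\big((1-\beta_n)I-\alpha_nA\big)Sv_n,$$
where $T_r$ is the auxiliary mapping of Lemma 2.7 below.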

2. Preliminaries

Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Recall that the (nearest point) projection $P_C$ from $H$ onto $C$ assigns to each $x\in H$ the unique point $P_Cx\in C$ satisfying the property
$$\|x-P_Cx\|=\min_{y\in C}\|x-y\|.$$
The following lemmas characterize the projection $P_C$ and will be needed in the rest of this paper.

Lemma 2.1. The point $u\in C$ is a solution of the variational inequality (1.5) if and only if $u$ satisfies the relation $u=P_C(u-\lambda Bu)$ for all $\lambda>0$.

Lemma 2.2. For a given $z\in H$ and $u\in C$, $u=P_Cz$ if and only if $\langle u-z, v-u\rangle\ge 0$ for all $v\in C$.
It is well known that $P_C$ is a firmly nonexpansive mapping of $H$ onto $C$ and satisfies
$$\|P_Cx-P_Cy\|^2\le\langle P_Cx-P_Cy, x-y\rangle, \quad \forall x, y\in H.$$
Moreover, $P_Cx$ is characterized by the following properties: $P_Cx\in C$ and, for all $x\in H$, $y\in C$,
$$\langle x-P_Cx, y-P_Cx\rangle\le 0, \qquad \|x-y\|^2\ge\|x-P_Cx\|^2+\|y-P_Cx\|^2.$$
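In particular, the firm nonexpansiveness stated above immediately yields the nonexpansiveness of $P_C$ via the Cauchy-Schwarz inequality:
$$\|P_Cx-P_Cy\|^2\le\langle P_Cx-P_Cy, x-y\rangle\le\|P_Cx-P_Cy\|\,\|x-y\|, \quad\text{hence}\quad \|P_Cx-P_Cy\|\le\|x-y\|, \quad \forall x, y\in H.$$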

Lemma 2.3 (see [19]). Let $M: H\to 2^H$ be a maximal monotone mapping, and let $B: H\to H$ be a monotone and Lipschitz continuous mapping. Then the mapping $M+B: H\to 2^H$ is a maximal monotone mapping.

Lemma 2.4 (see [20]). Each Hilbert space $H$ satisfies Opial's condition, that is, for any sequence $\{x_n\}\subset H$ with $x_n\rightharpoonup x$, the inequality
$$\liminf_{n\to\infty}\|x_n-x\|<\liminf_{n\to\infty}\|x_n-y\|$$
holds for each $y\in H$ with $y\ne x$.

Lemma 2.5 (see [21]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1}\le(1-\gamma_n)a_n+\delta_n, \quad n\ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that (i) $\sum_{n=1}^{\infty}\gamma_n=\infty$, (ii) $\limsup_{n\to\infty}\delta_n/\gamma_n\le 0$ or $\sum_{n=1}^{\infty}|\delta_n|<\infty$. Then $\lim_{n\to\infty}a_n=0$.

Lemma 2.6 (see [22]). Let $C$ be a closed convex subset of a real Hilbert space $H$, and let $S: C\to C$ be a nonexpansive mapping. Then $I-S$ is demiclosed at zero, that is, $x_n\rightharpoonup x$ and $x_n-Sx_n\to 0$ imply $x=Sx$.

For solving the generalized mixed equilibrium problem, let us assume that the bifunction $F: C\times C\to\mathbb{R}$, the nonlinear mapping $B: C\to H$ (continuous and monotone), and the function $\varphi: C\to\mathbb{R}$ satisfy the following conditions: (A1) $F(x, x)=0$ for all $x\in C$; (A2) $F$ is monotone, that is, $F(x, y)+F(y, x)\le 0$ for any $x, y\in C$; (A3) for each fixed $y\in C$, $x\mapsto F(x, y)$ is weakly upper semicontinuous; (A4) for each fixed $x\in C$, $y\mapsto F(x, y)$ is convex and lower semicontinuous; (B1) for each $x\in H$ and $r>0$, there exist a bounded subset $D_x\subseteq C$ and $y_x\in C$ such that, for any $z\in C\setminus D_x$,
$$F(z, y_x)+\varphi(y_x)-\varphi(z)+\frac1r\langle y_x-z, z-x\rangle<0;$$
(B2) $C$ is a bounded set.

Lemma 2.7 (see [23]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F: C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)–(A4), and let $\varphi: C\to\mathbb{R}$ be convex and lower semicontinuous such that $C\cap\mathrm{dom}\,\varphi\ne\emptyset$. Assume that either (B1) or (B2) holds. For $x\in H$ and $r>0$, there exists $z\in C$ such that
$$F(z, y)+\varphi(y)-\varphi(z)+\frac1r\langle y-z, z-x\rangle\ge 0, \quad \forall y\in C.$$
Define a mapping $T_r: H\to C$ as follows:
$$T_r(x)=\Big\{z\in C : F(z, y)+\varphi(y)-\varphi(z)+\frac1r\langle y-z, z-x\rangle\ge 0, \ \forall y\in C\Big\}$$
for all $x\in H$. Then the following hold: (i) $T_r$ is single-valued; (ii) $T_r$ is firmly nonexpansive, that is, for any $x, y\in H$, $\|T_rx-T_ry\|^2\le\langle T_rx-T_ry, x-y\rangle$; (iii) $F(T_r)$ is the set of solutions of the corresponding mixed equilibrium problem; (iv) this solution set is closed and convex.
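Two classical special cases may clarify the role of $T_r$ (the identifications below are standard facts, stated in the notation of the lemma): if $F\equiv 0$ and $\varphi\equiv 0$, then $T_r=P_C$ by Lemma 2.2, while if only $F\equiv 0$, then $T_rx$ is the proximal-type point obtained by minimizing $\varphi$ plus a quadratic penalty over $C$:
$$T_rx=\arg\min_{y\in C}\Big\{\varphi(y)+\frac{1}{2r}\|y-x\|^2\Big\}, \quad x\in H.$$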

Lemma 2.8 (see [8]). Assume that $A$ is a strongly positive linear bounded operator on a Hilbert space $H$ with coefficient $\bar\gamma>0$ and $0<\rho\le\|A\|^{-1}$; then $\|I-\rho A\|\le 1-\rho\bar\gamma$.

3. Strong Convergence Theorems

In this section, we prove a strong convergence theorem for finding a common element of the set of fixed points of a nonexpansive mapping, the set of solutions of the generalized mixed equilibrium problem, and the set of solutions of the variational inclusion for inverse-strongly monotone mappings in a Hilbert space.

Theorem 3.1. Let be a real Hilbert space, a closed convex subset of , , be , -inverse-strongly monotone mappings, respectively. Let be a convex and lower semicontinuous function, a contraction with coefficient , a maximal monotone mapping, and a strongly positive linear bounded operator of into itself with coefficient . Assume that . Let be a nonexpansive mapping of into itself such that Suppose that is a sequence generated by the following algorithm for arbitrarily: for all , where (C1) , , , and , (C2) with and , (C3) .
Then converges strongly to , where which solves the following variational inequality: which is the optimality condition for the minimization problem where is a potential function for (i.e., for ).

Proof. By condition (C1), we may assume, without loss of generality, that for all . By Lemma 2.8, we have that . Next, we will assume that .
We divide the proof into six steps.
Step 1. We will show that are bounded.
Since , are , -inverse-strongly monotone mappings, we have that In a similar way, we can obtain It is clear that if , , then , are all nonexpansive.
Put , . It follows that By Lemma 2.7, we have that for all . Then, we have that Hence, we have that From (3.2), we deduce that It follows by induction that Therefore is bounded, so are , , , , and .

Step 2. We claim that . From (3.2), we have that
Since are nonexpansive, we also have that On the other hand, from and , it follows that Substituting into (3.14) and into (3.15), we get From (A2), we obtain and then So It follows that Without loss of generality, let us assume that there exists a real number such that , for all . Then, we have that and hence where . Substituting (3.22) into (3.13), we have that Substituting (3.23) into (3.12), we get where . By conditions (C1)-(C2) and Lemma 2.5, we have that as . From (3.23), we also have that as .

Step 3. We show the following: (i); (ii). For and , by (3.5) and (3.8), we get It follows that So, we obtain where . By conditions (C1) and (C3) and , we obtain that as .
Substituting (3.8) into (3.25), we get From (3.26), we have that So, we also have that where . By conditions (C1)–(C3), and , we obtain that as .

Step 4. We show the following: (i); (ii); (iii). Since is firmly nonexpansive and by (2.2), we observe that Hence, we have that Since is 1-inverse-strongly monotone and by (2.2), we compute which implies that Substituting (3.32) into (3.34), we have that Substituting (3.35) into (3.26), we get Then, we derive By condition (C1), , , and . So, we have that , as . It follows that From (3.2), we have that By condition (C1) and , we obtain that as . Next, we observe that Since is bounded and by condition (C1), we have that as , and Since and , it implies that as . Hence, we have that By (3.38) and , we obtain as . Moreover, we also have that By (3.38) and , we obtain as .
Step 5. We show that and . It is easy to see that is a contraction of into itself. Indeed, since , we have that Since is complete, there exists a unique fixed point such that . By Lemma 2.2, we obtain that for all .
Next, we show that , where is the unique solution of the variational inequality , for all . We can choose a subsequence of such that As is bounded, there exists a subsequence of which converges weakly to . We may assume without loss of generality that .
We claim that . Since , , and and by Lemma 2.6, we have that .
Next, we show that . Since , we know that It follows by (A2) that Hence, For and , let . From (3.48), we have that From , we have that . Further, from (A4) and the weak lower semicontinuity of , and , we have that From (A1), (A4), and (3.50), we have that and hence Letting , we have, for each , that This implies that .
Lastly, we show that . In fact, since is -inverse-strongly monotone, it is a monotone and Lipschitz continuous mapping. It follows from Lemma 2.3 that is maximal monotone. Let , since . Again, since , we have that , that is, . By virtue of the maximal monotonicity of , we have that and hence It follows from , , and that It follows from the maximal monotonicity of that , that is, . Therefore, . It follows that

Step 6. We prove that . By using (3.2) together with the Schwarz inequality, we have that
Since is bounded, where for all , it follows that where . By , we get . Applying Lemma 2.5, we can conclude that . This completes the proof.

Corollary 3.2. Let be a real Hilbert space and a closed convex subset of . Let be -inverse-strongly monotone mappings and a convex and lower semicontinuous function. Let be a contraction with coefficient , a maximal monotone mapping, and a nonexpansive mapping of into itself such that Suppose that is a sequence generated by the following algorithm for arbitrarily: for all , by (C1)–(C3) in Theorem 3.1.
Then converges strongly to , where which solves the following variational inequality:

Proof. Putting and in Theorem 3.1, we can obtain the desired conclusion immediately.

Corollary 3.3. Let be a real Hilbert space and a closed convex subset of . Let , be , -inverse-strongly monotone mappings, a convex and lower semicontinuous function, and a maximal monotone mapping. Let be a nonexpansive mapping of into itself such that Suppose that is a sequence generated by the following algorithm for and : for all , by (C1)–(C3) in Theorem 3.1.
Then converges strongly to , where which solves the following variational inequality:

Proof. Putting , for all , in Corollary 3.2, we can obtain the desired conclusion immediately.

Corollary 3.4. Let be a real Hilbert space, a closed convex subset of , be -inverse-strongly monotone mappings, and a strongly positive linear bounded operator of into itself with coefficient . Assume that . Let be a contraction with coefficient and a nonexpansive mapping of into itself such that Suppose that is a sequence generated by the following algorithm for arbitrarily: for all , by (C1)–(C3) in Theorem 3.1.
Then converges strongly to , where which solves the following variational inequality:

Proof. Taking , and in Theorem 3.1, we can obtain the desired conclusion immediately.

Remark 3.5. In Corollary 3.4 we generalize and improve the result of Klin-eam and Suantai [24].

4. Applications

In this section, we apply the iterative scheme (1.25) to find a common fixed point of a nonexpansive mapping and a strictly pseudocontractive mapping, and we also apply Theorem 3.1 to find a common element of the sets associated with nonexpansive mappings and inverse-strongly monotone mappings.

Definition 4.1. A mapping $T: C\to C$ is called a strict pseudocontraction if there exists a constant $0\le k<1$ such that
$$\|Tx-Ty\|^2\le\|x-y\|^2+k\|(I-T)x-(I-T)y\|^2, \quad \forall x, y\in C.$$
If $k=0$, then $T$ is nonexpansive. In this case, we say that $T$ is a $k$-strict pseudocontraction. Put $D=I-T$. Then we have
$$\|(I-D)x-(I-D)y\|^2\le\|x-y\|^2+k\|Dx-Dy\|^2.$$
Observe that
$$\|(I-D)x-(I-D)y\|^2=\|x-y\|^2-2\langle x-y, Dx-Dy\rangle+\|Dx-Dy\|^2.$$
Hence, we obtain
$$\langle x-y, Dx-Dy\rangle\ge\frac{1-k}{2}\|Dx-Dy\|^2.$$
Then, $D$ is a $\frac{1-k}{2}$-inverse-strongly monotone mapping.
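A simple concrete instance, included only as an illustration (it is not taken from the results below): on $H=\mathbb{R}$, the mapping $Tx=-2x$ is a $\frac13$-strict pseudocontraction that is not nonexpansive, since
$$\|Tx-Ty\|^2=4|x-y|^2\le|x-y|^2+\frac13\|(I-T)x-(I-T)y\|^2=|x-y|^2+3|x-y|^2,$$
and $D=I-T=3I$ satisfies $\langle x-y, Dx-Dy\rangle=3|x-y|^2=\frac13\|Dx-Dy\|^2$, so $D$ is $\frac{1-k}{2}=\frac13$-inverse-strongly monotone, in agreement with the computation above.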

Using Theorem 3.1, we first prove a strong convergence theorem for finding a common fixed point of a nonexpansive mapping and a strict pseudocontraction.

Theorem 4.2. Let be a real Hilbert space, a closed convex subset of , , be , -inverse-strongly monotone mappings, a convex and lower semicontinuous function, a contraction with coefficient , and a strongly positive linear bounded operator of into itself with coefficient . Assume that . Let be a nonexpansive mapping of into itself, and let be a -strictly pseudocontraction of into itself such that Suppose that is a sequence generated by the following algorithm for arbitrarily: for all , by (C1)–(C3) in Theorem 3.1.
Then converges strongly to , where which solves the following variational inequality: which is the optimality condition for the minimization problem where is a potential function for (i.e., for ).

Proof. Put , then is -inverse-strongly monotone, , and . So by Theorem 3.1, we obtain the desired result.

Corollary 4.3. Let be a real Hilbert space, a closed convex subset of , , be , -inverse-strongly monotone mappings, and a convex and lower semicontinuous function. Let be a contraction with coefficient and a nonexpansive mapping of into itself, and let be a -strictly pseudocontraction of into itself such that Suppose that is a sequence generated by the following algorithm for arbitrarily: for all , by (C1)–(C3) in Theorem 3.1.
Then converges strongly to , where which solves the following variational inequality: which is the optimality condition for the minimization problem where is a potential function for (i.e., for ).

Proof. Putting and in Theorem 4.2, we obtain the desired result.

Acknowledgments

The authors would like to thank the National Research University Project of Thailand's Office of the Higher Education Commission for financial support under the project NRU-CSEC no. 54000267. Furthermore, they also would like to thank the Faculty of Science (KMUTT) and the National Research Council of Thailand. Finally, the authors would like to thank Professor Vittorio Colao and the referees for reading this paper carefully, providing valuable suggestions and comments, and pointing out a major error in the original version of this paper.