Abstract
We introduce a new iterative algorithm for finding a common element of the set of fixed points of an infinite family of nonexpansive mappings, the set of solutions of a system of mixed equilibrium problems, and the set of solutions of a variational inclusion for a $\beta$-inverse-strongly monotone mapping in a real Hilbert space. We prove that the generated sequence converges strongly to a common element of these three sets under some mild conditions. Furthermore, we give a numerical example supporting our main theorem in the last part.
1. Introduction
Let $C$ be a closed convex subset of a real Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $F$ be a bifunction of $C\times C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers, and let $\varphi:C\to\mathbb{R}$ be a real-valued function. Let $\Lambda$ be an arbitrary index set. The system of mixed equilibrium problems is to find $x\in C$ such that
$$F_k(x,y)+\varphi(y)-\varphi(x)\ge 0,\quad \forall y\in C,\ k\in\Lambda. \tag{1.1}$$
The set of solutions of (1.1) is denoted by $\mathrm{SMEP}(\{F_k\})$, that is,
$$\mathrm{SMEP}(\{F_k\})=\{x\in C: F_k(x,y)+\varphi(y)-\varphi(x)\ge 0,\ \forall y\in C,\ k\in\Lambda\}. \tag{1.2}$$
If $\Lambda$ is a singleton, then problem (1.1) becomes the following mixed equilibrium problem: find $x\in C$ such that
$$F(x,y)+\varphi(y)-\varphi(x)\ge 0,\quad \forall y\in C. \tag{1.3}$$
The set of solutions of (1.3) is denoted by $\mathrm{MEP}(F,\varphi)$.
If $\varphi\equiv 0$, problem (1.3) reduces to the equilibrium problem [1] of finding $x\in C$ such that
$$F(x,y)\ge 0,\quad \forall y\in C. \tag{1.4}$$
The set of solutions of (1.4) is denoted by $\mathrm{EP}(F)$. This problem contains fixed-point problems and includes as special cases numerous problems in physics, optimization, and economics. Several methods have been proposed to solve the system of mixed equilibrium problems and the equilibrium problem; please consult [2–19].
Recall that a mapping $S:C\to C$ is said to be nonexpansive if $\|Sx-Sy\|\le\|x-y\|$ for all $x,y\in C$. If $C$ is bounded, closed, and convex and $S$ is a nonexpansive mapping of $C$ into itself, then the fixed-point set $F(S)$ is nonempty [20]. Let $B:C\to H$ be a mapping. The Hartman–Stampacchia variational inequality is to find $x\in C$ such that
$$\langle Bx, y-x\rangle\ge 0,\quad \forall y\in C. \tag{1.6}$$
The set of solutions of (1.6) is denoted by $\mathrm{VI}(C,B)$. The variational inequality has been studied extensively in the literature [21–28].
Iterative methods for nonexpansive mappings have recently been applied to solve convex minimization problems, which have a great impact and influence on the development of almost all branches of pure and applied sciences. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space $H$:
$$\min_{x\in F(S)}\frac{1}{2}\langle Ax, x\rangle-\langle x, b\rangle,$$
where $A$ is a bounded linear operator, $F(S)$ is the fixed-point set of a nonexpansive mapping $S$, and $b$ is a given point in $H$ [29].
We denote weak convergence and strong convergence by the notations $\rightharpoonup$ and $\to$, respectively. A mapping $B$ of $C$ into $H$ is called monotone if $\langle Bx-By, x-y\rangle\ge 0$ for all $x,y\in C$. A mapping $B$ of $C$ into $H$ is called $\beta$-inverse-strongly monotone if there exists a positive real number $\beta$ such that $\langle Bx-By, x-y\rangle\ge\beta\|Bx-By\|^2$ for all $x,y\in C$. It is obvious that any $\beta$-inverse-strongly monotone mapping is monotone and Lipschitz continuous. A bounded linear operator $A$ on $H$ is strongly positive if there exists a constant $\bar\gamma>0$ with the property $\langle Ax, x\rangle\ge\bar\gamma\|x\|^2$ for all $x\in H$. A self-mapping $f:C\to C$ is a contraction on $C$ if there exists a constant $\alpha\in[0,1)$ such that $\|f(x)-f(y)\|\le\alpha\|x-y\|$ for all $x,y\in C$. We use $\Pi_C$ to denote the collection of all contractions on $C$. Note that each $f\in\Pi_C$ has a unique fixed point in $C$.
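The last remark, that each contraction has a unique fixed point, is Banach's contraction principle, and the fixed point is the limit of the Picard iterates. A minimal sketch on the real line, with the illustrative contraction $f(x)=x/2+1$ (our own choice, not from the paper):

```python
def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    # Banach iteration: a contraction with coefficient a < 1 on a complete
    # space has a unique fixed point, reached by iterating f from any start.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# f(x) = x/2 + 1 is a contraction with coefficient 1/2; its fixed point is 2.
print(fixed_point(lambda x: x / 2.0 + 1.0, 0.0))
```

The error halves at each step, so the tolerance is met after roughly forty iterations regardless of the starting point.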
Let $B:H\to H$ be a single-valued nonlinear mapping and $M:H\to 2^H$ a set-valued mapping. The variational inclusion problem is to find $x\in H$ such that
$$\theta\in B(x)+M(x), \tag{1.12}$$
where $\theta$ is the zero vector in $H$. The set of solutions of problem (1.12) is denoted by $I(B,M)$. The variational inclusion has been studied extensively in the literature; see, for example, [30–32] and the references therein.
A set-valued mapping $M:H\to 2^H$ is called monotone if, for all $x,y\in H$, $f\in M(x)$ and $g\in M(y)$ imply $\langle x-y, f-g\rangle\ge 0$. A monotone mapping $M$ is maximal if its graph $G(M)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $M$ is maximal if and only if, for $(x,f)\in H\times H$, $\langle x-y, f-g\rangle\ge 0$ for all $(y,g)\in G(M)$ implies $f\in M(x)$.
Let $B$ be an inverse-strongly monotone mapping of $C$ into $H$, and let $N_C(v)$ be the normal cone to $C$ at $v\in C$, that is, $N_C(v)=\{w\in H:\langle v-u, w\rangle\ge 0,\ \forall u\in C\}$, and define
$$Mv=\begin{cases} Bv+N_C(v), & v\in C,\\ \emptyset, & v\notin C.\end{cases}$$
Then $M$ is maximal monotone and $\theta\in Mv$ if and only if $v\in\mathrm{VI}(C,B)$ (see [33]).
Let $M:H\to 2^H$ be a set-valued maximal monotone mapping. Then the single-valued mapping $J_{M,\lambda}:H\to H$ defined by
$$J_{M,\lambda}(x)=(I+\lambda M)^{-1}(x),\quad x\in H,$$
is called the resolvent operator associated with $M$, where $\lambda$ is any positive number and $I$ is the identity mapping. It is worth mentioning that the resolvent operator $J_{M,\lambda}$ is nonexpansive and 1-inverse-strongly monotone, and that a solution of problem (1.12) is a fixed point of the operator $J_{M,\lambda}(I-\lambda B)$ for all $\lambda>0$ (for more details, see [34]).
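To make the resolvent concrete: for the maximal monotone operator $M=\partial|\cdot|$ on $\mathbb{R}$, the resolvent $J_{M,\lambda}$ is the classical soft-thresholding map, and iterating $x\mapsto J_{M,\lambda}(x-\lambda Bx)$ finds a solution of $\theta\in Bx+Mx$. The sketch below uses the illustrative choice $B(x)=x-2$ (an assumption for demonstration only), whose inclusion $0\in x-2+\partial|x|$ has the unique solution $x=1$:

```python
import math

def soft_threshold(x, lam):
    # Resolvent (I + lam*M)^{-1} of M = subdifferential of |.| in 1D:
    # the classical soft-thresholding operator.
    return math.copysign(max(abs(x) - lam, 0.0), x)

def solve_inclusion(x0, lam=0.5, iters=100):
    # Fixed-point iteration x_{n+1} = J_{M,lam}(x_n - lam*B(x_n))
    # for the inclusion 0 in B(x) + M(x) with the assumed B(x) = x - 2.
    x = x0
    for _ in range(iters):
        x = soft_threshold(x - lam * (x - 2.0), lam)
    return x

print(solve_inclusion(5.0))  # converges to 1.0, the minimizer of (x-2)^2/2 + |x|
```

The iteration here is a contraction with factor $1/2$, so the fixed point $x=1$ is reached geometrically from any starting point.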
In 2000, Moudafi [35] introduced the viscosity approximation method for nonexpansive mappings and proved that, if $H$ is a real Hilbert space, the sequence $\{x_n\}$ defined by the iterative method
$$x_{n+1}=\alpha_n f(x_n)+(1-\alpha_n)Tx_n,\quad n\ge 0,$$
with the initial guess $x_0\in H$ chosen arbitrarily, where $\{\alpha_n\}\subset(0,1)$ satisfies certain conditions, converges strongly to a fixed point of $T$ (say $\bar x\in F(T)$), which is then the unique solution of the following variational inequality:
$$\langle (I-f)\bar x, x-\bar x\rangle\ge 0,\quad \forall x\in F(T).$$
In 2006, Marino and Xu [29] introduced a general iterative method for nonexpansive mappings. They defined the sequence $\{x_n\}$ generated by the algorithm
$$x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_n A)Tx_n,\quad n\ge 0, \tag{1.17}$$
where $\{\alpha_n\}\subset(0,1)$ and $A$ is a strongly positive bounded linear operator. They proved that if $0<\gamma<\bar\gamma/\alpha$ and the sequence $\{\alpha_n\}$ satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.17) converges strongly to a fixed point of $T$ (say $\tilde x\in F(T)$) which is the unique solution of the following variational inequality:
$$\langle (A-\gamma f)\tilde x, x-\tilde x\rangle\ge 0,\quad \forall x\in F(T),$$
which is the optimality condition for the minimization problem
$$\min_{x\in F(T)}\frac{1}{2}\langle Ax, x\rangle-h(x),$$
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for $x\in H$).
For finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of variational inequalities, let $P_C$ be the projection of $H$ onto $C$. In 2005, Iiduka and Takahashi [36] introduced the following iterative process: for $x\in C$ and $x_0\in C$,
$$x_{n+1}=\alpha_n x+(1-\alpha_n)SP_C(x_n-\lambda_n Bx_n),\quad n\ge 0, \tag{1.20}$$
where $\{\alpha_n\}\subset(0,1)$ and $\{\lambda_n\}\subset[a,b]$ for some $a,b$ with $0<a<b<2\beta$. They proved that, under certain appropriate conditions imposed on $\{\alpha_n\}$ and $\{\lambda_n\}$, the sequence $\{x_n\}$ generated by (1.20) converges strongly to a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for an inverse-strongly monotone mapping (say $\bar x\in F(S)\cap\mathrm{VI}(C,B)$), which solves some variational inequality.
In 2008, Su et al. [37] introduced the following iterative scheme by the viscosity approximation method in a real Hilbert space: for all , where and satisfy some appropriate conditions. Furthermore, they proved that and converge strongly to the same point , where .
Let $\{T_n\}$ be an infinite family of nonexpansive mappings of $C$ into itself, and let $\{\mu_n\}$ be a real sequence such that $0\le\mu_n\le 1$ for every $n\ge 1$. For $n\ge 1$, we define a mapping $W_n$ of $C$ into itself as follows:
In 2011, He et al. [38] introduced the following iterative process for $\{T_n\}$, a sequence of nonexpansive mappings. Let $\{x_n\}$ be the sequence defined by (1.24). The sequence defined by (1.24) converges strongly to a common element of the set of fixed points of nonexpansive mappings, the set of solutions of the variational inequality, and the generalized equilibrium problem. Recently, Jitpeera and Kumam [39] introduced a new general iterative method for finding a common element of the set of fixed points of nonexpansive mappings, the set of solutions of generalized mixed equilibrium problems, and the set of solutions of the variational inclusion for a $\beta$-inverse-strongly monotone mapping in a real Hilbert space.
In this paper, we modify the iterative methods (1.17), (1.22), and (1.24) by proposing the following new general viscosity iterative method: , for all , where , , and satisfy some appropriate conditions. The purpose of this paper is to show that, under some control conditions, the sequence converges strongly to a common element of the set of common fixed points of nonexpansive mappings, the set of solutions of the system of mixed equilibrium problems, and the set of solutions of the variational inclusion in a real Hilbert space. Moreover, we apply our results to the class of strictly pseudocontractive mappings. Finally, we give a numerical example which supports our main theorem in the last part. Our results improve and extend the corresponding results of Marino and Xu [29], Su et al. [37], He et al. [38], and other authors.
2. Preliminaries
Let $H$ be a real Hilbert space and $C$ a nonempty closed and convex subset of $H$. Recall that the (nearest point) projection $P_C$ from $H$ onto $C$ assigns to each $x\in H$ the unique point $P_Cx\in C$ satisfying the property
$$\|x-P_Cx\|=\min_{y\in C}\|x-y\|,$$
which is equivalent to the following inequality:
$$\langle x-P_Cx, y-P_Cx\rangle\le 0,\quad \forall y\in C. \tag{2.2}$$
The following characterizes the projection $P_C$. We recall some lemmas which will be needed in the rest of this paper.
Lemma 2.1. The function $u\in C$ is a solution of the variational inequality (1.6) if and only if $u$ satisfies the relation $u=P_C(u-\lambda Bu)$ for all $\lambda>0$.
Lemma 2.2. For a given $z\in H$, $u\in C$ satisfies $u=P_Cz$ if and only if $\langle u-z, v-u\rangle\ge 0$ for all $v\in C$.
It is well known that $P_C$ is a firmly nonexpansive mapping of $H$ onto $C$ and satisfies
$$\|P_Cx-P_Cy\|^2\le\langle P_Cx-P_Cy, x-y\rangle,\quad \forall x,y\in H. \tag{2.3}$$
Moreover, $P_Cx$ is characterized by the following properties: $P_Cx\in C$ and, for all $x\in H$, $y\in C$,
$$\langle x-P_Cx, y-P_Cx\rangle\le 0,\qquad \|x-y\|^2\ge\|x-P_Cx\|^2+\|y-P_Cx\|^2.$$
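As a quick numerical illustration of these properties, the sketch below projects onto the closed unit ball of the plane (an illustrative choice of $C$, not from the paper) and checks the characterization $\langle x-P_Cx, y-P_Cx\rangle\le 0$ on random points of $C$:

```python
import math, random

def project_ball(x, r=1.0):
    # Nearest-point projection P_C onto the closed ball C = {y : ||y|| <= r}
    # in the plane; x is a pair (x1, x2).
    n = math.hypot(*x)
    return x if n <= r else (r * x[0] / n, r * x[1] / n)

x = (3.0, 4.0)                  # a point outside the unit ball
p = project_ball(x)             # P_C x = (0.6, 0.8)
random.seed(0)
for _ in range(1000):
    y = (random.uniform(-1, 1), random.uniform(-1, 1))
    if math.hypot(*y) <= 1.0:   # keep only samples that lie in C
        # variational characterization: <x - P_C x, y - P_C x> <= 0
        inner = (x[0] - p[0]) * (y[0] - p[0]) + (x[1] - p[1]) * (y[1] - p[1])
        assert inner <= 1e-12
print(p)
```

For a ball, the projection simply rescales the point radially onto the boundary, which is why the residual $x-P_Cx$ points along $x$ and the inner product test holds with equality only for boundary points aligned with $x$.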
Lemma 2.3 (see [40]). Let $M:H\to 2^H$ be a maximal monotone mapping, and let $B:H\to H$ be a monotone and Lipschitz continuous mapping. Then the mapping $M+B:H\to 2^H$ is a maximal monotone mapping.
Lemma 2.4 (see [41]). Each Hilbert space $H$ satisfies Opial's condition; that is, for any sequence $\{x_n\}\subset H$ with $x_n\rightharpoonup x$, the inequality
$$\liminf_{n\to\infty}\|x_n-x\|<\liminf_{n\to\infty}\|x_n-y\|$$
holds for each $y\in H$ with $y\ne x$.
Lemma 2.5 (see [42]). Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1}\le(1-\gamma_n)a_n+\delta_n,\quad n\ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that (i) $\sum_{n=1}^\infty\gamma_n=\infty$; (ii) $\limsup_{n\to\infty}\delta_n/\gamma_n\le 0$ or $\sum_{n=1}^\infty|\delta_n|<\infty$. Then $\lim_{n\to\infty}a_n=0$.
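A small numerical illustration of Lemma 2.5, with the illustrative choices $\gamma_n=1/(n+1)$ (so $\sum\gamma_n=\infty$) and $\delta_n=\gamma_n/(n+1)$ (so $\delta_n/\gamma_n\to 0$); these sequences are our own, chosen only to satisfy (i)–(ii):

```python
def xu_lemma_demo(n_iters=100_000):
    # Illustrates Lemma 2.5: a_{n+1} <= (1 - g_n) a_n + d_n with
    # sum g_n = infinity and d_n / g_n -> 0 forces a_n -> 0.
    a = 1.0
    for n in range(n_iters):
        g = 1.0 / (n + 1)   # gamma_n: in (0, 1], not summable
        d = g / (n + 1)     # delta_n: delta_n / gamma_n -> 0
        a = (1 - g) * a + d
    return a

print(xu_lemma_demo())  # small positive value, decaying roughly like log(n)/n
```

Neither condition can be dropped: with $\gamma_n=2^{-n}$ the products $\prod(1-\gamma_n)$ stay bounded away from zero and $a_n$ need not vanish.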
Lemma 2.6 (see [43]). Let $C$ be a closed convex subset of a real Hilbert space $H$, and let $S:C\to C$ be a nonexpansive mapping. Then $I-S$ is demiclosed at zero; that is, $x_n\rightharpoonup x$ and $x_n-Sx_n\to 0$ imply $x=Sx$.
For solving the mixed equilibrium problem, let us assume that the bifunction $F$ and the function $\varphi$ satisfy the following conditions: (A1) $F(x,x)=0$ for all $x\in C$; (A2) $F$ is monotone, that is, $F(x,y)+F(y,x)\le 0$ for any $x,y\in C$; (A3) for each fixed $y\in C$, $x\mapsto F(x,y)$ is weakly upper semicontinuous; (A4) for each fixed $x\in C$, $y\mapsto F(x,y)$ is convex and lower semicontinuous; (B1) for each $x\in H$ and $r>0$, there exist a bounded subset $D_x\subseteq C$ and $y_x\in C$ such that, for any $z\in C\setminus D_x$,
$$F(z,y_x)+\varphi(y_x)-\varphi(z)+\frac{1}{r}\langle y_x-z, z-x\rangle<0;$$
(B2) $C$ is a bounded set.
Lemma 2.7 (see [44]). Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$. Let $F:C\times C\to\mathbb{R}$ be a bifunction satisfying (A1)–(A4), and let $\varphi:C\to\mathbb{R}$ be a convex and lower semicontinuous function. Assume that either (B1) or (B2) holds. For $r>0$ and $x\in H$, there exists $u\in C$ such that
$$F(u,y)+\varphi(y)-\varphi(u)+\frac{1}{r}\langle y-u, u-x\rangle\ge 0,\quad \forall y\in C.$$
Define a mapping $T_r:H\to C$ as follows:
$$T_r(x)=\left\{u\in C: F(u,y)+\varphi(y)-\varphi(u)+\frac{1}{r}\langle y-u, u-x\rangle\ge 0,\ \forall y\in C\right\}$$
for all $x\in H$. Then the following hold: (i) $T_r$ is single-valued; (ii) $T_r$ is firmly nonexpansive, that is, for any $x,y\in H$, $\|T_rx-T_ry\|^2\le\langle T_rx-T_ry, x-y\rangle$; (iii) $F(T_r)=\mathrm{MEP}(F,\varphi)$; (iv) $\mathrm{MEP}(F,\varphi)$ is closed and convex.
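For a concrete instance of the mapping $T_r$: taking $F(u,y)=g(y)-g(u)$ with $g(y)=y^2$ on $\mathbb{R}$ and $\varphi\equiv 0$ (illustrative choices, not from the paper), $T_r$ reduces to the proximal map of $rg$, namely $T_r(x)=x/(1+2r)$. The sketch checks the defining inequality and firm nonexpansiveness numerically:

```python
import random

def T_r(x, r=1.0):
    # Resolvent T_r for F(u, y) = g(y) - g(u) with g(y) = y^2 and phi = 0;
    # here T_r is the proximal map of r*g, which evaluates to x / (1 + 2r).
    return x / (1.0 + 2.0 * r)

random.seed(2)
r = 1.0
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # firm nonexpansiveness: |T_r x - T_r y|^2 <= <T_r x - T_r y, x - y>
    d = T_r(x, r) - T_r(y, r)
    assert d * d <= d * (x - y) + 1e-12

# defining inequality: F(u, y) + (1/r)(y - u)(u - x) >= 0 for all y in C
x = 3.0
u = T_r(x, r)   # u = 1.0; indeed (y^2 - 1) + (y - 1)(1 - 3) = (y - 1)^2 >= 0
for y in [-10 + 0.1 * i for i in range(201)]:
    assert (y * y - u * u) + (1.0 / r) * (y - u) * (u - x) >= -1e-9
print(u)
```

The closed form $x/(1+2r)$ comes from minimizing $ry^2+\tfrac12(y-x)^2$, the standard proximal reformulation of the equilibrium subproblem for this bifunction.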
Lemma 2.8 (see [29]). Assume $A$ is a strongly positive bounded linear operator on a Hilbert space $H$ with coefficient $\bar\gamma>0$ and $0<\rho\le\|A\|^{-1}$; then $\|I-\rho A\|\le 1-\rho\bar\gamma$.
Lemma 2.9 (see [38]). Let C be a nonempty closed and convex subset of a strictly convex Banach space. Let be an infinite family of nonexpansive mappings of C into itself such that , and let be a real sequence such that for every . Then .
Lemma 2.10 (see [38]). Let C be a nonempty closed and convex subset of a strictly convex Banach space. Let be an infinite family of nonexpansive mappings of C into itself, and let be a real sequence such that for every . Then, for every and , the limit exists.
In view of the previous lemma, we define $Wx=\lim_{n\to\infty}W_nx$ for every $x\in C$.
3. Strong Convergence Theorems
In this section, we prove a strong convergence theorem for finding a common element of the set of common fixed points of an infinite family of nonexpansive mappings, the set of common solutions of a system of mixed equilibrium problems, and the set of solutions of a variational inclusion for inverse-strongly monotone mappings in a Hilbert space.
Theorem 3.1. Let $H$ be a real Hilbert space and $C$ a nonempty closed and convex subset of $H$, and let $B$ be a $\beta$-inverse-strongly monotone mapping. Let $\varphi$ be a convex and lower semicontinuous function, $f$ a contraction mapping with coefficient $\alpha$ ($0<\alpha<1$), and $M$ a maximal monotone mapping. Let $A$ be a strongly positive bounded linear operator of $H$ into itself with coefficient $\bar\gamma>0$. Assume that $0<\gamma<\bar\gamma/\alpha$ and . Let $\{T_n\}$ be a family of nonexpansive mappings of $C$ into itself such that
Suppose that is a sequence generated by the following algorithm for arbitrarily and
for all , where
and the following conditions are satisfied(C1):;
(C2): with and .
Then, the sequence converges strongly to , where which solves the following variational inequality:
which is the optimality condition for the minimization problem
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for $x\in H$).
Proof. By condition (C1), we may assume, without loss of generality, that and for all . By Lemma 2.8, we have . Next, we will assume that .
Next, we will divide the proof into six steps.
Step 1. First, we show that the generated sequences are bounded. Since $B$ is a $\beta$-inverse-strongly monotone mapping, we have, for $x,y\in C$,
$$\|(I-\lambda B)x-(I-\lambda B)y\|^2=\|x-y\|^2-2\lambda\langle x-y, Bx-By\rangle+\lambda^2\|Bx-By\|^2\le\|x-y\|^2+\lambda(\lambda-2\beta)\|Bx-By\|^2;$$
if $\lambda\le 2\beta$, then $I-\lambda B$ is nonexpansive.
Put . Since and are nonexpansive mappings, it follows that
By Lemma 2.7, we have
and Then, we have
Hence, we get
From (3.2), we deduce that
It follows by induction that
Therefore is bounded, so are , and .Step 2. We claim that and . From (3.2), we have
Since and are nonexpansive, we also have
On the other hand, from and , it follows that
Substituting into (3.15) and into (3.16), we get
From (A2), we obtain
so,
It follows that
Without loss of generality, let us assume that there exists a real number such that , for all . Then, we have
and hence
where . Substituting (3.22) into (3.14), we have
Substituting (3.23) into (3.13), we get
where . By conditions (C1)-(C2) and Lemma 2.5, we have as . From (3.23), we also have as . Step 3. Next, we show that .
For hence . By (3.6) and (3.9), we get
It follows that
So, we obtain
where . By conditions (C1), (C3), and , we obtain that as . Step 4. We show the following: (i);
(ii);
(iii). Since is firmly nonexpansive and (2.3), we observe that
it follows that
Since is 1-inverse-strongly monotone and by (2.3), we compute
which implies that
Substituting (3.31) into (3.26), we have
Then, we derive
By condition (C1), and .
So, we have as . It follows that
From (3.2), we have
By condition (C1) and , we obtain that as .
Hence, we have
By (3.34) and , we obtain as .
Moreover, we also have
By (3.34) and , we obtain as .Step 5. We show that and . It is easy to see that is a contraction of into itself.
Indeed, since , we have
Since is complete, there exists a unique fixed point such that . By Lemma 2.2, we obtain that for all .
Next, we show that , where is the unique solution of the variational inequality for all . We can choose a subsequence of such that
As is bounded, there exists a subsequence of which converges weakly to . We may assume without loss of generality that .
Next we claim that . Since , , and , by Lemma 2.6 we have .
Next, we show that . Since , for , we know that
It follows by (A2) that
Hence, for , we get
For and , let . From (3.42), we have
Since , from (A4) and the weak lower semicontinuity of , and . From (A1) and (A4), we have
Dividing by , we get
By the weak lower semicontinuity of for , we get
So, we have
This implies that .
Lastly, we show that . In fact, since is -inverse-strongly monotone, is a monotone and Lipschitz continuous mapping. It follows from Lemma 2.3 that is maximal monotone. Let , since . Again since , we have , that is, . By virtue of the maximal monotonicity of , we have
and hence
From , we have and ; it follows that
It follows from the maximal monotonicity of that , that is, . Therefore, . We observe that
Step 6. Finally, we prove . By using (3.2) together with the Schwarz inequality, we have
Since is bounded, where for all . It follows that
where . Since , we get . Applying Lemma 2.5, we can conclude that . This completes the proof.
Corollary 3.2. Let be a real Hilbert space and a nonempty closed and convex subset of . Let be -inverse-strongly monotone and a convex and lower semicontinuous function. Let be a contraction with coefficient , a maximal monotone mapping, and a family of nonexpansive mappings of into itself such that
Suppose that is a sequence generated by the following algorithm for arbitrarily:
for all , and the conditions (C1)–(C3) in Theorem 3.1 are satisfied.
Then, the sequence converges strongly to , where which solves the following variational inequality:
Proof. Putting and in Theorem 3.1, we can obtain the desired conclusion immediately.
Corollary 3.3. Let be a real Hilbert space and a nonempty closed and convex subset of . Let be -inverse-strongly monotone, a convex and lower semicontinuous function, and a maximal monotone mapping. Let be a family of nonexpansive mappings of into itself such that
Suppose that is a sequence generated by the following algorithm for and :
for all , and the conditions (C1)–(C3) in Theorem 3.1 are satisfied.
Then, the sequence converges strongly to , where which solves the following variational inequality:
Proof. Putting , for all in Corollary 3.2, we can obtain the desired conclusion immediately.
Corollary 3.4. Let be a real Hilbert space and a nonempty closed and convex subset of , and let be -inverse-strongly monotone mapping and a strongly positive linear bounded operator of into itself with coefficient . Assume that . Let be a contraction with coefficient and be a family of nonexpansive mappings of into itself such that
Suppose that is a sequence generated by the following algorithm for arbitrarily:
for all , and the conditions (C1)–(C3) in Theorem 3.1 are satisfied.
Then, the sequence converges strongly to , where which solves the following variational inequality:
Proof. Taking , , and in Theorem 3.1, we can obtain the desired conclusion immediately.
Remark 3.5. Corollary 3.4 generalizes and improves the result of Klin-Eam and Suantai [45].
4. Applications
In this section, we apply the iterative scheme (1.25) for finding a common fixed point of nonexpansive mapping and strictly pseudocontractive mapping.
Definition 4.1. A mapping $T:C\to C$ is called a strict pseudocontraction if there exists a constant $0\le k<1$ such that
$$\|Tx-Ty\|^2\le\|x-y\|^2+k\|(I-T)x-(I-T)y\|^2,\quad \forall x,y\in C.$$
If $k=0$, then $T$ is nonexpansive. In this case, we say that $T$ is a $k$-strict pseudocontraction. Put $B=I-T$. Then we have
$$\|(I-B)x-(I-B)y\|^2\le\|x-y\|^2+k\|Bx-By\|^2.$$
Observe that
$$\|(I-B)x-(I-B)y\|^2=\|x-y\|^2-2\langle x-y, Bx-By\rangle+\|Bx-By\|^2.$$
Hence, we obtain
$$\langle x-y, Bx-By\rangle\ge\frac{1-k}{2}\|Bx-By\|^2.$$
Then $B$ is a $\frac{1-k}{2}$-inverse-strongly monotone mapping.
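A quick numerical check of this computation: on the real line, $T(x)=-2x$ is a $\tfrac13$-strict pseudocontraction (an illustrative choice of ours), so $B=I-T$ should be $\tfrac{1-k}{2}=\tfrac13$-inverse-strongly monotone:

```python
import random

k = 1.0 / 3.0              # T below is a k-strict pseudocontraction for this k
T = lambda x: -2.0 * x     # illustrative 1D example (our assumption)
B = lambda x: x - T(x)     # B = I - T, which equals 3x here
beta = (1.0 - k) / 2.0     # claimed inverse-strong monotonicity constant 1/3

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    # strict pseudocontraction inequality
    assert (T(x) - T(y))**2 <= (x - y)**2 + k * ((x - T(x)) - (y - T(y)))**2 + 1e-9
    # inverse-strong monotonicity of B with constant (1 - k)/2
    assert (x - y) * (B(x) - B(y)) >= beta * (B(x) - B(y))**2 - 1e-9
print("both inequalities hold")
```

For this particular $T$ both inequalities hold with equality, which shows the constant $\tfrac{1-k}{2}$ in the definition cannot be improved in general.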
Using Theorem 3.1, we first prove a strongly convergence theorem for finding a common fixed point of a nonexpansive mapping and a strictly pseudocontraction.
Theorem 4.2. Let be a real Hilbert space and a nonempty closed and convex subset of , and let be an -inverse-strongly monotone, a convex and lower semicontinuous function, and a contraction with coefficient , and let be a strongly positive linear bounded operator of into itself with coefficient . Assume that . Let be a family of nonexpansive mappings of into itself, and let be a -strictly pseudocontraction of into itself such that
Suppose that is a sequence generated by the following algorithm for arbitrarily:
for all , and the conditions (C1)–(C3) in Theorem 3.1 are satisfied.
Then, the sequence converges strongly to , where which solves the following variational inequality:
which is the optimality condition for the minimization problem
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for $x\in H$).
Proof. Put ; then is inverse-strongly monotone and , and . So, by Theorem 3.1, we obtain the desired result.
Corollary 4.3. Let be a real Hilbert space and a closed convex subset of , and let B be -inverse-strongly monotone and a convex and lower semicontinuous function. Let be a contraction with coefficient and a nonexpansive mapping of into itself, and let be a -strictly pseudocontraction of into itself such that
Suppose that is a sequence generated by the following algorithm for arbitrarily:
for all , and the conditions (C1)–(C3) in Theorem 3.1 are satisfied.
Then, the sequence converges strongly to , where which solves the following variational inequality:
which is the optimality condition for the minimization problem
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x)=\gamma f(x)$ for $x\in H$).
Proof. Putting and in Theorem 4.2, we obtain the desired result.
5. Numerical Example
Now, we give a real numerical example satisfying the conditions of Theorem 3.1, together with some numerical experiment results, to illustrate the main result, Theorem 3.1, as follows.
Example 5.1. Let , , , , , , for all , , , , for all , for all , with contraction coefficient , for every , and . Then is the sequence generated by and as , where 0 is the unique solution of the minimization problem
Proof. We prove Example 5.1 through Steps 1–3. In Step 4, we give two numerical experiment results which directly show that the sequence converges strongly to 0.
Step 1. We show
where
Indeed, since for all , due to the definition of , for all , as in Lemma 2.7, we have
Also, by the equivalent property (2.2) of the nearest-point projection from , we obtain this conclusion when we take , . By (iii) in Lemma 2.7, we have
Step 2. We show that
Indeed, by (1.23), we have
Computing in this way by (1.23), we obtain
Since , we have
Step 3. We show that
where 0 is the unique solution of the minimization problem
Indeed, we can see that is a strongly positive bounded linear operator with coefficient ; is a real number such that , so we can take . Due to (5.1), (5.4), and (5.7), we can obtain a special sequence of (3.2) in Theorem 3.1 as follows:
Since , we have
combining with (5.6), we have
By Lemma 2.5, it is obvious that ; 0 is the unique solution of the minimization problem
where is a constant number. Step 4. We give the numerical experiment results, obtained using the software Matlab 7.0, in Tables 1 and 2, which show that the sequence is monotone decreasing and converges to 0; the more iteration steps are taken, the closer the sequence comes to 0.
Now, we turn to realizing (3.2) for approximating a fixed point of . We take the initial values and = 1/2, respectively. All the numerical results are given in Tables 1 and 2. The corresponding graphs appear in Figures 1(a) and 1(b).
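A minimal sketch of a toy iteration of the same viscosity type on the real line, under simplifying assumptions of our own ($A=I$, $\gamma=1$, $T=I$, $f(x)=x/2$ with contraction coefficient $1/2$, $\alpha_n=1/n$); it is not the paper's exact scheme (3.2), but it exhibits the same monotone decrease to 0:

```python
def viscosity_iteration(x0, n_iters=200_000):
    # Toy instance of x_{n+1} = a_n * g * f(x_n) + (I - a_n * A) x_n on R,
    # with the assumed data f(x) = x/2, g = 1, A = T = identity, a_n = 1/n.
    # Each step reduces to x_{n+1} = (1 - a_n / 2) * x_n.
    x = x0
    for n in range(1, n_iters + 1):
        a = 1.0 / n
        x = a * (x / 2.0) + (1.0 - a) * x
    return x

print(viscosity_iteration(1.0))  # monotonically decreasing toward 0
```

Since $\sum\alpha_n/2=\infty$, the product $\prod(1-\alpha_n/2)$ tends to 0, so the iterates vanish, but only at the sublinear rate $O(1/\sqrt{n})$, matching the slow tail behavior a diminishing step sequence produces.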
The numerical results support our main theorem as shown by calculating and plotting graphs using Matlab 7.11.0.
Acknowledgments
The authors would like to thank the Higher Education Research Promotion and National Research University Project of Thailand, Office of the Higher Education Commission (under the project NRU-CSEC no. 54000267) for financial support. Furthermore, the second author was supported by the Commission on Higher Education, the Thailand Research Fund, and the King Mongkut's University of Technology Thonburi (KMUTT) (Grant no. MRG5360044). Finally, the authors would like to thank the referees for reading this paper carefully, providing valuable suggestions and comments, and pointing out major errors in the original version of this paper.