Journal of Applied Mathematics

Volume 2013 (2013), Article ID 957363, 11 pages

http://dx.doi.org/10.1155/2013/957363

## General Iterative Methods for System of Equilibrium Problems and Constrained Convex Minimization Problem in Hilbert Spaces

College of Science, Civil Aviation University of China, Tianjin 300300, China

Received 29 December 2012; Accepted 12 July 2013

Academic Editor: Luigi Muglia

Copyright © 2013 Peichao Duan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We propose an implicit iterative scheme and an explicit iterative scheme for finding a common element of the set of solutions of a system of equilibrium problems and the set of solutions of a constrained convex minimization problem by the general iterative methods. In the setting of real Hilbert spaces, strong convergence theorems are proved. Our results improve and extend the corresponding results reported by Tian and Liu (2012) and many others. Furthermore, we give a numerical example to demonstrate the effectiveness of our iterative scheme.

#### 1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$.

Let $\{F_k : k \in \Gamma\}$ be a countable family of bifunctions from $C \times C$ to $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers. Combettes and Hirstoaga [1] considered the following system of equilibrium problems, which is to find $x \in C$ such that
$$F_k(x, y) \ge 0, \quad \forall y \in C,\ \forall k \in \Gamma, \tag{1}$$
where $\Gamma$ is an arbitrary index set. If $\Gamma$ is a singleton, then problem (1) becomes the following equilibrium problem: find $x \in C$ such that
$$F(x, y) \ge 0, \quad \forall y \in C. \tag{2}$$
The solution set of (2) is denoted by $\mathrm{EP}(F)$.

Numerous problems in physics, optimization, and economics reduce to finding a solution of the equilibrium problem. Many methods have been proposed to solve the equilibrium problem (2); see [2–4] and the references therein. In particular, some methods have been proposed to solve the system of equilibrium problems. See [5–7] and the references therein.

On the other hand, we consider the following constrained convex minimization problem:
$$\min_{x \in C} g(x), \tag{3}$$
where $g : C \to \mathbb{R}$ is a real-valued convex function. It is known that the gradient projection algorithm (GPA) is a powerful tool for solving constrained minimization problems and has been studied extensively; see, for instance, [8–10]. If $g$ is (Fréchet) differentiable, then the GPA generates a sequence $\{x_n\}$ using the following recursive formula:
$$x_{n+1} = P_C\bigl(x_n - \lambda \nabla g(x_n)\bigr), \quad n \ge 0, \tag{4}$$
or, more generally,
$$x_{n+1} = P_C\bigl(x_n - \lambda_n \nabla g(x_n)\bigr), \quad n \ge 0, \tag{5}$$
where in both (4) and (5) the initial guess $x_0$ is taken from $C$ arbitrarily, and the parameters, $\lambda$ or $\lambda_n$, are positive real numbers satisfying certain conditions. The convergence of the algorithms (4) and (5) depends on the behavior of the gradient $\nabla g$. As a matter of fact, it is known that if $\nabla g$ is $\eta$-strongly monotone and $L$-Lipschitzian with constants $\eta, L > 0$, then for $0 < \lambda < 2\eta/L^2$ the operator
$$P_C(I - \lambda \nabla g) \tag{6}$$
is a contraction; hence, the sequence $\{x_n\}$ defined by the algorithm (4) converges in norm to the unique minimizer of (3). However, if the gradient $\nabla g$ fails to be strongly monotone, the operator defined by (6) may fail to be contractive; consequently, the sequence $\{x_n\}$ generated by the algorithm (4) may fail to converge strongly [11]. If $\nabla g$ is Lipschitzian, then the algorithms (4) and (5) can still converge in the weak topology under certain conditions [10, 12].
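As a concrete illustration of the GPA recursion (4), the following self-contained sketch minimizes a simple strongly convex quadratic over a box; the objective, the set $C$, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    # Metric projection P_C onto the box C = [lo, hi]^n.
    return np.clip(x, lo, hi)

def gpa(grad, project, x0, lam, iters=200):
    # GPA recursion (4): x_{n+1} = P_C(x_n - lam * grad(x_n)).
    x = x0
    for _ in range(iters):
        x = project(x - lam * grad(x))
    return x

# Illustrative problem: minimize g(x) = 0.5*||x - b||^2 over C = [0,1]^3.
# Here grad g(x) = x - b is 1-Lipschitzian and 1-strongly monotone
# (eta = L = 1), so any fixed step 0 < lam < 2*eta/L^2 = 2 yields a contraction.
b = np.array([1.5, -0.3, 0.4])
x_star = gpa(lambda x: x - b,
             lambda x: project_box(x, 0.0, 1.0),
             x0=np.zeros(3), lam=0.5)
print(x_star)  # converges to P_C(b) = [1.0, 0.0, 0.4]
```

Since the gradient here is strongly monotone, the iteration is a Picard iteration of a contraction and converges in norm, exactly the regime described above.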

In 2007, Marino and Xu [3] introduced the general iterative method and proved that the corresponding algorithm converges strongly. In 2009, Liu [2] considered two iterative schemes by the general iterative method for equilibrium problems and strict pseudocontractions. In 2011, Xu [11] gave an alternative operator-oriented approach to algorithm (5), namely, an averaged mapping approach. He applied his averaged mapping approach to the GPA (5) and the relaxed GPA. Moreover, he constructed a counterexample showing that the algorithm (4) does not converge in norm in an infinite-dimensional space and also presented two modifications of the GPA which are shown to converge strongly. Recently, Ceng et al. [8] proposed implicit and explicit iterative schemes for finding the approximate minimizer of a constrained convex minimization problem and proved that the sequences generated by their schemes converge strongly to a solution of the constrained convex minimization problem. Very recently, Tian and Liu [9] proposed implicit and explicit composite iterative algorithms for finding a common solution of an equilibrium problem and a constrained convex minimization problem; strong convergence theorems were obtained in [9].

In this paper, motivated by the above facts, we introduce two iterative schemes by the composite general iterative method. Furthermore, we obtain strong convergence theorems for finding a common element of the set of solutions of a constrained convex minimization problem and the set of solutions of a system of equilibrium problems with a finite index set.

#### 2. Preliminaries

Throughout this paper, we write $x_n \rightharpoonup x$ for weak convergence and $x_n \to x$ for strong convergence. We need some definitions and tools in a real Hilbert space $H$, which are listed below.

A mapping $T$ of $C$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. The set of fixed points of $T$ is denoted by $\mathrm{Fix}(T)$; that is, $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$.

A mapping $T : H \to H$ is said to be an averaged mapping if it can be written as the average of the identity and a nonexpansive mapping; that is,
$$T = (1 - \alpha)I + \alpha S,$$
where $\alpha$ is a number in $(0, 1)$ and $S : H \to H$ is nonexpansive. More precisely, we say that $T$ is $\alpha$-averaged. It is known that the projection $P_C$ is $\tfrac{1}{2}$-averaged.

Lemma 1. *Let $H$ be a real Hilbert space. Then the following identities hold:* (i) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$, $\forall x, y \in H$; (ii) $\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2$, $\forall x, y \in H$, $\forall t \in [0, 1]$.

Lemma 2 (see [10]). *Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that* (i) *$\sum_{n=0}^{\infty} \gamma_n = \infty$;* (ii) *$\limsup_{n \to \infty} \delta_n/\gamma_n \le 0$ or $\sum_{n=0}^{\infty} |\delta_n| < \infty$.
Then $\lim_{n \to \infty} a_n = 0$.*

Recall that, given a nonempty closed convex subset $C$ of a real Hilbert space $H$, for any $x \in H$ there exists a unique nearest point in $C$, denoted by $P_C x$, such that $\|x - P_C x\| \le \|x - y\|$ for all $y \in C$. Such a $P_C$ is called the metric (or nearest point) projection of $H$ onto $C$. It is well known that $z = P_C x$ if and only if the following relation holds:
$$\langle x - z, y - z\rangle \le 0, \quad \forall y \in C.$$
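The variational characterization of the metric projection can be checked numerically on a simple example; the box set and the random sampling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
P = lambda v: np.clip(v, 0.0, 1.0)      # metric projection onto C = [0,1]^4

x = 3.0 * rng.normal(size=4)            # an arbitrary point of H = R^4
px = P(x)
# The characterization: <x - P_C x, y - P_C x> <= 0 for every y in C.
worst = max(float(np.dot(x - px, rng.uniform(size=4) - px)) for _ in range(1000))
print(worst <= 1e-12)  # True
```

For the box, the inequality even holds coordinatewise: whenever a coordinate of $x$ leaves $[0,1]$, the factors $x_i - (P_C x)_i$ and $y_i - (P_C x)_i$ have opposite signs.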

Lemma 3 (see [13]). *Let $F$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator on a Hilbert space $H$ with $\kappa > 0$, $\eta > 0$, $0 < \mu < 2\eta/\kappa^2$, and $0 < t < 1$. Then $S = I - t\mu F : H \to H$ is a contraction with contractive coefficient $1 - t\tau$, where $\tau = \tfrac{1}{2}\mu(2\eta - \mu\kappa^2)$.*
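The contraction estimate of Lemma 3 can be verified numerically for a linear operator; the matrix $M$ and the parameter choices below are illustrative assumptions, with the modulus taken as $\tau = \tfrac{1}{2}\mu(2\eta - \mu\kappa^2)$.

```python
import numpy as np

# F(x) = M x is kappa-Lipschitzian and eta-strongly monotone with
# eta = 2 (smallest eigenvalue) and kappa = 3 (largest eigenvalue).
M = np.diag([2.0, 3.0])
eta, kappa = 2.0, 3.0
mu = 0.4                                 # 0 < mu < 2*eta/kappa**2 = 4/9
tau = 0.5 * mu * (2 * eta - mu * kappa**2)

rng = np.random.default_rng(1)
ok = True
for t in np.linspace(0.05, 0.95, 10):
    S = np.eye(2) - t * mu * M           # S = I - t*mu*F
    for _ in range(100):
        x, y = rng.normal(size=2), rng.normal(size=2)
        lhs = np.linalg.norm(S @ (x - y))
        rhs = (1 - t * tau) * np.linalg.norm(x - y)
        ok &= lhs <= rhs + 1e-12
print(ok)  # True
```

For this diagonal example the sharp Lipschitz constant of $S$ is $\max(|1 - 0.8t|, |1 - 1.2t|)$, which is indeed bounded by $1 - t\tau$.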

*Definition 4. *A nonlinear mapping $T$ whose domain is $D(T) \subseteq H$ and range is $R(T) \subseteq H$ is said to be (i) monotone if
$$\langle Tx - Ty, x - y\rangle \ge 0, \quad \forall x, y \in D(T);$$
(ii) $\eta$-strongly monotone if there exists $\eta > 0$ such that
$$\langle Tx - Ty, x - y\rangle \ge \eta\|x - y\|^2, \quad \forall x, y \in D(T);$$
(iii) $\nu$-inverse strongly monotone (for short, $\nu$-ism) if there exists a constant $\nu > 0$ such that
$$\langle Tx - Ty, x - y\rangle \ge \nu\|Tx - Ty\|^2, \quad \forall x, y \in D(T).$$

Lemma 5. *Let $f$ be an $l$-Lipschitzian mapping with coefficient $l > 0$, and let $A$ be a strongly positive bounded linear operator with coefficient $\bar{\gamma} > 0$. Then, for $0 < \gamma < \bar{\gamma}/l$,
$$\langle (A - \gamma f)x - (A - \gamma f)y, x - y\rangle \ge (\bar{\gamma} - \gamma l)\|x - y\|^2, \quad \forall x, y \in H;$$
that is, $A - \gamma f$ is strongly monotone with coefficient $\bar{\gamma} - \gamma l$.*

* Proof. *Since $A$ is a strongly positive bounded linear operator with coefficient $\bar{\gamma}$, we have
$$\langle Ax - Ay, x - y\rangle \ge \bar{\gamma}\|x - y\|^2, \quad \forall x, y \in H.$$
Hence $A - \gamma f$ is $(\|A\| + \gamma l)$-Lipschitzian and $(\bar{\gamma} - \gamma l)$-strongly monotone:
$$\langle (A - \gamma f)x - (A - \gamma f)y, x - y\rangle \ge \bar{\gamma}\|x - y\|^2 - \gamma l\|x - y\|^2 = (\bar{\gamma} - \gamma l)\|x - y\|^2.$$

Proposition 6. *For given operators $S, T, V : H \to H$:*(i)*if $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged;*(ii)*$T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive;*(iii)*if $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, $S$ is firmly nonexpansive, and $V$ is nonexpansive, then $T$ is averaged;*(iv)*the composite of finitely many averaged mappings is averaged; that is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.*

Proposition 7. *Let $T$ be an operator from $H$ to itself:*(i)*$T$ is nonexpansive if and only if the complement $I - T$ is $\tfrac{1}{2}$-ism;*(ii)*if $T$ is $\nu$-ism, then for $\gamma > 0$, $\gamma T$ is $(\nu/\gamma)$-ism;*(iii)*$T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > 1/2$. Indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if the complement $I - T$ is $\tfrac{1}{2\alpha}$-ism.*

For solving the equilibrium problem, let us assume that the bifunction $F : C \times C \to \mathbb{R}$ satisfies the following conditions: (A1) $F(x, x) = 0$ for all $x \in C$; (A2) $F$ is monotone; that is, $F(x, y) + F(y, x) \le 0$ for any $x, y \in C$; (A3) for each $x, y, z \in C$, $\limsup_{t \downarrow 0} F(tz + (1 - t)x, y) \le F(x, y)$; (A4) $y \mapsto F(x, y)$ is convex and lower semicontinuous for each $x \in C$.

We recall some lemmas which will be needed in the rest of this paper.

Lemma 8 (see [14]). *Let $C$ be a nonempty closed convex subset of $H$, let $F$ be a bifunction from $C \times C$ to $\mathbb{R}$ satisfying (A1)–(A4), and let $r > 0$ and $x \in H$. Then there exists $z \in C$ such that
$$F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.$$*

Lemma 9 (see [1]). *For $r > 0$ and $x \in H$, define a mapping $T_r : H \to C$ as follows:
$$T_r(x) = \Bigl\{z \in C : F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0,\ \forall y \in C\Bigr\}$$
for all $x \in H$. Then the following statements hold:*(i)*$T_r$ is single-valued;*(ii)*$T_r$ is firmly nonexpansive; that is, for any $x, y \in H$,
$$\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y\rangle;$$*(iii)*$\mathrm{Fix}(T_r) = \mathrm{EP}(F)$;*(iv)*$\mathrm{EP}(F)$ is closed and convex.*
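For a concrete feel for the resolvent $T_r$ of Lemma 9, consider the illustrative bifunction $F(z, y) = az(y - z)$ on $C = \mathbb{R}$, $a > 0$ (an assumption made only for this sketch), for which $T_r$ has a closed form.

```python
# Illustrative bifunction on C = R: F(z, y) = a*z*(y - z) with a > 0.
# The resolvent T_r x is the unique z with
#   a*z*(y - z) + (1/r)*(y - z)*(z - x) >= 0  for all y in R,
# which forces a*z + (z - x)/r = 0, i.e.  T_r x = x / (1 + r*a).
a, r = 2.0, 0.5
T = lambda x: x / (1 + r * a)

x, y = 3.0, -1.5
# (ii) firm nonexpansivity: |T x - T y|^2 <= (T x - T y) * (x - y)
assert (T(x) - T(y)) ** 2 <= (T(x) - T(y)) * (x - y)
# (iii) Fix(T_r) = EP(F): here both sets equal {0}
assert T(0.0) == 0.0
print(T(3.0))  # 1.5
```

The map $x \mapsto x/(1 + ra)$ contracts toward the unique equilibrium point $0$, illustrating items (i)–(iii) of the lemma at once.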

Lemma 10 (see [4]). *Let $C$, $F$, and $T_r$ be as in Lemma 9. Then the following holds:
$$\|T_s x - T_t x\|^2 \le \frac{s - t}{s}\langle T_s x - T_t x, T_s x - x\rangle$$
for all $s, t > 0$ and $x \in H$.*

Lemma 11 (see [13]). *Let $H$ be a Hilbert space, $C$ a nonempty closed convex subset of $H$, and $T : C \to C$ a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$.*

#### 3. Main Result

Throughout the rest of this paper, we always assume that $f$ is an $l$-Lipschitzian mapping with coefficient $l > 0$ and that $A$ is a strongly positive bounded linear operator with coefficient $\bar{\gamma}$, with $0 < \gamma < \bar{\gamma}/l$. Then we obtain that $A - \gamma f$ is $(\|A\| + \gamma l)$-Lipschitzian and $(\bar{\gamma} - \gamma l)$-strongly monotone. Let $g : C \to \mathbb{R}$ be a real-valued convex function and assume that $\nabla g$ is $\tfrac{1}{L}$-ism with $L > 0$, which then implies that $\lambda \nabla g$ is $\tfrac{1}{\lambda L}$-ism. So, by Proposition 7, its complement $I - \lambda \nabla g$ is $\tfrac{\lambda L}{2}$-averaged. Since the projection $P_C$ is $\tfrac{1}{2}$-averaged, we obtain from Proposition 6 that the composition $P_C(I - \lambda \nabla g)$ is $\tfrac{2 + \lambda L}{4}$-averaged for $0 < \lambda < 2/L$. Hence we have that, for each $n$, $P_C(I - \lambda_n \nabla g)$ is $\tfrac{2 + \lambda_n L}{4}$-averaged. Therefore, we can write
$$P_C(I - \lambda_n \nabla g) = \frac{2 - \lambda_n L}{4}I + \frac{2 + \lambda_n L}{4}T_n,$$
where $T_n$ is nonexpansive.

Suppose that the minimization problem (3) is consistent, and let $U$ denote its solution set. Assume that $\Omega = \bigl(\bigcap_{k=1}^{M} \mathrm{EP}(F_k)\bigr) \cap U \ne \emptyset$ and $0 \le \gamma l < \tau$.

Denote $\Im^k = T_{r_k}^{F_k} T_{r_{k-1}}^{F_{k-1}} \cdots T_{r_1}^{F_1}$ for every $k \in \{1, 2, \dots, M\}$ and $\Im^0 = I$, where $r_k > 0$ for all $k$. Since both $\Im^M$ and $T_s$ are nonexpansive, it is easy to see that the composition $T_s \Im^M$ is also nonexpansive. Consider the following mapping $Q_s$ on $C$ defined by
$$Q_s x = s\gamma f(x) + (I - s\mu A)T_s \Im^M x, \quad x \in C,$$
where $s \in (0, 1)$. By Lemmas 3 and 9, we have
$$\|Q_s x - Q_s y\| \le s\gamma l\|x - y\| + (1 - s\tau)\|x - y\| = \bigl(1 - s(\tau - \gamma l)\bigr)\|x - y\|.$$
Since $0 \le \gamma l < \tau$, it follows that $Q_s$ is a contraction. Therefore, by the Banach contraction principle, $Q_s$ has a unique fixed point $x_s \in C$ such that
$$x_s = s\gamma f(x_s) + (I - s\mu A)T_s \Im^M x_s.$$
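The fixed-point argument above rests on the Banach contraction principle; the following minimal sketch, with an arbitrary illustrative contraction, shows how Picard iteration locates the unique fixed point.

```python
import math

# Picard iteration for a contraction Q: by the Banach contraction principle,
# Q has a unique fixed point and x_{n+1} = Q(x_n) converges to it.
Q = lambda x: 0.5 * math.cos(x)      # |Q'(x)| <= 0.5 < 1: Q is a contraction
x = 10.0                             # arbitrary starting point
for _ in range(60):
    x = Q(x)
print(abs(Q(x) - x) < 1e-12)  # True: x is (numerically) the unique fixed point
```

The error shrinks at least geometrically with ratio $1/2$ per step, mirroring the contractive coefficient $1 - s(\tau - \gamma l)$ in the argument above.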

For simplicity, we will write $x_s$ for the fixed point of $Q_s$ provided no confusion occurs. Next we prove that the sequence $\{x_s\}$ converges strongly, as $s \to 0$, to a point $q$ which solves the variational inequality
$$\langle (\mu A - \gamma f)q, q - p\rangle \le 0, \quad \forall p \in \Omega, \tag{27}$$
where $\Omega$ denotes the common solution set. Equivalently, $q = P_{\Omega}(I - \mu A + \gamma f)q$.

Theorem 12. *Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $F_k$, $k \in \{1, 2, \dots, M\}$, be bifunctions from $C \times C$ to $\mathbb{R}$ satisfying (A1)–(A4). Let $g$ be a real-valued convex function such that $\nabla g$ is $\tfrac{1}{L}$-ism with $L > 0$. Assume that the set $\Omega = \bigl(\bigcap_{k=1}^{M} \mathrm{EP}(F_k)\bigr) \cap U$ is nonempty, where $U$ is the solution set of the minimization problem (3). Let $f$ be an $l$-Lipschitzian mapping with $l > 0$ and $A$ a strongly positive bounded linear operator with coefficient $\bar{\gamma}$, with $0 < \gamma < \bar{\gamma}/l$. Let $\{x_s\}$ and $\{u_s\}$ be sequences generated by the following implicit algorithm:
$$u_s = T_{r_M}^{F_M} T_{r_{M-1}}^{F_{M-1}} \cdots T_{r_1}^{F_1} x_s, \qquad x_s = s\gamma f(x_s) + (I - s\mu A)T_s u_s, \tag{28}$$
where $s \in (0, 1)$, $r_k > 0$, and $T_s$ is the nonexpansive mapping defined by $P_C(I - \lambda_s \nabla g) = \frac{2 - \lambda_s L}{4}I + \frac{2 + \lambda_s L}{4}T_s$; if $s$ and $\lambda_s$ satisfy the following conditions:*(i)*$s \in (0, 1)$, $\lambda_s \in (0, 2/L)$;*(ii)*$\lambda_s \to \lambda \in (0, 2/L)$ as $s \to 0$,* *then, as $s \to 0$, the sequence $\{x_s\}$ converges strongly to a point $q \in \Omega$, which solves the variational inequality (27).*

* Proof. *The proof is divided into several steps.*Step 1*. First, we show that the sequence is bounded.

First, since , we can assume that . By Lemma 3, we have .

Take any point $p$ in the common solution set. Since, for each $k$, the resolvent $T_{r_k}^{F_k}$ is nonexpansive and leaves $p$ fixed, we have,
for all $s$,

Thus, by (28) and Lemma 3, we derive that
It follows that .

Hence, the sequence is bounded, and so are the auxiliary sequences appearing in the scheme. It follows from the Lipschitz continuity of $\nabla g$, $f$, and $A$ that their image sequences are also bounded. From the nonexpansivity of the remaining mappings, the composite terms are bounded as well.*Step 2*. We show that

Next we will show that

Indeed, for $p$ in the common solution set, it follows from the firm nonexpansivity of each resolvent that, for each $k$, we have
Thus we get
which implies that for each ,
Thus, from Lemma 1 and (35), we get
It follows that
Since (32) holds, we then have
*Step 3*. We show that

Observe that
Since and , it is easy to get (39).

Thus,
We obtain .

Notice that
where . Hence we have

From the boundedness of , and , we conclude that

Since the sequence is bounded, there exists a subsequence $\{x_{s_n}\}$ which converges weakly to some point $q$.*Step 4*. We show that $q$ belongs to the common solution set.

Since is closed and convex, is weakly closed. So we have . By Lemma 11 and (44), we have .

Next we will show that .

Indeed, by Lemma 9, we have that for each ,
From (A2), we get
Hence,
From (32), we obtain that as for each (especially, ). Together with (32), condition (ii), and (A4) we have, for each , that

For any $t \in (0, 1]$ and any $y \in C$, let $y_t = ty + (1 - t)q$. Since $y \in C$ and $q \in C$, we obtain $y_t \in C$, and hence $F_k(y_t, y_t) = 0$. So, we have
Dividing by $t$, we get, for each $k$, that
Letting $t \to 0$, from (A3) we get
$$F_k(q, y) \ge 0$$
for all $y \in C$ and for each $k$; that is, $q \in \mathrm{EP}(F_k)$. Hence $q \in \bigcap_{k=1}^{M} \mathrm{EP}(F_k)$.*Step 5*. We show that $x_s \to q$, where
Hence, we obtain
It follows that
This implies that
In particular,

It follows from (56) that $x_s \to q$ as $s \to 0$. Next, we show that $q$ solves the variational inequality (27).

By the iterative algorithm (28), we have
Therefore we have
that is,
Due to the nonexpansivity of the mappings involved, the complement $I - T$ of a nonexpansive mapping $T$ is monotone; that is, $\langle (I - T)x - (I - T)y, x - y\rangle \ge 0$ for all $x, y \in H$. Hence, for any point $p$ in the common solution set,

Now, replacing $s$ in (60) with $s_n$ and letting $n \to \infty$, we obtain
that is, $q$ is a solution of the variational inequality (27).

Further, by the uniqueness of the solution of the variational inequality (27), we conclude that $x_s \to q$ as $s \to 0$. We rewrite (27) as
This is equivalent to the fixed point equation

Theorem 13. *Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $F_k$, $k \in \{1, 2, \dots, M\}$, be bifunctions from $C \times C$ to $\mathbb{R}$ satisfying conditions (A1)–(A4). Let $g$ be a real-valued convex function such that $\nabla g$ is a $\tfrac{1}{L}$-ism mapping with $L > 0$. Assume that the set $\Omega = \bigl(\bigcap_{k=1}^{M} \mathrm{EP}(F_k)\bigr) \cap U$ is nonempty, where $U$ is the solution set of the minimization problem (3). Let $f$ be an $l$-Lipschitzian mapping with $l > 0$, and let $A$ be a strongly positive bounded linear operator with coefficient $\bar{\gamma}$, with $0 < \gamma < \bar{\gamma}/l$. Given $x_0 \in C$, let $\{x_n\}$ and $\{u_n\}$ be sequences generated by the following algorithm:
$$u_n = T_{r_M}^{F_M} T_{r_{M-1}}^{F_{M-1}} \cdots T_{r_1}^{F_1} x_n, \qquad x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n \mu A)T_n u_n, \quad n \ge 0, \tag{64}$$
where $\alpha_n \in (0, 1)$, $\lambda_n \in (0, 2/L)$, and $T_n$ is the nonexpansive mapping defined by $P_C(I - \lambda_n \nabla g) = \frac{2 - \lambda_n L}{4}I + \frac{2 + \lambda_n L}{4}T_n$; if $\{\alpha_n\}$, $\{\lambda_n\}$, and $\{r_k\}$ satisfy the following conditions:*(i)*$\lim_{n \to \infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;*(ii)*$\sum_{n=0}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$ and $\sum_{n=0}^{\infty}|\lambda_{n+1} - \lambda_n| < \infty$;*(iii)*$r_k > 0$ for $k \in \{1, 2, \dots, M\}$ and $\lambda_n \to \lambda \in (0, 2/L)$,* *then the sequence $\{x_n\}$ converges strongly to a point $q \in \Omega$, which solves the variational inequality (27).*
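To illustrate how an explicit composite scheme of this type behaves, here is a hedged one-dimensional sketch. The objective, bifunction, mappings, and all parameters below are illustrative assumptions chosen so that the common solution is $0$; the update only mirrors the scheme's structure (equilibrium resolvent step, gradient projection step, general iterative combination) schematically.

```python
import numpy as np

# Illustrative setup (all choices assumed, not from the paper):
#   g(x) = x^2/2 on C = [-1, 1]   -> grad g(x) = x, minimizer set U = {0}
#   F_1(z, y) = z*(y - z)         -> EP(F_1) = {0}; resolvent T_r x = x/(1 + r)
#   f(x) = 0.5*x (0.5-Lipschitzian), A = I, mu = 1, gamma = 1 < gamma_bar/l = 2
P = lambda v: float(np.clip(v, -1.0, 1.0))   # projection onto C
r, lam, gamma, mu = 1.0, 0.5, 1.0, 1.0

x = 0.9                                      # initial guess x_0 in C
for n in range(1, 60):
    alpha = 1.0 / (n + 1)                    # alpha_n -> 0, sum alpha_n = inf
    u = x / (1 + r)                          # equilibrium (resolvent) step
    y = P(u - lam * u)                       # gradient projection step for g
    x = alpha * gamma * 0.5 * x + (1 - alpha * mu) * y
print(abs(x) < 1e-6)  # True: the iterates approach the common solution 0
```

Here every update multiplies the error by at most $0.25(1 + \alpha_n) \le 0.375$, so the iterates converge rapidly to the unique common solution of the equilibrium problem and the minimization problem.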

* Proof. *The proof is divided into several steps.*Step 1*. First, we show that the sequence is bounded.

Take any point $p$ in the common solution set; we have
Thus, by (64), we derive that
By induction, we obtain that $\{x_n\}$ is bounded, and so is $\{u_n\}$. It follows from the Lipschitz continuity of $\nabla g$, $f$, and $A$ that the corresponding image sequences are also bounded. From the nonexpansivity of the composed mapping, the remaining terms are bounded as well.*Step 2.* We show that

By (68), we have
Next we estimate .

Observe that
where .

Substituting (69) into (68), we get
for some appropriate positive constant such that

Observe that
By Lemma 10, we get

Combining (70) and (73), we have
By Lemma 2 and conditions (i)–(iii), it follows that (67) holds. Further, from (73), we have
*Step 3.* We show that

For any point $p$ in the common solution set, arguing as in the proof of Theorem 12, we have
Then from (64) and (77), we derive that
From and (67), we have
Further we have
Next,
It follows from (67) and (80) that (76) holds.*Step 4.* We show that
where $q$ is the unique solution of the variational inequality (27). Indeed, take a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that

Since the sequence is bounded, there exists a subsequence which converges weakly to some point $w$. Without loss of generality, we can assume $x_{n_j} \rightharpoonup w$. By the same argument as in the proof of Theorem 12, we have that $w$ lies in the common solution set. It then follows that
*Step 5.* We show that
Consider
It follows from (67) and (82) that