
Abstract and Applied Analysis

Volume 2014 (2014), Article ID 587865, 26 pages

http://dx.doi.org/10.1155/2014/587865

## Solving Generalized Mixed Equilibria, Variational Inequalities, and Constrained Convex Minimization

^{1}Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia

^{2}Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung 807, Taiwan

Received 14 October 2013; Accepted 18 November 2013; Published 8 January 2014

Academic Editor: Chi-Ming Chen

Copyright © 2014 A. E. Al-Mazrooei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We propose implicit and explicit iterative algorithms for finding a common element of the set of solutions of the minimization problem for a convex and continuously Fréchet differentiable functional, the set of solutions of a finite family of generalized mixed equilibrium problems, and the set of solutions of a finite family of variational inequalities for inverse-strongly monotone mappings in a real Hilbert space. We prove that, under very mild conditions, the sequences generated by the proposed algorithms converge strongly to a common element of the three sets, which is the unique solution of a variational inequality defined over the intersection of the three sets.

#### 1. Introduction and Problem Formulation

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$, let $C$ be a nonempty closed convex subset of $H$, and let $P_C$ be the metric projection of $H$ onto $C$. Let $T : C \to C$ be a self-mapping on $C$. We denote by $\operatorname{Fix}(T)$ the set of fixed points of $T$ and by $\mathbf{R}$ the set of all real numbers. Recall that a mapping $A : C \to H$ is said to be $L$-Lipschitz continuous if there exists a constant $L \ge 0$ such that
$$\|Ax - Ay\| \le L\|x - y\|, \quad \forall x, y \in C.$$
In particular, if $L = 1$, then $A$ is called a nonexpansive mapping [1], and if $L \in [0, 1)$, then $A$ is called a contraction.

Recall that a mapping $A : C \to H$ is called
(i) monotone if $\langle Ax - Ay, x - y \rangle \ge 0$ for all $x, y \in C$;
(ii) $\eta$-strongly monotone if there exists a constant $\eta > 0$ such that $\langle Ax - Ay, x - y \rangle \ge \eta\|x - y\|^2$ for all $x, y \in C$;
(iii) $\alpha$-inverse strongly monotone if there exists a constant $\alpha > 0$ such that $\langle Ax - Ay, x - y \rangle \ge \alpha\|Ax - Ay\|^2$ for all $x, y \in C$.

It is obvious that if $A$ is $\alpha$-inverse strongly monotone, then $A$ is monotone and $\frac{1}{\alpha}$-Lipschitz continuous.

Let $A : C \to H$ be a nonlinear mapping on $C$. We consider the following variational inequality problem (VIP): find a point $x \in C$ such that
$$\langle Ax, y - x \rangle \ge 0, \quad \forall y \in C. \tag{5}$$
The solution set of VIP (5) is denoted by $\operatorname{VI}(C, A)$.

The VIP (5) was first discussed by Lions [2] and is now well known. The VIP (5) has many potential applications in computational mathematics, mathematical physics, operations research, mathematical economics, optimization theory, and so on; see, for example, [3–5] and the references therein.

In 1976, Korpelevich [6] proposed an iterative algorithm for solving the VIP (5) in Euclidean space $\mathbf{R}^n$:
$$y_n = P_C(x_n - \tau Ax_n), \quad x_{n+1} = P_C(x_n - \tau Ay_n), \quad \forall n \ge 0,$$
with $\tau > 0$ a given number, which is known as the extragradient method. The literature on the VIP is vast, and Korpelevich's extragradient method has received great attention from many researchers. See, for example, [7–16] and the references therein. In particular, motivated by the idea of Korpelevich's extragradient method [6], Nadezhkina and Takahashi [17] introduced an extragradient iterative scheme:
$$x_0 = x \in C, \quad y_n = P_C(x_n - \lambda_n Ax_n), \quad x_{n+1} = \alpha_n x_n + (1 - \alpha_n)SP_C(x_n - \lambda_n Ay_n), \quad \forall n \ge 0,$$
where $A : C \to H$ is a monotone, $L$-Lipschitz continuous mapping, $S : C \to C$ is a nonexpansive mapping, $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1/L)$, and $\{\alpha_n\} \subset [c, d]$ for some $c, d \in (0, 1)$. They proved the weak convergence of $\{x_n\}$ to an element of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$.
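As an illustration only (the operator, constraint set, and parameters below are hypothetical choices, not taken from the paper), the extragradient method can be sketched numerically: take an affine monotone operator $Ax = Mx + q$ whose matrix has a skew part (so $A$ is not a gradient), the box $C = [0,1]^2$, and a step size $\tau \in (0, 1/L)$.

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection P_C onto the box C = [lo, hi]^n.
    return np.clip(x, lo, hi)

# Illustrative operator A(x) = M x + q: the symmetric part of M is positive
# definite, so A is monotone, and A is Lipschitz with constant L = ||M||_2.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
A = lambda x: M @ x + q
L = np.linalg.norm(M, 2)
tau = 0.9 / L                                 # step size tau in (0, 1/L)

x = np.zeros(2)
for _ in range(500):
    y = proj_box(x - tau * A(x), 0.0, 1.0)    # prediction (extragradient) step
    x = proj_box(x - tau * A(y), 0.0, 1.0)    # correction step

# x solves the VIP exactly when x is a fixed point of x -> P_C(x - tau A(x)).
residual = np.linalg.norm(x - proj_box(x - tau * A(x), 0.0, 1.0))
print(x, residual)
```

For this particular data the unconstrained solution of $Mx + q = 0$ already lies inside the box, so the iterates settle at $M^{-1}(1,1)^{\top} = (0.2, 0.6)$.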

Let $\varphi : C \to \mathbf{R}$ be a real-valued function, let $A : C \to H$ be a nonlinear mapping, and let $\Theta : C \times C \to \mathbf{R}$ be a bifunction. In 2008, Peng and Yao [18] introduced the following generalized mixed equilibrium problem (GMEP) of finding $x \in C$ such that
$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C. \tag{8}$$
We denote the set of solutions of GMEP (8) by $\operatorname{GMEP}(\Theta, \varphi, A)$. The GMEP (8) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, and Nash equilibrium problems in noncooperative games. The GMEP has been further considered and studied; see, for example, [19, 20]. Some special cases of GMEP (8) are as follows.

If $\varphi = 0$, then GMEP (8) reduces to the generalized equilibrium problem (GEP), which is to find $x \in C$ such that
$$\Theta(x, y) + \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C.$$
It was introduced and studied by S. Takahashi and W. Takahashi [21]. The set of solutions of the GEP is denoted by $\operatorname{GEP}(\Theta, A)$.

If $A = 0$, then GMEP (8) reduces to the mixed equilibrium problem (MEP), which is to find $x \in C$ such that
$$\Theta(x, y) + \varphi(y) - \varphi(x) \ge 0, \quad \forall y \in C.$$
It was considered and studied in [22]. The set of solutions of the MEP is denoted by $\operatorname{MEP}(\Theta, \varphi)$.

If $\varphi = 0$ and $A = 0$, then GMEP (8) reduces to the equilibrium problem (EP), which is to find $x \in C$ such that
$$\Theta(x, y) \ge 0, \quad \forall y \in C.$$
It was considered and studied in [23]. The set of solutions of the EP is denoted by $\operatorname{EP}(\Theta)$.

Throughout this paper, it is assumed as in [18] that $\Theta : C \times C \to \mathbf{R}$ is a bifunction satisfying conditions (A1)–(A4) and $\varphi : C \to \mathbf{R} \cup \{+\infty\}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where:
(A1) $\Theta(x, x) = 0$ for all $x \in C$;
(A2) $\Theta$ is monotone; that is, $\Theta(x, y) + \Theta(y, x) \le 0$ for any $x, y \in C$;
(A3) $\Theta$ is upper hemicontinuous; that is, for each $x, y, z \in C$,
$$\limsup_{t \to 0^+} \Theta(tz + (1 - t)x, y) \le \Theta(x, y);$$
(A4) $\Theta(x, \cdot)$ is convex and lower semicontinuous for each $x \in C$;
(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$,
$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r}\langle y_x - z, z - x \rangle < 0;$$
(B2) $C$ is a bounded set.

Next we list some known results for the MEP as follows.

Proposition 1 (see [22]). Assume that $\Theta : C \times C \to \mathbf{R}$ satisfies (A1)–(A4) and let $\varphi : C \to \mathbf{R} \cup \{+\infty\}$ be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r > 0$ and $x \in H$, define a mapping $T_r^{(\Theta,\varphi)} : H \to C$ as follows:
$$T_r^{(\Theta,\varphi)}(x) := \left\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0, \ \forall y \in C \right\}$$
for all $x \in H$. Then the following conditions hold:
(i) for each $x \in H$, $T_r^{(\Theta,\varphi)}(x)$ is nonempty and single-valued;
(ii) $T_r^{(\Theta,\varphi)}$ is firmly nonexpansive; that is, for any $x, y \in H$,
$$\|T_r^{(\Theta,\varphi)}x - T_r^{(\Theta,\varphi)}y\|^2 \le \langle T_r^{(\Theta,\varphi)}x - T_r^{(\Theta,\varphi)}y, x - y \rangle;$$
(iii) $\operatorname{Fix}(T_r^{(\Theta,\varphi)}) = \operatorname{MEP}(\Theta, \varphi)$;
(iv) $\operatorname{MEP}(\Theta, \varphi)$ is closed and convex;
(v) $\|T_s^{(\Theta,\varphi)}x - T_t^{(\Theta,\varphi)}x\|^2 \le \frac{s - t}{s}\langle T_s^{(\Theta,\varphi)}x - T_t^{(\Theta,\varphi)}x, T_s^{(\Theta,\varphi)}x - x \rangle$, for all $s, t > 0$ and $x \in H$.

Let $\lambda_{n,1}, \dots, \lambda_{n,N} \in (0, 1]$, $n \ge 1$. Given the nonexpansive mappings $T_1, T_2, \dots, T_N$ on $H$, for each $n \ge 1$, the mappings $U_{n,1}, \dots, U_{n,N}$ are defined by
$$\begin{aligned} U_{n,1} &= \lambda_{n,1}T_1 + (1 - \lambda_{n,1})I, \\ U_{n,k} &= \lambda_{n,k}T_k U_{n,k-1} + (1 - \lambda_{n,k})I, \quad k = 2, \dots, N, \\ W_n &:= U_{n,N}. \end{aligned}$$

The $W_n$ is called the $W$-mapping generated by $T_1, \dots, T_N$ and $\lambda_{n,1}, \dots, \lambda_{n,N}$. Note that the nonexpansivity of each $T_i$ implies the nonexpansivity of $W_n$.
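A minimal numerical sketch of the $W$-mapping construction (the mappings and weights below are illustrative choices, not from the paper): with $N = 2$ and $T_1$, $T_2$ projections onto overlapping intervals of $\mathbf{R}$, iterating $W$ drives the iterates into the common fixed-point set.

```python
import numpy as np

# Two nonexpansive self-mappings of R: projections onto [0, 2] and [1, 3].
# Their common fixed-point set is the intersection [1, 2].
T1 = lambda x: float(np.clip(x, 0.0, 2.0))
T2 = lambda x: float(np.clip(x, 1.0, 3.0))

def W(x, lam1=0.5, lam2=0.5):
    u1 = lam1 * T1(x) + (1 - lam1) * x        # U_1 = lam_1 T_1 + (1 - lam_1) I
    return lam2 * T2(u1) + (1 - lam2) * x     # W = U_2 = lam_2 T_2 U_1 + (1 - lam_2) I

x = 5.0
for _ in range(200):
    x = W(x)

print(x)   # lands in the common fixed-point set [1, 2]
```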

In 2012, combining the hybrid steepest-descent method in [24] and the hybrid viscosity approximation method in [25], Ceng et al. [20] proposed and analyzed the following hybrid iterative method for finding a common element of the set of solutions of GMEP (8) and the set of fixed points of a finite family of nonexpansive mappings $\{T_i\}_{i=1}^N$.

Theorem CGY (see [20, Theorem 3.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\Theta : C \times C \to \mathbf{R}$ be a bifunction satisfying assumptions (A1)–(A4) and let $\varphi : C \to \mathbf{R} \cup \{+\infty\}$ be a lower semicontinuous and convex function with restriction (B1) or (B2). Let the mapping $A : C \to H$ be $\zeta$-inverse strongly monotone, and let $\{T_i\}_{i=1}^N$ be a finite family of nonexpansive mappings on $H$ such that $\Omega := \bigcap_{i=1}^N \operatorname{Fix}(T_i) \cap \operatorname{GMEP}(\Theta, \varphi, A) \ne \emptyset$. Let $F : H \to H$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa, \eta > 0$ and $V : H \to H$ an $l$-Lipschitzian mapping with constant $l \ge 0$. Let $0 < \mu < 2\eta/\kappa^2$ and $0 \le \gamma l < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Suppose $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$, $\{r_n\}$ is a sequence in $(0, 2\zeta)$, and $\{\lambda_{n,i}\}$ is a sequence in $[a, b]$ with $0 < a \le b < 1$. For every $n \ge 1$, let $W_n$ be the $W$-mapping generated by $T_1, \dots, T_N$ and $\lambda_{n,1}, \dots, \lambda_{n,N}$. Given $x_1 \in H$ arbitrarily, suppose the sequences $\{x_n\}$ and $\{u_n\}$ are generated iteratively by
$$\begin{cases} \Theta(u_n, y) + \varphi(y) - \varphi(u_n) + \langle Ax_n, y - u_n \rangle + \dfrac{1}{r_n}\langle y - u_n, u_n - x_n \rangle \ge 0, & \forall y \in C, \\ x_{n+1} = \alpha_n\gamma Vx_n + \beta_n x_n + ((1 - \beta_n)I - \alpha_n\mu F)W_n u_n, & \forall n \ge 1, \end{cases}$$
where the sequences $\{\alpha_n\}$, $\{\beta_n\}$, and $\{r_n\}$ and the finite family of sequences $\{\lambda_{n,i}\}$ satisfy the following conditions:
(i) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^\infty \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$;
(iii) $\liminf_{n\to\infty}r_n > 0$ and $\lim_{n\to\infty}|r_{n+1} - r_n| = 0$;
(iv) $\lim_{n\to\infty}|\lambda_{n,i} - \lambda_{n-1,i}| = 0$, for all $i \in \{1, \dots, N\}$.
Then both $\{x_n\}$ and $\{u_n\}$ converge strongly to $x^* \in \Omega$, where $x^*$ is a unique solution of the variational inequality problem (VIP):
$$\langle (\mu F - \gamma V)x^*, x^* - x \rangle \le 0, \quad \forall x \in \Omega.$$

Let $f : C \to \mathbf{R}$ be a convex and continuously Fréchet differentiable functional. Consider the convex minimization problem (CMP) of minimizing $f$ over the constraint set $C$:
$$\min_{x \in C} f(x) \tag{19}$$
(assuming the existence of minimizers). We denote by $\Gamma$ the set of minimizers of CMP (19). It is well known that the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}$ determined by the gradient $\nabla f$ and the metric projection $P_C$:
$$x_{n+1} := P_C(x_n - \lambda\nabla f(x_n)), \quad \forall n \ge 0, \tag{20}$$
or, more generally,
$$x_{n+1} := P_C(x_n - \lambda_n\nabla f(x_n)), \quad \forall n \ge 0, \tag{21}$$
where, in both (20) and (21), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\lambda$ or $\lambda_n$ are positive real numbers. The convergence of algorithms (20) and (21) depends on the behavior of the gradient $\nabla f$. As a matter of fact, it is known that, if $\nabla f$ is $\eta$-strongly monotone and $L$-Lipschitz continuous, then, for $0 < \lambda < 2\eta/L^2$, the operator
$$P_C(I - \lambda\nabla f)$$
is a contraction. Hence, the sequence $\{x_n\}$ defined by the GPA (20) converges in norm to the unique solution of CMP (19). More generally, if the sequence $\{\lambda_n\}$ is chosen to satisfy the property
$$0 < \liminf_{n \to \infty}\lambda_n \le \limsup_{n \to \infty}\lambda_n < \frac{2\eta}{L^2},$$
then the sequence $\{x_n\}$ defined by the GPA (21) converges in norm to the unique minimizer of CMP (19). If the gradient $\nabla f$ is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only be weakly convergent if $H$ is infinite dimensional (a counterexample is given in Section 5 of Xu [26]).
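The contraction claim for the GPA can be illustrated with a small quadratic program (the matrix, box, and step size are assumptions made only for this sketch): here $\nabla f(x) = Qx - b$ is $\eta$-strongly monotone with $\eta = \lambda_{\min}(Q)$ and $L$-Lipschitz with $L = \lambda_{\max}(Q)$, and any fixed $\lambda \in (0, 2\eta/L^2)$ makes $P_C(I - \lambda\nabla f)$ a contraction.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T Q x - b^T x over the box C = [0, 1]^2.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([4.0, 1.0])
grad_f = lambda x: Q @ x - b

eigs = np.linalg.eigvalsh(Q)
eta, L = eigs[0], eigs[-1]
lam = eta / L**2                       # fixed step inside (0, 2*eta/L^2)

proj_C = lambda x: np.clip(x, 0.0, 1.0)

x = np.zeros(2)
for _ in range(1000):
    x = proj_C(x - lam * grad_f(x))    # one GPA step, as in scheme (20)

print(x)
```

For this data the unconstrained minimizer $(1.4, -0.2)$ lies outside the box, and the iterates converge in norm to the constrained minimizer $(1, 0)$, consistent with the fixed-point characterization of the contraction $P_C(I - \lambda\nabla f)$.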

Since the Lipschitz continuity of the gradient $\nabla f$ implies that it is actually $\frac{1}{L}$-inverse strongly monotone (ism) [27], its complement $I - \lambda\nabla f$ can be an averaged mapping (i.e., it can be expressed as a proper convex combination of the identity mapping and a nonexpansive mapping). Consequently, the GPA can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping. This shows that averaged mappings play an important role in the GPA. Recently, Xu [26] used averaged mappings to study the convergence analysis of the GPA, which is hence an operator-oriented approach.

In 2011, combining the hybrid steepest-descent method in [24], the viscosity approximation method, and the averaged mapping approach to the GPA in [26], Ceng et al. [28] introduced and analyzed the following implicit and explicit iterative algorithms:
$$x_s = P_C[s\gamma Vx_s + (I - s\mu F)P_C(x_s - \lambda_s\nabla f(x_s))], \tag{24}$$
$$x_{n+1} = P_C[\alpha_n\gamma Vx_n + (I - \alpha_n\mu F)P_C(x_n - \lambda_n\nabla f(x_n))], \quad \forall n \ge 0, \tag{25}$$
where $V$ is an $l$-Lipschitzian mapping with constant $l \ge 0$ and $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa, \eta > 0$. Assume that $0 < \mu < 2\eta/\kappa^2$, $\lambda_s \in (0, 2/L)$ for each $s \in (0, 1)$, $\lambda_n \in (0, 2/L)$ for each $n \ge 0$, with $0 \le \gamma l < \tau$ and $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. The authors proved that the net $\{x_s\}$ defined by (24) converges strongly, as $s \to 0$, to some $x^* \in \Gamma$, which is a unique solution of the variational inequality problem (VIP):
$$\langle (\mu F - \gamma V)x^*, x^* - x \rangle \le 0, \quad \forall x \in \Gamma. \tag{26}$$
Furthermore, utilizing the control conditions (i) $\lim_{n\to\infty}\alpha_n = 0$, (ii) $\sum_{n=0}^\infty \alpha_n = \infty$, and (iii) either $\sum_{n=0}^\infty |\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty}\alpha_n/\alpha_{n+1} = 1$, the authors also proved that the sequence $\{x_n\}$ generated by (25) converges strongly to some $x^* \in \Gamma$, which is a unique solution of the VIP (26).

Motivated and inspired by the above facts, in this paper we introduce implicit and explicit iterative algorithms for finding a common element of the set of solutions of the CMP (19) for a convex functional with $L$-Lipschitz continuous gradient $\nabla f$, the set of solutions of a finite family of GMEPs, and the set of solutions of a finite family of VIPs for inverse-strongly monotone mappings in a real Hilbert space. Under very mild control conditions, we prove that the sequences generated by the proposed algorithms converge strongly to a common element of the three sets, which is the unique solution of a variational inequality defined over the intersection of the three sets. Our iterative algorithms are based on Korpelevich's extragradient method, the hybrid steepest-descent method in [24], the viscosity approximation method, and the averaged mapping approach to the GPA in [26]. The results obtained in this paper improve and extend the corresponding results announced by many others.

#### 2. Preliminaries

Throughout this paper, we assume that $H$ is a real Hilbert space with inner product and norm denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$ and $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$. Moreover, we use $\omega_w(x_n)$ to denote the weak $\omega$-limit set of the sequence $\{x_n\}$; that is,
$$\omega_w(x_n) := \{x \in H : x_{n_i} \rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\}\}.$$

The metric projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
$$\|x - P_C x\| = \inf_{y \in C}\|x - y\| =: d(x, C).$$

Some important properties of projections are listed in the following proposition.

Proposition 2. For given $x \in H$ and $z \in C$:
(i) $z = P_C x \Leftrightarrow \langle x - z, y - z \rangle \le 0$, for all $y \in C$;
(ii) $z = P_C x \Leftrightarrow \|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$, for all $y \in C$;
(iii) $\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2$, for all $y \in H$.

Consequently, $P_C$ is nonexpansive and monotone. If $A$ is an $\alpha$-inverse strongly monotone mapping of $C$ into $H$, then it is obvious that $A$ is $\frac{1}{\alpha}$-Lipschitz continuous. We also have that, for all $u, v \in C$ and $\lambda > 0$,
$$\|(I - \lambda A)u - (I - \lambda A)v\|^2 = \|u - v\|^2 - 2\lambda\langle Au - Av, u - v \rangle + \lambda^2\|Au - Av\|^2 \le \|u - v\|^2 + \lambda(\lambda - 2\alpha)\|Au - Av\|^2.$$
So, if $\lambda \le 2\alpha$, then $I - \lambda A$ is a nonexpansive mapping from $C$ to $H$.
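Both facts are easy to probe numerically (the set $C$, here the closed unit ball, and the sample distribution are illustrative assumptions): Proposition 2(i) characterizes $z = P_C x$ by $\langle x - z, y - z \rangle \le 0$ for all $y \in C$, and, since a projection is $1$-inverse strongly monotone, $I - \lambda P_C$ is nonexpansive for $\lambda \le 2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x):
    # Metric projection onto the closed unit ball of R^3.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

# Proposition 2(i): z = P_C x satisfies <x - z, y - z> <= 0 for all y in C.
x = rng.normal(size=3) * 5.0
z = proj_ball(x)
prop_2i_ok = all(
    np.dot(x - z, proj_ball(rng.normal(size=3) * 5.0) - z) <= 1e-12
    for _ in range(1000)
)

# A projection is 1-inverse strongly monotone (alpha = 1), so I - lam*P_C
# is nonexpansive whenever lam <= 2*alpha = 2.
lam = 1.5
nonexp_ok = True
for _ in range(1000):
    u, v = rng.normal(size=3) * 5.0, rng.normal(size=3) * 5.0
    lhs = np.linalg.norm((u - lam * proj_ball(u)) - (v - lam * proj_ball(v)))
    nonexp_ok = nonexp_ok and lhs <= np.linalg.norm(u - v) + 1e-12

print(prop_2i_ok, nonexp_ok)
```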

Definition 3. A mapping $T : H \to H$ is said to be
(a) nonexpansive [1] if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H;$$
(b) firmly nonexpansive if $2T - I$ is nonexpansive, or, equivalently, if $T$ is $1$-inverse strongly monotone ($1$-ism):
$$\langle x - y, Tx - Ty \rangle \ge \|Tx - Ty\|^2, \quad \forall x, y \in H;$$
alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as
$$T = \tfrac{1}{2}(I + S),$$
where $S : H \to H$ is nonexpansive; projections are firmly nonexpansive.

It can be easily seen that if $T$ is nonexpansive, then $I - T$ is monotone. It is also easy to see that a projection $P_C$ is $1$-ism. Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely in solving practical problems in various fields.

Definition 4. A mapping $T : H \to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping; that is,
$$T \equiv (1 - \alpha)I + \alpha S,$$
where $\alpha \in (0, 1)$ and $S : H \to H$ is nonexpansive. More precisely, when the last equality holds, we say that $T$ is $\alpha$-averaged. Thus firmly nonexpansive mappings (in particular, projections) are $\frac{1}{2}$-averaged mappings.
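The practical value of averagedness can be seen in a two-line experiment (the rotation operator is an illustrative choice, not from the paper): a rotation $S$ by $\pi/2$ is nonexpansive with $\operatorname{Fix}(S) = \{0\}$, yet its Picard iterates only cycle, while the iterates of the $\frac{1}{2}$-averaged map $T = \frac{1}{2}(I + S)$ converge to the fixed point.

```python
import numpy as np

# S: rotation by pi/2 about the origin -- nonexpansive, Fix(S) = {0},
# but Picard iterates of S just cycle on the unit circle.
theta = np.pi / 2
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

alpha = 0.5
T = (1 - alpha) * np.eye(2) + alpha * S   # T is alpha-averaged

x = np.array([1.0, 0.0])   # Picard iterates of S
y = np.array([1.0, 0.0])   # iterates of the averaged map T
for _ in range(200):
    x = S @ x
    y = T @ y

print(np.linalg.norm(x), np.linalg.norm(y))
```

The norm of the $S$-iterates stays at $1$, while the $T$-iterates shrink to the fixed point $0$ (the eigenvalues of $T$ have modulus $\frac{1}{\sqrt{2}} < 1$).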

Proposition 5 (see [29]). Let $T : H \to H$ be a given mapping.
(i) $T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism.
(ii) If $T$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism.
(iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > 1/2$. Indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.

Proposition 6 (see [29]). Let $S, T, V : H \to H$ be given operators.
(i) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.
(ii) $T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.
(iii) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged.
(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.
(v) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^N \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N).$$

The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping $T$; that is, $\operatorname{Fix}(T) = \{x \in H : Tx = x\}$.

We need some facts and tools in a real Hilbert space, which are listed as lemmas below.

Lemma 7. Let $X$ be a real inner product space. Then there holds the following inequality:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle, \quad \forall x, y \in X.$$

Lemma 8. Let $A : C \to H$ be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 2(i)) implies
$$u \in \operatorname{VI}(C, A) \Leftrightarrow u = P_C(u - \lambda Au), \quad \text{for some } \lambda > 0.$$

Lemma 9 (see [30, Demiclosedness principle]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T$ be a nonexpansive self-mapping on $C$ with $\operatorname{Fix}(T) \ne \emptyset$. Then $I - T$ is demiclosed. That is, whenever $\{x_n\}$ is a sequence in $C$ weakly converging to some $x \in C$ and the sequence $\{(I - T)x_n\}$ strongly converges to some $y$, it follows that $(I - T)x = y$. Here $I$ is the identity operator of $H$.

Lemma 10 (see [31]). Let $\{a_n\}$ be a sequence of nonnegative numbers satisfying the conditions
$$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n\beta_n, \quad \forall n \ge 1,$$
where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences of real numbers such that
(i) $\{\alpha_n\} \subset [0, 1]$ and $\sum_{n=1}^\infty \alpha_n = \infty$, or, equivalently,
$$\prod_{n=1}^\infty (1 - \alpha_n) = 0;$$
(ii) $\limsup_{n \to \infty}\beta_n \le 0$, or $\sum_{n=1}^\infty |\alpha_n\beta_n| < \infty$. Then $\lim_{n \to \infty} a_n = 0$.
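A quick numerical illustration of Lemma 10 (the particular sequences $\alpha_n = \beta_n = 1/(n+1)$ are assumptions chosen so that condition (i) and the first alternative of condition (ii) hold):

```python
# Lemma 10 in action: a_{n+1} = (1 - alpha_n) a_n + alpha_n * beta_n with
# alpha_n = beta_n = 1/(n+1), so sum(alpha_n) diverges and beta_n -> 0.
a = 1.0
for n in range(1, 100001):
    alpha = beta = 1.0 / (n + 1)
    a = (1 - alpha) * a + alpha * beta

print(a)   # tends to 0 as n grows (roughly like log(n)/n for this choice)
```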

Lemma 11 (see [32]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with
$$0 < \liminf_{n \to \infty}\beta_n \le \limsup_{n \to \infty}\beta_n < 1.$$
Suppose that $x_{n+1} = \beta_n x_n + (1 - \beta_n)z_n$ for each $n \ge 0$ and
$$\limsup_{n \to \infty}(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|) \le 0.$$
Then $\lim_{n \to \infty}\|z_n - x_n\| = 0$.

The following lemma can be easily proven and, therefore, we omit the proof.

Lemma 12. Let $V : H \to H$ be an $l$-Lipschitzian mapping with constant $l \ge 0$, and let $F : H \to H$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with positive constants $\kappa, \eta > 0$. Then for $0 \le \gamma l < \mu\eta$,
$$\langle (\mu F - \gamma V)x - (\mu F - \gamma V)y, x - y \rangle \ge (\mu\eta - \gamma l)\|x - y\|^2, \quad \forall x, y \in H.$$
That is, $\mu F - \gamma V$ is strongly monotone with constant $\mu\eta - \gamma l$.

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. We introduce some notations. Let $\lambda$ be a number in $(0, 1)$ and let $\mu > 0$. Associated with a nonexpansive mapping $T : C \to H$, we define the mapping $T^\lambda : C \to H$ by
$$T^\lambda x := Tx - \lambda\mu F(Tx), \quad \forall x \in C,$$
where $F : H \to H$ is an operator such that, for some positive constants $\kappa, \eta > 0$, $F$ is $\kappa$-Lipschitzian and $\eta$-strongly monotone on $H$; that is, $F$ satisfies the following conditions:
$$\|Fx - Fy\| \le \kappa\|x - y\|, \qquad \langle Fx - Fy, x - y \rangle \ge \eta\|x - y\|^2$$
for all $x, y \in H$.

Lemma 13 (see [31, Lemma 3.1]). $T^\lambda$ is a contraction provided $0 < \mu < 2\eta/\kappa^2$; that is,
$$\|T^\lambda x - T^\lambda y\| \le (1 - \lambda\tau)\|x - y\|, \quad \forall x, y \in C,$$
where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$.
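Lemma 13 can be checked numerically for a linear choice of $F$ (the matrix, $\mu$, and $\lambda$ below are illustrative assumptions): with $F = Q$ symmetric positive definite, $\kappa = \lambda_{\max}(Q)$, $\eta = \lambda_{\min}(Q)$, and $T = I$, the sampled Lipschitz ratios of $T^\lambda$ stay below $1 - \lambda\tau$.

```python
import numpy as np

rng = np.random.default_rng(1)

# F = Q (symmetric positive definite) is kappa-Lipschitzian and eta-strongly
# monotone with kappa = lambda_max(Q) and eta = lambda_min(Q); take T = I.
Q = np.array([[1.5, 0.3], [0.3, 1.0]])
eigs = np.linalg.eigvalsh(Q)
eta, kappa = eigs[0], eigs[-1]

mu = 0.5
assert 0 < mu < 2 * eta / kappa**2        # hypothesis of Lemma 13
tau = 1 - np.sqrt(1 - mu * (2 * eta - mu * kappa**2))

lam = 0.7
T_lam = lambda x: x - lam * mu * (Q @ x)  # T^lam = T - lam*mu*F(T x), T = I

# Sample the Lipschitz ratio; it should never exceed 1 - lam*tau.
worst = 0.0
for _ in range(2000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    worst = max(worst, np.linalg.norm(T_lam(x) - T_lam(y)) / np.linalg.norm(x - y))

print(worst, 1 - lam * tau)
```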

Remark 14. (i) Since $F$ is $\kappa$-Lipschitzian and $\eta$-strongly monotone on $H$, we get $0 < \eta \le \kappa$. Hence, whenever $0 < \mu < 2\eta/\kappa^2$, we have
$$0 \le (1 - \mu\kappa)^2 \le 1 - \mu(2\eta - \mu\kappa^2) < 1,$$
which implies
$$0 < 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)} \le 1.$$
So, $\tau \in (0, 1]$.

(ii) In Lemma 13, put $F = \frac{1}{2}I$ and $\mu = 2$. Then we know that $\kappa = \eta = \frac{1}{2}$, $0 < \mu = 2 < \frac{2\eta}{\kappa^2} = 4$, and
$$\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)} = 1 - \sqrt{1 - 2\left(2 \cdot \frac{1}{2} - 2 \cdot \frac{1}{4}\right)} = 1.$$

Finally, recall that a set-valued mapping $T : H \to 2^H$ is called monotone if, for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply $\langle f - g, x - y \rangle \ge 0$. A monotone mapping $T$ is maximal if its graph is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle f - g, x - y \rangle \ge 0$ for all $(y, g) \in \operatorname{graph}(T)$ implies $f \in Tx$. Let $A : C \to H$ be a monotone, $L$-Lipschitz continuous mapping and let $N_C v$ be the normal cone to $C$ at $v \in C$; that is, $N_C v = \{w \in H : \langle v - u, w \rangle \ge 0, \ \forall u \in C\}$. Define
$$\widetilde{T}v = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$
It is known that in this case $\widetilde{T}$ is maximal monotone, and $0 \in \widetilde{T}v$ if and only if $v \in \operatorname{VI}(C, A)$; see [33].

#### 3. Implicit Iterative Algorithm and Its Convergence Criteria

We now state and prove the first main result of this paper.

Theorem 15. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbf{R}$ be a convex functional with $L$-Lipschitz continuous gradient $\nabla f$. Let $M$ and $N$ be two positive integers. Let $\Theta_k$ be a bifunction from $C \times C$ to $\mathbf{R}$ satisfying (A1)–(A4) and let $\varphi_k : C \to \mathbf{R} \cup \{+\infty\}$ be a proper lower semicontinuous and convex function, where $k \in \{1, 2, \dots, M\}$. Let $B_k : H \to H$ and $A_i : C \to H$ be $\mu_k$-inverse strongly monotone and $\eta_i$-inverse strongly monotone, respectively, where $k \in \{1, 2, \dots, M\}$ and $i \in \{1, 2, \dots, N\}$. Let $F : H \to H$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with positive constants $\kappa, \eta > 0$. Let $V : H \to H$ be an $l$-Lipschitzian mapping with constant $l \ge 0$. Let $0 < \mu < 2\eta/\kappa^2$ and $0 \le \gamma l < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Assume that $\Omega := \bigcap_{k=1}^M \operatorname{GMEP}(\Theta_k, \varphi_k, B_k) \cap \bigcap_{i=1}^N \operatorname{VI}(C, A_i) \cap \Gamma \ne \emptyset$ and that either (B1) or (B2) holds. Let $\{x_s\}$ be a net generated by
where $P_C(I - \lambda_s\nabla f) = \frac{2 - \lambda_s L}{4}I + \frac{2 + \lambda_s L}{4}T_s$ (here $T_s$ is nonexpansive and $\lambda_s \in (0, 2/L)$ for each $s \in (0, 1)$). Assume that the following conditions hold:
(i) $\lambda_s \in (0, 2/L)$ for each $s \in (0, 1)$;
(ii) $r_{k,s} \in [e_k, f_k] \subset (0, 2\mu_k)$, for all $k \in \{1, 2, \dots, M\}$;
(iii) $\lambda_{i,s} \in [a_i, b_i] \subset (0, 2\eta_i)$, for all $i \in \{1, 2, \dots, N\}$.

Then $\{x_s\}$ converges strongly as $s \to 0$ to a point $x^* \in \Omega$, which is a unique solution of the VIP:
$$\langle (\mu F - \gamma V)x^*, x^* - x \rangle \le 0, \quad \forall x \in \Omega. \tag{50}$$
Equivalently, $x^* = P_\Omega(I - \mu F + \gamma V)x^*$.

Proof. First of all, let us show that the sequence is well defined. Indeed, since $\nabla f$ is $L$-Lipschitzian, it follows that $\nabla f$ is $\frac{1}{L}$-ism; see [34]. By Proposition 5(ii) we know that, for $\lambda > 0$, $\lambda\nabla f$ is $\frac{1}{\lambda L}$-ism. So by Proposition 5(iii) we deduce that $I - \lambda\nabla f$ is $\frac{\lambda L}{2}$-averaged. Now since the projection $P_C$ is $\frac{1}{2}$-averaged, it is easy to see from Proposition 6(iv) that the composite $P_C(I - \lambda\nabla f)$ is $\frac{2 + \lambda L}{4}$-averaged for each $\lambda \in (0, 2/L)$. Therefore, we can write
$$P_C(I - \lambda\nabla f) = \frac{2 - \lambda L}{4}I + \frac{2 + \lambda L}{4}T_\lambda,$$
where $T_\lambda$ is nonexpansive for each $\lambda \in (0, 2/L)$. It is clear that

Put
for all and ,
for all and , and , where is the identity mapping on . Then we have that and .

Consider the following mapping on defined by
where for each . By Proposition 1(ii) and Lemma 13 we obtain from (29) that, for all ,
Since , is a contraction. Therefore, by the Banach contraction principle, has a unique fixed point , which uniquely solves the fixed point equation
This shows that the sequence is well defined.

Note that and . Hence by Lemma 12 we know that
That is, is strongly monotone for . Moreover, it is clear that is Lipschitz continuous. So the VIP (50) has only one solution. Below we use to denote the unique solution of the VIP (50).

Now, let us show that is bounded. In fact, take arbitrarily. Then from (29) and Proposition 1(ii) we have
Similarly, we have
Combining (59) and (60), we have
Since
where . It is clear that for each . Thus, utilizing Lemma 13 and the nonexpansivity of , we obtain from (61) that
This implies that . Hence is bounded. So, according to (59) and (61) we know that , and are bounded.

Next let us show that , , and as .

Indeed, from (29) it follows that, for all and ,
Thus, utilizing Lemma 7, from (49) and (64) we have
which implies that
Since and , for all and , from we conclude immediately that
for all and .

Furthermore, by Proposition 1(ii) we obtain that for each
which implies that
Also, by Proposition 2(iii), we obtain that for each