
Abstract and Applied Analysis

Volume 2014 (2014), Article ID 132053, 25 pages

http://dx.doi.org/10.1155/2014/132053

## Algorithms of Common Solutions for Generalized Mixed Equilibria, Variational Inclusions, and Constrained Convex Minimization

^{1}Department of Mathematics, Shanghai Normal University and Scientific Computing Key Laboratory of Shanghai Universities, Shanghai 200234, China

^{2}Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia

Received 3 November 2013; Accepted 12 November 2013; Published 23 January 2014

Academic Editor: Qamrul Hasan Ansari

Copyright © 2014 Lu-Chuan Ceng and Suliman Al-Homidan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We introduce new implicit and explicit iterative algorithms for finding a common element of the set of solutions of the minimization problem for a convex and continuously Fréchet differentiable functional, the set of solutions of a finite family of generalized mixed equilibrium problems, and the set of solutions of a finite family of variational inclusions in a real Hilbert space. Under suitable control conditions, we prove that the sequences generated by the proposed algorithms converge strongly to a common element of the three sets, which is the unique solution of a variational inequality defined over their intersection.

#### 1. Introduction

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $P_C$ be the metric projection of $H$ onto $C$. Let $S : C \to C$ be a self-mapping on $C$. We denote by $\operatorname{Fix}(S)$ the set of fixed points of $S$ and by $\mathbf{R}$ the set of all real numbers. A mapping $A : C \to H$ is called $L$-Lipschitz continuous if there exists a constant $L \ge 0$ such that
$$\|Ax - Ay\| \le L\|x - y\|, \quad \forall x, y \in C.$$
In particular, if $L = 1$, then $A$ is called a nonexpansive mapping [1]; if $L \in [0, 1)$, then $A$ is called a contraction.

A mapping $A$ is called strongly positive on $H$ if there exists a constant $\bar{\gamma} > 0$ such that
$$\langle Ax, x \rangle \ge \bar{\gamma}\|x\|^2, \quad \forall x \in H.$$

Let $A$ be a nonlinear mapping on $C$. We consider the following variational inequality problem (VIP): find a point $x \in C$ such that
$$\langle Ax, y - x \rangle \ge 0, \quad \forall y \in C. \tag{3}$$
The solution set of VIP (3) is denoted by $\operatorname{VI}(C, A)$.

The VIP (3) was first discussed by Lions [2]. There are many applications of VIP (3) in various fields; see, for example, [3–6]. It is well known that if $A$ is a strongly monotone and Lipschitz continuous mapping on $C$, then VIP (3) has a unique solution. In 1976, Korpelevič [7] proposed an iterative algorithm for solving the VIP (3) in Euclidean space $\mathbf{R}^n$:
$$y_n = P_C(x_n - \tau Ax_n), \qquad x_{n+1} = P_C(x_n - \tau Ay_n), \quad n \ge 0, \tag{4}$$
with $\tau > 0$ a given number, which is known as the extragradient method (see also [8]). The literature on the VIP is vast, and Korpelevich's extragradient method has received great attention from many authors, who have improved it in various ways; see, for example, [9–24] and the references therein.
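As a hedged numerical illustration (not from the paper), the extragradient iteration can be sketched for an assumed monotone operator. Here the feasible set $C$ is taken to be the closed unit ball, $A$ is a skew (rotation-type) monotone operator with Lipschitz constant $L = 1$, and the step size is a fixed $\tau \in (0, 1/L)$:

```python
import numpy as np

def project_unit_ball(x):
    """Metric projection P_C onto the closed unit ball C = {x : ||x|| <= 1}."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def extragradient(A, x0, tau, iters):
    """Korpelevich's iteration: y_n = P_C(x_n - tau*A(x_n)),
    x_{n+1} = P_C(x_n - tau*A(y_n))."""
    x = x0
    for _ in range(iters):
        y = project_unit_ball(x - tau * A(x))
        x = project_unit_ball(x - tau * A(y))
    return x

def A(x):
    """A monotone (skew) 1-Lipschitz operator; the VIP over C has the unique
    solution x = 0, while plain gradient projection merely rotates around it."""
    return np.array([x[1], -x[0]])

x_star = extragradient(A, np.array([0.9, 0.3]), tau=0.1, iters=2000)
print(np.linalg.norm(x_star))  # close to 0
```

The skew operator is monotone but not strongly monotone, which is exactly the regime where the extra "prediction" step of the extragradient method matters.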

Let $\varphi : C \to \mathbf{R}$ be a real-valued function, $A : C \to H$ a nonlinear mapping, and $\Theta : C \times C \to \mathbf{R}$ a bifunction. In 2008, Peng and Yao [12] introduced the following generalized mixed equilibrium problem (GMEP) of finding $x \in C$ such that
$$\Theta(x, y) + \varphi(y) - \varphi(x) + \langle Ax, y - x \rangle \ge 0, \quad \forall y \in C. \tag{5}$$
We denote the set of solutions of GMEP (5) by $\operatorname{GMEP}(\Theta, \varphi, A)$. The GMEP (5) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, and Nash equilibrium problems in noncooperative games. The GMEP has been further considered and studied; see, for example, [11, 14, 23, 25–28]. If $\varphi = 0$ and $A = 0$, then GMEP (5) reduces to the equilibrium problem (EP), which is to find $x \in C$ such that
$$\Theta(x, y) \ge 0, \quad \forall y \in C.$$
It was considered and studied in [29]. The set of solutions of the EP is denoted by $\operatorname{EP}(\Theta)$. It is worth mentioning that the EP is a unified model of several problems, namely, variational inequality problems, optimization problems, saddle point problems, complementarity problems, fixed point problems, Nash equilibrium problems, and so forth.

Throughout this paper, it is assumed as in [12] that $\Theta : C \times C \to \mathbf{R}$ is a bifunction satisfying conditions (A1)–(A4) and $\varphi : C \to \mathbf{R}$ is a lower semicontinuous and convex function with restriction (B1) or (B2), where:
(A1) $\Theta(x, x) = 0$ for all $x \in C$;
(A2) $\Theta$ is monotone; that is, $\Theta(x, y) + \Theta(y, x) \le 0$ for any $x, y \in C$;
(A3) $\Theta$ is upper-hemicontinuous; that is, for each $x, y, z \in C$,
$$\limsup_{t \to 0^+} \Theta(tz + (1 - t)x, y) \le \Theta(x, y);$$
(A4) $\Theta(x, \cdot)$ is convex and lower semicontinuous for each $x \in C$;
(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$,
$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r}\langle y_x - z, z - x \rangle < 0;$$
(B2) $C$ is a bounded set.

Next we list some elementary results for the MEP.

Proposition 1 (see [26]). *Assume that $\Theta : C \times C \to \mathbf{R}$ satisfies (A1)–(A4) and let $\varphi : C \to \mathbf{R}$ be a proper lower semicontinuous and convex function. Assume that either (B1) or (B2) holds. For $r > 0$ and $x \in H$, define a mapping $T_r^{(\Theta, \varphi)} : H \to C$ as follows:
$$T_r^{(\Theta, \varphi)}(x) = \left\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r}\langle y - z, z - x \rangle \ge 0,\ \forall y \in C \right\}$$
for all $x \in H$. Then the following hold:*
(i) *for each $x \in H$, $T_r^{(\Theta, \varphi)}(x)$ is nonempty and single-valued;*
(ii) *$T_r^{(\Theta, \varphi)}$ is firmly nonexpansive; that is, for any $x, y \in H$,
$$\|T_r^{(\Theta, \varphi)}x - T_r^{(\Theta, \varphi)}y\|^2 \le \langle T_r^{(\Theta, \varphi)}x - T_r^{(\Theta, \varphi)}y, x - y \rangle;$$*
(iii) *$\operatorname{Fix}(T_r^{(\Theta, \varphi)}) = \operatorname{MEP}(\Theta, \varphi)$;*
(iv) *$\operatorname{MEP}(\Theta, \varphi)$ is closed and convex;*
(v) *$\|T_s^{(\Theta, \varphi)}x - T_t^{(\Theta, \varphi)}x\|^2 \le \frac{s - t}{s}\langle T_s^{(\Theta, \varphi)}x - T_t^{(\Theta, \varphi)}x, T_s^{(\Theta, \varphi)}x - x \rangle$ for all $s, t > 0$ and $x \in H$.*

Let $\lambda_{n,i} \in (0, 1]$, $i = 1, \dots, N$, $n \ge 1$. Given the nonexpansive mappings $T_1, \dots, T_N$ on $C$, for each $n \ge 1$, the mappings $U_{n,1}, \dots, U_{n,N}$ are defined by
$$U_{n,1} = \lambda_{n,1}T_1 + (1 - \lambda_{n,1})I, \qquad U_{n,k} = \lambda_{n,k}T_kU_{n,k-1} + (1 - \lambda_{n,k})I, \quad k = 2, \dots, N,$$
and $W_n := U_{n,N}$.

The $W_n$ is called the $W$-mapping generated by $T_1, \dots, T_N$ and $\lambda_{n,1}, \dots, \lambda_{n,N}$. Note that the nonexpansivity of each $T_i$ implies the nonexpansivity of $W_n$.

In 2012, combining the hybrid steepest-descent method in [30] and hybrid viscosity approximation method in [31], Ceng et al. [27] proposed and analyzed the following hybrid iterative method for finding a common element of the set of solutions of GMEP (5) and the set of fixed points of a finite family of nonexpansive mappings .

Theorem CGY (see [27, Theorem 3.1]). *Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\Theta : C \times C \to \mathbf{R}$ be a bifunction satisfying assumptions (A1)–(A4) and $\varphi : C \to \mathbf{R}$ a lower semicontinuous and convex function with restriction (B1) or (B2). Let the mapping $A : C \to H$ be $\zeta$-inverse-strongly monotone and $\{T_i\}_{i=1}^N$ a finite family of nonexpansive mappings on $C$ such that $\Omega := \bigcap_{i=1}^N \operatorname{Fix}(T_i) \cap \operatorname{GMEP}(\Theta, \varphi, A) \ne \emptyset$. Let $F : C \to H$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa, \eta > 0$ and $V : C \to H$ an $l$-Lipschitzian mapping with constant $l \ge 0$. Let $0 < \mu < 2\eta/\kappa^2$ and $0 \le \gamma l < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Suppose $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$, $\{r_n\}$ is a sequence in $(0, 2\zeta]$, and $\{\lambda_{n,i}\}_{i=1}^N$ is a sequence in $(0, 1]$. For every $n \ge 1$, let $W_n$ be the $W$-mapping generated by $T_1, \dots, T_N$ and $\lambda_{n,1}, \dots, \lambda_{n,N}$. Given $x_1 \in C$ arbitrarily, suppose that the sequences $\{x_n\}$ and $\{u_n\}$ are generated iteratively by
$$\begin{cases} \Theta(u_n, y) + \varphi(y) - \varphi(u_n) + \langle Ax_n, y - u_n \rangle + \dfrac{1}{r_n}\langle y - u_n, u_n - x_n \rangle \ge 0, & \forall y \in C, \\ x_{n+1} = \alpha_n\gamma V(x_n) + \beta_n x_n + ((1 - \beta_n)I - \alpha_n\mu F)W_nu_n, & \forall n \ge 1, \end{cases}$$
where the sequences $\{\alpha_n\}, \{\beta_n\}, \{r_n\}$ and the finite family of sequences $\{\lambda_{n,i}\}_{i=1}^N$ satisfy the conditions:*
(i) *$\lim_{n \to \infty}\alpha_n = 0$ and $\sum_{n=1}^\infty \alpha_n = \infty$;*
(ii) *$0 < \liminf_{n \to \infty}\beta_n \le \limsup_{n \to \infty}\beta_n < 1$;*
(iii) *$0 < \liminf_{n \to \infty}r_n \le \limsup_{n \to \infty}r_n < 2\zeta$ and $\lim_{n \to \infty}(r_{n+1} - r_n) = 0$;*
(iv) *$\lim_{n \to \infty}(\lambda_{n+1,i} - \lambda_{n,i}) = 0$ for all $i \in \{1, \dots, N\}$.*
*Then both $\{x_n\}$ and $\{u_n\}$ converge strongly to $x^* \in \Omega$, where $x^*$ is a unique solution of the variational inequality problem (VIP):
$$\langle (\mu F - \gamma V)x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Omega.$$*

Let $B : C \to H$ be a single-valued mapping of $C$ into $H$ and $R : C \to 2^H$ a multivalued mapping with $D(R) = C$. Consider the following variational inclusion: find a point $x \in C$ such that
$$0 \in Bx + Rx. \tag{14}$$
We denote by $\operatorname{I}(B, R)$ the solution set of the variational inclusion (14). In particular, if $B = R = 0$, then $\operatorname{I}(B, R) = C$. If $B = 0$, then problem (14) becomes the inclusion problem introduced by Rockafellar [32]. It is known that problem (14) provides a convenient framework for the unified study of optimal solutions in many optimization-related areas, including mathematical programming, complementarity problems, variational inequalities, optimal control, mathematical economics, and equilibria and game theory.

In 1998, Huang [33] studied problem (14) in the case where $R$ is maximal monotone and $B$ is strongly monotone and Lipschitz continuous with $C = H$. Subsequently, Zeng et al. [34] further studied problem (14) in a case more general than Huang's [33]. Moreover, the authors [34] obtained the same strong convergence conclusion as in Huang's result [33]; in addition, they gave a geometric convergence rate estimate for approximate solutions. Various types of iterative algorithms for solving variational inclusions have since been further studied and developed; for more details, refer to [35–39] and the references therein.

Let $\{T_n\}_{n=1}^\infty$ be an infinite family of nonexpansive self-mappings on $C$ and $\{\lambda_n\}_{n=1}^\infty$ a sequence of nonnegative numbers in $[0, 1]$. For any $n \ge 1$, define a self-mapping $W_n$ on $C$ as follows:
$$\begin{aligned} U_{n,n+1} &= I, \\ U_{n,k} &= \lambda_kT_kU_{n,k+1} + (1 - \lambda_k)I, \quad k = n, n - 1, \dots, 1, \\ W_n &= U_{n,1}. \end{aligned} \tag{15}$$
Such a mapping $W_n$ is called the $W$-mapping generated by $T_n, T_{n-1}, \dots, T_1$ and $\lambda_n, \lambda_{n-1}, \dots, \lambda_1$.

In the setting of a real Hilbert space $H$, Yao et al. [11] very recently introduced and analyzed an iterative algorithm for finding a common element of the set of solutions of GMEP (5), the set of solutions of the variational inclusion (14), and the set of fixed points of an infinite family of nonexpansive mappings.

Theorem YCL (see [11, Theorem 3.2]). *Let $\varphi : C \to \mathbf{R}$ be a lower semicontinuous and convex function and $\Theta : C \times C \to \mathbf{R}$ a bifunction satisfying conditions (A1)–(A4) and (B1). Let $A$ be a strongly positive bounded linear operator with coefficient $\bar{\gamma} > 0$ and $R : C \to 2^H$ a maximal monotone mapping. Let the mappings $Q, B : C \to H$ be $\xi$-inverse-strongly monotone and $\beta$-inverse-strongly monotone, respectively. Let $f : C \to C$ be a $\rho$-contraction. Let $r$, $\lambda$, and $\gamma$ be three constants such that $0 < r < 2\xi$, $0 < \lambda < 2\beta$, and $0 < \gamma < \bar{\gamma}/\rho$. Let $\{\lambda_n\}$ be a sequence of positive numbers in $[b, 1]$ for some $b \in (0, 1)$ and $\{T_n\}_{n=1}^\infty$ an infinite family of nonexpansive self-mappings on $C$ such that $\Omega := \bigcap_{n=1}^\infty \operatorname{Fix}(T_n) \cap \operatorname{GMEP}(\Theta, \varphi, Q) \cap \operatorname{I}(B, R) \ne \emptyset$. For arbitrarily given $x_1 \in C$, let the sequence $\{x_n\}$ be generated by
$$\begin{cases} \Theta(u_n, y) + \varphi(y) - \varphi(u_n) + \langle Qx_n, y - u_n \rangle + \dfrac{1}{r}\langle y - u_n, u_n - x_n \rangle \ge 0, & \forall y \in C, \\ x_{n+1} = \alpha_n\gamma f(x_n) + \beta_nx_n + ((1 - \beta_n)I - \alpha_nA)W_nJ_{R,\lambda}(u_n - \lambda Bu_n), & \forall n \ge 1, \end{cases} \tag{16}$$
where $\{\alpha_n\}, \{\beta_n\}$ are two real sequences in $(0, 1)$ and $W_n$ is the $W$-mapping defined by (15) (with $\{T_n\}$ and $\{\lambda_n\}$). Assume that the following conditions are satisfied:*
(C1) *$\lim_{n \to \infty}\alpha_n = 0$ and $\sum_{n=1}^\infty \alpha_n = \infty$;*
(C2) *$0 < \liminf_{n \to \infty}\beta_n \le \limsup_{n \to \infty}\beta_n < 1$.*
*Then the sequence $\{x_n\}$ converges strongly to $x^* \in \Omega$, where $x^*$ is a unique solution of the VIP:
$$\langle (A - \gamma f)x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Omega.$$*

Let $f : C \to \mathbf{R}$ be a convex and continuously Fréchet differentiable functional. Consider the convex minimization problem (CMP) of minimizing $f$ over the constraint set $C$:
$$\min_{x \in C} f(x) \tag{18}$$
(assuming the existence of minimizers). We denote by $\Gamma$ the set of minimizers of CMP (18). It is well known that the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}$ determined by the gradient $\nabla f$ and the metric projection $P_C$:
$$x_{n+1} = P_C(x_n - \lambda\nabla f(x_n)), \quad n \ge 0, \tag{19}$$
or, more generally,
$$x_{n+1} = P_C(x_n - \lambda_n\nabla f(x_n)), \quad n \ge 0, \tag{20}$$
where, in both (19) and (20), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\lambda$ or $\lambda_n$ are positive real numbers. The convergence of algorithms (19) and (20) depends on the behavior of the gradient $\nabla f$. As a matter of fact, it is known that if $\nabla f$ is $\eta$-strongly monotone and $L$-Lipschitz continuous, then, for $0 < \lambda < 2\eta/L^2$, the operator $P_C(I - \lambda\nabla f)$ is a contraction; hence, the sequence $\{x_n\}$ defined by the GPA (19) converges in norm to the unique solution of CMP (18). More generally, if the sequence $\{\lambda_n\}$ is chosen to satisfy the property
$$0 < \liminf_{n \to \infty}\lambda_n \le \limsup_{n \to \infty}\lambda_n < \frac{2\eta}{L^2},$$
then the sequence $\{x_n\}$ defined by the GPA (20) converges in norm to the unique minimizer of CMP (18). If the gradient $\nabla f$ is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only be weakly convergent if $H$ is infinite-dimensional (a counterexample is given in Section 5 of Xu [40]).
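The contraction property of the GPA with a fixed step can be illustrated by a small, hedged sketch (not taken from the paper): minimizing an assumed quadratic $f(x) = \frac{1}{2}\|x - b\|^2$ over a box, where $\nabla f$ is $1$-strongly monotone and $1$-Lipschitz, so any fixed step $\lambda \in (0, 2)$ yields a contraction:

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Metric projection onto the box C = [lo, hi]^n (componentwise clipping)."""
    return np.clip(x, lo, hi)

def gradient_projection(grad, x0, lam, iters=200):
    """GPA iteration x_{n+1} = P_C(x_n - lam * grad(x_n))."""
    x = x0
    for _ in range(iters):
        x = project_box(x - lam * grad(x))
    return x

# f(x) = 0.5*||x - b||^2 with b partly outside the box; grad f(x) = x - b is
# 1-strongly monotone and 1-Lipschitz (eta = L = 1), so lam in (0, 2) works.
b = np.array([2.0, -3.0, 0.5])
grad = lambda x: x - b
x_min = gradient_projection(grad, np.zeros(3), lam=1.0)
print(x_min)  # the minimizer is the projection of b onto the box: [1, -1, 0.5]
```

For this choice the fixed point is reached exactly in one step, since $P_C(x - \nabla f(x)) = P_C(b)$.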

Since the Lipschitz continuity of the gradient $\nabla f$ implies that it is actually $\frac{1}{L}$-inverse-strongly monotone (ism) [41], its complement $I - \lambda\nabla f$, for $0 < \lambda < \frac{2}{L}$, can be an averaged mapping (i.e., it can be expressed as a proper convex combination of the identity mapping and a nonexpansive mapping). Consequently, the GPA can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping. This shows that averaged mappings play an important role in the GPA. Recently, Xu [40] used averaged mappings to study the convergence analysis of the GPA, which is hence an operator-oriented approach.

Motivated and inspired by the above facts, in this paper we introduce new implicit and explicit iterative algorithms for finding a common element of the set of solutions of the CMP (18) for a convex functional $f : C \to \mathbf{R}$ with $L$-Lipschitz continuous gradient $\nabla f$, the set of solutions of a finite family of GMEPs, and the set of solutions of a finite family of variational inclusions for maximal monotone and inverse-strongly monotone mappings in a real Hilbert space. Under mild control conditions, we prove that the sequences generated by the proposed algorithms converge strongly to a common element of the three sets, which is the unique solution of a variational inequality defined over their intersection. Our iterative algorithms are based on Korpelevich's extragradient method, the hybrid steepest-descent method in [30], the viscosity approximation method, and the averaged mapping approach to the GPA in [40]. The results obtained in this paper improve and extend the corresponding results announced by many others.

#### 2. Preliminaries

Throughout this paper, we assume that $H$ is a real Hilbert space whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$ and $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$. Moreover, we use $\omega_w(x_n)$ to denote the weak $\omega$-limit set of the sequence $\{x_n\}$; that is,
$$\omega_w(x_n) := \{ x \in H : x_{n_i} \rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\} \}.$$

Recall that a mapping $A : C \to H$ is called:
(i) monotone if
$$\langle Ax - Ay, x - y \rangle \ge 0, \quad \forall x, y \in C;$$
(ii) $\eta$-strongly monotone if there exists a constant $\eta > 0$ such that
$$\langle Ax - Ay, x - y \rangle \ge \eta\|x - y\|^2, \quad \forall x, y \in C;$$
(iii) $\alpha$-inverse-strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Ax - Ay, x - y \rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in C.$$

It is obvious that if $A$ is $\alpha$-inverse-strongly monotone, then $A$ is monotone and $\frac{1}{\alpha}$-Lipschitz continuous.

The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_Cx \in C$ satisfying the property
$$\|x - P_Cx\| = \inf_{y \in C}\|x - y\|.$$

Some important properties of projections are gathered in the following proposition.

Proposition 2. *For given $x \in H$ and $z \in C$:*
(i) *$z = P_Cx$ if and only if $\langle x - z, y - z \rangle \le 0$ for all $y \in C$;*
(ii) *$z = P_Cx$ if and only if $\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$ for all $y \in C$;*
(iii) *$\langle P_Cx - P_Cy, x - y \rangle \ge \|P_Cx - P_Cy\|^2$ for all $y \in H$.*
*Consequently, $P_C$ is nonexpansive and monotone.*

If $A$ is an $\alpha$-inverse-strongly monotone mapping of $C$ into $H$, then it is obvious that $A$ is $\frac{1}{\alpha}$-Lipschitz continuous. We also have that, for all $u, v \in C$ and $\lambda > 0$,
$$\|(I - \lambda A)u - (I - \lambda A)v\|^2 \le \|u - v\|^2 + \lambda(\lambda - 2\alpha)\|Au - Av\|^2.$$
So, if $\lambda \le 2\alpha$, then $I - \lambda A$ is a nonexpansive mapping from $C$ to $H$.
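As a hedged numerical sanity check (not part of the paper), this inequality can be verified for an assumed inverse-strongly monotone mapping: the gradient of a convex quadratic $f(x) = \frac{1}{2}x^\top Mx$ with $M$ symmetric positive semidefinite is $\alpha$-ism with $\alpha = 1/L$, $L = \|M\|$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A = grad of f(x) = 0.5 x^T M x (M symmetric PSD) is alpha-ism with
# alpha = 1/L, where L = ||M||_2 is the Lipschitz constant of A.
M = np.array([[2.0, 0.5], [0.5, 1.0]])
L = np.linalg.norm(M, 2)
alpha = 1.0 / L
A = lambda x: M @ x

results = []
for lam in [0.5 * alpha, alpha, 2.0 * alpha]:
    ok = True
    for _ in range(500):
        u, v = rng.normal(size=2), rng.normal(size=2)
        lhs = np.linalg.norm((u - lam * A(u)) - (v - lam * A(v))) ** 2
        rhs = (np.linalg.norm(u - v) ** 2
               + lam * (lam - 2 * alpha) * np.linalg.norm(A(u) - A(v)) ** 2)
        ok = ok and (lhs <= rhs + 1e-9)
    results.append(ok)
print(results)  # the bound holds for every tested lam <= 2*alpha
```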

*Definition 3. *A mapping $T : H \to H$ is said to be:
(a) nonexpansive [1] if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H;$$
(b) firmly nonexpansive if $2T - I$ is nonexpansive or, equivalently, if $T$ is $1$-inverse-strongly monotone ($1$-ism):
$$\langle x - y, Tx - Ty \rangle \ge \|Tx - Ty\|^2, \quad \forall x, y \in H;$$
alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as
$$T = \frac{1}{2}(I + S),$$
where $S : H \to H$ is nonexpansive; projections are firmly nonexpansive.

It can be easily seen that if $T$ is nonexpansive, then $I - T$ is monotone. It is also easy to see that a projection $P_C$ is $1$-ism. Inverse-strongly monotone (also referred to as co-coercive) operators have been applied widely in solving practical problems in various fields.

*Definition 4. *A mapping $T : H \to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping; that is,
$$T = (1 - \alpha)I + \alpha S,$$
where $\alpha \in (0, 1)$ and $S : H \to H$ is nonexpansive. More precisely, when the last equality holds, we say that $T$ is $\alpha$-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are $\frac{1}{2}$-averaged mappings.
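A well-known consequence (a supplementary illustration, not a claim of the paper) is that Picard iteration of an averaged mapping converges to a fixed point whenever one exists, even though iterating the underlying nonexpansive mapping need not converge. A minimal sketch with an assumed $\frac{1}{2}$-averaged map built from a plane rotation:

```python
import numpy as np

# S: rotation by 90 degrees -- nonexpansive (an isometry) with Fix(S) = {0};
# iterating S itself just circles around the fixed point forever.
S = np.array([[0.0, -1.0], [1.0, 0.0]])

# T = (1 - alpha) I + alpha S with alpha = 1/2 is a 1/2-averaged mapping.
T = 0.5 * np.eye(2) + 0.5 * S

# Picard iteration of the averaged mapping (Krasnosel'skii-Mann scheme).
x = np.array([1.0, 1.0])
for _ in range(100):
    x = T @ x
print(np.linalg.norm(x))  # essentially 0
```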

Proposition 5 (see [42]). *Let $T : H \to H$ be a given mapping.*
(i) *$T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism.*
(ii) *If $T$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism.*
(iii) *$T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > \frac{1}{2}$. Indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.*

Proposition 6 (see [42, 43]). *Let $S, T, V : H \to H$ be given operators.*
(i) *If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.*
(ii) *$T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.*
(iii) *If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged.*
(iv) *The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.*
(v) *If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^N \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N).$$
The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping $T$; that is, $\operatorname{Fix}(T) = \{x \in H : Tx = x\}$.*

We need some facts and tools in a real Hilbert space which are listed as lemmas below.

Lemma 7. *Let $X$ be a real inner product space. Then the following inequality holds:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle, \quad \forall x, y \in X.$$*

Lemma 8. *Let $A : C \to H$ be a monotone mapping. In the context of the variational inequality problem, the characterization of the projection (see Proposition 2(i)) implies
$$u \in \operatorname{VI}(C, A) \iff u = P_C(u - \lambda Au) \quad \text{for some } \lambda > 0.$$*

Lemma 9 (see [44, Demiclosedness principle]). *Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $T$ be a nonexpansive self-mapping on $C$ with $\operatorname{Fix}(T) \ne \emptyset$. Then $I - T$ is demiclosed. That is, whenever $\{x_n\}$ is a sequence in $C$ weakly converging to some $x \in C$ and the sequence $\{(I - T)x_n\}$ strongly converges to some $y$, it follows that $(I - T)x = y$. Here $I$ is the identity operator of $H$.*

Lemma 10 (see [45]). *Let $\{s_n\}$ be a sequence of nonnegative numbers satisfying the conditions
$$s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n\beta_n, \quad \forall n \ge 1,$$
where $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences of real numbers such that:*
(i) *$\{\alpha_n\} \subset [0, 1]$ and $\sum_{n=1}^\infty \alpha_n = \infty$ or, equivalently,
$$\prod_{n=1}^\infty(1 - \alpha_n) := \lim_{n \to \infty}\prod_{k=1}^n(1 - \alpha_k) = 0;$$*
(ii) *$\limsup_{n \to \infty}\beta_n \le 0$, or $\sum_{n=1}^\infty |\alpha_n\beta_n| < \infty$.*
*Then $\lim_{n \to \infty}s_n = 0$.*
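A hedged numerical illustration (not part of the paper): with the assumed choices $\alpha_n = 1/(n+1)$, so that $\sum_n \alpha_n = \infty$, and $\beta_n = 1/n \to 0$, the recursion of Lemma 10 indeed drives $s_n$ toward zero:

```python
# Simulate s_{n+1} = (1 - alpha_n) s_n + alpha_n * beta_n with
# alpha_n = 1/(n+1)  (divergent series, condition (i)) and
# beta_n  = 1/n      (limsup beta_n <= 0 trivially, condition (ii)).
s = 1.0
for n in range(1, 200001):
    alpha = 1.0 / (n + 1)
    beta = 1.0 / n
    s = (1.0 - alpha) * s + alpha * beta
print(s)  # small; roughly of order log(n)/n
```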

Lemma 11 (see [46]). *Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and $\{\beta_n\}$ a sequence in $[0, 1]$ with
$$0 < \liminf_{n \to \infty}\beta_n \le \limsup_{n \to \infty}\beta_n < 1.$$
Suppose that $x_{n+1} = \beta_nx_n + (1 - \beta_n)z_n$ for each $n \ge 1$ and
$$\limsup_{n \to \infty}\left(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\right) \le 0.$$
Then $\lim_{n \to \infty}\|z_n - x_n\| = 0$.*

The following lemma can be easily proven and, therefore, we omit the proof.

Lemma 12. *Let $V : C \to H$ be an $l$-Lipschitzian mapping with constant $l \ge 0$, and let $F : C \to H$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with positive constants $\kappa, \eta > 0$. Then, for $0 \le \gamma l < \mu\eta$,
$$\langle (\mu F - \gamma V)x - (\mu F - \gamma V)y, x - y \rangle \ge (\mu\eta - \gamma l)\|x - y\|^2, \quad \forall x, y \in C.$$
That is, $\mu F - \gamma V$ is strongly monotone with constant $\mu\eta - \gamma l$.*

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. We introduce some notations. Let $\lambda$ be a number in $(0, 1]$ and let $\mu > 0$. Associated with a nonexpansive mapping $T : C \to H$, we define the mapping $T^\lambda : C \to H$ by
$$T^\lambda x := Tx - \lambda\mu F(Tx), \quad \forall x \in C,$$
where $F : C \to H$ is an operator such that, for some positive constants $\kappa, \eta > 0$, $F$ is $\kappa$-Lipschitzian and $\eta$-strongly monotone on $C$; that is, $F$ satisfies the conditions:
$$\|Fx - Fy\| \le \kappa\|x - y\|, \qquad \langle Fx - Fy, x - y \rangle \ge \eta\|x - y\|^2$$
for all $x, y \in C$.

Lemma 13 (see [45, Lemma 3.1]). *$T^\lambda$ is a contraction provided $0 < \mu < 2\eta/\kappa^2$; that is,
$$\|T^\lambda x - T^\lambda y\| \le (1 - \lambda\tau)\|x - y\|, \quad \forall x, y \in C,$$
where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)} \in (0, 1]$.*

Recall that a set-valued mapping $T : D(T) \subseteq H \to 2^H$ is called monotone if, for all $x, y \in D(T)$, $f \in Tx$ and $g \in Ty$ imply
$$\langle f - g, x - y \rangle \ge 0.$$
A set-valued mapping $T$ is called maximal monotone if $T$ is monotone and $(I + \lambda T)D(T) = H$ for each $\lambda > 0$, where $I$ is the identity mapping of $H$. We denote by $G(T)$ the graph of $T$. It is known that a monotone mapping $T$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle f - g, x - y \rangle \ge 0$ for every $(y, g) \in G(T)$ implies $f \in Tx$.

Let $A : C \to H$ be a monotone, $L$-Lipschitz continuous mapping and let $N_Cv$ be the normal cone to $C$ at $v \in C$; that is,
$$N_Cv = \{ w \in H : \langle v - u, w \rangle \ge 0,\ \forall u \in C \}.$$
Define
$$\widetilde{T}v = \begin{cases} Av + N_Cv, & \text{if } v \in C, \\ \emptyset, & \text{if } v \notin C. \end{cases}$$
Then, $\widetilde{T}$ is maximal monotone and $0 \in \widetilde{T}v$ if and only if $v \in \operatorname{VI}(C, A)$; see [32].

Assume that $R : C \to 2^H$ is a maximal monotone mapping. Then, for $\lambda > 0$, associated with $R$, the resolvent operator $J_{R,\lambda}$ can be defined as
$$J_{R,\lambda}x = (I + \lambda R)^{-1}x, \quad \forall x \in H.$$
In terms of Huang [33] (see also [34]), the following property holds for the resolvent operator $J_{R,\lambda}$.

Lemma 14. *$J_{R,\lambda}$ is single-valued and firmly nonexpansive; that is,
$$\langle J_{R,\lambda}x - J_{R,\lambda}y, x - y \rangle \ge \|J_{R,\lambda}x - J_{R,\lambda}y\|^2, \quad \forall x, y \in H.$$
Consequently, $J_{R,\lambda}$ is nonexpansive and monotone.*
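As a concrete illustration (an assumed special case, not from the paper), when $R = \partial g$ is the subdifferential of a convex function $g$, the resolvent $J_{R,\lambda} = (I + \lambda R)^{-1}$ coincides with the proximal mapping of $\lambda g$; for $g(x) = \|x\|_1$ this is the familiar soft-thresholding operator, whose firm nonexpansivity is easy to check numerically:

```python
import numpy as np

def resolvent_abs(x, lam):
    """J_{R,lam} = (I + lam * subdiff of ||.||_1)^{-1}: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Firm nonexpansivity check: <Jx - Jy, x - y> >= ||Jx - Jy||^2.
rng = np.random.default_rng(1)
checks = []
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    Jx, Jy = resolvent_abs(x, 0.5), resolvent_abs(y, 0.5)
    checks.append(np.dot(Jx - Jy, x - y) >= np.dot(Jx - Jy, Jx - Jy) - 1e-12)
print(all(checks))  # every sampled pair satisfies the inequality
```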

Lemma 15 (see [39]). *Let $R : C \to 2^H$ be a maximal monotone mapping with $D(R) = C$. Then, for any given $\lambda > 0$, $u \in C$ is a solution of problem (14) if and only if $u$ satisfies
$$u = J_{R,\lambda}(u - \lambda Bu).$$*

Lemma 16 (see [34]). *Let $R : C \to 2^H$ be a maximal monotone mapping with $D(R) = C$ and let $B : C \to H$ be a strongly monotone, continuous, and single-valued mapping. Then, for each $z \in H$, the equation $z \in (B + \lambda R)x$ has a unique solution $x_\lambda$ for $\lambda > 0$.*

Lemma 17 (see [39]). *Let $R : C \to 2^H$ be a maximal monotone mapping with $D(R) = C$ and let $B : C \to H$ be a monotone, continuous, and single-valued mapping. Then $(I + \lambda(R + B))C = H$ for each $\lambda > 0$. In this case, $R + B$ is maximal monotone.*

#### 3. Implicit Iterative Algorithm and Its Convergence Criteria

We now state and prove the first main result of this paper.

Theorem 18. *Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbf{R}$ be a convex functional with $L$-Lipschitz continuous gradient $\nabla f$. Let $M, N$ be two integers. Let $\Theta_k$ be a bifunction from $C \times C$ to $\mathbf{R}$ satisfying (A1)–(A4) and let $\varphi_k : C \to \mathbf{R} \cup \{+\infty\}$ be a proper lower semicontinuous and convex function, where $k \in \{1, 2, \dots, M\}$. Let $R_i : C \to 2^H$ be a maximal monotone mapping and let $A_k : H \to H$ and $B_i : C \to H$ be $\mu_k$-inverse-strongly monotone and $\eta_i$-inverse-strongly monotone, respectively, where $k \in \{1, \dots, M\}$ and $i \in \{1, \dots, N\}$. Let $F : H \to H$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with positive constants $\kappa, \eta > 0$. Let $V : H \to H$ be an $l$-Lipschitzian mapping with constant $l \ge 0$. Let $0 < \mu < 2\eta/\kappa^2$ and $0 \le \gamma l < \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$. Assume that $\Omega := \bigcap_{k=1}^M \operatorname{GMEP}(\Theta_k, \varphi_k, A_k) \cap \bigcap_{i=1}^N \operatorname{I}(B_i, R_i) \cap \Gamma \ne \emptyset$ and that either (B1) or (B2) holds. Let $\{x_t\}$ be a net generated by
$$x_t = P_C\big[t\gamma Vx_t + (I - t\mu F)T_t\Lambda^N\Delta^Mx_t\big], \quad t \in (0, 1),$$
where $P_C(I - \lambda_t\nabla f) = s_tI + (1 - s_t)T_t$ (here $T_t$ is nonexpansive and $s_t = \frac{2 - \lambda_tL}{4} \in (0, \frac{1}{2})$ for each $\lambda_t \in (0, \frac{2}{L})$). Assume that the following conditions hold:*
(i) *$\lambda_t \in (0, \frac{2}{L})$ for each $t \in (0, 1)$;*
(ii) *$\lambda_i \in (0, 2\eta_i)$ for all $i \in \{1, \dots, N\}$;*
(iii) *$r_k \in (0, 2\mu_k)$ for all $k \in \{1, \dots, M\}$.*
*Then $\{x_t\}$ converges strongly as $t \to 0$ to a point $x^* \in \Omega$, which is a unique solution of the VIP:
$$\langle (\mu F - \gamma V)x^*, p - x^* \rangle \ge 0, \quad \forall p \in \Omega.$$
Equivalently, $x^* = P_\Omega(I - \mu F + \gamma V)x^*$.*

*Proof. *First of all, let us show that the net $\{x_t\}$ is well defined. Indeed, since $\nabla f$ is $L$-Lipschitzian, it follows that $\nabla f$ is $\frac{1}{L}$-ism; see [41]. By Proposition 5(ii) we know that, for $\lambda > 0$, $\lambda\nabla f$ is $\frac{1}{\lambda L}$-ism. So by Proposition 5(iii) we deduce that $I - \lambda\nabla f$ is $\frac{\lambda L}{2}$-averaged. Now since the projection $P_C$ is $\frac{1}{2}$-averaged, it is easy to see from Proposition 6(iv) that the composite $P_C(I - \lambda\nabla f)$ is $\frac{2 + \lambda L}{4}$-averaged for $\lambda \in (0, \frac{2}{L})$. Hence, we obtain that, for each $t \in (0, 1)$, $P_C(I - \lambda_t\nabla f)$ is $\frac{2 + \lambda_tL}{4}$-averaged for each $\lambda_t \in (0, \frac{2}{L})$. Therefore, we can write
$$P_C(I - \lambda_t\nabla f) = \frac{2 - \lambda_tL}{4}I + \frac{2 + \lambda_tL}{4}T_t = s_tI + (1 - s_t)T_t,$$
where $T_t$ is nonexpansive and $s_t := \frac{2 - \lambda_tL}{4} \in (0, \frac{1}{2})$ for each $\lambda_t \in (0, \frac{2}{L})$. It is clear that $s_t \to \frac{1}{2}$ as $\lambda_t \to 0$ and $s_t \to 0$ as $\lambda_t \to \frac{2}{L}$.

Put
$$\Delta^k := T_{r_k}^{(\Theta_k, \varphi_k)}(I - r_kA_k)T_{r_{k-1}}^{(\Theta_{k-1}, \varphi_{k-1})}(I - r_{k-1}A_{k-1})\cdots T_{r_1}^{(\Theta_1, \varphi_1)}(I - r_1A_1)$$
for all $k \in \{1, 2, \dots, M\}$,
$$\Lambda^i := J_{R_i, \lambda_i}(I - \lambda_iB_i)J_{R_{i-1}, \lambda_{i-1}}(I - \lambda_{i-1}B_{i-1})\cdots J_{R_1, \lambda_1}(I - \lambda_1B_1)$$
for all $i \in \{1, 2, \dots, N\}$, and $\Delta^0 = \Lambda^0 = I$, where $I$ is the identity mapping on $H$. Then we have that $\Delta^M$ and $\Lambda^N$ are both nonexpansive.

Consider the following mapping on defined by
where for each . By Proposition 1(ii) and Lemma 13 we obtain from (27) that for all
Since , is a contraction. Therefore, by the Banach contraction principle, has a unique fixed point , which uniquely solves the fixed point equation:
This shows that the sequence is well defined.

Note that and . Hence, by Lemma 12 we know that
That is, $\mu F - \gamma V$ is strongly monotone for $0 \le \gamma l < \mu\eta$. Moreover, it is clear that $\mu F - \gamma V$ is Lipschitz continuous. So the VIP (50) has only one solution. Below we use $x^*$ to denote the unique solution of the VIP (50).

Now, let us show that is bounded. In fact, take arbitrarily. Then from (27) and Proposition 1(ii) we have
Similarly, we have
Combining (59) and (60), we have
Since
where , it is clear that for each . Thus, utilizing Lemma 13 and the nonexpansivity of , we obtain from (61) that
This implies that . Hence, is bounded. So, according to (59) and (61) we know that , , , , and are bounded.

Next let us show that , , and as .

Indeed, from (27) it follows that for all and
Thus, utilizing Lemma 7, from (49) and (64) we have
which implies that
Since and for all and , from we conclude immediately that
for all and .

Furthermore, by Proposition 1(ii) we obtain that for each
which implies that
Also, by Lemma 14, we obtain that for each
which implies
Thus, utilizing Lemma 7, from (49), (69), and (71) we have