Abstract and Applied Analysis

Volume 2012 (2012), Article ID 979870, 30 pages

http://dx.doi.org/10.1155/2012/979870

## An Iterative Method for Solving a System of Mixed Equilibrium Problems, System of Quasivariational Inclusions, and Fixed Point Problems of Nonexpansive Semigroups with Application to Optimization Problems

^{1}Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), Bangmod, Bangkok 10140, Thailand

^{2}Centre of Excellence in Mathematics, CHE, Si Ayutthaya Road, Bangkok 10400, Thailand

Received 11 September 2011; Revised 22 October 2011; Accepted 24 November 2011

Academic Editor: Donal O'Regan

Copyright © 2012 Pongsakorn Sunthrayuth and Poom Kumam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We introduce a general implicit iterative scheme based on the viscosity approximation method with a *ϕ*-strongly pseudocontractive mapping for finding a common element of the set of solutions of a system of mixed equilibrium problems, the set of common fixed points of a nonexpansive semigroup, and the set of solutions of a system of variational inclusions with set-valued maximal monotone mappings and Lipschitzian relaxed cocoercive mappings in Hilbert spaces. Furthermore, we prove that the proposed iterative algorithm converges strongly to a common element of the above three sets, which is also a solution of an optimization problem related to a strongly positive bounded linear operator.

#### 1. Introduction

Throughout this paper we denote by ℕ and ℝ⁺ the set of all positive integers and the set of all positive real numbers, respectively. We always assume that H is a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖, respectively, and that C is a nonempty closed convex subset of H. Let φ : C → ℝ be a real-valued function and F : C × C → ℝ an equilibrium bifunction. The *mixed equilibrium problem* (for short, MEP) is to find x* ∈ C such that

F(x*, y) + φ(y) − φ(x*) ≥ 0, ∀y ∈ C. (1.1)

The set of solutions of (1.1) is denoted by MEP(F, φ), that is,

MEP(F, φ) = {x* ∈ C : F(x*, y) + φ(y) − φ(x*) ≥ 0, ∀y ∈ C}.

In particular, if φ ≡ 0, this problem reduces to the *equilibrium problem*, that is, to find x* ∈ C such that

F(x*, y) ≥ 0, ∀y ∈ C. (1.3)

The set of solutions of (1.3) is denoted by EP(F).
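To make the equilibrium problem concrete, here is a small numerical sketch. The bifunction F(x, y) = (y − x)(x − 0.5) and the set C = [0, 1] are illustrative assumptions, not taken from the paper: a point x* solves the EP exactly when F(x*, y) ≥ 0 for every y ∈ C, and here that forces x* = 0.5.

```python
import numpy as np

# Illustrative only (not from the paper): the equilibrium problem on
# C = [0, 1] for the bifunction F(x, y) = (y - x)(x - 0.5) asks for
# x* in C with F(x*, y) >= 0 for every y in C.  Here x* = 0.5 works,
# since F(0.5, y) = 0 for all y; every other point fails for some y.

def F(x, y):
    return (y - x) * (x - 0.5)

C = np.linspace(0.0, 1.0, 1001)          # discretized feasible set

def is_ep_solution(x, tol=1e-12):
    """Check F(x, y) >= 0 for all y on the grid."""
    return np.all(F(x, C) >= -tol)

# Scan the grid for solutions of the discretized equilibrium problem.
solutions = [x for x in C if is_ep_solution(x)]
print(solutions[0])
```

The brute-force scan recovers the unique solution x* = 0.5 of this particular EP.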

Mixed equilibrium problems include fixed point problems, optimization problems, variational inequality problems, Nash equilibrium problems, and equilibrium problems as special cases (see, e.g., [1–6]). Several methods have been proposed to solve the equilibrium problem; see, for instance, [7–21].

Let A be a strongly positive bounded linear operator on H, that is, there exists a constant γ̄ > 0 such that

⟨Ax, x⟩ ≥ γ̄‖x‖², ∀x ∈ H. (1.4)

Recall that a mapping f : C → C is said to be *contractive* if there exists a constant α ∈ (0, 1) such that

‖f(x) − f(y)‖ ≤ α‖x − y‖, ∀x, y ∈ C.

A mapping T : C → C is said to be

(i) *nonexpansive* if

‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ C,

(ii) *pseudocontractive* if

⟨Tx − Ty, x − y⟩ ≤ ‖x − y‖², ∀x, y ∈ C,

(iii) *ϕ-strongly pseudocontractive* if there exists a continuous and strictly increasing function ϕ : ℝ⁺ → ℝ⁺ with ϕ(0) = 0 such that

⟨Tx − Ty, x − y⟩ ≤ ‖x − y‖² − ϕ(‖x − y‖)‖x − y‖, ∀x, y ∈ C.

It is obvious that the class of pseudocontractive mappings is more general than the class of ϕ-strongly pseudocontractive mappings. If ϕ(t) = kt with k ∈ (0, 1), then a ϕ-strongly pseudocontractive mapping reduces to a k-strongly pseudocontractive mapping with k ∈ (0, 1), which is more general than a contractive mapping.

*Definition 1.1. *A one-parameter family of mappings 𝒮 = {T(s) : 0 ≤ s < ∞} from C into itself is said to be a *nonexpansive semigroup* on C if it satisfies the following conditions: (i) T(0)x = x for all x ∈ C, (ii) T(s + t) = T(s)T(t) for all s, t ≥ 0, (iii) for each x ∈ C the mapping s ↦ T(s)x is continuous, (iv) ‖T(s)x − T(s)y‖ ≤ ‖x − y‖ for all x, y ∈ C and s ≥ 0.

*Remark 1.2. *We denote by F(𝒮) the set of all common fixed points of 𝒮, that is, F(𝒮) = {x ∈ C : T(s)x = x, 0 ≤ s < ∞}.

For a nonlinear mapping B : C → H, we recall the following definitions.

*Definition 1.3. *The nonlinear mapping B : C → H is said to be

(i) *monotone* if

⟨Bx − By, x − y⟩ ≥ 0, ∀x, y ∈ C,

(ii) *β-strongly monotone* if there exists a constant β > 0 such that

⟨Bx − By, x − y⟩ ≥ β‖x − y‖², ∀x, y ∈ C,

(iii) *L-Lipschitz continuous* if there exists a constant L > 0 such that

‖Bx − By‖ ≤ L‖x − y‖, ∀x, y ∈ C,

(iv) *α-inverse-strongly monotone* if there exists a constant α > 0 such that

⟨Bx − By, x − y⟩ ≥ α‖Bx − By‖², ∀x, y ∈ C,

(v) *relaxed (u, v)-cocoercive* if there exist constants u, v > 0 such that

⟨Bx − By, x − y⟩ ≥ −u‖Bx − By‖² + v‖x − y‖², ∀x, y ∈ C.
The resolvent operator technique for solving variational inequalities and variational inclusions is interesting and important. The resolvent equation technique is used to develop powerful and efficient numerical techniques for solving various classes of variational inequalities, inclusions, and related optimization problems.

*Definition 1.4. *Let M : H → 2^H be a multivalued maximal monotone mapping. The single-valued mapping J_{M,λ} : H → H, defined by

J_{M,λ}(u) = (I + λM)^{−1}(u), ∀u ∈ H,

is called the *resolvent operator associated with M*, where λ is any positive number and I is the identity mapping.
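A classic one-dimensional instance of the resolvent operator defined above, offered as an illustrative assumption rather than an example from the paper: take M to be the subdifferential of the absolute value, which is maximal monotone; its resolvent is the well-known soft-thresholding map.

```python
import numpy as np

# Concrete 1-D resolvent (illustrative, not from the paper): for
# M = d|.| (the subdifferential of the absolute value, a maximal
# monotone operator on R), the resolvent (I + lam*M)^{-1} is
# soft-thresholding: shrink |u| by lam and keep the sign.

def resolvent_abs(u, lam):
    """J_{M,lam}(u) = (I + lam*M)^{-1}(u) for M = d|.|."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

lam = 0.5
u = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
x = resolvent_abs(u, lam)       # shrinks each entry of u toward 0 by lam
print(x)

# Sanity check of nonexpansiveness of the resolvent:
a, b = 1.7, -0.4
assert abs(resolvent_abs(a, lam) - resolvent_abs(b, lam)) <= abs(a - b)
```

Soft-thresholding is also the proximal map of the absolute value, which is why this resolvent appears throughout sparse optimization.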

Next, we consider the system of quasivariational inclusions, which is to find (x*, y*) ∈ H × H such that

0 ∈ x* − y* + λ₁(A₁y* + M₁x*),
0 ∈ y* − x* + λ₂(A₂x* + M₂y*), (1.15)

where A_i : H → H and M_i : H → 2^H are nonlinear mappings for each i = 1, 2.

As special cases of problem (1.15), we have the following.

(1) If A₁ = A₂ = A and M₁ = M₂ = M, then problem (1.15) reduces to the following: find (x*, y*) ∈ H × H such that

0 ∈ x* − y* + λ₁(Ay* + Mx*),
0 ∈ y* − x* + λ₂(Ax* + My*). (1.16)

(2) Further, if x* = y* in problem (1.16), then problem (1.16) reduces to the following: find x* ∈ H such that

0 ∈ Ax* + Mx*. (1.17)

Problem (1.17) is called the *variational inclusion problem*. We denote by I(A, M) the set of solutions of the variational inclusion problem (1.17). Next, we consider two special cases of problem (1.17).

(1) If M = ∂φ, where φ : H → ℝ ∪ {+∞} is a proper convex lower semicontinuous function and ∂φ is the subdifferential of φ, then the quasivariational inclusion problem (1.17) is equivalent to finding x* ∈ H such that ⟨Ax*, y − x*⟩ + φ(y) − φ(x*) ≥ 0, for all y ∈ H, which is said to be the *mixed quasivariational inequality*.

(2) If M = ∂δ_C, where C is a nonempty closed convex subset of H and δ_C : H → [0, ∞] is the indicator function of C, that is,

δ_C(x) = 0 if x ∈ C, and δ_C(x) = +∞ if x ∉ C,

then the quasivariational inclusion problem (1.17) is equivalent to the classical variational inequality problem, denoted by VI(C, A), which is to find x* ∈ C such that

⟨Ax*, y − x*⟩ ≥ 0, ∀y ∈ C.

This problem is called the *Hartman-Stampacchia variational inequality problem* (see, e.g., [22–24]).
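The classical variational inequality above is equivalent to a fixed-point equation with the metric projection: x* solves VI(C, A) if and only if x* = P_C(x* − λAx*) for any λ > 0. A minimal numerical sketch of this characterization, where the concrete choices (C = [0, 1], A(x) = x − 2, the step size) are illustrative assumptions, not from the paper:

```python
# Illustrative sketch (not from the paper): the classical variational
# inequality VI(C, A) -- find x* in C with <Ax*, y - x*> >= 0 for all
# y in C -- is equivalent to the fixed-point equation
# x* = P_C(x* - lam * A(x*)), where P_C is the metric projection.
# With C = [0, 1] and A(x) = x - 2, A is negative on C, so the VI
# inequality forces x* = 1 (the right endpoint).

def P_C(x):                     # projection onto C = [0, 1]
    return min(max(x, 0.0), 1.0)

def A(x):
    return x - 2.0

x, lam = 0.0, 0.5
for _ in range(50):             # projected fixed-point iteration
    x = P_C(x - lam * A(x))

print(x)                        # prints 1.0, the VI solution
```

The same projected fixed-point idea, with the projection replaced by a resolvent, is exactly what Lemma 2.12 below exploits for inclusions.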

It is known that problem (1.17) provides a convenient framework for the unified study of optimal solutions in many optimization related areas including mathematical programming, complementarity, variational inequalities, optimal control, mathematical economics, equilibria, and game theory. Also various types of variational inclusions problems have been extended and generalized (see [25–40] and the references therein).

On the other hand, the following optimization problem has been studied extensively by many authors:

min_{x ∈ Ω} (μ/2)⟨Ax, x⟩ + (1/2)‖x − u‖² − h(x),

where Ω = ⋂_{i=1}^∞ C_i, the sets C₁, C₂, … are infinitely many closed convex subsets of H such that ⋂_{i=1}^∞ C_i ≠ ∅, u ∈ H, μ ≥ 0 is a real number, A is a strongly positive bounded linear operator on H, and h is a potential function for γf (i.e., h′(x) = γf(x) for all x ∈ H). This kind of optimization problem has been studied extensively by many authors (see, e.g., [41–44]) when Ω = C and h(x) = ⟨x, b⟩, where b is a given point in H.

Li et al. [45] introduced a two-step iterative procedure for the approximation of common fixed points of a nonexpansive semigroup on a nonempty closed convex subset of a Hilbert space. Recently, Liu et al. [46] introduced a hybrid iterative scheme for finding a common element of the set of solutions of a system of mixed equilibrium problems, the set of common fixed points of a nonexpansive semigroup, and the set of solutions of quasivariational inclusions with multivalued maximal monotone mappings and inverse-strongly monotone mappings. Very recently, Hao [47] introduced a general iterative method for finding a common element of the solution set of quasivariational inclusion problems and the set of common fixed points of an infinite family of nonexpansive mappings.

In this paper, motivated and inspired by Li et al. [45], Liu et al. [46], and Hao [47], we introduce a general implicit iterative algorithm based on the viscosity approximation method with a ϕ-strongly pseudocontractive mapping, which is more general than a contraction mapping, for finding a common element of the set of solutions of a system of mixed equilibrium problems, the set of common fixed points of a nonexpansive semigroup, and the set of solutions of the system of variational inclusions (1.15) with set-valued maximal monotone mappings and Lipschitzian relaxed cocoercive mappings in Hilbert spaces. We prove that the proposed iterative algorithm converges strongly to a common element of the above three sets, which is a solution of the optimization problem related to a strongly positive bounded linear operator. The results obtained in this paper extend and improve several recent results in this area.

#### 2. Preliminaries

In the sequel, we use x_n ⇀ x and x_n → x to denote the weak convergence and strong convergence of the sequence {x_n} in H, respectively.

This section collects some results that will be used in the proofs of our main results.

Proposition 2.1 (see [21]). *(i) The resolvent operator J_{M,λ} associated with M is single-valued and nonexpansive for all λ > 0, that is,

‖J_{M,λ}(x) − J_{M,λ}(y)‖ ≤ ‖x − y‖, ∀x, y ∈ H.

(ii) The resolvent operator J_{M,λ} is 1-inverse-strongly monotone, that is,

‖J_{M,λ}(x) − J_{M,λ}(y)‖² ≤ ⟨x − y, J_{M,λ}(x) − J_{M,λ}(y)⟩, ∀x, y ∈ H.

Obviously, this immediately implies that

‖(x − y) − (J_{M,λ}(x) − J_{M,λ}(y))‖² ≤ ‖x − y‖² − ‖J_{M,λ}(x) − J_{M,λ}(y)‖², ∀x, y ∈ H.*

For solving the equilibrium problem for a bifunction F : C × C → ℝ, let us assume that F satisfies the following conditions:

(H1) F(x, x) = 0 for all x ∈ C;
(H2) F is monotone, that is, F(x, y) + F(y, x) ≤ 0 for all x, y ∈ C;
(H3) for each y ∈ C, x ↦ F(x, y) is concave and upper semicontinuous;
(H4) for each x ∈ C, y ↦ F(x, y) is convex;
(H5) for each x ∈ C, y ↦ F(x, y) is lower semicontinuous.

*Definition 2.2. *A map η : C × C → H is called Lipschitz continuous if there exists a constant λ > 0 such that

‖η(x, y)‖ ≤ λ‖x − y‖, ∀x, y ∈ C.

A differentiable function K : C → ℝ on a convex set C is called

(i) *η-convex* [7] if

K(y) − K(x) ≥ ⟨K′(x), η(y, x)⟩, ∀x, y ∈ C,

where K′(x) is the Fréchet derivative of K at x;

(ii) *η-strongly convex* [7] if there exists a constant μ > 0 such that

K(y) − K(x) − ⟨K′(x), η(y, x)⟩ ≥ (μ/2)‖x − y‖², ∀x, y ∈ C.
Let F : C × C → ℝ be an equilibrium bifunction satisfying the conditions (H1)–(H5). Let r be any given positive number. For a given point x ∈ C, consider the following *auxiliary problem* for MEP (for short, MEP(x, r)): find y ∈ C such that

F(y, z) + φ(z) − φ(y) + (1/r)⟨K′(y) − K′(x), η(z, y)⟩ ≥ 0, ∀z ∈ C,

where η : C × C → H is a mapping and K′(x) is the Fréchet derivative of a functional K : C → ℝ at x. Let S_r : C → C be the mapping such that for each x ∈ C, S_r(x) is the set of solutions of MEP(x, r), that is,

S_r(x) = {y ∈ C : F(y, z) + φ(z) − φ(y) + (1/r)⟨K′(y) − K′(x), η(z, y)⟩ ≥ 0, ∀z ∈ C}, ∀x ∈ C.

Then the following conclusions hold.

Proposition 2.3 (see [7]). *Let H be a real Hilbert space and φ : C → ℝ a lower semicontinuous and convex functional. Let F : C × C → ℝ be an equilibrium bifunction satisfying conditions (H1)–(H5). Assume that

(i) η : C × C → H is Lipschitz continuous with constant λ > 0 such that (a) η(x, y) + η(y, x) = 0 for all x, y ∈ C, (b) η(·, ·) is affine in the first variable, (c) for each fixed y ∈ C, x ↦ η(y, x) is continuous from the weak topology to the weak topology;

(ii) K : C → ℝ is η-strongly convex with constant μ > 0, and its derivative K′ is continuous from the weak topology to the strong topology;

(iii) for each x ∈ C, there exists a bounded subset D_x ⊆ C and z_x ∈ C such that for all y ∈ C \ D_x,

F(y, z_x) + φ(z_x) − φ(y) + (1/r)⟨K′(y) − K′(x), η(z_x, y)⟩ < 0.

Then the following hold:

(i) S_r is single-valued;

(ii) F(S_r) = MEP(F, φ);

(iii) MEP(F, φ) is closed and convex.*

Lemma 2.4 (see [48]). *Let C be a nonempty bounded closed and convex subset of a real Hilbert space H. Let 𝒮 = {T(s) : 0 ≤ s < ∞} be a nonexpansive semigroup on C. Then, for all h ≥ 0,

lim_{t→∞} sup_{x∈C} ‖(1/t)∫₀ᵗ T(s)x ds − T(h)((1/t)∫₀ᵗ T(s)x ds)‖ = 0.*
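Lemma 2.4 says that Cesàro means of the semigroup become asymptotically fixed under every T(h). A numerical illustration with the rotation semigroup on ℝ², which is nonexpansive (in fact isometric); the semigroup, the closed-form average, and the test vector are all illustrative assumptions, not from the paper:

```python
import numpy as np

# Numerical illustration (assumed example, not from the paper) of the
# Cesaro-mean behavior in Lemma 2.4, using the rotation semigroup
# T(s) = R(s) on R^2.  The Cesaro mean (1/t) * integral_0^t R(s) ds
# has the closed form below, and ||sigma_t x - T(h) sigma_t x|| -> 0
# as t -> infinity, at rate O(1/t).

def R(s):
    return np.array([[np.cos(s), -np.sin(s)],
                     [np.sin(s),  np.cos(s)]])

def cesaro_mean(t):
    """(1/t) * integral_0^t R(s) ds, computed entrywise in closed form."""
    return np.array([[np.sin(t),       np.cos(t) - 1.0],
                     [1.0 - np.cos(t), np.sin(t)]]) / t

x, h = np.array([1.0, 0.5]), 2.0
devs = []
for t in [10.0, 100.0, 1000.0]:
    m = cesaro_mean(t) @ x                 # sigma_t x
    devs.append(np.linalg.norm(m - R(h) @ m))
    print(t, devs[-1])                     # deviation decays like O(1/t)
```

Here the means σ_t x themselves converge to 0, the unique common fixed point of the rotation semigroup, so the deviation from T(h)σ_t x must vanish.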

Lemma 2.5 (see [49]). *Let E be a uniformly convex Banach space, C a nonempty closed and convex subset of E, and T : C → C a nonexpansive mapping. Then I − T is demiclosed at zero.*

Lemma 2.6 (see [50]). *Assume that A is a strongly positive bounded linear operator on H with coefficient γ̄ > 0 and 0 < ρ ≤ ‖A‖⁻¹. Then ‖I − ρA‖ ≤ 1 − ργ̄.*

Lemma 2.7 (see [51]). *Let E be a Banach space and T : E → E a ϕ-strongly pseudocontractive and continuous mapping. Then T has a unique fixed point in E.*

Lemma 2.8. *In a real Hilbert space H, the following inequality holds:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀x, y ∈ H.*

The following lemma can be found in [52, 53] (see also Lemma 2.2 in [54]).

Lemma 2.9. *Let C be a nonempty closed and convex subset of a real Hilbert space H and g : C → ℝ a proper lower semicontinuous differentiable convex function. If x* is a solution to the minimization problem

min_{x∈C} g(x),

then

⟨g′(x*), x − x*⟩ ≥ 0, ∀x ∈ C.

In particular, if x* solves the optimization problem

min_{x∈C} (μ/2)⟨Ax, x⟩ + (1/2)‖x − u‖² − h(x),

then

⟨u + (γf − (I + μA))x*, x − x*⟩ ≤ 0, ∀x ∈ C,

where h is a potential function for γf.*

The following lemmas can be found in [55, 56]. For the sake of completeness, one includes their proofs in a Hilbert space version. Without loss of generality, one assumes that the constants involved satisfy the conditions stated in each lemma.

Lemma 2.10. *Let H be a real Hilbert space and B : H → H an L-Lipschitzian and relaxed (u, v)-cocoercive mapping. Then, for any λ > 0, one has

‖(I − λB)x − (I − λB)y‖² ≤ (1 + 2λuL² − 2λv + λ²L²)‖x − y‖², ∀x, y ∈ H.

In particular, if 0 < λ ≤ 2(v − uL²)/L², then I − λB is nonexpansive.*

*Proof. *For all x, y ∈ H, we have

‖(I − λB)x − (I − λB)y‖² = ‖x − y‖² − 2λ⟨Bx − By, x − y⟩ + λ²‖Bx − By‖²
≤ ‖x − y‖² − 2λ(−u‖Bx − By‖² + v‖x − y‖²) + λ²‖Bx − By‖²
≤ ‖x − y‖² + 2λuL²‖x − y‖² − 2λv‖x − y‖² + λ²L²‖x − y‖²
= (1 + 2λuL² − 2λv + λ²L²)‖x − y‖².

It is clear that, if 0 < λ ≤ 2(v − uL²)/L², then I − λB is nonexpansive. This completes the proof.
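A quick numerical check of Lemma 2.10 for one concrete mapping; the choice B(x) = 2x and the cocoercivity constants are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Numerical check of Lemma 2.10 with illustrative parameters (not from
# the paper).  B(x) = 2x is L-Lipschitz with L = 2 and relaxed
# (u, v)-cocoercive with u = 0.25, v = 3, since
# <Bx - By, x - y> = 2*||x - y||^2 = -u*||Bx - By||^2 + v*||x - y||^2.
# The lemma then says I - lam*B is nonexpansive for
# 0 < lam <= 2*(v - u*L**2)/L**2 = 1.

L, u, v = 2.0, 0.25, 3.0
B = lambda x: 2.0 * x
lam = 2.0 * (v - u * L**2) / L**2           # = 1.0, the critical step size

for _ in range(1000):                       # random pairs of points in R^5
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    lhs = np.linalg.norm((x - lam * B(x)) - (y - lam * B(y)))
    assert lhs <= np.linalg.norm(x - y) + 1e-12   # nonexpansive

print("I - lam*B is nonexpansive for lam =", lam)
```

At the critical step size the map I − λB here equals −I, so the nonexpansiveness bound is attained with equality, which makes this a tight test of the lemma's constant.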

Lemma 2.11. *Let H be a real Hilbert space, M_i : H → 2^H a maximal monotone mapping, and A_i : H → H an L_i-Lipschitzian and relaxed (u_i, v_i)-cocoercive mapping for each i = 1, 2. Let Q : H → H be the mapping defined by

Q(x) = J_{M₁,λ₁}[J_{M₂,λ₂}(x − λ₂A₂x) − λ₁A₁J_{M₂,λ₂}(x − λ₂A₂x)], ∀x ∈ H.

If 0 < λ_i ≤ 2(v_i − u_iL_i²)/L_i² for each i = 1, 2, then Q is nonexpansive.*

*Proof. *By Lemma 2.10, we know that I − λ₁A₁ and I − λ₂A₂ are nonexpansive. For all x, y ∈ H, we have

‖Q(x) − Q(y)‖ = ‖J_{M₁,λ₁}[(I − λ₁A₁)J_{M₂,λ₂}(I − λ₂A₂)x] − J_{M₁,λ₁}[(I − λ₁A₁)J_{M₂,λ₂}(I − λ₂A₂)y]‖
≤ ‖(I − λ₂A₂)x − (I − λ₂A₂)y‖
≤ ‖x − y‖,

which implies that Q is nonexpansive. This completes the proof.

Lemma 2.12. *For any (x*, y*) ∈ H × H, where y* = J_{M₂,λ₂}(x* − λ₂A₂x*), (x*, y*) is a solution of the problem (1.15) if and only if x* is a fixed point of the mapping Q defined as in Lemma 2.11.*

*Proof. *Let (x*, y*) be a solution of the problem (1.15). Then, we have

0 ∈ x* − y* + λ₁(A₁y* + M₁x*),
0 ∈ y* − x* + λ₂(A₂x* + M₂y*),

which implies that

x* = J_{M₁,λ₁}(y* − λ₁A₁y*),
y* = J_{M₂,λ₂}(x* − λ₂A₂x*).

We can deduce that (2.21) is equivalent to

x* = J_{M₁,λ₁}[J_{M₂,λ₂}(x* − λ₂A₂x*) − λ₁A₁J_{M₂,λ₂}(x* − λ₂A₂x*)] = Q(x*).

This completes the proof.
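Lemma 2.12 reduces the system (1.15) to finding a fixed point of the nonexpansive composition Q, which can then be approached by simple Picard iteration. A one-dimensional sketch, where every concrete choice (the operators M_i = ∂|·|, the mapping A(x) = x − 2, and the step sizes) is an illustrative assumption, not from the paper:

```python
import numpy as np

# Illustrative 1-D sketch of Lemmas 2.11-2.12 (all concrete choices are
# assumptions, not from the paper).  Take M_1 = M_2 = d|.| (resolvent =
# soft-thresholding) and A_1 = A_2 = A with A(x) = x - 2, which is
# 1-Lipschitz and relaxed (u, v)-cocoercive with u = 0.5, v = 1, so
# lam_i = 0.5 satisfies 0 < lam_i <= 2*(v - u*L**2)/L**2 = 1 and the
# composed mapping Q is nonexpansive.

def soft(u, lam):                 # resolvent of lam * d|.|
    return np.sign(u) * max(abs(u) - lam, 0.0)

A = lambda x: x - 2.0
lam1 = lam2 = 0.5

def Q(x):
    """Q(x) = J_{M1,lam1}[y - lam1*A(y)] with y = J_{M2,lam2}(x - lam2*A(x))."""
    y = soft(x - lam2 * A(x), lam2)
    return soft(y - lam1 * A(y), lam1)

x = 0.0
for _ in range(60):               # Picard iteration on the nonexpansive Q
    x = Q(x)

y = soft(x - lam2 * A(x), lam2)
print(x, y)                       # the pair (x*, y*) solving this instance
```

For these choices the iteration contracts with rate 1/4 and converges to (x*, y*) = (1, 1), which indeed satisfies both inclusions of (1.15) since ∂|1| = {1}.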

#### 3. Main Results

In this section, we prove the main results of this article. Before proving the main result, we need the following lemma.

Lemma 3.1. *Let C be a nonempty closed convex subset of a real Hilbert space H. Let 𝒮 = {T(s) : 0 ≤ s < ∞} be a nonexpansive semigroup from C into itself. Then I − σ_t is monotone, where σ_t x = (1/t)∫₀ᵗ T(s)x ds for all x ∈ C and t > 0.*

*Proof. *For all x, y ∈ C, we have

⟨(I − σ_t)x − (I − σ_t)y, x − y⟩ = ‖x − y‖² − ⟨σ_t x − σ_t y, x − y⟩
≥ ‖x − y‖² − ‖σ_t x − σ_t y‖‖x − y‖
≥ ‖x − y‖² − ‖x − y‖² = 0,

which implies that I − σ_t is monotone. This completes the proof.

Theorem 3.2. *Let be a real Hilbert space. Let be a finite family of lower semicontinuous and convex functions, be a finite family of bifunctions satisfying (H1)–(H5), and be a finite family of Lipschitz continuous mappings with a constant . Let be a nonexpansive semigroup from into itself, be an -Lipschitzian and relaxed -cocoercive mapping with for all and be a maximal monotone mapping. Assume that MEP , where is defined as in Lemma 2.11. Let be a -strongly pseudocontractive mapping with and be a strongly positive linear bounded operator on with a coefficient . Let and be two constants such that . Let be a finite family of positive real sequences such that , and be two sequences in , and be a positive real divergent sequence. For any fixed , let be the sequence defined by*
**
where
**
and , is the mapping defined by (2.8). Assume the following. *(i)* is Lipschitz continuous with constant such that(a) for all ,(b) is affine in the first variable,(c)for each fixed , is sequentially continuous from the weak topology to the weak topology.*(ii)

*is -strongly convex with constant , and its derivative is not only continuous from the weak topology to the strong topology but also Lipschitz continuous with constant such that .*(iii)

*For all and for all , there exists a bounded subset and such that for all ,*

*If the following conditions are satisfied:*(C1)

*,*(C2)

*,*

*then the sequence defined by (3.2) converges strongly to , provided is firmly nonexpansive, where is the unique solution of the variational inequality*

*or, equivalently, is the unique solution of the optimization problem*

*where is a potential function for and is the solution of the problem (1.15), where .*

*Proof. *By the conditions and , we may assume, without loss of generality, that for all . Since is a linear bounded self-adjoint operator on , by (1.4), we have
Observe that
This shows that is positive. It follows that
In fact, by the assumption that for all , is nonexpansive. Setting for and . Define a mapping by
Hence, by Lemma 2.11 and nonexpansiveness (semigroup) of and , for all , we have
which implies that is nonexpansive.

First, we show that defined by (3.2) is well defined. Define a mapping by

Indeed, by Lemma 2.6, and from (3.11), for all , we have
This shows that is -strongly pseudocontractive and strongly continuous. It follows from Lemma 2.7 that has a unique fixed point , that is, defined by (3.2) is well defined.

Next, we show the uniqueness of the solution of the variational inequality (3.5). Suppose that satisfy (3.5); then
Adding up (3.14), we have
It follows that
which is a contradiction. Hence, and the uniqueness is proved.

Next, we show that is bounded. Taking , it follows from Lemma 2.11 that
Putting , we have . Setting and , then
Since for all , is nonexpansive, we also have that is nonexpansive and , then
By nonexpansiveness of and , we have
It follows from (3.20) that
and so
It follows that
Hence
which implies that is bounded, and so are , , and . Since is -strongly pseudocontractive, we have
Thus is bounded.

Next, we show that , for all . From (3.18), we observe that
It follows that
By the conditions and , we obtain
Let , then is a nonempty bounded closed and convex subset of , which is -invariant for all and contains . It follows from Lemma 2.4 that
On the other hand, we note that
From (3.28) and (3.29), for all , we have

Next, we show that for all . Since is firmly nonexpansive for all and for , hence for , we have
It follows that
Now, by Lemma 2.8, we have
where
From the condition and (3.28), we have
From (3.20), we observe that
Substituting (3.33) into (3.34), we have
which in turn implies that
From the condition and from (3.36), we obtain that
On the other hand, we observe that
Then, we have
Moreover, we observe that
and hence
Next, we show that for all , and . By the cocoercivity of the mapping , we have
Similarly, we have