Abstract

We first construct an implicit algorithm for solving the minimization problem $\min\{\|x\| : x\in\Omega\}$, where $\Omega$ is the intersection of the solution set of an equilibrium problem, the set of fixed points of a nonexpansive mapping, and the solution set of a variational inequality. We then suggest an explicit algorithm obtained by discretizing this implicit algorithm. We prove that both the implicit and the explicit algorithms converge strongly to a solution of the above minimization problem.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. Recall that a mapping $A: C\to H$ is called $\alpha$-inverse-strongly monotone if there exists a constant $\alpha>0$ such that
$$\langle Ax-Ay, x-y\rangle \ge \alpha\|Ax-Ay\|^2, \quad \forall x,y\in C. \quad (1.1)$$
A mapping $S: C\to C$ is said to be nonexpansive if $\|Sx-Sy\|\le\|x-y\|$ for all $x,y\in C$. Denote the set of fixed points of $S$ by $\mathrm{Fix}(S)$.
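As a numerical illustration (our own, not from the paper): if $S$ is nonexpansive, then $A = I - S$ is $\tfrac12$-inverse-strongly monotone. The sketch below checks both defining inequalities for a random linear map scaled to be nonexpansive.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random linear map scaled to spectral norm <= 1, hence nonexpansive.
M = rng.standard_normal((4, 4))
M /= np.linalg.norm(M, 2)
S = lambda x: M @ x

# A = I - S; for a nonexpansive S this is (1/2)-inverse-strongly monotone.
A = lambda x: x - S(x)

for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    # nonexpansiveness: ||Sx - Sy|| <= ||x - y||
    assert np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-12
    # (1/2)-ism: <Ax - Ay, x - y> >= (1/2) ||Ax - Ay||^2
    d = A(x) - A(y)
    assert d @ (x - y) >= 0.5 * (d @ d) - 1e-12
```

The second assertion is exactly inequality (1.1) with $\alpha = \tfrac12$; it follows from expanding $\|Sx-Sy\|^2 \le \|x-y\|^2$.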

Let $A: C\to H$ be a nonlinear mapping and $F: C\times C\to\mathbb{R}$ be a bifunction. We are concerned with the following equilibrium problem: find $x\in C$ such that
$$F(x,y) + \langle Ax, y-x\rangle \ge 0, \quad \forall y\in C. \quad (1.2)$$
The solution set of (1.2) is denoted by $EP(F,A)$. If $A=0$, then (1.2) reduces to the following equilibrium problem of finding $x\in C$ such that
$$F(x,y) \ge 0, \quad \forall y\in C. \quad (1.3)$$
The solution set of (1.3) is denoted by $EP(F)$. If $F=0$, then (1.2) reduces to the variational inequality problem of finding $x\in C$ such that
$$\langle Ax, y-x\rangle \ge 0, \quad \forall y\in C. \quad (1.4)$$
The solution set of the variational inequality (1.4) is denoted by $VI(C,A)$.

Equilibrium problems, which were introduced by Blum and Oettli [1] in 1994, have had a great impact and influence on pure and applied sciences. It has been shown that equilibrium problems theory provides a novel and unified treatment of a wide class of problems arising in economics, finance, image reconstruction, ecology, transportation, networks, elasticity, and optimization. Equilibrium problems include variational inequalities, fixed point problems, Nash equilibria, and game theory as special cases. Equilibrium problems and variational inequality problems have been investigated by many authors; see [2–35] and the references therein. Problem (1.2) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, the Nash equilibrium problem in noncooperative games, and others.

On the other hand, it is quite often desirable to seek a particular solution of a given nonlinear problem, in particular the minimum-norm solution. For instance, given a closed convex subset $C$ of a Hilbert space $H_1$ and a bounded linear operator $A: H_1\to H_2$, where $H_2$ is another Hilbert space, the $C$-constrained pseudoinverse of $A$, $A_C^{\dagger}$, is defined as the minimum-norm solution of the constrained minimization problem
$$\min_{x\in C}\|Ax - b\|, \quad (1.5)$$
which is equivalent to the fixed point problem
$$x = P_C\big(x - \lambda A^*(Ax - b)\big), \quad (1.6)$$
where $P_C$ is the metric projection from $H_1$ onto $C$, $A^*$ is the adjoint of $A$, $\lambda>0$ is a constant, and $b\in H_2$ is such that $P_{\overline{A(C)}}(b)\in A(C)$.
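The fixed point formulation above suggests a projected iteration $x \mapsto P_C(x - \lambda A^*(Ax-b))$. The following toy sketch (our own choices of $C$, $A$, $b$, and step size, for illustration only) runs this iteration for a small constrained least-squares problem where the constrained minimizer can be read off by hand.

```python
import numpy as np

# Toy instance of min_{x in C} ||Ax - b||: C is the nonnegative orthant
# (our own choice), so P_C is coordinatewise clipping at zero.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([2.0, -3.0])
P_C = lambda x: np.maximum(x, 0.0)

lam = 1.0 / np.linalg.norm(A, 2) ** 2   # step size lambda < 2 / ||A||^2
x = np.zeros(2)
for _ in range(500):
    # x = P_C(x - lambda A^T (Ax - b))
    x = P_C(x - lam * A.T @ (A @ x - b))

# Here x1 = 1 minimizes (2 x1 - 2)^2 and x2 = 0 is the feasible point
# closest to the unconstrained minimizer x2 = -3, so x -> (1, 0).
```

With these choices the iteration reaches the constrained minimizer $(1, 0)$ essentially immediately; the point of the sketch is only that the fixed points of the projected map are exactly the solutions of (1.5).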

It is therefore an interesting problem to devise algorithms that generate schemes converging strongly to the minimum-norm solution of a given problem.

In this paper, we focus on the following minimization problem: find $x^*\in\Omega$ such that
$$\|x^*\| = \min\{\|x\| : x\in\Omega\}, \quad (1.7)$$
where $\Omega$ is the intersection of the solution set of an equilibrium problem, the set of fixed points of a nonexpansive mapping, and the solution set of a variational inequality. We will suggest and analyze two very simple algorithms for solving this minimization problem.

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Throughout this paper, we assume that a bifunction $F: C\times C\to\mathbb{R}$ satisfies the following conditions:
(H1) $F(x,x) = 0$ for all $x\in C$;
(H2) $F$ is monotone, that is, $F(x,y) + F(y,x) \le 0$ for all $x,y\in C$;
(H3) for each $x,y,z\in C$, $\limsup_{t\downarrow 0} F(tz + (1-t)x, y) \le F(x,y)$;
(H4) for each $x\in C$, $y\mapsto F(x,y)$ is convex and lower semicontinuous.
The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C: H\to C$ which assigns to each point $x\in H$ the unique point $P_Cx\in C$ satisfying the property
$$\|x - P_Cx\| = \inf_{y\in C}\|x - y\|. \quad (2.1)$$
It is well known that $P_C$ is a nonexpansive mapping and satisfies
$$\langle x - y, P_Cx - P_Cy\rangle \ge \|P_Cx - P_Cy\|^2, \quad \forall x,y\in H. \quad (2.2)$$
We need the following well-known lemmas for proving our main results.
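Property (2.2) can be checked numerically for a concrete projection. The sketch below (our own toy example: $C$ the closed unit ball, whose projection has an explicit formula) verifies (2.2), and hence nonexpansiveness, at random points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Metric projection onto the closed unit ball C = {y : ||y|| <= 1}.
def P_C(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

for _ in range(100):
    x, y = 3 * rng.standard_normal(3), 3 * rng.standard_normal(3)
    px, py = P_C(x), P_C(y)
    # (2.2): <x - y, P_C x - P_C y> >= ||P_C x - P_C y||^2
    assert (x - y) @ (px - py) >= (px - py) @ (px - py) - 1e-10
    # consequently P_C is nonexpansive
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-10
```

Inequality (2.2) says $P_C$ is firmly nonexpansive; nonexpansiveness then follows from the Cauchy–Schwarz inequality.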

Lemma 2.1 (see [13]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F: C\times C\to\mathbb{R}$ be a bifunction which satisfies conditions (H1)–(H4). Let $r>0$ and $x\in H$. Then there exists $z\in C$ such that
$$F(z,y) + \frac{1}{r}\langle y-z, z-x\rangle \ge 0, \quad \forall y\in C. \quad (2.3)$$
Further, if $T_r(x) = \{z\in C : F(z,y) + \frac{1}{r}\langle y-z, z-x\rangle \ge 0,\ \forall y\in C\}$, then the following hold:
(a) $T_r$ is single-valued and $T_r$ is firmly nonexpansive, that is, for any $x,y\in H$, $\|T_rx - T_ry\|^2 \le \langle T_rx - T_ry, x-y\rangle$;
(b) $EP(F)$ is closed and convex and $EP(F) = \mathrm{Fix}(T_r)$.
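For one concrete bifunction the resolvent of Lemma 2.1 has a closed form, which makes part (a) easy to test. The sketch below (our own toy instance: $C = \mathbb{R}^n$ and $F(x,y) = \langle Mx, y-x\rangle$ with $M$ positive semidefinite, so (H1)–(H4) hold) uses the fact that in this case $T_r x = (I + rM)^{-1}x$ and checks firm nonexpansiveness numerically.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bifunction F(x, y) = <Mx, y - x> with M positive semidefinite,
# so F is monotone; C = R^n (our own toy choice).
G = rng.standard_normal((3, 3))
M = G.T @ G
r = 0.7

# With C = R^n the resolvent condition F(z,y) + (1/r)<y-z, z-x> >= 0
# for all y forces Mz + (z - x)/r = 0, i.e. T_r x = (I + rM)^{-1} x.
T_r = lambda x: np.linalg.solve(np.eye(3) + r * M, x)

for _ in range(100):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    d = T_r(x) - T_r(y)
    # firm nonexpansiveness: ||T_r x - T_r y||^2 <= <T_r x - T_r y, x - y>
    assert d @ d <= d @ (x - y) + 1e-10
```

The assertion holds exactly here since $\langle d, x-y\rangle = \langle d, (I+rM)d\rangle = \|d\|^2 + r\langle d, Md\rangle \ge \|d\|^2$.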

Lemma 2.2 (see [27]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0,1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $x_{n+1} = \beta_n x_n + (1-\beta_n)z_n$ for all $n\ge0$ and $\limsup_{n\to\infty}\big(\|z_{n+1}-z_n\| - \|x_{n+1}-x_n\|\big) \le 0$. Then $\lim_{n\to\infty}\|z_n - x_n\| = 0$.

Lemma 2.3 (see [29]). Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $S: C\to C$ be a nonexpansive mapping. Then the mapping $I - S$ is demiclosed at zero. That is, if $\{x_n\}$ is a sequence in $C$ such that $x_n\to x$ weakly and $(I-S)x_n\to 0$ strongly, then $x\in\mathrm{Fix}(S)$.

Lemma 2.4 (see [29]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1-\gamma_n)a_n + \gamma_n\delta_n,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that
(a) $\sum_{n=1}^{\infty}\gamma_n = \infty$;
(b) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=1}^{\infty}|\gamma_n\delta_n| < \infty$.
Then $\lim_{n\to\infty}a_n = 0$.
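Lemma 2.4 is easy to illustrate numerically. The sketch below (toy sequences of our own choosing) runs the recursion with $\gamma_n = 1/(n+2)$, so that $\sum_n\gamma_n = \infty$, and $\delta_n = 1/(n+1)\to 0$, and observes that $a_n$ is driven to zero even though each $\gamma_n$ is tiny.

```python
# Numerical illustration of Lemma 2.4 with toy sequences:
# a_{n+1} = (1 - g_n) a_n + g_n d_n, with sum g_n = infinity and d_n -> 0.
a = 1.0
for n in range(200000):
    g = 1.0 / (n + 2)        # gamma_n in (0,1), sum diverges: condition (a)
    d = 1.0 / (n + 1)        # delta_n -> 0, so limsup delta_n <= 0: condition (b)
    a = (1 - g) * a + g * d

# After many iterations a is very small, as the lemma guarantees a_n -> 0.
```

Note that divergence of $\sum_n\gamma_n$ is essential: with $\gamma_n = 2^{-n}$ the product $\prod(1-\gamma_n)$ stays bounded away from zero and $a_n$ need not vanish.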

3. Main Results

In this section we introduce two algorithms (one implicit and one explicit) for finding the minimum-norm element of $\Omega := EP(F,B)\cap\mathrm{Fix}(S)\cap VI(C,A)$. Namely, we want to find a point $x^*\in\Omega$ which solves the following minimization problem:
$$\|x^*\| = \min\{\|x\| : x\in\Omega\}. \quad (3.1)$$
Let $S: C\to C$ be a nonexpansive mapping and let $A, B: C\to H$ be $\alpha$-inverse-strongly monotone and $\beta$-inverse-strongly monotone mappings, respectively. Let $F: C\times C\to\mathbb{R}$ be a bifunction which satisfies conditions (H1)–(H4). In order to solve the minimization problem (3.1), we first construct the following implicit algorithm by using the projection method:
$$x_t = P_C\Big[(1-t)SP_C\big(T_r(x_t - \mu Bx_t) - \lambda AT_r(x_t - \mu Bx_t)\big)\Big], \quad t\in(0,1), \quad (3.2)$$
where $T_r$ is defined as in Lemma 2.1 and $\lambda, \mu$ are two constants such that $\lambda\in(0,2\alpha)$ and $\mu\in(0,2\beta)$. We will show that the net $\{x_t\}$ defined by (3.2) converges to a solution of the minimization problem (3.1). First, we show that the net $\{x_t\}$ is well defined. As a matter of fact, for each $t\in(0,1)$, we consider the mapping $W_t: C\to C$ given by
$$W_tx = P_C\Big[(1-t)SP_C\big(T_r(x - \mu Bx) - \lambda AT_r(x - \mu Bx)\big)\Big], \quad x\in C.$$
Since the mappings $T_r$, $P_C$, $S$, $I-\lambda A$, and $I-\mu B$ are nonexpansive, we can check easily that
$$\|W_tx - W_ty\| \le (1-t)\|x-y\|, \quad \forall x,y\in C,$$
which implies that $W_t$ is a contraction. Using the Banach contraction principle, there exists a unique fixed point $x_t$ of $W_t$ in $C$, that is, $x_t = W_tx_t$, which is exactly (3.2).
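Since each $W_t$ is a $(1-t)$-contraction, the implicit point $x_t$ can be computed by Banach iteration for any fixed $t$. The sketch below is a toy instance of our own (not from the paper): we take $F = 0$ and $A = B = 0$, in which case we take the scheme to be $x_t = P_C[(1-t)SP_Cx_t]$, with $C$ a ball and $S$ the projection onto an affine line, and watch $x_t$ approach the minimum-norm element of $\Omega = \mathrm{Fix}(S)\cap C$ as $t\to 0^+$.

```python
import numpy as np

# Toy degenerate instance of the implicit scheme: F = 0, A = B = 0,
# so the fixed point equation reads x_t = P_C[(1 - t) S P_C x_t].
def P_C(x):  # projection onto the closed ball of radius 2
    n = np.linalg.norm(x)
    return x if n <= 2.0 else 2.0 * x / n

def S(x):    # nonexpansive: projection onto the line {x : x[0] = 1}
    return np.array([1.0, x[1]])

def implicit_point(t, iters=200):
    # W_t is a (1 - t)-contraction, so plain Banach iteration converges.
    x = np.zeros(2)
    for _ in range(iters):
        x = P_C((1 - t) * S(P_C(x)))
    return x

# As t -> 0+, x_t approaches (1, 0), the minimum-norm element of
# Omega = Fix(S) ∩ C, matching the behavior claimed for the net {x_t}.
for t in (0.5, 0.1, 0.01):
    print(t, implicit_point(t))
```

In this toy case one can even solve by hand: $x_t = (1-t, 0)$, so the net converges to $(1,0)$ in norm as $t\to0^+$.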

Next we show the first main result of the present paper.

Theorem 3.1. Suppose that $\Omega\ne\emptyset$. Then the net $\{x_t\}$ generated by the implicit method (3.2) converges in norm, as $t\to0^+$, to a solution of the minimization problem (3.1).

Proof. Take $z\in\Omega$. First we will use the following facts:
(1) for all $\lambda\in(0,2\alpha)$ and all $x,y\in C$,
$$\|(I-\lambda A)x - (I-\lambda A)y\|^2 \le \|x-y\|^2 + \lambda(\lambda-2\alpha)\|Ax-Ay\|^2;$$
in particular, for all $\lambda\in(0,2\alpha)$, $I-\lambda A$ is nonexpansive;
(2) $I-\lambda A$ and $I-\mu B$ are nonexpansive and, for all $z\in\Omega$,
$$z = Sz = P_C(z-\lambda Az) = T_r(z-\mu Bz).$$
Set $u_t = T_r(x_t - \mu Bx_t)$ and $y_t = P_C(u_t - \lambda Au_t)$ for all $t\in(0,1)$. It follows that
$$\|y_t - z\| \le \|u_t - z\| \le \|x_t - z\|.$$
From (3.2), we have
$$\|x_t - z\| = \big\|P_C[(1-t)Sy_t] - P_Cz\big\| \le \|(1-t)(Sy_t - z) - tz\| \le (1-t)\|x_t - z\| + t\|z\|, \quad (3.7)$$
that is, $\|x_t - z\| \le \|z\|$. So, $\{x_t\}$ is bounded. Hence $\{u_t\}$, $\{y_t\}$, and $\{Sy_t\}$ are also bounded. Next we will use $M$ to denote some possible constant appearing in the following.
From (3.7), we have
$$\|x_t - z\|^2 \le (1-t)\|y_t - z\|^2 + t\|z\|^2 + tM,$$
that is, by facts (1) and (2),
$$\|x_t - z\|^2 \le (1-t)\big[\|x_t - z\|^2 + \lambda(\lambda-2\alpha)\|Au_t - Az\|^2 + \mu(\mu-2\beta)\|Bx_t - Bz\|^2\big] + t\|z\|^2 + tM.$$
Since $\lambda\in(0,2\alpha)$ and $\mu\in(0,2\beta)$, we derive
$$\lim_{t\to0^+}\|Au_t - Az\| = \lim_{t\to0^+}\|Bx_t - Bz\| = 0.$$
From Lemma 2.1 and (2.2), we obtain
$$\|u_t - z\|^2 = \|T_r(x_t - \mu Bx_t) - T_r(z - \mu Bz)\|^2 \le \big\langle u_t - z, (x_t - \mu Bx_t) - (z - \mu Bz)\big\rangle.$$
It follows that
$$\|u_t - z\|^2 \le \|x_t - z\|^2 - \|u_t - x_t\|^2 + 2\mu\|u_t - x_t\|\|Bx_t - Bz\|.$$
Similarly, by (2.2), we have
$$\|y_t - z\|^2 \le \|u_t - z\|^2 - \|y_t - u_t\|^2 + 2\lambda\|y_t - u_t\|\|Au_t - Az\|.$$
Therefore, we have
$$\|x_t - z\|^2 \le \|x_t - z\|^2 - \|u_t - x_t\|^2 - \|y_t - u_t\|^2 + 2\mu\|u_t - x_t\|\|Bx_t - Bz\| + 2\lambda\|y_t - u_t\|\|Au_t - Az\| + tM.$$
Hence, we deduce
$$\lim_{t\to0^+}\|u_t - x_t\| = \lim_{t\to0^+}\|y_t - u_t\| = 0.$$
Note that
$$\|x_t - Sy_t\| = \big\|P_C[(1-t)Sy_t] - P_C(Sy_t)\big\| \le t\|Sy_t\| \to 0;$$
thus,
$$\|y_t - Sy_t\| \le \|y_t - u_t\| + \|u_t - x_t\| + \|x_t - Sy_t\| \to 0. \quad (3.20)$$
Next we show that $\{x_t\}$ is relatively norm-compact as $t\to0^+$. Let $\{t_n\}\subset(0,1)$ be a sequence such that $t_n\to0^+$ as $n\to\infty$. Put $x_n := x_{t_n}$, $u_n := u_{t_n}$, and $y_n := y_{t_n}$. From (3.20), we get
$$\|y_n - Sy_n\| \to 0. \quad (3.21)$$
By (3.2), we have
$$\|x_t - z\|^2 \le \|(1-t)(Sy_t - z) - tz\|^2 \le (1-t)^2\|x_t - z\|^2 + 2t(1-t)\langle z, z - Sy_t\rangle + t^2\|z\|^2.$$
It follows that
$$\|x_t - z\|^2 \le \frac{2(1-t)}{2-t}\langle z, z - Sy_t\rangle + \frac{t}{2-t}\|z\|^2.$$
In particular,
$$\|x_n - z\|^2 \le \frac{2(1-t_n)}{2-t_n}\langle z, z - Sy_n\rangle + \frac{t_n}{2-t_n}\|z\|^2, \quad \forall z\in\Omega. \quad (3.24)$$
Since $\{x_n\}$ is bounded, without loss of generality, we may assume that $\{x_n\}$ converges weakly to a point $x'\in C$. Hence $\{u_n\}$ and $\{y_n\}$ also converge weakly to $x'$. Noticing (3.21) we can use Lemma 2.3 to get $x'\in\mathrm{Fix}(S)$.
Now we show $x'\in EP(F,B)$. Since $u_n = T_r(x_n - \mu Bx_n)$, for any $y\in C$ we have
$$F(u_n,y) + \langle Bx_n, y - u_n\rangle + \frac{1}{r}\langle y - u_n, u_n - x_n\rangle \ge 0.$$
From the monotonicity of $F$, we have
$$F(u_n,y) \le -F(y,u_n).$$
Hence,
$$\langle Bx_n, y - u_n\rangle + \frac{1}{r}\langle y - u_n, u_n - x_n\rangle \ge F(y,u_n). \quad (3.27)$$
Put $z_s = sy + (1-s)x'$ for all $s\in(0,1]$ and $y\in C$. Then, we have $z_s\in C$. So, from (3.27) we have
$$\langle Bz_s, z_s - u_n\rangle \ge \langle Bz_s - Bu_n, z_s - u_n\rangle + \langle Bu_n - Bx_n, z_s - u_n\rangle - \frac{1}{r}\langle z_s - u_n, u_n - x_n\rangle + F(z_s,u_n). \quad (3.28)$$
Note that $\|Bu_n - Bx_n\|\to0$, since $B$ is Lipschitz continuous and $\|u_n - x_n\|\to0$. Further, from the monotonicity of $B$, we have $\langle Bz_s - Bu_n, z_s - u_n\rangle \ge 0$. Letting $n\to\infty$ in (3.28), we have
$$\langle Bz_s, z_s - x'\rangle \ge F(z_s, x'). \quad (3.29)$$
From (H1), (H4), and (3.29), we also have
$$0 = F(z_s, z_s) \le sF(z_s, y) + (1-s)F(z_s, x') \le sF(z_s, y) + (1-s)\langle Bz_s, z_s - x'\rangle = sF(z_s, y) + (1-s)s\langle Bz_s, y - x'\rangle,$$
and hence
$$F(z_s, y) + (1-s)\langle Bz_s, y - x'\rangle \ge 0. \quad (3.31)$$
Letting $s\to0$ in (3.31), we have, for each $y\in C$,
$$F(x', y) + \langle Bx', y - x'\rangle \ge 0. \quad (3.32)$$
This implies that $x'\in EP(F,B)$. By the same argument as that of [13], we have $x'\in VI(C,A)$. Therefore, $x'\in\Omega$.
We substitute $x'$ for $z$ in (3.24) to get
$$\|x_n - x'\|^2 \le \frac{2(1-t_n)}{2-t_n}\langle x', x' - Sy_n\rangle + \frac{t_n}{2-t_n}\|x'\|^2.$$
Hence, the weak convergence of $\{x_n\}$ to $x'$ (together with $\|x_n - Sy_n\|\to0$) implies that $x_n\to x'$ strongly. This has proved the relative norm-compactness of the net $\{x_t\}$ as $t\to0^+$.
Now we return to (3.24) and take the limit as $n\to\infty$ to get
$$\|x' - z\|^2 \le \langle z, z - x'\rangle, \quad \forall z\in\Omega. \quad (3.34)$$
To show that the entire net $\{x_t\}$ converges to $x'$, assume $x_{s_n}\to x''$, where $s_n\to0^+$; by the argument above, $x''\in\Omega$. In (3.34), we take $z = x''$ to get
$$\|x' - x''\|^2 \le \langle x'', x'' - x'\rangle. \quad (3.35)$$
Interchange $x'$ and $x''$ to obtain
$$\|x'' - x'\|^2 \le \langle x', x' - x''\rangle. \quad (3.36)$$
Adding up (3.35) and (3.36) yields
$$2\|x' - x''\|^2 \le \|x' - x''\|^2,$$
which implies that $x' = x''$.
We note that (3.34) is equivalent to
$$\|x'\|^2 \le \langle x', z\rangle, \quad \forall z\in\Omega.$$
This clearly implies that
$$\|x'\| \le \|z\|, \quad \forall z\in\Omega.$$
Therefore, $x'$ solves the minimization problem (3.1). This completes the proof.

Next we introduce an explicit algorithm for finding a solution of the minimization problem (3.1). This scheme is obtained by discretizing the implicit scheme (3.2). We will show the strong convergence of this algorithm.

Theorem 3.2. Suppose that $\Omega\ne\emptyset$. For $x_0\in C$ given arbitrarily, let the sequence $\{x_n\}$ be generated iteratively by
$$x_{n+1} = \beta_nx_n + (1-\beta_n)P_C\Big[(1-\alpha_n)SP_C\big(T_r(x_n - \mu Bx_n) - \lambda AT_r(x_n - \mu Bx_n)\big)\Big], \quad n\ge0, \quad (3.40)$$
where $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0,1)$ satisfying the following conditions:
(a) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;
(b) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$.
Then the sequence $\{x_n\}$ converges strongly to a solution of the minimization problem (3.1).

Proof. Take $z\in\Omega$. First we will use the following fact:
$$\|(I-\lambda A)x - (I-\lambda A)y\|^2 \le \|x-y\|^2 + \lambda(\lambda-2\alpha)\|Ax-Ay\|^2$$
for all $x,y\in C$. In particular, $I-\lambda A$ and $I-\mu B$ are nonexpansive for $\lambda\in(0,2\alpha)$ and $\mu\in(0,2\beta)$.
Set $u_n = T_r(x_n - \mu Bx_n)$ and $y_n = P_C(u_n - \lambda Au_n)$ for all $n\ge0$. From (3.40), we get
$$\|x_{n+1} - z\| \le \beta_n\|x_n - z\| + (1-\beta_n)\big[(1-\alpha_n)\|y_n - z\| + \alpha_n\|z\|\big] \le \big(1 - (1-\beta_n)\alpha_n\big)\|x_n - z\| + (1-\beta_n)\alpha_n\|z\|.$$
By induction, we obtain, for all $n\ge0$,
$$\|x_n - z\| \le \max\{\|x_0 - z\|, \|z\|\}.$$
Hence, $\{x_n\}$ is bounded. Consequently, we deduce that $\{u_n\}$, $\{y_n\}$, and $\{Sy_n\}$ are all bounded. We will use $M$ to denote some possible constant appearing in the following.
Define $z_n = P_C[(1-\alpha_n)Sy_n]$ for all $n\ge0$, so that $x_{n+1} = \beta_nx_n + (1-\beta_n)z_n$. It follows that
$$\|z_{n+1} - z_n\| \le \big\|(1-\alpha_{n+1})Sy_{n+1} - (1-\alpha_n)Sy_n\big\| \le \|x_{n+1} - x_n\| + (\alpha_{n+1} + \alpha_n)M.$$
This together with (a) implies that
$$\limsup_{n\to\infty}\big(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\big) \le 0.$$
Hence by Lemma 2.2, we get $\lim_{n\to\infty}\|z_n - x_n\| = 0$. Therefore,
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = \lim_{n\to\infty}(1-\beta_n)\|z_n - x_n\| = 0.$$
By the convexity of the norm $\|\cdot\|^2$, we have
$$\|x_{n+1} - z\|^2 \le \beta_n\|x_n - z\|^2 + (1-\beta_n)\big[(1-\alpha_n)\|y_n - z\|^2 + \alpha_n\|z\|^2\big].$$
It follows that
$$(1-\beta_n)(1-\alpha_n)\big[\lambda(2\alpha-\lambda)\|Au_n - Az\|^2 + \mu(2\beta-\mu)\|Bx_n - Bz\|^2\big] \le \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + \alpha_nM.$$
Since $\alpha_n\to0$, $\|x_{n+1} - x_n\|\to0$, and $\liminf_{n\to\infty}(1-\beta_n) > 0$, we derive
$$\lim_{n\to\infty}\|Au_n - Az\| = \lim_{n\to\infty}\|Bx_n - Bz\| = 0.$$
From Lemma 2.1 and (2.2), we obtain
$$\|u_n - z\|^2 \le \|x_n - z\|^2 - \|u_n - x_n\|^2 + 2\mu\|u_n - x_n\|\|Bx_n - Bz\|.$$
Again, by Lemma 2.1 and (2.2), we have
$$\|y_n - z\|^2 \le \|u_n - z\|^2 - \|y_n - u_n\|^2 + 2\lambda\|y_n - u_n\|\|Au_n - Az\|,$$
that is,
$$\|x_{n+1} - z\|^2 \le \|x_n - z\|^2 - (1-\beta_n)(1-\alpha_n)\big[\|u_n - x_n\|^2 + \|y_n - u_n\|^2\big] + 2\mu\|u_n - x_n\|\|Bx_n - Bz\| + 2\lambda\|y_n - u_n\|\|Au_n - Az\| + \alpha_nM.$$
Hence,
$$\lim_{n\to\infty}\|u_n - x_n\| = \lim_{n\to\infty}\|y_n - u_n\| = 0.$$
It follows that
$$\|x_n - Sy_n\| \le \|x_n - z_n\| + \|z_n - Sy_n\| \le \|x_n - z_n\| + \alpha_n\|Sy_n\| \to 0.$$
Since $\|x_n - z_n\|\to0$, $\alpha_n\to0$, $\|u_n - x_n\|\to0$, and $\|y_n - u_n\|\to0$, we derive that
$$\lim_{n\to\infty}\|y_n - Sy_n\| \le \lim_{n\to\infty}\big(\|y_n - u_n\| + \|u_n - x_n\| + \|x_n - Sy_n\|\big) = 0.$$
Next we prove
$$\limsup_{n\to\infty}\langle x^*, x^* - x_n\rangle \le 0,$$
where $x^*$ is a solution of the minimization problem (3.1), namely the minimum-norm element of $\Omega$.
Indeed, we can choose a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\lim_{k\to\infty}\langle x^*, x^* - x_{n_k}\rangle = \limsup_{n\to\infty}\langle x^*, x^* - x_n\rangle.$$
Without loss of generality, we may further assume that $x_{n_k}\to\tilde{x}$ weakly. By the same argument as that of Theorem 3.1, we can deduce that $\tilde{x}\in\Omega$. Therefore,
$$\limsup_{n\to\infty}\langle x^*, x^* - x_n\rangle = \langle x^*, x^* - \tilde{x}\rangle \le 0.$$
Finally, we prove $x_n\to x^*$. As a matter of fact, we have
$$\|x_{n+1} - x^*\|^2 \le (1-\gamma_n)\|x_n - x^*\|^2 + \gamma_n\delta_n,$$
where $\gamma_n = (1-\beta_n)\alpha_n$ and $\delta_n = 2\langle x^*, x^* - x_{n+1}\rangle + \alpha_nM$. It is clear that $\sum_{n}\gamma_n = \infty$ and $\limsup_{n\to\infty}\delta_n \le 0$. Hence, all conditions of Lemma 2.4 are satisfied. Therefore, we immediately deduce that $x_n\to x^*$ strongly. This completes the proof.
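The explicit scheme is straightforward to run numerically. The sketch below is a toy instance of our own (not from the paper): $F = 0$ and $A = B = 0$, in which case we take the iteration to be $x_{n+1} = \beta_nx_n + (1-\beta_n)P_C[(1-\alpha_n)SP_Cx_n]$, with $C$ a ball and $S$ the projection onto an affine line; the parameter choices $\alpha_n = 1/(n+2)$ and $\beta_n = 1/2$ satisfy conditions (a) and (b) of Theorem 3.2.

```python
import numpy as np

# Toy degenerate instance of the explicit scheme: F = 0, A = B = 0, so
# x_{n+1} = b_n x_n + (1 - b_n) P_C[(1 - a_n) S P_C x_n].
def P_C(x):  # projection onto the closed ball of radius 2
    n = np.linalg.norm(x)
    return x if n <= 2.0 else 2.0 * x / n

def S(x):    # nonexpansive with Fix(S) = {x : x[0] = 1}
    return np.array([1.0, x[1]])

x = np.array([2.0, 1.0])
for n in range(40000):
    a_n = 1.0 / (n + 2)   # a_n -> 0 and sum a_n = infinity: condition (a)
    b_n = 0.5             # 0 < liminf b_n <= limsup b_n < 1: condition (b)
    x = b_n * x + (1 - b_n) * P_C((1 - a_n) * S(P_C(x)))

# x slowly approaches (1, 0), the minimum-norm element of Fix(S) ∩ C.
```

Note how slowly the second coordinate is damped: it decays roughly like $\prod_n(1 - \alpha_n/2) \sim n^{-1/2}$, which is the price paid for the divergence requirement $\sum_n\alpha_n = \infty$ in condition (a).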

Acknowledgments

Y. Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105. Y. Liou was supported in part by NSC 100-2221-E-230-012.