Abstract

The purpose of this paper is to solve the minimization problem of finding $\tilde{x} \in \Omega$ such that $\|\tilde{x}\| = \min\{\|x\| : x \in \Omega\}$, where $\Omega$ stands for the intersection of the solution set of an equilibrium problem and the fixed point set of a nonexpansive mapping. We first present two new composite algorithms (one implicit and one explicit). Further, we prove that the proposed composite algorithms converge strongly to $\tilde{x}$.

1. Introduction

In the present paper, our main purpose is to solve the minimization problem of finding $\tilde{x} \in \Omega$ such that
$$\|\tilde{x}\| = \min\{\|x\| : x \in \Omega\}, \tag{1.1}$$
where $\Omega$ stands for the intersection of the solution set of the equilibrium problem and the fixed point set of a nonexpansive mapping. This problem is motivated by the following least-squares solution to the constrained linear inverse problem:
$$Ax = b, \quad x \in C, \tag{1.2}$$
where $C$ is a nonempty closed convex subset of a real Hilbert space $H_1$, $A$ is a bounded linear operator from $H_1$ to another real Hilbert space $H_2$, and $b$ is a given point in $H_2$. The least-squares solution to (1.2) is the least-norm minimizer of the minimization problem
$$\min_{x \in C} \frac{1}{2}\|Ax - b\|^2. \tag{1.3}$$
Let $S_b$ denote the solution set of (1.2) (or, equivalently, of (1.3)). It is known that $S_b$ is nonempty if and only if $P_{\overline{A(C)}}(b) \in A(C)$. In this case, $S_b$ has a unique element with minimum norm (equivalently, (1.2) has a unique least-squares solution); that is, there exists a unique point $x^\dagger \in S_b$ satisfying
$$\|x^\dagger\| = \min\{\|x\| : x \in S_b\}. \tag{1.4}$$
The so-called $C$-constrained pseudoinverse of $A$ is then defined as the operator $A_C^\dagger$ with domain $D(A_C^\dagger) = \{b \in H_2 : P_{\overline{A(C)}}(b) \in A(C)\}$ and values given by
$$A_C^\dagger(b) = x^\dagger, \quad b \in D(A_C^\dagger), \tag{1.5}$$
where $x^\dagger$ is the unique solution to (1.4).

Note that the optimality condition for the minimization (1.3) is the variational inequality (VI)
$$x^\dagger \in C, \quad \langle A^*(Ax^\dagger - b), x - x^\dagger\rangle \ge 0, \quad \forall x \in C, \tag{1.6}$$
where $A^*$ is the adjoint of $A$.

If $S_b \ne \emptyset$, then (1.3) is consistent and its solution set coincides with the solution set of VI (1.6). On the other hand, VI (1.6) can be rewritten as
$$\langle x^\dagger - (x^\dagger - \lambda A^*(Ax^\dagger - b)), x - x^\dagger\rangle \ge 0, \quad \forall x \in C, \tag{1.7}$$
where $\lambda$ is any positive scalar. In the terminology of projections, (1.7) is equivalent to the fixed point equation
$$x^\dagger = P_C(x^\dagger - \lambda A^*(Ax^\dagger - b)). \tag{1.8}$$
It is not hard to find that for $0 < \lambda < 2\|A\|^{-2}$, the mapping $x \mapsto P_C(x - \lambda A^*(Ax - b))$ is nonexpansive. Therefore, finding the least-squares solution of the constrained linear inverse problem (1.2) is equivalent to finding the minimum-norm fixed point of this nonexpansive mapping.
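As a concrete numerical illustration of this equivalence (ours, not from the paper), the sketch below runs the fixed-point iteration $x_{k+1} = P_C(x_k - \lambda A^*(Ax_k - b))$ on a small problem in $\mathbb{R}^2$ with $C$ the nonnegative orthant, where $P_C$ is coordinatewise clipping. All names (`mat_vec`, `constrained_least_squares`) and the data $A$, $b$, $\lambda$ are our own choices.

```python
# Fixed-point iteration x <- P_C(x - lam * A^T (A x - b)) for the
# C-constrained least-squares problem min_{x in C} 0.5*||Ax - b||^2,
# with C = {x : x >= 0} in R^2.  For 0 < lam < 2/||A||^2 the iteration
# map is nonexpansive, so the iterates settle at a fixed point, i.e. a
# constrained least-squares solution.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def project_nonneg(x):          # metric projection onto C = {x : x >= 0}
    return [max(0.0, xi) for xi in x]

def constrained_least_squares(A, b, lam, x0, iters=200):
    x = list(x0)
    At = transpose(A)
    for _ in range(iters):
        r = [ai - bi for ai, bi in zip(mat_vec(A, x), b)]   # A x - b
        g = mat_vec(At, r)                                   # A^T (A x - b)
        x = project_nonneg([xi - lam * gi for xi, gi in zip(x, g)])
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, -1.0]
x = constrained_least_squares(A, b, lam=0.25, x0=[0.0, 0.0])
print(x)   # [1.0, 0.0]: the first coordinate solves 2*x1 = 2, the second is clipped at 0
```

Note that the unconstrained least-squares solution is $(1, -1)$; the constraint $x \ge 0$ clips the second coordinate to $0$.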

Based on the above facts, finding the minimum-norm fixed point of a nonexpansive mapping is an interesting topic. In this paper, we consider a more general problem: we focus on solving the minimization problem (1.1). To this end, we first recall some definitions concerning the fixed point problem and the equilibrium problem.

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Recall that a mapping $A : C \to H$ is called $\alpha$-inverse strongly monotone if there exists a positive real number $\alpha$ such that $\langle Ax - Ay, x - y\rangle \ge \alpha\|Ax - Ay\|^2$ for all $x, y \in C$. It is clear that any $\alpha$-inverse strongly monotone mapping is monotone and $\frac{1}{\alpha}$-Lipschitz continuous. Let $f : C \to H$ be a $\rho$-contraction; that is, there exists a constant $\rho \in [0, 1)$ such that $\|f(x) - f(y)\| \le \rho\|x - y\|$ for all $x, y \in C$. A mapping $S : C \to C$ is said to be nonexpansive if $\|Sx - Sy\| \le \|x - y\|$ for all $x, y \in C$. Denote the set of fixed points of $S$ by $Fix(S)$.

Let $A : C \to H$ be a nonlinear mapping and $F : C \times C \to \mathbb{R}$ be a bifunction. The equilibrium problem is to find $x \in C$ such that
$$F(x, y) + \langle Ax, y - x\rangle \ge 0, \quad \forall y \in C. \tag{1.9}$$
The solution set of (1.9) is denoted by $EP$. If $A = 0$, then (1.9) reduces to the following equilibrium problem of finding $x \in C$ such that
$$F(x, y) \ge 0, \quad \forall y \in C. \tag{1.10}$$
If $F = 0$, then (1.9) reduces to the variational inequality problem of finding $x \in C$ such that
$$\langle Ax, y - x\rangle \ge 0, \quad \forall y \in C. \tag{1.11}$$
We note that problem (1.9) is very general in the sense that it includes, as special cases, optimization problems, variational inequalities, minimax problems, the Nash equilibrium problem in noncooperative games, and others; see, for example, [1–4].

We next briefly review some historic approaches which relate to the fixed point problems and the equilibrium problems.

In 2005, Combettes and Hirstoaga [5] introduced an iterative algorithm for finding the best approximation to the initial data and proved a strong convergence theorem. In 2007, by using the viscosity approximation method, S. Takahashi and W. Takahashi [6] introduced another iterative scheme for finding a common element of the set of solutions of the equilibrium problem and the set of fixed points of a nonexpansive mapping. Subsequently, algorithms for solving equilibrium problems and fixed point problems have been further developed by many authors. In particular, Ceng and Yao [7] introduced an iterative scheme for finding a common element of the set of solutions of the mixed equilibrium problem (1.9) and the set of common fixed points of finitely many nonexpansive mappings. Maingé and Moudafi [8] introduced an iterative algorithm for equilibrium problems and fixed point problems. Yao et al. [9] considered an iterative scheme for finding a common element of the set of solutions of the equilibrium problem and the set of common fixed points of an infinite family of nonexpansive mappings. Noor et al. [10] introduced an iterative method for solving fixed point problems and variational inequality problems. Their results extend and improve many results in the literature. Some works related to the equilibrium problem, fixed point problems, and the variational inequality problem can be found in [1–45] and the references therein.

However, we note that the algorithms constructed in [2, 4, 6–10, 14, 15, 21, 23–40] do not work for finding the minimum-norm solution of the corresponding fixed point problems and equilibrium problems. The main purpose of this paper is to construct algorithms for finding the minimum-norm solution of the fixed point problems and the equilibrium problems. We first suggest two new composite algorithms (one implicit and one explicit) for solving the above minimization problem. Further, we prove that the proposed composite algorithms converge strongly to the minimum-norm element $\tilde{x}$.

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Throughout this paper, we assume that a bifunction $F : C \times C \to \mathbb{R}$ satisfies the following conditions: (H1) $F(x, x) = 0$ for all $x \in C$; (H2) $F$ is monotone, that is, $F(x, y) + F(y, x) \le 0$ for all $x, y \in C$; (H3) for each $x, y, z \in C$, $\limsup_{t \downarrow 0} F(tz + (1 - t)x, y) \le F(x, y)$; (H4) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous.

The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
$$\|x - P_C x\| = \inf_{y \in C}\|x - y\|.$$
It is well known that $P_C$ is a nonexpansive mapping and satisfies
$$\langle x - P_C x, y - P_C x\rangle \le 0, \quad \forall y \in C.$$
We need the following lemmas for proving our main results.
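The characterizing inequality $\langle x - P_C x, y - P_C x\rangle \le 0$ for all $y \in C$ can be checked numerically. The snippet below (our illustration, not from the paper) uses the box $C = [0, 1]^3$, for which the metric projection is simply coordinatewise clipping, and tests the inequality against randomly sampled $y \in C$.

```python
import random

def project_box(x, lo=0.0, hi=1.0):
    # metric projection onto the box C = [lo, hi]^n: clip each coordinate
    return [min(hi, max(lo, xi)) for xi in x]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
x = [2.5, -0.7, 0.4]
p = project_box(x)                       # nearest point of C = [0,1]^3 to x
for _ in range(1000):
    y = [random.uniform(0.0, 1.0) for _ in range(3)]
    # variational characterization: <x - P_C x, y - P_C x> <= 0 for y in C
    assert inner([xi - pi for xi, pi in zip(x, p)],
                 [yi - pi for yi, pi in zip(y, p)]) <= 1e-12
print(p)   # [1.0, 0.0, 0.4]
```

Geometrically, the inequality says the vector from $P_C x$ to $x$ makes an obtuse (or right) angle with every direction pointing from $P_C x$ into $C$.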

Lemma 2.1 (see [5]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F : C \times C \to \mathbb{R}$ be a bifunction which satisfies conditions (H1)–(H4). Let $r > 0$ and $x \in H$. Then, there exists $z \in C$ such that
$$F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.$$
Further, if
$$T_r(x) = \left\{z \in C : F(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \ \forall y \in C\right\},$$
then the following hold: (i) $T_r$ is single-valued and $T_r$ is firmly nonexpansive, that is, for any $x, y \in H$, $\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y\rangle$; (ii) $EP$ is closed and convex, and $Fix(T_r) = EP$.
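For one concrete bifunction the resolvent $T_r$ has a closed form. Taking $F(z, y) = g(y) - g(z)$ with $g(y) = |y|$ on $C = \mathbb{R}$ (our example, not from the paper; this $F$ satisfies (H1)–(H4)), the defining inequality of $T_r(x)$ says exactly that $z$ minimizes $g(y) + \frac{1}{2r}|y - x|^2$, so $T_r$ is the soft-thresholding operator. Both claims of the lemma can then be checked directly:

```python
def T_r(x, r):
    # resolvent of F(z, y) = |y| - |z|: the soft-thresholding operator,
    # i.e. the unique minimizer of |y| + (1/(2r))*(y - x)^2
    return (abs(x) - r) * (1 if x > 0 else -1) if abs(x) > r else 0.0

r = 0.5
pts = [-2.0, -0.3, 0.1, 0.9, 3.0]
for x in pts:
    for y in pts:
        tx, ty = T_r(x, r), T_r(y, r)
        # firm nonexpansiveness: |T x - T y|^2 <= <T x - T y, x - y>
        assert (tx - ty) ** 2 <= (tx - ty) * (x - y) + 1e-12
print(T_r(3.0, r), T_r(0.1, r))   # 2.5 0.0
```

Here $Fix(T_r) = \{0\} = \operatorname{argmin} g = EP$, matching part (ii) of the lemma.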

Lemma 2.2 (see [17]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let the mapping $A : C \to H$ be $\alpha$-inverse strongly monotone and $\lambda > 0$ be a constant. Then, one has
$$\|(I - \lambda A)x - (I - \lambda A)y\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\alpha)\|Ax - Ay\|^2, \quad \forall x, y \in C.$$
In particular, if $0 < \lambda \le 2\alpha$, then $I - \lambda A$ is nonexpansive.
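A quick numerical check of this inequality (our illustration, not from the paper): the linear mapping $Ax = Mx$ with $M = \operatorname{diag}(2, 1)$ is $\alpha$-inverse strongly monotone with $\alpha = 1/2$, since $\langle Mu, u\rangle \ge \frac{1}{2}\|Mu\|^2$ for all $u \in \mathbb{R}^2$.

```python
import random
random.seed(1)

M = [[2.0, 0.0], [0.0, 1.0]]       # A x = M x is (1/2)-inverse strongly monotone
alpha = 0.5                         # since <Mu, u> >= (1/2)||Mu||^2 here

def Av(x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def norm2(u):
    return u[0]**2 + u[1]**2

for _ in range(1000):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    y = [random.uniform(-5, 5), random.uniform(-5, 5)]
    lam = random.uniform(0.01, 1.0)            # 0 < lam <= 2*alpha
    d = [xi - yi for xi, yi in zip(x, y)]
    Ad = [a - b for a, b in zip(Av(x), Av(y))]
    lhs = norm2([di - lam * ai for di, ai in zip(d, Ad)])
    rhs = norm2(d) + lam * (lam - 2 * alpha) * norm2(Ad)
    assert lhs <= rhs + 1e-9                   # Lemma 2.2 inequality
    assert lhs <= norm2(d) + 1e-9              # I - lam*A is nonexpansive
print("checked")
```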

Lemma 2.3 (see [28]). Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $S : C \to C$ be a nonexpansive mapping. Then, the mapping $I - S$ is demiclosed; that is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \to x$ weakly and $(I - S)x_n \to y$ strongly, then $(I - S)x = y$.

Lemma 2.4 (see [22]). Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n\gamma_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that (1) $\sum_{n=0}^{\infty}\gamma_n = \infty$; (2) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\delta_n\gamma_n| < \infty$. Then $\lim_{n\to\infty}a_n = 0$.
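A small numerical illustration of Lemma 2.4 (ours, not from the paper): with $\gamma_n = 1/(n+2)$ and $\delta_n = 1/(n+1)$, each step shrinks $a_n$ only slightly, yet $a_n \to 0$. Indeed, for this choice the recursion (taken with equality) can be solved exactly: setting $b_n = (n+1)a_n$ gives $b_{n+1} = b_n + 1/(n+1)$, so $a_n = (a_0 + H_n)/(n+1) \approx (\log n)/n$.

```python
a = 1.0
for n in range(100000):
    gamma = 1.0 / (n + 2)      # gamma_n in (0,1) with divergent sum
    delta = 1.0 / (n + 1)      # delta_n -> 0, so limsup delta_n <= 0
    a = (1.0 - gamma) * a + delta * gamma
print(a)   # tends to 0, though only at rate ~ log(n)/n
```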

3. Main Results

In this section, we will introduce two algorithms for finding the minimum-norm element of $\Omega := Fix(S) \cap EP$. Namely, we want to find the unique point $\tilde{x} \in \Omega$ which solves the following minimization problem:
$$\|\tilde{x}\| = \min_{x \in \Omega}\|x\|. \tag{3.1}$$

Let $S : C \to C$ be a nonexpansive mapping and $A : C \to H$ be an $\alpha$-inverse strongly monotone mapping. Let $F : C \times C \to \mathbb{R}$ be a bifunction which satisfies conditions (H1)–(H4). Let $r$ and $\lambda$ be two constants such that $r > 0$ and $0 < \lambda \le 2\alpha$. In order to find a solution of the minimization problem (3.1), we construct the following implicit algorithm
$$x_t = P_C[(1 - t)ST_r(x_t - \lambda Ax_t)], \quad t \in (0, 1), \tag{3.2}$$
where $T_r$ is defined as in Lemma 2.1. We will show that the net $\{x_t\}$ defined by (3.2) converges to a solution of the minimization problem (3.1). As a matter of fact, in this paper, we will study the following general algorithm.

Let $f : C \to H$ be a $\rho$-contraction. For each $t \in (0, 1)$, we consider the following mapping $W_t : C \to C$ given by
$$W_t x = P_C[tf(x) + (1 - t)ST_r(x - \lambda Ax)], \quad x \in C. \tag{3.3}$$
Since the mappings $S$, $T_r$, and $I - \lambda A$ are nonexpansive, we can check easily that
$$\|W_t x - W_t y\| \le t\|f(x) - f(y)\| + (1 - t)\|x - y\| \le [1 - (1 - \rho)t]\|x - y\|,$$
which implies that $W_t$ is a contraction. Using the Banach contraction principle, there exists a unique fixed point $x_t$ of $W_t$ in $C$, that is,
$$x_t = P_C[tf(x_t) + (1 - t)ST_r(x_t - \lambda Ax_t)]. \tag{3.4}$$
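The Banach-contraction argument also suggests how to compute $x_t$ in practice: Picard iteration of $W_t$ converges geometrically. The toy run below (ours, heavily simplified: $A = 0$ and $F = 0$ so that $T_r = I$, $f = 0$, and $P_C = I$) takes $S$ to be the projection onto the line $L = \{x \in \mathbb{R}^2 : x_1 + x_2 = 1\}$, so $Fix(S) = L$ and the minimum-norm fixed point is $(0.5, 0.5)$; the net $x_t$ should then approach that point as $t \to 0^+$.

```python
# For each fixed t, W_t x = (1 - t) * S x is a (1 - t)-contraction, so
# Picard iteration converges to its unique fixed point x_t; as t -> 0+
# the net {x_t} tends to the minimum-norm fixed point of S.

def S(x):                                    # projection onto L = {x1 + x2 = 1}
    shift = (1.0 - x[0] - x[1]) / 2.0
    return (x[0] + shift, x[1] + shift)

def implicit_net(t, iters=200):
    x = (0.0, 0.0)
    for _ in range(iters):                   # Picard iteration for W_t
        sx = S(x)
        x = ((1 - t) * sx[0], (1 - t) * sx[1])
    return x

for t in (0.5, 0.1, 0.001):
    print(t, implicit_net(t))
# as t -> 0+ the net approaches (0.5, 0.5), the minimum-norm point of L
```

In this instance one can verify by hand that $x_t = (1 - t)\,(0.5, 0.5)$ exactly, which makes the convergence $x_t \to (0.5, 0.5)$ transparent.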

At this point, we would like to point out that algorithm (3.4) includes algorithm (3.2) as a special case, since the contraction $f$ is possibly a nonself-mapping; in particular, taking $f = 0$ in (3.4) gives (3.2).

In the sequel, we assume that (1) $C$ is a nonempty closed convex subset of a real Hilbert space $H$; (2) $S : C \to C$ is a nonexpansive mapping, $A : C \to H$ is an $\alpha$-inverse strongly monotone mapping, and $f : C \to H$ is a $\rho$-contraction; (3) $F : C \times C \to \mathbb{R}$ is a bifunction which satisfies conditions (H1)–(H4); (4) $\Omega := Fix(S) \cap EP \ne \emptyset$.

In order to prove our first main result, we need the following lemmas.

Lemma 3.1. The net $\{x_t\}$ generated by the implicit method (3.4) is bounded.

Proof. Set $u_t = x_t - \lambda Ax_t$ and $y_t = T_r u_t$ for all $t \in (0, 1)$. Take $z \in \Omega$. It is clear that $z = Sz = T_r(z - \lambda Az)$. Since $T_r$ is nonexpansive and $A$ is $\alpha$-inverse strongly monotone, we have from Lemma 2.2 that
$$\|y_t - z\| = \|T_r(x_t - \lambda Ax_t) - T_r(z - \lambda Az)\| \le \|(x_t - z) - \lambda(Ax_t - Az)\| \le \|x_t - z\|.$$
So, we have that
$$\|Sy_t - z\| \le \|y_t - z\| \le \|x_t - z\|.$$
It follows from (3.4) that
$$\|x_t - z\| \le \|tf(x_t) + (1 - t)Sy_t - z\| \le t\|f(x_t) - f(z)\| + t\|f(z) - z\| + (1 - t)\|Sy_t - z\| \le [1 - (1 - \rho)t]\|x_t - z\| + t\|f(z) - z\|,$$
that is,
$$\|x_t - z\| \le \frac{\|f(z) - z\|}{1 - \rho}.$$
So, $\{x_t\}$ is bounded. Hence, $\{y_t\}$ and $\{f(x_t)\}$ are also bounded. This completes the proof.

According to Lemma 3.1, we can choose an appropriate constant $M > 0$ that dominates all of the bounded quantities appearing in the estimates below; for instance, $\sup_{t \in (0,1)}\{\|x_t\|, \|y_t\|, \|f(x_t)\|\} \le M$.

Lemma 3.2. The net $\{x_t\}$ generated by the implicit method (3.4) is relatively norm compact as $t \to 0^+$.

Proof. From (3.4) and (3.5), we have It follows that that is, Since , we derive
From Lemmas 2.1 and 2.2, we obtain which implies that By (3.10) and (3.14), we have It follows that This together with (3.12) implies that It follows that Hence, Next, we show that $\{x_t\}$ is relatively norm compact as $t \to 0^+$. Let $\{t_n\} \subset (0, 1)$ be a sequence such that $t_n \to 0^+$ as $n \to \infty$. Put $x_n := x_{t_n}$ and $y_n := y_{t_n}$. From (3.19), we get By (3.4), we deduce that is, It follows that In particular,
Since $\{x_n\}$ is bounded, without loss of generality, we may assume that $\{x_n\}$ converges weakly to a point $x^* \in C$. Then $\{y_n\}$ also converges weakly to $x^*$. Noticing (3.20), we can use Lemma 2.3 to get $x^* \in Fix(S)$.
Now, we show $x^* \in EP$. Since $y_n = T_r(x_n - \lambda Ax_n)$, for any $y \in C$, we have From the monotonicity of $F$, we have Hence, Put $y_s = sy + (1 - s)x^*$ for all $s \in (0, 1]$ and $y \in C$. Then, we have $y_s \in C$. So, from (3.27), we have Note that . Further, from the monotonicity of $A$, we have . Letting $n \to \infty$ in (3.28), we have From (H1), (H4), and (3.29), we also have and hence Letting $s \to 0$ in (3.31), we have, for each $y \in C$, This implies that $x^* \in EP$. Therefore, $x^* \in \Omega = Fix(S) \cap EP$.
We substitute $x^*$ for $z$ in (3.24) to get Hence, the weak convergence of $\{x_n\}$ to $x^*$ implies that $x_n \to x^*$ strongly. This proves the relative norm compactness of the net $\{x_t\}$ as $t \to 0^+$. This completes the proof.

Now, we show our first main result.

Theorem 3.3. The net $\{x_t\}$ generated by the implicit method (3.4) converges in norm, as $t \to 0^+$, to the unique solution $\tilde{x}$ of the following variational inequality:
$$\tilde{x} \in \Omega, \quad \langle (I - f)\tilde{x}, x - \tilde{x}\rangle \ge 0, \quad \forall x \in \Omega. \tag{3.34}$$
In particular, if we take $f = 0$, then the net $\{x_t\}$ defined by (3.2) converges in norm, as $t \to 0^+$, to a solution of the minimization problem (3.1).

Proof. Now we return to (3.24) in Lemma 3.2 and take the limit as $n \to \infty$. In particular, $x^*$ solves the following variational inequality
$$x^* \in \Omega, \quad \langle (I - f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Omega, \tag{3.35}$$
or the equivalent dual variational inequality
$$x^* \in \Omega, \quad \langle (I - f)x, x - x^*\rangle \ge 0, \quad \forall x \in \Omega. \tag{3.36}$$
Therefore, $x^* = P_\Omega f(x^*)$; that is, $x^*$ is the unique fixed point in $\Omega$ of the contraction $P_\Omega f$. Clearly this is sufficient to conclude that the entire net $\{x_t\}$ converges in norm to $\tilde{x} := x^*$ as $t \to 0^+$.
Finally, if we take $f = 0$, then (3.35) reduces to
$$\tilde{x} \in \Omega, \quad \langle \tilde{x}, x - \tilde{x}\rangle \ge 0, \quad \forall x \in \Omega.$$
Equivalently,
$$\|\tilde{x}\|^2 \le \langle \tilde{x}, x\rangle \le \|\tilde{x}\|\|x\|, \quad \forall x \in \Omega.$$
This clearly implies that
$$\|\tilde{x}\| \le \|x\|, \quad \forall x \in \Omega.$$
Therefore, $\tilde{x}$ is a solution of the minimization problem (3.1). This completes the proof.

Next, we introduce an explicit algorithm for finding a solution of the minimization problem (3.1). This scheme is obtained by discretizing the implicit scheme (3.4).

Algorithm 3.4. Given $x_0 \in C$ arbitrarily, let the sequence $\{x_n\}$ be generated iteratively by
$$x_{n+1} = \beta_n x_n + (1 - \beta_n)P_C[\alpha_n f(x_n) + (1 - \alpha_n)ST_r(x_n - \lambda Ax_n)], \quad n \ge 0, \tag{3.41}$$
where $\{\alpha_n\}$ and $\{\beta_n\}$ are two sequences in $(0, 1)$.
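To make the explicit scheme concrete, here is a toy run (ours, not the paper's; it assumes an averaged iteration of the form $x_{n+1} = \beta_n x_n + (1 - \beta_n)P_C[\alpha_n f(x_n) + (1 - \alpha_n)ST_r(x_n - \lambda Ax_n)]$ and simplifies to $A = 0$, $F = 0$ so that $T_r = I$, $f = 0$, and $P_C = I$). With $S$ the projection onto the line $\{x_1 + x_2 = 1\}$ in $\mathbb{R}^2$, the iterates should approach the minimum-norm fixed point $(0.5, 0.5)$:

```python
def S(x):                                  # projection onto {x : x1 + x2 = 1}
    shift = (1.0 - x[0] - x[1]) / 2.0
    return (x[0] + shift, x[1] + shift)

x = (0.0, 0.0)
for n in range(5000):
    alpha = 1.0 / (n + 2)                  # alpha_n -> 0 with divergent sum
    beta = 0.5                             # beta_n bounded away from 0 and 1
    sx = S(x)
    y = ((1 - alpha) * sx[0], (1 - alpha) * sx[1])   # inner step with f = 0
    x = (beta * x[0] + (1 - beta) * y[0],            # averaged update
         beta * x[1] + (1 - beta) * y[1])
print(x)   # near (0.5, 0.5), the minimum-norm point of Fix(S)
```

The vanishing factor $1 - \alpha_n$ pulls each iterate slightly toward the origin, which is exactly what steers the limit to the minimum-norm element rather than to an arbitrary fixed point of $S$.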

Next, we give several lemmas in order to prove our second main result.

Lemma 3.5. The sequence $\{x_n\}$ generated by (3.41) is bounded.

Proof. Pick $z \in \Omega$. Let $u_n = x_n - \lambda Ax_n$ and $y_n = T_r u_n$ for all $n \ge 0$. From (3.41), we get
$$\|x_{n+1} - z\| \le \beta_n\|x_n - z\| + (1 - \beta_n)\big[\alpha_n\|f(x_n) - z\| + (1 - \alpha_n)\|Sy_n - z\|\big] \le [1 - (1 - \rho)(1 - \beta_n)\alpha_n]\|x_n - z\| + (1 - \beta_n)\alpha_n\|f(z) - z\|.$$
By induction, we obtain, for all $n \ge 0$,
$$\|x_n - z\| \le \max\left\{\|x_0 - z\|, \frac{\|f(z) - z\|}{1 - \rho}\right\}.$$
Hence, $\{x_n\}$ is bounded. Consequently, we deduce that $\{y_n\}$, $\{f(x_n)\}$, and $\{Sy_n\}$ are all bounded. This completes the proof.

Lemma 3.6. Assume the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy the following conditions: (i) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$; (ii) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$ and $\lim_{n\to\infty}(\beta_{n+1} - \beta_n) = 0$. Then $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$.

Proof. From (3.41), we have Next, we estimate and . We have Then, we obtain where $M > 0$ is a constant satisfying This together with (i), (ii), and Lemma 2.4 implies that By the convexity of the norm $\|\cdot\|$, we have From Lemma 2.2, we get Substituting (3.50) into (3.49), we have Therefore, Since , and , we derive
From Lemma 2.1 and (3.41), we obtain Thus, we deduce By (3.49) and (3.55), we have It follows that Since , , and , we derive that Note that . Hence, Therefore, This completes the proof.

Now, we show the strong convergence of the sequence generated by (3.41).

Theorem 3.7. Assume the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy the following conditions: (i) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$; (ii) $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$ and $\lim_{n\to\infty}(\beta_{n+1} - \beta_n) = 0$. Then the sequence $\{x_n\}$ generated by (3.41) converges strongly to $\tilde{x} \in \Omega$, which is the unique solution of the variational inequality (3.34). In particular, if $f = 0$, then the sequence $\{x_n\}$ generated by
$$x_{n+1} = \beta_n x_n + (1 - \beta_n)P_C[(1 - \alpha_n)ST_r(x_n - \lambda Ax_n)], \quad n \ge 0, \tag{3.61}$$
converges strongly to a solution of the minimization problem (3.1).

Proof. We first prove that
$$\limsup_{n\to\infty}\langle f(\tilde{x}) - \tilde{x}, x_n - \tilde{x}\rangle \le 0,$$
where $\tilde{x}$ is the unique solution of the variational inequality (3.34).
Indeed, we can choose a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\lim_{k\to\infty}\langle f(\tilde{x}) - \tilde{x}, x_{n_k} - \tilde{x}\rangle = \limsup_{n\to\infty}\langle f(\tilde{x}) - \tilde{x}, x_n - \tilde{x}\rangle.$$
Without loss of generality, we may further assume that $x_{n_k} \to z$ weakly. By the same argument as in the proof of Theorem 3.3, we can deduce that $z \in \Omega$. Therefore,
$$\limsup_{n\to\infty}\langle f(\tilde{x}) - \tilde{x}, x_n - \tilde{x}\rangle = \langle f(\tilde{x}) - \tilde{x}, z - \tilde{x}\rangle \le 0.$$
From (3.41), we have an estimate of the form
$$\|x_{n+1} - \tilde{x}\|^2 \le (1 - \gamma_n)\|x_n - \tilde{x}\|^2 + \delta_n\gamma_n,$$
where $\{\gamma_n\} \subset (0, 1)$ and $\{\delta_n\}$ are determined by $\{\alpha_n\}$ and $\{\beta_n\}$. It is clear that $\sum_{n=0}^{\infty}\gamma_n = \infty$ and $\limsup_{n\to\infty}\delta_n \le 0$. Hence, all conditions of Lemma 2.4 are satisfied. Therefore, we immediately deduce that $x_n \to \tilde{x}$ strongly.
Finally, if we take $f = 0$, then by an argument similar to that in the proof of Theorem 3.3, we immediately deduce that $\tilde{x}$ is the minimum-norm element of $\Omega$. This completes the proof.

4. Conclusions

Iterative methods for finding a common element of the solution set of the equilibrium problem and the fixed point set have been extensively studied; see, for example, [2, 4, 6, 7, 9, 14, 15, 21, 23–28]. However, iterative methods for finding the minimum-norm solution of the equilibrium problem and the fixed point problem are far less developed than those for only finding a common element. In the present paper, we suggest two algorithms: one implicit algorithm (3.4) and one explicit algorithm (3.41). We prove the strong convergence of the algorithms (3.4) and (3.41) to a common element of the solution set of the equilibrium problem and the fixed point set of a nonexpansive mapping. As special cases, we prove that algorithms (3.2) and (3.61) converge to $\tilde{x}$, which solves the minimization problem (3.1). It should be pointed out that our algorithms and our main results are new even if we assume that $f$ is a self-mapping on $C$.

In many problems, one needs to find a solution with minimum norm. Hence, it is an interesting problem to construct algorithms for finding the minimum-norm solutions of practical problems. The reader can develop iterative algorithms for solving other minimization problems by using the methods and techniques contained in the present paper.

Acknowledgments

The authors thank three anonymous referees for their comments which improved the presentation of this paper. Y. Yao was supported in part by Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin and NSFC 11071279. The second author was supported in part by NSC 99-2221-E-230-006.