Abstract

The present paper is divided into two parts. In the first part, we introduce implicit and explicit iterative schemes, based on regularization, for solving equilibrium problems and constrained convex minimization problems. We establish strong convergence of the sequences generated by the proposed schemes to a common solution of the minimization problem and the equilibrium problem; such a point is also a solution of a variational inequality. In the second part, as applications, we apply the algorithm to solve the split feasibility problem together with an equilibrium problem.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and norm $\| \cdot \|$. Let $C$ be a nonempty closed convex subset of $H$. Let $F$ be a bifunction of $C \times C$ into $\mathbb{R}$, where $\mathbb{R}$ is the set of real numbers. Consider the equilibrium problem (EP), which is to find $x \in C$ such that
$$F(x, y) \ge 0 \quad \text{for all } y \in C. \tag{1}$$

We denote the set of solutions of EP by $EP(F)$. Given a mapping $T \colon C \to H$, let $F(x, y) = \langle Tx, y - x \rangle$ for all $x, y \in C$; then $z \in EP(F)$ if and only if $\langle Tz, y - z \rangle \ge 0$ for all $y \in C$; that is, $z$ is a solution of the variational inequality. Numerous problems in physics, optimization, and economics reduce to finding a solution of (1). Several methods have been proposed to solve the equilibrium problem; see, for instance, [18] and the references therein.

Composite iterative algorithms for finding a common solution of an equilibrium problem and a fixed point problem have been proposed by many authors. We list some of the main results as follows.

With some appropriate assumptions, Ceng et al. [9] established an iterative scheme and proved that, under certain conditions, the sequences it generates converge weakly to an element of the common solution set.

For finding an element of the common solution set, S. Takahashi and W. Takahashi [10] introduced an iterative scheme by the viscosity approximation method in a Hilbert space; under suitable conditions, some strong convergence theorems are obtained.

In 2009, Liu [11] introduced two iterative schemes by the general iterative method for finding a common element, where the underlying mapping is a strictly pseudocontractive nonself-mapping in the setting of a real Hilbert space and the scheme involves a strongly positive bounded linear operator on $H$. Under some assumptions, strong convergence theorems are obtained.

In 2012, based on the concept of the shrinking projection method, Reich and Sabach [12] considered an algorithm for finding the common solution of finitely many equilibrium problems in a reflexive Banach space. Under some assumptions, the generated sequence converges strongly to the common solution.

The gradient-projection algorithm is a classical and powerful method for solving constrained convex optimization problems and has been studied by many authors (see [13–26] and the references therein). The method has recently been applied to solve split feasibility problems, which find applications in image reconstruction and intensity-modulated radiation therapy (see [27–34]).

Consider the problem of minimizing $f$ over the constraint set $C$ (assuming $C$ is a nonempty closed and convex subset of a real Hilbert space $H$). It is well known that, if $f$ is a convex and continuously Fréchet differentiable functional, the gradient-projection algorithm generates a sequence determined by the gradient $\nabla f$ and the metric projection $P_C$ onto $C$. Under the condition that $\nabla f$ is Lipschitz continuous and strongly monotone, the sequence converges strongly to a minimizer of $f$ in $C$. If the gradient of $f$ is only assumed to be inverse strongly monotone, then the sequence can only be weakly convergent when $H$ is infinite-dimensional.
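As an illustration of the scheme just described, the following minimal Python sketch runs the gradient-projection iteration for a simple quadratic objective over a box constraint; the objective, the constraint set, and the step size are illustrative choices, not taken from this paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box [lo, hi]^n (a closed convex set)."""
    return np.clip(x, lo, hi)

def gradient_projection(grad, project, x0, step, iters=500):
    """Gradient-projection iteration x_{k+1} = P_C(x_k - step * grad(x_k))."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Minimize f(x) = 0.5 * ||x - b||^2 over the box [0, 1]^2.
b = np.array([2.0, -0.5])
grad = lambda x: x - b                     # gradient of f, 1-Lipschitz
x_star = gradient_projection(grad, lambda x: project_box(x, 0.0, 1.0),
                             x0=np.zeros(2), step=0.5)
# The minimizer over the box is the projection of b onto it: (1.0, 0.0).
```

Here the gradient is 1-Lipschitz, so any step size below 2 keeps the iteration convergent for this objective.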

Recently, Xu [35] gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely, an averaged mapping approach. He also presented two modifications of gradient-projection algorithms which were shown to have strong convergence.

On the other hand, regularization, in particular the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems [36]. The disadvantage is that, under some conditions, the regularized gradient-projection algorithm (RGPA) converges only weakly for the regularization problem.

The purpose of the paper is to study an iterative method for finding the common solution of an equilibrium problem and a constrained convex minimization problem. Based on the viscosity method [18], we combine the RGPA and the averaged mapping approach to propose implicit and explicit composite iterative methods for finding a common element of the set of solutions of an equilibrium problem and the solution set of a constrained convex minimization problem, and we prove some strong convergence theorems.

2. Preliminaries

Throughout the paper, we assume that $H$ is a real Hilbert space whose inner product and norm are denoted by $\langle \cdot , \cdot \rangle$ and $\| \cdot \|$, respectively, and $C$ is a nonempty closed convex subset of $H$. The set of fixed points of a mapping $T$ is denoted by $\mathrm{Fix}(T)$; that is, $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$. The fact that the sequence $\{x_n\}$ converges strongly to $x$ is denoted by $x_n \to x$. The following definitions and results are needed in the subsequent sections.

Recall that a mapping $T \colon C \to C$ is said to be $L$-Lipschitzian if $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in C$, where $L \ge 0$ is a constant. In particular, if $L \in [0, 1)$, then $T$ is called a contraction on $C$; if $L = 1$, then $T$ is called a nonexpansive mapping on $C$. $T$ is called firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently, $\langle x - y, Tx - Ty \rangle \ge \|Tx - Ty\|^2$ for all $x, y \in C$. Alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T = \tfrac{1}{2}(I + S)$, where $S$ is nonexpansive.

In 1978, Baillon et al. [37] defined the concept of averaged mapping which is used very frequently now.

Definition 1 (see [37]). A mapping $T \colon H \to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping; that is,
$$T = (1 - \alpha)I + \alpha S, \tag{8}$$
where $\alpha$ is a number in $(0, 1)$ and $S \colon H \to H$ is nonexpansive. More precisely, when (8) holds, we say that $T$ is $\alpha$-averaged. Clearly, a firmly nonexpansive mapping (in particular, a projection) is a $\tfrac{1}{2}$-averaged map.
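A small numerical illustration of Definition 1 (the rotation map and the value $\alpha = 1/2$ below are illustrative choices): averaging a nonexpansive map with the identity keeps it nonexpansive and, unlike plain iteration of a rotation, iterating the averaged map converges to a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

# A nonexpansive map on R^2: rotation by 90 degrees (an isometry, hence
# nonexpansive, but plain iteration of S itself does not converge).
R = np.array([[0.0, -1.0], [1.0, 0.0]])
S = lambda x: R @ x

# Its 1/2-averaged version T = (1 - a) I + a S with a = 0.5.
a = 0.5
T = lambda x: (1 - a) * x + a * S(x)

# T is still nonexpansive: ||Tx - Ty|| <= ||x - y|| on random pairs.
for _ in range(100):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    assert np.linalg.norm(T(u) - T(v)) <= np.linalg.norm(u - v) + 1e-12

# Iterating the averaged map converges to the fixed point 0 of S.
x = np.array([1.0, 1.0])
for _ in range(200):
    x = T(x)
# x is now numerically indistinguishable from the fixed point (0, 0).
```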

Proposition 2 (see [28, 38]). For given operators $S, T, V \colon H \to H$, one has the following.

(i) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.

(ii) $T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.

(iii) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged.

Recall that the metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C \colon H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
$$\|x - P_C x\| = \inf_{y \in C}\|x - y\|.$$

In 1984, Goebel and Reich [39] discussed the properties of the nearest point projection.

Lemma 3 (see [39]). For given $x \in H$ and $z \in C$, one has the following: (i) $z = P_C x$ if and only if $\langle x - z, y - z \rangle \le 0$ for all $y \in C$; (ii) $z = P_C x$ if and only if $\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$ for all $y \in C$; (iii) $\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$.
Consequently, $P_C$ is nonexpansive and monotone.
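The characterization in Lemma 3(i) and the resulting nonexpansiveness can be checked numerically; the sketch below uses the closed unit ball in $\mathbb{R}^3$ as the convex set $C$ (an illustrative choice, since its projection has a closed form).

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection P_C onto the closed ball C of radius r centered at 0."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

rng = np.random.default_rng(1)
for _ in range(1000):
    x = 3.0 * rng.standard_normal(3)             # arbitrary point of H = R^3
    y = proj_ball(3.0 * rng.standard_normal(3))  # arbitrary point of C
    p = proj_ball(x)
    # Lemma 3(i): p = P_C x if and only if <x - p, y - p> <= 0 for all y in C.
    assert np.inner(x - p, y - p) <= 1e-9
    # Consequence: P_C is nonexpansive.
    z = 3.0 * rng.standard_normal(3)
    assert np.linalg.norm(proj_ball(x) - proj_ball(z)) <= np.linalg.norm(x - z) + 1e-9

p = proj_ball(np.array([3.0, 0.0, 0.0]))  # projects radially to (1, 0, 0)
```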

Lemma 4. The following inequality holds in a Hilbert space $H$: $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$ for all $x, y \in H$.

Lemma 5 (see [40]). In a Hilbert space $H$, one has $\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$ for all $x, y \in H$ and $\lambda \in [0, 1]$.

Lemma 6 (Demiclosedness Principle [40]). Let $H$ be a Hilbert space, $C$ a closed convex subset of $H$, and $T \colon C \to C$ a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$; in particular, if $y = 0$, then $x \in \mathrm{Fix}(T)$.

Definition 7. A nonlinear operator $T$ whose domain $D(T) \subseteq H$ and range $R(T) \subseteq H$ is said to be (i) monotone if $\langle x - y, Tx - Ty \rangle \ge 0$ for all $x, y \in D(T)$; (ii) $\beta$-strongly monotone if there exists $\beta > 0$ such that $\langle x - y, Tx - Ty \rangle \ge \beta\|x - y\|^2$ for all $x, y \in D(T)$; (iii) $\nu$-inverse strongly monotone (for short, $\nu$-ism) if there exists $\nu > 0$ such that $\langle x - y, Tx - Ty \rangle \ge \nu\|Tx - Ty\|^2$ for all $x, y \in D(T)$.

Proposition 8 (see [28]). Let $T$ be an operator from $H$ to itself. (i) $T$ is nonexpansive if and only if the complement $I - T$ is $\tfrac{1}{2}$-ism. (ii) If $T$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma T$ is $\tfrac{\nu}{\gamma}$-ism. (iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > 1/2$. Indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\tfrac{1}{2\alpha}$-ism.

Lemma 9 (see [18]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that $a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n$, $n \ge 0$, where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that (i) $\sum_{n=0}^{\infty}\gamma_n = \infty$; (ii) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\gamma_n\delta_n| < \infty$. Then $\lim_{n\to\infty}a_n = 0$.
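Lemma 9 can be illustrated numerically; the parameter sequences below are illustrative choices satisfying the hypotheses, not sequences from the paper.

```python
# Numerical illustration of Lemma 9 with the illustrative choices
# gamma_n = 1/n (each gamma_n lies in (0,1) for n >= 2, and the series
# sum gamma_n diverges) and delta_n = 1/n (so limsup delta_n = 0 <= 0).
a = 1.0
for n in range(2, 200001):
    gamma = 1.0 / n
    delta = 1.0 / n
    a = (1.0 - gamma) * a + gamma * delta
# The recursion drives a_n toward 0, as the lemma predicts.
```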

In order to solve the equilibrium problem for a bifunction $F \colon C \times C \to \mathbb{R}$, let us assume that $F$ satisfies the following conditions: (A1) $F(x, x) = 0$ for all $x \in C$; (A2) $F$ is monotone; that is, $F(x, y) + F(y, x) \le 0$ for all $x, y \in C$; (A3) for all $x, y, z \in C$, $\limsup_{t \downarrow 0} F(tz + (1 - t)x, y) \le F(x, y)$; (A4) for each fixed $x \in C$, the function $y \mapsto F(x, y)$ is convex and lower semicontinuous.

Let us recall the following lemmas which will be useful for our paper.

Lemma 10 (see [28]). Let $F$ be a bifunction from $C \times C$ into $\mathbb{R}$ satisfying (A1), (A2), (A3), and (A4). Then, for any $r > 0$ and $x \in H$, there exists $z \in C$ such that $F(z, y) + \tfrac{1}{r}\langle y - z, z - x \rangle \ge 0$ for all $y \in C$. Further, if $T_r x = \{z \in C : F(z, y) + \tfrac{1}{r}\langle y - z, z - x \rangle \ge 0 \text{ for all } y \in C\}$, then the following hold: (1) $T_r$ is single-valued; (2) $T_r$ is firmly nonexpansive; that is, $\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y \rangle$ for all $x, y \in H$; (3) $\mathrm{Fix}(T_r) = EP(F)$; (4) $EP(F)$ is closed and convex.

3. Main Results

We now consider the constrained convex minimization problem
$$\min_{x \in C} f(x), \tag{22}$$
where $C$ is a closed and convex subset of a Hilbert space $H$ and $f \colon C \to \mathbb{R}$ is a real-valued convex function. If $f$ is Fréchet differentiable, then the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}$ according to the recursive formula
$$x_{n+1} = P_C\big(x_n - \gamma \nabla f(x_n)\big), \tag{23}$$
or, more generally,
$$x_{n+1} = P_C\big(x_n - \gamma_n \nabla f(x_n)\big), \tag{24}$$
where, in both (23) and (24), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\gamma$ or $\gamma_n$ are positive real numbers.

As a matter of fact, it is known that, if $\nabla f$ fails to be strongly monotone and is only $\tfrac{1}{L}$-ism, namely, there is a constant $L > 0$ such that $\langle \nabla f(x) - \nabla f(y), x - y \rangle \ge \tfrac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2$ for all $x, y \in C$, then, under some assumptions on $\gamma$ or $\gamma_n$, algorithms (23) and (24) can still converge in the weak topology.

Now, consider the regularized minimization problem
$$\min_{x \in C} f_\alpha(x) := f(x) + \frac{\alpha}{2}\|x\|^2,$$
where $\alpha > 0$ is the regularization parameter, and again $f$ is a convex function with a $\tfrac{1}{L}$-ism continuous gradient $\nabla f$.

It is obvious that there exists a unique point $x_\alpha \in C$ such that $x_\alpha$ is the unique fixed point of the mapping $x \mapsto P_C\big(x - \gamma(\nabla f(x) + \alpha x)\big)$.

We can prove that $x_\alpha \to x^{*}$ as $\alpha \to 0$, where $x^{*}$ is a solution of the constrained convex minimization problem.
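The regularized gradient-projection idea can be sketched numerically. In the illustrative example below (the matrix, constraint box, and step-size rule are my choices, not the paper's), the unregularized problem has many minimizers, and the regularized fixed points $x_\alpha$ approach the minimum-norm minimizer as $\alpha$ decreases.

```python
import numpy as np

def regularized_gpa(grad, project, x0, L, alpha, iters=2000):
    """Gradient projection applied to f_alpha(x) = f(x) + (alpha/2) ||x||^2.

    The gradient of f_alpha is grad(x) + alpha * x, which is
    (L + alpha)-Lipschitz, so the step size 1 / (L + alpha) is safe.
    """
    x = x0
    step = 1.0 / (L + alpha)
    for _ in range(iters):
        x = project(x - step * (grad(x) + alpha * x))
    return x

# Illustrative problem: minimize f(x) = 0.5 * ||A x - b||^2 over the box [0,1]^2.
A = np.array([[1.0, 0.0], [0.0, 0.0]])   # rank-deficient: many minimizers
b = np.array([0.5, 0.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)           # Lipschitz constant of grad f
project = lambda x: np.clip(x, 0.0, 1.0)

# As alpha decreases, x_alpha approaches the minimum-norm minimizer (0.5, 0).
for alpha in (1.0, 0.1, 0.01):
    x_alpha = regularized_gpa(grad, project, np.ones(2), L, alpha)
```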

Throughout the rest of this paper, assume that the minimization problem (22) is consistent and let denote its solution set; we always assume that is a contraction of into with coefficient ; let be a sequence of mappings defined as in Lemma 3 and define a mapping by Consider the following mapping on defined by where ; then by Lemmas 3 and 10

Since , it follows that is a contraction. Therefore, by the Banach contraction principle, has a unique fixed point such that
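The Banach contraction principle invoked here can be illustrated numerically; the affine map below is an illustrative $\rho$-contraction on $\mathbb{R}^2$ (with $\rho = 0.5$), not the composite mapping of the paper.

```python
import numpy as np

# Banach contraction principle, numerically: a rho-contraction has a unique
# fixed point, and Picard iteration converges to it from any starting point.
rho = 0.5                              # contraction coefficient, rho < 1
b = np.array([1.0, -2.0])
T = lambda x: rho * x + b              # ||Tx - Ty|| = rho * ||x - y||

x = np.array([100.0, 100.0])           # arbitrary starting point
for _ in range(100):
    x = T(x)
# The fixed point solves x = rho * x + b, i.e. x = b / (1 - rho) = (2, -4).
```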

For simplicity, we will use the abbreviated notation provided that no confusion occurs. Next, we prove the convergence of the net of fixed points, while establishing the existence of the point which solves the variational inequality (32).

3.1. Convergence of the Implicit Scheme

Proposition 11. If , , is -ism, for all , where , then where

Proof. One here then where, .

Theorem 12. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let the given mapping be a contraction with coefficient in $(0,1)$, and let $F$ be a bifunction from $C \times C$ into $\mathbb{R}$ satisfying (A1), (A2), (A3), and (A4). Let the sequence be generated by the implicit scheme, where the parameters satisfy the following conditions: (i) ; (ii) ; (iii) .
Then, the generated sequence converges strongly to a point which solves the variational inequality (32).

Proof. Pick any , , ; then we have (noting ) hence, So, is bounded. Next, we claim that .
Take ; by Lemma 3, we have It follows that So, . Next, we show that , Observe that Hence, So,
Since is bounded, there exists such that . Since is closed and convex, is weakly closed. So, we have . Let us show that . Assume that . Since and , it follows from Opial's condition that This is a contradiction. So, we get .
Next, we show that . Since , for any , we obtain From , we have Replacing with , we have Since and , it follows from that , for all . Let for all and . Then, we have and hence . Thus, from and we have and hence . From , we have for all and hence . Therefore, .
On the other hand, then if .
Next, we prove that solves the VI (problem): Note that

3.2. Convergence of the Explicit Scheme

Theorem 13. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let the given mapping be a contraction with coefficient in $(0,1)$, and let $F$ be a bifunction from $C \times C$ into $\mathbb{R}$ satisfying (A1), (A2), (A3), and (A4). Let the sequences be generated by the explicit scheme, where the parameter sequences satisfy the following conditions: (i) ; (ii) ; (iii) .
Then, the generated sequences converge strongly to a point which solves the variational inequality (32).

Proof. First, we prove that the generated sequence is bounded.
Taking any , we have So, , So, is bounded.
Next we prove that , From and , we note that Putting in (61) and in (62), we have
So, from (A2), we have and hence
Since , without loss of generality, let us assume that there exists a real number such that for all . Thus, we have thus, where ,
So So Next, we prove that , So, , Observe that Hence, So,
Since is bounded, there exists such that . Since is closed and convex, is weakly closed. So, we have . Let us show that . Assume that . Since and , it follows from Opial's condition that This is a contradiction. So, we get .
Next, we show that . Since , for any , we obtain From , we have Replacing with , we have Since and , it follows from that , for all . Let for all and . Then, we have and hence . Thus, from and , we have and hence . From , we have for all and hence . Therefore, .
We assume that; , then , Finally, we prove that , by Lemma 9 and , ; , then .

4. Application of the Iterative Method

Next, we give an application of Theorem 13 to the split feasibility problem (say SFP, for short), which was introduced by Censor and Elfving [27]: find $x \in C$ such that
$$Ax \in Q, \tag{83}$$
where $C$ and $Q$ are nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A \colon H_1 \to H_2$ is a bounded linear operator.

It is clear that $x$ is a solution to the split feasibility problem (83) if and only if $x \in C$ and $Ax \in Q$.

We define the proximity function $g$ by $g(x) = \tfrac{1}{2}\|Ax - P_Q Ax\|^2$ and consider the convex optimization problem
$$\min_{x \in C} g(x). \tag{85}$$

Then, $x$ solves the split feasibility problem (83) if and only if $x$ solves the minimization problem (85) with the minimum equal to 0. Byrne [28] introduced the so-called CQ algorithm to solve the SFP:
$$x_{n+1} = P_C\big(x_n - \gamma A^{*}(I - P_Q)A x_n\big), \tag{86}$$
where $0 < \gamma < 2/\|A\|^2$.

He proved that the sequence generated by (86) converges weakly to a solution of the SFP.
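The CQ iteration (86) admits a short numerical sketch; the sets $C$ and $Q$, the operator $A$, and the step size below are illustrative choices, not data from the paper.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, iters=2000):
    """Byrne's CQ iteration x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n),
    with step size gamma chosen in (0, 2 / ||A||^2)."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0
    for _ in range(iters):
        y = A @ x
        x = proj_C(x - gamma * A.T @ (y - proj_Q(y)))
    return x

# Illustrative SFP: C is the box [0,1]^2, Q is the closed unit ball in R^2.
A = np.array([[2.0, 0.0], [0.0, 2.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
def proj_Q(y):
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

x = cq_algorithm(A, proj_C, proj_Q, x0=np.array([1.0, 1.0]))
# x lies in C and Ax lies (approximately) in Q, so x solves this SFP.
```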

Now we consider the regularization technique: let $g_\alpha(x) = g(x) + \tfrac{\alpha}{2}\|x\|^2$. Applying Theorem 13, we obtain the following result.

Theorem 14. Assume that the split feasibility problem (83) is consistent. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H_1$, let the given mapping be a contraction with coefficient in $(0,1)$, and let $F$ be a bifunction from $C \times C$ into $\mathbb{R}$ satisfying (A1), (A2), (A3), and (A4). Let the sequences be generated by the corresponding scheme, where the parameter sequences satisfy the following conditions: (i) ; (ii) ; (iii) .
Then, the generated sequences converge strongly to a point which solves the variational inequality (32).

Proof. By the definition of the proximity function $g$, the gradient $\nabla g = A^{*}(I - P_Q)A$ is $\tfrac{1}{\|A\|^2}$-ism; the conclusion then follows immediately from Theorem 13.

Acknowledgments

The author wishes to thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported in part by The Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China, no. 3122013 K004).