#### Abstract

The constrained convex minimization problem is to find a point $x^*$ with the property that $x^* \in C$ and $g(x^*) = \min_{x \in C} g(x)$, where $C$ is a nonempty, closed, and convex subset of a real Hilbert space $H$ and $g : C \to \mathbb{R}$ is a real-valued convex function that is lower semicontinuous but not Fréchet differentiable. In this paper, we discuss an iterative algorithm which is different from traditional gradient-projection algorithms. We first construct a bifunction $G$ defined by $G(x, y) = g(y) - g(x)$ and show that the equilibrium problem for $G$ is equivalent to the above optimization problem. We then use iterative methods for equilibrium problems to study this optimization problem. Based on Jung’s method (2011), we propose a general approximation method and prove the strong convergence of our algorithm to a solution of the above optimization problem. In addition, we apply the proposed iterative algorithm to find a solution of the split feasibility problem and establish a strong convergence theorem. The results of this paper extend and improve some existing results.

#### 1. Introduction

Let $H$ be a real Hilbert space with the inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\|\cdot\|$. Let $C$ be a nonempty, closed, and convex subset of $H$. Recall that a mapping $T : C \to C$ is nonexpansive (see [1]) if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C.$$
We denote by $\mathrm{Fix}(T)$ the set of fixed points of $T$. Recall that a self-mapping $f$ on $C$ is a contraction if there exists a constant $\rho \in (0, 1)$ such that
$$\|f(x) - f(y)\| \le \rho\|x - y\|, \quad \forall x, y \in C.$$

Consider the following constrained convex minimization problem:
$$\min_{x \in C} g(x), \tag{3}$$
where $g : C \to \mathbb{R}$ is a real-valued convex function. Assume that the constrained convex minimization problem (3) is solvable, and let $\Omega$ denote the set of solutions of (3). Assume that $g$ is lower semicontinuous. Let $G$ be a bifunction from $C \times C$ to $\mathbb{R}$ defined by $G(x, y) = g(y) - g(x)$. It is easy to see that $\Omega = EP(G)$, where $EP(G)$ denotes the set of solutions of the equilibrium problem to find $z \in C$ such that
$$G(z, y) \ge 0, \quad \forall y \in C. \tag{4}$$
Hence, the optimization problem (3) is equivalent to the equilibrium problem (4).
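The claimed equivalence is immediate from the definition of the bifunction $G(x, y) = g(y) - g(x)$:

```latex
x^* \in EP(G)
  \iff g(y) - g(x^*) \ge 0 \quad \forall y \in C
  \iff g(x^*) = \min_{y \in C} g(y)
  \iff x^* \in \Omega .
```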

For solving the convex optimization problem (3), Su and Li [2] introduced an iterative scheme (5) in a Hilbert space and, under appropriate conditions, proved that the sequence generated by (5) converges strongly to a solution of the optimization problem (3).

In 2010, combining Yamada’s method [3] and Marino and Xu’s method [4], Tian [5] proposed a general iterative algorithm:
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu\alpha_n F)Tx_n, \quad n \ge 0, \tag{6}$$
where $T$ is a nonexpansive mapping, $f$ is a contraction with coefficient $\rho \in (0, 1)$, and $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with $\kappa > 0$, $\eta > 0$. Let $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma < \mu(\eta - \mu\kappa^2/2)/\rho = \tau/\rho$. It is proved that the sequence $\{x_n\}$ generated by (6) converges strongly to a fixed point $\tilde{x} \in \mathrm{Fix}(T)$, which solves the variational inequality
$$\langle(\gamma f - \mu F)\tilde{x}, x - \tilde{x}\rangle \le 0, \quad \forall x \in \mathrm{Fix}(T). \tag{7}$$

In 2011, Jung [6] introduced the following general iterative scheme for a $k$-strictly pseudocontractive mapping $T$:
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu\alpha_n F)P_C S x_n, \quad n \ge 0, \tag{8}$$
where $S : C \to H$ is a mapping defined by $Sx = kx + (1 - k)Tx$ and $P_C$ is the metric projection of $H$ onto $C$. Under appropriate conditions, he established the strong convergence of the sequence generated by (8) to a fixed point of $T$, which is a solution of the variational inequality (7).

In this paper, motivated and inspired by the above results, we introduce a general iterative method: $x_0 \in C$ and
$$g(y) - g(u_n) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, \quad \forall y \in C,$$
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu\alpha_n F)u_n \tag{9}$$
for solving the optimization problem (3), where $u_n = T_{r_n}x_n$. Under appropriate conditions, it is proved that the sequence $\{x_n\}$ generated by (9) converges strongly to a point $q \in \Omega$ which solves the variational inequality
$$\langle(\gamma f - \mu F)q, p - q\rangle \le 0, \quad \forall p \in \Omega. \tag{10}$$
Furthermore, using this result, we study the split feasibility problem and obtain an iterative algorithm for solving it.

#### 2. Preliminaries

In this section we introduce some useful definitions and lemmas which will be used in the proofs for the main results in the next section.

Monotone operators are very useful in the convergence analysis.

*Definition 1 (see [7] for a comprehensive theory of monotone operators). *Let $A : H \to H$ be an operator.
(i) $A$ is monotone if and only if $\langle Ax - Ay, x - y\rangle \ge 0$ for all $x, y \in H$.
(ii) $A$ is said to be $\eta$-strongly monotone if there exists a positive constant $\eta$ such that $\langle Ax - Ay, x - y\rangle \ge \eta\|x - y\|^2$ for all $x, y \in H$.
(iii) $A$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if there exists a positive constant $\nu$ such that $\langle Ax - Ay, x - y\rangle \ge \nu\|Ax - Ay\|^2$ for all $x, y \in H$.

Inverse strongly monotone operators have been studied widely (see [7–9]) and applied to solve practical problems in various fields, for instance, traffic assignment problems (see [10, 11]).

Lemma 2 (see [5]). *Let $H$ be a Hilbert space, $f : H \to H$ a contraction with a coefficient $0 < \rho < 1$, and $F : H \to H$ a $\kappa$-Lipschitz continuous and $\eta$-strongly monotone operator with constants $\kappa > 0$, $\eta > 0$. Then, for $0 < \gamma < \mu\eta/\rho$,
$$\langle x - y, (\mu F - \gamma f)x - (\mu F - \gamma f)y\rangle \ge (\mu\eta - \gamma\rho)\|x - y\|^2, \quad \forall x, y \in H.$$
That is, $\mu F - \gamma f$ is strongly monotone with a coefficient $\mu\eta - \gamma\rho$.*

Recall the metric (nearest point) projection $P_C$ from a real Hilbert space $H$ to a closed and convex subset $C$ of $H$, which is defined as follows: given $x \in H$, $P_C x$ is the unique point in $C$ with the property
$$\|x - P_C x\| = \min_{y \in C}\|x - y\|.$$
$P_C$ is characterized as follows.

Lemma 3. *Let $C$ be a closed and convex subset of a real Hilbert space $H$. Given $x \in H$ and $z \in C$, then $z = P_C x$ if and only if the following inequality holds:
$$\langle x - z, y - z\rangle \le 0, \quad \forall y \in C.$$*
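As a concrete illustration of this characterization (a sketch of ours, not from the paper), take $C$ to be the closed unit ball of $\mathbb{R}^3$, for which the projection has the closed form $P_C x = x/\max\{1, \|x\|\}$; the inequality of Lemma 3 can then be checked numerically at random points of $C$:

```python
import math, random

def proj_unit_ball(x):
    # P_C for C the closed unit ball of R^3
    n = math.sqrt(sum(c * c for c in x))
    return [c / max(1.0, n) for c in x]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
x = [4.0, -3.0, 2.0]             # a point outside the ball
px = proj_unit_ball(x)           # its metric projection onto C
for _ in range(1000):
    y = [random.uniform(-1.0, 1.0) for _ in range(3)]
    n = math.sqrt(dot(y, y))
    if n > 1.0:                  # rescale so that y lies in C
        y = [c / n for c in y]
    # Lemma 3: <x - P_C x, y - P_C x> <= 0 for every y in C
    assert dot([a - b for a, b in zip(x, px)],
               [a - b for a, b in zip(y, px)]) <= 1e-9
```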

For solving the equilibrium problem for a bifunction $G : C \times C \to \mathbb{R}$, let us assume that $G$ satisfies the following conditions:
(A1) $G(x, x) = 0$ for all $x \in C$;
(A2) $G$ is monotone; that is, $G(x, y) + G(y, x) \le 0$ for all $x, y \in C$;
(A3) for each $x, y, z \in C$, $\limsup_{t \downarrow 0} G(tz + (1 - t)x, y) \le G(x, y)$;
(A4) for each $x \in C$, $y \mapsto G(x, y)$ is convex and lower semicontinuous.

Lemma 4 (see [12]). *Let $C$ be a nonempty, closed, and convex subset of $H$ and let $G$ be a bifunction of $C \times C$ into $\mathbb{R}$ satisfying (A1)–(A4). Let $r > 0$ and $x \in H$. Then, there exists $z \in C$ such that
$$G(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.$$*

Lemma 5 (see [13]). *Assume that $G : C \times C \to \mathbb{R}$ satisfies (A1)–(A4). For $r > 0$ and $x \in H$, define a mapping $T_r : H \to C$ as follows:
$$T_r(x) = \left\{z \in C : G(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0,\ \forall y \in C\right\}.$$

Then, the following hold:*
(1) *$T_r$ is single-valued;*
(2) *$T_r$ is firmly nonexpansive; that is, for any $x, y \in H$, $\|T_r x - T_r y\|^2 \le \langle T_r x - T_r y, x - y\rangle$;*
(3) *$\mathrm{Fix}(T_r) = EP(G)$;*
(4) *$EP(G)$ is closed and convex.*

Lemma 6 (see [14]). *Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that*
(i) *$\sum_{n=0}^{\infty}\gamma_n = \infty$;*
(ii) *either $\limsup_{n\to\infty}\delta_n/\gamma_n \le 0$ or $\sum_{n=0}^{\infty}|\delta_n| < \infty$.*
*Then $\lim_{n\to\infty}a_n = 0$.*
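The lemma is easy to test numerically; the particular sequences below are illustrative choices of ours, not from the paper: $\gamma_n = 1/(n+1)$ (so $\sum_n \gamma_n = \infty$) and $\delta_n = \gamma_n^2$ (so $\delta_n/\gamma_n \to 0$), for which the recursion drives $a_n$ to $0$ regardless of the starting value:

```python
a = 1.0                       # a_0; any nonnegative start works
for n in range(10 ** 5):
    g = 1.0 / (n + 1)         # gamma_n, with sum gamma_n = infinity
    d = g * g                 # delta_n, with delta_n / gamma_n -> 0
    a = (1.0 - g) * a + d     # the recursion of Lemma 6 (with equality)
# a has decayed to roughly log(n)/n, i.e. close to 0
```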

We adopt the following notation: (i) $x_n \to x$ means that $x_n$ converges to $x$ strongly; (ii) $x_n \rightharpoonup x$ means that $x_n$ converges to $x$ weakly.

#### 3. Main Results

Recall that throughout this paper we always use $\Omega$ to denote the solution set of the constrained convex minimization problem (3).

Let $H$ be a real Hilbert space and let $C$ be a nonempty, closed, and convex subset of $H$. Let $g : C \to \mathbb{R}$ be a real-valued convex function. Assume that $g$ is lower semicontinuous. Let $f : C \to C$ be a contraction with a coefficient $\rho \in (0, 1)$, and let $F$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa, \eta > 0$. Suppose that $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma < \tau/\rho$, where $\tau = \mu(\eta - \mu\kappa^2/2)$.

Next, we study the following iterative method. For a given arbitrary initial guess $x_0 \in C$, the sequence $\{x_n\}$ is generated by the following recursive formula:
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu\alpha_n F)T_{r_n}x_n, \quad n \ge 0,$$
where $\{\alpha_n\} \subset (0, 1)$, $\{r_n\} \subset (0, \infty)$, and $T_{r_n}$ is the mapping defined as in Lemma 5 for the bifunction $G(x, y) = g(y) - g(x)$.
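With $G(x, y) = g(y) - g(x)$, the resolvent of Lemma 5 is the proximal mapping $T_r x = \arg\min_{y \in C}\{g(y) + \frac{1}{2r}\|y - x\|^2\}$, so the scheme can be tried numerically. A minimal one-dimensional sketch (the problem data and parameter choices here are illustrative, not from the paper): minimize $g(x) = |x|$ over $C = [0.5, 2]$, whose unique solution is $0.5$; take $f(x) = x/2$ (so $\rho = 1/2$), $F = I$ ($\kappa = \eta = 1$), $\mu = 1$, $\gamma = 1/2$ (so $\gamma\rho = 1/4 < \tau = 1/2$), $\alpha_n = 1/(n+1)$, and $r_n \equiv 1$:

```python
def resolvent(x, r):
    # T_r for g(t) = |t| on C = [0.5, 2]: soft-thresholding, then projection onto C
    soft = max(abs(x) - r, 0.0) * (1.0 if x >= 0 else -1.0)
    return min(max(soft, 0.5), 2.0)

mu, gam = 1.0, 0.5
f = lambda t: 0.5 * t            # contraction with coefficient rho = 1/2

x = 2.0                          # initial guess x_0
for n in range(2000):
    alpha = 1.0 / (n + 1)
    u = resolvent(x, 1.0)        # u_n = T_{r_n} x_n
    # x_{n+1} = alpha_n * gamma * f(x_n) + (I - mu * alpha_n * F) u_n, with F = I
    x = alpha * gam * f(x) + (1.0 - mu * alpha) * u
# x is now close to the unique minimizer 0.5
```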

In order to prove the convergence, we also need the following proposition.

Proposition 7. *Let $H$ be a real Hilbert space and let $F : H \to H$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa, \eta > 0$. Suppose that $0 < \mu < 2\eta/\kappa^2$ and $0 < t \le 1$. Then, the mapping $S = I - t\mu F$ is a contraction with a contractive constant $1 - t\tau$, where $\tau = \mu(\eta - \mu\kappa^2/2)$.*

*Proof. *Taking $x, y \in H$, we have
$$\begin{aligned}
\|Sx - Sy\|^2 &= \|(x - y) - t\mu(Fx - Fy)\|^2 \\
&= \|x - y\|^2 - 2t\mu\langle x - y, Fx - Fy\rangle + t^2\mu^2\|Fx - Fy\|^2 \\
&\le \left(1 - 2t\mu\eta + t^2\mu^2\kappa^2\right)\|x - y\|^2 \\
&\le \left(1 - t\mu(2\eta - \mu\kappa^2)\right)\|x - y\|^2 \le (1 - t\tau)^2\|x - y\|^2.
\end{aligned}$$
Thus,
$$\|Sx - Sy\| \le (1 - t\tau)\|x - y\|,$$
where $\tau = \mu(\eta - \mu\kappa^2/2)$; that is, $S$ is a contraction with a contractive constant $1 - t\tau$.

Theorem 8. *Let $H$ be a real Hilbert space and let $C$ be a nonempty, closed, and convex subset of $H$. Let $g : C \to \mathbb{R}$ be a real-valued convex function. Assume that $g$ is lower semicontinuous. Let $f : C \to C$ be a contraction with a coefficient $\rho \in (0, 1)$ and let $F$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with constants $\kappa, \eta > 0$. Suppose that $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma < \tau/\rho$, where $\tau = \mu(\eta - \mu\kappa^2/2)$. Suppose that the optimization problem (3) is consistent and let $\Omega$ denote its solution set. Let $\{x_n\}$ be generated by $x_0 \in C$ and
$$g(y) - g(u_n) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, \quad \forall y \in C,$$
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu\alpha_n F)u_n, \tag{23}$$
where $u_n = T_{r_n}x_n$ and $T_{r_n}$ is the mapping defined as in Lemma 5. Let $\{\alpha_n\}$ and $\{r_n\}$ satisfy the following conditions:*
(C1) *$\{\alpha_n\} \subset (0, 1)$, $\lim_{n\to\infty}\alpha_n = 0$, and $\sum_{n=0}^{\infty}\alpha_n = \infty$;*
(C2) *$\sum_{n=0}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$;*
(C3) *$\{r_n\} \subset (0, \infty)$ and $\liminf_{n\to\infty}r_n > 0$;*
(C4) *$\sum_{n=0}^{\infty}|r_{n+1} - r_n| < \infty$.*
*Then the sequence $\{x_n\}$ generated by (23) converges strongly to a point $q \in \Omega$, which solves the variational inequality
$$\langle(\gamma f - \mu F)q, p - q\rangle \le 0, \quad \forall p \in \Omega. \tag{24}$$*

*Proof. *Let $G$ be a bifunction from $C \times C$ to $\mathbb{R}$ defined by $G(x, y) = g(y) - g(x)$. We consider the following equilibrium problem, that is, to find $z \in C$ such that
$$G(z, y) \ge 0, \quad \forall y \in C. \tag{25}$$
It is obvious that $\Omega = EP(G)$, where $EP(G)$ denotes the set of solutions of equilibrium problem (25). In addition, it is easy to see that $G$ satisfies the conditions *(A1)–(A4)* in Section 2. Then, the iterative method (23) is equivalent to $x_0 \in C$ and
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \mu\alpha_n F)u_n,$$
where $u_n = T_{r_n}x_n$.

It is easy to see the uniqueness of a solution of variational inequality (24). By Lemma 2, $\mu F - \gamma f$ is strongly monotone, so variational inequality (24) has only one solution. Below, we use $q$ to denote the unique solution of (24). Since $q$ solves variational inequality (24), (24) can be written as
$$\langle(I - \mu F + \gamma f)q - q, p - q\rangle \le 0, \quad \forall p \in \Omega.$$
So, in terms of Lemma 3, it is equivalent to the following fixed-point equation:
$$q = P_{\Omega}(I - \mu F + \gamma f)q.$$
Now, we show that $\{x_n\}$ is bounded. Indeed, picking $p \in \Omega$, since $u_n = T_{r_n}x_n$ and $p = T_{r_n}p$, by Lemma 5 we know that
$$\|u_n - p\| = \|T_{r_n}x_n - T_{r_n}p\| \le \|x_n - p\|. \tag{29}$$
From Proposition 7 and (29), we derive that
$$\begin{aligned}
\|x_{n+1} - p\| &= \|\alpha_n(\gamma f(x_n) - \mu Fp) + (I - \mu\alpha_n F)u_n - (I - \mu\alpha_n F)p\| \\
&\le (1 - \alpha_n\tau)\|x_n - p\| + \alpha_n\gamma\rho\|x_n - p\| + \alpha_n\|\gamma f(p) - \mu Fp\| \\
&= \left(1 - (\tau - \gamma\rho)\alpha_n\right)\|x_n - p\| + \alpha_n\|\gamma f(p) - \mu Fp\|.
\end{aligned}$$
By induction, we have
$$\|x_n - p\| \le \max\left\{\|x_0 - p\|,\ \frac{\|\gamma f(p) - \mu Fp\|}{\tau - \gamma\rho}\right\}, \quad n \ge 0.$$
Hence $\{x_n\}$ is bounded. From (29), we also derive that $\{u_n\}$ is bounded.

Next, we show that $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$. Consider
$$\begin{aligned}
\|x_{n+1} - x_n\| ={}& \|\alpha_n\gamma f(x_n) + (I - \mu\alpha_n F)u_n - \alpha_{n-1}\gamma f(x_{n-1}) - (I - \mu\alpha_{n-1}F)u_{n-1}\| \\
\le{}& \alpha_n\gamma\rho\|x_n - x_{n-1}\| + (1 - \alpha_n\tau)\|u_n - u_{n-1}\| \\
&+ |\alpha_n - \alpha_{n-1}|\left(\gamma\|f(x_{n-1})\| + \mu\|Fu_{n-1}\|\right).
\end{aligned} \tag{32}$$
From $u_n = T_{r_n}x_n$ and $u_{n+1} = T_{r_{n+1}}x_{n+1}$, we note that
$$g(y) - g(u_n) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, \quad \forall y \in C, \tag{33}$$
$$g(y) - g(u_{n+1}) + \frac{1}{r_{n+1}}\langle y - u_{n+1}, u_{n+1} - x_{n+1}\rangle \ge 0, \quad \forall y \in C. \tag{34}$$
Putting $y = u_{n+1}$ in (33) and $y = u_n$ in (34), we have
$$g(u_{n+1}) - g(u_n) + \frac{1}{r_n}\langle u_{n+1} - u_n, u_n - x_n\rangle \ge 0,$$
$$g(u_n) - g(u_{n+1}) + \frac{1}{r_{n+1}}\langle u_n - u_{n+1}, u_{n+1} - x_{n+1}\rangle \ge 0.$$
So, adding the two inequalities (the terms in $g$ cancel, which is (A2) for the bifunction $G$), we have
$$\left\langle u_{n+1} - u_n,\ \frac{u_n - x_n}{r_n} - \frac{u_{n+1} - x_{n+1}}{r_{n+1}}\right\rangle \ge 0,$$
and hence
$$\left\langle u_{n+1} - u_n,\ u_n - u_{n+1} + u_{n+1} - x_n - \frac{r_n}{r_{n+1}}(u_{n+1} - x_{n+1})\right\rangle \ge 0.$$
Since $\liminf_{n\to\infty}r_n > 0$, without loss of generality, let us assume that there exists a real number $b > 0$ such that $r_n > b$ for all $n$. Thus, we have
$$\|u_{n+1} - u_n\|^2 \le \left\langle u_{n+1} - u_n,\ x_{n+1} - x_n + \left(1 - \frac{r_n}{r_{n+1}}\right)(u_{n+1} - x_{n+1})\right\rangle;$$
thus,
$$\|u_{n+1} - u_n\| \le \|x_{n+1} - x_n\| + \frac{|r_{n+1} - r_n|}{b}L, \tag{39}$$
where $L = \sup\{\|u_n - x_n\| : n \in \mathbb{N}\}$.

From (32) and (39), we obtain
$$\|x_{n+1} - x_n\| \le \left(1 - (\tau - \gamma\rho)\alpha_n\right)\|x_n - x_{n-1}\| + M\left(|\alpha_n - \alpha_{n-1}| + |r_n - r_{n-1}|\right),$$
where $M \ge \max\{\gamma\|f(x_n)\| + \mu\|Fu_n\|,\ L/b\}$, for all $n \ge 1$. Hence, by Lemma 6 together with conditions (C2) and (C4), we have
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0. \tag{41}$$
Then, from (39), (41), and $\sum_{n=0}^{\infty}|r_{n+1} - r_n| < \infty$, we have
$$\lim_{n\to\infty}\|u_{n+1} - u_n\| = 0.$$
Next, we show that $\lim_{n\to\infty}\|x_n - u_n\| = 0$.

Indeed, for any $p \in \Omega$, by Lemma 5 (the firm nonexpansiveness of $T_{r_n}$), we have
$$\|u_n - p\|^2 = \|T_{r_n}x_n - T_{r_n}p\|^2 \le \langle u_n - p, x_n - p\rangle = \frac{1}{2}\left(\|u_n - p\|^2 + \|x_n - p\|^2 - \|u_n - x_n\|^2\right).$$
This implies that
$$\|u_n - p\|^2 \le \|x_n - p\|^2 - \|u_n - x_n\|^2. \tag{44}$$
Then from (29) and (44), we derive that
$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le \left(\alpha_n\|\gamma f(x_n) - \mu Fp\| + (1 - \alpha_n\tau)\|u_n - p\|\right)^2 \\
&\le \|u_n - p\|^2 + \alpha_n K \\
&\le \|x_n - p\|^2 - \|u_n - x_n\|^2 + \alpha_n K,
\end{aligned}$$
where $K$ is an appropriate constant (recall that all the sequences involved are bounded). Thus,
$$\|u_n - x_n\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n K \le \left(\|x_n - p\| + \|x_{n+1} - p\|\right)\|x_{n+1} - x_n\| + \alpha_n K.$$
From (41) and condition (C1), we obtain that
$$\lim_{n\to\infty}\|u_n - x_n\| = 0.$$
Moreover, since $x_{n+1} - u_n = \alpha_n(\gamma f(x_n) - \mu Fu_n)$ and $\{x_n\}$, $\{u_n\}$ are bounded, condition (C1) also gives
$$\lim_{n\to\infty}\|x_{n+1} - u_n\| = 0.$$
Next, we show that
$$\limsup_{n\to\infty}\langle(\gamma f - \mu F)q, x_n - q\rangle \le 0, \tag{49}$$
where $q \in \Omega$ is the unique solution of the variational inequality (24). Since $\{u_n\}$ is bounded, without loss of generality, we may assume that there exists a subsequence $\{u_{n_k}\}$ with $u_{n_k} \rightharpoonup w$ such that
$$\limsup_{n\to\infty}\langle(\gamma f - \mu F)q, u_n - q\rangle = \lim_{k\to\infty}\langle(\gamma f - \mu F)q, u_{n_k} - q\rangle.$$
From $\|x_n - u_n\| \to 0$ and $u_{n_k} \rightharpoonup w$, we obtain that $x_{n_k} \rightharpoonup w$.

Next, we show that $w \in \Omega$.

Indeed, from $u_n = T_{r_n}x_n$, for any $y \in C$, we obtain
$$G(u_n, y) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0.$$
From (A2), we have
$$\frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge G(y, u_n),$$
and hence
$$\left\langle y - u_{n_k},\ \frac{u_{n_k} - x_{n_k}}{r_{n_k}}\right\rangle \ge G(y, u_{n_k}).$$
Since $\|u_{n_k} - x_{n_k}\|/r_{n_k} \to 0$ and $u_{n_k} \rightharpoonup w$, it follows from (A4) that $G(y, w) \le 0$, for any $y \in C$. Let $y_t = ty + (1 - t)w$, for all $t \in (0, 1]$ and $y \in C$. Since $y \in C$, $w \in C$, and $C$ is a closed and convex set, we get $y_t \in C$. Hence we have $G(y_t, w) \le 0$.

Thus, from (A1) and (A4), we have
$$0 = G(y_t, y_t) \le tG(y_t, y) + (1 - t)G(y_t, w) \le tG(y_t, y),$$
and hence $G(y_t, y) \ge 0$. From (A3), letting $t \downarrow 0$, we have $G(w, y) \ge 0$, for any $y \in C$. Hence $w \in EP(G) = \Omega$.

Therefore, since $x_{n_k} \rightharpoonup w$, $\|x_n - u_n\| \to 0$, and $w \in \Omega$,
$$\limsup_{n\to\infty}\langle(\gamma f - \mu F)q, x_n - q\rangle = \lim_{k\to\infty}\langle(\gamma f - \mu F)q, x_{n_k} - q\rangle = \langle(\gamma f - \mu F)q, w - q\rangle \le 0.$$
Finally, we show that $x_n \to q$.

As a matter of fact,
$$x_{n+1} - q = \alpha_n(\gamma f(x_n) - \mu Fq) + (I - \mu\alpha_n F)u_n - (I - \mu\alpha_n F)q.$$
So, from (29) and Proposition 7, we derive
$$\begin{aligned}
\|x_{n+1} - q\|^2 &\le (1 - \alpha_n\tau)^2\|u_n - q\|^2 + 2\alpha_n\langle\gamma f(x_n) - \mu Fq, x_{n+1} - q\rangle \\
&\le (1 - \alpha_n\tau)^2\|x_n - q\|^2 + 2\alpha_n\gamma\rho\|x_n - q\|\,\|x_{n+1} - q\| + 2\alpha_n\langle\gamma f(q) - \mu Fq, x_{n+1} - q\rangle \\
&\le (1 - \alpha_n\tau)^2\|x_n - q\|^2 + \alpha_n\gamma\rho\left(\|x_n - q\|^2 + \|x_{n+1} - q\|^2\right) + 2\alpha_n\langle\gamma f(q) - \mu Fq, x_{n+1} - q\rangle.
\end{aligned}$$
It follows that
$$\|x_{n+1} - q\|^2 \le \frac{(1 - \alpha_n\tau)^2 + \alpha_n\gamma\rho}{1 - \alpha_n\gamma\rho}\|x_n - q\|^2 + \frac{2\alpha_n}{1 - \alpha_n\gamma\rho}\langle\gamma f(q) - \mu Fq, x_{n+1} - q\rangle.$$
Hence,
$$\|x_{n+1} - q\|^2 \le (1 - \gamma_n)\|x_n - q\|^2 + \delta_n, \tag{60}$$
where
$$\gamma_n = \frac{2(\tau - \gamma\rho)\alpha_n}{1 - \alpha_n\gamma\rho}, \qquad \delta_n = \frac{\alpha_n}{1 - \alpha_n\gamma\rho}\left(2\langle\gamma f(q) - \mu Fq, x_{n+1} - q\rangle + \alpha_n\tau^2\|x_n - q\|^2\right).$$

Since $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$, we get $\gamma_n \to 0$ and $\sum_{n=0}^{\infty}\gamma_n = \infty$. From (49), we get $\limsup_{n\to\infty}\delta_n/\gamma_n \le 0$.
Now, applying Lemma 6 to (60) concludes that $x_n \to q$ as $n \to \infty$. This completes the proof.

*Remark 9. *Our proposed algorithm (23) is more general than Su and Li’s algorithm (5) because of the introduction of the nonlinear operator $F$.

If $F = I$, $\gamma = 1$, and $\mu = 1$ (so that $\tau = 1/2$ and the condition $\gamma\rho < \tau$ becomes $\rho < 1/2$), we obtain the following corollary.

Corollary 10. *Let $H$ be a real Hilbert space and let $C$ be a nonempty, closed, and convex subset of $H$. Let $g : C \to \mathbb{R}$ be a real-valued convex function. Assume that $g$ is lower semicontinuous. Let $f : C \to C$ be a contraction with a coefficient $\rho \in (0, 1/2)$. Suppose that the optimization problem (3) is consistent and let $\Omega$ denote its solution set. Let $\{x_n\}$ be generated by $x_0 \in C$ and
$$g(y) - g(u_n) + \frac{1}{r_n}\langle y - u_n, u_n - x_n\rangle \ge 0, \quad \forall y \in C,$$
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)u_n, \tag{62}$$
where $u_n = T_{r_n}x_n$. Let $\{\alpha_n\}$ and $\{r_n\}$ satisfy the following conditions:*
(C1) *$\{\alpha_n\} \subset (0, 1)$, $\lim_{n\to\infty}\alpha_n = 0$, and $\sum_{n=0}^{\infty}\alpha_n = \infty$;*
(C2) *$\sum_{n=0}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$;*
(C3) *$\{r_n\} \subset (0, \infty)$, $\liminf_{n\to\infty}r_n > 0$, and $\sum_{n=0}^{\infty}|r_{n+1} - r_n| < \infty$.*
*Then the sequence $\{x_n\}$ generated by (62) converges strongly to a point $q \in \Omega$, which solves the variational inequality
$$\langle(f - I)q, p - q\rangle \le 0, \quad \forall p \in \Omega.$$*

*Remark 11. *We would like to point out that the conditions of algorithm (62) are different from those of the algorithm in [2]. The conditions imposed on the sequence $\{\alpha_n\}$ are the same as the ones of algorithm (62), but the conditions imposed in [2] on the sequence of parameters $\{r_n\}$ are stronger. It is obvious that our conditions are weaker.

#### 4. Application

In this section, we give an application of Theorem 8 to the split feasibility problem (SFP, for short), which was introduced by Censor and Elfving [15]. Since its inception in 1994, the SFP has received much attention (see [16–18]) due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy.

The SFP can mathematically be formulated as the problem of finding a point $x$ with the property
$$x \in C \quad\text{and}\quad Ax \in Q, \tag{64}$$
where $C$ and $Q$ are nonempty, closed, and convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator.

It is clear that $x^*$ is a solution to the split feasibility problem (64) if and only if $x^* \in C$ and $Ax^* - P_Q Ax^* = 0$. We define the proximity function $g$ by
$$g(x) = \frac{1}{2}\|Ax - P_Q Ax\|^2$$
and consider the following optimization problem:
$$\min_{x \in C} g(x). \tag{66}$$
Then, $x^*$ solves the split feasibility problem (64) if and only if $x^*$ solves the minimization problem (66) with the minimum being equal to 0. Byrne [19] introduced the so-called CQ algorithm to solve the SFP:
$$x_{n+1} = P_C\left(x_n - \gamma A^*(I - P_Q)Ax_n\right), \quad n \ge 0,$$
where $0 < \gamma < 2/\|A\|^2$ and $P_C$, $P_Q$ denote the metric projections onto $C$ and $Q$, respectively.
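For intuition, the CQ iteration can be run on a toy problem; the sets, matrix, and step size below are illustrative choices of ours, not from the paper. With $C = [-1, 1]^2$, $Q = [0.5, 1.5] \times [-0.5, 0.5]$, and $A = \operatorname{diag}(2, 1)$, any $\gamma \in (0, 2/\|A\|^2) = (0, 0.5)$ works:

```python
def clamp(v, lo, hi):
    return min(max(v, lo), hi)

def proj_C(x):
    # metric projection onto the box C = [-1, 1]^2
    return [clamp(c, -1.0, 1.0) for c in x]

def proj_Q(y):
    # metric projection onto the box Q = [0.5, 1.5] x [-0.5, 0.5]
    return [clamp(y[0], 0.5, 1.5), clamp(y[1], -0.5, 0.5)]

A = [2.0, 1.0]            # A = diag(2, 1), so ||A||^2 = 4 and A* = A
gamma = 0.4               # step size: 0 < gamma < 2 / ||A||^2 = 0.5

x = [-1.0, -1.0]          # arbitrary starting point
for _ in range(200):
    Ax = [A[i] * x[i] for i in range(2)]
    q = proj_Q(Ax)
    # x_{n+1} = P_C(x_n - gamma * A^* (I - P_Q) A x_n)
    x = proj_C([x[i] - gamma * A[i] * (Ax[i] - q[i]) for i in range(2)])

Ax = [A[i] * x[i] for i in range(2)]   # now x is in C and Ax is (nearly) in Q
```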