Abstract

The purpose of this paper is to introduce a new relaxed extragradient algorithm for the split feasibility problem. The proposed relaxed extragradient algorithm is new, and it generalizes several existing results for solving the split feasibility problem.

1. Introduction

The split feasibility problem has received much attention due to its applications in image denoising, signal processing, and image reconstruction, with particular progress in intensity-modulated radiation therapy; see, for instance, [1–4]. In this paper, we continue to study the split feasibility problem and its approximation algorithms.

To begin with, let us first recall the notion of the split feasibility problem and some existing algorithms in the literature. Recall that the split feasibility problem, introduced by Censor and Elfving [4], can be formulated as finding a point $x^*$ with the property
$$x^* \in C, \qquad Ax^* \in Q, \tag{1}$$
where $C$ and $Q$ are two closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. The original algorithm introduced in [4] involves the computation of the inverse $A^{-1}$:
$$x_{k+1} = A^{-1}P_Q\bigl(P_{A(C)}(Ax_k)\bigr), \quad k \ge 0, \tag{2}$$
where $C, Q \subset \mathbb{R}^n$ are closed convex sets and $A$ is an $n \times n$ full rank matrix. It is therefore an interesting problem to construct iterative algorithms for the split feasibility problem that employ only the bounded linear operator $A$ itself (together with its adjoint $A^*$) rather than its inverse.
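
For concreteness, here is a small illustrative instance (ours, not taken from [4]):
$$H_1 = \mathbb{R}^2, \quad H_2 = \mathbb{R}, \quad C = [0, 1]^2, \quad Q = [1, 2], \quad A = (1 \ \ 1),$$
so that $Ax = x_1 + x_2$. The point $x^* = (1/2, 1/2)$ satisfies $x^* \in C$ and $Ax^* = 1 \in Q$, so this split feasibility problem is consistent.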

We use $\Gamma$ to denote the solution set of the split feasibility problem; that is,
$$\Gamma = \{x \in C : Ax \in Q\}. \tag{3}$$
In the sequel, we assume that the split feasibility problem is consistent; that is, $\Gamma \ne \emptyset$. Let $x^* \in C$ and assume that $Ax^* \in Q$. Thus, $Ax^* - P_QAx^* = 0$. It follows that $\gamma A^*(Ax^* - P_QAx^*) = 0$ for any $\gamma > 0$, which implies that $x^* = x^* - \gamma A^*(I - P_Q)Ax^*$. Hence, we deduce that the fixed point equation $x^* = P_C(x^* - \gamma A^*(I - P_Q)Ax^*)$ holds. Note that $x^* \in C$. It is obvious that
$$x^* \in \Gamma \iff x^* = P_C\bigl(I - \gamma A^*(I - P_Q)A\bigr)x^*, \quad \gamma > 0. \tag{4}$$
By using (4), Byrne [5] presented a more popular algorithm:
$$x_{n+1} = P_C\bigl(x_n - \gamma A^*(I - P_Q)Ax_n\bigr), \quad n \ge 0, \tag{5}$$
where $0 < \gamma < 2/\|A\|^2$. The algorithm (5) only needs to compute the projections $P_C$ and $P_Q$. It is therefore implementable whenever $P_C$ and $P_Q$ have closed-form expressions. Consequently, algorithm (5) has received much attention; see [6–17]. In particular, in [14], Xu proposed an averaged CQ algorithm and obtained a weak convergence result.
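
To illustrate how algorithm (5) operates, the following Python sketch runs the CQ iteration on the toy instance above; the step size $\gamma$ and the iteration count are our choices for illustration, and both projections are taken in closed form.

    import numpy as np

    A = np.array([[1.0, 1.0]])                   # bounded linear operator (1 x 2 matrix)
    gamma = 0.9 * 2 / np.linalg.norm(A, 2) ** 2  # step size gamma in (0, 2/||A||^2)

    def P_C(x):
        return np.clip(x, 0.0, 1.0)              # projection onto C = [0,1]^2

    def P_Q(y):
        return np.clip(y, 1.0, 2.0)              # projection onto Q = [1,2]

    x = np.zeros(2)                              # initial guess x_0
    for _ in range(100):
        # CQ step (5): x_{n+1} = P_C(x_n - gamma A^*(I - P_Q)A x_n)
        x = P_C(x - gamma * (A.T @ (A @ x - P_Q(A @ x))))

    print(x, A @ x)                              # expect x in C with Ax in Q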

On the other hand, we note that $\Gamma \ne \emptyset$; then there exists $\hat{x} \in C$ such that $A\hat{x} \in Q$. Thus, we consider the distance function $d(y, Q) = \inf_{z \in Q}\|y - z\|$ and the minimization problem
$$\min_{x \in C}\ \frac{1}{2}d(Ax, Q)^2. \tag{6}$$
Since $d(Ax, Q) = \|Ax - P_QAx\|$, we then consider the minimization
$$\min_{x \in C}\ f(x) := \frac{1}{2}\|Ax - P_QAx\|^2. \tag{7}$$
However, (7) is, in general, ill-posed. We import the well-known Tikhonov regularization method
$$\min_{x \in C}\ f_\alpha(x) := \frac{1}{2}\|Ax - P_QAx\|^2 + \frac{\alpha}{2}\|x\|^2, \tag{8}$$
where $\alpha > 0$ is the regularization parameter. Note that the gradient of $f_\alpha$ is
$$\nabla f_\alpha(x) = \nabla f(x) + \alpha x = A^*(I - P_Q)Ax + \alpha x. \tag{9}$$
Using (9), some regularized methods have been suggested. For example, Xu [14] suggested the following regularized method:
$$x_{n+1} = P_C\bigl(x_n - \gamma_n\bigl(A^*(I - P_Q)Ax_n + \alpha_n x_n\bigr)\bigr), \quad n \ge 0. \tag{10}$$
He proved that the sequence generated by (10) converges to a solution of the split feasibility problem. Ceng et al. [7] presented the following extragradient method:
$$\begin{cases} y_n = P_C\bigl(x_n - \lambda_n\nabla f_{\alpha_n}(x_n)\bigr),\\ x_{n+1} = P_C\bigl(x_n - \lambda_n\nabla f_{\alpha_n}(y_n)\bigr), \end{cases} \quad n \ge 0, \tag{11}$$
and proved that the sequence generated by (11) converges weakly to a solution of the split feasibility problem. Motivated by the above results, in this paper, we construct a new extragradient algorithm for solving the split feasibility problem. A strong convergence result is demonstrated.
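
For illustration, the sketch below implements one plausible reading of the regularized extragradient iteration (11) on the same toy instance; the schedules $\alpha_n = 1/n$ and $\lambda_n = 1/(\alpha_n + \|A\|^2)$ are our assumptions, not the parameter choices of [7].

    import numpy as np

    A = np.array([[1.0, 1.0]])
    P_C = lambda x: np.clip(x, 0.0, 1.0)         # C = [0,1]^2
    P_Q = lambda y: np.clip(y, 1.0, 2.0)         # Q = [1,2]
    normA2 = np.linalg.norm(A, 2) ** 2           # ||A||^2

    def grad_f_alpha(x, alpha):
        # gradient (9): A^*(I - P_Q)Ax + alpha x
        return A.T @ (A @ x - P_Q(A @ x)) + alpha * x

    x = np.zeros(2)
    for n in range(1, 200):
        alpha_n = 1.0 / n                        # regularization parameter, alpha_n -> 0 (assumed)
        lam_n = 1.0 / (alpha_n + normA2)         # step size (assumed)
        y = P_C(x - lam_n * grad_f_alpha(x, alpha_n))   # predictor step of (11)
        x = P_C(x - lam_n * grad_f_alpha(y, alpha_n))   # corrector step of (11)

    print(x, A @ x)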

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called nonexpansive if
$$\|Tx - Ty\| \le \|x - y\| \quad \text{for all } x, y \in C. \tag{12}$$
A mapping $B : C \to H$ is said to be $\nu$-inverse strongly monotone if there exists a constant $\nu > 0$ such that
$$\langle Bx - By, x - y\rangle \ge \nu\|Bx - By\|^2 \quad \text{for all } x, y \in C. \tag{13}$$
Recall that the metric projection from $H$ onto $C$, denoted by $P_C$, means that, for each $x \in H$, $P_Cx$ is the unique point in $C$ satisfying
$$\|x - P_Cx\| = \inf\{\|x - y\| : y \in C\}. \tag{14}$$
$P_Cx$ can be characterized by
$$\langle x - P_Cx, y - P_Cx\rangle \le 0 \tag{15}$$
for all $y \in C$. Hence, $P_C$ is nonexpansive. It is also known that $I - P_C$ is nonexpansive.
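
For the reader's convenience, the nonexpansivity of $P_C$ follows from (15) in one line: writing (15) at the points $x$ and $y$ and adding the two resulting inequalities gives
$$\|P_Cx - P_Cy\|^2 \le \langle x - y, P_Cx - P_Cy\rangle \le \|x - y\|\,\|P_Cx - P_Cy\|;$$
that is, $P_C$ is in fact firmly nonexpansive.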

Let $B : C \to H$ be a monotone mapping. The variational inequality problem is to find $u \in C$ such that
$$\langle Bu, v - u\rangle \ge 0 \quad \text{for all } v \in C. \tag{16}$$
The solution set of the variational inequality is denoted by $VI(C, B)$. It is well known that $u \in VI(C, B)$ if and only if $u = P_C(u - \lambda Bu)$, $\lambda > 0$. A set-valued mapping $T : H \to 2^H$ is called monotone if, for each $x, y \in H$, $u \in Tx$, and $v \in Ty$, we have
$$\langle x - y, u - v\rangle \ge 0. \tag{17}$$
A monotone mapping $T$ is called maximal if its graph is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for each pair $(x, u) \in H \times H$ with $\langle x - y, u - v\rangle \ge 0$ for every $(y, v)$ in the graph of $T$, we have $u \in Tx$. Let $B$ be a monotone and $L$-Lipschitz continuous mapping and let $N_Cv$ be the normal cone to $C$ at $v \in C$; that is,
$$N_Cv = \{w \in H : \langle v - u, w\rangle \ge 0 \ \text{for all } u \in C\}. \tag{18}$$
Define
$$Tv = \begin{cases} Bv + N_Cv, & v \in C,\\ \emptyset, & v \notin C. \end{cases} \tag{19}$$
Then, $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in VI(C, B)$; see [18] for more details.
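
As a simple illustration of (18) (our example), for $C = [0, 1] \subset \mathbb{R}$ one has
$$N_Cv = \begin{cases} (-\infty, 0], & v = 0,\\ \{0\}, & 0 < v < 1,\\ [0, +\infty), & v = 1, \end{cases}$$
so the normal cone is trivial at interior points and consists of the outward directions at the boundary.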

Lemma 1 (see [14]). One knows the following properties: (i) $\nabla f = A^*(I - P_Q)A$ is Lipschitz continuous with Lipschitz constant $\|A\|^2$; (ii) $\nabla f$ is $(1/\|A\|^2)$-inverse strongly monotone; (iii) $P_C(I - \lambda\nabla f)$ is nonexpansive for all $0 \le \lambda \le 2/\|A\|^2$.
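
Property (ii) can be verified directly from the standard fact that $I - P_Q$ is firmly nonexpansive: for all $x, y \in H_1$,
$$\langle \nabla f(x) - \nabla f(y), x - y\rangle = \langle (I - P_Q)Ax - (I - P_Q)Ay, Ax - Ay\rangle \ge \|(I - P_Q)Ax - (I - P_Q)Ay\|^2 \ge \frac{1}{\|A\|^2}\|\nabla f(x) - \nabla f(y)\|^2,$$
where $\nabla f = A^*(I - P_Q)A$ and the last step uses $\|A^*z\| \le \|A\|\,\|z\|$.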

Lemma 2 (see [19]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let the mapping $B : C \to H$ be $\nu$-inverse strongly monotone and let $\lambda > 0$ be a constant. Then, we have
$$\|(I - \lambda B)x - (I - \lambda B)y\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\nu)\|Bx - By\|^2 \quad \text{for all } x, y \in C. \tag{20}$$
In particular, if $0 < \lambda \le 2\nu$, then $I - \lambda B$ is nonexpansive.
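
The proof of Lemma 2 is a direct expansion, which we record for the reader's convenience:
$$\|(I - \lambda B)x - (I - \lambda B)y\|^2 = \|x - y\|^2 - 2\lambda\langle x - y, Bx - By\rangle + \lambda^2\|Bx - By\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\nu)\|Bx - By\|^2,$$
where the inequality uses the $\nu$-inverse strong monotonicity of $B$.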

Lemma 3 (see [20]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that
$$x_{n+1} = \beta_n x_n + (1 - \beta_n)z_n \tag{21}$$
for all $n \ge 0$ and
$$\limsup_{n\to\infty}\bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0. \tag{22}$$
Then, $\lim_{n\to\infty}\|z_n - x_n\| = 0$.

Lemma 4 (see [21]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n, \tag{23}$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that (1) $\sum_{n=1}^\infty\gamma_n = \infty$; (2) $\limsup_{n\to\infty}\delta_n/\gamma_n \le 0$ or $\sum_{n=1}^\infty|\delta_n| < \infty$. Then, $\lim_{n\to\infty}a_n = 0$.
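
As a simple illustration of Lemma 4 (our example), take $\gamma_n = 1/n$ and $\delta_n = 1/(n\ln n)$ for $n \ge 2$: then $\sum_n\gamma_n = \infty$ and $\delta_n/\gamma_n = 1/\ln n \to 0$, so any nonnegative sequence satisfying $a_{n+1} \le (1 - 1/n)a_n + 1/(n\ln n)$ must converge to $0$.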

3. Main Results

Let $H_1$ and $H_2$ be two Hilbert spaces and let $C$ and $Q$ be two nonempty closed and convex subsets of $H_1$ and $H_2$, respectively. Suppose that $A : H_1 \to H_2$ is a bounded linear operator and $h : C \to C$ is a $\rho$-contraction with $\rho \in [0, 1)$. We now state and prove our main results; at the end of the paper, we give an example showing that our results improve on some works in the literature.

Algorithm 5. Let $x_0 \in C$. Let $\{x_n\}$ be a sequence generated by
$$\begin{cases} y_n = P_C\bigl(\alpha_n h(x_n) + (1 - \alpha_n)\bigl(x_n - \lambda_n\nabla f(x_n)\bigr)\bigr),\\ x_{n+1} = \beta_n x_n + (1 - \beta_n)P_C\bigl(y_n - \lambda_n\nabla f(y_n)\bigr), \end{cases} \quad n \ge 0, \tag{24}$$
where $\nabla f = A^*(I - P_Q)A$, $\{\alpha_n\} \subset (0, 1)$ is a sequence, and $\{\beta_n\} \subset (0, 1)$ and $\{\lambda_n\}$ are sequences satisfying $\{\lambda_n\} \subset (0, 2/\|A\|^2)$.
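
For illustration, the following Python sketch runs Algorithm 5 on the toy instance of Section 1; the contraction $h(x) = x/2$, the schedules $\alpha_n = 1/n$, $\beta_n = 1/2$, $\lambda_n \equiv 0.9/\|A\|^2$, and the iteration count are our assumptions, chosen only to satisfy the hypotheses of Theorem 6 below.

    import numpy as np

    A = np.array([[1.0, 1.0]])
    P_C = lambda x: np.clip(x, 0.0, 1.0)           # C = [0,1]^2
    P_Q = lambda y: np.clip(y, 1.0, 2.0)           # Q = [1,2]
    h = lambda x: 0.5 * x                          # a rho-contraction with rho = 1/2 (assumed)
    grad_f = lambda x: A.T @ (A @ x - P_Q(A @ x))  # grad f = A^*(I - P_Q)A

    lam = 0.9 / np.linalg.norm(A, 2) ** 2          # lambda_n -> lambda in (0, 2/||A||^2)
    x = np.zeros(2)                                # x_0 in C
    for n in range(1, 500):
        alpha_n, beta_n = 1.0 / n, 0.5             # alpha_n -> 0, sum alpha_n = infinity; beta_n in (0,1)
        y = P_C(alpha_n * h(x) + (1 - alpha_n) * (x - lam * grad_f(x)))  # first step of (24)
        x = beta_n * x + (1 - beta_n) * P_C(y - lam * grad_f(y))         # second step of (24)

    print(x, A @ x)                                # expect convergence into the solution set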

Theorem 6. Suppose that $\Gamma \ne \emptyset$. Assume that $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=0}^\infty\alpha_n = \infty$, $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$, $\{\lambda_n\} \subset (0, 2/\|A\|^2)$, and $\lim_{n\to\infty}\lambda_n = \lambda \in (0, 2/\|A\|^2)$. Then the sequence $\{x_n\}$ generated by (24) converges strongly to $\tilde{x} = P_\Gamma h(\tilde{x})$, which solves the split feasibility problem.

Proof. From the assumptions $\{\lambda_n\} \subset (0, 2/\|A\|^2)$ and $\lim_{n\to\infty}\lambda_n = \lambda \in (0, 2/\|A\|^2)$, we can choose $a, b \in (0, 2/\|A\|^2)$ such that $a \le \lambda_n \le b$ when $n$ is sufficiently large. For convenience and without loss of generality, we assume that $\lambda_n \in [a, b] \subset (0, 2/\|A\|^2)$ for all $n \ge 0$. Then, by Lemma 1, we deduce that $P_C(I - \lambda_n\nabla f)$ is nonexpansive for all $n \ge 0$.
Let $x^* \in \Gamma$. In the first section, we know that $x^* = P_C(x^* - \lambda\nabla f(x^*))$ for any $\lambda > 0$. Since $x^* \in C$ and $Ax^* \in Q$, we have
$$\nabla f(x^*) = A^*(I - P_Q)Ax^* = 0, \qquad x^* = P_C\bigl(x^* - \lambda_n\nabla f(x^*)\bigr). \tag{25}$$
From (24) and (25), we have
$$\|y_n - x^*\| \le \alpha_n\|h(x_n) - x^*\| + (1 - \alpha_n)\bigl\|(I - \lambda_n\nabla f)x_n - (I - \lambda_n\nabla f)x^*\bigr\|. \tag{26}$$
From Lemma 1, we know that $P_C(I - \lambda_n\nabla f)$ and $I - \lambda_n\nabla f$ are nonexpansive. It follows that
$$\|y_n - x^*\| \le \bigl(1 - (1 - \rho)\alpha_n\bigr)\|x_n - x^*\| + \alpha_n\|h(x^*) - x^*\|. \tag{27}$$
Thus,
$$\|x_{n+1} - x^*\| \le \beta_n\|x_n - x^*\| + (1 - \beta_n)\|y_n - x^*\| \le \max\Bigl\{\|x_n - x^*\|, \frac{\|h(x^*) - x^*\|}{1 - \rho}\Bigr\}, \tag{28}$$
and, by induction, $\|x_n - x^*\| \le \max\{\|x_0 - x^*\|, \|h(x^*) - x^*\|/(1 - \rho)\}$ for all $n \ge 0$. Hence, $\{x_n\}$ is bounded. It follows from (27) and (28) that the sequence $\{y_n\}$ is also bounded.
Set $z_n = P_C\bigl(y_n - \lambda_n\nabla f(y_n)\bigr)$. Note that $P_C(I - \lambda_n\nabla f)$ is nonexpansive. Thus, we can rewrite $x_{n+1}$ in (24) as
$$x_{n+1} = \beta_n x_n + (1 - \beta_n)z_n. \tag{29}$$
It follows that
$$\|z_{n+1} - z_n\| \le \|y_{n+1} - y_n\| + |\lambda_{n+1} - \lambda_n|\,\|\nabla f(y_{n+1})\|.$$
So, writing $w_n = x_n - \lambda_n\nabla f(x_n)$,
$$\|y_{n+1} - y_n\| \le \|x_{n+1} - x_n\| + |\lambda_{n+1} - \lambda_n|\,\|\nabla f(x_{n+1})\| + \alpha_{n+1}\|h(x_{n+1}) - w_{n+1}\| + \alpha_n\|h(x_n) - w_n\|.$$
It follows that
$$\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \le |\lambda_{n+1} - \lambda_n|\bigl(\|\nabla f(x_{n+1})\| + \|\nabla f(y_{n+1})\|\bigr) + \alpha_{n+1}\|h(x_{n+1}) - w_{n+1}\| + \alpha_n\|h(x_n) - w_n\|.$$
Since $\alpha_n \to 0$, $|\lambda_{n+1} - \lambda_n| \to 0$, and the sequences $\{\nabla f(x_n)\}$, $\{\nabla f(y_n)\}$, and $\{h(x_n) - w_n\}$ are bounded, we derive that
$$\limsup_{n\to\infty}\bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$
By Lemma 3, we obtain $\lim_{n\to\infty}\|z_n - x_n\| = 0$. Hence,
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = \lim_{n\to\infty}(1 - \beta_n)\|z_n - x_n\| = 0.$$
By Lemma 1, Lemma 2, and (29), we get (using $\|z_n - x^*\| \le \|y_n - x^*\|$ and the convexity of $\|\cdot\|^2$)
$$\|x_{n+1} - x^*\|^2 \le \beta_n\|x_n - x^*\|^2 + (1 - \beta_n)\|y_n - x^*\|^2 \le \|x_n - x^*\|^2 + \alpha_nM_1 + (1 - \beta_n)(1 - \alpha_n)\lambda_n\Bigl(\lambda_n - \frac{2}{\|A\|^2}\Bigr)\|\nabla f(x_n)\|^2,$$
where $M_1$ is a constant with $M_1 \ge \sup_n\|h(x_n) - x^*\|^2$. Therefore, we have
$$(1 - \beta_n)(1 - \alpha_n)\lambda_n\Bigl(\frac{2}{\|A\|^2} - \lambda_n\Bigr)\|\nabla f(x_n)\|^2 \le \bigl(\|x_n - x^*\| + \|x_{n+1} - x^*\|\bigr)\|x_{n+1} - x_n\| + \alpha_nM_1.$$
Noting that $\alpha_n \to 0$ and $\|x_{n+1} - x_n\| \to 0$, we get $\lim_{n\to\infty}(1 - \beta_n)\lambda_n(2/\|A\|^2 - \lambda_n)\|\nabla f(x_n)\|^2 = 0$. Since $\limsup_{n\to\infty}\beta_n < 1$ and $\lambda_n \in [a, b] \subset (0, 2/\|A\|^2)$, we obtain $\lim_{n\to\infty}\|\nabla f(x_n)\| = 0$; that is, $\lim_{n\to\infty}\|A^*(I - P_Q)Ax_n\| = 0$.
Using (15) (which, in particular, guarantees that $P_C$ is nonexpansive), we have
$$\|y_n - x_n\| = \bigl\|P_C\bigl(\alpha_n h(x_n) + (1 - \alpha_n)(x_n - \lambda_n\nabla f(x_n))\bigr) - P_Cx_n\bigr\| \le \alpha_nM + \lambda_n\|\nabla f(x_n)\|,$$
where $M$ is some constant such that $M \ge \sup_n\|h(x_n) - x_n + \lambda_n\nabla f(x_n)\|$. It follows that $\lim_{n\to\infty}\|y_n - x_n\| = 0$, and hence
$$\|\nabla f(y_n)\| \le \|\nabla f(x_n)\| + \|A\|^2\|y_n - x_n\| \to 0,$$
which implies that
$$\|z_n - y_n\| = \bigl\|P_C\bigl(y_n - \lambda_n\nabla f(y_n)\bigr) - P_Cy_n\bigr\| \le \lambda_n\|\nabla f(y_n)\| \to 0.$$
Since $x_n \in C$ and $\|y_n - x_n\| \to 0$, we derive that every weak cluster point of $\{x_n\}$ is also a weak cluster point of $\{y_n\}$.
Next we show that
$$\limsup_{n\to\infty}\langle h(\tilde{x}) - \tilde{x}, x_n - \tilde{x}\rangle \le 0,$$
where $\tilde{x} = P_\Gamma h(\tilde{x})$; such $\tilde{x}$ exists and is unique because $P_\Gamma h$ is a $\rho$-contraction of $C$ into itself. To show it, we choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that
$$\limsup_{n\to\infty}\langle h(\tilde{x}) - \tilde{x}, x_n - \tilde{x}\rangle = \lim_{i\to\infty}\langle h(\tilde{x}) - \tilde{x}, x_{n_i} - \tilde{x}\rangle.$$
Since $\{x_{n_i}\}$ is bounded and $\{y_{n_i}\}$ is then bounded, we have the fact that a subsequence of $\{y_{n_i}\}$, still denoted by $\{y_{n_i}\}$, converges weakly to some point $\hat{x} \in C$.
Next we show that $\hat{x} \in \Gamma$. We define a mapping $T$ by
$$Tv = \begin{cases} \nabla f(v) + N_Cv, & v \in C,\\ \emptyset, & v \notin C. \end{cases}$$
Then $T$ is maximal monotone. Let $(v, w)$ be in the graph of $T$. Since $w - \nabla f(v) \in N_Cv$ and $y_n \in C$, we get
$$\langle v - y_n, w - \nabla f(v)\rangle \ge 0.$$
On the other hand, from $y_n = P_C\bigl(\alpha_n h(x_n) + (1 - \alpha_n)(x_n - \lambda_n\nabla f(x_n))\bigr)$ and (15), we have
$$\bigl\langle v - y_n,\ y_n - \alpha_n h(x_n) - (1 - \alpha_n)\bigl(x_n - \lambda_n\nabla f(x_n)\bigr)\bigr\rangle \ge 0.$$
That is,
$$\Bigl\langle v - y_n,\ \frac{y_n - x_n}{\lambda_n} + \nabla f(x_n) + \frac{\alpha_n}{\lambda_n}\bigl(x_n - \lambda_n\nabla f(x_n) - h(x_n)\bigr)\Bigr\rangle \ge 0.$$
Therefore, we have
$$\langle v - y_{n_i}, w\rangle \ge \langle v - y_{n_i}, \nabla f(v)\rangle \ge \langle v - y_{n_i}, \nabla f(y_{n_i}) - \nabla f(x_{n_i})\rangle - \Bigl\langle v - y_{n_i},\ \frac{y_{n_i} - x_{n_i}}{\lambda_{n_i}} + \frac{\alpha_{n_i}}{\lambda_{n_i}}\bigl(x_{n_i} - \lambda_{n_i}\nabla f(x_{n_i}) - h(x_{n_i})\bigr)\Bigr\rangle,$$
where we used the monotonicity of $\nabla f$ in the form $\langle v - y_{n_i}, \nabla f(v) - \nabla f(y_{n_i})\rangle \ge 0$. Noting that $\|y_{n_i} - x_{n_i}\| \to 0$, $\alpha_{n_i} \to 0$, $\lambda_{n_i} \ge a > 0$, $\nabla f$ is Lipschitz continuous, and $y_{n_i} \rightharpoonup \hat{x}$, we deduce from the above that
$$\langle v - \hat{x}, w\rangle \ge 0.$$
Since $T$ is maximal monotone, we have $0 \in T\hat{x}$ and hence $\hat{x} \in VI(C, \nabla f)$. Since $f$ is convex, $VI(C, \nabla f)$ coincides with the set of minimizers of $f$ over $C$, which equals $\Gamma$ because the problem is consistent; hence $\hat{x} \in \Gamma$. Therefore,
$$\limsup_{n\to\infty}\langle h(\tilde{x}) - \tilde{x}, x_n - \tilde{x}\rangle = \lim_{i\to\infty}\langle h(\tilde{x}) - \tilde{x}, x_{n_i} - \tilde{x}\rangle = \langle h(\tilde{x}) - \tilde{x}, \hat{x} - \tilde{x}\rangle \le 0,$$
where the last inequality follows from (15) applied to $\tilde{x} = P_\Gamma h(\tilde{x})$ with $\hat{x} \in \Gamma$. Again from (15), we have
$$\|y_n - \tilde{x}\|^2 \le \bigl\langle \alpha_n h(x_n) + (1 - \alpha_n)\bigl(x_n - \lambda_n\nabla f(x_n)\bigr) - \tilde{x},\ y_n - \tilde{x}\bigr\rangle.$$
Hence,
$$\|y_n - \tilde{x}\|^2 \le \bigl(1 - (1 - \rho)\alpha_n\bigr)\|x_n - \tilde{x}\|^2 + 2\alpha_n\langle h(\tilde{x}) - \tilde{x}, y_n - \tilde{x}\rangle.$$
Therefore,
$$\|x_{n+1} - \tilde{x}\|^2 \le \beta_n\|x_n - \tilde{x}\|^2 + (1 - \beta_n)\|y_n - \tilde{x}\|^2 \le \bigl(1 - (1 - \rho)(1 - \beta_n)\alpha_n\bigr)\|x_n - \tilde{x}\|^2 + 2(1 - \beta_n)\alpha_n\langle h(\tilde{x}) - \tilde{x}, y_n - \tilde{x}\rangle.$$
Since $\|y_n - x_n\| \to 0$, we also have $\limsup_{n\to\infty}\langle h(\tilde{x}) - \tilde{x}, y_n - \tilde{x}\rangle \le 0$. We apply Lemma 4 to the last inequality, with $\gamma_n = (1 - \rho)(1 - \beta_n)\alpha_n$ and $\delta_n = 2(1 - \beta_n)\alpha_n\langle h(\tilde{x}) - \tilde{x}, y_n - \tilde{x}\rangle$, to deduce that $x_n \to \tilde{x}$. This completes the proof.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the referees for the useful comments and suggestions. This study was supported by research funds from Dong-A University.