Abstract

The purpose of this paper is to introduce and analyze a strongly convergent method, which combines the regularization method with the extragradient method, for solving the split feasibility problem in the setting of infinite-dimensional Hilbert spaces. Note that the strong limit of the generated sequence is the minimum-norm solution of the split feasibility problem.

1. Introduction

In 1994, Censor and Elfving [1] first introduced the split feasibility problem (SFP) in finite-dimensional Hilbert spaces for modeling inverse problems which arise from phase retrieval and medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, [2] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy; see, for example, [3-5] and the references therein. Very recently, Xu [6] considered the SFP in the framework of infinite-dimensional Hilbert spaces. In this setting, the SFP is formulated as finding a point $x^*$ with the property

$$x^* \in C, \qquad Ax^* \in Q, \tag{1.1}$$

where $C$ and $Q$ are two closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. We use $\Gamma$ to denote the solution set of the SFP, that is,

$$\Gamma = \{x^* \in C : Ax^* \in Q\}.$$

Assume that the SFP is consistent, that is, $\Gamma \neq \emptyset$. A special case of the SFP is the convexly constrained linear inverse problem [7] in the finite-dimensional Hilbert spaces,

$$x^* \in C, \qquad Ax^* = b, \tag{1.2}$$

which has extensively been investigated by using the projected Landweber iterative method [8]:

$$x_{n+1} = P_C\big(x_n + \gamma A^T(b - Ax_n)\big), \quad n \ge 0.$$

Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set $Q$. Therefore, whether various versions of the projected Landweber iterative method can be extended to solve the SFP remains an interesting open topic. For example, it is not yet clear whether the dual approach to (1.2) of [9] can be extended to the SFP.

The original algorithm introduced in [1] involves the computation of the inverse $A^{-1}$ at each iterative step, where $C$ and $Q$ are closed convex sets in $\mathbb{R}^n$ and $A$ is a full rank $n \times n$ matrix, and thus did not become popular. A more popular algorithm that solves the SFP seems to be the CQ algorithm of Byrne ([2, 10]):

$$x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big), \quad n \ge 0,$$

where $0 < \gamma < 2/\|A\|^2$. The CQ algorithm only involves the computations of the projections $P_C$ and $P_Q$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions (e.g., where $C$ and $Q$ are closed balls or half-spaces). There are a large number of references on the CQ method for the SFP in the literature; see, for instance, [11-24]. It remains, however, a challenge how to implement the CQ algorithm in the case where the projections $P_C$ and/or $P_Q$ fail to have closed-form expressions, though theoretically we can prove (weak) convergence of the algorithm.
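To make the CQ iteration concrete, the following is a minimal Python sketch on a small synthetic finite-dimensional SFP. The particular sets (C the nonnegative orthant, Q a closed ball), the random data, and the step size are illustrative assumptions of this sketch, not taken from the references; here the transpose A.T plays the role of the adjoint $A^*$.

import numpy as np

def proj_C(x):                                   # P_C for C = nonnegative orthant
    return np.maximum(x, 0.0)

def proj_Q(y, center, radius):                   # P_Q for Q = a closed ball
    d = y - center
    nd = np.linalg.norm(d)
    return y if nd <= radius else center + radius * d / nd

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
center = A @ proj_C(rng.standard_normal(50))     # data chosen so the SFP is consistent
radius = 0.5

gamma = 1.0 / np.linalg.norm(A, 2) ** 2          # 0 < gamma < 2 / ||A||^2
x = np.zeros(50)
for _ in range(2000):
    Ax = A @ x
    x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax, center, radius)))  # CQ step

Ax = A @ x
print(np.linalg.norm(Ax - proj_Q(Ax, center, radius)))  # SFP residual, near 0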

Very recently, Xu [6] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann's algorithm to the SFP and proposed an averaged CQ algorithm, which was proved to be weakly convergent to a solution of the SFP. He also derived a weak convergence result showing that, for suitable choices of the iterative parameters (including the regularization parameter), the sequence of iterates converges weakly to an exact solution of the SFP.

Note that $x^* \in \Gamma$ means that there is an $x^* \in C$ such that $Ax^* - y^* = 0$ for some $y^* \in Q$. This motivates us to consider the distance function $d(Ax, Q) = \min_{y \in Q}\|Ax - y\|$ and the minimization problem

$$\min_{x \in C,\, y \in Q} \frac{1}{2}\|Ax - y\|^2.$$

Minimizing with respect to $y \in Q$ first makes us consider the minimization

$$\min_{x \in C} f(x) := \frac{1}{2}\|(I - P_Q)Ax\|^2. \tag{1.7}$$

However, (1.7) is, in general, ill-posed. So regularization is needed. We consider Tikhonov's regularization

$$\min_{x \in C} f_\alpha(x) := \frac{1}{2}\|(I - P_Q)Ax\|^2 + \frac{\alpha}{2}\|x\|^2,$$

where $\alpha > 0$ is the regularization parameter. We can compute the gradient $\nabla f_\alpha$ of $f_\alpha$ as

$$\nabla f_\alpha(x) = \nabla f(x) + \alpha x = A^*(I - P_Q)Ax + \alpha x.$$

Define a Picard iteration for fixed $\alpha > 0$:

$$x_{n+1}^\alpha = P_C\big((1 - \gamma\alpha)x_n^\alpha - \gamma A^*(I - P_Q)Ax_n^\alpha\big), \quad n \ge 0. \tag{1.10}$$

Xu [6] has shown that if the SFP (1.1) is consistent, then $x_n^\alpha \to x_\alpha$ as $n \to \infty$, where $x_\alpha$ is the unique minimizer of $f_\alpha$ over $C$, and consequently the strong limit $\lim_{\alpha \to 0} x_\alpha$ exists and is the minimum-norm solution of the SFP. Note that (1.10) is a double-step iteration. Xu [6] further suggested a single-step regularized method:

$$x_{n+1} = P_C\big((1 - \alpha_n\gamma_n)x_n - \gamma_n A^*(I - P_Q)Ax_n\big), \quad n \ge 0. \tag{1.11}$$

Xu proved that the sequence $\{x_n\}$ generated by (1.11) converges in norm to the minimum-norm solution of the SFP provided the parameters $\{\alpha_n\}$ and $\{\gamma_n\}$ satisfy the following conditions:
(i) $\alpha_n \to 0$ and $0 < \gamma_n \le \alpha_n/(\|A\|^2 + \alpha_n)^2$;
(ii) $\sum_{n=0}^\infty \alpha_n\gamma_n = \infty$;
(iii) $\big(|\gamma_{n+1} - \gamma_n| + |\alpha_{n+1}\gamma_{n+1} - \alpha_n\gamma_n|\big)/(\alpha_{n+1}\gamma_{n+1})^2 \to 0$.
Motivated by the ideas of the extragradient method and Xu's regularization, Ceng et al. [25] presented the following extragradient method with regularization for finding a common element of the solution set $\Gamma$ of the split feasibility problem and the set $\mathrm{Fix}(S)$ of fixed points of a nonexpansive mapping $S$:

$$y_n = P_C\big(x_n - \lambda_n\nabla f_{\alpha_n}(x_n)\big), \qquad x_{n+1} = \beta_n x_n + (1 - \beta_n)SP_C\big(x_n - \lambda_n\nabla f_{\alpha_n}(y_n)\big), \quad n \ge 0. \tag{1.12}$$

Ceng et al. only obtained the weak convergence of the algorithm (1.12).
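A minimal Python sketch of the single-step regularized scheme (1.11) follows, on the same kind of toy problem as above. The concrete sequences $\alpha_n = n^{-1/2}$ and $\gamma_n = \alpha_n/(\|A\|^2 + \alpha_n)^2$ are illustrative choices consistent with conditions (i) and (ii); condition (iii) is not verified here.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
normA2 = np.linalg.norm(A, 2) ** 2
center = A @ np.maximum(rng.standard_normal(50), 0.0)  # consistent toy SFP
radius = 0.5

proj_C = lambda x: np.maximum(x, 0.0)
def proj_Q(y):
    d = y - center
    nd = np.linalg.norm(d)
    return y if nd <= radius else center + radius * d / nd

x = rng.standard_normal(50)
for n in range(1, 50001):
    alpha = n ** -0.5                         # alpha_n -> 0
    gamma = alpha / (normA2 + alpha) ** 2     # gamma_n <= alpha_n / (||A||^2 + alpha_n)^2
    grad = A.T @ (A @ x - proj_Q(A @ x))      # nabla f(x_n) = A^*(I - P_Q)A x_n
    x = proj_C((1 - alpha * gamma) * x - gamma * grad)   # single-step scheme (1.11)

print(np.linalg.norm(x))  # approximates the norm of the minimum-norm solution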

The purpose of this paper is to further introduce and analyze a strongly convergent method, which combines the regularization method with the extragradient method, for solving the split feasibility problem in the setting of infinite-dimensional Hilbert spaces. Note that the strong limit of the generated sequence is the minimum-norm solution of the split feasibility problem.

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called nonexpansive if

$$\|Tx - Ty\| \le \|x - y\|, \quad x, y \in C.$$

We will use $\mathrm{Fix}(T)$ to denote the set of fixed points of $T$, that is, $\mathrm{Fix}(T) = \{x \in C : x = Tx\}$. A mapping $F : C \to H$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if there exists a constant $\nu > 0$ such that

$$\langle Fx - Fy, x - y\rangle \ge \nu\|Fx - Fy\|^2, \quad x, y \in C.$$

Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_C$, assigns to each $x \in H$ the unique point $P_C x \in C$ with the property

$$\|x - P_C x\| = \inf_{y \in C}\|x - y\|.$$

It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties:
(a) $\|P_C x - P_C y\| \le \|x - y\|$ for all $x, y \in H$;
(b) $\langle x - P_C x, y - P_C x\rangle \le 0$ for every $x \in H$, $y \in C$;
(c) $\langle x - y, P_C x - P_C y\rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$.

In particular, $P_C$ is nonexpansive.
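The introduction notes that $P_C$ has a closed form when $C$ is a closed ball or a half-space. The following sketch, with assumed example data, implements both projections and numerically checks property (b).

import numpy as np

def proj_halfspace(x, a, b):
    # P_C for the half-space C = {u : <a, u> <= b}
    slack = a @ x - b
    return x if slack <= 0 else x - (slack / (a @ a)) * a

def proj_ball(x, c, r):
    # P_C for the closed ball C = {u : ||u - c|| <= r}
    d = x - c
    nd = np.linalg.norm(d)
    return x if nd <= r else c + (r / nd) * d

# Check property (b): <x - P_C x, y - P_C x> <= 0 for every y in C.
a, b = np.array([1.0, 2.0]), 1.0
x = np.array([3.0, 3.0])
p = proj_halfspace(x, a, b)
y = np.array([1.0, 0.0])                 # y belongs to C since <a, y> = 1 <= b
print((x - p) @ (y - p) <= 1e-12)        # True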

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $F : C \to H$ be a monotone mapping. The variational inequality problem (VIP) is to find $x \in C$ such that

$$\langle Fx, y - x\rangle \ge 0, \quad y \in C.$$

The solution set of the VIP is denoted by $\mathrm{VI}(C, F)$. It is well known that

$$x \in \mathrm{VI}(C, F) \iff x = P_C(x - \lambda Fx) \quad \text{for any } \lambda > 0.$$

A set-valued mapping $T : H \to 2^H$ is called monotone if, for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply $\langle x - y, f - g\rangle \ge 0$. A monotone mapping $T : H \to 2^H$ is called maximal if its graph is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g\rangle \ge 0$ for every $(y, g) \in \mathrm{graph}(T)$ implies $f \in Tx$. Let $F : C \to H$ be a monotone and $L$-Lipschitz continuous mapping, and let $N_C v$ be the normal cone to $C$ at $v \in C$, that is,

$$N_C v = \{w \in H : \langle v - u, w\rangle \ge 0 \ \text{for all } u \in C\}.$$

Define

$$Tv = \begin{cases} Fv + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$

Then, $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in \mathrm{VI}(C, F)$; see [21] for more details.

Next we adopt the following notation:
(i) $x_n \to x$ means that $\{x_n\}$ converges strongly to $x$;
(ii) $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$;
(iii) $\omega_w(x_n) = \{x : \text{there exists a subsequence } \{x_{n_j}\} \text{ of } \{x_n\} \text{ with } x_{n_j} \rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.

Lemma 2.1 (see [6]). We have the following assertions.
(a) $T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism.
(b) If $T$ is $\nu$-ism, then for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism.
(c) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > 1/2$.
(d) If $S$ and $T$ are both averaged, then the product (composite) $ST$ is averaged.

Lemma 2.2 (see [26]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $E$, and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $x_{n+1} = \beta_n x_n + (1 - \beta_n)z_n$ for all $n \ge 0$ and

$$\limsup_{n\to\infty}\big(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\big) \le 0.$$

Then, $\lim_{n\to\infty}\|z_n - x_n\| = 0$.

Lemma 2.3 (see [27]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n,$$

where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that
(1) $\sum_{n=1}^\infty \gamma_n = \infty$;
(2) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=1}^\infty |\gamma_n\delta_n| < \infty$.
Then $\lim_{n\to\infty} a_n = 0$.
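Lemma 2.3 is the tool that will turn the final recursive estimate of Theorem 3.5 into strong convergence. A toy numerical run, with assumed sequences $\gamma_n = 1/n$ and $\delta_n = 1/\sqrt{n}$ satisfying (1) and (2), illustrates the conclusion $a_n \to 0$.

import math

a = 5.0                         # a_0 >= 0, arbitrary starting value
for n in range(1, 200001):
    gamma = 1.0 / n             # gamma_n in (0, 1) with sum gamma_n = infinity
    delta = 1.0 / math.sqrt(n)  # limsup delta_n <= 0 holds since delta_n -> 0
    a = (1 - gamma) * a + gamma * delta   # recursion of Lemma 2.3, with equality
print(a)                        # small; a_n -> 0 as n grows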

3. Main Results

Let $H_1$ and $H_2$ be two infinite-dimensional Hilbert spaces. Let $C$ and $Q$ be two nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_2$ be a bounded linear operator. In this section, we devote ourselves to solving the SFP (1.1). First, we need the following propositions.

Proposition 3.1 (see [6, 25]). Given $x^* \in C$, the following statements are equivalent:
(i) $x^*$ solves the SFP;
(ii) $x^*$ solves the fixed point equation $x^* = P_C\big(x^* - \lambda A^*(I - P_Q)Ax^*\big)$ for any $\lambda > 0$;
(iii) $x^*$ solves the variational inequality problem (VIP) of finding $x^* \in C$ such that

$$\langle \nabla f(x^*), x - x^*\rangle \ge 0, \quad x \in C,$$

where $\nabla f = A^*(I - P_Q)A$ and $A^*$ is the adjoint of $A$.

Proposition 3.2 (see [28]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let the mapping $F : C \to H$ be $\nu$-inverse strongly monotone and $\lambda > 0$ a constant. Then, we have

$$\|(I - \lambda F)x - (I - \lambda F)y\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\nu)\|Fx - Fy\|^2, \quad x, y \in C.$$

In particular, if $0 \le \lambda \le 2\nu$, then $I - \lambda F$ is nonexpansive.

Proposition 3.3 (see [6]). We have the following conclusions:
(i) $\nabla f$ is Lipschitz continuous with Lipschitz constant $\|A\|^2$;
(ii) $\nabla f_\alpha$ is $\frac{1}{\alpha + \|A\|^2}$-ism;
(iii) $P_C(I - \lambda\nabla f_\alpha)$ is nonexpansive for all $\lambda \in \big(0, \frac{2}{\alpha + \|A\|^2}\big]$.

Algorithm 3.4. For $x_0 \in C$ given arbitrarily, define a sequence $\{x_n\}$ iteratively by

$$y_n = P_C\big(x_n - \lambda\nabla f_{\alpha_n}(x_n)\big), \qquad x_{n+1} = \beta x_n + (1 - \beta)P_C\big(y_n - \lambda\nabla f_{\alpha_n}(y_n)\big), \quad n \ge 0, \tag{3.3}$$

where $\{\alpha_n\} \subset (0, \infty)$ is a sequence, and $\beta$ and $\lambda$ are two constants such that $\beta \in (0, 1)$ and $0 < \lambda < \frac{1}{\|A\|^2}$.
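The following Python sketch runs an iteration of the form (3.3) on the toy SFP used earlier. The data, the constants beta and lam, and the sequence $\alpha_n = 1/n$ are illustrative assumptions chosen to satisfy the hypotheses of Theorem 3.5 below.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
normA2 = np.linalg.norm(A, 2) ** 2
center = A @ np.maximum(rng.standard_normal(50), 0.0)   # keeps Gamma nonempty
radius = 0.5

proj_C = lambda x: np.maximum(x, 0.0)
def proj_Q(y):
    d = y - center
    nd = np.linalg.norm(d)
    return y if nd <= radius else center + radius * d / nd

def grad_f_alpha(x, alpha):       # nabla f_alpha(x) = A^*(I - P_Q)Ax + alpha x
    return A.T @ (A @ x - proj_Q(A @ x)) + alpha * x

beta = 0.5                        # beta in (0, 1)
lam = 0.9 / normA2                # 0 < lambda < 1 / ||A||^2
x = rng.standard_normal(50)
for n in range(1, 10001):
    alpha = 1.0 / n               # alpha_n -> 0 and sum alpha_n = infinity
    y = proj_C(x - lam * grad_f_alpha(x, alpha))          # first (extragradient-type) step
    x = beta * x + (1 - beta) * proj_C(y - lam * grad_f_alpha(y, alpha))

print(np.linalg.norm(A @ x - proj_Q(A @ x)))  # SFP residual, near 0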

Theorem 3.5. Suppose that $\Gamma \neq \emptyset$. Assume $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^\infty \alpha_n = \infty$. Then the sequence $\{x_n\}$ generated by (3.3) converges strongly to $x^\dagger = P_\Gamma(0)$, which is the minimum-norm element in $\Gamma$.

Proof. Note the conditions $\alpha_n \to 0$ and $0 < \lambda < \frac{1}{\|A\|^2}$. We deduce $\lambda(2\alpha_n + \|A\|^2) < 1$ for $n$ large enough. Without loss of generality, we may assume that, for all $n$, $\lambda(2\alpha_n + \|A\|^2) < 1$, that is, $\lambda < \frac{1}{2\alpha_n + \|A\|^2}$.
Pick up any $z \in \Gamma$. From Proposition 3.1, we have $z = P_C\big(z - \lambda\nabla f(z)\big)$ for any $\lambda > 0$. Thus,

$$z = P_C\big(z - \lambda\nabla f_{\alpha_n}(z) + \lambda\alpha_n z\big), \quad n \ge 0. \tag{3.4}$$
From (3.3) and (3.4), we have

$$\|y_n - z\| \le \big\|(I - \lambda\nabla f_{\alpha_n})x_n - (I - \lambda\nabla f_{\alpha_n})z\big\| + \lambda\alpha_n\|z\|. \tag{3.5}$$

From Propositions 3.2 and 3.3, we get that $I - \lambda\nabla f_{\alpha_n}$ is nonexpansive; moreover, writing $I - \lambda\nabla f_{\alpha_n} = (1 - \lambda\alpha_n)\big(I - \frac{\lambda}{1 - \lambda\alpha_n}\nabla f\big)$ and noting that $\frac{\lambda}{1 - \lambda\alpha_n} \le \frac{2}{\|A\|^2}$ and that $\nabla f$ is $\frac{1}{\|A\|^2}$-ism, we see that $I - \lambda\nabla f_{\alpha_n}$ is a contraction with coefficient $1 - \lambda\alpha_n$. It follows that

$$\|y_n - z\| \le (1 - \lambda\alpha_n)\|x_n - z\| + \lambda\alpha_n\|z\|, \tag{3.6}$$

and, in exactly the same way, $\|x_{n+1} - z\| \le \beta\|x_n - z\| + (1 - \beta)\big[(1 - \lambda\alpha_n)\|y_n - z\| + \lambda\alpha_n\|z\|\big]$.
Thus,

$$\|x_{n+1} - z\| \le \big(1 - (1 - \beta)\lambda\alpha_n\big)\|x_n - z\| + 2(1 - \beta)\lambda\alpha_n\|z\| \le \max\{\|x_n - z\|, 2\|z\|\}.$$

Hence, by induction, $\|x_n - z\| \le \max\{\|x_0 - z\|, 2\|z\|\}$ for all $n$, and $\{x_n\}$ is bounded; by (3.6), so is $\{y_n\}$.
Set $z_n := P_C\big(y_n - \lambda\nabla f_{\alpha_n}(y_n)\big)$. Note that $P_C(I - \lambda\nabla f_{\alpha_n})$ is nonexpansive by Proposition 3.3(iii). Thus, we can rewrite $x_{n+1}$ in (3.3) as

$$x_{n+1} = \beta x_n + (1 - \beta)z_n, \tag{3.7}$$

where $z_n = P_C(I - \lambda\nabla f_{\alpha_n})P_C(I - \lambda\nabla f_{\alpha_n})x_n$ is the value at $x_n$ of a composition of two nonexpansive mappings. It follows that

$$\|z_{n+1} - z_n\| \le \|x_{n+1} - x_n\| + \lambda(\alpha_n + \alpha_{n+1})M_1,$$

where $M_1$ is some constant such that $M_1 \ge \sup_n(\|x_n\| + \|y_n\|)$. By (3.3), we have $x_{n+1} - x_n = (1 - \beta)(z_n - x_n)$. Hence, we deduce

$$\limsup_{n\to\infty}\big(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\big) \le 0.$$

Therefore, the assumptions of Lemma 2.2 are fulfilled. By Lemma 2.2, we obtain $\lim_{n\to\infty}\|z_n - x_n\| = 0$. Hence,

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = (1 - \beta)\lim_{n\to\infty}\|z_n - x_n\| = 0.$$

From (3.5), (3.7), Propositions 3.2 and 3.3, and the convexity of the norm, we deduce

$$\|x_{n+1} - z\|^2 \le \|x_n - z\|^2 - (1 - \beta)\lambda\Big(\frac{2}{\alpha_n + \|A\|^2} - \lambda\Big)\Big[\|\nabla f_{\alpha_n}(x_n) - \nabla f_{\alpha_n}(z)\|^2 + \|\nabla f_{\alpha_n}(y_n) - \nabla f_{\alpha_n}(z)\|^2\Big] + \lambda\alpha_n M_2$$

for some constant $M_2 > 0$. Therefore, we have

$$(1 - \beta)\lambda\Big(\frac{2}{\alpha_n + \|A\|^2} - \lambda\Big)\Big[\|\nabla f_{\alpha_n}(x_n) - \nabla f_{\alpha_n}(z)\|^2 + \|\nabla f_{\alpha_n}(y_n) - \nabla f_{\alpha_n}(z)\|^2\Big] \le \big(\|x_n - z\| + \|x_{n+1} - z\|\big)\|x_{n+1} - x_n\| + \lambda\alpha_n M_2.$$

Since $\alpha_n \to 0$ and $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$, we obtain $\lim_{n\to\infty}\|\nabla f_{\alpha_n}(x_n) - \nabla f_{\alpha_n}(z)\| = 0$. Thus, we have $\lim_{n\to\infty}\|\nabla f_{\alpha_n}(y_n) - \nabla f_{\alpha_n}(z)\| = 0$ as well.
By the property (b) of the metric projection $P_C$, we have $\langle x_n - \lambda\nabla f_{\alpha_n}(x_n) - y_n, z - y_n\rangle \le 0$, and hence

$$\|y_n - z\|^2 \le \langle x_n - \lambda\nabla f_{\alpha_n}(x_n) - z, y_n - z\rangle \le \frac{1}{2}\big(\|x_n - z\|^2 + \|y_n - z\|^2 - \|x_n - y_n\|^2\big) + \frac{M}{2}\big(\|\nabla f_{\alpha_n}(x_n) - \nabla f_{\alpha_n}(z)\| + \alpha_n\big),$$

where $M$ is some constant such that $M \ge \sup_n 2\lambda(1 + \|z\|)\|y_n - z\|$. It follows that

$$\|y_n - z\|^2 \le \|x_n - z\|^2 - \|x_n - y_n\|^2 + M\big(\|\nabla f_{\alpha_n}(x_n) - \nabla f_{\alpha_n}(z)\| + \alpha_n\big),$$

and hence, by (3.7), the convexity of the norm, and $\|z_n - z\| \le \|y_n - z\| + \lambda\alpha_n\|z\|$,

$$\|x_{n+1} - z\|^2 \le \|x_n - z\|^2 - (1 - \beta)\|x_n - y_n\|^2 + M'\big(\|\nabla f_{\alpha_n}(x_n) - \nabla f_{\alpha_n}(z)\| + \alpha_n\big)$$

for some constant $M' > 0$, which implies that

$$(1 - \beta)\|x_n - y_n\|^2 \le \big(\|x_n - z\| + \|x_{n+1} - z\|\big)\|x_{n+1} - x_n\| + M'\big(\|\nabla f_{\alpha_n}(x_n) - \nabla f_{\alpha_n}(z)\| + \alpha_n\big).$$

Since $\|\nabla f_{\alpha_n}(x_n) - \nabla f_{\alpha_n}(z)\| \to 0$, $\|x_{n+1} - x_n\| \to 0$, and $\alpha_n \to 0$, we derive

$$\lim_{n\to\infty}\|x_n - y_n\| = 0.$$
Next we show that

$$\limsup_{n\to\infty}\langle -x^\dagger, x_n - x^\dagger\rangle \le 0,$$

where $x^\dagger = P_\Gamma(0)$ is the minimum-norm element in $\Gamma$. To show it, we choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

$$\limsup_{n\to\infty}\langle -x^\dagger, x_n - x^\dagger\rangle = \lim_{i\to\infty}\langle -x^\dagger, x_{n_i} - x^\dagger\rangle.$$

Since $\{x_n\}$ is bounded, we have that $\{y_n\}$ is also bounded. As $\{x_{n_i}\}$ is bounded, we have that a subsequence of $\{x_{n_i}\}$ converges weakly to some point $\hat{x}$; without loss of generality, we may assume that $x_{n_i} \rightharpoonup \hat{x}$.
Next we show that $\hat{x} \in \Gamma$. We define a mapping $T$ by

$$Tv = \begin{cases} \nabla f(v) + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$

Then $T$ is maximal monotone. Let $(v, w) \in \mathrm{graph}(T)$. Since $w - \nabla f(v) \in N_C v$ and $y_n \in C$, we have $\langle v - y_n, w - \nabla f(v)\rangle \ge 0$. On the other hand, from $y_n = P_C\big(x_n - \lambda\nabla f_{\alpha_n}(x_n)\big)$ and property (b), we have $\langle v - y_n, y_n - (x_n - \lambda\nabla f_{\alpha_n}(x_n))\rangle \ge 0$, that is,

$$\Big\langle v - y_n, \frac{y_n - x_n}{\lambda} + \nabla f_{\alpha_n}(x_n)\Big\rangle \ge 0.$$

Therefore, we have

$$\langle v - y_{n_i}, w\rangle \ge \langle v - y_{n_i}, \nabla f(v)\rangle \ge \langle v - y_{n_i}, \nabla f(v)\rangle - \Big\langle v - y_{n_i}, \frac{y_{n_i} - x_{n_i}}{\lambda} + \nabla f_{\alpha_{n_i}}(x_{n_i})\Big\rangle \ge \langle v - y_{n_i}, \nabla f(y_{n_i}) - \nabla f(x_{n_i})\rangle - \Big\langle v - y_{n_i}, \frac{y_{n_i} - x_{n_i}}{\lambda}\Big\rangle - \alpha_{n_i}\langle v - y_{n_i}, x_{n_i}\rangle,$$

where the last inequality uses the monotonicity of $\nabla f$. Noting that $\|y_{n_i} - x_{n_i}\| \to 0$ (so that $y_{n_i} \rightharpoonup \hat{x}$), $\alpha_{n_i} \to 0$, and $\nabla f$ is Lipschitz continuous, we deduce from the above that $\langle v - \hat{x}, w\rangle \ge 0$. Since $T$ is maximal monotone, we have $0 \in T\hat{x}$ and hence $\hat{x} \in \mathrm{VI}(C, \nabla f)$, that is, $\hat{x} \in \Gamma$ by Proposition 3.1. Therefore,

$$\limsup_{n\to\infty}\langle -x^\dagger, x_n - x^\dagger\rangle = \lim_{i\to\infty}\langle -x^\dagger, x_{n_i} - x^\dagger\rangle = \langle -x^\dagger, \hat{x} - x^\dagger\rangle.$$

By the property (b) of metric projection $P_\Gamma$ and $x^\dagger = P_\Gamma(0)$, we have

$$\langle -x^\dagger, \hat{x} - x^\dagger\rangle = \langle 0 - P_\Gamma(0), \hat{x} - P_\Gamma(0)\rangle \le 0.$$

Hence

$$\limsup_{n\to\infty}\langle -x^\dagger, x_n - x^\dagger\rangle \le 0.$$

Therefore, repeating the estimates (3.4)-(3.6) with $z = x^\dagger$ and keeping the inner product terms instead of the cruder bound $\langle -x^\dagger, u\rangle \le \|x^\dagger\|\|u\|$, we obtain

$$\|x_{n+1} - x^\dagger\|^2 \le \big(1 - (1 - \beta)\lambda\alpha_n\big)\|x_n - x^\dagger\|^2 + (1 - \beta)\lambda\alpha_n\delta_n,$$

where $\delta_n = 2\langle -x^\dagger, x_n - x^\dagger\rangle + 2\langle -x^\dagger, y_n - x^\dagger\rangle + \sigma_n$ with $\sigma_n \to 0$. Since $\|x_n - y_n\| \to 0$, we get $\limsup_{n\to\infty}\delta_n \le 0$, while $\sum_{n=0}^\infty (1 - \beta)\lambda\alpha_n = \infty$. We apply Lemma 2.3 to the last inequality to deduce that $x_n \to x^\dagger$. This completes the proof.

Remark 3.6. It is well known that Korpelevich's extragradient method has only weak convergence in the setting of infinite-dimensional Hilbert spaces. By contrast, our algorithm (3.3), which is similar to Korpelevich's method, has strong convergence in the setting of infinite-dimensional Hilbert spaces.

Remark 3.7. Algorithm (1.12) has only weak convergence for solving the SFP. Our algorithm solves the SFP with strong convergence and under weaker assumptions.

Acknowledgment

The research of N. Shahzad was partially supported by Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, Saudi Arabia.