#### Abstract

Many applied problems, such as image reconstruction and signal processing, can be formulated as the split feasibility problem (SFP). Several algorithms have been introduced in the literature for solving the SFP. In this paper, we continue the convergence analysis of regularized methods for the SFP. Two regularized methods are presented. Under different control conditions, we prove that the suggested algorithms converge strongly to the minimum-norm solution of the SFP.

#### 1. Introduction

The well-known convex feasibility problem is to find a point $x^*$ satisfying the following:

$$x^* \in \bigcap_{i=1}^{N} C_i,$$

where $N \ge 1$ is an integer and each $C_i$ is a nonempty closed convex subset of a Hilbert space $H$. Note that the convex feasibility problem has received a lot of attention due to its extensive applications in many applied disciplines as diverse as approximation theory, image recovery and signal processing, control theory, biomedical engineering, communications, and geophysics (see [1–3] and the references therein).

A special case of the convex feasibility problem is the split feasibility problem (SFP), which is to find a point $x^*$ such that

$$x^* \in C \quad \text{and} \quad Ax^* \in Q, \tag{1.2}$$

where $C$ and $Q$ are two closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. We use $\Gamma$ to denote the solution set of the SFP, that is,

$$\Gamma = \{x \in C : Ax \in Q\}.$$

Assume that the SFP is consistent, that is, $\Gamma \neq \emptyset$. A special case of the SFP is the convexly constrained linear inverse problem ([4]) in the finite-dimensional Hilbert spaces,

$$x \in C, \qquad Ax = b,$$

which has extensively been investigated by using the Landweber iterative method ([5]):

$$x_{n+1} = P_C\bigl(x_n + \gamma A^{T}(b - Ax_n)\bigr), \quad n \ge 0.$$
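For concreteness, the projected Landweber iteration can be sketched as follows. This is a minimal toy instance of my own; the matrix, data, and constraint set are illustrative only, not taken from the paper.

```python
import numpy as np

# A minimal sketch of the projected Landweber iteration for the constrained
# linear inverse problem: find x in C with A x = b, where C is the
# nonnegative orthant (closed-form projection):
#   x_{n+1} = P_C( x_n + gamma * A^T (b - A x_n) ),  0 < gamma < 2/||A||^2.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = A @ np.array([1.0, 0.5])           # consistent data; solution (1, 0.5) lies in C
proj_C = lambda x: np.maximum(x, 0.0)  # closed-form projection onto the orthant

gamma = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(2)
for _ in range(5000):
    x = proj_C(x + gamma * A.T @ (b - A @ x))

print(x)  # converges to the solution (1, 0.5)
```

Since $A$ has full rank here, the iterates contract linearly to the unique constrained solution.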

The SFP in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [6] for modeling inverse problems which arise from phase retrievals and in medical image reconstruction. The original algorithm introduced in [6] involves the computation of the inverse $A^{-1}$:

$$x_{n+1} = A^{-1}P_Q\bigl(P_{A(C)}(Ax_n)\bigr), \quad n \ge 0,$$

where $C, Q \subseteq \mathbb{R}^n$ are closed convex sets and $A$ is an $n \times n$ full-rank matrix, and thus it did not become popular. A more popular algorithm that solves the SFP is the CQ algorithm of Byrne ([7, 8]):

$$x_{n+1} = P_C\bigl(x_n - \gamma A^{T}(I - P_Q)Ax_n\bigr), \quad n \ge 0,$$

where $0 < \gamma < 2/\|A\|^2$. The CQ algorithm only involves the computations of the projections $P_C$ and $P_Q$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions (e.g., when $C$ and $Q$ are closed balls or half-spaces). There are a large number of references on the CQ method for the SFP in the literature; see, for instance, [9–19]. It remains, however, a challenge how to implement the CQ algorithm in the case where the projection $P_C$ and/or $P_Q$ fails to have a closed-form expression, though theoretically we can prove (weak) convergence of the algorithm.
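The CQ algorithm is easy to run when both projections are explicit. Below is a hedged sketch on a toy instance; the matrix and the sets $C$, $Q$ are my own choices, picked so that both projections have closed forms.

```python
import numpy as np

# A sketch of Byrne's CQ algorithm on a toy split feasibility problem:
#   x_{n+1} = P_C( x_n - gamma * A^T (I - P_Q) A x_n ),  0 < gamma < 2/||A||^2,
# with C = nonnegative orthant and Q = {y : y <= 1 componentwise}.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: np.minimum(y, 1.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # safely inside (0, 2/||A||^2)
x = np.array([2.0, 2.0])                  # infeasible starting point
for _ in range(2000):
    Ax = A @ x
    x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))

print(x)  # a point of C with A x in Q, i.e., a solution of this toy SFP
```

In finite dimensions the weak convergence of the CQ iterates is automatically convergence in norm, which is what the run above exhibits.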

Note that $x^* \in \Gamma$ means that there is an $x^* \in C$ such that $Ax^* - y = 0$ for some $y \in Q$. This motivates us to consider the distance function

$$d(Ax, Q) = \min_{y \in Q}\|Ax - y\|$$

and the minimization problem

$$\min_{x \in C,\, y \in Q} \frac{1}{2}\|Ax - y\|^2.$$

Minimizing with respect to $y \in Q$ first makes us consider the minimization:

$$\min_{x \in C} f(x) := \frac{1}{2}d(Ax, Q)^2 = \frac{1}{2}\|Ax - P_QAx\|^2. \tag{1.8}$$

However, (1.8) is, in general, ill posed. So regularization is needed. We consider Tikhonov's regularization

$$\min_{x \in C} f_\alpha(x) := \frac{1}{2}\|Ax - P_QAx\|^2 + \frac{\alpha}{2}\|x\|^2,$$

where $\alpha > 0$ is the regularization parameter. We can compute the gradient of $f_\alpha$ as

$$\nabla f_\alpha = \nabla f + \alpha I = A^*(I - P_Q)A + \alpha I.$$

Define the Picard iterates

$$x_{n+1}(\alpha) = P_C\bigl(x_n(\alpha) - \gamma\nabla f_\alpha(x_n(\alpha))\bigr), \quad n \ge 0. \tag{1.11}$$

Xu [20] showed that if the SFP (1.2) is consistent, then $x_n(\alpha) \to x_\alpha$ as $n \to \infty$, and consequently the strong limit $\lim_{\alpha \to 0}x_\alpha$ exists and is the minimum-norm solution of the SFP. Note that (1.11) is a double-step iteration. Xu [20] further suggested a single-step regularized method:

$$x_{n+1} = P_C\bigl((1 - \alpha_n\gamma_n)x_n - \gamma_nA^*(I - P_Q)Ax_n\bigr), \quad n \ge 0. \tag{1.12}$$

Xu proved that the sequence $\{x_n\}$ converges in norm to the minimum-norm solution of the SFP provided that the parameters $\{\alpha_n\}$ and $\{\gamma_n\}$ satisfy the following conditions:
(i) $\alpha_n \to 0$ and $0 < \gamma_n \le \dfrac{\alpha_n}{(\|A\|^2 + \alpha_n)^2}$;
(ii) $\sum_{n=0}^{\infty}\alpha_n\gamma_n = \infty$;
(iii) $\dfrac{|\gamma_{n+1} - \gamma_n| + |\alpha_{n+1}\gamma_{n+1} - \alpha_n\gamma_n|}{(\alpha_{n+1}\gamma_{n+1})^2} \to 0$.
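As a quick numerical illustration of the single-step regularized method (1.12), the toy run below uses a matrix, sets, and parameter sequences of my own choosing (not from the paper). The sets are picked so that the minimum-norm solution of the SFP is $0$, which is where the regularized iterates should drift.

```python
import numpy as np

# A toy run of the single-step regularized method
#   x_{n+1} = P_C[(1 - alpha_n*gamma_n) x_n - gamma_n * A^T (I - P_Q) A x_n],
# with gamma_n <= alpha_n/(||A||^2 + alpha_n)^2, alpha_n -> 0, and
# sum alpha_n*gamma_n = infinity. C = nonnegative orthant, Q = (-inf, 1]^2,
# so the minimum-norm solution of this SFP is x = 0.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = lambda x: np.maximum(x, 0.0)
proj_Q = lambda y: np.minimum(y, 1.0)
L2 = np.linalg.norm(A, 2) ** 2             # ||A||^2

x = np.array([2.0, 2.0])
for n in range(100000):
    alpha = (n + 1.0) ** -0.2              # alpha_n -> 0 slowly
    gamma = alpha / (L2 + alpha) ** 2      # step-size condition of the method
    Ax = A @ x
    x = proj_C((1 - alpha * gamma) * x - gamma * A.T @ (Ax - proj_Q(Ax)))

print(np.linalg.norm(x))  # tends to 0: the iterates approach the minimum-norm solution
```

The regularization term is what selects the minimum-norm point among all solutions; the plain CQ iteration from the same starting point would stop at a nearby, not necessarily minimal, solution.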

Recently, minimum-norm solutions and related minimization problems have been considered extensively in the literature; for related works, please see [21–29]. The main purpose of this paper is to further investigate the regularized method (1.12). Under different control conditions, we prove that this algorithm converges strongly to the minimum-norm solution of the SFP. We also consider an implicit method for finding the minimum-norm solution of the SFP.

#### 2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called *nonexpansive* if

$$\|Tx - Ty\| \le \|x - y\|, \quad x, y \in C.$$

We will use $\operatorname{Fix}(T)$ to denote the set of fixed points of $T$, that is, $\operatorname{Fix}(T) = \{x \in C : Tx = x\}$. A mapping $T : C \to C$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if there exists a constant $\nu > 0$ such that

$$\langle Tx - Ty, x - y\rangle \ge \nu\|Tx - Ty\|^2, \quad x, y \in C.$$
Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_C$, assigns to each $x \in H$ the unique point $P_Cx \in C$ with the property

$$\|x - P_Cx\| = \min_{y \in C}\|x - y\|.$$

It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties:
(a) $\|P_Cx - P_Cy\| \le \|x - y\|$ for all $x, y \in H$;
(b) $\langle x - P_Cx, y - P_Cx\rangle \le 0$ for every $x \in H$, $y \in C$;
(c) $\langle x - y, P_Cx - P_Cy\rangle \ge \|P_Cx - P_Cy\|^2$ for all $x, y \in H$.
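Property (b), the variational characterization of the projection, can be checked numerically. The small sketch below is an example of mine using the closed-form projection onto the unit ball.

```python
import numpy as np

# Numerical check of property (b): z = P_C(x) satisfies <x - z, y - z> <= 0
# for all y in C. Here C is the closed unit ball, whose metric projection has
# the closed form P_C(x) = x / max(1, ||x||).
rng = np.random.default_rng(0)
proj_C = lambda x: x / max(1.0, np.linalg.norm(x))

x = np.array([3.0, -4.0, 1.0])     # a point outside the ball
z = proj_C(x)
# sample many points y in the unit ball and record the largest inner product
samples = rng.normal(size=(1000, 3))
samples = samples / np.linalg.norm(samples, axis=1, keepdims=True) \
          * rng.uniform(0.0, 1.0, size=(1000, 1))
worst = max(float(np.dot(x - z, y - z)) for y in samples)
print(worst)  # never positive, in line with property (b)
```

Geometrically, (b) says the angle between $x - P_Cx$ and any direction from $P_Cx$ into $C$ is obtuse, which is exactly what the sampled inner products confirm.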

Next we adopt the following notation:
(i) $x_n \to x$ means that $\{x_n\}$ converges strongly to $x$;
(ii) $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$;
(iii) $\omega_w(x_n) := \{x : \exists\,\{x_{n_j}\} \subset \{x_n\} \text{ such that } x_{n_j} \rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.

Lemma 2.1 (see [20]). *Given $\gamma > 0$ and $x^* \in C$. Then $x^*$ solves the SFP if and only if $x^*$ solves the fixed point equation*

$$x^* = P_C\bigl(x^* - \gamma A^*(I - P_Q)Ax^*\bigr).$$

Lemma 2.2 (see [8, 20]). *We have the following assertions.*
(a) *$T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism.*
(b) *If $T$ is $\nu$-ism, then for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism.*
(c) *$T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > \frac{1}{2}$.*
(d) *If $S$ and $T$ are both averaged, then the product (composite) $ST$ is averaged.*

Lemma 2.3 (see [30] Demiclosedness Principle). *Let $C$ be a closed and convex subset of a Hilbert space $H$ and let $T : C \to C$ be a nonexpansive mapping with $\operatorname{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then*

$$(I - T)x = y.$$

*In particular, if $y = 0$, then $x \in \operatorname{Fix}(T)$.*

Lemma 2.4 (see [31]). *Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $E$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with*

$$0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1.$$

*Suppose that*

$$x_{n+1} = (1 - \beta_n)x_n + \beta_nz_n$$

*for all $n \ge 0$ and*

$$\limsup_{n\to\infty}\bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$

*Then $\lim_{n\to\infty}\|z_n - x_n\| = 0$.*

Lemma 2.5 (see [32]). *Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that*

$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n, \quad n \ge 0,$$

*where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that*
(1) *$\sum_{n=0}^{\infty}\gamma_n = \infty$;*
(2) *$\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\gamma_n\delta_n| < \infty$.*
*Then $\lim_{n\to\infty}a_n = 0$.*
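The mechanism behind Lemma 2.5 can be illustrated numerically; the following toy run, with parameters of my choosing, shows the recursion decaying to zero.

```python
# Illustration of Lemma 2.5: the recursion a_{n+1} <= (1 - g_n) a_n + g_n d_n
# drives a_n to 0 whenever g_n lies in (0, 1) with divergent sum and d_n -> 0.
a = 1.0
for n in range(1, 200001):
    g = 1.0 / n          # gamma_n: sum over n diverges
    d = n ** -0.5        # delta_n: tends to 0
    a = (1.0 - g) * a + g * d

print(a)  # close to 0
```

Intuitively, the divergent sum $\sum\gamma_n$ lets the recursion "forget" its initial value, while $\limsup\delta_n \le 0$ ensures that what replaces it is arbitrarily small.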

#### 3. Main Results

In this section, we will state and prove our main results.

Theorem 3.1. *Assume that the SFP (1.2) is consistent. Let $\{x_n\}$ be a sequence generated by the following algorithm:*

$$x_{n+1} = P_C\bigl((1 - \alpha_n\gamma_n)x_n - \gamma_nA^*(I - P_Q)Ax_n\bigr), \quad n \ge 0, \tag{3.1}$$

*where the sequences $\{\alpha_n\} \subset (0, 1)$ and $\{\gamma_n\}$ satisfy the following conditions:*
(C1) *$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n\gamma_n = \infty$;*
(C2) *$\gamma_n \in [a, b]$ for some $a, b \in \bigl(0, \frac{2}{\|A\|^2}\bigr)$ and $\lim_{n\to\infty}(\gamma_{n+1} - \gamma_n) = 0$.*
*Then the sequence $\{x_n\}$ generated by (3.1) converges strongly to the minimum-norm solution of the SFP (1.2).*

*Proof.* It is known that $\nabla f = A^*(I - P_Q)A$ is $\frac{1}{\|A\|^2}$-ism. Then, we can rewrite (3.1) as

$$x_{n+1} = P_C\Bigl((1 - \alpha_n\gamma_n)\Bigl(I - \frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f\Bigr)x_n\Bigr).$$

Since $\limsup_n\gamma_n < \frac{2}{\|A\|^2}$ and $\alpha_n\gamma_n \to 0$, we may assume that $\frac{\gamma_n}{1 - \alpha_n\gamma_n} \le \frac{2}{\|A\|^2}$ for all $n$. If $\frac{\gamma_n}{1 - \alpha_n\gamma_n} \le \frac{2}{\|A\|^2}$, then $\frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f$ is $\nu$-ism with $\nu \ge \frac{1}{2}$, so that $I - \frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f$ is nonexpansive. It follows that

$$\Bigl\|P_C\Bigl((1 - \alpha_n\gamma_n)\Bigl(I - \tfrac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f\Bigr)x\Bigr) - P_C\Bigl((1 - \alpha_n\gamma_n)\Bigl(I - \tfrac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f\Bigr)y\Bigr)\Bigr\| \le (1 - \alpha_n\gamma_n)\|x - y\|.$$

Thus, $x \mapsto P_C\bigl((1 - \alpha_n\gamma_n)x - \gamma_n\nabla f(x)\bigr)$ is a contractive mapping with coefficient $1 - \alpha_n\gamma_n$.

Pick up any $z \in \Gamma$. From Lemma 2.1, $z$ solves the SFP if and only if $z$ solves the fixed point equation $z = P_C\bigl(z - \lambda\nabla f(z)\bigr)$ for any fixed positive number $\lambda$; moreover $\nabla f(z) = 0$ because $Az \in Q$. So, we have $z = P_C\bigl((I - \frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f)z\bigr)$ for all $n$. From (3.1), we get

$$\|x_{n+1} - z\| \le (1 - \alpha_n\gamma_n)\|x_n - z\| + \alpha_n\gamma_n\|z\|.$$

By induction, we deduce

$$\|x_n - z\| \le \max\{\|x_0 - z\|, \|z\|\}, \quad n \ge 0.$$

This indicates that the sequence $\{x_n\}$ is bounded.

Since $\nabla f$ is $\|A\|^2$-Lipschitz, $\nabla f$ is $\frac{1}{\|A\|^2}$-ism, which then implies that $\gamma_n\nabla f$ is $\frac{1}{\gamma_n\|A\|^2}$-ism. So, by Lemma 2.2, $I - \gamma_n\nabla f$ is $\frac{\gamma_n\|A\|^2}{2}$-averaged. That is,

$$I - \gamma_n\nabla f = (1 - a_n)I + a_nT, \qquad a_n := \frac{\gamma_n\|A\|^2}{2} \in (0, 1),$$

where $T := I - \frac{2}{\|A\|^2}\nabla f$ is a nonexpansive mapping. Since $P_C$ is $\frac{1}{2}$-averaged, $P_C = \frac{1}{2}(I + S)$ for some nonexpansive mapping $S$. Then, writing $w_n := (1 - \alpha_n\gamma_n)x_n - \gamma_n\nabla f(x_n)$, we can rewrite $x_{n+1} = P_Cw_n$ as

$$x_{n+1} = \frac{1 - a_n - \alpha_n\gamma_n}{2}x_n + \frac{a_nTx_n + Sw_n}{2} = (1 - \beta_n)x_n + \beta_nz_n,$$

where

$$\beta_n = \frac{1 + a_n + \alpha_n\gamma_n}{2}, \qquad z_n = \frac{a_nTx_n + Sw_n}{1 + a_n + \alpha_n\gamma_n}.$$

It follows that

$$\|z_{n+1} - z_n\| \le \frac{a_{n+1}\|Tx_{n+1} - Tx_n\| + |a_{n+1} - a_n|\|Tx_n\| + \|Sw_{n+1} - Sw_n\|}{1 + a_{n+1} + \alpha_{n+1}\gamma_{n+1}} + \Bigl|\frac{1}{1 + a_{n+1} + \alpha_{n+1}\gamma_{n+1}} - \frac{1}{1 + a_n + \alpha_n\gamma_n}\Bigr|\,\|a_nTx_n + Sw_n\|.$$

Now we choose a constant $M > 0$ such that

$$M \ge \sup_n\bigl\{\|x_n\|, \|Tx_n\|, \|\nabla f(x_n)\|, \|a_nTx_n + Sw_n\|\bigr\}.$$

We have the following estimates:

$$\|Sw_{n+1} - Sw_n\| \le \|w_{n+1} - w_n\| \le \|x_{n+1} - x_n\| + M\bigl(|\gamma_{n+1} - \gamma_n| + \alpha_{n+1}\gamma_{n+1} + \alpha_n\gamma_n\bigr), \qquad |a_{n+1} - a_n| = \frac{\|A\|^2}{2}|\gamma_{n+1} - \gamma_n|.$$

Thus, we deduce that

$$\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\| \le M_1\bigl(|\gamma_{n+1} - \gamma_n| + \alpha_{n+1}\gamma_{n+1} + \alpha_n\gamma_n\bigr)$$

for some constant $M_1 > 0$. Note that $\gamma_{n+1} - \gamma_n \to 0$ and $\alpha_n\gamma_n \to 0$, so that $\limsup_n\bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0$, while $\gamma_n \in [a, b] \subset \bigl(0, \frac{2}{\|A\|^2}\bigr)$ ensures $0 < \liminf_n\beta_n \le \limsup_n\beta_n < 1$. Hence, by Lemma 2.4, we get the following:

$$\lim_{n\to\infty}\|z_n - x_n\| = 0.$$

It follows that

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = \lim_{n\to\infty}\beta_n\|z_n - x_n\| = 0.$$

Consequently,

$$\lim_{n\to\infty}\|x_n - P_C(I - \gamma_n\nabla f)x_n\| \le \lim_{n\to\infty}\bigl(\|x_{n+1} - x_n\| + \alpha_n\gamma_n\|x_n\|\bigr) = 0. \tag{3.14}$$
Now we show that the weak limit set $\omega_w(x_n) \subset \Gamma$. Choose any $\hat{x} \in \omega_w(x_n)$. Since $\{x_n\}$ is bounded, there must exist a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \hat{x}$. At the same time, the real number sequence $\{\gamma_{n_k}\}$ is bounded. Thus, there exists a subsequence of $\{\gamma_{n_k}\}$ which converges to some $\gamma$. Without loss of generality, we may assume that $\gamma_{n_k} \to \gamma$. Note that $\gamma \in [a, b]$. So, $\gamma \in \bigl(0, \frac{2}{\|A\|^2}\bigr)$. Next, we only need to show that $\hat{x} \in \Gamma$. First, from (3.14) we have that $\|x_{n_k} - P_C(I - \gamma_{n_k}\nabla f)x_{n_k}\| \to 0$. Then, we have the following:

$$\|x_{n_k} - P_C(I - \gamma\nabla f)x_{n_k}\| \le \|x_{n_k} - P_C(I - \gamma_{n_k}\nabla f)x_{n_k}\| + |\gamma_{n_k} - \gamma|\,\|\nabla f(x_{n_k})\| \to 0.$$

Since $P_C(I - \gamma\nabla f)$ is nonexpansive, it then follows from Lemma 2.3 (demiclosedness principle) that $\hat{x} \in \operatorname{Fix}\bigl(P_C(I - \gamma\nabla f)\bigr)$. Hence $\hat{x} \in \Gamma$ because $\operatorname{Fix}\bigl(P_C(I - \gamma\nabla f)\bigr) = \Gamma$ by Lemma 2.1. So, $\omega_w(x_n) \subset \Gamma$.

Finally, we prove that $x_n \to x^\dagger := P_\Gamma(0)$, where $x^\dagger$ is the minimum-norm solution of (1.2). First, we show that $\limsup_n\langle -x^\dagger, x_n - x^\dagger\rangle \le 0$. Observe that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ satisfying that

$$\limsup_{n\to\infty}\langle -x^\dagger, x_n - x^\dagger\rangle = \lim_{j\to\infty}\langle -x^\dagger, x_{n_j} - x^\dagger\rangle.$$

Since $\{x_{n_j}\}$ is bounded, there exists a subsequence of $\{x_{n_j}\}$ which converges weakly to some $\tilde{x} \in \omega_w(x_n) \subset \Gamma$. Without loss of generality, we assume that $x_{n_j} \rightharpoonup \tilde{x}$. Then, since $x^\dagger = P_\Gamma(0)$, we obtain by property (b) of the projection the following:

$$\limsup_{n\to\infty}\langle -x^\dagger, x_n - x^\dagger\rangle = \langle 0 - x^\dagger, \tilde{x} - x^\dagger\rangle \le 0. \tag{3.17}$$

Since $\frac{\gamma_n}{1 - \alpha_n\gamma_n} \le \frac{2}{\|A\|^2}$, the mapping $I - \frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f$ is nonexpansive. By using property (b) of $P_C$, we have the following (write $u_n := (1 - \alpha_n\gamma_n)x_n - \gamma_n\nabla f(x_n)$, so that $x_{n+1} = P_Cu_n$):

$$\|x_{n+1} - x^\dagger\|^2 \le \langle u_n - x^\dagger, x_{n+1} - x^\dagger\rangle \le (1 - \alpha_n\gamma_n)\|x_n - x^\dagger\|\,\|x_{n+1} - x^\dagger\| + \alpha_n\gamma_n\langle -x^\dagger, x_{n+1} - x^\dagger\rangle.$$

It follows that

$$\|x_{n+1} - x^\dagger\|^2 \le \frac{1 - \alpha_n\gamma_n}{1 + \alpha_n\gamma_n}\|x_n - x^\dagger\|^2 + \frac{2\alpha_n\gamma_n}{1 + \alpha_n\gamma_n}\langle -x^\dagger, x_{n+1} - x^\dagger\rangle. \tag{3.19}$$

Since $\sum_n\alpha_n\gamma_n = \infty$, from Lemma 2.5, (3.17), and (3.19), we deduce that $x_n \to x^\dagger$. This completes the proof.

*Remark 3.2.* We obtain the strong convergence of the regularized method (3.1) under the control conditions $\alpha_n \to 0$ and $\sum_n\alpha_n\gamma_n = \infty$. In Xu's [20] result, $0 < \gamma_n \le \frac{\alpha_n}{(\|A\|^2 + \alpha_n)^2}$, which forces $\gamma_n \to 0$. However, in our result, $\{\gamma_n\}$ may remain bounded away from zero, $\gamma_n \in [a, b] \subset \bigl(0, \frac{2}{\|A\|^2}\bigr)$.

Finally, we introduce an implicit method for the (SFP).

Take a constant $\gamma$ such that $0 < \gamma < \frac{2}{\|A\|^2 + 2}$. For $\alpha \in (0, 1)$, we define a mapping

$$T_\alpha x := P_C\bigl((1 - \gamma\alpha)x - \gamma A^*(I - P_Q)Ax\bigr), \quad x \in H_1.$$

For $\alpha \in (0, 1)$, we know that $\nabla f_\alpha = \alpha I + A^*(I - P_Q)A$ is $(\alpha + \|A\|^2)$-Lipschitz and $\alpha$-strongly monotone. Thus, $T_\alpha$ is a contraction with coefficient $1 - \gamma\alpha$. So, $T_\alpha$ has a unique fixed point in $C$, denoted by $x_\alpha$, that is,

$$x_\alpha = P_C\bigl((1 - \gamma\alpha)x_\alpha - \gamma A^*(I - P_Q)Ax_\alpha\bigr). \tag{3.21}$$

Next, we show the convergence of the net $\{x_\alpha\}$ defined by (3.21).
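Before turning to the convergence theorem, the implicit net can be explored numerically. The sketch below is a toy instance of my own (the matrix and the sets are illustrative): for each $\alpha$ it approximates $x_\alpha$ by iterating the contraction, then drives $\alpha \to 0$.

```python
import numpy as np

# A numerical sketch of the implicit net
#   x_alpha = P_C[(1 - gamma*alpha) x_alpha - gamma * A^T (I - P_Q) A x_alpha],
# computed for each alpha by fixed-point iteration of the contraction, with
# C = {x : x1 + x2 >= 2} and Q = {y : y <= 3 componentwise} (closed-form projections).
A = np.array([[1.0, 2.0], [0.0, 1.0]])

def proj_C(x):                        # projection onto the half-space x1 + x2 >= 2
    s = x[0] + x[1] - 2.0
    return x if s >= 0.0 else x - (s / 2.0) * np.ones(2)

proj_Q = lambda y: np.minimum(y, 3.0)

gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2.0)   # gamma < 2/(||A||^2 + 2)
x = np.zeros(2)
for alpha in [10.0 ** (-k) for k in range(1, 7)]: # drive alpha -> 0
    for _ in range(1000):                         # inner fixed-point iteration
        Ax = A @ x
        x = proj_C((1.0 - gamma * alpha) * x - gamma * A.T @ (Ax - proj_Q(Ax)))

print(x)  # tends to the minimum-norm solution (1, 1) of this toy SFP
```

For this instance the solution set is $\{x : x_1 + x_2 \ge 2,\ Ax \le 3\}$, whose minimum-norm point is $(1, 1)$, and the net settles there.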

Theorem 3.3. *Assume that the SFP (1.2) is consistent. As $\alpha \to 0$, the net $\{x_\alpha\}$ defined by (3.21) converges strongly to the minimum-norm solution of the SFP.*

*Proof.* Let $z$ be any point in $\Gamma$; then $\nabla f(z) = 0$ and $z = P_Cz$. We can rewrite (3.21) as

$$x_\alpha = P_C\Bigl((1 - \gamma\alpha)\Bigl(I - \frac{\gamma}{1 - \gamma\alpha}\nabla f\Bigr)x_\alpha\Bigr).$$

Since $\frac{\gamma}{1 - \gamma\alpha} \le \frac{2}{\|A\|^2}$, the mapping $I - \frac{\gamma}{1 - \gamma\alpha}\nabla f$ is nonexpansive. It follows that

$$\|x_\alpha - z\| \le \Bigl\|(1 - \gamma\alpha)\Bigl(I - \tfrac{\gamma}{1 - \gamma\alpha}\nabla f\Bigr)x_\alpha - z\Bigr\| \le (1 - \gamma\alpha)\|x_\alpha - z\| + \gamma\alpha\|z\|.$$

Hence,

$$\|x_\alpha - z\| \le \|z\|.$$

Then, $\{x_\alpha\}$ is bounded.

From (3.21), we have the following:

$$\|x_\alpha - P_C(I - \gamma\nabla f)x_\alpha\| = \bigl\|P_C\bigl((1 - \gamma\alpha)x_\alpha - \gamma\nabla f(x_\alpha)\bigr) - P_C\bigl(x_\alpha - \gamma\nabla f(x_\alpha)\bigr)\bigr\| \le \gamma\alpha\|x_\alpha\| \to 0 \quad (\alpha \to 0). \tag{3.25}$$

Next we show that $\{x_\alpha\}$ is relatively norm compact as $\alpha \to 0$. Assume that $\{\alpha_n\} \subset (0, 1)$ is such that $\alpha_n \to 0$ as $n \to \infty$. Put $x_n := x_{\alpha_n}$. From (3.25), we have the following:

$$\|x_n - P_C(I - \gamma\nabla f)x_n\| \to 0. \tag{3.26}$$

By using the property (b) of the projection, we get the following (write $u_\alpha := (1 - \gamma\alpha)x_\alpha - \gamma\nabla f(x_\alpha)$, so that $x_\alpha = P_Cu_\alpha$):

$$\|x_\alpha - z\|^2 \le \langle u_\alpha - z, x_\alpha - z\rangle \le (1 - \gamma\alpha)\|x_\alpha - z\|^2 + \gamma\alpha\langle -z, x_\alpha - z\rangle, \quad z \in \Gamma.$$

Hence,

$$\|x_\alpha - z\|^2 \le \langle -z, x_\alpha - z\rangle, \quad z \in \Gamma.$$

In particular,

$$\|x_n - z\|^2 \le \langle -z, x_n - z\rangle, \quad z \in \Gamma. \tag{3.29}$$

Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ which converges weakly to a point $\tilde{x}$. Without loss of generality, we may assume that $\{x_n\}$ converges weakly to $\tilde{x}$. Noticing (3.26), we can use Lemma 2.3 to get $\tilde{x} \in \operatorname{Fix}\bigl(P_C(I - \gamma\nabla f)\bigr) = \Gamma$. Therefore, we can substitute $\tilde{x}$ for $z$ in (3.29) to get the following:

$$\|x_n - \tilde{x}\|^2 \le \langle -\tilde{x}, x_n - \tilde{x}\rangle.$$

Consequently, $x_n \rightharpoonup \tilde{x}$ actually implies that $x_n \to \tilde{x}$ in norm. This has proved the relative norm-compactness of the net $\{x_\alpha\}$ as $\alpha \to 0$. Letting $n \to \infty$ in (3.29), we have

$$\|\tilde{x} - z\|^2 \le \langle -z, \tilde{x} - z\rangle, \quad z \in \Gamma.$$

This implies that

$$\langle \tilde{x}, \tilde{x} - z\rangle \le 0, \quad z \in \Gamma,$$

which is equivalent to the following:

$$\langle 0 - \tilde{x}, z - \tilde{x}\rangle \le 0, \quad z \in \Gamma.$$

Hence, $\tilde{x} = P_\Gamma(0)$ by property (b). Therefore, each weak cluster point of $\{x_\alpha\}$ (as $\alpha \to 0$) equals the minimum-norm solution $x^\dagger = P_\Gamma(0)$. So, $x_\alpha \to x^\dagger$. This completes the proof.

#### Acknowledgments

Y. Yao was supported in part by the Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279, and NSFC 71161001-G0105. W. Jigang was supported in part by NSFC 61173032. Y.-C. Liou was partially supported by the Program TH-1-3, Optimization Lean Cycle, of Sub-Projects TH-1 of Spindle Plan Four in the Excellence Teaching and Learning Plan of Cheng Shiu University, and was supported in part by NSC 100-2221-E-230-012.