Abstract and Applied Analysis, Volume 2012 (2012), Article ID 140679, 13 pages. http://dx.doi.org/10.1155/2012/140679
Research Article

## Regularized Methods for the Split Feasibility Problem

Yonghong Yao,1 Wu Jigang,2 and Yeong-Cheng Liou3

1Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
2School of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300387, China
3Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan

Received 2 December 2011; Accepted 11 December 2011

Academic Editor: Khalida Inayat Noor

Copyright © 2012 Yonghong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Many applied problems, such as image reconstruction and signal processing, can be formulated as the split feasibility problem (SFP), and several algorithms have been introduced in the literature for solving it. In this paper, we continue the convergence analysis of regularized methods for the (SFP). Two regularized methods are presented. Under different control conditions, we prove that the suggested algorithms converge strongly to the minimum-norm solution of the (SFP).

#### 1. Introduction

The well-known convex feasibility problem is to find a point $x^{*}$ satisfying the following:

$x^{*}\in\bigcap_{i=1}^{N}C_{i},$

where $N\ge 1$ is an integer and each $C_{i}$ is a nonempty closed convex subset of a Hilbert space $H$. Note that the convex feasibility problem has received a lot of attention due to its extensive applications in many applied disciplines as diverse as approximation theory, image recovery and signal processing, control theory, biomedical engineering, communications, and geophysics (see [1–3] and the references therein).

A special case of the convex feasibility problem is the split feasibility problem (SFP), which is to find a point $x^{*}$ such that

$x^{*}\in C\quad\text{and}\quad Ax^{*}\in Q,$ (1.2)

where $C$ and $Q$ are two closed convex subsets of two Hilbert spaces $H_{1}$ and $H_{2}$, respectively, and $A:H_{1}\to H_{2}$ is a bounded linear operator. We use $\Gamma=\{x\in C: Ax\in Q\}$ to denote the solution set of the (SFP), and we assume that the (SFP) is consistent, that is, $\Gamma\ne\emptyset$. A special case of the (SFP) is the convexly constrained linear inverse problem ([4]) in finite-dimensional Hilbert spaces, namely, to find $x\in C$ with $Ax=b$, which has been extensively investigated by using the Landweber iterative method ([5]):

$x_{n+1}=x_{n}+\gamma A^{T}(b-Ax_{n}),\quad n\ge 0.$
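As an illustration (not from the paper), the constrained Landweber iteration $x_{n+1}=P_C(x_{n}+\gamma A^{T}(b-Ax_{n}))$ can be sketched in a few lines; the toy system, the nonnegativity constraint, and all names below are our own choices:

```python
import numpy as np

def projected_landweber(A, b, proj_C, gamma=None, iters=2000):
    """Landweber iteration x_{n+1} = P_C(x_n + gamma * A^T (b - A x_n)).

    The unconstrained scheme converges for 0 < gamma < 2 / ||A||^2;
    composing with the (nonexpansive) projection P_C preserves this.
    """
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = proj_C(x + gamma * A.T @ (b - A @ x))
    return x

# Toy problem (our choice): C = nonnegative orthant, so P_C is a clip.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
x = projected_landweber(A, b, proj_C=lambda z: np.maximum(z, 0.0))
```

Here the consistent system $Ax=b$ with a positive solution makes the constraint inactive at the limit, so the iterates recover `x_true`.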

The (SFP) in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [6] for modeling inverse problems which arise from phase retrievals and in medical image reconstruction. The original algorithm introduced in [6] involves the computation of the matrix inverse $A^{-1}$:

$x_{n+1}=A^{-1}P_{Q}\bigl(P_{A(C)}(Ax_{n})\bigr),$

where $C,Q\subset\mathbb{R}^{n}$ are closed convex sets and $A$ is a full-rank $n\times n$ matrix, and thus it did not become popular. A more popular algorithm that solves the (SFP) is the CQ algorithm of Byrne ([7, 8]):

$x_{n+1}=P_{C}\bigl(x_{n}-\gamma A^{T}(I-P_{Q})Ax_{n}\bigr),\quad 0<\gamma<\frac{2}{\|A\|^{2}}.$

The CQ algorithm only involves the computations of the projections $P_{C}$ and $P_{Q}$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_{C}$ and $P_{Q}$ have closed-form expressions (e.g., $C$ and $Q$ are closed balls or half-spaces). There is a large number of references on the CQ method for the (SFP) in the literature; see, for instance, [9–19]. It remains, however, a challenge how to implement the CQ algorithm in the case where the projections $P_{C}$ and/or $P_{Q}$ fail to have closed-form expressions, though theoretically we can prove the (weak) convergence of the algorithm.
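A minimal sketch of the CQ algorithm on a toy SFP (the sets, the matrix, and the function names below are illustrative assumptions, not from the paper); both projections have closed forms since $C$ is a closed ball and $Q$ a box:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma=None, iters=1000):
    """Byrne's CQ algorithm for the SFP: find x in C with A x in Q.

    Iterates x_{n+1} = P_C(x_n - gamma * A^T (A x_n - P_Q(A x_n)))
    with step size 0 < gamma < 2 / ||A||^2.
    """
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy SFP (our choice): C = closed unit ball, Q = box [1, 2] x [1, 2].
A = np.array([[3.0, 0.0], [0.0, 3.0]])
proj_C = lambda z: z / max(1.0, np.linalg.norm(z))  # projection onto ball
proj_Q = lambda y: np.clip(y, 1.0, 2.0)             # projection onto box
x = cq_algorithm(A, proj_C, proj_Q, x0=np.zeros(2))
```

This instance is consistent (e.g., $(0.5,0.5)$ solves it), so the returned point lies in $C$ with $Ax$ in $Q$ up to numerical tolerance.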

Note that $x\in\Gamma$ means that $x\in C$ and $Ax-y=0$ for some $y\in Q$. This motivates us to consider the distance function $d(Ax,Q)=\inf_{y\in Q}\|Ax-y\|$ and the minimization problem $\min_{x\in C}\frac{1}{2}d(Ax,Q)^{2}$. Minimizing with respect to $y\in Q$ first makes us consider the minimization

$\min_{x\in C} f(x):=\frac{1}{2}\|(I-P_{Q})Ax\|^{2}.$ (1.8)

However, (1.8) is, in general, ill posed, so regularization is needed. We consider Tikhonov's regularization

$\min_{x\in C} f_{\alpha}(x):=\frac{1}{2}\|(I-P_{Q})Ax\|^{2}+\frac{1}{2}\alpha\|x\|^{2},$ (1.9)

where $\alpha>0$ is the regularization parameter. We can compute the gradient of $f_{\alpha}$ as

$\nabla f_{\alpha}=\nabla f+\alpha I=A^{*}(I-P_{Q})A+\alpha I.$ (1.10)

Define the Picard iterates

$x_{n+1}^{\alpha}=P_{C}\bigl(x_{n}^{\alpha}-\gamma(A^{*}(I-P_{Q})Ax_{n}^{\alpha}+\alpha x_{n}^{\alpha})\bigr).$ (1.11)

Xu [20] showed that if the (SFP) (1.2) is consistent, then $x_{n}^{\alpha}\to x_{\alpha}$ as $n\to\infty$, and consequently the strong limit $\lim_{\alpha\to 0}x_{\alpha}$ exists and is the minimum-norm solution of the (SFP). Note that (1.11) is a double-step iteration. Xu [20] further suggested a single-step regularized method:

$x_{n+1}=P_{C}\bigl((1-\alpha_{n}\gamma_{n})x_{n}-\gamma_{n}A^{*}(I-P_{Q})Ax_{n}\bigr).$ (1.12)

Xu proved that the sequence $\{x_{n}\}$ generated by (1.12) converges in norm to the minimum-norm solution of the (SFP) provided that the parameter sequences $\{\alpha_{n}\}$ and $\{\gamma_{n}\}$ satisfy the control conditions (i)–(iii) imposed in [20].
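The single-step regularized method (1.12) can be sketched on a toy instance as follows; the parameter schedules $\alpha_{n}=1/n$ and $\gamma_{n}=1/(\|A\|^{2}+\alpha_{n})$, like the sets and names, are our illustrative assumptions and not the precise conditions of [20]:

```python
import numpy as np

def regularized_sfp(A, proj_C, proj_Q, x0, iters=5000):
    """Single-step regularized iteration
    x_{n+1} = P_C((1 - alpha_n*gamma_n) x_n - gamma_n * A^T (I - P_Q) A x_n),
    with alpha_n -> 0, targeting the minimum-norm solution of the SFP.
    """
    L = np.linalg.norm(A, 2) ** 2          # ||A||^2, the Lipschitz constant
    x = x0.astype(float)
    for n in range(1, iters + 1):
        alpha = 1.0 / n                    # regularization, alpha_n -> 0
        gamma = 1.0 / (L + alpha)          # step size (illustrative choice)
        Ax = A @ x
        x = proj_C((1 - alpha * gamma) * x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy SFP (our choice): C = closed unit ball, Q = [1, 2]^2, A = 3I.
# Its minimum-norm solution is (1/3, 1/3).
A = np.array([[3.0, 0.0], [0.0, 3.0]])
proj_C = lambda z: z / max(1.0, np.linalg.norm(z))
proj_Q = lambda y: np.clip(y, 1.0, 2.0)
x = regularized_sfp(A, proj_C, proj_Q, x0=np.array([5.0, -5.0]))
```

The vanishing regularization term $\alpha_{n}x_{n}$ is what steers the iterates toward the minimum-norm element of the solution set rather than an arbitrary solution.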

Recently, the minimum-norm solution and the related minimization problems have been considered extensively in the literature; for related works, please see [21–29]. The main purpose of this paper is to further investigate the regularized method (1.12). Under different control conditions, we prove that this algorithm converges strongly to the minimum-norm solution of the (SFP). We also consider an implicit method for finding the minimum-norm solution of the (SFP).

#### 2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T:C\to C$ is called nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in C$. We will use $\operatorname{Fix}(T)$ to denote the set of fixed points of $T$, that is, $\operatorname{Fix}(T)=\{x\in C: Tx=x\}$. A mapping $T$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if there exists a constant $\nu>0$ such that $\langle Tx-Ty,x-y\rangle\ge\nu\|Tx-Ty\|^{2}$ for all $x,y$. Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_{C}$, assigns to each $x\in H$ the unique point $P_{C}x\in C$ with the property $\|x-P_{C}x\|=\inf_{y\in C}\|x-y\|$. It is well known that the metric projection $P_{C}$ of $H$ onto $C$ has the following basic properties: (a) $\|P_{C}x-P_{C}y\|\le\|x-y\|$ for all $x,y\in H$; (b) $\langle x-y,P_{C}x-P_{C}y\rangle\ge\|P_{C}x-P_{C}y\|^{2}$ for every $x,y\in H$; (c) $\langle x-P_{C}x,y-P_{C}x\rangle\le 0$ for all $x\in H$, $y\in C$.
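Two of these projection properties, firm nonexpansiveness and the variational characterization, can be checked numerically for a set with a closed-form projection (a sketch; the closed unit ball and the random sampling are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball of radius r (closed form)."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# Check two textbook properties of the metric projection on random data:
# firm nonexpansiveness:  <x - y, Px - Py> >= ||Px - Py||^2, and the
# characterization:       <x - Px, z - Px> <= 0 for every z in C.
for _ in range(100):
    x = rng.normal(size=3) * 3
    y = rng.normal(size=3) * 3
    z = proj_ball(rng.normal(size=3) * 3)  # an arbitrary point of C
    px, py = proj_ball(x), proj_ball(y)
    assert np.dot(x - y, px - py) >= np.dot(px - py, px - py) - 1e-10
    assert np.dot(x - px, z - px) <= 1e-10
```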

Next we adopt the following notation: (i) $x_{n}\to x$ means that $\{x_{n}\}$ converges strongly to $x$; (ii) $x_{n}\rightharpoonup x$ means that $\{x_{n}\}$ converges weakly to $x$; (iii) $\omega_{w}(x_{n})=\{x:\exists\,\{x_{n_{j}}\}\subset\{x_{n}\}\text{ with }x_{n_{j}}\rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_{n}\}$.

Lemma 2.1 (see [20]). Given $x^{*}\in C$ and any fixed $\gamma>0$, $x^{*}$ solves the (SFP) if and only if $x^{*}$ solves the fixed point equation $x^{*}=P_{C}\bigl(x^{*}-\gamma A^{*}(I-P_{Q})Ax^{*}\bigr)$.

Lemma 2.2 (see [8, 20]). We have the following assertions. (a) $T$ is nonexpansive if and only if the complement $I-T$ is $\frac{1}{2}$-ism. (b) If $T$ is $\nu$-ism, then for $\gamma>0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism. (c) $T$ is averaged if and only if the complement $I-T$ is $\nu$-ism for some $\nu>1/2$. (d) If $S$ and $T$ are both averaged, then the product (composite) $ST$ is averaged.

Lemma 2.3 (see [30], Demiclosedness Principle). Let $C$ be a closed and convex subset of a Hilbert space $H$ and let $T:C\to C$ be a nonexpansive mapping with $\operatorname{Fix}(T)\ne\emptyset$. If $\{x_{n}\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I-T)x_{n}\}$ converges strongly to $y$, then $(I-T)x=y$. In particular, if $y=0$, then $x\in\operatorname{Fix}(T)$.

Lemma 2.4 (see [31]). Let $\{x_{n}\}$ and $\{y_{n}\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_{n}\}$ be a sequence in $[0,1]$ with $0<\liminf_{n\to\infty}\beta_{n}\le\limsup_{n\to\infty}\beta_{n}<1$. Suppose that $x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}x_{n}$ for all $n\ge 0$ and $\limsup_{n\to\infty}\bigl(\|y_{n+1}-y_{n}\|-\|x_{n+1}-x_{n}\|\bigr)\le 0$. Then, $\lim_{n\to\infty}\|y_{n}-x_{n}\|=0$.

Lemma 2.5 (see [32]). Assume that $\{a_{n}\}$ is a sequence of nonnegative real numbers such that $a_{n+1}\le(1-\gamma_{n})a_{n}+\gamma_{n}\delta_{n}$, where $\{\gamma_{n}\}$ is a sequence in $(0,1)$ and $\{\delta_{n}\}$ is a sequence such that (1) $\sum_{n=1}^{\infty}\gamma_{n}=\infty$, (2) $\limsup_{n\to\infty}\delta_{n}\le 0$ or $\sum_{n=1}^{\infty}|\gamma_{n}\delta_{n}|<\infty$. Then $\lim_{n\to\infty}a_{n}=0$.
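Lemma 2.5 can be illustrated numerically (an illustration of ours, not part of the paper): running the recursion at equality, the worst case, with $\gamma_{n}=1/n$ and $\delta_{n}=1/\sqrt{n}$ drives $a_{n}$ to zero:

```python
import math

a = 1.0
for n in range(1, 200001):
    gamma = 1.0 / n             # gamma_n in (0, 1), sum gamma_n = infinity
    delta = 1.0 / math.sqrt(n)  # delta_n -> 0
    a = (1 - gamma) * a + gamma * delta  # recursion taken at equality
# a_n decays (asymptotically like 2 / sqrt(n)), so a is now small
```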

#### 3. Main Results

In this section, we will state and prove our main results.

Theorem 3.1. Assume that the (SFP) (1.2) is consistent. Let $\{x_{n}\}$ be a sequence generated by the following algorithm:

$x_{n+1}=P_{C}\bigl((1-\alpha_{n}\gamma_{n})x_{n}-\gamma_{n}A^{*}(I-P_{Q})Ax_{n}\bigr),\quad n\ge 0,$ (3.1)

where the sequences $\{\alpha_{n}\}\subset(0,1)$ and $\{\gamma_{n}\}$ satisfy the following conditions: $\lim_{n\to\infty}\alpha_{n}=0$ and $\sum_{n=1}^{\infty}\alpha_{n}=\infty$, $\lim_{n\to\infty}(\gamma_{n+1}-\gamma_{n})=0$, and $0<\liminf_{n\to\infty}\gamma_{n}\le\limsup_{n\to\infty}\gamma_{n}<\frac{2}{\|A\|^{2}}$. Then the sequence $\{x_{n}\}$ generated by (3.1) converges strongly to the minimum-norm solution of the (SFP) (1.2).

Proof. It is known that $\nabla f=A^{*}(I-P_{Q})A$ is $\frac{1}{\|A\|^{2}}$-ism. For all sufficiently large $n$ we have $\frac{\gamma_{n}}{1-\alpha_{n}\gamma_{n}}<\frac{2}{\|A\|^{2}}$, so that $I-\frac{\gamma_{n}}{1-\alpha_{n}\gamma_{n}}\nabla f$ is nonexpansive. If $x,y\in C$, then it follows that $\|P_{C}[(1-\alpha_{n}\gamma_{n})x-\gamma_{n}\nabla f(x)]-P_{C}[(1-\alpha_{n}\gamma_{n})y-\gamma_{n}\nabla f(y)]\|\le(1-\alpha_{n}\gamma_{n})\|x-y\|$. Thus, $x\mapsto P_{C}[(1-\alpha_{n}\gamma_{n})x-\gamma_{n}\nabla f(x)]$ is a contractive mapping with coefficient $1-\alpha_{n}\gamma_{n}$.
Pick any $z\in\Gamma$. From Lemma 2.1, $z$ solves the (SFP) if and only if $z=P_{C}(z-\gamma\nabla f(z))$ for any fixed positive number $\gamma$; moreover, $\nabla f(z)=0$. So, $z=P_{C}(z-\gamma_{n}\nabla f(z))$ for all $n$. From (3.1), we get $\|x_{n+1}-z\|\le(1-\alpha_{n}\gamma_{n})\|x_{n}-z\|+\alpha_{n}\gamma_{n}\|z\|$. By induction, we deduce $\|x_{n}-z\|\le\max\{\|x_{0}-z\|,\|z\|\}$ for all $n$. This indicates that the sequence $\{x_{n}\}$ is bounded.
Since $\nabla f$ is $\|A\|^{2}$-Lipschitz, $\nabla f$ is $\frac{1}{\|A\|^{2}}$-ism, which then implies that $\gamma_{n}\nabla f$ is $\frac{1}{\gamma_{n}\|A\|^{2}}$-ism. So by Lemma 2.2, $I-\gamma_{n}\nabla f$ is averaged; that is, $I-\gamma_{n}\nabla f=(1-\beta_{n})I+\beta_{n}S_{n}$ for some nonexpansive mapping $S_{n}$ and $\beta_{n}\in(0,1)$. Since $P_{C}$ is averaged as well, by Lemma 2.2(d) the composition $P_{C}(I-\gamma_{n}\nabla f)$ is averaged. Then, we can rewrite (3.1) as $x_{n+1}=(1-\lambda_{n})x_{n}+\lambda_{n}y_{n}$, where $\{y_{n}\}$ is a bounded sequence and $\{\lambda_{n}\}$ satisfies $0<\liminf_{n}\lambda_{n}\le\limsup_{n}\lambda_{n}<1$. Now we choose a constant $M>0$ dominating all the bounded norm quantities involved and obtain the estimate $\|y_{n+1}-y_{n}\|-\|x_{n+1}-x_{n}\|\le M\bigl(|\gamma_{n+1}-\gamma_{n}|+\alpha_{n+1}\gamma_{n+1}+\alpha_{n}\gamma_{n}\bigr)$. Note that $\alpha_{n}\to 0$ and $\gamma_{n+1}-\gamma_{n}\to 0$. Thus, $\limsup_{n\to\infty}(\|y_{n+1}-y_{n}\|-\|x_{n+1}-x_{n}\|)\le 0$. Hence, by Lemma 2.4, we get $\lim_{n\to\infty}\|y_{n}-x_{n}\|=0$. It follows that $\lim_{n\to\infty}\|x_{n+1}-x_{n}\|=0$. Consequently, $\lim_{n\to\infty}\|x_{n}-P_{C}(I-\gamma_{n}\nabla f)x_{n}\|=0$. Now we show that the weak limit set satisfies $\omega_{w}(x_{n})\subset\Gamma$. Choose any $\hat{x}\in\omega_{w}(x_{n})$. Since $\{x_{n}\}$ is bounded, there must exist a subsequence $\{x_{n_{j}}\}$ of $\{x_{n}\}$ such that $x_{n_{j}}\rightharpoonup\hat{x}$. At the same time, the real number sequence $\{\gamma_{n_{j}}\}$ is bounded. Thus, there exists a subsequence of $\{\gamma_{n_{j}}\}$ which converges to some $\gamma\in(0,\frac{2}{\|A\|^{2}})$. Without loss of generality, we may assume that $\gamma_{n_{j}}\to\gamma$. Then $\|x_{n_{j}}-P_{C}(I-\gamma\nabla f)x_{n_{j}}\|\to 0$ as $j\to\infty$. Since $P_{C}(I-\gamma\nabla f)$ is nonexpansive, it follows from Lemma 2.3 (demiclosedness principle) that $\hat{x}\in\operatorname{Fix}(P_{C}(I-\gamma\nabla f))$. Hence, $\hat{x}\in\Gamma$ because of Lemma 2.1. So, $\omega_{w}(x_{n})\subset\Gamma$.
Finally, we prove that $x_{n}\to x^{\dagger}:=P_{\Gamma}(0)$, where $x^{\dagger}$ is the minimum-norm solution of (1.2). First, we show that $\limsup_{n\to\infty}\langle -x^{\dagger},x_{n}-x^{\dagger}\rangle\le 0$. Observe that there exists a subsequence $\{x_{n_{k}}\}$ of $\{x_{n}\}$ satisfying $\lim_{k\to\infty}\langle -x^{\dagger},x_{n_{k}}-x^{\dagger}\rangle=\limsup_{n\to\infty}\langle -x^{\dagger},x_{n}-x^{\dagger}\rangle$. Since $\{x_{n_{k}}\}$ is bounded, there exists a subsequence of $\{x_{n_{k}}\}$ which converges weakly to some $\tilde{x}\in\omega_{w}(x_{n})\subset\Gamma$. Without loss of generality, we assume that $x_{n_{k}}\rightharpoonup\tilde{x}$. Then, by property (c) of the projection $P_{\Gamma}$, we obtain $\limsup_{n\to\infty}\langle -x^{\dagger},x_{n}-x^{\dagger}\rangle=\langle 0-x^{\dagger},\tilde{x}-x^{\dagger}\rangle\le 0$. By using property (b) of $P_{C}$, we derive the estimate $\|x_{n+1}-x^{\dagger}\|^{2}\le(1-\alpha_{n}\gamma_{n})\|x_{n}-x^{\dagger}\|^{2}+2\alpha_{n}\gamma_{n}\langle -x^{\dagger},x_{n+1}-x^{\dagger}\rangle$. From Lemma 2.5, we deduce that $x_{n}\to x^{\dagger}$. This completes the proof.
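The behavior established in Theorem 3.1 can be observed numerically with a step size kept constant, hence bounded away from zero (a toy illustration with assumed data: $C$ the closed unit ball in $\mathbb{R}^{2}$, $Q=[1,2]^{2}$, $A=3I$, so the minimum-norm solution is $(1/3,1/3)$; these choices and $\alpha_{n}=1/n$ are ours):

```python
import numpy as np

# Toy SFP (our choice): C = closed unit ball, Q = [1, 2]^2, A = 3I,
# whose minimum-norm solution is (1/3, 1/3).
A = np.array([[3.0, 0.0], [0.0, 3.0]])
proj_C = lambda z: z / max(1.0, np.linalg.norm(z))
proj_Q = lambda y: np.clip(y, 1.0, 2.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # constant step, bounded away from 0
x = np.array([0.9, 0.1])
for n in range(1, 20001):
    alpha = 1.0 / n                      # alpha_n -> 0, sum alpha_n = infinity
    Ax = A @ x
    x = proj_C((1 - alpha * gamma) * x - gamma * (A.T @ (Ax - proj_Q(Ax))))
# x approaches the minimum-norm solution (1/3, 1/3)
```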

Remark 3.2. We obtain the strong convergence of the regularized method (3.1) under the control conditions $\alpha_{n}\to 0$, $\sum_{n=1}^{\infty}\alpha_{n}=\infty$, and $0<\liminf_{n\to\infty}\gamma_{n}\le\limsup_{n\to\infty}\gamma_{n}<\frac{2}{\|A\|^{2}}$. In Xu's [20] result, the step sizes $\gamma_{n}$ are required to tend to zero; in our result, however, $\{\gamma_{n}\}$ may be bounded away from zero.
Finally, we introduce an implicit method for the (SFP).
Take a constant $\gamma$ such that $0<\gamma<\frac{2}{\|A\|^{2}+2}$. For $\alpha\in(0,1)$, we define a mapping

$W_{\alpha}x:=P_{C}\bigl((1-\gamma\alpha)x-\gamma A^{*}(I-P_{Q})Ax\bigr),\quad x\in C.$

For $\alpha\in(0,1)$, we know that $\nabla f_{\alpha}=A^{*}(I-P_{Q})A+\alpha I$ is $(\|A\|^{2}+\alpha)$-Lipschitz and $\alpha$-strongly monotone. Thus, $W_{\alpha}$ is a contraction with coefficient $1-\gamma\alpha$. So, $W_{\alpha}$ has a unique fixed point in $C$, denoted by $x_{\alpha}$, that is,

$x_{\alpha}=P_{C}\bigl((1-\gamma\alpha)x_{\alpha}-\gamma A^{*}(I-P_{Q})Ax_{\alpha}\bigr).$ (3.21)

Next, we show the convergence of the net $\{x_{\alpha}\}$ defined by (3.21).

Theorem 3.3. Assume that the (SFP) (1.2) is consistent. Then, as $\alpha\to 0$, the net $\{x_{\alpha}\}$ defined by (3.21) converges strongly to the minimum-norm solution of the (SFP).

Proof. Let $z$ be any point in $\Gamma$; then $\nabla f(z)=0$ and $z=P_{C}(z-\gamma'\nabla f(z))$ for every $\gamma'>0$. We can rewrite (3.21) as $x_{\alpha}=P_{C}\bigl((1-\gamma\alpha)x_{\alpha}-\gamma\nabla f(x_{\alpha})\bigr)$. Since $I-\frac{\gamma}{1-\gamma\alpha}\nabla f$ is nonexpansive for $0<\gamma<\frac{2}{\|A\|^{2}+2}$, it follows that $\|x_{\alpha}-z\|\le(1-\gamma\alpha)\|x_{\alpha}-z\|+\gamma\alpha\|z\|$. Hence, $\|x_{\alpha}-z\|\le\|z\|$. Then, $\{x_{\alpha}\}$ is bounded.
From (3.21), we have the following: $\|x_{\alpha}-P_{C}(I-\gamma\nabla f)x_{\alpha}\|=\|P_{C}((1-\gamma\alpha)x_{\alpha}-\gamma\nabla f(x_{\alpha}))-P_{C}(x_{\alpha}-\gamma\nabla f(x_{\alpha}))\|\le\gamma\alpha\|x_{\alpha}\|\to 0$ as $\alpha\to 0$. Next we show that $\{x_{\alpha}\}$ is relatively norm compact as $\alpha\to 0$. Assume that $\{\alpha_{n}\}\subset(0,1)$ is such that $\alpha_{n}\to 0$ as $n\to\infty$. Put $x_{n}:=x_{\alpha_{n}}$; then $\|x_{n}-P_{C}(I-\gamma\nabla f)x_{n}\|\to 0$. By using property (c) of the projection together with the monotonicity of $\nabla f$ and $\nabla f(z)=0$ for $z\in\Gamma$, we get $\langle x_{\alpha},z-x_{\alpha}\rangle\ge 0$, and hence $\|x_{\alpha}-z\|^{2}\le\langle -z,x_{\alpha}-z\rangle$ for all $z\in\Gamma$. In particular, $\|x_{n}-z\|^{2}\le\langle -z,x_{n}-z\rangle$ for all $z\in\Gamma$. Since $\{x_{n}\}$ is bounded, there exists a subsequence of $\{x_{n}\}$ which converges weakly to a point $\tilde{x}$. Without loss of generality, we may assume that $\{x_{n}\}$ converges weakly to $\tilde{x}$. Noticing that $\|x_{n}-P_{C}(I-\gamma\nabla f)x_{n}\|\to 0$, we can use Lemma 2.3 to get $\tilde{x}\in\operatorname{Fix}(P_{C}(I-\gamma\nabla f))=\Gamma$. Therefore, we can substitute $\tilde{x}$ for $z$ to get $\|x_{n}-\tilde{x}\|^{2}\le\langle -\tilde{x},x_{n}-\tilde{x}\rangle$. Consequently, $x_{n}\rightharpoonup\tilde{x}$ actually implies that $x_{n}\to\tilde{x}$ strongly. This proves the relative norm-compactness of the net $\{x_{\alpha}\}$ as $\alpha\to 0$. Letting $n\to\infty$, we have $\|\tilde{x}-z\|^{2}\le\langle -z,\tilde{x}-z\rangle$ for all $z\in\Gamma$. This implies that $\langle\tilde{x},\tilde{x}-z\rangle\le 0$ for all $z\in\Gamma$, which is equivalent to $\langle 0-\tilde{x},z-\tilde{x}\rangle\le 0$ for all $z\in\Gamma$. Hence, $\tilde{x}=P_{\Gamma}(0)$. Therefore, each cluster point of $\{x_{\alpha}\}$ (as $\alpha\to 0$) equals the minimum-norm solution $P_{\Gamma}(0)$. So, $x_{\alpha}\to P_{\Gamma}(0)$. This completes the proof.
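The implicit net (3.21) can be approximated numerically: for each fixed $\alpha$ the mapping $W_{\alpha}$ is a contraction, so its fixed point $x_{\alpha}$ is obtainable by Picard iteration (a sketch of ours; the inner-loop approach, toy data, and names are illustrative assumptions):

```python
import numpy as np

def implicit_net_point(A, proj_C, proj_Q, alpha, gamma, tol=1e-10):
    """Fixed point x_alpha of W_alpha(x) = P_C((1 - gamma*alpha) x
    - gamma * A^T (I - P_Q) A x), found by Picard iteration, which
    converges because W_alpha is a (1 - gamma*alpha)-contraction."""
    x = np.zeros(A.shape[1])
    while True:
        Ax = A @ x
        x_new = proj_C((1 - gamma * alpha) * x - gamma * (A.T @ (Ax - proj_Q(Ax))))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new

# Toy SFP (our choice): C = closed unit ball, Q = [1, 2]^2, A = 3I,
# whose minimum-norm solution is (1/3, 1/3).
A = np.array([[3.0, 0.0], [0.0, 3.0]])
proj_C = lambda z: z / max(1.0, np.linalg.norm(z))
proj_Q = lambda y: np.clip(y, 1.0, 2.0)
gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2.0)  # admissible constant step
net = [implicit_net_point(A, proj_C, proj_Q, a, gamma) for a in (1e-1, 1e-2, 1e-3)]
# net[-1] approaches the minimum-norm solution (1/3, 1/3)
```

As $\alpha$ decreases, the computed points trace the net $\{x_{\alpha}\}$ moving toward the minimum-norm solution.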

#### Acknowledgments

Y. Yao was supported in part by the Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279, and NSFC 71161001-G0105. W. Jigang was supported in part by NSFC 61173032. Y.-C. Liou was partially supported by the Program TH-1-3, Optimization Lean Cycle, of Sub-Projects TH-1 of Spindle Plan Four in Excellence Teaching and Learning Plan of Cheng Shiu University and was supported in part by NSC 100-2221-E-230-012.

#### References

1. H. Stark, Ed., Image Recovery: Theory and Application, Academic Press, Orlando, Fla, USA, 1987.
2. P. L. Combettes, “The convex feasibility problem in image recovery,” in Advances in Imaging and Electron Physics, P. Hawkes, Ed., vol. 95, pp. 155–270, Academic Press, New York, NY, USA, 1996.
3. H. H. Bauschke and J. M. Borwein, “On projection algorithms for solving convex feasibility problems,” SIAM Review, vol. 38, no. 3, pp. 367–426, 1996.
4. B. Eicke, “Iteration methods for convexly constrained ill-posed problems in Hilbert space,” Numerical Functional Analysis and Optimization, vol. 13, no. 5-6, pp. 413–429, 1992.
5. L. Landweber, “An iteration formula for Fredholm integral equations of the first kind,” American Journal of Mathematics, vol. 73, pp. 615–624, 1951.
6. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
7. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
8. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
9. Q. Yang, “The relaxed CQ algorithm solving the split feasibility problem,” Inverse Problems, vol. 20, no. 4, pp. 1261–1266, 2004.
10. B. Qu and N. Xiu, “A note on the CQ algorithm for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1655–1665, 2005.
11. J. Zhao and Q. Yang, “Several solution methods for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
12. H.-K. Xu, “A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem,” Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
13. Y. Dang and Y. Gao, “The strong convergence of a KM-CQ-like algorithm for a split feasibility problem,” Inverse Problems, vol. 27, no. 1, Article ID 015007, 9 pages, 2011.
14. F. Wang and H.-K. Xu, “Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem,” Journal of Inequalities and Applications, Article ID 102085, 13 pages, 2010.
15. Z. Wang, Q. I. Yang, and Y. Yang, “The relaxed inexact projection methods for the split feasibility problem,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5347–5359, 2011.
16. Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, “A unified approach for inversion problems in intensity-modulated radiation therapy,” Physics in Medicine and Biology, vol. 51, no. 10, pp. 2353–2365, 2006.
17. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, “The multiple-sets split feasibility problem and its applications for inverse problems,” Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
18. J. Zhao and Q. Yang, “Several solution methods for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
19. X. Yu, N. Shahzad, and Y. Yao, “Implicit and explicit algorithms for solving the split feasibility problem,” Optimization Letters. In press.
20. H.-K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 17 pages, 2010.
21. Y. Yao, Y. C. Liou, and S. M. Kang, “Two-step projection methods for a system of variational inequality problems in Banach spaces,” Journal of Global Optimization. In press.
22. Y. Yao and N. Shahzad, “Strong convergence of a proximal point algorithm with general errors,” Optimization Letters. In press.
23. Y. Yao, M. Aslam Noor, and Y. C. Liou, “Strong convergence of a modified extra-gradient method to the minimum-norm solution of variational inequalities,” Abstract and Applied Analysis, vol. 2012, Article ID 817436, 9 pages, 2012.
24. Y. Yao, R. Chen, and Y.-C. Liou, “A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem,” Mathematical and Computer Modelling, vol. 55, no. 3-4, pp. 1506–1515, 2012.
25. Y. Yao, Y. J. Cho, and P. X. Yang, “An iterative algorithm for a hierarchical problem,” Journal of Applied Mathematics, vol. 2012, Article ID 320421, 13 pages, 2012.
26. Y. Yao, Y. J. Cho, and Y.-C. Liou, “Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems,” European Journal of Operational Research, vol. 212, no. 2, pp. 242–250, 2011.
27. Y. Yao, R. Chen, and H.-K. Xu, “Schemes for finding minimum-norm solutions of variational inequalities,” Nonlinear Analysis. Theory, Methods & Applications, vol. 72, no. 7-8, pp. 3447–3456, 2010.
28. M. A. Noor, K. I. Noor, and E. Al-Said, “Iterative methods for solving nonconvex equilibrium variational inequalities,” Applied Mathematics and Information Sciences, vol. 6, no. 1, pp. 65–69, 2012.
29. M. A. Noor, “Extended general variational inequalities,” Applied Mathematics Letters, vol. 22, no. 2, pp. 182–186, 2009.
30. K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28, Cambridge University Press, Cambridge, UK, 1990.
31. T. Suzuki, “Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces,” Fixed Point Theory and Applications, no. 1, pp. 103–123, 2005.
32. H.-K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.