Abstract and Applied Analysis, Volume 2012 (2012), Article ID 125046, 15 pages. http://dx.doi.org/10.1155/2012/125046
Research Article

## A Strongly Convergent Method for the Split Feasibility Problem

1Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
2Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan
3Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia

Received 24 June 2012; Accepted 14 July 2012

Copyright © 2012 Yonghong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The purpose of this paper is to introduce and analyze a strongly convergent method that combines the regularization method with the extragradient method for solving the split feasibility problem in the setting of infinite-dimensional Hilbert spaces. The strong limit of the method is the minimum-norm solution of the split feasibility problem.

#### 1. Introduction

In 1994, Censor and Elfving [1] first introduced the split feasibility problem (SFP) in finite-dimensional Hilbert spaces for modeling inverse problems which arise from phase retrieval and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, [2] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy; see, for example, [3–5] and the references therein. Very recently, Xu [6] considered the SFP in the framework of infinite-dimensional Hilbert spaces. In this setting, the SFP is formulated as finding a point x* with the property x* ∈ C and Ax* ∈ Q (1.1), where C and Q are two closed convex subsets of two Hilbert spaces H₁ and H₂, respectively, and A : H₁ → H₂ is a bounded linear operator. We use Γ to denote the solution set of the SFP, that is, Γ = {x ∈ C : Ax ∈ Q}, and assume that the SFP is consistent, that is, Γ ≠ ∅. A special case of the SFP is the convexly constrained linear inverse problem [7] in finite-dimensional Hilbert spaces, namely, to find x ∈ C with Ax = b (1.2), which has been extensively investigated by means of the projected Landweber iterative method [8]: x_{n+1} = P_C(x_n + γAᵀ(b − Ax_n)). Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set Q. Therefore, whether the various versions of the projected Landweber iterative method can be extended to solve the SFP remains an interesting open topic. For example, it is not yet clear whether the dual approach to (1.2) of [9] can be extended to the SFP.
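The projected Landweber iteration x_{n+1} = P_C(x_n + γAᵀ(b − Ax_n)) can be sketched numerically. In the minimal example below, the matrix A, the right-hand side b, and the choice C = nonnegative orthant are assumptions made purely for illustration.

```python
import numpy as np

# Projected Landweber iteration for the convexly constrained linear
# inverse problem: find x in C with A x = b.
# Assumptions for this sketch: random 5x3 matrix A, consistent data b,
# and C = nonnegative orthant (projection = componentwise clipping).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
x_true = np.array([1.0, 0.0, 2.0])        # a nonnegative solution
b = A @ x_true

def proj_C(x):
    """Projection onto C = {x : x >= 0}."""
    return np.maximum(x, 0.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size in (0, 2/||A||^2)
x = np.zeros(3)
for _ in range(10000):
    x = proj_C(x + gamma * A.T @ (b - A @ x))

print(np.linalg.norm(A @ x - b))          # residual becomes small
```

Since the problem is consistent and feasible over C, the iterates drive the residual ‖Ax − b‖ to zero while remaining in C at every step.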

The original algorithm introduced in [1] involves the computation of the matrix inverse A⁻¹, where C and Q are closed convex sets in ℝⁿ and A is an n×n full-rank matrix; for this reason it has not become popular. A more popular algorithm that solves the SFP is the CQ algorithm of Byrne ([2, 10]): x_{n+1} = P_C(x_n − γAᵀ(I − P_Q)Ax_n) with 0 < γ < 2/‖A‖². The CQ algorithm involves only the computations of the projections P_C and P_Q onto the sets C and Q, respectively, and is therefore implementable in the case where P_C and P_Q have closed-form expressions (e.g., C and Q are closed balls or half-spaces). There are a large number of references on the CQ method for the SFP in the literature; see, for instance, [11–24]. It remains, however, a challenge how to implement the CQ algorithm in the case where the projections P_C and/or P_Q fail to have closed-form expressions, even though we can theoretically prove (weak) convergence of the algorithm.
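A numerical sketch of the CQ iteration follows. The concrete sets (C a box, Q a Euclidean ball) are assumptions chosen precisely because their projections have closed forms, as the paragraph above requires.

```python
import numpy as np

# Byrne's CQ algorithm:
#   x_{n+1} = P_C( x_n - gamma * A.T @ (A x_n - P_Q(A x_n)) )
# Assumed data for illustration: random A, C = box [-1,1]^3,
# Q = closed ball of radius 0.5 (both projections in closed form).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

def proj_C(x):                            # projection onto the box
    return np.clip(x, -1.0, 1.0)

def proj_Q(y):                            # projection onto the ball
    n = np.linalg.norm(y)
    return y if n <= 0.5 else 0.5 * y / n

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # gamma in (0, 2/||A||^2)
x = np.array([5.0, -3.0, 2.0])
for _ in range(20000):
    Ax = A @ x
    x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))

print(np.linalg.norm(A @ x))              # approaches <= 0.5: A x enters Q
```

This SFP instance is consistent (x = 0 lies in C with A·0 ∈ Q), so the iterates approach a point of C whose image lies in Q.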

Very recently, Xu [6] continued the study of the CQ algorithm and its convergence. He applied Mann's algorithm to the SFP and proposed an averaged CQ algorithm, and he derived a weak convergence result showing that, for suitable choices of the iterative parameters (including the regularization), the sequence of iterates converges weakly to an exact solution of the SFP.

Note that x solves the SFP if and only if there is an x ∈ C such that Ax − y = 0 for some y ∈ Q. This motivates us to consider the distance function d(Ax, Q) = min_{y∈Q} ‖Ax − y‖ and the minimization problem min_{x∈C, y∈Q} ½‖Ax − y‖². Minimizing with respect to y ∈ Q first makes us consider the minimization min_{x∈C} f(x) := ½‖Ax − P_Q(Ax)‖² (1.7). However, (1.7) is, in general, ill-posed, so regularization is needed. We consider Tikhonov's regularization min_{x∈C} f_α(x) := ½‖Ax − P_Q(Ax)‖² + ½α‖x‖², where α > 0 is the regularization parameter. We can compute the gradient of f_α as ∇f_α(x) = A*(I − P_Q)Ax + αx. Define a Picard iteration x_{n+1}^α = P_C(I − γ∇f_α)x_n^α (1.10). Xu [6] has shown that if the SFP (1.1) is consistent, then x_n^α → x_α as n → ∞, and consequently the strong limit lim_{α→0} x_α exists and is the minimum-norm solution of the SFP. Note that (1.10) is a double-step iteration. Xu [6] further suggested a single-step regularized method: x_{n+1} = P_C((1 − α_nγ_n)x_n − γ_nA*(I − P_Q)Ax_n) (1.11). Xu proved that the sequence {x_n} generated by (1.11) converges in norm to the minimum-norm solution of the SFP provided the parameters {α_n} and {γ_n} satisfy suitable conditions, including α_n → 0 and Σ_n α_nγ_n = ∞. Motivated by the ideas of the extragradient method and Xu's regularization, Ceng et al. [25] presented the following extragradient method with regularization for finding a common element of the solution set Γ of the split feasibility problem and the set Fix(S) of fixed points of a nonexpansive mapping S: y_n = P_C(x_n − λ_n∇f_{α_n}(x_n)), x_{n+1} = β_nx_n + (1 − β_n)S P_C(x_n − λ_n∇f_{α_n}(y_n)) (1.12). Ceng et al. obtained only the weak convergence of the algorithm (1.12).
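The minimum-norm selection effect of Tikhonov regularization can be seen on a toy instance. The data below (C = ℝ², Q = {1}, A = [1 1]) are assumptions chosen so that the SFP solutions form the line x₁ + x₂ = 1, whose minimum-norm point is (0.5, 0.5).

```python
import numpy as np

# Minimize f_alpha(x) = 0.5*||Ax - P_Q(Ax)||^2 + 0.5*alpha*||x||^2 by
# gradient descent, then let alpha -> 0.  Toy assumed data: A = [1 1],
# Q = {1} (a singleton), C = R^2 so that P_C is the identity.
A = np.array([[1.0, 1.0]])

def proj_Q(y):
    return np.array([1.0])                 # projection onto the singleton {1}

def solve_regularized(alpha, iters=20000):
    gamma = 1.0 / (np.linalg.norm(A, 2) ** 2 + alpha)   # step < 2/Lipschitz
    x = np.zeros(2)
    for _ in range(iters):
        grad = A.T @ (A @ x - proj_Q(A @ x)) + alpha * x  # grad f_alpha
        x = x - gamma * grad
    return x

for alpha in (1.0, 0.1, 0.001):
    print(alpha, solve_regularized(alpha))
# As alpha decreases, x_alpha approaches the minimum-norm solution (0.5, 0.5).
```

Here the regularized minimizer can be computed in closed form as x_α = (1/(2 + α))(1, 1), which indeed converges to (0.5, 0.5) as α → 0, matching the numerical output.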

The purpose of this paper is to further introduce and analyze a strongly convergent method that combines the regularization method with the extragradient method for solving the split feasibility problem in the setting of infinite-dimensional Hilbert spaces. The strong limit of the method is the minimum-norm solution of the split feasibility problem.

#### 2. Preliminaries

Let C be a nonempty closed convex subset of a real Hilbert space H. A mapping T : C → C is called nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C. We will use Fix(T) to denote the set of fixed points of T, that is, Fix(T) = {x ∈ C : Tx = x}. A mapping F : C → H is said to be ν-inverse strongly monotone (ν-ism) if there exists a constant ν > 0 such that ⟨Fx − Fy, x − y⟩ ≥ ν‖Fx − Fy‖² for all x, y ∈ C. Recall that the (nearest point or metric) projection from H onto C, denoted P_C, assigns to each x ∈ H the unique point P_Cx ∈ C with the property ‖x − P_Cx‖ = inf{‖x − y‖ : y ∈ C}. It is well known that the metric projection P_C of H onto C has the following basic properties: (a) ⟨x − y, P_Cx − P_Cy⟩ ≥ ‖P_Cx − P_Cy‖² for all x, y ∈ H; (b) ⟨x − P_Cx, y − P_Cx⟩ ≤ 0 for every x ∈ H and y ∈ C; (c) ‖x − P_Cx‖² ≤ ‖x − y‖² − ‖y − P_Cx‖² for all x ∈ H, y ∈ C.

In particular, P_C is nonexpansive.
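The projection properties above can be checked numerically. The concrete set C = closed unit ball in ℝ³ is an assumption made because its projection has the closed form P_C(x) = x / max(1, ‖x‖).

```python
import numpy as np

# Numerical check of metric projection properties for C = unit ball in R^3:
#   (b)  <x - P_C x, y - P_C x> <= 0  for all y in C
#   nonexpansiveness: ||P_C x - P_C z|| <= ||x - z||
rng = np.random.default_rng(2)

def proj_C(x):
    """Closed-form projection onto the closed unit ball."""
    return x / max(1.0, np.linalg.norm(x))

for _ in range(1000):
    x = rng.standard_normal(3) * 3
    z = rng.standard_normal(3) * 3
    y = proj_C(rng.standard_normal(3) * 3)       # an arbitrary point of C
    Px, Pz = proj_C(x), proj_C(z)
    assert np.dot(x - Px, y - Px) <= 1e-9        # property (b)
    assert np.linalg.norm(Px - Pz) <= np.linalg.norm(x - z) + 1e-9
print("projection properties verified on random samples")
```

Property (b) is exactly the variational characterization of P_C; nonexpansiveness follows from it, and the random sampling above gives an empirical sanity check rather than a proof.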

Let C be a nonempty closed convex subset of a real Hilbert space H, and let A : C → H be a monotone mapping. The variational inequality problem (VIP) is to find x* ∈ C such that ⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ C. The solution set of the VIP is denoted by VI(C, A). It is well known that x* ∈ VI(C, A) if and only if x* = P_C(x* − λAx*) for every λ > 0. A set-valued mapping T : H → 2^H is called monotone if, for all x, y ∈ H, f ∈ Tx and g ∈ Ty imply ⟨x − y, f − g⟩ ≥ 0. A monotone mapping T is called maximal if its graph G(T) is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping T is maximal if and only if, for (x, f) ∈ H × H, ⟨x − y, f − g⟩ ≥ 0 for every (y, g) ∈ G(T) implies f ∈ Tx. Let A be a monotone and k-Lipschitz continuous mapping, and let N_Cv be the normal cone to C at v ∈ C, that is, N_Cv = {w ∈ H : ⟨v − u, w⟩ ≥ 0 for all u ∈ C}. Define Tv = Av + N_Cv if v ∈ C and Tv = ∅ if v ∉ C. Then T is maximal monotone and 0 ∈ Tv if and only if v ∈ VI(C, A); see [21] for more details.

Next we adopt the following notation: (i) x_n → x means that {x_n} converges strongly to x; (ii) x_n ⇀ x means that {x_n} converges weakly to x; (iii) ω_w(x_n) = {x : some subsequence of {x_n} converges weakly to x} is the weak ω-limit set of the sequence {x_n}.

Lemma 2.1 (see [6]). We have the following assertions. (a) T is nonexpansive if and only if the complement I − T is ½-ism. (b) If T is ν-ism, then for γ > 0, γT is (ν/γ)-ism. (c) T is averaged if and only if the complement I − T is ν-ism for some ν > ½. (d) If S and T are both averaged, then the product (composite) ST is averaged.

Lemma 2.2 (see [26]). Let {x_n} and {z_n} be bounded sequences in a Banach space E, and let {β_n} be a sequence in [0, 1] with 0 < lim inf_n β_n ≤ lim sup_n β_n < 1. Suppose that x_{n+1} = β_nx_n + (1 − β_n)z_n for all n ≥ 0 and lim sup_n (‖z_{n+1} − z_n‖ − ‖x_{n+1} − x_n‖) ≤ 0. Then lim_n ‖z_n − x_n‖ = 0.

Lemma 2.3 (see [27]). Assume that {a_n} is a sequence of nonnegative real numbers such that a_{n+1} ≤ (1 − γ_n)a_n + γ_nδ_n, where {γ_n} is a sequence in (0, 1) and {δ_n} is a sequence such that (1) Σ_n γ_n = ∞; (2) lim sup_n δ_n ≤ 0 or Σ_n |γ_nδ_n| < ∞. Then lim_n a_n = 0.
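Lemma 2.3 can be illustrated numerically. The concrete choices γ_n = 1/(n+1) (so that Σγ_n diverges) and δ_n = 1/(n+1) → 0 are assumptions made for this sketch.

```python
# Numerical illustration of Lemma 2.3:
#   a_{n+1} <= (1 - g_n) a_n + g_n d_n,  sum g_n = inf,  limsup d_n <= 0
# forces a_n -> 0.  Assumed parameters: g_n = d_n = 1/(n+1).
a = 1.0
for n in range(1, 200000):
    g = 1.0 / (n + 1)        # gamma_n in (0, 1); the series diverges
    d = 1.0 / (n + 1)        # delta_n -> 0, so limsup delta_n <= 0
    a = (1 - g) * a + g * d
print(a)                     # a is driven close to 0
```

For these particular sequences one can even solve the recursion: setting b_n = n·a_n gives b_{n+1} = b_n + 1/(n+1), so a_n ≈ (1 + ln n)/n → 0, in agreement with the lemma.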

#### 3. Main Results

Let H₁ and H₂ be two infinite-dimensional Hilbert spaces. Let C and Q be two nonempty closed convex subsets of H₁ and H₂, respectively. Let A : H₁ → H₂ be a bounded linear operator. In this section, we devote ourselves to solving the SFP (1.1). First, we need the following propositions.

Proposition 3.1 (see [6, 25]). Given x* ∈ H₁, the following statements are equivalent: (i) x* solves the SFP; (ii) x* solves the fixed point equation P_C(I − λ∇f)x* = x*; (iii) x* solves the variational inequality problem (VIP) of finding x* ∈ C such that ⟨∇f(x*), x − x*⟩ ≥ 0 for all x ∈ C, where ∇f = A*(I − P_Q)A, λ > 0, and A* is the adjoint of A.

Proposition 3.2 (see [28]). Let C be a nonempty closed convex subset of a real Hilbert space H. Let the mapping F : C → H be α-inverse strongly monotone and λ > 0 a constant. Then we have ‖(I − λF)x − (I − λF)y‖² ≤ ‖x − y‖² + λ(λ − 2α)‖Fx − Fy‖² for all x, y ∈ C. In particular, if 0 < λ ≤ 2α, then I − λF is nonexpansive.
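The inequality in Proposition 3.2 can be sanity-checked numerically. The concrete map F = I − P_Q, with P_Q the projection onto the unit ball in ℝ³, is an assumption for this sketch; firm nonexpansiveness of P_Q makes F 1-ism, so α = 1 applies.

```python
import numpy as np

# Check, on random samples, the inequality of Proposition 3.2:
#   ||(I - lam F)x - (I - lam F)y||^2
#     <= ||x - y||^2 + lam*(lam - 2*alpha)*||Fx - Fy||^2
# for the assumed 1-ism map F = I - P_Q (P_Q = projection onto unit ball).
rng = np.random.default_rng(4)

def proj_Q(y):
    n = np.linalg.norm(y)
    return y if n <= 1.0 else y / n

def F(x):
    return x - proj_Q(x)       # firmly nonexpansive, hence 1-ism

alpha, lam = 1.0, 1.5          # 0 < lam <= 2*alpha: I - lam*F is nonexpansive
for _ in range(1000):
    x = rng.standard_normal(3) * 4
    y = rng.standard_normal(3) * 4
    lhs = np.linalg.norm((x - lam * F(x)) - (y - lam * F(y))) ** 2
    rhs = (np.linalg.norm(x - y) ** 2
           + lam * (lam - 2 * alpha) * np.linalg.norm(F(x) - F(y)) ** 2)
    assert lhs <= rhs + 1e-9
print("Proposition 3.2 inequality verified on random samples")
```

The check mirrors the one-line proof: expanding ‖(I − λF)x − (I − λF)y‖² and bounding the cross term by the α-ism property yields exactly the stated right-hand side.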

Proposition 3.3 (see [6]). We have the following conclusions: (i) ∇f_α is Lipschitz continuous with Lipschitz constant ‖A‖² + α; (ii) ∇f_α is α-strongly monotone and hence α/(‖A‖² + α)²-ism; (iii) I − γ∇f_α is nonexpansive for all γ ∈ (0, 2α/(‖A‖² + α)²].

Algorithm 3.4. For x₀ given arbitrarily, define a sequence {x_n} iteratively by (3.3), where {α_n} is a sequence and the two constants in the scheme satisfy the required stepsize restriction.
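The extragradient idea underlying schemes of this type can be sketched for the SFP operator ∇f = A*(I − P_Q)A. The code below is a generic Korpelevich-type predictor-corrector step, not a verbatim transcription of scheme (3.3); the sets C (a box) and Q (a ball) are assumed closed-form projections for illustration.

```python
import numpy as np

# Generic Korpelevich-type extragradient iteration for the monotone,
# Lipschitz operator F = grad f = A.T (I - P_Q) A arising from the SFP.
# Assumptions: random A, C = box [-1,1]^3, Q = ball of radius 0.5.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

def proj_C(x):
    return np.clip(x, -1.0, 1.0)

def proj_Q(y):
    n = np.linalg.norm(y)
    return y if n <= 0.5 else 0.5 * y / n

def F(x):                                 # Lipschitz constant ||A||^2
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax))

lam = 0.5 / np.linalg.norm(A, 2) ** 2     # stepsize below 1/Lipschitz
x = np.array([4.0, -4.0, 4.0])
for _ in range(20000):
    y = proj_C(x - lam * F(x))            # predictor (extragradient) step
    x = proj_C(x - lam * F(y))            # corrector step

print(np.linalg.norm(A @ x))              # approaches <= 0.5: A x enters Q
```

Because VI(C, ∇f) coincides with the SFP solution set (Proposition 3.1) and this instance is consistent (x = 0 is feasible), the iterates approach feasibility; in infinite dimensions this plain template gives only weak convergence, which is what the regularization in (3.3) is designed to strengthen.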

Theorem 3.5. Suppose that Γ ≠ ∅. Assume that lim_n α_n = 0 and Σ_n α_n = ∞. Then the sequence {x_n} generated by (3.3) converges strongly to P_Γ(0), which is the minimum-norm element in Γ.

Proof. Note the conditions lim_n α_n = 0 and Σ_n α_n = ∞. We deduce that the required stepsize restriction holds for n large enough. Without loss of generality, we may assume that it holds for all n ≥ 0.
Pick any z ∈ Γ. From Proposition 3.1, we have z = P_C(I − λ∇f)z for any λ > 0. Thus,
From (3.3) and (3.4), we have From Propositions 3.1 and 3.2, we get that is nonexpansive. It follows that
Thus, Hence, is bounded.
Set . Note that is nonexpansive. Thus, we can rewrite in (3.3) as where It follows that By (3.3), we have Hence, we deduce Therefore, By Lemma 2.2, we obtain Hence, From (3.5), (3.7), Proposition 3.2, and the convexity of the norm, we deduce Therefore, we have Since and as , we obtain . Thus, we have
By the property (b) of the metric projection , we have where is some constant such that It follows that and hence which implies that Since , and , we derive
Next we show that where . To show it, we choose a subsequence of such that Since is bounded, we have that is also bounded. As is bounded, we have that a subsequence of converges weakly to .
Next we show that . We define a mapping by Then is maximal monotone. Let . Since and , we have . On the other hand, from , we have that is, Therefore, we have Noting that , , and is Lipschitz continuous, we deduce from above Since is maximal monotone, we have and hence . Therefore, By the property (b) of metric projection , we have Hence Therefore, We apply Lemma 2.3 to the last inequality to deduce that . This completes the proof.

Remark 3.6. It is now well known that Korpelevich's extragradient method has only weak convergence in the setting of infinite-dimensional Hilbert spaces. However, our algorithm (3.3), which is similar to Korpelevich's method, has strong convergence in this setting.

Remark 3.7. Algorithm (1.12) has only weak convergence for solving the SFP. Our algorithm solves the SFP with strong convergence under weaker assumptions.

#### Acknowledgment

The research of N. Shahzad was partially supported by Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, Saudi Arabia.

#### References

1. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
2. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
3. Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, “A unified approach for inversion problems in intensity modulated radiation therapy,” Physics in Medicine and Biology, vol. 51, pp. 2353–2365, 2006.
4. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, “The multiple-sets split feasibility problem and its applications for inverse problems,” Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
5. Y. Censor, A. Motova, and A. Segal, “Perturbed projections and subgradient projections for the multiple-sets split feasibility problem,” Journal of Mathematical Analysis and Applications, vol. 327, no. 2, pp. 1244–1256, 2007.
6. H.-K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 17 pages, 2010.
7. B. Eicke, “Iteration methods for convexly constrained ill-posed problems in Hilbert space,” Numerical Functional Analysis and Optimization, vol. 13, no. 5-6, pp. 413–429, 1992.
8. L. Landweber, “An iteration formula for Fredholm integral equations of the first kind,” American Journal of Mathematics, vol. 73, pp. 615–624, 1951.
9. L. C. Potter and K. S. Arun, “A dual approach to linear inverse problems with convex constraints,” SIAM Journal on Control and Optimization, vol. 31, no. 4, pp. 1080–1092, 1993.
10. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
11. X. Yu, N. Shahzad, and Y. Yao, “Implicit and explicit algorithms for solving the split feasibility problem,” Optimization Letters. In press.
12. B. Qu and N. Xiu, “A note on the $CQ$ algorithm for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1655–1665, 2005.
13. J. Zhao and Q. Yang, “Several solution methods for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
14. H.-K. Xu, “A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem,” Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
15. Y. Dang and Y. Gao, “The strong convergence of a KM-$CQ$-like algorithm for a split feasibility problem,” Inverse Problems, vol. 27, no. 1, Article ID 015007, 9 pages, 2011.
16. F. Wang and H.-K. Xu, “Approximating curve and strong convergence of the $CQ$ algorithm for the split feasibility problem,” Journal of Inequalities and Applications, Article ID 102085, 13 pages, 2010.
17. Z. Wang, Q. I. Yang, and Y. Yang, “The relaxed inexact projection methods for the split feasibility problem,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5347–5359, 2011.
18. Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, “A unified approach for inversion problems in intensity modulated radiation therapy,” Physics in Medicine and Biology, vol. 51, pp. 2353–2365, 2006.
19. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, “The multiple-sets split feasibility problem and its applications for inverse problems,” Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
20. J. Zhao and Q. Yang, “Several solution methods for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
21. R. T. Rockafellar, “On the maximality of sums of nonlinear monotone operators,” Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
22. Y. Yao, T. H. Kim, S. Chebbi, and H.-K. Xu, “A modified extragradient method for the split feasibility and fixed point problems,” Journal of Nonlinear and Convex Analysis, vol. 13, no. 3, 2012.
23. Y. Yao, R. Chen, G. Marino, and Y.-C. Liou, “Applications of fixed point and optimization methods to the multiple-sets split feasibility problem,” Journal of Applied Mathematics, vol. 2012, Article ID 927530, 21 pages, 2012.
24. Y. Yao, W. Jigang, and Y.-C. Liou, “Regularized methods for the split feasibility problem,” Abstract and Applied Analysis, vol. 2012, Article ID 140679, 15 pages, 2012.
25. L. C. Ceng, Q. H. Ansari, and J. C. Yao, “An extragradient method for split feasibility and fixed point problems,” Computers and Mathematics with Applications, vol. 64, no. 4, pp. 633–642, 2012.
26. T. Suzuki, “Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces,” Fixed Point Theory and Applications, no. 1, pp. 103–123, 2005.
27. H.-K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
28. W. Takahashi and M. Toyoda, “Weak convergence theorems for nonexpansive mappings and monotone mappings,” Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 417–428, 2003.