*Journal of Applied Mathematics*, Volume 2012 (2012), Article ID 927530, 21 pages. http://dx.doi.org/10.1155/2012/927530
Review Article

## Applications of Fixed-Point and Optimization Methods to the Multiple-Set Split Feasibility Problem

1Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
2Dipartimento di Matematica, Università della Calabria, 87036 Arcavacata di Rende, Italy
3Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan

Received 6 February 2012; Accepted 12 February 2012

Copyright © 2012 Yonghong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The multiple-set split feasibility problem requires finding a point closest to a family of closed convex sets in one space such that its image under a linear transformation will be closest to another family of closed convex sets in the image space. It can be a model for many inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator’s range. It generalizes the convex feasibility problem as well as the two-set split feasibility problem. In this paper, we will review and report some recent results on iterative approaches to the multiple-set split feasibility problem.

#### 1. Introduction

##### 1.1. The Multiple-Set Split Feasibility Problem Model

The equivalent uniform dose (EUD) for tumors is the biologically equivalent dose which, if given uniformly, will lead to the same cell kill within the tumor volume as the actual nonuniform dose. Constraints on the EUD received by each voxel of the body are described in dose space, the space of vectors whose entries are the doses received at each voxel. Constraints on the deliverable radiation intensities of the beamlets are best described in intensity space, the space of vectors whose entries are the intensity levels associated with each of the beamlets. The constraints in dose space will be upper bounds on the dosage received by the organs at risk (OAR) and lower bounds on the dosage received by the planned target volumes (PTV). The constraints in intensity space are limits on the complexity of the intensity map and on the delivery time, and, obviously, the requirement that the intensities be nonnegative. Because the constraints operate in two different domains, it is convenient to formulate the problem using these two domains. This leads to a split feasibility problem.

The split feasibility problem (SFP) is to find an $x$ in a given closed convex subset $C$ of $\mathbb{R}^N$ such that $Ax$ is in a given closed convex subset $Q$ of $\mathbb{R}^M$, where $A$ is a given real $M \times N$ matrix. Because the constraints are best described in terms of several sets in dose space and several sets in intensity space, the SFP model needs to be expanded into the multiple-set SFP. It is not uncommon to find that, once the various constraints have been specified, there is no intensity map that satisfies them all. In such cases, it is desirable to find an intensity map that comes as close as possible to satisfying all the constraints. One way to do this, as we will see, is to minimize a proximity function.

For $i = 1, \dots, M$ and $j = 1, \dots, N$, let $d_i$ be the dose absorbed by the $i$th voxel of the patient's body, $x_j$ the intensity of the $j$th beamlet of radiation, and $a_{ij}$ the dose absorbed at the $i$th voxel due to a unit intensity of radiation at the $j$th beamlet. The nonnegative matrix $A$ with entries $a_{ij}$ is the dose influence matrix. Let us assume that we have $r$ constraints in the dose space and $t$ constraints in the intensity space. Let $Q_j$ be the set of dose vectors that fulfill the $j$th dose constraint, and let $C_i$ be the set of beamlet intensity vectors that fulfill the $i$th intensity constraint.

In intensity space, we have the obvious constraints that $x_j \geq 0$ for all $j$. In addition, there are implementation constraints; the available treatment machine will impose its own requirements, such as a limit on the difference in intensities between adjacent beamlets. In dosage space, there will be a lower bound on the dosage delivered to those regions designated as planned target volumes (PTV) and an upper bound on the dosage delivered to those regions designated as organs at risk (OAR).

Suppose that $S$ is either a PTV or an OAR, and suppose that $S$ contains $N_S$ voxels. For each dosage vector $d = (d_1, \dots, d_M)$, define the equivalent uniform dosage function (EUD function) by $E_\alpha(d) = \left(\frac{1}{N_S}\sum_{i \in S} d_i^{\alpha}\right)^{1/\alpha}$, where $\alpha < 0$ if $S$ is a PTV, and $\alpha \geq 1$ if $S$ is an OAR. The function $E_\alpha$ is convex, for $d$ nonnegative, when $S$ is an OAR, and $-E_\alpha$ is convex when $S$ is a PTV. The constraints in dosage space take the form $E_\alpha(d) \leq E_{\max}$ when $S$ is an OAR, and $E_\alpha(d) \geq E_{\min}$ when $S$ is a PTV. Therefore, we require that $d$ lie within the intersection of these convex sets. In summary, we have formulated the constraints in the radiation intensity space $\mathbb{R}^N$ and in the dose space $\mathbb{R}^M$, respectively, and the two spaces are related by the dose influence matrix $A$. The resulting problem, referred to as the multiple-set split feasibility problem (MSSFP), is formulated as follows: find $x \in \bigcap_{i=1}^{t} C_i$ such that $Ax \in \bigcap_{j=1}^{r} Q_j$. It was first investigated by Censor et al. [5]. There is a great deal of literature on the MSSFP; see [5, 7, 8, 18, 19, 22, 23]. In the sequel, optimization and variational inequality techniques will be involved. For related references, please see [30–42].

##### 1.2. Fixed-Point Method

Next, we focus on the multiple-set split feasibility problem (MSSFP), which is to find a point $x$ such that $x \in \bigcap_{i=1}^{t} C_i$ and $Ax \in \bigcap_{j=1}^{r} Q_j$, where $t, r \geq 1$ are integers, the $C_i$ ($1 \leq i \leq t$) are closed convex subsets of a Hilbert space $H_1$, the $Q_j$ ($1 \leq j \leq r$) are closed convex subsets of a Hilbert space $H_2$, and $A : H_1 \to H_2$ is a bounded linear operator. Assume that the MSSFP is consistent, that is, solvable, and let $\Gamma$ denote its solution set. The case where $t = r = 1$, called the split feasibility problem (SFP), was introduced by Censor and Elfving [43], modeling phase retrieval and other image restoration problems, and further studied by many researchers; see, for instance, [2–4, 6, 9–12, 17, 19–21].

We use $\Gamma$ to denote the solution set of the SFP. Let $\gamma > 0$ and assume that $x^* \in \Gamma$. Thus, $Ax^* \in Q$, which implies the equation $(I - P_Q)Ax^* = 0$, which in turn implies the equation $\gamma A^*(I - P_Q)Ax^* = 0$, hence the fixed-point equation $x^* = x^* - \gamma A^*(I - P_Q)Ax^*$. Requiring that $x^* \in C$, we consider the fixed-point equation $x^* = P_C(x^* - \gamma A^*(I - P_Q)Ax^*)$ (1.6). We will see that solutions of the fixed-point equation (1.6) are exactly solutions of the SFP. The following proposition is due to Byrne [4] and Xu [2].
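The fixed-point equation (1.6) suggests the obvious Picard-type iteration. A minimal numerical sketch (the sets $C$, $Q$ and the matrix $A$ below are hypothetical toy data; projections onto boxes are componentwise clipping):

```python
import numpy as np

# Toy 2-D instance of the SFP: C and Q are boxes, A a 2x2 matrix.
def P_C(x):  # projection onto C = [0, 1] x [0, 1]
    return np.clip(x, 0.0, 1.0)

def P_Q(y):  # projection onto Q = [0, 2] x [0, 2]
    return np.clip(y, 0.0, 2.0)

A = np.array([[1.0, 0.5],
              [0.0, 1.0]])

def T(x, gamma):
    """Fixed-point operator P_C(I - gamma * A^T (I - P_Q) A) of (1.6)."""
    Ax = A @ x
    return P_C(x - gamma * A.T @ (Ax - P_Q(Ax)))

# gamma must lie in (0, 2 / ||A||^2) for T to be averaged.
gamma = 1.0 / np.linalg.norm(A, 2) ** 2

x = np.array([5.0, -3.0])          # arbitrary start
for _ in range(200):
    x = T(x, gamma)

# The limit solves the SFP: x in C and Ax in Q.
assert np.all(x >= -1e-9) and np.all(x <= 1 + 1e-9)
Ax = A @ x
assert np.all(Ax >= -1e-9) and np.all(Ax <= 2 + 1e-9)
```

The choice of $\gamma$ below $2/\|A\|^2$ mirrors the step-size condition required throughout the paper.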

Proposition 1.1. Given $\gamma > 0$. Then $x^*$ solves the SFP if and only if $x^*$ solves the fixed-point equation (1.6).

This proposition reminds us that (MSSFP) (1.5) is equivalent to a common fixed-point problem of finitely many nonexpansive mappings, as we show below.

Decompose the MSSFP into $N$ subproblems ($1 \leq i \leq N$; after repeating sets if necessary, we may assume $t = r = N$): find $x \in C_i$ such that $Ax \in Q_i$. For each $i$, we define a mapping $B_i$ by $B_i := P_{C_i}(I - \gamma\nabla q_i)$, where $q_i$ is defined by $q_i(x) := \frac{1}{2}\|Ax - P_{Q_i}Ax\|^2$. Note that the gradient of $q_i$ is $\nabla q_i = A^*(I - P_{Q_i})A$, which is Lipschitz continuous with constant $\|A\|^2$. It is known that if $0 < \gamma < 2/\|A\|^2$, $B_i$ is nonexpansive. Therefore fixed-point algorithms for nonexpansive mappings can be applied to the MSSFP (1.5).

##### 1.3. Optimization Method

Note that if $x$ solves the MSSFP, then $x$ satisfies two properties: (i) the distance from $x$ to each $C_i$ is zero; (ii) the distance from $Ax$ to each $Q_j$ is also zero.

This motivates us to consider the proximity function $p(x) := \frac{1}{2}\sum_{i=1}^{t}\alpha_i\|x - P_{C_i}x\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j\|Ax - P_{Q_j}Ax\|^2$, where the $\alpha_i$ and $\beta_j$ are positive real numbers, and $P_{C_i}$ and $P_{Q_j}$ are the metric projections onto $C_i$ and $Q_j$, respectively.

Proposition 1.2. $x^*$ is a solution of MSSFP (1.5) if and only if $p(x^*) = 0$.

Since $p(x) \geq 0$ for all $x$, a solution of MSSFP (1.5) is a minimizer of $p$ over any closed convex subset containing it, with minimum value zero. Note that this proximity function is convex and differentiable with gradient $\nabla p(x) = \sum_{i=1}^{t}\alpha_i(x - P_{C_i}x) + \sum_{j=1}^{r}\beta_j A^*(Ax - P_{Q_j}Ax)$, where $A^*$ is the adjoint of $A$. Since the gradient is Lipschitz continuous with constant $L = \sum_{i=1}^{t}\alpha_i + \|A\|^2\sum_{j=1}^{r}\beta_j$, we can use the gradient-projection method to solve the minimization problem $\min_{x\in\Omega} p(x)$, where $\Omega$ is a closed convex subset of $H_1$ whose intersection with the solution set of the MSSFP is nonempty, and get a solution of the so-called constrained multiple-set split feasibility problem (CMSSFP). In this paper, we will review and report the recent progress on fixed-point and optimization methods for solving the MSSFP.
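The gradient-descent step on the proximity function can be sketched numerically. The instance below (two box constraints in each space, unit weights $\alpha_i = \beta_j = 1$, and $\Omega$ the whole space) is hypothetical toy data:

```python
import numpy as np

# Toy MSSFP instance: two box constraints C_i in R^2, two box
# constraints Q_j in R^2, weights alpha_i = beta_j = 1.
C = [(np.array([0., 0.]), np.array([1., 1.])),    # C_1 = [0,1]^2
     (np.array([.5, 0.]), np.array([2., 2.]))]    # C_2 = [0.5,2] x [0,2]
Q = [(np.array([0., 0.]), np.array([3., 3.])),
     (np.array([1., 0.]), np.array([4., 4.]))]
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def proj_box(x, lo, hi):
    return np.clip(x, lo, hi)

def grad_p(x):
    """Gradient of the proximity function p with unit weights."""
    g = sum(x - proj_box(x, lo, hi) for lo, hi in C)
    Ax = A @ x
    g += A.T @ sum(Ax - proj_box(Ax, lo, hi) for lo, hi in Q)
    return g

L = len(C) + np.linalg.norm(A, 2) ** 2 * len(Q)  # Lipschitz constant of grad p
gamma = 1.0 / L                                  # any gamma in (0, 2/L)

x = np.array([5.0, -2.0])
for _ in range(500):
    x = x - gamma * grad_p(x)   # Omega = R^2, so P_Omega is the identity

# Consistent instance: the limit lies in every C_i with Ax in every Q_j.
for lo, hi in C:
    assert np.allclose(x, proj_box(x, lo, hi), atol=1e-6)
Ax = A @ x
for lo, hi in Q:
    assert np.allclose(Ax, proj_box(Ax, lo, hi), atol=1e-6)
```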

#### 2. Some Concepts and Tools

Assume $H$ is a Hilbert space and $K$ is a nonempty closed convex subset of $H$. The (nearest point or metric) projection from $H$ onto $K$, denoted $P_K$, assigns to each $x \in H$ the unique point $P_Kx \in K$ such that $\|x - P_Kx\| = \min\{\|x - y\| : y \in K\}$.

Proposition 2.1. Basic properties of projections are as follows: (i) $\langle x - P_Kx, y - P_Kx\rangle \leq 0$ for all $x \in H$ and $y \in K$; (ii) $\|P_Kx - P_Ky\|^2 \leq \langle P_Kx - P_Ky, x - y\rangle$ for all $x, y \in H$; (iii) $\|P_Kx - P_Ky\| \leq \|x - y\|$ for all $x, y \in H$, and equality holds if and only if $x - y = P_Kx - P_Ky$. In particular, $P_K$ is nonexpansive; (iv) if $K$ is a closed subspace of $H$, then $P_K$ is the orthogonal projection from $H$ onto $K$: $\langle x - P_Kx, y\rangle = 0$ for all $x \in H$ and $y \in K$.
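For simple sets the metric projection has a closed form, which is what makes the algorithms of Section 3 implementable. A short sketch (the particular sets and test points are our own toy choices):

```python
import numpy as np

# Closed-form metric projections onto three standard convex sets.
def proj_box(x, lo, hi):
    """P_K for a box K = {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def proj_ball(x, center, r):
    """P_K for a closed ball of radius r."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + (r / n) * d

def proj_halfspace(x, a, b):
    """P_K for the half-space {x : <a, x> <= b}."""
    s = a @ x - b
    return x if s <= 0 else x - (s / (a @ a)) * a

x = np.array([3.0, 4.0])
p = proj_ball(x, np.array([0.0, 0.0]), 1.0)
assert np.allclose(p, [0.6, 0.8])            # x / ||x|| with ||x|| = 5
h = proj_halfspace(x, np.array([1.0, 0.0]), 2.0)
assert np.allclose(h, [2.0, 4.0])
```

Nonexpansiveness (item (iii)) holds for each of these maps, which is why they can be composed freely in the iterations below.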

Definition 2.2. The operator $P_K^\lambda := (1 - \lambda)I + \lambda P_K$ is called a relaxed projection, where $0 < \lambda < 2$ and $I$ is the identity operator on $H$.
A mapping $T : H \to H$ is said to be an averaged mapping if it can be written as an average of the identity $I$ and a nonexpansive mapping $S$: $T = (1 - \alpha)I + \alpha S$, where $\alpha$ is a number in $(0, 1)$ and $S$ is nonexpansive.

Proposition 2.1(ii) is equivalent to saying that the operator $2P_K - I$ is nonexpansive. Consequently, a projection can be written as the mean average of a nonexpansive mapping and the identity: $P_K = \frac{1}{2}(I + (2P_K - I))$. Thus projections are averaged maps with $\alpha = \frac{1}{2}$. Relaxed projections are also averaged.

Proposition 2.3. Let $S : H \to H$ be a nonexpansive mapping and $T = (1 - \alpha)I + \alpha S$ an averaged map for some $\alpha \in (0, 1)$. Assume $T$ has a bounded orbit. Then one has the following. (1) $T$ is asymptotically regular; that is, $\lim_{n\to\infty}\|T^{n+1}x - T^nx\| = 0$ for all $x \in H$. (2) For any $x \in H$, the sequence $\{T^nx\}$ converges weakly to a fixed point of $T$.

Definition 2.4. Let $T$ be an operator with domain and range in $H$. (i) $T$ is monotone if $\langle Tx - Ty, x - y\rangle \geq 0$ for all $x, y$. (ii) Given a number $\nu > 0$, $T$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) (or cocoercive) if $\langle Tx - Ty, x - y\rangle \geq \nu\|Tx - Ty\|^2$ for all $x, y$.
It is easily seen that a projection is a 1-ism.

Proposition 2.5. Given $T : H \to H$, let $S = I - T$ be the complement of $T$. Given also $\gamma > 0$, then one has the following. (i) $T$ is nonexpansive if and only if $S$ is $\frac{1}{2}$-ism. (ii) If $T$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism. (iii) $T$ is averaged if and only if the complement $S$ is $\nu$-ism for some $\nu > \frac{1}{2}$.
The next proposition includes the basic properties of averaged mappings.

Proposition 2.6. Given operators $S, T, V : H \to H$, one has the following. (i) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged. (ii) $T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive. If $T$ is firmly nonexpansive, then $T$ is averaged. (iii) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, $S$ is firmly nonexpansive, and $V$ is nonexpansive, then $T$ is averaged. (iv) If $S$ and $T$ are both averaged, then the product (composite) $ST$ is averaged. (v) If $S$ and $T$ are both averaged and have a common fixed point, then $\mathrm{Fix}(ST) = \mathrm{Fix}(S) \cap \mathrm{Fix}(T)$.

Proposition 2.7. Consider the variational inequality problem: find $x^* \in K$ such that $\langle Fx^*, x - x^*\rangle \geq 0$ for all $x \in K$ (2.12), where $K$ is a closed convex subset of a Hilbert space $H$ and $F$ is a monotone operator on $H$. Assume that (2.12) has a solution and $F$ is $\nu$-ism. Then for $0 < \gamma < 2\nu$, the sequence $\{x_n\}$ generated by the algorithm $x_{n+1} = P_K(x_n - \gamma Fx_n)$ converges weakly to a solution of the VI (2.12).

An immediate consequence of Proposition 2.7 is the convergence of the gradient-projection algorithm.

Proposition 2.8. Let $f : H \to \mathbb{R}$ be a continuously differentiable function such that the gradient $\nabla f$ is Lipschitz continuous: $\|\nabla f(x) - \nabla f(y)\| \leq L\|x - y\|$ for all $x, y \in H$. Assume that the minimization problem $\min_{x\in K} f(x)$ (2.15) is consistent, where $K$ is a closed convex subset of $H$. Then, for $0 < \gamma < 2/L$, the sequence $\{x_n\}$ generated by the gradient-projection algorithm $x_{n+1} = P_K(x_n - \gamma\nabla f(x_n))$ converges weakly to a solution of (2.15).

#### 3. Iterative Methods

In this section, we will review and report the iterative methods for solving MSSFP (1.5) in the literature.

It is not hard to see that the solution set of the subproblem (1.7) coincides with $\mathrm{Fix}(B_i)$, and the solution set of MSSFP (1.5) coincides with the common fixed-point set of the mappings $B_1, \dots, B_N$. Further, we have (see [9, 18]) $\bigcap_{i=1}^{N}\mathrm{Fix}(B_i) = \mathrm{Fix}(B_N B_{N-1}\cdots B_1)$ (3.1). By using the fact (3.1), we obtain the corresponding algorithms and convergence theorems for the MSSFP.

Algorithm 3.1. The Picard iterations are $x_{n+1} = B_N B_{N-1}\cdots B_1 x_n$, $n \geq 0$.

Theorem 3.2 (see [8]). Assume that the MSSFP (1.5) is consistent. Let $\{x_n\}$ be the sequence generated by Algorithm 3.1, where $0 < \gamma < 2/L$ with $L$ given by (1.11). Then $\{x_n\}$ converges weakly to a solution of the MSSFP (1.5).

Algorithm 3.3. Parallel iterations are $x_{n+1} = \sum_{i=1}^{N}\lambda_i B_i x_n$, where $\lambda_i > 0$ for all $i$ such that $\sum_{i=1}^{N}\lambda_i = 1$, and $0 < \gamma < 2/L$ with $L$ given by (1.11).

Theorem 3.4 (see [8]). Assume that the MSSFP (1.5) is consistent. Then the sequence generated by the Algorithm 3.3 converges weakly to a solution of the MSSFP (1.5).

Algorithm 3.5. Cyclic iterations are $x_{n+1} = B_{[n]}x_n$, where $[n] := n \ (\mathrm{mod}\ N)$, with the mod function taking values in $\{1, 2, \dots, N\}$.

Theorem 3.6 (see [8]). Assume that the MSSFP (1.5) is consistent. Let $\{x_n\}$ be the sequence generated by Algorithm 3.5, where $0 < \gamma < 2/L$ with $L$ given by (1.11). Then $\{x_n\}$ converges weakly to a solution of the MSSFP (1.5).
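The Picard (composite), parallel, and cyclic schemes differ only in how the subproblem operators are combined per sweep. A toy comparison (all data below is hypothetical; each operator applies one projection-gradient step of the subproblem):

```python
import numpy as np

# Three iteration schemes on a toy MSSFP in R^2; each B_i is the
# averaged operator x -> P_{C_i}(x - gamma * A^T (Ax - P_{Q_i} Ax)).
C = [(np.array([0., 0.]), np.array([1., 1.])),
     (np.array([.5, .5]), np.array([2., 2.]))]
Q = [(np.array([0., 0.]), np.array([2., 2.])),
     (np.array([0., 0.]), np.array([3., 3.]))]
A = np.eye(2)
gamma = 0.5                       # within (0, 2/||A||^2)

def B(i, x):
    Ax = A @ x
    r = Ax - np.clip(Ax, *Q[i])
    return np.clip(x - gamma * A.T @ r, *C[i])

def picard(x, n=100):             # x_{k+1} = B_N ... B_1 x_k
    for _ in range(n):
        for i in range(len(C)):
            x = B(i, x)
    return x

def parallel(x, n=100):           # equal-weight convex combination
    for _ in range(n):
        x = sum(B(i, x) for i in range(len(C))) / len(C)
    return x

def cyclic(x, n=200):             # x_{k+1} = B_{[k]} x_k
    for k in range(n):
        x = B(k % len(C), x)
    return x

x0 = np.array([4.0, -1.0])
# All three limits land in C_1 ∩ C_2 = [0.5, 1]^2 (A = I here).
for sol in (picard(x0), parallel(x0), cyclic(x0)):
    assert np.all(sol >= 0.5 - 1e-6) and np.all(sol <= 1 + 1e-6)
```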

Note that the MSSFP (1.5) can be viewed as a special case of the convex feasibility problem of finding $x$ such that $x \in \bigcap_{i=1}^{N}S_i$. In fact, (1.5) can be rewritten in this form with $S_i = C_i$ for $1 \leq i \leq t$ and $S_{t+j} = A^{-1}(Q_j)$ for $1 \leq j \leq r$, where $N = t + r$.

However, the methodologies for studying the MSSFP (1.5) are actually different from those for the convex feasibility problem, in order to avoid usage of the inverse $A^{-1}$. In other words, the methods for solving the convex feasibility problem may not apply to the MSSFP (1.5) straightforwardly without involving $A^{-1}$. The CQ algorithm of Byrne [1] is such an example, where only the operator $A$ (not the inverse $A^{-1}$) is relevant.

Since every closed convex subset of a Hilbert space is the fixed-point set of its associated projection, the convex feasibility problem becomes a special case of the common fixed-point problem of finding a point $x$ with the property $x \in \bigcap_{i=1}^{N}\mathrm{Fix}(T_i)$. Similarly, the MSSFP (1.5) becomes a special case of the split common fixed-point problem [19] of finding a point $x$ with the property $x \in \bigcap_{i=1}^{t}\mathrm{Fix}(U_i)$ and $Ax \in \bigcap_{j=1}^{r}\mathrm{Fix}(T_j)$, where $U_i : H_1 \to H_1$ and $T_j : H_2 \to H_2$ are nonlinear operators. By using these facts, Wang and Xu [17] recently presented another cyclic iteration as follows.

Algorithm 3.7 (cyclic iterations). Take an initial guess $x_0$, choose $\gamma \in (0, 2/\|A\|^2)$, and define a sequence $\{x_n\}$ by the iterative procedure $x_{n+1} = P_{C_{[n]}}(x_n - \gamma A^*(I - P_{Q_{[n]}})Ax_n)$.

Theorem 3.8 (see [17]). The sequence , generated by Algorithm 3.7, converges weakly to a solution of MSSFP (1.5) whenever its solution set is nonempty.

Since the MSSFP (1.5) is equivalent to the minimization problem (1.15), we have the following gradient-projection algorithm.

Algorithm 3.9. $x_{n+1} = P_\Omega(x_n - \gamma\nabla p(x_n))$, $n \geq 0$, where $0 < \gamma < 2/L$.

Censor et al. [5] proved in finite-dimensional Hilbert spaces that Algorithm 3.9 converges to a solution of the MSSFP (1.5) in the consistent case. Below is a version of this convergence in infinite-dimensional Hilbert spaces.

Theorem 3.10 (see [8]). Assume that $0 < \gamma < 2/L$, where $L$ is given by (1.14). The sequence $\{x_n\}$ generated by Algorithm 3.9 converges weakly to a point which is a solution of the MSSFP (1.5) in the consistent case, and a minimizer of the function $p$ over $\Omega$ in the inconsistent case.

Subsequently, Lopez et al. [18] considered a variant of Algorithm 3.9 to solve (1.16).

Theorem 3.12 (see [18]). Assume that $0 < \liminf_n \gamma_n \leq \limsup_n \gamma_n < 2/L$, where $L$ is given by (1.14). The sequence $\{x_n\}$ generated by Algorithm 3.11 converges weakly to a solution of (1.16).

Remark 3.13. It is obvious that Theorem 3.12 contains Theorem 3.10 as a special case.

Perturbation Techniques
Consider the consistent problem (1.16) and denote by $\Gamma$ its nonempty solution set. As pointed out above, the projection $P_K$, where $K$ is a closed convex subset of $H$, may be difficult to compute, unless $K$ has a simple form (e.g., a closed ball or a half-space). Therefore some perturbed methods are presented in order to avoid this inconvenience.
We can use subdifferentials when $\Omega$, the $C_i$, and the $Q_j$ are level sets of convex functionals. Consider $C_i = \{x \in H_1 : c_i(x) \leq 0\}$ and $Q_j = \{y \in H_2 : q_j(y) \leq 0\}$, where the $c_i$ and $q_j$ are convex functionals. We iteratively define a sequence $\{x_n\}$ as follows.

Algorithm 3.14. The initial guess $x_0$ is arbitrary; once $x_n$ has been defined, the $(n+1)$th iterate $x_{n+1}$ is defined by a subgradient projection step onto half-space relaxations of the level sets above.

Theorem 3.15 (see [18]). Assume that each of the functions $c_i$, $1 \leq i \leq t$, and $q_j$, $1 \leq j \leq r$, has subdifferentials bounded on every bounded subset of $H_1$ and $H_2$, respectively. (Note that this condition is automatically satisfied in a finite-dimensional Hilbert space.) Then the sequence $\{x_n\}$ generated by Algorithm 3.14 converges weakly to a solution of (1.16), provided that the sequence of step sizes $\{\gamma_n\}$ satisfies $0 < \liminf_n \gamma_n \leq \limsup_n \gamma_n < 2/L$, where the constant $L$ is given by (1.14).

Now consider general perturbation techniques in the direction of the approaches studied in [20–22, 44]. These techniques consist in taking approximate sets, which involves the $\rho$-distance between two closed convex sets $S$ and $S'$ of a Hilbert space: $d_\rho(S, S') = \sup_{\|x\|\leq\rho}|d(x, S) - d(x, S')|$. Let $\Omega_n$, $C_i^n$, and $Q_j^n$ be closed convex sets which are viewed as perturbations of the closed convex sets $\Omega$, $C_i$, and $Q_j$, respectively. Define the function $p_n$ by replacing $C_i$ and $Q_j$ with $C_i^n$ and $Q_j^n$ in the definition of $p$; the gradient $\nabla p_n$ is obtained by the same replacement in $\nabla p$. It is clear that $\nabla p_n$ is Lipschitz continuous with the Lipschitz constant $L$ given by (1.14).

Algorithm 3.16. Let an initial guess $x_0$ be given, and let $\{x_n\}$ be generated by the Krasnosel'skii-Mann iterative algorithm.
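The Krasnosel'skii-Mann scheme $x_{n+1} = (1 - \alpha_n)x_n + \alpha_n Tx_n$ can be sketched for an averaged operator $T$; here $T$ is a projection-gradient operator for a toy SFP (hypothetical data, not the perturbed sets of Algorithm 3.16):

```python
import numpy as np

# Generic Krasnosel'skii-Mann iteration for a nonexpansive (averaged) T.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
P_C = lambda x: np.clip(x, -1.0, 1.0)      # C = [-1, 1]^2
P_Q = lambda y: np.clip(y, 0.0, 1.0)       # Q = [0, 1]^2
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # makes T averaged

def T(x):
    Ax = A @ x
    return P_C(x - gamma * A.T @ (Ax - P_Q(Ax)))

x = np.array([10.0, -10.0])
for n in range(1, 2001):
    a_n = 0.5          # any sequence with sum a_n (1 - a_n) = infinity
    x = (1 - a_n) * x + a_n * T(x)

assert np.all(np.abs(x) <= 1 + 1e-6)                          # x in C
assert np.all(A @ x >= -1e-6) and np.all(A @ x <= 1 + 1e-6)   # Ax in Q
```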

In [8], Xu proved the following result.

Theorem 3.17 (see [8]). Assume that the following conditions are satisfied. (i).(ii).(iii)For each , , and , there hold , , and .Then the sequence generated by Algorithm 3.16 converges weakly to a solution of MSSFP (1.5).

Lopez et al. [18] further obtained a general result by relaxing condition (ii).

Theorem 3.18 (see [18]). Assume that the following conditions are satisfied. (i).(ii) for all (note that may be larger than one since ) and (iii)For each , , and , there hold ,, and .
Then the sequence generated by Algorithm 3.16 converges weakly to a solution of (1.16).

Corollary 3.19. Assume that the following conditions are satisfied. (i).(ii) for all (note that may be larger than one since ) and Then the sequence generated by converges weakly to a solution of the MSSFP (1.5).

Note that all of the above algorithms converge only weakly. Next, we will consider some algorithms with strong convergence.

Algorithm 3.20. The Halpern iterations are $x_{n+1} = \alpha_n u + (1 - \alpha_n)B_N B_{N-1}\cdots B_1 x_n$, $n \geq 0$, where $u$ is a fixed anchor point.

Theorem 3.21. Assume that the MSSFP (1.5) is consistent, $0 < \gamma < 2/L$ with $L$ given by (1.11), and $\{\alpha_n\} \subset (0, 1)$ satisfies the conditions (for instance, $\alpha_n = 1/n$ for all $n$): (C1) $\lim_{n\to\infty}\alpha_n = 0$; (C2) $\sum_{n=1}^{\infty}\alpha_n = \infty$; (C3) $\sum_{n=1}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty}\alpha_n/\alpha_{n+1} = 1$.
Then the sequence $\{x_n\}$ generated by Algorithm 3.20 converges strongly to the solution of the MSSFP (1.5) that is closest to $u$ among all points of the solution set of the MSSFP (1.5).
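A numerical illustration of the Halpern anchor effect: unlike the KM scheme, the limit is the particular solution nearest the anchor $u$. The sets below are toy data of our own choosing:

```python
import numpy as np

# Halpern iteration x_{n+1} = a_n u + (1 - a_n) T x_n with a_n = 1/(n+1).
A = np.eye(2)
P_C = lambda x: np.clip(x, 1.0, 3.0)   # C = [1, 3]^2
P_Q = lambda y: np.clip(y, 0.0, 2.0)   # Q = [0, 2]^2
gamma = 1.0                            # ||A|| = 1, so gamma in (0, 2)

def T(x):
    Ax = A @ x
    return P_C(x - gamma * A.T @ (Ax - P_Q(Ax)))

u = np.zeros(2)                        # anchor point
x = np.array([5.0, 5.0])
for n in range(1, 20001):
    a_n = 1.0 / (n + 1)                # satisfies (C1)-(C3)
    x = a_n * u + (1 - a_n) * T(x)

# Solution set is C ∩ Q = [1, 2]^2; the point nearest u = 0 is (1, 1).
assert np.allclose(x, [1.0, 1.0], atol=1e-3)
```

Note the slow $O(1/n)$ approach typical of Halpern iterations with $\alpha_n = 1/(n+1)$, the price paid for strong convergence to a selected solution.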

Next, we consider a perturbation algorithm which has strong convergence.

Algorithm 3.22. Given an initial guess , let be generated by the perturbed iterative algorithm

Theorem 3.23 (see [18]). Assume that the following conditions are satisfied. (i).(ii) and .(iii)For each , , and , there hold , , and .
Then the sequence generated by Algorithm 3.22 converges in norm to the solution of (1.16) which is nearest to .

Corollary 3.24. Assume that the following conditions are satisfied. (i).(ii) and .
Then the sequence generated by converges in norm to a solution of the MSSFP (1.5).

Regularized Methods
Consider the following regularization: $p_\varepsilon(x) = p(x) + \frac{\varepsilon}{2}\|x\|^2$, where $\varepsilon > 0$ is the regularization parameter. We can compute the gradient of $p_\varepsilon$ as $\nabla p_\varepsilon(x) = \nabla p(x) + \varepsilon x$. It is easily seen that $\nabla p_\varepsilon$ is Lipschitz continuous with constant $L_\varepsilon = L + \varepsilon$. It is known that $\nabla p_\varepsilon$ is $\varepsilon$-strongly monotone.
Consider the following regularized minimization problem: $\min_{x\in\Omega} p_\varepsilon(x)$, which has a unique solution denoted by $x_\varepsilon$.

Theorem 3.25. The strong limit $\lim_{\varepsilon\to 0^+}x_\varepsilon$ exists and equals the minimum-norm solution of (1.16).

Algorithm 3.26. Given an initial point $x_0$, define a sequence $\{x_n\}$ by the iterative algorithm $x_{n+1} = P_\Omega(x_n - \gamma_n(\nabla p(x_n) + \varepsilon_n x_n))$.

Theorem 3.27 (see [18]). Assume the sequences $\{\gamma_n\}$ and $\{\varepsilon_n\}$ satisfy the conditions: (i) $0 < \gamma_n \leq \varepsilon_n/(L + \varepsilon_n)^2$ for all (large enough) $n$; (ii) $\varepsilon_n \to 0$; (iii) $\sum_n\gamma_n\varepsilon_n = \infty$; (iv) $(|\gamma_{n+1} - \gamma_n| + |\gamma_{n+1}\varepsilon_{n+1} - \gamma_n\varepsilon_n|)/(\gamma_{n+1}\varepsilon_{n+1})^2 \to 0$.
Then the sequence $\{x_n\}$ generated by Algorithm 3.26 strongly converges to the minimum-norm solution of (1.16).

Consider the following constrained minimization problem: $\min_{x\in\Omega} p(x)$ (3.31), where $p$ is defined as in (1.12) and $\Omega$ is the same auxiliary simple nonempty closed convex set as in (1.16). This optimization problem was proposed by Censor et al. [5] for solving the constrained MSSFP (1.5) in finite-dimensional Hilbert spaces. We know that a point $x^*$ is a stationary point of problem (3.31) if it satisfies $\langle\nabla p(x^*), x - x^*\rangle \geq 0$ for all $x \in \Omega$ (3.32). Thus, from Proposition 2.8, we can use the gradient-projection algorithm below, developed by Censor et al. ([5, 24]), to solve the MSSFP: $x_{n+1} = P_\Omega(x_n - s\nabla p(x_n))$ (3.33), where the step size $s$ satisfies $0 < s < 2/L$ (3.34).
Note that the above method of Censor et al. is the application of the projection method of Goldstein [45] and Levitin and Polyak [46] to the variational inequality problem (3.32), which is among the simplest numerical methods for solving variational inequality problems. Nevertheless, the efficiency of this projection method depends greatly on the choice of the step size $s$. If one chooses a small $s$ satisfying condition (3.34), so that convergence of the iterative sequence is guaranteed, the recursion converges slowly. On the other hand, if one chooses a large step size to improve the speed of convergence, the generated sequence may fail to converge. In real applications, the Lipschitz constant may be difficult to estimate, even when the underlying mapping is linear, as is the case for the MSSFP.
To overcome the difficulty in estimating the Lipschitz constant, He et al. [47] developed a self-adaptive method for solving variational inequality problems, in which the constant step size of the original Goldstein-Levitin-Polyak method is replaced by a sequence of parameters selected self-adaptively. The numerical results reported in He et al. [47] have shown that the self-adaptive strategy is valid and robust for solving variational inequality problems. The efficiency of their modified algorithm is not affected by the initial choice of the parameter; that is, for any given initial choice, the algorithm can adjust it and finally find a "suitable" one. Thus, there is no need to pay much attention to the choice of the step size, as in the original Goldstein-Levitin-Polyak method. Moreover, the computational burden at each iteration is not much larger than that of the original Goldstein-Levitin-Polyak method. Later, their method was extended to a more flexible self-adaptive rule by Han and Sun [25].
Motivated by the self-adaptive strategy, Zhang et al. [23] proposed the following method for solving the MSSFP by using variable step sizes, instead of the fixed step sizes as in Censor et al. [5, 24].

Algorithm 3.28. (S1) Given a nonnegative sequence with , , , , and arbitrary initial point , set and .(S2) Find the smallest nonnegative integer such that and which satisfies (S3) If then set ; otherwise, set .(S4)If , stop; otherwise, set and go to (S2).

Theorem 3.29 (see [23]). The proposed Algorithm 3.28 is globally convergent.

Remark 3.30. This new method is a modification of the projection method proposed by Goldstein [45] and Levitin and Polyak [46], where the constant step size of their original method is replaced by an automatically selected one per iteration. This is very important, since it helps us avoid the difficult task of selecting a "suitable" step size.

The following self-adaptive projection method, which adopts Armijo-like searches to solve the MSSFP, was introduced by Zhao and Yang [7].

Algorithm 3.31. Given constants $\sigma > 0$, $\beta \in (0, 1)$, and $\mu \in (0, 1)$, let $x_0$ be arbitrary. For $n = 0, 1, \dots$, calculate $x_{n+1} = P_\Omega(x_n - \tau_n\nabla p(x_n))$, where $\tau_n = \sigma\beta^{m_n}$ and $m_n$ is the smallest nonnegative integer such that $p(x_n - \tau_n\nabla p(x_n)) \leq p(x_n) - \mu\tau_n\|\nabla p(x_n)\|^2$.

Algorithm 3.31 does not need an estimate of the Lipschitz constant of $\nabla p$ or the largest eigenvalue of the matrix $A^TA$; the step size is chosen so that the objective function has a sufficient decrease. It is in fact a special case of the standard gradient-projection method with Armijo-like search for solving the constrained optimization problem (3.31).
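The backtracking search can be sketched as follows; the proximity function here uses one set in each space, and the search parameters $\sigma$, $\beta$, $\mu$ are assumed values for illustration (toy data throughout):

```python
import numpy as np

# Armijo-like backtracking in the spirit of Algorithm 3.31: the step
# tau = sigma * beta^m is shrunk until a sufficient-decrease test holds,
# so no Lipschitz constant of grad p is ever estimated.
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)
P_Q = lambda y: np.clip(y, 0.0, 2.0)

def p(x):
    Ax = A @ x
    return 0.5 * np.sum((x - P_C(x))**2) + 0.5 * np.sum((Ax - P_Q(Ax))**2)

def grad_p(x):
    Ax = A @ x
    return (x - P_C(x)) + A.T @ (Ax - P_Q(Ax))

sigma, beta, mu = 1.0, 0.5, 0.1      # search parameters (assumed values)
x = np.array([4.0, -3.0])
for _ in range(200):
    g = grad_p(x)
    tau = sigma
    # shrink tau until the Armijo-type decrease condition holds
    while p(x - tau * g) > p(x) - mu * tau * (g @ g):
        tau *= beta
    x = x - tau * g

assert p(x) < 1e-10                  # proximity function driven to zero
```

The loop always terminates, since the decrease condition is guaranteed once $\tau \leq 2(1-\mu)/L$.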

The following convergence result for the gradient projection method with the Armijo-like searches solving the generalized convex optimization problem (3.31) ensures the convergence of Algorithm 3.31.

Theorem 3.32. Let $p$ be pseudoconvex and $\{x_n\}$ be an infinite sequence generated by the gradient-projection method with Armijo-like searches. Then the following conclusions hold: (1) the sequence $\{p(x_n)\}$ is convergent; (2) the set of optimal solutions to (3.31) is nonempty if and only if there exists at least one limit point of $\{x_n\}$; in this case, $\{x_n\}$ converges to a solution of (3.31).

However, we find that, in each iteration step of Algorithm 3.31, it costs a large amount of work to compute the orthogonal projections $P_{C_i}$ and $P_{Q_j}$. In what follows, we consider the case where these projections are not easily calculated, and we consider a relaxed self-adaptive projection method for solving the MSSFP. In detail, the MSSFP and the convex sets $C_i$ and $Q_j$ in this part should satisfy the following assumptions.
(1) The solution set of the constrained MSSFP is nonempty.
(2) The sets $C_i$, $1 \leq i \leq t$, are given by $C_i = \{x \in H_1 : c_i(x) \leq 0\}$, where the $c_i$ are convex functions. The sets $Q_j$, $1 \leq j \leq r$, are given by $Q_j = \{y \in H_2 : q_j(y) \leq 0\}$, where the $q_j$ are convex functions.
(3) For any $x \in H_1$, at least one subgradient $\xi \in \partial c_i(x)$ can be calculated, where $\partial c_i(x)$ is a generalized gradient, called the subdifferential of $c_i$ at $x$, defined as $\partial c_i(x) = \{\xi \in H_1 : c_i(z) \geq c_i(x) + \langle\xi, z - x\rangle \text{ for all } z \in H_1\}$. Similarly, for any $y \in H_2$, at least one subgradient $\eta \in \partial q_j(y)$ can be calculated, where $\partial q_j(y) = \{\eta \in H_2 : q_j(u) \geq q_j(y) + \langle\eta, u - y\rangle \text{ for all } u \in H_2\}$.
In the $n$th iteration, the sets $C_i$ and $Q_j$ are replaced by the half-spaces $C_i^n = \{x : c_i(x_n) + \langle\xi_i^n, x - x_n\rangle \leq 0\}$, where $\xi_i^n$ is an element in $\partial c_i(x_n)$, and $Q_j^n = \{y : q_j(Ax_n) + \langle\eta_j^n, y - Ax_n\rangle \leq 0\}$, where $\eta_j^n$ is an element in $\partial q_j(Ax_n)$.

Define $p_n$ as the proximity function with $C_i$ and $Q_j$ replaced by $C_i^n$ and $Q_j^n$. Obviously, $C_i \subseteq C_i^n$ and $Q_j \subseteq Q_j^n$.
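The key computational gain is that projecting onto the subgradient half-space has a closed form even when projecting onto the level set itself does not. A minimal sketch, using the unit ball as a toy level set:

```python
import numpy as np

# Relaxed projection: for C = {x : c(x) <= 0} with c convex, project onto
# the half-space {x : c(x_k) + <xi, x - x_k> <= 0}, xi in the
# subdifferential of c at x_k, which contains C and projects in closed form.
def proj_relaxed(x_k, c_val, xi):
    """Project x_k onto {x : c_val + <xi, x - x_k> <= 0}."""
    if c_val <= 0:
        return x_k                       # x_k already feasible
    return x_k - (c_val / (xi @ xi)) * xi

# Example: C is the unit ball, c(x) = ||x||^2 - 1, subgradient 2x.
x = np.array([3.0, 4.0])
for _ in range(50):
    c_val = x @ x - 1.0
    x = proj_relaxed(x, c_val, 2.0 * x)

assert x @ x <= 1.0 + 1e-6               # iterates reach the level set C
```

For this particular $c$, the iteration reduces to Newton's method for the radius, so convergence is very fast; in general the half-space relaxations $C_i^n$, $Q_j^n$ are recomputed at each iterate, as in Algorithm 3.33 below.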

Algorithm 3.33. Given constants $\sigma > 0$, $\beta \in (0, 1)$, and $\mu \in (0, 1)$, let $x_0$ be arbitrary. For $n = 0, 1, \dots$, compute $x_{n+1} = P_\Omega(x_n - \tau_n\nabla p_n(x_n))$, where $\tau_n = \sigma\beta^{m_n}$ and $m_n$ is the smallest nonnegative integer such that $p_n(x_n - \tau_n\nabla p_n(x_n)) \leq p_n(x_n) - \mu\tau_n\|\nabla p_n(x_n)\|^2$.

Theorem 3.34 (see [7]). The sequence generated by Algorithm 3.33 converges to a solution of the MSSFP.

#### Acknowledgments

Y. Yao was supported in part by Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279 and NSFC 71161001-G0105. R. Chen was supported in part by NSFC 11071279. Y.-C. Liou was partially supported by the Program TH-1-3, Optimization Lean Cycle, of Sub-Projects TH-1 of Spindle Plan Four in Excellence Teaching and Learning Plan of Cheng Shiu University, and was supported in part by NSC 100–2221-E-230-012.

#### References

1. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
2. H.-K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 17 pages, 2010.
3. Q. Yang, “The relaxed CQ algorithm solving the split feasibility problem,” Inverse Problems, vol. 20, no. 4, pp. 1261–1266, 2004.
4. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
5. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, “The multiple-sets split feasibility problem and its applications for inverse problems,” Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
6. B. Qu and N. Xiu, “A note on the CQ algorithm for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1655–1665, 2005.
7. J. Zhao and Q. Yang, “Self-adaptive projection methods for the multiple-sets split feasibility problem,” Inverse Problems, vol. 27, no. 3, Article ID 035009, 13 pages, 2011.
8. H.-K. Xu, “A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem,” Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
9. H.-K. Xu, “Averaged mappings and the gradient-projection algorithm,” Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 360–378, 2011.
10. Y. Dang and Y. Gao, “The strong convergence of a KM-CQ-like algorithm for a split feasibility problem,” Inverse Problems, vol. 27, no. 1, Article ID 015007, 9 pages, 2011.
11. F. Wang and H.-K. Xu, “Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem,” Journal of Inequalities and Applications, vol. 2010, Article ID 102085, 13 pages, 2010.
12. Z. Wang, Q. Yang, and Y. Yang, “The relaxed inexact projection methods for the split feasibility problem,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5347–5359, 2011.
13. M. D. Altschuler and Y. Censor, “Feasibility solutions in radiation therapy treatment planning,” in Proceedings of the 8th International Conference on the Use of Computers in Radiation Therapy, pp. 220–224, IEEE Computer Society Press, Silver Spring, Md, USA, 1984.
14. M. D. Altschuler, W.D. Powlis, and Y. Censor, “Teletherapy treatment planning with physician requirements included in the calculation: I. Concepts and methodology,” in Optimization of Cancer Radiotherapy, B. R. Paliwal, D. E. Herbert, and C. G. Orton, Eds., pp. 443–452, American Institute of Physics, New York, NY, USA, 1985.
15. Y. Censor, “Mathematical aspects of radiation therapy treatment planning: continuous inversion versus full discretization and optimization versus feasibility,” in Computational Radiology and Imaging: Therapy and Diagnostics, C. Borgers and F. Natterer, Eds., vol. 110 of The IMA Volumes in Mathematics and Its Applications, pp. 101–112, Springer, New York, NY, USA, 1999.
16. Y. Censor, M. D. Altschuler, and W. D. Powlis, “A computational solution of the inverse problem in radiation-therapy treatment planning,” Applied Mathematics and Computation, vol. 25, no. 1, pp. 57–87, 1988.
17. F. Wang and H.-K. Xu, “Cyclic algorithms for split feasibility problems in Hilbert spaces,” Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 12, pp. 4105–4111, 2011.
18. G. Lopez, V. Martin-Marquez, and H.-K. Xu, “Iterative algorithms for the multiple-sets split feasibility problem,” in Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, Y. Censor, M. Jiang, and G. Wang, Eds., pp. 243–279, Medical Physics Publishing, Madison, Wis, USA, 2009.
19. Y. Censor and A. Segal, “The split common fixed point problem for directed operators,” Journal of Convex Analysis, vol. 16, no. 2, pp. 587–600, 2009.
20. Q. Yang and J. Zhao, “Generalized KM theorems and their applications,” Inverse Problems, vol. 22, no. 3, pp. 833–844, 2006.
21. J. Zhao and Q. Yang, “Several solution methods for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
22. Y. Censor, A. Motova, and A. Segal, “Perturbed projections and subgradient projections for the multiple-sets split feasibility problem,” Journal of Mathematical Analysis and Applications, vol. 327, no. 2, pp. 1244–1256, 2007.
23. W. Zhang, D. Han, and Z. Li, “A self-adaptive projection method for solving the multiple-sets split feasibility problem,” Inverse Problems, vol. 25, no. 11, Article ID 115001, 16 pages, 2009.
24. Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, “The split feasibility model leading to a unified approach for inversion problems in intensity-modulated radiation therapy,” Tech. Rep., Department of Mathematics, University of Haifa, Haifa, Israel, 2005.
25. D. Han and W. Sun, “A new modified Goldstein-Levitin-Polyak projection method for variational inequality problems,” Computers & Mathematics with Applications, vol. 47, no. 12, pp. 1817–1825, 2004.
26. Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, “A unified approach for inversion problems in intensity-modulated radiation therapy,” Physics in Medicine and Biology, vol. 51, no. 10, pp. 2353–2365, 2006.
27. E. K. Lee, T. Fox, and I. Crocker, “Integer programming applied to intensity-modulated radiation therapy treatment planning,” Annals of Operations Research, vol. 119, no. 1–4, pp. 165–181, 2003.
28. J. R. Palta and T. R. Mackie, Eds., Intensity-Modulated Radiation Therapy: The State of the Art, Medical Physical Monograph 29, American Association of Physists in Medicine, Medical Physical Publishing, Madison, Wis, USA, 2003.
29. Q. Wu, R. Mohan, A. Niemierko, and R. Schmidt-Ullrich, “Optimization of intensity-modulated radiotherapy plans based on the equivalent uniform dose,” International Journal of Radiation Oncology Biology Physics, vol. 52, no. 1, pp. 224–235, 2002.
30. B. Eicke, “Iteration methods for convexly constrained ill-posed problems in Hilbert space,” Numerical Functional Analysis and Optimization, vol. 13, no. 5-6, pp. 413–429, 1992.
31. E. S. Levitin and B. T. Poljak, “Minimization methods in the presence of constraints,” Žurnal Vyčislitel' noĭ Matematiki i Matematičeskoĭ Fiziki, vol. 6, pp. 787–823, 1966.
32. C. I. Podilchuk and R. J. Mammone, “Image recovery by convex projections using a least-squares constraint,” Journal of the Optical Society of America A, vol. 7, pp. 517–521, 1990.
33. H. H. Bauschke and J. M. Borwein, “On projection algorithms for solving convex feasibility problems,” SIAM Review, vol. 38, no. 3, pp. 367–426, 1996.
34. M. Fukushima, “A relaxed projection method for variational inequalities,” Mathematical Programming, vol. 35, no. 1, pp. 58–70, 1986.
35. D. C. Youla, “On deterministic convergence of iterations of relaxed projection operators,” Journal of Visual Communication and Image Representation, vol. 1, no. 1, pp. 12–20, 1990.
36. D. Youla, “Mathematical theory of image restoration by the method of convex projections,” in Image Recovery Theory and Applications, H. Stark, Ed., p. xx+543, Academic Press, Orlando, Fla, USA, 1987.
37. M. I. Sezan and H. Stark, “Applications of convex projection theory to image recovery in tomography and related areas,” Image Recovery Theory and Applications, Academic Press, Orlando, Fla, USA, 1987.
38. A. Cegielski, “Generalized relaxation of nonexpansive operators and convex feasibility problems,” in Nonlinear Analysis and Optimization I. Nonlinear Analysis, vol. 513 of Contemporary Mathematics, pp. 111–123, American Mathematical Society, Providence, RI, USA, 2010.
39. C. Byrne, “Bregman-Legendre multidistance projection algorithms for convex feasibility and optimization,” in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, vol. 8 of Studies in Computational Mathematics, pp. 87–99, North-Holland, Amsterdam, The Netherlands, 2001.
40. C. Byrne and Y. Censor, “Proximity function minimization using multiple Bregman projections, with applications to split feasibility and Kullback-Leibler distance minimization,” Annals of Operations Research, vol. 105, pp. 77–98, 2001.
41. Y. Censor, D. Gordon, and R. Gordon, “BICAV: A block-iterative parallel algorithm for sparse systems with pixel-related weighting,” IEEE Transactions on Medical Imaging, vol. 20, no. 10, pp. 1050–1060, 2001.
42. Y. Censor, A. Gibali, and S. Reich, “The subgradient extragradient method for solving variational inequalities in Hilbert space,” Journal of Optimization Theory and Applications, vol. 148, no. 2, pp. 318–335, 2011.
43. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
44. J. M. Dye and S. Reich, “On the unrestricted iteration of projections in Hilbert space,” Journal of Mathematical Analysis and Applications, vol. 156, no. 1, pp. 101–119, 1991.
45. A. A. Goldstein, “Convex programming in Hilbert space,” Bulletin of the American Mathematical Society, vol. 70, pp. 709–710, 1964.
46. E. S. Levitin and B. T. Polyak, “Constrained minimization problems,” U.S.S.R. Computational Mathematics and Mathematical Physics, vol. 6, pp. 1–50, 1966.
47. B. S. He, H. Yang, Q. Meng, and D. R. Han, “Modified Goldstein-Levitin-Polyak projection method for asymmetric strongly monotone variational inequalities,” Journal of Optimization Theory and Applications, vol. 112, no. 1, pp. 129–143, 2002.