Abstract and Applied Analysis
Volume 2012 (2012), Article ID 149508, 12 pages
An Extrapolated Iterative Algorithm for Multiple-Set Split Feasibility Problem
1School of Management, University of Shanghai for Science and Technology, Shanghai 200093, China
2School of Mathematics and Information Science, Henan Polytechnic University, Jiaozuo 454000, China
Received 29 December 2011; Revised 23 February 2012; Accepted 23 February 2012
Academic Editor: Khalida Inayat Noor
Copyright © 2012 Yazheng Dang and Yan Gao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The multiple-set split feasibility problem (MSSFP), a generalization of the split feasibility problem, is to find a point in the intersection of a family of closed convex sets in one space such that its image under a linear transformation lies in the intersection of another family of closed convex sets in the image space. Censor et al. (2005) proposed a method for solving the MSSFP whose efficiency depends heavily on the step size, a fixed constant related to the Lipschitz constant of the gradient of the proximity function; such a fixed step size may make the method slow. In this paper, we present an accelerated algorithm for the MSSFP obtained by introducing an extrapolation factor. The framework encompasses the algorithm presented by Censor et al. (2005). The convergence of the method is investigated, and numerical experiments are provided to illustrate the benefits of the extrapolation.
The multiple-set split feasibility problem (MSSFP) is to find a point $x^*$ with
$$x^* \in C := \bigcap_{i=1}^{t} C_i \quad \text{such that} \quad Ax^* \in Q := \bigcap_{j=1}^{r} Q_j, \qquad (1.1)$$
where $t$ and $r$ are positive integers, the sets $C_i \subseteq \mathbb{R}^N$ and $Q_j \subseteq \mathbb{R}^M$ are closed and convex, and $A$ is an $M \times N$ real matrix. When $t = r = 1$, the problem reduces to finding a point $x^* \in C$ with $Ax^* \in Q$, which is just the two-set split feasibility problem (SFP, for short). The SFP was originally introduced in [1], allowing for constraints in both the domain and the range of a linear operator. Many methods have been developed for solving the SFP, for example, the basic CQ algorithm proposed by Byrne [2], the relaxed CQ algorithm presented by Yang [3], and the KM-CQ-like algorithm developed by Dang and Gao [4]. The MSSFP, formulated in [5], arises in the field of intensity-modulated radiation therapy when one attempts to describe physical dose constraints and equivalent uniform dose (EUD) constraints within a single model; see [6]. Censor et al. generalized the CQ algorithm to solve the MSSFP [5], obtaining the iterative process
$$x^{k+1} = x^k + s\Big(\sum_{i=1}^{t}\alpha_i\big(P_{C_i}(x^k)-x^k\big) + \sum_{j=1}^{r}\beta_j A^T\big(P_{Q_j}(Ax^k)-Ax^k\big)\Big), \qquad (1.2)$$
where $0 < s < 2/L$ with $L = \sum_{i=1}^{t}\alpha_i + \rho(A^TA)\sum_{j=1}^{r}\beta_j$, $\rho(A^TA)$ is the spectral radius of $A^TA$, and $\alpha_i > 0$, $\beta_j > 0$ for all $i$ and $j$ with $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$. Here $P_\Omega$ denotes the orthogonal projection onto a closed convex set $\Omega$, that is, $P_\Omega(x) = \operatorname{argmin}\{\|x - y\| : y \in \Omega\}$. Other algorithms for solving the MSSFP have also appeared: Xu [7] and Masad and Reich [8] introduced strong convergence methods in infinite-dimensional Hilbert space; Censor et al. [9] presented perturbed projection and simultaneous subgradient projection algorithms to deal with the difficulty of accurately computing orthogonal projections; and Censor and Segal proposed a string-averaging algorithmic scheme for the sparse case in [10] and employed a product space formulation to derive and analyze a simultaneous algorithm for the MSSFP in [11]. However, the above algorithms use a fixed step size related to the largest eigenvalue of the matrix $A^TA$, which sometimes affects their convergence speed.
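To make iteration (1.2) concrete, here is a minimal numerical sketch in Python. The sets, weights, matrix, and ball projections are illustrative assumptions, not data from the paper; only the iteration scheme follows (1.2).

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the closed ball {y : ||y - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + (radius / n) * d

def mssfp_fixed_step(x0, A, projs_C, projs_Q, alphas, betas, s,
                     max_iter=5000, tol=1e-8):
    """Simultaneous-projection iteration in the style of (1.2).

    projs_C, projs_Q : lists of projection operators onto the sets C_i and Q_j
    alphas, betas    : positive weights with sum(alphas) + sum(betas) = 1
    s                : step size, 0 < s < 2/L, L = sum(alphas) + rho(A^T A)*sum(betas)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Ax = A @ x
        # gradient of the proximity function p at x
        grad = sum(a * (x - P(x)) for a, P in zip(alphas, projs_C))
        grad = grad + A.T @ sum(b * (Ax - P(Ax)) for b, P in zip(betas, projs_Q))
        if np.linalg.norm(grad) < tol:
            break
        x = x - s * grad
    return x

# Illustrative instance (not from the paper): C and Q are unit balls.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
projs_C = [lambda v: project_ball(v, np.zeros(2), 1.0)]
projs_Q = [lambda v: project_ball(v, np.zeros(2), 1.0)]
L = 0.5 + 0.5 * np.linalg.eigvalsh(A.T @ A).max()  # L = sum(alpha) + rho(A^T A)*sum(beta)
x = mssfp_fixed_step(np.array([3.0, 4.0]), A, projs_C, projs_Q,
                     alphas=[0.5], betas=[0.5], s=1.0 / L)
print(x, np.linalg.norm(x), np.linalg.norm(A @ x))
```

The step `s = 1/L` is one admissible choice inside the interval $(0, 2/L)$; the returned point lies (up to the tolerance) in $C$ with $Ax$ in $Q$.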
The extrapolated iterative method was first proposed by Pierra in [12]. It is an acceleration technique in optimization: Pierra observed that the extrapolation parameter can be much larger than 1 and that the sequence generated by the extrapolated method converges fast. Subsequently, Bauschke et al. [13] proposed a general parallel block-iterative algorithmic framework with extrapolated overrelaxations for solving affine-convex feasibility problems; the corresponding numerical results also show fast convergence.
Motivated by the extrapolated method for affine-convex feasibility problems, in this paper we present an extrapolated iterative method for solving the MSSFP. As will be shown, our algorithm extends the method proposed by Censor et al. in [5] and includes it as a special case.
Under normal circumstances, the MSSFP covers both the feasible and the infeasible cases through a proximity function: if the MSSFP is consistent, then unconstrained minimization of the proximity function yields the value 0; in the inconsistent case, it finds a point that least violates feasibility by being "closest" to all the sets, as "measured" by the proximity function. The minimization problem is
$$\min_{x \in \mathbb{R}^N} p(x).$$
We know that the projections onto the intersections $C = \bigcap_{i=1}^{t} C_i$ and $Q = \bigcap_{j=1}^{r} Q_j$ are difficult to implement, even if each individual set $C_i$ and $Q_j$ has a simple or special structure for which the projection onto it is easy. In practical applications, the projections onto the individual sets are much more easily calculated than the projections onto the intersections. For this purpose, Censor et al. [5] introduced the proximity function
$$p(x) = \frac{1}{2}\sum_{i=1}^{t}\alpha_i\big\|P_{C_i}(x)-x\big\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j\big\|P_{Q_j}(Ax)-Ax\big\|^2,$$
where $\alpha_i > 0$, $\beta_j > 0$ for all $i$ and $j$ with $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$, to measure the distance of a point to all the sets. Its gradient is
$$\nabla p(x) = \sum_{i=1}^{t}\alpha_i\big(x - P_{C_i}(x)\big) + A^T\sum_{j=1}^{r}\beta_j\big(Ax - P_{Q_j}(Ax)\big).$$
Hence, (1.2) can be rewritten as
$$x^{k+1} = x^k - s\,\nabla p(x^k). \qquad (2.4)$$
The following lemma provides well-known properties of orthogonal projections.

Lemma 2.1 (see [14]). Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^n$. For any $x, y \in \mathbb{R}^n$ and any $z \in \Omega$, the following properties hold:
(1) $\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|$;
(2) $\langle P_\Omega(x) - x,\ z - P_\Omega(x)\rangle \ge 0$;
(3) $\|P_\Omega(x) - z\|^2 \le \|x - z\|^2 - \|P_\Omega(x) - x\|^2$;
(4) $\langle x - y,\ P_\Omega(x) - P_\Omega(y)\rangle \ge \|P_\Omega(x) - P_\Omega(y)\|^2$.
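These projection properties can be spot-checked numerically for a concrete convex set. The sketch below uses the Euclidean unit ball (an illustrative choice, not from the paper) and random points; each assertion mirrors one property of the lemma.

```python
import numpy as np

def project_unit_ball(x):
    """Euclidean projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x.copy() if n <= 1.0 else x / n

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    z = project_unit_ball(rng.normal(size=3))  # an arbitrary point of the set
    px, py = project_unit_ball(x), project_unit_ball(y)
    # (1) nonexpansiveness
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
    # (2) variational characterization of the projection
    assert np.dot(px - x, z - px) >= -1e-12
    # (3) distance decrease toward points of the set
    assert (np.linalg.norm(px - z) ** 2
            <= np.linalg.norm(x - z) ** 2 - np.linalg.norm(px - x) ** 2 + 1e-9)
    # (4) firm nonexpansiveness
    assert np.dot(x - y, px - py) >= np.linalg.norm(px - py) ** 2 - 1e-12
print("all four projection properties hold on this sample")
```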
3. The Extrapolated Projection Algorithm and Its Convergence
The following is our extrapolated projection algorithm.
Algorithm 3.1. For an arbitrary initial point $x^0$, the sequence $\{x^k\}$ is generated by the iteration
$$x^{k+1} = x^k - \lambda_k\,\nabla p(x^k), \qquad \lambda_k = \begin{cases} \dfrac{2\,p(x^k)}{\|\nabla p(x^k)\|^2}, & \nabla p(x^k) \ne 0,\\[1mm] s, & \nabla p(x^k) = 0, \end{cases} \qquad (3.1)$$
where $s$ is a positive scalar such that $0 < s < 2/L$ with $L = \sum_{i=1}^{t}\alpha_i + \rho(A^TA)\sum_{j=1}^{r}\beta_j$, $\alpha_i > 0$ and $\beta_j > 0$ for all $i$ and $j$ with $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$, and $\rho(A^TA)$ being the spectral radius of $A^TA$.
Evidently, (3.1) reduces to (1.2) when $\lambda_k \equiv s$.
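The following is a minimal numerical sketch of one extrapolated step in the spirit of Algorithm 3.1, using the Pierra-type factor $2p(x)/\|\nabla p(x)\|^2$ for the step length; the unit-ball sets and identity matrix are illustrative assumptions, not data from the paper.

```python
import numpy as np

def project_unit_ball(x):
    """Euclidean projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x.copy() if n <= 1.0 else x / n

def extrapolated_step(x, A, projs_C, projs_Q, alphas, betas):
    """One iteration in the spirit of Algorithm 3.1.

    Instead of the fixed step s of (1.2), the step is scaled by the
    Pierra-type extrapolation factor 2 p(x) / ||grad p(x)||^2, which can be
    much larger than 1/L.  When the gradient vanishes, x is already a
    stationary point of p and is returned unchanged.
    """
    Ax = A @ x
    rc = [x - P(x) for P in projs_C]     # residuals w.r.t. the sets C_i
    rq = [Ax - P(Ax) for P in projs_Q]   # residuals w.r.t. the sets Q_j
    grad = sum(a * r for a, r in zip(alphas, rc)) \
         + A.T @ sum(b * r for b, r in zip(betas, rq))
    two_p = sum(a * (r @ r) for a, r in zip(alphas, rc)) \
          + sum(b * (r @ r) for b, r in zip(betas, rq))   # equals 2 p(x)
    gg = grad @ grad
    if gg == 0.0:
        return x
    return x - (two_p / gg) * grad

# Illustrative instance with A = I and C = Q = the unit ball: the
# extrapolated step reaches the feasible point in a single iteration.
x1 = extrapolated_step(np.array([3.0, 4.0]), np.eye(2),
                       [project_unit_ball], [project_unit_ball], [0.5], [0.5])
print(x1)
```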
Now we prove the convergence of Algorithm 3.1.
Theorem 3.2. Assume that the solution set of the multiple-set split feasibility problem (MSSFP) is nonempty. Then any sequence $\{x^k\}$ generated by Algorithm 3.1 converges to a solution of the MSSFP (1.1).
Proof. Let $x^*$ be a solution of the MSSFP, that is, a point $x^* \in C$ with $Ax^* \in Q$.
Step 1. First we show that the sequence $\{\|x^k - x^*\|\}$ is monotonically decreasing. From (3.1), we have (3.3). Observe (3.4). By property (2) in Lemma 2.1, we get (3.5); therefore (3.6) holds. Similarly, we have (3.7). Since $Ax^* \in Q_j$ for each $j$, using property (2) in Lemma 2.1 again, we obtain (3.8). Substituting (3.6) and (3.8) into (3.3), we get (3.9). Assume that at the $k$th step $\nabla p(x^k) = 0$; then algorithms (3.1) and (2.4) coincide. Since $\nabla p$ has Lipschitz constant $L$ and is inverse strongly monotone (ism) (see [14]), from the proof of Theorem 2.1 in [5] we get that the sequence generated by (1.2) satisfies (3.11). Similarly, assume that at the $k$th step $\nabla p(x^k) \ne 0$; then, replacing $s$ with $\lambda_k$ in (3.9), we get (3.12). Combining (3.11) with (3.12) shows that $\|x^{k+1} - x^*\| \le \|x^k - x^*\|$ for all $k$. Evidently, both $\{x^k\}$ and $\{Ax^k\}$ are bounded.

Step 2. Next we show that $\{x^k\}$ converges to a point $\hat{x}$ with $\hat{x} \in C$ and $A\hat{x} \in Q$.
As shown in Step 1, the sequence $\{\|x^k - x^*\|\}$ is monotonically decreasing and bounded, so the limit (3.15) exists. Since the case $\nabla p(x^k) = 0$ is already treated in [5], we only need to consider the subsequence of iterates with $\nabla p(x^k) \ne 0$. Hence, we need to show that this subsequence converges to a point $\hat{x}$ with $\hat{x} \in C$ and $A\hat{x} \in Q$. From (3.12) and (3.15), replacing $s$ by $\lambda_k$, we obtain (3.16). From (3) in Lemma 2.1, we know that the projected sequences are bounded; then we may assume that there exists a constant $M > 0$ such that (3.17) holds. Therefore, (3.18) follows. Taking limits as $k \to \infty$ in (3.18) and considering (3.16) leads to (3.19), which implies (3.20). Since the sequence $\{x^k\}$ is bounded, there exists a subsequence $\{x^{k_l}\}$ which converges to a point $\hat{x}$, with the corresponding subsequence $\{Ax^{k_l}\}$ converging to $A\hat{x}$. Therefore, from (3.20), it is easy to get that $\hat{x} \in C$ and $A\hat{x} \in Q$.
To obtain the result that the sequence $\{x^k\}$ itself converges to a point $\hat{x}$ with $\hat{x} \in C$ and $A\hat{x} \in Q$, it now suffices to show that every convergent subsequence of $\{x^k\}$ converges to the same point. Suppose that there exists another subsequence of $\{x^k\}$ that converges to a point $\bar{x}$ with, as above, $\bar{x} \in C$ and $A\bar{x} \in Q$. For $\hat{x}$, we obtain (3.21), which, after calculating the inner product, leads to (3.22). Similarly, for $\bar{x}$, it is easy to obtain (3.23). As remarked, the corresponding subsequences are convergent to $\hat{x}$ and $\bar{x}$, respectively. In particular, we get (3.24). Taking the limits in (3.22) and (3.23) along the two subsequences, we deduce (3.25); from the above we conclude that $\hat{x} = \bar{x}$. Hence the whole sequence has a unique cluster point $\hat{x}$ with $\hat{x} \in C$ and $A\hat{x} \in Q$. Replacing $x^*$ with $\hat{x}$ in Step 1, the sequence $\{\|x^k - \hat{x}\|\}$ is monotonically decreasing, and since a subsequence of it converges to 0, by the monotonicity and boundedness of the sequence we get the result.
Here we briefly explain the rationale for the choice of the extrapolation factor $\lambda_k$ in Algorithm 3.1. In fact, if $\nabla p(x^k) \ne 0$, (3.9) can be rewritten as (3.26). Evidently, when $\lambda_k = 2p(x^k)/\|\nabla p(x^k)\|^2$, the maximal decrease of the right-hand side of (3.26) is obtained. Hence, for the case $\nabla p(x^k) \ne 0$, this factor can be considered the "best" possible value, making $x^{k+1}$ the "closest" point to the solution set of the MSSFP along the direction $-\nabla p(x^k)$. Therefore, to some extent, the extrapolation factor plays an important role in the accelerated convergence of Algorithm 3.1.
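The "best" step length along $-\nabla p(x^k)$ can be derived in a few lines from property (2) of Lemma 2.1 (a sketch in our own labeling; $x^*$ denotes a solution, so $p(x^*) = 0$):

```latex
\begin{aligned}
\|x^{k+1}-x^*\|^2
  &= \|x^k-x^*\|^2 - 2\lambda_k\,\langle \nabla p(x^k),\,x^k-x^*\rangle
     + \lambda_k^2\,\|\nabla p(x^k)\|^2\\
  &\le \|x^k-x^*\|^2 - 4\lambda_k\,p(x^k) + \lambda_k^2\,\|\nabla p(x^k)\|^2,
\end{aligned}
```

since $\langle \nabla p(x^k), x^k - x^*\rangle \ge 2p(x^k)$ by applying property (2) of Lemma 2.1 to each set $C_i$ and $Q_j$. Minimizing the right-hand side over $\lambda_k$ gives $\lambda_k = 2p(x^k)/\|\nabla p(x^k)\|^2$, which recovers the extrapolation factor.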
4. Numerical Experiments
In the numerical results listed in the tables below, CPU time is measured in seconds. "Algorithm (1.2)" denotes the projection algorithm developed by Censor et al. in [5], given as (1.2); "Algorithm 3.1" denotes our Algorithm 3.1.
We now give the following examples to test the efficiency of the two algorithms.
Example 4.1. In this example, we consider the multiple-set split feasibility problem for three cases of problem data (Cases I, II, and III). The numbers of iterations needed by Algorithm (1.2) and Algorithm 3.1, and the corresponding solutions, are shown in Table 1.
Example 4.2. In this example, we consider a multiple-set split feasibility problem in which the sets and the matrix $A$ are generated randomly. Starting from a fixed initial point, we test the algorithms with different values of the problem parameters in Euclidean spaces of different dimensions. The number of iterations needed by Algorithm (1.2) and Algorithm 3.1 is displayed in Table 2.
Example 4.3. In this example, we consider the multiple-set split feasibility problem for three further cases of problem data (Cases I, II, and III). The numbers of iterations needed by Algorithm (1.2) and Algorithm 3.1, and the corresponding solutions, are shown in Table 3.
In all of the numerical experiments, we use the same fixed weights $\alpha_i$, $\beta_j$ and the same stopping criterion.
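As a reproducible stand-in for these experiments (the paper's problem data are not recoverable here), the following sketch compares the fixed-step iteration (1.2) with the extrapolated step on a small illustrative two-set instance; the matrix, sets, weights, and tolerance are our assumptions.

```python
import numpy as np

def proj_ball(x):
    """Projection onto the closed unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def run(x0, A, step_rule, max_iter=20000, tol=1e-6):
    """Gradient iteration on the proximity function p with a chosen step rule.

    One set C and one set Q (both unit balls), weights 1/2 and 1/2.
    step_rule(grad, two_p) returns the step length for the current iterate.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Ax = A @ x
        rc, rq = x - proj_ball(x), Ax - proj_ball(Ax)
        grad = 0.5 * rc + A.T @ (0.5 * rq)
        two_p = 0.5 * (rc @ rc) + 0.5 * (rq @ rq)   # equals 2 p(x)
        if np.linalg.norm(grad) < tol:
            return x, k
        x = x - step_rule(grad, two_p) * grad
    return x, max_iter

A = np.array([[2.0, 1.0], [0.0, 1.0]])
L = 0.5 + 0.5 * np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of grad p
x0 = np.array([5.0, 3.0])
x_fix, k_fix = run(x0, A, lambda g, tp: 1.0 / L)        # fixed step, as in (1.2)
x_ext, k_ext = run(x0, A, lambda g, tp: tp / (g @ g))   # extrapolated step
print("fixed:", k_fix, "iterations; extrapolated:", k_ext, "iterations")
```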
From these preliminary numerical results, we can see that the extrapolated method is efficient and that the extrapolation technique does not add a significant computational burden.
This work was supported by National Science Foundation of China (under Grant 111712210), Shanghai Municipal Committee of Science and Technology (under Grant 10550500800), Shanghai Municipal Government (under Grant S30501), and the Innovation Fund Project for Graduate Student of Shanghai (under Grant JWCXSL1001).
1. Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
2. C. Byrne, "Iterative oblique projection onto convex sets and the split feasibility problem," Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
3. Q. Yang, "The relaxed CQ algorithm solving the split feasibility problem," Inverse Problems, vol. 20, no. 4, pp. 1261–1266, 2004.
4. Y. Dang and Y. Gao, "The strong convergence of a KM-CQ-like algorithm for a split feasibility problem," Inverse Problems, vol. 27, no. 1, Article ID 015007, 9 pages, 2011.
5. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, "The multiple-sets split feasibility problem and its applications for inverse problems," Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
6. Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, "A unified approach for inversion problems in intensity-modulated radiation therapy," Physics in Medicine and Biology, vol. 51, no. 10, pp. 2353–2365, 2006.
7. H.-K. Xu, "Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem," Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
8. E. Masad and S. Reich, "A note on the multiple-set split convex feasibility problem in Hilbert space," Journal of Nonlinear and Convex Analysis, vol. 8, no. 3, pp. 367–371, 2007.
9. Y. Censor, A. Motova, and A. Segal, "Perturbed projections and subgradient projections for the multiple-sets split feasibility problem," Journal of Mathematical Analysis and Applications, vol. 327, no. 2, pp. 1244–1256, 2007.
10. Y. Censor and A. Segal, "Sparse string-averaging and split common fixed points," in Nonlinear Analysis and Optimization I. Nonlinear Analysis, vol. 513 of Contemporary Mathematics Series, pp. 125–142, American Mathematical Society, Providence, RI, USA, 2010.
11. Y. Censor and A. Segal, "The split common fixed point problem for directed operators," Journal of Convex Analysis, vol. 16, no. 2, pp. 587–600, 2009.
12. G. Pierra, "Decomposition through formalization in a product space," Mathematical Programming, vol. 28, no. 1, pp. 96–115, 1984.
13. H. H. Bauschke, P. L. Combettes, and S. G. Kruk, "Extrapolation algorithm for affine-convex feasibility problems," Numerical Algorithms, vol. 41, no. 3, pp. 239–274, 2006.
14. F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems. Vol. I, Springer Series in Operations Research, Springer, New York, NY, USA, 2003.