Journal of Applied Mathematics
Volume 2013 (2013), Article ID 347401, 5 pages
http://dx.doi.org/10.1155/2013/347401
Research Article

On Two Projection Algorithms for the Multiple-Sets Split Feasibility Problem

Qiao-Li Dong and Songnian He

College of Science, Civil Aviation University of China, Tianjin 300300, China

Received 15 July 2013; Accepted 27 November 2013

Academic Editor: Livija Cveticanin

Copyright © 2013 Qiao-Li Dong and Songnian He. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present a projection algorithm which modifies the method proposed by Censor et al. (2005) and also introduce a self-adaptive algorithm for the multiple-sets split feasibility problem (MSFP). The global rates of convergence are investigated first, and the sequences generated by the two algorithms are proved to converge to a solution of the MSFP. The efficiency of the proposed algorithms is illustrated by some numerical tests.

1. Introduction

The multiple-sets split feasibility problem (MSFP) is to find $x^* \in C := \bigcap_{i=1}^{t} C_i$ satisfying
$$Ax^* \in Q := \bigcap_{j=1}^{r} Q_j,$$ (1)
where $A$ is an $M \times N$ real matrix, $C_i \subseteq \mathbb{R}^N$, $i = 1, \ldots, t$, and $Q_j \subseteq \mathbb{R}^M$, $j = 1, \ldots, r$, are nonempty closed convex sets. This problem was first proposed by Censor et al. in [1] and can serve as a model for many inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator's range. Many researchers have studied the MSFP and introduced various algorithms to solve it (see [1–7] and the references therein). If $t = r = 1$, then this problem reduces to the feasible case of the split feasibility problem (see, e.g., [8–11]), which is to find $x \in C$ with $Ax \in Q$.

Assume that the MSFP (1) is consistent; that is, its solution set, denoted by $\Gamma$, is nonempty. For convenience, Censor et al. [1] considered the following constrained MSFP:
$$\text{find } x^* \in \Omega \text{ that solves (1)},$$ (2)
where $\Omega \subseteq \mathbb{R}^N$ is an auxiliary simple nonempty closed convex set containing at least one solution of the MSFP. For solving the constrained MSFP, Censor et al. [1] defined a proximity function to measure the distance of a point to all sets:
$$p(x) := \frac{1}{2}\sum_{i=1}^{t}\alpha_i \left\|x - P_{C_i}(x)\right\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j \left\|Ax - P_{Q_j}(Ax)\right\|^2,$$ (3)
where $\alpha_i > 0$ and $\beta_j > 0$ for all $i = 1, \ldots, t$ and $j = 1, \ldots, r$, respectively, and $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$. The gradient of $p$ is
$$\nabla p(x) = \sum_{i=1}^{t}\alpha_i \left(x - P_{C_i}(x)\right) + \sum_{j=1}^{r}\beta_j A^T\left(Ax - P_{Q_j}(Ax)\right).$$ (4)
With this, Censor et al. [1] proposed a projection algorithm as follows:
$$x^{k+1} = P_{\Omega}\left(x^k - s\,\nabla p(x^k)\right),$$ (5)
where $s$ is a positive number such that $0 < s < 2/L$ and $L$ is the Lipschitz constant of $\nabla p$.
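For concreteness, the proximity function (3), its gradient (4), and the iteration (5) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the projector lists proj_C and proj_Q, the weight lists alpha and beta, and the auxiliary projector proj_Omega are hypothetical names to be supplied by the caller.

    import numpy as np

    def proximity(x, A, proj_C, proj_Q, alpha, beta):
        # p(x) = 1/2 sum_i alpha_i ||x - P_Ci(x)||^2 + 1/2 sum_j beta_j ||Ax - P_Qj(Ax)||^2, cf. (3)
        Ax = A @ x
        val = 0.5 * sum(a * np.linalg.norm(x - P(x)) ** 2 for a, P in zip(alpha, proj_C))
        return val + 0.5 * sum(b * np.linalg.norm(Ax - P(Ax)) ** 2 for b, P in zip(beta, proj_Q))

    def grad_proximity(x, A, proj_C, proj_Q, alpha, beta):
        # grad p(x) = sum_i alpha_i (x - P_Ci(x)) + sum_j beta_j A^T (Ax - P_Qj(Ax)), cf. (4)
        Ax = A @ x
        g = sum(a * (x - P(x)) for a, P in zip(alpha, proj_C))
        return g + A.T @ sum(b * (Ax - P(Ax)) for b, P in zip(beta, proj_Q))

    def censor_iteration(x0, A, proj_C, proj_Q, proj_Omega, alpha, beta, s, iters=1000):
        # Iteration (5): x_{k+1} = P_Omega(x_k - s * grad p(x_k)), with 0 < s < 2/L.
        x = x0
        for _ in range(iters):
            x = proj_Omega(x - s * grad_proximity(x, A, proj_C, proj_Q, alpha, beta))
        return x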

Observe that in the algorithm (5) the determination of the stepsize $s$ depends on the operator (matrix) norm $\|A\|$ (or the largest eigenvalue of $A^T A$). This means that, in order to implement the algorithm (5), one first has to compute (or, at least, estimate) the operator norm of $A$, which is in general not an easy task in practice. To overcome this difficulty, Zhang et al. [4] and Zhao and Yang [5, 7] proposed self-adaptive methods where the stepsize has no connection with matrix norms. Their methods compute the stepsize by adopting different self-adaptive strategies.
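Since the Lipschitz constant only enters through $\rho(A^T A) = \|A\|_2^2$, one common workaround is to estimate this quantity numerically, for instance by power iteration. The following sketch is an illustration of that standard device, not a method from the paper:

    import numpy as np

    def spectral_radius_AtA(A, iters=100, seed=0):
        # Power iteration on A^T A; returns an estimate of rho(A^T A) = ||A||_2^2.
        v = np.random.default_rng(seed).standard_normal(A.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = A.T @ (A @ v)
            v = w / np.linalg.norm(w)
        return v @ (A.T @ (A @ v))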

Note that the algorithms proposed by Censor et al. [1], Zhang et al. [4], and Zhao and Yang [5, 7] involve the projection onto an auxiliary set $\Omega$. In fact, the set $\Omega$ is introduced just for the convenience of the convergence proof, and it may be difficult to determine in some cases. Considering this, Zhao and Yang [6] presented simple projection algorithms which do not need the projection onto an auxiliary set $\Omega$.

In this paper, we introduce two projection algorithms for solving the MSFP, inspired by Beck and Teboulle's iterative shrinkage-thresholding algorithm for linear inverse problems [12]. The first algorithm modifies Censor et al.'s method and does not need the projection onto an auxiliary set $\Omega$. The second algorithm is self-adaptive and adopts a backtracking rule to determine the stepsize. We first study the global rate of convergence of the two algorithms and prove that the sequences generated by the proposed algorithms converge to a solution of the MSFP. Some numerical results are presented, which illustrate the efficiency of the proposed algorithms.

2. Preliminaries

In this section, we review some definitions and lemmas which will be used in the main results.

The following lemma is not hard to prove (see [1, 13]).

Lemma 1. Let $p$ be given as in (3). Then (i) $p$ is convex and continuously differentiable; (ii) $\nabla p$ is Lipschitz continuous with $L(\nabla p) = \sum_{i=1}^{t}\alpha_i + \rho(A^T A)\sum_{j=1}^{r}\beta_j$ as the Lipschitz constant, where $\rho(A^T A)$ is the spectral radius of the matrix $A^T A$.

For any $L > 0$, consider the following quadratic approximation of $p$ at a given point $y$:
$$Q_L(x, y) := p(y) + \left\langle x - y, \nabla p(y)\right\rangle + \frac{L}{2}\|x - y\|^2,$$ (6)
which admits a unique minimizer
$$p_L(y) := \arg\min_{x \in \mathbb{R}^N} Q_L(x, y).$$ (7)
Simple algebra shows that (ignoring constant terms in $x$)
$$p_L(y) = y - \frac{1}{L}\nabla p(y).$$ (8)
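In code, (6) and (8) are one-liners. The sketch below only makes the notation concrete; it reuses the hypothetical proximity and grad_proximity helpers from Section 1 as the callables p and grad_p.

    import numpy as np

    def Q_approx(L, x, y, p, grad_p):
        # Q_L(x, y) = p(y) + <x - y, grad p(y)> + (L/2) ||x - y||^2, cf. (6)
        return p(y) + (x - y) @ grad_p(y) + 0.5 * L * np.linalg.norm(x - y) ** 2

    def p_L(L, y, grad_p):
        # Unique minimizer of Q_L(., y): the explicit gradient step (8).
        return y - grad_p(y) / L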

The following lemma is well known and a fundamental property of smooth functions in the class $C^{1,1}$; see, for example, [14, 15].

Lemma 2. Let $f: \mathbb{R}^N \to \mathbb{R}$ be a continuously differentiable function with Lipschitz continuous gradient and Lipschitz constant $L(f)$. Then, for any $x, y \in \mathbb{R}^N$,
$$f(x) \le f(y) + \left\langle x - y, \nabla f(y)\right\rangle + \frac{L(f)}{2}\|x - y\|^2.$$ (9)

We are now ready to state and prove the promised key result.

Lemma 3 (see [12]). Let $y \in \mathbb{R}^N$ and $L > 0$ be such that
$$p\left(p_L(y)\right) \le Q_L\left(p_L(y), y\right).$$ (10)
Then for any $x \in \mathbb{R}^N$,
$$p(x) - p\left(p_L(y)\right) \ge \frac{L}{2}\left\|p_L(y) - y\right\|^2 + L\left\langle y - x, p_L(y) - y\right\rangle.$$ (11)

Proof. From (10), we have
$$p(x) - p\left(p_L(y)\right) \ge p(x) - Q_L\left(p_L(y), y\right).$$ (12)
Now, from the fact that $p$ is convex, it follows that
$$p(x) \ge p(y) + \left\langle x - y, \nabla p(y)\right\rangle.$$ (13)
On the other hand, by the definition of $Q_L$, one has
$$Q_L\left(p_L(y), y\right) = p(y) + \left\langle p_L(y) - y, \nabla p(y)\right\rangle + \frac{L}{2}\left\|p_L(y) - y\right\|^2.$$ (14)
Therefore, using (12)–(14), it follows that
$$p(x) - p\left(p_L(y)\right) \ge \left\langle x - p_L(y), \nabla p(y)\right\rangle - \frac{L}{2}\left\|p_L(y) - y\right\|^2 = L\left\langle x - p_L(y), y - p_L(y)\right\rangle - \frac{L}{2}\left\|p_L(y) - y\right\|^2 = \frac{L}{2}\left\|p_L(y) - y\right\|^2 + L\left\langle y - x, p_L(y) - y\right\rangle,$$ (15)
where in the first equality above we used (8).

Remark 4. Note that, from Lemmas 1 and 2, it follows that if $L \ge L(\nabla p)$, then the condition (10) is always satisfied for every $y \in \mathbb{R}^N$.

3. Two Projection Algorithms

In this section, we propose two projection algorithms which do not need an auxiliary set $\Omega$; one modifies the algorithm introduced by Censor et al. [1], and the other is a self-adaptive algorithm which solves the MSFP without prior knowledge of the spectral radius of the matrix $A^T A$.

Algorithm 5. Let $L \ge L(\nabla p)$ be a fixed constant and take $\gamma \in (0, 1/L]$. Let $x^0 \in \mathbb{R}^N$ be arbitrary. For $k \ge 0$, compute
$$x^{k+1} = x^k - \gamma\,\nabla p(x^k).$$ (16)

Remark 6. Algorithm 5 is different from Censor et al.'s algorithm (5) in [1]: it does not need the projection onto an auxiliary simple nonempty closed convex set $\Omega$. In Algorithm 5, we take $\gamma \in (0, 1/L]$ instead of $s \in (0, 2/L)$ (as in Censor et al.'s algorithm), which is restricted to a smaller range.
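A minimal sketch of Algorithm 5, assuming the proximity and grad_proximity helpers sketched in Section 1 and the stopping rule $p(x^k) \le \epsilon$ used in the experiments of Section 4; the parameter names are illustrative.

    def algorithm5(x0, A, proj_C, proj_Q, alpha, beta, gamma, eps=1e-6, max_iter=100000):
        # Fixed-stepsize iteration (16): x_{k+1} = x_k - gamma * grad p(x_k),
        # with gamma in (0, 1/L] and L >= L(grad p) given by Lemma 1.
        x = x0
        for _ in range(max_iter):
            if proximity(x, A, proj_C, proj_Q, alpha, beta) <= eps:
                break
            x = x - gamma * grad_proximity(x, A, proj_C, proj_Q, alpha, beta)
        return x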

Algorithm 7. Given $L_0 \in (0, L(\nabla p)]$ and $\eta > 1$, let $x^0 \in \mathbb{R}^N$ be arbitrary. For $k \ge 0$, find the smallest nonnegative integer $i_k$ such that, with
$$L_k = \eta^{i_k} L_0 \quad\text{and}\quad x^{k+1} = x^k - \frac{1}{L_k}\nabla p(x^k),$$ (17)
the iterate $x^{k+1}$ satisfies
$$p(x^{k+1}) \le Q_{L_k}\left(x^{k+1}, x^k\right).$$ (18)
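The backtracking rule (17)-(18) admits an equally short sketch. For the gradient step (8), condition (18) reduces to $p(x^{k+1}) \le p(x^k) - \|\nabla p(x^k)\|^2 / (2L_k)$, which is what the inner loop tests below; again the helper names are the hypothetical ones from Section 1.

    def algorithm7(x0, A, proj_C, proj_Q, alpha, beta, L0=1.0, eta=2.0, eps=1e-6, max_iter=100000):
        # L0 should be a small positive constant (ideally L0 <= L(grad p)); eta > 1.
        x = x0
        data = (A, proj_C, proj_Q, alpha, beta)
        for _ in range(max_iter):
            px = proximity(x, *data)
            if px <= eps:
                break
            g = grad_proximity(x, *data)
            Lk = L0  # restart the trial constant from L0 at every outer iteration, cf. (17)
            while True:
                x_new = x - g / Lk
                # condition (18), using Q_{Lk}(x_new, x) = p(x) - ||g||^2 / (2 Lk)
                if proximity(x_new, *data) <= px - (g @ g) / (2.0 * Lk):
                    break
                Lk *= eta  # inflate by eta until (18) holds; Lemma 9 bounds Lk
            x = x_new
        return x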

Remark 8. Note that the sequence $\{p(x^k)\}$ of function values produced by Algorithms 5 and 7 is nonincreasing. Indeed, for every $k \ge 0$,
$$p(x^{k+1}) \le Q_{L_k}\left(x^{k+1}, x^k\right) \le Q_{L_k}\left(x^k, x^k\right) = p(x^k),$$ (19)
where the first inequality comes from Lemma 2 for Algorithm 5 and from (18) for Algorithm 7, and the second inequality follows from (7). The constant $L_k$ in (19) is either chosen by the backtracking rule (18) or set to $L_k \equiv 1/\gamma$, where $1/\gamma \ge L$ and $L$ is a given Lipschitz constant of $\nabla p$.

Lemma 9. There holds
$$\beta_l \le L_k \le \beta_u,$$ (20)
where $\beta_l = \beta_u = 1/\gamma$ in Algorithm 5 and $\beta_l = L_0$, $\beta_u = \eta L(\nabla p)$ in Algorithm 7.

Proof. It is easy to verify (20) for Algorithm 5. By $\eta^{i_k} \ge 1$ and the choice of $L_0$, we get $L_k \ge L_0$. From Lemma 2, it follows that inequality (18) is satisfied for any $L_k \ge L(\nabla p)$, where $L(\nabla p)$ is the Lipschitz constant of $\nabla p$. Hence, if $L_k = \eta^{i_k} L_0 > \eta L(\nabla p)$ with $i_k \ge 1$, then $\eta^{i_k - 1} L_0 > L(\nabla p)$ would already satisfy (18), contradicting the minimality of $i_k$; if $i_k = 0$, then $L_k = L_0 \le L(\nabla p)$. So, for Algorithm 7, one has $L_0 \le L_k \le \eta L(\nabla p)$ for every $k \ge 0$.

Remark 10. From Lemma 9, it follows that the backtracking rule (18) is well defined.

Remark 11. In the algorithm "ISTA with backtracking" proposed by Beck and Teboulle [12], they took $L_k = \eta^{i_k} L_{k-1}$, with $\eta > 1$ and $L_{k-1} \ge L_0$. It is obvious that $L_k$ increases with $k$. It is verified in numerical experiments that a small $L_k$ is more efficient than a larger one (see Table 1). So, in Algorithm 7, we take $L_k = \eta^{i_k} L_0$ in the backtracking rule, which is smaller than the one in the algorithm of Beck and Teboulle.

Table 1: Computational results for Example 13 with different algorithms.

Theorem 12. Let $\{x^k\}$ be a sequence generated by Algorithm 5 or Algorithm 7. Then $\{x^k\}$ converges to a solution of the MSFP (1), and furthermore, for any $x^* \in \Gamma$ and any $k \ge 1$ it holds that
$$p(x^k) \le \frac{\beta_u \left\|x^0 - x^*\right\|^2}{2k},$$ (21)
where $\beta_u$ is given in Lemma 9.

Proof. Invoking Lemma 3 with $x = x^*$, $y = x^n$, and $L = L_n$ (so that $p_{L_n}(x^n) = x^{n+1}$), we obtain
$$\frac{2}{L_n}\left(p(x^*) - p(x^{n+1})\right) \ge \left\|x^* - x^{n+1}\right\|^2 - \left\|x^* - x^n\right\|^2,$$ (22)
which, combined with (20) and the fact that $p(x^*) - p(x^{n+1}) \le 0$, yields
$$\frac{2}{\beta_u}\left(p(x^*) - p(x^{n+1})\right) \ge \left\|x^* - x^{n+1}\right\|^2 - \left\|x^* - x^n\right\|^2,$$ (23)
which implies
$$\left\|x^* - x^{n+1}\right\| \le \left\|x^* - x^n\right\|.$$ (24)
So $\{x^k\}$ is a Fejér monotone sequence with respect to $\Gamma$. Summing the inequality (23) over $n = 0, 1, \ldots, k-1$ gives
$$\frac{2}{\beta_u}\left(k\,p(x^*) - \sum_{n=0}^{k-1} p(x^{n+1})\right) \ge \left\|x^* - x^k\right\|^2 - \left\|x^* - x^0\right\|^2.$$ (25)
Invoking Lemma 3 one more time with $x = y = x^n$ and $L = L_n$ yields
$$\frac{2}{L_n}\left(p(x^n) - p(x^{n+1})\right) \ge \left\|x^n - x^{n+1}\right\|^2.$$ (26)
Since $L_n \ge \beta_l$ (see (20)) and $p(x^n) - p(x^{n+1}) \ge 0$ (see (19)), it follows that
$$\frac{2}{\beta_l}\left(p(x^n) - p(x^{n+1})\right) \ge \left\|x^n - x^{n+1}\right\|^2.$$ (27)
Multiplying the last inequality by $n$ and summing over $n = 0, 1, \ldots, k-1$, we obtain
$$\frac{2}{\beta_l}\sum_{n=0}^{k-1} n\left(p(x^n) - p(x^{n+1})\right) \ge \sum_{n=0}^{k-1} n\left\|x^n - x^{n+1}\right\|^2,$$ (28)
which simplifies to
$$\frac{2}{\beta_l}\left(-k\,p(x^k) + \sum_{n=0}^{k-1} p(x^{n+1})\right) \ge \sum_{n=0}^{k-1} n\left\|x^n - x^{n+1}\right\|^2.$$ (29)
Adding (25) and (29) times $\beta_l/\beta_u$, we get
$$\frac{2k}{\beta_u}\left(p(x^*) - p(x^k)\right) \ge \left\|x^* - x^k\right\|^2 - \left\|x^* - x^0\right\|^2 + \frac{\beta_l}{\beta_u}\sum_{n=0}^{k-1} n\left\|x^n - x^{n+1}\right\|^2,$$ (30)
and hence, since $p(x^*) = 0$ and the last sum is nonnegative, it follows that
$$\frac{2k}{\beta_u}\,p(x^k) \le \left\|x^0 - x^*\right\|^2,$$ (31)
which yields (21). In particular,
$$\lim_{k \to \infty} p(x^k) = 0.$$ (32)
Since $\{x^k\}$ is Fejér monotone, it is bounded. To prove the convergence of $\{x^k\}$, it only remains to show that all convergent subsequences have the same limit. Suppose, by contradiction, that two subsequences $\{x^{k_l}\}$ and $\{x^{m_l}\}$ converge to different limits $\hat{x}$ and $\bar{x}$, respectively ($\hat{x} \ne \bar{x}$). We first show that $\hat{x}$ is a solution of the MSFP. The continuity of $p$ and (32) imply that
$$p(\hat{x}) = \lim_{l \to \infty} p(x^{k_l}) = 0.$$
Therefore, $\|\hat{x} - P_{C_i}(\hat{x})\| = 0$ for $i = 1, \ldots, t$; that is, $\hat{x} \in C$, and $\|A\hat{x} - P_{Q_j}(A\hat{x})\| = 0$ for $j = 1, \ldots, r$; that is, $A\hat{x} \in Q$; that is, $\hat{x}$ is a solution of the MSFP. Similarly, we can show that $\bar{x}$ is a solution of the MSFP. Now, by the Fejér monotonicity of the sequence $\{x^k\}$, it follows that the sequence $\{\|x^k - \hat{x}\|\}$ is bounded and nonincreasing and thus has a limit $d$. However, we also have $\lim_{l \to \infty}\|x^{k_l} - \hat{x}\| = 0$ and $\lim_{l \to \infty}\|x^{m_l} - \hat{x}\| = \|\bar{x} - \hat{x}\| \ne 0$, so that $d = 0$ and $d = \|\bar{x} - \hat{x}\| \ne 0$, which is obviously a contradiction. Thus $\{x^k\}$ converges to a solution of the MSFP (1). The proof is completed.
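As a quick sanity check of Theorem 12 and Remark 8, one can run the fixed-stepsize iteration (16) on a tiny consistent instance ($t = r = 1$, with the unit ball and the nonnegative orthant as the box) and watch $p(x^k)$ decay monotonically, consistent with the $O(1/k)$ bound (21). The instance below is a hypothetical one of our choosing, reusing the earlier sketches.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((20, 10))
    proj_C = [lambda x: x / max(1.0, np.linalg.norm(x))]  # projection onto the unit ball
    proj_Q = [lambda y: np.maximum(y, 0.0)]               # projection onto the box y >= 0
    alpha, beta = [0.5], [0.5]                            # weights summing to 1
    L = sum(alpha) + spectral_radius_AtA(A) * sum(beta)   # Lipschitz constant from Lemma 1
    x = rng.standard_normal(10)                           # x = 0 solves this instance, so start elsewhere
    vals = []
    for k in range(1, 201):
        x = x - (1.0 / L) * grad_proximity(x, A, proj_C, proj_Q, alpha, beta)
        vals.append(proximity(x, A, proj_C, proj_Q, alpha, beta))
    # monotone decay of p(x^k), cf. Remark 8
    assert all(v2 <= v1 + 1e-12 for v1, v2 in zip(vals, vals[1:]))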

4. Numerical Tests

In order to verify the theoretical assertions, we present some numerical tests in this section. We apply Algorithms 5 and 7 to two test problems of [4] (Examples 13 and 14) and compare the numerical results of the two algorithms.

For convenience, in what follows we denote the vector with all elements 0 by $\mathbf{0}$ and the vector with all elements 1 by $\mathbf{e}$. In the numerical results listed in the following tables, "Iter." and "Sec." denote the number of iterations and the CPU time in seconds, respectively. For Algorithm 7, "InIt." denotes the total number of inner iterations needed to find a suitable $L_k$ in (18).

Example 13 (see [4]). Consider a split feasibility problem: find $x \in C$ such that $Ax \in Q$, where the sets $C$, $Q$ and the matrix $A$ are as given in [4]. The weights of the proximity function were set as in [4]. In the implementation, we took $p(x^k) < \epsilon$ as the stopping criterion, as in [4].

For Algorithm 5, we tested several values of $\gamma$, and the numerical results for different initial points $x^0$ are reported in Table 1. (Since the number of iterations for some values of $\gamma$ was larger than for the others, we only report the results for the better choices.) The parameters $L_0$ and $\eta$ of Algorithm 7 were fixed throughout. Table 1 shows that Algorithm 5 was efficient when a suitable $\gamma$ was chosen, while the number of iterations of Algorithm 5 was still larger than that of Algorithm 7.

Example 14 (see [4]). Consider the MSFP where $A$, $C_i$, and $Q_j$ are generated randomly:
$$C_i = \left\{x \in \mathbb{R}^N : \left\|x - d_i\right\| \le r_i\right\}, \quad i = 1, \ldots, t,$$
$$Q_j = \left\{y \in \mathbb{R}^M : l_j \le y \le u_j\right\}, \quad j = 1, \ldots, r,$$
where $d_i$ is the center of the ball $C_i$ and $r_i$ is its radius; $d_i$ and $r_i$ are both generated randomly. $l_j$ and $u_j$ are the lower and upper boundaries of the box $Q_j$ and are also generated randomly, satisfying $l_j \le u_j$, $j = 1, \ldots, r$. The weights of the proximity function were chosen as in [4]. The stopping criterion was $p(x^k) < \epsilon$ with the initial point $x^0 = \mathbf{0}$.
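For this example the projections have closed forms: projection onto a ball is a radial clip toward the center, and projection onto a box is componentwise clipping. A minimal sketch (the center/radius and bound vectors are inputs to be generated as above):

    import numpy as np

    def proj_ball(d, r):
        # Projector onto the ball {x : ||x - d|| <= r}.
        def P(x):
            v = x - d
            n = np.linalg.norm(v)
            return x if n <= r else d + (r / n) * v
        return P

    def proj_box(l, u):
        # Projector onto the box {y : l <= y <= u}, componentwise clipping.
        return lambda y: np.clip(y, l, u)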

We tested Algorithms 5 and 7 with different $t$ and $r$ and in Euclidean spaces of different dimensions. In Algorithm 5, since a smaller Lipschitz constant estimate is more efficient than a larger one, we took the largest admissible stepsize $\gamma = 1/L$ in the experiment; the parameters $L_0$ and $\eta$ of Algorithm 7 were the same as in Example 13. For comparison, the same random values were taken in each test. The numerical results are listed in Table 2, from which we can observe the efficiency of the self-adaptive Algorithm 7, in terms of both the number of iterations and the CPU time.

Table 2: Computational results for Example 14 with different dimensions and different numbers of sets $C_i$ and $Q_j$.

Conflict of Interests

There is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors express their thanks to Dr. Wenxing Zhang for his help with the numerical tests. This research was supported by the National Natural Science Foundation of China (no. 11201476) and by the Fundamental Research Funds for the Central Universities (no. 3122013D017).

References

  1. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, "The multiple-sets split feasibility problem and its applications for inverse problems," Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
  2. Z. Li, D. Han, and W. Zhang, "A self-adaptive projection-type method for nonlinear multiple-sets split feasibility problem," Inverse Problems in Science and Engineering, vol. 21, no. 1, pp. 155–170, 2013.
  3. H.-K. Xu, "A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem," Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
  4. W. Zhang, D. Han, and Z. Li, "A self-adaptive projection method for solving the multiple-sets split feasibility problem," Inverse Problems, vol. 25, no. 11, article 115001, 2009.
  5. J. Zhao and Q. Yang, "Self-adaptive projection methods for the multiple-sets split feasibility problem," Inverse Problems, vol. 27, no. 3, article 035009, 2011.
  6. J. Zhao and Q. Yang, "A simple projection method for solving the multiple-sets split feasibility problem," Inverse Problems in Science and Engineering, vol. 21, no. 3, pp. 537–546, 2013.
  7. J. Zhao and Q. Yang, "Several acceleration schemes for solving the multiple-sets split feasibility problem," Linear Algebra and Its Applications, vol. 437, no. 7, pp. 1648–1657, 2012.
  8. C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
  9. Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
  10. Q.-L. Dong, Y. Yao, and S. He, "Weak convergence theorems of the modified relaxed projection algorithms for the split feasibility problem in Hilbert spaces," Optimization Letters, 2013.
  11. S. He and W. Zhu, "A note on approximating curve with 1-norm regularization method for the split feasibility problem," Journal of Applied Mathematics, vol. 2012, Article ID 683890, 10 pages, 2012.
  12. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
  13. J.-P. Aubin, Optima and Equilibria: An Introduction to Nonlinear Analysis, vol. 140 of Graduate Texts in Mathematics, Springer, Berlin, Germany, 1993.
  14. D. P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, Mass, USA, 2nd edition, 1999.
  15. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, vol. 30 of Classics in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 2000.