Abstract

We present a projection algorithm that modifies the method proposed by Censor et al. (2005) and also introduce a self-adaptive algorithm for the multiple-sets split feasibility problem (MSFP). We first investigate the global rates of convergence and then prove that the sequences generated by the two algorithms converge to a solution of the MSFP. The efficiency of the proposed algorithms is illustrated by some numerical tests.

1. Introduction

The multiple-sets split feasibility problem (MSFP) is to find $x^* \in C := \bigcap_{i=1}^{t} C_i$ satisfying $Ax^* \in Q := \bigcap_{j=1}^{r} Q_j$, that is,

$$\text{find } x^* \in C \text{ such that } Ax^* \in Q, \tag{1}$$

where $A$ is an $M \times N$ real matrix and $C_i \subseteq \mathbb{R}^N$, $i = 1, \ldots, t$, and $Q_j \subseteq \mathbb{R}^M$, $j = 1, \ldots, r$, are nonempty closed convex sets. This problem was first proposed by Censor et al. in [1] and can serve as a model for many inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator's range. Many researchers have studied the MSFP and introduced various algorithms to solve it (see [1–7] and the references therein). If $t = r = 1$, then this problem reduces to the feasible case of the split feasibility problem (see, e.g., [8–11]), which is to find $x \in C$ with $Ax \in Q$.

Assume that the MSFP (1) is consistent; that is, its solution set, denoted by $\Gamma$, is nonempty. For convenience, Censor et al. [1] considered the following constrained MSFP:

$$\text{find } x^* \in \Omega \text{ such that } x^* \text{ solves } (1), \tag{2}$$

where $\Omega \subseteq \mathbb{R}^N$ is an auxiliary simple nonempty closed convex set containing at least one solution of the MSFP. For solving the constrained MSFP, Censor et al. [1] defined a proximity function to measure the distance of a point to all sets:

$$p(x) := \frac{1}{2}\sum_{i=1}^{t}\alpha_i \left\|x - P_{C_i}(x)\right\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j \left\|Ax - P_{Q_j}(Ax)\right\|^2, \tag{3}$$

where $\alpha_i > 0$ and $\beta_j > 0$ for all $i$ and $j$, respectively, and $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$. Its gradient is

$$\nabla p(x) = \sum_{i=1}^{t}\alpha_i \left(x - P_{C_i}(x)\right) + \sum_{j=1}^{r}\beta_j A^T\left(Ax - P_{Q_j}(Ax)\right). \tag{4}$$

Censor et al. [1] then proposed the following projection algorithm:

$$x^{k+1} = P_{\Omega}\left(x^k - s\nabla p\left(x^k\right)\right), \tag{5}$$

where $s$ is a positive number such that $0 < s < 2/L$ and $L$ is the Lipschitz constant of $\nabla p$.
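To make the quantities in (3)–(5) concrete, the following sketch evaluates $p$ and $\nabla p$ when the projections onto the sets $C_i$ and $Q_j$ are supplied as callables. It is a minimal illustration under our own naming (proximity, grad_proximity, proj_C, proj_Q), not code from [1].

import numpy as np

def proximity(x, A, proj_C, proj_Q, alpha, beta):
    # Proximity function p(x) of (3): weighted squared distances of x
    # to the sets C_i and of Ax to the sets Q_j.
    Ax = A @ x
    val = 0.5 * sum(a * np.linalg.norm(x - P(x)) ** 2
                    for a, P in zip(alpha, proj_C))
    val += 0.5 * sum(b * np.linalg.norm(Ax - P(Ax)) ** 2
                     for b, P in zip(beta, proj_Q))
    return val

def grad_proximity(x, A, proj_C, proj_Q, alpha, beta):
    # Gradient (4): sum_i alpha_i (x - P_{C_i}(x)) + sum_j beta_j A^T (Ax - P_{Q_j}(Ax)).
    Ax = A @ x
    g = sum(a * (x - P(x)) for a, P in zip(alpha, proj_C))
    g = g + A.T @ sum(b * (Ax - P(Ax)) for b, P in zip(beta, proj_Q))
    return g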

Observe that in the algorithm (5) the determination of the stepsize $s$ depends on the operator (matrix) norm $\|A\|$ (or the largest eigenvalue of $A^TA$). This means that, in order to implement the algorithm (5), one first has to compute (or at least estimate) the operator norm of $A$, which is in general not an easy task in practice. To overcome this difficulty, Zhang et al. [4] and Zhao and Yang [5, 7] proposed self-adaptive methods in which the stepsize has no connection with the matrix norm. Their methods compute the stepsize by adopting different self-adaptive strategies.
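As an illustration of what computing the constant in (5) entails, $\rho(A^TA) = \|A\|_2^2$ can be estimated by power iteration on $A^TA$. The sketch below is our own and does not appear in the cited papers.

import numpy as np

def spectral_radius_AtA(A, iters=1000, tol=1e-12):
    # Estimate rho(A^T A) = ||A||_2^2 by power iteration on A^T A.
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = A.T @ (A @ v)
        lam_new = np.linalg.norm(w)   # eigenvalue estimate, since ||v|| = 1
        if lam_new == 0.0:            # v happened to lie in the null space
            return 0.0
        v = w / lam_new
        if abs(lam_new - lam) <= tol * max(1.0, lam_new):
            return lam_new
        lam = lam_new
    return lam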

Note that the algorithms proposed by Censor et al. [1], Zhang et al. [4], and Zhao and Yang [5, 7] involve the projection onto an auxiliary set $\Omega$. In fact, the set $\Omega$ is introduced just for the convenience of the convergence proof, and it may be difficult to determine in some cases. Considering this, Zhao and Yang [6] presented simple projection algorithms which do not need the projection onto an auxiliary set $\Omega$.

In this paper, we introduce two projection algorithms for solving the MSFP, inspired by Beck and Teboulle's iterative shrinkage-thresholding algorithm for linear inverse problems [12]. The first algorithm modifies Censor et al.'s method so that no projection onto an auxiliary set $\Omega$ is needed. The second algorithm is self-adaptive and adopts a backtracking rule to determine the stepsize. We first study the global rates of convergence of the two algorithms and then prove that the sequences generated by the proposed algorithms converge to a solution of the MSFP. Some numerical results are presented, which illustrate the efficiency of the proposed algorithms.

2. Preliminaries

In this section, we review some definitions and lemmas which will be used in the main results.

The following lemma is not hard to prove (see [1, 13]).

Lemma 1. Let $p$ be given as in (3). Then (i) $p$ is convex and continuously differentiable; (ii) $\nabla p$ is Lipschitz continuous with

$$L(\nabla p) = \sum_{i=1}^{t}\alpha_i + \rho\left(A^T A\right)\sum_{j=1}^{r}\beta_j$$

as the Lipschitz constant, where $\rho(A^TA)$ is the spectral radius of the matrix $A^TA$.

For any $L > 0$, consider the following quadratic approximation of $p(x)$ at a given point $y$:

$$Q_L(x, y) := p(y) + \left\langle x - y, \nabla p(y)\right\rangle + \frac{L}{2}\left\|x - y\right\|^2, \tag{6}$$

which admits a unique minimizer

$$p_L(y) := \operatorname{argmin}\left\{Q_L(x, y) : x \in \mathbb{R}^N\right\}. \tag{7}$$

Simple algebra shows that (ignoring constant terms in $y$)

$$p_L(y) = y - \frac{1}{L}\nabla p(y). \tag{8}$$
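Indeed, completing the square in (6) gives

$$Q_L(x, y) = \frac{L}{2}\left\|x - \left(y - \frac{1}{L}\nabla p(y)\right)\right\|^2 + p(y) - \frac{1}{2L}\left\|\nabla p(y)\right\|^2,$$

and the last two terms do not depend on $x$, so the minimizer in (7) is exactly (8).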

The following lemma is well known as a fundamental property of smooth functions in the class $C^{1,1}$; see, for example, [14, 15].

Lemma 2. Let $f: \mathbb{R}^N \to \mathbb{R}$ be a continuously differentiable function with Lipschitz continuous gradient and Lipschitz constant $L(f)$. Then, for any $x, y \in \mathbb{R}^N$,

$$f(x) \le f(y) + \left\langle x - y, \nabla f(y)\right\rangle + \frac{L(f)}{2}\left\|x - y\right\|^2. \tag{9}$$

We are now ready to state and prove the key result of this section.

Lemma 3 (see [12]). Let $y \in \mathbb{R}^N$ and $L > 0$ be such that

$$p\left(p_L(y)\right) \le Q_L\left(p_L(y), y\right). \tag{10}$$

Then for any $x \in \mathbb{R}^N$,

$$p(x) - p\left(p_L(y)\right) \ge \frac{L}{2}\left\|p_L(y) - y\right\|^2 + L\left\langle y - x, p_L(y) - y\right\rangle. \tag{11}$$

Proof. From (10), we have

$$p(x) - p\left(p_L(y)\right) \ge p(x) - Q_L\left(p_L(y), y\right). \tag{12}$$

Now, from the fact that $p$ is convex, it follows that

$$p(x) \ge p(y) + \left\langle x - y, \nabla p(y)\right\rangle. \tag{13}$$

On the other hand, by the definition (6) of $Q_L$, one has

$$Q_L\left(p_L(y), y\right) = p(y) + \left\langle p_L(y) - y, \nabla p(y)\right\rangle + \frac{L}{2}\left\|p_L(y) - y\right\|^2. \tag{14}$$

Therefore, using (12)–(14), it follows that

$$\begin{aligned} p(x) - p\left(p_L(y)\right) &\ge \left\langle x - p_L(y), \nabla p(y)\right\rangle - \frac{L}{2}\left\|p_L(y) - y\right\|^2 \\ &= L\left\langle x - p_L(y), y - p_L(y)\right\rangle - \frac{L}{2}\left\|p_L(y) - y\right\|^2 \\ &= \frac{L}{2}\left\|p_L(y) - y\right\|^2 + L\left\langle y - x, p_L(y) - y\right\rangle, \end{aligned} \tag{15}$$

where in the first equality above we used (8).

Remark 4. Note that, from Lemmas 1 and 2, it follows that if $L \ge L(\nabla p)$, then the condition (10) is always satisfied for any $y \in \mathbb{R}^N$.

3. Two Projection Algorithms

In this section, we propose two projection algorithms which do not need an auxiliary set $\Omega$: one modifies the algorithm introduced by Censor et al. [1], and the other is a self-adaptive algorithm which solves the MSFP without prior knowledge of the spectral radius of the matrix $A^TA$.

Algorithm 5. Let $\gamma \ge 1$ be a fixed constant and take $L = \gamma L(\nabla p)$, where $L(\nabla p)$ is the Lipschitz constant of $\nabla p$ given in Lemma 1. Let $x^0 \in \mathbb{R}^N$ be arbitrary. For $k \ge 1$, compute

$$x^k = p_L\left(x^{k-1}\right) = x^{k-1} - \frac{1}{L}\nabla p\left(x^{k-1}\right). \tag{16}$$
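In code, Algorithm 5 is a plain gradient iteration on $p$ with the fixed stepsize $1/L$. The sketch below reuses the hypothetical helpers proximity and grad_proximity from Section 1 and is our own illustration, with an assumed stopping rule $p(x^k) < \varepsilon$; the stopping criteria actually used are described in Section 4.

import numpy as np

def algorithm5(x0, A, proj_C, proj_Q, alpha, beta, L, eps=1e-6, max_iter=100000):
    # Algorithm 5: x^k = x^{k-1} - (1/L) grad p(x^{k-1}) with fixed L = gamma * L(grad p).
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x = x - grad_proximity(x, A, proj_C, proj_Q, alpha, beta) / L
        if proximity(x, A, proj_C, proj_Q, alpha, beta) < eps:
            break
    return x, k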

Remark 6. Algorithm 5 is different from Censor et al.'s algorithm (5) in [1], and it does not need the projection onto an auxiliary simple nonempty closed convex set $\Omega$. In Algorithm 5, we take the stepsize $1/L = 1/(\gamma L(\nabla p))$ instead of $s \in (0, 2/L(\nabla p))$ (as in Censor et al.'s algorithm), which is restricted to a smaller range.

Algorithm 7. Given $L_0 > 0$ and $\eta > 1$, let $x^0 \in \mathbb{R}^N$ be arbitrary. For $k \ge 1$, find the smallest nonnegative integer $i_k$ such that, with

$$\bar{L} = \eta^{i_k} L_0, \tag{17}$$

the point $p_{\bar{L}}(x^{k-1})$ satisfies

$$p\left(p_{\bar{L}}\left(x^{k-1}\right)\right) \le Q_{\bar{L}}\left(p_{\bar{L}}\left(x^{k-1}\right), x^{k-1}\right). \tag{18}$$

Set $L_k = \bar{L}$ and compute

$$x^k = p_{L_k}\left(x^{k-1}\right) = x^{k-1} - \frac{1}{L_k}\nabla p\left(x^{k-1}\right).$$
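The backtracking step of Algorithm 7 needs only evaluations of $p$ and $\nabla p$; no spectral information about $A$ enters. Below is a sketch in the same hypothetical setting as before, in which the trial constant restarts from $L_0$ at every iteration in accordance with (17).

import numpy as np

def algorithm7(x0, A, proj_C, proj_Q, alpha, beta, L0=1.0, eta=2.0,
               eps=1e-6, max_iter=100000):
    # Algorithm 7: backtracking choice L_k = eta^{i_k} * L0 satisfying (18).
    x = np.asarray(x0, dtype=float)
    inner = 0  # total number of backtracking trials ("InIt." in Section 4)
    for k in range(1, max_iter + 1):
        px = proximity(x, A, proj_C, proj_Q, alpha, beta)
        gx = grad_proximity(x, A, proj_C, proj_Q, alpha, beta)
        Lbar = L0
        while True:  # smallest i_k >= 0 such that (18) holds for Lbar = eta^{i_k} * L0
            inner += 1
            z = x - gx / Lbar                        # z = p_Lbar(x), see (8)
            d = z - x
            Q = px + gx @ d + 0.5 * Lbar * (d @ d)   # Q_Lbar(z, x) as in (6)
            if proximity(z, A, proj_C, proj_Q, alpha, beta) <= Q:
                break
            Lbar *= eta
        x = z
        if proximity(x, A, proj_C, proj_Q, alpha, beta) < eps:
            break
    return x, k, inner

The inner loop always terminates, since by Lemma 2 the test (18) holds as soon as $\bar{L} \ge L(\nabla p)$ (cf. Remark 10 below).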

Remark 8. Note that the sequence $\{p(x^k)\}$ of function values produced by Algorithms 5 and 7 is nonincreasing. Indeed, for every $k \ge 1$,

$$p\left(x^k\right) \le Q_{L_k}\left(x^k, x^{k-1}\right) \le Q_{L_k}\left(x^{k-1}, x^{k-1}\right) = p\left(x^{k-1}\right), \tag{19}$$

where the first inequality comes from Lemma 2 for Algorithm 5 and from (18) for Algorithm 7, and the second inequality follows from (7). The constant $L_k$ in (19) is either chosen by the backtracking rule (18) or $L_k \equiv \gamma L(\nabla p)$, where $L(\nabla p)$ is a given Lipschitz constant of $\nabla p$.

Lemma 9. There holds

$$\beta_1 L(\nabla p) \le L_k \le \beta_2 L(\nabla p), \tag{20}$$

where $\beta_1 = \beta_2 = \gamma$ in Algorithm 5 and $\beta_1 = L_0/L(\nabla p)$, $\beta_2 = \eta$ in Algorithm 7.

Proof. It is easy to verify (20) for Algorithm 5, since there $L_k \equiv \gamma L(\nabla p)$. By $\eta > 1$ and the choice of $i_k \ge 0$ in (17), we get $L_k \ge L_0$. From Lemma 2, it follows that inequality (18) is satisfied for every $\bar{L} \ge L(\nabla p)$, where $L(\nabla p)$ is the Lipschitz constant of $\nabla p$; since $i_k$ is the smallest nonnegative integer for which (18) holds, for Algorithm 7 one has $L_k \le \eta L(\nabla p)$ for every $k \ge 1$.

Remark 10. From Lemma 9, it follows that the backtracking rule (18) is well defined.

Remark 11. In algorithm “ISTA with backtracking” proposed by Beck and Teboulle [12], they took , with and . It is obvious that increases with . It is verified that small is more efficient than a larger one in numerical experiments (see Table 1). So, in Algorithm 7, we take for backtracking rule which is smaller than the one in the algorithm of Beck and Teboulle.

Theorem 12. Let $\{x^k\}$ be a sequence generated by Algorithm 5 or Algorithm 7. Then $\{x^k\}$ converges to a solution of the MSFP (1), and furthermore, for any $x^* \in \Gamma$ and any $k \ge 1$, it holds that

$$p\left(x^k\right) - p\left(x^*\right) \le \frac{\beta_2 L(\nabla p)\left\|x^0 - x^*\right\|^2}{2k}, \tag{21}$$

where $\beta_2$ is given in Lemma 9.

Proof. Note first that, by Remark 4 for Algorithm 5 and by (18) for Algorithm 7, the condition (10) holds with $y = x^n$ and $L = L_{n+1}$, and that $p_{L_{n+1}}(x^n) = x^{n+1}$. Invoking Lemma 3 with $x = x^*$, $y = x^n$, and $L = L_{n+1}$, we obtain

$$\frac{2}{L_{n+1}}\left(p(x^*) - p(x^{n+1})\right) \ge \left\|x^{n+1} - x^n\right\|^2 + 2\left\langle x^n - x^*, x^{n+1} - x^n\right\rangle = \left\|x^* - x^{n+1}\right\|^2 - \left\|x^* - x^n\right\|^2, \tag{22}$$

which, combined with (20) and the fact that $p(x^*) = 0 \le p(x^{n+1})$, yields

$$\frac{2}{\beta_2 L(\nabla p)}\left(p(x^*) - p(x^{n+1})\right) \ge \left\|x^* - x^{n+1}\right\|^2 - \left\|x^* - x^n\right\|^2, \tag{23}$$

which implies

$$\left\|x^* - x^{n+1}\right\| \le \left\|x^* - x^n\right\|. \tag{24}$$

So $\{x^n\}$ is a Fejér monotone sequence with respect to $\Gamma$. Summing the inequality (23) over $n = 0, 1, \ldots, k-1$ gives

$$\frac{2}{\beta_2 L(\nabla p)}\left(k\,p(x^*) - \sum_{n=0}^{k-1}p(x^{n+1})\right) \ge \left\|x^* - x^k\right\|^2 - \left\|x^* - x^0\right\|^2. \tag{25}$$

Invoking Lemma 3 one more time with $x = y = x^n$ and $L = L_{n+1}$ yields

$$\frac{2}{L_{n+1}}\left(p(x^n) - p(x^{n+1})\right) \ge \left\|x^n - x^{n+1}\right\|^2. \tag{26}$$

Since $L_{n+1} \ge \beta_1 L(\nabla p)$ (see (20)) and $p(x^n) - p(x^{n+1}) \ge 0$ (see (19)), it follows that

$$\frac{2}{\beta_1 L(\nabla p)}\left(p(x^n) - p(x^{n+1})\right) \ge \left\|x^n - x^{n+1}\right\|^2. \tag{27}$$

Multiplying the last inequality by $n$ and summing over $n = 0, 1, \ldots, k-1$, we obtain

$$\frac{2}{\beta_1 L(\nabla p)}\sum_{n=0}^{k-1}\left(n\,p(x^n) - (n+1)\,p(x^{n+1}) + p(x^{n+1})\right) \ge \sum_{n=0}^{k-1} n\left\|x^n - x^{n+1}\right\|^2, \tag{28}$$

which simplifies to

$$\frac{2}{\beta_1 L(\nabla p)}\left(-k\,p(x^k) + \sum_{n=0}^{k-1}p(x^{n+1})\right) \ge \sum_{n=0}^{k-1} n\left\|x^n - x^{n+1}\right\|^2. \tag{29}$$

Adding (25) and (29) times $\beta_1/\beta_2$, we get

$$\frac{2k}{\beta_2 L(\nabla p)}\left(p(x^*) - p(x^k)\right) \ge \left\|x^* - x^k\right\|^2 - \left\|x^* - x^0\right\|^2 + \frac{\beta_1}{\beta_2}\sum_{n=0}^{k-1} n\left\|x^n - x^{n+1}\right\|^2 \ge -\left\|x^* - x^0\right\|^2, \tag{30}$$

and hence, it follows that

$$p(x^k) - p(x^*) \le \frac{\beta_2 L(\nabla p)\left\|x^0 - x^*\right\|^2}{2k}, \tag{31}$$

which yields

$$\lim_{k\to\infty} p(x^k) = p(x^*) = 0. \tag{32}$$

Since $\{x^k\}$ is Fejér monotone, it is bounded. To prove the convergence of $\{x^k\}$, it only remains to show that all convergent subsequences have the same limit. Suppose, by contradiction, that two subsequences $\{x^{k_j}\}$ and $\{x^{l_j}\}$ converge to different limits $\hat{x}_1$ and $\hat{x}_2$, respectively ($\hat{x}_1 \ne \hat{x}_2$). We first show that $\hat{x}_1$ is a solution of the MSFP. The continuity of $p$ and (32) imply that

$$p(\hat{x}_1) = \lim_{j\to\infty} p\left(x^{k_j}\right) = 0.$$

Therefore, $\hat{x}_1 = P_{C_i}(\hat{x}_1)$ for all $i$, that is, $\hat{x}_1 \in \bigcap_{i=1}^{t} C_i$, and $A\hat{x}_1 = P_{Q_j}(A\hat{x}_1)$ for all $j$, that is, $A\hat{x}_1 \in \bigcap_{j=1}^{r} Q_j$; hence $\hat{x}_1$ is a solution of the MSFP. Similarly, we can show that $\hat{x}_2$ is a solution of the MSFP. Now, by the Fejér monotonicity of $\{x^k\}$ with respect to $\Gamma$, the sequence $\{\|x^k - \hat{x}_1\|\}$ is bounded and nonincreasing and thus has a limit $l$. However, we also have $\lim_{j\to\infty}\|x^{k_j} - \hat{x}_1\| = 0$ and $\lim_{j\to\infty}\|x^{l_j} - \hat{x}_1\| = \|\hat{x}_2 - \hat{x}_1\| > 0$, so that $l = 0$ and $l = \|\hat{x}_2 - \hat{x}_1\| > 0$, which is obviously a contradiction. Thus $\{x^k\}$ converges to a solution of the MSFP (1). The proof is completed.

4. Numerical Tests

In order to verify the theoretical assertions, we present some numerical tests in this section. We apply Algorithms 5 and 7 to solve two test problems from [4] (Examples 13 and 14) and compare the numerical results of the two algorithms.

For convenience, in what follows we denote by $\mathbf{0}$ the vector with all elements 0 and by $\mathbf{e}$ the vector with all elements 1. In the numerical results listed in the following tables, “Iter.” and “Sec.” denote the number of iterations and the CPU time in seconds, respectively. For Algorithm 7, “InIt.” denotes the total number of inner iterations used to find a suitable $\bar{L}$ in (18).

Example 13 (see [4]). Consider a split feasibility problem of finding $x \in C$ such that $Ax \in Q$, where the matrix $A$ and the sets $C$ and $Q$ are those of the test problem in [4]. The weights of $p$ were set as in [4]. In the implementation, we took the same stopping criterion as in [4].

For Algorithm 5, we tested several values of the constant $\gamma$, and the numerical results are reported in Table 1 for different initial points. (Since the number of iterations for larger values of $\gamma$ was greater than that for smaller ones, we only report the results for the smaller values.) Fixed values of $L_0$ and $\eta$ were taken for Algorithm 7. Table 1 shows that Algorithm 5 was efficient when a suitable $\gamma$ was chosen, while even for the best choice of $\gamma$ in the current example, the number of iterations of Algorithm 5 was still larger than that of Algorithm 7.
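As a usage illustration only (the data of the test problem above are given by reference to [4]), the following hypothetical toy instance shows the call pattern of the two algorithms with the sketches given earlier; all values here are made up for illustration.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
proj_C = [lambda x: np.clip(x, -1.0, 1.0)]    # C: the box [-1, 1]^3
proj_Q = [lambda y: np.clip(y, -2.0, 2.0)]    # Q: the box [-2, 2]^4
alpha, beta = [0.5], [0.5]                    # weights summing to 1
L = 0.5 + 0.5 * spectral_radius_AtA(A)        # L(grad p) from Lemma 1 with t = r = 1
x5, k5 = algorithm5(np.zeros(3), A, proj_C, proj_Q, alpha, beta, L)
x7, k7, init = algorithm7(np.zeros(3), A, proj_C, proj_Q, alpha, beta)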

Example 14 (see [4]). Consider the MSFP where the sets $C_i$ and $Q_j$ are balls and boxes, respectively, generated randomly:

$$C_i = \left\{x \in \mathbb{R}^N : \left\|x - c_i\right\| \le r_i\right\}, \qquad Q_j = \left\{y \in \mathbb{R}^M : a_j \le y \le b_j\right\},$$

where $c_i$ is the center of the ball $C_i$, $i = 1, \ldots, t$, and $r_i$ is the radius; $c_i$ and $r_i$ are both generated randomly. Moreover, $a_j$ and $b_j$, $j = 1, \ldots, r$, are the lower and upper boundaries of the box $Q_j$ and are also generated randomly, satisfying $a_j \le b_j$. The weights of $p$ were taken as in [4]. The stopping criterion was the same as in Example 13, with the initial point $x^0 = \mathbf{0}$.
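The projections needed in Example 14 have closed forms: for the ball, $P_{C_i}(x) = c_i + r_i(x - c_i)/\|x - c_i\|$ whenever $\|x - c_i\| > r_i$ (and $P_{C_i}(x) = x$ otherwise), and for the box, componentwise clipping. Below is a sketch of this set-up in the same hypothetical framework as before; the random ranges are illustrative and not the ones used in [4].

import numpy as np

def make_ball_projection(c, r):
    # P_C for the ball C = {x : ||x - c|| <= r}.
    def proj(x):
        d = np.linalg.norm(x - c)
        return x if d <= r else c + (r / d) * (x - c)
    return proj

def make_box_projection(lo, hi):
    # P_Q for the box Q = {y : lo <= y <= hi} (componentwise).
    return lambda y: np.clip(y, lo, hi)

def random_msfp(N, M, t, r_sets, rng):
    # Random balls C_i in R^N and boxes Q_j in R^M with equal weights.
    A = rng.standard_normal((M, N))
    proj_C = [make_ball_projection(rng.uniform(-1, 1, N), rng.uniform(1, 2))
              for _ in range(t)]
    lows = [rng.uniform(-2, -1, M) for _ in range(r_sets)]
    proj_Q = [make_box_projection(lo, lo + rng.uniform(1, 2, M)) for lo in lows]
    w = 1.0 / (t + r_sets)  # weights alpha_i, beta_j summing to 1
    return A, proj_C, proj_Q, [w] * t, [w] * r_sets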

We tested Algorithms 5 and 7 with different $t$ and $r$ in Euclidean spaces of different dimensions. In Algorithm 5, since a smaller $L$ is more efficient than a larger one, we took $\gamma = 1$, that is, $L = L(\nabla p)$, in the experiment. The parameters $L_0$ and $\eta$ of Algorithm 7 were taken as in Example 13. For comparison, the same random values were used in each test. The numerical results are listed in Table 2, from which we can observe the efficiency of the self-adaptive Algorithm 7, both in terms of the number of iterations and in terms of the CPU time.

Conflict of Interests

There is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors express their thanks to Dr. Wenxing Zhang for his help with the numerical tests. This research was supported by the National Natural Science Foundation of China (no. 11201476) and the Fundamental Research Funds for the Central Universities (no. 3122013D017).