Research Article | Open Access

# Self-Adaptive and Relaxed Self-Adaptive Projection Methods for Solving the Multiple-Set Split Feasibility Problem

**Academic Editor:** Yongfu Su

#### Abstract

Given nonempty closed convex subsets $C_i \subseteq R^N$, $i = 1, 2, \ldots, t$, and nonempty closed convex subsets $Q_j \subseteq R^M$, $j = 1, 2, \ldots, r$, in the $N$- and $M$-dimensional Euclidean spaces, respectively, the multiple-set split feasibility problem (MSSFP) proposed by Censor is to find a vector $x \in \bigcap_{i=1}^{t} C_i$ such that $Ax \in \bigcap_{j=1}^{r} Q_j$, where $A$ is a given $M \times N$ real matrix. It serves as a model for many inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator’s range. The MSSFP has a variety of specific applications in the real world, such as medical care, image reconstruction, and signal processing. In this paper, for the MSSFP, we first propose a new self-adaptive projection method by adopting Armijo-like searches, which does not require estimating the Lipschitz constant or calculating the largest eigenvalue of the matrix $A^T A$; besides, it makes a sufficient decrease of the objective function at each iteration. Then we introduce a relaxed self-adaptive projection method by using projections onto half-spaces instead of those onto convex sets. Obviously, the latter are easy to implement. Global convergence for both methods is proved under a suitable condition.

#### 1. Introduction

The multiple-sets split feasibility problem (MSSFP) is to find a point in the intersection of a family of closed convex sets in one space such that its image under a linear transformation lies in the intersection of another family of closed convex sets in the image space. It is formulated as follows:
$$\text{find } x^* \in C := \bigcap_{i=1}^{t} C_i \quad \text{such that} \quad Ax^* \in Q := \bigcap_{j=1}^{r} Q_j, \tag{1.1}$$
where $C_i$, $i = 1, 2, \ldots, t$, are nonempty closed convex sets in the $N$-dimensional Euclidean space $R^N$, $Q_j$, $j = 1, 2, \ldots, r$, are nonempty closed convex sets in the $M$-dimensional Euclidean space $R^M$, and $A$ is an $M \times N$ real matrix. In particular, the problem with only a single set $C$ in $R^N$ and a single set $Q$ in $R^M$ was introduced by Censor and Elfving [1] and was called the split feasibility problem (SFP).

The MSSFP (1.1), proposed in [2], arises in signal processing, image reconstruction, and related fields. Various algorithms have been proposed to solve the MSSFP (1.1); see [2–5] and the references therein.

In [2], Censor and Elfving handled the MSSFP (1.1), for both the consistent and the inconsistent cases, by minimizing the proximity function
$$p(x) = \frac{1}{2}\sum_{i=1}^{t}\alpha_i\big\|x - P_{C_i}(x)\big\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j\big\|Ax - P_{Q_j}(Ax)\big\|^2, \tag{1.2}$$
where $\alpha_i > 0$, $\beta_j > 0$ for all $i$ and $j$.

For convenience, they considered an additional closed convex set $\Omega \subseteq R^N$. Their algorithm for the MSSFP (1.1) involves orthogonal projections onto $\Omega$, $C_i$, and $Q_j$, which were assumed to be easily calculated, and has the following iterative step:
$$x^{k+1} = P_\Omega\bigg(x^k - s\Big(\sum_{i=1}^{t}\alpha_i\big(x^k - P_{C_i}(x^k)\big) + \sum_{j=1}^{r}\beta_j A^T\big(Ax^k - P_{Q_j}(Ax^k)\big)\Big)\bigg), \tag{1.3}$$
where $0 < s < 2/L$, $L$ is the Lipschitz constant of $\nabla p$, the gradient of the proximity function defined by (1.2), and $\rho(A^T A)$ is the spectral radius of the matrix $A^T A$. For any starting vector $x^0 \in R^N$, the algorithm converges to a solution of the MSSFP (1.1) whenever the MSSFP (1.1) has a solution. In the inconsistent case, it finds a point “closest” to all sets.

This algorithm uses a fixed stepsize related to the Lipschitz constant $L$, which may be hard to compute. On the other hand, even if the Lipschitz constant $L$ is known, a fixed stepsize may lead to slow convergence.
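To make the fixed-stepsize iteration concrete, the following is a minimal NumPy sketch of an iteration of type (1.3) on a toy instance (two balls as the sets $C_i$, one box as $Q_1$, $A = I$, and $\Omega = R^2$ so that $P_\Omega$ is the identity); the set choices, weights, and function names are our illustrative assumptions, not data from the paper.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Orthogonal projection onto the Euclidean ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_box(y, lo, hi):
    """Orthogonal projection onto the box [lo, hi] (componentwise clip)."""
    return np.clip(y, lo, hi)

# Toy instance: C1, C2 are balls in R^2, Q1 is a box in R^2, A = I.
A = np.eye(2)
C = [(np.array([0.0, 0.0]), 2.0), (np.array([1.0, 0.0]), 2.0)]
Q = [(np.zeros(2), np.ones(2))]
alpha, beta = [0.25, 0.25], [0.5]     # positive weights

def grad_p(x):
    """Gradient of the proximity function (1.2) at x."""
    g = sum(a * (x - proj_ball(x, c0, r)) for a, (c0, r) in zip(alpha, C))
    Ax = A @ x
    g = g + A.T @ sum(b * (Ax - proj_box(Ax, lo, hi)) for b, (lo, hi) in zip(beta, Q))
    return g

# Fixed stepsize s in (0, 2/L) with L = sum(alpha) + rho(A^T A) * sum(beta).
L = sum(alpha) + np.linalg.eigvalsh(A.T @ A).max() * sum(beta)
x = np.array([5.0, 5.0])
for _ in range(200):
    x = x - (1.0 / L) * grad_p(x)     # P_Omega omitted since Omega = R^2
```

After 200 iterations the iterate lies (numerically) in both balls, and its image lies in the box, illustrating convergence to a feasible point in the consistent case.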

In 2005, Qu and Xiu [6] modified the CQ algorithm [7] and the relaxed CQ algorithm [8] by adopting Armijo-like searches to solve the SFP, where the second algorithm uses orthogonal projections onto half-spaces instead of projections onto the original convex sets, just as in Yang’s relaxed CQ algorithm [8]. This can greatly reduce the work of computing projections, since projections onto half-spaces can be calculated directly.

Motivated by Qu and Xiu’s idea, Zhao and Yang [4] introduced a self-adaptive projection method by adopting Armijo-like searches to solve the MSSFP (1.1) and proposed a relaxed self-adaptive projection method that uses orthogonal projections onto half-spaces instead of the projections onto the original convex sets, which is more practical. However, like the iteration (1.3), Zhao and Yang’s algorithm involves an additional projection $P_\Omega$. Moreover, although the MSSFP (1.1) includes the SFP as a special case, Zhao and Yang’s algorithm does not reduce to Qu and Xiu’s modifications of the CQ algorithm [6].

In this paper, we first propose a self-adaptive method by adopting Armijo-like searches to solve the MSSFP (1.1) without the additional projection $P_\Omega$; we then introduce a relaxed self-adaptive projection method which involves only orthogonal projections onto half-spaces, so that the algorithm is easily implementable. The methods need not estimate the Lipschitz constant and make a sufficient decrease of the objective function at each iteration; besides, these projection algorithms reduce to the modifications of the CQ algorithm [6] when the MSSFP (1.1) reduces to the SFP. We also show convergence of the algorithms under mild conditions.

#### 2. Preliminaries

In this section, we give some definitions and basic results that will be used in this paper.

*Definition 2.1. *An operator $F$ from a set $\Omega \subseteq R^N$ into $R^N$ is called (a) monotone on $\Omega$, if $\langle F(x) - F(y), x - y \rangle \ge 0$ for all $x, y \in \Omega$;
(b) cocoercive on $\Omega$ with constant $\nu > 0$, if $\langle F(x) - F(y), x - y \rangle \ge \nu \|F(x) - F(y)\|^2$ for all $x, y \in \Omega$;
(c) Lipschitz continuous on $\Omega$ with constant $L > 0$, if $\|F(x) - F(y)\| \le L\|x - y\|$ for all $x, y \in \Omega$.

In particular, if $F$ is Lipschitz continuous with constant $L = 1$, $F$ is said to be nonexpansive. It is easily seen from the definitions that cocoercive mappings are monotone.

*Definition 2.2. *A function $f : R^N \to R$, differentiable on a nonempty convex set $\Omega \subseteq R^N$, is pseudoconvex if for every $x, y \in \Omega$, the condition $\langle \nabla f(x), y - x \rangle \ge 0$ implies that $f(y) \ge f(x)$.

It is known that differentiable convex functions are pseudoconvex (see [9]).

For a given nonempty closed convex set $\Omega$ in $R^N$, the orthogonal projection from $R^N$ onto $\Omega$ is defined by
$$P_\Omega(x) = \operatorname{argmin}\big\{\|x - y\| : y \in \Omega\big\}, \quad x \in R^N.$$

Lemma 2.3 (see [10]). *Let $\Omega$ be a nonempty closed convex subset in $R^N$. Then, for any $x, y \in R^N$ and any $z \in \Omega$:*
*(1) $\langle P_\Omega(x) - x, z - P_\Omega(x) \rangle \ge 0$;*
*(2) $\langle P_\Omega(x) - P_\Omega(y), x - y \rangle \ge \|P_\Omega(x) - P_\Omega(y)\|^2$;*
*(3) $\|P_\Omega(x) - z\|^2 \le \|x - z\|^2 - \|P_\Omega(x) - x\|^2$.*
From Lemma 2.3(2), one sees that the orthogonal projection mapping is cocoercive with modulus 1, and hence monotone and nonexpansive. Let $F$ be a mapping from $R^N$ into $R^N$. For any $x \in R^N$ and $\tau > 0$, define $x(\tau) = P_\Omega(x - \tau F(x))$ and $e(x, \tau) = x - x(\tau)$.
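The properties in Lemma 2.3 can be checked numerically for a simple closed convex set; the sketch below uses a box in $R^3$, whose projection is a componentwise clip (the set, the sample points, and all names are our illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo, hi):
    """Orthogonal projection onto the box [lo, hi], a simple closed convex set."""
    return np.clip(x, lo, hi)

lo, hi = -np.ones(3), np.ones(3)
x = rng.normal(size=3) * 3
y = rng.normal(size=3) * 3
z = proj_box(rng.normal(size=3) * 3, lo, hi)   # an arbitrary point of the set
px, py = proj_box(x, lo, hi), proj_box(y, lo, hi)

# Lemma 2.3(1): <P(x) - x, z - P(x)> >= 0 for every z in the set.
prop1 = np.dot(px - x, z - px)
# Lemma 2.3(2): <P(x) - P(y), x - y> >= ||P(x) - P(y)||^2 (cocoercive, modulus 1).
prop2 = np.dot(px - py, x - y) - np.linalg.norm(px - py) ** 2
# Lemma 2.3(3): ||P(x) - z||^2 <= ||x - z||^2 - ||P(x) - x||^2.
prop3 = (np.linalg.norm(x - z) ** 2 - np.linalg.norm(px - x) ** 2
         - np.linalg.norm(px - z) ** 2)
```

All three quantities are nonnegative for any choice of the sample points, as the lemma asserts.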

Lemma 2.4 (see [6]). *Let $F$ be a mapping from $R^N$ into $R^N$. For any $x \in R^N$ and $\tau > 0$, one has $\min\{1, \tau\}\,\|e(x, 1)\| \le \|e(x, \tau)\| \le \max\{1, \tau\}\,\|e(x, 1)\|$.*

Lemma 2.5 (see [5, 9]). *Suppose $f : R^N \to R$ is a convex function. Then it is subdifferentiable everywhere and its subdifferentials are uniformly bounded on any bounded subset of $R^N$.*

#### 3. Self-Adaptive Projection Iterative Scheme and Convergence Results

It is easily seen that if the solution set of the MSSFP (1.1) is nonempty, then the MSSFP (1.1) is equivalent to the minimization problem of $f(x)$ over all $x \in C := \bigcap_{i=1}^{t} C_i$, where $f$ is defined by
$$f(x) = \frac{1}{2}\sum_{i=1}^{t}\alpha_i\big\|x - P_{C_i}(x)\big\|^2 + \frac{1}{2}\sum_{j=1}^{r}\beta_j\big\|Ax - P_{Q_j}(Ax)\big\|^2, \tag{3.1}$$
where $\alpha_i > 0$, $\beta_j > 0$. Note that the gradient of $f$ is
$$\nabla f(x) = \sum_{i=1}^{t}\alpha_i\big(x - P_{C_i}(x)\big) + \sum_{j=1}^{r}\beta_j A^T\big(Ax - P_{Q_j}(Ax)\big). \tag{3.2}$$

Consider the following constrained minimization problem:
$$\min\ f(x) \quad \text{subject to } x \in C. \tag{3.3}$$

We say that a point $x^*$ is a stationary point of the problem (3.3) if it satisfies the condition
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in C. \tag{3.4}$$

This optimization problem was proposed by Xu [3] for solving the MSSFP (1.1); the gradient $\nabla f$ defined by (3.2) is $L$-Lipschitzian with $L = \sum_{i=1}^{t}\alpha_i + \rho(A^T A)\sum_{j=1}^{r}\beta_j$ and is $\frac{1}{L}$-ism.

*Algorithm 3.1. *Given constants $\gamma > 0$, $l \in (0, 1)$, $\sigma \in (0, 1)$. Let $x^0 \in R^N$ be arbitrary. For $k = 0, 1, 2, \ldots$, calculate
$$x^{k+1} = P_{C_{i(k)}}\big(x^k - \tau_k \nabla f(x^k)\big), \tag{3.5}$$
where $i(k) = (k \bmod t) + 1$ and the mod function takes values in $\{1, 2, \ldots, t\}$, $\tau_k = \gamma l^{m_k}$, and $m_k$ is the smallest nonnegative integer such that
$$f\big(x^{k+1}\big) \le f\big(x^k\big) - \sigma\big\langle \nabla f(x^k),\, x^k - x^{k+1} \big\rangle. \tag{3.6}$$
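As a concrete illustration of an Armijo-like projected-gradient step, here is a minimal NumPy sketch for the single-set case $t = r = 1$ (the SFP, to which the method is said to reduce); the sets (a ball $C$ and a box $Q$), the matrix $A$, and all parameter values and names are our assumptions, not the paper’s data.

```python
import numpy as np

def proj_ball(p, center, radius):
    """Orthogonal projection onto the ball B(center, radius)."""
    d = p - center
    n = np.linalg.norm(d)
    return p if n <= radius else center + radius * d / n

def proj_box(y, lo, hi):
    """Orthogonal projection onto the box [lo, hi]."""
    return np.clip(y, lo, hi)

A = np.array([[1.0, 0.5], [0.0, 1.0]])
center, radius = np.zeros(2), 1.5          # C = ball(0, 1.5)
lo, hi = np.zeros(2), np.ones(2)           # Q = [0, 1]^2

def f(x):
    """Proximity objective 0.5 * ||Ax - P_Q(Ax)||^2 for the single-set case."""
    Ax = A @ x
    return 0.5 * np.linalg.norm(Ax - proj_box(Ax, lo, hi)) ** 2

def grad_f(x):
    Ax = A @ x
    return A.T @ (Ax - proj_box(Ax, lo, hi))

def armijo_step(x, gamma=1.0, l=0.5, sigma=0.3, max_m=30):
    """One projected-gradient step: shrink tau = gamma * l**m until the
    objective decreases sufficiently (Armijo-like search)."""
    g = grad_f(x)
    for m in range(max_m):
        tau = gamma * l ** m
        x_new = proj_ball(x - tau * g, center, radius)
        if f(x_new) <= f(x) - sigma * np.dot(g, x - x_new):
            return x_new
    return x

x = np.array([3.0, -2.0])
for _ in range(100):
    x = armijo_step(x)
```

The search only evaluates $f$ and its gradient, so no eigenvalue of $A^T A$ is ever needed; the iterate stays in $C$ by construction and drives the objective toward zero.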

Algorithm 3.1 need not estimate the largest eigenvalue of the matrix $A^T A$, and the stepsize $\tau_k$ is chosen so that the objective function has a sufficient decrease. It is in fact a special case of the standard gradient projection method with the Armijo-like search rule for solving the constrained optimization problem
$$\min\ g(x) \quad \text{subject to } x \in X, \tag{3.7}$$
where $X \subseteq R^N$ is a nonempty closed convex set and the function $g$ is continuously differentiable on $X$. The following convergence result then ensures the convergence of Algorithm 3.1.

Lemma 3.2 (see [6]). *Let be pseudoconvex and be an infinite sequence generated by the gradient projection method with the Armijo-like searches. Then, the following conclusions hold:*(1)*;*(2)*, which denotes the set of the optimal solutions to (3.7), if and only if there exists at least one limit point of . In this case, converges to a solution of (3.7).*

Since the function $f$ is convex and continuously differentiable, it is pseudoconvex. Then, by Lemma 3.2, one immediately obtains the following convergence result.

Theorem 3.3. *Let be a sequence generated by Algorithm 3.1, then the following conclusions hold:*(1)* is bounded if and only if the solution set of (3.3) is nonempty. In such a case, must converge to a solution of (3.3).*(2)* is bounded and if and only if the MSSFP (1.1) is solvable. In such a case, must converge to a solution of MSSFP (1.1).*

However, in Algorithm 3.1 it may cost a large amount of work to compute the orthogonal projections onto $C_i$ and $Q_j$; as in Censor’s method, these projections are assumed to be easily calculated. In some cases, however, it is difficult or too expensive to compute the orthogonal projections exactly, and the efficiency of these methods is then deeply affected. In what follows, we assume that the projections are not easily calculated and present a relaxed self-adaptive projection method. More precisely, the convex sets $C_i$ and $Q_j$ satisfy the following assumptions.(1)The sets $C_i$, $i = 1, 2, \ldots, t$, are given by
$$C_i = \big\{x \in R^N : c_i(x) \le 0\big\},$$
where $c_i : R^N \to R$ are convex functions. The sets $Q_j$, $j = 1, 2, \ldots, r$, are given by
$$Q_j = \big\{y \in R^M : q_j(y) \le 0\big\},$$
where $q_j : R^M \to R$ are convex functions.(2)For any $x \in R^N$, at least one subgradient $\xi \in \partial c_i(x)$ can be calculated, where $\partial c_i(x)$ is a generalized gradient (subdifferential) of $c_i$ at $x$ and it is defined as follows:
$$\partial c_i(x) = \big\{\xi \in R^N : c_i(z) \ge c_i(x) + \langle \xi, z - x \rangle \ \text{for all } z \in R^N\big\}.$$

For any $y \in R^M$, at least one subgradient $\eta \in \partial q_j(y)$ can be calculated, where $\partial q_j(y)$ is a generalized gradient (subdifferential) of $q_j$ at $y$ and it is defined as follows:
$$\partial q_j(y) = \big\{\eta \in R^M : q_j(u) \ge q_j(y) + \langle \eta, u - y \rangle \ \text{for all } u \in R^M\big\}.$$

In the $k$th iteration, let
$$H_{c_i}^k = \big\{x \in R^N : c_i(x^k) + \langle \xi_i^k, x - x^k \rangle \le 0\big\},$$
where $\xi_i^k$ is an element in $\partial c_i(x^k)$.

Consider
$$H_{q_j}^k = \big\{y \in R^M : q_j(Ax^k) + \langle \eta_j^k, y - Ax^k \rangle \le 0\big\},$$
where $\eta_j^k$ is an element in $\partial q_j(Ax^k)$.

By the definition of the subgradient, it is clear that $C_i \subseteq H_{c_i}^k$ and $Q_j \subseteq H_{q_j}^k$, and the orthogonal projections onto $H_{c_i}^k$ and $H_{q_j}^k$ can be calculated directly [4, 6, 8]. Define
$$F_k(x) = \sum_{i=1}^{t}\alpha_i\big(x - P_{H_{c_i}^k}(x)\big) + \sum_{j=1}^{r}\beta_j A^T\big(Ax - P_{H_{q_j}^k}(Ax)\big),$$
where $\alpha_i > 0$, $\beta_j > 0$. Then $F_k$ is the gradient of the function obtained from (3.1) by replacing the sets $C_i$ and $Q_j$ with the half-spaces $H_{c_i}^k$ and $H_{q_j}^k$.
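Projections onto the half-spaces used in the relaxed method have a closed form, which is what makes the relaxation implementable. Below is a sketch, assuming $c(x) = \|x\|^2 - 1$ (the unit ball) as a sample constraint function; the helper names are ours.

```python
import numpy as np

def proj_halfspace(x, xk, c_val, xi):
    """Project x onto H = {z : c(xk) + <xi, z - xk> <= 0}, where xi is a
    subgradient of the convex function c at xk. Closed-form projection."""
    viol = c_val + np.dot(xi, x - xk)
    if viol <= 0:
        return x.copy()
    return x - viol * xi / np.dot(xi, xi)

# Sample constraint: c(x) = ||x||^2 - 1, so C = {x : c(x) <= 0} is the unit ball;
# a subgradient (here: the gradient) of c at xk is 2 * xk.
xk = np.array([2.0, 0.0])      # current iterate, outside C
c_val = np.dot(xk, xk) - 1.0   # c(xk) = 3
xi = 2.0 * xk                  # gradient of c at xk

p = proj_halfspace(xk, xk, c_val, xi)   # project the iterate itself onto H
# By convexity of c, the half-space H contains C, so the projection
# moves xk toward C but never past it.
```

Here the half-space is $H = \{z : 4z_1 - 5 \le 0\}$, so the projection of $x^k = (2, 0)$ lands at $(1.25, 0)$, on the boundary of $H$.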

Since the Lipschitz constant and the cocoercive modulus of the gradient defined by (3.2) are not related to the nonempty closed convex sets $C_i$ and $Q_j$ [3], one obtains that $F_k$ is $L$-Lipschitzian with $L = \sum_{i=1}^{t}\alpha_i + \rho(A^T A)\sum_{j=1}^{r}\beta_j$ and $\frac{1}{L}$-ism. Hence, $F_k$ is monotone.

*Algorithm 3.4. *Given constant . Let be arbitrary. For , compute
where and is the smallest nonnegative integer such that
Set
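A stepsize search of type (3.17) can be sketched as follows for a generic monotone operator; we use $F(x) = x - P_H(x)$ for a single half-space $H$ as a stand-in for the relaxed operator $F_k$, and an extragradient-flavored update in the spirit of Qu and Xiu [6]. This is our illustration under assumed parameters $\gamma$, $l$, $\theta$, not a verbatim transcription of Algorithm 3.4.

```python
import numpy as np

def proj_H(x, a, b):
    """Projection onto the half-space H = {z : <a, z> <= b}."""
    v = np.dot(a, x) - b
    return x if v <= 0 else x - v * a / np.dot(a, a)

a, b = np.array([1.0, 1.0]), 1.0
F = lambda x: x - proj_H(x, a, b)   # cocoercive (modulus 1), hence monotone

def step(x, gamma=1.0, l=0.5, theta=0.4, max_m=30):
    """One self-adaptive step: shrink tau = gamma * l**m until the rule
    tau * ||F(x) - F(y)|| <= theta * ||x - y|| holds for y = x - tau * F(x),
    then move along F(y) (extragradient flavor)."""
    Fx = F(x)
    if np.linalg.norm(Fx) == 0:
        return x                     # already a zero of F, i.e., x in H
    for m in range(max_m):
        tau = gamma * l ** m
        y = x - tau * Fx
        if tau * np.linalg.norm(Fx - F(y)) <= theta * np.linalg.norm(x - y):
            return x - tau * F(y)
    return x

x = np.array([3.0, 2.0])
for _ in range(100):
    x = step(x)
```

Because $F$ is Lipschitz continuous, the backtracking loop always terminates with a stepsize bounded away from zero, and the iterates drive the residual $\|F(x^k)\|$ to zero.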

Lemma 3.5. *The Armijo-like search rule (3.17) is well defined, and $\tau_k \ge \min\{\gamma, \theta l / L\}$.*

* Proof. *Obviously, from (3.17), $\tau_k \le \gamma$. If $\tau_k < \gamma$, then $\tau_k / l$ must violate inequality (3.17). That is,
Since $F_k$ is Lipschitz continuous with constant $L$, this together with (3.19) gives
which completes the proof.

Theorem 3.6. *Let $\{x^k\}$ be a sequence generated by Algorithm 3.4. If the solution set of the MSSFP (1.1) is nonempty, then $\{x^k\}$ converges to a solution of the MSSFP (1.1).*

* Proof. *Let $x^*$ be a solution of the MSSFP (1.1); then $x^* \in C_i$ for each $i$ and $Ax^* \in Q_j$ for each $j$.

Since $C_i \subseteq H_{c_i}^k$ and $Q_j \subseteq H_{q_j}^k$ for all $i$, $j$, and $k$, we have $x^* \in H_{c_i}^k$ and $Ax^* \in H_{q_j}^k$; thus, we have $F_k(x^*) = 0$ for all $k$.

Using the monotonicity of , we have for all
This implies
Therefore, we have
Thus, using part (3) of Lemma 2.3 and (3.23), we obtain
Since , . By Lemma 2.3(1), we have ; also, by search rule (3.17), it follows that
which implies that the sequence is monotonically decreasing and hence is bounded. Consequently, we get from (3.25)
On the other hand

which results in
Let $\bar{x}$ be an accumulation point of $\{x^k\}$ and let $x^{k_n} \to \bar{x}$, where $\{x^{k_n}\}$ is a subsequence of $\{x^k\}$. We will show that $\bar{x}$ is a solution of the MSSFP (1.1); that is, we need to show that $\bar{x} \in C$ and $A\bar{x} \in Q$.

Since , then by the definition of , we have

Passing to the limit in this inequality and taking into account (3.29) and Lemma 3.2, we obtain that with .

Because every index is taken up regularly in the cyclic control, this holds for every $i$; thus, $\bar{x} \in C$. Next, we need to show that $A\bar{x} \in Q$.

Let , then and we get from Lemmas 2.4 and 3.2, and (3.26) that
where . Since is a solution point of MSSFP (1.1), so . By using Lemma 2.3, we have

From Lemma 2.3, we see that orthogonal projection mappings are cocoercive with modulus 1. Taking into account the fact that the mapping is cocoercive with modulus 1 if and only if is cocoercive with modulus 1 [6], we obtain from (3.31) and that

Since and is bounded, the sequence is also bounded. Therefore, from (3.30) and (3.32), we get for all that
Moreover, since , we have
Again passing to the limit and combining these inequalities with Lemma 2.5, we conclude by (3.33) that
Thus, and .

Therefore, is a solution of the MSSFP (1.1). So we may use in place of in (3.23) and get that the sequence is convergent. Furthermore, noting that a subsequence of , that is, , converges to , we obtain that , as .

This completes the proof.

#### 4. Concluding Remarks

This paper introduced two self-adaptive projection methods with Armijo-like searches for solving the multiple-set split feasibility problem MSSFP (1.1). They need not compute the additional projection $P_\Omega$ and avoid the difficult task of estimating the Lipschitz constant. They make a sufficient decrease of the objective function at each iteration, so the efficiency is greatly enhanced. Moreover, in the second algorithm we use the relaxed projection technique, replacing the orthogonal projections onto the original convex sets by projections onto half-spaces, which may reduce a large amount of computation and makes the method more practical. The corresponding convergence theory has been established.

#### Acknowledgment

This work was supported in part by NSFC Grant no. 11071279.

#### References

1. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” *Numerical Algorithms*, vol. 8, no. 2, pp. 221–239, 1994.
2. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, “The multiple-sets split feasibility problem and its applications for inverse problems,” *Inverse Problems*, vol. 21, no. 6, pp. 2071–2084, 2005.
3. H. K. Xu, “A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem,” *Inverse Problems*, vol. 22, no. 6, pp. 2021–2034, 2006.
4. J. L. Zhao and Q. Z. Yang, “Self-adaptive projection methods for the multiple-sets split feasibility problem,” *Inverse Problems*, vol. 27, no. 3, Article ID 035009, 2011.
5. G. Lopez, V. Martin, and H. K. Xu, “Iterative algorithms for the multiple-sets split feasibility problem,” in *Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems*, Y. Censor, M. Jiang, and G. Wang, Eds., pp. 243–279, Medical Physics Publishing, Madison, Wis, USA, 2010.
6. B. Qu and N. H. Xiu, “A note on the CQ algorithm for the split feasibility problem,” *Inverse Problems*, vol. 21, no. 5, pp. 1655–1665, 2005.
7. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” *Inverse Problems*, vol. 18, no. 2, pp. 441–453, 2002.
8. Q. Yang, “The relaxed CQ algorithm solving the split feasibility problem,” *Inverse Problems*, vol. 20, no. 4, pp. 1261–1266, 2004.
9. R. T. Rockafellar, *Convex Analysis*, Princeton University Press, Princeton, NJ, USA, 1970.
10. E. H. Zarantonello, *Projections on Convex Sets in Hilbert Space and Spectral Theory*, University of Wisconsin Press, Madison, Wis, USA, 1971.

#### Copyright

Copyright © 2012 Ying Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.