
Abstract and Applied Analysis

Volume 2012 (2012), Article ID 432501, 9 pages

http://dx.doi.org/10.1155/2012/432501

## An Explicit Method for the Split Feasibility Problem with Self-Adaptive Step Sizes

Youli Yu

School of Mathematics and Information Engineering, Taizhou University, Linhai 317000, China

Received 12 September 2012; Accepted 24 October 2012

Academic Editor: Wenming Zou

Copyright © 2012 Youli Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

An explicit iterative method with self-adaptive step sizes for solving the split feasibility problem is presented, and a strong convergence theorem is provided.

#### 1. Introduction

Since its publication by Censor and Elfving in 1994 [1], the split feasibility problem has been studied by many authors. For some related works, please consult [1–18]. The problem is to find a point $x \in C$ such that $Ax \in Q$, where $C$ and $Q$ are closed convex subsets of two real Hilbert spaces $H_1$ and $H_2$, respectively, and $A: H_1 \to H_2$ is a bounded linear operator. Among the existing algorithms, the most popular one is Byrne's CQ method [2]:
$$x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big), \quad n \geq 0,$$
where $\gamma \in (0, 2/\|A\|^2)$. The algorithm involves only the computations of the projections $P_C$ and $P_Q$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions.

Note that the CQ algorithm can be obtained from optimization. If we set
$$f(x) = \tfrac{1}{2}\|Ax - P_Q(Ax)\|^2,$$
then the convex objective $f$ is differentiable and has a Lipschitz gradient given by
$$\nabla f(x) = A^*(I - P_Q)Ax.$$
Thus, the CQ algorithm can be obtained by solving the convex minimization problem
$$\min_{x \in C} f(x).$$
We can use the gradient projection algorithm below to solve the split feasibility problem:
$$x_{n+1} = P_C\big(x_n - \gamma_n \nabla f(x_n)\big),$$
where $\gamma_n$, the step size at iteration $n$, is chosen in the interval $(0, 2/L)$, with $L = \|A\|^2$ the Lipschitz constant of $\nabla f$.
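The gradient-projection form of Byrne's method can be sketched numerically. The following minimal illustration assumes box-shaped sets $C$ and $Q$, so that both projections have closed forms; the matrix, sets, starting point, and step size are our own toy choices, not taken from the paper.

```python
def clip(v, lo, hi):
    return [min(max(c, lo), hi) for c in v]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def cq_algorithm(A, At, proj_C, proj_Q, x, gamma, n_iter=500):
    """Byrne-style CQ / gradient-projection iteration for the split
    feasibility problem: find x in C with Ax in Q.  At is the adjoint
    (transpose) of A, and gamma must lie in (0, 2/||A||^2)."""
    for _ in range(n_iter):
        Ax = matvec(A, x)
        pAx = proj_Q(Ax)
        # gradient of f(x) = (1/2)||Ax - P_Q(Ax)||^2 is A^T (I - P_Q) A x
        grad = matvec(At, [a - p for a, p in zip(Ax, pAx)])
        x = proj_C([xi - gamma * g for xi, g in zip(x, grad)])
    return x

# Toy instance (our own choice): C = [0,1]^2, Q = [0,0.5]^2.
A = [[1.0, 0.5], [0.0, 1.0]]
At = [[1.0, 0.0], [0.5, 1.0]]
proj_C = lambda v: clip(v, 0.0, 1.0)
proj_Q = lambda v: clip(v, 0.0, 0.5)
# For this A we have ||A||^2 < 2, so gamma = 0.5 lies safely in (0, 2/||A||^2).
x = cq_algorithm(A, At, proj_C, proj_Q, [1.0, 1.0], gamma=0.5)
Ax = matvec(A, x)
residual = max(abs(a - p) for a, p in zip(Ax, proj_Q(Ax)))
```

After a few hundred iterations the iterate sits in $C$ with $Ax$ essentially inside $Q$, i.e. the residual $\|Ax - P_Q(Ax)\|$ is numerically zero, as the theory for fixed $\gamma \in (0, 2/\|A\|^2)$ predicts.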

However, we observe that the determination of the step size $\gamma_n$ depends on the operator (matrix) norm $\|A\|$ (that is, on the largest eigenvalue of $A^*A$). This means that, in order to implement the algorithm, one has first to compute (or at least estimate) the norm of $A$, which is in general not easy in practice. To overcome this difficulty, so-called self-adaptive methods, which select the step size adaptively at each iteration, were developed. See, for example, [10, 14, 15, 19–23].
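One common way to realize such a self-adaptive choice is an Armijo-type backtracking search, in the spirit of the cited works. The sketch below is our own, and the parameters `gamma0`, `rho`, `mu` are illustrative assumptions rather than the rule analyzed in this paper.

```python
def clip(v, lo, hi):
    return [min(max(c, lo), hi) for c in v]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def norm(v):
    return sum(c * c for c in v) ** 0.5

def sfp_self_adaptive(A, At, proj_C, proj_Q, x, n_iter=300,
                      gamma0=1.0, rho=0.5, mu=0.4):
    """Gradient projection for the split feasibility problem with an
    Armijo-type self-adaptive step size: shrink the trial gamma until
        gamma * ||grad(x) - grad(z)|| <= mu * ||x - z||,
    so no estimate of ||A|| is ever needed."""
    def grad(v):
        Av = matvec(A, v)
        pAv = proj_Q(Av)
        return matvec(At, [a - p for a, p in zip(Av, pAv)])
    for _ in range(n_iter):
        g = grad(x)
        gamma = gamma0
        z = proj_C([xi - gamma * gi for xi, gi in zip(x, g)])
        # grad is Lipschitz, so this backtracking loop terminates.
        while gamma * norm([a - b for a, b in zip(g, grad(z))]) > \
                mu * norm([a - b for a, b in zip(x, z)]) + 1e-15:
            gamma *= rho
            z = proj_C([xi - gamma * gi for xi, gi in zip(x, g)])
        x = z
    return x

# Same toy instance as before: C = [0,1]^2, Q = [0,0.5]^2.
A = [[1.0, 0.5], [0.0, 1.0]]
At = [[1.0, 0.0], [0.5, 1.0]]
x = sfp_self_adaptive(A, At, lambda v: clip(v, 0.0, 1.0),
                      lambda v: clip(v, 0.0, 0.5), [1.0, 1.0])
Ax = matvec(A, x)
residual = max(abs(a - p) for a, p in zip(Ax, clip(Ax, 0.0, 0.5)))
```

The backtracking condition is satisfied whenever $\gamma \le \mu / L$, so the accepted steps stay bounded away from zero and the iteration behaves like the fixed-step method, without ever forming $\|A\|$.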

Inspired by the above results and the self-adaptive method, in this paper we present an explicit iterative method with self-adaptive step sizes for solving the split feasibility problem. A strong convergence analysis is given.

#### 2. Preliminaries

Let $H_1$ and $H_2$ be two real Hilbert spaces and $C$ and $Q$ two closed convex subsets of $H_1$ and $H_2$, respectively. Let $A: H_1 \to H_2$ be a bounded linear operator. The split feasibility problem is to find a point $x$ such that
$$x \in C, \quad Ax \in Q.$$
Next, we use $\Gamma$ to denote the solution set of the split feasibility problem, that is, $\Gamma = \{x \in C : Ax \in Q\}$.

We know that a point $x^* \in C$ is a stationary point of problem (1.4) if it satisfies
$$\langle \nabla f(x^*), x - x^* \rangle \geq 0 \quad \text{for all } x \in C.$$
Given $\gamma > 0$, a point $x^* \in C$ solves the split feasibility problem if and only if $x^*$ solves the fixed point equation
$$x^* = P_C\big(x^* - \gamma A^*(I - P_Q)Ax^*\big).$$
Next we adopt the following notation:
(i) $x_n \to x$ means that $\{x_n\}$ converges strongly to $x$;
(ii) $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$;
(iii) $\omega_w(x_n)$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.

Recall that a function $f: H \to \mathbb{R}$ is called convex if
$$f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda)f(y)$$
for all $\lambda \in (0,1)$ and $x, y \in H$. It is known that a differentiable function $f$ is convex if and only if there holds the relation
$$f(z) \geq f(x) + \langle \nabla f(x), z - x \rangle$$
for all $z \in H$. Recall that an element $g \in H$ is said to be a subgradient of $f$ at $x$ if
$$f(z) \geq f(x) + \langle g, z - x \rangle$$
for all $z \in H$. If the function $f$ has at least one subgradient at $x$, it is said to be subdifferentiable at $x$. The set of subgradients of $f$ at the point $x$ is called the subdifferential of $f$ at $x$, and is denoted by $\partial f(x)$. A function $f$ is called subdifferentiable if it is subdifferentiable at all $x \in H$. If $f$ is convex and differentiable, then its gradient and subgradient coincide. A function $f$ is said to be weakly lower semicontinuous (w-lsc) at $x$ if $x_n \rightharpoonup x$ implies
$$f(x) \leq \liminf_{n \to \infty} f(x_n);$$
$f$ is said to be w-lsc on $H$ if it is w-lsc at every point $x \in H$.
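As a quick sanity check of the subgradient inequality, the following verifies it for the convex, nondifferentiable function $f(x) = \sum_i |x_i|$, for which the componentwise sign vector is a valid subgradient. The sample points are arbitrary choices of ours.

```python
import random

random.seed(0)
f = lambda v: sum(abs(c) for c in v)     # convex, nonsmooth at the origin
sign = lambda t: (t > 0) - (t < 0)

ok = True
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(4)]
    z = [random.uniform(-1, 1) for _ in range(4)]
    g = [sign(c) for c in x]             # a subgradient of f at x
    # subgradient inequality: f(z) >= f(x) + <g, z - x>
    inner = sum(gi * (zi - xi) for gi, zi, xi in zip(g, z, x))
    ok = ok and (f(z) >= f(x) + inner - 1e-12)
```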

A mapping $T: C \to C$ is called *nonexpansive* if
$$\|Tx - Ty\| \leq \|x - y\|$$
for all $x, y \in C$.

Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_C$, assigns to each $x \in H$ the unique point $P_C x \in C$ with the property
$$\|x - P_C x\| = \min_{y \in C} \|x - y\|.$$
It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties:
(a) $\langle x - P_C x, y - P_C x \rangle \leq 0$ for all $x \in H$ and $y \in C$;
(b) $\|P_C x - P_C y\|^2 \leq \langle P_C x - P_C y, x - y \rangle$ for every $x, y \in H$;
(c) $\|P_C x - y\|^2 \leq \|x - y\|^2 - \|x - P_C x\|^2$ for all $x \in H$, $y \in C$.
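The first two projection properties can be checked numerically for a projection with a closed form, e.g. the projection onto the closed unit ball, $P_C x = x / \max(1, \|x\|)$; the random test points are our own.

```python
import random

random.seed(1)

def norm(v):
    return sum(c * c for c in v) ** 0.5

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj_ball(v):
    """Metric projection onto the closed unit ball in R^3."""
    n = norm(v)
    return v if n <= 1.0 else [c / n for c in v]

props_hold = True
for _ in range(200):
    x = [random.uniform(-2, 2) for _ in range(3)]
    y = [random.uniform(-2, 2) for _ in range(3)]
    px, py = proj_ball(x), proj_ball(y)
    c = proj_ball([random.uniform(-2, 2) for _ in range(3)])  # a point of C
    # (a): <x - P_C x, c - P_C x> <= 0 for every c in C
    a_ok = dot([a - b for a, b in zip(x, px)],
               [a - b for a, b in zip(c, px)]) <= 1e-12
    # (b): ||P_C x - P_C y||^2 <= <P_C x - P_C y, x - y>
    d = [a - b for a, b in zip(px, py)]
    b_ok = dot(d, d) <= dot(d, [a - b for a, b in zip(x, y)]) + 1e-12
    props_hold = props_hold and a_ok and b_ok
```

Property (b) is exactly firm nonexpansiveness; by Cauchy-Schwarz it implies the nonexpansiveness of $P_C$ defined above.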

Lemma 2.1 (see [24]). *Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \leq (1 - \gamma_n) a_n + \gamma_n \delta_n, \quad n \geq 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that*
(1) *$\sum_{n=0}^{\infty} \gamma_n = \infty$;*
(2) *$\limsup_{n \to \infty} \delta_n \leq 0$ or $\sum_{n=0}^{\infty} |\gamma_n \delta_n| < \infty$.*
*Then $\lim_{n \to \infty} a_n = 0$.*
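A quick numerical illustration of Lemma 2.1, with our own choice $\gamma_n = \delta_n = 1/(n+2)$, which satisfies conditions (1) and (2):

```python
# Recursion a_{n+1} = (1 - gamma_n) a_n + gamma_n delta_n with a divergent
# sum of gamma_n and delta_n -> 0; the lemma predicts a_n -> 0 even though
# a_0 is large and every single factor (1 - gamma_n) is close to 1.
a = 10.0
for n in range(100000):
    gamma = 1.0 / (n + 2)    # gamma_n in (0,1), sum diverges (condition (1))
    delta = 1.0 / (n + 2)    # delta_n -> 0, so limsup delta_n <= 0 (condition (2))
    a = (1 - gamma) * a + gamma * delta
```

After $10^5$ steps the value of `a` has dropped from 10 to roughly $10^{-4}$, consistent with the product $\prod_{k}(1-\gamma_k) \to 0$ forced by the divergence of $\sum \gamma_n$.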

Lemma 2.2 (see [25]). *Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_j} < \Gamma_{n_j+1}$ for all $j \geq 0$. For every $n \geq n_0$, define an integer sequence $\{\tau(n)\}$ as
$$\tau(n) = \max\{k \leq n : \Gamma_k < \Gamma_{k+1}\}.$$
Then $\tau(n) \to \infty$ as $n \to \infty$ and, for all $n \geq n_0$,
$$\max\{\Gamma_{\tau(n)}, \Gamma_n\} \leq \Gamma_{\tau(n)+1}.$$*
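The integer sequence $\tau(n)$ of Lemma 2.2 can be illustrated concretely. Below we verify, on an arbitrary oscillating sample sequence of our own, that $\tau(n)$ always picks an index where the sequence increases and that the lemma's key inequality $\Gamma_n \le \Gamma_{\tau(n)+1}$ holds.

```python
# Oscillating sample sequence (it does not decrease at infinity).
G = [3.0, 1.0, 2.0, 1.5, 2.5, 2.0, 3.0, 2.8, 3.5, 3.2, 4.0]

def tau(n):
    """tau(n) = max{k <= n : G_k < G_{k+1}}  (defined for n >= n0 = 1 here)."""
    return max(k for k in range(n + 1) if G[k] < G[k + 1])

lemma_holds = True
for n in range(1, len(G) - 1):
    t = tau(n)
    lemma_holds = lemma_holds and (G[t] < G[t + 1])    # G_{tau(n)} < G_{tau(n)+1}
    lemma_holds = lemma_holds and (G[n] <= G[t + 1])   # G_n <= G_{tau(n)+1}
```

The point of the lemma is that along the subsequence $\{\tau(n)\}$ one can argue as if the sequence were increasing, which is how Case 2 of the main proof is handled.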

#### 3. Main Results

In this section, we will introduce our algorithm and prove our main results.

Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. Let $A: H_1 \to H_2$ be a bounded linear operator. In the sequel, we assume that the split feasibility problem is consistent, that is, $\Gamma \neq \emptyset$.

*Algorithm 3.1. *For and given , let the sequence be defined by
where and .

*Remark 3.2. *In the sequel, we may assume that for all . Note that this fact can be guaranteed if the sequence is infinite; that is, Algorithm 3.1 does not terminate in a finite number of iterations.

Theorem 3.3. *Assume that the following conditions are satisfied:*(i)* and ;*(ii)*.**
Then the sequence defined by (3.1) converges strongly to a solution of the split feasibility problem. *

*Proof. *Let . It follows that for all . From (2.5), we deduce that
Thus, by (3.1) and (3.2), we have
It follows that
By induction, we deduce
Hence, is bounded.

At the same time, we note that
Therefore,
It follows that
Next, we will prove that . Set for all . Since and , we may assume without loss of generality that for some . Thus, we can rewrite (3.8) as
where .

Now, we consider two possible cases.

*Case 1.* Assume that is eventually decreasing; that is, there exists such that is decreasing for . In this case, must be convergent, and from (3.9) it follows that
where is a constant such that . Letting in (3.10), we get
Since $\{x_n\}$ is bounded, there exists a subsequence of $\{x_n\}$ converging weakly to some point, and the corresponding subsequence of the auxiliary iterates converges weakly to the same point. From the weak lower semicontinuity of $f$, we have
Hence, ; that is, . This indicates that
Furthermore, by using the property of the projection , we deduce
From (3.8), we obtain
This together with Lemma 2.1 implies that .

*Case 2.* Assume that is not eventually decreasing. That is, there exists an integer such that . Thus, we can define an integer sequence for all as follows:
Clearly, is a nondecreasing sequence such that as and
for all . In this case, we derive from (3.10) that
It follows that
This implies that every weak cluster point of is in the solution set ; that is, . So, . On the other hand, we note that
From which we can deduce that
Since , we have from (3.9) that
Combining (3.21) and (3.22) yields
and hence
From (3.15), we have
Thus,
From Lemma 2.2, we have
Therefore, . That is, . This completes the proof.

#### Acknowledgment

The author was supported in part by the Youth Foundation of Taizhou University (2011QN11).

#### References

1. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” *Numerical Algorithms*, vol. 8, no. 2-4, pp. 221–239, 1994.
2. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” *Inverse Problems*, vol. 18, no. 2, pp. 441–453, 2002.
3. Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, “A unified approach for inversion problems in intensity-modulated radiation therapy,” *Physics in Medicine and Biology*, vol. 51, pp. 2353–2365, 2006.
4. Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, “The multiple-sets split feasibility problem and its applications for inverse problems,” *Inverse Problems*, vol. 21, no. 6, pp. 2071–2084, 2005.
5. Y. Dang and Y. Gao, “The strong convergence of a KM-CQ-like algorithm for a split feasibility problem,” *Inverse Problems*, vol. 27, no. 1, article 015007, 2011.
6. B. Qu and N. Xiu, “A note on the CQ algorithm for the split feasibility problem,” *Inverse Problems*, vol. 21, no. 5, pp. 1655–1665, 2005.
7. F. Wang and H.-K. Xu, “Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem,” *Journal of Inequalities and Applications*, vol. 2010, Article ID 102085, 13 pages, 2010.
8. F. Wang and H.-K. Xu, “Cyclic algorithms for split feasibility problems in Hilbert spaces,” *Nonlinear Analysis A*, vol. 74, no. 12, pp. 4105–4111, 2011.
9. Z. Wang, Q. I. Yang, and Y. Yang, “The relaxed inexact projection methods for the split feasibility problem,” *Applied Mathematics and Computation*, vol. 217, no. 12, pp. 5347–5359, 2011.
10. G. Lopez, V. Martin-Marquez, F. Wang, and H. K. Xu, “Solving the split feasibility problem without prior knowledge of matrix norms,” *Inverse Problems*, vol. 28, no. 8, article 085004, 2012.
11. H.-K. Xu, “A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem,” *Inverse Problems*, vol. 22, no. 6, pp. 2021–2034, 2006.
12. H.-K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” *Inverse Problems*, vol. 26, no. 10, article 105018, 17 pages, 2010.
13. J. Zhao and Q. Yang, “Several solution methods for the split feasibility problem,” *Inverse Problems*, vol. 21, no. 5, pp. 1791–1799, 2005.
14. W. Zhang, D. Han, and Z. Li, “A self-adaptive projection method for solving the multiple-sets split feasibility problem,” *Inverse Problems*, vol. 25, no. 11, article 115001, 16 pages, 2009.
15. J. Zhao and Q. Yang, “Self-adaptive projection methods for the multiple-sets split feasibility problem,” *Inverse Problems*, vol. 27, no. 3, article 035009, 13 pages, 2011.
16. L. C. Ceng, Q. H. Ansari, and J. C. Yao, “An extragradient method for split feasibility and fixed point problems,” *Computers and Mathematics with Applications*, vol. 64, no. 4, pp. 633–642, 2012.
17. Y. Dang, Y. Gao, and Y. Han, “A perturbed projection algorithm with inertial technique for split feasibility problem,” *Journal of Applied Mathematics*, vol. 2012, Article ID 207323, 10 pages, 2012.
18. Y. Dang and Y. Gao, “An extrapolated iterative algorithm for multiple-set split feasibility problem,” *Abstract and Applied Analysis*, vol. 2012, Article ID 149508, 12 pages, 2012.
19. D. Han, “Inexact operator splitting methods with selfadaptive strategy for variational inequality problems,” *Journal of Optimization Theory and Applications*, vol. 132, no. 2, pp. 227–243, 2007.
20. D. Han and W. Sun, “A new modified Goldstein-Levitin-Polyak projection method for variational inequality problems,” *Computers & Mathematics with Applications*, vol. 47, no. 12, pp. 1817–1825, 2004.
21. B. He, X.-Z. He, H. X. Liu, and T. Wu, “Self-adaptive projection method for co-coercive variational inequalities,” *European Journal of Operational Research*, vol. 196, no. 1, pp. 43–48, 2009.
22. L.-Z. Liao and S. Wang, “A self-adaptive projection and contraction method for monotone symmetric linear variational inequalities,” *Computers & Mathematics with Applications*, vol. 43, no. 1-2, pp. 41–48, 2002.
23. Q. Yang, “On variable-step relaxed projection algorithm for variational inequalities,” *Journal of Mathematical Analysis and Applications*, vol. 302, no. 1, pp. 166–179, 2005.
24. H.-K. Xu, “Iterative algorithms for nonlinear operators,” *Journal of the London Mathematical Society*, vol. 66, no. 1, pp. 240–256, 2002.
25. P.-E. Maingé, “Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization,” *Set-Valued Analysis*, vol. 16, no. 7-8, pp. 899–912, 2008.