*Journal of Applied Mathematics*, Volume 2014 (2014), Article ID 468079, 10 pages. http://dx.doi.org/10.1155/2014/468079
Research Article

## Relaxed Extragradient Algorithms for the Split Feasibility Problem

1School of Mathematics and Information Engineering, Taizhou University, Linhai 317000, China
2Department of Mathematics and the RINS, Gyeongsang National University, Jinju 660-701, Republic of Korea
3Department of Mathematics, Dong-A University, Pusan 614-714, Republic of Korea

Received 4 October 2013; Accepted 30 January 2014; Published 27 March 2014

Copyright © 2014 Youli Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The purpose of this paper is to introduce a new relaxed extragradient algorithm for the split feasibility problem. Our relaxed extragradient algorithm is new, and it generalizes several known results for solving the split feasibility problem.

#### 1. Introduction

The split feasibility problem has received much attention due to its applications in image denoising, signal processing, and image reconstruction, with particular progress in intensity-modulated radiation therapy; see, for instance, [1–4]. In this paper, we continue to study the split feasibility problem and its approximation algorithms.

To begin with, let us first recall the notion of the split feasibility problem and some existing algorithms in the literature. Recall that the split feasibility problem introduced by Censor and Elfving [4] can be formulated as finding a point $x^*$ with the property
$$x^* \in C \quad \text{and} \quad Ax^* \in Q, \tag{1}$$
where $C$ and $Q$ are two closed convex subsets of two Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator. The original algorithm introduced in [4] involves the computation of the inverse $A^{-1}$:
$$x_{k+1} = A^{-1}P_Q\bigl(P_{A(C)}(Ax_k)\bigr), \quad k \ge 0, \tag{2}$$
where $C, Q \subseteq \mathbb{R}^n$ are closed convex sets and $A$ is a full rank $n \times n$ matrix. It is therefore an interesting problem to construct iterative algorithms for the split feasibility problem that avoid inverting the bounded linear operator $A$.

We use $\Gamma$ to denote the solution set of the split feasibility problem; that is,
$$\Gamma = \{x \in C : Ax \in Q\}. \tag{3}$$
In the sequel, we assume that the split feasibility problem is consistent; that is, $\Gamma \neq \emptyset$. Let $x^* \in \Gamma$; then $x^* \in C$ and $Ax^* \in Q$. Thus, $P_C x^* = x^*$ and $P_Q(Ax^*) = Ax^*$. It follows that $(I - P_Q)Ax^* = 0$, which implies $A^*(I - P_Q)Ax^* = 0$. Hence, we deduce that the fixed point equation
$$x^* = P_C\bigl(x^* - \gamma A^*(I - P_Q)Ax^*\bigr) \tag{4}$$
holds for any $\gamma > 0$. Note that the converse also holds when the split feasibility problem is consistent. By using (4), Byrne [5] presented a more popular algorithm:
$$x_{n+1} = P_C\bigl(x_n - \gamma A^*(I - P_Q)Ax_n\bigr), \quad n \ge 0, \tag{5}$$
where $0 < \gamma < 2/\|A\|^2$. The algorithm only needs to compute the projections $P_C$ and $P_Q$. It is therefore implementable when $P_C$ and $P_Q$ have closed-form expressions. Consequently, the algorithm has received much attention; see [6–17]. In particular, in [14], Xu proposed an averaged CQ algorithm and obtained a weak convergence result.
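As an illustration (not part of the original paper), Byrne's CQ iteration described above can be sketched numerically whenever $P_C$ and $P_Q$ have closed forms. The sketch below is ours: it assumes $C$ and $Q$ are Euclidean balls, and the helper `proj_ball`, the toy operator `A`, and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Euclidean projection onto the closed ball {y : ||y - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + radius * d / n

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=1000):
    """Byrne's CQ iteration x_{k+1} = P_C(x_k - gamma * A^T (I - P_Q) A x_k),
    with a constant step size gamma in (0, 2 / ||A||^2)."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # safely inside (0, 2/||A||^2)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

# Toy instance: find x in C with Ax in Q, where C is the ball of radius 1
# around (1, 1, 1) in R^3 and Q is the unit ball in R^2.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
P_C = lambda x: proj_ball(x, np.ones(3), 1.0)
P_Q = lambda y: proj_ball(y, np.zeros(2), 1.0)
x = cq_algorithm(A, P_C, P_Q, np.array([5.0, -3.0, 4.0]))
# At convergence, x lies in C and Ax lies (approximately) in Q.
```

Only the two projections and matrix-vector products with $A$ and $A^T$ are needed; no inversion of $A$ takes place.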

On the other hand, we note that the split feasibility problem is consistent; then there exists $x^*$ such that $Ax^* \in Q$. Thus, we consider the distance function $d(y, Q) = \inf_{q \in Q}\|y - q\|$ and the minimization problem
$$\min_{x \in C} \tfrac{1}{2} d(Ax, Q)^2. \tag{6}$$
Since $d(Ax, Q) = \|(I - P_Q)Ax\|$, we then consider the minimization
$$\min_{x \in C} f(x) = \min_{x \in C} \tfrac{1}{2}\|(I - P_Q)Ax\|^2. \tag{7}$$
However, (7) is, in general, ill-posed. We import the well-known Tikhonov regularization method
$$\min_{x \in C} f_\alpha(x) = \min_{x \in C}\Bigl(\tfrac{1}{2}\|(I - P_Q)Ax\|^2 + \tfrac{\alpha}{2}\|x\|^2\Bigr), \tag{8}$$
where $\alpha > 0$ is the regularization parameter. Note that the gradient of $f_\alpha$ is
$$\nabla f_\alpha(x) = A^*(I - P_Q)Ax + \alpha x. \tag{9}$$
Using (9), some regularized methods have been suggested. For example, Xu [14] suggested the following regularized method:
$$x_{n+1} = P_C\bigl(x_n - \gamma_n(A^*(I - P_Q)Ax_n + \alpha_n x_n)\bigr), \quad n \ge 0. \tag{10}$$
He proved that the sequence generated by (10) converges to the solution of the split feasibility problem. Ceng et al. [7] presented the following extragradient method:
$$\begin{aligned} y_n &= P_C\bigl(x_n - \lambda_n \nabla f_{\alpha_n}(x_n)\bigr), \\ x_{n+1} &= P_C\bigl(x_n - \lambda_n \nabla f_{\alpha_n}(y_n)\bigr), \quad n \ge 0, \end{aligned} \tag{11}$$
and proved that the sequence generated by (11) converges weakly to a solution of the split feasibility problem. Motivated by the above results, in this paper we construct a new extragradient algorithm for solving the split feasibility problem. A strong convergence result is demonstrated.
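To make the regularized machinery concrete, here is an illustrative sketch (ours, not from the paper) of a two-step extragradient-style iteration in the spirit of (11), with the gradient (9). Box constraints are used so that both projections are trivial; the sets, the matrix `A`, and the parameter schedules $\alpha_n = 1/n$ and $\lambda_n = 1/(L + \alpha_n)$ are assumptions made only for this toy example.

```python
import numpy as np

def grad_f_alpha(A, proj_Q, x, alpha):
    """Gradient of the Tikhonov-regularized objective:
    grad f_alpha(x) = A^T (I - P_Q) A x + alpha * x  (cf. (9))."""
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax)) + alpha * x

def extragradient_sfp(A, proj_C, proj_Q, x0, n_iter=2000):
    """Predictor-corrector extragradient iteration: a predictor step y_n,
    then a corrector step using the gradient evaluated at y_n."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        alpha = 1.0 / n                    # regularization parameter alpha_n -> 0
        lam = 1.0 / (L + alpha)            # step size below 1/(Lipschitz constant)
        y = proj_C(x - lam * grad_f_alpha(A, proj_Q, x, alpha))
        x = proj_C(x - lam * grad_f_alpha(A, proj_Q, y, alpha))
    return x

# Toy instance with box constraints: C = [0,1]^3 and Q = [0,1]^2.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
P_C = lambda x: np.clip(x, 0.0, 1.0)
P_Q = lambda y: np.clip(y, 0.0, 1.0)
x = extragradient_sfp(A, P_C, P_Q, np.array([2.0, 3.0, 1.0]))
# x stays in C by construction, and A x ends up (approximately) in Q.
```

The vanishing regularization $\alpha_n \to 0$ biases the iterates toward the minimum-norm solution, which is the role the Tikhonov term plays in the regularized methods above.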

#### 2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C. \tag{12}$$
A mapping $A : C \to H$ is said to be $\nu$-inverse strongly monotone if there exists a constant $\nu > 0$ such that
$$\langle Ax - Ay, x - y\rangle \ge \nu\|Ax - Ay\|^2, \quad \forall x, y \in C. \tag{13}$$
Recall that the metric projection from $H$ onto $C$, denoted by $P_C$, assigns to each $x \in H$ the unique point $P_C x \in C$ satisfying $\|x - P_C x\| = \inf_{y \in C}\|x - y\|$, and can be characterized by
$$\langle x - P_C x, y - P_C x\rangle \le 0 \tag{14}$$
for all $y \in C$. Hence, $P_C$ is nonexpansive. It is also known that $I - P_C$ is nonexpansive.
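The variational characterization of the metric projection and the nonexpansiveness of both $P_C$ and $I - P_C$ are easy to sanity-check numerically. The following self-contained check is our illustration, taking $C$ to be the unit ball so the projection has a closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, radius=1.0):
    """Metric projection onto the closed ball {y : ||y|| <= radius}."""
    n = np.linalg.norm(x)
    return x.copy() if n <= radius else radius * x / n

# Sample random points and verify, for C the unit ball:
#   (a) the characterization <x - P_C x, y - P_C x> <= 0 for every y in C,
#   (b) P_C is nonexpansive, and (c) I - P_C is nonexpansive.
for _ in range(1000):
    x = rng.normal(size=3) * 5
    z = rng.normal(size=3) * 5
    y = proj_ball(rng.normal(size=3) * 5)            # an arbitrary point of C
    px, pz = proj_ball(x), proj_ball(z)
    assert np.dot(x - px, y - px) <= 1e-9                                   # (a)
    assert np.linalg.norm(px - pz) <= np.linalg.norm(x - z) + 1e-9          # (b)
    assert np.linalg.norm((x - px) - (z - pz)) <= np.linalg.norm(x - z) + 1e-9  # (c)
```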

Let $A : C \to H$ be a monotone mapping. The variational inequality problem is to find $u \in C$ such that
$$\langle Au, v - u\rangle \ge 0, \quad \forall v \in C.$$
The solution set of the variational inequality is denoted by $VI(C, A)$. It is well known that $u \in VI(C, A)$ if and only if $u = P_C(u - \lambda Au)$ for all $\lambda > 0$. A set-valued mapping $T : H \to 2^H$ is called monotone if, for each $x, y \in H$, $f \in Tx$, and $g \in Ty$, we have $\langle x - y, f - g\rangle \ge 0$. A monotone mapping $T$ is called maximal if its graph $G(T)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for each pair $(x, f) \in H \times H$ with $\langle x - y, f - g\rangle \ge 0$ for every $(y, g) \in G(T)$, we have $f \in Tx$. Let $A$ be a monotone and $L$-Lipschitz continuous mapping and let $N_C u$ be the normal cone to $C$ at $u \in C$; that is,
$$N_C u = \{w \in H : \langle w, v - u\rangle \le 0, \ \forall v \in C\}.$$
Define
$$Tu = \begin{cases} Au + N_C u, & u \in C, \\ \emptyset, & u \notin C. \end{cases}$$
Then, $T$ is maximal monotone and $0 \in Tu$ if and only if $u \in VI(C, A)$; see [18] for more details.
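The fixed-point characterization of $VI(C, A)$ above directly suggests a projected fixed-point iteration. The toy affine operator $Au = Mu + q$ with $M$ positive definite and the box $C = [0, 1]^2$ below are our own illustrative choices, not from the paper:

```python
import numpy as np

# Toy monotone operator A(u) = M u + q with M positive definite, and C = [0, 1]^2.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([-1.0, 1.0])
proj_C = lambda u: np.clip(u, 0.0, 1.0)

# Projected fixed-point iteration u <- P_C(u - lam * A(u)); for a small step
# lam this map is a contraction and its fixed point solves the variational
# inequality <A u*, v - u*> >= 0 for all v in C.
u = np.zeros(2)
for _ in range(500):
    u = proj_C(u - 0.1 * (M @ u + q))

# For this instance u* = (0.5, 0): the first component of A u* vanishes
# (u*_1 is interior) and the second is positive (u*_2 sits at the lower bound).
print(u)
```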

Lemma 1 (see [14]). Let $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$. One knows the following properties: (i) $\nabla f = A^*(I - P_Q)A$ is Lipschitz continuous with Lipschitz constant $\|A\|^2$; (ii) $\nabla f$ is $1/\|A\|^2$-inverse strongly monotone; (iii) $P_C(I - \gamma \nabla f)$ is nonexpansive for all $\gamma \in (0, 2/\|A\|^2)$.

Lemma 2 (see [19]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let the mapping $A : C \to H$ be $\nu$-inverse strongly monotone and let $\lambda > 0$ be a constant. Then, we have
$$\|(I - \lambda A)x - (I - \lambda A)y\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\nu)\|Ax - Ay\|^2, \quad \forall x, y \in C.$$
In particular, if $0 < \lambda \le 2\nu$, then $I - \lambda A$ is nonexpansive.
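Lemma 2 can be verified numerically for a linear inverse strongly monotone mapping. The symmetric positive semidefinite matrix $M$ below, which defines a $(1/\lambda_{\max}(M))$-inverse strongly monotone mapping $x \mapsto Mx$, is our illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric positive semidefinite matrix M defines a nu-inverse strongly
# monotone mapping x -> M x with nu = 1 / lambda_max(M).
B = rng.normal(size=(4, 4))
M = B.T @ B
nu = 1.0 / np.linalg.eigvalsh(M).max()

# Check ||(I - lam M)x - (I - lam M)y||^2
#        <= ||x - y||^2 + lam (lam - 2 nu) ||Mx - My||^2  on random samples.
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    lam = rng.uniform(0.0, 3.0 * nu)
    lhs = np.linalg.norm((x - lam * M @ x) - (y - lam * M @ y)) ** 2
    rhs = (np.linalg.norm(x - y) ** 2
           + lam * (lam - 2.0 * nu) * np.linalg.norm(M @ (x - y)) ** 2)
    assert lhs <= rhs + 1e-9
# For 0 < lam <= 2 nu the right-hand side is at most ||x - y||^2,
# which is exactly the nonexpansiveness of I - lam * M.
```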

Lemma 3 (see [20]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $E$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n)z_n + \beta_n x_n$ for all $n \ge 0$ and
$$\limsup_{n\to\infty}\bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$
Then, $\lim_{n\to\infty}\|z_n - x_n\| = 0$.

Lemma 4 (see [21]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n\gamma_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that (1) $\sum_{n=0}^{\infty}\gamma_n = \infty$; (2) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\delta_n\gamma_n| < \infty$. Then, $\lim_{n\to\infty}a_n = 0$.
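A quick numerical illustration of Lemma 4 (ours, with the specific schedules $\gamma_n = 1/(n+1)$, so that $\sum_n \gamma_n = \infty$, and $\delta_n = 1/\sqrt{n} \to 0$ chosen for the example):

```python
import math

# Run the recursion a_{n+1} = (1 - gamma_n) a_n + delta_n * gamma_n of Lemma 4.
a = 1.0
for n in range(1, 200000):
    gamma = 1.0 / (n + 1)            # non-summable: sum gamma_n diverges
    delta = 1.0 / math.sqrt(n)       # delta_n -> 0, so limsup delta_n <= 0
    a = (1.0 - gamma) * a + delta * gamma

# Writing b_n = n * a_n gives b_{n+1} = b_n + delta_n, so b_n ~ 2 sqrt(n)
# and a_n ~ 2 / sqrt(n) -> 0, matching the lemma's conclusion.
print(a)
```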

#### 3. Main Results

Let $H_1$ and $H_2$ be two Hilbert spaces and let $C$ and $Q$ be two nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Suppose that $A : H_1 \to H_2$ is a bounded linear operator and $f$ is a $\rho$-contraction. We will now state and prove our main results; at the end of the paper, we will give an example to show that our results improve on works in the literature.

Algorithm 5. Let . Let be a sequence generated by where is a sequence; and are sequences satisfying .

Theorem 6. Suppose that . Assume , , , , and . Then the sequence generated by (24) converges strongly to .

Proof. From the assumptions and , we can choose such that when is sufficiently large. For convenience and without loss of generality, we assume that for all . Then, we deduce for all .
Let . From Section 1, we know that for any . Since and , we have From (24) and (25), we have From Lemma 1, we know that and are nonexpansive. It follows that Thus, Hence, is bounded. It follows from (28) that the sequence is also bounded.
Set . Note that is nonexpansive. Thus, we can rewrite in (24) as where It follows that So, It follows that From (24), we have Hence, we deduce Since , , , and , we derive that By Lemma 3, we obtain Hence, By Lemma 1, Lemma 2, and (29), we get Therefore, we have Noting that , we get . Since and , we obtain . Thus, we have
Using (15), we have where is some constant such that It follows that and hence which implies that Since and , we derive
Next we show that where . To show this, we choose a subsequence of such that Since is bounded, so is ; hence a subsequence of converges weakly to .
Next we show that . We define a mapping by Then is maximal monotone. Let . Since and , we get On the other hand, from , we have That is, Therefore, we have Noting that , and is Lipschitz continuous, we deduce from above Since is maximal monotone, we have and hence Therefore, Again from (15), we have Hence, Therefore, We apply Lemma 4 to the last inequality to deduce that . This completes the proof.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to thank the referees for the useful comments and suggestions. This study was supported by research funds from Dong-A University.

#### References

1. H. H. Bauschke and J. M. Borwein, “On projection algorithms for solving convex feasibility problems,” SIAM Review, vol. 38, no. 3, pp. 367–426, 1996.
2. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
3. Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, “A unified approach for inversion problems in intensity modulated radiation therapy,” Physics in Medicine and Biology, vol. 51, pp. 2353–2365, 2006.
4. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
5. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
6. C. Byrne, Y. Censor, A. Gibali, and S. Reich, “Weak and strong convergence of algorithms for the split common null point problem,” Journal of Nonlinear and Convex Analysis, vol. 13, no. 4, pp. 759–775, 2012.
7. L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, “An extragradient method for solving split feasibility and fixed point problems,” Computers & Mathematics with Applications, vol. 64, no. 4, pp. 633–642, 2012.
8. Y. Dang and Y. Gao, “The strong convergence of a KM-CQ-like algorithm for a split feasibility problem,” Inverse Problems, vol. 27, no. 1, Article ID 015007, 9 pages, 2011.
9. A. Moudafi, “A relaxed alternating CQ-algorithm for convex feasibility problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 79, pp. 117–121, 2013.
10. A. Moudafi, “Alternating CQ-algorithm for convex feasibility and split fixed-point problems,” Journal of Nonlinear and Convex Analysis. In press.
11. M. A. Noor and K. I. Noor, “Some new classes of quasi split feasibility problems,” Applied Mathematics & Information Sciences, vol. 7, no. 4, pp. 1547–1552, 2013.
12. B. Qu and N. Xiu, “A note on the CQ algorithm for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1655–1665, 2005.
13. H.-K. Xu, “A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem,” Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
14. H.-K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 17 pages, 2010.
15. J. Zhao and Q. Yang, “Several solution methods for the split feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
16. Y. Yao, T. H. Kim, S. Chebbi, and H. K. Xu, “A modified extragradient method for the split feasibility and fixed point problems,” Journal of Nonlinear and Convex Analysis, vol. 13, pp. 383–396, 2012.
17. Y. Yao, W. Jigang, and Y.-C. Liou, “Regularized methods for the split feasibility problem,” Abstract and Applied Analysis, vol. 2012, Article ID 140679, 13 pages, 2012.
18. R. T. Rockafellar, “On the maximality of sums of nonlinear monotone operators,” Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
19. W. Takahashi and M. Toyoda, “Weak convergence theorems for nonexpansive mappings and monotone mappings,” Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 417–428, 2003.
20. T. Suzuki, “Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces,” Fixed Point Theory and Applications, vol. 2005, no. 1, pp. 103–123, 2005.
21. H.-K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.