A Regularized Algorithm for the Proximal Split Feasibility Problem
We study the proximal split feasibility problem. A regularized method is presented for solving it, and a strong convergence theorem is established.
Throughout, we assume that $H_1$ and $H_2$ are two real Hilbert spaces, $f: H_1\to\mathbb{R}\cup\{+\infty\}$ and $g: H_2\to\mathbb{R}\cup\{+\infty\}$ are two proper, lower semicontinuous convex functions, and $A: H_1\to H_2$ is a bounded linear operator.
In the present paper, we are devoted to solving the following minimization problem:
$$\min_{x\in H_1}\; f(x)+g_\lambda(Ax), \tag{1}$$
where $g_\lambda$ stands for the Moreau-Yosida approximation of the function $g$ of parameter $\lambda$; that is,
$$g_\lambda(y)=\min_{u\in H_2}\Big\{g(u)+\frac{1}{2\lambda}\|u-y\|^2\Big\}. \tag{2}$$
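To make the Moreau-Yosida approximation concrete, here is a small numerical sketch (our own illustration, not from the paper): for $g=|\cdot|$ the envelope has the well-known Huber closed form, and we check it against brute-force minimization of the defining problem over a fine grid.

```python
import numpy as np

# Illustrative 1-D example: for g(u) = |u|, the Moreau-Yosida approximation
# g_lambda is the Huber function. We check the definition
#   g_lambda(y) = min_u { g(u) + |u - y|^2 / (2 lambda) }
# by brute-force minimization over a fine grid.

def moreau_envelope(g, y, lam, grid):
    return np.min(g(grid) + (grid - y) ** 2 / (2.0 * lam))

def huber(y, lam):
    # closed form of the envelope of |.|
    return y * y / (2.0 * lam) if abs(y) <= lam else abs(y) - lam / 2.0

grid = np.linspace(-10.0, 10.0, 200001)
for y in (-3.0, 0.4, 2.5):
    assert abs(moreau_envelope(np.abs, y, 1.0, grid) - huber(y, 1.0)) < 1e-6
```

Note how the envelope is finite and smooth even though $g$ itself is nonsmooth; this is exactly what makes problem (1) tractable by gradient-type methods.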
Problem (1) includes the split feasibility problem as a special case. In fact, we may choose $f$ and $g$ as the indicator functions of two nonempty closed convex sets $C\subset H_1$ and $Q\subset H_2$; that is,
$$f(x)=i_C(x)=\begin{cases}0,& x\in C,\\ +\infty,&\text{otherwise},\end{cases}\qquad g(y)=i_Q(y)=\begin{cases}0,& y\in Q,\\ +\infty,&\text{otherwise}.\end{cases} \tag{3}$$
Then, problem (1) reduces to
$$\min_{x\in C}\;(i_Q)_\lambda(Ax), \tag{4}$$
which equals
$$\min_{x\in C}\;\frac{1}{2\lambda}\|(I-P_Q)Ax\|^2. \tag{5}$$
Now we know that solving (5) is exactly to solve the following split feasibility problem of finding $x^*$ such that
$$x^*\in C,\qquad Ax^*\in Q, \tag{6}$$
provided $C\cap A^{-1}(Q)\neq\emptyset$.
The split feasibility problem in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrieval and in medical image reconstruction. Recently, the split feasibility problem (6) has been studied extensively by many authors; see, for instance, [2–8].
In order to solve (6), one of the key ideas is to use a fixed point technique, according to which $x^*$ solves (6) if and only if
$$x^*=P_C\big(I-\gamma A^*(I-P_Q)A\big)x^*,\qquad \gamma>0. \tag{7}$$
Next, we will use this idea to solve (1). First, by the differentiability of the Yosida approximation $g_\lambda$, we have
$$\nabla g_\lambda(y)=\frac{y-\operatorname{prox}_{\lambda g}(y)}{\lambda}\in\partial g\big(\operatorname{prox}_{\lambda g}(y)\big), \tag{8}$$
where $\partial g$ denotes the subdifferential of $g$ at $\operatorname{prox}_{\lambda g}(y)$ and $\operatorname{prox}_{\lambda g}$ is the proximal mapping of $g$. That is,
$$\operatorname{prox}_{\lambda g}(y)=\arg\min_{u\in H_2}\Big\{g(u)+\frac{1}{2\lambda}\|u-y\|^2\Big\}. \tag{9}$$
Note that the optimality condition of (1) is as follows:
$$0\in\partial f(x^*)+A^*\nabla g_\lambda(Ax^*), \tag{10}$$
which can be rewritten as
$$0\in\lambda\,\partial f(x^*)+A^*\big(I-\operatorname{prox}_{\lambda g}\big)(Ax^*), \tag{11}$$
which is equivalent to the fixed point equation
$$x^*=\operatorname{prox}_{\lambda\mu f}\big(x^*-\mu A^*(I-\operatorname{prox}_{\lambda g})Ax^*\big),\qquad \mu>0. \tag{12}$$
If $\arg\min f\cap A^{-1}(\arg\min g)\neq\emptyset$, then (1) is reduced to the following proximal split feasibility problem of finding $x^*$ such that
$$x^*\in\arg\min f\qquad\text{and}\qquad Ax^*\in\arg\min g, \tag{13}$$
where
$$\arg\min f=\{x\in H_1: f(x)\le f(u)\ \text{for all } u\in H_1\}. \tag{14}$$
In the sequel, we will use $\Gamma$ to denote the solution set of (13).
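The gradient identity for the Yosida approximation can be checked numerically; a minimal sketch for $g=|\cdot|$ (an assumption chosen for illustration, since then the envelope is the Huber function and the proximal mapping is soft-thresholding), comparing the identity against a central finite difference:

```python
import math

# Numerical check (our own illustration) of the identity
#   grad g_lambda(y) = (y - prox_{lambda g}(y)) / lambda
# for g = |.|, whose Moreau-Yosida approximation is the Huber function.
lam = 0.7

def prox(y):
    # prox_{lam g} for g = |.|: soft-thresholding
    return math.copysign(max(abs(y) - lam, 0.0), y)

def g_lam(y):
    # Huber function: the Moreau-Yosida approximation of |.|
    return y * y / (2.0 * lam) if abs(y) <= lam else abs(y) - lam / 2.0

for y in (-2.0, -0.3, 0.5, 1.8):
    fd = (g_lam(y + 1e-6) - g_lam(y - 1e-6)) / 2e-6   # central difference
    assert abs(fd - (y - prox(y)) / lam) < 1e-5
```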
Recently, in order to solve (13), Moudafi and Thakur [9] presented the following split proximal algorithm with a way of selecting the stepsizes such that its implementation does not need any prior information about the operator norm.
Split Proximal Algorithm
Step 1 (initialization). Select $x_1\in H_1$ arbitrarily.
Step 2. Assume that $x_n$ has been constructed and $\theta(x_n)\neq0$, where
$$\theta(x_n):=\sqrt{\|\nabla h(x_n)\|^2+\|\nabla l(x_n)\|^2},\quad h(x)=\tfrac12\|(I-\operatorname{prox}_{\lambda g})Ax\|^2,\quad l(x)=\tfrac12\|(I-\operatorname{prox}_{\lambda f})x\|^2. \tag{15}$$
Then compute $x_{n+1}$ via the manner
$$x_{n+1}=\operatorname{prox}_{\lambda\mu_n f}\big(x_n-\mu_n A^*(I-\operatorname{prox}_{\lambda g})Ax_n\big), \tag{16}$$
where the stepsize $\mu_n:=\rho_n\,\dfrac{h(x_n)+l(x_n)}{\theta^2(x_n)}$, in which $0<\rho_n<4$.
If $\theta(x_n)=0$, then $x_n$ is a solution of (13) and the iterative process stops; otherwise, we set $n:=n+1$ and go to (16).
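As a sanity check, the steps above can be run on a toy scalar instance (our own illustration, not from the paper): $f$ and $g$ are the indicator functions of $C=[1,2]$ and $Q=[3,5]$ and $A$ is multiplication by $2$, so both proximal mappings are projections (clipping) and the solution set of (13) is $[1.5,2]$.

```python
# Toy instance of the split proximal algorithm with self-adaptive stepsize.
def clip(t, lo, hi):
    return min(max(t, lo), hi)

A, rho = 2.0, 1.0
x = 0.0
for n in range(200):
    r = A * x - clip(A * x, 3.0, 5.0)      # (I - P_Q)(Ax)
    s = x - clip(x, 1.0, 2.0)              # (I - P_C)(x)
    h, l = 0.5 * r * r, 0.5 * s * s
    grad_h = A * r                         # grad h(x) = A^*(I - P_Q)Ax
    theta2 = grad_h ** 2 + s ** 2          # theta^2 = ||grad h||^2 + ||grad l||^2
    if theta2 == 0.0:                      # theta(x_n) = 0: x_n is a solution
        break
    mu = rho * (h + l) / theta2            # self-adaptive stepsize
    x = clip(x - mu * grad_h, 1.0, 2.0)    # x_{n+1} = P_C(x_n - mu grad h(x_n))
assert 1.5 - 1e-9 <= x <= 2.0 + 1e-9       # lands in the solution set [1.5, 2]
```

No knowledge of $\|A\|$ is used anywhere in the loop, which is the point of the stepsize rule.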
Consequently, they demonstrated the following weak convergence result for the above split proximal algorithm.
Theorem 1. Suppose that $\Gamma\neq\emptyset$. Assume that the parameters $\rho_n$ satisfy the condition
$$\varepsilon\le\rho_n\le\frac{4h(x_n)}{h(x_n)+l(x_n)}-\varepsilon\quad\text{for some }\varepsilon>0. \tag{17}$$
Then the sequence $\{x_n\}$ generated by (16) weakly converges to a solution of (13).
Note that the proximal mapping $\operatorname{prox}_{\lambda g}$ of $g$ is firmly nonexpansive, namely,
$$\big\langle \operatorname{prox}_{\lambda g}(x)-\operatorname{prox}_{\lambda g}(y),\,x-y\big\rangle\ge\big\|\operatorname{prox}_{\lambda g}(x)-\operatorname{prox}_{\lambda g}(y)\big\|^2,\qquad x,y\in H_2, \tag{18}$$
and it is also the case for its complement $I-\operatorname{prox}_{\lambda g}$. Thus, $A^*(I-\operatorname{prox}_{\lambda g})A$ is cocoercive with coefficient $1/\|A\|^2$ (recall that a mapping $T: H\to H$ is said to be $\alpha$-cocoercive if $\langle Tx-Ty,\,x-y\rangle\ge\alpha\|Tx-Ty\|^2$ for all $x,y\in H$ and some $\alpha>0$). If $\mu\in(0,2/\|A\|^2)$, then $I-\mu A^*(I-\operatorname{prox}_{\lambda g})A$ is nonexpansive. Hence, only weak convergence can be expected, and we need to regularize (16) so that strong convergence is obtained. This is the main purpose of this paper. In the next section, we will collect some useful lemmas, and in the last section we will present our algorithm and prove its strong convergence.
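Both properties above can be verified numerically; a sketch with $g=\lambda\|\cdot\|_1$ (so the proximal mapping is soft-thresholding) and a random matrix $A$, both assumptions chosen purely for illustration:

```python
import numpy as np

# Numerical check (our own illustration) that T = I - prox_{lam g} is firmly
# nonexpansive for g = lam * ||.||_1 and that B = A^T T(A .) is cocoercive
# with coefficient 1 / ||A||^2.
rng = np.random.default_rng(0)
lam = 0.7
A = rng.standard_normal((3, 4))
opnorm2 = np.linalg.norm(A, 2) ** 2        # ||A||^2 (squared spectral norm)

def prox_l1(x, t):
    # componentwise soft-thresholding = prox of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def T(y):
    return y - prox_l1(y, lam)             # I - prox_{lam g}

def B(x):
    return A.T @ T(A @ x)                  # A^*(I - prox_{lam g})A

for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    u, v = A @ x, A @ y
    # firm nonexpansiveness: <Tu - Tv, u - v> >= ||Tu - Tv||^2
    assert np.dot(T(u) - T(v), u - v) >= np.linalg.norm(T(u) - T(v)) ** 2 - 1e-10
    # cocoercivity: <Bx - By, x - y> >= ||Bx - By||^2 / ||A||^2
    assert np.dot(B(x) - B(y), x - y) >= np.linalg.norm(B(x) - B(y)) ** 2 / opnorm2 - 1e-10
```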
Lemma 2 (see [10]). Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the following relation:
$$a_{n+1}\le(1-\gamma_n)a_n+\gamma_n\sigma_n+\delta_n,\qquad n\ge0, \tag{19}$$
where (i) $\{\gamma_n\}\subset[0,1]$ and $\sum_{n=0}^\infty\gamma_n=\infty$; (ii) $\limsup_{n\to\infty}\sigma_n\le0$; (iii) $\delta_n\ge0$ and $\sum_{n=0}^\infty\delta_n<\infty$. Then, $\lim_{n\to\infty}a_n=0$.
Lemma 3 (see [11]). Let $\{\gamma_n\}$ be a sequence of real numbers such that there exists a subsequence $\{\gamma_{n_j}\}$ of $\{\gamma_n\}$ such that $\gamma_{n_j}<\gamma_{n_j+1}$ for all $j$. Then, there exists a nondecreasing sequence $\{m_k\}$ of $\mathbb{N}$ such that $m_k\to\infty$ and the following properties are satisfied by all (sufficiently large) numbers $k$:
$$\gamma_{m_k}\le\gamma_{m_k+1},\qquad \gamma_k\le\gamma_{m_k+1}. \tag{20}$$
In fact, $m_k$ is the largest number $i$ in the set $\{1,\dots,k\}$ such that the condition $\gamma_i<\gamma_{i+1}$ holds.
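The index sequence of Lemma 3 is easy to compute explicitly; a small sketch (sample data our own, indices 0-based) that builds $m_k=\max\{i\le k:\gamma_i<\gamma_{i+1}\}$ and checks both claimed properties:

```python
def mainge_indices(gamma):
    # m_k = max{ i <= k : gamma_i < gamma_{i+1} } (0-based), defined for every
    # k at or beyond the first index where gamma increases
    out, m = {}, None
    for k in range(len(gamma) - 1):
        if gamma[k] < gamma[k + 1]:
            m = k
        if m is not None:
            out[k] = m
    return out

gamma = [3.0, 1.0, 2.0, 0.0, 5.0, 4.0, 6.0, 2.0, 7.0]
mk = mainge_indices(gamma)
for k, m in mk.items():
    assert gamma[m] <= gamma[m + 1]    # gamma_{m_k} <= gamma_{m_k + 1}
    assert gamma[k] <= gamma[m + 1]    # gamma_k   <= gamma_{m_k + 1}
```

The lemma only guarantees the second property for sufficiently large $k$; for this sample it happens to hold at every defined index.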
3. Main Results
Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $f: H_1\to\mathbb{R}\cup\{+\infty\}$ and $g: H_2\to\mathbb{R}\cup\{+\infty\}$ be two proper, lower semicontinuous convex functions and $A: H_1\to H_2$ a bounded linear operator.
Now, we first introduce our algorithm.
Step 1 (initialization). Select $x_1\in H_1$ arbitrarily.
Step 2. Assume that $x_n$ has been constructed. Set
$$h(x_n)=\tfrac12\|(I-\operatorname{prox}_{\lambda g})Ax_n\|^2,\quad l(x_n)=\tfrac12\|(I-\operatorname{prox}_{\lambda f})x_n\|^2,\quad \theta(x_n)=\sqrt{\|\nabla h(x_n)\|^2+\|\nabla l(x_n)\|^2} \tag{21}$$
for all $n\ge1$.
If $\theta(x_n)\neq0$, then compute $x_{n+1}$ via the manner
$$x_{n+1}=\operatorname{prox}_{\lambda\mu_n f}\big(\alpha_n u+(1-\alpha_n)(x_n-\mu_n A^*(I-\operatorname{prox}_{\lambda g})Ax_n)\big), \tag{22}$$
where $u\in H_1$ is a fixed point, $\{\alpha_n\}\subset(0,1)$ is a real number sequence, and $\mu_n$ is the stepsize satisfying $\mu_n=\rho_n\,\dfrac{h(x_n)+l(x_n)}{\theta^2(x_n)}$ with $0<\rho_n<4$.
If $\theta(x_n)=0$, then $x_n$ is a solution of (13) and the iterative process stops; otherwise, we set $n:=n+1$ and go to (22).
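A toy run of the regularized scheme (our own illustration; we assume the update $x_{n+1}=\operatorname{prox}_{\lambda\mu_n f}(\alpha_n u+(1-\alpha_n)(x_n-\mu_n\nabla h(x_n)))$ as in (22)): here $f(x)=|x-1|$ and $g(y)=|y-2|$ with $A$ the multiplication by $2$, so both proximal mappings are shifted soft-thresholdings, $\arg\min f=\{1\}$, $\arg\min g=\{2\}$, $A(1)=2$, and hence $\Gamma=\{1\}$.

```python
import math

def soft(t, tau):
    # soft-thresholding: prox of tau * |.|
    return math.copysign(max(abs(t) - tau, 0.0), t)

lam, A, u, rho = 1.0, 2.0, 3.0, 1.0
x = 0.0
for n in range(2000):
    r = (A * x - 2.0) - soft(A * x - 2.0, lam)   # (I - prox_{lam g})(Ax)
    s = (x - 1.0) - soft(x - 1.0, lam)           # (I - prox_{lam f})(x)
    h, l = 0.5 * r * r, 0.5 * s * s
    grad_h = A * r                               # A^*(I - prox_{lam g})Ax
    theta2 = grad_h ** 2 + s ** 2
    if theta2 == 0.0:                            # theta(x_n) = 0: x_n solves (13)
        break
    mu = rho * (h + l) / theta2                  # self-adaptive stepsize
    alpha = 1.0 / (n + 2.0)
    v = alpha * u + (1.0 - alpha) * (x - mu * grad_h)
    x = 1.0 + soft(v - 1.0, lam * mu)            # prox_{lam mu_n f}(v)
assert abs(x - 1.0) < 1e-6                       # converges to Gamma = {1}
```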
Theorem 5. Suppose that $\Gamma\neq\emptyset$. Assume that the parameters $\{\alpha_n\}$ and $\{\rho_n\}$ satisfy the conditions: (C1) $\lim_{n\to\infty}\alpha_n=0$; (C2) $\sum_{n=1}^\infty\alpha_n=\infty$; (C3) $\varepsilon\le\rho_n\le\dfrac{4h(x_n)}{h(x_n)+l(x_n)}-\varepsilon$ for some $\varepsilon>0$ small enough. Then the sequence $\{x_n\}$ generated by (22) converges strongly to $z=\operatorname{proj}_\Gamma(u)$.
Proof. Let $z\in\Gamma$. Since minimizers of any function are exactly fixed points of its proximal mappings, we have $z=\operatorname{prox}_{\lambda\mu_n f}(z)$ and $Az=\operatorname{prox}_{\lambda g}(Az)$. By (22) and the nonexpansivity of $\operatorname{prox}_{\lambda\mu_n f}$, we derive
$$\|x_{n+1}-z\|\le\big\|\alpha_n(u-z)+(1-\alpha_n)\big(x_n-\mu_n\nabla h(x_n)-z\big)\big\|\le\alpha_n\|u-z\|+(1-\alpha_n)\|x_n-\mu_n\nabla h(x_n)-z\|. \tag{23}$$
Since $\operatorname{prox}_{\lambda g}$ is firmly nonexpansive, we deduce that $I-\operatorname{prox}_{\lambda g}$ is also firmly nonexpansive. Hence, we have
$$\langle\nabla h(x_n),\,x_n-z\rangle=\big\langle(I-\operatorname{prox}_{\lambda g})Ax_n-(I-\operatorname{prox}_{\lambda g})Az,\,Ax_n-Az\big\rangle\ge\|(I-\operatorname{prox}_{\lambda g})Ax_n\|^2=2h(x_n). \tag{24}$$
Note that $\|\nabla h(x_n)\|^2\le\theta^2(x_n)$ and $\mu_n=\rho_n(h(x_n)+l(x_n))/\theta^2(x_n)$. From (24), we obtain
$$\|x_n-\mu_n\nabla h(x_n)-z\|^2\le\|x_n-z\|^2-4\mu_n h(x_n)+\mu_n^2\|\nabla h(x_n)\|^2\le\|x_n-z\|^2-\rho_n\frac{h(x_n)+l(x_n)}{\theta^2(x_n)}\Big(4h(x_n)-\rho_n\big(h(x_n)+l(x_n)\big)\Big). \tag{25}$$
By condition (C3), without loss of generality, we can assume that $4h(x_n)-\rho_n(h(x_n)+l(x_n))\ge\varepsilon(h(x_n)+l(x_n))$ for all $n$. Thus, from (23) and (25), we obtain
$$\|x_{n+1}-z\|\le\alpha_n\|u-z\|+(1-\alpha_n)\|x_n-z\|\le\max\{\|u-z\|,\|x_n-z\|\}\le\cdots\le\max\{\|u-z\|,\|x_1-z\|\}. \tag{26}$$
Hence, $\{x_n\}$ is bounded.
Let $z=\operatorname{proj}_\Gamma(u)$ and set $v_n=\alpha_n u+(1-\alpha_n)(x_n-\mu_n\nabla h(x_n))$, so that $x_{n+1}=\operatorname{prox}_{\lambda\mu_n f}(v_n)$. From (22) and (25), we deduce
$$\|x_{n+1}-z\|^2\le\|v_n-z\|^2\le(1-\alpha_n)\|x_n-z\|^2-(1-\alpha_n)\rho_n\varepsilon\,\frac{\big(h(x_n)+l(x_n)\big)^2}{\theta^2(x_n)}+2\alpha_n\langle u-z,\,v_n-z\rangle. \tag{27}$$
We consider the following two cases.
Case 1. One has $\|x_{n+1}-z\|\le\|x_n-z\|$ for every $n$ large enough.
In this case, $\lim_{n\to\infty}\|x_n-z\|$ exists and is finite, and hence
$$\lim_{n\to\infty}\big(\|x_n-z\|^2-\|x_{n+1}-z\|^2\big)=0. \tag{28}$$
This together with (27) implies that
$$\lim_{n\to\infty}(1-\alpha_n)\rho_n\varepsilon\,\frac{\big(h(x_n)+l(x_n)\big)^2}{\theta^2(x_n)}=0. \tag{29}$$
Since $\alpha_n\to0$ and $\rho_n\ge\varepsilon$ (by condition (C3)), we get
$$\lim_{n\to\infty}\frac{\big(h(x_n)+l(x_n)\big)^2}{\theta^2(x_n)}=0. \tag{30}$$
Noting that $\{\theta(x_n)\}$ is bounded, we deduce immediately that
$$\lim_{n\to\infty}\big(h(x_n)+l(x_n)\big)=0. \tag{31}$$
Therefore,
$$\lim_{n\to\infty}\|(I-\operatorname{prox}_{\lambda g})Ax_n\|=0,\qquad\lim_{n\to\infty}\|(I-\operatorname{prox}_{\lambda f})x_n\|=0. \tag{32}$$
Next, we prove
$$\limsup_{n\to\infty}\langle u-z,\,x_n-z\rangle\le0. \tag{33}$$
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ satisfying $x_{n_j}\rightharpoonup x^*$ and
$$\limsup_{n\to\infty}\langle u-z,\,x_n-z\rangle=\lim_{j\to\infty}\langle u-z,\,x_{n_j}-z\rangle=\langle u-z,\,x^*-z\rangle. \tag{34}$$
By the lower semicontinuity of $h$, we get
$$0\le h(x^*)\le\liminf_{j\to\infty}h(x_{n_j})=0. \tag{35}$$
So,
$$h(x^*)=\tfrac12\|(I-\operatorname{prox}_{\lambda g})Ax^*\|^2=0. \tag{36}$$
That is, $Ax^*$ is a fixed point of the proximal mapping of $g$ or, equivalently, $0\in\partial g(Ax^*)$. In other words, $Ax^*$ is a minimizer of $g$.
Similarly, from the lower semicontinuity of $l$, we get
$$0\le l(x^*)\le\liminf_{j\to\infty}l(x_{n_j})=0. \tag{37}$$
Therefore,
$$l(x^*)=\tfrac12\|(I-\operatorname{prox}_{\lambda f})x^*\|^2=0. \tag{38}$$
That is, $x^*$ is a fixed point of the proximal mapping of $f$ or, equivalently, $0\in\partial f(x^*)$. In other words, $x^*$ is a minimizer of $f$. Hence, $x^*\in\Gamma$. Therefore,
$$\limsup_{n\to\infty}\langle u-z,\,x_n-z\rangle=\langle u-z,\,x^*-z\rangle\le0. \tag{39}$$
From (22), we have
$$\|x_{n+1}-z\|^2\le(1-\alpha_n)\|x_n-z\|^2+2\alpha_n\langle u-z,\,v_n-z\rangle. \tag{40}$$
Since $\nabla h$ is Lipschitz continuous with Lipschitz constant $\|A\|^2$ and $\operatorname{prox}_{\lambda f}$ is nonexpansive, $\{\nabla h(x_n)\}$, $\{\nabla l(x_n)\}$, and $\{\theta(x_n)\}$ are bounded. Note that $\|v_n-x_n\|\le\alpha_n\|u-x_n\|+\mu_n\|\nabla h(x_n)\|$ and $\mu_n\|\nabla h(x_n)\|\le\rho_n\big(h(x_n)+l(x_n)\big)/\theta(x_n)\to0$ by (30); thus $\limsup_{n\to\infty}\langle u-z,\,v_n-z\rangle\le0$ by (39). From Lemma 2, (39), and (40) we deduce that $x_n\to z$.
Case 2. There exists a subsequence $\{\|x_{n_j}-z\|\}$ of $\{\|x_n-z\|\}$ such that $\|x_{n_j}-z\|<\|x_{n_j+1}-z\|$ for all $j$. By Lemma 3, there exists a strictly increasing sequence $\{m_k\}$ of positive integers such that $m_k\to\infty$ and the following properties are satisfied by all numbers $k$:
$$\|x_{m_k}-z\|\le\|x_{m_k+1}-z\|,\qquad\|x_k-z\|\le\|x_{m_k+1}-z\|. \tag{41}$$
Consequently, by (27),
$$(1-\alpha_{m_k})\rho_{m_k}\varepsilon\,\frac{\big(h(x_{m_k})+l(x_{m_k})\big)^2}{\theta^2(x_{m_k})}\le\|x_{m_k}-z\|^2-\|x_{m_k+1}-z\|^2+2\alpha_{m_k}\langle u-z,\,v_{m_k}-z\rangle\le2\alpha_{m_k}\langle u-z,\,v_{m_k}-z\rangle\to0. \tag{42}$$
Hence,
$$\lim_{k\to\infty}\big(h(x_{m_k})+l(x_{m_k})\big)=0. \tag{43}$$
By a similar argument as that of Case 1, we can prove that
$$\limsup_{k\to\infty}\langle u-z,\,x_{m_k}-z\rangle\le0, \tag{44}$$
where $z=\operatorname{proj}_\Gamma(u)$.
In particular, from (40) we get
$$\|x_{m_k+1}-z\|^2\le(1-\alpha_{m_k})\|x_{m_k}-z\|^2+2\alpha_{m_k}\langle u-z,\,v_{m_k}-z\rangle. \tag{45}$$
Then, since $\|x_{m_k}-z\|\le\|x_{m_k+1}-z\|$,
$$\alpha_{m_k}\|x_{m_k}-z\|^2\le2\alpha_{m_k}\langle u-z,\,v_{m_k}-z\rangle,\quad\text{that is,}\quad\|x_{m_k}-z\|^2\le2\langle u-z,\,v_{m_k}-z\rangle. \tag{46}$$
Thus, from (44) and (46), we conclude that
$$\lim_{k\to\infty}\|x_{m_k}-z\|=0, \tag{47}$$
and consequently, by (45),
$$\lim_{k\to\infty}\|x_{m_k+1}-z\|=0. \tag{48}$$
Therefore, by (41), $\lim_{k\to\infty}\|x_k-z\|=0$; that is, $x_n\to z$. This completes the proof.
Remark 6. Note that problem (13) was considered, for example, in [12, 13]; however, the iterative methods proposed there to solve it need to know a priori the norm of the bounded linear operator $A$.
Remark 7. We would also like to emphasize that by taking $f=i_C$ and $g=i_Q$, the indicator functions of two nonempty closed convex sets $C$ and $Q$ of $H_1$ and $H_2$, respectively, our algorithm (22) reduces to
$$x_{n+1}=P_C\big(\alpha_n u+(1-\alpha_n)(x_n-\mu_n A^*(I-P_Q)Ax_n)\big). \tag{49}$$
We observe that (49) is simpler than the one in .
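The reduced scheme can be run on the same toy scalar data as before (our own illustration): $C=[1,2]$, $Q=[3,5]$, $A$ multiplication by $2$, anchor $u=5$, so the solution set is $[1.5,2]$ and the projection of $u$ onto it is $2$.

```python
# Toy run of the regularized CQ-type scheme with self-adaptive stepsize.
def clip(t, lo, hi):
    return min(max(t, lo), hi)

A, u, rho = 2.0, 5.0, 1.0
x = 0.0
for n in range(1000):
    r = A * x - clip(A * x, 3.0, 5.0)       # (I - P_Q)(Ax)
    s = x - clip(x, 1.0, 2.0)               # (I - P_C)(x)
    h, l = 0.5 * r * r, 0.5 * s * s
    grad_h = A * r
    theta2 = grad_h ** 2 + s ** 2
    if theta2 == 0.0:                       # x_n solves the split feasibility problem
        break
    mu = rho * (h + l) / theta2             # self-adaptive stepsize
    alpha = 1.0 / (n + 2.0)
    x = clip(alpha * u + (1.0 - alpha) * (x - mu * grad_h), 1.0, 2.0)
assert abs(x - 2.0) < 1e-9                  # stops at 2 = proj_{[1.5,2]}(5)
```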
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors are grateful to the referees for their valuable comments and suggestions. Sun Young Cho was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (KRF-2013053358). Li-Jun Zhu was supported in part by NNSF of China (61362033 and NZ13087).
References
[1] Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
[2] C. Byrne, "Iterative oblique projection onto convex sets and the split feasibility problem," Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
[3] H.-K. Xu, "Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces," Inverse Problems, vol. 26, no. 10, Article ID 105018, 17 pages, 2010.
[4] C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
[5] J. Zhao and Q. Yang, "Several solution methods for the split feasibility problem," Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
[6] Y. Dang and Y. Gao, "The strong convergence of a KM-CQ-like algorithm for a split feasibility problem," Inverse Problems, vol. 27, no. 1, Article ID 015007, 9 pages, 2011.
[7] F. Wang and H. Xu, "Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem," Journal of Inequalities and Applications, vol. 2010, Article ID 102085, 13 pages, 2010.
[8] Y. Yao, W. Jigang, and Y. Liou, "Regularized methods for the split feasibility problem," Abstract and Applied Analysis, vol. 2012, Article ID 140679, 13 pages, 2012.
[9] A. Moudafi and B. S. Thakur, "Solving proximal split feasibility problems without prior knowledge of operator norms," Optimization Letters, 2013.
[10] H. K. Xu, "Iterative algorithms for nonlinear operators," Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
[11] P. E. Maingé, "Strong convergence of projected subgradient methods for non-smooth and non-strictly convex minimization," Set-Valued Analysis, vol. 16, no. 7-8, pp. 899–912, 2008.
[12] A. Moudafi, "Split monotone variational inclusions," Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 275–283, 2011.
[13] C. Byrne, Y. Censor, A. Gibali, and S. Reich, "Weak and strong convergence of algorithms for the split common null point problem," Journal of Nonlinear and Convex Analysis, vol. 13, pp. 759–775, 2012.
[14] Y. Yu, "An explicit method for the split feasibility problem with self-adaptive step sizes," Abstract and Applied Analysis, vol. 2012, Article ID 432501, 9 pages, 2012.