Abstract

We study the proximal split feasibility problem and present a regularized method for solving it. A strong convergence theorem is established.

1. Introduction

Throughout, we assume that $H_{1}$ and $H_{2}$ are two real Hilbert spaces, $f\colon H_{1}\to\mathbb{R}\cup\{+\infty\}$ and $g\colon H_{2}\to\mathbb{R}\cup\{+\infty\}$ are two proper, lower semicontinuous convex functions, and $A\colon H_{1}\to H_{2}$ is a bounded linear operator.

In the present paper, we are devoted to solving the following minimization problem:
$$\min_{x\in H_{1}}\ \{f(x)+g_{\lambda}(Ax)\},\qquad(1)$$
where $g_{\lambda}$ stands for the Moreau-Yosida approximation of the function $g$ of parameter $\lambda>0$; that is,
$$g_{\lambda}(y)=\min_{u\in H_{2}}\Big\{g(u)+\frac{1}{2\lambda}\|u-y\|^{2}\Big\}.\qquad(2)$$
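
For intuition, the short Python sketch below evaluates a Moreau-Yosida approximation through the associated proximal mapping. The choice $g(u)=\|u\|_{1}$, whose proximal mapping is soft-thresholding, and the numerical data are illustrative assumptions only.

import numpy as np

def prox_l1(y, lam):
    # Proximal mapping of g(u) = ||u||_1 with parameter lam (soft-thresholding).
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def moreau_envelope_l1(y, lam):
    # g_lam(y) = min_u { g(u) + ||u - y||^2 / (2*lam) }, evaluated at the minimizer u = prox_{lam g}(y).
    u = prox_l1(y, lam)
    return np.sum(np.abs(u)) + np.sum((u - y) ** 2) / (2.0 * lam)

y = np.array([1.5, -0.2, 0.7])
lam = 0.5
print(prox_l1(y, lam))              # minimizer realizing the envelope value
print(moreau_envelope_l1(y, lam))   # smooth approximation of ||y||_1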

Problem (1) includes the split feasibility problem as a special case. In fact, we choose $f$ and $g$ as the indicator functions of two nonempty closed convex sets $C\subset H_{1}$ and $Q\subset H_{2}$; that is,
$$f(x)=\delta_{C}(x)=\begin{cases}0,&x\in C,\\+\infty,&x\notin C,\end{cases}\qquad g(y)=\delta_{Q}(y)=\begin{cases}0,&y\in Q,\\+\infty,&y\notin Q.\end{cases}\qquad(3)$$
Then, problem (1) reduces to
$$\min_{x\in C}\ (\delta_{Q})_{\lambda}(Ax),\qquad(4)$$
which equals
$$\min_{x\in C}\ \frac{1}{2\lambda}\|(I-P_{Q})Ax\|^{2},\qquad(5)$$
where $P_{Q}$ denotes the metric projection onto $Q$. Now we know that solving (5) is exactly to solve the following split feasibility problem of finding $x^{*}\in C$ such that
$$Ax^{*}\in Q,\qquad(6)$$
provided $C\cap A^{-1}(Q)\neq\emptyset$.

The split feasibility problem in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrieval and in medical image reconstruction. Recently, the split feasibility problem (6) has been studied extensively by many authors; see, for instance, [2-8].

In order to solve (6), one of the key ideas is to use a fixed point technique, according to which $x^{*}$ solves (6) if and only if
$$x^{*}=P_{C}\big(x^{*}-\gamma A^{*}(I-P_{Q})Ax^{*}\big),\quad\gamma>0,\qquad(7)$$
where $P_{C}$ denotes the metric projection onto $C$. Next, we will use this idea to solve (1). First, by the differentiability of the Yosida approximation $g_{\lambda}$, we have
$$\nabla g_{\lambda}(x)=\frac{1}{\lambda}\big(I-(I+\lambda\partial g)^{-1}\big)(x)=\frac{1}{\lambda}\big(I-\operatorname{prox}_{\lambda g}\big)(x),\qquad(8)$$
where $\partial g$ denotes the subdifferential of $g$ and $\operatorname{prox}_{\lambda g}=(I+\lambda\partial g)^{-1}$ is the proximal mapping of $g$. That is,
$$\operatorname{prox}_{\lambda g}(y)=\arg\min_{u\in H_{2}}\Big\{g(u)+\frac{1}{2\lambda}\|u-y\|^{2}\Big\}.\qquad(9)$$
Note that the optimality condition of (1) is as follows:
$$0\in\partial f(x)+A^{*}\nabla g_{\lambda}(Ax),\qquad(10)$$
which can be rewritten as
$$0\in\lambda\mu\,\partial f(x)+\mu A^{*}(I-\operatorname{prox}_{\lambda g})(Ax),\quad\mu>0,\qquad(11)$$
which is equivalent to the fixed point equation
$$x=\operatorname{prox}_{\lambda\mu f}\big(x-\mu A^{*}(I-\operatorname{prox}_{\lambda g})(Ax)\big).\qquad(12)$$
If $\arg\min f\cap A^{-1}(\arg\min g)\neq\emptyset$, then (1) is reduced to the following proximal split feasibility problem of finding $x^{*}\in H_{1}$ such that
$$x^{*}\in\arg\min f\quad\text{and}\quad Ax^{*}\in\arg\min g,\qquad(13)$$
where
$$\arg\min f=\{x\in H_{1}:f(x)\le f(u)\ \text{for all }u\in H_{1}\},\qquad(14)$$
$$\arg\min g=\{y\in H_{2}:g(y)\le g(v)\ \text{for all }v\in H_{2}\}.\qquad(15)$$
In the sequel, we will use $\Gamma$ to denote the solution set of (13).
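
To make the fixed point equation (12) concrete, the following sketch forms the operator $x\mapsto\operatorname{prox}_{\lambda\mu f}(x-\mu A^{*}(I-\operatorname{prox}_{\lambda g})(Ax))$ for the assumed choice $f=g=\|\cdot\|_{1}$ (so that $x^{*}=0$ solves (13)) and checks numerically that this solution is left fixed; the operator, data, and parameter values are assumptions for illustration, not part of the paper.

import numpy as np

prox_l1 = lambda y, t: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)  # prox of t*||.||_1

def T(x, A, lam, mu):
    # Fixed point operator of (12): x |-> prox_{lam*mu*f}( x - mu * A^T (I - prox_{lam*g})(A x) )
    # for the illustrative choice f = g = ||.||_1 (an assumption, not from the paper).
    r = A @ x - prox_l1(A @ x, lam)          # (I - prox_{lam*g})(A x)
    return prox_l1(x - mu * A.T @ r, lam * mu)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
lam = 1.0
mu = 1.0 / np.linalg.norm(A, 2) ** 2

x_star = np.zeros(3)                          # minimizes f, and A x_star = 0 minimizes g
print(np.linalg.norm(T(x_star, A, lam, mu) - x_star))  # 0.0: x_star satisfies (12)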

Recently, in order to solve (13), Moudafi and Thakur [9] presented the following split proximal algorithm with a way of selecting the stepsizes such that its implementation does not need any prior information about the operator norm.

Split Proximal Algorithm

Step 1 (initialization). Choose $x_{0}\in H_{1}$ arbitrarily.

Step 2. Assume that $x_{n}$ has been constructed and $\theta(x_{n})\neq 0$. Then compute $x_{n+1}$ via the manner
$$x_{n+1}=\operatorname{prox}_{\lambda\mu_{n}f}\big(x_{n}-\mu_{n}A^{*}(I-\operatorname{prox}_{\lambda g})(Ax_{n})\big),\qquad(16)$$
where the stepsize
$$\mu_{n}=\rho_{n}\,\frac{h(x_{n})+l(x_{n})}{\theta^{2}(x_{n})},\qquad(17)$$
in which $h(x_{n})=\frac{1}{2}\|(I-\operatorname{prox}_{\lambda g})Ax_{n}\|^{2}$, $l(x_{n})=\frac{1}{2}\|(I-\operatorname{prox}_{\lambda\mu_{n}f})x_{n}\|^{2}$, and $\theta(x_{n})=\sqrt{\|\nabla h(x_{n})\|^{2}+\|\nabla l(x_{n})\|^{2}}$, with $0<\rho_{n}<4$.

If $\theta(x_{n})=0$, then $x_{n}$ is a solution of (13) and the iterative process stops; otherwise, we set $n:=n+1$ and go to (16).
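
For readers who wish to experiment, the following Python sketch of the split proximal algorithm (16)-(17) uses the self-adaptive stepsize above. The proximal mappings (shifted soft-thresholding), the constant choice $\rho_{n}\equiv 1$, and the computation of $l$ and $\theta$ with $\operatorname{prox}_{\lambda f}$ instead of $\operatorname{prox}_{\lambda\mu_{n}f}$ are simplifying assumptions, so this is a sketch of the scheme rather than a verbatim implementation.

import numpy as np

def soft(y, t):
    # Soft-thresholding: proximal mapping of t*||.||_1.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def split_proximal(A, prox_f, prox_g, x0, lam=1.0, rho=1.0, tol=1e-12, iters=500):
    # Self-adaptive split proximal iteration in the spirit of (16)-(17):
    #   x_{n+1} = prox_{lam*mu_n*f}( x_n - mu_n * A^T (I - prox_{lam*g})(A x_n) ),
    #   mu_n = rho * (h(x_n) + l(x_n)) / theta(x_n)^2.
    # Simplifying assumption: l and theta are computed with prox_{lam*f}.
    x = x0.copy()
    for _ in range(iters):
        rg = A @ x - prox_g(A @ x, lam)        # (I - prox_{lam*g})(A x)
        rf = x - prox_f(x, lam)                # (I - prox_{lam*f})(x)
        h, l = 0.5 * rg @ rg, 0.5 * rf @ rf
        d = A.T @ rg                           # A^T (I - prox_{lam*g})(A x), used in the stepsize
        theta2 = d @ d + rf @ rf               # theta(x_n)^2
        if theta2 < tol:                       # theta(x_n) = 0: stop, x_n solves (13)
            break
        mu = rho * (h + l) / theta2
        x = prox_f(x - mu * d, lam * mu)
    return x

# Illustrative data (assumptions): f = ||. - a||_1 and g = ||. - A a||_1, so x* = a solves (13).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
a = np.array([1.0, -2.0, 0.5])
prox_f = lambda y, t: a + soft(y - a, t)
prox_g = lambda y, t: A @ a + soft(y - A @ a, t)

x = split_proximal(A, prox_f, prox_g, x0=np.zeros(3))
print(np.linalg.norm(x - a))   # distance to the solution; expected to be small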

Consequently, they established the following weak convergence result for the above split proximal algorithm.

Theorem 1. Suppose that $\Gamma\neq\emptyset$. Assume that the parameters $\rho_{n}$ satisfy the condition
$$\epsilon\le\rho_{n}\le\frac{4h(x_{n})}{h(x_{n})+l(x_{n})}-\epsilon\quad\text{for some }\epsilon>0.\qquad(18)$$
Then the sequence $\{x_{n}\}$ generated by the split proximal algorithm weakly converges to a solution of (13).

Note that the proximal mapping $\operatorname{prox}_{\lambda g}$ of $g$ is firmly nonexpansive, namely,
$$\langle\operatorname{prox}_{\lambda g}(x)-\operatorname{prox}_{\lambda g}(y),x-y\rangle\ge\|\operatorname{prox}_{\lambda g}(x)-\operatorname{prox}_{\lambda g}(y)\|^{2},\qquad(19)$$
and it is also the case for its complement $I-\operatorname{prox}_{\lambda g}$. Thus, $A^{*}(I-\operatorname{prox}_{\lambda g})A$ is cocoercive with coefficient $1/\|A\|^{2}$ (recall that a mapping $T\colon H_{1}\to H_{1}$ is said to be $\beta$-cocoercive if $\langle Tx-Ty,x-y\rangle\ge\beta\|Tx-Ty\|^{2}$ for all $x,y\in H_{1}$ and some $\beta>0$). If $\mu\in(0,2/\|A\|^{2})$, then $I-\mu A^{*}(I-\operatorname{prox}_{\lambda g})A$ is nonexpansive. Hence, we need to regularize (16) so that strong convergence can be obtained. This is the main purpose of this paper. In the next section, we will collect some useful lemmas, and in the last section we will present our algorithm and prove its strong convergence.
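
The firm nonexpansiveness of the proximal mapping and the $1/\|A\|^{2}$-cocoercivity of $A^{*}(I-\operatorname{prox}_{\lambda g})A$ can be verified numerically; the sketch below does so for the assumed example $g=\|\cdot\|_{1}$ with random data.

import numpy as np

soft = lambda y, t: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)    # prox of t*||.||_1

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
lam = 0.7
B = lambda x: A.T @ (A @ x - soft(A @ x, lam))   # B = A^*(I - prox_{lam*g})A with g = ||.||_1

for _ in range(5):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    px, py = soft(x, lam), soft(y, lam)
    # Firm nonexpansiveness of the proximal mapping: <Px - Py, x - y> >= ||Px - Py||^2.
    assert np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12
    # Cocoercivity of B with coefficient 1/||A||^2: <Bx - By, x - y> >= ||Bx - By||^2 / ||A||^2.
    d = B(x) - B(y)
    assert np.dot(d, x - y) >= np.dot(d, d) / np.linalg.norm(A, 2) ** 2 - 1e-12
print("firm nonexpansiveness and cocoercivity checks passed")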

2. Lemmas

Lemma 2 (see [10]). Let $\{a_{n}\}$ be a sequence of nonnegative real numbers satisfying the following relation:
$$a_{n+1}\le(1-\gamma_{n})a_{n}+\gamma_{n}\delta_{n}+\sigma_{n},\quad n\ge 0,\qquad(20)$$
where (i) $\{\gamma_{n}\}\subset[0,1]$ and $\sum_{n=0}^{\infty}\gamma_{n}=\infty$; (ii) $\limsup_{n\to\infty}\delta_{n}\le 0$; (iii) $\sigma_{n}\ge 0$ and $\sum_{n=0}^{\infty}\sigma_{n}<\infty$. Then, $\lim_{n\to\infty}a_{n}=0$.

Lemma 3 (see [11]). Let $\{\Gamma_{n}\}$ be a sequence of real numbers such that there exists a subsequence $\{\Gamma_{n_{j}}\}$ of $\{\Gamma_{n}\}$ such that $\Gamma_{n_{j}}<\Gamma_{n_{j}+1}$ for all $j\ge 0$. Then, there exists a nondecreasing sequence $\{\tau(n)\}$ of positive integers such that $\lim_{n\to\infty}\tau(n)=\infty$ and the following properties are satisfied by all (sufficiently large) numbers $n$:
$$\Gamma_{\tau(n)}\le\Gamma_{\tau(n)+1},\qquad\Gamma_{n}\le\Gamma_{\tau(n)+1}.\qquad(21)$$
In fact, $\tau(n)$ is the largest number $k$ in the set $\{1,\dots,n\}$ such that the condition $\Gamma_{k}<\Gamma_{k+1}$ holds.

3. Main Results

Let $H_{1}$ and $H_{2}$ be two real Hilbert spaces. Let $f\colon H_{1}\to\mathbb{R}\cup\{+\infty\}$ and $g\colon H_{2}\to\mathbb{R}\cup\{+\infty\}$ be two proper, lower semicontinuous convex functions and $A\colon H_{1}\to H_{2}$ a bounded linear operator.

We first introduce our algorithm.

Algorithm 4

Step 1 (initialization). Choose $x_{0}\in H_{1}$ arbitrarily.

Step 2. Assume that $x_{n}$ has been constructed. Set $h(x_{n})=\frac{1}{2}\|(I-\operatorname{prox}_{\lambda g})Ax_{n}\|^{2}$, $l(x_{n})=\frac{1}{2}\|(I-\operatorname{prox}_{\lambda\mu_{n}f})x_{n}\|^{2}$, and $\theta(x_{n})=\sqrt{\|\nabla h(x_{n})\|^{2}+\|\nabla l(x_{n})\|^{2}}$ for all $n\ge 0$.
If $\theta(x_{n})\neq 0$, then compute $x_{n+1}$ via the manner
$$x_{n+1}=\operatorname{prox}_{\lambda\mu_{n}f}\Big(\alpha_{n}u+(1-\alpha_{n})\big(x_{n}-\mu_{n}A^{*}(I-\operatorname{prox}_{\lambda g})(Ax_{n})\big)\Big),\qquad(22)$$
where $u\in H_{1}$ is a fixed point, $\{\alpha_{n}\}\subset(0,1)$ is a real number sequence, and $\mu_{n}$ is the stepsize satisfying $\mu_{n}=\rho_{n}\frac{h(x_{n})+l(x_{n})}{\theta^{2}(x_{n})}$ with $0<\rho_{n}<4$.

If $\theta(x_{n})=0$, then $x_{n}$ is a solution of (13) and the iterative process stops; otherwise, we set $n:=n+1$ and go to (22).
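
A minimal Python sketch of the regularized iteration (22) follows, under the same illustrative assumptions as before (shifted soft-thresholding proximal mappings, constant $\rho_{n}$, and $l$ and $\theta$ computed with $\operatorname{prox}_{\lambda f}$) and with the anchoring weights $\alpha_{n}=1/(n+2)$, which satisfy (C1) and (C2); it is intended only to show how the anchor $u$ enters the update.

import numpy as np

soft = lambda y, t: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)   # prox of t*||.||_1

def regularized_split_proximal(A, prox_f, prox_g, x0, u, lam=1.0, rho=1.0, iters=2000):
    # Sketch of iteration (22):
    #   x_{n+1} = prox_{lam*mu_n*f}( alpha_n*u + (1-alpha_n)*(x_n - mu_n*A^T (I - prox_{lam*g})(A x_n)) ),
    # with the self-adaptive stepsize mu_n = rho*(h+l)/theta^2 and alpha_n = 1/(n+2).
    # Simplifying assumption: l and theta are computed with prox_{lam*f}.
    x = x0.copy()
    for n in range(iters):
        rg = A @ x - prox_g(A @ x, lam)
        rf = x - prox_f(x, lam)
        h, l = 0.5 * rg @ rg, 0.5 * rf @ rf
        d = A.T @ rg
        theta2 = d @ d + rf @ rf
        if theta2 < 1e-12:                     # theta(x_n) = 0: x_n already solves (13)
            break
        mu = rho * (h + l) / theta2
        alpha = 1.0 / (n + 2)                  # satisfies (C1) and (C2)
        x = prox_f(alpha * u + (1 - alpha) * (x - mu * d), lam * mu)
    return x

# Same illustrative data as in Section 1 (assumption): the unique solution is x* = a.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
a = np.array([1.0, -2.0, 0.5])
prox_f = lambda y, t: a + soft(y - a, t)
prox_g = lambda y, t: A @ a + soft(y - A @ a, t)

x = regularized_split_proximal(A, prox_f, prox_g, x0=np.zeros(3), u=np.ones(3))
print(np.linalg.norm(x - a))   # expected to be small (strong convergence toward P_Gamma(u) = a)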

Theorem 5. Suppose that $\Gamma\neq\emptyset$. Assume that the parameters $\{\alpha_{n}\}$ and $\{\rho_{n}\}$ satisfy the conditions: (C1) $\lim_{n\to\infty}\alpha_{n}=0$; (C2) $\sum_{n=0}^{\infty}\alpha_{n}=\infty$; (C3) $\epsilon\le\rho_{n}\le\frac{4h(x_{n})}{h(x_{n})+l(x_{n})}-\epsilon$ for some $\epsilon>0$ small enough. Then the sequence $\{x_{n}\}$ generated by (22) converges strongly to $z=P_{\Gamma}(u)$.

Proof. Let $z=P_{\Gamma}(u)$. Since minimizers of any function are exactly fixed points of its proximal mappings, we have $z=\operatorname{prox}_{\lambda\mu_{n}f}(z)$ and $Az=\operatorname{prox}_{\lambda g}(Az)$. By (22) and the nonexpansivity of $\operatorname{prox}_{\lambda\mu_{n}f}$, we derive Since $\operatorname{prox}_{\lambda g}$ is firmly nonexpansive, we deduce that $I-\operatorname{prox}_{\lambda g}$ is also firmly nonexpansive. Hence, we have Note that and . From (24), we obtain By condition (C3), without loss of generality, we can assume that for all $n$. Thus, from (23) and (25), we obtain Hence, $\{x_{n}\}$ is bounded.
Let . From (26), we deduce We consider the following two cases.
Case 1. One has $\|x_{n+1}-z\|\le\|x_{n}-z\|$ for every $n$ large enough.
In this case, $\lim_{n\to\infty}\|x_{n}-z\|$ exists and is finite, and hence This together with (27) implies that Since (by condition (C3)), we get Noting that $\{x_{n}\}$ is bounded, we deduce immediately that Therefore, Next, we prove Since $\{x_{n}\}$ is bounded, there exists a subsequence $\{x_{n_{j}}\}$ of $\{x_{n}\}$ satisfying $x_{n_{j}}\rightharpoonup\tilde{x}$ and By the lower semicontinuity of , we get So, That is, is a fixed point of the proximal mapping of or equivalently . In other words, is a minimizer of .
Similarly, from the lower semicontinuity of , we get Therefore, That is, is a fixed point of the proximal mapping of or equivalently . In other words, is a minimizer of . Hence, $\tilde{x}\in\Gamma$. Therefore, From (22), we have Since is Lipschitz continuous with Lipschitz constant and is nonexpansive, , , and are bounded. Note that . Thus, by (32). From Lemma 2, (39), and (40), we deduce that $x_{n}\to z$.
Case 2. There exists a subsequence $\{\|x_{n_{j}}-z\|\}$ of $\{\|x_{n}-z\|\}$ such that $\|x_{n_{j}}-z\|<\|x_{n_{j}+1}-z\|$ for all $j\ge 0$. By Lemma 3, there exists a strictly increasing sequence of positive integers $\{\tau(n)\}$ such that $\lim_{n\to\infty}\tau(n)=\infty$ and the following properties are satisfied by all numbers $n$: Consequently, Hence, By a similar argument as that of Case 1, we can prove that where $z=P_{\Gamma}(u)$.
In particular, we get Then, Thus, from (42) and (44), we conclude that Therefore, $x_{n}\to z$. This completes the proof.

Remark 6. Note that problem (13) was considered, for example, in [12, 13]; however, the iterative methods proposed there to solve it need to know a priori the norm of the bounded linear operator $A$.

Remark 7. We would also like to emphasize that, by taking $f=\delta_{C}$ and $g=\delta_{Q}$, the indicator functions of two nonempty closed convex sets $C$ and $Q$ of $H_{1}$ and $H_{2}$, respectively, our algorithm (22) reduces to
$$x_{n+1}=P_{C}\Big(\alpha_{n}u+(1-\alpha_{n})\big(x_{n}-\mu_{n}A^{*}(I-P_{Q})(Ax_{n})\big)\Big).\qquad(49)$$
We observe that (49) is simpler than the one in [14].
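
Below is a sketch of the special case (49) with $C$ a box and $Q$ a closed ball, both assumed purely for illustration; the projections $P_{C}$ and $P_{Q}$ then have closed forms, and the residuals $\|(I-P_{Q})Ax_{n}\|$ and $\|(I-P_{C})x_{n}\|$ are expected to vanish along the iterates.

import numpy as np

def sfp_regularized(A, P_C, P_Q, x0, u, rho=1.0, iters=2000):
    # Special case (49) of (22), where f and g are the indicator functions of C and Q:
    #   x_{n+1} = P_C( alpha_n*u + (1-alpha_n)*(x_n - mu_n*A^T (I - P_Q)(A x_n)) ).
    x = x0.copy()
    for n in range(iters):
        rq = A @ x - P_Q(A @ x)                # (I - P_Q)(A x)
        rc = x - P_C(x)                        # (I - P_C)(x); zero once x lies in C
        h, l = 0.5 * rq @ rq, 0.5 * rc @ rc
        d = A.T @ rq
        theta2 = d @ d + rc @ rc
        if theta2 < 1e-14:
            break
        mu = rho * (h + l) / theta2
        alpha = 1.0 / (n + 2)
        x = P_C(alpha * u + (1 - alpha) * (x - mu * d))
    return x

# Illustrative sets (assumptions): C = [0,1]^3 and Q = a closed ball around A @ c0, so c0 in C solves (6).
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))
c0 = np.array([0.2, 0.8, 0.5])
P_C = lambda x: np.clip(x, 0.0, 1.0)
center, radius = A @ c0, 0.1
P_Q = lambda y: center + (y - center) * min(1.0, radius / max(np.linalg.norm(y - center), 1e-16))

x = sfp_regularized(A, P_C, P_Q, x0=np.zeros(3), u=np.full(3, 0.5))
print(np.linalg.norm(A @ x - P_Q(A @ x)))      # residual for Q; expected to approach 0
print(np.linalg.norm(x - P_C(x)))              # residual for C; zero by construction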

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are grateful to the referees for their valuable comments and suggestions. Sun Young Cho was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (KRF-2013053358). Li-Jun Zhu was supported in part by NNSF of China (61362033 and NZ13087).