Abstract

The split feasibility problem has received much attention due to its various applications in signal processing and image reconstruction. In this paper, building on previous experience with incorporating inertial techniques into such algorithms, we propose two inertial relaxed CQ algorithms for solving the split feasibility problem in real Hilbert spaces. These algorithms involve metric projections onto half-spaces, and we construct a new variable step size that has an explicit form and does not require prior knowledge of the norm of the bounded linear operator. Furthermore, we establish weak and strong convergence of the proposed algorithms under certain mild conditions and present a numerical experiment to illustrate their performance.

1. Introduction

The split feasibility problem (SFP) in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] in 1994 for modeling inverse problems that arise in phase retrieval and in medical image reconstruction [2]. The split feasibility problem can also be used to model intensity-modulated radiation therapy [3].

Let $H_1$ and $H_2$ be two real Hilbert spaces with the inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\|\cdot\|$. Let $C$ and $Q$ be nonempty closed and convex subsets of $H_1$ and $H_2$, respectively, and let $A: H_1 \to H_2$ be a bounded linear operator. The split feasibility problem is formulated as follows: find a point $x^* \in C$ satisfying
$$Ax^* \in Q. \quad (1)$$

The solution set of the problem (1) is denoted by $\Gamma$; that is,
$$\Gamma = \{x^* \in C : Ax^* \in Q\}. \quad (2)$$

A very successful method for solving the SFP is the CQ algorithm of Byrne [4], which generates a sequence $\{x_n\}$ by the iterative procedure: for any initial guess $x_0 \in H_1$,
$$x_{n+1} = P_C\big(x_n - \tau_n A^*(I - P_Q)Ax_n\big), \quad (3)$$
where $P_C$ and $P_Q$ are the metric projections onto $C$ and $Q$, respectively, $A^*$ is the adjoint operator of the linear operator $A$, and the step size $\tau_n$ is chosen in the open interval $(0, 2/\|A\|^2)$. This step size selection depends on the operator norm $\|A\|$ (or the largest eigenvalue of $A^*A$), which is not a simple quantity to compute.
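
For illustration, the iteration (3) is straightforward to implement once the two projections are available. The following minimal Python/NumPy sketch of one CQ step assumes finite-dimensional spaces and user-supplied projection routines proj_C and proj_Q (all names here are ours, chosen for illustration):

import numpy as np

def cq_step(x, A, proj_C, proj_Q, tau):
    # One iteration of the CQ algorithm (3):
    # x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n).
    Ax = A @ x
    grad = A.T @ (Ax - proj_Q(Ax))  # = A^*(I - P_Q)A x in matrix form
    return proj_C(x - tau * grad)

Here tau must lie in $(0, 2/\|A\|^2)$, which is exactly the requirement the self-adaptive rules discussed below are designed to remove.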

The CQ algorithm (3) for solving the problem (1) can be derived from an optimization perspective. If we introduce the convex objective function
$$f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2, \quad (4)$$
then the algorithm (3) comes immediately as a special case of the gradient-projection algorithm, since the convex objective function $f$ is differentiable and has a Lipschitz-continuous gradient given by
$$\nabla f(x) = A^*(I - P_Q)Ax. \quad (5)$$

To overcome this computational difficulty, many authors have constructed variable step sizes that do not require the norm $\|A\|$; see, for example, [5–12]. In particular, López et al. [7] introduced a new choice of the variable step size sequence as follows:
$$\tau_n = \frac{\rho_n f(x_n)}{\|\nabla f(x_n)\|^2}, \quad (6)$$
where $\{\rho_n\}$ is a sequence of positive real numbers bounded below by zero and above by four, that is, $\{\rho_n\} \subset (0, 4)$. The advantage of the choice (6) of step size is that it requires neither prior information about the norm $\|A\|$ nor any other conditions on $Q$ and $A$.
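
In code, the rule (6) costs only the quantities already computed in a CQ step. A sketch, with the common convention (an assumption on our part) that the step size is set to zero when the gradient vanishes, since the current point then already minimizes $f$:

import numpy as np

def adaptive_step_size(x, A, proj_Q, rho):
    # Step size (6): tau = rho * f(x) / ||grad f(x)||^2 with rho in (0, 4);
    # no knowledge of ||A|| is required.
    Ax = A @ x
    r = Ax - proj_Q(Ax)            # residual (I - P_Q)Ax
    g = A.T @ r                    # grad f(x)
    denom = np.dot(g, g)
    if denom == 0.0:               # grad f(x) = 0: x already minimizes f
        return 0.0
    return rho * (0.5 * np.dot(r, r)) / denom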

Now let us consider the case when $C$ and $Q$ are level sets of convex functions, where $C$ and $Q$ are, respectively, given by
$$C = \{x \in H_1 : c(x) \le 0\}, \qquad Q = \{y \in H_2 : q(y) \le 0\}, \quad (7)$$
where $c: H_1 \to \mathbb{R}$ and $q: H_2 \to \mathbb{R}$ are two lower semicontinuous convex functions, and the subdifferentials $\partial c$ and $\partial q$ are bounded operators. In this setting, the associated projections $P_C$ and $P_Q$ do not have closed-form expressions, and the drawback of the CQ algorithm is that its iterative process cannot be carried out exactly. In order to keep it computable, Yang [13] replaced these two level sets by relaxed sets, defined as follows:
$$C_n = \{x \in H_1 : c(x_n) + \langle \xi_n, x - x_n \rangle \le 0\}, \quad (8)$$
with $\xi_n \in \partial c(x_n)$, and
$$Q_n = \{y \in H_2 : q(Ax_n) + \langle \eta_n, y - Ax_n \rangle \le 0\}, \quad (9)$$
with $\eta_n \in \partial q(Ax_n)$.

It is easy to see that $C_n$ and $Q_n$ are both half-spaces, and the projections $P_{C_n}$ and $P_{Q_n}$ have closed-form expressions. In what follows, for each $n$, define
$$f_n(x) = \frac{1}{2}\|(I - P_{Q_n})Ax\|^2, \qquad \nabla f_n(x) = A^*(I - P_{Q_n})Ax. \quad (10)$$
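
Concretely, projecting onto the half-space (8) amounts to one inner product and, if the constraint is violated, one correction along $\xi_n$. A minimal sketch (assuming $\xi_n \ne 0$; if $\xi_n = 0$, then $c(x_n) \le 0$ and $C_n$ is the whole space):

import numpy as np

def proj_halfspace(x, x_ref, xi, c_val):
    # Projection onto {u : c_val + <xi, u - x_ref> <= 0} with xi != 0,
    # i.e. the half-space C_n of (8) built at x_ref = x_n, c_val = c(x_n).
    viol = c_val + np.dot(xi, x - x_ref)
    if viol <= 0.0:
        return x                   # x already lies in the half-space
    return x - (viol / np.dot(xi, xi)) * xi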

Since these projections are easy to calculate, the relaxed CQ algorithm is very practical.

Afterwards, the inertial technique was developed by Alvarez and Attouch [14] in order to improve the performance of proximal point algorithms. Dang et al. [15] proposed an inertial relaxed CQ algorithm for solving the SFP in a real Hilbert space, which is generated as follows: for any $x_0, x_1 \in H_1$,
$$\begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ x_{n+1} = P_{C_n}\big(w_n - \tau \nabla f_n(w_n)\big), \end{cases} \quad (11)$$
where $\alpha_n \in [0, \alpha)$ with $\alpha \in [0, 1)$, $\tau \in (0, 2/\|A\|^2)$, and
$$C_n = \{x \in H_1 : c(w_n) + \langle \xi_n, x - w_n \rangle \le 0\}, \quad (12)$$
with $\xi_n \in \partial c(w_n)$, and
$$Q_n = \{y \in H_2 : q(Aw_n) + \langle \eta_n, y - Aw_n \rangle \le 0\}, \quad (13)$$
with $\eta_n \in \partial q(Aw_n)$. The algorithm converges weakly to a point of the solution set of the SFP, but the step size $\tau$ still depends on the norm $\|A\|$. Since the calculation of the operator norm is rather complicated, Gibali et al. [16] changed the step size of (11) to
$$\tau_n = \frac{\rho_n f_n(w_n)}{\|\nabla f_n(w_n)\|^2}, \quad (14)$$
where $\{\rho_n\} \subset (0, 4)$.
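
In implementation terms, the inertial scheme (11) adds a single extrapolation line in front of the relaxed CQ step. A sketch with our own naming, where proj_Cn and proj_Qn are the projections onto the current relaxed sets:

import numpy as np

def inertial_relaxed_step(x_n, x_prev, alpha_n, A, proj_Cn, proj_Qn, tau):
    # Scheme (11): w_n = x_n + alpha_n (x_n - x_{n-1}),
    #              x_{n+1} = P_{C_n}(w_n - tau * grad f_n(w_n)).
    w = x_n + alpha_n * (x_n - x_prev)   # inertial extrapolation
    Aw = A @ w
    g = A.T @ (Aw - proj_Qn(Aw))         # grad f_n(w)
    return proj_Cn(w - tau * g)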

If $\sum_{n=1}^{\infty} \alpha_n \|x_n - x_{n-1}\|^2 < \infty$, then the sequence $\{x_n\}$ generated by (11) with step size (14) converges weakly to a point of the solution set of the SFP. For recent results on inertial algorithms, see [17–24].

On the other hand, the CQ algorithm is the gradient-projection method applied to a variational inequality problem. In [25], Xu established weak convergence in the setting of Hilbert spaces. Wang and Xu [26] proposed the following algorithm:
$$x_{n+1} = P_C\big((1 - \beta_n)\big(x_n - \tau A^*(I - P_Q)Ax_n\big)\big), \quad (16)$$
where $\{\beta_n\} \subset (0, 1)$. Under some conditions, it is proved that the sequence generated by the algorithm (16) converges strongly to the minimum-norm solution of the SFP. Motivated and inspired by the work of [7, 27–29], the authors of [30] introduced a self-adaptive CQ-type algorithm for finding a solution of the SFP in the setting of infinite-dimensional real Hilbert spaces; the advantage of this algorithm lies in the fact that the step sizes are dynamically chosen and do not depend on the operator norm. This algorithm can be formulated as follows:
$$x_{n+1} = P_{C_n}\big((1 - \beta_n)\big(x_n - \tau_n \nabla f_n(x_n)\big)\big), \quad (17)$$
where $\tau_n = \rho_n f_n(x_n)/\|\nabla f_n(x_n)\|^2$ with $\{\rho_n\} \subset (0, 4)$. It is also proved that the sequence generated by the algorithm (17) converges strongly to the minimum-norm solution of the SFP under some conditions.

Inspired by the works mentioned above, in this paper we propose a new relaxed CQ algorithm for solving the SFP in a real Hilbert space by using the inertial technique. The new step size proposed in this algorithm is independent of the operator norm, and we establish a weak convergence theorem for the proposed algorithm under some mild conditions (cf. [31]). We also add an inertial term to the algorithm in [30] to construct a new iterative process, so that the new algorithm converges strongly to a point in the solution set under some conditions.

The remainder of the paper is organized as follows. Some useful definitions and results needed for the convergence analysis of the proposed algorithms are collected in Section 2. In Section 3, new inertial algorithms with weak and strong convergence for solving the SFP are proposed, followed by the convergence analysis. In Section 4, we provide a numerical experiment to illustrate the performance of the proposed algorithms. Finally, we end the paper with some conclusions.

2. Preliminaries

Let $H$ be a Hilbert space and let $C$ be a nonempty closed convex subset of $H$. The strong (respectively, weak) convergence of a sequence $\{x_n\}$ to $x$ is denoted by $x_n \to x$ (respectively, $x_n \rightharpoonup x$). For any sequence $\{x_n\}$, $\omega_w(x_n)$ denotes the set of weak cluster points of $\{x_n\}$; that is,
$$\omega_w(x_n) = \{x \in H : x_{n_j} \rightharpoonup x \ \text{for some subsequence} \ \{x_{n_j}\} \ \text{of} \ \{x_n\}\}.$$

Definition 1. An operator $T: H \to H$ is called the following:
(i) nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H;$$
(ii) firmly nonexpansive if
$$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle, \quad \forall x, y \in H;$$
(iii) $\nu$-inverse strongly monotone ($\nu$-ism) if there is $\nu > 0$ such that
$$\langle Tx - Ty, x - y \rangle \ge \nu \|Tx - Ty\|^2, \quad \forall x, y \in H.$$
For every element $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$
Then the operator $P_C$ is called the metric projection from $H$ onto $C$.
The projection has the following well-known properties.

Lemma 1 (see [32, 33]). For all $x, y \in H$ and $z \in C$, we have:
(1) $\langle x - P_C x, z - P_C x \rangle \le 0$;
(2) $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle$;
(3) $\|P_C x - z\|^2 \le \|x - z\|^2 - \|x - P_C x\|^2$;
(4) $\|(I - P_C)x - (I - P_C)y\|^2 \le \langle (I - P_C)x - (I - P_C)y, x - y \rangle$.

Lemma 2. Let $H$ be a real Hilbert space and $x, y \in H$, $t \in [0, 1]$; then
(1) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$;
(2) $\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2$.

Definition 2 (see [34]). Let $H$ be a real Hilbert space and let $f: H \to \mathbb{R}$ be a convex function. An element $\xi \in H$ is called a subgradient of $f$ at $x$ if
$$f(y) \ge f(x) + \langle \xi, y - x \rangle, \quad \forall y \in H.$$
The collection of all the subgradients of $f$ at $x$ is called the subdifferential of the function $f$ at this point, which is denoted by $\partial f(x)$; that is,
$$\partial f(x) = \{\xi \in H : f(y) \ge f(x) + \langle \xi, y - x \rangle, \ \forall y \in H\}.$$
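
For example, for the convex function $f(x) = \|x\|_1$ on $\mathbb{R}^n$, the subdifferential at $x$ consists of all vectors whose $i$-th entry equals $\operatorname{sign}(x_i)$ when $x_i \ne 0$ and lies in $[-1, 1]$ when $x_i = 0$. The sketch below returns the particular subgradient with zero entries on the zero coordinates:

import numpy as np

def subgrad_l1(x):
    # One subgradient of f(x) = ||x||_1: sign(x_i) where x_i != 0, and the
    # admissible choice 0 from [-1, 1] where x_i = 0.
    return np.sign(x)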

Definition 3. Let $f: H \to (-\infty, +\infty]$ be a proper function.
(i) $f$ is lower semicontinuous at $x$ if $x_n \to x$ implies
$$f(x) \le \liminf_{n \to \infty} f(x_n).$$
(ii) $f$ is weakly lower semicontinuous at $x$ if $x_n \rightharpoonup x$ implies
$$f(x) \le \liminf_{n \to \infty} f(x_n).$$
(iii) $f$ is lower semicontinuous on $H$ if it is lower semicontinuous at every point $x \in H$; $f$ is weakly lower semicontinuous on $H$ if it is weakly lower semicontinuous at every point $x \in H$.
(iv) A convex function $f$ is lower semicontinuous if and only if it is weakly lower semicontinuous.

Lemma 3 (see [34]). Let $f: H \to \mathbb{R}$ be an $\alpha$-strongly convex function. Then, for all $x, y \in H$ and $\xi \in \partial f(x)$,
$$f(y) \ge f(x) + \langle \xi, y - x \rangle + \frac{\alpha}{2}\|y - x\|^2.$$

Lemma 4 (see [25]). Let $\tau > 0$ and $\Gamma \ne \emptyset$. Then the following statements are equivalent:
(1) The point $x^*$ solves the SFP.
(2) The point $x^*$ solves the fixed-point equation
$$x^* = P_C\big(x^* - \tau A^*(I - P_Q)Ax^*\big).$$
(3) The point $x^*$ solves the variational inequality problem with respect to the gradient of $f$; that is, find a point $x^* \in C$ such that
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C.$$

Lemma 5 (see [16]). Let $H$ be a real Hilbert space and let $\{x_n\}$ be a sequence in $H$ such that there exists a nonempty closed and convex subset $S$ of $H$ satisfying the following conditions:
(i) for all $z \in S$, $\lim_{n \to \infty} \|x_n - z\|$ exists;
(ii) any weak cluster point of $\{x_n\}$ belongs to $S$.
Then there exists $x^* \in S$ such that $\{x_n\}$ converges weakly to $x^*$.

Lemma 6 (see [35]). Let $\{\varphi_n\}$ and $\{\delta_n\}$ be two nonnegative real sequences satisfying the following conditions:
(1) $\varphi_{n+1} - \varphi_n \le \theta_n(\varphi_n - \varphi_{n-1}) + \delta_n$;
(2) $\sum_{n=1}^{\infty} \delta_n < \infty$;
(3) $\{\theta_n\} \subset [0, \theta]$, where $\theta \in [0, 1)$.
Then, $\{\varphi_n\}$ is a converging sequence and $\sum_{n=1}^{\infty} [\varphi_{n+1} - \varphi_n]_+ < \infty$, where $[t]_+ = \max\{t, 0\}$ for any $t \in \mathbb{R}$.

Lemma 7 (see [36, 37]). Let $\{a_n\}$ and $\{c_n\}$ be sequences of nonnegative real numbers such that
$$a_{n+1} \le (1 - \lambda_n)a_n + \lambda_n b_n + c_n, \quad n \ge 1,$$
where $\{\lambda_n\}$ is a sequence in $(0, 1)$ and $\{b_n\}$ is a real sequence. Assume $\sum_{n=1}^{\infty} c_n < \infty$. Then the following results hold:
(1) If $b_n \le M$ for some $M \ge 0$, then $\{a_n\}$ is a bounded sequence.
(2) If $\sum_{n=1}^{\infty} \lambda_n = \infty$ and $\limsup_{n \to \infty} b_n \le 0$, then $\lim_{n \to \infty} a_n = 0$.

Lemma 8 (see [38]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \lambda_n)a_n + \lambda_n b_n + c_n, \quad n \ge 1,$$
where $\{\lambda_n\}$ is a sequence in $(0, 1)$, $\{c_n\}$ is a sequence of nonnegative real numbers, and $\{b_n\}$ is a real sequence such that
(1) $\sum_{n=1}^{\infty} \lambda_n = \infty$;
(2) $\sum_{n=1}^{\infty} c_n < \infty$;
(3) $\liminf_{k \to \infty} (a_{n_k+1} - a_{n_k}) \ge 0$ implies $\limsup_{k \to \infty} b_{n_k} \le 0$ for any subsequence $\{a_{n_k}\}$ of $\{a_n\}$.
Then $\lim_{n \to \infty} a_n = 0$.

3. Convergence Analysis

In this section, we consider the SFP in which the set $C$ is given by
$$C = \{x \in H_1 : c(x) \le 0\}, \quad (32)$$
where $c: H_1 \to \mathbb{R}$ is an $\alpha$-strongly convex function; the set $Q$ is given by
$$Q = \{y \in H_2 : q(y) \le 0\}, \quad (33)$$
where $q: H_2 \to \mathbb{R}$ is a $\beta$-strongly convex function. We assume that the solution set $\Gamma$ of the SFP is nonempty and that $c$ and $q$ are lower semicontinuous convex functions; furthermore, we also assume that $\partial c$ and $\partial q$ are bounded operators (i.e., bounded on bounded sets).

Following [39], we build the sets used in our algorithms as follows: given the $n$-th iterative point $w_n$, we construct $C_n$ as
$$C_n = \left\{x \in H_1 : c(w_n) + \langle \xi_n, x - w_n \rangle + \frac{\alpha}{2}\|x - w_n\|^2 \le 0\right\}, \quad (34)$$
where $\xi_n \in \partial c(w_n)$, and $Q_n$ as
$$Q_n = \left\{y \in H_2 : q(Aw_n) + \langle \eta_n, y - Aw_n \rangle + \frac{\beta}{2}\|y - Aw_n\|^2 \le 0\right\}, \quad (35)$$
where $\eta_n \in \partial q(Aw_n)$.

If $\alpha = 0$ and $\beta = 0$, then $C_n$ and $Q_n$ reduce to the half-spaces (8) and (9), respectively. If $\alpha > 0$ and $\beta > 0$, then $C_n$ and $Q_n$ are nonempty closed balls of radius $\sqrt{\|\xi_n\|^2/\alpha^2 - 2c(w_n)/\alpha}$ centred at $w_n - \xi_n/\alpha$ and of radius $\sqrt{\|\eta_n\|^2/\beta^2 - 2q(Aw_n)/\beta}$ centred at $Aw_n - \eta_n/\beta$, respectively.
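
In the ball case, the projection is as cheap as in the half-space case. A minimal sketch (the center and radius are those derived above from (34), which is our reading of the construction):

import numpy as np

def proj_ball(x, center, radius):
    # Projection onto the closed ball B(center, radius), the form C_n in (34)
    # takes when c is alpha-strongly convex with alpha > 0.
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x
    return center + (radius / dist) * d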

In addition, for each $n$, we define the following functions:
$$f_n(x) = \frac{1}{2}\|(I - P_{Q_n})Ax\|^2, \qquad \nabla f_n(x) = A^*(I - P_{Q_n})Ax, \quad (36)$$
where $Q_n$ is given as in (35). The function $f_n$ is weakly lower semicontinuous, convex, and differentiable, and its gradient is Lipschitz continuous. Now we propose new relaxed algorithms for solving the SFP.

Next, two inertial relaxed CQ algorithms will be introduced. The weak convergence of Algorithm 1 and the strong convergence of Algorithm 2 will be proved under different step sizes.

Algorithm 1. Choose a positive sequence $\{\epsilon_n\}$ satisfying $\sum_{n=1}^{\infty} \epsilon_n < \infty$.
Let $x_0, x_1 \in H_1$ be arbitrary. Given $x_{n-1}$ and $x_n$, update the next iteration via
$$\begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ x_{n+1} = P_{C_n}\big(w_n - \tau_n \nabla f_n(w_n)\big), \end{cases} \quad (37)$$
where $0 \le \alpha_n \le \bar{\alpha}_n$, and
$$\bar{\alpha}_n = \begin{cases} \min\left\{\dfrac{\epsilon_n}{\|x_n - x_{n-1}\|^2}, \alpha\right\}, & \text{if } x_n \ne x_{n-1}, \\ \alpha, & \text{otherwise}, \end{cases} \quad (38)$$
with $\alpha \in [0, 1)$, and $C_n$ and $Q_n$ are given as in (34) and (35). The step size is
$$\tau_n = \frac{\rho_n f_n(w_n)}{\|\nabla f_n(w_n)\|^2}, \quad (39)$$
where $\{\rho_n\} \subset (0, 4)$, $\inf_n \rho_n(4 - \rho_n) > 0$.
If $x_{n+1} = w_n$, then stop; otherwise, set $n := n + 1$ and go to the next iteration.
By assuming $x \in C$, we know $c(x) \le 0$, which means, applying Lemma 3 with $\xi_n \in \partial c(w_n)$,
$$c(w_n) + \langle \xi_n, x - w_n \rangle + \frac{\alpha}{2}\|x - w_n\|^2 \le c(x) \le 0,$$
so that $x \in C_n$. From this, we get $C \subseteq C_n$; and a similar way is used to get $Q \subseteq Q_n$.
Now let us show that our proposed algorithm has a very important property: if $x_{n+1} = w_n$ for some $n$, then $w_n$ is a solution of the SFP. Indeed, $x_{n+1} = w_n$ means that $w_n = P_{C_n}(w_n - \tau_n \nabla f_n(w_n))$, which together with Lemma 4 (applied with $C_n$ and $Q_n$ in place of $C$ and $Q$) implies that $w_n \in C_n$ and $Aw_n \in Q_n$. Taking $x = w_n$ in (34), we get $c(w_n) \le 0$, that is, $w_n \in C$. It also implies, taking $y = Aw_n$ in (35), that $q(Aw_n) \le 0$; then $Aw_n \in Q$. The conclusion is tenable.
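
For concreteness, the sketch below assembles Algorithm 1 end to end under the step-size rule (39). The factories make_proj_Cn and make_proj_Qn (our names) are assumed to return the projections onto $C_n$ and $Q_n$ built at the current point $w_n$, e.g. via the half-space or ball projections above, and the stopping test checks $\nabla f_n(w_n) = 0$ as a practical surrogate for the criterion $x_{n+1} = w_n$:

import numpy as np

def algorithm1(x0, x1, A, make_proj_Cn, make_proj_Qn, eps_seq,
               alpha=0.5, rho=2.0, max_iter=1000):
    # Sketch of Algorithm 1. eps_seq(n) returns epsilon_n with
    # sum_n epsilon_n < infinity, e.g. eps_seq = lambda n: 1.0 / n**2.
    x_prev, x_n = x0, x1
    for n in range(1, max_iter + 1):
        diff2 = np.dot(x_n - x_prev, x_n - x_prev)
        alpha_n = alpha if diff2 == 0.0 else min(eps_seq(n) / diff2, alpha)
        w = x_n + alpha_n * (x_n - x_prev)       # inertial extrapolation
        proj_Cn, proj_Qn = make_proj_Cn(w), make_proj_Qn(w)
        Aw = A @ w
        r = Aw - proj_Qn(Aw)                     # (I - P_{Q_n}) A w
        g = A.T @ r                              # grad f_n(w)
        gg = np.dot(g, g)
        if gg == 0.0:                            # f_n(w) = 0: w solves the SFP
            return w
        tau = rho * (0.5 * np.dot(r, r)) / gg    # step size (39)
        x_prev, x_n = x_n, proj_Cn(w - tau * g)
    return x_n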

Lemma 9. Let $\{w_n\}$ and $\{x_n\}$ be the sequences generated by Algorithm 1. Then, for any $z \in \Gamma$, it follows that
$$\|x_{n+1} - z\|^2 \le \|w_n - z\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(w_n)}{\|\nabla f_n(w_n)\|^2}.$$

Proof. For $z \in \Gamma$, we have $z \in C \subseteq C_n$; and we have $Az \in Q \subseteq Q_n$.
It follows from Lemma 1 that
where
Hence, we have
If $\nabla f_n(w_n) \ne 0$, then , so that
If $\nabla f_n(w_n) = 0$, we have
The proof is complete.

Theorem 1. Assume that $\{\alpha_n\}$ satisfies the condition in Algorithm 1. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges weakly to a solution of the SFP.

Proof. We first show that, for any $z \in \Gamma$, the limit of $\|x_n - z\|$ exists. By applying Lemma 9, we have
From the construction of $w_n$ and Lemma 2, we have
Combining (48) and (50) immediately, we get
Denote ; from (51), we have
where
Using Lemma 6, the limit of exists, and , which implies that , . This also implies that the sequence $\{x_n\}$ is bounded, and so $\{w_n\}$ is bounded.
We next show that . Since $\{w_n\}$ is bounded, from the Lipschitz continuity of $\nabla f_n$, we get that $\{\nabla f_n(w_n)\}$ is bounded. From (48) and (50), we get
where , , and , so we have
But , so
On the other hand, since $\{x_n\}$ is bounded, the set $\omega_w(x_n)$ is nonempty. Let $\bar{x} \in \omega_w(x_n)$; then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \bar{x}$. Furthermore,
Let be a subsequence of the sequence such that
Since it is bounded, there exists a subsequence of it which converges weakly to . Without loss of generality, we can assume that $w_{n_k} \rightharpoonup \bar{x}$, and $A$ is a bounded linear operator, so $Aw_{n_k} \rightharpoonup A\bar{x}$.
From Lemma 1, we conclude that
Since and is bounded, we have . Hence, we get
Since and , from (50), we obtain
Thus,
Since $x_{n_k+1} \in C_{n_k}$, by the definition of $C_{n_k}$,
where $\xi_{n_k} \in \partial c(w_{n_k})$. From the boundedness assumption of $\partial c$ and $\{w_{n_k}\}$, we have
From the weak lower semicontinuity of the convex function $c$, it follows that
which means that $\bar{x} \in C$.
Furthermore, $Aw_{n_k} \rightharpoonup A\bar{x}$, and, by the definition of $Q_{n_k}$,
where $\eta_{n_k} \in \partial q(Aw_{n_k})$. From the boundedness assumption of $\partial q$ and $\{Aw_{n_k}\}$, we have
From the weak lower semicontinuity of the convex function $q$, it follows that
which means that $A\bar{x} \in Q$. Therefore, $\bar{x} \in \Gamma$, that is, $\omega_w(x_n) \subseteq \Gamma$, and Lemma 5 yields that $\{x_n\}$ converges weakly to a solution of the SFP. The proof is complete.

Algorithm 2. Choose a positive sequence $\{\epsilon_n\}$ satisfying $\lim_{n \to \infty} \epsilon_n / \beta_n = 0$.
Let $x_0, x_1 \in H_1$ be arbitrary. Given $x_{n-1}$ and $x_n$, update the next iteration via
$$\begin{cases} w_n = x_n + \alpha_n(x_n - x_{n-1}), \\ x_{n+1} = P_{C_n}\big((1 - \beta_n)\big(w_n - \tau_n \nabla f_n(w_n)\big)\big), \end{cases}$$
where $0 \le \alpha_n \le \bar{\alpha}_n$ with $\bar{\alpha}_n$ given as in (38), $C_n$ and $Q_n$ are given as in (34) and (35), $\tau_n$ is given as in (39), $\{\beta_n\} \subset (0, 1)$, and $\{\rho_n\} \subset (0, 4)$ with $\inf_n \rho_n(4 - \rho_n) > 0$.
If $x_{n+1} = w_n$, then stop; otherwise, set $n := n + 1$ and go to the next iteration.
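
Relative to Algorithm 1, only the update line changes: the gradient-projection point is damped by the factor $(1 - \beta_n)$ before projecting onto $C_n$, which is what drives the iterates toward the minimum-norm solution. A one-line sketch of the modified update (an assumed form, modeled on (17)):

def algorithm2_update(w, g, tau, beta_n, proj_Cn):
    # Algorithm 2's update: x_{n+1} = P_{C_n}((1 - beta_n) * (w - tau * g)),
    # with g = grad f_n(w), beta_n -> 0 and sum_n beta_n = infinity.
    return proj_Cn((1.0 - beta_n) * (w - tau * g))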

Theorem 2. Assume that $\lim_{n \to \infty} \beta_n = 0$ and $\sum_{n=1}^{\infty} \beta_n = \infty$. Then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to $x^{\dagger} = P_{\Gamma}(0)$, the minimum-norm solution of the SFP.

Proof. First, we show that, for any $z \in \Gamma$, the sequence $\{\|x_n - z\|\}$ is bounded. From the construction of $w_n$, we have
So, combining (71) and (72), we get
where . According to the hypothesis ,
Note that
which implies that the sequence is bounded. Setting
and using Lemma 7, we conclude that the sequence $\{\|x_n - z\|\}$ is bounded. This shows that the sequence $\{x_n\}$ is bounded, and so is $\{w_n\}$.
Since $\{x_n\}$ is bounded, assume that there exists a constant $M > 0$ such that $\sup_n \|x_n\| \le M$. Thus,
and we get
From (78),
Let
Then (78) reduces to the inequality
Furthermore, we know that
Let $\{x_{n_k}\}$ be a subsequence of $\{x_n\}$ and suppose that
Then, we have
which implies, by our assumption, that
Since $\{x_n\}$ is bounded, it follows that as , so we get .
We next show that $\omega_w(x_n) \subseteq \Gamma$. Since $\{x_n\}$ is bounded, the set $\omega_w(x_n)$ is nonempty. Let $\bar{x} \in \omega_w(x_n)$; then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup \bar{x}$. Then $w_{n_k} \rightharpoonup \bar{x}$, and $A$ is a bounded linear operator, so $Aw_{n_k} \rightharpoonup A\bar{x}$.
Since $x_{n_k+1} \in C_{n_k}$, we have
where $\xi_{n_k} \in \partial c(w_{n_k})$, and, by the boundedness of $\partial c$, we get
and, using the weak lower semicontinuity of $c$,
Thus, $\bar{x} \in C$.
On the other hand,
since , we have
where $\eta_{n_k} \in \partial q(Aw_{n_k})$, and, by the boundedness of $\partial q$, we get
and, using the weak lower semicontinuity of $q$,
Thus, $A\bar{x} \in Q$; then $\bar{x} \in \Gamma$, that is, $\omega_w(x_n) \subseteq \Gamma$.
Next, we have
For $z = x^{\dagger}$ and , using Lemma 1, , so
and then
and thus
From (82), (83), (98), and Lemma 8, we conclude that the sequence $\{x_n\}$ converges strongly to $x^{\dagger}$. The proof is complete.

4. Numerical Experiments

In this section, we present a numerical experiment to illustrate the performance of the proposed algorithms. Our numerical experiments are coded in MATLAB R2007 running on a personal computer with a 3.50 GHz Intel Core i3 and 4 GB RAM. In what follows, we apply our algorithms to the problem of least absolute shrinkage and selection operator (LASSO), which requires solving a convex optimization problem of the form
$$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - b\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le t,$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $t > 0$ are given elements. In our experiment, we first generate an $m \times n$ matrix $A$ randomly by a standardized normal distribution, and $x^* \in \mathbb{R}^n$ is a sparse signal with $n$ elements, only $k$ of which are nonzero, which is also generated randomly. The observation $b$ is generated as $b = Ax^*$. The parameters in this experiment are set with , and . In this situation, it is readily seen that the problem is an SFP with $C = \{x \in \mathbb{R}^n : \|x\|_1 \le t\}$ and $Q = \{b\}$, which in turn implies that $C = \{x : c(x) \le 0\}$, where $c$ is defined by $c(x) = \|x\|_1 - t$, with
$$\partial c(x) = \{\xi \in \mathbb{R}^n : \xi_i = \operatorname{sign}(x_i) \ \text{if} \ x_i \ne 0, \ \xi_i \in [-1, 1] \ \text{if} \ x_i = 0\}$$
standing for the subdifferential of $c$ at $x$. As a half-space, the associated projection onto $C_n$ takes the following form:
$$P_{C_n}(x) = \begin{cases} x - \dfrac{c(w_n) + \langle \xi_n, x - w_n \rangle}{\|\xi_n\|^2}\,\xi_n, & \text{if } c(w_n) + \langle \xi_n, x - w_n \rangle > 0, \\ x, & \text{otherwise}. \end{cases}$$
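
The setup can be reproduced with the sketch below; the sizes m, n, k and the choice of t are placeholders of ours (the paper's values are not fixed here), and the projection onto $C_n$ reuses the half-space closed form given above:

import numpy as np

rng = np.random.default_rng(0)
m, n, k = 240, 1024, 30                    # placeholder sizes, not the paper's
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)   # sparse signal with k nonzeros
b = A @ x_true                             # noiseless observation b = A x*
t = np.linalg.norm(x_true, 1)              # radius t of the l1 ball C

def proj_Q(y):
    # Q = {b} is a singleton, so the projection is constant.
    return b

def make_proj_Cn(w):
    # Half-space relaxation C_n of C = {x : ||x||_1 - t <= 0} at the point w,
    # built from the subgradient sign(w) of the l1 norm.
    xi = np.sign(w)
    c_val = np.linalg.norm(w, 1) - t
    def proj(x):
        viol = c_val + xi @ (x - w)
        return x if viol <= 0.0 else x - (viol / (xi @ xi)) * xi
    return proj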

To show the efficiency of our algorithm, we compare it with the algorithm proposed in [40]. The only difference between these two algorithms is that there are no inertial terms in the algorithm proposed in [40]. For convenience, we denote Algorithm 1 by Algo. I and the algorithm in [40] by Algo. II, respectively. In Algo. I, we set

In Algo. II, we set and is chosen the same as above. The stopping criterion is that . The initial points are and . The numerical results of these two algorithms with different choices of the sparsity number $k$ are listed in Figures 1–4. It is easy to see that Algo. I converges faster than Algo. II does, which indicates that our modified algorithm indeed accelerates the convergence of the original algorithm.

5. Conclusions

In this paper, we present two inertial relaxed CQ algorithms for solving split feasibility problems in Hilbert spaces by adopting a new variable step size. These algorithms adopt a new form of convex subsets, and it is easy to calculate the projections onto these sets. Furthermore, the step size selection in the algorithms does not depend on the operator norm. The convergence theorems are established under some mild conditions, and a numerical experiment is given to illustrate the performance of the proposed algorithms.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (no. 11771126).