Abstract

The split feasibility problem arises in many real-world fields, such as signal processing, image reconstruction, and medical care. In this paper, we present a solution algorithm, called the memory gradient projection method, for solving the split feasibility problem. It employs a parameter and two previous iterates to generate the next iterate, and its step size can be calculated directly. This not only improves the flexibility of the algorithm but also avoids computing the largest eigenvalue of the related matrix or estimating the Lipschitz constant at each iteration. Theoretical convergence results are established under suitable conditions.

1. Introduction

The split feasibility problem (SFP) was first put forward by Censor and Elfving [1]. It requires finding a point in a nonempty closed convex subset of one space such that its image under a certain operator lies in another nonempty closed convex subset of the image space. Its precise mathematical formulation is as follows: given nonempty closed convex sets $C \subseteq \mathbb{R}^n$ and $Q \subseteq \mathbb{R}^m$ in $n$- and $m$-dimensional Euclidean space, respectively, the split feasibility problem is to find a vector $x^* \in C$ for which $Ax^* \in Q$, where $A$ is a given $m \times n$ real matrix.

The SFP arises in many real-world fields, such as signal processing, image reconstruction, and medical care; for details see [13] and the references therein. For example, a number of image reconstruction problems can be formulated as split feasibility problems. The vector $x$ represents a vectorized image, with the entries of $x$ being the intensity levels at each voxel or pixel. The set $C$ can be selected to incorporate features such as nonnegativity of the entries of $x$, while the matrix $A$ can describe linear functional or projection measurements we have made, as well as other linear combinations of the entries of $x$ on which we wish to impose constraints. The set $Q$ can then be built from the vector of measured data together with other convex sets, such as nonnegative cones, that serve to describe the constraints to be imposed [4]. Here we give a discretized model of the SFP in the image reconstruction problem of X-ray tomography [5, 6]. In image reconstruction, we consider a two-dimensional cross section in which the attenuation intensities of X-rays differ across tissues. This attenuation effect can be seen as a nonnegative function, called the image. We hope to obtain information about the physiological state of the tissue by measuring these data. The fundamental model is formulated in the following way: a Cartesian grid of square picture elements, called pixels, is introduced into the region of interest so that it covers the whole picture that has to be reconstructed. The pixels are numbered in some agreed manner, say from 1 (top left corner pixel) to $n$ (bottom right corner pixel) (see Figure 1).

The X-ray attenuation function is assumed to take a constant value $x_j$ throughout the $j$th pixel, for $j = 1, 2, \dots, n$. Sources and detectors are assumed to be points, and the rays between them lines. Further, we assume that the length of the intersection of the $i$th ray with the $j$th pixel, denoted by $a_{ij}$ for all $i = 1, 2, \dots, m$ and $j = 1, 2, \dots, n$, represents the weight of the contribution of the $j$th pixel to the total attenuation along the $i$th ray. The physical measurement of the total attenuation of the $i$th ray, denoted by $b_i$, represents the line integral of the unknown attenuation function along the path of the ray. Therefore, in this discretized model, the line integral turns out to be a finite sum and the whole model is described by a system of linear equations $$\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad i = 1, 2, \dots, m.$$ In matrix notation, we write the equations above as $Ax = b$, where $b = (b_1, \dots, b_m)^T$ is the measurement vector, $x = (x_1, \dots, x_n)^T$ is the image vector, and the matrix $A = (a_{ij})_{m \times n}$ is the projection matrix. So the model is a special SFP: find $x \in C := \{x \in \mathbb{R}^n : x \ge 0\}$ such that $Ax \in Q := \{b\}$. The SFP can also serve as a unified model for many inverse problems, such as intensity-modulated radiation therapy (IMRT) [3].
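As a concrete illustration, the following small Python sketch builds a toy instance of this discretized model; the ray geometry and attenuation values are invented for illustration only and are not taken from the paper.

```python
import numpy as np

# Toy illustration of the discretized tomography model as an SFP instance.
# The 2x2 "image" (n = 4 pixels) and the ray geometry below are made-up data,
# chosen only so that A x = b has a nonnegative solution.
A = np.array([
    [1.0, 1.0, 0.0, 0.0],   # ray 1 crosses pixels 1 and 2
    [0.0, 0.0, 1.0, 1.0],   # ray 2 crosses pixels 3 and 4
    [1.0, 0.0, 1.0, 0.0],   # ray 3 crosses pixels 1 and 3
])  # a_ij = intersection length of ray i with pixel j

x_true = np.array([0.5, 1.0, 0.25, 0.75])  # unknown attenuation values
b = A @ x_true                              # measured total attenuations

# SFP form: find x in C = {x : x >= 0} such that A x lies in Q = {b}.
print(np.allclose(A @ x_true, b), np.all(x_true >= 0))  # True True
```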

Many well-known iterative algorithms have been established for the SFP (see, e.g., [1, 4, 6–18]). In [1], the authors used their multidistance idea to obtain iterative algorithms for solving the SFP. Their algorithms, as well as others obtained later, involve matrix inverses at each iteration. In [4], Byrne presented a projection method called the CQ algorithm for solving the SFP that does not involve matrix inverses, but he assumed that the metric projections onto $C$ and $Q$ are easily calculated. However, in some cases it is impossible, or requires too much work, to compute the metric projection exactly. When this happens, the efficiency of projection-type methods, including the CQ algorithm, is seriously affected. In [16], by using the relaxed projection technique, Yang presented a relaxed CQ algorithm for solving the SFP, in which two halfspaces $C_k$ and $Q_k$ are used in place of $C$ and $Q$, respectively, at the $k$th iteration, and the metric projections onto $C_k$ and $Q_k$ are easily executed. López et al. [10] introduced a new self-adaptive step size to improve the CQ and the relaxed CQ algorithms. For more effective algorithms and extensions of the SFP, the reader is referred to [12, 19, 20] and the survey papers [2, 7]. It is worth noting that all these algorithms use only the current point $x^k$ to compute the next iterate $x^{k+1}$; they do not use the previous iterates $x^{k-1}, x^{k-2}, \dots$, which limits their flexibility. Using some information from previous iterates increases the flexibility of the algorithm.
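For reference, here is a minimal Python sketch of the classical CQ iteration of [4], $x^{k+1} = P_C\big(x^k - \gamma A^T(Ax^k - P_Q(Ax^k))\big)$, with a fixed step size $\gamma \in (0, 2/\|A\|_2^2)$. The projection callables and the stopping rule are placeholders to be supplied for a concrete problem.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, max_iter=1000, tol=1e-8):
    """Classical CQ iteration: x <- P_C(x - gamma * A^T (Ax - P_Q(Ax))).

    proj_C, proj_Q: callables implementing the metric projections onto C, Q.
    The fixed step size gamma must lie in (0, 2 / ||A||_2^2).
    """
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # safely inside (0, 2/||A||_2^2)
    x = x0.astype(float)
    for _ in range(max_iter):
        Ax = A @ x
        residual = Ax - proj_Q(Ax)           # (I - P_Q) A x
        if np.linalg.norm(residual) < tol:   # Ax is (nearly) in Q
            break
        x = proj_C(x - gamma * (A.T @ residual))
    return x
```

For instance, for the tomography model above one would pass `proj_C = lambda x: np.maximum(x, 0.0)` and `proj_Q = lambda y: b`.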

In this paper, inspired by the inertial projection algorithms for the convex feasibility problem [9], we propose a memory gradient projection algorithm for solving the SFP, which employs $x^k$ and $x^{k-1}$ to generate the next point $x^{k+1}$. This can improve the convergence greatly, since the vector $x^k - x^{k-1}$ acts as an impulsion term and the parameter $\alpha_k$ acts as a speed regulator. Compared with the existing methods for solving the SFP, the algorithm presented in this paper has the following advantages. When the projections $P_C$ and $P_Q$ are not easily calculated, each iteration of this algorithm only needs to compute the projection onto a halfspace that contains the given closed convex set and is related to the current iterate, which is implemented very easily. At each iteration, the step size can be computed directly, with no need to compute the largest eigenvalue of the related matrix, estimate the Lipschitz constant, or use a line search scheme. Moreover, the algorithm employs two previous iterates to generate the next iterate and hence improves the flexibility of the algorithm.

The rest of the paper is organized as follows. In Section 2, we present some useful preliminaries. In Section 3, we introduce a memory gradient projection method for solving the SFP and prove its convergence property. Section 4 introduces the relaxed version of the memory gradient projection method proposed in Section 3 for solving the SFP and establishes its convergence property. Section 5 highlights the main conclusions of this paper.

2. Preliminaries

In this section, we review some definitions and basic results which will be used in this paper. Throughout the paper, $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ denote the usual inner product and norm in $\mathbb{R}^n$, respectively.

For a given nonempty closed convex subset $\Omega$ of $\mathbb{R}^n$, the metric projection from $\mathbb{R}^n$ onto $\Omega$ is defined by $$P_\Omega(x) := \operatorname*{arg\,min}_{z \in \Omega} \|x - z\|, \quad x \in \mathbb{R}^n.$$ It has the following well-known properties.
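For many simple sets the metric projection has a closed form. The following Python sketch collects three classical examples (box, Euclidean ball, halfspace); these formulas are standard and independent of the algorithms in this paper.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto the box {z : lo <= z <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    """Projection onto the closed Euclidean ball B(center, radius)."""
    d = x - center
    norm_d = np.linalg.norm(d)
    return x if norm_d <= radius else center + (radius / norm_d) * d

def proj_halfspace(x, a, beta):
    """Projection onto the halfspace {z : <a, z> <= beta}, with a != 0."""
    violation = a @ x - beta
    if violation <= 0.0:
        return x                     # x already lies in the halfspace
    return x - (violation / (a @ a)) * a
```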

Lemma 1 (see [17]). Let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^n$; then, for any $x, y \in \mathbb{R}^n$ and $z \in \Omega$, (1) $\langle x - P_\Omega(x), z - P_\Omega(x) \rangle \le 0$; (2) $\|P_\Omega(x) - P_\Omega(y)\|^2 \le \langle P_\Omega(x) - P_\Omega(y), x - y \rangle$; (3) $\|P_\Omega(x) - z\|^2 \le \|x - z\|^2 - \|x - P_\Omega(x)\|^2.$

Remark 2. From part (1) of Lemma 1, for any $x \in \mathbb{R}^n$ and $z \in \Omega$, we have $$\langle x - z, x - P_\Omega(x) \rangle \ge \|x - P_\Omega(x)\|^2.$$ From part (2) of Lemma 1, we know that $P_\Omega$ is a nonexpansive operator; that is, for any $x, y \in \mathbb{R}^n$, $$\|P_\Omega(x) - P_\Omega(y)\| \le \|x - y\|.$$ The lemma below will be useful for the convergence analysis of our algorithm.

Lemma 3 (see [11]). Suppose that the sequences $\{\varphi_k\} \subset [0, \infty)$ and $\{\delta_k\} \subset [0, \infty)$ satisfy (a) $\varphi_{k+1} - \varphi_k \le \alpha_k (\varphi_k - \varphi_{k-1}) + \delta_k$; (b) $\sum_{k=1}^{\infty} \delta_k < \infty$; (c) $\{\alpha_k\} \subset [0, \alpha]$, where $\alpha \in [0, 1)$. Then, $\{\varphi_k\}$ is a convergent sequence and $\sum_{k=1}^{\infty} [\varphi_{k+1} - \varphi_k]_+ < \infty$, where $[t]_+ := \max\{t, 0\}$ (for any $t \in \mathbb{R}$).

The following lemma provides some important properties of the subdifferential.

Lemma 4 (see [21]). Suppose $h : \mathbb{R}^n \to \mathbb{R}$ is a convex function; then it is subdifferentiable everywhere and its subdifferentials are uniformly bounded on any bounded subset of $\mathbb{R}^n$.

3. Memory Gradient Projection Algorithm and Its Convergence

In this section, assuming that the projections $P_C$ and $P_Q$ are easily calculated, we first establish a memory gradient projection algorithm for solving the split feasibility problem and then prove that the sequence of iterates generated by the algorithm converges to a solution of the SFP.

Algorithm 5. Given any $x^0, x^1 \in \mathbb{R}^n$, for $k = 1, 2, \dots$,
calculate $$y^k = x^k + \alpha_k (x^k - x^{k-1}).$$ Let $$x^{k+1} = P_C\big(y^k - \tau_k \nabla f(y^k)\big),$$ where $\alpha_k \in [0, 1)$, $\rho_k \in (0, 4)$, and $$\tau_k = \frac{\rho_k f(y^k)}{\|\nabla f(y^k)\|^2}, \qquad f(y) := \frac{1}{2}\big\|(I - P_Q)Ay\big\|^2, \qquad \nabla f(y) = A^T (I - P_Q) A y.$$
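The displays above are reconstructed in the standard inertial, self-adaptive form; under that reading, the following is a minimal Python sketch of Algorithm 5. The projection callables, the constant parameter values, and the stopping tolerance are illustrative assumptions (in the theory, $\alpha_k$ must additionally satisfy condition (ii) of Theorem 7 below).

```python
import numpy as np

def memory_gradient_projection(A, proj_C, proj_Q, x0, x1,
                               alpha=0.5, rho=2.0,
                               max_iter=1000, tol=1e-10):
    """Sketch of Algorithm 5 (inertial reading): the extrapolated point
    y^k = x^k + alpha_k (x^k - x^{k-1}) is pushed along -grad f with the
    directly computable step size tau_k = rho_k f(y^k) / ||grad f(y^k)||^2.
    """
    x_prev, x = x0.astype(float), x1.astype(float)
    for _ in range(max_iter):
        y = x + alpha * (x - x_prev)       # impulsion term alpha*(x^k - x^{k-1})
        Ay = A @ y
        r = Ay - proj_Q(Ay)                # (I - P_Q) A y
        f_val = 0.5 * (r @ r)              # f(y) = 0.5 * ||(I - P_Q) A y||^2
        grad = A.T @ r                     # grad f(y) = A^T (I - P_Q) A y
        g2 = grad @ grad
        if f_val < tol or g2 == 0.0:       # y already (nearly) solves the SFP
            return proj_C(y)
        tau = rho * f_val / g2             # step size, computed directly
        x_prev, x = x, proj_C(y - tau * grad)
    return x
```

Note that no spectral norm of $A$ and no line search appear anywhere in the loop; this is the point of the directly computable step size.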

Remark 6. Evidently, when $\alpha_k \equiv 0$, Algorithm 5 reduces to Algorithm 3.1 of [14] with the same directly computable step size $\tau_k$. At each iteration, the step size can be computed directly, which avoids computing the largest eigenvalue of the related matrix, estimating the Lipschitz constant, or using a line search scheme, as required by some existing methods in the literature. Here, the term $\alpha_k(x^k - x^{k-1})$ and the two iterates $x^k$ and $x^{k-1}$ are employed to obtain the next iterate $x^{k+1}$, hence improving the flexibility of the algorithm.

Now we give the convergence analysis of Algorithm 5.

Theorem 7. Assume that the solution set of the SFP is nonempty and that $\{\alpha_k\}$ and $\{\rho_k\}$ satisfy the following conditions: (i) For any $k$, $0 \le \alpha_k \le \alpha < 1$ and $0 < \underline{\rho} \le \rho_k \le \overline{\rho} < 4$. (ii) $\sum_{k=1}^{\infty} \alpha_k \|x^k - x^{k-1}\|^2 < \infty$.

Then, for any $x^0, x^1 \in \mathbb{R}^n$, the sequence $\{x^k\}$ generated by Algorithm 5 converges to a solution of the SFP.

Proof. Let $x^*$ be any solution of the SFP. Then $x^* \in C$ and $Ax^* \in Q$, so that $f(x^*) = 0$. Using Lemma 1 and the definition of $\tau_k$, we obtain the key estimate (18), which bounds $\|x^{k+1} - x^*\|^2$ in terms of $\|x^k - x^*\|^2$ and $\|x^{k-1} - x^*\|^2$. Let $\varphi_k = \|x^k - x^*\|^2$. Then, from (18) together with conditions (i) and (ii), the assumptions of Lemma 3 are fulfilled. Thanks to Lemma 3, we conclude that $\{\varphi_k\}$ is convergent; that is, $\{\|x^k - x^*\|\}$ is convergent. The arguments above imply that $\{x^k\}$ is bounded; meanwhile, $\{y^k\}$ and $\{\nabla f(y^k)\}$ are also bounded. Due to condition (ii), we have $\lim_{k \to \infty} \alpha_k \|x^k - x^{k-1}\|^2 = 0$. Combining this with (15), (16), and (18), and using the boundedness of $\{\nabla f(y^k)\}$, we obtain that $\lim_{k \to \infty} f(y^k) = 0$; that is, $\lim_{k \to \infty} \|(I - P_Q)Ay^k\| = 0$. Let $\bar{x}$ be any accumulation point of $\{x^k\}$. Then there exists a subsequence $\{x^{k_j}\}$ which converges to $\bar{x}$. Since $C$ is a closed convex set and the iterates lie in $C$, we have $\bar{x} \in C$.
On the other hand, the construction of $y^k$ and (20) clarify that $y^{k_j} \to \bar{x}$ as well. By using (23), we can get $$\|(I - P_Q)A\bar{x}\| = \lim_{j \to \infty} \|(I - P_Q)Ay^{k_j}\| = 0.$$ Thus $A\bar{x} = P_Q(A\bar{x})$; that is, $A\bar{x} \in Q$. Therefore, $\bar{x}$ is a solution of the SFP.
Thus we may use $\bar{x}$ in place of $x^*$ in (18) and obtain that $\{\|x^k - \bar{x}\|\}$ is convergent. Because there is a subsequence $\{\|x^{k_j} - \bar{x}\|\}$ converging to 0, it follows that $x^k \to \bar{x}$ as $k \to \infty$. This completes the proof.
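Condition (ii) of Theorem 7 couples $\alpha_k$ with the iterates themselves, so in practice it is enforced online. The sketch below shows one common way to do this; the summable cap sequence $1/k^2$ is an illustrative choice, not taken from the paper.

```python
import numpy as np

def choose_alpha(k, x, x_prev, alpha_max=0.9):
    """Pick alpha_k <= alpha_max so that alpha_k * ||x^k - x^{k-1}||^2 <= 1/k^2.

    Since sum_k 1/k^2 is finite, a condition of the form (ii) in Theorem 7
    then holds automatically, whatever the iterates do.
    """
    gap = np.linalg.norm(x - x_prev) ** 2
    if gap == 0.0:
        return alpha_max                 # no inertia needed; any value works
    return min(alpha_max, 1.0 / (k ** 2 * gap))
```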

4. The Relaxed Memory Gradient Projection Algorithm and Its Convergence

In this section, assuming that the projections $P_C$ and $P_Q$ are not easily calculated, we present the relaxed version of the memory gradient projection algorithm introduced in Section 3. More precisely, the convex sets $C$ and $Q$ are assumed to satisfy the following assumptions:

(H1) The set $C$ is given by $$C = \{x \in \mathbb{R}^n : c(x) \le 0\},$$ where $c : \mathbb{R}^n \to \mathbb{R}$ is a convex (not necessarily differentiable) function and $C$ is nonempty.

The set $Q$ is given by $$Q = \{y \in \mathbb{R}^m : q(y) \le 0\},$$ where $q : \mathbb{R}^m \to \mathbb{R}$ is a convex (not necessarily differentiable) function and $Q$ is nonempty.

(H2) For any $x \in \mathbb{R}^n$, at least one subgradient $\xi \in \partial c(x)$ can be calculated, where $\partial c(x)$ is a generalized gradient (the subdifferential) of $c$ at $x$ and is defined as follows: $$\partial c(x) = \{\xi \in \mathbb{R}^n : c(z) \ge c(x) + \langle \xi, z - x \rangle \ \text{for all } z \in \mathbb{R}^n\}.$$ For any $y \in \mathbb{R}^m$, at least one subgradient $\eta \in \partial q(y)$ can be calculated, where $$\partial q(y) = \{\eta \in \mathbb{R}^m : q(u) \ge q(y) + \langle \eta, u - y \rangle \ \text{for all } u \in \mathbb{R}^m\}.$$
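As a concrete (purely illustrative) example of (H2), the sketch below computes a subgradient of the convex, nondifferentiable function $c(x) = \|x\|_1 - 1$, whose level set $\{x : c(x) \le 0\}$ is the unit $\ell_1$ ball.

```python
import numpy as np

def c(x):
    """Convex, not necessarily differentiable: c(x) = ||x||_1 - 1,
    so C = {x : c(x) <= 0} is the unit l1 ball."""
    return np.abs(x).sum() - 1.0

def subgradient_c(x):
    """One element of the subdifferential of c at x: the sign vector.
    At coordinates with x_i = 0, any value in [-1, 1] is valid; np.sign
    returns 0 there, which is an admissible choice."""
    return np.sign(x)
```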

We now formally state the relaxed version of Algorithm 5.

Algorithm 8. Given any $x^0, x^1 \in \mathbb{R}^n$, for $k = 1, 2, \dots$,
calculate $$y^k = x^k + \alpha_k (x^k - x^{k-1}).$$ Let $$x^{k+1} = P_{C_k}\big(y^k - \tau_k \nabla f_k(y^k)\big),$$ where $\alpha_k \in [0, 1)$, $\rho_k \in (0, 4)$, $$\tau_k = \frac{\rho_k f_k(y^k)}{\|\nabla f_k(y^k)\|^2}, \qquad f_k(y) := \frac{1}{2}\big\|(I - P_{Q_k})Ay\big\|^2,$$ $$C_k = \{x \in \mathbb{R}^n : c(x^k) + \langle \xi^k, x - x^k \rangle \le 0\},$$ where $\xi^k$ is an element in $\partial c(x^k)$, and $$Q_k = \{y \in \mathbb{R}^m : q(Ax^k) + \langle \eta^k, y - Ax^k \rangle \le 0\},$$ where $\eta^k$ is an element in $\partial q(Ax^k)$.

Remark 9. By the definition of the subgradient, it is clear that the halfspaces $C_k$ and $Q_k$ contain $C$ and $Q$, respectively. From the expressions of $C_k$ and $Q_k$, the metric projections onto $C_k$ and $Q_k$ can be computed directly (see [22, 23]). Compared with Algorithm 5, when the projections $P_C$ and $P_Q$ are not easily calculated, each iteration of Algorithm 8 only needs to compute the projection onto a halfspace that contains the given closed convex set and is related to the current iterate, which is implemented very easily.
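The closed form behind Remark 9 is the standard projection onto a halfspace. A sketch, assuming the relaxation $C_k = \{z : c(x^k) + \langle \xi^k, z - x^k \rangle \le 0\}$ reconstructed above (the projection onto $Q_k$ is entirely analogous):

```python
import numpy as np

def proj_C_k(x, x_k, c_val, xi):
    """Project x onto C_k = {z : c(x^k) + <xi, z - x^k> <= 0}, with xi != 0.

    c_val = c(x^k) and xi is a subgradient of c at x^k. The projection onto
    a halfspace {z : <a, z> <= beta} is x - [<a, x> - beta]_+ / ||a||^2 * a;
    here <xi, x> - beta equals c(x^k) + <xi, x - x^k>.
    """
    violation = c_val + xi @ (x - x_k)   # amount by which x violates C_k
    if violation <= 0.0:
        return x                         # x already lies in the halfspace
    return x - (violation / (xi @ xi)) * xi
```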

Now, we establish the global convergence of Algorithm 8.

Theorem 10. Suppose that the solution set of the SFP is nonempty and that $\{\alpha_k\}$ and $\{\rho_k\}$ satisfy the following conditions: (1) For any $k$, $0 \le \alpha_k \le \alpha < 1$ and $0 < \underline{\rho} \le \rho_k \le \overline{\rho} < 4$. (2) $\sum_{k=1}^{\infty} \alpha_k \|x^k - x^{k-1}\|^2 < \infty$. Then, for any $x^0, x^1 \in \mathbb{R}^n$, the sequence $\{x^k\}$ generated by Algorithm 8 converges to a solution of the SFP.

Proof. Let $x^*$ be any solution of the SFP. Then $x^* \in C \subseteq C_k$ and $Ax^* \in Q \subseteq Q_k$ for all $k$. Following the same line as in the proof of Theorem 7, we can also get the analogous key estimate (37). Let $\varphi_k = \|x^k - x^*\|^2$. Then, from (37), the assumptions of Lemma 3 are fulfilled. Thanks to Lemma 3, we conclude that $\{\varphi_k\}$ is convergent; that is, $\{\|x^k - x^*\|\}$ is convergent. The arguments above imply that $\{x^k\}$ is bounded; meanwhile, $\{y^k\}$ and $\{\nabla f_k(y^k)\}$ are also bounded. Due to condition (2), we get $\lim_{k \to \infty} \alpha_k \|x^k - x^{k-1}\|^2 = 0$. Combining this with (35)–(37), we obtain that $\lim_{k \to \infty} f_k(y^k) = 0$; thus $\lim_{k \to \infty} \|(I - P_{Q_k})Ay^k\| = 0$. Let $\bar{x}$ be any accumulation point of $\{x^k\}$. Then there exists a subsequence $\{x^{k_j}\}$ which converges to $\bar{x}$. Thanks to the boundedness established above, $\lim_{k \to \infty} \|x^{k+1} - x^k\| = 0$; that is, $x^{k_j + 1} \to \bar{x}$ as well. Noting that $x^{k+1} \in C_k$, by the definition of $C_k$, we get $c(x^k) + \langle \xi^k, x^{k+1} - x^k \rangle \le 0$. Rearranging and using the Cauchy–Schwarz inequality, we obtain that $c(x^{k_j}) \le \langle \xi^{k_j}, x^{k_j} - x^{k_j+1} \rangle \le \|\xi^{k_j}\| \, \|x^{k_j} - x^{k_j+1}\|$. From Lemma 4, we know that $\{\xi^{k_j}\}$ is bounded. Passing onto the limit in (44), we get that $c(\bar{x}) \le 0$, which implies that $\bar{x} \in C$.
On the other hand, because $P_{Q_k}(Ay^k) \in Q_k$, by the definition of $Q_k$, we get $q(Ax^k) + \langle \eta^k, P_{Q_k}(Ay^k) - Ax^k \rangle \le 0$, and hence $q(Ax^{k_j}) \le \|\eta^{k_j}\| \, \|Ax^{k_j} - P_{Q_{k_j}}(Ay^{k_j})\|$. Lemma 4 implies that $\{\eta^{k_j}\}$ is bounded. Passing onto the limit in (46), we have $q(A\bar{x}) \le 0$, which implies that $A\bar{x} \in Q$.
Therefore, $\bar{x}$ is a solution of the SFP.
Thus we may use $\bar{x}$ in place of $x^*$ in (37) and obtain that $\{\|x^k - \bar{x}\|\}$ is convergent. Because there is a subsequence $\{\|x^{k_j} - \bar{x}\|\}$ converging to 0, it follows that $x^k \to \bar{x}$ as $k \to \infty$. This completes the proof.

5. Concluding Remarks

In this paper, we presented a memory gradient projection method for solving the split feasibility problem, which employs a parameter and two previous iterates to generate the next iterate, and whose step size can be calculated directly. It not only greatly improves the convergence and the flexibility of the algorithm, but also avoids computing the largest eigenvalue of the related matrix or estimating the Lipschitz constant at each iteration. Theoretical convergence results are established under suitable conditions. The main idea of this paper can be extended to design analogous algorithms for related convex optimization problems, such as the convex feasibility problem [7] and the multiple-sets split feasibility problem [24].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was partly supported by the National Natural Science Foundation of China (11271226).