Abstract

We consider and study modified extragradient methods for finding a common element of the solution set $\Gamma$ of a split feasibility problem (SFP) and the fixed point set $\operatorname{Fix}(S)$ of a strictly pseudocontractive mapping $S$ in the setting of infinite-dimensional Hilbert spaces. We propose an extragradient algorithm for finding an element of $\operatorname{Fix}(S)\cap\Gamma$, where $S$ is strictly pseudocontractive. It is proven that the sequences generated by the proposed algorithm converge weakly to an element of $\operatorname{Fix}(S)\cap\Gamma$. We also propose another extragradient-like algorithm for finding an element of $\operatorname{Fix}(S)\cap\Gamma$, where $S$ is nonexpansive. It is shown that the sequences generated by this algorithm converge strongly to an element of $\operatorname{Fix}(S)\cap\Gamma$.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$ and let $P_C$ be the metric projection from $H$ onto $C$. Let $S\colon C\to C$ be a self-mapping on $C$. We denote by $\operatorname{Fix}(S)$ the set of fixed points of $S$ and by $\mathbb{R}$ the set of all real numbers.

A mapping $A\colon C\to H$ is called $\alpha$-inverse strongly monotone if there exists a constant $\alpha>0$ such that
$$\langle Ax-Ay,\,x-y\rangle\ \ge\ \alpha\|Ax-Ay\|^2,\quad \forall x,y\in C. \tag{1.1}$$
For a given mapping $A\colon C\to H$, we consider the following variational inequality (VI) of finding $x^*\in C$ such that
$$\langle Ax^*,\,x-x^*\rangle\ \ge\ 0,\quad \forall x\in C. \tag{1.2}$$
The solution set of the VI (1.2) is denoted by $\operatorname{VI}(C,A)$. The variational inequality was first discussed by Lions [1] and is now well known. Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving, and equilibrium problems; see, for example, [2–4].

A mapping $S\colon C\to C$ is called $k$-strictly pseudocontractive if there exists a constant $k\in[0,1)$ such that
$$\|Sx-Sy\|^2\ \le\ \|x-y\|^2+k\|(I-S)x-(I-S)y\|^2,\quad \forall x,y\in C;$$
see [5]. We denote by $\operatorname{Fix}(S)$ the fixed point set of $S$; that is, $\operatorname{Fix}(S)=\{x\in C: Sx=x\}$. In particular, if $k=0$, then $S$ is called a nonexpansive mapping. In 2003, for finding an element of $\operatorname{Fix}(S)\cap\operatorname{VI}(C,A)$ when $C$ is nonempty, closed, and convex, $S$ is nonexpansive, and $A$ is $\alpha$-inverse strongly monotone, Takahashi and Toyoda [6] introduced the following Mann-type iterative algorithm:
$$x_{n+1}=\alpha_nx_n+(1-\alpha_n)SP_C(x_n-\lambda_nAx_n),\quad n\ge 0,$$
where $x_0\in C$ is chosen arbitrarily, $\{\alpha_n\}$ is a sequence in $(0,1)$, and $\{\lambda_n\}$ is a sequence in $(0,2\alpha)$. They showed that if $\operatorname{Fix}(S)\cap\operatorname{VI}(C,A)\neq\emptyset$, then the sequence $\{x_n\}$ converges weakly to some $z\in\operatorname{Fix}(S)\cap\operatorname{VI}(C,A)$. Further, motivated by the idea of Korpelevich's extragradient method [7], Nadezhkina and Takahashi [8] introduced an iterative algorithm for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality problem for a monotone, Lipschitz continuous mapping in a real Hilbert space, and obtained a weak convergence theorem for the two sequences generated by the proposed algorithm. The so-called extragradient method was first introduced by Korpelevich [7] in 1976, who applied it to find a solution of a saddle point problem and proved the convergence of the proposed algorithm to a solution of this saddle point problem. Very recently, Jung [9] introduced a new composite iterative scheme by the viscosity approximation method and proved the strong convergence of the proposed scheme to a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality for an inverse strongly monotone mapping in a Hilbert space.

On the other hand, let $C$ and $Q$ be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces $H_1$ and $H_2$, respectively. The split feasibility problem (SFP) is to find a point $x^*$ with the following property:
$$x^*\in C\quad\text{and}\quad Ax^*\in Q,$$
where $A\in B(H_1,H_2)$ and $B(H_1,H_2)$ denotes the family of all bounded linear operators from $H_1$ to $H_2$.

In 1994, the SFP was first introduced by Censor and Elfving [10], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrievals and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, [11] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy (IMRT) [12–14]. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, for example, [11, 13, 15–19] and the references therein (see also [20] for relevant projection methods for solving image recovery problems). A special case of the SFP is the following convex constrained linear inverse problem [21] of finding an element $x$ such that
$$x\in C\quad\text{and}\quad Ax=b.$$
It has been extensively investigated in the literature using the projected Landweber iterative method [22]. Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set $Q$. Therefore, whether various versions of the projected Landweber iterative method [23] can be extended to solve the SFP remains an interesting open topic. The original algorithm given in [10] involves the computation of the inverse $A^{-1}$ (assuming the existence of the inverse of $A$) and thus did not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [11, 15],
$$x_{n+1}=P_C\bigl(x_n-\gamma A^*(I-P_Q)Ax_n\bigr),\quad 0<\gamma<\frac{2}{\|A\|^2},$$
which is found to be a gradient-projection method (GPM) in convex minimization. It is also a special case of the proximal forward-backward splitting method [24]. The CQ algorithm only involves the computation of the projections $P_C$ and $P_Q$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions; for example, when $C$ and $Q$ are closed balls or half-spaces. However, it remains a challenge how to implement the CQ algorithm in the case where the projections $P_C$ and/or $P_Q$ fail to have closed-form expressions, even though, theoretically, we can prove the (weak) convergence of the algorithm.
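To make this concrete, the following minimal NumPy sketch (ours; all function names and the toy data are illustrative assumptions, not part of the cited works) runs the CQ iteration in the case where $C$ is a closed ball and $Q$ is a half-space, so both projections have closed forms:

```python
import numpy as np

def proj_ball(x, center, radius):
    """Closed-form projection onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + (radius / dist) * d

def proj_halfspace(y, a, beta):
    """Closed-form projection onto the half-space {y : <a, y> <= beta}."""
    viol = a @ y - beta
    return y if viol <= 0 else y - (viol / (a @ a)) * a

def cq_algorithm(A, x0, proj_C, proj_Q, gamma, iters=500):
    """Byrne's CQ iteration x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n);
    weak convergence requires 0 < gamma < 2 / ||A||^2."""
    x = x0
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

# Toy instance: C is the unit ball in R^3 and Q is a half-space in R^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # safely inside (0, 2/||A||^2)
x = cq_algorithm(A, np.zeros(3),
                 lambda u: proj_ball(u, np.zeros(3), 1.0),
                 lambda v: proj_halfspace(v, np.array([1.0, 1.0]), 0.5),
                 gamma)
print(x, A @ x)   # x lies in C and A x is (approximately) in Q
```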

In 2010, Xu [25] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann's algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He also derived a weak convergence result showing that, for suitable choices of the iterative parameters (including the regularization parameter), the sequence of iterates converges weakly to an exact solution of the SFP.

Very recently, Ceng et al. [26] introduced and studied an extragradient method with regularization for finding a common element of the solution set $\Gamma$ of the SFP and the set $\operatorname{Fix}(S)$ of fixed points of a nonexpansive mapping $S$ in the setting of infinite-dimensional Hilbert spaces. By combining the regularization method and the extragradient method due to Nadezhkina and Takahashi [8], the authors proposed an iterative algorithm for finding an element of $\operatorname{Fix}(S)\cap\Gamma$. The authors proved that the sequences generated by the proposed algorithm converge weakly to an element $\hat{x}\in\operatorname{Fix}(S)\cap\Gamma$.

The purpose of this paper is to investigate modified extragradient methods for finding a common element of the solution set $\Gamma$ of the SFP and the fixed point set $\operatorname{Fix}(S)$ of a strictly pseudocontractive mapping $S$ in the setting of infinite-dimensional Hilbert spaces. Assume that $\operatorname{Fix}(S)\cap\Gamma\neq\emptyset$. By combining the regularization method and Nadezhkina and Takahashi's extragradient method [8], we propose an extragradient algorithm for finding an element of $\operatorname{Fix}(S)\cap\Gamma$. It is proven that the sequences generated by the proposed algorithm converge weakly to an element of $\operatorname{Fix}(S)\cap\Gamma$. This result supplements, improves, and extends the corresponding results in [25, 26], for example, [25, Theorem 5.7] and [26, Theorem 3.1]. On the other hand, by combining the regularization method and Jung's composite viscosity approximation method [9], we also propose another extragradient-like algorithm for finding an element of $\operatorname{Fix}(S)\cap\Gamma$, where $S$ is nonexpansive. It is shown that the sequences generated by the proposed algorithm converge strongly to an element of $\operatorname{Fix}(S)\cap\Gamma$. Such a result substantially develops and improves the corresponding results in [9, 25, 26], for example, [25, Theorem 5.7], [26, Theorem 3.1], and [9, Theorem 3.1]. It is worth pointing out that our results are new and novel in the Hilbert space setting; essentially new approaches for finding the fixed points of strictly pseudocontractive mappings (including nonexpansive mappings) and solutions of the SFP are provided.

2. Preliminaries

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. We write $x_n\rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$ and $x_n\to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$. Moreover, we use $\omega_w(x_n)$ to denote the weak $\omega$-limit set of the sequence $\{x_n\}$, that is,
$$\omega_w(x_n):=\bigl\{x\in H:\ x_{n_i}\rightharpoonup x\ \text{for some subsequence}\ \{x_{n_i}\}\ \text{of}\ \{x_n\}\bigr\}.$$

Recall that the metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C\colon H\to C$ which assigns to each point $x\in H$ the unique point $P_Cx\in C$ satisfying the property
$$\|x-P_Cx\|=\inf_{y\in C}\|x-y\|.$$

Some important properties of projections are gathered in the following proposition.

Proposition 2.1. For given $x\in H$ and $z\in C$:
(i) $z=P_Cx$ if and only if $\langle x-z,\,y-z\rangle\le 0$ for all $y\in C$;
(ii) $z=P_Cx$ if and only if $\|x-z\|^2\le\|x-y\|^2-\|y-z\|^2$ for all $y\in C$;
(iii) $\langle P_Cx-P_Cy,\,x-y\rangle\ge\|P_Cx-P_Cy\|^2$ for all $x,y\in H$, which hence implies that $P_C$ is nonexpansive and monotone.
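As a quick numerical sanity check (our own illustration), the characterization in Proposition 2.1(i) can be verified for the closed-form projection onto a ball:

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Closed-form projection onto the closed ball of the given radius at 0."""
    dist = np.linalg.norm(x)
    return x if dist <= radius else (radius / dist) * x

rng = np.random.default_rng(1)
x = 3.0 * rng.standard_normal(5)        # a point (typically) outside the ball
z = proj_ball(x)                        # z = P_C x
for _ in range(1000):
    y = proj_ball(rng.standard_normal(5))   # an arbitrary point y in C
    assert (x - z) @ (y - z) <= 1e-10       # <x - P_C x, y - P_C x> <= 0
```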

Definition 2.2. A mapping $T\colon H\to H$ is said to be
(a) nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in H$;
(b) firmly nonexpansive if $2T-I$ is nonexpansive, or equivalently,
$$\langle x-y,\,Tx-Ty\rangle\ \ge\ \|Tx-Ty\|^2,\quad \forall x,y\in H;$$
alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T=\tfrac{1}{2}(I+S)$, where $S\colon H\to H$ is nonexpansive. Projections are firmly nonexpansive.

Definition 2.3. Let $T$ be a nonlinear operator with domain $D(T)\subseteq H$ and range $R(T)\subseteq H$, and let $\beta>0$ and $\nu>0$ be given constants. The operator $T$ is called:
(a) monotone if $\langle x-y,\,Tx-Ty\rangle\ge 0$ for all $x,y\in D(T)$;
(b) $\beta$-strongly monotone if $\langle x-y,\,Tx-Ty\rangle\ge\beta\|x-y\|^2$ for all $x,y\in D(T)$;
(c) $\nu$-inverse strongly monotone ($\nu$-ism) if $\langle x-y,\,Tx-Ty\rangle\ge\nu\|Tx-Ty\|^2$ for all $x,y\in D(T)$.

It can be easily seen that if $T$ is nonexpansive, then $I-T$ is monotone. It is also easy to see that a projection $P_C$ is $1$-ism.

Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely to solve practical problems in various fields, for instance, in traffic assignment problems; see, for example, [27, 28].

Definition 2.4. A mapping $T\colon H\to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping, that is,
$$T\equiv(1-\alpha)I+\alpha S,$$
where $\alpha\in(0,1)$ and $S\colon H\to H$ is nonexpansive. More precisely, when the last equality holds, we say that $T$ is $\alpha$-averaged. Thus firmly nonexpansive mappings (in particular, projections) are $\tfrac{1}{2}$-averaged maps.

Proposition 2.5 (see [15]). Let $T\colon H\to H$ be a given mapping.
(i) $T$ is nonexpansive if and only if the complement $I-T$ is $\tfrac{1}{2}$-ism.
(ii) If $T$ is $\nu$-ism, then, for $\gamma>0$, $\gamma T$ is $\tfrac{\nu}{\gamma}$-ism.
(iii) $T$ is averaged if and only if the complement $I-T$ is $\nu$-ism for some $\nu>\tfrac{1}{2}$. Indeed, for $\alpha\in(0,1)$, $T$ is $\alpha$-averaged if and only if $I-T$ is $\tfrac{1}{2\alpha}$-ism.

Proposition 2.6 (see [15, 29]). Let $S,T,V\colon H\to H$ be given operators.
(i) If $T=(1-\alpha)S+\alpha V$ for some $\alpha\in(0,1)$, $S$ is averaged, and $V$ is nonexpansive, then $T$ is averaged.
(ii) $T$ is firmly nonexpansive if and only if the complement $I-T$ is firmly nonexpansive.
(iii) If $T=(1-\alpha)S+\alpha V$ for some $\alpha\in(0,1)$, $S$ is firmly nonexpansive, and $V$ is nonexpansive, then $T$ is averaged.
(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1\cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1,\alpha_2\in(0,1)$, then the composite $T_1T_2$ is $\alpha$-averaged, where $\alpha=\alpha_1+\alpha_2-\alpha_1\alpha_2$.
(v) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^N\operatorname{Fix}(T_i)=\operatorname{Fix}(T_1\cdots T_N).$$
The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping $T$, that is, $\operatorname{Fix}(T)=\{x\in H: Tx=x\}$.

On the other hand, it is clear that, in a real Hilbert space $H$, $S\colon C\to C$ is $k$-strictly pseudocontractive if and only if there holds the following inequality:
$$\langle Sx-Sy,\,x-y\rangle\ \le\ \|x-y\|^2-\frac{1-k}{2}\|(I-S)x-(I-S)y\|^2,\quad \forall x,y\in C.$$
This immediately implies that if $S$ is a $k$-strictly pseudocontractive mapping, then $I-S$ is $\frac{1-k}{2}$-inverse strongly monotone; for further detail, we refer to [5] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings. The so-called demiclosedness principle for strictly pseudocontractive mappings in the following lemma will often be used.

Lemma 2.7 (see [5, Proposition 2.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $S\colon C\to C$ be a mapping.
(i) If $S$ is a $k$-strictly pseudocontractive mapping, then $S$ satisfies the Lipschitz condition
$$\|Sx-Sy\|\ \le\ \frac{1+k}{1-k}\|x-y\|,\quad \forall x,y\in C.$$
(ii) If $S$ is a $k$-strictly pseudocontractive mapping, then the mapping $I-S$ is demiclosed at $0$; that is, if $\{x_n\}$ is a sequence in $C$ such that $x_n\rightharpoonup\tilde{x}$ and $(I-S)x_n\to 0$, then $(I-S)\tilde{x}=0$.
(iii) If $S$ is a $k$-(quasi-)strict pseudocontraction, then the fixed point set $\operatorname{Fix}(S)$ of $S$ is closed and convex, so that the projection $P_{\operatorname{Fix}(S)}$ is well defined.

The following elementary result on real sequences is quite well known.

Lemma 2.8 (see [30, page 80]). Let $\{a_n\}$, $\{b_n\}$, and $\{c_n\}$ be sequences of nonnegative real numbers satisfying the inequality
$$a_{n+1}\ \le\ (1+b_n)a_n+c_n,\quad \forall n\ge 1.$$
If $\sum_{n=1}^\infty b_n<\infty$ and $\sum_{n=1}^\infty c_n<\infty$, then $\lim_{n\to\infty}a_n$ exists. If, in addition, $\{a_n\}$ has a subsequence which converges to zero, then $\lim_{n\to\infty}a_n=0$.

Corollary 2.9 (see [31, page 303]). Let $\{a_n\}$ and $\{b_n\}$ be two sequences of nonnegative real numbers satisfying the inequality
$$a_{n+1}\ \le\ a_n+b_n,\quad \forall n\ge 1.$$
If $\sum_{n=1}^\infty b_n$ converges, then $\lim_{n\to\infty}a_n$ exists.

It is easy to see that the following lemma holds.

Lemma 2.10 (see [32]). Let $H$ be a real Hilbert space. Then, for all $x,y\in H$ and $\lambda\in[0,1]$,
$$\|\lambda x+(1-\lambda)y\|^2=\lambda\|x\|^2+(1-\lambda)\|y\|^2-\lambda(1-\lambda)\|x-y\|^2.$$
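The identity of Lemma 2.10 is straightforward to confirm numerically; the following snippet (our own illustration) checks it for random vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal(4), rng.standard_normal(4)
for lam in np.linspace(0.0, 1.0, 11):
    lhs = np.linalg.norm(lam * x + (1 - lam) * y) ** 2
    rhs = (lam * np.linalg.norm(x) ** 2
           + (1 - lam) * np.linalg.norm(y) ** 2
           - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)
    assert abs(lhs - rhs) < 1e-9   # identity of Lemma 2.10
```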

The following lemma plays a key role in proving weak convergence of the sequences generated by our algorithm.

Lemma 2.11 (see [33]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S\colon C\to C$ be a $k$-strictly pseudocontractive mapping. Let $\gamma$ and $\delta$ be two nonnegative real numbers such that $(\gamma+\delta)k\le\gamma$. Then
$$\|\gamma(x-y)+\delta(Sx-Sy)\|\ \le\ (\gamma+\delta)\|x-y\|,\quad \forall x,y\in C.$$

The following result is useful when we prove the weak convergence of a sequence.

Lemma 2.12 (see [25, Proposition 2.6]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\{x_n\}$ be a bounded sequence which satisfies the following properties:
(i) every weak limit point of $\{x_n\}$ lies in $C$;
(ii) $\lim_{n\to\infty}\|x_n-x\|$ exists for every $x\in C$.
Then $\{x_n\}$ converges weakly to a point in $C$.

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $F\colon C\to H$ be a monotone mapping. The variational inequality (VI) is to find $x\in C$ such that
$$\langle Fx,\,y-x\rangle\ \ge\ 0,\quad \forall y\in C.$$
The solution set of the VIP is denoted by $\operatorname{VI}(C,F)$. It is well known that
$$x\in\operatorname{VI}(C,F)\iff x=P_C(x-\lambda Fx)\ \text{for every}\ \lambda>0.$$

A set-valued mapping $T\colon H\to 2^H$ is called monotone if, for all $x,y\in H$, $f\in Tx$ and $g\in Ty$ imply $\langle x-y,\,f-g\rangle\ge 0$. A monotone mapping $T\colon H\to 2^H$ is called maximal if its graph is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for $(x,f)\in H\times H$, $\langle x-y,\,f-g\rangle\ge 0$ for every $(y,g)\in\operatorname{graph}(T)$ implies $f\in Tx$. Let $F\colon C\to H$ be a monotone and $L$-Lipschitz continuous mapping and let $N_Cv$ be the normal cone to $C$ at $v\in C$, that is,
$$N_Cv=\{w\in H:\ \langle v-u,\,w\rangle\ge 0,\ \forall u\in C\}.$$
Define
$$Tv=\begin{cases}Fv+N_Cv, & v\in C,\\ \emptyset, & v\notin C.\end{cases}$$
Then, $T$ is maximal monotone and $0\in Tv$ if and only if $v\in\operatorname{VI}(C,F)$; see [34] for more details.

3. Some Modified Extragradient Methods

Throughout the paper, we assume that the SFP is consistent; that is, the solution set $\Gamma$ of the SFP is nonempty. Let $f\colon H_1\to\mathbb{R}$ be the continuously differentiable function
$$f(x)=\tfrac{1}{2}\|Ax-P_QAx\|^2.$$
The minimization problem
$$\min_{x\in C}f(x)=\min_{x\in C}\tfrac{1}{2}\|Ax-P_QAx\|^2$$
is ill posed. Therefore, Xu [25] considered the following Tikhonov regularized problem:
$$\min_{x\in C}f_\alpha(x):=\min_{x\in C}\Bigl(\tfrac{1}{2}\|Ax-P_QAx\|^2+\tfrac{1}{2}\alpha\|x\|^2\Bigr),$$
where $\alpha>0$ is the regularization parameter.

We observe that the gradient
$$\nabla f_\alpha=\nabla f+\alpha I=A^*(I-P_Q)A+\alpha I$$
is $(\alpha+\|A\|^2)$-Lipschitz continuous and $\alpha$-strongly monotone.

We can use fixed point algorithms to solve the SFP on the basis of the following observation.

Let $\lambda>0$ and assume that $x^*\in\Gamma$. Then $Ax^*\in Q$, which implies that $(I-P_Q)Ax^*=0$, and thus, $\lambda A^*(I-P_Q)Ax^*=0$. Hence, we have the fixed point equation $x^*=x^*-\lambda A^*(I-P_Q)Ax^*$. Requiring that $x^*\in C$, we consider the fixed point equation
$$x^*=P_C\bigl(x^*-\lambda A^*(I-P_Q)Ax^*\bigr)=P_C\bigl(x^*-\lambda\nabla f(x^*)\bigr). \tag{3.4}$$
It is proven in [25, Proposition 3.2] that the solutions of the fixed point equation (3.4) are exactly the solutions of the SFP; namely, for given $\hat{x}\in C$ and $\lambda>0$, $\hat{x}$ solves the SFP if and only if $\hat{x}$ solves the fixed point equation (3.4).
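As an illustration, the gradient $\nabla f_\alpha$ and the fixed point residual associated with (3.4) translate directly into code; in the following minimal sketch (ours), proj_C and proj_Q are assumed to be user-supplied closed-form projections:

```python
import numpy as np

def grad_f(A, proj_Q, x):
    """Gradient of f(x) = (1/2)||Ax - P_Q(Ax)||^2, i.e., A^T (I - P_Q) A x."""
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax))

def grad_f_alpha(A, proj_Q, x, alpha):
    """Gradient of the Tikhonov-regularized objective:
    grad f_alpha(x) = A^T (I - P_Q) A x + alpha * x,
    which is (alpha + ||A||^2)-Lipschitz and alpha-strongly monotone."""
    return grad_f(A, proj_Q, x) + alpha * x

def fixed_point_residual(A, proj_C, proj_Q, x, lam):
    """||x - P_C(x - lam * grad f(x))||; zero exactly when x solves the SFP."""
    return np.linalg.norm(x - proj_C(x - lam * grad_f(A, proj_Q, x)))
```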

Proposition 3.1 (see [26, Proposition 3.1]). Given $x^*\in H_1$, the following statements are equivalent:
(i) $x^*$ solves the SFP;
(ii) $x^*$ solves the fixed point equation (3.4);
(iii) $x^*$ solves the variational inequality problem (VIP) of finding $x^*\in C$ such that
$$\langle\nabla f(x^*),\,x-x^*\rangle\ \ge\ 0,\quad \forall x\in C.$$

Remark 3.2. It is clear from Proposition 3.1 that $\Gamma=\operatorname{Fix}\bigl(P_C(I-\lambda\nabla f)\bigr)=\operatorname{VI}(C,\nabla f)$ for all $\lambda>0$, where $\operatorname{Fix}\bigl(P_C(I-\lambda\nabla f)\bigr)$ and $\operatorname{VI}(C,\nabla f)$ denote the set of fixed points of $P_C(I-\lambda\nabla f)$ and the solution set of the VIP in Proposition 3.1(iii), respectively.

We are now in a position to propose a modified extragradient method for solving the SFP and the fixed point problem of a $k$-strictly pseudocontractive mapping, and to prove that the sequences generated by the proposed method converge weakly to an element of $\operatorname{Fix}(S)\cap\Gamma$.

Theorem 3.3. Let $S\colon C\to C$ be a $k$-strictly pseudocontractive mapping such that $\operatorname{Fix}(S)\cap\Gamma\neq\emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be the sequences in $C$ generated by the following modified extragradient algorithm:
$$\begin{cases}x_0=x\in C,\\ y_n=P_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(x_n)\bigr),\\ x_{n+1}=\beta_nx_n+\gamma_nP_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)+\delta_nSP_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr),\quad n\ge 0,\end{cases} \tag{3.7}$$
where $\{\alpha_n\}\subset(0,\infty)$ and $\{\lambda_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, $\{\delta_n\}$ are such that
(i) $\sum_{n=0}^\infty\alpha_n<\infty$;
(ii) $\{\lambda_n\}\subset[a,b]$ for some $a,b\in\bigl(0,\frac{1}{\|A\|^2}\bigr)$;
(iii) $\beta_n+\gamma_n+\delta_n=1$ and $(\gamma_n+\delta_n)k\le\gamma_n$ for all $n\ge 0$;
(iv) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$ and $\liminf_{n\to\infty}\delta_n>0$.
Then, both the sequences $\{x_n\}$ and $\{y_n\}$ converge weakly to an element $\hat{x}\in\operatorname{Fix}(S)\cap\Gamma$.

Proof. First, taking into account $\sum_{n=0}^\infty\alpha_n<\infty$ (so that $\alpha_n\to 0$), without loss of generality, we may assume that $b<\frac{1}{\alpha_n+\|A\|^2}$ for all $n\ge 0$.
We observe that $P_C(I-\lambda\nabla f_\alpha)$ is $\zeta$-averaged for each $\lambda\in\bigl(0,\frac{2}{\alpha+\|A\|^2}\bigr)$, where $\zeta=\frac{2+\lambda(\alpha+\|A\|^2)}{4}\in(0,1)$; see, for example, [35]. From this it follows that $P_C(I-\lambda_n\nabla f_{\alpha_n})$ and $P_C(I-\lambda_n\nabla f)$ are nonexpansive for all $n\ge 0$.
Next, we show that the sequence $\{x_n\}$ is bounded. Indeed, take a fixed $p\in\operatorname{Fix}(S)\cap\Gamma$ arbitrarily. Then we get $Sp=p$ and $p=P_C(p-\lambda_n\nabla f(p))$ for all $n\ge 0$. For simplicity, we write $z_n=P_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)$ for all $n\ge 0$; then $x_{n+1}=\beta_nx_n+\gamma_nz_n+\delta_nSz_n$ for all $n\ge 0$. From (3.7), together with Proposition 2.1(i) and (ii), we obtain estimates of $\|y_n-p\|$ and $\|z_n-p\|$ in terms of $\|x_n-p\|$, up to error terms proportional to $\alpha_n$. Since $(\gamma_n+\delta_n)k\le\gamma_n$, utilizing Lemmas 2.10 and 2.11, from (3.9) and the last inequality, we conclude that
$$\|x_{n+1}-p\|^2\ \le\ (1+b_n)\|x_n-p\|^2+c_n,$$
where $\sum_{n=0}^\infty b_n<\infty$ and $\sum_{n=0}^\infty c_n<\infty$, thanks to $\sum_{n=0}^\infty\alpha_n<\infty$ and $\{\lambda_n\}\subset[a,b]$ for some $a,b\in\bigl(0,\frac{1}{\|A\|^2}\bigr)$. Therefore, by Lemma 2.8, we deduce that $\lim_{n\to\infty}\|x_n-p\|$ exists and the sequence $\{x_n\}$ is bounded, and so are $\{y_n\}$ and $\{z_n\}$. From the last relations, together with (3.16) and (3.18), since $\{\lambda_n\}\subset[a,b]$ and $\alpha_n\to 0$, we also obtain
$$\lim_{n\to\infty}\|x_n-y_n\|=0,\qquad \lim_{n\to\infty}\|x_n-z_n\|=0,\qquad \lim_{n\to\infty}\|Sz_n-z_n\|=0.$$
Since $\nabla f$ is Lipschitz continuous, from (3.18) we also have $\lim_{n\to\infty}\|\nabla f(x_n)-\nabla f(y_n)\|=0$.
As $\{x_n\}$ is bounded, there is a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ that converges weakly to some $\hat{x}$. We obtain that $\hat{x}\in\operatorname{Fix}(S)\cap\Gamma$. First, we show that $\hat{x}\in\Gamma$. Since $\|x_n-y_n\|\to 0$ and $\|x_n-z_n\|\to 0$, it is known that $y_{n_i}\rightharpoonup\hat{x}$ and $z_{n_i}\rightharpoonup\hat{x}$. Let
$$Tv=\begin{cases}\nabla f(v)+N_Cv, & v\in C,\\ \emptyset, & v\notin C,\end{cases}$$
where $N_Cv$ is the normal cone to $C$ at $v$. Then, $T$ is maximal monotone and $0\in Tv$ if and only if $v\in\operatorname{VI}(C,\nabla f)$; see [34] for more details. Let $(v,w)\in\operatorname{graph}(T)$. Then, we have $w\in Tv=\nabla f(v)+N_Cv$, and hence, $w-\nabla f(v)\in N_Cv$. So, we have $\langle v-u,\,w-\nabla f(v)\rangle\ge 0$ for all $u\in C$. On the other hand, from $z_n=P_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)$ and $v\in C$, we have
$$\bigl\langle x_n-\lambda_n\nabla f_{\alpha_n}(y_n)-z_n,\,z_n-v\bigr\rangle\ \ge\ 0,$$
and hence,
$$\Bigl\langle v-z_n,\,\frac{z_n-x_n}{\lambda_n}+\nabla f_{\alpha_n}(y_n)\Bigr\rangle\ \ge\ 0.$$
Therefore, combining these inequalities with the monotonicity of $\nabla f$ and letting $i\to\infty$ along the subsequence, we obtain $\langle v-\hat{x},\,w\rangle\ge 0$. Since $T$ is maximal monotone, we have $\hat{x}\in T^{-1}0$, and hence, $\hat{x}\in\operatorname{VI}(C,\nabla f)$. Thus it is clear that $\hat{x}\in\Gamma$.
We show that $\hat{x}\in\operatorname{Fix}(S)$. Indeed, since $z_{n_i}\rightharpoonup\hat{x}$ and $\|Sz_n-z_n\|\to 0$ by (3.21), by Lemma 2.7(ii), we get $\hat{x}\in\operatorname{Fix}(S)$. Therefore, we have $\hat{x}\in\operatorname{Fix}(S)\cap\Gamma$. This shows that $\omega_w(x_n)\subset\operatorname{Fix}(S)\cap\Gamma$. Since the limit $\lim_{n\to\infty}\|x_n-p\|$ exists for every $p\in\operatorname{Fix}(S)\cap\Gamma$, by Lemma 2.12, we know that $\{x_n\}$ converges weakly to some $\hat{x}\in\operatorname{Fix}(S)\cap\Gamma$. Further, from $\|x_n-y_n\|\to 0$, it follows that $y_n\rightharpoonup\hat{x}$. This shows that both sequences $\{x_n\}$ and $\{y_n\}$ converge weakly to $\hat{x}$.

Remark 3.4. It is worth emphasizing that the modified extragradient algorithm in Theorem 3.3 is essentially a predictor-corrector algorithm. Indeed, the first iterative step $y_n=P_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(x_n)\bigr)$ is the predictor one, and the second iterative step $x_{n+1}=\beta_nx_n+\gamma_nP_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)+\delta_nSP_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)$ is actually the corrector one. In addition, Theorem 3.3 extends the extragradient method due to Nadezhkina and Takahashi [8, Theorem 3.1].
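For experimentation, here is a minimal NumPy sketch of the predictor-corrector iteration, written against the scheme of Theorem 3.3 as stated above; the parameter choices are illustrative only (in particular, the fixed weights below require $k\le 0.5$), and proj_C, proj_Q, and the mapping S are assumed to be supplied by the user:

```python
import numpy as np

def modified_extragradient(A, proj_C, proj_Q, S, x0, iters=200):
    """Sketch of the predictor-corrector iteration of Theorem 3.3:
       y_n     = P_C(x_n - lam_n * grad f_{alpha_n}(x_n))            (predictor)
       z_n     = P_C(x_n - lam_n * grad f_{alpha_n}(y_n))
       x_{n+1} = beta_n x_n + gamma_n z_n + delta_n S(z_n).          (corrector)
    """
    norm_A2 = np.linalg.norm(A, 2) ** 2
    x = x0
    for n in range(iters):
        alpha = 1.0 / (n + 1) ** 2              # summable: sum alpha_n < infinity
        lam = 0.5 / norm_A2                     # constant step inside (0, 1/||A||^2)
        beta, gamma, delta = 0.5, 0.25, 0.25    # beta+gamma+delta = 1;
                                                # (gamma+delta)*k <= gamma needs k <= 0.5
        def grad(u):                            # grad f_alpha(u)
            Au = A @ u
            return A.T @ (Au - proj_Q(Au)) + alpha * u
        y = proj_C(x - lam * grad(x))           # predictor step
        z = proj_C(x - lam * grad(y))           # extragradient step
        x = beta * x + gamma * z + delta * S(z) # corrector step
    return x
```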

Corollary 3.5. Let $S\colon C\to C$ be a nonexpansive mapping such that $\operatorname{Fix}(S)\cap\Gamma\neq\emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be the sequences in $C$ generated by the following extragradient algorithm:
$$\begin{cases}x_0=x\in C,\\ y_n=P_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(x_n)\bigr),\\ x_{n+1}=\beta_nx_n+(1-\beta_n)SP_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr),\quad n\ge 0,\end{cases}$$
where $\{\alpha_n\}$, $\{\lambda_n\}$, and $\{\beta_n\}$ are such that
(i) $\sum_{n=0}^\infty\alpha_n<\infty$;
(ii) $\{\lambda_n\}\subset[a,b]$ for some $a,b\in\bigl(0,\frac{1}{\|A\|^2}\bigr)$;
(iii) $0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1$.
Then, both the sequences $\{x_n\}$ and $\{y_n\}$ converge weakly to an element $\hat{x}\in\operatorname{Fix}(S)\cap\Gamma$.

Proof. In Theorem 3.3, putting $\gamma_n=0$ for every $n\ge 0$, we obtain that $\delta_n=1-\beta_n$ and
$$x_{n+1}=\beta_nx_n+(1-\beta_n)SP_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr).$$
Since $S$ is a nonexpansive mapping, $S$ must be a $k$-strictly pseudocontractive mapping with coefficient $k=0$. It is clear that $(\gamma_n+\delta_n)k\le\gamma_n$ for every $n\ge 0$. In this case, all conditions in Theorem 3.3 are satisfied. Therefore, by Theorem 3.3, we derive the desired result.

Remark 3.6. Corollary 3.5 is essentially coincident with [26, Theorem 3.1]; hence our Theorem 3.3 includes [26, Theorem 3.1] as a special case. Utilizing [8, Theorem 3.1], Ceng et al. gave the following weak convergence result [26, Theorem 3.2].

Let $S\colon C\to C$ be a nonexpansive mapping such that $\operatorname{Fix}(S)\cap\Gamma\neq\emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be the sequences in $C$ generated by the following Nadezhkina and Takahashi extragradient algorithm:
$$\begin{cases}x_0=x\in C,\\ y_n=P_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr),\\ x_{n+1}=\beta_nx_n+(1-\beta_n)SP_C\bigl(x_n-\lambda_n\nabla f(y_n)\bigr),\quad n\ge 0,\end{cases}$$
where $\{\lambda_n\}\subset[a,b]$ for some $a,b\in\bigl(0,\frac{1}{\|A\|^2}\bigr)$ and $\{\beta_n\}\subset[c,d]$ for some $c,d\in(0,1)$. Then, both the sequences $\{x_n\}$ and $\{y_n\}$ converge weakly to an element $\hat{x}\in\operatorname{Fix}(S)\cap\Gamma$.

Obviously, [26, Theorem 3.2] is a weak convergence result for the unregularized scheme, that is, for $\alpha_n=0$ for all $n\ge 0$. However, Corollary 3.5 is another weak convergence result, one for a sequence of regularization parameters $\{\alpha_n\}$ with $\sum_{n=0}^\infty\alpha_n<\infty$.

Remark 3.7. Theorem 3.3 improves, extends, and develops [25, Theorem 5.7] and [26, Theorem 3.1] in the following aspects.
(a) The corresponding iterative algorithms in [25, Theorem 5.7] and [26, Theorem 3.1] are extended for developing our modified extragradient algorithm with regularization in Theorem 3.3.
(b) The technique of proving weak convergence in Theorem 3.3 is different from those in [25, Theorem 5.7] and [26, Theorem 3.1] because our technique depends on the properties of maximal monotone mappings and strictly pseudocontractive mappings (e.g., Lemma 2.11) and the demiclosedness principle for strictly pseudocontractive mappings (e.g., Lemma 2.7) in Hilbert spaces.
(c) The problem of finding an element of $\operatorname{Fix}(S)\cap\Gamma$ with $S$ strictly pseudocontractive is more general than the problem of finding a solution of the SFP in [25, Theorem 5.7] and the problem of finding an element of $\operatorname{Fix}(S)\cap\Gamma$ with $S$ nonexpansive in [26, Theorem 3.1].
(d) The second iterative step $x_{n+1}=\beta_nx_n+\gamma_nz_n+\delta_nSz_n$ (with $z_n=P_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)$) in our algorithm reduces to the second iterative step in the algorithm of [26, Theorem 3.1] whenever $\gamma_n=0$ for all $n\ge 0$.

Utilizing Theorem 3.3, we have the following two new results in the setting of real Hilbert spaces.

Corollary 3.8. Let $S\colon H_1\to H_1$ be a $k$-strictly pseudocontractive mapping such that $\operatorname{Fix}(S)\cap\Gamma\neq\emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be two sequences generated by
$$\begin{cases}x_0=x\in H_1,\\ y_n=x_n-\lambda_n\nabla f_{\alpha_n}(x_n),\\ x_{n+1}=\beta_nx_n+\gamma_n\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)+\delta_nS\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr),\quad n\ge 0,\end{cases}$$
where $\{\alpha_n\}$, $\{\lambda_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\delta_n\}$ satisfy conditions (i)-(iv) of Theorem 3.3. Then, both the sequences $\{x_n\}$ and $\{y_n\}$ converge weakly to an element $\hat{x}\in\operatorname{Fix}(S)\cap\Gamma$.

Proof. In Theorem 3.3, putting $C=H_1$, we have $\Gamma=\{x\in H_1:\ Ax\in Q\}$ and $P_C=I$, the identity mapping. By Theorem 3.3, we obtain the desired result.

Remark 3.9. In Corollary 3.8, putting $\gamma_n=0$ for every $n\ge 0$ and letting $S$ be a nonexpansive mapping, Corollary 3.8 essentially reduces to [26, Corollary 3.2]. Hence, Corollary 3.8 includes [26, Corollary 3.2] as a special case.

Corollary 3.10. Let $B\colon H_1\to 2^{H_1}$ be a maximal monotone mapping such that $B^{-1}0\cap\Gamma\neq\emptyset$. Let $J_{B,r}$ be the resolvent of $B$ for each $r>0$. Let $\{x_n\}$ and $\{y_n\}$ be the sequences generated by
$$\begin{cases}x_0=x\in H_1,\\ y_n=x_n-\lambda_n\nabla f_{\alpha_n}(x_n),\\ x_{n+1}=\beta_nx_n+\gamma_n\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)+\delta_nJ_{B,r}\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr),\quad n\ge 0,\end{cases}$$
where $\{\alpha_n\}$, $\{\lambda_n\}$, $\{\beta_n\}$, $\{\gamma_n\}$, and $\{\delta_n\}$ satisfy conditions (i)-(iv) of Theorem 3.3 (with $k=0$). Then, both the sequences $\{x_n\}$ and $\{y_n\}$ converge weakly to an element $\hat{x}\in B^{-1}0\cap\Gamma$.

Proof. In Theorem 3.3, putting $C=H_1$ and $S=J_{B,r}$, the resolvent of $B$, we know that $P_C=I$, the identity mapping, and $J_{B,r}$ is nonexpansive. In this case, we get $\operatorname{Fix}(J_{B,r})=B^{-1}0$ and $\operatorname{Fix}(J_{B,r})\cap\Gamma=B^{-1}0\cap\Gamma$. By Theorem 3.3, we obtain the desired result.

Remark 3.11. In Corollary 3.10, putting $\gamma_n=0$ for every $n\ge 0$, Corollary 3.10 essentially reduces to [26, Corollary 3.3]. Hence, Corollary 3.10 includes [26, Corollary 3.3] as a special case.

On the other hand, by combining the regularization method and Jung's composite viscosity approximation method [9], we introduce another new composite iterative scheme for finding an element of $\operatorname{Fix}(S)\cap\Gamma$, where $S$ is nonexpansive, and prove strong convergence of this scheme. To attain this object, we need the following lemmas.

Lemma 3.12 (see [36]). Let $\{s_n\}$ be a sequence of nonnegative real numbers satisfying the property
$$s_{n+1}\ \le\ (1-\omega_n)s_n+\omega_n\eta_n+\tau_n,\quad n\ge 0,$$
where $\{\omega_n\}$, $\{\eta_n\}$, and $\{\tau_n\}$ are such that
(i) $\{\omega_n\}\subset[0,1]$ and $\sum_{n=0}^\infty\omega_n=\infty$;
(ii) either $\limsup_{n\to\infty}\eta_n\le 0$ or $\sum_{n=0}^\infty\omega_n|\eta_n|<\infty$;
(iii) $\tau_n\ge 0$ for all $n\ge 0$, where $\sum_{n=0}^\infty\tau_n<\infty$.
Then, $\lim_{n\to\infty}s_n=0$.

Lemma 3.13. In a real Hilbert space $H$, there holds the following inequality:
$$\|x+y\|^2\ \le\ \|x\|^2+2\langle y,\,x+y\rangle,\quad \forall x,y\in H.$$

Theorem 3.14. Let $Q\colon C\to C$ be a contractive mapping with coefficient $\rho\in(0,1)$ and let $S\colon C\to C$ be a nonexpansive mapping such that $\operatorname{Fix}(S)\cap\Gamma\neq\emptyset$. Assume that $\{\lambda_n\}\subset[a,b]$ for some $a,b\in\bigl(0,\frac{1}{\|A\|^2}\bigr)$ and $\{\beta_n\}\subset[0,c]$ for some $c\in(0,1)$, and let $\{x_n\}$ and $\{y_n\}$ be the sequences in $C$ generated by the following composite extragradient-like algorithm:
$$\begin{cases}x_0=x\in C,\\ y_n=\sigma_nQ(x_n)+(1-\sigma_n)SP_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(x_n)\bigr),\\ x_{n+1}=(1-\beta_n)y_n+\beta_nSP_C\bigl(y_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr),\quad n\ge 0,\end{cases} \tag{3.44}$$
where the sequences of parameters $\{\alpha_n\}$, $\{\sigma_n\}$, and $\{\beta_n\}$ satisfy the following conditions:
(i) $\sum_{n=0}^\infty\alpha_n<\infty$;
(ii) $\lim_{n\to\infty}\sigma_n=0$ and $\sum_{n=0}^\infty\sigma_n=\infty$;
(iii) $\sum_{n=0}^\infty|\sigma_{n+1}-\sigma_n|<\infty$ and $\sum_{n=0}^\infty|\beta_{n+1}-\beta_n|<\infty$.
Then, both the sequences $\{x_n\}$ and $\{y_n\}$ converge strongly to $q\in\operatorname{Fix}(S)\cap\Gamma$, which is the unique solution of the following variational inequality:
$$\langle (I-Q)q,\,p-q\rangle\ \ge\ 0,\quad \forall p\in\operatorname{Fix}(S)\cap\Gamma.$$

Proof. Repeating the same argument as in the proof of Theorem 3.3, we obtain that, for each $\lambda\in\bigl(0,\frac{2}{\alpha+\|A\|^2}\bigr)$, $P_C(I-\lambda\nabla f_\alpha)$ is $\zeta$-averaged with $\zeta=\frac{2+\lambda(\alpha+\|A\|^2)}{4}\in(0,1)$. This shows that $P_C(I-\lambda\nabla f_\alpha)$ is nonexpansive. Furthermore, for $\{\lambda_n\}\subset[a,b]$ with $a,b\in\bigl(0,\frac{1}{\|A\|^2}\bigr)$, utilizing the fact that $\lim_{n\to\infty}\frac{1}{\alpha_n+\|A\|^2}=\frac{1}{\|A\|^2}$, we may assume that $0<a\le\lambda_n\le b<\frac{1}{\alpha_n+\|A\|^2}$ for all $n\ge 0$. Consequently, it follows that, for each integer $n\ge 0$, $P_C(I-\lambda_n\nabla f_{\alpha_n})$ is $\zeta_n$-averaged with $\zeta_n=\frac{2+\lambda_n(\alpha_n+\|A\|^2)}{4}\in(0,1)$. This immediately implies that $P_C(I-\lambda_n\nabla f_{\alpha_n})$ is nonexpansive for all $n\ge 0$. Next, we divide the remainder of the proof into several steps.
Step 1. $\{x_n\}$ is bounded.
Indeed, put $t_n=P_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(x_n)\bigr)$ and $s_n=P_C\bigl(y_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr)$ for every $n\ge 0$. Take a fixed $p\in\operatorname{Fix}(S)\cap\Gamma$ arbitrarily. Then, we get $Sp=p$ and $p=P_C(p-\lambda_n\nabla f(p))$ for all $n\ge 0$. Hence, by the nonexpansivity of $P_C(I-\lambda_n\nabla f_{\alpha_n})$, we have $\|t_n-p\|\le\|x_n-p\|+\lambda_n\alpha_n\|p\|$; similarly, we get $\|s_n-p\|\le\|y_n-p\|+\lambda_n\alpha_n\|p\|$. Thus, from (3.44), we have an estimate of $\|y_n-p\|$ in terms of $\|x_n-p\|$, and hence an estimate of $\|x_{n+1}-p\|$ as well. By induction, we get that $\{x_n\}$ is bounded, and so are $\{y_n\}$, $\{t_n\}$, and $\{s_n\}$. It is clear that both $\{Q(x_n)\}$ and $\{St_n\}$ are also bounded. By condition (ii), we also obtain $\lim_{n\to\infty}\|y_n-St_n\|=0$.
Step 2. $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$.
Indeed, from (3.44), simple calculations show that, for every $n\ge 1$, $\|y_n-y_{n-1}\|\le\|x_n-x_{n-1}\|+e_n$, where, since $\{\sigma_n\}\subset[0,1]$ and by the boundedness established in Step 1, the error terms $e_n$ are controlled by $|\sigma_n-\sigma_{n-1}|$ and $\alpha_n+\alpha_{n-1}$ up to a bounded factor.
On the other hand, from (3.44), analogous calculations show that $\|x_{n+1}-x_n\|\le\|y_n-y_{n-1}\|+e_n'$, with error terms $e_n'$ of the same type. Combining the two estimates yields (3.57), a recursive inequality for $\|x_{n+1}-x_n\|$ of the form treated in Lemma 3.12. From conditions (i), (ii), and (iii), it is easy to see that the hypotheses of that lemma are fulfilled. Applying Lemma 3.12 to (3.61), we have $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$. From (3.57), we also have $\|y_n-y_{n-1}\|\to 0$ as $n\to\infty$.
Step 3. $\lim_{n\to\infty}\|x_n-y_n\|=0$.
Indeed, it follows from (3.44) that $x_{n+1}-y_n=\beta_n(Ss_n-y_n)$, which implies that $\|x_n-y_n\|\le\|x_n-x_{n+1}\|+\beta_n\|Ss_n-y_n\|$. Obviously, utilizing (3.53), $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, and $\lim_{n\to\infty}\sigma_n=0$, we have $\|x_n-y_n\|\to 0$ as $n\to\infty$. This implies that $\lim_{n\to\infty}\|x_n-St_n\|=0$. From (3.53) and (3.66), we also have $\lim_{n\to\infty}\|y_n-Ss_n\|=0$.
Step 4. $\lim_{n\to\infty}\|x_n-t_n\|=0$.
Indeed, take a fixed $p\in\operatorname{Fix}(S)\cap\Gamma$ arbitrarily. Then, by the convexity of $\|\cdot\|^2$, we derive an estimate of $\|x_{n+1}-p\|^2$ from which the gradient term can be isolated. Since $\sigma_n\to 0$, $\|x_{n+1}-x_n\|\to 0$, and $\{\lambda_n\}\subset[a,b]$, from the boundedness of $\{x_n\}$ and $\{y_n\}$, it follows that $\lim_{n\to\infty}\|\nabla f_{\alpha_n}(x_n)-\nabla f_{\alpha_n}(p)\|=0$, and hence $\lim_{n\to\infty}\|\nabla f(x_n)-\nabla f(p)\|=0$. Moreover, from the firm nonexpansiveness of $P_C$, we obtain an estimate of $\|t_n-p\|^2$ by $\|x_n-p\|^2$ minus $\|x_n-t_n\|^2$ plus vanishing terms. Thus, we obtain an estimate of $\|x_{n+1}-p\|^2$ which implies that $\|x_n-t_n\|^2$ is controlled by vanishing quantities. Since $\sigma_n\to 0$, $\|x_{n+1}-x_n\|\to 0$, and $\|\nabla f(x_n)-\nabla f(p)\|\to 0$, from the boundedness of $\{x_n\}$, $\{y_n\}$, and $\{t_n\}$, it follows that $\lim_{n\to\infty}\|x_n-t_n\|=0$.
Step 5. $\limsup_{n\to\infty}\langle (Q-I)q,\,x_n-q\rangle\le 0$, where $q\in\operatorname{Fix}(S)\cap\Gamma$ is the unique solution of the variational inequality
$$\langle (I-Q)q,\,p-q\rangle\ \ge\ 0,\quad \forall p\in\operatorname{Fix}(S)\cap\Gamma.$$
Indeed, we choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that
$$\limsup_{n\to\infty}\langle (Q-I)q,\,x_n-q\rangle=\lim_{i\to\infty}\langle (Q-I)q,\,x_{n_i}-q\rangle.$$
Since $\{x_{n_i}\}$ is bounded, there exists a subsequence of $\{x_{n_i}\}$ which converges weakly to some $\tilde{q}$. Without loss of generality, we may assume that $x_{n_i}\rightharpoonup\tilde{q}$. Then we can obtain $\tilde{q}\in\operatorname{Fix}(S)\cap\Gamma$. Let us first show that $\tilde{q}\in\Gamma$. Define
$$Tv=\begin{cases}\nabla f(v)+N_Cv, & v\in C,\\ \emptyset, & v\notin C,\end{cases}$$
where $N_Cv$ is the normal cone to $C$ at $v$. Then, $T$ is maximal monotone and $0\in Tv$ if and only if $v\in\operatorname{VI}(C,\nabla f)$; see [34] for more details. Let $(v,w)\in\operatorname{graph}(T)$. Then, we have $w\in Tv=\nabla f(v)+N_Cv$, and hence, $w-\nabla f(v)\in N_Cv$. So, we have $\langle v-u,\,w-\nabla f(v)\rangle\ge 0$ for all $u\in C$. On the other hand, from $t_n=P_C\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(x_n)\bigr)$ and $v\in C$, we have
$$\bigl\langle x_n-\lambda_n\nabla f_{\alpha_n}(x_n)-t_n,\,t_n-v\bigr\rangle\ \ge\ 0,$$
and hence,
$$\Bigl\langle v-t_n,\,\frac{t_n-x_n}{\lambda_n}+\nabla f_{\alpha_n}(x_n)\Bigr\rangle\ \ge\ 0.$$
Therefore, combining these relations with the monotonicity of $\nabla f$ and letting $i\to\infty$, we obtain $\langle v-\tilde{q},\,w\rangle\ge 0$. Since $T$ is maximal monotone, we have $\tilde{q}\in T^{-1}0$, and hence, $\tilde{q}\in\operatorname{VI}(C,\nabla f)$. Thus it is clear that $\tilde{q}\in\Gamma$.
On the other hand, by Steps 3 and 4,
$$\|x_n-Sx_n\|\ \le\ \|x_n-St_n\|+\|St_n-Sx_n\|\ \le\ \|x_n-St_n\|+\|t_n-x_n\|\ \to\ 0.$$
So, by Lemma 2.7(ii), we derive $\tilde{q}\in\operatorname{Fix}(S)$, and hence $\tilde{q}\in\operatorname{Fix}(S)\cap\Gamma$. Then, from (3.76) and (3.77), we have
$$\limsup_{n\to\infty}\langle (Q-I)q,\,x_n-q\rangle=\lim_{i\to\infty}\langle (Q-I)q,\,x_{n_i}-q\rangle=\langle (Q-I)q,\,\tilde{q}-q\rangle\ \le\ 0.$$
Thus, from (3.53), we obtain $\limsup_{n\to\infty}\langle (Q-I)q,\,y_n-q\rangle\le 0$.
Step 6. $\lim_{n\to\infty}\|x_n-q\|=0$, where $q\in\operatorname{Fix}(S)\cap\Gamma$ is the unique solution of the variational inequality in Step 5.
Indeed, utilizing (3.49), (3.51), and Lemma 3.13, we obtain, for every $n\ge 0$, an estimate of the form
$$\|x_{n+1}-q\|^2\ \le\ (1-\omega_n)\|x_n-q\|^2+\omega_n\eta_n+\tau_n,$$
where $\omega_n$, $\eta_n$, and $\tau_n$ are built from $\sigma_n$, $\alpha_n$, and the inner products estimated in Step 5. Now, put $a_n=\|x_n-q\|^2$. Then (3.91) is rewritten as $a_{n+1}\le(1-\omega_n)a_n+\omega_n\eta_n+\tau_n$. It is easy to see that $\sum_{n=0}^\infty\omega_n=\infty$, $\limsup_{n\to\infty}\eta_n\le 0$, and $\sum_{n=0}^\infty\tau_n<\infty$. Thus, by Lemma 3.12, we obtain $a_n\to 0$; that is, $x_n\to q$, and by Step 3 also $y_n\to q$. This completes the proof.

Corollary 3.15. Let $Q\colon H_1\to H_1$ be a contractive mapping with coefficient $\rho\in(0,1)$ and let $S\colon H_1\to H_1$ be a nonexpansive mapping such that $\operatorname{Fix}(S)\cap\Gamma\neq\emptyset$. Assume that $\{\lambda_n\}\subset[a,b]$ for some $a,b\in\bigl(0,\frac{1}{\|A\|^2}\bigr)$ and $\{\beta_n\}\subset[0,c]$ for some $c\in(0,1)$, and let $\{x_n\}$ and $\{y_n\}$ be the sequences in $H_1$ generated by
$$\begin{cases}x_0=x\in H_1,\\ y_n=\sigma_nQ(x_n)+(1-\sigma_n)S\bigl(x_n-\lambda_n\nabla f_{\alpha_n}(x_n)\bigr),\\ x_{n+1}=(1-\beta_n)y_n+\beta_nS\bigl(y_n-\lambda_n\nabla f_{\alpha_n}(y_n)\bigr),\quad n\ge 0,\end{cases}$$
where the sequences of parameters $\{\alpha_n\}$, $\{\sigma_n\}$, and $\{\beta_n\}$ satisfy conditions (i)-(iii) of Theorem 3.14. Then, both the sequences $\{x_n\}$ and $\{y_n\}$ converge strongly to $q\in\operatorname{Fix}(S)\cap\Gamma$, which is the unique solution of the following variational inequality:
$$\langle (I-Q)q,\,p-q\rangle\ \ge\ 0,\quad \forall p\in\operatorname{Fix}(S)\cap\Gamma.$$

Proof. In Theorem 3.14, putting $C=H_1$, we deduce that $P_C=I$, the identity mapping, and the scheme (3.44) reduces to the one above for every $n\ge 0$. Then, by Theorem 3.14, we obtain the desired result.

Remark 3.16. Theorem 3.14 improves and develops [25, Theorem 5.7], [26, Theorem 3.1], and [9, Theorem 3.1] in the following aspects.
(a) The corresponding iterative algorithm in [9, Theorem 3.1] is extended for developing our composite extragradient-like algorithm with regularization in Theorem 3.14.
(b) The technique of proving strong convergence in Theorem 3.14 is very different from those in [25, Theorem 5.7], [26, Theorem 3.1], and [9, Theorem 3.1] because our technique depends on Lemmas 3.12 and 3.13.
(c) Compared with [25, Theorem 5.7] and [26, Theorem 3.1], which are weak convergence results, Theorem 3.14 is a strong convergence result; thus, Theorem 3.14 is quite interesting and valuable.
(d) In [9, Theorem 3.1], Jung actually introduced the following composite iterative algorithm:
$$\begin{cases}x_0=x\in C,\\ y_n=\sigma_nQ(x_n)+(1-\sigma_n)SP_C(x_n-\lambda_nFx_n),\\ x_{n+1}=(1-\beta_n)y_n+\beta_nSP_C(y_n-\lambda_nFy_n),\quad n\ge 0,\end{cases}$$
where $F$ is inverse strongly monotone and $S$ is nonexpansive. Now, via replacing $F$ by $\nabla f_{\alpha_n}$, we obtain the composite extragradient-like algorithm in Theorem 3.14. Consequently, this algorithm is very different from Jung's algorithm.
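To make the comparison with Jung's scheme concrete, the following minimal NumPy sketch (ours; the parameter choices are illustrative assumptions, not the theorem's exact conditions) implements the composite iteration of Theorem 3.14; the contraction $Q$ of the text is passed as Qmap to avoid a name clash with the set $Q$:

```python
import numpy as np

def composite_viscosity(A, proj_C, proj_Q, S, Qmap, x0, iters=200):
    """Sketch of the composite extragradient-like iteration of Theorem 3.14:
       y_n     = sigma_n Qmap(x_n) + (1-sigma_n) S P_C(x_n - lam grad f_alpha(x_n))
       x_{n+1} = (1-beta_n) y_n + beta_n S P_C(y_n - lam grad f_alpha(y_n))."""
    norm_A2 = np.linalg.norm(A, 2) ** 2
    x = x0
    for n in range(iters):
        alpha = 1.0 / (n + 1) ** 2     # summable regularization parameters
        sigma = 1.0 / (n + 2)          # sigma_n -> 0 and sum sigma_n = infinity
        beta = 0.5                     # illustrative constant relaxation
        lam = 0.5 / norm_A2            # step size inside (0, 1/||A||^2)
        def step(u):                   # u -> S P_C(u - lam * grad f_alpha(u))
            Au = A @ u
            g = A.T @ (Au - proj_Q(Au)) + alpha * u
            return S(proj_C(u - lam * g))
        y = sigma * Qmap(x) + (1 - sigma) * step(x)
        x = (1 - beta) * y + beta * step(y)
    return x
```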

Furthermore, utilizing Jung [9, Theorem 3.1], we can immediately obtain the following strong convergence result.

Theorem 3.17. Let $Q\colon C\to C$ be a contractive mapping with coefficient $\rho\in(0,1)$ and let $S\colon C\to C$ be a nonexpansive mapping such that $\operatorname{Fix}(S)\cap\Gamma\neq\emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be the sequences in $C$ generated by the following composite extragradient-like algorithm:
$$\begin{cases}x_0=x\in C,\\ y_n=\sigma_nQ(x_n)+(1-\sigma_n)SP_C\bigl(x_n-\lambda_n\nabla f(x_n)\bigr),\\ x_{n+1}=(1-\beta_n)y_n+\beta_nSP_C\bigl(y_n-\lambda_n\nabla f(y_n)\bigr),\quad n\ge 0,\end{cases}$$
where the sequences of parameters $\{\sigma_n\}$, $\{\beta_n\}$, and $\{\lambda_n\}$ satisfy the following conditions:
(i) $\lim_{n\to\infty}\sigma_n=0$, $\sum_{n=0}^\infty\sigma_n=\infty$, and $\sum_{n=0}^\infty|\sigma_{n+1}-\sigma_n|<\infty$;
(ii) $\{\beta_n\}\subset[0,c]$ for some $c\in(0,1)$ and $\sum_{n=0}^\infty|\beta_{n+1}-\beta_n|<\infty$;
(iii) $\{\lambda_n\}\subset[a,b]$ for some $a,b\in\bigl(0,\frac{2}{\|A\|^2}\bigr)$ and $\sum_{n=0}^\infty|\lambda_{n+1}-\lambda_n|<\infty$.
Then, both the sequences $\{x_n\}$ and $\{y_n\}$ converge strongly to $q\in\operatorname{Fix}(S)\cap\Gamma$, which is the unique solution of the following variational inequality:
$$\langle (I-Q)q,\,p-q\rangle\ \ge\ 0,\quad \forall p\in\operatorname{Fix}(S)\cap\Gamma.$$

Remark 3.18. It is not hard to see that $\nabla f=A^*(I-P_Q)A$ is $\frac{1}{\|A\|^2}$-ism. Thus, Theorem 3.17 is an immediate consequence of Jung [9, Theorem 3.1].
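For completeness, here is a short verification of this fact, using only the firm nonexpansiveness of $I-P_Q$ (Proposition 2.6(ii)) and the bound $\|A^*z\|\le\|A\|\,\|z\|$:
$$\begin{aligned}
\langle\nabla f(x)-\nabla f(y),\,x-y\rangle
&=\langle (I-P_Q)Ax-(I-P_Q)Ay,\,Ax-Ay\rangle\\
&\ge\|(I-P_Q)Ax-(I-P_Q)Ay\|^2\\
&\ge\frac{1}{\|A\|^2}\,\bigl\|A^*\bigl[(I-P_Q)Ax-(I-P_Q)Ay\bigr]\bigr\|^2
=\frac{1}{\|A\|^2}\,\|\nabla f(x)-\nabla f(y)\|^2.
\end{aligned}$$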

Acknowledgments

In this research, the first and second authors were partially supported by the National Science Foundation of China (11071169), the Innovation Program of Shanghai Municipal Education Commission (09ZZ133), and the Leading Academic Discipline Project of Shanghai Normal University (DZL707). The third author was partially supported by Grant NSC 101-2115-M-037-001.