Abstract

The purpose of this paper is to introduce and analyze Mann-type extragradient iterative algorithms with regularization for finding a common element of the solution set of a general system of variational inequalities, the solution set of a split feasibility problem, and the fixed point set of a strictly pseudocontractive mapping in the setting of Hilbert spaces. These iterative algorithms are based on the regularization method, the Mann-type iteration method, and the extragradient method due to Nadezhkina and Takahashi (2006). Furthermore, we prove that the sequences generated by the proposed algorithms converge weakly to a common element of these three sets under mild conditions.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$. The projection (nearest point or metric projection) of $H$ onto $C$ is denoted by $P_C$. Let $S: C \to C$ be a mapping and $\operatorname{Fix}(S)$ be the set of fixed points of $S$. For a given nonlinear operator $A: C \to H$, we consider the following variational inequality problem (VIP) of finding $x^* \in C$ such that $\langle Ax^*, x - x^*\rangle \ge 0$ for all $x \in C$. The solution set of VIP (1) is denoted by $\operatorname{VI}(C, A)$. The theory of variational inequalities has been studied quite extensively and has emerged as an important tool in the study of a wide class of problems from mechanics, optimization, engineering, science, and the social sciences. It is well known that the VIP is equivalent to a fixed point problem. This alternative formulation has been used to suggest and analyze projection iterative methods for solving variational inequalities under the condition that the involved operator is strongly monotone and Lipschitz continuous. In the recent past, many authors have studied and proposed iterative methods for finding a solution of a variational inequality which is also a fixed point of a nonexpansive mapping or a strictly pseudocontractive mapping; see, for example, [1–9] and the references therein.

For finding an element of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$ when $C$ is closed and convex, $S$ is nonexpansive, and $A$ is $\alpha$-inverse strongly monotone, Takahashi and Toyoda [10] introduced a Mann-type iterative algorithm (a standard formulation is recalled below), in which $P_C$ is the metric projection of $H$ onto $C$, the initial point $x_0 \in C$ is chosen arbitrarily, $\{\alpha_n\}$ is a sequence in $(0, 1)$, and $\{\lambda_n\}$ is a sequence in $(0, 2\alpha)$. They showed that if $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A) \ne \emptyset$, then the sequence $\{x_n\}$ converges weakly to some $z \in \operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$. Nadezhkina and Takahashi [9] and Zeng and Yao [8] proposed extragradient methods motivated by Korpelevič [11] for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of a variational inequality problem. Further, these iterative methods were extended in [12] to develop a new iterative method for finding elements of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$.
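In standard notation (our reconstruction; see [10] for the precise statement and parameter conditions), the Takahashi–Toyoda scheme referred to above reads
\[
x_{n+1} = \alpha_n x_n + (1 - \alpha_n)\, S P_C\bigl(x_n - \lambda_n A x_n\bigr), \qquad n \ge 0,
\]
that is, a projected-gradient step for the VIP followed by a Mann-type averaging with the current iterate.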

Let $B_1, B_2: C \to H$ be two mappings. Recently, Ceng et al. [4] introduced and considered the problem of finding $(x^*, y^*) \in C \times C$ such that
\[
\langle \mu_1 B_1 y^* + x^* - y^*, x - x^* \rangle \ge 0 \quad \forall x \in C, \qquad
\langle \mu_2 B_2 x^* + y^* - x^*, x - y^* \rangle \ge 0 \quad \forall x \in C,
\]
which is called a general system of variational inequalities (GSVI), where $\mu_1 > 0$ and $\mu_2 > 0$ are two constants. The set of solutions of problem (3) is denoted by $\operatorname{GSVI}(C, B_1, B_2)$. In particular, if $B_1 = B_2$, then problem (3) reduces to the new system of variational inequalities (NSVI), introduced and studied by Verma [13]. Further, if additionally $x^* = y^*$, then the NSVI reduces to VIP (1).

Recently, Ceng et al. [4] transformed problem (3) into a fixed point problem in the following way.

Lemma 1 (see [4]). For given $\bar{x}, \bar{y} \in C$, $(\bar{x}, \bar{y})$ is a solution of problem (3) if and only if $\bar{x}$ is a fixed point of the mapping $G: C \to C$ defined by
\[
G(x) = P_C\bigl[P_C(x - \mu_2 B_2 x) - \mu_1 B_1 P_C(x - \mu_2 B_2 x)\bigr], \qquad x \in C,
\]
where $\bar{y} = P_C(\bar{x} - \mu_2 B_2 \bar{x})$.

In particular, if the mapping $B_i$ is $\beta_i$-inverse strongly monotone for $i = 1, 2$, then the mapping $G$ is nonexpansive provided $\mu_i \in (0, 2\beta_i]$ for $i = 1, 2$.

Utilizing Lemma 1, they introduced and studied a relaxed extragradient method for solving GSVI (3).

Throughout this paper, unless otherwise specified, the set of fixed points of a mapping $S$ is denoted by $\operatorname{Fix}(S)$. Based on the relaxed extragradient method and the viscosity approximation method, Yao et al. [7] proposed and analyzed an iterative algorithm for finding a common solution of GSVI (3) and the fixed point problem of a strictly pseudocontractive mapping $S: C \to C$, where $C$ is a nonempty bounded closed convex subset of a real Hilbert space $H$.

Subsequently, Ceng et al. [14] further presented and analyzed an iterative scheme for finding a common element of the solution set of VIP (1), the solution set of GSVI (3), and the fixed point set of a strictly pseudocontractive mapping $S: C \to C$.

Theorem 2 (see [14, Theorem 3.1]). Let be a nonempty closed convex subset of a real Hilbert space . Let be -inverse strongly monotone, and let be -inverse strongly monotone for . Let be a -strictly pseudocontractive mapping such that . Let be a -contraction with . For given arbitrarily, let the sequences , , and be generated iteratively by where for , and , , , such that(i) and for all ;(ii) and ;(iii) and ;(iv);(v) and . Then the sequence generated by (5) converges strongly to and is a solution of GSVI (3), where .

On the other hand, let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. The split feasibility problem (SFP) is to find a point $x^*$ with the property
\[
x^* \in C \quad \text{and} \quad Ax^* \in Q,
\]
where $A \in \mathcal{B}(H_1, H_2)$ and $\mathcal{B}(H_1, H_2)$ denotes the family of all bounded linear operators from $H_1$ to $H_2$.

In 1994, the SFP was first introduced by Censor and Elfving [15], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrieval and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, [16] and the references therein. Recently, it has been found that the SFP can also be applied to study intensity-modulated radiation therapy; see, for example, [17–19] and the references therein. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, for example, [16–26] and the references therein. A special case of the SFP is the following convex constrained linear inverse problem [27] of finding an element $x \in C$ such that $Ax = b$. It has been extensively investigated in the literature using the projected Landweber iterative method [28]. Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set $Q$. Therefore, whether various versions of the projected Landweber iterative method [28] can be extended to solve the SFP remains an interesting open topic. For example, it is not yet clear whether the dual approach to (7) of [29] can be extended to the SFP. The original algorithm given in [15] involves the computation of the inverse $A^{-1}$ (assuming the existence of the inverse of $A$) and thus has not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [16, 21], which is found to be a gradient-projection method (GPM) in convex minimization. It is also a special case of the proximal forward-backward splitting method [30]. The CQ algorithm involves only the computation of the projections $P_C$ and $P_Q$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions; for example, when $C$ and $Q$ are closed balls or half-spaces. However, it remains a challenge how to implement the CQ algorithm in the case where the projections $P_C$ and/or $P_Q$ fail to have closed-form expressions, though theoretically we can prove the (weak) convergence of the algorithm.
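To make the preceding discussion concrete, here is a minimal Python sketch of the CQ/gradient-projection iteration $x_{n+1} = P_C\bigl(x_n - \gamma A^{*}(I - P_Q)Ax_n\bigr)$ for a case in which both projections have closed forms ($C$ a box, $Q$ a closed ball). All problem data below (the matrix, the sets, the step size) are hypothetical and chosen only for illustration; the restriction $0 < \gamma < 2/\|A\|^{2}$ is the standard step size condition for this method.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto the box C = {x : lo <= x <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

def proj_ball(y, center, radius):
    """Projection onto the closed ball Q = {y : ||y - center|| <= radius}."""
    d = y - center
    nd = np.linalg.norm(d)
    return y if nd <= radius else center + radius * d / nd

def cq_algorithm(A, x0, proj_C, proj_Q, gamma, max_iter=1000, tol=1e-8):
    """CQ iteration x_{n+1} = P_C(x_n - gamma * A^T (Ax_n - P_Q(Ax_n)))."""
    x = x0.copy()
    for _ in range(max_iter):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))   # gradient of f(x) = 0.5*||Ax - P_Q(Ax)||^2
        x_new = proj_C(x - gamma * grad)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Hypothetical data (illustration only): a 3x5 operator, box C = [-1, 1]^5, unit ball Q.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
gamma = 1.0 / np.linalg.norm(A, 2) ** 2       # satisfies 0 < gamma < 2 / ||A||^2
x = cq_algorithm(A, rng.standard_normal(5),
                 lambda z: proj_box(z, -1.0, 1.0),
                 lambda z: proj_ball(z, np.zeros(3), 1.0),
                 gamma)
print(x, np.linalg.norm(A @ x - proj_ball(A @ x, np.zeros(3), 1.0)))
```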

Very recently, Xu [20] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann's algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He also established a strong convergence result, which shows that the minimum-norm solution can be obtained.

Furthermore, Korpelevič [11] introduced the so-called extragradient method for finding a solution of a saddle point problem and proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.
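As an illustration, the following Python sketch implements the extragradient iteration $y_n = P_C(x_n - \lambda A x_n)$, $x_{n+1} = P_C(x_n - \lambda A y_n)$ for a monotone, Lipschitz continuous operator $A$ with a fixed step size $\lambda \in (0, 1/L)$, $L$ being the Lipschitz constant of $A$. The affine operator and the set $C$ below are our own illustrative choices, not data from the paper.

```python
import numpy as np

def extragradient(A_op, proj_C, x0, lam, max_iter=10000, tol=1e-10):
    """Korpelevich extragradient: y = P_C(x - lam*A(x)); x_next = P_C(x - lam*A(y))."""
    x = x0.copy()
    for _ in range(max_iter):
        y = proj_C(x - lam * A_op(x))       # predictor (extra-gradient) step
        x_new = proj_C(x - lam * A_op(y))   # corrector step uses A evaluated at y
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Illustrative monotone affine operator A(x) = Mx + q (symmetric part of M is positive definite)
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, 0.5])
A_op = lambda x: M @ x + q
proj_C = lambda x: np.clip(x, 0.0, None)    # C = nonnegative orthant
L = np.linalg.norm(M, 2)                    # Lipschitz constant of A
print(extragradient(A_op, proj_C, np.ones(2), lam=0.5 / L))
```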

Throughout this paper, we assume that the SFP is consistent; that is, the solution set $\Gamma$ of the SFP is nonempty. Let $f: H_1 \to \mathbb{R}$ be the continuously differentiable function defined by $f(x) = \frac{1}{2}\|Ax - P_Q Ax\|^2$, $x \in H_1$. The minimization problem $\min_{x \in C} f(x)$ is ill posed. Therefore, Xu [20] considered the following Tikhonov regularization problem:
\[
\min_{x \in C} f_\alpha(x) := \frac{1}{2}\|Ax - P_Q Ax\|^2 + \frac{1}{2}\alpha\|x\|^2,
\]
where $\alpha > 0$ is the regularization parameter. The regularized minimization (9) has a unique solution, which is denoted by $x_\alpha$. The following results are easy to prove.

Proposition 3 (see [31, Proposition 3.1]). Given $x^* \in H_1$, the following statements are equivalent: (i) $x^*$ solves the SFP; (ii) $x^*$ solves the fixed point equation
\[
P_C(I - \lambda\nabla f)x^* = x^*,
\]
where $\lambda > 0$, $\nabla f = A^*(I - P_Q)A$, and $A^*$ is the adjoint of $A$; (iii) $x^*$ solves the variational inequality problem (VIP) of finding $x^* \in C$ such that
\[
\langle \nabla f(x^*), x - x^* \rangle \ge 0 \quad \forall x \in C.
\]

It is clear from Proposition 3 that $\Gamma = \operatorname{Fix}(P_C(I - \lambda\nabla f)) = \operatorname{VI}(C, \nabla f)$ for all $\lambda > 0$, where $\operatorname{Fix}(P_C(I - \lambda\nabla f))$ and $\operatorname{VI}(C, \nabla f)$ denote the set of fixed points of $P_C(I - \lambda\nabla f)$ and the solution set of VIP (11), respectively.

Proposition 4 (see [31]). The following statements hold: (i) the gradient
\[
\nabla f_\alpha = \nabla f + \alpha I = A^*(I - P_Q)A + \alpha I
\]
is $(\alpha + \|A\|^2)$-Lipschitz continuous and $\alpha$-strongly monotone; (ii) the mapping $P_C(I - \lambda\nabla f_\alpha)$ is a contraction for all sufficiently small $\lambda > 0$ (see the estimate below); (iii) if the SFP is consistent, then the strong limit $\lim_{\alpha \to 0} x_\alpha$ exists and is the minimum-norm solution of the SFP.
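For the reader's convenience, the contraction property in (ii) can be seen from the following standard estimate (a sketch under the assumptions of Proposition 4, writing $L_\alpha := \|A\|^2 + \alpha$ for the Lipschitz constant from (i)):
\[
\|(I - \lambda\nabla f_\alpha)x - (I - \lambda\nabla f_\alpha)y\|^2
= \|x - y\|^2 - 2\lambda\langle\nabla f_\alpha(x) - \nabla f_\alpha(y), x - y\rangle + \lambda^2\|\nabla f_\alpha(x) - \nabla f_\alpha(y)\|^2
\le \bigl(1 - 2\lambda\alpha + \lambda^2 L_\alpha^2\bigr)\|x - y\|^2,
\]
so that, $P_C$ being nonexpansive, $P_C(I - \lambda\nabla f_\alpha)$ is a contraction whenever $0 < \lambda < 2\alpha/L_\alpha^2$.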

Very recently, by combining the regularization method and the extragradient method due to Nadezhkina and Takahashi [32], Ceng et al. [31] proposed an extragradient algorithm with regularization and proved that the sequences generated by the proposed algorithm converge weakly to an element of $\operatorname{Fix}(S) \cap \Gamma$, where $S: C \to C$ is a nonexpansive mapping.

Theorem 5 (see [31, Theorem 3.1]). Let $S: C \to C$ be a nonexpansive mapping such that $\operatorname{Fix}(S) \cap \Gamma \ne \emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be the sequences in $C$ generated by an extragradient algorithm with regularization (a representative form is displayed below), where $\sum_{n=0}^{\infty}\alpha_n < \infty$, $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1/\|A\|^2)$, and $\{\beta_n\} \subset [c, d]$ for some $c, d \in (0, 1)$. Then both sequences $\{x_n\}$ and $\{y_n\}$ converge weakly to an element $\hat{x} \in \operatorname{Fix}(S) \cap \Gamma$.
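A representative form of such a regularized extragradient iteration, written in the notation above and following the Nadezhkina–Takahashi pattern (the exact scheme in [31] may differ in minor details), is
\[
y_n = P_C\bigl(x_n - \lambda_n \nabla f_{\alpha_n}(x_n)\bigr), \qquad
x_{n+1} = \beta_n x_n + (1 - \beta_n)\, S P_C\bigl(x_n - \lambda_n \nabla f_{\alpha_n}(y_n)\bigr), \qquad n \ge 0,
\]
where $\nabla f_{\alpha_n} = A^{*}(I - P_Q)A + \alpha_n I$ is the gradient of the regularized objective.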

Motivated and inspired by the research going on in this area, we propose and analyze the following Mann-type extragradient iterative algorithms with regularization for finding a common element of the solution set of the GSVI (3), the solution set of the SFP (6), and the fixed point set of a strictly pseudocontractive mapping $S: C \to C$.

Algorithm 6. Let for , , and such that and for all . For given arbitrarily, let , , be the sequences generated by the Mann-type extragradient iterative scheme with regularization
Under appropriate assumptions, it is proven that all the sequences , , converge weakly to an element . Furthermore, is a solution of the GSVI (3), where .

Algorithm 7. Let for , , and such that for all . For given arbitrarily, let be the sequences generated by the Mann-type extragradient iterative scheme with regularization
Also, under mild conditions, it is shown that all the sequences , , converge weakly to an element . Furthermore, is a solution of the GSVI (3), where .
Observe that both [20, Theorem 5.7] and [31, Theorem 3.1] are weak convergence results for solving the SFP, and so are our results. But our problem of finding a common element of the solution set of the GSVI (3), the solution set of the SFP (6), and the fixed point set of $S$ is more general than the corresponding problems in [20, Theorem 5.7] and [31, Theorem 3.1], respectively. Hence, there is no doubt that our weak convergence results are very interesting and quite valuable. Because the Mann-type extragradient iterative schemes (16) and (17) with regularization involve two inverse strongly monotone mappings $B_1$ and $B_2$, a $k$-strictly pseudocontractive self-mapping $S$, and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [20, Theorem 5.7] and [31, Theorem 3.1], respectively. Furthermore, the hybrid extragradient iterative scheme (5) is extended to develop the Mann-type extragradient iterative schemes (16) and (17) with regularization. In our results, the Mann-type extragradient iterative schemes (16) and (17) with regularization do not require the domain on which the various mappings are defined to be bounded; compare, for example, Yao et al. [7, Theorem 3.2]. Therefore, our results represent the modification, supplementation, extension, and improvement of [20, Theorem 5.7], [31, Theorem 3.1], [14, Theorem 3.1], and [7, Theorem 3.2].

2. Preliminaries

Let $H$ be a real Hilbert space, whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty, closed, and convex subset of $H$. Now we present some known definitions and results which will be used in the sequel.

The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C: H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
\[
\|x - P_C x\| = \inf_{y \in C}\|x - y\| =: d(x, C).
\]

Some important properties of projections are gathered in the following proposition.

Proposition 8. For given $x \in H$ and $z \in C$: (i) $z = P_C x$ if and only if $\langle x - z, y - z \rangle \le 0$ for all $y \in C$; (ii) $z = P_C x$ if and only if $\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$ for all $y \in C$; (iii) $\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$, which hence implies that $P_C$ is nonexpansive and monotone.
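As a small illustration (our own example, not part of the paper), the following Python snippet evaluates the closed-form projections onto a closed ball and onto a half-space and numerically checks the inequality in Proposition 8(iii) on random points.

```python
import numpy as np

def proj_ball(x, center, radius):
    """P_C for the closed ball C = {x : ||x - center|| <= radius}."""
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + radius * d / nd

def proj_halfspace(x, a, b):
    """P_C for the half-space C = {x : <a, x> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

rng = np.random.default_rng(1)
projections = (lambda z: proj_ball(z, np.zeros(3), 1.0),
               lambda z: proj_halfspace(z, np.array([1.0, -2.0, 0.5]), 1.0))
for proj in projections:
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    Px, Py = proj(x), proj(y)
    # Proposition 8(iii): <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2,
    # which in particular implies that P_C is nonexpansive and monotone.
    assert (Px - Py) @ (x - y) >= np.linalg.norm(Px - Py) ** 2 - 1e-12
print("Proposition 8(iii) verified on random sample points")
```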

Definition 9. A mapping $T: H \to H$ is said to be (a) nonexpansive if
\[
\|Tx - Ty\| \le \|x - y\| \quad \forall x, y \in H;
\]
(b) firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently,
\[
\langle x - y, Tx - Ty \rangle \ge \|Tx - Ty\|^2 \quad \forall x, y \in H;
\]
alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T = \frac{1}{2}(I + S)$, where $S: H \to H$ is nonexpansive; projections are firmly nonexpansive.

Definition 10. Let $T$ be a nonlinear operator with domain $D(T) \subseteq H$ and range $R(T) \subseteq H$. (a) $T$ is said to be monotone if
\[
\langle x - y, Tx - Ty \rangle \ge 0 \quad \forall x, y \in D(T).
\]
(b) Given a number $\eta > 0$, $T$ is said to be $\eta$-strongly monotone if
\[
\langle x - y, Tx - Ty \rangle \ge \eta\|x - y\|^2 \quad \forall x, y \in D(T).
\]
(c) Given a number $\nu > 0$, $T$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if
\[
\langle x - y, Tx - Ty \rangle \ge \nu\|Tx - Ty\|^2 \quad \forall x, y \in D(T).
\]
It can be easily seen that if $T$ is nonexpansive, then $I - T$ is monotone. It is also easy to see that a projection $P_C$ is 1-ism.
Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see, for example, [33, 34].

Definition 11. A mapping $T: H \to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping, that is,
\[
T \equiv (1 - \alpha)I + \alpha S,
\]
where $\alpha \in (0, 1)$ and $S: H \to H$ is nonexpansive. More precisely, when the last equality holds, we say that $T$ is $\alpha$-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are $\frac{1}{2}$-averaged maps.

Proposition 12 (see [21]). Let $T: H \to H$ be a given mapping. (i) $T$ is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism. (ii) If $T$ is $\nu$-ism, then for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism. (iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > 1/2$. Indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.

Proposition 13 (see [21, 35]). Let $S, T, V: H \to H$ be given operators. (i) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged. (ii) $T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive. (iii) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged. (iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$. (v) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then
\[
\bigcap_{i=1}^{N} \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N).
\]
The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping $T$, that is, $\operatorname{Fix}(T) = \{x \in H : Tx = x\}$.
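For example, by item (iv), the composition $P_{C_1}P_{C_2}$ of two projections (each firmly nonexpansive and hence $\frac{1}{2}$-averaged) is $\alpha$-averaged with
\[
\alpha = \tfrac{1}{2} + \tfrac{1}{2} - \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{3}{4}.
\]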

Recall that a mapping $T: C \to C$ is called $k$-strictly pseudocontractive if there exists a constant $k \in [0, 1)$ such that
\[
\|Tx - Ty\|^2 \le \|x - y\|^2 + k\|(I - T)x - (I - T)y\|^2 \quad \forall x, y \in C.
\]
It is clear that, in a real Hilbert space $H$, $T$ is $k$-strictly pseudocontractive if and only if the following inequality holds:
\[
\langle Tx - Ty, x - y \rangle \le \|x - y\|^2 - \frac{1 - k}{2}\|(I - T)x - (I - T)y\|^2 \quad \forall x, y \in C.
\]

This immediately implies that if $T$ is a $k$-strictly pseudocontractive mapping, then $I - T$ is $\frac{1 - k}{2}$-inverse strongly monotone; for further details, we refer to [9] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings.
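Indeed, rearranging the equivalent inequality above gives, for all $x, y \in C$,
\[
\langle (I - T)x - (I - T)y, x - y \rangle = \|x - y\|^2 - \langle Tx - Ty, x - y \rangle \ge \frac{1 - k}{2}\|(I - T)x - (I - T)y\|^2,
\]
which is exactly the $\frac{1 - k}{2}$-inverse strong monotonicity of $I - T$.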

The following elementary result in real Hilbert spaces is quite well known.

Lemma 14 (see [36]). Let $H$ be a real Hilbert space. Then, for all $x, y \in H$ and $\lambda \in [0, 1]$,
\[
\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2.
\]

Lemma 15 (see [37, Proposition 2.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $T: C \to C$ be a mapping. (i) If $T$ is a $k$-strictly pseudocontractive mapping, then $T$ satisfies the Lipschitz condition
\[
\|Tx - Ty\| \le \frac{1 + k}{1 - k}\|x - y\| \quad \forall x, y \in C.
\]
(ii) If $T$ is a $k$-strictly pseudocontractive mapping, then the mapping $I - T$ is semiclosed at $0$, that is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \to \tilde{x}$ weakly and $(I - T)x_n \to 0$ strongly, then $(I - T)\tilde{x} = 0$. (iii) If $T$ is a $k$-(quasi-)strict pseudocontraction, then the fixed point set $\operatorname{Fix}(T)$ of $T$ is closed and convex, so that the projection $P_{\operatorname{Fix}(T)}$ is well defined.

The following lemma plays a key role in proving weak convergence of the sequences generated by our algorithms.

Lemma 16 (see [38, p. 80]). Let $\{a_n\}_{n=1}^{\infty}$, $\{b_n\}_{n=1}^{\infty}$, and $\{\delta_n\}_{n=1}^{\infty}$ be sequences of nonnegative real numbers satisfying the inequality
\[
a_{n+1} \le (1 + \delta_n)a_n + b_n \quad \forall n \ge 1.
\]
If $\sum_{n=1}^{\infty}\delta_n < \infty$ and $\sum_{n=1}^{\infty}b_n < \infty$, then $\lim_{n \to \infty} a_n$ exists. If, in addition, $\{a_n\}_{n=1}^{\infty}$ has a subsequence which converges to zero, then $\lim_{n \to \infty} a_n = 0$.
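As a quick numerical illustration of Lemma 16 (a toy example of our own, not from the paper), take $\delta_n = b_n = 1/(n+1)^2$, both summable, and run the recursion with equality; the generated sequence settles to a finite limit, as the lemma predicts.

```python
# Toy check of Lemma 16: a_{n+1} = (1 + delta_n) * a_n + b_n with summable
# delta_n and b_n; the sequence a_n should approach a finite limit.
a = 1.0
for n in range(1, 200001):
    delta_n = b_n = 1.0 / (n + 1) ** 2
    a = (1.0 + delta_n) * a + b_n
print(a)   # stabilizes to (approximately) a finite value as n grows
```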

Corollary 17 (see [39, p. 303]). Let $\{a_n\}_{n=0}^{\infty}$ and $\{b_n\}_{n=0}^{\infty}$ be two sequences of nonnegative real numbers satisfying the inequality
\[
a_{n+1} \le a_n + b_n \quad \forall n \ge 0.
\]
If $\sum_{n=0}^{\infty}b_n$ converges, then $\lim_{n \to \infty} a_n$ exists.

Lemma 18 (see [7]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S: C \to C$ be a $k$-strictly pseudocontractive mapping. Let $\gamma$ and $\delta$ be two nonnegative real numbers such that $(\gamma + \delta)k \le \gamma$. Then
\[
\|\gamma(x - y) + \delta(Sx - Sy)\| \le (\gamma + \delta)\|x - y\| \quad \forall x, y \in C.
\]

The following lemma is an immediate consequence of the properties of the inner product.

Lemma 19. In a real Hilbert space $H$, there holds the inequality
\[
\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle \quad \forall x, y \in H.
\]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $A: C \to H$ be a monotone mapping. The variational inequality problem (VIP) is to find $x \in C$ such that
\[
\langle Ax, y - x \rangle \ge 0 \quad \forall y \in C.
\]

The solution set of the VIP is denoted by $\operatorname{VI}(C, A)$. It is well known that
\[
x \in \operatorname{VI}(C, A) \iff x = P_C(x - \lambda Ax) \quad \text{for any } \lambda > 0.
\]

A set-valued mapping $T: H \to 2^H$ is called monotone if, for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply that $\langle x - y, f - g \rangle \ge 0$. A monotone set-valued mapping $T: H \to 2^H$ is called maximal if its graph $\operatorname{Gph}(T)$ is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping $T: H \to 2^H$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g \rangle \ge 0$ for every $(y, g) \in \operatorname{Gph}(T)$ implies that $f \in Tx$. Let $A: C \to H$ be a monotone and Lipschitz continuous mapping and let $N_C v$ be the normal cone to $C$ at $v \in C$, that is,
\[
N_C v = \{w \in H : \langle v - u, w \rangle \ge 0 \ \forall u \in C\}.
\]
Define
\[
Tv =
\begin{cases}
Av + N_C v, & v \in C,\\
\emptyset, & v \notin C.
\end{cases}
\]

It is known that in this case the mapping $T$ is maximal monotone and that $0 \in Tv$ if and only if $v \in \operatorname{VI}(C, A)$; for further details, we refer to [40] and the references therein.

3. Main Results

In this section, we first prove the weak convergence of the sequences generated by the Mann-type extragradient iterative algorithm (16) with regularization.

Theorem 20. Let be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a -strictly pseudocontractive mapping such that . For given arbitrarily, let be the sequences generated by the Mann-type extragradient iterative algorithm (16) with regularization, where for , , and , , , , such that (i);(ii) and for all ;(iii) for all ;(iv);(v) and ;(vi).Then all the sequences , , converge weakly to an element . Furthermore, is a solution of GSVI (3), where .

Proof. First, taking into account , without loss of generality we may assume that for some .
Now, let us show that is -averaged for each , where
Indeed, it is easy to see that is -ism, that is, Observe that
Hence, it follows that is -ism. Thus, is -ism according to Proposition 12(ii). By Proposition 12(iii), the complement is -averaged. Therefore, noting that is -averaged and utilizing Proposition 13(iv), we know that for each , is -averaged with This shows that is nonexpansive. Furthermore, for with , we have Without loss of generality, we may assume that Consequently, it follows that for each integer , is -averaged with
This immediately implies that is nonexpansive for all .
Next we divide the remainder of the proof into several steps.
Step 1. is bounded.
Indeed, take arbitrarily. Then , for , and From (16), it follows that Utilizing Lemma 19, we also have For simplicity, we write , , for each . Then for each . Since is -inverse strongly monotone and for , we know that for all , Furthermore, by Proposition 8(ii), we have Further, by Proposition 8(i), we have So, from (46), we obtain Hence, it follows from (46), (49), and (52) that Since for all , utilizing Lemma 18, we obtain from (53) Since , it is clear that . Thus, by Corollary 17, we conclude that and the sequence is bounded. Taking into account that , , and are Lipschitz continuous, we can easily see that , , , , and are bounded, where for all .
Step 2. Consider , and , where .
Indeed, utilizing Lemma 18 and the convexity of , we obtain from (16) and (47)–(52) that Therefore, Since exists, , and , it follows that
Step 3. Consider  .
Indeed, observe that
This together with implies that and hence . By firm nonexpansiveness of , we have that is,
Moreover, using the argument technique similar to the previous one, we derive that is,
Utilizing (47), (52), (61), and (63), we have Thus, utilizing Lemma 14, from (16) and (64) it follows that which hence implies that Since , , , , , and exists, it follows from the boundedness of , and that , Consequently, it immediately follows that Also, note that This together with implies that Since we have
Step 4. , and converge weakly to an element .
Indeed, since is bounded, there exists a subsequence of that converges weakly to some . We obtain that . Taking into account that and as , we deduce that weakly and weakly. First, it is clear from Lemma 15 and that . Now let us show that . Note that as where is defined as that in Lemma 1. According to Lemma 15, we get . Further, let us show that . As a matter of fact, define where . Then, is maximal monotone and if and only if ; see [40] for more details. Let . Then, we have and hence So, we have On the other hand, from we have and hence, Therefore, from we have Hence, we get Since is maximal monotone, we have , and hence, . Thus, it is clear that . Therefore, .
Let be another subsequence of such that converges weakly to . Then, . Let us show that . Assume that . From the Opial condition [41], we have This leads to a contradiction. Consequently, we have . This implies that converges weakly to . Further, from and , it follows that both and converge weakly to . This completes the proof.

Corollary 21. Let be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a -strictly pseudocontractive mapping such that . For given arbitrarily, let the sequences be generated iteratively by where for , , and , , , such that (i);(ii) and for all ;(iii);(iv) and ;(v).Then all the sequences , , converge weakly to an element . Furthermore, is a solution of the GSVI (3), where .

Proof. In Theorem 20, put for all . Then, in this case, Theorem 20 reduces to Corollary 21.

Next, utilizing Corollary 21, we give the following result.

Corollary 22. Let be a nonempty closed convex subset of a real Hilbert space . Let and be a nonexpansive mapping such that . For given arbitrarily, let the sequences , , be generated iteratively by where , and such that (i);(ii);(iii);(iv).Then all the sequences , , converge weakly to an element .

Proof. In Corollary 21, put and . Then, , for all , and the iterative scheme (85) is equivalent to This is equivalent to (86). Since is a nonexpansive mapping, must be a -strictly pseudocontractive mapping with . In this case, it is easy to see that all the conditions (i)–(v) in Corollary 21 are satisfied. Therefore, in terms of Corollary 21, we obtain the desired result.

Now, we are in a position to prove the weak convergence of the sequences generated by the Mann-type extragradient iterative algorithm (17) with regularization.

Theorem 23. Let be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a -strictly pseudocontractive mapping such that . For given arbitrarily, let be the sequences generated by the Mann-type extragradient iterative algorithm (17) with regularization, where for , , and such that (i);(ii) and for all ;(iii);(iv) and ;(v).Then the sequences , , converge weakly to an element . Furthermore, is a solution of the GSVI (3), where .

Proof. First, taking into account , without loss of generality, we may assume that for some . Repeating the same argument as that in the proof of Theorem 20, we can show that is -averaged for each , where . Further, repeating the same argument as that in the proof of Theorem 20, we can also show that for each integer , is -averaged with .
Next we divide the remainder of the proof into several steps.
Step 1.   is bounded.
Indeed, take arbitrarily. Then , for , and For simplicity, we write for each . Then for each . Utilizing the arguments similar to those of (46) and (47) in the proof of Theorem 20, from (17) we can obtain Since is -inverse strongly monotone and for , utilizing the argument similar to that of (49) in the proof of Theorem 20, we can obtain that for all , Utilizing the argument similar to that of (52) in the proof of Theorem 20, from (90) we can obtain Hence, it follows from (92) and (93) that Since for all , by Lemma 18 we can readily see from (94) that Since , it is clear that . Thus, by Corollary 17 we conclude that and the sequence is bounded. Since , , and are Lipschitz continuous, it is easy to see that , , , and are bounded, where for all .
Step 2. Consider , and , where .
Indeed, utilizing Lemma 18 and the convexity of , we obtain from (17), (92), and (93) that Therefore, Since , , exists, and for some , it follows from the boundedness of that
Step 3. Consider .
Indeed, utilizing the Lipschitz continuity of , we have This together with implies that and hence . Utilizing the arguments similar to those of (61) and (63) in the proof of Theorem 20, we get Utilizing (91) and (101), we have Thus, utilizing Lemma 14, from (17) and (102), it follows that which hence implies that Since , , , , , , and exists, it follows from the boundedness of , , , and that : Consequently, it immediately follows that This together with implies that Since we have
Step 4.  , and converge weakly to an element .
Indeed, since is bounded, there exists a subsequence of that converges weakly to some . We obtain that . Taking into account that and and , we deduce that weakly and weakly.
First, it is clear from Lemma 15 and that . Now let us show that . Note that as , where is defined as that in Lemma 1. According to Lemma 15, we get . Further, let us show that . As a matter of fact, define where . Utilizing the argument similar to that of Step 4 in the proof of Theorem 20, from the relation we can easily conclude that It is easy to see that . Therefore, . Finally, utilizing the Opial condition [41], we infer that converges weakly to . Further, from and , it follows that both and converge weakly to . This completes the proof.

Corollary 24. Let be a nonempty closed convex subset of a real Hilbert space . Let and be -inverse strongly monotone for . Let be a -strictly pseudocontractive mapping such that . For given arbitrarily, let the sequences , , be generated iteratively by where for , , and such that(i);(ii) and for all ;(iii) and ;(iv).Then the sequences , , converge weakly to an element . Furthermore, is a solution of GSVI (3), where .

Next, utilizing Corollary 24, we derive the following result.

Corollary 25. Let be a nonempty closed convex subset of a real Hilbert space . Let and be a nonexpansive mapping such that . For given arbitrarily, let the sequences , be generated iteratively by where , and such that (i);(ii);(iii).Then, both the sequences and converge weakly to an element .

Proof. In Corollary 24, put and . Then, , for all , and the iterative scheme (114) is equivalent to This is equivalent to (115). Since is a nonexpansive mapping, must be a -strictly pseudocontractive mapping with . In this case, it is easy to see that all the conditions (i)–(iv) in Corollary 24 are satisfied. Therefore, in terms of Corollary 24, we obtain the desired result.

Remark 26. Compared with Ceng and Yao [31, Theorem 3.1], our Corollary 25 coincides essentially with [31, Theorem 3.1]. This shows that our Theorem 23 includes [31, Theorem 3.1] as a special case.

Remark 27. Our Theorems 20 and 23 improve, extend, and develop [20, Theorem 5.7], [31, Theorem 3.1], [7, Theorem 3.2], and [14, Theorem 3.1] in the following aspects. (i) Compared with the relaxed extragradient iterative algorithm in [7, Theorem 3.2], our Mann-type extragradient iterative algorithms with regularization remove the requirement of boundedness for the domain in which the various mappings are defined. (ii) Because [31, Theorem 3.1] is the supplementation, improvement, and extension of [20, Theorem 5.7] and our Theorem 23 includes [31, Theorem 3.1] as a special case, our results are beyond question interesting and valuable. (iii) The problem of finding a common element of the solution set of the GSVI (3), the solution set of the SFP (6), and the fixed point set of $S$ in our Theorems 20 and 23 is more general than the corresponding problems in [20, Theorem 5.7] and [31, Theorem 3.1], respectively. (iv) The hybrid extragradient method of [14, Theorem 3.1] is extended to develop our Mann-type extragradient iterative algorithms (16) and (17) with regularization for finding such a common element. (v) The proofs of our results are very different from that of [14, Theorem 3.1] because our argument technique depends to a great extent on the Opial condition, the restriction on the regularization parameter sequence, and the properties of averaged mappings. (vi) Because our iterative algorithms (16) and (17) involve two inverse strongly monotone mappings $B_1$ and $B_2$, a $k$-strictly pseudocontractive self-mapping $S$, and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [20, Theorem 5.7] and [31, Theorem 3.1], respectively.

Acknowledgments

In this research, the first author was partially supported by the National Science Foundation of China (11071169) and Ph.D. Program Foundation of Ministry of Education of China (20123127110002). The third author was partially supported by Grant NSC 101-2115-M-037-001.