Abstract and Applied Analysis
Volume 2013 (2013), Article ID 891232, 25 pages
Relaxed Extragradient Methods with Regularization for General System of Variational Inequalities with Constraints of Split Feasibility and Fixed Point Problems
1Department of Mathematics, Shanghai Normal University, and Scientific Computing Key Laboratory of Shanghai Universities, Shanghai 200234, China
2Department of Applied Mathematics, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
3Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung 807, Taiwan
4Department of Applied Mathematics, National Sun Yat-Sen University, Kaohsiung 804, Taiwan
Received 1 September 2012; Accepted 15 October 2012
Academic Editor: Julian López-Gómez
Copyright © 2013 L. C. Ceng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We suggest and analyze relaxed extragradient iterative algorithms with regularization for finding a common element of the solution set of a general system of variational inequalities, the solution set of a split feasibility problem, and the fixed point set of a strictly pseudocontractive mapping defined on a real Hilbert space. Here the relaxed extragradient methods with regularization are based on the well-known successive approximation method, extragradient method, viscosity approximation method, regularization method, and so on. Strong convergence of the proposed algorithms under some mild conditions is established. Our results represent the supplementation, improvement, extension, and development of the corresponding results in the very recent literature.
1. Introduction

Let $H$ be a real Hilbert space, whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. The (nearest point or metric) projection from $H$ onto $C$ is denoted by $P_C$. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$ and $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$.
Let $C$ and $Q$ be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces $H_1$ and $H_2$, respectively. The split feasibility problem (SFP) is to find a point $x^*$ with the property
$$x^* \in C, \qquad Ax^* \in Q, \tag{1}$$
where $A \in B(H_1, H_2)$ and $B(H_1, H_2)$ denotes the family of all bounded linear operators from $H_1$ to $H_2$.
In 1994, the SFP was first introduced by Censor and Elfving, in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrieval and in medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, the references therein. Recently, it was found that the SFP can also be applied to study intensity-modulated radiation therapy; see, for example, [3–5] and the references therein. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, for example, [2–12] and the references therein. A special case of the SFP is the following convex constrained linear inverse problem of finding an element $x$ such that
$$x \in C, \qquad Ax = b. \tag{2}$$
It has been extensively investigated in the literature using the projected Landweber iterative method. Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set $Q$. Therefore, whether various versions of the projected Landweber iterative method can be extended to solve the SFP remains an interesting open topic. For example, it is not yet clear whether the dual approach to (2) can be extended to the SFP. The original algorithm of Censor and Elfving involves the computation of the inverse $A^{-1}$ (assuming its existence) and thus has not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [2, 7], which is found to be a gradient-projection method (GPM) in convex minimization. It is also a special case of the proximal forward-backward splitting method. The CQ algorithm only involves the computation of the projections $P_C$ and $P_Q$ onto the sets $C$ and $Q$, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions, for example, when $C$ and $Q$ are closed balls or half-spaces.
However, it remains a challenge how to implement the algorithm in the case where the projections $P_C$ and/or $P_Q$ fail to have closed-form expressions, though theoretically we can prove the (weak) convergence of the algorithm.
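For readers who wish to experiment, Byrne's CQ iteration $x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big)$ can be sketched numerically in the easily implementable case of box-shaped sets. The following Python sketch is purely illustrative; the matrix and the sets are hypothetical toy data, not part of the original analysis.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, iters=1000):
    """Byrne's CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q) A x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

# Toy SFP: find x in C = [0,1]^2 with A x in Q = [2,3]^2 (hypothetical data).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
gamma = 1.0 / np.linalg.norm(A, 2) ** 2           # step size in (0, 2/||A||^2)
x = cq_algorithm(A,
                 lambda v: np.clip(v, 0.0, 1.0),  # closed-form projection onto C
                 lambda v: np.clip(v, 2.0, 3.0),  # closed-form projection onto Q
                 np.zeros(2), gamma)
# x lies in C and A x lies in Q (up to numerical tolerance)
```

The choice of the box projections is exactly the "closed-form" situation mentioned above; for general $C$ and $Q$ the two projections would themselves require inner optimization loops.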
Very recently, Xu gave a continuation of the study on the algorithm and its convergence. He applied Mann’s algorithm to the SFP and proposed an averaged algorithm which was proved to be weakly convergent to a solution of the SFP. He also established a strong convergence result, which shows that the minimum-norm solution can be obtained.
Furthermore, Korpelevič  introduced the so-called extragradient method for finding a solution of a saddle point problem. He proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.
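A minimal numerical sketch of the extragradient idea follows (illustrative only; the operator and the constraint set are hypothetical). For the monotone, but not strongly monotone, rotation field $F(x_1, x_2) = (x_2, -x_1)$, the variational inequality over the unit ball has the unique solution $0$, and Korpelevič's two-projection iteration drives the iterates there.

```python
import numpy as np

def extragradient(F, proj, x0, tau, iters=2000):
    """Korpelevich's extragradient iteration:
    y_n = P_C(x_n - tau F(x_n)),  x_{n+1} = P_C(x_n - tau F(y_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj(x - tau * F(x))
        x = proj(x - tau * F(y))
    return x

# Monotone rotation field on R^2; the VIP over the closed unit ball has the
# unique solution x* = 0, but the field is not strongly monotone.
F = lambda v: np.array([v[1], -v[0]])
proj_ball = lambda v: v / max(1.0, np.linalg.norm(v))
x = extragradient(F, proj_ball, np.array([1.0, 0.0]), tau=0.1)
# x is driven close to the solution x* = 0
```

The auxiliary point $y_n$ is the essential ingredient: the plain one-step projection $x_{n+1} = P_C(x_n - \tau F(x_n))$ merely rotates around the solution for this operator.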
Throughout this paper, assume that the SFP is consistent, that is, the solution set $\Gamma$ of the SFP is nonempty. Let $f : H_1 \to \mathbb{R}$ be the continuous differentiable function $f(x) = \frac{1}{2}\|Ax - P_Q Ax\|^2$. The minimization problem
$$\min_{x \in C} f(x) = \min_{x \in C} \tfrac{1}{2}\|Ax - P_Q Ax\|^2$$
is ill posed. Therefore, Xu considered the following Tikhonov regularization problem:
$$\min_{x \in C} f_\alpha(x) := \tfrac{1}{2}\|Ax - P_Q Ax\|^2 + \tfrac{1}{2}\alpha\|x\|^2, \tag{4}$$
where $\alpha > 0$ is the regularization parameter. The regularized minimization (4) has a unique solution, which is denoted by $x_\alpha$. The following results are easy to prove.
Proposition 1 (see [18, Proposition 3.1]). Given $x^* \in H_1$, the following statements are equivalent:
(i) $x^*$ solves the SFP;
(ii) $x^*$ solves the fixed point equation
$$x^* = P_C\big(x^* - \lambda \nabla f(x^*)\big),$$
where $\lambda > 0$, $\nabla f = A^*(I - P_Q)A$, and $A^*$ is the adjoint of $A$;
(iii) $x^*$ solves the variational inequality problem (VIP) of finding $x^* \in C$ such that
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C.$$
Proposition 2 (see [18]). The following statements hold:
(i) the gradient
$$\nabla f_\alpha = \nabla f + \alpha I = A^*(I - P_Q)A + \alpha I$$
is $(\alpha + \|A\|^2)$-Lipschitz continuous and $\alpha$-strongly monotone;
(ii) the mapping $P_C(I - \lambda \nabla f_\alpha)$ is a contraction with coefficient
$$\sqrt{1 - \lambda\big(2\alpha - \lambda(\|A\|^2 + \alpha)^2\big)},$$
where $0 < \lambda < 2\alpha/(\|A\|^2 + \alpha)^2$;
(iii) if the SFP is consistent, then the strong limit $\lim_{\alpha \to 0} x_\alpha$ exists and is the minimum-norm solution of the SFP.
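The minimum-norm behavior in statement (iii) can be seen in a small unconstrained sketch (the case $C = \mathbb{R}^2$, with hypothetical rank-deficient data; in this toy setting the regularized solution has the explicit form $x_\alpha = (A^{\top}A + \alpha I)^{-1}A^{\top}b$):

```python
import numpy as np

# Unconstrained toy version of the Tikhonov-regularized problem
#   min_x (1/2)||Ax - b||^2 + (alpha/2)||x||^2,
# whose unique solution is x_alpha = (A^T A + alpha I)^{-1} A^T b.
A = np.array([[1.0, 1.0]])        # rank-deficient: infinitely many LS solutions
b = np.array([2.0])
min_norm = np.linalg.pinv(A) @ b  # minimum-norm least-squares solution (1, 1)

for alpha in [1.0, 1e-2, 1e-4, 1e-6]:
    x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ b)
# as alpha decreases, x_alpha approaches the minimum-norm solution
```

The regularizing term $\frac{\alpha}{2}\|x\|^2$ thus both restores uniqueness for each fixed $\alpha$ and selects the minimum-norm solution in the limit $\alpha \to 0^+$.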
Very recently, by combining the regularization method and the extragradient method due to Nadezhkina and Takahashi, Ceng et al. proposed an extragradient algorithm with regularization and proved that the sequences generated by the proposed algorithm converge weakly to an element of $\operatorname{Fix}(S) \cap \Gamma$, where $S$ is a nonexpansive mapping.
Theorem 3 (see [18, Theorem 3.1]). Let $S : C \to C$ be a nonexpansive mapping such that $\operatorname{Fix}(S) \cap \Gamma \neq \emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be the sequences in $C$ generated by the following extragradient algorithm:
$$x_0 = x \in C, \qquad y_n = P_C\big(x_n - \lambda_n \nabla f_{\alpha_n}(x_n)\big), \qquad x_{n+1} = \beta_n x_n + (1 - \beta_n) S P_C\big(x_n - \lambda_n \nabla f_{\alpha_n}(y_n)\big),$$
where $\sum_{n=0}^{\infty} \alpha_n < \infty$, $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1/\|A\|^2)$, and $\{\beta_n\} \subset [c, d]$ for some $c, d \in (0, 1)$. Then both the sequences $\{x_n\}$ and $\{y_n\}$ converge weakly to an element $\hat{x} \in \operatorname{Fix}(S) \cap \Gamma$.
On the other hand, assume that $C$ is a nonempty closed convex subset of $H$ and $A : C \to H$ is a mapping. The classical variational inequality problem (VIP) is to find $x^* \in C$ such that
$$\langle Ax^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{11}$$
It is now well known that variational inequalities are equivalent to fixed point problems, the origin of which can be traced back to Lions and Stampacchia. This alternative formulation has been used to suggest and analyze projection iterative methods for solving variational inequalities under the conditions that the involved operator is strongly monotone and Lipschitz continuous. Related to the variational inequalities is the problem of finding fixed points of nonexpansive mappings or strict pseudocontractions, which is a current interest in functional analysis. Several authors have considered a unified approach to solving variational inequality problems and fixed point problems; see, for example, [21–28] and the references therein.
A mapping $A : C \to H$ is said to be $\alpha$-inverse strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2, \quad \forall x, y \in C.$$
A mapping $T : C \to C$ is said to be $k$-strictly pseudocontractive if there exists a constant $k \in [0, 1)$ such that
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + k\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$
In this case, we also say that $T$ is a $k$-strict pseudocontraction. In particular, whenever $k = 0$, $T$ becomes a nonexpansive mapping from $C$ into itself. It is clear that every inverse strongly monotone mapping is monotone and Lipschitz continuous. We denote by $\operatorname{VI}(C, A)$ and $\operatorname{Fix}(T)$ the solution set of problem (11) and the set of all fixed points of $T$, respectively.
For finding an element of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$ under the assumptions that the set $C \subset H$ is nonempty, closed, and convex, the mapping $S : C \to C$ is nonexpansive, and the mapping $A : C \to H$ is $\alpha$-inverse strongly monotone, Takahashi and Toyoda introduced an iterative scheme and studied the weak convergence of the sequence generated by the proposed scheme to a point of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$. Recently, Iiduka and Takahashi presented another iterative scheme for finding an element of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$ and showed that the generated sequence converges strongly to $P_{\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)} x_0$, where $x_0$ is the initially chosen point in the iterative scheme and $P_{\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)}$ denotes the metric projection of $H$ onto $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$.
Based on Korpelevič’s extragradient method, Nadezhkina and Takahashi introduced an iterative process for finding an element of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$ and proved the weak convergence of the generated sequence to a point of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$. Zeng and Yao presented an iterative scheme for finding an element of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$ and proved that the two sequences generated by the method converge strongly to an element of $\operatorname{Fix}(S) \cap \operatorname{VI}(C, A)$. Recently, Bnouhachem et al. suggested and analyzed an iterative scheme for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of the variational inequality (11) for an inverse strongly monotone mapping.
Furthermore, as a generalization of the classical variational inequality problem (11), Ceng et al. introduced and considered the following problem of finding $(x^*, y^*) \in C \times C$ such that
$$\begin{cases} \langle \mu_1 B_1 y^* + x^* - y^*, \, x - x^* \rangle \ge 0, & \forall x \in C, \\ \langle \mu_2 B_2 x^* + y^* - x^*, \, x - y^* \rangle \ge 0, & \forall x \in C, \end{cases} \tag{14}$$
which is called a general system of variational inequalities (GSVI), where $\mu_1 > 0$ and $\mu_2 > 0$ are two constants. The set of solutions of problem (14) is denoted by $\operatorname{GSVI}(C, B_1, B_2)$. In particular, if $B_1 = B_2$, then problem (14) reduces to the new system of variational inequalities introduced and studied by Verma. Recently, Ceng et al. transformed problem (14) into a fixed point problem in the following way.
Lemma 4 (see ). For given $\bar{x}, \bar{y} \in C$, $(\bar{x}, \bar{y})$ is a solution of problem (14) if and only if $\bar{x}$ is a fixed point of the mapping $G : C \to C$ defined by
$$G(x) = P_C\big[P_C(x - \mu_2 B_2 x) - \mu_1 B_1 P_C(x - \mu_2 B_2 x)\big], \quad \forall x \in C,$$
where $\bar{y} = P_C(\bar{x} - \mu_2 B_2 \bar{x})$.
In particular, if the mapping $B_i$ is $\beta_i$-inverse strongly monotone for $i = 1, 2$, then the mapping $G$ is nonexpansive provided $\mu_i \in (0, 2\beta_i]$ for $i = 1, 2$.
Utilizing Lemma 4, they introduced and studied a relaxed extragradient method for solving problem (14). Throughout this paper, the set of fixed points of the mapping $G$ is denoted by $\operatorname{Fix}(G)$. Based on the relaxed extragradient method and the viscosity approximation method, Yao et al. proposed and analyzed an iterative algorithm for finding a common solution of the GSVI (14) and the fixed point problem of a strictly pseudocontractive mapping $S : C \to C$. Subsequently, Ceng et al. further presented and analyzed an iterative scheme for finding a common element of the solution set of the VIP (11), the solution set of the GSVI (14), and the fixed point set of a strictly pseudocontractive mapping $S : C \to C$.
Theorem 5 (see [33, Theorem 3.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A : C \to H$ be $\alpha$-inverse strongly monotone and let $B_i : C \to H$ be $\beta_i$-inverse strongly monotone for $i = 1, 2$. Let $S : C \to C$ be a $k$-strictly pseudocontractive mapping such that $\operatorname{Fix}(S) \cap \operatorname{GSVI}(C, B_1, B_2) \cap \operatorname{VI}(C, A) \neq \emptyset$. Let $Q : C \to C$ be a $\rho$-contraction with $\rho \in [0, 1)$. For given $x_0 \in C$ arbitrarily, let the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ be generated by the relaxed extragradient iterative scheme (16),
where $\mu_i \in (0, 2\beta_i)$ for $i = 1, 2$, and the following conditions hold for the five sequences $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}, \{\delta_n\} \subset [0, 1]$ and $\{\lambda_n\} \subset (0, 2\alpha)$:
(i) $\beta_n + \gamma_n + \delta_n = 1$ and $(\gamma_n + \delta_n)k \le \gamma_n$ for all $n \ge 0$;
(ii) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(iii) $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$ and $\liminf_{n \to \infty} \delta_n > 0$;
(iv) $\lim_{n \to \infty} \left( \frac{\gamma_{n+1}}{1 - \beta_{n+1}} - \frac{\gamma_n}{1 - \beta_n} \right) = 0$;
(v) $0 < \liminf_{n \to \infty} \lambda_n \le \limsup_{n \to \infty} \lambda_n < 2\alpha$ and $\lim_{n \to \infty} |\lambda_{n+1} - \lambda_n| = 0$.
Then the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ converge strongly to the same point $\bar{x} = P_{\operatorname{Fix}(S) \cap \operatorname{GSVI}(C, B_1, B_2) \cap \operatorname{VI}(C, A)} Q\bar{x}$ if and only if $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$. Furthermore, $(\bar{x}, \bar{y})$ is a solution of the GSVI (14), where $\bar{y} = P_C(\bar{x} - \mu_2 B_2 \bar{x})$.
Motivated and inspired by the ongoing research in this area, we propose and analyze the following relaxed extragradient iterative algorithms with regularization for finding a common element of the solution set of the GSVI (14), the solution set of the SFP (1), and the fixed point set of a strictly pseudocontractive mapping $S : C \to C$.
Algorithm 6. Let $\mu_i \in (0, 2\beta_i)$ for $i = 1, 2$, and choose the parameter sequences such that $\beta_n + \gamma_n + \delta_n = 1$ for all $n \ge 0$. For given $x_0 \in C$ arbitrarily, let $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ be the sequences generated by the relaxed extragradient iterative scheme with regularization (17).
Under mild assumptions, it is proven that the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ converge strongly to the same point $\bar{x}$ if and only if $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$. Furthermore, $(\bar{x}, \bar{y})$ is a solution of the GSVI (14), where $\bar{y} = P_C(\bar{x} - \mu_2 B_2 \bar{x})$.
Algorithm 7. Let $\mu_i \in (0, 2\beta_i)$ for $i = 1, 2$, and choose the parameter sequences such that $\beta_n + \gamma_n + \delta_n = 1$ and $(\gamma_n + \delta_n)k \le \gamma_n$ for all $n \ge 0$. For given $x_0 \in C$ arbitrarily, let $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ be the sequences generated by the relaxed extragradient iterative scheme with regularization (18).
Also, under appropriate conditions, it is shown that the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ converge strongly to the same point $\bar{x}$ if and only if $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$. Furthermore, $(\bar{x}, \bar{y})$ is a solution of the GSVI (14), where $\bar{y} = P_C(\bar{x} - \mu_2 B_2 \bar{x})$.
Note that both [6, Theorem 5.7] and [18, Theorem 3.1] are weak convergence results for solving the SFP (1), so our strong convergence results are of particular interest and value. Because our relaxed extragradient iterative schemes (17) and (18) with regularization involve a contractive self-mapping $Q$, a $k$-strictly pseudocontractive self-mapping $S$, and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [6, Theorem 5.7] and [18, Theorem 3.1], respectively. Furthermore, the relaxed extragradient iterative scheme (16) is extended to develop our relaxed extragradient iterative schemes (17) and (18) with regularization. All in all, our results represent the modification, supplementation, extension, and improvement of [6, Theorem 5.7], [18, Theorem 3.1], and [33, Theorem 3.1].
2. Preliminaries

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. We now present some known results and definitions which will be used in the sequel.
The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
$$\|x - P_C x\| = \inf_{y \in C} \|x - y\| =: d(x, C).$$
The following properties of projections are useful and pertinent to our purpose.
Proposition 8 (see ). For given $x \in H$ and $z \in C$:
(i) $z = P_C x$ if and only if $\langle x - z, y - z \rangle \le 0$ for all $y \in C$;
(ii) $z = P_C x$ if and only if $\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$ for all $y \in C$;
(iii) $\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2$ for all $y \in H$, which hence implies that $P_C$ is nonexpansive and monotone.
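Properties (i) and (iii) are easy to verify numerically for a set whose projection has a closed form; the following sketch uses a hypothetical box in $\mathbb{R}^3$ (an illustration, not part of the original text):

```python
import numpy as np

proj = lambda v: np.clip(v, 0.0, 1.0)   # metric projection onto the box C = [0,1]^3
rng = np.random.default_rng(0)

ok = True
for _ in range(200):
    x, z = rng.normal(size=3), rng.normal(size=3)
    y = rng.uniform(0.0, 1.0, size=3)   # an arbitrary point of C
    px, pz = proj(x), proj(z)
    # (i): <x - P_C x, y - P_C x> <= 0 for every y in C
    ok &= np.dot(x - px, y - px) <= 1e-12
    # (iii) implies nonexpansiveness: ||P_C x - P_C z|| <= ||x - z||
    ok &= np.linalg.norm(px - pz) <= np.linalg.norm(x - z) + 1e-12
```

Geometrically, (i) says the residual $x - P_C x$ makes an obtuse angle with every direction pointing from $P_C x$ into $C$.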
Definition 9. A mapping $T : H \to H$ is said to be
(a) nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H;$$
(b) firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently,
$$\langle x - y, Tx - Ty \rangle \ge \|Tx - Ty\|^2, \quad \forall x, y \in H;$$
alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as
$$T = \tfrac{1}{2}(I + S),$$
where $S : H \to H$ is nonexpansive; projections are firmly nonexpansive.
Definition 10. Let $F : H \to H$ be a nonlinear operator with domain $D(F) \subseteq H$ and range $R(F) \subseteq H$.
(a) $F$ is said to be monotone if
$$\langle x - y, Fx - Fy \rangle \ge 0, \quad \forall x, y \in D(F).$$
(b) Given a number $\beta > 0$, $F$ is said to be $\beta$-strongly monotone if
$$\langle x - y, Fx - Fy \rangle \ge \beta \|x - y\|^2, \quad \forall x, y \in D(F).$$
(c) Given a number $\nu > 0$, $F$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if
$$\langle x - y, Fx - Fy \rangle \ge \nu \|Fx - Fy\|^2, \quad \forall x, y \in D(F).$$
It can be easily seen that if $T$ is nonexpansive, then $I - T$ is monotone. It is also easy to see that a projection $P_C$ is $1$-ism.
Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see, for example, [35, 36].
Definition 11. A mapping $T : H \to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping, that is,
$$T = (1 - \alpha)I + \alpha S,$$
where $\alpha \in (0, 1)$ and $S : H \to H$ is nonexpansive. More precisely, when the last equality holds, we say that $T$ is $\alpha$-averaged. Thus firmly nonexpansive mappings (in particular, projections) are $\tfrac{1}{2}$-averaged maps.
Proposition 12 (see ). Let $T : H \to H$ be a given mapping.
(i) $T$ is nonexpansive if and only if the complement $I - T$ is $\tfrac{1}{2}$-ism.
(ii) If $T$ is $\nu$-ism, then for $\gamma > 0$, $\gamma T$ is $\tfrac{\nu}{\gamma}$-ism.
(iii) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > \tfrac{1}{2}$. Indeed, for $\alpha \in (0, 1)$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\tfrac{1}{2\alpha}$-ism.
Proposition 13 (see [7, 37]). Let $S, T, V : H \to H$ be given operators.
(i) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.
(ii) $T$ is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.
(iii) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$ and if $S$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged.
(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1 \cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1 T_2$ is $\alpha$-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.
(v) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^{N} \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N).$$
The notation $\operatorname{Fix}(T)$ denotes the set of all fixed points of the mapping $T$, that is, $\operatorname{Fix}(T) = \{x \in H : Tx = x\}$.
It is clear that, in a real Hilbert space $H$, $T : C \to C$ is $k$-strictly pseudocontractive if and only if the following inequality holds:
$$\langle Tx - Ty, x - y \rangle \le \|x - y\|^2 - \tfrac{1 - k}{2}\|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$
This immediately implies that if $T$ is a $k$-strictly pseudocontractive mapping, then $I - T$ is $\tfrac{1 - k}{2}$-inverse strongly monotone; for further details, we refer to the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings.
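As a concrete sanity check (a hypothetical example, not from the original text), the map $T = -2I$ on $\mathbb{R}^2$ is a $\tfrac{1}{3}$-strict pseudocontraction, and $I - T = 3I$ is then $\tfrac{1-k}{2} = \tfrac{1}{3}$-inverse strongly monotone, matching the implication above:

```python
import numpy as np

k = 1.0 / 3.0
T = lambda v: -2.0 * v               # a 1/3-strict pseudocontraction on R^2
rng = np.random.default_rng(1)

ok = True
for _ in range(200):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = x - y
    r = (x - T(x)) - (y - T(y))      # (I - T)x - (I - T)y
    # strict pseudocontraction inequality:
    # ||Tx - Ty||^2 <= ||x - y||^2 + k ||(I - T)x - (I - T)y||^2
    ok &= np.dot(T(x) - T(y), T(x) - T(y)) <= np.dot(d, d) + k * np.dot(r, r) + 1e-9
    # consequence: I - T is (1 - k)/2-inverse strongly monotone
    ok &= np.dot(r, d) >= 0.5 * (1.0 - k) * np.dot(r, r) - 1e-9
```

For this example both inequalities hold with equality, which shows the constants $k$ and $\tfrac{1-k}{2}$ are sharp for it.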
In order to prove the main result of this paper, the following lemmas will be required.
Lemma 14 (see ). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $X$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$. Suppose
$$x_{n+1} = \beta_n x_n + (1 - \beta_n) z_n$$
for all integers $n \ge 0$ and
$$\limsup_{n \to \infty} \big(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\big) \le 0.$$
Then, $\lim_{n \to \infty} \|z_n - x_n\| = 0$.
Lemma 15 (see [38, Proposition 2.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $T : C \to C$ be a mapping.
(i) If $T$ is a $k$-strict pseudocontractive mapping, then $T$ satisfies the Lipschitz condition
$$\|Tx - Ty\| \le \tfrac{1 + k}{1 - k}\|x - y\|, \quad \forall x, y \in C.$$
(ii) If $T$ is a $k$-strict pseudocontractive mapping, then the mapping $I - T$ is semiclosed at $0$; that is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup \tilde{x}$ weakly and $(I - T)x_n \to 0$ strongly, then $(I - T)\tilde{x} = 0$.
(iii) If $T$ is a $k$-(quasi-)strict pseudocontraction, then the fixed point set $\operatorname{Fix}(T)$ of $T$ is closed and convex, so that the projection $P_{\operatorname{Fix}(T)}$ is well defined.
The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms.
Lemma 16 (see ). Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the property
$$a_{n+1} \le (1 - s_n)a_n + s_n t_n + \epsilon_n, \quad n \ge 0,$$
where $\{s_n\} \subset [0, 1]$ and $\{t_n\}$, $\{\epsilon_n\}$ are such that
(i) $\sum_{n=0}^{\infty} s_n = \infty$;
(ii) either $\limsup_{n \to \infty} t_n \le 0$ or $\sum_{n=0}^{\infty} s_n |t_n| < \infty$;
(iii) $\epsilon_n \ge 0$ for all $n \ge 0$, where $\sum_{n=0}^{\infty} \epsilon_n < \infty$.
Then, $\lim_{n \to \infty} a_n = 0$.
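A quick numerical illustration of the lemma with hypothetical sequences $s_n = 1/(n+1)$ (so $\sum s_n = \infty$), $t_n = 1/\sqrt{n} \to 0$, and $\epsilon_n = 0$: the recursion forces $a_n \to 0$ even though each single step is only a mild averaging rather than a contraction.

```python
import math

# a_{n+1} <= (1 - s_n) a_n + s_n t_n with sum s_n = infinity and t_n -> 0
a = 1.0
for n in range(1, 200001):
    s = 1.0 / (n + 1)        # s_n in (0, 1) with divergent sum
    t = 1.0 / math.sqrt(n)   # limsup t_n <= 0 holds since t_n -> 0
    a = (1.0 - s) * a + s * t
# a has been driven close to 0
```

The divergence of $\sum s_n$ is essential; with a summable $\{s_n\}$ the product $\prod (1 - s_n)$ stays bounded away from zero and $a_n$ need not vanish.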
Lemma 17 (see ). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S : C \to C$ be a $k$-strictly pseudocontractive mapping. Let $\gamma$ and $\delta$ be two nonnegative real numbers such that $(\gamma + \delta)k \le \gamma$. Then
$$\|\gamma(x - y) + \delta(Sx - Sy)\| \le (\gamma + \delta)\|x - y\|, \quad \forall x, y \in C.$$
The following lemma is an immediate consequence of an inner product.
Lemma 18. In a real Hilbert space $H$, there holds the inequality
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle, \quad \forall x, y \in H.$$
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $A : C \to H$ be a monotone mapping. The variational inequality problem (VIP) is to find $x \in C$ such that
$$\langle Ax, y - x \rangle \ge 0, \quad \forall y \in C.$$
The solution set of the VIP is denoted by $\operatorname{VI}(C, A)$. It is well known that
$$x \in \operatorname{VI}(C, A) \iff x = P_C(x - \lambda Ax) \quad \text{for all } \lambda > 0.$$
A set-valued mapping $T : H \to 2^H$ is called monotone if, for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply that $\langle x - y, f - g \rangle \ge 0$. A monotone set-valued mapping $T$ is called maximal if its graph is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping $T$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g \rangle \ge 0$ for every $(y, g)$ in the graph of $T$ implies that $f \in Tx$. Let $A : C \to H$ be a monotone and Lipschitz continuous mapping and let $N_C v$ be the normal cone to $C$ at $v \in C$, that is,
$$N_C v = \{w \in H : \langle v - u, w \rangle \ge 0, \ \forall u \in C\}.$$
Define
$$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$
It is known that in this case the mapping $T$ is maximal monotone and that $0 \in Tv$ if and only if $v \in \operatorname{VI}(C, A)$; for further details, we refer to the references therein.
3. Main Results
In this section, we first prove the strong convergence of the sequences generated by the relaxed extragradient iterative algorithm (17) with regularization.
Theorem 19. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H_1$. Let $A \in B(H_1, H_2)$, and let $B_i : C \to H_1$ be $\beta_i$-inverse strongly monotone for $i = 1, 2$. Let $S : C \to C$ be a $k$-strictly pseudocontractive mapping such that $\operatorname{Fix}(S) \cap \operatorname{GSVI}(C, B_1, B_2) \cap \Gamma \neq \emptyset$. Let $Q : C \to C$ be a $\rho$-contraction with $\rho \in [0, 1)$. For given $x_0 \in C$ arbitrarily, let the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ be generated by the relaxed extragradient iterative algorithm (17) with regularization, where $\mu_i \in (0, 2\beta_i)$ for $i = 1, 2$, and the parameter sequences are such that
(i) $\sum_{n=0}^{\infty} \alpha_n < \infty$;
(ii) $\beta_n + \gamma_n + \delta_n = 1$ and $(\gamma_n + \delta_n)k \le \gamma_n$ for all $n \ge 0$;
(iii) $\lim_{n \to \infty} \sigma_n = 0$ and $\sum_{n=0}^{\infty} \sigma_n = \infty$;
(iv) $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$ and $\liminf_{n \to \infty} \delta_n > 0$;
(v) $\lim_{n \to \infty} \left( \frac{\gamma_{n+1}}{1 - \beta_{n+1}} - \frac{\gamma_n}{1 - \beta_n} \right) = 0$;
(vi) $0 < \liminf_{n \to \infty} \lambda_n \le \limsup_{n \to \infty} \lambda_n < \frac{2}{\|A\|^2}$ and $\lim_{n \to \infty} |\lambda_{n+1} - \lambda_n| = 0$.
Then the sequences $\{x_n\}$, $\{y_n\}$, $\{z_n\}$ converge strongly to the same point $\bar{x} = P_{\operatorname{Fix}(S) \cap \operatorname{GSVI}(C, B_1, B_2) \cap \Gamma} Q\bar{x}$ if and only if $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$. Furthermore, $(\bar{x}, \bar{y})$ is a solution of the GSVI (14), where $\bar{y} = P_C(\bar{x} - \mu_2 B_2 \bar{x})$.
Proof. First, taking into account $0 < \liminf_{n \to \infty} \lambda_n \le \limsup_{n \to \infty} \lambda_n < 2/\|A\|^2$, without loss of generality we may assume that $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 2/\|A\|^2)$.
Now, let us show that $P_C(I - \lambda \nabla f_\alpha)$ is $\xi$-averaged for each $\lambda \in \big(0, \tfrac{2}{\alpha + \|A\|^2}\big)$, where
$$\xi = \frac{2 + \lambda(\alpha + \|A\|^2)}{4} \in (0, 1).$$
Indeed, it is easy to see that $\nabla f = A^*(I - P_Q)A$ is $\tfrac{1}{\|A\|^2}$-ism, that is,
$$\langle \nabla f(x) - \nabla f(y), x - y \rangle \ge \frac{1}{\|A\|^2}\|\nabla f(x) - \nabla f(y)\|^2.$$
Observe that
$$\begin{aligned} (\alpha + \|A\|^2)\langle \nabla f_\alpha(x) - \nabla f_\alpha(y), x - y \rangle &= (\alpha + \|A\|^2)\big[\alpha\|x - y\|^2 + \langle \nabla f(x) - \nabla f(y), x - y \rangle\big] \\ &\ge \alpha^2\|x - y\|^2 + 2\alpha\langle \nabla f(x) - \nabla f(y), x - y \rangle + \|\nabla f(x) - \nabla f(y)\|^2 \\ &= \|\nabla f_\alpha(x) - \nabla f_\alpha(y)\|^2. \end{aligned}$$
Hence, it follows that $\nabla f_\alpha = \alpha I + A^*(I - P_Q)A$ is $\tfrac{1}{\alpha + \|A\|^2}$-ism. Thus, $\lambda \nabla f_\alpha$ is $\tfrac{1}{\lambda(\alpha + \|A\|^2)}$-ism according to Proposition 12(ii). By Proposition 12(iii), the complement $I - \lambda \nabla f_\alpha$ is $\tfrac{\lambda(\alpha + \|A\|^2)}{2}$-averaged. Therefore, noting that $P_C$ is $\tfrac{1}{2}$-averaged, and utilizing Proposition 13(iv), we know that for each $\lambda \in \big(0, \tfrac{2}{\alpha + \|A\|^2}\big)$, $P_C(I - \lambda \nabla f_\alpha)$ is $\xi$-averaged with
$$\xi = \frac{1}{2} + \frac{\lambda(\alpha + \|A\|^2)}{2} - \frac{1}{2} \cdot \frac{\lambda(\alpha + \|A\|^2)}{2} = \frac{2 + \lambda(\alpha + \|A\|^2)}{4} \in (0, 1).$$
This shows that $P_C(I - \lambda \nabla f_\alpha)$ is nonexpansive. Furthermore, for $\{\lambda_n\} \subset [a, b]$ with $a, b \in \big(0, \tfrac{2}{\|A\|^2}\big)$, we have
$$a \le \inf_{n \ge 0} \lambda_n \le \sup_{n \ge 0} \lambda_n \le b < \frac{2}{\|A\|^2} = \lim_{n \to \infty} \frac{2}{\alpha_n + \|A\|^2}.$$
Without loss of generality, we may assume that
$$a \le \inf_{n \ge 0} \lambda_n \le \sup_{n \ge 0} \lambda_n \le b < \frac{2}{\alpha_n + \|A\|^2}, \quad \forall n \ge 0.$$
Consequently, it follows that for each integer $n \ge 0$, $P_C(I - \lambda_n \nabla f_{\alpha_n})$ is $\xi_n$-averaged with
$$\xi_n = \frac{1}{2} + \frac{\lambda_n(\alpha_n + \|A\|^2)}{2} - \frac{1}{2} \cdot \frac{\lambda_n(\alpha_n + \|A\|^2)}{2} = \frac{2 + \lambda_n(\alpha_n + \|A\|^2)}{4} \in (0, 1).$$
This immediately implies that $P_C(I - \lambda_n \nabla f_{\alpha_n})$ is nonexpansive for all $n \ge 0$.
Next we divide the remainder of the proof into several steps.
Step 1. $\{x_n\}$ is bounded.
Indeed, take an arbitrary $p \in \operatorname{Fix}(S) \cap \operatorname{GSVI}(C, B_1, B_2) \cap \Gamma$; then $Sp = p$, $G(p) = p$, and $p = P_C\big(p - \lambda_n \nabla f(p)\big)$ for all $n \ge 0$. Starting from (17) and applying Lemma 18, the nonexpansiveness of $P_C(I - \lambda_n \nabla f_{\alpha_n})$, the $\beta_i$-inverse strong monotonicity of $B_i$ together with $\mu_i \in (0, 2\beta_i)$ for $i = 1, 2$, and Proposition 8(i) and (ii), we obtain the estimates (45), (48), and (51). Hence it follows from (48) and (51) that (52) holds. Since $(\gamma_n + \delta_n)k \le \gamma_n$ for all $n \ge 0$, utilizing Lemma 17 we obtain from (52) the bound (53). Now, we claim that the bound (54) holds for all $n \ge 0$. As a matter of fact, if $n = 0$, then it is clear that (54) is valid. Assume that (54) holds for some $n$, that is, that (56) holds. Then, we conclude from (53) and (56) that (54) also holds for $n + 1$. By induction, we conclude that (54) is valid for all $n \ge 0$. Hence, $\{x_n\}$ is bounded. Since $\nabla f_{\alpha_n}$, $B_1$, and $B_2$ are Lipschitz continuous, it is easy to see that $\{y_n\}$ and $\{z_n\}$ are bounded.
Step 2. $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$.
Indeed, define $w_n = \dfrac{x_{n+1} - \beta_n x_n}{1 - \beta_n}$ for all $n \ge 0$, so that $x_{n+1} = \beta_n x_n + (1 - \beta_n)w_n$. Since $(\gamma_n + \delta_n)k \le \gamma_n$ for all $n \ge 0$, utilizing Lemma 17 we can estimate $\|w_{n+1} - w_n\|$. Observe that