Journal of Applied Mathematics
Volume 2012 (2012), Article ID 924309, 32 pages
http://dx.doi.org/10.1155/2012/924309
Research Article

Hybrid Extragradient Iterative Algorithms for Variational Inequalities, Variational Inclusions, and Fixed-Point Problems

Lu-Chuan Ceng1 and Ching-Feng Wen2

1Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung 807, Taiwan

Received 20 October 2012; Accepted 24 November 2012

Academic Editor: Jen Chih Yao

Copyright © 2012 Lu-Chuan Ceng and Ching-Feng Wen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We investigate the problem of finding a common solution of a general system of variational inequalities, a variational inclusion, and a fixed-point problem of a strictly pseudocontractive mapping in a real Hilbert space. Motivated by Nadezhkina and Takahashi's hybrid-extragradient method, we propose and analyze a new hybrid-extragradient iterative algorithm for finding a common solution. It is proven that the three sequences generated by this algorithm converge strongly to the same common solution under very mild conditions. Based on this result, we also construct an iterative algorithm for finding a common fixed point of three mappings, such that one of these mappings is nonexpansive, and the other two mappings are strictly pseudocontractive.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$. Let $C$ be a nonempty closed convex subset of $H$, and let $P_C$ be the metric projection from $H$ onto $C$. Let $S: C \to C$ be a self-mapping on $C$. We denote by $\mathrm{Fix}(S)$ the set of fixed points of $S$ and by $\mathbf{R}$ the set of all real numbers. A mapping $A: C \to H$ is called monotone if $\langle Ax - Ay, x - y \rangle \ge 0$ for all $x, y \in C$. A mapping $A$ is called $L$-Lipschitz continuous if there exists a constant $L \ge 0$, such that $\|Ax - Ay\| \le L\|x - y\|$ for all $x, y \in C$. For a given mapping $A: C \to H$, we consider the following variational inequality (VI) of finding $x^* \in C$, such that $\langle Ax^*, x - x^* \rangle \ge 0$ for all $x \in C$. The solution set of the VI (1.3) is denoted by $\mathrm{VI}(C, A)$. The variational inequality was first discussed by Lions [1] and is now well known. Variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving, and equilibrium problems; see, for example, [2–4]. To construct a mathematical model which is as close as possible to a real complex problem, we often have to use more than one constraint. Solving such problems, we have to obtain some solution which is simultaneously a solution of two or more subproblems, or a solution of one subproblem on the solution set of another subproblem. Actually, these subproblems can be given by problems of different types. For example, Antipin considered a finite-dimensional variant of the variational inequality, where the solution should satisfy some related constraint in inequality form [5] or some system of constraints in inequality and equality form [6]. Yamada [7] considered an infinite-dimensional variant of the solution of the variational inequality on the fixed-point set of some mapping.
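In finite dimensions these notions are easy to experiment with numerically. The following Python sketch (our illustration, not part of the paper) takes $C$ to be a box in $\mathbf{R}^2$ and $A$ an affine monotone mapping, computes a candidate solution of the VI by the classical projected-gradient iteration $x \leftarrow P_C(x - \lambda Ax)$, and then spot-checks the defining inequality $\langle Ax^*, x - x^* \rangle \ge 0$ on random points of $C$; all names and constants are illustrative assumptions.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Metric projection P_C onto the box C = [lo, hi]^n (coordinatewise clipping)."""
    return np.clip(x, lo, hi)

# A monotone, Lipschitz-continuous affine mapping A(x) = Mx + q
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite, hence monotone
q = np.array([-1.0, -1.0])
A = lambda x: M @ x + q

# Projected-gradient iteration x <- P_C(x - lam * A(x))
lo, hi, lam = 0.0, 1.0, 0.1
x = np.zeros(2)
for _ in range(500):
    x = proj_box(x - lam * A(x), lo, hi)

# Spot-check the VI inequality <A(x*), y - x*> >= 0 on random sample points y in C
samples = np.random.default_rng(0).uniform(lo, hi, size=(1000, 2))
residual = min(float(A(x) @ (y - x)) for y in samples)
print(x, residual)
```

Here the unconstrained minimizer of the associated quadratic already lies inside the box, so the iteration settles at $x^* = (1/3, 1/3)$ with $Ax^* = 0$, and the sampled residual is nonnegative up to rounding.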

A mapping $A: C \to H$ is called $\alpha$-inverse strongly monotone if there exists a constant $\alpha > 0$, such that $\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2$ for all $x, y \in C$; see [8, 9]. It is obvious that an $\alpha$-inverse strongly monotone mapping is monotone and $\frac{1}{\alpha}$-Lipschitz continuous. A self-mapping $S: C \to C$ is called $k$-strictly pseudocontractive if there exists a constant $k \in [0, 1)$, such that $\|Sx - Sy\|^2 \le \|x - y\|^2 + k\|(I - S)x - (I - S)y\|^2$ for all $x, y \in C$; see [10]. In particular, if $k = 0$, then $S$ is called a nonexpansive mapping; see [11].
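To make the constant $k$ concrete, here is a small numerical check of ours (not from the paper): on the real line the map $S(x) = -2x$ doubles distances, so it is not nonexpansive, yet it satisfies the strict-pseudocontraction inequality with $k = 1/3$.

```python
import numpy as np

S = lambda x: -2.0 * x    # not nonexpansive: |Sx - Sy| = 2|x - y|
k = 1.0 / 3.0             # claimed strict-pseudocontraction constant

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    x, y = rng.normal(size=2)
    lhs = abs(S(x) - S(y)) ** 2
    rhs = abs(x - y) ** 2 + k * abs((x - S(x)) - (y - S(y))) ** 2
    ok = ok and lhs <= rhs + 1e-9    # small tolerance for rounding
print(ok)  # True
```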

A set-valued mapping $T$ with domain $D(T)$ and range $R(T)$ in $H$ is called monotone if its graph $G(T)$ is a monotone set in $H \times H$; that is, $T$ is monotone if and only if $\langle u - v, x - y \rangle \ge 0$ whenever $u \in Tx$ and $v \in Ty$. A monotone set-valued mapping $T$ is called maximal if its graph $G(T)$ is not properly contained in the graph of any other monotone mapping in $H$.

Let $B$ be a single-valued mapping of $C$ into $H$, and let $R$ be a multivalued mapping with $D(R) = C$. Consider the following variational inclusion: find $x^* \in C$, such that $0 \in Bx^* + Rx^*$. We denote by $\Omega$ the solution set of the variational inclusion (1.7). In particular, if $B = 0$, then $\Omega = R^{-1}0$.

In 1998, Huang [12] studied problem (1.7) in the case where $R$ is maximal monotone and $B$ is strongly monotone and Lipschitz continuous with $C = H$. Subsequently, Zeng et al. [13] further studied problem (1.7) in a case more general than Huang's [12]. Moreover, the authors [13] obtained the same strong convergence conclusion as in Huang's result [12]. In addition, they also gave a geometric convergence rate estimate for approximate solutions.

In 2003, for finding an element of $\mathrm{Fix}(S) \cap \mathrm{VI}(C, A)$ when $C$ is nonempty, closed, and convex, $S: C \to C$ is nonexpansive, and $A: C \to H$ is $\alpha$-inverse strongly monotone, Takahashi and Toyoda [14] introduced the following iterative algorithm: $$x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S P_C(x_n - \lambda_n A x_n),$$ where $x_0 = x \in C$ is chosen arbitrarily, $\{\alpha_n\}$ is a sequence in $(0, 1)$, and $\{\lambda_n\}$ is a sequence in $(0, 2\alpha)$. They showed that, if $\mathrm{Fix}(S) \cap \mathrm{VI}(C, A) \neq \emptyset$, then the sequence $\{x_n\}$ converges weakly to some $z \in \mathrm{Fix}(S) \cap \mathrm{VI}(C, A)$. In 2006, to solve this problem (i.e., to find an element of $\mathrm{Fix}(S) \cap \mathrm{VI}(C, A)$), Nadezhkina and Takahashi [15] introduced an iterative algorithm by a hybrid method. Generally speaking, the suggested algorithm is based on two well-known types of methods, that is, on the extragradient-type method due to Korpelevich [16] for solving variational inequalities and the so-called hybrid or outer-approximation method due to Haugazeau (see [15]) for solving fixed-point problems. It is worth emphasizing that the idea of “hybrid” or “outer-approximation” types of methods was successfully generalized and extended in many papers; see, for example, [17–23]. In addition, the idea of the extragradient iterative algorithm introduced by Korpelevich [16] was successfully generalized and extended not only in Euclidean but also in Hilbert and Banach spaces; see, for example, [24–29].
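Korpelevich's extragradient idea is easy to visualize in the plane with a standard toy example of ours (not taken from the paper): the $90^\circ$-rotation mapping is monotone and $1$-Lipschitz but not inverse strongly monotone, and the plain projected-gradient step merely spirals, whereas re-evaluating $A$ at the predictor point $y_n = P_C(x_n - \lambda Ax_n)$ restores convergence.

```python
import numpy as np

A = lambda x: np.array([x[1], -x[0]])   # 90-degree rotation: monotone, 1-Lipschitz,
                                        # but not inverse strongly monotone
proj = lambda x: np.clip(x, -1.0, 1.0)  # P_C for the box C = [-1, 1]^2
lam = 0.5                               # step size lam < 1/L with L = 1

x = np.array([1.0, 1.0])
for _ in range(300):
    y = proj(x - lam * A(x))            # predictor (extragradient) step
    x = proj(x - lam * A(y))            # corrector step uses A(y), not A(x)
print(np.linalg.norm(x))                # ~0: iterates reach the unique solution x* = 0
```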

Theorem NT (see [15, Theorem 3.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A: C \to H$ be a monotone and $L$-Lipschitz-continuous mapping, and let $S: C \to C$ be a nonexpansive mapping such that $\mathrm{Fix}(S) \cap \mathrm{VI}(C, A) \neq \emptyset$. Let $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$ be the sequences generated by
$$x_0 = x \in C, \quad y_n = P_C(x_n - \lambda_n A x_n), \quad z_n = \alpha_n x_n + (1 - \alpha_n) S P_C(x_n - \lambda_n A y_n),$$
$$C_n = \{z \in C : \|z_n - z\| \le \|x_n - z\|\}, \quad Q_n = \{z \in C : \langle x_n - z, x - x_n \rangle \ge 0\}, \quad x_{n+1} = P_{C_n \cap Q_n} x,$$
where $x_0$ is chosen arbitrarily, $\{\lambda_n\} \subset [a, b]$ for some $a, b \in (0, 1/L)$, and $\{\alpha_n\} \subset [0, c]$ for some $c \in [0, 1)$. Then the sequences $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$ converge strongly to $P_{\mathrm{Fix}(S) \cap \mathrm{VI}(C, A)} x$.

It is easy to see that the class of $\alpha$-inverse strongly monotone mappings in the above-mentioned problem of Takahashi and Toyoda [14] is a quite important class of mappings among various well-known classes of mappings. It is also easy to see that, while $\alpha$-inverse strongly monotone mappings are tightly connected with the important class of nonexpansive mappings, they are also tightly connected with the more general, and also quite important, class of strictly pseudocontractive mappings. That is, if a mapping $T: C \to C$ is nonexpansive, then the mapping $I - T$ is $\frac{1}{2}$-inverse strongly monotone; moreover, $\mathrm{Fix}(T) = \mathrm{VI}(C, I - T)$ (see, e.g., [14]). The construction of fixed points of nonexpansive mappings via Mann's algorithm has been investigated extensively in the literature (see, e.g., [30, 31] and the references therein). At the same time, if a mapping $S: C \to C$ is $k$-strictly pseudocontractive, then the mapping $I - S$ is $\frac{1-k}{2}$-inverse strongly monotone and $\frac{2}{1-k}$-Lipschitz continuous.
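Mann's averaging can be seen on the same kind of toy example (our sketch, not from the paper): for the nonexpansive rotation $T$ below, the Picard iteration $x_{n+1} = Tx_n$ only cycles around the fixed point $0$, while the averaged step $x_{n+1} = (1 - \alpha_n)x_n + \alpha_n Tx_n$ with constant $\alpha_n = 1/2$ converges.

```python
import numpy as np

T = lambda x: np.array([x[1], -x[0]])   # rotation by 90 degrees: nonexpansive, Fix(T) = {0}
alpha = 0.5                             # Mann averaging parameter in (0, 1)

x = np.array([1.0, 1.0])
for _ in range(100):
    x = (1 - alpha) * x + alpha * T(x)  # Mann step
print(np.linalg.norm(x))                # ~0: converges to the fixed point
```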

Let $B_1, B_2: C \to H$ be two mappings. Recently, Ceng et al. [32] introduced and considered the following problem of finding $(x^*, y^*) \in C \times C$, such that
$$\langle \mu_1 B_1 y^* + x^* - y^*, x - x^* \rangle \ge 0 \quad \forall x \in C,$$
$$\langle \mu_2 B_2 x^* + y^* - x^*, x - y^* \rangle \ge 0 \quad \forall x \in C,$$
which is called a general system of variational inequalities (GSVI), where $\mu_1 > 0$ and $\mu_2 > 0$ are two constants. The set of solutions of problem (1.10) is denoted by $\mathrm{GSVI}(C, B_1, B_2)$. In particular, if $B_1 = B_2$, then problem (1.10) reduces to the new system of variational inequalities (NSVI), introduced and studied by Verma [33]. Further, if $x^* = y^*$ additionally, then the NSVI reduces to the VI (1.3).

In particular, if $B_1 = B_2 = A$ and $\mu_1 = \mu_2$, then the GSVI (1.10) is equivalent to the VI (1.3).

Indeed, in this case, the GSVI (1.10) is equivalent to the following problem of finding $(x^*, y^*) \in C \times C$ such that both components solve coupled variational inequalities involving the same mapping $A$. Thus we must have $x^* = y^*$. As a matter of fact, if $x^* \neq y^*$, then by choosing suitable test points in the two inequalities we obtain estimates which lead to a contradiction. Therefore, the GSVI (1.10) coincides with the VI (1.3).

Recently, Ceng et al. [32] transformed problem (1.10) into a fixed-point problem in the following way.

Lemma 1.1 (see [32]). For given $\bar{x}, \bar{y} \in C$, $(\bar{x}, \bar{y})$ is a solution of problem (1.10) if and only if $\bar{x}$ is a fixed point of the mapping $G: C \to C$ defined by $$G(x) = P_C[P_C(x - \mu_2 B_2 x) - \mu_1 B_1 P_C(x - \mu_2 B_2 x)] \quad \forall x \in C,$$ where $\bar{y} = P_C(\bar{x} - \mu_2 B_2 \bar{x})$.

In particular, if the mapping $B_i$ is $\beta_i$-inverse strongly monotone for $i = 1, 2$, then the mapping $G$ is nonexpansive provided $\mu_i \in (0, 2\beta_i]$ for $i = 1, 2$.

Utilizing Lemma 1.1, they introduced and studied a relaxed extragradient method for solving the GSVI (1.10). Throughout this paper, the set of fixed points of the mapping $G$ is denoted by $\mathrm{Fix}(G)$. Based on the relaxed extragradient method and the viscosity approximation method, Yao et al. [34] proposed and analyzed an iterative algorithm for finding a common solution of the GSVI (1.10) and the fixed-point problem of a strictly pseudocontractive mapping $S: C \to C$.

Subsequently, Ceng et al. [35] further presented and analyzed an iterative scheme for finding a common element of the solution set of the VI (1.3), the solution set of the GSVI (1.10), and the fixed-point set of a strictly pseudocontractive mapping $S: C \to C$.

Theorem CGY (see [35, Theorem 3.1]). Let be a nonempty closed convex subset of a real Hilbert space . Let be -inverse strongly monotone, and let be -inverse strongly monotone for . Let be a -strictly pseudocontractive mapping such that . Let be a -contraction with . For given arbitrarily, let the sequences , , and be generated iteratively by where for , and , such that(i) and , for all ;(ii) and ;(iii) and ;(iv);(v) and .

Then the sequence generated by (1.14) converges strongly to , and is a solution of the GSVI (1.10), where .

On the other hand, let $A: C \to H$ be a monotone and $L$-Lipschitz-continuous mapping, and let $B: C \to H$ be an $\alpha$-inverse strongly monotone mapping. Let $R: C \to 2^H$ be a maximal monotone mapping with $D(R) = C$, and let $S: C \to C$ be a nonexpansive mapping such that $\mathrm{Fix}(S) \cap \Omega \cap \mathrm{VI}(C, A) \neq \emptyset$. Motivated by Nadezhkina and Takahashi's hybrid-extragradient algorithm (1.9), Ceng et al. [36, Theorem 3.1] introduced another modified hybrid-extragradient algorithm, where $x_0 \in C$ is chosen arbitrarily and the parameter sequences satisfy suitable restrictions. It was proven in [36] that, under very mild conditions, the three sequences $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$ generated by (1.15) converge strongly to the same point $P_{\mathrm{Fix}(S) \cap \Omega \cap \mathrm{VI}(C, A)} x_0$.

Inspired by the research going on in this area, we propose and analyze the following hybrid extragradient iterative algorithm for finding a common element of the solution set of the GSVI (1.10), the solution set of the variational inclusion (1.7), and the fixed-point set of a strictly pseudocontractive mapping $S: C \to C$.

Algorithm 1.2. Assume that . Let for , and such that , for all . For given arbitrarily, let , and be the sequences generated by the hybrid extragradient iterative scheme where , for all .

Under appropriate assumptions, it is proven that all three generated sequences converge strongly to the same point, which furthermore yields a solution of the GSVI (1.10).

Let be a -strictly pseudocontractive mapping, let be a -strictly pseudocontractive mapping, and let be a nonexpansive mapping. Putting , and , for all in Algorithm 1.2, we consider and analyze the following hybrid extragradient iterative algorithm for finding a common fixed point of three mappings , , and .

Algorithm 1.3. Assume that . Let , , and such that , for all . For given arbitrarily, let , and be the sequences generated by the hybrid extragradient iterative scheme
Under quite mild conditions, it is shown that all the sequences , , and converge strongly to the same point .

Observe that Ceng et al. [36, Theorem 3.1] considered the problem of finding an element of where is nonexpansive, Nadezhkina and Takahashi [15, Theorem 3.1] studied the problem of finding an element of where is nonexpansive, and Ceng et al. [35, Theorem 3.1] investigated the problem of finding an element of where is strictly pseudocontractive. It is clear that every one of these three problems is very different from our problem of finding an element of where is strictly pseudocontractive. Hence there is no doubt that the strong convergence results for solving our problem are very interesting and quite valuable. Because our hybrid extragradient iterative algorithms involve two inverse strongly monotone mappings and , a strictly pseudo-contractive self-mapping , and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [36, Theorem 3.1] and [15, Theorem 3.1], respectively. Furthermore, the relaxed extragradient iterative scheme in Yao et al. [34, Theorem 3.2] is extended to develop our hybrid extragradient iterative algorithms. In our results, the hybrid extragradient iterative algorithms drop the requirements that and in [34, Theorem 3.2] and [35, Theorem 3.1]. Therefore, our results represent the modification, supplementation, extension, and improvement of [36, Theorem 3.1], [15, Theorem 3.1], [34, Theorem 3.2], and [35, Theorem 3.1] to a great extent.

2. Preliminaries

Let $H$ be a real Hilbert space, whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\| \cdot \|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. We write $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$ and $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$. Moreover, we use $\omega_w(x_n)$ to denote the weak $\omega$-limit set of the sequence $\{x_n\}$, that is, $$\omega_w(x_n) = \{x \in H : x_{n_i} \rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\}\}.$$

For every point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that $\|x - P_C x\| \le \|x - y\|$ for all $y \in C$. $P_C$ is called the metric projection of $H$ onto $C$. We know that $P_C$ is a firmly nonexpansive mapping of $H$ onto $C$; that is, there holds the following relation: $\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$. Consequently, $P_C$ is nonexpansive and monotone. It is also known that $P_C$ is characterized by the following properties: $\langle x - P_C x, y - P_C x \rangle \le 0$ and $\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2$ for all $x \in H$ and $y \in C$; see [11, 37] for more details. Let $A: C \to H$ be a monotone mapping. In the context of the variational inequality, this implies that $$x^* \in \mathrm{VI}(C, A) \iff x^* = P_C(x^* - \lambda A x^*) \quad \text{for any } \lambda > 0.$$
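When $C$ is a box, $P_C$ is coordinatewise clipping, and the firm-nonexpansiveness relation $\langle x - y, P_Cx - P_Cy \rangle \ge \|P_Cx - P_Cy\|^2$ can be spot-checked directly (a numerical illustration of ours, not part of the paper):

```python
import numpy as np

proj = lambda x: np.clip(x, 0.0, 1.0)   # P_C onto the box C = [0, 1]^2

rng = np.random.default_rng(2)
ok = True
for _ in range(1000):
    x, y = rng.normal(size=(2, 2)) * 3.0
    px, py = proj(x), proj(y)
    # firm nonexpansiveness: <x - y, Px - Py> >= ||Px - Py||^2
    ok = ok and float((x - y) @ (px - py)) >= float((px - py) @ (px - py)) - 1e-9
print(ok)  # True
```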

It is also known that the norm of every Hilbert space $H$ satisfies weak lower semicontinuity [4]; that is, for any sequence $\{x_n\}$ with $x_n \rightharpoonup x$, the inequality $\|x\| \le \liminf_{n \to \infty} \|x_n\|$ holds.

Recall that a set-valued mapping $T: H \to 2^H$ is called maximal monotone if $T$ is monotone and $(I + \lambda T)H = H$ for each $\lambda > 0$, where $I$ is the identity mapping of $H$. We denote by $G(T)$ the graph of $T$. It is known that a monotone mapping $T$ is maximal if and only if, for $(x, u) \in H \times H$, $\langle x - y, u - v \rangle \ge 0$ for every $(y, v) \in G(T)$ implies $u \in Tx$. The following example illustrates the concept of maximal monotone mappings in the setting of Hilbert spaces.

Let $A: C \to H$ be a monotone, $L$-Lipschitz-continuous mapping, and let $N_C v$ be the normal cone to $C$ at $v \in C$, that is, $N_C v = \{w \in H : \langle v - u, w \rangle \ge 0, \ \forall u \in C\}$. Define $$Tv = \begin{cases} Av + N_C v, & v \in C, \\ \emptyset, & v \notin C. \end{cases}$$ Then $T$ is maximal monotone, and $0 \in Tv$ if and only if $v \in \mathrm{VI}(C, A)$; see [38].

Assume that $R: C \to 2^H$ is a maximal monotone mapping. Then, for $\lambda > 0$, associated with $R$, the resolvent operator $J_{R,\lambda}$ can be defined as $$J_{R,\lambda} x = (I + \lambda R)^{-1} x \quad \forall x \in H.$$ In terms of Huang [12] (see also [13]), there holds the following property for the resolvent operator $J_{R,\lambda}$.
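A concrete scalar instance of ours (not from the paper): take $R$ to be the subdifferential of the convex function $f(t) = |t|$, a maximal monotone operator, whose resolvent $(I + \lambda R)^{-1}$ is the familiar soft-thresholding map. The sketch verifies the defining inclusion $x \in (I + \lambda R)(J_{R,\lambda}x)$ at random points.

```python
import numpy as np

lam = 0.7

def resolvent(x):
    """J_{R,lam} = (I + lam R)^(-1) for R = subdifferential of f(t) = |t|,
    which works out to scalar soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Check x in (I + lam R)(Jx), i.e. (x - Jx)/lam lies in the subdifferential of |.| at Jx
rng = np.random.default_rng(3)
ok = True
for x in rng.normal(size=1000) * 2.0:
    j = resolvent(x)
    g = (x - j) / lam
    ok = ok and (abs(g) <= 1.0 + 1e-9 if j == 0 else abs(g - np.sign(j)) <= 1e-9)
print(ok)  # True
```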

Lemma 2.1. $J_{R,\lambda}$ is single valued and firmly nonexpansive; that is, $\langle J_{R,\lambda} x - J_{R,\lambda} y, x - y \rangle \ge \|J_{R,\lambda} x - J_{R,\lambda} y\|^2$ for all $x, y \in H$. Consequently, $J_{R,\lambda}$ is nonexpansive and monotone.

Lemma 2.2 (see [39]). There holds the relation $$\|\lambda x + \mu y\|^2 = \lambda \|x\|^2 + \mu \|y\|^2 - \lambda\mu \|x - y\|^2$$ for all $x, y \in H$ and $\lambda, \mu \in [0, 1]$ with $\lambda + \mu = 1$.

Lemma 2.3 (see [36]). Let $R: C \to 2^H$ be a maximal monotone mapping with $D(R) = C$. Then, for any given $\lambda > 0$, $u \in C$ is a solution of problem (1.7) if and only if $u$ satisfies $$u = J_{R,\lambda}(u - \lambda B u).$$

Lemma 2.4 (see [13]). Let $R: C \to 2^H$ be a maximal monotone mapping with $D(R) = C$, and let $B: C \to H$ be a strongly monotone, continuous, and single-valued mapping. Then, for each $z \in H$, the equation $z \in (B + \lambda R)x$ has a unique solution $x_\lambda$ for $\lambda > 0$.

Lemma 2.5 (see [36]). Let $R: C \to 2^H$ be a maximal monotone mapping with $D(R) = C$, and let $B: C \to H$ be a monotone, continuous, and single-valued mapping. Then $(I + \lambda(R + B))C = H$ for each $\lambda > 0$. In this case, $R + B$ is maximal monotone.

It is clear that, in a real Hilbert space $H$, $S: C \to C$ is $k$-strictly pseudocontractive if and only if there holds the following inequality: $$\langle Sx - Sy, x - y \rangle \le \|x - y\|^2 - \frac{1 - k}{2}\|(I - S)x - (I - S)y\|^2 \quad \forall x, y \in C.$$ This immediately implies that if $S$ is a $k$-strictly pseudocontractive mapping, then $I - S$ is $\frac{1 - k}{2}$-inverse strongly monotone; for further detail, we refer to [10] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings.
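For instance (a numerical check of ours, not from the paper), the scalar map $S(x) = -3x$ is $\frac{1}{2}$-strictly pseudocontractive, and the sketch below confirms on random pairs that $I - S$ is $\frac{1-k}{2} = \frac{1}{4}$-inverse strongly monotone, as the inequality above predicts.

```python
import numpy as np

S = lambda x: -3.0 * x     # 1/2-strictly pseudocontractive on the real line
k = 0.5
alpha = (1.0 - k) / 2.0    # predicted inverse-strong-monotonicity modulus of I - S

rng = np.random.default_rng(4)
ok = True
for _ in range(1000):
    x, y = rng.normal(size=2)
    u, v = x - S(x), y - S(y)                       # (I - S)x and (I - S)y
    ok = ok and (u - v) * (x - y) >= alpha * (u - v) ** 2 - 1e-9
print(ok)  # True
```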

Lemma 2.6 (see [10, Proposition 2.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $S: C \to C$ be a mapping. (i) If $S$ is a $k$-strictly pseudocontractive mapping, then $S$ satisfies the Lipschitz condition $\|Sx - Sy\| \le \frac{1 + k}{1 - k}\|x - y\|$ for all $x, y \in C$. (ii) If $S$ is a $k$-strictly pseudocontractive mapping, then the mapping $I - S$ is semiclosed at $0$; that is, if $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup \tilde{x}$ weakly and $(I - S)x_n \to 0$ strongly, then $(I - S)\tilde{x} = 0$. (iii) If $S$ is a $k$-quasistrict pseudocontraction, then the fixed-point set $\mathrm{Fix}(S)$ of $S$ is closed and convex, so that the projection $P_{\mathrm{Fix}(S)}$ is well defined.

Lemma 2.7 (see [34]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S: C \to C$ be a $k$-strictly pseudocontractive mapping. Let $\gamma$ and $\delta$ be two nonnegative real numbers such that $(\gamma + \delta)k \le \gamma$. Then $$\|\gamma(x - y) + \delta(Sx - Sy)\| \le (\gamma + \delta)\|x - y\| \quad \forall x, y \in C.$$
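Lemma 2.7 can likewise be spot-checked numerically (our illustration, not from the paper): with the $\frac{1}{3}$-strictly pseudocontractive scalar map $S(x) = -2x$ and weights $\gamma = 0.3$, $\delta = 0.6$ satisfying $(\gamma + \delta)k \le \gamma$, the estimate holds on random pairs.

```python
import numpy as np

S = lambda x: -2.0 * x      # 1/3-strictly pseudocontractive
k = 1.0 / 3.0
gamma, delta = 0.3, 0.6     # chosen so that (gamma + delta) * k <= gamma

rng = np.random.default_rng(5)
ok = True
for _ in range(1000):
    x, y = rng.normal(size=2)
    lhs = abs(gamma * (x - y) + delta * (S(x) - S(y)))
    ok = ok and lhs <= (gamma + delta) * abs(x - y) + 1e-9
print(ok)  # True
```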

The following lemma is well known.

Lemma 2.8 (see [11]). Every Hilbert space $H$ has the Kadec-Klee property; that is, for given $x \in H$ and $\{x_n\} \subset H$, we have that $x_n \rightharpoonup x$ and $\|x_n\| \to \|x\|$ imply $x_n \to x$.

3. Main Results

In this section, we first prove the strong convergence of the sequences generated by our hybrid extragradient iterative algorithm for finding a common solution of a general system of variational inequalities, a variational inclusion, and a fixed-point problem of a strictly pseudocontractive self-mapping.

Theorem 3.1. Let be a nonempty closed convex subset of a real Hilbert space . Let be -inverse strongly monotone for , let be an -inverse strongly monotone mapping, let be a maximal monotone mapping with , and let be a -strictly pseudocontractive mapping such that . For given arbitrarily, let , and be the sequences generated by where for , for some , and such that for some , for some , and , for all . Then the sequences , , and converge strongly to the same point if and only if . Furthermore, is a solution of the GSVI (1.10), where .

Proof. It is obvious that is closed and is closed and convex for every . As we also know that is convex for every . As we have , for all , and hence by (2.4).
First of all, assume that the sequences , and converge strongly to the same point . Then it is clear that and . Observe that from the nonexpansiveness of the mappings and (due to for ), we have Hence, we conclude that and . Since , we obtain that and . Thus, from the nonexpansiveness of the mapping , we have So, we deduce that and . Note that This implies that as .
For the remainder of the proof, we divide it into several steps.
Step 1. We claim that for every .
Indeed, take a fixed arbitrarily. Then , for all , and For simplicity, we write , and , for each . Since is -inverse strongly monotone, and for , we know that for all , Repeating the same argument, we can obtain that for all , Furthermore, by Lemma 2.1 we derive from (3.9) and (3.10) Since , for all , utilizing Lemmas 2.2 and 2.7, we get from (3.11) for every , and hence . So, for every . Next, let us show by mathematical induction that is well defined and for every . For , we have . Hence we obtain . Suppose that is given and for some integer . Since is nonempty, is a nonempty closed convex subset of . So, there exists a unique element such that . It is also obvious that there holds for , and hence . Therefore, we derive .
Step 2. We claim that
Indeed, let . From , and , we have for every . Therefore, is bounded. From (3.9)–(3.12), we also obtain that , , , , , and all are bounded. Since and , we have for every . Therefore, there exists . Since and , utilizing (2.5), we have for every . This implies that Since , we have , and hence for every . From it follows that
Step 3. We claim that Indeed, for , we obtain from (3.12) Therefore, we have Since for some , and the sequences and are bounded, we deduce that On the other hand, by firm nonexpansiveness of , we have that is, Repeating the same argument, we can also obtain Moreover, using the argument technique similar to the above one, we derive that is, Repeating the same argument, we can also obtain Utilizing (3.11), (3.25)–(3.29), we have