Abstract

We study the problem of approximating a common element of the common fixed point set of an infinite family of nonexpansive mappings and of the solution set of a variational inequality involving an inverse-strongly monotone mapping, by means of a viscosity approximation iterative method. Strong convergence theorems for such common elements are established in the framework of Hilbert spaces.

1. Introduction and Preliminaries

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty, closed, and convex subset of $H$. Let $A: C \to H$ be a mapping and let $P_C$ be the metric projection from $H$ onto the subset $C$. The classical variational inequality is to find $x^* \in C$ such that
\[
\langle Ax^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.1}
\]
In this paper, we use $VI(C, A)$ to denote the solution set of the variational inequality (1.1). For a given point $z \in H$, $u \in C$ satisfies the inequality
\[
\langle u - z, v - u \rangle \ge 0, \quad \forall v \in C,
\]
if and only if $u = P_C z$. It is known that the projection operator $P_C$ is nonexpansive. It is also known that $P_C$ satisfies
\[
\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2, \quad \forall x, y \in H.
\]
One can see that the variational inequality (1.1) is equivalent to a fixed point problem: the point $u \in C$ is a solution of the variational inequality (1.1) if and only if $u$ satisfies the relation $u = P_C(u - \lambda A u)$, where $\lambda > 0$ is a constant.
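For completeness, the equivalence just stated can be read off directly from the characterization of the metric projection; the following short display records the argument (with $\lambda > 0$ an arbitrary constant):
\[
\begin{aligned}
u = P_C(u - \lambda A u)
&\iff \langle u - (u - \lambda A u), v - u \rangle \ge 0 \quad \forall v \in C\\
&\iff \langle A u, v - u \rangle \ge 0 \quad \forall v \in C
\iff u \in VI(C, A),
\end{aligned}
\]
where the middle equivalence uses $\lambda > 0$.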

Recall the following definitions.

(a) $A$ is said to be monotone if and only if
\[
\langle Ax - Ay, x - y \rangle \ge 0, \quad \forall x, y \in C.
\]

(b) $A$ is said to be $\alpha$-strongly monotone if and only if there exists a positive real number $\alpha$ such that
\[
\langle Ax - Ay, x - y \rangle \ge \alpha \|x - y\|^2, \quad \forall x, y \in C.
\]

(c) $A$ is said to be $\alpha$-inverse-strongly monotone if and only if there exists a positive real number $\alpha$ such that
\[
\langle Ax - Ay, x - y \rangle \ge \alpha \|Ax - Ay\|^2, \quad \forall x, y \in C.
\]

(d) A mapping $S: C \to C$ is said to be nonexpansive if and only if
\[
\|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in C.
\]

In this paper, we use $F(S)$ to denote the fixed point set of $S$.

(e) A mapping $f: C \to C$ is said to be a $\kappa$-contraction if and only if there exists a positive real number $\kappa \in (0, 1)$ such that
\[
\|f(x) - f(y)\| \le \kappa \|x - y\|, \quad \forall x, y \in C.
\]

(f) A bounded linear operator $B$ on $H$ is strongly positive if and only if there exists a positive real number $\bar\gamma$ such that
\[
\langle Bx, x \rangle \ge \bar\gamma \|x\|^2, \quad \forall x \in H.
\]

(g) A set-valued mapping $T: H \to 2^H$ is called monotone if and only if, for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply $\langle x - y, f - g \rangle \ge 0$. A monotone mapping $T$ is maximal if the graph $G(T)$ of $T$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $T$ is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g \rangle \ge 0$ for every $(y, g) \in G(T)$ implies $f \in Tx$. Let $A$ be a monotone map of $C$ into $H$ and let $N_C v$ be the normal cone to $C$ at $v \in C$, that is, $N_C v = \{ w \in H : \langle v - u, w \rangle \ge 0, \ \forall u \in C \}$, and define
\[
Tv =
\begin{cases}
Av + N_C v, & v \in C,\\
\emptyset, & v \notin C.
\end{cases}
\]

Then $T$ is maximal monotone and $0 \in Tv$ if and only if $v \in VI(C, A)$; see [1] and the references therein.
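A fact used repeatedly below (for instance, in the first step of the proof of Theorem 2.1) is that inverse-strong monotonicity makes $P_C(I - \lambda A)$ nonexpansive for suitably small $\lambda$; the following standard computation records this:
\[
\begin{aligned}
\|(I - \lambda A)x - (I - \lambda A)y\|^2
&= \|x - y\|^2 - 2\lambda \langle x - y, Ax - Ay \rangle + \lambda^2 \|Ax - Ay\|^2\\
&\le \|x - y\|^2 + \lambda(\lambda - 2\alpha) \|Ax - Ay\|^2,
\end{aligned}
\]
so, for $0 < \lambda \le 2\alpha$, the mapping $I - \lambda A$ is nonexpansive, and hence so is $P_C(I - \lambda A)$, since $P_C$ is nonexpansive.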

For finding a common element in the fixed point set of a nonexpansive mapping and in the solution set of the variational inequality involving an inverse-strongly monotone mapping, Takahashi and Toyoda [2] introduced the following iterative process:
\[
x_0 \in C, \qquad x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S P_C(x_n - \lambda_n A x_n), \quad n \ge 0, \tag{1.11}
\]
where $A$ is an $\alpha$-inverse-strongly monotone mapping, $S$ is a nonexpansive mapping, $\{\alpha_n\}$ is a real number sequence in $(0, 1)$, and $\{\lambda_n\}$ is a real number sequence in $(0, 2\alpha)$. They showed that the sequence $\{x_n\}$ generated in (1.11) weakly converges to some point $z \in F(S) \cap VI(C, A)$ provided that $F(S) \cap VI(C, A)$ is nonempty.

In order to obtain a strong convergence theorem for common elements, Iiduka and Takahashi [3] considered the problem by the following iterative process:
\[
x_1 = u \in C, \qquad x_{n+1} = \alpha_n u + (1 - \alpha_n) S P_C(x_n - \lambda_n A x_n), \quad n \ge 1, \tag{1.12}
\]
where $u$ is a fixed element in $C$, $A$ is an $\alpha$-inverse-strongly monotone mapping, $\{\alpha_n\}$ is a real number sequence in $(0, 1)$, and $\{\lambda_n\}$ is a real number sequence in $(0, 2\alpha)$. They showed that the sequence $\{x_n\}$ generated in (1.12) strongly converges to some point $z \in F(S) \cap VI(C, A)$ provided that $F(S) \cap VI(C, A)$ is nonempty.
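As a concrete illustration of projection-type schemes of the form (1.12), the following sketch runs such an iteration on a small finite-dimensional example. The set $C$, the mappings $S$ and $A$, the anchor $u$, and the parameter sequences below are assumptions chosen only to make the example runnable; they are not taken from [3].

import numpy as np

# Sketch of an iteration of the form (1.12):
#     x_{n+1} = alpha_n * u + (1 - alpha_n) * S(P_C(x_n - lambda_n * A(x_n))).
# Here C = [0, 1]^d, S projects onto the diagonal line, and A(x) = 0.5 x;
# all of these choices, and the parameter sequences, are illustrative assumptions.

d = 3
P_C = lambda x: np.clip(x, 0.0, 1.0)      # metric projection onto C = [0, 1]^d
S = lambda x: np.full(d, x.mean())        # nonexpansive; F(S) is the diagonal line
A = lambda x: 0.5 * x                     # 2-inverse-strongly monotone with A(0) = 0
u = np.array([0.9, 0.1, 0.4])             # fixed anchor point in C

# F(S) and VI(C, A) intersect exactly in {0}, so the iterates should approach 0.
x = np.array([5.0, -3.0, 2.0])
for n in range(1, 5001):
    alpha, lam = 1.0 / (n + 1), 1.0       # alpha_n -> 0, sum alpha_n = infinity, lam in (0, 4)
    x = alpha * u + (1.0 - alpha) * S(P_C(x - lam * A(x)))

print("iterate after 5000 steps:", np.round(x, 4))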

Iterative methods for nonexpansive mappings have been applied to solve convex minimization problems; see, for example, [4–8] and the references therein. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping $T$ on a real Hilbert space $H$:
\[
\min_{x \in F(T)} \frac{1}{2} \langle Bx, x \rangle - \langle x, b \rangle, \tag{1.13}
\]
where $B$ is a bounded linear self-adjoint operator and $b$ is a given point in $H$. In [4], it is proved that the sequence $\{x_n\}$ defined by the iterative method below, with the initial guess $x_0 \in H$ chosen arbitrarily,
\[
x_{n+1} = (I - \alpha_n B) T x_n + \alpha_n b, \quad n \ge 0,
\]
strongly converges to the unique solution of the minimization problem (1.13) provided that the sequence $\{\alpha_n\}$ satisfies certain conditions.

Recently, Marino and Xu [5] considered the problem by the viscosity approximation method. They studied the following iterative process:
\[
x_0 \in H, \qquad x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n B) T x_n, \quad n \ge 0,
\]
where $f$ is a contraction. They proved that the sequence $\{x_n\}$ generated by the above iterative scheme strongly converges to the unique solution $x^*$ of the variational inequality
\[
\langle (B - \gamma f) x^*, x - x^* \rangle \ge 0, \quad \forall x \in F(T),
\]
which is the optimality condition for the minimization problem $\min_{x \in F(T)} \frac{1}{2} \langle Bx, x \rangle - h(x)$, where $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).
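The next sketch runs a viscosity-type iteration of the form above on a small example and compares the iterate with a reference solution of the limiting variational inequality. The choices of $T$, $f$, $B$, $\gamma$, and the step sizes are assumptions made here for the sake of a runnable illustration and are not taken from [5].

import numpy as np

# Minimal numerical sketch of the viscosity-type iteration
#     x_{n+1} = alpha_n * gamma * f(x_n) + (I - alpha_n * B) T(x_n).
# The data below (T, f, B, gamma, alpha_n) are assumptions chosen only to make
# a small runnable example; they are not taken from [5].

rng = np.random.default_rng(0)
n = 5
d = rng.uniform(1.0, 2.0, n)
B = np.diag(d)                     # strongly positive self-adjoint operator, gbar = d.min()
kappa, gamma = 0.3, 0.5            # contraction constant; 0 < gamma < gbar / kappa holds
b = rng.standard_normal(n)

T = lambda x: np.clip(x, 0.0, 1.0) # nonexpansive; F(T) = [0, 1]^n
f = lambda x: kappa * x + b        # kappa-contraction

x = rng.standard_normal(n)
for k in range(1, 20001):
    alpha = 1.0 / (k + 1)          # alpha_k -> 0 and sum alpha_k = infinity
    x = alpha * gamma * f(x) + (np.eye(n) - alpha * B) @ T(x)

# Reference point: solve <(B - gamma f) x*, y - x*> >= 0 over F(T) = [0, 1]^n
# by projected gradient on the strongly monotone map G(y) = B y - gamma f(y).
G = lambda y: B @ y - gamma * f(y)
y = np.zeros(n)
for _ in range(20000):
    y = np.clip(y - 0.2 * G(y), 0.0, 1.0)

print("viscosity iterate :", np.round(x, 4))
print("VI reference point:", np.round(y, 4))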

Problems involving a family of nonlinear mappings have been considered by many authors; see, for example, [9–21] and the references therein. The well-known convex feasibility problem reduces to finding a point in the intersection of the fixed point sets of a family of nonexpansive mappings. The problem of finding an optimal point that minimizes a given cost function over the common set of fixed points of a family of nonexpansive mappings is of wide interdisciplinary interest and practical importance; see, for example, [16, 17].

Recently, Qin et al. [18] considered a general iterative algorithm for an infinite family of nonexpansive mappings in the framework of Hilbert spaces. To be more precise, they introduced a general viscosity-type iterative algorithm in which $f$ is a contraction on $H$, $B$ is a strongly positive bounded linear operator, and $\{W_n\}$ are nonexpansive mappings, each generated by a finite family of nonexpansive mappings $T_1, \ldots, T_n$ as follows:
\[
\begin{aligned}
U_{n,n+1} &= I,\\
U_{n,n} &= \gamma_n T_n U_{n,n+1} + (1 - \gamma_n) I,\\
U_{n,n-1} &= \gamma_{n-1} T_{n-1} U_{n,n} + (1 - \gamma_{n-1}) I,\\
&\ \ \vdots\\
U_{n,k} &= \gamma_k T_k U_{n,k+1} + (1 - \gamma_k) I,\\
&\ \ \vdots\\
U_{n,2} &= \gamma_2 T_2 U_{n,3} + (1 - \gamma_2) I,\\
W_n = U_{n,1} &= \gamma_1 T_1 U_{n,2} + (1 - \gamma_1) I,
\end{aligned}
\tag{1.18}
\]
where $\gamma_1, \gamma_2, \ldots$ are real numbers such that $0 \le \gamma_i \le 1$ and $T_1, T_2, \ldots$ form an infinite family of mappings of $C$ into itself. Nonexpansivity of each $T_i$ ensures the nonexpansivity of $W_n$.
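To make the recursion (1.18) concrete, the following sketch evaluates $W_n x$ for a toy family of nonexpansive maps on a finite-dimensional space; the specific mappings $T_i$ and the weights $\gamma_i$ are assumptions chosen here for illustration only.

import numpy as np

# Evaluate W_n x = U_{n,1} x from the recursion (1.18):
#     U_{n,n+1} = I,   U_{n,k} = gamma_k T_k U_{n,k+1} + (1 - gamma_k) I,   k = n, ..., 1.
# The family T_i and the weights gamma_i below are illustrative assumptions.

def make_T(i):
    """T_i: metric projection onto the closed ball of radius 1/i (nonexpansive)."""
    def T(x):
        r, nx = 1.0 / i, np.linalg.norm(x)
        return x if nx <= r else r * x / nx
    return T

def W(n, x, gamma=0.5):
    """Compute W_n x by unwinding U_{n,k} x from k = n down to k = 1."""
    y = x                                              # y = U_{n,n+1} x = x
    for k in range(n, 0, -1):
        y = gamma * make_T(k)(y) + (1.0 - gamma) * x   # y = U_{n,k} x
    return y

x = np.array([2.0, -1.0, 0.5])
for n in (1, 5, 20):
    print(n, np.round(W(n, x), 4))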

Concerning $W_n$, we have the following lemmas, which are important in proving our main results.

Lemma 1.1 (see [19]). Let $C$ be a nonempty closed convex subset of a strictly convex Banach space $E$. Let $T_1, T_2, \ldots$ be nonexpansive mappings of $C$ into itself such that $\bigcap_{i=1}^{\infty} F(T_i)$ is nonempty, and let $\gamma_1, \gamma_2, \ldots$ be real numbers such that $0 < \gamma_i \le b < 1$ for any $i \ge 1$. Then, for every $x \in C$ and $k \ge 1$, the limit $\lim_{n \to \infty} U_{n,k} x$ exists.

Using Lemma 1.1, one can define the mapping $W$ of $C$ into itself as follows: $Wx = \lim_{n \to \infty} W_n x = \lim_{n \to \infty} U_{n,1} x$, for every $x \in C$. Such a $W$ is called the $W$-mapping generated by $T_1, T_2, \ldots$ and $\gamma_1, \gamma_2, \ldots$. Throughout this paper, we will assume that $0 < \gamma_i \le b < 1$ for all $i \ge 1$.

Lemma 1.2 (see [19]). Let $C$ be a nonempty closed convex subset of a strictly convex Banach space $E$. Let $T_1, T_2, \ldots$ be nonexpansive mappings of $C$ into itself such that $\bigcap_{i=1}^{\infty} F(T_i)$ is nonempty, and let $\gamma_1, \gamma_2, \ldots$ be real numbers such that $0 < \gamma_i \le b < 1$ for any $i \ge 1$. Then $F(W) = \bigcap_{i=1}^{\infty} F(T_i)$.

Motivated by the above results, in this paper we study the problem of approximating a common element of the common fixed point set of an infinite family of nonexpansive mappings and of the solution set of a variational inequality involving an inverse-strongly monotone mapping, by means of a viscosity approximation iterative method. Strong convergence theorems for such common elements are established in the framework of Hilbert spaces.

In order to prove our main results, we need the following lemmas.

Lemma 1.3 (see [5]). Assume that $B$ is a strongly positive bounded linear operator on a Hilbert space $H$ with coefficient $\bar\gamma > 0$ and $0 < \rho \le \|B\|^{-1}$. Then $\|I - \rho B\| \le 1 - \rho \bar\gamma$.
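The estimate in Lemma 1.3 follows from a short computation with the quadratic form of $B$ (assuming, as elsewhere in the paper, that $B$ is self-adjoint):
\[
\|I - \rho B\| = \sup_{\|x\| = 1} \langle (I - \rho B)x, x \rangle = \sup_{\|x\| = 1} \bigl( 1 - \rho \langle Bx, x \rangle \bigr) \le 1 - \rho \bar\gamma,
\]
where the first equality holds because $0 < \rho \le \|B\|^{-1}$ makes $I - \rho B$ a positive self-adjoint operator, whose norm equals the supremum of its quadratic form over the unit sphere.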

Lemma 1.4 (see [22]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
\[
a_{n+1} \le (1 - \gamma_n) a_n + \delta_n,
\]
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that (i) $\sum_{n=1}^{\infty} \gamma_n = \infty$; (ii) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$. Then $\lim_{n \to \infty} a_n = 0$.

Lemma 1.5 (see [23]). Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $E$ and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$. Suppose that $x_{n+1} = (1 - \beta_n) y_n + \beta_n x_n$ for all integers $n \ge 0$ and
\[
\limsup_{n \to \infty} \bigl( \|y_{n+1} - y_n\| - \|x_{n+1} - x_n\| \bigr) \le 0.
\]
Then $\lim_{n \to \infty} \|y_n - x_n\| = 0$.

Lemma 1.6 (see [14, 15]). Let $C$ be a nonempty closed convex subset of a Hilbert space $H$, let $\{T_i : C \to C\}$ be a family of infinitely many nonexpansive mappings with $\bigcap_{i=1}^{\infty} F(T_i) \ne \emptyset$, and let $\{\gamma_i\}$ be a real sequence such that $0 < \gamma_i \le b < 1$ for each $i \ge 1$. If $K$ is any bounded subset of $C$, then $\lim_{n \to \infty} \sup_{x \in K} \|W x - W_n x\| = 0$.

Lemma 1.7 (see [5]). Let $H$ be a Hilbert space. Let $B$ be a strongly positive bounded linear self-adjoint operator with coefficient $\bar\gamma > 0$ and let $f$ be a contraction with constant $\kappa \in (0, 1)$. Assume that $0 < \gamma < \bar\gamma / \kappa$. Let $T$ be a nonexpansive mapping on $H$ with a fixed point, and for $t \in (0, 1)$ let $x_t$ denote the fixed point of the contraction $x \mapsto t \gamma f(x) + (I - tB) T x$. Then $\{x_t\}$ converges strongly as $t \to 0$ to a fixed point $\tilde x$ of $T$, which solves the variational inequality
\[
\langle (B - \gamma f) \tilde x, \tilde x - z \rangle \le 0, \quad \forall z \in F(T).
\]
Equivalently, we have $\tilde x = P_{F(T)} (I - B + \gamma f)(\tilde x)$.

2. Main Results

Theorem 2.1. Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Let $A: C \to H$ be an $\alpha$-inverse-strongly monotone mapping and $f$ a $\kappa$-contraction. Let $\{T_i\}_{i=1}^{\infty}$ be an infinite family of nonexpansive mappings from $C$ into itself such that . Let $B$ be a strongly positive bounded linear self-adjoint operator of $H$ into itself with coefficient $\bar\gamma > 0$. Let $\{x_n\}$ be a sequence generated in where $W_n$ is generated in (1.18), and are real number sequences in . Assume that the control sequences , , and satisfy the following restrictions: (i) , ; (ii) ; (iii) ; (iv) for some with . Assume that . Then $\{x_n\}$ strongly converges to some point , where , where , which solves the variational inequality

Proof. First, we show that the mapping is nonexpansive. Notice that which implies that the mapping is nonexpansive. In view of condition (i), we may assume, without loss of generality, that for all . From Lemma 1.3, we know that if , then . Letting , we have On the other hand, we have By simple induction, we have which gives that the sequence is bounded, and so is .
Next, we prove . Put . We now compute where is an appropriate constant such that . It follows that where is an appropriate constant such that . Since and are nonexpansive, we have from (1.18) that where is an appropriate constant such that for all . Substituting (2.7) and (2.10) into (2.8) yields that where is an appropriate constant such that . From conditions (i) and (iii), we have By virtue of Lemma 1.5, we obtain that On the other hand, we have This implies from (2.13) that
Next, we show . Observing and condition (i), we can easily get Notice that On the other hand, we have from which it follows that Substituting (2.18) into (2.20), we arrive at It follows that In view of the restrictions (i) and (iv), we find from (2.15) that Observe that which yields that Substituting (2.25) into (2.20), we have This implies that In view of the restrictions (i) and (ii), we find from (2.15) and (2.23) that On the other hand, we have It follows from (2.13), (2.17), and (2.28) that . From Lemma 1.6, we find that as . Notice that from which it follows that
Next, we show , where . To this end, we choose a subsequence of such that Since is bounded, there is a subsequence of that converges weakly to . We may assume, without loss of generality, that . Hence we have . Indeed, let us first show that . Put Since is inverse-strongly monotone, we see that is maximal monotone. Let . Since and , we have On the other hand, from , we have and hence It follows that which implies from (2.28) that . We have and hence . Next, let us show . Since Hilbert spaces satisfy Opial's condition, from (2.31), we have which is a contradiction. Thus, we have from Lemma 1.2 that . On the other hand, we have
Finally, we show that strongly as . Notice that Therefore, we have where is an appropriate constant. On the other hand, we have Substituting (2.41) into (2.42) yields that Put and . That is, Notice that It follows from (2.13) and (2.39) that It follows from condition (i) and (2.47) that Applying Lemma 1.4 to (2.45), we conclude that as . This completes the proof.
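The precise iterative scheme of Theorem 2.1 is not recoverable from the extracted text above. Purely as an illustration, the following sketch implements one viscosity-type scheme of the general kind studied here, combining the projection step $P_C(x_n - \lambda_n A x_n)$ with the mappings $W_n$ of (1.18) and a Marino-Xu-type correction; the concrete form of the update, as well as all problem data, are assumptions and should not be read as the algorithm of Theorem 2.1.

import numpy as np

# Illustrative sketch only. The update below,
#     y_n = P_C(x_n - lambda_n * A(x_n)),
#     x_{n+1} = alpha_n * gamma * f(x_n) + beta_n * x_n
#               + ((1 - beta_n) I - alpha_n * B) W_n(y_n),
# is an assumed viscosity-type scheme of the general kind studied here; neither this
# form nor the data are taken from Theorem 2.1.

rng = np.random.default_rng(1)
d = 4
P_C = lambda x: np.clip(x, -1.0, 1.0)            # projection onto C = [-1, 1]^d
A = lambda x: 0.5 * x                            # 2-inverse-strongly monotone, A(0) = 0
f = lambda x: 0.3 * x + 0.1                      # 0.3-contraction
B = np.diag(rng.uniform(1.0, 1.5, d))            # strongly positive self-adjoint operator
gamma = 0.5

T = lambda i, x: np.clip(x, -1.0 / i, 1.0 / i)   # assumed family; common fixed point set {0}

def W(n, x, g=0.5):                              # W_n x via the recursion (1.18)
    y = x
    for k in range(n, 0, -1):
        y = g * T(k, y) + (1.0 - g) * x
    return y

# The common fixed point set of {T_i} is {0}, which also solves VI(C, A),
# so the iterates are expected to approach the origin.
x = rng.standard_normal(d)
for n in range(1, 2001):
    alpha, beta, lam = 1.0 / (n + 1), 0.5, 1.0   # alpha_n -> 0; beta_n away from 0 and 1
    y = P_C(x - lam * A(x))
    x = alpha * gamma * f(x) + beta * x + ((1.0 - beta) * np.eye(d) - alpha * B) @ W(n, y)

print("approximate common element:", np.round(x, 4))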

For a single nonexpansive mapping, we have from Theorem 2.1 the following.

Corollary 2.2. Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Let $A: C \to H$ be an $\alpha$-inverse-strongly monotone mapping and $f$ a $\kappa$-contraction. Let $S$ be a nonexpansive mapping from $C$ into itself such that . Let $B$ be a strongly positive bounded linear self-adjoint operator of $H$ into itself with coefficient $\bar\gamma > 0$. Let $\{x_n\}$ be a sequence generated in where and are real number sequences in . Assume that the control sequences , and satisfy the following restrictions: (i) , ; (ii) ; (iii) ; (iv) for some , with . Assume that . Then $\{x_n\}$ strongly converges to some point , where , where , which solves the variational inequality

Corollary 2.3. Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Let $f$ be a $\kappa$-contraction. Let $S$ be a nonexpansive mapping from $C$ into itself such that . Let $B$ be a strongly positive bounded linear self-adjoint operator of $H$ into itself with coefficient $\bar\gamma > 0$. Let $\{x_n\}$ be a sequence generated in where and are real number sequences in . Assume that the control sequences , and satisfy the following restrictions: (i) ; (ii) . Assume that . Then $\{x_n\}$ strongly converges to some point , where , where , which solves the variational inequality

If $B$ is the identity mapping, then Theorem 2.1 reduces to the following.

Corollary 2.4. Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Let $A: C \to H$ be an $\alpha$-inverse-strongly monotone mapping and $f$ a $\kappa$-contraction. Let $\{T_i\}_{i=1}^{\infty}$ be an infinite family of nonexpansive mappings from $C$ into itself such that . Let $\{x_n\}$ be a sequence generated in where $W_n$ is generated in (1.18), and are real number sequences in . Assume that the control sequences , , and satisfy the following restrictions: (i) , ; (ii) ; (iii) ; (iv) for some , with . Then $\{x_n\}$ strongly converges to some point , where , where , which solves the variational inequality