General Iterative Algorithms for Hierarchical Fixed Points Approach to Variational Inequalities
Nopparat Wairojjana and Poom Kumam
Academic Editor: Zhenyu Huang
Received: 24 Mar 2012
Accepted: 16 May 2012
Published: 12 Jul 2012
Abstract
This paper deals with new methods for approximating a solution to the fixed point problem: find $\tilde{x} \in F(T)$ such that $\tilde{x} = P_{F(T)}(I - A + \gamma f)\tilde{x}$, where $H$ is a Hilbert space, $C$ is a closed convex subset of $H$, $f$ is a $\rho$-contraction from $C$ into $C$ with $0 < \rho < 1$, $A$ is a strongly positive linear bounded operator with coefficient $\bar{\gamma} > 0$, $0 < \gamma < \bar{\gamma}/\rho$, $T$ is a nonexpansive mapping on $C$, and $P_{F(T)}$ denotes the metric projection onto the set $F(T)$ of fixed points of $T$. Under suitably different parameter conditions, we obtain strong convergence theorems by using the projection method; the limit solves the variational inequality $\langle (A - \gamma f)\tilde{x}, x - \tilde{x} \rangle \ge 0$ for all $x \in F(T)$. Our results generalize and improve the corresponding results of Yao et al. (2010) and of several other authors. Furthermore, we give an example which supports our main theorem in the last part.
1. Introduction
Throughout this paper, we assume that $H$ is a real Hilbert space whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively, and we let $C$ be a nonempty closed convex subset of $H$. A mapping $T : C \to C$ is called nonexpansive if
\[
\|Tx - Ty\| \le \|x - y\| \quad \forall x, y \in C.
\]
We use $F(T)$ to denote the set of fixed points of $T$, that is, $F(T) = \{x \in C : Tx = x\}$. It is assumed throughout the paper that $T$ is a nonexpansive mapping such that $F(T) \neq \emptyset$.
Recall that a mapping $f : C \to C$ is a contraction on $C$ if there exists a constant $\rho \in (0, 1)$ such that
\[
\|f(x) - f(y)\| \le \rho \|x - y\| \quad \forall x, y \in C.
\]
A mapping $A : H \to H$ is called a strongly positive linear bounded operator on $H$ if there exists a constant $\bar{\gamma} > 0$ with the property
\[
\langle Ax, x \rangle \ge \bar{\gamma} \|x\|^2 \quad \forall x \in H.
\]
A mapping $M : C \to C$ is called a strongly monotone operator with constant $\alpha > 0$ if
\[
\langle x - y, Mx - My \rangle \ge \alpha \|x - y\|^2 \quad \forall x, y \in C,
\]
and $M$ is called a monotone operator if
\[
\langle x - y, Mx - My \rangle \ge 0 \quad \forall x, y \in C.
\]
It is easy to prove that the mapping $I - T$ is a monotone operator whenever $T$ is a nonexpansive mapping; indeed, $\langle x - y, (I - T)x - (I - T)y \rangle \ge \|x - y\|^2 - \|Tx - Ty\|\,\|x - y\| \ge 0$.
The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
\[
\|x - P_C x\| = \inf_{y \in C} \|x - y\|.
\]
The variational inequality for a monotone operator $M$ over $C$ is to find a point $x^* \in C$ such that
\[
\langle x - x^*, Mx^* \rangle \ge 0 \quad \forall x \in C.
\]
A hierarchical fixed point problem is equivalent to the variational inequality for a monotone operator over the fixed point set. Moreover, to find a hierarchical fixed point of a nonexpansive mapping $T$ with respect to another nonexpansive mapping $S$, namely, we find $\tilde{x} \in F(T)$ such that
\[
\langle \tilde{x} - S\tilde{x}, x - \tilde{x} \rangle \ge 0 \quad \forall x \in F(T).
\]
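To make these notions concrete, the following minimal NumPy sketch (our addition, not part of the original paper) samples random points of $H = \mathbb{R}^2$ and checks three of the facts above: nonexpansiveness of a rotation $T$, monotonicity of $I - T$, and the projection inequality of Lemma 2.2(3) below. All concrete mappings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Metric projection onto the closed convex box C = [-1, 1]^2.
def proj_C(x):
    return np.clip(x, -1.0, 1.0)

# A rotation is an isometry of R^2, hence nonexpansive; here F(T) = {0}.
t = 0.7
T = lambda x: np.array([np.cos(t) * x[0] - np.sin(t) * x[1],
                        np.sin(t) * x[0] + np.cos(t) * x[1]])

for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # nonexpansiveness: ||Tx - Ty|| <= ||x - y||
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
    # monotonicity of I - T: <(I - T)x - (I - T)y, x - y> >= 0
    assert np.dot((x - T(x)) - (y - T(y)), x - y) >= -1e-12
    # P_C satisfies <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2
    px, py = proj_C(x), proj_C(y)
    assert np.dot(px - py, x - y) >= np.linalg.norm(px - py) ** 2 - 1e-12
print("all sampled inequalities hold")
```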
Iterative methods for nonexpansive mappings have recently been applied to solve a convex minimization problem; see, for example, [1-5] and the references therein. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space $H$:
\[
\min_{x \in F(T)} \frac{1}{2} \langle Ax, x \rangle - \langle x, b \rangle, \tag{1.9}
\]
where $b$ is a given point in $H$. In [5], it is proved that the sequence $\{x_n\}$ defined by the iterative method below, with the initial guess $x_0 \in H$ chosen arbitrarily,
\[
x_{n+1} = (I - \alpha_n A) T x_n + \alpha_n b, \quad n \ge 0, \tag{1.10}
\]
converges strongly to the unique solution of the minimization problem (1.9) provided the sequence $\{\alpha_n\}$ of parameters satisfies certain appropriate conditions.
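The following is a hedged sketch of the scheme (1.10) on a toy problem; the mappings are our own illustrative choices: $T$ is the projection onto the $x$-axis of $\mathbb{R}^2$ (so $F(T)$ is the axis), and $A$, $b$ are arbitrary. The closed-form minimizer over $F(T)$ is used only to check the limit.

```python
import numpy as np

A = np.array([[2.0, 0.3],
              [0.3, 1.0]])                 # strongly positive (symmetric positive definite)
b = np.array([1.0, -0.5])
T = lambda x: np.array([x[0], 0.0])        # projection onto the x-axis; F(T) = the x-axis

x = np.array([5.0, 3.0])                   # arbitrary initial guess
for n in range(1, 200001):
    an = 1.0 / (n + 2)                     # alpha_n -> 0, sum alpha_n = infinity
    Tx = T(x)
    x = Tx - an * (A @ Tx) + an * b        # x_{n+1} = (I - alpha_n A) T x_n + alpha_n b

# On F(T) = {(t, 0)}, the objective is A[0,0] t^2 / 2 - b[0] t, minimized at t = b[0] / A[0,0].
print(x, "expected:", np.array([b[0] / A[0, 0], 0.0]))
```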
On the other hand, Moudafi [6] introduced the viscosity approximation method for nonexpansive mappings (see [7] for further developments in both Hilbert and Banach spaces). Starting with an arbitrary initial $x_0 \in H$, define a sequence $\{x_n\}$ recursively by
\[
x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) T x_n, \quad n \ge 0, \tag{1.11}
\]
where $\{\alpha_n\}$ is a sequence in $(0, 1)$. It is proved in [6, 7] that under certain appropriate conditions imposed on $\{\alpha_n\}$, the sequence $\{x_n\}$ generated by (1.11) converges strongly to the unique solution $x^*$ in $F(T)$ of the variational inequality
\[
\langle (I - f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T). \tag{1.12}
\]
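A minimal sketch of the viscosity iteration (1.11), again with illustrative choices of ours ($T$ = projection onto the $x$-axis, $f$ a $\tfrac{1}{2}$-contraction); the limit is checked against the fixed point of $P_{F(T)} \circ f$:

```python
import numpy as np

T = lambda x: np.array([x[0], 0.0])              # nonexpansive; F(T) = the x-axis
f = lambda x: 0.5 * x + np.array([1.0, 2.0])     # a 0.5-contraction

x = np.array([-4.0, 7.0])
for n in range(1, 100001):
    an = 1.0 / (n + 1)
    x = an * f(x) + (1.0 - an) * T(x)            # x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) T x_n

# The limit is the fixed point of P_{F(T)} o f: solving t = 0.5 t + 1 on the axis gives t = 2.
print(x, "expected:", np.array([2.0, 0.0]))
```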
In 2006, Marino and Xu [8] introduced a general iterative method for nonexpansive mappings. Starting with an arbitrary initial $x_0 \in H$, define a sequence $\{x_n\}$ recursively by
\[
x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n A) T x_n, \quad n \ge 0. \tag{1.13}
\]
They proved that if the sequence $\{\alpha_n\}$ of parameters satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.13) converges strongly to the unique solution $x^*$ of the variational inequality
\[
\langle (A - \gamma f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T), \tag{1.14}
\]
which is the optimality condition for the minimization problem
\[
\min_{x \in F(T)} \frac{1}{2} \langle Ax, x \rangle - h(x), \tag{1.15}
\]
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).
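A sketch of the general iterative method (1.13) on the same toy data as above (all specific choices are ours). On the axis $F(T)$, the limit of (1.13) is characterized by the first component of $(A - \gamma f)x^*$ vanishing, which gives a closed form to test against:

```python
import numpy as np

A = np.array([[2.0, 0.3],
              [0.3, 1.0]])                       # strongly positive; smallest eigenvalue ~ 0.92
T = lambda x: np.array([x[0], 0.0])              # nonexpansive; F(T) = the x-axis
f = lambda x: 0.5 * x + np.array([1.0, 2.0])     # rho = 0.5
gamma = 1.0                                      # gamma < bar_gamma / rho ~ 1.83

x = np.array([3.0, -2.0])
for n in range(1, 200001):
    an = 1.0 / (n + 2)
    Tx = T(x)
    x = an * gamma * f(x) + Tx - an * (A @ Tx)   # x_{n+1} = a_n gamma f(x_n) + (I - a_n A) T x_n

# VI (1.14) on the axis: 2t - gamma (0.5 t + 1) = 0, so t = 2/3 for gamma = 1.
print(x, "expected:", np.array([2.0 / 3.0, 0.0]))
```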
In 2010, Yao et al. [9] introduced an iterative algorithm for solving some hierarchical fixed point problems: starting with an arbitrary initial guess $x_0 \in C$, define a sequence $\{x_n\}$ iteratively by
\[
y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C[\alpha_n f(x_n) + (1 - \alpha_n) T y_n], \quad n \ge 0. \tag{1.16}
\]
They proved that if the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ of parameters satisfy appropriate conditions, then the sequence $\{x_n\}$ generated by (1.16) converges strongly to the unique solution $x^*$ in $F(T)$ of the variational inequality
\[
\langle (I - f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T). \tag{1.17}
\]
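A sketch of the two-step algorithm (1.16); $C$, $T$, $S$, $f$, and the parameter sequences below are illustrative assumptions chosen so that $\beta_n / \alpha_n \to 0$:

```python
import numpy as np

proj_C = lambda x: np.clip(x, -10.0, 10.0)       # C = [-10, 10]^2
T = lambda x: np.array([x[0], 0.0])              # F(T) = the x-axis segment in C
S = lambda x: -x                                 # nonexpansive on C
f = lambda x: 0.5 * x + np.array([1.0, 2.0])     # 0.5-contraction mapping C into C

x = np.array([9.0, -9.0])
for n in range(1, 200001):
    an, bn = (n + 1) ** -0.5, (n + 1) ** -0.9    # beta_n / alpha_n -> 0
    y = bn * S(x) + (1.0 - bn) * x
    x = proj_C(an * f(x) + (1.0 - an) * T(y))

# Limit solves x* = P_{F(T)} f(x*): t = 0.5 t + 1 gives t = 2.
print(x, "expected:", np.array([2.0, 0.0]))
```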
In this paper, we combine the general iterative method (1.13) with the iterative algorithm (1.16) and consider the following iterative algorithm:
\[
y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C[\alpha_n \gamma f(x_n) + (I - \alpha_n A) T y_n], \quad n \ge 0. \tag{1.18}
\]
We will prove in Section 3 that if the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ of parameters satisfy appropriate conditions, then the sequence $\{x_n\}$ generated by (1.18) converges strongly to the unique solution $x^*$ in $F(T)$ of the following variational inequality:
\[
\langle (A - \gamma f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T). \tag{1.19}
\]
In particular, if we take $A = I$ and $\gamma = 1$, then, under suitable assumptions on the parameters, the sequence $\{x_n\}$ generated by (1.18) converges strongly to the unique solution $x^*$ in $F(T)$ of the following variational inequality:
\[
\langle (I - f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T). \tag{1.20}
\]
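A minimal numerical sketch of the proposed algorithm (1.18), combining the projection step of (1.16) with the operator $A$ and constant $\gamma$ of (1.13); all concrete data are illustrative assumptions of ours:

```python
import numpy as np

proj_C = lambda x: np.clip(x, -10.0, 10.0)       # C = [-10, 10]^2
A = np.array([[2.0, 0.3],
              [0.3, 1.0]])                       # strongly positive
T = lambda x: np.array([x[0], 0.0])              # F(T) = the x-axis segment in C
S = lambda x: -x                                 # nonexpansive
f = lambda x: 0.5 * x + np.array([1.0, 2.0])     # rho = 0.5
gamma = 1.0                                      # gamma < bar_gamma / rho

x = np.array([8.0, 8.0])
for n in range(1, 200001):
    an, bn = (n + 1) ** -0.5, (n + 1) ** -0.9
    y = bn * S(x) + (1.0 - bn) * x
    Ty = T(y)
    x = proj_C(an * gamma * f(x) + Ty - an * (A @ Ty))   # algorithm (1.18)

# Limit solves <(A - gamma f) x*, x - x*> >= 0 on F(T): 2t - (0.5 t + 1) = 0, t = 2/3.
print(x, "expected:", np.array([2.0 / 3.0, 0.0]))
```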
Our results improve and extend the recent results of Yao et al. [9] and of several other authors. Furthermore, we give an example which supports our main theorem in the last part.
2. Preliminaries
This section collects some lemmas which will be used in the proofs of the main results in the next section. Some of them are known; others are not hard to derive.
Lemma 2.1 (Browder [10]). Let $H$ be a Hilbert space, $C$ a closed convex subset of $H$, and $T : C \to C$ a nonexpansive mapping with $F(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$; in particular, if $y = 0$, then $x \in F(T)$.
Lemma 2.2. Let $x \in H$ and $z \in C$ be any points. Then one has the following: (1) $z = P_C x$ if and only if there holds the relation:
\[
\langle x - z, y - z \rangle \le 0 \quad \forall y \in C.
\]
(2) $z = P_C x$ if and only if there holds the relation:
\[
\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2 \quad \forall y \in C.
\]
(3) There holds the relation:
\[
\langle P_C x - P_C y, x - y \rangle \ge \|P_C x - P_C y\|^2 \quad \forall x, y \in H.
\]
Consequently, $P_C$ is nonexpansive and monotone.
Lemma 2.3 (Marino and Xu [8]). Let $H$ be a Hilbert space, $C$ a closed convex subset of $H$, $f : H \to H$ a contraction with coefficient $0 < \rho < 1$, and $T : C \to C$ a nonexpansive mapping. Let $A$ be a strongly positive linear bounded operator on $H$ with coefficient $\bar{\gamma} > 0$. Then, for $0 < \gamma < \bar{\gamma}/\rho$ and for all $x, y \in H$: (1) the mapping $I - f$ is strongly monotone with coefficient $1 - \rho$, that is,
\[
\langle x - y, (I - f)x - (I - f)y \rangle \ge (1 - \rho)\|x - y\|^2;
\]
(2) the mapping $A - \gamma f$ is strongly monotone with coefficient $\bar{\gamma} - \gamma\rho$, that is,
\[
\langle x - y, (A - \gamma f)x - (A - \gamma f)y \rangle \ge (\bar{\gamma} - \gamma\rho)\|x - y\|^2.
\]
Lemma 2.4 (Xu [4]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
\[
a_{n+1} \le (1 - \gamma_n) a_n + \delta_n, \quad n \ge 0,
\]
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that (1) $\sum_{n=0}^{\infty} \gamma_n = \infty$; (2) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=0}^{\infty} |\delta_n| < \infty$. Then $\lim_{n \to \infty} a_n = 0$.
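As a quick sanity check (our addition), the following sketch iterates the recursion of Lemma 2.4 with equality for one concrete, admissible choice of the sequences; the iterate decays to zero, as the lemma guarantees:

```python
# a_{n+1} = (1 - g_n) a_n + d_n with sum g_n = infinity and d_n / g_n -> 0 forces a_n -> 0.
a = 1.0
for n in range(1, 10**6 + 1):
    g = (n + 1) ** -0.5          # g_n in (0, 1), sum g_n = infinity
    d = (n + 1) ** -1.0          # d_n / g_n = (n + 1)^{-1/2} -> 0
    a = (1.0 - g) * a + d
print(a)                         # ~ 1e-3: decays to 0, slowly, as the lemma only claims the limit
```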
Lemma 2.5 (Marino and Xu [8]). Assume $A$ is a strongly positive linear bounded operator on a Hilbert space $H$ with coefficient $\bar{\gamma} > 0$ and $0 < \rho \le \|A\|^{-1}$. Then $\|I - \rho A\| \le 1 - \rho\bar{\gamma}$.
Lemma 2.6 (Acedo and Xu [11]). Let $C$ be a closed convex subset of $H$. Let $\{x_n\}$ be a bounded sequence in $H$. Assume that (1) the weak $\omega$-limit set $\omega_w(x_n)$ is contained in $C$; (2) for each $z \in C$, $\lim_{n \to \infty} \|x_n - z\|$ exists. Then $\{x_n\}$ is weakly convergent to a point in $C$.
Notation. We use $\to$ for strong convergence and $\rightharpoonup$ for weak convergence.
3. Main Results
Theorem 3.1. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to C$ be a $\rho$-contraction with $\rho \in (0, 1)$. Let $S, T : C \to C$ be two nonexpansive mappings with $F(T) \neq \emptyset$. Let $A$ be a strongly positive linear bounded operator on $H$ with coefficient $\bar{\gamma} > 0$. Let $\{\alpha_n\}$ and $\{\beta_n\}$ be two sequences in $(0, 1)$ and let $0 < \gamma < \bar{\gamma}/\rho$. Starting with an arbitrary initial guess $x_0 \in C$, let $\{x_n\}$ be the sequence generated by
\[
y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C[\alpha_n \gamma f(x_n) + (I - \alpha_n A) T y_n], \quad n \ge 0. \tag{3.1}
\]
Suppose that the following conditions are satisfied:
(C1) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(C2) $\lim_{n \to \infty} \beta_n / \alpha_n = 0$;
(C3) $\sum_{n=0}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$ and $\sum_{n=0}^{\infty} |\beta_{n+1} - \beta_n| < \infty$; or
(C4) $\lim_{n \to \infty} |\alpha_{n+1} - \alpha_n| / \alpha_{n+1} = 0$ and $\lim_{n \to \infty} |\beta_{n+1} - \beta_n| / \alpha_{n+1} = 0$.
Then the sequence $\{x_n\}$ converges strongly to a point $x^* \in F(T)$, which is the unique solution of the variational inequality:
\[
\langle (A - \gamma f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T). \tag{3.2}
\]
Equivalently, one has $x^* = P_{F(T)}(I - A + \gamma f)x^*$.
Proof. We first show the uniqueness of the solution of the variational inequality (3.2), which is indeed a consequence of the strong monotonicity of $A - \gamma f$. Suppose $x^* \in F(T)$ and $\tilde{x} \in F(T)$ are both solutions to (3.2); then $\langle (A - \gamma f)x^*, \tilde{x} - x^* \rangle \ge 0$ and $\langle (A - \gamma f)\tilde{x}, x^* - \tilde{x} \rangle \ge 0$. It follows that
\[
\langle (A - \gamma f)x^* - (A - \gamma f)\tilde{x}, \tilde{x} - x^* \rangle \ge 0. \tag{3.3}
\]
The strong monotonicity of $A - \gamma f$ (Lemma 2.3) implies that $x^* = \tilde{x}$, and the uniqueness is proved. Next, we prove that the sequence $\{x_n\}$ is bounded. Since $\alpha_n \to 0$ and $\beta_n / \alpha_n \to 0$ by conditions (C1) and (C2), respectively, we can assume, without loss of generality, that $\alpha_n \le \|A\|^{-1}$ and $\beta_n \le \alpha_n$ for all $n \ge 0$. Take $p \in F(T)$; from (3.1), we have
\[
\|y_n - p\| \le \beta_n \|S x_n - p\| + (1 - \beta_n)\|x_n - p\| \le \|x_n - p\| + \beta_n \|Sp - p\|. \tag{3.4}
\]
Since $\alpha_n \le \|A\|^{-1}$, by Lemma 2.5 and $\beta_n \le \alpha_n$, we note that
\[
\|x_{n+1} - p\| \le \alpha_n \gamma \|f(x_n) - f(p)\| + \alpha_n \|\gamma f(p) - Ap\| + (1 - \alpha_n \bar{\gamma})\|y_n - p\|
\le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\|x_n - p\| + \alpha_n\big(\|\gamma f(p) - Ap\| + \|Sp - p\|\big). \tag{3.5}
\]
By induction, we can obtain
\[
\|x_n - p\| \le \max\Big\{\|x_0 - p\|, \ \frac{\|\gamma f(p) - Ap\| + \|Sp - p\|}{\bar{\gamma} - \gamma\rho}\Big\}, \quad n \ge 0, \tag{3.6}
\]
which implies that the sequence $\{x_n\}$ is bounded and so are the sequences $\{y_n\}$, $\{Sx_n\}$, and $\{Ty_n\}$. Set $v_n := \alpha_n \gamma f(x_n) + (I - \alpha_n A) T y_n$, so that $x_{n+1} = P_C v_n$. We get
\[
\|x_{n+1} - x_n\| = \|P_C v_n - P_C v_{n-1}\| \le \|v_n - v_{n-1}\|. \tag{3.7}
\]
It follows that
\[
\|v_n - v_{n-1}\| \le \alpha_n \gamma\rho \|x_n - x_{n-1}\| + (1 - \alpha_n \bar{\gamma})\|y_n - y_{n-1}\| + |\alpha_n - \alpha_{n-1}|\big(\gamma\|f(x_{n-1})\| + \|ATy_{n-1}\|\big). \tag{3.8}
\]
By (3.7) and (3.8), we get
\[
\|x_{n+1} - x_n\| \le \alpha_n \gamma\rho \|x_n - x_{n-1}\| + (1 - \alpha_n \bar{\gamma})\|y_n - y_{n-1}\| + |\alpha_n - \alpha_{n-1}|\big(\gamma\|f(x_{n-1})\| + \|ATy_{n-1}\|\big). \tag{3.9}
\]
From (3.1), we obtain
\[
\|y_n - y_{n-1}\| \le \|x_n - x_{n-1}\| + M|\beta_n - \beta_{n-1}|, \tag{3.10}
\]
where $M$ is a constant such that
\[
M \ge \max\Big\{\sup_{n \ge 1}\|S x_n - x_n\|, \ \sup_{n \ge 1}\big(\gamma\|f(x_n)\| + \|ATy_n\|\big)\Big\}. \tag{3.11}
\]
Substituting (3.10) into (3.9), we obtain
\[
\|x_{n+1} - x_n\| \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\|x_n - x_{n-1}\| + M\big(|\alpha_n - \alpha_{n-1}| + |\beta_n - \beta_{n-1}|\big). \tag{3.12}
\]
At the same time, we can write (3.12) as
\[
\|x_{n+1} - x_n\| \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\|x_n - x_{n-1}\| + (\bar{\gamma} - \gamma\rho)\alpha_n \cdot \frac{M\big(|\alpha_n - \alpha_{n-1}| + |\beta_n - \beta_{n-1}|\big)}{(\bar{\gamma} - \gamma\rho)\alpha_n}. \tag{3.13}
\]
From (3.12), (C3), and Lemma 2.4, or from (3.13), (C4), and Lemma 2.4, we can deduce that $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$, respectively. From (3.1), we have
\[
\|x_{n+1} - Ty_n\| = \|P_C[\alpha_n \gamma f(x_n) + (I - \alpha_n A)Ty_n] - P_C Ty_n\| \le \alpha_n \|\gamma f(x_n) - ATy_n\|. \tag{3.14}
\]
Notice that $\alpha_n \to 0$, $\|x_{n+1} - x_n\| \to 0$, and $\|Ty_n - Tx_n\| \le \|y_n - x_n\| = \beta_n \|Sx_n - x_n\| \to 0$, so we obtain
\[
\lim_{n \to \infty}\|x_n - Tx_n\| = 0. \tag{3.15}
\]
Next, we prove
\[
\limsup_{n \to \infty}\langle \gamma f(x^*) - Ax^*, x_n - x^* \rangle \le 0, \tag{3.16}
\]
where $x^* = P_{F(T)}(I - A + \gamma f)x^*$ is the unique solution of the variational inequality (3.2). Since the sequence $\{x_n\}$ is bounded, we can take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
\[
\limsup_{n \to \infty}\langle \gamma f(x^*) - Ax^*, x_n - x^* \rangle = \lim_{k \to \infty}\langle \gamma f(x^*) - Ax^*, x_{n_k} - x^* \rangle
\]
and $x_{n_k} \rightharpoonup \tilde{x}$. From (3.15) and by Lemma 2.1, it follows that $\tilde{x} \in F(T)$. Hence, by Lemma 2.2(1),
\[
\limsup_{n \to \infty}\langle \gamma f(x^*) - Ax^*, x_n - x^* \rangle = \langle (I - A + \gamma f)x^* - x^*, \tilde{x} - x^* \rangle \le 0. \tag{3.17}
\]
Now, by Lemma 2.2(1), we observe that
\[
\|x_{n+1} - x^*\|^2 \le \langle \alpha_n \gamma f(x_n) + (I - \alpha_n A)Ty_n - x^*, \ x_{n+1} - x^* \rangle, \tag{3.18}
\]
and so
\[
\|x_{n+1} - x^*\|^2 \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\|x_n - x^*\|\,\|x_{n+1} - x^*\| + \alpha_n \langle \gamma f(x^*) - Ax^*, x_{n+1} - x^* \rangle + \beta_n \|Sx_n - x_n\|\,\|x_{n+1} - x^*\|. \tag{3.19}
\]
Hence, it follows that
\[
\|x_{n+1} - x^*\|^2 \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\|x_n - x^*\|^2 + \alpha_n\Big(2\langle \gamma f(x^*) - Ax^*, x_{n+1} - x^* \rangle + \frac{2\beta_n}{\alpha_n} M_1\Big), \tag{3.20}
\]
where $M_1$ is a constant with $M_1 \ge \sup_{n \ge 0}\|Sx_n - x_n\|\,\|x_{n+1} - x^*\|$. We observe that, by (3.16) and condition (C2),
\[
\limsup_{n \to \infty}\Big(2\langle \gamma f(x^*) - Ax^*, x_{n+1} - x^* \rangle + \frac{2\beta_n}{\alpha_n} M_1\Big) \le 0. \tag{3.21}
\]
Thus, applying Lemma 2.4 to (3.20) with $a_n = \|x_n - x^*\|^2$ and $\gamma_n = (\bar{\gamma} - \gamma\rho)\alpha_n$, we conclude that $x_n \to x^*$ as $n \to \infty$. This completes the proof.
From Theorem 3.1, we can deduce the following interesting corollary.
Corollary 3.2 (Yao et al. [9]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to H$ be a $\rho$-contraction (possibly nonself) with $\rho \in (0, 1)$. Let $S, T : C \to C$ be two nonexpansive mappings with $F(T) \neq \emptyset$. Let $\{\alpha_n\}$ and $\{\beta_n\}$ be two sequences in $(0, 1)$. Starting with an arbitrary initial guess $x_0 \in C$, let $\{x_n\}$ be the sequence generated by
\[
y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C[\alpha_n f(x_n) + (1 - \alpha_n) T y_n], \quad n \ge 0. \tag{3.22}
\]
Suppose that the conditions (C1), (C2), and (C3) or (C4) of Theorem 3.1 are satisfied. Then the sequence $\{x_n\}$ converges strongly to a point $x^* \in F(T)$, which is the unique solution of the variational inequality:
\[
\langle (I - f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T). \tag{3.23}
\]
Equivalently, one has $x^* = P_{F(T)} f(x^*)$. In particular, if one takes $f = 0$, then the sequence $\{x_n\}$ converges in norm to the minimum-norm fixed point of $T$; namely, the point $x^*$ is the unique solution of the quadratic minimization problem:
\[
x^* = \arg\min_{x \in F(T)} \|x\|^2. \tag{3.24}
\]
Proof. As a matter of fact, it suffices to take $A = I$ and $\gamma = 1$ in Theorem 3.1. This completes the proof.
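The minimum-norm case $f = 0$ admits a very short sketch; here $T$ is chosen (our assumption) as the projection onto the line $\{y = 1\}$ in $\mathbb{R}^2$, whose minimum-norm fixed point is $(0, 1)$:

```python
import numpy as np

proj_C = lambda x: np.clip(x, -5.0, 5.0)          # C = [-5, 5]^2
T = lambda x: np.array([x[0], 1.0])               # projection onto the line y = 1; F(T) = that line
S = lambda x: x                                   # a (trivial) nonexpansive mapping

x = np.array([4.0, -4.0])
for n in range(1, 100001):
    an, bn = (n + 1) ** -0.5, (n + 1) ** -0.9
    y = bn * S(x) + (1.0 - bn) * x
    x = proj_C((1.0 - an) * T(y))                 # (3.22) with f = 0

# The minimum-norm fixed point of T is the point of the line y = 1 closest to the origin.
print(x, "expected:", np.array([0.0, 1.0]))
```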
Under different conditions on the data, we obtain the following result.
Theorem 3.3. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to H$ be a $\rho$-contraction (possibly nonself) with $\rho \in (0, 1)$. Let $S, T : C \to C$ be two nonexpansive mappings with $F(T) \neq \emptyset$. Let $A$ be a strongly positive linear bounded operator on $H$ with coefficient $\bar{\gamma} > 0$ and let $0 < \gamma < \bar{\gamma}/\rho$. Let $\{\alpha_n\}$ and $\{\beta_n\}$ be two sequences in $(0, 1)$. Starting with an arbitrary initial guess $x_0 \in C$, let $\{x_n\}$ be the sequence generated by
\[
y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C[\alpha_n \gamma f(x_n) + (I - \alpha_n A) T y_n], \quad n \ge 0. \tag{3.26}
\]
Suppose that the following conditions are satisfied:
(C1) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(C2) $\lim_{n \to \infty} \beta_n / \alpha_n = 0$;
(C5) $\lim_{n \to \infty} \dfrac{|\alpha_{n+1} - \alpha_n| + |\beta_{n+1} - \beta_n|}{\alpha_{n+1}\beta_{n+1}} = 0$;
(C6) there exists a constant $K > 0$ such that $\dfrac{1}{\alpha_n}\Big|\dfrac{1}{\beta_n} - \dfrac{1}{\beta_{n-1}}\Big| \le K$.
Then the sequence $\{x_n\}$ converges strongly to a point $x^* \in F(T)$, which is the unique solution of the variational inequality:
\[
\langle (A - \gamma f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T). \tag{3.27}
\]
Proof. First of all, we show that (3.27) has a unique solution. Indeed, let $x^*$ and $\tilde{x}$ be two solutions. Then
\[
\langle (A - \gamma f)x^*, \tilde{x} - x^* \rangle \ge 0. \tag{3.28}
\]
Analogously, we have
\[
\langle (A - \gamma f)\tilde{x}, x^* - \tilde{x} \rangle \ge 0. \tag{3.29}
\]
Adding (3.28) and (3.29), by Lemma 2.3, we obtain
\[
(\bar{\gamma} - \gamma\rho)\|x^* - \tilde{x}\|^2 \le \langle (A - \gamma f)x^* - (A - \gamma f)\tilde{x}, x^* - \tilde{x} \rangle \le 0, \tag{3.30}
\]
and so $x^* = \tilde{x}$. From (C1) and (C2), we can assume, without loss of generality, that $\alpha_n \le \|A\|^{-1}$ and $\beta_n \le \alpha_n$ for all $n \ge 0$. By an argument similar to that in Theorem 3.1, we have, for $p \in F(T)$,
\[
\|x_{n+1} - p\| \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\|x_n - p\| + \alpha_n\big(\|\gamma f(p) - Ap\| + \|Sp - p\|\big). \tag{3.31}
\]
By induction, we obtain
\[
\|x_n - p\| \le \max\Big\{\|x_0 - p\|, \ \frac{\|\gamma f(p) - Ap\| + \|Sp - p\|}{\bar{\gamma} - \gamma\rho}\Big\}, \quad n \ge 0, \tag{3.32}
\]
which implies that the sequence $\{x_n\}$ is bounded. Since (C5) implies (C4), from Theorem 3.1 we can deduce $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$. From (3.1), dividing (3.12) by $\beta_n$, we note that
\[
\frac{\|x_{n+1} - x_n\|}{\beta_n} \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\frac{\|x_n - x_{n-1}\|}{\beta_n} + M\,\frac{|\alpha_n - \alpha_{n-1}| + |\beta_n - \beta_{n-1}|}{\beta_n}. \tag{3.33}
\]
Hence, it follows that
\[
\frac{\|x_{n+1} - x_n\|}{\beta_n} \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\frac{\|x_n - x_{n-1}\|}{\beta_{n-1}} + \|x_n - x_{n-1}\|\Big|\frac{1}{\beta_n} - \frac{1}{\beta_{n-1}}\Big| + M\,\frac{|\alpha_n - \alpha_{n-1}| + |\beta_n - \beta_{n-1}|}{\beta_n}, \tag{3.34}
\]
and so, by (C6),
\[
\frac{\|x_{n+1} - x_n\|}{\beta_n} \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)\frac{\|x_n - x_{n-1}\|}{\beta_{n-1}} + \alpha_n\Big(K\|x_n - x_{n-1}\| + M\,\frac{|\alpha_n - \alpha_{n-1}| + |\beta_n - \beta_{n-1}|}{\alpha_n \beta_n}\Big). \tag{3.35}
\]
Set $a_n := \|x_{n+1} - x_n\| / \beta_n$ and $\delta_n := \alpha_n\big(K\|x_n - x_{n-1}\| + M(|\alpha_n - \alpha_{n-1}| + |\beta_n - \beta_{n-1}|)/(\alpha_n \beta_n)\big)$. Then, we have
\[
a_n \le \big(1 - (\bar{\gamma} - \gamma\rho)\alpha_n\big)a_{n-1} + \delta_n.
\]
From (3.12) in Theorem 3.1, (C5), and (C6), we obtain $\delta_n / \big((\bar{\gamma} - \gamma\rho)\alpha_n\big) \to 0$. This together with Lemma 2.4 and (C2) imply that
\[
\lim_{n \to \infty}\frac{\|x_{n+1} - x_n\|}{\beta_n} = 0, \quad\text{and hence}\quad \lim_{n \to \infty}\frac{\|x_{n+1} - x_n\|}{\alpha_n} = 0. \tag{3.36}
\]
From (3.36), for any $z \in F(T)$, we have, by Lemma 2.2(1),
\[
\langle x_{n+1} - \alpha_n \gamma f(x_n) - (I - \alpha_n A)Ty_n, \ x_{n+1} - z \rangle \le 0. \tag{3.37}
\]
Now, we observe that
\[
\|x_n - y_n\| = \beta_n \|Sx_n - x_n\| \to 0. \tag{3.38}
\]
By Lemmas 2.2 and 2.3, together with the monotonicity of $I - T$ and $z \in F(T)$, we obtain from (3.37) that
\[
\langle ATy_n - \gamma f(x_n), \ x_{n+1} - z \rangle \le \frac{\|x_{n+1} - x_n\|}{\alpha_n} M_2 + \frac{\beta_n}{\alpha_n} M_2 \tag{3.39}
\]
for a suitable constant $M_2 > 0$. From (C1) and (C2), we have $\alpha_n \to 0$ and $\beta_n \to 0$. Hence, from (3.1), we deduce $\|x_{n+1} - Ty_n\| \le \alpha_n\|\gamma f(x_n) - ATy_n\| \to 0$ and $\|x_n - Tx_n\| \to 0$. Therefore, since $\|x_{n+1} - x_n\| \to 0$, every weak cluster point of $\{x_n\}$ is also a strong cluster point. Note that the sequence $\{x_n\}$ is bounded; thus there exists a subsequence $\{x_{n_k}\}$ converging to a point $\hat{x}$, and $\hat{x} \in F(T)$ by Lemma 2.1. For all $z \in F(T)$, it follows from (3.39) that
\[
\langle ATy_{n_k} - \gamma f(x_{n_k}), \ x_{n_k+1} - z \rangle \le \frac{\|x_{n_k+1} - x_{n_k}\|}{\alpha_{n_k}} M_2 + \frac{\beta_{n_k}}{\alpha_{n_k}} M_2. \tag{3.40}
\]
Letting $k \to \infty$, we obtain
\[
\langle (A - \gamma f)\hat{x}, \ \hat{x} - z \rangle \le 0 \quad \forall z \in F(T), \tag{3.41}
\]
that is, $\hat{x}$ solves (3.27). By Lemma 2.6 and the fact that (3.27) has a unique solution, it follows that the whole sequence $\{x_n\}$ converges weakly to $x^* = \hat{x}$; since every weak cluster point is a strong cluster point, $x_n \to x^*$ as $n \to \infty$. This completes the proof.
From Theorem 3.3, we can deduce the following interesting corollary.
Corollary 3.4 (Yao et al. [9]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to H$ be a $\rho$-contraction (possibly nonself) with $\rho \in (0, 1)$. Let $S, T : C \to C$ be two nonexpansive mappings with $F(T) \neq \emptyset$. Let $\{\alpha_n\}$ and $\{\beta_n\}$ be two sequences in $(0, 1)$. Starting with an arbitrary initial guess $x_0 \in C$, let $\{x_n\}$ be the sequence generated by
\[
y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C[\alpha_n f(x_n) + (1 - \alpha_n) T y_n], \quad n \ge 0. \tag{3.42}
\]
Suppose that the conditions (C1), (C2), (C5), and (C6) of Theorem 3.3 are satisfied. Then the sequence $\{x_n\}$ converges strongly to a point $x^* \in F(T)$, which is the unique solution of the variational inequality:
\[
\langle (I - f)x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T). \tag{3.43}
\]
Proof. As a matter of fact, taking $A = I$ and $\gamma = 1$ in Theorem 3.3 completes the proof.
Corollary 3.5 (Yao et al. [9]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S, T : C \to C$ be two nonexpansive mappings with $F(T) \neq \emptyset$. Let $\{\alpha_n\}$ and $\{\beta_n\}$ be two sequences in $(0, 1)$. Starting with an arbitrary initial guess $x_0 \in C$, suppose $\{x_n\}$ is the sequence generated by
\[
y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C[(1 - \alpha_n) T y_n], \quad n \ge 0. \tag{3.44}
\]
Suppose that the conditions (C1), (C2), (C5), and (C6) of Theorem 3.3 are satisfied. Then the sequence $\{x_n\}$ converges strongly to a point $x^* \in F(T)$, which is the unique solution of the variational inequality:
\[
\langle x^*, x - x^* \rangle \ge 0 \quad \forall x \in F(T); \tag{3.45}
\]
that is, $x^*$ is the minimum-norm fixed point of $T$.
Proof. As a matter of fact, taking $f = 0$, $\gamma = 1$, and $A = I$ in Theorem 3.3 completes the proof.
Remark 3.6. Prototypes for the iterative parameters are, for example, $\alpha_n = n^{-\theta}$ and $\beta_n = n^{-\omega}$ (with $0 < \theta < \omega < 1$). Since $|\alpha_{n+1} - \alpha_n| \approx n^{-(1+\theta)}$ and $|\beta_{n+1} - \beta_n| \approx n^{-(1+\omega)}$, it is not difficult to prove that (C5) is satisfied for all $0 < \theta, \omega < 1$ and (C6) is satisfied if $\theta + \omega \le 1$.
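A small numerical check of this remark, assuming (C5) and (C6) in the forms stated in Theorem 3.3 above (the index shift between $n$ and $n+1$ is immaterial); the parameter pair $(\theta, \omega) = (0.3, 0.6)$ satisfies $\theta + \omega \le 1$ while $(0.6, 0.9)$ does not:

```python
import numpy as np

def check(theta, omega, N=10**6):
    n = np.arange(2, N, dtype=float)
    a, b = n ** -theta, n ** -omega
    a1, b1 = (n + 1) ** -theta, (n + 1) ** -omega
    c5 = (np.abs(a1 - a) + np.abs(b1 - b)) / (a1 * b1)   # should tend to 0
    c6 = np.abs(1.0 / b1 - 1.0 / b) / a1                 # should stay bounded
    print(theta, omega, "(C5) tail:", c5[-1], "(C6) max:", c6.max())

check(0.3, 0.6)   # theta + omega <= 1: the (C6) quantity stays bounded
check(0.6, 0.9)   # theta + omega > 1: the (C6) quantity grows without bound
```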
Remark 3.7. Our results improve and extend the results of Yao et al. [9]; their theorems are recovered by taking $A = I$ and $\gamma = 1$ in Theorems 3.1 and 3.3.
The following is an example to support Theorem 3.3.
Example 3.8. Let $H = \mathbb{R}$ with the usual inner product $\langle x, y \rangle = xy$ and norm $\|x\| = |x|$, and choose the set $C$, the mappings $f$, $T$, and $S$, the operator $A$, the constant $\gamma$, and, for every $n$, the parameter sequences $\alpha_n$ and $\beta_n$ so that the conditions (C1), (C2), (C5), and (C6) of Theorem 3.3 are satisfied.
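One admissible instance of such data (our own illustrative choices, not necessarily those of the original example) can be run as follows; since $F(T) = \{0\}$ here, the unique solution of (3.27) is $x^* = 0$:

```python
import numpy as np

# Illustrative data: H = R, C = [-2, 2], T x = sin x (nonexpansive, F(T) = {0}),
# S x = x, f x = x / 4 (rho = 1/4), A = I (bar_gamma = 1), gamma = 1 < bar_gamma / rho.
proj_C = lambda x: min(2.0, max(-2.0, x))
T, S, f = np.sin, lambda x: x, lambda x: x / 4.0

x = 1.7
for n in range(1, 100001):
    an, bn = (n + 1) ** -0.3, (n + 1) ** -0.6    # theta + omega <= 1 and beta_n / alpha_n -> 0
    y = bn * S(x) + (1.0 - bn) * x
    Ty = T(y)
    x = proj_C(an * f(x) + Ty - an * Ty)         # algorithm (3.26) with A = I, gamma = 1

print(x)   # tends to 0, the unique solution of (3.27) since F(T) = {0}
```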