Abstract
The purpose of this article is to present a general viscosity iteration process and to study its convergence, where T is a nonexpansive mapping and A is a strongly positive linear operator. If the parameter sequences satisfy appropriate conditions, then the iteration sequence converges strongly to the unique solution of a variational inequality. Meanwhile, an approximate iteration algorithm is presented which can be used to calculate the fixed point of a nonexpansive mapping and the solution of a variational inequality; an error estimate is also given. The results presented in this paper extend, generalize, and improve the results of Xu, of Marino and Xu, and of some others.
1. Introduction
Iteration methods for nonexpansive mappings have recently been applied to solve convex minimization problems; see, for example, [1–4] and the references therein. A typical problem is to minimize a quadratic function over the set of the fixed points of a nonexpansive mapping on a real Hilbert space $H$:
$$\min_{x \in F} \frac{1}{2}\langle Ax, x\rangle - \langle x, b\rangle, \tag{1.1}$$
where $F$ is the fixed point set of a nonexpansive mapping $T$ on $H$, $b$ is a given point in $H$, and $A$ is a strongly positive operator, that is, there exists a constant $\bar{\gamma} > 0$ with the property
$$\langle Ax, x\rangle \ge \bar{\gamma}\|x\|^2 \quad \text{for all } x \in H. \tag{1.2}$$
Recall that $T: H \to H$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$. Throughout the rest of this paper, we denote by $F$ the fixed point set of $T$ and assume that $F$ is nonempty. It is well known that $F$ is closed and convex (cf. [5]). In [4] (see also [2]), it is proved that the sequence $\{x_n\}$ defined by the iteration method below, with the initial guess $x_0 \in H$ chosen arbitrarily,
$$x_{n+1} = \alpha_n b + (I - \alpha_n A)Tx_n, \quad n \ge 0, \tag{1.3}$$
converges strongly to the unique solution of the minimization problem (1.1) provided the sequence $\{\alpha_n\}$ satisfies certain conditions.
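As a concrete, hypothetical illustration of this kind of scheme (the matrix $A$, the point $b$, the mapping $T$, and all numbers below are our own choices, not from the paper), one can minimize a quadratic over the fixed point set of a projection:

```python
import numpy as np

# Hedged numerical sketch (our own illustration) of a Halpern-type scheme
# in the spirit of (1.3):
#   x_{n+1} = alpha_n * b + (I - alpha_n * A) T x_n,
# which minimizes (1/2)<Ax, x> - <x, b> over Fix(T).
# T is the metric projection onto the box [0,1]^2, so Fix(T) is the box itself.

A = np.array([[2.0, 0.5], [0.5, 1.5]])   # symmetric, strongly positive
b = np.array([3.0, -1.0])

def T(x):
    return np.clip(x, 0.0, 1.0)          # nonexpansive, Fix(T) = [0,1]^2

x = np.array([5.0, 5.0])                 # arbitrary initial guess
for n in range(1, 5001):
    alpha = 1.0 / (n + 1)                # alpha_n -> 0, sum alpha_n = inf
    y = T(x)
    x = alpha * b + y - alpha * (A @ y)

# The limit satisfies the optimality condition of (1.1):
# <A x* - b, z - x*> >= 0 for every z in the box; here x* is the corner (1, 0).
```

The diminishing step $\alpha_n = 1/(n+1)$ satisfies the usual conditions $\alpha_n \to 0$ and $\sum_n \alpha_n = \infty$.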
On the other hand, Moudafi [6] introduced the viscosity approximation method for nonexpansive mappings (see [7] for further developments in both Hilbert and Banach spaces). Let $f$ be a contraction on $H$. Starting with an arbitrary initial guess $x_0 \in H$, define a sequence $\{x_n\}$ recursively by
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)Tx_n, \quad n \ge 0, \tag{1.4}$$
where $\{\alpha_n\}$ is a sequence in $(0,1)$. It is proved in [6, 7] that, under certain appropriate conditions imposed on $\{\alpha_n\}$, the sequence $\{x_n\}$ generated by (1.4) converges strongly to the unique solution $\tilde{x}$ in $F$ of the variational inequality
$$\langle (I - f)\tilde{x}, x - \tilde{x}\rangle \ge 0 \quad \text{for all } x \in F. \tag{1.5}$$
Recently (2006), Marino and Xu [2] combined the iteration method (1.3) with the viscosity approximation method (1.4) and considered the following general iteration method:
$$x_{n+1} = \alpha_n \gamma f(x_n) + (I - \alpha_n A)Tx_n, \quad n \ge 0. \tag{1.6}$$
They proved that if the sequence $\{\alpha_n\}$ of parameters satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.6) converges strongly to the unique solution $\tilde{x}$ of the variational inequality
$$\langle (A - \gamma f)\tilde{x}, x - \tilde{x}\rangle \ge 0 \quad \text{for all } x \in F, \tag{1.7}$$
which is the optimality condition for the minimization problem
$$\min_{x \in F} \frac{1}{2}\langle Ax, x\rangle - h(x), \tag{1.8}$$
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).
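The Marino–Xu scheme is easy to experiment with numerically. In the sketch below (our own illustration; the operator $A$, the contraction $f$, the box, and all constants are hypothetical choices, not from the paper) the limit can be identified through the equivalent fixed point formulation of the variational inequality:

```python
import numpy as np

# Hedged sketch of the Marino-Xu iteration (1.6):
#   x_{n+1} = alpha_n * gamma * f(x_n) + (I - alpha_n * A) T x_n.
# T: projection onto the box [0,1]^2 (nonexpansive, Fix(T) = the box),
# f(x) = 0.5 x + u: a contraction with coefficient 0.5.  All data hypothetical.

A = np.array([[2.0, 0.5], [0.5, 1.5]])
gamma_bar = np.linalg.eigvalsh(A).min()      # strong positivity coefficient
u = np.array([-1.0, 2.0])
f = lambda z: 0.5 * z + u                    # contraction, coefficient 0.5
gamma = 1.0                                  # 0 < gamma < gamma_bar / 0.5

def T(z):
    return np.clip(z, 0.0, 1.0)

x = np.array([5.0, 5.0])
for n in range(1, 10001):
    alpha_n = 1.0 / (n + 1)                  # alpha_n -> 0, sum = inf
    y = T(x)
    x = alpha_n * gamma * f(x) + y - alpha_n * (A @ y)

# The limit solves <(A - gamma f) x~, z - x~> >= 0 for all z in Fix(T);
# for this data it works out to the corner (0, 1).
```

For a box constraint and an affine map $z \mapsto (A - \gamma f)\tilde{x}$, the variational inequality only needs to be checked at the corners, which makes the limit easy to verify.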
The purpose of this paper is to present a general viscosity iteration process $\{x_n\}$ and to study its convergence, where $T$ is a nonexpansive mapping and $A$ is a strongly positive linear operator. If the parameter sequences satisfy appropriate conditions, then the iteration sequence $\{x_n\}$ converges strongly to the unique solution of the variational inequality (1.7). Meanwhile, an approximate iteration algorithm is presented which can be used to calculate the fixed point of a nonexpansive mapping and the solution of a variational inequality; a convergence rate estimate is also given. The results presented in this paper extend, generalize, and improve the results of Xu [7], Marino and Xu [2], and some others.
2. Preliminaries
This section collects some lemmas which will be used in the proofs for the main results in the next section. Some of them are known; others are not hard to derive.
Lemma 2.1 (see [3]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in (0,1) and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that (i) $\sum_{n=0}^{\infty}\gamma_n = \infty$; (ii) $\limsup_{n\to\infty}\delta_n \le 0$, or $\sum_{n=0}^{\infty}|\gamma_n\delta_n| < \infty$. Then $\lim_{n\to\infty}a_n = 0$.
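The mechanism of this lemma can be illustrated numerically. The sketch below (our own illustration, with hypothetical sequences $\gamma_n = 1/(n+1)$, which is not summable, and $\delta_n = 1/\sqrt{n} \to 0$) runs the recursion at equality and confirms that $a_n$ is driven to zero:

```python
import math

# Numerical illustration of Lemma 2.1 (hypothetical sequences):
# a_{n+1} = (1 - g_n) a_n + g_n d_n, with sum g_n = infinity and d_n -> 0.
a = 5.0                       # arbitrary nonnegative starting value
for n in range(1, 200001):
    g = 1.0 / (n + 1)         # g_n in (0,1), series diverges
    d = 1.0 / math.sqrt(n)    # limsup d_n <= 0 (in fact d_n -> 0)
    a = (1 - g) * a + g * d

# a is now close to 0, as Lemma 2.1 predicts.
```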
Lemma 2.2 (see [5]). Let $H$ be a Hilbert space, $C$ a closed convex subset of $H$, and $T: C \to C$ a nonexpansive mapping with nonempty fixed point set $F$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$.
The following lemma is not hard to prove.
Lemma 2.3. Let $H$ be a Hilbert space, $C$ a closed convex subset of $H$, $f: H \to H$ a contraction with coefficient $0 < \alpha < 1$, and $A$ a strongly positive linear bounded operator with coefficient $\bar{\gamma} > 0$. Then, for $0 < \gamma < \bar{\gamma}/\alpha$,
$$\langle (A - \gamma f)x - (A - \gamma f)y, x - y\rangle \ge (\bar{\gamma} - \gamma\alpha)\|x - y\|^2, \quad x, y \in H.$$
That is, $A - \gamma f$ is strongly monotone with coefficient $\bar{\gamma} - \gamma\alpha$.
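The strong monotonicity estimate can be sanity-checked on random data. In the sketch below (our own illustration; the matrix, the contraction $f(x) = 0.5x + u$, and all constants are hypothetical) $\bar{\gamma}$ is the smallest eigenvalue of a symmetric positive definite matrix:

```python
import numpy as np

# Numerical check of the strong monotonicity in Lemma 2.3 (hypothetical data):
# for SPD A with smallest eigenvalue gamma_bar and a contraction
# f(x) = 0.5 x + u (coefficient alpha = 0.5), taking 0 < gamma < gamma_bar/alpha,
# A - gamma f is strongly monotone with coefficient gamma_bar - gamma*alpha.

A = np.array([[2.0, 0.5], [0.5, 1.5]])
gamma_bar = np.linalg.eigvalsh(A).min()
alpha, gamma = 0.5, 1.0                      # gamma < gamma_bar/alpha holds here
u = np.array([1.0, -2.0])
f = lambda z: alpha * z + u

rng = np.random.default_rng(0)
gaps = []
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = np.dot((A @ x - gamma * f(x)) - (A @ y - gamma * f(y)), x - y)
    rhs = (gamma_bar - gamma * alpha) * np.dot(x - y, x - y)
    gaps.append(lhs - rhs)

min_gap = min(gaps)   # should be >= 0 up to rounding
```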
Recall that the metric (nearest point) projection $P_C$ from a real Hilbert space $H$ to a closed convex subset $C$ of $H$ is defined as follows: given $x \in H$, $P_C x$ is the only point in $C$ with the property
$$\|x - P_C x\| = \inf\{\|x - y\| : y \in C\}.$$
$P_C$ is characterized as follows.
Lemma 2.4. Let $C$ be a closed convex subset of a real Hilbert space $H$. Given $x \in H$ and $z \in C$. Then $z = P_C x$ if and only if there holds the inequality
$$\langle x - z, z - y\rangle \ge 0 \quad \text{for all } y \in C.$$
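For a box in $\mathbb{R}^2$ the projection is coordinatewise clipping, and the characterization of Lemma 2.4 can be verified directly (our own illustration; the set $C = [0,1]^2$ is a hypothetical choice):

```python
import numpy as np

# Numerical check of Lemma 2.4 for C = [0,1]^2: the projection P_C is
# coordinatewise clipping, and z = P_C(x) satisfies
# <x - z, z - y> >= 0 for every y in C.

def P_C(x):
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(1)
vals = []
for _ in range(1000):
    x = rng.normal(scale=3.0, size=2)     # arbitrary point of H = R^2
    z = P_C(x)
    y = rng.uniform(0.0, 1.0, size=2)     # arbitrary point of C
    vals.append(np.dot(x - z, z - y))

min_val = min(vals)   # should be >= 0
```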
Lemma 2.5. Assume that $A$ is a strongly positive linear bounded operator on a Hilbert space $H$ with coefficient $\bar{\gamma} > 0$ and $0 < \rho \le \|A\|^{-1}$. Then $\|I - \rho A\| \le 1 - \rho\bar{\gamma}$.
Proof. Recall that a standard result in functional analysis states that if $T$ is a linear bounded self-adjoint operator on $H$, then
$$\|T\| = \sup\{|\langle Tx, x\rangle| : x \in H, \|x\| = 1\}.$$
Now for $x \in H$ with $\|x\| = 1$, we see that
$$\langle (I - \rho A)x, x\rangle = 1 - \rho\langle Ax, x\rangle \ge 1 - \rho\|A\| \ge 0$$
(i.e., $I - \rho A$ is positive). It follows that
$$\|I - \rho A\| = \sup\{\langle (I - \rho A)x, x\rangle : x \in H, \|x\| = 1\} = \sup\{1 - \rho\langle Ax, x\rangle : x \in H, \|x\| = 1\} \le 1 - \rho\bar{\gamma}.$$
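In finite dimensions the conclusion of Lemma 2.5 amounts to a statement about eigenvalues, which is easy to verify on a random symmetric positive definite matrix (our own illustration; the size and scaling are hypothetical):

```python
import numpy as np

# Numerical check of Lemma 2.5: for a random SPD matrix A (self-adjoint,
# strongly positive with coefficient gamma_bar = smallest eigenvalue) and
# 0 < rho <= 1/||A||, the spectral norm of I - rho*A is at most
# 1 - rho*gamma_bar.

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = M @ M.T + 0.5 * np.eye(4)            # SPD, hence strongly positive
eigs = np.linalg.eigvalsh(A)             # ascending eigenvalues
gamma_bar, norm_A = eigs[0], eigs[-1]    # smallest / largest eigenvalue

worst = 0.0
for rho in np.linspace(1e-3, 1.0 / norm_A, 20):
    excess = np.linalg.norm(np.eye(4) - rho * A, 2) - (1 - rho * gamma_bar)
    worst = max(worst, excess)           # should never exceed ~0
```

Indeed, the eigenvalues of $I - \rho A$ are $1 - \rho\lambda_i \in [0, 1 - \rho\bar{\gamma}]$ when $\rho \le 1/\|A\|$, so the spectral norm equals $1 - \rho\bar{\gamma}$ exactly in this symmetric case.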
The following lemma is also not hard to prove by induction.
Lemma 2.6. Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that where is a nonnegative constant and are sequences in such that (i) ; (ii) . Then $\{a_n\}$ is bounded.
Notation. We use $\to$ for strong convergence and $\rightharpoonup$ for weak convergence.
3. A General Iteration Algorithm with Bounded Linear Operator
Let $H$ be a real Hilbert space, $A$ a bounded linear operator on $H$, and $T$ a nonexpansive mapping on $H$. Assume that the fixed point set $F$ of $T$ is nonempty. Since $F$ is closed and convex, the nearest point projection from $H$ onto $F$ is well defined.
Throughout the rest of this paper, we always assume that $A$ is strongly positive, that is, there exists a constant $\bar{\gamma} > 0$ such that
$$\langle Ax, x\rangle \ge \bar{\gamma}\|x\|^2 \quad \text{for all } x \in H. \tag{3.1}$$
(Note: $\bar{\gamma}$ is throughout reserved to be the constant such that (3.1) holds.)
Recall also that a contraction on $H$ is a self-mapping $f$ of $H$ such that
$$\|f(x) - f(y)\| \le \alpha\|x - y\| \quad \text{for all } x, y \in H,$$
where $\alpha \in (0,1)$ is a constant which is called the contractive coefficient of $f$.
For a given contraction with contractive coefficient , and such that and , consider a mapping on defined by . It is not hard to see that this mapping is a contraction for sufficiently small ; indeed, by Lemma 2.5 we have . Hence, it has a unique fixed point, denoted by , which uniquely solves the fixed point equation . Note that the fixed point indeed depends on as well, but we will suppress this dependence for simplicity of notation throughout the rest of this paper. We will also always use to mean a number in .
The next proposition summarizes the basic properties of .
Proposition 3.1. Let $x_t$ be defined via (3.6). (i) $\{x_t\}$ is bounded. (ii) $\lim_{t\to 0}\|x_t - Tx_t\| = 0$. (iii) $x_t$ defines a continuous surface into $H$.
Proof. Observe, for , that by Lemma 2.5.
To show (i), pick $p \in F$. We then have
It follows that
Hence $\{x_t\}$ is bounded.
(ii) The boundedness of $\{x_t\}$ implies that of $\{Tx_t\}$ and $\{f(x_t)\}$. Observing that
we have
To prove (iii), take and calculate
which implies that
as . Note that
it is obvious that
This completes the proof of Proposition 3.1.
Our first main result below shows that converges strongly as to a fixed point of which solves some variational inequality.
Theorem 3.2. One has that $x_t$ converges strongly as $t \to 0$ to a fixed point $\tilde{x}$ of $T$ which solves the variational inequality
$$\langle (A - \gamma f)\tilde{x}, x - \tilde{x}\rangle \ge 0, \quad x \in F. \tag{3.15}$$
Equivalently, one has $P_F(I - A + \gamma f)\tilde{x} = \tilde{x}$, where $P_F$ is the nearest point projection from $H$ onto $F$.
Proof. We first show the uniqueness of a solution of the variational inequality (3.15), which is indeed a consequence of the strong monotonicity of $A - \gamma f$. Suppose $\tilde{x} \in F$ and $\hat{x} \in F$ are both solutions to (3.15); then
$$\langle (A - \gamma f)\tilde{x}, \hat{x} - \tilde{x}\rangle \ge 0 \quad \text{and} \quad \langle (A - \gamma f)\hat{x}, \tilde{x} - \hat{x}\rangle \ge 0. \tag{3.16}$$
Adding up the two inequalities in (3.16) gives
$$\langle (A - \gamma f)\tilde{x} - (A - \gamma f)\hat{x}, \hat{x} - \tilde{x}\rangle \ge 0.$$
The strong monotonicity of $A - \gamma f$ (Lemma 2.3) then implies that $\tilde{x} = \hat{x}$, and the uniqueness is proved. Below we use $\tilde{x}$ to denote the unique solution of (3.15).
To prove that converges strongly to , we write, for a given ,
to derive that
It follows that
which leads to
Observe that condition (3.4) implies
as $t \to 0$. Since $\{x_t\}$ is bounded as $t \to 0$, there exists a real sequence $\{t_n\}$ in $(0,1)$ such that $t_n \to 0$ and $\{x_{t_n}\}$ converges weakly to a point $x^*$. Using Proposition 3.1 and Lemma 2.2, we see that $x^* \in F$; therefore, by (3.21), we see that $x_{t_n} \to x^*$ strongly. We next prove that $x^*$ solves the variational inequality (3.15). Since
we derive that
so that
It follows that, for ,
since $I - T$ is monotone (i.e., $\langle (I - T)x - (I - T)y, x - y\rangle \ge 0$ for all $x, y \in H$), which is due to the nonexpansivity of $T$. Now, replacing $t$ in (3.26) with $t_n$ and letting $n \to \infty$, we obtain
That is, this cluster point is a solution of (3.15) and hence, by uniqueness, it equals the unique solution. In summary, we have shown that each cluster point of $x_t$ (as $t \to 0$) equals the unique solution of (3.15); therefore, $x_t$ converges strongly to it as $t \to 0$.
The variational inequality (3.15) can be rewritten as
$$\langle (I - A + \gamma f)\tilde{x} - \tilde{x}, x - \tilde{x}\rangle \le 0, \quad x \in F.$$
This, by Lemma 2.4, is equivalent to the fixed point equation
$$P_F(I - A + \gamma f)\tilde{x} = \tilde{x}.$$
This completes the proof.
Taking $A = I$ and $\gamma = 1$ in Theorem 3.2, we get
Corollary 3.3 (see [7]). One has that $x_t$ converges strongly as $t \to 0$ to a fixed point $\tilde{x}$ of $T$ which solves the variational inequality
$$\langle (I - f)\tilde{x}, x - \tilde{x}\rangle \ge 0, \quad x \in F.$$
Equivalently, one has $P_F f(\tilde{x}) = \tilde{x}$, where $P_F$ is the nearest point projection from $H$ onto $F$.
Next we study a general iteration method as follows. The initial guess $x_0$ is selected in $H$ arbitrarily, and the $(n+1)$th iterate is recursively defined by (3.31), where the parameter sequences satisfy the following conditions: ; ; either or ; . Below is the second main result of this paper.
Theorem 3.4. Let $\{x_n\}$ be generated by Algorithm (3.31) with the sequences of parameters satisfying the conditions above. Then $\{x_n\}$ converges strongly to the point obtained in Theorem 3.2.
Proof. Since by condition , we may assume, without loss of generality, that for all .
We now observe that $\{x_n\}$ is bounded. Indeed, pick any $p \in F$ to obtain
where is a constant. By Lemma 2.6, we see that $\{x_n\}$ is bounded.
As a result, noticing
and , we obtain
But the key is to prove that
To see this, we calculate
Since
and condition holds, an application of Lemma 2.1 to (3.36) implies (3.35), which combined with (3.34), in turn, implies
Next we show that
where is obtained in Theorem 3.2.
To see this, we take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
We may also assume that . Note that by virtue of Lemma 2.2 and (3.38). It follows from the variational inequality (3.15) that
So (3.39) holds, thanks to (3.38).
Finally, we prove . To this end, we calculate
where
That is,
Since $\{x_n\}$ is bounded, by the conditions of Theorem 3.4 we get and ; this, together with (3.39), implies that
Now applying Lemma 2.1 to (3.44), we conclude the desired convergence. This completes the proof of Theorem 3.4.
If we pick , we obtain the result of Marino and Xu [2].
4. Approximate Iteration Algorithm and Error Estimate
In this section, we use the following approximate iteration algorithm, for an arbitrary initial , to calculate the fixed point of the nonexpansive mapping and the solution of the variational inequality with bounded linear operator , where and the others are as in Section 3.
Meanwhile, the point obtained in Theorem 3.2 is the unique solution of the variational inequality (3.15), and , as , where and are defined by (3.31) and (3.6), respectively.
The following lemma will be useful for establishing the convergence rate estimate.
Lemma 4.1 (Banach's contraction mapping principle). Let $X$ be a Banach space and $T$ a contraction from $X$ into itself, that is,
$$\|Tx - Ty\| \le \beta\|x - y\| \quad \text{for all } x, y \in X,$$
where $0 < \beta < 1$ is a constant. Then the Picard iterative sequence $x_{n+1} = Tx_n$, for an arbitrary initial $x_0 \in X$, converges strongly to the unique fixed point $x^*$ of $T$, and
$$\|x_n - x^*\| \le \frac{\beta^n}{1 - \beta}\|x_1 - x_0\|.$$
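The a priori bound in Lemma 4.1 can be watched in action on a simple example (our own sketch; the affine map $T(x) = Mx + c$ and all numbers are hypothetical), where the exact fixed point is available in closed form for comparison:

```python
import numpy as np

# Illustration of Lemma 4.1: an affine contraction T(x) = Mx + c on R^2
# with coefficient beta = ||M||_2 < 1.  The Picard iterates obey
#   ||x_n - x*|| <= beta^n / (1 - beta) * ||x_1 - x_0||.

M = np.array([[0.3, 0.1], [0.0, 0.4]])
c = np.array([1.0, -1.0])
T = lambda z: M @ z + c
beta = np.linalg.norm(M, 2)                      # contraction coefficient
x_star = np.linalg.solve(np.eye(2) - M, c)       # exact fixed point: x = Mx + c

x0 = np.array([10.0, -10.0])
d = np.linalg.norm(T(x0) - x0)                   # ||x_1 - x_0||
x, slack = x0.copy(), []
for n in range(1, 30):
    x = T(x)                                     # x = x_n after n steps
    bound = beta**n / (1.0 - beta) * d           # a priori estimate
    slack.append(bound - np.linalg.norm(x - x_star))

min_slack = min(slack)   # should be >= 0: the bound is never violated
```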
For the above , we define the following contractive mapping from into itself. In fact, it is not hard to see that is a contraction for sufficiently small ; indeed, by Lemma 2.5 we have, for any , that . By Lemma 4.1, there exists a unique fixed point of , and the iterative sequence converges strongly to this fixed point . Meanwhile, from (4.3) and (4.5) we obtain . On the other hand, from (3.21) we have , which leads to . Therefore, . Letting , it follows that . From inequality (4.11) together with (4.7), and letting , we get . Inequality (4.12) is precisely the error estimate for the approximate fixed point . Now, we give several special cases of inequality (4.12).
Error Estimate 1
Consider
Error Estimate 2
If , then
which can be used to estimate the error for the iterative scheme
Error Estimate 3
If , then
which can be used to estimate the error for the iterative scheme
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant 11071279.