Abstract

The purpose of this article is to present a general viscosity iteration process $\{x_n\}$, defined by $x_{n+1} = (I - \alpha_n A)Tx_n + \beta_n\gamma f(x_n) + (\alpha_n - \beta_n)x_n$, and to study the convergence of $\{x_n\}$, where $T$ is a nonexpansive mapping and $A$ is a strongly positive linear operator. If $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy appropriate conditions, then the iteration sequence $\{x_n\}$ converges strongly to the unique solution $\tilde x \in F(T)$ of the variational inequality $\langle (A - \gamma f)\tilde x, x - \tilde x\rangle \ge 0$ for all $x \in F(T)$. Meanwhile, an approximate iteration algorithm is presented which can be used to calculate fixed points of nonexpansive mappings and solutions of variational inequalities; an error estimate is also given. The results presented in this paper extend, generalize, and improve the results of Xu, of Marino and Xu, and of some others.

1. Introduction

Iteration methods for nonexpansive mappings have recently been applied to solve convex minimization problems; see, for example, [1-4] and the references therein. A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping on a real Hilbert space $H$:
$$\min_{x\in F(T)} \frac{1}{2}\langle Ax, x\rangle - \langle x, b\rangle, \tag{1.1}$$
where $F(T)$ is the fixed point set of a nonexpansive mapping $T$ on $H$, $b$ is a given point in $H$, and $A: H \to H$ is a strongly positive operator, that is, there exists a constant $\delta > 0$ with the property
$$\langle Ax, x\rangle \ge \delta\|x\|^2, \quad \forall x \in H. \tag{1.2}$$

Recall that $T: H \to H$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$. Throughout the rest of this paper, we denote by $F(T)$ the fixed point set of $T$ and assume that $F(T)$ is nonempty. It is well known that $F(T)$ is closed and convex (cf. [5]). In [4] (see also [2]), it is proved that the sequence $\{x_n\}$ defined by the iteration method below, with the initial guess $x_0$ chosen arbitrarily,
$$x_{n+1} = (I - \alpha_n A)Tx_n + \alpha_n b, \quad n \ge 0, \tag{1.3}$$
converges strongly to the unique solution of the minimization problem (1.1) provided the sequence $\{\alpha_n\}$ satisfies certain conditions.
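
To make (1.1)-(1.3) concrete, here is a minimal numerical sketch in Python. All data are illustrative choices of ours rather than anything from the cited papers: $H = \mathbb{R}^2$, $A$ a diagonal strongly positive matrix, $b$ a fixed vector, and $T$ the metric projection onto a line, so that $F(T)$ is that line and (1.1) becomes a linearly constrained quadratic program whose exact solution is available from the KKT system for comparison.

```python
import numpy as np

# Toy instance of (1.1)/(1.3): A = diag(2, 1.5) is strongly positive with
# delta = 1.5, and T projects onto the line {x : x1 + x2 = 1} (nonexpansive,
# F(T) = that line).  Illustrative data, not from the paper.
A = np.diag([2.0, 1.5])
b = np.array([1.0, 0.0])
n_vec = np.array([1.0, 1.0])  # normal vector of the constraint line

def T(x):
    # metric projection onto {x : <n_vec, x> = 1}
    return x - (n_vec @ x - 1.0) / (n_vec @ n_vec) * n_vec

x = np.array([5.0, -3.0])  # arbitrary initial guess x_0
for n in range(200000):
    alpha = 1.0 / (n + 2)  # alpha_n -> 0 and sum alpha_n = infinity
    x = (np.eye(2) - alpha * A) @ T(x) + alpha * b  # scheme (1.3)

# Exact minimizer of (1.1) from the KKT system:
# A x* + lam * n_vec = b, <n_vec, x*> = 1.
M = np.block([[A, n_vec.reshape(2, 1)], [n_vec.reshape(1, 2), np.zeros((1, 1))]])
x_star = np.linalg.solve(M, np.append(b, 1.0))[:2]
print(x, x_star)  # the iterate approaches x_star (slowly, since alpha_n ~ 1/n)
```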

On the other hand, Moudafi [6] introduced the viscosity approximation method for nonexpansive mappings (see [7] for further developments in both Hilbert and Banach spaces). Let $f$ be a contraction on $H$. Starting with an arbitrary initial guess $x_0 \in H$, define a sequence $\{x_n\}$ recursively by
$$x_{n+1} = (1 - \alpha_n)Tx_n + \alpha_n f(x_n), \quad n \ge 0, \tag{1.4}$$
where $\{\alpha_n\}$ is a sequence in $(0,1)$. It is proved [6, 7] that, under certain appropriate conditions imposed on $\{\alpha_n\}$, the sequence $\{x_n\}$ generated by (1.4) converges strongly to the unique solution $x^*$ in $F(T)$ of the variational inequality
$$\langle (I - f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in F(T). \tag{1.5}$$

Recently (2006), Marino and Xu [2] combined the iteration method (1.3) with the viscosity approximation method (1.4) and considered the following general iteration method:
$$x_{n+1} = (I - \alpha_n A)Tx_n + \alpha_n\gamma f(x_n), \quad n \ge 0. \tag{1.6}$$
They proved that if the sequence $\{\alpha_n\}$ of parameters satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.6) converges strongly to the unique solution of the variational inequality
$$\langle (A - \gamma f)x^*, x - x^*\rangle \ge 0, \quad \forall x \in F(T), \tag{1.7}$$
which is the optimality condition for the minimization problem
$$\min_{x\in F(T)} \frac{1}{2}\langle Ax, x\rangle - h(x), \tag{1.8}$$
where $h$ is a potential function for $\gamma f$ (i.e., $h'(x) = \gamma f(x)$ for $x \in H$).

The purpose of this paper is to present a general viscosity iteration process $\{x_n\}$, defined by
$$x_{n+1} = (I - \alpha_n A)Tx_n + \beta_n\gamma f(x_n) + (\alpha_n - \beta_n)x_n, \tag{1.9}$$
and to study the convergence of $\{x_n\}$, where $T$ is a nonexpansive mapping and $A$ is a strongly positive linear operator. If $\{\alpha_n\}$ and $\{\beta_n\}$ satisfy appropriate conditions, then the iteration sequence $\{x_n\}$ converges strongly to the unique solution $\tilde x \in F(T)$ of the variational inequality (1.7). Meanwhile, an approximate iteration algorithm
$$x_{n+1} = (I - sA)Tx_n + t\gamma f(x_n) + (s - t)x_n \tag{1.10}$$
is presented which can be used to calculate fixed points of nonexpansive mappings and solutions of variational inequalities; a convergence rate estimate is also given. The results presented in this paper extend, generalize, and improve the results of Xu [7], Marino and Xu [2], and some others.

2. Preliminaries

This section collects some lemmas which will be used in the proofs for the main results in the next section. Some of them are known; others are not hard to derive.

Lemma 2.1 (see [3]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \lambda_n)a_n + \delta_n, \tag{2.1}$$
where $\{\lambda_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence in $(-\infty, +\infty)$ such that
(i) $\sum_{n=1}^{\infty}\lambda_n = \infty$;
(ii) $\limsup_{n\to\infty}\delta_n/\lambda_n \le 0$, or $\sum_{n=1}^{\infty}|\delta_n| < \infty$.
Then $\lim_{n\to\infty}a_n = 0$.
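
A quick numerical check of Lemma 2.1, with illustrative sequences of our own choosing: $\lambda_n = 1/(n+1)$ satisfies (i), and $\delta_n = 1/(n+1)^2$ is absolutely summable, so the lemma predicts $a_n \to 0$.

```python
# Numerical check of Lemma 2.1 (illustrative sequences, not from [3]):
# lambda_n = 1/(n+1) has a divergent sum and delta_n = 1/(n+1)^2 is
# absolutely summable, so a_n should tend to 0.
a = 1.0
for n in range(1, 100001):
    a = (1 - 1.0 / (n + 1)) * a + 1.0 / (n + 1) ** 2
    if n in (10, 1000, 100000):
        print(n, a)  # a_n decays toward 0
```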

Lemma 2.2 (see [5]). Let $H$ be a Hilbert space, $K$ a closed convex subset of $H$, and $T: K \to K$ a nonexpansive mapping with nonempty fixed point set $F(T)$. If $\{x_n\}$ is a sequence in $K$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$.

The following lemma is not hard to prove.

Lemma 2.3. Let $H$ be a Hilbert space, $K$ a closed convex subset of $H$, $f: H \to H$ a contraction with coefficient $0 < \alpha < 1$, and $A$ a strongly positive bounded linear operator with coefficient $\delta > 0$. Then, for $0 < \gamma < \delta/\alpha$,
$$\langle x - y, (A - \gamma f)x - (A - \gamma f)y\rangle \ge (\delta - \gamma\alpha)\|x - y\|^2, \quad \forall x, y \in H. \tag{2.2}$$
That is, $A - \gamma f$ is strongly monotone with coefficient $\delta - \gamma\alpha$.
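
Indeed, since $A$ is strongly positive and $f$ is a contraction with coefficient $\alpha$, the Cauchy-Schwarz inequality gives
$$\langle x - y, (A - \gamma f)x - (A - \gamma f)y\rangle = \langle x - y, A(x - y)\rangle - \gamma\langle x - y, f(x) - f(y)\rangle \ge \delta\|x - y\|^2 - \gamma\alpha\|x - y\|^2 = (\delta - \gamma\alpha)\|x - y\|^2.$$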

Recall that the metric (nearest point) projection $P_K$ from a real Hilbert space $H$ onto a closed convex subset $K$ of $H$ is defined as follows: given $x \in H$, $P_Kx$ is the unique point in $K$ with the property
$$\|x - P_Kx\| = \min_{y\in K}\|x - y\|. \tag{2.3}$$
$P_K$ is characterized as follows.

Lemma 2.4. Let $K$ be a closed convex subset of a real Hilbert space $H$, and let $x \in H$ and $y \in K$ be given. Then $y = P_Kx$ if and only if there holds the inequality
$$\langle x - y, y - z\rangle \ge 0, \quad \forall z \in K. \tag{2.4}$$

Lemma 2.5. Assume that $A$ is a strongly positive bounded linear operator on a Hilbert space $H$ with coefficient $\delta > 0$ and $0 < \rho \le \|A\|^{-1}$. Then $\|I - \rho A\| \le 1 - \rho\delta$.

Proof. A standard result in functional analysis states that if $V$ is a bounded linear self-adjoint operator on $H$, then
$$\|V\| = \sup\{|\langle Vx, x\rangle| : x \in H, \|x\| = 1\}. \tag{2.5}$$
Now, for $x \in H$ with $\|x\| = 1$, we see that
$$\langle (I - \rho A)x, x\rangle = 1 - \rho\langle Ax, x\rangle \ge 1 - \rho\|A\| \ge 0 \tag{2.6}$$
(i.e., $I - \rho A$ is positive). It follows that
$$\|I - \rho A\| = \sup\{\langle (I - \rho A)x, x\rangle : x \in H, \|x\| = 1\} = \sup\{1 - \rho\langle Ax, x\rangle : x \in H, \|x\| = 1\} \le 1 - \rho\delta. \tag{2.7}$$

The following lemma is also not hard to prove by induction.

Lemma 2.6. Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \lambda_n + \mu_n)a_n + \lambda_nM, \tag{2.8}$$
where $M$ is a nonnegative constant, $\{\lambda_n\}$ is a sequence in $[0,1]$, and $\{\mu_n\}$ is a sequence in $[0, +\infty)$ such that
(i) $\sum_{n=0}^{\infty}\lambda_n = \infty$;
(ii) $\sum_{n=0}^{\infty}\mu_n < \infty$.
Then $\{a_n\}$ is bounded.
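
Indeed, one can check by induction that $a_n \le \max\{a_0, M\}\prod_{k=0}^{n-1}(1 + \mu_k)$: granting this for $n$, and using $0 \le \lambda_n \le 1$,
$$a_{n+1} \le (1 - \lambda_n)a_n + \lambda_nM + \mu_na_n \le (1 + \mu_n)\max\{a_n, M\} \le \max\{a_0, M\}\prod_{k=0}^{n}(1 + \mu_k),$$
and $\prod_{k=0}^{\infty}(1 + \mu_k) \le \exp\bigl(\sum_{k=0}^{\infty}\mu_k\bigr) < \infty$ by (ii), so $\{a_n\}$ is bounded.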

Notation. We use $\to$ for strong convergence and $\rightharpoonup$ for weak convergence.

3. A General Iteration Algorithm with Bounded Linear Operator

Let $H$ be a real Hilbert space, $A$ a bounded linear operator on $H$, and $T$ a nonexpansive mapping on $H$. Assume that the fixed point set $F(T) = \{x \in H : Tx = x\}$ of $T$ is nonempty. Since $F(T)$ is closed and convex, the nearest point projection from $H$ onto $F(T)$ is well defined.

Throughout the rest of this paper, we always assume that $A$ is strongly positive, that is, there exists a constant $\delta > 0$ such that
$$\langle Ax, x\rangle \ge \delta\|x\|^2, \quad \forall x \in H. \tag{3.1}$$
(Note: $\delta > 0$ is throughout reserved to be the constant such that (3.1) holds.)

Recall also that a contraction on $H$ is a self-mapping $f$ of $H$ such that
$$\|f(x) - f(y)\| \le \alpha\|x - y\|, \quad \forall x, y \in H, \tag{3.2}$$
where the constant $\alpha \in (0,1)$ is called the contractive coefficient of $f$.

For a given contraction $f$ with contractive coefficient $0 < \alpha < 1$, and for $t \in [0,1)$, $s \in (0,1)$ with $t \le s$ such that $0 \le t \le s < \|A\|^{-1}$ and $0 < \gamma < \delta/\alpha$, consider the mapping $S_{t,s}$ on $H$ defined by
$$S_{t,s}x = (I - sA)Tx + t\gamma f(x) + (s - t)x, \quad x \in H. \tag{3.3}$$
Assume that
$$\frac{s - t}{s} \to 0 \quad \text{as } s \to 0. \tag{3.4}$$
It is not hard to see that $S_{t,s}$ is a contraction for sufficiently small $s$; indeed, by Lemma 2.5 we have
$$\|S_{t,s}x - S_{t,s}y\| = \|t\gamma(f(x) - f(y)) + (I - sA)(Tx - Ty) + (s - t)(x - y)\| \le (t\gamma\alpha + 1 - s\delta + s - t)\|x - y\| = \bigl(1 + t(\gamma\alpha - 1) - s(\delta - 1)\bigr)\|x - y\|. \tag{3.5}$$
Hence, $S_{t,s}$ has a unique fixed point, denoted by $x_{t,s}$, which uniquely solves the fixed point equation
$$x_{t,s} = (I - sA)Tx_{t,s} + t\gamma f(x_{t,s}) + (s - t)x_{t,s}. \tag{3.6}$$
Note that $x_{t,s}$ indeed depends on $f$ as well, but we will suppress this dependence for simplicity of notation throughout the rest of this paper. We will also always use $\gamma$ to mean a number in $(0, \delta/\alpha)$.

The next proposition summarizes the basic properties of $x_{t,s}$ $(t \le s)$.

Proposition 3.1. Let $x_{t,s}$ be defined via (3.6). Then:
(i) $\{x_{t,s}\}$ is bounded for $t \in [0, \|A\|^{-1})$, $s \in (0, \|A\|^{-1})$;
(ii) $\lim_{s\to 0}\|x_{t,s} - Tx_{t,s}\| = 0$;
(iii) $x_{t,s}$ defines a continuous surface for $(t,s) \in [0, \|A\|^{-1}) \times (0, \|A\|^{-1})$, $t \le s$, into $H$.

Proof. Observe, for $s \in (0, \|A\|^{-1})$, that $\|I - sA\| \le 1 - s\delta$ by Lemma 2.5.
To show (i), pick $p \in F(T)$. We then have
$$\begin{aligned}\|x_{t,s} - p\| &= \|(I - sA)(Tx_{t,s} - p) + t(\gamma f(x_{t,s}) - Ap) + (s - t)(x_{t,s} - Ap)\| \\ &\le (1 - s\delta)\|x_{t,s} - p\| + t\gamma\alpha\|x_{t,s} - p\| + t\|\gamma f(p) - Ap\| + (s - t)\|x_{t,s} - p\| + (s - t)\|p - Ap\| \\ &\le \Bigl[1 - s\Bigl(\delta - \gamma\alpha - \frac{s - t}{s}\Bigr)\Bigr]\|x_{t,s} - p\| + s\|\gamma f(p) - Ap\| + (s - t)\|p - Ap\|.\end{aligned} \tag{3.7}$$
It follows that
$$\|x_{t,s} - p\| \le \frac{\|\gamma f(p) - Ap\| + \dfrac{s - t}{s}\|p - Ap\|}{\delta - \gamma\alpha - \dfrac{s - t}{s}} < +\infty, \tag{3.8}$$
since, by (3.4), $(s - t)/s \to 0$, so the denominator stays bounded away from $0$ for all sufficiently small $s$. Hence $\{x_{t,s}\}$ is bounded.
(ii) The boundedness of $\{x_{t,s}\}$ implies that of $\{f(x_{t,s})\}$ and $\{ATx_{t,s}\}$. Observing from (3.6) that
$$x_{t,s} - Tx_{t,s} = t\gamma f(x_{t,s}) - sATx_{t,s} + (s - t)x_{t,s}, \tag{3.9}$$
and recalling that $0 \le t \le s$, we have
$$\lim_{s\to 0}\|x_{t,s} - Tx_{t,s}\| = 0. \tag{3.10}$$
To prove (iii), take $t, t_0 \in [0, \|A\|^{-1})$, $s, s_0 \in (0, \|A\|^{-1})$ with $t \le s$ and $t_0 \le s_0$, and calculate
$$\begin{aligned}\|x_{t,s} - x_{t_0,s_0}\| &= \|(t - t_0)\gamma f(x_{t,s}) + t_0\gamma(f(x_{t,s}) - f(x_{t_0,s_0})) - (s - s_0)ATx_{t,s} + (I - s_0A)(Tx_{t,s} - Tx_{t_0,s_0}) \\ &\qquad + (s - t)(x_{t,s} - x_{t_0,s_0}) + (s - s_0 + t_0 - t)x_{t_0,s_0}\| \\ &\le |t - t_0|\gamma\|f(x_{t,s})\| + t_0\gamma\alpha\|x_{t,s} - x_{t_0,s_0}\| + |s - s_0|\|ATx_{t,s}\| + (1 - s_0\delta)\|x_{t,s} - x_{t_0,s_0}\| \\ &\qquad + (s - t)\|x_{t,s} - x_{t_0,s_0}\| + (|s - s_0| + |t - t_0|)\|x_{t_0,s_0}\|,\end{aligned} \tag{3.11}$$
which implies that
$$\bigl[s_0\delta - t_0\gamma\alpha + t - s\bigr]\|x_{t,s} - x_{t_0,s_0}\| \le |t - t_0|\gamma\|f(x_{t,s})\| + |s - s_0|\|ATx_{t,s}\| + (|s - s_0| + |t - t_0|)\|x_{t_0,s_0}\| \to 0 \tag{3.12}$$
as $t \to t_0$, $s \to s_0$. Noting that
$$\lim_{t\to t_0,\, s\to s_0}\bigl[s_0\delta - t_0\gamma\alpha + t - s\bigr] = s_0(\delta - 1) - t_0(\gamma\alpha - 1) > 0, \tag{3.13}$$
it is obvious that
$$\lim_{t\to t_0,\, s\to s_0}\|x_{t,s} - x_{t_0,s_0}\| = 0. \tag{3.14}$$
This completes the proof of Proposition 3.1.
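
Proposition 3.1 can also be observed numerically. In the following Python sketch (toy data of ours, not from the paper: $A = \mathrm{diag}(2, 1.5)$, $T$ the projection onto a line, $f(x) = 0.3x + c$, $\gamma = 1$), for each small $s$ we take $t = s - s^2$, so that $(s - t)/s = s \to 0$ in accordance with (3.4), and compute $x_{t,s}$ by Picard iteration on the contraction $S_{t,s}$; the residual $\|x_{t,s} - Tx_{t,s}\|$ shrinks as $s \to 0$, as part (ii) asserts.

```python
import numpy as np

A = np.diag([2.0, 1.5])
c = np.array([0.2, -0.4])
gamma = 1.0
n_vec = np.array([1.0, 1.0])
T = lambda x: x - (n_vec @ x - 1.0) / (n_vec @ n_vec) * n_vec
f = lambda x: 0.3 * x + c  # contraction with coefficient alpha = 0.3

for s in (0.4, 0.1, 0.01, 0.001):
    t = s - s ** 2          # t <= s and (s - t)/s -> 0, as in (3.4)
    x = np.zeros(2)
    for _ in range(20000):  # Picard iteration for the contraction S_{t,s} of (3.3)
        x = (np.eye(2) - s * A) @ T(x) + t * gamma * f(x) + (s - t) * x
    print(s, x, np.linalg.norm(x - T(x)))  # residual -> 0 as s -> 0
```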

Our first main result below shows that $x_{t,s}$ converges strongly as $s \to 0$ to a fixed point of $T$ which solves a certain variational inequality.

Theorem 3.2. The net $x_{t,s}$ converges strongly, as $s \to 0$ (with $t \le s$), to a fixed point $\tilde x$ of $T$ which solves the variational inequality
$$\langle (A - \gamma f)\tilde x, \tilde x - z\rangle \le 0, \quad \forall z \in F(T). \tag{3.15}$$
Equivalently, $P_{F(T)}(I - A + \gamma f)\tilde x = \tilde x$, where $P_{F(T)}(\cdot)$ is the nearest point projection from $H$ onto $F(T)$.

Proof. We first show the uniqueness of solutions of the variational inequality (3.15), which is indeed a consequence of the strong monotonicity of $A - \gamma f$. Suppose $\tilde x \in F(T)$ and $\hat x \in F(T)$ are both solutions of (3.15); then
$$\langle (A - \gamma f)\tilde x, \tilde x - \hat x\rangle \le 0, \qquad \langle (A - \gamma f)\hat x, \hat x - \tilde x\rangle \le 0. \tag{3.16}$$
Adding up the two inequalities in (3.16) gives
$$\langle (A - \gamma f)\tilde x - (A - \gamma f)\hat x, \tilde x - \hat x\rangle \le 0. \tag{3.17}$$
The strong monotonicity of $A - \gamma f$ (Lemma 2.3) implies that $\tilde x = \hat x$, and the uniqueness is proved. Below we use $\tilde x \in F(T)$ to denote the unique solution of (3.15).
To prove that $x_{t,s}$ converges strongly to $\tilde x$, we write, for a given $z \in F(T)$,
$$x_{t,s} - z = t(\gamma f(x_{t,s}) - Az) + (I - sA)(Tx_{t,s} - z) + (s - t)(x_{t,s} - Az) \tag{3.18}$$
to derive that
$$\begin{aligned}\|x_{t,s} - z\|^2 &= t\langle\gamma f(x_{t,s}) - Az, x_{t,s} - z\rangle + \langle (I - sA)(Tx_{t,s} - z), x_{t,s} - z\rangle + (s - t)\langle x_{t,s} - Az, x_{t,s} - z\rangle \\ &\le (1 - s\delta)\|x_{t,s} - z\|^2 + t\langle\gamma f(x_{t,s}) - Az, x_{t,s} - z\rangle + (s - t)\langle x_{t,s} - Az, x_{t,s} - z\rangle.\end{aligned} \tag{3.19}$$
It follows that
$$\begin{aligned}\|x_{t,s} - z\|^2 &\le \frac{t}{s\delta}\langle\gamma f(x_{t,s}) - Az, x_{t,s} - z\rangle + \frac{s - t}{s\delta}\langle x_{t,s} - Az, x_{t,s} - z\rangle \\ &= \frac{t}{s\delta}\bigl[\gamma\langle f(x_{t,s}) - f(z), x_{t,s} - z\rangle + \langle\gamma f(z) - Az, x_{t,s} - z\rangle\bigr] + \frac{s - t}{s\delta}\langle x_{t,s} - Az, x_{t,s} - z\rangle \\ &\le \frac{t}{s\delta}\bigl[\gamma\alpha\|x_{t,s} - z\|^2 + \langle\gamma f(z) - Az, x_{t,s} - z\rangle\bigr] + \frac{s - t}{s\delta}\langle x_{t,s} - Az, x_{t,s} - z\rangle,\end{aligned} \tag{3.20}$$
which leads to
$$\|x_{t,s} - z\|^2 \le \frac{t}{s\delta - t\gamma\alpha}\langle\gamma f(z) - Az, x_{t,s} - z\rangle + \frac{s - t}{s\delta - t\gamma\alpha}\langle x_{t,s} - Az, x_{t,s} - z\rangle. \tag{3.21}$$
Observe that condition (3.4) implies
$$\frac{s - t}{s\delta - t\gamma\alpha} \to 0 \quad \text{as } s \to 0. \tag{3.22}$$
Since $x_{t,s}$ is bounded as $s \to 0$, $t \le s$, there exist sequences $\{s_n\}, \{t_n\}$ in $[0,1]$ with $s_n \to 0$, $t_n \le s_n$, such that $\{x_{t_n,s_n}\}$ converges weakly to a point $x^* \in H$. Using Proposition 3.1 and Lemma 2.2, we see that $x^* \in F(T)$; therefore, taking $z = x^*$ in (3.21) and using (3.22), the boundedness of $x_{t,s}$, and $t_n/(s_n\delta - t_n\gamma\alpha) \le 1/(\delta - \gamma\alpha)$, we see that $x_{t_n,s_n} \to x^*$ strongly. We next prove that $x^*$ solves the variational inequality (3.15). Since
$$x_{t,s} = (I - sA)Tx_{t,s} + t\gamma f(x_{t,s}) + (s - t)x_{t,s}, \tag{3.23}$$
we derive that
$$s(A - \gamma f)x_{t,s} = -(I - sA)(I - T)x_{t,s} + (s - t)\bigl(x_{t,s} - \gamma f(x_{t,s})\bigr), \tag{3.24}$$
so that
$$(A - \gamma f)x_{t,s} = -\frac{1}{s}(I - sA)(I - T)x_{t,s} + \frac{s - t}{s}\bigl(x_{t,s} - \gamma f(x_{t,s})\bigr). \tag{3.25}$$
It follows that, for $z \in F(T)$,
$$\begin{aligned}\langle (A - \gamma f)x_{t,s}, x_{t,s} - z\rangle &= -\frac{1}{s}\langle (I - sA)(I - T)x_{t,s}, x_{t,s} - z\rangle + \frac{s - t}{s}\langle x_{t,s} - \gamma f(x_{t,s}), x_{t,s} - z\rangle \\ &= -\frac{1}{s}\langle (I - T)x_{t,s} - (I - T)z, x_{t,s} - z\rangle + \langle A(I - T)x_{t,s}, x_{t,s} - z\rangle + \frac{s - t}{s}\langle x_{t,s} - \gamma f(x_{t,s}), x_{t,s} - z\rangle \\ &\le \langle A(I - T)x_{t,s}, x_{t,s} - z\rangle + \frac{s - t}{s}\langle x_{t,s} - \gamma f(x_{t,s}), x_{t,s} - z\rangle,\end{aligned} \tag{3.26}$$
since $I - T$ is monotone (i.e., $\langle x - y, (I - T)x - (I - T)y\rangle \ge 0$ for $x, y \in H$, which is due to the nonexpansivity of $T$) and $(I - T)z = 0$. Now replacing $t, s$ in (3.26) with $t_n, s_n$ and letting $n \to \infty$, noticing that $(I - T)x_{t_n,s_n} \to (I - T)x^* = 0$ and $(s_n - t_n)/s_n \to 0$, we obtain
$$\langle (A - \gamma f)x^*, x^* - z\rangle \le 0. \tag{3.27}$$
That is, $x^* \in F(T)$ is a solution of (3.15); hence $x^* = \tilde x$ by uniqueness. In summary, we have shown that each cluster point of $x_{t,s}$ (as $s \to 0$) equals $\tilde x$. Therefore, $x_{t,s} \to \tilde x$ as $s \to 0$.
The variational inequality (3.15) can be rewritten as
$$\langle (I - A + \gamma f)\tilde x - \tilde x, \tilde x - z\rangle \ge 0, \quad \forall z \in F(T). \tag{3.28}$$
This, by Lemma 2.4, is equivalent to the fixed point equation
$$P_{F(T)}(I - A + \gamma f)\tilde x = \tilde x. \tag{3.29}$$
This completes the proof.

Taking $t = s$ in Theorem 3.2, we get the following.

Corollary 3.3 (see [7]). The path $x_t = x_{t,t}$ converges strongly, as $t \to 0$, to a fixed point $\tilde x$ of $T$ which solves the variational inequality
$$\langle (A - \gamma f)\tilde x, \tilde x - z\rangle \le 0, \quad \forall z \in F(T). \tag{3.30}$$
Equivalently, $P_{F(T)}(I - A + \gamma f)\tilde x = \tilde x$, where $P_{F(T)}(\cdot)$ is the nearest point projection from $H$ onto $F(T)$.

Next we study a general iteration method as follows. The initial guess $x_0$ is selected in $H$ arbitrarily, and the $(n+1)$th iterate $x_{n+1}$ is recursively defined by
$$x_{n+1} = (I - \alpha_nA)Tx_n + \beta_n\gamma f(x_n) + (\alpha_n - \beta_n)x_n, \tag{3.31}$$
where $\{\alpha_n\} \subset (0,1)$ and $\{\beta_n\} \subset [0,1)$ with $\beta_n \le \alpha_n$ are sequences satisfying the following conditions:
(C1) $\alpha_n \to 0$;
(C2) $\sum_{n=0}^{\infty}\alpha_n = \infty$;
(C3) either $\sum_{n=0}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty}(\alpha_{n+1}/\alpha_n) = 1$;
(C4) $\sum_{n=0}^{\infty}(\alpha_n - \beta_n) < \infty$.
Below is the second main result of this paper.

Theorem 3.4. Let $\{x_n\}$ be generated by algorithm (3.31) with the sequences $\{\alpha_n\}, \{\beta_n\}$ of parameters satisfying conditions (C1)-(C4). Then $\{x_n\}$ converges strongly to the point $\tilde x$ obtained in Theorem 3.2.

Proof. Since $\alpha_n \to 0$ by condition (C1), we may assume, without loss of generality, that $\alpha_n < \|A\|^{-1}$ for all $n$.
We now observe that $\{x_n\}$ is bounded. Indeed, pick any $p \in F(T)$ to obtain
$$\begin{aligned}\|x_{n+1} - p\| &= \|(I - \alpha_nA)(Tx_n - p) + \beta_n(\gamma f(x_n) - Ap) + (\alpha_n - \beta_n)(x_n - Ap)\| \\ &\le (1 - \alpha_n\delta)\|x_n - p\| + \beta_n\gamma\alpha\|x_n - p\| + \beta_n\|\gamma f(p) - Ap\| + (\alpha_n - \beta_n)\|x_n - p\| + (\alpha_n - \beta_n)\|p - Ap\| \\ &\le \bigl[1 - (\delta - \gamma\alpha)\alpha_n + (\alpha_n - \beta_n)|1 - \gamma\alpha|\bigr]\|x_n - p\| + \alpha_n\bigl(\|\gamma f(p) - Ap\| + \|p - Ap\|\bigr) \\ &= \bigl[1 - (\delta - \gamma\alpha)\alpha_n + (\alpha_n - \beta_n)|1 - \gamma\alpha|\bigr]\|x_n - p\| + (\delta - \gamma\alpha)\alpha_n\frac{M}{\delta - \gamma\alpha},\end{aligned} \tag{3.32}$$
where $M = \|\gamma f(p) - Ap\| + \|p - Ap\|$ is a constant. By Lemma 2.6 (applied with $\lambda_n = (\delta - \gamma\alpha)\alpha_n$, $\mu_n = |1 - \gamma\alpha|(\alpha_n - \beta_n)$, and the constant $M/(\delta - \gamma\alpha)$; conditions (i) and (ii) follow from (C2) and (C4)), we see that $\{x_n\}$ is bounded.
As a result, noticing
$$x_{n+1} - Tx_n = -\alpha_nATx_n + \beta_n\gamma f(x_n) + (\alpha_n - \beta_n)x_n \tag{3.33}$$
and $\alpha_n \to 0$ (recall $0 \le \beta_n \le \alpha_n$ and $\{x_n\}$ is bounded), we obtain
$$\|x_{n+1} - Tx_n\| \to 0. \tag{3.34}$$
But the key is to prove that
$$\|x_{n+1} - x_n\| \to 0. \tag{3.35}$$
To see this, we calculate
$$\begin{aligned}\|x_{n+1} - x_n\| &= \|(I - \alpha_nA)(Tx_n - Tx_{n-1}) - (\alpha_n - \alpha_{n-1})ATx_{n-1} + \beta_n\gamma(f(x_n) - f(x_{n-1})) + (\beta_n - \beta_{n-1})\gamma f(x_{n-1}) \\ &\qquad + (\alpha_n - \beta_n)(x_n - x_{n-1}) + [(\alpha_n - \beta_n) - (\alpha_{n-1} - \beta_{n-1})]x_{n-1}\| \\ &\le \bigl[1 - \alpha_n\delta + \beta_n\gamma\alpha + (\alpha_n - \beta_n)\bigr]\|x_n - x_{n-1}\| + |\alpha_n - \alpha_{n-1}|\|ATx_{n-1}\| + |\beta_n - \beta_{n-1}|\gamma\|f(x_{n-1})\| \\ &\qquad + \bigl[(\alpha_n - \beta_n) + (\alpha_{n-1} - \beta_{n-1})\bigr]\|x_{n-1}\| \\ &\le \bigl[1 - \bigl((\delta - \gamma\alpha - 1)\alpha_n + \beta_n\bigr)\bigr]\|x_n - x_{n-1}\| + \sigma_n,\end{aligned} \tag{3.36}$$
where $\sigma_n = |\alpha_n - \alpha_{n-1}|\|ATx_{n-1}\| + |\beta_n - \beta_{n-1}|\gamma\|f(x_{n-1})\| + [(\alpha_n - \beta_n) + (\alpha_{n-1} - \beta_{n-1})]\|x_{n-1}\|$. Since $\{x_n\}$, $\{f(x_n)\}$, and $\{ATx_n\}$ are bounded and $|\beta_n - \beta_{n-1}| \le |\alpha_n - \alpha_{n-1}| + (\alpha_n - \beta_n) + (\alpha_{n-1} - \beta_{n-1})$, conditions (C3) and (C4) ensure that $\{\sigma_n\}$ satisfies condition (ii) of Lemma 2.1. Since
$$\sum_{n=0}^{\infty}\bigl[(\delta - \gamma\alpha - 1)\alpha_n + \beta_n\bigr] = \sum_{n=0}^{\infty}\bigl[(\delta - \gamma\alpha)\alpha_n - (\alpha_n - \beta_n)\bigr] = \infty \tag{3.37}$$
by (C2) and (C4), an application of Lemma 2.1 to (3.36) implies (3.35), which, combined with (3.34), in turn implies
$$\|x_n - Tx_n\| \to 0. \tag{3.38}$$
Next we show that
$$\limsup_{n\to\infty}\langle Tx_n - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle \le 0, \tag{3.39}$$
where $\tilde x$ is the point obtained in Theorem 3.2.
To see this, we take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that
$$\limsup_{n\to\infty}\langle x_n - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle = \lim_{k\to\infty}\langle x_{n_k} - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle. \tag{3.40}$$
We may also assume that $x_{n_k} \rightharpoonup z$. Note that $z \in F(T)$ in virtue of Lemma 2.2 and (3.38). It follows from the variational inequality (3.15) that
$$\limsup_{n\to\infty}\langle x_n - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle = \langle z - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle \le 0. \tag{3.41}$$
So (3.39) holds, thanks to (3.38).
Finally, we prove $x_n \to \tilde x$. Using $\tilde x = (I - \alpha_nA)\tilde x + \alpha_nA\tilde x$, write
$$x_{n+1} - \tilde x = (I - \alpha_nA)(Tx_n - \tilde x) + \alpha_n(\gamma f(x_n) - A\tilde x) - (\alpha_n - \beta_n)(\gamma f(x_n) - x_n).$$
Expanding the square of the norm, using $\|(I - \alpha_nA)(Tx_n - \tilde x)\| \le (1 - \alpha_n\delta)\|x_n - \tilde x\|$ together with
$$2\alpha_n\langle Tx_n - \tilde x, \gamma f(x_n) - A\tilde x\rangle \le 2\gamma\alpha\alpha_n\|x_n - \tilde x\|^2 + 2\alpha_n\langle Tx_n - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle$$
(which follows from the contraction property of $f$ and the nonexpansivity of $T$), we obtain
$$\|x_{n+1} - \tilde x\|^2 \le \bigl[1 - 2(\delta - \gamma\alpha)\alpha_n\bigr]\|x_n - \tilde x\|^2 + \alpha_n\bigl(2\langle Tx_n - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle + M_n\bigr) + R_n, \tag{3.42}$$
where
$$\begin{aligned}M_n &= \alpha_n\delta^2\|x_n - \tilde x\|^2 + \alpha_n\|\gamma f(x_n) - A\tilde x\|^2 - 2\alpha_n\langle A(Tx_n - \tilde x), \gamma f(x_n) - A\tilde x\rangle, \\ R_n &= (\alpha_n - \beta_n)^2\|\gamma f(x_n) - x_n\|^2 - 2(\alpha_n - \beta_n)\langle (I - \alpha_nA)(Tx_n - \tilde x) + \alpha_n(\gamma f(x_n) - A\tilde x), \gamma f(x_n) - x_n\rangle.\end{aligned} \tag{3.43}$$
Since $\{x_n\}$ is bounded, condition (C1) gives $M_n \to 0$, condition (C4) gives $\sum_{n=0}^{\infty}|R_n| < \infty$, and $\sum_{n=0}^{\infty}2(\delta - \gamma\alpha)\alpha_n = \infty$ by (C2). That is, (3.42) has the form
$$\|x_{n+1} - \tilde x\|^2 \le \bigl[1 - 2(\delta - \gamma\alpha)\alpha_n\bigr]\|x_n - \tilde x\|^2 + \delta_n, \quad \delta_n = \alpha_n\bigl(2\langle Tx_n - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle + M_n\bigr) + R_n, \tag{3.44}$$
and, together with (3.39),
$$\limsup_{n\to\infty}\bigl(2\langle Tx_n - \tilde x, \gamma f(\tilde x) - A\tilde x\rangle + M_n\bigr) \le 0. \tag{3.45}$$
Now applying Lemma 2.1 to (3.44) concludes that $x_n \to \tilde x$. This completes the proof of Theorem 3.4.

If we pick $\beta_n = \alpha_n$, we obtain the result of Marino and Xu [2].
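
To illustrate Theorem 3.4 numerically, the following Python sketch runs algorithm (3.31) on toy data; everything concrete here ($A$, $T$, $f$, $\gamma$, and the parameter sequences) is an illustrative choice of ours satisfying (C1)-(C4) and $0 < \gamma < \delta/\alpha$, not data from the paper. The limit is checked against the fixed point characterization $P_{F(T)}(I - A + \gamma f)\tilde x = \tilde x$ of Theorem 3.2; in this toy case $P_{F(T)} = T$, since $T$ is itself the projection onto its fixed point set.

```python
import numpy as np

# Toy instance of algorithm (3.31): A = diag(2, 1.5) (delta = 1.5),
# T = projection onto the line {x : x1 + x2 = 1}, f(x) = 0.3 x + c
# (contraction, alpha = 0.3), gamma = 1 < delta/alpha = 5.
A = np.diag([2.0, 1.5])
c = np.array([0.2, -0.4])
gamma = 1.0
n_vec = np.array([1.0, 1.0])
T = lambda x: x - (n_vec @ x - 1.0) / (n_vec @ n_vec) * n_vec
f = lambda x: 0.3 * x + c

x = np.array([4.0, 4.0])
for n in range(200000):
    a_n = 1.0 / (n + 2)             # (C1)-(C3) hold
    b_n = a_n - 1.0 / (n + 2) ** 2  # 0 <= b_n <= a_n, sum (a_n - b_n) < inf (C4)
    x = (np.eye(2) - a_n * A) @ T(x) + b_n * gamma * f(x) + (a_n - b_n) * x

# Check the characterization of Theorem 3.2: x ~ P_{F(T)}(I - A + gamma f)x.
residual = np.linalg.norm(x - T(x - A @ x + gamma * f(x)))
print(x, residual)  # residual should be small (convergence is slow, ~ alpha_n)
```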

4. Approximate Iteration Algorithm and Error Estimate

In this section, we use the following approximate iteration algorithm:
$$y_{n+1} = (I - sA)Ty_n + t\gamma f(y_n) + (s - t)y_n, \tag{4.1}$$
for an arbitrary initial $y_0 \in H$, to calculate fixed points of the nonexpansive mapping and solutions of the variational inequality with the bounded linear operator $A$, where $A, T, f, \gamma, s, t$, and the constants $\delta, \alpha$ are as in Section 3.

Meanwhile, $\tilde x \in F(T)$ is the point obtained in Theorem 3.2, which is the unique solution of the variational inequality (3.15); $x_n \to \tilde x$ as $n \to \infty$ and $x_{t,s} \to \tilde x$ as $s \to 0$, where $\{x_n\}$ and $x_{t,s}$ are defined by (3.31) and (3.6), respectively.

The following lemma will be useful for establishing the convergence rate estimate.

Lemma 4.1 (Banach's contraction mapping principle). Let $H$ be a Banach space and $S$ a contraction from $H$ into itself, that is,
$$\|Sx - Sy\| \le \theta\|x - y\|, \quad \forall x, y \in H, \tag{4.2}$$
where $0 < \theta < 1$ is a constant. Then the Picard iterative sequence $x_{n+1} = Sx_n$, for an arbitrary initial $x_0 \in H$, converges strongly to the unique fixed point $x^*$ of $S$, and
$$\|x_n - x^*\| \le \frac{\theta^n}{1 - \theta}\|x_0 - Sx_0\|. \tag{4.3}$$

For the above $T, A, f, \gamma, s, t, \delta, \alpha$, we consider the mapping
$$S_{t,s}y = (I - sA)Ty + t\gamma f(y) + (s - t)y \tag{4.4}$$
from $H$ into itself. As in Section 3, $S_{t,s}$ is a contraction for sufficiently small $s$; indeed, by Lemma 2.5 we have, for any $x, y \in H$,
$$\|S_{t,s}x - S_{t,s}y\| = \|t\gamma(f(x) - f(y)) + (I - sA)(Tx - Ty) + (s - t)(x - y)\| \le (t\gamma\alpha + 1 - s\delta + s - t)\|x - y\| = \bigl(1 + t(\gamma\alpha - 1) - s(\delta - 1)\bigr)\|x - y\|. \tag{4.5}$$
By Lemma 4.1, $S_{t,s}$ has a unique fixed point $x_{t,s} \in H$, and the iterative sequence
$$y_{n+1} = S_{t,s}y_n = (I - sA)Ty_n + t\gamma f(y_n) + (s - t)y_n, \quad y_0 \in H, \tag{4.6}$$
converges strongly to this fixed point $x_{t,s}$. Meanwhile, from (4.3) and (4.5) we obtain
$$\|y_n - x_{t,s}\| \le \frac{\bigl(1 + t(\gamma\alpha - 1) - s(\delta - 1)\bigr)^n}{s(\delta - 1) - t(\gamma\alpha - 1)}\|y_0 - S_{t,s}y_0\|. \tag{4.7}$$
On the other hand, taking $z = \tilde x$ in (3.21), we have
$$\|x_{t,s} - \tilde x\|^2 \le \frac{t}{s\delta - t\gamma\alpha}\langle\gamma f(\tilde x) - A\tilde x, x_{t,s} - \tilde x\rangle + \frac{s - t}{s\delta - t\gamma\alpha}\bigl[\|x_{t,s} - \tilde x\|^2 + \langle\tilde x - A\tilde x, x_{t,s} - \tilde x\rangle\bigr], \tag{4.8}$$
which leads to
$$\Bigl(1 - \frac{s - t}{s\delta - t\gamma\alpha}\Bigr)\|x_{t,s} - \tilde x\|^2 \le \frac{t}{s\delta - t\gamma\alpha}\langle\gamma f(\tilde x) - A\tilde x, x_{t,s} - \tilde x\rangle + \frac{s - t}{s\delta - t\gamma\alpha}\langle\tilde x - A\tilde x, x_{t,s} - \tilde x\rangle. \tag{4.9}$$
Therefore,
$$\Bigl(1 - \frac{s - t}{s\delta - t\gamma\alpha}\Bigr)\|x_{t,s} - \tilde x\| \le \frac{t}{s\delta - t\gamma\alpha}\|\gamma f(\tilde x) - A\tilde x\| + \frac{s - t}{s\delta - t\gamma\alpha}\|\tilde x - A\tilde x\|,$$
that is,
$$\|x_{t,s} - \tilde x\| \le \frac{t}{s\delta - t\gamma\alpha + t - s}\|\gamma f(\tilde x) - A\tilde x\| + \frac{s - t}{s\delta - t\gamma\alpha + t - s}\|\tilde x - A\tilde x\|. \tag{4.10}$$
Letting $D_1 = \|\gamma f(\tilde x) - A\tilde x\|$ and $D_2 = \|\tilde x - A\tilde x\|$, it follows that
$$\|x_{t,s} - \tilde x\| \le \frac{t}{s\delta - t\gamma\alpha + t - s}D_1 + \frac{s - t}{s\delta - t\gamma\alpha + t - s}D_2. \tag{4.11}$$
From inequality (4.11) together with (4.7), and letting $D_3 = \|y_0 - S_{t,s}y_0\|$, we get
$$\|y_n - \tilde x\| \le \frac{t}{s\delta - t\gamma\alpha + t - s}D_1 + \frac{s - t}{s\delta - t\gamma\alpha + t - s}D_2 + \frac{\bigl(1 + t(\gamma\alpha - 1) - s(\delta - 1)\bigr)^n}{s(\delta - 1) - t(\gamma\alpha - 1)}D_3. \tag{4.12}$$
Inequality (4.12) is precisely the error estimate for the approximate fixed point $y_n$. Now we give several special cases of inequality (4.12).

Error Estimate 1
Letting $n \to \infty$ in (4.12), we obtain
$$\limsup_{n\to\infty}\|y_n - \tilde x\| \le \frac{t}{s\delta - t\gamma\alpha + t - s}D_1 + \frac{s - t}{s\delta - t\gamma\alpha + t - s}D_2. \tag{4.13}$$

Error Estimate 2
If $t = s$, then
$$\|y_n - \tilde x\| \le \frac{1}{\delta - \gamma\alpha}D_1 + \frac{\bigl(1 - s(\delta - \gamma\alpha)\bigr)^n}{s(\delta - \gamma\alpha)}D_3, \tag{4.14}$$
which can be used to estimate the error for the iterative scheme
$$y_{n+1} = (I - sA)Ty_n + s\gamma f(y_n), \quad y_0 \in H. \tag{4.15}$$

Error Estimate 3
If $A = I$ (so that one may take $\delta = 1$, and $D_2 = \|\tilde x - A\tilde x\| = 0$), then
$$\|y_n - \tilde x\| \le \frac{t}{s\delta - t\gamma\alpha + t - s}D_1 + \frac{\bigl(1 + t(\gamma\alpha - 1) - s(\delta - 1)\bigr)^n}{s(\delta - 1) - t(\gamma\alpha - 1)}D_3 = \frac{1}{1 - \gamma\alpha}D_1 + \frac{\bigl(1 - t(1 - \gamma\alpha)\bigr)^n}{t(1 - \gamma\alpha)}D_3, \tag{4.16}$$
which can be used to estimate the error for the iterative scheme
$$y_{n+1} = (1 - s)Ty_n + t\gamma f(y_n) + (s - t)y_n, \quad y_0 \in H. \tag{4.17}$$
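
Finally, here is a Python sketch of the approximate algorithm (4.1)/(4.6) together with the computable Picard part of the error estimate (4.12), on the same toy data as above (our illustrative choices, not from the paper):

```python
import numpy as np

A = np.diag([2.0, 1.5])
c = np.array([0.2, -0.4])
gamma, alpha, delta = 1.0, 0.3, 1.5
n_vec = np.array([1.0, 1.0])
T = lambda x: x - (n_vec @ x - 1.0) / (n_vec @ n_vec) * n_vec
f = lambda x: 0.3 * x + c

s, t = 0.05, 0.045  # fixed parameters with 0 < t <= s < 1/||A||
S = lambda y: (np.eye(2) - s * A) @ T(y) + t * gamma * f(y) + (s - t) * y

y0 = np.array([3.0, -2.0])
theta = 1 + t * (gamma * alpha - 1) - s * (delta - 1)  # contraction factor from (4.5)
D3 = np.linalg.norm(y0 - S(y0))

y = y0.copy()
for n in range(1, 201):
    y = S(y)  # Picard iteration (4.6)
    if n in (1, 50, 200):
        # theta**n / (1 - theta) * D3 bounds ||y_n - x_{t,s}||, cf. (4.3)/(4.7)
        print(n, y, theta ** n / (1 - theta) * D3)
```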

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 11071279.