Abstract

We construct two new methods for finding the minimum norm fixed point of nonexpansive mappings in Hilbert spaces. Some applications are also included.

1. Introduction

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Recall that a mapping $T\colon C \to C$ is nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C. \quad (1.1)$$

Iterative algorithms for finding fixed points of nonexpansive mappings form a topic of considerable interest, since many nonlinear problems can be reformulated as fixed point equations for nonexpansive mappings. Related works can be found in [1–32].

On the other hand, we notice that it is quite common to seek a particular solution of a given nonlinear problem, in particular, the minimum-norm solution. In an abstract way, we may formulate such problems as finding a point $x^\dagger$ with the property
$$x^\dagger \in C, \qquad \|x^\dagger\| = \min_{x \in C}\|x\|, \quad (1.2)$$
where $C$ is a nonempty closed convex subset of a real Hilbert space $H$. In other words, $x^\dagger$ is the (nearest point or metric) projection of the origin onto $C$,
$$x^\dagger = P_C(0), \quad (1.3)$$
where $P_C$ is the metric (or nearest point) projection from $H$ onto $C$.

A typical example is the least-squares solution to the constrained linear inverse problem
$$Ax = b, \quad x \in C, \quad (1.4)$$
where $A$ is a bounded linear operator from $H$ to another real Hilbert space $H_1$ and $b$ is a given point in $H_1$. The least-squares solution to (1.4) is the least-norm minimizer of the minimization problem
$$\min_{x \in C}\|Ax - b\|^2. \quad (1.5)$$

Let $S_b$ denote the (closed convex) solution set of (1.4) (or, equivalently, (1.5)). It is known that $S_b$ is nonempty if and only if $P_{\overline{A(C)}}(b) \in A(C)$. In this case, $S_b$ has a unique element with minimum norm (equivalently, (1.4) has a unique least-squares solution); that is, there exists a unique point $x^\dagger \in S_b$ satisfying
$$\|x^\dagger\| = \min\{\|x\| : x \in S_b\}. \quad (1.6)$$
The so-called $C$-constrained pseudoinverse of $A$ is then defined as the operator $A_C^\dagger$ with domain and values given by
$$D(A_C^\dagger) = \{b \in H_1 : P_{\overline{A(C)}}(b) \in A(C)\}, \qquad A_C^\dagger(b) = x^\dagger, \quad b \in D(A_C^\dagger), \quad (1.7)$$

where $x^\dagger \in S_b$ is the unique solution to (1.6).

Note that the optimality condition for the minimization (1.5) is the variational inequality (VI)
$$\hat x \in C, \qquad \langle A^*(A\hat x - b), x - \hat x\rangle \ge 0, \quad \forall x \in C, \quad (1.8)$$
where $A^*$ is the adjoint of $A$.

If $b \in D(A_C^\dagger)$, then (1.5) is consistent and its solution set $S_b$ coincides with the solution set of VI (1.8). On the other hand, VI (1.8) can be rewritten as
$$\hat x \in C, \qquad \langle \hat x - [\hat x - \lambda A^*(A\hat x - b)], x - \hat x\rangle \ge 0, \quad \forall x \in C, \quad (1.9)$$
where $\lambda > 0$ is any positive scalar. In the terminology of projections, (1.9) is equivalent to the fixed point equation
$$\hat x = P_C[\hat x - \lambda A^*(A\hat x - b)]. \quad (1.10)$$
It is not hard to verify that, for $0 < \lambda < 2/\|A\|^2$, the mapping $x \mapsto P_C[x - \lambda A^*(Ax - b)]$ is nonexpansive. Therefore, finding the least-squares solution of the constrained linear inverse problem (1.4) is equivalent to finding the minimum-norm fixed point of the nonexpansive mapping $x \mapsto P_C[x - \lambda A^*(Ax - b)]$.
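The fixed point iteration behind (1.10) is easy to run numerically. The sketch below is a minimal illustration, not part of the paper's analysis: $C$ is taken to be the nonnegative orthant (so $P_C$ is a coordinate-wise clip), and the operator $A$, the data $b$, and the step size $\lambda$ are illustrative choices satisfying $0 < \lambda < 2/\|A\|^2$.

```python
import numpy as np

# Fixed point iteration x <- P_C[x - lam * A^T (A x - b)] for the
# constrained least-squares problem min_{x in C} ||Ax - b||^2,
# with C = {x : x >= 0} so that P_C is a coordinate-wise clip.

def project_nonneg(x):
    """Metric projection onto C = {x : x >= 0}."""
    return np.maximum(x, 0.0)

def constrained_least_squares(A, b, n_iter=5000):
    lam = 1.0 / np.linalg.norm(A, 2) ** 2   # 0 < lam < 2/||A||^2 keeps the map nonexpansive
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_nonneg(x - lam * A.T @ (A @ x - b))
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, -3.0])
x = constrained_least_squares(A, b)
# The unconstrained minimizer is (1, -3); clipping the negative
# component out, the constrained solution is (1, 0).
print(x)
```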

Motivated by the above least-squares solution to constrained linear inverse problems, we will study the general case of finding the minimum-norm fixed point of a nonexpansive mapping $T\colon C \to C$:
$$\text{find } x^\dagger \in \mathrm{Fix}(T) \text{ such that } \|x^\dagger\| = \min\{\|x\| : x \in \mathrm{Fix}(T)\}, \quad (1.11)$$
where $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$ denotes the set of fixed points of $T$ (throughout we always assume that $\mathrm{Fix}(T) \ne \emptyset$).

We next briefly review two historical approaches related to the minimum-norm fixed point problem (1.11).

Browder [1] introduced an implicit scheme as follows. Fix $u \in C$ and, for each $t \in (0,1)$, let $x_t$ be the unique fixed point in $C$ of the contraction $T_t$, which maps $C$ into $C$:
$$T_t x = tu + (1-t)Tx, \quad x \in C. \quad (1.12)$$
Browder proved that
$$s\text{-}\lim_{t\to 0^+} x_t = P_{\mathrm{Fix}(T)}u. \quad (1.13)$$
That is, the strong limit of $\{x_t\}$ as $t \to 0^+$ is the point of $\mathrm{Fix}(T)$ nearest to $u$.

Halpern [4], on the other hand, introduced an explicit scheme. Again fix $u \in C$. Then, given a sequence $\{t_n\}$ in $(0,1)$ and an arbitrary initial guess $x_0 \in C$, define a sequence $\{x_n\}$ through the recursive formula
$$x_{n+1} = t_n u + (1 - t_n)Tx_n, \quad n \ge 0. \quad (1.14)$$
It is now known that this sequence $\{x_n\}$ converges in norm to the same limit $P_{\mathrm{Fix}(T)}u$ as Browder's implicit scheme (1.12), provided that the sequence $\{t_n\}$ satisfies the following assumptions:
(A1) $\lim_{n\to\infty} t_n = 0$;
(A2) $\sum_{n=1}^\infty t_n = \infty$;
(A3) either $\sum_{n=1}^\infty |t_{n+1} - t_n| < \infty$ or $\lim_{n\to\infty}(t_n/t_{n+1}) = 1$.
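Halpern's recursion (1.14) can be sketched on a toy example. Below, as an illustrative choice not taken from the paper, $T$ is the metric projection onto the diagonal line in $\mathbb{R}^2$, so $\mathrm{Fix}(T)$ is that line and the iterates should approach $P_{\mathrm{Fix}(T)}u$, the projection of the anchor $u$ onto the line; the step sequence $t_n = 1/(n+1)$ satisfies (A1)-(A3).

```python
import numpy as np

# Halpern iteration x_{n+1} = t_n * u + (1 - t_n) * T x_n for a toy
# nonexpansive T: the projection onto the line {(s, s)} in R^2.

def T(x):
    """Projection onto the diagonal line {(s, s) : s in R} -- nonexpansive."""
    s = 0.5 * (x[0] + x[1])
    return np.array([s, s])

u = np.array([2.0, 0.0])
x = np.array([0.0, 0.0])
for n in range(20000):
    t_n = 1.0 / (n + 1)          # satisfies (A1), (A2), (A3)
    x = t_n * u + (1.0 - t_n) * T(x)

print(x)   # approaches the projection of u onto the line, namely (1, 1)
```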

Some more progress on the investigation of the implicit and explicit schemes (1.12) and (1.14) can be found in [33–42]. We notice that the above two methods do find the minimum-norm fixed point $x^\dagger$ of $T$ if $0 \in C$ (simply take $u = 0$). However, if $0 \notin C$, then neither Browder's nor Halpern's method works to find the minimum-norm element $x^\dagger$. The reason is simple: if $0 \notin C$, we cannot take $u = 0$ in either (1.12) or (1.14), since the contraction $x \mapsto (1-t)Tx$ is no longer a self-mapping of $C$ (and hence may fail to have a fixed point), and $(1-t_n)Tx_n$ may not belong to $C$, so that $x_{n+1}$ may be undefined. In order to overcome the difficulties caused by the possible exclusion of the origin from $C$, we introduce the following two remedies.

For Browder's method, we consider the contraction $x \mapsto (1-\beta)P_C[(1-t)x] + \beta Tx$ for some $\beta \in (0,1)$. Since this contraction clearly maps $C$ into $C$, it has a unique fixed point, which is still denoted by $x_t$; that is,
$$x_t = (1-\beta)P_C[(1-t)x_t] + \beta Tx_t.$$
For Halpern's method, we consider the following iterative algorithm:
$$x_{n+1} = (1-\beta)P_C[(1-t_n)x_n] + \beta Tx_n, \quad n \ge 0.$$
It is easily seen that the net $\{x_t\}$ and the sequence $\{x_n\}$ are well defined (i.e., $x_t \in C$ and $x_n \in C$).

The purpose of this paper is to prove that both of the above implicit and explicit methods converge strongly to the minimum-norm fixed point $x^\dagger$ of the nonexpansive mapping $T$. Some applications are also included.

2. Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. Recall that the nearest point (or metric) projection from $H$ onto $C$ is defined as follows: for each point $x \in H$, $P_C x$ is the unique point in $C$ with the property
$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C. \quad (2.1)$$

Note that $P_C$ is characterized by the inequality
$$P_C x \in C, \qquad \langle x - P_C x, y - P_C x\rangle \le 0, \quad \forall y \in C. \quad (2.2)$$

Consequently, 𝑃𝐶 is nonexpansive.

Below is the so-called demiclosedness principle for nonexpansive mappings.

Lemma 2.1 (cf. [7]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T\colon C \to C$ be a nonexpansive mapping with fixed points. If $\{x_n\}$ is a sequence in $C$ such that $x_n \to x$ weakly and $x_n - Tx_n \to y$ strongly, then $(I - T)x = y$.

Finally we state the following elementary result on convergence of real sequences.

Lemma 2.2 (see [19]). Let $\{a_n\}_{n=0}^\infty$ be a sequence of nonnegative real numbers satisfying
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\sigma_n, \quad n \ge 0, \quad (2.3)$$
where $\{\gamma_n\}_{n=0}^\infty \subset (0,1)$ and $\{\sigma_n\}_{n=0}^\infty$ satisfy
(i) $\sum_{n=0}^\infty \gamma_n = \infty$;
(ii) either $\limsup_{n\to\infty}\sigma_n \le 0$ or $\sum_{n=0}^\infty |\gamma_n\sigma_n| < \infty$.
Then $\{a_n\}_{n=0}^\infty$ converges to $0$.
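The recursion (2.3) is easy to probe numerically. The sketch below merely illustrates, and does not prove, the conclusion $a_n \to 0$; the choices $\gamma_n = 1/(n+2)$ (so that $\sum\gamma_n = \infty$) and $\sigma_n = 1/(n+1)$ (so that $\limsup\sigma_n \le 0$) are illustrative.

```python
# Numerical illustration of Lemma 2.2: with gamma_n = 1/(n+2) and
# sigma_n = 1/(n+1), the recursion a_{n+1} = (1 - gamma_n) a_n + gamma_n sigma_n
# drives a_n to 0 (roughly like log(n)/n for these choices).
a = 1.0
for n in range(200_000):
    gamma = 1.0 / (n + 2)
    sigma = 1.0 / (n + 1)
    a = (1.0 - gamma) * a + gamma * sigma
print(a)   # close to 0
```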

We use the following notation:
(i) $\mathrm{Fix}(T)$ stands for the set of fixed points of $T$;
(ii) $x_n \rightharpoonup x$ stands for the weak convergence of $\{x_n\}$ to $x$;
(iii) $x_n \to x$ stands for the strong convergence of $\{x_n\}$ to $x$.

3. Main Results

The aim of this section is to introduce some methods for finding the minimum-norm fixed point of a nonexpansive mapping 𝑇. First, we prove the following theorem by using an implicit method.

Theorem 3.1. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and $T\colon C \to C$ a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. For $\beta \in (0,1)$ and each $t \in (0,1)$, let $x_t$ be defined as the unique solution of the fixed point equation
$$x_t = \beta Tx_t + (1-\beta)P_C[(1-t)x_t], \quad t \in (0,1). \quad (3.1)$$
Then the net $\{x_t\}$ converges in norm, as $t \to 0^+$, to the minimum-norm fixed point of $T$.

Proof. First observe that, for each $t \in (0,1)$, $x_t$ is well defined. Indeed, we define a mapping $S_t\colon C \to C$ by
$$S_t x = \beta Tx + (1-\beta)P_C[(1-t)x], \quad x \in C. \quad (3.2)$$
For $x, y \in C$, we have
$$\|S_t x - S_t y\| = \|\beta(Tx - Ty) + (1-\beta)(P_C[(1-t)x] - P_C[(1-t)y])\| \le \beta\|Tx - Ty\| + (1-\beta)\|P_C[(1-t)x] - P_C[(1-t)y]\| \le [1 - (1-\beta)t]\|x - y\|, \quad (3.3)$$
which implies that $S_t$ is a self-contraction of $C$. Hence $S_t$ has a unique fixed point $x_t \in C$, which is the unique solution of the fixed point equation (3.1).
Next we prove that $\{x_t\}$ is bounded. Take $u \in \mathrm{Fix}(T)$. From (3.1), we have
$$\|x_t - u\| = \|\beta(Tx_t - u) + (1-\beta)(P_C[(1-t)x_t] - u)\| \le \beta\|Tx_t - u\| + (1-\beta)\|P_C[(1-t)x_t] - u\| \le \beta\|x_t - u\| + (1-\beta)[(1-t)\|x_t - u\| + t\|u\|], \quad (3.4)$$
that is,
$$\|x_t - u\| \le \|u\|. \quad (3.5)$$
Hence, $\{x_t\}$ is bounded and so is $\{Tx_t\}$.
From (3.1), we have
$$\|x_t - Tx_t\| = (1-\beta)\|P_C[(1-t)x_t] - P_C[Tx_t]\| \le (1-\beta)\|(1-t)x_t - Tx_t\| \le (1-\beta)\|x_t - Tx_t\| + (1-\beta)t\|x_t\|, \quad (3.6)$$
that is,
$$\|x_t - Tx_t\| \le \frac{1-\beta}{\beta}\,t\|x_t\| \longrightarrow 0 \quad \text{as } t \to 0^+. \quad (3.7)$$
Next we show that $\{x_t\}$ is relatively norm-compact as $t \to 0^+$. Let $\{t_n\} \subset (0,1)$ be a sequence such that $t_n \to 0^+$ as $n \to \infty$. Put $x_n := x_{t_n}$. From (3.7), we have
$$\|x_n - Tx_n\| \to 0. \quad (3.8)$$
Again from (3.1), we get
$$\|x_t - u\|^2 \le \beta\|Tx_t - u\|^2 + (1-\beta)\|P_C[(1-t)x_t] - u\|^2 \le \beta\|x_t - u\|^2 + (1-\beta)\|x_t - u - tx_t\|^2 = \beta\|x_t - u\|^2 + (1-\beta)\bigl[\|x_t - u\|^2 - 2t\langle x_t - u, x_t - u\rangle - 2t\langle u, x_t - u\rangle + t^2\|x_t\|^2\bigr]. \quad (3.9)$$
It turns out that
$$\|x_t - u\|^2 \le \langle u, u - x_t\rangle + tM, \quad (3.10)$$
where $M > 0$ is a constant such that $\sup\{(1/2)\|x_t\|^2 : t \in (0,1)\} \le M$. In particular, we get from (3.10)
$$\|x_n - u\|^2 \le \langle u, u - x_n\rangle + t_n M, \quad u \in \mathrm{Fix}(T). \quad (3.11)$$
Since $\{x_n\}$ is bounded, without loss of generality we may assume that $\{x_n\}$ converges weakly to a point $x^* \in C$. Noticing (3.8), we can use Lemma 2.1 to get $x^* \in \mathrm{Fix}(T)$. Therefore we can substitute $x^*$ for $u$ in (3.11) to get
$$\|x_n - x^*\|^2 \le \langle x^*, x^* - x_n\rangle + t_n M. \quad (3.12)$$
However, $x_n \rightharpoonup x^*$. This together with (3.12) guarantees that $x_n \to x^*$. The net $\{x_t\}$ is therefore relatively compact, as $t \to 0^+$, in the norm topology.
Now we return to (3.11) and take the limit as $n \to \infty$ to get
$$\|x^* - u\|^2 \le \langle u, u - x^*\rangle, \quad \forall u \in \mathrm{Fix}(T). \quad (3.13)$$
This is equivalent to
$$\langle 0 - x^*, u - x^*\rangle \le 0, \quad \forall u \in \mathrm{Fix}(T). \quad (3.14)$$
By the characterization (2.2) of the metric projection, $x^* = P_{\mathrm{Fix}(T)}(0)$. This is sufficient to conclude that the entire net $\{x_t\}$ converges in norm to $x^*$, and $x^*$ is the minimum-norm fixed point of $T$. This completes the proof.

Next, we introduce an explicit algorithm for finding the minimum norm fixed point of nonexpansive mappings.

Theorem 3.2. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $T\colon C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. For given $x_0 \in C$, define a sequence $\{x_n\}$ iteratively by
$$x_{n+1} = \beta Tx_n + (1-\beta)P_C[(1-\alpha_n)x_n], \quad n \ge 0, \quad (3.15)$$
where $\beta \in (0,1)$ and $\{\alpha_n\} \subset (0,1)$ satisfy the following conditions:
(C1) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^\infty \alpha_n = \infty$;
(C2) $\lim_{n\to\infty}(\alpha_n/\alpha_{n-1}) = 1$.
Then the sequence $\{x_n\}$ converges strongly to the minimum-norm fixed point of $T$.

Proof. First we prove that the sequence $\{x_n\}$ is bounded. Pick $p \in \mathrm{Fix}(T)$. Then we have
$$\|x_{n+1} - p\| = \|\beta(Tx_n - p) + (1-\beta)(P_C[(1-\alpha_n)x_n] - p)\| \le \beta\|Tx_n - p\| + (1-\beta)\|P_C[(1-\alpha_n)x_n] - p\| \le \beta\|x_n - p\| + (1-\beta)[(1-\alpha_n)\|x_n - p\| + \alpha_n\|p\|] = [1 - (1-\beta)\alpha_n]\|x_n - p\| + (1-\beta)\alpha_n\|p\| \le \max\{\|x_n - p\|, \|p\|\}. \quad (3.16)$$
By induction,
$$\|x_{n+1} - p\| \le \max\{\|x_0 - p\|, \|p\|\}. \quad (3.17)$$
Next, we estimate $\|x_{n+1} - x_n\|$. From (3.15), we have
$$\|x_{n+1} - x_n\| \le \beta\|Tx_n - Tx_{n-1}\| + (1-\beta)\|P_C[(1-\alpha_n)x_n] - P_C[(1-\alpha_{n-1})x_{n-1}]\| \le \beta\|x_n - x_{n-1}\| + (1-\beta)[(1-\alpha_n)\|x_n - x_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\|x_{n-1}\|] = [1 - (1-\beta)\alpha_n]\|x_n - x_{n-1}\| + (1-\beta)\alpha_n\frac{|\alpha_n - \alpha_{n-1}|}{\alpha_n}\|x_{n-1}\|. \quad (3.18)$$
This together with conditions (C1)-(C2) and Lemma 2.2 implies that
$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0. \quad (3.19)$$
Note that
$$\|x_n - Tx_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - Tx_n\| = \|x_n - x_{n+1}\| + (1-\beta)\|P_C[(1-\alpha_n)x_n] - P_C[Tx_n]\| \le \|x_n - x_{n+1}\| + (1-\beta)\|x_n - Tx_n\| + (1-\beta)\alpha_n\|x_n\|. \quad (3.20)$$
Thus,
$$\|x_n - Tx_n\| \le \frac{1}{\beta}\bigl[\|x_n - x_{n+1}\| + (1-\beta)\alpha_n\|x_n\|\bigr] \longrightarrow 0. \quad (3.21)$$
We next show that
$$\limsup_{n\to\infty}\langle \tilde x, \tilde x - x_n\rangle \le 0, \quad (3.22)$$
where $\tilde x = P_{\mathrm{Fix}(T)}(0)$ is the minimum-norm fixed point of $T$. To see this, since $\{x_n\}$ is bounded we can take a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ satisfying the properties
$$\limsup_{n\to\infty}\langle \tilde x, \tilde x - x_n\rangle = \lim_{k\to\infty}\langle \tilde x, \tilde x - x_{n_k}\rangle, \quad (3.23)$$
$$x_{n_k} \rightharpoonup x^* \quad \text{as } k \to \infty. \quad (3.24)$$
Now since $x^* \in \mathrm{Fix}(T)$ (this is a consequence of Lemma 2.1 and (3.21)), we get by combining (3.23), (3.24), and the projection characterization (2.2)
$$\limsup_{n\to\infty}\langle \tilde x, \tilde x - x_n\rangle = \langle \tilde x, \tilde x - x^*\rangle \le 0. \quad (3.25)$$
Finally, we show that $x_n \to \tilde x$. As a matter of fact, we have
$$\|x_{n+1} - \tilde x\|^2 \le \beta\|Tx_n - \tilde x\|^2 + (1-\beta)\|P_C[(1-\alpha_n)x_n] - \tilde x\|^2 \le \beta\|x_n - \tilde x\|^2 + (1-\beta)\|x_n - \tilde x - \alpha_n x_n\|^2 = \beta\|x_n - \tilde x\|^2 + (1-\beta)\bigl[(1 - 2\alpha_n)\|x_n - \tilde x\|^2 - 2\alpha_n\langle \tilde x, x_n - \tilde x\rangle + \alpha_n^2\|x_n\|^2\bigr] = [1 - 2(1-\beta)\alpha_n]\|x_n - \tilde x\|^2 + 2(1-\beta)\alpha_n\Bigl[\langle \tilde x, \tilde x - x_n\rangle + \frac{\alpha_n}{2}\|x_n\|^2\Bigr] = (1 - \delta_n)\|x_n - \tilde x\|^2 + \delta_n\theta_n, \quad (3.26)$$
where $\delta_n = 2(1-\beta)\alpha_n$ and $\theta_n = \langle \tilde x, \tilde x - x_n\rangle + (\alpha_n/2)\|x_n\|^2$. By (C1) and (3.22), it is easily found that $\lim_{n\to\infty}\delta_n = 0$, $\sum_{n=0}^\infty \delta_n = \infty$, and $\limsup_{n\to\infty}\theta_n \le 0$. We can therefore apply Lemma 2.2 to (3.26) and conclude that $x_n \to \tilde x$ as $n \to \infty$. This completes the proof.
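For concreteness, the iteration (3.15) can be sketched numerically on a toy problem where the origin lies outside $C$. In the sketch below, which is an illustration and not part of the paper's analysis, we take $C = \{x \in \mathbb{R}^2 : x_1 \ge 1\}$ and, as a degenerate but valid nonexpansive mapping, $T = I$, so that $\mathrm{Fix}(T) = C$ and the minimum-norm fixed point is $P_C(0) = (1,0)$; $\beta$, the sequence $\alpha_n = 1/(n+1)$, and the starting point are illustrative choices.

```python
import numpy as np

# Algorithm (3.15): x_{n+1} = beta * T x_n + (1 - beta) * P_C[(1 - alpha_n) x_n],
# with C = {x in R^2 : x_1 >= 1} (so 0 is NOT in C) and T = identity.
# The minimum-norm fixed point is P_C(0) = (1, 0).

def P_C(x):
    """Metric projection onto C = {x : x_1 >= 1}."""
    return np.array([max(x[0], 1.0), x[1]])

beta = 0.5
x = np.array([3.0, 2.0])          # x_0 in C
for n in range(200_000):
    alpha_n = 1.0 / (n + 1)       # satisfies (C1) and (C2)
    x = beta * x + (1.0 - beta) * P_C((1.0 - alpha_n) * x)   # T x = x here

print(x)   # approaches the minimum-norm fixed point (1, 0)
```

Note that Halpern's scheme with anchor $u = 0$ would be undefined here, since $(1-t_n)Tx_n$ can leave $C$; the inner projection in (3.15) is exactly what keeps the iterates feasible.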

4. Applications

We consider the following minimization problem:
$$\min_{x\in C}\varphi(x), \quad (4.1)$$
where $C$ is a closed convex subset of a real Hilbert space $H$ and $\varphi\colon C \to \mathbb{R}$ is a continuously Fréchet differentiable convex function. Denote by $S$ the solution set of (4.1); that is,
$$S = \Bigl\{z \in C : \varphi(z) = \min_{x\in C}\varphi(x)\Bigr\}. \quad (4.2)$$

Assume $S \ne \emptyset$. It is known that a point $z \in C$ is a solution of (4.1) if and only if the following optimality condition holds:
$$z \in C, \qquad \langle \nabla\varphi(z), x - z\rangle \ge 0, \quad \forall x \in C. \quad (4.3)$$
(Here $\nabla\varphi(x)$ denotes the gradient of $\varphi$ at $x \in C$.) It is also known that the optimality condition (4.3) is equivalent to the following fixed point problem:
$$z = T_\gamma z, \qquad T_\gamma = P_C(I - \gamma\nabla\varphi), \quad (4.4)$$

where $\gamma > 0$ is any positive number. Note that the solution set $S$ of (4.1) coincides with the set of fixed points of $T_\gamma$ (for any $\gamma > 0$).

If the gradient $\nabla\varphi$ is $L$-Lipschitz continuous on $C$, then it is not hard to see that the mapping $T_\gamma$ is nonexpansive for $0 < \gamma < 2/L$.

Using Theorems 3.1 and 3.2, we immediately obtain the following result.

Theorem 4.1. Assume that $\varphi$ is continuously (Fréchet) differentiable and convex and that its gradient $\nabla\varphi$ is $L$-Lipschitz. Assume the solution set $S$ of the minimization problem (4.1) is nonempty. Fix $\gamma$ such that $0 < \gamma < 2/L$ and fix $\beta \in (0,1)$.
(i) For each $t \in (0,1)$, let $x_t$ be the unique solution of the fixed point equation
$$x_t = \beta P_C(I - \gamma\nabla\varphi)x_t + (1-\beta)P_C[(1-t)x_t]. \quad (4.5)$$
Then $\{x_t\}$ converges in norm, as $t \to 0^+$, to the minimum-norm solution of the minimization problem (4.1).
(ii) Define a sequence $\{x_n\}$ via the recursive algorithm
$$x_{n+1} = \beta P_C(I - \gamma\nabla\varphi)x_n + (1-\beta)P_C[(1-\alpha_n)x_n], \quad (4.6)$$
where the sequence $\{\alpha_n\}$ satisfies conditions (C1)-(C2) in Theorem 3.2. Then $\{x_n\}$ converges in norm to the minimum-norm solution of the minimization problem (4.1).
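Algorithm (4.6) can also be sketched numerically. The toy problem below is an illustrative choice, not from the paper: $\varphi(x) = \frac12(x_1 - 2)^2$ on $C = \mathbb{R}^2$ (so $P_C = I$), whose solution set is the whole line $\{x : x_1 = 2\}$; the iterates should single out its minimum-norm element $(2,0)$. Here $\nabla\varphi$ is $1$-Lipschitz, so any $\gamma \in (0,2)$ is admissible; $\beta$, $\gamma$, $\alpha_n$, and the starting point are illustrative.

```python
import numpy as np

# Algorithm (4.6): x_{n+1} = beta * P_C(I - gamma * grad_phi) x_n
#                          + (1 - beta) * P_C[(1 - alpha_n) x_n],
# for phi(x) = 0.5 * (x_1 - 2)^2 on C = R^2 (P_C is the identity).
# The solution set is {x : x_1 = 2}; its minimum-norm element is (2, 0).

def grad_phi(x):
    return np.array([x[0] - 2.0, 0.0])

beta, gamma = 0.5, 1.0
x = np.array([5.0, 4.0])
for n in range(200_000):
    alpha_n = 1.0 / (n + 1)       # satisfies (C1) and (C2)
    x = beta * (x - gamma * grad_phi(x)) + (1.0 - beta) * (1.0 - alpha_n) * x

print(x)   # approaches the minimum-norm solution (2, 0)
```

A plain projected-gradient iteration would converge to some solution depending on the starting point; the shrinking term $(1-\alpha_n)x_n$ is what steers the limit to the minimum-norm solution.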

We next turn to consider the convexly constrained linear inverse problem
$$Ax = b, \quad x \in K, \quad (4.7)$$
where $A$ is a bounded linear operator with nonclosed range from a real Hilbert space $H_1$ to another real Hilbert space $H_2$, and $b \in H_2$ is given.

Problem (4.7) models many applied problems arising in image reconstruction, learning theory, and so on.

Due to errors, noise, and the like, (4.7) is often ill-posed and inconsistent; thus regularization and least squares are taken into consideration; that is, we look for a solution to the minimization problem
$$\min_{x\in K}\frac12\|Ax - b\|^2. \quad (4.8)$$
Let $S_b$ denote the solution set of (4.8). It is always closed and convex (but possibly empty). It is known that $S_b$ is nonempty if and only if $P_{\overline{A(K)}}(b) \in A(K)$. In this case, $S_b$ has a unique element with minimum norm; that is, there exists a unique point $x^\dagger \in S_b$ satisfying
$$\|x^\dagger\| = \min\{\|x\| : x \in S_b\}. \quad (4.9)$$
The $K$-constrained pseudoinverse of $A$, $A_K^\dagger$, is defined as
$$D(A_K^\dagger) = \{b \in H_2 : P_{\overline{A(K)}}(b) \in A(K)\}, \qquad A_K^\dagger(b) = x^\dagger, \quad b \in D(A_K^\dagger), \quad (4.10)$$

where $x^\dagger \in S_b$ is the unique solution to (4.9).

Set
$$\varphi(x) = \frac12\|Ax - b\|^2. \quad (4.11)$$

Then $\varphi$ is quadratic with gradient
$$\nabla\varphi(x) = A^*(Ax - b), \quad x \in H_1, \quad (4.12)$$

where $A^*$ is the adjoint of $A$. Clearly $\nabla\varphi$ is Lipschitz with constant $L = \|A^*A\| = \|A\|^2$. Therefore, applying Theorem 4.1, we obtain the following result.

Theorem 4.2. Let $b \in D(A_K^\dagger)$. Fix $\gamma$ such that $0 < \gamma < 2/\|A\|^2$ and fix $\beta \in (0,1)$.
(i) For each $t \in (0,1)$, let $x_t$ be the unique solution of the fixed point equation
$$x_t = \beta P_K[x_t - \gamma A^*(Ax_t - b)] + (1-\beta)P_K[(1-t)x_t]. \quad (4.13)$$
Then $\{x_t\}$ converges in norm, as $t \to 0^+$, to $A_K^\dagger(b)$.
(ii) Define a sequence $\{x_n\}$ via the recursive algorithm
$$x_{n+1} = \beta P_K[x_n - \gamma A^*(Ax_n - b)] + (1-\beta)P_K[(1-\alpha_n)x_n], \quad (4.14)$$
where the sequence $\{\alpha_n\}$ satisfies conditions (C1)-(C2) in Theorem 3.2. Then $\{x_n\}$ converges in norm to $A_K^\dagger(b)$.

Acknowledgments

The authors are very grateful to the referees for their comments and suggestions which improved the presentation of this paper. Y. -C. Liou was supported in part by NSC 99-2221-E-230-006. Y. Yao was supported in part by Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin and NSFC 11071279.