Abstract and Applied Analysis
Volume 2012 (2012), Article ID 109236, 25 pages
http://dx.doi.org/10.1155/2012/109236
Research Article

Forward-Backward Splitting Methods for Accretive Operators in Banach Spaces

1Departamento de Análisis Matemático, Facultad de Matemáticas, Universidad de Sevilla, Apartado. 1160, 41080-Sevilla, Spain
2Department of Mathematics, Luoyang Normal University, Luoyang 471022, China
3Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan

Received 31 March 2012; Accepted 29 May 2012

Academic Editor: Lishan Liu

Copyright © 2012 Genaro López et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Splitting methods have recently received much attention due to the fact that many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation whose operator is decomposed as the sum of two (possibly simpler) nonlinear operators. Most of the investigation on splitting methods has, however, been carried out in the framework of Hilbert spaces. In this paper, we consider these methods in the setting of Banach spaces. We introduce two iterative forward-backward splitting methods with relaxations and errors to find zeros of the sum of two accretive operators in Banach spaces. We prove the weak and strong convergence of these methods under mild conditions. We also discuss applications of these methods to variational inequalities, the split feasibility problem, and a constrained convex minimization problem.

1. Introduction

Splitting methods have recently received much attention due to the fact that many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation whose operator is decomposed as the sum of two (possibly simpler) nonlinear operators. Splitting methods for linear equations were introduced by Peaceman and Rachford [1] and Douglas and Rachford [2]. Extensions to nonlinear equations in Hilbert spaces were carried out by Kellogg [3] and Lions and Mercier [4] (see also [5-7]). The central problem is to iteratively find a zero of the sum of two monotone operators $A$ and $B$ in a Hilbert space $H$, namely, a solution to the inclusion problem
$$0 \in (A+B)x. \tag{1.1}$$

Many problems can be formulated as a problem of form (1.1). For instance, a stationary solution to the initial value problem of the evolution equation
$$\frac{\partial u}{\partial t} + Fu \ni 0, \quad u(0) = u_0 \tag{1.2}$$
can be recast as (1.1) when the governing maximal monotone operator $F$ is of the form $F = A + B$ [4]. In optimization, one often needs [8] to solve a minimization problem of the form
$$\min_x f(x) + g(Tx), \tag{1.3}$$
where $f, g$ are proper lower semicontinuous convex functions from $H$ to the extended real line $\bar{\mathbb{R}} = (-\infty, \infty]$, and $T$ is a bounded linear operator on $H$. As a matter of fact, (1.3) is equivalent to (1.1) (assuming that $f$ and $g \circ T$ have a common point of continuity) with $A = \partial f$ and $B = T^* \circ \partial g \circ T$. Here $T^*$ is the adjoint of $T$ and $\partial f$ is the subdifferential operator of $f$ in the sense of convex analysis. It is known [8, 9] that the minimization problem (1.3) is widely used in image recovery, signal processing, and machine learning.

A splitting method for (1.1) means an iterative method for which each iteration involves only the individual operators $A$ and $B$, but not the sum $A + B$. To solve (1.1), Lions and Mercier [4] introduced the nonlinear Peaceman-Rachford and Douglas-Rachford splitting iterative algorithms, which generate a sequence $\{v_n\}$ by the recursion
$$v_{n+1} = (2J_\lambda^A - I)(2J_\lambda^B - I)v_n \tag{1.4}$$
and, respectively, a sequence $\{v_n\}$ by the recursion
$$v_{n+1} = J_\lambda^A(2J_\lambda^B - I)v_n + (I - J_\lambda^B)v_n. \tag{1.5}$$
Here we use $J_\lambda^T$ to denote the resolvent of a monotone operator $T$; that is, $J_\lambda^T = (I + \lambda T)^{-1}$.
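The Douglas-Rachford recursion (1.5) is easy to try numerically. Below is a minimal sketch on the real line with the hypothetical monotone operators $A(x) = 2x$ and $B(x) = x - 1$ (not taken from the paper), whose resolvents are available in closed form; the unique zero of $A + B$ is $x^* = 1/3$, recovered as $u = J_\lambda^B v$ from the limit $v$ of the recursion.

```python
# Sketch of the nonlinear Douglas-Rachford recursion (1.5) on the real line.
# Hypothetical data: A(x) = 2x and B(x) = x - 1 are monotone, and the unique
# zero of A + B is x* = 1/3.

lam = 1.0

def JA(x):          # resolvent (I + lam*A)^{-1} for A(x) = 2x
    return x / (1 + 2 * lam)

def JB(x):          # resolvent (I + lam*B)^{-1} for B(x) = x - 1
    return (x + lam) / (1 + lam)

def douglas_rachford(v0, iters=200):
    v = v0
    for _ in range(iters):
        v = JA(2 * JB(v) - v) + (v - JB(v))   # recursion (1.5)
    return JB(v)                              # u = J_B(v) solves 0 in (A+B)u

x_star = douglas_rachford(5.0)
print(x_star)
```

For these operators the Douglas-Rachford map is a contraction, so the iterates converge geometrically; the Peaceman-Rachford recursion (1.4) would instead require averaging, as discussed below.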

The nonlinear Peaceman-Rachford algorithm (1.4) fails, in general, to converge (even in the weak topology in the infinite-dimensional setting). This is due to the fact that the generating operator $(2J_\lambda^A - I)(2J_\lambda^B - I)$ of the algorithm (1.4) is merely nonexpansive. However, the mean averages of $\{v_n\}$ can be weakly convergent [5]. The nonlinear Douglas-Rachford algorithm (1.5) always converges in the weak topology to a point $v$, and $u = J_\lambda^B v$ is a solution to (1.1), since the generating operator $J_\lambda^A(2J_\lambda^B - I) + (I - J_\lambda^B)$ of this algorithm is firmly nonexpansive, namely, of the form $(I + T)/2$, where $T$ is nonexpansive.

There is, however, little work in the existing literature on splitting methods for nonlinear operator equations in the setting of Banach spaces (though there was some work on finding a common zero of a finite family of accretive operators [10-12]).

The main difficulties are due to the fact that the inner product structure of a Hilbert space is not available in a Banach space. In this paper we use the technique of duality maps to initiate the investigation of splitting methods for accretive operators in Banach spaces. Namely, we will study splitting iterative methods for solving the inclusion problem (1.1), where $A$ and $B$ are accretive operators in a Banach space $X$.

We will consider the case where $A$ is a single-valued accretive operator and $B$ is a possibly multivalued $m$-accretive operator in a Banach space $X$, and assume that the inclusion (1.1) has a solution. We introduce the following two iterative methods, which we call the Mann-type and, respectively, Halpern-type forward-backward methods with errors, and which generate a sequence $\{x_n\}$ by the recursions
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\left[J_{r_n}^B(x_n - r_n(Ax_n + a_n)) + b_n\right], \tag{1.6}$$
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)\left[J_{r_n}^B(x_n - r_n(Ax_n + a_n)) + b_n\right], \tag{1.7}$$
where $J_r^B$ is the resolvent of the operator $B$ of order $r$ (i.e., $J_r^B = (I + rB)^{-1}$) and $\{\alpha_n\}$ is a sequence in $(0,1]$. We will prove weak convergence of (1.6) and strong convergence of (1.7) to a solution of (1.1) in a class of Banach spaces which will be made precise in Section 3.
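The two recursions above, without the error terms $a_n, b_n$, can be sketched numerically. The data below are hypothetical and not from the paper: $A(x) = x - 2$ (which is $1$-isa on the real line) and $B = \partial|\cdot|$, whose resolvent $J_r^B$ is the soft-thresholding map; the unique zero of $A + B$ is $x^* = 1$.

```python
# Sketch of the Mann-type (1.6) and Halpern-type (1.7) forward-backward
# iterations on the real line, without error terms.  Hypothetical data:
# A(x) = x - 2 and B the subdifferential of |.|; the zero of A + B is x* = 1.

def soft(x, r):                      # resolvent J_r^B for B = d|.|
    return max(abs(x) - r, 0.0) * (1 if x > 0 else -1)

def forward_backward(x, r):          # J_r^B (x - r*A(x))
    return soft(x - r * (x - 2), r)

# Mann-type (1.6): relaxed iteration, (weakly) convergent
x, r, alpha = 5.0, 0.5, 0.5
for _ in range(200):
    x = (1 - alpha) * x + alpha * forward_backward(x, r)
mann = x

# Halpern-type (1.7): anchored toward u, strongly convergent
u, y = 0.0, 5.0
for n in range(20000):
    a = 1.0 / (n + 2)                # alpha_n -> 0, sum alpha_n = infinity
    y = a * u + (1 - a) * forward_backward(y, r)
halpern = y

print(mann, halpern)
```

Both runs approach $x^* = 1$; the Halpern iterates do so more slowly, at a rate governed by $\alpha_n$.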

The paper is organized as follows. In the next section we introduce the class of Banach spaces in which we shall study our splitting methods for solving (1.1). We also introduce the concept of accretive and 𝑚-accretive operators in a Banach space. In Section 3, we discuss the splitting algorithms (1.6) and (1.7) and prove their weak and strong convergence, respectively. In Section 4, we discuss applications of both algorithms (1.6) and (1.7) to variational inequalities, fixed points of pseudocontractions, convexly constrained minimization problems, the split feasibility problem, and linear inverse problems.

2. Preliminaries

Throughout the paper, $X$ is a real Banach space with norm $\|\cdot\|$, distance $d$, and dual space $X^*$. The symbol $\langle x, x^*\rangle$ denotes the pairing between $X$ and $X^*$; that is, $\langle x, x^*\rangle = x^*(x)$, the value of $x^*$ at $x$. $C$ will denote a nonempty closed convex subset of $X$, unless otherwise stated, and $B_r$ the closed ball with center zero and radius $r$. The expressions $x_n \to x$ and $x_n \rightharpoonup x$ denote the strong and weak convergence of the sequence $\{x_n\}$, respectively, and $\omega_w(x_n)$ stands for the set of weak limit points of the sequence $\{x_n\}$.

The modulus of convexity of $X$ is the function $\delta_X : (0,2] \to [0,1]$ defined by
$$\delta_X(\varepsilon) = \inf\left\{1 - \frac{\|x+y\|}{2} : \|x\| = \|y\| = 1,\ \|x-y\| \ge \varepsilon\right\}. \tag{2.1}$$
Recall that $X$ is said to be uniformly convex if $\delta_X(\varepsilon) > 0$ for any $\varepsilon \in (0,2]$. Let $p > 1$. We say that $X$ is $p$-uniformly convex if there exists a constant $c_p > 0$ such that $\delta_X(\varepsilon) \ge c_p\varepsilon^p$ for any $\varepsilon \in (0,2]$.

The modulus of smoothness of $X$ is the function $\rho_X : \mathbb{R}^+ \to \mathbb{R}^+$ defined by
$$\rho_X(\tau) = \sup\left\{\frac{\|x+\tau y\| + \|x-\tau y\|}{2} - 1 : \|x\| = \|y\| = 1\right\}. \tag{2.2}$$
Recall that $X$ is called uniformly smooth if $\lim_{\tau\to 0}\rho_X(\tau)/\tau = 0$. Let $1 < q \le 2$. We say that $X$ is $q$-uniformly smooth if there is a $c_q > 0$ such that $\rho_X(\tau) \le c_q\tau^q$ for $\tau > 0$. It is known that $X$ is $p$-uniformly convex if and only if $X^*$ is $q$-uniformly smooth, where $1/p + 1/q = 1$. For instance, $L^p$ spaces are 2-uniformly convex and $p$-uniformly smooth if $1 < p \le 2$, whereas they are $p$-uniformly convex and 2-uniformly smooth if $p \ge 2$.

The norm of $X$ is said to be Fréchet differentiable if, for each $x \in X$, the limit
$$\lim_{\lambda\to 0}\frac{\|x + \lambda y\| - \|x\|}{\lambda} \tag{2.3}$$
exists and is attained uniformly for all $y$ with $\|y\| = 1$. It can be proved that $X$ is uniformly smooth if and only if the limit (2.3) exists and is attained uniformly for all $(x,y)$ with $\|x\| = \|y\| = 1$. In particular, a uniformly smooth Banach space has a Fréchet differentiable norm.

The subdifferential of a proper convex function $f : X \to (-\infty, +\infty]$ is the set-valued operator $\partial f : X \to 2^{X^*}$ defined as
$$\partial f(x) = \{x^* \in X^* : \langle y - x, x^*\rangle + f(x) \le f(y)\ \forall y \in X\}. \tag{2.4}$$
If $f$ is proper, convex, and lower semicontinuous, the subdifferential $\partial f(x)$ is nonempty for any $x \in \operatorname{int}\mathcal{D}(f)$, the interior of the domain of $f$. The generalized duality mapping $\mathcal{J}_p : X \to 2^{X^*}$ is defined by
$$\mathcal{J}_p(x) = \{j(x) \in X^* : \langle j(x), x\rangle = \|x\|^p,\ \|j(x)\| = \|x\|^{p-1}\}. \tag{2.5}$$
If $p = 2$, the corresponding duality mapping is called the normalized duality mapping and denoted by $\mathcal{J}$. It can be proved that, for any $x \in X$,
$$\mathcal{J}_p(x) = \partial\left(\frac{1}{p}\|x\|^p\right). \tag{2.6}$$
Thus we have the following subdifferential inequality, for any $x, y \in X$:
$$\|x+y\|^p \le \|x\|^p + p\langle y, j(x+y)\rangle, \quad j(x+y) \in \mathcal{J}_p(x+y). \tag{2.7}$$
In particular, we have, for $x, y \in X$,
$$\|x+y\|^2 \le \|x\|^2 + 2\langle y, j(x+y)\rangle, \quad j(x+y) \in \mathcal{J}(x+y). \tag{2.8}$$
Some properties of the duality mappings are collected as follows.

Proposition 2.1 (see Cioranescu [13]). Let $1 < p < \infty$. (i) The Banach space $X$ is smooth if and only if the duality mapping $\mathcal{J}_p$ is single-valued. (ii) The Banach space $X$ is uniformly smooth if and only if the duality mapping $\mathcal{J}_p$ is single-valued and norm-to-norm uniformly continuous on bounded sets of $X$.
Among the estimates satisfied by 𝑝-uniformly convex and 𝑝-uniformly smooth spaces, the following ones will come in handy.

Lemma 2.2 (see Xu [14]). Let $1 < p < \infty$, $q \in (1,2]$, and $r > 0$ be given. (i) If $X$ is uniformly convex, then there exists a continuous, strictly increasing, and convex function $\varphi : \mathbb{R}^+ \to \mathbb{R}^+$ with $\varphi(0) = 0$ such that
$$\|\lambda x + (1-\lambda)y\|^p \le \lambda\|x\|^p + (1-\lambda)\|y\|^p - W_p(\lambda)\varphi(\|x-y\|), \quad x, y \in B_r,\ 0 \le \lambda \le 1, \tag{2.9}$$
where $W_p(\lambda) = \lambda^p(1-\lambda) + (1-\lambda)^p\lambda$. (ii) If $X$ is $q$-uniformly smooth, then there exists a constant $\kappa_q > 0$ such that
$$\|x+y\|^q \le \|x\|^q + q\langle \mathcal{J}_q(x), y\rangle + \kappa_q\|y\|^q, \quad x, y \in X. \tag{2.10}$$
The best constant $\kappa_q$ satisfying (2.10) will be called the $q$-uniform smoothness coefficient of $X$. For instance [14], for $2 \le p < \infty$, $L^p$ is 2-uniformly smooth with $\kappa_2 = p - 1$, and for $1 < p \le 2$, $L^p$ is $p$-uniformly smooth with $\kappa_p = (1 + t_p^{p-1})(1 + t_p)^{1-p}$, where $t_p$ is the unique solution of the equation
$$(p-2)t^{p-1} + (p-1)t^{p-2} - 1 = 0, \quad 0 < t < 1. \tag{2.11}$$
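The root $t_p$ of (2.11) has no closed form in general, but it is easy to approximate: the left-hand side is strictly decreasing on $(0,1)$ for $1 < p < 2$, from $+\infty$ down to $2p - 4 < 0$, so bisection applies. A sketch for the illustrative choice $p = 3/2$ (for which $t_p = 3 - 2\sqrt{2}$ can be checked by hand):

```python
# Solving (2.11) for t_p in (0,1) by bisection, then forming the p-uniform
# smoothness coefficient kappa_p = (1 + t_p^(p-1)) * (1 + t_p)^(1-p).
# Illustrative choice p = 1.5; for general 1 < p < 2, g below is strictly
# decreasing on (0,1) with g(0+) = +inf and g(1) = 2p - 4 < 0.

def t_p_root(p, tol=1e-12):
    g = lambda t: (p - 2) * t ** (p - 1) + (p - 1) * t ** (p - 2) - 1
    lo, hi = 1e-12, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

p = 1.5
t = t_p_root(p)
kappa = (1 + t ** (p - 1)) * (1 + t) ** (1 - p)
print(t, kappa)
```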

In a Banach space $X$ with a Fréchet differentiable norm, there exists a function $h : [0,\infty) \to [0,\infty)$ with $\lim_{t\to 0}h(t)/t = 0$ such that, for all $x, u \in X$,
$$\frac{1}{2}\|x\|^2 + \langle u, \mathcal{J}(x)\rangle \le \frac{1}{2}\|x+u\|^2 \le \frac{1}{2}\|x\|^2 + \langle u, \mathcal{J}(x)\rangle + h(\|u\|). \tag{2.12}$$

Recall that $T : C \to C$ is a nonexpansive mapping if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. From now on, $\operatorname{Fix}(T)$ denotes the fixed point set of $T$. The following lemma states that the demiclosedness principle for nonexpansive mappings holds in uniformly convex Banach spaces.

Lemma 2.3 (see Browder [15]). Let $C$ be a nonempty closed convex subset of a uniformly convex space $X$ and $T$ a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - T)x_n \to y$, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in \operatorname{Fix}(T)$.

A set-valued operator $A : X \to 2^X$, with domain $\mathcal{D}(A)$ and range $\mathcal{R}(A)$, is said to be accretive if, for all $t > 0$ and every $x, y \in \mathcal{D}(A)$,
$$\|x - y\| \le \|x - y + t(u - v)\|, \quad u \in Ax,\ v \in Ay. \tag{2.13}$$
It follows from Lemma 1.1 of Kato [16] that $A$ is accretive if and only if, for each $x, y \in \mathcal{D}(A)$, there exists $j(x-y) \in \mathcal{J}(x-y)$ such that
$$\langle u - v, j(x-y)\rangle \ge 0, \quad u \in Ax,\ v \in Ay. \tag{2.14}$$
An accretive operator $A$ is said to be $m$-accretive if $\mathcal{R}(I + \lambda A) = X$ for some $\lambda > 0$. It can be shown that an accretive operator $A$ is $m$-accretive if and only if $\mathcal{R}(I + \lambda A) = X$ for all $\lambda > 0$.

Given $\alpha > 0$ and $q \in (1, \infty)$, we say that an accretive operator $A$ is $\alpha$-inverse strongly accretive ($\alpha$-isa) of order $q$ if, for each $x, y \in \mathcal{D}(A)$, there exists $j_q(x-y) \in \mathcal{J}_q(x-y)$ such that
$$\langle u - v, j_q(x-y)\rangle \ge \alpha\|u - v\|^q, \quad u \in Ax,\ v \in Ay. \tag{2.15}$$
When $q = 2$, we simply say $\alpha$-isa instead of $\alpha$-isa of order 2; that is, $A$ is $\alpha$-isa if, for each $x, y \in \mathcal{D}(A)$, there exists $j(x-y) \in \mathcal{J}(x-y)$ such that
$$\langle u - v, j(x-y)\rangle \ge \alpha\|u - v\|^2, \quad u \in Ax,\ v \in Ay. \tag{2.16}$$

Given a subset $K$ of $C$ and a mapping $T : C \to K$, recall that $T$ is a retraction of $C$ onto $K$ if $Tx = x$ for all $x \in K$. We say that $T$ is sunny if, for each $x \in C$ and $t \ge 0$,
$$T(tx + (1-t)Tx) = Tx, \tag{2.17}$$
whenever $tx + (1-t)Tx \in C$.

The first result regarding the existence of sunny nonexpansive retractions onto the fixed point set of a nonexpansive mapping is due to Bruck.

Theorem 2.4 (see Bruck [17]). If 𝑋 is strictly convex and uniformly smooth and if 𝑇𝐶𝐶 is a nonexpansive mapping having a nonempty fixed point set Fix(𝑇), then there exists a sunny nonexpansive retraction of 𝐶 onto Fix(𝑇).

The following technical lemma regarding convergence of real sequences will be used when we discuss convergence of algorithms (1.6) and (1.7) in the next section.

Lemma 2.5 (see [18, 19]). Let $\{a_n\}, \{c_n\} \subset \mathbb{R}^+$, $\{\alpha_n\} \subset (0,1)$, and $\{b_n\} \subset \mathbb{R}$ be sequences such that
$$a_{n+1} \le (1 - \alpha_n)a_n + b_n + c_n, \quad n \ge 0. \tag{2.18}$$
Assume $\sum_{n=0}^\infty c_n < \infty$. Then the following results hold:
(i) If $b_n \le \alpha_n M$ for some $M \ge 0$, then $\{a_n\}$ is a bounded sequence.
(ii) If $\sum_{n=0}^\infty \alpha_n = \infty$ and $\limsup_{n\to\infty} b_n/\alpha_n \le 0$, then $\lim_{n\to\infty} a_n = 0$.

3. Splitting Methods for Accretive Operators

In this section we assume that $X$ is a real Banach space and $C$ is a nonempty closed subset of $X$. We also assume that $A$ is a single-valued $\alpha$-isa operator for some $\alpha > 0$ and $B$ is an $m$-accretive operator in $X$, with $\mathcal{D}(A) \subset C$ and $\mathcal{D}(B) \subset C$. Moreover, we always use $J_r$ to denote the resolvent of $B$ of order $r > 0$; that is,
$$J_r \equiv J_r^B = (I + rB)^{-1}. \tag{3.1}$$

It is known that the $m$-accretiveness of $B$ implies that $J_r$ is single-valued, defined on the entire $X$, and firmly nonexpansive; that is,
$$\|J_r x - J_r y\| \le \|s(x - y) + (1 - s)(J_r x - J_r y)\|, \quad x, y \in X,\ 0 \le s \le 1. \tag{3.2}$$
Below we fix the following notation:
$$T_r = J_r(I - rA) = (I + rB)^{-1}(I - rA). \tag{3.3}$$

Lemma 3.1. For $r > 0$, $\operatorname{Fix}(T_r) = (A + B)^{-1}(0)$.

Proof. From the definition of $T_r$, it follows that
$$x = T_r x \iff x = (I + rB)^{-1}(x - rAx) \iff x - rAx \in x + rBx \iff 0 \in Ax + Bx. \tag{3.4}$$

This lemma alludes to the fact that in order to solve the inclusion problem (1.1), it suffices to find a fixed point of 𝑇𝑟. Since 𝑇𝑟 is already “split,” an iterative algorithm for 𝑇𝑟 corresponds to a splitting algorithm for (1.1). However, to guarantee convergence (weak or strong) of an iterative algorithm for 𝑇𝑟, we need good metric properties of 𝑇𝑟 such as nonexpansivity. To this end, we need geometric conditions on the underlying space 𝑋 (see Lemma 3.3).
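Lemma 3.1 is easy to observe numerically. In the sketch below (hypothetical data, not from the paper) we take $A(x) = x - 3$ on the real line and $B$ the normal cone of $C = [0,1]$, whose resolvent is the projection onto $C$; the unique zero of $A + B$ is $x^* = 1$, since $-A(1) = 2$ lies in $N_C(1) = [0, +\infty)$, and iterating $T_r$ for several stepsizes $r$ recovers it as a fixed point.

```python
# Illustration of Lemma 3.1: fixed points of T_r = J_r^B (I - rA) are the
# zeros of A + B.  Hypothetical example: A(x) = x - 3, B = normal cone of
# C = [0, 1] (resolvent = projection onto C); the zero of A + B is x* = 1.

def proj(x):                      # resolvent of the normal cone of [0, 1]
    return min(1.0, max(0.0, x))

def T(x, r):                      # forward-backward operator T_r
    return proj(x - r * (x - 3))

fixed = {}
for r in (0.1, 0.5, 1.0):
    x = -4.0
    for _ in range(500):
        x = T(x, r)
    fixed[r] = x
print(fixed)
```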

Lemma 3.2. Given $0 < s \le r$ and $x \in X$, there holds the relation
$$\|x - T_s x\| \le 2\|x - T_r x\|. \tag{3.5}$$

Proof. Note that $(x - T_r x)/r - Ax \in B(T_r x)$. By the accretivity of $B$, there exists $j_{s,r} \in \mathcal{J}(T_s x - T_r x)$ such that
$$\left\langle \frac{x - T_s x}{s} - \frac{x - T_r x}{r},\ j_{s,r}\right\rangle \ge 0. \tag{3.6}$$
It turns out that
$$\|T_s x - T_r x\|^2 \le \left\langle \frac{r - s}{r}(x - T_r x),\ j_{s,r}\right\rangle \le \left|1 - \frac{s}{r}\right|\|x - T_r x\|\,\|T_s x - T_r x\|. \tag{3.7}$$
This along with the triangle inequality yields
$$\|x - T_s x\| \le \|x - T_r x\| + \|T_r x - T_s x\| \le \|x - T_r x\| + \left|1 - \frac{s}{r}\right|\|x - T_r x\| \le 2\|x - T_r x\|. \tag{3.8}$$

We notice that, though the resolvent of an accretive operator is always firmly nonexpansive in a general Banach space, firm nonexpansiveness is insufficient to derive the bounds required to prove convergence of iterative algorithms for solving nonlinear equations governed by accretive operators. To overcome this difficulty, we need to impose additional properties on the underlying Banach space $X$. Lemma 3.3 below establishes a sharper estimate than nonexpansiveness of the mapping $T_r$, which is useful for proving the weak and strong convergence of algorithms (1.6) and (1.7).

Lemma 3.3. Let $X$ be a uniformly convex and $q$-uniformly smooth Banach space for some $q \in (1,2]$. Assume that $A$ is a single-valued $\alpha$-isa of order $q$ in $X$. Then, given $s > 0$, there exists a continuous, strictly increasing, and convex function $\phi_q : \mathbb{R}^+ \to \mathbb{R}^+$ with $\phi_q(0) = 0$ such that, for all $x, y \in B_s$,
$$\|T_r x - T_r y\|^q \le \|x - y\|^q - r(\alpha q - r^{q-1}\kappa_q)\|Ax - Ay\|^q - \phi_q(\|(I - J_r)(I - rA)x - (I - J_r)(I - rA)y\|), \tag{3.9}$$
where $\kappa_q$ is the $q$-uniform smoothness coefficient of $X$ (see Lemma 2.2).

Proof. Put $\hat{x} = x - rAx$ and $\hat{y} = y - rAy$. Since $(\hat{x} - J_r\hat{x})/r \in B(J_r\hat{x})$, it follows from the accretiveness of $B$ that
$$\|J_r\hat{x} - J_r\hat{y}\| \le \left\|J_r\hat{x} - J_r\hat{y} + \frac{r}{2}\left(\frac{\hat{x} - J_r\hat{x}}{r} - \frac{\hat{y} - J_r\hat{y}}{r}\right)\right\| = \left\|\frac{1}{2}(\hat{x} - \hat{y}) + \frac{1}{2}(J_r\hat{x} - J_r\hat{y})\right\|. \tag{3.10}$$
Since $x, y \in B_s$, by the accretivity of $A$ it is easy to show that there exists $t > 0$ such that $\hat{x}, \hat{y} \in B_t$, and hence $J_r\hat{x} - J_r\hat{y}$ remains bounded, $J_r$ being nonexpansive. Now since $X$ is uniformly convex, we can use Lemma 2.2 to find a continuous, strictly increasing, and convex function $\varphi : \mathbb{R}^+ \to \mathbb{R}^+$ with $\varphi(0) = 0$ satisfying
$$\left\|\frac{1}{2}(\hat{x} - \hat{y}) + \frac{1}{2}(J_r\hat{x} - J_r\hat{y})\right\|^q \le \frac{1}{2}\|\hat{x} - \hat{y}\|^q + \frac{1}{2}\|J_r\hat{x} - J_r\hat{y}\|^q - W_q\!\left(\tfrac{1}{2}\right)\varphi(\|(I - J_r)\hat{x} - (I - J_r)\hat{y}\|) \le \|\hat{x} - \hat{y}\|^q - \frac{1}{2^q}\varphi(\|(I - J_r)\hat{x} - (I - J_r)\hat{y}\|), \tag{3.11}$$
where the last inequality follows from the nonexpansivity of the resolvent $J_r$. Letting $\phi_q = \varphi/2^q$ and combining (3.10) and (3.11) yield
$$\|T_r x - T_r y\|^q \le \|\hat{x} - \hat{y}\|^q - \phi_q(\|(I - J_r)\hat{x} - (I - J_r)\hat{y}\|). \tag{3.12}$$
On the other hand, since $X$ is also $q$-uniformly smooth and $A$ is $\alpha$-isa of order $q$, we derive that
$$\|\hat{x} - \hat{y}\|^q = \|(x - y) - r(Ax - Ay)\|^q \le \|x - y\|^q - qr\langle Ax - Ay, \mathcal{J}_q(x - y)\rangle + \kappa_q r^q\|Ax - Ay\|^q \le \|x - y\|^q - r(\alpha q - r^{q-1}\kappa_q)\|Ax - Ay\|^q. \tag{3.13}$$
Finally, the required inequality (3.9) follows from (3.12) and (3.13).

Remark 3.4. Note that from Lemma 3.3 one deduces that, under the same conditions, if $r \le (\alpha q/\kappa_q)^{1/(q-1)}$, then the mapping $T_r$ is nonexpansive.

3.1. Weak Convergence

Mann's iterative method [20] is a widely used method for finding fixed points of nonexpansive mappings [21]. We have proved that a splitting method for solving (1.1) can, under certain conditions, be reduced to a method for finding a fixed point of a nonexpansive mapping. The purpose of this subsection is therefore to introduce a Mann-type forward-backward method with errors and to prove its weak convergence in a uniformly convex and $q$-uniformly smooth Banach space. (See [22] for a similar treatment of the proximal point algorithm [23, 24] for finding zeros of monotone operators in the Hilbert space setting.) To this end we need a lemma about the uniqueness of weak cluster points of a sequence, whose proof, included here, follows the idea presented in [21, 25].

Lemma 3.5. Let $C$ be a closed convex subset of a uniformly convex Banach space $X$ with a Fréchet differentiable norm, and let $\{T_n\}$ be a sequence of nonexpansive self-mappings on $C$ with a nonempty common fixed point set $F$. If $x_0 \in C$ and $x_{n+1} = T_n x_n + e_n$, where $\sum_{n=1}^\infty\|e_n\| < \infty$, then $\langle z_1 - z_2, \mathcal{J}(y_1 - y_2)\rangle = 0$ for all $y_1, y_2 \in F$ and all weak limit points $z_1, z_2$ of $\{x_n\}$.

Proof. We first claim that the sequence $\{x_n\}$ is bounded. As a matter of fact, for each fixed $p \in F$ and any $n$,
$$\|x_{n+1} - p\| = \|T_n x_n - T_n p + e_n\| \le \|x_n - p\| + \|e_n\|. \tag{3.14}$$
As $\sum_{n=1}^\infty\|e_n\| < \infty$, we can apply Lemma 2.5 to find that $\lim_{n\to\infty}\|x_n - p\|$ exists. In particular, $\{x_n\}$ is bounded.
Let us next prove that, for every $y_1, y_2 \in F$ and $0 < t < 1$, the limit
$$\lim_{n\to\infty}\|t x_n + (1-t)y_1 - y_2\| \tag{3.15}$$
exists. To see this, we set $S_{n,m} = T_{n+m-1}T_{n+m-2}\cdots T_n$, which is nonexpansive. It is easy to see that we can rewrite $\{x_n\}$ in the manner
$$x_{n+m} = S_{n,m}x_n + c_{n,m}, \quad n, m \ge 1, \tag{3.16}$$
where
$$c_{n,m} = T_{n+m-1}\bigl(T_{n+m-2}\cdots(T_n x_n + e_n)\cdots + e_{n+m-2}\bigr) + e_{n+m-1} - S_{n,m}x_n. \tag{3.17}$$
By nonexpansivity, we have
$$\|c_{n,m}\| \le \sum_{k=n}^{n+m-1}\|e_k\|, \tag{3.18}$$
and the summability of $\{\|e_n\|\}$ implies that
$$\lim_{n,m\to\infty}\|c_{n,m}\| = 0. \tag{3.19}$$
Set
$$a_n = \|t x_n + (1-t)y_1 - y_2\|, \quad d_{n,m} = \|S_{n,m}(t x_n + (1-t)y_1) - (t S_{n,m}x_n + (1-t)y_1)\|. \tag{3.20}$$
Let $K$ be a closed bounded convex subset of $X$ containing $\{x_n\}$ and $\{y_1, y_2\}$. A result of Bruck [26] assures the existence of a strictly increasing continuous function $g : [0,\infty) \to [0,\infty)$ with $g(0) = 0$ such that
$$g(\|U(tx + (1-t)y) - (tUx + (1-t)Uy)\|) \le \|x - y\| - \|Ux - Uy\| \tag{3.21}$$
for all nonexpansive $U : K \to X$, all $x, y \in K$, and $0 \le t \le 1$. Applying (3.21) to each $S_{n,m}$, we obtain
$$g(d_{n,m}) \le \|x_n - y_1\| - \|S_{n,m}x_n - S_{n,m}y_1\| = \|x_n - y_1\| - \|x_{n+m} - y_1 - c_{n,m}\| \le \|x_n - y_1\| - \|x_{n+m} - y_1\| + \|c_{n,m}\|. \tag{3.22}$$
Now since $\lim_{n\to\infty}\|x_n - y_1\|$ exists, (3.19) and (3.22) together imply that
$$\lim_{n,m\to\infty}d_{n,m} = 0. \tag{3.23}$$
Furthermore, we have
$$a_{n+m} \le a_n + d_{n,m} + \|c_{n,m}\|. \tag{3.24}$$
After taking first $\limsup_{m\to\infty}$ and then $\liminf_{n\to\infty}$ in (3.24) and using (3.19) and (3.23), we get
$$\limsup_{m\to\infty}a_m \le \liminf_{n\to\infty}a_n + \lim_{n,m\to\infty}(d_{n,m} + \|c_{n,m}\|) = \liminf_{n\to\infty}a_n. \tag{3.25}$$
Hence the limit (3.15) exists.
If we now replace $x$ and $u$ in (2.12) with $y_1 - y_2$ and $t(x_n - y_1)$, respectively, we arrive at
$$\frac{1}{2}\|y_1 - y_2\|^2 + t\langle x_n - y_1, \mathcal{J}(y_1 - y_2)\rangle \le \frac{1}{2}\|t x_n + (1-t)y_1 - y_2\|^2 \le \frac{1}{2}\|y_1 - y_2\|^2 + t\langle x_n - y_1, \mathcal{J}(y_1 - y_2)\rangle + h(t\|x_n - y_1\|). \tag{3.26}$$
Since $\lim_{n\to\infty}\|x_n - y_1\|$ exists, we deduce that
$$\frac{1}{2}\|y_1 - y_2\|^2 + t\limsup_{n\to\infty}\langle x_n - y_1, \mathcal{J}(y_1 - y_2)\rangle \le \lim_{n\to\infty}\frac{1}{2}\|t x_n + (1-t)y_1 - y_2\|^2 \le \frac{1}{2}\|y_1 - y_2\|^2 + t\liminf_{n\to\infty}\langle x_n - y_1, \mathcal{J}(y_1 - y_2)\rangle + o(t), \tag{3.27}$$
where $\lim_{t\to 0}o(t)/t = 0$. Consequently,
$$\limsup_{n\to\infty}\langle x_n - y_1, \mathcal{J}(y_1 - y_2)\rangle \le \liminf_{n\to\infty}\langle x_n - y_1, \mathcal{J}(y_1 - y_2)\rangle + \frac{o(t)}{t}. \tag{3.28}$$
Letting $t$ tend to 0, we conclude that $\lim_{n\to\infty}\langle x_n - y_1, \mathcal{J}(y_1 - y_2)\rangle$ exists. Therefore, for any two weak limit points $z_1$ and $z_2$ of $\{x_n\}$, $\langle z_1 - y_1, \mathcal{J}(y_1 - y_2)\rangle = \langle z_2 - y_1, \mathcal{J}(y_1 - y_2)\rangle$; that is, $\langle z_1 - z_2, \mathcal{J}(y_1 - y_2)\rangle = 0$.

Theorem 3.6. Let $X$ be a uniformly convex and $q$-uniformly smooth Banach space. Let $A : X \to X$ be an $\alpha$-isa of order $q$ and $B : X \to 2^X$ an $m$-accretive operator. Assume that $S = (A+B)^{-1}(0) \ne \emptyset$. We define a sequence $\{x_n\}$ by the perturbed iterative scheme
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\left[J_{r_n}(x_n - r_n(Ax_n + a_n)) + b_n\right], \tag{3.29}$$
where $J_{r_n} = (I + r_n B)^{-1}$, $\{a_n\}, \{b_n\} \subset X$, $\{\alpha_n\} \subset (0,1]$, and $\{r_n\} \subset (0,+\infty)$. Assume that
(i) $\sum_{n=1}^\infty\|a_n\| < \infty$ and $\sum_{n=1}^\infty\|b_n\| < \infty$;
(ii) $0 < \liminf_{n\to\infty}\alpha_n$;
(iii) $0 < \liminf_{n\to\infty}r_n \le \limsup_{n\to\infty}r_n < (\alpha q/\kappa_q)^{1/(q-1)}$.
Then $\{x_n\}$ converges weakly to some $x \in S$.

Proof. Write $T_n = (I + r_n B)^{-1}(I - r_n A)$. Notice that we can write
$$J_{r_n}(x_n - r_n(Ax_n + a_n)) + b_n = T_n x_n + e_n, \tag{3.30}$$
where $e_n = J_{r_n}(x_n - r_n(Ax_n + a_n)) + b_n - T_n x_n$. Then the iterative formula (3.29) turns into the form
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n(T_n x_n + e_n). \tag{3.31}$$
Thus, by the nonexpansivity of $J_{r_n}$,
$$\|e_n\| \le \|J_{r_n}(x_n - r_n(Ax_n + a_n)) - T_n x_n\| + \|b_n\| \le r_n\|a_n\| + \|b_n\|. \tag{3.32}$$
Therefore, condition (i) implies
$$\sum_{n=1}^\infty\|e_n\| < \infty. \tag{3.33}$$
Take $z \in S$ to deduce that, as $S = \operatorname{Fix}(T_n)$ and $T_n$ is nonexpansive,
$$\|x_{n+1} - z\| \le (1 - \alpha_n)\|x_n - z\| + \alpha_n(\|T_n x_n - T_n z\| + \|e_n\|) \le \|x_n - z\| + \alpha_n\|e_n\|. \tag{3.34}$$
Due to (3.33), Lemma 2.5 is applicable, and we get that $\lim_{n\to\infty}\|x_n - z\|$ exists; in particular, $\{x_n\}$ is bounded. Let $M > 0$ be such that $\|x_n\| < M$ for all $n$, and let $s = q(M + \|z\|)^{q-1}$. By (2.7) and Lemma 3.3, we have
$$\|x_{n+1} - z\|^q \le \|(1 - \alpha_n)(x_n - z) + \alpha_n(T_n x_n - z)\|^q + q\alpha_n\|e_n\|\,\|x_{n+1} - z\|^{q-1} \le (1 - \alpha_n)\|x_n - z\|^q + \alpha_n\|T_n x_n - T_n z\|^q + s\|e_n\| \le \|x_n - z\|^q - \alpha_n r_n(\alpha q - r_n^{q-1}\kappa_q)\|Ax_n - Az\|^q - \alpha_n\phi_q(\|x_n - r_n Ax_n - T_n x_n + r_n Az\|) + s\|e_n\|. \tag{3.35}$$
From (3.35), assumptions (ii) and (iii), and (3.33), it turns out that
$$\lim_{n\to\infty}\left(\|Ax_n - Az\|^q + \|x_n - r_n Ax_n - T_n x_n + r_n Az\|\right) = 0. \tag{3.36}$$
Consequently,
$$\lim_{n\to\infty}\|T_n x_n - x_n\| = 0. \tag{3.37}$$
Since $\liminf_{n\to\infty}r_n > 0$, there exists $\varepsilon > 0$ such that $r_n \ge \varepsilon$ for all $n \ge 0$. Then, by Lemma 3.2,
$$\lim_{n\to\infty}\|T_\varepsilon x_n - x_n\| \le 2\lim_{n\to\infty}\|T_n x_n - x_n\| = 0. \tag{3.38}$$
By Lemmas 3.3 and 3.1, $T_\varepsilon$ is nonexpansive and $\operatorname{Fix}(T_\varepsilon) = S$. We can therefore make use of Lemma 2.3 to conclude that
$$\omega_w(x_n) \subset S. \tag{3.39}$$
Finally, we set $U_n = (1 - \alpha_n)I + \alpha_n T_n$ and rewrite the scheme (3.31) as
$$x_{n+1} = U_n x_n + \alpha_n e_n, \tag{3.40}$$
where the sequence $\{\alpha_n e_n\}$ satisfies $\sum_{n=1}^\infty\alpha_n\|e_n\| < \infty$. Since $\{U_n\}$ is a sequence of nonexpansive mappings with $S$ as their nonempty common fixed point set, and since the space $X$ is uniformly convex with a Fréchet differentiable norm, we can apply Lemma 3.5 together with (3.39) to assert that the sequence $\{x_n\}$ has exactly one weak limit point; it is therefore weakly convergent.

3.2. Strong Convergence

Halpern's method [27] is another iterative method for finding fixed points of nonexpansive mappings. This method has been extensively studied in the literature [28-30] (see also the recent survey [31]). In this section we aim to introduce and prove the strong convergence of a Halpern-type forward-backward method with errors in uniformly convex and $q$-uniformly smooth Banach spaces. This result turns out to be new even in the setting of Hilbert spaces.

Theorem 3.7. Let $X$ be a uniformly convex and $q$-uniformly smooth Banach space. Let $A : X \to X$ be an $\alpha$-isa of order $q$ and $B : X \to 2^X$ an $m$-accretive operator. Assume that $S = (A+B)^{-1}(0) \ne \emptyset$. We define a sequence $\{x_n\}$ by the iterative scheme
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)\left[J_{r_n}(x_n - r_n(Ax_n + a_n)) + b_n\right], \tag{3.41}$$
where $u \in X$, $J_{r_n} = (I + r_n B)^{-1}$, $\{a_n\}, \{b_n\} \subset X$, $\{\alpha_n\} \subset (0,1]$, and $\{r_n\} \subset (0,+\infty)$. Assume the following conditions are satisfied:
(i) $\sum_{n=1}^\infty\|a_n\| < \infty$ and $\sum_{n=1}^\infty\|b_n\| < \infty$, or $\lim_{n\to\infty}\|a_n\|/\alpha_n = \lim_{n\to\infty}\|b_n\|/\alpha_n = 0$;
(ii) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^\infty\alpha_n = \infty$;
(iii) $0 < \liminf_{n\to\infty}r_n \le \limsup_{n\to\infty}r_n < (\alpha q/\kappa_q)^{1/(q-1)}$.
Then $\{x_n\}$ converges in norm to $z = Q(u)$, where $Q$ is the sunny nonexpansive retraction of $X$ onto $S$.

Proof. Let $z = Q(u)$, where $Q$ is the sunny nonexpansive retraction of $X$ onto $S$ whose existence is ensured by Theorem 2.4. Let $\{y_n\}$ be a sequence generated by
$$y_{n+1} = \alpha_n u + (1 - \alpha_n)T_n y_n, \tag{3.42}$$
where we abbreviate $T_n = J_{r_n}(I - r_n A)$. Hence, to show the desired result, it suffices to prove that $y_n \to z$. Indeed, since $J_{r_n}$ and $I - r_n A$ are both nonexpansive, it follows that
$$\|x_{n+1} - y_{n+1}\| \le (1 - \alpha_n)\left(\|J_{r_n}(x_n - r_n(Ax_n + a_n)) - J_{r_n}(y_n - r_n Ay_n)\| + \|b_n\|\right) \le (1 - \alpha_n)\left(\|(I - r_n A)x_n - (I - r_n A)y_n\| + r_n\|a_n\| + \|b_n\|\right) \le (1 - \alpha_n)\|x_n - y_n\| + L(\|a_n\| + \|b_n\|), \tag{3.43}$$
where $L = \max\{1, (\alpha q/\kappa_q)^{1/(q-1)}\}$. According to condition (i), we can apply Lemma 2.5(ii) to conclude that $\|x_n - y_n\| \to 0$ as $n \to \infty$.

We next show $y_n \to z$. Indeed, since $S = \operatorname{Fix}(T_n)$ and $T_n$ is nonexpansive, we have
$$\|y_{n+1} - z\| \le \alpha_n\|u - z\| + (1 - \alpha_n)\|T_n y_n - T_n z\| \le \alpha_n\|u - z\| + (1 - \alpha_n)\|y_n - z\|. \tag{3.44}$$
Hence, we can apply Lemma 2.5(i) to claim that $\{y_n\}$ is bounded.

Using the inequality (2.7) with $p = q$, we derive that
$$\|y_{n+1} - z\|^q = \|\alpha_n(u - z) + (1 - \alpha_n)(T_n y_n - z)\|^q \le (1 - \alpha_n)^q\|T_n y_n - z\|^q + q\alpha_n\langle u - z, \mathcal{J}_q(y_{n+1} - z)\rangle. \tag{3.45}$$

By condition (iii), we have some $\delta > 0$ such that
$$1 - \alpha_n \ge \delta, \quad (1 - \alpha_n)r_n(\alpha q - r_n^{q-1}\kappa_q) \ge \delta \tag{3.46}$$
for all $n$. Hence, by Lemma 3.3 we get from (3.45) that
$$\|y_{n+1} - z\|^q \le (1 - \alpha_n)\|y_n - z\|^q - \delta\phi_q(\|y_n - r_n Ay_n - T_n y_n + r_n Az\|) - \delta\|Ay_n - Az\|^q + q\alpha_n\langle u - z, \mathcal{J}_q(y_{n+1} - z)\rangle. \tag{3.47}$$

Let us define 𝑠𝑛=𝑦𝑛𝑧𝑞 for all 𝑛0. Depending on the asymptotic behavior of the sequence {𝑠𝑛} we distinguish two cases.

Case 1. Suppose that there exists $N$ such that the sequence $\{s_n\}_{n\ge N}$ is nonincreasing; thus, $\lim_{n\to\infty}s_n$ exists. Since $\alpha_n \to 0$ and $\{y_n\}$ is bounded, it follows immediately from (3.47) that
$$\lim_{n\to\infty}\left(\|Ay_n - Az\|^q + \|y_n - r_n Ay_n - T_n y_n + r_n Az\|\right) = 0. \tag{3.48}$$
Consequently,
$$\lim_{n\to\infty}\|T_n y_n - y_n\| = 0. \tag{3.49}$$

By condition (iii), there exists $\varepsilon > 0$ such that $r_n \ge \varepsilon$ for all $n \ge 0$. Then, by Lemma 3.2, we get
$$\lim_{n\to\infty}\|T_\varepsilon y_n - y_n\| \le 2\lim_{n\to\infty}\|T_n y_n - y_n\| = 0. \tag{3.50}$$

The demiclosedness principle (i.e., Lemma 2.3) implies that
$$\omega_w(y_n) \subset S. \tag{3.51}$$

Note that from inequality (3.47) we deduce that
$$s_{n+1} \le (1 - \alpha_n)s_n + q\alpha_n\langle u - z, \mathcal{J}_q(y_{n+1} - z)\rangle. \tag{3.52}$$

Next we prove that
$$\limsup_{n\to\infty}\langle u - z, \mathcal{J}_q(y_n - z)\rangle \le 0. \tag{3.53}$$

Equivalently (should $\|y_n - z\| \not\to 0$), we need to prove that
$$\limsup_{n\to\infty}\langle u - z, \mathcal{J}(y_n - z)\rangle \le 0. \tag{3.54}$$

To this end, let $z_t$ satisfy $z_t = tu + (1-t)T_\varepsilon z_t$. By Reich's theorem [32], we get $z_t \to Q_S u = z$ as $t \to 0$. Using the subdifferential inequality (2.8), we deduce that
$$\|z_t - y_n\|^2 = \|t(u - y_n) + (1-t)(T_\varepsilon z_t - y_n)\|^2 \le (1-t)^2\|T_\varepsilon z_t - y_n\|^2 + 2t\langle u - y_n, \mathcal{J}(z_t - y_n)\rangle \le (1-t)^2\left(\|T_\varepsilon z_t - T_\varepsilon y_n\| + \|T_\varepsilon y_n - y_n\|\right)^2 + 2t\|z_t - y_n\|^2 + 2t\langle u - z_t, \mathcal{J}(z_t - y_n)\rangle \le (1 + t^2)\|z_t - y_n\|^2 + M\|T_\varepsilon y_n - y_n\| + 2t\langle u - z_t, \mathcal{J}(z_t - y_n)\rangle, \tag{3.55}$$
where $M > 0$ is a constant such that
$$M > \max\left\{\|z_t - y_n\|^2,\ 2\|z_t - y_n\| + \|T_\varepsilon y_n - y_n\|\right\}, \quad t \in (0,1),\ n \ge 1. \tag{3.56}$$

Then it follows from (3.55) that
$$\langle u - z_t, \mathcal{J}(y_n - z_t)\rangle \le \frac{Mt}{2} + \frac{M}{2t}\|T_\varepsilon y_n - y_n\|. \tag{3.57}$$

Taking $\limsup_{n\to\infty}$ yields
$$\limsup_{n\to\infty}\langle u - z_t, \mathcal{J}(y_n - z_t)\rangle \le \frac{Mt}{2}. \tag{3.58}$$

Then, letting $t \to 0$ and noting that the duality map $\mathcal{J}$ is norm-to-norm uniformly continuous on bounded sets, we get (3.54) as desired. Due to (3.53), we can apply Lemma 2.5(ii) to (3.52) to conclude that $s_n \to 0$; that is, $y_n \to z$.

Case 2. Suppose that there exists $n_1$ such that $s_{n_1} \le s_{n_1+1}$. Let us define
$$I_n = \{n_1 \le k \le n : s_k \le s_{k+1}\}, \quad n \ge n_1. \tag{3.59}$$
Obviously $I_n \ne \emptyset$ since $n_1 \in I_n$ for any $n \ge n_1$. Set
$$\tau(n) = \max I_n. \tag{3.60}$$
Note that the sequence $\{\tau(n)\}$ is nondecreasing and $\lim_{n\to\infty}\tau(n) = \infty$. Moreover, $\tau(n) \le n$ and
$$s_{\tau(n)} \le s_{\tau(n)+1}, \tag{3.61}$$
$$s_n \le s_{\tau(n)+1}, \tag{3.62}$$
for any $n \ge n_1$ (see Lemma 3.1 of Maingé [33] for more details). From inequality (3.47) we get
$$s_{\tau(n)+1} \le (1 - \alpha_{\tau(n)})s_{\tau(n)} - \delta\phi_q(\|y_{\tau(n)} - r_{\tau(n)}Ay_{\tau(n)} - T_{\tau(n)}y_{\tau(n)} + r_{\tau(n)}Az\|) - \delta\|Ay_{\tau(n)} - Az\|^q + q\alpha_{\tau(n)}\langle u - z, \mathcal{J}_q(y_{\tau(n)+1} - z)\rangle. \tag{3.63}$$
It turns out that
$$\lim_{n\to\infty}\|Ay_{\tau(n)} - Az\| = 0, \quad \lim_{n\to\infty}\|y_{\tau(n)} - r_{\tau(n)}Ay_{\tau(n)} - T_{\tau(n)}y_{\tau(n)} + r_{\tau(n)}Az\| = 0. \tag{3.64}$$
Consequently,
$$\lim_{n\to\infty}\|T_{\tau(n)}y_{\tau(n)} - y_{\tau(n)}\| = 0. \tag{3.65}$$
Now, repeating the argument of the proof of (3.53) in Case 1, we can get
$$\limsup_{n\to\infty}\langle u - z, \mathcal{J}_q(y_{\tau(n)} - z)\rangle \le 0. \tag{3.66}$$
By the asymptotic regularity of $\{y_{\tau(n)}\}$ and (3.65), we deduce that
$$\lim_{n\to\infty}\|y_{\tau(n)+1} - y_{\tau(n)}\| = 0. \tag{3.67}$$
This implies that
$$\limsup_{n\to\infty}\langle u - z, \mathcal{J}_q(y_{\tau(n)+1} - z)\rangle \le 0. \tag{3.68}$$
On the other hand, it follows from (3.47) that
$$s_{\tau(n)+1} - s_{\tau(n)} + \alpha_{\tau(n)}s_{\tau(n)} \le q\alpha_{\tau(n)}\langle u - z, \mathcal{J}_q(y_{\tau(n)+1} - z)\rangle. \tag{3.69}$$
Since $s_{\tau(n)+1} \ge s_{\tau(n)}$ by (3.61), dividing (3.69) by $\alpha_{\tau(n)}$, taking $\limsup_{n\to\infty}$, and using (3.68), we deduce that $\limsup_{n\to\infty}s_{\tau(n)} \le 0$; hence $\lim_{n\to\infty}s_{\tau(n)} = 0$, that is, $\|y_{\tau(n)} - z\| \to 0$. Using the triangle inequality
$$\|y_{\tau(n)+1} - z\| \le \|y_{\tau(n)+1} - y_{\tau(n)}\| + \|y_{\tau(n)} - z\|, \tag{3.70}$$
we also get $\lim_{n\to\infty}s_{\tau(n)+1} = 0$, which together with (3.62) guarantees that $\|y_n - z\| \to 0$.

4. Applications

The two forward-backward methods previously studied, (3.29) and (3.41), find applications in other related problems such as variational inequalities, the convex feasibility problem, fixed point problems, and optimization problems.

Throughout this section, let $C$ be a nonempty closed and convex subset of a Hilbert space $H$. Note that in this setting the concept of monotonicity coincides with the concept of accretivity.

Regarding the problem under consideration, that of finding a zero of the sum of two monotone operators in a Hilbert space $H$, as a direct consequence of Theorem 3.6 we first obtain the following result due to Combettes [34].

Corollary 4.1. Let $A$ be monotone and $B$ maximal monotone. Assume that $\kappa A$ is firmly nonexpansive for some $\kappa > 0$ and that
(i) $\liminf_{n\to\infty}\alpha_n > 0$;
(ii) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < 2\kappa$;
(iii) $\sum_{n=1}^\infty\|a_n\| < \infty$ and $\sum_{n=1}^\infty\|b_n\| < \infty$;
(iv) $S = (A+B)^{-1}(0) \ne \emptyset$.
Then the sequence $\{x_n\}$ generated by the algorithm
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\left[J_{\lambda_n}(x_n - \lambda_n(Ax_n + a_n)) + b_n\right] \tag{4.1}$$
converges weakly to a point in $S$.

Proof. It suffices to show that $\kappa A$ is firmly nonexpansive if and only if $A$ is $\kappa$-inverse strongly monotone. This, however, follows from the straightforward observation
$$\langle \kappa Ax - \kappa Ay, x - y\rangle \ge \|\kappa Ax - \kappa Ay\|^2 \iff \langle Ax - Ay, x - y\rangle \ge \kappa\|Ax - Ay\|^2 \tag{4.2}$$
for all $x, y \in H$.

4.1. Variational Inequality Problems

A monotone variational inequality problem (VIP) is formulated as the problem of finding a point $x^* \in C$ with the property
$$\langle Ax^*, z - x^*\rangle \ge 0, \quad z \in C, \tag{4.3}$$
where $A : C \to H$ is a nonlinear monotone operator. We shall denote by $S$ the solution set of (4.3) and assume $S \ne \emptyset$.

One method for solving VIP (4.3) is the projection algorithm, which, starting with an arbitrary initial point $x_0$, generates a sequence $\{x_n\}$ satisfying
$$x_{n+1} = P_C(x_n - rAx_n), \tag{4.4}$$
where $r > 0$ is a properly chosen stepsize. If, in addition, $A$ is $\kappa$-inverse strongly monotone (ism), then the iteration (4.4) with $0 < r < 2\kappa$ converges weakly to a point in $S$ whenever such a point exists.
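The projection algorithm (4.4) can be sketched as follows, with hypothetical data not from the paper: $C = [0,1]\times[0,1]$ and $A(x) = x - b$ with $b = (2, 0.5)$, which is $1$-ism; the VIP solution is the componentwise clip of $b$, namely $(1, 0.5)$.

```python
# Sketch of the projection algorithm (4.4) for the VIP.  Hypothetical data:
# C = [0,1]^2, A(x) = x - b with b = (2, 0.5); A is 1-ism, so any stepsize
# 0 < r < 2 is admissible, and the solution is (1, 0.5).

b = (2.0, 0.5)

def P_C(x):
    return tuple(min(1.0, max(0.0, xi)) for xi in x)

def A(x):
    return tuple(xi - bi for xi, bi in zip(x, b))

x, r = (-3.0, 4.0), 1.5
for _ in range(300):
    x = P_C(tuple(xi - r * ai for xi, ai in zip(x, A(x))))
print(x)
```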

By [35, Theorem 3], VIP (4.3) is equivalent to finding a point $x^*$ so that
$$0 \in (A + B)x^*, \tag{4.5}$$
where $B$ is the normal cone operator of $C$. In other words, VIPs are a special case of the problem of finding zeros of the sum of two monotone operators. Note that the resolvent of the normal cone operator is nothing but the projection operator $P_C$ and that, if $A$ is $\kappa$-ism, then the solution set $S$ is closed and convex [36]. As an application of the previous sections, we get the following results.

Corollary 4.2. Let $A : C \to H$ be $\kappa$-ism for some $\kappa > 0$, and let the following conditions be satisfied:
(i) $\liminf_{n\to\infty}\alpha_n > 0$;
(ii) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < 2\kappa$.
Then the sequence $\{x_n\}$ generated by the relaxed projection algorithm
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - \lambda_n Ax_n) \tag{4.6}$$
converges weakly to a point in $S$.

Corollary 4.3. Let $A : C \to H$ be $\kappa$-ism, and let the following conditions be satisfied:
(i) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^\infty\alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < 2\kappa$.
Then, for any given $u \in C$, the sequence $\{x_n\}$ generated by
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)P_C(x_n - \lambda_n Ax_n) \tag{4.7}$$
converges strongly to $P_S u$.

Remark 4.4. Corollary 4.3 improves Iiduka and Takahashi's result [37, Corollary 3.2], where, apart from hypotheses (i)-(ii), the conditions $\sum_{n=1}^\infty|\alpha_n - \alpha_{n+1}| < \infty$ and $\sum_{n=1}^\infty|\lambda_n - \lambda_{n+1}| < \infty$ are required.

4.2. Fixed Points of Strict Pseudocontractions

An operator $T : C \to C$ is said to be a strict $\kappa$-pseudocontraction if there exists a constant $\kappa \in [0,1)$ such that
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + \kappa\|(I-T)x - (I-T)y\|^2 \tag{4.8}$$
for all $x, y \in C$. It is known that if $T$ is a strict $\kappa$-pseudocontraction, then $A = I - T$ is $((1-\kappa)/2)$-ism (see [38]). To solve the problem of approximating fixed points of such operators, an iterative scheme is provided in the following result.

Corollary 4.5. Let $T : C \to C$ be a strict $\kappa$-pseudocontraction with a nonempty fixed point set $\operatorname{Fix}(T)$. Suppose that
(i) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^\infty\alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < 1 - \kappa$.
Then, for any given $u \in C$, the sequence $\{x_n\}$ generated by the algorithm
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)\left[(1 - \lambda_n)x_n + \lambda_n Tx_n\right] \tag{4.9}$$
converges strongly to the point $P_{\operatorname{Fix}(T)}u$.

Proof. Set $A = I - T$, so that $A$ is $((1-\kappa)/2)$-ism. Moreover, we rewrite the above iteration as
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)(x_n - \lambda_n Ax_n). \tag{4.10}$$
Then, taking $B$ to be the operator constantly zero, Corollary 4.3 yields the desired result.
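The scheme (4.9) can be tried on a toy instance. The data below are hypothetical: on the real line, $T(x) = -x/2$ is a strict $0$-pseudocontraction (indeed nonexpansive) with $\operatorname{Fix}(T) = \{0\}$, so the iterates should approach $P_{\operatorname{Fix}(T)}u = 0$ for any anchor $u$.

```python
# Sketch of iteration (4.9) for a strict pseudocontraction.  Hypothetical
# example: T(x) = -x/2 with Fix(T) = {0}; the Halpern-type iterates converge
# (slowly, at a rate set by alpha_n) to P_Fix(T) u = 0.

def T(x):
    return -0.5 * x

u, x = 3.0, 3.0
lam = 0.5                              # 0 < lam < 1 - kappa = 1
for n in range(20000):
    alpha = 1.0 / (n + 2)              # alpha_n -> 0, sum alpha_n = infinity
    x = alpha * u + (1 - alpha) * ((1 - lam) * x + lam * T(x))
print(x)
```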

4.3. Convexly Constrained Minimization Problem

Consider the optimization problem
$$\min_{x\in C} f(x), \tag{4.11}$$
where $f$ is a convex and differentiable function. Assume that (4.11) is consistent, and let $\Omega$ denote its set of solutions.

The gradient projection algorithm (GPA) generates a sequence $\{x_n\}$ via the iterative procedure
$$x_{n+1} = P_C(x_n - r\nabla f(x_n)), \tag{4.12}$$
where $\nabla f$ stands for the gradient of $f$. If, in addition, $\nabla f$ is $(1/\kappa)$-Lipschitz continuous, that is, for any $x, y$,
$$\|\nabla f(x) - \nabla f(y)\| \le \frac{1}{\kappa}\|x - y\|, \tag{4.13}$$
then the GPA with $0 < r < 2\kappa$ converges weakly to a minimizer of $f$ in $C$ (see, e.g., [39, Corollary 4.1]).
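A minimal numerical sketch of the GPA (4.12), with hypothetical data: $f(x) = (3x - 6)^2/2$ over $C = [0, 10]$, so $\nabla f(x) = 9x - 18$ is $9$-Lipschitz (i.e., $\kappa = 1/9$), stepsizes $0 < r < 2/9$ are admissible, and the constrained minimizer is $x = 2$.

```python
# The GPA (4.12) on a hypothetical 1-D instance: f(x) = (3x - 6)^2 / 2 over
# C = [0, 10].  grad f is 9-Lipschitz, so kappa = 1/9 and 0 < r < 2/9 works;
# the minimizer is x* = 2.

def grad_f(x):
    return 9.0 * x - 18.0

def P_C(x):
    return min(10.0, max(0.0, x))

x, r = 10.0, 0.2
for _ in range(200):
    x = P_C(x - r * grad_f(x))
print(x)
```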

The minimization problem (4.11) is equivalent to the VIP [40, Lemma 5.13]
$$\langle \nabla f(x^*), z - x^*\rangle \ge 0, \quad z \in C. \tag{4.14}$$
It is also known [41, Corollary 10] that if $\nabla f$ is $(1/\kappa)$-Lipschitz continuous, then it is also $\kappa$-ism. Thus, we can apply the previous results to (4.11) by taking $A = \nabla f$.

Corollary 4.6. Assume that $f$ is convex and differentiable with $(1/\kappa)$-Lipschitz continuous gradient $\nabla f$. Assume also that
(i) $\liminf_{n\to\infty}\alpha_n > 0$;
(ii) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < 2\kappa$.
Then the sequence $\{x_n\}$ generated by the algorithm
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - \lambda_n\nabla f(x_n)) \tag{4.15}$$
converges weakly to a point $x \in \Omega$.

Corollary 4.7. Assume that $f$ is convex and differentiable with $(1/\kappa)$-Lipschitz continuous gradient $\nabla f$. Assume also that
(i) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^\infty\alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < 2\kappa$.
Then, for any given $u \in C$, the sequence $\{x_n\}$ generated by the algorithm
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)P_C(x_n - \lambda_n\nabla f(x_n)) \tag{4.16}$$
converges strongly to $P_\Omega u$ whenever such a point exists.

4.4. Split Feasibility Problem

The split feasibility problem (SFP) [42] consists of finding a point $\hat{x}$ satisfying the property
$$\hat{x} \in C, \quad A\hat{x} \in Q, \tag{4.17}$$
where $C$ and $Q$ are, respectively, closed convex subsets of Hilbert spaces $H$ and $K$, and $A : H \to K$ is a bounded linear operator. The SFP (4.17) has attracted much attention due to its applications in signal processing [42]. Various algorithms have, therefore, been derived to solve the SFP (4.17) (see [39, 43, 44] and references therein). In particular, Byrne [43] introduced the so-called CQ algorithm
$$x_{n+1} = P_C(x_n - \lambda A^*(I - P_Q)Ax_n), \tag{4.18}$$
where $0 < \lambda < 2\nu$ with $\nu = 1/\|A\|^2$.
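The CQ algorithm (4.18) is easy to sketch in $\mathbb{R}^2$ with hypothetical data (not from the paper): the diagonal operator $A(x) = (x_1, 2x_2)$, $C = [0,1]^2$, and $Q = [1,2]\times[0,1]$, so that $\|A\|^2 = 4$, $\nu = 1/4$, and stepsizes $0 < \lambda < 1/2$ are admissible.

```python
# Sketch of the CQ algorithm (4.18).  Hypothetical data: A(x) = (x1, 2*x2)
# (diagonal, so A* acts the same way), C = [0,1]^2, Q = [1,2] x [0,1];
# ||A||^2 = 4 gives nu = 1/4, hence 0 < lam < 1/2.

def clip(t, lo, hi):
    return min(hi, max(lo, t))

def P_C(x):
    return (clip(x[0], 0, 1), clip(x[1], 0, 1))

def P_Q(y):
    return (clip(y[0], 1, 2), clip(y[1], 0, 1))

def A(x):
    return (x[0], 2 * x[1])

x, lam = (0.0, 1.0), 0.4
for _ in range(500):
    y = A(x)
    py = P_Q(y)
    g = A((y[0] - py[0], y[1] - py[1]))      # A*(I - P_Q)A x
    x = P_C((x[0] - lam * g[0], x[1] - lam * g[1]))
print(x)
```

The limit is a point of $C$ whose image under $A$ lies in $Q$, i.e., a solution of the SFP.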

To solve the SFP (4.17), it is very useful to investigate the following convexly constrained minimization problem (CCMP):
$$\min_{x\in C} f(x), \tag{4.19}$$
where
$$f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2. \tag{4.20}$$

Generally speaking, the SFP (4.17) and the CCMP (4.19) are not fully equivalent: every solution to the SFP (4.17) is evidently a minimizer of the CCMP (4.19); however, a solution to the CCMP (4.19) does not necessarily satisfy the SFP (4.17). Further, if the solution set of the SFP (4.17) is nonempty, then it follows from [45, Lemma 4.2] that it coincides with the set
$$C \cap (\nabla f)^{-1}(0), \tag{4.21}$$
where $f$ is defined by (4.20). As shown by Xu [46], the $CQ$ algorithm need not converge strongly in infinite-dimensional spaces. We now consider an iteration process with strong convergence for solving the SFP (4.17).

Corollary 4.8. Assume that the SFP (4.17) is consistent, and let $S$ be its nonempty solution set. Assume also that:
(i) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 2\nu$.
Then for any given $u \in C$, the sequence $\{x_n\}$ generated by the algorithm
$$x_{n+1} = \alpha_n u + (1 - \alpha_n) P_C\bigl(x_n - \lambda_n A^{*}(I - P_Q)Ax_n\bigr) \tag{4.22}$$
converges strongly to the solution $P_S u$ of the SFP (4.17).
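The strongly convergent iteration (4.22) differs from the $CQ$ algorithm only by the Halpern-type anchor term $\alpha_n u$. A sketch on illustrative data of our own choosing (the same toy $A$, $C$, $Q$ one might use to test the $CQ$ algorithm), with $\alpha_n = 1/(n+2)$:

```python
import numpy as np

# Iteration (4.22): x_{n+1} = a_n*u + (1-a_n)*P_C(x_n - lam*A^T(I-P_Q)(A x_n)),
# with a_n = 1/(n+2) satisfying condition (i); the strong limit is P_S(u).

def halpern_cq(A, project_C, project_Q, u, x0, lam, n_iter=5000):
    x = x0
    for n in range(n_iter):
        a_n = 1.0 / (n + 2)
        Ax = A @ x
        y = project_C(x - lam * A.T @ (Ax - project_Q(Ax)))
        x = a_n * u + (1.0 - a_n) * y
    return x

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
project_C = lambda x: np.maximum(x, 0.0)   # C = nonnegative orthant
project_Q = lambda y: np.minimum(y, 1.0)   # Q = {y : y <= 1 componentwise}

nu = 1.0 / np.linalg.norm(A, 2) ** 2
u = np.array([2.0, 2.0])                   # anchor u in C
x = halpern_cq(A, project_C, project_Q, u, np.array([3.0, 3.0]), lam=1.9 * nu)
# Here S = [0, 0.5] x [0, 1], so the strong limit P_S(u) is [0.5, 1.0]
```

Unlike the plain $CQ$ iteration, which here may settle anywhere in $S$, the anchored scheme selects the specific solution nearest to $u$.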

Proof. Let $f$ be defined by (4.20). According to [39, page 113], we have
$$\nabla f = A^{*}(I - P_Q)A, \tag{4.23}$$
which is $(1/\nu)$-Lipschitz continuous with $\nu = 1/\|A\|^2$. Thus Corollary 4.7 applies, and the result follows immediately.

Remark 4.9. Corollary 4.8 improves and recovers the result of [44, Corollary 3.7], which uses, in addition to condition (i), the condition $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$ and the special case of condition (ii) in which $\lambda_n \equiv \lambda$ for all $n$.

4.5. Convexly Constrained Linear Inverse Problem

The constrained linear system
$$Ax = b, \quad x \in C, \tag{4.24}$$
where $A : \mathcal{H} \to \mathcal{K}$ is a bounded linear operator and $b \in \mathcal{K}$, is called the convexly constrained linear inverse problem (cf. [47]). A classical way to deal with this problem is the well-known projected Landweber method (see [40]):
$$x_{n+1} = P_C\bigl(x_n - \lambda A^{*}(Ax_n - b)\bigr), \tag{4.25}$$
where $0 < \lambda < 2\nu$ with $\nu = 1/\|A\|^2$. A counterexample in [8, Remark 5.12] shows that, in general, the projected Landweber iteration converges only weakly in infinite-dimensional spaces. To get strong convergence, Eicke introduced the so-called damped projection method (see [47]). In what follows, we present another strongly convergent algorithm for solving (4.24).
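The projected Landweber iteration (4.25) is easy to sketch numerically. The small consistent system below, with $C$ the nonnegative orthant, is our own illustrative choice:

```python
import numpy as np

# Projected Landweber iteration (4.25):
#     x_{n+1} = P_C(x_n - lam * A^T (A x_n - b)),  0 < lam < 2*nu = 2/||A||^2.

def projected_landweber(A, b, project_C, x0, lam, n_iter=2000):
    x = x0
    for _ in range(n_iter):
        x = project_C(x - lam * A.T @ (A @ x - b))
    return x

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
b = np.array([2.0, 1.0])                  # consistent: A @ [1, 1] = b, and [1, 1] in C
project_C = lambda x: np.maximum(x, 0.0)  # C = nonnegative orthant

lam = 1.0 / np.linalg.norm(A, 2) ** 2     # safely inside (0, 2*nu)
x = projected_landweber(A, b, project_C, np.zeros(2), lam)
# x approximates the constrained solution [1, 1]
```

In finite dimensions the iteration converges to a solution of (4.24); the point of the corollary that follows is to recover a strongly convergent variant in the infinite-dimensional setting.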

Corollary 4.10. Assume that (4.24) is consistent. Assume also that:
(i) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 2\nu$.
Then, for any given $u$, the sequence $\{x_n\}$ generated by the algorithm
$$x_{n+1} = \alpha_n u + (1 - \alpha_n) P_C\bigl(x_n - \lambda_n A^{*}(Ax_n - b)\bigr) \tag{4.26}$$
converges strongly to a solution of problem (4.24), whenever such a solution exists.

Proof. This is an immediate consequence of Corollary 4.8 by taking $Q = \{b\}$.

Acknowledgments

The work of G. López, V. Martín-Márquez, and H.-K. Xu was supported by Grant MTM2009-10696-C02-01. This work was carried out while F. Wang was visiting Universidad de Sevilla under the support of this grant. He was also supported by the Basic and Frontier Project of Henan 122300410268 and the Peiyu Project of Luoyang Normal University 2011-PYJJ-002. The work of G. López and V. Martín-Márquez was also supported by the Plan Andaluz de Investigación de la Junta de Andalucía FQM-127 and Grant P08-FQM-03543. The work of H.-K. Xu was also supported in part by NSC 100-2115-M-110-003-MY2 (Taiwan). He extends his appreciation to the Deanship of Scientific Research at King Saud University for funding the work through a visiting professorship program (VPP).

References

  1. D. H. Peaceman and H. H. Rachford, “The numerical solution of parabolic and elliptic differential equations,” Journal of the Society for Industrial and Applied Mathematics, vol. 3, pp. 28–41, 1955.
  2. J. Douglas and H. H. Rachford, “On the numerical solution of heat conduction problems in two and three space variables,” Transactions of the American Mathematical Society, vol. 82, pp. 421–439, 1956.
  3. R. B. Kellogg, “Nonlinear alternating direction algorithm,” Mathematics of Computation, vol. 23, pp. 23–28, 1969.
  4. P. L. Lions and B. Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, no. 6, pp. 964–979, 1979.
  5. G. B. Passty, “Ergodic convergence to a zero of the sum of monotone operators in Hilbert space,” Journal of Mathematical Analysis and Applications, vol. 72, no. 2, pp. 383–390, 1979.
  6. P. Tseng, “Applications of a splitting algorithm to decomposition in convex programming and variational inequalities,” SIAM Journal on Control and Optimization, vol. 29, no. 1, pp. 119–138, 1991.
  7. G. H.-G. Chen and R. T. Rockafellar, “Convergence rates in forward-backward splitting,” SIAM Journal on Optimization, vol. 7, no. 2, pp. 421–444, 1997.
  8. P. L. Combettes and V. R. Wajs, “Signal recovery by proximal forward-backward splitting,” Multiscale Modeling & Simulation, vol. 4, no. 4, pp. 1168–1200, 2005.
  9. S. Sra, S. Nowozin, and S. J. Wright, Eds., Optimization for Machine Learning, MIT Press, 2011.
  10. K. Aoyama, H. Iiduka, and W. Takahashi, “Weak convergence of an iterative sequence for accretive operators in Banach spaces,” Fixed Point Theory and Applications, Article ID 35390, 13 pages, 2006.
  11. H. Zegeye and N. Shahzad, “Strong convergence theorems for a common zero of a finite family of m-accretive mappings,” Nonlinear Analysis, vol. 66, no. 5, pp. 1161–1169, 2007.
  12. H. Zegeye and N. Shahzad, “Strong convergence theorems for a common zero point of a finite family of α-inverse strongly accretive mappings,” Journal of Nonlinear and Convex Analysis, vol. 9, no. 1, pp. 95–104, 2008.
  13. I. Cioranescu, Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, Kluwer Academic Publishers, 1990.
  14. H. K. Xu, “Inequalities in Banach spaces with applications,” Nonlinear Analysis, vol. 16, no. 12, pp. 1127–1138, 1991.
  15. F. E. Browder, “Nonexpansive nonlinear operators in a Banach space,” Proceedings of the National Academy of Sciences of the United States of America, vol. 54, pp. 1041–1044, 1965.
  16. T. Kato, “Nonlinear semigroups and evolution equations,” Journal of the Mathematical Society of Japan, vol. 19, pp. 508–520, 1967.
  17. R. E. Bruck, “Nonexpansive projections on subsets of Banach spaces,” Pacific Journal of Mathematics, vol. 47, pp. 341–355, 1973.
  18. P. E. Maingé, “Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces,” Journal of Mathematical Analysis and Applications, vol. 325, no. 1, pp. 469–479, 2007.
  19. H. K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
  20. W. R. Mann, “Mean value methods in iteration,” Proceedings of the American Mathematical Society, vol. 4, pp. 506–510, 1953.
  21. S. Reich, “Weak convergence theorems for nonexpansive mappings in Banach spaces,” Journal of Mathematical Analysis and Applications, vol. 67, no. 2, pp. 274–276, 1979.
  22. G. Marino and H. K. Xu, “Convergence of generalized proximal point algorithms,” Communications on Pure and Applied Analysis, vol. 3, no. 4, pp. 791–808, 2004.
  23. R. T. Rockafellar, “Monotone operators and the proximal point algorithm,” SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.
  24. S. Kamimura and W. Takahashi, “Approximating solutions of maximal monotone operators in Hilbert spaces,” Journal of Approximation Theory, vol. 106, no. 2, pp. 226–240, 2000.
  25. K. K. Tan and H. K. Xu, “Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process,” Journal of Mathematical Analysis and Applications, vol. 178, no. 2, pp. 301–308, 1993.
  26. R. E. Bruck, “A simple proof of the mean ergodic theorem for nonlinear contractions in Banach spaces,” Israel Journal of Mathematics, vol. 32, no. 2-3, pp. 107–116, 1979.
  27. B. R. Halpern, “Fixed points of nonexpanding maps,” Bulletin of the American Mathematical Society, vol. 73, pp. 957–961, 1967.
  28. P. L. Lions, “Approximation de points fixes de contractions,” Comptes Rendus de l'Académie des Sciences, vol. 284, no. 21, pp. A1357–A1359, 1977.
  29. R. Wittmann, “Approximation of fixed points of nonexpansive mappings,” Archiv der Mathematik, vol. 58, no. 5, pp. 486–491, 1992.
  30. H. K. Xu, “Viscosity approximation methods for nonexpansive mappings,” Journal of Mathematical Analysis and Applications, vol. 298, no. 1, pp. 279–291, 2004.
  31. G. López, V. Martín, and H. K. Xu, “Halpern's iteration for nonexpansive mappings,” in Nonlinear Analysis and Optimization I: Nonlinear Analysis, vol. 513, pp. 211–230, 2010.
  32. S. Reich, “Strong convergence theorems for resolvents of accretive operators in Banach spaces,” Journal of Mathematical Analysis and Applications, vol. 75, no. 1, pp. 287–292, 1980.
  33. P. E. Maingé, “Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization,” Set-Valued Analysis, vol. 16, no. 7-8, pp. 899–912, 2008.
  34. P. L. Combettes, “Solving monotone inclusions via compositions of nonexpansive averaged operators,” Optimization, vol. 53, no. 5-6, pp. 475–504, 2004.
  35. R. T. Rockafellar, “On the maximality of sums of nonlinear monotone operators,” Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
  36. V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, Noordhoff, 1976.
  37. H. Iiduka and W. Takahashi, “Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings,” Nonlinear Analysis, vol. 61, no. 3, pp. 341–350, 2005.
  38. F. E. Browder and W. V. Petryshyn, “Construction of fixed points of nonlinear mappings in Hilbert space,” Journal of Mathematical Analysis and Applications, vol. 20, pp. 197–228, 1967.
  39. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
  40. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 1996.
  41. J. B. Baillon and G. Haddad, “Quelques propriétés des opérateurs angle-bornés et cycliquement monotones,” Israel Journal of Mathematics, vol. 26, no. 2, pp. 137–150, 1977.
  42. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
  43. C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
  44. H. K. Xu, “A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem,” Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
  45. F. Wang and H. K. Xu, “Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem,” Journal of Inequalities and Applications, vol. 2010, Article ID 102085, 2010.
  46. H. K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 2010.
  47. B. Eicke, “Iteration methods for convexly constrained ill-posed problems in Hilbert space,” Numerical Functional Analysis and Optimization, vol. 13, no. 5-6, pp. 413–429, 1992.