
Abstract and Applied Analysis

Volume 2012 (2012), Article ID 109236, 25 pages

http://dx.doi.org/10.1155/2012/109236

## Forward-Backward Splitting Methods for Accretive Operators in Banach Spaces

Genaro López,^{1} Victoria Martín-Márquez,^{1} Fenghui Wang,^{2} and Hong-Kun Xu^{3}

^{1}Departamento de Análisis Matemático, Facultad de Matemáticas, Universidad de Sevilla, Apartado 1160, 41080 Sevilla, Spain
^{2}Department of Mathematics, Luoyang Normal University, Luoyang 471022, China
^{3}Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan

Received 31 March 2012; Accepted 29 May 2012

Academic Editor: Lishan Liu

Copyright © 2012 Genaro López et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Splitting methods have recently received much attention due to the fact that many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation and this operator is decomposed as the sum of two (possibly simpler) nonlinear operators. Most of the investigation on splitting methods is however carried out in the framework of Hilbert spaces. In this paper, we consider these methods in the setting of Banach spaces. We shall introduce two iterative forward-backward splitting methods with relaxations and errors to find zeros of the sum of two accretive operators in Banach spaces. We shall prove the weak and strong convergence of these methods under mild conditions. We also discuss applications of these methods to variational inequalities, the split feasibility problem, and a constrained convex minimization problem.

#### 1. Introduction

Splitting methods have recently received much attention due to the fact that many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation and this operator is decomposed as the sum of two (possibly simpler) nonlinear operators. Splitting methods for linear equations were introduced by Peaceman and Rachford [1] and Douglas and Rachford [2]. Extensions to nonlinear equations in Hilbert spaces were carried out by Kellogg [3] and Lions and Mercier [4] (see also [5–7]). The central problem is to iteratively find a zero of the sum of two monotone operators $A$ and $B$ in a Hilbert space $H$, namely, a solution to the inclusion problem

$$0 \in (A + B)x. \tag{1.1}$$

Many problems can be formulated as a problem of form (1.1). For instance, a stationary solution to the initial value problem of the evolution equation

$$0 \in \frac{\partial u}{\partial t} + Fu, \quad u(0) = u_0, \tag{1.2}$$

can be recast as (1.1) when the governing maximal monotone $F$ is of the form $F = A + B$ [4]. In optimization, one often needs [8] to solve a minimization problem of the form

$$\min_{x \in H} f(x) + g(Tx), \tag{1.3}$$

where $f$ and $g$ are proper lower semicontinuous convex functions from $H$ to the extended real line $(-\infty, +\infty]$, and $T$ is a bounded linear operator on $H$. As a matter of fact, (1.3) is equivalent to (1.1) (assuming that $f$ and $g \circ T$ have a common point of continuity) with $A = \partial f$ and $B = T^* \circ \partial g \circ T$. Here $T^*$ is the adjoint of $T$ and $\partial \varphi$ is the subdifferential operator of a convex function $\varphi$ in the sense of convex analysis. It is known [8, 9] that the minimization problem (1.3) is widely used in image recovery, signal processing, and machine learning.

A splitting method for (1.1) means an iterative method for which each iteration involves only the individual operators $A$ and $B$, but not the sum $A + B$. To solve (1.1), Lions and Mercier [4] introduced the nonlinear Peaceman-Rachford and Douglas-Rachford splitting iterative algorithms, which generate a sequence $(v_n)$ by the recursion

$$v_{n+1} = (2J_\lambda^A - I)(2J_\lambda^B - I)v_n \tag{1.4}$$

and, respectively, a sequence $(v_n)$ by the recursion

$$v_{n+1} = J_\lambda^A(2J_\lambda^B - I)v_n + (I - J_\lambda^B)v_n. \tag{1.5}$$

Here we use $J_\lambda^T$ to denote the resolvent of a monotone operator $T$; that is, $J_\lambda^T = (I + \lambda T)^{-1}$.

The nonlinear Peaceman-Rachford algorithm (1.4) fails, in general, to converge (even in the weak topology in the infinite-dimensional setting). This is due to the fact that the generating operator for the algorithm (1.4) is merely nonexpansive. However, the mean averages of $(v_n)$ can be weakly convergent [5]. The nonlinear Douglas-Rachford algorithm (1.5) always converges in the weak topology to a point $v$, and $x = J_\lambda^B v$ is a solution to (1.1), since the generating operator for this algorithm is firmly nonexpansive, namely, the operator is of the form $\frac{1}{2}(I + T)$, where $T$ is nonexpansive.

There is, however, little work in the existing literature on splitting methods for nonlinear operator equations in the setting of Banach spaces (though there was some work on finding a common zero of a finite family of accretive operators [10–12]).

The main difficulties are due to the fact that the inner product structure of a Hilbert space fails to be true in a Banach space. We shall in this paper use the technique of duality maps to carry out certain initial investigations on splitting methods for accretive operators in Banach spaces. Namely, we will study splitting iterative methods for solving the inclusion problem (1.1), where $A$ and $B$ are accretive operators in a Banach space $X$.

We will consider the case where $A$ is a single-valued accretive operator and $B$ is a possibly multivalued $m$-accretive operator in a Banach space $X$, and assume that the inclusion (1.1) has a solution. We introduce the following two iterative methods, which we call the Mann-type and, respectively, the Halpern-type forward-backward methods with errors, and which generate a sequence $(x_n)$ by the recursions

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n J_{r_n}\bigl(x_n - r_n(Ax_n + a_n)\bigr) + b_n, \tag{1.6}$$

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)J_{r_n}\bigl(x_n - r_n(Ax_n + a_n)\bigr) + b_n, \tag{1.7}$$

where $J_r$ is the resolvent of the operator $B$ of order $r$ (i.e., $J_r = (I + rB)^{-1}$), and $(r_n)$ is a sequence in $(0, \infty)$. We will prove weak convergence of (1.6) and strong convergence of (1.7) to a solution to (1.1) in some class of Banach spaces which will be made clear in Section 3.
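To make the forward-backward step concrete, the following sketch runs the unrelaxed, error-free special case of the Mann-type scheme (that is, constant relaxation and zero error terms) in the Hilbert space $\mathbb{R}^n$, taking $A$ to be the gradient of a least-squares term and $B$ the subdifferential of a scaled $\ell_1$ norm, whose resolvent is soft thresholding. The problem data and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Resolvent of B = subdifferential of t*||.||_1, i.e. (I + t*d||.||_1)^{-1}
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(M, b, lam, n_iter=500):
    """Find a zero of A + B with A = grad(0.5*||Mx - b||^2) (single-valued,
    1/L-ism for L = ||M^T M||) and B = d(lam*||.||_1), via the iteration
    x_{k+1} = J_r(x_k - r*A x_k): a forward (explicit) gradient step
    followed by a backward (implicit) resolvent step."""
    L = np.linalg.norm(M.T @ M, 2)
    r = 1.0 / L                            # step size within the admissible range
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ x - b)           # forward step: evaluate A at x
        x = soft_threshold(x - r * grad, r * lam)   # backward step: apply J_r
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))
b = M @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
x = forward_backward(M, b, lam=0.1)
```

Since this $A$ is $1/L$-ism for $L = \|M^\top M\|$, any constant step $r \in (0, 2/L)$ keeps the composed map $J_r(I - rA)$ nonexpansive in the Hilbert-space setting, and the iterates converge to a zero of $A + B$.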

The paper is organized as follows. In the next section we introduce the class of Banach spaces in which we shall study our splitting methods for solving (1.1). We also introduce the concept of accretive and -accretive operators in a Banach space. In Section 3, we discuss the splitting algorithms (1.6) and (1.7) and prove their weak and strong convergence, respectively. In Section 4, we discuss applications of both algorithms (1.6) and (1.7) to variational inequalities, fixed points of pseudocontractions, convexly constrained minimization problems, the split feasibility problem, and linear inverse problems.

#### 2. Preliminaries

Throughout the paper, $X$ is a real Banach space with norm $\|\cdot\|$, distance $d$, and dual space $X^*$. The symbol $\langle x, x^*\rangle$ denotes the pairing between $X$ and $X^*$, that is, $\langle x, x^*\rangle = x^*(x)$, the value of $x^*$ at $x$. $C$ will denote a nonempty closed convex subset of $X$, unless otherwise stated, and $B_r$ the closed ball with center zero and radius $r$. The expressions $x_n \to x$ and $x_n \rightharpoonup x$ denote the strong and weak convergence of the sequence $(x_n)$, respectively, and $\omega_w(x_n)$ stands for the set of weak limit points of the sequence $(x_n)$.

The *modulus of convexity* of $X$ is the function $\delta_X : [0, 2] \to [0, 1]$ defined by

$$\delta_X(\varepsilon) = \inf\Bigl\{1 - \frac{\|x + y\|}{2} : \|x\| \le 1,\ \|y\| \le 1,\ \|x - y\| \ge \varepsilon\Bigr\}. \tag{2.1}$$

Recall that $X$ is said to be *uniformly convex* if $\delta_X(\varepsilon) > 0$ for any $\varepsilon \in (0, 2]$. Let $p > 1$. We say that $X$ is $p$-*uniformly convex* if there exists a constant $c_p > 0$ so that $\delta_X(\varepsilon) \ge c_p \varepsilon^p$ for any $\varepsilon \in (0, 2]$.

The *modulus of smoothness* of $X$ is the function $\rho_X : \mathbb{R}^+ \to \mathbb{R}^+$ defined by

$$\rho_X(\tau) = \sup\Bigl\{\frac{\|x + \tau y\| + \|x - \tau y\|}{2} - 1 : \|x\| \le 1,\ \|y\| \le 1\Bigr\}. \tag{2.2}$$

Recall that $X$ is called *uniformly smooth* if $\lim_{\tau \to 0} \rho_X(\tau)/\tau = 0$. Let $1 < q \le 2$. We say that $X$ is $q$-*uniformly smooth* if there is a $c_q > 0$ so that $\rho_X(\tau) \le c_q \tau^q$ for $\tau > 0$. It is known that $X$ is $p$-uniformly convex if and only if $X^*$ is $q$-uniformly smooth, where $1/p + 1/q = 1$. For instance, $L^p$ spaces are $2$-uniformly convex and $p$-uniformly smooth if $1 < p \le 2$, whereas they are $p$-uniformly convex and $2$-uniformly smooth if $p \ge 2$.

The norm of $X$ is said to be *Fréchet differentiable* if, for each $x \in X$ with $\|x\| = 1$, the limit

$$\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t} \tag{2.3}$$

exists and is attained uniformly for all $y$ such that $\|y\| = 1$. It can be proved that $X$ is uniformly smooth if and only if the limit (2.3) exists and is attained uniformly for all $x, y$ such that $\|x\| = \|y\| = 1$. So it is trivial that a uniformly smooth Banach space has a Fréchet differentiable norm.

The subdifferential of a proper convex function $f : X \to (-\infty, +\infty]$ is the set-valued operator $\partial f : X \to 2^{X^*}$ defined as

$$\partial f(x) = \{x^* \in X^* : f(y) \ge f(x) + \langle y - x, x^*\rangle,\ y \in X\}. \tag{2.4}$$

If $f$ is proper, convex, and lower semicontinuous, the subdifferential $\partial f(x)$ is nonempty for any $x \in \operatorname{int} D(f)$, the interior of the domain of $f$. The *generalized duality mapping* $J_q : X \to 2^{X^*}$ is defined by

$$J_q(x) = \{x^* \in X^* : \langle x, x^*\rangle = \|x\|^q,\ \|x^*\| = \|x\|^{q-1}\}. \tag{2.5}$$

If $q = 2$, the corresponding duality mapping is called the normalized duality mapping and denoted by $J$. It can be proved that, for any $x \in X$,

$$J_q(x) = \partial\Bigl(\frac{1}{q}\|x\|^q\Bigr). \tag{2.6}$$

Thus we have the following subdifferential inequality, for any $x, y \in X$ and $j_q(x + y) \in J_q(x + y)$:

$$\|x + y\|^q \le \|x\|^q + q\langle y, j_q(x + y)\rangle. \tag{2.7}$$

In particular, we have, for $x, y \in X$ and $j(x + y) \in J(x + y)$,

$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, j(x + y)\rangle. \tag{2.8}$$
Some properties of the duality mappings are collected as follows.

Proposition 2.1 (see Cioranescu [13]). *Let $q > 1$.*(i)*The Banach space $X$ is smooth if and only if the duality mapping $J_q$ is single valued.*(ii)*The Banach space $X$ is uniformly smooth if and only if the duality mapping $J_q$ is single-valued and norm-to-norm uniformly continuous on bounded sets of $X$.*

Among the estimates satisfied by $p$-uniformly convex and $q$-uniformly smooth spaces, the following ones will come in handy.

Lemma 2.2 (see Xu [14]). *Let $q > 1$ and $r > 0$ be given.*(i)*If $X$ is uniformly convex, then there exists a continuous, strictly increasing and convex function $\phi : \mathbb{R}^+ \to \mathbb{R}^+$ with $\phi(0) = 0$ such that

$$\|\lambda x + (1 - \lambda)y\|^2 \le \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\phi(\|x - y\|) \tag{2.9}$$

for all $x, y \in B_r$ and $\lambda \in [0, 1]$, where $B_r = \{x \in X : \|x\| \le r\}$.*(ii)*If $X$ is $q$-uniformly smooth, then there exists a constant $\kappa_q > 0$ such that

$$\|x + y\|^q \le \|x\|^q + q\langle y, J_q(x)\rangle + \kappa_q\|y\|^q, \quad x, y \in X. \tag{2.10}$$

**The best constant $\kappa_q$ satisfying (2.10) will be called the $q$-uniform smoothness coefficient of $X$. For instance [14], for $p \ge 2$, $L^p$ is 2-uniformly smooth with $\kappa_2 = p - 1$, and for $1 < p \le 2$, $L^p$ is $p$-uniformly smooth with $\kappa_p = (1 + t_p^{p-1})(1 + t_p)^{1-p}$, where $t_p$ is the unique solution to the equation

$$(p - 2)t^{p-1} + (p - 1)t^{p-2} - 1 = 0, \quad 0 < t < 1. \tag{2.11}$$*

In a Banach space $X$ with the Fréchet differentiable norm, there exists a function $b : [0, \infty) \to [0, \infty)$ such that $\lim_{t \to 0^+} b(t)/t = 0$ and, for all $x, h \in X$,

$$\frac{1}{2}\|x\|^2 + \langle h, J(x)\rangle \le \frac{1}{2}\|x + h\|^2 \le \frac{1}{2}\|x\|^2 + \langle h, J(x)\rangle + b(\|h\|). \tag{2.12}$$

Recall that $T : C \to C$ is a nonexpansive mapping if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$. From now on, $\operatorname{Fix}(T)$ denotes the fixed point set of $T$. The following lemma claims that the demiclosedness principle for nonexpansive mappings holds in uniformly convex Banach spaces.

Lemma 2.3 (see Browder [15]). *Let $C$ be a nonempty closed convex subset of a uniformly convex space $X$ and $T : C \to C$ a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$. If $(x_n)$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - T)x_n \to y$, then $(I - T)x = y$. In particular, if $(I - T)x_n \to 0$, then $x \in \operatorname{Fix}(T)$.*

A set-valued operator $B : X \to 2^X$, with domain $D(B)$ and range $R(B)$, is said to be *accretive* if, for all $t > 0$ and every $x, y \in D(B)$,

$$\|x - y\| \le \|x - y + t(u - v)\|, \quad u \in Bx,\ v \in By. \tag{2.13}$$

It follows from Lemma 1.1 of Kato [16] that $B$ is accretive if and only if, for each $x, y \in D(B)$, there exists $j(x - y) \in J(x - y)$ such that

$$\langle u - v, j(x - y)\rangle \ge 0, \quad u \in Bx,\ v \in By. \tag{2.14}$$

An accretive operator $B$ is said to be *$m$-accretive* if the range $R(I + \lambda B) = X$ for some $\lambda > 0$. It can be shown that an accretive operator $B$ is $m$-accretive if and only if $R(I + \lambda B) = X$ for all $\lambda > 0$.

Given $\alpha > 0$ and $q > 1$, we say that an accretive operator $A$ is $\alpha$-*inverse strongly accretive* ($\alpha$-isa) of order $q$ if, for each $x, y \in D(A)$, there exists $j_q(x - y) \in J_q(x - y)$ such that

$$\langle Ax - Ay, j_q(x - y)\rangle \ge \alpha\|Ax - Ay\|^q. \tag{2.15}$$

When $q = 2$, we simply say $\alpha$-isa, instead of $\alpha$-isa of order 2; that is, $A$ is $\alpha$-isa if, for each $x, y \in D(A)$, there exists $j(x - y) \in J(x - y)$ such that

$$\langle Ax - Ay, j(x - y)\rangle \ge \alpha\|Ax - Ay\|^2. \tag{2.16}$$

Given a subset $D$ of $C$ and a mapping $Q : C \to D$, recall that $Q$ is a retraction of $C$ onto $D$ if $Qx = x$ for all $x \in D$. We say that $Q$ is sunny if, for each $x \in C$ and $t \ge 0$, we have $Q(tx + (1 - t)Qx) = Qx$ whenever $tx + (1 - t)Qx \in C$.

The first result regarding the existence of sunny nonexpansive retractions onto the fixed point set of a nonexpansive mapping is due to Bruck.

Theorem 2.4 (see Bruck [17]). *If $X$ is strictly convex and uniformly smooth and if $T : C \to C$ is a nonexpansive mapping having a nonempty fixed point set $\operatorname{Fix}(T)$, then there exists a sunny nonexpansive retraction of $C$ onto $\operatorname{Fix}(T)$.*

The following technical lemma regarding convergence of real sequences will be used when we discuss convergence of algorithms (1.6) and (1.7) in the next section.

Lemma 2.5 (see [18, 19]). *Let $(a_n)$, $(t_n)$, $(b_n)$, and $(c_n)$ be sequences of nonnegative real numbers such that

$$a_{n+1} \le (1 - t_n)a_n + t_n b_n + c_n, \quad n \ge 0, \tag{2.17}$$

with $(t_n) \subset [0, 1]$. Assume $\sum_{n=0}^{\infty} c_n < \infty$. Then the following results hold:*(i)* If $b_n \le M$ for some $M \ge 0$, then $(a_n)$ is a bounded sequence.*(ii)* If $\sum_{n=0}^{\infty} t_n = \infty$ and $\limsup_{n \to \infty} b_n \le 0$, then $a_n \to 0$ as $n \to \infty$.*

#### 3. Splitting Methods for Accretive Operators

In this section we assume that $X$ is a real Banach space and $C$ is a nonempty closed subset of $X$. We also assume that $A : C \to X$ is a single-valued and $\alpha$-isa operator of order $q$ for some $\alpha > 0$ and $B$ is an $m$-accretive operator in $X$, with $D(B) \subset C$ and $(A + B)^{-1}(0) \ne \emptyset$. Moreover, we always use $J_r$ to denote the resolvent of $B$ of order $r > 0$; that is, $J_r = (I + rB)^{-1}$.

It is known that the $m$-accretiveness of $B$ implies that $J_r$ is single valued, defined on the entire $X$, and firmly nonexpansive; that is, $\|J_r x - J_r y\| \le \|\lambda(x - y) + (1 - \lambda)(J_r x - J_r y)\|$ for all $\lambda \in [0, 1]$ and $x, y \in X$. Below we fix the following notation:

$$T_r := J_r(I - rA), \quad r > 0.$$

Lemma 3.1. *For $r > 0$, $\operatorname{Fix}(T_r) = (A + B)^{-1}(0)$.*

*Proof. *From the definition of $T_r$, it follows that

$$x = T_r x \iff x = (I + rB)^{-1}(I - rA)x \iff x - rAx \in x + rBx \iff 0 \in (A + B)x.$$

This lemma alludes to the fact that in order to solve the inclusion problem (1.1), it suffices to find a fixed point of $T_r$. Since $T_r$ is already “split,” an iterative algorithm for $T_r$ corresponds to a splitting algorithm for (1.1). However, to guarantee convergence (weak or strong) of an iterative algorithm for $T_r$, we need good metric properties of $T_r$ such as nonexpansivity. To this end, we need geometric conditions on the underlying space $X$ (see Lemma 3.3).

Lemma 3.2. *Given $0 < s \le r$ and $x \in X$, there holds the relation

$$\|x - T_s x\| \le 2\|x - T_r x\|.$$*

*Proof. * Note that (. By the accretivity of , we have such that
It turns out that
This along with the triangle inequality yields that

We notice that though the resolvent of an accretive operator is always firmly nonexpansive in a general Banach space, firm nonexpansiveness is however insufficient to estimate useful bounds which are required to prove convergence of iterative algorithms for solving nonlinear equations governed by accretive operators. To overcome this difficulty, we need to impose additional properties on the underlying Banach space $X$. Lemma 3.3 below establishes a sharper estimate than nonexpansiveness of the mapping $T_r$, which is useful for us to prove the weak and strong convergence of algorithms (1.6) and (1.7).

Lemma 3.3. *Let $X$ be a uniformly convex and $q$-uniformly smooth Banach space for some $q \in (1, 2]$. Assume that $A$ is a single-valued $\alpha$-isa of order $q$ in $X$. Then, given $r > 0$, there exists a continuous, strictly increasing and convex function $\phi : \mathbb{R}^+ \to \mathbb{R}^+$ with $\phi(0) = 0$ such that, for all $x, y \in B_r$,

$$\|T_r x - T_r y\|^q \le \|x - y\|^q - r(\alpha q - r^{q-1}\kappa_q)\|Ax - Ay\|^q - \phi\bigl(\|(I - J_r)(I - rA)x - (I - J_r)(I - rA)y\|\bigr), \tag{3.9}$$

where $\kappa_q$ is the $q$-uniform smoothness coefficient of $X$ (see Lemma 2.2).*

*Proof. *Put and . Since , it follows from the accretiveness of that

Since , by the accretivity of it is easy to show that there exists such that ; hence, for is nonexpansive. Now since is uniformly convex, we can use Lemma 2.2 to find a continuous, strictly increasing and convex function , with , satisfying
where the last inequality follows from the nonexpansivity of the resolvent . Letting and combining (3.10) and (3.11) yield

On the other hand, since is also -uniformly smooth and is -isa of order , we derive that
Finally the required inequality (3.9) follows from (3.12) and (3.13).

*Remark 3.4. *Note that from Lemma 3.3 one deduces that, under the same conditions, if $0 < r \le (\alpha q/\kappa_q)^{1/(q-1)}$, then the mapping $T_r$ is nonexpansive.

##### 3.1. Weak Convergence

Mann's iterative method [20] is a widely used method for finding a fixed point of nonexpansive mappings [21]. We have proved that a splitting method for solving (1.1) can, under certain conditions, be reduced to a method for finding a fixed point of a nonexpansive mapping. It is therefore the purpose of this subsection to introduce a Mann-type forward-backward method with errors and to prove its weak convergence in a uniformly convex and $q$-uniformly smooth Banach space. (See [22] for a similar treatment of the proximal point algorithm [23, 24] for finding zeros of monotone operators in the Hilbert space setting.) To this end we need a lemma about the uniqueness of weak cluster points of a sequence, whose proof, included here, follows the idea presented in [21, 25].

Lemma 3.5. *Let $C$ be a closed convex subset of a uniformly convex Banach space $X$ with a Fréchet differentiable norm, and let $(T_n)$ be a sequence of nonexpansive self-mappings on $C$ with a nonempty common fixed point set $F$. If $x_0 \in C$ and $x_{n+1} = T_n x_n + e_n$, where $\sum_n \|e_n\| < \infty$, then $\langle p - q, J(f_1 - f_2)\rangle = 0$ for all $f_1, f_2 \in F$ and all weak limit points $p, q$ of $(x_n)$.*

*Proof. *We first claim that the sequence is bounded. As a matter of fact, for each fixed and any ,
As , we can apply Lemma 2.5 to find that exists. In particular, is bounded.

Let us next prove that, for every and , the limit
exists. To see this, we set which is nonexpansive. It is easy to see that we can rewrite in the manner
where

By nonexpansivity, we have that
and the summability of implies that
Set

Let be a closed bounded convex subset of containing and . A result of Bruck [26] assures the existence of a strictly increasing continuous function with such that

for all nonexpansive, and . Applying (3.21) to each , we obtain

Now since exists, (3.19) and (3.22) together imply that
Furthermore, we have

After taking first and then in (3.24) and using (3.19) and (3.23), we get
Hence the limit (3.15) exists.

If we replace now and in (2.12) with and , respectively, we arrive at
Since this limit exists, we deduce that
where . Consequently, we deduce that
Letting $t$ tend to $0$, we conclude that $\lim_{n}\langle x_n, J(f_1 - f_2)\rangle$ exists. Therefore, for any two weak limit points $p$ and $q$ of $(x_n)$, $\langle p, J(f_1 - f_2)\rangle = \langle q, J(f_1 - f_2)\rangle$; that is, $\langle p - q, J(f_1 - f_2)\rangle = 0$.

Theorem 3.6. *Let $X$ be a uniformly convex and $q$-uniformly smooth Banach space with $q \in (1, 2]$. Let $A : X \to X$ be an $\alpha$-isa of order $q$ and $B$ an $m$-accretive operator in $X$. Assume that $S := (A + B)^{-1}(0) \ne \emptyset$. We define a sequence $(x_n)$ by the perturbed iterative scheme

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n J_{r_n}\bigl(x_n - r_n(Ax_n + a_n)\bigr) + b_n, \tag{3.29}$$

where $(\alpha_n) \subset (0, 1)$, $(r_n) \subset (0, \infty)$, and $(a_n), (b_n) \subset X$. Assume that*(i)*$\sum_n \alpha_n\|a_n\| < \infty$ and $\sum_n \|b_n\| < \infty$;*(ii)*$0 < \liminf_n \alpha_n \le \limsup_n \alpha_n < 1$;*(iii)*$0 < \liminf_n r_n \le \limsup_n r_n < (\alpha q/\kappa_q)^{1/(q-1)}$.** Then $(x_n)$ converges weakly to some $z \in S$.*

*Proof. *Write . Notice that we can write
where . Then the iterative formula (3.29) turns into the form
Thus, by nonexpansivity of ,
Therefore, condition (i) implies
Take to deduce that, as and is nonexpansive,

Due to (3.33), Lemma 2.5 is applicable and we get that exists; in particular, is bounded. Let be such that , for all , and let . By (2.7) and Lemma 3.3, we have

From (3.35), assumptions (ii) and (iii), and (3.33), it turns out that
Consequently,
Since , there exists such that for all . Then, by Lemma 3.2,

By Lemmas 3.3 and 3.1, is nonexpansive and . We can therefore make use of Lemma 2.3 to assure that
Finally we set and rewrite scheme (3.31) as
where the sequence satisfies . Since is a sequence of nonexpansive mappings with as its nonempty common fixed point set, and since the space is uniformly convex with a Fréchet differentiable norm, we can apply Lemma 3.5 together with (3.39) to assert that the sequence has exactly one weak limit point; it is therefore weakly convergent.

##### 3.2. Strong Convergence

Halpern's method [27] is another iterative method for finding a fixed point of nonexpansive mappings. This method has been extensively studied in the literature [28–30] (see also the recent survey [31]). In this section we aim to introduce and prove the strong convergence of a Halpern-type forward-backward method with errors in uniformly convex and -uniformly smooth Banach spaces. This result turns out to be new even in the setting of Hilbert spaces.

Theorem 3.7. *Let $X$ be a uniformly convex and $q$-uniformly smooth Banach space with $q \in (1, 2]$. Let $A : X \to X$ be an $\alpha$-isa of order $q$ and $B$ an $m$-accretive operator in $X$. Assume that $S := (A + B)^{-1}(0) \ne \emptyset$. We define a sequence $(x_n)$ by the iterative scheme

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)J_{r_n}\bigl(x_n - r_n(Ax_n + a_n)\bigr) + b_n, \tag{3.41}$$

where $u \in X$ and $(\alpha_n) \subset (0, 1)$. Assume the following conditions are satisfied:*(i)*$\lim_n \alpha_n = 0$ and $\sum_n \alpha_n = \infty$; *(ii)*$\sum_n \|a_n\| < \infty$ and $\sum_n \|b_n\| < \infty$; *(iii)*$0 < \liminf_n r_n \le \limsup_n r_n < (\alpha q/\kappa_q)^{1/(q-1)}$. **Then $(x_n)$ converges in norm to $Q_S u$, where $Q_S$ is the sunny nonexpansive retraction of $X$ onto $S$.*

*Proof. * Let , where is the sunny nonexpansive retraction of onto whose existence is ensured by Theorem 2.4. Let be a sequence generated by
where we abbreviate . Hence to show the desired result, it suffices to prove that . Indeed, since and are both nonexpansive, it follows that
where . According to condition (i), we can apply Lemma 2.5(ii) to conclude that as .

We next show . Indeed, since and is nonexpansive, we have Hence, we can apply Lemma 2.5(i) to claim that is bounded.

Using the inequality (2.7) with , we derive that

By condition (iii), we have some such that for all . Hence, by Lemma 3.3 we get from (3.45) that

Let us define for all . Depending on the asymptotic behavior of the sequence we distinguish two cases.

*Case 1. *Suppose that there exists such that the sequence is nonincreasing; thus, exists. Since and , it follows immediately from (3.47) that
Consequently,

By condition (iii), there exists such that for all . Then, by Lemma 3.2, we get

The demiclosedness principle (i.e., Lemma 2.3) implies that

Note that from inequality (3.47) we deduce that

Next we prove that

Equivalently (should ), we need to prove that

To this end, let satisfy . By Reich's theorem [32], we get as . Using the subdifferential inequality, we deduce that where is a constant such that

Then it follows from (3.55) that

Taking yields

Then, letting and noting the fact that the duality map is norm-to-norm uniformly continuous on bounded sets, we get (3.54) as desired. Due to (3.53), we can apply Lemma 2.5(ii) to (3.52) to conclude that ; that is, .

*Case 2. *Suppose that there exists such that . Let us define
Obviously since for any . Set
Note that the sequence is nonincreasing and . Moreover, and
for any (see Lemma 3.1 of Maingé [33] for more details). From inequality (3.47) we get
It turns out that
Consequently,

Now repeating the argument of the proof of (3.53) in Case 1, we can get

By the asymptotic regularity of and (3.65), we deduce that

This implies that

On the other hand, it follows from (3.64) that

Taking the limit in (3.69) and using condition (i), we deduce that ; hence . That is, . Using the triangle inequality,
we also get that which together with (3.42) guarantees that .
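As a numerical illustration of the strong-convergence scheme, the sketch below runs a Halpern-type forward-backward step of the shape of (3.41) (without error terms) on an $\ell_1$-regularized least-squares model problem in $\mathbb{R}^n$, with the standard choice $\alpha_n = 1/(n+1)$, so that $\alpha_n \to 0$ and $\sum_n \alpha_n = \infty$. All data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # resolvent (prox) of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(3)
M = rng.standard_normal((15, 4))
b = M @ np.array([2.0, 0.0, -1.0, 0.0])
lam = 0.05
r = 1.0 / np.linalg.norm(M.T @ M, 2)     # A = M^T(M. - b) is 1/L-ism

u = np.zeros(4)                          # anchor point of the Halpern iteration
x = rng.standard_normal(4)
for n in range(20000):
    a = 1.0 / (n + 1)                    # a_n -> 0 and sum a_n = infinity
    fb = soft_threshold(x - r * (M.T @ (M @ x - b)), r * lam)  # forward-backward step
    x = a * u + (1.0 - a) * fb           # Halpern anchoring toward u
```

The anchoring term is what buys norm convergence (here, toward the solution nearest the anchor $u$), at the price of the slowly vanishing weights $\alpha_n$.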

#### 4. Applications

The two forward-backward methods previously studied, (3.29) and (3.41), find applications in other related problems such as variational inequalities, the convex feasibility problem, fixed point problems, and optimization problems.

Throughout this section, let $C$ be a nonempty closed and convex subset of a Hilbert space $H$. Note that in this case the concept of monotonicity coincides with the concept of accretivity.

Regarding the problem we are concerned with, that of finding a zero of the sum of two accretive operators in a Hilbert space $H$, as a direct consequence of Theorem 3.6 we first obtain the following result due to Combettes [34].

Corollary 4.1. *Let $A : H \to H$ be monotone and $B$ maximal monotone. Assume that $\gamma A$ is firmly nonexpansive for some $\gamma > 0$ and that *(i)*,
*(ii)*,
*(iii)* and ,*(iv)*. ** Then the sequence generated by the algorithm
**
converges weakly to a point in .*

*Proof. *It suffices to show that $\gamma A$ is firmly nonexpansive if and only if $A$ is $\gamma$-inverse strongly monotone. This however follows from the following straightforward observation:

$$\langle \gamma Ax - \gamma Ay, x - y\rangle \ge \|\gamma Ax - \gamma Ay\|^2 \iff \langle Ax - Ay, x - y\rangle \ge \gamma\|Ax - Ay\|^2,$$

for all $x, y \in H$.

##### 4.1. Variational Inequality Problems

A monotone variational inequality problem (VIP) is formulated as the problem of finding a point $x^* \in C$ with the property

$$\langle Ax^*, x - x^*\rangle \ge 0, \quad x \in C, \tag{4.3}$$

where $A : C \to H$ is a nonlinear monotone operator. We shall denote by $S$ the solution set of (4.3) and assume $S \ne \emptyset$.

One method for solving VIP (4.3) is the projection algorithm, which generates, starting with an arbitrary initial point $x_0 \in C$, a sequence $(x_n)$ satisfying

$$x_{n+1} = P_C(x_n - r_n Ax_n), \tag{4.4}$$

where $r_n$ is properly chosen as a stepsize. If in addition $A$ is $\alpha$-inverse strongly monotone (ism), then the iteration (4.4) with $0 < \liminf_n r_n \le \limsup_n r_n < 2\alpha$ converges weakly to a point in $S$ whenever such a point exists.
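For illustration, here is the projection algorithm on a toy VIP in $\mathbb{R}^2$: the operator $A(x) = Px + q$ with $P$ symmetric positive definite is the gradient of a convex quadratic, hence $1/\|P\|$-ism, and $C$ is a box. The data are hypothetical.

```python
import numpy as np

def project_box(x):        # metric projection P_C onto C = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

P = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
q = np.array([-1.0, -2.0])
L = np.linalg.norm(P, 2)   # A(x) = Px + q is (1/L)-ism, so any r in (0, 2/L) works
r = 1.0 / L

x = np.zeros(2)
for _ in range(1000):
    x = project_box(x - r * (P @ x + q))   # projection step of the algorithm
```

At convergence, `x` satisfies the fixed-point characterization $x = P_C(x - rA(x))$ of the VIP solution.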

By [35, Theorem 3], VIP (4.3) is equivalent to finding a point $x^*$ so that

$$0 \in (A + N_C)x^*,$$

where $N_C$ is the normal cone operator of $C$. In other words, VIPs are a special case of the problem of finding zeros of the sum of two monotone operators. Note that the resolvent of the normal cone $N_C$ is nothing but the projection operator $P_C$ and that, if $A$ is $\alpha$-ism, then the set $S$ is closed and convex [36]. As an application of the previous sections, we get the following results.

Corollary 4.2. *Let $A : C \to H$ be $\alpha$-ism for some $\alpha > 0$, and let the following conditions be satisfied:*(i)*$0 < \liminf_n \alpha_n \le \limsup_n \alpha_n < 1$,
*(ii)*$0 < \liminf_n r_n \le \limsup_n r_n < 2\alpha$. **Then the sequence $(x_n)$ generated by the relaxed projection algorithm

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - r_n Ax_n)$$

converges weakly to a point in $S$.*

Corollary 4.3. *Let $A : C \to H$ be $\alpha$-ism and let the following conditions be satisfied:*(i)*$\lim_n \alpha_n = 0$ and $\sum_n \alpha_n = \infty$;
*(ii)*$0 < \liminf_n r_n \le \limsup_n r_n < 2\alpha$.**Then, for any given $u \in C$, the sequence $(x_n)$ generated by

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)P_C(x_n - r_n Ax_n)$$

converges strongly to $P_S u$.*

*Remark 4.4. *Corollary 4.3 improves Iiduka-Takahashi's result [37, Corollary 3.2], where, apart from hypotheses (i)-(ii), the conditions $\sum_n |\alpha_{n+1} - \alpha_n| < \infty$ and $\sum_n |r_{n+1} - r_n| < \infty$ are required.

##### 4.2. Fixed Points of Strict Pseudocontractions

An operator $T : C \to C$ is said to be a strict $\lambda$-pseudocontraction if there exists a constant $\lambda > 0$ such that

$$\langle Tx - Ty, x - y\rangle \le \|x - y\|^2 - \lambda\|(I - T)x - (I - T)y\|^2$$

for all $x, y \in C$. It is known that if $T$ is strictly $\lambda$-pseudocontractive, then $I - T$ is $\lambda$-ism (see [38]). To solve the problem of approximating fixed points for such operators, an iterative scheme is provided in the following result.

Corollary 4.5. *Let $T : C \to C$ be strictly $\lambda$-pseudocontractive with a nonempty fixed point set $\operatorname{Fix}(T)$. Suppose that *(i)*$\lim_n \alpha_n = 0$ and $\sum_n \alpha_n = \infty$,*(ii)*$0 < \liminf_n r_n \le \limsup_n r_n < 2\lambda$.**Then, for any given $u \in C$, the sequence $(x_n)$ generated by the algorithm

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)\bigl((1 - r_n)x_n + r_n Tx_n\bigr)$$

converges strongly to the point $P_{\operatorname{Fix}(T)}u$.*

*Proof. *Set $A = I - T$. Hence $A$ is $\lambda$-ism. Moreover we rewrite the above iteration as

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)(x_n - r_n Ax_n).$$

Then, by setting the operator $B$ constantly zero, Corollary 4.3 yields the result as desired.

##### 4.3. Convexly Constrained Minimization Problem

Consider the optimization problem

$$\min_{x \in C} f(x), \tag{4.11}$$

where $f : C \to \mathbb{R}$ is a convex and differentiable function. Assume (4.11) is consistent, and let $\Omega$ denote its set of solutions.

The gradient projection algorithm (GPA) generates a sequence $(x_n)$ via the iterative procedure

$$x_{n+1} = P_C\bigl(x_n - \gamma \nabla f(x_n)\bigr),$$

where $\nabla f$ stands for the gradient of $f$. If in addition $\nabla f$ is $L$-Lipschitz continuous, that is,

$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|$$

for any $x, y \in C$, then the GPA with $0 < \gamma < 2/L$ converges weakly to a minimizer of $f$ in $C$ (see, e.g., [39, Corollary 4.1]).
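A minimal sketch of the GPA for a constrained least-squares instance, assuming $C$ is the nonnegative orthant (so $P_C$ is a componentwise clamp) and $f(x) = \frac{1}{2}\|Mx - b\|^2$; the data are illustrative.

```python
import numpy as np

# Gradient projection for min_{x in C} 0.5*||Mx - b||^2 over C = {x : x >= 0};
# grad f(x) = M^T(Mx - b) is L-Lipschitz with L = ||M^T M||.
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 4))
b = M @ np.array([0.5, 0.0, 2.0, 0.0]) + 0.01 * rng.standard_normal(30)

L = np.linalg.norm(M.T @ M, 2)
gamma = 1.0 / L                      # step size within (0, 2/L)

x = np.zeros(4)
for _ in range(2000):
    # P_C is the componentwise clamp max(., 0) for the nonnegative orthant
    x = np.maximum(x - gamma * (M.T @ (M @ x - b)), 0.0)
```

The limit satisfies the fixed-point equation $x = P_C(x - \gamma\nabla f(x))$, which is exactly the optimality condition of (4.11).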

The minimization problem (4.11) is equivalent to the VIP [40, Lemma 5.13]:

$$\langle \nabla f(x^*), x - x^*\rangle \ge 0, \quad x \in C.$$

It is also known [41, Corollary 10] that if $\nabla f$ is $L$-Lipschitz continuous, then it is also $1/L$-ism. Thus, we can apply the previous results to (4.11) by taking $A = \nabla f$.

Corollary 4.6. *Assume that $f : C \to \mathbb{R}$ is convex and differentiable with $L$-Lipschitz continuous gradient $\nabla f$. Assume also that*(i)*$0 < \liminf_n \alpha_n \le \limsup_n \alpha_n < 1$,
*(ii)*$0 < \liminf_n r_n \le \limsup_n r_n < 2/L$. ** Then the sequence $(x_n)$ generated by the algorithm

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C\bigl(x_n - r_n \nabla f(x_n)\bigr)$$

converges weakly to a point in $\Omega$.*

Corollary 4.7. * Assume that $f : C \to \mathbb{R}$ is convex and differentiable with $L$-Lipschitz continuous gradient $\nabla f$. Assume also that*(i)*$\lim_n \alpha_n = 0$ and $\sum_n \alpha_n = \infty$;*(ii)*$0 < \liminf_n r_n \le \limsup_n r_n < 2/L$.** Then for any given $u \in C$, the sequence $(x_n)$ generated by the algorithm

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)P_C\bigl(x_n - r_n \nabla f(x_n)\bigr)$$

converges strongly to $P_\Omega u$ whenever such a point exists.*

##### 4.4. Split Feasibility Problem

The split feasibility problem (SFP) [42] consists of finding a point $x$ satisfying the property

$$x \in C, \quad Ax \in Q, \tag{4.17}$$

where $C$ and $Q$ are, respectively, closed convex subsets of Hilbert spaces $H_1$ and $H_2$, and $A : H_1 \to H_2$ is a bounded linear operator. The SFP (4.17) has attracted much attention due to its applications in signal processing [42]. Various algorithms have, therefore, been derived to solve the SFP (4.17) (see [39, 43, 44] and references therein). In particular, Byrne [43] introduced the so-called CQ algorithm:

$$x_{n+1} = P_C\bigl(x_n - \gamma A^*(I - P_Q)Ax_n\bigr),$$

where $0 < \gamma < 2/\|A\|^2$.
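A small sketch of the CQ iteration for a consistent toy SFP, with $C$ and $Q$ taken as boxes so that both projections are componentwise clamps; the sets, matrix, and starting point are illustrative assumptions.

```python
import numpy as np

def P_C(x):                 # projection onto C = [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

def P_Q(y):                 # projection onto Q = [0, 1]^2
    return np.clip(y, 0.0, 1.0)

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # gamma in (0, 2/||A||^2)

x = np.array([-1.0, 1.0])                  # inside C, but Ax is outside Q
for _ in range(20000):
    # one CQ step: gradient of 0.5*||(I - P_Q)Ax||^2, then project onto C
    x = P_C(x - gamma * A.T @ (A @ x - P_Q(A @ x)))
```

The iterate stays in $C$ by construction, while the residual $\|Ax - P_Q(Ax)\|$ (the distance from $Ax$ to $Q$) is driven to zero on a consistent instance.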

To solve the SFP (4.17), it is very useful to investigate the following convexly constrained minimization problem (CCMP):

$$\min_{x \in C} f(x), \tag{4.19}$$

where

$$f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2. \tag{4.20}$$

Generally speaking, the SFP (4.17) and CCMP (4.19) are not fully equivalent: every solution to the SFP (4.17) is evidently a minimizer of the CCMP (4.19); however, a solution to the CCMP (4.19) does not necessarily satisfy the SFP (4.17). Further, if the solution set of the SFP (4.17) is nonempty, then it follows from [45, Lemma 4.2] that it coincides with the solution set of the CCMP (4.19), where $f$ is defined by (4.20). As shown by Xu [46], the CQ algorithm need not converge strongly in infinite-dimensional spaces. We now consider an iteration process with strong convergence for solving the SFP (4.17).

Corollary 4.8. *Assume that the SFP (4.17) is consistent, and let $S$ be its nonempty solution set. Assume also that*(i)*$\lim_n \alpha_n = 0$ and $\sum_n \alpha_n = \infty$;*(ii)*$0 < \liminf_n r_n \le \limsup_n r_n < 2/\|A\|^2$.** Then for any given $u \in C$, the sequence $(x_n)$ generated by the algorithm

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)P_C\bigl(x_n - r_n A^*(I - P_Q)Ax_n\bigr)$$

converges strongly to the solution $P_S u$ of the SFP (4.17).*

*Proof. *Let $f$ be defined by (4.20). According to [39, page 113], we have

$$\nabla f = A^*(I - P_Q)A,$$

which is $L$-Lipschitz continuous with $L = \|A\|^2$. Thus Corollary 4.7 applies, and the result follows immediately.

*Remark 4.9. *Corollary 4.8 improves and recovers the result of [44, Corollary 3.7], which uses the additional condition , condition (i), and the special case of condition (ii) where for all .

##### 4.5. Convexly Constrained Linear Inverse Problem

The constrained linear system

$$Ax = b, \quad x \in C, \tag{4.24}$$

where $A : H_1 \to H_2$ is a bounded linear operator and $b \in H_2$, is called the convexly constrained linear inverse problem (cf. [47]). A classical way to deal with this problem is the well-known projected Landweber method (see [40]):

$$x_{n+1} = P_C\bigl(x_n - \gamma A^*(Ax_n - b)\bigr),$$

where $0 < \gamma < 2/\|A\|^2$. A counterexample in [8, Remark 5.12] shows that the projected Landweber iteration converges, in general, only weakly in infinite-dimensional spaces. To get strong convergence, Eicke introduced the so-called damped projection method (see [47]). In what follows, we present another algorithm with strong convergence for solving (4.24).

Corollary 4.10. *Assume that (4.24) is consistent. Assume also that*(i)*$\lim_n \alpha_n = 0$ and $\sum_n \alpha_n = \infty$;*(ii)*$0 < \liminf_n r_n \le \limsup_n r_n < 2/\|A\|^2$.** Then, for any given $u \in C$, the sequence $(x_n)$ generated by the algorithm

$$x_{n+1} = \alpha_n u + (1 - \alpha_n)P_C\bigl(x_n - r_n A^*(Ax_n - b)\bigr)$$

converges strongly to a solution to problem (4.24) whenever it exists.*

* Proof. * This is an immediate consequence of Corollary 4.8 by taking $Q = \{b\}$.
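For completeness, here is a sketch of the classical projected Landweber iteration (the weakly convergent baseline that Corollary 4.10 strengthens), on a consistent random instance with $C$ the nonnegative orthant; all data are illustrative.

```python
import numpy as np

# Projected Landweber iteration for Ax = b, x in C = {x : x >= 0}.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))
x_true = np.array([1.0, 0.0, 2.0])   # nonnegative, so the system is consistent
b = A @ x_true

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # gamma in (0, 2/||A||^2)
x = np.zeros(3)
for _ in range(5000):
    # Landweber (gradient) step on 0.5*||Ax - b||^2, then clamp onto C
    x = np.maximum(x - gamma * A.T @ (A @ x - b), 0.0)
```

In this finite-dimensional, consistent setting the residual $\|Ax - b\|$ vanishes in the limit; the point of the damped/Halpern-type variants is to guarantee norm convergence in infinite dimensions as well.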

#### Acknowledgments

The work of G. López, V. Martín-Márquez, and H.-K. Xu was supported by Grant MTM2009-10696-C02-01. This work was carried out while F. Wang was visiting Universidad de Sevilla under the support of this grant. He was also supported by the Basic and Frontier Project of Henan 122300410268 and the Peiyu Project of Luoyang Normal University 2011-PYJJ-002. The work of G. López and V. Martín-Márquez was also supported by the Plan Andaluz de Investigación de la Junta de Andalucía FQM-127 and Grant P08-FQM-03543. The work of H.-K. Xu was also supported in part by NSC 100-2115-M-110-003-MY2 (Taiwan). He extends his appreciation to the Deanship of Scientific Research at King Saud University for funding the work through a visiting professorship program (VPP).

#### References

- D. H. Peaceman and H. H. Rachford, “The numerical solution of parabolic and elliptic differential equations,” *Journal of the Society for Industrial and Applied Mathematics*, vol. 3, pp. 28–41, 1955.
- J. Douglas and H. H. Rachford, “On the numerical solution of heat conduction problems in two and three space variables,” *Transactions of the American Mathematical Society*, vol. 82, pp. 421–439, 1956.
- R. B. Kellogg, “Nonlinear alternating direction algorithm,” *Mathematics of Computation*, vol. 23, pp. 23–28, 1969.
- P. L. Lions and B. Mercier, “Splitting algorithms for the sum of two nonlinear operators,” *SIAM Journal on Numerical Analysis*, vol. 16, no. 6, pp. 964–979, 1979.
- G. B. Passty, “Ergodic convergence to a zero of the sum of monotone operators in Hilbert space,” *Journal of Mathematical Analysis and Applications*, vol. 72, no. 2, pp. 383–390, 1979.
- P. Tseng, “Applications of a splitting algorithm to decomposition in convex programming and variational inequalities,” *SIAM Journal on Control and Optimization*, vol. 29, no. 1, pp. 119–138, 1991.
- G. H.-G. Chen and R. T. Rockafellar, “Convergence rates in forward-backward splitting,” *SIAM Journal on Optimization*, vol. 7, no. 2, pp. 421–444, 1997.
- P. L. Combettes and V. R. Wajs, “Signal recovery by proximal forward-backward splitting,” *Multiscale Modeling & Simulation*, vol. 4, no. 4, pp. 1168–1200, 2005.
- S. Sra, S. Nowozin, and S. J. Wright, *Optimization for Machine Learning*, 2011.
- K. Aoyama, H. Iiduka, and W. Takahashi, “Weak convergence of an iterative sequence for accretive operators in Banach spaces,” *Fixed Point Theory and Applications*, Article ID 35390, 13 pages, 2006.
- H. Zegeye and N. Shahzad, “Strong convergence theorems for a common zero of a finite family of m-accretive mappings,” *Nonlinear Analysis*, vol. 66, no. 5, pp. 1161–1169, 2007.
- H. Zegeye and N. Shahzad, “Strong convergence theorems for a common zero point of a finite family of *α*-inverse strongly accretive mappings,” *Journal of Nonlinear and Convex Analysis*, vol. 9, no. 1, pp. 95–104, 2008.
- I. Cioranescu, *Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems*, Kluwer Academic Publishers, 1990.
- H. K. Xu, “Inequalities in Banach spaces with applications,” *Nonlinear Analysis*, vol. 16, no. 12, pp. 1127–1138, 1991.
- F. E. Browder, “Nonexpansive nonlinear operators in a Banach space,” *Proceedings of the National Academy of Sciences of the United States of America*, vol. 54, pp. 1041–1044, 1965.
- T. Kato, “Nonlinear semigroups and evolution equations,” *Journal of the Mathematical Society of Japan*, vol. 19, pp. 508–520, 1967.
- R. E. Bruck, “Nonexpansive projections on subsets of Banach spaces,” *Pacific Journal of Mathematics*, vol. 47, pp. 341–355, 1973.
- P. E. Maingé, “Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces,” *Journal of Mathematical Analysis and Applications*, vol. 325, no. 1, pp. 469–479, 2007.
- H. K. Xu, “Iterative algorithms for nonlinear operators,” *Journal of the London Mathematical Society*, vol. 66, no. 1, pp. 240–256, 2002.
- W. R. Mann, “Mean value methods in iteration,” *Proceedings of the American Mathematical Society*, vol. 4, pp. 506–510, 1953.
- S. Reich, “Weak convergence theorems for nonexpansive mappings in Banach spaces,” *Journal of Mathematical Analysis and Applications*, vol. 67, no. 2, pp. 274–276, 1979.
- G. Marino and H. K. Xu, “Convergence of generalized proximal point algorithms,” *Communications on Pure and Applied Analysis*, vol. 3, no. 4, pp. 791–808, 2004.
- R. T. Rockafellar, “Monotone operators and the proximal point algorithm,” *SIAM Journal on Control and Optimization*, vol. 14, no. 5, pp. 877–898, 1976.
- S. Kamimura and W. Takahashi, “Approximating solutions of maximal monotone operators in Hilbert spaces,” *Journal of Approximation Theory*, vol. 106, no. 2, pp. 226–240, 2000.
- K. K. Tan and H. K. Xu, “Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process,” *Journal of Mathematical Analysis and Applications*, vol. 178, no. 2, pp. 301–308, 1993.
- R. E. Bruck, “A simple proof of the mean ergodic theorem for nonlinear contractions in Banach spaces,” *Israel Journal of Mathematics*, vol. 32, no. 2-3, pp. 107–116, 1979.
- B. R. Halpern, “Fixed points of nonexpanding maps,” *Bulletin of the American Mathematical Society*, vol. 73, pp. 957–961, 1967.
- P. L. Lions, “Approximation de points fixes de contractions,”
*Comptes Rendus de l'Académie des Sciences*, vol. 284, no. 21, pp. A1357–A1359, 1977. View at Google Scholar - R. Wittmann, “Approximation of fixed points of nonexpansive mappings,”
*Archiv der Mathematik*, vol. 58, no. 5, pp. 486–491, 1992. View at Publisher · View at Google Scholar - H. K. Xu, “Viscosity approximation methods for nonexpansive mappings,”
*Journal of Mathematical Analysis and Applications*, vol. 298, no. 1, pp. 279–291, 2004. View at Publisher · View at Google Scholar - G. López, V. Martín, and H. K. Xu, “Halpern's iteration for nonexpansive mappings,” in
*Nonlinear Analysis and Optimization I: Nonlinear Analysis*, vol. 513, pp. 211–230, 2010. View at Publisher · View at Google Scholar - S. Reich, “Strong convergence theorems for resolvents of accretive operators in Banach spaces,”
*Journal of Mathematical Analysis and Applications*, vol. 75, no. 1, pp. 287–292, 1980. View at Publisher · View at Google Scholar - P. E. Maingé, “Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization,”
*Set-Valued Analysis*, vol. 16, no. 7-8, pp. 899–912, 2008. View at Publisher · View at Google Scholar - P. L. Combettes, “Solving monotone inclusions via compositions of nonexpansive averaged operators,”
*Optimization*, vol. 53, no. 5-6, pp. 475–504, 2004. View at Publisher · View at Google Scholar - R. T. Rockafellar, “On the maximality of sums of nonlinear monotone operators,”
*Transactions of the American Mathematical Society*, vol. 149, pp. 75–88, 1970. View at Google Scholar - V. Barbu,
*Nonlinear Semigroups and Differential Equations in Banach Spaces*, Noordhoff, 1976. - H. Iiduka and W. Takahashi, “Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings,”
*Nonlinear Analysis*, vol. 61, no. 3, pp. 341–350, 2005. View at Publisher · View at Google Scholar - F. E. Browder and W. V. Petryshyn, “Construction of fixed points of nonlinear mappings in Hilbert space,”
*Journal of Mathematical Analysis and Applications*, vol. 20, pp. 197–228, 1967. View at Google Scholar - C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,”
*Inverse Problems*, vol. 20, no. 1, pp. 103–120, 2004. View at Publisher · View at Google Scholar - H. W. Engl, M. Hanke, and A. Neubauer,
*Regularization of Inverse Problems*, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 1996. View at Publisher · View at Google Scholar - J. B. Baillon and G. Haddad, “Quelques proprietes des operateurs angle-bornes et cycliquement monotones,”
*Israel Journal of Mathematics*, vol. 26, no. 2, pp. 137–150, 1977. View at Google Scholar - Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,”
*Numerical Algorithms*, vol. 8, no. 2–4, pp. 221–239, 1994. View at Publisher · View at Google Scholar - C. Byrne, “Iterative oblique projection onto convex sets and the split feasibility problem,”
*Inverse Problems*, vol. 18, no. 2, pp. 441–453, 2002. View at Publisher · View at Google Scholar - H. K. Xu, “A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility,”
*Inverse Problems*, vol. 22, no. 6, pp. 2021–2034, 2006. View at Publisher · View at Google Scholar - F. Wang and H. K. Xu, “Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem,”
*Journal of Inequalities and Applications*, vol. 2010, Article ID 102085, 2010. View at Publisher · View at Google Scholar - H. K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,”
*Inverse Problems*, vol. 26, no. 10, Article ID 105018, 2010. View at Publisher · View at Google Scholar - B. Eicke, “Iteration methods for convexly constrained ill-posed problems in Hilbert space,”
*Numerical Functional Analysis and Optimization*, vol. 13, no. 5-6, pp. 413–429, 1992. View at Publisher · View at Google Scholar