Research Article | Open Access

# On an Iterative Method for Finding a Zero to the Sum of Two Maximal Monotone Operators

**Academic Editor:** Luigi Muglia

#### Abstract

In this paper we consider a problem that consists of finding a zero to the sum of two monotone operators. One method for solving such a problem is the forward-backward splitting method. We present some new conditions that guarantee the weak convergence of the forward-backward method. Applications of these results, including variational inequalities and gradient projection algorithms, are also considered.

#### 1. Introduction

It is well known that monotone inclusion problems play an important role in the theory of nonlinear analysis. Such a problem consists of finding a zero of a maximal monotone operator. However, in some settings, such as convex programming and variational inequality problems, the operator needs to be decomposed as the sum of two monotone operators (see, e.g., [1–6]). In this way, one needs to find $x \in H$ so that
$$0 \in Ax + Bx, \tag{1}$$
where $A$ and $B$ are two monotone operators on a Hilbert space $H$. To solve such a problem, splitting methods, such as the Peaceman-Rachford algorithm [7] and the Douglas-Rachford algorithm [8], are usually considered. We consider the special case in which $B$ is multivalued and $A$ is single-valued. A classical way to solve problem (1) under our assumption is the forward-backward splitting (FBS) method (see [2, 9]). Starting with an arbitrary initial $x_0 \in H$, the FBS generates a sequence $(x_n)$ satisfying
$$x_{n+1} = (I + \lambda B)^{-1}(x_n - \lambda A x_n),$$
where $\lambda$ is a properly chosen real number. The FBS then converges weakly to a solution of problem (1) whenever such a point exists.
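To make the iteration concrete, the following Python sketch (our own toy example, not taken from the paper) runs the FBS on the inclusion $0 \in Ax + Bx$ with $Ax = x - c$ (which is 1-ism) and $B = \partial(\mu\|\cdot\|_1)$ with $\mu = 1$, whose resolvent $(I + \lambda B)^{-1}$ is the soft-thresholding map:

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent (I + t * d||.||_1)^{-1}: componentwise soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(x0, A, resolvent, lam, n_iter=200):
    # FBS: x_{n+1} = (I + lam*B)^{-1}(x_n - lam * A(x_n))
    x = x0
    for _ in range(n_iter):
        x = resolvent(x - lam * A(x), lam)
    return x

# Toy inclusion: A x = x - c (1-ism), B = d||.||_1
c = np.array([3.0, -0.2, 0.5])
x_star = forward_backward(
    np.zeros(3),
    A=lambda x: x - c,
    resolvent=soft_threshold,
    lam=1.0,          # lam in (0, 2*nu) with nu = 1
)
# The zero of A + B here is soft_threshold(c, 1) = [2.0, 0.0, 0.0]
```

The helper names are our own; any single-valued ism operator and any computable resolvent could be substituted.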

On the other hand, we observe that problem (1) is equivalent to the fixed point equation
$$x = (I + \lambda B)^{-1}(I - \lambda A)x$$
for the single-valued operator $(I + \lambda B)^{-1}(I - \lambda A)$. Moreover, if $\lambda$ is properly chosen, this operator is nonexpansive. Motivated by this observation, we use techniques from the fixed point theory of nonexpansive operators to investigate and study various monotone inclusion problems.

The rest of this paper is organized as follows. In Section 2, some useful lemmas are introduced. In Section 3, we consider the modified forward-backward splitting method and prove its weak convergence under some new conditions. In Section 4, some applications of our results in finding a solution of the variational inequality problem are included.

#### 2. Preliminary and Notation

Throughout the paper, $I$ denotes the identity operator, $\mathrm{Fix}(T)$ the set of the fixed points of an operator $T$, and $\nabla f$ the gradient of the functional $f$. The notation "$\to$" denotes strong convergence and "$\rightharpoonup$" weak convergence. Denote by $\omega_w(x_n)$ the set of the cluster points of $(x_n)$ in the weak topology (i.e., the set $\{x : x_{n_j} \rightharpoonup x\}$, where $(x_{n_j})$ denotes a subsequence of $(x_n)$).

Let $C$ be a nonempty closed convex subset of $H$. Denote by $P_C$ the projection from $H$ onto $C$; namely, for $x \in H$, $P_C x$ is the unique point in $C$ with the property
$$\|x - P_C x\| = \min_{y \in C} \|x - y\|.$$
It is well known that $P_C x$ is characterized by the inequality
$$\langle x - P_C x, y - P_C x \rangle \le 0, \quad y \in C.$$
A single-valued operator $T : H \to H$ is called *nonexpansive* if
$$\|Tx - Ty\| \le \|x - y\|, \quad x, y \in H;$$
*firmly nonexpansive* if
$$\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle, \quad x, y \in H;$$
and *$\alpha$-averaged* if there exist a constant $\alpha \in (0,1)$ and a nonexpansive operator $S$ such that $T = (1-\alpha)I + \alpha S$. Firmly nonexpansive operators are $\frac{1}{2}$-averaged.
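As an illustration (our own sketch, with hypothetical helper names), the projection onto a closed Euclidean ball has a closed form, and the characterizing inequality $\langle x - P_C x, y - P_C x \rangle \le 0$ can be verified numerically:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Metric projection P_C onto the closed ball of the given radius
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

x = np.array([3.0, 4.0])
p = project_ball(x)                        # nearest point of the unit ball
# Characterization: <x - Px, y - Px> <= 0 for every y in the ball
rng = np.random.default_rng(0)
for _ in range(100):
    y = project_ball(rng.normal(size=2))   # an arbitrary point of the ball
    assert np.dot(x - p, y - p) <= 1e-12
```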

Lemma 1 (see [10]). *The following assertions hold.* (i) *$T$ is $\alpha$-averaged for $\alpha \in (0,1)$ if and only if
$$\|Tx - Ty\|^2 \le \|x - y\|^2 - \frac{1-\alpha}{\alpha}\|(I-T)x - (I-T)y\|^2$$
for all $x, y \in H$.* (ii) *Assume that $T_i$ is $\alpha_i$-averaged for $i = 1, 2$. Then $T_1 T_2$ is $\alpha$-averaged with $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.*

The following lemma is known as the demiclosedness principle for nonexpansive mappings.

Lemma 2. *Let $C$ be a nonempty closed convex subset of $H$ and $T : C \to C$ a nonexpansive operator with $\mathrm{Fix}(T) \ne \emptyset$. If $(x_n)$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - T)x_n \to y$, then $(I - T)x = y$. In particular, if $(I - T)x_n \to 0$, then $x \in \mathrm{Fix}(T)$.*

A multivalued operator $B : H \to 2^H$ is called *monotone* if
$$\langle x - y, u - v \rangle \ge 0, \quad u \in Bx,\ v \in By;$$
*$\nu$-inverse strongly monotone* ($\nu$-ism) if there exists a constant $\nu > 0$ so that
$$\langle x - y, u - v \rangle \ge \nu \|u - v\|^2, \quad u \in Bx,\ v \in By;$$
and *maximal monotone* if it is monotone and its graph is not properly contained in the graph of any other monotone operator.

In what follows, we shall assume that (i) $A$ is single-valued and $\nu$-ism; (ii) $B$ is multivalued and maximal monotone. Hereafter, if no confusion occurs, we denote by $J_\lambda = (I + \lambda B)^{-1}$ the resolvent of $B$ for any given $\lambda > 0$. It is known that $J_\lambda$ is single-valued and firmly nonexpansive; moreover $\mathrm{Fix}(J_\lambda) = B^{-1}(0)$ (see [11]).
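As a quick numerical check (a sketch of ours, not part of the paper), take $B = \partial\|\cdot\|_1$: its resolvent is soft thresholding, and the firm-nonexpansiveness inequality $\|J_\lambda x - J_\lambda y\|^2 \le \langle x - y, J_\lambda x - J_\lambda y \rangle$ can be tested on random pairs:

```python
import numpy as np

def resolvent_l1(v, lam):
    # J_lam = (I + lam * d||.||_1)^{-1}, i.e., componentwise soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Firm nonexpansiveness: ||Jx - Jy||^2 <= <x - y, Jx - Jy>
rng = np.random.default_rng(1)
for _ in range(200):
    x, y = rng.normal(size=3), rng.normal(size=3)
    jx, jy = resolvent_l1(x, 0.7), resolvent_l1(y, 0.7)
    assert np.dot(jx - jy, jx - jy) <= np.dot(x - y, jx - jy) + 1e-12
```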

Lemma 3 (see [12]). *For $\lambda > 0$, let $T_\lambda = J_\lambda(I - \lambda A)$. Then* (i) *$\mathrm{Fix}(T_\lambda) = (A + B)^{-1}(0)$;* (ii) *for $\lambda \in (0, 2\nu)$, $T_\lambda$ is $\frac{2\nu + \lambda}{4\nu}$-averaged;* (iii) *for $0 < \lambda \le \mu$, it follows that
$$\|x - T_\lambda x\| \le 2\|x - T_\mu x\|, \quad x \in H.$$*

*Definition 4.* Assume that $(x_n)$ is a sequence in $H$ and that $(\varepsilon_n)$ is a nonnegative real sequence with $\sum_{n=0}^{\infty} \varepsilon_n < \infty$. Then $(x_n)$ is called *quasi-Fejér monotone* w.r.t. a nonempty set $C \subseteq H$ if
$$\|x_{n+1} - z\|^2 \le \|x_n - z\|^2 + \varepsilon_n, \quad z \in C,\ n \ge 0.$$

Lemma 5 (see [13]). *Let $C$ be a nonempty closed convex subset of $H$. If the sequence $(x_n)$ is quasi-Fejér monotone w.r.t. $C$, then the following hold:* (i) *$x_n \rightharpoonup x \in C$ if and only if $\omega_w(x_n) \subseteq C$;* (ii) *the sequence $(\|x_n - z\|)$ converges for every $z \in C$;* (iii) *if $\omega_w(x_n) \subseteq C$, then $(x_n)$ converges weakly to a point in $C$.*

#### 3. Weak Convergence Theorem

In [10] Combettes considered a modified FBS: for any initial guess $x_0 \in H$, set
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n \bigl(J_{\lambda_n}\bigl(x_n - \lambda_n(Ax_n + b_n)\bigr) + a_n\bigr), \tag{14}$$
where $(\alpha_n)$ is a relaxation sequence and $(a_n)$, $(b_n)$ are computation errors. He proved the weak convergence of algorithm (14) provided that (1) $0 < \inf_n \lambda_n \le \sup_n \lambda_n < 2\nu$; (2) $0 < \inf_n \alpha_n \le \alpha_n \le 1$; (3) $\sum_n \|a_n\| < \infty$ and $\sum_n \|b_n\| < \infty$. We observe that (14) is in fact a Mann-type iteration. In the following we shall prove the convergence of (14) under slightly weaker conditions.
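A minimal sketch of the Mann-type iteration (14), with the error terms $a_n$, $b_n$ set to zero; the operators, data, and parameter choices below are our own toy assumptions, not Combettes':

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def modified_fbs(x0, A, resolvent, lam, alpha, n_iter=200):
    # Mann-type relaxation with a_n = b_n = 0:
    # x_{n+1} = (1 - alpha_n) * x_n + alpha_n * J_lam(x_n - lam * A(x_n))
    x = x0
    for n in range(n_iter):
        y = resolvent(x - lam * A(x), lam)
        x = (1 - alpha(n)) * x + alpha(n) * y
    return x

c = np.array([3.0, -0.2, 0.5])
x_rel = modified_fbs(
    np.zeros(3),
    A=lambda x: x - c,             # 1-ism forward operator
    resolvent=soft_threshold,      # resolvent of B = d||.||_1
    lam=1.0,
    alpha=lambda n: 0.5,           # constant relaxation in (0, 1]
)
```

The relaxation $\alpha_n$ averages each forward-backward step with the current iterate, which is exactly the Mann-iteration structure noted above.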

Theorem 6. *Suppose the following conditions are satisfied:*(C1)*;
*(C2) *;
*(C3) *;
*(C4) *;
*(C5) *. **If in addition $(A + B)^{-1}(0) \ne \emptyset$, then the sequence $(x_n)$ generated by (14) converges weakly to a point in $(A + B)^{-1}(0)$.*

*Proof.* We first show that $(x_n)$ is quasi-Fejér monotone. Let $z \in (A + B)^{-1}(0)$. Then it follows from Lemma 3 that $T_{\lambda_n} = J_{\lambda_n}(I - \lambda_n A)$ is averaged. Letting , we have that
By condition (C5), we conclude that $(x_n)$ is quasi-Fejér monotone w.r.t. $(A + B)^{-1}(0)$.

Next let us show . To see this, choose so that
Let . Obviously is -averaged. According to Lemma 1, we deduce that
which in turn implies that
for all . Letting yields
By conditions (C1) and (C4), we check that
which yields that . By condition (C2), we find and so that for all . Let . It then follows from Lemma 3 that
for all . Letting , we have as . Take and a subsequence of such that . Since is nonexpansive, applying Lemmas 2 and 3 yields and thus . By Lemma 5 the proof is complete.

We can also present another condition for the weak convergence of (14).

Theorem 7. *Suppose that the following conditions are satisfied: *(C1)*;
*(C2) *;
*(C3) *;
*(C4) *;
*(C5) *. **If in addition $(A + B)^{-1}(0) \ne \emptyset$, then the sequence $(x_n)$ generated by (14) converges weakly to a point in $(A + B)^{-1}(0)$.*

*Proof. *Compared with the proof of Theorem 6, it suffices to show that as . Observe that the estimate
still holds. Let . Then by condition (C4) . According to Lemma 3, we have
where is a properly chosen real number. Then, for any given , we arrive at
Conditions (C1) and (C5) imply that exists and therefore as . Hence the proof is complete.

Applying Theorem 7, one can easily get the following.

Corollary 8. *Suppose that the following conditions are satisfied: *(1)*,
*(2)*;
*(3)*,
*(4)*. ** Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n J_{\lambda_n}(x_n - \lambda_n A x_n)$$
converges weakly to some point in $(A + B)^{-1}(0)$.*

*Remark 9.* Corollary 8 implies that our condition is slightly weaker than Combettes' whenever the relaxation sequence $(\alpha_n)$ approaches some constant.

#### 4. Application

Let $C$ be a nonempty closed convex subset of $H$. A variational inequality problem (VIP) is formulated as a problem of finding a point $x^* \in C$ with the property
$$\langle F x^*, x - x^* \rangle \ge 0, \quad x \in C, \tag{26}$$
where $F$ is a nonlinear operator. We shall denote by $S$ the solution set of VIP (26). One method for solving a VIP is the projection algorithm, which generates, starting with an arbitrary initial $x_0 \in C$, a sequence $(x_n)$ satisfying
$$x_{n+1} = P_C(x_n - \lambda F x_n), \tag{27}$$
where $\lambda$ is a properly chosen real number. If, in addition, $F$ is $\nu$-ism, then iteration (27) with $\lambda \in (0, 2\nu)$ converges weakly to a point in $S$, whenever such a point exists.
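A small Python sketch of the projection algorithm (27) on a toy VIP of our own making: $Fx = x - b$ is 1-ism, $C$ is the closed unit ball, and the VIP solution is $P_C(b)$:

```python
import numpy as np

def project_ball(x, radius=1.0):
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def vip_projection(x0, F, lam, n_iter=200):
    # Projection algorithm: x_{n+1} = P_C(x_n - lam * F(x_n))
    x = x0
    for _ in range(n_iter):
        x = project_ball(x - lam * F(x))
    return x

b = np.array([3.0, 4.0])
sol = vip_projection(np.zeros(2), F=lambda x: x - b, lam=1.0)
# For F(x) = x - b on the unit ball, the solution is P_C(b) = [0.6, 0.8]
```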

Let $N_C$ be the normal cone for $C$; that is, $N_C x = \{u \in H : \langle u, y - x \rangle \le 0,\ \forall y \in C\}$ for $x \in C$. By [14, Theorem 3], VIP (26) is equivalent to finding a zero of the maximal monotone operator $F + N_C$. Recalling that $(I + \lambda N_C)^{-1} = P_C$ for any $\lambda > 0$, we can thus apply the previous results to get the following.

Corollary 10. *Suppose the following conditions are satisfied: *(C1)*;
*(C2) *;
*(C3) *;
*(C4) *. ** Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - \lambda_n F x_n)$$
converges weakly to a point in $S$, whenever such a point exists.*

Corollary 11. *Suppose that the following conditions are satisfied:*(C1) *;
*(C2) *;
*(C3) *;
*(C4) *. ** Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - \lambda_n F x_n)$$
converges weakly to a point in $S$, whenever such a point exists.*

Consider the optimization problem of finding a point $x^* \in C$ with the property
$$f(x^*) = \min_{x \in C} f(x),$$
where $f$ is a convex and differentiable function. The gradient projection algorithm (GPA) generates a sequence $(x_n)$ by the iterative procedure
$$x_{n+1} = P_C(x_n - \lambda \nabla f(x_n)),$$
where $x_0 \in C$ and $\lambda$ is a positive parameter. If, in addition, $\nabla f$ is $L$-Lipschitz continuous, that is,
$$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|$$
for any $x, y \in H$, then the GPA with $\lambda \in (0, 2/L)$ converges weakly to a minimizer of $f$ over $C$, if such minimizers exist (see, e.g., [15, Corollary 4.1]). Denote by $S$ the solution set of the variational inequality
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad x \in C.$$
According to [16, Lemma 5.13], $S$ coincides with the set of minimizers of $f$ over $C$. Further, if $\nabla f$ is $L$-Lipschitz continuous, then it is also $\frac{1}{L}$-ism (see [17, Corollary 10]). Thus, we can apply the previous results by letting $A = \nabla f$ and $B = N_C$.
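The GPA can be sketched as follows on a toy problem of our own (the names and data are illustrative assumptions): $f(x) = \frac{1}{2}\|x - c\|^2$, so $L = 1$, constrained to a box:

```python
import numpy as np

def gpa(x0, grad_f, project, lam, n_iter=200):
    # Gradient projection: x_{n+1} = P_C(x_n - lam * grad_f(x_n))
    x = x0
    for _ in range(n_iter):
        x = project(x - lam * grad_f(x))
    return x

c = np.array([2.0, -1.0, 0.3])
x_min = gpa(
    np.zeros(3),
    grad_f=lambda x: x - c,                  # f(x) = 0.5*||x - c||^2, L = 1
    project=lambda v: np.clip(v, 0.0, 1.0),  # C = [0, 1]^3
    lam=1.0,                                 # lam in (0, 2/L)
)
# The constrained minimizer is clip(c, 0, 1) = [1.0, 0.0, 0.3]
```

Note that this is exactly the FBS with $A = \nabla f$ and $J_\lambda = P_C$, as the paragraph above observes.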

Corollary 12. *Assume that $f$ is convex and differentiable with an $L$-Lipschitz-continuous gradient and that *(C1) *;
*(C2) *;
*(C3) *;
*(C4) *. ** Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - \lambda_n \nabla f(x_n))$$
converges weakly to a point in $S$, whenever such a point exists.*

Corollary 13. *Assume that $f$ is convex and differentiable with an $L$-Lipschitz-continuous gradient and that *(C1) *;
*(C2) *;
*(C3) *;
*(C4) *. ** Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - \lambda_n \nabla f(x_n))$$
converges weakly to a point in $S$, whenever such a point exists.*

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to express their sincere thanks to the anonymous referees and editors for their careful review of the paper and the valuable comments, which have greatly improved the earlier version of this paper. This work is supported by the National Natural Science Foundation of China (Grant nos. 11301253 and 11271112), the Basic and Frontier Project of Henan (no. 122300410268), and the Science and Technology Key Project of Education Department of Henan Province (14A110024).

#### References

- J. Eckstein and D. P. Bertsekas, "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators," *Mathematical Programming*, vol. 55, no. 3, pp. 293–318, 1992.
- P. L. Lions and B. Mercier, "Splitting algorithms for the sum of two nonlinear operators," *SIAM Journal on Numerical Analysis*, vol. 16, no. 6, pp. 964–979, 1979.
- T. Pennanen, "A splitting method for composite mappings," *Numerical Functional Analysis and Optimization*, vol. 23, no. 7-8, pp. 875–890, 2002.
- J. E. Spingarn, "Applications of the method of partial inverses to convex programming: decomposition," *Mathematical Programming*, vol. 32, no. 2, pp. 199–223, 1985.
- P. Tseng, "Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming," *Mathematical Programming*, vol. 48, pp. 249–263, 1990.
- P. Tseng, "Applications of a splitting algorithm to decomposition in convex programming and variational inequalities," *SIAM Journal on Control and Optimization*, vol. 29, no. 1, pp. 119–138, 1991.
- D. W. Peaceman and H. H. Rachford, "The numerical solution of parabolic and elliptic differential equations," *Journal of the Society for Industrial and Applied Mathematics*, vol. 3, no. 1, pp. 28–41, 1955.
- J. Douglas and H. H. Rachford, "On the numerical solution of heat conduction problems in two or three space variables," *Transactions of the American Mathematical Society*, vol. 82, pp. 421–439, 1956.
- G. B. Passty, "Ergodic convergence to a zero of the sum of monotone operators," *Journal of Mathematical Analysis and Applications*, vol. 72, no. 2, pp. 383–390, 1979.
- P. L. Combettes, "Solving monotone inclusions via compositions of nonexpansive averaged operators," *Optimization*, vol. 53, no. 5-6, pp. 475–504, 2004.
- R. T. Rockafellar, "Monotone operators and the proximal point algorithm," *SIAM Journal on Control and Optimization*, vol. 14, no. 5, pp. 877–898, 1976.
- G. López, V. Martín-Márquez, F. Wang, and H.-K. Xu, "Forward-backward splitting methods for accretive operators in Banach spaces," *Abstract and Applied Analysis*, vol. 2012, Article ID 109236, 25 pages, 2012.
- D. Butnariu, Y. Censor, and S. Reich, *Quasi-Fejérian Analysis of Some Optimization Algorithms*, in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Elsevier Science Publishers, Amsterdam, The Netherlands, 2001.
- R. T. Rockafellar, "On the maximality of sums of nonlinear monotone operators," *Transactions of the American Mathematical Society*, vol. 149, pp. 75–88, 1970.
- C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," *Inverse Problems*, vol. 20, no. 1, pp. 103–120, 2004.
- H. W. Engl, M. Hanke, and A. Neubauer, *Regularization of Inverse Problems*, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 1996.
- J. B. Baillon and G. Haddad, "Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones," *Israel Journal of Mathematics*, vol. 26, no. 2, pp. 137–150, 1977.

#### Copyright

Copyright © 2014 Hongwei Jiao and Fenghui Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.