## Special Issue: Iterative Methods and Applications 2014

Research Article | Open Access

Volume 2014 | Article ID 414031 | 5 pages | https://doi.org/10.1155/2014/414031

# On an Iterative Method for Finding a Zero to the Sum of Two Maximal Monotone Operators

Revised: 18 Aug 2014
Accepted: 19 Aug 2014
Published: 21 Aug 2014

#### Abstract

In this paper we consider the problem of finding a zero of the sum of two monotone operators. One method for solving such a problem is the forward-backward splitting method. We present new conditions that guarantee the weak convergence of the forward-backward method. Applications of these results, including variational inequalities and gradient projection algorithms, are also considered.

#### 1. Introduction

It is well known that monotone inclusion problems play an important role in nonlinear analysis. Such a problem consists of finding a zero of a maximal monotone operator. In some settings, however, such as convex programming and variational inequality problems, the operator needs to be decomposed into the sum of two monotone operators (see, e.g., [3–6]). In this way, one needs to find $x \in H$ so that

$$0 \in Ax + Bx, \tag{1}$$

where $A$ and $B$ are two monotone operators on a Hilbert space $H$. To solve such a problem, splitting methods, such as the Peaceman–Rachford algorithm [7] and the Douglas–Rachford algorithm [8], are usually considered. We consider the special case in which $B$ is multivalued and $A$ is single-valued. A classical way to solve problem (1) under this assumption is the forward-backward splitting method (FBS) (see [2, 9]). Starting with an arbitrary initial $x_0 \in H$, the FBS generates a sequence $(x_n)$ satisfying

$$x_{n+1} = (I + \lambda B)^{-1}(x_n - \lambda A x_n),$$

where $\lambda$ is a properly chosen positive real number. The FBS then converges weakly to a solution of problem (1) whenever such a point exists.
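To make the forward-backward step concrete, here is a small numerical sketch that is not from the paper: we take $A = \nabla(\frac{1}{2}\|x-b\|^2) = x - b$, which is $1$-ism, and $B = \partial(\mu\|\cdot\|_1)$, whose resolvent is the soft-thresholding map. The names, data, and parameter values are illustrative choices only.

```python
import numpy as np

def soft_threshold(x, t):
    # resolvent (I + t*B)^(-1) for B = subdifferential of ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(b, mu, lam=0.5, iters=200):
    """FBS for 0 in Ax + Bx with A x = x - b (the gradient of
    0.5*||x - b||^2, which is 1-ism) and B = subdifferential of mu*||.||_1."""
    x = np.zeros_like(b)
    for _ in range(iters):
        # backward (resolvent) step composed with forward (explicit) step
        x = soft_threshold(x - lam * (x - b), lam * mu)
    return x

b = np.array([3.0, -0.5, 1.2])
x_star = forward_backward(b, mu=1.0)
# in this toy problem the zero of A + B has the closed form soft_threshold(b, mu)
print(np.allclose(x_star, soft_threshold(b, 1.0), atol=1e-6))
```

Here $\lambda = 0.5$ lies in $(0, 2\nu) = (0, 2)$, which is what the weak convergence statement above requires.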

On the other hand, we observe that problem (1) is equivalent to the fixed point equation

$$x = (I + \lambda B)^{-1}(I - \lambda A)x$$

for the single-valued operator $T := (I + \lambda B)^{-1}(I - \lambda A)$. Moreover, if $\lambda$ is properly chosen, the operator $T$ is nonexpansive. Motivated by this observation, we use techniques from the fixed point theory of nonexpansive operators to investigate and study various monotone inclusion problems.
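The fixed-point reformulation can be checked numerically in the same illustrative setting as before (soft-thresholding as the resolvent; all names and data are our assumptions, not the paper's): a known zero of $A + B$ is a fixed point of $T$, and $T$ behaves nonexpansively on sampled pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(size=5)
lam, mu = 0.5, 1.0

def A(x):                  # forward operator: gradient of 0.5*||x - b||^2 (1-ism)
    return x - b

def J(x):                  # resolvent of B = subdifferential of mu*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - lam * mu, 0.0)

def T(x):                  # T = (I + lam B)^(-1) (I - lam A)
    return J(x - lam * A(x))

# a known zero of A + B: componentwise soft-threshold of b at level mu
x_star = np.sign(b) * np.maximum(np.abs(b) - mu, 0.0)
fixed_ok = np.allclose(T(x_star), x_star)   # x* solves (1) iff T x* = x*

# nonexpansiveness spot check: ||Tx - Ty|| <= ||x - y||
x, y = rng.normal(size=5), rng.normal(size=5)
nonexp_ok = np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
print(fixed_ok, nonexp_ok)
```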

The rest of this paper is organized as follows. In Section 2, some useful lemmas are introduced. In Section 3, we consider the modified forward-backward splitting method and prove its weak convergence under some new conditions. In Section 4, some applications of our results in finding a solution of the variational inequality problem are included.

#### 2. Preliminaries and Notation

Throughout the paper, $I$ denotes the identity operator, $\mathrm{Fix}(T)$ the set of fixed points of an operator $T$, and $\nabla f$ the gradient of the functional $f$. The notation “$\to$” denotes strong convergence and “$\rightharpoonup$” weak convergence. Denote by $\omega_w(x_n)$ the set of cluster points of $(x_n)$ in the weak topology; that is, $\omega_w(x_n) = \{x : x_{n_j} \rightharpoonup x \text{ for some subsequence } (x_{n_j}) \text{ of } (x_n)\}$.

Let $C$ be a nonempty closed convex subset of $H$. Denote by $P_C$ the projection from $H$ onto $C$; namely, for $x \in H$, $P_C x$ is the unique point in $C$ with the property

$$\|x - P_C x\| = \min_{y \in C} \|x - y\|.$$

It is well known that $P_C x$ is characterized by the inequality

$$\langle x - P_C x, y - P_C x \rangle \le 0, \quad \forall y \in C.$$

A single-valued operator $T : H \to H$ is called nonexpansive if

$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H;$$

firmly nonexpansive if

$$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle, \quad \forall x, y \in H;$$

and $\alpha$-averaged if there exist a constant $\alpha \in (0, 1)$ and a nonexpansive operator $S$ such that $T = (1 - \alpha)I + \alpha S$. Firmly nonexpansive operators are $\frac{1}{2}$-averaged.
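As a small illustration of the projection and its variational characterization (the ball $C$ and the sampling scheme are our choices, not the paper's):

```python
import numpy as np

def project_ball(x, c, r):
    # metric projection P_C onto the closed ball C = B(c, r)
    d = x - c
    n = np.linalg.norm(d)
    return x.copy() if n <= r else c + (r / n) * d

rng = np.random.default_rng(1)
c, r = np.zeros(3), 1.0
x = 3.0 * rng.normal(size=3)
p = project_ball(x, c, r)

# characterization of P_C: <x - p, y - p> <= 0 for every y in C
ys = [project_ball(3.0 * rng.normal(size=3), c, r) for _ in range(100)]
char_ok = all(np.dot(x - p, y - p) <= 1e-9 for y in ys)
print(char_ok)
```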

Lemma 1 (see [10]). The following assertions hold. (i) $T$ is $\alpha$-averaged for $\alpha \in (0, 1)$ if and only if

$$\|Tx - Ty\|^2 \le \|x - y\|^2 - \frac{1 - \alpha}{\alpha}\|(I - T)x - (I - T)y\|^2$$

for all $x, y \in H$. (ii) Assume that $T_i$ is $\alpha_i$-averaged for $\alpha_i \in (0, 1)$, $i = 1, 2$. Then $T_1 T_2$ is $\alpha$-averaged with $\alpha = \alpha_1 + \alpha_2 - \alpha_1 \alpha_2$.

The following lemma is known as the demiclosedness principle for nonexpansive mappings.

Lemma 2. Let $C$ be a nonempty closed convex subset of $H$ and $T : C \to C$ a nonexpansive operator with $\mathrm{Fix}(T) \ne \emptyset$. If $(x_n)$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - T)x_n \to y$, then $(I - T)x = y$. In particular, if $(I - T)x_n \to 0$, then $x \in \mathrm{Fix}(T)$.

A multivalued operator $B : H \to 2^H$ is called monotone if

$$\langle u - v, x - y \rangle \ge 0, \quad \forall u \in Bx,\ \forall v \in By;$$

a single-valued operator $A : H \to H$ is called $\nu$-inverse strongly monotone ($\nu$-ism) if there exists a constant $\nu > 0$ so that

$$\langle Ax - Ay, x - y \rangle \ge \nu \|Ax - Ay\|^2, \quad \forall x, y \in H;$$

and a monotone operator is maximal monotone if its graph is not properly contained in the graph of any other monotone operator.

In what follows, we shall assume that (i) $A$ is single-valued and $\nu$-ism; (ii) $B$ is multivalued and maximal monotone. Hereafter, if no confusion occurs, we denote by $J_\lambda := (I + \lambda B)^{-1}$ the resolvent of $B$ for any given $\lambda > 0$. It is known that $J_\lambda$ is single-valued and firmly nonexpansive; moreover, $\mathrm{Fix}(J_\lambda) = B^{-1}(0)$ (see [11]).
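Firm nonexpansiveness of a resolvent can be sanity-checked numerically with the scalar operator $B = \partial|\cdot|$ on $\mathbb{R}$, whose resolvent is the scalar soft-threshold (an illustrative choice, not the paper's operator):

```python
import numpy as np

lam = 0.7

def resolvent(x):
    # J_lam = (I + lam*B)^(-1) for B = subdifferential of |.| on R:
    # the scalar soft-threshold at level lam
    return np.sign(x) * max(abs(x) - lam, 0.0)

rng = np.random.default_rng(2)
firm_ok = True
for _ in range(200):
    x, y = rng.normal(), rng.normal()
    jx, jy = resolvent(x), resolvent(y)
    # firm nonexpansiveness: |Jx - Jy|^2 <= (Jx - Jy)(x - y)
    firm_ok &= (jx - jy) ** 2 <= (jx - jy) * (x - y) + 1e-12
print(firm_ok)
```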

Lemma 3 (see [12]). For $\lambda \in (0, 2\nu)$, let $T_\lambda := J_\lambda(I - \lambda A)$. Then (i) $\mathrm{Fix}(T_\lambda) = (A + B)^{-1}(0)$; (ii) $T_\lambda$ is $\frac{2\nu + \lambda}{4\nu}$-averaged; (iii) for $0 < \lambda \le \lambda'$, it follows that $\|x - T_\lambda x\| \le 2\|x - T_{\lambda'} x\|$ for all $x \in H$.

Definition 4. Assume that $(x_n)$ is a sequence in $H$ and that $(\varepsilon_n)$ is a nonnegative real sequence with $\sum_n \varepsilon_n < \infty$. Then $(x_n)$ is called quasi-Fejér monotone w.r.t. a nonempty set $C$ if

$$\|x_{n+1} - z\|^2 \le \|x_n - z\|^2 + \varepsilon_n, \quad \forall z \in C,\ \forall n \ge 0.$$

Lemma 5 (see [13]). Let $C$ be a nonempty closed convex subset of $H$. If the sequence $(x_n)$ is quasi-Fejér monotone w.r.t. $C$, then the following hold: (i) $(x_n)$ converges weakly to a point in $C$ if and only if $\omega_w(x_n) \subseteq C$; (ii) the sequence $(P_C x_n)$ converges strongly; (iii) if $x_n \rightharpoonup z \in C$, then $P_C x_n \to z$.

#### 3. Weak Convergence Theorem

In [10] Combettes considered a modified FBS: for any initial guess $x_0 \in H$, set

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n J_{\lambda_n}\big(x_n - \lambda_n(Ax_n + e_n)\big), \tag{14}$$

where $\alpha_n \in (0, 1]$ is a relaxation parameter, $\lambda_n \in (0, 2\nu)$ is a step size, and $e_n$ is the computational error. He proved the weak convergence of algorithm (14) provided that conditions (1)–(3) on $(\lambda_n)$, $(\alpha_n)$, and $(e_n)$ hold (see [10]). We observe that (14) is in fact a Mann-type iteration. In the following we shall prove the convergence of (14) under slightly weaker conditions.
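A Mann-type relaxed FBS with summable computational errors can be sketched in the same toy setting as before. The exact form of iteration (14) and its parameters are assumptions here, not a transcription of Combettes' algorithm; the data and error sequence are illustrative.

```python
import numpy as np

def soft(x, t):
    # resolvent of B = subdifferential of mu*||.||_1 at level t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def relaxed_fbs(b, mu, lam, alphas, errors):
    """Mann-type relaxed FBS with inexact forward steps:
    x_{n+1} = (1 - a_n) x_n + a_n * J_lam(x_n - lam*((x_n - b) + e_n))."""
    x = np.zeros_like(b)
    for a_n, e_n in zip(alphas, errors):
        y = soft(x - lam * ((x - b) + e_n), lam * mu)   # inexact forward, then backward
        x = (1.0 - a_n) * x + a_n * y                   # relaxation (Mann) step
    return x

b = np.array([3.0, -0.5, 1.2])
n = 400
alphas = np.full(n, 0.8)                                     # relaxation bounded away from 0
errors = [np.full(3, 1.0 / (k + 1) ** 2) for k in range(n)]  # summable error norms
x = relaxed_fbs(b, mu=1.0, lam=0.5, alphas=alphas, errors=errors)
print(np.allclose(x, soft(b, 1.0), atol=1e-3))
```

Despite the perturbations, the iterates still approach the exact zero of $A + B$ because the error norms are summable.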

Theorem 6. Suppose that conditions (C1)–(C5) are satisfied. If in addition $S := (A + B)^{-1}(0) \ne \emptyset$, then the sequence $(x_n)$ generated by (14) converges weakly to a point in $S$.

Proof. We first show that $(x_n)$ is quasi-Fejér monotone w.r.t. $S$. Let $z \in S$. It follows from Lemma 3 that $T_{\lambda_n} := J_{\lambda_n}(I - \lambda_n A)$ is $\frac{2\nu + \lambda_n}{4\nu}$-averaged and $z \in \mathrm{Fix}(T_{\lambda_n})$. Since $J_{\lambda_n}$ is nonexpansive, we have

$$\|x_{n+1} - z\| \le (1 - \alpha_n)\|x_n - z\| + \alpha_n\big(\|x_n - z\| + \lambda_n\|e_n\|\big) = \|x_n - z\| + \alpha_n \lambda_n \|e_n\|,$$

so that $\|x_{n+1} - z\|^2 \le \|x_n - z\|^2 + \varepsilon_n$ for a summable nonnegative sequence $(\varepsilon_n)$. By condition (C5), we conclude that $(x_n)$ is quasi-Fejér monotone w.r.t. $S$.
Next let us show $\omega_w(x_n) \subseteq S$. By Lemma 1, the operator $(1 - \alpha_n)I + \alpha_n T_{\lambda_n}$ is $\alpha_n \frac{2\nu + \lambda_n}{4\nu}$-averaged, which in turn implies that

$$\|x_{n+1} - z\|^2 \le \|x_n - z\|^2 - \beta_n \|x_n - T_{\lambda_n} x_n\|^2 + \varepsilon_n$$

for suitable $\beta_n > 0$. By conditions (C1) and (C4), $(\beta_n)$ is bounded away from zero, which yields $\|x_n - T_{\lambda_n} x_n\| \to 0$. By condition (C2), we find $\lambda > 0$ and $N$ so that $\lambda \le \lambda_n$ for all $n \ge N$. It then follows from Lemma 3 that $\|x_n - T_\lambda x_n\| \le 2\|x_n - T_{\lambda_n} x_n\|$ for all $n \ge N$, and hence $\|x_n - T_\lambda x_n\| \to 0$ as $n \to \infty$. Take $x \in \omega_w(x_n)$ and a subsequence $(x_{n_j})$ of $(x_n)$ such that $x_{n_j} \rightharpoonup x$. Since $T_\lambda$ is nonexpansive, applying Lemmas 2 and 3 yields $x \in \mathrm{Fix}(T_\lambda) = S$, and thus $\omega_w(x_n) \subseteq S$. By Lemma 5 the proof is complete.

We can also present another condition for the weak convergence of (14).

Theorem 7. Suppose that conditions (C1)–(C5) are satisfied. If in addition $S := (A + B)^{-1}(0) \ne \emptyset$, then the sequence $(x_n)$ generated by (14) converges weakly to a point in $S$.

Proof. Compared with the proof of Theorem 6, it suffices to show that $\|x_n - T_{\lambda_n} x_n\| \to 0$ as $n \to \infty$. Observe that the quasi-Fejér estimate still holds. Let $z \in S$; then by condition (C4) and Lemma 3 we have

$$\|x_{n+1} - z\|^2 \le \|x_n - z\|^2 - \beta_n \|x_n - T_{\lambda_n} x_n\|^2 + \varepsilon_n,$$

where $\beta_n > 0$ is a properly chosen real number and $(\varepsilon_n)$ is summable. Then, for any given $z \in S$, conditions (C1) and (C5) imply that $\lim_n \|x_n - z\|$ exists, and therefore $\|x_n - T_{\lambda_n} x_n\| \to 0$ as $n \to \infty$. Hence the proof is complete.

Applying Theorem 7, one can easily get the following.

Corollary 8. Suppose that conditions (1)–(4) are satisfied. Then the sequence $(x_n)$ generated by (14) converges weakly to some point in $(A + B)^{-1}(0)$.

Remark 9. Corollary 8 implies that our conditions are slightly weaker than Combettes' whenever the sequence $(\lambda_n)$ approaches some constant.

#### 4. Application

Let $C$ be a nonempty closed convex subset of $H$. A variational inequality problem (VIP) is formulated as the problem of finding a point $x^* \in C$ with the property

$$\langle F x^*, x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{26}$$

where $F : C \to H$ is a nonlinear operator. We shall denote by $\mathrm{VI}(C, F)$ the solution set of VIP (26). One method for solving the VIP is the projection algorithm, which, starting with an arbitrary initial $x_0 \in C$, generates a sequence $(x_n)$ satisfying

$$x_{n+1} = P_C(x_n - \lambda F x_n), \tag{27}$$

where $\lambda$ is a properly chosen real number. If, in addition, $F$ is $\nu$-ism, then the iteration (27) with $\lambda \in (0, 2\nu)$ converges weakly to a point in $\mathrm{VI}(C, F)$, whenever such a point exists.
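The projection algorithm (27) can be sketched for a simple VIP in which $F(x) = x - b$ (which is $1$-ism) and $C$ is a box; the closed-form solution $P_C(b)$ and the sampled certificate are illustrative assumptions, not the paper's example.

```python
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)            # P_C for the box C = [lo, hi]

def projected_algorithm(b, lo, hi, lam=0.5, iters=200):
    """Iteration (27): x_{n+1} = P_C(x_n - lam*F(x_n)) with F(x) = x - b (1-ism)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = project_box(x - lam * (x - b), lo, hi)
    return x

b = np.array([2.0, -3.0, 0.25])
lo, hi = -np.ones(3), np.ones(3)
x = projected_algorithm(b, lo, hi)
sol_ok = np.allclose(x, np.clip(b, lo, hi), atol=1e-6)   # solution is P_C(b) here

# VIP certificate: <F(x), y - x> >= 0 for sampled y in C
rng = np.random.default_rng(3)
ys = rng.uniform(lo, hi, size=(100, 3))
vi_ok = all(np.dot(x - b, y - x) >= -1e-6 for y in ys)
print(sol_ok, vi_ok)
```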

Let $N_C$ be the normal cone for $C$; that is, $N_C x = \{u \in H : \langle u, y - x \rangle \le 0,\ \forall y \in C\}$ for $x \in C$. By [14, Theorem 3], VIP (26) is equivalent to finding a zero of the maximal monotone operator $F + N_C$. Recalling that $(I + \lambda N_C)^{-1} = P_C$ for any $\lambda > 0$, we can thus apply the previous results to get the following.

Corollary 10. Suppose that conditions (C1)–(C4) are satisfied. Then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - \lambda_n F x_n)$$

converges weakly to a point in $\mathrm{VI}(C, F)$, whenever such a point exists.

Corollary 11. Suppose that conditions (C1)–(C4) are satisfied. Then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C(x_n - \lambda_n F x_n)$$

converges weakly to a point in $\mathrm{VI}(C, F)$, whenever such a point exists.

Consider the optimization problem of finding a point $x^* \in C$ with the property

$$f(x^*) = \min_{x \in C} f(x),$$

where $f$ is a convex and differentiable function. The gradient projection algorithm (GPA) generates a sequence $(x_n)$ by the iterative procedure

$$x_{n+1} = P_C\big(x_n - \lambda \nabla f(x_n)\big),$$

where $x_0 \in C$ and $\lambda$ is a positive parameter. If, in addition, $\nabla f$ is $L$-Lipschitz continuous, that is,

$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|$$

for any $x, y \in C$, then the GPA with $\lambda \in (0, 2/L)$ converges weakly to a minimizer of $f$ over $C$, if such minimizers exist (see, e.g., [15, Corollary 4.1]). Denote by $S$ the solution set of the variational inequality

$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C.$$

According to [16, Lemma 5.13], $S$ coincides with the set of minimizers of $f$ over $C$. Further, if $\nabla f$ is $L$-Lipschitz continuous, then it is also $\frac{1}{L}$-ism (see [17, Corollary 10]). Thus, we can apply the previous results by letting $F = \nabla f$.
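A sketch of the GPA under the stated step-size rule, for an illustrative least-squares objective over the nonnegative orthant (the matrix, random seed, and iteration count are our choices):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(6, 3))
yobs = rng.normal(size=6)
L = np.linalg.norm(M.T @ M, 2)            # Lipschitz constant of grad f

def grad(x):                               # f(x) = 0.5*||Mx - yobs||^2
    return M.T @ (M @ x - yobs)

def proj(x):                               # P_C for C = nonnegative orthant
    return np.maximum(x, 0.0)

x = np.zeros(3)
lam = 1.0 / L                              # step size inside (0, 2/L)
for _ in range(10000):
    x = proj(x - lam * grad(x))            # gradient projection step

# first-order optimality over C: <grad f(x), z - x> >= 0 for sampled z in C
zs = rng.uniform(0.0, 3.0, size=(200, 3))
opt_ok = all(np.dot(grad(x), z - x) >= -1e-5 for z in zs)
print(opt_ok)
```

The sampled inequality is exactly the variational inequality above with $F = \nabla f$, checked at randomly drawn feasible points.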

Corollary 12. Assume that $f$ is convex and differentiable with $L$-Lipschitz-continuous gradient and that conditions (C1)–(C4) are satisfied. Then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C\big(x_n - \lambda_n \nabla f(x_n)\big)$$

converges weakly to a point in $S$, whenever such a point exists.

Corollary 13. Assume that $f$ is convex and differentiable with $L$-Lipschitz-continuous gradient and that conditions (C1)–(C4) are satisfied. Then the sequence $(x_n)$ generated by

$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_C\big(x_n - \lambda_n \nabla f(x_n)\big)$$

converges weakly to a point in $S$, whenever such a point exists.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to express their sincere thanks to the anonymous referees and editors for their careful review of the paper and the valuable comments, which have greatly improved the earlier version of this paper. This work is supported by the National Natural Science Foundation of China (Grant nos. 11301253 and 11271112), the Basic and Frontier Project of Henan (no. 122300410268), and the Science and Technology Key Project of Education Department of Henan Province (14A110024).

1. J. Eckstein and D. P. Bertsekas, “On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Mathematical Programming, vol. 55, no. 3, pp. 293–318, 1992.
2. P. L. Lions and B. Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, no. 6, pp. 964–979, 1979.
3. T. Pennanen, “A splitting method for composite mappings,” Numerical Functional Analysis and Optimization, vol. 23, no. 7-8, pp. 875–890, 2002.
4. J. E. Spingarn, “Applications of the method of partial inverses to convex programming: decomposition,” Mathematical Programming, vol. 32, no. 2, pp. 199–223, 1985.
5. P. Tseng, “Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming,” Mathematical Programming, vol. 48, pp. 249–263, 1990.
6. P. Tseng, “Applications of a splitting algorithm to decomposition in convex programming and variational inequalities,” SIAM Journal on Control and Optimization, vol. 29, no. 1, pp. 119–138, 1991.
7. D. W. Peaceman and H. H. Rachford, “The numerical solution of parabolic and elliptic differential equations,” Journal of the Society for Industrial and Applied Mathematics, vol. 3, no. 1, pp. 28–41, 1955.
8. J. Douglas and H. H. Rachford, “On the numerical solution of heat conduction problems in two or three space variables,” Transactions of the American Mathematical Society, vol. 82, pp. 421–439, 1956.
9. G. B. Passty, “Ergodic convergence to a zero of the sum of monotone operators,” Journal of Mathematical Analysis and Applications, vol. 72, no. 2, pp. 383–390, 1979.
10. P. L. Combettes, “Solving monotone inclusions via compositions of nonexpansive averaged operators,” Optimization, vol. 53, no. 5-6, pp. 475–504, 2004.
11. R. T. Rockafellar, “Monotone operators and the proximal point algorithm,” SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.
12. G. López, V. Martín-Márquez, F. Wang, and H.-K. Xu, “Forward-backward splitting methods for accretive operators in Banach spaces,” Abstract and Applied Analysis, vol. 2012, Article ID 109236, 25 pages, 2012.
13. D. Butnariu, Y. Censor, and S. Reich, “Quasi-Fejérian analysis of some optimization algorithms,” in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, Elsevier Science Publishers, Amsterdam, The Netherlands, 2001.
14. R. T. Rockafellar, “On the maximality of sums of nonlinear monotone operators,” Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
15. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
16. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 1996.
17. J. B. Baillon and G. Haddad, “Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones,” Israel Journal of Mathematics, vol. 26, no. 2, pp. 137–150, 1977.
