*Abstract and Applied Analysis*, Volume 2016, Article ID 2371857, 10 pages. http://dx.doi.org/10.1155/2016/2371857
Research Article

## The Viscosity Approximation Forward-Backward Splitting Method for Zeros of the Sum of Monotone Operators

Department of Mathematics and Statistical Sciences, Botswana International University of Science and Technology, Private Bag Box 16, Palapye, Botswana

Received 8 September 2015; Accepted 8 December 2015

Copyright © 2016 Oganeditse Aaron Boikanyo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We investigate the convergence of the following general inexact algorithm for approximating a zero of the sum of a cocoercive operator $A$ and a maximal monotone operator $B$ with $D(B) \subset H$: $x_{n+1} = \alpha_n f(x_n) + \lambda_n x_n + \gamma_n J_{\beta_n}(x_n - \beta_n A x_n) + e_n$ for $n \geq 1$, for a given $x_1$ in a real Hilbert space $H$, where $(\alpha_n)$, $(\lambda_n)$, and $(\gamma_n)$ are real sequences with $\alpha_n + \lambda_n + \gamma_n = 1$ for all $n \geq 1$, $(e_n)$ denotes the error sequence, and $f : H \to H$ is a contraction. The algorithm is known to converge under the following assumptions on $(\gamma_n)$ and $(e_n)$: (i) $(\gamma_n)$ is bounded below away from 0 and above away from 1 and (ii) $(e_n)$ is summable in norm. In this paper, we show that these conditions can further be relaxed to, respectively, the following: (i) $(\gamma_n)$ is bounded below away from 0 and above away from 3/2 and (ii) $(e_n)$ is square summable in norm; and we still obtain strong convergence results.

#### 1. Introduction

Let $H$ be a real Hilbert space endowed with the inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$. Let us consider the problem of finding $x \in H$ such that $$0 \in Ax + Bx, \quad (1)$$ where $A$ and $B$ are maximal monotone operators. The literature on problem (1) is extensive (see [1–5] and the references therein). Note that two possibilities exist here: either $A + B$ is maximal monotone or it is not. (For conditions that ensure that $A + B$ is maximal monotone, we refer the reader to [6].) One of the iterative procedures used to solve problem (1), in the absence of the maximality of $A + B$, is the class of splitting methods, which have received much attention in the recent past due to their applications in image recovery, signal processing, and machine learning. One of the popular iterative methods used for solving problem (1) is the forward-backward method introduced by Passty [7] in 1979, which defines a sequence $(x_n)$ by $$x_{n+1} = J_{\beta_n}(x_n - \beta_n A x_n), \quad (2)$$ where $(\beta_n)$ is a sequence of positive numbers, $A$ and $B$ are maximal monotone operators with $J_{\beta_n} = (I + \beta_n B)^{-1}$, and $A$ is single valued. Since its inception, the splitting method (2) has received much attention from several authors, including Tseng [8], Mercier [9], Gabay [10], and Chen [11]. The projected gradient method is in fact a special case of scheme (2). In this case, $A$ is taken as the gradient of a smooth convex function and $B$ as the subdifferential of the indicator function of a closed and convex subset of a real Hilbert space.
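The projected-gradient special case of scheme (2) can be sketched numerically as follows; the quadratic objective, the box constraint, and the step size below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def forward_backward_projected_gradient(grad, project, x0, beta, n_iters=200):
    """Forward-backward iteration x_{k+1} = P_C(x_k - beta * grad(x_k)),
    i.e. scheme (2) with A the gradient of the objective and B the
    subdifferential of the indicator function of the closed convex set C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        # forward (gradient) step, then backward (projection) step
        x = project(x - beta * grad(x))
    return x

# Illustrative problem: minimize phi(x) = 0.5*||x - c||^2 over the box [0, 1]^2.
c = np.array([2.0, -0.5])
grad = lambda x: x - c                     # gradient of phi; it is 1-cocoercive
project = lambda x: np.clip(x, 0.0, 1.0)   # projection onto the box

x_star = forward_backward_projected_gradient(grad, project, x0=np.zeros(2), beta=1.0)
# The minimizer over the box is the projection of c onto it: (1.0, 0.0).
```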

In the case when $A + B$ is maximal monotone, the most popular iterative method for solving (1) is the proximal point algorithm (PPA), which was first introduced by Martinet [12] in 1970 and later developed by Rockafellar [13]. Due to the failure of Rockafellar’s PPA to converge strongly [14] for arbitrary maximal monotone operators, several authors [15–23] have presented modified versions of the PPA that always converge strongly. Recently, Yao and Noor [22] proposed the contraction proximal point algorithm $$x_{n+1} = \alpha_n u + \lambda_n x_n + \gamma_n J_{\beta_n} x_n + e_n, \quad (3)$$ where $B$ is a maximal monotone operator, $(\alpha_n)$, $(\lambda_n)$, and $(\gamma_n)$ are sequences in $(0,1)$ with $\alpha_n + \lambda_n + \gamma_n = 1$ for all $n \geq 1$, $(\beta_n) \subset (0, \infty)$, $(e_n)$ is a sequence of computational errors, $x_1 \in H$ is given, and $u \in H$. Algorithm (3) was used to find zeros of maximal monotone operators. The viscosity approximation method, which is also used for finding zeros of maximal monotone operators, was introduced by Takahashi [19] as a scheme that generates sequences by $$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) J_{\beta_n} x_n, \quad (4)$$ where $f$ is a contraction from a closed and convex subset $C$ of a reflexive Banach space $E$ into itself, $B$ is an accretive operator (hence monotone if $E$ is a Hilbert space) satisfying the range condition, $(\alpha_n)$ is a sequence in $(0,1)$, $\beta_n > 0$, and $x_1 \in C$. Under appropriate conditions, strong convergence of scheme (4) is obtained.
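To make the resolvent concrete, the following sketch (our illustration, not from the paper) runs the exact, unperturbed proximal point iteration $x_{n+1} = J_\beta x_n$ for the maximal monotone operator $B = \partial|\cdot|$ on the real line, whose resolvent is the soft-thresholding map.

```python
def soft_threshold(x, beta):
    """Resolvent J_beta = (I + beta*B)^{-1} for B = subdifferential of |.|:
    the soft-thresholding operator on the real line."""
    if x > beta:
        return x - beta
    if x < -beta:
        return x + beta
    return 0.0

def proximal_point(x0, beta=0.4, n_iters=50):
    """Unperturbed proximal point algorithm x_{n+1} = J_beta(x_n);
    it converges to a zero of B (here the unique zero x = 0)."""
    x = x0
    for _ in range(n_iters):
        x = soft_threshold(x, beta)
    return x

x_final = proximal_point(5.0)   # each step moves 0.4 toward 0, then lands exactly on 0
```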

Recently, the forward-backward splitting method (2), the contraction proximal point algorithm (3), and the viscosity approximation method (4) were combined to obtain the algorithm $$x_{n+1} = \alpha_n f(x_n) + \lambda_n x_n + \gamma_n J_{\beta_n}(x_n - \beta_n A x_n) + e_n, \quad (5)$$ where $A$ is an $\alpha$-cocoercive operator, $B$ is a maximal monotone operator, $(\alpha_n)$, $(\lambda_n)$, and $(\gamma_n)$ are sequences in $(0,1)$ with $\alpha_n + \lambda_n + \gamma_n = 1$ for all $n \geq 1$, $(\beta_n) \subset (0, \infty)$, $(e_n)$ is a sequence of computational errors, $x_1$ is given, and $f$ is a contraction. They were able to prove that the sequence generated by (5) converges strongly to the unique fixed point of the operator $P_S f$, where $S = (A + B)^{-1}(0)$. Note that when $e_n = 0$ for all $n \geq 1$, it suffices to derive strong convergence of (5) when the iterates remain in a closed and convex subset $C$ of $H$. Note that, in the case when $e_n \neq 0$, if $x_n \in C$ for a given $n$, the iterate $x_{n+1}$ may fail to be in $C$. That is, algorithm (5) is not well defined on $C$ if $e_n \neq 0$.
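A minimal numerical sketch of an iteration of this combined viscosity forward-backward type is given below; the coefficient schedules, the test problem, and the zero error terms are illustrative assumptions of ours rather than the paper's setting.

```python
import numpy as np

def soft_threshold(x, t):
    # Resolvent of B = subdifferential of |.| (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def viscosity_forward_backward(f, A, resolvent, x0, beta=0.5, n_iters=2000):
    """Sketch of a viscosity forward-backward iteration of the form
      x_{n+1} = a_n f(x_n) + l_n x_n + g_n J_beta(x_n - beta A(x_n)) + e_n,
    with illustrative coefficients a_n -> 0, sum a_n = infinity,
    l_n + g_n = 1 - a_n, and zero errors e_n = 0."""
    x = float(x0)
    for n in range(1, n_iters + 1):
        a_n = 1.0 / (n + 1)            # weight on the contraction f
        l_n = 0.25 * (1.0 - a_n)       # weight on the current iterate
        g_n = 0.75 * (1.0 - a_n)       # weight on the forward-backward step
        fb = resolvent(x - beta * A(x), beta)   # forward step, then backward step
        x = a_n * f(x) + l_n * x + g_n * fb
    return x

# Illustrative problem: 0 in (A + B)x with A(x) = x - 2 (1-cocoercive) and
# B = subdifferential of |.|; the unique zero is x* = 1.
A = lambda x: x - 2.0
f = lambda x: 0.5 * x          # a 0.5-contraction
x_approx = viscosity_forward_backward(f, A, soft_threshold, x0=0.0)
```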

It is worthy of note that Yao and Noor's algorithm (3) excludes the important overrelaxed case $\gamma_n > 1$. However, the overrelaxation factor may indeed speed up the convergence rate of the algorithm (see [24]). That is why Wang and Cui [25] investigated the convergence of sequences generated by $$x_{n+1} = \alpha_n u + \lambda_n x_n + \gamma_n J_{\beta_n} x_n + e_n \quad (6)$$ for real sequences $(\alpha_n)$, $(\lambda_n)$, and $(\gamma_n)$ with $\alpha_n + \lambda_n + \gamma_n = 1$ for all $n \geq 1$. A natural question thus arises: is it possible to relax the conditions on $(\gamma_n)$ and $(e_n)$ used in algorithm (5) further? Our purpose in this paper is to answer this question affirmatively. Furthermore, we prove a strong convergence result associated with the algorithm $$x_{n+1} = \alpha_n f(x_n) + \lambda_n x_n + \gamma_n J_{\beta_n}(x_n - \beta_n A x_n) + e_n \quad (7)$$ for real sequences $(\alpha_n)$, $(\lambda_n)$, and $(\gamma_n)$ with $\alpha_n + \lambda_n + \gamma_n = 1$ for all $n \geq 1$, for the case when the sequence of error terms is square summable in norm. Our main result improves and refines similar results in the literature by using fewer conditions to derive strong convergence of (7). In addition, our results generalize many results in the literature, such as the main theorems of [25], [26], and [27].

#### 2. Preliminary Results

Let $H$ be a real Hilbert space endowed with the inner product $\langle \cdot, \cdot \rangle$ and norm $\| \cdot \|$. A map $T : H \to H$ is called a Lipschitz mapping if there exists $L > 0$ such that $\|Tx - Ty\| \leq L \|x - y\|$ for all $x, y \in H$. The number $L$ associated with $T$ is called a Lipschitz constant. If $L \in [0, 1)$, we say that $T$ is a contraction, and $T$ is called nonexpansive if $L = 1$. The set of fixed points of $T$ is given by $F(T) = \{x \in H : Tx = x\}$. It is well known that if $T$ is nonexpansive, then $I - T$ is demiclosed at zero and $F(T)$ is closed and convex; see [28]. Given an operator $S : H \to H$, we say that $S$ is demiclosed at zero if, for any sequence $(x_n)$ in $H$, the following implication holds: $$x_n \rightharpoonup x \ \text{and} \ Sx_n \to 0 \implies Sx = 0.$$ Here, $x_n \rightharpoonup x$ means that $(x_n)$ converges weakly to $x$, and $x_n \to x$ is used to indicate that $(x_n)$ converges strongly to $x$. These notations will be used in the sequel.

For a nonempty closed and convex subset $C$ of $H$, the metric projection (nearest point mapping) $P_C : H \to C$ is defined as follows: given $x \in H$, $P_C x$ is the unique point in $C$ having the property $$\|x - P_C x\| = \min_{y \in C} \|x - y\|.$$ Note that the projection operator $P_C$ is firmly nonexpansive and has the following characterization, which will be used in this paper: for any $x \in H$ and $y \in C$, $$\langle x - P_C x, y - P_C x \rangle \leq 0.$$ Recall that an operator $T : H \to H$ is said to be firmly nonexpansive if for every $x, y \in H$, $$\|Tx - Ty\|^2 \leq \langle Tx - Ty, x - y \rangle.$$ It is clear that every firmly nonexpansive map is nonexpansive. A single-valued operator $A : H \to H$ is called $\alpha$-inverse strongly monotone ($\alpha$-cocoercive) for a positive number $\alpha$ if $$\langle Ax - Ay, x - y \rangle \geq \alpha \|Ax - Ay\|^2 \quad \text{for all } x, y \in H.$$ It is known that, for an $\alpha$-inverse strongly monotone mapping $A$, the map $I - \beta A$ is nonexpansive for all $\beta \in (0, 2\alpha]$. A (possibly set-valued) nonlinear operator $B : D(B) \subset H \to 2^H$ is said to be monotone if $$\langle x - x', y - y' \rangle \geq 0 \quad \text{whenever } y \in Bx, \ y' \in Bx'.$$ In other words, $B$ is monotone if its graph, $G(B) = \{(x, y) : x \in D(B), \ y \in Bx\}$, is a monotone subset of $H \times H$. A monotone operator $B$ is called maximal monotone if its graph is not properly contained in the graph of any other monotone operator. Note that an $\alpha$-cocoercive operator is monotone. We know that a monotone operator $B$ is maximal monotone if and only if the range of $I + \beta B$ is equal to $H$ (i.e., $R(I + \beta B) = H$) for every $\beta > 0$. If $B$ is maximal monotone and $\beta$ is a positive number, then the resolvent of $B$ is the single-valued and firmly nonexpansive operator defined by $J_\beta = (I + \beta B)^{-1}$. We note that $J_\beta$ is everywhere defined on $H$. For more information, refer to [29].
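The firm nonexpansiveness of the resolvent can be checked numerically on a simple example; here $B = \partial|\cdot|$ on the real line, whose resolvent is soft-thresholding (the operator choice is ours, for illustration only).

```python
import numpy as np

def resolvent_abs(x, beta):
    """Resolvent J_beta = (I + beta*B)^{-1} for the maximal monotone
    operator B = subdifferential of |.|: the soft-thresholding map."""
    return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

# Numerically verify firm nonexpansiveness:
#   ||Jx - Jy||^2 <= <Jx - Jy, x - y>   for random scalar pairs.
rng = np.random.default_rng(0)
beta = 0.7
ok = True
for _ in range(1000):
    x, y = rng.uniform(-5, 5, size=2)
    jx, jy = resolvent_abs(x, beta), resolvent_abs(y, beta)
    ok &= (jx - jy) ** 2 <= (jx - jy) * (x - y) + 1e-12  # small tolerance for rounding
```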

Finally, we recall some elementary inequalities in real Hilbert spaces. For every $x, y \in H$, the inequality $$\|x + y\|^2 \leq \|x\|^2 + 2\langle y, x + y \rangle \quad (14)$$ holds. If $\alpha, \lambda, \gamma$ are any real numbers in $[0, 1]$ with $\alpha + \lambda + \gamma = 1$, then, for any $x, y, z \in H$, $$\|\alpha x + \lambda y + \gamma z\|^2 = \alpha \|x\|^2 + \lambda \|y\|^2 + \gamma \|z\|^2 - \alpha\lambda \|x - y\|^2 - \alpha\gamma \|x - z\|^2 - \lambda\gamma \|y - z\|^2. \quad (15)$$ Moreover, for any real numbers $a$ and $b$, the inequality $$2ab \leq a^2 + b^2 \quad (16)$$ can also be proved easily.

Now, we establish two lemmas that will enable us to prove our main result.

Lemma 1. Let $A : H \to H$ be an $\alpha$-cocoercive operator and let $\beta$ be a positive real number satisfying $\beta \leq \alpha$. Then, $I - \beta A$ is firmly nonexpansive.

Proof. We have to show that $$\|(I - \beta A)x - (I - \beta A)y\|^2 \leq \langle (I - \beta A)x - (I - \beta A)y, x - y \rangle \quad \text{for all } x, y \in H. \quad (17)$$ Using the definition of $I - \beta A$, we have $$\begin{aligned} \|(I - \beta A)x - (I - \beta A)y\|^2 &= \|x - y\|^2 - 2\beta \langle Ax - Ay, x - y \rangle + \beta^2 \|Ax - Ay\|^2 \\ &\leq \|x - y\|^2 - \beta \langle Ax - Ay, x - y \rangle - \alpha\beta \|Ax - Ay\|^2 + \beta^2 \|Ax - Ay\|^2, \end{aligned}$$ where the inequality follows from the fact that $A$ is $\alpha$-cocoercive. Since $\beta \leq \alpha$, it follows that $\beta(\beta - \alpha)\|Ax - Ay\|^2 \leq 0$, and inequality (17) follows at once.

Lemma 2. Let and be firmly nonexpansive mappings. Then, In particular, is nonexpansive.

Proof. From the firmly nonexpansive property of , we get where the second inequality follows from the fact that is firmly nonexpansive. Using (16) with and , we get The proof is complete.

Remark 3. Let be maximal monotone and let be a -cocoercive mapping with . Taking and for some , both and are firmly nonexpansive (see Lemma 1). In addition, if , then, for any and , we obtain from Lemma 2 that

We next recall some lemmas that will be useful in proving our main results. In the next two lemmas, it is assumed that is a -cocoercive mapping with and is maximal monotone.

Lemma 4. If and are any two positive real numbers, then holds for any .

The proof of Lemma 4 can be reproduced easily.

Lemma 5 (see López et al. [3]). If and are two positive real numbers such that , then holds for any .

The next two lemmas are important in showing that under suitable assumptions our sequence generated by (7) is bounded. The proofs can be reproduced by following some ideas of [26].

Lemma 6. Let be a sequence of nonnegative real numbers satisfying for some , where and are sequences of positive real numbers. Then, is bounded.

Lemma 7. Let be a sequence of nonnegative real numbers satisfying where are positive constants, is a sequence in , and with for all . Then, is bounded.

The last two lemmas will be vital in deducing strong convergence of the sequence generated by (7).

Lemma 8 (see Xu [20]). Let $(a_n)$ be a sequence of nonnegative real numbers satisfying $$a_{n+1} \leq (1 - \alpha_n) a_n + \alpha_n b_n + c_n,$$ where $(\alpha_n)$, $(b_n)$, and $(c_n)$ satisfy the following conditions: (i) $(\alpha_n) \subset [0, 1]$, with $\sum_{n=1}^{\infty} \alpha_n = \infty$, (ii) $c_n \geq 0$ for all $n \geq 1$ with $\sum_{n=1}^{\infty} c_n < \infty$, and (iii) $\limsup_{n \to \infty} b_n \leq 0$. Then, $\lim_{n \to \infty} a_n = 0$.
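The conclusion of Lemma 8 can be illustrated numerically with the recursion $a_{n+1} = (1 - \alpha_n) a_n + \alpha_n b_n + c_n$ and simple admissible parameter sequences (our choices, for illustration only):

```python
# Illustrative check of the recursion a_{n+1} = (1 - alpha_n) a_n + alpha_n b_n + c_n
# with alpha_n = 1/(n+1) (divergent sum), b_n = 1/n -> 0 (so limsup b_n <= 0),
# and c_n = 1/n^2 >= 0 (summable): the lemma predicts a_n -> 0.
a = 10.0
for n in range(1, 200_000):
    alpha = 1.0 / (n + 1)
    b = 1.0 / n
    c = 1.0 / n**2
    a = (1.0 - alpha) * a + alpha * b + c
# After many iterations, a is close to zero, as the lemma guarantees.
```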

Lemma 9 (see Maingé [30]). Let $(\Gamma_n)$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $(\Gamma_{n_j})$ of $(\Gamma_n)$ such that $\Gamma_{n_j} < \Gamma_{n_j + 1}$ for all $j \geq 0$. Define an integer sequence $(\tau(n))_{n \geq n_0}$ as $$\tau(n) = \max\{k \leq n : \Gamma_k < \Gamma_{k+1}\}.$$ Then, $\tau(n) \to \infty$ as $n \to \infty$, and for all $n \geq n_0$, $$\max\{\Gamma_{\tau(n)}, \Gamma_n\} \leq \Gamma_{\tau(n)+1}.$$

#### 3. Main Results

For any $\beta \in (0, 2\alpha]$, the map $J_\beta(I - \beta A)$, being the composition of nonexpansive maps, is nonexpansive. Since the fixed point set (if it is not empty) of a nonexpansive map is closed and convex, it follows that if $S = F(J_\beta(I - \beta A)) \neq \emptyset$, then $S$ is closed and convex. Therefore, the map $P_S$ is well defined and nonexpansive. Also, for any closed and convex set $C$, the map $P_C f$ is a $\rho$-contraction whenever $f$ is a $\rho$-contraction. This information will be used in this section.

Theorem 10. Let be a real Hilbert space and let be a -contraction with . Assume that is a -cocoercive operator and is a maximal monotone operator with . For , let be a sequence generated by (7) with , , and satisfying , , and is a sequence of errors in . Then, converges strongly to the unique fixed point of , provided that(i) and ,(ii),(iii),(iv) with .

Proof. Let us denote and . Then, from (7), . In order to prove that is bounded, we note that, from the condition as , one may assume without loss of generality that for all . If is the unique fixed point of , then and from (15) we have To estimate , let us first observe that Making use of inequality (22), we obtain Therefore, it follows that Note that condition (iii) of the theorem is equivalent to . Then, there exist positive real numbers and such that . From this condition, we derive for some positive constant . Denote . Since , it follows that as well. Therefore, from (32) and condition (iv) of the theorem, we have On the other hand, by similar arguments as above, we have where the inequality follows from (16). Moreover, using the property that is a -contraction, we get Since , it follows that Combining this last inequality with (34), it follows from (30) thatwhere . Since for very large , by our assumption, the inequality above reduces to Applying Lemma 6 with and , we derive that the sequence is bounded. Therefore, is bounded.
We next show that holds for some positive constants and , where denotes To this end, we first note that the condition and (32) imply that Therefore, from the equality and using the fact that is a -contraction, we obtain On the other hand, it follows from (14) that Again, using the fact that is a -contraction with , we have where . Combining this inequality with (34), we obtain Using condition (iii) of the theorem and the boundedness of , we readily get (40).
Now, from , we can find such that Therefore, if we denote , where is a nonnegative sequence that converges to zero and is defined by then converges to zero strongly if and only if does. In addition, we rewrite inequality (40) in the form where denotes The next step is to show that converges strongly to zero. We achieve this by considering two possible cases on the sequence .
Case 1. We assume that is eventually decreasing (i.e., there exists such that is decreasing for all ). Then, converges and rearranging terms in (49) we obtain Since is bounded, we pass to the limit in the above inequality to get Take a subsequence of converging weakly to such that Note that, from (7), we derive the inequality which together with (52), the boundedness of , and as implies that The above limit implies that If is the lower bound of , then we have from Lemma 5which together with (52) implies that Since for all and is nonexpansive, it follows that is demiclosed at zero; see [29, page ]. Therefore, from (58) and the property that is demiclosed at zero, we conclude that . Hence, from the characterization of projections, we conclude that By conditions (i) and (iv) of the theorem and the fact that as , it follows that Finally, we derive from inequality (49) that The conclusion that follows from Lemma 8.
Case 2. The sequence is not eventually decreasing; that is, there is a subsequence of such that for all . In this case, we define an integer sequence as in Lemma 9. Note that the subsequence satisfies the condition for all . It then follows from (49) that Since , as , and is bounded, we conclude that Using similar arguments as in Case 1, we conclude that In view of (62), we have which implies that as . Since for all (see (29)), we also have as . Hence, as , and the proof is complete.

Remark 11. Observe that if is bounded, then condition (iv) of Theorem 10 implies that . The latter condition is weaker than the condition used in the existing literature. In addition, we did not require the conditions , , and . Therefore, Theorem 10 is an improvement of the aforementioned theorem.

Remark 12. Theorem 10 also improves and extends many results that exist in the literature, such as the main results of [22], [26], and [27]. To see this, it is enough to take $f(x) = u$ for all $x \in H$, where $u \in H$ is a given fixed vector.

Theorem 13. Let be a nonempty, closed, and convex subset of and let be a -contraction with . Assume that is a -cocoercive operator and is a maximal monotone operator with . For and for all , let be a sequence generated by (7) with satisfying , and . Then, converges strongly to the unique fixed point of , provided that (i) and ,(ii),(iii).

Proof. In the proof of Theorem 10, we used the condition repeatedly to conclude that for all , where . So if , then automatically for all .

Remark 14. We have dropped the conditions , , and used in the literature to derive strong convergence of the sequence generated by (7) under the conditions of Theorem 13. Therefore, Theorem 13 is an improvement of the aforementioned results.

Theorem 15. Let be a real Hilbert space and let be a -contraction with . Assume that is a -cocoercive operator and is a maximal monotone operator with . For , let be a sequence generated by (7) with , , and satisfying , , and is a sequence of errors in . Then, converges strongly to the unique fixed point of , provided that (i) and ,(ii),(iii),(iv) with .

Proof. Similar to the proof of Theorem 10, we assume without loss of generality that for all . Following similar steps as in the proof of Theorem 10, we derive where is the unique fixed point of , , and for some positive constant . Since for all , by our assumption, the inequality above reduces to It is clear from the condition as that as , and we may therefore assume without loss of generality that for all . The conclusion that is bounded follows on applying Lemma 7 with .
Again, similar to the proof of Theorem 10, we obtain for some positive constant , where is the sequence denoted by with an appropriate constant . We will show that as by considering the following two cases on the sequence .
Case 1. Assume that is eventually decreasing (i.e., there exists such that is decreasing for all ). Then, converges and rearranging terms in (68) yields which implies that