Abstract

We construct a sequence of proximal iterates that converges strongly, under minimal assumptions, to a common zero of two maximal monotone operators in a Hilbert space. The algorithm introduced in this paper unifies several proximal point algorithms within one framework. Consequently, the results presented here generalize and improve many recent results in the literature related to the proximal point algorithm.

1. Introduction

Let $C$ and $Q$ be nonempty, closed, and convex subsets of a real Hilbert space $H$ with nonempty intersection, and consider the following problem:

\[ \text{find } x \in C \cap Q. \tag{1.1} \]

In his 1933 paper, von Neumann showed that if $C$ and $Q$ are closed subspaces of $H$, then the method of alternating projections, defined by

\[ x_{2n+1} = P_C\, x_{2n}, \qquad x_{2n+2} = P_Q\, x_{2n+1}, \quad n \ge 0, \tag{1.2} \]

converges strongly to the point in $C \cap Q$ which is closest to the starting point $x_0$. The proof of this classical result can be found, for example, in [1, 2]. Ever since von Neumann announced his result, many researchers have dedicated their time to the study of the convex feasibility problem (1.1). In his paper, Bregman [3] showed that if $C$ and $Q$ are two arbitrary nonempty, closed, and convex subsets of $H$ with nonempty intersection, then the sequence generated by the method of alternating projections converges weakly to a point in $C \cap Q$. The work of Hundal [4] revealed that the method of alternating projections fails in general to converge strongly; see also [5].
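For illustration, here is a minimal numerical sketch (not from the paper; the sets, names, and parameters are our own) of the method of alternating projections for two lines through the origin in $\mathbb{R}^2$; the iterates converge to the origin, the only point of the intersection:

```python
import numpy as np

def project_onto_line(x, d):
    """Orthogonal projection of x onto the line spanned by the unit vector d."""
    return np.dot(x, d) * d

# Two one-dimensional subspaces (lines through the origin) of R^2.
d1 = np.array([1.0, 0.0])                    # the x-axis
d2 = np.array([1.0, 1.0]) / np.sqrt(2.0)     # the diagonal

x = np.array([3.0, 2.0])                     # starting point x_0
for n in range(50):
    x = project_onto_line(x, d1)             # x_{2n+1} = P_C x_{2n}
    x = project_onto_line(x, d2)             # x_{2n+2} = P_Q x_{2n+1}

print(x)  # approaches (0, 0), the unique point of the intersection
```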

Recall that the projection operator $P_C$ coincides with the resolvent of the normal cone $N_C$; that is, $P_C = (I + N_C)^{-1}$. Thus, the method of alternating projections can be extended in a natural way as follows: given $x_0 \in H$, define a sequence $(x_n)$ iteratively by

\[ x_{2n+1} = J_{\beta_n}^{A}(x_{2n} + e_n), \qquad x_{2n+2} = J_{\mu_n}^{B}(x_{2n+1} + e'_n) \tag{1.3} \]

for $n \ge 0$, positive real numbers $\beta_n$ and $\mu_n$, and two maximal monotone operators $A$ and $B$, where $(e_n)$ and $(e'_n)$ are sequences of computational errors. Here $J_\beta^A = (I + \beta A)^{-1}$ is the resolvent of $A$. In this case, problem (1.1) can be restated as

\[ \text{find } x \in A^{-1}(0) \cap B^{-1}(0). \tag{1.4} \]

For constant resolvent parameters and zero errors, Bauschke et al. [6] proved that sequences generated from the method of alternating resolvents (1.3) converge weakly to some point that solves problem (1.4). In fact, they showed that such a sequence converges weakly to a fixed point of the composition $J_\beta^A J_\mu^B$, provided that this fixed point set is nonempty. Note that strong convergence of this method fails in general (the same counterexample of Hundal [4] applies). For a convergence analysis of algorithm (1.3) in the case when either of the sequences $(\beta_n)$ and $(\mu_n)$ is not constant, and when the error sequences $(e_n)$ and $(e'_n)$ are not identically zero, we refer the reader to [7].
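As a concrete instance of (1.3), consider $A = N_{[0,2]}$, whose resolvent is the projection onto $[0,2]$, and $B(x) = x - 1$, whose resolvent has a closed form; the following self-contained sketch (with error terms set to zero and our own parameter choices) exhibits convergence to the unique common zero:

```python
import numpy as np

def resolvent_normal_cone(x, lo=0.0, hi=2.0):
    """Resolvent of the normal cone of [lo, hi] = projection onto [lo, hi]."""
    return np.clip(x, lo, hi)

def resolvent_affine(x, beta):
    """Resolvent of B(x) = x - 1: (I + beta*B)^{-1} x = (x + beta) / (1 + beta)."""
    return (x + beta) / (1.0 + beta)

x = 5.0                                   # starting point x_0
for n in range(100):
    x = resolvent_normal_cone(x)          # odd step: resolvent of A
    x = resolvent_affine(x, beta=1.0)     # even step: resolvent of B

print(x)  # tends to 1, the unique common zero of A and B
```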

There are other papers in the literature that address strong convergence of a given iterative process to solutions of (1.4). For example, several authors have discussed strong convergence of iterative processes of the Halpern type to common zeros of a finite family of maximal monotone operators in Hilbert spaces (or, more generally, of $m$-accretive operators in Banach spaces). Among the most recent works in this direction is that of Hu and Liu [8]. They showed that, under appropriate conditions on the parameter sequences, an iterative process of the Halpern type (1.5), built from a fixed anchor point $u \in H$ and the resolvents of maximal monotone operators $A_1, \dots, A_N$, converges strongly to the point of $\bigcap_{i=1}^{N} A_i^{-1}(0)$ nearest to $u$.
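The following sketch is only schematic: it runs a Halpern-type iteration $x_{n+1} = \alpha_n u + (1 - \alpha_n) J^B J^A x_n$ with two normal-cone operators, in the spirit of (1.5) but not claimed to be the exact scheme of Hu and Liu [8]; all concrete choices are our own:

```python
import numpy as np

def J_A(x):  # resolvent of A = normal cone of [0, 2] (a projection)
    return np.clip(x, 0.0, 2.0)

def J_B(x):  # resolvent of B = normal cone of [1, 3] (a projection)
    return np.clip(x, 1.0, 3.0)

u, x = 5.0, -4.0                          # anchor u and starting point x_0
for n in range(10_000):
    alpha = 1.0 / (n + 2)                 # alpha_n -> 0, sum alpha_n = infinity
    x = alpha * u + (1 - alpha) * J_B(J_A(x))

print(x)  # approaches 2, the common zero of A and B nearest to the anchor u
```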

Suppose that we want to find solutions of problem (1.4) iteratively. When using the iterative process (1.5), one has to calculate two resolvents of maximal monotone operators in order to find the next iterate, whereas for algorithm (1.3) one needs to calculate only one resolvent operator at each step. This shows that, theoretically, algorithm (1.5) requires more computational effort per iteration than algorithm (1.3). The only disadvantage of algorithm (1.3) is that it does not always converge strongly, and the limit to which it converges is not characterized; this is not the case with algorithm (1.5). Since weak convergence alone is not satisfactory for an effective algorithm, our purpose in this paper is to modify algorithm (1.3) in such a way that strong convergence is guaranteed. More precisely, for any two maximal monotone operators $A$ and $B$, we define an iterative process in the following way: for $u, x_0 \in H$ given, a sequence $(x_n)$ is generated using the rule

\[ x_{2n+1} = J_{\beta_n}^{A}(\lambda_n u + (1 - \lambda_n) x_{2n} + e_n), \tag{1.6} \]
\[ x_{2n+2} = J_{\mu_n}^{B}(\delta_n u + (1 - \delta_n) x_{2n+1} + e'_n), \tag{1.7} \]

where $(\lambda_n)$ and $(\delta_n)$ are sequences in $(0, 1)$ and $\beta_n, \mu_n > 0$. We will also show that algorithm (1.6), (1.7) contains several algorithms, such as the prox-Tikhonov method, the Halpern-type proximal point algorithm, and the regularized proximal method, as special cases. That is, with our algorithm, we are able to put several algorithms under one framework. Therefore, our main results improve, generalize, and unify many related results announced recently in the literature.
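To illustrate the intended behavior of (1.6), (1.7) as rendered above, here is a toy realization with zero errors, $\lambda_n = \delta_n = 1/(n+2)$, and two normal-cone operators whose resolvents are projections; the iterates approach the point of $A^{-1}(0) \cap B^{-1}(0)$ nearest to the anchor $u$ (all concrete choices are our own):

```python
import numpy as np

def J_A(x):  # resolvent of A = normal cone of [0, 2]
    return np.clip(x, 0.0, 2.0)

def J_B(x):  # resolvent of B = normal cone of [1, 3]
    return np.clip(x, 1.0, 3.0)

u, x = 5.0, -4.0                            # anchor u and starting point x_0
for n in range(10_000):
    lam = 1.0 / (n + 2)                     # lambda_n = delta_n -> 0, divergent sum
    x = J_A(lam * u + (1 - lam) * x)        # anchored step with the resolvent of A
    x = J_B(lam * u + (1 - lam) * x)        # anchored step with the resolvent of B

print(x)  # approaches 2, the point of the common zero set [1, 2] nearest to u
```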

2. Preliminary Results

In the sequel, $H$ will be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\| \cdot \|$. We recall that a map $T : H \to H$ is called nonexpansive if for every $x, y \in H$ we have $\|Tx - Ty\| \le \|x - y\|$. We say that a map $T : H \to H$ is firmly nonexpansive if for every $x, y \in H$ we have

\[ \|Tx - Ty\|^2 \le \langle Tx - Ty,\, x - y \rangle. \]

It is clear that firmly nonexpansive mappings are also nonexpansive. The converse need not be true. The excellent book by Goebel and Reich [9] is recommended to the reader who is interested in studying properties of firmly nonexpansive mappings. An operator $A \subset H \times H$ is said to be monotone if

\[ \langle x - x',\, y - y' \rangle \ge 0 \quad \text{for all } (x, y), (x', y') \in G(A), \]

where $G(A) = \{(x, y) \in H \times H : y \in Ax\}$ is the graph of $A$. In other words, an operator is monotone if its graph is a monotone subset of the product space $H \times H$. An operator $A$ is called maximal monotone if, in addition to being monotone, its graph is not properly contained in the graph of any other monotone operator. Note that if $A$ is maximal monotone, then so is its inverse $A^{-1}$. For a maximal monotone operator $A$, the resolvent of $A$, defined by $J_\beta^A = (I + \beta A)^{-1}$, is well defined on the whole space $H$, is single-valued, and is firmly nonexpansive for every $\beta > 0$. It is known that the Yosida approximation of $A$, the operator defined by $A_\beta = (I - J_\beta^A)/\beta$ (where $I$ is the identity operator), is maximal monotone for every $\beta > 0$. For the properties of maximal monotone operators discussed above, we refer the reader to [10].
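When $A = \partial f$ for a proper, convex, lower semicontinuous $f$, the resolvent $J_\beta^A$ is the proximal operator of $\beta f$; for $f = |\cdot|$ on the real line it is the soft-thresholding map. The following self-contained check (our own example) samples random pairs and verifies the firmly nonexpansive inequality numerically:

```python
import numpy as np

def soft_threshold(x, beta):
    """Resolvent J_beta of A = subdifferential of |.| (prox of beta*|.|)."""
    return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

rng = np.random.default_rng(0)
beta = 0.7
for _ in range(1000):
    x, y = rng.normal(size=2)
    tx, ty = soft_threshold(x, beta), soft_threshold(y, beta)
    # firmly nonexpansive: |Tx - Ty|^2 <= <Tx - Ty, x - y>
    assert (tx - ty) ** 2 <= (tx - ty) * (x - y) + 1e-12

print("firm nonexpansiveness verified on random samples")
```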

Notation. Given a sequence $(x_n) \subset H$, we will write $x_n \to x$ to mean that $(x_n)$ converges strongly to $x$, whereas $x_n \rightharpoonup x$ will mean that $(x_n)$ converges weakly to $x$. The weak $\omega$-limit set of a sequence $(x_n)$ will be denoted by $\omega_w(x_n)$. That is,

\[ \omega_w(x_n) = \{ x \in H : x_{n_k} \rightharpoonup x \text{ for some subsequence } (x_{n_k}) \text{ of } (x_n) \}. \]

The following lemmas will be useful in proving our main results. The first lemma is a basic property of norms in Hilbert spaces.

Lemma 2.1. For all $x, y \in H$, one has

\[ \|x + y\|^2 \le \|x\|^2 + 2\langle y,\, x + y \rangle. \]
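Assuming Lemma 2.1 is the usual inequality $\|x+y\|^2 \le \|x\|^2 + 2\langle y, x+y\rangle$, it is verified by expanding the inner products:

\[ \|x\|^2 + 2\langle y,\, x+y\rangle - \|x+y\|^2 = \|x\|^2 + 2\langle x, y\rangle + 2\|y\|^2 - \bigl(\|x\|^2 + 2\langle x, y\rangle + \|y\|^2\bigr) = \|y\|^2 \ge 0. \]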

The next lemma is well known; it can be found, for example, in [10, page 20].

Lemma 2.2. Any maximal monotone operator $A$ satisfies the demiclosedness principle. In other words, given any two sequences $(x_n)$ and $(y_n)$ satisfying $x_n \rightharpoonup x$ and $y_n \to y$ with $y_n \in Ax_n$ for all $n$, it follows that $y \in Ax$.

Lemma 2.3 (Xu [11]). For any $\beta, \mu > 0$ and $x \in H$,

\[ \|J_\beta x - J_\mu x\| \le \frac{|\beta - \mu|}{\beta}\, \|x - J_\beta x\|, \]

where $J_\beta$ denotes the resolvent of a maximal monotone operator $A$.
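Assuming Lemma 2.3 is the standard resolvent estimate stated above, it follows in one line from the resolvent identity $J_\beta x = J_\mu\bigl(\tfrac{\mu}{\beta}x + \bigl(1-\tfrac{\mu}{\beta}\bigr)J_\beta x\bigr)$ together with the nonexpansiveness of $J_\mu$:

\[ \|J_\beta x - J_\mu x\| = \Bigl\| J_\mu\Bigl(\tfrac{\mu}{\beta}x + \Bigl(1-\tfrac{\mu}{\beta}\Bigr)J_\beta x\Bigr) - J_\mu x \Bigr\| \le \Bigl\| \tfrac{\mu}{\beta}x + \Bigl(1-\tfrac{\mu}{\beta}\Bigr)J_\beta x - x \Bigr\| = \frac{|\beta - \mu|}{\beta}\,\|x - J_\beta x\|. \]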

We end this section with the following key lemmas.

Lemma 2.4 (Boikanyo and Moroşanu [12]). Let $(s_n)$ be a sequence of nonnegative real numbers satisfying

\[ s_{n+1} \le (1 - \lambda_n) s_n + \lambda_n b_n + c_n, \quad n \ge 0, \]

where $(\lambda_n)$, $(b_n)$, and $(c_n)$ satisfy the conditions: (i) $(\lambda_n) \subset [0, 1]$, with $\sum_n \lambda_n = \infty$, (ii) $\limsup_{n \to \infty} b_n \le 0$, (iii) $\sum_n c_n < \infty$, and (iv) $c_n \ge 0$ for all $n \ge 0$. Then $\lim_{n \to \infty} s_n = 0$.

Remark 2.5. Note that if $(\lambda_n) \subset (0, 1)$, then $\sum_n \lambda_n = \infty$ if and only if $\prod_n (1 - \lambda_n) = 0$.
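As a quick sanity check of Lemma 2.4 (with our own toy choices $\lambda_n = 1/(n+2)$, $b_n = 1/(n+1)$, $c_n = 2^{-n}$, and the recursion taken with equality), the sequence $(s_n)$ is driven to zero:

```python
s = 10.0
for n in range(100_000):
    lam = 1.0 / (n + 2)               # (lambda_n) in (0,1) with divergent sum
    b = 1.0 / (n + 1)                 # limsup b_n <= 0
    c = 0.5 ** n                      # nonnegative and summable
    s = (1 - lam) * s + lam * b + c   # the recursion of Lemma 2.4 (equality case)

print(s)  # tends to 0
```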

Lemma 2.6 (Maingé [13]). Let $(s_n)$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $(s_{n_k})$ of $(s_n)$ such that $s_{n_k} < s_{n_k + 1}$ for all $k \ge 0$. Define, for all sufficiently large $n$, an integer sequence $(\tau(n))$ as

\[ \tau(n) = \max\{ k \le n : s_k < s_{k+1} \}. \]

Then $\tau(n) \to \infty$ as $n \to \infty$ and, for all sufficiently large $n$,

\[ \max\{ s_{\tau(n)},\, s_n \} \le s_{\tau(n) + 1}. \]
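The integer sequence of Lemma 2.6 is easy to compute directly. The sketch below (with an oscillating example sequence of our own) evaluates $\tau(n) = \max\{k \le n : s_k < s_{k+1}\}$ and checks the two conclusions $s_{\tau(n)} \le s_{\tau(n)+1}$ and $s_n \le s_{\tau(n)+1}$:

```python
import math

# An oscillating example sequence that "does not decrease at infinity".
s = [math.sin(0.5 * k) + 1.0 / (k + 1) for k in range(200)]

def tau(n):
    """tau(n) = max{ k <= n : s_k < s_{k+1} } (defined for n large enough)."""
    return max(k for k in range(n + 1) if s[k] < s[k + 1])

for n in range(10, 199):
    t = tau(n)
    assert s[t] <= s[t + 1] and s[n] <= s[t + 1]   # conclusions of Lemma 2.6

print("Lemma 2.6 inequalities verified on the sample sequence")
```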

3. Main Results

We begin by giving a strong convergence result associated with the exact iterative process

\[ x_{2n+1} = J_{\beta_n}^{A}(\lambda_n u + (1 - \lambda_n) x_{2n}), \tag{3.1} \]
\[ x_{2n+2} = J_{\mu_n}^{B}(\delta_n u + (1 - \delta_n) x_{2n+1}), \tag{3.2} \]

where $(\lambda_n), (\delta_n) \subset (0, 1)$, $\beta_n, \mu_n > 0$, and $u, x_0 \in H$ are given. The proof of the following theorem makes use of some ideas of the papers [12–15].

Theorem 3.1. Let $A$ and $B$ be maximal monotone operators with $A^{-1}(0) \cap B^{-1}(0) \neq \emptyset$. For arbitrary but fixed vectors $u, x_0 \in H$, let $(x_n)$ be the sequence generated by (3.1), (3.2), where $(\lambda_n), (\delta_n) \subset (0, 1)$ and $\beta_n, \mu_n > 0$. Assume that (i) $\lambda_n \to 0$ and $\sum_n \lambda_n = \infty$, with $\delta_n \le \delta$ for some $\delta \in (0, 1)$, (ii) either $\sum_n |\lambda_{n+1} - \lambda_n| < \infty$ or $\lim_{n \to \infty} \lambda_n / \lambda_{n+1} = 1$, and (iii) $\beta_n \ge \beta$ and $\mu_n \ge \mu$ for some $\beta, \mu > 0$. Then $(x_n)$ converges strongly to the point of $A^{-1}(0) \cap B^{-1}(0)$ nearest to $u$.

Proof. Let . Then from (3.2) and the fact that the resolvent operator of a maximal monotone operator is nonexpansive, we have Again using the fact that the resolvent operator is nonexpansive, we have from (3.1) where the last inequality follows from (3.3). Using a simple induction argument, we get This shows that the subsequence of is bounded. In view of (3.3), the subsequence is also bounded. Hence the sequence must be bounded.
Now from the firmly nonexpansive property of , we have for any which in turn gives Again by using the firmly nonexpansive property of the resolvent , we see that Now from (3.1) and Lemma 2.1, we have where is such that . On the other hand, we observe that (3.2) is equivalent to Multiplying this inclusion scalarly by and using the monotonicity of , we obtain which implies that Using this inequality in (3.9), we get If we denote , then we have for some positive constant We now show that converges to zero strongly. For this purpose, we consider two possible cases on the sequence .
Case 1. is eventually decreasing (i.e., there exists such that is decreasing for all ). In this case, is convergent. Letting in (3.14), we get Now using the second part of (3.15) and the fact that as , we get as . Also, we have the following from Lemma 2.3 and the first part of (3.15) as . Since , where denotes the Yosida approximation of , is demiclosed, it follows that . On the other hand, from the nonexpansive property of the resolvent operator of , we get where the first inequality follows from Lemma 2.3. Since is demiclosed, passing to the limit in the above inequality yields , showing that . Therefore, there is a subsequence of converging weakly to some such that where the above inequality follows from the characterization of the projection operator. Note that by virtue of (3.16), we have as well. Now, we derive from (3.13) Using Lemma 2.4 we get as . Passing to the limit in (3.12), we also get as . Therefore, we derive as . This proves the result for the case when is eventually decreasing.
Case 2. is not eventually decreasing, that is, there is a subsequence of such that for all . We then define an integer sequence as in Lemma 2.6 so that for all . Then from (3.14), it follows that We also derive from (3.1) as . In a similar way as in Case 1, we derive . Consequently, Note that from (3.21) we have, for some positive constant , Therefore, for all , we have Since for all , we have Letting in the above inequality, we see that . Hence from (2.8) it follows that as . That is, as . Furthermore, for some positive constant , we have from (3.12) which implies that as . Hence, we have as . This completes the proof of the theorem.

We are now in a position to give a strong convergence result for the inexact iteration process (1.6), (1.7). For the error sequences, we will use the 14 conditions established in [12].

Theorem 3.2. Let $A$ and $B$ be maximal monotone operators with $A^{-1}(0) \cap B^{-1}(0) \neq \emptyset$. For arbitrary but fixed vectors $u, x_0 \in H$, let $(x_n)$ be the sequence generated by (1.6), (1.7), where $(\lambda_n), (\delta_n) \subset (0, 1)$ and $\beta_n, \mu_n > 0$. Assume that conditions (i)–(iii) of Theorem 3.1 are satisfied. Then $(x_n)$ converges strongly to the point of $A^{-1}(0) \cap B^{-1}(0)$ nearest to $u$, provided that the error sequences $(e_n)$ and $(e'_n)$ satisfy any of the fourteen conditions (a)–(n) established in [12], each of which pairs a summability requirement (such as $\sum_n \|e_n\| < \infty$) with a relative-error requirement (such as $\|e'_n\| / \lambda_n \to 0$).

Proof. Taking note of Theorem 3.1, it suffices to show that as . Since the resolvent of is nonexpansive, we derive from (1.7) and (3.2) the following: Similarly, from (1.6) and (3.1), we have Substituting (3.29) into (3.30) yields Therefore, if the error sequences satisfy any of the conditions (a)–(i), then it readily follows from Lemma 2.4 that as . Passing to the limit in (3.29), we derive as well. If the error sequences satisfy any of the conditions (j)–(n), then from (3.29) and (3.30), we have Then Lemma 2.4 guarantees that as . Passing to the limit in (3.30), we derive as well. This completes the proof of the theorem.

Note that when $B$ is the subdifferential of the indicator function of an appropriately chosen closed convex set and the parameters are fixed suitably for all $n$, algorithm (1.6), (1.7) is reduced to the contraction proximal point method, which was introduced by Yao and Noor in 2008 [16]. Such a method is given by

\[ x_{n+1} = \alpha_n u + \beta_n x_n + \gamma_n J_{c_n}(x_n + e_n), \tag{3.33} \]

where we have used the notation $J_{c_n} = (I + c_n A)^{-1}$. Here $(\beta_n)$ is a sequence in $[0, 1)$ and $(\alpha_n) \subset (0, 1)$ with $\alpha_n + \beta_n + \gamma_n = 1$. For this method, we have the following strong convergence result.

Corollary 3.3. Let $A$ be a maximal monotone operator with $A^{-1}(0) \neq \emptyset$. For arbitrary but fixed vectors $u, x_0 \in H$, let $(x_n)$ be the sequence generated by (3.33), where $(\alpha_n) \subset (0, 1)$ and $\alpha_n + \beta_n + \gamma_n = 1$. Assume that $\alpha_n \to 0$ with $\sum_n \alpha_n = \infty$, $\beta_n \le \beta$ for some $\beta \in [0, 1)$, and $c_n \ge c$ for some $c > 0$. If either $\sum_n \|e_n\| < \infty$ or $\|e_n\| / \alpha_n \to 0$, then $(x_n)$ converges strongly to the point of $A^{-1}(0)$ nearest to $u$.
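A minimal numerical sketch of a contraction proximal point iteration of the Yao–Noor type, as we have rendered (3.33) above (the operator, parameters, and names below are our own choices, with zero errors):

```python
def J(x, c):
    """Resolvent of A(x) = x - 1: (I + c A)^{-1} x = (x + c) / (1 + c)."""
    return (x + c) / (1.0 + c)

u, x = 5.0, -4.0                          # anchor u and starting point x_0
for n in range(10_000):
    alpha = 1.0 / (n + 2)                 # alpha_n -> 0, sum alpha_n = infinity
    beta = 0.4                            # beta_n bounded away from 1
    gamma = 1.0 - alpha - beta            # alpha_n + beta_n + gamma_n = 1
    x = alpha * u + beta * x + gamma * J(x, c=1.0)

print(x)  # approaches 1, the unique zero of A
```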

Corollary 3.3 generalizes and unifies many results announced recently in the literature such as [7, Theorem 4], [16, Theorem 3.3], [17, Theorem 2], and [18, Theorem 3.1]. We also recover [15, Theorem 1].

Remark 3.4. We refer the reader to the paper [12] for another generalization of the method (3.33).

In the case when $B = \partial \iota_H$, where $\iota_H$ is the subdifferential of the indicator function of the whole space $H$ (so that $J_{\mu_n}^B = I$), and $\delta_n = 0$ and $e'_n = 0$ for all $n$, algorithm (1.6), (1.7) reduces to the regularization method

\[ x_{n+1} = J_{\beta_n}(\lambda_n u + (1 - \lambda_n) x_n + e_n), \tag{3.34} \]

where we have used the notation $J_{\beta_n} = (I + \beta_n A)^{-1}$. In this case, we have the following strong convergence result, which improves results given in the papers [11, 19–21].

Corollary 3.5. Let $A$ be a maximal monotone operator with $A^{-1}(0) \neq \emptyset$. For arbitrary but fixed vectors $u, x_0 \in H$, let $(x_n)$ be the sequence generated by (3.34), where $(\lambda_n) \subset (0, 1)$ and $\beta_n > 0$. Assume that $\lambda_n \to 0$ with $\sum_n \lambda_n = \infty$, and $\beta_n \ge \beta$ for some $\beta > 0$. If either $\sum_n \|e_n\| < \infty$ or $\|e_n\| / \lambda_n \to 0$, then $(x_n)$ converges strongly to the point of $A^{-1}(0)$ nearest to $u$.
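A sketch of the regularization (prox-Tikhonov-type) iteration as we have rendered (3.34): the anchor is mixed in before the resolvent is applied (the operator and parameter choices below are our own, with zero errors):

```python
def J(x, beta):
    """Resolvent of A(x) = x - 1: (I + beta A)^{-1} x = (x + beta) / (1 + beta)."""
    return (x + beta) / (1.0 + beta)

u, x = 5.0, -4.0                                   # anchor u and starting point x_0
for n in range(10_000):
    lam = 1.0 / (n + 2)                            # lambda_n -> 0, divergent sum
    x = J(lam * u + (1 - lam) * x, beta=1.0)       # regularized proximal step

print(x)  # approaches 1, the zero of A nearest to u (here the unique zero)
```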

It is worth mentioning that the regularization method is a generalization of the prox-Tikhonov method introduced by Lehdili and Moudafi [22]; see [11]. We also mention that, for suitable choices of the parameters and of the error terms, the regularization method (3.34) is equivalent to the inexact Halpern-type proximal point algorithm; see [23]. Therefore, Corollary 3.5 also improves many results given in the papers [15, 19, 22, 24–26] related to the inexact Halpern-type proximal point algorithm.