Abstract

In this paper we consider the problem of finding a zero of the sum of two monotone operators. One method for solving such a problem is the forward-backward splitting method. We present some new conditions that guarantee the weak convergence of the forward-backward method. Applications of these results to variational inequalities and gradient projection algorithms are also considered.

1. Introduction

It is well known that monotone inclusion problems play an important role in the theory of nonlinear analysis. Such a problem consists of finding a zero of a maximal monotone operator. However, in some applications, such as convex programming and variational inequality problems, the operator needs to be decomposed as the sum of two monotone operators (see, e.g., [16]). In this way, one needs to find $x \in H$ such that
$$0 \in (A + B)x, \tag{1}$$
where $A$ and $B$ are two monotone operators on a Hilbert space $H$. To solve such a problem, splitting methods, such as the Peaceman-Rachford algorithm [7] and the Douglas-Rachford algorithm [8], are usually considered. We consider the special case in which $B$ is multivalued and $A$ is single-valued. A classical way to solve problem (1) under this assumption is the forward-backward splitting method (FBS) (see [2, 9]). Starting with an arbitrary initial $x_0 \in H$, the FBS generates a sequence $(x_n)$ satisfying
$$x_{n+1} = (I + rB)^{-1}(x_n - rAx_n),$$
where $r$ is a properly chosen real number. The FBS then converges weakly to a solution of problem (1) whenever such a solution exists.
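To make the forward-backward step concrete, here is a minimal numerical sketch (not taken from the paper): $A$ is taken to be the gradient of a least-squares term, which is $(1/L)$-ism with $L = \|M\|^2$, and $B$ the subdifferential of the $\ell_1$-norm, whose resolvent is soft thresholding; the matrix `M`, vector `b`, and step size are hypothetical example data.

```python
import numpy as np

# Minimal sketch of the forward-backward splitting (FBS) iteration
# x_{n+1} = (I + rB)^{-1}(x_n - r A x_n) with hypothetical example operators:
# A x = M^T (M x - b)  (single-valued, (1/L)-ism with L = ||M||^2), and
# B = subdifferential of the l1-norm, whose resolvent is soft thresholding.

def fbs(A, J_B, x0, r, n_iter=500):
    x = x0
    for _ in range(n_iter):
        x = J_B(x - r * A(x), r)      # forward (explicit) step, then backward (resolvent) step
    return x

rng = np.random.default_rng(0)
M, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
A = lambda x: M.T @ (M @ x - b)
soft = lambda z, r: np.sign(z) * np.maximum(np.abs(z) - r, 0.0)   # resolvent of r * d||.||_1

L = np.linalg.norm(M, 2) ** 2         # A is (1/L)-ism
x_star = fbs(A, soft, np.zeros(10), r=1.0 / L)
print(x_star)
```

The choice $r = 1/L$ keeps the step size inside $(0, 2\nu)$ with $\nu = 1/L$, the range under which the forward-backward operator behaves as a nonexpansive (indeed averaged) map.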

On the other hand, we observe that problem (1) is equivalent to the fixed point equation $x = J_r(x - rAx)$ for the single-valued operator $J_r(I - rA)$, where $J_r = (I + rB)^{-1}$ and $r > 0$. Moreover, if $r$ is properly chosen, the operator $J_r(I - rA)$ is nonexpansive. Motivated by this observation, we use techniques from the fixed point theory of nonexpansive operators to investigate various monotone inclusion problems.
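For completeness, the equivalence follows directly from the definition of the resolvent $J_r = (I + rB)^{-1}$ (a standard one-line argument):
$$0 \in (A + B)x \iff x - rAx \in (I + rB)x \iff x = (I + rB)^{-1}(x - rAx) = J_r(x - rAx).$$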

The rest of this paper is organized as follows. In Section 2, some useful lemmas are introduced. In Section 3, we consider the modified forward-backward splitting method and prove its weak convergence under some new conditions. In Section 4, some applications of our results to finding a solution of the variational inequality problem are included.

2. Preliminaries and Notation

Throughout the paper, $I$ denotes the identity operator, $F(T)$ the set of the fixed points of an operator $T$, and $\nabla f$ the gradient of the functional $f$. The notation “$\to$” denotes strong convergence and “$\rightharpoonup$” weak convergence. Denote by $\omega_w(x_n)$ the set of the cluster points of $(x_n)$ in the weak topology (i.e., the set $\{x : x_{n_j} \rightharpoonup x$ for some subsequence $(x_{n_j})$ of $(x_n)\}$).

Let $C$ be a nonempty closed convex subset of $H$. Denote by $P_C$ the projection from $H$ onto $C$; namely, for $x \in H$, $P_Cx$ is the unique point in $C$ with the property
$$\|x - P_Cx\| = \min_{y \in C}\|x - y\|.$$
It is well known that $P_Cx$ is characterized by the inequality
$$\langle x - P_Cx, y - P_Cx\rangle \le 0, \quad \forall y \in C.$$
A single-valued operator $T: H \to H$ is called nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in H;$$
firmly nonexpansive if
$$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y\rangle, \quad \forall x, y \in H;$$
and $\alpha$-averaged if there exist a constant $\alpha \in (0, 1)$ and a nonexpansive operator $S$ such that $T = (1 - \alpha)I + \alpha S$. Firmly nonexpansive operators are $\frac{1}{2}$-averaged.
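As a small numerical illustration (not part of the paper), the snippet below projects onto the closed unit ball, a simple closed convex set, and spot-checks the firm-nonexpansiveness inequality $\|P_Cx - P_Cy\|^2 \le \langle P_Cx - P_Cy, x - y\rangle$ at a few random points; the set, the radius, and the sample points are arbitrary choices.

```python
import numpy as np

# Illustrative check: the projection onto the ball C = {x : ||x|| <= 1} is
# firmly nonexpansive, i.e. ||Px - Py||^2 <= <Px - Py, x - y>.

def proj_ball(x, radius=1.0):
    """Projection of x onto the Euclidean ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

rng = np.random.default_rng(1)
for _ in range(5):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    px, py = proj_ball(x), proj_ball(y)
    lhs = np.linalg.norm(px - py) ** 2
    rhs = np.dot(px - py, x - y)
    print(lhs <= rhs + 1e-12)   # firm nonexpansiveness holds
```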

Lemma 1 (see [10]). The following assertions hold. (i) $T$ is $\alpha$-averaged for $\alpha \in (0, 1)$ if and only if
$$\|Tx - Ty\|^2 \le \|x - y\|^2 - \frac{1 - \alpha}{\alpha}\|(I - T)x - (I - T)y\|^2$$
for all $x, y \in H$. (ii) Assume that $T_i$ is $\alpha_i$-averaged for $i = 1, 2$. Then $T_1T_2$ is $\alpha$-averaged with $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.

The following lemma is known as the demiclosedness principle for nonexpansive mappings.

Lemma 2. Let $C$ be a nonempty closed convex subset of $H$ and $T: C \to C$ a nonexpansive operator with $F(T) \ne \emptyset$. If $(x_n)$ is a sequence in $C$ such that $x_n \rightharpoonup x$ and $(I - T)x_n \to y$, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in F(T)$.

A multivalued operator $B: H \to 2^H$ is called monotone if
$$\langle u - v, x - y\rangle \ge 0, \quad \forall x, y \in H,\ u \in Bx,\ v \in By;$$
$\nu$-inverse strongly monotone ($\nu$-ism) if there exists a constant $\nu > 0$ so that
$$\langle u - v, x - y\rangle \ge \nu\|u - v\|^2, \quad \forall x, y \in H,\ u \in Bx,\ v \in By;$$
and maximal monotone if it is monotone and its graph is not properly contained in the graph of any other monotone operator.
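As a concrete (hypothetical) example, the linear operator $Ax = Mx$ with $M$ symmetric positive semidefinite is $(1/L)$-ism with $L = \|M\|$; the snippet below numerically checks the inequality $\langle Ax - Ay, x - y\rangle \ge \frac{1}{L}\|Ax - Ay\|^2$ at random points.

```python
import numpy as np

# Numerical spot check (illustrative, not from the paper): for A x = M x with
# M symmetric positive semidefinite, A is (1/L)-ism with L = ||M|| (the
# largest eigenvalue), i.e. <Ax - Ay, x - y> >= (1/L) * ||Ax - Ay||^2.

rng = np.random.default_rng(2)
Q = rng.standard_normal((5, 5))
M = Q.T @ Q                       # symmetric positive semidefinite
L = np.linalg.norm(M, 2)          # largest eigenvalue of M

for _ in range(5):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    lhs = np.dot(M @ x - M @ y, x - y)
    rhs = np.linalg.norm(M @ x - M @ y) ** 2 / L
    print(lhs >= rhs - 1e-10)     # the ism inequality holds
```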

In what follows, we shall assume that (i) $A$ is single-valued and $\nu$-ism; (ii) $B$ is multivalued and maximal monotone. Hereafter, if no confusion occurs, we denote by $J_r = (I + rB)^{-1}$ the resolvent of $B$ for any given $r > 0$. It is known that $J_r$ is single-valued and firmly nonexpansive; moreover, $F(J_r) = B^{-1}(0)$ (see [11]).
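To illustrate the resolvent (with example operators chosen here, not taken from the paper), take the maximal monotone linear operator $Bx = Mx$ with $M$ symmetric positive semidefinite; then $J_r y = (I + rM)^{-1}y$ reduces to a linear solve, and its fixed points are exactly the zeros of $B$.

```python
import numpy as np

# Illustrative sketch: the resolvent J_r = (I + rB)^{-1} of the maximal
# monotone linear operator B x = M x (M symmetric PSD) is obtained by solving
# a linear system, and its fixed points are exactly the zeros of B.

rng = np.random.default_rng(3)
Q = rng.standard_normal((4, 4))
M = Q.T @ Q                                   # symmetric PSD, so B is maximal monotone
r = 0.7

def J_r(y):
    """Resolvent of rB: solve (I + rM) x = y."""
    return np.linalg.solve(np.eye(4) + r * M, y)

x0 = rng.standard_normal(4)
print(np.allclose(J_r(x0), x0))               # generically False: x0 is not a zero of B

z = np.zeros(4)                               # M z = 0, so z is a zero of B
print(np.allclose(J_r(z), z))                 # True: F(J_r) = B^{-1}(0)
```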

Lemma 3 (see [12]). For $r \in (0, 2\nu)$, let $T_r = J_r(I - rA)$. Then (i) $F(T_r) = (A + B)^{-1}(0)$; (ii) $T_r$ is $\frac{2\nu + r}{4\nu}$-averaged; (iii) for $0 < r \le s < 2\nu$, it follows that $\|x - T_rx\| \le 2\|x - T_sx\|$ for all $x \in H$.

Definition 4. Assume that $(x_n)$ is a sequence in $H$ and that $(\varepsilon_n)$ is a nonnegative real sequence with $\sum_{n=0}^{\infty}\varepsilon_n < \infty$. Then $(x_n)$ is called quasi-Fejér monotone w.r.t. a nonempty set $C \subseteq H$ if
$$\|x_{n+1} - z\| \le \|x_n - z\| + \varepsilon_n, \quad \forall z \in C,\ n \ge 0.$$

Lemma 5 (see [13]). Let $C$ be a nonempty closed convex subset of $H$. If the sequence $(x_n)$ is quasi-Fejér monotone w.r.t. $C$, then the following hold: (i) $(x_n)$ converges weakly to a point in $C$ if and only if $\omega_w(x_n) \subseteq C$; (ii) the sequence $(P_Cx_n)$ converges strongly; (iii) if $x_n \rightharpoonup x \in C$, then $P_Cx_n \to x$.

3. Weak Convergence Theorem

In [10] Combettes considered a modified FBS: for any initial guess $x_0 \in H$, set
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_n\bigl[J_{r_n}(x_n - r_nAx_n) + e_n\bigr], \tag{14}$$
where $\lambda_n \in (0, 1]$ is a relaxation parameter and $e_n$ is the computation error. He proved the weak convergence of algorithm (14) under conditions (1)–(3) imposed on the parameters $(\lambda_n)$ and $(r_n)$ and on the errors $(e_n)$. We observe that (14) is in fact a Mann-type iteration. In the following we shall prove the convergence of (14) under some slightly weaker conditions.
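The sketch below runs a Mann-type relaxed forward-backward iteration of the form (14), with illustrative operators (a least-squares gradient and the $\ell_1$ soft-thresholding resolvent), a constant relaxation parameter, and a synthetic summable error sequence; all data and parameter values are hypothetical and only meant to show the structure of the iteration.

```python
import numpy as np

# Sketch of a Mann-type (relaxed) forward-backward iteration in the spirit of
# (14), with illustrative operators and a synthetic, summable error sequence.
# A x = M^T(M x - b) plays the role of the single-valued ism operator, and the
# l1-subdifferential (resolvent = soft thresholding) the multivalued one.

rng = np.random.default_rng(4)
M, b = rng.standard_normal((15, 8)), rng.standard_normal(15)
A = lambda x: M.T @ (M @ x - b)
soft = lambda z, r: np.sign(z) * np.maximum(np.abs(z) - r, 0.0)

L = np.linalg.norm(M, 2) ** 2      # A is (1/L)-ism
x = np.zeros(8)
for n in range(300):
    r_n = 1.0 / L                  # step size kept inside (0, 2/L)
    lam_n = 0.5                    # relaxation parameter in (0, 1]
    e_n = rng.standard_normal(8) / (n + 1) ** 2   # summable perturbation
    x = (1 - lam_n) * x + lam_n * (soft(x - r_n * A(x), r_n) + e_n)
print(x)
```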

Theorem 6. Suppose that the following conditions are satisfied: (C1); (C2); (C3); (C4); (C5). If, in addition, $(A + B)^{-1}(0) \ne \emptyset$, then the sequence $(x_n)$ generated by (14) converges weakly to a point in $(A + B)^{-1}(0)$.

Proof. We first show that $(x_n)$ is quasi-Fejér monotone. Let $z \in (A + B)^{-1}(0)$. Then It follows from Lemma 3 that is -averaged with and . Letting , we have that By condition (C5), we conclude that $(x_n)$ is quasi-Fejér monotone w.r.t. $(A + B)^{-1}(0)$.
Next let us show . To see this, choose so that Let . Obviously is -averaged. According to Lemma 1, we deduce that which in turn implies that for all . Letting yields By conditions (C1) and (C4), we check that which yields that . By condition (C2), we find and so that for all . Let . It then follows from Lemma 3 that for all . Letting , we have as . Take and a subsequence of such that . Since is nonexpansive, applying Lemmas 2 and 3 yields and thus . By Lemma 5 the proof is complete.

We can also present another condition for the weak convergence of (14).

Theorem 7. Suppose that the following conditions are satisfied: (C1); (C2); (C3); (C4); (C5). If, in addition, $(A + B)^{-1}(0) \ne \emptyset$, then the sequence $(x_n)$ generated by (14) converges weakly to a point in $(A + B)^{-1}(0)$.

Proof. Compared with the proof of Theorem 6, it suffices to show that as . Observe that the estimate still holds. Let . Then by condition (C4) . According to Lemma 3, we have where is a properly chosen real number. Then, for any given , we arrive at Conditions (C1) and (C5) imply that exists and therefore as . Hence the proof is complete.

Applying Theorem 7, one can easily get the following.

Corollary 8. Suppose that the following conditions are satisfied: (1); (2); (3); (4). Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_nJ_{r_n}(x_n - r_nAx_n)$$
converges weakly to some point in $(A + B)^{-1}(0)$.

Remark 9. Corollary 8 implies that our condition is slightly weaker than that of Combettes whenever the sequence approaches some constant.

4. Application

Let $C$ be a nonempty closed convex subset of $H$. A variational inequality problem (VIP) is formulated as the problem of finding a point $x^* \in C$ with the property
$$\langle Ax^*, x - x^*\rangle \ge 0, \quad \forall x \in C, \tag{26}$$
where $A: H \to H$ is a nonlinear operator. We shall denote by $\mathrm{VI}(C, A)$ the solution set of VIP (26). One method for solving the VIP is the projection algorithm, which generates, starting with an arbitrary initial $x_0 \in C$, a sequence $(x_n)$ satisfying
$$x_{n+1} = P_C(x_n - rAx_n), \tag{27}$$
where $r$ is a properly chosen real number. If, in addition, $A$ is $\nu$-ism, then the iteration (27) with $0 < r < 2\nu$ converges weakly to a point in $\mathrm{VI}(C, A)$, whenever such a point exists.
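A minimal numerical sketch of the projection algorithm (27) (example data, not from the paper): the operator $Ax = Mx + q$ with $M$ symmetric positive definite is $\nu$-ism with $\nu = 1/\|M\|$, and $C$ is the box $[0, 1]^d$, so the projection is a componentwise clip.

```python
import numpy as np

# Illustrative projection method for a VIP: find x* in C = [0,1]^d with
# <A x*, x - x*> >= 0 for all x in C, where A x = M x + q and M is symmetric
# positive definite (so A is (1/||M||)-ism and strongly monotone).

rng = np.random.default_rng(5)
d = 6
Q = rng.standard_normal((d, d))
M = Q.T @ Q + np.eye(d)
q = rng.standard_normal(d)
A = lambda x: M @ x + q

proj_box = lambda x: np.clip(x, 0.0, 1.0)     # projection onto C = [0, 1]^d
r = 1.0 / np.linalg.norm(M, 2)                # step size inside (0, 2*nu)

x = np.zeros(d)
for _ in range(2000):
    x = proj_box(x - r * A(x))                # iteration (27)

# Check the variational inequality at a few random points of C.
for _ in range(3):
    y = rng.uniform(0.0, 1.0, d)
    print(np.dot(A(x), y - x) >= -1e-6)
```

Because this example operator is strongly monotone, the iterates in fact converge strongly to the unique solution, which makes the final check easy to pass numerically.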

Let $N_C$ be the normal cone to $C$; that is, $N_Cx = \{u \in H : \langle u, y - x\rangle \le 0,\ \forall y \in C\}$ for $x \in C$ and $N_Cx = \emptyset$ for $x \notin C$. By [14, Theorem 3], VIP (26) is equivalent to finding a zero of the maximal monotone operator $A + N_C$. Recalling that $(I + rN_C)^{-1} = P_C$ for any $r > 0$, we thus can apply the previous results to get the following.

Corollary 10. Suppose that the following conditions are satisfied: (C1); (C2); (C3); (C4). Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_nP_C(x_n - r_nAx_n)$$
converges weakly to a point in $\mathrm{VI}(C, A)$, whenever such a point exists.

Corollary 11. Suppose that the following conditions are satisfied: (C1); (C2); (C3); (C4). Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_nP_C(x_n - r_nAx_n)$$
converges weakly to a point in $\mathrm{VI}(C, A)$, whenever such a point exists.

Consider the optimization problem of finding a point $x^* \in C$ with the property
$$f(x^*) = \min_{x \in C} f(x),$$
where $f: H \to \mathbb{R}$ is a convex and differentiable function. The gradient projection algorithm (GPA) generates a sequence $(x_n)$ by the iterative procedure
$$x_{n+1} = P_C(x_n - r\nabla f(x_n)),$$
where $x_0 \in C$ and $r$ is a positive parameter. If, in addition, $\nabla f$ is $L$-Lipschitz continuous, that is,
$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|$$
for any $x, y \in H$, then the GPA with $0 < r < 2/L$ converges weakly to a minimizer of $f$ over $C$, if such minimizers exist (see, e.g., [15, Corollary 4.1]). Denote by $\mathrm{VI}(C, \nabla f)$ the solution set of the variational inequality
$$\langle \nabla f(x^*), x - x^*\rangle \ge 0, \quad \forall x \in C.$$
According to [16, Lemma 5.13], we have $\mathrm{VI}(C, \nabla f) = \arg\min_{x \in C} f(x)$. Further, if $\nabla f$ is $L$-Lipschitz continuous, then it is also $\frac{1}{L}$-ism (see [17, Corollary 10]). Thus, we can apply the previous results by letting $A = \nabla f$ and $B = N_C$.
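Below is an illustrative gradient projection run (hypothetical data, not from the paper): a smooth convex logistic-type objective is minimized over the closed unit ball, with the step size chosen below $2/L$ using the standard bound $L \le \frac{1}{4}\|D\|^2$ for the Lipschitz constant of its gradient, where $D$ is the (made-up) data matrix.

```python
import numpy as np

# Gradient projection sketch: minimize the smooth convex function
# f(x) = sum_i log(1 + exp(d_i . x)) over the unit ball C, with step size
# r < 2/L, where L = 0.25*||D||^2 bounds the Lipschitz constant of grad f.

rng = np.random.default_rng(6)
D = rng.standard_normal((30, 5))

f = lambda x: np.sum(np.log1p(np.exp(D @ x)))
grad_f = lambda x: D.T @ (1.0 / (1.0 + np.exp(-(D @ x))))   # D^T sigma(Dx)

def proj_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

L = 0.25 * np.linalg.norm(D, 2) ** 2
r = 1.0 / L                                   # inside (0, 2/L)

x = np.zeros(5)
for _ in range(1000):
    x = proj_ball(x - r * grad_f(x))          # GPA step
print(x, f(x))
```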

Corollary 12. Assume that $f$ is convex and differentiable with an $L$-Lipschitz-continuous gradient and that (C1); (C2); (C3); (C4). Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_nP_C(x_n - r_n\nabla f(x_n))$$
converges weakly to a minimizer of $f$ over $C$, whenever such a minimizer exists.

Corollary 13. Assume that $f$ is convex and differentiable with an $L$-Lipschitz-continuous gradient and that (C1); (C2); (C3); (C4). Then the sequence $(x_n)$ generated by
$$x_{n+1} = (1 - \lambda_n)x_n + \lambda_nP_C(x_n - r_n\nabla f(x_n))$$
converges weakly to a minimizer of $f$ over $C$, whenever such a minimizer exists.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their sincere thanks to the anonymous referees and editors for their careful review of the paper and the valuable comments, which have greatly improved the earlier version of this paper. This work is supported by the National Natural Science Foundation of China (Grant nos. 11301253 and 11271112), the Basic and Frontier Project of Henan (no. 122300410268), and the Science and Technology Key Project of Education Department of Henan Province (14A110024).