Journal of Applied Mathematics
Volume 2013 (2013), Article ID 531859, 19 pages
http://dx.doi.org/10.1155/2013/531859
Research Article

Relaxed Viscosity Approximation Methods with Regularization for Constrained Minimization Problems

1Department of Mathematics, Shanghai Normal University, Scientific Computing Key Laboratory of Shanghai Universities, Shanghai 200234, China
2Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
3Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
4Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung 807, Taiwan

Received 13 January 2013; Accepted 26 March 2013

Academic Editor: Luigi Muglia

Copyright © 2013 Lu-Chuan Ceng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We introduce a new relaxed viscosity approximation method with regularization and prove the strong convergence of the method to a common fixed point of finitely many nonexpansive mappings and a strict pseudocontraction that also solves a convex minimization problem and a suitable equilibrium problem.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, let $C$ be a nonempty closed convex subset of $H$, and let $P_C$ be the metric projection of $H$ onto $C$. Let $S:C\to C$ be a self-mapping on $C$. We denote by $\mathrm{Fix}(S)$ the set of fixed points of $S$ and by $\mathbf{R}$ the set of all real numbers. A mapping $S:C\to C$ is called $\zeta$-strictly pseudocontractive if there exists a constant $\zeta\in[0,1)$ such that
$$\|Sx-Sy\|^{2}\le\|x-y\|^{2}+\zeta\|(I-S)x-(I-S)y\|^{2},\quad\forall x,y\in C.$$
In particular, if $\zeta=0$, then $S$ is called a nonexpansive mapping. A mapping $A:C\to H$ is called $\alpha$-inverse strongly monotone if there exists a constant $\alpha>0$ such that
$$\langle Ax-Ay,\;x-y\rangle\ge\alpha\|Ax-Ay\|^{2},\quad\forall x,y\in C.$$
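As a simple one-dimensional illustration (an example added here for concreteness, not taken from the original text), take $H=C=\mathbf{R}$ and $Sx=-2x$. Then
$$\|Sx-Sy\|^{2}=4|x-y|^{2},\qquad\|(I-S)x-(I-S)y\|^{2}=9|x-y|^{2},$$
so the defining inequality $4|x-y|^{2}\le|x-y|^{2}+\zeta\cdot 9|x-y|^{2}$ holds exactly when $\zeta\ge\tfrac13$; hence $S$ is a $\tfrac13$-strictly pseudocontractive mapping that is clearly not nonexpansive.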

Let $f:C\to\mathbf{R}$ be a convex and continuously Fréchet differentiable functional. Consider the minimization problem (MP) of minimizing $f$ over the constraint set $C$:
$$\min_{x\in C}f(x),\qquad(3)$$
where we assume the existence of minimizers. We denote by $\Gamma$ the set of minimizers of (3). The gradient-projection algorithm (GPA) generates a sequence $\{x_n\}$ determined by the gradient $\nabla f$ and the metric projection $P_C$ as follows:
$$x_{n+1}=P_C\big(x_n-\lambda\nabla f(x_n)\big),\quad n\ge 0,\qquad(4)$$
or, more generally,
$$x_{n+1}=P_C\big(x_n-\lambda_n\nabla f(x_n)\big),\quad n\ge 0,\qquad(5)$$
where, in both (4) and (5), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\lambda$ or $\lambda_n$ are positive real numbers. The convergence of algorithms (4) and (5) depends on the behavior of the gradient $\nabla f$. As a matter of fact, it is known that if $\nabla f$ is $\eta$-strongly monotone and $L$-Lipschitz continuous, then, for $0<\lambda<2\eta/L^{2}$, the operator $P_C(I-\lambda\nabla f)$ is a contraction. Hence, the sequence $\{x_n\}$ defined by the GPA (4) converges in norm to the unique solution of (3). More generally, if the sequence $\{\lambda_n\}$ is chosen to satisfy the property
$$0<\liminf_{n\to\infty}\lambda_n\le\limsup_{n\to\infty}\lambda_n<\frac{2\eta}{L^{2}},$$
then the sequence $\{x_n\}$ defined by the GPA (5) converges in norm to the unique minimizer of (3). If the gradient $\nabla f$ is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only be weakly convergent if $H$ is infinite dimensional. A counterexample is given by Xu in [1].
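To make the iteration concrete, here is a minimal numerical sketch of the GPA (4) for a strongly convex quadratic over a box constraint; the objective, the constraint set, and the step size below are illustrative choices and are not taken from the paper.

import numpy as np

# Minimize f(x) = 0.5*||A x - b||^2 over the box C = [0, 1]^2 (illustrative data).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad_f(x):                      # gradient of f
    return A.T @ (A @ x - b)

def proj_C(x):                      # metric projection onto the box [0, 1]^2
    return np.clip(x, 0.0, 1.0)

L = np.linalg.norm(A.T @ A, 2)      # Lipschitz constant of grad f
lam = 1.0 / L                       # any 0 < lam < 2/L is admissible here

x = np.array([0.9, 0.9])            # arbitrary initial guess in C
for n in range(200):
    x = proj_C(x - lam * grad_f(x))   # GPA step (4)

print("approximate minimizer:", x)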

Since the Lipschitz continuity of the gradient $\nabla f$ implies that it is inverse strongly monotone (ism), the mapping $I-\lambda\nabla f$ can be expressed as a proper convex combination of the identity mapping and a nonexpansive mapping. Consequently, the GPA can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping. This shows that averaged mappings play an important role in the GPA. Very recently, Xu [1] used averaged mappings to study the convergence analysis of the GPA, which is an operator-oriented approach.
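For instance, under the standing assumption that $\nabla f$ is $L$-Lipschitz (hence $\tfrac1L$-ism), the averagedness of the GPA operator can be quantified as follows; this computation is standard (cf. [1]) and uses Propositions 5 and 6 recalled in Section 2:
$$\lambda\nabla f\ \text{is}\ \tfrac{1}{\lambda L}\text{-ism}\ \Longrightarrow\ I-\lambda\nabla f\ \text{is}\ \tfrac{\lambda L}{2}\text{-averaged for } 0<\lambda<\tfrac{2}{L},$$
and, since $P_C$ is $\tfrac12$-averaged, the composite $P_C(I-\lambda\nabla f)$ is $\alpha$-averaged with
$$\alpha=\tfrac12+\tfrac{\lambda L}{2}-\tfrac12\cdot\tfrac{\lambda L}{2}=\tfrac{2+\lambda L}{4}<1.$$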

We observe that the regularization, in particular, the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems. Consider the following regularized minimization problem:
$$\min_{x\in C}\Big\{f_{\alpha}(x):=f(x)+\frac{\alpha}{2}\|x\|^{2}\Big\},$$
where $\alpha>0$ is the regularization parameter and, again, $f$ is convex with an $L$-Lipschitz continuous gradient $\nabla f$.
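As an illustration of how the regularization term acts (a sketch with assumed data and parameter choices, not the scheme analyzed in this paper), one may apply the gradient-projection step to the regularized objective $f_{\alpha_n}$ with a slowly vanishing regularization parameter $\alpha_n$; on a problem whose minimizers form a whole segment, the iterates drift toward the minimizer of smallest norm.

import numpy as np

# f(x) = 0.5*(a.x - 1)^2 has a whole line of minimizers {x : a.x = 1}; C = [-2, 2]^2.
a = np.array([1.0, 1.0])

def grad_f(x):
    return (a @ x - 1.0) * a

def proj_C(x):
    return np.clip(x, -2.0, 2.0)

L = a @ a                        # Lipschitz constant of grad f
x = np.array([2.0, -1.0])        # initial guess in C
for n in range(1, 5001):
    alpha = 1.0 / n ** 0.5       # regularization parameter, alpha_n -> 0 slowly
    lam = 1.0 / (L + alpha)      # admissible step size for the regularized gradient
    x = proj_C(x - lam * (grad_f(x) + alpha * x))   # regularized gradient-projection step

print(x)   # close to (0.5, 0.5), the minimizer of smallest norm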

The advantage of a regularization method is that it makes it possible to obtain strong convergence to the minimum-norm solution of the optimization problem under investigation. Its disadvantage, however, is its implicit nature, and hence explicit iterative methods seem more attractive. See, for example, [1].

Given a mapping $A:C\to H$, the classical variational inequality problem (VIP) is to find $x\in C$ such that
$$\langle Ax,\;y-x\rangle\ge 0,\quad\forall y\in C.\qquad(9)$$
The solution set of VIP (9) is denoted by $\mathrm{VI}(C,A)$. It is well known that $x\in\mathrm{VI}(C,A)$ if and only if $x=P_C(x-\lambda Ax)$ for some $\lambda>0$. The variational inequality was first discussed by Lions [2] and is now well known. The variational inequality theory has been studied quite extensively and has emerged as an important tool in the study of a wide class of obstacle, unilateral, free, moving, and equilibrium problems arising in several branches of pure and applied sciences in a unified and general framework. See, for example, [3–10] and the references therein.
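The fixed-point reformulation just mentioned follows in one line from the characterization of the metric projection recalled in Proposition 1(i) below: for any $\lambda>0$,
$$x=P_C(x-\lambda Ax)\ \Longleftrightarrow\ \langle(x-\lambda Ax)-x,\;y-x\rangle\le 0\ \ \forall y\in C\ \Longleftrightarrow\ \langle Ax,\;y-x\rangle\ge 0\ \ \forall y\in C.$$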

In this paper, we study the following equilibrium problem (EP), which is to find $x\in C$ such that
$$\Theta(x,y)+h(x,y)\ge 0,\quad\forall y\in C,\qquad(10)$$
where $\Theta,h:C\times C\to\mathbf{R}$ are two given bifunctions. The solution set of EP (10) is denoted by $\mathrm{EP}(\Theta,h)$. We will introduce and consider a relaxed viscosity iterative scheme with regularization for finding a common element of the solution set of the minimization problem (3), the solution set of the equilibrium problem (10), and the common fixed point set of finitely many nonexpansive mappings and a strictly pseudocontractive mapping in the setting of an infinite-dimensional Hilbert space. We will prove that this iterative scheme converges strongly to a common fixed point of these mappings, which is both a minimizer of MP (3) and an equilibrium point of EP (10).

2. Preliminaries

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. We write $x_n\rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$ and $x_n\to x$ to indicate that the sequence $\{x_n\}$ converges strongly to $x$. Moreover, we use $\omega_w(x_n)$ to denote the weak $\omega$-limit set of the sequence $\{x_n\}$ and $\omega_s(x_n)$ to denote the strong $\omega$-limit set of the sequence $\{x_n\}$; that is,
$$\omega_w(x_n)=\{x\in H:\ x_{n_i}\rightharpoonup x\ \text{for some subsequence}\ \{x_{n_i}\}\ \text{of}\ \{x_n\}\},$$
$$\omega_s(x_n)=\{x\in H:\ x_{n_i}\to x\ \text{for some subsequence}\ \{x_{n_i}\}\ \text{of}\ \{x_n\}\}.$$

The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C:H\to C$ which assigns to each point $x\in H$ the unique point $P_C x\in C$ satisfying the property
$$\|x-P_C x\|=\inf_{y\in C}\|x-y\|=:d(x,C).$$

Some important properties of projections are gathered in the following.

Proposition 1. For given $x\in H$ and $z\in C$:
(i) $z=P_C x\ \Leftrightarrow\ \langle x-z,\;y-z\rangle\le 0$, for all $y\in C$;
(ii) $z=P_C x\ \Leftrightarrow\ \|x-z\|^{2}\le\|x-y\|^{2}-\|y-z\|^{2}$, for all $y\in C$;
(iii) $\langle P_C x-P_C y,\;x-y\rangle\ge\|P_C x-P_C y\|^{2}$, for all $y\in H$, which hence implies that $P_C$ is nonexpansive and monotone.
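As a quick sanity check of (i) and (iii) (using, for illustration only, the closed unit ball as $C$, for which the projection has the closed form $P_Cx=x/\max\{\|x\|,1\}$), the following sketch verifies both inequalities on random samples.

import numpy as np

rng = np.random.default_rng(0)

def proj_C(x):                                  # projection onto the closed unit ball
    return x / max(np.linalg.norm(x), 1.0)

for _ in range(1000):
    x, y = rng.normal(size=3) * 3, rng.normal(size=3) * 3
    px, py = proj_C(x), proj_C(y)
    # Proposition 1(i): <x - P_C x, w - P_C x> <= 0 for every w in C (here w = P_C y)
    assert np.dot(x - px, py - px) <= 1e-10
    # Proposition 1(iii): <P_C x - P_C y, x - y> >= ||P_C x - P_C y||^2
    assert np.dot(px - py, x - y) >= np.linalg.norm(px - py) ** 2 - 1e-10

print("Proposition 1 (i) and (iii) hold on all random samples.")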

Definition 2. A mapping $T:H\to H$ is said to be
(a) nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in H$;
(b) firmly nonexpansive if $2T-I$ is nonexpansive, or equivalently, $\langle x-y,\;Tx-Ty\rangle\ge\|Tx-Ty\|^{2}$ for all $x,y\in H$; alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T=\tfrac12(I+S)$, where $S:H\to H$ is nonexpansive; projections are firmly nonexpansive.

Definition 3. Let $T$ be a nonlinear operator with domain $D(T)\subseteq H$ and range $R(T)\subseteq H$.
(a) $T$ is said to be monotone if $\langle x-y,\;Tx-Ty\rangle\ge 0$ for all $x,y\in D(T)$.
(b) Given a number $\eta>0$, $T$ is said to be $\eta$-strongly monotone if $\langle x-y,\;Tx-Ty\rangle\ge\eta\|x-y\|^{2}$ for all $x,y\in D(T)$.
(c) Given a number $\nu>0$, $T$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if $\langle x-y,\;Tx-Ty\rangle\ge\nu\|Tx-Ty\|^{2}$ for all $x,y\in D(T)$.

It can easily be seen that if $T$ is nonexpansive, then $I-T$ is monotone. It is also easy to see that a projection $P_C$ is 1-ism. Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely in solving practical problems in various fields.
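Both remarks are immediate; for a nonexpansive $T$, the Cauchy–Schwarz inequality gives
$$\langle(I-T)x-(I-T)y,\;x-y\rangle=\|x-y\|^{2}-\langle Tx-Ty,\;x-y\rangle\ge\|x-y\|^{2}-\|Tx-Ty\|\,\|x-y\|\ge 0,$$
while Proposition 1(iii) is precisely the statement that $P_C$ is $1$-ism.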

Definition 4. A mapping $T:H\to H$ is said to be an averaged mapping if it can be written as the average of the identity $I$ and a nonexpansive mapping; that is,
$$T=(1-\alpha)I+\alpha S,$$
where $\alpha\in(0,1)$ and $S:H\to H$ is nonexpansive. More precisely, when the last equality holds, we say that $T$ is $\alpha$-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are $\tfrac12$-averaged maps.

Proposition 5 (see [11]). Let $T:H\to H$ be a given mapping.
(i) $T$ is nonexpansive if and only if the complement $I-T$ is $\tfrac12$-ism.
(ii) If $T$ is $\nu$-ism, then for $\gamma>0$, $\gamma T$ is $\tfrac{\nu}{\gamma}$-ism.
(iii) $T$ is averaged if and only if the complement $I-T$ is $\nu$-ism for some $\nu>\tfrac12$. Indeed, for $\alpha\in(0,1)$, $T$ is $\alpha$-averaged if and only if $I-T$ is $\tfrac{1}{2\alpha}$-ism.

Proposition 6 (see [11]). Let $S,T,V:H\to H$ be given operators.
(i) If $T=(1-\alpha)S+\alpha V$ for some $\alpha\in(0,1)$ and if $S$ is averaged and $V$ is nonexpansive, then $T$ is averaged.
(ii) $T$ is firmly nonexpansive if and only if the complement $I-T$ is firmly nonexpansive.
(iii) If $T=(1-\alpha)S+\alpha V$ for some $\alpha\in(0,1)$ and if $S$ is firmly nonexpansive and $V$ is nonexpansive, then $T$ is averaged.
(iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1\cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1,\alpha_2\in(0,1)$, then the composite $T_1T_2$ is $\alpha$-averaged, where $\alpha=\alpha_1+\alpha_2-\alpha_1\alpha_2$.
(v) If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^{N}\mathrm{Fix}(T_i)=\mathrm{Fix}(T_1\cdots T_N).$$
The notation $\mathrm{Fix}(T)$ denotes the set of all fixed points of the mapping $T$; that is, $\mathrm{Fix}(T)=\{x\in H:\ Tx=x\}$.

It is clear that, in a real Hilbert space $H$, $S:C\to C$ is $\zeta$-strictly pseudocontractive if and only if the following inequality holds:
$$\langle Sx-Sy,\;x-y\rangle\le\|x-y\|^{2}-\frac{1-\zeta}{2}\|(I-S)x-(I-S)y\|^{2},\quad\forall x,y\in C.$$
This immediately implies that if $S$ is a $\zeta$-strictly pseudocontractive mapping, then $I-S$ is $\frac{1-\zeta}{2}$-inverse strongly monotone; for further details, we refer to [12] and the references therein. It is well known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings.
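Indeed, rewriting the above inequality in terms of $I-S$ gives the inverse strong monotonicity directly:
$$\langle(I-S)x-(I-S)y,\;x-y\rangle=\|x-y\|^{2}-\langle Sx-Sy,\;x-y\rangle\ge\frac{1-\zeta}{2}\|(I-S)x-(I-S)y\|^{2},\quad\forall x,y\in C.$$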

Lemma 7 (see [12, Proposition 2.1]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $S:C\to C$ be a mapping.
(i) If $S$ is a $\zeta$-strictly pseudocontractive mapping, then $S$ satisfies the Lipschitz condition
$$\|Sx-Sy\|\le\frac{1+\zeta}{1-\zeta}\|x-y\|,\quad\forall x,y\in C.$$
(ii) If $S$ is a $\zeta$-strictly pseudocontractive mapping, then the mapping $I-S$ is semiclosed at $0$; that is, if $\{x_n\}$ is a sequence in $C$ such that $x_n\rightharpoonup\tilde{x}$ weakly and $(I-S)x_n\to 0$ strongly, then $(I-S)\tilde{x}=0$.
(iii) If $S$ is a $\zeta$-(quasi-)strict pseudocontraction, then the fixed point set $\mathrm{Fix}(S)$ of $S$ is closed and convex, so that the projection $P_{\mathrm{Fix}(S)}$ is well defined.

The following lemma is an immediate consequence of an inner product.

Lemma 8. In a real Hilbert space $H$, there holds the following inequality:
$$\|x+y\|^{2}\le\|x\|^{2}+2\langle y,\;x+y\rangle,\quad\forall x,y\in H.$$

The following elementary result on real sequences is quite well known.

Lemma 9 (see [13]). Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the property
$$a_{n+1}\le(1-s_n)a_n+s_n t_n+\epsilon_n,\quad n\ge 0,$$
where $\{s_n\}\subseteq(0,1]$ and $\{t_n\}$ are real sequences such that
(i) $\sum_{n=0}^{\infty}s_n=\infty$;
(ii) either $\limsup_{n\to\infty}t_n\le 0$ or $\sum_{n=0}^{\infty}s_n|t_n|<\infty$;
(iii) $\sum_{n=0}^{\infty}\epsilon_n<\infty$, where $\epsilon_n\ge 0$ for all $n\ge 0$.
Then, $\lim_{n\to\infty}a_n=0$.
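A small numerical illustration (with hypothetical parameter choices, purely to visualize the lemma) takes $s_n=1/(n+1)$, $t_n=1/(n+1)$, and $\epsilon_n=1/(n+1)^{2}$, which satisfy (i)–(iii); the recursion then drives $a_n$ to $0$.

# Illustration of Lemma 9 with hypothetical sequences:
# s_n = 1/(n+1) (so sum s_n = infinity), t_n = 1/(n+1) -> 0, eps_n = 1/(n+1)^2 (summable).
a = 1.0
for n in range(100000):
    s = 1.0 / (n + 1)
    t = 1.0 / (n + 1)
    eps = 1.0 / (n + 1) ** 2
    a = (1 - s) * a + s * t + eps     # recursion of the form in Lemma 9

print(a)   # close to 0, as Lemma 9 predicts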

Lemma 10 (see [14]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $S:C\to C$ be a $\zeta$-strictly pseudocontractive mapping. Let $\gamma$ and $\delta$ be two nonnegative real numbers such that $(\gamma+\delta)\zeta\le\gamma$. Then,
$$\|\gamma(x-y)+\delta(Sx-Sy)\|\le(\gamma+\delta)\|x-y\|,\quad\forall x,y\in C.$$

The following lemma appears implicitly in the paper of Reinermann [15].

Lemma 11 (see [15]). Let $H$ be a real Hilbert space. Then, for all $x,y\in H$ and $\lambda\in[0,1]$,
$$\|\lambda x+(1-\lambda)y\|^{2}=\lambda\|x\|^{2}+(1-\lambda)\|y\|^{2}-\lambda(1-\lambda)\|x-y\|^{2}.$$
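The identity follows by expanding both sides with the inner product:
$$\|\lambda x+(1-\lambda)y\|^{2}=\lambda^{2}\|x\|^{2}+2\lambda(1-\lambda)\langle x,y\rangle+(1-\lambda)^{2}\|y\|^{2}=\lambda\|x\|^{2}+(1-\lambda)\|y\|^{2}-\lambda(1-\lambda)\|x-y\|^{2}.$$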

Lemma 12 (see [16]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $\Theta:C\times C\to\mathbf{R}$ be a bifunction such that
(f1) $\Theta(x,x)=0$ for all $x\in C$;
(f2) $\Theta$ is monotone and upper hemicontinuous in the first variable;
(f3) $\Theta$ is lower semicontinuous and convex in the second variable.
Let $h:C\times C\to\mathbf{R}$ be a bifunction such that
(h1) $h(x,x)=0$ for all $x\in C$;
(h2) $h$ is monotone and weakly upper semicontinuous in the first variable;
(h3) $h$ is convex in the second variable.
Moreover, let us suppose that
(H) for fixed $r>0$ and $x\in C$, there exist a bounded $K\subseteq C$ and $\hat{x}\in K$ such that, for all $z\in C\setminus K$, $-\Theta(\hat{x},z)+h(z,\hat{x})+\frac{1}{r}\langle\hat{x}-z,\;z-x\rangle<0$.
For $r>0$ and $x\in H$, let $T_r:H\to C$ be the mapping defined by
$$T_r x=\Big\{z\in C:\ \Theta(z,y)+h(z,y)+\frac{1}{r}\langle y-z,\;z-x\rangle\ge 0,\ \forall y\in C\Big\},$$
called the resolvent of $\Theta$ and $h$. Then,
(1) $T_r x\ne\emptyset$;
(2) $T_r x$ is a singleton;
(3) $T_r$ is firmly nonexpansive;
(4) $\mathrm{EP}(\Theta,h)=\mathrm{Fix}(T_r)$, and it is closed and convex.
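In the special case $\Theta(z,y)=\varphi(y)-\varphi(z)$ for a convex $\varphi$ and $h\equiv 0$ (an illustrative choice, not the general setting above), the defining inequality of $T_r x$ is exactly the optimality condition of the strongly convex problem $\min_{y\in C}\{\varphi(y)+\frac{1}{2r}\|y-x\|^{2}\}$, so the resolvent reduces to a proximal step; the following sketch evaluates it numerically on an interval with assumed data.

import numpy as np

# Special case: Theta(z, y) = phi(y) - phi(z) with phi(y) = |y|, h = 0, C = [-1, 2]
# (all of these are illustrative choices).
C_lo, C_hi = -1.0, 2.0
r, x = 0.5, 1.2                       # resolvent parameter and query point

ys = np.linspace(C_lo, C_hi, 300001)  # fine grid over C
vals = np.abs(ys) + (ys - x) ** 2 / (2 * r)   # phi(y) + (1/(2r)) * (y - x)^2
z = ys[np.argmin(vals)]               # T_r(x), evaluated by direct minimization

print(z)    # about 0.7 = x - r, the soft-thresholding of x, as expected for phi = |.|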

Lemma 13 (see [16]). Let us suppose that (f1)–(f3), (h1)–(h3), and (H) hold. Let $x,y\in H$ and $r_1,r_2>0$. Then,
$$\|T_{r_2}y-T_{r_1}x\|\le\|y-x\|+\Big|\frac{r_2-r_1}{r_2}\Big|\,\|T_{r_2}y-y\|.$$

Lemma 14 (see [17]). Suppose that the hypotheses of Lemma 12 are satisfied. Let $\{r_n\}$ be a sequence in $(0,\infty)$ with $\liminf_{n\to\infty}r_n>0$. Suppose that $\{x_n\}$ is a bounded sequence. Then, the following statements are equivalent and true:
(a) if $\|x_n-T_{r_n}x_n\|\to 0$ as $n\to\infty$, each weak cluster point of $\{x_n\}$ satisfies the problem
$$\Theta(x,y)+h(x,y)\ge 0,\quad\forall y\in C,$$
that is, $\omega_w(x_n)\subseteq\mathrm{EP}(\Theta,h)$;
(b) the demiclosedness principle holds in the sense that, if $x_n\rightharpoonup x^{*}$ and $\|x_n-T_{r_n}x_n\|\to 0$ as $n\to\infty$, then $(I-T_{r_k})x^{*}=0$ for all $k\ge 1$.

3. Main Results

We now propose the following relaxed viscosity iterative scheme with regularization: for all , where the mapping is a -contraction; the mapping is a -strict pseudocontraction;   is a nonexpansive mapping for each ;   satisfies the Lipschitz condition (10) with ;    are two bifunctions satisfying the hypotheses of Lemma 12;   is a sequence in with ;    are sequences in with ;   are sequences in with , for all ;    are sequences in and , for all ; is a sequence in with and .
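For orientation only, the following is a minimal sketch of the classical viscosity approximation step $x_{n+1}=\epsilon_n Q(x_n)+(1-\epsilon_n)Tx_n$ for a contraction $Q$ and a nonexpansive $T$ with $\mathrm{Fix}(T)\ne\emptyset$; it is a simplified prototype with assumed data and deliberately omits the regularization, the equilibrium resolvent, and the finite family of mappings that the relaxed scheme proposed above combines.

import numpy as np

# Prototype viscosity iteration (a simplified sketch; NOT the full relaxed scheme above):
# T = projection onto the closed unit ball (nonexpansive, Fix(T) = unit ball),
# Q = a 1/2-contraction.
T = lambda x: x / max(np.linalg.norm(x), 1.0)
Q = lambda x: 0.5 * x + np.array([1.0, 0.0])

x = np.array([3.0, 4.0])
for n in range(1, 10001):
    eps = 1.0 / n                     # eps_n -> 0 with sum eps_n = infinity
    x = eps * Q(x) + (1 - eps) * T(x)

print(x)   # approaches (1, 0), the unique q in Fix(T) with q = P_{Fix(T)}(Q(q))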

Before stating and proving the main convergence results, we first establish the following lemmas.

Lemma 15. Let us suppose that . Then, the sequences , , for all , and are bounded.

Proof. First of all, we can show as in [18] that is nonexpansive for , and is nonexpansive for all and . We observe that if , then For all, from to , by induction, one proves that Thus, we obtain that for every , For simplicity, put and for every . Then, and for every . Taking into consideration that and for , we have Similarly, we get . Thus, from (34) we have Since for all , utilizing Lemma 10, we derive from (35) By induction, we get This implies that is bounded and so are , , and for each . It is clear that both and are also bounded. Since , is also bounded.

Lemma 16. Let us suppose that . Moreover, suppose that the following hold:(H1) and ;(H2) or ;(H3) or for each ;(H4) or ;(H5) or ;(H6) or . Then, , that is, is asymptotically regular.

Proof. Taking into account , we may assume, without loss of generality, that for some . First, we write , for all , where ). It follows that for all Since for all , utilizing Lemma 10, we have
Next, we estimate . Observe that for every and similarly, Also, from (30), we have Simple calculations show that Then, passing to the norm we get from (40) that where , for all for some . Furthermore, by the definition of one obtains that, for all In the case of , we have Substituting (46) in all (45) type one obtains for This together with (44) implies that By Lemma 13, we know that where . So, substituting (49) in (48) we obtain where is a minorant for and , for all for some . This together with (38)-(39), implies that where , for all for some .
Further, we observe that Simple calculations show that Then, passing to the norm, we get from (51) where , for all for some . By hypotheses (H1)–(H6) and Lemma 9, from , we obtain the claim.

Lemma 17. Let us suppose that . Suppose further that is asymptotically regular. Then, and as .

Proof. We recall that, by the firm nonexpansivity of , a standard calculation (see [17]) shows that if , then Let . Then by Lemma 11, we have from (33)–(34) the following Since for all , utilizing Lemma 10, we have Taking into account , we may assume that for some . So, we deduce that Since , and as , we conclude from the boundedness of , and that as . This together with , implies that Furthermore, from (33), (55), and (56), we have which hence implies that Since and as , we deduce from the boundedness of , , and that

Remark 18. By the last lemma we have and ; that is, the sets of strong/weak cluster points of and coincide.

Of course, if , as , for every index , the assumptions of Lemma 16 are enough to assure that . In the next lemma, we examine the case in which at least one sequence is a null sequence.

Lemma 19. Let us suppose that . Suppose that (H1) holds. Moreover, for an index , , and the following hold:(H7) for all ,