A Hybrid Gradient-Projection Algorithm for Averaged Mappings in Hilbert Spaces

Ming Tian and Min-Min Li

Journal of Applied Mathematics, vol. 2012, Article ID 782960, 14 pages, 2012. https://doi.org/10.1155/2012/782960

Research Article | Open Access
Special Issue: Numerical and Analytical Methods for Variational Inequalities and Related Problems with Applications

Academic Editor: Hong-Kun Xu
Received: 18 Apr 2012. Accepted: 21 Jun 2012. Published: 25 Jul 2012.

Abstract

It is well known that the gradient-projection algorithm (GPA) is very useful in solving constrained convex minimization problems. In this paper, we combine a general iterative method with the GPA to propose a hybrid gradient-projection algorithm, and we prove that the sequence it generates converges in norm to a minimizer of the constrained convex minimization problem which also solves a certain variational inequality.

1. Introduction

Let $H$ be a real Hilbert space and $C$ a nonempty closed convex subset of $H$. Consider the following constrained convex minimization problem:
$$\min_{x \in C} f(x), \tag{1.1}$$
where $f : C \to \mathbb{R}$ is a real-valued convex and continuously Fréchet differentiable function. The gradient $\nabla f$ satisfies the following Lipschitz condition:
$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|, \quad \forall x, y \in C, \tag{1.2}$$
where $L > 0$. Assume that the minimization problem (1.1) is consistent, and let $S$ denote its solution set.
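To fix ideas, a standard concrete instance of (1.1)-(1.2), included here for illustration and not taken from the paper, is the constrained least-squares problem:
$$f(x) = \tfrac{1}{2}\|Ax - b\|^2, \qquad \nabla f(x) = A^{*}(Ax - b), \qquad \|\nabla f(x) - \nabla f(y)\| \le \|A\|^2\|x - y\|,$$
where $A$ is a bounded linear operator and $b \in H$, so that the Lipschitz condition (1.2) holds with $L = \|A\|^2$.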

It is well known that the gradient-projection algorithm is very useful in dealing with constrained convex minimization problems and has been studied extensively ([1–5] and the references therein). It has recently been applied to solve split feasibility problems [6–10]. Levitin and Polyak [1] considered the following gradient-projection algorithm:
$$x_{n+1} = \mathrm{Proj}_C\big(x_n - \lambda_n \nabla f(x_n)\big), \quad n \ge 0. \tag{1.3}$$
Let $\{\lambda_n\}_{n=0}^{\infty}$ satisfy
$$0 < \liminf_{n \to \infty} \lambda_n \le \limsup_{n \to \infty} \lambda_n < \frac{2}{L}. \tag{1.4}$$
It is proved that the sequence $\{x_n\}$ generated by (1.3) converges weakly to a minimizer of (1.1).
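For orientation, here is a minimal numerical sketch of iteration (1.3) in Python. The least-squares objective, the unit-ball constraint set, and the constant step size $\lambda_n = 1/L$ are illustrative assumptions made for this example only; the paper works with a general $f$ and $C$ in a Hilbert space.

```python
import numpy as np

# Sketch of the gradient-projection algorithm (1.3) on a toy instance:
# minimize f(x) = 0.5 * ||Ax - b||^2 over the Euclidean unit ball C.
# A, b, C, and the constant step size are illustrative assumptions.

def proj_unit_ball(x):
    """Nearest-point projection Proj_C onto C = {x : ||x|| <= 1}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def gpa(A, b, x0, n_iter=500):
    grad = lambda x: A.T @ (A @ x - b)         # gradient of f
    L = np.linalg.norm(A.T @ A, 2)             # Lipschitz constant of grad f
    lam = 1.0 / L                              # constant step in (0, 2/L), cf. (1.4)
    x = x0
    for _ in range(n_iter):
        x = proj_unit_ball(x - lam * grad(x))  # x_{n+1} = Proj_C(x_n - lam_n grad f(x_n))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x_min = gpa(A, b, np.zeros(5))
```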

Xu [11] proved that, under certain appropriate conditions on $\{\alpha_n\}$ and $\{\lambda_n\}$, the sequence $\{x_n\}$ defined by the following relaxed gradient-projection algorithm converges weakly to a minimizer of (1.1):
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n \mathrm{Proj}_C\big(x_n - \lambda_n \nabla f(x_n)\big), \quad n \ge 0. \tag{1.5}$$

Since the Lipschitz continuity (1.2) of the gradient of $f$ implies that $\nabla f$ is in fact $1/L$-inverse strongly monotone (ism) [12, 13], its complement can be an averaged mapping. Recall that a mapping $T$ is nonexpansive if and only if it is Lipschitz with Lipschitz constant not more than one; that a mapping is averaged if and only if it can be expressed as a proper convex combination of the identity mapping and a nonexpansive mapping; and that a mapping $T$ is said to be $\nu$-inverse strongly monotone if and only if $\langle x - y, Tx - Ty\rangle \ge \nu\|Tx - Ty\|^2$ for all $x, y \in H$, where $\nu > 0$. Recall also that the composite of finitely many averaged mappings is averaged: if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1 \cdots T_N$ [14]. In particular, an averaged mapping is nonexpansive [15]. As a result, the GPA iteration can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping.
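To make the first claim concrete, here is the standard decomposition (spelled out for the reader's convenience; it is implicit in the paper): since $\nabla f$ is $1/L$-ism, for $0 < \lambda < 2/L$ we may write
$$I - \lambda\nabla f = \Big(1 - \frac{\lambda L}{2}\Big)I + \frac{\lambda L}{2}\Big(I - \frac{2}{L}\nabla f\Big),$$
where $I - (2/L)\nabla f$ is nonexpansive precisely because $\nabla f$ is $1/L$-ism. Hence $I - \lambda\nabla f$ is averaged, and so is $\mathrm{Proj}_C(I - \lambda\nabla f)$.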

Generally speaking, in infinite-dimensional Hilbert spaces the GPA has only weak convergence. Xu [11] provided a modification of the GPA so that strong convergence is guaranteed. He considered the following hybrid gradient-projection algorithm:
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n)\mathrm{Proj}_C\big(x_n - \lambda_n \nabla f(x_n)\big), \quad n \ge 0, \tag{1.6}$$
where $h$ is a contraction.

It is proved that if the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfy appropriate conditions, then the sequence $\{x_n\}$ generated by (1.6) converges in norm to a minimizer of (1.1) which solves the variational inequality
$$x^* \in S, \quad \langle (I - h)x^*, x - x^*\rangle \ge 0, \quad \forall x \in S. \tag{1.7}$$

On the other hand, Tian [16] introduced the following general iterative algorithm for solving a variational inequality:
$$x_{n+1} = \alpha_n\gamma f(x_n) + (I - \mu\alpha_n F)Tx_n, \quad n \ge 0, \tag{1.8}$$
where $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with $\kappa > 0$, $\eta > 0$, and $f$ is a contraction with coefficient $0 < \alpha < 1$ (here $f$ denotes the contraction of [16], not the objective of (1.1)). He proved that if $\{\alpha_n\}$ satisfies appropriate conditions, then the sequence $\{x_n\}$ generated by (1.8) converges strongly to the unique solution $\tilde{x} \in \mathrm{Fix}(T)$ of the variational inequality
$$\langle(\mu F - \gamma f)\tilde{x}, \tilde{x} - z\rangle \le 0, \quad \forall z \in \mathrm{Fix}(T). \tag{1.9}$$

In this paper, motivated and inspired by the research in this direction, we combine the iterative method (1.8) with the gradient-projection algorithm (1.3) and consider the following hybrid gradient-projection algorithm:
$$x_{n+1} = \theta_n\gamma h(x_n) + (I - \mu\theta_n F)\mathrm{Proj}_C\big(x_n - \lambda_n\nabla f(x_n)\big), \quad n \ge 0. \tag{1.10}$$

We will prove that if the parameter sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfy appropriate conditions, then the sequence $\{x_n\}$ generated by (1.10) converges in norm to a minimizer $x^*$ of (1.1) which solves the variational inequality
$$(\mathrm{VI}) \qquad x^* \in S, \quad \langle(\mu F - \gamma h)x^*, x - x^*\rangle \ge 0, \quad \forall x \in S, \tag{1.11}$$
where $S$ is the solution set of the minimization problem (1.1) and $h$ is a contraction.

2. Preliminaries

This section collects some lemmas which will be used in the proofs for the main results in the next section. Some of them are known; others are not hard to derive.

Throughout this paper, we write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$, and $x_n \to x$ to indicate that $\{x_n\}$ converges strongly to $x$. Moreover, $\omega_w(x_n) = \{x : \exists\, x_{n_j} \rightharpoonup x\}$ denotes the weak $\omega$-limit set of the sequence $\{x_n\}_{n=1}^{\infty}$.

Lemma 2.1 (see [17]). Assume that $\{a_n\}_{n=0}^{\infty}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n + \beta_n, \quad n \ge 0, \tag{2.1}$$
where $\{\gamma_n\}_{n=0}^{\infty}$ and $\{\beta_n\}_{n=0}^{\infty}$ are sequences in $[0, 1]$ and $\{\delta_n\}_{n=0}^{\infty}$ is a sequence in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty}\gamma_n = \infty$;
(ii) either $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}\gamma_n|\delta_n| < \infty$;
(iii) $\sum_{n=0}^{\infty}\beta_n < \infty$.
Then $\lim_{n\to\infty}a_n = 0$.

Lemma 2.2 (see [18]). Let $C$ be a closed convex subset of a Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}\,T \ne \emptyset$. If $\{x_n\}_{n=1}^{\infty}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}_{n=1}^{\infty}$ converges strongly to $y$, then $(I - T)x = y$.

Lemma 2.3. Let $H$ be a Hilbert space and $C$ a nonempty closed convex subset of $H$. Let $h : C \to C$ be a contraction with coefficient $0 < \rho < 1$, and let $F : C \to C$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with $\kappa, \eta > 0$. Then, for $0 < \gamma < \mu\eta/\rho$,
$$\langle x - y, (\mu F - \gamma h)x - (\mu F - \gamma h)y\rangle \ge (\mu\eta - \gamma\rho)\|x - y\|^2, \quad \forall x, y \in C. \tag{2.2}$$
That is, $\mu F - \gamma h$ is strongly monotone with coefficient $\mu\eta - \gamma\rho$.
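The lemma is stated without proof; the estimate behind it is one line (a sketch):
$$\langle x - y, (\mu F - \gamma h)x - (\mu F - \gamma h)y\rangle = \mu\langle x - y, Fx - Fy\rangle - \gamma\langle x - y, h(x) - h(y)\rangle \ge \mu\eta\|x - y\|^2 - \gamma\rho\|x - y\|^2,$$
using the $\eta$-strong monotonicity of $F$ for the first term and the Cauchy-Schwarz inequality together with the contraction property of $h$ for the second.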

Lemma 2.4. Let $C$ be a closed convex subset of a real Hilbert space $H$, and let $x \in H$ and $y \in C$ be given. Then $y = P_C x$ if and only if
$$\langle x - y, y - z\rangle \ge 0, \quad \forall z \in C. \tag{2.3}$$

3. Main Results

Let $H$ be a real Hilbert space, and let $C$ be a nonempty closed convex subset of $H$ such that $C \pm C \subset C$. Assume that the minimization problem (1.1) is consistent, and let $S$ denote its solution set. Assume that the gradient $\nabla f$ satisfies the Lipschitz condition (1.2). Since $S$ is a closed convex subset, the nearest-point projection from $H$ onto $S$ is well defined. Recall also that a contraction on $C$ is a self-mapping $h$ of $C$ such that $\|h(x) - h(y)\| \le \rho\|x - y\|$ for all $x, y \in C$, where $\rho \in [0, 1)$ is a constant. Let $F$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator on $C$ with $\kappa, \eta > 0$. Denote by $\Pi$ the collection of all contractions on $C$, namely,
$$\Pi = \{h : h \text{ is a contraction on } C\}. \tag{3.1}$$
Now let $h \in \Pi$ with coefficient $0 < \rho < 1$ and $s \in (0, 1)$ be given, and let $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma < \mu(\eta - \mu\kappa^2/2)/\rho = \tau/\rho$. Assume that $\lambda_s$ depends continuously on $s$ and, in addition, that $\lambda_s \in [a, b] \subset (0, 2/L)$. Consider the mapping $X_s$ on $C$ defined by
$$X_s(x) = s\gamma h(x) + (I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(x), \quad x \in C. \tag{3.2}$$
It is easy to see that $X_s$ is a contraction. Set $V_s := \mathrm{Proj}_C(I - \lambda_s\nabla f)$; it is obvious that $V_s$ is a nonexpansive mapping, and we can rewrite $X_s(x)$ as
$$X_s(x) = s\gamma h(x) + (I - s\mu F)V_s(x). \tag{3.3}$$
First observe that, for $s \in (0, 1)$,
$$\begin{aligned}
\|(I - s\mu F)V_s(x) - (I - s\mu F)V_s(y)\|^2 &= \|V_s(x) - V_s(y) - s\mu\big(FV_s(x) - FV_s(y)\big)\|^2 \\
&= \|V_s(x) - V_s(y)\|^2 - 2s\mu\langle V_s(x) - V_s(y), FV_s(x) - FV_s(y)\rangle + s^2\mu^2\|FV_s(x) - FV_s(y)\|^2 \\
&\le \big(1 - 2s\mu\eta + s^2\mu^2\kappa^2\big)\|V_s(x) - V_s(y)\|^2 \\
&\le \big(1 - s\mu(2\eta - s\mu\kappa^2)\big)\|x - y\|^2 \\
&\le \Big(1 - s\mu\Big(\eta - \frac{\mu\kappa^2}{2}\Big)\Big)^2\|x - y\|^2 = (1 - s\tau)^2\|x - y\|^2. \tag{3.4}
\end{aligned}$$
Indeed, we have
$$\begin{aligned}
\|X_s(x) - X_s(y)\| &= \|s\gamma h(x) + (I - s\mu F)V_s(x) - s\gamma h(y) - (I - s\mu F)V_s(y)\| \\
&\le s\gamma\|h(x) - h(y)\| + \|(I - s\mu F)V_s(x) - (I - s\mu F)V_s(y)\| \\
&\le s\gamma\rho\|x - y\| + (1 - s\tau)\|x - y\| = \big(1 - s(\tau - \gamma\rho)\big)\|x - y\|. \tag{3.5}
\end{aligned}$$
Hence $X_s$ has a unique fixed point, denoted by $x_s$, which uniquely solves the fixed-point equation
$$x_s = s\gamma h(x_s) + (I - s\mu F)V_s(x_s). \tag{3.6}$$
The next proposition summarizes the properties of $\{x_s\}$.

Proposition 3.1. Let $x_s$ be defined by (3.6). Then:
(i) $\{x_s\}$ is bounded for $s \in (0, 1/\tau)$;
(ii) $\lim_{s\to 0}\|x_s - \mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\| = 0$;
(iii) $x_s$ defines a continuous curve from $(0, 1/\tau)$ into $H$.

Proof. (i) Take $x^* \in S$; then $x^* = \mathrm{Proj}_C(I - \lambda_s\nabla f)(x^*)$, and we have
$$\begin{aligned}
\|x_s - x^*\| &= \|s\gamma h(x_s) + (I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - x^*\| \\
&\le \|(I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - (I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(x^*)\| + s\|\gamma h(x_s) - \mu F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x^*)\| \\
&\le (1 - s\tau)\|x_s - x^*\| + s\|\gamma h(x_s) - \mu Fx^*\| \\
&\le (1 - s\tau)\|x_s - x^*\| + s\gamma\rho\|x_s - x^*\| + s\|\gamma h(x^*) - \mu Fx^*\|. \tag{3.7}
\end{aligned}$$
It follows that
$$\|x_s - x^*\| \le \frac{\|\gamma h(x^*) - \mu Fx^*\|}{\tau - \gamma\rho}. \tag{3.8}$$
Hence $\{x_s\}$ is bounded.
(ii) By the definition of $\{x_s\}$,
$$\|x_s - \mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\| = s\|\gamma h(x_s) - \mu F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\| \to 0 \quad (s \to 0), \tag{3.9}$$
since $\{x_s\}$ is bounded, and so are $\{h(x_s)\}$ and $\{F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\}$.
(iii) Take $s, s_0 \in (0, 1/\tau)$. With $V_s = \mathrm{Proj}_C(I - \lambda_s\nabla f)$ as before, we have
$$\begin{aligned}
\|x_s - x_{s_0}\| &= \|s\gamma h(x_s) + (I - s\mu F)V_s(x_s) - s_0\gamma h(x_{s_0}) - (I - s_0\mu F)V_{s_0}(x_{s_0})\| \\
&\le |s - s_0|\,\gamma\|h(x_s)\| + s_0\gamma\|h(x_s) - h(x_{s_0})\| + \|(I - s_0\mu F)V_{s_0}(x_s) - (I - s_0\mu F)V_{s_0}(x_{s_0})\| \\
&\quad + \|(I - s\mu F)V_s(x_s) - (I - s\mu F)V_{s_0}(x_s)\| + \|(I - s\mu F)V_{s_0}(x_s) - (I - s_0\mu F)V_{s_0}(x_s)\| \\
&\le |s - s_0|\,\gamma\|h(x_s)\| + s_0\gamma\rho\|x_s - x_{s_0}\| + (1 - s_0\tau)\|x_s - x_{s_0}\| \\
&\quad + |\lambda_s - \lambda_{s_0}|\,\|\nabla f(x_s)\| + |s - s_0|\,\mu\|FV_{s_0}(x_s)\|, \tag{3.10}
\end{aligned}$$
where we used $\|V_s(x_s) - V_{s_0}(x_s)\| \le |\lambda_s - \lambda_{s_0}|\,\|\nabla f(x_s)\|$, which follows from the nonexpansivity of $\mathrm{Proj}_C$. Therefore,
$$\|x_s - x_{s_0}\| \le \frac{\gamma\|h(x_s)\| + \mu\|FV_{s_0}(x_s)\|}{s_0(\tau - \gamma\rho)}|s - s_0| + \frac{\|\nabla f(x_s)\|}{s_0(\tau - \gamma\rho)}|\lambda_s - \lambda_{s_0}|. \tag{3.11}$$
Since $\lambda_s$ depends continuously on $s$, it follows that $x_s \to x_{s_0}$ as $s \to s_0$. This means that $x_s$ is continuous.

Our main result below shows that $\{x_s\}$ converges in norm to a minimizer of (1.1) which solves a certain variational inequality.

Theorem 3.2. Assume that $\{x_s\}$ is defined by (3.6). Then $x_s$ converges in norm, as $s \to 0$, to a minimizer $x^*$ of (1.1) which solves the variational inequality
$$\langle(\mu F - \gamma h)x^*, \tilde{x} - x^*\rangle \ge 0, \quad \forall\tilde{x} \in S. \tag{3.12}$$
Equivalently, we have $\mathrm{Proj}_S\big(I - (\mu F - \gamma h)\big)x^* = x^*$.

Proof. We first note the uniqueness of solutions of the variational inequality (3.12). By Lemma 2.3, $\mu F - \gamma h$ is strongly monotone, so the variational inequality (3.12) has at most one solution. Let $x^* \in S$ denote this unique solution.
To prove that $x_s \to x^*$ $(s \to 0)$, we write, for a given $\tilde{x} \in S$,
$$x_s - \tilde{x} = s\big(\gamma h(x_s) - \mu F\tilde{x}\big) + (I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - (I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(\tilde{x}). \tag{3.13}$$
It follows that
$$\begin{aligned}
\|x_s - \tilde{x}\|^2 &= s\langle\gamma h(x_s) - \mu F\tilde{x}, x_s - \tilde{x}\rangle + \langle(I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - (I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(\tilde{x}), x_s - \tilde{x}\rangle \\
&\le (1 - s\tau)\|x_s - \tilde{x}\|^2 + s\langle\gamma h(x_s) - \mu F\tilde{x}, x_s - \tilde{x}\rangle. \tag{3.14}
\end{aligned}$$
Hence,
$$\|x_s - \tilde{x}\|^2 \le \frac{1}{\tau}\langle\gamma h(x_s) - \mu F\tilde{x}, x_s - \tilde{x}\rangle \le \frac{1}{\tau}\Big(\gamma\rho\|x_s - \tilde{x}\|^2 + \langle\gamma h(\tilde{x}) - \mu F\tilde{x}, x_s - \tilde{x}\rangle\Big), \tag{3.15}$$
so that
$$\|x_s - \tilde{x}\|^2 \le \frac{1}{\tau - \gamma\rho}\langle\gamma h(\tilde{x}) - \mu F\tilde{x}, x_s - \tilde{x}\rangle. \tag{3.16}$$
Since $\{x_s\}$ is bounded as $s \to 0$, if $\{s_n\}$ is a sequence in $(0, 1)$ such that $s_n \to 0$ and $x_{s_n} \rightharpoonup \bar{x}$, then by (3.16), $x_{s_n} \to \bar{x}$. We may further assume that $\lambda_{s_n} \to \lambda \in [a, b] \subset (0, 2/L)$. Notice that $\mathrm{Proj}_C(I - \lambda\nabla f)$ is nonexpansive. It turns out that
$$\begin{aligned}
\|x_{s_n} - \mathrm{Proj}_C(I - \lambda\nabla f)(x_{s_n})\| &\le \|x_{s_n} - \mathrm{Proj}_C(I - \lambda_{s_n}\nabla f)(x_{s_n})\| + \|\mathrm{Proj}_C(I - \lambda_{s_n}\nabla f)(x_{s_n}) - \mathrm{Proj}_C(I - \lambda\nabla f)(x_{s_n})\| \\
&\le \|x_{s_n} - \mathrm{Proj}_C(I - \lambda_{s_n}\nabla f)(x_{s_n})\| + |\lambda - \lambda_{s_n}|\,\|\nabla f(x_{s_n})\|. \tag{3.17}
\end{aligned}$$
From the boundedness of $\{x_s\}$ and $\lim_{s\to 0}\|x_s - \mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\| = 0$ (Proposition 3.1(ii)), we conclude that
$$\lim_{n\to\infty}\|x_{s_n} - \mathrm{Proj}_C(I - \lambda\nabla f)(x_{s_n})\| = 0. \tag{3.18}$$
Since $x_{s_n} \rightharpoonup \bar{x}$, Lemma 2.2 yields
$$\bar{x} = \mathrm{Proj}_C(I - \lambda\nabla f)(\bar{x}). \tag{3.19}$$
This shows that $\bar{x} \in S$.
We next prove that $\bar{x}$ is a solution of the variational inequality (3.12). Since
$$x_s = s\gamma h(x_s) + (I - s\mu F)\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s), \tag{3.20}$$
we can derive that
$$(\mu F - \gamma h)(x_s) = -\frac{1}{s}\big(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\big)(x_s) + \mu\big(Fx_s - F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\big). \tag{3.21}$$
Therefore, for $\tilde{x} \in S$,
$$\begin{aligned}
\langle(\mu F - \gamma h)(x_s), x_s - \tilde{x}\rangle &= -\frac{1}{s}\big\langle\big(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\big)(x_s) - \big(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\big)(\tilde{x}), x_s - \tilde{x}\big\rangle \\
&\quad + \mu\big\langle Fx_s - F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s), x_s - \tilde{x}\big\rangle \\
&\le \mu\big\langle Fx_s - F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s), x_s - \tilde{x}\big\rangle. \tag{3.22}
\end{aligned}$$
Here we used the fact that, since $\mathrm{Proj}_C(I - \lambda_s\nabla f)$ is nonexpansive, $I - \mathrm{Proj}_C(I - \lambda_s\nabla f)$ is monotone; that is,
$$\big\langle\big(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\big)(x_s) - \big(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\big)(\tilde{x}), x_s - \tilde{x}\big\rangle \ge 0. \tag{3.23}$$
Taking the limit through $s = s_n \to 0$, and using Proposition 3.1(ii) together with $x_{s_n} \to \bar{x}$, ensures that $\bar{x}$ is a solution of (3.12); that is,
$$\langle(\mu F - \gamma h)\bar{x}, \bar{x} - \tilde{x}\rangle \le 0. \tag{3.24}$$
Hence $\bar{x} = x^*$ by uniqueness. Therefore $x_s \to x^*$ as $s \to 0$. Finally, the variational inequality (3.12) can be written as
$$\langle(I - \mu F + \gamma h)x^* - x^*, \tilde{x} - x^*\rangle \le 0, \quad \forall\tilde{x} \in S. \tag{3.25}$$
So, by Lemma 2.4, it is equivalent to the fixed-point equation
$$P_S(I - \mu F + \gamma h)x^* = x^*. \tag{3.26}$$

Taking $F = A$ and $\mu = 1$ in Theorem 3.2, we get the following corollary.

Corollary 3.3. The net $\{x_s\}$ converges in norm, as $s \to 0$, to a minimizer of (1.1) which solves the variational inequality
$$\langle(A - \gamma h)x^*, \tilde{x} - x^*\rangle \ge 0, \quad \forall\tilde{x} \in S. \tag{3.27}$$
Equivalently, we have $\mathrm{Proj}_S\big(I - (A - \gamma h)\big)x^* = x^*$.

Taking $F = I$, $\mu = 1$, and $\gamma = 1$ in Theorem 3.2, we get the following corollary.

Corollary 3.4. Let $z_s \in H$ be the unique fixed point of the contraction $z \mapsto sh(z) + (1 - s)\mathrm{Proj}_C(I - \lambda_s\nabla f)(z)$. Then $\{z_s\}$ converges in norm, as $s \to 0$, to the unique solution $x^*$ of the variational inequality
$$\langle(I - h)x^*, \tilde{x} - x^*\rangle \ge 0, \quad \forall\tilde{x} \in S. \tag{3.28}$$

Finally, we consider the following hybrid gradient-projection algorithm:
$$x_0 \in C \text{ chosen arbitrarily}, \qquad x_{n+1} = \theta_n\gamma h(x_n) + (I - \mu\theta_n F)\mathrm{Proj}_C\big(x_n - \lambda_n\nabla f(x_n)\big), \quad n \ge 0. \tag{3.29}$$
Assume that the sequence $\{\lambda_n\}_{n=0}^{\infty}$ satisfies condition (1.4) and, in addition, that the following conditions are satisfied for $\{\lambda_n\}_{n=0}^{\infty}$ and $\{\theta_n\}_{n=0}^{\infty} \subset [0, 1]$:
(i) $\theta_n \to 0$;
(ii) $\sum_{n=0}^{\infty}\theta_n = \infty$;
(iii) $\sum_{n=0}^{\infty}|\theta_{n+1} - \theta_n| < \infty$;
(iv) $\sum_{n=0}^{\infty}|\lambda_{n+1} - \lambda_n| < \infty$.
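For illustration, here is a minimal numerical sketch of algorithm (3.29). The toy objective and constraint set, and the particular choices of $F$, $h$, $\gamma$, $\mu$, $\theta_n$, and $\lambda_n$ (made so that (1.4) and conditions (i)-(iv) above hold) are assumptions for this example only, not prescriptions from the paper.

```python
import numpy as np

# Sketch of the hybrid gradient-projection algorithm (3.29) on a toy instance:
# f(x) = 0.5 * ||Ax - b||^2 over the Euclidean unit ball C, with the
# illustrative choices F = I (kappa = eta = 1), mu = 1, h(x) = 0.25 * x
# (contraction, rho = 1/4), gamma = 1, so that
# gamma * rho = 0.25 < tau = mu * (eta - mu * kappa**2 / 2) = 0.5.

def proj_unit_ball(x):
    """Nearest-point projection Proj_C onto C = {x : ||x|| <= 1}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def hybrid_gpa(A, b, x0, n_iter=5000):
    grad = lambda x: A.T @ (A @ x - b)          # gradient of f
    L = np.linalg.norm(A.T @ A, 2)              # Lipschitz constant of grad f
    F = lambda x: x                             # F = I: 1-Lipschitzian, 1-strongly monotone
    h = lambda x: 0.25 * x                      # contraction with coefficient rho = 1/4
    gamma, mu = 1.0, 1.0
    x = x0
    for n in range(n_iter):
        theta = 1.0 / (n + 2)                   # theta_n -> 0, sum = inf, (iii) holds
        lam = 1.0 / L                           # constant step: (1.4) and (iv) hold
        y = proj_unit_ball(x - lam * grad(x))   # Proj_C(x_n - lambda_n grad f(x_n))
        x = theta * gamma * h(x) + y - mu * theta * F(y)   # iteration (3.29)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
x_star = hybrid_gpa(A, b, np.zeros(5))
```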

Theorem 3.5. Assume that the minimization problem (1.1) is consistent and that the gradient $\nabla f$ satisfies the Lipschitz condition (1.2). Let $\{x_n\}$ be generated by algorithm (3.29), with the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfying the above conditions. Then the sequence $\{x_n\}$ converges in norm to the point $x^*$ obtained in Theorem 3.2.

Proof. (1) The sequence $\{x_n\}_{n=0}^{\infty}$ is bounded. Indeed, set
$$V_n := \mathrm{Proj}_C(I - \lambda_n\nabla f). \tag{3.30}$$
Then we have, for $x^* \in S$,
$$\begin{aligned}
\|x_{n+1} - x^*\| &= \|\theta_n\gamma h(x_n) + (I - \mu\theta_n F)V_n(x_n) - x^*\| \\
&= \|\theta_n\big(\gamma h(x_n) - \mu Fx^*\big) + (I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_n(x^*)\| \\
&\le (1 - \theta_n\tau)\|x_n - x^*\| + \theta_n\gamma\rho\|x_n - x^*\| + \theta_n\|\gamma h(x^*) - \mu Fx^*\| \\
&= \big(1 - \theta_n(\tau - \gamma\rho)\big)\|x_n - x^*\| + \theta_n\|\gamma h(x^*) - \mu Fx^*\| \\
&\le \max\Big\{\|x_n - x^*\|, \frac{\|\gamma h(x^*) - \mu Fx^*\|}{\tau - \gamma\rho}\Big\}, \quad n \ge 0. \tag{3.31}
\end{aligned}$$
By induction,
$$\|x_n - x^*\| \le \max\Big\{\|x_0 - x^*\|, \frac{\|\gamma h(x^*) - \mu Fx^*\|}{\tau - \gamma\rho}\Big\}. \tag{3.32}$$
In particular, $\{x_n\}_{n=0}^{\infty}$ is bounded.
(2) We prove that $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$. Let $M$ be a constant such that
$$M > \max\Big\{\sup_{n\ge 0}\gamma\|h(x_n)\|,\ \sup_{k,n\ge 0}\mu\|FV_k(x_n)\|,\ \sup_{n\ge 0}\|\nabla f(x_n)\|\Big\}. \tag{3.33}$$
We compute
$$\begin{aligned}
\|x_{n+1} - x_n\| &= \|\theta_n\gamma h(x_n) + (I - \mu\theta_n F)V_n(x_n) - \theta_{n-1}\gamma h(x_{n-1}) - (I - \mu\theta_{n-1}F)V_{n-1}(x_{n-1})\| \\
&\le \theta_n\gamma\|h(x_n) - h(x_{n-1})\| + \gamma|\theta_n - \theta_{n-1}|\,\|h(x_{n-1})\| \\
&\quad + \|(I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_n(x_{n-1})\| \\
&\quad + \|(I - \mu\theta_n F)V_n(x_{n-1}) - (I - \mu\theta_n F)V_{n-1}(x_{n-1})\| \\
&\quad + \|(I - \mu\theta_n F)V_{n-1}(x_{n-1}) - (I - \mu\theta_{n-1}F)V_{n-1}(x_{n-1})\| \\
&\le \theta_n\gamma\rho\|x_n - x_{n-1}\| + (1 - \theta_n\tau)\|x_n - x_{n-1}\| + \|V_n(x_{n-1}) - V_{n-1}(x_{n-1})\| + 2M|\theta_n - \theta_{n-1}| \\
&= \big(1 - \theta_n(\tau - \gamma\rho)\big)\|x_n - x_{n-1}\| + 2M|\theta_n - \theta_{n-1}| + \|V_n(x_{n-1}) - V_{n-1}(x_{n-1})\|, \tag{3.34}
\end{aligned}$$
and
$$\|V_n(x_{n-1}) - V_{n-1}(x_{n-1})\| = \|\mathrm{Proj}_C(I - \lambda_n\nabla f)(x_{n-1}) - \mathrm{Proj}_C(I - \lambda_{n-1}\nabla f)(x_{n-1})\| \le |\lambda_n - \lambda_{n-1}|\,\|\nabla f(x_{n-1})\| \le M|\lambda_n - \lambda_{n-1}|. \tag{3.35}$$
Combining (3.34) and (3.35), we obtain
$$\|x_{n+1} - x_n\| \le \big(1 - (\tau - \gamma\rho)\theta_n\big)\|x_n - x_{n-1}\| + 2M|\theta_n - \theta_{n-1}| + M|\lambda_n - \lambda_{n-1}|. \tag{3.36}$$
Applying Lemma 2.1 to (3.36), and using conditions (iii) and (iv), we conclude that $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$.
(3) We prove that $\omega_w(x_n) \subset S$. Let $\hat{x} \in \omega_w(x_n)$, and assume that $x_{n_j} \rightharpoonup \hat{x}$ for some subsequence $\{x_{n_j}\}_{j=1}^{\infty}$ of $\{x_n\}_{n=0}^{\infty}$. We may further assume that $\lambda_{n_j} \to \lambda \in (0, 2/L)$ due to condition (1.4). Set $V := \mathrm{Proj}_C(I - \lambda\nabla f)$. Notice that $V$ is nonexpansive and $\mathrm{Fix}\,V = S$. It turns out that
$$\begin{aligned}
\|x_{n_j} - Vx_{n_j}\| &\le \|x_{n_j} - x_{n_j+1}\| + \|x_{n_j+1} - V_{n_j}(x_{n_j})\| + \|V_{n_j}(x_{n_j}) - Vx_{n_j}\| \\
&\le \|x_{n_j} - x_{n_j+1}\| + \theta_{n_j}\|\gamma h(x_{n_j}) - \mu FV_{n_j}(x_{n_j})\| + |\lambda - \lambda_{n_j}|\,\|\nabla f(x_{n_j})\| \\
&\le \|x_{n_j} - x_{n_j+1}\| + 2M\theta_{n_j} + M|\lambda - \lambda_{n_j}| \to 0 \quad \text{as } j \to \infty. \tag{3.37}
\end{aligned}$$
So Lemma 2.2 guarantees that $\omega_w(x_n) \subset \mathrm{Fix}\,V = S$.
(4) We prove that $x_n \to x^*$ as $n \to \infty$, where $x^*$ is the unique solution of the VI (3.12). First observe that there is some $\hat{x} \in \omega_w(x_n) \subset S$ such that
$$\limsup_{n\to\infty}\langle(\gamma h - \mu F)x^*, x_n - x^*\rangle = \langle(\gamma h - \mu F)x^*, \hat{x} - x^*\rangle \le 0. \tag{3.38}$$
We now compute
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|\theta_n\gamma h(x_n) + (I - \mu\theta_n F)V_n(x_n) - x^*\|^2 \\
&= \|\theta_n\gamma\big(h(x_n) - h(x^*)\big) + (I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_n(x^*) + \theta_n\big(\gamma h(x^*) - \mu Fx^*\big)\|^2 \\
&\le \|\theta_n\gamma\big(h(x_n) - h(x^*)\big) + (I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_n(x^*)\|^2 + 2\theta_n\langle(\gamma h - \mu F)x^*, x_{n+1} - x^*\rangle \\
&\le \big(\theta_n\gamma\rho + (1 - \theta_n\tau)\big)^2\|x_n - x^*\|^2 + 2\theta_n\langle(\gamma h - \mu F)x^*, x_{n+1} - x^*\rangle \\
&\le \big(1 - \theta_n(\tau - \gamma\rho)\big)\|x_n - x^*\|^2 + 2\theta_n\langle(\gamma h - \mu F)x^*, x_{n+1} - x^*\rangle, \tag{3.39}
\end{aligned}$$
where we used the elementary inequality $\|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v\rangle$ with $v = \theta_n(\gamma h(x^*) - \mu Fx^*)$, and the fact that $0 \le 1 - \theta_n(\tau - \gamma\rho) \le 1$ for all $n$ large enough. Applying Lemma 2.1 to inequality (3.39), together with (3.38), we get $\|x_n - x^*\| \to 0$ as $n \to \infty$.

Corollary 3.6 (see [11]). Let $\{x_n\}$ be generated by the following algorithm:
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n)\mathrm{Proj}_C\big(x_n - \lambda_n\nabla f(x_n)\big), \quad n \ge 0. \tag{3.40}$$
Assume that the sequence $\{\lambda_n\}_{n=0}^{\infty}$ satisfies conditions (1.4) and (iv) and that $\{\theta_n\} \subset [0, 1]$ satisfies conditions (i)–(iii). Then $\{x_n\}$ converges in norm to the point $x^*$ obtained in Corollary 3.4.

Corollary 3.7. Let $\{x_n\}$ be generated by the following algorithm:
$$x_{n+1} = \theta_n\gamma h(x_n) + (I - \theta_n A)\mathrm{Proj}_C\big(x_n - \lambda_n\nabla f(x_n)\big), \quad n \ge 0. \tag{3.41}$$
Assume that the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfy the conditions of Theorem 3.5. Then $\{x_n\}$ converges in norm to the point $x^*$ obtained in Corollary 3.3.

Acknowledgments

Ming Tian is supported in part by the Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China, No. ZXH2012K001) and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).

References

1. E. S. Levitin and B. T. Poljak, “Minimization methods in the presence of constraints,” Žurnal Vyčislitel'noĭ Matematiki i Matematičeskoĭ Fiziki, vol. 6, pp. 787–823, 1966.
2. P. H. Calamai and J. J. Moré, “Projected gradient methods for linearly constrained problems,” Mathematical Programming, vol. 39, no. 1, pp. 93–116, 1987.
3. B. T. Polyak, Introduction to Optimization, Translations Series in Mathematics and Engineering, Optimization Software, New York, NY, USA, 1987.
4. M. Su and H.-K. Xu, “Remarks on the gradient-projection algorithm,” Journal of Nonlinear Analysis and Optimization, vol. 1, pp. 35–43, 2010.
5. Y. Yao and H.-K. Xu, “Iterative methods for finding minimum-norm fixed points of nonexpansive mappings with applications,” Optimization, vol. 60, no. 6, pp. 645–658, 2011.
6. Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
7. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
8. J. S. Jung, “Strong convergence of composite iterative methods for equilibrium problems and fixed point problems,” Applied Mathematics and Computation, vol. 213, no. 2, pp. 498–505, 2009.
9. G. López, V. Martín, and H.-K. Xu, “Iterative algorithms for the multiple-sets split feasibility problem,” in Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, Y. Censor, M. Jiang, and G. Wang, Eds., pp. 243–279, Medical Physics Publishing, Madison, Wis, USA, 2009.
10. P. Kumam, “A hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping,” Nonlinear Analysis: Hybrid Systems, vol. 2, no. 4, pp. 1245–1255, 2008.
11. H.-K. Xu, “Averaged mappings and the gradient-projection algorithm,” Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 360–378, 2011.
12. P. Kumam, “A new hybrid iterative method for solution of equilibrium problems and fixed point problems for an inverse strongly monotone operator and a nonexpansive mapping,” Journal of Applied Mathematics and Computing, vol. 29, no. 1-2, pp. 263–280, 2009.
13. H. Brézis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland, Amsterdam, The Netherlands, 1973.
14. P. L. Combettes, “Solving monotone inclusions via compositions of nonexpansive averaged operators,” Optimization, vol. 53, no. 5-6, pp. 475–504, 2004.
15. Y. Yao, Y.-C. Liou, and R. Chen, “A general iterative method for an infinite family of nonexpansive mappings,” Nonlinear Analysis, vol. 69, no. 5-6, pp. 1644–1654, 2008.
16. M. Tian, “A general iterative algorithm for nonexpansive mappings in Hilbert spaces,” Nonlinear Analysis, vol. 73, no. 3, pp. 689–694, 2010.
17. H.-K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
18. K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 1990.

Copyright © 2012 Ming Tian and Min-Min Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

