Journal of Applied Mathematics, Volume 2012, Article ID 782960, 14 pages
https://doi.org/10.1155/2012/782960

Research Article | Open Access
Special Issue: Numerical and Analytical Methods for Variational Inequalities and Related Problems with Applications

A Hybrid Gradient-Projection Algorithm for Averaged Mappings in Hilbert Spaces

Ming Tian and Min-Min Li

Academic Editor: Hong-Kun Xu
Received: 18 Apr 2012; Accepted: 21 Jun 2012; Published: 25 Jul 2012

Abstract

It is well known that the gradient-projection algorithm (GPA) is very useful in solving constrained convex minimization problems. In this paper, we combine a general iterative method with the gradient-projection algorithm to propose a hybrid gradient-projection algorithm, and we prove that the sequence generated by the hybrid algorithm converges in norm to a minimizer of the constrained convex minimization problem which also solves a certain variational inequality.

1. Introduction

Let $H$ be a real Hilbert space and $C$ a nonempty closed and convex subset of $H$. Consider the following constrained convex minimization problem:
$$\min_{x \in C} f(x), \qquad (1.1)$$
where $f : C \to \mathbb{R}$ is a real-valued convex and continuously Fréchet differentiable function. The gradient $\nabla f$ satisfies the following Lipschitz condition:
$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|, \quad \forall x, y \in C, \qquad (1.2)$$
where $L > 0$. Assume that the minimization problem (1.1) is consistent, and let $S$ denote its solution set.

It is well known that the gradient-projection algorithm is very useful in dealing with constrained convex minimization problems and has been studied extensively ([1–5] and the references therein). It has recently been applied to solve split feasibility problems [6–10]. Levitin and Polyak [1] considered the following gradient-projection algorithm:
$$x_{n+1} := \mathrm{Proj}_C\bigl(x_n - \lambda_n \nabla f(x_n)\bigr), \quad n \ge 0. \qquad (1.3)$$
Let $\{\lambda_n\}_{n=0}^{\infty}$ satisfy
$$0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \frac{2}{L}. \qquad (1.4)$$
It is proved that the sequence $\{x_n\}$ generated by (1.3) converges weakly to a minimizer of (1.1).
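For the reader's convenience, the following is a small numerical sketch of iteration (1.3). The objective $f(x)=\tfrac12\|Ax-b\|^2$, the box constraint set, the constant step size, and the function names are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def proj_box(x, lo, hi):
    # Projection onto the box C = [lo, hi]^n (an illustrative choice of C).
    return np.clip(x, lo, hi)

def gradient_projection(A, b, lo, hi, x0, n_iter=500):
    # GPA (1.3) for f(x) = 0.5*||Ax - b||^2, whose gradient A^T(Ax - b) is
    # L-Lipschitz with L = ||A^T A||_2; we use a constant step lambda in (0, 2/L).
    L = np.linalg.norm(A.T @ A, 2)
    lam = 1.0 / L
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = proj_box(x - lam * grad, lo, hi)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    print(gradient_projection(A, b, lo=0.0, hi=1.0, x0=np.zeros(5)))
```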

Xu proved that, under certain appropriate conditions on $\{\alpha_n\}$ and $\{\lambda_n\}$, the sequence $\{x_n\}$ defined by the following relaxed gradient-projection algorithm
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n\,\mathrm{Proj}_C\bigl(x_n - \lambda_n \nabla f(x_n)\bigr), \quad n \ge 0, \qquad (1.5)$$
converges weakly to a minimizer of (1.1) [11].

Since the Lipschitz continuity of the gradient of $f$ implies that it is in fact inverse strongly monotone (ism) [12, 13], its complement can be an averaged mapping. Recall that a mapping $T$ is nonexpansive if and only if it is Lipschitz with Lipschitz constant not more than one, that a mapping is averaged if and only if it can be expressed as a proper convex combination of the identity mapping and a nonexpansive mapping, and that a mapping $T$ is said to be $\nu$-inverse strongly monotone if and only if $\langle x - y, Tx - Ty\rangle \ge \nu\|Tx - Ty\|^2$ for all $x, y \in H$, where $\nu > 0$. Recall also that the composite of finitely many averaged mappings is averaged: if each of the mappings $\{T_i\}_{i=1}^N$ is averaged, then so is the composite $T_1\cdots T_N$ [14]. In particular, an averaged mapping is nonexpansive [15]. As a result, the GPA can be rewritten as the composite of a projection and an averaged mapping, which is again an averaged mapping.
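To spell this claim out, one can use the standard facts cited above; the following is a routine sketch with the usual constants (our own summary, not quoted verbatim from the paper).

```latex
% Since f is convex with L-Lipschitz gradient, the Baillon--Haddad theorem gives that
% \nabla f is (1/L)-ism:
\langle x - y,\ \nabla f(x) - \nabla f(y)\rangle \;\ge\; \frac{1}{L}\,\bigl\|\nabla f(x) - \nabla f(y)\bigr\|^{2},
\qquad \forall x, y \in C.
% Hence \lambda\nabla f is (1/(\lambda L))-ism, so its complement I - \lambda\nabla f is
% (\lambda L/2)-averaged whenever 0 < \lambda < 2/L.  Since \mathrm{Proj}_C is (1/2)-averaged
% and composites of averaged mappings are averaged, \mathrm{Proj}_C(I - \lambda\nabla f)
% is averaged, and in particular nonexpansive.
```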

Generally speaking, in infinite-dimensional Hilbert spaces the GPA has only weak convergence. Xu [11] provided a modification of the GPA so that strong convergence is guaranteed. He considered the following hybrid gradient-projection algorithm:
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n)\,\mathrm{Proj}_C\bigl(x_n - \lambda_n \nabla f(x_n)\bigr). \qquad (1.6)$$

It is proved that if the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfy appropriate conditions, the sequence $\{x_n\}$ generated by (1.6) converges in norm to a minimizer of (1.1) which solves the variational inequality
$$x^* \in S, \quad \langle (I - h)x^*, x - x^*\rangle \ge 0, \quad x \in S. \qquad (1.7)$$

On the other hand, Tian [16] introduced the following general iterative algorithm for solving a variational inequality:
$$x_{n+1} = \alpha_n\gamma f(x_n) + (I - \mu\alpha_n F)Tx_n, \quad n \ge 0, \qquad (1.8)$$
where $F$ is a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with $\kappa > 0$, $\eta > 0$, and $f$ is a contraction with coefficient $0 < \alpha < 1$. He then proved that if $\{\alpha_n\}$ satisfies appropriate conditions, the sequence $\{x_n\}$ generated by (1.8) converges strongly to the unique solution of the variational inequality
$$\langle (\mu F - \gamma f)\tilde{x}, \tilde{x} - z\rangle \le 0, \quad z \in \mathrm{Fix}(T). \qquad (1.9)$$

In this paper, motivated and inspired by the research in this direction, we combine the iterative method (1.8) with the gradient-projection algorithm (1.3) and consider the following hybrid gradient-projection algorithm:
$$x_{n+1} = \theta_n\gamma h(x_n) + (I - \mu\theta_n F)\,\mathrm{Proj}_C\bigl(x_n - \lambda_n \nabla f(x_n)\bigr), \quad n \ge 0. \qquad (1.10)$$

We will prove that if the sequence $\{\theta_n\}$ of parameters and the sequence $\{\lambda_n\}$ of parameters satisfy appropriate conditions, then the sequence $\{x_n\}$ generated by (1.10) converges in norm to a minimizer of (1.1) which solves the variational inequality
$$(\mathrm{VI}) \quad x^* \in S, \quad \langle (\mu F - \gamma h)x^*, x - x^*\rangle \ge 0, \quad \forall x \in S, \qquad (1.11)$$
where $S$ is the solution set of the minimization problem (1.1).
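A minimal numerical sketch of iteration (1.10) is given below. The concrete choices of $f$, $C$, $h$, $F$, and the parameter values and function names are our own illustrative assumptions (taking $F = I$, $\mu = 1$, so $\kappa = \eta = 1$, $\tau = 1/2$, and $\gamma < \tau/\rho$); they are not prescribed by the paper.

```python
import numpy as np

def hybrid_gpa(grad_f, proj_C, h, F, x0, gamma, mu, theta, lam, n_iter=1000):
    # Hybrid gradient-projection iteration (1.10):
    #   x_{n+1} = theta_n * gamma * h(x_n)
    #             + (I - mu * theta_n * F)( Proj_C(x_n - lam_n * grad_f(x_n)) ).
    x = x0.copy()
    for n in range(n_iter):
        y = proj_C(x - lam(n) * grad_f(x))          # nonexpansive GPA step
        x = theta(n) * gamma * h(x) + (y - mu * theta(n) * F(y))
    return x

if __name__ == "__main__":
    # Illustrative instance: f(x) = 0.5*||Ax - b||^2 on the unit box.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 4)); b = rng.standard_normal(30)
    L = np.linalg.norm(A.T @ A, 2)
    x_star = hybrid_gpa(
        grad_f=lambda x: A.T @ (A @ x - b),
        proj_C=lambda x: np.clip(x, 0.0, 1.0),
        h=lambda x: 0.5 * x,                        # contraction with rho = 0.5
        F=lambda x: x,                              # identity: 1-Lipschitz, 1-strongly monotone
        x0=np.zeros(4), gamma=0.9, mu=1.0,          # 0 < gamma < tau/rho = 1
        theta=lambda n: 1.0 / (n + 2),              # theta_n -> 0, sum theta_n = infinity
        lam=lambda n: 1.0 / L,                      # constant lambda_n in (0, 2/L)
    )
    print(x_star)
```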

2. Preliminaries

This section collects some lemmas which will be used in the proofs for the main results in the next section. Some of them are known; others are not hard to derive.

Throughout this paper, we write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$, and $x_n \to x$ to indicate that $\{x_n\}$ converges strongly to $x$. The set $\omega_w(x_n) := \{x : \exists\, x_{n_j} \rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}_{n=1}^{\infty}$.

Lemma 2.1 (see [17]). Assume that $\{a_n\}_{n=0}^{\infty}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n\delta_n + \beta_n, \quad n \ge 0, \qquad (2.1)$$
where $\{\gamma_n\}_{n=0}^{\infty}$ and $\{\beta_n\}_{n=0}^{\infty}$ are sequences in $[0,1]$ and $\{\delta_n\}_{n=0}^{\infty}$ is a sequence in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty}\gamma_n = \infty$;
(ii) either $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}\gamma_n|\delta_n| < \infty$;
(iii) $\sum_{n=0}^{\infty}\beta_n < \infty$.
Then $\lim_{n\to\infty}a_n = 0$.

Lemma 2.2 (see [18]). Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}\,T \ne \emptyset$. If $\{x_n\}_{n=1}^{\infty}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}_{n=1}^{\infty}$ converges strongly to $y$, then $(I - T)x = y$.

Lemma 2.3. Let $H$ be a Hilbert space and $C$ a nonempty closed and convex subset of $H$. Let $h : C \to C$ be a contraction with coefficient $0 < \rho < 1$, and let $F : C \to C$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator with $\kappa, \eta > 0$. Then, for $0 < \gamma < \mu\eta/\rho$,
$$\langle x - y, (\mu F - \gamma h)x - (\mu F - \gamma h)y\rangle \ge (\mu\eta - \gamma\rho)\|x - y\|^2, \quad \forall x, y \in C. \qquad (2.2)$$
That is, $\mu F - \gamma h$ is strongly monotone with coefficient $\mu\eta - \gamma\rho$.
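Lemma 2.3 is among those "not hard to derive"; for completeness, a one-line sketch (our own, not reproduced from the paper):

```latex
% For x, y \in C:
\langle x - y,(\mu F - \gamma h)x - (\mu F - \gamma h)y\rangle
  = \mu\langle x - y, Fx - Fy\rangle - \gamma\langle x - y, h(x) - h(y)\rangle
  \ge \mu\eta\|x - y\|^{2} - \gamma\rho\|x - y\|^{2}
  = (\mu\eta - \gamma\rho)\|x - y\|^{2},
% using the \eta-strong monotonicity of F and, for the second term, the
% Cauchy--Schwarz inequality together with \|h(x) - h(y)\| \le \rho\|x - y\|.
```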

Lemma 2.4. Let $C$ be a closed convex subset of a real Hilbert space $H$, and let $x \in H$ and $y \in C$ be given. Then $y = P_Cx$ if and only if the following inequality holds:
$$\langle x - y, y - z\rangle \ge 0, \quad \forall z \in C. \qquad (2.3)$$

3. Main Results

Let $H$ be a real Hilbert space, and let $C$ be a nonempty closed and convex subset of $H$ such that $C \pm C \subset C$. Assume that the minimization problem (1.1) is consistent, and let $S$ denote its solution set. Assume that the gradient $\nabla f$ satisfies the Lipschitz condition (1.2). Since $S$ is a closed convex subset, the nearest point projection from $H$ onto $S$ is well defined. Recall also that a contraction on $C$ is a self-mapping $h$ of $C$ such that $\|h(x) - h(y)\| \le \rho\|x - y\|$ for all $x, y \in C$, where $\rho \in [0,1)$ is a constant. Let $F$ be a $\kappa$-Lipschitzian and $\eta$-strongly monotone operator on $C$ with $\kappa, \eta > 0$. Denote by $\Pi$ the collection of all contractions on $C$, namely,
$$\Pi = \{h : h \text{ is a contraction on } C\}. \qquad (3.1)$$
Now let $h \in \Pi$ with coefficient $0 < \rho < 1$ and $s \in (0,1)$ be given, and let $0 < \mu < 2\eta/\kappa^2$ and $0 < \gamma < \mu(\eta - \mu\kappa^2/2)/\rho = \tau/\rho$. Assume that $\lambda_s$ is continuous with respect to $s$ and, in addition, $\lambda_s \in [a, b] \subset (0, 2/L)$. Consider a mapping $X_s$ on $C$ defined by
$$X_s(x) = s\gamma h(x) + (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x), \quad x \in C. \qquad (3.2)$$
It is easy to see that $X_s$ is a contraction. Set $V_s := \mathrm{Proj}_C(I - \lambda_s\nabla f)$; it is obvious that $V_s$ is a nonexpansive mapping. We can rewrite $X_s(x)$ as
$$X_s(x) = s\gamma h(x) + (I - s\mu F)V_s(x). \qquad (3.3)$$
First observe that, for $s \in (0,1)$,
$$\begin{aligned}
\|(I - s\mu F)V_s(x) - (I - s\mu F)V_s(y)\|^2
&= \|V_s(x) - V_s(y) - s\mu\bigl(FV_s(x) - FV_s(y)\bigr)\|^2 \\
&= \|V_s(x) - V_s(y)\|^2 - 2s\mu\langle V_s(x) - V_s(y), FV_s(x) - FV_s(y)\rangle + s^2\mu^2\|FV_s(x) - FV_s(y)\|^2 \\
&\le \|x - y\|^2 - 2s\mu\eta\|V_s(x) - V_s(y)\|^2 + s^2\mu^2\kappa^2\|V_s(x) - V_s(y)\|^2 \\
&\le \bigl(1 - s\mu(2\eta - s\mu\kappa^2)\bigr)\|x - y\|^2 \\
&\le \left(1 - \frac{s\mu(2\eta - s\mu\kappa^2)}{2}\right)^2\|x - y\|^2 \\
&\le \left(1 - s\mu\left(\eta - \frac{\mu\kappa^2}{2}\right)\right)^2\|x - y\|^2 = (1 - s\tau)^2\|x - y\|^2. \qquad (3.4)
\end{aligned}$$
Indeed, we have
$$\begin{aligned}
\|X_s(x) - X_s(y)\| &= \|s\gamma h(x) + (I - s\mu F)V_s(x) - s\gamma h(y) - (I - s\mu F)V_s(y)\| \\
&\le s\gamma\|h(x) - h(y)\| + \|(I - s\mu F)V_s(x) - (I - s\mu F)V_s(y)\| \\
&\le s\gamma\rho\|x - y\| + (1 - s\tau)\|x - y\| = \bigl(1 - s(\tau - \gamma\rho)\bigr)\|x - y\|. \qquad (3.5)
\end{aligned}$$
Hence, $X_s$ has a unique fixed point, denoted by $x_s$, which uniquely solves the fixed-point equation
$$x_s = s\gamma h(x_s) + (I - s\mu F)V_s(x_s). \qquad (3.6)$$
The next proposition summarizes the properties of $\{x_s\}$.

Proposition 3.1. Let $x_s$ be defined by (3.6).
(i) $\{x_s\}$ is bounded for $s \in (0, 1/\tau)$.
(ii) $\lim_{s\to 0}\|x_s - \mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\| = 0$.
(iii) $x_s$ defines a continuous curve from $(0, 1/\tau)$ into $H$.

Proof. (i) Take $\bar{x} \in S$; then we have
$$\begin{aligned}
\|x_s - \bar{x}\| &= \|s\gamma h(x_s) + (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - \bar{x}\| \\
&= \|(I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(\bar{x}) + s\bigl(\gamma h(x_s) - \mu F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(\bar{x})\bigr)\| \\
&\le (1 - s\tau)\|x_s - \bar{x}\| + s\|\gamma h(x_s) - \mu F(\bar{x})\| \\
&\le (1 - s\tau)\|x_s - \bar{x}\| + s\gamma\rho\|x_s - \bar{x}\| + s\|\gamma h(\bar{x}) - \mu F(\bar{x})\|. \qquad (3.7)
\end{aligned}$$
It follows that
$$\|x_s - \bar{x}\| \le \frac{\|\gamma h(\bar{x}) - \mu F(\bar{x})\|}{\tau - \gamma\rho}. \qquad (3.8)$$
Hence, $\{x_s\}$ is bounded.
(ii) By the definition of $\{x_s\}$, we have
$$\begin{aligned}
\|x_s - \mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\|
&= \|s\gamma h(x_s) + (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - \mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\| \\
&= s\|\gamma h(x_s) - \mu F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\| \longrightarrow 0, \qquad (3.9)
\end{aligned}$$
since $\{x_s\}$ is bounded, and so are $\{h(x_s)\}$ and $\{F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\}$.
(iii) Take $s, s_0 \in (0, 1/\tau)$. We have
$$\begin{aligned}
\|x_s - x_{s_0}\| &= \|s\gamma h(x_s) + (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - s_0\gamma h(x_{s_0}) - (I - s_0\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_{s_0})\| \\
&\le \|(s - s_0)\gamma h(x_s) + s_0\gamma\bigl(h(x_s) - h(x_{s_0})\bigr)\| \\
&\qquad + \|(I - s_0\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s) - (I - s_0\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_{s_0})\| \\
&\qquad + \|(I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - (I - s_0\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s)\| \\
&\le \|(s - s_0)\gamma h(x_s) + s_0\gamma\bigl(h(x_s) - h(x_{s_0})\bigr)\| \\
&\qquad + \|(I - s_0\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s) - (I - s_0\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_{s_0})\| \\
&\qquad + \|(I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s)\| \\
&\qquad + \|(I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s) - (I - s_0\mu F)\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s)\| \\
&\le |s - s_0|\gamma\|h(x_s)\| + s_0\gamma\rho\|x_s - x_{s_0}\| + (1 - s_0\tau)\|x_s - x_{s_0}\| + |\lambda_s - \lambda_{s_0}|\,\|\nabla f(x_s)\| \\
&\qquad + \|s\mu F\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s) - s_0\mu F\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s)\| \\
&= \bigl(\gamma\|h(x_s)\| + \mu\|F\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s)\|\bigr)|s - s_0| + s_0\gamma\rho\|x_s - x_{s_0}\| \\
&\qquad + (1 - s_0\tau)\|x_s - x_{s_0}\| + |\lambda_s - \lambda_{s_0}|\,\|\nabla f(x_s)\|. \qquad (3.10)
\end{aligned}$$
Therefore,
$$\|x_s - x_{s_0}\| \le \frac{\gamma\|h(x_s)\| + \mu\|F\,\mathrm{Proj}_C(I - \lambda_{s_0}\nabla f)(x_s)\|}{s_0(\tau - \gamma\rho)}|s - s_0| + \frac{\|\nabla f(x_s)\|}{s_0(\tau - \gamma\rho)}|\lambda_s - \lambda_{s_0}|. \qquad (3.11)$$
Therefore, $x_s \to x_{s_0}$ as $s \to s_0$. This means that $x_s$ is continuous.

Our main result below shows that $\{x_s\}$ converges in norm to a minimizer of (1.1) which solves a certain variational inequality.

Theorem 3.2. Assume that $\{x_s\}$ is defined by (3.6). Then $x_s$ converges in norm as $s \to 0$ to a minimizer of (1.1) which solves the variational inequality
$$\langle (\mu F - \gamma h)x^*, \tilde{x} - x^*\rangle \ge 0, \quad \forall \tilde{x} \in S. \qquad (3.12)$$
Equivalently, we have $\mathrm{Proj}_S\bigl(I - (\mu F - \gamma h)\bigr)x^* = x^*$.

Proof. We first show the uniqueness of solutions of the variational inequality (3.12). By Lemma 2.3, $\mu F - \gamma h$ is strongly monotone, so the variational inequality (3.12) has at most one solution. Let $x^* \in S$ denote the unique solution of (3.12).
To prove that $x_s \to x^*$ as $s \to 0$, we write, for a given $\tilde{x} \in S$,
$$\begin{aligned}
x_s - \tilde{x} &= s\gamma h(x_s) + (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - \tilde{x} \\
&= s\bigl(\gamma h(x_s) - \mu F\tilde{x}\bigr) + (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(\tilde{x}). \qquad (3.13)
\end{aligned}$$
It follows that
$$\begin{aligned}
\|x_s - \tilde{x}\|^2 &= s\bigl\langle \gamma h(x_s) - \mu F\tilde{x},\, x_s - \tilde{x}\bigr\rangle + \bigl\langle (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s) - (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(\tilde{x}),\, x_s - \tilde{x}\bigr\rangle \\
&\le (1 - s\tau)\|x_s - \tilde{x}\|^2 + s\bigl\langle \gamma h(x_s) - \mu F\tilde{x},\, x_s - \tilde{x}\bigr\rangle. \qquad (3.14)
\end{aligned}$$
Hence,
$$\|x_s - \tilde{x}\|^2 \le \frac{1}{\tau}\bigl\langle \gamma h(x_s) - \mu F\tilde{x},\, x_s - \tilde{x}\bigr\rangle \le \frac{1}{\tau}\Bigl\{\gamma\rho\|x_s - \tilde{x}\|^2 + \bigl\langle \gamma h(\tilde{x}) - \mu F\tilde{x},\, x_s - \tilde{x}\bigr\rangle\Bigr\}. \qquad (3.15)$$
It follows that
$$\|x_s - \tilde{x}\|^2 \le \frac{1}{\tau - \gamma\rho}\bigl\langle \gamma h(\tilde{x}) - \mu F\tilde{x},\, x_s - \tilde{x}\bigr\rangle. \qquad (3.16)$$
Since $\{x_s\}$ is bounded as $s \to 0$, we see that if $\{s_n\}$ is a sequence in $(0,1)$ such that $s_n \to 0$ and $x_{s_n} \rightharpoonup \bar{x}$, then by (3.16), $x_{s_n} \to \bar{x}$. We may further assume that $\lambda_{s_n} \to \lambda \in [0, 2/L]$ due to condition (1.4). Notice that $\mathrm{Proj}_C(I - \lambda\nabla f)$ is nonexpansive. It turns out that
$$\begin{aligned}
\|x_{s_n} - \mathrm{Proj}_C(I - \lambda\nabla f)x_{s_n}\|
&\le \|x_{s_n} - \mathrm{Proj}_C(I - \lambda_{s_n}\nabla f)x_{s_n}\| + \|\mathrm{Proj}_C(I - \lambda_{s_n}\nabla f)x_{s_n} - \mathrm{Proj}_C(I - \lambda\nabla f)x_{s_n}\| \\
&\le \|x_{s_n} - \mathrm{Proj}_C(I - \lambda_{s_n}\nabla f)x_{s_n}\| + \|(\lambda - \lambda_{s_n})\nabla f(x_{s_n})\| \\
&= \|x_{s_n} - \mathrm{Proj}_C(I - \lambda_{s_n}\nabla f)x_{s_n}\| + |\lambda - \lambda_{s_n}|\,\|\nabla f(x_{s_n})\|. \qquad (3.17)
\end{aligned}$$
From the boundedness of $\{x_s\}$ and $\lim_{s\to 0}\|\mathrm{Proj}_C(I - \lambda_s\nabla f)x_s - x_s\| = 0$, we conclude that
$$\lim_{n\to\infty}\|x_{s_n} - \mathrm{Proj}_C(I - \lambda\nabla f)x_{s_n}\| = 0. \qquad (3.18)$$
Since $x_{s_n} \rightharpoonup \bar{x}$, by Lemma 2.2 we obtain
$$\bar{x} = \mathrm{Proj}_C(I - \lambda\nabla f)\bar{x}. \qquad (3.19)$$
This shows that $\bar{x} \in S$.
We next prove that $\bar{x}$ is a solution of the variational inequality (3.12). Since
$$x_s = s\gamma h(x_s) + (I - s\mu F)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s), \qquad (3.20)$$
we can derive that
$$(\mu F - \gamma h)(x_s) = -\frac{1}{s}\bigl(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\bigr)(x_s) + \mu\bigl(F(x_s) - F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s)\bigr). \qquad (3.21)$$
Therefore, for $\tilde{x} \in S$,
$$\begin{aligned}
\bigl\langle (\mu F - \gamma h)(x_s),\, x_s - \tilde{x}\bigr\rangle
&= -\frac{1}{s}\bigl\langle \bigl(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\bigr)(x_s) - \bigl(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\bigr)(\tilde{x}),\, x_s - \tilde{x}\bigr\rangle \\
&\qquad + \mu\bigl\langle F(x_s) - F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s),\, x_s - \tilde{x}\bigr\rangle \\
&\le \mu\bigl\langle F(x_s) - F\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(x_s),\, x_s - \tilde{x}\bigr\rangle. \qquad (3.22)
\end{aligned}$$
Since $\mathrm{Proj}_C(I - \lambda_s\nabla f)$ is nonexpansive, the operator $I - \mathrm{Proj}_C(I - \lambda_s\nabla f)$ is monotone, that is,
$$\bigl\langle \bigl(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\bigr)(x_s) - \bigl(I - \mathrm{Proj}_C(I - \lambda_s\nabla f)\bigr)(\tilde{x}),\, x_s - \tilde{x}\bigr\rangle \ge 0. \qquad (3.23)$$
Taking the limit through $s = s_n \to 0$ ensures that $\bar{x}$ is a solution of (3.12); that is,
$$\bigl\langle (\mu F - \gamma h)(\bar{x}),\, \bar{x} - \tilde{x}\bigr\rangle \le 0. \qquad (3.24)$$
Hence $\bar{x} = x^*$ by uniqueness. Therefore, $x_s \to x^*$ as $s \to 0$. The variational inequality (3.12) can be written as
$$\bigl\langle (I - \mu F + \gamma h)x^* - x^*,\, \tilde{x} - x^*\bigr\rangle \le 0, \quad \forall \tilde{x} \in S. \qquad (3.25)$$
So, by Lemma 2.4, it is equivalent to the fixed-point equation
$$P_S(I - \mu F + \gamma h)x^* = x^*. \qquad (3.26)$$

Taking $F = A$ and $\mu = 1$ in Theorem 3.2, we get the following corollary.

Corollary 3.3. The net $\{x_s\}$ converges in norm as $s \to 0$ to a minimizer of (1.1) which solves the variational inequality
$$\langle (A - \gamma h)x^*, \tilde{x} - x^*\rangle \ge 0, \quad \forall \tilde{x} \in S. \qquad (3.27)$$
Equivalently, we have $\mathrm{Proj}_S\bigl(I - (A - \gamma h)\bigr)x^* = x^*$.

Taking $F = I$, $\mu = 1$, $\gamma = 1$ in Theorem 3.2, we get the following corollary.

Corollary 3.4. Let $z_s \in H$ be the unique fixed point of the contraction $z \mapsto sh(z) + (1 - s)\,\mathrm{Proj}_C(I - \lambda_s\nabla f)(z)$. Then $\{z_s\}$ converges in norm as $s \to 0$ to the unique solution of the variational inequality
$$\langle (I - h)x^*, \tilde{x} - x^*\rangle \ge 0, \quad \forall \tilde{x} \in S. \qquad (3.28)$$

Finally, we consider the following hybrid gradient-projection algorithm:
$$\begin{cases} x_0 \in C \ \text{arbitrarily},\\ x_{n+1} = \theta_n\gamma h(x_n) + (I - \mu\theta_n F)\,\mathrm{Proj}_C\bigl(x_n - \lambda_n\nabla f(x_n)\bigr), & \forall n \ge 0. \end{cases} \qquad (3.29)$$
Assume that the sequence $\{\lambda_n\}_{n=0}^{\infty}$ satisfies condition (1.4) and, in addition, that the following conditions are satisfied for $\{\lambda_n\}_{n=0}^{\infty}$ and $\{\theta_n\}_{n=0}^{\infty} \subset [0,1]$:
(i) $\theta_n \to 0$;
(ii) $\sum_{n=0}^{\infty}\theta_n = \infty$;
(iii) $\sum_{n=0}^{\infty}|\theta_{n+1} - \theta_n| < \infty$;
(iv) $\sum_{n=0}^{\infty}|\lambda_{n+1} - \lambda_n| < \infty$.
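For concreteness, one admissible choice of parameter sequences satisfying (1.4) and (i)–(iv) is sketched below; it is our own illustrative choice, not prescribed by the paper, and can be plugged into the `hybrid_gpa` sketch given after (1.10).

```python
# L denotes the Lipschitz constant of grad f from (1.2).
theta = lambda n: 1.0 / (n + 1)                   # theta_n -> 0, sum theta_n = inf, sum |theta_{n+1}-theta_n| < inf
lam = lambda n, L=1.0: (1.0 / L) * (1.0 + 2.0 ** (-(n + 1)))   # lam_n in (0, 2/L), sum |lam_{n+1}-lam_n| < inf
```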

Theorem 3.5. Assume that the minimization problem (1.1) is consistent and the gradient $\nabla f$ satisfies the Lipschitz condition (1.2). Let $\{x_n\}$ be generated by algorithm (3.29) with the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfying the above conditions. Then the sequence $\{x_n\}$ converges in norm to the point $x^*$ obtained in Theorem 3.2.

Proof. (1) The sequence $\{x_n\}_{n=0}^{\infty}$ is bounded. Set
$$V_n := \mathrm{Proj}_C(I - \lambda_n\nabla f). \qquad (3.30)$$
Indeed, we have, for $\bar{x} \in S$,
$$\begin{aligned}
\|x_{n+1} - \bar{x}\| &= \|\theta_n\gamma h(x_n) + (I - \mu\theta_n F)V_nx_n - \bar{x}\| \\
&= \|\theta_n\bigl(\gamma h(x_n) - \mu F(\bar{x})\bigr) + (I - \mu\theta_n F)V_nx_n - (I - \mu\theta_n F)V_n\bar{x}\| \\
&\le (1 - \theta_n\tau)\|x_n - \bar{x}\| + \theta_n\gamma\rho\|x_n - \bar{x}\| + \theta_n\|\gamma h(\bar{x}) - \mu F(\bar{x})\| \\
&= \bigl(1 - \theta_n(\tau - \gamma\rho)\bigr)\|x_n - \bar{x}\| + \theta_n\|\gamma h(\bar{x}) - \mu F(\bar{x})\| \\
&\le \max\Bigl\{\|x_n - \bar{x}\|,\ \frac{1}{\tau - \gamma\rho}\|\gamma h(\bar{x}) - \mu F(\bar{x})\|\Bigr\}, \quad \forall n \ge 0. \qquad (3.31)
\end{aligned}$$
By induction,
$$\|x_n - \bar{x}\| \le \max\Bigl\{\|x_0 - \bar{x}\|,\ \frac{\|\gamma h(\bar{x}) - \mu F(\bar{x})\|}{\tau - \gamma\rho}\Bigr\}. \qquad (3.32)$$
In particular, $\{x_n\}_{n=0}^{\infty}$ is bounded.
(2) We prove that $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$. Let $M$ be a constant such that
$$M > \max\Bigl\{\sup_{n\ge 0}\gamma\|h(x_n)\|,\ \sup_{k,n\ge 0}\mu\|FV_kx_n\|,\ \sup_{n\ge 0}\|\nabla f(x_n)\|\Bigr\}. \qquad (3.33)$$
We compute
$$\begin{aligned}
\|x_{n+1} - x_n\| &= \|\theta_n\gamma h(x_n) + (I - \mu\theta_n F)V_nx_n - \theta_{n-1}\gamma h(x_{n-1}) - (I - \mu\theta_{n-1}F)V_{n-1}x_{n-1}\| \\
&= \|\theta_n\gamma\bigl(h(x_n) - h(x_{n-1})\bigr) + \gamma(\theta_n - \theta_{n-1})h(x_{n-1}) + (I - \mu\theta_n F)V_nx_n - (I - \mu\theta_n F)V_nx_{n-1} \\
&\qquad + (I - \mu\theta_n F)V_nx_{n-1} - (I - \mu\theta_n F)V_{n-1}x_{n-1} + (I - \mu\theta_n F)V_{n-1}x_{n-1} - (I - \mu\theta_{n-1}F)V_{n-1}x_{n-1}\| \\
&\le \theta_n\gamma\rho\|x_n - x_{n-1}\| + \gamma|\theta_n - \theta_{n-1}|\,\|h(x_{n-1})\| + (1 - \theta_n\tau)\|x_n - x_{n-1}\| \\
&\qquad + \|V_nx_{n-1} - V_{n-1}x_{n-1}\| + \mu|\theta_n - \theta_{n-1}|\,\|FV_{n-1}x_{n-1}\| \\
&\le \theta_n\gamma\rho\|x_n - x_{n-1}\| + M|\theta_n - \theta_{n-1}| + (1 - \theta_n\tau)\|x_n - x_{n-1}\| + \|V_nx_{n-1} - V_{n-1}x_{n-1}\| + M|\theta_n - \theta_{n-1}| \\
&= \bigl(1 - \theta_n(\tau - \gamma\rho)\bigr)\|x_n - x_{n-1}\| + 2M|\theta_n - \theta_{n-1}| + \|V_nx_{n-1} - V_{n-1}x_{n-1}\|, \qquad (3.34)
\end{aligned}$$
$$\begin{aligned}
\|V_nx_{n-1} - V_{n-1}x_{n-1}\| &= \|\mathrm{Proj}_C(I - \lambda_n\nabla f)x_{n-1} - \mathrm{Proj}_C(I - \lambda_{n-1}\nabla f)x_{n-1}\| \\
&\le \|(I - \lambda_n\nabla f)x_{n-1} - (I - \lambda_{n-1}\nabla f)x_{n-1}\| \\
&= |\lambda_n - \lambda_{n-1}|\,\|\nabla f(x_{n-1})\| \le M|\lambda_n - \lambda_{n-1}|. \qquad (3.35)
\end{aligned}$$
Combining (3.34) and (3.35), we obtain
$$\|x_{n+1} - x_n\| \le \bigl(1 - (\tau - \gamma\rho)\theta_n\bigr)\|x_n - x_{n-1}\| + 2M\bigl(|\theta_n - \theta_{n-1}| + |\lambda_n - \lambda_{n-1}|\bigr). \qquad (3.36)$$
Applying Lemma 2.1 to (3.36), we conclude that $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$.
(3) We prove that $\omega_w(x_n) \subset S$. Let $\hat{x} \in \omega_w(x_n)$, and assume that $x_{n_j} \rightharpoonup \hat{x}$ for some subsequence $\{x_{n_j}\}_{j=1}^{\infty}$ of $\{x_n\}_{n=0}^{\infty}$. We may further assume that $\lambda_{n_j} \to \lambda \in [0, 2/L]$ due to condition (1.4). Set $V := \mathrm{Proj}_C(I - \lambda\nabla f)$. Notice that $V$ is nonexpansive and $\mathrm{Fix}\,V = S$. It turns out that
$$\begin{aligned}
\|x_{n_j} - Vx_{n_j}\| &\le \|x_{n_j} - V_{n_j}x_{n_j}\| + \|V_{n_j}x_{n_j} - Vx_{n_j}\| \\
&\le \|x_{n_j} - x_{n_j+1}\| + \|x_{n_j+1} - V_{n_j}x_{n_j}\| + \|V_{n_j}x_{n_j} - Vx_{n_j}\| \\
&\le \|x_{n_j} - x_{n_j+1}\| + \theta_{n_j}\|\gamma h(x_{n_j}) - \mu FV_{n_j}x_{n_j}\| + \|\mathrm{Proj}_C(I - \lambda_{n_j}\nabla f)x_{n_j} - \mathrm{Proj}_C(I - \lambda\nabla f)x_{n_j}\| \\
&\le \|x_{n_j} - x_{n_j+1}\| + \theta_{n_j}\|\gamma h(x_{n_j}) - \mu FV_{n_j}x_{n_j}\| + |\lambda - \lambda_{n_j}|\,\|\nabla f(x_{n_j})\| \\
&\le \|x_{n_j} - x_{n_j+1}\| + 2M\bigl(\theta_{n_j} + |\lambda - \lambda_{n_j}|\bigr) \longrightarrow 0 \quad \text{as } j \to \infty. \qquad (3.37)
\end{aligned}$$
So Lemma 2.2 guarantees that $\omega_w(x_n) \subset \mathrm{Fix}\,V = S$.
(4) We prove that $x_n \to x^*$ as $n \to \infty$, where $x^*$ is the unique solution of the VI (3.12). First observe that there is some $\hat{x} \in \omega_w(x_n) \subset S$ such that
$$\limsup_{n\to\infty}\langle (\mu F - \gamma h)x^*, x_n - x^*\rangle = \langle (\mu F - \gamma h)x^*, \hat{x} - x^*\rangle \ge 0. \qquad (3.38)$$
We now compute
$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|\theta_n\gamma h(x_n) + (I - \mu\theta_n F)\,\mathrm{Proj}_C(I - \lambda_n\nabla f)(x_n) - x^*\|^2 \\
&= \|\theta_n\bigl(\gamma h(x_n) - \mu Fx^*\bigr) + (I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_nx^*\|^2 \\
&= \|\theta_n\gamma\bigl(h(x_n) - h(x^*)\bigr) + (I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_nx^* + \theta_n\bigl(\gamma h(x^*) - \mu Fx^*\bigr)\|^2 \\
&\le \|\theta_n\gamma\bigl(h(x_n) - h(x^*)\bigr) + (I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_nx^*\|^2 + 2\theta_n\bigl\langle (\gamma h - \mu F)x^*,\, x_{n+1} - x^*\bigr\rangle \\
&= \|\theta_n\gamma\bigl(h(x_n) - h(x^*)\bigr)\|^2 + \|(I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_nx^*\|^2 \\
&\qquad + 2\theta_n\gamma\bigl\langle h(x_n) - h(x^*),\, (I - \mu\theta_n F)V_n(x_n) - (I - \mu\theta_n F)V_nx^*\bigr\rangle + 2\theta_n\bigl\langle (\gamma h - \mu F)x^*,\, x_{n+1} - x^*\bigr\rangle \\
&\le \theta_n^2\gamma^2\rho^2\|x_n - x^*\|^2 + (1 - \theta_n\tau)^2\|x_n - x^*\|^2 + 2\theta_n\gamma\rho(1 - \theta_n\tau)\|x_n - x^*\|^2 + 2\theta_n\bigl\langle (\gamma h - \mu F)x^*,\, x_{n+1} - x^*\bigr\rangle \\
&= \bigl(\theta_n^2\gamma^2\rho^2 + (1 - \theta_n\tau)^2 + 2\theta_n\gamma\rho(1 - \theta_n\tau)\bigr)\|x_n - x^*\|^2 + 2\theta_n\bigl\langle (\gamma h - \mu F)x^*,\, x_{n+1} - x^*\bigr\rangle \\
&\le \bigl(\theta_n\gamma^2\rho^2 + 1 - 2\theta_n\tau + \theta_n\tau^2 + 2\theta_n\gamma\rho\bigr)\|x_n - x^*\|^2 + 2\theta_n\bigl\langle (\gamma h - \mu F)x^*,\, x_{n+1} - x^*\bigr\rangle \\
&= \bigl(1 - \theta_n(2\tau - \gamma^2\rho^2 - \tau^2 - 2\gamma\rho)\bigr)\|x_n - x^*\|^2 + 2\theta_n\bigl\langle (\gamma h - \mu F)x^*,\, x_{n+1} - x^*\bigr\rangle. \qquad (3.39)
\end{aligned}$$
Applying Lemma 2.1 to inequality (3.39), together with (3.38), we get $\|x_n - x^*\| \to 0$ as $n \to \infty$.

Corollary 3.6 (see [11]). Let $\{x_n\}$ be generated by the following algorithm:
$$x_{n+1} = \theta_nh(x_n) + (1 - \theta_n)\,\mathrm{Proj}_C\bigl(x_n - \lambda_n\nabla f(x_n)\bigr), \quad \forall n \ge 0. \qquad (3.40)$$
Assume that the sequence $\{\lambda_n\}_{n=0}^{\infty}$ satisfies conditions (1.4) and (iv) and that $\{\theta_n\} \subset [0,1]$ satisfies conditions (i)–(iii). Then $\{x_n\}$ converges in norm to the point $x^*$ obtained in Corollary 3.4.

Corollary 3.7. Let $\{x_n\}$ be generated by the following algorithm:
$$x_{n+1} = \theta_n\gamma h(x_n) + (I - \theta_nA)\,\mathrm{Proj}_C\bigl(x_n - \lambda_n\nabla f(x_n)\bigr), \quad \forall n \ge 0. \qquad (3.41)$$
Assume that the sequences $\{\theta_n\}$ and $\{\lambda_n\}$ satisfy the conditions stated in Theorem 3.5. Then $\{x_n\}$ converges in norm to the point $x^*$ obtained in Corollary 3.3.

Acknowledgments

Ming Tian is supported in part by the Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China, no. ZXH2012K001) and by the Science Research Foundation of Civil Aviation University of China (no. 2012KYM03).

References

1. E. S. Levitin and B. T. Poljak, "Minimization methods in the presence of constraints," Žurnal Vyčislitel'noĭ Matematiki i Matematičeskoĭ Fiziki, vol. 6, pp. 787–823, 1966.
2. P. H. Calamai and J. J. Moré, "Projected gradient methods for linearly constrained problems," Mathematical Programming, vol. 39, no. 1, pp. 93–116, 1987.
3. B. T. Polyak, Introduction to Optimization, Translations Series in Mathematics and Engineering, Optimization Software, New York, NY, USA, 1987.
4. M. Su and H.-K. Xu, "Remarks on the gradient-projection algorithm," Journal of Nonlinear Analysis and Optimization, vol. 1, pp. 35–43, 2010.
5. Y. Yao and H.-K. Xu, "Iterative methods for finding minimum-norm fixed points of nonexpansive mappings with applications," Optimization, vol. 60, no. 6, pp. 645–658, 2011.
6. Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
7. C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
8. J. S. Jung, "Strong convergence of composite iterative methods for equilibrium problems and fixed point problems," Applied Mathematics and Computation, vol. 213, no. 2, pp. 498–505, 2009.
9. G. Lopez, V. Martin, and H.-K. Xu, "Iterative algorithms for the multiple-sets split feasibility problem," in Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, Y. Censor, M. Jiang, and G. Wang, Eds., pp. 243–279, Medical Physics, Madison, Wis, USA, 2009.
10. P. Kumam, "A hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping," Nonlinear Analysis, vol. 2, no. 4, pp. 1245–1255, 2008.
11. H.-K. Xu, "Averaged mappings and the gradient-projection algorithm," Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 360–378, 2011.
12. P. Kumam, "A new hybrid iterative method for solution of equilibrium problems and fixed point problems for an inverse strongly monotone operator and a nonexpansive mapping," Journal of Applied Mathematics and Computing, vol. 29, no. 1-2, pp. 263–280, 2009.
13. H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland, Amsterdam, The Netherlands, 1973.
14. P. L. Combettes, "Solving monotone inclusions via compositions of nonexpansive averaged operators," Optimization, vol. 53, no. 5-6, pp. 475–504, 2004.
15. Y. Yao, Y.-C. Liou, and R. Chen, "A general iterative method for an infinite family of nonexpansive mappings," Nonlinear Analysis, vol. 69, no. 5-6, pp. 1644–1654, 2008.
16. M. Tian, "A general iterative algorithm for nonexpansive mappings in Hilbert spaces," Nonlinear Analysis, vol. 73, no. 3, pp. 689–694, 2010.
17. H.-K. Xu, "Iterative algorithms for nonlinear operators," Journal of the London Mathematical Society, Second Series, vol. 66, no. 1, pp. 240–256, 2002.
18. K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 1990.

Copyright © 2012 Ming Tian and Min-Min Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
