Abstract

We investigate the following regularized gradient-projection algorithm: $x_{n+1} = P_C\bigl(x_n - \gamma_n(\nabla f(x_n) + \alpha_n x_n)\bigr)$, $n \ge 0$. Under some different control conditions, we prove that this gradient-projection algorithm converges strongly to the minimum-norm solution of the minimization problem $\min_{x \in C} f(x)$.

1. Introduction

Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued convex function.

Consider the following constrained convex minimization problem:
$$\min_{x \in C} f(x). \tag{1.1}$$
Assume that (1.1) is consistent, that is, it has a solution, and we use $S$ to denote its solution set. If $f$ is Fréchet differentiable, then $x^* \in C$ solves (1.1) if and only if $x^*$ satisfies the following optimality condition:
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{1.2}$$
where $\nabla f$ denotes the gradient of $f$. Note that (1.2) can be rewritten as
$$\langle x^* - (x^* - \gamma \nabla f(x^*)), x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.3}$$
This shows that the minimization (1.1) is equivalent to the fixed point problem
$$x^* = P_C\bigl(x^* - \gamma \nabla f(x^*)\bigr), \tag{1.4}$$
where $\gamma > 0$ is any constant and $P_C$ is the nearest point projection from $H$ onto $C$. By using this relationship, the gradient-projection algorithm is usually applied to solve the minimization problem (1.1). This algorithm generates a sequence $\{x_n\}$ through the recursion
$$x_{n+1} = P_C\bigl(x_n - \gamma_n \nabla f(x_n)\bigr), \quad n \ge 0, \tag{1.5}$$
where the initial guess $x_0 \in C$ is chosen arbitrarily and $\{\gamma_n\}$ is a sequence of stepsizes which may be chosen in different ways. The gradient-projection algorithm (1.5) is a powerful tool for solving constrained convex optimization problems and has been studied extensively in the case of constant stepsizes $\gamma_n = \gamma$ for all $n$. The reader can refer to [1–9] and the references therein. It is known [3] that if $f$ has a Lipschitz continuous and strongly monotone gradient, then the sequence $\{x_n\}$ converges strongly to a minimizer of $f$ in $C$. If the gradient of $f$ is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only converge weakly when $H$ is infinite dimensional. In order to obtain strong convergence, Xu [10] studied the following regularized method:
$$x_{n+1} = P_C\bigl(x_n - \gamma_n(\nabla f(x_n) + \alpha_n x_n)\bigr), \quad n \ge 0, \tag{1.6}$$
where the sequences $\{\gamma_n\}$ and $\{\alpha_n\}$ satisfy the control conditions (1)–(4) specified in [10].

Xu [10] proved that the sequence $\{x_n\}$ generated by (1.6) converges strongly to a minimizer of (1.1).

Motivated by Xu’s work, in the present paper we further investigate the regularized gradient-projection method (1.6). Under some different control conditions, we prove that this gradient-projection algorithm converges strongly to the minimum-norm solution of the minimization problem (1.1).
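For a concrete picture of iteration (1.5), the following is a minimal numerical sketch on a toy constrained least-squares problem; the objective, the box constraint $C$, and the constant stepsize are illustrative assumptions and are not taken from this paper.

```python
import numpy as np

# Minimal sketch of the gradient-projection iteration (1.5) for the toy problem
#   minimize f(x) = (1/2)||A x - b||^2  over the box C = [0, 1]^d.
# A, b, the box C, and the constant stepsize are illustrative assumptions.

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((8, d))
b = rng.standard_normal(8)

grad_f = lambda x: A.T @ (A @ x - b)      # gradient of f
L = np.linalg.norm(A.T @ A, 2)            # Lipschitz constant of grad f
proj_C = lambda x: np.clip(x, 0.0, 1.0)   # metric projection P_C onto the box

x = np.zeros(d)                           # arbitrary initial guess x_0 in C
gamma = 1.0 / L                           # constant stepsize in (0, 2/L)
for n in range(2000):
    x = proj_C(x - gamma * grad_f(x))     # x_{n+1} = P_C(x_n - gamma * grad f(x_n))

print("approximate minimizer:", x)
```

In this finite-dimensional toy setting the plain iteration already converges; the regularization term $\alpha_n x_n$ in (1.6) is what yields strong convergence, and selection of the minimum-norm minimizer, in the general Hilbert space setting discussed below.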

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C.$$
We will use $\mathrm{Fix}(T)$ to denote the set of fixed points of $T$, that is, $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$. A mapping $T : C \to H$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if there exists a constant $\nu > 0$ such that
$$\langle x - y, Tx - Ty \rangle \ge \nu \|Tx - Ty\|^2, \quad \forall x, y \in C.$$
Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_C$, assigns, to each $x \in H$, the unique point $P_C x \in C$ with the property
$$\|x - P_C x\| = \inf\{\|x - y\| : y \in C\}.$$
It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties: (a) $\|P_C x - P_C y\| \le \|x - y\|$ for all $x, y \in H$; (b) $\langle x - P_C x, y - P_C x \rangle \le 0$ for every $x \in H$ and $y \in C$; (c) $\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2$ for all $x, y \in H$.
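As a quick illustration of $P_C$ and of properties (a) and (b), the following sketch takes $C$ to be the closed unit ball; this choice of $C$ and the random test points are assumptions made only for the illustration.

```python
import numpy as np

# Metric projection onto the closed unit ball (an illustrative choice of C),
# together with numerical checks of properties (a) and (b) at random points.

def proj_ball(x, radius=1.0):
    """Nearest-point projection onto the closed ball of the given radius."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

rng = np.random.default_rng(1)
x, y = rng.standard_normal(3), rng.standard_normal(3)
z = proj_ball(rng.standard_normal(3))   # an arbitrary point of C

# (a) nonexpansiveness: ||P_C x - P_C y|| <= ||x - y||
print(np.linalg.norm(proj_ball(x) - proj_ball(y)) <= np.linalg.norm(x - y) + 1e-12)

# (b) characterization: <x - P_C x, z - P_C x> <= 0 for every z in C
print(np.dot(x - proj_ball(x), z - proj_ball(x)) <= 1e-12)
```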

Next we adopt the following notation: (i) $x_n \to x$ means that $\{x_n\}$ converges strongly to $x$; (ii) $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$; (iii) $\omega_w(x_n) := \{x : \exists\, x_{n_j} \rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.

Lemma 2.1 (see [11]). Given $T : H \to H$ and letting $V := I - T$ be the complement of $T$, given also $\gamma > 0$: (a) $T$ is nonexpansive if and only if $V$ is $\frac{1}{2}$-ism; (b) if $V$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma V$ is $\frac{\nu}{\gamma}$-ism; (c) $T$ is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > \frac{1}{2}$.

Lemma 2.2 (see [12], (demiclosedness principle)). Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in \mathrm{Fix}(T)$.

Lemma 2.3 (see [13]). Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $E$, and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with
$$0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1.$$
Suppose that $x_{n+1} = (1 - \beta_n) z_n + \beta_n x_n$ for all $n \ge 0$ and
$$\limsup_{n \to \infty} \bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$
Then, $\lim_{n \to \infty} \|z_n - x_n\| = 0$.

Lemma 2.4 (see [14]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that (1) $\sum_{n=0}^{\infty} \gamma_n = \infty$; (2) $\limsup_{n \to \infty} \delta_n / \gamma_n \le 0$ or $\sum_{n=0}^{\infty} |\delta_n| < \infty$. Then, $\lim_{n \to \infty} a_n = 0$.
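The following is a small numerical illustration of Lemma 2.4 under the assumed choices $\gamma_n = 1/(n+2)$ and $\delta_n = \gamma_n/(n+2)$, which satisfy conditions (1) and (2); the recursion is run with equality, the worst case allowed by the lemma.

```python
# Numerical illustration of Lemma 2.4 with the assumed sequences
# gamma_n = 1/(n+2)        (so sum gamma_n = infinity) and
# delta_n = gamma_n/(n+2)  (so delta_n / gamma_n -> 0).

a = 1.0
for n in range(200000):
    gamma_n = 1.0 / (n + 2)
    delta_n = gamma_n / (n + 2)
    a = (1.0 - gamma_n) * a + delta_n   # equality in a_{n+1} <= (1 - gamma_n) a_n + delta_n
print(a)   # tends to 0 as the number of iterations grows
```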

3. Main Result

In this section, we will state and prove our main result.

Theorem 3.1. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $S \ne \emptyset$. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $\{x_n\}$ be a sequence generated by the following hybrid gradient projection algorithm:
$$x_{n+1} = P_C\bigl(x_n - \gamma_n(\nabla f(x_n) + \alpha_n x_n)\bigr), \quad n \ge 0, \tag{3.1}$$
where the sequences $\{\gamma_n\}$ and $\{\alpha_n\}$ satisfy the following conditions: (1) … and …; (2) … and …. Then, the sequence $\{x_n\}$ generated by (3.1) converges strongly to a minimizer of (1.1).

Proof. Note that the Lipschitz condition implies that the gradient $\nabla f$ is $\frac{1}{L}$-ism [10]. Then, we have
$$\|(I - \gamma \nabla f)x - (I - \gamma \nabla f)y\|^2 = \|x - y\|^2 - 2\gamma \langle \nabla f(x) - \nabla f(y), x - y \rangle + \gamma^2 \|\nabla f(x) - \nabla f(y)\|^2 \le \|x - y\|^2 + \gamma\Bigl(\gamma - \frac{2}{L}\Bigr)\|\nabla f(x) - \nabla f(y)\|^2.$$
If $0 < \gamma \le \frac{2}{L}$, then $\|(I - \gamma \nabla f)x - (I - \gamma \nabla f)y\| \le \|x - y\|$. It follows that $I - \gamma \nabla f$ is nonexpansive. Thus, $P_C(I - \gamma \nabla f)$ is nonexpansive for all $\gamma \in (0, 2/L]$.
Take any $\tilde{x} \in S$. Since $\tilde{x}$ solves the minimization problem (1.1) if and only if $\tilde{x}$ solves the fixed-point equation $\tilde{x} = P_C(\tilde{x} - \gamma \nabla f(\tilde{x}))$ for any fixed positive number $\gamma$, we have $\tilde{x} = P_C(\tilde{x} - \gamma_n \nabla f(\tilde{x}))$ for all $n$. From (3.1) and (3.4), we get
$$\|x_{n+1} - \tilde{x}\| = \bigl\|P_C\bigl(x_n - \gamma_n(\nabla f(x_n) + \alpha_n x_n)\bigr) - P_C\bigl(\tilde{x} - \gamma_n \nabla f(\tilde{x})\bigr)\bigr\| \le (1 - \gamma_n \alpha_n)\|x_n - \tilde{x}\| + \gamma_n \alpha_n \|\tilde{x}\|.$$
Thus, we deduce by induction that
$$\|x_n - \tilde{x}\| \le \max\{\|x_0 - \tilde{x}\|, \|\tilde{x}\|\}, \quad n \ge 0.$$
This indicates that the sequence $\{x_n\}$ is bounded.
Since the gradient $\nabla f$ is $\frac{1}{L}$-ism, $\nabla f + \alpha_n I$ is $\frac{1}{L + \alpha_n}$-ism. So, by Lemma 2.1, $I - \gamma_n(\nabla f + \alpha_n I)$ is $\frac{\gamma_n (L + \alpha_n)}{2}$-averaged; that is, $I - \gamma_n(\nabla f + \alpha_n I) = \bigl(1 - \tfrac{\gamma_n (L + \alpha_n)}{2}\bigr) I + \tfrac{\gamma_n (L + \alpha_n)}{2} T_n$ for some nonexpansive mapping $T_n$. Since $P_C$ is $\frac{1}{2}$-averaged, $P_C\bigl(I - \gamma_n(\nabla f + \alpha_n I)\bigr) = (1 - s_n) I + s_n V_n$ with $s_n = \frac{2 + \gamma_n (L + \alpha_n)}{4}$ for some nonexpansive mapping $V_n$. Then, we can rewrite $x_{n+1}$ as
$$x_{n+1} = (1 - \beta_n) V_n x_n + \beta_n x_n,$$
where
$$\beta_n = 1 - s_n = \frac{2 - \gamma_n (L + \alpha_n)}{4}.$$
It follows that
$$V_n x_n = \frac{x_{n+1} - \beta_n x_n}{1 - \beta_n}.$$
Now we choose a constant $M > 0$ such that $\sup_n \{\|x_n\| + \|\nabla f(x_n)\|\} \le M$. We have the following estimates: … Thus, we deduce
$$\limsup_{n \to \infty} \bigl(\|V_{n+1} x_{n+1} - V_n x_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$
Note that $\{x_n\}$ and $\{V_n x_n\}$ are bounded and $0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1$. Hence, by Lemma 2.3, we get $\lim_{n \to \infty} \|V_n x_n - x_n\| = 0$. It follows that
$$\lim_{n \to \infty} \|x_{n+1} - x_n\| = \lim_{n \to \infty} (1 - \beta_n) \|V_n x_n - x_n\| = 0.$$
Consequently,
$$\lim_{n \to \infty} \bigl\|x_n - P_C\bigl(x_n - \gamma_n \nabla f(x_n)\bigr)\bigr\| = 0.$$
Now we show that the weak limit set $\omega_w(x_n) \subset S$. Choose any $\hat{x} \in \omega_w(x_n)$. Since $\{x_n\}$ is bounded, there must exist a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j} \rightharpoonup \hat{x}$. At the same time, the real number sequence $\{\gamma_{n_j}\}$ is bounded. Thus, there exists a subsequence of $\{\gamma_{n_j}\}$ which converges to some $\gamma$. Without loss of generality, we may assume that $\gamma_{n_j} \to \gamma$. Note that $\alpha_{n_j} \to 0$ and $0 < \gamma_{n_j}(L + \alpha_{n_j}) < 2$. So, $\gamma_{n_j}(L + \alpha_{n_j}) \to \gamma L$; that is, $\gamma \in (0, 2/L]$. Next, we only need to show that $\hat{x} \in S$. First, from (3.15) we have that $\lim_{j \to \infty} \|x_{n_j} - P_C(x_{n_j} - \gamma_{n_j} \nabla f(x_{n_j}))\| = 0$. Then, we have
$$\bigl\|x_{n_j} - P_C\bigl(x_{n_j} - \gamma \nabla f(x_{n_j})\bigr)\bigr\| \le \bigl\|x_{n_j} - P_C\bigl(x_{n_j} - \gamma_{n_j} \nabla f(x_{n_j})\bigr)\bigr\| + |\gamma_{n_j} - \gamma| \, \|\nabla f(x_{n_j})\| \to 0.$$
Since $\gamma \in (0, 2/L]$, $P_C(I - \gamma \nabla f)$ is nonexpansive. It then follows from Lemma 2.2 (demiclosedness principle) that $\hat{x} \in \mathrm{Fix}\bigl(P_C(I - \gamma \nabla f)\bigr)$. Hence, $\hat{x} \in S$ because of $\mathrm{Fix}\bigl(P_C(I - \gamma \nabla f)\bigr) = S$. So, $\omega_w(x_n) \subset S$.
Finally, we prove that $x_n \to \tilde{x}$, where $\tilde{x} = P_S(0)$ is the minimum-norm solution of (1.1). First, we show that $\limsup_{n \to \infty} \langle \tilde{x}, \tilde{x} - x_n \rangle \le 0$. Observe that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ satisfying
$$\limsup_{n \to \infty} \langle \tilde{x}, \tilde{x} - x_n \rangle = \lim_{j \to \infty} \langle \tilde{x}, \tilde{x} - x_{n_j} \rangle.$$
Since $\{x_{n_j}\}$ is bounded, there exists a subsequence of $\{x_{n_j}\}$ such that it converges weakly to some point $\hat{x}$. Without loss of generality, we assume that $x_{n_j} \rightharpoonup \hat{x}$. Then, we obtain
$$\limsup_{n \to \infty} \langle \tilde{x}, \tilde{x} - x_n \rangle = \lim_{j \to \infty} \langle \tilde{x}, \tilde{x} - x_{n_j} \rangle = \langle \tilde{x} - 0, \tilde{x} - \hat{x} \rangle \le 0,$$
since $\hat{x} \in \omega_w(x_n) \subset S$ and $\tilde{x} = P_S(0)$. Since $0 < \gamma_n(L + \alpha_n) < 2$, $I - \gamma_n(\nabla f + \alpha_n I)$ is nonexpansive. So, $P_C\bigl(I - \gamma_n(\nabla f + \alpha_n I)\bigr)$ is nonexpansive. By using the property (b) of $P_C$, we have
$$\bigl\langle x_n - \gamma_n\bigl(\nabla f(x_n) + \alpha_n x_n\bigr) - x_{n+1},\; \tilde{x} - x_{n+1} \bigr\rangle \le 0.$$
It follows that
$$\|x_{n+1} - \tilde{x}\|^2 \le (1 - \gamma_n \alpha_n)\|x_n - \tilde{x}\|^2 + 2\gamma_n \alpha_n \langle \tilde{x}, \tilde{x} - x_{n+1} \rangle.$$
From Lemma 2.4, (3.18) and (3.20), we deduce that $x_n \to \tilde{x}$. This completes the proof.
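As a rough numerical check of the behavior described in Theorem 3.1, the sketch below runs iteration (3.1) on an underdetermined least-squares problem with $C = \mathbb{R}^d$ (so $P_C$ is the identity) and compares the limit with the minimum-norm solution computed by the pseudoinverse. The schedules for $\gamma_n$ and $\alpha_n$ are assumptions made only for illustration; they are not the control conditions of the theorem.

```python
import numpy as np

# Sketch of the regularized gradient-projection iteration (3.1) on an
# underdetermined least-squares problem, with C = R^d so that P_C = identity.
# The schedules gamma_n, alpha_n below are illustrative assumptions only.

rng = np.random.default_rng(2)
m, d = 3, 6                               # fewer equations than unknowns: many minimizers
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

grad_f = lambda x: A.T @ (A @ x - b)      # gradient of f(x) = (1/2)||Ax - b||^2
L = np.linalg.norm(A.T @ A, 2)            # Lipschitz constant of grad f

x = rng.standard_normal(d)                # arbitrary initial guess
for n in range(1, 200001):
    alpha_n = n ** -0.5                   # alpha_n -> 0 slowly (assumed schedule)
    gamma_n = 1.0 / (L + alpha_n)         # stepsize kept below 2/(L + alpha_n)
    x -= gamma_n * (grad_f(x) + alpha_n * x)   # (3.1) with P_C = identity

x_min_norm = np.linalg.pinv(A) @ b        # minimum-norm minimizer of f
print("distance to minimum-norm solution:", np.linalg.norm(x - x_min_norm))
```

With these schedules the iterate approaches the minimum-norm minimizer, in line with Remark 3.3 below, whereas in this quadratic example the unregularized iteration (1.5) would converge to the minimizer nearest the starting point.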

Remark 3.2. We obtain the strong convergence of the regularized gradient projection method (3.1) under control conditions different from those used in [10].

Remark 3.3. From the proof of Theorem 3.1, we observe that our algorithm (3.1) converges to a special solution of the minimization problem (1.1). As a matter of fact, this special solution is the minimum-norm solution of (1.1). Finding the minimum-norm solution of a practical problem is of interest because of its applications. A typical example is the least-squares solution to the constrained linear inverse problem; see, for example, [15]. For some related works on the minimum-norm solution and minimization problems, please see [16–22].
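A tiny instance of the typical example mentioned above, with arbitrary illustrative data: for a rank-deficient linear inverse problem, the minimum-norm least-squares solution is the one produced by the pseudoinverse, and numpy's lstsq returns the same point.

```python
import numpy as np

# Minimum-norm least-squares solution of a rank-deficient system A x ~ b.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1: the least-squares solutions form an affine set
b = np.array([1.0, 0.0])          # inconsistent right-hand side

x_pinv = np.linalg.pinv(A) @ b                   # minimum-norm least-squares solution
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]   # lstsq picks the same (minimum-norm) point
print(x_pinv, np.allclose(x_pinv, x_lstsq))
```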

Acknowledgments

Y. Yao was supported in part by the Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279, and NSFC 71161001-G0105. W. Jigang was supported in part by NSFC 61173032.