
Journal of Applied Mathematics

Volume 2012 (2012), Article ID 259813, 9 pages

http://dx.doi.org/10.1155/2012/259813

## A Regularized Gradient Projection Method for the Minimization Problem

^{1}Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
^{2}Department of Mathematics and the RINS, Gyeongsang National University, Jinju 660-701, Republic of Korea
^{3}School of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300387, China

Received 22 November 2011; Accepted 8 December 2011

Academic Editor: Yeong-Cheng Liou

Copyright © 2012 Yonghong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We investigate the following regularized gradient projection algorithm $x_{n+1}=P_{C}\big(x_{n}-\gamma_{n}(\nabla f(x_{n})+\alpha_{n}x_{n})\big)$, $n\ge0$. Under some different control conditions, we prove that this gradient projection algorithm converges strongly to the minimum-norm solution of the minimization problem $\min_{x\in C}f(x)$.

#### 1. Introduction

Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$. Let $f:C\to\mathbb{R}$ be a real-valued convex function.

Consider the following constrained convex minimization problem:
$$\min_{x\in C}f(x).\tag{1.1}$$
Assume that (1.1) is consistent, that is, it has a solution, and we use $S$ to denote its solution set. If $f$ is Fréchet differentiable, then $x^{*}\in C$ solves (1.1) if and only if $x^{*}$ satisfies the following optimality condition:
$$\langle\nabla f(x^{*}),x-x^{*}\rangle\ge0,\quad x\in C,\tag{1.2}$$
where $\nabla f$ denotes the gradient of $f$. Note that (1.2) can be rewritten as
$$\langle x^{*}-\big(x^{*}-\gamma\nabla f(x^{*})\big),x-x^{*}\rangle\ge0,\quad x\in C.\tag{1.3}$$
This shows that the minimization (1.1) is equivalent to the fixed point problem
$$x^{*}=P_{C}\big(x^{*}-\gamma\nabla f(x^{*})\big),\tag{1.4}$$
where $\gamma>0$ is any constant and $P_{C}$ is the nearest point projection from $H$ onto $C$. By using this relationship, the gradient-projection algorithm is usually applied to solve the minimization problem (1.1). This algorithm generates a sequence $\{x_{n}\}$ through the recursion
$$x_{n+1}=P_{C}\big(x_{n}-\gamma_{n}\nabla f(x_{n})\big),\quad n\ge0,\tag{1.5}$$
where the initial guess $x_{0}\in C$ is chosen arbitrarily and $\{\gamma_{n}\}$ is a sequence of stepsizes which may be chosen in different ways. The gradient-projection algorithm (1.5) is a powerful tool for solving constrained convex optimization problems and has been well studied in the case of constant stepsizes $\gamma_{n}=\gamma$ for all $n$. The reader can refer to [1–9] and the references therein. It is known [3] that if $f$ has a Lipschitz continuous and strongly monotone gradient, then the sequence $\{x_{n}\}$ can be strongly convergent to a minimizer of $f$ in $C$. If the gradient of $f$ is only assumed to be Lipschitz continuous, then $\{x_{n}\}$ can only be weakly convergent if $H$ is infinite dimensional. In order to get strong convergence, Xu [10] studied the following regularized method:
$$x_{n+1}=P_{C}\big(x_{n}-\gamma_{n}(\nabla f(x_{n})+\alpha_{n}x_{n})\big),\quad n\ge0,\tag{1.6}$$
where the sequences $\{\gamma_{n}\}$ and $\{\alpha_{n}\}$ satisfy appropriate control conditions (in particular, $\alpha_{n}\to0$); see [10] for the precise assumptions.

Xu [10] proved that the sequence $\{x_{n}\}$ generated by (1.6) converges strongly to a minimizer of (1.1).

Motivated by Xu’s work, in the present paper, we further investigate the regularized gradient projection method (1.6). Under some different control conditions, we prove that this gradient projection algorithm converges strongly to the minimum-norm solution of the minimization problem (1.1).
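To make the recursions above concrete, the following Python sketch runs the gradient-projection iteration (1.5) on a small constrained least-squares problem. The problem data, box constraint, and constant stepsize are illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy problem: minimize f(x) = 0.5*||A x - b||^2 over the box C = [0, 1]^2.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([2.0, 3.0])

def grad_f(x):
    return A.T @ (A @ x - b)      # gradient of f

def proj_C(x):
    return np.clip(x, 0.0, 1.0)  # nearest-point projection onto the box

L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of grad f
gamma = 1.0 / L                  # constant stepsize in (0, 2/L)

x = np.zeros(2)
for _ in range(200):             # recursion (1.5): x_{n+1} = P_C(x_n - gamma*grad f(x_n))
    x = proj_C(x - gamma * grad_f(x))

print(x)                         # approaches the constrained minimizer (1, 1)
```

Replacing the update by `proj_C(x - gamma * (grad_f(x) + alpha_n * x))` with a vanishing sequence `alpha_n` gives the regularized variant (1.6).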

#### 2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T:C\to C$ is called *nonexpansive* if
$$\|Tx-Ty\|\le\|x-y\|,\quad x,y\in C.$$
We will use $\mathrm{Fix}(T)$ to denote the set of fixed points of $T$, that is, $\mathrm{Fix}(T)=\{x\in C:Tx=x\}$. A mapping $A:C\to H$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if there exists a constant $\nu>0$ such that
$$\langle Ax-Ay,x-y\rangle\ge\nu\|Ax-Ay\|^{2},\quad x,y\in C.$$
Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_{C}$, assigns, to each $x\in H$, the unique point $P_{C}x\in C$ with the property
$$\|x-P_{C}x\|=\inf\{\|x-y\|:y\in C\}.$$
It is well known that the metric projection $P_{C}$ of $H$ onto $C$ has the following basic properties:
(a) $\langle x-P_{C}x,y-P_{C}x\rangle\le0$ for all $x\in H$ and $y\in C$;
(b) $\langle P_{C}x-P_{C}y,x-y\rangle\ge\|P_{C}x-P_{C}y\|^{2}$ for every $x,y\in H$;
(c) $\|x-P_{C}x\|^{2}\le\|x-y\|^{2}-\|y-P_{C}x\|^{2}$ for all $x\in H$ and $y\in C$.
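Property (a) and the nonexpansivity that follows from (b) can be checked numerically. The short sketch below does so for the projection onto a box (the set and the sampled points are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_C(x):
    # Nearest-point projection onto the box C = [-1, 1]^3 (componentwise clipping)
    return np.clip(x, -1.0, 1.0)

for _ in range(1000):
    x = rng.normal(size=3) * 5.0
    z = rng.normal(size=3) * 5.0
    y = proj_C(rng.normal(size=3) * 5.0)   # an arbitrary point of C
    px, pz = proj_C(x), proj_C(z)
    # property (a): <x - P_C x, y - P_C x> <= 0 for every y in C
    assert np.dot(x - px, y - px) <= 1e-12
    # property (b) implies nonexpansivity: ||P_C x - P_C z|| <= ||x - z||
    assert np.linalg.norm(px - pz) <= np.linalg.norm(x - z) + 1e-12
```

For a box, the projection decouples componentwise, which is why simple clipping realizes $P_{C}$.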

Next we adopt the following notation:
(i) $x_{n}\to x$ means that $\{x_{n}\}$ converges strongly to $x$;
(ii) $x_{n}\rightharpoonup x$ means that $\{x_{n}\}$ converges weakly to $x$;
(iii) $\omega_{w}(x_{n})=\{x:\exists\,\{x_{n_{j}}\}\subset\{x_{n}\}\text{ such that }x_{n_{j}}\rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_{n}\}$.

Lemma 2.1 (see [11]). *Given $T:H\to H$ and letting $S=I-T$ be the complement of $T$, given also $\gamma>0$:* (a) *$T$ is nonexpansive if and only if $S$ is $\frac{1}{2}$-ism;* (b) *if $S$ is $\nu$-ism, then, for $\gamma>0$, $\gamma S$ is $\frac{\nu}{\gamma}$-ism;* (c) *$T$ is averaged if and only if the complement $I-T$ is $\nu$-ism for some $\nu>\frac{1}{2}$.*

Lemma 2.2 (see [12], (demiclosedness principle)). *Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T:C\to C$ be a nonexpansive mapping with $\mathrm{Fix}(T)\ne\emptyset$. If $\{x_{n}\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I-T)x_{n}\}$ converges strongly to $y$, then*
$$(I-T)x=y.$$
*In particular, if $y=0$, then $x\in\mathrm{Fix}(T)$.*

Lemma 2.3 (see [13]). *Let $\{x_{n}\}$ and $\{y_{n}\}$ be bounded sequences in a Banach space $E$, and let $\{\beta_{n}\}$ be a sequence in $[0,1]$ with*
$$0<\liminf_{n\to\infty}\beta_{n}\le\limsup_{n\to\infty}\beta_{n}<1.$$
*Suppose that*
$$x_{n+1}=(1-\beta_{n})y_{n}+\beta_{n}x_{n}$$
*for all $n\ge0$ and*
$$\limsup_{n\to\infty}\big(\|y_{n+1}-y_{n}\|-\|x_{n+1}-x_{n}\|\big)\le0.$$
*Then, $\lim_{n\to\infty}\|y_{n}-x_{n}\|=0$.*

Lemma 2.4 (see [14]). *Assume that $\{a_{n}\}$ is a sequence of nonnegative real numbers such that*
$$a_{n+1}\le(1-\gamma_{n})a_{n}+\delta_{n}\gamma_{n},$$
*where $\{\gamma_{n}\}$ is a sequence in $(0,1)$ and $\{\delta_{n}\}$ is a sequence such that* (1) *$\sum_{n=1}^{\infty}\gamma_{n}=\infty$;* (2) *$\limsup_{n\to\infty}\delta_{n}\le0$ or $\sum_{n=1}^{\infty}|\delta_{n}\gamma_{n}|<\infty$.* *Then, $\lim_{n\to\infty}a_{n}=0$.*
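As a quick numerical illustration of Lemma 2.4, the sketch below iterates the recursion with the inequality taken as an equality, using the illustrative sequences $\gamma_{n}=\delta_{n}=\frac{1}{n+2}$ (which satisfy conditions (1) and (2)); $a_{n}$ is driven to zero:

```python
# Illustrative check of Lemma 2.4 with gamma_n = delta_n = 1/(n+2):
# a_{n+1} = (1 - gamma_n) a_n + delta_n gamma_n, so a_n should tend to 0.
a = 5.0
for n in range(100_000):
    gamma = 1.0 / (n + 2)   # gamma_n in (0,1); sum of gamma_n diverges
    delta = 1.0 / (n + 2)   # limsup delta_n <= 0 holds since delta_n -> 0
    a = (1 - gamma) * a + delta * gamma

print(a)                    # a small positive number, tending to 0 as n grows
```

A rough estimate shows $a_{n}\approx(2a_{0}+\ln n)/n$ for this choice, so the decay is slow but unmistakable.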

#### 3. Main Result

In this section, we will state and prove our main result.

Theorem 3.1. *Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f:C\to\mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $S\ne\emptyset$. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $\{x_{n}\}$ be a sequence generated by the following hybrid gradient projection algorithm:*
$$x_{n+1}=P_{C}\big(x_{n}-\gamma_{n}(\nabla f(x_{n})+\alpha_{n}x_{n})\big),\quad n\ge0,\tag{3.1}$$
*where the sequences $\{\gamma_{n}\}$ and $\{\alpha_{n}\}$ satisfy the following conditions:* (1) *$\lim_{n\to\infty}\alpha_{n}=0$ and $\sum_{n=0}^{\infty}\alpha_{n}=\infty$;* (2) *$0<\liminf_{n\to\infty}\gamma_{n}\le\limsup_{n\to\infty}\gamma_{n}<\frac{2}{L}$ and $\lim_{n\to\infty}(\gamma_{n+1}-\gamma_{n})=0$.* *Then, the sequence $\{x_{n}\}$ generated by (3.1) converges strongly to a minimizer of (1.1).*

*Proof. *Note that the Lipschitz condition implies that the gradient $\nabla f$ is $\frac{1}{L}$-ism [10]. Then, we have
$$\langle\nabla f(x)-\nabla f(y),x-y\rangle\ge\frac{1}{L}\|\nabla f(x)-\nabla f(y)\|^{2},\quad x,y\in C.$$
If $0<\gamma\le\frac{2}{L}$, then
$$\begin{aligned}\|(I-\gamma\nabla f)x-(I-\gamma\nabla f)y\|^{2}&=\|x-y\|^{2}-2\gamma\langle\nabla f(x)-\nabla f(y),x-y\rangle+\gamma^{2}\|\nabla f(x)-\nabla f(y)\|^{2}\\&\le\|x-y\|^{2}-\gamma\Big(\frac{2}{L}-\gamma\Big)\|\nabla f(x)-\nabla f(y)\|^{2}.\end{aligned}$$
It follows that
$$\|(I-\gamma\nabla f)x-(I-\gamma\nabla f)y\|\le\|x-y\|.$$
Thus, $I-\gamma\nabla f$ is nonexpansive for all $\gamma\in(0,\frac{2}{L}]$.

Take any $z\in S$. Since $z$ solves the minimization problem (1.1) if and only if $z$ solves the fixed-point equation $z=P_{C}(z-\gamma\nabla f(z))$ for any fixed positive number $\gamma$, we have $z=P_{C}(z-\gamma_{n}\nabla f(z))$ for all $n\ge0$. Since $\alpha_{n}\to0$ and $\limsup_{n\to\infty}\gamma_{n}<\frac{2}{L}$, we may assume without loss of generality that $\frac{\gamma_{n}}{1-\gamma_{n}\alpha_{n}}\le\frac{2}{L}$ for all $n$. From (3.1) and the nonexpansivity of $P_{C}$, we get
$$\begin{aligned}\|x_{n+1}-z\|&=\big\|P_{C}\big(x_{n}-\gamma_{n}(\nabla f(x_{n})+\alpha_{n}x_{n})\big)-P_{C}\big(z-\gamma_{n}\nabla f(z)\big)\big\|\\&\le\big\|(1-\gamma_{n}\alpha_{n})(x_{n}-z)-\gamma_{n}\big(\nabla f(x_{n})-\nabla f(z)\big)-\gamma_{n}\alpha_{n}z\big\|\\&\le(1-\gamma_{n}\alpha_{n})\Big\|(x_{n}-z)-\frac{\gamma_{n}}{1-\gamma_{n}\alpha_{n}}\big(\nabla f(x_{n})-\nabla f(z)\big)\Big\|+\gamma_{n}\alpha_{n}\|z\|\\&\le(1-\gamma_{n}\alpha_{n})\|x_{n}-z\|+\gamma_{n}\alpha_{n}\|z\|.\end{aligned}$$
Thus, we deduce by induction that
$$\|x_{n}-z\|\le\max\{\|x_{0}-z\|,\|z\|\},\quad n\ge0.$$
This indicates that the sequence $\{x_{n}\}$ is bounded.

Since $\alpha_{n}\to0$ and $\limsup_{n\to\infty}\gamma_{n}<\frac{2}{L}$, we may also assume that $\gamma_{n}(L+\alpha_{n})<2$ for all $n$. The mapping $\nabla f+\alpha_{n}I$ is the gradient of the convex function $f+\frac{\alpha_{n}}{2}\|\cdot\|^{2}$ and is $(L+\alpha_{n})$-Lipschitzian; hence it is $\frac{1}{L+\alpha_{n}}$-ism, and $\gamma_{n}(\nabla f+\alpha_{n}I)$ is $\frac{1}{\gamma_{n}(L+\alpha_{n})}$-ism. So, by Lemma 2.1, $I-\gamma_{n}(\nabla f+\alpha_{n}I)$ is $\frac{\gamma_{n}(L+\alpha_{n})}{2}$-averaged. Since $P_{C}$ is $\frac{1}{2}$-averaged, the composition $P_{C}\big(I-\gamma_{n}(\nabla f+\alpha_{n}I)\big)$ is $\frac{2+\gamma_{n}(L+\alpha_{n})}{4}$-averaged; that is,
$$P_{C}\big(I-\gamma_{n}(\nabla f+\alpha_{n}I)\big)=\beta_{n}I+(1-\beta_{n})V_{n},\qquad\beta_{n}=\frac{2-\gamma_{n}(L+\alpha_{n})}{4},$$
for some nonexpansive mapping $V_{n}$. Then, we can rewrite $x_{n+1}$ as
$$x_{n+1}=\beta_{n}x_{n}+(1-\beta_{n})y_{n},$$
where
$$y_{n}=V_{n}x_{n}=\frac{x_{n+1}-\beta_{n}x_{n}}{1-\beta_{n}}.$$
It follows that
$$\|y_{n+1}-y_{n}\|\le\|V_{n+1}x_{n+1}-V_{n+1}x_{n}\|+\|V_{n+1}x_{n}-V_{n}x_{n}\|\le\|x_{n+1}-x_{n}\|+\|V_{n+1}x_{n}-V_{n}x_{n}\|.$$
Now we choose a constant $M>0$ such that
$$\|V_{n+1}x_{n}-V_{n}x_{n}\|\le M\big(|\gamma_{n+1}-\gamma_{n}|+|\gamma_{n+1}\alpha_{n+1}-\gamma_{n}\alpha_{n}|\big),\quad n\ge0;$$
such an $M$ exists because $\{x_{n}\}$ and $\{\nabla f(x_{n})\}$ are bounded and $\{\beta_{n}\}$ is bounded away from $1$. We have the following estimates:
$$\|y_{n+1}-y_{n}\|-\|x_{n+1}-x_{n}\|\le M\big(|\gamma_{n+1}-\gamma_{n}|+|\gamma_{n+1}\alpha_{n+1}-\gamma_{n}\alpha_{n}|\big)\to0.$$
Thus, we deduce
$$\limsup_{n\to\infty}\big(\|y_{n+1}-y_{n}\|-\|x_{n+1}-x_{n}\|\big)\le0.$$
Note that $0<\liminf_{n\to\infty}\beta_{n}\le\limsup_{n\to\infty}\beta_{n}<1$ and that $\{x_{n}\}$ and $\{y_{n}\}$ are bounded. Hence, by Lemma 2.3, we get
$$\lim_{n\to\infty}\|y_{n}-x_{n}\|=0.$$
It follows that
$$\lim_{n\to\infty}\|x_{n+1}-x_{n}\|=\lim_{n\to\infty}(1-\beta_{n})\|y_{n}-x_{n}\|=0.$$
Consequently, since $\|x_{n}-P_{C}(I-\gamma_{n}\nabla f)x_{n}\|\le\|x_{n}-x_{n+1}\|+\gamma_{n}\alpha_{n}\|x_{n}\|$,
$$\lim_{n\to\infty}\|x_{n}-P_{C}(I-\gamma_{n}\nabla f)x_{n}\|=0.$$
Now we show that the weak limit set satisfies $\omega_{w}(x_{n})\subset S$. Choose any $\tilde{x}\in\omega_{w}(x_{n})$. Since $\{x_{n}\}$ is bounded, there must exist a subsequence $\{x_{n_{j}}\}$ of $\{x_{n}\}$ such that $x_{n_{j}}\rightharpoonup\tilde{x}$. At the same time, the real number sequence $\{\gamma_{n_{j}}\}$ is bounded. Thus, there exists a subsequence of $\{\gamma_{n_{j}}\}$ which converges to some $\gamma$. Without loss of generality, we may assume that $\gamma_{n_{j}}\to\gamma$. Note that $0<\liminf_{n\to\infty}\gamma_{n}\le\limsup_{n\to\infty}\gamma_{n}<\frac{2}{L}$. So, $\gamma\in(0,\frac{2}{L})$; that is, $\gamma_{n_{j}}\to\gamma\in(0,\frac{2}{L})$ as $j\to\infty$. Next, we only need to show that $\tilde{x}\in S$. First, from the preceding step we have that $\|x_{n_{j}}-P_{C}(I-\gamma_{n_{j}}\nabla f)x_{n_{j}}\|\to0$. Then, we have
$$\begin{aligned}\|x_{n_{j}}-P_{C}(I-\gamma\nabla f)x_{n_{j}}\|&\le\|x_{n_{j}}-P_{C}(I-\gamma_{n_{j}}\nabla f)x_{n_{j}}\|+\|P_{C}(I-\gamma_{n_{j}}\nabla f)x_{n_{j}}-P_{C}(I-\gamma\nabla f)x_{n_{j}}\|\\&\le\|x_{n_{j}}-P_{C}(I-\gamma_{n_{j}}\nabla f)x_{n_{j}}\|+|\gamma_{n_{j}}-\gamma|\,\|\nabla f(x_{n_{j}})\|\longrightarrow0.\end{aligned}$$
Since $\gamma\in(0,\frac{2}{L})$, $P_{C}(I-\gamma\nabla f)$ is nonexpansive. It then follows from Lemma 2.2 (demiclosedness principle) that $\tilde{x}\in\mathrm{Fix}\big(P_{C}(I-\gamma\nabla f)\big)$. Hence, $\tilde{x}\in S$ because of $\mathrm{Fix}\big(P_{C}(I-\gamma\nabla f)\big)=S$. So, $\omega_{w}(x_{n})\subset S$.

Finally, we prove that $x_{n}\to\tilde{x}$, where $\tilde{x}=P_{S}(0)$ is the minimum norm solution of (1.1). First, we show that $\limsup_{n\to\infty}\langle-\tilde{x},x_{n}-\tilde{x}\rangle\le0$. Observe that there exists a subsequence $\{x_{n_{j}}\}$ of $\{x_{n}\}$ satisfying
$$\limsup_{n\to\infty}\langle-\tilde{x},x_{n}-\tilde{x}\rangle=\lim_{j\to\infty}\langle-\tilde{x},x_{n_{j}}-\tilde{x}\rangle.$$
Since $\{x_{n_{j}}\}$ is bounded, there exists a subsequence of $\{x_{n_{j}}\}$ which converges weakly to some $w\in\omega_{w}(x_{n})\subset S$. Without loss of generality, we assume that $x_{n_{j}}\rightharpoonup w$. Then, by property (a) of $P_{S}$, we obtain
$$\limsup_{n\to\infty}\langle-\tilde{x},x_{n}-\tilde{x}\rangle=\lim_{j\to\infty}\langle-\tilde{x},x_{n_{j}}-\tilde{x}\rangle=\langle0-P_{S}(0),w-P_{S}(0)\rangle\le0.$$
Since $\frac{\gamma_{n}}{1-\gamma_{n}\alpha_{n}}\le\frac{2}{L}$, $I-\frac{\gamma_{n}}{1-\gamma_{n}\alpha_{n}}\nabla f$ is nonexpansive. By using the property (b) of $P_{C}$, we have
$$\begin{aligned}\|x_{n+1}-\tilde{x}\|^{2}&\le\big\langle\big(x_{n}-\gamma_{n}(\nabla f(x_{n})+\alpha_{n}x_{n})\big)-\big(\tilde{x}-\gamma_{n}\nabla f(\tilde{x})\big),x_{n+1}-\tilde{x}\big\rangle\\&=\big\langle(1-\gamma_{n}\alpha_{n})(x_{n}-\tilde{x})-\gamma_{n}\big(\nabla f(x_{n})-\nabla f(\tilde{x})\big),x_{n+1}-\tilde{x}\big\rangle+\gamma_{n}\alpha_{n}\langle-\tilde{x},x_{n+1}-\tilde{x}\rangle\\&\le(1-\gamma_{n}\alpha_{n})\|x_{n}-\tilde{x}\|\,\|x_{n+1}-\tilde{x}\|+\gamma_{n}\alpha_{n}\langle-\tilde{x},x_{n+1}-\tilde{x}\rangle\\&\le\frac{1-\gamma_{n}\alpha_{n}}{2}\big(\|x_{n}-\tilde{x}\|^{2}+\|x_{n+1}-\tilde{x}\|^{2}\big)+\gamma_{n}\alpha_{n}\langle-\tilde{x},x_{n+1}-\tilde{x}\rangle.\end{aligned}$$
It follows that
$$\|x_{n+1}-\tilde{x}\|^{2}\le\frac{1-\gamma_{n}\alpha_{n}}{1+\gamma_{n}\alpha_{n}}\|x_{n}-\tilde{x}\|^{2}+\frac{2\gamma_{n}\alpha_{n}}{1+\gamma_{n}\alpha_{n}}\langle-\tilde{x},x_{n+1}-\tilde{x}\rangle\le(1-\gamma_{n}\alpha_{n})\|x_{n}-\tilde{x}\|^{2}+\gamma_{n}\alpha_{n}\cdot\frac{2}{1+\gamma_{n}\alpha_{n}}\langle-\tilde{x},x_{n+1}-\tilde{x}\rangle.$$
Since $\liminf_{n\to\infty}\gamma_{n}>0$ and $\sum_{n}\alpha_{n}=\infty$, we have $\sum_{n}\gamma_{n}\alpha_{n}=\infty$; moreover, $\limsup_{n\to\infty}\frac{2}{1+\gamma_{n}\alpha_{n}}\langle-\tilde{x},x_{n+1}-\tilde{x}\rangle\le0$. From Lemma 2.4, we deduce that $x_{n}\to\tilde{x}$. This completes the proof.

*Remark 3.2. *We obtain the strong convergence of the regularized gradient projection method (3.1) under some different control conditions.

*Remark 3.3. *From the proof of the result, we observe that our algorithm (3.1) converges to a special solution of the minimization (1.1). As a matter of fact, this special solution is the minimum-norm solution of the minimization (1.1). Finding the minimum-norm solution of a practical problem is of interest due to its applications. A typical example is the least-squares solution to the constrained linear inverse problem; see, for example, [15]. For some related works on the minimum-norm solution and the minimization problems, please see [16–22].
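To see the minimum-norm selection at work, the following sketch applies iteration (3.1) to a problem whose solution set is a whole segment. The problem data, box constraint, stepsize, and regularization sequence are illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy instance of (1.1) with a non-unique solution set:
# minimize f(x) = 0.5*(x1 + x2 - 2)^2 over the box C = [0, 3]^2.
# S = {x in C : x1 + x2 = 2}; its minimum-norm element is (1, 1).
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

def grad_f(x):
    return A.T @ (A @ x - b)

def proj_C(x):
    return np.clip(x, 0.0, 3.0)

L = 2.0        # Lipschitz constant of grad f (spectral norm of A^T A)
gamma = 0.5    # constant stepsize in (0, 2/L)

x = np.array([3.0, 0.0])
for n in range(100_000):
    alpha = 1.0 / np.sqrt(n + 1.0)   # alpha_n -> 0 with sum alpha_n = infinity
    x = proj_C(x - gamma * (grad_f(x) + alpha * x))  # iteration (3.1)

print(x)       # close to the minimum-norm solution (1, 1)
```

Without the regularization term `alpha * x`, the plain iteration (1.5) would stop at whichever point of the segment $S$ it reaches first; the vanishing regularization steers the iterates toward the minimum-norm element instead.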

#### Acknowledgments

Y. Yao was supported in part by the Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279, and NSFC 71161001-G0105. W. Jigang was supported in part by NSFC 61173032.

#### References

1. P. H. Calamai and J. J. Moré, “Projected gradient methods for linearly constrained problems,” *Mathematical Programming*, vol. 39, no. 1, pp. 93–116, 1987.
2. E. M. Gafni and D. P. Bertsekas, “Two-metric projection methods for constrained optimization,” *SIAM Journal on Control and Optimization*, vol. 22, no. 6, pp. 936–964, 1984.
3. E. S. Levitin and B. T. Polyak, “Constrained minimization methods,” *USSR Computational Mathematics and Mathematical Physics*, vol. 6, no. 5, pp. 1–50, 1966.
4. B. T. Polyak, *Introduction to Optimization*, Optimization Software, New York, NY, USA, 1987.
5. A. Ruszczyński, *Nonlinear Optimization*, Princeton University Press, Princeton, NJ, USA, 2006.
6. M. Su and H. K. Xu, “Remarks on the gradient-projection algorithm,” *Journal of Nonlinear Analysis and Optimization*, vol. 1, pp. 35–43, 2010.
7. C. Wang and N. Xiu, “Convergence of the gradient projection method for generalized convex minimization,” *Computational Optimization and Applications*, vol. 16, no. 2, pp. 111–120, 2000.
8. N. Xiu, C. Wang, and J. Zhang, “Convergence properties of projection and contraction methods for variational inequality problems,” *Applied Mathematics and Optimization*, vol. 43, no. 2, pp. 147–168, 2001.
9. N. Xiu, C. Wang, and L. Kong, “A note on the gradient projection method with exact stepsize rule,” *Journal of Computational Mathematics*, vol. 25, no. 2, pp. 221–230, 2007.
10. H.-K. Xu, “Averaged mappings and the gradient-projection algorithm,” *Journal of Optimization Theory and Applications*, vol. 150, no. 2, pp. 360–378, 2011.
11. C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” *Inverse Problems*, vol. 20, no. 1, pp. 103–120, 2004.
12. K. Goebel and W. A. Kirk, *Topics in Metric Fixed Point Theory*, vol. 28 of *Cambridge Studies in Advanced Mathematics*, Cambridge University Press, Cambridge, UK, 1990.
13. T. Suzuki, “Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces,” *Fixed Point Theory and Applications*, no. 1, pp. 103–123, 2005.
14. H.-K. Xu, “Iterative algorithms for nonlinear operators,” *Journal of the London Mathematical Society*, vol. 66, no. 1, pp. 240–256, 2002.
15. A. Sabharwal and L. C. Potter, “Convexly constrained linear inverse problems: iterative least-squares and regularization,” *IEEE Transactions on Signal Processing*, vol. 46, no. 9, pp. 2345–2352, 1998.
16. Y. Yao, R. Chen, and Y.-C. Liou, “A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem,” *Mathematical & Computer Modelling*, vol. 55, no. 3-4, pp. 1506–1515, 2012.
17. Y. Yao, R. Chen, and H.-K. Xu, “Schemes for finding minimum-norm solutions of variational inequalities,” *Nonlinear Analysis: Theory, Methods & Applications*, vol. 72, no. 7-8, pp. 3447–3456, 2010.
18. Y. Yao, Y. J. Cho, and Y.-C. Liou, “Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems,” *European Journal of Operational Research*, vol. 212, no. 2, pp. 242–250, 2011.
19. Y. Yao, Y. J. Cho, and P.-X. Yang, “An iterative algorithm for a hierarchical problem,” *Journal of Applied Mathematics*, vol. 2012, Article ID 320421, 13 pages, 2012.
20. Y. Yao, Y.-C. Liou, and S. M. Kang, “Two-step projection methods for a system of variational inequality problems in Banach spaces,” *Journal of Global Optimization*. In press.
21. Y. Yao, M. A. Noor, and Y.-C. Liou, “Strong convergence of a modified extra-gradient method to the minimum-norm solution of variational inequalities,” *Abstract and Applied Analysis*, vol. 2012, Article ID 817436, 9 pages, 2012.
22. Y. Yao and N. Shahzad, “Strong convergence of a proximal point algorithm with general errors,” *Optimization Letters*. In press.