Abstract and Applied Analysis

Volume 2012 (2012), Article ID 792078, 16 pages

http://dx.doi.org/10.1155/2012/792078

## Variant Gradient Projection Methods for the Minimization Problems

Yonghong Yao,^{1} Yeong-Cheng Liou,^{2} and Ching-Feng Wen^{3}

^{1}Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
^{2}Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan
^{3}Center for General Education, Kaohsiung Medical University, Kaohsiung 807, Taiwan

Received 3 May 2012; Accepted 6 June 2012

Academic Editor: Jen-Chih Yao

Copyright © 2012 Yonghong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The gradient projection algorithm plays an important role in solving constrained convex minimization problems. In general, the gradient projection algorithm has only weak convergence in infinite-dimensional Hilbert spaces. Recently, H. K. Xu (2011) provided two modified gradient projection algorithms which have strong convergence. Motivated by Xu's work, in the present paper we suggest three simpler variant gradient projection methods for which strong convergence is guaranteed.

#### 1. Introduction

Let $H$ be a real Hilbert space and $C$ a nonempty closed and convex subset of $H$. Let $f : C \to \mathbb{R}$ be a real-valued convex function. Now we consider the following constrained convex minimization problem:
$$\min_{x \in C} f(x). \tag{1.1}$$
Assume that (1.1) is consistent; that is, it has a solution, and we use $S$ to denote its solution set. If $f$ is Fréchet differentiable, then $x^* \in C$ solves (1.1) if and only if $x^*$ satisfies the following optimality condition:
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{1.2}$$
where $\nabla f$ denotes the gradient of $f$. Note that (1.2) can be rewritten as
$$\langle x^* - (x^* - \gamma \nabla f(x^*)), x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.3}$$
This shows that the minimization (1.1) is equivalent to the fixed-point problem
$$P_C(x^* - \gamma \nabla f(x^*)) = x^*, \tag{1.4}$$
where $\gamma > 0$ is any positive constant and $P_C$ is the nearest point projection from $H$ onto $C$. By using this relationship, the gradient-projection algorithm is usually applied to solve the minimization problem (1.1). This algorithm generates a sequence $\{x_n\}$ through the recursion
$$x_{n+1} = P_C(x_n - \gamma_n \nabla f(x_n)), \quad n \ge 0, \tag{1.5}$$
where the initial guess $x_0 \in C$ is chosen arbitrarily and $\{\gamma_n\}$ is a sequence of step sizes which may be chosen in different ways. The gradient-projection algorithm (1.5) is a powerful tool for solving constrained convex optimization problems and has been well studied in the case of constant step sizes $\gamma_n = \gamma$ for all $n$. The reader can refer to [1–9]. It has recently been applied to solve split feasibility problems, which find applications in image reconstruction and intensity-modulated radiation therapy (see [10–17]).
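To make the recursion (1.5) concrete, here is a minimal numerical sketch (ours, not from the cited literature) of the gradient-projection iteration on a toy problem: minimize $f(x) = \frac{1}{2}\|x - b\|^2$ over the closed unit ball, where the projection $P_C$ has a closed form.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def gradient_projection(grad, project, x0, step, iters=500):
    """Gradient projection: x_{n+1} = P_C(x_n - gamma * grad f(x_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Toy problem: f(x) = 0.5*||x - b||^2 over the unit ball; grad f(x) = x - b.
b = np.array([3.0, 4.0])
x_star = gradient_projection(lambda x: x - b, project_ball, np.zeros(2), step=0.5)
# The minimizer is the projection of b onto the ball: b/||b|| = (0.6, 0.8).
```

Since $\nabla f(x) = x - b$ is $1$-Lipschitz here, any constant step size below $2/L = 2$ keeps the iteration well behaved, and the iterates settle at the projection of $b$ onto the ball.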

It is known [3] that if $f$ has a Lipschitz continuous and strongly monotone gradient, then the sequence $\{x_n\}$ generated by (1.5) converges strongly to a minimizer of $f$ in $C$. If the gradient of $f$ is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only be weakly convergent when $H$ is infinite dimensional. This naturally gives rise to a question.

*Question 1. *How can the gradient projection algorithm be appropriately modified so as to have strong convergence?

For this purpose, Xu [18] recently introduced the following modification:
$$x_{n+1} = \theta_n h(x_n) + (1 - \theta_n) P_C(x_n - \gamma_n \nabla f(x_n)), \quad n \ge 0, \tag{1.6}$$
where $\nabla f$ is assumed to be $L$-Lipschitzian, $h$ is a contraction, and the sequences $\{\theta_n\} \subset (0,1)$ and $\{\gamma_n\}$ satisfy the following conditions: (i) $\theta_n \to 0$, $\sum_n \theta_n = \infty$, and $\sum_n |\theta_{n+1} - \theta_n| < \infty$; (ii) $0 < \liminf_n \gamma_n \le \limsup_n \gamma_n < 2/L$ and $\sum_n |\gamma_{n+1} - \gamma_n| < \infty$. Xu [18] proved that the sequence $\{x_n\}$ converges strongly to a minimizer of (1.1).

*Remark 1.1. *Xu's modification (1.6) is a convex combination of the gradient-projection step in (1.5) and a self-mapping $h$, usually referred to as the viscosity term.

In [18], Xu presented another modification as follows:
$$\begin{aligned} y_n &= P_C(x_n - \gamma_n \nabla f(x_n)), \\ C_n &= \{z \in C : \|y_n - z\| \le \|x_n - z\|\}, \\ Q_n &= \{z \in C : \langle x_n - z, x_0 - x_n \rangle \ge 0\}, \\ x_{n+1} &= P_{C_n \cap Q_n}(x_0), \quad n \ge 0. \end{aligned} \tag{1.7}$$
Xu [18] proved that algorithm (1.7) also converges strongly to a point which solves the minimization problem (1.1).

*Remark 1.2. *Algorithm (1.7) involves additional projections; it couples the gradient projection method (1.5) with the so-called CQ method.

It should be pointed out that Xu's modifications (1.6) and (1.7) are interesting and provide us with a direction for solving (1.1) in infinite-dimensional Hilbert spaces.

Motivated by Xu's work, in the present paper we suggest three variant gradient projection methods for which strong convergence is guaranteed when solving (1.1) in infinite-dimensional Hilbert spaces. Our motivation is mainly twofold.

*Reason* 1. The solution of the minimization problem (1.1) is not always unique; there may be many solutions. In that case, a special solution (e.g., the minimum norm solution) must be singled out from among the candidate solutions. The minimum norm problem is motivated by the following least squares formulation of the constrained linear inverse problem:
$$Ax = b, \quad x \in C, \tag{1.8}$$
where $C$ is a nonempty closed convex subset of a real Hilbert space $H$, $A$ is a bounded linear operator from $H$ to another real Hilbert space $H_1$, $A^*$ is the adjoint of $A$, and $b$ is a given point in $H_1$. The least-squares solution to (1.8) is the least-norm minimizer of the minimization problem
$$\min_{x \in C} \frac{1}{2}\|Ax - b\|^2.$$
For some related works, please see Solodov and Svaiter [19], Goebel and Kirk [20], and Martinez-Yanes and Xu [21].
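As a hedged illustration of this least-squares setting, the sketch below (ours) runs the gradient projection iteration $x_{n+1} = P_C(x_n - \gamma A^{T}(Ax_n - b))$, often called projected Landweber iteration, on a small finite-dimensional instance; the matrix, the set $C$, and the step size are our own choices.

```python
import numpy as np

def projected_landweber(A, b, project, iters=2000):
    """Gradient projection for min_{x in C} 0.5*||Ax - b||^2.
    The gradient is A^T(Ax - b); the step is kept below 2/L, L = ||A||^2."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    gamma = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project(x - gamma * A.T @ (A @ x - b))
    return x

# C = nonnegative orthant; its projection is the componentwise positive part.
project = lambda v: np.maximum(v, 0.0)
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([2.0, 0.0])
x = projected_landweber(A, b, project)
# Here Ax = b has the nonnegative solution (1, 1), which the iteration finds.
```

In this consistent, well-posed instance the least-squares minimizer is unique; the minimum-norm selection discussed above matters precisely when the solution set of (1.8) contains more than one point.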

*Reason* 2. Projection methods are used extensively in a variety of methods in optimization theory. Apart from their theoretical interest, the main advantage of projection methods, which makes them successful in real-world applications, is computational (see [22–31]). In this respect, (1.7) is particularly useful. But we observe that (1.7) involves two half-spaces $C_n$ and $Q_n$. If the sets $C_n$ and $Q_n$ are simple enough, then the projections onto $C_n$ and onto $Q_n$ are easily executed. But the intersection $C_n \cap Q_n$ may be complicated, so that the projection $P_{C_n \cap Q_n}$ is not easily executed. This might seriously affect the efficiency of the method. Hence, it is of interest to relax $C_n$ or $Q_n$ in (1.7).
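The computational point can be seen in code: projection onto a single half-space has a cheap closed form, whereas projecting onto an intersection of half-spaces generally requires an inner optimization. A minimal sketch (ours) of the closed form:

```python
import numpy as np

def project_halfspace(x, a, beta):
    """Closed-form projection onto the half-space H = {z : <a, z> <= beta}."""
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    excess = float(np.dot(a, x) - beta)
    if excess <= 0.0:
        return x                                  # x already lies in H
    return x - (excess / float(np.dot(a, a))) * a # move along a to the boundary

p = project_halfspace([2.0, 0.0], a=[1.0, 0.0], beta=1.0)   # lands on the boundary
q = project_halfspace([0.0, 0.0], a=[1.0, 0.0], beta=1.0)   # already inside, unchanged
```

Projecting onto the intersection of two such half-spaces has no equally simple formula in general, which is exactly why dropping one of the sets $C_n$, $Q_n$ is attractive.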

In the present paper, we suggest the following three methods:
$$x_{n+1} = P_C[\theta_n h(x_n) + (1 - \theta_n)(x_n - \gamma_n \nabla f(x_n))], \quad n \ge 0, \tag{1.9}$$
$$x_{n+1} = P_C[(1 - \theta_n)(x_n - \gamma_n \nabla f(x_n))], \quad n \ge 0, \tag{1.10}$$
$$y_n = P_C(x_n - \gamma_n \nabla f(x_n)), \quad C_{n+1} = \{z \in C_n : \|y_n - z\| \le \|x_n - z\|\}, \quad x_{n+1} = P_{C_{n+1}}(x_0). \tag{1.11}$$
We will show that (1.10) can be used to find the minimum norm solution of the minimization problem (1.1), and that (1.11), which involves only the sets $C_n$, also has strong convergence.

#### 2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C.$$
Recall that the (nearest point or metric) projection from $H$ onto $C$, denoted $P_C$, assigns to each $x \in H$ the unique point $P_C(x) \in C$ with the property
$$\|x - P_C(x)\| = \inf\{\|x - y\| : y \in C\}.$$
It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties:
(i) $\|P_C(x) - P_C(y)\| \le \|x - y\|$, for all $x, y \in H$;
(ii) $\langle x - P_C(x), y - P_C(x) \rangle \le 0$, for every $x \in H$ and $y \in C$;
(iii) $\langle x - y, P_C(x) - P_C(y) \rangle \ge \|P_C(x) - P_C(y)\|^2$, for all $x, y \in H$.
Next we adopt the following notation:
(i) $x_n \to x$ means that $\{x_n\}$ converges strongly to $x$;
(ii) $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$;
(iii) $\omega_w(x_n) = \{x : \exists\, x_{n_j} \rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.
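The projection properties (i) and (ii) can be checked numerically for a concrete convex set. The sketch below (an illustration of ours) uses the box $[-1,1]^3$, whose metric projection is componentwise clipping, and measures the worst violation over random samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_box(x, lo=-1.0, hi=1.0):
    """Metric projection onto the box [lo, hi]^n, a closed convex set."""
    return np.clip(x, lo, hi)

# Property (i): nonexpansiveness, ||P x - P y|| <= ||x - y||.
worst_i = max(
    np.linalg.norm(project_box(x) - project_box(y)) - np.linalg.norm(x - y)
    for x, y in ((rng.normal(size=3), rng.normal(size=3)) for _ in range(1000))
)

# Property (ii): <x - P x, y - P x> <= 0 for every y in C.
worst_ii = max(
    np.dot(x - project_box(x), y - project_box(x))
    for x, y in ((3 * rng.normal(size=3), rng.uniform(-1, 1, size=3)) for _ in range(1000))
)
# Both worst-case quantities are nonpositive (up to floating-point rounding).
```

This is only a sanity check on samples, of course; the properties themselves hold for every closed convex set in a Hilbert space.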

Lemma 2.1 (see [32], Demiclosedness Principle). *Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in \mathrm{Fix}(T)$.*

Lemma 2.2 (see [33]). *Let $C$ be a closed convex subset of $H$. Let $\{x_n\}$ be a sequence in $H$, $u \in H$, and $q = P_C(u)$. If $\{x_n\}$ is such that $\omega_w(x_n) \subset C$ and satisfies the condition
$$\|x_n - u\| \le \|u - q\| \quad \text{for all } n,$$
then $x_n \to q$.*

Lemma 2.3 (see [34]). *Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence in $\mathbb{R}$ such that* (1) *$\sum_{n=0}^{\infty} \gamma_n = \infty$;* (2) *$\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} |\gamma_n \delta_n| < \infty$. Then $\lim_{n \to \infty} a_n = 0$.*
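A quick numerical illustration (ours) of the recursion in Lemma 2.3: take $\gamma_n = 1/(n+2)$, so that $\sum_n \gamma_n = \infty$, and $\delta_n = 1/(n+1) \to 0$; running the recursion with equality indeed drives $a_n$ to $0$.

```python
# Illustration of Lemma 2.3 (not from the paper):
# a_{n+1} = (1 - g_n) a_n + g_n d_n with g_n = 1/(n+2) (so sum g_n diverges)
# and d_n = 1/(n+1) -> 0 (so limsup d_n <= 0); the lemma predicts a_n -> 0.
a = 1.0
for n in range(200000):
    g = 1.0 / (n + 2)
    d = 1.0 / (n + 1)
    a = (1 - g) * a + g * d
print(a)  # small: a_n decays roughly like log(n)/n for these choices
```

The divergence of $\sum_n \gamma_n$ is essential: with a summable $\{\gamma_n\}$ the product $\prod (1 - \gamma_n)$ stays bounded away from zero and $a_n$ need not vanish.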

Lemma 2.4 (see [35]). *Let $\{x_n\}$ and $\{z_n\}$ be bounded sequences in a Banach space $E$, and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with
$$0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1.$$
Suppose
$$x_{n+1} = (1 - \beta_n) z_n + \beta_n x_n$$
for all $n \ge 0$, and
$$\limsup_{n \to \infty} \bigl(\|z_{n+1} - z_n\| - \|x_{n+1} - x_n\|\bigr) \le 0.$$
Then $\lim_{n \to \infty} \|z_n - x_n\| = 0$.*

#### 3. Main Results

In this section, we will state and prove our main results.

Theorem 3.1. *Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume that the solution set $S$ of (1.1) is nonempty. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $h : C \to H$ be a $\rho$-contraction with $\rho \in [0, 1)$. Let $\{x_n\}$ be a sequence generated by the following hybrid gradient projection algorithm:
$$x_{n+1} = P_C[\theta_n h(x_n) + (1 - \theta_n)(x_n - \gamma_n \nabla f(x_n))], \quad n \ge 0, \tag{3.1}$$
where the sequences $\{\theta_n\} \subset (0, 1)$ and $\{\gamma_n\}$ satisfy the following conditions:* (i) *$\theta_n \to 0$, $\sum_n \theta_n = \infty$, and $\sum_n |\theta_{n+1} - \theta_n| < \infty$;* (ii) *$0 < \liminf_n \gamma_n \le \limsup_n \gamma_n < 2/L$ and $\sum_n |\gamma_{n+1} - \gamma_n| < \infty$. Then the sequence $\{x_n\}$ generated by (3.1) converges to a minimizer $\tilde{x}$ of (1.1) which is the unique solution of the following variational inequality:
$$\tilde{x} \in S, \quad \langle (I - h)\tilde{x}, x - \tilde{x} \rangle \ge 0, \quad \forall x \in S. \tag{3.2}$$*

*Proof. *Take any $x^* \in S$. Then $x^*$ solves the minimization problem (1.1) if and only if $x^*$ solves the fixed-point equation $x^* = P_C(x^* - \gamma \nabla f(x^*))$ for any fixed positive number $\gamma$. So we have $x^* = P_C(x^* - \gamma_n \nabla f(x^*))$ for all $n$. It can be rewritten as
$$x^* = P_C[\theta_n x^* + (1 - \theta_n)(x^* - \gamma_n \nabla f(x^*))].$$
From condition (ii), there exist two constants $a$ and $b$ such that $0 < a \le \gamma_n \le b < 2/L$ for sufficiently large $n$; without loss of generality, we can assume that $0 < a \le \gamma_n \le b < 2/L$ for all $n$. Since $0 < \gamma_n < 2/L$, the mapping $I - \gamma_n \nabla f$ is averaged. Hence, $P_C(I - \gamma_n \nabla f)$ is nonexpansive.

From (3.1), we get
Thus, we deduce by induction that
This indicates that the sequence $\{x_n\}$ is bounded, and so are the sequences $\{h(x_n)\}$ and $\{\nabla f(x_n)\}$. Then, we can choose a constant $M > 0$ such that
Next, we estimate $\|x_{n+1} - x_n\|$. By (3.1), we have
Then, we can combine the last inequality and Lemma 2.3 to conclude that $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$.
Now we show that the weak limit set $\omega_w(x_n) \subset S$. Choose any $\hat{x} \in \omega_w(x_n)$. Then there must exist a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j} \rightharpoonup \hat{x}$. At the same time, the real number sequence $\{\gamma_{n_j}\}$ is bounded. Thus, there exists a subsequence of $\{\gamma_{n_j}\}$ which converges to some $\gamma \in [a, b]$. Without loss of generality, we may assume that $\gamma_{n_j} \to \gamma$. Next, we only need to show that $\hat{x} \in S$. First, from the above we have that $\|x_{n+1} - x_n\| \to 0$. Then, we have
Since $\gamma \in (0, 2/L)$, $P_C(I - \gamma \nabla f)$ is nonexpansive. It then follows from Lemma 2.1 (demiclosedness principle) that $\hat{x} \in \mathrm{Fix}(P_C(I - \gamma \nabla f))$. Hence, $\hat{x} \in S$ because of $\mathrm{Fix}(P_C(I - \gamma \nabla f)) = S$. So, $\omega_w(x_n) \subset S$.

Finally, we prove that $x_n \to \tilde{x}$, where $\tilde{x}$ is the unique solution of the VI (3.2). First, we show that $\limsup_{n \to \infty} \langle (I - h)\tilde{x}, \tilde{x} - x_n \rangle \le 0$. Observe that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ satisfying
$$\limsup_{n \to \infty} \langle (I - h)\tilde{x}, \tilde{x} - x_n \rangle = \lim_{j \to \infty} \langle (I - h)\tilde{x}, \tilde{x} - x_{n_j} \rangle.$$
Since $\{x_{n_j}\}$ is bounded, there exists a subsequence of $\{x_{n_j}\}$ which converges weakly to some $z \in \omega_w(x_n) \subset S$. Without loss of generality, we assume that $x_{n_j} \rightharpoonup z$. Then, we obtain
$$\limsup_{n \to \infty} \langle (I - h)\tilde{x}, \tilde{x} - x_n \rangle = \langle (I - h)\tilde{x}, \tilde{x} - z \rangle \le 0.$$
By using the property (ii) of $P_C$, we have
It follows that
From Lemma 2.3 and the last two estimates, we deduce that $x_n \to \tilde{x}$. This completes the proof.

From Theorem 3.1, we obtain immediately the following theorem.

Theorem 3.2. *Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $S \ne \emptyset$. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $\{x_n\}$ be a sequence generated by the following hybrid gradient projection algorithm:
$$x_{n+1} = P_C[(1 - \theta_n)(x_n - \gamma_n \nabla f(x_n))], \quad n \ge 0, \tag{3.14}$$
where the sequences $\{\theta_n\} \subset (0, 1)$ and $\{\gamma_n\}$ satisfy the following conditions:* (i) *$\theta_n \to 0$, $\sum_n \theta_n = \infty$, and $\sum_n |\theta_{n+1} - \theta_n| < \infty$;* (ii) *$0 < \liminf_n \gamma_n \le \limsup_n \gamma_n < 2/L$ and $\sum_n |\gamma_{n+1} - \gamma_n| < \infty$. Then the sequence $\{x_n\}$ generated by (3.14) converges to a minimizer of (1.1) which is the minimum norm element in $S$.*

*Proof. *In Theorem 3.1, we note that $h$ is a non-self-mapping from $C$ to the whole space $H$. Hence, if we choose $h = 0$ (the zero mapping), then algorithm (3.1) reduces to (3.14), and the sequence $\{x_n\}$ converges strongly to $\tilde{x} = P_S(0)$, which is obviously the minimum norm element in $S$. The proof is completed.
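The minimum-norm selection can be observed numerically. The following sketch (ours; the exact displayed form of the iteration is an assumption consistent with the surrounding text) applies a vanishing factor $(1 - \theta_n)$ inside the projection, on a problem whose solution set is a whole segment rather than a single point.

```python
import numpy as np

def min_norm_gpa(grad, project, x0, gamma, iters=20000):
    """Regularized gradient projection:
    x_{n+1} = P_C[(1 - theta_n)(x_n - gamma * grad f(x_n))], theta_n -> 0,
    sum theta_n = infinity. The vanishing shrink factor steers the iterates
    toward the minimum-norm element of the solution set."""
    x = np.asarray(x0, dtype=float)
    for n in range(iters):
        theta = 1.0 / (n + 2)
        x = project((1 - theta) * (x - gamma * grad(x)))
    return x

# f(x) = 0.5*(x1 + x2 - 2)^2 on the box [-3,3]^2: every point of the segment
# x1 + x2 = 2 inside the box is a minimizer; the minimum-norm one is (1, 1).
grad = lambda x: (x[0] + x[1] - 2.0) * np.ones(2)   # L = 2, so take gamma < 1
project = lambda x: np.clip(x, -3.0, 3.0)
x = min_norm_gpa(grad, project, [3.0, -1.0], gamma=0.5)
# x ends up close to (1, 1), the minimum-norm minimizer
```

Plain gradient projection (without the $(1-\theta_n)$ factor) would stop at whichever point of the segment it first reaches; the regularization is what singles out the minimum-norm solution.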

Next, we suggest another simple algorithm which drops the assumption $\sum_n |\theta_{n+1} - \theta_n| < \infty$.

Theorem 3.3. *Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $S \ne \emptyset$. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $\{x_n\}$ be a sequence generated by the following hybrid gradient projection algorithm:
$$x_{n+1} = P_C[(1 - \theta_n)(x_n - \gamma \nabla f(x_n))], \quad n \ge 0, \tag{3.15}$$
where $\gamma \in (0, 2/L)$ is a constant and the sequence $\{\theta_n\} \subset (0, 1)$ satisfies the following conditions:* (1) *$\lim_{n \to \infty} \theta_n = 0$;* (2) *$\sum_{n=0}^{\infty} \theta_n = \infty$. Then the sequence $\{x_n\}$ generated by (3.15) converges to a minimizer of (1.1) which is the minimum norm element in $S$.*

*Proof. **Claim* 1. The sequence $\{x_n\}$ is bounded.

Take $x^* \in S$. Then we have
By induction,
*Claim* 2. $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$ and $\lim_{n \to \infty} \|x_n - P_C(I - \gamma \nabla f)x_n\| = 0$.

By an argument similar to that in [18, page 366], we can write
$$P_C(I - \gamma \nabla f) = (1 - s)I + sT,$$
where $T$ is nonexpansive and $s = \frac{2 + \gamma L}{4} \in (1/2, 1)$. Then we can rewrite (3.15) as
where
It follows that
So,
This together with Lemma 2.4 implies that
Thus,
Note that
Therefore,
Now, repeating the proof of Theorem 3.1, we conclude that $\omega_w(x_n) \subset S$.

*Claim* 3. $\limsup_{n \to \infty} \langle \tilde{x}, \tilde{x} - x_n \rangle \le 0$, where $\tilde{x} = P_S(0)$ is the minimum norm element in $S$.

Observe that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ satisfying
Since $\{x_{n_j}\}$ is bounded, there exists a subsequence of $\{x_{n_j}\}$ which converges weakly to some $z \in \omega_w(x_n) \subset S$. Without loss of generality, we assume that $x_{n_j} \rightharpoonup z$. Then, we obtain
$$\limsup_{n \to \infty} \langle \tilde{x}, \tilde{x} - x_n \rangle = \langle \tilde{x}, \tilde{x} - z \rangle \le 0.$$
*Claim* 4. $x_n \to \tilde{x}$. From (3.15), we have
Then we can apply Lemma 2.3 to the last inequality to conclude that $x_n \to \tilde{x}$. The proof is completed.

Next, we suggest another algorithm in which additional projections are applied to the gradient projection algorithm. We show that this algorithm has strong convergence.

Theorem 3.4. *Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $S \ne \emptyset$. Assume that the gradient $\nabla f$ is $L$-Lipschitzian. Let $x_0 \in C$. For $C_1 = C$ and $x_1 = P_{C_1}(x_0)$, define a sequence $\{x_n\}$ of $C$ as follows:
$$\begin{aligned} y_n &= P_C(x_n - \gamma_n \nabla f(x_n)), \\ C_{n+1} &= \{z \in C_n : \|y_n - z\| \le \|x_n - z\|\}, \\ x_{n+1} &= P_{C_{n+1}}(x_0), \quad n \ge 1, \end{aligned} \tag{3.30}$$
where the sequence $\{\gamma_n\}$ satisfies the condition $0 < \liminf_n \gamma_n \le \limsup_n \gamma_n < 2/L$. Then the sequence $\{x_n\}$ generated by (3.30) converges to $P_S(x_0)$.*

*Proof. *It is obvious that each $C_n$ is convex. For any $x^* \in S$, we have
This implies that $x^* \in C_n$. Hence, $S \subset C_n$ for all $n$. From $x_n = P_{C_n}(x_0)$, we have
Since $S \subset C_n$, we have
So, for $\tilde{x} = P_S(x_0) \in C_n$, we have
Hence,
This implies that $\{x_n\}$ is bounded.

From $x_n = P_{C_n}(x_0)$ and $x_{n+1} = P_{C_{n+1}}(x_0) \in C_{n+1} \subset C_n$, we have
Hence,
and therefore
This implies that $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$.

From (3.36) and (3.39), we obtain
By the fact that $x_{n+1} \in C_{n+1}$, we get
Therefore, from (3.40) and (3.41), we deduce
Now (3.42) and Lemma 2.1 guarantee that every weak limit point of $\{x_n\}$ is a fixed point of the nonexpansive mapping $P_C(I - \gamma \nabla f)$; that is, $\omega_w(x_n) \subset S$. At the same time, if we choose $u = x_0$ in (3.35), we have
$$\|x_n - x_0\| \le \|x_0 - P_S(x_0)\|.$$
This fact and Lemma 2.2 ensure the strong convergence of $\{x_n\}$ to $P_S(x_0)$. This completes the proof.

Now we give some remarks on our variant gradient projection methods.

*Remark 3.5. *Under the same control parameters, the gradient projection methods (3.1) and (1.6) are both strongly convergent. However, (3.1) has an advantage over (1.6) in that $h$ may be a non-self-mapping.

*Remark 3.6. *The gradient projection method (3.14) is similar to (1.5), using $(1 - \theta_n)(x_n - \gamma_n \nabla f(x_n))$ instead of $x_n - \gamma_n \nabla f(x_n)$. But (3.14) has strong convergence; in particular, (3.14) converges strongly to the minimum norm element of $S$.

*Remark 3.7. *The advantage of the gradient projection method (3.15) is that it has strong convergence under weaker assumptions on the parameter sequence $\{\theta_n\}$.

*Remark 3.8. *The gradient projection method (3.30) is simpler than (1.7).

#### Acknowledgments

Y. Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105. Y.-C. Liou was supported in part by NSC 100-2221-E-230-012. C.-F. Wen was supported in part by NSC 100-2115-M-037-001.

#### References

- E. M. Gafni and D. P. Bertsekas, “Two-metric projection methods for constrained optimization,” *SIAM Journal on Control and Optimization*, vol. 22, no. 6, pp. 936–964, 1984.
- P. H. Calamai and J. J. Moré, “Projected gradient methods for linearly constrained problems,” *Mathematical Programming*, vol. 39, no. 1, pp. 93–116, 1987.
- E. S. Levitin and B. T. Polyak, “Constrained minimization methods,” *USSR Computational Mathematics and Mathematical Physics*, vol. 6, no. 5, pp. 1–50, 1966.
- B. T. Polyak, *Introduction to Optimization*, Optimization Software, New York, NY, USA, 1987.
- A. Ruszczyński, *Nonlinear Optimization*, Princeton University Press, Princeton, NJ, USA, 2006.
- C. Wang and N. Xiu, “Convergence of the gradient projection method for generalized convex minimization,” *Computational Optimization and Applications*, vol. 16, no. 2, pp. 111–120, 2000.
- N. Xiu, C. Wang, and J. Zhang, “Convergence properties of projection and contraction methods for variational inequality problems,” *Applied Mathematics and Optimization*, vol. 43, no. 2, pp. 147–168, 2001.
- N. Xiu, C. Wang, and L. Kong, “A note on the gradient projection method with exact stepsize rule,” *Journal of Computational Mathematics*, vol. 25, no. 2, pp. 221–230, 2007.
- M. Su and H. K. Xu, “Remarks on the gradient-projection algorithm,” *Journal of Nonlinear Analysis and Optimization*, vol. 1, pp. 35–43, 2010.
- Y. Censor and T. Elfving, “A multiprojection algorithm using Bregman projections in a product space,” *Numerical Algorithms*, vol. 8, no. 2–4, pp. 221–239, 1994.
- C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” *Inverse Problems*, vol. 20, no. 1, pp. 103–120, 2004.
- Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, “The multiple-sets split feasibility problem and its applications for inverse problems,” *Inverse Problems*, vol. 21, no. 6, pp. 2071–2084, 2005.
- Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, “A unified approach for inversion problems in intensity-modulated radiation therapy,” *Physics in Medicine and Biology*, vol. 51, no. 10, pp. 2353–2365, 2006.
- H.-K. Xu, “A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem,” *Inverse Problems*, vol. 22, no. 6, pp. 2021–2034, 2006.
- H.-K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” *Inverse Problems*, vol. 26, no. 10, Article ID 105018, 2010.
- G. Lopez, V. Martin, and H.-K. Xu, “Perturbation techniques for nonexpansive mappings with applications,” *Nonlinear Analysis: Real World Applications*, vol. 10, no. 4, pp. 2369–2383, 2009.
- G. Lopez, V. Martin, and H.-K. Xu, “Iterative algorithms for the multiple-sets split feasibility problem,” in *Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems*, Y. Censor, M. Jiang, and G. Wang, Eds., pp. 243–279, Medical Physics Publishing, Madison, Wis, USA, 2009.
- H.-K. Xu, “Averaged mappings and the gradient-projection algorithm,” *Journal of Optimization Theory and Applications*, vol. 150, no. 2, pp. 360–378, 2011.
- M. V. Solodov and B. F. Svaiter, “A new projection method for variational inequality problems,” *SIAM Journal on Control and Optimization*, vol. 37, no. 3, pp. 765–776, 1999.
- K. Goebel and W. A. Kirk, *Topics in Metric Fixed Point Theory*, vol. 28 of *Cambridge Studies in Advanced Mathematics*, Cambridge University Press, Cambridge, UK, 1990.
- C. Martinez-Yanes and H.-K. Xu, “Strong convergence of the CQ method for fixed point iteration processes,” *Nonlinear Analysis: Theory, Methods & Applications*, vol. 64, no. 11, pp. 2400–2411, 2006.
- H.-K. Xu, “Iterative algorithms for nonlinear operators,” *Journal of the London Mathematical Society*, vol. 66, no. 1, pp. 240–256, 2002.
- T. Suzuki, “Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces,” *Fixed Point Theory and Applications*, vol. 2005, no. 1, pp. 103–123, 2005.
- S. Reich and H.-K. Xu, “An iterative approach to a constrained least squares problem,” *Abstract and Applied Analysis*, no. 8, pp. 503–512, 2003.
- A. Sabharwal and L. C. Potter, “Convexly constrained linear inverse problems: iterative least-squares and regularization,” *IEEE Transactions on Signal Processing*, vol. 46, no. 9, pp. 2345–2352, 1998.
- H. K. Xu, “An iterative approach to quadratic optimization,” *Journal of Optimization Theory and Applications*, vol. 116, no. 3, pp. 659–678, 2003.
- F. Cianciaruso, G. Marino, L. Muglia, and Y. Yao, “A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem,” *Fixed Point Theory and Applications*, vol. 2010, Article ID 383740, 19 pages, 2010.
- F. Cianciaruso, G. Marino, L. Muglia, and Y. Yao, “On a two-step algorithm for hierarchical fixed point problems and variational inequalities,” *Journal of Inequalities and Applications*, vol. 2009, Article ID 208692, 13 pages, 2009.
- Y. Yao, Y. J. Cho, and Y.-C. Liou, “Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems,” *European Journal of Operational Research*, vol. 212, no. 2, pp. 242–250, 2011.
- Y. Yao, Y.-C. Liou, and S. M. Kang, “Two-step projection methods for a system of variational inequality problems in Banach spaces,” *Journal of Global Optimization*. In press.
- Y. Yao, R. Chen, and Y.-C. Liou, “A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem,” *Mathematical and Computer Modelling*, vol. 55, pp. 1506–1515, 2012.
- H. H. Bauschke and J. M. Borwein, “On projection algorithms for solving convex feasibility problems,” *SIAM Review*, vol. 38, no. 3, pp. 367–426, 1996.
- K. C. Kiwiel and B. Łopuch, “Surrogate projection methods for finding fixed points of firmly nonexpansive mappings,” *SIAM Journal on Optimization*, vol. 7, no. 4, pp. 1084–1102, 1997.
- K. C. Kiwiel, “The efficiency of subgradient projection methods for convex optimization. I. General level methods,” *SIAM Journal on Control and Optimization*, vol. 34, no. 2, pp. 660–676, 1996.
- K. C. Kiwiel, “The efficiency of subgradient projection methods for convex optimization. II. Implementations and extensions,” *SIAM Journal on Control and Optimization*, vol. 34, no. 2, pp. 677–697, 1996.