
Abstract and Applied Analysis

Volume 2014 (2014), Article ID 283215, 7 pages

http://dx.doi.org/10.1155/2014/283215

## On the Strong Convergence of a Sufficient Descent Polak-Ribière-Polyak Conjugate Gradient Method

^{1}School of Mathematics and Statistics, Zaozhuang University, Shandong 277160, China
^{2}School of Mathematics and Statistics, Zhejiang University of Finance and Economics, Hangzhou 310018, China

Received 2 December 2013; Revised 9 January 2014; Accepted 10 January 2014; Published 23 February 2014

Academic Editor: Sergei V. Pereverzyev

Copyright © 2014 Min Sun and Jing Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Recently, Zhang et al. proposed a sufficient descent Polak-Ribière-Polyak (SDPRP) conjugate gradient method for large-scale unconstrained optimization problems and proved its global convergence, in the sense that $\liminf_{k\to\infty}\|g_k\|=0$, when an Armijo-type line search is used. In this paper, motivated by the line searches proposed by Shi et al. and Zhang et al., we propose two new Armijo-type line searches and show that the SDPRP method has strong convergence, in the sense that $\lim_{k\to\infty}\|g_k\|=0$, under the two new line searches. Numerical results are reported to show the efficiency of SDPRP with the new Armijo-type line searches in practical computation.

#### 1. Introduction

In this paper, we are concerned with the following unconstrained minimization problem:
$$\min_{x\in\mathbb{R}^n} f(x), \tag{1}$$
where $f:\mathbb{R}^n\to\mathbb{R}$ is a smooth function whose gradient is denoted by $g(x)=\nabla f(x)$. The problem is called a large-scale minimization problem when its dimension $n$ is very large. For solving large-scale minimization problems, matrix-free methods are quite efficient. Among such methods, the conjugate gradient method is famous for its excellent numerical performance in practical computation. Much progress has been achieved in the study of the global convergence of the various conjugate gradient methods, such as the Polak-Ribière-Polyak (PRP) [1, 2], Fletcher-Reeves (FR) [3], Hestenes-Stiefel (HS) [4, 5], and Dai-Yuan (DY) [6] conjugate gradient methods, among others.

Recently, Zhang et al. [7] presented a sufficient descent Polak-Ribière-Polyak (SDPRP) conjugate gradient method for solving the large-scale problem (1), whose most important property is that its generated direction is always a sufficient descent direction for the objective function. Moreover, this property is independent of the line search used, and the method reduces to the classical PRP method when the exact line search is used. The iterative process of the SDPRP method is given by
$$x_{k+1} = x_k + \alpha_k d_k, \tag{2}$$
where $x_k$ is the current iterate, $\alpha_k$ is called the stepsize, which can be obtained by some line search technique, such as the Armijo line search, the Goldstein line search, and the (strong) Wolfe line search, and $d_k$ is the search direction determined by
$$d_k = \begin{cases} -g_k, & k = 0,\\ -g_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \theta_k y_{k-1}, & k \ge 1, \end{cases} \tag{3}$$
with
$$\beta_k^{\mathrm{PRP}} = \frac{g_k^\top y_{k-1}}{\|g_{k-1}\|^2}, \qquad \theta_k = \frac{g_k^\top d_{k-1}}{\|g_{k-1}\|^2}, \tag{4}$$
where $g_k = g(x_k)$ and $y_{k-1} = g_k - g_{k-1}$. It is easy to deduce from (3) and (4) that
$$g_k^\top d_k = -\|g_k\|^2, \tag{5}$$
which indicates that $d_k$ is a sufficient descent direction of $f$ at the current iterate $x_k$ if $g_k \neq 0$, that is, if $x_k$ is not a stationary point of the objective function $f$. It has been proved that the SDPRP method has global convergence under an Armijo-type line search [7] in the sense that
$$\liminf_{k\to\infty} \|g_k\| = 0, \tag{6}$$
which means that at least one cluster point of the sequence $\{x_k\}$ is a stationary point of $f$ if the sequence is bounded.
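To make the update concrete, the three-term direction above can be sketched in a few lines of code. This is a hedged illustration: the function name `sdprp_direction` and the coefficient formulas follow the commonly cited three-term form of Zhang et al.'s method and are assumptions of this sketch, not a verbatim transcription of the paper's equations.

```python
import numpy as np

def sdprp_direction(g_k, g_prev, d_prev):
    """Sketch of a three-term sufficient-descent PRP direction (assumed form).

    d_k = -g_k                                        for k = 0,
    d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1} for k >= 1,
    with y_{k-1} = g_k - g_{k-1},
         beta_k  = g_k . y_{k-1} / ||g_{k-1}||^2,
         theta_k = g_k . d_{k-1} / ||g_{k-1}||^2.
    """
    if g_prev is None:                  # first iteration: steepest descent
        return -g_k
    y = g_k - g_prev
    denom = np.dot(g_prev, g_prev)      # ||g_{k-1}||^2
    beta = np.dot(g_k, y) / denom
    theta = np.dot(g_k, d_prev) / denom
    return -g_k + beta * d_prev - theta * y
```

Whatever line search is used, this direction satisfies $g_k^\top d_k = -\|g_k\|^2$, since the $\beta_k$ and $\theta_k$ terms cancel in the inner product with $g_k$.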

In another recent paper, Shi and Shen [8] showed that the classical PRP method in [1] has strong convergence and linear convergence rate under a customized Armijo-type line search, which is somewhat complicated. The new Armijo-type line search ensures that the search direction generated by the classical PRP method possesses the sufficient descent property, which is helpful to prove the global convergence.

In this paper, motivated by the Armijo-type line search in [8], we first propose a similar but simpler line search, which ensures that the SDPRP method is strongly globally convergent in the sense that
$$\lim_{k\to\infty} \|g_k\| = 0,$$
that is, any cluster point of the sequence $\{x_k\}$ is a stationary point of the objective function $f$. Noting that this new line search needs an estimate of the Lipschitz constant, which is not easy to obtain even for a linear function, we present another Armijo-type line search, motivated by the line search in [7]. This second line search can also guarantee the global convergence of the SDPRP method in the above sense.

The remainder of the paper is organized as follows. In Section 2 we introduce the two new Armijo-type line searches and present the strongly convergent SDPRP method. The global convergence is established under the above two new Armijo-type line searches in Section 3. Some numerical results are presented in Section 4, and in the last section, we conclude the paper with some remarks.

#### 2. Strongly Convergent SDPRP Method

First, we give the following basic assumptions on the objective function $f$.

*Assumption 1. *
(H1) The objective function $f$ has a lower bound on the level set $\Omega_0 = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$, where $x_0$ is the starting point.
(H2) In some neighborhood of $\Omega_0$, the gradient $g$ is Lipschitz continuous on an open convex set $B$ that contains $\Omega_0$; that is, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L\|x - y\|, \quad \forall x, y \in B.$$
(H3) The level set $\Omega_0$ is bounded.

Although $g$ is Lipschitz continuous, the Lipschitz constant $L$ is usually unknown in practice, even for a linear function $g$. Therefore, we need to estimate $L$. Here, we adopt one of the three estimating approaches proposed in [9], which computes an estimate $L_k$ from the most recent iterates and gradients.

*Armijo-Type Line Search I*. Set the line search parameters and the initial stepsize, where $L_k$ is determined by (9). Let $\alpha_k$ be the largest candidate in the geometric trial sequence such that inequality (10) holds.
*Armijo-Type Line Search II*. Set the line search parameters. Let $\alpha_k$ be the largest candidate in the geometric trial sequence such that inequality (11) holds.
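Both line searches share the same backtracking skeleton: trial stepsizes shrink geometrically until an Armijo-type acceptance test holds. The sketch below uses the classic Armijo sufficient-decrease condition as a stand-in, since inequalities (10) and (11) themselves are not reproduced here; all parameter names and default values are illustrative assumptions.

```python
import numpy as np

def armijo_backtracking(f, g_k, x_k, d_k, s=1.0, rho=0.5, sigma=1e-4,
                        max_backtracks=50):
    """Generic Armijo backtracking (stand-in for line searches I and II).

    Tries alpha = s * rho^i for i = 0, 1, 2, ... and accepts the first
    alpha satisfying the sufficient-decrease condition
        f(x_k + alpha * d_k) <= f(x_k) + sigma * alpha * (g_k . d_k).
    """
    f_k = f(x_k)
    slope = np.dot(g_k, d_k)            # negative for a descent direction
    alpha = s
    for _ in range(max_backtracks):
        if f(x_k + alpha * d_k) <= f_k + sigma * alpha * slope:
            return alpha
        alpha *= rho
    return alpha                        # fallback after max_backtracks trials
```

The paper's conditions (10) and (11) replace the acceptance inequality inside the loop; the geometric trial sequence itself is unchanged.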

Now we begin to describe the strongly convergent SDPRP method.

*Algorithm 2* [strongly convergent SDPRP method]

*Step **0.* Given an initial point $x_0 \in \mathbb{R}^n$ and the required line search parameters, set $k := 0$.

*Step **1.* If $\|g_k\| = 0$, then stop; otherwise go to Step 2.

*Step **2.* Compute the descent direction $d_k$ by (3) and (4). Determine the stepsize $\alpha_k$ by the Armijo-type line search (10) or (11).

*Step **3.* Set $x_{k+1} = x_k + \alpha_k d_k$ and $k := k + 1$; go to Step 1.
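Putting Steps 0-3 together, a minimal driver loop might look as follows. The stopping tolerance, the classic Armijo backtracking used in place of line searches I/II, and all parameter values are assumptions of this sketch, not the paper's prescriptions.

```python
import numpy as np

def sdprp(f, grad, x0, tol=1e-5, max_iter=10000):
    """Sketch of Algorithm 2 (strongly convergent SDPRP method)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    g_prev, d = None, None
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:    # Step 1: stopping test
            break
        # Step 2: three-term sufficient-descent PRP direction (assumed form)
        if d is None:
            d = -g
        else:
            y = g - g_prev
            denom = np.dot(g_prev, g_prev)
            beta = np.dot(g, y) / denom
            theta = np.dot(g, d) / denom
            d = -g + beta * d - theta * y
        # Stand-in Armijo backtracking for line search I or II
        alpha, f_x, slope = 1.0, f(x), np.dot(g, d)
        while f(x + alpha * d) > f_x + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        # Step 3: update the iterate and the gradient
        g_prev, x = g, x + alpha * d
        g = grad(x)
    return x
```

On a simple convex quadratic this loop terminates at the minimizer; the real method's behavior depends on the actual line searches (10) and (11).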

Lemma 3. *Assume that (H1) and (H2) hold; then there exist constants $c_1$ and $c_2$ such that, for any $k$, one has*
**
*where $L_k$ is defined by (9).*

*Proof. *See [9, Lemma ].

Lemma 4. *Assume that (H1) and (H2) hold. If $g_k \neq 0$, then the new Armijo-type line search I is well defined for the index $k$.*

*Proof. *The proof is easy; for completeness, we give it here by contradiction. Suppose that the conclusion does not hold; then for the index $k$, inequality (10) fails for every nonnegative integer $i$; that is,
Thus,
Letting $i \to \infty$, by the continuity of $f$ and $g$, we obtain
This and (5) yield
which contradicts $g_k \neq 0$. The proof is completed.

Lemma 5. *Assume that (H2) and (H3) hold. If $g_k \neq 0$, then the new Armijo-type line search II is well defined for the index $k$.*

*Proof. *The lemma is also proved by contradiction. Suppose that the conclusion does not hold; then for the index $k$, inequality (11) fails for every nonnegative integer $i$; that is,
That is,
Letting $i \to \infty$, by the continuity of $f$ and $g$, we obtain
that is,
which contradicts $g_k \neq 0$. The proof is completed.

#### 3. Strong Global Convergence

Throughout this section, we assume that $g_k \neq 0$ for all $k$; otherwise, a stationary point of the objective function has been found.

##### 3.1. Global Convergence of SDPRP Method with the Line Search I

We first prove the global convergence of the SDPRP method with the Armijo-type line search I.

Lemma 6. *For all $k$, one has*
**
*where the constant is defined in Lemma 3.*

*Proof. *If $k = 0$, then
If $k \ge 1$, then from (3), (4), and (H2), we can get that
which, together with the triangle inequality, implies that
This completes the proof.

The following lemma shows that the stepsize sequence generated by the Armijo-type line search I is bounded from below.

Lemma 7. *For all $k$, there exists a constant $\tau > 0$ such that*
**
*in which $\alpha_k$ is generated by the Armijo-type line search I.*

*Proof. *We divide the proof into two cases. For the first case, by (12) and (21), we get
For the second case, the previous trial stepsize does not satisfy (10); that is,
Using the mean value theorem in the above inequality, we obtain an intermediate point such that
This inequality, together with (H2) and (21), shows that
Therefore, we have
Obviously, (26) and (30) show that (25) holds with
This completes the proof.

We are now ready to establish the strong convergence of SDPRP method using the Armijo-type line search I.

Theorem 8. *Suppose that (H1) and (H2) hold. Then
$$\lim_{k\to\infty} \|g_k\| = 0.$$*

*Proof. *Since the generated sequence remains in the level set and the objective function $f$ is bounded below there, by (10) and (25), we have
Thus
This completes the proof.

##### 3.2. Global Convergence of SDPRP Method with the Line Search II

Next, we prove the strong global convergence of the SDPRP method with the Armijo-type line search II. From the line search II, we have, for all $k$,
This together with (5) implies that
In addition, (H3) implies that there is a constant such that

Lemma 9. *Suppose that (H2) and (H3) hold. Then for all $k$, one has*

*Proof. *If the initial trial stepsize is not accepted, then the previous trial stepsize does not satisfy (11); that is,
From the mean value theorem and (H2), there exists a constant such that
which together with (36) shows that (35) holds. This completes the proof.

We are now ready to establish the strong convergence of the SDPRP method using the Armijo-type line search II. The proof is motivated by the proof of Theorem 2.2 in [10].

Theorem 10. *Suppose that (H2) and (H3) hold. Then
$$\lim_{k\to\infty} \|g_k\| = 0.$$*

*Proof. *For the sake of contradiction, we suppose that the conclusion does not hold. Then there exist a constant $\epsilon > 0$ and an infinite index set $K$ such that
Moreover, this fact, (35), and (H2) imply that
This and (42) indicate that there exists a positive constant such that, for sufficiently large $k$, we have
Then by (36) and (44), we can get
By (3), (4), and (H2), for all $k$, we have
From (35), for all sufficiently large $k$, there exists a constant such that
Therefore, for all sufficiently large $k$, this and (37) imply that
Thus, from (38) and (48), we get
which contradicts (45). The proof is completed.

#### 4. Numerical Results

In this section, we present some numerical results to compare the performance of the SDPRP method with the two new Armijo-type line searches I and II against the three-term PRP (TTPRP) method in [7]:
(i) SDPRPI: the SDPRP method with the line search (10);
(ii) SDPRPII: the SDPRP method with the line search (11);
(iii) TTPRP: the three-term PRP method with the Armijo-type line search from [7], in which $\alpha_k$ is the largest candidate in the geometric trial sequence satisfying the corresponding acceptance inequality.

All codes were written in Matlab 7.1 and run on a portable computer. We stopped the iteration when the number of iterations exceeded 10000 or the gradient norm fell below the prescribed tolerance. Tables 1 and 2 list the numerical results for solving test problems 1 to 30 in [11] with different dimensions $n$. The numerical results are listed in the form NI/NF/CPU, where NI, NF, and CPU denote the number of iterations, the number of function evaluations, and the CPU time in seconds, respectively.

Figures 1 and 2 show the performance of these methods relative to the number of function evaluations and the CPU time, respectively, evaluated using the performance profiles of Dolan and Moré [12]. That is, for each method, we plot the fraction of problems for which the method is within a given factor of the best time. The left side of each figure gives the percentage of the test problems for which a method is fastest, while the right side gives the percentage of the test problems that are successfully solved by each method. The top curve corresponds to the method that solved the most problems within a factor of the best time. Figures 1 and 2 show that the SDPRPI method performs a little better than the TTPRP method and clearly better than the SDPRPII method: it solves the largest fraction of the problems with the smallest number of function evaluations and CPU time. The performance of the SDPRPII method is not as good, and in future work we will further study the corresponding line search. Of course, more numerical experiments should be carried out to test the proposed methods.
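As an aside, the Dolan-Moré profile used in Figures 1 and 2 is easy to reproduce from a table of per-problem costs; the function below is an illustrative sketch (names are ours), with `np.inf` marking solver failures.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile rho_s(tau).

    T is an (n_problems, n_solvers) array of costs (CPU time or function
    evaluations), with np.inf for failures. For each solver s and each
    tau, returns the fraction of problems whose performance ratio
    T[p, s] / min_s T[p, s] is at most tau.
    """
    T = np.asarray(T, dtype=float)
    ratios = T / T.min(axis=1, keepdims=True)   # per-problem ratio to best
    return np.array([[np.mean(ratios[:, s] <= tau) for s in range(T.shape[1])]
                     for tau in taus])
```

Plotting each column of the returned array against `taus` reproduces the staircase curves of a standard performance-profile figure.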

#### 5. Conclusion

In this paper, we have proposed two new Armijo-type line searches and proved that the sufficient descent PRP method proposed by Zhang et al. is strongly globally convergent under the two new line searches. Numerical results show that the SDPRP method with the proposed line searches is efficient for the test problems.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors would like to thank the referees for their many valuable suggestions and comments, which greatly improved this paper. This work is supported by the Natural Science Foundation of Shandong Province (ZR2012AL08).

#### References


- E. Polak and G. Ribière, “Note sur la convergence des méthodes de directions conjuguées,” *Revue Française d'Informatique et de Recherche Opérationnelle*, vol. 16, pp. 35–43, 1969.
- B. T. Polyak, “The conjugate gradient method in extremal problems,” *USSR Computational Mathematics and Mathematical Physics*, vol. 9, no. 4, pp. 94–112, 1969.
- R. Fletcher and C. Reeves, “Function minimization by conjugate gradients,” *The Computer Journal*, vol. 7, pp. 149–154, 1964.
- M. R. Hestenes and E. L. Stiefel, “Methods of conjugate gradients for solving linear systems,” *Journal of Research of the National Bureau of Standards*, vol. 49, pp. 409–432, 1952.
- L. Zhang, W. J. Zhou, and D. H. Li, “Some descent three-term conjugate gradient methods and their global convergence,” *Optimization Methods and Software*, vol. 22, no. 4, pp. 697–711, 2007.
- Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” *SIAM Journal on Optimization*, vol. 10, no. 1, pp. 177–182, 1999.
- L. Zhang, W. Zhou, and D. H. Li, “A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence,” *IMA Journal of Numerical Analysis*, vol. 26, no. 4, pp. 629–640, 2006.
- Z. J. Shi and J. Shen, “Convergence of the Polak-Ribière-Polyak conjugate gradient method,” *Nonlinear Analysis: Theory, Methods and Applications*, vol. 66, no. 6, pp. 1428–1441, 2007.
- Z.-J. Shi and J. Shen, “Convergence of Liu-Storey conjugate gradient method,” *European Journal of Operational Research*, vol. 182, no. 2, pp. 552–560, 2007.
- W. J. Zhou and Y. H. Zhou, “On the strong convergence of a modified Hestenes-Stiefel method for nonconvex optimization,” *Journal of Industrial and Management Optimization*, vol. 9, no. 4, pp. 893–899, 2013.
- A. Neculai, http://camo.ici.ro/neculai/UNO/UNO.FOR.
- E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” *Mathematical Programming B*, vol. 91, no. 2, pp. 201–213, 2002.
