Abstract and Applied Analysis

Volume 2014 (2014), Article ID 921364, 9 pages

http://dx.doi.org/10.1155/2014/921364
Research Article

Extension of Modified Polak-Ribière-Polyak Conjugate Gradient Method to Linear Equality Constraints Minimization Problems

Zhifeng Dai1,2

1College of Business, Central South University, Hunan 410083, China

2College of Mathematics and Computational Science, Changsha University of Science and Technology, Hunan 410114, China

Received 19 March 2014; Revised 1 July 2014; Accepted 21 July 2014; Published 6 August 2014

Academic Editor: Daniele Bertaccini

Copyright © 2014 Zhifeng Dai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Combining the Rosen gradient projection method with the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method, we propose a two-term Polak-Ribière-Polyak (PRP) conjugate gradient projection method for solving linear equality constrained optimization problems. The proposed method possesses some attractive properties: (1) the search direction generated by the proposed method is a feasible descent direction, so the generated iterates are feasible points; (2) the sequence of function values is decreasing. Under some mild conditions, we show that the method is globally convergent with an Armijo-type line search. Preliminary numerical results show that the proposed method is promising.

1. Introduction

In this paper, we consider solving the following linear equality constrained optimization problem:

$$ \min\ f(x), \quad \text{s.t.} \quad Ax = b, \tag{1} $$

where $f:\mathbb{R}^n \to \mathbb{R}$ is a smooth function and $A \in \mathbb{R}^{m \times n}$ is a matrix of rank $m$ ($m \le n$). In this paper, the feasible region and the feasible direction set are defined, respectively, as follows:

$$ X = \{x \in \mathbb{R}^n : Ax = b\}, \qquad D = \{d \in \mathbb{R}^n : Ad = 0\}. \tag{2} $$

Taking the negative gradient as a search direction ($d_k = -g_k$, where $g_k = \nabla f(x_k)$) is a natural way of solving unconstrained optimization problems. However, this approach does not work for constrained problems, since the negative gradient may not be a feasible direction. A basic technique to overcome this difficulty was initiated by Rosen [1] in 1960. To obtain a feasible search direction, Rosen projected the negative gradient onto the feasible direction set; that is,

$$ d_k = -P g_k, \qquad P = I - A^T (A A^T)^{-1} A, \tag{3} $$

where $P$ is the orthogonal projector onto the null space of $A$. The convergence of Rosen's gradient projection method was proved by Du and Zhang; see [2–4].
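
To make the projection concrete, the following Matlab fragment computes the projected steepest-descent direction $-P g$ of (3) without forming $P$ explicitly. It is a minimal sketch; the function name and the use of the backslash solver are our own choices, not code from [1]:

    function d = rosen_direction(A, g)
    % Projected steepest-descent direction d = -P*g, P = I - A'*inv(A*A')*A.
    % P*g is obtained as g - A'*v with (A*A')*v = A*g, so P is never formed.
    v = (A * A') \ (A * g);
    d = -(g - A' * v);
    end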

In fact, Rosen's gradient projection method is an extension of the steepest-descent method. It is well known that the drawback of the steepest-descent method is that it easily suffers from zig-zagging, especially when the graph of $f$ has an "elongated" form. To overcome the zig-zagging, we use the conjugate gradient method to modify the projection direction.

It is well known that nonlinear conjugate gradient methods such as the Polak-Ribière-Polyak (PRP) method [5, 6] are very efficient for large-scale unconstrained optimization problems due to their simplicity and low storage requirements. However, the PRP method does not necessarily satisfy the descent condition $g_k^T d_k < 0$ for all $k$.

Recently, Cheng [7] proposed a two-term modified PRP method (called TMPRP), in which the direction is given by

$$ d_k = \begin{cases} -g_k, & k = 0, \\ -\left(1 + \beta_k^{PRP} \dfrac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k + \beta_k^{PRP} d_{k-1}, & k \ge 1, \end{cases} \tag{4} $$

where $\beta_k^{PRP} = g_k^T (g_k - g_{k-1}) / \|g_{k-1}\|^2$. An attractive property of the TMPRP method is that the direction generated by the method satisfies

$$ g_k^T d_k = -\|g_k\|^2, \tag{5} $$

which is independent of any line search. The numerical results presented in Cheng [7] show some potential advantage of the TMPRP method. In fact, we can easily rewrite the above direction (4) in a three-term form:

$$ d_k = -g_k + \beta_k^{PRP} d_{k-1} - \theta_k g_k, \qquad \theta_k = \beta_k^{PRP} \frac{g_k^T d_{k-1}}{\|g_k\|^2}. \tag{6} $$

In the past few years, researchers have paid increasing attention to conjugate gradient methods and their applications. Among others, we mention here the following works, for example, [8–29].
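
As an illustration, the two-term direction (4) can be computed as follows. This is a sketch with our own variable names, not Cheng's code:

    function d = tmprp_direction(g, g_prev, d_prev)
    % Two-term PRP direction (4) of Cheng [7]; guarantees g'*d = -norm(g)^2.
    if isempty(d_prev)
        d = -g;                                        % d_0 = -g_0
    else
        beta  = (g' * (g - g_prev)) / norm(g_prev)^2;  % PRP parameter
        theta = beta * (g' * d_prev) / norm(g)^2;      % third term in (6)
        d = -(1 + theta) * g + beta * d_prev;
    end
    end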

In the past few years, some researchers have also paid attention to equality constrained problems. Martínez et al. [30] proposed a spectral gradient method for linearly constrained optimization, in which the search direction $d_k$ is the unique solution of

$$ \min_d\ g_k^T d + \frac{1}{2} d^T B_k d, \quad \text{s.t.} \quad A d = 0. \tag{7} $$

In this algorithm, $B_k$ can be computed by a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation, and the whole process is combined with a nonmonotone line search strategy.
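
For illustration, a subproblem of the form (7) can be solved through its KKT system. The following sketch assumes a symmetric positive definite approximation B (for example, a spectral multiple of the identity) already in the workspace; it is not the implementation of [30]:

    % Solve min g'*d + 0.5*d'*B*d subject to A*d = 0 via the KKT system.
    n = size(A, 2);  m = size(A, 1);
    KKT = [B, A'; A, zeros(m)];          % stationarity and feasibility blocks
    sol = KKT \ [-g; zeros(m, 1)];
    d = sol(1:n);                        % search direction with A*d = 0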

C. Li and D. H. Li [31] proposed a feasible Fletcher-Reeves conjugate gradient method for solving the linear equality constrained optimization problem with exact line search. Their idea is to use the original Fletcher-Reeves conjugate gradient method to modify the Zoutendijk direction. The Zoutendijk direction is the feasible steepest-descent direction; it is a solution of the following problem:

$$ \min_d\ g_k^T d, \quad \text{s.t.} \quad A d = 0, \quad \|d\|_\infty \le 1. \tag{8} $$

Li et al. [32] also extended the modified Fletcher-Reeves conjugate gradient method of Zhang et al. [33] to linear equality constrained optimization problems, combined with the Zoutendijk feasible direction method. Under some mild conditions, Li et al. [32] showed that the proposed method with Armijo-type line search is globally convergent.
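
A sketch of the Zoutendijk subproblem (8) as a linear program follows; it assumes Matlab's Optimization Toolbox function linprog, and the infinity-norm normalization is the one written in (8):

    % Zoutendijk feasible steepest-descent direction:
    % minimize g'*d subject to A*d = 0 and -1 <= d(i) <= 1.
    n = size(A, 2);  m = size(A, 1);
    d = linprog(g, [], [], A, zeros(m, 1), -ones(n, 1), ones(n, 1));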

In this paper, we extend the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] to solve the linear equality constrained optimization problem (1), combining it with the Rosen gradient projection method of Rosen [1]. Under some mild conditions, we show that the resulting method is globally convergent with an Armijo-type line search.

The rest of this paper is organized as follows. In Section 2, we propose the algorithm and prove that the generated search direction is a feasible descent direction. In Section 3, we prove the global convergence of the proposed method. In Section 4, we give some improvements of the algorithm. In Section 5, we report some numerical results for the proposed method.

2. Algorithm and the Feasible Descent Direction

In this section, we propose a two-term Polak-Ribière-Polyak conjugate gradient projection method for solving the linear equality constrained optimization problem (1). The proposed method is a combination of the well-known Rosen gradient projection method and the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7].

The iterative process of the proposed method is given by

$$ x_{k+1} = x_k + \alpha_k d_k, \tag{9} $$

and the search direction $d_k$ is defined by

$$ d_k = \begin{cases} -P g_k, & k = 0, \\ -P g_k + \beta_k^{PRP} \left( d_{k-1} - \dfrac{g_k^T d_{k-1}}{\|P g_k\|^2} P g_k \right), & k \ge 1, \end{cases} \tag{10} $$

where

$$ \beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \tag{11} $$

$P = I - A^T (A A^T)^{-1} A$ is the projection matrix in (3), and $\alpha_k > 0$ is a steplength obtained by a line search.

For convenience, we call the method defined by (9), (10), and (11) the EMPRP method. Now we prove that the direction defined by (10) and (11) is a feasible descent direction of $f$ at $x_k$.

Theorem 1. Suppose that $x_k \in X$ and that $d_k$ is defined by (10). If $P g_k \neq 0$, then $d_k$ is a feasible descent direction of $f$ at $x_k$.

Proof. From (10) and the definition of $P$, we have

$$ g_k^T d_k = -g_k^T P g_k + \beta_k^{PRP} \left( g_k^T d_{k-1} - \frac{g_k^T d_{k-1}}{\|P g_k\|^2}\, g_k^T P g_k \right) = -\|P g_k\|^2 < 0, \tag{12} $$

where we used $g_k^T P g_k = \|P g_k\|^2$, which holds since $P$ is symmetric and idempotent. This implies that $d_k$ provides a descent direction of $f$ at $x_k$.

In what follows, we show that $d_k$ is a feasible direction of $f$ at $x_k$. From the definition of $P$ in (3), we have that

$$ A P = A \left( I - A^T (A A^T)^{-1} A \right) = 0. \tag{13} $$

It follows from (10) and (13) that, for $k \ge 1$,

$$ A d_k = -A P g_k + \beta_k^{PRP} \left( A d_{k-1} - \frac{g_k^T d_{k-1}}{\|P g_k\|^2}\, A P g_k \right) = \beta_k^{PRP} A d_{k-1}. \tag{14} $$

When $k = 0$, we have

$$ A d_0 = -A P g_0 = 0. \tag{15} $$

It is easy to get from (14) and (15) that, for all $k$,

$$ A d_k = 0 \tag{16} $$

is satisfied. That is, $d_k$ is a feasible direction.
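
The direction computation, under the reconstruction of (10) and (11) given above, can be sketched in Matlab as follows (the names are ours, and the precise formula should be checked against the printed article):

    function d = emprp_direction(A, g, g_prev, d_prev)
    % EMPRP direction (10)-(11): projected two-term PRP direction.
    Pg = g - A' * ((A * A') \ (A * g));       % P*g_k, cf. Section 4.1
    if isempty(d_prev)
        d = -Pg;                               % d_0 = -P*g_0
    else
        beta = (g' * (g - g_prev)) / norm(g_prev)^2;                  % (11)
        d = -Pg + beta * (d_prev - (g' * d_prev) / norm(Pg)^2 * Pg);  % (10)
    end
    end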

In the remainder of this paper, we always assume that $f$ satisfies the following assumptions.

Assumption A. (i) The level set $\Omega = \{x \in X : f(x) \le f(x_0)\}$ is bounded.

(ii) The function $f$ is continuously differentiable and bounded from below, and its gradient $g$ is Lipschitz continuous on an open ball $B$ containing $\Omega$; that is, there is a constant $L > 0$ such that

$$ \|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in B. \tag{17} $$

Since $\{f(x_k)\}$ is decreasing, it is clear that the sequence $\{x_k\}$ generated by Algorithm 2 is contained in $\Omega$. In addition, we get from Assumption A that there is a constant $\gamma > 0$ such that

$$ \|g(x)\| \le \gamma, \quad \forall x \in \Omega. \tag{18} $$

Since the matrix $P$ is a projection matrix, it is reasonable to assume that there is a constant $\gamma_P > 0$ such that

$$ \|P g(x)\| \le \gamma_P, \quad \forall x \in \Omega. \tag{19} $$

We state the steps of the algorithm as follows.

Algorithm 2 (EMPRP method with Armijo-type line search). Step 0. Choose an initial point $x_0 \in X$ and a tolerance $\epsilon > 0$. Let $k := 0$.

Step 1. Compute $d_k$ by (10), where $\beta_k^{PRP}$ is computed by (11).

Step 2. If $\|P g_k\| \le \epsilon$, stop; else go to Step 3.

Step 3. Given $\rho \in (0,1)$ and $\delta > 0$, determine a stepsize $\alpha_k = \max\{\rho^j : j = 0, 1, 2, \ldots\}$ satisfying

$$ f(x_k + \alpha_k d_k) \le f(x_k) - \delta \alpha_k^2 \|d_k\|^2. \tag{20} $$

Step 4. Let $x_{k+1} = x_k + \alpha_k d_k$ and $k := k + 1$. Go to Step 1.
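
The whole of Algorithm 2 fits in a few lines of Matlab. The following driver is a sketch under the reconstructed formulas above: f and gradf are handles returning the function value and gradient, x0 is assumed to satisfy A*x0 = b, and emprp_direction is the sketch from Section 2:

    function [x, fx, k] = emprp(f, gradf, A, x0, epsilon, rho, delta, maxit)
    % EMPRP (Algorithm 2) with the Armijo-type line search (20).
    x = x0;  fx = f(x);  g = gradf(x);  d = [];  g_prev = [];
    for k = 0:maxit
        Pg = g - A' * ((A * A') \ (A * g));     % projected gradient
        if norm(Pg) <= epsilon, return; end     % Step 2
        d = emprp_direction(A, g, g_prev, d);   % Step 1
        alpha = 1;                              % Step 3: alpha_k = max rho^j
        while f(x + alpha * d) > fx - delta * alpha^2 * norm(d)^2
            alpha = rho * alpha;                % backtrack until (20) holds
        end
        g_prev = g;                             % Step 4
        x = x + alpha * d;  fx = f(x);  g = gradf(x);
    end
    end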

3. Global Convergence

In what follows, we establish the global convergence of the EMPRP method for general nonlinear objective functions. We first give some important lemmas for the EMPRP method.

Lemma 3. Suppose that $x_k \in X$ and $P g_k \neq 0$, and let $d_k$ be defined by (10) and (11). Then we have

$$ P d_k = d_k, \tag{21} $$

$$ (P g_k)^T d_k = g_k^T d_k = -\|P g_k\|^2, \tag{22} $$

$$ \|d_k\| \ge \|P g_k\|. \tag{23} $$

Proof. By the definition of $d_k$ in (10), $d_k$ is a linear combination of $P g_k$ and $d_{k-1}$; since $P$ is symmetric and idempotent, and $P d_{k-1} = d_{k-1}$ by induction, we have (21). It follows from (12) and (21) that

$$ (P g_k)^T d_k = g_k^T P d_k = g_k^T d_k = -\|P g_k\|^2, $$

which is (22). On the other hand, by the Cauchy-Schwarz inequality, we also have

$$ \|P g_k\|^2 = \left| (P g_k)^T d_k \right| \le \|P g_k\| \, \|d_k\|, $$

which gives (23).

Lemma 4. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. If there exists a constant $\varepsilon_0 > 0$ such that

$$ \|P g_k\| \ge \varepsilon_0, \quad \forall k, \tag{24} $$

then there exists a constant $M > 0$ such that

$$ \|d_k\| \le M, \quad \forall k. \tag{25} $$

Proof. It follows from (20) that the function value sequence $\{f(x_k)\}$ is decreasing. We also have from (20) that

$$ \sum_{k=0}^{\infty} \delta \alpha_k^2 \|d_k\|^2 \le \sum_{k=0}^{\infty} \left( f(x_k) - f(x_{k+1}) \right) < \infty, \tag{26} $$

as $f$ is bounded from below. In particular, we have

$$ \lim_{k \to \infty} \alpha_k \|d_k\| = 0. \tag{27} $$

By the definition of $d_k$, we get from (17), (19), and (24) that

$$ \|d_k\| \le \|P g_k\| + \left|\beta_k^{PRP}\right| \left( 1 + \frac{\|g_k\|}{\|P g_k\|} \right) \|d_{k-1}\| \le \gamma_P + \frac{L \gamma}{\varepsilon_0^2} \left( 1 + \frac{\gamma}{\varepsilon_0} \right) \alpha_{k-1} \|d_{k-1}\|^2, \tag{28} $$

where we used $\|g_{k-1}\| \ge \|P g_{k-1}\| \ge \varepsilon_0$ and $\|x_k - x_{k-1}\| = \alpha_{k-1} \|d_{k-1}\|$. Since $\alpha_{k-1} \|d_{k-1}\| \to 0$ as $k \to \infty$, there exist a constant $\delta_0 \in (0,1)$ and an integer $k_0$, such that the following inequality holds for all $k \ge k_0$:

$$ \frac{L \gamma}{\varepsilon_0^2} \left( 1 + \frac{\gamma}{\varepsilon_0} \right) \alpha_{k-1} \|d_{k-1}\| \le \delta_0. \tag{29} $$

Hence, we have, for any $k > k_0$,

$$ \|d_k\| \le \gamma_P + \delta_0 \|d_{k-1}\| \le \gamma_P \left( 1 + \delta_0 + \cdots + \delta_0^{k-k_0-1} \right) + \delta_0^{k-k_0} \|d_{k_0}\| \le \frac{\gamma_P}{1 - \delta_0} + \|d_{k_0}\|. \tag{30} $$

Letting $M = \max\{\|d_0\|, \ldots, \|d_{k_0}\|, \gamma_P/(1-\delta_0) + \|d_{k_0}\|\}$, we can get (25).

We now establish the global convergence theorem of the EMPRP method for general nonlinear objective functions.

Theorem 5. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. Then we have

$$ \liminf_{k \to \infty} \|P g_k\| = 0. \tag{31} $$

Proof. Suppose that $P g_k \neq 0$ for all $k$ and that (31) does not hold. Then there exists a constant $\varepsilon_0 > 0$ such that

$$ \|P g_k\| \ge \varepsilon_0, \quad \forall k. \tag{32} $$

We now prove (31) by considering the following two cases.

Case (i). $\liminf_{k \to \infty} \alpha_k > 0$. Then (27) yields $\|d_k\| \to 0$, and we get from (22) and (27) that $\|P g_k\|^2 = -g_k^T d_k \le \|g_k\| \, \|d_k\| \le \gamma \|d_k\| \to 0$. This contradicts assumption (32).

Case (ii). $\liminf_{k \to \infty} \alpha_k = 0$. That is, there is an infinite index set $K$ such that

$$ \lim_{k \in K,\, k \to \infty} \alpha_k = 0. \tag{33} $$

When $k \in K$ is sufficiently large, by the line search condition, $\rho^{-1} \alpha_k$ does not satisfy inequality (20). This means

$$ f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) > -\delta \rho^{-2} \alpha_k^2 \|d_k\|^2. \tag{34} $$

By the mean-value theorem and inequality (17), there is a $\theta_k \in (0,1)$ such that

$$ f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) = \rho^{-1} \alpha_k \, g(x_k + \theta_k \rho^{-1} \alpha_k d_k)^T d_k \le \rho^{-1} \alpha_k \, g_k^T d_k + L \rho^{-2} \alpha_k^2 \|d_k\|^2. \tag{35} $$

Substituting the last inequality into (34), for all $k \in K$ sufficiently large, we have from (12) that

$$ \|P g_k\|^2 = -g_k^T d_k \le (L + \delta) \rho^{-1} \alpha_k \|d_k\|^2. \tag{36} $$

Since $\{\|d_k\|\}$ is bounded by Lemma 4 and $\alpha_k \to 0$ for $k \in K$, the last inequality implies

$$ \lim_{k \in K,\, k \to \infty} \|P g_k\| = 0. \tag{37} $$

This also yields a contradiction. The proof is then complete.

4. Improvement for Algorithm 2

In this section, we propose techniques for improving the efficiency of Algorithm 2 in practical computation, concerning the computation of the projections and of the stepsize in the Armijo-type line search.

4.1. Computing Projection

In this paper, as in Gould et al. [34], instead of computing a basis for the null space of the matrix $A$, we choose to work directly with the matrix of constraint gradients, computing projections by normal equations. As the computation of the projection is a key step of the proposed method, following Gould et al. [34], this projection can be computed in the following alternative way.

Let

$$ g_k^P = P g_k = \left( I - A^T (A A^T)^{-1} A \right) g_k. \tag{38} $$

We can express this as

$$ g_k^P = g_k - A^T v_k, \tag{39} $$

where $v_k$ is the solution of

$$ (A A^T) v = A g_k. \tag{40} $$

Note that (40) is a system of normal equations. Since $A$ is a matrix of rank $m$, $A A^T$ is a symmetric positive definite matrix. We use the Doolittle (i.e., LU) factorization of $A A^T$ to solve (40).

Since the matrix $A A^T$ is constant, its factorization needs to be carried out only once, at the beginning of the iterative process. Using the Doolittle factorization $A A^T = L U$, where $L$ is unit lower triangular and $U$ is upper triangular, the solution of (40) can be computed by two triangular solves:

$$ L z = A g_k, \qquad U v = z. \tag{41} $$
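
In code, the factorization is computed once and then reused at every iteration. The following is a sketch; a Cholesky factorization would serve equally well, since $A A^T$ is symmetric positive definite:

    M = A * A';
    [L, U] = lu(M);            % LU (Doolittle-type) factors, computed once
    % ... at each iteration k:
    z  = L \ (A * g);          % forward substitution,  L*z = A*g_k
    v  = U \ z;                % backward substitution, U*v = z
    Pg = g - A' * v;           % projected gradient, cf. (39)-(41)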

4.2. Computing Stepsize

A drawback of the Armijo line search is the choice of the initial stepsize. If the initial stepsize is too large, then the procedure needs many more function evaluations. If it is too small, then the efficiency of the related algorithm is decreased. Therefore, we should choose an adequate initial stepsize at each iteration. In what follows, we propose a way to generate the initial stepsize.

We first estimate the stepsize determined by the exact line search. Suppose for the moment that $f$ is twice continuously differentiable. We denote by $G(x)$ the Hessian of $f$ at $x$ and abbreviate $G(x_k)$ as $G_k$. Notice that the exact line search stepsize $\alpha_k^*$ approximately satisfies

$$ 0 = g(x_k + \alpha_k^* d_k)^T d_k \approx g_k^T d_k + \alpha_k^* d_k^T G_k d_k. \tag{42} $$

This shows that the scalar $-g_k^T d_k / (d_k^T G_k d_k)$ is an estimation to $\alpha_k^*$. To avoid the computation of the second derivative, we further estimate $d_k^T G_k d_k$ by letting

$$ z_k = \frac{\left( g(x_k + \epsilon_k d_k) - g_k \right)^T d_k}{\epsilon_k}, \tag{43} $$

where the positive sequence $\{\epsilon_k\}$ satisfies $\epsilon_k \to 0$ as $k \to \infty$. Let the initial stepsize of the Armijo line search be an approximation to $\alpha_k^*$:

$$ s_k = -\frac{g_k^T d_k}{z_k}. \tag{44} $$

It is not difficult to see that if $\epsilon_k$ is sufficiently small, then $z_k$ and $s_k$ are good estimations to $d_k^T G_k d_k$ and $\alpha_k^*$, respectively.
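
A sketch of the estimates (43) and (44) follows; the gradient handle gradf and the scalar eps_k are assumptions of this illustration:

    gk = gradf(x);
    zk = ((gradf(x + eps_k * d) - gk)' * d) / eps_k;  % estimates d'*G_k*d, (43)
    sk = -(gk' * d) / zk;                             % initial stepsize, (44)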

So, to improve the efficiency of the EMPRP method in practical computation, we utilize the following line search process.

Line Search Process. If the inequality

$$ f(x_k + s_k d_k) \le f(x_k) - \delta s_k^2 \|d_k\|^2 \tag{45} $$

holds, then we let $\alpha_k = s_k$. Otherwise, we let $\alpha_k$ be the largest scalar in the set $\{s_k \rho^j,\ j = 1, 2, \ldots\}$ such that inequality (45) is satisfied.
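
In code, this process is a backtracking loop started from s_k rather than from 1; the handle f and the variables sk, rho, and delta are as in the sketches above:

    fx = f(x);  alpha = sk;                 % try the estimated stepsize first
    while f(x + alpha * d) > fx - delta * alpha^2 * norm(d)^2   % test (45)
        alpha = rho * alpha;                % largest s_k*rho^j satisfying (45)
    end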

5. Numerical Experiments

This section reports some numerical experiments. First, we test the EMPRP method and compare it with the Rosen gradient projection method in [1] on low-dimensional problems. Second, we test the EMPRP method and compare it with the spectral gradient method of Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method of Li et al. [32] on high-dimensional problems. In the line search process, the parameters $\rho$ and $\delta$ and the sequence $\{\epsilon_k\}$ are set as described in Section 4.

The methods in the tables have the following meanings.

(i) "EMPRP" stands for the EMPRP method with the Armijo-type line search (20).

(ii) "ROSEN" stands for the Rosen gradient projection method in [1] with the Armijo-type line search (20); that is, in Algorithm 2, the direction is $d_k = -P g_k$.

(iii) "SPG" stands for the spectral gradient method with the nonmonotone line search in Martínez et al. [30].

(iv) "FFR" stands for the feasible modified Fletcher-Reeves conjugate gradient method with the Armijo-type line search in Li et al. [32].

We stop the iteration if the condition $\|P g_k\| \le \epsilon$ is satisfied for a given tolerance $\epsilon$. If the number of iterations exceeds a prescribed limit, we also stop the iteration and declare a failure. All of the algorithms are coded in Matlab 7.0 and run on a personal computer with a 2.0 GHz CPU.

5.1. Numerical Comparison of EMPRP and ROSEN

We test the performance of the EMPRP and ROSEN methods on the following test problems with given initial points. The results are listed in Table 1, where $n$ stands for the dimension of the tested problem and $m$ stands for the number of constraints. We report the following results: the CPU time Time (in seconds), the number of iterations Iter, the number of gradient evaluations Geval, and the number of function evaluations Feval.

Table 1: Test results for Problems 1–5 with given initial points.

Problem 1 (HS28 [35]). The function HS28 in [35] is defined as follows:

$$ f(x) = (x_1 + x_2)^2 + (x_2 + x_3)^2, \quad \text{s.t.} \quad x_1 + 2x_2 + 3x_3 = 1, $$

with the initial point $x_0 = (-4, 1, 1)^T$. The optimal solution is $x^* = (0.5, -0.5, 0.5)^T$ and the optimal function value is $f(x^*) = 0$.
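
For concreteness, Problem 1 can be set up for the sketches above as follows; the problem data are the standard HS28 data from [35], while the solver parameters in the last line are example values of our own:

    % HS28: f(x) = (x1+x2)^2 + (x2+x3)^2,  s.t.  x1 + 2*x2 + 3*x3 = 1.
    f     = @(x) (x(1)+x(2))^2 + (x(2)+x(3))^2;
    gradf = @(x) [2*(x(1)+x(2));
                  2*(x(1)+x(2)) + 2*(x(2)+x(3));
                  2*(x(2)+x(3))];
    A = [1 2 3];                      % constraint matrix (b = 1)
    x0 = [-4; 1; 1];                  % feasible: A*x0 = 1
    [x, fx, iters] = emprp(f, gradf, A, x0, 1e-6, 0.5, 0.01, 10000);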

Problem 2 (HS48 [35]). The function HS48 in [35] is defined as follows:

$$ f(x) = (x_1 - 1)^2 + (x_2 - x_3)^2 + (x_4 - x_5)^2, \quad \text{s.t.} \quad x_1 + x_2 + x_3 + x_4 + x_5 = 5, \quad x_3 - 2(x_4 + x_5) = -3, $$

with the initial point $x_0 = (3, 5, -3, 2, -2)^T$. The optimal solution is $x^* = (1, 1, 1, 1, 1)^T$ and the optimal function value is $f(x^*) = 0$.

Problem 3 (HS49 [35]). The function HS49 in [35] is defined as follows:

$$ f(x) = (x_1 - x_2)^2 + (x_3 - 1)^2 + (x_4 - 1)^4 + (x_5 - 1)^6, \quad \text{s.t.} \quad x_1 + x_2 + x_3 + 4x_4 = 7, \quad x_3 + 5x_5 = 6, $$

with the initial point $x_0 = (10, 7, 2, -3, 0.8)^T$. The optimal solution is $x^* = (1, 1, 1, 1, 1)^T$ and the optimal function value is $f(x^*) = 0$.

Problem 4 (HS50 [35]). The function HS50 in [35] is defined as follows:

$$ f(x) = (x_1 - x_2)^2 + (x_2 - x_3)^2 + (x_3 - x_4)^4 + (x_4 - x_5)^2, \quad \text{s.t.} \quad x_1 + 2x_2 + 3x_3 = 6, \quad x_2 + 2x_3 + 3x_4 = 6, \quad x_3 + 2x_4 + 3x_5 = 6, $$

with the initial point $x_0 = (35, -31, 11, 5, -5)^T$. The optimal solution is $x^* = (1, 1, 1, 1, 1)^T$ and the optimal function value is $f(x^*) = 0$. Moreover, we extend the dimension of function HS51 [35] to 10 and 20 with a correspondingly extended initial point; the optimal function value is again $f(x^*) = 0$.

Problem 5 (HS51 [35]). The function HS51 in [35] is defined as follows:

$$ f(x) = (x_1 - x_2)^2 + (x_2 + x_3 - 2)^2 + (x_4 - 1)^2 + (x_5 - 1)^2, \quad \text{s.t.} \quad x_1 + 3x_2 = 4, \quad x_3 + x_4 - 2x_5 = 0, \quad x_2 - x_5 = 0, $$

with the initial point $x_0 = (2.5, 0.5, 2, -1, 0.5)^T$. The optimal solution is $x^* = (1, 1, 1, 1, 1)^T$ and the optimal function value is $f(x^*) = 0$.

From Table 1, we can see that the EMPRP method performs better than the Rosen gradient projection method in [1], which implies that the EMPRP method can improve the computational efficiency of the Rosen gradient projection method for solving linear equality constrained optimization problems.

5.2. Numerical Comparison of EMPRP, FFR, and SPG

In this subsection, we test the EMPRP method and compare it with the spectral gradient method (SPG) of Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method (FFR) of Li et al. [32] on the following high-dimensional problems with given initial points. The results are listed in Tables 2, 3, 4, 5, and 6. We report the following results: the CPU time Time (in seconds) and the number of iterations Iter.

Table 2: Test results for Problem 6 with given initial points.

Table 3: Test results for Problem 7 with given initial points.

Table 4: Test results for Problem 8 with given initial points.

Table 5: Test results for Problem 9 with given initial points.

Table 6: Test results for Problem 10 with given initial points.

Problem 6. Given a positive integer $n$, the function is defined as follows: with the initial point . The optimal function value . This problem comes from Martínez et al. [30].

Problem 7. Given a positive integer $n$, the function is defined as follows: with the initial point . This problem comes from Asaadi [36] and is called MAD6.

Problem 8. Given a positive integer $n$, the function is defined as follows: with the initial point . The optimal function value .

Problem 9. Given a positive integer $n$, the function is defined as follows: with the initial point . The optimal function value .

Problem 10. Given a positive integer $n$, the function is defined as follows: with the initial point .

From Tables 2–6, we can see that the EMPRP method and the FFR method in [32] perform better than the SPG method of Martínez et al. [30] for solving large-scale linear equality constrained optimization problems. This is because the EMPRP and FFR methods are both first-order methods, whereas the SPG method of Martínez et al. [30] needs to compute an approximate Hessian by a quasi-Newton method satisfying a weak secant equation. However, as the EMPRP method also needs to compute projections, the FFR method in [32] performs better than the EMPRP method when the test problems become large.

6. Conclusions

In this paper, we propose a new conjugate gradient projection method for solving the linear equality constrained optimization problem (1), which combines the two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] with the Rosen projection method. The proposed method can also be regarded as an extension of the recently developed two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7]. Under some mild conditions, we show that it is globally convergent with an Armijo-type line search.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The author thanks the anonymous referees for their valuable comments and suggestions that improved the presentation. This work is supported by the NSF of China Grants 11301041, 11371154, 71371065, and 71371195, the Ministry of Education Humanities and Social Sciences Project Grant 12YJC790027, a project funded by the China Postdoctoral Science Foundation, and the Natural Science Foundation of Hunan Province.

References

1. J. B. Rosen, "The gradient projection method for nonlinear programming. I. Linear constraints," SIAM Journal on Applied Mathematics, vol. 8, no. 1, pp. 181–217, 1960.
2. D. Z. Du and X. S. Zhang, "A convergence theorem of Rosen's gradient projection method," Mathematical Programming, vol. 36, no. 2, pp. 135–144, 1986.
3. D. Du, "Remarks on the convergence of Rosen's gradient projection method," Acta Mathematicae Applicatae Sinica, vol. 3, no. 2, pp. 270–279, 1987.
4. D. Z. Du and X. S. Zhang, "Global convergence of Rosen's gradient projection method," Mathematical Programming, vol. 44, no. 1, pp. 357–366, 1989.
5. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 3, no. 16, pp. 35–43, 1969.
6. B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
7. W. Cheng, "A two-term PRP-based descent method," Numerical Functional Analysis and Optimization, vol. 28, no. 11, pp. 1217–1230, 2007.
8. W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
9. G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523–539, 2007.
10. N. Andrei, "Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," European Journal of Operational Research, vol. 204, no. 3, pp. 410–420, 2010.
11. G. Yu, L. Guan, and W. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optimization Methods and Software, vol. 23, no. 2, pp. 275–293, 2008.
12. G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
13. Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
14. Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
15. Z. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927–936, 2011.
16. Y.-Y. Chen and S.-Q. Du, "Nonlinear conjugate gradient methods with Wolfe type line search," Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
17. S. Y. Liu, Y. Y. Huang, and H. W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, Article ID 305643, 12 pages, 2014.
18. Z. F. Dai, D. H. Li, and F. H. Wen, "Robust conditional value-at-risk optimization for asymmetrically distributed asset returns," Pacific Journal of Optimization, vol. 8, no. 3, pp. 429–445, 2012.
19. C. Huang, C. Peng, X. Chen, and F. Wen, "Dynamics analysis of a class of delayed economic model," Abstract and Applied Analysis, vol. 2013, Article ID 962738, 12 pages, 2013.
20. C. Huang, X. Gong, X. Chen, and F. Wen, "Measuring and forecasting volatility in Chinese stock market using HAR-CJ-M model," Abstract and Applied Analysis, vol. 2013, Article ID 143194, 13 pages, 2013.
21. G. Qin, C. Huang, Y. Xie, and F. Wen, "Asymptotic behavior for third-order quasi-linear differential equations," Advances in Difference Equations, vol. 30, no. 13, pp. 305–312, 2013.
22. C. Huang, H. Kuang, X. Chen, and F. Wen, "An LMI approach for dynamics of switched cellular neural networks with mixed delays," Abstract and Applied Analysis, vol. 2013, Article ID 870486, 8 pages, 2013.
23. Q. F. Cui, Z. G. Wang, X. Chen, and F. Wen, "Sufficient conditions for non-Bazilevic functions," Abstract and Applied Analysis, vol. 2013, Article ID 154912, 4 pages, 2013.
24. F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360–371, 2009.
25. F. Wen and Z. Liu, "A copula-based correlation measure and its application in Chinese stock market," International Journal of Information Technology & Decision Making, vol. 8, no. 4, pp. 787–801, 2009.
26. F. Wen, Z. He, and X. Chen, "Investors' risk preference characteristics and conditional skewness," Mathematical Problems in Engineering, vol. 2014, Article ID 814965, 14 pages, 2014.
27. F. Wen, X. Gong, Y. Chao, and X. Chen, "The effects of prior outcomes on risky choice: evidence from the stock market," Mathematical Problems in Engineering, vol. 2014, Article ID 272518, 8 pages, 2014.
28. F. Wen, Z. He, X. Gong, and A. Liu, "Investors' risk preference characteristics based on different reference point," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 158386, 9 pages, 2014.
29. Z. Dai and F. Wen, "Robust CVaR-based portfolio optimization under a general affine data perturbation uncertainty set," Journal of Computational Analysis and Applications, vol. 16, no. 1, pp. 93–103, 2014.
30. J. M. Martínez, E. A. Pilotta, and M. Raydan, "Spectral gradient methods for linearly constrained optimization," Journal of Optimization Theory and Applications, vol. 125, no. 3, pp. 629–651, 2005.
31. C. Li and D. H. Li, "An extension of the Fletcher-Reeves method to linear equality constrained optimization problem," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10909–10914, 2013.
32. C. Li, L. Fang, and X. Cui, "A feasible Fletcher-Reeves method to linear equality constrained optimization problem," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '10), pp. 30–33, December 2010.
33. L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561–572, 2006.
34. N. I. M. Gould, M. E. Hribar, and J. Nocedal, "On the solution of equality constrained quadratic programming problems arising in optimization," SIAM Journal on Scientific Computing, vol. 23, no. 4, pp. 1376–1395, 2001.
35. W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Springer, New York, NY, USA, 1981.
36. J. Asaadi, "A computational comparison of some non-linear programs," Mathematical Programming, vol. 4, no. 1, pp. 144–154, 1973.