Abstract and Applied Analysis
Volume 2014, Article ID 921364, 9 pages
https://doi.org/10.1155/2014/921364

Research Article | Open Access

Extension of Modified Polak-Ribière-Polyak Conjugate Gradient Method to Linear Equality Constraints Minimization Problems

Zhifeng Dai

Academic Editor: Daniele Bertaccini
Received 19 Mar 2014; Revised 01 Jul 2014; Accepted 21 Jul 2014; Published 06 Aug 2014

Abstract

Combining the Rosen gradient projection method with the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method, we propose a two-term PRP conjugate gradient projection method for solving linear equality constrained optimization problems. The proposed method possesses some attractive properties: (1) the search direction it generates is a feasible descent direction, so the generated iterates are feasible points; (2) the sequence of function values is decreasing. Under some mild conditions, we show that the method is globally convergent with an Armijo-type line search. Preliminary numerical results show that the proposed method is promising.

1. Introduction

In this paper, we consider solving the following linear equality constrained optimization problem:

$\min\ f(x)$  s.t.  $Ax=b$,  (1)

where $f:\mathbb{R}^n\to\mathbb{R}$ is a smooth function and $A\in\mathbb{R}^{m\times n}$ is a matrix of rank $m$ ($m\le n$). In this paper, the feasible region and the feasible direction set are defined, respectively, as follows:

$X=\{x\in\mathbb{R}^n:\ Ax=b\}$,  $D=\{d\in\mathbb{R}^n:\ Ad=0\}$.

Taking the negative gradient as a search direction ($d_k=-g_k$, where $g_k=\nabla f(x_k)$) is a natural way of solving unconstrained optimization problems. However, this approach does not work for constrained problems, since the negative gradient may not be a feasible direction. A basic technique to overcome this difficulty was initiated by Rosen [1] in 1960. To obtain a feasible search direction, Rosen projected the negative gradient onto the feasible direction set; that is,

$d_k=-Pg_k$,  where  $P=I-A^\top(AA^\top)^{-1}A$

is the orthogonal projection onto the null space of $A$. The convergence of Rosen's gradient projection method was proved by Du and Zhang; see [2-4].
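In MATLAB-like notation, Rosen's projected gradient direction can be sketched as follows (a minimal illustration, not code from [1]; the function name is ours):

    function d = rosen_direction(A, g)
    % Projected steepest-descent direction of Rosen's method (sketch).
    % A: m-by-n constraint matrix of full row rank; g: gradient at the
    % current feasible point. The returned d satisfies A*d = 0.
    n = size(A, 2);
    P = eye(n) - A' * ((A * A') \ A);  % orthogonal projector onto null(A)
    d = -P * g;
    end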

In fact, Rosen's gradient projection method is an extension of the steepest-descent method. It is well known that the drawback of the steepest-descent method is that it tends to zig-zag, especially when the graph of $f$ has an "elongated" form. To overcome the zig-zagging, we use the conjugate gradient method to modify the projection direction.

It is well known that nonlinear conjugate gradient methods such as the Polak-Ribière-Polyak (PRP) method [5, 6] are very efficient for large-scale unconstrained optimization problems due to their simplicity and low storage. However, the PRP method does not necessarily satisfy the descent condition $g_k^\top d_k<0$ for all $k$.

Recently, Cheng [7] proposed a two-term modified PRP method (called TMPRP), in which the direction is given by

$d_0=-g_0$, and, for $k\ge1$, $d_k=-\left(1+\beta_k^{PRP}\,\frac{g_k^\top d_{k-1}}{\|g_k\|^2}\right)g_k+\beta_k^{PRP}d_{k-1}$, where $\beta_k^{PRP}=\frac{g_k^\top(g_k-g_{k-1})}{\|g_{k-1}\|^2}$.  (4)

An attractive property of the TMPRP method is that the direction it generates satisfies $g_k^\top d_k=-\|g_k\|^2$, independently of any line search. The numerical results presented in Cheng [7] show some potential advantage of the TMPRP method. In fact, we can easily rewrite direction (4) in the three-term form

$d_k=-g_k+\beta_k^{PRP}d_{k-1}-\beta_k^{PRP}\,\frac{g_k^\top d_{k-1}}{\|g_k\|^2}\,g_k$.

In the past few years, researchers have paid increasing attention to conjugate gradient methods and their applications. Among others, we mention here the following works, for example, [8-29].
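For illustration, direction (4) can be coded as follows (a minimal MATLAB sketch; the function name is ours):

    function d = tmprp_direction(g, gp, dp)
    % Two-term modified PRP direction of Cheng [7] (sketch, k >= 1).
    % g: current gradient; gp: previous gradient; dp: previous direction.
    beta  = g' * (g - gp) / norm(gp)^2;      % PRP parameter
    theta = beta * (g' * dp) / norm(g)^2;    % two-term correction
    d = -(1 + theta) * g + beta * dp;        % satisfies g'*d = -norm(g)^2
    end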

In the past few years, some researchers have also paid attention to equality constrained problems. Martínez et al. [30] proposed a spectral gradient method for linearly constrained optimization in which the search direction $d_k$ is the unique solution of

$\min_d\ \tfrac12 d^\top B_k d+g_k^\top d$  s.t.  $Ad=0$.

In this algorithm, $B_k$ can be computed by a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation, and the whole process is combined with a nonmonotone line search strategy.

C. Li and D. H. Li [31] proposed a feasible Fletcher-Reeves conjugate gradient method for solving linear equality constrained optimization problems with exact line search. Their idea is to use the original Fletcher-Reeves conjugate gradient method to modify the Zoutendijk direction. The Zoutendijk direction is the feasible steepest-descent direction; it is a solution of the following problem:

$\min_d\ g_k^\top d$  s.t.  $Ad=0$, $\|d\|\le1$.

Li et al. [32] also extended the modified Fletcher-Reeves conjugate gradient method of Zhang et al. [33] to linear equality constrained optimization problems, combining it with the Zoutendijk feasible direction method. Under some mild conditions, Li et al. [32] showed that the proposed method with Armijo-type line search is globally convergent.

In this paper, we extend the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] to solve the linear equality constrained optimization problem (1), combining it with the Rosen gradient projection method [1]. Under some mild conditions, we show that the resulting method is globally convergent with an Armijo-type line search.

The rest of this paper is organized as follows. In Section 2, we propose the algorithm and prove that the generated direction is a feasible descent direction. In Section 3, we prove the global convergence of the proposed method. In Section 4, we give some improvements for the algorithm. In Section 5, we report some numerical results for the proposed method.

2. Algorithm and the Feasible Descent Direction

In this section, we propose a two-term Polak-Ribière-Polyak conjugate gradient projection method for solving the linear equality constrained optimization problem (1). The proposed method is a combination of the well-known Rosen gradient projection method and the two-term Polak-Ribière-Polyak (PRP) conjugate gradient method in Cheng [7].

The iterative process of the proposed method is given by

$x_{k+1}=x_k+\alpha_k d_k$,  (9)

and the search direction $d_k$ is defined by

$d_0=-Pg_0$, and, for $k\ge1$, $d_k=-\left(1+\beta_k\,\frac{(Pg_k)^\top d_{k-1}}{\|Pg_k\|^2}\right)Pg_k+\beta_k d_{k-1}$,  (10)

where

$\beta_k=\frac{(Pg_k)^\top(Pg_k-Pg_{k-1})}{\|Pg_{k-1}\|^2}$,  $P=I-A^\top(AA^\top)^{-1}A$,  (11)

and $\alpha_k$ is a steplength obtained by a line search.
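The direction computation can be sketched in MATLAB as follows (a minimal sketch of (10)-(11), not the code used in the experiments; the function name is ours):

    function d = emprp_direction(Pg, Pgp, dp, k)
    % EMPRP search direction, cf. (10)-(11).
    % Pg, Pgp: current and previous projected gradients P*g;
    % dp: previous direction; k: iteration index.
    if k == 0
        d = -Pg;
    else
        beta = Pg' * (Pg - Pgp) / norm(Pgp)^2;                         % (11)
        d = -(1 + beta * (Pg' * dp) / norm(Pg)^2) * Pg + beta * dp;    % (10)
    end
    end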

For convenience, we call the method defined by (9) and (10) the EMPRP method. We now prove that the direction defined by (10) and (11) is a feasible descent direction of $f$ at $x_k$.

Theorem 1. Suppose that $x_k\in X$ and $d_k$ is defined by (10) and (11). If $Pg_k\ne0$, then $d_k$ is a feasible descent direction of $f$ at $x_k$.

Proof. From (10), (11), and the definition of $P$, using $P^\top=P$, $P^2=P$, and $Pd_{k-1}=d_{k-1}$ (so that $g_k^\top d_{k-1}=(Pg_k)^\top d_{k-1}$), a direct computation gives

$g_k^\top d_k=-\|Pg_k\|^2<0$.  (12)

This implies that $d_k$ provides a descent direction of $f$ at $x_k$.
In what follows, we show that $d_k$ is a feasible direction of $f$ at $x_k$. From the definition of $P$, we have

$AP=A-AA^\top(AA^\top)^{-1}A=0$.  (13)

It follows from (10) and (13), by induction on $k$, that

$Ad_k=0$ for all $k$.  (14)

When $x_k\in X$, we have

$Ax_k=b$.  (15)

It is easy to get from (14) and (15) that, for all $\alpha>0$, $A(x_k+\alpha d_k)=b$ is satisfied. That is, $d_k$ is a feasible direction.

In the remainder of this paper, we always assume that $f$ satisfies the following assumptions.

Assumption A. (i) The level set $\Omega=\{x\in X:\ f(x)\le f(x_0)\}$ is bounded.
(ii) The function $f$ is continuously differentiable and bounded from below, and its gradient is Lipschitz continuous on an open set $B$ containing $\Omega$; that is, there is a constant $L>0$ such that

$\|\nabla f(x)-\nabla f(y)\|\le L\|x-y\|$ for all $x,y\in B$.  (17)

Since $\{f(x_k)\}$ is decreasing, it is clear that the sequence $\{x_k\}$ generated by Algorithm 2 is contained in $\Omega$. In addition, we get from Assumption A that there is a constant $\gamma>0$ such that $\|g_k\|\le\gamma$ for all $k$. Since the matrix $P$ is an orthogonal projection matrix, we have $\|P\|\le1$, and hence there is a constant $\gamma_1>0$ such that

$\|Pg_k\|\le\gamma_1$ for all $k$.  (19)

We state the steps of the algorithm as follows.

Algorithm 2 (EMPRP method with Armijo-type line search). Step 0. Choose an initial point $x_0\in X$ and a tolerance $\epsilon\ge0$. Let $k:=0$.
Step 1. Compute $d_k$ by (10), where $\beta_k$ is computed by (11).
Step 2. If $\|Pg_k\|\le\epsilon$, stop; else go to Step 3.
Step 3. Given $\rho\in(0,1)$ and $\delta>0$, determine a stepsize $\alpha_k=\max\{\rho^j:\ j=0,1,2,\dots\}$ satisfying

$f(x_k+\alpha_k d_k)\le f(x_k)-\delta\alpha_k^2\|d_k\|^2$.  (20)

Step 4. Let $x_{k+1}:=x_k+\alpha_k d_k$ and $k:=k+1$. Go to Step 1.
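An illustrative MATLAB sketch of Algorithm 2 is given below (not the implementation used in Section 5; fun and gradf are user-supplied handles for $f$ and $\nabla f$, and the remaining names are ours):

    function [x, fx, k] = emprp(fun, gradf, x0, A, epsilon, rho, delta, maxit)
    % EMPRP method with Armijo-type line search (20), sketched.
    % x0 must be feasible (A*x0 = b); the projector is formed once.
    n  = size(A, 2);
    P  = eye(n) - A' * ((A * A') \ A);     % projector onto null(A)
    x  = x0;  fx = fun(x);
    Pg = P * gradf(x);  Pgp = Pg;  dp = [];  k = 0;
    while norm(Pg) > epsilon && k < maxit
        if k == 0
            d = -Pg;                                                    % (10), k = 0
        else
            beta = Pg' * (Pg - Pgp) / norm(Pgp)^2;                      % (11)
            d = -(1 + beta * (Pg' * dp) / norm(Pg)^2) * Pg + beta * dp; % (10)
        end
        alpha = 1;                         % backtrack until (20) holds
        while fun(x + alpha * d) > fx - delta * alpha^2 * norm(d)^2
            alpha = rho * alpha;
        end
        x  = x + alpha * d;  fx = fun(x);
        Pgp = Pg;  dp = d;
        Pg = P * gradf(x);
        k  = k + 1;
    end
    end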

3. Global Convergence

In what follows, we establish the global convergence theorem of the EMPRP method for general nonlinear objective functions. We first give some important lemmas for the EMPRP method.

Lemma 3. Suppose that $x_k\in X$ and $d_k$ is defined by (10) and (11). Then we have

$g_k^\top d_k=(Pg_k)^\top d_k=-\|Pg_k\|^2$ for all $k\ge0$.

Proof. By the definition of $d_k$, for $k=0$ we have $g_0^\top d_0=-g_0^\top Pg_0=-\|Pg_0\|^2$. For $k\ge1$, it follows from (10), (11), and $g_k^\top d_{k-1}=(Pg_k)^\top d_{k-1}$ (which holds since $Pd_{k-1}=d_{k-1}$) that

$g_k^\top d_k=-\|Pg_k\|^2-\beta_k(Pg_k)^\top d_{k-1}+\beta_k g_k^\top d_{k-1}=-\|Pg_k\|^2$.

On the other hand, since $Pd_k=d_k$, we also have $(Pg_k)^\top d_k=g_k^\top Pd_k=g_k^\top d_k$.

Lemma 4. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. If there exists a constant $\varepsilon>0$ such that

$\|Pg_k\|\ge\varepsilon$ for all $k$,  (24)

then there exists a constant $M>0$ such that

$\|d_k\|\le M$ for all $k$.  (25)

Proof. It follows from (20) that the function value sequence $\{f(x_k)\}$ is decreasing. Since $f$ is bounded from below, summing (20) over $k$ gives $\sum_{k\ge0}\alpha_k^2\|d_k\|^2<\infty$. In particular, we have $\alpha_k\|d_k\|\to0$ as $k\to\infty$. By the definition of $\beta_k$, we get from (17), (19), and (24) that

$|\beta_k|\le\frac{\|Pg_k\|\,\|g_k-g_{k-1}\|}{\|Pg_{k-1}\|^2}\le\frac{\gamma_1 L}{\varepsilon^2}\,\alpha_{k-1}\|d_{k-1}\|$,

and hence, from (10) and the Cauchy-Schwarz inequality,

$\|d_k\|\le\|Pg_k\|+|\beta_k|\left(\|d_{k-1}\|+\frac{|(Pg_k)^\top d_{k-1}|}{\|Pg_k\|^2}\,\|Pg_k\|\right)\le\gamma_1+\frac{2\gamma_1 L}{\varepsilon^2}\,\alpha_{k-1}\|d_{k-1}\|\,\|d_{k-1}\|$.

Since $\alpha_{k-1}\|d_{k-1}\|\to0$ as $k\to\infty$, there exists an integer $k_0$ such that the following inequality holds for all $k>k_0$: $\frac{2\gamma_1 L}{\varepsilon^2}\,\alpha_{k-1}\|d_{k-1}\|\le\frac12$. Hence, we have, for any $k>k_0$,

$\|d_k\|\le\gamma_1+\tfrac12\|d_{k-1}\|\le\gamma_1\left(1+\tfrac12+\tfrac14+\cdots\right)+\|d_{k_0}\|\le2\gamma_1+\|d_{k_0}\|$.

Letting $M=\max\{\|d_0\|,\dots,\|d_{k_0}\|,\,2\gamma_1+\|d_{k_0}\|\}$, we can get (25).

We now establish the global convergence theorem of the EMPRP method for general nonlinear objective functions.

Theorem 5. Suppose that Assumption A holds and $\{x_k\}$ is generated by Algorithm 2. Then we have

$\liminf_{k\to\infty}\|Pg_k\|=0$.  (31)

Proof. Suppose, to the contrary, that (31) does not hold. Then there exists a constant $\varepsilon>0$ such that

$\|Pg_k\|\ge\varepsilon$ for all $k$.  (32)

We now derive a contradiction by considering the following two cases.
Case (i): $\liminf_{k\to\infty}\alpha_k>0$. Since $\sum_k\alpha_k^2\|d_k\|^2<\infty$, we get $\|d_k\|\to0$; by Lemma 3 and the Cauchy-Schwarz inequality, $\|Pg_k\|^2=-(Pg_k)^\top d_k\le\|Pg_k\|\,\|d_k\|$, so $\|Pg_k\|\le\|d_k\|\to0$. This contradicts assumption (32).
Case (ii): $\liminf_{k\to\infty}\alpha_k=0$. That is, there is an infinite index set $K$ such that $\lim_{k\in K,\,k\to\infty}\alpha_k=0$. When $k\in K$ is sufficiently large, by the line search rule, the stepsize $\rho^{-1}\alpha_k$ does not satisfy inequality (20). This means

$f(x_k+\rho^{-1}\alpha_k d_k)-f(x_k)>-\delta\rho^{-2}\alpha_k^2\|d_k\|^2$.  (34)

By the mean-value theorem and the Lipschitz condition (17), there is a $t_k\in(0,1)$ such that

$f(x_k+\rho^{-1}\alpha_k d_k)-f(x_k)=\rho^{-1}\alpha_k\,g(x_k+t_k\rho^{-1}\alpha_k d_k)^\top d_k\le\rho^{-1}\alpha_k\,g_k^\top d_k+L\rho^{-2}\alpha_k^2\|d_k\|^2$.

Substituting the last inequality into (34), for all $k\in K$ sufficiently large, we have from (12) that

$\|Pg_k\|^2=-g_k^\top d_k<(L+\delta)\rho^{-1}\alpha_k\|d_k\|^2$.

Since $\{\|d_k\|\}$ is bounded by Lemma 4 and $\alpha_k\to0$ on $K$, the last inequality implies $\lim_{k\in K,\,k\to\infty}\|Pg_k\|=0$. This also yields a contradiction. The proof is then complete.

4. Improvement for Algorithm 2

In this section, we propose techniques for improving the efficiency of Algorithm 2 in practical computation, concerning the computation of projections and of the stepsize in the Armijo-type line search.

4.1. Computing Projection

In this paper, as in Gould et al. [34], instead of computing a basis for the null space of the matrix $A$, we choose to work directly with the matrix of constraint gradients and to compute projections via normal equations. Since the computation of the projection is a key step in the proposed method, following Gould et al. [34], this projection can be computed in an alternative way.

Let $w=Pg_k$. We can express this as

$w=g_k-A^\top v$,

where $v$ is the solution of

$AA^\top v=Ag_k$.  (40)

Note that (40) is a system of normal equations. Since $A$ is a matrix of rank $m$, $AA^\top$ is a symmetric positive definite matrix. We use the Doolittle (LU) factorization of $AA^\top$ to solve (40).

Since the matrix $A$ is constant, the factorization of $AA^\top$ needs to be carried out only once, at the beginning of the iterative process. Using the Doolittle factorization $AA^\top=LU$, (40) can be solved by the two triangular systems

$Ly=Ag_k$,  $Uv=y$,

where $L$ and $U$ are the Doolittle factors of $AA^\top$.
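In MATLAB, this can be organized as follows (a small sketch with an arbitrary example matrix; the handle name is ours):

    A = [1 2 3; 0 1 4];                      % example 2-by-3 full-rank matrix
    [L, U] = lu(A * A');                     % Doolittle (LU) factors, computed once
    Pg = @(g) g - A' * (U \ (L \ (A * g)));  % evaluates w = P*g via (40)
    w  = Pg([1; 1; 1]);                      % A*w is (numerically) zero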

4.2. Computing Stepsize

The drawback of the Armijo line search lies in the choice of the initial stepsize. If the initial stepsize is too large, then the procedure needs many more function evaluations. If it is too small, then the efficiency of the related algorithm is decreased. Therefore, we should choose an adequate initial stepsize at each iteration. In what follows, we propose a way to generate the initial stepsize.

We first estimate the stepsize determined by the exact line search. Suppose for the moment that $f$ is twice continuously differentiable. We denote by $G(x)$ the Hessian of $f$ at $x$ and abbreviate $G(x_k)$ as $G_k$. Notice that the exact line search stepsize $\alpha_k^*$ satisfies $g(x_k+\alpha_k^* d_k)^\top d_k=0$, so that, by a Taylor expansion, $g_k^\top d_k+\alpha_k^*\,d_k^\top G_k d_k\approx0$. This shows that the scalar $-\frac{g_k^\top d_k}{d_k^\top G_k d_k}$ is an estimation of $\alpha_k^*$. To avoid the computation of the second derivative, we further estimate $d_k^\top G_k d_k$ by letting

$d_k^\top G_k d_k\approx\frac{(g(x_k+\epsilon_k d_k)-g_k)^\top d_k}{\epsilon_k}$,

where the positive sequence $\{\epsilon_k\}$ satisfies $\epsilon_k\to0$ as $k\to\infty$. Let the initial stepsize of the Armijo line search be the resulting approximation

$z_k=-\frac{\epsilon_k\,g_k^\top d_k}{(g(x_k+\epsilon_k d_k)-g_k)^\top d_k}$.

It is not difficult to see that if $\epsilon_k$ is sufficiently small, then $z_k$ is a good estimation of $\alpha_k^*$.
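A sketch of this estimate in MATLAB (the function name and the safeguard are ours):

    function z = initial_step(gradf, x, g, d, eps_k)
    % Initial Armijo stepsize from a finite-difference curvature estimate.
    gd   = g' * d;
    curv = (gradf(x + eps_k * d) - g)' * d / eps_k;  % approximates d'*G*d
    if curv > 0
        z = -gd / curv;       % estimate of the exact line search stepsize
    else
        z = 1;                % fall back when the curvature estimate fails
    end
    end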

Thus, to improve the efficiency of the EMPRP method in practical computation, we utilize the following line search process.

Line Search Process. If the inequality

$f(x_k+z_k d_k)\le f(x_k)-\delta z_k^2\|d_k\|^2$  (45)

holds, then we let $\alpha_k=z_k$. Otherwise, we let $\alpha_k$ be the largest scalar in the set $\{z_k\rho^j:\ j=1,2,\dots\}$ such that inequality (45) is satisfied.
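Put together, the modified line search can be sketched as follows (a minimal MATLAB fragment consistent with (45); the function name is ours):

    function alpha = adaptive_armijo(fun, x, fx, d, z, rho, delta)
    % Armijo-type backtracking started from the adaptive initial step z.
    alpha = z;
    while fun(x + alpha * d) > fx - delta * alpha^2 * norm(d)^2
        alpha = rho * alpha;   % next scalar in {z*rho, z*rho^2, ...}
    end
    end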

5. Numerical Experiments

This section reports some numerical experiments. First, we test the EMPRP method and compare it with the Rosen gradient projection method [1] on low-dimensional problems. Second, we test the EMPRP method and compare it with the spectral gradient method of Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method of Li et al. [32] on high-dimensional problems. In the line search process, we use the same values of the parameters $\rho$, $\delta$, and $\{\epsilon_k\}$ for all methods.

The methods in the tables have the following meanings:
(i) "EMPRP" stands for the EMPRP method with the Armijo-type line search (20).
(ii) "ROSE" stands for the Rosen gradient projection method [1] with the Armijo-type line search (20); that is, Algorithm 2 with the direction $d_k=-Pg_k$.
(iii) "SPCG" stands for the spectral gradient method with the nonmonotone line search in Martínez et al. [30].
(iv) "FFR" stands for the feasible modified Fletcher-Reeves conjugate gradient method with the Armijo-type line search in Li et al. [32].

We stop the iteration if the condition $\|Pg_k\|\le\epsilon$ is satisfied for a small tolerance $\epsilon$. If the iteration number exceeds a prescribed maximum, we also stop the iteration; in that case we call the run a failure. All of the algorithms are coded in MATLAB 7.0 and run on a personal computer with a 2.0 GHz CPU.

5.1. Numerical Comparison of EMPRP and ROSE

We test the performance of the EMPRP and ROSE methods on the following test problems with given initial points. The results are listed in Table 1, where $n$ stands for the dimension of the tested problem and $m$ stands for the number of constraints. We report the following results: the CPU time Time (in seconds), the number of iterations Iter, the number of gradient evaluations Geval, and the number of function evaluations Feval.


Table 1: Numerical comparison of EMPRP and ROSE.

Name     n  m      EMPRP                           ROSE
                   Time    Iter  Geval  Feval      Time    Iter  Geval  Feval

HS28 3 1 0.0156 20 22 21 0.0468 71 72 143
HS48 5 2 0.0156 26 28 27 0.0468 65 66 122
HS49 5 2 0.0156 29 30 30 0.0780 193 194 336
HS50 5 3 0.0156 22 23 23 0.0624 76 77 139
HS51 5 3 0.0156 15 16 16 0.0624 80 81 148
HS52 5 3 0.0156 30 31 31 0.0312 70 71 141
HS50E1 10 8 0.0156 16 17 17 0.0312 63 64 127
HS50E2 20 18 0.0156 15 16 16 0.0936 63 64 127

Problem 1 (HS28 [35]). The function HS28 in [35] is defined as follows:

$f(x)=(x_1+x_2)^2+(x_2+x_3)^2$  s.t.  $x_1+2x_2+3x_3=1$,

with the initial point $x_0=(-4,1,1)^\top$. The optimal solution is $x^*=(0.5,-0.5,0.5)^\top$ with optimal function value $f(x^*)=0$.
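For illustration, this problem can be coded as follows (a small MATLAB sketch compatible with the solver sketch of Algorithm 2; the variable names are ours):

    % Problem HS28 (data from [35]).
    fun   = @(x) (x(1) + x(2))^2 + (x(2) + x(3))^2;
    gradf = @(x) [2*(x(1) + x(2));
                  2*(x(1) + x(2)) + 2*(x(2) + x(3));
                  2*(x(2) + x(3))];
    A  = [1 2 3];   b = 1;       % single linear equality constraint
    x0 = [-4; 1; 1];             % feasible initial point: A*x0 == b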

Problem 2 (HS48 [35]). The function HS48 in [35] is defined as follows:

$f(x)=(x_1-1)^2+(x_2-x_3)^2+(x_4-x_5)^2$  s.t.  $x_1+x_2+x_3+x_4+x_5=5$, $x_3-2(x_4+x_5)=-3$,

with the initial point $x_0=(3,5,-3,2,-2)^\top$. The optimal solution is $x^*=(1,1,1,1,1)^\top$ with optimal function value $f(x^*)=0$.

Problem 3 (HS49 [35]). The function HS49 in [35] is defined as follows:

$f(x)=(x_1-x_2)^2+(x_3-1)^2+(x_4-1)^4+(x_5-1)^6$  s.t.  $x_1+x_2+x_3+4x_4=7$, $x_3+5x_5=6$,

with the initial point $x_0=(10,7,2,-3,0.8)^\top$. The optimal solution is $x^*=(1,1,1,1,1)^\top$ with optimal function value $f(x^*)=0$.

Problem 4 (HS50 [35]). The function HS50 in [35] is defined as follows:

$f(x)=(x_1-x_2)^2+(x_2-x_3)^2+(x_3-x_4)^4+(x_4-x_5)^2$  s.t.  $x_1+2x_2+3x_3=6$, $x_2+2x_3+3x_4=6$, $x_3+2x_4+3x_5=6$,

with the initial point $x_0=(35,-31,11,5,-5)^\top$. The optimal solution is $x^*=(1,1,1,1,1)^\top$ with optimal function value $f(x^*)=0$. Moreover, we extend the dimension of function HS50 to 10 and 20 (problems HS50E1 and HS50E2 in Table 1), with correspondingly extended constraints and initial points.

Problem 5 (HS51 [35]). The function HS51 in [35] is defined as follows:

$f(x)=(x_1-x_2)^2+(x_2+x_3-2)^2+(x_4-1)^2+(x_5-1)^2$  s.t.  $x_1+3x_2=4$, $x_3+x_4-2x_5=0$, $x_2-x_5=0$,

with the initial point $x_0=(2.5,0.5,2,-1,0.5)^\top$. The optimal solution is $x^*=(1,1,1,1,1)^\top$ with optimal function value $f(x^*)=0$.

From Table 1, we can see that the EMPRP method performs better than the Rosen gradient projection method in [1], which implies that the EMPRP method can improve the computational efficiency of the Rosen gradient projection method for solving linear equality constrained optimization problems.

5.2. Numerical Comparison of EMPRP, FFR, and SPCG

In this subsection, we test the EMPRP method and compare it with the spectral gradient method (called SPCG) of Martínez et al. [30] and the feasible Fletcher-Reeves conjugate gradient method (called FFR) of Li et al. [32] on the following high-dimensional problems with given initial points. The results are listed in Tables 2, 3, 4, 5, and 6. We report the following results: the CPU time Time (in seconds) and the number of iterations Iter.


Table 2.

Size               EMPRP            FFR              SPCG
                   Time     Iter    Time     Iter    Time     Iter

50 99 49 0.0568 73 0.0660 80 0.0256 70
100 199 99 0.1048 103 0.1060 120 0.0756 90
200 399 199 0.0868 108 0.2660 123 0.0904 97
300 599 299 0.2252 121 0.4260 126 0.7504 123
400 799 399 0.3142 121 0.5150 126 0.9804 153
500 999 499 0.4045 138 0.7030 146 1.4320 176
1000 1999 999 2.6043 138 2.1560 146 3.3468 198
2000 3999 1999 8.2608 138 6.8280 146 9.1266 216
3000 5999 2999 22.4075 138 15.1720 146 28.2177 276
4000 7999 3999 40.1268 138 35.0460 146 56.0604 324
5000 9999 4999 105.6252 138 85.0460 146 128.1504 476


Table 3.

Size               EMPRP            FFR              SPCG
                   Time     Iter    Time     Iter    Time     Iter

200 99 0.1468 83 0.1679 94 0.1286 79
400 199 0.2608 94 0.2860 103 0.2266 90
600 299 0.6252 121 0.8422 132 0.7504 133
800 399 1.4075 123 1.6510 138 1.4177 149
1000 499 3.280 141 2.7190 146 2.8790 168
2000 999 8.4045 189 4.6202 202 5.1280 190
3000 1499 12.0042 219 9.7030 248 10.7864 260
4000 1999 17.2608 254 13.4255 288 19.1266 316
5000 2499 20.6252 321 19.0940 330 29.1504 383


Table 4.

Size               EMPRP            FFR              SPCG
                   Time     Iter    Time     Iter    Time     Iter

50 99 49 0.0668 78 0.0750 83 0.0286 75
100 199 99 0.0988 113 0.1462 130 0.0826 106
200 399 199 0.1978 123 0.2960 133 0.1864 124
300 599 299 0.3436 128 0.5260 136 0.3504 156
400 799 399 0.4142 134 0.7143 136 0.5804 186
500 999 499 0.6039 152 0.9030 158 0.9331 228
1000 1999 999 2.9065 152 2.9570 158 3.1260 268
2000 3999 1999 8.3408 152 7.8280 158 8.2266 298
3000 5999 2999 19.6086 152 17.4350 158 19.8672 330
4000 7999 3999 46.3268 152 38.1460 158 56.1256 368
5000 9999 4999 75.3972 152 58.3867 158 125.7680 398


Table 5.

Size               EMPRP            FFR              SPCG
                   Time     Iter    Time     Iter    Time     Iter

50 99 49 0.0568 65 0.0640 70 0.0266 62
100 199 99 0.0868 109 0.1362 125 0.0726 103
200 399 199 0.2110 120 0.2862 128 0.2064 124
300 599 299 0.3250 127 0.5062 132 0.4524 160
400 799 399 0.4543 138 0.7456 139 0.6804 192
500 999 499 0.6246 156 0.9268 155 1.0876 258
1000 1999 999 3.9245 156 3.4268 162 3.4560 279
2000 3999 1999 9.3404 156 8.6240 162 8.6280 320
3000 5999 2999 20.6125 156 19.6548 162 20.8656 358
4000 7999 3999 48.2890 156 44.4330 162 57.7680 386
5000 9999 4999 95.4680 156 83.8650 162 128.8760 420


Table 6.

Size               EMPRP            FFR              SPCG
                   Time     Iter    Time     Iter    Time     Iter

200 99 0.1842 88 0.1984 98 0.1488 92
400 199 0.2908 104 0.3260 113 0.2348 100
600 299 0.8256 132 0.9422 142 0.9804 182
800 399 1.6078 136 1.7512 148 2.6177 259
1000 499 2.9801 148 2.8192 156 3.1790 268
2000 999 7.8045 199 5.1202 212 8.1288 320
3000 1499 12.8042 248 9.9035 256 14.7862 360
4000 1999 19.2432 304 14.4258 298 22.1268 386
5000 2499 29.6846 382 20.1932 378 32.1422 393

Problem 6. Given a positive integer, the function is defined as follows: with the initial point . The optimal function value . This problem comes from Martínez et al. [30].

Problem 7. Given a positive integer, the function is defined as follows: with the initial point . This problem comes from Asaadi [36] and is called MAD6.

Problem 8. Given a positive integer, the function is defined as follows: with the initial point . The optimal function value .

Problem 9. Given a positive integer, the function is defined as follows: with the initial point . The optimal function value .

Problem 10. Given a positive integer, the function is defined as follows: with the initial point .

From Tables 2-6, we can see that the EMPRP method and the FFR method in [32] perform better than the SPCG method of Martínez et al. [30] for solving large-scale linear equality constrained optimization problems. The EMPRP and FFR methods are both first-order methods, whereas the SPCG method needs to compute $B_k$ by a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. However, since the EMPRP method also needs to compute projections, the FFR method in [32] performs better than the EMPRP method when the test problems become large.

6. Conclusions

In this paper, we propose a new conjugate gradient projection method for solving the linear equality constrained optimization problem (1), which combines the two-term modified Polak-Ribière-Polyak (PRP) conjugate gradient method of Cheng [7] with the Rosen projection method. The proposed method can also be regarded as an extension of the recently developed two-term modified PRP conjugate gradient method in Cheng [7]. Under some mild conditions, we show that it is globally convergent with the Armijo-type line search.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The author thanks the anonymous referees for their valuable comments and suggestions, which improved the presentation. This work is supported by the NSF of China (Grants 11301041, 11371154, 71371065, and 71371195), the Ministry of Education Humanities and Social Sciences Project (Grant 12YJC790027), the China Postdoctoral Science Foundation, and the Natural Science Foundation of Hunan Province.

References

1. J. B. Rosen, "The gradient projection method for nonlinear programming. I. Linear constraints," SIAM Journal on Applied Mathematics, vol. 8, no. 1, pp. 181–217, 1960.
2. D. Z. Du and X. S. Zhang, "A convergence theorem of Rosen's gradient projection method," Mathematical Programming, vol. 36, no. 2, pp. 135–144, 1986.
3. D. Du, "Remarks on the convergence of Rosen's gradient projection method," Acta Mathematicae Applicatae Sinica, vol. 3, no. 2, pp. 270–279, 1987.
4. D. Z. Du and X. S. Zhang, "Global convergence of Rosen's gradient projection method," Mathematical Programming, vol. 44, no. 1, pp. 357–366, 1989.
5. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, vol. 16, pp. 35–43, 1969.
6. B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
7. W. Cheng, "A two-term PRP-based descent method," Numerical Functional Analysis and Optimization, vol. 28, no. 11, pp. 1217–1230, 2007.
8. W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
9. G. Li, C. Tang, and Z. Wei, "New conjugacy condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, no. 2, pp. 523–539, 2007.
10. N. Andrei, "Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," European Journal of Operational Research, vol. 204, no. 3, pp. 410–420, 2010.
11. G. Yu, L. Guan, and W. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optimization Methods and Software, vol. 23, no. 2, pp. 275–293, 2008.
12. G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
13. Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
14. Y. Xiao and H. Zhu, "A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing," Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
15. Z. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927–936, 2011.
16. Y.-Y. Chen and S.-Q. Du, "Nonlinear conjugate gradient methods with Wolfe type line search," Abstract and Applied Analysis, vol. 2013, Article ID 742815, 5 pages, 2013.
17. S. Y. Liu, Y. Y. Huang, and H. W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, Article ID 305643, 12 pages, 2014.
18. Z. F. Dai, D. H. Li, and F. H. Wen, "Robust conditional value-at-risk optimization for asymmetrically distributed asset returns," Pacific Journal of Optimization, vol. 8, no. 3, pp. 429–445, 2012.
19. C. Huang, C. Peng, X. Chen, and F. Wen, "Dynamics analysis of a class of delayed economic model," Abstract and Applied Analysis, vol. 2013, Article ID 962738, 12 pages, 2013.
20. C. Huang, X. Gong, X. Chen, and F. Wen, "Measuring and forecasting volatility in Chinese stock market using HAR-CJ-M model," Abstract and Applied Analysis, vol. 2013, Article ID 143194, 13 pages, 2013.
21. G. Qin, C. Huang, Y. Xie, and F. Wen, "Asymptotic behavior for third-order quasi-linear differential equations," Advances in Difference Equations, vol. 30, no. 13, pp. 305–312, 2013.
22. C. Huang, H. Kuang, X. Chen, and F. Wen, "An LMI approach for dynamics of switched cellular neural networks with mixed delays," Abstract and Applied Analysis, vol. 2013, Article ID 870486, 8 pages, 2013.
23. Q. F. Cui, Z. G. Wang, X. Chen, and F. Wen, "Sufficient conditions for non-Bazilevic functions," Abstract and Applied Analysis, vol. 2013, Article ID 154912, 4 pages, 2013.
24. F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," Journal of Systems Science & Complexity, vol. 22, no. 3, pp. 360–371, 2009.
25. F. Wen and Z. Liu, "A copula-based correlation measure and its application in Chinese stock market," International Journal of Information Technology & Decision Making, vol. 8, no. 4, pp. 787–801, 2009.
26. F. Wen, Z. He, and X. Chen, "Investors' risk preference characteristics and conditional skewness," Mathematical Problems in Engineering, vol. 2014, Article ID 814965, 14 pages, 2014.
27. F. Wen, X. Gong, Y. Chao, and X. Chen, "The effects of prior outcomes on risky choice: evidence from the stock market," Mathematical Problems in Engineering, vol. 2014, Article ID 272518, 8 pages, 2014.
28. F. Wen, Z. He, X. Gong, and A. Liu, "Investors' risk preference characteristics based on different reference point," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 158386, 9 pages, 2014.
29. Z. Dai and F. Wen, "Robust CVaR-based portfolio optimization under a general affine data perturbation uncertainty set," Journal of Computational Analysis and Applications, vol. 16, no. 1, pp. 93–103, 2014.
30. J. M. Martínez, E. A. Pilotta, and M. Raydan, "Spectral gradient methods for linearly constrained optimization," Journal of Optimization Theory and Applications, vol. 125, no. 3, pp. 629–651, 2005.
31. C. Li and D. H. Li, "An extension of the Fletcher-Reeves method to linear equality constrained optimization problem," Applied Mathematics and Computation, vol. 219, no. 23, pp. 10909–10914, 2013.
32. C. Li, L. Fang, and X. Cui, "A feasible Fletcher-Reeves method to linear equality constrained optimization problem," in Proceedings of the International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '10), pp. 30–33, December 2010.
33. L. Zhang, W. J. Zhou, and D. H. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 2, pp. 561–572, 2006.
34. N. I. M. Gould, M. E. Hribar, and J. Nocedal, "On the solution of equality constrained quadratic programming problems arising in optimization," SIAM Journal on Scientific Computing, vol. 23, no. 4, pp. 1376–1395, 2001.
35. W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Springer, New York, NY, USA, 1981.
36. J. Asaadi, "A computational comparison of some non-linear programs," Mathematical Programming, vol. 4, no. 1, pp. 144–154, 1973.

Copyright © 2014 Zhifeng Dai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
