Special Issue: Applications of Fixed Point and Approximate Algorithms

Research Article | Open Access

Volume 2012 | Article ID 641276 | 13 pages | https://doi.org/10.1155/2012/641276

# Global Convergence of a Modified Spectral Conjugate Gradient Method

Revised 25 Oct 2011
Accepted 25 Oct 2011
Published 12 Dec 2011

#### Abstract

A modified spectral PRP conjugate gradient method is presented for solving unconstrained optimization problems. The constructed search direction is proved to be a sufficient descent direction of the objective function. With an Armijo-type line search to determine the step length, a new spectral PRP conjugate gradient algorithm is developed. Under some mild conditions, a global convergence theory is established. Numerical results demonstrate that this algorithm is promising, particularly in comparison with existing similar methods.

#### 1. Introduction

It has recently been shown that conjugate gradient methods are efficient and powerful for solving large-scale unconstrained minimization problems, owing to their low memory requirements and simple computations, and many variants of conjugate gradient algorithms have been developed. However, as has been pointed out in the literature, there remain many theoretical and computational challenges in applying these methods to unconstrained optimization problems. Indeed, 14 open problems on conjugate gradient methods have been posed. These problems concern the selection of the initial direction, the computation of the step length and of the conjugacy parameter based on the values of the objective function, the influence of the accuracy of the line search procedure on the efficiency of the conjugate gradient algorithm, and so forth.

The general model of an unconstrained optimization problem is

$$\min_{x \in \mathbb{R}^n} f(x), \tag{1.1}$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable such that its gradient is available. Let $g_k$ denote the gradient of $f$ at $x_k$, and let $x_0$ be an arbitrary initial approximate solution of (1.1). Then, when a standard conjugate gradient method is used to solve (1.1), a sequence of solutions is generated by

$$x_{k+1} = x_k + \alpha_k d_k, \tag{1.2}$$

where $\alpha_k$ is the step length chosen by some line search method and $d_k$ is the search direction defined by

$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{1.3}$$

where $\beta_k$ is called the conjugacy parameter. For a strictly convex quadratic program, $\beta_k$ can be chosen appropriately such that $d_k$ and $d_{k-1}$ are conjugate with respect to the Hessian matrix of the objective function. If $\beta_k$ is taken as

$$\beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \tag{1.4}$$

where $\|\cdot\|$ stands for the Euclidean norm of a vector, then (1.2)–(1.4) is called the Polak-Ribière-Polyak (PRP) conjugate gradient method (see [8, 18]).
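The iteration (1.2)–(1.4) can be sketched in a few lines of code. The following is a minimal illustration, not the paper's algorithm: it pairs the PRP update with a plain backtracking Armijo line search (an assumed choice) and restarts with the steepest descent direction whenever the PRP direction fails to be a descent direction, a standard safeguard that the basic scheme (1.2)–(1.4) itself does not include.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm2(u):
    return dot(u, u)

def prp_beta(g_new, g_old):
    # beta_k^PRP = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2, as in (1.4)
    y = [a - b for a, b in zip(g_new, g_old)]
    return dot(g_new, y) / norm2(g_old)

def prp_cg(f, grad, x, tol=1e-8, max_iter=500):
    g = grad(x)
    d = [-gi for gi in g]                       # d_0 = -g_0, as in (1.3)
    for _ in range(max_iter):
        if norm2(g) ** 0.5 <= tol:
            break
        # backtracking Armijo line search (illustrative choice of alpha_k)
        alpha, delta = 1.0, 1e-4
        while f([xi + alpha * di for xi, di in zip(x, d)]) > f(x) + delta * alpha * dot(g, d):
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]       # (1.2)
        g_new = grad(x)
        beta = prp_beta(g_new, g)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]   # (1.3), k >= 1
        if dot(g_new, d) >= 0:                  # safeguard: restart if not a descent direction
            d = [-gi for gi in g_new]
        g = g_new
    return x
```

On a small strongly convex quadratic such as $f(x) = x_1^2 + 2 x_2^2$, this sketch drives the gradient to zero in a handful of iterations.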

It is well known that the PRP method terminates finitely when the objective function is a strongly convex quadratic and the exact line search is used. Furthermore, for a twice continuously differentiable, strongly convex objective function, global convergence has also been proved. However, it is nontrivial to establish a global convergence theory under an inexact line search, especially for general nonconvex minimization problems. Quite recently, many modified PRP conjugate gradient methods have been studied (see, e.g., [10–13, 17]). In these methods, the search direction is constructed to possess the sufficient descent property, and global convergence is established under various line search strategies. Following a similar idea, a new spectral PRP conjugate gradient algorithm is developed in this paper. On the one hand, we present a new spectral conjugate gradient direction, which also possesses the sufficient descent property. On the other hand, a modified Armijo-type line search strategy is incorporated into the algorithm. Numerical experiments are used to compare it with some similar algorithms.

The rest of this paper is organized as follows. In the next section, the new spectral PRP conjugate gradient method is proposed. Section 3 is devoted to proving its global convergence. In Section 4, numerical experiments are reported to test its efficiency, especially in comparison with existing methods. Some concluding remarks are given in the last section.

#### 2. New Spectral PRP Conjugate Gradient Algorithm

In this section, we first study how to determine a descent direction of the objective function.

Let $x_k$ be the current iterate, and let the search direction $d_k$ be defined by (2.1), where $\beta_k$ is specified by (1.4) and the spectral parameter $\theta_k$ is given by (2.2).

It is noted that the direction $d_k$ given by (2.1) and (2.2) is different from those in [3, 16, 17], whether in the choice of $\theta_k$ or in that of $\beta_k$.

We first prove that $d_k$ is a sufficient descent direction.

Lemma 2.1. Suppose that $d_k$ is given by (2.1) and (2.2). Then, the descent property (2.3) holds for any $k \ge 0$.

Proof. Firstly, for $k = 0$, it is easy to see that (2.3) is true, since $d_0 = -g_0$.
Secondly, assume that (2.3) holds for some $k - 1 \ge 0$. Then, from (1.4), (2.1), and (2.2), it follows that (2.3) is also true with $k - 1$ replaced by $k$. By mathematical induction, we obtain the desired result.

From Lemma 2.1, it is known that $d_k$ is a descent direction of $f$ at $x_k$. Furthermore, if the exact line search is used, then $g_k^T d_{k-1} = 0$; hence the proposed spectral PRP conjugate gradient method reduces to the standard PRP method. However, the exact line search is often time-consuming and sometimes unnecessary. In the following, we develop a new algorithm in which the search direction is chosen by (2.1)-(2.2) and the step size is determined by an Armijo-type inexact line search.

Algorithm 2.2 (modified spectral PRP conjugate gradient algorithm). We have the following steps.

Step 1. Given the required constants, choose an initial point $x_0$, and set $k := 0$.
Step 2. If the stopping criterion is satisfied (i.e., $\|g_k\|$ is sufficiently small), the algorithm stops. Otherwise, compute $d_k$ by (2.1)-(2.2), and go to Step 3.
Step 3. Determine a step length $\alpha_k$ satisfying (2.7).
Step 4. Set $x_{k+1} = x_k + \alpha_k d_k$ and $k := k + 1$. Return to Step 2.
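Since the defining formulas (2.1), (2.2), and (2.7) are referenced by number only in this excerpt, the following sketch of Algorithm 2.2's loop structure rests on two explicit assumptions: the spectral parameter is taken as $\theta_k = 1 + \beta_k^{PRP}\, g_k^T d_{k-1} / \|g_k\|^2$, a choice that enforces $g_k^T d_k = -\|g_k\|^2$ by construction, and a plain backtracking Armijo rule stands in for the modified line search (2.7).

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def spectral_prp(f, grad, x, eps=1e-8, delta=1e-4, rho=0.5, max_iter=1000):
    # Sketch of Algorithm 2.2 under ASSUMED forms of (2.1), (2.2), and (2.7).
    g = grad(x)
    d = [-gi for gi in g]                       # Step 1: d_0 = -g_0
    for _ in range(max_iter):
        if dot(g, g) ** 0.5 <= eps:             # Step 2: stopping test
            break
        alpha = 1.0                             # Step 3: backtracking stand-in for (2.7)
        while f([xi + alpha * di for xi, di in zip(x, d)]) > f(x) + delta * alpha * dot(g, d):
            alpha *= rho
        x = [xi + alpha * di for xi, di in zip(x, d)]   # Step 4: x_{k+1} = x_k + alpha_k d_k
        g_new = grad(x)
        gn2 = dot(g_new, g_new)
        if gn2 == 0.0:                          # exact stationary point reached
            return x
        y = [a - b for a, b in zip(g_new, g)]
        beta = dot(g_new, y) / dot(g, g)                        # (1.4)
        theta = 1.0 + beta * dot(g_new, d) / gn2                # assumed form of (2.2)
        d = [-theta * gn + beta * di for gn, di in zip(g_new, d)]  # assumed form of (2.1)
        # with this theta, g_new^T d = -||g_new||^2 < 0 holds by construction
        g = g_new
    return x
```

The assumed $\theta_k$ makes the new direction a descent direction at every iteration, so the backtracking loop always terminates; this mirrors the role Lemma 2.1 and Proposition 2.3 play for the actual algorithm.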

Since $d_k$ is a descent direction of $f$ at $x_k$, we now prove that there must exist a step length $\alpha_k$ satisfying the inequality (2.7).

Proposition 2.3. Let $f$ be a continuously differentiable function, and suppose that $d$ is a descent direction of $f$ at $x$. Then, there exists a step length satisfying the acceptance inequality (2.8), where $g$ is the gradient vector of $f$ at $x$ and the remaining parameters are given constant scalars.

Proof. We only need to prove that an acceptable step length is obtained after finitely many reductions. If this is not true, then the acceptance inequality fails for every sufficiently large positive integer $m$. Thus, by the mean value theorem, there is an intermediate point on the trial segment at which the directional derivative equals the corresponding difference quotient. Letting $m \to \infty$, it follows that $g^T d \ge 0$. This contradicts the condition that $d$ is a descent direction.
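The finite-termination argument of Proposition 2.3 corresponds to a simple backtracking loop. The sketch below uses the standard Armijo test $f(x + \alpha d) \le f(x) + \delta \alpha g^T d$ as a stand-in, since the exact inequality (2.7) is not reproduced in this excerpt; for a descent direction, the loop must accept some trial step $\alpha = \beta^m$ after finitely many reductions.

```python
def backtrack(f, fx, gTd, x, d, delta=1e-4, beta=0.5, max_m=60):
    # fx = f(x), gTd = g^T d < 0 for a descent direction d at x.
    assert gTd < 0, "d must be a descent direction"
    alpha = 1.0
    for _ in range(max_m):
        # standard Armijo acceptance test (stand-in for inequality (2.7))
        if f([xi + alpha * di for xi, di in zip(x, d)]) <= fx + delta * alpha * gTd:
            return alpha
        alpha *= beta                 # try the next trial step beta^m
    raise RuntimeError("no acceptable step found within max_m reductions")
```

For example, with $f(x) = x^2$ at $x = 2$ and $d = -1$ the unit step is accepted immediately, while an over-long direction such as $d = -10$ at $x = 1$ forces a few halvings before acceptance.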

Remark 2.4. From Proposition 2.3, it is known that Algorithm 2.2 is well defined. In addition, it is easy to see that the modified Armijo-type line search (2.7) yields a larger decrease at each step than the standard Armijo rule.

#### 3. Global Convergence

In this section, we study the global convergence of Algorithm 2.2. We first state the following mild assumptions, which will be used in the proof of global convergence.

Assumption 3.1. The level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded.

Assumption 3.2. In some neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N. \tag{3.1}$$

Since $\{f(x_k)\}$ is decreasing, it is clear from Assumption 3.1 that the sequence $\{x_k\}$ generated by Algorithm 2.2 is contained in a bounded region. So, there exists a convergent subsequence of $\{x_k\}$; without loss of generality, it can be supposed that $\{x_k\}$ is convergent. On the other hand, it follows from Assumption 3.2 that there is a constant $\gamma > 0$ such that

$$\|g(x)\| \le \gamma, \quad \forall x \in \Omega. \tag{3.2}$$

Hence, the sequence $\{g_k\}$ is bounded.

In the following, we first prove that the step size at each iteration is bounded below by a positive quantity.

Lemma 3.3. Under Assumption 3.2, there exists a constant $c > 0$ such that the inequality (3.3) holds for all sufficiently large $k$.

Proof. Firstly, from the line search rule (2.7), we know that $\alpha_k \le 1$.
If $\alpha_k = 1$, then inequality (3.3) holds directly; the contrary assumption would contradict (2.3).
If $\alpha_k < 1$, then the line search rule (2.7) implies that the trial step preceding $\alpha_k$ does not satisfy the inequality (2.7).
By the mean value theorem, there is an intermediate point on the corresponding trial segment, and using the Lipschitz condition (3.1) together with the gradient bound (3.2), the rejected trial step, and hence $\alpha_k$, can be bounded from below. From Lemma 2.1, the resulting bound can be written in terms of $\|g_k\|^2$ and $\|d_k\|^2$.
Taking the constant in (3.3) accordingly, the desired inequality holds.

From Lemmas 2.1 and 3.3 and Assumption 3.1, we can prove the following result.

Lemma 3.4. Under Assumptions 3.1 and 3.2, the following results hold:

Proof. From the line search rule (2.7) and Assumption 3.1, there exists a constant bounding the total decrease of $f$ along the iterates. Then, from Lemma 2.1, the first conclusion is proved.
Since the corresponding series is convergent, its general term tends to zero. Thus, the second conclusion (3.14) is obtained.

At the end of this section, we establish the global convergence theorem for Algorithm 2.2.

Theorem 3.5. Under Assumptions 3.1 and 3.2, it holds that

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Suppose, to the contrary, that there exists a positive constant $\epsilon$ such that $\|g_k\| \ge \epsilon$ for all $k$. Then, from (2.1), an expression for $d_k$ follows. Dividing both sides of this equality by an appropriate quantity, and using (1.4), (2.3), (3.1), and (3.21), we obtain a recursive estimate. From (3.14) in Lemma 3.4, the terms in this estimate tend to zero; thus, there exists a sufficiently large number $N$ such that the required inequalities hold for all $k \ge N$.
Therefore, for $k \ge N$, the estimate accumulates with a nonnegative constant.
The last inequality leads to a divergence that contradicts the result of Lemma 3.4.
The global convergence theorem is established.

#### 4. Numerical Experiments

In this section, we report the numerical performance of Algorithm 2.2. We test Algorithm 2.2 on 15 benchmark problems and compare its numerical performance with that of other similar methods, including the standard PRP conjugate gradient method, the modified FR conjugate gradient method, and the modified PRP conjugate gradient method. These algorithms differ from one another in either the updating formula or the line search rule.

All codes are written in MATLAB 7.0.1 and run on a PC with a 2.0 GHz CPU, 1 GB of RAM, and the Windows XP operating system.

The parameters are chosen as follows:

In Tables 1 and 2, the following notation is used:

- Dim: the dimension of the objective function;
- GV: the gradient value of the objective function when the algorithm stops;
- NI: the number of iterations;
- NF: the number of function evaluations;
- CT: the CPU run time;
- mfr: the modified FR conjugate gradient method;
- prp: the standard PRP conjugate gradient method;
- msprp: the modified PRP conjugate gradient method;
- mprp: the new algorithm developed in this paper.

Table 1

| Function | Algorithm | Dim | GV | NI | NF | CT(s) |
| --- | --- | --- | --- | --- | --- | --- |
| Rosenbrock | mfr | 2 | 8.8818e-007 | 328 | 7069 | 0.2970 |
| Rosenbrock | prp | 2 | 9.2415e-007 | 760 | 41189 | 1.4370 |
| Rosenbrock | mprp | 2 | 8.6092e-007 | 124 | 2816 | 0.0940 |
| Rosenbrock | msprp | 2 | 6.9643e-007 | 122 | 2597 | 0.1400 |
| Freudenstein and Roth | mfr | 2 | 5.5723e-007 | 236 | 5110 | 0.2190 |
| Freudenstein and Roth | prp | 2 | 7.1422e-007 | 331 | 18798 | 0.6250 |
| Freudenstein and Roth | mprp | 2 | 2.4666e-007 | 67 | 1904 | 0.0940 |
| Freudenstein and Roth | msprp | 2 | 8.6967e-007 | 62 | 1437 | 0.0780 |
| Brown badly scaled | mfr | 2 | — | — | — | — |
| Brown badly scaled | prp | 2 | — | — | — | — |
| Brown badly scaled | mprp | 2 | 7.9892e-007 | 105 | 10279 | 0.2030 |
| Brown badly scaled | msprp | 2 | 7.6029e-007 | 70 | 7117 | 0.2660 |
| Beale | mfr | 2 | 6.1730e-007 | 74 | 714 | 0.0780 |
| Beale | prp | 2 | 8.2455e-007 | 292 | 12568 | 0.4370 |
| Beale | mprp | 2 | 6.2257e-007 | 130 | 1539 | 0.0940 |
| Beale | msprp | 2 | 8.7861e-007 | 91 | 877 | 0.0470 |
| Powell singular | mfr | 4 | 9.9827e-007 | 4122 | 10578 | 0.6870 |
| Powell singular | prp | 4 | — | — | — | — |
| Powell singular | mprp | 4 | 9.6909e-007 | 13565 | 218964 | 5.2660 |
| Powell singular | msprp | 4 | 9.8512e-007 | 11893 | 169537 | 7.2500 |
| Wood | mfr | 4 | 7.7937e-007 | 263 | 5787 | 0.2660 |
| Wood | prp | 4 | 9.9841e-007 | 1284 | 69501 | 2.3440 |
| Wood | mprp | 4 | 9.6484e-007 | 280 | 6432 | 0.1720 |
| Wood | msprp | 4 | 7.9229e-007 | 404 | 9643 | 0.4070 |
| Extended Powell singular | mfr | 4 | 9.9827e-007 | 4122 | 10578 | 0.6800 |
| Extended Powell singular | prp | 4 | — | — | — | — |
| Extended Powell singular | mprp | 4 | 9.6909e-007 | 13565 | 218964 | 5.5310 |
| Extended Powell singular | msprp | 4 | 9.8512e-007 | 11893 | 169537 | 7.4070 |
| Broyden tridiagonal | mfr | 4 | 4.8451e-007 | 53 | 784 | 0.0630 |
| Broyden tridiagonal | prp | 4 | 6.6626e-007 | 87 | 4460 | 0.1180 |
| Broyden tridiagonal | mprp | 4 | 5.8166e-007 | 39 | 430 | 0.0320 |
| Broyden tridiagonal | msprp | 4 | 9.7196e-007 | 52 | 785 | 0.0780 |
Table 2

| Function | Algorithm | Dim | GV | NI | NF | CT(s) |
| --- | --- | --- | --- | --- | --- | --- |
| Kowalik and Osborne | mfr | 4 | — | — | — | — |
| Kowalik and Osborne | prp | 4 | 8.9521e-007 | 833 | 26191 | 1.2970 |
| Kowalik and Osborne | mprp | 4 | 9.9698e-007 | 6235 | 35425 | 3.5940 |
| Kowalik and Osborne | msprp | 4 | 9.9560e-007 | 7059 | 37976 | 4.9850 |
| Broyden banded | mfr | 6 | 8.9469e-007 | 40 | 505 | 0.0780 |
| Broyden banded | prp | 6 | 8.4684e-007 | 268 | 9640 | 0.4840 |
| Broyden banded | mprp | 6 | 8.9029e-007 | 102 | 1319 | 0.0940 |
| Broyden banded | msprp | 6 | 9.3276e-007 | 44 | 556 | 0.0940 |
| Discrete boundary value | mfr | 6 | 9.1531e-007 | 107 | 509 | 0.0780 |
| Discrete boundary value | prp | 6 | 7.8970e-007 | 269 | 11449 | 0.4690 |
| Discrete boundary value | mprp | 6 | 8.28079e-007 | 157 | 1473 | 0.0930 |
| Discrete boundary value | msprp | 6 | 9.9436e-007 | 165 | 1471 | 0.1410 |
| Variably dimensioned | mfr | 8 | 7.3411e-007 | 57 | 1233 | 0.1250 |
| Variably dimensioned | prp | 8 | 7.3411e-007 | 113 | 7403 | 0.3290 |
| Variably dimensioned | mprp | 8 | 9.0900e-007 | 69 | 1544 | 0.0780 |
| Variably dimensioned | msprp | 8 | 7.3411e-007 | 57 | 1233 | 0.1100 |
| Broyden tridiagonal | mfr | 9 | 9.1815e-007 | 129 | 2173 | 0.1250 |
| Broyden tridiagonal | prp | 9 | 6.4584e-007 | 113 | 5915 | 0.2500 |
| Broyden tridiagonal | mprp | 9 | 7.3529e-007 | 187 | 2967 | 0.1250 |
| Broyden tridiagonal | msprp | 9 | 9.2363e-007 | 82 | 1304 | 0.1100 |
| Linear (rank 1) | mfr | 10 | 9.7462e-007 | 84 | 3762 | 0.1720 |
| Linear (rank 1) | prp | 10 | 4.5647e-007 | 98 | 6765 | 0.2810 |
| Linear (rank 1) | mprp | 10 | 6.9140e-007 | 51 | 2216 | 0.0780 |
| Linear (rank 1) | msprp | 10 | 6.6630e-007 | 50 | 2162 | 0.1250 |
| Linear (full rank) | mfr | 12 | 7.6919e-007 | 9 | 36 | 0.0160 |
| Linear (full rank) | prp | 12 | 8.2507e-007 | 47 | 1904 | 0.1090 |
| Linear (full rank) | mprp | 12 | 7.6919e-007 | 9 | 36 | 0.0630 |
| Linear (full rank) | msprp | 12 | 7.6919e-007 | 9 | 36 | 0.0150 |
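As a quick way to read the tables, function-evaluation ratios can be computed for the problems that two solvers both completed. The snippet below transcribes the NF column of Table 1 for prp and mprp on those problems (the dictionary layout and variable names are illustrative, not from the paper):

```python
# NF counts transcribed from Table 1 for problems solved by both prp and mprp.
table1_nf = {
    "Rosenbrock": (41189, 2816),            # (prp NF, mprp NF)
    "Freudenstein and Roth": (18798, 1904),
    "Beale": (12568, 1539),
    "Wood": (69501, 6432),
    "Broyden tridiagonal": (4460, 430),
}

# ratio > 1 means mprp needed fewer function evaluations than prp
ratios = {name: prp / mprp for name, (prp, mprp) in table1_nf.items()}
```

On these five problems, mprp uses roughly an order of magnitude fewer function evaluations than prp, which is consistent with the paper's conclusion that the new method is promising.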

The above numerical experiments show that the algorithm proposed in this paper is promising.

#### 5. Conclusion

In this paper, a new spectral PRP conjugate gradient algorithm has been developed for solving unconstrained minimization problems. Under some mild conditions, its global convergence has been proved with an Armijo-type line search rule. Compared with other similar algorithms, its numerical performance is promising.

#### Acknowledgments

The authors would like to thank the anonymous referees for their constructive comments, which have improved the presentation of this paper. This work was supported by the National Natural Science Foundation of China (Grant nos. 71071162 and 70921001).

#### References

1. N. Andrei, “Acceleration of conjugate gradient algorithms for unconstrained optimization,” Applied Mathematics and Computation, vol. 213, no. 2, pp. 361–369, 2009.
2. N. Andrei, “Open problems in nonlinear conjugate gradient algorithms for unconstrained optimization,” Bulletin of the Malaysian Mathematical Sciences Society, vol. 34, no. 2, pp. 319–330, 2011.
3. E. G. Birgin and J. M. Martínez, “A spectral conjugate gradient method for unconstrained optimization,” Applied Mathematics and Optimization, vol. 43, no. 2, pp. 117–128, 2001.
4. S.-Q. Du and Y.-Y. Chen, “Global convergence of a modified spectral FR conjugate gradient method,” Applied Mathematics and Computation, vol. 202, no. 2, pp. 766–770, 2008.
5. J. C. Gilbert and J. Nocedal, “Global convergence properties of conjugate gradient methods for optimization,” SIAM Journal on Optimization, vol. 2, no. 1, pp. 21–42, 1992.
6. L. Grippo and S. Lucidi, “A globally convergent version of the Polak-Ribière conjugate gradient method,” Mathematical Programming, vol. 78, no. 3, pp. 375–391, 1997.
7. J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.
8. B. T. Polyak, “The conjugate gradient method in extremal problems,” USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
9. Z. J. Shi, “A restricted Polak-Ribière conjugate gradient method and its global convergence,” Advances in Mathematics, vol. 31, no. 1, pp. 47–55, 2002.
10. Z. Wan, C. M. Hu, and Z. L. Yang, “A spectral PRP conjugate gradient method for nonconvex optimization problems based on modified line search,” Discrete and Continuous Dynamical Systems: Series B, vol. 16, no. 4, pp. 1157–1169, 2011.
11. Z. Wan, Z. Yang, and Y. Wang, “New spectral PRP conjugate gradient method for unconstrained optimization,” Applied Mathematics Letters, vol. 24, no. 1, pp. 16–22, 2011.
12. Z. X. Wei, G. Y. Li, and L. Q. Qi, “Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems,” Mathematics of Computation, vol. 77, no. 264, pp. 2173–2193, 2008.
13. G. Yu, L. Guan, and Z. Wei, “Globally convergent Polak-Ribière-Polyak conjugate gradient methods under a modified Wolfe line search,” Applied Mathematics and Computation, vol. 215, no. 8, pp. 3082–3090, 2009.
14. G. Yuan, X. Lu, and Z. Wei, “A conjugate gradient method with descent direction for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 233, no. 2, pp. 519–530, 2009.
15. G. Yuan, “Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems,” Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
16. L. Zhang, W. Zhou, and D. Li, “Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search,” Numerische Mathematik, vol. 104, no. 4, pp. 561–572, 2006.
17. L. Zhang, W. Zhou, and D.-H. Li, “A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence,” IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
18. E. Polak and G. Ribière, “Note sur la convergence de méthodes de directions conjuguées,” Revue Française d'Informatique et de Recherche Opérationnelle, vol. 3, no. 16, pp. 35–43, 1969.
19. J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
