Journal of Applied Mathematics
Volume 2012 (2012), Article ID 641276, 13 pages
http://dx.doi.org/10.1155/2012/641276
Research Article

Global Convergence of a Modified Spectral Conjugate Gradient Method

1Department of Electronic Information Technology, Hunan Vocational College of Commerce, Hunan, Changsha 410205, China
2School of Mathematical Sciences and Computing Technology, Central South University, Hunan, Changsha 410083, China

Received 20 September 2011; Revised 25 October 2011; Accepted 25 October 2011

Academic Editor: Giuseppe Marino

Copyright © 2012 Huabin Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A modified spectral PRP conjugate gradient method is presented for solving unconstrained optimization problems. The constructed search direction is proved to be a sufficient descent direction of the objective function. With an Armijo-type line search to determine the step length, a new spectral PRP conjugate gradient algorithm is developed. Under some mild conditions, global convergence is established. Numerical results demonstrate that the algorithm is promising, particularly in comparison with existing similar methods.

1. Introduction

Recently, it has been shown that conjugate gradient methods are efficient and powerful for solving large-scale unconstrained minimization problems, owing to their low memory requirements and simple computations. For example, many variants of conjugate gradient algorithms have been developed in [1–17]. However, as pointed out in [2], there remain many theoretical and computational challenges in applying these methods to unconstrained optimization problems. In fact, 14 open problems on conjugate gradient methods are presented in [2]. These problems concern the selection of the initial direction, the computation of the step length and of the conjugacy parameter based on the values of the objective function, the influence of the accuracy of the line search procedure on the efficiency of the conjugate gradient algorithm, and so forth.

The general model of an unconstrained optimization problem is
\[
\min f(x), \quad x \in R^n, \tag{1.1}
\]
where $f: R^n \to R$ is continuously differentiable, so that its gradient is available. Let $g(x)$ denote the gradient of $f$ at $x$, and let $x_0$ be an arbitrary initial approximate solution of (1.1). When a standard conjugate gradient method is used to solve (1.1), a sequence of solutions is generated by
\[
x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, \ldots, \tag{1.2}
\]
where $\alpha_k$ is the step length chosen by some line search method and $d_k$ is the search direction defined by
\[
d_k =
\begin{cases}
-g_k & \text{if } k = 0,\\
-g_k + \beta_k d_{k-1} & \text{if } k > 0,
\end{cases} \tag{1.3}
\]
where $\beta_k$ is called the conjugacy parameter and $g_k$ denotes the value of $g(x_k)$. For a strictly convex quadratic program, $\beta_k$ can be chosen so that $d_k$ and $d_{k-1}$ are conjugate with respect to the Hessian matrix of the objective function. If $\beta_k$ is taken as
\[
\beta_k = \beta_k^{\mathrm{PRP}} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \tag{1.4}
\]
where $\|\cdot\|$ stands for the Euclidean norm of a vector, then (1.2)–(1.4) constitute the Polak-Ribière-Polyak (PRP) conjugate gradient method (see [8, 18]).
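To make iteration (1.2)–(1.4) concrete, the following Python sketch implements the standard PRP method with an exact-gradient oracle. This is a minimal illustration on our part, not part of the original text: the backtracking Armijo rule, the steepest-descent restart safeguard, and all names are illustrative assumptions.

```python
import numpy as np

def prp_cg(f, grad, x0, eps=1e-6, max_iter=1000):
    """Standard PRP conjugate gradient method, iteration (1.2)-(1.4) (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # (1.3) with k = 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break
        if g @ d >= 0:                        # added safeguard: restart if d is not a descent direction
            d = -g
        # illustrative backtracking Armijo rule to pick alpha_k in (1.2)
        alpha, c1, shrink = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= shrink
        x_new = x + alpha * d                 # iterate update (1.2)
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)  # PRP parameter (1.4)
        d = -g_new + beta * d                 # (1.3) with k > 0
        x, g = x_new, g_new
    return x
```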

It is well known that the PRP method terminates finitely when the objective function is a strongly convex quadratic and the exact line search is used. Furthermore, for a twice continuously differentiable strongly convex objective function, global convergence has also been proved in [7]. However, it seems nontrivial to establish global convergence under an inexact line search, especially for a general nonconvex minimization problem. Quite recently, many modified PRP conjugate gradient methods have been studied (see, e.g., [10–13, 17]). In these methods, the search direction is constructed to possess the sufficient descent property, and global convergence is established with different line search strategies. In [17], the search direction $d_k$ is given by
\[
d_k =
\begin{cases}
-g_k & \text{if } k = 0,\\
-g_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \theta_k y_{k-1} & \text{if } k > 0,
\end{cases} \tag{1.5}
\]
where
\[
\theta_k = \frac{g_k^T d_{k-1}}{\|g_{k-1}\|^2}, \qquad y_{k-1} = g_k - g_{k-1}, \qquad s_{k-1} = x_k - x_{k-1}. \tag{1.6}
\]
Following the idea in [17], a new spectral PRP conjugate gradient algorithm is developed in this paper. On the one hand, we present a new spectral conjugate gradient direction, which also possesses the sufficient descent property. On the other hand, a modified Armijo-type line search strategy is incorporated into the developed algorithm. Numerical experiments are used to compare the algorithm with some similar ones.

The rest of this paper is organized as follows. In the next section, the new spectral PRP conjugate gradient method is proposed. Section 3 is devoted to proving its global convergence. In Section 4, numerical experiments are reported to test its efficiency, especially in comparison with other existing methods. Some concluding remarks are given in the last section.

2. New Spectral PRP Conjugate Gradient Algorithm

In this section, we first study how to determine a descent direction of the objective function.

Let $x_k$ be the current iterate, and let $d_k$ be defined by
\[
d_k =
\begin{cases}
-g_k & \text{if } k = 0,\\
-\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1} & \text{if } k > 0,
\end{cases} \tag{2.1}
\]
where $\beta_k^{\mathrm{PRP}}$ is specified by (1.4) and
\[
\theta_k = \frac{d_{k-1}^T y_{k-1}}{\|g_{k-1}\|^2} - \frac{(d_{k-1}^T g_k)(g_k^T g_{k-1})}{\|g_k\|^2 \|g_{k-1}\|^2}. \tag{2.2}
\]

It is noted that the direction $d_k$ given by (2.1) and (2.2) differs from those in [3, 16, 17] in the choice of either $\theta_k$ or $\beta_k$.
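In code, the direction (2.1)-(2.2) depends only on the current gradient, the previous gradient, and the previous direction. The following Python sketch is a direct transcription of the two formulas; the function and variable names are ours, not the authors'.

```python
import numpy as np

def spectral_prp_direction(g, g_prev, d_prev):
    """Direction (2.1) for k > 0, with theta_k from (2.2) and beta_k^PRP from (1.4)."""
    y_prev = g - g_prev                            # y_{k-1} = g_k - g_{k-1}
    gp2 = g_prev @ g_prev                          # ||g_{k-1}||^2
    g2 = g @ g                                     # ||g_k||^2
    beta_prp = g @ y_prev / gp2                    # (1.4)
    theta = (d_prev @ y_prev) / gp2 \
            - (d_prev @ g) * (g @ g_prev) / (g2 * gp2)   # (2.2)
    return -theta * g + beta_prp * d_prev          # (2.1)
```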

We first prove that $d_k$ is a sufficient descent direction.

Lemma 2.1. Suppose that $d_k$ is given by (2.1) and (2.2). Then
\[
g_k^T d_k = -\|g_k\|^2 \tag{2.3}
\]
holds for any $k \ge 0$.

Proof. Firstly, for $k = 0$, (2.3) holds trivially since $d_0 = -g_0$.
Secondly, assume that
\[
d_{k-1}^T g_{k-1} = -\|g_{k-1}\|^2 \tag{2.4}
\]
holds for $k - 1$ with $k \ge 1$. Then, from (1.4), (2.1), and (2.2), it follows that
\[
\begin{aligned}
g_k^T d_k &= -\theta_k \|g_k\|^2 + \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}\, d_{k-1}^T g_k\\
&= -\frac{d_{k-1}^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}\, \|g_k\|^2
  + \frac{(d_{k-1}^T g_k)(g_k^T g_{k-1})}{\|g_k\|^2 \|g_{k-1}\|^2}\, \|g_k\|^2
  + \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}\, d_{k-1}^T g_k\\
&= \frac{d_{k-1}^T g_{k-1}}{\|g_{k-1}\|^2}\, \|g_k\|^2
 = -\frac{\|g_{k-1}\|^2}{\|g_{k-1}\|^2}\, \|g_k\|^2 = -\|g_k\|^2.
\end{aligned} \tag{2.5}
\]
Thus, (2.3) also holds with $k - 1$ replaced by $k$. By mathematical induction, we obtain the desired result.
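The identity (2.3) is also easy to check numerically: for random gradients, with a previous direction constructed to satisfy the induction hypothesis (2.4), the direction computed from (2.1)-(2.2) reproduces $g_k^T d_k = -\|g_k\|^2$ to machine precision. The following small self-contained check is ours; the dimension and random seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
g_prev = rng.standard_normal(n)
g = rng.standard_normal(n)
# build d_prev satisfying the induction hypothesis (2.4): d_{k-1}^T g_{k-1} = -||g_{k-1}||^2
w = rng.standard_normal(n)
d_prev = -g_prev + w - (w @ g_prev / (g_prev @ g_prev)) * g_prev

y_prev = g - g_prev
beta = g @ y_prev / (g_prev @ g_prev)                                    # (1.4)
theta = (d_prev @ y_prev) / (g_prev @ g_prev) \
        - (d_prev @ g) * (g @ g_prev) / ((g @ g) * (g_prev @ g_prev))    # (2.2)
d = -theta * g + beta * d_prev                                           # (2.1)

print(np.isclose(g @ d, -(g @ g)))   # True: the sufficient descent identity (2.3)
```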

From Lemma 2.1, $d_k$ is a descent direction of $f$ at $x_k$. Furthermore, if the exact line search is used, then $g_k^T d_{k-1} = 0$; hence
\[
\theta_k = \frac{d_{k-1}^T y_{k-1}}{\|g_{k-1}\|^2} - \frac{(d_{k-1}^T g_k)(g_k^T g_{k-1})}{\|g_k\|^2 \|g_{k-1}\|^2}
 = -\frac{d_{k-1}^T g_{k-1}}{\|g_{k-1}\|^2} = 1. \tag{2.6}
\]
In this case, the proposed spectral PRP conjugate gradient method reduces to the standard PRP method. However, the exact line search is often time-consuming and sometimes unnecessary. In the following, we develop a new algorithm in which the search direction $d_k$ is chosen by (2.1)-(2.2) and the step size is determined by an Armijo-type inexact line search.

Algorithm 2.2 (Modified Spectral PRP Conjugate Gradient Algorithm). We have the following steps.
Step 1. Given constants $\delta_1, \rho \in (0,1)$, $\delta_2 > 0$, and $\epsilon > 0$, choose an initial point $x_0 \in R^n$ and set $k = 0$.
Step 2. If $\|g_k\| \le \epsilon$, the algorithm stops. Otherwise, compute $d_k$ by (2.1)-(2.2) and go to Step 3.
Step 3. Determine a step length $\alpha_k = \max\{\rho^j,\; j = 0, 1, 2, \ldots\}$ such that
\[
f(x_k + \alpha_k d_k) \le f(x_k) + \delta_1 \alpha_k g_k^T d_k - \delta_2 \alpha_k^2 \|d_k\|^2. \tag{2.7}
\]
Step 4. Set $x_{k+1} = x_k + \alpha_k d_k$ and $k = k + 1$. Return to Step 2.
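A direct transcription of Algorithm 2.2 in Python might look as follows. This is a sketch under the stated rules, not the authors' MATLAB code; the iteration cap is an added safeguard, and by Lemma 2.1 and Proposition 2.3 the backtracking loop terminates in exact arithmetic.

```python
import numpy as np

def modified_spectral_prp(f, grad, x0, eps=1e-6, rho=0.75,
                          delta1=0.1, delta2=1.0, max_iter=10_000):
    """Sketch of Algorithm 2.2 (modified spectral PRP conjugate gradient method)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                             # (2.1) with k = 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                   # Step 2: stopping test
            break
        # Step 3: alpha_k = max{rho^j : j = 0, 1, 2, ...} satisfying (2.7)
        alpha, fx, gTd, dTd = 1.0, f(x), g @ d, d @ d
        while f(x + alpha * d) > fx + delta1 * alpha * gTd - delta2 * alpha**2 * dTd:
            alpha *= rho
        x_new = x + alpha * d                          # Step 4: update the iterate
        g_new = grad(x_new)
        # next direction: (2.1) with theta_k from (2.2) and beta_k^PRP from (1.4)
        y = g_new - g
        beta = (g_new @ y) / (g @ g)
        theta = (d @ y) / (g @ g) - (d @ g_new) * (g_new @ g) / ((g_new @ g_new) * (g @ g))
        d = -theta * g_new + beta * d
        x, g = x_new, g_new
    return x
```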

Since $d_k$ is a descent direction of $f$ at $x_k$, we now prove that there must exist $j_0$ such that $\alpha_k = \rho^{j_0}$ satisfies inequality (2.7).

Proposition 2.3. Let $f: R^n \to R$ be a continuously differentiable function, and suppose that $d$ is a descent direction of $f$ at $x$. Then, there exists $j_0$ such that
\[
f(x + \alpha d) \le f(x) + \delta_1 \alpha g^T d - \delta_2 \alpha^2 \|d\|^2, \tag{2.8}
\]
where $\alpha = \rho^{j_0}$, $g$ is the gradient of $f$ at $x$, and $\delta_1, \rho \in (0,1)$ and $\delta_2 > 0$ are given constant scalars.

Proof. We only need to prove that such a step length $\alpha$ is obtained in finitely many trials. If this were not true, then for every sufficiently large positive integer $m$ we would have
\[
f(x + \rho^m d) - f(x) > \delta_1 \rho^m g^T d - \delta_2 \rho^{2m} \|d\|^2. \tag{2.9}
\]
Thus, by the mean value theorem, there is a $\theta \in (0,1)$ such that
\[
\rho^m g(x + \theta \rho^m d)^T d > \delta_1 \rho^m g^T d - \delta_2 \rho^{2m} \|d\|^2, \tag{2.10}
\]
which reads
\[
\bigl(g(x + \theta \rho^m d) - g\bigr)^T d > (\delta_1 - 1) g^T d - \delta_2 \rho^m \|d\|^2. \tag{2.11}
\]
Letting $m \to \infty$, it is obtained that
\[
(\delta_1 - 1)\, g^T d \le 0. \tag{2.12}
\]
From $\delta_1 \in (0,1)$, it follows that $g^T d \ge 0$. This contradicts the assumption that $d$ is a descent direction.

Remark 2.4. From Proposition 2.3, Algorithm 2.2 is well defined. In addition, it is easy to see that the modified Armijo-type line search (2.7) yields a larger decrease at each step than the standard Armijo rule.

3. Global Convergence

In this section, we are in a position to study the global convergence of Algorithm 2.2. We first state the following mild assumptions, which will be used in the proof of global convergence.

Assumption 3.1. The level set $\Omega = \{x \in R^n \mid f(x) \le f(x_0)\}$ is bounded.

Assumption 3.2. In some neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
\[
\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N. \tag{3.1}
\]

Since $\{f(x_k)\}$ is decreasing, the sequence $\{x_k\}$ generated by Algorithm 2.2 is contained in a bounded region by Assumption 3.1. So, there exists a convergent subsequence of $\{x_k\}$. Without loss of generality, we may suppose that $\{x_k\}$ is convergent. On the other hand, from Assumption 3.2, it follows that there is a constant $\gamma_1 > 0$ such that
\[
\|g(x)\| \le \gamma_1, \quad \forall x \in \Omega. \tag{3.2}
\]
Hence, the sequence $\{g_k\}$ is bounded.

In the following, we first prove that the step size $\alpha_k$ at each iteration is large enough.

Lemma 3.3. Under Assumption 3.2, there exists a constant $m > 0$ such that the inequality
\[
\alpha_k \ge m\, \frac{\|g_k\|^2}{\|d_k\|^2} \tag{3.3}
\]
holds for all $k$ sufficiently large.

Proof. Firstly, from the line search rule in Step 3 of Algorithm 2.2, we know that $\alpha_k \le 1$.
If $\alpha_k = 1$, then $\|g_k\| \le \|d_k\|$. Indeed, $\|g_k\| > \|d_k\|$ would imply
\[
\|g_k\|^2 > \|g_k\| \|d_k\| \ge -g_k^T d_k, \tag{3.4}
\]
which contradicts (2.3). Therefore, (3.3) holds for any $m \le 1$.
If $0 < \alpha_k < 1$, then the line search rule (2.7) implies that $\rho^{-1}\alpha_k$ does not satisfy (2.7), that is,
\[
f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) > \delta_1 \rho^{-1}\alpha_k g_k^T d_k - \delta_2 \rho^{-2}\alpha_k^2 \|d_k\|^2. \tag{3.5}
\]
Since
\[
\begin{aligned}
f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k)
&= \rho^{-1}\alpha_k\, g(x_k + t_k \rho^{-1}\alpha_k d_k)^T d_k\\
&= \rho^{-1}\alpha_k g_k^T d_k + \rho^{-1}\alpha_k \bigl(g(x_k + t_k \rho^{-1}\alpha_k d_k) - g_k\bigr)^T d_k\\
&\le \rho^{-1}\alpha_k g_k^T d_k + L \rho^{-2}\alpha_k^2 \|d_k\|^2,
\end{aligned} \tag{3.6}
\]
where $t_k \in (0,1)$ satisfies $x_k + t_k \rho^{-1}\alpha_k d_k \in N$ and the last inequality follows from (3.1), it is obtained from (3.5) and (3.6) that
\[
\delta_1 \rho^{-1}\alpha_k g_k^T d_k - \delta_2 \rho^{-2}\alpha_k^2 \|d_k\|^2 < \rho^{-1}\alpha_k g_k^T d_k + L \rho^{-2}\alpha_k^2 \|d_k\|^2. \tag{3.7}
\]
This reads
\[
(1 - \delta_1)\rho^{-1}\alpha_k g_k^T d_k + (L + \delta_2)\rho^{-2}\alpha_k^2 \|d_k\|^2 > 0, \tag{3.8}
\]
that is,
\[
(L + \delta_2)\rho^{-1}\alpha_k \|d_k\|^2 > (\delta_1 - 1)\, g_k^T d_k. \tag{3.9}
\]
Therefore,
\[
\alpha_k > \frac{(\delta_1 - 1)\rho\, g_k^T d_k}{(L + \delta_2)\|d_k\|^2}. \tag{3.10}
\]
From Lemma 2.1, it follows that
\[
\alpha_k > \frac{\rho (1 - \delta_1) \|g_k\|^2}{(L + \delta_2)\|d_k\|^2}. \tag{3.11}
\]
Taking
\[
m = \min\Bigl\{1,\; \frac{\rho(1 - \delta_1)}{L + \delta_2}\Bigr\}, \tag{3.12}
\]
the desired inequality (3.3) holds.

From Lemmas 2.1 and 3.3 and Assumption 3.1, we can prove the following result.

Lemma 3.4. Under Assumptions 3.1 and 3.2, the following results hold:
\[
\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty, \tag{3.13}
\]
\[
\lim_{k \to \infty} \alpha_k^2 \|d_k\|^2 = 0. \tag{3.14}
\]

Proof. From the line search rule (2.7) and Assumption 3.1, there exists a constant $M$ such that
\[
\sum_{k=0}^{n-1} \bigl(-\delta_1 \alpha_k g_k^T d_k + \delta_2 \alpha_k^2 \|d_k\|^2\bigr)
 \le \sum_{k=0}^{n-1} \bigl(f(x_k) - f(x_{k+1})\bigr) = f(x_0) - f(x_n) < 2M. \tag{3.15}
\]
Then, from Lemmas 2.1 and 3.3, we have
\[
\begin{aligned}
2M &\ge \sum_{k=0}^{n-1} \bigl(-\delta_1 \alpha_k g_k^T d_k + \delta_2 \alpha_k^2 \|d_k\|^2\bigr)
 = \sum_{k=0}^{n-1} \bigl(\delta_1 \alpha_k \|g_k\|^2 + \delta_2 \alpha_k^2 \|d_k\|^2\bigr)\\
&\ge \sum_{k=0}^{n-1} \Bigl(\delta_1 m \frac{\|g_k\|^2}{\|d_k\|^2}\|g_k\|^2
  + \delta_2 m^2 \frac{\|g_k\|^4}{\|d_k\|^4}\|d_k\|^2\Bigr)
 = \sum_{k=0}^{n-1} (\delta_1 + \delta_2 m)\, m\, \frac{\|g_k\|^4}{\|d_k\|^2}. \tag{3.16}
\end{aligned}
\]
Therefore, the first conclusion is proved.
Since
\[
2M \ge \sum_{k=0}^{n-1} \bigl(\delta_1 \alpha_k \|g_k\|^2 + \delta_2 \alpha_k^2 \|d_k\|^2\bigr)
 \ge \delta_2 \sum_{k=0}^{n-1} \alpha_k^2 \|d_k\|^2, \tag{3.17}
\]
the series
\[
\sum_{k=0}^{\infty} \alpha_k^2 \|d_k\|^2 \tag{3.18}
\]
is convergent. Thus,
\[
\lim_{k \to \infty} \alpha_k^2 \|d_k\|^2 = 0. \tag{3.19}
\]
The second conclusion (3.14) is obtained.

At the end of this section, we establish the global convergence theorem for Algorithm 2.2.

Theorem 3.5. Under Assumptions 3.1 and 3.2, it holds that
\[
\liminf_{k \to \infty} \|g_k\| = 0. \tag{3.20}
\]

Proof. Suppose on the contrary that there exists a constant $\epsilon > 0$ such that
\[
\|g_k\| \ge \epsilon \tag{3.21}
\]
for all $k$. Then, from (2.1), it follows that
\[
\begin{aligned}
\|d_k\|^2 = d_k^T d_k
&= \bigl(-\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1}\bigr)^T \bigl(-\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1}\bigr)\\
&= \theta_k^2 \|g_k\|^2 - 2\theta_k \beta_k^{\mathrm{PRP}} d_{k-1}^T g_k + \bigl(\beta_k^{\mathrm{PRP}}\bigr)^2 \|d_{k-1}\|^2\\
&= \theta_k^2 \|g_k\|^2 - 2\theta_k \bigl(d_k^T g_k + \theta_k \|g_k\|^2\bigr) + \bigl(\beta_k^{\mathrm{PRP}}\bigr)^2 \|d_{k-1}\|^2\\
&= \bigl(\beta_k^{\mathrm{PRP}}\bigr)^2 \|d_{k-1}\|^2 - 2\theta_k d_k^T g_k - \theta_k^2 \|g_k\|^2,
\end{aligned} \tag{3.22}
\]
where the third equality uses $\beta_k^{\mathrm{PRP}} d_{k-1} = d_k + \theta_k g_k$ from (2.1). Dividing both sides of this equality by $(g_k^T d_k)^2 = \|g_k\|^4$, then from (1.4), (2.3), (3.1), and (3.21), we obtain
\[
\begin{aligned}
\frac{\|d_k\|^2}{\|g_k\|^4}
&= \frac{\bigl(\beta_k^{\mathrm{PRP}}\bigr)^2 \|d_{k-1}\|^2 - 2\theta_k d_k^T g_k - \theta_k^2 \|g_k\|^2}{\|g_k\|^4}\\
&= \frac{\bigl(g_k^T (g_k - g_{k-1})\bigr)^2}{\|g_{k-1}\|^4}\,\frac{\|d_{k-1}\|^2}{\|g_k\|^4}
 - \frac{(\theta_k - 1)^2}{\|g_k\|^2} + \frac{1}{\|g_k\|^2}\\
&\le \frac{\|g_k - g_{k-1}\|^2}{\|g_k\|^2}\,\frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4}
 - \frac{(\theta_k - 1)^2}{\|g_k\|^2} + \frac{1}{\|g_k\|^2}\\
&\le \frac{\|g_k - g_{k-1}\|^2}{\|g_k\|^2}\,\frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2}
 \le \frac{L^2 \alpha_{k-1}^2 \|d_{k-1}\|^2}{\epsilon^2}\,\frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2}. \tag{3.23}
\end{aligned}
\]
From (3.14) in Lemma 3.4, it follows that
\[
\lim_{k \to \infty} \alpha_{k-1}^2 \|d_{k-1}\|^2 = 0. \tag{3.24}
\]
Thus, there exists a sufficiently large number $k_0$ such that for all $k \ge k_0$,
\[
0 \le \alpha_{k-1}^2 \|d_{k-1}\|^2 < \frac{\epsilon^2}{L^2}. \tag{3.25}
\]
Therefore, for $k \ge k_0$,
\[
\frac{\|d_k\|^2}{\|g_k\|^4} \le \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2}
 \le \cdots \le \frac{\|d_{k_0}\|^2}{\|g_{k_0}\|^4} + \sum_{i=k_0+1}^{k} \frac{1}{\|g_i\|^2}
 \le \frac{C_0}{\epsilon^2} + \sum_{i=k_0+1}^{k} \frac{1}{\epsilon^2} = \frac{C_0 + k - k_0}{\epsilon^2}, \tag{3.26}
\]
where $C_0 = \epsilon^2 \|d_{k_0}\|^2 / \|g_{k_0}\|^4$ is a nonnegative constant.
The last inequality implies
\[
\sum_{k \ge 1} \frac{\|g_k\|^4}{\|d_k\|^2} \ge \sum_{k > k_0} \frac{\|g_k\|^4}{\|d_k\|^2}
 \ge \epsilon^2 \sum_{k > k_0} \frac{1}{C_0 + k - k_0} = \infty, \tag{3.27}
\]
which contradicts the result of Lemma 3.4.
The global convergence theorem is thus established.

4. Numerical Experiments

In this section, we report the numerical performance of Algorithm 2.2. We test it on 15 benchmark problems from [19] and compare its performance with that of other similar methods, including the standard PRP conjugate gradient method in [6], the modified FR conjugate gradient method in [16], and the modified PRP conjugate gradient method in [17]. These algorithms differ from one another in either the updating formula or the line search rule.

All codes are written in MATLAB 7.0.1 and run on a PC with a 2.0 GHz CPU, 1 GB of RAM, and the Windows XP operating system.

The parameters are chosen as follows:
\[
\epsilon = 10^{-6}, \quad \rho = 0.75, \quad \delta_1 = 0.1, \quad \delta_2 = 1. \tag{4.1}
\]
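The experiments reported here were coded in MATLAB; purely as an illustration of how a run with the parameter values (4.1) is set up, the Python snippet below applies the modified_spectral_prp sketch given after Algorithm 2.2 (a hypothetical helper of ours, not the authors' code) to the two-dimensional Rosenbrock function from the test set of [19], with its standard starting point (-1.2, 1).

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

x_star = modified_spectral_prp(rosenbrock, rosenbrock_grad,
                               x0=np.array([-1.2, 1.0]),
                               eps=1e-6, rho=0.75, delta1=0.1, delta2=1.0)
print(x_star)   # the iterates should approach the minimizer (1, 1)
```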

In Tables 1 and 2, the following notation is used:
Dim: the dimension of the objective function;
GV: the gradient value of the objective function when the algorithm stops;
NI: the number of iterations;
NF: the number of function evaluations;
CT: the CPU run time;
mfr: the modified FR conjugate gradient method in [16];
prp: the standard PRP conjugate gradient method in [6];
msprp: the modified PRP conjugate gradient method in [17];
mprp: the new algorithm developed in this paper.

Table 1: Comparison of efficiency with the other methods.
Table 2: Comparison of efficiency with the other methods.

The above numerical experiments show that the algorithm proposed in this paper is promising.

5. Conclusion

In this paper, a new spectral PRP conjugate gradient algorithm has been developed for solving unconstrained minimization problems. Under some mild conditions, its global convergence has been proved with an Armijo-type line search rule. Compared with other similar algorithms, the numerical performance of the developed algorithm is promising.

Acknowledgments

The authors would like to thank the anonymous referees for their constructive comments, which have improved the presentation of this paper. This work is supported by the National Natural Science Foundation of China (Grant nos. 71071162, 70921001).

References

1. N. Andrei, "Acceleration of conjugate gradient algorithms for unconstrained optimization," Applied Mathematics and Computation, vol. 213, no. 2, pp. 361–369, 2009.
2. N. Andrei, "Open problems in nonlinear conjugate gradient algorithms for unconstrained optimization," Bulletin of the Malaysian Mathematical Sciences Society, vol. 34, no. 2, pp. 319–330, 2011.
3. E. G. Birgin and J. M. Martínez, "A spectral conjugate gradient method for unconstrained optimization," Applied Mathematics and Optimization, vol. 43, no. 2, pp. 117–128, 2001.
4. S.-Q. Du and Y.-Y. Chen, "Global convergence of a modified spectral FR conjugate gradient method," Applied Mathematics and Computation, vol. 202, no. 2, pp. 766–770, 2008.
5. J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization," SIAM Journal on Optimization, vol. 2, no. 1, pp. 21–42, 1992.
6. L. Grippo and S. Lucidi, "A globally convergent version of the Polak-Ribière conjugate gradient method," Mathematical Programming, vol. 78, no. 3, pp. 375–391, 1997.
7. J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.
8. B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
9. Z. J. Shi, "A restricted Polak-Ribière conjugate gradient method and its global convergence," Advances in Mathematics, vol. 31, no. 1, pp. 47–55, 2002.
10. Z. Wan, C. M. Hu, and Z. L. Yang, "A spectral PRP conjugate gradient methods for nonconvex optimization problem based on modified line search," Discrete and Continuous Dynamical Systems: Series B, vol. 16, no. 4, pp. 1157–1169, 2011.
11. Z. Wan, Z. Yang, and Y. Wang, "New spectral PRP conjugate gradient method for unconstrained optimization," Applied Mathematics Letters, vol. 24, no. 1, pp. 16–22, 2011.
12. Z. X. Wei, G. Y. Li, and L. Q. Qi, "Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems," Mathematics of Computation, vol. 77, no. 264, pp. 2173–2193, 2008.
13. G. Yu, L. Guan, and Z. Wei, "Globally convergent Polak-Ribière-Polyak conjugate gradient methods under a modified Wolfe line search," Applied Mathematics and Computation, vol. 215, no. 8, pp. 3082–3090, 2009.
14. G. Yuan, X. Lu, and Z. Wei, "A conjugate gradient method with descent direction for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 233, no. 2, pp. 519–530, 2009.
15. G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
16. L. Zhang, W. Zhou, and D. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 4, pp. 561–572, 2006.
17. L. Zhang, W. Zhou, and D.-H. Li, "A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence," IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
18. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 3, no. 16, pp. 35–43, 1969.
19. J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.