Abstract

A modified spectral PRP conjugate gradient method is presented for solving unconstrained optimization problems. The constructed search direction is proved to be a sufficient descent direction of the objective function. With an Armijo-type line search to determine the step length, a new spectral PRP conjugate gradient algorithm is developed. Under some mild conditions, global convergence is established. Numerical results demonstrate that the algorithm is promising, particularly in comparison with similar existing methods.

1. Introduction

In recent years, the conjugate gradient method has proved efficient and powerful for solving large-scale unconstrained minimization problems owing to its low memory requirement and simple computation. For example, many variants of conjugate gradient algorithms are developed in [1–17]. However, as pointed out in [2], there remain many theoretical and computational challenges in applying these methods to unconstrained optimization problems. In fact, 14 open problems on conjugate gradient methods are presented in [2]. These problems concern the selection of the initial direction, the computation of the step length and of the conjugacy parameter based on the values of the objective function, the influence of the accuracy of the line search procedure on the efficiency of conjugate gradient algorithms, and so forth.

The general model of an unconstrained optimization problem is as follows:
$$\min f(x), \quad x \in \mathbb{R}^n, \tag{1.1}$$
where $f: \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable such that its gradient is available. Let $g(x)$ denote the gradient of $f$ at $x$, and let $x_0$ be an arbitrary initial approximate solution of (1.1). When a standard conjugate gradient method is used to solve (1.1), a sequence of iterates is generated by
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, \ldots, \tag{1.2}$$
where $\alpha_k$ is the step length chosen by some line search method and $d_k$ is the search direction defined by
$$d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1} & \text{if } k > 0, \end{cases} \tag{1.3}$$
where $\beta_k$ is called the conjugacy parameter and $g_k$ denotes $g(x_k)$. For a strictly convex quadratic program, $\beta_k$ can be chosen so that $d_k$ and $d_{k-1}$ are conjugate with respect to the Hessian matrix of the objective function. If $\beta_k$ is taken as
$$\beta_k = \beta_k^{\mathrm{PRP}} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \tag{1.4}$$
where $\|\cdot\|$ stands for the Euclidean norm of a vector, then (1.2)–(1.4) are called the Polak-Ribière-Polyak (PRP) conjugate gradient method (see [8, 18]).
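As an illustration of the update (1.2)–(1.4), the following minimal Python sketch computes the PRP conjugacy parameter and the resulting search direction. The function names and the NumPy-based setting are illustrative assumptions; they are not part of the methods in [8, 18].

```python
import numpy as np

def prp_beta(g_new, g_old):
    # PRP conjugacy parameter (1.4): beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
    return float(g_new @ (g_new - g_old)) / float(g_old @ g_old)

def prp_direction(g_new, g_old, d_old):
    # Search direction (1.3): d_k = -g_k + beta_k * d_{k-1}
    return -g_new + prp_beta(g_new, g_old) * d_old
```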

It is well known that the PRP method terminates finitely when the objective function is a strongly convex quadratic and an exact line search is used. Furthermore, in [7], global convergence was also proved for a twice continuously differentiable strongly convex objective function. However, it seems nontrivial to establish a global convergence theory under inexact line searches, especially for a general nonconvex minimization problem. Quite recently, many modified PRP conjugate gradient methods have been studied (see, e.g., [10–13, 17]). In these methods, the search direction is constructed to possess the sufficient descent property, and global convergence is established with different line search strategies. In [17], the search direction $d_k$ is given by
$$d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -g_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \theta_k y_{k-1} & \text{if } k > 0, \end{cases} \tag{1.5}$$
where
$$\theta_k = \frac{g_k^T d_{k-1}}{\|g_{k-1}\|^2}, \qquad y_{k-1} = g_k - g_{k-1}, \qquad s_{k-1} = x_k - x_{k-1}. \tag{1.6}$$
Following the idea in [17], a new spectral PRP conjugate gradient algorithm is developed in this paper. On the one hand, we present a new spectral conjugate gradient direction, which also possesses the sufficient descent property. On the other hand, a modified Armijo-type line search strategy is incorporated into the algorithm. Numerical experiments are used to compare the proposed algorithm with several similar ones.

The rest of this paper is organized as follows. In the next section, a new spectral PRP conjugate gradient method is proposed. Section 3 is devoted to proving global convergence. In Section 4, numerical experiments are carried out to test the efficiency of the method, especially in comparison with other existing methods. Some concluding remarks are given in the last section.

2. New Spectral PRP Conjugate Gradient Algorithm

In this section, we first study how to determine a descent direction of the objective function.

Let $x_k$ be the current iterate, and let $d_k$ be defined by
$$d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1} & \text{if } k > 0, \end{cases} \tag{2.1}$$
where $\beta_k^{\mathrm{PRP}}$ is specified by (1.4) and
$$\theta_k = \frac{d_{k-1}^T y_{k-1}}{\|g_{k-1}\|^2} - \frac{d_{k-1}^T g_k \, g_k^T g_{k-1}}{\|g_k\|^2 \|g_{k-1}\|^2}. \tag{2.2}$$
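For clarity, the spectral direction (2.1)-(2.2) can be transcribed directly into the following Python sketch; the routine name and the NumPy setting are illustrative assumptions.

```python
import numpy as np

def spectral_prp_direction(g_new, g_old, d_old):
    """Direction (2.1)-(2.2): d_k = -theta_k * g_k + beta_k^PRP * d_{k-1}."""
    y = g_new - g_old                      # y_{k-1} = g_k - g_{k-1}
    gg_old = float(g_old @ g_old)          # ||g_{k-1}||^2
    gg_new = float(g_new @ g_new)          # ||g_k||^2
    beta = float(g_new @ y) / gg_old       # beta_k^PRP from (1.4)
    theta = (float(d_old @ y) / gg_old
             - float(d_old @ g_new) * float(g_new @ g_old) / (gg_new * gg_old))  # (2.2)
    return -theta * g_new + beta * d_old
```

By Lemma 2.1 below, a direction computed in this way satisfies $g_k^T d_k = -\|g_k\|^2$.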

It is noted that the $d_k$ given by (2.1) and (2.2) differs from those in [3, 16, 17], either in the choice of $\theta_k$ or in that of $\beta_k$.

We first prove that $d_k$ is a sufficient descent direction.

Lemma 2.1. Suppose that $d_k$ is given by (2.1) and (2.2). Then,
$$g_k^T d_k = -\|g_k\|^2 \tag{2.3}$$
holds for any $k \ge 0$.

Proof. Firstly, for $k = 0$, it is easy to see that (2.3) is true since $d_0 = -g_0$.
Secondly, assume that
$$d_{k-1}^T g_{k-1} = -\|g_{k-1}\|^2 \tag{2.4}$$
holds for $k - 1$ with $k \ge 1$. Then, from (1.4), (2.1), and (2.2), it follows that
$$\begin{aligned}
g_k^T d_k &= -\theta_k \|g_k\|^2 + \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}\, d_{k-1}^T g_k \\
&= -\frac{d_{k-1}^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}\, g_k^T g_k + \frac{d_{k-1}^T g_k \, g_k^T g_{k-1}}{\|g_k\|^2 \|g_{k-1}\|^2}\, g_k^T g_k + \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}\, d_{k-1}^T g_k \\
&= \frac{d_{k-1}^T g_{k-1}}{\|g_{k-1}\|^2}\, g_k^T g_k = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}\left(-\|g_{k-1}\|^2\right) = -\|g_k\|^2.
\end{aligned} \tag{2.5}$$
Thus, (2.3) also holds with $k - 1$ replaced by $k$. By mathematical induction, the desired result is obtained.

From Lemma 2.1, it is known that $d_k$ is a descent direction of $f$ at $x_k$. Furthermore, if the exact line search is used, then $g_k^T d_{k-1} = 0$; hence
$$\theta_k = \frac{d_{k-1}^T y_{k-1}}{\|g_{k-1}\|^2} - \frac{d_{k-1}^T g_k \, g_k^T g_{k-1}}{\|g_k\|^2 \|g_{k-1}\|^2} = -\frac{d_{k-1}^T g_{k-1}}{\|g_{k-1}\|^2} = 1. \tag{2.6}$$
In this case, the proposed spectral PRP conjugate gradient method reduces to the standard PRP method. However, the exact line search is often time-consuming and sometimes unnecessary. In the following, we develop a new algorithm, in which the search direction $d_k$ is chosen by (2.1)-(2.2) and the step size is determined by an Armijo-type inexact line search.

Algorithm 2.2 (Modified Spectral PRP Conjugate Gradient Algorithm). We have the following steps.
Step 1. Given constants $\delta_1, \rho \in (0, 1)$, $\delta_2 > 0$, and $\epsilon > 0$, choose an initial point $x_0 \in \mathbb{R}^n$. Let $k := 0$.
Step 2. If $\|g_k\| \le \epsilon$, then the algorithm stops. Otherwise, compute $d_k$ by (2.1)-(2.2), and go to Step 3.
Step 3. Determine a step length $\alpha_k = \max\{\rho^j, \ j = 0, 1, 2, \ldots\}$ such that
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta_1 \alpha_k g_k^T d_k - \delta_2 \alpha_k^2 \|d_k\|^2. \tag{2.7}$$
Step 4. Set $x_{k+1} := x_k + \alpha_k d_k$ and $k := k + 1$. Return to Step 2.
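The following Python sketch summarizes Algorithm 2.2. The paper's experiments were coded in MATLAB; this transcription (including the driver name, the iteration cap, and the reuse of the illustrative helper `spectral_prp_direction` above) is an assumption for exposition, not the authors' implementation.

```python
import numpy as np

def modified_spectral_prp(f, grad, x0, delta1=0.1, delta2=1.0, rho=0.75,
                          eps=1e-6, max_iter=10000):
    """Algorithm 2.2: spectral PRP directions (2.1)-(2.2) with the
    Armijo-type line search (2.7)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # d_0 = -g_0
    k = 0
    for k in range(max_iter):
        if np.linalg.norm(g) <= eps:        # Step 2: stopping test
            break
        # Step 3: backtracking, alpha_k = largest rho^j satisfying (2.7)
        fx, gTd, dd = f(x), float(g @ d), float(d @ d)
        alpha = 1.0
        while f(x + alpha * d) > fx + delta1 * alpha * gTd - delta2 * alpha**2 * dd:
            alpha *= rho
        # Step 4: update the iterate, then the direction via (2.1)-(2.2)
        x_new = x + alpha * d
        g_new = grad(x_new)
        d = spectral_prp_direction(g_new, g, d)
        x, g = x_new, g_new
    return x, g, k
```

By Lemma 2.1 and Proposition 2.3 below, the backtracking loop in Step 3 terminates after finitely many reductions of `alpha`.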

Since $d_k$ is a descent direction of $f$ at $x_k$, we next prove that there must exist $j_0$ such that $\alpha_k = \rho^{j_0}$ satisfies the inequality (2.7).

Proposition 2.3. Let $f: \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function. Suppose that $d$ is a descent direction of $f$ at $x$. Then, there exists $j_0$ such that
$$f(x + \alpha d) \le f(x) + \delta_1 \alpha g^T d - \delta_2 \alpha^2 \|d\|^2, \tag{2.8}$$
where $\alpha = \rho^{j_0}$, $g$ is the gradient of $f$ at $x$, and $\delta_1, \rho \in (0, 1)$ and $\delta_2 > 0$ are given constants.

Proof. It suffices to prove that such a step length $\alpha$ is obtained in finitely many steps. If this were not true, then for all sufficiently large positive integers $m$ we would have
$$f(x + \rho^m d) - f(x) > \delta_1 \rho^m g^T d - \delta_2 \rho^{2m} \|d\|^2. \tag{2.9}$$
Thus, by the mean value theorem, there is a $\theta \in (0, 1)$ such that
$$\rho^m g(x + \theta \rho^m d)^T d > \delta_1 \rho^m g^T d - \delta_2 \rho^{2m} \|d\|^2, \tag{2.10}$$
which reads
$$\left(g(x + \theta \rho^m d) - g\right)^T d > (\delta_1 - 1) g^T d - \delta_2 \rho^m \|d\|^2. \tag{2.11}$$
Letting $m \to \infty$, it is obtained that
$$(\delta_1 - 1) g^T d \le 0. \tag{2.12}$$
From $\delta_1 \in (0, 1)$, it follows that $g^T d \ge 0$. This contradicts the assumption that $d$ is a descent direction.

Remark 2.4. From Proposition 2.3, it is known that Algorithm 2.2 is well defined. In addition, it is easy to see that the modified Armijo-type line search (2.7) yields a larger decrease at each step than the standard Armijo rule.
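To make this remark concrete, combine (2.7) with Lemma 2.1: for a step $\alpha_k$ accepted by (2.7),
$$f(x_k) - f(x_{k+1}) \ \ge\ -\delta_1 \alpha_k g_k^T d_k + \delta_2 \alpha_k^2 \|d_k\|^2 \ =\ \delta_1 \alpha_k \|g_k\|^2 + \delta_2 \alpha_k^2 \|d_k\|^2,$$
whereas the standard Armijo rule with the same step only guarantees the first term $\delta_1 \alpha_k \|g_k\|^2$; the additional decrease is the quadratic term $\delta_2 \alpha_k^2 \|d_k\|^2$.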

3. Global Convergence

In this section, we study the global convergence of Algorithm 2.2. We first state the following mild assumptions, which will be used in the convergence proof.

Assumption 3.1. The level set $\Omega = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\}$ is bounded.

Assumption 3.2. In some neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N. \tag{3.1}$$

Since $\{f(x_k)\}$ is decreasing, the sequence $\{x_k\}$ generated by Algorithm 2.2 is contained in the bounded set $\Omega$ by Assumption 3.1. Hence, there exists a convergent subsequence of $\{x_k\}$; without loss of generality, we may suppose that $\{x_k\}$ itself is convergent. On the other hand, from Assumption 3.2, it follows that there is a constant $\gamma_1 > 0$ such that
$$\|g(x)\| \le \gamma_1, \quad \forall x \in \Omega. \tag{3.2}$$
Hence, the sequence $\{g_k\}$ is bounded.

In the following, we first prove that the step size $\alpha_k$ at each iteration is sufficiently large.

Lemma 3.3. With Assumption 3.2, there exists a constant $m > 0$ such that the inequality
$$\alpha_k \ge m \frac{\|g_k\|^2}{\|d_k\|^2} \tag{3.3}$$
holds for all $k$ sufficiently large.

Proof. Firstly, from the line search rule (2.7), we know that $\alpha_k \le 1$.
If $\alpha_k = 1$, then $\|g_k\| \le \|d_k\|$. Indeed, $\|g_k\| > \|d_k\|$ would imply
$$\|g_k\|^2 > \|g_k\| \|d_k\| \ge -g_k^T d_k, \tag{3.4}$$
which contradicts (2.3). Therefore, taking $m = 1$, the inequality (3.3) holds in this case.
If $0 < \alpha_k < 1$, then the line search rule (2.7) implies that $\rho^{-1}\alpha_k$ does not satisfy (2.7), so we have
$$f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) > \delta_1 \rho^{-1}\alpha_k g_k^T d_k - \delta_2 \rho^{-2}\alpha_k^2 \|d_k\|^2. \tag{3.5}$$
Since
$$\begin{aligned}
f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) &= \rho^{-1}\alpha_k g(x_k + t_k \rho^{-1}\alpha_k d_k)^T d_k \\
&= \rho^{-1}\alpha_k g_k^T d_k + \rho^{-1}\alpha_k \left(g(x_k + t_k \rho^{-1}\alpha_k d_k) - g_k\right)^T d_k \\
&\le \rho^{-1}\alpha_k g_k^T d_k + L \rho^{-2}\alpha_k^2 \|d_k\|^2,
\end{aligned} \tag{3.6}$$
where $t_k \in (0, 1)$ satisfies $x_k + t_k \rho^{-1}\alpha_k d_k \in N$ and the last inequality follows from the Lipschitz condition (3.1), it is obtained from (3.5) and (3.6) that
$$\delta_1 \rho^{-1}\alpha_k g_k^T d_k - \delta_2 \rho^{-2}\alpha_k^2 \|d_k\|^2 < \rho^{-1}\alpha_k g_k^T d_k + L \rho^{-2}\alpha_k^2 \|d_k\|^2. \tag{3.7}$$
This reads
$$(1 - \delta_1)\rho^{-1}\alpha_k g_k^T d_k + (L + \delta_2)\rho^{-2}\alpha_k^2 \|d_k\|^2 > 0, \tag{3.8}$$
that is,
$$(L + \delta_2)\rho^{-1}\alpha_k \|d_k\|^2 > (\delta_1 - 1) g_k^T d_k. \tag{3.9}$$
Therefore,
$$\alpha_k > \frac{(\delta_1 - 1)\rho \, g_k^T d_k}{(L + \delta_2)\|d_k\|^2}. \tag{3.10}$$
From Lemma 2.1, it follows that
$$\alpha_k > \frac{\rho (1 - \delta_1)\|g_k\|^2}{(L + \delta_2)\|d_k\|^2}. \tag{3.11}$$
Taking
$$m = \min\left\{1, \ \frac{\rho (1 - \delta_1)}{L + \delta_2}\right\}, \tag{3.12}$$
the desired inequality (3.3) holds.

From Lemmas 2.1 and 3.3 and Assumption 3.1, we can prove the following result.

Lemma 3.4. Under Assumptions 3.1 and 3.2, the following results hold:
$$\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty, \tag{3.13}$$
$$\lim_{k \to \infty} \alpha_k^2 \|d_k\|^2 = 0. \tag{3.14}$$

Proof. From the line search rule (2.7) and Assumption 3.1, there exists a constant $M$ such that
$$\sum_{k=0}^{n-1}\left(-\delta_1 \alpha_k g_k^T d_k + \delta_2 \alpha_k^2 \|d_k\|^2\right) \le \sum_{k=0}^{n-1}\left(f(x_k) - f(x_{k+1})\right) = f(x_0) - f(x_n) < 2M. \tag{3.15}$$
Then, from Lemmas 2.1 and 3.3, we have
$$\begin{aligned}
2M &\ge \sum_{k=0}^{n-1}\left(-\delta_1 \alpha_k g_k^T d_k + \delta_2 \alpha_k^2 \|d_k\|^2\right) = \sum_{k=0}^{n-1}\left(\delta_1 \alpha_k \|g_k\|^2 + \delta_2 \alpha_k^2 \|d_k\|^2\right) \\
&\ge \sum_{k=0}^{n-1}\left(\delta_1 m \frac{\|g_k\|^2}{\|d_k\|^2}\|g_k\|^2 + \delta_2 m^2 \frac{\|g_k\|^4}{\|d_k\|^4}\|d_k\|^2\right) = \sum_{k=0}^{n-1}\left(\delta_1 + \delta_2 m\right) m \frac{\|g_k\|^4}{\|d_k\|^2}.
\end{aligned} \tag{3.16}$$
Therefore, the first conclusion is proved.
Since
$$2M \ge \sum_{k=0}^{n-1}\left(\delta_1 \alpha_k \|g_k\|^2 + \delta_2 \alpha_k^2 \|d_k\|^2\right) \ge \delta_2 \sum_{k=0}^{n-1}\alpha_k^2 \|d_k\|^2, \tag{3.17}$$
the series
$$\sum_{k=0}^{\infty}\alpha_k^2 \|d_k\|^2 \tag{3.18}$$
is convergent. Thus,
$$\lim_{k \to \infty}\alpha_k^2 \|d_k\|^2 = 0, \tag{3.19}$$
and the second conclusion (3.14) is obtained.

To conclude this section, we establish the global convergence theorem for Algorithm 2.2.

Theorem 3.5. Under Assumptions 3.1 and 3.2, it holds that
$$\liminf_{k \to \infty}\|g_k\| = 0. \tag{3.20}$$

Proof. Suppose, to the contrary, that there exists a constant $\epsilon > 0$ such that
$$\|g_k\| \ge \epsilon \tag{3.21}$$
for all $k$. Then, from (2.1), it follows that
$$\begin{aligned}
\|d_k\|^2 = d_k^T d_k &= \left(-\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1}\right)^T\left(-\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1}\right) \\
&= \theta_k^2 \|g_k\|^2 - 2\theta_k \beta_k^{\mathrm{PRP}} d_{k-1}^T g_k + \left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 \\
&= \theta_k^2 \|g_k\|^2 - 2\theta_k \left(d_k + \theta_k g_k\right)^T g_k + \left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 \\
&= \theta_k^2 \|g_k\|^2 - 2\theta_k d_k^T g_k - 2\theta_k^2 \|g_k\|^2 + \left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 \\
&= \left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 - 2\theta_k d_k^T g_k - \theta_k^2 \|g_k\|^2,
\end{aligned} \tag{3.22}$$
where the third equality uses $\beta_k^{\mathrm{PRP}} d_{k-1} = d_k + \theta_k g_k$ from (2.1). Dividing both sides of this equality by $\left(g_k^T d_k\right)^2 = \|g_k\|^4$ and using (1.4), (2.3), (3.1), and (3.21), we obtain
$$\begin{aligned}
\frac{\|d_k\|^2}{\|g_k\|^4} &= \frac{\left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 - 2\theta_k d_k^T g_k - \theta_k^2 \|g_k\|^2}{\|g_k\|^4} \\
&= \frac{\left(g_k^T (g_k - g_{k-1})\right)^2}{\|g_{k-1}\|^4}\,\frac{\|d_{k-1}\|^2}{\|g_k\|^4} - \frac{(\theta_k - 1)^2}{\|g_k\|^2} + \frac{1}{\|g_k\|^2} \\
&\le \frac{\|g_k - g_{k-1}\|^2}{\|g_{k-1}\|^4}\,\frac{\|d_{k-1}\|^2}{\|g_k\|^2} - \frac{(\theta_k - 1)^2}{\|g_k\|^2} + \frac{1}{\|g_k\|^2} \\
&\le \frac{\|g_k - g_{k-1}\|^2}{\|g_k\|^2}\,\frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2} \\
&\le \frac{L^2 \alpha_{k-1}^2 \|d_{k-1}\|^2}{\epsilon^2}\,\frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2}.
\end{aligned} \tag{3.23}$$
From (3.14) in Lemma 3.4, it follows that
$$\lim_{k \to \infty}\alpha_{k-1}^2 \|d_{k-1}\|^2 = 0. \tag{3.24}$$
Thus, there exists a sufficiently large number $k_0$ such that
$$0 \le \alpha_{k-1}^2 \|d_{k-1}\|^2 < \frac{\epsilon^2}{L^2} \quad \text{for all } k \ge k_0. \tag{3.25}$$
Therefore, for $k \ge k_0$,
$$\frac{\|d_k\|^2}{\|g_k\|^4} \le \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2} \le \cdots \le \frac{\|d_{k_0}\|^2}{\|g_{k_0}\|^4} + \sum_{i=k_0+1}^{k}\frac{1}{\|g_i\|^2} \le \frac{C_0}{\epsilon^2} + \sum_{i=k_0+1}^{k}\frac{1}{\epsilon^2} = \frac{C_0 + k - k_0}{\epsilon^2}, \tag{3.26}$$
where $C_0 = \epsilon^2 \|d_{k_0}\|^2 / \|g_{k_0}\|^4$ is a nonnegative constant.
The last inequality implies
$$\sum_{k \ge 1}\frac{\|g_k\|^4}{\|d_k\|^2} \ge \sum_{k > k_0}\frac{\|g_k\|^4}{\|d_k\|^2} \ge \epsilon^2 \sum_{k > k_0}\frac{1}{C_0 + k - k_0} = \infty, \tag{3.27}$$
which contradicts (3.13) in Lemma 3.4.
The global convergence theorem is thus established.

4. Numerical Experiments

In this section, we report the numerical performance of Algorithm 2.2. We test Algorithm 2.2 on 15 benchmark problems from [19] and compare its numerical performance with that of other similar methods: the standard PRP conjugate gradient method in [6], the modified FR conjugate gradient method in [16], and the modified PRP conjugate gradient method in [17]. These algorithms differ from one another in either the updating formula or the line search rule.

All codes are written in MATLAB 7.0.1 and run on a PC with a 2.0 GHz CPU, 1 GB of RAM, and the Windows XP operating system.

The parameters are chosen as follows:
$$\epsilon = 10^{-6}, \quad \rho = 0.75, \quad \delta_1 = 0.1, \quad \delta_2 = 1. \tag{4.1}$$
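As a usage illustration only, the Python sketch of Algorithm 2.2 given in Section 2 can be invoked with the parameter values (4.1); the extended Rosenbrock function and the driver below are our own assumptions and are not one of the test problems reported in Tables 1 and 2.

```python
import numpy as np

def rosenbrock(x):
    # Extended Rosenbrock: sum of 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2))

def rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1]**2)
    return g

x0 = np.full(100, -1.2)                      # illustrative starting point
x_star, g_star, iters = modified_spectral_prp(
    rosenbrock, rosenbrock_grad, x0,
    delta1=0.1, delta2=1.0, rho=0.75, eps=1e-6)
print(iters, np.linalg.norm(g_star))
```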

In Tables 1 and 2, the following notation is used:
Dim: the dimension of the objective function;
GV: the gradient value of the objective function when the algorithm stops;
NI: the number of iterations;
NF: the number of function evaluations;
CT: the CPU run time;
mfr: the modified FR conjugate gradient method in [16];
prp: the standard PRP conjugate gradient method in [6];
msprp: the modified PRP conjugate gradient method in [17];
mprp: the new algorithm developed in this paper.

The above numerical experiments show that the algorithm proposed in this paper is promising.

5. Conclusion

In this paper, a new spectral PRP conjugate gradient algorithm has been developed for solving unconstrained minimization problems. Under some mild conditions, global convergence has been proved with an Armijo-type line search rule. Compared with other similar algorithms, the numerical performance of the developed algorithm is promising.

Acknowledgments

The authors would like to express their great thanks to the anonymous referees for their constructive comments, which have improved the presentation of this paper. This work is supported by the National Natural Science Foundation of China (Grant nos. 71071162, 70921001).