Abstract

A modified spectral PRP conjugate gradient method is presented for solving unconstrained optimization problems. The constructed search direction is proved to be a sufficient descent direction of the objective function. With an Armijo-type line search to determine the step length, a new spectral PRP conjugate gradient algorithm is developed. Under some mild conditions, global convergence is established. Numerical results demonstrate that the algorithm is promising, particularly in comparison with existing similar methods.

1. Introduction

The conjugate gradient method has been shown to be efficient and powerful for solving large-scale unconstrained minimization problems, owing to its low memory requirements and simple computation; see, for example, the many variants of conjugate gradient algorithms developed in [1–17]. However, as pointed out in [2], there remain many theoretical and computational challenges in applying these methods to unconstrained optimization problems. In fact, 14 open problems on conjugate gradient methods are presented in [2]. These problems concern the selection of the initial direction, the computation of the step length and of the conjugacy parameter based on the values of the objective function, the influence of the accuracy of the line search procedure on the efficiency of a conjugate gradient algorithm, and so forth.

The general model of the unconstrained optimization problem is
$$\min f(x), \quad x \in \mathbb{R}^n, \tag{1.1}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable, so that its gradient is available. Let $g(x)$ denote the gradient of $f$ at $x$, and let $x_0$ be an arbitrary initial approximate solution of (1.1). When a standard conjugate gradient method is used to solve (1.1), a sequence of solutions is generated by
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, \ldots, \tag{1.2}$$
where $\alpha_k$ is the step length chosen by some line search method and $d_k$ is the search direction defined by
$$d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1} & \text{if } k > 0, \end{cases} \tag{1.3}$$
where $\beta_k$ is called the conjugacy parameter and $g_k$ denotes $g(x_k)$. For a strictly convex quadratic program, $\beta_k$ can be chosen so that $d_k$ and $d_{k-1}$ are conjugate with respect to the Hessian matrix of the objective function. If $\beta_k$ is taken as
$$\beta_k = \beta_k^{\mathrm{PRP}} = \frac{g_k^T \left(g_k - g_{k-1}\right)}{\left\|g_{k-1}\right\|^2}, \tag{1.4}$$
where $\|\cdot\|$ stands for the Euclidean norm of a vector, then (1.2)–(1.4) constitute the Polak-RibiΓ¨re-Polyak (PRP) conjugate gradient method (see [8, 18]).
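For concreteness, the following is a minimal sketch of the PRP iteration (1.2)–(1.4), written in Python rather than the MATLAB used for the experiments in Section 4; the function names are ours, and `g_new`, `g_old`, `d_old` stand for $g_k$, $g_{k-1}$, $d_{k-1}$.

```python
import numpy as np

def prp_beta(g_new, g_old):
    # beta_k^PRP = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2, as in (1.4)
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def prp_direction(g_new, g_old=None, d_old=None):
    # d_0 = -g_0; otherwise d_k = -g_k + beta_k^PRP * d_{k-1}, as in (1.3)
    if d_old is None:
        return -g_new
    return -g_new + prp_beta(g_new, g_old) * d_old
```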

It is well known that the PRP method terminates finitely when the objective function is a strongly convex quadratic and the exact line search is used. Furthermore, for a twice continuously differentiable, strongly convex objective function, global convergence was proved in [7]. However, it seems nontrivial to establish a global convergence theory under an inexact line search, especially for a general nonconvex minimization problem. Quite recently, many modified PRP conjugate gradient methods have been studied (see, e.g., [10–13, 17]). In these methods, the search direction is constructed to possess the sufficient descent property, and global convergence is established with different line search strategies. In [17], the search direction $d_k$ is given by
$$d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -g_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \theta_k y_{k-1} & \text{if } k > 0, \end{cases} \tag{1.5}$$
where
$$\theta_k = \frac{g_k^T d_{k-1}}{\left\|g_{k-1}\right\|^2}, \quad y_{k-1} = g_k - g_{k-1}, \quad s_{k-1} = x_k - x_{k-1}. \tag{1.6}$$
Following the idea in [17], a new spectral PRP conjugate gradient algorithm is developed in this paper. On the one hand, we present a new spectral conjugate gradient direction, which also possesses the sufficient descent property. On the other hand, a modified Armijo-type line search strategy is incorporated into the algorithm. Numerical experiments are reported to compare it with several similar algorithms.

The rest of this paper is organized as follows. In the next section, a new spectral PRP conjugate gradient method is proposed. Section 3 is devoted to proving global convergence. In Section 4, numerical experiments are reported to test the efficiency of the method, especially in comparison with other existing methods. Some concluding remarks are given in the last section.

2. New Spectral PRP Conjugate Gradient Algorithm

In this section, we first study how to determine a descent direction of the objective function.

Let π‘₯π‘˜ be the current iterate. Let π‘‘π‘˜ be defined byπ‘‘π‘˜=ξ‚»βˆ’π‘”π‘˜ifπ‘˜=0,βˆ’πœƒπ‘˜π‘”π‘˜+𝛽PRPπ‘˜π‘‘π‘˜βˆ’1ifπ‘˜>0,(2.1) where 𝛽PRPπ‘˜ is specified by (1.4) andπœƒπ‘˜=π‘‘π‘‡π‘˜βˆ’1π‘¦π‘˜βˆ’1β€–β€–π‘”π‘˜βˆ’1β€–β€–2βˆ’π‘‘π‘‡π‘˜βˆ’1π‘”π‘˜π‘”π‘‡π‘˜π‘”π‘˜βˆ’1β€–β€–π‘”π‘˜β€–β€–2β€–β€–π‘”π‘˜βˆ’1β€–β€–2.(2.2)

It is noted that the direction $d_k$ given by (2.1) and (2.2) differs from those in [3, 16, 17], either in the choice of $\theta_k$ or in that of $\beta_k$.

We first prove that $d_k$ is a sufficient descent direction.

Lemma 2.1. Suppose that $d_k$ is given by (2.1) and (2.2). Then,
$$g_k^T d_k = -\left\|g_k\right\|^2 \tag{2.3}$$
holds for any $k \ge 0$.

Proof. Firstly, for $k = 0$, (2.3) holds trivially since $d_0 = -g_0$.
Secondly, assume that
$$d_{k-1}^T g_{k-1} = -\left\|g_{k-1}\right\|^2 \tag{2.4}$$
holds for $k - 1$ with $k \ge 1$. Then, from (1.4), (2.1), and (2.2), it follows that
$$\begin{aligned} g_k^T d_k &= -\theta_k \left\|g_k\right\|^2 + \frac{g_k^T \left(g_k - g_{k-1}\right)}{\left\|g_{k-1}\right\|^2} d_{k-1}^T g_k \\ &= -\frac{d_{k-1}^T \left(g_k - g_{k-1}\right)}{\left\|g_{k-1}\right\|^2} g_k^T g_k + \frac{\left(d_{k-1}^T g_k\right)\left(g_k^T g_{k-1}\right)}{\left\|g_k\right\|^2 \left\|g_{k-1}\right\|^2} g_k^T g_k + \frac{g_k^T \left(g_k - g_{k-1}\right)}{\left\|g_{k-1}\right\|^2} d_{k-1}^T g_k \\ &= \frac{d_{k-1}^T g_{k-1}}{\left\|g_{k-1}\right\|^2} g_k^T g_k = \frac{\left\|g_k\right\|^2}{\left\|g_{k-1}\right\|^2} \left(-\left\|g_{k-1}\right\|^2\right) = -\left\|g_k\right\|^2. \end{aligned} \tag{2.5}$$
Thus, (2.3) also holds with $k - 1$ replaced by $k$. By mathematical induction, the desired result follows.
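As a quick numerical sanity check of (2.3) (not part of the paper's experiments), the identity can be verified with random data, reusing the `spectral_prp_direction` sketch above together with any $d_{k-1}$ satisfying the induction hypothesis (2.4), for example $d_{k-1} = -g_{k-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
g_old = rng.standard_normal(5)
d_old = -g_old                   # satisfies (2.4): d_{k-1}^T g_{k-1} = -||g_{k-1}||^2
g_new = rng.standard_normal(5)
d_new = spectral_prp_direction(g_new, g_old, d_old)
assert np.isclose(g_new @ d_new, -(g_new @ g_new))   # checks (2.3)
```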

From Lemma 2.1, it is known that $d_k$ is a descent direction of $f$ at $x_k$. Furthermore, if the exact line search is used, then $g_k^T d_{k-1} = 0$; hence
$$\theta_k = \frac{d_{k-1}^T y_{k-1}}{\left\|g_{k-1}\right\|^2} - \frac{\left(d_{k-1}^T g_k\right)\left(g_k^T g_{k-1}\right)}{\left\|g_k\right\|^2 \left\|g_{k-1}\right\|^2} = -\frac{d_{k-1}^T g_{k-1}}{\left\|g_{k-1}\right\|^2} = 1. \tag{2.6}$$
In this case, the proposed spectral PRP conjugate gradient method reduces to the standard PRP method. However, the exact line search is often time-consuming and sometimes unnecessary. In the following, we develop a new algorithm in which the search direction $d_k$ is chosen by (2.1)-(2.2) and the step size is determined by an Armijo-type inexact line search.

Algorithm 2.2 (Modified Spectral PRP Conjugate Gradient Algorithm). We have the following steps.
Step 1. Given constants $\delta_1, \rho \in (0,1)$, $\delta_2 > 0$, and $\epsilon > 0$, choose an initial point $x_0 \in \mathbb{R}^n$. Set $k := 0$.
Step 2. If $\left\|g_k\right\| \le \epsilon$, stop. Otherwise, compute $d_k$ by (2.1)-(2.2), and go to Step 3.
Step 3. Determine a step length $\alpha_k = \max\{\rho^j, j = 0, 1, 2, \ldots\}$ such that
$$f\left(x_k + \alpha_k d_k\right) \le f\left(x_k\right) + \delta_1 \alpha_k g_k^T d_k - \delta_2 \alpha_k^2 \left\|d_k\right\|^2. \tag{2.7}$$
Step 4. Set $x_{k+1} := x_k + \alpha_k d_k$ and $k := k + 1$. Return to Step 2.
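A compact Python sketch of Algorithm 2.2 is given below (again our own illustration, not the MATLAB implementation used in Section 4), reusing `spectral_prp_direction` from above; `f` and `grad` are user-supplied objective and gradient routines. Proposition 2.3 below guarantees that the backtracking loop in Step 3 terminates in finitely many trials.

```python
import numpy as np

def modified_spectral_prp(f, grad, x0, delta1=0.1, delta2=1.0,
                          rho=0.75, eps=1e-6, max_iter=10000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    g_old = d_old = None
    for k in range(max_iter):
        if np.linalg.norm(g) <= eps:                 # Step 2: stopping test
            break
        d = spectral_prp_direction(g, g_old, d_old)  # Step 2: direction (2.1)-(2.2)
        # Step 3: alpha_k = max{rho^j} satisfying the Armijo-type rule (2.7)
        alpha, fx, gtd, dtd = 1.0, f(x), g @ d, d @ d
        while f(x + alpha * d) > fx + delta1 * alpha * gtd - delta2 * alpha**2 * dtd:
            alpha *= rho
        x = x + alpha * d                            # Step 4
        g_old, d_old = g, d
        g = grad(x)
    return x, k
```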

Since π‘‘π‘˜ is a descent direction of 𝑓 at π‘₯π‘˜, we will prove that there must exist 𝑗0 such that π›Όπ‘˜=πœŒπ‘—0 satisfies the inequality (2.7).

Proposition 2.3. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a continuously differentiable function. Suppose that $d$ is a descent direction of $f$ at $x$. Then, there exists $j_0$ such that
$$f(x + \alpha d) \le f(x) + \delta_1 \alpha g^T d - \delta_2 \alpha^2 \left\|d\right\|^2, \tag{2.8}$$
where $\alpha = \rho^{j_0}$, $g$ is the gradient of $f$ at $x$, and $\delta_1, \rho \in (0,1)$ and $\delta_2 > 0$ are given constant scalars.

Proof. We only need to show that an acceptable step length is obtained in finitely many trials. If this were not true, then for every sufficiently large positive integer $m$, we would have
$$f\left(x + \rho^m d\right) - f(x) > \delta_1 \rho^m g^T d - \delta_2 \rho^{2m} \left\|d\right\|^2. \tag{2.9}$$
By the mean value theorem, there is a $\theta \in (0,1)$ such that
$$\rho^m g\left(x + \theta \rho^m d\right)^T d > \delta_1 \rho^m g^T d - \delta_2 \rho^{2m} \left\|d\right\|^2, \tag{2.10}$$
that is,
$$\left(g\left(x + \theta \rho^m d\right) - g\right)^T d > \left(\delta_1 - 1\right) g^T d - \delta_2 \rho^m \left\|d\right\|^2. \tag{2.11}$$
Letting $m \to \infty$, we obtain
$$\left(\delta_1 - 1\right) g^T d \le 0. \tag{2.12}$$
From $\delta_1 \in (0,1)$, it follows that $g^T d \ge 0$. This contradicts the assumption that $d$ is a descent direction.

Remark 2.4. From Proposition 2.3, it is known that Algorithm 2.2 is well defined. In addition, the modified Armijo-type line search (2.7) yields a larger decrease of the objective function at each step than the standard Armijo rule.

3. Global Convergence

In this section, we study the global convergence of Algorithm 2.2. We first state the following mild assumptions, which will be used in the proof of global convergence.

Assumption 3.1. The level set $\Omega = \{x \in \mathbb{R}^n \mid f(x) \le f(x_0)\}$ is bounded.

Assumption 3.2. In some neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\left\|g(x) - g(y)\right\| \le L \left\|x - y\right\|, \quad \forall x, y \in N. \tag{3.1}$$

Since $\{f(x_k)\}$ is decreasing, the sequence $\{x_k\}$ generated by Algorithm 2.2 is contained in a bounded region by Assumption 3.1. Hence, $\{x_k\}$ has a convergent subsequence; without loss of generality, we may suppose that $\{x_k\}$ itself converges. On the other hand, from Assumption 3.2, it follows that there is a constant $\gamma_1 > 0$ such that
$$\left\|g(x)\right\| \le \gamma_1, \quad \forall x \in \Omega. \tag{3.2}$$
Hence, the sequence $\{g_k\}$ is bounded.

In the following, we first prove that the step size $\alpha_k$ at each iteration is sufficiently large.

Lemma 3.3. Under Assumption 3.2, there exists a constant $m > 0$ such that the inequality
$$\alpha_k \ge m \frac{\left\|g_k\right\|^2}{\left\|d_k\right\|^2} \tag{3.3}$$
holds for all $k$ sufficiently large.

Proof. Firstly, from the line search rule in Step 3, we know that $\alpha_k \le 1$.
If $\alpha_k = 1$, then $\left\|g_k\right\| \le \left\|d_k\right\|$. Indeed, $\left\|g_k\right\| > \left\|d_k\right\|$ would imply
$$\left\|g_k\right\|^2 > \left\|g_k\right\| \left\|d_k\right\| \ge -g_k^T d_k, \tag{3.4}$$
which contradicts (2.3). Therefore, (3.3) holds with $m = 1$ in this case.
If $0 < \alpha_k < 1$, then the line search rule (2.7) implies that $\rho^{-1} \alpha_k$ does not satisfy (2.7), that is,
$$f\left(x_k + \rho^{-1} \alpha_k d_k\right) - f\left(x_k\right) > \delta_1 \rho^{-1} \alpha_k g_k^T d_k - \delta_2 \rho^{-2} \alpha_k^2 \left\|d_k\right\|^2. \tag{3.5}$$
By the mean value theorem, there exists $t_k \in (0,1)$ with $x_k + t_k \rho^{-1} \alpha_k d_k \in N$ such that
$$\begin{aligned} f\left(x_k + \rho^{-1} \alpha_k d_k\right) - f\left(x_k\right) &= \rho^{-1} \alpha_k g\left(x_k + t_k \rho^{-1} \alpha_k d_k\right)^T d_k \\ &= \rho^{-1} \alpha_k g_k^T d_k + \rho^{-1} \alpha_k \left(g\left(x_k + t_k \rho^{-1} \alpha_k d_k\right) - g_k\right)^T d_k \\ &\le \rho^{-1} \alpha_k g_k^T d_k + L \rho^{-2} \alpha_k^2 \left\|d_k\right\|^2, \end{aligned} \tag{3.6}$$
where the last inequality follows from the Lipschitz condition (3.1). Combining (3.5) and (3.6) gives
$$\delta_1 \rho^{-1} \alpha_k g_k^T d_k - \delta_2 \rho^{-2} \alpha_k^2 \left\|d_k\right\|^2 < \rho^{-1} \alpha_k g_k^T d_k + L \rho^{-2} \alpha_k^2 \left\|d_k\right\|^2, \tag{3.7}$$
that is,
$$\left(1 - \delta_1\right) \rho^{-1} \alpha_k g_k^T d_k + \left(L + \delta_2\right) \rho^{-2} \alpha_k^2 \left\|d_k\right\|^2 > 0, \tag{3.8}$$
or equivalently,
$$\left(L + \delta_2\right) \rho^{-1} \alpha_k \left\|d_k\right\|^2 > \left(\delta_1 - 1\right) g_k^T d_k. \tag{3.9}$$
Therefore,
$$\alpha_k > \frac{\left(\delta_1 - 1\right) \rho\, g_k^T d_k}{\left(L + \delta_2\right) \left\|d_k\right\|^2}. \tag{3.10}$$
From Lemma 2.1, it follows that
$$\alpha_k > \frac{\rho \left(1 - \delta_1\right) \left\|g_k\right\|^2}{\left(L + \delta_2\right) \left\|d_k\right\|^2}. \tag{3.11}$$
Taking
$$m = \min\left\{1, \frac{\rho \left(1 - \delta_1\right)}{L + \delta_2}\right\}, \tag{3.12}$$
the desired inequality (3.3) holds.

From Lemmas 2.1 and 3.3 and Assumption 3.1, we can prove the following result.

Lemma 3.4. Under Assumptions 3.1 and 3.2, the following results hold:
$$\sum_{k \ge 0} \frac{\left\|g_k\right\|^4}{\left\|d_k\right\|^2} < \infty, \tag{3.13}$$
$$\lim_{k \to \infty} \alpha_k^2 \left\|d_k\right\|^2 = 0. \tag{3.14}$$

Proof. From the line search rule (2.7) and Assumption 3.1, there exists a constant $M$ such that
$$\sum_{k=0}^{n-1} \left(-\delta_1 \alpha_k g_k^T d_k + \delta_2 \alpha_k^2 \left\|d_k\right\|^2\right) \le \sum_{k=0}^{n-1} \left(f\left(x_k\right) - f\left(x_{k+1}\right)\right) = f\left(x_0\right) - f\left(x_n\right) < 2M. \tag{3.15}$$
Then, from Lemmas 2.1 and 3.3, we have
$$\begin{aligned} 2M &\ge \sum_{k=0}^{n-1} \left(\delta_1 \alpha_k \left\|g_k\right\|^2 + \delta_2 \alpha_k^2 \left\|d_k\right\|^2\right) \\ &\ge \sum_{k=0}^{n-1} \left(\delta_1 m \frac{\left\|g_k\right\|^2}{\left\|d_k\right\|^2} \left\|g_k\right\|^2 + \delta_2 m^2 \frac{\left\|g_k\right\|^4}{\left\|d_k\right\|^4} \left\|d_k\right\|^2\right) = \left(\delta_1 + \delta_2 m\right) m \sum_{k=0}^{n-1} \frac{\left\|g_k\right\|^4}{\left\|d_k\right\|^2}. \end{aligned} \tag{3.16}$$
Letting $n \to \infty$, the first conclusion is proved.
Since
$$2M \ge \sum_{k=0}^{n-1} \left(\delta_1 \alpha_k \left\|g_k\right\|^2 + \delta_2 \alpha_k^2 \left\|d_k\right\|^2\right) \ge \delta_2 \sum_{k=0}^{n-1} \alpha_k^2 \left\|d_k\right\|^2, \tag{3.17}$$
the series
$$\sum_{k=0}^{\infty} \alpha_k^2 \left\|d_k\right\|^2 \tag{3.18}$$
is convergent. Thus,
$$\lim_{k \to \infty} \alpha_k^2 \left\|d_k\right\|^2 = 0, \tag{3.19}$$
which is the second conclusion (3.14).

At the end of this section, we establish the global convergence theorem for Algorithm 2.2.

Theorem 3.5. Under Assumptions 3.1 and 3.2, it holds that
$$\liminf_{k \to \infty} \left\|g_k\right\| = 0. \tag{3.20}$$

Proof. Suppose to the contrary that there exists a constant $\epsilon > 0$ such that
$$\left\|g_k\right\| \ge \epsilon \tag{3.21}$$
for all $k$. Then, from (2.1), it follows that
$$\begin{aligned} \left\|d_k\right\|^2 = d_k^T d_k &= \left(-\theta_k g_k^T + \beta_k^{\mathrm{PRP}} d_{k-1}^T\right) \left(-\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1}\right) \\ &= \theta_k^2 \left\|g_k\right\|^2 - 2 \theta_k \beta_k^{\mathrm{PRP}} d_{k-1}^T g_k + \left(\beta_k^{\mathrm{PRP}}\right)^2 \left\|d_{k-1}\right\|^2 \\ &= \theta_k^2 \left\|g_k\right\|^2 - 2 \theta_k \left(d_k + \theta_k g_k\right)^T g_k + \left(\beta_k^{\mathrm{PRP}}\right)^2 \left\|d_{k-1}\right\|^2 \\ &= \theta_k^2 \left\|g_k\right\|^2 - 2 \theta_k d_k^T g_k - 2 \theta_k^2 \left\|g_k\right\|^2 + \left(\beta_k^{\mathrm{PRP}}\right)^2 \left\|d_{k-1}\right\|^2 \\ &= \left(\beta_k^{\mathrm{PRP}}\right)^2 \left\|d_{k-1}\right\|^2 - 2 \theta_k d_k^T g_k - \theta_k^2 \left\|g_k\right\|^2. \end{aligned} \tag{3.22}$$
Dividing both sides of this equality by $\left(g_k^T d_k\right)^2 = \left\|g_k\right\|^4$ (see (2.3)), and using (1.4), (3.1), and (3.21), we obtain
$$\begin{aligned} \frac{\left\|d_k\right\|^2}{\left\|g_k\right\|^4} &= \frac{\left(\beta_k^{\mathrm{PRP}}\right)^2 \left\|d_{k-1}\right\|^2 - 2 \theta_k d_k^T g_k - \theta_k^2 \left\|g_k\right\|^2}{\left\|g_k\right\|^4} \\ &= \frac{\left(g_k^T \left(g_k - g_{k-1}\right)\right)^2}{\left\|g_{k-1}\right\|^4} \frac{\left\|d_{k-1}\right\|^2}{\left\|g_k\right\|^4} - \frac{\left(\theta_k - 1\right)^2}{\left\|g_k\right\|^2} + \frac{1}{\left\|g_k\right\|^2} \\ &\le \frac{\left\|g_k - g_{k-1}\right\|^2}{\left\|g_{k-1}\right\|^4} \frac{\left\|d_{k-1}\right\|^2}{\left\|g_k\right\|^2} - \frac{\left(\theta_k - 1\right)^2}{\left\|g_k\right\|^2} + \frac{1}{\left\|g_k\right\|^2} \\ &\le \frac{\left\|g_k - g_{k-1}\right\|^2}{\left\|g_k\right\|^2} \frac{\left\|d_{k-1}\right\|^2}{\left\|g_{k-1}\right\|^4} + \frac{1}{\left\|g_k\right\|^2} \le \frac{L^2 \alpha_{k-1}^2 \left\|d_{k-1}\right\|^2}{\epsilon^2} \frac{\left\|d_{k-1}\right\|^2}{\left\|g_{k-1}\right\|^4} + \frac{1}{\left\|g_k\right\|^2}. \end{aligned} \tag{3.23}$$
From (3.14) in Lemma 3.4, it follows that
$$\lim_{k \to \infty} \alpha_{k-1}^2 \left\|d_{k-1}\right\|^2 = 0. \tag{3.24}$$
Thus, there exists a sufficiently large $k_0$ such that, for all $k \ge k_0$,
$$0 \le \alpha_{k-1}^2 \left\|d_{k-1}\right\|^2 < \frac{\epsilon^2}{L^2}. \tag{3.25}$$
Therefore, for $k > k_0$,
$$\frac{\left\|d_k\right\|^2}{\left\|g_k\right\|^4} \le \frac{\left\|d_{k-1}\right\|^2}{\left\|g_{k-1}\right\|^4} + \frac{1}{\left\|g_k\right\|^2} \le \cdots \le \frac{\left\|d_{k_0}\right\|^2}{\left\|g_{k_0}\right\|^4} + \sum_{i=k_0+1}^{k} \frac{1}{\left\|g_i\right\|^2} \le \frac{C_0}{\epsilon^2} + \frac{k - k_0}{\epsilon^2} = \frac{C_0 + k - k_0}{\epsilon^2}, \tag{3.26}$$
where $C_0 = \epsilon^2 \left\|d_{k_0}\right\|^2 / \left\|g_{k_0}\right\|^4$ is a nonnegative constant.
The last inequality implies
$$\sum_{k \ge 1} \frac{\left\|g_k\right\|^4}{\left\|d_k\right\|^2} \ge \sum_{k > k_0} \frac{\left\|g_k\right\|^4}{\left\|d_k\right\|^2} \ge \epsilon^2 \sum_{k > k_0} \frac{1}{C_0 + k - k_0} = \infty, \tag{3.27}$$
which contradicts (3.13) in Lemma 3.4.
The global convergence theorem is thus established.

4. Numerical Experiments

In this section, we report the numerical performance of Algorithm 2.2. We test Algorithm 2.2 on 15 benchmark problems from [19] and compare its numerical performance with that of other similar methods, namely, the standard PRP conjugate gradient method in [6], the modified FR conjugate gradient method in [16], and the modified PRP conjugate gradient method in [17]. These algorithms differ from one another in either the updating formula or the line search rule.

All procedures are coded in MATLAB 7.0.1 and run on a PC with a 2.0 GHz CPU, 1 GB of RAM, and the Windows XP operating system.

The parameters are chosen as follows:
$$\epsilon = 10^{-6}, \quad \rho = 0.75, \quad \delta_1 = 0.1, \quad \delta_2 = 1. \tag{4.1}$$
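For illustration only (this is not the paper's MATLAB experiment), the Python sketch of Algorithm 2.2 from Section 2 can be run with these parameter values, for example on the two-dimensional Rosenbrock function:

```python
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

x_best, iters = modified_spectral_prp(rosenbrock, rosenbrock_grad,
                                      np.array([-1.2, 1.0]),
                                      delta1=0.1, delta2=1.0,
                                      rho=0.75, eps=1e-6)
print(x_best, iters)   # the minimizer is [1, 1]
```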

In Tables 1 and 2, the following notation is used:
Dim: the dimension of the objective function;
GV: the gradient value of the objective function when the algorithm stops;
NI: the number of iterations;
NF: the number of function evaluations;
CT: the CPU run time;
mfr: the modified FR conjugate gradient method in [16];
prp: the standard PRP conjugate gradient method in [6];
msprp: the modified PRP conjugate gradient method in [17];
mprp: the new algorithm developed in this paper.

The above numerical experiments show that the algorithm proposed in this paper is promising.

5. Conclusion

In this paper, a new spectral PRP conjugate gradient algorithm has been developed for solving unconstrained minimization problems. Under some mild conditions, global convergence has been proved with an Armijo-type line search rule. Compared with other similar algorithms, the numerical performance of the developed algorithm is promising.

Acknowledgments

The authors would like to express their sincere thanks to the anonymous referees for their constructive comments, which have improved the presentation of this paper. This work is supported by the National Natural Science Foundation of China (Grant nos. 71071162 and 70921001).