Abstract

A modified PRP nonlinear conjugate gradient method for solving unconstrained optimization problems is proposed. An important property of the proposed method is that the sufficient descent property is guaranteed independently of any line search. With the Wolfe line search, the global convergence of the proposed method is established for nonconvex minimization. Numerical results show that the proposed method is effective and promising in comparison with the VPRP, CG-DESCENT, and DL+ methods.

1. Introduction

The nonlinear conjugate gradient method is one of the most efficient methods for solving unconstrained optimization problems. It comprises a class of unconstrained optimization algorithms characterized by low memory requirements and simplicity.

Consider the unconstrained optimization problem
$$\min_{x \in R^n} f(x), \tag{1.1}$$
where $f: R^n \to R$ is continuously differentiable and its gradient $g$ is available.

The iterates of the conjugate gradient method for solving (1.1) are given by
$$x_{k+1} = x_k + \alpha_k d_k, \tag{1.2}$$
where the stepsize $\alpha_k$ is positive and computed by a certain line search, and the search direction $d_k$ is defined by
$$d_k = \begin{cases} -g_k, & \text{for } k = 1, \\ -g_k + \beta_k d_{k-1}, & \text{for } k \ge 2, \end{cases} \tag{1.3}$$
where $g_k = \nabla f(x_k)$ and $\beta_k$ is a scalar. Some well-known conjugate gradient methods include the Polak-Ribière-Polyak (PRP) method [1, 2], the Hestenes-Stiefel (HS) method [3], the Hager-Zhang (HZ) method [4], and the Dai-Liao (DL) method [5]. The parameters $\beta_k$ of these methods are specified as follows:
$$\beta_k^{\mathrm{PRP}} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad \beta_k^{\mathrm{HS}} = \frac{g_k^T (g_k - g_{k-1})}{d_{k-1}^T (g_k - g_{k-1})},$$
$$\beta_k^{\mathrm{HZ}} = \left( y_{k-1} - 2 d_{k-1} \frac{\|y_{k-1}\|^2}{d_{k-1}^T y_{k-1}} \right)^T \frac{g_k}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{\mathrm{DL}} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}} - t \frac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}} \quad (t \ge 0), \tag{1.4}$$
where $\|\cdot\|$ is the Euclidean norm, $y_{k-1} = g_k - g_{k-1}$, and $s_{k-1} = x_k - x_{k-1}$. We know that if $f$ is a strictly convex quadratic function, the above methods are equivalent when an exact line search is used. If $f$ is nonconvex, their behaviors may be quite different.
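To make the formulas in (1.4) concrete, the following Python sketch evaluates the four classical parameters for a single iteration from the current and previous gradients, the previous direction, and the previous step; the function name is ours, the inputs are NumPy vectors, and the denominators are assumed to be nonzero.

```python
def classical_betas(g_k, g_prev, d_prev, s_prev, t=0.1):
    """Evaluate beta_k^PRP, beta_k^HS, beta_k^HZ, and beta_k^DL as in (1.4).

    g_k, g_prev : gradients g_k and g_{k-1} (1-D NumPy arrays)
    d_prev      : previous search direction d_{k-1}
    s_prev      : previous step s_{k-1} = x_k - x_{k-1}
    t           : Dai-Liao parameter, t >= 0
    All denominators are assumed to be nonzero.
    """
    y_prev = g_k - g_prev                 # y_{k-1} = g_k - g_{k-1}
    dy = d_prev @ y_prev                  # d_{k-1}^T y_{k-1}

    beta_prp = (g_k @ y_prev) / (g_prev @ g_prev)
    beta_hs = (g_k @ y_prev) / dy
    beta_hz = (y_prev - 2.0 * d_prev * (y_prev @ y_prev) / dy) @ g_k / dy
    beta_dl = (g_k @ y_prev) / dy - t * (g_k @ s_prev) / dy
    return beta_prp, beta_hs, beta_hz, beta_dl
```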

In the past few years, the PRP method has been regarded as the most efficient conjugate gradient method in practical computation. One remarkable property of the PRP method is that it essentially performs a restart if a bad direction occurs (see [6]). Powell [7] constructed an example showing that the PRP method can cycle infinitely without approaching any stationary point even if an exact line search is used. This counterexample also indicates that the PRP method may fail to converge globally when the objective function is nonconvex. Powell [8] therefore suggested that the parameter $\beta_k$ in the PRP method should not be allowed to be negative, and defined $\beta_k$ as
$$\beta_k = \max\{0, \beta_k^{\mathrm{PRP}}\}. \tag{1.5}$$
Gilbert and Nocedal [9] considered Powell's suggestion and proved the global convergence of the modified PRP method for nonconvex functions under an appropriate line search. In addition, there is much research on the convergence properties of the PRP method (see [10–12]).

In recent years, much effort has been devoted to constructing new methods which not only possess global convergence properties for general functions but also outperform the original methods computationally. For example, Yu et al. [13] proposed a new nonlinear conjugate gradient method in which the parameter $\beta_k$ is defined on the basis of $\beta_k^{\mathrm{PRP}}$ as
$$\beta_k^{\mathrm{VPRP}} = \begin{cases} \dfrac{\|g_k\|^2 - |g_k^T g_{k-1}|}{\nu |g_k^T d_{k-1}| + \|g_{k-1}\|^2}, & \text{if } \|g_k\|^2 > |g_k^T g_{k-1}|, \\ 0, & \text{otherwise}, \end{cases} \tag{1.6}$$
where $\nu > 1$ (in this paper, we call this method the VPRP method), and they proved the global convergence of the VPRP method with the Wolfe line search. Hager and Zhang [4] discussed the global convergence of the HZ method for strongly convex functions under the Wolfe line search and the Goldstein line search. In order to prove global convergence for general functions, Hager and Zhang modified the parameter $\beta_k^{\mathrm{HZ}}$ as
$$\beta_k^{\mathrm{MHZ}} = \max\left\{\beta_k^{\mathrm{HZ}}, \eta_k\right\}, \tag{1.7}$$
where
$$\eta_k = \frac{-1}{\|d_k\| \min\{\eta, \|g_k\|\}}, \quad \eta > 0. \tag{1.8}$$
The method corresponding to (1.7) is the well-known CG-DESCENT method.

Dai and Liao [5] proposed a new conjugacy condition, that is,
$$d_k^T y_{k-1} = -t g_k^T s_{k-1} \quad (t \ge 0). \tag{1.9}$$
Under this new conjugacy condition, they proved global convergence of the DL conjugate gradient method for uniformly convex functions. Following Powell's suggestion, Dai and Liao also gave a modified parameter
$$\beta_k = \max\left\{\frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, 0\right\} - t \frac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}} \quad (t \ge 0). \tag{1.10}$$
The method corresponding to (1.10) is the well-known DL+ method. Under the strong Wolfe line search, they studied the global convergence of the DL+ method for general functions. Zhang et al. [14] proposed a modified DL conjugate gradient method and proved its global convergence. Moreover, some researchers have been studying a new type of method called the spectral conjugate gradient method (see [15–17]).
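For later reference, here is a similar sketch of two of the safeguarded parameters compared against in Section 4, namely $\beta_k^{\mathrm{VPRP}}$ from (1.6) and the DL+ parameter from (1.10); again the function names are ours, the inputs are NumPy vectors, and $d_{k-1}^T y_{k-1}$ is assumed to be nonzero.

```python
def beta_vprp(g_k, g_prev, d_prev, nu=1.25):
    """VPRP parameter (1.6), with nu > 1."""
    gap = g_k @ g_k - abs(g_k @ g_prev)       # ||g_k||^2 - |g_k^T g_{k-1}|
    if gap <= 0.0:                            # the 'otherwise' branch of (1.6)
        return 0.0
    return gap / (nu * abs(g_k @ d_prev) + g_prev @ g_prev)

def beta_dl_plus(g_k, g_prev, d_prev, s_prev, t=0.1):
    """DL+ parameter (1.10), with t >= 0."""
    y_prev = g_k - g_prev
    dy = d_prev @ y_prev                      # d_{k-1}^T y_{k-1}, assumed nonzero
    return max((g_k @ y_prev) / dy, 0.0) - t * (g_k @ s_prev) / dy
```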

This paper is organized as follows. In the next section, we propose a modified PRP method and prove its sufficient descent property. In Section 3, the global convergence of the method with the Wolfe line search is established. In Section 4, numerical results are reported. Conclusions are drawn in the last section.

2. Modified PRP Method

In this section, we propose a modified PRP conjugate gradient method in which the parameter $\beta_k$ is defined on the basis of $\beta_k^{\mathrm{PRP}}$ as follows:
$$\beta_k^{\mathrm{MPRP}} = \begin{cases} \dfrac{\|g_k\|^2 - |g_k^T g_{k-1}|}{\max\{0, g_k^T d_{k-1}\} + \|g_{k-1}\|^2}, & \text{if } \|g_k\|^2 \ge |g_k^T g_{k-1}| \ge m \|g_k\|^2, \\ 0, & \text{otherwise}, \end{cases} \tag{2.1}$$
in which $m \in (0, 1)$. We introduce the modified PRP method as follows.

2.1. Modified PRP (MPRP) Method

Step 1. Set $x_1 \in R^n$ and $\varepsilon \ge 0$; let $d_1 = -g_1$ and $k = 1$. If $\|g_1\| \le \varepsilon$, then stop.

Step 2. Compute $\alpha_k$ by some inexact line search.

Step 3. Let $x_{k+1} = x_k + \alpha_k d_k$ and $g_{k+1} = g(x_{k+1})$. If $\|g_{k+1}\| \le \varepsilon$, then stop.

Step 4. Compute $\beta_{k+1}$ by (2.1), and generate $d_{k+1}$ by (1.3).

Step 5. Set $k = k + 1$, and go to Step 2.
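The following Python sketch assembles Steps 1–5 into a single routine, with $\beta_k^{\mathrm{MPRP}}$ computed from (2.1). The line search in Step 2 is left unspecified by the algorithm; here SciPy's Wolfe line search is borrowed purely for illustration, and the default values of $m$ and $\varepsilon$ are our own choices, not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import line_search

def beta_mprp(g_k, g_prev, d_prev, m=0.1):
    """MPRP parameter (2.1), with m in (0, 1)."""
    gg = abs(g_k @ g_prev)                    # |g_k^T g_{k-1}|
    nk2 = g_k @ g_k                           # ||g_k||^2
    if nk2 >= gg >= m * nk2:
        return (nk2 - gg) / (max(0.0, g_k @ d_prev) + g_prev @ g_prev)
    return 0.0

def mprp(f, grad, x1, m=0.1, eps=1e-6, max_iter=10000):
    """Rough sketch of the MPRP method, Steps 1-5."""
    x, g = x1, grad(x1)
    d = -g                                    # Step 1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:          # stopping tests of Steps 1 and 3
            break
        # Step 2: an inexact (Wolfe-type) line search; SciPy is used for brevity.
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                     # line search did not succeed
            break
        x = x + alpha * d                     # Step 3
        g_new = grad(x)
        d = -g_new + beta_mprp(g_new, g, d, m=m) * d   # Step 4 with (1.3)
        g = g_new                             # Step 5: k <- k + 1
    return x
```

For instance, `mprp(lambda x: x @ x, lambda x: 2 * x, np.array([3.0, -4.0]))` should return a point close to the origin.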

In the convergence analyses and implementations of conjugate gradient methods, one often requires the inexact line search to satisfy the Wolfe line search or the strong Wolfe line search. The Wolfe line search is to find $\alpha_k$ such that
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \tag{2.2}$$
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \tag{2.3}$$
where $0 < \delta < \sigma < 1$. The strong Wolfe line search consists of (2.2) and the following strengthened version of (2.3):
$$\left| g(x_k + \alpha_k d_k)^T d_k \right| \le -\sigma g_k^T d_k. \tag{2.4}$$
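As a concrete illustration, the sketch below checks whether a given trial stepsize satisfies (2.2)-(2.3) and the strong variant (2.2) with (2.4); the function name is ours, the inputs are NumPy vectors, and the defaults $\delta = 0.01$ and $\sigma = 0.1$ are simply the values used in Section 4.

```python
def wolfe_conditions(f, grad, x_k, d_k, alpha, delta=0.01, sigma=0.1):
    """Return (wolfe, strong_wolfe) for the trial stepsize alpha."""
    g_k = grad(x_k)
    gTd = g_k @ d_k                           # g_k^T d_k, assumed negative
    x_new = x_k + alpha * d_k
    gTd_new = grad(x_new) @ d_k               # g(x_k + alpha d_k)^T d_k

    decrease = f(x_new) <= f(x_k) + delta * alpha * gTd    # condition (2.2)
    curvature = gTd_new >= sigma * gTd                     # condition (2.3)
    strong_curvature = abs(gTd_new) <= -sigma * gTd        # condition (2.4)
    return decrease and curvature, decrease and strong_curvature
```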

Moreover, in most references, the sufficient descent condition
$$g_k^T d_k \le -c \|g_k\|^2, \quad c > 0, \tag{2.5}$$
is imposed, since it plays a vital role in guaranteeing the global convergence properties of conjugate gradient methods. In this paper, however, $d_k$ satisfies (2.5) without any line search.

Theorem 2.1. Consider any method of the form (1.2)-(1.3) with $\beta_k = \beta_k^{\mathrm{MPRP}}$. If $g_k \ne 0$ for all $k \ge 1$, then
$$g_k^T d_k \le -\|g_k\|^2, \quad \forall k \ge 1. \tag{2.6}$$

Proof. Multiplying (1.3) by $g_k^T$, we get
$$g_k^T d_k = -\|g_k\|^2 + \beta_k^{\mathrm{MPRP}} g_k^T d_{k-1}. \tag{2.7}$$
If $\beta_k^{\mathrm{MPRP}} = 0$, then (2.7) shows that the conclusion (2.6) holds. If $\beta_k^{\mathrm{MPRP}} \ne 0$, the proof is divided into the following two cases.
Firstly, if $g_k^T d_{k-1} \le 0$, then from (2.1) and (2.7), one has
$$\begin{aligned} g_k^T d_k &= -\|g_k\|^2 + \frac{\|g_k\|^2 - |g_k^T g_{k-1}|}{\max\{0, g_k^T d_{k-1}\} + \|g_{k-1}\|^2} \cdot g_k^T d_{k-1} = -\|g_k\|^2 + \frac{\|g_k\|^2 - |g_k^T g_{k-1}|}{\|g_{k-1}\|^2} \cdot g_k^T d_{k-1} \\ &= -\|g_k\|^2 \cdot \frac{\left(|g_k^T g_{k-1}|/\|g_k\|^2\right) g_k^T d_{k-1} - g_k^T d_{k-1} + \|g_{k-1}\|^2}{\|g_{k-1}\|^2} = -\|g_k\|^2 \cdot \frac{\|g_{k-1}\|^2 - g_k^T d_{k-1}\left(1 - |g_k^T g_{k-1}|/\|g_k\|^2\right)}{\|g_{k-1}\|^2} \\ &\le -\|g_k\|^2 \cdot \frac{\|g_{k-1}\|^2}{\|g_{k-1}\|^2} = -\|g_k\|^2 < 0. \end{aligned} \tag{2.8}$$
Secondly, if $g_k^T d_{k-1} > 0$, then from (2.7), we also have
$$g_k^T d_k < -\|g_k\|^2 + \frac{\|g_k\|^2 - |g_k^T g_{k-1}|}{g_k^T d_{k-1}} \cdot g_k^T d_{k-1} = -|g_k^T g_{k-1}| \le -m\|g_k\|^2. \tag{2.9}$$
From the above, the conclusion (2.6) holds under any line search.

3. Global Convergence of the Modified PRP Method

In order to prove the global convergence of the modified PRP method, we assume that the objective function 𝑓(𝑥) satisfies the following assumption.

Assumption H
(i) The level set $\Omega = \{x \in R^n \mid f(x) \le f(x_1)\}$ is bounded; that is, there exists a positive constant $\xi > 0$ such that $\|x\| \le \xi$ for all $x \in \Omega$.
(ii) In a neighborhood $V$ of $\Omega$, $f$ is continuously differentiable and its gradient $g$ is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in V. \tag{3.1}$$
Under these assumptions on $f$, there exists a constant $\gamma > 0$ such that
$$\|g(x)\| \le \gamma, \quad \forall x \in \Omega. \tag{3.2}$$
The conclusion of the following lemma, often called the Zoutendijk condition, is used to prove the global convergence properties of nonlinear conjugate gradient methods. It was originally given by Zoutendijk [18].

Lemma 3.1. Suppose that Assumption H holds. Consider any iteration of the form (1.2)-(1.3), where $d_k$ satisfies $g_k^T d_k < 0$ for $k \in N^+$ and $\alpha_k$ satisfies the Wolfe line search. Then
$$\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \tag{3.3}$$

Lemma 3.2. Suppose that Assumption H holds. Consider the method (1.2)-(1.3), where $\beta_k = \beta_k^{\mathrm{MPRP}}$ and $\alpha_k$ satisfies the Wolfe line search and (2.6). If there exists a constant $r > 0$ such that
$$\|g_k\| \ge r, \quad \forall k \ge 1, \tag{3.4}$$
then one has
$$\sum_{k \ge 2} \|u_k - u_{k-1}\|^2 < +\infty, \tag{3.5}$$
where $u_k = d_k / \|d_k\|$.

Proof. From (2.1) and (3.4), we get
$$g_k^T g_{k-1} \ne 0. \tag{3.6}$$
By (2.6) and (3.6), we know that $d_k \ne 0$ for each $k$.
Define the quantities
$$r_k = \frac{-g_k}{\|d_k\|}, \qquad \delta_k = \frac{\beta_k^{\mathrm{MPRP}} \|d_{k-1}\|}{\|d_k\|}. \tag{3.7}$$
By (1.3), one has
$$u_k = \frac{d_k}{\|d_k\|} = \frac{-g_k + \beta_k^{\mathrm{MPRP}} d_{k-1}}{\|d_k\|} = r_k + \delta_k u_{k-1}. \tag{3.8}$$
Since $u_k$ is a unit vector, we get
$$\|r_k\| = \|u_k - \delta_k u_{k-1}\| = \|\delta_k u_k - u_{k-1}\|. \tag{3.9}$$
From $\delta_k \ge 0$ and the above equation, one has
$$\|u_k - u_{k-1}\| \le (1 + \delta_k)\|u_k - u_{k-1}\| = \|(1 + \delta_k)u_k - (1 + \delta_k)u_{k-1}\| \le \|u_k - \delta_k u_{k-1}\| + \|\delta_k u_k - u_{k-1}\| = 2\|r_k\|. \tag{3.10}$$
By (2.1), (3.4), and (3.6), one has
$$1 \ge \frac{|g_k^T g_{k-1}|}{\|g_k\|^2} \ge m. \tag{3.11}$$
From (3.3), (2.6), (3.4), and (3.11), one has
$$m^2 \sum_{k \ge 1,\, d_k \ne 0} \|r_k\|^2 \le \sum_{k \ge 1,\, d_k \ne 0} \|r_k\|^2 \cdot \frac{|g_k^T g_{k-1}|^2}{\|g_k\|^4} = \sum_{k \ge 1,\, d_k \ne 0} \frac{|g_k^T g_{k-1}|^2}{\|d_k\|^2} \le \sum_{k \ge 1,\, d_k \ne 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty, \tag{3.12}$$
so
$$\sum_{k \ge 1,\, d_k \ne 0} \|r_k\|^2 < +\infty. \tag{3.13}$$
By (3.10) and the above inequality, one has
$$\sum_{k \ge 2} \|u_k - u_{k-1}\|^2 < +\infty. \tag{3.14}$$

Lemma 3.3. Suppose that Assumption H holds. If (3.4) holds, then $\beta_k^{\mathrm{MPRP}}$ has Property (*), that is, (1) there exists a constant $b > 1$ such that $|\beta_k^{\mathrm{MPRP}}| \le b$; (2) there exists a constant $\lambda > 0$ such that $\|x_k - x_{k-1}\| \le \lambda \Rightarrow |\beta_k^{\mathrm{MPRP}}| \le 1/(2b)$.

Proof. From Assumption H (ii), we know that (3.2) holds. By (2.1), (3.2), and (3.4), one has
$$|\beta_k^{\mathrm{MPRP}}| \le \frac{\left(\|g_k\| + \|g_{k-1}\|\right)\|g_k\|}{\|g_{k-1}\|^2} \le \frac{2\gamma^2}{r^2} = b. \tag{3.15}$$
Define $\lambda = r^2/(2L\gamma b)$. If $\|x_k - x_{k-1}\| \le \lambda$, then from (2.1), (3.1), (3.2), and (3.4), one has
$$|\beta_k^{\mathrm{MPRP}}| \le \frac{\|g_k\|^2 - g_k^T g_{k-1}}{\|g_{k-1}\|^2} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2} \le \frac{\|g_k\| \cdot \|g_k - g_{k-1}\|}{\|g_{k-1}\|^2} \le \frac{\gamma L \lambda}{r^2} = \frac{1}{2b}. \tag{3.16}$$

Lemma 3.4 (see [19]). Suppose that Assumption H holds. Let $\{x_k\}$ and $\{d_k\}$ be generated by (1.2)-(1.3), in which $\alpha_k$ satisfies the Wolfe line search and (2.6). If $\beta_k \ge 0$ has Property (*) and (3.4) holds, then there exists $\lambda > 0$ such that, for any $\Delta \in Z^+$ and any $k_0 \in Z^+$, there is $k \ge k_0$ such that
$$\left|\mathfrak{R}_{k,\Delta}^{\lambda}\right| > \frac{\Delta}{2}, \tag{3.17}$$
where $\mathfrak{R}_{k,\Delta}^{\lambda} \triangleq \{i \in Z^+ : k \le i \le k + \Delta - 1,\ \|x_i - x_{i-1}\| \ge \lambda\}$ and $|\mathfrak{R}_{k,\Delta}^{\lambda}|$ denotes the number of elements of $\mathfrak{R}_{k,\Delta}^{\lambda}$.

Theorem 3.5. Suppose that Assumption H holds. Let $\{x_k\}$ and $\{d_k\}$ be generated by (1.2)-(1.3), in which $\alpha_k$ satisfies the Wolfe line search and (2.6) and $\beta_k = \beta_k^{\mathrm{MPRP}}$. Then one has
$$\liminf_{k \to +\infty} \|g_k\| = 0. \tag{3.18}$$

Proof. We proceed by contradiction. Suppose that (3.18) does not hold; then there exists $r > 0$ such that
$$\|g_k\| \ge r, \quad \text{for } k \ge 1, \tag{3.19}$$
so the conclusions of Lemmas 3.2 and 3.4 hold.
As before, let $u_k = d_k/\|d_k\|$. Then for all $l, k \in Z^+$ with $l \ge k$, one has
$$x_l - x_{k-1} = \sum_{i=k}^{l} \|x_i - x_{i-1}\|\, u_{i-1} = \sum_{i=k}^{l} \|s_{i-1}\|\, u_{k-1} + \sum_{i=k}^{l} \|s_{i-1}\| (u_{i-1} - u_{k-1}), \tag{3.20}$$
where $s_{i-1} = x_i - x_{i-1}$, that is,
$$\sum_{i=k}^{l} \|s_{i-1}\|\, u_{k-1} = x_l - x_{k-1} - \sum_{i=k}^{l} \|s_{i-1}\| (u_{i-1} - u_{k-1}). \tag{3.21}$$
From Assumption H, we know that there exists a constant $\xi > 0$ such that
$$\|x\| \le \xi, \quad \text{for } x \in \Omega. \tag{3.22}$$
From (3.21) and the above inequality, one has
$$\sum_{i=k}^{l} \|s_{i-1}\| \le 2\xi + \sum_{i=k}^{l} \|s_{i-1}\| \cdot \|u_{i-1} - u_{k-1}\|. \tag{3.23}$$
Let $\Delta$ be a positive integer with $\Delta \in [8\xi/\lambda,\ 8\xi/\lambda + 1)$, where $\lambda$ is defined in Lemma 3.4. From Lemma 3.2, we know that there exists $k_0$ such that
$$\sum_{i \ge k_0} \|u_{i+1} - u_i\|^2 \le \frac{1}{4\Delta}. \tag{3.24}$$
From the Cauchy-Schwarz inequality and (3.24), for all $i \in [k, k + \Delta - 1]$, one has
$$\|u_{i-1} - u_{k-1}\| \le \sum_{j=k}^{i-1} \|u_j - u_{j-1}\| \le (i - k)^{1/2} \left( \sum_{j=k}^{i-1} \|u_j - u_{j-1}\|^2 \right)^{1/2} \le \Delta^{1/2} \left( \frac{1}{4\Delta} \right)^{1/2} = \frac{1}{2}. \tag{3.25}$$
By Lemma 3.4, we know that there exists $k \ge k_0$ such that
$$\left|\mathfrak{R}_{k,\Delta}^{\lambda}\right| > \frac{\Delta}{2}. \tag{3.26}$$
It follows from (3.23), (3.25), and (3.26) that
$$\frac{\lambda \Delta}{4} < \frac{\lambda}{2} \left|\mathfrak{R}_{k,\Delta}^{\lambda}\right| \le \frac{1}{2} \sum_{i=k}^{k+\Delta-1} \|s_{i-1}\| \le 2\xi. \tag{3.27}$$
From (3.27), one has $\Delta < 8\xi/\lambda$, which contradicts the definition of $\Delta$. Hence,
$$\liminf_{k \to +\infty} \|g_k\| = 0, \tag{3.28}$$
which completes the proof.

4. Numerical Results

In this section, we compare the modified PRP conjugate gradient method, denoted the MPRP method, with the VPRP, CG-DESCENT, and DL+ methods under the strong Wolfe line search on the test problems from [20] with the given initial points and dimensions. The parameters are chosen as follows: $\delta = 0.01$, $\sigma = 0.1$, $\nu = 1.25$, $\eta = 0.01$, and $t = 0.1$. The program is stopped if $\|g_k\| \le 10^{-6}$ is satisfied, or if the number of iterations exceeds ten thousand. All codes were written in MATLAB 7.0 and run on a PC with a 2.0 GHz CPU, 512 MB of memory, and the Windows XP operating system.

The numerical results of our tests with the MPRP, VPRP, CG-DESCENT, and DL+ methods are reported in Tables 1, 2, 3, and 4, respectively. In the tables, the column “Problem” gives the problem's name in [20], and “CPU,” “NI,” “NF,” and “NG” denote the CPU time in seconds, the number of iterations, the number of function evaluations, and the number of gradient evaluations, respectively. “Dim” denotes the dimension of the tested problem. If the iteration limit was exceeded, the run was stopped; this is indicated by NaN.

In this paper, we adopt the performance profiles of Dolan and Moré [21] to compare the MPRP method with the VPRP, CG-DESCENT, and DL+ methods with respect to CPU time, the number of iterations, the number of function evaluations, and the number of gradient evaluations, respectively (see Figures 1, 2, 3, and 4). In the figures,
$$X = \tau, \qquad Y = P\left(\log_2 r_{p,s} \le \tau : 1 \le s \le n_s\right) = \frac{1}{n_p}\,\mathrm{size}\left\{p \in P : \log_2 r_{p,s} \le \tau\right\}. \tag{4.1}$$

Figures 1–4 show the performance of the four methods relative to CPU time, the number of iterations, the number of function evaluations, and the number of gradient evaluations, respectively. For example, the performance profile with respect to CPU time means that, for each method, we plot the fraction $P$ of problems for which the method is within a factor $\tau$ of the best time. The left side of each figure gives the percentage of the test problems for which a method is the fastest; the right side gives the percentage of the test problems that are successfully solved by each method. The top curve corresponds to the method that solved the most problems in a time that was within a factor $\tau$ of the best time.
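To illustrate how such curves are constructed, here is a rough sketch of the Dolan-Moré profile in (4.1): for each solver it computes, for every $\tau$, the fraction of problems whose performance ratio $r_{p,s}$ satisfies $\log_2 r_{p,s} \le \tau$. The array layout and the treatment of failed runs (NaN) are our assumptions, not details given in the paper.

```python
import numpy as np

def performance_profile(measures, taus):
    """Dolan-More performance profiles.

    measures : (n_p, n_s) array; measures[p, s] is e.g. the CPU time of solver s
               on problem p, with np.nan marking a failed run.
    taus     : 1-D array of tau values (the horizontal axis, on a log_2 scale).
    Returns a (len(taus), n_s) array whose columns are the profile curves.
    Assumes every problem is solved by at least one solver.
    """
    t = np.where(np.isnan(measures), np.inf, measures)
    best = t.min(axis=1, keepdims=True)            # best measure on each problem
    log_ratios = np.log2(t / best)                 # log_2 r_{p,s}
    n_p = measures.shape[0]
    return np.array([(log_ratios <= tau).sum(axis=0) / n_p for tau in taus])
```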

Figure 1 shows that the MPRP method outperforms the VPRP, CG-DESCENT, and DL+ methods on the given test problems in terms of CPU time. Figures 2–4 show that the MPRP method also has the best performance with respect to the number of iterations and the numbers of function and gradient evaluations, since it corresponds to the top curve. Hence, the MPRP method is computationally efficient.

5. Conclusions

We have proposed a modified PRP method based on the PRP method, which generates sufficient descent directions independently of the line search. Moreover, we proved that the proposed method converges globally for general nonconvex functions. The performance profiles show that the proposed method is also very efficient.

Acknowledgments

The authors wish to express their heartfelt thanks to the referees and Professor Piermarco Cannarsa for their detailed and helpful suggestions for revising the paper. This work was supported by the Natural Science Foundation of Chongqing Education Committee (KJ091104) and Chongqing Three Gorges University (09ZZ-060).