Mathematical Problems in Engineering


Research Article | Open Access

Volume 2011 | Article ID 463087 | 22 pages | https://doi.org/10.1155/2011/463087

Global Convergence of a Nonlinear Conjugate Gradient Method

Academic Editor: Piermarco Cannarsa
Received: 05 Mar 2011
Revised: 21 May 2011
Accepted: 04 Jun 2011
Published: 21 Jul 2011

Abstract

A modified PRP nonlinear conjugate gradient method for solving unconstrained optimization problems is proposed. An important property of the proposed method is that the sufficient descent property is guaranteed independently of any line search. Under the Wolfe line search, the global convergence of the proposed method is established for nonconvex minimization. Numerical results show that the proposed method is effective and promising in comparison with the VPRP, CG-DESCENT, and DL+ methods.

1. Introduction

The nonlinear conjugate gradient method is one of the most efficient methods for solving unconstrained optimization problems. It comprises a class of algorithms characterized by low memory requirements and simplicity.

Consider the unconstrained optimization problem
$$\min_{x \in R^n} f(x), \qquad (1.1)$$
where $f : R^n \to R$ is continuously differentiable and its gradient $g$ is available.

The iterates of the conjugate gradient method for solving (1.1) are given by
$$x_{k+1} = x_k + \alpha_k d_k, \qquad (1.2)$$
where the stepsize $\alpha_k$ is positive and computed by a certain line search, and the search direction $d_k$ is defined by
$$d_k = \begin{cases} -g_k, & \text{for } k = 1, \\ -g_k + \beta_k d_{k-1}, & \text{for } k \ge 2, \end{cases} \qquad (1.3)$$
where $g_k = \nabla f(x_k)$ and $\beta_k$ is a scalar. Some well-known conjugate gradient methods include the Polak-Ribière-Polyak (PRP) method [1, 2], the Hestenes-Stiefel (HS) method [3], the Hager-Zhang (HZ) method [4], and the Dai-Liao (DL) method [5]. The parameters $\beta_k$ of these methods are specified as follows:
$$\beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \qquad \beta_k^{HS} = \frac{g_k^T (g_k - g_{k-1})}{d_{k-1}^T (g_k - g_{k-1})},$$
$$\beta_k^{HZ} = \left( y_{k-1} - 2 d_{k-1} \frac{\|y_{k-1}\|^2}{d_{k-1}^T y_{k-1}} \right)^T \frac{g_k}{d_{k-1}^T y_{k-1}}, \qquad \beta_k^{DL} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}} - t \frac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}} \quad (t \ge 0), \qquad (1.4)$$
where $\|\cdot\|$ is the Euclidean norm, $y_{k-1} = g_k - g_{k-1}$, and $s_{k-1} = x_k - x_{k-1}$. If $f$ is a strictly convex quadratic function, the above methods are equivalent when an exact line search is used. If $f$ is nonconvex, their behaviors may differ markedly.
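For readers who want to experiment, a minimal Python sketch of the update (1.2)-(1.3) with the classical choices of $\beta_k$ in (1.4) could look as follows; the function names are illustrative and are not taken from the paper.

```python
# Sketch of the classical beta formulas in (1.4) and the direction update (1.3).
import numpy as np

def beta_prp(g, g_prev):
    return g @ (g - g_prev) / (g_prev @ g_prev)

def beta_hs(g, g_prev, d_prev):
    y = g - g_prev
    return g @ y / (d_prev @ y)

def beta_hz(g, g_prev, d_prev):
    y = g - g_prev
    dy = d_prev @ y
    return (y - 2.0 * d_prev * (y @ y) / dy) @ g / dy

def beta_dl(g, g_prev, d_prev, s_prev, t=0.1):
    y = g - g_prev
    dy = d_prev @ y
    return g @ y / dy - t * (g @ s_prev) / dy

def next_direction(g, beta, d_prev=None):
    # d_1 = -g_1; d_k = -g_k + beta_k d_{k-1} for k >= 2, as in (1.3)
    return -g if d_prev is None else -g + beta * d_prev
```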

In the past few years, the PRP method has been regarded as one of the most efficient conjugate gradient methods in practical computation. One remarkable property of the PRP method is that it essentially performs a restart if a bad direction occurs (see [6]). Powell [7] constructed an example which showed that the PRP method can cycle infinitely without approaching any stationary point even if an exact line search is used. This counterexample also indicates that the PRP method may not be globally convergent when the objective function is nonconvex. Powell [8] suggested that the parameter $\beta_k$ should not be allowed to be negative in the PRP method, that is, that $\beta_k$ be defined as
$$\beta_k = \max\{0, \beta_k^{PRP}\}. \qquad (1.5)$$
Gilbert and Nocedal [9] considered Powell's suggestion and proved the global convergence of the modified PRP method for nonconvex functions under an appropriate line search. In addition, there has been much research on convergence properties of the PRP method (see [10–12]).

In recent years, much effort has been devoted to developing new methods which not only possess global convergence properties for general functions but also improve on the original methods from a computational point of view. For example, Yu et al. [13] proposed a new nonlinear conjugate gradient method in which the parameter $\beta_k$ is defined on the basis of $\beta_k^{PRP}$ as
$$\beta_k^{VPRP} = \begin{cases} \dfrac{\|g_k\|^2 - |g_k^T g_{k-1}|}{\nu |g_k^T d_{k-1}| + \|g_{k-1}\|^2}, & \text{if } \|g_k\|^2 > |g_k^T g_{k-1}|, \\ 0, & \text{otherwise}, \end{cases} \qquad (1.6)$$
where $\nu > 1$ (in this paper, we call this method the VPRP method). They proved the global convergence of the VPRP method with the Wolfe line search. Hager and Zhang [4] discussed the global convergence of the HZ method for strongly convex functions under the Wolfe line search and the Goldstein line search. In order to prove global convergence for general functions, Hager and Zhang modified the parameter $\beta_k^{HZ}$ as
$$\beta_k^{MHZ} = \max\{\beta_k^{HZ}, \eta_k\}, \qquad (1.7)$$
where
$$\eta_k = \frac{-1}{\|d_k\| \min\{\eta, \|g_k\|\}}, \qquad \eta > 0. \qquad (1.8)$$
The method corresponding to (1.7) is the well-known CG-DESCENT method.

Dai and Liao [5] proposed a new conjugacy condition, that is,
$$d_k^T y_{k-1} = -t g_k^T s_{k-1}, \qquad t \ge 0. \qquad (1.9)$$
Under this conjugacy condition, they proved global convergence of the DL conjugate gradient method for uniformly convex functions. Following Powell's suggestion, Dai and Liao also gave a modified parameter
$$\beta_k = \max\left\{ \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, 0 \right\} - t \frac{g_k^T s_{k-1}}{d_{k-1}^T y_{k-1}}, \qquad t \ge 0. \qquad (1.10)$$
The method corresponding to (1.10) is the well-known DL+ method. Under the strong Wolfe line search, they established the global convergence of the DL+ method for general functions. Zhang et al. [14] proposed a modified DL conjugate gradient method and proved its global convergence. Moreover, some researchers have been studying a new type of method, called the spectral conjugate gradient method (see [15–17]).

This paper is organized as follows. In the next section, we propose a modified PRP method and prove its sufficient descent property. In Section 3, the global convergence of the method under the Wolfe line search is established. In Section 4, numerical results are reported. Conclusions are given in the last section.

2. Modified PRP Method

In this section, we propose a modified PRP conjugate gradient method in which the parameter $\beta_k$ is defined on the basis of $\beta_k^{PRP}$ as follows:
$$\beta_k^{MPRP} = \begin{cases} \dfrac{\|g_k\|^2 - |g_k^T g_{k-1}|}{\max\{0, g_k^T d_{k-1}\} + \|g_{k-1}\|^2}, & \text{if } \|g_k\|^2 \ge |g_k^T g_{k-1}| \ge m \|g_k\|^2, \\ 0, & \text{otherwise}, \end{cases} \qquad (2.1)$$
in which $m \in (0,1)$. We introduce the modified PRP method as follows.

2.1. Modified PRP (MPRP) Method

Step 1. Choose $x_1 \in R^n$ and $\varepsilon \ge 0$, and set $d_1 = -g_1$ and $k = 1$; if $\|g_1\| \le \varepsilon$, then stop.

Step 2. Compute 𝛼𝑘 by some inexact line search.

Step 3. Let $x_{k+1} = x_k + \alpha_k d_k$ and $g_{k+1} = g(x_{k+1})$; if $\|g_{k+1}\| \le \varepsilon$, then stop.

Step 4. Compute 𝛽𝑘+1 by (2.1), and generate 𝑑𝑘+1 by (1.3).

Step 5. Set 𝑘=𝑘+1, and go to Step 2.
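The following Python sketch (not the authors' Matlab code) illustrates Steps 1–5 with $\beta_k^{MPRP}$ computed from (2.1). The Wolfe step is delegated to SciPy's `line_search`, which enforces conditions of the strong Wolfe type (2.2)/(2.4); the value of $m$, the iteration cap, and all names are illustrative assumptions rather than the paper's settings.

```python
# Sketch of the MPRP method (Steps 1-5) under stated assumptions.
import numpy as np
from scipy.optimize import line_search

def beta_mprp(g, g_prev, d_prev, m=0.01):
    gg = g @ g
    cross = abs(g @ g_prev)
    if gg >= cross >= m * gg:                       # condition in (2.1)
        denom = max(0.0, g @ d_prev) + g_prev @ g_prev
        return (gg - cross) / denom
    return 0.0

def mprp(f, grad, x1, eps=1e-6, m=0.01, max_iter=10000):
    x, g = x1, grad(x1)
    d = -g                                          # Step 1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                # stopping test (Steps 1/3)
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=0.01, c2=0.1)[0]  # Step 2
        if alpha is None:                           # line search failed
            break
        x_new = x + alpha * d                       # Step 3
        g_new = grad(x_new)
        beta = beta_mprp(g_new, g, d, m)            # Step 4, using (2.1)
        d = -g_new + beta * d                       # direction update (1.3)
        x, g = x_new, g_new                         # advance the iterate (Step 5)
    return x

# Example on the 2-D Rosenbrock function
if __name__ == "__main__":
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(mprp(f, grad, np.array([-1.2, 1.0])))
```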

In the convergence analysis and implementation of conjugate gradient methods, the inexact line search is often required to satisfy the Wolfe conditions or the strong Wolfe conditions. The Wolfe line search finds $\alpha_k$ such that
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad (2.2)$$
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \qquad (2.3)$$
where $0 < \delta < \sigma < 1$. The strong Wolfe line search consists of (2.2) and the following strengthened version of (2.3):
$$|g(x_k + \alpha_k d_k)^T d_k| \le -\sigma g_k^T d_k. \qquad (2.4)$$
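As a concrete reading of these conditions, the small helper below (illustrative, not part of the paper's code) checks whether a trial step $\alpha$ satisfies (2.2)-(2.3) or the strong pair (2.2) and (2.4); the defaults for $\delta$ and $\sigma$ follow the values used later in Section 4.

```python
# Checks the Wolfe conditions (2.2)-(2.3) or the strong Wolfe pair (2.2)+(2.4).
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, delta=0.01, sigma=0.1, strong=False):
    g0_d = grad(x) @ d
    armijo = f(x + alpha * d) <= f(x) + delta * alpha * g0_d          # (2.2)
    g1_d = grad(x + alpha * d) @ d
    if strong:
        curvature = abs(g1_d) <= -sigma * g0_d                        # (2.4)
    else:
        curvature = g1_d >= sigma * g0_d                              # (2.3)
    return armijo and curvature
```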

Moreover, in most references the sufficient descent condition
$$g_k^T d_k \le -c \|g_k\|^2, \qquad c > 0, \qquad (2.5)$$
is imposed, and it plays a vital role in guaranteeing the global convergence of conjugate gradient methods. In this paper, however, $d_k$ satisfies (2.5) without any line search.

Theorem 2.1. Consider any method of the form (1.2)-(1.3) with $\beta_k = \beta_k^{MPRP}$. If $g_k \neq 0$ for all $k \ge 1$, then
$$g_k^T d_k \le -m \|g_k\|^2, \qquad \forall k \ge 1. \qquad (2.6)$$

Proof. Multiplying (1.3) by $g_k^T$, we get
$$g_k^T d_k = -\|g_k\|^2 + \beta_k^{MPRP} g_k^T d_{k-1}. \qquad (2.7)$$
If $\beta_k^{MPRP} = 0$, then (2.7) shows that the conclusion (2.6) holds. If $\beta_k^{MPRP} \neq 0$, we consider two cases.
First, if $g_k^T d_{k-1} \le 0$, then from (2.1) and (2.7) one has
$$\begin{aligned} g_k^T d_k &= -\|g_k\|^2 + \frac{\|g_k\|^2 - |g_k^T g_{k-1}|}{\max\{0, g_k^T d_{k-1}\} + \|g_{k-1}\|^2}\, g_k^T d_{k-1} = -\|g_k\|^2 + \frac{\|g_k\|^2 - |g_k^T g_{k-1}|}{\|g_{k-1}\|^2}\, g_k^T d_{k-1} \\ &= -\|g_k\|^2 \cdot \frac{\bigl(|g_k^T g_{k-1}|/\|g_k\|^2\bigr) g_k^T d_{k-1} - g_k^T d_{k-1} + \|g_{k-1}\|^2}{\|g_{k-1}\|^2} \\ &= -\|g_k\|^2 \cdot \frac{\|g_{k-1}\|^2 - g_k^T d_{k-1}\bigl(1 - |g_k^T g_{k-1}|/\|g_k\|^2\bigr)}{\|g_{k-1}\|^2} \le -\|g_k\|^2 \cdot \frac{\|g_{k-1}\|^2}{\|g_{k-1}\|^2} = -\|g_k\|^2 < 0. \end{aligned} \qquad (2.8)$$
Second, if $g_k^T d_{k-1} > 0$, then from (2.7) we also have
$$g_k^T d_k < -\|g_k\|^2 + \frac{\|g_k\|^2 - |g_k^T g_{k-1}|}{g_k^T d_{k-1}}\, g_k^T d_{k-1} = -|g_k^T g_{k-1}| \le -m \|g_k\|^2. \qquad (2.9)$$
In both cases the conclusion (2.6) holds, and it does so under any line search.
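Theorem 2.1 is easy to probe numerically: for arbitrary vectors $g_k$, $g_{k-1}$, and $d_{k-1}$, a direction built with $\beta_k^{MPRP}$ from (2.1) should satisfy (2.6). The quick sketch below assumes an illustrative value of $m$ and is not part of the paper's experiments.

```python
# Numerical sanity check of the sufficient descent property (2.6) for beta^MPRP.
import numpy as np

rng = np.random.default_rng(0)
m = 0.1
for _ in range(10000):
    g, g_prev, d_prev = rng.normal(size=(3, 5))
    gg, cross = g @ g, abs(g @ g_prev)
    beta = ((gg - cross) / (max(0.0, g @ d_prev) + g_prev @ g_prev)
            if gg >= cross >= m * gg else 0.0)           # formula (2.1)
    d = -g + beta * d_prev                               # direction (1.3)
    assert g @ d <= -m * gg + 1e-12                      # inequality (2.6)
print("sufficient descent held in all trials")
```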

3. Global Convergence of the Modified PRP Method

In order to prove the global convergence of the modified PRP method, we assume that the objective function 𝑓(𝑥) satisfies the following assumption.

Assumption H
(i) The level set $\Omega = \{x \in R^n : f(x) \le f(x_1)\}$ is bounded; that is, there exists a constant $\xi > 0$ such that $\|x\| \le \xi$ for all $x \in \Omega$.
(ii) In a neighborhood $V$ of $\Omega$, $f$ is continuously differentiable and its gradient $g$ is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L \|x - y\|, \qquad \forall x, y \in V. \qquad (3.1)$$
Under these assumptions on $f$, there exists a constant $\gamma > 0$ such that
$$\|g(x)\| \le \gamma, \qquad \forall x \in \Omega. \qquad (3.2)$$
The conclusion of the following lemma, often called the Zoutendijk condition, is used to prove the global convergence properties of nonlinear conjugate gradient methods. It was originally given by Zoutendijk [18].

Lemma 3.1. Suppose that Assumption H holds. Consider any iteration of the form (1.2)-(1.3), where $d_k$ satisfies $g_k^T d_k < 0$ for $k \in N^+$ and $\alpha_k$ satisfies the Wolfe line search. Then
$$\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \qquad (3.3)$$

Lemma 3.2. Suppose that Assumption H holds. Consider the method (1.2)-(1.3), where $\beta_k = \beta_k^{MPRP}$ and $\alpha_k$ satisfies the Wolfe line search and (2.6). If there exists a constant $r > 0$ such that
$$\|g_k\| \ge r, \qquad \forall k \ge 1, \qquad (3.4)$$
then one has
$$\sum_{k \ge 2} \|u_k - u_{k-1}\|^2 < +\infty, \qquad (3.5)$$
where $u_k = d_k / \|d_k\|$.

Proof. From (2.1) and (3.4), we get
$$g_k^T g_{k-1} \neq 0. \qquad (3.6)$$
By (2.6) and (3.6), we know that $d_k \neq 0$ for each $k$.
Define the quantities
$$r_k = \frac{-g_k}{\|d_k\|}, \qquad \delta_k = \frac{\beta_k^{MPRP} \|d_{k-1}\|}{\|d_k\|}. \qquad (3.7)$$
By (1.3), one has
$$u_k = \frac{d_k}{\|d_k\|} = \frac{-g_k + \beta_k^{MPRP} d_{k-1}}{\|d_k\|} = r_k + \delta_k u_{k-1}. \qquad (3.8)$$
Since the $u_k$ are unit vectors, we get
$$\|r_k\| = \|u_k - \delta_k u_{k-1}\| = \|\delta_k u_k - u_{k-1}\|. \qquad (3.9)$$
From $\delta_k \ge 0$ and the above equation, one has
$$\|u_k - u_{k-1}\| \le (1 + \delta_k) \|u_k - u_{k-1}\| = \|(1 + \delta_k) u_k - (1 + \delta_k) u_{k-1}\| \le \|u_k - \delta_k u_{k-1}\| + \|\delta_k u_k - u_{k-1}\| = 2 \|r_k\|. \qquad (3.10)$$
By (2.1), (3.4), and (3.6), one has
$$1 \ge \frac{|g_k^T g_{k-1}|}{\|g_k\|^2} \ge m. \qquad (3.11)$$
From (3.3), (2.6), (3.4), and (3.11), one has
$$m^2 \sum_{k \ge 1,\, d_k \neq 0} \|r_k\|^2 \le \sum_{k \ge 1,\, d_k \neq 0} \|r_k\|^2 \cdot \frac{|g_k^T g_{k-1}|^2}{\|g_k\|^4} = \sum_{k \ge 1,\, d_k \neq 0} \frac{|g_k^T g_{k-1}|^2}{\|g_k\|^2 \|d_k\|^2} \le \sum_{k \ge 1,\, d_k \neq 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty, \qquad (3.12)$$
so
$$\sum_{k \ge 1,\, d_k \neq 0} \|r_k\|^2 < +\infty. \qquad (3.13)$$
By (3.10) and the above inequality, one has
$$\sum_{k \ge 2} \|u_k - u_{k-1}\|^2 < +\infty. \qquad (3.14)$$

Lemma 3.3. Suppose that Assumption H holds. If (3.4) holds, then $\beta_k^{MPRP}$ has Property (*); that is, (1) there exists a constant $b > 1$ such that $|\beta_k^{MPRP}| \le b$; (2) there exists a constant $\lambda > 0$ such that $\|x_k - x_{k-1}\| \le \lambda$ implies $|\beta_k^{MPRP}| \le 1/(2b)$.

Proof. From Assumption H (ii), we know that (3.2) holds. By (2.1), (3.2), and (3.4), one has
$$|\beta_k^{MPRP}| \le \frac{(\|g_k\| + \|g_{k-1}\|) \|g_k\|}{\|g_{k-1}\|^2} \le \frac{2 \gamma^2}{r^2} = b. \qquad (3.15)$$
Define $\lambda = r^2 / (2 L \gamma b)$. If $\|x_k - x_{k-1}\| \le \lambda$, then from (2.1), (3.1), (3.2), and (3.4), one has
$$|\beta_k^{MPRP}| \le \frac{\|g_k\|^2 - g_k^T g_{k-1}}{\|g_{k-1}\|^2} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2} \le \frac{\|g_k\| \, \|g_k - g_{k-1}\|}{\|g_{k-1}\|^2} \le \frac{\gamma L \lambda}{r^2} = \frac{1}{2b}. \qquad (3.16)$$

Lemma 3.4 (see [19]). Suppose that Assumption H holds. Let $\{x_k\}$ and $\{d_k\}$ be generated by (1.2)-(1.3), in which $\alpha_k$ satisfies the Wolfe line search and (2.6). If $\beta_k \ge 0$ has Property (*) and (3.4) holds, then there exists $\lambda > 0$ such that, for any $\Delta \in Z^+$ and $k_0 \in Z^+$, there is an index $k \ge k_0$ with
$$|\mathcal{R}_{k,\Delta}^{\lambda}| > \frac{\Delta}{2}, \qquad (3.17)$$
where $\mathcal{R}_{k,\Delta}^{\lambda} \triangleq \{ i \in Z^+ : k \le i \le k + \Delta - 1,\ \|x_i - x_{i-1}\| \ge \lambda \}$ and $|\mathcal{R}_{k,\Delta}^{\lambda}|$ denotes the number of elements of $\mathcal{R}_{k,\Delta}^{\lambda}$.

Theorem 3.5. Suppose that Assumption H holds. Let $\{x_k\}$ and $\{d_k\}$ be generated by (1.2)-(1.3), in which $\alpha_k$ satisfies the Wolfe line search and (2.6) and $\beta_k = \beta_k^{MPRP}$. Then one has
$$\liminf_{k \to +\infty} \|g_k\| = 0. \qquad (3.18)$$

Proof. To obtain this result, we proceed by contradiction. Suppose that (3.18) does not hold; then there exists $r > 0$ such that
$$\|g_k\| \ge r, \qquad \text{for } k \ge 1, \qquad (3.19)$$
so Lemmas 3.2 and 3.4 apply.
As before, let $u_k = d_k / \|d_k\|$. Then, for all $l, k \in Z^+$ with $l \ge k$, one has
$$x_l - x_{k-1} = \sum_{i=k}^{l} \|x_i - x_{i-1}\| \, u_{i-1} = \sum_{i=k}^{l} \|s_{i-1}\| \, u_{k-1} + \sum_{i=k}^{l} \|s_{i-1}\| \, (u_{i-1} - u_{k-1}), \qquad (3.20)$$
where $s_{i-1} = x_i - x_{i-1}$; that is,
$$\sum_{i=k}^{l} \|s_{i-1}\| \, u_{k-1} = x_l - x_{k-1} - \sum_{i=k}^{l} \|s_{i-1}\| \, (u_{i-1} - u_{k-1}). \qquad (3.21)$$
From Assumption H, we know that there exists a constant $\xi > 0$ such that
$$\|x\| \le \xi, \qquad \text{for } x \in V. \qquad (3.22)$$
From (3.21) and the above inequality, one has
$$\sum_{i=k}^{l} \|s_{i-1}\| \le 2\xi + \sum_{i=k}^{l} \|s_{i-1}\| \, \|u_{i-1} - u_{k-1}\|. \qquad (3.23)$$
Let $\Delta$ be a positive integer with $\Delta \in [8\xi/\lambda, 8\xi/\lambda + 1)$, where $\lambda$ is defined in Lemma 3.4. From Lemma 3.2, we know that there exists $k_0$ such that
$$\sum_{i \ge k_0} \|u_{i+1} - u_i\|^2 \le \frac{1}{4\Delta}. \qquad (3.24)$$
From the Cauchy-Schwarz inequality and (3.24), for all $i \in [k, k + \Delta - 1]$, one has
$$\|u_{i-1} - u_{k-1}\| \le \sum_{j=k}^{i-1} \|u_j - u_{j-1}\| \le (i - k)^{1/2} \left( \sum_{j=k}^{i-1} \|u_j - u_{j-1}\|^2 \right)^{1/2} \le \Delta^{1/2} \left( \frac{1}{4\Delta} \right)^{1/2} = \frac{1}{2}. \qquad (3.25)$$
By Lemma 3.4, we know that there exists $k \ge k_0$ such that
$$|\mathcal{R}_{k,\Delta}^{\lambda}| > \frac{\Delta}{2}. \qquad (3.26)$$
It follows from (3.23), (3.25), and (3.26) that
$$\frac{\lambda \Delta}{4} < \frac{\lambda}{2} |\mathcal{R}_{k,\Delta}^{\lambda}| \le \frac{1}{2} \sum_{i=k}^{k+\Delta-1} \|s_{i-1}\| \le 2\xi. \qquad (3.27)$$
From (3.27), one has $\Delta < 8\xi/\lambda$, which contradicts the definition of $\Delta$. Hence,
$$\liminf_{k \to +\infty} \|g_k\| = 0, \qquad (3.28)$$
which completes the proof.

4. Numerical Results

In this section, we compare the modified PRP conjugate gradient method (denoted by MPRP) with the VPRP, CG-DESCENT, and DL+ methods under the strong Wolfe line search on the test problems from [20] with the given initial points and dimensions. The parameters are chosen as follows: $\delta = 0.01$, $\sigma = 0.1$, $\nu = 1.25$, $\eta = 0.01$, and $t = 0.1$. The program stops if $\|g_k\| \le 10^{-6}$ is satisfied or if the number of iterations exceeds ten thousand. All codes were written in Matlab 7.0 and run on a PC with a 2.0 GHz CPU, 512 MB of memory, and the Windows XP operating system.

The numerical results of our tests for the MPRP, VPRP, CG-DESCENT, and DL+ methods are reported in Tables 1, 2, 3, and 4, respectively. In the tables, the column "Problem" gives the problem's name in [20], "Dim" denotes the dimension of the test problem, and "CPU," "NI," "NF," and "NG" denote the CPU time in seconds, the number of iterations, the number of function evaluations, and the number of gradient evaluations, respectively. If the iteration limit was exceeded, the run was stopped; this is indicated by NaN.


Table 1: Numerical results of the MPRP method.

Problem  Dim  NI  NF  NG  CPU (s)

ROSE224109900.3651
FROTH21180610.0594
BADSCP2262272100.2000
BADSCB21189790.1085
BEALE22175590.1449
HELIX32576610.1754
BRAD32073610.1380
GAUSS33860.0164
MEYER31110.0063
GULF31220.0173
BOX31110.0574
SING4672632280.5000
WOOD4331501170.2421
KOWOSB4572221950.4000
BD426127960.1995
OSB151110.0157
BIGGS61214493961.0000
OSB2113419008111.3000
JENSAM61249320.0900
71356350.1872
81153300.1678
91265380.1160
1026133940.2604
11NaNNaNNaNNaN
VARDIM3440260.0135
5657380.0296
6565430.0270
8772470.0327
9778500.0647
10781520.0646
12790580.0647
15892600.0948
WATSON5592001670.2000
6387128111341.4000
71768583451916.0000
83934133731192014.0000
104319151021345117.0000
121892676260079.0000
151527555249337.0000
203001113081010719.0000
PEN251114393930.4000
101858457521.5000
151547746790.5000
201789898890.6000
301236105340.4000
401477006170.5000
501527446511.2000
601638137200.7000
PEN15301511250.2742
10884153570.9000
20321551240.3349
30733502900.7000
50723462850.3000
100291891470.2458
200281981520.4759
300272011501.0464
TRIG104192820.3817
20561361270.5634
50491061030.1949
100611371270.3857
200561161142.0205
3005210610111.6394
4005711611444.9734
5005310910889.8125
ROSEX100261231030.2323
200261231030.2583
300261231030.3078
400261231030.4697
500261231030.6781
1000261231032.4474
1500261231035.3979
2000261231039.9364
SINGX100783202830.8000
200793352930.8000
300733082690.8000
400893673241.6000
500913743302.2000
1000933853428.0000
15008234730615.8000
20008034129928.4000
BV2001813432640639.0000
300636150114185.4000
4002265164872.7000
5001884203983.2000
600861901841.9000
10002140370.9963
15001120191.0900
20002650.5456
IE20061370.3063
30061370.6698
40061371.1916
50061371.8511
60061372.6615
100061377.3635
1500613716.6397
2000613729.4927
TRID2003581740.3327
3003683750.3587
4003783750.3731
5003578730.4935
6003680760.6862
10003579751.7180
15003684794.0501
20003785797.5866


Table 2: Numerical results of the VPRP method.

Problem  Dim  NI  NF  NG  CPU (s)

ROSE224111850.1585
FROTH21278590.0698
BADSCP2993943360.4000
BADSCB21330190.0448
BEALE21248350.0402
HELIX3742031750.5000
BRAD330100800.2027
GAUSS33740.0085
MEYER31110.0058
GULF31220.0052
BOX31110.0561
SING41013412890.7000
WOOD41744824170.6000
KOWOSB4712342030.3000
BD4421611250.2451
OSB151110.0063
BIGGS61133753300.8000
OSB2112646676031.5000
JENSAM6933170.0696
71139170.1137
81042190.0883
91790570.1850
1017124840.1435
11676460.1111
VARDIM3440260.0290
5657380.0203
6565430.0273
8772470.036
9778500.0715
10781520.0458
12790580.0672
15892600.0622
WATSON51935354731.4000
634210028902.2000
71451415736785.0000
86720195301730020.0000
10NaNNaNNaNNaN
12350710432923413.0000
155271158171400624.0000
20NaNNaNNaNNaN
PEN251294994341.0000
10903793280.3000
15434146712982.2000
20941295926123.0000
30771253121933.0000
402489788601.6000
50952278825113.0000
60927282224354.0000
PEN15321511240.3752
10903833240.8000
20281601210.1119
30763342790.3000
50643442800.5000
100231601220.2212
200211741280.4081
300281921431.0207
TRIG103682710.4048
20561251140.6413
504593850.3426
100581201130.4573
200641351282.0248
300521029912.4258
4006013212552.5691
50059127116101.6207
ROSEX10024111850.2798
20024111850.2808
30024111850.3136
40024111850.4271
50024111850.6181
100024111852.1821
150024111854.7644
200024111858.8878
SINGX1001695624830.6000
2001996765761.4000
300365118110313.7000
400627202517569.0000
5001294313672.5000
100022978867516.9000
150010032928016.0000
200012842836336.0000
BV200NaNNaNNaNNaN
3007278129901298955.0000
40038376707670642.0000
50018423236323526.0000
6008981562156117.6000
10001332322316.2000
15001936352.0484
20002650.5156
IE20071580.3596
30071580.7986
40071581.4202
50071582.2147
60071583.1811
100071588.8316
1500715819.7838
2000715834.7553
TRID2003375710.3758
3003579750.3654
4003578740.3912
5003578740.5188
6003782780.7388
10003476721.6832
15003786814.1950
20003786807.5255


Table 3: Numerical results of the CG-DESCENT method.

Problem  Dim  NI  NF  NG  CPU (s)

ROSE2361321070.2371
FROTH21264480.0462
BADSCP2402131890.2000
BADSCB216101880.1212
BEALE21145330.0405
HELIX3661791520.4397
BRAD3471431220.2292
GAUSS331080.0093
MEYER31110.0085
GULF31220.0068
BOX31110.0600
SING4621981640.3000
WOOD41032982470.5000
KOWOSB4772221920.4000
BD4532041620.3000
OSB151110.0084
BIGGS61283953410.7000
OSB2113799158271.2000
JENSAM6NaNNaNNaNNaN
71251320.1114
81150260.0844
9NaNNaNNaNNaN
10559340.0421
11211481050.1749
VARDIM3440260.0246
5657380.0174
6565430.0266
8772470.0226
9778500.0323
10781520.0495
12790580.0610
15892600.0583
WATSON51353913360.5000
6421118610431.4000
71822527846556.0000
82607758967169.0000
10NaNNaNNaNNaN
12337010111893012.0000
155749174421536827.0000
205902185241628236.0000
PEN251284854211.1000
101475714860.4000
15663226219962.0000
20734245221813.0000
30810243822643.0000
401488450239605.0000
50744234220562.0000
60755245721883.0000
PEN15391741410.3298
10944043450.8000
20351621270.1220
30763472860.3000
50813753130.4000
100311841410.2434
200251711260.4106
300261861391.0160
TRIG103374650.2781
20601451320.5098
504395900.3620
100531161080.4474
200591231181.7555
3005110910512.6411
4005812111445.0506
5005111010589.7073
ROSEX10036124990.1230
200341251020.1539
300351331070.3939
400311211000.5789
500311291060.8736
1000371421163.5602
1500341321067.4845
20003212810312.9238
SINGX100772271870.7000
200541721410.2376
300993012481.0000
400792452071.3000
500692151801.6000
10001013222718.6000
15006621017412.4000
20006921017323.1000
BV200NaNNaNNaNNaN
300NaNNaNNaNNaN
40045098044804363.0000
50016352926292534.0000
6009251605160424.5000
100024741841716.9000
15001838373.0564
20002650.6883
IE20071580.3546
30071580.7927
40071581.4039
50071582.2022
60071583.1297
100071588.6892
1500715819.5607
2000715834.7423
TRID2003170580.2691
3003373650.2857
4003272610.4264
5003476710.6853
6003578740.9484
10003681772.6056
15003684795.8397
20003580739.9491


Table 4: Numerical results of the DL+ method.

Problem  Dim  NI  NF  NG  CPU (s)

ROSE225110870.1192
FROTH2965500.0283
BADSCP2191461330.1003
BADSCB21156470.0434
BEALE21150390.0355
HELIX3501441220.2616
BARD32076640.0824
GAUSS32530.0079
MEYER31110.0087
GULF31220.0062
BOX31110.0601
SING4912952500.5000
WOOD41484023490.8000
KOWOSB4762362040.4000
BD4672081770.3000
OSB151110.0083
BIGGS6592111890.3000
OSB2112796996141.4000
JENSAM6935190.0570
7935170.1276
8NaNNaNNaNNaN
91591570.1148
1017136960.2084
11580500.0882
VARDIM3440260.0153
5657380.0192
6565430.0184
8772470.0219
9778500.0438
10781520.0242
12790580.0442
15892600.0535
WASTON5622081730.2000
634610559271.8000
71247344330854.0000
82034610454087.0000
105973176011569921.0000
1232009291825912.0000
151865540247908.0000
206420183201636030.0000
PEN25833292830.7000
101285194491.0000
1527710409220.9000
2029011139810.9000
30767223120403.0000
40675192917063.0000
50617222319622.0000
60NaNNaNNaNNaN
PEN15331651390.2615
10783532980.7000
2024119940.0842
30733673050.3000
50663562930.6000
100221541170.1957
200271781360.4346
300291951471.0408
TRIG103482710.2910
20591401290.5161
504694880.3865
100551201110.4625
200561161111.6476
3005211010512.3062
4005611511344.8831
5005412111697.7043
ROSEX10025110870.0859
20025110870.2019
30025110870.2697
40025110870.4169
50025110870.5990
100025110872.2166
150025110874.8426
200025110879.1263
SINGX1001133843291.0000
2001173973410.5000
3002157356342.1000
4001133843291.6000
5001103673132.2000
10001374523889.2000
150015249442022.1000
200011036731330.2000
BV200NaNNaNNaNNaN
300NaNNaNNaNNaN
40056248458845749.0000
50033144854485338.0000
60016182291229025.0000
10002583123118.7000
15001830291.7251
20002650.5129
IE20061370.3075
30061370.6731
40061371.1939
50061371.8608
60061372.6785
100061377.3685
1500613716.5794
2000613729.5854
TRID2003375710.2606
3003579750.2866
4003680760.3653
5003578730.4857
6003782780.6950
10003476721.6550
15003786814.1380
20003786807.4702

In this paper, we adopt the performance profiles of Dolan and Moré [21] to compare the MPRP method with the VPRP, CG-DESCENT, and DL+ methods in terms of CPU time, the number of iterations, the number of function evaluations, and the number of gradient evaluations, respectively (see Figures 1, 2, 3, 4). In the figures, the horizontal axis is $\tau$ and the vertical axis is
$$P\bigl( \log_2 r_{p,s} \le \tau : 1 \le s \le n_s \bigr) = \frac{1}{n_p} \, \mathrm{size}\{ p \in P : \log_2 r_{p,s} \le \tau \}, \qquad (4.1)$$
where $n_p$ is the number of test problems, $n_s$ is the number of solvers, and $r_{p,s}$ is the ratio of the performance measure of solver $s$ on problem $p$ to the best value attained by any solver on that problem.

Figures 1–4 show the performance of the four methods relative to CPU time, the number of iterations, the number of function evaluations, and the number of gradient evaluations, respectively. For example, the performance profile with respect to CPU time means that, for each method, we plot the fraction $P$ of problems for which the method is within a factor $\tau$ of the best time. The left side of the figure gives the percentage of the test problems for which a method is the fastest; the right side gives the percentage of the test problems that are successfully solved by each of the methods. The top curve corresponds to the method that solved the most problems in a time within a factor $\tau$ of the best time.
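A minimal sketch of how such profiles can be produced is given below; it assumes the timing data are arranged as a problems-by-solvers array with failures marked by infinity (corresponding to the NaN entries in Tables 1–4), and it is not the authors' plotting script.

```python
# Dolan-More performance profile sketch in the spirit of (4.1).
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(times, labels, tau_max=6.0):
    n_p, n_s = times.shape
    best = np.min(times, axis=1, keepdims=True)            # best measure per problem
    ratios = np.log2(times / best)                          # log2 r_{p,s} as in (4.1)
    taus = np.linspace(0.0, tau_max, 200)
    for s in range(n_s):
        frac = [(ratios[:, s] <= t).mean() for t in taus]   # fraction within factor 2^tau
        plt.step(taus, frac, where="post", label=labels[s])
    plt.xlabel(r"$\tau$")
    plt.ylabel("fraction of problems")
    plt.legend()
    plt.show()

# e.g. performance_profile(cpu_times, ["MPRP", "VPRP", "CG-DESCENT", "DL+"])
```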

Figure 1 shows that the MPRP method outperforms the VPRP, CG-DESCENT, and DL+ methods on the given test problems in terms of CPU time. Figures 2–4 show that the MPRP method also has the best performance with respect to the number of iterations and the numbers of function and gradient evaluations, since it corresponds to the top curve. Thus, the MPRP method is computationally efficient.

5. Conclusions

We have proposed a modified PRP method on the basis of the PRP method, which generates sufficient descent directions independently of the line search. Moreover, we proved that the proposed method converges globally for general nonconvex functions. The performance profiles show that the proposed method is also very efficient.

Acknowledgments

The authors wish to express their heartfelt thanks to the referees and Professor Piermarco Cannarsa for their detailed and helpful suggestions for revising the paper. This work was supported by the Natural Science Foundation of Chongqing Education Committee (KJ091104) and Chongqing Three Gorges University (09ZZ-060).

References

1. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, no. 16, pp. 35–43, 1969.
2. B. T. Polyak, "The conjugate gradient method in extreme problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, pp. 94–112, 1969.
3. M. Al-Baali, "Descent property and global convergence of the Fletcher-Reeves method with inexact line search," IMA Journal of Numerical Analysis, vol. 5, no. 1, pp. 121–124, 1985.
4. W. W. Hager and H. Zhang, "A new conjugate gradient method with guaranteed descent and an efficient line search," SIAM Journal on Optimization, vol. 16, no. 1, pp. 170–192, 2005.
5. Y. H. Dai and L. Z. Liao, "New conjugacy conditions and related nonlinear conjugate gradient methods," Applied Mathematics and Optimization, vol. 43, no. 1, pp. 87–101, 2001.
6. W. W. Hager and H. Zhang, "A survey of nonlinear conjugate gradient methods," Pacific Journal of Optimization, vol. 2, no. 1, pp. 35–58, 2006.
7. M. J. D. Powell, "Nonconvex minimization calculations and the conjugate gradient method," in Numerical Analysis (Dundee, 1983), vol. 1066 of Lecture Notes in Mathematics, pp. 122–141, Springer, Berlin, Germany, 1984.
8. M. J. D. Powell, "Convergence properties of algorithms for nonlinear optimization," SIAM Review, vol. 28, no. 4, pp. 487–500, 1986.
9. J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization," SIAM Journal on Optimization, vol. 2, no. 1, pp. 21–42, 1992.
10. L. Zhang, W. Zhou, and D. Li, "A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence," IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
11. Z. Wei, G. Y. Li, and L. Qi, "Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems," Mathematics of Computation, vol. 77, no. 264, pp. 2173–2193, 2008.
12. L. Grippo and S. Lucidi, "A globally convergent version of the Polak-Ribière conjugate gradient method," Mathematical Programming, vol. 78, no. 3, pp. 375–391, 1997.
13. G. Yu, Y. Zhao, and Z. Wei, "A descent nonlinear conjugate gradient method for large-scale unconstrained optimization," Applied Mathematics and Computation, vol. 187, no. 2, pp. 636–643, 2007.
14. J. Zhang, Y. Xiao, and Z. Wei, "Nonlinear conjugate gradient methods with sufficient descent condition for large-scale unconstrained optimization," Mathematical Problems in Engineering, vol. 2009, Article ID 243290, 16 pages, 2009.
15. Y. Xiao, Q. Wang, and D. Wang, "Notes on the Dai-Yuan-Yuan modified spectral gradient method," Journal of Computational and Applied Mathematics, vol. 234, no. 10, pp. 2986–2992, 2010.
16. Z. Wan, Z. Yang, and Y. Wang, "New spectral PRP conjugate gradient method for unconstrained optimization," Applied Mathematics Letters, vol. 24, no. 1, pp. 16–22, 2011.
17. L. Zhang, W. Zhou, and D. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 4, pp. 561–572, 2006.
18. G. Zoutendijk, "Nonlinear programming, computational methods," in Integer and Nonlinear Programming, J. Abadie, Ed., pp. 37–86, North-Holland, Amsterdam, The Netherlands, 1970.
19. Y. H. Dai and Y. Yuan, Nonlinear Conjugate Gradient Method, vol. 10, Shanghai Scientific & Technical, Shanghai, China, 2000.
20. J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
21. E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.

Copyright © 2011 Liu Jin-kui et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
