Journal of Applied Mathematics

Special Issue: Applications of Fixed Point and Approximate Algorithms

Research Article | Open Access

Volume 2012 | Article ID 641276 | 13 pages | https://doi.org/10.1155/2012/641276

Global Convergence of a Modified Spectral Conjugate Gradient Method

Academic Editor: Giuseppe Marino
Received: 20 Sep 2011
Revised: 25 Oct 2011
Accepted: 25 Oct 2011
Published: 12 Dec 2011

Abstract

A modified spectral PRP conjugate gradient method is presented for solving unconstrained optimization problems. The constructed search direction is proved to be a sufficiently descent direction of the objective function. With an Armijo-type line search to determine the step length, a new spectral PRP conjugate gradient algorithm is developed. Under some mild conditions, global convergence is established. Numerical results demonstrate that this algorithm is promising, particularly when compared with existing similar methods.

1. Introduction

It has been shown that conjugate gradient methods are efficient and powerful for solving large-scale unconstrained minimization problems, owing to their low memory requirements and simple computation. For example, many variants of conjugate gradient algorithms are developed in [1–17]. However, as pointed out in [2], there remain many theoretical and computational challenges in applying these methods to unconstrained optimization problems. In fact, 14 open problems on conjugate gradient methods are presented in [2]. These problems concern the selection of the initial direction, the computation of the step length and of the conjugacy parameter based on the values of the objective function, the influence of the accuracy of the line search procedure on the efficiency of conjugate gradient algorithms, and so forth.

The general unconstrained optimization problem is as follows:
$$\min f(x), \quad x \in R^n, \tag{1.1}$$
where $f : R^n \to R$ is continuously differentiable and its gradient is available. Let $g(x)$ denote the gradient of $f$ at $x$, and let $x_0$ be an arbitrary initial approximate solution of (1.1). When a standard conjugate gradient method is used to solve (1.1), a sequence of iterates is generated by
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, \ldots, \tag{1.2}$$
where $\alpha_k$ is the step length chosen by some line search method and $d_k$ is the search direction defined by
$$d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -g_k + \beta_k d_{k-1} & \text{if } k > 0, \end{cases} \tag{1.3}$$
where $\beta_k$ is called the conjugacy parameter and $g_k$ denotes $g(x_k)$. For a strictly convex quadratic program, $\beta_k$ can be chosen such that $d_k$ and $d_{k-1}$ are conjugate with respect to the Hessian matrix of the objective function. If $\beta_k$ is taken as
$$\beta_k = \beta_k^{\mathrm{PRP}} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}, \tag{1.4}$$
where $\|\cdot\|$ stands for the Euclidean norm of a vector, then (1.2)–(1.4) define the Polak-Ribière-Polyak (PRP) conjugate gradient method (see [8, 18]).
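To make the iteration concrete, the following Python sketch (not part of the original paper, whose experiments were coded in MATLAB) illustrates the standard PRP scheme (1.2)–(1.4). The function name, the generic `line_search` argument, and the default tolerances are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def prp_conjugate_gradient(f, grad, x0, line_search, tol=1e-6, max_iter=1000):
    """Sketch of the standard PRP iteration (1.2)-(1.4).

    `line_search(f, grad, x, d)` is a user-supplied routine returning a
    step length alpha; the names and defaults here are illustrative only.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # initial direction (1.3), k = 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d)  # step length in (1.2)
        x = x + alpha * d                   # iterate update (1.2)
        g_new = grad(x)
        beta = g_new @ (g_new - g) / (g @ g)   # PRP parameter (1.4)
        d = -g_new + beta * d               # search direction (1.3), k > 0
        g = g_new
    return x
```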

It is well known that the PRP method has the finite termination property when the objective function is a strongly convex quadratic and the exact line search is used. Furthermore, in [7], global convergence has also been proved for a twice continuously differentiable strongly convex objective function. However, it seems to be nontrivial to establish global convergence under an inexact line search, especially for a general nonconvex minimization problem. Recently, many modified PRP conjugate gradient methods have been studied (see, e.g., [10–13, 17]). In these methods, the search direction is constructed to possess the sufficient descent property, and global convergence is established with different line search strategies. In [17], the search direction $d_k$ is given by
$$d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -g_k + \beta_k^{\mathrm{PRP}} d_{k-1} - \theta_k y_{k-1} & \text{if } k > 0, \end{cases} \tag{1.5}$$
where
$$\theta_k = \frac{g_k^T d_{k-1}}{\|g_{k-1}\|^2}, \quad y_{k-1} = g_k - g_{k-1}, \quad s_{k-1} = x_k - x_{k-1}. \tag{1.6}$$
Following the idea in [17], a new spectral PRP conjugate gradient algorithm is developed in this paper. On the one hand, we present a new spectral conjugate gradient direction, which also possesses the sufficient descent property. On the other hand, a modified Armijo-type line search strategy is incorporated into the developed algorithm. Numerical experiments are used to compare the algorithm with several similar ones.

The rest of this paper is organized as follows. In the next section, the new spectral PRP conjugate gradient method is proposed. Section 3 is devoted to proving its global convergence. In Section 4, numerical experiments are reported to test the efficiency of the method, especially in comparison with other existing methods. Some concluding remarks are given in the last section.

2. New Spectral PRP Conjugate Gradient Algorithm

In this section, we first study how to determine a descent direction of the objective function.

Let $x_k$ be the current iterate, and let $d_k$ be defined by
$$d_k = \begin{cases} -g_k & \text{if } k = 0, \\ -\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1} & \text{if } k > 0, \end{cases} \tag{2.1}$$
where $\beta_k^{\mathrm{PRP}}$ is specified by (1.4) and
$$\theta_k = \frac{d_{k-1}^T y_{k-1}}{\|g_{k-1}\|^2} - \frac{(d_{k-1}^T g_k)(g_k^T g_{k-1})}{\|g_k\|^2 \|g_{k-1}\|^2}. \tag{2.2}$$

It is noted that the direction $d_k$ given by (2.1) and (2.2) is different from those in [3, 16, 17], either for the choice of $\theta_k$ or for that of $\beta_k$.
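As an illustration only, a direct transcription of (2.1)-(2.2) into Python might look as follows; the function name and argument conventions are assumptions, not part of the paper.

```python
import numpy as np

def spectral_prp_direction(g, g_prev=None, d_prev=None):
    """Compute the search direction d_k of (2.1)-(2.2).

    g is the current gradient g_k; g_prev and d_prev are g_{k-1} and d_{k-1}
    (pass None at the first iteration, k = 0).
    """
    if g_prev is None or d_prev is None:
        return -g                                         # d_0 = -g_0
    y = g - g_prev                                        # y_{k-1} = g_k - g_{k-1}
    gp_sq = g_prev @ g_prev                               # ||g_{k-1}||^2
    g_sq = g @ g                                          # ||g_k||^2
    beta_prp = g @ y / gp_sq                              # beta_k^PRP, see (1.4)
    theta = d_prev @ y / gp_sq \
        - (d_prev @ g) * (g @ g_prev) / (g_sq * gp_sq)    # theta_k, see (2.2)
    return -theta * g + beta_prp * d_prev                 # (2.1), k > 0
```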

We first prove that 𝑑𝑘 is a sufficiently descent direction.

Lemma 2.1. Suppose that $d_k$ is given by (2.1) and (2.2). Then,
$$g_k^T d_k = -\|g_k\|^2 \tag{2.3}$$
holds for any $k \ge 0$.

Proof. Firstly, for $k = 0$, it is easy to see that (2.3) is true since $d_0 = -g_0$.
Secondly, assume that
$$d_{k-1}^T g_{k-1} = -\|g_{k-1}\|^2 \tag{2.4}$$
holds for $k-1$ with $k \ge 1$. Then, from (1.4), (2.1), and (2.2), it follows that
$$\begin{aligned}
g_k^T d_k &= -\theta_k \|g_k\|^2 + \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2} d_{k-1}^T g_k \\
&= -\frac{d_{k-1}^T (g_k - g_{k-1})}{\|g_{k-1}\|^2} g_k^T g_k + \frac{(d_{k-1}^T g_k)(g_k^T g_{k-1})}{\|g_k\|^2 \|g_{k-1}\|^2} g_k^T g_k + \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2} d_{k-1}^T g_k \\
&= \frac{d_{k-1}^T g_{k-1}}{\|g_{k-1}\|^2} g_k^T g_k = -\frac{\|g_{k-1}\|^2}{\|g_{k-1}\|^2} \|g_k\|^2 = -\|g_k\|^2.
\end{aligned} \tag{2.5}$$
Thus, (2.3) is also true with $k-1$ replaced by $k$. By mathematical induction, we obtain the desired result.
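As an informal numerical check of Lemma 2.1 (not part of the paper), one can generate random vectors satisfying the induction hypothesis (2.4) and verify that the direction built from (1.4) and (2.1)-(2.2) satisfies $g_k^T d_k = -\|g_k\|^2$; the short Python sketch below does exactly that.

```python
import numpy as np

# Pick random data satisfying the induction hypothesis (2.4)
# (d_{k-1} = -g_{k-1} gives d_{k-1}^T g_{k-1} = -||g_{k-1}||^2).
rng = np.random.default_rng(0)
g_prev = rng.standard_normal(5)
d_prev = -g_prev
g = rng.standard_normal(5)

# Build d_k from (1.4), (2.1), and (2.2).
y = g - g_prev
beta_prp = g @ y / (g_prev @ g_prev)
theta = d_prev @ y / (g_prev @ g_prev) \
    - (d_prev @ g) * (g @ g_prev) / ((g @ g) * (g_prev @ g_prev))
d = -theta * g + beta_prp * d_prev

# (2.3): g_k^T d_k should equal -||g_k||^2 up to rounding error.
print(np.isclose(g @ d, -(g @ g)))   # True
```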

From Lemma 2.1, it is known that $d_k$ is a descent direction of $f$ at $x_k$. Furthermore, if the exact line search is used, then $g_k^T d_{k-1} = 0$; hence
$$\theta_k = \frac{d_{k-1}^T y_{k-1}}{\|g_{k-1}\|^2} - \frac{(d_{k-1}^T g_k)(g_k^T g_{k-1})}{\|g_k\|^2 \|g_{k-1}\|^2} = -\frac{d_{k-1}^T g_{k-1}}{\|g_{k-1}\|^2} = 1. \tag{2.6}$$
In this case, the proposed spectral PRP conjugate gradient method reduces to the standard PRP method. However, the exact line search is often time-consuming and sometimes unnecessary. In the following, we develop a new algorithm in which the search direction $d_k$ is chosen by (2.1)-(2.2) and the step size is determined by an Armijo-type inexact line search.

Algorithm 2.2 (Modified Spectral PRP Conjugate Gradient Algorithm). We have the following steps.
Step 1. Given constants $\delta_1, \rho \in (0,1)$, $\delta_2 > 0$, and $\epsilon > 0$, choose an initial point $x_0 \in R^n$. Let $k := 0$.
Step 2. If $\|g_k\| \le \epsilon$, then the algorithm stops. Otherwise, compute $d_k$ by (2.1)-(2.2), and go to Step 3.
Step 3. Determine a step length $\alpha_k = \max\{\rho^j, j = 0, 1, 2, \ldots\}$ such that
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta_1 \alpha_k g_k^T d_k - \delta_2 \alpha_k^2 \|d_k\|^2. \tag{2.7}$$
Step 4. Set $x_{k+1} := x_k + \alpha_k d_k$ and $k := k+1$. Return to Step 2.
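The following Python sketch (not part of the paper; the paper's own experiments were coded in MATLAB, see Section 4) puts Steps 1–4 together with the Armijo-type condition (2.7). The function name, default parameter values, and the iteration cap are illustrative assumptions.

```python
import numpy as np

def modified_spectral_prp(f, grad, x0, delta1=0.1, delta2=1.0, rho=0.75,
                          eps=1e-6, max_iter=10000):
    """Sketch of Algorithm 2.2; names, defaults, and the iteration cap
    are illustrative assumptions (the paper's code was written in MATLAB)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    g_prev = d_prev = None
    for _ in range(max_iter):
        # Step 2: stopping test.
        if np.linalg.norm(g) <= eps:
            break
        # Step 2: search direction (2.1)-(2.2).
        if d_prev is None:
            d = -g
        else:
            y = g - g_prev
            gp_sq = g_prev @ g_prev
            beta = g @ y / gp_sq                                   # (1.4)
            theta = d_prev @ y / gp_sq \
                - (d_prev @ g) * (g @ g_prev) / ((g @ g) * gp_sq)  # (2.2)
            d = -theta * g + beta * d_prev                         # (2.1)
        # Step 3: backtracking Armijo-type line search (2.7), alpha = rho^j.
        alpha, fx, gtd, d_sq = 1.0, f(x), g @ d, d @ d
        while f(x + alpha * d) > fx + delta1 * alpha * gtd - delta2 * alpha**2 * d_sq:
            alpha *= rho
        # Step 4: update the iterate and return to Step 2.
        x = x + alpha * d
        g_prev, d_prev = g, d
        g = grad(x)
    return x
```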

Since $d_k$ is a descent direction of $f$ at $x_k$, we now prove that there must exist $j_0$ such that $\alpha_k = \rho^{j_0}$ satisfies inequality (2.7).

Proposition 2.3. Let $f : R^n \to R$ be a continuously differentiable function. Suppose that $d$ is a descent direction of $f$ at $x$. Then, there exists $j_0$ such that
$$f(x + \alpha d) \le f(x) + \delta_1 \alpha g^T d - \delta_2 \alpha^2 \|d\|^2, \tag{2.8}$$
where $\alpha = \rho^{j_0}$, $g$ is the gradient of $f$ at $x$, and $\delta_1, \rho \in (0,1)$ and $\delta_2 > 0$ are given constant scalars.

Proof. It suffices to prove that a step length $\alpha$ is obtained in finitely many steps. If this is not true, then for every sufficiently large positive integer $m$ we have
$$f(x + \rho^m d) - f(x) > \delta_1 \rho^m g^T d - \delta_2 \rho^{2m} \|d\|^2. \tag{2.9}$$
Thus, by the mean value theorem, there is a $\theta \in (0,1)$ such that
$$\rho^m g(x + \theta \rho^m d)^T d > \delta_1 \rho^m g^T d - \delta_2 \rho^{2m} \|d\|^2. \tag{2.10}$$
This reads
$$\left(g(x + \theta \rho^m d) - g\right)^T d > (\delta_1 - 1) g^T d - \delta_2 \rho^m \|d\|^2. \tag{2.11}$$
Letting $m \to \infty$, it is obtained that
$$(\delta_1 - 1) g^T d \le 0. \tag{2.12}$$
From $\delta_1 \in (0,1)$, it follows that $g^T d \ge 0$. This contradicts the assumption that $d$ is a descent direction.

Remark 2.4. From Proposition 2.3, it is known that Algorithm 2.2 is well defined. In addition, it is easy to see that the modified Armijo-type line search (2.7) yields a larger decrease of the objective function at each step than the standard Armijo rule.

3. Global Convergence

In this section, we study the global convergence of Algorithm 2.2. We first state the following mild assumptions, which will be used in the convergence proof.

Assumption 3.1. The level set $\Omega = \{x \in R^n \mid f(x) \le f(x_0)\}$ is bounded.

Assumption 3.2. In some neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N. \tag{3.1}$$

Since $\{f(x_k)\}$ is decreasing, it follows from Assumption 3.1 that the sequence $\{x_k\}$ generated by Algorithm 2.2 is contained in a bounded region. Hence, there exists a convergent subsequence of $\{x_k\}$. Without loss of generality, it can be supposed that $\{x_k\}$ is convergent. On the other hand, from Assumption 3.2, it follows that there is a constant $\gamma_1 > 0$ such that
$$\|g(x)\| \le \gamma_1, \quad \forall x \in \Omega. \tag{3.2}$$
Hence, the sequence $\{g_k\}$ is bounded.

In the following, we first prove that the step size $\alpha_k$ at each iteration is sufficiently large.

Lemma 3.3. Under Assumption 3.2, there exists a constant $m > 0$ such that the inequality
$$\alpha_k \ge m \frac{\|g_k\|^2}{\|d_k\|^2} \tag{3.3}$$
holds for all $k$ sufficiently large.

Proof. Firstly, from the line search rule (2.7), we know that $\alpha_k \le 1$.
If $\alpha_k = 1$, then $\|g_k\| \le \|d_k\|$. The reason is that $\|g_k\| > \|d_k\|$ would imply
$$\|g_k\|^2 > \|g_k\| \|d_k\| \ge -g_k^T d_k, \tag{3.4}$$
which contradicts (2.3). Therefore, taking $m = 1$, the inequality (3.3) holds.
If $0 < \alpha_k < 1$, then the line search rule (2.7) implies that $\rho^{-1} \alpha_k$ does not satisfy inequality (2.7). So we have
$$f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) > \delta_1 \rho^{-1} \alpha_k g_k^T d_k - \delta_2 \rho^{-2} \alpha_k^2 \|d_k\|^2. \tag{3.5}$$
Since
$$\begin{aligned}
f(x_k + \rho^{-1} \alpha_k d_k) - f(x_k) &= \rho^{-1} \alpha_k g(x_k + t_k \rho^{-1} \alpha_k d_k)^T d_k \\
&= \rho^{-1} \alpha_k g_k^T d_k + \rho^{-1} \alpha_k \left(g(x_k + t_k \rho^{-1} \alpha_k d_k) - g_k\right)^T d_k \\
&\le \rho^{-1} \alpha_k g_k^T d_k + L \rho^{-2} \alpha_k^2 \|d_k\|^2,
\end{aligned} \tag{3.6}$$
where $t_k \in (0,1)$ satisfies $x_k + t_k \rho^{-1} \alpha_k d_k \in N$ and the last inequality follows from the Lipschitz condition (3.1), it is obtained from (3.5) and (3.6) that
$$\delta_1 \rho^{-1} \alpha_k g_k^T d_k - \delta_2 \rho^{-2} \alpha_k^2 \|d_k\|^2 < \rho^{-1} \alpha_k g_k^T d_k + L \rho^{-2} \alpha_k^2 \|d_k\|^2. \tag{3.7}$$
This reads
$$(1 - \delta_1) \rho^{-1} \alpha_k g_k^T d_k + (L + \delta_2) \rho^{-2} \alpha_k^2 \|d_k\|^2 > 0, \tag{3.8}$$
that is,
$$(L + \delta_2) \rho^{-1} \alpha_k \|d_k\|^2 > (\delta_1 - 1) g_k^T d_k. \tag{3.9}$$
Therefore,
$$\alpha_k > \frac{(\delta_1 - 1) \rho \, g_k^T d_k}{(L + \delta_2) \|d_k\|^2}. \tag{3.10}$$
From Lemma 2.1, it follows that
$$\alpha_k > \frac{\rho (1 - \delta_1) \|g_k\|^2}{(L + \delta_2) \|d_k\|^2}. \tag{3.11}$$
Taking
$$m = \min\left\{1, \frac{\rho (1 - \delta_1)}{L + \delta_2}\right\}, \tag{3.12}$$
the desired inequality (3.3) holds.

From Lemmas 2.1 and 3.3 and Assumption 3.1, we can prove the following result.

Lemma 3.4. Under Assumptions 3.1 and 3.2, the following results hold:
$$\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty, \tag{3.13}$$
$$\lim_{k \to \infty} \alpha_k^2 \|d_k\|^2 = 0. \tag{3.14}$$

Proof. From the line search rule (2.7) and Assumption 3.1, there exists a constant $M$ such that
$$\sum_{k=0}^{n-1} \left(-\delta_1 \alpha_k g_k^T d_k + \delta_2 \alpha_k^2 \|d_k\|^2\right) \le \sum_{k=0}^{n-1} \left(f(x_k) - f(x_{k+1})\right) = f(x_0) - f(x_n) < 2M. \tag{3.15}$$
Then, from Lemmas 2.1 and 3.3, we have
$$\begin{aligned}
2M &\ge \sum_{k=0}^{n-1} \left(\delta_1 \alpha_k \|g_k\|^2 + \delta_2 \alpha_k^2 \|d_k\|^2\right) \\
&\ge \sum_{k=0}^{n-1} \left(\delta_1 m \frac{\|g_k\|^2}{\|d_k\|^2} \|g_k\|^2 + \delta_2 m^2 \frac{\|g_k\|^4}{\|d_k\|^4} \|d_k\|^2\right) = \sum_{k=0}^{n-1} (\delta_1 + \delta_2 m)\, m \frac{\|g_k\|^4}{\|d_k\|^2}.
\end{aligned} \tag{3.16}$$
Therefore, the first conclusion (3.13) is proved.
Since
$$2M \ge \sum_{k=0}^{n-1} \left(\delta_1 \alpha_k \|g_k\|^2 + \delta_2 \alpha_k^2 \|d_k\|^2\right) \ge \delta_2 \sum_{k=0}^{n-1} \alpha_k^2 \|d_k\|^2, \tag{3.17}$$
the series
$$\sum_{k=0}^{\infty} \alpha_k^2 \|d_k\|^2 \tag{3.18}$$
is convergent. Thus,
$$\lim_{k \to \infty} \alpha_k^2 \|d_k\|^2 = 0, \tag{3.19}$$
which is the second conclusion (3.14).

At the end of this section, we establish the global convergence theorem for Algorithm 2.2.

Theorem 3.5. Under Assumptions 3.1 and 3.2, it holds that
$$\liminf_{k \to \infty} \|g_k\| = 0. \tag{3.20}$$

Proof. Suppose, on the contrary, that there exists a constant $\epsilon > 0$ such that
$$\|g_k\| \ge \epsilon \tag{3.21}$$
for all $k$. Then, from (2.1), it follows that
$$\begin{aligned}
\|d_k\|^2 = d_k^T d_k &= \left(-\theta_k g_k^T + \beta_k^{\mathrm{PRP}} d_{k-1}^T\right)\left(-\theta_k g_k + \beta_k^{\mathrm{PRP}} d_{k-1}\right) \\
&= \theta_k^2 \|g_k\|^2 - 2 \theta_k \beta_k^{\mathrm{PRP}} d_{k-1}^T g_k + \left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 \\
&= \theta_k^2 \|g_k\|^2 - 2 \theta_k \left(d_k + \theta_k g_k\right)^T g_k + \left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 \\
&= \theta_k^2 \|g_k\|^2 - 2 \theta_k d_k^T g_k - 2 \theta_k^2 \|g_k\|^2 + \left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 \\
&= \left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 - 2 \theta_k d_k^T g_k - \theta_k^2 \|g_k\|^2.
\end{aligned} \tag{3.22}$$
Dividing both sides of this equality by $(g_k^T d_k)^2 = \|g_k\|^4$, then from (1.4), (2.3), (3.1), and (3.21), we obtain
$$\begin{aligned}
\frac{\|d_k\|^2}{\|g_k\|^4} &= \frac{\left(\beta_k^{\mathrm{PRP}}\right)^2 \|d_{k-1}\|^2 - 2 \theta_k d_k^T g_k - \theta_k^2 \|g_k\|^2}{\|g_k\|^4} \\
&= \frac{\left(g_k^T (g_k - g_{k-1})\right)^2}{\|g_{k-1}\|^4} \cdot \frac{\|d_{k-1}\|^2}{\|g_k\|^4} - \frac{(\theta_k - 1)^2}{\|g_k\|^2} + \frac{1}{\|g_k\|^2} \\
&\le \frac{\|g_k - g_{k-1}\|^2}{\|g_{k-1}\|^4} \cdot \frac{\|d_{k-1}\|^2}{\|g_k\|^2} - \frac{(\theta_k - 1)^2}{\|g_k\|^2} + \frac{1}{\|g_k\|^2} \\
&\le \frac{\|g_k - g_{k-1}\|^2}{\|g_k\|^2} \cdot \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2} \\
&\le \frac{L^2 \alpha_{k-1}^2 \|d_{k-1}\|^2}{\epsilon^2} \cdot \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2}.
\end{aligned} \tag{3.23}$$
From (3.14) in Lemma 3.4, it follows that
$$\lim_{k \to \infty} \alpha_{k-1}^2 \|d_{k-1}\|^2 = 0. \tag{3.24}$$
Thus, there exists a sufficiently large number $k_0$ such that for $k \ge k_0$,
$$0 \le \alpha_{k-1}^2 \|d_{k-1}\|^2 < \frac{\epsilon^2}{L^2}. \tag{3.25}$$
Therefore, for $k \ge k_0$,
$$\frac{\|d_k\|^2}{\|g_k\|^4} \le \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\|g_k\|^2} \le \cdots \le \frac{\|d_{k_0}\|^2}{\|g_{k_0}\|^4} + \sum_{i=k_0+1}^{k} \frac{1}{\|g_i\|^2} \le \frac{C_0}{\epsilon^2} + \frac{k - k_0}{\epsilon^2} = \frac{C_0 + k - k_0}{\epsilon^2}, \tag{3.26}$$
where $C_0 = \epsilon^2 \|d_{k_0}\|^2 / \|g_{k_0}\|^4$ is a nonnegative constant.
The last inequality implies
$$\sum_{k \ge 1} \frac{\|g_k\|^4}{\|d_k\|^2} \ge \sum_{k > k_0} \frac{\|g_k\|^4}{\|d_k\|^2} \ge \epsilon^2 \sum_{k > k_0} \frac{1}{C_0 + k - k_0} = \infty, \tag{3.27}$$
which contradicts the result of Lemma 3.4.
The global convergence theorem is established.

4. Numerical Experiments

In this section, we report the numerical performance of Algorithm 2.2. We test Algorithm 2.2 on 15 benchmark problems from [19] and compare its numerical performance with that of other similar methods, including the standard PRP conjugate gradient method in [6], the modified FR conjugate gradient method in [16], and the modified PRP conjugate gradient method in [17]. These algorithms differ from one another in either the updating formula or the line search rule.

All procedures are coded in MATLAB 7.0.1 and run on a PC with a 2.0 GHz CPU, 1 GB of RAM, and the Windows XP operating system.

The parameters are chosen as follows:
$$\epsilon = 10^{-6}, \quad \rho = 0.75, \quad \delta_1 = 0.1, \quad \delta_2 = 1. \tag{4.1}$$
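As an illustrative usage example (not part of the paper), the parameter values in (4.1) can be plugged into the hypothetical `modified_spectral_prp` sketch shown after Algorithm 2.2 and applied to the two-dimensional Rosenbrock function, the first test problem in Table 1; the starting point (-1.2, 1) is the standard one for this problem in [19].

```python
import numpy as np

# Assumes the modified_spectral_prp sketch shown after Algorithm 2.2
# is defined in the same session.
def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

x_star = modified_spectral_prp(rosenbrock, rosenbrock_grad,
                               x0=np.array([-1.2, 1.0]),
                               delta1=0.1, delta2=1.0, rho=0.75, eps=1e-6)
print(x_star)   # expected to be close to the minimizer (1, 1)
```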

In Tables 1 and 2, we use the following notation:
Dim: the dimension of the objective function;
GV: the gradient value of the objective function when the algorithm stops;
NI: the number of iterations;
NF: the number of function evaluations;
CT: the CPU run time;
mfr: the modified FR conjugate gradient method in [16];
prp: the standard PRP conjugate gradient method in [6];
msprp: the modified PRP conjugate gradient method in [17];
mprp: the new algorithm developed in this paper.


Table 1

Function                   Algorithm  Dim  GV           NI     NF      CT(s)

Rosenbrock                 mfr        2    8.8818e-007  328    7069    0.2970
                           prp        2    9.2415e-007  760    41189   1.4370
                           mprp       2    8.6092e-007  124    2816    0.0940
                           msprp      2    6.9643e-007  122    2597    0.1400

Freudenstein and Roth      mfr        2    5.5723e-007  236    5110    0.2190
                           prp        2    7.1422e-007  331    18798   0.6250
                           mprp       2    2.4666e-007  67     1904    0.0940
                           msprp      2    8.6967e-007  62     1437    0.0780

Brown badly                mfr        2    —            —      —       —
                           prp        2    —            —      —       —
                           mprp       2    7.9892e-007  105    10279   0.2030
                           msprp      2    7.6029e-007  70     7117    0.2660

Beale                      mfr        2    6.1730e-007  74     714     0.0780
                           prp        2    8.2455e-007  292    12568   0.4370
                           mprp       2    6.2257e-007  130    1539    0.0940
                           msprp      2    8.7861e-007  91     877     0.0470

Powell singular            mfr        4    9.9827e-007  4122   10578   0.6870
                           prp        4    —            —      —       —
                           mprp       4    9.6909e-007  13565  218964  5.2660
                           msprp      4    9.8512e-007  11893  169537  7.2500

Wood                       mfr        4    7.7937e-007  263    5787    0.2660
                           prp        4    9.9841e-007  1284   69501   2.3440
                           mprp       4    9.6484e-007  280    6432    0.1720
                           msprp      4    7.9229e-007  404    9643    0.4070

Extended Powell singular   mfr        4    9.9827e-007  4122   10578   0.6800
                           prp        4    —            —      —       —
                           mprp       4    9.6909e-007  13565  218964  5.5310
                           msprp      4    9.8512e-007  11893  169537  7.4070

Broyden tridiagonal        mfr        4    4.8451e-007  53     784     0.0630
                           prp        4    6.6626e-007  87     4460    0.1180
                           mprp       4    5.8166e-007  39     430     0.0320
                           msprp      4    9.7196e-007  52     785     0.0780


Table 2

Function                   Algorithm  Dim  GV            NI    NF     CT(s)

Kowalik and Osborne        mfr        4    —             —     —      —
                           prp        4    8.9521e-007   833   26191  1.2970
                           mprp       4    9.9698e-007   6235  35425  3.5940
                           msprp      4    9.9560e-007   7059  37976  4.9850

Broyden banded             mfr        6    8.9469e-007   40    505    0.0780
                           prp        6    8.4684e-007   268   9640   0.4840
                           mprp       6    8.9029e-007   102   1319   0.0940
                           msprp      6    9.3276e-007   44    556    0.0940

Discrete boundary          mfr        6    9.1531e-007   107   509    0.0780
                           prp        6    7.8970e-007   269   11449  0.4690
                           mprp       6    8.28079e-007  157   1473   0.0930
                           msprp      6    9.9436e-007   165   1471   0.1410

Variably dimensioned       mfr        8    7.3411e-007   57    1233   0.1250
                           prp        8    7.3411e-007   113   7403   0.3290
                           mprp       8    9.0900e-007   69    1544   0.0780
                           msprp      8    7.3411e-007   57    1233   0.1100

Broyden tridiagonal        mfr        9    9.1815e-007   129   2173   0.1250
                           prp        9    6.4584e-007   113   5915   0.2500
                           mprp       9    7.3529e-007   187   2967   0.1250
                           msprp      9    9.2363e-007   82    1304   0.1100

Linear-rank1               mfr        10   9.7462e-007   84    3762   0.1720
                           prp        10   4.5647e-007   98    6765   0.2810
                           mprp       10   6.9140e-007   51    2216   0.0780
                           msprp      10   6.6630e-007   50    2162   0.1250

Linear-full rank           mfr        12   7.6919e-007   9     36     0.0160
                           prp        12   8.2507e-007   47    1904   0.1090
                           mprp       12   7.6919e-007   9     36     0.0630
                           msprp      12   7.6919e-007   9     36     0.0150

The above numerical experiments show that the algorithm proposed in this paper is promising.

5. Conclusion

In this paper, a new spectral PRP conjugate gradient algorithm has been developed for solving unconstrained minimization problems. Under some mild conditions, its global convergence has been proved with an Armijo-type line search rule. Compared with other similar algorithms, the numerical performance of the developed algorithm is promising.

Acknowledgments

The authors would like to express their great thanks to the anonymous referees for their constructive comments on this paper, which have improved its presentation. This work is supported by National Natural Science Foundation of China (Grant nos. 71071162, 70921001).

References

  1. N. Andrei, "Acceleration of conjugate gradient algorithms for unconstrained optimization," Applied Mathematics and Computation, vol. 213, no. 2, pp. 361–369, 2009.
  2. N. Andrei, "Open problems in nonlinear conjugate gradient algorithms for unconstrained optimization," Bulletin of the Malaysian Mathematical Sciences Society, vol. 34, no. 2, pp. 319–330, 2011.
  3. E. G. Birgin and J. M. Martínez, "A spectral conjugate gradient method for unconstrained optimization," Applied Mathematics and Optimization, vol. 43, no. 2, pp. 117–128, 2001.
  4. S.-Q. Du and Y.-Y. Chen, "Global convergence of a modified spectral FR conjugate gradient method," Applied Mathematics and Computation, vol. 202, no. 2, pp. 766–770, 2008.
  5. J. C. Gilbert and J. Nocedal, "Global convergence properties of conjugate gradient methods for optimization," SIAM Journal on Optimization, vol. 2, no. 1, pp. 21–42, 1992.
  6. L. Grippo and S. Lucidi, "A globally convergent version of the Polak-Ribière conjugate gradient method," Mathematical Programming, vol. 78, no. 3, pp. 375–391, 1997.
  7. J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.
  8. B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
  9. Z. J. Shi, "A restricted Polak-Ribière conjugate gradient method and its global convergence," Advances in Mathematics, vol. 31, no. 1, pp. 47–55, 2002.
  10. Z. Wan, C. M. Hu, and Z. L. Yang, "A spectral PRP conjugate gradient method for nonconvex optimization problems based on modified line search," Discrete and Continuous Dynamical Systems: Series B, vol. 16, no. 4, pp. 1157–1169, 2011.
  11. Z. Wan, Z. Yang, and Y. Wang, "New spectral PRP conjugate gradient method for unconstrained optimization," Applied Mathematics Letters, vol. 24, no. 1, pp. 16–22, 2011.
  12. Z. X. Wei, G. Y. Li, and L. Q. Qi, "Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems," Mathematics of Computation, vol. 77, no. 264, pp. 2173–2193, 2008.
  13. G. Yu, L. Guan, and Z. Wei, "Globally convergent Polak-Ribière-Polyak conjugate gradient methods under a modified Wolfe line search," Applied Mathematics and Computation, vol. 215, no. 8, pp. 3082–3090, 2009.
  14. G. Yuan, X. Lu, and Z. Wei, "A conjugate gradient method with descent direction for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 233, no. 2, pp. 519–530, 2009.
  15. G. Yuan, "Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems," Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
  16. L. Zhang, W. Zhou, and D. Li, "Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search," Numerische Mathematik, vol. 104, no. 4, pp. 561–572, 2006.
  17. L. Zhang, W. Zhou, and D.-H. Li, "A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence," IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
  18. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 3, no. 16, pp. 35–43, 1969.
  19. J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.

Copyright © 2012 Huabin Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

