Abstract

A mixed spectral CD-DY conjugate gradient method for solving unconstrained optimization problems is proposed, which combines the advantages of the spectral conjugate gradient method, the CD method, and the DY method. Under the Wolfe line search, the proposed method generates a descent direction at each iteration, and its global convergence can be guaranteed. Numerical results show that the new method is efficient and stable compared with the CD method (Fletcher 1987), the DY method (Dai and Yuan 1999), and the SFR method (Du and Chen 2008), so it can be widely used in scientific computation.

1. Introduction

The purpose of this paper is to study the global convergence properties and practical computational performance of a mixed spectral CD-DY conjugate gradient method for unconstrained optimization without restarts, under appropriate conditions.

Consider the following unconstrained optimization problem:
$$\min_{x \in R^n} f(x), \tag{1.1}$$
where $f: R^n \to R$ is continuously differentiable and its gradient $g(x) = \nabla f(x)$ is available. Generally, an iterative method is used to solve (1.1), with iterates given by
$$x_{k+1} = x_k + \alpha_k d_k, \tag{1.2}$$
where $x_k$ is the current iterate, $\alpha_k$ is a positive scalar called the step-size, which is determined by some line search, and $d_k$ is the search direction defined by
$$d_k = \begin{cases} -g_k, & \text{for } k = 1, \\ -g_k + \beta_k d_{k-1}, & \text{for } k \ge 2, \end{cases} \tag{1.3}$$
where $g_k = \nabla f(x_k)$ and $\beta_k$ is a scalar that determines the particular conjugate gradient method [1, 2].
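To fix ideas, the following Python sketch implements one step of the generic scheme (1.2)-(1.3); the step-size $\alpha_k$ and the scalar $\beta_{k+1}$ are left as inputs, since different line searches and different choices of $\beta_k$ yield different methods. The function name and interface are ours, used only for illustration.

```python
import numpy as np

def cg_step(x_k, d_k, alpha_k, g_next, beta_next):
    """One iteration of (1.2)-(1.3): move along d_k, then build the
    next search direction from the new gradient g_{k+1} and beta_{k+1}."""
    x_next = x_k + alpha_k * d_k        # (1.2)
    d_next = -g_next + beta_next * d_k  # (1.3), branch k >= 2
    return x_next, d_next
```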

There are many kinds of iterative methods for (1.1), including the steepest descent method, Newton's method, and the conjugate gradient method. The conjugate gradient method is a commonly used and effective method in optimization, and it only needs the information of the first derivative. Moreover, it overcomes the slow convergence of the steepest descent method and avoids the storage and second-derivative computations required by Newton's method.

The original CD (conjugate descent) method was proposed by Fletcher [3], in which $\beta_k$ is defined by
$$\beta_k^{CD} = -\frac{\|g_k\|^2}{d_{k-1}^T g_{k-1}}, \tag{1.4}$$
where $\|\cdot\|$ denotes the Euclidean norm of vectors. An important property of the CD method is that it produces a descent direction at each iteration under the strong Wolfe line search:
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \tag{1.5}$$
$$\left| g(x_k + \alpha_k d_k)^T d_k \right| \le -\sigma g_k^T d_k, \tag{1.6}$$
where $0 < \delta < \sigma < 1$. Dai and Yuan [4] first proposed the DY method, in which $\beta_k$ is defined by
$$\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T (g_k - g_{k-1})}. \tag{1.7}$$
Dai and Yuan [4] also strictly proved that the DY method produces a descent direction at each iteration under the Wolfe line search, that is, (1.5) together with
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k. \tag{1.8}$$
Some good results about the CD method and the DY method have also been reported in recent years [5–11].
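As a concrete illustration, the two formulas (1.4) and (1.7) and the line search tests (1.5), (1.6), and (1.8) can be coded in a few lines. This is a sketch with our own naming, not code from the paper:

```python
import numpy as np

def beta_cd(g, g_prev, d_prev):
    # (1.4): beta_k^CD = -||g_k||^2 / (d_{k-1}^T g_{k-1})
    return -np.dot(g, g) / np.dot(d_prev, g_prev)

def beta_dy(g, g_prev, d_prev):
    # (1.7): beta_k^DY = ||g_k||^2 / (d_{k-1}^T (g_k - g_{k-1}))
    return np.dot(g, g) / np.dot(d_prev, g - g_prev)

def wolfe_conditions(f, grad, x, d, alpha, delta=0.01, sigma=0.1, strong=False):
    """Check (1.5) with (1.8), or with (1.6) when strong=True, at a trial
    step alpha; f and grad are callables for f(x) and its gradient."""
    gTd = np.dot(grad(x), d)             # g_k^T d_k, negative for a descent d_k
    g_new_Td = np.dot(grad(x + alpha * d), d)
    armijo = f(x + alpha * d) <= f(x) + delta * alpha * gTd  # (1.5)
    if strong:
        return armijo and abs(g_new_Td) <= -sigma * gTd     # (1.6)
    return armijo and g_new_Td >= sigma * gTd               # (1.8)
```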

Quite recently, Birgin and Martinez [12] proposed a spectral conjugate gradient method by combining the conjugate gradient method and the spectral gradient method. Unfortunately, the spectral conjugate gradient method [12] cannot guarantee to generate descent directions. So, based on the FR formula, Zhang et al. [13] modified the FR method such that the generated direction is always a descent direction. Based on the modified FR conjugate gradient method [13], Du and Chen [14] proposed a new spectral conjugate gradient method:
$$d_k = \begin{cases} -g_k, & \text{for } k = 1, \\ -\theta_k g_k + \beta_k^{FR} d_{k-1}, & \text{for } k \ge 2, \end{cases} \tag{1.9}$$
where
$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \qquad \theta_k = \frac{d_{k-1}^T (g_k - g_{k-1})}{\|g_{k-1}\|^2}. \tag{1.10}$$
They proved the global convergence of this modified spectral FR method (in this paper, we call it the SFR method) under mild conditions.

The observation of the above formulas motivates us to construct a new formula, which combines the advantages of the spectral gradient method, the CD method, and the DY method, as follows:
$$d_k = \begin{cases} -g_k, & \text{for } k = 1, \\ -\theta_k g_k + \beta_k d_{k-1}, & \text{for } k \ge 2, \end{cases} \tag{1.11}$$
where $\beta_k$ is specified by
$$\beta_k = \beta_k^{CD} + \min\left\{0, \varphi_k \cdot \beta_k^{CD}\right\}, \tag{1.12}$$
$$\theta_k = 1 - \frac{g_k^T d_{k-1}}{g_{k-1}^T d_{k-1}}, \tag{1.13}$$
$$\varphi_k = -\frac{g_k^T d_{k-1}}{d_{k-1}^T (g_k - g_{k-1})}. \tag{1.14}$$
Under some mild conditions, we prove the global convergence of the mixed spectral CD-DY conjugate gradient method with the Wolfe line search.
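In code, the update (1.11)-(1.14) amounts to a few inner products. The sketch below (our own naming, again in Python) makes the switching behavior explicit: when $g_k^T d_{k-1} \le 0$, the min in (1.12) vanishes and $\beta_k = \beta_k^{CD}$; otherwise $\beta_k$ reduces to $\beta_k^{DY}$, as Lemma 2.3 below verifies.

```python
import numpy as np

def mixed_direction(g, g_prev, d_prev):
    """Search direction (1.11) with beta_k from (1.12)-(1.14).
    g, g_prev, d_prev stand for g_k, g_{k-1}, d_{k-1}."""
    gTd = np.dot(g, d_prev)            # g_k^T d_{k-1}
    gpTd = np.dot(g_prev, d_prev)      # g_{k-1}^T d_{k-1}, negative by Lemma 2.3 below
    yTd = np.dot(g - g_prev, d_prev)   # d_{k-1}^T (g_k - g_{k-1}), positive by (2.3)

    beta_cd = -np.dot(g, g) / gpTd     # (1.4)
    phi = -gTd / yTd                   # (1.14)
    beta = beta_cd + min(0.0, phi * beta_cd)  # (1.12)
    theta = 1.0 - gTd / gpTd           # (1.13)
    return -theta * g + beta * d_prev  # (1.11)
```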

This paper is organized as follows. In Section 2, we propose the corresponding algorithm and give some assumptions and lemmas, which are usually used in the proof of the global convergence properties of nonlinear conjugate gradient methods. In Section 3, global convergence analysis is provided with suitable conditions. Preliminary numerical results are presented in Section 4.

2. Algorithm and Lemmas

In order to establish the global convergence of the proposed method, we need the following assumption on the objective function, which has often been used in the literature to analyze the global convergence of nonlinear conjugate gradient methods with inexact line searches.

Assumption 2.1. (i) The level set $\Omega = \{x \mid f(x) \le f(x_1)\}$ is bounded, where $x_1$ is the starting point.

(ii) In some neighborhood $N$ of $\Omega$, the objective function is continuously differentiable, and its gradient is Lipschitz continuous; that is, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N. \tag{2.1}$$

Now we give the mixed spectral CD-DY conjugate gradient method as follows.

Algorithm 2.2.
Step 1. Data: $x_1 \in R^n$, $\varepsilon \ge 0$. Set $d_1 = -g_1$; if $\|g_1\| \le \varepsilon$, then stop.
Step 2. Compute $\alpha_k$ by the Wolfe line search (1.5) and (1.8).
Step 3. Let $x_{k+1} = x_k + \alpha_k d_k$ and $g_{k+1} = g(x_{k+1})$; if $\|g_{k+1}\| \le \varepsilon$, then stop.
Step 4. Compute $\beta_{k+1}$ by (1.12), and generate $d_{k+1}$ by (1.11).
Step 5. Set $k = k + 1$; go to Step 2.
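A compact Python sketch of Algorithm 2.2 follows, reusing mixed_direction from the sketch in Section 1. We let SciPy's line_search compute $\alpha_k$: it enforces the strong Wolfe conditions (1.5) and (1.6), which imply the Wolfe conditions (1.5) and (1.8), and we pass $\delta$ and $\sigma$ as in Section 4. All identifiers are ours; this is an illustration under these assumptions, not the authors' MATLAB code.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def mixed_cd_dy(f, grad, x1, eps=1e-6, it_max=9999, delta=0.01, sigma=0.1):
    x = np.asarray(x1, dtype=float)
    g = grad(x)
    d = -g                                    # Step 1: d_1 = -g_1
    for _ in range(it_max):
        if np.linalg.norm(g) <= eps:          # Steps 1 and 3: stopping test
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=delta, c2=sigma)[0]
        if alpha is None:                     # line search failed to converge
            break
        x = x + alpha * d                     # Step 3: (1.2)
        g_new = grad(x)
        d = mixed_direction(g_new, g, d)      # Step 4: (1.11)-(1.14), sketch above
        g = g_new                             # Step 5: k <- k + 1
    return x

# Example run on the extended Rosenbrock function from SciPy:
x_star = mixed_cd_dy(rosen, rosen_der, np.full(10, -1.0))
print(np.linalg.norm(rosen_der(x_star)))      # should be <= 1e-6 on success
```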

The following lemma shows that Algorithm 2.2 produces a descent direction in each iteration with the Wolfe line search.

Lemma 2.3. Let the sequences $\{g_k\}$ and $\{d_k\}$ be generated by Algorithm 2.2, and let the step-size $\alpha_k$ be determined by the Wolfe line search (1.5) and (1.8). Then
$$g_k^T d_k < 0. \tag{2.2}$$

Proof. The conclusion can be proved by induction. Since $g_1^T d_1 = -\|g_1\|^2$, the conclusion holds for $k = 1$. Now we assume that the conclusion is true for $k - 1$, $k \ge 2$. Then from (1.8), we have
$$d_{k-1}^T (g_k - g_{k-1}) \ge (\sigma - 1) g_{k-1}^T d_{k-1} > 0. \tag{2.3}$$
If $g_k^T d_{k-1} \le 0$, then from (1.14) and (2.3), we have $\varphi_k \cdot \beta_k^{CD} \ge 0$, and hence $\beta_k = \beta_k^{CD}$. From (1.4), (1.11), and (1.13), we have
$$g_k^T d_k = -\left(1 - \frac{g_k^T d_{k-1}}{g_{k-1}^T d_{k-1}}\right) \|g_k\|^2 - \frac{\|g_k\|^2}{g_{k-1}^T d_{k-1}} \cdot g_k^T d_{k-1} = -\|g_k\|^2 < 0. \tag{2.4}$$
If $g_k^T d_{k-1} > 0$, then from (1.14) and (2.3), we have $\varphi_k \cdot \beta_k^{CD} < 0$, and hence $\beta_k = \beta_k^{CD} + \varphi_k \cdot \beta_k^{CD} = \beta_k^{DY}$. From (1.11), (1.7), and (1.13), we have
$$\begin{aligned} g_k^T d_k &= -\theta_k \|g_k\|^2 + \beta_k^{DY} \cdot g_k^T d_{k-1} = \beta_k^{DY} \cdot \left[ -\theta_k \cdot d_{k-1}^T (g_k - g_{k-1}) + g_k^T d_{k-1} \right] \\ &= \beta_k^{DY} \cdot \left[ g_{k-1}^T d_{k-1} + \frac{g_k^T d_{k-1}}{g_{k-1}^T d_{k-1}} \cdot d_{k-1}^T (g_k - g_{k-1}) \right] \le \beta_k^{DY} \cdot g_{k-1}^T d_{k-1} < 0. \end{aligned} \tag{2.5}$$
From the inequalities (2.4) and (2.5), we obtain that the conclusion holds for $k$.
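The descent property (2.2) is easy to check numerically. The snippet below (a sketch under our own setup) runs the update (1.11)-(1.14) on a strictly convex quadratic with an exact line search, which satisfies (1.5) and (1.8) whenever $\delta < 1/2$, and asserts $g_k^T d_k < 0$ at every iteration; mixed_direction is the sketch from Section 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))
A = A.T @ A + np.eye(n)               # symmetric positive definite Hessian
grad = lambda x: A @ x                # gradient of f(x) = x^T A x / 2

x = rng.standard_normal(n)
g = grad(x)
d = -g
for _ in range(50):
    alpha = -(g @ d) / (d @ A @ d)    # exact minimizer along d
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10: # converged; further steps are degenerate
        break
    d = mixed_direction(g_new, g, d)
    assert g_new @ d < 0              # the descent property (2.2)
    g = g_new
```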

The conclusion of the following lemma, often called the Zoutendijk condition, is used to prove the global convergence properties of nonlinear conjugate gradient methods. It was originally given by Zoutendijk [15].

Lemma 2.4 (see [15]). Suppose that Assumption 2.1 holds. Let the sequences $\{g_k\}$ and $\{d_k\}$ be generated by Algorithm 2.2, let the step-size $\alpha_k$ be determined by the Wolfe line search (1.5) and (1.8), and let Lemma 2.3 hold. Then
$$\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \tag{2.6}$$

Lemma 2.5. Let the sequences $\{g_k\}$ and $\{d_k\}$ be generated by Algorithm 2.2, let the step-size $\alpha_k$ be determined by the Wolfe line search (1.5) and (1.8), and let Lemma 2.3 hold. Then
$$\beta_k \le \frac{g_k^T d_k}{g_{k-1}^T d_{k-1}}. \tag{2.7}$$

Proof. If $g_k^T d_{k-1} \le 0$, then, as in the proof of Lemma 2.3, we have $\beta_k = \beta_k^{CD}$. From (1.11), (1.4), and (1.13), we have
$$g_k^T d_k = -\left(1 - \frac{g_k^T d_{k-1}}{g_{k-1}^T d_{k-1}}\right) \|g_k\|^2 + \beta_k^{CD} \cdot g_k^T d_{k-1} = \beta_k^{CD} \cdot \left(g_{k-1}^T d_{k-1} - g_k^T d_{k-1}\right) + \beta_k^{CD} \cdot g_k^T d_{k-1} = \beta_k^{CD} \cdot g_{k-1}^T d_{k-1}. \tag{2.8}$$
From the above equation, we have
$$\beta_k^{CD} = \frac{g_k^T d_k}{g_{k-1}^T d_{k-1}}. \tag{2.9}$$
If $g_k^T d_{k-1} > 0$, then $\beta_k = \beta_k^{DY}$, and from (2.5) we have
$$\beta_k^{DY} \le \frac{g_k^T d_k}{g_{k-1}^T d_{k-1}}. \tag{2.10}$$
From (1.12), (2.9), and (2.10), we obtain that the conclusion (2.7) holds.

3. Global Convergence Property

The following theorem establishes the global convergence of the mixed spectral CD-DY conjugate gradient method with the Wolfe line search.

Theorem 3.1. Suppose that Assumption 2.1 holds. Let the sequences $\{g_k\}$ and $\{d_k\}$ be generated by Algorithm 2.2, let the step-size $\alpha_k$ be determined by the Wolfe line search (1.5) and (1.8), and let Lemma 2.3 hold. Then
$$\liminf_{k \to +\infty} \|g_k\| = 0. \tag{3.1}$$

Proof. Suppose by contradiction that there exists a constant $\rho > 0$ such that
$$\|g_k\| \ge \rho \tag{3.2}$$
holds for all $k \ge 1$.

From (1.11), we have $d_k + \theta_k g_k = \beta_k d_{k-1}$, and by squaring both sides, we get
$$\|d_k\|^2 = \beta_k^2 \|d_{k-1}\|^2 - 2\theta_k g_k^T d_k - \theta_k^2 \|g_k\|^2. \tag{3.3}$$
From the above equation and Lemma 2.5 (noting that $\beta_k > 0$ in both cases of (1.12)), we have
$$\|d_k\|^2 \le \left(\frac{g_k^T d_k}{g_{k-1}^T d_{k-1}}\right)^2 \|d_{k-1}\|^2 - 2\theta_k g_k^T d_k - \theta_k^2 \|g_k\|^2. \tag{3.4}$$
Dividing the above inequality by $(g_k^T d_k)^2$, we have
$$\begin{aligned} \frac{\|d_k\|^2}{(g_k^T d_k)^2} &\le \frac{\|d_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2} - \frac{2\theta_k}{g_k^T d_k} - \frac{\theta_k^2 \|g_k\|^2}{(g_k^T d_k)^2} \\ &= \frac{\|d_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2} - \left(\frac{\theta_k \|g_k\|}{g_k^T d_k} + \frac{1}{\|g_k\|}\right)^2 + \frac{1}{\|g_k\|^2} \\ &\le \frac{\|d_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2} + \frac{1}{\|g_k\|^2}. \end{aligned} \tag{3.5}$$
Using (3.5) recursively and noting that $\|d_1\|^2 = -g_1^T d_1 = \|g_1\|^2$, we get
$$\frac{\|d_k\|^2}{(g_k^T d_k)^2} \le \sum_{i=1}^{k} \frac{1}{\|g_i\|^2}. \tag{3.6}$$
Then from (3.2) and (3.6), we have
$$\frac{(g_k^T d_k)^2}{\|d_k\|^2} \ge \frac{\rho^2}{k}, \tag{3.7}$$
which indicates
$$\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} = +\infty. \tag{3.8}$$
This contradicts the conclusion of Lemma 2.4. Therefore, the conclusion (3.1) holds.

4. Numerical Experiments

In this section, we report some numerical results. We used MATLAB 7.0 to test some problems from [16] and compared the performance of the mixed spectral CD-DY method (Algorithm 2.2) with the CD method, the DY method, and the SFR method. The global convergence of the CD method has still not been proved under the Wolfe line search, so our line search subroutine computes $\alpha_k$ such that the strong Wolfe line search conditions hold with $\delta = 0.01$ and $\sigma = 0.1$. We use $\|g_k\| \le 10^{-6}$ or It-max $> 9999$ as the stopping criterion, where It-max denotes the maximal number of iterations.

The numerical results of our tests are reported in Table 1. The first column β€œProblem” gives the name of the tested problem in [16], and β€œDim” denotes the dimension of the tested problem. The detailed numerical results are listed in the form NI/NF/NG, where NI, NF, and NG denote the number of iterations, function evaluations, and gradient evaluations, respectively. If the limit of 9999 function evaluations was exceeded, the run was stopped; this is indicated by β€œβ€”β€.

In order to rank the average performance of all the above methods, one can compute the total number of function and gradient evaluations by the formula
$$N_{\text{total}} = \text{NF} + l \cdot \text{NG}, \tag{4.1}$$
where $l$ is some integer. According to the results on automatic differentiation [17, 18], the value of $l$ can be set to 5:
$$N_{\text{total}} = \text{NF} + 5 \cdot \text{NG}. \tag{4.2}$$
That is to say, one gradient evaluation is equivalent to five function evaluations if automatic differentiation is used.
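For instance, a hypothetical run with NF = 101 and NG = 77 would score $N_{\text{total}} = 101 + 5 \times 77 = 486$ under (4.2). As a one-line helper (our own, for illustration):

```python
def n_total(nf, ng, l=5):
    # (4.1) with l = 5, i.e. (4.2): one gradient evaluation is counted
    # as five function evaluations
    return nf + l * ng

assert n_total(101, 77) == 486  # hypothetical run with NF = 101, NG = 77
```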

By making use of (4.2), we compare the mixed spectral CD-DY method with the other methods as follows: for the $i$th problem, compute the total number of function and gradient evaluations required by the CD method, the DY method, the SFR method, and the mixed spectral CD-DY method by formula (4.2), and denote them by $N_{\text{total},i}(\text{CD})$, $N_{\text{total},i}(\text{DY})$, $N_{\text{total},i}(\text{SFR})$, and $N_{\text{total},i}(\text{CD-DY})$; then calculate the ratios
$$\gamma_i(\text{CD}) = \frac{N_{\text{total},i}(\text{CD})}{N_{\text{total},i}(\text{CD-DY})}, \quad \gamma_i(\text{DY}) = \frac{N_{\text{total},i}(\text{DY})}{N_{\text{total},i}(\text{CD-DY})}, \quad \gamma_i(\text{SFR}) = \frac{N_{\text{total},i}(\text{SFR})}{N_{\text{total},i}(\text{CD-DY})}. \tag{4.3}$$

From Table 1, we know that some problems cannot be solved by some of the methods. So, if the $i_0$th problem cannot be solved by a given method, we use the constant $\tau = \max\{\gamma_i(\text{the given method}) \mid i \in S_1\}$ in place of $\gamma_{i_0}(\text{the given method})$, where $S_1$ denotes the set of test problems that can be solved by the given method.

The geometric mean of these ratios for the CD method, the DY method, and the SFR method over all the test problems is defined by
$$\gamma(\text{CD}) = \left(\prod_{i \in S} \gamma_i(\text{CD})\right)^{1/|S|}, \quad \gamma(\text{DY}) = \left(\prod_{i \in S} \gamma_i(\text{DY})\right)^{1/|S|}, \quad \gamma(\text{SFR}) = \left(\prod_{i \in S} \gamma_i(\text{SFR})\right)^{1/|S|}, \tag{4.4}$$
where $S$ denotes the set of the test problems and $|S|$ denotes the number of elements in $S$. One advantage of this rule is that the comparison is relative and hence is not dominated by a few problems for which a method requires a great number of function and gradient evaluations.
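The whole ranking procedure, (4.3)-(4.4) together with the $\tau$-substitution for failed runs described above, can be summarized in a short Python sketch. The data layout and names are ours, and we assume the baseline CD-DY method solved every test problem:

```python
import numpy as np

def geometric_mean_ratios(n_totals, baseline="CD-DY"):
    """Ratios (4.3) and geometric means (4.4). n_totals maps a method name
    to a list of N_total values over the test set S, with None marking a
    problem the method could not solve; such entries are replaced by the
    worst observed ratio tau, as in the text."""
    base = n_totals[baseline]
    means = {}
    for method, vals in n_totals.items():
        if method == baseline:
            continue
        solved = [v / b for v, b in zip(vals, base) if v is not None]  # i in S_1
        tau = max(solved)                                              # substitute for failures
        ratios = [v / b if v is not None else tau for v, b in zip(vals, base)]
        means[method] = float(np.exp(np.mean(np.log(ratios))))         # (4.4)
    return means
```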

According to the above rule, it is clear that $\gamma(\text{CD-DY}) = 1$. From Table 2, we can see that the average performance of the mixed spectral CD-DY conjugate gradient method (Algorithm 2.2) is the best. So, the mixed spectral CD-DY conjugate gradient method has some practical value.

Acknowledgments

The authors wish to express their heartfelt thanks to the referees and the editor for their detailed and helpful suggestions for revising the paper. This work was supported by the Natural Science Foundation of Chongqing Education Committee (KJ091104).