Abstract

A new nonlinear spectral conjugate descent method for solving unconstrained optimization problems is proposed on the basis of the CD method and the spectral conjugate gradient method. For any line search, the new method satisfies the sufficient descent condition $g_k^T d_k = -\|g_k\|^2$. Moreover, we prove that the new method is globally convergent under the strong Wolfe line search. The numerical results show that the new method is more effective than the well-known CD, FR, and PRP methods on test problems from the CUTE test problem library (Bongartz et al., 1995).

1. Introduction

Unconstrained optimization problems have extensive applications, for example, in petroleum exploration, aerospace, transportation, and other domains. However, the computational effort required grows rapidly with the scale of the problem, so new methods are needed for solving large-scale unconstrained optimization problems. The primary objective of this paper is to study the global convergence properties and the practical computational performance of a new nonlinear spectral conjugate gradient method for unconstrained optimization, without restarts and under suitable conditions.

Consider the following unconstrained optimization problem:

$\min_{x \in \mathbb{R}^n} f(x),$  (1.1)

where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function and its gradient $g(x) = \nabla f(x)$ is available.

Because it requires little computer memory, the conjugate gradient method is very appealing for solving (1.1) when the number of variables is large. The method generates iterates by

$x_{k+1} = x_k + \alpha_k d_k,$  (1.2)

$d_k = \begin{cases} -g_k, & k = 1, \\ -g_k + \beta_k d_{k-1}, & k \ge 2, \end{cases}$  (1.3)

where $x_k$ is the current iterate, $\alpha_k$ is the step-size determined by some line search, $d_k$ is the search direction, $g_k$ is the gradient of $f$ at the point $x_k$, and $\beta_k$ is a scalar that determines the different conjugate gradient methods [1, 2]. There are many well-known formulas for $\beta_k$, such as Fletcher-Reeves (FR) [3], Polak-Ribière-Polyak (PRP) [4], Hestenes-Stiefel (HS) [5], and conjugate descent (CD) [6]. The conjugate gradient method is a powerful line search method for solving optimization problems, and it remains very popular among engineers and mathematicians interested in solving large-scale problems. Like the steepest descent method, it avoids the computation and storage of the matrices associated with the Hessian of the objective function.
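As an illustration, the direction update (1.3) with the FR, PRP, and CD choices of $\beta_k$ (the CD formula is given in (1.4) below) can be sketched as follows; this is a minimal Python/NumPy sketch rather than the Fortran 90 used in Section 3, and the function names are ours:

    import numpy as np

    def beta_fr(g, g_prev, d_prev):
        # Fletcher-Reeves: ||g_k||^2 / ||g_{k-1}||^2
        return (g @ g) / (g_prev @ g_prev)

    def beta_prp(g, g_prev, d_prev):
        # Polak-Ribiere-Polyak: g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
        return g @ (g - g_prev) / (g_prev @ g_prev)

    def beta_cd(g, g_prev, d_prev):
        # Conjugate descent: ||g_k||^2 / (-d_{k-1}^T g_{k-1})
        return (g @ g) / (-(d_prev @ g_prev))

    def cg_direction(g, g_prev=None, d_prev=None, beta=beta_cd):
        # Formula (1.3): steepest descent at k = 1, CG update afterwards.
        if d_prev is None:
            return -g
        return -g + beta(g, g_prev, d_prev) * d_prev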

In the original CD method proposed by Fletcher [6], $\beta_k$ is defined by

$\beta_k^{CD} = \dfrac{\|g_k\|^2}{-d_{k-1}^T g_{k-1}},$  (1.4)

where $\|\cdot\|$ denotes the Euclidean norm of vectors. An important property of the CD method is that it produces a descent direction under the strong Wolfe line search, in which the step-size $\alpha_k$ is required to satisfy

$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad |g(x_k + \alpha_k d_k)^T d_k| \le \sigma |g_k^T d_k|,$  (1.5)

where $0 < \delta < \sigma < 1$. Some good results about the CD method have also been reported in recent years [7-10].
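For concreteness, a small Python sketch that tests whether a trial step-size satisfies the two strong Wolfe conditions (1.5); the function name and the default values of delta and sigma are ours for illustration only, not the paper's settings:

    import numpy as np

    def satisfies_strong_wolfe(f, grad, x, d, alpha, delta=0.01, sigma=0.1):
        # First condition of (1.5): sufficient decrease (Armijo).
        g_d = grad(x) @ d
        decrease = f(x + alpha * d) <= f(x) + delta * alpha * g_d
        # Second condition of (1.5): strong curvature bound.
        curvature = abs(grad(x + alpha * d) @ d) <= sigma * abs(g_d)
        return decrease and curvature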

Another popular method for solving problem (1.1) is the spectral gradient method, which was developed originally by Barzilai and Borwein [11] in 1988. Raydan [12] further introduced the spectral gradient method for potentially large-scale unconstrained optimization problems. The main feature of this method is that only gradient directions are used at each line search, whereas a nonmonotone strategy guarantees global convergence. What is more, this method outperforms sophisticated conjugate gradient methods on many problems. Birgin and Martínez [13] proposed three kinds of spectral conjugate gradient methods, in which the direction is given by

$d_k = -\theta_k g_k + \beta_k d_{k-1},$  (1.6)

where the parameter $\beta_k$ is computed in three different ways (see [13] for the exact formulas), and $\theta_k$ is taken to be the spectral gradient parameter, computed by

$\theta_k = \dfrac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}},$  (1.7)

where $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$. The numerical results show that these methods are very effective. Unfortunately, they cannot guarantee to generate descent directions. Based on the FR conjugate gradient method, Zhang et al. [14] modified the FR method so that the generated direction is always a descent direction. The direction is defined by

$d_k = \begin{cases} -g_k, & k = 1, \\ -\theta_k g_k + \beta_k^{FR} d_{k-1}, & k \ge 2, \end{cases}$  (1.8)

where $\beta_k^{FR}$ is specified in [3], and

$\theta_k = \dfrac{d_{k-1}^T y_{k-1}}{\|g_{k-1}\|^2}.$  (1.9)

They prove that this method always generates descent directions and is globally convergent.
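The spectral parameter (1.7) is cheap to compute from two consecutive iterates; a minimal sketch (function name ours):

    import numpy as np

    def spectral_theta(x, x_prev, g, g_prev):
        # Barzilai-Borwein spectral parameter (1.7):
        # theta_k = s_{k-1}^T s_{k-1} / s_{k-1}^T y_{k-1}
        s = x - x_prev
        y = g - g_prev
        return (s @ s) / (s @ y)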

In this paper, motivated by the success of the spectral gradient method, we propose a new spectral conjugate gradient method by combining the CD method and the spectral gradient method. The direction $d_k$ is given by

$d_k = \begin{cases} -g_k, & k = 1, \\ -\theta_k g_k + \beta_k^{CD} d_{k-1}, & k \ge 2, \end{cases}$  (1.10)

where $\theta_k$ is specified by

$\theta_k = 1 + \beta_k^{CD} \dfrac{g_k^T d_{k-1}}{\|g_k\|^2}.$  (1.11)

Under some mild conditions, we establish the global convergence of the new spectral conjugate gradient method with the strong Wolfe line search.

This paper is organized as follows. In Section 2, we propose our algorithm, and global convergence analysis is provided under suitable conditions. Preliminary numerical results are presented in Section 3.

2. Global Convergence Analysis

In order to establish the global convergence of our method, we need the following assumption on the objective function, which has often been used in the literature to analyze the global convergence of nonlinear conjugate gradient methods and spectral conjugate gradient methods with inexact line searches.

Assumption 2.1. (i) The level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_1)\}$ is bounded, where $x_1$ is the starting point.
(ii) In some neighborhood $N$ of $\Omega$, the objective function $f$ is continuously differentiable, and its gradient is Lipschitz continuous; that is, there exists a constant $L > 0$ such that

$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in N.$  (2.1)

Now we present the new spectral conjugate gradient method as follows.

Algorithm 2.2 (SCD method). Step 1. Data: $x_1 \in \mathbb{R}^n$, $\varepsilon \ge 0$. Set $d_1 = -g_1$; if $\|g_1\| \le \varepsilon$, then stop.
Step 2. Compute $\alpha_k$ by some line search.
Step 3. Let $x_{k+1} = x_k + \alpha_k d_k$ and $g_{k+1} = g(x_{k+1})$; if $\|g_{k+1}\| \le \varepsilon$, then stop.
Step 4. Compute $\theta_{k+1}$ by (1.11), and generate $d_{k+1}$ by (1.10).
Step 5. Set $k := k + 1$; go to Step 2.
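To make the steps concrete, the following is a minimal Python sketch of Algorithm 2.2; the paper's experiments use Fortran 90, SciPy's Wolfe line search stands in for the line search of Step 2, and the Rosenbrock usage example is ours:

    import numpy as np
    from scipy.optimize import line_search  # Wolfe line search

    def scd(f, grad, x, eps=1e-6, max_iter=5000):
        # Algorithm 2.2 (SCD method).
        g = grad(x)
        d = -g                                   # Step 1: d_1 = -g_1
        k = 1
        while np.linalg.norm(g) > eps and k <= max_iter:
            alpha = line_search(f, grad, x, d)[0]   # Step 2
            if alpha is None:                       # line search failed
                break
            x_new = x + alpha * d                   # Step 3
            g_new = grad(x_new)
            beta = g_new @ g_new / (-(d @ g))       # beta^CD, formula (1.4)
            theta = 1.0 + beta * (g_new @ d) / (g_new @ g_new)  # (1.11)
            d = -theta * g_new + beta * d           # Step 4, formula (1.10)
            x, g = x_new, g_new                     # Step 5
            k += 1
        return x

    # Usage example: minimize the Rosenbrock function from a standard start.
    f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
    grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                               200 * (x[1] - x[0]**2)])
    print(scd(f, grad, np.array([-1.2, 1.0])))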

The following theorem shows that the search direction generated by Algorithm 2.2 satisfies the sufficient descent condition for any line search.

Theorem 2.3. Let the sequences $\{g_k\}$ and $\{d_k\}$ be generated by Algorithm 2.2, and let the step-size $\alpha_k$ be determined by any line search; then

$g_k^T d_k = -\|g_k\|^2, \quad \forall k \ge 1.$  (2.2)

Proof. We prove the conclusion by induction. Since $d_1 = -g_1$, the conclusion (2.2) holds for $k = 1$. Now we assume that the conclusion is true for $k - 1$ with $k \ge 2$, that is, $g_{k-1}^T d_{k-1} = -\|g_{k-1}\|^2$. In the following, we need to prove that the conclusion holds for $k$.
If $g_k^T d_{k-1} = 0$, then $\theta_k = 1$ by (1.11). From (1.4) and (1.10), we have

$g_k^T d_k = -\|g_k\|^2 + \beta_k^{CD} g_k^T d_{k-1} = -\|g_k\|^2.$  (2.3)

If $g_k^T d_{k-1} \ne 0$, then from (1.10), (1.11), and our assumption $g_{k-1}^T d_{k-1} = -\|g_{k-1}\|^2$, we have

$g_k^T d_k = -\theta_k \|g_k\|^2 + \beta_k^{CD} g_k^T d_{k-1} = -\|g_k\|^2 - \beta_k^{CD} g_k^T d_{k-1} + \beta_k^{CD} g_k^T d_{k-1} = -\|g_k\|^2.$  (2.4)

From (2.3) and (2.4), we know that the conclusion (2.2) holds for $k$.
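The cancellation in (2.3) and (2.4) is easy to verify numerically; the following sketch (random vectors stand in for actual iterates, so this is an illustration, not a proof) checks that (1.10) and (1.11) give $g_k^T d_k = -\|g_k\|^2$ regardless of $d_{k-1}$:

    import numpy as np

    rng = np.random.default_rng(0)
    g_prev, d_prev, g = (rng.normal(size=5) for _ in range(3))
    beta = (g @ g) / (-(d_prev @ g_prev))            # (1.4)
    theta = 1.0 + beta * (g @ d_prev) / (g @ g)      # (1.11)
    d = -theta * g + beta * d_prev                   # (1.10)
    assert np.isclose(g @ d, -(g @ g))               # sufficient descent (2.2)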

Remark 2.4. From (1.4) and (2.2), for $k \ge 2$ we have

$\beta_k^{CD} = \dfrac{\|g_k\|^2}{-d_{k-1}^T g_{k-1}} = \dfrac{\|g_k\|^2}{\|g_{k-1}\|^2} = \beta_k^{FR}.$  (2.5)

Remark 2.5. From (1.11), (2.2), and (2.5), under the strong Wolfe line search (1.5) we have $1 - \sigma \le \theta_k \le 1 + \sigma$ for $k \ge 2$, since $|g_k^T d_{k-1}| \le \sigma |g_{k-1}^T d_{k-1}| = \sigma \|g_{k-1}\|^2$.

The conclusion of the following lemma, often called the Zoutendijk condition, is used to prove the global convergence of nonlinear conjugate gradient methods. It was originally given by Zoutendijk [15].

Lemma 2.6. Suppose that Assumption 2.1 holds. Consider any method of the form (1.2), where $d_k$ satisfies $g_k^T d_k < 0$ for all $k \ge 1$ and $\alpha_k$ satisfies the Wolfe line search. Then

$\sum_{k \ge 1} \dfrac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty.$  (2.6)

The strong Wolfe line search is a special case of the Wolfe line search, so Lemma 2.6 also holds under the strong Wolfe line search. What is more, the same argument can be used to prove that the Zoutendijk condition holds for the spectral conjugate gradient method.

The following theorem establishes the global convergence of the new spectral conjugate gradient method with the strong Wolfe line search for general functions.

Theorem 2.7. Suppose that Assumption 2.1 holds. Let the sequences $\{g_k\}$ and $\{d_k\}$ be generated by Algorithm 2.2, and let the step-size $\alpha_k$ be determined by the strong Wolfe line search (1.5). Then

$\liminf_{k \to \infty} \|g_k\| = 0.$  (2.7)

Proof. Under the given conditions, the conclusion of Lemma 2.6 holds. We obtain the conclusion (2.7) by contradiction: suppose that there exists a positive constant $\varepsilon > 0$ such that

$\|g_k\| \ge \varepsilon$  (2.8)

holds for all $k \ge 1$. Rewriting (1.10) as

$d_k + \theta_k g_k = \beta_k^{CD} d_{k-1}$  (2.9)

and squaring both sides of it, we get

$\|d_k\|^2 = (\beta_k^{CD})^2 \|d_{k-1}\|^2 - 2\theta_k g_k^T d_k - \theta_k^2 \|g_k\|^2.$  (2.10)

From the above equation, (2.2), and Remark 2.5, we have

$\|d_k\|^2 = (\beta_k^{CD})^2 \|d_{k-1}\|^2 + (2\theta_k - \theta_k^2) \|g_k\|^2 \le (\beta_k^{CD})^2 \|d_{k-1}\|^2 + \|g_k\|^2.$  (2.11)

Dividing the above inequality by $(g_k^T d_k)^2 = \|g_k\|^4$ and using (2.5), we have

$\dfrac{\|d_k\|^2}{\|g_k\|^4} \le \dfrac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \dfrac{1}{\|g_k\|^2}.$  (2.12)

Using (2.12) recursively and noting that $\|d_1\|^2 = \|g_1\|^2$, we get

$\dfrac{\|d_k\|^2}{\|g_k\|^4} \le \sum_{i=1}^{k} \dfrac{1}{\|g_i\|^2}.$  (2.13)

Then we get from (2.13) and (2.8) that

$\dfrac{(g_k^T d_k)^2}{\|d_k\|^2} = \dfrac{\|g_k\|^4}{\|d_k\|^2} \ge \dfrac{\varepsilon^2}{k},$  (2.14)

which indicates

$\sum_{k \ge 1} \dfrac{(g_k^T d_k)^2}{\|d_k\|^2} = +\infty.$  (2.15)

This contradicts the Zoutendijk condition (2.6). Therefore the conclusion (2.7) holds.

3. Numerical Experiments

In this section, we report some numerical results. Under the strong Wolfe line search, we compare the CPU time and the iteration number of the SCD method with those of the CD, FR, and PRP methods on test problems from the CUTE test problem library [16]. The parameters $\delta$ and $\sigma$ in the strong Wolfe line search satisfy $0 < \delta < \sigma < 1$. We stop the iteration if the iteration number exceeds 5000 or the inequality $\|g_k\| \le \varepsilon$ is satisfied. All codes were written in FORTRAN 90 and run on a PC with a 2.0 GHz CPU, 512 MB of memory, and the Windows XP operating system.

In Table 1, the column "Problem" represents the problem's name, and "Dim" denotes the problem's dimension. The detailed numerical results are listed in the form NI/CPU, where NI and CPU denote the iteration number and the CPU time in seconds, respectively. Some CPU times in Table 1 are zero because CPU times were recorded to two decimal places in our experiments. In order to compare the methods on CPU time, we replace each zero CPU time by a small positive constant, chosen smaller than the smallest CPU time over the set of test problems whose CPU time is not zero.

In this paper, we adopt the performance profiles of Dolan and Moré [17] to compare the SCD method with the CD, FR, and PRP methods. Figure 1 shows the performance profiles with respect to CPU time: for each method, we plot the fraction of problems for which the method is within a factor τ of the best time. The left side of the figure gives the percentage of the test problems for which a method is the fastest; the right side gives the percentage of the test problems that are successfully solved by each of the methods. The top curve corresponds to the method that solved the most problems within a factor τ of the best time. In the same way, we also compare the iteration numbers; see Figure 2.
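For reference, a Dolan-Moré performance profile can be computed from a table of costs as in the following Python sketch; the cost matrix T, the labels, and the function name are hypothetical, and failures are encoded as infinity:

    import numpy as np
    import matplotlib.pyplot as plt

    def performance_profile(T, labels, tau_max=10.0):
        # T[i, s]: cost (CPU time or iterations) of solver s on problem i;
        # np.inf marks a failure. Ratios follow Dolan and More [17].
        ratios = T / T.min(axis=1, keepdims=True)
        taus = np.linspace(1.0, tau_max, 200)
        for s, name in enumerate(labels):
            # Fraction of problems solved within a factor tau of the best.
            rho = [(ratios[:, s] <= tau).mean() for tau in taus]
            plt.step(taus, rho, where="post", label=name)
        plt.xlabel("tau"); plt.ylabel("fraction of problems"); plt.legend()
        plt.show()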

From Figures 1 and 2, the SCD method performs slightly worse than the well-known PRP method in terms of both CPU time and iteration number. However, the SCD method clearly outperforms the well-known CD and FR methods on both measures. So the efficiency of the SCD method is encouraging.

Acknowledgments

The authors wish to express their heartfelt thanks to the anonymous referees and the editor for their detailed and helpful suggestions for revising the paper. This work was supported by the Natural Science Foundation of Chongqing Education Committee (KJ121112).