Abstract

A new spectral conjugate gradient method (SDYCG) for solving unconstrained optimization problems is presented in this paper. Our method provides a new expression for the spectral parameter, and this formula ensures that the sufficient descent condition holds. The search direction in the SDYCG can be viewed as a combination of the spectral gradient and the Dai-Yuan conjugate gradient direction. The global convergence of the SDYCG is also established. Numerical results show that the SDYCG may be capable of solving large-scale nonlinear unconstrained optimization problems.

1. Introduction

As is well known, a great number of problems studied in scientific research can be translated into unconstrained optimization problems. Among the various algorithms for solving nonlinear optimization problems, the spectral conjugate gradient (SCG) method performs well; it combines the spectral gradient method with the conjugate gradient method. For the SCG method, the choice of the spectral parameter is crucially important. In this paper, we propose a new spectral conjugate gradient method, based on the Dai-Yuan conjugate gradient method, by providing a new spectral parameter. Our purpose is to obtain an efficient algorithm for unconstrained optimization.

An unconstrained optimization problem is customarily expressed as
$$\min_{x \in \mathbb{R}^n} f(x). \tag{1}$$
The nonlinear function $f : \mathbb{R}^n \to \mathbb{R}$ considered in this paper is continuously differentiable; the gradient of $f$ is denoted by $g(x) = \nabla f(x)$. We usually impose the following properties on the function $f$.

(P1) The function $f$ is bounded below and is continuously differentiable in a neighbourhood $N$ of the level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$, where $x_0$ is the starting point.

(P2) The gradient of $f$ is Lipschitz continuous in $N$; that is, there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L \|x - y\|$ for all $x, y \in N$.

Generally, an algorithm for solving (1) generates a sequence $\{x_k\}$ of the following form:
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, \tag{2}$$
where $d_k$ is a search direction and $\alpha_k$ is the step size. At each iterate $x_k$, we usually determine $d_k$ first and then compute $\alpha_k$ by some principle.

There are different ways to determine the direction $d_k$. In the classical steepest-descent method, $d_k = -g_k$, where $g_k = g(x_k)$. In the conjugate gradient (CG) method, $d_k$ is of the form
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{3}$$
where $\beta_k$ is a scalar parameter characterizing the conjugate gradient method. The best-known expressions of $\beta_k$ are the Hestenes-Stiefel (HS) [1], Fletcher-Reeves (FR) [2], Polak-Ribiere-Polyak (PRP) [3, 4], and Dai-Yuan (DY) [5] formulas. They are defined by
$$\beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \quad \beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \quad \beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \quad \beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T y_{k-1}}, \tag{4}$$
respectively, where $\|\cdot\|$ denotes the Euclidean norm and $y_{k-1} = g_k - g_{k-1}$.
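For concreteness, the four parameters in (4) can be computed as in the following minimal NumPy sketch; the function and argument names are our own illustrative choices, and the inputs are NumPy vectors.

```python
def cg_betas(g_new, g_old, d_old):
    """Classical conjugate gradient parameters from (4).

    g_new, g_old : gradients g_k and g_{k-1}
    d_old        : previous search direction d_{k-1}
    """
    y = g_new - g_old                              # y_{k-1} = g_k - g_{k-1}
    return {
        "HS":  (g_new @ y) / (d_old @ y),          # Hestenes-Stiefel
        "FR":  (g_new @ g_new) / (g_old @ g_old),  # Fletcher-Reeves
        "PRP": (g_new @ y) / (g_old @ g_old),      # Polak-Ribiere-Polyak
        "DY":  (g_new @ g_new) / (d_old @ y),      # Dai-Yuan
    }
```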

There are also several approaches to determining the step size $\alpha_k$ in (2). Unfortunately, the steepest-descent method performs poorly. Barzilai and Borwein greatly improved the steepest-descent method by providing a spectral choice of step size in [6]. Their algorithm has the form
$$x_{k+1} = x_k - \alpha_k g_k, \tag{5}$$
where
$$\alpha_k = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}} \quad \text{or} \quad \alpha_k = \frac{s_{k-1}^T y_{k-1}}{y_{k-1}^T y_{k-1}},$$
with $s_{k-1} = x_k - x_{k-1}$. Many algorithms are proved convergent under the Wolfe conditions; that is, the step size $\alpha_k$ satisfies
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \tag{6}$$
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \tag{7}$$
with $0 < \delta < \sigma < 1$.
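A matching sketch of the two Barzilai-Borwein step sizes following (5), under the same naming assumptions:

```python
def bb_step(x_new, x_old, g_new, g_old, first=True):
    """The two Barzilai-Borwein spectral step sizes after (5)."""
    s = x_new - x_old                  # s_{k-1} = x_k - x_{k-1}
    y = g_new - g_old                  # y_{k-1} = g_k - g_{k-1}
    return (s @ s) / (s @ y) if first else (s @ y) / (y @ y)
```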

In recent years, some scholars have developed a new class of methods, the spectral conjugate gradient (SCG) methods, for solving (1). For example, Raydan introduced the spectral gradient method for large-scale unconstrained optimization in [7]; he combined the Barzilai-Borwein method with a nonmonotone line search strategy that guarantees global convergence. Utilizing spectral gradient and conjugate gradient ideas, Birgin and Martinez proposed a spectral conjugate gradient method in [8]. In their algorithm, the search direction has the form
$$d_k = \begin{cases} -g_k, & k = 0, \\ -\theta_k g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{8}$$
where $\theta_k$ is the spectral parameter. In [8], the best combination of this formula, the scaling, and the initial choice of the step length is also studied. Following [8], several papers have discussed various choices of the spectral parameter $\theta_k$ based on different formulas for $\beta_k$. For example, Du and Chen [9] gave a modified spectral FR conjugate gradient method with a Wolfe-type line search, in which both $\theta_k$ and $\beta_k$ are built on the FR formula. Yu et al. [10] presented a modification of spectral Perry's conjugate gradient formula, which possesses the sufficient descent property independently of the line search condition; their search direction is also of the form (8). Liu and Li [11] proposed a spectral DY-type projection method for nonlinear monotone systems of equations, whose direction is again determined by (8).
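All of the methods above instantiate the common form (8); a one-line sketch (our naming) makes the structure explicit.

```python
def scg_direction(g, d_old, theta, beta):
    """Generic spectral CG direction (8): d_k = -theta_k g_k + beta_k d_{k-1}."""
    return -theta * g + beta * d_old
```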

We propose a new SCG method based on the Dai-Yuan-type conjugate gradient method in this paper. A new selection of the spectral parameter $\theta_k$ is introduced in our algorithm such that the sufficient descent condition holds. In addition, the global convergence of the new method is established.

The present paper is organized as follows. In Section 2, we outline our new method for unconstrained nonlinear optimization and show that the sufficient descent condition holds under mild assumptions. The global convergence is proved in Section 3, while numerical results compared with the CG-DESCENT are given in Section 4. In the last section, we draw some conclusions about our new spectral conjugate gradient method.

2. Spectral Dai-Yuan-Type Conjugate Gradient Method

In this paper, we consider the spectral conjugate gradient method in which the search direction is of the form
$$d_k = \begin{cases} -g_k, & k = 0, \\ -\theta_k g_k + \beta_k^{DY} d_{k-1}, & k \ge 1, \end{cases} \tag{13}$$
where the spectral parameter $\theta_k$ is given by the new formula (14) proposed in this paper and the scalar parameter $\beta_k^{DY}$ is defined by (4). This is a new spectral conjugate gradient method for solving problem (1), because the expression (14) for the spectral parameter is completely different from those in other papers. The search direction (13) is a combination of the spectral gradient and the Dai-Yuan conjugate gradient. We hope that (14) may be an efficient choice.
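As a structural sketch of (13) in NumPy (our own naming, not the authors' code), with the new spectral parameter (14) passed in as an argument since we only outline the shape of the update:

```python
def sdycg_direction(g_new, g_old, d_old, theta):
    """Search direction (13) for k >= 1.

    theta stands for the spectral parameter given by the paper's
    formula (14); for k = 0 the direction is simply -g_0.
    """
    y = g_new - g_old                          # y_{k-1} = g_k - g_{k-1}
    beta_dy = (g_new @ g_new) / (d_old @ y)    # beta_k^{DY} from (4)
    return -theta * g_new + beta_dy * d_old
```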

In order to obtain the global convergence of our method, we assume that the step size $\alpha_k$ satisfies the strong Wolfe conditions; that is, $\alpha_k$ satisfies (6) and
$$|g(x_k + \alpha_k d_k)^T d_k| \le \sigma |g_k^T d_k|, \tag{15}$$
with $0 < \delta < \sigma < 1$. It is easy to see that (7) holds whenever (15) holds.
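A minimal checker for the two conditions might look as follows; the defaults `delta=1e-4` and `sigma=0.9` are common illustrative values with $0 < \delta < \sigma < 1$, not values prescribed by the paper.

```python
def satisfies_strong_wolfe(f, grad, x, d, alpha, delta=1e-4, sigma=0.9):
    """Check the strong Wolfe conditions (6) and (15) for a trial step."""
    gd = grad(x) @ d                   # g_k^T d_k, negative for a descent d
    armijo = f(x + alpha * d) <= f(x) + delta * alpha * gd          # (6)
    curvature = abs(grad(x + alpha * d) @ d) <= sigma * abs(gd)     # (15)
    return armijo and curvature
```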

The following is a detailed description of the spectral Dai-Yuan-type conjugate gradient (SDYCG) algorithm.

SDYCG Algorithm

Step 1 (initialization). Choose an initial point $x_0 \in \mathbb{R}^n$, set constants $0 < \delta < \sigma < 1$ and a tolerance $\varepsilon > 0$, and take $k = 0$.

Step 2 (check the convergence condition). Compute $g_k = g(x_k)$; if $\|g_k\| \le \varepsilon$, then stop.

Step 3 (form the search direction). If $k = 0$, then $d_0 = -g_0$. Else, compute $\theta_k$ and $\beta_k^{DY}$ by formulas (14) and (4), respectively; then compute $d_k$ by (13).

Step 4 (line search). Find a step size $\alpha_k$ which satisfies the strong Wolfe conditions (6) and (15).

Step 5 (compute the new point). Set $x_{k+1} = x_k + \alpha_k d_k$ and $k = k + 1$, and go to Step 2.
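To make Steps 1-5 concrete, the following is a minimal sketch of the whole loop in Python, not the authors' MATLAB code. The routine `theta_fn` stands in for the paper's formula (14), which we do not reproduce, and Step 4 borrows SciPy's `line_search`, which implements a strong Wolfe line search.

```python
import numpy as np
from scipy.optimize import line_search  # strong Wolfe line search

def sdycg(f, grad, x0, theta_fn, eps=1e-6, max_iter=10000):
    """Sketch of the SDYCG loop (Steps 1-5).

    theta_fn(g_new, g_old, d_old) stands in for the spectral
    parameter formula (14), which is specific to the paper.
    """
    x = x0
    g = grad(x)
    d = -g                                        # Step 3 with k = 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:              # Step 2
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]   # Step 4
        if alpha is None:                         # line search failed
            break
        x_new = x + alpha * d                     # Step 5
        g_new = grad(x_new)
        y = g_new - g
        beta_dy = (g_new @ g_new) / (d @ y)       # DY parameter from (4)
        d = -theta_fn(g_new, g, d) * g_new + beta_dy * d   # direction (13)
        x, g = x_new, g_new
    return x
```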

The framework of the SDYCG algorithm is similar to that of other spectral conjugate gradient algorithms. However, we choose a different spectral parameter (see (14)), which is the main difference between the SDYCG and the others.

The global convergence of the SDYCG algorithm will be given in the next section. Before that, we show that the search direction (13) ensures the sufficient descent condition.

Lemma 1. Suppose that the sequence $\{d_k\}$ is generated by the SDYCG algorithm; then
$$g_k^T d_k \le -\|g_k\|^2 \tag{16}$$
for all $k \ge 0$.

Proof. Since $d_k$ is calculated by formula (13), we get, for $k = 0$,
$$g_0^T d_0 = -\|g_0\|^2, \tag{17}$$
whereas when $k \ge 1$, from (14) we find
$$\theta_k \ge 1 + \frac{\beta_k^{DY} g_k^T d_{k-1}}{\|g_k\|^2}. \tag{18}$$
Furthermore,
$$-\theta_k \|g_k\|^2 + \beta_k^{DY} g_k^T d_{k-1} \le -\|g_k\|^2. \tag{19}$$
From the second formula of (13), we obtain
$$g_k^T d_k = -\theta_k \|g_k\|^2 + \beta_k^{DY} g_k^T d_{k-1} \le -\|g_k\|^2;$$
that is, (16) holds. The proof is completed.
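As a quick numerical sanity check of (16), the following script runs the update (13) on a random convex quadratic with an exact line search (which, on a quadratic, satisfies the strong Wolfe conditions for $\delta \le 1/2$). Since formula (14) is specific to this paper and not reproduced here, `theta_stub` is a hypothetical spectral parameter satisfying $\theta_k \ge 1$, used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def theta_stub(g_new, g_old, d_old):
    """Hypothetical spectral parameter with theta_k >= 1; this is NOT
    the paper's formula (14), only a stand-in for the check."""
    y = g_new - g_old
    beta_dy = (g_new @ g_new) / (d_old @ y)
    return 1.0 + max(0.0, beta_dy * (g_new @ d_old) / (g_new @ g_new))

A = np.diag(rng.uniform(1.0, 10.0, 5))   # f(x) = 0.5 x^T A x, g(x) = A x
x = rng.standard_normal(5)
g = A @ x
d = -g
for _ in range(20):
    alpha = -(g @ d) / (d @ A @ d)       # exact minimizer along d
    x = x + alpha * d
    g_new = A @ x
    if np.linalg.norm(g_new) < 1e-8:     # converged; avoid 0/0 below
        break
    t = theta_stub(g_new, g, d)
    beta_dy = (g_new @ g_new) / (d @ (g_new - g))
    d = -t * g_new + beta_dy * d         # direction (13)
    assert g_new @ d <= -(g_new @ g_new) + 1e-10   # sufficient descent (16)
    g = g_new
```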

This lemma gives the fact that the direction $d_k$ is a descent direction. Besides, $d_k$ possesses the following property.

Lemma 2. Suppose that the SDYCG algorithm is implemented with a step size that satisfies the strong Wolfe conditions (6) and (15). If $g_k \ne 0$ for all $k \ge 0$, then
$$0 < \beta_k^{DY} \le \frac{g_k^T d_k}{g_{k-1}^T d_{k-1}} \quad \text{for all } k \ge 1. \tag{20}$$

Proof. By (14) and Lemma 1, we have
$$\theta_k \ge 1. \tag{21}$$
With (15), (21), and Lemma 1, we get
$$d_{k-1}^T y_{k-1} \ge (1 - \sigma)(-g_{k-1}^T d_{k-1}) > 0,$$
so $\beta_k^{DY} > 0$; moreover, since $\|g_k\|^2 = \beta_k^{DY} d_{k-1}^T y_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$,
$$-g_k^T d_k = \theta_k \|g_k\|^2 - \beta_k^{DY} g_k^T d_{k-1} \ge \|g_k\|^2 - \beta_k^{DY} g_k^T d_{k-1} = \beta_k^{DY} (-g_{k-1}^T d_{k-1}). \tag{22}$$
Therefore, inequality (20) follows. This completes the proof.

Inequality (20) gives a close relationship between the inner products of gradient and direction at two adjacent iterations. It will play an important role in our global convergence analysis.

3. Convergence Analysis

Dai and Yuan stated in [5] that the following result had been essentially proved by Zoutendijk and Wolfe.

Lemma 3. Suppose that the function $f$ has the properties (P1) and (P2). Assume that $d_k$ is a descent direction and $\alpha_k$ is obtained by the Wolfe conditions (6) and (7). Then
$$\sum_{k=0}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \tag{23}$$
One customarily calls (23) the Zoutendijk condition.

Theorem 4. Suppose that the function $f$ has the properties (P1) and (P2), and let the sequences $\{x_k\}$, $\{d_k\}$, and $\{g_k\}$ be generated by the SDYCG algorithm. Then either $g_k = 0$ for some $k$ or
$$\liminf_{k \to \infty} \|g_k\| = 0. \tag{24}$$

Proof. Suppose that $g_k \ne 0$ for all $k$ and that (24) is not true. Then there exists a constant $\gamma > 0$ such that
$$\|g_k\| \ge \gamma \tag{25}$$
for all of the iterations.
The second equality of (13) implies
$$d_k + \theta_k g_k = \beta_k^{DY} d_{k-1}; \tag{26}$$
squaring both sides, we get
$$\|d_k\|^2 = (\beta_k^{DY})^2 \|d_{k-1}\|^2 - 2 \theta_k g_k^T d_k - \theta_k^2 \|g_k\|^2. \tag{27}$$
Dividing both sides by $(g_k^T d_k)^2$, we have
$$\frac{\|d_k\|^2}{(g_k^T d_k)^2} = \frac{(\beta_k^{DY})^2 \|d_{k-1}\|^2}{(g_k^T d_k)^2} - \left( \frac{\theta_k \|g_k\|}{g_k^T d_k} + \frac{1}{\|g_k\|} \right)^2 + \frac{1}{\|g_k\|^2} \le \frac{(\beta_k^{DY})^2 \|d_{k-1}\|^2}{(g_k^T d_k)^2} + \frac{1}{\|g_k\|^2}. \tag{28}$$
Combining this with (20) in Lemma 2, we see the inequality
$$\frac{\|d_k\|^2}{(g_k^T d_k)^2} \le \frac{\|d_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2} + \frac{1}{\|g_k\|^2}. \tag{29}$$
Summing both sides and noting that $d_0 = -g_0$, we obtain
$$\frac{\|d_k\|^2}{(g_k^T d_k)^2} \le \sum_{i=0}^{k} \frac{1}{\|g_i\|^2}. \tag{30}$$
So, from (25) and (30), we get
$$\frac{\|d_k\|^2}{(g_k^T d_k)^2} \le \frac{k+1}{\gamma^2}. \tag{31}$$
This relation is equivalent to
$$\frac{(g_k^T d_k)^2}{\|d_k\|^2} \ge \frac{\gamma^2}{k+1}. \tag{32}$$
Summing over $k$, we obtain
$$\sum_{k=0}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} \ge \gamma^2 \sum_{k=0}^{\infty} \frac{1}{k+1} = +\infty. \tag{33}$$
From the SDYCG algorithm, the step size satisfies the strong Wolfe conditions, so the Wolfe condition (7) holds; moreover, the directions generated by the algorithm are descent directions by Lemma 1. But (33) contradicts the Zoutendijk condition (23). Hence, our original assertion (25) must be false, so either $g_k = 0$ for some $k$ or (24) holds.

4. Numerical Results

In order to test the numerical performance of the SDYCG algorithm, we choose some unconstrained problems, together with their initial points, from the CUTEr library [12, 13]. They are listed in Table 1.

The experiments are run on a personal computer with a 64-bit processor, a 2.5 GHz CPU, and 4 GB of RAM. All the codes are written and run in MATLAB.

We compare the SDYCG with the CG-DESCENT, a conjugate gradient algorithm with guaranteed descent proposed by Hager and Zhang in [14], which has proved to be an excellent algorithm in recent years.

To make the comparison as fair as possible, we use the same termination criterion and impose the same upper bound on the number of iterations in both algorithms. All the step sizes satisfy the strong Wolfe conditions (6) and (15).

We use the performance profiles proposed by Dolan and Moré [15] to show the efficiency of the comparisons. Performance profiles can be used as a tool for benchmarking and comparing optimization software. The performance profile for a solver is the (cumulative) distribution function for a performance metric. For example, if computing time is chosen as the metric, then we compute the ratio of the computing time of each solver versus the best time of all of the solvers. That is, for each method, we plot the fraction ($y$-axis) of problems for which the method is within a factor $\tau$ ($x$-axis) of the best time. A solver whose curve lies above the others has the highest probability of being the optimal solver. We use a logarithmic scale for $\tau$ to capture the performance of all the solvers.
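A compact sketch of such a profile plot, using a helper of our own design in which failures are marked by `np.inf`:

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(times, labels):
    """Dolan-More performance profiles.

    times  : array of shape (n_problems, n_solvers); np.inf marks a failure.
    labels : one name per solver, e.g. ["SDYCG", "CG-DESCENT"].
    """
    ratios = times / times.min(axis=1, keepdims=True)    # ratio to best solver
    finite = ratios[np.isfinite(ratios)]
    taus = np.logspace(0, np.log10(finite.max()), 200)   # log scale for tau
    for j, label in enumerate(labels):
        frac = [(ratios[:, j] <= t).mean() for t in taus]  # fraction within tau
        plt.semilogx(taus, frac, label=label)
    plt.xlabel("performance ratio tau")
    plt.ylabel("fraction of problems")
    plt.legend()
    plt.show()
```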

In order to observe the numerical behaviour of the SDYCG and the CG-DESCENT, we test each function in three different dimensions. For the results obtained in each dimension, we plot two figures, based on CPU time and on the number of iterations, respectively.

We can see from Figure 1 that the SDYCG behaves similarly to the CG-DESCENT in the first test dimension, since their curves cross each other. The predominance of the SDYCG appears in Figure 2 for the second test dimension. For the third test dimension, the SDYCG performs better than the CG-DESCENT, as its curve lies almost completely above that of the CG-DESCENT in Figure 3.

Furthermore, we are interested in the robustness of the SDYCG algorithm. Ten problems are selected from CUTEr for this test. The numerical results listed in Table 2 are obtained by changing the initial iteration point each time; the standard initial iteration point of each problem is marked in the table, and "iter." and "time(s)" indicate the number of iterations and the running time in seconds.

The conclusion that can be drawn is that the SDYCG is a robust algorithm which may be capable of solving large-scale nonlinear unconstrained optimization problems.

5. Conclusions

We propose a new spectral conjugate gradient method for nonlinear unconstrained optimization. This method, which we call the SDYCG, is built on the Dai-Yuan conjugate gradient method, and a new spectral parameter is provided in the search direction. Numerical results show that the SDYCG is comparable with the CG-DESCENT and may be capable of solving large-scale nonlinear unconstrained optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11471159), the Natural Science Foundation of Jiangsu Province (BK20141409), and the Foundation of Education Department of Anhui Province (2014jyxm161).