Abstract

Recently, the sufficient descent property has played an important role in the global convergence analysis of iterative methods. In this paper, we propose a new iterative method for solving unconstrained optimization problems. The method provides a sufficient descent direction for the objective function. Moreover, the global convergence of the proposed method is established under appropriate conditions. We also report numerical results and compare the performance of the proposed method with that of some existing methods. The numerical results indicate that the presented method is efficient.

1. Introduction

Consider the unconstrained optimization problem
\[ \min_{x \in \mathbb{R}^n} f(x), \tag{1} \]
where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. For solving (1), the following iterative formula is often used:
\[ x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, \tag{2} \]
where $x_k$ is the current iterate, $\alpha_k > 0$ is a step size determined by some line search, and $d_k$ is a search direction. Different search directions correspond to different iterative methods [1–4]. Throughout this paper, $g_k$ denotes the gradient $\nabla f(x_k)$, $x \in \mathbb{R}^n$ is an $n$-dimensional column vector, and $\|\cdot\|$ and $(\cdot)^T$ denote the Euclidean norm and the transpose of a vector, respectively. Generally, if there exists a positive constant $c$ such that
\[ g_k^T d_k \le -c\,\|g_k\|^2, \tag{3} \]
then the search direction $d_k$ possesses the sufficient descent property. This property may be crucial for the global convergence of iterative methods [5], and numerical experiments have shown that sufficient descent methods are efficient [6]. However, not all iterative methods satisfy the sufficient descent condition (3) under inexact line search conditions; examples include the conjugate gradient method proposed by Wei et al. [7] and the gradient method presented in [8]. In order to make the search direction satisfy condition (3) at each step, much effort has been devoted to this topic [9–12].
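To fix ideas, the iteration (2) can be illustrated by the following minimal MATLAB sketch, which uses the steepest descent direction $d_k = -g_k$ (satisfying (3) with $c = 1$) and a backtracking Armijo-type line search. The quadratic test function and all parameter values here are illustrative choices of ours, not part of the paper:

% Minimal illustration of iteration (2) with d_k = -g_k, which
% satisfies condition (3) with c = 1. The test function and all
% parameter values are illustrative choices only.
f    = @(x) 0.5*(x(1)^2 + 10*x(2)^2);   % test objective
grad = @(x) [x(1); 10*x(2)];            % its gradient
x = [1; 1];                             % initial point
for k = 1:100
    g = grad(x);
    if norm(g) <= 1e-6, break; end      % stopping rule
    d = -g;                             % steepest descent direction
    alpha = 1;                          % backtracking Armijo-type line search
    while f(x + alpha*d) > f(x) + 1e-4*alpha*(g'*d)
        alpha = 0.5*alpha;
    end
    x = x + alpha*d;                    % iteration (2)
end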

In [9], Cheng proposed a modified PRP conjugate gradient method in which the search direction is determined by
\[ d_0 = -g_0, \qquad d_k = -g_k + \beta_k^{PRP}\left(I - \frac{g_k g_k^T}{\|g_k\|^2}\right) d_{k-1}, \quad k \ge 1, \tag{4} \]
where $\beta_k^{PRP} = g_k^T y_{k-1}/\|g_{k-1}\|^2$ with $y_{k-1} = g_k - g_{k-1}$, $g_k g_k^T/\|g_k\|^2$ is an $n \times n$ matrix, and $I$ is an identity matrix.

In [10], Zhang et al. derived a simple sufficient descent method; the search direction is given by
\[ d_0 = -g_0, \qquad d_k = -\left(1 + \beta_k \frac{g_k^T d_{k-1}}{\|g_k\|^2}\right) g_k + \beta_k d_{k-1}, \quad k \ge 1, \tag{5} \]
where $\beta_k$ is an arbitrary scalar.

Recently, Zhang et al. [11] presented a three-term modified PRP conjugate gradient method; the search direction is generated by
\[ d_0 = -g_0, \qquad d_k = -g_k + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}, \quad k \ge 1, \tag{6} \]
where $\theta_k = g_k^T d_{k-1}/\|g_{k-1}\|^2$.

We note that (4), (5), and (6) can each be written as a linear combination of the steepest descent direction and the projection of an original direction; that is,
\[ d_0 = -g_0, \tag{7} \]
\[ d_k = -g_k + \beta_k\left(p_k - \frac{g_k^T p_k}{g_k^T u_k}\,u_k\right), \quad k \ge 1, \tag{8} \]
where $p_k$ is an original direction, $\beta_k$ is a scalar, and $u_k$ is any vector such that $g_k^T u_k \ne 0$ holds. Indeed, if $\beta_k = \beta_k^{PRP}$, $p_k = d_{k-1}$, and $u_k = g_k$, then (8) reduces to the method (4). Let $\beta_k$ be an arbitrary scalar, $p_k = d_{k-1}$, and $u_k = g_k$; then (8) reduces to the method (5). When $\beta_k = \beta_k^{PRP}$, $p_k = d_{k-1}$, and $u_k = y_{k-1}$, it is easy to deduce that (8) reduces to the method (6). From (8), we can easily obtain
\[ g_k^T d_k = -\|g_k\|^2 + \beta_k\left(g_k^T p_k - \frac{g_k^T p_k}{g_k^T u_k}\,g_k^T u_k\right) = -\|g_k\|^2. \tag{9} \]
Thus, one has $g_k^T d_k = -\|g_k\|^2$ for all $k \ge 0$, which implies that the sufficient descent condition (3) holds with $c = 1$. However, the method (5) does not possess a restart feature, which could help avoid the jamming phenomenon. In addition, the methods (4) and (6) may not always be globally convergent under some inexact line searches [13], such as the standard Armijo-type line search, which is given as follows: find the step size $\alpha_k = \max\{\rho^j : j = 0, 1, 2, \ldots\}$ satisfying
\[ f(x_k + \alpha_k d_k) \le f(x_k) + \sigma\alpha_k g_k^T d_k, \tag{10} \]
where $\rho \in (0, 1)$ and $\sigma \in (0, 1)$.
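The identity (9) is easy to verify numerically. The following MATLAB sketch builds the unified direction (8), as reconstructed above, from arbitrary vectors and checks that $g_k^T d_k = -\|g_k\|^2$; all data here are random illustrative values:

% Numerical check of identity (9) for the unified direction (8),
% as reconstructed above. The vectors g, p, u and the scalar beta
% are arbitrary illustrative values.
n = 5;
g = randn(n,1); p = randn(n,1); u = randn(n,1); beta = 0.7;
if abs(g'*u) > eps                           % require g'*u ~= 0
    d = -g + beta*(p - ((g'*p)/(g'*u))*u);   % direction (8)
    fprintf('g''*d = %.3e, -norm(g)^2 = %.3e\n', g'*d, -norm(g)^2);
end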

Motivated by (8) and (9), our purpose is to design a direction in the subspace $\operatorname{span}\{g_k, p_k, u_k\}$ of the form
\[ d_k = -t\,g_k + \beta_k\left(p_k - \frac{g_k^T p_k}{g_k^T u_k}\,u_k\right), \tag{11} \]
where $t > 0$ is a parameter. This direction can be written as
\[ d_k = -t\,g_k + \beta_k\left(I - \frac{u_k g_k^T}{g_k^T u_k}\right) p_k, \tag{12} \]
where $u_k$ is any vector such that $g_k^T u_k \ne 0$ holds. Recall $y_{k-1} = g_k - g_{k-1}$ and let $s_{k-1} = x_k - x_{k-1}$. It is clear that (8) can be regarded as a special case of (12) with $t = 1$. Therefore, (12) will have a wider application than (8). If we take $t = 1$, $\beta_k = g_k^T y_{k-1}/\|g_k\|^2$, $p_k = y_{k-1}$, $u_k = g_k$, and $d_0 = -g_0$ in (12), then a new search direction is given as follows:
\[ d_k = \begin{cases} -g_k, & k = 0, \\[4pt] -g_k + \beta_k\left(y_{k-1} - \dfrac{g_k^T y_{k-1}}{\|g_k\|^2}\,g_k\right), & k \ge 1, \end{cases} \tag{13} \]
where
\[ \beta_k = \frac{g_k^T y_{k-1}}{\|g_k\|^2}. \tag{14} \]

In this paper, we present a new iterative method for unconstrained optimization problems whose search direction is defined by (13) and (14). We prove that $d_k$ satisfies $g_k^T d_k = -\|g_k\|^2$ without any line search, which means that the sufficient descent condition (3) holds with $c = 1$. Furthermore, we prove that the proposed method is globally convergent under the standard Armijo-type line search or a modified Armijo-type line search. From (13) and (14), we can see that the proposed method has a restart feature that directly addresses the jamming problem. In fact, when the step $s_{k-1}$ is small, the factor $y_{k-1}$ tends to the zero vector by the continuity of the gradient, and therefore the direction generated by (13) is very close to the steepest descent direction $-g_k$.
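For concreteness, the direction (13)–(14), as reconstructed here (the exact formula in the original paper may differ), can be computed by the following MATLAB function. The file name new_direction.m and the assumption $g_k \ne 0$ are ours:

% Sketch of the search direction (13)-(14) as reconstructed above;
% save as new_direction.m. Assumes norm(g) > 0.
% g  : gradient at the current point x_k
% gp : gradient at the previous point x_{k-1}
function d = new_direction(g, gp)
    y = g - gp;                                  % y_{k-1}
    beta = (g'*y)/norm(g)^2;                     % beta_k in (14)
    d = -g + beta*(y - ((g'*y)/norm(g)^2)*g);    % direction (13)
end

By construction, g'*d equals -norm(g)^2 up to rounding error, and d reduces to -g whenever the two successive gradients coincide.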

The rest of this paper is organized as follows. In Section 2, we propose the new algorithm and discuss its sufficient descent property. In Section 3, the global convergence of the proposed method is proved under the modified Armijo-type line search or the standard Armijo-type line search. In Section 4, numerical results are given to test the performance of the proposed method. Finally, conclusions are presented in Section 5.

2. New Algorithm

In this section, the specific iterative steps of the proposed algorithm are listed as follows.

Algorithm 1. Consider the following.
Step 1. Choose parameters $\rho \in (0, 1)$, $\delta > 0$, and $\epsilon \ge 0$; given an initial point $x_0 \in \mathbb{R}^n$. Set $d_0 = -g_0$ and $k := 0$.
Step 2. If $\|g_k\| \le \epsilon$, then stop; otherwise go to the next step.
Step 3. Determine a step size $\alpha_k = \max\{\rho^j : j = 0, 1, 2, \ldots\}$ satisfying the modified Armijo-type line search condition
\[ f(x_k + \alpha_k d_k) \le f(x_k) - \delta\alpha_k^2\|d_k\|^2. \tag{15} \]
Step 4. Let $x_{k+1} = x_k + \alpha_k d_k$.
Step 5. Calculate the search direction $d_{k+1}$ by (13) and (14).
Step 6. Set $k := k + 1$, and go to Step 2.
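Under the reconstructions above, Algorithm 1 admits the following compact MATLAB sketch. The objective f, its gradient grad, and all parameter values are illustrative assumptions of ours, not the paper's experimental setup:

% Sketch of Algorithm 1 with the reconstructed direction (13)-(14)
% and line search (15). f, grad, and all parameters are illustrative.
f    = @(x) 0.5*(x(1)^2 + 10*x(2)^2);
grad = @(x) [x(1); 10*x(2)];
rho = 0.5; delta = 1e-3; epsilon = 1e-6;            % Step 1: parameters
x = [1; 1]; g = grad(x); d = -g; k = 0;             % Step 1: d_0 = -g_0
while norm(g) > epsilon                              % Step 2: stopping test
    fx = f(x);
    alpha = 1;                                       % Step 3: line search (15)
    while f(x + alpha*d) > fx - delta*alpha^2*norm(d)^2
        alpha = rho*alpha;
    end
    x = x + alpha*d;                                 % Step 4: iteration (2)
    gp = g; g = grad(x);
    if norm(g) > epsilon                             % Step 5: direction (13)-(14)
        y = g - gp;
        beta = (g'*y)/norm(g)^2;
        d = -g + beta*(y - ((g'*y)/norm(g)^2)*g);
    end
    k = k + 1;                                       % Step 6
end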

Theorem 2. Let the sequences $\{d_k\}$ and $\{x_k\}$ be generated by (13) and (2); then
\[ g_k^T d_k = -\|g_k\|^2 \tag{16} \]
for all $k \ge 0$.

Proof. Obviously, the conclusion is true for $k = 0$, since $d_0 = -g_0$.
If $k \ge 1$, multiplying (13) by $g_k^T$, we have
\[ g_k^T d_k = -\|g_k\|^2 + \beta_k\left(g_k^T y_{k-1} - \frac{g_k^T y_{k-1}}{\|g_k\|^2}\,g_k^T g_k\right) = -\|g_k\|^2. \tag{17} \]
Therefore, (16) holds for all $k \ge 0$. The proof is completed.

Theorem 2 shows that the search direction given by (13) possesses the sufficient descent property (3) with $c = 1$ under any line search.
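As a quick sanity check of Theorem 2, the identity (16) can be verified numerically with the new_direction sketch given in Section 1 (assuming new_direction.m is on the MATLAB path; the random data are illustrative only):

% Numerical sanity check of identity (16), using the new_direction
% sketch above with illustrative random data.
g  = randn(4,1);                 % gradient at the current point
gp = randn(4,1);                 % gradient at the previous point
d  = new_direction(g, gp);       % direction (13)-(14)
fprintf('g''*d + norm(g)^2 = %.3e\n', g'*d + norm(g)^2);   % should be ~0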

3. Convergence Analysis

The following assumptions are often needed to prove the global convergence of nonlinear conjugate gradient methods [14, 15]. In this section, we also use these assumptions in the convergence analysis of the proposed method.

Assumption 3. Consider the following.
(i) The level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded.
(ii) In a neighborhood $N$ of $\Omega$, the function $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
\[ \|g(x) - g(y)\| \le L\|x - y\|, \quad \forall x, y \in N. \tag{18} \]

Lemma 4. Suppose that Assumption 3 holds. Let $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm 1. If the step size is obtained by (15) or (10), then there exists a constant $c_1 > 0$ such that
\[ \alpha_k \ge c_1\,\frac{\|g_k\|^2}{\|d_k\|^2}, \tag{19} \]
and one can also have
\[ \sum_{k=0}^{\infty}\frac{\|g_k\|^4}{\|d_k\|^2} < \infty. \tag{20} \]

Proof. The results of this lemma will be proved in the following two cases.
Case 1. Let the step size be computed by (15). From Theorem 2, we have $g_k^T d_k = -\|g_k\|^2 < 0$; thus, by the Cauchy–Schwarz inequality, $\|g_k\| \le \|d_k\|$. If $\alpha_k = 1$, then we obtain $\alpha_k \ge \|g_k\|^2/\|d_k\|^2$, so (19) holds for any $c_1 \le 1$. If $\alpha_k < 1$, then we know that $\rho^{-1}\alpha_k$ does not satisfy the inequality (15). So we have
\[ f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) > -\delta(\rho^{-1}\alpha_k)^2\|d_k\|^2. \tag{21} \]
By Assumption 3(ii) and the mean value theorem, we have
\[ f(x_k + \rho^{-1}\alpha_k d_k) - f(x_k) = \rho^{-1}\alpha_k\, g(\xi_k)^T d_k \le \rho^{-1}\alpha_k\, g_k^T d_k + L(\rho^{-1}\alpha_k)^2\|d_k\|^2, \tag{22} \]
where $\xi_k = x_k + \theta\rho^{-1}\alpha_k d_k$ for some $\theta \in (0, 1)$.
From (21) and (22), we have
\[ -g_k^T d_k < (L + \delta)\,\rho^{-1}\alpha_k\|d_k\|^2. \tag{23} \]
Using Theorem 2 again, we get
\[ \alpha_k > \frac{\rho}{L + \delta}\cdot\frac{\|g_k\|^2}{\|d_k\|^2}. \tag{24} \]
Let $c_1 = \min\{1, \rho/(L + \delta)\}$; then the inequality (19) is obtained.
From Assumption 3(i), there exists a constant $f^*$ such that $f(x_k) \ge f^*$ for all $k \ge 0$. By (15), (19), and Theorem 2, we have
\[ \delta c_1^2\sum_{k=0}^{\infty}\frac{\|g_k\|^4}{\|d_k\|^2} \le \delta\sum_{k=0}^{\infty}\alpha_k^2\|d_k\|^2 \le \sum_{k=0}^{\infty}\bigl(f(x_k) - f(x_{k+1})\bigr) \le f(x_0) - f^* < \infty. \tag{25} \]
Therefore, from the above inequality, we have (20).
Case 2. Let the step size be computed by (10). Similar to the proof of the above case, we can obtain
\[ \alpha_k > \frac{\rho(1 - \sigma)}{L}\cdot\frac{\|g_k\|^2}{\|d_k\|^2} \quad \text{whenever } \alpha_k < 1. \tag{26} \]
Let $c_1 = \min\{1, \rho(1 - \sigma)/L\}$; then the inequality (19) is obtained. From (10), (19), and Theorem 2, we obtain
\[ f(x_k) - f(x_{k+1}) \ge -\sigma\alpha_k g_k^T d_k = \sigma\alpha_k\|g_k\|^2 \ge \sigma c_1\frac{\|g_k\|^4}{\|d_k\|^2}. \tag{27} \]
Summing the above inequality over $k$ yields
\[ \sigma c_1\sum_{k=0}^{\infty}\frac{\|g_k\|^4}{\|d_k\|^2} \le f(x_0) - f^* < \infty, \tag{28} \]
so we can get (20). The proof is completed.

Theorem 5. Suppose that Assumption 3 holds. If Algorithm 1 generates infinite sequences $\{x_k\}$ and $\{d_k\}$, then one has
\[ \liminf_{k \to \infty}\|g_k\| = 0. \tag{29} \]

Proof. We prove the conclusion (29) by contradiction. Suppose that (29) does not hold; then there exists a positive constant $\epsilon_0$ such that $\|g_k\| \ge \epsilon_0$ for all $k \ge 0$. From Assumption 3(i), we know that there also exists a positive constant $\gamma$ such that $\|g_k\| \le \gamma$ for all $k \ge 0$. Since $\|g_k\| \ge \epsilon_0$, from (13) and (14) we have
\[ \|d_k\| \le \|g_k\| + |\beta_k|\left(\|y_{k-1}\| + \frac{|g_k^T y_{k-1}|}{\|g_k\|^2}\,\|g_k\|\right) \le \|g_k\| + \frac{2\|y_{k-1}\|^2}{\|g_k\|} \le \gamma + \frac{8\gamma^2}{\epsilon_0} =: M. \tag{30} \]
The above inequality implies
\[ \sum_{k=0}^{\infty}\frac{\|g_k\|^4}{\|d_k\|^2} \ge \sum_{k=0}^{\infty}\frac{\epsilon_0^4}{M^2} = \infty, \tag{31} \]
which contradicts (20). This completes the proof.

Remark 6. If the search direction is defined by (13) with $y_{k-1}$ replaced by $s_{k-1}$ and, correspondingly, $\beta_k = g_k^T s_{k-1}/\|g_k\|^2$ in (14), then the sufficient descent property and the global convergence can also be proved similarly to the proofs of Theorems 2 and 5.

4. Numerical Results

In this section, some numerical results are provided to test the performance of the proposed method, which is compared with the existing methods [9–11]. For simplicity, the proposed method and the comparative methods are denoted by NSDM, LPRP [11], SSD [10], and MPRP [9], respectively. The test problems and initial points are from [16]; the test problems are listed in Table 1. In our experiments, all codes were written in MATLAB 7.0 and run on a PC with 2.00 GB of RAM, a 2.10 GHz CPU, and the Windows 7 operating system.

In all algorithms, the step size is computed by the modified Armijo-type line search (15), with the same values of the parameters $\rho$ and $\delta$ used for every method, and the stopping condition is $\|g_k\| \le \epsilon$. We also stop an algorithm if its CPU time exceeds 500 s.

In Table 2, P, N, NI, NF, NG, and CPU stand for the number of the test problem, the dimension of the problem, the number of iterations, the number of function evaluations, the number of gradient evaluations, and the CPU run time in seconds, respectively. The symbol "—" means that the corresponding method fails to solve the test problem within 500 seconds of CPU time, and the star $*$ denotes that the numerical result is the best one among all the comparative methods.

In Table 2, we compare the performance of the new method on 28 different problems. According to the distribution of the star $*$, one can see that the NSDM method performs better than the LPRP, MPRP, and SSD methods on 14 test problems, worse than the MPRP method on 1 test problem, and worse than the LPRP method on 6 test problems. However, there also exist 7 test problems that are not marked by the symbol $*$. Among these 7 test problems, the NSDM method performs better than the other methods on 5 test problems in the number of iterations, on 4 test problems in the number of function evaluations, on 5 test problems in the number of gradient evaluations, and on 1 test problem in CPU time.

In order to compare the performance of these methods more clearly, we adopt the performance profiles introduced by Dolan and Moré [17]. The performance results are shown in Figures 1–4. In [17], Dolan and Moré introduced this notion as a means to evaluate and compare the performance of a set of solvers $S$ on a test set $P$. Assuming that $n_s$ solvers and $n_p$ problems exist, for each problem $p$ and solver $s$, they defined $t_{p,s}$ to be the computing cost (for example, the number of iterations or the CPU time) required for solver $s$ to solve problem $p$.

The performance ratio is given by
\[ r_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in S\}}. \tag{32} \]
Assume that a parameter $r_M \ge r_{p,s}$ for all $p$, $s$ is chosen, and $r_{p,s} = r_M$ if and only if solver $s$ does not solve problem $p$. The performance profile is defined by
\[ \rho_s(\tau) = \frac{1}{n_p}\,\operatorname{size}\{p \in P : r_{p,s} \le \tau\}. \tag{33} \]
Hence, $\rho_s(\tau)$ is the probability for solver $s \in S$ that the performance ratio $r_{p,s}$ is within a factor $\tau \in \mathbb{R}$ of the best possible ratio. The performance profile $\rho_s : \mathbb{R} \to [0, 1]$ for a solver is nondecreasing, piecewise constant, and continuous from the right. The value of $\rho_s(1)$ is the probability that the solver will win over the rest of the solvers. In general, a solver with high values of $\rho_s(\tau)$, or one whose curve lies at the top right of the figure, is preferable and represents the best solver.
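A performance profile of this kind is straightforward to compute. The following MATLAB sketch is ours: the function name perf_profile, the cost matrix T, and the grid tau are assumptions, and each problem is assumed to be solved by at least one solver.

% Sketch of the Dolan-More performance profile computation [17].
% T   : np-by-ns matrix of costs t_{p,s} (e.g., iteration counts),
%       with Inf marking a failure of solver s on problem p.
% tau : vector of ratio values at which to evaluate rho_s.
function P = perf_profile(T, tau)
    [np, ns] = size(T);
    R = T ./ repmat(min(T, [], 2), 1, ns);       % ratios r_{p,s} in (32)
    P = zeros(numel(tau), ns);
    for s = 1:ns
        for i = 1:numel(tau)
            P(i, s) = sum(R(:, s) <= tau(i)) / np;   % rho_s(tau) in (33)
        end
    end
end

Plotting each column of P against tau reproduces curves of the type shown in Figures 1–4.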

From Figures 1–4, we can see that the NSDM method performs better than the MPRP method and the SSD method. Although the LPRP method outperforms the NSDM method on part of the range of $\tau$ in each of Figures 1–4, the NSDM method is superior to the LPRP method on the remaining interval. Moreover, from Figures 1–4, we can see that the NSDM method solves 100% of the test problems, while the LPRP method solves about 96% of them. Hence, the NSDM method is superior to the LPRP method. By comparing the values of $\rho_s(1)$ in Figures 1–4, one can conclude that the NSDM method is competitive with the others; for example, the NSDM method attains the best number of iterations on at least 45% of the test problems. In a word, the analysis of the numerical results shows that the presented method performs much better than the LPRP, MPRP, and SSD methods.

5. Conclusions

In this paper, we have proposed a new formula (11) that can generate different search directions by taking different parameters. Based on this formula, we have proposed a new sufficient descent method for solving unconstrained optimization problems. At each iteration, the generated direction depends only on the gradient information of two successive points. We have shown that this method is globally convergent. The numerical results indicate that the given method is superior to the comparative methods on the test problems. In the future, we will study further iterative methods derived from (11) and carry out new convergence analyses for them.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the editor and the anonymous referees for their valuable comments and suggestions, which improved this paper greatly. This work is partly supported by the National Natural Science Foundation of China (11371071), the Natural Science Foundation of Liaoning Province (20102003), the Scientific Research Foundation of the Liaoning Province Educational Department (L2013426), and the Graduate Innovation Foundation of Bohai University (201208).