Abstract and Applied Analysis
Volume 2013 (2013), Article ID 742815, 5 pages
http://dx.doi.org/10.1155/2013/742815
Research Article

Nonlinear Conjugate Gradient Methods with Wolfe Type Line Search

Yuan-Yuan Chen and Shou-Qiang Du

College of Mathematics, Qingdao University, Qingdao 266071, China

Received 20 January 2013; Accepted 6 February 2013

Academic Editor: Yisheng Song

Copyright © 2013 Yuan-Yuan Chen and Shou-Qiang Du. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The nonlinear conjugate gradient method is one of the most useful methods for unconstrained optimization problems. In this paper, we consider three kinds of nonlinear conjugate gradient methods with a Wolfe type line search for unconstrained optimization problems. Under some mild assumptions, global convergence results for the given methods are established. The numerical results show that the nonlinear conjugate gradient methods with Wolfe type line search are efficient for some unconstrained optimization problems.

1. Introduction

In this paper, we focus our attention on the global convergence of nonlinear conjugate gradient methods with a Wolfe type line search. We consider the following unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^n} f(x). \quad (1)$$
In (1), $f: \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function, and its gradient is denoted by $g(x) = \nabla f(x)$; we write $g_k = g(x_k)$. Of course, iterative methods are often used for (1). The iterative formula is given by
$$x_{k+1} = x_k + \alpha_k d_k, \quad (2)$$
where $x_k$ and $x_{k+1}$ are the $k$th and $(k+1)$th iterates, $\alpha_k$ is a step size, and $d_k$ is a search direction. In the following, we define the search direction by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1. \end{cases} \quad (3)$$
In (3), $\beta_k$ is a conjugate gradient scalar, and the well-known useful formulas are $\beta_k^{FR}$, $\beta_k^{PRP}$, $\beta_k^{HS}$, and $\beta_k^{DY}$ (see [1–6]). Recently, some new kinds of nonlinear conjugate gradient methods have been given in [7–11]. Based on these methods, we give some new kinds of nonlinear conjugate gradient methods and analyze their global convergence under a Wolfe type line search.
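For concreteness, the four classical scalars just mentioned can be written as short Python helpers. This is a minimal sketch: the function and argument names (g, g_old, d_old for $g_k$, $g_{k-1}$, $d_{k-1}$) are our own and are not taken from the paper.

```python
# Classical conjugate gradient scalars. All arguments are 1-D NumPy arrays;
# d_old is unused in the first two rules but kept for a uniform signature.

def beta_fr(g, g_old, d_old):
    # Fletcher-Reeves: ||g_k||^2 / ||g_{k-1}||^2
    return g.dot(g) / g_old.dot(g_old)

def beta_prp(g, g_old, d_old):
    # Polak-Ribiere-Polyak: g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
    return g.dot(g - g_old) / g_old.dot(g_old)

def beta_hs(g, g_old, d_old):
    # Hestenes-Stiefel: g_k^T y_{k-1} / d_{k-1}^T y_{k-1}, with y_{k-1} = g_k - g_{k-1}
    y = g - g_old
    return g.dot(y) / d_old.dot(y)

def beta_dy(g, g_old, d_old):
    # Dai-Yuan: ||g_k||^2 / d_{k-1}^T y_{k-1}
    y = g - g_old
    return g.dot(g) / d_old.dot(y)
```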

The rest of the paper is organized as follows. In Section 2, we give the methods and the global convergence results for them. In the last section, numerical results and some discussions are given.

2. The Methods and Their Global Convergence Results

Firstly, we give the Wolfe type line search, which will be used in our new nonlinear conjugate gradient methods. In the remainder of this paper, $\|\cdot\|$ stands for the 2-norm.

A Wolfe type line search of this kind was used in [12].

The line search computes $\alpha_k$ such that
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \quad (4)$$
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \quad (5)$$
where $0 < \delta < \sigma < 1$.
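As a generic illustration of how a step size satisfying conditions of the form (4) and (5) can be found, the following Python sketch uses simple bracketing and bisection. It assumes the standard weak Wolfe conditions written above; the function name and the parameters delta and sigma are illustrative and are not taken from [12].

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, delta=1e-4, sigma=0.9, max_iter=50):
    """Return a step size alpha for which
         f(x + alpha*d) <= f(x) + delta*alpha*grad(x).d   (sufficient decrease, cf. (4))
         grad(x + alpha*d).d >= sigma*grad(x).d           (curvature, cf. (5))
       Bracketing/bisection sketch; assumes d is a descent direction and
       0 < delta < sigma < 1."""
    fx = f(x)
    gxd = grad(x).dot(d)
    lo, hi, alpha = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + delta * alpha * gxd:
            hi = alpha                              # sufficient decrease fails: shrink
        elif grad(x + alpha * d).dot(d) < sigma * gxd:
            lo = alpha                              # curvature fails: enlarge
        else:
            return alpha                            # both conditions hold
        alpha = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha                                    # fallback after max_iter trials
```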

Now, we present the nonlinear conjugate gradient methods as follows.

Algorithm 1. We have the following steps.

Step 0. Given $x_0 \in \mathbb{R}^n$, set $d_0 = -g_0$ and $k = 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k$ satisfying (4) and (5), and compute $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $d_{k+1}$ by formula (6). Set $k := k+1$, and go to Step 1.
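Formula (6) is specific to this paper and is not reproduced here; the following Python sketch of the overall loop therefore plugs in the classical PRP scalar (beta_prp above) purely as a stand-in for Step 2, reuses wolfe_line_search from the previous sketch, and adds a restart safeguard that is not part of Algorithm 1.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule=beta_prp, tol=1e-6, max_iter=1000):
    """Generic nonlinear conjugate gradient loop in the pattern of Algorithm 1.
       beta_rule is a stand-in for the paper's formula (6); any of the beta_*
       helpers above can be supplied instead."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # Step 0: d_0 = -g_0
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:              # stopping test on ||g_k||
            break
        alpha = wolfe_line_search(f, grad, x, d)  # Step 1: Wolfe type step size
        x = x + alpha * d                         # iteration (2)
        g_old, g = g, grad(x)
        beta = beta_rule(g, g_old, d)             # Step 2: stand-in for formula (6)
        d = -g + beta * d                         # direction update of type (3)
        if g.dot(d) >= 0.0:                       # safeguard (not in the paper):
            d = -g                                #   restart if d is not descent
    return x, k
```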

Before giving the global convergence theorem, we need the following assumptions.

Assumption 1. (A1) The level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded.
(A2) In a neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L \|x - y\| \quad \text{for all } x, y \in N.$$
In order to establish the global convergence of Algorithm 1, we also need the following lemmas.

Lemma 2. Suppose that Assumption 1 holds; then, (4) and (5) are well defined.

The proof is essentially the same as that of Lemma 1 in [12]; hence, we do not repeat it here.

Lemma 3. Suppose that the direction $d_k$ is given by (6); then the sufficient descent condition $g_k^T d_k \le -c \|g_k\|^2$ holds for all $k \ge 0$, where $c > 0$ is a constant. In particular, $d_k$ is a descent search direction.

Proof. From the definitions of $d_k$ and $\beta_k$ in (6), we obtain the result directly.

Lemma 4. Suppose that Assumption 1 holds and $\alpha_k$ is determined by (4) and (5); then
$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty. \quad (9)$$

Proof. By (4), (5), Lemma 3, and Assumption 1, we obtain a lower bound on the step size $\alpha_k$ in terms of $-g_k^T d_k / \|d_k\|^2$. Squaring both sides of the resulting inequality and combining it with the sufficient decrease condition (4), together with the fact that $f$ is bounded below on the level set, gives (9). This completes the proof of the lemma.

Lemma 5. Suppose that Assumption 1 holds, $d_k$ is computed by (6), and $\alpha_k$ is determined by (4) and (5); then
$$\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty. \quad (14)$$

Proof. From Lemmas 3 and 4, we can obtain (14).

Theorem 6. Consider Algorithm 1, and suppose that Assumption 1 holds. Then,
$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Suppose by contradiction that there exists a constant $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ holds for all $k \ge 0$.
From (6) and Lemma 3, we obtain an upper bound on $\|d_k\|^2$. Dividing by this bound and summing over $k$, we find that the series in (14) diverges, which contradicts (14). Hence the theorem holds.

Remark 7. In Algorithm 1, the search direction can also be computed by two alternative formulas of the same type.

Algorithm 8. We have the following steps.

Step 0. Given $x_0 \in \mathbb{R}^n$, set $d_0 = -g_0$ and $k = 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k$ satisfying (4) and (5), and compute $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $\beta_{k+1}$ by formula (22) and compute $d_{k+1}$ by (3). Set $k := k+1$, and go to Step 1.
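Formula (22) is likewise not reproduced here. As an illustration of a hybrid choice in the spirit of the hybrid methods in [7–9], one could clip the PRP scalar into the interval $[0, \beta_k^{FR}]$; the stand-in below, labeled beta_hybrid, is our own and is not the paper's (22).

```python
def beta_hybrid(g, g_old, d_old):
    # Illustrative hybrid scalar (not the paper's formula (22)):
    # the PRP value clipped to the interval [0, beta_FR].
    return max(0.0, min(beta_prp(g, g_old, d_old), beta_fr(g, g_old, d_old)))

# It can be plugged into the generic loop sketched after Algorithm 1, which
# already applies the direction update (3):
#   x_star, iters = nonlinear_cg(f, grad, x0, beta_rule=beta_hybrid)
```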

Lemma 9. Suppose that Assumption 1 holds and $\beta_k$ is computed by (22); then the bounds on $\beta_k$ and the descent property of $d_k$ established in [9] hold for all $k$.

Lemma 10. Suppose that $0 < \lambda < 1$ and $M > 0$ are constants. If the positive sequence $\{a_k\}$ satisfies $a_k \le M + \lambda a_{k-1}$ for all $k$, then
$$\sum_{k \ge 0} \frac{1}{a_k} = \infty.$$

From the previous analysis, we can get the following global convergence result for Algorithm 8.

Theorem 11. Suppose that Assumption 1 holds and that $g_k^T d_k \le -c \|g_k\|^2$, where $c > 0$ is a constant. Then,
$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Suppose by contradiction that there exists $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ holds for all $k \ge 0$. From (3), we have $d_k + g_k = \beta_k d_{k-1}$. Squaring both sides of this equation, we get
$$\|d_k\|^2 = \beta_k^2 \|d_{k-1}\|^2 - 2 g_k^T d_k - \|g_k\|^2.$$
Estimating $\beta_k^2$ by means of (22) and the assumptions of the theorem yields the recursive bound (30) on $\|d_k\|^2$, and hence (31). From (31) and Lemma 10, we conclude that the series in Lemma 4 diverges, which contradicts Lemma 4. Therefore, the theorem holds.

Algorithm 12. We have the following steps.

Step 0. Given $x_0 \in \mathbb{R}^n$ and the required parameters, set $d_0 = -g_0$ and $k = 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k$ satisfying (4) and (5), and compute $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $d_{k+1}$ by (33), where the scalar in (33) is given by (34). Set $k := k+1$, and go to Step 1.

Lemma 13. Suppose that the direction $d_k$ is given by (33) and (34); then the sufficient descent condition $g_k^T d_k \le -c \|g_k\|^2$ holds for any $k \ge 0$, where $c > 0$ is a constant.

Lemma 14. Suppose that Assumption 1 holds, $d_k$ is generated by (33) and (34), and $\alpha_k$ is determined by (4) and (5); then
$$\sum_{k \ge 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty. \quad (36)$$

Proof. From Lemma 4 and Lemma 13, we obtain (36).

Lemma 15. Suppose that $f$ is convex; that is, $d^T \nabla^2 f(x) d \ge 0$ for all $x, d \in \mathbb{R}^n$, where $\nabla^2 f(x)$ is the Hessian matrix of $f$. Let $\alpha_k$ and $x_k$ be generated by Algorithm 12; then the estimate (37) holds.

Proof. By Taylor’s theorem, we can get
$$f(x_{k+1}) = f(x_k) + \alpha_k g_k^T d_k + \tfrac{1}{2} \alpha_k^2 d_k^T \nabla^2 f(\xi_k) d_k, \quad (38)$$
where $\xi_k$ lies on the segment between $x_k$ and $x_{k+1}$.
By Assumption 1, (4), and (38), we obtain the required estimate. So, we get (37).

Theorem 16. Consider Algorithm 12, and suppose that Assumption 1 and the assumption of Lemma 15 hold. Then,
$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Suppose by contradiction that there exists a constant $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ holds for all $k \ge 0$.
By Lemma 13, we have $g_k^T d_k \le -c \|g_k\|^2$. From Assumption 1, Lemma 15, and (42), we obtain a bound on $\|d_k\|^2$. Therefore, by (33), the series in (36) diverges, which contradicts (36). Hence $\liminf_{k \to \infty} \|g_k\| = 0$, and the proof of the theorem is complete.

Remark 17. In Algorithm 12, the scalar in (33) can also be computed by an alternative formula of the same type.

3. Numerical Experiments and Discussions

In this section, we report numerical experiments for the new nonlinear conjugate gradient methods with Wolfe type line search, together with some discussions. The test problems are taken from [13]. The iteration is stopped when $\|g_k\|$ falls below a prescribed tolerance. The experiments were carried out in MATLAB 7.0. We report the numerical results of Algorithms 1 and 12 to show that the methods are efficient for unconstrained optimization problems; the results are listed in Tables 1 and 2.
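The paper's experiments were carried out in MATLAB; purely for illustration, the snippet below exercises the generic Python sketches from Section 2 on the extended Rosenbrock function, one of the classical test problems in [13]. The dimension and the use of the customary starting point are our choices, not the paper's setup, and no particular iteration counts are implied.

```python
import numpy as np

def ext_rosenbrock(x):
    # Extended Rosenbrock function (n even): independent pairs (x_{2i-1}, x_{2i}).
    xo, xe = x[0::2], x[1::2]
    return np.sum(100.0 * (xe - xo ** 2) ** 2 + (1.0 - xo) ** 2)

def ext_rosenbrock_grad(x):
    xo, xe = x[0::2], x[1::2]
    g = np.zeros_like(x)
    g[0::2] = -400.0 * xo * (xe - xo ** 2) - 2.0 * (1.0 - xo)
    g[1::2] = 200.0 * (xe - xo ** 2)
    return g

x0 = np.tile([-1.2, 1.0], 5)            # customary starting point, n = 10
x_star, iters = nonlinear_cg(ext_rosenbrock, ext_rosenbrock_grad, x0)
print(iters, np.linalg.norm(ext_rosenbrock_grad(x_star)))
```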

Table 1: Test results for Algorithm 1.

Table 2: Test results for Algorithm 12.

Discussion 1. From the analysis of the global convergence of Algorithm 1, we can see that if the search direction $d_k$ satisfies the sufficient descent property, then the global convergence of the corresponding nonlinear conjugate gradient method with Wolfe type line search can be obtained without further assumptions.

Discussion 2. In Algorithm 8, we use a Wolfe type line search. We believe that a nonmonotone line search (see [14]) could also be used in our algorithms.

Discussion 3. From the analysis of the global convergence of Algorithm 12, we can see that when $d_k$ is a sufficient descent direction, the global convergence of the corresponding conjugate gradient method with Wolfe type line search can be obtained without requiring $f$ to be uniformly convex.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11101231 and 10971118), the Project of Shandong Province Higher Educational Science and Technology Program (J10LA05), and the International Cooperation Program for Excellent Lecturers of 2011 of the Shandong Provincial Education Department.

References

  1. B. T. Polyak, “The conjugate gradient method in extremal problems,” USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
  2. R. Fletcher and C. M. Reeves, “Function minimization by conjugate gradients,” The Computer Journal, vol. 7, pp. 149–154, 1964.
  3. M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” Journal of Research of the National Bureau of Standards, vol. 49, pp. 409–436, 1952.
  4. Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
  5. R. Fletcher, Practical Methods of Optimization. Vol. 1: Unconstrained Optimization, Wiley, New York, NY, USA, 2nd edition, 1987.
  6. M. Raydan, “The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem,” SIAM Journal on Optimization, vol. 7, no. 1, pp. 26–33, 1997.
  7. L. Zhang and W. Zhou, “Two descent hybrid conjugate gradient methods for optimization,” Journal of Computational and Applied Mathematics, vol. 216, no. 1, pp. 251–264, 2008.
  8. A. Zhou, Z. Zhu, H. Fan, and Q. Qing, “Three new hybrid conjugate gradient methods for optimization,” Applied Mathematics, vol. 2, no. 3, pp. 303–308, 2011.
  9. B. C. Jiao, L. P. Chen, and C. Y. Pan, “Global convergence of a hybrid conjugate gradient method with Goldstein line search,” Mathematica Numerica Sinica, vol. 29, no. 2, pp. 137–146, 2007.
  10. G. Yuan, “Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems,” Optimization Letters, vol. 3, no. 1, pp. 11–21, 2009.
  11. Z. Dai and B. Tian, “Global convergence of some modified PRP nonlinear conjugate gradient methods,” Optimization Letters, vol. 5, no. 4, pp. 615–630, 2011.
  12. C. Y. Wang, Y. Y. Chen, and S. Q. Du, “Further insight into the Shamanskii modification of Newton method,” Applied Mathematics and Computation, vol. 180, no. 1, pp. 46–52, 2006.
  13. J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
  14. L. Grippo, F. Lampariello, and S. Lucidi, “A nonmonotone line search technique for Newton's method,” SIAM Journal on Numerical Analysis, vol. 23, no. 4, pp. 707–716, 1986.