Abstract

The nonlinear conjugate gradient method is one of the most useful methods for unconstrained optimization problems. In this paper, we consider three kinds of nonlinear conjugate gradient methods with a Wolfe type line search for unconstrained optimization problems. Under some mild assumptions, global convergence results for the given methods are established. The numerical results show that the nonlinear conjugate gradient methods with Wolfe type line search are efficient for some unconstrained optimization problems.

1. Introduction

In this paper, we focus our attention on the global convergence of nonlinear conjugate gradient methods with a Wolfe type line search. We consider the following unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^{n}} f(x), \qquad (1)$$
where $f : \mathbb{R}^{n} \to \mathbb{R}$ in (1) is a continuously differentiable function, and its gradient is denoted by $g(x) = \nabla f(x)$. Iterative methods are commonly used to solve (1). The iterative formula is given by
$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$
where $x_k$ and $x_{k+1}$ are the $k$th and $(k+1)$th iterates, $\alpha_k > 0$ is a step size, and $d_k$ is a search direction. In the following, we define the search direction by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \qquad (3)$$
where $g_k = g(x_k)$. In (3), $\beta_k$ is a conjugate gradient scalar, and well-known formulas include the Fletcher-Reeves (FR), Polak-Ribiere-Polyak (PRP), Hestenes-Stiefel (HS), and Dai-Yuan (DY) choices (see [1-6]). Recently, some new nonlinear conjugate gradient methods have been given in [7-11]. Based on these methods, we give some new kinds of nonlinear conjugate gradient methods and analyze their global convergence under a Wolfe type line search.
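For the reader's convenience, the classical choices of $\beta_k$ alluded to above are usually written as follows (with $y_{k-1} := g_k - g_{k-1}$); these are standard forms given only for reference, since the displays of the original text are not reproduced here.

```latex
\[
\beta_k^{\mathrm{FR}} = \frac{\|g_k\|^{2}}{\|g_{k-1}\|^{2}}, \qquad
\beta_k^{\mathrm{PRP}} = \frac{g_k^{T} y_{k-1}}{\|g_{k-1}\|^{2}}, \qquad
\beta_k^{\mathrm{HS}} = \frac{g_k^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}, \qquad
\beta_k^{\mathrm{DY}} = \frac{\|g_k\|^{2}}{d_{k-1}^{T} y_{k-1}},
\qquad y_{k-1} := g_k - g_{k-1}.
\]
```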

The rest of the paper is organized as follows. In Section 2, we give the methods and the global convergence results for them. In the last section, numerical results and some discussions are given.

2. The Methods and Their Global Convergence Results

Firstly, we give the Wolfe type line search, which will be used in our new nonlinear conjugate gradient methods. In the rest of this paper, $\|\cdot\|$ stands for the Euclidean 2-norm.

We use the Wolfe type line search given in [12].

The line search is to compute a step size $\alpha_k > 0$ such that conditions (4) and (5) hold, where the line search parameters are fixed constants in $(0, 1)$.
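Since the displays of (4) and (5) are not reproduced in this text, the following Python sketch uses the classical (weak) Wolfe conditions, $f(x_k + \alpha d_k) \le f(x_k) + \delta \alpha g_k^{T} d_k$ and $g(x_k + \alpha d_k)^{T} d_k \ge \sigma g_k^{T} d_k$, as a stand-in for the Wolfe type conditions of [12]. The function name, its signature, the default parameter values, and the bisection strategy are all our own illustrative choices, not part of the paper.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, delta=1e-4, sigma=0.9, alpha=1.0, max_iter=50):
    """Bisection search for a step size satisfying the classical weak Wolfe
    conditions (a stand-in for the Wolfe type conditions (4)-(5) of [12])."""
    lo, hi = 0.0, np.inf
    fx = f(x)
    slope = grad(x) @ d          # g_k^T d_k; negative for a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + delta * alpha * slope:
            hi = alpha                           # sufficient decrease fails: shrink
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) @ d < sigma * slope:
            lo = alpha                           # curvature fails: enlarge the step
            alpha = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
        else:
            return alpha                         # both conditions hold
    return alpha                                 # fallback after max_iter trials
```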

Now, we present the nonlinear conjugate gradient methods as follows.

Algorithm 1. We have the following steps.

Step 0. Given $x_0 \in \mathbb{R}^{n}$, set $d_0 = -g_0$ and $k := 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k$ satisfying (4) and (5), and compute $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $d_{k+1}$ by formula (6). Set $k := k + 1$, and go to Step 1.
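To make the structure of Algorithm 1 concrete, the following minimal Python sketch implements the overall loop, reusing the wolfe_line_search sketch given above. Since formula (6) is not reproduced in this text, the direction update is passed in as a user-supplied function; the default PRP+ style update shown below is only a placeholder, not the paper's formula (6).

```python
import numpy as np

def cg_method(f, grad, x0, direction_update=None, tol=1e-6, max_iter=1000):
    """Generic nonlinear conjugate gradient loop in the spirit of Algorithm 1."""
    if direction_update is None:
        # Placeholder PRP+ style update; formula (6) of the paper would go here.
        def direction_update(g_new, g_old, d_old):
            beta = g_new @ (g_new - g_old) / (g_old @ g_old)
            return -g_new + max(beta, 0.0) * d_old

    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # Step 0: d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:              # stop when the gradient is small
            break
        alpha = wolfe_line_search(f, grad, x, d)  # Step 1: find alpha_k by (4)-(5)
        x = x + alpha * d                         # iteration (2): x_{k+1} = x_k + alpha_k d_k
        g_new = grad(x)
        d = direction_update(g_new, g, d)         # Step 2: compute d_{k+1}
        g = g_new                                 # k := k + 1
    return x
```

The same skeleton serves Algorithms 8 and 12 once their own direction updates are plugged in.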

Before giving the global convergence theorem, we need the following assumptions.

Assumption 1. (A1) The level set $\Omega = \{x \in \mathbb{R}^{n} : f(x) \le f(x_0)\}$ is bounded.
(A2) In a neighborhood $N$ of $\Omega$, $f$ is continuously differentiable and its gradient is Lipschitz continuous; namely, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L \|x - y\|, \qquad \forall x, y \in N.$$

In order to establish the global convergence of Algorithm 1, we also need the following lemmas.

Lemma 2. Suppose that Assumption 1 holds; then, (4) and (5) are well defined.

The proof is essentially the same as that of Lemma 1 in [12]; hence, we do not repeat it here.

Lemma 3. Suppose that the direction $d_k$ is given by (6); then $g_k^{T} d_k < 0$ holds for all $k$ with $g_k \neq 0$. Hence, $d_k$ is a descent search direction.

Proof. This follows directly from the definition of $d_k$ and of the scalar in (6).

Lemma 4. Suppose that Assumption 1 holds and that $\alpha_k$ is determined by (4) and (5); then one has (9).

Proof. By (4), (5), Lemma 3, and Assumption 1, we first obtain a lower bound on the step size $\alpha_k$. Squaring both sides of the resulting inequality and then applying (4) once more, we get (9), and this completes the proof of the lemma.
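For the reader's convenience, a standard version of this argument, under the classical Wolfe conditions and the Lipschitz condition of (A2), runs as follows; the Wolfe type conditions (4) and (5) of [12] lead to an analogous bound, though the exact displays may differ.

```latex
% Standard Zoutendijk-type argument (a stand-in for the omitted displays):
\begin{align*}
(\sigma - 1)\, g_k^{T} d_k \le (g_{k+1} - g_k)^{T} d_k \le L\, \alpha_k \|d_k\|^{2}
  &\;\Longrightarrow\;
  \alpha_k \ge \frac{(\sigma - 1)\, g_k^{T} d_k}{L\, \|d_k\|^{2}}, \\
f(x_k) - f(x_{k+1}) \ge -\delta\, \alpha_k\, g_k^{T} d_k
  \ge \frac{\delta (1 - \sigma)}{L}\, \frac{(g_k^{T} d_k)^{2}}{\|d_k\|^{2}}
  &\;\Longrightarrow\;
  \sum_{k \ge 0} \frac{(g_k^{T} d_k)^{2}}{\|d_k\|^{2}} < \infty .
\end{align*}
```

The final implication uses that $f$ is bounded below on the level set $\Omega$, which follows from (A1) and the continuity of $f$.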

Lemma 5. Suppose that Assumption 1 holds, $d_k$ is computed by (6), and $\alpha_k$ is determined by (4) and (5); then one has (14).

Proof. From Lemmas 3 and 4, we can obtain (14).

Theorem 6. Consider Algorithm 1, and suppose that Assumption 1 holds. Then one has $\liminf_{k \to \infty} \|g_k\| = 0$.

Proof. Suppose, by contradiction, that the theorem is not true; then there exists a constant $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ holds for all $k$.
From (6) and Lemma 3, we obtain a bound on the search direction $d_k$. Dividing the resulting inequality by the appropriate quantity and summing over $k$, we reach a contradiction with (14). Hence, the theorem holds.

Remark 7. In Algorithm 1, the search direction $d_{k+1}$ can also be computed by two alternative formulas of the same type, each with its own associated scalar parameter.

Algorithm 8. We have the following steps.

Step 0. Given $x_0 \in \mathbb{R}^{n}$, set $d_0 = -g_0$ and $k := 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k$ satisfying (4) and (5), and compute $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $\beta_{k+1}$ by formula (22), and compute $d_{k+1}$ by (3). Set $k := k + 1$, and go to Step 1.
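Algorithm 8 fits the same loop as the cg_method sketch given after Algorithm 1: only the direction update changes, computing a scalar $\beta_{k+1}$ first and then $d_{k+1}$ by (3). The following small adapter is a sketch of this pattern; the FR-type scalar shown is only an illustrative stand-in, since formula (22) is not reproduced in this text.

```python
def recursion_update(beta_formula):
    """Build a direction update implementing (3): d_{k+1} = -g_{k+1} + beta_{k+1} d_k.

    beta_formula stands in for formula (22), which is not reproduced here."""
    def update(g_new, g_old, d_old):
        return -g_new + beta_formula(g_new, g_old, d_old) * d_old
    return update

# Illustrative FR-type scalar (NOT the paper's formula (22)):
fr_beta = lambda g_new, g_old, d_old: (g_new @ g_new) / (g_old @ g_old)
algorithm8_like = recursion_update(fr_beta)
# usage: cg_method(f, grad, x0, direction_update=algorithm8_like)
```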

Lemma 9. Suppose that Assumption 1 holds and that $\beta_{k+1}$ is computed by (22); if the condition stated there holds, then the corresponding estimates hold for all $k$ (see [9]).

Lemma 10. Suppose that the stated hypotheses hold, where the quantity involved is a constant. If the positive series satisfies the stated condition, then one has the displayed conclusion.

From the preceding analysis, we obtain the following global convergence result for Algorithm 8.

Theorem 11. Suppose that Assumption 1 holds and that the additional condition stated above, involving a fixed constant, is satisfied. Then one has $\liminf_{k \to \infty} \|g_k\| = 0$.

Proof. Suppose, by contradiction, that there exists a constant $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ holds for all $k$. From (3), squaring both sides of the direction recursion and using (22), we obtain a recursive estimate for $\|d_k\|^{2}$. By (30), we then obtain (31), and from (31) and Lemma 10 we arrive at a conclusion that contradicts Lemma 4. Therefore, the theorem holds.

Algorithm 12. We have the following steps.

Step 0. Given $x_0 \in \mathbb{R}^{n}$ and the required constant, set $d_0 = -g_0$ and $k := 0$. If $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k$ satisfying (4) and (5), and compute $x_{k+1}$ by (2). If $\|g_{k+1}\| = 0$, then stop.

Step 2. Compute $d_{k+1}$ by (33), where the scalar in (33) is given by (34). Set $k := k + 1$, and go to Step 1.

Lemma 13. Suppose that the direction $d_k$ is given by (33) and (34); then the descent inequality $g_k^{T} d_k < 0$ holds for any $k$ with $g_k \neq 0$.

Lemma 14. Suppose that Assumption 1 holds, $d_k$ is generated by (33) and (34), and $\alpha_k$ is determined by (4) and (5); then one has (36).

Proof. Combining Lemma 4 and Lemma 13, we obtain (36).

Lemma 15. Suppose that $f$ is convex; that is, $d^{T} \nabla^{2} f(x)\, d \ge 0$ for all $x$ and $d$, where $\nabla^{2} f(x)$ is the Hessian matrix of $f$. Let $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm 12; then one has (37).

Proof. By Taylor's theorem, we expand $f(x_{k+1})$ about $x_k$ along $\alpha_k d_k$, with the second order term evaluated at an intermediate point $\xi_k$ lying between $x_k$ and $x_{k+1}$; this gives (38).
By Assumption 1, (4), and (38), we then obtain the required estimate, and so we get (37).
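For clarity, the Taylor expansion invoked in (38) has, in standard form, the following shape; the notation for the intermediate point is ours and may differ from the original display.

```latex
\[
f(x_{k+1}) = f(x_k) + \alpha_k\, g_k^{T} d_k
           + \tfrac{1}{2}\, \alpha_k^{2}\, d_k^{T} \nabla^{2} f(\xi_k)\, d_k,
\qquad \xi_k = x_k + \theta_k\, \alpha_k d_k, \quad \theta_k \in (0, 1).
\]
```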

Theorem 16. Consider Algorithm 12, and suppose that Assumption 1 and the assumption of Lemma 15 hold. Then one has $\liminf_{k \to \infty} \|g_k\| = 0$.

Proof. Suppose, by contradiction, that the conclusion is not true; then there exists a constant $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ holds for all $k$.
By Lemma 13, we first obtain (42). From Assumption 1, Lemma 15, and (42), we derive a bound on the search direction; then, by (33), we arrive at an estimate that contradicts (36). Therefore, $\liminf_{k \to \infty} \|g_k\| = 0$, which completes the proof of the theorem.

Remark 17. In Algorithm 12, the scalar in (33) can also be computed by an alternative formula, with its associated parameter defined accordingly.

3. Numerical Experiments and Discussions

In this section, we present numerical experiments for the new nonlinear conjugate gradient methods with Wolfe type line search, together with some discussions. The test problems are taken from [13]. We use the condition $\|g_k\| \le \varepsilon$, for a small tolerance $\varepsilon$, as the stopping criterion, and the chosen problems are tested in MATLAB 7.0. We report the numerical results of Algorithms 1 and 12 to show that the methods are efficient for unconstrained optimization problems; the results are listed in Tables 1 and 2.
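As a usage illustration only (not a reproduction of the paper's experiments or of Tables 1 and 2), the Python sketches from Section 2 can be exercised on a classical test function from the collection in [13], for example the two-dimensional Rosenbrock function:

```python
import numpy as np

def rosen(x):
    # Two-dimensional Rosenbrock test function.
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosen_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

# Relies on the cg_method and wolfe_line_search sketches given in Section 2.
x_star = cg_method(rosen, rosen_grad, x0=[-1.2, 1.0], tol=1e-6)
print(np.round(x_star, 6))   # should approach the minimizer [1, 1]
```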

Discussion 1. From the analysis of the global convergence of Algorithm 1, we can see that if the search direction $d_k$ satisfies a sufficient descent property, then the global convergence of the corresponding nonlinear conjugate gradient method with Wolfe type line search can be obtained without any further assumptions.

Discussion 2. In Algorithm 8, we use a Wolfe type line search. We believe that a nonmonotone line search (see [14]) can also be used in our algorithms.

Discussion 3. From the analysis of the global convergence of Algorithm 12, we can see that when $d_k$ is a sufficient descent direction, the global convergence of the corresponding conjugate gradient method with Wolfe type line search can be obtained without requiring $f$ to be uniformly convex.

Acknowledgments

This work is supported by the National Science Foundation of China (11101231 and 10971118), Project of Shandong Province Higher Educational Science and Technology Program (J10LA05), and the International Cooperation Program for Excellent Lecturers of 2011 by Shandong Provincial Education Department.