Mathematical Problems in Engineering

Volume 2017 (2017), Article ID 1425857, 11 pages

https://doi.org/10.1155/2017/1425857

## A Modified Nonlinear Conjugate Gradient Method for Engineering Computation

^{1}Science College, Inner Mongolia University of Technology, Hohhot 010051, China

^{2}Department of Information Engineering, College of Youth Politics, Inner Mongolia Normal University, Hohhot 010051, China

Correspondence should be addressed to Zaizai Yan; zz.yan@163.com

Received 6 July 2016; Revised 6 November 2016; Accepted 8 December 2016; Published 11 January 2017

Academic Editor: Yakov Strelniker

Copyright © 2017 Tiefeng Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A general criterion for the global convergence of the nonlinear conjugate gradient method is established. Based on this criterion, the global convergence of a new modified three-parameter nonlinear conjugate gradient method is proved under mild conditions. A large number of numerical experiments are executed and reported, showing that the proposed method is competitive and serves as an alternative to existing methods. Finally, one engineering example is analyzed for illustrative purposes.

#### 1. Introduction

Unconstrained optimization methods are widely used in the fields of nonlinear dynamic systems and engineering computation to obtain numerical solutions of optimal control problems [1–4]. In this paper, we consider the unconstrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x), \tag{1}$$

where $f:\mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. The nonlinear conjugate gradient (CG) method is highly useful for solving this kind of problem because of its simplicity and its very low memory requirement [1]. The iterative formula of the CG methods is given by

$$x_{k+1} = x_k + \alpha_k d_k, \tag{2}$$

where $\alpha_k$ is a step length obtained by carrying out some line search, either exact or inexact. In practical computation, an exact line search is time-consuming and its workload is very large, so one usually adopts an inexact line search (see [5–7]). A major inexact line search is the strong Wolfe–Powell line search, which finds a step length $\alpha_k$ in (2) satisfying

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \tag{3}$$

$$|g(x_k + \alpha_k d_k)^T d_k| \le \sigma |g_k^T d_k|, \tag{4}$$

where $0 < \delta < \sigma < 1$. In this paper, the modified Wolfe–Powell line search finds a step length $\alpha_k$ in (2) satisfying (3) and a modified condition (5). Here $d_k$ is the search direction defined by

$$d_k = \begin{cases} -g_k, & k = 1, \\ -g_k + \beta_k d_{k-1}, & k \ge 2, \end{cases} \tag{6}$$

where $g_k$ denotes the gradient $\nabla f(x_k)$ and $\beta_k$ is a scalar chosen so that $d_k$ becomes the $k$th conjugate direction. There are many well-known formulae for the scalar $\beta_k$, for example,

$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2} \quad \text{(Fletcher–Reeves [8], 1964)}, \tag{7}$$

$$\beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2} \quad \text{(Polak–Ribière–Polyak [9], 1969)}, \tag{8}$$

$$\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T (g_k - g_{k-1})} \quad \text{(Dai–Yuan [10], 1999)}, \tag{9}$$

and other formulae (e.g., [11–13]), where $\|\cdot\|$ is the Euclidean norm of a vector, $y_{k-1} = g_k - g_{k-1}$, and "$T$" stands for the transpose. These methods are generally regarded as very efficient conjugate gradient methods in practical computation.
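As a concrete illustration of the CG iteration (2) with direction update (6), the following Python sketch (illustrative only; the paper's experiments used MATLAB) applies the Fletcher–Reeves, Polak–Ribière–Polyak, and Dai–Yuan choices of $\beta_k$ to a convex quadratic, for which the exact line search has a closed form. All function names here are this sketch's own.

```python
import numpy as np

# The three classical choices of beta_k; each takes the new gradient,
# the previous gradient, and the previous direction.
def beta_fr(g_new, g_old, d):   # Fletcher-Reeves
    return (g_new @ g_new) / (g_old @ g_old)

def beta_prp(g_new, g_old, d):  # Polak-Ribiere-Polyak
    return g_new @ (g_new - g_old) / (g_old @ g_old)

def beta_dy(g_new, g_old, d):   # Dai-Yuan
    return (g_new @ g_new) / (d @ (g_new - g_old))

def cg_quadratic(A, b, x, beta_rule, tol=1e-10, max_iter=200):
    """CG iteration x_{k+1} = x_k + alpha_k d_k for f(x) = 0.5 x'Ax - b'x.
    For a quadratic, the exact line search gives alpha = -g'd / d'Ad."""
    g = A @ x - b
    d = -g                      # d_1 = -g_1
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)   # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        d = -g_new + beta_rule(g_new, g, d) * d
        g = g_new
    return x
```

On a strictly convex quadratic with an exact line search, all three rules produce the same iterates as linear CG, so the distinctions between them only matter for general nonlinear objectives and inexact searches.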

In recent decades, in order to obtain CG methods that have not only good convergence properties but also excellent computational performance, many researchers have studied the CG method extensively and obtained improved methods with good properties [14–20]. Li and Feng [21] gave a modified CG method that generates a sufficient descent direction and showed its global convergence under the strong Wolfe–Powell conditions. Dai and Wen [22] gave a scaled conjugate gradient method and proved its global convergence under the strong Wolfe–Powell conditions. Al-Baali [23] proved that the FR method satisfies the sufficient descent condition and converges globally for general objective functions if the strong Wolfe–Powell line search is used. Dai and Yuan [24] also introduced a three-parameter formula (10) for $\beta_k$. Formula (10) can be rewritten so that it includes the three classes of CG methods above as extreme cases, and the global convergence of the three-parameter CG method was proved under the strong Wolfe–Powell line search. When one of the parameters vanishes, the family reduces to the two-parameter family of conjugate gradient methods in [25]; under further restrictions on the remaining parameters, it reduces to the one-parameter family in [26]. Therefore, the three-parameter family has the one-parameter family in [26] and the two-parameter family in [25] as its subfamilies. In addition, some hybrid methods can also be regarded as special cases of the three-parameter family [24]. For many of the modified CG methods above, global convergence was obtained under the strong Wolfe–Powell line search. In this paper, we study the CG method further; our main aim is to improve its numerical performance while keeping its global convergence under the modified Wolfe–Powell line search.

This paper is organized as follows. We first present a criterion for the global convergence of the CG method in the next section. In Section 3, we propose a new modified three-parameter conjugate gradient method and establish global convergence results for the corresponding algorithm under the modified Wolfe–Powell line search. Preliminary numerical results are presented in Section 4. One engineering example is analyzed for illustration in Section 5. Finally, conclusions appear in Section 6.

#### 2. A Criterion for the Global Convergence of CG Method

In this section, we first adopt the following assumption, used commonly in the research literature.

*Assumption 1. *The function $f$ is continuously differentiable in a neighborhood $\mathcal{N}$ of the level set $\mathcal{L} = \{x \in \mathbb{R}^n : f(x) \le f(x_1)\}$, and $\mathcal{L}$ is bounded. Here, we also assume that the gradient $g(x) = \nabla f(x)$ is Lipschitz continuous with modulus $L$; that is, there exists $L > 0$ such that

$$\|g(x) - g(y)\| \le L \|x - y\| \quad \text{for all } x, y \in \mathcal{N}.$$

Lemma 2 (Zoutendijk condition [27]). *Suppose that Assumption 1 holds, $x_k$ is given by (2) and (6), and $\alpha_k$ is obtained by the modified Wolfe–Powell line search ((3), (5)), while the direction $d_k$ satisfies $g_k^T d_k < 0$. Then,*

$$\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty.$$
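The Zoutendijk condition asserts that $\sum_k (g_k^T d_k)^2 / \|d_k\|^2$ stays finite along the iterates. As a small numerical illustration (a sketch of this sketch's own devising, not from the paper), consider steepest descent ($d_k = -g_k$) with an exact line search on a quadratic: each term then equals $\|g_k\|^2$, and the partial sums converge.

```python
import numpy as np

# Illustrative check of the Zoutendijk condition for steepest descent
# (d_k = -g_k) on f(x) = 0.5 x'Ax, whose exact step is g'g / g'Ag.
A = np.diag([1.0, 10.0])
x = np.array([3.0, -2.0])
zoutendijk_terms = []
for _ in range(200):
    g = A @ x
    d = -g
    zoutendijk_terms.append((g @ d) ** 2 / (d @ d))  # equals ||g_k||^2 here
    alpha = (g @ g) / (g @ A @ g)                    # exact line search step
    x = x + alpha * d
total = sum(zoutendijk_terms)  # finite partial sum, terms decay geometrically
```

Because the gradients shrink geometrically on a strictly convex quadratic, the terms decay geometrically and the sum is finite, consistent with Lemma 2.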

Lemma 3 (see [28]). *Suppose that > and are constants; if satisfy , then and .*

Theorem 4. *Suppose that the objective function $f$ satisfies Assumption 1 and that $x_k$ is given by (2) and (6), where $\alpha_k$ satisfies the modified Wolfe–Powell conditions (3) and (5), and $g_k^T d_k < 0$; then, either $g_k = 0$ holds for some $k$ or*

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

*Proof. *Suppose, by contradiction, that the stated conclusion is not true. Then, in view of , there exists a constant , such that From (6), we have By multiplying on both sides of (17), we have Let and ; then, Thus, from (19) and , we get Note that and ; then, it follows from (20) that From Assumption 1, it follows that there exists a constant , such that and from (16), (21), and (22), we get From the above, it is obvious that On the other hand, from (23), it follows that From (24) and (25) and Lemma 3, we have which contradicts Lemma 2. Therefore, the global convergence is proved.

#### 3. The Global Convergence for the New Formula and Algorithm Frame

##### 3.1. The New Formula and the Corresponding Properties

Based on formula (10), we put forward a new formula for $\beta_k$: where , , , and . Because of possible negative values of , we use the maximum function to truncate at zero and Using the equality we can rewrite the denominator of (27) as When then the denominator of given by (27) reduces to the denominator of . On the other hand, when the numerator of (27) reduces to When then Now, the numerator of (27) reduces to the numerator of . From the above analysis, we can see that (27) is indeed an extension of (10). Owing to the presence of the three parameters, the conjugate gradient methods defined by (2), (6), and (27) are more flexible. The numerical results in Section 4 demonstrate the influence of these parameters on formula (27).

Lemma 5. *Suppose that Assumption 1 holds and that is given by (2) and (6), where satisfies the modified Wolfe-Powell conditions (3) and (5), while is computed by (27). Then, one has *

*Proof. *When $k = 1$, we have $g_1^T d_1 = -\|g_1\|^2 < 0$. Suppose holds; then, in formula (27), , and the conclusion holds.

If , we have where ; by formulas (3) and (5), we have . Hence, When , , and , we obtain Due to and , through the above analysis, we have Hence,

The result shows that the search direction $d_k$ satisfies the descent condition ($g_k^T d_k < 0$); this condition is crucial for the convergence analysis of any conjugate gradient method.

Lemma 6. *Suppose that Assumption 1 holds and that is given by (2) and (6), where satisfies the modified Wolfe-Powell conditions (3) and (5), while is computed by (27). Then, one has *

*Proof. *Let ; when , then ; when , if , then ; if , .

By Lemma 5, then

To sum up, , and by , we have , and hence .

Theorem 7. *Suppose the objective function $f$ satisfies Assumption 1; consider methods (2) and (6), where $\beta_k$ is given by (27) and $\alpha_k$ satisfies the modified Wolfe–Powell line search conditions. Then, either $g_k = 0$ holds for some $k$ or*

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

*Proof. *By Theorem 4 and Lemma 6, Theorem 7 is proved.

The result shows that the proposed algorithm with the modified Wolfe-Powell line search possesses global convergence.

##### 3.2. Algorithm A

Based on the discussion above, we can now describe the algorithm frame for solving the unconstrained optimization problem (1) as follows.

*Step 0. *Choose an initial point $x_1 \in \mathbb{R}^n$; given constants $\delta$ and $\sigma$ with $0 < \delta < \sigma < 1$, together with the parameters in (27) subject to their constraints, set $d_1 = -g_1$ and let $k = 1$.

*Step 1. *If a stopping criterion is satisfied, then stop; otherwise, go to Step 2.

*Step 2. *Determine a step size $\alpha_k$ by the line searches (3) and (5).

*Step 3. *Let $x_{k+1} = x_k + \alpha_k d_k$; compute $\beta_{k+1}$ and $d_{k+1}$ by (27) and (6), respectively.

*Step 4. *Set $k = k + 1$ and go to Step 1.
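The algorithm frame above can be sketched in Python with two simplifications, both flagged explicitly: $\beta_k^{DY}$ stands in for the paper-specific formula (27), and a backtracking Armijo search (with a steepest-descent restart safeguard) stands in for the full modified Wolfe–Powell search. The function name and parameters are this sketch's own.

```python
import numpy as np

def algorithm_a_sketch(f, grad, x, tol=1e-6, max_iter=1000):
    """Sketch of the Algorithm A frame, under two stated simplifications:
    beta_k^{DY} replaces the paper-specific formula (27), and a backtracking
    Armijo search replaces the modified Wolfe-Powell search (3)/(5)."""
    delta = 1e-4                       # Armijo parameter
    g = grad(x)
    d = -g                             # Step 0: d_1 = -g_1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:   # Step 1: stopping criterion
            break
        if g @ d >= 0:                 # safeguard: restart if not a descent direction
            d = -g
        # Step 2: backtracking line search (Armijo condition only)
        alpha, fx, gd = 1.0, f(x), g @ d
        for _ in range(60):
            if f(x + alpha * d) <= fx + delta * alpha * gd:
                break
            alpha *= 0.5
        x = x + alpha * d              # Step 3: next iterate
        g_new = grad(x)
        denom = d @ (g_new - g)
        beta = (g_new @ g_new) / denom if abs(denom) > 1e-16 else 0.0  # DY stand-in
        d = -g_new + beta * d
        g = g_new                      # Step 4: k <- k + 1
    return x
```

The restart safeguard is needed precisely because the Armijo-only search does not enforce the curvature condition that guarantees descent for the DY choice.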

#### 4. Numerical Experiments and Results

In this section, in order to show the performance of the given algorithm, we test our proposed algorithm (Algorithm A) and the DY algorithm (given by formula (10)) on unconstrained optimization problems from Andrei [29], as follows. These test functions are often used in the engineering field:

(1) Sphere function
(2) Rastrigin function
(3) Freudenstein and Roth function (Froth)
(4) Perturbed Quadratic diagonal function (Pqd)
(5) Extended White & Holst function (Ewh)
(6) Raydan 1 function
(7) Raydan 2 function
(8) Extended Trigonometric function (Etri)
(9) Extended Powell function (Epow)
(10) Wood function
(11) Extended Wood function (Ewood)
(12) Perturbed Quadratic function (Perq)
(13) Extended Tridiagonal 1 function (Etri1)
(14) Extended Miele & Cantrell function (Emic)
(15) Extended Rosenbrock function (Erosen)
(16) Generalized Rosenbrock function (Grosen)
(17) QUARTC function
(18) LIARWHD function
(19) Staircase 1 function
(20) Staircase 2 function
(21) POWER function
(22) Diagonal 4 function
(23) Extended BD1 function (EBD1)
(24) CUBE function
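Two of the best-known functions in this list can be written out explicitly; the following Python sketch (the paper's own codes were MATLAB) gives their standard forms and minimizers.

```python
import numpy as np

def sphere(x):
    """Sphere function: f(x) = sum_i x_i^2, with minimum 0 at the origin."""
    return np.sum(x ** 2)

def extended_rosenbrock(x):
    """Extended Rosenbrock: each pair (x_{2i-1}, x_{2i}) contributes
    100 (x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2; minimum 0 at (1, ..., 1)."""
    odd, even = x[0::2], x[1::2]
    return np.sum(100.0 * (even - odd ** 2) ** 2 + (1.0 - odd) ** 2)
```

Such closed-form minima are what make these problems convenient benchmarks: the final function value reported by an algorithm can be compared directly against zero.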

Here, and are the optimal solution and the function value at the optimal solution, respectively. For each algorithm, the parameters are chosen as and . All codes were written in MATLAB 7.5 and run on a Lenovo PC with a 1.90 GHz CPU, 2.43 GB of RAM, and the Windows XP operating system. The stopping criterion of the iteration is one of the following conditions: and the number of iterations . If condition occurs, the method is deemed to have failed on the corresponding test problem, and we denote this by "." For the first three test problems, we present experimental results to observe the behavior of the proposed and the DY (given by formula (10)) conjugate gradient algorithms for different , different , and different . Details of the parameter settings are given in Table 1. Numerical results for these test problems are listed in Tables 2, 3, 4, 5, 6, 7, and 8, respectively. Table 9 shows numerical results for the other test problems. Here, denotes the initial point of the test problem, and and are the iterate and the function value at the final iteration, respectively.