Mathematical Problems in Engineering

Volume 2015, Article ID 103517, 7 pages

http://dx.doi.org/10.1155/2015/103517

## An Efficient Hybrid Conjugate Gradient Method with the Strong Wolfe-Powell Line Search

^{1}School of Informatics and Applied Mathematics, Universiti Malaysia Terengganu, 21030 Kuala Terengganu, Terengganu, Malaysia

^{2}Faculty of Informatics and Computing, Universiti Sultan Zainal Abidin, 21300 Kuala Terengganu, Terengganu, Malaysia

^{3}Department of Computer Science and Mathematics, Universiti Teknologi MARA (UITM) Terengganu, Campus Kuala Terengganu, 21080 Kuala Terengganu, Terengganu, Malaysia

Received 11 March 2015; Revised 6 July 2015; Accepted 7 July 2015

Academic Editor: Haipeng Peng

Copyright © 2015 Ahmad Alhawarat et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The conjugate gradient (CG) method is a useful tool for solving optimization problems in many fields, such as design, economics, physics, and engineering. In this paper, we present a new hybrid CG method related to the famous Polak-Ribière-Polyak (PRP) formula. It offers a remedy for the case in which the PRP method is not globally convergent with the strong Wolfe-Powell (SWP) line search. The new formula possesses the sufficient descent condition and the global convergence properties. In addition, we further explain the cases in which the PRP method fails with the SWP line search. Furthermore, we provide numerical computations showing that, on a set of standard test functions, the new hybrid CG method almost always outperforms other related PRP formulas in both the number of iterations and the CPU time.

#### 1. Introduction

The nonlinear conjugate gradient (CG) method is a useful tool for finding the minimum of unconstrained optimization problems. Consider the following form:
$$\min_{x \in \mathbb{R}^n} f(x), \tag{1}$$
where $f:\mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function whose gradient is denoted by $g(x) = \nabla f(x)$. The CG method generates a sequence of points $x_k$, $k \ge 0$, starting from an initial point $x_0$, by the iterative formula
$$x_{k+1} = x_k + \alpha_k d_k, \tag{2}$$
where $x_k$ is the current iterate and $\alpha_k > 0$ is the step length obtained by some line search. The search direction $d_k$ is defined by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{3}$$
where $g_k = g(x_k)$ and the scalar $\beta_k$ is known as the CG method, formula, or coefficient.

To find the step length $\alpha_k$, we can use the exact line search, which is given by
$$f(x_k + \alpha_k d_k) = \min_{\alpha \ge 0} f(x_k + \alpha d_k). \tag{4}$$
In fact, (4) is not an effective line search, since it needs heavy computation in function and gradient evaluations. Therefore, we prefer an inexpensive line search. The strong Wolfe-Powell (SWP) line search [1, 2] computes a step length $\alpha_k$ satisfying
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \tag{5}$$
$$\left| g(x_k + \alpha_k d_k)^T d_k \right| \le \sigma \left| g_k^T d_k \right|, \tag{6}$$
where $0 < \delta < \sigma < 1$. Its aim is to find an approximation of the exact step length at which the descent property (see (14)) is satisfied, without continuing the search along the direction when $\alpha$ is far from the solution. Thus, by using the SWP line search we inherit the advantages of the exact line search at low computational cost. Note, however, that different choices of $\delta$ and $\sigma$ imply different behavior of the CG method. In fact, the SWP line search is a strong version of the weak Wolfe-Powell (WWP) line search, where the latter is given by (5) and
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k. \tag{7}$$
The CG method has been developed extensively in recent years owing to its simplicity, numerical efficiency, and low memory requirements. Thus, it is used widely in engineering, medical science, and other fields. As an application in engineering, the CG method can be used to solve real-life problems similar to those mentioned in [3]. The CG method is limited to functions whose gradient is available. Thus, a heuristic algorithm [4] can be used as an alternative method to find the solution of general functions. A heuristic algorithm finds an approximate solution of the objective function within acceptable time. In addition, heuristic algorithms can be applied without using computers. We refer the reader to [5–7] for some applications of these algorithms.
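As an illustration, the two SWP conditions (5) and (6) can be checked directly for a trial step. The following is a minimal sketch, assuming `f` and `grad` are given as Python callables and using illustrative parameter values `delta = 1e-4` and `sigma = 0.1` (any values with $0 < \delta < \sigma < 1$ are admissible).

```python
import numpy as np

def satisfies_swp(f, grad, x, d, alpha, delta=1e-4, sigma=0.1):
    """Return True if the trial step `alpha` fulfils both SWP conditions."""
    gd = grad(x) @ d                                              # directional derivative g_k^T d_k
    armijo = f(x + alpha * d) <= f(x) + delta * alpha * gd        # condition (5)
    curvature = abs(grad(x + alpha * d) @ d) <= sigma * abs(gd)   # condition (6)
    return bool(armijo and curvature)
```

For example, for $f(x) = \|x\|^2$ from $x = (1)$ along $d = (-2)$, the exact minimizing step $\alpha = 0.5$ satisfies both conditions, whereas a much shorter step satisfies (5) but violates the curvature condition (6).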

The most popular formulas for $\beta_k$ are Hestenes-Stiefel (HS) [8], Fletcher-Reeves (FR) [9], Polak-Ribière-Polyak (PRP) [10], and Wei et al. (WYL) [11], given respectively as follows:
$$\beta_k^{HS} = \frac{g_k^T y_{k-1}}{d_{k-1}^T y_{k-1}}, \tag{8}$$
$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2}, \tag{9}$$
$$\beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \tag{10}$$
$$\beta_k^{WYL} = \frac{g_k^T \left( g_k - \frac{\|g_k\|}{\|g_{k-1}\|} g_{k-1} \right)}{\|g_{k-1}\|^2}, \tag{11}$$
where $y_{k-1} = g_k - g_{k-1}$ and $\|\cdot\|$ denotes the Euclidean norm.
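For concreteness, the four coefficients (8)-(11) translate directly into code. This is a minimal sketch using NumPy vectors, not tied to any particular solver implementation.

```python
import numpy as np

def beta_hs(g_new, g_old, d_old):
    """Hestenes-Stiefel coefficient (8)."""
    y = g_new - g_old
    return (g_new @ y) / (d_old @ y)

def beta_fr(g_new, g_old):
    """Fletcher-Reeves coefficient (9)."""
    return (g_new @ g_new) / (g_old @ g_old)

def beta_prp(g_new, g_old):
    """Polak-Ribiere-Polyak coefficient (10)."""
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

def beta_wyl(g_new, g_old):
    """Wei-Yao-Liu coefficient (11)."""
    scale = np.linalg.norm(g_new) / np.linalg.norm(g_old)
    return (g_new @ (g_new - scale * g_old)) / (g_old @ g_old)
```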

Hestenes and Stiefel [8] proposed the first formula, (8), for solving quadratic functions in 1952. Fletcher and Reeves [9] presented the first formula for general nonlinear functions, (9), in 1964. The convergence properties of the FR method with the exact line search were obtained by Zoutendijk [12]. Al-Baali [13] proved that the FR method is globally convergent with the SWP line search when $\sigma < 1/2$. Later, Liu et al. [14] extended the result to $\sigma = 1/2$. The global convergence of the PRP method (10) with the exact line search was proved by Polak and Ribière in [10]. Powell [15] gave a counterexample showing that there exist nonconvex functions on which the PRP method does not converge globally, even when the exact line search is used. Powell suggested using a nonnegative PRP method to remedy this problem. Gilbert and Nocedal [16] proved that the nonnegative PRP (PRP+) method, that is, $\beta_k^{PRP+} = \max\{\beta_k^{PRP}, 0\}$, is globally convergent under complicated line searches. However, there is no guarantee that PRP+ is convergent with the SWP line search for general nonlinear functions. Touati-Ahmed and Storey [17] suggested the following hybrid method:
$$\beta_k^{TS} = \begin{cases} \beta_k^{PRP}, & \text{if } 0 \le \beta_k^{PRP} \le \beta_k^{FR}, \\ \beta_k^{FR}, & \text{otherwise}. \end{cases} \tag{12}$$
In 2006 Wei et al. [11] presented the nonnegative CG method (11), which is quite similar to the original PRP method and has been studied with both exact and inexact line searches. Many modifications have since appeared [18–20]; among them is the NPRP formula
$$\beta_k^{NPRP} = \frac{\|g_k\|^2 - \frac{\|g_k\|}{\|g_{k-1}\|} \left| g_k^T g_{k-1} \right|}{\|g_{k-1}\|^2}, \tag{13}$$
which is used later in this paper. Recently, many CG formulas have been constructed in pursuit of efficiency and robustness; for the latest CG methods we refer the reader to [21, 22].

One of the important rules in CG methods is the descent condition; that is, if one can prove
$$g_k^T d_k < 0, \tag{14}$$
then the function value is guaranteed to decrease along $d_k$ for sufficiently small step lengths. If we strengthen (14) to the form
$$g_k^T d_k \le -c \|g_k\|^2, \quad c > 0, \tag{15}$$
then (15) is called the sufficient descent condition.
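The condition (15) is straightforward to verify numerically; the sketch below is a simple illustration with NumPy vectors.

```python
import numpy as np

def is_sufficient_descent(g, d, c):
    """Check the sufficient descent condition (15): g^T d <= -c * ||g||^2."""
    return bool(g @ d <= -c * (g @ g))
```

For instance, the steepest descent direction $d = -g$ always satisfies (15) with $c = 1$, while a direction orthogonal to $g$ fails for any $c > 0$.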

This paper is organized as follows. In Section 2 we present the current problem of the PRP and nonnegative PRP methods with the SWP line search, and we then suggest the new hybrid CG formula and its simplifications. In Section 3 we establish the global convergence properties with the SWP line search. Numerical results and the conclusion are presented in Sections 4 and 5, respectively.

#### 2. Motivation and the Hybrid Formula

The PRP formula is one of the most efficient CG methods. However, as mentioned before, this method fails to solve some standard test problems for nonconvex functions, even when the exact line search is used. Thus, the main contribution of this paper is to extend the use of the PRP formula to several cases with the SWP line search under a mild condition, and to restart the CG algorithm with the NPRP formula when the PRP formula fails to satisfy that condition.

The following discussion illustrates the cases in which the PRP method fails or succeeds with the SWP line search in obtaining the convergence properties. The PRP formula can be rewritten as
$$\beta_k^{PRP} = \frac{\|g_k\|^2 - g_k^T g_{k-1}}{\|g_{k-1}\|^2}. \tag{16}$$
Therefore we have the following cases.

*Case A. *If $g_k^T g_{k-1} \ge 0$, then we have the following two possibilities.

*Case A1*. If $\|g_k\|^2 > g_k^T g_{k-1}$, then $\beta_k^{PRP} > 0$. In this case, the PRP method is efficient and has the global convergence properties.

*Case A2*. If $\|g_k\|^2 < g_k^T g_{k-1}$, then $\beta_k^{PRP} < 0$. In this case, based on [16], we fail to obtain the global convergence properties for nonconvex functions.

*Case B. *If $g_k^T g_{k-1} < 0$, then
$$\beta_k^{PRP} = \frac{\|g_k\|^2 + \left| g_k^T g_{k-1} \right|}{\|g_{k-1}\|^2} > \beta_k^{FR} > 0. \tag{17}$$

In Case B there is no guarantee that this method will satisfy the sufficient descent condition.

For the next discussion, we consider the nonnegative PRP (PRP+) method, which is given as follows:
$$\beta_k^{PRP+} = \max\left\{ \beta_k^{PRP}, 0 \right\}. \tag{18}$$
With this formula we still have a problem in Case B, where the sufficient descent condition is not guaranteed. To solve this problem, Gilbert and Nocedal [16] used another line search to satisfy the convergence properties. In addition, if $\beta_k = 0$, the CG method reduces to the steepest descent method, which is sometimes a weak tool for finding the optimal point of a function. Furthermore, we can notice that, in Case A2,
$$\beta_k^{PRP+} = \max\left\{ \beta_k^{PRP}, 0 \right\} = 0. \tag{19}$$
So from the PRP method we can use only Case A1.
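A small numerical illustration of Case A2: when $g_k^T g_{k-1}$ exceeds $\|g_k\|^2$, the PRP coefficient turns negative, and PRP+ clamps it to zero, which amounts to a steepest descent restart. The vectors below are arbitrary illustrative values.

```python
import numpy as np

def beta_prp(g_new, g_old):
    """PRP coefficient (10)."""
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

# Case A2: g_new @ g_old = 2 exceeds ||g_new||^2 = 1.25, so beta_PRP < 0.
g_old = np.array([2.0, 0.0])
g_new = np.array([1.0, 0.5])
beta = beta_prp(g_new, g_old)     # (1.25 - 2) / 4 = -0.1875
beta_plus = max(beta, 0.0)        # PRP+ clamps the negative value to 0.0
```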

To improve on the above ideas, we suggest the following hybrid method:
$$\beta_k^* = \begin{cases} \beta_k^{PRP}, & \text{if } \|g_k\|^2 > \left| g_k^T g_{k-1} \right|, \\ \beta_k^{NPRP}, & \text{otherwise}, \end{cases} \tag{20}$$
with $\beta_k^{NPRP}$ as in (13). If $g_k^T g_{k-1} \ge 0$, then under the condition $\|g_k\|^2 > |g_k^T g_{k-1}|$ we obtain
$$0 < \beta_k^* \le \beta_k^{FR}, \tag{21}$$
and if $g_k^T g_{k-1} < 0$ we obtain
$$\beta_k^{FR} < \beta_k^* < 2 \beta_k^{FR}. \tag{22}$$
One of the advantages of $\beta_k^*$ is that we can use both Case A1 and Case B under the condition $\|g_k\|^2 > |g_k^T g_{k-1}|$.
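As a quick sketch, the selection rule (20) in code, assuming the NPRP restart coefficient takes the nonnegative form recalled above:

```python
import numpy as np

def beta_hybrid(g_new, g_old):
    """Hybrid coefficient (20): PRP when ||g_k||^2 > |g_k^T g_{k-1}|,
    otherwise the nonnegative NPRP restart value (as assumed in the text)."""
    gg = g_new @ g_old
    if g_new @ g_new > abs(gg):
        return (g_new @ g_new - gg) / (g_old @ g_old)       # beta_PRP (Case A1 or Case B)
    scale = np.linalg.norm(g_new) / np.linalg.norm(g_old)
    return (g_new @ g_new - scale * abs(gg)) / (g_old @ g_old)  # beta_NPRP >= 0
```

When the condition holds, the PRP value is kept; otherwise the NPRP value is nonnegative by the Cauchy-Schwarz inequality, which is what makes it a safe restart.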

Choosing $\beta_k^{NPRP}$ as the restart CG formula in (20): if the condition is not satisfied, that means
$$\|g_k\|^2 \le \left| g_k^T g_{k-1} \right|. \tag{23}$$
Thus, the NPRP formula, which is always nonnegative, is a suitable value to use.

The following algorithm describes the CG method with the new coefficient $\beta_k^*$.

*Algorithm 1. *Consider the following.

*Step 1.* Initialization: given $x_0$ and $\epsilon \ge 0$, set $d_0 = -g_0$ and $k = 0$.

*Step 2.* Compute $\beta_k^*$ based on (20).

*Step 3.* Compute $d_k$ based on (3).

*Step 4.* Compute $\alpha_k$ based on (5) and (6).

*Step 5.* Update the new point based on (2).

*Step 6.* Convergence test and stopping criteria: if $\|g_{k+1}\| \le \epsilon$, then stop; otherwise go to Step 2 with $k = k + 1$.
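The steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the line search simply scans a geometric grid for a step satisfying (5) and (6) (a production code would use a bracketing/zoom strategy), and the restart branch assumes the NPRP form stated in the text.

```python
import numpy as np

def swp_step(f, grad, x, d, delta=1e-4, sigma=0.1):
    """Crude SWP search: return the first step on a geometric grid
    satisfying conditions (5) and (6), or None if no step is found."""
    fx, gd = f(x), grad(x) @ d
    for i in range(240):
        a = 2.0 ** (-i / 8.0)
        if (f(x + a * d) <= fx + delta * a * gd                    # condition (5)
                and abs(grad(x + a * d) @ d) <= sigma * abs(gd)):  # condition (6)
            return a
    return None

def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Algorithm 1 with the hybrid coefficient (20)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # Step 1: d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:         # Step 6: stopping criterion
            break
        a = swp_step(f, grad, x, d)          # Step 4: SWP line search
        if a is None:                        # no acceptable step found
            break
        x = x + a * d                        # Step 5: update (2)
        g_new = grad(x)
        gg = g_new @ g
        if g_new @ g_new > abs(gg):          # Step 2: coefficient (20)
            beta = (g_new @ g_new - gg) / (g @ g)                  # PRP branch
        else:                                # NPRP restart branch
            beta = (g_new @ g_new
                    - np.linalg.norm(g_new) / np.linalg.norm(g) * abs(gg)) / (g @ g)
        d = -g_new + beta * d                # Step 3: direction (3)
        g = g_new
    return x
```

On a simple convex quadratic this sketch drives the gradient norm below the tolerance in a handful of iterations.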

#### 3. The Global Convergence Properties of $\beta_k^*$ with the SWP Line Search

The following standard assumptions are necessary for this work.

*Assumption 1. *The level set $\Omega = \{x : f(x) \le f(x_0)\}$, where $x_0$ is the starting point of the iterative method (2), is bounded.

*Assumption 2. *In some open convex neighborhood $N$ of $\Omega$, $f$ is continuously differentiable, and its gradient is Lipschitz continuous; that is, there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L \|x - y\|$ for any $x, y \in N$.

The following lemma is one of the most important tools used to prove the global convergence properties.

Lemma 2 (see [12]). *Suppose Assumptions 1 and 2 hold. Consider any method of forms (2) and (3), where $\alpha_k$ is computed by the WWP line search and the direction $d_k$ is descent for all $k$; then*
$$\sum_{k=0}^{\infty} \cos^2 \theta_k \, \|g_k\|^2 < \infty, \quad \text{where } \cos \theta_k = \frac{-g_k^T d_k}{\|g_k\| \, \|d_k\|}, \tag{24}$$
*which can be written as*
$$\sum_{k=0}^{\infty} \frac{\left( g_k^T d_k \right)^2}{\|d_k\|^2} < \infty. \tag{25}$$
*Equation (25) is known as the Zoutendijk condition.*

We now establish the global convergence properties of $\beta_k^*$ with the SWP line search.

*Case 1 ($0 < \beta_k^* \le \beta_k^{FR}$, where $g_k^T g_{k-1} \ge 0$). *Since $\beta_k^* \le \beta_k^{FR}$, the proof of the global convergence properties is similar to that of the FR method. We refer the reader to [13, 14].

*Case 2 ($\beta_k^{FR} < \beta_k^* < 2\beta_k^{FR}$, where $g_k^T g_{k-1} < 0$). *In this case we have
$$\beta_k^* = \frac{\|g_k\|^2 + \left| g_k^T g_{k-1} \right|}{\|g_{k-1}\|^2} < \frac{2 \|g_k\|^2}{\|g_{k-1}\|^2}. \tag{26}$$
The following theorem demonstrates that $\beta_k^*$ in Case 2 satisfies the sufficient descent condition with the SWP line search.

Theorem 3. *Suppose that the sequences $\{g_k\}$ and $\{d_k\}$ are generated by Algorithm 1, where $\alpha_k$ is computed by (5) and (6) with $\sigma \in (0, 1/4)$. Then (15) holds.*

*Proof. *We proceed by induction. From (3), for $k = 0$, $g_0^T d_0 = -\|g_0\|^2$, so (15) is obvious. Suppose now that $d_{k-1}$ is a descent direction. Multiplying (3) by $g_k^T$, we obtain
$$g_k^T d_k = -\|g_k\|^2 + \beta_k^* g_k^T d_{k-1}. \tag{27}$$
Dividing both sides by $\|g_k\|^2$, and using Case 2 and (6), we obtain
$$\frac{g_k^T d_k}{\|g_k\|^2} = -1 + \beta_k^* \frac{g_k^T d_{k-1}}{\|g_k\|^2} \le -1 + \frac{2\|g_k\|^2}{\|g_{k-1}\|^2} \cdot \frac{\sigma \left| g_{k-1}^T d_{k-1} \right|}{\|g_k\|^2} = -1 + 2\sigma \frac{\left| g_{k-1}^T d_{k-1} \right|}{\|g_{k-1}\|^2}. \tag{28}$$
The same argument gives the lower bound $g_k^T d_k / \|g_k\|^2 \ge -1 - 2\sigma |g_{k-1}^T d_{k-1}| / \|g_{k-1}\|^2$; hence
$$\left| \frac{g_k^T d_k}{\|g_k\|^2} \right| \le 1 + 2\sigma \frac{\left| g_{k-1}^T d_{k-1} \right|}{\|g_{k-1}\|^2}. \tag{29}$$
Since $|g_0^T d_0| / \|g_0\|^2 = 1$, applying (29) recursively yields
$$\left| \frac{g_k^T d_k}{\|g_k\|^2} \right| \le \sum_{j=0}^{k} (2\sigma)^j < \frac{1}{1 - 2\sigma}. \tag{30}$$
Substituting (30) into (28), we obtain
$$\frac{g_k^T d_k}{\|g_k\|^2} \le -1 + \frac{2\sigma}{1 - 2\sigma} = -\frac{1 - 4\sigma}{1 - 2\sigma}. \tag{31}$$
Thus (15) holds with $c = (1 - 4\sigma)/(1 - 2\sigma)$, which is positive since $\sigma < 1/4$. The proof is complete.

Gilbert and Nocedal [16] presented an important theorem (Theorem 4 below) for establishing the global convergence properties of the nonnegative PRP method when the descent condition is satisfied. Furthermore, [16] introduced a useful property, called Property (*), as follows.

*Property (*)*. Consider a CG method of forms (2) and (3), and suppose that $0 < \gamma \le \|g_k\| \le \bar{\gamma}$ for all $k$. We say that the CG method possesses Property (*) if there exist constants $b > 1$ and $\lambda > 0$ such that, for all $k$, we get $|\beta_k| \le b$, and if $\|s_{k-1}\| \le \lambda$, we obtain $|\beta_k| \le \frac{1}{2b}$, where $s_{k-1} = x_k - x_{k-1}$.

Theorem 4. *Consider any CG method of forms (2) and (3) that achieves the following properties: *(I)*$\beta_k \ge 0$.*(II)*The sufficient descent condition (15) holds.*(III)*The Zoutendijk condition (25) is satisfied by the line search.*(IV)*Property (*) holds.*(V)*Assumptions 1 and 2 hold.**Then the iterates are globally convergent; that is, $\liminf_{k \to \infty} \|g_k\| = 0$.*

The next lemma shows that, if the gradients are bounded away from zero, the new coefficient possesses Property (*). The proof follows [16]; we state it for readability.

Lemma 5. *Consider a CG algorithm as defined in (2) and (3) with the parameter $\beta_k^*$, and suppose that $0 < \gamma \le \|g_k\| \le \bar{\gamma}$ for all $k$. If Assumptions 1 and 2 are satisfied, then Property (*) holds.*

*Proof. *Let
$$b = \frac{2 \bar{\gamma}^2}{\gamma^2} > 1, \qquad \lambda = \frac{\gamma^2}{4 L \bar{\gamma} b}.$$
In both branches of (20) we have $0 \le \beta_k^* < 2\|g_k\|^2 / \|g_{k-1}\|^2 \le 2\bar{\gamma}^2 / \gamma^2 = b$. Using the fact that the vectors $g_k$ and $(\|g_k\| / \|g_{k-1}\|) g_{k-1}$ have the same norm, we have
$$\beta_k^* \le \frac{2 \|g_k\| \, \|g_k - g_{k-1}\|}{\|g_{k-1}\|^2}.$$
By Assumption 2, if $\|s_{k-1}\| \le \lambda$, then
$$\beta_k^* \le \frac{2 \bar{\gamma} L \|s_{k-1}\|}{\gamma^2} \le \frac{2 \bar{\gamma} L \lambda}{\gamma^2} = \frac{1}{2b}.$$
The proof is complete.

Lemma 6. *The CG formula $\beta_k^*$ presented in Case 2 has the following properties:**(1) $\beta_k^* \ge 0$, since the condition $\|g_k\|^2 > |g_k^T g_{k-1}|$ forces the CG formula in (20) to be nonnegative.(2) $\beta_k^*$ satisfies Property (*), based on Lemma 5.(3) $\beta_k^*$ satisfies the sufficient descent condition, based on Theorem 3 with $\sigma \in (0, 1/4)$.*

By using Theorems 3 and 4 and Lemma 6 we have the following convergence result. The proof is similar to that of Theorem 4.3 presented in [16].

Theorem 7. *Suppose that Assumptions 1 and 2 hold. Consider the CG method of forms (2) and (3) with $\beta_k^*$ as in Case 2, where $\alpha_k$ is computed by (5) and (6) with $\sigma \in (0, 1/4)$; then $\liminf_{k \to \infty} \|g_k\| = 0$.*

#### 4. Numerical Results and Discussions


To evaluate the efficiency of the new method, we selected some of the test functions in Table 1 from CUTEr [24], Neculai [23], and Adorio and Diliman [25]. We performed a comparison with other CG methods, including the VHS, NPRP, PRP+, and FR formulas. The same tolerance $\epsilon$ is selected for all algorithms to investigate how rapidly each iterative method approaches the optimal solution. The gradient value is used as the stopping criterion; that is, the iteration stops when $\|g_k\| \le \epsilon$. We consider a method to have failed if the number of iterations exceeds 1000.