Mathematical Problems in Engineering
Volume 2017 (2017), Article ID 1425857, 11 pages
https://doi.org/10.1155/2017/1425857
Research Article

A Modified Nonlinear Conjugate Gradient Method for Engineering Computation

1Science College, Inner Mongolia University of Technology, Hohhot 010051, China
2Department of Information Engineering, College of Youth Politics, Inner Mongolia Normal University, Hohhot 010051, China

Correspondence should be addressed to Zaizai Yan; zz.yan@163.com

Received 6 July 2016; Revised 6 November 2016; Accepted 8 December 2016; Published 11 January 2017

Academic Editor: Yakov Strelniker

Copyright © 2017 Tiefeng Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A general criterion for the global convergence of the nonlinear conjugate gradient method is established, based on which the global convergence of a new modified three-parameter nonlinear conjugate gradient method is proved under some mild conditions. A large number of numerical experiments are executed and reported, showing that the proposed method is competitive and provides a practical alternative. Finally, an engineering example is analyzed for illustrative purposes.

1. Introduction

Unconstrained optimization methods are widely used in the fields of nonlinear dynamic systems and engineering computation to obtain numerical solutions of optimal control problems [1–4]. In this paper, we consider the unconstrained optimization problem
$$\min_{x \in \mathbb{R}^n} f(x), \tag{1}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. The nonlinear conjugate gradient (CG) method is highly useful for solving this kind of problem because of its simplicity and its very low memory requirement [1]. The iterative formula of the CG methods is given by
$$x_{k+1} = x_k + \alpha_k d_k, \tag{2}$$
where $\alpha_k$ is the step length obtained by some line search, such as an exact or inexact line search. In practical computation, exact line search is time-consuming and its workload is very large, so one usually adopts an inexact line search (see [5–7]). A widely used inexact line search is the strong Wolfe-Powell line search, which finds a step length $\alpha_k$ in (2) satisfying
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \tag{3}$$
$$|g(x_k + \alpha_k d_k)^T d_k| \le \sigma |g_k^T d_k|, \tag{4}$$
where $0 < \delta < \sigma < 1$. In this paper, the modified Wolfe-Powell line search finds a step length $\alpha_k$ in (2) satisfying (3) and the modified condition (5). Here $d_k$ is the search direction defined by
$$d_k = \begin{cases} -g_k, & k = 1, \\ -g_k + \beta_k d_{k-1}, & k \ge 2, \end{cases} \tag{6}$$
where $g_k$ denotes the gradient $\nabla f(x_k)$ and $\beta_k$ is a scalar chosen so that $d_k$ becomes the $k$th conjugate direction. There are many well-known formulae for the scalar $\beta_k$, for example,
$$\beta_k^{FR} = \frac{\|g_k\|^2}{\|g_{k-1}\|^2} \quad \text{(Fletcher-Reeves [8], 1964)},$$
$$\beta_k^{PRP} = \frac{g_k^T (g_k - g_{k-1})}{\|g_{k-1}\|^2} \quad \text{(Polak-Ribiere-Polyak [9], 1969)},$$
$$\beta_k^{DY} = \frac{\|g_k\|^2}{d_{k-1}^T (g_k - g_{k-1})} \quad \text{(Dai-Yuan [10], 1999)},$$
and other formulae (e.g., [11–13]), where $\|\cdot\|$ is the Euclidean norm of a vector and the superscript $T$ stands for the transpose. These methods are generally regarded as very efficient conjugate gradient methods in practical computation.
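
To make the iteration above concrete, the following Python sketch implements a generic nonlinear CG loop with the classical FR, PRP, and DY choices of $\beta_k$. It is only an illustration, not the method proposed in this paper: the function name `nonlinear_cg`, the tolerance, and the use of SciPy's Wolfe line search in place of the strong Wolfe-Powell search are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, beta_rule="FR", tol=1e-6, max_iter=1000):
    """Generic nonlinear CG sketch with classical beta formulas (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Inexact Wolfe line search (stand-in for the strong Wolfe-Powell search).
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:
            d, alpha = -g, 1e-4          # crude fallback if the line search fails
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        if beta_rule == "FR":            # Fletcher-Reeves
            beta = (g_new @ g_new) / (g @ g)
        elif beta_rule == "PRP":         # Polak-Ribiere-Polyak
            beta = (g_new @ y) / (g @ g)
        else:                            # Dai-Yuan
            beta = (g_new @ g_new) / (d @ y)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the 2-D Rosenbrock function from a standard starting point.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0]), beta_rule="PRP"))
```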

In recent decades, in order to obtain CG methods that have not only good convergence properties but also excellent computational performance, many researchers have studied the CG method extensively and obtained improved methods with good properties [14–20]. Li and Feng [21] gave a modified CG method which generates a sufficient descent direction and showed its global convergence under the strong Wolfe-Powell conditions. Dai and Wen [22] gave a scaled conjugate gradient method and proved its global convergence under the strong Wolfe-Powell conditions. Al-Baali [23] proved that the FR method satisfies the sufficient descent condition and converges globally for general objective functions if the strong Wolfe-Powell line search is used. Dai and Yuan [24] also introduced a three-parameter formula (10) for $\beta_k$. This formula includes the above three classes of CG methods as extreme cases, and global convergence of the three-parameter CG method was proved under the strong Wolfe-Powell line search. For particular choices of the parameters, the family reduces to the two-parameter family of conjugate gradient methods in [25], and with further restrictions it reduces to the one-parameter family in [26]. Therefore, the three-parameter family has the one-parameter family in [26] and the two-parameter family in [25] as its subfamilies. In addition, some hybrid methods can also be regarded as special cases of the three-parameter family [24]. For many of the modified CG methods above, global convergence was obtained under the strong Wolfe-Powell line search; in this paper, we study the CG method further, and our main aim is to improve the numerical performance of the CG method while keeping its global convergence under the modified Wolfe-Powell line search.

This paper is organized as follows. We first present a criterion for the global convergence of the CG method in the next section. In Section 3, we propose a new modified three-parameter conjugate gradient method and establish global convergence results for the corresponding algorithm under the modified Wolfe-Powell line search. Preliminary numerical results are contained in Section 4. One engineering example is analyzed for illustration in Section 5. Finally, conclusions appear in Section 6.

2. A Criterion for the Global Convergence of CG Method

In this section, we first adopt the following assumption, which is commonly used in the research literature.

Assumption 1. The function $f$ is $LC^1$ in a neighborhood $N$ of the level set $\Omega = \{x \in \mathbb{R}^n : f(x) \le f(x_1)\}$, and $f$ is bounded below on $\Omega$. Here, by $LC^1$, we mean that the gradient $g$ is Lipschitz continuous with modulus $L$; that is, there exists $L > 0$ such that $\|g(x) - g(y)\| \le L \|x - y\|$ for all $x, y \in N$.

Lemma 2 (Zoutendijk condition [27]). Suppose that Assumption 1 holds, $\{x_k\}$ is given by (2) and (6), and $\alpha_k$ is obtained by the modified Wolfe-Powell line search ((3), (5)), while the direction $d_k$ satisfies $g_k^T d_k < 0$. Then,
$$\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty.$$

Lemma 3 (see [28]). Suppose that > and are constants; if satisfy , then and .

Theorem 4. Suppose that the objective function $f$ satisfies Assumption 1 and that $\{x_k\}$ is given by (2) and (6), where $\alpha_k$ satisfies the modified Wolfe-Powell conditions (3) and (5) and $\beta_k$ satisfies the stated criterion condition; then either $g_k = 0$ holds for a certain $k$ or
$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Suppose, by contradiction, that the stated conclusion is not true. Then, in view of , there exists a constant , such that From (6), we have By multiplying on both sides of (17), we obtain Let and ; then, Thus, from (19) and , we get Note that and ; then, it follows from (20) that From Assumption 1, it follows that there exists a constant >0, such that and from (16), (21), and (22), we get From the above, it is obvious that On the other hand, for from (23), it follows that From (24) and (25) and Lemma 3, we have which contradicts Lemma 2. Therefore, the global convergence is proved.

3. The Global Convergence for the New Formula and Algorithm Frame

3.1. The New Formula and the Corresponding Properties

Based on formula (10), we put forward a new formula (27) for $\beta_k$: where , , , and . Because of possible negative values of , we use the maximum function to truncate at zero. Using the equality , we can rewrite the denominator of (27) as . When , the denominator given by (27) reduces to the denominator of . On the other hand, when , the numerator of (27) reduces to . When , the numerator of (27) reduces to the numerator of . From the above analysis, we can see that (27) is indeed an extension of (10). Owing to the parameters , , and , the family of conjugate gradient methods given by (2), (6), and (27) in this paper is more flexible. The numerical results in Section 4 demonstrate the influence of these parameters on formula (27).

Lemma 5. Suppose that Assumption 1 holds and that is given by (2) and (6), where satisfies the modified Wolfe-Powell conditions (3) and (5), while is computed by (27). Then, one has

Proof. When , we have Suppose hold, in formula (27), and the conclusion holds.
If , we have where ; by formulas (3) and (5), we have . Hence, When , , and , we obtain Due to and , through the above analysis, we have Hence,

The result shows that the search direction $d_k$ satisfies the descent condition ($g_k^T d_k < 0$); this condition is crucial for the convergence analysis of any conjugate gradient method.

Lemma 6. Suppose that Assumption 1 holds and that is given by (2) and (6), where satisfies the modified Wolfe-Powell conditions (3) and (5), while is computed by (27). Then, one has

Proof. Let ; when , then ; when , if , then ; if , .
By Lemma 5, then
To sum up, , and by , we have , and hence .

Theorem 7. Suppose that the objective function $f$ satisfies Assumption 1; consider methods (2) and (6), where $\beta_k$ is given by (27) and $\alpha_k$ satisfies the modified Wolfe-Powell line search conditions. Then either $g_k = 0$ holds for a certain $k$ or
$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. By Theorem 4 and Lemma 6, Theorem 7 is proved.

The result shows that the proposed algorithm with the modified Wolfe-Powell line search possesses global convergence.

3.2. Algorithm A

Based on the above discussion, we can now describe the algorithm framework for solving the unconstrained optimization problem (1) as follows (a minimal code sketch is given after the steps below).

Step 0. Choose an initial point , given constants , , and , , and , subject to , set , and let .

Step 1. If a stopping criterion is satisfied, then stop; otherwise, go to Step 2.

Step 2. Determine a step size by line searches (3) and (5).

Step 3. Let ; compute and by (27) and (6).

Step 4. Set and go to Step 1.
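
A minimal sketch of the above algorithm frame (Steps 0–4) is given below, assuming Python and SciPy. Because formula (27) is parameterized, the scalar $\beta_k$ is supplied as a user-provided callable `beta_new`; this callable, the gradient-norm stopping test, and the fallback step size are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np
from scipy.optimize import line_search

def algorithm_a(f, grad, x0, beta_new, eps=1e-6, max_iter=10000):
    """Sketch of the algorithm frame in Section 3.2.

    beta_new(g_new, g_old, d_old) should implement the proposed formula (27);
    it is passed in as a callable because (27) depends on three parameters.
    """
    # Step 0: choose the initial point, set d_1 = -g_1, and let k = 1.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for k in range(1, max_iter + 1):
        # Step 1: stopping criterion (here: small gradient norm).
        if np.linalg.norm(g) <= eps:
            return x, k
        # Step 2: determine a step size by an inexact (Wolfe-type) line search,
        # standing in for conditions (3) and (5).
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:
            alpha = 1e-4                 # crude safeguard for a failed search
        # Step 3: update the iterate, then compute beta and the direction by (27) and (6).
        x = x + alpha * d
        g_new = grad(x)
        d = -g_new + beta_new(g_new, g, d) * d
        g = g_new
        # Step 4: k := k + 1 (handled by the loop counter).
    return x, max_iter
```

Any of the classical formulas from Section 1 (for example, the DY choice) can be passed as `beta_new` in place of the proposed one for comparison.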

4. Numerical Experiments and Results

In this section, in order to show the performance of the given algorithm, we test our proposed algorithm (algorithm A) and the DY algorithm (given by formula (10)) on unconstrained optimization problems from Andrei [29] as follows; these test functions are often used in engineering fields (two of them are sketched as code after this list):
(1) Sphere function.
(2) Rastrigin function.
(3) Freudenstein and Roth function (Froth).
(4) Perturbed Quadratic diagonal function (Pqd).
(5) Extended White & Holst function (Ewh).
(6) Raydan 1 function.
(7) Raydan 2 function.
(8) Extended Trigonometric function (Etri).
(9) Extended Powell function (Epow).
(10) Wood function.
(11) Extended Wood function (Ewood).
(12) Perturbed Quadratic function (Perq).
(13) Extended Tridiagonal 1 function (Etri1).
(14) Extended Miele & Cantrell function (Emic).
(15) Extended Rosenbrock function (Erosen).
(16) Generalized Rosenbrock function (Grosen).
(17) QUARTC function.
(18) LIARWHD function.
(19) Staircase 1 function.
(20) Staircase 2 function.
(21) POWER function.
(22) Diagonal 4 function.
(23) Extended BD1 function (EBD1).
(24) CUBE function.
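
For concreteness, two of these benchmark functions are written out below as Python callables in their standard forms (Sphere and Rastrigin); the dimension and starting point are illustrative choices, and the definitions are the commonly used ones from the literature rather than reproductions of the exact expressions in [29].

```python
import numpy as np

def sphere(x):
    """Sphere function: f(x) = sum_i x_i^2, minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return np.sum(x**2)

def rastrigin(x):
    """Rastrigin function (standard form): f(x) = 10 n + sum_i (x_i^2 - 10 cos(2 pi x_i)),
    minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

# Illustrative evaluation at a hypothetical starting point in dimension 10.
x0 = np.full(10, 0.5)
print(sphere(x0), rastrigin(x0))
```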

Here, $x^*$ and $f(x^*)$ are the optimal solution and the function value at the optimal solution, respectively. For each algorithm, the parameters are chosen as and . All codes were written in MATLAB 7.5 and run on a Lenovo PC with a 1.90 GHz CPU, 2.43 GB of RAM, and the Windows XP operating system. The stopping criterion of the iteration is one of the following conditions: and the number of iterations reaching its maximum. If the latter condition occurs, the method is deemed to have failed on the corresponding test problem, and this failure is marked in the tables. For the first three test problems, we present experimental results to observe the behavior of the proposed and DY (given by formula (10)) conjugate gradient algorithms for different , different , and different . Details of the parameter schemes are given in Table 1. Numerical results for these test problems are listed in Tables 2, 3, 4, 5, 6, 7, and 8, respectively. Table 9 shows numerical results for the other test problems. Here, denotes the initial point of the test problem, and and are the iterate and the function value at the final iteration, respectively.

Table 1: Several schemes for the parameters set.
Table 2: The numerical results of sphere function for different scheme.
Table 3: The numerical results of sphere function for different scheme.
Table 4: The numerical results of Rastrigin function for different scheme.
Table 5: The numerical results of Rastrigin function for different scheme.
Table 6: The numerical results of Freudenstein and Roth function for different scheme.
Table 7: The numerical results of Freudenstein and Roth function for different scheme.
Table 8: The numerical results of Freudenstein and Roth function for different scheme.
Table 9: The numerical results of different function for , , and .

Based on the sixteen schemes (different parameter sets) in Table 1, we compared algorithm A with the DY (given by formula (10)) conjugate gradient algorithm from different initial points for three test problems. It is easy to see from Tables 2, 3, and 4 that both algorithms succeed under all schemes (parameter sets) for the first and second test problems. From Tables 5, 6, 7, 8, and 9, we can see that algorithm A is more successful than the DY (given by formula (10)) conjugate gradient algorithm. For example, for the first three test problems under the different schemes (parameter sets), algorithm A achieved satisfactory iterates and final function values from all initial points, whereas under some schemes (parameter sets) the DY (given by formula (10)) conjugate gradient algorithm fails to find a satisfactory iterate and final function value. From Tables 5–9, we can also see that the DY (given by formula (10)) conjugate gradient algorithm sometimes fails under certain schemes (parameter sets), whereas our algorithm fails only once. This indicates that the algorithm is not very sensitive to changes of the parameter values in formula (27). We also present the Dolan and Moré [30] performance profiles for algorithm A and the DY method. Note that the performance ratio gives, for each solver, the fraction of test problems solved within a given factor of the smallest cost. As we can see from Figure 1, algorithm A is superior to the DY method with respect to the absolute errors of versus . Hence, compared with the DY (given by formula (10)) conjugate gradient algorithm, algorithm A has higher stability and adaptability, and it yields a better numerical performance. From the above analysis, we can conclude that algorithm A is competitive for solving unconstrained optimization problems.
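
For reference, a Dolan-Moré performance profile of the kind shown in Figure 1 can be computed as in the short sketch below. The cost matrix (for example, absolute errors or iteration counts, with infinity marking a failure) and the two solver labels are hypothetical inputs used only to illustrate the construction.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, labels, tau_max=10.0):
    """Dolan-More performance profile.

    costs: array of shape (n_problems, n_solvers) with positive costs;
           np.inf marks a failure of that solver on that problem.
    """
    ratios = costs / costs.min(axis=1, keepdims=True)   # performance ratios r_{p,s}
    taus = np.linspace(1.0, tau_max, 200)
    for s, label in enumerate(labels):
        # rho_s(tau): fraction of problems solved within a factor tau of the best solver.
        rho = [(ratios[:, s] <= t).mean() for t in taus]
        plt.step(taus, rho, where="post", label=label)
    plt.xlabel("tau")
    plt.ylabel("rho_s(tau)")
    plt.legend()
    plt.show()

# Hypothetical costs for two solvers on five problems (np.inf = failure).
costs = np.array([[3.0, 5.0],
                  [2.0, 2.0],
                  [7.0, np.inf],
                  [1.0, 4.0],
                  [6.0, 6.5]])
performance_profile(costs, ["algorithm A", "DY"])
```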

Figure 1: Performance profile on the absolute errors of versus (algorithm A versus DY).

5. Application to Engineering

In this section, we present a real example to illustrate the application of the algorithm proposed in this article. The example concerns the results of endurance tests on deep groove ball bearings. For illustrative purposes, we apply the real dataset of 23 observed failure times that was initially reported by Lieblein and Zelen [31] and later analyzed by a number of authors, including Abouammoh and Alshingiti [32] and Krishna and Kumar [33]. The following dataset represents the number of millions of revolutions before failure for each of the 23 ball bearings in a life test: 17.88, 28.92, 33.0, 41.52, 42.12, 45.60, 48.40, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, and 173.4. Dey and Pradhan [34] indicated that the Weibull distribution fits this dataset better than the exponential, inverted exponential, and gamma distributions. A random variable follows the Weibull distribution with probability density function (pdf) , where and are the shape and scale parameters, respectively. Let denote the number of observed failures and denote the complete sample; the log-likelihood function is . The corresponding likelihood equations are (47). Since a closed-form solution of and does not exist, a numerical technique (minimization of the negative log-likelihood) must be used to find the maximum likelihood estimates (MLEs) of and for any given dataset. By using algorithm A, we obtain and . Dey and Pradhan [34] obtained the MLEs of the parameters as . From the numerical results, we can see that our algorithm is a viable alternative for this real unconstrained optimization problem.
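
As a hedged illustration of how this maximum likelihood problem can be cast as an unconstrained minimization, the sketch below minimizes the negative Weibull log-likelihood for the ball-bearing data using a log-parameterization of the shape and scale; SciPy's CG minimizer is used purely as a stand-in for algorithm A, and the resulting estimates are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Failure times (millions of revolutions) of the 23 deep groove ball bearings.
data = np.array([17.88, 28.92, 33.0, 41.52, 42.12, 45.60, 48.40, 51.84, 51.96,
                 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64,
                 105.12, 105.84, 127.92, 128.04, 173.4])

def neg_log_likelihood(theta, x):
    """Negative Weibull log-likelihood with theta = (log shape, log scale),
    so that the problem stays unconstrained."""
    k, lam = np.exp(theta)               # shape k > 0, scale lam > 0
    n = x.size
    loglik = (n * np.log(k) - n * k * np.log(lam)
              + (k - 1.0) * np.log(x).sum() - ((x / lam) ** k).sum())
    return -loglik

# Minimize with a conjugate-gradient-type solver (stand-in for algorithm A).
res = minimize(neg_log_likelihood, x0=np.log([1.0, data.mean()]),
               args=(data,), method="CG")
shape_hat, scale_hat = np.exp(res.x)
print(shape_hat, scale_hat)
```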

6. Conclusion

In this article, by modifying the scalar $\beta_k$, we have proposed a three-parameter family of conjugate gradient methods for solving large-scale unconstrained optimization problems. Global convergence of the proposed method under the modified Wolfe-Powell line search and a general convergence criterion are established, respectively. Numerical results show that our algorithm is competitive for solving unconstrained optimization problems, and the proposed method is a viable alternative for analyzing reliability data.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This research work is supported by the National Natural Science Foundation of China (11361036, 11461051), the Joint Specialized Research Fund for the Doctoral Program of Higher Education, Inner Mongolia Education Department (20131514110005), the Higher School Science Research Project of Inner Mongolia (NJZY16394), the Natural Science Foundation of Inner Mongolia (2014MS0112), and the Science Research Project of Inner Mongolia University of Technology (ZD201409).

References

[1] B. Balaram, M. D. Narayanan, and P. K. Rajendrakumar, "Optimal design of multi-parametric nonlinear systems using a parametric continuation based genetic algorithm approach," Nonlinear Dynamics, vol. 67, no. 4, pp. 2759–2777, 2012.
[2] L. Zhang, H. Gao, Z. Chen, Q. Sun, and X. Zhang, "Multi-objective global optimal parafoil homing trajectory optimization via Gauss pseudospectral method," Nonlinear Dynamics, vol. 72, no. 1-2, pp. 1–8, 2013.
[3] X.-z. Jiang and J.-b. Jian, "A sufficient descent Dai-Yuan type nonlinear conjugate gradient method for unconstrained optimization problems," Nonlinear Dynamics, vol. 72, no. 1-2, pp. 101–112, 2013.
[4] L. Zhu, G. Wang, and H. Chen, "Estimating steady multi-variables inverse heat conduction problem by using conjugate gradient method," Proceedings of the Chinese Society of Electrical Engineering, vol. 31, no. 8, pp. 58–61, 2011.
[5] Z. Wei, G. Li, and L. Qi, "New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems," Applied Mathematics and Computation, vol. 179, no. 2, pp. 407–430, 2006.
[6] Z. X. Wei, G. Y. Li, and L. Q. Qi, "Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems," Mathematics of Computation, vol. 77, no. 264, pp. 2173–2193, 2008.
[7] G. Yuan, Z. Wei, and Q. Zhao, "A modified Polak-Ribière-Polyak conjugate gradient algorithm for large-scale optimization problems," IIE Transactions, vol. 46, no. 4, pp. 397–413, 2014.
[8] R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, pp. 149–154, 1964.
[9] B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
[10] Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
[11] M. R. Hestenes and E. Stiefel, "Method of conjugate gradient for solving linear equations," Journal of Research of the National Bureau of Standards, vol. 49, pp. 409–436, 1952.
[12] R. Fletcher, Practical Methods of Optimization. Vol. I, A Wiley-Interscience Publication, John Wiley & Sons, Chichester, UK, 2nd edition, 1987.
[13] Y. Liu and C. Storey, "Efficient generalized conjugate gradient algorithms, part 1: theory," Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
[14] H. Schramm and J. Zowe, "A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results," SIAM Journal on Optimization, vol. 2, no. 1, pp. 121–152, 1992.
[15] L. Lukšan and J. Vlček, "A bundle-Newton method for nonsmooth unconstrained minimization," Mathematical Programming, vol. 83, no. 3, pp. 373–391, 1998.
[16] G. Yuan, Z. Wei, and G. Li, "A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs," Journal of Computational and Applied Mathematics, vol. 255, pp. 86–96, 2014.
[17] Z.-F. Dai, "Two modified HS type conjugate gradient methods for unconstrained optimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 3, pp. 927–936, 2011.
[18] Z. Dai and F. Wen, "Another improved Wei-Yao-Liu nonlinear conjugate gradient method with sufficient descent property," Applied Mathematics and Computation, vol. 218, no. 14, pp. 7421–7430, 2012.
[19] X. Z. Jiang, L. Han, and J. B. Jian, "A globally convergent mixed conjugate gradient method with Wolfe-Powell line search," Mathematica Numerica Sinica, vol. 34, no. 1, pp. 103–112, 2012.
[20] K. Sugiki, Y. Narushima, and H. Yabe, "Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization," Journal of Optimization Theory and Applications, vol. 153, no. 3, pp. 733–757, 2012.
[21] M. Li and H. Feng, "A sufficient descent LS conjugate gradient method for unconstrained optimization problems," Applied Mathematics and Computation, vol. 218, no. 5, pp. 1577–1586, 2011.
[22] Z. Dai and F. Wen, "A modified CG-DESCENT method for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 235, no. 11, pp. 3332–3341, 2011.
[23] M. Al-Baali, "Descent property and global convergence of the Fletcher-Reeves method with inexact line search," IMA Journal of Numerical Analysis, vol. 5, no. 1, pp. 121–124, 1985.
[24] Y. H. Dai and Y. Yuan, "A three-parameter family of nonlinear conjugate gradient methods," Mathematics of Computation, vol. 70, no. 235, pp. 1155–1167, 2001.
[25] L. Nazareth, "Conjugate-gradient methods," in Encyclopedia of Optimization, C. Floudas and P. Pardalos, Eds., Kluwer Academic Publishers, Boston, Mass, USA, 1999.
[26] Y. H. Dai and Y. Yuan, "A class of globally convergent conjugate gradient methods," Research Report ICM-98-030, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 1998.
[27] G. Zoutendijk, "Nonlinear programming, computational methods," in Integer and Nonlinear Programming, pp. 37–86, North-Holland, Amsterdam, The Netherlands, 1970.
[28] Y. H. Dai and Y. X. Yuan, Nonlinear Conjugate Gradient Method, Science Press of Shanghai, Shanghai, China, 2000.
[29] N. Andrei, "An unconstrained optimization test functions collection," Advanced Modeling and Optimization, vol. 10, no. 1, pp. 147–161, 2008.
[30] E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.
[31] J. Lieblein and M. Zelen, "Statistical investigation of the fatigue life of deep-groove ball bearings," Journal of Research of the National Bureau of Standards, vol. 57, no. 5, pp. 273–316, 1956.
[32] A. M. Abouammoh and A. M. Alshingiti, "Reliability estimation of generalized inverted exponential distribution," Journal of Statistical Computation and Simulation, vol. 79, no. 11-12, pp. 1301–1315, 2009.
[33] H. Krishna and K. Kumar, "Reliability estimation in generalized inverted exponential distribution with progressively type-II censored sample," Journal of Statistical Computation and Simulation, vol. 83, no. 6, pp. 1007–1019, 2013.
[34] S. Dey and B. Pradhan, "Generalized inverted exponential distribution under hybrid censoring," Statistical Methodology, vol. 18, pp. 101–114, 2014.