ISRN Artificial Intelligence
Volume 2012 (2012), Article ID 486361, 9 pages
An Advanced Conjugate Gradient Training Algorithm Based on a Modified Secant Equation
1Department of Mathematics, University of Patras, 26500 Patras, Greece
2Educational Software Development Laboratory, Department of Mathematics, University of Patras, 26500 Patras, Greece
Received 5 August 2011; Accepted 4 September 2011
Academic Editors: T. Kurita and Z. Liu
Copyright © 2012 Ioannis E. Livieris and Panagiotis Pintelas. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.