Journal of Optimization
Volume 2016, Article ID 2659012, 7 pages
http://dx.doi.org/10.1155/2016/2659012
Research Article

Evidence Maximization Technique for Training of Elastic Nets

1 Moscow Institute of Physics and Technology, Moscow 141700, Russia
2 Institute for Systems Analysis, Russian Academy of Sciences, Prospekt 60-Let Octyabria 9, Moscow 117312, Russia
3 M. V. Lomonosov Moscow State University, Leninskie Gory 1, Moscow 119991, Russia

Received 15 February 2016; Revised 10 May 2016; Accepted 15 May 2016

Academic Editor: Manlio Gaudioso

Copyright © 2016 Igor Dubnov et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
