The Scientific World Journal
Volume 2014, Article ID 815156, 9 pages
http://dx.doi.org/10.1155/2014/815156
Research Article

The Generalization Complexity Measure for Continuous Input Data

1Departamento de Lenguajes y Ciencias de la Computación, Universidad de Málaga, 29071 Málaga, Spain
2Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, 5000 Córdoba, Argentina

Received 18 December 2013; Accepted 5 March 2014; Published 10 April 2014

Academic Editors: B. Liu and T. Zhao

Copyright © 2014 Iván Gómez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. E. B. Baum and D. Haussler, “What size net gives valid generalization?” Neural Computation, vol. 1, no. 1, pp. 151–160, 1990.
  2. A. R. Barron, “Approximation and estimation bounds for artificial neural networks,” Machine Learning, vol. 14, no. 1, pp. 115–133, 1994.
  3. L. S. Camargo and T. Yoneyama, “Specification of training sets and the number of hidden neurons for multilayer perceptrons,” Neural Computation, vol. 13, no. 12, pp. 2673–2680, 2001.
  4. F. Scarselli and A. Chung Tsoi, “Universal approximation using feedforward neural networks: a survey of some existing methods, and some new results,” Neural Networks, vol. 11, no. 1, pp. 15–37, 1998.
  5. D. Hunter, H. Yu, M. S. Pukish III, J. Kolbusz, and B. M. Wilamowski, “Selection of proper neural network sizes and architectures—a comparative study,” IEEE Transactions on Industrial Informatics, vol. 8, no. 2, pp. 228–240, 2012.
  6. G. Mirchandani and W. Cao, “On hidden nodes for neural nets,” IEEE Transactions on Circuits and Systems, vol. 36, no. 5, pp. 661–664, 1989.
  7. M. Arai, “Bounds on the number of hidden units in binary-valued three-layer neural networks,” Neural Networks, vol. 6, no. 6, pp. 855–860, 1993.
  8. Z. Zhang, X. Ma, and Y. Yang, “Bounds on the number of hidden neurons in three-layer binary neural networks,” Neural Networks, vol. 16, no. 7, pp. 995–1002, 2003.
  9. M. Bacauskiene, V. Cibulskis, and A. Verikas, “Selecting variables for neural network committees,” in Advances in Neural Networks—ISNN, J. Wang, Z. Yi, J. M. Zurada, B.-L. Lu, and H. Yin, Eds., vol. 3971 of Lecture Notes in Computer Science, pp. 837–842, Springer, 2006.
  10. H. C. Yuan, F. L. Xiong, and X. Y. Huai, “A method for estimating the number of hidden neurons in feed-forward neural networks based on information entropy,” Computers and Electronics in Agriculture, vol. 40, no. 1–3, pp. 57–64, 2003.
  11. Y. Liu, J. A. Starzyk, and Z. Zhu, “Optimizing number of hidden neurons in neural networks,” in Proceedings of the IASTED International Conference on Artificial Intelligence and Applications (AIA '07), pp. 121–126, ACTA Press, Anaheim, Calif, USA, February 2007.
  12. I. Wegener, The Complexity of Boolean Functions, John Wiley & Sons, 1987.
  13. J. Hastad, “Almost optimal lower bounds for small depth circuits,” Advances in Computing Research, vol. 5, pp. 143–170, 1989.
  14. I. Parberry, Circuit Complexity and Neural Networks, MIT Press, 1994.
  15. T. K. Ho and M. Basu, “Complexity measures of supervised classification problems,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 3, pp. 289–300, 2002.
  16. M. Basu and T. K. Ho, Data Complexity in Pattern Recognition (Advanced Information and Knowledge Processing), Springer, New York, NY, USA, 2006.
  17. J. S. Sánchez, R. A. Mollineda, and J. M. Sotoca, “An analysis of how training data complexity affects the nearest neighbor classifiers,” Pattern Analysis and Applications, vol. 10, no. 3, pp. 189–201, 2007.
  18. W. Duch, N. Jankowski, and T. Maszczyk, “Make it cheap: learning with O(nd) complexity,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '12), pp. 1–4, 2012.
  19. L. Franco, “Generalization ability of Boolean functions implemented in feedforward neural networks,” Neurocomputing, vol. 70, no. 1–3, pp. 351–361, 2006.
  20. L. Franco and M. Anthony, “The influence of oppositely classified examples on the generalization complexity of Boolean functions,” IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 578–590, 2006.
  21. I. Gómez, L. Franco, and J. M. Jerez, “Neural network architecture selection: can function complexity help?” Neural Processing Letters, vol. 30, no. 2, pp. 71–87, 2009.
  22. J. L. Walsh, “A closed set of normal orthogonal functions,” The American Journal of Mathematics, vol. 45, pp. 5–24, 1923.
  23. K. G. Beauchamp, Walsh Functions and Their Applications, Academic Press, 1975.
  24. W. A. Evans, “Sine-wave synthesis using Walsh functions,” IEE Proceedings G, vol. 134, no. 1, pp. 1–6, 1987.
  25. W. K. Pratt, J. Kane, and H. C. Andrews, “Hadamard transform image coding,” Proceedings of the IEEE, vol. 57, pp. 58–68, 1969.
  26. R. E. Mickens, Mathematical Methods for the Natural and Engineering Sciences, vol. 65 of Series on Advances in Mathematics for Applied Sciences, World Scientific, 2004.