Journal of Robotics
Volume 2011, Article ID 193146, 16 pages
http://dx.doi.org/10.1155/2011/193146
Research Article

Discovering and Characterizing Hidden Variables Using a Novel Neural Network Architecture: LO-Net

Soumi Ray and Tim Oates

Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250, USA

Received 1 June 2011; Revised 27 September 2011; Accepted 28 September 2011

Academic Editor: Ivo Bukovsky

Copyright © 2011 Soumi Ray and Tim Oates. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, 1993.
  2. J. Wnek and R. S. Michalski, “Hypothesis-driven constructive induction in AQ17-HCI: a method and experiments,” Machine Learning, vol. 14, no. 2, pp. 139–168, 1994.
  3. B. Schölkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
  4. S. E. Fahlman and C. Lebiere, “The cascade-correlation learning architecture,” in Advances in Neural Information Processing Systems, vol. 2, pp. 524–532, 1990.
  5. A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
  6. N. Friedman, “Learning belief networks in the presence of missing values and hidden variables,” in Proceedings of the 14th International Conference on Machine Learning, pp. 125–133, Morgan Kaufmann, 1997.
  7. L. P. Kaelbling, M. L. Littman, and A. R. Cassandra, “Planning and acting in partially observable stochastic domains,” Artificial Intelligence, vol. 101, no. 1-2, pp. 99–134, 1998.
  8. M. L. Littman, R. S. Sutton, and S. Singh, “Predictive representations of state,” in Advances in Neural Information Processing Systems, vol. 14, pp. 1555–1561, MIT Press, 2002.
  9. R. L. Rivest and R. E. Schapire, “Diversity-based inference of finite automata,” Journal of the ACM, vol. 41, no. 3, pp. 555–589, 1994.
  10. M. P. Holmes and C. L. Isbell, “Looping suffix tree-based inference of partially observable hidden state,” in Proceedings of the 23rd International Conference on Machine Learning (ICML '06), pp. 409–416, ACM, New York, NY, USA, June 2006.
  11. A. McCallum, “Instance-based state identification for reinforcement learning,” in Advances in Neural Information Processing Systems, vol. 7, pp. 377–384, MIT Press, 1994.
  12. G. E. Hinton, S. Osindero, and Y. W. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
  13. J. M. Wang, D. J. Fleet, and A. Hertzmann, “Gaussian process dynamical models,” in Advances in Neural Information Processing Systems, pp. 1441–1448, MIT Press, 2006.
  14. K. A. Bollen, “Latent variables in psychology and the social sciences,” Annual Review of Psychology, vol. 53, pp. 605–634, 2002.
  15. S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 1998.
  16. F. Takens, “Detecting strange attractors in turbulence,” Lecture Notes in Mathematics, vol. 898, pp. 366–381, 1981.
  17. D. Nguyen and B. Widrow, “Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '90), vol. 3, pp. 21–26, June 1990.
  18. E. N. Lorenz, “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, vol. 20, no. 2, pp. 130–141, 1963.
  19. A. Frank and A. Asuncion, “UCI machine learning repository,” 2010.
  20. S. Ray and T. Oates, “Discovering and characterizing hidden variables in streaming multivariate time series,” in Proceedings of the 9th International Conference on Machine Learning and Applications (ICMLA '10), pp. 913–916, IEEE Computer Society, 2010.
  21. J. Bergstra and Y. Bengio, “Slow, decorrelated features for pretraining complex cell-like networks,” in Advances in Neural Information Processing Systems, 2009.
  22. L. R. Rabiner and B. H. Juang, “An introduction to hidden Markov models,” IEEE ASSP Magazine, vol. 3, no. 1, pp. 4–16, 1986.