Advances in Artificial Intelligence
Volume 2013, Article ID 891501, 9 pages
Research Article

A Novel Method for Training an Echo State Network with Feedback-Error Learning

Department of Computer and Information Science, Norwegian University of Science and Technology, Sem Sælands vei 7-9, 7491 Trondheim, Norway

Received 31 May 2012; Revised 10 December 2012; Accepted 19 February 2013

Academic Editor: Ralf Moeller

Copyright © 2013 Rikke Amilde Løvlid. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. K. Doya, “Universality of fully connected recurrent neural networks,” Tech. Rep., University of California, San Diego, Calif, USA, 1993, submitted to IEEE Transactions on Neural Networks.
  2. M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Computer Science Review, vol. 3, no. 3, pp. 127–149, 2009.
  3. T. Natschläger, W. Maass, and H. Markram, “The ‘liquid computer’: a novel strategy for real-time computing on time series,” Special Issue on Foundations of Information Processing of TELEMATIK, vol. 8, no. 1, pp. 39–43, 2002.
  4. H. Jaeger, “A tutorial on training recurrent neural networks, covering BPTT, RTRL, and the echo state network approach,” Tech. Rep., Fraunhofer Institute for Autonomous Intelligent Systems, Sankt Augustin, Germany, 2002.
  5. J. J. Steil, “Backpropagation-decorrelation: online recurrent learning with O(N) complexity,” in Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN '04), pp. 843–848, July 2004.
  6. B. Schrauwen, D. Verstraeten, and J. van Campenhout, “An overview of reservoir computing: theory, applications and implementations,” in Proceedings of the 15th European Symposium on Artificial Neural Networks, vol. 4, pp. 471–482, 2007.
  7. C. Fernando and S. Sojakka, “Pattern recognition in a bucket,” in Advances in Artificial Life, Lecture Notes in Computer Science, pp. 588–597, Springer, Berlin, Germany, 2003.
  8. D. Nguyen-Tuong and J. Peters, “Model learning for robot control: a survey,” Cognitive Processing, vol. 12, no. 4, pp. 319–340, 2011.
  9. M. Oubbati, M. Schanz, and P. Levi, “Kinematic and dynamic adaptive control of a nonholonomic mobile robot using a RNN,” in Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA '05), pp. 27–33, June 2005.
  10. M. Kawato, “Feedback-error-learning neural network for supervised motor learning,” in Advanced Neural Computers, R. Eckmiller, Ed., pp. 365–372, Elsevier, Amsterdam, The Netherlands, 1990.
  11. M. I. Jordan and D. E. Rumelhart, “Forward models: supervised learning with a distal teacher,” Cognitive Science, vol. 16, no. 3, pp. 307–354, 1992.
  12. M. Kawato, “Internal models for motor control and trajectory planning,” Current Opinion in Neurobiology, vol. 9, no. 6, pp. 718–727, 1999.
  13. R. A. Løvlid, “Learning to imitate YMCA with an ESN,” in Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning (ICANN '12), Lecture Notes in Computer Science, pp. 507–514, Springer, 2012.
  14. R. A. Løvlid, “Learning motor control by dancing YMCA,” IFIP Advances in Information and Communication Technology, vol. 331, pp. 79–88, 2010.
  15. A. Tidemann and P. Öztürk, “Self-organizing multiple models for imitation: teaching a robot to dance the YMCA,” in IEA/AIE, vol. 4570 of Lecture Notes in Computer Science, pp. 291–302, Springer, Berlin, Germany, 2007.
  16. H. Jaeger et al., “Simple toolbox for ESNs,” 2009.
  17. F. Toutounian and A. Ataei, “A new method for computing Moore-Penrose inverse matrices,” Journal of Computational and Applied Mathematics, vol. 228, no. 1, pp. 412–417, 2009.
  18. F. Wyffels, B. Schrauwen, and D. Stroobandt, “Stable output feedback in reservoir computing using ridge regression,” in Proceedings of the 18th International Conference on Artificial Neural Networks, Part I (ICANN '08), pp. 808–817, Springer, 2008.
  19. H. Jaeger, “The echo state approach to analysing and training recurrent neural networks,” Tech. Rep., GMD, 2001.