Advances in Artificial Neural Systems
Volume 2009, Article ID 846040, 11 pages
http://dx.doi.org/10.1155/2009/846040
Research Article

Building Recurrent Neural Networks to Implement Multiple Attractor Dynamics Using the Gradient Descent Method

Jun Namikawa and Jun Tani

Brain Science Institute, RIKEN, 2-1 Hirosawa, Wako City, Saitama 351-0198, Japan

Received 31 March 2008; Accepted 22 August 2008

Academic Editor: Akira Imada

Copyright © 2009 Jun Namikawa and Jun Tani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
