Journal of Robotics
Volume 2010, Article ID 437654, 9 pages
Research Article

Emergence of Prediction by Reinforcement Learning Using a Recurrent Neural Network

Kenta Goto and Katsunari Shibata

Department of Electrical and Electronic Engineering, Oita University, 700 Dannoharu, Oita 870-1192, Japan

Received 6 November 2009; Revised 1 March 2010; Accepted 17 May 2010

Academic Editor: Noriyasu Homma

Copyright © 2010 Kenta Goto and Katsunari Shibata. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


To develop a robot that behaves flexibly in the real world, it is essential that the robot learn the various functions it needs autonomously, without receiving significant information from a human in advance. Among such functions, this paper focuses on learning "prediction," which has recently attracted attention from the viewpoint of autonomous learning. The authors point out that it is important to acquire through learning not only the way of predicting future information, but also the purposive extraction of the prediction target from sensor signals. It is suggested that through reinforcement learning using a recurrent neural network, both emerge purposively and simultaneously, without testing individually whether each piece of information is predictable. In a task where an agent receives a reward when it catches a moving object that may become invisible, the agent learned to detect the necessary components of the object's velocity before the object disappeared, to relay that information among some hidden neurons, and finally to catch the object at an appropriate position and timing, taking into account the effects of bounces off a wall after the object became invisible.
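The key mechanism the abstract describes is that a recurrent network's hidden state can carry information (such as the object's velocity) forward in time after the sensory input vanishes. A minimal sketch of this idea, assuming an Elman-type recurrent network with illustrative layer sizes and random untrained weights (none of these choices are taken from the paper), is:

```python
import numpy as np

# Minimal sketch (an assumption, not the authors' exact architecture) of an
# Elman-type recurrent network. The recurrent connections let the hidden
# state carry earlier observations forward after the input goes blank.
rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 4, 8, 3  # sensor inputs, hidden units, outputs (illustrative)

W_in = rng.normal(0.0, 0.1, (N_HID, N_IN))
W_rec = rng.normal(0.0, 0.1, (N_HID, N_HID))
W_out = rng.normal(0.0, 0.1, (N_OUT, N_HID))

def step(x, h):
    """One time step: the new hidden state feeds back on the next step,
    so information observed earlier (e.g. object velocity before it
    disappeared) can be relayed among hidden neurons over time."""
    h_new = np.tanh(W_in @ x + W_rec @ h)
    y = np.tanh(W_out @ h_new)  # e.g. critic value and actor outputs in RL
    return y, h_new

# Short episode: the object is visible for 3 steps, then the sensors go blank.
h = np.zeros(N_HID)
visible = [np.array([0.1, 0.2, 0.0, 1.0])] * 3  # last element: visibility flag
blank = [np.zeros(N_IN)] * 3                    # object has become invisible
outputs = []
for x in visible + blank:
    y, h = step(x, h)
    outputs.append(y)

# Even with blank inputs, the hidden state remains nonzero: information
# from the visible phase persists through the recurrent weights.
print(np.abs(h).max() > 0.0)
```

In the paper this memory is shaped by reinforcement learning (reward for catching the object), so the network learns purposively which sensor information to extract and retain; the sketch above only illustrates the recurrent carry-over itself, not the training.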