Journal of Sensors
Volume 2013, Article ID 141353, 9 pages
http://dx.doi.org/10.1155/2013/141353
Research Article

Decision Making in Reinforcement Learning Using a Modified Learning Space Based on the Importance of Sensors

Muroran Institute of Technology, 27-1 Mizumoto, Hokkaido, Muroran 0508585, Japan

Received 15 March 2013; Accepted 21 May 2013

Academic Editor: Guangming Song

Copyright © 2013 Yasutaka Kishima et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Many studies have been conducted on the application of reinforcement learning (RL) to robots. A general-purpose robot has redundant sensors and actuators because it is difficult to anticipate every environment the robot will face and every task it must execute. In this case, the learning space in RL contains redundancy, so the robot needs much time to learn a given task. In this study, we focus on the importance of each sensor to the robot's performance of a particular task; which sensors are relevant to a task differs from task to task. Using the importance of the sensors, we adjust the number of discrete states assigned to each sensor and thereby reduce the size of the learning space. In this paper, we define the importance of a sensor for a task as the correlation between the sensor's values and the reward. The robot calculates the importance of its sensors and shrinks the learning space accordingly. We propose this learning-space reduction method and construct a learning system by embedding it in RL. We confirm the effectiveness of the proposed system in experiments with a real robot.
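The importance measure described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes per-step sensor readings and rewards have been logged, scores each sensor by the absolute Pearson correlation between its values and the reward, and then assigns each sensor a number of discrete states proportional to that score (all function names, parameters, and the 2-to-8 state range are hypothetical choices for illustration).

```python
# Sketch of correlation-based sensor importance and state allocation.
# Assumptions: readings/rewards are equal-length numeric sequences logged
# during learning; min/max state counts are arbitrary illustrative values.

def sensor_importance(readings, rewards):
    """Absolute Pearson correlation between one sensor's values and the reward."""
    n = len(readings)
    mean_x = sum(readings) / n
    mean_y = sum(rewards) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(readings, rewards))
    var_x = sum((x - mean_x) ** 2 for x in readings)
    var_y = sum((y - mean_y) ** 2 for y in rewards)
    if var_x == 0 or var_y == 0:
        return 0.0  # a constant sensor (or constant reward) carries no signal
    return abs(cov / (var_x ** 0.5 * var_y ** 0.5))

def allocate_states(importances, min_states=2, max_states=8):
    """Assign each sensor a state count proportional to its relative importance."""
    top = max(importances) or 1.0
    return [max(min_states, round(max_states * w / top)) for w in importances]

# Example: one sensor tracks the reward perfectly, another never changes.
rewards = [0.0, 1.0, 2.0, 3.0]
sensor_a = [0.0, 1.0, 2.0, 3.0]   # informative
sensor_b = [5.0, 5.0, 5.0, 5.0]   # uninformative
scores = [sensor_importance(sensor_a, rewards),
          sensor_importance(sensor_b, rewards)]
print(scores)                    # ~[1.0, 0.0]
print(allocate_states(scores))   # [8, 2]
```

The uninformative sensor keeps only a coarse discretization, so the product of per-sensor state counts (the size of the learning space) shrinks without discarding the sensors that matter for the task.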