The Scientific World Journal
Volume 2014, Article ID 120760, 6 pages
http://dx.doi.org/10.1155/2014/120760
Research Article

A Reward Optimization Method Based on Action Subrewards in Hierarchical Reinforcement Learning

1Suzhou Industrial Park Institute of Services Outsourcing, Suzhou, Jiangsu 215123, China
2School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China

Received 15 August 2013; Accepted 14 November 2013; Published 28 January 2014

Academic Editors: J. Shu and F. Yu

Copyright © 2014 Yuchen Fu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Reinforcement learning (RL) is a kind of interactive learning method whose main characteristics are "trial and error" and "delayed reward." A hierarchical reinforcement learning method based on action subrewards is proposed to address the "curse of dimensionality," in which the state space grows exponentially with the number of features, and the slow convergence that results from it. The method greatly reduces the state space and chooses actions purposefully and efficiently, so as to optimize the reward function and speed up convergence. The method is applied to online learning in the Tetris game, and the experimental results show that the convergence speed of the algorithm is evidently improved by combining hierarchical reinforcement learning with action subrewards. The hierarchical decomposition also alleviates the "curse of dimensionality" to a certain extent. Performance under different parameter settings is compared and analyzed as well.
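
To make the idea of action subrewards concrete, the following is a minimal sketch (in Python) of a one-step Q-learning update whose reward is a weighted sum of subrewards computed from hand-picked, Tetris-style features. The feature names, weights, and toy states here are illustrative assumptions, not the paper's exact formulation.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed learning parameters
ACTIONS = ["left", "right", "rotate", "drop"]

def subreward(features_before, features_after):
    """Composite reward: a weighted sum of per-action subrewards over
    abstract board features (weights are illustrative, not the paper's)."""
    d_holes = features_after["holes"] - features_before["holes"]
    d_height = features_after["height"] - features_before["height"]
    lines = features_after["lines"]
    return 10.0 * lines - 1.0 * d_holes - 0.5 * d_height

Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update using the shaped reward."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])

# Toy usage with hypothetical feature snapshots before and after an action:
before = {"holes": 2, "height": 5, "lines": 0}
after = {"holes": 2, "height": 6, "lines": 1}
s, s2 = ("holes2", "height5"), ("holes2", "height6")
a = choose_action(s)
update(s, a, subreward(before, after), s2)

Because the subrewards score each candidate action directly, "good" actions (clearing lines, avoiding holes) are reinforced from the start rather than only after a delayed terminal reward, which is one intuition for the faster convergence reported in the abstract.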