The Scientific World Journal
Volume 2014, Article ID 120760, 6 pages
Research Article

A Reward Optimization Method Based on Action Subrewards in Hierarchical Reinforcement Learning

1Suzhou Industrial Park Institute of Services Outsourcing, Suzhou, Jiangsu 215123, China
2School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China

Received 15 August 2013; Accepted 14 November 2013; Published 28 January 2014

Academic Editors: J. Shu and F. Yu

Copyright © 2014 Yuchen Fu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
