Journal of Sensors
Volume 2013, Article ID 141353, 9 pages
http://dx.doi.org/10.1155/2013/141353
Research Article

Decision Making in Reinforcement Learning Using a Modified Learning Space Based on the Importance of Sensors

Muroran Institute of Technology, 27-1 Mizumoto, Muroran, Hokkaido 050-8585, Japan

Received 15 March 2013; Accepted 21 May 2013

Academic Editor: Guangming Song

Copyright © 2013 Yasutaka Kishima et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
