Discrete Dynamics in Nature and Society
Volume 2017, Article ID 4580206, 15 pages
https://doi.org/10.1155/2017/4580206
Research Article

A Decentralized Partially Observable Markov Decision Model with Action Duration for Goal Recognition in Real Time Strategy Games

College of Information System and Management, National University of Defense Technology, Changsha 410073, China

Correspondence should be addressed to Kai Xu; xukai09@nudt.edu.cn

Received 22 March 2017; Accepted 8 June 2017; Published 16 July 2017

Academic Editor: Filippo Cacace

Copyright © 2017 Peng Jiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. S. Ontañón, G. Synnaeve, A. Uriarte, F. Richoux, D. Churchill, and M. Preuss, “A survey of real-time strategy game AI research and competition in StarCraft,” IEEE Transactions on Computational Intelligence and AI in Games, vol. 5, no. 4, pp. 293–311, 2013.
  2. S. C. J. Bakkes, P. H. M. Spronck, and G. van Lankveld, “Player behavioural modelling for video games,” Entertainment Computing, vol. 3, no. 3, pp. 71–79, 2012.
  3. D. Churchill, SparCraft: open source StarCraft combat simulation, 2013, http://code.google.com/p/sparcraft/.
  4. H. H. Bui, S. Venkatesh, and G. West, “Policy recognition in the abstract hidden Markov model,” Journal of Artificial Intelligence Research, vol. 17, pp. 451–499, 2002.
  5. A. Hoogs and A. A. Perera, “Video activity recognition in the real world,” in Proceedings of the 23rd National Conference on Artificial Intelligence, pp. 1551–1554, Chicago, Ill, USA, July 2008.
  6. C. L. Baker, R. Saxe, and J. B. Tenenbaum, “Action understanding as inverse planning,” Cognition, vol. 113, no. 3, pp. 329–349, 2009.
  7. K. Yordanova, F. Krüger, and T. Kirste, “Context aware approach for activity recognition based on precondition-effect rules,” in Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops '12), pp. 602–607, IEEE, Lugano, Switzerland, March 2012.
  8. L. R. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
  9. S. Yue, K. Yordanova, F. Krüger, T. Kirste, and Y. Zha, “A decentralized partially observable decision model for recognizing the multiagent goal in simulation systems,” Discrete Dynamics in Nature and Society, vol. 2016, Article ID 5323121, 15 pages, 2016.
  10. M. Baykal-Gürsoy, Semi-Markov Decision Processes, Wiley Encyclopedia of Operations Research and Management Science, 2010.
  11. M. Ramírez and H. Geffner, “Goal recognition over POMDPs: inferring the intention of a POMDP agent,” in Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI '11), pp. 2009–2014, Barcelona, Spain, July 2011.
  12. B. Scherrer and F. Charpillet, “Cooperative co-learning: a model-based approach for solving multi agent reinforcement problems,” in Proceedings of the 14th International Conference on Tools with Artificial Intelligence, pp. 463–468, November 2002.
  13. D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques, MIT Press, 2009.
  14. S. Fine, Y. Singer, and N. Tishby, “The hierarchical hidden Markov model: analysis and applications,” Machine Learning, vol. 32, no. 1, pp. 41–62, 1998.
  15. N. Oliver, E. Horvitz, and A. Garg, “Layered representations for human activity recognition,” in Proceedings of the 4th IEEE International Conference on Multimodal Interfaces, pp. 3–8, IEEE, Pittsburgh, Pa, USA, 2002.
  16. L. Liao, D. Fox, and H. Kautz, “Hierarchical conditional random fields for GPS-based activity recognition,” in Proceedings of the International Symposium of Robotics Research, 2005.
  17. S. Hladky and V. Bulitko, “An evaluation of models for predicting opponent positions in first-person shooter video games,” in Proceedings of the IEEE Symposium on Computational Intelligence and Games (CIG '08), pp. 39–46, IEEE, Perth, Australia, December 2008.
  18. T. Duong, D. Phung, H. Bui, and S. Venkatesh, “Efficient duration and hierarchical modeling for human activity recognition,” Artificial Intelligence, vol. 173, no. 7-8, pp. 830–856, 2009.
  19. L. Getoor and B. Taskar, Eds., Introduction to Statistical Relational Learning, MIT Press, 2007.
  20. K. Kersting, L. De Raedt, and T. Raiko, “Logical hidden Markov models,” Journal of Artificial Intelligence Research, vol. 25, pp. 425–456, 2006.
  21. S. Raghavan, P. Singla, and R. J. Mooney, “Plan recognition using statistical-relational models,” in Plan, Activity, and Intent Recognition: Theory and Practice, G. Sukthankar, R. P. Goldman, C. Geib, D. V. Pynadath, and H. H. Bui, Eds., Morgan Kaufmann Publishers, Waltham, MA, USA, 2014.
  22. K. Kersting and L. De Raedt, “Towards combining inductive logic programming with Bayesian networks,” in Proceedings of the 11th International Conference on Inductive Logic Programming, pp. 118–131, 2001.
  23. C. W. Geib and M. Steedman, “On natural language processing and plan recognition,” in Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI '07), pp. 1612–1617, January 2007.
  24. W. Min, E. Y. Ha, J. Rowe, B. Mott, and J. Lester, “Deep learning-based goal recognition in open-ended digital games,” in Proceedings of the 10th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE '14), pp. 37–43, Raleigh, NC, USA, October 2014.
  25. S. Keren, A. Gal, and E. Karpas, “Goal recognition design with non-observable actions,” in Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI '16), pp. 3152–3158, February 2016.
  26. S. Keren, A. Gal, and E. Karpas, “Goal recognition design for non-optimal agents,” in Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI '15) and the 27th Innovative Applications of Artificial Intelligence Conference (IAAI '15), pp. 3298–3304, January 2015.
  27. S. Saria and S. Mahadevan, “Probabilistic plan recognition in multiagent systems,” in Proceedings of the 14th International Conference on Automated Planning and Scheduling (ICAPS '04), pp. 287–296, June 2004.
  28. J. Yin, D. H. Hu, and Q. Yang, “Spatio-temporal event detection using dynamic conditional random fields,” in Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI '09), pp. 1321–1327, 2009.
  29. S. Sarawagi and W. W. Cohen, “Semi-Markov conditional random fields for information extraction,” in Proceedings of the 17th Annual Conference on Neural Information Processing Systems (NIPS '04), pp. 1185–1192, 2004.
  30. T. T. Truyen, D. Q. Phung, H. H. Bui, and S. Venkatesh, “Hierarchical semi-Markov conditional random fields for recursive sequential data,” in Proceedings of the 22nd Annual Conference on Neural Information Processing Systems (NIPS '08), pp. 1657–1664, December 2008.
  31. T. D. Ullman, C. L. Baker, O. Macindoe, O. Evans, N. D. Goodman, and J. B. Tenenbaum, “Help or hinder: Bayesian models of social goal inference,” in Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS '09), pp. 1874–1882, December 2009.
  32. B. Riordan, S. Brimi, N. Schurr et al., “Inferring user intent with Bayesian inverse planning: making sense of multi-UAS mission management,” in Proceedings of the 20th Annual Conference on Behavior Representation in Modeling and Simulation (BRiMS '11), pp. 49–56, Sundance, Utah, USA, March 2011.
  33. Q. Yin, S. Yue, Y. Zha, and P. Jiao, “A semi-Markov decision model for recognizing the destination of a maneuvering agent in real time strategy games,” Mathematical Problems in Engineering, vol. 2016, Article ID 1907971, 12 pages, 2016.
  34. G. E. Monahan, “State of the art—a survey of partially observable Markov decision processes: theory, models, and algorithms,” Management Science, vol. 28, no. 1, pp. 1–16, 1982.
  35. M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, 2002.
  36. G. Sukthankar, C. Geib, H. H. Bui, D. V. Pynadath, and R. P. Goldman, Eds., Plan, Activity, and Intent Recognition: Theory and Practice, Newnes, 2014.