Mathematical Problems in Engineering
Volume 2014, Article ID 951367, 11 pages
http://dx.doi.org/10.1155/2014/951367
Research Article

Multiagent-Based Simulation of Temporal-Spatial Characteristics of Activity-Travel Patterns Using Interactive Reinforcement Learning

1School of Transportation, Southeast University, Si Pai Lou No. 2, Nanjing 210096, China
2Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139-4307, USA

Received 6 September 2013; Revised 6 November 2013; Accepted 9 December 2013; Published 30 January 2014

Academic Editor: Geert Wets

Copyright © 2014 Min Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
