Advances in Artificial Intelligence
Volume 2014 (2014), Article ID 932485, 23 pages
Research Article

Reinforcement Learning in an Environment Synthetically Augmented with Digital Pheromones

Salvador E. Barbosa and Mikel D. Petty

University of Alabama in Huntsville, 301 Sparkman Drive, Huntsville, AL 35899, USA

Received 1 October 2013; Revised 19 January 2014; Accepted 31 January 2014; Published 13 March 2014

Academic Editor: Ozlem Uzuner

Copyright © 2014 Salvador E. Barbosa and Mikel D. Petty. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Reinforcement learning requires information about states, actions, and outcomes as the basis for learning. For many applications, it can be difficult to construct a representative model of the environment, either because the required information is unavailable or because the model's state space grows too large to permit a solution, based on the experience of prior actions, in a reasonable amount of time. An environment consisting solely of the occurrence or nonoccurrence of specific events attributable to a human actor may appear to lack the structure needed to position responding agents in time and space using reinforcement learning. Digital pheromones can synthetically augment such an environment with event sequence information, creating a more persistent and measurable imprint on the environment that supports reinforcement learning. We implemented this method and combined it with the ability of agents to learn from actions not taken, a concept known as fictive learning. The approach was tested against the historical sequence of Somali maritime pirate attacks from 2005 to mid-2012; a set of autonomous agents representing naval vessels successfully responded to an average of 333 of the 899 pirate attacks, outperforming the historical record of 139 successes.
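To make the pheromone mechanism concrete, the following is a minimal sketch of digital pheromone dynamics on a grid. It assumes a simple deposit/evaporate/propagate model; the grid coordinates, evaporation and propagation rates, and the event deposits are illustrative placeholders, not the parameters used in this study.

```python
# Sketch of a digital pheromone field: events deposit pheromone, which then
# evaporates and propagates to neighboring cells each time step, leaving a
# persistent, measurable trace that an RL agent can use as a state feature.
from collections import defaultdict

EVAPORATION = 0.1   # fraction of pheromone lost per time step (assumed value)
PROPAGATION = 0.05  # fraction spread to each of the four neighbors (assumed value)

def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def step(field, deposits):
    """Advance the pheromone field one time step.

    field    -- dict mapping grid cell (x, y) -> pheromone strength
    deposits -- dict mapping cells where events occurred this step -> amount
    """
    new_field = defaultdict(float)
    for cell, strength in field.items():
        remaining = strength * (1.0 - EVAPORATION)   # evaporation
        spread = remaining * PROPAGATION             # share sent to each neighbor
        for n in neighbors(cell):
            new_field[n] += spread
        new_field[cell] += remaining - 4 * spread    # what stays in place
    for cell, amount in deposits.items():            # new event deposits
        new_field[cell] += amount
    return new_field

# An event (e.g., an observed attack) deposits pheromone; on later steps the
# field decays and diffuses, encoding recency and location of past events.
field = {}
field = step(field, {(3, 3): 1.0})  # an event deposits pheromone at (3, 3)
field = step(field, {})             # no event; the field decays and spreads
```

An agent reading the pheromone level at its own cell obtains a scalar summary of nearby recent events, which is the kind of synthetic state information the abstract describes.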