Research Article  Open Access
The Study of Reinforcement Learning for Traffic Self-Adaptive Control under Multiagent Markov Game Environment
Abstract
The urban traffic self-adaptive control problem is dynamic and uncertain, so the states of the traffic environment are difficult to observe. An efficient agent controlling a single intersection can be discovered automatically via multiagent reinforcement learning. However, in the majority of previous work on this approach, each agent required perfectly observed information when interacting with the environment and learned individually, with little effective coordination. This study casts traffic self-adaptive control as a multiagent Markov game problem. The design employs a traffic signal control agent (TSCA) for each signalized intersection that coordinates with neighboring TSCAs. A mathematical model of the TSCAs' interaction is built on a nonzero-sum Markov game, which is applied to let the TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under joint actions and imperfect information. The convergence of the proposed algorithm is analyzed theoretically. Simulation results show that the proposed method is convergent and effective in a realistic traffic self-adaptive control setting.
1. Introduction
As car ownership rates and traffic volumes have steadily increased over the last decades, existing road infrastructure is today often strained nearly to its limits. Continuous expansion of this infrastructure, however, is neither feasible nor desirable for spatial, economic, and environmental reasons. It is therefore of paramount importance to optimize the flow of traffic within the given infrastructure. Traffic self-adaptive control of multiple intersections is synergetic and has the potential to significantly alleviate traffic congestion in urban transportation networks, in contrast to the commonly used fixed-timing and actuated control systems. Traditional traffic-adaptive control systems such as TRANSYT, SCOOT, SCATS, and sophisticated dynamic programming approaches [1] have no mechanism for learning from feedback on the quality of their model, which may lead to systematic errors.
Several researchers have applied classical control methods such as fuzzy logic [2], neural networks [3], and evolutionary algorithms [4] to traffic self-adaptive control. These methods perform well but cannot adapt to the changing characteristics of traffic flow. Reinforcement learning (RL) [5] is able to learn perpetually and improve service over time. Multiagent reinforcement learning extends RL to multiple agents in a stochastic environment. The decentralized traffic control problem is an excellent testbed for multiagent reinforcement learning due to the inherent dynamics and stochastic nature of the traffic system [6–9].
There are two shortcomings in the application of multiagent reinforcement learning to the traffic self-adaptive control problem, as discussed below.

(1) Less Efficient Coordination. The majority of previous studies consider independent learning agents that do not include any explicit coordination mechanism [6–14]. Only a few consider coordination between the learning agents. Kuyer et al. [15] consider an explicit two-level coordination mechanism between the learning agents that extends Wiering [6] using coordination graphs. The max-plus algorithm is used to estimate the optimal joint action by sending locally optimized messages among connected agents. However, max-plus is computationally demanding, so the agents report their current best action at any time even if the action found so far may be suboptimal. El-Tantawy and Abdulhai [16] presented multiagent reinforcement learning for an integrated network of adaptive traffic signal controllers (MARLIN-ATSC), which maintains a coordination mechanism (indirect and direct coordination) between agents without compromising the dimensionality of the problem. Indirect coordination is realized by best-response multiagent learning in nonstationary environments, and direct coordination is typically based on communication.

(2) The Assumption of Complete Knowledge. Since the traffic environment changes over time and an intersection cannot fully observe other intersections' information such as traffic arrival rates, vehicle queues, or delays, it is overly idealized to assume that the utility matrix of each agent is public, that is, that perfectly observed information is available when agents interact with the environment. Under this assumption, agents may select individual actions that are locally optimal but that together result in global inefficiencies. The assumption is therefore unrealistic.
It has been argued that a model-based RL approach adds unnecessary complexity compared with model-free learning. To overcome the above deficiencies of previous approaches, this paper proposes a multiagent Markov game reinforcement learning method for optimizing traffic self-adaptive control. We define a TSCA for each signalized intersection that coordinates with neighboring agents. A mathematical model of the TSCAs' interaction is built on a nonzero-sum Markov game, which is applied to let the TSCAs learn how to cooperate. A multiagent Markov game reinforcement learning approach is constructed on the basis of single-agent Q-learning. This method lets each TSCA learn to update its Q-values under joint actions and imperfect information. The convergence and effectiveness of the proposed algorithm are verified.
2. Structure Model for TSCA
We define a TSCA for each urban signalized intersection that is in charge of controlling all signal phases. Its main function is to establish the control strategy to be implemented by the signal lights according to the current traffic state of its own intersection and of its neighbors [14], thereby improving the intersection's traffic flow conditions. Figure 1 shows the structure model of the TSCA. The reinforcement signal is a reward function, which will be defined in Section 3.
As shown in Figure 1, the TSCA is mainly composed of a learning module, an action decision-making module, a communication module, and a coordination module. The learning module infers whether reasonable regulations can be derived from real-time observational data; if so, it executes these regulations and determines the signal control plan. The coordination module analyzes the TSCA's present traffic state to decide whether it is necessary to send messages to adjacent TSCAs, and it handles TSCA coordination. The communication module is responsible for communication with adjacent TSCAs. The action decision-making module handles the TSCA's reasoning and decision making.
In most cases, average delay time is sufficient to determine an intersection's relative traffic performance; that is, lower average delay implies a lower ratio of stopped vehicles and a shorter total queue length. The vehicle delay at a coordinated intersection is composed of normal delay, random delay, and oversaturation delay, the same as at an isolated intersection. Computing the normal delay requires the vehicle arrival and departure diagram. Since the incoming traffic flow of an intersection depends on the green time and release ratio of the upstream intersections, the arrival rate is not a constant but a time-varying function. For random and oversaturation delay, the random fluctuation of traffic flow within each cycle under coordinated control is far smaller than under isolated control, so the delay value decreases. The intersection's delay time can adopt the following transient functional model [17], whose parameters are the average delay time per vehicle (s/pcu), the signal cycle length (s), the split, the flow ratio, the average length of the oversaturated stopped queue (pcu), the intersection's traffic capacity (pcu/h), the time interval (h), the intersection's saturation, the vehicle arrival rate (pcu/h), and the length of the red time.
3. Mathematical Model for TSCAs' Interaction Based on Nonzero-Sum Markov Game
Game theory is a natural mathematical tool for studying interaction. The interaction between TSCAs can be cast as a game model [18]. In the dynamic traffic signal control system of multiple intersections, each intersection's signal timing scheme directly affects its neighbors' traffic and indirectly affects nonneighboring intersections' traffic. Although a TSCA in a traffic network is incapable of observing the conditions of the entire network, it can observe the conditions of the neighboring TSCAs. To avoid the excessive complexity and frequency of TSCA interaction brought by a growing number of controlled intersections, we restrict every TSCA to interacting only with its adjacent TSCAs. Since two adjacent TSCAs can form a coalition to achieve jointly and locally optimal performance given the openness of bilateral information, the interaction between two adjacent TSCAs conforms to a two-matrix nonzero-sum cooperative game [19].
A Markov decision process (MDP), which has been widely studied, represents the problem of a single agent in multiple states. By contrast, two-matrix games address multiple agents in a single state. A Markov game can be regarded as the combination of an MDP and a two-matrix game, defining a framework for multiple agents and multiple environment states [18–20]. Since the traffic environment faced by the TSCAs is dynamic, complex, uncertain, and open, an n-player nonzero-sum Markov game is suitable for modeling the TSCAs' interaction.
An n-player nonzero-sum Markov game can be defined by a tuple (N, S, A, P, R), whose components are explained below in terms of traffic self-adaptive control.
N is a finite set of interacting TSCAs.
S is a finite set of states. Let s_i be the local state of TSCA i; the global state s = (s_1, ..., s_n) is decomposed into these local states. s_i is defined by a vector of two components. The first component is the position of the first vehicle approaching TSCA i from each of the west, north, east, and south directions. Since the size of the state space grows rapidly if the exact distance from the vehicle to the TSCA is used, all lanes connected to the TSCA are divided into a certain number of equal sections, which are encoded sequentially starting from the vehicle nearest to the TSCA; the section's code then defines this part of the local state. The second component is the maximum queue length associated with each direction, taken over that direction's lanes [16], where q_l(t) is the number of queued vehicles in lane l at time t. A vehicle is considered queued if its speed is below a certain speed threshold; q_l(t) is then the count of such vehicles among the set of vehicles traveling on lane l at time t.
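The local state described above can be sketched in a few lines of code. This is a minimal illustration rather than the paper's implementation: the section length, the speed threshold value, the dictionary layout, and the -1 code for an empty approach are all assumptions made for the example.

```python
# Sketch of a TSCA's local state: for each direction, the section code of
# the first approaching vehicle plus the maximum queue length over lanes.

SECTION_LENGTH_M = 20.0   # assumed section size (the paper uses 20 m segments)
SPEED_THRESHOLD = 2.0     # assumed threshold below which a vehicle counts as queued (m/s)

def section_code(distance_to_stopline_m):
    """Encode a vehicle's position as the index of its road section,
    counted outward from the intersection (0 = nearest section)."""
    return int(distance_to_stopline_m // SECTION_LENGTH_M)

def queue_length(lane_vehicles):
    """Number of queued vehicles on one lane: speed below the threshold."""
    return sum(1 for v in lane_vehicles if v["speed"] < SPEED_THRESHOLD)

def local_state(approach):
    """Local state of one TSCA. `approach` maps a direction name to a list
    of lanes, each lane being a list of {'distance', 'speed'} vehicles."""
    state = {}
    for direction, lanes in approach.items():  # 'west', 'north', 'east', 'south'
        vehicles = [v for lane in lanes for v in lane]
        first = min((v["distance"] for v in vehicles), default=None)
        code = section_code(first) if first is not None else -1  # -1: empty approach
        max_queue = max((queue_length(lane) for lane in lanes), default=0)
        state[direction] = (code, max_queue)
    return state
```

Encoding positions as section indices rather than exact distances keeps the state space finite, as the text argues.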
A = A_1 × ... × A_n is the set of joint signal timing actions of the TSCAs, where A_i is the finite set of signal timing actions of TSCA i. An action a_i represents the current phase together with an adjustment of the green time for the current green phase. The adjustment of green time is determined by the relation between section length and vehicle velocity, for example, {green time plus 1 s, green time plus 2 s, green time minus 1 s, green time minus 2 s, unchanged}. The phase timing scheme generically consists of east-west straight and right turn, south-north straight and right turn, east-west left turn, and south-north left turn. a = (a_1, ..., a_n) represents a joint signal timing action of the TSCAs.
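As an illustration, a single TSCA's action set can be enumerated as phase/green-adjustment pairs. The phase names and the list layout below are assumptions made for this sketch; the paper does not prescribe a data structure.

```python
from itertools import product

# Hypothetical enumeration of one TSCA's action set A_i: a signal phase
# combined with a green-time adjustment for the current green phase.
PHASES = [
    "EW_straight_right",   # east-west straight and right turn
    "NS_straight_right",   # south-north straight and right turn
    "EW_left",             # east-west left turn
    "NS_left",             # south-north left turn
]
GREEN_ADJUSTMENTS_S = [+1, +2, -1, -2, 0]   # seconds; 0 = green time unchanged

# A_i is the Cartesian product of phases and adjustments (4 x 5 = 20 actions).
ACTIONS = list(product(PHASES, GREEN_ADJUSTMENTS_S))
```

A joint action a = (a_1, ..., a_n) is then simply a tuple holding one such pair per TSCA, so the joint action space grows as 20^n.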
P is a state transition function mapping a present state and a joint signal timing action to a probability distribution over states.
R = {r_1, ..., r_n} is the set of reward functions of the TSCAs, where r_i maps state-action tuples to immediate scalar rewards: r_i(s, a) is the reward of TSCA i when the TSCAs take the joint action a in state s. r_i can be expressed as the ratio of the total number of vehicles passing through to the accumulated waiting time.
Suppose that the traffic flow is random and follows a Poisson distribution. The reward r_i is given by one expression when the green light of TSCA i is on in the south-north direction and by another when it is on in the east-west direction. In these expressions, a_i is the signal timing action of TSCA i, and t_w, t_n, t_e, and t_s denote the times at which the first vehicle approaching but not yet passing TSCA i from the west, north, east, and south directions, respectively, reaches the intersection, measured from the alternation of red and green. If t_x (x representing w, n, e, or s) exceeds the applicable bound, it is reset accordingly. λ_w, λ_n, λ_e, and λ_s denote the intensities of the Poisson flows of vehicles entering TSCA i from the west, north, east, and south, respectively; each intensity depends on the states and timing actions of the TSCAs adjacent to TSCA i and can be generalized from the average of statistical historical data. Finally, a positive constant in the expressions represents the degree of punishment.
Let π_i denote a probability distribution over the action set of TSCA i. With each state s there is an associated n-player stage game. Let v_i(s, π_1, ..., π_n) be TSCA i's total discounted reward in state s under the joint strategy (π_1, ..., π_n), where each TSCA j selects its actions with the probabilities given by π_j. For each given state s, TSCA i chooses a corresponding optimal strategy.
In an n-player nonzero-sum Markov game under mixed strategies, a Nash equilibrium point is a tuple of strategies (π_1*, ..., π_n*) such that, for every state s and every TSCA i, v_i(s, π_1*, ..., π_n*) ≥ v_i(s, π_1*, ..., π_{i-1}*, π_i, π_{i+1}*, ..., π_n*) for all π_i in Π_i, where Π_i is the space of probability distributions over TSCA i's actions.
In the nonzero-sum Markov game, a TSCA can gain more reward through cooperation than by acting independently. A Nash equilibrium point is reached when no TSCA can obtain a better policy by deviating unilaterally.
4. Multiagent Markov Game Reinforcement Learning for Traffic Self-Adaptive Control
4.1. Single-Agent Q-Learning
Q-learning, presented by Watkins, defines a learning method within a Markov decision process [21]. The basic idea of Q-learning is to define a function Q*(s, a) satisfying

Q*(s, a) = r(s, a) + γ Σ_{s'} p(s' | s, a) max_{a'} Q*(s', a'),

where γ is a discount factor used to discount future rewards and p(s' | s, a) is the probability of transiting to state s' after taking action a in state s. A solution satisfying this equation is guaranteed to yield an optimal policy: Q*(s, a) is the total discounted reward of taking action a in state s and then following the optimal policy thereafter.
If Q* is given, the optimal policy can be found by simply identifying the action that maximizes Q*(s, a) in state s. The problem is thus reduced to finding the function Q* instead of searching directly for the optimal policy.
Q-learning provides a simple updating procedure, in which the TSCA starts with arbitrary initial values Q_0(s, a) for all s and a and updates them as

Q_{t+1}(s, a) = (1 - α_t) Q_t(s, a) + α_t [r_t + γ max_{a'} Q_t(s', a')],

where α_t is the learning rate sequence.
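The update rule above can be sketched as a few lines of tabular code. This is the standard single-agent Q-learning step, not the multiagent variant of Section 4.2; the function name and the default α and γ values are assumptions for the example.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + gamma * max_a' Q(s',a')).
    Q is a dict keyed by (state, action); missing entries default to 0."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (r + gamma * best_next)
    return Q[(s, a)]
```

Repeating this step while visiting every state-action pair infinitely often, with a suitably decaying α, is exactly the condition under which Watkins and Dayan prove convergence to Q*.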
Watkins and Dayan proved that this update sequence converges to Q* under the assumption that all states and actions are visited infinitely often and the learning rate satisfies certain constraints [21].
Even a single-agent Q-learning approach that provably converges to the optimal joint policy requires each TSCA to keep a set of Q tables whose size is exponential in the number of agents. In addition to this dimensionality issue, the method requires each TSCA to observe the state of the whole system, which is infeasible in transportation networks. Single-agent Q-learning treats the other TSCAs as part of the environment and updates future rewards based merely on the TSCA's own maximum payoff, regardless of the other TSCAs' actions. In a multiagent system, we therefore adopt a multiagent Markov game reinforcement method in which each TSCA updates its Q-values according to the immediate reward obtained by interacting with other TSCAs and by observing the actions taken by the other TSCAs and their rewards.
4.2. Multiagent Markov Game Reinforcement Learning for Traffic Self-Adaptive Control
In this method, at each time t, TSCA i updates its own Q-values and learns about the Q-values of the adjacent TSCAs by observing its own reward, the actions taken by neighboring TSCAs, the neighbors' rewards, and the new state s'. In state s', TSCA i calculates a Nash equilibrium for the stage game. TSCA i's value function v_i(s') is then its payoff in state s' for the selected equilibrium, and TSCA i updates its Q-values according to

Q_i(s, a_1, ..., a_n) ← (1 - α_t) Q_i(s, a_1, ..., a_n) + α_t [r_i + γ v_i(s')].
Information about the other TSCAs' Q-values is not given, so TSCA i must learn it too. TSCA i updates its beliefs about TSCA j's Q function according to the same rule (12) applied to its own.
Note that the equilibrium used above is a joint mixed strategy at a Nash equilibrium point, and the neighbors' actions and rewards constitute TSCA i's observed information. To update its Q-values exactly, TSCA i would need a priori knowledge of the other TSCAs' strategy values; that is, we would have to solve the full system of equations (13), an n-th-order nonlinear problem with no practical closed-form solution. In addition, since the TSCA's observed information is imperfect, and to avoid the curse of dimensionality, we use probability statistics and a Bayesian method to estimate beliefs about the other TSCAs' mixed strategies. Under this coordination mechanism, the TSCAs can reach a unique equilibrium.
Suppose TSCA i conjectures that TSCA j takes action a_j with probability p(a_j). This probability can be calculated via the Boltzmann formula

p(a_j) = exp(Q_j(s, a_j) / T) / Σ_{a'} exp(Q_j(s, a') / T),

where T is a temperature parameter that reflects the degree of exploration and decreases with time.
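The Boltzmann formula can be sketched directly. The function name is an assumption; subtracting the maximum Q-value before exponentiating is a standard numerical-stability trick that leaves the probabilities unchanged.

```python
import math

def boltzmann_probs(q_values, temperature):
    """Probabilities that the modeled TSCA takes each action, via the
    Boltzmann (softmax) formula: p(a) proportional to exp(Q(s,a) / T)."""
    m = max(q_values)                     # shift for numerical stability
    weights = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(weights)
    return [w / total for w in weights]
```

A high temperature spreads probability mass almost uniformly (exploration), while a low temperature concentrates it on the highest-valued actions, matching the text's description of T decreasing with time.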
Each TSCA takes its own action in state s; TSCA i then observes the other TSCAs' actions and the new state s', after which it updates its belief about the other TSCAs' actions. According to the Bayes formula, the belief about TSCA j's action can be computed as

p(a_j | s') = p(s' | s, a_i, a_j) p(a_j) / p(s' | s, a_i),

where p(s' | s, a_i, a_j) is the probability of transiting to state s' after TSCA i and TSCA j take their joint actions in state s, and p(s' | s, a_i) is the probability of transiting to state s' after TSCA i takes its own action in state s. Both probabilities can be acquired from environment knowledge. p(a_j) is the prior probability that TSCA j takes action a_j, and the posterior p(a_j | s') replaces it as the new estimate.
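A sketch of this Bayes belief update follows. The transition probabilities are assumed to come from a caller-supplied function (the "environment knowledge" the text mentions); all names here are hypothetical, and the denominator is obtained by normalizing over candidate actions rather than being supplied separately.

```python
def bayes_belief_update(prior, trans_prob, s, s_next, a_i):
    """Posterior belief over a neighbor's actions after observing s -> s_next:
        p(a_j | s') proportional to p(s' | s, a_i, a_j) * p(a_j).
    `prior` maps each candidate action a_j to its prior probability;
    `trans_prob(s, a_i, a_j, s_next)` returns the transition probability."""
    posterior = {a_j: trans_prob(s, a_i, a_j, s_next) * p
                 for a_j, p in prior.items()}
    total = sum(posterior.values())
    if total == 0:                 # observation carries no information
        return dict(prior)
    return {a_j: v / total for a_j, v in posterior.items()}
```

The posterior returned here would then serve as the prior for the next observation, so beliefs sharpen as evidence accumulates.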
Furthermore, TSCA i's belief about the joint actions of several other TSCAs can be generalized from (15) in the same manner, yielding (16).
Based on the analysis above, the multiagent Markov game reinforcement algorithm is summarized as follows, taking TSCA i as an example.
Initialize.
(1) Let t = 0 and obtain the initial state s_0. For every state s and every joint action, initialize TSCA i's Q-values and its beliefs about the other TSCAs' Q-values.

Loop.
(2) Choose action a_i: TSCA i chooses its action according to the best-response mixed policy of the stage matrix game under the (believed) joint actions of the other TSCAs.
(3) Observe the reward r_i, the other TSCAs' actions and rewards, and the new state s'.
(4) Update the beliefs about the other TSCAs' actions according to the Bayes formula (15).
(5) Update the Q-values of TSCA i and of the modeled TSCAs, where the belief term is defined in (16).
(6) Let t = t + 1 and return to step (2).
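The loop above can be sketched for one TSCA under strong simplifying assumptions: two agents only, the stage-game equilibrium value approximated by a belief-weighted best response rather than an exact Nash solver, a frequency-count belief update standing in for the Bayes formula (15), and an `env` object standing in for the Paramics interaction. Every name here is hypothetical.

```python
from collections import defaultdict

def run_episode(env, actions_i, actions_j, steps, alpha=0.1, gamma=0.9):
    """Simplified two-agent sketch of steps (2)-(6) for TSCA i."""
    Q = defaultdict(float)                                      # Q_i[(s, a_i, a_j)]
    belief = {a_j: 1.0 / len(actions_j) for a_j in actions_j}   # (1) uniform init
    s = env.reset()
    for _ in range(steps):
        # (2) choose the action with the highest belief-weighted Q-value
        def value(a):
            return sum(belief[a_j] * Q[(s, a, a_j)] for a_j in actions_j)
        a_i = max(actions_i, key=value)
        # (3) observe reward, the neighbor's action, and the new state
        s_next, r_i, a_j = env.step(a_i)
        # (4) belief update (frequency count standing in for Bayes formula (15))
        belief[a_j] += 0.1
        total = sum(belief.values())
        belief = {a: p / total for a, p in belief.items()}
        # (5) Q update toward reward plus discounted equilibrium-value estimate
        v_next = max(
            sum(belief[b] * Q[(s_next, a, b)] for b in actions_j)
            for a in actions_i
        )
        Q[(s, a_i, a_j)] = (1 - alpha) * Q[(s, a_i, a_j)] + alpha * (r_i + gamma * v_next)
        s = s_next                                              # (6) advance time
    return Q
```

In the paper's full method the equilibrium value v_i(s') would come from solving the stage game at s', and the belief update would use the Bayes formula; both are replaced by cheaper stand-ins here to keep the sketch short.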
5. Convergence Proof
Assumption 1. For any multiagent stage game, its Nash equilibrium point has the following properties. (i) If the Nash equilibrium is not a global optimum, then a TSCA that plays its Nash equilibrium policy receives a higher payoff when the other TSCAs' policies deviate from the equilibrium. (ii) If the Nash equilibrium is a global optimum, then every TSCA's equilibrium payoff is at least its payoff under any other joint strategy.
Assumption 2. The learning rate sequence α_t satisfies the following: (i) 0 ≤ α_t < 1, Σ_t α_t = ∞, and Σ_t α_t² < ∞, where the latter two conditions hold uniformly and with probability 1; (ii) α_t(s, a) = 0 if (s, a) differs from the current state-action pair (s_t, a_t). Under assumption (i), given a finite MDP, the Q-learning algorithm proposed by [21] converges with probability 1 to the optimal Q function. The second item states that the agent updates only the Q-function element corresponding to the current state s_t and actions a_t.
Lemma 3 (see [22]). Under Assumption 2, the iterative process converges with probability 1, where the conditioning is on the history of states and policies up to time t.
Lemma 4 (see [23]). Assume that π* is the Nash equilibrium point for the stage game, with Nash equilibrium payoff v_i; then TSCA i's Nash equilibrium payoff under the joint actions is the expectation of its Q-values under π*.
Lemma 5 (see [23]). Given the operator defined through the mixed Nash equilibrium policy of the stage game, this operator is a contraction mapping.
Lemma 6 (see [22]). Under Assumption 2, if the iterative process holds and its error term converges to 0 with probability 1, then the iterative process converges to its fixed point with probability 1.
Given the above assumptions and lemmas, if the Nash equilibrium point of the stage game is obtained, then the proposed method converges to the game values of the equilibrium point with probability 1.
Theorem 7. Assume that the stage game sequence for each state satisfies the stated conditions; then the Q-value sequence converges with probability 1 to the value of the equilibrium point, where this value corresponds to the mixed Nash equilibrium policy of the stage game at each state.
Proof. By Lemma 4, the iterate can be written in terms of the equilibrium payoff. By Lemma 3, this process converges. Define the corresponding mapping; by Lemma 5, since the stage-game operator is a contraction mapping, the composed operator is also a contraction mapping, and the equilibrium value is the fixed point of this mapping. By Lemma 4 again, the error term converges to 0, so the iterative formula converges with probability 1. By Lemma 6, formula (20) converges to the equilibrium value.
6. Simulation
6.1. Analysis of the Method’s Convergence
We consider the traffic network shown in Figure 2 as the scenario for testing the approach described in Section 4. Paramics, a microscopic traffic simulator, is used to build the testbed network. The multiagent Markov game reinforcement learning algorithm is written in Matlab as a standalone application, and its interaction with the Paramics environment is implemented through Paramics' application programming interface (API) functions.
The intensity of the Poisson flow entering the traffic network follows a uniform distribution on the interval [5, 18]. The traffic network includes 5 intersections, each direction having four lanes. Each intersection has two phases. The length of the lanes is 60 m, and the average velocity of the traffic flow is 2.5 m/s. Since general safety constraints should be respected when designing the signal plan, the minimum and maximum green times for each phase are set to 20 s and 90 s, respectively. Each of TSCA 1 through TSCA 5 has its own subset of signal timing actions, and each lane is equally divided into segments of 20 m. For a fixed joint state and joint action, Figure 3 shows how the Q-values of TSCA 1, TSCA 2, and TSCA 5 vary with learning time.
As can be seen from Figure 3, the multiagent Markov game reinforcement learning approach presented in this paper is convergent; that is, it reaches a Nash equilibrium. Under typical conditions an urban traffic network has a relatively stable flow of vehicles, so the time needed to solve such a problem is acceptable.
In the traffic network shown in Figure 2, suppose that at some moment TSCA 5 is about to choose a red-light timing action in the north-south direction, with the local states of TSCAs 1, 2, 3, and 4 specified accordingly; the finally learned Q-values of TSCA 5 are then shown in Table 1.

6.2. Analysis of the Method’s Effectiveness
We use the average delay time per vehicle, defined in (1) of Section 2, as the performance index for each method.
6.2.1. In Comparison with LQF
Local traffic consists of vehicles that cross a single intersection and then exit the network, thereby interacting with just one learning agent. According to [24], when the saturation exceeds 0.90, the intersection's level of service becomes unbearable. In this paper, a highly saturated condition therefore refers to one in which the saturation greatly exceeds 0.9, so the road is congested and the service level is relatively poor. In this section, we compare the approach described in Section 4.2 with the LQF traffic signal scheduling algorithm proposed in [25] for an isolated intersection. The LQF algorithm addresses the signal control problem using concepts drawn from packet switching in computer networks: it uses a maximal-weight matching algorithm to minimize the queue sizes at each approach, yielding significantly lower average vehicle delay through the intersection. The primary limitation of LQF is that every agent considers only its own local traffic volume and thus controls its traffic signals in isolation; consequently, agents may select individual actions that are locally optimal but that together result in global inefficiencies. We therefore focus our experiments on comparisons between the proposed method and LQF under highly saturated conditions.
In particular, we again consider the scenario shown in Figure 2. In this traffic network all routes contain at least two intersections, and destinations are selected so as to eliminate local traffic. The scenario is challenging and realistic, as it requires the methods to cope with an abundance of nonlocal traffic. The experiment is designed to test the hypothesis that, under highly saturated conditions, coordinated learning is beneficial when the amount of local traffic is small. If this hypothesis is correct, coordinated learning with multiagent Markov game reinforcement learning should substantially outperform LQF when most vehicles pass through multiple intersections.
The results are averaged over 10 independent runs. Figure 4 shows the results for the scenario with nonuniform destinations and nonlocal traffic. As can be seen from the figure, multiagent Markov game reinforcement learning substantially outperforms the noncoordinated method. This result is not surprising, since the absence of uniform destinations and of local traffic creates a clear incentive for the TSCAs to learn to coordinate their actions. The approach allows the TSCAs to learn different state transition probabilities and value functions when the outbound lanes are congested. For example, the lane from intersection 1 to intersection 5 is likely to become saturated, as all traffic from the edge nodes connected to intersection 1 must travel through it. When such saturation occurs, it is important for the two TSCAs to coordinate, since allowing incoming traffic to cross intersection 1 is pointless unless intersection 5 allows that same traffic to cross in a "green wave." The cost of including such congestion information is a larger state space and potentially slower learning. The simulation results thus show that multiagent Markov game reinforcement learning is effective.
6.2.2. In Comparison with Fixed Timing Control Approach and Independent Reinforcement Learning Approach
In this section, we compare the approach described in Section 4.2 with fixed-timing control and independent reinforcement learning, again on the scenario shown in Figure 2. Vehicle arrival rates are specified separately for the north-south and east-west directions. The yellow time is 4 s, the TSCA schedules the traffic signal every two seconds, and the learning period is 5400 s. In independent reinforcement learning, each TSCA ignores the other TSCAs' actions and states; the reinforcement signals from the belief allocation modules are associated only with its own states and actions, and the goal is to maximize the local rewards. In fixed-timing control, the signal cycle length is 120 s, and the effective green time in the north-south and east-west directions is 56 s. Figure 5 shows the results.
The results show that under low traffic flow the differences among the three control methods are not particularly evident. As the traffic flow increases, the differences in performance become more and more apparent. Since the independent learning process does not take the other TSCAs' states and actions into consideration, it struggles to achieve a global optimum, which lowers system performance. When the vehicle arrival rate exceeds 1000 vehicles/h, independent learning leads to heavy congestion. In contrast, in our method each TSCA must consider the influence of the other TSCAs' states and actions, so the results have a degree of global optimality.
Next, we analyze why the multiagent Markov game reinforcement learning approach outperforms the other two approaches. Fixed-timing control cannot adapt to changes in the traffic environment. In the independent learning algorithm, each TSCA learns and decides at the local level (i.e., using its local state and local action) via (10). The multiagent Markov game reinforcement learning method biases action selection toward actions that are likely to yield good rewards, where the likelihood of good Q-values is evaluated using models of the other agents estimated by the learner from their past behavior. The efficiency of the approach is most pronounced under traffic fluctuations, which confirms its adaptability, as highly saturated conditions trigger the TSCAs to coordinate their actions.
7. Conclusion
Previous work on urban traffic control used multiagent reinforcement learning, but the TSCAs selected only locally optimal actions without coordinating their behavior and required perfectly observed information when interacting with the environment. In this paper, a multiagent Markov game reinforcement learning approach based on an n-player nonzero-sum Markov game is designed for optimizing urban traffic, built on the analysis of the TSCA's structure model and single-agent Q-learning. Theoretical analysis and experimental results show that the proposed method is convergent and effective. Multiagent Markov game reinforcement learning substantially outperforms noncoordinated methods such as LQF, fixed-timing control, and independent reinforcement learning, especially under highly saturated conditions when the amount of local traffic is small. This demonstrates that the method can provide the distributed control needed for scheduling multiple intersections.
In future work, it would be interesting to incorporate the effects of driver behavior and transit signal priority into our framework. Moreover, the basic five-intersection network considered here will be expanded to larger traffic networks with more extensive collaboration among TSCAs.
Acknowledgments
The work described in this paper was supported by the Natural Science Foundation of China (nos. 61263024 and 51268017). The authors would like to thank the members of the academic team guided by Professor Lunhui Xu for their advice.
References
 C. Diakaki, M. Papageorgiou, and K. Aboudolas, “A multivariable regulator approach to traffic-responsive network-wide signal control,” Control Engineering Practice, vol. 10, no. 2, pp. 183–195, 2002.
 P. G. Balaji, D. Srinivasan, and C.-K. Tham, “Coordination in distributed multiagent system using type-2 fuzzy decision systems,” in Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ '08), pp. 2291–2298, Piscataway, NJ, USA, June 2008.
 D. Srinivasan, M. C. Choy, and R. L. Cheu, “Neural networks for real-time traffic signal control,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 3, pp. 261–272, 2006.
 J. J. Sánchez, M. Galán, and E. Rubio, “Genetic algorithms and cellular automata: a new architecture for traffic light cycles optimization,” in Proceedings of the Congress on Evolutionary Computation (CEC '04), pp. 1668–1674, June 2004.
 R. Sutton and A. Barto, Reinforcement Learning: An Introduction, MIT Press, Boston, Mass, USA, 1998.
 M. Wiering, “Multi-agent reinforcement learning for traffic light control,” in Proceedings of the 17th International Conference on Machine Learning (ICML '00), pp. 1151–1158, 2000.
 B. Abdulhai, R. Pringle, and G. J. Karakoulas, “Reinforcement learning for true adaptive traffic signal control,” Journal of Transportation Engineering, vol. 129, no. 3, pp. 278–285, 2003.
 M. D. Pendrith, “Distributed reinforcement learning for a traffic engineering application,” in Proceedings of the 4th International Conference on Autonomous Agents, pp. 404–411, ACM Press, Rockville, Md, USA, June 2000.
 B. Bakker, M. Steingrover, R. Schouten, E. Nijhuis, and L. Kester, “Cooperative multiagent reinforcement learning of traffic lights,” in Proceedings of the Workshop on Cooperative Multi-Agent Learning, pp. 24–36, 2005.
 A. Salkham, R. Cunningham, A. Garg, and V. Cahill, “A collaborative reinforcement learning approach to urban traffic control optimization,” in Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT '08), pp. 560–566, Sydney, Australia, December 2008.
 I. Arel, C. Liu, T. Urbanik, and A. G. Kohls, “Reinforcement learning-based multiagent system for network traffic signal control,” IET Intelligent Transport Systems, vol. 4, no. 2, pp. 128–135, 2010.
 J. C. Medina and R. F. Benekohal, “Q-learning and approximate dynamic programming for traffic control: a case study for an oversaturated network,” in Proceedings of the Transportation Research Board Annual Meeting, 2012.
 M. Abdoos, N. Mozayani, and A. L. C. Bazzan, “Traffic light control in nonstationary environments based on multiagent Q-learning,” in Proceedings of the 14th IEEE International Intelligent Transportation Systems Conference (ITSC '11), pp. 1580–1585, Washington, DC, USA, October 2011.
 P. G. Balaji, X. German, and D. Srinivasan, “Urban traffic signal control using reinforcement learning agents,” IET Intelligent Transport Systems, vol. 4, no. 3, pp. 177–188, 2010.
 L. Kuyer, S. Whiteson, B. Bakker, and N. Vlassis, “Multiagent reinforcement learning for urban traffic control using coordination graphs,” in Proceedings of the 19th European Conference on Machine Learning, 2008.
 S. El-Tantawy and B. Abdulhai, “Multiagent reinforcement learning for integrated network of adaptive traffic signal controllers (MARLIN-ATSC),” in Proceedings of the 15th International IEEE Annual Conference on Intelligent Transportation Systems (ITSC '12), pp. 319–326, Anchorage, Alaska, USA, 2012.
 P. Koonce, “Traffic Signal Timing Manual,” Report FHWA-HOP-08-024, Federal Highway Administration, US Department of Transportation, 2008.
 M. L. Littman, “Markov games as a framework for multiagent reinforcement learning,” in Proceedings of the 11th International Conference on Machine Learning, pp. 157–163, Morgan Kaufmann, 1994.
 I. Alvarez, A. Poznyak, and A. Malo, “Urban traffic control problem: a game theory approach,” in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 2168–2172, Cancún, Mexico, December 2008.
 J. Hu and M. P. Wellman, “Nash Q-learning for general-sum stochastic games,” Journal of Machine Learning Research, vol. 4, pp. 1039–1069, 2003.
 C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, no. 3-4, pp. 279–292, 1992.
 C. Szepesvári and M. L. Littman, “A unified analysis of value-function-based reinforcement-learning algorithms,” Neural Computation, vol. 11, no. 8, pp. 2017–2060, 1999.
 J. Filar and K. Vrieze, Competitive Markov Decision Processes, Springer, New York, NY, USA, 1997.
 W. Wang, Urban Traffic Planning, Southeast University Press, Nanjing, China, 1999.
 R. Wunderlich, C. Liu, I. Elhanany, and T. Urbanik II, “A novel signal-scheduling algorithm with quality-of-service provisioning for an isolated intersection,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 3, pp. 536–547, 2008.
Copyright
Copyright © 2013 Lun-Hui Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.