Abstract

We propose a multiagent, large-scale, vehicle routing modeling framework for the simulation of transportation systems. The goal of this paper is twofold. Firstly, we investigate how individual and social knowledge interact and ultimately influence the effectiveness of the resulting traffic flow. Secondly, we evaluate how different discrete-event simulation designs (delays vs. queuing) affect conclusions within the model. We present a new agent-based model that combines the efficient discrete-event approach to modeling with intelligent drivers who are capable of learning about their environment in a long-term perspective from both individual experience and widely available social knowledge. The approach is illustrated as a practical application to modeling commuter behaviour in the city of Winnipeg, Manitoba, Canada. All simulations in the paper are fully reproducible, as they have been carried out by utilizing a set of open-source libraries and tools that we have developed for the Julia programming language and that are openly available on GitHub.

1. Introduction

Traffic flow and congestion models have been researched since the 1930s (e.g., by Greenshields [1]). Nowadays, they can be classified by the level of detail considered into four groups: (1) macroscopic, (2) microscopic, (3) submicroscopic, and (4) mesoscopic [2–5].

Macroscopic traffic models concentrate on the relationships among traffic flow attributes such as flow, density, or speed [2, 6]. In those models, individual vehicles are not modelled; instead, aggregated variables such as the average density or the average flow are analyzed [2]. The family of macroscopic models includes kinematic wave models [7] and the multidimensional fundamental diagram [8, 9]. With macroscopic algorithms, it is difficult to compare the results from the model with real-life data [10].

Microscopic traffic models simulate single vehicle-driver units, focusing on dynamic model variables representing microattributes such as the position or velocity of a single vehicle (e.g., the basic cellular automaton model of Nagel and Schreckenberg [11]). Perfect examples of the microapproach are stimulus-response models [12], where the driver reacts (accelerating or decelerating) to three main stimuli: her own velocity, the spacing, and the relative velocity with respect to the leader. The calibration and validation of microscopic models can be challenging [10].

Submicroscopic traffic models include more details compared to microscopic ones: not only is each vehicle modelled individually, but also functions inside the vehicle [3, 13], such as the driver's psychological reactions (e.g., response to traffic lights) or vehicle performance (e.g., acceleration or braking curves). The submicroscopic approach can be problematic when it comes to the model's effectiveness and the measurement and calibration of the thresholds (e.g., the acceleration threshold) [13].

The aggregation level of mesoscopic models is in between those of microscopic and macroscopic models [14]. The classical mesoscopic approach describes aggregated vehicle behaviour by a specific probability distribution function, while single-vehicle behaviour rules are defined individually [2], e.g., gas-kinetic models [15, 16]. Finally, hybrid mesoscopic models appeared most recently: they combine microscopic and macroscopic approaches by modeling the traffic at different aggregation levels simultaneously [2]. The hybrid approach applies the microscopic model to areas of specific interest (e.g., the city centre), resulting in more detailed outcomes, while simulating the surrounding network with a macroscopic model guarantees fast results [10].

Naturally, there are some limitations concerning traffic flow models. According to Daiheng [17], they can be classified into four categories: (1) lack of model consistency, (2) lack of model flexibility to include driver heterogeneity, (3) lack of model capability to foresee the near future, and (4) lack of model expandability beyond one-dimensional traffic. The first limitation describes the inconsistency between the model outcome and the observed traffic, which may arise, e.g., in macroscopic models not taking into consideration individual drivers' behaviour [10, 17]. The next limitation concerns driver heterogeneity, such as different decision factors or different decision rules. Then, there is the drivers' "look-ahead" capability, which affects the decision-making process concerning the near future. Finally, there are many successful one-dimensional traffic models, but there is still a gap left for an integrated traffic flow model incorporating several traffic dimensions at once, such as car following, lane changing, and gap acceptance. All these limitations, along with some improvements, are widely discussed in Daiheng [17]. Drivers' heterogeneity in terms of agents' knowledge is the limitation that can be addressed by the model introduced in this paper.

An important long-term determinant of the behaviour of drivers who are capable of planning their travels is what they learn from their previous experience and their beliefs about the traffic density. On the microlevel of traffic network modeling, the question is how the spread of information might improve the effectiveness of traffic flow by increasing its smoothness, by optimizing the car speed in a platoon, by giving the cars the opportunity to avoid traffic congestion [18, 19], or how to design and implement a vehicle-to-vehicle communication system for intelligent cars [20, 21] in order to optimize their behaviour in the traffic network. On the other hand, the problem of the macroscopic and long-term relations between knowledge and the structure of traffic flows might be crucial to better understand how individual decisions of drivers (or autonomous vehicles) contribute to the emergence of traffic congestion and how to optimize such systems. For example, by knowing how drivers react to changes in the traffic network and how fast they adapt to new conditions, better solutions for planning roadworks might be provided.

The subject of learning and adaptive behaviour itself is a well-known concept in the social sciences [22]. The idea of two modes of learning, individual and social, has been used to explain phenomena as different as pricing on markets with asymmetric information and uncertainty [23, 24], organizational learning and the trade-off between the exploration of new possibilities and the exploitation of old certainties [25], or, even more widely, the evolution of culture and the development of new inventions [26–28]. The problem of social learning is the most interesting part of those research studies. It appears to be more advantageous in comparison with individual learning, because it allows agents to avoid the costs of trial-and-error learning and also reduces the uncertainty of the explored problem, but it turns out that the outcome of social learning depends strictly on the learned subject. In cases when agents learn about objects that are independent and not varying in time, such as the quality of the good they are interested in buying in the Izquierdo and Izquierdo [24] model, social learning turns out to be extremely effective. Using it decreases uncertainty, and in the extreme case, it might reduce the problem to the case of a market with perfect information. However, when the environment is changing and nonuniform, relying on social knowledge is prone to error and may lead individuals to learn inappropriate or outdated information (Rogers [27]; [28]).

The aim of this paper is twofold. Firstly, we investigate how the individual and social knowledge of intelligent drivers interact and ultimately influence the effectiveness of the resulting traffic flow. Secondly, we evaluate how different discrete-event simulation designs (delays vs. queuing) affect conclusions within the model. We present a new agent-based model that combines the efficient discrete-event approach to modeling with intelligent drivers who are capable of learning about their environment in a long-term perspective from both individual experience and widely available social knowledge.

Traffic congestion has been an issue in many cities around the world; hence, traffic flow modeling and prediction remains an important scientific challenge. Moreover, infrastructure improvements tend to be very expensive; thus, it is crucial to evaluate their impact on the traffic flow beforehand. This paper introduces a computer simulation model, as it proves to be an exceptionally useful and low-cost method which enables in-depth traffic flow analysis. Our model is composed of intelligent agents reflecting the personalized behaviour of real drivers. The concept of "intelligent drivers" has already been investigated in several papers (e.g., Kesting et al. [29]; Camponogara and Kraus [30]; or Ehlert and Rothkrantz [31]), with the adaptive cruise control (ACC) model as the first driver assistance system having the potential to impact the real traffic flow environment [29] by automatically adapting car acceleration to different traffic conditions. Treiber et al. [32] proposed a simple microscopic ACC model of intelligent drivers that, despite its simplicity (the model uses only a few intuitive parameters), yields realistic collective traffic flow dynamics along with drivers' acceleration and deceleration behaviour. We propose a model composed of intelligent drivers, as it helps to capture real traffic flow characteristics by taking into consideration human reflexes and behaviour with such parameters as drivers' acceleration strategy, braking reactions, lane-changing decisions, or agents' heterogeneity represented by individual sets of parameters for each driver (Kesting, Treiber, and Helbing [33]; Kesting et al. [29]; Kesting, Treiber, and Helbing [34]).

In the real world, drivers' behavioural characteristics may fluctuate in time, as drivers are able to learn and adapt to a changing traffic environment due to the human ability to process and analyze the available information. Automatic learning techniques seem to be very promising in boosting traffic models' efficiency [30]. Indeed, a number of traffic flow models incorporating reinforcement learning have already been introduced, such as Camponogara and Kraus [30]; Wiering [35]; Balaji, German, and Srinivasan [36]; Ehlert and Rothkrantz [31]; or Logi and Ritchie [37]. The model proposed in this paper also uses reinforcement-learning algorithms, since the knowledge-based approach shows that personal and social knowledge highly influence the traffic flow environment, as agents make their traffic decisions based on the information they have. Adaptive and flexible intelligent agents can incorporate various types of personalized driving styles into a model, making the behaviour of the simulated vehicles realistic, which in turn makes it possible to investigate the interactions between drivers in the traffic flow ecosystem [31]. In this paper, we analyze how individual and social knowledge interact and influence the traffic flow model. Both types of information have already been researched, but in other contexts: e.g., Camponogara and Kraus [30] studied personal knowledge by developing a traffic network model as a distributed, stochastic game in which agents solve reinforcement-learning problems; each driver seeks a policy maximizing his reward. Then, Wiering [35] analyzed both personal and social knowledge by introducing "co-learning": in their model, there are two types of intelligent agents, vehicles and traffic lights, both using reinforcement learning in order to optimize their behaviour by minimizing the same value function. Finally, Balaji, German, and Srinivasan [36] proposed a traffic signal control model with reinforcement-learning agents capable of interacting with each other in order not only to reduce the overall travel time delay but also to increase the vehicles' mean speed. Their model proved that agents' adaptability and information exchange resulted in a higher drivers' ability to foresee as well as reduced congestion. In contrast, in terms of knowledge, the model introduced in this paper focuses strictly on the impact of various levels of agents' ability to learn, both personally and socially, on the overall model outcome.

We investigated two types of discrete-event simulation designs: a model with queuing and a model with delays. Each road segment can be described by two main parameters: its physical capacity and the flow rate [38]. As the capacity of each route segment is limited, it is crucial to decide what happens when agents are not able to enter a specific route segment because it has reached its maximum capacity level. In such situations, we considered two scenarios: in a queue-based approach, an agent must wait on its current edge until there is some space for its vehicle on the congested route segment; in a delay-based approach, it is always allowed to enter a congested road segment but with the minimum possible vehicle speed. The first scenario reflects an authentic traffic flow network, but it is computationally very expensive, while the second one is simplified and thus less realistic. This paper answers the question of whether it is possible to replace an accurate queuing model with a less complicated delay-based approach without losing the model's generality. Both approaches are discussed in more detail in Section 2. In order to compare those scenarios, we have implemented a computational framework for the simulation of real-world transportation systems. The model as well as the simulation framework have been released as Open Source on GitHub. Our implementation makes it possible to compare the discrete-event queuing mechanism with the simpler (and hence faster to run) delay-based version.

The remainder of this paper is organized as follows. In Section 2, a description of the model is provided. In particular, the network representation, drivers' characteristics, and the mechanisms of capturing the traffic dynamics in both perspectives are presented. Additionally, we discuss the software architecture used to build this model and compare it to other options. Section 3.1 describes the conditions of the application of OpenStreetMapXDES.jl to a realistic traffic network (the model is calibrated for Winnipeg, Canada, but it can be used for other cities). In Section 3.2, the results of the simulations are provided. Agents' behaviour is explained, and the two architectures described above are compared in terms of execution speed as well as the quality of the produced outcome. Finally, Section 4 concludes and presents directions for the future development of our model.

2. Traffic Modeling and Simulation on Networks

2.1. Simulation Environment and Behaviour of Agents

We consider a population of agents living in a city represented by a weighted, directed graph $G = (V, E)$. Each node $v \in V$ serves as a depiction of a single intersection in the city road network, and each directed edge $(i, j) \in E$ represents a road segment from intersection $i$ to intersection $j$. Edge $(i, j)$ is described by the following four parameters:

(1) $d_{i,j}$, the segment length expressed in meters;

(2) $v^{\max}_{i,j}$, the speed limit on this particular segment expressed in meters per second;

(3) $N^{\max}_{i,j}$, the segment maximum density, that is, the maximum number of cars capable of travelling through the specific segment at the same time, calculated as follows:

$$N^{\max}_{i,j} = \left\lfloor \frac{d_{i,j}\, k_{i,j}}{l_{\mathrm{car}}} \right\rfloor, \qquad (1)$$

where $k_{i,j}$ is the number of lanes available on this particular segment and $l_{\mathrm{car}}$ is a fixed parameter representing the average length of a car;

(4) $t^{\min}_{i,j}$, the driving time corresponding to the optimal situation when agents are able to travel with a velocity equal to the speed limit on this particular segment, that is, $t^{\min}_{i,j} = d_{i,j} / v^{\max}_{i,j}$.
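For illustration, the road-segment parameters above can be sketched as a small Julia record; this is a hypothetical, simplified structure (the field names and helper functions are ours, not the types used in OpenStreetMapXDES.jl):

# Hypothetical sketch of a road segment with the four parameters above.
struct RoadSegment
    from::Int          # intersection i
    to::Int            # intersection j
    length_m::Float64  # d_ij, segment length in meters
    v_max::Float64     # speed limit in meters per second
    lanes::Int         # k_ij, number of lanes
end

const CAR_LENGTH = 5.0  # l_car, assumed average car length in meters

# Maximum number of cars that fit on the segment, eq. (1).
max_density(s::RoadSegment) = floor(Int, s.length_m * s.lanes / CAR_LENGTH)

# Free-flow driving time t_min = d / v_max.
min_driving_time(s::RoadSegment) = s.length_m / s.v_max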

The main goal of this model is to study the repetitive, everyday behaviour of citizens of a large city who are commuting to and from work and its impact on traffic on the network of roads. The way the agents are defined emerges from this assumption. Three basic parameters are used to describe them: $h$ and $w$, which are the nodes in graph $G$ associated with the home and the workplace of the particular agent, respectively, and a route $R = (e_1, e_2, \ldots, e_n)$, which is a sequence of incident edges visited during the agent's trip between $h$ and $w$. (It is noted that $h$, $w$, and $R$ are specific for each agent, but in order to keep the notation simple we do not index them by agent.)

In order to simplify the model, we assume that agents travel only in one direction (from home to work) and that they do not drive through any additional points of interest associated with other daily activities, such as driving their children to school or going shopping. Hence, we are essentially modeling the morning traffic. However, an analogous approach can be used to model the afternoon traffic. If we assume that people work during fixed hours (e.g., 9 a.m. to 5 p.m.), the main difference between the morning and afternoon traffic is that in the morning many people try to arrive at work at the same time, while in the afternoon people depart at roughly the same time. Assuming homogeneous departure times and no side activities, the afternoon traffic would be symmetric to the morning traffic. However, we note that the framework described in this paper allows the discussed model to be easily extended with departure time heterogeneity and after-work activities. In this paper, we focus only on the morning traffic, which is more condensed.

As mentioned previously, we are interested in studying long-term traffic dynamics. Hence, agents must be able to change their behaviour during the simulation's span. In every iteration (representing one workday) $s$, they adjust their routes to find the most efficient ones. In the model, we assume that the agent is interested in covering the route from $h$ to $w$ as fast as possible. However, the times of driving through route segments are affected by the choices made by the other agents. The relationship between the number of cars on a given segment and the speed of a new car entering this particular part of the road is calculated by (a variant of) the Lighthill–Whitham–Richards equation (Lighthill and Whitham, 1955; [9]):

$$v_{i,j} = \max\left(v^{\min},\; v^{\max}_{i,j}\left(1 - \frac{N_{i,j}}{N^{\max}_{i,j}}\right)\right), \qquad (2)$$

where $v^{\min}$ is a fixed, lowest possible velocity equal to 1 m/s and $N_{i,j}$ is the traffic density (the current number of cars) on edge $(i,j)$; the corresponding driving time is

$$t_{i,j} = \frac{d_{i,j}}{v_{i,j}}. \qquad (3)$$

The relation between traffic congestion and the velocity calculation mechanism will be further described in the following section, which focuses on the implementations of discrete-event simulations in the OpenStreetMapXDES.jl framework. In particular, this approach will be compared against the queuing-based approach.
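A minimal Julia sketch of this speed-density relation, reusing the RoadSegment record from the earlier sketch (the linear form of eqs. (2) and (3) is assumed; the exact formula used in OpenStreetMapXDES.jl may differ):

const V_MIN = 1.0  # lowest possible velocity in m/s (heavy traffic jam)

# Speed of a car entering segment s when n cars are already on it, eq. (2).
function entry_speed(s::RoadSegment, n::Int)
    free = 1.0 - n / max_density(s)
    return max(V_MIN, s.v_max * free)
end

# Time needed to traverse the segment at that speed, eq. (3).
driving_time(s::RoadSegment, n::Int) = s.length_m / entry_speed(s, n)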

Agents internalize the differences between the expected driving time and the true current driving time on a particular road segment by using a simple temporal difference learning mechanism [39, 40]. Their beliefs about the expected driving times are based on previous experience and are represented for day $s$ by $\hat{t}^{\,s}_{i,j}$ for each edge $(i,j)$. After visiting a particular edge and observing the actual travelling time $t^{\,s}_{i,j}$, they update their expectations as follows:

$$\hat{t}^{\,s+1}_{i,j} = (1 - \lambda_I)\,\hat{t}^{\,s}_{i,j} + \lambda_I\, t^{\,s}_{i,j}, \qquad (4)$$

where $\lambda_I \in [0, 1]$ is a parameter describing the agent's learning rate on the individual level that controls how experience influences the agent's behaviour (it is noted that we consider only a single global $\lambda_I$; however, our approach could be further extended by considering heterogeneity of $\lambda_I$ across the population). In particular, if $\lambda_I = 0$, then agents do not learn at all, which means that they will expect to drive a given segment of road in a time that does not reflect their experience with the traffic. On the other hand, if $\lambda_I = 1$, then agents will be extremely myopic and consider only the most recent information about the driving time. In the model, the initial belief values are set to $\hat{t}^{\,0}_{i,j} = t^{\min}_{i,j}$; that is, travel times correspond to the situation where there is no congestion.
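The individual update of eq. (4) is a one-line exponential smoothing step; a sketch in Julia, with beliefs kept in a dictionary keyed by edge (names are illustrative):

# beliefs: expected driving time per edge (i, j); observed: time measured today.
function individual_update!(beliefs::Dict{Tuple{Int,Int},Float64},
                            edge::Tuple{Int,Int},
                            observed::Float64,
                            λ_I::Float64)
    beliefs[edge] = (1 - λ_I) * beliefs[edge] + λ_I * observed
    return beliefs[edge]
end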

In real-life situations, people make their decisions and assumptions about the surrounding world based not only on their own experience but also on external sources such as various media, the Internet, and communication with other people. Knowledge accumulated from all these sources will undoubtedly influence their decisions that, in turn, affect their commuting behaviour.

These mechanisms are implemented in the model in a rather simple but effective way. For each edge, information about the driving times is collected during the simulation span. At the end of each day, the average driving times are calculated for all edges in graph $G$. Then, all agents again adjust their expectations:

$$\hat{t}^{\,s+1}_{i,j} = (1 - \lambda_S)\,\hat{t}^{\,s}_{i,j} + \lambda_S\, \xi\, \bar{t}^{\,s}_{i,j}, \qquad (5)$$

where $\bar{t}^{\,s}_{i,j}$ is the average driving time on edge $(i,j)$ in day $s$, $\lambda_S$ is a social learning rate, and $\xi$ is a perturbation parameter, randomly selected with the expected value equal to 1. Hence, the expected driving time is a linear combination of what the agent observed and the population-wide (perturbed) value. For simplicity, in our model, we assume no perturbation, that is, $\xi = 1$.

The social learning rate $\lambda_S$ controls how much the information from the environment influences agents. If $\lambda_S = 0$, then agents do not use knowledge from outside sources at all. On the other hand, if $\lambda_S = 1$, then agents use only the most recent population-wide information when planning their trip and departure time for the next day. Finally, let us point out that people are usually not able to obtain perfect information about their surroundings, and almost always it is somehow disrupted; the random parameter $\xi$ is a way of implementing and controlling this behaviour.

The agent's belief update mechanisms presented in (4) and (5) can be combined into a single model that includes both the individual and the societal learning capabilities:

$$\hat{t}^{\,s+1}_{i,j} = (1 - \lambda_S)\left[(1 - \lambda_I)\,\hat{t}^{\,s}_{i,j} + \lambda_I\, t^{\,s}_{i,j}\right] + \lambda_S\, \xi\, \bar{t}^{\,s}_{i,j}. \qquad (6)$$
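A sketch of this end-of-day combined update (eq. (6)) in Julia, applying the individual step only to edges the agent actually drove and the social step to all edges for which a population-wide average is available (names are illustrative):

# beliefs: the agent's expected driving times per edge,
# observed: times the agent measured today, day_avg: population-wide averages.
function combined_update!(beliefs, observed, day_avg, λ_I, λ_S; ξ = 1.0)
    for edge in collect(keys(beliefs))
        t_hat = beliefs[edge]
        # individual step, eq. (4), only for edges actually driven today
        t_ind = haskey(observed, edge) ? (1 - λ_I) * t_hat + λ_I * observed[edge] : t_hat
        # social step, eq. (5): blend with the population-wide daily average
        avg = get(day_avg, edge, t_ind)  # fall back to the own estimate if no data
        beliefs[edge] = (1 - λ_S) * t_ind + λ_S * ξ * avg
    end
    return beliefs
end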

At the end of the day, when all agents have finished their trips and have updated their beliefs, they need to plan the next day. They possibly update their routes by choosing the fastest route (based on their current, adjusted expectations about driving times):

$$R^{\,s+1} = \arg\min_{R = (e_1, \ldots, e_n)} \sum_{k=1}^{n} \hat{t}^{\,s+1}_{e_k}$$

($n$ is the number of edges in the route). Then, they also need to choose a proper departure time. For a given agent, her departure time in iteration $s+1$ is equal to

$$-\sum_{k=1}^{n} \hat{t}^{\,s+1}_{e_k}. \qquad (7)$$

That is, we assume that all agents start their workday at the same hour, e.g., 9 a.m., which is represented in the model as "time zero." Again, this assumption can be easily lifted within the analyzed framework by assigning heterogeneous work start times to the agents. However, in order to focus on the information flow within the travelling agent population, in this paper we only consider a single cohort of agents.

Once all parameters are updated, a single iteration ends and the model moves to the next day. In the next section, we provide more details about the routing algorithm and the discrete-event simulation mechanism that controls the flow of agents during a single day.

We note that (7) combined with (6) means that at the end of each day, agents decide on their departure time plans on the basis of their own experience and the available social knowledge (for example, from an online maps routing application). This scenario happens in everyday life: people need to decide in the evening at what time to get up in the morning to arrive at work on time. Our simulation model aims to answer the question of how individual and societal knowledge affect the optimality of agents' decisions.
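A sketch of this planning step in Julia: given updated beliefs, the agent recomputes the expected travel time along its route and departs that far before time zero (the common work start); function names are illustrative:

# Expected travel time of a route (a vector of edges) under current beliefs.
expected_route_time(route, beliefs) = sum(beliefs[e] for e in route)

# Departure time relative to time zero, eq. (7): the agent leaves early enough
# to arrive, by its own estimate, exactly on time.
plan_departure(route, beliefs) = -expected_route_time(route, beliefs)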

2.2. Vehicle Routing Mechanisms

Once the agents have formed their beliefs about the travel times, the actual vehicle movement is simulated. Following the solutions presented in Thulasidasan and Eidenbenz [38], we base our routing mechanism on an implementation of the A* search algorithm [41]. We consider two scenarios:

(i) a discrete-event simulator with vehicle queues at the graph edges;

(ii) a discrete-event simulator with variable time delays at the graph edges.

Both scenarios are discussed below.
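For reference, a compact, generic A* search over the road graph can be written in a few lines of Julia, with the agent's believed edge times as costs and any admissible heuristic (such as straight-line distance divided by the global maximum speed); this is a textbook sketch, not the routing code shipped with OpenStreetMapX.jl:

using DataStructures  # provides PriorityQueue

function astar(neighbors::Dict{Int,Vector{Int}},
               cost::Dict{Tuple{Int,Int},Float64},
               heuristic::Function,
               start::Int, goal::Int)
    frontier = PriorityQueue{Int,Float64}()
    frontier[start] = heuristic(start)
    came_from = Dict{Int,Int}()
    g = Dict(start => 0.0)          # best known cost from start to each node
    while !isempty(frontier)
        current = dequeue!(frontier)
        if current == goal          # reconstruct the node path from start to goal
            path = [current]
            while haskey(came_from, current)
                current = came_from[current]
                pushfirst!(path, current)
            end
            return path
        end
        for nxt in get(neighbors, current, Int[])
            tentative = g[current] + cost[(current, nxt)]
            if tentative < get(g, nxt, Inf)
                g[nxt] = tentative
                came_from[nxt] = current
                frontier[nxt] = tentative + heuristic(nxt)
            end
        end
    end
    return nothing                  # goal unreachable
end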

2.2.1. Discrete-Event Model with Queuing

An essential part of building a discrete-event simulation model is defining proper events; if they are too specific, the system updates too often and the model starts to resemble a continuous-time simulation, which gains accuracy at the expense of efficiency. A single event at time $t$ represents the moment of transition between two edges, that is, segments of the route between two intersections. The agent cannot change the direction of the trip while it is driving on such a part of the road; it can only drive forward. This means that all important decisions regarding the agent's trip must be made when it approaches an intersection.

Another important question regarding such a traffic model is how to implement the mechanism of creation of traffic congestion. The capacity of the route segments is finite, and when it reaches its maximum level, new agents cannot enter such an edge. In this scenario, traffic flow is disturbed and congestion propagates onto the preceding edges. In this section, we describe the basic form of traffic congestion diffusion in the queuing version of the model. Later, in Section 2.2.2, we show a simplified version of this mechanism where queuing is replaced with reducing the speed on the congested segment of the road.

We take a standard approach in discrete-event simulation models where the control flow is based on the simulation clock, which stores the time of the next event (here, approaching the next intersection) for all agents in the model [42]. When the event at time $t$ is triggered and the proper agent is brought forth, it tries to enter the next segment $(i,j)$ of its predefined route. When the current density on this edge is smaller than its maximal possible density, that is, $N_{i,j} < N^{\max}_{i,j}$, the agent is able to enter this segment. Otherwise, the agent must wait on its current edge until the traffic on $(i,j)$ declines, or the agent changes its plans and travels by another edge reachable from the intersection at which it currently waits.

An agent makes this choice by randomly selecting between $(i,j)$ and all the edges available from the particular intersection with densities smaller than their corresponding maximums; the decision is changed by an agent only when a new route can be entered immediately. When an agent needs to wait, it will do so on the segment that it has previously selected as part of the fastest route, and it will be added to the waiting list of edge $(i,j)$ with a priority corresponding to the current event time $t$.

The process of crossing an intersection by an agent triggers the following chain of events. Firstly, the agent leaves its current edge, so if there is another driver on that edge's waiting list, it is permitted to enter it. Then, if the density on this edge still allows another driver waiting in line to enter, it will also do so. Otherwise, the waiting agent will randomly select the next edge in the same manner as described above. The procedure continues until the first of the waiting agents decides to stay in the queue or all of them have been removed from the waiting list of this edge.

During the update of the queue associated with a given edge, the same procedure is employed on the edges preceding it, then on their predecessors, and so on. In a single event, all segments in the traffic network might be recursively updated, either propagating or reducing the traffic congestion in the model, depending on the decisions of agents in the previous links of the chain.

In a case when an agent is able to travel by some segment, its driving time is calculated according to (3). Then, the agent updates its beliefs according to (4), adjusting the observed time by the time spent in the queue, $t^{\,s}_{i,j} = t^{\mathrm{drive}}_{i,j} + t^{\mathrm{wait}}_{i,j}$. Finally, the internal simulation clock is updated to the next event, indicating the moment when the agent finishes driving the newly entered edge. Algorithm 1 describes the behaviour of the agent.

while event_schedule ≠ ∅ do
  t ← min(event_schedule)
  randomly select an agent assigned to an event at time t; let (i, j) be the next edge on its route
  if N_{i,j} < N^{max}_{i,j} then
    remove the agent from its previous edge car count: N_{prev} ← N_{prev} − 1
    add the agent to the (i, j) car count: N_{i,j} ← N_{i,j} + 1
    calculate the agent's driving time t_{i,j} from eq. (3)
    update the agent's beliefs from eq. (6)
    update the waiting list of the previous edge
    if (i, j) is the agent's final edge then
      delete event_schedule[agent]
    else
      event_schedule[agent] ← t + t_{i,j}
    end if
  else
    available_edges ← [ ]
    insert (i, j) into available_edges
    for (i, k) in edges reachable from intersection i do
      if N_{i,k} < N^{max}_{i,k} then
        insert (i, k) into available_edges
      end if
    end for
    randomly select edge e from available_edges
    if e = (i, j) then
      add the agent to the waiting list of (i, j)
    else
      start driving by edge e
    end if
  end if
end while
2.2.2. Discrete-Event Model with Delays

The mechanism presented in the previous section was designed to resemble the way traffic congestion behaves in real-life situations. However, capturing the full queuing mechanism is computationally very expensive, especially when one wants to perform population modeling at a 1:1 scale. One of the questions stated in this paper is whether it is possible to replace the discussed queuing model with a simplified approach without losing the model's generality. In this section, we describe such an alternative way of modeling traffic on a large scale.

The basic behaviour of this routing algorithm is similar to the previous one. The whole simulation is controlled by the clock storing the times of the next events for all agents in the population. However, this time, when an agent is called and tries to enter a new edge $(i,j)$, there are no additional conditions for entering the edge; that is, it can always do so. Subsequently, the agent's driving time is calculated according to (3), and when the density $N_{i,j}$ exceeds $N^{\max}_{i,j}$, the speed of the agent on this segment of the route is reduced to $v^{\min}$. Algorithm 2 explains the way the model works. Finally, let us note that $v^{\min}$ is the smallest possible speed in the network and it is designed to correspond to the expected velocity in a heavy traffic jam.

while event_schedule ≠ ∅ do
  t ← min(event_schedule)
  select the agent assigned to the event at time t; let (i, j) be the next edge on its route
  remove the agent from its previous edge car count: N_{prev} ← N_{prev} − 1
  add the agent to the (i, j) car count: N_{i,j} ← N_{i,j} + 1
  calculate the agent's driving time t_{i,j} from eq. (3)
  update the agent's beliefs from eq. (6)
  if (i, j) is the agent's final edge then
    delete event_schedule[agent]
  else
    event_schedule[agent] ← t + t_{i,j}
  end if
end while
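To make the control flow concrete, below is a minimal, self-contained Julia sketch of the delay-based event loop of Algorithm 2, reusing the hypothetical RoadSegment and driving_time helpers from the earlier sketches; the actual OpenStreetMapXDES.jl implementation uses different data structures and also performs the belief updates:

using DataStructures  # PriorityQueue used as the event schedule

# A hypothetical, simplified agent: its route is a vector of (i, j) edges
# and pos points at the edge it is about to enter.
mutable struct Agent
    route::Vector{Tuple{Int,Int}}
    pos::Int
end

# One simulated day of the delay-based model (Algorithm 2).
function run_day!(agents::Vector{Agent}, segments::Dict{Tuple{Int,Int},RoadSegment})
    counts = Dict(e => 0 for e in keys(segments))   # current number of cars per edge
    events = PriorityQueue{Int,Float64}()           # agent index => time of the next event
    for (a, agent) in enumerate(agents)
        agent.pos = 1
        events[a] = 0.0                             # simplification: everyone departs at time zero
    end
    while !isempty(events)
        a, t = peek(events)
        dequeue!(events)
        agent = agents[a]
        edge = agent.route[agent.pos]
        agent.pos > 1 && (counts[agent.route[agent.pos - 1]] -= 1)  # leave the previous edge
        counts[edge] += 1                                           # enter the new edge
        dt = driving_time(segments[edge], counts[edge])             # eq. (3), speed floored at V_MIN
        # (the belief update of eq. (6) would be applied here)
        if agent.pos < length(agent.route)
            agent.pos += 1
            events[a] = t + dt          # schedule arrival at the next intersection
        end                             # agents on their final edge are simply done for the day
    end
end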
2.3. Implementation Notes

The code presented in this paper is based on the Julia programming language [43]. The discrete-event simulation engine is available in the OpenStreetMapXDES.jl library. We have implemented the routing mechanism in the OpenStreetMapX.jl library. For ad hoc data visualization, a compatible Julia library, OpenStreetMapXPlot.jl, has been developed. All the software is Open Source and freely available on GitHub.
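For orientation, a small usage sketch is given below; the function names follow the OpenStreetMapX.jl README at the time of writing (get_map_data, generate_point_in_bounds, point_to_nodes, shortest_route) and should be checked against the current package documentation, and the map file path is a placeholder:

using OpenStreetMapX

map_data = get_map_data("winnipeg.osm")   # parse an .osm extract into a routable graph

# Pick two random locations inside the map bounds and snap them to graph nodes.
origin      = point_to_nodes(generate_point_in_bounds(map_data), map_data)
destination = point_to_nodes(generate_point_in_bounds(map_data), map_data)

# Route between the two nodes (returns the node sequence plus route statistics).
result = shortest_route(map_data, origin, destination)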

There are other vehicle simulation frameworks that support different traffic simulation models, with the main simulators including MATSim, FastTrans [38], SUMO [44], and TRANSIMS [45]. MATSim (Multi-Agent Transport Simulation) is a framework for implementing large-scale transport simulations, considering different modules such as demand, supply, or control of traffic systems, which can all be combined or used standalone (Allan and Farid [46]). In SUMO (Simulation of Urban Mobility), vehicles can move freely (vehicle behaviour such as lane changes is taken into consideration), and collisions between them along with traffic accidents are simulated (Saidallah, El Fergougui, and Elbelrhiti [47]). Finally, TRANSIMS (Transportation Analysis and Simulation System) is an integrated tool based on a cellular automaton concept that allows one to conduct transportation analysis, simulation, and dynamic traffic assignment within an integrated development environment (Saidallah, El Fergougui, and Elbelrhiti [47]).

However, the existing solutions have a few drawbacks from our point of view. Firstly, they are written in verbose programming languages with a steep learning curve: Java (MATSim) or C++ (SUMO and TRANSIMS). In contrast, Julia allows the code to have around 4 times fewer lines of code (compared to C++ or Java) while maintaining similar execution speed, which makes it especially useful for numerical computing. Secondly, our Julia-based framework takes a more loosely coupled, generic approach (rather than relying on routing batch files as the existing frameworks do) and makes it possible to fully control and program the behaviour of each individual car in the model. This makes it possible to simulate in real time the adaptation of agents to the changing environment. Thirdly, the Julia language has built-in support for distributed computing, and hence a Julia simulation can be easily run on a large cluster or supercomputer without using external tools and libraries (such as the Spark framework for Java or MPI for C++). Last but not least, it should be noted that the numerical performance of the discussed solution is very high: finding a customized route for a single agent can take as little as around 200 ns on a single CPU core on a modern machine.

3. Experiments and Results

3.1. Experiment Design

As a sample data set for our use cases, we have selected the Winnipeg Metropolitan Area (WMA) in Canada, which has a total population of around 840,000 people. This region is isolated from other large cities in Canada, with Regina (SK), over 500 km away, as its closest neighbour (considering cities with a population of at least 200,000). There are no freeways within Winnipeg, but there is a 90 km beltway called the Perimeter Highway around the city, which reduces traffic volumes within the city by offering an alternate route for those who do not need to stop in the centre. These features classify Winnipeg as a city dominated by inner-city traffic (residents, commuters) and almost no transit traffic. The lack of freeways within the centre of Winnipeg also makes it more difficult for commuting drivers in the city centre to escape from traffic onto a beltway. Since the only practical way to commute around the Winnipeg area is by car and there is no transit traffic, a very significant portion of traffic in Winnipeg on workdays is the home-to-work-to-home daily commute. This makes it possible to use census data along with business locations to estimate commutes for a synthetic population of commuters. However, a similar approach could be used for other cities.

The WMA census data are available for 1,229 dissemination areas (DAs, presented in Figure 1), small geographic regions in Canada, each comprising several hundred citizens.

Three datasets were used in the simulation experiment: (1) demographic data (Canadian statistical office): socioeconomic and demographic data of commuters aggregated to the dissemination area level (data for each DA); (2) a home-work flow matrix that represents the estimated number of people living in a given Winnipeg DA who are employed at a location outside their DA; and (3) DA centroids: the geographic coordinates of the centroid of each DA. The data sources include the Winnipeg Open Data Portal (https://data.winnipeg.ca/) and data provided thanks to the courtesy of Environics Analytics, Canada.

The starting location for an agent is selected at random with probability weighted by the size of the working population travelling to work by car as a driver in a given dissemination area. The destination location is chosen using probabilities calculated from the home-work flow matrix. In both cases, the selected point is the centroid of the specific DA or the closest point on the boundary of the study area (see Figure 2).
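This weighted sampling step can be sketched in Julia with StatsBase (variable names are illustrative, and the alignment between DA identifiers and matrix rows is assumed):

using StatsBase  # provides sample() with Weights

# da_ids: identifiers of dissemination areas;
# drivers: number of residents commuting to work by car as a driver, per DA.
sample_home(da_ids::Vector{Int}, drivers::Vector{Float64}) =
    sample(da_ids, Weights(drivers))            # probability proportional to the driver count

# flow[h, :] is the row of the home-work flow matrix for home DA h:
# estimated numbers of its residents working in each destination DA.
sample_destination(home::Int, da_ids::Vector{Int}, flow::Matrix{Float64}) =
    sample(da_ids, Weights(flow[home, :]))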

Table 1 presents the values of the fixed parameters in the experiment. In every simulation run, both $\lambda_I$ and $\lambda_S$ are selected by grid search from the interval $[0, 1]$ with step 0.05. Then, the simulation is run twice, independently for both implemented congestion mechanisms (queuing and delays). Each simulation run finishes by aggregating the statistics on the population level. The procedure is repeated 441 times, once for each pair of $(\lambda_I, \lambda_S)$ (21 × 21 combinations).
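The parameter sweep parallelizes naturally; a sketch using Julia's built-in Distributed standard library is shown below, where run_simulation is a hypothetical stand-in for one full simulation run:

using Distributed
addprocs(4)  # add local worker processes; adjust to the available cores

@everywhere begin
    # Hypothetical stand-in for a single simulation run returning
    # population-level statistics for one (λ_I, λ_S) pair and one mechanism.
    function run_simulation(λ_I, λ_S; mechanism = :delays)
        # ... build the model, iterate over the days, aggregate statistics ...
        return (λ_I = λ_I, λ_S = λ_S, mechanism = mechanism)
    end
end

grid = [(λ_I, λ_S) for λ_I in 0.0:0.05:1.0, λ_S in 0.0:0.05:1.0]  # 21 × 21 = 441 pairs
results = pmap(p -> run_simulation(p[1], p[2]; mechanism = :queuing), vec(grid))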

3.2. Results

In this section, we focus on describing the output of both types of simulations presented. First, a comparison between the model with queuing and its counterpart simplified by introducing driving-time delays on clogged edges is provided. Then, we move on to describing the obtained results based on the provided statistics in order to better understand the implications of the implemented learning for the traffic system in a longer perspective.

The analysis of the expected delays brings another important observation about the agents' behaviour. We can see from Figure 3 that agents are able to internalize the experience and knowledge about the traffic and adjust their departure times accordingly to increase the chance of arriving on time. On average, agents arrive at the workplace significantly earlier than required, which shows that they not only start to travel at the proper time but also learn to keep a safety margin in case they end up in a traffic jam. This result is in accordance with the results of Cao et al. [48].

The most interesting findings come from the analysis of the number of changed routes and the expected driving times for different combinations of learning rates. At first glance, those results might seem counterintuitive: the myopic behaviour of agents turns out to be a better solution in terms of the overall well-being of the population (in the long term) than the case when agents are able to use the signals from the environment in their decision processes. This is especially surprising when $\lambda_S$ is equal to 1. In such a scenario, an agent basically relies on a navigation application (e.g., a mobile phone app) to plan the trip and departure time for the next day, and one could expect this to be the most efficient solution, outperforming the naive and greedy route optimization attempts in which agents choose the best routes based on their own past experience. However, after a more in-depth examination, those results turn out to have a logical explanation, and their implications might be crucial in many different applications.

In order to better understand the situation, let us start by looking at the special structure of the traffic network presented in Figure 2. It is noted that primary roads are surrounded by smaller ones, which serve as connections of minor streets to more major roads. Obviously, those types of roads differ in their parameters; usually, primary roads have more lanes, higher speed limits, interchange road junctions instead of traditional intersections, and so on. Those differences imply one important fact for the drivers' decision-making process: the set of attractive fastest routes is usually smaller than we might expect it to be, since drivers tend to travel by primary roads instead of the less important ones. Even when the traffic density on such a main road is huge, it is usually still significantly faster to travel by it than by the surrounding residential streets.

How does this influence agents' behaviour? In Figures 4 and 5, we can observe that for large values of the individual learning rate $\lambda_I$, the number of drivers changing their routes in the first few iterations of the simulation is enormously high, significantly higher than for any other combination of learning rates. At this stage of the simulation, agents are trying different solutions from their set of best routes, greedily choosing the best ones based on their beliefs. After a while, they are able to finish the exploration and start to exploit the solution which is optimal according to their beliefs. Then, the number of changed routes plunges significantly and stays at a considerably low level for the rest of the simulation's run, resulting in a stage of quasi-equilibrium, where the majority of drivers use the same route for the rest of the run and only a small number change their routes from iteration to iteration.

The obtained stability has a beneficial impact on the average driving times; when the traffic pattern is stable and predictable in the long term, it benefits the traffic flows by making them smoother and more fluent. As a result, the expected driving time is only slightly higher than in the best-case scenario (driving at the speed limit).

When an agent starts to exploit its chosen route, it loses track of the other possibilities; its knowledge about the traffic on them is based on the situation the last time each route was visited, somewhere at the beginning of the simulation. Obviously, when the current traffic on those routes is lower than during the last visit, the agent loses a great opportunity by sticking to its choice. The usage of outside knowledge solves this problem: the agent tracks the traffic density on all routes and might choose the best solution in every iteration without the need to rely on its own experience.

But in the long run, this results in a traffic-network variation of the well-known tragedy of the commons [49]. Drivers who independently select optimal routes based on their self-interest as a result behave contrary to the common good of all users, which might be observed in Figure 6, where we can see that relying on the common knowledge of previous traffic increases the expected driving time by 5 percentage points in comparison with acting only in accordance with one's own past experience.

But how do we explain it? As stated before, social knowledge is an ex ante estimation of the traffic flows based on past experience. An agent still has no knowledge about the current traffic on a particular segment of the route at the moment it starts to drive on it. It chooses to do so believing that the situation on this segment will be exactly the same as in the past. However, other agents share this belief and might also select this segment in order to improve their driving time, thus resulting in a significant increase of the traffic density on that segment and an increase of the driving time for all of them.

This is visible in Figures 7, 8, and 5, where the traffic oscillations from iteration to iteration are clearly noticeable. For short periods of time, the traffic stabilizes (the number of changed routes reaches its bottom in a cycle) and the allocation of the drivers is relatively effective. But then, the information about better alternatives spreads around the population; thus, many agents decide to change their routes, the system is again unbalanced, and as a result even more agents change their routes. The perturbations last until the system again reaches its lowest point, and the cycle repeats over and over again.

This situation resembles the classic game-theory problem of the El Farol Bar [50, 51]. In its classic formulation, an agent's payoff depends on the behaviour of the other players: if too many of them decide to go to the bar at the same moment, their utility will be lower than if they had stayed at home. Moreover, all of them are obliged to decide at the same time whether they will go to the bar or not. Similarly, in the model described in this paper, a driver's utility (measured as the speed of travelling along their route) depends on the decisions of the other users of the network: when a segment is clogged, the velocity of all drivers on this part of the route is reduced; thus, their utility is lower than it might be. They also plan their routes a day before departure. Arthur [50] has shown that in such a problem, no forecasting model can be employed by all individuals and be accurate at the same time. It basically means that if all drivers use the same strategy, in this case the knowledge from the same source, it will not be effective, guaranteeing that the density on the routes will be significantly higher than it could be.

Moreover, this kind of global knowledge is averaged over all agents in the simulation, despite the fact that their departure times vary. It means that those drivers who depart earlier, when the traffic density is lower, inherit the knowledge from those who start their trips later, when the density might be significantly higher. In such a case, their decisions will be suboptimal, because their expectations about the densities on the edges will be significantly overestimated. Obviously, this mechanism also works in a very similar manner for the agents who start their trips later, but this time they will underestimate the driving times.

As a result, when the knowledge is distributed in the manner described in this paper, it is not an effective method of selecting routes, especially in comparison with the agents' own experiences, which are adjusted to their usual departure times and are the proper way of estimating the driving time on the segments of the agents' routes. This follows the intuitions of Rogers [27]: learning based mostly on social knowledge is more prone to exploiting past experiences that are inconsistent with the actual state of the system. One of the possible ways to avoid these errors is to use selective social learning, as proposed by Enquist et al. [26].

In comparison with the previous literature on the subject of autonomous vehicles [18–21], this paper gives insights on the more general level of designing a traffic system based on such vehicles. It turns out that the most effective approach to building an autonomous-car transportation network is to schedule all vehicles with fixed routes in a train-like manner. In such a scenario, the system will be perfectly stable and, as a result, most efficient.

4. Concluding Remarks

In this paper, we compare two scenarios for discrete-event simulation modeling of a transportation network: delay-based and queuing-based. The results show that both the individual experience of the agents and the exchange of information are important for the efficiency of a city transportation system.

We show that the usage of common knowledge can lead to a decrease of the overall quality of the system by increasing variability and reducing the effectiveness of utilization of the traffic network. The best scenarios are those in which drivers select routes based on their previous experience and keep using them for the rest of the considered period of time. This results in situations where traffic flows are the most stable and efficient in terms of expected driving times, which is an important finding for future research in the traffic planning and management field.

Our model shows that microsimulations (and in particular agent-based simulation combined with discrete-event simulation) can bring a new level of detail to the analysis of real-world transportation systems. The developed simulation model and framework can be used to support policy-making decisions in many ways. Firstly, they make it possible to analyze the outcomes of changes in the transportation system (such as a road closure), changes to transportation policy (that affect the number of cars), and the external effects of ongoing changes of attitude towards home working or car pooling. Since the model includes the experience of agents, it is also possible to analyze the transition period to a new steady state after a change in the transportation system (not only the long-term steady state). Finally, it is possible to use the proposed approach to optimize the global communication directed to society, if it can be assumed that the policy maker can influence the social learning component that we use in our model (e.g., by broadcasting routing recommendations via the Internet).

We have created an Open Source transportation simulation framework in the Julia language consisting of three modules: OpenStreetMapX.jl for processing of spatial data and efficient vehicle routing, OpenStreetMapXPlot.jl for transportation system data visualization, and finally, OpenStreetMapXDES.jl for discrete-event simulation of transportation systems. The created frameworks offer great flexibility of model structure; it is possible to inject any type of behavior mechanism into the agents. This allows easy and convenient further extension of the presented research. All developed simulation code is available in the GitHub repository.

The constructed vehicle routing simulation framework, applied to real data for the city of Winnipeg in Canada, proved to be useful in analyzing and predicting traffic volumes and the behaviour of commuters moving around the city. Model validation confirmed a good match between the artificial traffic returned by the simulation and the actual weekday traffic data for Winnipeg. Simulation results also proved to be very effective in predicting the demographic profiles of commuters across the city, which can be used in a number of practical applications. Simulation results allow one to visualize and investigate city traffic volumes, perform spatial analysis of agents' demographic profiles, and conduct single-node investigations, which include the analysis of (1) the network routes taken by all the agents passing by a node as well as (2) the distribution of agents' demographic profile attributes.

Indeed, the simulation framework can be applied to a wide range of real-life problems based on spatial data, e.g., finding an optimal location for a school, restaurant, store, or service; crowd control; fleet management; or out-of-home marketing. The study could be extended by introducing different framework modules (e.g., a new approach to the destination location selection or an expanded list of demographic profile attributes). The validation outcome proves the overall potential of the presented framework.

The main limitation of this research is the set of simplifying assumptions that have been made: (1) the model uses discrete rather than continuous time, (2) the model does not consider how intersections and streets contribute to congestion (e.g., turning left vs. turning right), (3) the impact of pedestrian traffic, bikes, and public transportation is not included, and (4) actual vehicle acceleration is not modelled. In future research, we plan to address some of those limitations by including a submicroscopic traffic modeling approach, which would introduce the driver's psychological reactions such as the response time to a traffic signal or to the brake lights of the preceding vehicle. The submicroscopic approach could also incorporate vehicle performance parameters: not only the driver's reactions would be modelled but also car acceleration and braking curves. In the next step, the third limitation could also be addressed by introducing pedestrian crossing traffic (e.g., pedestrians appearing at crossings without traffic lights with some randomly distributed probabilities). Another possible extension of the model is environmental pollution with regard to the traffic. Hence, a possible mechanism design question is how the market regulator should influence drivers' decisions in order to minimize the total congestion level. Since we model the time spent in traffic, this simulation can also be used as a part of a simulation-optimization model for optimal out-of-home advertising locations.

Data Availability

We use the freely available data from the OpenStreetMap project (https://www.openstreetmap.org/). Additionally, all codes and models are Open Source. The model and the framework can be reached at https://github.com/pszufe/OpenStreetMapXDES.jl and https://github.com/pszufe/OpenStreetMapX.jl, respectively.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The initial version of the simulation framework was developed in cooperation with Environics Analytics of Toronto within the research project entitled "Agent-based simulation modelling of out-of-home advertising viewing opportunity," supported by the Ontario Centres of Excellence ("OCE") under the Voucher for Innovation and Productivity (VIP) program, OCE Project Number 30293.