Information Management and Applications of Intelligent Transportation System
Research Article | Open Access
A Method for Driving Route Predictions Based on Hidden Markov Model
We present a driving route prediction method based on the Hidden Markov Model (HMM). This method can accurately predict a vehicle’s entire route as early in a trip’s lifetime as possible, without requiring origins and destinations to be input beforehand. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in the rectangular coordinate system, and build the HMM to prepare for route predictions, using a method of training set extension based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, experimental results demonstrate the effectiveness of the HMM-based route predictions.
Currently, many drivers use various kinds of navigation software to acquire better driving routes. The main route recommendation function in such software, for example, Google Maps and Baidu Maps, is to find several routes between a given origin and destination by combining path algorithms with historical traffic data. A driver then selects one of the recommended routes according to personal preference, driving distance, and current road congestion information. However, these methods of driving route recommendation have some problems. Firstly, most people prefer routes with many smooth road segments, so the originally smooth roads will become congested while the originally congested roads become smooth. Secondly, once a route is selected, the software cannot promptly inform the driver to adjust the route according to real-time traffic congestion data as the trip progresses. Finally, most traffic route navigation software relies on historical data to predict traffic congestion . When emergency situations arise, for example, a large rally organized in an area, a large number of vehicles will move to this region in a short time, leading to traffic congestion there. Obviously, such a case may not have appeared in the historical data.
In view of the above problems, this paper proposes a driving route recommendation system and highlights a method for driving route predictions based on the Hidden Markov Model (HMM). The method can predict which road segments are congested or smooth through route predictions. The system will also update traffic information in real time and inform the driver to adjust the driving route as the trip progresses.
At present, several methods of route prediction have been suggested, but some problems remain. Karbassi and Barth  described a method to predict smart vehicles’ routes between given starting and ending drop-off stations based on a car-sharing application. In our work, the destination never needs to be input into the system beforehand. Our approach also differs from the short-term route prediction in Krumm’s work : our method makes long-term predictions about the entire route. Froehlich and Krumm  found from collected GPS data that a large portion of a typical driver’s trips are repeated. Based on this fact, they predicted a driver’s entire route by using the driver’s trip history. Simmons et al.  first assumed that drivers have certain routine routes and that, by learning a model from previous experience, one can accurately predict what a driver will do in the future. Based on this underlying premise, they presented an approach to predicting driver intent using Hidden Markov Models. In fact, however, it is impractical to build a Hidden Markov Model for every driver, and many routes are not fully regular. When a driver takes a new route, the model for that driver cannot predict the driver’s route and destination intent.
This paper is organized as follows. The next section describes the architecture of our route recommendation system and explains each module in the system. Section 3 introduces how to construct a road network model and Section 4 presents how to define each of the driving routes based on Section 3. The process of building HMM and the method of making route predictions are discussed in Section 5. Then Section 6 shows experimental results. Finally, Section 7 will conclude the paper.
2. The Architecture of Driving Route Recommendation System Based on HMM
The architecture of the driving route recommendation system consists of the following phases (see Figure 1).
(i) Driving Route Predictions Based on HMM. It is the core of our recommendation system and is chiefly introduced in this paper. The module could find which routes a driver will be on when making a route prediction. Even though we could not accurately gain the completely correct routes in practice, these possible routes are still very important for preestimating traffic congestion in the future.
(ii) Traffic Congestion Preestimation. It is mainly used to predict the congestion of each road. At the time , the congestion level of each road is denoted by the total number of possible driving routes containing that road in a time period. The higher the value is, the more congested the road is.
(iii) Vehicle Route Recommendation. It collects information about just-driven road segments and traffic congestion situations to introduce better routes for drivers based on existing path algorithms [6–10] (all of these route planning algorithms take traffic congestion situations into account in the process of a vehicle route guidance) without presetting the destination beforehand.
(iv) HMM Correction. It is used to correct the HMM based on newly input driving routes. The given corpus of training samples may not fully include all possible driving routes. As more driving routes are input, the amount of data for training the HMM will also grow, which could improve the prediction accuracy.
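As a concrete illustration of the congestion preestimation in module (ii), the sketch below counts, for each road segment, how many predicted routes pass over it in the same time period. The function name and the route encoding (road-segment ids) are hypothetical, not from the paper.

```python
from collections import Counter

def congestion_levels(predicted_routes):
    """Estimate congestion per road segment from predicted routes.

    predicted_routes: list of routes, each a list of road-segment ids
    expected to be driven in the same upcoming time period.
    Returns a Counter mapping segment id -> number of routes using it.
    """
    counts = Counter()
    for route in predicted_routes:
        for segment in set(route):  # count each road once per route
            counts[segment] += 1
    return counts

# Three predicted routes; segment "r2" occurs in all of them,
# so it is preestimated as the most congested.
routes = [["r1", "r2"], ["r2", "r3"], ["r2"]]
levels = congestion_levels(routes)
```

The count for each segment is exactly the paper's congestion level for that road: the higher the value, the more congested the road.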
3. The Definition of Road Network Model
This section gives details on how to build a road network model in the rectangular coordinate system. The connection relationships between roads are strictly preserved in the model, and the model should reflect the differences between roads as clearly as possible.
Assume that each road is described as a line segment perpendicular to the -axis, that is, the coordinates of its two endpoints are separately defined by and , where , or as a line segment perpendicular to the -axis, that is, the coordinates of its two endpoints are separately defined by and , where .
In the rectangular coordinate system, the rule for constructing a road network model composed of different road segments is represented as follows:
(i) If and only if () roads intersect at an approximate point, suppose that the road is defined by the line segment perpendicular to the -axis; then roads and adjacent to the road are represented as line segments and intersected with the line segment and perpendicular to the -axis, and roads and not adjacent to road are separately defined by the line segments and intersected with the line segment ( or ) and perpendicular to . For example, there are five line segments intersected at a point in Figure 2.
(ii) If and only if three different roads , , and intersect at three points (as shown in Figure 3), suppose that the road is defined by the line segment perpendicular to the -axis; then the road is defined by the line segment intersected with the line segment and perpendicular to the -axis, and the road is divided into two segments: one is the line segment intersected with the line segment and perpendicular to the -axis and the other is the line segment intersected with the line segment and perpendicular to the -axis.
The length of each line segment is defined as follows: the length of a line segment () is the number of line segments perpendicular to the -axis between the two endpoints of (including the two endpoints), and the length of a line segment () is the number of line segments perpendicular to the -axis between the two endpoints of (including the two endpoints). But in Figure 3 the length of is different from the others: the definitions of the lengths of and are both limited to the region made up of roads , , and .
Therefore, as shown in Figure 4, our method transforms the map into the road network model in a rectangular coordinate system. To describe the model-building process clearly, our method deals only with the main roads on the map.
4. The Definition of Driving Routes in -Axis and -Axis
Suppose that the starting point of the vehicle route is and the endpoint is ; the route composed of roads from to is expressed as an ordered sequence of coordinate points: where is the endpoint of the line segment or , is the endpoint of the line segment or , and is the intersection point of the line segments and .
For example, the line connecting point (i.e., Huafuyuan) with point (i.e., Kang’ai Hospital) is a driving route in Figure 5. The vehicle has passed through 5 roads, including Fujian Road, Zhongfu Road, Heilongjiang Road, Jinmao Street, and Xufu Alley. Suppose that is the starting point and is the endpoint; then the route can be represented as follows based on Figure 4:
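To make the coordinate sequence representation above concrete, the following minimal sketch (with hypothetical grid coordinates, not taken from Figure 4) stores a route as an ordered list of intersection points; reversing the list yields the opposite driving direction over the same road segments.

```python
# A route is an ordered sequence of intersection coordinates from the
# starting point to the endpoint. The coordinates below are illustrative
# placeholders, not the actual points of the Huafuyuan route.
route = [(1, 3), (2, 3), (2, 5), (4, 5)]

def reverse_route(points):
    """Same road segments, opposite driving direction."""
    return list(reversed(points))
```

Because the point order encodes direction, a route and its reversal are distinct driving routes even though they cover identical road segments, which is why the hidden-state construction in Section 5 treats them separately.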
5. Driving Route Predictions Based on HMM
5.1. A Method of Extending the Training Set Based on K-means++
It is necessary to train the HMM from drivers’ past history. In particular, the larger the set of training examples is, the more accurate the HMM for path predictions is. Given the limitations of the available training examples, the training set cannot contain all routes that drivers will take in the future. Therefore, this paper proposes a method of extending the training examples based on K-means++ . It can enlarge the training data as much as possible from the given training examples.
After analyzing the given training examples, it is found that the starting points and endpoints of vehicle routes are distributed in residential, commercial, and work areas. People usually go to work from residential areas in the morning and then go back from work areas, or they first go to commercial areas and then go home. Therefore, it is believed that vehicle routes are generally regular to some extent, so that each path can also be treated as implying a return path. In addition, it is found that when traffic reaches its peak, a driver will generally avoid congested roads and select the route with the shortest time to the destination, while at other times drivers select the shortest distance to the destination to save costs. For a given origin and destination, two kinds of routes can thus be generated according to different times.
Finally, it is not known beforehand how many clusters the coordinate point set should be partitioned into, so the K-means++ algorithm is exploited in this paper to automatically classify coordinate points into clusters. It should be pointed out that the distance between vehicle-route endpoints in the same cluster is rather short, so people would generally not drive from one such point to another; it is therefore unnecessary to calculate vehicle routes for this case. This assumption will be verified in the experiment.
The algorithm for extending training examples based on K-means++ is as follows (see Algorithm 1).
(i) Initialize coordinate point sets and and an extending route set New (Lines 01-02).
(ii) Traverse a given training set and read all vehicle routes’ starting points and endpoints , and then insert these coordinate points into the set . Filter repeated coordinates in the set , which yields the set composed of distinct starting points and endpoints (Lines 03–07).
(iii) Use the K-means++ algorithm to classify and then acquire clusters (Line 08).
(iv) Traverse each cluster and then determine whether or not two coordinate points belong to the same cluster . If not, use the function , to calculate routes between the two coordinate points (Lines 09–13).
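A minimal, self-contained sketch of the clustering step is given below. It implements only the k-means++ seeding and a nearest-center assignment in pure Python; the paper's actual extension procedure (generating routes between points in different clusters) is not reproduced, and all names are illustrative.

```python
import random

def kmeanspp_centers(points, k, rng=random.Random(0)):
    """Minimal k-means++ seeding: pick k initial centers, each new center
    chosen with probability proportional to its squared distance from the
    nearest center already picked."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers)
              for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

def assign(points, centers):
    """Assign each trip endpoint to its nearest cluster center."""
    return [min(range(len(centers)),
                key=lambda i: (p[0] - centers[i][0]) ** 2
                            + (p[1] - centers[i][1]) ** 2)
            for p in points]
```

Following the extension rule in the text, synthetic routes would then be generated only between endpoint pairs whose labels differ, since within-cluster trips are assumed too short to drive.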
5.2. Parameter Definitions of a HMM for Route Predictions
Since it is necessary to input a driver’s just-driven path, represented by coordinate points, into a HMM and then output possible future entire paths, the coordinate point sequence corresponding to the just-driven path can be regarded as an observation sequence , and the corresponding sequence composed of different route sets can be regarded as a hidden state sequence . The following gives details on the process of constructing the HMM from training examples (shown in (3)). Note that in practice the number of training examples is much larger than the data shown below.
Training Examples. Consider
In (3), assume that are routes’ symbols in order to distinguish different vehicle routes. The observation set includes the starting symbol (), the end symbol (), and different coordinate points. Each observation is defined by , where is the number of route in the training set and is the number of coordinate points in each route . For example, the observation set of the above training example is <, >, (), (), (), (), (), (), (), (), (), (), (), (), (), ()}. And an observation sequence is an ordered sequence of symbols and coordinate points from the starting to the end. For example, the observation sequence of the route is <, , , , , and >.
Besides, the definition of hidden states is relatively more complex than observation states. At first, assume that each hidden state is defined by , where is the number of route in the training set and is the number of coordinate points in each vehicle route . The hidden state set includes the symbol being produced from the observations , and different routes’ symbol sets (e.g., ) corresponding to different coordinate points. For example, hidden states being produced from the above observations of the route are separately , , , , , and . A hidden state sequence set is defined by QS, storing hidden state sequences being produced from hidden states and each vehicle route is directed. Suppose that represents that a vehicle passes through road segments from the starting point to the endpoint , but represents that a vehicle passes through the same road segments from to . Even though each observation state is same in the two opposite routes, ordered coordinate points’ sequences are completely opposite. So a method is explored to calculate hidden states corresponding to each coordinate point next.
The algorithm for hidden state determinations is as follows (see Algorithm 2).(i)Initialize a hidden state sequence set QS (Line 1).(ii) Obtain a beginning point and an endpoint from the vehicle route and a beginning point and an endpoint from the vehicle route ; then calculate denoted by and denoted by (Lines 2–9).(iii) Compute the cosine value of intersection angle between vectors and (Line 10):(iv) If , traverse each coordinate point in vehicle routes and , and then judge whether or not a coordinate point in is also included in . If it is included, insert a symbol into the corresponding location of the sequence (Lines 10–14). If , driving directions of the two routes are opposite although the routes include the same coordinate point. For example, if a vehicle is driving east in a route , the possibility of passing through south or western roads in a route in our road network model is low. So the kind of hidden states will not be taken into account. And then insert a symbol and a symbol into on the basis of the given (Lines 15–20).(v) After calculating all of the hidden state sequence, insert each hidden state sequence into the sequence set QS (Line 21).
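The direction test in step (iii) of the algorithm above can be sketched as follows: compare the start-to-end vectors of two routes by the cosine of the angle between them. The function name and the zero threshold are assumptions for illustration; the paper's threshold is given in Algorithm 2 itself.

```python
import math

def same_direction(route_a, route_b, threshold=0.0):
    """Compare the overall driving directions of two routes by the cosine
    of the angle between their start->end vectors. Routes are ordered
    lists of (x, y) coordinate points."""
    ax, ay = route_a[-1][0] - route_a[0][0], route_a[-1][1] - route_a[0][1]
    bx, by = route_b[-1][0] - route_b[0][0], route_b[-1][1] - route_b[0][1]
    cos = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return cos > threshold

# Same road segment driven in opposite directions: cosine is -1, so the
# two routes are not treated as sharing hidden states.
east = [(0, 0), (5, 0)]
west = [(5, 0), (0, 0)]
```

This is why a route and its reversal, although passing through identical coordinate points, contribute different hidden states in the model.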
5.3. Parameter Estimation of a HMM for Route Predictions
After determining observation states and corresponding hidden states in the HMM for route predictions, our method uses the total training dataset Total, including the given training set and the extending training set New, to estimate model parameters. To reduce the negative impact of the extended samples on the HMM, a weighted method is used in the process of estimating HMM parameters. In addition, the problem of data sparseness, also known as the zero-frequency problem, arises in the process of building the HMM, so our method adopts the add-one (Laplace)  smoothing technique to deal with events that do not occur in the total training set. The process of estimating HMM parameters by the weighted method and add-one (Laplace) smoothing is described as follows.
(i) The following equation is used for the initial probability distribution: where is the number of hidden states (i.e., the total number of different vehicle routes), and separately represent the number of times the hidden state appears in the given and extending training sets, and represents the weight ().
(ii) The following equation is used for the hidden state transition matrix: where and separately represent the number of times a hidden state followed in the given and extending training sets and is the number of times the hidden state occurs in the total training set.
(iii) The following equation is used for the confusion matrix: where and separately represent the number of times the hidden state accompanies the observation state in the given and extending training sets and is the number of times the observation state occurs in the total training set.
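The weighted, add-one-smoothed estimation can be sketched for the transition matrix as follows. This is only an illustrative reading of the equations above, not the paper's exact formulas: given-set counts are combined with down-weighted extended-set counts, and one is added to every count so that unseen transitions keep a nonzero probability. All names are hypothetical.

```python
def smoothed_transitions(given_counts, ext_counts, states, w=0.2):
    """Hedged sketch of weighted, add-one-smoothed transition estimates.

    given_counts / ext_counts: dicts mapping (state_i, state_j) -> number
    of times state_j followed state_i in the given / extended training
    sets. w down-weights the extended samples; add-one smoothing keeps
    every transition probability nonzero.
    """
    n = len(states)
    a = {}
    for i in states:
        row_total = sum(given_counts.get((i, j), 0)
                        + w * ext_counts.get((i, j), 0) for j in states)
        for j in states:
            c = given_counts.get((i, j), 0) + w * ext_counts.get((i, j), 0)
            a[(i, j)] = (c + 1) / (row_total + n)  # add-one (Laplace)
    return a
```

Each row still sums to one after smoothing, and a transition never seen in either training set receives the small but nonzero probability 1/(row_total + n).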
As described above, our method can build the HMM for vehicle route predictions. But drivers may choose different routes from a starting point to an endpoint at different times of day. For example, during the rush hours (7:00~9:00 and 17:00~19:00) people hope to reach their destination as quickly as possible and try their best to avoid congested roads, while at other times people may choose the shortest route. Therefore, training examples can be classified according to the time of day: one group of training examples is from 7:00~9:00 and 17:00~19:00, and another is from other times. Section 6 will test the impact on prediction accuracy of building different HMMs for different times of day.
5.4. Driving Route Predictions
The aim of this section is to introduce how to predict upcoming routes based on just-driven road segments. The solution to this problem is corresponding to a HMM decoding which is to discover the hidden state sequence that was most likely to have produced a given observation sequence. Here, the Viterbi algorithm  is used to find the best hidden state sequence composed of different symbols for an observation sequence (a given vehicle route). The process of a vehicle route prediction is shown in Figure 6.
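A textbook Viterbi decoder of the kind used here can be sketched as follows. The dictionaries-of-probabilities interface and the small probability floor for missing entries are illustrative assumptions, not the paper's implementation.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence.

    start_p: state -> probability; trans_p: (prev, next) -> probability;
    emit_p: (state, observation) -> probability. Missing entries default
    to a tiny floor instead of zero (a crude stand-in for smoothing).
    """
    floor = 1e-12
    V = [{s: start_p.get(s, floor) * emit_p.get((s, obs[0]), floor)
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states,
                       key=lambda p: V[t - 1][p] * trans_p.get((p, s), floor))
            V[t][s] = (V[t - 1][prev] * trans_p.get((prev, s), floor)
                       * emit_p.get((s, obs[t]), floor))
            back[t][s] = prev
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```

In the route setting, the observations would be just-driven coordinate points and the states would be route symbols, so the decoded sequence names the routes most likely to contain the trip so far.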
Some problems may arise in the process of implementing the Viterbi algorithm. The total training set, including the given and extending training examples, is still so limited that it cannot fully contain all possible upcoming vehicle routes. Suppose that the upcoming route does not occur in the total training set: either some of its coordinate points are new to the training examples, or every coordinate point has occurred in the total training set but this particular combination of points has not appeared in the training examples. In the former case, the Viterbi algorithm cannot be directly used to compute the hidden state sequence. For example, in Figure 5, if a vehicle is on the current road segment represented by () and the representation of the corresponding just-driven route is < , the Viterbi algorithm cannot be adopted to find a hidden state sequence for this observation sequence. In the latter case, even though the Viterbi algorithm can be used, no hidden state will contain this new route’s symbol. For example, if a new route is represented by < and all of these coordinate points have occurred in Figure 5, the symbol of the upcoming vehicle route will not appear in any hidden state, which means people cannot directly determine where the vehicle will drive to. To address these problems, an algorithm for vehicle route predictions is proposed as follows (see Algorithm 3).
(i) Suppose that is an observation sequence composed of coordinate points after the vehicle has passed through roads; then initialize three sets , , and , where represents a set of upcoming vehicle routes’ symbols, (; as described above, is a set of all observations in the training set), (), and the elements of are all in the set (Lines 1-2).
(ii) Traverse the observation sequence and determine whether or not each coordinate point belongs to the set . If a coordinate point belongs to , then insert the point into the set . 
If not, insert it into (Lines 3–8).
(iii) Define that is the number of elements in the set and is the number of elements in the set (Lines 9-10).
(iv) If , the Viterbi algorithm is not used to find the upcoming routes, and then (Lines 11-12).
(v) If , the Viterbi algorithm can be used for prediction; then use the function Viterbi_Route to acquire the route set most likely related to the upcoming routes. This set will be helpful for drivers (Lines 13-14).
(vi) If the input observation sequence has not appeared in the total training set before and some of the coordinate points in have also not appeared in (i.e., ), four cases should be discussed:
(a) Suppose that ; then the possible route set can be calculated by the function Viterbi_Route () (Lines 15–17).
(b) Suppose that ; then use the function recursively to predict with the observation sequence composed of the remaining coordinate points (Lines 18-19).
(c) Suppose that ; then use the function recursively to predict with the observation sequence composed of the remaining coordinate points (Lines 20-21).
(d) In the remaining cases, suppose that and , , ; then use the function recursively to predict with the observation sequence composed of the remaining coordinate points (Lines 22-23). For example, if the input observation sequence is () () () () (), where , then the result of the vehicle route prediction is the set of hidden states corresponding to the coordinate point ().
The function Viterbi_Route is described as follows (see Algorithm 4).(i)Use Viterbi algorithm to calculate the hidden state sequence corresponding to the observation sequence (Line 1).(ii)Define that the number of elements in the hidden state sequence is (Line 2).(iii)If , a set of upcoming vehicle routes’ symbols is the hidden state set (Lines 3-4).(iv)Calculate the intersection between and another hidden state set . If this intersection exists, . If not, (Lines 5–10).
For example, if two hidden states are separately and , then and the most likely upcoming route is . If two hidden states are separately and and , then the most likely upcoming route is .
6. Route Prediction Results
6.1. Experimental Platform
Every vehicle should be equipped with a device for collecting vehicle route data; our data collectors use a mobile phone running the software Map Plus. We mainly use one of its functions, path tracking, to record the path driven. It runs in the background while other apps run or the device is locked, and it can export or send tracked paths as KML files. However, continued use of GPS in the background can dramatically decrease the phone’s battery life, so the experiment also needs an external large-capacity battery to power the phone continuously. In addition, researchers install the software Google Earth on a computer to display each collected vehicle route.
6.2. Data Collection
A total of 20 volunteers are selected for collecting the experimental data. To facilitate communication between the volunteers and us, all volunteers are from our university, including 15 teachers and 5 students. A month later, our researchers had acquired a total of 1052 paths, among which the number of distinct routes is 51. The same path is a journey in which volunteers start from one point and reach the end through the same road segments. In the process of data collection, some problems inevitably arise.
(i) In tunnels, underground parking, and areas dense with high-rise buildings, parts of paths are offset by GPS noise .
(ii) Volunteers forget to open the software for recording route data, so route data are not collected.
(iii) Volunteers forget to turn off the software when they reach the end, so extra path data are concentrated in a small area around the destination.
When researchers come across the above problems while checking path data, we manually correct the GPS data. In this way, the experiment overcomes the influence of GPS noise and human factors to ensure the accuracy of the collected data.
In the actual process of collecting the GPS data, the collected data do not focus only on longitude and latitude but also combine the GPS data of the starting point, the middle, and the end with road segments, describing the route as a path made up of the starting point, the endpoint, and the streets driven.
6.3. Experimental Metric
To evaluate the performance of route predictions based on HMM, the metric explored is the correct prediction accuracy as the trip progresses. Suppose that a vehicle has passed through roads; the possible route set predicted by the HMM is . The prediction accuracy is then defined as follows: where indicates the entire upcoming route, represents the number of duplicate road segments between one of the possible vehicle routes in the set and the entire upcoming route, and represents the length of the route , that is, its number of road segments.
For example, assume that the total training examples are shown in (3) and is the upcoming vehicle route, which means is from the starting point to the end . When the vehicle has traveled through one road, the observation sequence is denoted by and the corresponding hidden state sequence is . So the duplicate between and , separately is , . The length of routes and is separately and . So when the vehicle has passed through the first point, the prediction accuracy is as follows:
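One possible reading of the accuracy metric above is sketched below: for each predicted route, take the fraction of the actual upcoming route's segments that it shares, then average over the predicted set. The function name and the averaging step are assumptions for illustration; the paper's own formula is the definition given before the worked example.

```python
def prediction_accuracy(predicted_routes, actual_route):
    """Sketch of the accuracy metric: for each predicted route, the
    fraction of the actual upcoming route's road segments it shares,
    averaged over the predicted set.

    Routes are lists of road-segment ids; duplicates within a route are
    ignored when counting shared segments.
    """
    total = 0.0
    for route in predicted_routes:
        shared = len(set(route) & set(actual_route))
        total += shared / len(actual_route)
    return total / len(predicted_routes)
```

For instance, with an actual route of three segments and two predicted routes sharing three and one of them, the accuracy would be (3/3 + 1/3) / 2 = 2/3.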
6.4. Experimental Results
6.4.1. Training and Test Data
In the experiment, all collected route examples come from the software Map Plus, where each route is stored in a .KML file composed of a series of GPS data points. Researchers check these data over a certain time period through Google Earth. According to the previous description of the road network model, routes represented by GPS data points can be converted into ones represented by coordinate points.
Besides, some extended training examples are introduced here. These examples are generated from the originally collected data through the K-means++-based method of enlarging the training set described before. Firstly, the raw training examples composed of coordinate points are entered. Then all starting points and endpoints are divided into 5 clusters by K-means++. The distance between each coordinate point and its cluster center is, on average, 0.314 km, and the farthest distance between two points in a cluster is, on average, 0.628 km. This illustrates that the distance between two places in a cluster is relatively short, so most people would not drive between them, which is why the extension algorithm does not generate driving routes within a cluster.
Figure 7 displays the trip data overlaid on two maps, one showing the original distinct routes (a) and the other showing both the original and extended distinct routes (b). The number of extended training examples is 13605, of which 13556 routes differ from the original training examples.
Finally, the composition of the test examples is illustrated in detail. To test the accuracy of our prediction algorithm, our method should acquire some real-world vehicle route data. Here the method applies a leave-one-out approach [4, 15], meaning that part of the route data is extracted from the total training examples to serve as test examples.
Test Examples (i). This set includes routes that have not appeared in the training examples, so it can simulate real-world trip data to evaluate the prediction accuracy of our algorithm in actual applications.
Test Examples (ii). All of these route examples have appeared in the training examples. Comparing its prediction accuracy with that of test examples (i) illustrates that the number of distinct routes in the training examples should be as large as possible.
6.4.2. Prediction Accuracy
Figure 8 shows the average correct prediction rate on test examples (i) and test examples (ii) by percent of route completed and by current travel distance with different weight values, and also compares the results of Jon Froehlich’s algorithm and our method. “Percent of trip completed” is an intuitive evaluation criterion and is useful for evaluating how well the algorithm performs. However, it is difficult to obtain in practice: a vehicle navigation system can never be sure how far along a route it is in terms of percentage completed without knowing the exact route of the trip from start to end, which is exactly what our prediction method is trying to predict. Instead, a much more practical input parameter is the trip’s current distance traveled, that is, how far the vehicle has traveled since the trip began. Furthermore, the impact of the weight value on the HMM for driving route prediction should also be evaluated; the experiments separately set the weight value to 0.2, 0.5, and 0.8.
(a) Correct prediction rate of all trips by percent of trip completed
(b) Correct prediction rate of repeated trips by percent of trip completed
(c) Correct prediction rate of all trips by miles driven
(d) Correct prediction rate of repeated trips by miles driven
For test examples (i), Figure 8(a) shows that, as expected, after a vehicle has driven the first road segment, little information is known about its path, and the correct prediction rates of both algorithms are much lower. After 35% of the trip has been completed, the correct prediction rate of our algorithm increases to, on average, 49.69% and Jon Froehlich’s algorithm only increases to, on average, 29.94%; after 50% completion, the correct prediction rate of our algorithm moves to, on average, 62.52% and Jon Froehlich’s algorithm moves to, on average, 38.54%. Figure 8(c) can more accurately show the performance of our proposed algorithm for driving route prediction in a real-world scenario. By the end of the first mile, the correct prediction rate of our algorithm jumps to 31.93% accuracy and by the tenth mile this percentage increases to 61.12%. And the results of Jon Froehlich’s algorithm are only between 23.037% and 29.2% for each mile traveled up to 20 miles.
For test examples (ii), Figures 8(b) and 8(d) show that the correct prediction accuracy for both algorithms is, on average, higher than the test dataset (i). In Figure 8(b), the percentage of our algorithm jumps to 90.86% accuracy at the halfway point, but Jon Froehlich’s algorithm can increase to this percentage only after 65% of the trip has been completed. In Figure 8(d), by the end of first mile, the correct prediction accuracy is similar to Figure 8(c), but as the trip progresses, there is a significant jump in prediction accuracy. By the end of 10 miles, the percentage of our algorithm already increases to 83.87%, but at this time Jon Froehlich’s algorithm only increases to 63%. As the vehicle has traveled up to 20 miles, the percentage of our algorithm can move to 99.29%.
Figure 8 shows that the accuracy of driving route predictions increases as the number of observed road segments increases; a longer sequence of road segments is more helpful for our predictions. Both algorithms should also take the driving direction into account at the end of the first road segment: the vehicle could be heading toward either end of the current road segment, and observing only one segment is not indicative of the driver’s direction, so the correct prediction rate there is nearly zero. Furthermore, the prediction accuracy for repeated trips is, on average, much higher than for unknown trips, which demonstrates the necessity of extending the training examples: the probability that new routes occur is reduced, so the prediction accuracy is improved as much as possible. Finally, the larger the weight value “” is, the lower the correct prediction rate is. In our opinion, driving routes are relatively regular, but many routes from the extended examples do not follow this rule and will disturb it, dropping the prediction accuracy. On the other hand, these extended samples are still needed, since they can improve the prediction accuracy as mentioned before. Therefore, a balance should be kept: the extended data should reduce the impact on a driver’s regularity (a regular route is a path that a driver often takes) as much as possible while still being kept in the training set for training and improving the accuracy of the HMM. This is similar to the core idea of add-one (Laplace) smoothing for the problem of data sparseness. This weight value is defined as in future applications.
Figure 9 shows the prediction accuracy of different HMMs, by percent of trip completed and by current travel distance, with trips divided into two categories by time of day: (i) 7:00~9:00 and 17:00~19:00 and (ii) other time periods. HMMs were then trained and tested on the correspondingly classified examples. The plot shows that the prediction accuracy is not very sensitive to the time of day, so this is not an important factor to consider when making driving route predictions. Froehlich and Krumm also found a similar lack of sensitivity to both time of day and day of week. Above all, it is not necessary to classify training samples by time of day to train separate HMMs for route prediction.
This paper first presented a driving route recommendation system in which the prediction module is the core, and then detailed a method to accurately predict a driver’s entire route very early in a trip. A road network model was defined, and each driving route was normalized in the rectangular coordinate system. The method builds HMMs to prepare for route prediction using a training-set extension method based on K-means++ and the add-one (Laplace) smoothing technique. Next, the paper introduced how to predict the upcoming route in a trip using HMMs and the Viterbi algorithm. Finally, the experimental results demonstrated the correctness of our earlier assumptions and verified the effectiveness of our algorithm for route prediction.
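The Viterbi decoding step referenced above can be sketched in its standard textbook form: given initial, transition, and emission probabilities, it recovers the most likely hidden-state sequence for an observation sequence. This is a generic version, not the paper's exact route-prediction variant; the tiny matrices below are illustrative only.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for observations `obs`, given
    initial probabilities `pi`, transition matrix `A`, and emission
    matrix `B` (states x symbols)."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))              # best path probability so far
    psi = np.zeros((T, N), dtype=int)     # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A    # N x N candidate scores
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):             # trace back the best path
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Tiny illustration with two hidden road segments and two observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1], pi, A, B))  # -> [0, 0, 1]
```

In the route-prediction setting, the hidden states would correspond to road segments and the decoded sequence to the most probable upcoming route.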
As future work, we plan improvements on two fronts: (i) enhancing the Laplace smoothing technique to better suit HMMs for driving route prediction and (ii) applying statistical methods to make the Viterbi algorithm work with unknown coordinate points.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The research is supported by the National Natural Science Foundation of China (nos. 61170065 and 61003039), Peak of Six Major Talent in Jiangsu Province (no. 2010DZXX026), China Postdoctoral Science Foundation (no. 2014M560440), Jiangsu Planned Projects for Postdoctoral Research Funds (no. 1302055C), and the Science & Technology Innovation Fund for higher education institutions of Jiangsu Province (no. CXZZ11-0405).
- A. Hamilton, B. Waterson, T. Cherrett, A. Robinson, and I. Snell, “The evolution of urban traffic control: changing policy and technology,” Transportation Planning and Technology, vol. 36, no. 1, pp. 24–43, 2013.
- A. Karbassi and M. Barth, “Vehicle route prediction and time of arrival estimation techniques for improved transportation system management,” in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 511–516, IEEE, Columbus, Ohio, USA, 2003.
- J. Krumm, “A Markov model for driver turn prediction,” SAE SP 2193(1), 2008.
- J. Froehlich and J. Krumm, “Route prediction from trip observations,” SAE SP 2193:53, SAE, 2008.
- R. Simmons, B. Browning, Y. Zhang, and V. Sadekar, “Learning to predict driver route and destination intent,” in Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC '06), pp. 127–132, IEEE, September 2006.
- D. Tian, Y. Yuan, J. Zhou, Y. Wang, G. Lu, and H. Xia, “Real-time vehicle route guidance based on connected vehicles,” in Proceedings of the IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing (GreenCom-iThings-CPSCom '13), pp. 1512–1517, Beijing, China, August 2013.
- I. Kaparias and M. G. H. Bell, “A reliability-based dynamic re-routing algorithm for in-vehicle navigation,” in Proceedings of the 13th International IEEE Conference on Intelligent Transportation Systems (ITSC '10), pp. 974–979, IEEE, September 2010.
- J.-W. Lee, C.-C. Lo, S.-P. Tang, M.-F. Horng, and Y.-H. Kuo, “A hybrid traffic geographic routing with cooperative traffic information collection scheme in VANET,” in Proceedings of the 13th International Conference on Advanced Communication Technology: Smart Service Innovation through Mobile Interactivity (ICACT '11), pp. 1495–1501, IEEE, February 2011.
- I. Leontiadis, G. Marfia, D. Mack, G. Pau, C. Mascolo, and M. Gerla, “On the effectiveness of an opportunistic traffic management system for vehicular networks,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 4, pp. 1537–1548, 2011.
- M. H. Kabir, M. N. Alam, and K. K. Sup, “Designing an enhanced route guided navigation for intelligent vehicular system (ITS),” in Proceedings of the 5th International Conference on Ubiquitous and Future Networks (ICUFN '13), pp. 340–344, July 2013.
- X. Ma, Y. J. Wu, Y. Wang, F. Chen, and J. Liu, “Mining smart card data for transit riders' travel patterns,” Transportation Research Part C: Emerging Technologies, vol. 36, pp. 1–12, 2013.
- R. Szalai and G. Orosz, “Decomposing the dynamics of heterogeneous delayed networks with applications to connected vehicle systems,” Physical Review E, vol. 88, no. 4, Article ID 040902, 2013.
- N.-S. Pai, H.-J. Kuang, T.-Y. Chang, Y.-C. Kuo, and C.-Y. Lai, “Implementation of a tour guide robot system using RFID technology and viterbi algorithm-based HMM for speech recognition,” Mathematical Problems in Engineering, vol. 2014, Article ID 262791, 7 pages, 2014.
- B.-F. Wu, Y.-H. Chen, and P.-C. Huang, “A localization-assistance system using GPS and wireless sensor networks for pedestrian navigation,” Journal of Convergence Information Technology, vol. 7, no. 17, pp. 146–155, 2012.
- J. D. Lees-Miller, R. E. Wilson, and S. Box, “Hidden Markov models for vehicle tracking with Bluetooth,” in Proceedings of the TRB 92nd Annual Meeting Compendium of Papers, 2013.
Copyright © 2015 Ning Ye et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.