Zhihao Zhang, Wenzhong Yang, Silamu Wushour, "Traffic Accident Prediction Based on LSTM-GBRT Model", Journal of Control Science and Engineering, vol. 2020, Article ID 4206919, 10 pages, 2020. https://doi.org/10.1155/2020/4206919

Traffic Accident Prediction Based on LSTM-GBRT Model

Academic Editor: Juan-Albino Méndez-Pérez
Received: 09 Sep 2019
Accepted: 15 Nov 2019
Published: 05 Mar 2020

Abstract

Road traffic accidents are a concrete manifestation of road traffic safety levels, yet current traffic accident prediction methods suffer from low accuracy. More accurate forecasts would allow traffic management departments to make scientific, well-founded decisions. This paper establishes a traffic accident prediction model based on LSTM-GBRT (long short-term memory, gradient boosted regression trees) and predicts traffic safety level indicators by training on traffic accident-related data. Experimental comparison with various regression models and neural network models shows that the LSTM-GBRT model has a good fitting effect and strong robustness. The LSTM-GBRT model can accurately predict the safety level of traffic accidents, so that traffic management departments can better grasp the state of traffic safety.

1. Introduction

“By 2020, halve the number of global deaths and injuries from road traffic accidents” is one target of the Sustainable Development Goals (SDGs) published by the United Nations (UN) in 2015 [1]. National attention to traffic safety continues to increase, and applying traffic accident situation predictions to traffic planning can improve traffic safety. Many experts and scholars have predicted indicators of traffic accidents [2, 3]. The research methods fall mainly into three categories: statistical regression methods [4], grey prediction [5], and neural network models.

Statistical regression methods include time series prediction and many classic traffic accident experience models (the Smid model, I. Agalal model, Japanese model, and Beijing model). Yannis et al. [6] proposed autoregressive nonlinear time-series modelling of traffic fatalities in Europe. Kumar and Toshniwal [7] proposed a novel framework for road traffic accident time series that segments the data into clusters for trend analysis. Ihueze and Onwurah [8] analyzed road traffic crashes in Anambra State, Nigeria, with the intention of developing accurate predictive models for crash frequency using autoregressive integrated moving average (ARIMA) and ARIMA with explanatory variables (ARIMAX) modelling techniques. The regression model is simple and convenient to compute and can predict short-term changes; its essence is a linear fit to the data. However, its predictions are one-sided and weak in anti-interference ability, and because traffic accidents are random events with many influencing factors, the reliability of its predictions is not guaranteed.

The grey prediction model can work with a small number of samples; its principle is simple, it runs fast, and it is easy to test. It can make short- and medium-term macropredictions for data with little fluctuation; its essence is to find the dynamic relationship within the road traffic accident sequence data. However, grey theory models a class of series that satisfies a smooth discrete function condition, and the grey system model describes only processes that monotonically increase or decay exponentially over time. Shi et al. [9] proposed a sequence GM (1, 1) model with a strong exponential law to predict traffic accidents, but the model can only describe monotonic change. Hosse et al. [10] applied grey systems theory MGM (1, 4) to predict the development of road traffic accidents in Germany until 2025 based on the market diffusion of the electronic stability program (ESP). Liu and Wu [11] proposed a grey Verhulst prediction model for road traffic accidents, suitable for nonmonotonic oscillating sequences or S-shaped sequences with saturation. Zhao et al. [12] proposed a weighted combination of several grey prediction methods; although prediction accuracy improved, the combination is still essentially linear in the original data, and shortcomings remain in medium- and long-term prediction.

The neural network prediction method has strong nonlinear mapping ability, high robustness, and powerful self-learning ability and has been widely used in many fields. He and Guo [13] proposed a traffic accident prediction model based on the BP neural network. The model can implement arbitrary nonlinear mappings and is especially suitable for systems with complex internal mechanisms. Its shortcomings include slow training convergence, long training time, and a tendency to get stuck in saddle points. Liwei et al. [14] proposed a grey neural network model: grey theory compensates for data mining's weakness on small, distorted samples, while the neural network compensates for grey theory's restriction to short-term prediction. Although that model improves training speed, its prediction accuracy is low and its deviation too large.

This paper proposes an LSTM-GBRT model for traffic accident prediction. The LSTM layer captures the time-dependent information in the data, while the GBRT layer brings the high robustness of ensemble learning to model training.

2.1. Long Short-Term Memory

The LSTM [15] model, proposed by Hochreiter and Schmidhuber, is a variant of the recurrent neural network (RNN). It builds a specialized memory storage unit and is trained by backpropagation through time, solving the RNN's inability to capture long-term dependencies. The schematic diagram of the LSTM structure is shown in Figure 1.

The standard LSTM can be expressed as follows. At each step $t$ with input vector $x_t$, the input gate is $i_t$, the forget gate is $f_t$, and the output gate is $o_t$. The memory cell state controls what is remembered and forgotten through these gates:

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$$
$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$
$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$$

The memory cell state at time $t$ is updated as follows:

$$c_t = f_t \circ c_{t-1} + i_t \circ \tilde{c}_t$$

After the memory cell state is updated, the current hidden state is calculated:

$$h_t = o_t \circ \tanh(c_t)$$

where $W$ is the input weight matrix, $U$ is the state transition weight matrix, $\sigma$ is the sigmoid function, $\tanh$ is the hyperbolic tangent function, $h_t$ is the output hidden state vector, $\tilde{c}_t$ is the candidate cell state after adjustment and update, and "$\circ$" indicates pointwise multiplication. The three gates jointly control the information entering and leaving the memory cell: the input gate admits new information into the memory cell, the forget gate controls how much stored information is retained, and the output gate defines how much information can be output. The gate structure of the LSTM allows the information in the time series to form a balanced long short-term dependency.
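The gate equations above can be sketched in a few lines of numpy. This is an illustrative single-step implementation with dimensions mirroring the 13 inputs and 11 hidden units used later in Section 4, not the Keras layer the paper actually trains; the random weights are placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b hold the parameters of the
    input (i), forget (f), output (o) gates and candidate state (g)."""
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])   # candidate cell state
    c_t = f * c_prev + i * g        # updated memory cell state
    h_t = o * np.tanh(c_t)          # new hidden state
    return h_t, c_t

# Toy dimensions: 13 input features, 11 hidden units (as in Section 4.2)
rng = np.random.default_rng(0)
n_in, n_hid = 13, 11
W = {k: rng.normal(0, 0.1, (n_hid, n_in)) for k in "ifog"}
U = {k: rng.normal(0, 0.1, (n_hid, n_hid)) for k in "ifog"}
b = {k: np.zeros(n_hid) for k in "ifog"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h.shape)  # → (11,)
```

Because the output gate lies in (0, 1) and tanh in (−1, 1), every component of the hidden state stays strictly inside (−1, 1).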

2.2. Boosting Ensemble Learning Framework

The GBRT model is a boosting [16] type ensemble learning algorithm. Ensemble learning is a technical framework that combines multiple different models to perform a task more efficiently and accurately. Commonly used ensemble learning frameworks include bagging, boosting, and stacking. The training process of the boosting framework is stage-wise: the base models are trained in order, and the training set of each base model is adjusted according to a given strategy. The predictions of all base models are then linearly combined to produce the final prediction. Figure 2 is a schematic diagram of the boosting ensemble learning framework.

The overall model based on the boosting framework can be described by a linear combination:

$$F(x) = \sum_{m=1}^{M} \alpha_m f_m(x)$$

where $\alpha_m f_m(x)$ is the product of the $m$th base model and its weight. The training goal of the overall model is to make the predicted value $F(x)$ approximate the true value $y$, that is, to make each base model approximate the part of the true value that remains to be predicted. Each base model is evaluated on the training examples, and the weight of poorly fitted instances is increased. The researchers adopted a greedy solution: train only one base model at a time, focusing each iteration on a single base model training problem:

$$F_m(x) = F_{m-1}(x) + \alpha_m f_m(x)$$

Each new base model fits the residual $y - F_{m-1}(x)$. Introducing an arbitrary differentiable loss function $L$, the base model instead fits the negative gradient:

$$f_m(x) \approx -\left[\frac{\partial L(y, F(x))}{\partial F(x)}\right]_{F(x) = F_{m-1}(x)}$$
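The stage-wise procedure can be illustrated with squared loss, for which the negative gradient is exactly the residual. The sketch below uses hand-rolled depth-1 regression trees (stumps) on a toy 1-D problem; the stump base learner, learning rate, and data are illustrative assumptions, not the paper's GBRT configuration.

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to the residual r by
    scanning all split points of the 1-D input x for least squared error."""
    best_err, best = np.inf, None
    for s in np.unique(x)[:-1]:
        left, right = r[x <= s], r[x > s]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best_err:
            best_err, best = err, (s, left.mean(), right.mean())
    s, wl, wr = best
    return lambda q: np.where(q <= s, wl, wr)

def gbrt_fit_predict(x, y, n_rounds=100, lr=0.2):
    """Greedy stage-wise boosting with squared loss: each stump fits the
    current residual y - F(x), i.e. the negative gradient of the loss."""
    F = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_rounds):
        stump = fit_stump(x, y - F)   # fit the residual (negative gradient)
        F = F + lr * stump(x)         # additive update F_m = F_{m-1} + lr * f_m
    return F

x = np.arange(10, dtype=float)
y = x ** 2
F = gbrt_fit_predict(x, y)
print(np.abs(F - y).max())
```

After enough rounds the additive ensemble of stumps drives the training residual close to zero, which is the approximation behaviour the boosting derivation above describes.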

2.3. Gradient Boosted Regression Trees Model

For a given data set with $n$ examples and $m$ features, $D = \{(x_i, y_i)\}$ with $x_i \in \mathbb{R}^m$ and $y_i \in \mathbb{R}$, a tree ensemble model uses $K$ additive functions to predict the output:

$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F}$$

where $\mathcal{F} = \{f(x) = w_{q(x)}\}$, with $q: \mathbb{R}^m \to \{1, \dots, T\}$ and $w \in \mathbb{R}^T$, is the space of regression trees. Here, $q$ represents the structure of each tree, mapping an example to the corresponding leaf index, $T$ is the number of leaves in the tree, and each $f_k$ corresponds to an independent tree structure $q$ and leaf weights $w$. Unlike decision trees, each regression tree contains a continuous score on each leaf, and we use $w_i$ to represent the score on the $i$th leaf.

3. Data Source

3.1. Road Safety Impact Factor Data

As we all know, traffic accidents are caused by a combination of factors involving people, vehicles, roads, and the environment. People include pedestrians and drivers; vehicles include motor vehicles and nonmotor vehicles on the road; road conditions describe the state of the road; environment refers to the natural and social environment, the latter including political, economic, cultural, and other factors. When collecting data, we should consider accident-related data as comprehensively as possible. The data used in this paper include gross domestic product (GDP) (100 million yuan), per capita GDP (yuan), gross national income (100 million yuan), road mileage (10,000 kilometers), highway mileage (10,000 kilometers), number of civilian vehicles (10,000 vehicles), number of drivers (10,000 people), passenger traffic (10,000 people), road passenger traffic (10,000 people), total population at the end of the year (10,000 people), male population (10,000 people), female population (10,000 people), urban population (10,000 people), rural population (10,000 people), and the total number of traffic accident deaths per year (persons). The data are taken from the 1997–2016 statistics of the National Bureau of Statistics of China and are shown in Table 1.


Year | GDP | Gross national income | Per capita GDP | Road mileage | Highway mileage | Civilian vehicles | Drivers | Passenger traffic | Road passenger traffic | Total population | Male population | Female population | Urban population | Rural population | Traffic accident deaths
1997 | 79715 | 78802.9 | 6481 | 122.64 | 0.48 | 1219.09 | 2619.25 | 1326094 | 1204583 | 123626 | 63131 | 60495 | 39449 | 84177 | 73861
1998 | 85195 | 83817.6 | 6860 | 127.85 | 0.87 | 1319.30 | 2974.06 | 1378717 | 1257332 | 124761 | 63940 | 60821 | 41608 | 83153 | 78067
1999 | 90564 | 89366.5 | 7229 | 135.17 | 1.16 | 1452.94 | 3361.12 | 1394413 | 1269004 | 125786 | 64692 | 61094 | 43748 | 82038 | 83529
2000 | 100280.1 | 99066.1 | 7942 | 167.98 | 1.63 | 1608.91 | 3746.51 | 1478573 | 1347392 | 126743 | 65437 | 61306 | 45906 | 80837 | 93853
2001 | 110863.1 | 109276.2 | 8717 | 169.8 | 1.94 | 1802.04 | 4462.68 | 1534122 | 1402798 | 127627 | 65672 | 61955 | 48064 | 79563 | 105930
2002 | 121717.4 | 120480.4 | 9506 | 176.52 | 2.51 | 2053.17 | 4827.08 | 1608150 | 1475257 | 128453 | 66115 | 62338 | 50212 | 78241 | 109381
2003 | 137422 | 136576.3 | 10666 | 180.98 | 2.97 | 2382.93 | 5368.07 | 1587497 | 1464335 | 129227 | 66556 | 62671 | 52376 | 76851 | 104372
2004 | 161840.2 | 161415.4 | 12487 | 187.07 | 3.43 | 2693.71 | 7101.64 | 1767453 | 1624526 | 129988 | 66976 | 63012 | 54283 | 75705 | 107077
2005 | 187318.9 | 185998.9 | 14368 | 334.52 | 4.1 | 3159.66 | 8017.76 | 1847018 | 1697381 | 130756 | 67375 | 63381 | 56212 | 74544 | 98738
2006 | 219438.5 | 219028.5 | 16738 | 345.7 | 4.53 | 3697.35 | 9317.24 | 2024157.64 | 1860487 | 131448 | 67728 | 63720 | 58288 | 73160 | 89455
2007 | 270232.3 | 270844 | 20505 | 358.37 | 5.39 | 4358.36 | 10567.15 | 2227761.21 | 2050680 | 132129 | 68048 | 64081 | 60633 | 71496 | 81649
2008 | 319515.5 | 321500.5 | 24121 | 373.02 | 6.03 | 5099.61 | 12276.8 | 2867891.96 | 2682114 | 132802 | 68357 | 64445 | 62403 | 70399 | 73484
2009 | 349081.4 | 348498.5 | 26222 | 386.08 | 6.51 | 6280.61 | 13740.73 | 2976897.83 | 2779081 | 133450 | 68647 | 64803 | 64512 | 68938 | 67759
2010 | 413030.3 | 411265.2 | 30876 | 400.82 | 7.41 | 7801.83 | 15129.89 | 3269508.17 | 3052738 | 134091 | 68748 | 65343 | 66978 | 67113 | 65225
2011 | 489300.6 | 484753.2 | 36403 | 410.64 | 8.49 | 9356.32 | 17416.76 | 3526318.73 | 3286220 | 134735 | 69068 | 65667 | 69079 | 65656 | 62387
2012 | 540367.4 | 539116.5 | 40007 | 423.75 | 9.62 | 10933.09 | 20028.52 | 3804034.9 | 3557010 | 135404 | 69395 | 66009 | 71182 | 64222 | 59997
2013 | 595244.4 | 590422.4 | 43852 | 435.62 | 10.44 | 12670.14 | 21742.7 | 2122991.55 | 1853463 | 136072 | 69728 | 66344 | 73111 | 62961 | 58539
2014 | 643974 | 644791.1 | 47203 | 446.39 | 11.19 | 14598.11 | 24812.07 | 2032217 | 1736270 | 136782 | 70079 | 66703 | 74916 | 61866 | 58523
2015 | 689052 | 686449.6 | 50251 | 457.73 | 12.35 | 16284.45 | 28012.99 | 1943271 | 1619097 | 137462 | 70414 | 67048 | 77116 | 60346 | 58022
2016 | 743585 | 740598 | 53935 | 469.63 | 13.1 | 18574.54 | 30328.77 | 1900194 | 1542758 | 138271 | 70815 | 67456 | 79298 | 58973 | 63093

3.2. Prediction Index for Road Safety Level

Measures of traffic safety level generally include the number of accidents, deaths, injuries, and property losses. Indicators such as the number of accidents, the number of injured, and economic losses are subject to subjective influence, so their accuracy is difficult to judge, whereas death statistics are reliable, difficult to falsify, and comparable. Therefore, this article uses the number of deaths as the predictor of the traffic safety level.

3.3. Variable Correlation Analysis

If the information in the data is uncorrelated or noisy, the quality of the predictions may be affected [17]. In this paper, features are filtered by comparing the chi-square value and the Pearson correlation coefficient, which optimizes the prediction results. The chi-square value is calculated as follows:

$$\chi^2_d = \sum_{i=1}^{r}\sum_{j=1}^{c} \frac{(A_{ij} - E_{ij})^2}{E_{ij}}$$

where $r$ is the number of variable groups, $c$ is the number of target groups, $d = (r-1)(c-1)$ is the degree of freedom, $A_{ij}$ is the observed frequency of the variable pair $(i, j)$, and $E_{ij}$ is its expected frequency.

The Pearson correlation coefficient is calculated as follows:

$$R = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2}\sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}}$$

where $R$ is the correlation coefficient, $X$ is the independent variable, $Y$ is the dependent variable, $\bar{X}$ is the mean of the independent variable, and $\bar{Y}$ is the mean of the dependent variable.

The chi-square test measures the degree of deviation between samples: the greater the chi-square score, the more obvious the association. The Pearson correlation coefficient gives the degree of correlation between variables, with its absolute value indicating the strength of correlation. According to the chi-square scores and Pearson coefficients in Table 2, we removed the variable with the smallest chi-square score (highway mileage) and the variable with the smallest absolute Pearson coefficient (road passenger traffic). Finally, 12 related independent variables plus the death toll, 13 in total, were used as input variables.


Variable | Chi-square value | Pearson coefficient value
GDP | 3050901 | −0.799
Gross national income | 3047539 | −0.799
Per capita GDP | 212068.8 | −0.801
Road mileage | 1030.5 | −0.728
Highway mileage | 53.8 | −0.749
Number of civilian vehicles | 90673.9 | −0.764
Number of drivers | 119582.7 | −0.766
Passenger traffic | 5040066 | −0.555
Road passenger traffic | 5131653 | −0.498
Total population at the end of the year | 2764.561 | −0.649
Male population | 1353.25 | −0.598
Female population | 1433.637 | −0.696
Urban population | 48875.14 | −0.693
Rural population | 16934.48 | +0.716
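As a sketch of this filtering step, the Pearson formula above can be implemented directly and used to drop the weakest variable. The feature names and synthetic data below are hypothetical stand-ins for the Table 2 variables, chosen only to make the selection behaviour visible.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient, as in the formula above."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

# Hypothetical sample: 20 "years" of a target and two candidate variables
rng = np.random.default_rng(1)
years = 20
deaths = rng.normal(80000, 10000, years)
features = {
    "gdp": -3.0 * deaths + rng.normal(0, 5000, years),  # strongly (negatively) correlated
    "road_passenger": rng.normal(1e6, 2e5, years),      # uncorrelated noise
}
scores = {name: pearson_r(col, deaths) for name, col in features.items()}

# Drop the variable whose |r| is smallest, mirroring the selection in Section 3.3
weakest = min(scores, key=lambda k: abs(scores[k]))
print(weakest)
```

The same pattern extends to the chi-square score: compute one score per candidate variable, then remove the variable with the smallest score before training.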

3.4. Model Performance Evaluation Index

In this paper, the error rate (E) and root mean square error (RMSE) are used to compare prediction deviation, and the root mean square logarithmic error (RMSLE) and coefficient of determination (R-square) are used to measure the fitting capacity of the model.

The error rate and root mean square error formulas are as follows:

$$E = \frac{\hat{y}_i - y_i}{y_i}, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$$

The root mean square logarithmic error and coefficient of determination are as follows:

$$\mathrm{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\ln(y_i + 1) - \ln(\hat{y}_i + 1)\right)^2}, \qquad R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$$

where $n$ is the number of samples, $y_i$ is the original value, $\hat{y}_i$ is the predicted value, and $\bar{y}$ is the sample mean.
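These four metrics are straightforward to implement. A minimal numpy version follows; the sign convention for E (prediction minus truth, over truth) is inferred from the signs in Table 3, and the sample values reuse the 2013–2016 true deaths from Table 1 against the constant LSTM-GBRT prediction from Table 7.

```python
import numpy as np

def error_rate(y_true, y_pred):
    """Relative error E = (prediction - truth) / truth."""
    return (y_pred - y_true) / y_true

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def rmsle(y_true, y_pred):
    """Root mean square logarithmic error."""
    return np.sqrt(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))

def r_square(y_true, y_pred):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([58539.0, 58523.0, 58022.0, 63093.0])          # true deaths 2013-2016
p = np.array([58553.24, 58553.24, 58553.24, 58553.24])      # LSTM-GBRT, Table 7
print(rmse(y, p), r_square(y, p))
```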

4. LSTM-GBRT Modelling Methodology

The LSTM neural network is capable of capturing time-dependent information and has an excellent effect on time series prediction, but it is insufficient in predicting inflection point data. The GBRT model is a typical representative of the ensemble learning algorithm, and the model is robust. In this paper, the LSTM-GBRT model is proposed by combining the two methods. The LSTM neural network is used to extract the features with time-dependent information. The features are trained by the GBRT model to predict traffic accidents. The structure of the LSTM-GBRT model is shown in Figure 3.

4.1. Normalization

The raw data are processed using min-max normalization to eliminate dimensional differences. A linear transformation maps the original data into the [0, 1] interval:

$$X = \frac{x - \min}{\max - \min}$$

where max and min are the maximum and minimum values of the feature in the sample data, $x$ is the raw value, and $X$ is the normalized value.
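A minimal numpy sketch of this column-wise min-max transformation; the two-column sample rows are illustrative (GDP, deaths) values taken from Table 1.

```python
import numpy as np

def min_max_normalize(x):
    """Scale each column of x into [0, 1]: X = (x - min) / (max - min)."""
    x = np.asarray(x, dtype=float)
    mn, mx = x.min(axis=0), x.max(axis=0)
    return (x - mn) / (mx - mn)

data = np.array([[79715.0, 73861.0],
                 [85195.0, 78067.0],
                 [743585.0, 63093.0]])   # e.g. (GDP, deaths) rows from Table 1
X = min_max_normalize(data)
print(X.min(), X.max())  # → 0.0 1.0
```

Note that min and max must be taken per feature (per column), since each variable has its own scale.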

4.2. LSTM Layer Hidden Unit Number

There is no clear theoretical guidance for determining the number of nodes in the hidden layer. In general, the following empirical formula is used to select the number of nodes:

$$N = \sqrt{n + m} + a$$

where $N$ is the number of hidden nodes, $n$ is the number of input nodes, $m$ is the number of output nodes, and $a$ is a constant between 1 and 10.

In this paper, there are 13 input nodes and 1 output node, so by this formula the number of hidden nodes lies between 5 and 13. We tried each candidate hidden layer size with a single LSTM layer and judged the deviation by the error rate and root mean square error in order to select the number of hidden nodes.
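The candidate range quoted here (5 to 13) follows from enumerating the integers the formula can produce; a small sketch, assuming the interval endpoints are rounded inward to integers:

```python
import math

def hidden_node_range(n_inputs, n_outputs, a_min=1, a_max=10):
    """Integer candidates for N = sqrt(n + m) + a with a in [a_min, a_max]."""
    base = math.sqrt(n_inputs + n_outputs)
    return list(range(math.ceil(base + a_min), math.floor(base + a_max) + 1))

print(hidden_node_range(13, 1))  # → [5, 6, 7, 8, 9, 10, 11, 12, 13]
```

With sqrt(13 + 1) ≈ 3.74, the interval [4.74, 13.74] contains exactly the integers 5 through 13, matching the sizes tried in Table 3.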

The experimental results of the test set show that the LSTM model using 11 hidden nodes has the smallest RMSE value and the best prediction effect. The detailed error rate and root mean square error results of the test set are shown in Table 3.


Hide_size | E (2013) | E (2014) | E (2015) | E (2016) | RMSE

5 | −0.0053 | 0.0085 | −0.0001 | −0.0963 | 3052.086
6 | −0.0120 | 0.0111 | −0.0077 | −0.1138 | 3629.307
7 | −0.0004 | 0.0059 | −0.0054 | −0.1021 | 3228.962
8 | −0.0035 | 0.0141 | 0.0041 | −0.0941 | 3002.066
9 | 0.0067 | 0.0189 | 0.0187 | −0.0743 | 2476.743
10 | 0.0084 | −0.0028 | −0.0100 | −0.1025 | 3256.069
11 | 0.0122 | 0.0031 | 0.0073 | −0.0742 | 2378.019
12 | 0.0094 | −0.0092 | −0.0169 | −0.1093 | 3505.351
13 | 0.0055 | −0.0016 | −0.0040 | −0.1008 | 3186.757

4.3. LSTM Layer Depth

Since there are only 19 records in this example, too deep a model will overfit the data. The experiment compares models with 1–5 LSTM layers, each layer using 11 hidden nodes. After training, model performance is judged by the root mean square logarithmic error and coefficient of determination computed over all records. The fitting results are shown in Table 4.


Layer number | 1 | 2 | 3 | 4 | 5

RMSLE | 0.0333 | 0.0267 | 0.0301 | 0.0314 | 0.0356
R-square | 0.9747 | 0.9843 | 0.9804 | 0.9822 | 0.9810

The smaller the RMSLE, the better the fitting effect; the closer the R-square is to 1, the better the variables explain y and the better the model fits the data. According to the results in Table 4, the 2-layer LSTM model has the best fitting ability.

4.4. GBRT Layer Regularization

The regularized objective is as follows:

$$\mathcal{L} = \sum_{i} l(\hat{y}_i, y_i) + \sum_{k} \Omega(f_k), \qquad \Omega(f) = \gamma T + \frac{1}{2}\lambda \lVert w \rVert^2$$

Here, $l$ is a differentiable convex loss function that measures the difference between the prediction $\hat{y}_i$ and the target $y_i$; the second term $\Omega$ penalizes the complexity of the model (i.e., of the regression tree functions), with $T$ the number of leaves and $w$ the leaf weights. The additional regularization term helps smooth the final learnt weights and avoid overfitting.
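A small numpy sketch of this penalized objective with squared-error loss; the γ and λ values and the toy leaf weights are illustrative assumptions, not fitted quantities.

```python
import numpy as np

def tree_complexity(leaf_weights, gamma=1.0, lam=1.0):
    """Penalty for one regression tree: Omega(f) = gamma*T + 0.5*lambda*||w||^2,
    where T is the number of leaves and w the vector of leaf weights."""
    w = np.asarray(leaf_weights, dtype=float)
    return gamma * w.size + 0.5 * lam * (w ** 2).sum()

def regularized_objective(y_true, y_pred, trees, gamma=1.0, lam=1.0):
    """Squared-error loss plus the complexity penalty summed over all trees."""
    loss = np.sum((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    return loss + sum(tree_complexity(w, gamma, lam) for w in trees)

# Two hypothetical trees with 3 and 2 leaves, respectively
obj = regularized_objective([1.0, 2.0], [1.1, 1.9],
                            [[0.5, -0.2, 0.1], [0.3, -0.3]])
print(obj)  # loss 0.02 + penalties 3.15 and 2.09 → 5.26
```

Raising γ penalizes trees with many leaves, while raising λ shrinks the leaf scores themselves; both push the ensemble toward smoother predictions.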

4.5. Hyperparameters of LSTM-GBRT Model

The hyperparameters of the LSTM layer include the number of network layers, the number of hidden cells in the layer, the learning rate, and the optimizer type, and the parameter settings are shown in Table 5.


Hyperparameter | Value

Learning rate | 0.02
LSTM layer depth | 2
LSTM layer hidden units | 11
Optimizer | Adam

The hyperparameters of the GBRT layer include the learning rate, the number of estimators, the maximum depth of the trees, the minimum number of samples required to split a node, the minimum number of samples required at a leaf node, and the loss function. This paper uses GridSearchCV to automatically find the optimal hyperparameters. The final parameter settings are shown in Table 6.


Hyperparameter | Value

Learning rate | 0.2
Maximum tree depth | 4
min_samples_split | 3
min_samples_leaf | 1
Number of estimators | 140
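A sketch of this search using scikit-learn's GridSearchCV with GradientBoostingRegressor, over a reduced grid around the Table 6 values. The synthetic data stand in for the LSTM-layer features, and the grid itself is an illustrative subset, not the paper's full search space.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Tiny synthetic regression problem standing in for the LSTM-layer features
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(0, 0.1, 40)

# Small grid over the hyperparameters listed in Table 6
param_grid = {
    "learning_rate": [0.1, 0.2],
    "max_depth": [2, 4],
    "n_estimators": [50, 140],
}
search = GridSearchCV(
    GradientBoostingRegressor(min_samples_split=3, min_samples_leaf=1,
                              random_state=0),
    param_grid, cv=3, scoring="neg_root_mean_squared_error",
)
search.fit(X, y)   # exhaustively evaluates every grid point by cross-validation
print(search.best_params_)
```

GridSearchCV refits the best configuration on the full data by default, so `search.best_estimator_` is ready for prediction afterwards.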

5. Comparative Analysis of Experiments

5.1. Experimental Environment

The experimental environment is a TOSHIBA Satellite S40-A laptop with an Intel(R) Core(TM) i3-3217U CPU at 1.80 GHz, 10 GB of RAM, and Windows 10 Enterprise 2016 LTSB. Development used the PyCharm IDE with Python 3.5, the LSTM and other neural network models provided by Keras, and the GBRT model provided by scikit-learn.

5.2. Experimental Design and Analysis of Results

The experiments cover traditional regression models, neural network models, and ensemble models: multivariate nonlinear regression (MUL), the BP neural network (BP), the long short-term memory neural network (LSTM), the gradient boosted regression trees model (GBRT), and the LSTM-GBRT model. The 15 samples from 1998 to 2012 were used as the training set, and the four samples from 2013 to 2016 as the test set; each year's data serve as the input sample for predicting the number of traffic accident deaths in the following year. Figure 4 is a trend chart of actual traffic accident deaths from 1998 to 2016.
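The train/test construction described here, pairing each year's inputs with the next year's death toll and splitting by year, can be sketched as follows; the feature values are random placeholders for the normalized Table 1 columns.

```python
import numpy as np

def make_lagged_pairs(years, features, deaths):
    """Pair each year's 13 input variables with the NEXT year's death toll,
    as in Section 5.2 (previous year's data predicts the coming year)."""
    X = features[:-1]            # inputs: years t
    y = deaths[1:]               # targets: years t+1
    target_years = years[1:]
    return X, y, target_years

# Hypothetical arrays: one row per year, 1997-2016
years = np.arange(1997, 2017)
features = np.random.default_rng(3).normal(size=(20, 13))
deaths = np.linspace(73861.0, 63093.0, 20)

X, y, ty = make_lagged_pairs(years, features, deaths)
train = (ty >= 1998) & (ty <= 2012)   # 15 training samples
test = (ty >= 2013) & (ty <= 2016)    # 4 test samples
print(train.sum(), test.sum())  # → 15 4
```

The one-year lag is why 20 years of records yield only 19 usable samples, the count quoted in Section 4.3.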

After experimental training, the prediction results of each model in the test set are shown in Table 7.


Year | True value | P (LSTM) | P (MUL) | P (BP) | P (GBRT) | P (LSTM-GBRT)

2013 | 58539 | 59000.99 | 56518.18 | 60185.61 | 59998.38 | 58553.24
2014 | 58523 | 58738.28 | 53302.67 | 60485.92 | 61364.23 | 58553.24
2015 | 58022 | 58209.44 | 52670.03 | 60221.46 | 61364.23 | 58553.24
2016 | 63093 | 57777.69 | 51228.89 | 59990.11 | 61364.23 | 58553.24

The test-set predictions show that the BP neural network and MUL regression models exhibit no obvious regularity, and their prediction accuracy is not high.

The accuracy of LSTM in 2013, 2014, and 2015 was extremely high, while the deviation in 2016 suddenly increased. Analyzing the trend of actual deaths in Figure 4, 2016 is the inflection point of the period, and the trend of the first three years is consistent with that of the training data. This indicates that LSTM predicts excellently while the trend continues but degrades suddenly at a trend inflection point; it also confirms that the LSTM model does learn the time-dependent information in the data.

The predictions of the GBRT and LSTM-GBRT models did not fluctuate much across samples, and their overall prediction remained stable. Many predicted values of the two models coincide: the base model of GBRT is a regression tree, and the data fluctuations in the 2013–2016 test set are small. In addition, the 2015–2016 results move away from the real data because the LSTM layer of the LSTM-GBRT model learned time-dependent information, yielding poor prediction at the trend inflection point.

Figure 5 shows the actual death toll in 1998–2016 and the fitted prediction results for each model.

After examining the prediction results of each model, we evaluate how well each model fits all the data. In addition to the performance indicators in Section 3.4, we add the model training time for comparison. The performance indicators are shown in Table 8.


Indicator | MUL | BP | LSTM | GBRT | LSTM-GBRT

RMSLE | 0.0665 | 0.0431 | 0.0267 | 0.0189 | 0.0172
R-square | 0.9372 | 0.9543 | 0.9843 | 0.9961 | 0.9967
Train time (s) | 0.1851 | 5.4276 | 6.6909 | 0.1296 | 7.4275

The performance indicators in Table 8 show that the LSTM-GBRT model has the smallest RMSLE, the best fitting effect, and an R-square closest to 1, meaning its variables have the strongest explanatory power for the predicted value, but its training time is the longest. The GBRT model has the shortest training time, and its prediction performance is good though slightly below the LSTM-GBRT model. The LSTM neural network performs below the GBRT model, and the MUL regression and BP neural network models perform poorly.

In terms of training time, the MUL regression and GBRT models are the fastest because they are essentially linear combinations over the data; the LSTM-GBRT model is the slowest, and the LSTM training time is very close to that of the BP neural network. The neural network models take obviously longer than the others because they must construct and train a complex network structure.

5.3. Robustness Analysis

The occurrence of traffic accidents is influenced by many factors; a predictive model that remains stable under complex and variable conditions has better robustness.

When analyzing the robustness of the model in this experiment, two aspects should be considered: first, internal factors, i.e., whether there are abnormal fluctuations in the training data; second, external factors, i.e., policies at the social level that promote or inhibit the predicted quantity. For both, the core lies in the data: external factors act indirectly by influencing the data used for training, which in turn affects the prediction. External factors are the harder ones for the model to control.

In this case, the model uses annual data; policy factors act over short periods and cause little fluctuation in such data, so robustness is good. When the model uses finer-grained data, the influence of data fluctuation increases. First, anomalous data should be examined visually, the distribution of each variable observed, and uneven data transformed, for example with a log function. Second, abnormal variables can be divided into two or more groups; after correlation analysis, separate models are built to train and predict each group, and the predictions of the multiple models are then combined. Training models for specific data classes can also improve prediction accuracy and thus enhance the robustness of the model.
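The log-transform step suggested here can be sketched in numpy; the skewed sample values are hypothetical.

```python
import numpy as np

# Hypothetical skewed variable (e.g., a traffic count spanning orders of magnitude)
x = np.array([1.2e5, 3.4e5, 9.8e5, 2.1e6, 3.8e6])
x_log = np.log(x)   # compress the range before training, as suggested above

print(x.max() - x.min(), x_log.max() - x_log.min())
```

The transform preserves ordering while drastically compressing the spread, which reduces the influence of extreme fluctuations on training.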

6. Conclusion

The prediction of traffic accidents is of great significance. The future traffic accident trend forecasting work can help the traffic management department to grasp the trend dynamics in time, discover the rules of traffic accidents, formulate laws and regulations according to the rules, make scientific decisions, and construct the traffic system reasonably.

This paper proposes a road traffic accident prediction model based on LSTM-GBRT. Compared with the traditional regression model, the traditional BP neural network, the LSTM neural network, and the GBRT model, the experimental results show that the LSTM-GBRT model fits the data best and its variables explain the predicted value best. The model predicts the trend of the road traffic safety level well and can provide more accurate forecast data to traffic management departments, helping them better grasp the state of traffic safety.

The model proposed in this paper also has defects. (1) Data collection: model training lacks relevant data on environmental factors. Road traffic accidents are highly random and affected by many factors; environmental data are spatiotemporal and difficult to collect, and annual accident data are hard to quantify, so weather and environmental factors are missing from the training data. (2) Inflection point prediction: since a trend inflection point is unlikely to be discovered by the model in advance, the forecasting ability at future trend inflection points is poor.

This paper takes China's annual traffic accident data as the research object; the proposed prediction task is relatively macroscopic, and the predictability of microdata needs further experiments. We considered adding more relevant features, but for macrodata forecasting such features are difficult to obtain or quantify. In future work on microlevel traffic accident data, more features should be considered.

Data Availability

The raw data we used were official open data published by the UK Department of Transportation, and our experimental data were filtered from raw data online available at https://data.gov.uk/dataset/road-accidents-safety-data. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was funded by the Xinjiang Uygur Autonomous Region Natural Science Fund Project “Research on Highway VANET Early Warning Information Broadcast Transmission Mechanism” (2017D01C042).

References

  1. United Nations, Transforming Our World: The 2030 Agenda for Sustainable Development, 2015, https://sustainabledevelopment.un.org/post2015/transformingourworld.
  2. F. La Torre, M. Meocci, L. Domenichini, V. Branzi, N. Tanzi, and A. Paliotto, “Development of an accident prediction model for Italian freeways,” Accident Analysis & Prevention, vol. 124, pp. 1–11, 2019.
  3. M. Taamneh, S. Alkheder, and S. Taamneh, “Data-mining techniques for traffic accident modeling and prediction in the United Arab Emirates,” Journal of Transportation Safety & Security, vol. 9, no. 2, pp. 146–166, 2017.
  4. F. Zong, H. G. Xu, and H. Y. Zhang, “Prediction for traffic accident severity: comparing the Bayesian network and regression models,” Mathematical Problems in Engineering, vol. 2013, Article ID 475194, 9 pages, 2013.
  5. J. L. Deng, “Control problems of grey systems,” Systems & Control Letters, vol. 1, no. 5, pp. 288–294, 1982.
  6. G. Yannis, C. Antoniou, and E. Papadimitriou, “Autoregressive nonlinear time-series modeling of traffic fatalities in Europe,” European Transport Research Review, vol. 3, no. 3, pp. 113–127, 2011.
  7. S. Kumar and D. Toshniwal, “A novel framework to analyze road accident time series data,” Journal of Big Data, vol. 3, no. 1, p. 8, 2016.
  8. C. C. Ihueze and U. O. Onwurah, “Road traffic accidents prediction modelling: an analysis of Anambra State, Nigeria,” Accident Analysis & Prevention, vol. 112, pp. 21–29, 2018.
  9. Y. Shi, Y. Lin, Y. Zou, L. Jing, and L. Wu, “The prediction model on Chinese traffic deaths based on the grey topology,” Mathematics in Practice and Theory, vol. 43, no. 20, pp. 110–116, 2013.
  10. R. S. Hosse, U. Becker, and H. Manz, “Grey systems theory time series prediction applied to road traffic safety in Germany,” IFAC-PapersOnLine, vol. 49, no. 3, pp. 231–236, 2016.
  11. S. B. Liu and C. W. Wu, “Road traffic accident forecast based on optimized grey Verhulst model,” in Proceedings of the 2016 Joint International Information Technology, Mechanical and Electronic Engineering Conference, Xi'an, China, October 2016.
  12. L. Zhao, H. Xu, and H. Cheng, “Road traffic accidents prediction based on optimal weighted combined model,” Computer Engineering & Applications, 2013.
  13. M. He and X. C. Guo, “The application of BP neural network principal component analysis in the forecasting of the road traffic accident,” in Proceedings of the Second International Conference on Intelligent Computation Technology and Automation, Zhangjiajie, China, October 2009.
  14. L. Hu, T. Zhang, F. Guo, and Z. Chen, “Traffic accident split rate of vehicle types prediction and prevention strategies study based on gray BP neural network,” Journal of Wuhan University of Technology, 2018.
  15. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  16. J. H. Friedman, “Stochastic gradient boosting,” Computational Statistics & Data Analysis, vol. 38, no. 4, pp. 367–378, 2002.
  17. H. Wang, A. Parrish, R. K. Smith, and S. Vrbsky, “Variable selection and ranking for analyzing automobile traffic accident data,” in Proceedings of the 2005 ACM Symposium on Applied Computing, Santa Fe, NM, USA, December 2005.

Copyright © 2020 Zhihao Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

