Abstract

As a key parameter describing vehicle-following characteristics, distance headway (DHW) plays an essential role in microscopic traffic flow simulation, traffic control, and traffic safety warning. However, the randomness, nonlinearity, and correlation of DHW data make it difficult to construct DHW prediction models. Moreover, few studies have considered the time correlation between the historical DHW and the target DHW. To address these problems, a DHW prediction model, named the EB-GRA-TCN model, is proposed in this paper by integrating entropy-based grey relation analysis (EB-GRA) and a temporal convolutional network (TCN). In the model, the EB-GRA is adopted to calculate the correlation between the target DHW and the historical DHW sequences, and the DHW data with high correlation are dynamically selected as the optimal input of the prediction model. Then, the TCN algorithm is used to train the DHW prediction model. The TCN architecture integrates the advantages of the recurrent neural network (RNN) and the convolutional neural network (CNN) and can make full use of historical DHW information. In the experiment, DHW data from the Hefei Expressway are used to train the EB-GRA-TCN model. The prediction results showed that the average root mean square error (RMSE) and mean absolute error (MAE) of the proposed model were 0.115 and 0.090, respectively, over prediction steps of 5, 10, and 15. Compared with the autoregressive integrated moving average (ARIMA), support vector machine (SVM), RNN, long short-term memory (LSTM), and TCN models, the EB-GRA-TCN model achieved the best prediction accuracy. The results indicate that the EB-GRA-TCN model provides good predictive performance and can support road traffic control and traffic safety warning.

1. Introduction

The car-following (CF) model describes the longitudinal interaction between a following vehicle and its leader and plays an essential role in microscopic traffic flow simulation, traffic flow parameter prediction, and traffic safety. The distance headway is an important parameter in the CF model and reflects the longitudinal positional relationship between the leading and following vehicles. Predicting the distance headway therefore has practical application value. For traffic operation, the distance headway distribution of all vehicles on a road can reflect the overall operational state of traffic flow [1, 2], thus providing data support for formulating traffic control and guidance strategies [3, 4]. From the perspective of traffic safety, timely and accurate distance headway prediction can provide safety warnings for vehicles and help avoid rear-end collisions [5–7].

A common approach is to construct a mathematical model of distance headway from vehicle motion characteristics, such as the Pipes model [8], the collision avoidance (CA) model [9], the SPACES model [10], and the INTARS model [11]. Mathematical methods provide an explicit calculation formula for the distance headway and yield promising prediction results, so they have been widely used. However, these methods make many assumptions about drivers' behavior characteristics and may not be applicable to all countries or to different types of drivers.

With the development of artificial intelligence technology, data-driven distance headway prediction models have been developed. These models mine the critical factors affecting distance headway from large amounts of data and use statistical analysis and machine learning to train the prediction model. Data-driven methods do not require many assumptions and are applicable to many experimental scenarios. Common data-driven distance headway prediction models include the autoregressive integrated moving average (ARIMA) [12], support vector machine (SVM) [13], and long short-term memory (LSTM) [14] models.

Distance headway prediction is essentially a time series prediction problem. Existing time series prediction algorithms mainly include the autoregressive moving average (ARMA) [15], ARIMA [16], seasonal ARIMA [17], artificial neural network (ANN) [18], SVM [19, 20], K-nearest neighbor (KNN) [21], and gradient boosting machine (GBM) [22] models. More recently, time series prediction methods based on deep learning have emerged, such as the recurrent neural network (RNN) [23], LSTM [24, 25], gated recurrent unit (GRU) [26], WaveNet [27–30], and transformer [31–34].

Due to the randomness, nonlinearity, and correlation of distance headway, constructing an accurate distance headway prediction model is difficult. To the authors' knowledge, existing studies have generally selected a fixed length of historical distance headway as the model input, and few have considered the correlation between the historical distance headway data and the target distance headway. As a result, the historical data that are most strongly correlated with the target distance headway may not be captured effectively, which degrades the overall prediction performance of the model. Some correlation analysis methods, such as the autocorrelation function (ACF) and the Ljung-Box (LB) test, could be used in distance headway prediction. However, these methods assume that the time series is linearly correlated, whereas there is no strictly linear correlation between the historical and target distance headway series. Therefore, linear correlation analysis may not be suitable for distance headway series. The entropy-based grey relation analysis (EB-GRA) [35, 36] combines the grey correlation degree with the equilibrium degree by introducing grey entropy, overcoming the limitation that common correlation analysis methods are unsuitable for nonlinear relationships. The EB-GRA has been applied in many fields and achieved good results [37–39].

In addition, for the prediction algorithm, the existing studies usually used common time series prediction algorithms, such as RNN and LSTM. Nevertheless, RNN and LSTM may have the problem of excessive memory load when training complex networks. The temporal convolutional network (TCN) algorithm simplifies the network structure and reduces memory requirements by combining causal convolutions and dilated convolutions.

To accurately predict the distance headway, a distance headway prediction model is developed in this paper by coupling the EB-GRA and the TCN. In this model, the EB-GRA is first used to analyze the temporal correlation between the historical distance headway and the target distance headway. Then, the historical distance headway data with a high correlation degree with the target distance headway are selected as the input of the prediction model, and the TCN algorithm and real vehicle trajectory data are used to train it. The resulting model can be used to predict the distance headway of vehicles on the road and provide decision support for road control, traffic flow guidance, and traffic safety warnings.

The rest of this paper is arranged as follows. Section 2 summarizes the related research. Section 3 presents the methods, including the distance headway prediction framework, optimal lag step selection based on the EB-GRA, and distance headway prediction based on the TCN. Section 4 describes the experiment, including the experimental data and preprocessing, the experimental evaluation index selection, and the experimental results and analysis. Section 5 presents the conclusion and discussion.

2. Related Research

As an important parameter in microscopic traffic flow, distance headway plays an essential role in microscopic traffic flow simulation, traffic control and guidance, and traffic safety warnings. Existing research on distance headway prediction can be roughly divided into two types: mathematical models and data-driven models.

The mathematical models for distance headway prediction are constructed based on driving habits and vehicle motion characteristics. Pipes [8] first proposed a mathematical distance headway prediction model, which considered the speed and length of the following vehicle; the formulation and physical meaning of the Pipes model are simple. Gipps [9] established the CA model according to the minimum safe distance between vehicles, which considers the influence of the speeds of the following and leading vehicles on the distance headway. On the basis of the CA model, the SPACES model [10] and the INTARS model [11] were developed. Mathematical methods can give an explicit functional relationship between the distance headway and its influencing factors. However, they make many assumptions about drivers' behavior characteristics, some of which may not apply to drivers in other countries. In addition, as model complexity increases, calibrating the model parameters becomes difficult.

With the development of artificial intelligence technology, data-driven distance headway prediction models have emerged. Data-driven algorithms do not require specific formulas and instead learn the headway prediction model from large amounts of data. For example, Avr et al. [12] selected 30 frames of distance headway data before the target distance headway as the model input and utilized ARIMA to predict the distance headway at frame 31. Theja and Vanajakshi [13] took the distance headway data from the 1st to the 5th minute as input and adopted the SVM to predict the distance headway at the 6th and 10th minutes.

Distance headway prediction is a time series prediction problem. The ARMA model [15] is a classic time series prediction method based on statistical analysis and is designed for stationary time series. If the original time series does not satisfy stationarity, an appropriate data processing method should be used to transform it into a stationary sequence. Then, the autocorrelation and partial autocorrelation functions are calculated, and the Akaike information criterion (AIC) or Bayesian information criterion (BIC) is used to identify the model order and estimate its parameters. Finally, the prediction model with the best fit is selected. With the gradual application of the ARMA model in time series prediction, new prediction methods have appeared, such as the ARIMA model [16] and the SARIMA model [17].

With the popularization of big data technology, data-driven methods for time series prediction have appeared. The data-driven prediction method uses a large amount of real vehicle trajectory data to construct the relationship function between historical and future time series. Common machine learning methods include the ANN [18], SVM [19, 20], KNN [21], and GBM [22]. With the development of deep learning algorithms, short-term traffic flow prediction algorithms based on deep learning have emerged. The RNN [23] is a typical deep learning algorithm for short-term time series prediction. Additionally, some variants of the RNN, such as the LSTM [24, 25] and GRU [26], are also applied to short-term time series prediction and have achieved good prediction results.

However, the RNN, LSTM, and other recurrent time series prediction algorithms cannot efficiently capture the long-term dependence of time series. To address this limitation, the DeepMind team at Google proposed the WaveNet framework [27] in 2016. The core of WaveNet is causal convolutions and dilated convolutions, which can capture long-term dependencies without causing a rapid increase in model complexity. However, the operation speed of WaveNet is slow due to its sample-level autoregressive nature. To address this problem, Tacotron [28, 29] adopted an end-to-end architecture to improve the operation speed. Later, the TCN algorithm [30] simplified the WaveNet architecture by removing skip connections across layers, conditioning, and context stacking. The TCN has the following advantages. First, the TCN supports parallelism: unlike the RNN, which must process sequence data step by step, the TCN can process the input sequence as a whole. Second, the network architecture of the TCN differs from recursive architectures: its backpropagation path is decoupled from the time direction of the sequence, which avoids the gradient explosion/vanishing problem. Third, the TCN has low training memory requirements: the LSTM needs large memory to store the partial results of its cell gates when processing long sequences, whereas the filters in the TCN are shared across layers and the backpropagation path depends only on the network depth. Finally, the dilated convolutional layers give the TCN a flexible receptive field. Because of its high parallelism, low memory requirement, and flexible receptive field, the TCN algorithm is widely used in fields such as speech generation and stock price prediction and has achieved strong prediction performance. In 2017, the transformer architecture [31] introduced a new approach to time series prediction; its multihead attention enables the transformer to model long-term and short-term temporal features simultaneously. In addition, several variants of the transformer have been proposed, such as the LogSparse transformer [32], the temporal fusion transformer (TFT) [33], and the Informer [34]. Both WaveNet and the transformer are supervised learning methods and may require a large amount of data when training complex networks.

In conclusion, the existing distance headway prediction methods usually use a fixed length of historical distance headway as the model input and do not consider the time correlation within the distance headway sequence. Among the prediction algorithms, the common RNN and LSTM methods impose a large memory load when training complex networks. To solve these problems, this study develops a distance headway prediction model by integrating the EB-GRA [35, 36] and the TCN [30]. In the model, the EB-GRA is adopted to calculate the correlation between the historical distance headway sequence and the target distance headway, and the highly correlated data are extracted as the model input. Then, the TCN, which combines the advantages of the RNN and CNN, is used to train the distance headway prediction model. The combined prediction model can effectively improve the prediction performance.

3. Methods

The methods section includes the following three topics: the distance headway prediction framework, the optimal lag step selection based on the EB-GRA, and the distance headway prediction based on the TCN.

3.1. Distance Headway Prediction Framework

To extract the historical distance headway data that are significantly correlated with the predicted distance headway and to exploit the advantages of deep learning in time series prediction, a short-term distance headway prediction framework, EB-GRA-TCN, is established. The specific prediction framework is shown in Figure 1 [30]. The EB-GRA-TCN framework includes three parts: distance headway data preprocessing, optimal lag step selection based on the EB-GRA, and distance headway prediction based on the TCN. First, the distance headway data are extracted from the vehicle trajectory data, and the original distance headway data are smoothed to remove the influence of data noise on the overall data quality. Second, the processed distance headway time series are divided into alternative sequences and target sequences: the alternative sequences are the candidate historical time series, and the target sequence is the distance headway to be predicted. The EB-GRA is then used to calculate the grey relation grade (GRG) and determine the optimal lag step. Finally, based on the optimal lag step, the optimal distance headway input sequence is dynamically selected from the alternative sequences, and the TCN algorithm is used to predict the target distance headway sequence. The TCN introduces dilated causal convolutions and can alleviate the gradient explosion problem caused by increasing the number of network layers in time series modeling.
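
To illustrate how the framework turns the selected lag steps into model input, the following Python sketch builds supervised samples whose features are the historical DHW values at the chosen lags and whose label is the DHW several steps ahead. The lag list and prediction step here are illustrative assumptions, not the calibrated values used in the paper.

import numpy as np

dhw = np.cumsum(np.random.rand(500)) + 20.0   # stand-in for a smoothed distance headway series
selected_lags = [1, 2, 3, 5, 8]               # lags with the highest GRG (assumed, for illustration)
J = 5                                         # prediction step (assumed)

max_lag = max(selected_lags)
t_idx = np.arange(max_lag, len(dhw) - J)
X = np.stack([dhw[t_idx - lag] for lag in selected_lags], axis=1)  # dynamically selected inputs
y = dhw[t_idx + J]                                                 # target distance headway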

3.2. Optimal Lag Step Selection Based on EB-GRA

Time series have lag correlations. To extract the historical distance headway sequence most relevant to the target distance headway, the EB-GRA [35, 36] is used to compute the grey relation grade between the historical and the target distance headway. Then, the distance headway time series with a high grey relation grade are selected as the input of the distance headway prediction model. The EB-GRA can extract the factors that are highly correlated with the target value and is widely used in economics [37], benefit evaluation [38], planning [39], and other fields.

Let $x_k(t)$ represent the distance headway of vehicle $k$ at time $t$. Suppose that $x_k(t)$ is affected by the distance headway of the previous $I$ time periods, that is, $x_k(t-1), x_k(t-2), \ldots, x_k(t-I)$. We use $X_i = \{x_i(j), j = 1, 2, \ldots, J\}$ to represent the distance headway time series lagged by $i$ steps and $X_0 = \{x_0(j), j = 1, 2, \ldots, J\}$ to represent the target sequence. The term $i$ represents the hysteresis (lag) step, $i = 1, 2, \ldots, I$, and $J$ is the prediction step length, $j = 1, 2, \ldots, J$. Then, the grey relational coefficient between $x_0(j)$ and $x_i(j)$ is
$$\xi\bigl(x_0(j), x_i(j)\bigr) = \frac{\min_i \min_j \lvert x_0(j) - x_i(j)\rvert + \rho \max_i \max_j \lvert x_0(j) - x_i(j)\rvert}{\lvert x_0(j) - x_i(j)\rvert + \rho \max_i \max_j \lvert x_0(j) - x_i(j)\rvert},$$
where $\xi(x_0(j), x_i(j))$ represents the correlation between $x_0(j)$ and $x_i(j)$, and $\rho$ is the distinguishing coefficient, $\rho \in (0, 1)$. The distinguishing coefficient makes the target sequence and the alternative sequences better distinguishable.

However, the sum of the grey relational coefficients between the target sequence and an alternative sequence is arbitrary. To satisfy the principle of grey entropy, it is necessary to transform the grey relational coefficients into the grey correlation density $p_i(j)$, calculated as
$$p_i(j) = \frac{\xi\bigl(x_0(j), x_i(j)\bigr)}{\sum_{j=1}^{J} \xi\bigl(x_0(j), x_i(j)\bigr)},$$
where $p_i(j) \ge 0$ and $\sum_{j=1}^{J} p_i(j) = 1$.

Then, the correlation degree of the grey relational coefficients is calculated from the grey correlation density, that is, the grey correlation entropy $H(p_i)$, as shown in the following equation:
$$H(p_i) = -\sum_{j=1}^{J} p_i(j) \ln p_i(j),$$
where $H(p_i)$ is the grey entropy between the target sequence and the $i$-th alternative sequence, and $H_{\max} = \ln J$ is the maximum grey entropy.

Finally, the GRG can be obtained by multiplying the normalized grey correlation entropy (the equilibrium degree) by the average grey relational coefficient of the alternative sequence. The GRG represents the correlation degree between the target distance headway and the historical distance headway sequence and is calculated as
$$\mathrm{GRG}_i = \frac{H(p_i)}{H_{\max}} \cdot \frac{1}{J} \sum_{j=1}^{J} \xi\bigl(x_0(j), x_i(j)\bigr),$$
where $\mathrm{GRG}_i$ represents the correlation degree between the target distance headway sequence and the $i$-th alternative distance headway sequence. The higher the value of $\mathrm{GRG}_i$, the higher the correlation between the two sequences.
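
The EB-GRA steps above can be sketched in Python as follows. This is a minimal illustration under simplifying assumptions: the min/max terms are taken over each target-alternative pair rather than over all alternatives, the distinguishing coefficient is fixed at 0.5, and the data are synthetic; it is not the authors' exact implementation.

import numpy as np

def grey_relational_coefficients(x0, xi, rho=0.5):
    # Grey relational coefficients between the target sequence x0 and one lagged sequence xi.
    diff = np.abs(np.asarray(x0, float) - np.asarray(xi, float))
    d_min, d_max = diff.min(), diff.max()
    return (d_min + rho * d_max) / (diff + rho * d_max)

def grey_relation_grade(x0, xi, rho=0.5):
    # Entropy-based GRG: equilibrium degree times the mean grey relational coefficient.
    coeff = grey_relational_coefficients(x0, xi, rho)
    density = coeff / coeff.sum()                  # grey correlation density p_i(j)
    entropy = -(density * np.log(density)).sum()   # grey correlation entropy H(p_i)
    balance = entropy / np.log(len(coeff))         # H(p_i) / H_max, with H_max = ln J
    return balance * coeff.mean()

# Rank candidate lag steps i = 1, ..., I by their GRG against the target sequence.
dhw = np.cumsum(np.random.rand(200)) + 20.0        # synthetic distance headway series
I = 50
target = dhw[I:]
grg = {i: grey_relation_grade(target, dhw[I - i:len(dhw) - i]) for i in range(1, I + 1)}
ranked_lags = sorted(grg, key=grg.get, reverse=True)   # most relevant lags first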

3.3. Distance Headway Prediction Based on TCN

The TCN algorithm [30] is a novel time series prediction algorithm that integrates the architectural patterns of the RNN and the CNN. The TCN can take an input sequence of any length and map it to an output sequence of the same length. Additionally, the TCN adopts causal convolutions, in which each output depends only on past observations, so there is no information leakage from the future. The TCN has achieved good performance in time series prediction and is widely used in weather prediction [40], speech recognition [41], and other fields.

The architecture of the TCN consists of the following three parts: causal convolutions, dilated convolutions, and residual connections.

3.3.1. Causal Convolutions

The CNN has achieved outstanding performance in the image processing field. However, in a classical CNN, the convolution output at a given position depends not only on past states but also on future states. For time series prediction, the future value should be inferred only from past data. Therefore, an information leakage problem arises if the classical CNN architecture is adopted directly for time series prediction.

To adapt the CNN structure to time series data, the TCN adopts the causal convolution architecture, as shown in Figure 2 [30]. Causal convolutions have two characteristics: (1) there is no information leakage, that is, the convolution result at a given time step is related only to historical observations; (2) the deeper the stack of causal convolution layers, the further back the historical data that can be traced.
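
A minimal sketch of causal convolution in Python is shown below (PyTorch is used here as an implementation choice, not necessarily the authors' toolkit): the input is padded only on the past side, so each output step depends on current and earlier inputs only, and the output length equals the input length.

import torch
import torch.nn as nn

kernel_size = 3
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=kernel_size)

x = torch.randn(1, 1, 20)                              # (batch, channels, time)
x_padded = nn.functional.pad(x, (kernel_size - 1, 0))  # pad the past only, never the future
y = conv(x_padded)                                     # y[..., t] depends on x[..., t-2 : t+1]
assert y.shape[-1] == x.shape[-1]                      # same-length output, no future leakage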

3.3.2. Dilated Convolutions

However, as the length of the traceable history increases, the network depth must grow, which increases the number of model parameters and the model complexity. The TCN therefore introduces dilated convolutions into the network architecture. As shown in Figure 3 [30], by inserting a fixed dilation interval into the convolution calculation and skipping part of the input, a long memory of the historical data can be achieved with a limited network depth.

For a one-dimensional distance headway time series $\mathbf{x} \in \mathbb{R}^{n}$ and a filter $f: \{0, 1, \ldots, k-1\} \rightarrow \mathbb{R}$, the dilated convolution at element $s$ of the sequence is defined as
$$F(s) = \sum_{i=0}^{k-1} f(i) \cdot x_{s - p \cdot i},$$
where $p$ is the dilation factor, $k$ is the filter size, and the term $s - p \cdot i$ accounts for the direction of the past.

The dilated convolutions introduce a sampling interval of fixed length between two adjacent filter taps. For a stack of such layers with filter size $k$ and dilation factors $p_l$ ($l = 1, 2, \ldots, n$), the receptive field is
$$R = 1 + (k - 1) \sum_{l=1}^{n} p_l.$$
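
The dilated causal convolution and the receptive field can be sketched directly from the formulas above. The filter weights and the doubling dilation schedule below are illustrative assumptions.

import numpy as np

def dilated_causal_conv(x, f, p):
    # F(s) = sum_i f(i) * x[s - p*i]; taps that would reach before the start of x are skipped.
    k = len(f)
    y = np.zeros_like(x, dtype=float)
    for s in range(len(x)):
        y[s] = sum(f[i] * x[s - p * i] for i in range(k) if s - p * i >= 0)
    return y

x = np.arange(16, dtype=float)                      # toy distance headway series
y = dilated_causal_conv(x, f=np.array([0.5, 0.3, 0.2]), p=2)

# Receptive field of a stack of such layers with filter size k and dilations p_l
# (doubling per layer is an assumed schedule): R = 1 + (k - 1) * sum(p_l).
k, dilations = 3, [1, 2, 4, 8]
receptive_field = 1 + (k - 1) * sum(dilations)      # = 31 time steps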

3.3.3. Residual Connections

The receptive field of the TCN depends on the network depth $n$, the filter size $k$, and the dilation factor $p$. As the network depth increases, the problems of gradient vanishing and explosion may occur. To address this, the TCN adopts residual connections to alleviate network degradation.

The residual structure makes the output $O$ the superposition of the input $D$ and its nonlinear transformation $\mathcal{F}(D)$. The output of the residual structure is calculated by the following equation:
$$O = \sigma\bigl(D + \mathcal{F}(D)\bigr),$$
where $\sigma(\cdot)$ is the activation function.
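
A compact sketch of a TCN residual block in PyTorch is given below, following the layout used in common public TCN implementations (two dilated causal convolutions wrapped by a skip connection). The channel size, kernel size, and ReLU activation are illustrative choices rather than the authors' exact configuration.

import torch
import torch.nn as nn

class Chomp1d(nn.Module):
    # Trim the right-side padding so the convolution remains causal.
    def __init__(self, chomp):
        super().__init__()
        self.chomp = chomp

    def forward(self, x):
        return x[:, :, :-self.chomp] if self.chomp > 0 else x

class ResidualBlock(nn.Module):
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        pad = (kernel_size - 1) * dilation
        self.net = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad, dilation=dilation),
            Chomp1d(pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad, dilation=dilation),
            Chomp1d(pad), nn.ReLU(),
        )

    def forward(self, x):
        # O = sigma(D + F(D)): superpose the input and its nonlinear transformation.
        return torch.relu(x + self.net(x))

block = ResidualBlock(channels=8, kernel_size=3, dilation=2)
out = block(torch.randn(1, 8, 50))   # output keeps the (batch, channel, time) shape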

4. Experiment

This section includes the experimental data and preprocessing, experimental evaluation index selection, and experimental results and analysis.

4.1. Experiment Data and Preprocessing

In the experiment, the vehicles’ trajectory data from the Hefei Expressway are selected to test the prediction performance of the EB-GRA-TCN model. As shown in Figure 4, vehicle position and speed data in the radar monitoring area are collected by roadside radar equipment installed on the Hefei Expressway. The data collection time included the following two periods: from 5:30 p.m. to 6:00 p.m. on July 28, 2021, and from 3:40 p.m. to 4:15 p.m. on July 30, 2021. The total collection time is 65 minutes, and the data collection interval is 50 ms.

Before the experiment, the original vehicle trajectory data should be smoothed. In this paper, a symmetric exponential moving average filter is selected to remove the influence of data noise. In total, the distance headway data of 913 vehicles are collected. In addition, the filtered data are resampled at a time interval of 0.1 s, which is consistent with the data interval commonly used in short-term distance headway prediction.
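
A hedged sketch of symmetric exponential moving average smoothing is given below; the half-window and decay constant are illustrative values, not the filter parameters used in this paper.

import numpy as np

def sema_smooth(x, half_window=10, delta=5.0):
    # Weight each neighbor by exp(-|t - k| / delta) within a symmetric window around t.
    x = np.asarray(x, dtype=float)
    smoothed = np.empty_like(x)
    for t in range(len(x)):
        lo, hi = max(0, t - half_window), min(len(x), t + half_window + 1)
        w = np.exp(-np.abs(np.arange(lo, hi) - t) / delta)
        smoothed[t] = np.sum(w * x[lo:hi]) / np.sum(w)
    return smoothed

raw_dhw = 20 + np.random.randn(650)    # noisy synthetic headway series sampled at 0.1 s
dhw = sema_smooth(raw_dhw)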

In addition, to verify the generalization ability of the model, the I-80 expressway dataset from the open NGSIM database is selected for verification. The I-80 dataset covers a seven-lane expressway segment and three collection periods: from 4:00 p.m. to 4:15 p.m., from 5:00 p.m. to 5:15 p.m., and from 5:15 p.m. to 5:30 p.m. The data collection interval is 0.1 s. The symmetric exponential moving average filter is also applied to the I-80 dataset to remove the influence of data noise. In this paper, 80% of the vehicle trajectory data on the Hefei Expressway are randomly selected to train the distance headway prediction model, and the distance headway data of 183 vehicles on the I-80 expressway are randomly selected to test the model performance.

4.2. Experimental Evaluation Index Selection

To guarantee that the training data and test data are disjoint, 10-fold rolling cross-validation is utilized to evaluate the model performance. The distance headway data are divided into training sets and test sets in temporal order. The RMSE and MAE are selected to evaluate the prediction performance and are calculated as
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \bigl(y_i - \hat{y}_i\bigr)^2}, \qquad \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \bigl\lvert y_i - \hat{y}_i \bigr\rvert,$$
where $y_i$ is the observed distance headway value, $\hat{y}_i$ is the predicted distance headway value, and $n$ represents the total number of samples.
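
The two metrics and the time-ordered splitting can be sketched as follows; the fold layout is an illustrative assumption rather than the exact partition used in the experiments.

import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rolling_folds(n_samples, n_folds=10):
    # Yield (train_idx, test_idx) pairs that respect temporal order: each test
    # block follows all of its training data, so the sets never overlap in time.
    fold = n_samples // (n_folds + 1)
    for i in range(1, n_folds + 1):
        yield np.arange(0, i * fold), np.arange(i * fold, (i + 1) * fold)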

4.3. Experimental Results and Analysis

The experimental results include the results of the EB-GRA-TCN model and the comparative experiments.

4.3.1. EB-GRA-TCN Model Results

Before using the TCN model to predict the distance headway, it is necessary to utilize the EB-GRA to determine the optimal lag step. We suppose that the distance headway of vehicle $k$ at time $t$ is related to the historical distance headway of the previous 50 steps (5 s); that is, the initial lag step is $I = 50$. Then, the EB-GRA is adopted to calculate the correlation degree between the historical distance headway time series and the target distance headway. In this paper, the GRG values between the target distance headway and the alternative sequences of all vehicles are calculated, as shown in Figure 5.

The results show that although there are some differences in the GRG values under different lag steps, the GRG value generally decreases as the lag step increases: the farther the historical distance headway is from the target distance headway, the weaker the time correlation. This is consistent with common knowledge; generally, the smaller the time distance between the historical and target distance headway, the higher the correlation.

Three prediction steps of $J = 5$, $J = 10$, and $J = 15$ are selected, and the iterative loss curves for the training and validation sets under the three groups of prediction steps are shown in Figure 6. As the number of iterations increases, the training loss and validation loss under the three prediction step sizes show a downward trend and finally approach stability.

The distance headway prediction results under the three groups of prediction steps are shown in Table 1. For the Hefei Expressway test set, when the prediction step size is 5, the RMSE and MAE values of the EB-GRA-TCN are 0.040 and 0.033, respectively. When the prediction step size is 10, the RMSE and MAE values are 0.117 and 0.093, respectively. When the prediction step size is 15, the RMSE and MAE values are 0.188 and 0.143, respectively. For the I-80 expressway test set, the RMSE values of the EB-GRA-TCN are 0.088, 0.287, and 0.584 under the three prediction steps, and the MAE values are 0.130, 0.508, and 1.357, respectively. It can be found that when the prediction step is 5 or 10, the prediction errors on the Hefei Expressway and I-80 expressway test sets differ little, whereas when the prediction step is 15, the prediction errors on the I-80 test set are clearly higher than those on the Hefei Expressway test set. This may be because the prediction ability of the EB-GRA-TCN model decreases as the prediction step increases.

4.3.2. Comparative Experimental Results

To demonstrate the effectiveness of the EB-GRA-TCN model, we design two groups of comparative experiments. In the first comparative experiment, the ACF and the LB test statistic are used to calculate the optimal lag step; the ACF and the LB test are two common methods of autocorrelation analysis. We then compare the prediction performance of the EB-GRA-TCN, ACF-TCN, LB-TCN, and TCN models. The results of the four models are shown in Figures 7(a) and 7(b).

From Figure 7, it can be found that as the prediction step size increases, the prediction errors of the four models all increase, but the EB-GRA-TCN model has the lowest prediction error. Additionally, among the EB-GRA-TCN, ACF-TCN, LB-TCN, and TCN models, the plain TCN model obtains the worst results. These results indicate that autocorrelation analysis can effectively extract the most relevant historical distance headway as input and improve the prediction performance of the model.

In the second comparative experiment, four common distance headway prediction models are compared in this paper. They are the ARIMA, SVM, RNN, and LSTM models. The comparative experimental results for the three groups of predicted steps are shown in Table 2. In order to verify the effect of EB-GRA in distance headway prediction, we tested the prediction performance of EB-GRA-SVM, EB-GRA-RNN, and EB-GRA-LSTM models, respectively. The experimental results of adding EB-GRA are shown in Table 3.

Based on the comparative experimental results shown in Tables 2 and 3, the EB-GRA-TCN model achieves the best prediction performance under all three groups of prediction steps. Taking the prediction step size of 5 as an example, the RMSE values of the TCN, ARIMA, SVM, RNN, and LSTM models are distributed between 0.14 and 1.64, and the MAE values are distributed between 0.12 and 1.50, whereas the RMSE and MAE of the EB-GRA-TCN model are 0.040 and 0.033, respectively. Compared with the other models, the prediction error of the EB-GRA-TCN model is the smallest.

In addition, comparing Tables 2 and 3, we can find that the prediction errors of the four models decrease after the EB-GRA is used to optimize the input of the distance headway prediction model. Comparing the TCN and EB-GRA-TCN models, the RMSE and MAE decrease by an average of 59.39% and 60.57%, respectively. For the SVM and EB-GRA-SVM models, the MAE and RMSE decrease by 43.58% and 44.46% on average. For the RNN and EB-GRA-RNN models, the RMSE and MAE decrease by 27.58% and 29.56% on average. For the LSTM and EB-GRA-LSTM models, the RMSE and MAE decrease by 40.45% and 45.39% on average. These results show that the EB-GRA can effectively improve the prediction performance by optimizing the input of the prediction model.

As shown in Figure 8, the observed and predicted distance headway of one vehicle are compared. Although the prediction error of the EB-GRA-TCN model increases with the prediction step size, the model shows high prediction accuracy overall.

Furthermore, to verify that the EB-GRA-TCN has lower training memory requirements, we test the training memory load of the EB-GRA-TCN, RNN, and LSTM models on a computer with an 8-core CPU and 16 GB of RAM, repeating the measurement 100 times. The average training memory requirements of the EB-GRA-TCN, RNN, and LSTM models are 337.9 MB, 376.8 MB, and 407.6 MB, respectively. The experimental results indicate that the training memory load of the EB-GRA-TCN is the lowest; its memory requirement is 10.32% and 17.10% lower than those of the RNN and LSTM models, respectively.

5. Conclusions and Discussion

As an essential parameter in car-following (CF) models, the distance headway (DHW) reflects the relative position between two vehicles at the microlevel, while at the macrolevel, the DHW distribution of vehicles on a road reflects the current traffic flow state. Accurate DHW prediction can provide data support for traffic signal control, vehicle guidance, and traffic safety warnings. DHW prediction is essentially a one-dimensional time series prediction problem. However, current DHW prediction methods do not consider the time correlation between the historical DHW data and the target DHW, which may affect the final prediction accuracy. To select the optimal DHW input and exploit the advantages of deep learning in prediction, a DHW prediction model combining entropy-based grey relation analysis (EB-GRA) and a temporal convolutional network (TCN), named EB-GRA-TCN, is proposed in this paper. In the model, the EB-GRA is used to calculate the correlation between the historical DHW sequence and the target DHW, and the historical DHW sequences with a high correlation with the target DHW are selected as the optimal model input. Then, the DHW prediction model is trained using real DHW data and the TCN algorithm. The experimental results showed that the EB-GRA-TCN model achieved good prediction performance over the three prediction steps, with average RMSE and MAE of 0.115 and 0.090, respectively. Compared with the ARIMA, SVM, RNN, LSTM, and TCN models, the EB-GRA-TCN model obtained the best prediction results.

In addition, the model prediction error declines when the ACF, LB test, and EB-GRA methods are used to optimize the input of the TCN algorithm. The results indicated that the correlation analysis could effectively capture the autocorrelation between the distance headway sequences and select the optimal model input for the prediction model. Furthermore, with the increase in prediction step length, the prediction errors of the EB-GRA-TCN, ARIMA, SVM, RNN, and LSTM models all increased. However, compared with the other four models, the prediction accuracy of the EB-GRA-TCN model did not significantly decrease. It indicates that the EB-GRA-TCN model still has good stability in long-term prediction.

In conclusion, this study can provide data support for traffic guidance and control and for traffic safety warnings. First, based on real-time vehicle trajectory data and the predicted distance headway, the location distribution of vehicles on the road in future time periods can be deduced. The methods in this study can thus help traffic managers anticipate road traffic operating conditions and dynamically formulate reasonable traffic guidance and control measures. In addition, predicting the distance headway can provide safety warnings for autonomous vehicles and improve the safety of autonomous driving.

However, this paper mainly utilizes the EB-GRA to optimize the input of the TCN algorithm. With the continuous development of deep learning, future work will study the coupling of attention mechanisms with novel time series prediction algorithms and apply it to distance headway prediction, which may achieve better prediction performance.

Data Availability

The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research is funded by the National Natural Science Foundation of China [51878236].