Abstract

An improved Bayesian fusion algorithm (BFA) is proposed for forecasting the blink number in a continuous video. It assumes that, at one prediction interval, the blink number is correlated with the blink numbers of only a few previous intervals. With this assumption, the weights of the component predictors in the improved BFA are calculated according to their prediction performance over only a few intervals rather than over all intervals. Therefore, compared with the conventional BFA, the improved BFA is more sensitive to disturbances in the component predictors and adjusts their weights more rapidly. To determine the most relevant intervals, the grey relation entropy-based analysis (GREBA) method is proposed, which can be used to analyze the relevancy between the historical data flows of blink number and the data flow at the current interval. Three single predictors, that is, the autoregressive integrated moving average (ARIMA), radial basis function neural network (RBFNN), and Kalman filter (KF), are designed and incorporated linearly into the BFA. Experimental results demonstrate that the improved BFA obviously outperforms the conventional BFA in both accuracy and stability; moreover, fatigue driving can be accurately warned against in advance based on the blink number forecasted by the improved BFA.

1. Introduction

Fatigue driving is one of the major causes of serious accidents in transportation. Statistics show that traffic accidents caused by driver fatigue account for about 20% of the total number of accidents and more than 40% of serious traffic accidents [1, 2]. Experts agree that the actual contribution of fatigue driving to road accidents may be much higher [3]. The frequent road traffic accidents, serious casualties, and property losses caused by fatigue driving place a heavy burden on society and families. Thus, the accurate and efficient detection of driver fatigue is very important [4].

Many methods have been developed recently for detecting driver fatigue, including the detection of physiological features such as EEG [5, 6], heart rate variability (HRV) [7, 8], and electrooculogram (EOG) [9]; the detection of facial features such as blinking [10, 11] and yawning [12, 13]; and the detection of vehicle behaviors, for instance, lane deviation [14, 15] and steering angle movement [16, 17]. Among these methods, the physiological-feature-based methods are intrusive for drivers, because they require placing detection electrodes on the skin, and drivers feel uncomfortable when the electrodes remain in contact with the skin for a long time. Vehicle-behavior-based methods are easily affected by different sizes and shapes of vehicles, as well as by different driving habits of drivers [18]. Facial-characteristic-based methods, especially blink-based detection methods using machine vision, are attracting more and more researchers [18–20]. The objective of this study is to propose a new algorithm for forecasting the blink number and to verify its effectiveness according to the forecasting results.

For fatigue driving detection methods based on eye blinking, almost all researchers pay more attention to real-time detection algorithms that can rapidly detect the open or closed state of the eyes and correctly recognize whether a driver is currently fatigued. However, such detection methods are usually unable to meet the requirement of real-time processing. Detecting and recognizing fatigue characteristics is computationally costly, especially for fusion detection methods based on multiple fatigue features [21], which require detecting and recognizing many more characteristic parameters [22]. Moreover, statistics show that if drivers' responses were only half a second faster when traffic accidents occur, sixty percent of traffic accidents could be avoided. Therefore, if fatigue driving can be warned against in advance, before traffic accidents happen, using a certain detection instrument or forecasting method, most traffic accidents can be avoided [22, 23]. Furthermore, physiological research shows that fatigue is a gradual process in which drivers pass from alertness to drowsiness [24]. Therefore, if we develop a predictor that can forecast the blink number of the driver in a given time interval according to previous time intervals, then we can recognize fatigue driving in advance and also save more time for fatigue feature extraction and detection algorithm enhancement. In a word, the advantage of using a predictor is that we can estimate the future driving state in advance according to the past and current states as long as we set appropriate time intervals. Meanwhile, we do not need to detect the driving state of drivers in real time, which saves computation time for other more important processing tasks.

Currently, many successful applications of forecasting algorithms based on a single predictor have been reported in various fields, for instance, fault diagnosis [25, 26], transportation flow forecasting [27], time-series prediction [28], and so on. Based on this, a variety of methods have been put forward for goal state prediction, including the Kalman filtering (KF) model [29], the nonparametric regression model [30], and the autoregressive integrated moving average (ARIMA) model [31]. Generally, these prediction methods can be categorized as statistical time series analysis methods, which conduct their predictions based on historical data analysis. One benefit of statistical time series analysis methods is that they can make very good predictions when the goal state varies regularly over time. However, due to their linear properties, these methods are inadequate for capturing rapid variations of the goal state. To overcome this problem, numerous studies have used machine learning methods such as artificial neural networks (ANN) [32, 33] and support vector machines (SVM) [34] as alternative predictors. A machine learning method is able to approximate a goal state of any degree of complexity without prior knowledge of problem solving. In addition, because of its ability to learn from data, it can capture the underlying relationships of the goal state even when they are not apparent.

Though the previously mentioned prediction approaches are powerful and useful for goal state prediction and can generate accurate results for certain patterns, they each have their own drawbacks, and none of them can maintain excellent prediction performance under all application conditions [35, 36]. Generally, driver fatigue is related not only to the duration of continuous driving but also to the current time period. For example, drivers are more likely to feel fatigued at 3:00–5:00 and 14:00–16:00 than at other time periods, and drivers usually do not feel fatigued until after three hours of continuous driving. Additionally, the fluctuation of the blink number of drivers from nonfatigue to fatigue is very obvious and abrupt when detected by a device or instrument. Therefore, a forecasting algorithm based on a single predictor is hardly suitable for the abrupt fluctuation of the blink number when drivers drive continuously for a long time. This occurs because the goal state generally exhibits a spatiotemporal behavior characterized by irregular randomness, and it is very difficult for a single prediction method to capture such a disturbed pattern.

In view of these deficiencies, some researchers have turned to multivariate modeling, in which models are developed by combining multiple methods to take advantage of the merits of each algorithm. A feasible fusion method that can effectively combine the predictions of single predictors is the Bayesian fusion algorithm (BFA) proposed by Petridis et al. [37]. The BFA generates a prediction by a weighted fusion of the forecasts of all its component predictors based on posterior probabilities and the Bayesian rule. However, the BFA pays no attention to the relevance between the historical data flows and the current data flow. It assumes that, at a particular interval, the weights of the component predictors in the BFA depend on the cumulative prediction performance over all past intervals. This may make the prediction quite impervious to greatly fluctuating prediction accuracy of the component predictors. To overcome this problem, an improved BFA is proposed in this paper. The underlying assumption is that the data flow of blink number at a particular prediction interval is affected by the data flows of only a few of the previous time intervals, which have a comparatively higher relevancy with it. Based on this assumption, only the prediction errors of a few intervals need to be considered when calculating the weights of the component predictors.

Simulation experiments on fatigue driving show that the improved BFA is more sensitive to the fluctuation of the component predictors and can adjust their weights more rapidly than the conventional BFA, while also achieving better forecasting accuracy and stability. Therefore, it can be used to judge in advance whether drivers are fatigued according to the blink number forecasted for the next time period.

2. Shortcoming of Conventional BFA

The conventional Bayesian fusion algorithm was originally proposed by Petridis et al. [37]. Its general idea is summarized as follows.

Let $y_t$ denote the actual blink number at time interval $t$. Then, we have

$y_t = \hat{y}_t^i + e_t^i, \quad i = 1, 2, \ldots, K,$  (1)

where $f_i$ is the $i$th predictor, $\hat{y}_t^i$ is the blink number at time interval $t$ predicted by the $i$th predictor, and $e_t^i$ is the corresponding prediction error.

For a certain time interval, we can use the posterior probability $p_t^i$ to denote this uncertainty. The posterior probability of the $i$th predictor being the best model at time interval $t$ is defined as

$p_t^i = P\left(f_i \mid D_t\right),$  (2)

where $D_t = \{y_1, y_2, \ldots, y_t\}$ denotes the observed blink numbers up to interval $t$.

Then, according to the Bayesian rule, we can obtain

$p_t^i = \dfrac{P\left(y_t \mid f_i, D_{t-1}\right) p_{t-1}^i}{\sum_{j=1}^{K} P\left(y_t \mid f_j, D_{t-1}\right) p_{t-1}^j},$  (3)

where $K$ is the total number of component predictors; notice that

$\sum_{i=1}^{K} p_t^i = 1.$  (4)

Assuming that the prediction error $e_t^i$ is a Gaussian white noise time series with zero mean and standard deviation $\sigma_i$, we have

$P\left(y_t \mid f_i, D_{t-1}\right) = \dfrac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left(-\dfrac{\left(e_t^i\right)^2}{2\sigma_i^2}\right).$  (5)

Combining (3), (4), and (5) yields

$p_t^i = \dfrac{\dfrac{1}{\sigma_i}\exp\left(-\dfrac{\left(e_t^i\right)^2}{2\sigma_i^2}\right) p_{t-1}^i}{\sum_{j=1}^{K} \dfrac{1}{\sigma_j}\exp\left(-\dfrac{\left(e_t^j\right)^2}{2\sigma_j^2}\right) p_{t-1}^j}.$  (6)

Then, at time interval $t$, the prediction of the conventional BFA can be written as the linear combination of the outputs of all the predictors as follows:

$\hat{y}_t = \sum_{i=1}^{K} p_t^i\, \hat{y}_t^i,$  (7)

where $\hat{y}_t$ is the prediction generated by the conventional BFA and $\hat{y}_t^i$ is the result estimated by the $i$th predictor.

Similar to (6), $p_{t-1}^i$ is formulated as

$p_{t-1}^i = \dfrac{\dfrac{1}{\sigma_i}\exp\left(-\dfrac{\left(e_{t-1}^i\right)^2}{2\sigma_i^2}\right) p_{t-2}^i}{\sum_{j=1}^{K} \dfrac{1}{\sigma_j}\exp\left(-\dfrac{\left(e_{t-1}^j\right)^2}{2\sigma_j^2}\right) p_{t-2}^j}.$  (8)

Combining (6) and (8), we have

$p_t^i = \dfrac{\dfrac{1}{\sigma_i^2}\exp\left(-\dfrac{\left(e_t^i\right)^2 + \left(e_{t-1}^i\right)^2}{2\sigma_i^2}\right) p_{t-2}^i}{\sum_{j=1}^{K} \dfrac{1}{\sigma_j^2}\exp\left(-\dfrac{\left(e_t^j\right)^2 + \left(e_{t-1}^j\right)^2}{2\sigma_j^2}\right) p_{t-2}^j}.$  (9)

Similarly, we can formulate explicit expressions for $p_{t-2}^i, p_{t-3}^i, \ldots, p_1^i$. Substituting them into (9), we have

$p_t^i = \dfrac{\dfrac{1}{\sigma_i^{t}}\exp\left(-\dfrac{1}{2\sigma_i^2}\sum_{\tau=1}^{t}\left(e_\tau^i\right)^2\right) p_0^i}{\sum_{j=1}^{K} \dfrac{1}{\sigma_j^{t}}\exp\left(-\dfrac{1}{2\sigma_j^2}\sum_{\tau=1}^{t}\left(e_\tau^j\right)^2\right) p_0^j},$  (10)

where $p_0^i$ is the initial probability assigned to the $i$th predictor.

From (10), we can see that, rather than only depending on the prediction error at prediction interval $t$, the weight $p_t^i$ depends on the prediction errors of all past intervals. This characteristic makes the conventional BFA very inert to the fluctuating accuracy of the component predictors. If the dominant predictor is no longer the most accurate, it takes many intervals to reduce the dominant status of that predictor, thus imposing a negative impact on the predictions of the conventional BFA.

This problem arises because the conventional BFA does not consider the correlation between the historical data flows and the data flow at the prediction interval. Generally, only the data flows of blink number at the latest few intervals may strongly correlate with the data flow at a given prediction interval; the data flows of blink number at earlier time intervals may have much less impact on the current data flow.
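To make the inertia of (10) concrete, the following sketch computes the conventional BFA weights from the cumulative squared errors of every past interval and then fuses the component forecasts as in (7). It is a minimal illustration in Python/NumPy rather than the paper's implementation; the function names, the array layout (one row of errors per predictor), and the uniform prior are assumptions of the sketch.

```python
import numpy as np

def conventional_bfa_weights(errors, sigmas, prior=None):
    """Weights of K component predictors after eq. (10): each weight is driven
    by the cumulative squared errors of ALL past intervals.

    errors: (K, T) array, errors[i, tau] = e_tau^i for intervals 1..T
    sigmas: (K,) standard deviations of the predictors' error series
    prior:  (K,) initial probabilities p_0^i (uniform if omitted)
    """
    K, T = errors.shape
    sigmas = np.asarray(sigmas, float)
    prior = np.full(K, 1.0 / K) if prior is None else np.asarray(prior, float)
    # log of (1/sigma^T) * exp(-sum e^2 / (2 sigma^2)) * p_0, kept in log space
    log_w = (-T * np.log(sigmas)
             - np.sum(errors ** 2, axis=1) / (2.0 * sigmas ** 2)
             + np.log(prior))
    log_w -= log_w.max()          # shift before exponentiating to avoid underflow
    w = np.exp(log_w)
    return w / w.sum()

def bfa_fuse(weights, component_forecasts):
    """Eq. (7): the fused forecast is the weighted sum of component forecasts."""
    return float(np.dot(weights, component_forecasts))
```

Working in log space only guards against numerical underflow when the cumulative error sums become large; it does not change the resulting weights.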

3. Improved Bayesian Fusion Algorithm

3.1. Selection of Correlative Data Flow

In most forecasting methods, only the blink data flows from a few selected intervals are considered. Inspired by this idea, this paper hypothesizes that, at a particular prediction interval, the data flow of blink number is affected only by the data flows of the past few intervals that have comparatively higher relevance to it. With this assumption, the weight of each component predictor in the BFA at a given prediction interval would depend only on the accumulative prediction performance over a few selected intervals rather than over all intervals. Based on this assumption, (1) can be rewritten as

$y_\tau = \hat{y}_\tau^i + e_\tau^i, \quad \tau \in S_t,$  (11)

where $S_t$ represents the set of the previous intervals at which the data flows of blink number have a comparatively higher correlation with the data flow at prediction interval $t$.

Following the same inferential procedures from (2) to (6) and from (9) to (10), we have

$p_t^i = \dfrac{\dfrac{1}{\sigma_i^{m}}\exp\left(-\dfrac{1}{2\sigma_i^2}\sum_{\tau \in S_t}\left(e_\tau^i\right)^2\right) p_0^i}{\sum_{j=1}^{K} \dfrac{1}{\sigma_j^{m}}\exp\left(-\dfrac{1}{2\sigma_j^2}\sum_{\tau \in S_t}\left(e_\tau^j\right)^2\right) p_0^j},$  (12)

where $m$ represents the dimension of the set $S_t$. From (12), we can see that the weights of the component predictors depend only on their prediction errors at the $m$ previous intervals in $S_t$. Therefore, theoretically, the weights calculated by (12) are more sensitive to the fluctuating accuracy of the component predictors, and the number of data flows used to calculate $p_t^i$ is also greatly reduced. Substituting (12) into (7), we obtain the prediction of the improved BFA.
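As a companion to the previous sketch, the weight of (12) differs only in that the error sum runs over the selected set $S_t$ instead of over every past interval. This is again a hedged illustration; the argument names and the choice to pass $S_t$ as integer column offsets are assumptions made only for the sketch.

```python
import numpy as np

def improved_bfa_weights(errors, sigmas, s_t, prior=None):
    """Eq. (12): weights use only the prediction errors at the intervals in the
    correlated set S_t (e.g. the few most relevant recent intervals).

    errors: (K, T) array of past errors; s_t: column indices forming S_t
    """
    K = errors.shape[0]
    sigmas = np.asarray(sigmas, float)
    prior = np.full(K, 1.0 / K) if prior is None else np.asarray(prior, float)
    e_sel = errors[:, list(s_t)]          # keep only the intervals in S_t
    m = e_sel.shape[1]
    log_w = (-m * np.log(sigmas)
             - np.sum(e_sel ** 2, axis=1) / (2.0 * sigmas ** 2)
             + np.log(prior))
    log_w -= log_w.max()
    w = np.exp(log_w)
    return w / w.sum()
```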

3.2. Grey Relation Entropy-Based Analysis

According to the discussion in the previous section, one challenge to implementing the improved BFA is how to identify the time interval set where the data flows of blink number have comparatively higher correlation with the data flows at the prediction interval. Because (12) includes a nonlinear, complex exponential function, the regression method is not applicable here. An alternative method is the grey relation entropy-based analysis (GREBA) [38, 39]. It has been extensively applied for relevance analysis in various disciplines, such as logistics [40], economics [41], and engineering [42]. In this paper, we use this method for relevance analysis between current and historical data flows of blink number.

Assume that, at the prediction interval $t$, the blink number is affected by the data flows of its previous $D$ intervals; $D$ should be large enough to cover most of the important data flows. We use $x_0 = \{x_0(k),\ k = 1, 2, \ldots, n\}$ and $x_i = \{x_i(k),\ k = 1, 2, \ldots, n\}$ to express the target data flow time sequence at the prediction interval and the alternative data flow time sequence at the $(t-i)$th interval, respectively, where $i = 1, 2, \ldots, D$ indexes the alternative data flow time sequences and $n$ is the length of the target and alternative data flow time sequences. Here, the sequences $x_0$ and $x_i$ are comparable because they are obtained from the same blink data sequence; therefore, it is unnecessary to normalize the original data. The grey relation coefficient between $x_0(k)$ and $x_i(k)$ is then calculated as

$\xi_{0i}(k) = \dfrac{\min\limits_{i}\min\limits_{k}\left|x_0(k) - x_i(k)\right| + \rho \max\limits_{i}\max\limits_{k}\left|x_0(k) - x_i(k)\right|}{\left|x_0(k) - x_i(k)\right| + \rho \max\limits_{i}\max\limits_{k}\left|x_0(k) - x_i(k)\right|},$  (13)

where $\rho$ is the distinguishing coefficient in the range of $(0, 1]$, and it can be adjusted to make a better distinction between the target sequence and the alternative sequences.

To satisfy the rule of grey entropy, the grey relation coefficients should be transformed into a grey relation density $P_i(k)$, calculated by

$P_i(k) = \dfrac{\xi_{0i}(k)}{\sum_{k=1}^{n}\xi_{0i}(k)}.$  (14)

Once the grey relation density is obtained, the entropy of the grey relation coefficients of each alternative $x_i$, $E_i$, is then computable, which represents the relevance degree of the grey relation coefficients. The calculation is shown below as

$E_i = \dfrac{H_i}{H_{\max}}, \quad H_i = -\sum_{k=1}^{n} P_i(k)\ln P_i(k), \quad H_{\max} = \ln n,$  (15)

where $H_i$ is the grey entropy between the target data flow time sequence $x_0$ and the alternative data flow time sequence $x_i$ and $H_{\max}$ is the maximum grey entropy, which guarantees $0 \le E_i \le 1$.

Finally, by multiplying the entropy $E_i$ and the average grey relation coefficient of alternative $x_i$, we can obtain the grey relevancy grade (GRG). In this study, the GRG is defined as the numerical measurement of the relevancy between the alternative data flow time sequence $x_i$ and the target data flow time sequence $x_0$, which is calculated as

$\gamma_i = E_i \cdot \dfrac{1}{n}\sum_{k=1}^{n}\xi_{0i}(k),$  (16)

where $\gamma_i$ is the GRG of the alternative time sequence $x_i$ with respect to the target sequence $x_0$. It is distributed between 0 and 1. Based on the definition of the GRG, it can be seen that the higher $\gamma_i$ is, the more relevant the alternative data flow time sequence $x_i$ is to the target data flow time sequence $x_0$.
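The following sketch condenses (13)–(16) for a target blink sequence and a set of alternative sequences. The default distinguishing coefficient of 0.5 and the vectorized two-level min/max over all alternatives are conventional choices in grey relational analysis and are assumptions of the sketch, not values taken from the paper.

```python
import numpy as np

def grey_relevancy_grades(target, alternatives, rho=0.5):
    """Grey relevancy grade (GRG) of each alternative sequence w.r.t. the target.

    target:       length-n target data flow time sequence x_0
    alternatives: (N, n) array of alternative sequences x_1 .. x_N
    rho:          distinguishing coefficient in (0, 1]
    """
    x0 = np.asarray(target, float)
    xs = np.asarray(alternatives, float)
    diff = np.abs(xs - x0)                                  # |x_0(k) - x_i(k)|
    d_min, d_max = diff.min(), diff.max()                   # two-level min / max
    coeff = (d_min + rho * d_max) / (diff + rho * d_max)    # eq. (13)
    density = coeff / coeff.sum(axis=1, keepdims=True)      # eq. (14)
    entropy = -np.sum(density * np.log(density), axis=1)    # grey entropy H_i
    entropy /= np.log(coeff.shape[1])                       # normalise by ln n
    return entropy * coeff.mean(axis=1)                     # eq. (16): GRG values
```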

3.3. Procedures of the Improved BFA

Based on the above discussions, the procedures of the improved BFA are described in Figure 1 and detailed steps are as follows.

Step 1 (predictor calibration and validation). Use the historical data flows of blink number to calibrate the coefficients of each component predictor and then forecast the testing data flows with the trained predictors.

Step 2 (set identification). Assume that, at the prediction interval, the data flow is potentially correlated with the data flows of the latest $D$ intervals. Then, use the GREBA method to calculate the GRG of each alternative data flow time sequence $x_i$. Ranking the alternative data flow time sequences according to their GRGs in decreasing order, the intervals of the first $m$ alternative data flow time sequences are identified as the set $S_t$ (a minimal ranking sketch is given after Step 3).

Step 3 (BFA prediction). Predict the data flows using the improved BFA where the weights of the component predictors are calculated by (12).
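Step 2 can be summarized as a simple ranking: compute the GRG of each of the $D$ candidate intervals and keep the $m$ best as $S_t$. The sketch below uses purely illustrative GRG values in the usage line; in practice they would come from the GREBA computation described in Section 3.2.

```python
import numpy as np

def select_correlated_intervals(grg, m=3):
    """Rank candidate intervals by GRG (index 0 = the most recent interval)
    and return the offsets of the m most relevant ones as the set S_t."""
    order = np.argsort(np.asarray(grg, float))[::-1]   # decreasing GRG
    return np.sort(order[:m])

# illustrative GRGs for the latest 10 intervals -> offsets of the 3 best
s_t = select_correlated_intervals(
    [0.81, 0.78, 0.74, 0.55, 0.52, 0.50, 0.47, 0.45, 0.44, 0.42], m=3)
```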

4. Component Predictors Selection

4.1. ARIMA Model

In theory, the ARIMA model is one of the most common models for time series forecasting. It formulates its mathematical model to fit the time series based on the Box-Jenkins autoregressive moving average (ARMA) model [43]. It is often referred to as the ARIMA$(p, d, q)$ model, where $p$ and $q$ are the orders of the AR (autoregressive) and MA (moving average) parts, respectively, while $d$ is the number of differences needed to make the time series stationary [44–46]. The model can be written as

$\phi_p(B)(1 - B)^d y_t = \theta_q(B)\varepsilon_t,$  (17)

where $y_t$ denotes the predicted data flow at the $t$th interval; $B$ is the backshift operator defined by $B y_t = y_{t-1}$; $\phi_p(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p$ is the AR operator of order $p$, where $\phi_1, \phi_2, \ldots, \phi_p$ are the autoregressive coefficients; $\theta_q(B) = 1 - \theta_1 B - \theta_2 B^2 - \cdots - \theta_q B^q$ is the MA operator of order $q$, where $\theta_1, \theta_2, \ldots, \theta_q$ are the moving average coefficients; $\varepsilon_t$ is the random error, which follows a normal distribution with mean zero and variance $\sigma_\varepsilon^2$, and $\operatorname{Cov}(\varepsilon_t, \varepsilon_s) = 0$ for all $s \ne t$.

The procedures to formulate the ARIMA model are summarized in Figure 2. First, the AR and MA operators are used to identify whether the time series is stationary; if it is not, we difference the data to make it stable. Second, the maximum likelihood estimation method is used to estimate the parameters of the ARIMA model. The resulting model is then tested: we diagnose whether the residual error of the time series is white noise, and if it is not, we reestimate the parameters until the residual error turns out to be white noise. Finally, with suitable parameters, the ARIMA model can be applied for prediction [47].
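A minimal way to exercise this Box-Jenkins loop in code is the statsmodels ARIMA class, as sketched below. The sketch assumes statsmodels is installed and uses a placeholder order (2, 1, 1); the order actually used for the blink data is the one identified by the procedure above, not this placeholder.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # assumes statsmodels >= 0.12

def arima_forecast(blink_series, order=(2, 1, 1), steps=1):
    """Fit an ARIMA(p, d, q) model to the historical blink-number series and
    forecast the next `steps` intervals; `order` is only a placeholder here."""
    model = ARIMA(np.asarray(blink_series, float), order=order)
    fitted = model.fit()
    return fitted.forecast(steps=steps)
```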

4.2. Radial Basis Function Neural Network Model

ANN models are formulated by emulating the processes of the human neurological system. Among numerous ANN models, the radial basis function neural network (RBFNN) is perhaps the best known and is often considered as an alternative for data flow forecasting [48]. The RBFNN is a feed-forward neural network with strong local nonlinear approximation ability and a fast learning rate. It contains three layers: the input layer, the hidden layer, and the output layer. Its structure is presented in Figure 3 [49]. There are $n$ neurons in the input layer; therefore, the input data can be expressed by an $n$-dimensional column vector $X = [y_t, y_{t-1}, \ldots, y_{t-n+1}]^{\mathrm{T}}$, where $y_t$ is the blink number at the current interval $t$. There are $M$ neurons in the hidden layer, which can be expressed using radial basis functions $\varphi_j(X)$, $j = 1, 2, \ldots, M$. There is one neuron in the output layer, which is used to express the prediction value at the next interval, $\hat{y}_{t+1} = \sum_{j=1}^{M} w_j \varphi_j(X)$, where $w_j$, $j = 1, 2, \ldots, M$, is the weight value between the $j$th neuron in the hidden layer and the neuron in the output layer.

4.3. Kalman Filtering Model

KF is an estimator that provides a set of mathematical equations to estimate the state of a process by minimizing the mean squared error. The filter is very powerful in estimating past, present, and even future states. With its high computational efficiency, it has been applied in numerous fields such as process control, flood prediction, tracking, and navigation [50–52]. Assume that $t$ is the time interval index and that $\hat{z}_t$ and $z_t$ represent the predicted and observed data flows at the current interval, respectively. Let $x_t$ denote the state vector, composed of the most recent data flows. According to KF theory, the predicted data flow can be formulated by the measurement equation

$z_t = H x_t + v_t,$  (18)

where $v_t$ represents the measurement noise with normal probability distribution $N(0, R)$ and $H$ is the measurement matrix. $x_t$ is the state vector at interval $t$, which is updated as

$x_t = A x_{t-1} + w_{t-1},$  (19)

where $x_{t-1}$ is the state vector at interval $t-1$, $w_{t-1}$ is the process noise with normal probability distribution $N(0, Q)$, and $A$ is the transition matrix.

According to KF theory, the state estimation equations (time update and measurement update) can be expressed as follows:

$\hat{x}_t^{-} = A \hat{x}_{t-1}, \quad P_t^{-} = A P_{t-1} A^{\mathrm{T}} + Q, \quad K_t = P_t^{-} H^{\mathrm{T}}\left(H P_t^{-} H^{\mathrm{T}} + R\right)^{-1}, \quad \hat{x}_t = \hat{x}_t^{-} + K_t\left(z_t - H \hat{x}_t^{-}\right), \quad P_t = \left(I - K_t H\right) P_t^{-}.$  (20)

The prediction equation is

$\hat{z}_{t+1} = H A \hat{x}_t.$  (21)

To simplify the forecasting procedures and improve the accuracy of the KF method, the following tips are adopted when it is applied for data flow prediction: (1) the dimension of the state vector is decided by the ARIMA model, and it is set to be the number of the historical data flows that are most correlative with the current data flow; (2) the transition matrix $A$ is set to be the identity matrix; (3) let ; (4) the measurement noise covariance matrix $R$ can change with each time step or measurement, but for simplicity it is assumed to be a constant decided by the sample data of blink number; (5) let and .
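For reference, one predict-correct cycle of the filter described by (18)–(21) can be written as below. The matrix shapes (a p-dimensional state observed through a 1×p measurement matrix) follow the setup above, while the function name and the explicit matrix inverse are choices made only for this sketch.

```python
import numpy as np

def kf_step(x, P, z, A, H, Q, R):
    """One Kalman-filter cycle: time update, measurement update with the newly
    observed blink number z, then a one-step-ahead forecast of the next value."""
    # time update (predict)
    x_prior = A @ x
    P_prior = A @ P @ A.T + Q
    # measurement update (correct)
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    # forecast of the next observed data flow, as in the prediction equation
    z_next = H @ (A @ x_post)
    return x_post, P_post, z_next
```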

5. Experiments and Analysis

5.1. Experiment Setup

It is very dangerous for subjects to drive on a real highway when they feel fatigued; therefore, the experimental blink data are acquired through driving simulation experiments in the laboratory. The platform of our driving simulation experiment includes hardware and software. The hardware consists of the driving simulation equipment, a computer, a scene display device, a video capture card, a CCD camera, and so forth. The software comprises the virtual driving scene and the visual detection program for the blink number. Experiments are carried out under bright lighting conditions. Twenty subjects (12 male and 8 female), each with more than three years of driving experience and none of whom had any disease or had consumed alcohol before the experiment, took part in the experiment.

5.2. Blink Number Calculation

In this paper we select an indicator of the open-closed state of the eyes, which can be calculated by (22). Whether the driver's eyes are open or closed is estimated by comparing this indicator with a set threshold: if the threshold condition is satisfied, it is inferred that the driver's eyes are open; otherwise, the driver is deduced to be blinking.

Physiological studies show that the blink number increases and blinks last longer when a driver feels fatigued. Therefore, it is reasonable to deduce fatigue driving from the blink number in given time intervals, where the blink number is counted from the frames in which the driver's eyes are recognized as being closed. If this number exceeds the set threshold, fatigue driving is determined [53].
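Given a per-frame openness indicator sampled every 100 ms and a threshold as described above, counting blinks amounts to counting maximal runs of "closed" frames. The sketch below assumes, for illustration only, that a frame is treated as closed when the indicator falls below the threshold; the paper's exact indicator in (22) is not reproduced here.

```python
def count_blinks(indicator_per_frame, threshold):
    """Count blinks in one interval: a blink is a maximal run of consecutive
    frames whose openness indicator is below the threshold (closed eyes)."""
    blinks, closed = 0, False
    for value in indicator_per_frame:
        if value < threshold:
            if not closed:          # a new closed run starts: one more blink
                blinks += 1
                closed = True
        else:
            closed = False
    return blinks
```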

5.3. Data Collection

In this study, eight hours of blink data each day were collected on a driving simulator, from 1:00 to 5:00 and from 12:00 to 16:00. Whether drivers are blinking or not is detected and judged by visual detection algorithms based on eye state recognition every 100 milliseconds. The prediction time interval is set to five minutes. The four-day data set is separated into two parts: the first part, containing the data from the first three days, is used as historical data; the second part, containing the data from the fourth day, is used for model testing and comparison. The blink data collected in five-minute intervals from 1:00 to 5:00 and from 12:00 to 16:00 on the first day are plotted in Figures 4(a) and 4(b), respectively. Figures 4(c) and 4(d) show the blink data from 1:00 to 5:00 and from 12:00 to 16:00 on the second day. From Figure 4, we can see that the blink numbers from 3:00 to 5:00 and from 14:00 to 16:00 are much higher than those of other periods. In addition, the blink data at midnight and noon contain two peak values: the midnight peak (about 3:30 to 4:20) and the noon peak (about 14:40 to 15:20). The observed blink data from 1:00 to 2:30 and from 12:00 to 13:30, however, exhibit no apparent peak. The testing data are collected from the fourth day and include both peak and nonpeak data, so we can compare the prediction performance of the conventional and improved BFA under different data flow patterns.

5.4. Construction of Three Component Predictors
5.4.1. ARIMA

The three days of data in the first part are used for model identification, that is, to determine the orders of AR, differencing, and MA. The procedure is discussed in Section 4.1. First of all, the autocorrelation test shows that the partial correlations of the original sample data decay very slowly, so we difference the sample data to obtain a stationary time sequence; the corresponding test shows that the differenced data are stable. Following that, the differenced data are used to identify the orders of AR and MA. Finally, after parameter estimation with statistical packages, the most suitable ARIMA model is identified.
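The differencing decision in this identification step can be automated with an augmented Dickey-Fuller test, as in the sketch below; the 0.05 significance level and the cap of two differences are assumptions of the sketch, not values from the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller  # assumes statsmodels is installed

def difference_until_stationary(series, alpha=0.05, max_d=2):
    """Difference the blink series until the ADF test rejects a unit root
    (p-value < alpha); return the differenced data and the order d used."""
    x = np.asarray(series, float)
    d = 0
    while adfuller(x)[1] >= alpha and d < max_d:   # element 1 is the p-value
        x = np.diff(x)
        d += 1
    return x, d
```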

5.4.2. RBFNN

The blink data collected in the first three days are used for network training. The RBFNN is selected with the following architecture: five neurons in the input layer, one hidden layer with $M$ neurons, and one output neuron. At a prediction interval, the latest five blink data, $y_t$, $y_{t-1}$, $y_{t-2}$, $y_{t-3}$, and $y_{t-4}$, are used as the input data. The Gaussian function is chosen as the radial basis function in the hidden layer; that is, $\varphi_j(X) = \exp\left(-\left\|X - c_j\right\|^2 / 2\sigma_j^2\right)$, where $c_j$ is the center of the Gaussian function and $\sigma_j$ is its width. During the training of the RBFNN, the centers $c_j$ are calculated by using the $k$-means clustering algorithm [54] and $\sigma_j = d_{\max}/\sqrt{2M}$, where $d_{\max}$ is the largest distance between any two of the centers that have been obtained.
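A compact version of this training recipe (k-means centres, the width rule $\sigma = d_{\max}/\sqrt{2M}$, Gaussian hidden units, and least-squares output weights) is sketched below. The class name, the default of ten hidden neurons, and the use of scikit-learn's KMeans are assumptions for illustration, since the hidden-layer size is not stated above.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is available

class RBFNForecaster:
    """Minimal RBFNN sketch: Gaussian hidden units on k-means centres and
    least-squares output weights, trained on sliding windows of blink counts."""

    def __init__(self, n_centers=10):
        self.m = n_centers

    def _phi(self, X):
        # squared distances of each sample to each centre -> Gaussian activations
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        """X: (samples, 5) windows of the latest five blink counts; y: next count."""
        self.centers = KMeans(n_clusters=self.m, n_init=10).fit(X).cluster_centers_
        d_max = max(np.linalg.norm(a - b) for a in self.centers for b in self.centers)
        self.sigma = d_max / np.sqrt(2.0 * self.m)   # common width rule
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w
```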

5.4.3. KF

The KF is used for forecasting by extracting the most recent blink data measured. As mentioned before, the order of AR is regarded as the number of previous blink data on which the KF is based. Therefore, the dimension of the state vector $x_t$ in (18) equals the AR order identified in Section 5.4.1.

5.5. Evaluation Criterion of Prediction Performance

We select and employ three evaluation criteria, namely, the mean absolute error (MAE), the mean relative error (MRE), and the variance of relative error (VRE), to evaluate the prediction performance of the predictors. The evaluation criteria are calculated by (23). Among these criteria, MAE and MRE are used to measure the prediction accuracy, while VRE is used to present the prediction stability:

$\mathrm{MAE} = \dfrac{1}{N}\sum_{t=1}^{N}\left|y_t - \hat{y}_t\right|, \quad \mathrm{MRE} = \dfrac{1}{N}\sum_{t=1}^{N}\dfrac{\left|y_t - \hat{y}_t\right|}{y_t}, \quad \mathrm{VRE} = \dfrac{1}{N}\sum_{t=1}^{N}\left(\dfrac{\left|y_t - \hat{y}_t\right|}{y_t} - \mathrm{MRE}\right)^2,$  (23)

where $y_t$ is the observed blink number in time interval $t$, $\hat{y}_t$ is the predicted blink number in time interval $t$, and $N$ is the number of prediction intervals.
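The three criteria in (23) can be computed directly from the observed and predicted sequences, as in the sketch below; using the population variance of the relative errors for VRE is an assumption of the sketch where the exact variance convention is not restated here.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """MAE, MRE and VRE over n prediction intervals, as in eq. (23)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rel_err = np.abs(y_true - y_pred) / y_true
    mae = float(np.mean(np.abs(y_true - y_pred)))   # mean absolute error
    mre = float(np.mean(rel_err))                   # mean relative error
    vre = float(np.var(rel_err))                    # variance of relative error
    return mae, mre, vre
```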

5.6. Predictions of Component Predictors

After being trained with the blink data of the first three days, the three component predictors are used to predict the blink number of the next observation period on the fourth day. Statistics show that drivers usually feel most fatigued in the period from 1:00 to 5:00 each day, even if they slept well the night before, and the blink data flow in this period fluctuates obviously. Therefore, we select the blink data from 1:00 to 5:00 to test the prediction performance of each predictor. Figure 5 describes the outputs of the three component predictors for the blink number in the period from 1:00 to 5:00. The corresponding prediction performances are summarized in Table 1. Table 1 indicates that, although all component predictors make successful predictions as a whole, no single predictor can retain the best prediction performance all the time. It can be observed from Table 1 that, while the ARIMA predictor makes relatively better predictions in the period from 1:00 to 2:30 than RBFNN and KF, it delivers the poorest prediction performance in the period from 2:30 to 3:30. Therefore, it is necessary to find ways to integrate the merits of the single predictors to improve the prediction accuracy and stability.

5.7. Comparison of Prediction Performance of the Improved BFA and the Conventional BFA

Prior to applying the improved BFA for blink number forecasting, the set $S_t$ should be determined by using the GREBA method. We assume that the blink number at the prediction interval is potentially affected by the blink numbers of the last 10 intervals (50 minutes). Our aim is to identify a few intervals at which the blink number has comparatively higher relevance to the blink number at the prediction interval. Once determined, the prediction errors of each of the component predictors at these intervals will be used to decide their weights in the improved BFA. The GRG between each alternative and the target blink number time sequences is described in Figure 6. It demonstrates that, although there are some perturbations between adjacent intervals, over a longer horizon the GRG decreases as the interval moves further back from the prediction interval, which implies that the further a time interval is from the current prediction interval, the less relevant the corresponding blink number is to the current blink number. This result coincides with our hypothesis that the blink number is very complicated and usually exhibits irregular randomness. It is affected by the subject's sleep situation, continuous driving time, road traffic conditions, and the subject's psychological or physiological conditions, and only a few of the latest blink data are sufficient to dynamically track the blink number at the prediction interval. Note that, in Figure 6, the GRG ceases to decrease dramatically after three intervals; the blink numbers at the past 3 intervals are thus identified as the values most correlated with the blink number at the forecasting interval; that is, $S_t = \{t-1, t-2, t-3\}$ and $m = 3$.

The prediction performances of the improved BFA and the conventional BFA are listed in Table 2. Tables 1 and 2 imply that, while the conventional BFA improves the prediction performance of the single predictors to some extent, it still falls short of the improved BFA. Table 2 indicates that the improved BFA considerably outperforms the conventional BFA in both accuracy and stability. For example, the improved BFA has a performance of 4.16% MRE and 3.52% VRE in the period from 1:00 to 2:00, a 26.11% improvement in MRE and a 30.01% improvement in VRE over the conventional BFA, which has an MRE of 5.63% and a VRE of 5.03%. Even where the improved BFA has its worst performance, in the period from 2:00 to 3:00, with 7.99% MRE and an MAE of 12.11, it is still better than the corresponding performance of the conventional BFA. Another insight gained from the comparison of the conventional BFA and the improved BFA in Tables 1 and 2 is that the performances of both methods depend heavily on the performance of the component predictors that each method combines. Thus, both BFAs perform better if each component predictor is more accurate.

Note that the prediction performances of both the conventional BFA and the improved BFA rely heavily on the weights of the component predictors calculated by (10) and (12). Because the weight in the BFA characterizes the estimated probability that a particular predictor best forecasts the observed blink number, whether the BFA can assign more weight to the "actual" best predictor at each prediction interval is crucial to its prediction accuracy.

Figure 7(a) shows that the weight of the ARIMA predictor assigned by the conventional BFA is the highest from 1:00 to 2:30, then decreases slowly, and finally approaches 0 after 2:30. The weight of the RBFNN predictor increases monotonically over time and eventually dominates the predictions of the conventional BFA. This happens because the predictor with the best average performance changed during the day, from the ARIMA predictor to the RBFNN predictor. However, upon inspection, we find that the weights calculated by the conventional BFA are unable to dynamically track the fluctuating accuracy of each component predictor. For example, Figure 5 shows that the RBFNN predictor delivers the best predictions in the periods from 2:30 to 3:30 and from 4:20 to 5:00. However, during these periods, the weight of the RBFNN assigned by the conventional BFA is not the highest of the three, as shown in Figure 7(a). Nevertheless, the extraordinary prediction performance of the RBFNN predictor is indeed detected by the conventional BFA, given that the weight of the RBFNN predictor increases monotonically during these periods. However, because the weights of the component predictors at every prediction interval in the conventional BFA are accounted for by their prediction errors from all past intervals, the increase is very limited. Once the weight of the RBFNN predictor finally takes the leading role, it is also difficult to change even if the RBFNN predictor produces comparatively larger prediction errors during a period. This effect can be seen in the predictions in Figure 5 in the period from 3:30 to 4:20, during which the RBFNN predictor makes much worse predictions than the KF predictor. However, Figure 7(a) shows that the weight of the RBFNN predictor assigned by the conventional BFA remains dramatically larger than that of the KF predictor, which may lead to inaccurate predictions. Our improved BFA has no such deficiency because the weights of the component predictors at the current interval are affected only by their prediction errors from the latest three intervals. Therefore, the improved BFA is more sensitive to the fluctuating accuracy of the component predictors and can adjust their weights more quickly. Figure 7(b) shows that, in the periods from 2:30 to 3:30 and from 4:20 to 5:00, when the RBFNN predictor makes the best predictions, its weight assigned by the improved BFA is always the highest of the three, and when it is no longer the best predictor in the period from 3:30 to 4:20, its corresponding weight decreases dramatically; thus, better predictions are generated compared to the conventional BFA (Figure 7(c)).

Figure 7(a) also reveals that, in most cases, none of the single predictors can achieve the best performance during all prediction periods. However, our improved BFA is sensitive to the fluctuating accuracy of each component predictor and can assign more weight to the best predictor. Therefore, it is clearly preferable to the conventional BFA.

Based on the forecasted blink number, further experiments on identifying whether drivers are fatigued show that the correct detection rate of fatigue driving reaches 98.4%, while the false-alarm rate is only 1.9%. Therefore, fatigue driving can be accurately warned against in advance based on the blink number forecasted by the improved BFA.

6. Conclusions

To forecast the fatigue state of drivers in advance, we propose an improved BFA to enhance the accuracy and stability of the blink number forecasted for the next time interval. By analyzing the correlation between historical data flows and the current data flow, the improved BFA considers only the prediction errors from the latest few intervals when calculating the weight of each component predictor. Thus, it is more sensitive to the fluctuating accuracy of the component predictors and can adjust their weights more quickly than the conventional BFA. To make full use of the advantages of the various methods, ARIMA, RBFNN, and KF predictors are designed and fused linearly into the BFA. Simulation experiments demonstrate that the improved BFA considerably outperforms the conventional BFA in both accuracy and stability. We found that the conventional BFA is quite impervious to the fluctuating accuracy of the component predictors; most of the time, it assigns nearly all of the weight to one component predictor. Compared with the conventional BFA, our improved BFA can adjust the weights of the component predictors more rapidly and generate better predictions. Therefore, the improved BFA is preferable to the conventional BFA for blink number prediction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research has been supported in part by the National Nature Science Foundation of China (nos. 61304205, 61203273, 61103086, and 41301037), Nature Science Foundation of Jiangsu Province (BK20141002), the Open Funding Project of State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (no. BUAA-VR-13KF-04), Jiangsu Ordinary University Science Research Project (no. 13KJB120007), and Innovation and Entrepreneurship Training Project of College Students (nos. 201510300228, 201510300276).