Min Han, Sunan Ge, Minghui Wang, Xiaojun Hong, Jie Han, "A Novel Dynamic Update Framework for Epileptic Seizure Prediction", BioMed Research International, vol. 2014, Article ID 957427, 11 pages, 2014. https://doi.org/10.1155/2014/957427
A Novel Dynamic Update Framework for Epileptic Seizure Prediction
Abstract
Epileptic seizure prediction is a difficult problem in clinical applications, yet it has the potential to significantly improve the daily lives of patients whose seizures cannot be controlled by either drugs or surgery. However, most current studies of epileptic seizure prediction focus only on high sensitivity and a low false-positive rate and lack the flexibility to handle the variety of epileptic seizures and patients' physical conditions. Therefore, a novel dynamic update framework for epileptic seizure prediction is proposed in this paper. In this framework, two basic sample pools are constructed and updated dynamically, so that the prediction model can be kept the most appropriate one for predicting the arrival of seizures. The Mahalanobis distance is introduced here to exploit side information, measuring the distance between two data sets. In addition, a multichannel feature extraction method based on the Hilbert-Huang transform and the extreme learning machine is utilized to extract the features that distinguish a patient's preseizure state from the normal state. Finally, a dynamic update epileptic seizure prediction system is built. Simulations on the Freiburg database show that the proposed system performs better than the one without update. The research in this paper is significantly helpful for clinical applications, especially for the development of online portable devices.
1. Introduction
Epilepsy is a chronic brain dysfunction syndrome and one of the most common serious brain diseases [1]. With a worldwide prevalence of approximately 1%, it affects over 50 million people [2]. Apart from the patients whose seizures can be controlled by antiepileptic drugs or epilepsy surgery, there remain many who cannot be treated sufficiently by any available therapy [3]. These patients are at risk of serious injuries and are prone to an intense feeling of helplessness that adversely influences their daily lives. Therefore, an effective and reliable seizure prediction method, which can forecast the arrival of a seizure, is needed for these patients, providing enough warning time to allow for safety-enhancing behavioral responses.
The most effective way to predict the arrival of an epileptic seizure is electroencephalogram (EEG) analysis [4]. EEG has been shown to be a nonlinear, nonstationary, and chaotic time series [5], providing information about the spatiotemporal patterns of brain electrical activity [6]. Usually, the power spectrum [7], largest Lyapunov exponent [8], correlation dimension [9], similarity index [10], AR coefficients [11], and so forth are calculated to represent the features of a piece of EEG recording, but these are univariate measures. Aarabi et al. pointed out that there is no clear superiority of nonlinear measures over linear measures, whereas bivariate measures are generally more effective [12]. Therefore, bivariate measures, such as phase synchronization [13–16], linear correlation [17], and nonlinear interdependence [17], have received close attention from researchers. Since epileptic seizures are usually characterized by an abnormal synchronized electric discharge of neurons, this paper extracts EEG features from the viewpoint of phase analysis. Considering the limits of the Hilbert transform (HT) and the wavelet transform (WT) [18], the Hilbert-Huang transform (HHT) [19], which is more suitable for nonlinear and nonstationary signal processing, is chosen to calculate the phases of the EEG signals. Differently from other commonly used phase synchronization indices [20, 21], the phase interaction is quantified by an extreme learning machine (ELM) [22].
However, most current studies of automatic epileptic seizure prediction focus on offline methods [7–16, 20, 21]. Although they may temporarily achieve high sensitivity and a low false-positive rate, they cannot keep pace with the patients' changing conditions. Therefore, an adaptive, online method or framework is badly needed. Given that seizure prediction can be cast as a classification problem [23], there are many online classification methods for neural networks [24, 25], but they are not appropriate for this application. In general online methods, the current samples have a significant effect on the result, while earlier samples have less influence [24, 25]. However, the early samples also play an important role in epileptic seizure prediction, so their importance should not be ignored or reduced. Furthermore, general online methods cannot guarantee a balance between the training samples of different classes and easily bias the training in favor of one of the classes. Therefore, a novel dynamic update framework is proposed in this paper, which keeps the prediction model fresh by updating the sample pools.
In the proposed framework, the distance metric is the key issue: it measures the distance between different sample points or different classes. For instance, K-means [26] and K-nearest-neighbor (KNN) [27] classifiers need to be supplied with a suitable distance metric through which neighboring data points can be identified. Although the Euclidean distance metric is commonly used, it assumes that each feature of a data point is equally important and independent of the others. This assumption may not always be satisfied in real applications, especially when dealing with high-dimensional data where some features may not be tightly related to the topic of interest [28]. Thus, supplying a distance metric is highly problem-specific and determines the success or failure of the learning algorithm or the developed system. In addition, a family of distance metric learning algorithms has been developed to make use of pairwise constraints [29–31]. A pairwise constraint is a kind of side information [29]. One popular form of side information consists of must-links and cannot-links [31]: a must-link indicates that a pair of data points must be in the same class, whereas a cannot-link indicates that the two data points must be in different classes. Another popular form is the relative comparison "A is closer to B than A is to C" [30]. In this paper, such side information is considered, and the Mahalanobis distance is introduced.
All the above considerations motivate our method. Firstly, a novel dynamic update framework for seizure prediction is proposed. Secondly, a basic prediction model based on both multichannel feature extraction and classification is built and embedded into the proposed dynamic update framework. On this basis, a complete epileptic seizure prediction system is realized. The rest of the paper is organized as follows. Section 2 explains the proposed dynamic update framework for seizure prediction in detail. Section 3 describes the multichannel EEG feature extraction method based on HHT and ELM. Section 4 outlines the basic prediction model for epileptic seizure prediction. In Section 5, the performance of the proposed method is evaluated on the Freiburg dataset. Finally, Section 6 concludes the paper.
2. Dynamic Update Framework for Seizure Prediction Model
Currently, most automatic seizure prediction methods are offline methods, whose models cannot be changed or improved once they are built [7–16, 20, 21]. However, because the physical conditions of patients constantly change, a prediction model constructed from only a few seizures and finite interictal recordings not only cannot be guaranteed to be the most appropriate one, but also cannot adapt to the patient's health condition. Therefore, training the seizure prediction model dynamically is necessary. Based on the above, a novel dynamic update framework for seizure prediction is proposed, which achieves self-adaptation by updating the training dataset. Figure 1 shows the flow chart of the dynamic update framework.
For each patient, there are datasets called ictal and interictal. The ictal periods, which contain the epileptic seizure period and the preictal period, are determined by experienced epileptologists through visual inspection of intracranial recordings, based on the identification of typical seizure patterns preceding clinically manifest seizures. Herein, to acquire enough training samples, the preictal period is at least 50 min. It can be seen from Figure 1 that two sample pools, a preictal pool and an interictal pool, first need to be built up for the dynamic update framework; they are filled with preictal samples and interictal samples, respectively. The prediction model is built based on these two pools, and the prediction horizon is initialized. The system uses the current model to predict the seizures in the subsequent recordings. Once a false alarm occurs, the system decides whether the observed sample set (explained in Section 2.2) is abnormal or not, and only normal samples can be used to update the interictal sample pool of the model. If a seizure cannot be predicted (i.e., the seizure alarm is missed), the preictal sample pool is updated using the samples of the 30 to 40 minutes immediately preceding the seizure onset. Finally, the prediction model is updated based on the new sample pools, and the system uses the new prediction model to predict seizures.
In the above procedure, three parts need to be discussed and explained: the abnormal detection, the criterion for updating the sample pools, and the two conditions for update. For the abnormal detection, a criterion is needed to determine what kind of sample is abnormal. For the update criterion, a rule is needed to determine how old samples are replaced by new ones. Finally, two conditions, false alarm and missing alarm, are considered.
Currently, the commonly used distance metrics are the Euclidean distance [32], Mahalanobis distance [28], Manhattan distance [33], Chebyshev distance [34], and so on. According to their definitions, the Manhattan and Chebyshev distances are not appropriate for the problem in this paper. The Euclidean distance metric assumes that each feature of a data point is equally important and independent of the others. This assumption may not always be satisfied in real applications, especially when dealing with high-dimensional data where some features may not be tightly related to the topic of interest [28]. The Mahalanobis distance, in contrast, is measured between two data points or two data sets in the space defined by the relevant features [28]. Since it accounts for unequal variances as well as correlations between features, it adequately evaluates the distance by assigning different weights or importance factors to the features of data points. Only when the features are uncorrelated and of equal variance does the Mahalanobis distance reduce to the Euclidean distance. In addition, geometrically, a Mahalanobis distance metric can adjust the geometrical distribution of data so that the distance between similar data points is small. Therefore, the Mahalanobis distance is an effective metric for measuring the similarity of two sample sets, and it is used in this paper for both the abnormal detection and the update of the sample pools.
In what follows [35], let x and y be two points of the observed dataset X. Their Mahalanobis distance can be calculated as

d_M(x, y) = [(x − y)^T Σ^{−1} (x − y)]^{1/2},  (1)

and the Mahalanobis distance between a point x and the set X can be calculated as

d_M(x, X) = [(x − μ)^T Σ^{−1} (x − μ)]^{1/2},  (2)

where μ and Σ are the mean and covariance matrix of the observed dataset X.
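As an illustration, the two distances in (1) and (2) can be sketched in Python with NumPy. The function names are our own, and the pseudo-inverse is used instead of a plain inverse as a guard against a singular covariance matrix; neither choice comes from the paper.

```python
import numpy as np

def mahalanobis_point_to_set(x, data):
    """Distance (2): from point x to the dataset `data` (rows are
    samples), using the set's own mean and covariance."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)      # columns are features
    diff = x - mu
    # pinv guards against a singular covariance matrix
    return float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))

def mahalanobis_point_to_point(x, y, data):
    """Distance (1): between two points of the observed dataset,
    under the covariance of that dataset."""
    cov = np.cov(data, rowvar=False)
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))
```

Note that the distance from the set's mean to the set itself is zero by construction, which is a convenient sanity check.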
2.1. Abnormal Detection
Suppose the observed interictal sample set is S. The Mahalanobis distances between S and the preictal sample pool and between S and the interictal sample pool, denoted by d_pre and d_int, respectively, are calculated according to (1) and (2). If

d_pre ≤ λ · d_int,  (3)

where λ is a design parameter (set to 1 in Section 5), the samples in S are taken as abnormal.
2.2. Criterion of the Sample Pools’ Update
The idea of the support vector is introduced [11]: the sample farthest from the support vectors is the one to be replaced. Suppose there are n1 samples in the preictal sample pool, n2 samples in the interictal sample pool, and m samples in the sample set to be observed (or, in the missing-alarm case, the set filled with the samples immediately preceding the seizure onset).

The update of the interictal sample pool: calculate the Mahalanobis distances of the n2 + m candidate interictal samples according to (1) and (2), sort the candidates by these distances, and retain only the first n2 samples as the new interictal sample pool.

The update of the preictal sample pool: likewise, calculate the Mahalanobis distances of the n1 + m candidate preictal samples, sort the candidates by these distances, and retain only the first n1 samples as the new preictal sample pool.
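The update rule can be sketched as below. Ranking each candidate by its Mahalanobis distance to the opposite-class pool, and keeping the closest ones (those nearest the class boundary), is our assumption inspired by the support-vector idea cited above; the paper's exact ranking target is not reproduced.

```python
import numpy as np

def point_to_set_mahalanobis(x, data):
    """Mahalanobis distance from point x to the sample set `data`."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))

def update_pool(pool, new_samples, other_pool, n):
    """Merge the old pool with the new samples, rank every candidate
    by its distance to the opposite-class pool, and keep the n
    closest candidates (assumed ranking criterion)."""
    candidates = np.vstack([pool, new_samples])
    dists = np.array([point_to_set_mahalanobis(x, other_pool)
                      for x in candidates])
    keep = np.argsort(dists)[:n]
    return candidates[keep]
```

The pool size n stays fixed across updates, so the memory footprint of the model does not grow over time.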
2.3. Two Conditions for Update
Suppose the current window is win and the previously processed window is win_pre, where win_pre precedes win by the prediction horizon. Both windows correspond to the observation window, which is explained in Step 5 of Section 4. The main idea is that the window win_pre is dealt with according to the state of the window win.
Condition A (see Figure 2(a)). This is a false alarm condition: there is no actual seizure onset in the window win while there is an alarm in the window win_pre. Firstly, the samples in the window win_pre are checked for abnormality. Then, the interictal sample pool is updated using the samples in the window win_pre if they are not abnormal.
Figure 2: (a) Condition A; (b) Condition B.
Condition B (see Figure 2(b)). This is a missing alarm condition: a seizure occurs in the window win without an alarm being raised. Firstly, the preictal sample pool is updated using the samples of a period of time immediately preceding the window win. Then, observation of the EEG recordings resumes after the end of the ictal and postictal periods.
3. Multichannel EEG Feature Extraction Based on HHT and ELM
Although the exact mechanisms underlying seizure generation are still uncertain, more and more studies show that epileptic seizures are usually characterized by an abnormal synchronized electric discharge of the neurons involved in the epileptic process [36], implying that a method based on phase analysis should be adopted. The phase synchronization method is popular in EEG analysis, using indices to represent the degree of phase synchronization [13–16]. However, the information provided by these indices is simple, and it is limited to double-channel analysis. Thus, it becomes increasingly important to explore a multivariate phase analysis method for EEG.
In the phase analysis method, there are two key points: phase calculation and phase interaction information extraction. Firstly, HT [13] and WT [15] are usually adopted to calculate the phases of signals, but both have drawbacks. On the one hand, HT computes the instantaneous amplitude, frequency, and phase of the signals using a mathematical framework in the macroperspective, and negative frequencies are likely to occur. On the other hand, a proper wavelet needs to be selected for WT, and its transformed result is not unique [18]. Secondly, the indices for quantifying the phase interaction are limited to double-channel analysis: they extract features from multiple bivariate channel pairs and do not represent the useful information available among all channels [20, 21].
According to the above considerations, a novel multichannel EEG feature extraction method based on HHT and ELM is utilized in this paper, named HHT-ELM for short. In general, HHT and the ELM network together take the place of the phase synchronization indices (such as the mean phase coherence (MPC) [13]). HHT is a nonlinear and nonstationary signal processing method, which decomposes and transforms adaptively according to the data itself [19]. ELM is utilized for imitating and identifying the phase interaction information among all channels at a low computational cost. Figure 3 shows the main structure of HHT-ELM.
As shown in Figure 3, the inputs of the whole structure are the EEG recordings of all channels, preprocessed by the filter. They are transformed into phase series by HHT. Then, the ELM network is used to process the phase series. Through nonlinear mapping and one-step prediction training, the output weights of ELM are obtained, which are taken as the EEG features we need. In the following sections, the two main parts of HHT-ELM are explained in detail.
3.1. HHT for Phase Calculation
This section presents the HHT method in a nutshell; all the details regarding the implementation of the HHT algorithm, together with Matlab code, are available in [37]. The empirical mode decomposition (EMD) algorithm is the basis of HHT and was proposed by Huang et al. in 1998 [19]. It is a method applicable to the time-frequency analysis of nonstationary and nonlinear time series. EMD smooths a time series by gradually decomposing the fluctuations or trends of a complex signal at different scales. EMD yields a group of nearly linear and steady-state data sequences with different characteristic time scales, and each sequence is taken as an intrinsic mode function (IMF) [38]. IMFs are obtained through the so-called "sifting process," and each must meet the following two criteria: the number of extrema and the number of zero crossings must differ by at most one, and the mean of its upper and lower envelopes must equal zero [39].
Given an original signal x(t), EMD can be summarized as follows, including the "sifting process" [40]. Step 1: identify all the extrema of x(t). Step 2: interpolate between the minima (resp. maxima), ending up with the lower envelope e_min(t) (resp. upper envelope e_max(t)). Step 3: compute the mean m(t) = (e_min(t) + e_max(t))/2. Step 4: extract the detail d(t) = x(t) − m(t), and iterate Steps 1 to 4 on d(t) until it meets the criteria of an IMF. Step 5: denote d(t) as the IMF c_k(t), and compute the residual r(t) = x(t) − c_k(t). Step 6: iterate Steps 1 to 5 on the residual function until it is a monotonic function.
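The sifting steps above can be sketched as follows. This is a deliberately simplified EMD: a fixed number of sifting iterations stands in for the usual IMF stopping criterion, and the helper names are ours, not the paper's.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelmax, argrelmin

def _mean_envelope(x, t):
    """Mean of the upper and lower cubic-spline envelopes (Steps 2-3);
    returns None when there are too few extrema to interpolate."""
    imax, imin = argrelmax(x)[0], argrelmin(x)[0]
    if len(imax) < 4 or len(imin) < 4:
        return None
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return 0.5 * (upper + lower)

def emd(x, max_imfs=3, sift_iters=10):
    """Simplified EMD returning (imfs, residual), with
    x == sum(imfs) + residual holding by construction."""
    t = np.arange(len(x), dtype=float)
    residual, imfs = x.astype(float).copy(), []
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(sift_iters):
            m = _mean_envelope(h, t)
            if m is None:                # residual has become a trend
                return imfs, residual
            h = h - m                    # Step 4: extract the detail
        imfs.append(h)
        residual = residual - h          # Step 5: update the residual
    return imfs, residual
```

The cap of three IMFs mirrors the design parameter chosen in Section 5; by construction the IMFs plus the residual reconstruct the input exactly.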
Thus, the original signal can be decomposed into

x(t) = Σ_{k=1}^{K} c_k(t) + r(t),  (4)

where K denotes the number of IMFs, c_k(t) is the k-th IMF, and r(t) is called the residual function, representing the trend of the signal x(t).
From the above steps, it can be seen that the underlying principle of EMD is to locally identify the most rapid oscillations in the signal, defined as waveforms interpolating interwoven local maxima and minima. To do so, the local maximum points (resp. the local minimum points) are interpolated with a cubic spline to determine the upper (resp. lower) envelope. The mean envelope is then subtracted from the original signal, and the same interpolation scheme is reiterated on the remainder. The "sifting process" terminates when the mean envelope is reasonably close to zero everywhere, and the resulting signal is designated the first-order IMF. The higher-order IMFs are iteratively extracted by applying the same procedure to the original signal after removing the previous IMFs [38]. The IMFs can thus be viewed as nonlinear narrowband components, ordered from high frequency to low frequency. For different signals, EMD adapts the decomposition to the data, and the decomposition result is unique.
Based on EMD, HHT can be explained as follows. HHT consists of EMD and HT [19, 41]. For a given signal x(t), according to (4), EMD decomposes x(t) into a group of IMFs c_k(t), k = 1, …, K, where K is the number of IMFs. Then, applying HT to each IMF component, the analytic signal

z_k(t) = c_k(t) + j·H[c_k(t)] = a_k(t) e^{jθ_k(t)}  (5)

is obtained, where

a_k(t) = [c_k(t)^2 + H[c_k(t)]^2]^{1/2},  θ_k(t) = arctan(H[c_k(t)] / c_k(t)).  (6)

The instantaneous angular frequency ω_k(t) = dθ_k(t)/dt and the amplitude a_k(t) of each IMF are thus obtained, yielding a time-frequency distribution of the signal x(t).
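The phase extraction in (5) and (6) can be sketched with SciPy's analytic-signal routine; the function name and the finite-difference estimate of the instantaneous frequency are our own choices.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase_and_freq(imf, fs):
    """Apply the Hilbert transform to one IMF and return its
    instantaneous phase (rad, unwrapped) and frequency (Hz)."""
    analytic = hilbert(imf)                  # c_k(t) + j*H[c_k(t)]
    phase = np.unwrap(np.angle(analytic))    # theta_k(t)
    # instantaneous frequency is the time derivative of the phase
    freq = np.diff(phase) * fs / (2.0 * np.pi)
    return phase, freq
```

For a pure sinusoid the recovered instantaneous frequency matches the oscillation frequency away from the record edges, where the Hilbert transform suffers boundary effects.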
Compared with other commonly used transform methods, HHT is more suitable for nonlinear, nonstationary signal processing. It decomposes and transforms adaptively according to the data itself and does not require a predefined decomposition basis.
3.2. ELM for Phase Interaction Quantization
After calculating the phases, the phase interaction information needs to be extracted. Currently, MPC is most often employed to assess the degree of phase synchronization [13–16, 20, 21] (its definition can be found in these references), but it contains limited information about phase synchronization and may leave out important information needed to present the complete characteristics. Therefore, a new method is proposed to deal with multiple channels and extract all the useful phase interaction information among them. A neural network is employed to replace the index functions: by means of one-step prediction of the phases, the signal system can be identified.
As mentioned before, the feature extraction method needs to be fast so that it can be used in an online device. However, general neural networks usually iterate to calculate the output weights and need the input weights and biases to be designed at the same time, which incurs a high computational cost [22]. Consequently, ELM is used, which has been demonstrated to have impressive performance in regression and classification tasks due to its high generalization ability and fast learning speed. Compared with traditional neural networks and SVM, ELM not only achieves high accuracy in much shorter training time, but also avoids problems such as overfitting, local minima, and improper learning rates. Moreover, ELM works without iteration and with minimal human intervention [22]. The principle of ELM, which works for single-hidden-layer feedforward networks (SLFNs), is explained next.
Let {(x_j, t_j), j = 1, …, N} be a set of N arbitrary instances, where x_j is the j-th input and t_j is the j-th target output. If there exists a standard SLFN with L hidden neurons able to approximate the instances with zero error, it can be mathematically modeled by

Σ_{i=1}^{L} β_i g(w_i · x_j + b_i) = t_j,  j = 1, …, N,  (7)

where w_i denotes the weight vector connecting the i-th hidden neuron and the input neurons, β_i denotes the weight vector connecting the i-th hidden neuron and the output neurons, b_i represents the bias of the i-th hidden neuron, and g(·) is the activation function. Equation (7) can be written compactly as

Hβ = T,  (8)

where H is the hidden layer output matrix of the SLFN. The input weights and the hidden layer biases are generated randomly, so training a SLFN reduces to finding a least-squares solution of the linear system Hβ = T. The best weight matrix is β̂ = H†T, where H† is the Moore-Penrose generalized inverse of H. ELM utilizes this Moore-Penrose inverse approach and can therefore learn at extremely fast speed. Unlike some conventional methods, for example the backpropagation (BP) algorithm, ELM avoids the problems of tuning control parameters (learning epochs, learning rate, and so on) and of getting stuck in local minima.
The procedure of ELM is as follows. Step 1: choose arbitrary values for the input weights w_i and the biases b_i of the hidden neurons. Step 2: calculate the hidden layer output matrix H according to (8). Step 3: obtain the optimal output weights β̂ = H†T.
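The three steps can be sketched in a few lines of NumPy; the sigmoid activation matches the choice in Section 5, while the weight ranges and function names are our assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train a single-hidden-layer network the ELM way: random input
    weights and biases, sigmoid hidden layer, and output weights by
    the Moore-Penrose pseudo-inverse (no iteration)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))  # input weights
    b = rng.uniform(-1, 1, size=n_hidden)                # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))               # hidden output matrix
    beta = np.linalg.pinv(H) @ T                         # beta = H^dagger T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained SLFN."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

With the random hidden layer fixed, the output weights are obtained in a single least-squares solve, which is what makes ELM attractive for frequent model updates on a portable device.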
By means of ELM, the phase interaction can be quantified. Because the research in this paper is based on a moving-window analysis, the feature extraction method acts on each time window. In the solid-line box of Figure 3, the input layer of ELM receives the phases at the current time step, and the output layer predicts the phases at the next time step. In each time window, the one-step prediction training procedure of ELM is used to fit the actual phase series. The output weights of ELM are then obtained and taken as the extracted EEG features of the corresponding time window; these features contain the information of the phase interaction among all channels.
4. Basic Epileptic Seizure Prediction Model
This section realizes a system able to predict the arrival of an epileptic seizure. Figure 4 shows its basic flow chart, and its steps are explained as follows.
Step 1 (preprocessing). The EEG signal is affected by a superimposed sinusoidal disturbance at the frequency of the AC power supply. In order to eliminate this disturbance, a 50 Hz band-suppression filter is applied in this step. This choice aims at preserving as much of the available information in the EEG recordings as possible.
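The paper does not specify the filter design; one common realization of a 50 Hz band-suppression stage is a narrow IIR notch, sketched here with SciPy. The sampling rate of 256 Hz matches the dataset description, while the quality factor Q = 30 and the use of zero-phase filtering are our assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_powerline(eeg, fs=256.0, f0=50.0, q=30.0):
    """Suppress the 50 Hz power-line component with a narrow IIR
    notch; filtfilt applies it forward and backward so the filtering
    is zero-phase and the EEG phase information is preserved."""
    b, a = iirnotch(f0, q, fs)
    return filtfilt(b, a, eeg)
```

Zero-phase filtering matters here because the downstream features are phase-based: a causal notch would distort the very phases that HHT-ELM extracts.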
Step 2 (dynamic update framework). This step constructs the preictal sample pool and the interictal sample pool for the next step of feature extraction. The data are continually updated to achieve the optimal prediction model. The detailed processing is described in Section 2.
Step 3 (feature extraction). The EEG signals that have been acquired by the dynamic update framework are passed through the feature extraction step, producing a feature vector to be used for classification. Feature extraction is done using the data over time windows. In this paper, the feature extraction method HHT-ELM is adopted.
Step 4 (classification). Following the feature extraction, ELM is used to learn the mapping from the training set features to the patient's state: preictal or interictal. In this way, the seizure prediction problem is converted into a binary classification one. The output of this step is a binary variable, set to 1 whenever the segment of EEG is in a preictal state and to 0 in an interictal state.
The time taken to train the classification models is an important factor in developing online portable devices for epileptic seizures, because the devices need to update their training during use. However, classifiers with high accuracy often cannot meet the demand for speed. In our study, we use ELM to obtain a balance between high classification accuracy and short training time [22].
Step 5 (calculation of the “preictal density”). The final stage of the system calculates the “preictal density.” The classification results reveal the trend of the patient's brain condition; however, EEG is a nonstationary signal and is easily disturbed by various factors, so the classification results can be noisy. In fact, when observing the output obtained by ELM, a chattering behavior can often be found. In order to avoid this phenomenon, which negatively affects the seizure prediction capability, the “preictal density” Den over an observation window containing N classifier outputs o_i is calculated as

Den = (1/N) Σ_{i=1}^{N} o_i,  (9)

and a density threshold is chosen. As Figure 4 shows, when Den exceeds the threshold, an alarm is produced; otherwise, no alarm is raised.
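The density-and-threshold step reduces to a few lines; treating Den as the fraction of windows classified preictal, as in (9), is our reading of the formula, and the default threshold of 0.7 is the value chosen in Section 5.

```python
import numpy as np

def preictal_density(outputs):
    """Fraction of classifier outputs equal to 1 (preictal) inside
    the observation window -- our reading of the Den formula (9)."""
    return float(np.mean(np.asarray(outputs, dtype=float)))

def raise_alarm(outputs, threshold=0.7):
    """Alarm when the preictal density exceeds the density
    threshold (0.7 is the value used in the paper's experiments)."""
    return preictal_density(outputs) > threshold
```

Averaging over the observation window suppresses isolated misclassifications, so a single spurious preictal label no longer triggers a false alarm.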
5. Experimental Results
5.1. EEG Database
To evaluate the proposed method, simulations on the Freiburg EEG database (http://epilepsy.uni-freiburg.de/) are carried out. The database contains invasive EEG recordings of 21 patients suffering from medically intractable focal epilepsy [42].
The EEG data were recorded during invasive presurgical epilepsy monitoring at the Epilepsy Center of the University Hospital of Freiburg, Germany. In order to obtain a high signaltonoise ratio, fewer artifacts, and to record directly from focal areas, intracranial grid, strip, and depth electrodes were utilized.
The EEG data were obtained using a Neurofile NT digital video EEG system with 128 channels, a 256 Hz sampling rate, and a 16-bit analogue-to-digital converter. For each patient, 6 contacts of the implanted grid, strip, and depth electrodes were selected by visual inspection of the raw data by a certified epileptologist. Three of them were chosen from the seizure onset zone, involved early in ictal activity; the remaining three were selected as not involved, or involved latest, during seizure spread.
For each patient, there are datasets called ictal and interictal. The former contains files with epileptic seizures and at least 50 min of preictal data; the latter contains approximately 24 h of EEG recordings without seizure activity. At least 24 h of continuous interictal recording is available for 13 patients. For the remaining patients, interictal invasive EEG segments of less than 24 h were joined together to end up with at least 24 h per patient. The ictal periods were determined by experienced epileptologists through visual inspection of the intracranial recordings, based on the identification of typical seizure patterns preceding clinically manifest seizures.
Evaluating the dynamic update method requires enough testing samples to reflect the behavior of the dynamically updated model. Considering this machine learning requirement, only the 9 patients in the database whose seizure numbers are all 5 are used in our study. The seizure occurrence period differs for each individual patient; most have a short seizure occurrence period of a few minutes, and the maximum over all patients is 28.5 min. The details of the 9 patients' characteristics are listed in the Appendix.
5.2. Simulations
All the simulations were run on a 1.80 GHz dual-core CPU with 2.00 GB of memory. In order to show the effectiveness of the proposed method, experiments both with and without dynamic update of the model were carried out. The comparison is presented below.
The initial preictal sample pool and interictal sample pool were generated for each patient separately. For the preictal sample pool, the first two seizures were used: with 10 s windows overlapped by 50%, the 37.6 minutes of data immediately preceding each seizure produce 450 preictal samples. For the interictal sample pool, the interictal training samples were also generated with 10 s windows, randomly chosen from the 24 h of interictal recordings for a total of 150 minutes, that is, 900 interictal samples.
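The windowing scheme can be sketched as below; the function name is ours, but the parameters (10 s windows, 50% overlap, 256 Hz) are the ones stated in the text, under which 37.6 minutes of preictal data indeed yield 450 windows.

```python
import numpy as np

def sliding_windows(signal, fs=256, win_s=10.0, overlap=0.5):
    """Cut a single-channel recording into fixed-length windows with
    fractional overlap; each window later yields one feature sample."""
    win = int(win_s * fs)                 # samples per window (2560)
    step = int(win * (1.0 - overlap))     # hop size (1280 at 50% overlap)
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])
```

With a 5 s hop, the window count is floor((T − 10 s) / 5 s) + 1, which reproduces the 450 preictal samples quoted for a 37.6 min segment.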
The implementation of the proposed method also requires the choice of some design parameters. The time window is set at 10 s based on experience (since the EEG data have a 256 Hz sampling rate, there are correspondingly 2560 sample points per time window), and, in order to avoid edge effects, consecutive time windows are overlapped by 50%. For HHT-ELM, the maximum number of IMFs is set at 3, so that the number of IMFs is limited, which is convenient for the feature extraction and classification procedures. In addition, the number of hidden neurons of the feature-extraction ELM is empirically set at 10, with the sigmoid activation function. For the ELM classifier [43], the number of hidden neurons is set at 1000, also with the sigmoid activation function. The observation window is 1.5 min, and the density threshold is 0.7. For the dynamic update framework, the prediction horizon is set at 110 min, and the parameter λ is set at 1.
5.3. Evaluations and Results
In order to present the results clearly, the following evaluation measures are used: the sensitivity, the false-positive rate (fpr), the advance prediction time, and the performance index. The sensitivity is the percentage of seizures that have been predicted accurately. The false-positive rate fpr is defined as the number of false alarms per hour of interictal EEG. The advance prediction time is the difference between the seizure onset time marked in the database and the alarm time determined by the prediction system. In practice, sensitivity cannot be considered in isolation, since a poor false-positive rate causes trouble in clinical applications; prediction sensitivity and false-positive rate are both key evaluation indicators of seizure prediction, and the prediction system is satisfactory only when the two reach the best balance point. Therefore, the prediction system is evaluated via both indicators, and a performance index is employed [14, 16] that combines the mean sensitivity with the specificity rate, defined as 1 minus the mean false-positive rate for the entire group of patients (when fpr exceeds 1 h^{−1}, the specificity is set to zero). The larger the performance index, the better the performance of the system.
Based on the above methods, 9 patients, each with 5 seizure recordings, are chosen as the simulation objects. Tables 1 and 2 give the results.


In Table 1, “0” in the “Advance time” column indicates that no alarm was raised. It can be seen from Table 1 that the method with dynamic model update is more effective. In terms of sensitivity, the two methods give the same result for every patient except patient 17, for whom the sensitivity is 66.7% with dynamic model update but only 33.3% without it. Without model update, only the first seizure is detected and the false-positive rate is high at 0.38 h^{−1}; with dynamic model update, the first and third seizures are detected and the false-positive rate is much lower at 0.14 h^{−1}. We can therefore conclude that the sample pools become more diversified and the prediction model tracks the patient's current physical condition more closely. In terms of false-positive rate, the two methods perform almost identically for patients 4, 5, 9, 10, 16, and 20, but differ significantly for patients 17, 18, and 21, for whom the false-positive rates with dynamic model update are much lower than those without. The model changes with the patient's physical condition over time, and this update keeps it as close to reality as possible. Table 2 lists the mean results of Table 1, and the performance index P shows that the method with dynamic model update outperforms the method without update.
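The model update itself can be kept inexpensive because ELM training is a one-shot least-squares fit: whenever the sample pools change, only the output weights need to be recomputed. The sketch below shows this retraining step under that assumption; it is a minimal illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=1000):
    """One-shot ELM training: a random sigmoid hidden layer plus a
    least-squares (pseudo-inverse) output layer, cheap enough to rerun
    from scratch every time the sample pools are updated."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer activations
    beta = np.linalg.pinv(H) @ y            # output weights
    return W, b, beta

def elm_predict(model, X):
    """Score feature windows with a trained ELM (positive = preictal)."""
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```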
For the shared Freiburg data, many attempts have been made to predict epileptic seizures, with varying degrees of success. Some studies used nonlinear measures, including the dynamic similarity index with MPC [44, 45], the wavelet-based nonlinear similarity index [46], and the lag synchronization index with MPC [47]. Using a single bivariate feature, [45] achieved average seizure prediction sensitivities of 35.2% and 43.2% with the “OR” and “AND” combination systems, respectively, for an SOP (seizure occurrence period) of 30 min under a maximum false prediction rate of 0.15 h^{−1}. An averaged sensitivity of 60% was obtained at an fpr of 0.15 h^{−1} by replacing the dynamic similarity index with the lag synchronization index in [47]. Compared with [45, 47], the dynamic update method achieves a higher mean sensitivity (85.2%) and a lower mean fpr (0.04 h^{−1}) through its multichannel EEG feature extraction method.
In more recent research, a set of quantitative univariate and bivariate nonlinear features [48] was used in seizure prediction to enhance the sensitivity. For patients 5, 9, 17, 18, 20, and 21, a relatively high sensitivity of 88.83% with an average fpr of 0.13 h^{−1} was obtained by the system of [48] under an SOP of 50 min. Machine learning was then introduced in further efforts to improve the sensitivity and fpr: the methods of [49, 50] achieved sensitivities of 88.89% and 95.56%, respectively, for patients 4, 5, 9, 10, 16, 17, 18, 20, and 21, with average fpr values of 0.096 h^{−1} and 0.22 h^{−1}, respectively. Compared with the above methods, the dynamic update method has a lower sensitivity but still attains the best mean fpr. The aim of the dynamic update framework is to reduce the false prediction rate without needing to set a maximum false alarm condition. Calculating the performance index P for the results reported in [48–50] yields values of 0.8784, 0.8713, and 0.8965, respectively; the dynamic update method therefore outperforms these methods in terms of the performance index P.
6. Conclusions
A novel dynamic update framework for an epileptic seizure prediction system has been proposed, in which the prediction model can be updated and kept fresh. The framework utilizes the Mahalanobis distance as its distance metric. Two sample pools, filled with preictal samples and interictal samples, respectively, are constructed. Through the judgment of missed alarms and false alarms, the two sample pools are updated, and so is the prediction model.
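The pool update step hinges on measuring how far a new sample lies from each pool. The following sketch computes the Mahalanobis distance from a feature vector to a pool (rows are feature vectors) using the pool's mean and a regularized covariance; the routing helper is illustrative only and omits the missed/false-alarm checks that the framework applies before inserting a sample.

```python
import numpy as np

def mahalanobis_to_pool(x, pool, eps=1e-6):
    """Mahalanobis distance from feature vector x to a sample pool,
    using the pool's mean and (regularized) covariance matrix."""
    mu = pool.mean(axis=0)
    cov = np.cov(pool, rowvar=False) + eps * np.eye(pool.shape[1])
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

def assign_pool(x, preictal_pool, interictal_pool):
    """Route a new sample toward the nearer pool (the framework
    additionally gates insertion on alarm-judgment outcomes)."""
    d_pre = mahalanobis_to_pool(x, preictal_pool)
    d_int = mahalanobis_to_pool(x, interictal_pool)
    return "preictal" if d_pre < d_int else "interictal"
```

Unlike the Euclidean distance, this metric accounts for the covariance structure of each pool, so features with larger spread contribute proportionally less to the distance.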
In order to evaluate the performance of the proposed system, careful comparison experiments on the Freiburg database were carried out. Compared with the system without model update, our method is more effective: while maintaining satisfactory sensitivity, the false-positive rate can be as low as 0.04 h^{−1}, with a performance index of 0.91. The results clearly indicate that the proposed system stays up to date at all times: along with the update of the sample pools, the prediction model is updated to be more effective than its earlier versions. In addition, the multichannel feature extraction method based on HHT and ELM extracts effective features to distinguish the preictal and interictal states. The whole system is significantly helpful for the development of online portable devices.
Appendix
Table 3 lists the characteristics of the 9 patients in the Freiburg database [42], including the descriptions of their sex (“f” for female and “m” for male), age, seizure type (“SP” for simple partial, “CP” for complex partial, and “GTC” for generalized tonic-clonic), seizure location (“H” for hippocampal and “NC” for neocortical), seizure origin, seizure number, electrodes (“d” for depth, “g” for grid, and “s” for strip), and interictal length.
 
Seizure types and location: simple partial (SP), complex partial (CP), generalized tonic-clonic (GTC), hippocampal (H), and neocortical (NC). Electrodes: depth (d), grid (g), and strip (s). Five seizures and at least 24 h of interictal EEG data for every patient were analyzed.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by Project 61074096 of the National Natural Science Foundation of China, Project 61374154 of the National Natural Science Foundation of China, and the Fundamental Research Funds for the Central Universities (DUT13JB08).
References
 R. S. Fisher, W. van Emde Boas, W. Blume et al., “Epileptic seizures and epilepsy: definitions proposed by the International League Against Epilepsy (ILAE) and the International Bureau for Epilepsy (IBE),” Epilepsia, vol. 46, no. 4, pp. 470–472, 2005.
 J. S. Duncan, J. W. Sander, S. M. Sisodiya, and M. C. Walker, “Adult epilepsy,” The Lancet, vol. 367, no. 9516, pp. 1087–1100, 2006.
 F. Mormann, R. G. Andrzejak, C. E. Elger, and K. Lehnertz, “Seizure prediction: the long and winding road,” Brain, vol. 130, no. 2, pp. 314–333, 2007.
 F. H. Lopes da Silva, “The impact of EEG/MEG signal processing and modeling in the diagnostic and management of epilepsy,” IEEE Reviews in Biomedical Engineering, vol. 1, pp. 143–156, 2008.
 W. Xingyuan and L. Chao, “Researches on chaos phenomenon of EEG dynamics model,” Applied Mathematics and Computation, vol. 183, no. 1, pp. 30–41, 2006.
 B. He, L. Yang, C. Wilke, and H. Yuan, “Electrophysiological imaging of brain activity and connectivity—challenges and opportunities,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 7, pp. 1918–1931, 2011.
 K. C. Chua, V. Chandran, U. Rajendra Acharya, and C. M. Lim, “Analysis of epileptic EEG signals using higher order spectra,” Journal of Medical Engineering and Technology, vol. 33, no. 1, pp. 42–50, 2009.
 S. Sabesan, N. Chakravarthy, K. Tsakalis, P. Pardalos, and L. Iasemidis, “Measuring resetting of brain dynamics at epileptic seizures: application of global optimization and spatial synchronization techniques,” Journal of Combinatorial Optimization, vol. 17, no. 1, pp. 74–97, 2009.
 A. F. Rabbi, A. Aarabi, and R. Fazel-Rezai, “Fuzzy rule-based seizure prediction based on correlation dimension changes in intracranial EEG,” in Proceedings of the 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '10), pp. 3301–3304, Buenos Aires, Argentina, September 2010.
 X. Li and G. Ouyang, “Nonlinear similarity analysis for epileptic seizures prediction,” Nonlinear Analysis, Theory, Methods and Applications, vol. 64, no. 8, pp. 1666–1678, 2006.
 L. Chisci, A. Mavino, G. Perferi et al., “Real-time epileptic seizure prediction using AR models and support vector machines,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 5, pp. 1124–1132, 2010.
 A. Aarabi, R. Fazel-Rezai, and Y. Aghakhani, “EEG seizure prediction: measures and challenges,” in Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '09), pp. 1864–1867, Minneapolis, Minn, USA, 2009.
 F. Mormann, K. Lehnertz, P. David, and C. E. Elger, “Mean phase coherence as a measure for phase synchronization and its application to the EEG of epilepsy patients,” Physica D: Nonlinear Phenomena, vol. 144, no. 3, pp. 358–369, 2000.
 M. Han, M.-H. Wang, X.-J. Hong, and J. Han, “Epileptic seizure prediction based on probabilistic discriminative extreme learning machine,” Chinese Journal of Biomedical Engineering, vol. 31, no. 2, pp. 175–183, 2012.
 L. Wang, C. Wang, F. Fu et al., “Temporal lobe seizure prediction based on a complex Gaussian wavelet,” Clinical Neurophysiology, vol. 122, no. 4, pp. 656–663, 2011.
 F. Mormann, T. Kreuz, R. G. Andrzejak, P. David, K. Lehnertz, and C. E. Elger, “Epileptic seizures are preceded by a decrease in synchronization,” Epilepsy Research, vol. 53, no. 3, pp. 173–185, 2003.
 R. G. Andrzejak, D. Chicharro, K. Lehnertz, and F. Mormann, “Using bivariate signal analysis to characterize the epileptic focus: the benefit of surrogates,” Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 83, no. 4, Article ID 046203, 2011.
 R. Pabel, R. Koch, G. Jager, and A. Kunoth, “Fast empirical mode decompositions of multivariate data based on adaptive spline-wavelets and a generalization of the Hilbert-Huang-Transformation (HHT) to arbitrary space dimensions,” Advances in Adaptive Data Analysis, vol. 2, no. 3, pp. 337–358, 2010.
 N. E. Huang, Z. Shen, S. R. Long et al., “The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis,” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 454, no. 1971, pp. 903–995, 1998.
 J. Sun, X. Hong, and S. Tong, “Phase synchronization analysis of EEG signals: an evaluation based on surrogate tests,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 8, pp. 2254–2263, 2012.
 E. Pereda, R. Q. Quiroga, and J. Bhattacharya, “Nonlinear multivariate analysis of neurophysiological signals,” Progress in Neurobiology, vol. 77, no. 1-2, pp. 1–37, 2005.
 G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, “Extreme learning machine: theory and applications,” Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
 B. Liu, L. Yan, L. Li, and W. Wang, “Comparing study of nonlinear model for epileptic preictal prediction,” in Proceedings of the 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE '10), pp. 1–4, Chengdu, China, June 2010.
 N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, “A fast and accurate online sequential learning algorithm for feedforward networks,” IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
 J. Zhao, Z. Wang, and D. S. Park, “Online sequential extreme learning machine with forgetting mechanism,” Neurocomputing, vol. 87, pp. 79–89, 2012.
 R. Cordeiro de Amorim and B. Mirkin, “Minkowski metric, feature weighting and anomalous cluster initializing in K-Means clustering,” Pattern Recognition, vol. 45, no. 3, pp. 1061–1075, 2012.
 F. Gu, D. Liu, and X. Wang, “Semi-supervised weighted distance metric learning for kNN classification,” in Proceedings of the International Conference on Computer, Mechatronics, Control and Electronic Engineering (CMCE '10), pp. 406–409, Changchun, China, August 2010.
 S. Xiang, F. Nie, and C. Zhang, “Learning a Mahalanobis distance metric for data clustering and classification,” Pattern Recognition, vol. 41, no. 12, pp. 3600–3612, 2008.
 E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell, “Distance metric learning with application to clustering with side-information,” in Advances in Neural Information Processing Systems, pp. 505–512, Cambridge, Mass, USA, 2003.
 M. Schultz and T. Joachims, “Learning a distance metric from relative comparisons,” in Advances in Neural Information Processing Systems, Cambridge, Mass, USA, 2004.
 J. Zhang and R. Yan, “On the value of pairwise constraints in classification and consistency,” in Proceedings of the 24th International Conference on Machine Learning (ICML '07), pp. 1111–1118, Corvallis, Ore, USA, June 2007.
 S. Sonnum, S. Thaithieng, S. Ano, K. Kusolchu, and N. Kerdprasop, “Approximate web database search based on Euclidean distance measurement,” in Proceedings of the International Multi-Conference of Engineers and Computer Scientists (IMECS '11), vol. 1, pp. 702–706, Kowloon, Hong Kong, March 2011.
 M. Hori, M. Ueda, and A. Iwata, “Stochastic computing chip for measurement of Manhattan distance,” Japanese Journal of Applied Physics 1, vol. 45, no. 4B, pp. 3301–3306, 2006.
 T. Kløve, T.-T. Lin, S.-C. Tsai, and W.-G. Tzeng, “Permutation arrays under the Chebyshev distance,” IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2611–2617, 2010.
 Y. Ren, X. Liu, and W. Liu, “DBCAMM: a novel density based clustering algorithm via using the Mahalanobis metric,” Applied Soft Computing Journal, vol. 12, no. 5, pp. 1542–1554, 2012.
 K. Lehnertz, S. Bialonski, M.-T. Horstmann et al., “Synchronization phenomena in human epileptic brain networks,” Journal of Neuroscience Methods, vol. 183, no. 1, pp. 42–48, 2009.
 G. Rilling, “Empirical Mode Decomposition,” http://perso.ens-lyon.fr/patrick.flandrin/emd.html.
 D.-M. Bai, T.-S. Qiu, and H.-P. Bao, “A new epileptic prediction method based on EMD and sample entropy,” Chinese Journal of Biomedical Engineering, vol. 25, no. 5, pp. 527–531, 2006.
 O. Niang, É. Delechelle, and J. Lemoine, “A spectral approach for sifting process in empirical mode decomposition,” IEEE Transactions on Signal Processing, vol. 58, no. 11, pp. 5612–5623, 2010.
 G. Rilling, P. Flandrin, and P. Goncalves, “On empirical mode decomposition and its algorithms,” in Proceedings of the IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03), 2003.
 D. Chen, D. Li, M. Xiong, H. Bao, and X. Li, “GPGPU-aided ensemble empirical-mode decomposition for EEG analysis during anesthesia,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 6, pp. 1417–1427, 2010.
 T. Maiwald, M. Winterhalder, R. Aschenbrenner-Scheibe, H. U. Voss, A. Schulze-Bonhage, and J. Timmer, “Comparison of three nonlinear seizure prediction methods by means of the seizure prediction characteristic,” Physica D: Nonlinear Phenomena, vol. 194, no. 3-4, pp. 357–368, 2004.
 G.-B. Huang, “MATLAB Codes of ELM Algorithm,” http://www3.ntu.edu.sg/home/egbhuang/elm_codes.html.
 M. Winterhalder, T. Maiwald, H. U. Voss, R. Aschenbrenner-Scheibe, J. Timmer, and A. Schulze-Bonhage, “The seizure prediction characteristics: a general framework to assess and compare seizure prediction methods,” Epilepsy and Behavior, vol. 4, no. 3, pp. 318–325, 2003.
 H. Feldwisch-Drentrup, B. Schelter, M. Jachan, J. Nawrath, J. Timmer, and A. Schulze-Bonhage, “Joining the benefits: combining epileptic seizure prediction methods,” Epilepsia, vol. 51, no. 8, pp. 1598–1606, 2010.
 G. Ouyang, X. Li, Y. Li, and X. Guan, “Application of wavelet-based similarity analysis to epileptic seizures prediction,” Computers in Biology and Medicine, vol. 37, no. 4, pp. 430–437, 2007.
 M. Winterhalder, B. Schelter, T. Maiwald et al., “Spatio-temporal patient-individual assessment of synchronization changes for epileptic seizure prediction,” Clinical Neurophysiology, vol. 117, no. 11, pp. 2399–2413, 2006.
 A. Aarabi and B. He, “A rule-based seizure prediction method for focal neocortical epilepsy,” Clinical Neurophysiology, vol. 123, no. 6, pp. 1111–1122, 2012.
 J. R. Williamson, D. W. Bliss, and D. W. Browne, “Epileptic seizure prediction using the spatiotemporal correlation structure of intracranial EEG,” in Proceedings of the 36th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '11), pp. 665–668, Prague, Czech Republic, May 2011.
 N. Wang and M. R. Lyu, “Exploration of instantaneous amplitude and frequency features for epileptic seizure prediction,” in Proceedings of the 12th IEEE International Conference on Bioinformatics and Bioengineering (BIBE '12), pp. 292–297, Larnaca, Cyprus, November 2012.
Copyright
Copyright © 2014 Min Han et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.