Research Article | Open Access

Computational Intelligence and Neuroscience, vol. 2016, Article ID 6212684, 13 pages, 2016. https://doi.org/10.1155/2016/6212684

Ensemble Deep Learning for Biomedical Time Series Classification

Lin-peng Jin and Jun Dong

Academic Editor: Leonardo Franco
Received: 12 Jan 2016
Revised: 10 Apr 2016
Accepted: 04 May 2016
Published: 20 Sep 2016

Abstract

Ensemble learning has been proven, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on it. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages over some well-known ensemble methods, such as Bagging and AdaBoost.

1. Introduction

In the field of pattern recognition, the task of classification is, in essence, to construct a decision-making function. For linear classification problems, the best function can be obtained via simple calculation, but for nonlinear classification problems, such as image recognition and biomedical time series classification, it is not easy to determine. The processing flow of traditional methods is as follows: feature vectors are first extracted from raw data (feature selection is conducted when necessary), and then a suitable model based on them is employed for classification. If we let $g$ and $f$ denote the functions corresponding to these two parts, the constructed decision-making function can be written as $f(g(x))$. However, with many of the existing feature extraction technologies and classification algorithms, we cannot construct highly complicated nonlinear functions. For example, principal component analysis and independent component analysis are both linear dimensionality reduction algorithms, the wavelet transform is a simple integral transform, and the Gaussian mixture model is composed of a finite number of Gaussian functions. Therefore, many traditional methods do not perform very well on hard artificial intelligence tasks.

Deep learning [1, 2], which has achieved great success in complicated fields of pattern recognition in recent years, is a deep neural network (DNN) with more than 3 layers; it inherently fuses "feature extraction" and "classification" into a single learning body and directly constructs a decision-making function. Obviously, its ability to construct nonlinear functions grows stronger with increasing numbers of layers and neurons, but the number of network weights that need to be adjusted increases significantly as well. On the other hand, with ensemble learning, which combines multiple classifiers [3], we can also obtain complicated decision-making functions. As shown in Figure 1, a nonlinear classification model can be obtained from seven linear classifiers (the filled and unfilled regions denote two classes, resp.). In fact, the constructive mechanism of ensemble learning is the same as that of the support vector machine, which constructs nonlinear functions by combining multiple kernel functions.

The pioneering work on ensemble learning was done in 1990 [4, 5], which proved in theory that multiple weak learning algorithms could be converted into a strong learning algorithm; since then, many scholars have carried out widespread and thorough research. In general, ensemble-learning algorithms consist of two parts: how to generate differentiated individual classifiers and how to fuse them, namely, generation strategies and fusion strategies. Next, we provide a brief review of both.

There are two kinds of generation strategies, namely, the heterogeneous type and the homogeneous type. In the former, individual classifiers are generated using different learning algorithms; we will not elaborate on this type since it is relatively simple. The latter uses the same learning algorithm, so different settings (such as learning parameters and training samples) are necessary. Many methods have been developed for this subject, and they can be divided into four categories.

The first way is to manipulate training samples. For instance, Bagging [7] creates multiple data sets by sampling with replacement from the original training samples, each of which is used to train an individual classifier. Boosting [8–10] is another example, in which the learning algorithm uses a different weighting or distribution over the training samples at each iteration, according to the errors of the individual classifiers. There are also other approaches, such as Cross-Validated Committees [11], Wagging [12], and Arcing [13]. The second way is to manipulate input features. Random Subspace [14] generates different randomly selected subsets of input features and trains an individual classifier on each of them, while Input Decimation [15] trains the individual classifier only on the most correlated subset of input features for each class. There are also other methods, such as Rotation Forest [16] and Similarity-Based Feature Space [17]. The third way is to manipulate class labels. For instance, Output Coding [18–20] decomposes a multiclassification task into a series of binary-classification subtasks and trains individual classifiers for them. Class-Switching [21, 22] is another example, which generates an individual classifier based on randomly changing the class label of a fraction of the training samples. The last way is to inject randomness into the learning algorithm. For example, using the backpropagation (BP) algorithm, the resulting classifiers can be quite different if neural networks with different initial weights are applied to the same training samples [23]. There are also other approaches, such as Randomized First-Order Inductive Learner [24] and Random Forest [25].

In a word, the core of generation strategies is to make individual classifiers different (independent errors and diversity); only when this condition is satisfied can the classification performance be improved, that is, a good decision-making function be constructed. As for fusion strategies, Major Voting [26] is one of the most popular methods, in which each individual classifier votes for a specific class, and the predicted class is the one that collects the largest number of votes. Simple Average and Weighted Average [27] are also commonly used. Besides these, one can also employ other methods for combining individual classifiers, such as Dempster-Shafer Combination Rules [28], the Stacking Method [29], and Second-Level Trainable Combiners [30].

Since both deep learning and ensemble learning have advantages in constructing complicated nonlinear functions, combining the two can better handle hard artificial intelligence tasks. Deng and Platt [31] adopted linear and log-linear stacking methods to fuse convolutional, recurrent, and fully connected DNNs. Xie et al. [32] proposed three DNN-based ensemble methods: fusing a series of classifiers whose inputs are the representations of intermediate layers, and using Major Voting or the Stacking Method to fuse a series of classifiers obtained within a relatively stable range of epochs. Zhang et al. [33] presented several methods for integrating Restricted Boltzmann Machines with Bagging to construct multiple individual classifiers. Qiu et al. [34] employed a model of Support Vector Regression (Stacking Method) to aggregate the outputs from various deep belief networks. Zhang et al. [35] trained an ensemble of DNNs whose weights were initialized differently and penalized the differences between the output of each DNN and their average output. Huang et al. [36] presented an ensemble criterion for DNNs based on the reconstruction error. In conclusion, we can use many existing strategies to construct a good ensemble of DNNs, such as setting different architectures, injecting noise, and employing the framework of AdaBoost [37–41].

In this paper, we propose a novel DNN-based ensemble method for biomedical time series classification. Based on the local and distorted view transformations, different types of digital filters and different validation mechanisms are used to generate individual classifiers, and "subview prediction" and "Simple Average" are utilized to fuse them. In what follows, Section 2 presents the proposed method in detail. In Sections 3 and 4, the experimental setup is described and the experimental results are reported. Section 5 concludes the paper.

2. Methodologies

Figure 2 depicts the full process of the proposed method. First, we utilize different filtering methods to preprocess the biomedical time series and selectively conduct a downsampling operation; we then employ the explicit method and the implicit method, respectively, to train two DNNs. In the testing phase, we first apply "subview prediction" to the two DNNs independently and then use "Simple Average" to combine their outputs.

2.1. Filtering View

In practical applications, collected biomedical time series are often contaminated by interfering noise. Although we can perform denoising, some useful information may be lost in the process. DNNs have the ability to capture useful information while ignoring interfering noise after learning from a sufficient number of training samples, and an effective strategy for homogeneous ensemble learning is to make the input data different. Therefore, we simply extract different view data from the raw biomedical time series.
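
To make this concrete, the sketch below (our code, not the authors') shows one way to derive two filtering views from a raw multichannel recording; it assumes SciPy, the filter settings named in Section 3.3 (a low-pass view and a 0.5–40 Hz band-pass view), and the 500 Hz to 200 Hz downsampling of Section 3.1, while the filter order is an assumption:

import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

FS = 500  # original sampling frequency (Hz)

def lowpass_view(x, cutoff=40.0, order=4):
    # Low-pass filtering view, applied along the time axis (x: channels x samples).
    b, a = butter(order, cutoff / (FS / 2), btype="low")
    return filtfilt(b, a, x, axis=1)

def bandpass_view(x, low=0.5, high=40.0, order=4):
    # Band-pass filtering view with a 0.5-40 Hz passband [55].
    b, a = butter(order, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, x, axis=1)

def downsample(x):
    # 500 Hz -> 200 Hz is a rational resampling factor of 2/5.
    return resample_poly(x, up=2, down=5, axis=1)

ecg = np.random.randn(8, 5000)            # a dummy 10 s, 8-lead recording
view_a = downsample(lowpass_view(ecg))    # filtering view A
view_b = downsample(bandpass_view(ecg))   # filtering view B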

2.2. Deep Neural Network

At present, DNNs mainly include convolutional neural networks (CNNs) [42], deep belief networks [43], and stacked denoising autoencoders [44]. Among them, CNNs utilize "weight sharing" and "pooling" to keep the number of weights from increasing dramatically when the numbers of input neurons and layers are very large, which allows them to be widely used in various fields of pattern recognition. However, as a model developed for images, CNNs perform convolution operations in both horizontal and vertical directions. This is reasonable for image data, which are correlated in both directions. For biomedical time series with multiple channels (also organized as a matrix), however, directly employing CNNs for classification is not very appropriate, since the data in the horizontal direction (intrachannel) are correlated while the data in the vertical direction (interchannel) are independent. For this reason, previous works [6, 45, 46] developed multichannel convolutional neural networks (MCNNs), which possess better classification performance.

An example of a 3-stage MCNN is shown in Figure 3: the data of each channel first go through three convolution units (CU); then the information from all the channels is fed into a fully connected (FC) layer; finally, the predictive value is outputted by a logistic regression (LR) layer. Note that a CU consists of a convolutional layer and a subsampling layer (max-pooling layer) and that "1D-Conv" denotes a one-dimensional convolution operation.
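
As an illustration, here is a minimal PyTorch sketch of such a 3-stage MCNN (the paper's own implementation used Theano): the kernel, pooling, and layer sizes follow the MCNN[A] settings given later in Section 3.2, while the tanh activations and the use of grouped convolutions to keep each channel's convolution units independent are our assumptions:

import torch
import torch.nn as nn

class MCNN(nn.Module):
    def __init__(self, channels=8, length=1700):
        super().__init__()
        g = channels  # groups=channels keeps each lead's convolutions independent
        self.features = nn.Sequential(
            nn.Conv1d(g, g * 6, kernel_size=21, groups=g), nn.Tanh(), nn.MaxPool1d(7),
            nn.Conv1d(g * 6, g * 7, kernel_size=13, groups=g), nn.Tanh(), nn.MaxPool1d(6),
            nn.Conv1d(g * 7, g * 5, kernel_size=9, groups=g), nn.Tanh(), nn.MaxPool1d(6),
        )
        with torch.no_grad():  # infer the flattened size (200 for an 8 x 1700 input)
            n = self.features(torch.zeros(1, g, length)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n, 50), nn.Tanh(),  # FC layer with 50 neurons
            nn.Linear(50, 1), nn.Sigmoid(),             # LR layer: outputs P(abnormal)
        )

    def forward(self, x):  # x: (batch, channels, length)
        return self.classifier(self.features(x))

model = MCNN()
prob = model(torch.randn(2, 8, 1700))  # -> shape (2, 1)

Setting groups equal to the number of channels restricts every convolution to a single lead's feature maps, which mirrors the per-channel CU structure of Figure 3.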

2.3. Explicit Training

A DNN constructs a nonlinear function once each network weight is assigned a value. Obviously, the number of constructible nonlinear functions grows with the number of weights. The nature of network training is to determine which function is the best for a given problem. The BP algorithm, the most commonly used training method, cannot find a good nonlinear function unless there are enough training samples. This does not necessarily mean enlarging the training set itself, but rather increasing the number of training samples presented to the network. Fortunately, the virtual sample technology is up to this task; its core is to perform a transformation on the biomedical time series under the premise of preserving class labels. The main transformations are the local view transformation and the distorted view transformation: the former extracts subseries, and the latter adds distortion (e.g., adding random data with low amplitude or burst noise).
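
A minimal sketch of the two transformations might look as follows (our code; the noise amplitude bound of 0.15 is taken from Section 3.2, while the uniform noise distribution and the probability of applying the distortion are assumptions):

import numpy as np

rng = np.random.default_rng()

def local_view(x, width=1700, start=None):
    # Extract a C x width subseries; random start position if none is given.
    if start is None:
        start = rng.integers(0, x.shape[1] - width + 1)
    return x[:, start:start + width]

def distorted_view(x, max_amp=0.15, prob=0.9):
    # With high probability, add random data of low amplitude.
    if rng.random() < prob:
        return x + rng.uniform(-max_amp, max_amp, size=x.shape)
    return x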

Generally speaking, "explicit training" is adopted to train a DNN in supervised learning; that is, besides the training samples, a small number of independent validation samples are used to evaluate the obtained model during the training phase. Suppose the size of the biomedical time series is C × F1 (C is the number of channels; F1 is the number of data points in each channel) and the number of input neurons is C × F2 (F2 < F1); the training process can be described as follows: a random subseries is first extracted from a training sample (the local view transformation), and then some type of distortion is added with high probability (the distorted view transformation); finally, backpropagation is invoked. When PMax training samples have been presented to the network, the weights are adjusted and the current DNN model is tested on particular subseries extracted from the validation samples. If the accuracy (or another metric) is the highest so far, the current DNN model is saved. Afterwards, the next training sample is chosen and the process repeats. Algorithm 1 gives the pseudocode of this training method.

Algorithm 1: Explicit Training
Input: Training Samples (x_i, y_i), i = 1, ..., N; Validation Samples (v_j, l_j), j = 1, ..., V
Output: A DNN Model
Begin
  Best = 0
  while (!StopCondition)
    dW = 0                                    //Initialize the weight changes
    for i = 1 to N
      x = LocalViewTransform(x_i, rand)       //Perform the local view transformation (start from a random position)
      x = DistortedViewTransform(x, rand)     //Perform the distorted view transformation with high probability
      dW = dW + BackPropagation(x, y_i)       //Invoke backpropagation and accumulate
      if (i % PMax == 0)                      //PMax training samples have been presented to the network
        UpdateNN(dW)                          //Adjust the weights
        dW = 0
        Matrix = 0                            //Initialize the confusion matrix
        for j = 1 to V
          v = LocalViewTransform(v_j, fixed)  //Perform the local view transformation (start from a particular position)
          Matrix = Matrix + Test(v, l_j)      //Test the current DNN model
        end
        if (Performance(Matrix) > Best)
          Best = Performance(Matrix)
          SaveNN(Model)
        end
      end
    end
  end
End
2.4. Implicit Training

In "explicit training," the DNN model is tested on validation samples at regular intervals in order to avoid overfitting, which also means that the selection of validation samples significantly influences the classification performance. However, handpicking representative samples is not easy and is limited by practical conditions in some cases. For this reason, we develop "implicit training" in this study. As shown in Algorithm 2, most of the steps are the same as those in Algorithm 1; the main difference lies in the validation mechanism. When PMax training samples have been presented to the network, the weights are adjusted and the current DNN model is tested on particular subseries extracted from the training samples used between the two adjacent weight-updating processes. At the end of every training epoch, the DNN model is saved if the total accuracy (or another metric) is the highest so far.

Algorithm 2: Implicit Training
Input: Training Samples (x_i, y_i), i = 1, ..., N
Output: A DNN Model
Begin
  Best = 0
  while (!StopCondition)
    dW = 0                                    //Initialize the weight changes
    Matrix = 0                                //Initialize the confusion matrix
    for i = 1 to N
      x = LocalViewTransform(x_i, rand)       //Perform the local view transformation (start from a random position)
      x = DistortedViewTransform(x, rand)     //Perform the distorted view transformation with high probability
      dW = dW + BackPropagation(x, y_i)       //Invoke backpropagation and accumulate
      if (i % PMax == 0)                      //PMax training samples have been presented to the network
        UpdateNN(dW)                          //Adjust the weights
        dW = 0
        for k = i - PMax + 1 to i             //Training samples used between two adjacent weight-updating processes
          x = LocalViewTransform(x_k, fixed)  //Perform the local view transformation (start from a particular position)
          Matrix = Matrix + Test(x, y_k)      //Test the current DNN model
        end
      end
    end
    if (Performance(Matrix) > Best)
      Best = Performance(Matrix)
      SaveNN(Model)
    end
  end
End

One may suspect that this method will result in overfitting, but that is not the case. For training, we extract random subseries and add distortion to them with high probability; for validation, we extract particular subseries and do not add distortion to them. Hence, the probability of overlap is very small. Of course, this validation mode can be used only in situations where the virtual sample technology is employed.

2.5. Subview Prediction

The output of the DNN is a probability value that ranges from 0 to 1; if the value is close to 0.5, the classification confidence is low. On the other hand, in the training phase, we utilize the virtual sample technology, including the local view transformation and the distorted view transformation (collectively called the view transformation), to increase the number of training samples presented to the network. With these considerations in mind, we develop a new testing method called "subview prediction."

As shown in Figure 4, the testing process can be described as follows: subseries starting from different positions are first extracted from the testing sample; then each of them is selectively added with a type of distortion used in the training phase; finally, their predictive values outputted by the DNN are aggregated by the average rule (for the sake of simplicity). Note that in "simple prediction" only one subseries is extracted and tested by the DNN.
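
In Python, "subview prediction" for the ECG setting of Section 3.2 could be sketched as below; model stands for any trained DNN callable that maps a C × 1700 segment to a probability (a hypothetical placeholder), the nine start positions follow Section 3.2, and the distortion is omitted, as in the experiments:

import numpy as np

SUBVIEW_STARTS = [0, 25, 50, 75, 100, 125, 150, 175, 200]  # 0-based offsets

def subview_predict(model, x, width=1700):
    # x: channels x samples; average the probabilities over the nine subviews.
    probs = [model(x[:, s:s + width]) for s in SUBVIEW_STARTS]
    return float(np.mean(probs))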

2.6. Simple Average

Simple Average [27] is employed to fuse the two DNNs in this paper: given M-class data, the predicted class is determined by

$$\hat{y} = \arg\max_{1 \leq m \leq M} \frac{1}{2} \sum_{k=1}^{2} p_k^{(m)},$$

where $p_k^{(m)}$ is the probability value predicted for the $m$th class by the $k$th DNN. Of course, we can use other fusion methods, such as Weighted Average [27], Dempster-Shafer Combination Rules [28], and Second-Level Trainable Combiners [30].
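
A sketch of this fusion step, assuming each DNN already returns a per-class probability vector (e.g., via subview prediction), is given below; for the binary task studied here, thresholding the averaged probability at 0.5 is equivalent:

import numpy as np

def fuse_simple_average(prob_a, prob_b):
    # Average the two DNNs' class-probability vectors and take the argmax.
    avg = (np.asarray(prob_a) + np.asarray(prob_b)) / 2.0
    return int(np.argmax(avg))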

3. Experimental Setup

In this study, we apply the proposed ensemble method to one biomedical application, namely, the classification of normal and abnormal electrocardiogram (ECG) recordings of short duration. This is useful for telemedicine centers, where abnormal recordings are delivered to physicians for further interpretation after computer-assisted ECG analysis algorithms filter out normal ones, so that diagnostic efficiency is greatly increased [47]. However, this classification task is rather hard due to wide variations in ECG characteristics among different individuals, and many traditional methods do not perform well on it [48, 49]. In the research work [6], "low-pass filtering" and "downsampling (from the original 500 Hz to 200 Hz)" were successively applied to the ECG recordings, and one MCNN model was then obtained by "explicit training"; tested on 151,274 recordings from the Chinese Cardiovascular Disease Database (CCDD) [50], it achieved the best results to date. To ensure a fair comparison, the same ECG dataset is used to evaluate the performance of the newly proposed method.

3.1. ECG Dataset

The CCDD consists of 179,130 standard 12-lead ECG recordings with a sampling frequency of 500 Hz. After discarding exception data, we choose 175,943 recordings for the numerical experiments, where the numbers of training samples, validation samples, and testing samples (nine groups obtained from different sources) are 12,320, 560, and 163,063, respectively. Note that the training samples and validation samples are combined in "implicit training." These recordings are first processed using a digital filter, and then a downsampling operation (from the original 500 Hz to 200 Hz) is conducted. Finally, 8 × 1900 sampling points are available for each recording. The Appendix shows the details of the process [6].

3.2. Individual Classifier

To ensure a fair comparison, all ensemble methods in the numerical experiments employ MCNNs as individual classifiers. In this study, two different network architectures, namely, MCNN[A] and MCNN[B], are used for different purposes. The first one is a 3-stage MCNN whose parameters are set as follows: the number of the neurons in the input layer is 8 × 1700 (1 × 1700); the sizes of three convolution kernels are 1 × 21, 1 × 13, and 1 × 9 (1 denotes the size in the vertical direction; 21 denotes the size in the horizontal direction), respectively; the sizes of three subsampling steps are 1 × 7, 1 × 6, and 1 × 6, respectively; the numbers of three feature maps are 6, 7, and 5, respectively; the numbers of the neurons in the FC layer and the LR layer are 50 and 1, respectively. The second one is also a 3-stage MCNN, whose parameters are set as follows: the number of the neurons in the input layer is 1 × 1900; the sizes of three convolution kernels are 1 × 18, 1 × 12, and 1 × 8, respectively; the sizes of three subsampling steps are 1 × 7, 1 × 6, and 1 × 6, respectively; the numbers of three feature maps are 6, 7, and 5, respectively; the numbers of the neurons in the FC layer and the LR layer are 50 and 1, respectively.

The local view transformation can be used for MCNN[A], but it cannot be used for MCNN[B], since the number of sampling points of an ECG recording is 1 × 1900. As for the distorted view transformation, it can be used for both MCNN[A] and MCNN[B]. In this study, random data with low amplitude (the maximal amplitude is lower than 0.15) are added to the ECG recordings during the training phase, and this operation (i.e., the distorted view transformation) is omitted in the testing phase for the sake of simplicity.

Using MCNN[A], 8 × 1700 local segments starting from the 1st sampling point are extracted (from validation samples in "explicit training" and from training samples in "implicit training") to evaluate the obtained model during the training phase. In the testing phase, nine 8 × 1700 local segments starting from the 1st, 26th, 51st, 76th, 101st, 126th, 151st, 176th, and 201st sampling points are extracted from a given ECG recording if "subview prediction" is used; otherwise, only one 8 × 1700 local segment starting from the 1st sampling point is extracted. Using MCNN[B], we can train and test neural networks in the traditional manner.

We only employ the BP algorithm with inertia moment (momentum) and variable step size [51] in supervised learning, and its related parameters are set as follows: the initial step size is 0.02; the step decay is 0.01, except for the 2nd and 3rd epochs (set as 0.0505); PMax is 560; and the maximal number of training epochs is 500. The experimental platform is an Intel Core i7-3770 CPU @ 3.4 GHz with 8.0 GB RAM running the 64-bit Windows 7 operating system, and the theano-0.6rc [52] implementation is used.

3.3. Ensemble Method

Bagging [7] and AdaBoost [8] are two of the most popular and effective ensemble-learning methods, and they work especially well for unstable learning algorithms (i.e., ones where small changes in the training data lead to large changes in the individual classifiers), such as neural networks and decision trees [53]. In addition, Ye et al. [54] proposed an ensemble method for multilead ECG with excellent classification performance. To demonstrate the effectiveness of our proposed method, we compare it with these three ensemble methods. Their configuration parameters are as follows:

(1) ViewEL (i.e., our proposed method): filtering views A and B in Figure 2 are set as "low-pass filtering" and "band-pass filtering with a passband of 0.5–40 Hz" [55], respectively. Note that there is no problem if we exchange "low-pass filtering" for "band-pass filtering," since an effective measure for homogeneous ensemble learning is to make the input data different. Here, we simply make the 1st path consistent with the research work [6].

(2) AdaBoost: the filtering view is set as "low-pass filtering." In the training phase, we utilize the explicit method to train two MCNN[A] models based on the framework of AdaBoost. In the testing phase, we apply "subview prediction" to the individual classifiers independently and fuse them afterwards.

(3) Bagging: most of the configuration parameters are the same as those of AdaBoost. The only difference is that the two MCNN[A] models are obtained based on the framework of Bagging.

(4) YeC: the filtering view is set as "low-pass filtering." We utilize the explicit method to train one MCNN[A] model for each lead, so there are a total of 8 MCNN[A] models since the incoming ECG recording contains 8 leads. In the testing phase, we first apply "subview prediction" to each individual classifier and then employ the Bayesian approach (product rule) to fuse them [54].

(5) YeCRaw: most of the configuration parameters are the same as those of YeC. The only difference is that we replace MCNN[A] with MCNN[B]. Note that neither the local view transformation nor "subview prediction" can be used in this method.

3.4. Performance Metrics

We utilize the following metrics [56] to evaluate the algorithms: specificity (Sp), sensitivity (Se), geometric mean (GMean), accuracy (Acc), and negative predictive value (NPV), given by

Sp = TN / (TN + FP),
Se = TP / (TP + FN),
GMean = sqrt(Sp × Se),
Acc = (TP + TN) / (TP + TN + FP + FN),
NPV = TN / (TN + FN),

where TN and TP are the numbers of normal and abnormal samples correctly classified, respectively, FN is the number of abnormal samples classified as normal, and FP is the number of normal samples classified as abnormal. In addition, we also utilize related metrics of the ROC (receiver operating characteristic) curve, including AUC (area under the ROC curve), TPR (the vertical axis of the ROC curve, i.e., "Sp" in this study), and FPR (the horizontal axis of the ROC curve, i.e., "1 − Se" in this study).
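
For reference, these metrics can be computed from the four confusion-matrix counts as in the sketch below (our code, following the definitions above):

import math

def metrics(tp, tn, fp, fn):
    # tp/tn: correctly classified abnormal/normal; fn/fp: the corresponding errors.
    sp = tn / (tn + fp)   # specificity
    se = tp / (tp + fn)   # sensitivity
    return {
        "Sp": sp,
        "Se": se,
        "GMean": math.sqrt(sp * se),
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "NPV": tn / (tn + fn),
    }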

4. Results

In this section, we report the experimental results on the CCDD. There are nine groups of testing samples with different sources, namely, DS1~DS9. Besides presenting the testing results of each group, we summarize the averages and the standard deviations of the performance metrics for each algorithm. The metric GMean takes into consideration the classification results on both the positive and negative classes [57], while the key point in classifying normal and abnormal ECG recordings of short duration is to make the TPR under the condition of NPV being equal to 95% (TPR95) as high as possible [58]. Therefore, we perform the Wilcoxon signed ranks test [59] to investigate whether the differences in GMean and TPR95 achieved by different algorithms are statistically significant. Generally speaking, a p value less than 0.05 indicates that the difference is significant, and the smaller the p value is, the more significant the difference is.
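
As an illustration of this test, the sketch below applies SciPy's Wilcoxon signed ranks test to the paired per-dataset GMean values of "simple prediction" and "subview prediction" taken from Table 1:

from scipy.stats import wilcoxon

# GMean (%) on DS1-DS9 for the explicitly trained model (Table 1).
gmean_simple  = [82.68, 83.97, 82.01, 83.77, 82.79, 83.43, 81.19, 81.47, 79.57]
gmean_subview = [83.07, 84.84, 82.77, 84.38, 83.62, 84.13, 81.82, 82.30, 80.22]

stat, p = wilcoxon(gmean_simple, gmean_subview)
print(p)  # small (< 0.01), since all nine pairs favor subview; Table 3 reports 0.0039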

4.1. Effectiveness Evaluation

We first investigate the contribution of "subview prediction." From the testing results presented in Tables 1 and 2, we can see that most of the performance metrics increase and the classification performance improves regardless of whether "explicit training" or "implicit training" is used. From the results of the statistical analysis summarized in Tables 3 and 4, we know that "subview prediction" significantly outperforms "simple prediction" in terms of GMean, no matter which training method we use. In addition, we also find that both "explicit training" and "implicit training" have advantages and disadvantages: the former has higher Se while the latter has higher Sp. From the perspective of ensemble learning, these two classifiers are exactly what we want. Next, we present their fusion results in Tables 5 and 6.


Table 1: Testing results of "simple prediction" and "subview prediction" with the models obtained by "explicit training." TPR and FPR are computed under the condition of NPV = 95%.

Dataset | Method     | Sp (%) | Se (%) | GMean (%) | Acc (%) | AUC    | TPR (%) | FPR (%)
DS1     | Simple [6] | 88.84  | 76.95  | 82.68     | 85.41   | 0.9034 | 63.322  | 8.217
DS1     | Subview    | 89.85  | 76.81  | 83.07     | 86.09   | 0.9123 | 71.111  | 9.227
DS2     | Simple [6] | 88.63  | 79.55  | 83.97     | 85.99   | 0.9174 | 74.389  | 9.558
DS2     | Subview    | 89.92  | 80.05  | 84.84     | 87.05   | 0.9272 | 78.879  | 10.135
DS3     | Simple [6] | 86.58  | 77.69  | 82.01     | 84.03   | 0.8972 | 65.597  | 8.599
DS3     | Subview    | 87.81  | 78.03  | 82.77     | 85.01   | 0.9074 | 72.506  | 9.505
DS4     | Simple [6] | 82.75  | 84.81  | 83.77     | 83.91   | 0.9096 | 0.091   | 0.011
DS4     | Subview    | 83.67  | 85.09  | 84.38     | 84.47   | 0.9153 | 0.051   | 0.004
DS5     | Simple [6] | 79.52  | 86.20  | 82.79     | 83.23   | 0.9084 | 0       | 0
DS5     | Subview    | 80.70  | 86.64  | 83.62     | 84.00   | 0.9144 | 0       | 0
DS6     | Simple [6] | 81.98  | 84.90  | 83.43     | 83.57   | 0.9101 | 0.025   | 0.003
DS6     | Subview    | 82.80  | 85.49  | 84.13     | 84.26   | 0.9169 | 0       | 0
DS7     | Simple [6] | 77.81  | 84.71  | 81.19     | 81.28   | 0.8905 | 0.010   | 0.001
DS7     | Subview    | 78.60  | 85.17  | 81.82     | 81.90   | 0.8964 | 0       | 0
DS8     | Simple [6] | 78.31  | 84.74  | 81.47     | 81.71   | 0.8913 | 0       | 0
DS8     | Subview    | 79.46  | 85.23  | 82.30     | 82.51   | 0.8976 | 0.020   | 0.003
DS9     | Simple [6] | 83.97  | 75.40  | 79.57     | 81.48   | 0.8661 | 1.196   | 0.159
DS9     | Subview    | 84.62  | 76.04  | 80.22     | 82.13   | 0.8778 | 52.160  | 6.722

Note: we can change the discrimination threshold from 0 to 1 and calculate the corresponding values of Se, Sp, and NPV; a value of "0" means that the condition of NPV being equal to 95% cannot be satisfied.

Table 2: Testing results of "simple prediction" and "subview prediction" with the models obtained by "implicit training." TPR and FPR are computed under the condition of NPV = 95%.

Dataset | Method  | Sp (%) | Se (%) | GMean (%) | Acc (%) | AUC    | TPR (%) | FPR (%)
DS1     | Simple  | 91.74  | 73.25  | 81.97     | 86.40   | 0.9047 | 64.482  | 8.368
DS1     | Subview | 92.58  | 73.52  | 82.50     | 87.08   | 0.9149 | 71.202  | 9.239
DS2     | Simple  | 91.73  | 76.51  | 83.77     | 87.30   | 0.9228 | 75.336  | 9.680
DS2     | Subview | 92.45  | 76.40  | 84.04     | 87.78   | 0.9318 | 80.983  | 10.405
DS3     | Simple  | 89.30  | 74.65  | 81.64     | 85.10   | 0.9030 | 70.424  | 9.232
DS3     | Subview | 90.23  | 74.88  | 82.20     | 85.84   | 0.9112 | 74.256  | 9.735
DS4     | Simple  | 87.31  | 82.30  | 84.77     | 84.49   | 0.9131 | 0       | 0
DS4     | Subview | 88.15  | 82.30  | 85.18     | 84.85   | 0.9192 | 0       | 0
DS5     | Simple  | 85.80  | 82.18  | 83.97     | 83.79   | 0.9067 | 0       | 0
DS5     | Subview | 86.78  | 82.34  | 84.53     | 84.31   | 0.9135 | 0       | 0
DS6     | Simple  | 86.98  | 81.30  | 84.09     | 83.90   | 0.9098 | 0       | 0
DS6     | Subview | 88.21  | 81.60  | 84.84     | 84.62   | 0.9168 | 0       | 0
DS7     | Simple  | 83.71  | 80.09  | 81.88     | 81.89   | 0.8872 | 0.045   | 0.012
DS7     | Subview | 84.19  | 80.46  | 82.30     | 82.31   | 0.8943 | 0       | 0
DS8     | Simple  | 83.53  | 80.40  | 81.95     | 81.87   | 0.8891 | 0.101   | 0.013
DS8     | Subview | 84.79  | 80.97  | 82.86     | 82.77   | 0.8962 | 0.032   | 0.004
DS9     | Simple  | 87.07  | 71.34  | 78.81     | 82.51   | 0.8635 | 0       | 0
DS9     | Subview | 87.24  | 70.91  | 78.65     | 82.51   | 0.8733 | 0       | 0


Table 3: Averages and standard deviations of the performance metrics in Table 1 ("explicit training"). TPR and FPR are computed under the condition of NPV = 95%; the first p value refers to GMean and the second to TPR95.

Method     | Sp (%)       | Se (%)       | GMean (%)    | p value | Acc (%)      | AUC            | TPR (%)       | FPR (%)     | p value
Simple [6] | 83.16 ± 4.20 | 81.66 ± 4.20 | 82.32 ± 1.42 | 0.0039  | 83.40 ± 1.68 | 0.8993 ± 0.02  | 22.74 ± 33.90 | 2.95 ± 4.40 | 0.1953
Subview    | 84.16 ± 4.28 | 82.06 ± 4.27 | 83.02 ± 1.44 | n/a     | 84.16 ± 1.76 | 0.9073 ± 0.01  | 30.53 ± 36.86 | 3.96 ± 4.78 | n/a


Table 4: Averages and standard deviations of the performance metrics in Table 2 ("implicit training"). TPR and FPR are computed under the condition of NPV = 95%; the first p value refers to GMean and the second to TPR95.

Method  | Sp (%)       | Se (%)       | GMean (%)    | p value | Acc (%)      | AUC            | TPR (%)       | FPR (%)     | p value
Simple  | 87.46 ± 3.01 | 78.00 ± 4.15 | 82.54 ± 1.83 | 0.0078  | 84.14 ± 1.91 | 0.9000 ± 0.02  | 23.38 ± 35.13 | 3.03 ± 4.56 | 0.3125
Subview | 88.29 ± 3.00 | 78.15 ± 4.30 | 83.01 ± 2.00 | n/a     | 84.68 ± 1.96 | 0.9079 ± 0.02  | 25.16 ± 37.82 | 3.26 ± 4.90 | n/a


Table 5: Testing results of the individual models and the fusion model. The classification models are obtained by "explicit training" and "implicit training," respectively, and the results are based on subview prediction. TPR and FPR are computed under the condition of NPV = 95%.

Dataset | Model        | Sp (%) | Se (%) | GMean (%) | Acc (%) | AUC    | TPR (%) | FPR (%)
DS1     | Explicit [6] | 88.84  | 76.95  | 82.68     | 85.41   | 0.9034 | 63.322  | 8.217
DS1     | Explicit     | 89.85  | 76.81  | 83.07     | 86.09   | 0.9123 | 71.111  | 9.227
DS1     | Implicit     | 92.58  | 73.52  | 82.50     | 87.08   | 0.9149 | 71.202  | 9.239
DS1     | Fusion       | 91.81  | 74.96  | 82.96     | 86.95   | 0.9172 | 74.063  | 9.611
DS2     | Explicit [6] | 88.63  | 79.55  | 83.97     | 85.99   | 0.9174 | 74.389  | 9.558
DS2     | Explicit     | 89.92  | 80.05  | 84.84     | 87.05   | 0.9272 | 78.879  | 10.135
DS2     | Implicit     | 92.45  | 76.40  | 84.04     | 87.78   | 0.9318 | 80.983  | 10.405
DS2     | Fusion       | 91.47  | 78.27  | 84.61     | 87.64   | 0.9324 | 81.468  | 10.468
DS3     | Explicit [6] | 86.58  | 77.69  | 82.01     | 84.03   | 0.8972 | 65.597  | 8.599
DS3     | Explicit     | 87.81  | 78.03  | 82.77     | 85.01   | 0.9074 | 72.506  | 9.505
DS3     | Implicit     | 90.23  | 74.88  | 82.20     | 85.84   | 0.9112 | 74.256  | 9.735
DS3     | Fusion       | 89.57  | 76.29  | 82.66     | 85.76   | 0.9122 | 74.819  | 9.809
DS4     | Explicit [6] | 82.75  | 84.81  | 83.77     | 83.91   | 0.9096 | 0.091   | 0.011
DS4     | Explicit     | 83.67  | 85.09  | 84.38     | 84.47   | 0.9153 | 0.051   | 0.004
DS4     | Implicit     | 88.15  | 82.30  | 85.18     | 84.85   | 0.9192 | 0       | 0
DS4     | Fusion       | 86.64  | 84.18  | 85.40     | 85.25   | 0.9215 | 0.020   | 0.005
DS5     | Explicit [6] | 79.52  | 86.20  | 82.79     | 83.23   | 0.9084 | 0       | 0
DS5     | Explicit     | 80.70  | 86.64  | 83.62     | 84.00   | 0.9144 | 0       | 0
DS5     | Implicit     | 86.78  | 82.34  | 84.53     | 84.31   | 0.9135 | 0       | 0
DS5     | Fusion       | 84.69  | 84.82  | 84.76     | 84.76   | 0.9187 | 0       | 0
DS6     | Explicit [6] | 81.98  | 84.90  | 83.43     | 83.57   | 0.9101 | 0.025   | 0.003
DS6     | Explicit     | 82.80  | 85.49  | 84.13     | 84.26   | 0.9169 | 0       | 0
DS6     | Implicit     | 88.21  | 81.60  | 84.84     | 84.62   | 0.9168 | 0       | 0
DS6     | Fusion       | 86.31  | 83.81  | 85.05     | 84.96   | 0.9213 | 19.835  | 0.882
DS7     | Explicit [6] | 77.81  | 84.71  | 81.19     | 81.28   | 0.8905 | 0.010   | 0.001
DS7     | Explicit     | 78.60  | 85.17  | 81.82     | 81.90   | 0.8964 | 0       | 0
DS7     | Implicit     | 84.19  | 80.46  | 82.30     | 82.31   | 0.8943 | 0       | 0
DS7     | Fusion       | 82.08  | 83.02  | 82.55     | 82.55   | 0.9000 | 10.769  | 0.561
DS8     | Explicit [6] | 78.31  | 84.74  | 81.47     | 81.71   | 0.8913 | 0       | 0
DS8     | Explicit     | 79.46  | 85.23  | 82.30     | 82.51   | 0.8976 | 0.020   | 0.003
DS8     | Implicit     | 84.79  | 80.97  | 82.86     | 82.77   | 0.8962 | 0.032   | 0.004
DS8     | Fusion       | 82.69  | 83.32  | 83.00     | 83.02   | 0.9011 | 10.889  | 0.514
DS9     | Explicit [6] | 83.97  | 75.40  | 79.57     | 81.48   | 0.8661 | 1.196   | 0.159
DS9     | Explicit     | 84.62  | 76.04  | 80.22     | 82.13   | 0.8778 | 52.160  | 6.722
DS9     | Implicit     | 87.24  | 70.91  | 78.65     | 82.51   | 0.8733 | 0       | 0
DS9     | Fusion       | 86.46  | 73.37  | 79.64     | 82.66   | 0.8806 | 53.151  | 6.850

Table 6: Averages and standard deviations of the performance metrics in Table 5 (the p values compare each model with the fusion model). The classification models are obtained by "explicit training" and "implicit training," respectively, and the results are based on subview prediction. TPR and FPR are computed under the condition of NPV = 95%; the first p value refers to GMean and the second to TPR95.

Model        | Sp (%)       | Se (%)       | GMean (%)    | p value | Acc (%)      | AUC            | TPR (%)       | FPR (%)     | p value
Explicit [6] | 83.16 ± 4.20 | 81.66 ± 4.20 | 82.32 ± 1.42 | 0.0039  | 83.40 ± 1.68 | 0.8993 ± 0.02  | 22.74 ± 33.90 | 2.95 ± 4.40 | 0.0156
Explicit     | 84.16 ± 4.28 | 82.06 ± 4.27 | 83.02 ± 1.44 | 0.1641  | 84.16 ± 1.76 | 0.9073 ± 0.01  | 30.53 ± 36.86 | 3.96 ± 4.78 | 0.0156
Implicit     | 88.29 ± 3.00 | 78.15 ± 4.30 | 83.01 ± 2.00 | 0.0039  | 84.68 ± 1.96 | 0.9079 ± 0.02  | 25.16 ± 37.82 | 3.26 ± 4.90 | 0.0078
Fusion       | 86.86 ± 3.51 | 80.23 ± 4.49 | 83.40 ± 1.80 | n/a     | 84.84 ± 1.82 | 0.9117 ± 0.02  | 36.11 ± 34.34 | 4.30 ± 4.74 | n/a

It is observed from the testing results presented in Table 5 that the fusion model always has the highest AUC. From the statistical results summarized in Table 6, we can see that the fusion model significantly outperforms the explicit [6] and implicit (the classification models are obtained by “explicit training” and “implicit training,” resp., and the results are based on subview prediction) models in terms of GMean and all other models in terms of TPR95. All these findings illustrate the effectiveness of our newly developed generation and fusion strategies.

4.2. Comparison with Other Methods

We compare the performance of the proposed method (i.e., ViewEL) with that of YeCRaw, YeC, Bagging, and AdaBoost. From the testing results presented in Table 7, we can see that ViewEL produces the best results on many of the performance metrics. From the results of the statistical analysis summarized in Table 8, we know that ViewEL significantly outperforms YeCRaw and Bagging in terms of both GMean and TPR95, and YeC in terms of GMean. Although we cannot say that ViewEL significantly outperforms AdaBoost in terms of GMean, the p value (0.0547) is only a little greater than 0.05.


Table 7: Testing results of ViewEL and the four compared methods. TPR and FPR are computed under the condition of NPV = 95%.

Dataset | Method   | Sp (%) | Se (%) | GMean (%) | Acc (%) | AUC    | TPR (%) | FPR (%)
DS1     | YeCRaw   | 95.61  | 56.11  | 73.25     | 84.21   | 0.8882 | 52.663  | 6.834
DS1     | YeC      | 91.44  | 69.05  | 79.46     | 84.98   | 0.8965 | 63.302  | 8.215
DS1     | Bagging  | 91.68  | 73.90  | 82.31     | 86.55   | 0.9081 | 67.005  | 8.694
DS1     | AdaBoost | 90.75  | 75.93  | 83.01     | 86.47   | 0.9116 | 72.031  | 9.347
DS1     | ViewEL   | 91.81  | 74.96  | 82.96     | 86.95   | 0.9172 | 74.063  | 9.611
DS2     | YeCRaw   | 95.35  | 58.84  | 74.91     | 84.74   | 0.9010 | 65.363  | 8.399
DS2     | YeC      | 90.84  | 70.48  | 80.01     | 84.92   | 0.9028 | 67.815  | 8.715
DS2     | Bagging  | 91.11  | 77.31  | 83.93     | 87.10   | 0.9252 | 77.739  | 9.988
DS2     | AdaBoost | 90.62  | 79.01  | 84.62     | 87.25   | 0.9255 | 79.503  | 10.215
DS2     | ViewEL   | 91.47  | 78.27  | 84.61     | 87.64   | 0.9324 | 81.468  | 10.468
DS3     | YeCRaw   | 94.22  | 56.82  | 73.17     | 83.51   | 0.8829 | 55.944  | 7.349
DS3     | YeC      | 89.65  | 69.59  | 78.99     | 83.91   | 0.8962 | 67.103  | 8.797
DS3     | Bagging  | 88.91  | 75.55  | 81.96     | 85.09   | 0.9033 | 69.965  | 9.172
DS3     | AdaBoost | 88.13  | 76.84  | 82.29     | 84.90   | 0.9025 | 71.536  | 9.378
DS3     | ViewEL   | 89.57  | 76.29  | 82.66     | 85.76   | 0.9122 | 74.819  | 9.809
DS4     | YeCRaw   | 92.51  | 71.35  | 81.24     | 80.57   | 0.9054 | 0       | 0
DS4     | YeC      | 86.87  | 80.21  | 83.47     | 83.11   | 0.9072 | 0       | 0
DS4     | Bagging  | 85.83  | 83.60  | 84.70     | 84.57   | 0.9140 | 0       | 0
DS4     | AdaBoost | 84.32  | 84.98  | 84.65     | 84.69   | 0.9172 | 0.362   | 0.015
DS4     | ViewEL   | 86.64  | 84.18  | 85.40     | 85.25   | 0.9215 | 0.020   | 0.005
DS5     | YeCRaw   | 91.03  | 73.22  | 81.64     | 81.13   | 0.9039 | 0       | 0
DS5     | YeC      | 84.29  | 82.58  | 83.43     | 83.34   | 0.9071 | 0.184   | 0.008
DS5     | Bagging  | 83.16  | 84.57  | 83.86     | 83.94   | 0.9106 | 0       | 0
DS5     | AdaBoost | 81.58  | 86.10  | 83.81     | 84.09   | 0.9143 | 0.149   | 0.007
DS5     | ViewEL   | 84.69  | 84.82  | 84.76     | 84.76   | 0.9187 | 0       | 0
DS6     | YeCRaw   | 92.53  | 70.79  | 80.93     | 80.72   | 0.9052 | 0       | 0
DS6     | YeC      | 86.75  | 80.09  | 83.35     | 83.13   | 0.9069 | 0       | 0
DS6     | Bagging  | 85.26  | 83.32  | 84.29     | 84.21   | 0.9134 | 0.074   | 0.006
DS6     | AdaBoost | 83.51  | 85.10  | 84.30     | 84.37   | 0.9158 | 0.108   | 0.007
DS6     | ViewEL   | 86.31  | 83.81  | 85.05     | 84.96   | 0.9213 | 19.835  | 0.882
DS7     | YeCRaw   | 89.00  | 71.57  | 79.81     | 80.23   | 0.8885 | 0       | 0
DS7     | YeC      | 81.93  | 81.52  | 81.72     | 81.72   | 0.8944 | 16.324  | 0.850
DS7     | Bagging  | 80.77  | 82.62  | 81.69     | 81.70   | 0.8916 | 0       | 0
DS7     | AdaBoost | 79.46  | 84.81  | 82.09     | 82.15   | 0.8956 | 0       | 0
DS7     | ViewEL   | 82.08  | 83.02  | 82.55     | 82.55   | 0.9000 | 10.769  | 0.561
DS8     | YeCRaw   | 89.14  | 70.49  | 79.27     | 79.29   | 0.8879 | 0       | 0
DS8     | YeC      | 83.27  | 80.21  | 81.72     | 81.65   | 0.8910 | 1.989   | 0.098
DS8     | Bagging  | 81.28  | 83.02  | 82.15     | 82.20   | 0.8916 | 8.551   | 0.410
DS8     | AdaBoost | 80.21  | 84.72  | 82.44     | 82.59   | 0.8954 | 0.014   | 0.001
DS8     | ViewEL   | 82.69  | 83.32  | 83.00     | 83.02   | 0.9011 | 10.889  | 0.514
DS9     | YeCRaw   | 92.57  | 64.06  | 77.01     | 84.31   | 0.8789 | 49.524  | 6.382
DS9     | YeC      | 86.02  | 76.90  | 81.33     | 83.37   | 0.8849 | 54.494  | 7.039
DS9     | Bagging  | 87.42  | 72.30  | 79.50     | 83.03   | 0.8804 | 56.669  | 7.302
DS9     | AdaBoost | 85.71  | 74.22  | 79.76     | 82.38   | 0.8825 | 59.061  | 7.611
DS9     | ViewEL   | 86.46  | 73.37  | 79.64     | 82.66   | 0.8806 | 53.151  | 6.850


Table 8: Averages and standard deviations of the performance metrics in Table 7 (the p values compare each method with ViewEL). TPR and FPR are computed under the condition of NPV = 95%; the first p value refers to GMean and the second to TPR95.

Method   | Sp (%)       | Se (%)       | GMean (%)    | p value | Acc (%)      | AUC            | TPR (%)       | FPR (%)     | p value
YeCRaw   | 92.44 ± 2.41 | 65.92 ± 7.00 | 77.91 ± 3.42 | 0.0039  | 82.08 ± 2.09 | 0.8936 ± 0.01  | 24.83 ± 29.75 | 3.22 ± 3.85 | 0.0078
YeC      | 86.78 ± 3.34 | 76.73 ± 5.50 | 81.50 ± 1.73 | 0.0273  | 83.35 ± 1.18 | 0.8985 ± 0.01  | 30.13 ± 31.97 | 3.75 ± 4.25 | 0.1289
Bagging  | 86.16 ± 3.98 | 79.58 ± 4.78 | 82.71 ± 1.65 | 0.0039  | 84.26 ± 1.82 | 0.9042 ± 0.01  | 31.11 ± 35.36 | 3.95 ± 4.64 | 0.0391
AdaBoost | 84.92 ± 4.23 | 81.30 ± 4.73 | 83.00 ± 1.57 | 0.0547  | 84.32 ± 1.77 | 0.9067 ± 0.01  | 31.42 ± 37.47 | 4.06 ± 4.86 | 0.1289
ViewEL   | 86.86 ± 3.51 | 80.23 ± 4.49 | 83.40 ± 1.80 | n/a     | 84.84 ± 1.82 | 0.9117 ± 0.02  | 36.11 ± 34.34 | 4.30 ± 4.74 | n/a

The essential difference between YeC and YeCRaw is whether the local view transformation and "subview prediction" are used. Table 7 shows that YeC increases almost all the metrics except Sp and significantly outperforms YeCRaw in terms of both GMean and TPR95 (the p values are 0.0039 and 0.0156, resp.). Likewise, the performance of Bagging and AdaBoost would be degraded if we removed the view transformations and "subview prediction" from them. Of course, their performance could be improved if the number of DNN models were increased. For ViewEL, we can also train multiple individual classifiers to enhance performance using different validation samples, such as time subseries starting from other positions and time subseries extracted from some part of the training samples in "implicit training." However, besides classification performance, we should also take into consideration the computational efficiency, since the DNN is a model with high complexity. In practical applications (telemedicine centers), an effective ensemble method with a small number of individual classifiers is needed.

5. Conclusion

The current work proposes a novel DNN-based ensemble method that uses multiple view-related strategies, such as the view transformations, "implicit training," and "subview prediction." Experimental results on the CCDD demonstrate that the proposed method is effective for biomedical time series classification. Furthermore, we compare it with some well-known methods for the classification of normal and abnormal ECG recordings; our proposed method achieves comparable or better classification results than the others.

It is worth noting that this study presents a new research idea for ensemble learning, and more strategies can be incorporated into Figure 2, such as wavelet-transformation views, compressive sensing views, the class-switching technology, and misclassification costs. These are research tasks we plan to pursue in the future.

Appendix

Each ECG recording in the Chinese Cardiovascular Disease Database is approximately 10~20 seconds in duration and contains 12 leads, namely, I, II, III, aVR, aVL, aVF, V1, V2, V3, V4, V5, and V6, where II, III, V1, V2, V3, V4, V5, and V6 are orthogonal, while the remaining four can be linearly derived from them. We obtained these data from different hospitals (i.e., real clinical environments) in Shanghai and Suzhou successively. Cardiologists gave the diagnostic conclusion of each recording, which may contain more than one disease type. We used hexadecimal coding (of the form "0xdddddd") to encode disease types, which are divided into three grades, including 12 one-level types (i.e., invalid, normal, sinus rhythm, atrial arrhythmia, junctional rhythm, ventricular arrhythmia, conduction block, atrial hypertrophy, ventricular hypertrophy, myocardial infarction, ST-T change, and other abnormities), 72 two-level types, and 297 three-level types. More details can be found on our website (http://58.210.56.164:88/ccdd or http://58.210.56.164:66/ccdd).

In the numerical experiments, we first discard the invalid ECG recordings and those whose duration is less than 9.625 seconds. Then, a data segment of 9.5 seconds that contains only the eight orthogonal leads is extracted from each ECG recording after ignoring the first 0.125 s. Finally, each recording has 8 × 1900 sampling points at the sampling frequency of 200 Hz. We regard a recording as normal if the diagnostic conclusion is "0x020101," "0x020102," or "0x01"; otherwise, it is regarded as abnormal. Table 9 summarizes the detailed information.


Table 9: Details of the training, validation, and testing samples.

Dataset                  | Recordings          | Normal | Abnormal | Total | Source
The training samples     | data 944–25693      | 8800   | 3520     | 12320 | Shanghai, District #1
The validation samples   | data 944–25693      | 280    | 280      | 560   | Shanghai, District #1
The testing samples (DS1)| data 944–25693      | 8387   | 3402     | 11789 | Shanghai, District #1
The testing samples (DS4)| data 25694–37082    | 4911   | 6352     | 11263 | Shanghai, District #2
The testing samples (DS2)| data 37083–72607    | 25020  | 10249    | 35269 | Shanghai, District #3
The testing samples (DS3)| data 72608–95829    | 16210  | 6508     | 22718 | Shanghai, District #4
The testing samples (DS5)| data 95830–119551   | 10351  | 12948    | 23299 | Shanghai, District #5
The testing samples (DS6)| data 119552–141104  | 9703   | 11529    | 21232 | Shanghai, District #6
The testing samples (DS7)| data 141105–160913  | 9713   | 9831     | 19544 | Shanghai, District #7
The testing samples (DS8)| data 160914–175871  | 6944   | 7781     | 14725 | Shanghai, District #8
The testing samples (DS9)| data 175872–179130  | 2289   | 935      | 3224  | Suzhou, District #1

The recordings from "data 944–25693" used as training samples are as follows: 4520–4584, 4586–4613, 4615–4652, 4654–4761, 4763–4967, 4969–4972, 4975–5146, 5148, 5151–5279, 5281–5300, 5302–5348, 5350–5540, 5542–5568, 5570–5713, 5715–5777, 5779, 5781–5792, 5794–5974, 5976–6074, 6076–6118, 6120–6127, 6129–6134, 6136–6206, 6208–6281, 6283–6441, 6443–6502, 6504–6538, 6540–6654, 6656–6997, 6999–7005, 7007–7012, 7014–7060, 7062–7451, 7453–7506, 7508–7531, 7533–7594, 7596–7642, 7644–7732, 7734–7739, 7741–7829, 7831–7887, 7889–7939, 7941–7946, 7948–7956, 7980–8064, 8066–8108, 16556–17045, 17047–17128, 17407–17422, 17424–17454, 17456–17928, 17930–17933, 17935–17955, 17957–18093, 18095–18258, 18260–18441, 18443–18538, 18540–18562, 18565–18642, 18644–18814, 18816–18817, 18819–18984, 18986–19327, 19329–19439, 19441–19647, 19649–19657, 19659–19852, 19854–20173, 20175–20700, 20702–20881, 20883–21252, 21254–21301, 21303–21586, 21588–21815, 21817–21865, 21867–21889, 21891–21986, 21988–22149, 22151–22226, 22228–22462, 22464–22623, 22625–22793, 22795–22935, 22937–23032, 23034–23038, 23040–23043, 23045–23067, 23069–23268, 23270–23587, 23589–23611, 23613–23973, 23975–24108, 24110–24646, 24648, 24650–24775, 24777–24820, 24822–24962, 24964–25201, 25203–25282, 25284–25301, 25303–25330, 25332–25457, 25459–25495, 25497–25606, and 25608–25691.

The recordings from “data 944–25693” used as validation samples are as follows: 4176–4278, 4362–4413, 4415–4519, 7957–7979, 17129–17389, and 17391–17406.

There are nine groups of testing samples, namely, DS1~DS9, which were obtained from hospitals located at different districts in Shanghai and Suzhou. DS1 consists of the remaining recordings from “data 944–25693,” while other groups contain all the valid recordings from the corresponding dataset, respectively.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

References

  1. G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504–507, 2006.
  2. Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
  3. Z. H. Zhou, "Ensemble learning," in Encyclopedia of Biometrics, pp. 270–273, Springer, Berlin, Germany, 2009.
  4. L. K. Hansen and P. Salamon, "Neural network ensembles," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993–1001, 1990.
  5. R. E. Schapire, "The strength of weak learnability," Machine Learning, vol. 5, no. 2, pp. 197–227, 1990.
  6. L. P. Jin and J. Dong, "Deep learning research on clinical electrocardiogram analysis," Science China Information Sciences, vol. 45, no. 3, pp. 398–416, 2015.
  7. L. Breiman, "Bagging predictors," Machine Learning, vol. 24, no. 2, pp. 123–140, 1996.
  8. Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, no. 1, part 2, pp. 119–139, 1997.
  9. H. Drucker, C. Cortes, L. D. Jackel, Y. LeCun, and V. Vapnik, "Boosting and other ensemble methods," Neural Computation, vol. 6, no. 6, pp. 1289–1301, 1994.
  10. R. E. Schapire, "A brief introduction to boosting," in Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI '99), pp. 1401–1406, Stockholm, Sweden, August 1999.
  11. B. Parmanto, P. W. Munro, and H. R. Doyle, "Reducing variance of committee prediction with resampling techniques," Connection Science, vol. 8, no. 3-4, pp. 405–426, 1996.
  12. E. Bauer and R. Kohavi, "An empirical comparison of voting classification algorithms: bagging, boosting, and variants," Machine Learning, vol. 36, no. 1, pp. 105–139, 1999.
  13. L. Breiman, "Prediction games and arcing classifiers," Neural Computation, vol. 11, no. 7, pp. 1493–1517, 1999.
  14. T. K. Ho, "The random subspace method for constructing decision forests," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832–844, 1998.
  15. K. Tumer and N. C. Oza, "Input decimated ensembles," Pattern Analysis & Applications, vol. 6, no. 1, pp. 65–77, 2003.
  16. J. J. Rodríguez, L. I. Kuncheva, and C. J. Alonso, "Rotation forest: a new classifier ensemble method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, pp. 1619–1630, 2006.
  17. E. Pekalska, M. Skurichina, and R. P. W. Duin, "Combining fisher linear discriminants for dissimilarity representations," in Multiple Classifier Systems, vol. 1857 of Lecture Notes in Computer Science, pp. 117–126, Springer, Berlin, Germany, 2000.
  18. L. I. Kuncheva, "Using diversity measures for generating error-correcting output codes in classifier ensembles," Pattern Recognition Letters, vol. 26, no. 1, pp. 83–90, 2005.
  19. R. Anand, K. Mehrotra, C. K. Mohan, and S. Ranka, "Efficient classification for multiclass problems using modular neural networks," IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 117–124, 1995.
  20. T. Hastie and R. Tibshirani, "Classification by pairwise coupling," The Annals of Statistics, vol. 26, no. 2, pp. 451–471, 1998.
  21. L. Breiman, "Randomizing outputs to increase prediction accuracy," Machine Learning, vol. 40, no. 3, pp. 229–242, 2000.
  22. G. Martínez-Muñoz, A. Sánchez-Martínez, D. Hernández-Lobato, and A. Suárez, "Class-switching neural network ensembles," Neurocomputing, vol. 71, no. 13–15, pp. 2521–2528, 2008.
  23. J. F. Kolen and J. B. Pollack, "Back propagation is sensitive to initial conditions," in Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS '91), pp. 860–867, San Francisco, Calif, USA, 1991.
  24. K. M. Ali and M. J. Pazzani, "Error reduction through learning multiple descriptions," Machine Learning, vol. 24, no. 3, pp. 173–202, 1996.
  25. L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
  26. L. Lam and C. Y. Suen, "Application of majority voting to pattern recognition: an analysis of its behavior and performance," IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, vol. 27, no. 5, pp. 553–568, 1997.
  27. G. Fumera and F. Roli, "A theoretical and experimental analysis of linear combiners for multiple classifier systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 942–956, 2005.
  28. G. Rogova, "Combining the results of several neural network classifiers," Neural Networks, vol. 7, no. 5, pp. 777–781, 1994.
  29. D. H. Wolpert, "Stacked generalization," Neural Networks, vol. 5, no. 2, pp. 241–259, 1992.
  30. R. P. W. Duin and D. M. J. Tax, "Experiments with classifier combining rules," in Multiple Classifier Systems, vol. 1857 of Lecture Notes in Computer Science, pp. 16–29, Springer, 2000.
  31. L. Deng and J. C. Platt, "Ensemble deep learning for speech recognition," in Proceedings of the 15th Annual Conference of the International Speech Communication Association (INTERSPEECH '14), pp. 1915–1919, Singapore, September 2014.
  32. J. J. Xie, B. Xu, and Z. Chuang, "Horizontal and vertical ensemble with deep representation for classification," in Proceedings of the International Conference on Machine Learning Workshop on Representation Learning (ICML '13), Atlanta, Ga, USA, 2013.
  33. C.-X. Zhang, J.-S. Zhang, N.-N. Ji, and G. Guo, "Learning ensemble classifiers via restricted Boltzmann machines," Pattern Recognition Letters, vol. 36, no. 1, pp. 161–170, 2014.
  34. X. H. Qiu, L. Zhang, Y. Ren et al., "Ensemble deep learning for regression and time series forecasting," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), Orlando, Fla, USA, 2014.
  35. X. H. Zhang, D. Povey, and S. Khudanpur, "A diversity-penalizing ensemble training method for deep learning," in Proceedings of the 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, 2015.
  36. W. Huang, H. Hong, K. G. Bian, X. Zhou, G. Song, and K. Xie, "Improving deep neural network ensembles using reconstruction error," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '15), pp. 1–7, Killarney, Ireland, July 2015.
  37. L. Romaszko, "A deep learning approach with an ensemble-based neural network classifier for black box ICML 2013 contest," in Proceedings of the IEEE 12th International Conference on Data Mining Workshops, pp. 865–868, Brussels, Belgium, 2012.
  38. I. Hwang, H. Park, and J. Chang, "Ensemble of deep neural networks using acoustic environment classification for statistical model-based voice activity detection," Computer Speech & Language, vol. 38, pp. 1–12, 2016.
  39. X. Zhou, L. Xie, P. Zhang, and Y. Zhang, "An ensemble of deep neural networks for object tracking," in Proceedings of the IEEE International Conference on Image Processing (ICIP '14), pp. 843–847, Paris, France, October 2014.
  40. M. Barghash, "An effective and novel neural network ensemble for shift pattern detection in control charts," Computational Intelligence and Neuroscience, vol. 2015, Article ID 939248, 9 pages, 2015.
  41. A. J. C. Sharkey, "On combining artificial neural nets," Connection Science, vol. 8, no. 3-4, pp. 299–314, 1996.
  42. Y. LeCun, L. Bottou, Y. Bengio et al., "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  43. G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
  44. Y. Bengio, P. Lamblin, D. Popovici et al., "Greedy layer-wise training of deep networks," in Proceedings of Advances in Neural Information Processing Systems (NIPS '06), pp. 153–160, Vancouver, Canada, 2006.
  45. R. F. Zhang, C. P. Li, and D. Y. Jia, "A new multi-channels sequence recognition framework using deep convolutional neural network," Procedia Computer Science, vol. 53, pp. 383–390, 2015.
  46. Y. Zheng, Q. Liu, E. Chen, Y. Ge, and J. L. Zhao, "Time series classification using multi-channels deep convolutional neural networks," in Web-Age Information Management, F. Li, G. Li, S.-W. Hwang, B. Yao, and Z. Zhang, Eds., vol. 8485 of Lecture Notes in Computer Science, pp. 298–310, 2014.
  47. J. Dong, J. W. Zhang, H. H. Zhu, L. P. Wang, X. Liu, and Z. J. Li, "Wearable ECG monitors and its remote diagnosis service platform," IEEE Intelligent Systems, vol. 27, no. 6, pp. 36–43, 2012.
  48. H. H. Zhu, Research on ECG Recognition Critical Methods and Development on Remote Multi Body Characteristic Signal Monitoring System, University of Chinese Academy of Sciences, Beijing, China, 2013.
  49. L. P. Wang, Study on Approach of ECG Classification with Domain Knowledge, East China Normal University, Shanghai, China, 2013.
  50. J.-W. Zhang, X. Liu, and J. Dong, "CCDD: an enhanced standard ECG database with its management and annotation tools," International Journal on Artificial Intelligence Tools, vol. 21, no. 5, Article ID 1240020, 26 pages, 2012.
  51. T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon, "Accelerating the convergence of the back-propagation method," Biological Cybernetics, vol. 59, no. 4-5, pp. 257–263, 1988.
  52. Theano documentation, http://deeplearning.net/software/theano/.
  53. T. Evgeniou, M. Pontil, and A. Elisseeff, "Leave one out error, stability, and generalization of voting combinations of classifiers," Machine Learning, vol. 55, no. 1, pp. 71–97, 2004.
  54. C. Ye, B. V. K. Vijaya Kumar, and M. T. Coimbra, "Heartbeat classification using morphological and dynamic features of ECG signals," IEEE Transactions on Biomedical Engineering, vol. 59, no. 10, pp. 2930–2941, 2012.
  55. N. V. Thakor, J. G. Webster, and W. J. Tompkins, "Estimation of QRS complex power spectra for design of a QRS filter," IEEE Transactions on Biomedical Engineering, vol. 31, no. 11, pp. 702–706, 1984.
  56. X. H. Zhou, N. A. Obuchowski, and D. K. McClish, Statistical Methods in Diagnostic Medicine, John Wiley & Sons, New York, NY, USA, 2nd edition, 2011.
  57. M. Wu and J. Ye, "A small sphere and large margin approach for novelty detection using training data with outliers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 11, pp. 2088–2092, 2009.
  58. X. Liu, Atlas of Classical Electrocardiograms, Shanghai Science and Technology Press, Shanghai, China, 1st edition, 2011.
  59. J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

Copyright © 2016 Lin-peng Jin and Jun Dong. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
