Abstract

Ensemble learning has been shown, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on it. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages over some well-known ensemble methods, such as Bagging and AdaBoost.

1. Introduction

In the field of pattern recognition, the goal of classification is, in essence, to construct a decision-making function. For linear classification problems, the best such function can be obtained via simple calculation, while for nonlinear classification problems, such as image recognition and biomedical time series classification, it is not easy to determine. The processing flow of traditional methods is as follows: feature vectors are extracted from raw data first (feature selection is conducted when necessary), and then a suitable model based on them is employed for classification. If we let g and f be the functions corresponding to these two parts, the constructed decision-making function can be written as y = f(g(x)). However, with many of the existing feature extraction technologies and classification algorithms, we cannot obtain highly complicated nonlinear functions. For example, principal component analysis and independent component analysis are both linear dimensionality reduction algorithms, the wavelet transformation is a simple integral transformation, and the Gaussian mixture model is made of a finite number of Gaussian functions. Therefore, many traditional methods do not perform very well in hard artificial intelligence tasks.

Having achieved great success in complicated fields of pattern recognition in recent years, deep learning [1, 2] refers to deep neural networks (DNNs) with more than three layers, which inherently fuse "feature extraction" and "classification" into a single learning body and directly construct a decision-making function. Obviously, a DNN's ability to construct nonlinear functions becomes stronger with an increasing number of layers and neurons, but the number of network weights that need to be adjusted increases significantly as well. On the other hand, with ensemble learning, which combines multiple classifiers [3], we can also obtain complicated decision-making functions. As shown in Figure 1, we can obtain a nonlinear classification model via seven linear classifiers (the filled and unfilled regions denote two classes, resp.). In fact, the constructive mechanism of ensemble learning is the same as that of the support vector machine, which constructs nonlinear functions by combining multiple kernel functions.
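To make the mechanism of Figure 1 concrete, the following minimal sketch (ours, not from the paper) fuses seven linear classifiers by majority vote; the fused decision region is piecewise linear and hence nonlinear overall.

import numpy as np

# Seven linear classifiers, each a half-plane test w.x + b > 0, fused by
# majority vote; weights here are random stand-ins for trained classifiers.
rng = np.random.default_rng(0)
W = rng.normal(size=(7, 2))          # normal vectors of 7 separating lines
b = rng.normal(size=7)               # offsets

def fused_predict(x):
    """Majority vote over the 7 linear classifiers for one 2-D point."""
    votes = (W @ x + b) > 0          # each linear classifier votes 0/1
    return int(votes.sum() > len(votes) / 2)

print(fused_predict(np.array([0.5, -1.0])))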

The pioneering work on ensemble learning was done in 1990 [4, 5], which proved in theory that multiple weak learning algorithms could be converted into a strong learning algorithm, and since then many scholars have carried out widespread and thorough research. In general, ensemble-learning algorithms consist of two parts: how to generate differentiated individual classifiers and how to fuse them, namely, generation strategies and fusion strategies. Next, we will provide a brief review of both of them.

There are two kinds of generation strategies, namely, the heterogeneous type and the homogeneous type. The former is such that individual classifiers are generated using different learning algorithms. We will not elaborate on this type since it is relatively simple. The latter uses the same learning algorithm, so different settings (such as learning parameters and training samples) are necessary. Many methods have been developed for this subject and can be divided into four categories.

The first way is to manipulate training samples. For instance, Bagging [7] creates multiple data sets by sampling with replacement from the original training samples, each of which is used to train an individual classifier. Boosting [8–10] is another example, in which the learning algorithm uses a different weighting or distribution over the training samples at each iteration, according to the errors of the individual classifiers. There are also other approaches, such as Cross-Validated Committees [11], Wagging [12], and Arcing [13]. The second way is to manipulate input features. Random Subspace [14] generates different randomly selected subsets of input features and trains an individual classifier on each of them, while Input Decimation [15] trains the individual classifier only on the most correlated subset of input features for each class. There are also other methods, such as Rotation Forest [16] and Similarity-Based Feature Space [17]. The third way is to manipulate class labels. For instance, Output Coding [18–20] decomposes a multiclassification task into a series of binary-classification subtasks and trains individual classifiers for them. Class-Switching [21, 22] is another example, which generates an individual classifier based on randomly changing the class label of a fraction of the training samples. The last way is to inject randomness into the learning algorithm. For example, using the backpropagation (BP) algorithm, the resulting classifiers can be quite different if neural networks with different initial weights are applied to the same training samples [23]. There are also other approaches such as Randomized First-Order Inductive Learner [24] and Random Forest [25].
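As an illustration of the first category, here is a minimal sketch of the Bagging generation strategy, assuming a generic train function (all names are ours).

import numpy as np

# Sketch of Bagging [7]: each individual classifier is trained on a
# bootstrap replicate (sampling with replacement) of the training set.
def bagging_train(X, y, n_classifiers, train, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_classifiers):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        models.append(train(X[idx], y[idx]))        # one individual classifier
    return models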

In a word, the core of generation strategies is to make individual classifiers different (independent errors and diversity), and only when this condition is satisfied can the classification performance be improved; that is, a good decision-making function can be constructed. As for fusion strategies, Major Voting [26] is one of the most popular methods, in which each individual classifier votes for a specific class, and the predicted class is the one that collects the largest number of votes. Simple Average and Weighted Average [27] are also commonly used. Besides these, one can also employ other methods to combine individual classifiers, such as Dempster-Shafer Combination Rules [28], Stacking Method [29], and Second-Level Trainable Combiners [30].
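A minimal sketch of the two most common fusion strategies mentioned above (function names are ours):

import numpy as np

# `preds` holds the class label predicted by each individual classifier;
# `probs` holds one per-class probability vector per classifier.
def major_voting(preds):
    values, counts = np.unique(preds, return_counts=True)
    return values[np.argmax(counts)]           # class with the most votes

def simple_average(probs):
    return np.argmax(np.mean(probs, axis=0))   # average, then pick best class

print(major_voting(np.array([1, 0, 1])))                    # -> 1
print(simple_average(np.array([[0.6, 0.4], [0.3, 0.7]])))   # mean [0.45, 0.55] -> 1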

Since both deep learning and ensemble learning have advantages in constructing complicated nonlinear functions, the combination of the two can better handle hard artificial intelligence tasks. Deng and Platt [31] adopted linear and log-linear stacking methods to fuse convolutional, recurrent, and fully connected DNNs. Xie et al. [32] proposed three DNN-based ensemble methods, that is, fusing a series of classifiers whose inputs are the representations of intermediate layers and using Major Voting and Stacking Method to fuse a series of classifiers obtained within a relatively stable range of epochs. Zhang et al. [33] presented several methods for integrating Restricted Boltzmann Machines with Bagging to construct multiple individual classifiers. Qiu et al. [34] employed a model of Support Vector Regression (Stacking Method) to aggregate the outputs of various deep belief networks. Zhang et al. [35] trained an ensemble of DNNs whose initial weights were initialized differently and penalized the differences between the output of each DNN and their average output. Huang et al. [36] presented an ensemble criterion for DNNs based on the reconstruction error. In conclusion, we can use many existing strategies to construct a good ensemble of DNNs, such as setting different architectures, injecting noise, and employing the framework of AdaBoost [37–41].

In this paper, we propose a novel DNN-based ensemble method for biomedical time series classification. Based on the local and distorted view transformations, different types of digital filters and different validation mechanisms are used to generate individual classifiers, and "subview prediction" and "Simple Average" are utilized to fuse them. In what follows, Section 2 presents our proposed method in detail. In Sections 3 and 4, the experimental setup is described and the experimental results are reported. Section 5 concludes the paper.

2. Methodologies

Figure 2 depicts the full process of the proposed method. First, we utilize different filtering methods to preprocess the biomedical time series and selectively conduct a downsampling operation; then we employ the explicit method and the implicit method, respectively, to train two DNNs. In the testing phase, we first apply "subview prediction" to the two DNNs independently and then use "Simple Average" to combine their outputs.

2.1. Filtering View

In practical applications, collected biomedical time series are often contaminated by interfering noise. Although we can perform denoising, some useful information may be lost in doing so. DNNs have the ability to capture useful information while ignoring interfering noise after learning from a sufficient number of training samples, and an effective strategy for homogeneous ensemble learning is to make the input data different. Therefore, we simply extract different filtering views from the raw biomedical time series.
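As a sketch of how two filtering views might be extracted: the band-pass passband of 0.5–40 Hz follows Section 3.3, while the low-pass cutoff and the filter order here are illustrative assumptions, not values prescribed by the paper.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                                  # CCDD sampling rate
b_lp, a_lp = butter(4, 40.0, btype="low", fs=fs)            # low-pass view
b_bp, a_bp = butter(4, [0.5, 40.0], btype="band", fs=fs)    # band-pass view

def filtering_views(x):
    """Return two differently filtered views of one raw channel."""
    return filtfilt(b_lp, a_lp, x), filtfilt(b_bp, a_bp, x)

x = np.random.randn(5000)              # stand-in for one ECG channel
view_a, view_b = filtering_views(x)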

2.2. Deep Neural Network

At present, DNNs mainly include convolutional neural networks (CNNs) [42], deep belief networks [43], and stacked denoising autoencoders [44]. Among these, CNNs utilize "weight sharing" and "pooling" so that the number of weights does not increase dramatically when the numbers of input neurons and layers are very large, which is why they are widely used in various fields of pattern recognition. However, as models developed for images, CNNs perform convolution operations in both the horizontal and vertical directions. This is reasonable for image data, which are correlated in both directions. However, for biomedical time series with multiple channels (also organized as a matrix), directly employing CNNs for classification is not very appropriate, since the data in the horizontal direction (intrachannel) are correlated while the data in the vertical direction (interchannel) are independent. For this reason, previous works [6, 45, 46] developed multichannel convolutional neural networks (MCNNs), which possess better classification performance.

An example of a 3-stage MCNN is shown in Figure 3: the data of each channel first go through three different convolution units (CUs); then information from all the channels is fed into a fully connected (FC) layer; finally, the predictive value is outputted by a logistic regression (LR) layer. Note that a CU consists of a convolutional layer and a subsampling layer (max-pooling layer), and "1D-Conv" denotes a one-dimensional convolution operation.
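For concreteness, the following numpy sketch (ours) implements one CU, that is, a one-dimensional "valid" convolution followed by non-overlapping max-pooling, applied to each channel independently; the real MCNN additionally has multiple feature maps, nonlinearities, and trained weights.

import numpy as np

def conv1d_valid(x, kernel):
    """1-D 'valid' convolution of one channel with one kernel."""
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def max_pool1d(x, step):
    """Non-overlapping max-pooling with the given step."""
    n = len(x) // step
    return x[:n * step].reshape(n, step).max(axis=1)

x = np.random.randn(8, 1700)                 # 8 channels, 1700 points each
kernel = np.random.randn(21)                 # one 1 x 21 convolution kernel
out = np.stack([max_pool1d(conv1d_valid(ch, kernel), 7) for ch in x])
print(out.shape)                             # (8, 240)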

2.3. Explicit Training

A DNN constructs a nonlinear function once each network weight is assigned a value. Obviously, the number of constructible nonlinear functions becomes large as the number of weights increases. The nature of network training is to determine which function is the best for a given problem. As the most commonly used method, the BP algorithm cannot find a good nonlinear function unless there are enough training samples. This does not necessarily mean enlarging the training set itself, but rather increasing the number of training samples presented to the network. Fortunately, the virtual sample technology is up to this task; its core is to perform a transformation on the biomedical time series under the premise of preserving class labels. There are mainly two such transformations: the local view transformation, which extracts subseries, and the distorted view transformation, which adds distortion (e.g., low-amplitude random data or bursts).
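A minimal sketch of the two view transformations, assuming a C × F1 input series: the distortion probability below is an illustrative assumption, while the amplitude bound of 0.15 follows Section 3.2.

import numpy as np

rng = np.random.default_rng(0)

def local_view_transform(x, F2, start=None):
    """Cut a C x F2 subseries out of a C x F1 series x."""
    if start is None:                          # "rand": random start position
        start = rng.integers(0, x.shape[1] - F2 + 1)
    return x[:, start:start + F2]

def distorted_view_transform(x, prob=0.9, max_amp=0.15):
    """Add low-amplitude random data with high probability."""
    if rng.random() < prob:
        x = x + rng.uniform(-max_amp, max_amp, size=x.shape)
    return x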

Generally speaking, "explicit training" is adopted to train a DNN in supervised learning; that is, besides training samples, there is a small number of independent validation samples used to evaluate the obtained model during the training phase. Suppose the size of a biomedical time series is C × F1 (C is the number of channels; F1 is the number of data points in each channel) and the number of input neurons is C × F2 (F2 < F1); the training process can be described as follows: a random subseries is first extracted from the training sample (the local view transformation), and then some type of distortion is added to it with high probability (the distorted view transformation); finally, backpropagation is invoked. Every time PMax training samples have been presented to the network, the weights are adjusted and the current DNN model is tested on particular subseries extracted from the validation samples. If the accuracy (or another metric) is the highest up to the present, the current DNN model is saved. Afterwards, the next training sample is chosen to repeat the process. Algorithm 1 gives the pseudocode of this training method.

Algorithm 1: Explicit Training
Input: Training Samples (x_i, y_i), i = 1, ..., N; Validation Samples (v_j, t_j), j = 1, ..., K
Output: A DNN Model
Begin
 Best = 0
 while (!StopCondition)
  dW = 0                                   //Initialize the weight changes
  for i = 1 to N
   x' = LocalViewTransform(x_i, rand)      //Perform the local view transformation (start from a random position)
   x' = DistortedViewTransform(x', rand)   //Perform the distorted view transformation with high probability
   dW = dW + BackPropagation(x', y_i)      //Invoke backpropagation and accumulate
   if (i % PMax == 0)                      //PMax training samples have been presented to the network
    UpdateNN(dW)                           //Adjust the weights
    dW = 0
    Matrix = 0                             //Initialize the confusion matrix
    for j = 1 to K
     v' = LocalViewTransform(v_j, fixed)   //Perform the local view transformation (start from a particular position)
     Matrix = Matrix + Test(v', t_j)       //Test the current DNN model
    end
    if (Performance(Matrix) > Best)
     Best = Performance(Matrix)
     SaveNN(Model)
    end
   end
  end
 end
End
2.4. Implicit Training

In "explicit training," the DNN model is tested on validation samples at regular intervals in order to avoid overfitting, which also means that the selection of validation samples significantly influences the classification performance. However, handpicking representative samples is not easy and is limited by practical conditions in some cases. For this reason, we develop "implicit training" in this study. As shown in Algorithm 2, most of the steps are the same as those in Algorithm 1; the main difference lies in the validation mechanism. Every time PMax training samples have been presented to the network, the weights are adjusted and the current DNN model is tested on particular subseries extracted from the training samples used between the two adjacent weight-updating processes. At the end of every training epoch, the DNN model is saved if the total accuracy (or another metric) is the highest up to the present.

Algorithm 2: Implicit Training
Input: Training Samples (x_i, y_i), i = 1, ..., N
Output: A DNN Model
Begin
 Best = 0
 while (!StopCondition)
  dW = 0                                   //Initialize the weight changes
  Matrix = 0                               //Initialize the confusion matrix
  for i = 1 to N
   x' = LocalViewTransform(x_i, rand)      //Perform the local view transformation (start from a random position)
   x' = DistortedViewTransform(x', rand)   //Perform the distorted view transformation with high probability
   dW = dW + BackPropagation(x', y_i)      //Invoke backpropagation and accumulate
   if (i % PMax == 0)                      //PMax training samples have been presented to the network
    UpdateNN(dW)                           //Adjust the weights
    dW = 0
    for j = i - PMax + 1 to i              //Training samples used between two adjacent weight-updating processes
     v' = LocalViewTransform(x_j, fixed)   //Perform the local view transformation (start from a particular position)
     Matrix = Matrix + Test(v', y_j)       //Test the current DNN model
    end
   end
  end
  if (Performance(Matrix) > Best)
   Best = Performance(Matrix)
   SaveNN(Model)
  end
 end
End

One might suspect that this method will result in overfitting, but that is not the case. For training, we extract random subseries and add distortion to them with high probability; for validation, we extract particular subseries and do not add distortion. Hence, the probability of overlap is very small. Of course, this validation mode can be used only in situations where the virtual sample technology is applied.

2.5. Subview Prediction

The output of the DNN is a probability value that ranges from 0 to 1. If the value is about 0.5, the classification confidence is low. On the other hand, in the training phase, we utilize the virtual sample technology, including the local view transformation and the distorted view transformation (collectively called the view transformations), to increase the number of training samples presented to the network. With these considerations in mind, we develop a new testing method called "subview prediction."

As shown in Figure 4, the testing process can be described as follows: subseries starting from different positions are first extracted from the testing sample; then a type of distortion used in the training phase is selectively added to them; finally, their predictive values outputted by the DNN are aggregated by the average rule (for the sake of simplicity). Note that only one subseries is extracted and tested by the DNN in "simple prediction."
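A minimal sketch of "subview prediction," assuming model is any callable that maps a C × F2 subseries to a probability; the nine start positions follow Section 3.2 (converted to zero-based indices).

import numpy as np

def subview_predict(model, x, F2, starts=range(0, 225, 25)):
    """Average the DNN outputs over subseries starting at different positions."""
    probs = [model(x[:, s:s + F2]) for s in starts]
    return np.mean(probs, axis=0)              # the average rule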

2.6. Simple Average

Simple Average [27] is employed to fuse the two DNNs in this paper: given M-class data, the predicted class is determined by

 y = arg max_{1 ≤ m ≤ M} (1/2) ∑_{k=1}^{2} p_k^m,

where p_k^m (m = 1, ..., M; k = 1, 2) is the probability value predicted on the mth class by DNN k. Of course, we can use other fusion methods, such as Weighted Average [27], Dempster-Shafer Combination Rules [28], and Second-Level Trainable Combiners [30].
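A minimal sketch of this fusion rule for two probability vectors (names are ours):

import numpy as np

def simple_average_fusion(p1, p2):
    """p1, p2: length-M probability vectors from DNN 1 and DNN 2."""
    return int(np.argmax((np.asarray(p1) + np.asarray(p2)) / 2.0))

print(simple_average_fusion([0.3, 0.7], [0.6, 0.4]))   # mean [0.45, 0.55] -> class 1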

3. Experimental Setup

In this study, we apply the proposed ensemble method to one biomedical application, namely, the classification of normal and abnormal electrocardiogram (ECG) recordings of short duration. This is useful for telemedicine centers, where abnormal recordings are delivered to physicians for further interpretation after computer-assisted ECG analysis algorithms filter out normal ones, so that diagnostic efficiency is greatly increased [47]. However, this classification task is rather hard due to wide variations in ECG characteristics among different individuals, and many traditional methods do not perform well on it [48, 49]. In the research work [6], "low-pass filtering" and "downsampling (from the original 500 Hz to 200 Hz)" are successively applied to the ECG recordings first, and then one MCNN model is obtained by "explicit training." Tested on 151,274 recordings from the Chinese Cardiovascular Disease Database (CCDD) [50], it achieved the best results to date. To ensure a fair comparison, the same ECG dataset is used to evaluate the performance of the newly proposed method.

3.1. ECG Dataset

The CCDD consists of 179,130 standard 12-lead ECG recordings with a sampling frequency of 500 Hz. After discarding exceptional data, we choose 175,943 recordings for the numerical experiments, where the numbers of training samples, validation samples, and testing samples (nine groups obtained from different sources) are 12,320, 560, and 163,063, respectively. Note that training samples and validation samples are combined in "implicit training." These recordings are first processed using a digital filter, and then a downsampling operation (from the original 500 Hz to 200 Hz) is conducted. Finally, 8 × 1900 sampling points are available for each recording. The Appendix shows the details of the process [6].
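Since 500 Hz to 200 Hz is a rational factor of 2/5, the downsampling step can be sketched with polyphase resampling; this snippet is illustrative (the filtering step is covered in Section 2.1), and 9.5 s × 200 Hz gives the 1900 points per lead.

import numpy as np
from scipy.signal import resample_poly

x = np.random.randn(8, 4750)                  # 8 leads, 9.5 s at 500 Hz
x_ds = resample_poly(x, up=2, down=5, axis=1) # 500 Hz -> 200 Hz
print(x_ds.shape)                             # (8, 1900)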

3.2. Individual Classifier

To ensure a fair comparison, all ensemble methods in the numerical experiments employ MCNNs as individual classifiers. In this study, two different network architectures, namely, MCNN[A] and MCNN[B], are used for different purposes. The first is a 3-stage MCNN whose parameters are set as follows: the number of neurons in the input layer is 8 × 1700 (or 1 × 1700 when a single lead is used); the sizes of the three convolution kernels are 1 × 21, 1 × 13, and 1 × 9 (1 denotes the size in the vertical direction; 21 denotes the size in the horizontal direction), respectively; the sizes of the three subsampling steps are 1 × 7, 1 × 6, and 1 × 6, respectively; the numbers of the three feature maps are 6, 7, and 5, respectively; and the numbers of neurons in the FC layer and the LR layer are 50 and 1, respectively. The second is also a 3-stage MCNN, whose parameters are set as follows: the number of neurons in the input layer is 1 × 1900; the sizes of the three convolution kernels are 1 × 18, 1 × 12, and 1 × 8, respectively; the sizes of the three subsampling steps are 1 × 7, 1 × 6, and 1 × 6, respectively; the numbers of the three feature maps are 6, 7, and 5, respectively; and the numbers of neurons in the FC layer and the LR layer are 50 and 1, respectively.
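As a quick sanity check (ours), the stated kernel and pooling sizes tile the input lengths of both architectures exactly:

def stage_sizes(n, convs, pools):
    """Per-channel length after 3 stages of valid convolution + pooling."""
    for k, p in zip(convs, pools):
        n = (n - k + 1) // p
    return n

print(stage_sizes(1700, [21, 13, 9], [7, 6, 6]))   # MCNN[A]: 1700 -> 240 -> 38 -> 5
print(stage_sizes(1900, [18, 12, 8], [7, 6, 6]))   # MCNN[B]: 1900 -> 269 -> 43 -> 6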

The local view transformation can be used for MCNN[A], but it cannot be used for MCNN[B], since the latter's input already covers all 1900 sampling points of a lead. As for the distorted view transformation, it can be used for both MCNN[A] and MCNN[B]. In this study, random data with low amplitude (the maximal amplitude is lower than 0.15) are added to the ECG recordings during the training phase, and this operation (i.e., the distorted view transformation) is skipped in the testing phase for the sake of simplicity.

Using MCNN[A], 8 × 1700 local segments starting from the 1st sampling point are extracted (from validation samples in "explicit training" and from training samples in "implicit training") to evaluate the obtained model during the training phase. In the testing phase, nine 8 × 1700 local segments starting from the 1st, 26th, 51st, 76th, 101st, 126th, 151st, 176th, and 201st sampling points are extracted from a given ECG recording if "subview prediction" is used; otherwise, only one 8 × 1700 local segment starting from the 1st sampling point is extracted. Using MCNN[B], we train and test the neural networks in the traditional manner.

We employ only the BP algorithm with momentum and variable step size [51] in supervised learning; its related parameters are set as follows: the initial step size is 0.02; the step decay is 0.01, except for the 2nd and 3rd epochs (set to 0.0505); PMax is 560; and the maximal number of training epochs is 500. The experimental platform is an Intel Core i7-3770 CPU @3.4 GHz with 8.0 GB RAM running the 64-bit Windows 7 operating system, and the theano-0.6rc implementation [52] is used.

3.3. Ensemble Method

Bagging [7] and AdaBoost [8] are two of the most popular and effective ensemble-learning methods, and they work especially well for unstable learning algorithms (i.e., ones where small changes in the training data lead to large changes in the individual classifiers), such as neural networks and decision trees [53]. In addition, Ye et al. [54] proposed an ensemble method for multilead ECG that has excellent classification performance. To demonstrate the effectiveness of our proposed method, we compare it with these ensemble methods. Their configuration parameters are as follows:

(1) ViewEL (i.e., our proposed method): filtering views A and B in Figure 2 are set as "low-pass filtering" and "band-pass filtering with a passband of 0.5–40 Hz" [55], respectively. Note that there is no problem if we exchange "low-pass filtering" for "band-pass filtering," since an effective measure for homogeneous ensemble learning is to make the input data different; here, we simply make the 1st path consistent with the research work [6].

(2) AdaBoost: the filtering view is set as "low-pass filtering." In the training phase, we utilize the explicit method to train two MCNN[A] models based on the framework of AdaBoost. In the testing phase, we apply "subview prediction" to the individual classifiers independently and fuse them afterwards.

(3) Bagging: most of the configuration parameters are the same as those of AdaBoost. The only difference is that the two MCNN[A] models are obtained based on the framework of Bagging.

(4) YeC: the filtering view is set as "low-pass filtering." We utilize the explicit method to train one MCNN[A] model for each lead, so there are a total of 8 MCNN[A] models, since each incoming ECG recording contains 8 leads. In the testing phase, we first apply "subview prediction" to each individual classifier and then employ the Bayesian approach (product rule) to fuse them [54].

(5) YeCRaw: most of the configuration parameters are the same as those of YeC. The only difference is that we replace MCNN[A] with MCNN[B]. Note that neither the local view transformation nor "subview prediction" can be used in this method.

3.4. Performance Metrics

We utilize the following metrics [56] to evaluate the algorithms: specificity (Sp), sensitivity (Se), geometric mean (GMean), accuracy (Acc), and negative predictive value (NPV), given by

 Sp = TN / (TN + FP),  Se = TP / (TP + FN),  GMean = √(Sp × Se),
 Acc = (TP + TN) / (TP + TN + FP + FN),  NPV = TN / (TN + FN),

where TN and TP are the numbers of normal and abnormal samples correctly classified, respectively, FN is the number of abnormal samples classified as normal, and FP is the number of normal samples classified as abnormal. In addition, we also utilize metrics related to the ROC (receiver operating characteristic) curve, including AUC (area under the ROC curve), TPR (the vertical axis of the ROC curve, i.e., "Sp" in this study), and FPR (the horizontal axis of the ROC curve, i.e., "1 − Se" in this study).
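These metrics follow directly from the confusion counts; a minimal sketch (the counts below are made up for illustration):

import math

def metrics(TP, TN, FP, FN):
    """The five metrics above, with abnormal as the positive class."""
    Sp = TN / (TN + FP)
    Se = TP / (TP + FN)
    return {"Sp": Sp, "Se": Se,
            "GMean": math.sqrt(Sp * Se),
            "Acc": (TP + TN) / (TP + TN + FP + FN),
            "NPV": TN / (TN + FN)}

print(metrics(TP=900, TN=850, FP=150, FN=100))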

4. Results

In this section, we report the experimental results on the CCDD. There are nine groups of testing samples with different sources, namely, DS1~DS9. Besides presenting the testing results of each group, we summarize the averages and the standard deviations of the performance metrics for each algorithm. The metric GMean takes into consideration the classification results on both the positive and negative classes [57], while the key requirement in the classification of normal and abnormal ECG recordings of short duration is to make the TPR under the condition of NPV being equal to 95% (TPR95) as high as possible [58]. Therefore, we perform the Wilcoxon signed ranks test [59] to investigate whether the differences in GMean and TPR95 achieved by the different algorithms are statistically significant. Generally speaking, a p value of less than 0.05 indicates that the difference is significant, and the smaller the p value, the more significant the difference.
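A sketch of this test, assuming paired per-group GMean values for two algorithms over DS1~DS9 (the numbers below are made up for illustration):

from scipy.stats import wilcoxon

gmean_a = [0.83, 0.85, 0.81, 0.84, 0.86, 0.82, 0.80, 0.85, 0.83]
gmean_b = [0.80, 0.84, 0.79, 0.82, 0.85, 0.80, 0.78, 0.83, 0.81]
stat, p = wilcoxon(gmean_a, gmean_b)   # paired, nonparametric
print(p < 0.05)                        # significant if the p value is below 0.05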

4.1. Effectiveness Evaluation

We first investigate the contribution of "subview prediction." From the testing results presented in Tables 1 and 2, we can see that most of the performance metrics increase and the classification performance improves regardless of whether "explicit training" or "implicit training" is used. From the results of the statistical analysis summarized in Tables 3 and 4, we know that "subview prediction" significantly outperforms "simple prediction" in terms of GMean, no matter which training method is used. In addition, we also find that "explicit training" and "implicit training" each have advantages and disadvantages: the former yields higher Se while the latter yields higher Sp. From the perspective of ensemble learning, these two classifiers are exactly what we want. Next, we present their fusion results in Tables 5 and 6.

It is observed from the testing results presented in Table 5 that the fusion model always has the highest AUC. From the statistical results summarized in Table 6, we can see that the fusion model significantly outperforms the explicit [6] and implicit models (the classification models obtained by "explicit training" and "implicit training," resp., with results based on "subview prediction") in terms of GMean, and all the other models in terms of TPR95. All these findings illustrate the effectiveness of our newly developed generation and fusion strategies.

4.2. Comparison with Other Methods

We compare the performance of the proposed method (i.e., ViewEL) with that of YeCRaw, YeC, Bagging, and AdaBoost. From the testing results presented in Table 7, we can see that ViewEL produces the best results on many of the performance metrics. From the results of the statistical analysis summarized in Table 8, we know that ViewEL significantly outperforms YeCRaw and Bagging in terms of both GMean and TPR95, and YeC in terms of GMean. Although we cannot say that ViewEL significantly outperforms AdaBoost in terms of GMean, the p value is only slightly greater than 0.05.

The essential difference between YeC and YeCRaw is whether the local view transformation and "subview prediction" are used. Table 7 shows that YeC increases almost all the metrics except Sp and significantly outperforms YeCRaw in terms of both GMean and TPR95 (the p values are 0.0039 and 0.0156, resp.). Likewise, the performance of Bagging and AdaBoost would be degraded if we removed the view transformations and "subview prediction" from them. Of course, their performance could be improved if the number of DNN models were increased. For ViewEL, we could also train multiple individual classifiers to enhance performance using different validation samples, such as time subseries starting from other positions or time subseries extracted from some part of the training samples in "implicit training." However, besides classification performance, we should also take into consideration computational efficiency, since the DNN is a model with high complexity. In practical applications (telemedicine centers), an effective ensemble method with a small number of individual classifiers is needed.

5. Conclusion

The current work proposes a novel DNN-based ensemble method that uses multiple view-related strategies, such as the view transformations, "implicit training," and "subview prediction." Experimental results on the CCDD demonstrate that the proposed method is effective for biomedical time series classification. Furthermore, we compare it with some well-known methods for the classification of normal and abnormal ECG recordings; our proposed method achieves comparable or better classification results than the others.

It is worth noting that this study presents a new research idea for ensemble learning, and more strategies can be incorporated into Figure 2, such as wavelet-transformation views, compressive sensing views, class-switching technology, and misclassification costs. These are all research tasks we will pursue in the future.

Appendix

Each ECG recording in the Chinese Cardiovascular Disease Database is approximately 10~20 seconds in duration and contains 12 leads, namely, I, II, III, aVR, aVL, aVF, V1, V2, V3, V4, V5, and V6, where II, III, V1, V2, V3, V4, V5, and V6 are orthogonal, while the remaining four can be linearly derived from them. We obtained these data from different hospitals (i.e., real clinical environments) in Shanghai and Suzhou successively. Cardiologists gave the diagnostic conclusion for each recording, which may contain more than one disease type. We used hexadecimal coding (of the form "0xdddddd") to encode the disease types, which are divided into three grades, including 12 one-level types (i.e., invalid, normal, sinus rhythm, atrial arrhythmia, junctional rhythm, ventricular arrhythmia, conduction block, atrial hypertrophy, ventricular hypertrophy, myocardial infarction, ST-T change, and other abnormalities), 72 two-level types, and 297 three-level types. More details can be seen on our website (http://58.210.56.164:88/ccdd or http://58.210.56.164:66/ccdd).

In the numerical experiments, we first discard the invalid ECG recordings and those whose duration is less than 9.625 seconds. Then, a data segment of 9.5 seconds that contains only the eight orthogonal leads is extracted from each ECG recording after ignoring the first 0.125 s. Finally, each recording has 8 × 1900 sampling points at the sampling frequency of 200 Hz. We regard a recording as normal if the diagnostic conclusion is "0x020101," "0x020102," or "0x01"; otherwise, it is regarded as abnormal. Table 9 summarizes the detailed information.

The recordings from “data 944–25693” used as training samples are as follows: 4520–4584, 4586–4613, 4615–4652, 4654–4761, 4763–4967, 4969–4972, 4975–5146, 5148, 5151–5279, 5281–5300, 5302–5348, 5350–5540, 5542–5568, 5570–5713, 5715–5777, 5779, 5781–5792, 5794–5974, 5976–6074, 6076–6118, 6120–6127, 6129–6134, 6136–6206, 6208–6281, 6283–6441, 6443–6502, 6504–6538, 6540–6654, 6656–6997, 6999–7005, 7007–7012, 7014–7060, 7062–7451, 7453–7506, 7508–7531, 7533–7594, 7596–7642, 7644–7732, 7734–7739, 7741–7829, 7831–7887, 7889–7939, 7941–7946, 7948–7956, 7980–8064, 8066–8108, 16556–17045, 17047–17128, 17407–17422, 17424–17454, 17456–17928, 17930–17933, 17935–17955, 17957–18093, 18095–18258, 18260–18441, 18443–18538, 18540–18562, 18565–18642, 18644–18814, 18816–18817, 18819–18984, 18986–19327, 19329–19439, 19441–19647, 19649–19657, 19659–19852, 19854–20173, 20175–20700, 20702–20881, 20883–21252, 21254–21301, 21303–21586, 21588–21815, 21817–21865, 21867–21889, 21891–21986, 21988–22149, 22151–22226, 22228–22462, 22464–22623, 22625–22793, 22795–22935, 22937–23032, 23034–23038, 23040–23043, 23045–23067, 23069–23268, 23270–23587, 23589–23611, 23613–23973, 23975–24108, 24110–24646, 24648, 24650–24775, 24777–24820, 24822–24962, 24964–25201, 25203–25282, 25284–25301, 25303–25330, 25332–25457, 25459–25495, 25497–25606, and 25608–25691.

The recordings from “data 944–25693” used as validation samples are as follows: 4176–4278, 4362–4413, 4415–4519, 7957–7979, 17129–17389, and 17391–17406.

There are nine groups of testing samples, namely, DS1~DS9, which were obtained from hospitals located in different districts of Shanghai and Suzhou. DS1 consists of the remaining recordings from “data 944–25693,” while each of the other groups contains all the valid recordings from the corresponding dataset.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.