Research Article  Open Access
Rensong Liu, Zhiwen Zhang, Feng Duan, Xin Zhou, Zixuan Meng, "Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms", Computational Intelligence and Neuroscience, vol. 2017, Article ID 2727856, 12 pages, 2017. https://doi.org/10.1155/2017/2727856
Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms
Abstract
Motor imagery (MI) electroencephalogram (EEG) signals are widely applied in brain-computer interfaces (BCIs). However, the classified MI states are limited, and their classification accuracy rates are low because of the characteristics of nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system that is based on complex algorithms for classifying MI EEG signals. In electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used for EOG artifact preprocessing. We propose a regularized common spatial pattern (RCSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the k-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance.
1. Introduction
Brain-computer interface (BCI) provides an efficient communication bridge between the human brain and external manageable devices [1]. Among the signal-controlling BCI sources, the P300 [2], steady-state visual-evoked potential (SSVEP) [3], and motor imagery (MI) [4] signals are the most commonly used. In contrast to SSVEP and P300, MI is a self-induced brain activity, which is initiated by imagining movements of certain limbs or other body parts without the help of outside inducing factors [5]. An MI BCI system was first used based on this feature to assist humans with severe disabilities [6]. This system is also used for humanoid controls [7], entertainment game designs [8], and aircraft flight controls [9]. However, the performance of this system is largely dependent on the number of MI motion commands that can be precisely classified.
The cerebral cortex of left-handers and right-handers is anisomerous [10]. Therefore, cerebral cortex activities often present few evident differences and cannot be easily distinguished when right-handers imagine symmetric limb movements [11, 12]. Our study aims to analyze and recognize four anisomerous MI states, namely, imaginary movements with the left hand, right foot, and right shoulder and the resting state.
In general, MI pattern recognition systems involve raw MI EEG signal preprocessing, feature extraction, and pattern classification. However, subjects experience difficulty in avoiding eye movements and consequently produce electrooculogram (EOG) artifacts in raw MI EEG signals [13]. The obtained raw MI EEG signals are mainly affected by the vertical EOG (VEOG) signals generated by blinking. The preprocessing algorithms for EEG signals mainly include time domain filtering, blind source separation [14], and time-frequency domain analysis methods. Time domain methods, such as the low-pass filter method and band-pass filter method [15], have been used to eliminate EOG artifacts [16, 17]. However, time domain filtering methods cannot effectively remove the majority of the EOG artifacts. Vergult et al. [18] used blind source separation and canonical correlation analysis (CCA) to effectively denoise EOG artifacts from raw MI EEG signals, but the CCA algorithm requires the artifact components to be identified manually. Hsu et al. [19] used a time-frequency domain analysis method called discrete wavelet transform (DWT) to denoise EOG artifacts from raw EEG signals. The multiresolving feature of DWT enables nonstationary EEG signals to be considered. However, a small portion of the EOG artifacts remains in the EEG signals after the DWT denoising preprocessing is completed. Thus, a more effective preprocessing algorithm should be developed to denoise EOG artifacts.
Feature extraction is another critical step in MI pattern recognition. Common EEG features include those in the time domain, frequency domain, time-frequency domain, and spatial domain [20]. Time domain analysis is mainly conducted to extract EEG features because MI EEG signals are recorded in the time domain. For example, Khushaba et al. [21] extracted EEG features from the time domain to form a set of features that was relevant to the limb position. EEG signals also contain various frequency components. Prasad et al. [22] used power spectral density as an EEG feature. Time-frequency domain analysis methods can integrate the advantages of time domain and frequency domain analysis methods. Wang et al. [23] applied a wavelet packet transform method to extract the time and frequency information in EEG signals. However, univariate and integrated analysis methods using the time domain and frequency domain are not appropriate for multichannel EEG feature extraction [24].
After preprocessing raw MI EEG signals and extracting the features, we aimed to develop an appropriate classifier to precisely categorize the MI motion commands. Common classification algorithms for EEG features include the linear distance discriminant [25], support vector machine (SVM) [26], clustering algorithms [27], Bayesian classifiers, and back-propagation neural network (BPNN) classifiers [28]. However, these classifiers exhibit poor performance when the EEG features overlap with one another.
Considering previous studies, we propose a novel MI pattern recognition system for classifying MI EEG signals. We use the Butterworth band-pass filter to extract EEG signals having frequencies of 8–30 Hz during the preprocessing of raw EEG signals. We then apply a CCA algorithm that integrates a wavelet threshold denoising (WTD) algorithm to form a compound algorithm called the wCCA algorithm and to process the extracted frequency band signals. We also use a regularized common spatial pattern (RCSP) algorithm by incorporating the principle of generic learning [29] to extract the EEG features in the spatial domain. This approach can effectively extract connotative spatial information from multichannel EEG signals and reduce the data dimension based on minority samples. We combine the k-nearest neighbor (KNN) and SVM methods, which we call KNN-SVM, to classify the EEG features. We compare the KNN-SVM classifier to several classifiers to validate its classification performance.
The remainder of this paper is organized as follows. Section 2 describes the EEG signal acquisition. Section 3 introduces the raw EEG signal preprocessing. Section 4 explains feature extraction with the RCSP algorithm. Section 5 discusses the KNN-SVM classifier and compares it with several classifiers. Section 6 presents our experimental results and a discussion. Section 7 provides the conclusions and recommends concepts for future studies.
2. EEG Signal Acquisition
We selected 14 Ag/AgCl electrodes that were relevant to the MI brain region based on the Brodmann brain function partition and the international 10/20 electrode lead system [30, 31]. Among the 14 electrodes, two were placed in the central brain region, six (T7, P3, P7, CP3, FC3, and C3) were in the left brain region, and six (T8, P4, P8, CP4, FC4, and C4) were in the right brain region. The electrodes in the left and right brain regions are symmetric (Figure 1). A bipolar lead mode with two electrodes was used to record the VEOG signal: one electrode was placed above the left eyebrow, and the other electrode was placed on the lower edge of the left eye socket. Monopolar derivations were used throughout the recordings. In this process, the left mastoid and forehead served as the reference and ground, respectively. The signals were sampled at 256 Hz, and an additional 50 Hz notch filter was enabled to suppress the power line interference by using a g.tec device (g.tec medical engineering GmbH, Schiedlberg, Austria).
A subject sat on a relaxing chair, and the subject's arms were placed in a relaxed position on his or her legs. The paradigm consisted of four different tasks, namely, imaginary movements with the left hand (LH), right foot (RF), right shoulder (RS), and the resting state (R). At the beginning of a trial (t = 0 s), a fixation cross "+" was displayed on a black screen. In addition, a short acoustic warning tone was presented. After two seconds (t = 2 s), a text prompt for the left hand (LH), right foot (RF), right shoulder (RS), or resting state (R) was displayed in the center of the screen and remained on the screen for 2 s. This prompted the subject to perform the desired MI task. The subject was asked to continue performing the MI task until the fixation cross "+" disappeared from the screen at t = 7 s. A short break followed, with a blank screen lasting for two seconds. The paradigm is illustrated in Figure 2.
Five healthy subjects, namely, three men (Subjects A, B, and D) who were 30, 25, and 23 years of age, respectively, and two women (Subjects C and E) who were 21 and 23 years of age, respectively, participated in the experiment and performed the four MI tasks. Subject A was left-handed, and the other subjects were right-handed. Each MI motion state was recorded for one session, and altogether four sessions were recorded for each subject. Each session consisted of 60 trials separated by short breaks lasting a couple of minutes. For each state, 50 trials were selected for training, and the remaining 10 trials were used for testing. In total, 240 trials were performed per subject.
3. Raw EEG Signal Preprocessing
For the raw EEG signals, the Butterworth band-pass filter was used to extract the 8–30 Hz frequencies of the signals. The brain is a good conductor of electricity. As such, EOG signals spread from the forehead to the back of the head and thus traverse the entire head. We considered the different spatial distribution characteristics of the EEG and EOG signals. For the twelve symmetric electrodes (T7, T8, P3, P4, P7, P8, CP3, CP4, FC3, FC4, C3, and C4), we used the wCCA algorithm to examine the mixed signals in a new form. One signal matrix represents the EEG signals collected from the six electrodes in the left brain region, and another represents those from the six electrodes in the right brain region. The VEOG signal was added to both matrices. The first pair of components calculated through CCA decomposition exhibits the highest correlation. These components can be regarded as the most communal ingredient between the left and right brain regions, which is composed of the EOG artifacts and a small number of high-frequency EEG components. Then, wavelet threshold denoising was performed to remove the EOG artifacts and retain the small amount of high-frequency EEG components. Finally, pure EEG signals for the twelve symmetric electrodes were obtained through the wCCA algorithm processing. For the two central brain region electrodes, the wavelet basis "db4" was used to conduct a five-layer wavelet decomposition of their EEG signals. Then, the wavelet soft threshold denoising function "wdencmp" was used to process the decomposed signal components. Next, the denoised signal components were used to reconstruct the pure EEG signals with the wavelet basis "db4." The structure of the EEG signal preprocessing is shown in Figure 3.
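The band-pass step above can be sketched as a zero-phase Butterworth filter. This is a minimal illustration, assuming SciPy, the 256 Hz sampling rate from Section 2, and a fourth-order design (the filter order used by the authors is not stated):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_8_30(eeg, fs=256.0, order=4):
    """Zero-phase Butterworth band-pass filter for the 8-30 Hz MI band.

    eeg : array of shape (n_channels, n_samples)
    """
    nyq = fs / 2.0
    b, a = butter(order, [8.0 / nyq, 30.0 / nyq], btype="bandpass")
    # filtfilt runs the filter forward and backward, removing phase distortion
    return filtfilt(b, a, eeg, axis=-1)

# Example: a 10 Hz component (inside the MI band) survives,
# while a 50 Hz component (power line region) is suppressed.
fs = 256.0
t = np.arange(int(fs * 4)) / fs
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = bandpass_8_30(x[np.newaxis, :], fs)[0]
```

In practice the same call is applied channel-wise to the 14-channel recording before the wCCA stage.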
3.1. wCCA Algorithm
Next, the derivation process of the wCCA algorithm is described in detail. Suppose that X_L and X_R represent the signals collected from the left brain region and the right brain region, respectively. Among them, X_L and X_R each contain six of the 12 raw symmetric electrode EEG signals together with the VEOG signal, so each is a 7-row matrix. After the centralization of X_L and X_R, the new variables Z_L and Z_R are obtained, respectively. We then find linear combinations of these components through CCA to obtain the new variables u = w_L^T Z_L and v = w_R^T Z_R, which exhibit the highest correlation.
The obtained canonical correlation variables are the estimations of the seven raw independent source signals on each side. The weight vectors w_L and w_R are obtained by maximizing the canonical correlation coefficient

ρ = (w_L^T C_LR w_R) / sqrt((w_L^T C_LL w_L)(w_R^T C_RR w_R)), (1)

where C_LL and C_RR are the autocovariance matrices of Z_L and Z_R, and C_LR = C_RL^T is their cross-covariance matrix, where the constraints are

w_L^T C_LL w_L = 1, w_R^T C_RR w_R = 1.
The Lagrangian function is constructed to calculate the values of w_L and w_R under the premise that ρ achieves its maximum value:

L(w_L, w_R) = w_L^T C_LR w_R − (λ_L/2)(w_L^T C_LL w_L − 1) − (λ_R/2)(w_R^T C_RR w_R − 1). (2)
Setting the derivatives of (2) to zero shows that, according to (1), w_L and w_R satisfy the following eigenvalue equations:

C_LL^{-1} C_LR C_RR^{-1} C_RL w_L = ρ^2 w_L, (3)

C_RR^{-1} C_RL C_LL^{-1} C_LR w_R = ρ^2 w_R. (4)
The obtained canonical correlation variable matrices

U = W_L^T Z_L, V = W_R^T Z_R (5)

each include seven independent components, U = [u_1, …, u_7]^T and V = [v_1, …, v_7]^T. The vectors w_Li and w_Ri are the ith columns of the matrices W_L and W_R, respectively.
Next, U, V, W_L, and W_R can be calculated based on (2) to (4). The first independent components u_1 and v_1 exhibit the highest mutual correlation. Each of them is composed of EOG artifacts and several valuable EEG components. Then, we used a wavelet hard threshold noise reduction method. We first used the wavelet basis "db4" for a five-layer wavelet decomposition of u_1 and v_1. Thus, we obtained five wavelet coefficients and one scale coefficient. Then, we set any coefficient that was higher than the threshold to zero, whereas we retained the value of any coefficient that was lower than the threshold. In 1994, Donoho proposed the VisuShrink method (the unified threshold denoising method). The threshold for each coefficient is defined as follows:

T_j = σ_j · sqrt(2 ln N_j), (6)

where T_j is the threshold for the jth coefficient and N_j is the number of elements of the jth coefficient. Donoho and Johnstone [32] proposed an estimation formula for the noise standard deviation in the wavelet domain based on the median absolute value of the subband wavelet coefficients. Thus, the standard deviation of the noise in the wavelet domain is defined as follows:

σ_j = median(|d_j|) / 0.6745, (7)

where d_j is the jth coefficient.
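The VisuShrink threshold and the median-based noise estimate described above can be sketched with NumPy alone. The five-level "db4" decomposition itself would be produced by a wavelet library such as PyWavelets and is assumed here to have yielded the coefficient arrays. Note that this artifact-removal variant, as the text states, zeroes the coefficients above the threshold (the large, EOG-dominated ones) rather than below it:

```python
import numpy as np

def visu_threshold(coeffs):
    """VisuShrink threshold T = sigma * sqrt(2 ln N) for one coefficient subband,
    with sigma estimated as median(|coeffs|) / 0.6745 (Donoho & Johnstone)."""
    n = coeffs.size
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(n))

def hard_threshold_remove_large(coeffs):
    """Zero the coefficients ABOVE the threshold (large, artifact-dominated
    ones) and keep the small ones, as in the EOG-removal step above."""
    t = visu_threshold(coeffs)
    out = coeffs.copy()
    out[np.abs(out) > t] = 0.0
    return out
```

The same two functions would be applied to each of the five detail-coefficient subbands of u_1 and v_1 before reconstruction.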
After the wavelet hard threshold noise reduction processing, the six processed coefficients were used for wavelet transform reconstruction with the wavelet basis "db4," and we obtained the denoised independent components ũ_1 and ṽ_1, which are the wavelet-threshold-denoised versions of u_1 and v_1, respectively. These components and the other six independent components on each side comprise the denoised matrices Ũ and Ṽ.
According to (5), after calculating Ũ and Ṽ, we reconstructed the new variables Z̃_L = (W_L^T)^{-1} Ũ and Z̃_R = (W_R^T)^{-1} Ṽ, which are the estimates of the pure EEG signals from the left brain region and the right brain region, respectively.
Twelve pure symmetric electrode EEG signals were thus obtained: the EEG rows of Z̃_L represent the six electrode signals from the left brain region, and the EEG rows of Z̃_R represent the six electrode signals from the right brain region.
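The CCA step of the derivation above can be sketched in NumPy. The sketch solves the eigenvalue form of (3) for the first canonical pair only, which is the component the wCCA algorithm subsequently denoises; the small `reg` term is an assumption added for numerical stability and is not part of the derivation:

```python
import numpy as np

def first_canonical_pair(XL, XR, reg=1e-8):
    """Return the first pair of canonical variables of XL and XR
    (each: channels x samples) and their canonical correlation.

    Solves C_LL^-1 C_LR C_RR^-1 C_RL w_L = rho^2 w_L, the stationarity
    condition of the CCA Lagrangian, then derives w_R from w_L.
    """
    XL = XL - XL.mean(axis=1, keepdims=True)   # centralization
    XR = XR - XR.mean(axis=1, keepdims=True)
    n = XL.shape[1]
    C_LL = XL @ XL.T / n + reg * np.eye(XL.shape[0])
    C_RR = XR @ XR.T / n + reg * np.eye(XR.shape[0])
    C_LR = XL @ XR.T / n
    M = np.linalg.solve(C_LL, C_LR) @ np.linalg.solve(C_RR, C_LR.T)
    vals, vecs = np.linalg.eig(M)
    wL = np.real(vecs[:, np.argmax(np.real(vals))])
    # w_R follows from w_L up to scale: w_R ∝ C_RR^-1 C_RL w_L
    wR = np.linalg.solve(C_RR, C_LR.T @ wL)
    u, v = wL @ XL, wR @ XR
    rho = abs(np.corrcoef(u, v)[0, 1])
    return u, v, rho
```

On two 7-row matrices sharing a strong common source (the VEOG contribution), the first canonical pair recovers that shared component with a correlation near 1.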
3.2. EEG Signal Denoising
We constructed brain topographic maps of the four MI states to examine the topology of significant EEG features. Figure 4(a) shows the brain topographic maps of Subject A. The colors indicate the relative activity levels within the corresponding state. Evident differences were observed among the four brain topography maps; therefore, good classification effectiveness can be expected. Furthermore, we constructed time-frequency maps to concretely assess the activity degree of the 14 electrodes in each MI state. Figure 4(b) shows the 8–30 Hz frequency spectrum chart for the 14 electrode EEG signals of each MI state from Subject A. Taking the resting state as an example, the central electrode exhibits the lowest activity degree, whereas electrode C3 exhibits the highest activity degree.
The 14-channel raw EEG signals are mixed with EOG signals; in particular, the electrodes close to the eyes are strongly influenced by the EOG signals. Figure 5(a) shows the time domain graph of the resting state from Subject A. In Figure 5(a), the EEG signals of electrodes FC3, FC4, and the central electrode fluctuate significantly, whereas the other electrode signals are less affected by the EOG signals because these electrodes are farther from the eyes. We obtained the pure EEG signals after the EEG signal preprocessing was completed (Figure 5(b)). The denoised 14-channel EEG signals fluctuate only slightly.
4. Feature Extraction Using RCSP
After the preprocessing of the raw EEG signals, we must extract the EEG features (Figure 6). The common spatial pattern (CSP) algorithm is more effective than the traditional time-frequency domain feature extraction methods for extracting the differences in the spatial features of two types of signals. However, the CSP algorithm relies on covariance estimation from a large number of signal samples. Therefore, the feature extraction is affected by the number of samples available for training. In recent years, regularized discriminant analysis (RDA) has been used to solve small-sample problems for linear and quadratic discriminant analyses. The small-training-sample setting leads to biased estimates of the eigenvalues, and such problems can lead to instability in the feature extraction. Thus, two regularization parameters are used to address these undesirable effects.
In this paper, we adopt the improved regularized common spatial pattern (RCSP) algorithm by incorporating the principle of generic learning to extract the EEG features in the spatial domain. In RCSP, we used the principle of generic learning to address the one-training-sample problem. The training set of RCSP uses a generic database that contains subjects that are different from those to be identified. The classifier is trained to extract the discriminant information from subjects other than those that will be called on to perform recognition when in operation. The principle behind generic learning is that the discriminant information pertinent to the specific subjects (those to be identified) can be learned from other subjects because the EEG signals exhibit similar intrasubject variations. The RCSP algorithm is an improved CSP algorithm. It can provide a good approach to overcoming outlier (such as noise) sensitivity and poor robustness, which are shortcomings of having a small number of samples. There are two regularization parameters in the RCSP algorithm, β and γ. The first regularization parameter, β, controls the shrinkage of a subject-specific covariance matrix toward a "generic" covariance matrix to improve the estimation stability based on the principle of generic learning. The second regularization parameter, γ, controls the shrinkage of the sample-based covariance matrix estimation toward a scaled identity matrix to account for the bias due to the limited number of samples.
4.1. RCSP Algorithm
We assume that several subjects participated in the experiment. Let X_A and X_B denote the spatial multichannel evoked response signal matrices of two kinds of MI tasks, extracted from the multichannel MI EEG signals. Their dimensions are C × T, where C is the number of EEG channels and T is the number of samples collected for each channel. A single trial X is a C × T matrix of MI EEG signals from task A or B.
The normalized sample covariance matrix of a trial X is obtained as follows:

C(X) = (X X^T) / trace(X X^T). (8)
The two MI task classes of EEG signals are indexed by c ∈ {A, B}. For simplicity, we assume that M trials of each class are available for training for the subject of interest, indexed as X_{c,i}, where i = 1, …, M. Thus, each trial X_{c,i} has a corresponding covariance matrix C_{c,i}.
The average spatial covariance matrix for each class is then calculated as follows:

C̄_c = (1/M) Σ_{i=1}^{M} C_{c,i}. (9)
Next, the regularization technique is introduced into the equation. The regularized average spatial covariance matrix for each class is calculated as

Σ̂_c(β, γ) = (1 − γ) P̂_c(β) + (γ/C) · trace[P̂_c(β)] · I, (10)

where β and γ (0 ≤ β, γ ≤ 1) are the two regularization parameters, I is an identity matrix of size C × C, and P̂_c(β) is defined as follows:

P̂_c(β) = ((1 − β) S_c + β Ŝ_c) / ((1 − β) M + β M̂). (11)

In (11), S_c is the sum of the sample covariance matrices for all M training trials in class c, and Ŝ_c is the sum of the sample covariance matrices for a set of M̂ generic training trials from the other subjects in class c.
Next, the composite spatial covariance is formed and factorized as

Σ̂ = Σ̂_A + Σ̂_B = U Λ U^T, (12)

where U is the matrix of eigenvectors and Λ is the diagonal matrix of corresponding eigenvalues. In this paper, we adopt the convention that the eigenvalues are sorted in descending order.
Next, the whitening transformation is obtained as follows:

P = Λ^{-1/2} U^T. (13)

Σ̂_A and Σ̂_B are whitened as follows:

S_A = P Σ̂_A P^T, S_B = P Σ̂_B P^T. (14)

S_A can then be factorized as follows:

S_A = B Ψ_A B^T, (15)

where S_B shares the same eigenvectors B and Ψ_A + Ψ_B = I.
The full projection matrix of spatial filters is formed as follows:

W_0 = B^T P. (16)
For the most discriminative patterns, only the first and last m (we set m = 2) rows of W_0 are retained to form W, which is of size 2m × C, where 2m < C. For the feature extraction, a trial X is first projected as follows:

Z = W X. (17)
Then, a 2m-dimensional feature vector f is formed from the variances of the rows of Z as follows:

f_j = log( var(z_j) / Σ_{i=1}^{2m} var(z_i) ), j = 1, …, 2m, (18)

where f_j is the jth component of f, z_j is the jth row of Z, and var(z_j) is the variance of vector z_j.
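The covariance normalization, shrinkage regularization, whitening, and log-variance feature steps above can be sketched in NumPy for one two-class pair. Exact weighting conventions differ across RCSP formulations, so this is one plausible variant under the description in the text, not the authors' exact implementation; the generic-learning trials are passed in as separate lists:

```python
import numpy as np

def trial_cov(X):
    """Normalized sample covariance C = X X^T / trace(X X^T) of one trial
    (channels x samples)."""
    C = X @ X.T
    return C / np.trace(C)

def rcsp_filters(trials_a, trials_b, generic_a, generic_b,
                 beta=0.1, gamma=0.1, m=2):
    """Regularized CSP: shrink each class covariance toward a generic
    covariance (beta) and toward a scaled identity (gamma), then solve
    the usual CSP eigenproblem via whitening."""
    n_ch = trials_a[0].shape[0]
    covs = []
    for own, gen in ((trials_a, generic_a), (trials_b, generic_b)):
        S_own = sum(trial_cov(X) for X in own) / len(own)
        S_gen = sum(trial_cov(X) for X in gen) / len(gen)
        P = (1 - beta) * S_own + beta * S_gen
        covs.append((1 - gamma) * P
                    + gamma * (np.trace(P) / n_ch) * np.eye(n_ch))
    Ca, Cb = covs
    vals, vecs = np.linalg.eigh(Ca + Cb)
    Pw = vecs @ np.diag(vals ** -0.5) @ vecs.T      # whitening transform
    _, B = np.linalg.eigh(Pw @ Ca @ Pw.T)           # shared eigenvectors
    W = B.T @ Pw                                     # rows: spatial filters
    idx = list(range(m)) + list(range(n_ch - m, n_ch))
    return W[idx]                                    # first and last m rows

def csp_features(W, X):
    """Log-variance feature vector of the projected trial Z = W X."""
    Z = W @ X
    var = Z.var(axis=1)
    return np.log(var / var.sum())
```

Passing the subject's own trials as the generic set reduces this sketch to ordinary shrinkage CSP; with β = γ = 0 and generic trials ignored, it reduces to classical CSP.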
However, in this study, we analyzed four MI states; assume that the four states are A, B, C, and D. We converted the four-classification task (A&B&C&D) into six two-classification tasks: (A&B), (A&C), (A&D), (B&C), (B&D), and (C&D). Thus, six spatial filters are generated: W_AB, W_AC, W_AD, W_BC, W_BD, and W_CD. Finally, the four state signals are sequentially passed through the six spatial filters, and the feature vectors are obtained.
4.2. Feature Selection
We constructed a six-spatial-filter group using the RCSP algorithm. After the EEG signals of the four MI states were filtered by the six-spatial-filter group, diversity-maximized feature vectors of dimension 24 (6 × 4) were obtained. To optimize the performance of the RCSP algorithm, we explored the classification effect of different combinations of β and γ. RCSP with β = γ = 0 is equivalent to the classical CSP. We calculated 121 classification results over an 11 × 11 grid of β and γ values. Figure 7 shows the 121 classification results with different combinations of β and γ from Subject A. Then, we determined the β and γ values that corresponded to the maximum classification accuracy using the KNN-SVM algorithm. Five subjects participated in the experiment and performed the four MI motions. There were 240 trials for each subject, that is, 200 trials for training and 40 trials for testing. Incorporating the principle of generic learning, the training set for each subject comprises 1,000 trials: the subject's own 200 training trials and the 800 training trials from the other four subjects.
To verify the classification effectiveness of the feature extraction, we first used CSP and RCSP to extract the features separately. Then, we used KNN-SVM to classify the extracted features. Table 1 shows the classification results using CSP and RCSP. The classification accuracy rates (AC) of Subjects A, B, and D were improved by 5, 7.5, and 10 percentage points, respectively. The classification accuracy rate of Subject E remained at the same level as with CSP, and that of Subject C was reduced by 5 percentage points. Overall, the feature extraction performance of RCSP is better than that of CSP.

5. Classification Using KNN-SVM
The sample feature points of the four MI states indicate that the tested EEG signals can cross or overlap. The KNN method is a mature classification algorithm. The concept of this method is that, for a sample of interest, if the majority of the k most similar samples in the feature space belong to a particular category, then the sample of interest also belongs to that category. Because the KNN method mainly depends on the adjacent samples rather than on a discriminant boundary to determine the category, it is more suitable than other methods for crossed or overlapping samples. However, the KNN classifier uses only local information for prediction. Thus, KNN lacks good generalization ability under small-sample conditions, and its classification results are easily affected by noise.
The SVM is a machine learning algorithm that is based on statistical learning theory. Specifically, the SVM is based on the principle of structural risk minimization, which effectively avoids the problems that exist in traditional learning methods, such as overfitting, the curse of dimensionality, and local minima, and it retains good generalization ability under small-sample conditions. In particular, the SVM is superior to other classification methods in solving two-class classification problems. However, its classification effect is not suitable for crossed or overlapping samples, and the use of the SVM for multiclass classification remains limited.
Therefore, the KNN algorithm is first used to establish the classification framework: the KNN algorithm outputs the two most likely categories as a rough classification result, which is then input into the SVM for a second classification to obtain the final result. This new composite algorithm is called the KNN-SVM method. KNN-SVM can handle crossed or overlapping sample sets and still maintain good generalization ability under small-sample conditions. Figure 8 shows the accuracy results of KNN-SVM for the five subjects. The classification accuracy rate varies from 80% to 92.5%. Subjects B and D have the best classification effects, with accuracies of up to 92.5%. The KNN-SVM algorithm shows good classification effectiveness. The steps of the KNN-SVM algorithm are as follows (see Figure 9).
Step 1. For each test sample, calculate the cosine angle distance between the sample and every sample in the training set and obtain the first k nearest training samples.
Step 2. Calculate the weight of each category among the selected k training samples.
Step 3. The two classes C_1 and C_2 with the largest weights are selected as the result of the rough classification. If the classification result of the KNN algorithm contains only one class C_1, then the instance is directly classified as C_1; otherwise, C_1 and C_2 are passed to the one-against-one SVM to obtain the final classification result.
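The rough-classification stage of the steps above can be sketched as follows. This is a minimal sketch: the category weight is simplified to a plain vote count among the k neighbors (the authors' exact weighting is not specified here), and the second-stage one-against-one SVM between the two candidate classes is omitted:

```python
import numpy as np

def knn_top2(x, train_X, train_y, k=5):
    """Rough classification step of the KNN-SVM scheme: rank training
    samples by cosine distance to x and return the two classes with the
    largest weight among the k nearest (here, simple vote counts).

    Returns (c1, None) when the k neighbors are unambiguous, in which
    case the instance is classified directly; otherwise (c1, c2) are
    handed to a one-against-one SVM for the final decision.
    """
    # cosine distance = 1 - cosine similarity, so rank by similarity
    sims = (train_X @ x) / (np.linalg.norm(train_X, axis=1)
                            * np.linalg.norm(x) + 1e-12)
    nearest = np.argsort(-sims)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    if len(labels) == 1:
        return labels[0], None
    order = np.argsort(-counts)
    return labels[order[0]], labels[order[1]]
```

In a full pipeline, the second stage could be, for example, a binary SVM trained only on the training samples of the two returned classes.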
6. Experimental Results and Discussion
In this study, the following three steps are involved in the MI pattern recognition system: (1) preprocessing of raw EEG signals; (2) extraction of the features of each state of the EEG signal; and (3) building a pattern recognition classifier. Five healthy subjects participated in the experiments. Each subject had to execute the four proposed MI states in the same experimental environment. The 240 trials of EEG signals obtained from each subject were divided into two sets, namely, the training trials and testing trials.
During the preprocessing of the raw EEG signals, we first used a notch filter to suppress the 50 Hz power frequency interference and the Butterworth band-pass filter to extract the 8–30 Hz frequencies of the EEG signals. Then, we used the wCCA algorithm, which integrates WTD, to separate the EOG artifacts from the raw EEG signals (Figure 3). To demonstrate the effects of the EEG signal preprocessing, we show the three electrode signals closest to the eyes before and after the preprocessing in Figure 10. The three raw electrode signals are mixed with EOG signals and fluctuate significantly. After the preprocessing, the three obtained pure electrode signals fluctuate only slightly.
For the EEG feature extraction process, we used RCSP to extract the EEG features in the spatial domain. Figure 11 shows the scattering of the feature points for the 40 test samples of Subject A. The sample feature points of the LH cross or overlap with the feature points of the RS and RF. The classification accuracies of the five subjects for the four MI states are maximal with the best combination of β and γ, as shown in Table 1. Compared with CSP, the classification accuracy of three subjects (A, B, and D) is higher using RCSP, and only one subject (C) exhibited a slight decrease in classification accuracy.
After training the classifier with the training set data, we used the test data as the input of KNN-SVM to verify the classification accuracy rate. Figure 8 shows the accuracy results of KNN-SVM for the five subjects. In addition, the classification results of KNN-SVM for the four MI states are provided in Table 2. For the five subjects, the accuracy rates of the resting (R) state and right foot (RF) state vary from 80% to 100%, and these two states achieve the best average classification accuracy (92%). The accuracy rate of the right shoulder (RS) from Subject C is low (70%), so the RS has only the second-highest average accuracy rate. For the left hand (LH), three subjects (A, C, and D) have low accuracy, and therefore, the average accuracy rate of the LH is the lowest (74%).

The confusion matrix is used to verify the actual discrimination success of the proposed method. If an MI state is often misclassified as another state, then special pattern recognition efforts should be applied to address the complex problems related to the MI states. Figure 12 shows the confusion matrix for the MI state categorizations by KNN-SVM. Using this matrix, the discrimination among the various MI states of all of the subjects can be evaluated in depth. Three MI states (R, RF, and RS) have good classification effectiveness, and only the LH has a low accuracy rate (74%). The misidentification of the LH state is mainly concentrated on the RF and RS. The confusion matrix illustrates that the KNN-SVM classifier is highly precise.
After EEG feature extraction using RCSP, a suitable pattern recognition classifier is required. To confirm the good classification performance of the KNN-SVM classifier, we compared it with five commonly used classifiers (Table 3), using the same sample data to classify the four MI states. Table 3 shows the classification results of the six different classifiers. The KNN-SVM classifier has the highest average classification accuracy rate (87%), and the naive Bayes classifier has the lowest (69.5%). Among the six classifiers, only the accuracy rates of KNN-SVM are above 80% for all five subjects; in contrast, the accuracy rate of the naive Bayes classifier is above 80% for only one subject (D). In addition, the standard deviation of KNN-SVM is the smallest, so the KNN-SVM classifier is highly reliable. The performance of KNN-SVM is thus significantly better than those of the other five commonly used classifiers.

In this study, we adopted 16 EEG sensors to classify four MI states. Furthermore, we compared the results with those of previous studies to verify the contribution of our proposed EEG pattern recognition system [17, 32–34], as shown in Table 4. Additional EEG sensors can contribute to the quantity and quality of the MI state classification. However, increasing the number of sensors increases the complexity of the classification algorithms and deteriorates the stability of the EEG pattern recognition system. Furthermore, more sensors cause discomfort for the subjects. Among the studies in Table 4, a minimum of 22 electrodes was required to recognize four MI states, whereas we utilized only 16 sensors. In addition, the proposed method provides more effective classifications than the other methods.
7. Conclusions
We proposed a novel MI pattern recognition system for classifying four anisomerous MI states using 16 EEG sensors. First, we combined the Butterworth band-pass filter, wavelet transform, and CCA to preprocess the raw EEG signals. We then used the RCSP algorithm to extract feature vectors in the spatial domain. We subsequently utilized the KNN-SVM algorithm for classification. For comparison, five mainstream classifiers were used to classify the same sample data. The results indicate that the KNN-SVM classifier is more suitable for the recognition of the four MI states than the five mainstream classifiers. KNN-SVM also exhibits comparatively excellent results. In particular, the average classification accuracy rate is 87%, and the maximum accuracy rate is 92.5%. Based on these findings, we will provide the subjects with systematic MI training in the next stage so that the proposed MI pattern recognition system can reach its maximum performance and satisfy practical needs.
Conflicts of Interest
The authors declare no competing financial interests.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (no. 61203339) and the Tianjin Research Program of Application Foundation and Advanced Technology (no. 14JCYBJC18300).
References
[1] M. Hamedi, S.-H. Salleh, and A. M. Noor, “Electroencephalographic motor imagery brain connectivity analysis for BCI: a review,” Neural Computation, vol. 28, no. 6, pp. 999–1041, 2016.
[2] L. da Silva-Sauer, L. Valero-Aguayo, A. de la Torre-Luque, R. Ron-Angevin, and S. Varona-Moya, “Concentration on performance with P300-based BCI systems: a matter of interface features,” Applied Ergonomics, vol. 52, pp. 325–332, 2016.
[3] K.-K. Shyu, Y.-J. Chiu, P.-L. Lee et al., “Total design of an FPGA-based brain-computer interface control hospital bed nursing system,” IEEE Transactions on Industrial Electronics, vol. 60, no. 7, pp. 2731–2739, 2013.
[4] W. Yi, S. Qiu, H. Qi, L. Zhang, B. Wan, and D. Ming, “EEG feature comparison and classification of simple and compound limb motor imagery,” Journal of NeuroEngineering and Rehabilitation, vol. 10, no. 1, article no. 106, 2013.
[5] L. F. Nicolas-Alonso, R. Corralejo, J. Gomez-Pilar, D. Álvarez, and R. Hornero, “Adaptive semi-supervised classification to reduce intersession non-stationarity in multiclass motor imagery-based brain-computer interfaces,” Neurocomputing, vol. 159, no. 1, pp. 186–196, 2015.
[6] G. Onose, C. Grozea, A. Anghelescu et al., “On the feasibility of using motor imagery EEG-based brain-computer interface in chronic tetraplegics for assistive robotic arm control: a clinical test and long-term post-trial follow-up,” Spinal Cord, vol. 50, no. 8, pp. 599–608, 2012.
[7] Y. Chae, J. Jeong, and S. Jo, “Toward brain-actuated humanoid robots: asynchronous direct control using an EEG-based BCI,” IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1131–1144, 2012.
[8] S. Invitto, C. Faggiano, S. Sammarco, V. de Luca, and L. T. de Paolis, “Haptic, virtual interaction and motor imagery: entertainment tools and psychophysiological testing,” Sensors, vol. 16, no. 3, article no. 394, 2016.
[9] T. Shi, H. Wang, and C. Zhang, “Brain Computer Interface system based on indoor semi-autonomous navigation and motor imagery for Unmanned Aerial Vehicle control,” Expert Systems with Applications, vol. 42, no. 9, pp. 4196–4206, 2015.
[10] C. H. Kasess, C. Windischberger, R. Cunnington, R. Lanzenberger, L. Pezawas, and E. Moser, “The suppressive influence of SMA on M1 in motor imagery revealed by fMRI and dynamic causal modeling,” NeuroImage, vol. 40, no. 2, pp. 828–837, 2008.
[11] A. Solodkin, P. Hlustik, E. E. Chen, and S. L. Small, “Fine modulation in network activation during motor execution and motor imagery,” Cerebral Cortex, vol. 14, no. 11, pp. 1246–1255, 2004.
[12] C. R. Genovese, N. A. Lazar, and T. Nichols, “Thresholding of statistical maps in functional neuroimaging using the false discovery rate,” NeuroImage, vol. 15, no. 4, pp. 870–878, 2002.
[13] C. Burger and D. J. Van Den Heever, “Removal of EOG artefacts by combining wavelet neural network and independent component analysis,” Biomedical Signal Processing and Control, vol. 15, pp. 67–79, 2015.
[14] R. Romo Vázquez, H. Vélez-Pérez, R. Ranta, V. Louis Dorr, D. Maquin, and L. Maillard, “Blind source separation, wavelet denoising and discriminant analysis for EEG artefacts and noise cancelling,” Biomedical Signal Processing and Control, vol. 7, no. 4, pp. 389–400, 2012.
[15] J.-X. Chen, Y. Ma, J. Cai, L.-H. Zhou, Z.-H. Bao, and W. Che, “Novel frequency-agile bandpass filter with wide tuning range and spurious suppression,” IEEE Transactions on Industrial Electronics, vol. 62, no. 10, pp. 6428–6435, 2015.
[16] N. Ille, P. Berg, and M. Scherg, “Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies,” Journal of Clinical Neurophysiology, vol. 19, no. 2, pp. 113–124, 2002.
[17] C. Brunner, M. Naeem, R. Leeb, B. Graimann, and G. Pfurtscheller, “Spatial filtering and selection of optimized components in four class motor imagery EEG data using independent components analysis,” Pattern Recognition Letters, vol. 28, no. 8, pp. 957–964, 2007.
[18] A. Vergult, W. De Clercq, A. Palmini et al., “Improving the interpretation of ictal scalp EEG: BSS-CCA algorithm for muscle artifact removal,” Epilepsia, vol. 48, no. 5, pp. 950–958, 2007.
[19] W.-Y. Hsu, C.-H. Lin, H.-J. Hsu, P.-H. Chen, and I.-R. Chen, “Wavelet-based envelope features with automatic EOG artifact removal: application to single-trial EEG data,” Expert Systems with Applications, vol. 39, no. 3, pp. 2743–2749, 2012.
[20] W. Samek, C. Vidaurre, K.-R. Müller, and M. Kawanabe, “Stationary common spatial patterns for brain-computer interfacing,” Journal of Neural Engineering, vol. 9, no. 2, Article ID 026013, 2012.
[21] R. N. Khushaba, L. Greenacre, S. Kodagoda, J. Louviere, S. Burke, and G. Dissanayake, “Choice modeling and the brain: a study on the Electroencephalogram (EEG) of preferences,” Expert Systems with Applications, vol. 39, no. 16, pp. 12378–12388, 2012.
[22] G. Prasad, P. Herman, D. Coyle, S. McDonough, and J. Crosbie, “Applying a brain-computer interface to support motor imagery practice in people with stroke for upper limb recovery: a feasibility study,” Journal of NeuroEngineering and Rehabilitation, vol. 7, no. 1, article no. 60, 2010.
[23] D. Wang, D. Miao, and C. Xie, “Best basis-based wavelet packet entropy feature extraction and hierarchical EEG classification for epileptic detection,” Expert Systems with Applications, vol. 38, no. 11, pp. 14314–14320, 2011.
[24] R. Jenke, A. Peer, and M. Buss, “Feature extraction and selection for emotion recognition from EEG,” IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 327–339, 2014.
[25] X. Jin, M. Zhao, T. W. S. Chow, and M. Pecht, “Motor bearing fault diagnosis using trace ratio linear discriminant analysis,” IEEE Transactions on Industrial Electronics, vol. 61, no. 5, pp. 2441–2451, 2014.
[26] S. Gupta, R. Kambli, S. Wagh, and F. Kazi, “Support-vector-machine-based proactive cascade prediction in smart grid using probabilistic framework,” IEEE Transactions on Industrial Electronics, vol. 62, no. 4, pp. 2478–2486, 2015.
[27] X. Suraj, P. Tiwari, S. Ghosh, and R. K. Sinha, “Classification of two class motor imagery tasks using hybrid GA-PSO based K-means clustering,” Computational Intelligence and Neuroscience, vol. 2015, Article ID 945729, 11 pages, 2015.
[28] C. Yang, G. Deconinck, and W. Gui, “An optimal power-dispatching control system for the electrochemical process of zinc based on backpropagation and Hopfield neural networks,” IEEE Transactions on Industrial Electronics, vol. 50, no. 5, pp. 953–961, 2003.
[29] J. Wang, K. N. Plataniotis, J. Lu, and A. N. Venetsanopoulos, “On solving the face recognition problem with one training sample per subject,” Pattern Recognition, vol. 39, no. 9, pp. 1746–1762, 2006.
[30] K. Amunts and K. Zilles, “Architectonic mapping of the human brain beyond Brodmann,” Neuron, vol. 88, no. 6, pp. 1086–1107, 2015.
[31] R. M. Sanchez-Panchuelo, J. Besle, A. Beckett, R. Bowtell, D. Schluppeck, and S. Francis, “Within-digit functional parcellation of Brodmann areas of the human primary somatosensory cortex using functional magnetic resonance imaging at 7 tesla,” Journal of Neuroscience, vol. 32, no. 45, pp. 15815–15822, 2012.
[32] D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
[33] N. Lu, T. Li, J. Pan, X. Ren, Z. Feng, and H. Miao, “Structure constrained semi-nonnegative matrix factorization for EEG-based motor imagery classification,” Computers in Biology and Medicine, vol. 60, pp. 32–39, 2015.
[34] P. J. García-Laencina, G. Rodríguez-Bermudez, and J. Roca-Dorda, “Exploring dimensionality reduction of EEG features in motor imagery task classification,” Expert Systems with Applications, vol. 41, no. 11, pp. 5285–5295, 2014.
Copyright
Copyright © 2017 Rensong Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.