BioMed Research International
Volume 2017, Article ID 8317357, 9 pages
https://doi.org/10.1155/2017/8317357
Research Article

Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain

1China National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
2Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China

Correspondence should be addressed to Bin Yan; ybspace@hotmail.com

Received 27 March 2017; Revised 21 June 2017; Accepted 16 July 2017; Published 16 August 2017

Academic Editor: Robertas Damaševičius

Copyright © 2017 Ning Zhuang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper introduces a method for feature extraction and emotion recognition based on empirical mode decomposition (EMD). Using EMD, EEG signals are decomposed into Intrinsic Mode Functions (IMFs) automatically, and multidimensional information of the IMFs is utilized as features: the first difference of the time series, the first difference of the phase, and the normalized energy. The performance of the proposed method is verified on a publicly available emotional database. The results show that the three features are effective for emotion recognition. The role of each IMF is investigated, and we find that the high-frequency component IMF1 has a significant effect on detecting different emotional states. The informative electrodes under the EMD strategy are also analyzed. In addition, the classification accuracy of the proposed method is compared with several classical techniques, including fractal dimension (FD), sample entropy, differential entropy, and the discrete wavelet transform (DWT). Experimental results on the DEAP dataset demonstrate that our method improves emotion recognition performance.

1. Introduction

Emotion plays an important role in our daily life and work, and real-time assessment and regulation of emotion could improve people's quality of life. For example, in human-machine interaction, emotion recognition would make communication easier and more natural. As another example, in the treatment of patients, especially those with expression problems, knowing a patient's real emotional state would help doctors provide more appropriate medical care. In recent years, emotion recognition from EEG has gained considerable attention. It is also an important factor in brain-computer interface (BCI) systems, where it can effectively improve communication between humans and machines [1].

Various features and extraction methods have been proposed for emotion recognition from EEG signals, including time domain techniques, frequency domain techniques, joint time-frequency analysis techniques, and other strategies.

Statistics of the EEG series, such as the first and second difference, mean value, and power, are usually used in the time domain [2]. Nonlinear features, including fractal dimension (FD) [3, 4], sample entropy [5], and the nonstationary index [6], have been utilized for emotion recognition. Hjorth features [7] have also been used in EEG studies [8, 9]. Petrantonakis and Hadjileontiadis introduced higher order crossings (HOC) features to capture the oscillatory pattern of EEG [10]. Wang et al. extracted frequency domain features for classification [11]. Time-frequency analysis is based on the spectrum of EEG signals; the energy, power, power spectral density (PSD), and differential entropy [12] of certain subbands are usually utilized as features. The short-time Fourier transform (STFT) [13, 14], the Hilbert-Huang transform (HHT) [15, 16], and the discrete wavelet transform (DWT) [17–19] are the most commonly used techniques for computing the spectrum. It has been repeatedly verified that higher-frequency subbands such as the Beta (16–32 Hz) and Gamma (32–64 Hz) bands outperform lower subbands for emotion recognition [20, 21].

Features extracted from combinations of electrodes have been utilized as well, such as the coherence and asymmetry of electrodes in different brain regions [22–24] and graph-theoretic features [25]. Jenke et al. compared the performance of the features mentioned above and derived guidelines for feature extraction and selection [26].

Other strategies, such as deep networks, have also been investigated to improve classification performance. Zheng and Lu used deep neural networks to investigate critical frequency bands and channels for emotion recognition [27]. Yang et al. used a hierarchical network with subnetwork nodes for emotion recognition [28].

EMD was proposed by Huang et al. in 1998 [29]. Unlike DWT, which requires a predetermined basis function and decomposition level, EMD decomposes signals into IMFs automatically. These IMFs represent different frequency components of the original signal, each band-limited. By applying the Hilbert transform to an IMF, we can obtain its instantaneous phase information. EMD is therefore well suited to analyzing nonlinear and nonstationary sequences such as neural signals.

EMD is thus a good choice for EEG signals, and we utilize it for emotion recognition from EEG data. Which features are effective for emotion recognition in the EMD domain? Which IMF component is best for classification? Does the EMD-based strategy outperform time domain and time-frequency methods? These questions have not been fully investigated, and we address them in this work.

EMD has been widely used for seizure prediction and detection, but relatively little research applies it to emotion recognition. Higher order statistics of IMFs [30], geometrical properties of decomposed IMFs in the complex plane [31], and the variation and fluctuation of IMFs [32] have been used as features for seizure prediction and detection. For emotion recognition, Mert and Akan extracted the entropy, power, power spectral density, correlation, and asymmetry of IMFs as features and then utilized independent component analysis (ICA) to reduce the dimension of the feature set [33]; their classification accuracy was computed with all subjects mixed together.

In this paper, we present an emotion recognition method based on EMD. We utilize the first difference of the IMF time series (D_t), the first difference of the IMF's phase (D_p), and the normalized energy of the IMF (E_norm) as features. The motivation for these three features is that they depict the characteristics of an IMF in the time, frequency, and energy domains, providing multidimensional information: D_t depicts the intensity of signal change in the time domain, D_p measures the change intensity of the phase, and E_norm describes the weight of the current oscillation component. The three features constitute a feature vector, which is fed into an SVM classifier for emotional state detection.

The proposed method is evaluated on the publicly available emotional database DEAP [20]. The effectiveness of the three features is investigated, and both IMF reduction and channel reduction for feature extraction are discussed, aiming to improve classification accuracy with less computational complexity. The performance is compared with several other techniques, including fractal dimension (FD), sample entropy, differential entropy, and the time-frequency method DWT.

2. Method

To realize emotional state recognition, the EEG signals are decomposed into IMFs by EMD. Three features of the IMFs, the first difference of the time series, the first difference of the phase, and the normalized energy, form a feature vector, which is fed into an SVM for classification. The whole process of the algorithm is shown in Figure 1.

Figure 1: Block diagram of the proposed method.
2.1. Data and Materials

DEAP is a publicly available dataset for emotion analysis, which recorded the EEG and peripheral physiological signals of 32 participants as they watched 40 one-minute music videos representing different emotional visual stimuli. Among the 40 music videos, 20 are high-valence and 20 are low-valence stimuli; the same split holds for the arousal dimension. After watching each music video, participants performed a self-assessment of their levels of arousal, valence, liking, dominance, and familiarity, with ratings from 1 to 9. EEG was recorded with 32 electrodes placed according to the international 10-20 system. Each electrode recorded 63 s of EEG per trial, including a 3 s baseline before the trial.

In this paper, we used the preprocessed EEG data, downsampled to 128 Hz and bandpass filtered to 4–45 Hz, with EOG artifacts removed using the method in [20]. The data was segmented into 60-second trials with the 3-second pretrial baseline removed. We consider the binary classification of the valence and arousal dimensions, using the participants' self-assessments as labels: if the rating was below 5, the valence/arousal label is low, and if the rating was 5 or above, the label is high.

Each music video lasts 1 minute, and every 5 s segment of EEG is extracted as a sample, giving 12 samples per trial. So for each subject, who watched 40 music videos, we acquire 480 labeled samples.
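As a concrete illustration, the segmentation above can be sketched in Python. The array shape follows the preprocessed DEAP layout for one subject (40 trials × 32 channels × 8064 points at 128 Hz); the function and variable names are ours, not part of the dataset's API.

```python
import numpy as np

def segment_trials(data, fs=128, win_s=5, baseline_s=3):
    """Cut each 63 s DEAP trial into non-overlapping 5 s samples.

    data: array of shape (n_trials, n_channels, n_points),
    e.g. (40, 32, 8064) for one subject's preprocessed EEG.
    """
    start = baseline_s * fs                  # drop the 3 s pretrial baseline
    win = win_s * fs                         # 640 points per 5 s sample
    trimmed = data[:, :, start:]             # -> (40, 32, 7680)
    n_win = trimmed.shape[-1] // win         # 12 windows per trial
    trimmed = trimmed[:, :, :n_win * win]
    segs = trimmed.reshape(data.shape[0], data.shape[1], n_win, win)
    # reorder so each 5 s window becomes one sample: (480, 32, 640)
    return segs.transpose(0, 2, 1, 3).reshape(-1, data.shape[1], win)

# hypothetical subject array: 40 trials x 32 channels x 8064 points
eeg = np.random.randn(40, 32, 63 * 128)
samples = segment_trials(eeg)
print(samples.shape)   # (480, 32, 640)
```

With 12 windows per trial and 40 trials, this yields exactly the 480 labeled samples per subject described above.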

2.2. Empirical Mode Decomposition

EMD decomposes EEG signals into a set of IMFs by an automatic sifting process. Each IMF represents a different frequency component of the original signal and should satisfy two conditions: over the whole data set, the number of extreme points and the number of zero crossings must be equal or differ at most by one; at each point, the mean value of the upper and lower envelopes must be zero [29]. For an input signal x(t), the process of EMD is as follows:

(1) Set r_0(t) = x(t), k = 1, and h(t) = r_0(t).
(2) Find the local maxima and minima of h(t).
(3) Interpolate the local maxima and minima with cubic spline functions to obtain the upper envelope e_up(t) and the lower envelope e_low(t).
(4) Calculate the mean of the upper and lower envelopes as

    m(t) = (e_up(t) + e_low(t)) / 2.

(5) Subtract m(t) from h(t):

    h'(t) = h(t) - m(t).

If h'(t) satisfies the two conditions of an IMF, the k-th IMF component is obtained as c_k(t) = h'(t); otherwise, set h(t) = h'(t) and go to step (2), repeating steps (2)–(5) until h'(t) satisfies the two conditions.
(6) Compute the residue r_k(t) = r_{k-1}(t) - c_k(t). Set h(t) = r_k(t) and k = k + 1, then go to step (2) and repeat steps (2)–(6) to obtain the next IMF.
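The sifting procedure can be sketched in Python. This is a simplified illustration, not the authors' implementation: extrema are detected by neighbor comparison, the envelopes use SciPy cubic splines, and each sift runs a fixed number of iterations instead of checking the two IMF conditions exactly.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def _sift(h):
    """One sifting step: subtract the mean of the cubic-spline envelopes."""
    t = np.arange(len(h))
    maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
    minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
    if len(maxima) < 4 or len(minima) < 4:   # too few extrema to envelope
        return None
    upper = CubicSpline(maxima, h[maxima])(t)  # upper envelope e_up(t)
    lower = CubicSpline(minima, h[minima])(t)  # lower envelope e_low(t)
    return h - (upper + lower) / 2.0           # remove the local mean m(t)

def emd(x, max_imfs=5, n_sifts=10):
    """Decompose x into a list of IMFs plus a residue."""
    imfs, residue = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(n_sifts):
            s = _sift(h)
            if s is None:                      # no oscillation left to sift
                return imfs, residue
            h = s
        imfs.append(h)
        residue = residue - h                  # keeps reconstruction exact
    return imfs, residue

# two-tone test signal: 8 Hz fast component plus 1 Hz slow component
t = np.linspace(0, 4, 512, endpoint=False)
x = np.sin(2 * np.pi * 8 * t) + 0.8 * np.sin(2 * np.pi * 1 * t)
imfs, res = emd(x)
# by construction, the IMFs and residue sum back to the input exactly
```

Because the residue is updated by subtraction, the decomposition always reconstructs the input exactly, mirroring the telescoping sum in step (6).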

By the iterative process described above, x(t) can finally be expressed as

    x(t) = sum_{k=1}^{n} c_k(t) + r_n(t),

a linear combination of the n IMF components c_k(t) and the residual part r_n(t). Figure 2 shows a segment of original EEG signal and the corresponding first five decomposed IMFs. EMD works like an adaptive high-pass filter: it sifts out the fastest-changing component first, and as the level of the IMF increases, the oscillation becomes smoother. Each component is band-limited and can reflect the characteristics of instantaneous frequency.

Figure 2: EEG signals and the corresponding first five IMFs.
2.3. Feature Extraction

In this paper, three features of each IMF are utilized for emotion recognition: the first difference of the time series (D_t), the first difference of the phase (D_p), and the normalized energy (E_norm). D_t depicts the intensity of signal change in the time domain. D_p reveals the change intensity of the phase, carrying the physical meaning of instantaneous frequency. E_norm describes the weight of the current oscillation component. Together, these three features depict the characteristics of an IMF in the time, frequency, and energy domains, providing multidimensional information.

2.3.1. First Difference of IMF Time Series

The first difference of the time series depicts the intensity of signal change in the time domain. Previous research has revealed that the variation of EEG time series can reflect different emotional states [2]. For an IMF component with N points, c(1), c(2), ..., c(N), the first difference D_t is defined as

    D_t = (1 / (N - 1)) * sum_{n=1}^{N-1} |c(n+1) - c(n)|.

2.3.2. First Difference of IMF’s Phase

Based on EMD, the EEG is decomposed into multilevel IMFs, each band-limited and representing one oscillation component of the original EEG signal. For an N-point IMF c(n), the Hilbert transform is applied to obtain the analytic signal

    z(n) = c(n) + j * H[c(n)],

where H[.] denotes the Hilbert transform.

The analytic signal can be further expressed as

    z(n) = A(n) * e^{j * phi(n)},

where A(n) is the amplitude of z(n) and phi(n) is the instantaneous phase.

The first difference of phase is defined as

    D_p = (1 / (N - 1)) * sum_{n=1}^{N-1} |phi(n+1) - phi(n)|,

which measures the change intensity of the phase and carries the physical meaning of instantaneous frequency.

2.3.3. Normalized Energy of IMF

For an N-point IMF c(n), the normalized energy is defined as

    E_norm = sum_{n=1}^{N} c(n)^2 / sum_{n=1}^{N} x(n)^2,

where x(n) denotes the original EEG signal points. The numerator is the energy of the IMF and the denominator is the energy of the original EEG segment, so E_norm describes the weight of the current oscillation component. When fed into the classifier, E_norm is taken as an element of the feature vector, following [26].
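A minimal sketch of the three feature computations, assuming the mean-absolute-difference definitions above. The analytic signal is computed via the FFT (equivalent to scipy.signal.hilbert), and the phase unwrapping is our implementation choice to avoid spurious 2π jumps in D_p.

```python
import numpy as np

def analytic_signal(c):
    """Analytic signal z(n) via FFT: zero negative frequencies, double positive."""
    n = len(c)
    spec = np.fft.fft(c)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def imf_features(imf, x):
    """D_t, D_p, and E_norm for one IMF extracted from the EEG segment x."""
    d_t = np.mean(np.abs(np.diff(imf)))             # first difference of time series
    phase = np.unwrap(np.angle(analytic_signal(imf)))
    d_p = np.mean(np.abs(np.diff(phase)))           # first difference of phase
    e_norm = np.sum(imf ** 2) / np.sum(x ** 2)      # normalized energy
    return d_t, d_p, e_norm

fs = 128.0
t = np.arange(640) / fs
tone = np.sin(2 * np.pi * 10 * t)     # a pure 10 Hz oscillation as a toy "IMF"
d_t, d_p, e_norm = imf_features(tone, tone)
# for a pure tone, the phase advances by 2*pi*f/fs per sample,
# so D_p recovers the instantaneous frequency, and E_norm is 1
# when the IMF carries all of the segment's energy
```

For a real sample, the three values from each of the selected IMFs and channels are concatenated into the feature vector fed to the classifier.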

2.4. SVM Classifier

The extracted features are fed into an SVM for classification. SVM is widely used for emotion recognition [34, 35] and has shown promising performance in many fields. In our study, LIBSVM is used as the SVM classifier, with a radial basis kernel function and default parameter settings [36].

3. Performance Verification

In the following subsections, we test our method on the DEAP emotional dataset. Training and classification were conducted for each subject independently, and we used leave-one-trial-out validation to evaluate classification performance. Each subject watched 40 one-minute music video clips, and we used the participants' self-assessments as labels. Every 5 s of EEG is extracted as a sample, so for each subject we acquire 480 labeled samples.

In leave-one-trial-out validation, for each subject, the 468 samples extracted from 39 trials were assigned to the training set, and the 12 samples extracted from the remaining trial were assigned to the test set, so there was no correlation between samples in the training and test sets. Each of a subject's 40 trials is assigned to the test set once as validation data, and the 40 results are averaged to produce an overall estimate for that subject. The final mean accuracy is computed across all subjects.
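The validation scheme can be sketched with scikit-learn, whose SVC is built on LIBSVM. The data here are synthetic stand-ins with hypothetical shapes, not DEAP features; the point is that grouping samples by trial keeps all 12 windows of a trial in the same fold.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# hypothetical per-subject data: 480 samples (40 trials x 12 windows),
# one feature vector per sample, one binary high/low label per trial
n_trials, n_win, n_feat = 40, 12, 15
labels_per_trial = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials * n_win, n_feat))
y = np.repeat(labels_per_trial, n_win)
groups = np.repeat(np.arange(n_trials), n_win)   # trial index of each sample

# shift the class means apart so the toy problem is learnable
X[y == 1] += 1.0

# leave-one-trial-out: the 12 windows of one trial form the test fold,
# so no trial contributes samples to both training and test sets
clf = SVC(kernel="rbf")                          # RBF kernel, default C and gamma
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(len(scores), round(scores.mean(), 2))      # 40 folds, mean accuracy
```

Averaging the 40 fold accuracies gives the per-subject estimate; averaging those across the 32 subjects gives the final mean accuracy reported below.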

3.1. Effectiveness of the Features for Emotion Recognition

To evaluate the effectiveness of the three features for emotion recognition, we first use only one single feature for classification at a time. All experiments in this subsection use the first five IMF components and all 32 electrodes for feature extraction. Training and classification were conducted for each subject separately, and the mean accuracy was computed across all subjects.

The mean classification accuracies of the three features are given in Figure 3. All three features can distinguish high from low levels on both the valence and arousal dimensions, with accuracies above the random probability of 50%. For the valence dimension, the classification accuracy is 68.27%, 64.46%, and 61.07% with features D_t (first difference of time series), D_p (first difference of phase), and E_norm (normalized energy), respectively. For the arousal dimension, the classification accuracy is 69.89%, 67.56%, and 63.76%, respectively.

Figure 3: Classification accuracies of the three single features. For each subject, one single feature was extracted from the first five IMF components. "D_t," "D_p," and "E_norm" in the figure correspond to the first difference of time series, the first difference of phase, and the normalized energy, respectively. The mean accuracies were computed across all subjects. Error bars show the standard deviation of the mean accuracies across subjects.
3.2. IMF Reduction for Feature Extraction

In this subsection, we conducted two experiments to investigate the role of different IMF components in emotion recognition. In the first experiment, only one IMF component at a time was used for feature extraction, to analyze which IMF is effective for emotion recognition. In the second, we verified whether combining multiple IMFs would improve the accuracy.

Table 1 gives all the results in detail, with the standard deviation of the mean accuracies across subjects shown in parentheses. "IMF1" through "IMF5" correspond to single IMF components. "IMF1–3" represents the combination of the first three IMFs (IMF1, IMF2, and IMF3); similarly, "IMF1–4" and "IMF1–5" correspond to the first four and the first five IMFs, respectively.

Table 1 shows that IMF1 yields the best performance: 70.41% for valence and 72.10% for arousal. As the level increases, the performance decreases sharply; IMF5 achieves only 55.74% for valence and 62.38% for arousal. We applied a t-test (significance level 0.05) to compare feature extraction from IMF1 alone against the other settings. The null hypothesis is "the performance is similar," and it is accepted when the p value is larger than 0.05. The t-test results in Table 1 show that IMF1 performs significantly better than the other single components, IMF2, IMF3, IMF4, and IMF5, with p far less than 0.05. They also show that the performance of the multi-IMF combinations is similar to that of IMF1 alone, with p larger than 0.05.
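The statistical comparison can be sketched as a paired t-test across subjects. The per-subject accuracy values below are synthetic illustrations, not the paper's results; the structure (32 paired observations, one per subject) matches the experiment.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)

# hypothetical per-subject accuracies (32 subjects) under two settings:
# features extracted from IMF1 only vs. from IMF5 only
acc_imf1 = 0.70 + 0.05 * rng.normal(size=32)
acc_imf5 = 0.56 + 0.05 * rng.normal(size=32)

# paired t-test across subjects; reject "performance is similar" if p < 0.05
t_stat, p_value = ttest_rel(acc_imf1, acc_imf5)
print(p_value < 0.05)
```

With a large, consistent per-subject gap, the p value falls far below 0.05, which is the pattern reported for IMF1 versus the other single components.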

Table 1: Comparison of performance for different IMFs selected for feature extraction (32 channels) (standard deviation shown in parentheses).
Table 2: Performance of 8 channels selected for feature extraction (Fp1, Fp2, F7, F8, T7, T8, P7, and P8) (standard deviation shown in parentheses).

IMF1 represents the fastest-changing component of the EEG signal, with the highest-frequency characteristics. As the level increases, the oscillation becomes smoother and the frequency lower. We therefore infer that the valence and arousal of emotion relate more tightly to high frequencies. This coincides with the finding in [26] that the Beta (16–32 Hz) and Gamma (32–64 Hz) bands, the higher-frequency subbands of EEG, are selected more often than other bands.

Combining the classification accuracy and t-test results, in practical use we only need to extract features from IMF1, which saves considerable time and computation because only one level of EMD decomposition has to be performed.

3.3. Channel Reduction for Feature Extraction

From the verification in Section 3.2, we know that using only the component IMF1 achieves good performance. In this subsection, we investigate which electrodes are informative under the EMD strategy.

Fisher distance is an efficient criterion for the separability of two classes and is broadly used in pattern recognition. It computes the ratio of the between-class scatter to the within-class scatter; a larger ratio means the two classes are more separable. In our experiment, we used the Fisher distance to identify important electrodes under the condition that IMF1 is used for feature extraction. For each channel, the Fisher distance is calculated over the features extracted from one subject's 480 labeled emotion samples.
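One common form of the Fisher distance for a scalar feature is the squared difference of the class means over the sum of the class variances; the paper does not print its exact formula, so this sketch assumes that definition, with synthetic feature values standing in for one channel's samples.

```python
import numpy as np

def fisher_distance(f_low, f_high):
    """Fisher ratio of a 1-D feature between two classes:
    between-class scatter over within-class scatter."""
    mu1, mu2 = f_low.mean(), f_high.mean()
    var1, var2 = f_low.var(), f_high.var()
    return (mu1 - mu2) ** 2 / (var1 + var2)

rng = np.random.default_rng(2)
# hypothetical feature values of one channel over 480 samples (240 per class)
well_separated = fisher_distance(rng.normal(0.0, 1.0, 240),
                                 rng.normal(3.0, 1.0, 240))
overlapping = fisher_distance(rng.normal(0.0, 1.0, 240),
                              rng.normal(0.2, 1.0, 240))
print(well_separated > overlapping)   # larger ratio = more separable channel
```

Ranking channels by this ratio is how the informative electrodes are marked before the channel-reduction experiment.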

Figure 4 gives the Fisher distance on the valence dimension for subject 1. Figure 4(a) shows that, under feature D_t, electrodes Fp1, Fp2, FC6, Cp1, O1, and Oz have larger values. Figure 4(b) shows that, under feature D_p, Fp1, FC6, Cp1, Cp2, O1, Oz, P7, and P8 have larger values. Figure 4(c) shows that, under feature E_norm, F7, F8, T7, T8, P7, P8, O1, O2, and Oz have larger values.

Figure 4: Fisher distance of different channels for subject 1. Features are extracted from component IMF1. For each channel, the Fisher distance is calculated over features extracted from subject 1's 480 labeled emotion samples. (a) Fisher distance under feature D_t. (b) Fisher distance under feature D_p. (c) Fisher distance under feature E_norm.

Based on the analysis of all subjects, we selected the following 8 electrodes for channel reduction verification: Fp1, Fp2, F7, F8, T7, T8, P7, and P8. Table 2 gives the score and classification accuracy with the 8 selected channels: the score is 0.7374 for valence and 0.7769 for arousal, and the classification accuracy is 69.10% for valence and 71.99% for arousal, slightly lower than the accuracy with all 32 channels. We also applied a t-test to examine whether the performance with 8 channels is similar to that with all 32 channels. The null hypothesis is "the performance is similar," accepted when the p value is larger than 0.05. The t-test result shows that the performance under 8 channels and 32 channels is similar, with p larger than 0.05 for both valence and arousal.

So in practical use, we only need to extract features from IMF1 with 8 channels. Our offline experiment used every 5 s of EEG signal as a labeled emotion sample, which suggests that our method may provide a new solution for real-time emotion recognition in BCI systems.

3.4. Results Comparison with Other Methods

In this subsection, we compare our proposed method with several classical methods: fractal dimension (FD), sample entropy, differential entropy, and the time-frequency method DWT. We used box counting to compute the fractal dimension. The sample entropy parameters (embedding dimension, tolerance, and series length) were set to commonly used values. We used "db4" wavelets to realize the DWT; the differential entropy of the Beta (16–32 Hz) and Gamma (32–64 Hz) bands was then extracted as features. Our method used IMF1 for the extraction of D_t, D_p, and E_norm. For all methods, the 8 selected channels Fp1, Fp2, F7, F8, T7, T8, P7, and P8 were used for feature extraction.
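For reference, the sample entropy baseline can be sketched as follows; the embedding dimension and tolerance here (m = 2, r = 0.2·SD) are commonly used values in the EEG literature, not necessarily the settings of this paper.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy: -ln(A/B), where B and A count pairs of m- and
    (m+1)-length templates within tolerance r (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    n_templ = len(x) - m              # same template count for both lengths

    def pair_count(mm):
        templ = np.array([x[i:i + mm] for i in range(n_templ)])
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(dist <= r) - n_templ) / 2.0   # exclude self-matches

    return -np.log(pair_count(m + 1) / pair_count(m))

rng = np.random.default_rng(0)
t = np.arange(500)
se_sine = sample_entropy(np.sin(2 * np.pi * t / 50))   # regular oscillation
se_noise = sample_entropy(rng.normal(size=500))        # irregular signal
print(se_sine < se_noise)   # the predictable sine has lower entropy
```

Lower values indicate more regular, predictable signals, which is why sample entropy serves as a time domain complexity feature in the comparison.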

From Figure 5 and Table 3, we see that our method yields the highest accuracy: 69.10% for valence and 71.99% for arousal. We applied a t-test (significance level 0.05) to compare each classical method against ours; the null hypothesis is "the performance is similar," accepted when the p value is larger than 0.05. The t-test results in Table 3 show that our method performs significantly better than fractal dimension, sample entropy, and the differential entropy of the Beta band, with p far less than 0.05. The performance of our method is comparable to, though slightly higher than, the differential entropy of the Gamma band.

Table 3: The mean accuracy of different methods (channels Fp1, Fp2, F7, F8, T7, T8, P7, and P8; standard deviation shown in parentheses; statistical analysis shown in the t-test column).
Figure 5: Classification accuracies of different methods. "FD," "SampEn," and "DE" in the figure correspond to fractal dimension, sample entropy, and differential entropy, respectively. The mean accuracy was computed across all subjects. Error bars show the standard deviation of the mean accuracies across subjects.

The EMD strategy outperforms the time domain methods, fractal dimension and sample entropy, because it exploits more oscillation information. Compared to the time-frequency method DWT, EMD decomposes EEG signals automatically, without preselecting a basis function or decomposition level, and its classification accuracy is also higher. The experimental results therefore suggest that our EMD-based method is well suited for emotion recognition from EEG signals.

4. Discussion

Emotion recognition from EEG signals has achieved significant progress in recent years. Previous methods are usually conducted in the time, frequency, or time-frequency domain. In this paper, we propose a feature extraction method for emotion recognition in the EMD domain, a new point of view. Using EMD, EEG signals are automatically decomposed into different oscillation components called IMFs, whose characteristics are utilized as features for emotion recognition: the first difference of the time series, the first difference of the phase, and the normalized energy.

Compared to time domain methods, EMD has the advantage of utilizing more frequency information. The experimental results show that the proposed method outperforms time domain methods such as fractal dimension [3, 4] and sample entropy [5]. Compared to time-frequency methods such as STFT and DWT, EMD decomposes EEG signals automatically, without preselecting a transform basis, and the classification accuracy is also higher than that of DWT in [18].

We investigated the role of each IMF in emotion classification. Features extracted from IMF1 yield the highest accuracy. IMF1 corresponds to the fastest-changing component of the EEG signal, so our study supports the inference that emotion relates more closely to high-frequency components. This is consistent with the findings in [26] that the Beta (16–32 Hz) and Gamma (32–64 Hz) bands are selected more often than other bands.

Finally, we selected 8 informative channels based on the EMD strategy, namely Fp1, Fp2, F7, F8, T7, T8, P7, and P8. The proposed method only needs to extract features from IMF1 with these 8 channels, which saves time and computation. Moreover, since every 5 s of EEG is extracted as a sample in our experiment, the method may provide a new solution for real-time emotion recognition in BCI systems.

A limitation is that we have tested the method only on the DEAP dataset; in the future, we plan to experiment on more emotional datasets to verify the method comprehensively. We also intend to adopt further strategies, such as feature smoothing and deep networks, to improve the classification accuracy.

5. Conclusion

In this paper, an emotion recognition method based on EMD using three statistics is proposed. An extensive analysis was carried out to investigate the effectiveness of the features for emotion classification; the results show that the three features are suitable for emotion recognition. The effect of each IMF component was then investigated, revealing that, among the multilevel IMFs, the first component IMF1 plays the most important role in emotion recognition. The informative channels under the EMD strategy were also investigated, and 8 channels, namely Fp1, Fp2, F7, F8, T7, T8, P7, and P8, were selected for feature extraction. Finally, the proposed method was compared with several classical methods and yields the highest accuracy.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the grant from the National Natural Science Foundation of China (Grant no. 61701089).

References

  1. C.-H. Han, J.-H. Lim, J.-H. Lee, K. Kim, and C.-H. Im, “Data-driven user feedback: an improved neurofeedback strategy considering the interindividual variability of EEG features,” BioMed Research International, vol. 2016, Article ID 3939815, 7 pages, 2016. View at Publisher · View at Google Scholar · View at Scopus
  2. K. Takahashi, “Remarks on emotion recognition from multi-modal bio-potential signals,” in Proceedings of the 2004 IEEE International Conference on Industrial Technology, pp. 1138–1143, Hammamet, Tunisia, December 2004. View at Scopus
  3. O. Sourina and Y. Liu, “A fractal-based algorithm of emotion recognition from EEG using arousal-valence model,” in Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing, BIOSIGNALS 2011, pp. 209–214, Rome, Italy, January 2011. View at Scopus
  4. Y. Liu and O. Sourina, “Real-time subject-dependent EEG-based emotion recognition algorithm,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8490, pp. 199–223, 2014. View at Publisher · View at Google Scholar · View at Scopus
  5. X. Jie, R. Cao, and L. Li, “Emotion recognition based on the sample entropy of EEG,” Bio-Medical Marerials and Engineering, vol. 24, no. 1, pp. 1185–1192, 2014. View at Google Scholar
  6. E. Kroupi, A. Yazdani, and T. Ebrahimi, “EEG correlates of different emotional states elicited during watching music videos,” in in Procceding of the 2011 Interntionnal Conference on Affective Conputing, pp. 457–466, Memphis, TN, USA, 2011.
  7. B. Hjorth, “EEG analysis based on time domain properties,” Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3, pp. 306–310, 1970. View at Publisher · View at Google Scholar · View at Scopus
  8. K. Ansari-Asl, G. Chanel, and T. Pun, “A channel selection method for EEG classification in emotion assessment based on synchronization likelihood,” in Proceedings of the 15th European Signal Processing Conference, EUSIPCO 2007, pp. 1241–1245, Pozna, Poland, September 2007. View at Scopus
  9. R. Horlings, D. Datcu, and L. J. M. Rothkrantz, “Emotion recognition using brain activity,” in Proceedings of the International Conference on Computer Systems and Technology, vol. 25, pp. 1–6, New York, NY, USA, 2008.
  10. P. C. Petrantonakis and L. J. Hadjileontiadis, “Emotion recognition from EEG using higher order crossings,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186–197, 2010. View at Publisher · View at Google Scholar · View at Scopus
  11. X. W. Wang, D. Nie, and B. L. Lu, “EEG-based emotion recognition using frequency domain features and support vector machines,” in in Procceding of the International Conference on Neural Information Processing, pp. 734–743, Guangzhou, China, 2011.
  12. R.-N. Duan, J.-Y. Zhu, and B.-L. Lu, “Differential entropy feature for EEG-based emotion classification,” in Proceedings of the 2013 6th International IEEE EMBS Conference on Neural Engineering, NER 2013, pp. 81–84, New Jersey, NJ, USA, November 2013. View at Publisher · View at Google Scholar · View at Scopus
  13. G. Chanel, K. Ansari-Asl, and T. Pun, “Valence-arousal evaluation using physiological signals in an emotion recall paradigm,” in Proceedings of the 2007 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2007, pp. 2662–2667, Halifax, NS, Canada, October 2007. View at Publisher · View at Google Scholar · View at Scopus
  14. Y.-P. Lin, C.-H. Wang, T.-P. Jung et al., “EEG-based emotion recognition in music listening,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 7, pp. 1798–1806, 2010. View at Publisher · View at Google Scholar · View at Scopus
  15. S. K. Hadjidimitriou and L. J. Hadjileontiadis, “Toward an EEG-based recognition of music liking using time-frequency analysis,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498–3510, 2012. View at Publisher · View at Google Scholar · View at Scopus
  16. S. S. Uzun, S. Yildirim, and E. Yildirim, “Emotion primitives estimation from EEG signals using Hilbert Huang Transform,” in Proceedings of the IEEE-EMBS International Conference on Biomedical and Health Informatics, pp. 224–227, Hong Kong, China, January 2012. View at Publisher · View at Google Scholar · View at Scopus
  17. M. Murugappan, M. Rizon, R. Nagarajan, and S. Yaacob, “EEG feature extraction for classifying emotions using FCM and FKM,” in Proceedings of the International Conference on Applied Computer and Applied Computational Science, vol. 1, pp. 21–25, Venice, Italy, 2007.
  18. Z. Mohammadi, J. Frounchi, and M. Amiri, “Wavelet-based emotion recognition system using EEG signal,” Neural Computing and Applications, pp. 1–6, 2016.
  19. M. Murugappan, “Human emotion classification using wavelet transform and KNN,” in Proceedings of the 2011 International Conference on Pattern Analysis and Intelligent Robotics (ICPAIR '11), vol. 1, pp. 148–153, Putrajaya, Malaysia, June 2011.
  20. S. Koelstra, C. Mühl, M. Soleymani et al., “DEAP: a database for emotion analysis using physiological signals,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
  21. I. Wichakam and P. Vateekul, “An evaluation of feature extraction in EEG-based emotion prediction with support vector machines,” in Proceedings of the 11th International Joint Conference on Computer Science and Software Engineering (JCSSE '14), pp. 106–110, Chon Buri, Thailand, May 2014.
  22. B. Reuderink, C. Mühl, and M. Poel, “Valence, arousal and dominance in the EEG during game play,” International Journal of Autonomous and Adaptive Communications Systems, vol. 6, no. 1, pp. 45–62, 2013.
  23. L. Brown, B. Grundlehner, and J. Penders, “Towards wireless emotional valence detection from EEG,” in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 2188–2191, Boston, MA, USA, September 2011.
  24. V. Rozgic, S. N. Vitaladevuni, and R. Prasad, “Robust EEG emotion classification using segment level decision fusion,” in Proceedings of the 38th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '13), pp. 1286–1290, Vancouver, BC, Canada, May 2013.
  25. R. Gupta, K. U. R. Laghari, and T. H. Falk, “Relevance vector classifier decision fusion and EEG graph-theoretic features for automatic affective state characterization,” Neurocomputing, vol. 174, pp. 875–884, 2016.
  26. R. Jenke, A. Peer, and M. Buss, “Feature extraction and selection for emotion recognition from EEG,” IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 327–339, 2014.
  27. W.-L. Zheng and B.-L. Lu, “Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks,” IEEE Transactions on Autonomous Mental Development, vol. 7, no. 3, pp. 162–175, 2015.
  28. Y. Yang, Q. M. J. Wu, W. L. Zheng, and B. L. Lu, “EEG-based emotion recognition using hierarchical network with subnetwork nodes,” IEEE Transactions on Cognitive and Developmental Systems, vol. PP, no. 99, p. 1, 2017.
  29. N. E. Huang, Z. Shen, S. R. Long et al., “The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis,” Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, vol. 454, pp. 903–995, 1998.
  30. S. M. S. Alam and M. I. H. Bhuiyan, “Detection of seizure and epilepsy using higher order statistics in the EMD domain,” IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 2, pp. 312–318, 2013.
  31. R. B. Pachori and V. Bajaj, “Analysis of normal and epileptic seizure EEG signals using empirical mode decomposition,” Computer Methods and Programs in Biomedicine, vol. 104, no. 3, pp. 373–381, 2011.
  32. S. Li, W. Zhou, Q. Yuan, S. Geng, and D. Cai, “Feature extraction and recognition of ictal EEG using EMD and SVM,” Computers in Biology and Medicine, vol. 43, no. 7, pp. 807–816, 2013.
  33. A. Mert and A. Akan, “Emotion recognition from EEG signals by using multivariate empirical mode decomposition,” Pattern Analysis and Applications, pp. 1–9, 2016.
  34. K. Takahashi, “Remarks on emotion recognition from multi-modal bio-potential signals,” in Proceedings of the IEEE International Conference on Industrial Technology (IEEE ICIT '04), pp. 1138–1143, Hammamet, Tunisia, December 2004.
  35. G. Chanel, J. Kronegg, D. Grandjean, and T. Pun, “Emotion assessment: arousal evaluation using EEGs and peripheral physiological signals,” in Proceedings of the International Workshop on Multimedia Content Representation, Classification and Security, pp. 530–537, Istanbul, Turkey, 2006.
  36. C. C. Chang and C. J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011.