Computational and Mathematical Methods in Medicine
Volume 2013 (2013), Article ID 573734, 13 pages
http://dx.doi.org/10.1155/2013/573734
Review Article

A Review on the Computational Methods for Emotional State Estimation from the Human EEG

1Department of Brain and Cognitive Engineering, Korea University, Seoul 136701, Republic of Korea
2Samsung Electronics, DMC R&D Center, Suwon 443742, Republic of Korea
3Research and Business Foundation, Korea University, Seoul 136701, Republic of Korea

Received 11 January 2013; Accepted 18 February 2013

Academic Editor: Chang-Hwan Im

Copyright © 2013 Min-Ki Kim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A growing number of affective computing studies have recently developed computer systems that can recognize the emotional state of a human user in order to establish affective human-computer interactions. Various measures have been used to estimate emotional states, including self-report, startle response, behavioral response, autonomic measurement, and neurophysiologic measurement. Among them, inferring emotional states from electroencephalography (EEG) has received considerable attention, as EEG can directly reflect emotional states at relatively low cost and with relative simplicity. Yet, EEG-based emotional state estimation requires well-designed computational methods to extract information from complex and noisy multichannel EEG data. In this paper, we review the computational methods that have been developed to derive EEG indices of emotion, to extract emotion-related features, and to classify EEG signals into one of many emotional states. We also propose using sequential Bayesian inference to estimate the continuous emotional state in real time. We present current challenges for building an EEG-based emotion recognition system and suggest some future directions.

1. Introduction

An emotional state refers to a psychological and physiological state in which emotions and behaviors are interrelated and appraised within a context [1]. From a psychological perspective, the space of the emotional state can be built from either a discrete model or a dimensional model. In the discrete model, an emotional state is defined as one of a finite set of discrete states corresponding to core emotions, including anger, fear, disgust, surprise, happiness, and sadness, or to a combination of them [2]. The dimensional model defines an emotional state spatially along basic dimensions of emotion, such as valence and arousal, and interprets an emotion through the levels of each dimension [3]. These emotion models have been used for systematic and multilateral analyses of emotion [3]. Based on the emotion models, the neurophysiologic mechanisms underlying the emotional state have been vigorously investigated. Broadly, it has been documented that the emotional processes performed by the ventral and dorsal systems in the human brain are functionally different [4]. The ventral system, including the ventral anterior cingulate gyrus and some ventral areas of the prefrontal cortex (ventromedial prefrontal cortex and medial orbitofrontal cortex), is involved in the production of emotional states and the regulation of affective responses, whereas the dorsal system, including the dorsal anterior cingulate gyrus, some dorsal areas of the prefrontal cortex (dorsolateral, posterior dorsolateral, and mid-dorsolateral prefrontal cortex), and the hippocampus, is involved in effortful emotion regulation and subsequent behavior [4, 5].

Recently, affective computing (AC) has emerged as a converging technology that blends emotion into human-computer interaction (HCI) [6]. AC, often called emotion aware computing, builds emotional interactions between a human and a computer by measuring the emotional state through behavioral and physiological signals and by developing computational models of the emotional state [6, 7]. One of the key elements in AC is emotion recognition, which estimates the emotional state of users from their behavioral and physiological responses [7]. Emotion recognition aims to advance the intelligence of computers for creating affective user interfaces and to enhance the quality of psychiatric health care.

A variety of measures have been used for emotion recognition, including self-report, startle response, behavioral response, autonomic measurement, and neurophysiologic measurement [3]. Self-report readily acquires emotional responses according to an emotion modeling framework but makes it difficult to track rapid affective changes and must rely on the subject's own estimation of the emotional state [3, 8]. The startle response magnitude measured by electromyography (EMG) captures unconscious myoneural responses but assesses only partial aspects of emotion (e.g., the arousal level) [3, 9]. Behavioral measurement detects changes in facial and/or whole-body behavior using EMG or video images but requires the assumption that EMG signals directly correspond to a specific emotional state [3, 10]. Autonomic measurement can objectively detect emotion-related physiological responses of the autonomic nervous system (ANS), such as skin conductance responses (SCRs) and heart rate variability (HRV), but only accesses subspaces of the emotional state [3, 11]. Neurophysiologic measurement based on electrophysiological and neuroimaging techniques can detect a wide range of dynamics of the emotional state by directly accessing the fundamental structures in the brain from which an emotional state emerges [3, 12]. Hence, neurophysiologic measurements clearly provide the most direct and comprehensive means for emotion recognition.

A large body of research has investigated the neural correlates of emotion in humans using noninvasive sensor modalities, each presenting unique characteristics with respect to spatiotemporal resolution and mobility. Functional magnetic resonance imaging (fMRI) has been used to find cortical and subcortical structures implicated in emotional states [13]. Magnetoencephalography (MEG) has also been used to localize emotion-related neural signals to specific sources in a timely manner with fine spatial and temporal resolutions [14]. But the cost and immobility of fMRI and MEG prevent these modalities from being used in practical emotion recognition systems [15, 16]. EEG, although suffering from poor spatial resolution and high susceptibility to noise, has been widely used to investigate the brain dynamics related to emotion, as it enables the detection of immediate responses to emotional stimuli with excellent temporal resolution [17–21]. Having become more cost-effective and mobile, with increased practicability and less physical restriction [22], EEG, not without its downsides, still carries critical advantages in practical usage and has therefore been a primary option for the development of online emotion recognition systems. In fact, there have been a growing number of efforts to recognize a person's emotion in real time using EEG. For example, EmoRate, developed as a commercial product (Emotiv Corp., CA, USA), detects the flow of the emotional state while the user is watching a film [23]. Brown et al. proposed an EEG-based affective computer system that measures the state of valence and transmits it via a wireless link [24].

The development of an EEG-based emotion recognition system requires computational models that describe how the emotional state is represented in EEG signals and how one can estimate an emotional state from them. Despite a long history of searching for EEG indices of emotion, less attention has been paid to the computational models for emotional state estimation. Hence, we see a need for a review of the state-of-the-art computational models for emotional state estimation to support the development of advanced emotion recognition methods. This paper reviews the current computational methods for emotional state estimation from the human EEG, with a discussion of challenges and some future directions.

This paper will particularly focus on the following aspects of EEG-based emotional state estimation models. First, it will start with a quick review of the EEG correlates of emotion, including the definition of the emotional state space, the design of emotional stimuli, and the EEG indices of emotion. Then, it will revisit the computational methods used to extract EEG features related to emotional states and to estimate emotional states from EEG. We will also propose a mathematical approach to the estimation of the continuous emotional state based on Bayesian inference.

2. EEG Correlates of Emotion

Finding EEG correlates of emotional states should begin with how to define the emotional state space. The emotional state space can be largely categorized into a discrete space and a continuous space. The discrete state space draws upon the discrete emotion model and contains a set of discrete experiential emotional states. The discrete emotional state comprises seven to ten core emotions, such as happiness, surprise, sadness, anger, disgust, contempt, and fear [2, 25], and sometimes expands to contain a larger number of emotions that are synonyms of these core emotions [25]. The continuous state space is built from the dimensional emotion model and represents an emotional state as a vector in a multidimensional space. This vector space of the continuous emotional state depends on the definition of a basis. For instance, the circumplex model, developed by Russell, describes an emotional state in a two-dimensional circular space with the arousal and valence dimensions [26]. Various psychological models define emotional dimensions that subsequently constitute the basis for the emotional state space [25, 27–30].

Based on the construction of the emotional state space, the investigation of EEG correlates of emotion should also address how to determine the experimental stimuli used to induce emotions. Typically, emotional stimuli are selected to cover the desired arousal levels and valence states and are presented in different modalities, including visual, auditory, tactile, or olfactory stimulation. The ground truth of the emotional state induced by a stimulus is secured by exploiting the self-ratings of subjects or by using standard stimulus sets such as the International Affective Picture System (IAPS) or the International Affective Digitized Sounds (IADS). The IAPS provides a set of normative pictures for emotional stimuli to induce emotional changes and attention levels [31]. The IADS embodies acoustic stimuli to induce emotions and is sometimes used together with the IAPS [32]. These international affective systems are known to be independent of culture, sex, and age [33].

A number of neuropsychological studies have reported EEG correlates of emotion. These EEG features can be broadly placed in one of two domains: the time domain and the frequency domain. In the time domain, several components of event-related potentials (ERPs) reflect underlying emotional states [34]. These ERP components can be encapsulated in chronological order: the P1 and N1 components generated at a short latency from stimulus onset, N2 and P2 at a middle latency, and P3 and the slow cortical potential (SCP) at a long latency. The ERP components of short to middle latencies have been shown to correlate with valence [34–37], whereas the ERP components of middle to long latencies have been shown to correlate with arousal [38–41]. Basically, the computation of ERPs requires averaging EEG signals over multiple trials, rendering ERP features inappropriate for online computing. However, recent developments in single-trial ERP computation methods increase the possibility of using ERP features for online emotional state estimation [42–46].
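For concreteness, the sketch below shows the basic stimulus-locked trial averaging that underlies ERP computation; the array names, baseline window, and epoch limits are illustrative choices rather than values from the cited studies.

```python
import numpy as np

def erp_average(eeg, events, fs, tmin=-0.2, tmax=0.8):
    """Average stimulus-locked epochs to obtain an ERP waveform.

    eeg    : 1-D array, continuous single-channel EEG
    events : sample indices of stimulus onsets
    fs     : sampling rate in Hz
    tmin, tmax : epoch window in seconds relative to stimulus onset
    """
    eeg = np.asarray(eeg, dtype=float)
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[e - pre : e + post] for e in events])
    # Baseline-correct each epoch using its pre-stimulus interval.
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)
    # Averaging attenuates activity that is not phase-locked to the stimulus.
    return epochs.mean(axis=0)
```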

In the frequency domain, the spectral power in various frequency bands has been implicated in the emotional state. The alpha power varies with the valence state [47] and with discrete emotions such as happiness, sadness, and fear [18]. Specifically, the frontal asymmetry of the alpha power has been repeatedly reported as a steady correlate of valence [48]. Subsequent studies have suggested that the frontal alpha asymmetry may reflect the approach/avoidance aspects of emotion rather than valence per se [49]. The event-related synchronization (ERS) and desynchronization (ERD) of the gamma power have been related to emotions such as happiness and sadness [50–52]. The ERS of the theta power is also modulated during transitions in the emotional state [18, 53–55].

Besides the waveforms and the spectral power, interactive properties between pairs of EEG oscillations, such as phase synchronization and coherence, have also been implicated in emotional processes. For instance, the phase synchronization level between the frontal and right temporoparietal areas varied with emotional states along the energetic, tension, and hedonic arousal dimensions [56]. The EEG coherence between prefrontal and posterior beta oscillations was increased by viewing high-arousal images [57]. Also, increases in the gamma phase synchronization index were induced by unpleasant visual stimuli [58]. As emotional processes engage a large-scale network of neural structures in the brain, such multichannel analyses of EEG across the brain should reveal more signatures of emotion, as they do for other cognitive functions [59–64]. A brief summary of the EEG correlates of emotion is presented in Table 1.
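As a concrete instance of such a synchronization measure, the sketch below computes the phase-locking value (PLV) between two channels in a chosen band via band-pass filtering and the Hilbert transform, assuming SciPy is available; the band edges and filter order are illustrative choices, and PLV is only one of several synchronization indices used in these studies.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(30.0, 45.0)):
    """Phase-locking value between two EEG channels in a given band.

    x, y : 1-D arrays of equal length (single-trial EEG)
    fs   : sampling rate in Hz
    band : (low, high) edges of the frequency band of interest
    """
    # Band-pass both channels; zero-phase filtering preserves phase.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    # Instantaneous phases from the analytic (Hilbert) signals.
    dphi = np.angle(hilbert(xf)) - np.angle(hilbert(yf))
    # PLV: modulus of the mean phase-difference vector, in [0, 1];
    # 1 indicates a constant phase lag across time, 0 indicates none.
    return np.abs(np.mean(np.exp(1j * dphi)))
```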

Table 1: EEG correlates of emotion.

3. Computational Methods to Estimate Emotional States

The computational methods to estimate the emotional state have been designed around various EEG features related to emotional processes. As with most EEG analysis methods, emotional state estimation is accompanied by preprocessing to reduce artifacts. Figure 1 illustrates the overall processing steps to estimate the emotional state from EEG signals. The EEG signals recorded in response to affective stimuli pass through the preprocessing step, in which noise reduction algorithms and spatiotemporal filtering methods are employed to enhance the signal-to-noise ratio (SNR). Then, the feature extraction step determines the specific band powers, ERPs, and phase coupling indices correlated with the target emotional states. Usually, this feature selection process is optimized by mathematical methods to achieve maximum estimation accuracy. The classification step estimates the most probable emotional state from the selected EEG features. The number of classes depends on the definition of the emotional state space, such as the continuous states of arousal and valence or the discrete states.

Figure 1: The overall emotional state estimation procedure. EEG signals are recorded during emotional situations and passed through the preprocessing step, including noise reduction and spatial and temporal filtering. Features related to the emotional states, such as spectral power, ERPs, and phase synchronization, are extracted from the preprocessed EEG signals. These features are used to estimate emotional states by classification methods.

As the preprocessing methods are relatively general across a variety of EEG signal processing applications, here we focus on the feature extraction and emotion classification methods. We first review the computational methods to extract emotion-related features from EEG, followed by the classification algorithms used to estimate the emotional state from the EEG features. The feature extraction methods usually build a computational model to find emotion-related features based on neurophysiologic and neuropsychological knowledge. Unlike the feature extraction methods, the classification methods draw more upon signal processing theories such as machine learning and statistical signal processing. It has been of interest how each of these two steps impacts estimation accuracy. On one hand, feature extraction seems to be more closely tied to estimation performance, since without identifying the very features correlated with emotion, it is implausible to build a correct model. On the other hand, the classification algorithm should also be carefully designed to fit the characteristics of the feature space; for instance, using a linear classifier on a highly nonlinear feature structure would not make much sense. In general, one should weigh the coherence between a feature space and a classifier to increase estimation accuracy.

3.1. Feature Extraction Methods

As for valence-related features, it has been shown that positive and negative emotions induce asymmetric modulations in the frontal alpha power of EEG, leading to a relative decrease in the left frontal alpha power for positive emotions and a decrease in the right for negative emotions [65]. This frontal alpha asymmetry provides an effective index for valence, computed as the difference between the left and right alpha powers, here denoted as $P_L$ and $P_R$, respectively, divided by the sum of both:

$$A = \frac{P_L - P_R}{P_L + P_R}.$$

The computation of the spectral power in the alpha band has been executed by a number of methods, including the squares of the EEG amplitude filtered through an alpha bandpass filter [53], the Fourier transform [66], power spectral density [18, 21], and the wavelet transform [7, 67, 68]. Most of these methods are well established and can readily be implemented in real time.
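For illustration, here is a minimal sketch of this index using Welch power spectral density estimates, assuming SciPy and a left/right frontal channel pair (e.g., F3/F4); the function name, band edges, and segment length are choices for this example rather than prescriptions from the cited studies.

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(left, right, fs, band=(8.0, 13.0)):
    """Valence index from a left/right frontal channel pair.

    left, right : 1-D EEG arrays from homologous frontal electrodes
    fs          : sampling rate in Hz
    band        : alpha band edges in Hz (a conventional choice)
    """
    def band_power(x):
        # Welch PSD, then integrate over the alpha band.
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * int(fs)))
        mask = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[mask], f[mask])

    p_left, p_right = band_power(left), band_power(right)
    # Normalized left-right difference, as defined in the text.
    return (p_left - p_right) / (p_left + p_right)
```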

As for arousal-related features, one can extract spectral power features such as the frontal midline theta power, similar to the alpha power. Recently, more advanced computational methods have been proposed to evaluate emotional arousal. For instance, the asymmetry index (AsI) assesses emotion elicitation by computing a multidimensional directed information (MDI) between EEG channels [69]:

$$\mathrm{AsI} = \frac{I^{e}_{L \rightarrow R}}{I^{e}_{LR}} - \frac{I^{ne}_{L \rightarrow R}}{I^{ne}_{LR}},$$

where $I^{e}_{L \rightarrow R}$ indicates the total amount of information flowing from the left hemisphere signals, $X_L$, to the right hemisphere signals, $X_R$, when the subject has emotional feelings, and $I^{e}_{LR}$ refers to the total bidirectional information with emotion. Likewise, $I^{ne}_{L \rightarrow R}$ indicates the same directional information from $X_L$ to $X_R$ but when the subject does not have emotional feelings, and $I^{ne}_{LR}$ refers to the total bidirectional information without emotion. AsI can effectively indicate whether an emotional state is elicited or not [69]. Besides AsI, the variance of potentials from a specific channel over different EEG channels has been used as an emotion-related feature [68]. Also, the entropy of EEG signals has been used to extract information related to emotion from intrusive noise [68].

As for individual discrete emotions, a typical approach is to search through all the possible EEG channels, spectral bands, and time segments for a set of features that maximizes the accuracy of emotional state estimation. This approach adopts a greedy search method with supervised learning, often resulting in different optimal feature sets for each individual. To overcome this issue of subject-by-subject variability, a higher order crossing (HOC) analysis was developed to implement a user-independent emotion recognition system [70]. The HOC analysis aims to find EEG features with respect to six affective traits, including surprise, disgust, anger, fear, happiness, and sadness [70]. The simple HOC feature is given as

$$D_k = \mathrm{NZC}\{\nabla^{k-1}(Z_t)\}, \quad k = 1, 2, 3, \ldots,$$

where $\mathrm{NZC}\{\cdot\}$ counts the number of zero-crossings of a high-pass filtered, standardized EEG time series $Z_t$, and $\nabla^{k-1}$ denotes the backward difference operator applied $k-1$ times as a high-pass filter. A zero-crossing indicates an event at which the signal amplitude passes through the zero line with a change of polarity. The zero-crossing counts often represent oscillation properties more robustly than the spectral power. A higher value of $k$ decreases the discrimination power of the simple HOC because different processes can yield almost the same $D_k$. The zero-crossing count can be computed from a binary time series $X_t$ with zeros and ones, where $X_t = 1$ if the amplitude of the filtered signal is negative at time instant $t$ and $X_t = 0$ otherwise:

$$D_k = \sum_{t=2}^{N} (X_t - X_{t-1})^2,$$

where $N$ indicates the length of the time series $Z_t$. The EEG feature vector is defined as $\mathbf{V} = [D_1, D_2, \ldots, D_J]$, which consists of multiple simple HOCs [70]. The computational methods to extract emotional features from EEG are summarized in Table 2.
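A minimal sketch of the simple HOC computation under these definitions follows; the function name and the number of orders are illustrative, and the backward difference serves as the high-pass filter.

```python
import numpy as np

def hoc_features(z, num_hocs=10):
    """Simple higher-order crossing (HOC) features D_1..D_J.

    z        : 1-D EEG time series, standardized to zero mean here
    num_hocs : number of difference orders J to compute
    """
    z = np.asarray(z, dtype=float)
    z = z - z.mean()
    features = []
    for _ in range(num_hocs):
        x = (z < 0).astype(int)                 # binary series: 1 when negative
        d_k = int(np.sum(np.abs(np.diff(x))))   # number of sign changes (NZC)
        features.append(d_k)
        z = np.diff(z)                          # next-order backward difference
    return np.array(features)                   # the feature vector V
```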

Table 2: Emotional state estimation model.
3.2. Emotion Classification Methods

The EEG feature vector provides observations from which an emotional state can be inferred. Commonly, a classifier has been used for decoding the feature vector into one of the possible emotional states. A number of classification methods have been used for emotional state estimation, including discriminant analysis (DA), the support vector machine (SVM), k-nearest neighbor (k-NN), and the Mahalanobis distance (MD) based method. DA performs dimensionality reduction from a high-dimensional feature space onto a low-dimensional space with the aim of maximizing the Fisher discriminant ratio of the between-class scatter $S_B$ to the within-class scatter $S_W$ [42, 71–76]:

$$J(\mathbf{w}) = \frac{\mathbf{w}^{T} S_B \mathbf{w}}{\mathbf{w}^{T} S_W \mathbf{w}},$$

where $\mathbf{w}$ denotes the projection vector. A larger value of $J$ indicates greater separation between classes. The dimensionality of the low-dimensional space varies from one up to the number of classes minus one.

SVM is related to DA but determines a decision boundary in a kernel space instead of the original feature space. SVM finds an optimal hyperplane that maximizes the margin of the decision boundary in the feature space using a supervised learning method. The classifier assigns a new input feature vector $\mathbf{x}$ using a classification rule given by

$$f(\mathbf{x}) = \mathrm{sgn}\left( \sum_{i \in S} \alpha_i K(\mathbf{x}_i, \mathbf{x}) + b \right),$$

where $S$ indicates the set of the support vectors that determine the maximum margin hyperplane, $K(\cdot, \cdot)$ denotes the kernel function of the SVM classifier, $b$ denotes an offset parameter, $\mathbf{x}_i$ the training input vectors, and $\alpha_i$ the nonzero weights on each support vector [7, 77–80]. Various kernel functions have been proposed, such as the Gaussian function or polynomials. SVM offers the advantage of good generalizability for nonlinear feature spaces.

The k-NN algorithm determines the class of a new feature vector according to the majority class among the k nearest vectors in the training set surrounding the new feature vector [73, 81]. Here, k is a parameter determining the size of the neighborhood. The k-NN algorithm depends on how a distance between feature vectors is defined, which makes it susceptible to the curse of dimensionality [81, 82].

The MD-based method has been widely used in clustering analysis because it accounts not only for distance but also for the correlation structure and standard deviation of the data [83, 84]. The distance of a feature vector $\mathbf{x}$ to a class $c$ is

$$d_c(\mathbf{x}) = \sqrt{(\mathbf{x} - \boldsymbol{\mu}_c)^{T} \Sigma_c^{-1} (\mathbf{x} - \boldsymbol{\mu}_c)},$$

where $\Sigma_c^{-1}$ and $\boldsymbol{\mu}_c$ indicate the inverse of the covariance matrix and the mean vector of class $c$, respectively. MD converges to the Euclidean distance when the covariance matrix of the feature vectors becomes the identity matrix [84]. Basically, when a new feature vector arrives, the MD-based classifier computes its distance to each class using MD and chooses the class with the smallest distance. The classification methods that have been used for emotional state estimation are summarized in Table 2.
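To ground the comparison, here is a minimal sketch that exercises the classifiers reviewed above on synthetic feature vectors, assuming scikit-learn is available; the random data, feature dimensionality, and parameter settings are placeholders rather than values from the cited studies, and the MD-based rule is implemented directly from the distance defined above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))      # 120 EEG feature vectors (e.g., band powers)
y = rng.integers(0, 3, size=120)   # 3 hypothetical emotional states

# DA, SVM, and k-NN via scikit-learn.
for name, clf in [("DA", LinearDiscriminantAnalysis()),
                  ("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0)),
                  ("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")  # near chance on random data

def md_classify(x_new, X_train, y_train):
    """Assign x_new to the class with the smallest Mahalanobis distance."""
    dists = []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        mu = Xc.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(Xc.T))  # inverse class covariance
        d = np.sqrt((x_new - mu) @ cov_inv @ (x_new - mu))
        dists.append((d, c))
    return min(dists)[1]

print("MD-based prediction:", md_classify(X[0], X[1:], y[1:]))
```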

4. A Generative Model for Online Tracking of Emotional States

As described earlier, most computational models estimating emotional states have focused on the discrete state space and classified EEG features into one of a finite number of emotional states. This approach generally suits the case of a static determination of which emotion is induced by a given stimulus. Yet, for the development of an online emotion recognition system, where continuous tracking of the emotional state may play an important role, the current approach might be suboptimal because it does not take the temporal dynamics of the emotional state into account. Another downside of the current approach originates from its direct modeling framework. A model in this framework builds a direct input-output mapping from the observed EEG signal to the emotional state. Although this framework may provide a reasonable solution when the only purpose is to improve classification accuracy, it exploits neither prior information about the emotional state nor its dynamics. These shortcomings make it difficult to gain useful insights into the neural mechanisms of emotion. Also, it is often desirable to incorporate prior information about the dynamics of the emotional state within a model, especially for tracking the emotional state continuously over time.

To address these issues, we propose a computational modeling approach based on the generative modeling framework [85–87]. Our approach focuses on tracking the change of the emotional state over time from the EEG signal. In this approach, a generative model depicts how the EEG signal is generated from a hidden emotional state. Also, a prior model explains how the emotional state changes over time. Integrating these two models, we infer the most likely emotional state from an observed EEG signal. Differences between the generative and direct models can be illustrated in a probabilistic view where the goal is to estimate a conditional probability of emotional state variables given EEG observations as accurately as possible. Suppose that a random vector $\mathbf{x}$ denotes hidden emotional states and a random vector $\mathbf{y}$ denotes observed EEG data (e.g., an EEG feature vector). An estimation model aims to optimize a parameter set, $\theta$, for the following conditional probability:

$$p(\mathbf{x} \mid \mathbf{y}; \theta).$$

A direct model forms a functional relationship from $\mathbf{y}$ to $\mathbf{x}$ with $\mathbf{x} = f(\mathbf{y}; \theta) + \boldsymbol{\epsilon}$, where $\theta$ is the parameter set of the function $f$ and $\boldsymbol{\epsilon}$ is a residual vector. In many cases, the residual vector is assumed to follow the Gaussian distribution. Parameter estimation of $\theta$ can be accomplished by many standard solutions such as maximum likelihood [88]. On the other hand, a generative model uses maximum a posteriori (MAP) estimation or Bayesian inference to estimate the conditional probability,

$$p(\mathbf{x} \mid \mathbf{y}; \theta) = \frac{p(\mathbf{y} \mid \mathbf{x}; \theta)\, p(\mathbf{x})}{Z},$$

where $Z$ represents a normalizing constant given by the integral of $p(\mathbf{y} \mid \mathbf{x}; \theta)\, p(\mathbf{x})$ over $\mathbf{x}$. The posterior is estimated by the product of $p(\mathbf{y} \mid \mathbf{x}; \theta)$, the likelihood of the observation $\mathbf{y}$ given a state $\mathbf{x}$, and $p(\mathbf{x})$, the prior of the state $\mathbf{x}$. The parameter set $\theta$ is used to model a generative relation from $\mathbf{x}$ to $\mathbf{y}$. In terms of the EEG correlates of the emotional state, the likelihood describes how the observed EEG signal is generated from an emotional state, the prior describes the probability of each emotional state, and the posterior describes which emotional state most likely elicits the observed EEG signal.
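As a toy illustration of the generative route for a discrete state space, the following sketch computes the posterior by multiplying assumed Gaussian likelihoods with a uniform prior and normalizing; the state names, means, and covariances are purely illustrative assumptions, not fitted models from the cited work.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Assumed per-state Gaussian likelihoods p(y | x), e.g., fitted beforehand
# to labeled 2-D EEG feature vectors; all numbers here are illustrative.
likelihoods = {
    "happy":   multivariate_normal(mean=[0.4, 1.0], cov=np.eye(2) * 0.2),
    "sad":     multivariate_normal(mean=[-0.4, 0.3], cov=np.eye(2) * 0.2),
    "neutral": multivariate_normal(mean=[0.0, 0.5], cov=np.eye(2) * 0.2),
}
prior = {s: 1.0 / len(likelihoods) for s in likelihoods}  # uniform p(x)

y = np.array([0.3, 0.9])  # an observed EEG feature vector
unnormalized = {s: likelihoods[s].pdf(y) * prior[s] for s in likelihoods}
z = sum(unnormalized.values())                 # the normalizing constant Z
posterior = {s: v / z for s, v in unnormalized.items()}
print(max(posterior, key=posterior.get), posterior)  # MAP state and p(x|y)
```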

Here, we extend this generative approach to take into account the temporal dynamics of the emotional state. We use sequential Bayesian inference to track a time-varying emotional state from the EEG signal [89]. To this end, we first assume that the emotional state is defined in a continuous space. An example of a continuous state space consists of two emotional dimensions, such as valence and arousal. The valence dimension ranges from negative to positive values. The arousal dimension ranges from low to high arousal levels. A key point is that an emotional state varies over a continuous space instead of alternating between discrete values. This does not mean that we need to assign an explicit emotion to every possible point in the emotional state space; a specific area or volume in the state space can represent a single emotion.

The generative model is then formulated as follows. Let $\mathbf{x}_t$ be an emotional state vector and $\mathbf{y}_t$ an EEG signal vector at time instant $t$. $\mathbf{x}_t$ contains a set of emotional state variables (e.g., $\mathbf{x}_t = [v_t, a_t, d_t]^{T}$, where $v_t$ is the valence dimension, $a_t$ is the arousal dimension, and $d_t$ is the dominance dimension). $\mathbf{y}_t$ contains a set of EEG features selected to be related to emotion (e.g., the power of a certain frequency band at a selected channel). The goal of the model is to find the most probable emotional state given the series of observations from the beginning, $\mathbf{y}_{1:t}$ (assuming observation begins at $t = 1$). The posterior is formed as

$$p(\mathbf{x}_t \mid \mathbf{y}_{1:t}).$$

The posterior can be rewritten as a recursive equation,

$$p(\mathbf{x}_t \mid \mathbf{y}_{1:t}) \propto p(\mathbf{y}_t \mid \mathbf{x}_t) \int p(\mathbf{x}_t \mid \mathbf{x}_{t-1})\, p(\mathbf{x}_{t-1} \mid \mathbf{y}_{1:t-1})\, d\mathbf{x}_{t-1}.$$

Note that the likelihood, $p(\mathbf{y}_t \mid \mathbf{x}_t)$, depends only on the current time $t$. The prior, $p(\mathbf{x}_t \mid \mathbf{x}_{t-1})$, represents the state transition from $\mathbf{x}_{t-1}$ to $\mathbf{x}_t$, assuming a first-order Markov process. The dynamics of the emotional state are embedded in the prior, whereas the generative process of the EEG features from an emotional state is modeled by the likelihood. The integral can be approximately computed by a number of methods with different model assumptions [89].
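One widely used approximation of this recursion is the particle filter, which represents the posterior by a set of weighted samples. The sketch below implements a single recursion step under an assumed random-walk transition prior and a user-supplied likelihood; all parameter choices (step size, resampling threshold, and the Gaussian likelihood in the usage lines) are illustrative.

```python
import numpy as np

def particle_filter_step(particles, weights, y, likelihood, q=0.05):
    """One sequential Bayesian update for a continuous emotional state.

    particles : (N, 2) array of [valence, arousal] hypotheses
    weights   : (N,) importance weights from the previous step
    y         : current EEG feature vector
    likelihood: function p(y | x) evaluated per particle
    q         : random-walk step size of the state-transition prior
    """
    # Prior p(x_t | x_{t-1}): a slow random walk over the state space.
    particles = particles + np.random.normal(scale=q, size=particles.shape)
    # Likelihood p(y_t | x_t): reweight each hypothesis by the observation.
    weights = weights * np.array([likelihood(y, x) for x in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    # Posterior mean as the point estimate of the emotional state.
    return particles, weights, weights @ particles

# Hypothetical usage with an illustrative Gaussian observation model that
# treats the EEG feature vector as a noisy readout of [valence, arousal].
lik = lambda y, x: np.exp(-0.5 * np.sum((y - x) ** 2) / 0.1)
particles = np.random.uniform(-1, 1, size=(500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights, estimate = particle_filter_step(
    particles, weights, np.array([0.2, 0.6]), lik)
print("estimated [valence, arousal]:", estimate)
```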

As this approximation relies on the recursion of the posterior, the inference of an emotional state from the EEG signal operates sequentially over time. This property makes our model well suited to the purpose of tracking emotional states continuously. In fact, the sequential Bayesian inference model (also called a Bayesian filter) has been widely adopted in many neuroengineering studies (e.g., see [90–94]). Our model may provide an effective way for online emotion aware computing, especially when we need to keep track of changes in the emotional state from EEG measurements continuously over time, for instance, tracking emotional changes while a subject is watching movies [95].

5. Discussion

In this paper, we have reviewed the computational methods used for emotional state estimation. We first gave a brief overview of the EEG correlates of emotion. Then, we revisited the computational methods to extract EEG features correlated with continuous and discrete emotional states. We also described the classification methods used to discriminate a particular emotional state from EEG features. Finally, we proposed a computational approach based on the generative modeling framework that may be well suited to tracking the emotional state over time. These computational methods for emotional state estimation will serve as key elements of practical online emotion recognition systems for affective computing.

While affective computing has attracted attention in the HCI field with the promise of novel user interfaces, the development of computational methods to estimate the emotional state still requires further understanding of emotional processes and their neurophysiologic substrates [96]. In particular, the estimation of emotional states from the human EEG has so far been posed only as a relatively simple classification problem with a few discrete emotions. The development of a real-time emotional state tracking system would require a more rigorous definition of the emotional state space suitable for estimation models.

Exploration of the EEG signatures of emotion that can span a broad area of the emotional state space or represent a number of different discrete emotions should continue. Such investigations will need to overcome many existing challenges. In particular, finding EEG signatures of emotion that are invariant across individuals will be important for general emotion recognition systems [69]. As the emotion-related features have mostly been found in frontal EEG signals, online algorithms to overcome eye movement artifacts should be continuously developed [97–99]. Also, bringing EEG-based emotion recognition systems to everyday users would require simple yet efficient EEG sensors. A new EEG sensor should meet criteria such as stabilization of the signal-to-noise ratio (SNR), reduction of noise caused by hair, optimization of active dry electrodes, development of multichannel wireless communication, and sustained EEG signal quality over long periods [100–103]. Many previous studies have estimated the emotional state by analyzing EEG responses to specific emotional stimuli. However, this emotion-induction paradigm has the limitation that EEG signals can be modulated by stimulus properties irrelevant to emotion [21]. Hence, a computational model that can predict the emotional state across various stimuli may be required for real-world applications.

The computational methods to estimate the emotional state may improve further with several advances in computational modeling. First, a model that can associate the dynamics of the EEG signal with the dynamics of cognitive emotional processes will provide a basis for constructing novel emotional state estimation methods. The current methods only capture static properties of the EEG pattern in response to emotional stimuli. If a new model can embrace the temporal dynamics of emotional information processing in the human cognitive system and find EEG correlates of those dynamical properties, it will estimate the emotional state more precisely. Second, the quest for novel EEG signatures of the emotional state should be pursued. In particular, interactive properties between EEG signals, such as cross-frequency coupling and effective connectivity patterns, may be worth exploring to find novel EEG correlates of emotion. Third, inference of emotion-related information from high-dimensional and nonlinear EEG data poses an interesting problem for developing and applying state-of-the-art machine learning algorithms. So far, only a few basic learning algorithms have been applied to emotional state estimation, but emotion recognition would likely benefit from more advanced statistical learning and pattern recognition algorithms. With these advances, we foresee that the computational models of emotional state estimation will play a key role in future consumer devices. Before long, they could bring serendipity to device users by estimating emotional states in a natural and nonintrusive way.

Conflict of Interests

The authors declare that there is no conflict of interests.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) Grants funded by the Korean government (2012047239 and 20120006588) and was funded by the Samsung Electronics Grant (R1210241).

References

1. K. R. Scherer, “What are emotions? And how can they be measured?” Social Science Information, vol. 44, no. 4, pp. 695–729, 2005.
2. L. F. Barrett, “Discrete emotions or dimensions? The role of valence focus and arousal focus,” Cognition and Emotion, vol. 12, no. 4, pp. 579–599, 1998.
3. I. B. Mauss and M. D. Robinson, “Measures of emotion: a review,” Cognition and Emotion, vol. 23, no. 2, pp. 209–237, 2009.
4. M. L. Phillips, W. C. Drevets, S. L. Rauch, and R. Lane, “Neurobiology of emotion perception II: implications for major psychiatric disorders,” Biological Psychiatry, vol. 54, no. 5, pp. 515–528, 2003.
5. M. L. Phillips, W. C. Drevets, S. L. Rauch, and R. Lane, “Neurobiology of emotion perception I: the neural basis of normal emotion perception,” Biological Psychiatry, vol. 54, no. 5, pp. 504–514, 2003.
6. A. Luneski, E. Konstantinidis, and P. D. Bamidis, “Affective medicine: a review of affective computing efforts in medical informatics,” Methods of Information in Medicine, vol. 49, no. 3, pp. 207–218, 2010.
7. C. A. Frantzidis, C. Bratsas, C. L. Papadelis, E. Konstantinidis, C. Pappas, and P. D. Bamidis, “Toward emotion aware computing: an integrated approach using multichannel neurophysiological recordings and affective visual stimuli,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 589–597, 2010.
8. M. D. Robinson and G. L. Clore, “Belief and feeling: evidence for an accessibility model of emotional self-report,” Psychological Bulletin, vol. 128, no. 6, pp. 934–960, 2002.
9. B. N. Cuthbert, M. M. Bradley, and P. J. Lang, “Fear and anxiety: theoretical distinction and clinical test,” Psychophysiology, vol. 33, supplement 1, p. S15, 1996.
10. R. Adolphs, “Neural systems for recognizing emotion,” Current Opinion in Neurobiology, vol. 12, no. 2, pp. 169–177, 2002.
11. I. C. Christie and B. H. Friedman, “Autonomic specificity of discrete emotion and dimensions of affective space: a multivariate approach,” International Journal of Psychophysiology, vol. 51, no. 2, pp. 143–153, 2004.
12. J. Panksepp, “Neuro-psychoanalysis may enliven the mindbrain sciences,” Cortex, vol. 43, no. 8, pp. 1106–1107, 2007.
13. K. Vytal and S. Hamann, “Neuroimaging support for discrete neural correlates of basic emotions: a voxel-based meta-analysis,” Journal of Cognitive Neuroscience, vol. 22, no. 12, pp. 2864–2885, 2010.
14. P. Peyk, H. T. Schupp, T. Elbert, and M. Junghöfer, “Emotion processing in the visual brain: a MEG analysis,” Brain Topography, vol. 20, no. 4, pp. 205–215, 2008.
15. M. Hämäläinen, R. Hari, R. J. Ilmoniemi, J. Knuutila, and O. V. Lounasmaa, “Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain,” Reviews of Modern Physics, vol. 65, no. 2, pp. 413–497, 1993.
16. A. Ray and S. M. Bowyer, “Clinical applications of magnetoencephalography in epilepsy,” Annals of Indian Academy of Neurology, vol. 13, no. 1, pp. 14–22, 2010.
17. P. R. Davidson, R. D. Jones, and M. T. Peiris, “EEG-based lapse detection with high temporal resolution,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 5, pp. 832–839, 2007.
18. M. Balconi and C. Lucchiari, “EEG correlates (event-related desynchronization) of emotional face elaboration: a temporal analysis,” Neuroscience Letters, vol. 392, no. 1-2, pp. 118–123, 2006.
19. M. Balconi and U. Pozzoli, “Event-related oscillations (ERO) and event-related potentials (ERP) in emotional face recognition,” International Journal of Neuroscience, vol. 118, no. 10, pp. 1412–1424, 2008.
20. A. A. Beaton, N. C. Fouquet, N. C. Maycock, E. Platt, L. S. Payne, and A. Derrett, “Processing emotion from the eyes: a divided visual field and ERP study,” Laterality, vol. 17, no. 4, pp. 486–514, 2012.
21. M. Y. Bekkedal, J. Rossi III, and J. Panksepp, “Human brain EEG indices of emotions: delineating responses to affective vocalizations by measuring frontal theta event-related synchronization,” Neuroscience & Biobehavioral Reviews, vol. 35, no. 9, pp. 1959–1970, 2011.
22. E. I. Konstantinidis, C. A. Frantzidis, C. Pappas, and P. D. Bamidis, “Real time emotion aware applications: a case study employing emotion evocative pictures and neuro-physiological sensing enhanced by Graphic Processor Units,” Computer Methods and Programs in Biomedicine, vol. 107, no. 1, pp. 16–27, 2012.
23. O. Sourina and Y. Liu, “A fractal-based algorithm of emotion recognition from EEG using arousal-valence model,” in Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing (BIOSIGNALS '11), pp. 209–214, Rome, Italy, January 2011.
24. L. Brown, B. Grundlehner, and J. Penders, “Towards wireless emotional valence detection from EEG,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 2188–2191, 2011.
25. J. A. Russell, “Core affect and the psychological construction of emotion,” Psychological Review, vol. 110, no. 1, pp. 145–172, 2003.
26. E. S. Schaefer, “A circumplex model for maternal behavior,” Journal of Abnormal and Social Psychology, vol. 59, no. 2, pp. 226–235, 1959.
27. A. Mehrabian, Basic Dimensions for a General Psychological Theory: Implications for Personality, Social, Environmental, and Developmental Studies, Oelgeschlager, Gunn & Hain, Cambridge, UK, 1980.
28. J. R. Fontaine, K. R. Scherer, E. B. Roesch, and P. C. Ellsworth, “The world of emotions is not two-dimensional,” Psychological Science, vol. 18, no. 12, pp. 1050–1057, 2007.
29. S. Hamann, “Mapping discrete and dimensional emotions onto the brain: controversies and consensus,” Trends in Cognitive Sciences, vol. 16, no. 9, pp. 458–466, 2012.
30. D. C. Rubin and J. M. Talarico, “A comparison of dimensional models of emotion: evidence from emotions, prototypical events, autobiographical memories, and words,” Memory, vol. 17, no. 8, pp. 802–808, 2009.
31. J. A. Mikels, B. L. Fredrickson, G. R. Larkin, C. M. Lindberg, S. J. Maglio, and P. A. Reuter-Lorenz, “Emotional category data on images from the international affective picture system,” Behavior Research Methods, vol. 37, no. 4, pp. 626–630, 2005.
32. J. Redondo, I. Fraga, I. Padrón, and A. Piñeiro, “Affective ratings of sound stimuli,” Behavior Research Methods, vol. 40, no. 3, pp. 784–790, 2008.
33. R. L. Ribeiro, S. Pompéia, and O. F. Amodeo Bueno, “Comparison of Brazilian and American norms for the International Affective Picture System (IAPS),” Revista Brasileira de Psiquiatria, vol. 27, no. 3, pp. 208–215, 2005.
34. J. K. Olofsson, S. Nordin, H. Sequeira, and J. Polich, “Affective picture processing: an integrative review of ERP findings,” Biological Psychology, vol. 77, no. 3, pp. 247–265, 2008.
35. M. Codispoti, V. Ferrari, and M. M. Bradley, “Repetition and event-related potentials: distinguishing early and late processes in affective picture perception,” Journal of Cognitive Neuroscience, vol. 19, no. 4, pp. 577–586, 2007.
36. J. K. Olofsson and J. Polich, “Affective visual event-related potentials: arousal, repetition, and time-on-task,” Biological Psychology, vol. 75, no. 1, pp. 101–108, 2007.
37. L. R. R. Gianotti, P. L. Faber, M. Schuler, R. D. Pascual-Marqui, K. Kochi, and D. Lehmann, “First valence, then arousal: the temporal dynamics of brain electric activity evoked by emotional stimuli,” Brain Topography, vol. 20, no. 3, pp. 143–156, 2008.
38. E. Bernat, S. Bunce, and H. Shevrin, “Event-related brain potentials differentiate positive and negative mood adjectives during both supraliminal and subliminal visual processing,” International Journal of Psychophysiology, vol. 42, no. 1, pp. 11–34, 2001.
39. B. N. Cuthbert, H. T. Schupp, M. M. Bradley, N. Birbaumer, and P. J. Lang, “Brain potentials in affective picture processing: covariation with autonomic arousal and affective report,” Biological Psychology, vol. 52, no. 2, pp. 95–111, 2000.
40. R. Roschmann and W. Wittling, “Topographic brain mapping of emotion-related hemisphere asymmetries,” International Journal of Neuroscience, vol. 63, no. 1-2, pp. 5–16, 1992.
41. C. M. Yee and G. A. Miller, “Affective valence and information processing,” Electroencephalography and Clinical Neurophysiology, vol. 40, pp. 300–307, 1987.
42. B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K. R. Müller, “Single-trial analysis and classification of ERP components—a tutorial,” NeuroImage, vol. 56, no. 2, pp. 814–825, 2011.
43. L. Hu, M. Liang, A. Mouraux, R. G. Wise, Y. Hu, and G. D. Iannetti, “Taking into account latency, amplitude, and morphology: improved estimation of single-trial ERPs by wavelet filtering and multiple linear regression,” Journal of Neurophysiology, vol. 106, no. 6, pp. 3216–3229, 2011.
44. M. Ahmadi and R. Quian Quiroga, “Automatic denoising of single-trial evoked potentials,” NeuroImage, vol. 66, pp. 672–680, 2012.
45. K. Vanderperren, B. Mijovic, N. Novitskiy et al., “Single trial ERP reading based on parallel factor analysis,” Psychophysiology, vol. 50, no. 1, pp. 97–110, 2013.
46. D. Jarchi, S. Sanei, J. C. Principe, and B. Makkiabadi, “A new spatiotemporal filtering method for single-trial estimation of correlated ERP subcomponents,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 1, pp. 132–143, 2011.
47. M. Balconi and G. Mazza, “Brain oscillations and BIS/BAS (behavioral inhibition/activation system) effects on processing masked emotional cues. ERS/ERD and coherence measures of alpha band,” International Journal of Psychophysiology, vol. 74, no. 2, pp. 158–165, 2009.
48. R. J. Davidson, “Anterior cerebral asymmetry and the nature of emotion,” Brain and Cognition, vol. 20, no. 1, pp. 125–151, 1992.
49. I. H. Gotlib, C. Ranganath, and J. P. Rosenfeld, “Frontal EEG alpha asymmetry, depression, and cognitive functioning,” Cognition and Emotion, vol. 12, no. 3, pp. 449–478, 1998.
50. M. Balconi and C. Lucchiari, “Consciousness and arousal effects on emotional face processing as revealed by brain oscillations. A gamma band analysis,” International Journal of Psychophysiology, vol. 67, no. 1, pp. 41–46, 2008.
51. A. Keil, M. M. Müller, T. Gruber, C. Wienbruch, M. Stolarova, and T. Elbert, “Effects of emotional arousal in the cerebral hemispheres: a study of oscillatory brain activity and event-related potentials,” Clinical Neurophysiology, vol. 112, no. 11, pp. 2057–2068, 2001.
52. M. M. Müller, A. Keil, T. Gruber, and T. Elbert, “Processing of affective pictures modulates right-hemispheric gamma band EEG activity,” Clinical Neurophysiology, vol. 110, no. 11, pp. 1913–1920, 1999.
53. L. I. Aftanas, N. V. Reva, A. A. Varlamov, S. V. Pavlov, and V. P. Makhnev, “Analysis of evoked EEG synchronization and desynchronization in conditions of emotional activation in humans: temporal and topographic characteristics,” Neuroscience and Behavioral Physiology, vol. 34, no. 8, pp. 859–867, 2004.
54. L. I. Aftanas, A. A. Varlamov, S. V. Pavlov, V. P. Makhnev, and N. V. Reva, “Affective picture processing: event-related synchronization within individually defined human theta band is modulated by valence dimension,” Neuroscience Letters, vol. 303, no. 2, pp. 115–118, 2001.
55. D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, “Music and emotion: electrophysiological correlates of the processing of pleasant and unpleasant music,” Psychophysiology, vol. 44, no. 2, pp. 293–304, 2007.
56. M. Wyczesany, S. J. Grzybowski, R. J. Barry, J. Kaiser, A. M. L. Coenen, and A. Potoczek, “Covariation of EEG synchronization and emotional state as modified by anxiolytics,” Journal of Clinical Neurophysiology, vol. 28, no. 3, pp. 289–296, 2011.
57. V. Miskovic and L. A. Schmidt, “Cross-regional cortical synchronization during affective image viewing,” Brain Research, vol. 1362, pp. 102–111, 2010.
58. N. Martini, D. Menicucci, L. Sebastiani et al., “The dynamics of EEG gamma responses to unpleasant visual stimuli: from local activity to functional connectivity,” NeuroImage, vol. 60, no. 2, pp. 922–932, 2012.
59. J. Fell and N. Axmacher, “The role of phase synchronization in memory processes,” Nature Reviews Neuroscience, vol. 12, no. 2, pp. 105–118, 2011.
60. P. Sauseng and W. Klimesch, “What does phase information of oscillatory brain activity tell us about cognitive processes?” Neuroscience and Biobehavioral Reviews, vol. 32, no. 5, pp. 1001–1013, 2008.
61. P. Fries, “Neuronal gamma-band synchronization as a fundamental process in cortical computation,” Annual Review of Neuroscience, vol. 32, pp. 209–224, 2009.
62. S. Pockett and M. D. Holmes, “Intracranial EEG power spectra and phase synchrony during consciousness and unconsciousness,” Consciousness and Cognition, vol. 18, no. 4, pp. 1049–1055, 2009.
63. L. Melloni, C. Molina, M. Pena, D. Torres, W. Singer, and E. Rodriguez, “Synchronization of neural activity across cortical areas correlates with conscious perception,” Journal of Neuroscience, vol. 27, no. 11, pp. 2858–2865, 2007.
64. S. M. Doesburg, A. B. Roggeveen, K. Kitajo, and L. M. Ward, “Large-scale gamma-band phase synchronization and selective attention,” Cerebral Cortex, vol. 18, no. 2, pp. 386–396, 2008.
65. A. J. Tomarken, R. J. Davidson, and J. B. Henriques, “Resting frontal brain asymmetry predicts affective responses to films,” Journal of Personality and Social Psychology, vol. 59, no. 4, pp. 791–801, 1990.
66. B. Güntekin and E. Basar, “Emotional face expressions are differentiated with brain oscillations,” International Journal of Psychophysiology, vol. 64, no. 1, pp. 91–100, 2007.
67. E. Gross, A. S. El-Baz, G. E. Sokhadze, L. Sears, M. F. Casanova, and E. M. Sokhadze, “Induced EEG gamma oscillation alignment improves differentiation between autism and ADHD group responses in a facial categorization task,” Journal of Neurotherapy, vol. 16, no. 2, pp. 78–91, 2012.
68. M. Murugappan, R. Nagarajan, and S. Yaacob, “Combining spatial filtering and wavelet transform for classifying human emotions using EEG signals,” Journal of Medical and Biological Engineering, vol. 31, no. 1, pp. 45–51, 2011.
69. P. C. Petrantonakis and L. J. Hadjileontiadis, “A novel emotion elicitation index using frontal brain asymmetry for enhanced EEG-based emotion recognition,” IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 5, pp. 737–746, 2011.
70. P. C. Petrantonakis and L. J. Hadjileontiadis, “Emotion recognition from EEG using higher order crossings,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186–197, 2010.
71. C. Bandt, M. Weymar, D. Samaga, and A. O. Hamm, “A simple classification tool for single-trial analysis of ERP components,” Psychophysiology, vol. 46, no. 4, pp. 747–757, 2009.
72. R. L. Horst and E. Donchin, “Beyond averaging. II. Single-trial classification of exogenous event-related potentials using stepwise discriminant analysis,” Electroencephalography and Clinical Neurophysiology, vol. 48, no. 2, pp. 113–126, 1980.
73. V. Kolodyazhniy, S. D. Kreibig, J. J. Gross, W. T. Roth, and F. H. Wilhelm, “An affective computing approach to physiological emotion specificity: toward subject-independent and stimulus-independent classification of film-induced emotions,” Psychophysiology, vol. 48, no. 7, pp. 908–922, 2011.
74. P. Poolman, R. M. Frank, P. Luu, S. M. Pederson, and D. M. Tucker, “A single-trial analytic framework for EEG analysis and its application to target detection and classification,” NeuroImage, vol. 42, no. 2, pp. 787–798, 2008.
75. C. Wang, S. Xiong, X. Hu, L. Yao, and J. Zhang, “Combining features from ERP components in single-trial EEG for discriminating four-category visual objects,” Journal of Neural Engineering, vol. 9, no. 5, Article ID 056013, 2012.
76. Y. Zhang, Q. Zhao, J. Jin, X. Wang, and A. Cichocki, “A novel BCI based on ERP components sensitive to configural processing of human faces,” Journal of Neural Engineering, vol. 9, no. 2, Article ID 026018, 2012.
77. I. Guler and E. D. Ubeyli, “Multiclass support vector machines for EEG-signals classification,” IEEE Transactions on Information Technology in Biomedicine, vol. 11, no. 2, pp. 117–126, 2007.
78. K. Gouizi, F. Bereksi Reguig, and C. Maaoui, “Emotion recognition from physiological signals,” Journal of Medical Engineering & Technology, vol. 35, no. 6-7, pp. 300–307, 2011.
79. I. Martisius, R. Damasevicius, V. Jusas, and D. Birvinskas, “Using higher order nonlinear operators for SVM classification of EEG data,” Elektronika ir Elektrotechnika, vol. 119, no. 3, pp. 99–102, 2012.
80. L. Guo, Y. Wu, L. Zhao, T. Cao, W. Yan, and X. Shen, “Classification of mental task from EEG signals using immune feature weighted support vector machines,” IEEE Transactions on Magnetics, vol. 47, no. 5, pp. 866–869, 2011.
81. M. Murugappan, R. Nagarajan, and S. Yaacob, “Combining spatial filtering and wavelet transform for classifying human emotions using EEG signals,” Journal of Medical and Biological Engineering, vol. 31, no. 1, pp. 45–51, 2010.
82. M. R. Hassan, M. M. Hossain, J. Bailey, and K. Ramamohanarao, “Improving k-nearest neighbour classification with distance functions based on receiver operating characteristics,” in Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases, Part I, pp. 489–504, 2008.
83. F. Babiloni, L. Bianchi, F. Semeraro et al., “Mahalanobis distance-based classifiers are able to recognize EEG patterns by using few EEG electrodes,” in Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 651–654, October 2001.
84. P. C. Mahalanobis, “On the generalized distance in statistics,” Proceedings of the National Institute of Sciences of India, vol. 2, no. 1, pp. 49–55, 1936.
85. S. M. M. Martens and J. M. Leiva, “A generative model approach for decoding in the visual event-related potential-based brain-computer interface speller,” Journal of Neural Engineering, vol. 7, no. 2, Article ID 026003, 2010.
86. M. A. J. van Gerven, F. P. de Lange, and T. Heskes, “Neural decoding with hierarchical generative models,” Neural Computation, vol. 22, no. 12, pp. 3127–3142, 2010.
87. K. H. Brodersen, T. M. Schofield, A. P. Leff et al., “Generative embedding for model-based classification of fMRI data,” PLOS Computational Biology, vol. 7, no. 6, Article ID e1002079, 2011.
88. C. Kemere, K. V. Shenoy, and T. H. Meng, “Model-based neural decoding of reaching movements: a maximum likelihood approach,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 925–932, 2004.
89. H. B. Mitchell, “Sequential Bayesian inference,” in Data Fusion: Concepts and Ideas, pp. 239–272, Springer, 2012.
90. S. Wu, D. Chen, M. Niranjan, and S. I. Amari, “Sequential Bayesian decoding with a population of neurons,” Neural Computation, vol. 15, no. 5, pp. 993–1012, 2003.
91. J. W. Yoon, S. J. Roberts, M. Dyson, and J. Q. Gan, “Adaptive classification for brain computer interface systems using sequential Monte Carlo sampling,” Neural Networks, vol. 22, no. 9, pp. 1286–1294, 2009.
92. Y. Wang, A. R. Paiva, J. C. Príncipe, and J. C. Sanchez, “Sequential Monte Carlo point-process estimation of kinematics from neural spiking activity for brain-machine interfaces,” Neural Computation, vol. 21, no. 10, pp. 2894–2930, 2009.
93. A. Sorrentino, L. Parkkonen, A. Pascarella, C. Campi, and M. Piana, “Dynamical MEG source modeling with multi-target Bayesian filtering,” Human Brain Mapping, vol. 30, no. 6, pp. 1911–1921, 2009.
94. T. D. Sanger, “Bayesian filtering of myoelectric signals,” Journal of Neurophysiology, vol. 97, no. 2, pp. 1839–1845, 2007.
95. D. Nie, X. W. Wang, L. C. Shi, and B. L. Lu, “EEG-based emotion recognition during watching movies,” in Proceedings of the 5th International IEEE/EMBS Conference on Neural Engineering (NER '11), pp. 667–670, Cancun, Mexico, May 2011.
96. E. Cutrell and D. Tan, “BCI for passive input in HCI,” in Proceedings of the ACM CHI Conference on Human Factors in Computing Systems Workshop on Brain-Computer Interfaces for HCI and Games, 2007.
97. J. F. Gao, Y. Yang, P. Lin, P. Wang, and C. X. Zheng, “Automatic removal of eye-movement and blink artifacts from EEG signals,” Brain Topography, vol. 23, no. 1, pp. 105–114, 2010.
98. C. A. Joyce, I. F. Gorodnitsky, and M. Kutas, “Automatic removal of eye movement and blink artifacts from EEG data using blind component separation,” Psychophysiology, vol. 41, no. 2, pp. 313–325, 2004.
99. S. Romero, M. A. Mañanas, and M. J. Barbanoj, “A comparative study of automatic techniques for ocular artifact reduction in spontaneous EEG signals based on clinical target variables: a simulation case,” Computers in Biology and Medicine, vol. 38, no. 3, pp. 348–360, 2008.
100. Y. M. Chi, Y. T. Wang, Y. Wang, C. Maier, T. P. Jung, and G. Cauwenberghs, “Dry and noncontact EEG sensors for mobile brain-computer interfaces,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 2, pp. 228–235, 2012.
101. L. D. Liao, I. J. Wang, S. F. Chen, J. Y. Chang, and C. T. Lin, “Design, fabrication and experimental validation of a novel dry-contact sensor for measuring electroencephalography signals without skin preparation,” Sensors, vol. 11, no. 6, pp. 5819–5834, 2011.
102. Y. M. Chi, T. P. Jung, and G. Cauwenberghs, “Dry-contact and noncontact biopotential electrodes: methodological review,” IEEE Reviews in Biomedical Engineering, vol. 3, pp. 106–119, 2010.
103. C. T. Lin, L. D. Liao, Y. H. Liu, I. J. Wang, B. S. Lin, and J. Y. Chang, “Novel dry polymer foam electrodes for long-term EEG measurement,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1200–1207, 2011.
104. A. H. Kemp, M. A. Gray, P. Eide, R. B. Silberstein, and P. J. Nathan, “Steady-state visually evoked potential topography during processing of emotional valence in healthy subjects,” NeuroImage, vol. 17, no. 4, pp. 1684–1692, 2002.
105. O. Pollatos, W. Kirsch, and R. Schandry, “On the relationship between interoceptive awareness, emotional experience, and brain processes,” Cognitive Brain Research, vol. 25, no. 3, pp. 948–962, 2005.
106. T. Baumgartner, M. Esslen, and L. Jäncke, “From emotion perception to emotion experience: emotions evoked by pictures and classical music,” International Journal of Psychophysiology, vol. 60, no. 1, pp. 34–43, 2006.
107. M. Li and B. L. Lu, “Emotion classification based on gamma-band EEG,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 1323–1326, 2009.
108. C. Lithari, C. A. Frantzidis, C. Papadelis et al., “Are females more responsive to emotional stimuli? A neurophysiological study across arousal and valence dimensions,” Brain Topography, vol. 23, no. 1, pp. 27–40, 2010.
109. K. S. Park, H. Choi, K. J. Lee, J. Y. Lee, K. O. An, and E. J. Kim, “Emotion recognition based on the asymmetric left and right activation,” International Journal of Medicine and Medical Sciences, vol. 3, no. 6, pp. 201–209, 2011.
110. R. Degabriele, J. Lagopoulos, and G. Malhi, “Neural correlates of emotional face processing in bipolar disorder: an event-related potential study,” Journal of Affective Disorders, vol. 133, no. 1-2, pp. 212–220, 2011.
111. Y. P. Lin, C. H. Wang, T. P. Jung et al., “EEG-based emotion recognition in music listening,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 7, pp. 1798–1806, 2010.
112. S. A. Hosseini, M. A. Khalilzadeh, M. B. Naghibi-Sistani, and V. Niazmand, “Higher order spectra analysis of EEG signals in emotional stress states,” in Proceedings of the 2nd International Conference on Information Technology and Computer Science (ITCS '10), pp. 60–63, Kiev, Ukraine, July 2010.
113. X. W. Wang, D. Nie, and B. L. Lu, “EEG-based emotion recognition using frequency domain features and support vector machines,” in Neural Information Processing, vol. 7062 of Lecture Notes in Computer Science, pp. 734–743, 2011.
114. S. K. Hadjidimitriou and L. J. Hadjileontiadis, “Toward an EEG-based recognition of music liking using time-frequency analysis,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3498–3510, 2012.