Computational Intelligence and Neuroscience


Review Article | Open Access


Nazmi Sofian Suhaimi, James Mountstephens, Jason Teo, "EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities", Computational Intelligence and Neuroscience, vol. 2020, Article ID 8875426, 19 pages, 2020. https://doi.org/10.1155/2020/8875426

EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities

Academic Editor: Silvia Conforto
Received: 30 Apr 2020
Revised: 30 Jul 2020
Accepted: 28 Aug 2020
Published: 16 Sep 2020

Abstract

Emotions are fundamental to human beings and play an important role in human cognition. Emotion is commonly associated with logical decision making, perception, human interaction, and, to a certain extent, human intelligence itself. With the growing interest of the research community in establishing meaningful “emotional” interactions between humans and computers, reliable and deployable solutions for identifying human emotional states are required. Recent developments in using electroencephalography (EEG) for emotion recognition have garnered strong interest from the research community, as the latest consumer-grade wearable EEG solutions can provide a cheap, portable, and simple means of identifying emotions. Since the last comprehensive review covered the years 2009 to 2016, this paper updates the current progress of emotion recognition using EEG signals from 2016 to 2019. This state-of-the-art review focuses on emotion stimulus type and presentation approach, study size, EEG hardware, machine learning classifiers, and classification approach. From this review, we suggest several future research opportunities, including a different approach to presenting stimuli in the form of virtual reality (VR). To this end, an additional section devoted specifically to reviewing only VR studies within this research domain is presented as motivation for this proposed new approach using VR as the stimulus presentation device. This review paper is intended to be useful for the research community working on emotion recognition using EEG signals as well as for those venturing into this field of research.

1. Introduction

Although human emotional experience plays a central part in our daily lives, our scientific knowledge of human emotions is still very limited. Progress in the affective sciences is crucial for the development of human psychology and its application for the benefit of society. Integrating machines that can recognize emotions would improve productivity and reduce expenditure in many ways [1]. For example, in education, whether students find the teaching materials engaging or nonengaging could be detected from their mental state. Medical doctors would be able to assess their patients’ mental conditions and provide more constructive feedback to improve their health. The military would be able to train trainees in simulated environments while assessing their mental condition in combat situations.

A person’s emotional state may become apparent through subjective experiences and internal and external expressions. Self-evaluation reports such as the Self-Assessment Manikin (SAM) [2] are commonly used for evaluating a person’s mental state by measuring three independent and bipolar dimensions [3], presented visually as images depicting pleasure-displeasure, degree of arousal, and dominance-submissiveness. This method provides an alternative to the sometimes more difficult psychological evaluation of a patient by a medical professional, which requires thorough training and experience to understand the patient’s mental health condition. However, the validity of the information provided by the patient in a SAM report is unreliable, given that many people have difficulty expressing themselves honestly or lack knowledge of, or insight into, their own mental state. SAM is also not feasible for young children or the elderly due to limited literacy skills [4]. Physiological signals transported throughout the human body, by contrast, can provide health information directly from patients to medical professionals, allowing their conditions to be evaluated almost immediately. The human brain produces immense volumes of neural signals that manage all functionalities of the body, and it stores the emotional experiences gathered throughout a lifetime. By tapping directly into the brainwave signals, we can examine the emotional responses of a person when exposed to certain environments. This information can help establish whether a person is mentally healthy or may be suffering from mental illness.

The architectural design and cost of EEG headsets differ considerably. The type of electrodes used to collect the brainwave signals affects both the signal quality and the setup duration [5–7]. Headsets also differ in the number of electrodes placed across the scalp, and their resolution depends on build quality and technological accessibility [8–10]. Due to the sensitivity of the electrodes, users are required to remain very still once brainwave collection begins; any small body or head movement may detach an electrode from the scalp, requiring it to be reattached, which wastes time and materials. Any hair strands at the electrode sites must be moved aside to obtain a proper connection, so people with large hair volumes face difficulty. Artefacts are noises produced by muscle movements such as eye blinking, jaw clenching, and muscle twitches, which are picked up by the electrodes [11–14]. Furthermore, external interferences such as audio noise or the sense of touch may also introduce artefacts into the brainwave signals during collection, and these artefacts need to be removed using filtering algorithms [15–20]. Finally, the brainwave signals are transformed from the time domain to the frequency domain using the fast Fourier transform (FFT) [21] so that the specific brainwave bands can be assessed for emotion recognition with machine learning algorithms.
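As a concrete sketch of this last step, the snippet below (illustrative only; it assumes a synthetic single-channel signal sampled at 256 Hz, with band edges adapted from the ranges in Table 2) uses the FFT to move a signal into the frequency domain and average power per band:

```python
import numpy as np

# Band edges in Hz, adapted from the delta/theta/alpha/beta/gamma
# ranges given in Table 2 (exact cutoffs vary across studies).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 16),
         "beta": (16, 32), "gamma": (32, 64)}

def band_powers(signal, fs):
    """Mean spectral power per EEG band, via the real-input FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Sanity check on a synthetic 10 Hz sine: its power should land
# squarely in the alpha band.
fs = 256                         # assumed sampling rate in Hz
t = np.arange(0, 2, 1.0 / fs)    # 2 seconds of signal
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
assert max(powers, key=powers.get) == "alpha"
```

In practice, the raw trace would first be artefact-filtered as described above, and the per-band powers would then serve as input features for a classifier.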

Since the last comprehensive review of emotion recognition was published by Alarcao and Fonseca [22], this review paper serves as an update to that work. The paper is organized as follows: Section 2 describes the methodology of this review, which is based on specific keyword searches. Section 3 covers the definition of emotion, EEG, brainwave bands, general EEG electrode positions, a comparison between clinical and low-cost wearable EEG headsets, emotions in the brain, and virtual reality (VR). Section 4 reviews past studies of emotion classification by comparing the types of stimulus, emotion classes, dataset availability, common EEG headsets used for emotion recognition, common machine learning algorithms and their performance, and the participants involved. Section 5 provides a discussion, and finally, Section 6 concludes the study.

2. Methodology

The approach adopted in this state-of-the-art review first performs queries on the three most commonly accessed scholarly search engines and databases, namely, Google Scholar, IEEE Xplore, and ScienceDirect, to collect papers for review using the keywords “Electroencephalography” or “EEG” + “Emotion” + “Recognition” or “Classification” or “Detection”, with the publication year ranging only from 2016 to 2019. The resulting papers were then carefully vetted and reviewed so that similar and incremental works from the same authors were removed, leaving only distinctly significant novel contributions to EEG-based emotion recognition.

2.1. State of the Art

In the following sections, the paper introduces the definitions and representations of emotions as well as some characteristics of EEG signals, giving background context for the reader to understand the field of EEG-based emotion recognition.

3. Emotions

Affective neuroscience aims to elucidate the neural networks underlying emotional processes and their consequences for physiology, cognition, and behavior [23–25]. The field has historically centered on defining the universal human emotions and their somatic markers [26], clarifying the cause of the emotional process, and determining the role of the body and interoception in feelings and emotions [27]. In affective neuroscience, the concept of emotions can be differentiated from various constructs such as feelings, moods, and affects. Feelings can be viewed as a personal experience associated with a given emotion. Moods are diffuse affective states that generally last longer than emotions and are less intense. Lastly, affect is an encompassing term that describes emotions, feelings, and moods altogether [22].

Emotions play an adaptive, social, or motivational role in the life of human beings, producing different characteristics indicative of human behavior [28]. Emotions affect decision making, perception, human interactions, and human intelligence, and they also affect humans physiologically and psychologically [29]. Emotions can be expressed through positive and negative representations and, through these, can affect human health as well as work efficiency [30].

Three components influence the psychological behavior of a human: personal experience, physiological response, and behavioral or expressive response [31, 32]. Emotions can be described as responses to discrete or consistent events of significance for the organism [33], which are brief in duration and correspond to a coordinated set of responses.

To better grasp the kinds of emotions that are expressed daily, emotions can be viewed from a categorical or a dimensional perspective. The categorical perspective revolves around the idea of basic emotions imprinted in our human physiology. Ekman [34] states that basic emotions have certain characteristics: (1) humans are born with emotions that are not learned; (2) humans exhibit the same emotions in the same situation; (3) humans express these emotions in a similar way; and (4) humans show similar physiological patterns when expressing the same emotions. Through these characteristics, Ekman summarized the six basic emotions of happiness, sadness, anger, fear, surprise, and disgust, and he viewed the remaining emotions as byproducts of reactions to and combinations of the basic emotions. Plutchik [35] proposes eight basic emotions described in a wheel model: joy, trust, fear, surprise, sadness, disgust, anger, and anticipation. Izard (Izard, 2007; Izard, 2009) describes that (1) basic emotions were formed in the course of human evolution and (2) each basic emotion corresponds to a simple brain circuit with no complex cognitive component involved. He then proposed ten basic emotions: interest, joy, surprise, sadness, fear, shyness, guilt, anger, disgust, and contempt. From the dimensional perspective, on the other hand, emotions are mapped onto valence, arousal, and dominance. Valence is measured from positive to negative, while arousal and dominance are each measured from high to low [38, 39].
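The dimensional perspective lends itself to a simple computational reading: a (valence, arousal) pair selects a quadrant of the two-dimensional space. The sketch below is illustrative only; the quadrant labels are shorthand of our own choosing, not taken from any of the cited models:

```python
def quadrant_label(valence, arousal):
    """Map valence/arousal scores in [-1, 1] to a coarse quadrant label.

    The labels are illustrative shorthand for the four quadrants of the
    valence-arousal plane, not a standardized taxonomy.
    """
    if valence >= 0:
        return "happy/excited" if arousal >= 0 else "calm/content"
    return "angry/afraid" if arousal >= 0 else "sad/bored"

# High valence + high arousal falls in the "happy/excited" quadrant.
print(quadrant_label(0.8, 0.6))   # -> happy/excited
```

Many of the studies reviewed in Section 4 report their results directly in valence/arousal terms rather than as categorical labels, making such a mapping a common post hoc step.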

Understanding emotional signals in everyday environments is an important aspect of people’s communication through verbal and nonverbal behavior [40]. One example of an emotional signal is facial expression, which is known to be one of the most immediate means for human beings to communicate their emotions and intentions [41]. With the advancement of technologies in brain-computer interfaces and neuroimaging, it is now feasible to capture brainwave signals nonintrusively and to measure or control the motion of devices virtually [42] or physically, such as wheelchairs [43], mobile phone interfaces [44], or prosthetic arms [45, 46], using a wearable EEG headset. Currently, artificial intelligence and machine learning are being actively developed and researched for adaptation to newer applications. Such applications include the neuroinformatics field, which studies emotion classification by collecting brainwave signals and classifying them with machine learning algorithms. This would help improve human-computer interactions to meet human needs [47].

3.1. The Importance of EEG for Use in Emotion Classification

EEG is considered a physiological cue reflecting the electrical activity of clusters of neural cells across the human cerebral cortex. EEG is used to record such activity and is reliable for emotion recognition due to its relatively objective evaluation of emotion compared with nonphysiological cues (facial expression, gesture, etc.) [48, 49]. Works have described that EEG contains comprehensive features, such as the power spectral bands, that can be utilized for basic emotion classification [50]. Three structures in the limbic system, shown in Figure 1, are heavily implicated in emotion and memory: the hypothalamus, amygdala, and hippocampus. The hypothalamus handles the emotional reaction, while the amygdala handles external stimuli, processing emotional information from the recognition of situations as well as the analysis of potential threats. Studies have suggested that the amygdala is the biological basis of emotions that stores fear and anxiety [51–53]. Finally, the hippocampus integrates emotional experience with cognition.

3.2. Electrode Positions for EEG

To be able to replicate and record EEG readings, there is a standardized procedure for the placement of electrodes across the skull, and these placements usually conform to the 10–20 international system [54, 55]. The “10” and “20” refer to the actual distances between adjacent electrodes being either 10% or 20% of the total front-to-back or right-to-left distance of the skull. Additional electrodes can be placed on any of the existing empty locations. Figure 2 shows the electrode positions placed according to the 10–20 international system.

Depending on the architectural design of the EEG headset, the positions of the EEG electrodes may differ slightly from the standard 10–20 international system. However, these low-cost EEG headsets usually have electrodes positioned at the frontal lobe, as can be seen in Figures 3 and 4. EEG headsets with a higher number of channels add electrodes over the temporal, parietal, and occipital lobes, such as the 14-channel Emotiv EPOC+ and the Ultracortex Mark IV. Both of these EEG headsets transmit data wirelessly and therefore have no lengthy wires dangling around the body, which makes them portable and easy to set up. Furthermore, companies such as OpenBCI provide 3D-printable designs and hardware configurations for their EEG headsets, allowing unlimited customization of the headset configuration.

3.3. Clinical-Grade EEG Headset vs. Wearable Low-Cost EEG Headset

Previously, invasive electrodes were used to record brain signals by penetrating through the skin and into the brain, but technological improvements have made it possible for the electrical activity of the brain to be recorded using noninvasive electrodes placed along the scalp. EEG devices focus on event-related potentials (stimulus onset) or the spectral content (neural oscillations) of EEG. They can be used to diagnose epilepsy, sleep disorders, encephalopathies (brain damage or malfunction), and other brain disorders such as brain death, stroke, or brain tumors. EEG diagnostics can help doctors identify medical conditions and appropriate injury treatments to mitigate long-term effects.

EEG has advantages over other techniques because of its ease of use for providing immediate medical care in high-traffic hospitals and its lower hardware cost compared with magnetoencephalography. In addition, EEG does not aggravate claustrophobia in patients and can be used with patients who cannot respond, cannot make a motor response, or cannot attend to a stimulus; moreover, EEG can elucidate stages of processing rather than just the final end result.

Medical-grade EEG devices have channels ranging between 16 and 32 per headset, or more depending on the manufacturer [58], and they have amplifier modules connected to the electrodes to amplify the brainwave signals, as can be seen in Figure 5. The EEG devices used in clinics help diagnose and characterize symptoms obtained from the patient, and these data are then interpreted by a registered medical officer for medical intervention [60, 61]. Obeid and Picone [62] conducted a study in which clinical EEG data stored in secure archives were collected and made publicly available; this also helps establish best practices for the curation and publication of clinical signal data. Table 1 shows the current EEG market and the pricing of products available for purchase. However, vendors in the middle-cost range do not disclose the cost of their EEG headsets, most likely due to the sensitivity of market pricing or because clients must order to their own specifications, unlike the low-cost vendors, which disclose their prices.


Product tier | Product | Channel positions | Sampling rate | Electrodes | Cost

Low-cost range (USD 99–USD 1,000) | Emotiv EPOC+ | AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4 | 32 Hz–64 Hz | 14 | USD 799.00
 | NeuroSky MindWave | FP1 | 512 Hz | 1 | USD 99.00
 | Ultracortex “Mark IV” EEG headset | FP2, FP1, C4, C3, P8, P7, O2, O1 | 128 Hz | 8–16 | USD 349.99
 | Interaxon Muse | AF7, AF8, TP9, TP10 | 256 Hz | 4 | USD 250.00

Middle-cost range (USD 1,000–USD 25,000) | B-Alert X Series | Fz, F3, F4, Cz, C3, C4, P3, P4, POz | 256 Hz | 10 | (Undisclosed)
 | ANT-Neuro eego rt | AF7, AF3, AF4, AF8, F5, F1, F2, F6, FT7, FC3, FCz, FC4, FT8, C5, C1, C2, C6, TP7, CP3, CPz, CP4, TP8, P5, P1, P2, P6, PO7, PO5, PO3, PO4, PO6, PO8 | 2048 Hz | 64 | (Undisclosed)

A low-cost, consumer-grade wearable EEG device has channels ranging from 2 to 14 [58]. As seen in Figure 6, the ease of setting up a low-cost, consumer-grade wearable EEG headset provides comfort and reduces the complexity of fitting the device on the user’s scalp, which is important for both researchers and users [63]. Even with their lower performance, wearable low-cost EEG devices are much more affordable than standard clinical-grade EEG amplifiers [64]. Interestingly, a supposedly lower-performance EEG headset can outperform a medical-grade EEG system despite having fewer electrodes [65]. Wearable low-cost EEG systems can also detect artefacts such as eye blinking, jaw clenches, muscle movements, and power-line noise, which can be filtered out during preprocessing [66]. Brain activity recorded with a wireless portable EEG headset can also drive imagined directional inputs or hand movements from a user, and such setups have been shown to perform better than medical-grade EEG headsets [67–70].
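The low-cost tier of Table 1 can be encoded for quick programmatic comparison. The figures below are transcribed from the table (the Ultracortex “Mark IV” is entered at its minimum of 8 electrodes); the helper itself is only an illustrative sketch:

```python
# Low-cost headsets from Table 1: electrode counts and USD prices.
HEADSETS = {
    "Emotiv EPOC+":        {"electrodes": 14, "cost": 799.00},
    "NeuroSky MindWave":   {"electrodes": 1,  "cost": 99.00},
    "Ultracortex Mark IV": {"electrodes": 8,  "cost": 349.99},  # up to 16
    "Interaxon Muse":      {"electrodes": 4,  "cost": 250.00},
}

def cheapest_with(min_electrodes):
    """Cheapest listed headset with at least the given electrode count."""
    eligible = {name: spec for name, spec in HEADSETS.items()
                if spec["electrodes"] >= min_electrodes}
    if not eligible:
        return None
    return min(eligible, key=lambda name: eligible[name]["cost"])

print(cheapest_with(8))   # -> Ultracortex Mark IV
```

A query like `cheapest_with(4)` reflects the trade-off discussed above: more electrodes generally cost more, but the cheapest option meeting a channel requirement is not always the one with the fewest channels.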

3.4. Emotions in the Brain

In recent developments, a high number of neurophysiological studies have reported correlations between EEG signals and emotions. The two main areas of the brain correlated with emotional activity are the amygdala and the frontal lobe. Studies showed that the frontal scalp seems to store more emotional activation than other regions of the brain, such as the temporal, parietal, and occipital lobes [71].

In a study regarding music video excerpts, it was observed that higher frequency bands such as gamma were detected more prominently when subjects were listening to unfamiliar songs [72]. Other studies have observed that high-frequency bands such as alpha, beta, and gamma are more effective for classifying emotions in both valence and arousal dimensions [71, 73] (Table 2).


Band name | Frequency band (Hz) | Functions

Delta | <4 | Usually associated with the unconscious mind; occurs in deep sleep
Theta | 4–7 | Usually associated with the subconscious mind; occurs in sleeping and dreaming
Alpha | 8–15 | Usually associated with a relaxed yet aware mental state; correlated with brain activation
Beta | 16–31 | Usually associated with an active mind state; occurs during intense, focused mental activity
Gamma | >32 | Usually associated with intense brain activity

Previous studies have suggested that men and women process emotional stimuli differently: men appear to evaluate current emotional experiences by relying on the recall of past emotional experiences, whereas women seem to engage directly with the present, immediate stimuli [74]. There is also some evidence that women share more similar EEG patterns among themselves when emotions are evoked, while men show more individual differences in their EEG patterns [75].

In summary, the frontal and parietal lobes seem to store the most information about emotional states, while alpha, gamma, and beta waves appear to be most discriminative.

3.5. What Is Virtual Reality (VR)?

VR is an emerging technology capable of creating amazingly realistic environments and able to reproduce and capture real-life scenarios. With its accessibility and flexibility, the potential to adapt this technology across industries is vast. For instance, using VR as a platform to train fresh graduates in soft skills for job interviews can better prepare them for real-life situations [76]. There are also applications in which moods are tracked from emotional levels while viewing movies, building a database for movie recommendations [77]. It is also possible to improve the social skills of children with autism spectrum disorder (ASD) using virtual reality [78]. To track the emotional responses of each person, it is now feasible to use a low-cost, wireless, wearable EEG to record brainwave signals and then evaluate the person’s mental state from the acquired signals.

The term VR is used by many different people with many meanings. Some refer to the technology as a collection of different devices: a head-mounted device (HMD), a glove input device, and audio [79]. The first idea of a virtual world was presented by Ivan Sutherland in 1965, who was quoted as saying: “make that (virtual) world in the window look real, sound real, feel real and respond realistically to the viewer’s actions” [80]. Afterward, the first VR hardware was realized with the very first HMD with appropriate head tracking, offering a stereo view that updated correctly according to the user’s head position and orientation [81].

According to a study conducted by Milgram and Kishino [82], mixed reality is a convergence of interaction between the real world and the virtual world. The term mixed reality is also used interchangeably with augmented reality (AR), with AR being the more common term nowadays. AR is the incorporation of virtual computer graphics objects into a real three-dimensional scene, or alternatively the inclusion of real-world environment elements into a virtual environment [83]. The rise of personal mobile devices [84], especially in 2010, accelerated the growth of AR applications in many areas such as tourism, medicine, industry, and education, and the technology has been met with overwhelmingly positive responses [84–87].

VR technology opens up many new possibilities for innovation in areas such as healthcare [88], the military [89, 90], and education [91].

4. Examining Previous Studies

In the following section, the papers obtained between 2016 and 2019 are analyzed and categorized according to the findings in tables. Each of the findings is discussed thoroughly by comparing the stimulus types presented, the elapsed time of stimulus presentation, the classes of emotions used for assessment, their frequency of usage, the types of wearable EEG headsets used for brainwave collection and their costs, the popularity of machine learning algorithms, intra- versus intersubject variability assessments, and the number of participants involved in the emotion classification experiments.

4.1. Examining the Stimulus Presented

Recent papers collected from the years 2016 to 2019 show that the common approaches to stimulating users’ emotional experiences were music, music videos, pictures, video clips, and VR. Of the five stimuli, VR (31.03%) was the most commonly used for emotion classification, followed by music (24.14%), music videos and video clips (both at 20.69%), and pictures (3.45%), as can be observed in Table 3.
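These shares can be reproduced from the underlying paper counts. The counts below are inferred from the stated percentages over the 29 reviewed papers (9/29 ≈ 31.03%, and so on), not taken from a published tally:

```python
from collections import Counter

# Stimulus usage counts inferred from the percentages quoted above
# (29 papers in total).
counts = Counter({"VR": 9, "music": 7, "music videos": 6,
                  "video clips": 6, "pictures": 1})
total = sum(counts.values())

# Percentage share of each stimulus type, rounded to two decimals.
shares = {stim: round(100 * n / total, 2) for stim, n in counts.items()}
print(shares["VR"])   # -> 31.03
```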


Item No. | Dataset | Description

1 | DEAP | “Dataset for Emotion Analysis using Physiological and Video Signals” is an open-source dataset to analyze human affective states. The dataset consists of 32 recorded participants watching 40 music video clips with a certain level of stimuli evaluated
2 | IADS | “The International Affective Digital Sounds” system is a collection of digital sounds that is used to stimulate emotional responses through acoustics and is used in investigations of emotion and attention of an individual
3 | IAPS | “The International Affective Picture” system is a collection of emotionally evocative pictures that is used to stimulate emotional responses to investigate the emotion and attention of an individual
4 | DREAMER | A dataset collected from 23 participants with signals from EEG and ECG using audio-visual stimuli responses. Access to this dataset is restricted and can be requested by filling in a request form to the owner
5 | ASCERTAIN | A “database for implicit personality and affect recognition” that collects signals from EEG, ECG, GSR, and facial activities from 58 individuals using 36 movie clips with an average length of 80 seconds
6 | SEED | The “SJTU Emotion EEG Dataset” is a collection of EEG signals collected from 15 individuals watching 15 movie clips and measures positive, negative, and neutral emotions
7 | SEED-IV | An extension of the SEED dataset that specifically targets the emotion labels happy, sad, fear, and neutral, with an additional eye-tracking feature added to the collected data alongside the EEG signals

The datasets the researchers used for their stimulation contents are ranked as follows: first, Self-Designed at 43.75%; second, DEAP at 18.75%; third, SEED, AVRS, and IAPS at 6.25% each; and lastly, IADS, DREAMER, MediaEval, Quran Verse, DECAF, and NAPS, all at 3.13%. The most prominent music stimuli come from the DEAP dataset [121], which is highly regarded and commonly referred to for its open access for researchers. While IADS [122] and MediaEval [123] are both open-source music databases with labeled emotions, researchers do not appear to have utilized them much, or may be unaware of their availability. As for video-related content, SEED [124–126], DREAMER [127], and ASCERTAIN [107] provide their video databases either openly or upon request. Researchers who designed their own stimulus databases used two different stimuli, music and video clips; of these, self-designed music stimuli accounted for 42.86% and self-designed video clips for 57.14%. Table 3 provides the information for accessing the mentioned databases available for public usage.

One of the studies was not included in the clip-length averaging (247.55 seconds) because it reported the total length rather than the per-clip length. The rest of the papers in Table 4 explicitly mention the per-clip length or the range of video lengths (taken at the maximum), which was used to compute the average length per clip presented to the participants. Across pictures, music, video clips, and virtual reality, the average length per clip was 107 seconds, with the shortest at 15 seconds (picture) and the longest at 820 seconds (video clip). This average may be skewed, since some of the lengthier videos appeared in only one paper, while the DEAP clips (60 seconds) were referred to repeatedly.


Research author | Stimuli | Dataset | Clip length | Emotion classes

[92] | Music | IADS (4 songs) | 60 sec per clip | Pleasant, happy, frightened, angry
[93] | Music | Self-Designed (40 songs) | — | Happy, angry, afraid, sad
[94] | Music | Self-Designed (301 songs collected from different albums) | 30 sec per clip | Happy, angry, sad, peaceful
[95] | Music | Self-Designed (1080 songs) | — | Anger, sadness, happiness, boredom, calm, relaxation, nervousness, pleased, and peace
[96] | Music | Self-Designed (3552 songs from Baidu) | — | Contentment, depression, exuberance
[97] | Music | 1000 songs from MediaEval | 45 sec per clip | Pleasing, angry, sad, relaxing
[98] | Music | Self-Designed (25 songs + Healing4Happiness dataset) | 247.55 sec | Valence, arousal
[99] | Music + picture | IAPS, Quran Verse, Self-Designed (Musicovery, AMG, Last.fm) | 60 sec per clip | Happy, fear, sad, calm
[100] | Music videos | DEAP (40 music videos) | 60 sec per clip | Valence, arousal, dominance, liking
[101] | Music videos | DEAP (40 music videos) | — | Valence, arousal
[102] | Music videos | DEAP (40 music videos) | 60 sec per clip | Valence, arousal
[103] | Music videos | DEAP (40 music videos) | 60 sec per clip | —
[104] | Music videos | DEAP (40 music videos) | 60 sec per clip | Valence, arousal
[105] | Music videos | DEAP (40 music videos) | 60 sec per clip | Valence, arousal, dominance
[106] | Video clips | Self-Designed (12 video clips) | 150 sec per clip | Happy, fear, sad, relax
[107] | Video clips | DECAF (36 video clips) [108] | 51–128 sec per clip | Valence, arousal
[109] | Video clips | Self-Designed (15 video clips) | 120–240 sec per clip | Happy, sad, fear, disgust, neutral
[110] | Video clips | SEED (15 video clips), DREAMER (18 video clips) | SEED (240 sec per clip), DREAMER (65–393 sec per clip) | Negative, positive, and neutral (SEED); amusement, excitement, happiness, calmness, anger, disgust, fear, sadness, and surprise (DREAMER)
[111] | Video clips | SEED (15 video clips) | 240 sec per clip | Positive, neutral, negative
[112] | Video clips | Self-Designed (20 video clips) | 120 sec per clip | Valence, arousal
[113] | VR | Self-Designed (4 scenes) | — | Arousal and valence
[114] | VR | AVRS (8 scenes) | 80 sec per scene | Happy, sad, fear, relaxation, disgust, rage
[115] | VR | Self-Designed (2 video clips) | 475 sec + 820 sec clips | Horror, empathy
[116] | VR | Self-Designed (5 scenes) | 60 sec per scene | Happy, relaxed, depressed, distressed, fear
[117] | VR | Self-Designed (1 scene) | — | Engagement, enjoyment, boredom, frustration, workload
[118] | VR | Self-Designed (1 scene that changes colour intensity) | — | Anguish, tenderness
[114] | VR | AVRS (4 scenes) | — | Happy, fear, peace, disgust, sadness
[119] | VR | NAPS (Nencki Affective Picture System) (20 pictures) | 15 sec per picture | Happy, fear
[120] | VR | Self-Designed (1 scene) | 90 sec per clip | Fear

For VR-focused stimuli, the researchers designed their own stimulus databases to fit their VR environments, since there is a lack of available datasets: the currently available datasets were designed for viewing from a monitor’s perspective. The Affective Virtual Reality System (AVRS) is a new database designed by Zhang et al. [114], which combines IAPS [128], IADS, and the China Affective Video System (CAVS) to produce a virtual environment that accommodates a VR headset for emotion classification. However, the dataset has so far been evaluated only with the Self-Assessment Manikin (SAM) to establish how effectively AVRS delivers emotion, and it is currently not available for public access. The Nencki Affective Picture System (NAPS), developed by Marchewka et al. [129], uses high-quality, realistic picture databases to induce emotional states.

4.2. Emotion Classes Used for Classification

Thirty papers studying emotion classification were identified, and 29 of these are tabulated in Table 4 with their stimuli presented, the types of emotions assessed, the length of each stimulus, and the dataset used for stimulus presentation. Eighteen studies reported the emotional tags used for emotion classification, while 11 papers used the two-dimensional emotional space; one paper, based on the DEAP dataset, did not report its emotional classes and was therefore excluded from Table 4. Among the 18 investigations that reported emotional tags, an average of 4.3 emotion classes was used, ranging from one to nine classes. A total of 73 emotional tags were used across these studies. The most common were happy (16.44%), sad (13.70%), and fear (12.33%), all of which appear in Ekman's six basic emotions [34]; the other three basic emotions, angry (5.48%), surprise (1.37%), and disgust (5.48%), were not among the more commonly used tags. The remaining emotional classes (afraid, amusement, anger, anguish, boredom, calm, contentment, depression, distress, empathy, engagement, enjoyment, exciting, exuberance, frightened, frustration, horror, nervous, peaceful, pleasant, pleased, rage, relaxation, tenderness, and workload, among others) were each used in only 1.37% to 5.48% of cases; these figures exclude the valence, arousal, dominance, and liking indications.

Emotional assessment using nonspecific classes such as valence, arousal, dominance, liking, positive, negative, and neutral was used 28 times in total. In the two-dimensional space, valence, which measures how positive or negative an emotion is, accounted for 32.14% of usage, and arousal, which measures the user's level of engagement (passive or active), likewise accounted for 32.14%. The less frequently evaluated three-dimensional space, which adds dominance, showed only 7.14% usage; this may be due to the higher complexity of the emotional state involved, as it requires participants to have a knowledgeable understanding of their own mental state. The remaining nonspecific tags, such as positive, negative, neutral, and liking, ranged between 3.57% and 10.71% usage.

Finally, four types of stimuli were used to evoke emotions in the test participants: music, music videos, video clips, and virtual reality, with one report combining music and pictures. The music stimuli include everyday sounds such as rain, writing, laughter, or barking, as used in the IAPS stimulus database, while other studies used musical excerpts collected from online repositories to induce emotions. Music videos combine rhythmic songs with videos of dancing movements. Video clips comprising Hollywood movie segments (DECAF) or Chinese films (SEED) were collected and stitched together according to the intended emotion to be elicited in the participants. Virtual reality immerses users in a virtual environment in which they can freely view their surroundings; some VR environments were captured from horror films, while others placed users in a static position with the environment changing its colours and patterns to arouse their emotions. By usage, virtual reality stimuli accounted for 31.03%, music for 24.14%, music videos and video clips for 20.69% each, and the single combination of music and pictures for 3.45%.

4.3. Common EEG Headset Used for Recordings

The tabulated information on the usage of wearable EEG headsets is given in Table 5. Eight devices were utilized for EEG recordings: BioSemi ActiveTwo, Emotiv EPOC+, NeuroSky MindWave, B-Alert X10, actiChamp, Muse, Ag/AgCl sintered ring electrodes, and an AgCl electrode cap. Ranked by usage, these are BioSemi ActiveTwo (40.00%), Emotiv EPOC+ and NeuroSky MindWave (13.33% each), and actiChamp, Ag/AgCl sintered ring electrodes, AgCl electrode cap, B-Alert X10, and Muse (6.67% each). Among these devices, only the Ag/AgCl electrodes must be manually placed on the subject's scalp; the remainder are headsets with preset electrode positions that researchers can simply fit over the subject's head. To obtain better readings, the Emotiv EPOC+ and the Ag/AgCl electrodes are supplied with an adhesive gel to improve signal acquisition quality, and the Muse, with its dry-electrode technology, requires only a wet cloth applied to the skin; the other devices (B-Alert X10, actiChamp, and NeuroSky) provide no recommendation on whether any adhesive element is needed to improve signal acquisition quality. All of these devices are capable of collecting the delta, theta, alpha, beta, and gamma brainwave frequency bands, which means that the specific functions of the brainwaves can be analyzed in greater depth for emotion classification, particularly over the frontal and temporal regions that process emotional experiences.
With regard to brain regions, the Emotiv EPOC+ places electrodes over the frontal, temporal, parietal, and occipital regions; the B-Alert X10 and actiChamp cover the frontal and parietal regions; the Muse covers the frontal and temporal regions; and the NeuroSky covers only the frontal region. The Ag/AgCl electrodes have no fixed montage, as the number and placement of electrodes depend solely on the researcher and the recording device used.


Research author | EEG headset model used | Brief description of electrode placements | Frequency bands recorded

[102] | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | Theta, alpha, lower-beta, upper-beta, gamma
[130] | NeuroSky MindWave | Prefrontal | Delta, theta, low-alpha, high-alpha, low-beta, high-beta, low-gamma, mid-gamma
[120] | actiChamp | Frontal, central, parietal, occipital | Delta, theta, alpha, beta, gamma
[109] | AgCl electrode cap | — | Delta, theta, alpha, beta, gamma
[103] | BioSemi ActiveTwo | Frontal | Delta, theta, alpha, beta, gamma
[104] | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | Delta, theta, alpha, beta, gamma
[105] | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | Delta, theta, alpha, beta, gamma
[117] | Emotiv EPOC+ | Prefrontal-frontal, frontal, frontal-central, temporal, parietal, occipital | Delta, theta, alpha, beta, gamma
[58] | Muse | Temporal-parietal, prefrontal-frontal | Delta, theta, alpha, beta, gamma
[107] | NeuroSky MindWave | Prefrontal | Delta, theta, alpha, beta, gamma
[119] | Emotiv EPOC+ | Prefrontal-frontal, frontal, frontal-central, temporal, parietal, occipital | Alpha, low-beta, high-beta, gamma, theta
[101] | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | Alpha, beta
[112] | Ag/AgCl sintered ring electrodes | Fp1, T3, F7, O1, T4, Fp2, C3, T5, F3, P3, T6, P4, O2, F4, F8 | —
[113] | B-Alert X10 | Frontal, central, parietal | —
[100] | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | —

Based on Table 5, of the 15 research papers that disclosed the headsets used, only 11 reported the EEG brainwave bands they collected: 9 papers collected all five bands (delta, theta, alpha, beta, and gamma), 2 papers did not collect the delta band, and 1 paper did not collect the delta, theta, and gamma bands. This suggests that, for emotion classification studies, both the lower frequency bands (delta and theta) and the higher frequency bands (alpha, beta, and gamma) are considered equally important and are the preferred brainwave features among researchers.
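To make the band-based feature extraction concrete, the following is a minimal, hypothetical sketch (not taken from any of the reviewed papers) that estimates per-band power from a single EEG channel using Welch's method. The band boundaries used here are one common convention; exact cut-offs vary slightly across studies.

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG band boundaries in Hz (an assumption; studies differ slightly).
BANDS = {
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 45),
}

def band_powers(signal, fs):
    """Estimate absolute power in each EEG band via Welch's PSD estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    return powers

# Example: a synthetic 10 Hz oscillation should show up as alpha power.
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
p = band_powers(eeg, fs)
```

In a real study, one such vector of band powers per channel would typically serve as the feature set fed to the classifier.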

4.4. Popular Algorithms Used for Emotion Classification

Recent developments in human-computer interaction (HCI) that allow a computer to recognize the emotional state of its user enable more integrated interaction between humans and computers. This capability propels the technology forward and creates vast opportunities for applications in fields such as education, healthcare, and the military [131]. Human emotions can be recognized through various means such as gestures, facial expressions, physiological signals, and neuroimaging.

Over the last decade of research on emotion recognition using physiological signals, researchers have deployed numerous classifiers to distinguish emotional states [132]. Classifiers such as K-nearest neighbor (KNN) [133, 134], regression trees, Bayesian networks, support vector machines (SVM) [133, 135], canonical correlation analysis (CCA) [136], artificial neural networks (ANN) [137], linear discriminant analysis (LDA) [138], and Marquardt backpropagation (MBP) [139] have all been used to classify emotions. However, the use of these different classifiers makes it difficult to port systems to different training and testing datasets, which yield different learned features depending on how the emotion stimuli are presented to the user.
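As an illustrative sketch of how several of the classifiers named above are typically compared, the following scikit-learn snippet cross-validates an SVM, KNN, and LDA on a synthetic stand-in feature matrix; the feature dimensions, class offset, and trial counts are invented purely for illustration and do not come from any of the reviewed studies.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical feature matrix: 120 trials x 10 band-power features,
# two emotion classes (e.g., high/low valence) separated by a small offset.
X = rng.normal(size=(120, 10))
y = np.repeat([0, 1], 60)
X[y == 1] += 0.8  # synthetic class effect

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
}
scores = {}
for name, clf in classifiers.items():
    # Standardizing features before SVM/KNN is standard practice.
    pipe = make_pipeline(StandardScaler(), clf)
    scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
```

Swapping the synthetic matrix for real per-trial EEG features reproduces the typical comparison protocol reported in these papers.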

Observations of the developments in emotion classification between 2016 and 2019 show that many of the techniques described earlier were applied, along with some additional augmentation techniques. Table 6 shows the classifiers used and the performance achieved, with the classifiers ranked by popularity: SVM (31.48%), KNN (11.11%), NB (7.41%), MLP, RF, and CNN (5.56% each), Fisherface (3.70%), and BP, Bayes, DGCNN, ELM, FKNN, GP, GBDT, Haar, IB, LDA, LFSM, neural network, neuro-fuzzy network, WPDAI-ICA, and HC (1.85% each). One other study (1.85%) used the Biotrace+ software to evaluate classification performance, and it was unclear which underlying algorithm produced the reported results.


Research author | Classifiers | Best performance achieved | Intersubject or intrasubject

[110] | Dynamical graph convolutional neural network | 90.40% | Intrasubject and intersubject
[140] | Support vector machine | 80.76% | Intrasubject and intersubject
[93] | Random forest, instance-based | 98.20% | Intrasubject
[118] | Support vector machine | — | Intrasubject
[99] | Multilayer perceptron | 76.81% | Intrasubject
[117] | K-nearest neighbor | 95.00% | Intersubject
[92] | Support vector machine | 73.10% | Intersubject
[104] | Support vector machine, K-nearest neighbor, convolutional neural network, deep neural network | 82.81% | Intersubject
[141] | Support vector machine | 81.33% | Intersubject
[102] | Support vector machine, convolutional neural network | 81.14% | Intersubject
[103] | Gradient boosting decision tree | 75.18% | Intersubject
[113] | Support vector machine | 70.00% | Intersubject
[100] | Support vector machine | 70.52% | Intersubject
[107] | Support vector machine, naïve Bayes | 61.00% | Intersubject
[142] | Support vector machine | 57.00% | Intersubject
[94] | Support vector machine, K-nearest neighbor | — | Intersubject
[111] | Support vector machine, K-nearest neighbor | 98.37% | —
[143] | Convolutional neural network | 97.69% | —
[144] | Support vector machine, backpropagation neural network, late fusion method | 92.23% | —
[145] | Fisherface | 91.00% | —
[93] | Haar, Fisherface | 91.00% | —
[106] | Extreme learning machine | 87.10% | —
[112] | K-nearest neighbor, support vector machine, multilayer perceptron | 86.27% | —
[97] | Support vector machine, K-nearest neighbor, fuzzy networks, Bayes, linear discriminant analysis | 83.00% | —
[105] | Naïve Bayes, support vector machine, K-means, hierarchical clustering | 78.06% | —
[130] | Support vector machine, naïve Bayes, multilayer perceptron | 71.42% | —
[95] | Gaussian process | 71.30% | —
[96] | Naïve Bayes | 68.00% | —

As can be seen here, SVM and KNN were among the more popular methods for emotion classification, with highest achieved performances of 97.33% (SVM) and 98.37% (KNN). However, other algorithms also performed very well, with several classifiers crossing the 90% margin: CNN (97.69%), DGCNN (90.40%), Fisherface (91.00%), LFSM (92.23%), and RF (98.20%). This suggests that other classification techniques may also achieve good performance or improve classification results. These figures reflect only the highest reported performances and not a general consensus: some of these algorithms worked on the generalized arousal and/or valence dimensions while others used very specific emotional tags, so it is difficult to directly compare classification performance across all the different classifiers.

4.5. Inter- and Intrasubject Classification in the Study of Emotion Classification

Intersubject variability refers to differences in brain anatomy and functionality across individuals, whereas intrasubject variability refers to such differences within an individual. Correspondingly, intrasubject classification trains and tests on data from the same individual, whereas intersubject classification uses training and testing data drawn from many different individuals. In intersubject classification, testing can therefore be done without retraining the classifier for the individual being tested; this is clearly the more challenging task, as the classifier is trained and tested on different individuals' EEG data. Recent studies have increasingly focused on appreciating rather than ignoring this variability. Through the lens of variability, researchers can gain insight into individual differences and cross-session variations, facilitating precision functional brain mapping and decoding based on individual variability and similarity. The application of neurophysiological biometrics relies on both intersubject and intrasubject variability, raising the questions of how such variability can be observed, analyzed, and modeled, what researchers can learn from observing it, and how to deal with it in neuroimaging. From the 30 papers identified, 28 indicated whether they conducted intrasubject classification, intersubject classification, or both.

The nonstationary EEG correlates of emotional responses differ between individuals; this intersubject variability is affected by intrinsic differences in personality, culture, gender, educational background, and living environment, and individuals may exhibit distinct behavioral and/or neurophysiological responses even when perceiving the same event. Thus, individuals are unlikely to share common EEG distributions that correlate to the same emotional states. Researchers have highlighted the significant challenges posed by intersubject classification in affective computing [140, 142–147]. Lin describes that for a subject-independent exercise (intersubject classification) to work well, the class distributions between individuals have to be similar to some extent; however, individuals in real life may have different behavioral or physiological responses to the same stimuli. Subject-dependent (intrasubject) classification was argued and shown to be the preferable emotion classification approach by Rinderknecht et al. [148]. Nonetheless, the difficulty remains in developing and fitting a generalized classifier that works well for all individuals, which is still a grand challenge in this research domain.
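The distinction between the two evaluation schemes can be sketched in code. The following hypothetical example uses scikit-learn's LeaveOneGroupOut to simulate intersubject (leave-one-subject-out) evaluation and a within-subject k-fold for the intrasubject case; all data, subject offsets, and effect sizes are synthetic assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, trials = 6, 40
X, y, groups = [], [], []
for s in range(n_subjects):
    offset = rng.normal(scale=1.5, size=10)  # subject-specific baseline shift
    labels = np.repeat([0, 1], trials // 2)
    feats = rng.normal(size=(trials, 10)) + offset
    feats[labels == 1] += 0.7                # class effect shared across subjects
    X.append(feats); y.append(labels); groups.append(np.full(trials, s))
X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

pipe = make_pipeline(StandardScaler(), SVC())
# Intersubject: leave-one-subject-out -- train on 5 subjects, test on the 6th.
loso = cross_val_score(pipe, X, y, groups=groups, cv=LeaveOneGroupOut()).mean()
# Intrasubject: stratified k-fold within a single subject's own trials.
mask = groups == 0
intra = cross_val_score(pipe, X[mask], y[mask], cv=StratifiedKFold(5)).mean()
```

The subject-specific offsets mimic the intersubject variability discussed above: the leave-one-subject-out score typically degrades relative to the within-subject score as those offsets grow.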

From Table 6, it can be observed that not all researchers indicated their classification setup. Typically, setup descriptions that mention subject-independent or across-subjects evaluation refer to intersubject classification, while subject-dependent or within-subjects evaluation refers to intrasubject classification. These descriptors are used interchangeably by researchers, as there are no specific guidelines on how they should be used in describing emotion classification experiments; the table therefore summarizes the papers according to these descriptors in a more objective manner. Of the 30 papers identified, only 18 (5 on intrasubject and 13 on intersubject classification) specifically stated their approach. Of these, the best intrasubject performance was achieved by RF (98.20%) by Kumaran et al. [93] on music stimuli, while the best intersubject performance was achieved by DGCNN (90.40%) by Song et al. [110] using video stimuli from the SEED and DREAMER datasets. For VR stimuli, only Hidaka et al. [116] reported results, using SVM (81.33%), but with only five subjects, which is very low given that a minimum of roughly 30 subjects is expected for a justifiable evaluation, as noted by Alarcao and Fonseca [22].

4.6. Participants

From the 30 papers identified, 26 reported the number of participants used for emotion classification analysis, as summarized in Table 7, which is arranged from the highest to the lowest total number of participants. The number of participants ranged from 5 to 100. Twenty-three reports stated the gender composition, with more males (408) than females (342) overall, while 3 reports stated only the number of participants. Of these studies, 7.70% used fewer than 10 subjects, 46.15% used between 10 and 30 participants, and 46.15% used more than 30 participants.


Author | Emotion classes | Participants | Male | Female | Mean age ± SD

[114] | Happy, sad, fear, relaxation, disgust, rage | 100 | 57 | 43 | —
[113] | Arousal and valence (4 quadrants) | 60 | 16 | 44 | 28.9 ± 5.44
[149] | Valence, arousal | 58 (ASCERTAIN) | 37 | 21 | 30
[107] | Valence, arousal | 58 (ASCERTAIN) | 37 | 21 | 30
[112] | Valence, arousal (high and low) | 40 | 20 | 20 | 26.13 ± 2.79
[110] | Negative, positive, and neutral (SEED); amusement, excitement, happiness, calmness, anger, disgust, fear, sadness, and surprise (DREAMER) | 15 (SEED), 23 (DREAMER) | 21 | 17 | 26.6 ± 2.7
[115] | Horror (fear, anxiety, disgust, surprise, tension); empathy (happiness, sadness, love, being touched, compassion, distress, disappointment) | 38 | 19 | 19 | —
[100] | Valence, arousal, dominance, liking | 32 (DEAP) | 16 | 16 | 26.9
[101] | Valence, arousal (high and low) | 32 (DEAP) | 16 | 16 | 26.9
[102] | Valence, arousal | 32 (DEAP) | 16 | 16 | 26.9
[103] | — | 32 (DEAP) | 16 | 16 | 26.9
[104] | Valence, arousal (2 class) | 32 (DEAP) | 16 | 16 | 26.9
[105] | Valence, arousal, dominance | 32 (DEAP) | 16 | 16 | 26.9
[114] | Happy, fear, peace, disgust, sadness | 13 (watching video materials), 18 (VR materials) | 13 | 18 | —
[130] | Stress level (low and high) | 28 | 19 | 9 | 27.5
[98] | Valence, arousal (high and low) | 25 | — | — | —
[120] | Fear | 22 | 14 | 8 | —
[106] | Happy, fear, sad, relax | 20 | — | — | —
[117] | Engagement, enjoyment, boredom, frustration, workload | 20 | 19 | 1 | 15.29
[109] | Happy, sad, fear, disgust, neutral | 16 | 6 | 10 | 23.27 ± 2.37
[118] | Anguish, tenderness | 16 | — | — | —
[111] | Positive, neutral, negative | 15 (SEED) | 7 | 8 | —
[99] | Happy, fear, sad, calm | 13 | 8 | 5 | —
[141] | Happy, relaxed, depressed, distressed, fear | 10 | 10 | — | 21
[119] | Happy, fear | 6 | 5 | 1 | 26.67 ± 1.11
[92] | Pleasant, happy, frightened, angry | 5 | 4 | 1 | —

Sixteen reports stated mean participant ages ranging between 15.29 and 30, with the study on an ASD (autism spectrum disorder) group being the youngest at a mean age of 15.29. Another 4 reports gave only an age range of 18 to 28 [106, 120, 141, 150], 2 studies reported only that their volunteers were university students [98, 115], and 1 report stated that two additional institutions volunteered participants in addition to its own university students [118].

The 2 studies with fewer than 10 participants [92, 119] justified their sample sizes. Horvat expressed interest in investigating the stability of affective EEG features by running multiple sessions on single subjects, as opposed to running a large number of subjects with a single EEG recording session each, as in DEAP. Lan conducted a pilot study combining VR using the NAPS database with the Emotiv EPOC+ headset to investigate the effectiveness of both devices, and found that achieving a better immersive experience required sacrificing some of the ergonomics of both devices.

The participants who volunteered for these emotion classification experiments were all reported to have no physical abnormalities or mental disorders and were thus fit and healthy, aside from one study that was granted permission to work with ASD subjects [117]. Some reports evaluated participants' understanding of emotion labels before any experiment, as most participants would need to rate their emotions using the Self-Assessment Manikin (SAM) after each trial. The studies also reported that participants had sufficient educational backgrounds to justify their emotions when questioned about their current mental state. Many of the studies were conducted on university grounds with permission, since the research was carried out by university-based academics, and the participant population therefore consisted mostly of university students.

Many of these studies focused only on feature extraction from the EEG recordings or on SAM evaluations of valence, arousal, and dominance, presenting their classification results at the end. Based on the current findings, no studies were found that specifically differentiated between male and female emotional responses or classifications. To obtain reliable and statistically meaningful classification results, such studies should be conducted with at least 10 participants.

5. Discussion

One issue that emerged from this review is the lack of studies on virtual reality-based emotion classification, even though the immersive experience of virtual reality, which combines sight, hearing, and the sense of "being there", could evoke stronger emotional responses than traditional stimuli presented through computer monitors or speakers. There is currently no openly available database of VR-based emotional stimuli validated for eliciting emotional responses in virtual reality, so many researchers have had to design their own stimuli. Furthermore, there are inconsistencies in the duration of the stimuli presented to participants, especially in virtual reality, where emotion fluctuates greatly depending on the duration and content of the stimulus. To keep emotional fluctuations minimal while still eliciting the intended emotional response, the length of the stimulus should be kept between 15 and 20 seconds; this duration gives participants ample time to explore and become associated with the virtual environment while still being stimulated enough to produce measurable emotional responses.
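If a fixed 15–20 second stimulus window of this kind is adopted, the continuous EEG recording must be cut into fixed-length epochs before feature extraction. A minimal sketch of that step follows; the 14-channel montage, 128 Hz sampling rate, and recording length are arbitrary assumptions for illustration.

```python
import numpy as np

def epoch_signal(data, fs, epoch_sec=15.0):
    """Split a (channels, samples) recording into non-overlapping fixed-length epochs."""
    samples_per_epoch = int(epoch_sec * fs)
    n_epochs = data.shape[1] // samples_per_epoch
    # Drop any trailing partial epoch.
    trimmed = data[:, : n_epochs * samples_per_epoch]
    # Result shape: (n_epochs, channels, samples_per_epoch)
    return trimmed.reshape(data.shape[0], n_epochs, samples_per_epoch).swapaxes(0, 1)

fs = 128
recording = np.random.randn(14, 70 * fs)  # e.g., a 14-channel, 70-second recording
epochs = epoch_signal(recording, fs, epoch_sec=15.0)  # 4 complete 15-second epochs
```

Each epoch would then be labeled with the emotion elicited by the corresponding stimulus segment before feature extraction and classification.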

In recent developments in virtual reality, many products are available on the market for entertainment, with the majority intended for gaming, such as the Oculus Rift, HTC Vive, PlayStation VR, and other upcoming products. However, these products can be costly and come with burdensome requirements, such as a workstation capable of rendering virtual reality environments or a console-specific device. Current smartphones have built-in inertial sensors such as gyroscopes and accelerometers to measure direction and movement speed, and these small, compact devices have enough computational power to run virtual reality content when provided with a VR headset mount and a set of earphones. Virtual reality environments can be built using software development kits (SDKs) such as Unity3D, which can export to multiple platforms, making deployment versatile across many devices.

With regard to versatility, various machine learning algorithms are available for different applications, and thanks to technological advances in computing and efficient algorithmic procedures, they can complete complex calculations with minimal time wasted [151]. However, there is no evidence of a single algorithm that outperforms all the rest, which makes algorithm selection difficult when preparing an emotion classification task. Furthermore, a trained machine learning model is needed that can be used for commercial deployment or as a benchmark for future emotion classification work. Therefore, intersubject classification (also referred to as subject-independent, across-subjects, or leave-one-subject-out evaluation in other studies) is the approach that should be followed, as it generalizes the emotion classification task over the whole population and has high impact value because the classification model does not need to be retrained for every new user.

The collection of brainwave signals varies depending on the quality and sensitivity of the electrodes, as well as on the number of electrodes and their placement around the scalp, which should conform to the international 10–20 EEG standard. A standardized measuring tool for EEG collection is needed, since the large variety of wearable EEG headsets produces varying results depending on how they are handled by the user. It is suggested that standardization be accomplished using a low-cost wearable EEG headset, since it is easily accessible to the research community. While previous studies have reported that emotional experiences are stored within the temporal region of the brain, current evidence suggests that emotional responses may also involve other regions, such as the frontal and parietal regions. Furthermore, combining brainwave bands from both the lower and higher frequencies can improve emotion classification accuracy. Additionally, the optimal selection of electrodes as learning features should be considered, since EEG devices differ in the number and placement of their electrodes; the number and selection of electrode positions should be explored systematically to verify how they affect the emotion classification task.
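One way to explore electrode selection systematically, as suggested above, is an exhaustive search over small channel subsets scored by cross-validation. The sketch below is hypothetical: the montage, the synthetic per-channel features, and the assumption that frontal channels carry the class signal are all invented for illustration.

```python
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
channels = ["Fp1", "Fp2", "F3", "F4", "T7", "T8", "P3", "P4"]  # hypothetical montage
n_trials = 80
y = np.repeat([0, 1], n_trials // 2)
# One synthetic band-power feature per channel; frontal channels made informative.
X = rng.normal(size=(n_trials, len(channels)))
for informative in ("Fp1", "Fp2", "F3", "F4"):
    X[y == 1, channels.index(informative)] += 1.0

pipe = make_pipeline(StandardScaler(), SVC())
results = {}
for k in (2, 4):  # try all 2- and 4-channel subsets
    for subset in combinations(range(len(channels)), k):
        idx = list(subset)
        names = tuple(channels[i] for i in idx)
        results[names] = cross_val_score(pipe, X[:, idx], y, cv=5).mean()
best_subset = max(results, key=results.get)
```

With real recordings, such a search would quantify how much each electrode position contributes to the emotion classification task.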

6. Conclusions

In this review, we have presented the analysis of emotion classification studies from 2016–2019 that propose novel methods for emotion recognition using EEG signals. The review also suggests a different approach towards emotion classification using VR as the emotional stimuli presentation platform and the need for developing a new database based on VR stimuli. We hope that this paper has provided a useful critical review update on the current research work in EEG-based emotion classification and that the future opportunities for research in this area would serve as a platform for new researchers venturing into this line of research.

Data Availability

No data are made available for this work.

Conflicts of Interest

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by a grant from the Ministry of Science, Technology, Innovation (MOSTI), Malaysia (ref. ICF0001-2018).

References

  1. A. Mert and A. Akan, “Emotion recognition from EEG signals by using multivariate empirical mode decomposition,” Pattern Analysis and Applications, vol. 21, no. 1, pp. 81–89, 2018. View at: Publisher Site | Google Scholar
  2. M. M. Bradley and P. J. Lang, “Measuring emotion: the self-assessment manikin and the semantic differential,” Journal of Behavior Therapy and Experimental Psychiatry, vol. 25, no. 1, pp. 49–59, 1994. View at: Publisher Site | Google Scholar
  3. J. Morris, “Observations: SAM: the Self-Assessment Manikin; an efficient cross-cultural measurement of emotional response,” Journal of Advertising Research, vol. 35, no. 6, pp. 63–68, 1995. View at: Google Scholar
  4. E. C. S. Hayashi, J. E. G. Posada, V. R. M. L. Maike, and M. C. C. Baranauskas, “Exploring new formats of the Self-Assessment Manikin in the design with children,” in Proceedings of the 15th Brazilian Symposium on Human Factors in Computer Systems-IHC’16, São Paulo, Brazil, October 2016. View at: Publisher Site | Google Scholar
  5. A. J. Casson, “Wearable EEG and beyond,” Biomedical Engineering Letters, vol. 9, no. 1, pp. 53–71, 2019. View at: Publisher Site | Google Scholar
  6. Y.-H. Chen, M. de Beeck, L. Vanderheyden et al., “Soft, comfortable polymer dry electrodes for high quality ECG and EEG recording,” Sensors, vol. 14, no. 12, pp. 23758–23780, 2014. View at: Publisher Site | Google Scholar
  7. G. Boon, P. Aricò, G. Borghini, N. Sciaraffa, A. Di Florio, and F. Babiloni, “The dry revolution: evaluation of three different eeg dry electrode types in terms of signal spectral features, mental states classification and usability,” Sensors (Switzerland), vol. 19, no. 6, pp. 1–21, 2019. View at: Publisher Site | Google Scholar
  8. S. Jeon, J. Chien, C. Song, and J. Hong, “A preliminary study on precision image guidance for electrode placement in an EEG study,” Brain Topography, vol. 31, no. 2, pp. 174–185, 2018. View at: Publisher Site | Google Scholar
  9. Y. Kakisaka, R. Alkawadri, Z. I. Wang et al., “Sensitivity of scalp 10–20 EEG and magnetoencephalography,” Epileptic Disorders, vol. 15, no. 1, pp. 27–31, 2013. View at: Publisher Site | Google Scholar
  10. M. Burgess, A. Kumar, and V. M. J, “Analysis of EEG using 10:20 electrode system,” International Journal of Innovative Research in Science, Engineering and Technology, vol. 1, no. 2, pp. 2319–8753, 2012. View at: Google Scholar
  11. A. D. Bigirimana, N. Siddique, and D. Coyle, “A hybrid ICA-wavelet transform for automated artefact removal in EEG-based emotion recognition,” in IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016-Conference Proceedings, pp. 4429–4434, Budapest, Hungary, October 2016.
  12. R. Bogacz, U. Markowska-Kaczmar, and A. Kozik, “Blinking artefact recognition in EEG signal using artificial neural network,” in Proceedings of the 4th Conference on Neural, Zakopane, Poland, June 1999.
  13. S. O’Regan, S. Faul, and W. Marnane, “Automatic detection of EEG artefacts arising from head movements using EEG and gyroscope signals,” Medical Engineering and Physics, vol. 35, no. 7, pp. 867–874, 2013.
  14. R. Romo-Vazquez, R. Ranta, V. Louis-Dorr, and D. Maquin, “EEG ocular artefacts and noise removal,” in Annual International Conference of the IEEE Engineering in Medicine and Biology-Proceedings, pp. 5445–5448, Lyon, France, August 2007.
  15. M. K. Islam, A. Rastegarnia, and Z. Yang, “Methods for artifact detection and removal from scalp EEG: a review,” Neurophysiologie Clinique/Clinical Neurophysiology, vol. 46, no. 4-5, pp. 287–305, 2016.
  16. A. S. Janani, T. S. Grummett, T. W. Lewis et al., “Improved artefact removal from EEG using Canonical Correlation Analysis and spectral slope,” Journal of Neuroscience Methods, vol. 298, pp. 1–15, 2018.
  17. X. Jiang, G. B. Bian, and Z. Tian, “Removal of artifacts from EEG signals: a review,” Sensors (Switzerland), vol. 19, no. 5, pp. 1–18, 2019.
  18. S. Suja Priyadharsini, S. Edward Rajan, and S. Femilin Sheniha, “A novel approach for the elimination of artefacts from EEG signals employing an improved Artificial Immune System algorithm,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 28, no. 1-2, pp. 239–259, 2016.
  19. A. Szentkirályi, K. K. H. Wong, R. R. Grunstein, A. L. D'Rozario, and J. W. Kim, “Performance of an automated algorithm to process artefacts for quantitative EEG analysis during a simultaneous driving simulator performance task,” International Journal of Psychophysiology, vol. 121, pp. 12–17, 2017.
  20. A. Tandle, N. Jog, P. D'cunha, and M. Chheta, “Classification of artefacts in EEG signal recordings and EOG artefact removal using EOG subtraction,” Communications on Applied Electronics, vol. 4, no. 1, pp. 12–19, 2016.
  21. M. Murugappan and S. Murugappan, “Human emotion recognition through short time Electroencephalogram (EEG) signals using Fast Fourier Transform (FFT),” in Proceedings-2013 IEEE 9th International Colloquium on Signal Processing and its Applications, CSPA 2013, pp. 289–294, Kuala Lumpur, Malaysia, March 2013.
  22. S. M. Alarcao and M. J. Fonseca, “Emotions recognition using EEG signals: a survey,” IEEE Transactions on Affective Computing, vol. 10, pp. 1–20, 2019.
  23. J. Panksepp, Affective Neuroscience: The Foundations of Human and Animal Emotions, Oxford University Press, Oxford, UK, 2004.
  24. A. E. Penner and J. Stoddard, “Clinical affective neuroscience,” Journal of the American Academy of Child & Adolescent Psychiatry, vol. 57, no. 12, p. 906, 2018.
  25. L. Pessoa, “Understanding emotion with brain networks,” Current Opinion in Behavioral Sciences, vol. 19, pp. 19–25, 2018.
  26. P. Ekman and W. V. Friesen, “Constants across cultures in the face and emotion,” Journal of Personality and Social Psychology, vol. 17, no. 2, p. 124, 1971.
  27. B. De Gelder, “Why bodies? Twelve reasons for including bodily expressions in affective neuroscience,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1535, pp. 3475–3484, 2009.
  28. F. M. Plaza-del-Arco, M. T. Martín-Valdivia, L. A. Ureña-López, and R. Mitkov, “Improved emotion recognition in Spanish social media through incorporation of lexical knowledge,” Future Generation Computer Systems, vol. 110, 2020.
  29. J. Kumar and J. A. Kumar, “Machine learning approach to classify emotions using GSR,” Advanced Research in Electrical and Electronic Engineering, vol. 2, no. 12, pp. 72–76, 2015.
  30. M. Ali, A. H. Mosa, F. Al Machot, and K. Kyamakya, “Emotion recognition involving physiological and speech signals: a comprehensive review,” in Recent Advances in Nonlinear Dynamics and Synchronization, pp. 287–302, Springer, Berlin, Germany, 2018.
  31. D. H. Hockenbury and S. E. Hockenbury, Discovering Psychology, Macmillan, New York, NY, USA, 2010.
  32. I. B. Mauss and M. D. Robinson, “Measures of emotion: a review,” Cognition & Emotion, vol. 23, no. 2, pp. 209–237, 2009.
  33. E. Fox, Emotion Science: Cognitive and Neuroscientific Approaches to Understanding Human Emotions, Macmillan, New York, NY, USA, 2008.
  34. P. Ekman, “Are there basic emotions?” Psychological Review, vol. 99, no. 3, pp. 550–553, 1992.
  35. R. Plutchik, “The nature of emotions,” American Scientist, vol. 89, no. 4, pp. 344–350, 2001.
  36. C. E. Izard, “Basic emotions, natural kinds, emotion schemas, and a new paradigm,” Perspectives on Psychological Science, vol. 2, no. 3, pp. 260–280, 2007.
  37. C. E. Izard, “Emotion theory and research: highlights, unanswered questions, and emerging issues,” Annual Review of Psychology, vol. 60, no. 1, pp. 1–25, 2009.
  38. P. J. Lang, “The emotion probe: studies of motivation and attention,” American Psychologist, vol. 50, no. 5, p. 372, 1995.
  39. A. Mehrabian, “Comparison of the PAD and PANAS as models for describing emotions and for differentiating anxiety from depression,” Journal of Psychopathology and Behavioral Assessment, vol. 19, no. 4, pp. 331–357, 1997.
  40. E. Osuna, L. Rodríguez, J. O. Gutierrez-Garcia, and L. A. Castro, “Development of computational models of emotions: a software engineering perspective,” Cognitive Systems Research, vol. 60, 2020.
  41. A. Hassouneh, A. M. Mutawa, and M. Murugappan, “Development of a real-time emotion recognition system using facial expressions and EEG based on machine learning and deep neural network methods,” Informatics in Medicine Unlocked, vol. 20, p. 100372, 2020.
  42. F. Balducci, C. Grana, and R. Cucchiara, “Affective level design for a role-playing videogame evaluated by a brain-computer interface and machine learning methods,” The Visual Computer, vol. 33, no. 4, pp. 413–427, 2017.
  43. Z. Su, X. Xu, D. Jiawei, and W. Lu, “Intelligent wheelchair control system based on BCI and the image display of EEG,” in Proceedings of 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference, IMCEC 2016, pp. 1350–1354, Xi’an, China, October 2016.
  44. A. Campbell, T. Choudhury, S. Hu et al., “NeuroPhone: brain-mobile phone interface using a wireless EEG headset,” in Proceedings of the 2nd ACM SIGCOMM Workshop on Networking, Systems, and Applications on Mobile Handhelds, MobiHeld ’10, Co-located with SIGCOMM 2010, New Delhi, India, January 2010.
  45. D. Bright, A. Nair, D. Salvekar, and S. Bhisikar, “EEG-based brain controlled prosthetic arm,” in Proceedings of the Conference on Advances in Signal Processing, CASP 2016, pp. 479–483, Pune, India, June 2016.
  46. C. Demirel, H. Kandemir, and H. Kose, “Controlling a robot with extraocular muscles using EEG device,” in Proceedings of the 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018, Izmir, Turkey, May 2018.
  47. Y. Liu, Y. Ding, C. Li et al., “Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network,” Computers in Biology and Medicine, vol. 123, p. 103927, 2020.
  48. G. L. Ahern and G. E. Schwartz, “Differential lateralization for positive and negative emotion in the human brain: EEG spectral analysis,” Neuropsychologia, vol. 23, no. 6, pp. 745–755, 1985.
  49. H. Gunes and M. Piccardi, “Bi-modal emotion recognition from expressive face and body gestures,” Journal of Network and Computer Applications, vol. 30, no. 4, pp. 1334–1345, 2007.
  50. R. Jenke, A. Peer, and M. Buss, “Feature extraction and selection for emotion recognition from EEG,” IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 327–339, 2014.
  51. J. U. Blackford and D. S. Pine, “Neural substrates of childhood anxiety disorders,” Child and Adolescent Psychiatric Clinics of North America, vol. 21, no. 3, pp. 501–525, 2012.
  52. K. A. Goosens and S. Maren, “Long-term potentiation as a substrate for memory: evidence from studies of amygdaloid plasticity and pavlovian fear conditioning,” Hippocampus, vol. 12, no. 5, pp. 592–599, 2002.
  53. S. Maren, K. L. Phan, and I. Liberzon, “The contextual brain: implications for fear conditioning, extinction and psychopathology,” Nature Reviews Neuroscience, vol. 14, no. 6, pp. 417–428, 2013.
  54. U. Herwig, P. Satrapi, and C. Schönfeldt-Lecuona, “Using the international 10–20 EEG system for positioning of transcranial magnetic stimulation,” Brain Topography, vol. 16, no. 2, pp. 95–99, 2003.
  55. R. W. Homan, J. Herman, and P. Purdy, “Cerebral location of international 10–20 system electrode placement,” Electroencephalography and Clinical Neurophysiology, vol. 66, no. 4, pp. 376–382, 1987.
  56. G. M. Rojas, C. Alvarez, C. E. Montoya, M. de la Iglesia-Vayá, J. E. Cisternas, and M. Gálvez, “Study of resting-state functional connectivity networks using EEG electrodes position as seed,” Frontiers in Neuroscience, vol. 12, pp. 1–12, 2018.
  57. J. A. Blanco, A. C. Vanleer, T. K. Calibo, and S. L. Firebaugh, “Single-trial cognitive stress classification using portable wireless electroencephalography,” Sensors (Switzerland), vol. 19, no. 3, pp. 1–16, 2019.
  58. M. Abujelala, A. Sharma, C. Abellanoza, and F. Makedon, “Brain-EE: brain enjoyment evaluation using commercial EEG headband,” in Proceedings of the ACM International Conference Proceeding Series, New York, NY, USA, September 2016.
  59. L. H. Chew, J. Teo, and J. Mountstephens, “Aesthetic preference recognition of 3D shapes using EEG,” Cognitive Neurodynamics, vol. 10, no. 2, pp. 165–173, 2016.
  60. G. Mountstephens and T. Yamada, “Pediatric clinical neurophysiology,” Atlas of Artifacts in Clinical Neurophysiology, vol. 41, 2018.
  61. C. Miller, “Review of handbook of EEG interpretation,” The Neurodiagnostic Journal, vol. 55, no. 2, p. 136, 2015.
  62. I. Obeid and J. Picone, “The Temple University Hospital EEG data corpus,” Frontiers in Neuroscience, vol. 10, 2016.
  63. A. Aldridge, E. Barnes, C. L. Bethel et al., “Accessible electroencephalograms (EEGs): a comparative review with OpenBCI’s Ultracortex Mark IV headset,” in Proceedings of the 2019 29th International Conference Radioelektronika, pp. 1–6, Pardubice, Czech Republic, April 2019.
  64. P. Bialas and P. Milanowski, “A high frequency steady-state visually evoked potential based brain computer interface using consumer-grade EEG headset,” in Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2014, pp. 5442–5445, Chicago, IL, USA, August 2014.
  65. Y. Wang, Z. Wang, W. Clifford, C. Markham, T. E. Ward, and C. Deegan, “Validation of low-cost wireless EEG system for measuring event-related potentials,” in Proceedings of the 29th Irish Signals and Systems Conference, ISSC 2018, pp. 1–6, Belfast, UK, June 2018.
  66. S. Sridhar, U. Ramachandraiah, E. Sathish, G. Muthukumaran, and P. R. Prasad, “Identification of eye blink artifacts using wireless EEG headset for brain computer interface system,” in Proceedings of IEEE Sensors, Montreal, Canada, October 2018.
  67. M. Ahmad and M. Aqil, “Implementation of nonlinear classifiers for adaptive autoregressive EEG features classification,” in Proceedings-2015 Symposium on Recent Advances in Electrical Engineering, RAEE 2015, Islamabad, Pakistan, October 2015.
  68. A. Mheich, J. Guilloton, and N. Houmani, “Monitoring visual sustained attention with a low-cost EEG headset,” in Proceedings of the International Conference on Advances in Biomedical Engineering, Beirut, Lebanon, October 2017.
  69. K. Tomonaga, S. Wakamizu, and J. Kobayashi, “Experiments on classification of electroencephalography (EEG) signals in imagination of direction using a wireless portable EEG headset,” in Proceedings of the ICCAS 2015-2015 15th International Conference On Control, Automation And Systems, Busan, South Korea, October 2015.
  70. S. Wakamizu, K. Tomonaga, and J. Kobayashi, “Experiments on neural networks with different configurations for electroencephalography (EEG) signal pattern classifications in imagination of direction,” in Proceedings-5th IEEE International Conference on Control System, Computing and Engineering, ICCSCE 2015, pp. 453–457, George Town, Malaysia, November 2015.
  71. R. Sarno, M. N. Munawar, and B. T. Nugraha, “Real-time electroencephalography-based emotion recognition system,” International Review on Computers and Software (IRECOS), vol. 11, no. 5, pp. 456–465, 2016.
  72. N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, “Familiarity effects in EEG-based emotion recognition,” Brain Informatics, vol. 4, no. 1, pp. 39–50, 2017.
  73. N. Zhuang, Y. Zeng, L. Tong, C. Zhang, H. Zhang, and B. Yan, “Emotion recognition from EEG signals using multidimensional information in EMD domain,” BioMed Research International, vol. 2017, Article ID 8317357, 9 pages, 2017.
  74. T. M. C. Lee, H.-L. Liu, C. C. H. Chan, S.-Y. Fang, and J.-H. Gao, “Neural activities associated with emotion recognition observed in men and women,” Molecular Psychiatry, vol. 10, no. 5, p. 450, 2005.
  75. J.-Y. Zhu, W.-L. Zheng, and B.-L. Lu, “Cross-subject and cross-gender emotion classification from EEG,” in World Congress on Medical Physics and Biomedical Engineering, pp. 1188–1191, Springer, Berlin, Germany, 2015.
  76. I. Stanica, M. I. Dascalu, C. N. Bodea, and A. D. Bogdan Moldoveanu, “VR job interview simulator: where virtual reality meets artificial intelligence for education,” in Proceedings of the 2018 Zooming Innovation in Consumer Technologies Conference, Novi Sad, Serbia, May 2018.
  77. N. Malandrakis, A. Potamianos, G. Evangelopoulos, and A. Zlatintsi, A Supervised Approach To Movie Emotion Tracking, National Technical University of Athens, Athens, Greece, 2011.
  78. H. H. S. Ip, S. W. L. Wong, D. F. Y. Chan et al., “Enhance emotional and social adaptation skills for children with autism spectrum disorder: a virtual reality enabled approach,” Computers & Education, vol. 117, pp. 1–15, 2018.
  79. J. Wong, “What is virtual reality?” Virtual Reality Information Resources, American Library Association, Chicago, IL, USA, 1998.
  80. I. E. Sutherland, “The ultimate display,” reprinted in Multimedia: From Wagner to Virtual Reality, pp. 506–508, 1965, http://arxiv.org/abs/1601.03459.
  81. I. E. Sutherland, “A head-mounted three dimensional display,” in Proceedings of the December 9–11, 1968, Fall Joint Computer Conference, Part I, pp. 757–764, New York, NY, USA, December 1968.
  82. P. Milgram and F. Kishino, “A taxonomy of mixed reality visual displays,” IEICE Transactions on Information and Systems, vol. E77-D, no. 12, pp. 1321–1329, 1994.
  83. Z. Pan, A. D. Cheok, H. Yang, J. Zhu, and J. Shi, “Virtual reality and mixed reality for virtual learning environments,” Computers & Graphics, vol. 30, no. 1, pp. 20–28, 2006.
  84. M. Mekni and A. Lemieux, “Augmented reality: applications, challenges and future trends,” Applied Computational Science, vol. 20, pp. 205–214, 2014.
  85. M. Billinghurst, A. Clark, and G. Lee, “A survey of augmented reality,” Foundations and Trends in Human-Computer Interaction, vol. 8, no. 2-3, pp. 73–272, 2014.
  86. S. Martin, G. Diaz, E. Sancristobal, R. Gil, M. Castro, and J. Peire, “New technology trends in education: seven years of forecasts and convergence,” Computers & Education, vol. 57, no. 3, pp. 1893–1906, 2011.
  87. Y. Yang, Q. M. J. Wu, W.-L. Zheng, and B.-L. Lu, “EEG-based emotion recognition using hierarchical network with subnetwork nodes,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 2, pp. 408–419, 2018.
  88. T. T. Beemster, J. M. van Velzen, C. A. M. van Bennekom, M. F. Reneman, and M. H. W. Frings-Dresen, “Test-retest reliability, agreement and responsiveness of productivity loss (iPCQ-VR) and healthcare utilization (TiCP-VR) questionnaires for sick workers with chronic musculoskeletal pain,” Journal of Occupational Rehabilitation, vol. 29, no. 1, pp. 91–103, 2019.
  89. X. Liu, J. Zhang, G. Hou, and Z. Wang, “Virtual reality and its application in military,” IOP Conference Series: Earth and Environmental Science, vol. 170, no. 3, 2018.
  90. J. Mcintosh, M. Rodgers, B. Marques, and A. Cadle, The Use of VR for Creating Therapeutic Environments for the Health and Wellbeing of Military Personnel, Their Families and Their Communities, VDE VERLAG GMBH, Berlin, Germany, 2019.
  91. M. Johnson-Glenberg, “Immersive VR and education: embodied design principles that include gesture and hand controls,” Frontiers in Robotics and AI, vol. 5, pp. 1–19, 2018.
  92. Z. Lan, O. Sourina, L. Wang, and Y. Liu, “Real-time EEG-based emotion monitoring using stable features,” The Visual Computer, vol. 32, no. 3, pp. 347–358, 2016.
  93. D. S. Kumaran, S. Y. Ragavendar, A. Aung, and P. Wai, Using EEG-validated Music Emotion Recognition Techniques to Classify Multi-Genre Popular Music for Therapeutic Purposes, Nanyang Technological University, Nanyang Ave, Singapore, 2018.
  94. C. Lin, M. Liu, W. Hsiung, and J. Jhang, “Music emotion recognition based on two-level support vector classification,” in Proceedings-International Conference on Machine Learning and Cybernetics, vol. 1, pp. 375–379, 2017.
  95. S. H. Chen, Y. S. Lee, W. C. Hsieh, and J. C. Wang, “Music emotion recognition using deep Gaussian process,” in Proceedings of the 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, pp. 495–498, Hong Kong, China, December 2015.
  96. Y. An, S. Sun, and S. Wang, “Naive Bayes classifiers for music emotion classification based on lyrics,” in Proceedings-16th IEEE/ACIS International Conference on Computer and Information Science, ICIS 2017, no. 1, pp. 635–638, Wuhan, China, May 2017.
  97. J. Bai, K. Luo, J. Peng et al., “Music emotions recognition by cognitive classification methodologies,” in Proceedings of the 2017 IEEE 16th International Conference on Cognitive Informatics and Cognitive Computing, ICCI∗CC 2017, pp. 121–129, Oxford, UK, July 2017.
  98. R. Nawaz, H. Nisar, and V. V. Yap, “Recognition of useful music for emotion enhancement based on dimensional model,” in Proceedings of the 2nd International Conference on BioSignal Analysis, Processing and Systems (ICBAPS), Kuching, Malaysia, July 2018.
  99. S. A. Y. Al-Galal, I. F. T. Alshaikhli, A. W. B. A. Rahman, and M. A. Dzulkifli, “EEG-based emotion recognition while listening to Quran recitation compared with relaxing music using valence-arousal model,” in Proceedings-2015 4th International Conference on Advanced Computer Science Applications and Technologies, pp. 245–250, Kuala Lumpur, Malaysia, December 2015.
  100. C. Shahnaz, S. B. Masud, and S. M. S. Hasan, “Emotion recognition based on wavelet analysis of Empirical Mode Decomposed EEG signals responsive to music videos,” in Proceedings of the IEEE Region 10 Annual International Conference/TENCON, Singapore, November 2016.
  101. S. W. Byun, S. P. Lee, and H. S. Han, “Feature selection and comparison for the emotion recognition according to music listening,” in Proceedings of the International Conference on Robotics and Automation Sciences, pp. 172–176, Hong Kong, China, August 2017.
  102. J. Xu, F. Ren, and Y. Bao, “EEG emotion classification based on baseline strategy,” in Proceedings of 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems, Nanjing, China, November 2018.
  103. S. Wu, X. Xu, L. Shu, and B. Hu, “Estimation of valence of emotion using two frontal EEG channels,” in Proceedings of the 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1127–1130, Kansas City, MO, USA, November 2017.
  104. H. Ullah, M. Uzair, A. Mahmood, M. Ullah, S. D. Khan, and F. A. Cheikh, “Internal emotion classification using EEG signal with sparse discriminative ensemble,” IEEE Access, vol. 7, pp. 40144–40153, 2019.
  105. H. Dabas, C. Sethi, C. Dua, M. Dalawat, and D. Sethia, “Emotion classification using EEG signals,” in ACM International Conference Proceeding Series, pp. 380–384, Las Vegas, NV, USA, June 2018.
  106. A. H. Krishna, A. B. Sri, K. Y. V. S. Priyanka, S. Taran, and V. Bajaj, “Emotion classification using EEG signals based on tunable-Q wavelet transform,” IET Science, Measurement & Technology, vol. 13, no. 3, pp. 375–380, 2019.
  107. R. Subramanian, J. Wache, M. K. Abadi, R. L. Vieriu, S. Winkler, and N. Sebe, “Ascertain: emotion and personality recognition using commercial sensors,” IEEE Transactions on Affective Computing, vol. 9, no. 2, pp. 147–160, 2018.
  108. M. K. Abadi, R. Subramanian, S. M. Kia, P. Avesani, I. Patras, and N. Sebe, “DECAF: MEG-based multimodal database for decoding affective physiological responses,” IEEE Transactions on Affective Computing, vol. 6, no. 3, pp. 209–222, 2015.
  109. T. H. Li, W. Liu, W. L. Zheng, and B. L. Lu, “Classification of five emotions from EEG and eye movement signals: discrimination ability and stability over time,” in Proceedings of the International IEEE/EMBS Conference on Neural Engineering, San Francisco, CA, USA, March 2019.
  110. T. Song, W. Zheng, P. Song, and Z. Cui, “EEG emotion recognition using dynamical graph convolutional neural networks,” IEEE Transactions on Affective Computing, pp. 1–10, 2018.
  111. N. V. Kimmatkar and V. B. Babu, “Human emotion classification from brain EEG signal using multimodal approach of classifier,” in Proceedings of the ACM International Conference Proceeding Series, pp. 9–13, Galway, Ireland, April 2018.
  112. M. Zangeneh Soroush, K. Maghooli, S. Kamaledin Setarehdan, and A. Motie Nasrabadi, “Emotion classification through nonlinear EEG analysis using machine learning methods,” International Clinical Neuroscience Journal, vol. 5, no. 4, pp. 135–149, 2018.
  113. J. Marín-Morales, J. L. Higuera-Trujillo, A. Greco et al., “Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors,” Scientific Reports, vol. 8, no. 1, pp. 1–15, 2018.
  114. W. Zhang, L. Shu, X. Xu, and D. Liao, “Affective virtual reality system (AVRS): design and ratings of affective VR scenes,” in Proceedings of the 2017 International Conference on Virtual Reality and Visualization, ICVRV 2017, pp. 311–314, Zhengzhou, China, October 2017.
  115. A. Kim, M. Chang, Y. Choi, S. Jeon, and K. Lee, “The effect of immersion on emotional responses to film viewing in a virtual environment,” in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces, pp. 601-602, Reutlingen, Germany, March 2018.
  116. K. Hidaka, H. Qin, and J. Kobayashi, “Preliminary test of affective virtual reality scenes with head mount display for emotion elicitation experiment,” in Proceedings of the 17th International Conference on Control, Automation and Systems (ICCAS), pp. 325–329, Jeju, South Korea, October 2017.
  117. J. Fan, J. W. Wade, A. P. Key, Z. E. Warren, and N. Sarkar, “EEG-based affect and workload recognition in a virtual driving environment for ASD intervention,” IEEE Transactions on Biomedical Engineering, vol. 65, no. 1, pp. 43–51, 2018.
  118. V. Lorenzetti, B. Melo, R. Basílio et al., “Emotion regulation using virtual environments and real-time fMRI neurofeedback,” Frontiers in Neurology, vol. 9, pp. 1–15, 2018.
  119. M. Horvat, M. Dobrinic, M. Novosel, and P. Jercic, “Assessing emotional responses induced in virtual reality using a consumer EEG headset: a preliminary report,” in Proceedings of the 2018 41st International Convention On Information And Communication Technology, Electronics And Microelectronics, Opatija, Croatia, May 2018.
  120. K. Guo, J. Huang, Y. Yang, and X. Xu, “Effect of virtual reality on fear emotion base on EEG signals analysis,” in Proceedings of the 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC), Nanjing, China, May 2019.
  121. S. Koelstra, C. Muhl, M. Soleymani et al., “DEAP: a database for emotion analysis; using physiological signals,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
  122. A. Greco, G. Valenza, L. Citi, and E. P. Scilingo, “Arousal and valence recognition of affective sounds based on electrodermal activity,” IEEE Sensors Journal, vol. 17, no. 3, pp. 716–725, 2017.
  123. M. Soleymani, M. N. Caro, E. M. Schmidt, C. Y. Sha, and Y. H. Yang, “1000 songs for emotional analysis of music,” in CrowdMM 2013-Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia, Barcelona, Spain, October 2013.
  124. X. Q. Huo, W. L. Zheng, and B. L. Lu, “Driving fatigue detection with fusion of EEG and forehead EOG,” in Proceedings of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, July 2016.
  125. M. Soleymani, S. Asghari-Esfeden, M. Pantic, and Y. Fu, “Continuous emotion detection using EEG signals and facial expressions,” in Proceedings of the IEEE International Conference on Multimedia and Expo, Chengdu, China, July 2014.
  126. W. L. Zheng and B. L. Lu, “A multimodal approach to estimating vigilance using EEG and forehead EOG,” Journal of Neural Engineering, vol. 14, no. 2, 2017.
  127. S. Katsigiannis and N. Ramzan, “DREAMER: a database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 1, pp. 98–107, 2018.
  128. A. C. Constantinescu, M. Wolters, A. Moore, and S. E. MacPherson, “A cluster-based approach to selecting representative stimuli from the International Affective Picture System (IAPS) database,” Behavior Research Methods, vol. 49, no. 3, pp. 896–912, 2017.
  129. A. Marchewka, Ł. Żurawski, K. Jednoróg, and A. Grabowska, “The Nencki Affective Picture System (NAPS): introduction to a novel, standardized, wide-range, high-quality, realistic picture database,” Behavior Research Methods, vol. 46, no. 2, pp. 596–610, 2014.
  130. S. M. U. Saeed, S. M. Anwar, M. Majid, and A. M. Bhatti, “Psychological stress measurement using low cost single channel EEG headset,” in Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Abu Dhabi, United Arab Emirates, December 2015.
  131. S. Jerritta, M. Murugappan, R. Nagarajan, and K. Wan, “Physiological signals based human emotion recognition: a review,” in Proceedings-2011 IEEE 7th International Colloquium on Signal Processing and its Applications, Penang, Malaysia, March 2011.
  132. C. Maaoui and A. Pruski, “Emotion recognition through physiological signals for human-machine communication,” Cutting Edge Robotics, vol. 13, 2010.
  133. C. Liu, P. Rani, and N. Sarkar, “An empirical study of machine learning techniques for affect recognition in human-robot interaction,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, Sendai, Japan, September 2005.
  134. G. Rigas, C. D. Katsis, G. Ganiatsas, and D. I. Fotiadis, A User Independent, Biosignal Based, Emotion Recognition Method, Springer, Berlin, Germany, 2007.
  135. C. Zong and M. Chetouani, “Hilbert-Huang transform based physiological signals analysis for emotion recognition,” in Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, ISSPIT, pp. 334–339, Ajman, United Arab Emirates, December 2009.
  136. L. Li and J. H. Chen, “Emotion recognition using physiological signals from multiple subjects,” in Proceedings of the International Conference on Intelligent Information Hiding and Multimedia, pp. 437–446, Pasadena, CA, USA, December 2006.
  137. A. Haag, S. Goronzy, P. Schaich, and J. Williams, “Emotion recognition using bio-sensors: first steps towards an automatic system,” Lecture Notes in Computer Science, Springer, Berlin, Germany, 2004.
  138. J. Kim and E. Andre, “Emotion recognition based on physiological changes in music listening,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 12, pp. 2067–2083, 2008.
  139. F. Nasoz, K. Alvarez, C. L. Lisetti, and N. Finkelstein, “Emotion recognition from physiological signals using wireless sensors for presence technologies,” Cognition, Technology & Work, vol. 6, no. 1, pp. 4–14, 2004.
  140. Y. Li, W. Zheng, Y. Zong, Z. Cui, and T. Zhang, “A Bi-hemisphere domain adversarial neural network model for EEG emotion recognition,” IEEE Transactions on Affective Computing, 2019.
  141. K. Zhou, H. Qin, and J. Kobayashi, “Preliminary test of affective virtual reality scenes with head mount display for emotion elicitation experiment,” in Proceedings of the 17th International Conference on Control, Automation and Systems (ICCAS), pp. 325–329, Jeju, South Korea, October 2017.
  142. M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, “A multimodal database for affect recognition and implicit tagging,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 42–55, 2012.
  143. S. Gilda, H. Zafar, C. Soni, and K. Waghurdekar, “Smart music player integrating facial emotion recognition and music mood recommendation,” in Proceedings of the 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), pp. 154–158, IEEE, Chennai, India, March 2017.
  144. W. Shi and S. Feng, “Research on music emotion classification based on lyrics and audio,” in Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 1154–1159, Chongqing, China, October 2018.
  145. A. V. Iyer, V. Pasad, S. R. Sankhe, and K. Prajapati, “Emotion based mood enhancing music recommendation,” in Proceedings of the 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), pp. 1573–1577, Bangalore, India, May 2017.
  146. Y. P. Lin and T. P. Jung, “Improving EEG-based emotion classification using conditional transfer learning,” Frontiers in Human Neuroscience, vol. 11, pp. 1–11, 2017. View at: Publisher Site | Google Scholar
  147. Y. P. Lin, C. H. Wang, T. P. Jung et al., “EEG-based emotion recognition in music listening,” IEEE Transactions on Bio-Medical Engineering, vol. 57, no. 7, pp. 1798–1806, 2010. View at: Publisher Site | Google Scholar
  148. M. D. Rinderknecht, O. Lambercy, and R. Gassert, “Enhancing simulations with intra-subject variability for improved psychophysical assessments,” PLoS One, vol. 13, no. 12, 2018. View at: Publisher Site | Google Scholar
  149. J. H. Yoon and J. H. Kim, “Wavelet-based statistical noise detection and emotion classification method for improving multimodal emotion recognition,” Journal of IKEEE, vol. 22, no. 4, pp. 1140–1146, 2018. View at: Google Scholar
  150. D. Liao, W. Zhang, G. Liang et al., “Arousal evaluation of VR affective scenes based on HR and SAM,” in 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC), Nanjing, China, May 2019. View at: Publisher Site | Google Scholar
  151. T. Karydis, F. Aguiar, S. L. Foster, and A. Mershin, “Performance characterization of self-calibrating protocols for wearable EEG applications,” in Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments-PETRA ’15, pp. 1–7, Corfu, Greece, July 2015. View at: Publisher Site | Google Scholar

Copyright © 2020 Nazmi Sofian Suhaimi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
