Advances in Artificial Intelligence
Volume 2011 (2011), Article ID 384169, 13 pages
Research Article

Towards a Brain-Sensitive Intelligent Tutoring System: Detecting Emotions from Brainwaves

Alicia Heraz and Claude Frasson

HERON Lab, Computer Science Department, University of Montreal, P.O. Box 6128, Centre Ville Montréal, QC, Canada H3T-1J4

Received 14 May 2010; Accepted 21 February 2011

Academic Editor: Jun Hong

Copyright © 2011 Alicia Heraz and Claude Frasson. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper proposes and evaluates a multiagent system called NORA that predicts emotional attributes from learners' brainwaves within an intelligent tutoring system. Measurements of the learner's electrical brain activity are combined with information about the learner's emotional attributes. An electroencephalogram was used to measure brainwaves, and self-reports were used to measure three emotional dimensions (pleasure, arousal, and dominance), eight emotions occurring during learning (anger, boredom, confusion, contempt, curiosity, disgust, eureka, and frustration), and emotional valence (positive or negative for learning). The system is evaluated on natural data, and it achieves an accuracy of over 63%, significantly outperforming classification using the individual modalities and several other combination schemes.

1. Introduction

All around us are hand-compatible tools, machines, and keyboards, designed to fit the hand. We are not apt to think of them in that light, because it does not occur to us that anyone would bring out a device meant for human hands without first considering the nature of hands. A keyboard machine or musical instrument that called for eight fingers on each hand would draw instant ridicule. Yet we force millions of children into schools designed by people who have never seriously studied the nature and shape of the human brain, schools that not surprisingly prove actively brain antagonistic [1].

An increasing number of studies in the field of artificial intelligence recognize that cognition, motivation, and emotions are three fundamental components of learning [2]. Emotional learning refers to one of the three domains identified in Bloom's taxonomy of learning [3]. The link between emotions, motivation, and learning is quite complex. Some scientists consider emotion a source of motivational energy [1, 4, 5], while others argue that emotion is a complex independent factor that merits direct inquiry in its relation to learning and motivation [6, 7]. Beyond artificial intelligence, education, psychology, and computational linguistics are other research domains paying increasing attention to the link between emotions and learning [8–17]. To collect the data these studies need, many types of sensors have been used, such as a posture-sensing chair, wrist sensors that measure the learner's skin conductivity, cameras that detect the user's concentration, a pressure-sensitive mouse, the IBM BlueEyes camera, and a blood pressure measuring system [18].

These sensors provide physiological data that is streamed to the tutoring software for students in real educational settings. These physiological responses have been linked to various affective states [18].

Unfortunately, all the sensors presented above measure only the physical reactions of the learner. Most of them rely on the learner's external appearance: on being expressive, physically able to move, and interactive in character. This is not always the case. Some learners are physically disabled. In some cultures, learners do not express their emotions in their faces or through movement; they may adopt a nonresponsive attitude when they interact with the machine.

Our research investigates the use of a new sensor: the electroencephalogram (EEG). This device records data directly from the brain by capturing the learner's electrical brain activity during emotional tasks. Learners interact with an intelligent tutoring system, learning while feeling particular emotions. Measuring the electrical brain activity during learning can play an important role in helping students reach the best conditions for learning; in fact, many research works suggest that learning to consciously control our brainwave states could help us increase our ability to concentrate and decrease our stress levels [19].

We present here a framework called NORA to automatically capture, process, and model the electrical brain activity of the learner in order to recognize emotional attributes that occur during learning situations. NORA is a multiagent system that communicates with an intelligent tutoring system to enrich the learner module and give more precise information about the learner's state. It will also help develop a theoretical understanding of the learner's electrical brain activity in learning situations.

This work tackles a number of challenging issues. First, while most prior work on detecting learners' emotions has used popular sensors such as cameras, mice, and wrist sensors, our research focuses on exploring the electrical brain activity with an electroencephalogram, since it is crucial to take into consideration nonexpressive and disabled learners who do not move or express their emotions on their faces. Second, despite the advances in face analysis and gesture recognition, real-time sensing of nonverbal behaviors remains a challenging problem. In this work we demonstrate a multiagent system that can automatically extract brainwaves and features from the interaction between the learner and the ITS, which can be used to detect emotional attributes such as the frequent emotions occurring during learning.

2. Brainwaves and Mental States

In the human brain, each individual neuron communicates with others by sending tiny electrochemical signals. When millions of neurons are activated, each contributing its small electrical current, they generate a signal strong enough to be detected by an EEG device [20, 21].

Commonly, brainwaves are categorized into 4 different frequency bands, or types, known as delta, theta, alpha, and beta waves. Each of these wave types often correlates with different mental states. Table 1 lists the different frequency bands and their associated mental states.

Table 1: Brainwaves and mental states.

Delta frequency band is associated with deep sleep. Theta is dominant during dream sleep, meditation, and creative inspiration. Alpha brainwave is associated with tranquility and relaxation.

The beta frequency band is associated with an alert state of mind, concentration, and mental activity [22]. For more precision, it can be divided into three subbands: Beta1, Beta2, and Beta3 (Table 2).

Table 2: Subbands in beta frequency band.

Thus, the electrical brain activity can be filtered into six frequency bands. Research has shown that, although one brainwave state may predominate at any given time, depending on the activity level of the individual, the remaining five brain states are present in the mix of brainwaves at all times. In other words, while somebody is in an attentive state and exhibiting a Beta2 brainwave pattern, there also exists in that person's brain a component of Beta3, Beta1, Alpha, Theta, and Delta, even though these may be present only at the trace level [23].
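As a concrete illustration, the six-band decomposition can be represented as a simple lookup. The frequency boundaries below are conventional EEG values assumed for this sketch; the authoritative ranges are those of Tables 1 and 2.

```python
# Conventional EEG band boundaries in Hz (assumed for illustration;
# the exact values used by NORA are those of Tables 1 and 2).
BANDS = {
    "delta": (0.5, 4.0),   # deep sleep
    "theta": (4.0, 8.0),   # dream sleep, meditation
    "alpha": (8.0, 12.0),  # tranquility, relaxation
    "beta1": (12.0, 15.0), # relaxed but alert
    "beta2": (15.0, 20.0), # focused mental activity
    "beta3": (20.0, 38.0), # high alertness
}

def dominant_band(amplitudes: dict) -> str:
    """Return the band with the largest amplitude; the other five
    bands remain present in the mix, possibly only at trace level."""
    return max(amplitudes, key=amplitudes.get)
```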

3. Goals of the Study

The goal of this study is to implement and evaluate a system that recognizes emotional attributes from brainwaves during learning situations.

First, the study implements a framework called NORA, a multiagent system that communicates with an intelligent tutoring system interacting with learners while their electrical brain activity is recorded and their emotional states are measured. The agents of NORA are designed to (1) extract data from learners' brainwaves and from their interactions with the ITS and (2) build models that recognize emotional attributes from brainwaves.

Second, the study explores the link between brainwaves and the emotions frequently occurring during learning. In a previous study conducted by D'Mello et al. [24], eight emotions were identified: anger, boredom, confusion, contempt, curiosity, disgust, eureka, and frustration. Anger was defined as a strong feeling of displeasure and usually of antagonism. Boredom was defined as the state of being weary and restless through lack of interest. Confusion was defined as a failure to differentiate similar or related ideas. Contempt was defined as the act of despising, a lack of respect or reverence for something. Curiosity was defined as an active desire to learn or to know. Disgust was defined as marked aversion aroused by something highly distasteful. Eureka was defined as a feeling used to express triumph on a discovery. Frustration was defined as making vain or ineffectual efforts however vigorous, or a deep chronic sense of insecurity and dissatisfaction arising from unresolved problems or unfulfilled needs [24].

Third, the study measures the correlation between brainwaves and the emotional dimensions pleasure, arousal, and dominance. In fact, variance in emotional assessments is accounted for by three major dimensions: affective valence (ranging from pleasant to unpleasant), arousal (ranging from calm to excited), and dominance (or control) [25].

NORA addresses all the challenges mentioned above by extracting the electrical brain activity of a learner during interaction with an ITS and classifying emotional attributes related to the learning situation. The system uses real-time tracking of brainwaves and interaction features to extract variables related to the learner. Brain sensory information and other characteristics of the learner are combined using machine learning techniques.

4. Method

4.1. The Material

To measure the learners' brainwaves, we use Pendant EEG [26], a portable wireless EEG device. Figure 1 shows the Pendant EEG tool kit. To measure the global electrical brain activity, we only need to use three electrodes [27].

Figure 1: Pendant EEG tool kit.

Electrode placement was determined according to the 10–20 International System of Electrode Placement [21]. This system is based on the location of the cerebral cortical regions. The three electrodes were on Cz, A1, and A2 (Figure 2).

Figure 2: The 10–20 International System of electrode placement.

Pendant EEG sends the electrical signals to the computer via an infrared connection. Being light and easy to carry, it is not cumbersome and is easily forgotten by the wearer within a few minutes. A learner wearing Pendant EEG remains completely free to move, as shown in Figure 3.

Figure 3: Learner wearing Pendant EEG.
4.2. The Participants

The participants in this study were seventeen undergraduates (10 women), with ages ranging from 19 to 32 years. They were selected from the Computer Science Department at Université de Montréal. Participation was compensated with 20 Canadian dollars. All participants signed a written consent form, were French speakers with normal or corrected-to-normal vision, and were without any neuropsychological disorder according to self-report. For better conductivity, participants were asked to have clean, dry hair without any gel, oil, or cream applied to the scalp.

4.3. The Framework

NORA is a multiagent system entirely implemented in Java. Our choice of a multiagent architecture is motivated by its numerous advantages: robustness, scalability, flexibility, and rapid integration, since NORA is intended to communicate with any intelligent tutoring system. Figure 4 describes the architecture of the system. The learner wears the EEG device while interacting with the ITS through a graphical user interface (GUI). The ITS includes the three fundamental modules (expert, tutor, and learner). Implementing the different components (including agents) on a JADE platform allows better portability to various kinds of ITS. Both the ITS and the learner communicate with NORA.

Figure 4: NORA overall architecture.

NORA includes six main agents distributed into two groups.
(i) Group 1 is the Data Collection Group. It includes Agent A1: Wave Input (WI), Agent A2: Interact Input (II), and Agent A3: Data Base (DB).
(ii) Group 2 is the Prediction Group. It includes Agent A4: Brain Profile (BP), Agent A5: Emo Wave (EW), and Agent A6: Demo Wave (DW).
The following explains the role of each agent.
(a) Agent WI receives brainwave amplitudes, while Agent II receives information concerning the interaction between the learner and the ITS. Agent DB stores these two categories of input into a central database (CDB) after filtering some variables. For instance, variables coming from brainwave signals and learners' interactions need to be prepared or calculated for further machine learning processes.
(b) Agent EW requests data from the central database and predicts the emotional and cognitive state of the learner from his cerebral waves. EW predicts these states by applying machine learning techniques to CDB data and sends this information at the request of Agent BP.
(c) Agent DW requests data from the central database about the emotional and cognitive state of the learner in order to select a method for inducing cerebral waves able to improve learning. DW sends Agent BP a list of auditory and visual stimuli to apply for inducing specific brainwaves useful for learning.
(d) Agent BP plays the role of a supervising agent; it requests information from Agents EW and DW to determine the cerebral profile of the learner, update the learner module, and select a pedagogical neurostrategy to apply in order to improve learning.

NORA's agents present the following properties (according to agent theory).
(i) They are autonomous and control their internal state and actions.
(ii) They cooperate with other agents using the FIPA agent communication language [28]; Figure 5 shows an example of communication between BP and another subagent in charge of communication with the tutor module.
(iii) They are reactive to the environment (other agents' requests, ITS actions).
(iv) They are proactive: they detect brainwave evolution during learning and are able to induce emotional stimuli to change the emotional state of the learner.

Figure 5: Communication between two agents.

The following scenario illustrates how NORA communicates with the ITS and how the agents coordinate their tasks.
(i) Agents WI and II send their collected data in real time to Agent DB.
(ii) Agent DB stores the data in the central database (CDB).
(iii) Agent DB notifies Agents EW and DW that new data collected from Agents WI and II is available.
(iv) Agents EW and DW send requests to Agent DB to get the data in a certain format to reinforce their prediction models.
(v) Agent BP receives a request for prediction from the learner module in the ITS. The request asks NORA to predict some emotional attributes from the learner's brainwaves.
(vi) Agent BP recruits one or both of Agents EW and DW to answer the request.
(vii) If Agents EW and DW are both selected for a request, they negotiate to optimize their results.
(viii) Agent BP sends the prediction results to the ITS. It also evaluates the prediction and sends it to EW or DW so that they update their belief base (their prediction models).
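The notification/request cycle above can be sketched with plain objects standing in for the JADE/FIPA infrastructure; all class and method names below are illustrative assumptions, not NORA's actual API.

```python
# Minimal sketch of the data-collection / prediction cycle. Agent
# names follow the text; methods and record formats are assumptions.
class AgentDB:
    def __init__(self):
        self.cdb, self.subscribers = [], []

    def store(self, record):                # steps (i)-(iii)
        self.cdb.append(record)
        for agent in self.subscribers:      # notify EW and DW
            agent.notify(len(self.cdb))

    def view(self):                         # step (iv): a data view
        return list(self.cdb)

class PredictorAgent:                       # stands for EW or DW
    def __init__(self, name, db):
        self.name, self.db, self.fresh = name, db, 0
        db.subscribers.append(self)

    def notify(self, n_records):
        self.fresh = n_records              # new data is available

    def predict(self, request):             # steps (vi)-(vii)
        data = self.db.view()
        return f"{self.name}: prediction for {request} on {len(data)} records"

class AgentBP:                              # steps (v) and (viii)
    def __init__(self, predictors):
        self.predictors = predictors

    def handle(self, request):
        return [p.predict(request) for p in self.predictors]

db = AgentDB()
ew, dw = PredictorAgent("EW", db), PredictorAgent("DW", db)
bp = AgentBP([ew, dw])
db.store({"delta": 1.2, "emotion": "curiosity"})   # data from WI and II
```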

4.3.1. Inducing Emotions via IAPS

The International Affective Picture System (IAPS) is a large bank of color pictures with emotion ratings, covering a wide range of semantic categories. IAPS is developed and distributed by the NIMH Center for the Study of Emotion and Attention (CSEA) at the University of Florida to provide a standardized database available to researchers in the study of emotion and attention. IAPS has been characterized primarily along the dimensions of valence, arousal, and dominance. Even though research has shown that the IAPS is useful in the study of discrete emotions, its categorical structure had not been characterized thoroughly. The experiment of Mikels et al. [29] consisted of collecting descriptive emotional category data on subsets of the IAPS in an effort to identify pictures that elicit one discrete emotion more than others. Results revealed multiple emotional categories for the pictures and indicated that this picture set has great potential in the investigation of discrete emotions [29].

This study provided categorical data that allows the IAPS to be used more generally in the study of emotion from a discrete categorical perspective. In accord with previous reports [30], gender differences in the emotional categorization of the IAPS images were minimal. These data show that there are numerous images that elicit single discrete emotions and, furthermore, that overall, a majority of the images elicit either single discrete emotions or emotions that represent a blend of discrete emotions, also in accord with previous reports.

4.3.2. Assessing Emotional Dimensions

To assess the three dimensions of pleasure, arousal, and dominance, we use the Self-Assessment Manikin (SAM), an affective rating system devised by Lang et al. [25]. In this system, a graphic figure depicting values along each of the three dimensions on a continuously varying scale is used to indicate emotional reactions (Figure 6).

Figure 6: Self-Assessment Manikin system.

For the pleasure dimension, SAM ranges from a smiling, happy figure to a frowning, unhappy figure. For the arousal dimension, SAM ranges from an excited, wide-eyed figure to a relaxed, sleepy figure. For the dominance dimension, SAM ranges from a small figure (dominated) to a large figure (in control). Ratings are scored such that 9 represents a high rating on each dimension and 1 represents a low rating on each dimension.

4.3.3. Data from the Learners' Brain

Agent WI samples, digitizes, and filters the original signal recorded by the EEG and divides it into six frequency bands: delta, theta, alpha, beta1, beta2, and beta3 (Figure 7).

Figure 7: A raw EEG signal is filtered into its component frequencies.

After filtering, Agent WI calculates each signal's average amplitude, or strength, and analyzes it to determine which mental state (among those presented in Tables 1 and 2) the learner is in.
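Agent WI's post-filtering step can be sketched as follows, assuming the filtered band signals are already available as sample lists; the band-to-state mapping here is an illustrative assumption based on the states mentioned in Section 2.

```python
# Illustrative band-to-mental-state mapping (the authoritative
# associations are those of Tables 1 and 2).
MENTAL_STATE = {"delta": "deep sleep", "theta": "meditation",
                "alpha": "relaxation", "beta1": "relaxed focus",
                "beta2": "focused activity", "beta3": "high alertness"}

def average_amplitude(samples):
    """Mean absolute amplitude (strength) of one filtered band."""
    return sum(abs(s) for s in samples) / len(samples)

def mental_state(filtered: dict) -> str:
    """filtered maps a band name to that band's list of samples."""
    averages = {band: average_amplitude(s) for band, s in filtered.items()}
    return MENTAL_STATE[max(averages, key=averages.get)]
```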

Table 3: Description of the information collected by Agent II.
4.3.4. Data from the Learners' Interactions with the ITS

Agent II collects data exchanged between learners and the ITS. During their interactions with the ITS, learners provide useful information needed by NORA's agents to build their models for predicting emotional attributes from brainwaves. Table 3 provides an overview of the information channels of the interaction history that are available in Agent DB's CDB. Some information channels are not relevant to this study but will be used in further analysis. The information can be divided into three categories: permanent information, periodic information, and dynamic information.

4.4. Agent DB and the Central Database (CDB)

Agent DB collects real-time data from Agent WI and Agent II. Its main task is to update the central database (CDB) presented in Figure 8. Agent DB also notifies Agents EW and DW when new data is available and answers their requests by providing a view of the data according to each agent's needs.

Figure 8: CDB schema in Agent DB.

The CDB contains the following tables. Not all the tables are considered in this study, but they will serve in future studies.
(i) Learners table: contains permanent and periodic information about the learner (personality traits: PT; learning style: LS).
(ii) Emotions table: contains dynamic information about the learner (emotions felt during learning: EL; emotional dimensions: ED).
(iii) Lessons table: contains the knowledge content followed by the learner.
(iv) Quiz table: contains the learner's results on the questions about the lessons he followed.
(v) Stimuli table: contains sensory stimuli that induce particular emotions. In this study, we used visual stimuli: hundreds of photos from the IAPS. Table 14 gives an example of the stimuli table.
(vi) Brainwave amplitudes table: contains the recording of the electrical brain activity (the brainwave amplitudes) and the associated emotions (EL and ED), quizzes, lessons, and emotional stimuli (image IDs from the IAPS).

4.5. Agent EW and DW Behavior

Agent BP supervises Agent EW and Agent DW. Once Agent BP assigns a task to one or both of these agents, Agents EW and DW communicate to cooperate and make a local choice about whether to perform the task. For example, an agent can decide not to perform a task because its prediction model is weak; instead, it requests more data from Agent DB to perform more training and increase its prediction accuracy.

Both Agents EW and DW follow a Markov decision process (MDP) to select a task and maximize their gain. The MDP provides Agents EW and DW with a mathematical framework for modeling decision making in situations where they have to choose among (1) getting data from Agent DB to build a new prediction model, (2) getting data from Agent DB to strengthen an existing prediction model, and (3) answering Agent BP's request and predicting emotional attributes from brainwaves.
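This MDP behavior can be sketched with value iteration over the three choices. The states, transitions, and rewards below are illustrative assumptions, not the paper's actual model: the agent's state is taken to be the quality of its current prediction model, and the three actions match choices (1)-(3).

```python
# Illustrative MDP: states = model quality, actions = choices (1)-(3).
GAMMA = 0.9
STATES = ["none", "weak", "strong"]
ACTIONS = ["build_model", "strengthen_model", "answer_request"]

# TRANSITION[s][a] = next state; REWARD[s][a] = immediate gain (assumed).
TRANSITION = {
    "none":   {"build_model": "weak",   "strengthen_model": "none",   "answer_request": "none"},
    "weak":   {"build_model": "weak",   "strengthen_model": "strong", "answer_request": "weak"},
    "strong": {"build_model": "strong", "strengthen_model": "strong", "answer_request": "strong"},
}
REWARD = {
    "none":   {"build_model": 1.0, "strengthen_model": 0.0, "answer_request": -1.0},
    "weak":   {"build_model": 0.0, "strengthen_model": 1.0, "answer_request": 0.2},
    "strong": {"build_model": 0.0, "strengthen_model": 0.1, "answer_request": 2.0},
}

def value_iteration(n_iter=100):
    """Compute state values under the optimal policy."""
    v = {s: 0.0 for s in STATES}
    for _ in range(n_iter):
        v = {s: max(REWARD[s][a] + GAMMA * v[TRANSITION[s][a]]
                    for a in ACTIONS) for s in STATES}
    return v

def best_action(state, v):
    """Greedy action with respect to the computed values."""
    return max(ACTIONS, key=lambda a: REWARD[state][a] + GAMMA * v[TRANSITION[state][a]])
```

Under these assumed rewards, an agent with a weak model prefers to strengthen it before answering requests, matching the behavior described in the text.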

The coordination strategy is based on a sequential allocation of tasks over several negotiation cycles. At each step, an agent decides to perform a task or to ignore it; the decision is based on the available resources.

For classification purposes, Agents EW and DW use algorithms from the Weka library [33], a collection of machine learning algorithms intended for data mining. Weka contains tools for data preprocessing, classification, regression, clustering, association rules, and visualization.

4.6. Agent BP

Agent BP assigns tasks and sends feedback to Agent EW and Agent DW. It also communicates with the ITS to get additional information for its decisions. For this function, Agent BP adopts a Markov decision process (MDP) behavior. The model maximizes the expected gain over the long term.

Consider T, the set of tasks given by Agent BP to Agent WI and Agent II. Both WI and II have limited resources to achieve the tasks, and the execution of a task depends on the agent's gain from executing it. Agents WI and II have, respectively, rewarding functions R_WI and R_II defined by R_A(t_i) = g_i, where g_i is the gain of Agent A (WI or II) when executing the task t_i.

The global gain for Agent WI (or II) after executing all its tasks is defined by G_A = Σ_{t_i ∈ T_A} R_A(t_i), where T_A is the set of tasks executed by Agent WI (or II). In the same way, we define the global gain of NORA after executing all the tasks by G_NORA = G_WI + G_II.

5. Results

5.1. Agents WI and II: Data Description

The data considered in our study is located in the brainwave amplitudes table shown in Figure 8. We consider the amplitudes of the four main frequency bands.

For Agent EW, we need to consider all the recordings that associate any of the eight emotions (anger, boredom, confusion, contempt, curiosity, disgust, eureka, and frustration) with any of the four brainwaves (Delta, Theta, Alpha, and Beta).

Agent DW needs to build a model that predicts the value (1 to 9) of each of the emotional dimensions (pleasure, arousal, and dominance) from the four brainwave amplitudes (Delta, Theta, Alpha, and Beta).

Thus, the view that we consider from the brainwave amplitudes table is (a_Delta, a_Theta, a_Alpha, a_Beta, e, P, A, D), where the a values are the amplitudes, e is the emotion, and P, A, and D are the values (from 1 to 9) of the emotional dimensions pleasure, arousal, and dominance.

The sample we collected contains 31,599 records. The amplitudes were normalized over the N collected values, where N is the size of our database multiplied by the four types of brainwaves. Pleasure, arousal, and dominance values are continuous and vary from 1 to 9.

For each dimension, we want to obtain a finite number of classes. We therefore rounded each value up to the nearest value in a set of seventeen discrete values: {1, 1.5, 2, 2.5, …, 8.5, 9}.
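The preprocessing above can be sketched as follows; min-max normalization is assumed here for the amplitudes, since the paper's exact normalization formula is not reproduced, while the half-step discretization follows the text directly.

```python
import math

def normalize(amplitudes):
    """Min-max normalization to [0, 1] (assumed normalization scheme)."""
    lo, hi = min(amplitudes), max(amplitudes)
    return [(a - lo) / (hi - lo) for a in amplitudes]

def to_class(rating: float) -> float:
    """Round a continuous 1-9 rating up to the nearest value in the
    seventeen discrete classes {1, 1.5, 2, ..., 8.5, 9}."""
    return min(9.0, max(1.0, math.ceil(rating * 2) / 2))
```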

Table 4 shows the percentage of observations for each of the eight emotions occurring during learning.

Table 4: Percentage of occurrences for each emotion.

Percentages of emotions are calculated over the 19,672 relevant instances; we do not consider the 11,927 instances with other or undefined emotions.

Table 5 shows the frequencies and percentages of observations for the three major emotional dimensions as a function of rating classes.

Table 5: Percentages of observations of the three major emotional dimensions as a function of rating classes.

Due to a low frequency of observations, the classes below 2 (1, 1.5) and above 7.5 (8, 8.5, 9) were not included in the subsequent analyses. This data-cleaning procedure yielded reliable data for the rating classes 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, and 7.5.

This resulted in a further reduction of the database from 31,599 to 29,740 records for the pleasure dimension, with 4,528 instances of the {2, 2.5} classes, 4,393 of {3, 3.5}, 3,546 of {4, 4.5}, 5,458 of {5, 5.5}, 6,292 of {6, 6.5}, and 5,523 of {7, 7.5}. For arousal, the same database was reduced to 31,528 records, with 1,825 instances of the {2, 2.5} classes, 5,345 of {3, 3.5}, 8,479 of {4, 4.5}, 9,893 of {5, 5.5}, 5,412 of {6, 6.5}, and 574 of {7, 7.5}. For dominance, the database size remained the same (31,599), since all pictures seen by participants were rated between 2 and 7.5, with 778 instances of the {2, 2.5} classes, 5,519 of {3, 3.5}, 5,779 of {4, 4.5}, 11,124 of {5, 5.5}, 8,018 of {6, 6.5}, and 381 of {7, 7.5}.

5.2. Agent EW: Predicting Emotions from Brainwaves

Our database contains noise due to the limitations of the material used and the learners' response time. We took this into consideration in our choice of training algorithms. Among those that are robust to noisy training data and effective when that data is large, IBk gave us the best results. IBk is an implementation of k-nearest neighbors [34]: given a query point, it finds the k training instances closest to it. Agent EW used 70% of the dataset for training and 30% for testing. The k-nearest neighbors algorithm requires choosing the parameter k; Figure 9 shows the results for different k values. The best percentage of correctly classified instances is 82.27%.

Figure 9: Results among k values.
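A minimal pure-Python sketch of the k-nearest-neighbors classification follows (the study itself used Weka's IBk); the training instances below are illustrative, pairing four normalized band amplitudes with an emotion label.

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    """train: list of (features, label) pairs; query: feature list.
    Returns the majority label among the k nearest training points."""
    nearest = sorted(train, key=lambda x: math.dist(x[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Illustrative data: [delta, theta, alpha, beta] amplitudes -> emotion.
train = [([0.9, 0.1, 0.2, 0.3], "curiosity"),
         ([0.8, 0.2, 0.1, 0.4], "curiosity"),
         ([0.1, 0.9, 0.8, 0.2], "boredom"),
         ([0.2, 0.8, 0.9, 0.1], "boredom")]
```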

Table 6 shows details on the accuracy by emotion.

Table 6: Accuracy by emotion.

Classification precision varies between 79.2% and 84%. Recall varies between 78.7% and 82.2%. The performance of the classifier, given by the F-measure (Table 7), is computed as F = 2 × (precision × recall) / (precision + recall).

Table 7: Confusion matrix.

The performance of our classifier varies between 79.2% and 83.5%. The kappa statistic measures the proportion of agreement between two raters with correction for chance. Kappa scores ranging from 0.4 to 0.6 are considered fair, scores from 0.6 to 0.75 good, and scores greater than 0.75 excellent [35]. Our kappa value is 0.78, which shows an excellent agreement between real and predicted emotional states.

To give more weight to emotional states with few instances, we also use Youden's J-index [36], computed here as the mean of the per-class recall values over the Card(EL) = 8 emotional states in the list. The value J-index = 81.2% is close to our classification accuracy (82.27%). This means that the prediction is almost the same across the different emotional states, as can be seen in Table 7: the greatest values are found on the diagonal of the matrix, which means that the majority of emotional states are correctly predicted.
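The evaluation metrics used above can all be computed from the confusion matrix (rows = actual class, columns = predicted class). Treating the J-index as the mean per-class recall is an assumption, consistent with the reported value being close to overall accuracy.

```python
def precision_recall_f(cm, i):
    """Precision, recall, and F-measure for class i of matrix cm."""
    n = len(cm)
    tp = cm[i][i]
    fp = sum(cm[r][i] for r in range(n)) - tp
    fn = sum(cm[i]) - tp
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance."""
    k, n = len(cm), sum(map(sum, cm))
    po = sum(cm[i][i] for i in range(k)) / n
    pe = sum(sum(cm[i]) * sum(cm[r][i] for r in range(k))
             for i in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

def j_index(cm):
    """Mean per-class recall (assumed definition of the J-index)."""
    recalls = [precision_recall_f(cm, i)[1] for i in range(len(cm))]
    return sum(recalls) / len(recalls)
```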

5.3. Agent DW: Emotional Dimensions from Brainwaves

Preliminary analyses revealed significant correlations for all three dimensions. Spearman correlations in a 4-by-3 matrix are presented in Table 8. The Spearman rank correlation measures the correlation between two sequences of values: the two sequences are ranked separately, and the differences in rank are calculated at each position. Each of the brainwave features predicted one or more of the three major emotional dimensions, with some variation among the dimensions.

Table 8: Correlations between brainwaves and emotional dimensions.

Given the high number of degrees of freedom, all of the brainwave features showed a weak but significant correlation with dominance. The Delta brainwave had a negative correlation with pleasure and dominance. Theta showed a positive correlation with arousal but a negative correlation with dominance. Alpha also had a weak but significant correlation, negative with arousal and positive with dominance. Beta correlated with all three major dimensions: positively with arousal and negatively with pleasure and dominance.
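The Spearman rank correlation used in these analyses can be sketched in pure Python; ties are ignored for brevity (a full treatment would assign averaged ranks to tied values).

```python
def ranks(values):
    """Rank positions 1..n of each value (no tie correction)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```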

Agent DW performed multiple regression analyses, one for each of the three emotional dimensions, with the four brainwave features as predictors. Significant overall relationships were found for all three emotional dimensions: pleasure, arousal, and dominance. The same significance level was adopted in all subsequent statistical tests.
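A multiple regression of this kind can be sketched in pure Python by solving the normal equations (XᵀX)β = Xᵀy with Gaussian elimination (the study would have used standard statistical software; the data shapes below are illustrative).

```python
def ols(X, y):
    """Ordinary least squares: X is a list of predictor rows
    (e.g., four brainwave amplitudes), y the response values.
    Returns [intercept, b1, ..., bk]."""
    X = [[1.0] + row for row in X]            # prepend intercept column
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting on (X'X)b = X'y.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[col])]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return beta
```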

5.3.1. Pleasure

The ANOVA table reports a significant F-statistic = 21.67, indicating that using the model is better than guessing the mean, with β-weights of −0.014, −0.007, 0.012, and −0.048 for the Delta, Theta, Alpha, and Beta brainwaves, respectively (Table 9). Only 0.3% of the variation in pleasure is explained by the model (adjusted R² = 0.003). The Beta brainwave feature was statistically significant, but the Delta brainwave was not.

Table 9: Coefficient of the regression line to predict pleasure from brainwaves.

The Theta brainwave does not contribute much to the model, while the Beta brainwave contributes the most, with the largest absolute standardized coefficient.

5.3.2. Arousal

The ANOVA table reports a significant F-statistic, indicating that using the model is better than guessing the mean, with β-weights of −0.018, 0.110, −0.082, and 0.033 for the Delta, Theta, Alpha, and Beta brainwaves, respectively (Table 10). Only 0.5% of the variation in arousal is explained by the model. The Beta, Alpha, and Theta features were statistically significant, but the Delta brainwave was not.

Table 10: Coefficient of the regression line to predict arousal from brainwaves.

The two largest absolute standardized coefficients belong to the Theta and Alpha brainwaves, which means that they contribute more to the model than the other brainwaves.

5.3.3. Dominance

The ANOVA table reports a significant F-statistic, indicating that using the model is better than guessing the mean, with β-weights of −0.020, −0.040, 0.046, and −0.086 for the Delta, Theta, Alpha, and Beta brainwaves, respectively (Table 11). Only 0.3% of the variation in dominance is explained by the model. The Beta brainwave feature was statistically significant, but the Delta brainwave was not.

Table 11: Coefficient of the regression line to predict dominance from brainwaves.

The Theta brainwave does not contribute much to the model, while the Beta brainwave contributes the most, with the largest absolute standardized coefficient.

5.3.4. Classifying Emotional Dimensions from Brainwaves

The multiple regression analyses presented in the previous section produced significant models for all three emotional dimensions: pleasure, arousal, and dominance. Agent DW tested several classification algorithms: k-nearest neighbors [34], the J48 decision tree [37], a bagging predictor [38], and classification via regression [39] with a decision stump as the base learner. Several other algorithms were tried, but few of them gave good results.

Table 12 shows the overall classification results using k-fold cross-validation () for the various classifiers when evaluated on the data consisting of the four brainwave features for the emotional dimensions pleasure, arousal, and dominance. In k-fold cross-validation, the data set (N) is divided into k subsets of approximately equal size (N/k). The classifier is trained on k − 1 of the subsets and evaluated on the remaining subset, and accuracy statistics are measured. The process is repeated k times, and the overall accuracy is the average over the k iterations.
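The procedure just described can be sketched as follows; `train` and `evaluate` are placeholders for any classifier-fitting and accuracy-scoring functions, not names from the paper:

```python
def k_fold_accuracy(data, k, train, evaluate):
    """k-fold cross-validation: train on k-1 folds, test on the held-out
    fold, repeat k times, and average the accuracies."""
    folds = [data[i::k] for i in range(k)]  # k subsets of ~equal size
    scores = []
    for i in range(k):
        held_out = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = train(training)
        scores.append(evaluate(model, held_out))
    return sum(scores) / k  # overall accuracy
```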

The various classification algorithms were successful in detecting pleasure, arousal, and dominance; classification accuracy varies from 58.54% to 75.16%. The kappa statistic measures the proportion of agreement between two raters with correction for chance. It is fair for the Classification via Regression algorithm (0.53) but good for the other algorithms (Nearest Neighbor, J48 decision tree, and Bagging), for which it varies between 0.64 and 0.72. Kappa scores ranging from 0.4 to 0.6 are generally considered fair, 0.6–0.75 good, and scores greater than 0.75 excellent. Results are shown in Table 12.
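The chance-corrected agreement described here (Cohen's kappa) can be computed directly from a confusion matrix; the function below is a minimal illustration, not tied to the paper's tooling:

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix (rows: one rater,
    columns: the other): (p_observed - p_chance) / (1 - p_chance)."""
    n = sum(sum(row) for row in matrix)
    p_observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(col) for col in zip(*matrix)]
    # Chance agreement: product of marginal proportions, summed over classes.
    p_chance = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)
```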

Table 12: Comparison of classification techniques results.
Table 13: Precision by emotional dimension.
Table 14: Excerpt from the IAPS database.

The nearest neighbor and Bagging techniques provided the highest accuracy for each of the three emotional dimensions (74% for pleasure and arousal and 75% for dominance). These two techniques yielded globally the same kappa value (0.71), which is a good result. While the classification accuracies and kappa scores for the various classification algorithms are useful in obtaining an overview of the reliability of detecting the three major emotional dimensions from brainwave features, they do not provide any insight into class-level accuracies.

Table 13 lists the precision, recall, and F-measure scores as metrics for assessing class level accuracy for the three emotional dimensions.

Precision and recall are standard metrics for assessing the discriminability of a given class. The precision for class C is the proportion of samples that truly belong to class C among all the samples that were classified as class C. The recall (sensitivity, or true positive rate) for class C measures how well the learning scheme detects that class: the proportion of samples truly in class C that were classified as C. Finally, the F-measure combines precision and recall into a single performance metric (their harmonic mean).
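All three metrics follow directly from the per-class counts of true positives, false positives, and false negatives. A minimal sketch (the function name is ours):

```python
def precision_recall_f(tp, fp, fn):
    """Per-class metrics from true positives, false positives, and
    false negatives."""
    precision = tp / (tp + fp)  # of samples labelled C, how many truly are C
    recall = tp / (tp + fn)     # of true C samples, how many were found
    f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f_measure
```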

Table 13 indicates that the precision for the different rating classes is highly similar (varying between 0.69 and 0.83). It also shows that the recall is globally similar across the different rating classes (between 0.66 and 0.79), and that the F-measure for the different rating classes is likewise quantitatively similar.

To give more weight to the rating classes with minority instances, we also computed Youden's J-index [36] for each of pleasure, arousal, and dominance. With the nearest neighbor algorithm, the J-index values for pleasure, arousal, and dominance are, respectively, 73.5%, 74.6%, and 74%. They are close to our classification predictions shown in Table 13 for the same algorithm (73.55%, 74.86%, and 75.16%). These results support the claim that all rating classes for the three emotional dimensions can be automatically detected with good accuracy by the nearest neighbor algorithm.
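Youden's J statistic for a class is the sum of sensitivity and specificity minus one, which is why it weights minority classes equally with majority ones. A minimal sketch with hypothetical counts:

```python
def youden_j(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1, from true/false
    positive and negative counts for one class."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity + specificity - 1
```

A perfect classifier scores 1, and one no better than chance scores 0, regardless of how imbalanced the classes are.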

6. Discussion and Conclusion

NORA is a multiagent system that analyzes the electrical brain activity occurring during learning. Agent WI, Agent II, and Agent DB capture, filter, and store learners' brainwaves along with other emotional attributes. Agent EW, Agent DW, and Agent BP predict emotions and emotional dimensions from brainwaves and communicate the result to the ITS.

The evaluation of NORA demonstrates the effectiveness of using an EEG to measure learners' brainwaves and to predict emotional attributes. The framework implemented for the study allowed us to record the electrical brain activity of learners while they were exposed to emotional stimuli. These data were used to predict the eight emotions frequently occurring during learning and the three emotional dimensions. The system was evaluated on natural data, and it achieves an accuracy of over 63%, significantly outperforming classification using the individual modalities and several other combination schemes.

We acknowledge that the use of EEG has some potential limitations. Any physical movement by the learner can cause noise that is detected by the electrodes and interpreted as brain activity by Pendant EEG. Nevertheless, we believe that the instructions given to participants (to remain very still), the number of participants (seventeen), and the database size (31,599 recordings) considerably reduce the impact of this potential noise.

Agent EW uses the kNN algorithm to predict emotions from brainwaves. It yielded a classification accuracy of 82.27% with a good kappa statistic value (78%), which shows an acceptable agreement between the predicted and the actual emotion.

The multiple regression analyses produced significant predictive models for the pleasure, arousal, and dominance degrees. The Delta brainwave showed a significant negative correlation with pleasure and dominance. The Theta and Beta brainwaves showed a significant negative correlation with pleasure and dominance (strongly negative for Beta) but a positive correlation with arousal (strongly positive for Theta). The Alpha brainwave had a significant negative correlation with arousal and dominance.

Agent DW predicts the emotional dimensions from brainwaves with good accuracy: 73.55% for pleasure, 74.86% for arousal, and 75.16% for dominance. The kappa statistic values are good: 71%, 72%, and 71%, respectively.

The major advantage of using brainwaves for detecting emotional attributes lies in its effectiveness with disabled, impassive, or taciturn learners. If the grounding criterion hypothesis holds in future replications, it would indicate how to help those learners control their emotions during learning using only EEG.

If the method described above proves effective in identifying learners' emotional attributes, we can direct our focus to a second stage. An ITS would select an adequate pedagogical strategy that adapts to the learner's emotional dimensions in addition to cognitive states. Beyond the traditional pedagogical strategies (learning by explanation, by example, by contradiction, increasing motivation, etc.), the ITS could use information about the learner's brainwave state to select a strategy able to improve the emotional state and lead to better receptivity and attention. The pedagogical selection would be guided by a more precise learner model that includes the elements presented in this paper. This adaptation would increase the bandwidth of communication and allow an ITS to respond at a better level.


Acknowledgments

The authors acknowledge the support of the FQRSC (Fonds Québécois de la Recherche sur la Société et la Culture) and NSERC (Natural Sciences and Engineering Research Council) for this work.

References
  1. S. Harter, “A new self-report scale of intrinsic versus extrinsic orientation in the classroom: motivational and informational components,” Developmental Psychology, vol. 17, no. 3, pp. 300–312, 1981.
  2. R. Snow, L. Corno, and D. Jackson, “Individual differences in affective and cognitive functions,” in Handbook of Educational Psychology, D. C. Berliner and R. C. Calfee, Eds., pp. 243–310, Macmillan, New York, NY, USA, 1996.
  3. B. S. Bloom, Taxonomy of Educational Objectives. Handbook I: Cognitive Domain, David McKay, New York, NY, USA, 1956.
  4. M. Miserandino, “Children who do well in school: individual differences in perceived competence and autonomy in above-average children,” Journal of Educational Psychology, vol. 88, pp. 203–214, 1996.
  5. D. Stipek, Motivation to Learn: From Theory to Practice, Allyn and Bacon, Boston, Mass, USA, 3rd edition, 1998.
  6. M. E. Ford, Motivating Humans: Goals, Emotions, and Personal Agency Beliefs, Sage, London, UK, 1992.
  7. D. K. Meyer and J. C. Turner, “Discovering emotion in classroom motivation research,” Educational Psychologist, vol. 37, no. 2, pp. 107–114, 2002.
  8. C. Breazeal, Designing Sociable Robots, MIT Press, Cambridge, Mass, USA, 2003.
  9. C. Conati, “Probabilistic assessment of user's emotions in educational games,” Applied Artificial Intelligence, vol. 16, no. 7-8, pp. 555–575, 2002.
  10. S. D. Craig, A. C. Graesser, J. Sullins, and B. Gholson, “Affect and learning: an exploratory look into the role of affect in learning,” Journal of Educational Media, vol. 29, pp. 241–250, 2004.
  11. A. De Vicente and H. Pain, “Informing the detection of students motivational state : an empirical study,” in Proceedings of the 6th International Conference on Intelligent Tutoring Systems, S. A. Cerri, G. Gouarderes, and F. Paraguacu, Eds., pp. 933–943, Springer, Berlin, Germany, 2002.
  12. B. Kort, R. Reilly, and R. Picard, “An affective model of interplay between emotions and learning: reengineering educational pedagogy—building a learning companion,” in Proceedings IEEE International Conference on Advanced Learning Technology: Issues, Achievements and Challenges, T. Okamoto, R. Hartley, Kinshuk, and J. P. Klus, Eds., pp. 43–48, IEEE Computer Society, Madison, Wis, USA, 2001.
  13. M. R. Lepper and M. Woolverton, “The wisdom of practice: lessons learned from the study of highly effective tutors,” in Improving Academic Achievement: Impact of Psychological Factors on Education, J. Aronson, Ed., pp. 135–158, Academic Press, Orlando, Fla, USA, 2002.
  14. J. C. Lester, S. G. Towns, and P. J. FitzGerald, “Achieving affective impact: visual emotive communication in lifelike pedagogical agents,” The International Journal of Artificial Intelligence in Education, vol. 10, no. 3-4, pp. 278–291, 1999.
  15. D. J. Litman and K. Forbes-Riley, “Predicting student emotions in computer-human tutoring dialogues,” in Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pp. 352–359, Association for Computational Linguistics, East Stroudsburg, Pa, USA, 2004.
  16. R. W. Picard, Affective Computing, MIT Press, Cambridge, Mass, USA, 1997.
  17. N. Wang, W. L. Johnson, R. Mayer, P. Rizzzo, E. Shaw, and H. Collins, “The politeness effect: pedagogical agents and learning gains,” in Artificial Intelligence in Education, C. Looi, G. McCalla, B. Bredeweg, and J. Breuker, Eds., pp. 686–693, IOS Press, Amsterdam, The Netherlands, 2005.
  18. I. Arroyo, D. G. Cooper, W. Burleson, B. Woolf, K. Muldner, and R. Christopherson, “Emotion sensors go to school,” in Proceedings of the 14th International Conference on Artificial Intelligence in Education, Brighton, UK, 2009.
  19. S. L. Norris and M. Currieri, “Performance enhancement training through neurofeedback,” in Introduction to Quantitative EEG and Neurofeedback, J. R. Evans and A. Abarbanel, Eds., Academic Press, New York, NY, USA, 1999.
  20. M. F. Bear, B. W. Connors, and M. A. Paradiso, Neuroscience: Exploring the Brain, Lippincott Williams & Wilkins, Baltimore, Md, USA, 2nd edition, 2001.
  21. D. S. Cantor, “An overview of quantitative EEG and its applications to neurofeedback,” in Introduction to Quantitative EEG and Neurofeedback, J. R. Evans and A. Abarbanel, Eds., Academic Press, San Diego, Calif, USA, 1999.
  22. A. Wise, The High Performance Mind, G. P. Putnam's Sons, New York, NY, USA, 1995.
  23. F. Angeleri, S. Butler, S. Glaquinto, and J. Majkowski, Analysis of the Electrical Activity of the Brain, John Wiley & Sons, New York, NY, USA, 1997.
  24. S. K. D'Mello, S. D. Craig, B. Gholson, S. Franklin, R. W. Picard, and A. C. Graesser, “Integrating affect sensors in an intelligent tutoring system,” in Proceedings of the Affective Interactions: The Computer in the Affective Loop Workshop at Intelligent User Interface 2005 (IUI '05), AMC Press, New York, NY, USA, 2005.
  25. P. J. Lang, M. M. Bradley, and B. N. Cuthbert, “International affective picture system (IAPS):affective ratings of pictures and instruction manual,” Tech. Rep. A-6, University of Florida, Gainesville, Fla, USA, 2005.
  26. B. McMillan, Pendant EEG, Pocket Neurobics, 2006, http://www.pocket-neurobics.com/.
  27. J. Lévesque, Ph.D. thesis, 2006, BCIA-EEG.
  28. FIPA, “Foundation for Intelligent Physical Agents,” Specifications, 2009, http://www.fipa.org/.
  29. J. A. Mikels, B. L. Fredrickson, G. R. Larkin, C. M. Lindberg, S. J. Maglio, and P. A. Reuter-Lorenz, “Emotional category data on images from the international affective picture system,” Behavior Research Methods, vol. 37, no. 4, pp. 626–630, 2005.
  30. M. M. Bradley, M. Codispoti, B. N. Cuthbert, and P. J. Lang, “Emotion and motivation: defensive and appetitive reactions in picture processing,” Emotion, vol. 1, no. 3, pp. 276–298, 2001.
  31. R. R. McCrae and P. T. Costa Jr., “A Five-Factor Theory of personality,” in Handbook of Personality: Theory and Research, L. A. Pervin and O. P. John, Eds., 1999.
  32. T. F. Hawk and A. J. Shah, “Using learning style instruments to enhance student learning,” Decision Science Journal of Innovative Education, vol. 5, pp. 1–19, 2007.
  33. I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, Morgan Kaufmann, San Francisco, Calif, USA, 2005.
  34. B. V. Dasarathy, Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques, IEEE Computer Society Press, 1991.
  35. C. Robson, Real World Research: A Resource for Social Scientists and Practitioner-Researchers, Blackwell, Hoboken, NJ, USA, 1993.
  36. W. J. Youden, “How to evaluate accuracy,” Materials Research and Standards, ASTM, 1961.
  37. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers, San Mateo, Calif, USA, 1993.
  38. L. Breiman, “Bagging predictors,” Machine Learning, vol. 24, no. 2, pp. 123–140, 1996.
  39. T. W. Anderson and M. Anderson, An Introduction to Multivariate Statistical Analysis, John Wiley & Sons, New York, NY, USA, 3rd edition, 2003.