Mathematical Problems in Engineering
Volume 2020 | Article ID 8931486 | https://doi.org/10.1155/2020/8931486
Research Article | Open Access
Special Issue: Advanced Intelligent Fuzzy Systems Modeling Technologies for Smart Cities

Yan Ding, Xuemei Chen, Shan Zhong, Li Liu, "Emotion Analysis of College Students Using a Fuzzy Support Vector Machine", Mathematical Problems in Engineering, vol. 2020, Article ID 8931486, 11 pages, 2020. https://doi.org/10.1155/2020/8931486

Emotion Analysis of College Students Using a Fuzzy Support Vector Machine

Guest Editor: Yi-Zhang Jiang
Received: 26 Jul 2020
Accepted: 24 Aug 2020
Published: 10 Sep 2020

Abstract

With the rapid development of society, the number of college students in China continues to rise. College students face pressure from society, school, and family, yet often cannot find suitable ways to cope. As a result, their psychological problems have become diversified and complicated, and the mental health of college students is an increasingly serious issue that requires urgent attention. This article monitors college students' mental health by identifying and analyzing their emotions, using electroencephalogram (EEG) signals to determine the emotional state. First, features are extracted from the different rhythms of the EEG data; then a fuzzy support vector machine (FSVM) is used to classify each rhythm. Finally, a decision fusion mechanism based on the Dempster-Shafer (D-S) evidence combination theory fuses the per-rhythm classification results and outputs the final emotion recognition result. The contribution of this research is threefold. First, multiple features are used, which improves the efficiency of data use. Second, an FSVM classifier with higher noise resistance is adopted, which improves the recognition rate of the model. Third, the decision fusion mechanism based on the D-S evidence combination theory takes the classification result of every feature into account, so that the individual results assist each other and are integrated organically. The experiments compare emotion recognition based on single rhythms, multirhythm combinations, and multirhythm fusion. The experimental results show that the proposed emotion recognition method effectively improves recognition performance and has good practical value for the emotion recognition of college students.

1. Introduction

On contemporary university campuses, the number of college students with mental illnesses is increasing day by day. Many college students have difficulty adapting to college life, and a series of mental health problems arise that seriously affect their normal study and life. At present, mental health prevention for college students relies mainly on inquiries by counselors and class teachers. This approach has the following problems. (1) The number of teachers is far smaller than the number of students with potential mental health problems. In addition, university teachers' work is complicated and their workload is heavy, leaving limited energy and time, so the prevention and treatment of mental health problems among college students often become a mere formality. (2) The main instrument currently used in mental health screening is the questionnaire survey, which has difficulty identifying students with real psychological problems. This has created the need for more intelligent methods of mental health investigation and prevention. The emotional changes of college students can reflect their mental health to a certain extent: if a student remains in a sad state for a long time, this indicates some psychological problem, and a teacher can then provide mental health counseling in time. Therefore, emotion recognition for college students is of great significance.

Emotion is a very complex psychological state produced by human beings in specific environments, and it is often associated with mood, temperament, and motivation [1]. People feel their own emotional states at all times; emotion safeguards human survival and affects our learning, decision-making, and memory [2]. Emotion is a person's attitude toward, and experience of, objective situations or things; it is a physical and psychological state produced by a person's senses, thoughts, and behaviors [3]. Emotion occupies an important position in human society. As an advanced function of the human brain, it ensures people's adaptability in different environments, and it can characterize personality and psychopathology [4]. Generally, positive emotions fill people with strength, vitality, and energy and are thus beneficial to physical and mental health and to bodily recovery. Neutral emotion is an important criterion of personal psychological stability. Negative emotions usually make a person depressed; remaining in this state for a long time affects one's working condition and endangers physical and mental health.

Therefore, emotion recognition and monitoring has become a necessary means of addressing human mental illness. Human emotion prediction also has important research significance and application value in areas such as mental health evaluation. For example, in medicine, the relationships between emotion, stress, and other diseases are studied by analyzing physiological signals such as the EEG in different emotional states [5, 6], which may open new ways to treat similar mental illnesses and aid recovery. In education, a distance teaching platform based on emotion recognition can become more humane by obtaining feedback from students [7]. In entertainment, intelligent emotion-sensing robots can bring more fun to life. As research on emotion recognition deepens, the areas in which it serves humans will become more extensive. In terms of research material, emotion recognition can be divided into speech-based [8, 9], video-based [10, 11], image-based [12–14], text-based [15, 16], physiological-signal-based [17, 18], and multimodal [19, 20] emotion recognition. In terms of classifiers, methods are mainly based on machine learning [21–28] or deep learning [29, 30]. Machine learning algorithms have been successfully applied to the recognition of various physiological signals [31–36], while the application of deep learning algorithms is still being explored.

This article is devoted to emotion recognition based on EEG signals. In sentiment analysis based on EEG signals, there are two main approaches: linear analysis methods and nonlinear analysis methods. Representative studies are shown in Table 1.


Table 1: Representative studies of EEG-based emotion recognition.

| Method | Feature | Representative research | Recognition rate (%) |
| Linear analysis methods (Pearson correlation, magnitude-squared coherence, autoregressive model, cumulative energy algorithm, time-frequency analysis, etc.) | EEG waveform characteristics (such as amplitude and phase), rhythm-wave average power, power spectral density, band energy, root mean square of wavelet coefficients, etc. | Reference [6] | 60.42 |
| | | Reference [7] | 62.50 |
| | | Reference [37] | 88.51 |
| Nonlinear analysis methods (mutual information [38], correlation dimension, Lempel–Ziv (LZ) complexity, recurrence plots, and entropy analysis [39]) | Entropy, fractal dimension, correlation dimension, C0 complexity, LZ complexity, Hurst exponent, largest Lyapunov exponent, etc. | Reference [40] | 80.40 |
| | | Reference [41] | 92.50 |
| | | Reference [42] | 86.65 |

The abovementioned EEG-based emotion recognition methods do not consider the characteristics of the different rhythms in the EEG signal but process the EEG uniformly, thereby ignoring the different effects of different rhythms on emotion recognition. Aiming at this problem, this paper proposes an emotion recognition method based on the fusion of multirhythm results. The contributions of this research are summarized as follows:
(1) In order to fully exploit the information carried by the different rhythms in EEG signals, this paper extracts and classifies multiple rhythms separately. This method makes full use of the information of each rhythm and is better targeted.
(2) Aiming at the large feature space and the difficulty of integrating multiple rhythms in emotion recognition, this study uses the D-S evidence combination theory to merge the classification results of the individual rhythms into the final classification result. This result-level fusion not only obtains more accurate results than a single rhythm or a simple concatenation of multiple rhythms but also reduces the dimension of the feature space and sidesteps the problem of how to integrate multiple rhythms.
(3) This study uses the FSVM classifier. Owing to its fuzzy membership mechanism, this classifier has better noise immunity than other classic classifiers and is more suitable for noisy, real production environments.

2. Emotion Recognition Based on EEG Signals

2.1. Emotion Recognition Process Based on EEG Signal

The process of emotion recognition is essentially a pattern recognition process and is generally divided into three steps: data collection and preprocessing, feature extraction, and model training and recognition. Figure 1 is a flowchart of emotion recognition. In a supervised machine learning process, the acquired sample set is first labeled, divided into categories, and split into a training set and a test set. Next, both sets are preprocessed and features are extracted. Finally, the model is trained on the training set, and the trained model performs classification and decision-making. During recognition, the test-set features are fed to the trained model for prediction; the output emotion category label is the recognition result, completing the whole process of emotion recognition.
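To make the flow concrete, the following minimal sketch mirrors the three steps above in scikit-learn; the feature extractor and the synthetic trials are placeholders assumed for illustration, not the pipeline actually used in this paper.

```python
# Minimal sketch of the supervised recognition pipeline described above.
# The feature extractor and the synthetic trials/labels are placeholders
# (assumptions), not the authors' actual data or code.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def extract_features(trials):
    """Placeholder: map each raw EEG trial to a small feature vector."""
    return np.column_stack([trials.mean(axis=1), trials.std(axis=1)])

trials = np.random.randn(40, 128 * 60)     # 40 synthetic 60 s trials at 128 Hz
labels = np.random.randint(0, 4, size=40)  # 4 emotion classes

# Step 1: split the labeled sample set into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    extract_features(trials), labels, test_size=0.3)

# Steps 2-3: train the model, then predict on the test-set features.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("recognition rate:", clf.score(X_test, y_test))
```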

2.2. Introduction to EEG Signals

Depending on the classification criterion, EEG signals can be divided into the categories shown in Table 2.


Table 2: Classification of EEG signals.

| Classification basis | Classification details |
| Frequency | (1) δ (0.1–4 Hz); (2) θ (4–8 Hz); (3) α (8–13 Hz); (4) β (13–30 Hz); (5) γ (31–100 Hz) |
| Gibbs classification | (1) minor episode variability; (2) small waves; (3) high-amplitude slow wave; (4) low-speed slow wave; (5) slow wave; (6) 8.5–12.0 Hz, step length 0.5 Hz; (7) slow ground amplitude; (8) fast wave; (9) high-speed fast wave |
| EEG signal pattern | (1) α EEG; (2) β EEG; (3) flat EEG; (4) irregular EEG |

Preprocessing of the received EEG signal mainly performs noise reduction, which also reduces interference from non-brain-wave signals such as electrodermal and muscle (EMG) activity. Feature extraction is then performed on the cleaned data to obtain signals useful for sentiment analysis.
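As a rough illustration of this step, the sketch below applies a band-pass filter plus a mains notch filter with SciPy; the 0.5–45 Hz band and the 50 Hz notch are common EEG choices assumed here, not settings stated in the paper.

```python
# Hedged sketch of EEG noise reduction: a 0.5-45 Hz band-pass to suppress
# drift and high-frequency muscle artifacts, plus a 50 Hz notch for mains
# interference. All cutoffs are assumptions, not the paper's settings.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 128.0  # sampling rate (Hz), matching the data used later in the paper

def preprocess_eeg(signal, fs=FS):
    b, a = butter(4, [0.5 / (fs / 2), 45.0 / (fs / 2)], btype="band")
    signal = filtfilt(b, a, signal)            # keep the EEG band
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)   # remove power-line hum
    return filtfilt(b_n, a_n, signal)

cleaned = preprocess_eeg(np.random.randn(int(FS) * 63))
```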

2.3. Feature Extraction of EEG Signals

In this study, the wavelet transform was used to extract 4 rhythms from the EEG electrode signals, namely, the θ, α, β, and γ rhythms. Taking β as an example, the wavelet packet coefficients of the β-wave decomposition nodes of the EEG signal are calculated, and various statistics of the EEG signal are obtained from them. These statistics serve as the original features. According to the particularity of the EEG signal, the average energy of the β-wave rhythm in the time and frequency domains is also extracted. The features extracted for the β wave are shown in Table 3.


Table 3: Extracted features of the β wave.

| Feature abbreviation | Description |
| Mean | Average value of the β wave |
| Median | Median of the β wave |
| Std | Standard deviation of the β wave |
| Min | Minimum value of the β wave |
| Max | Maximum value of the β wave |
| Min Ratio | Ratio of the number of minima of the β wave to the signal length |
| Max Ratio | Ratio of the number of maxima of the β wave to the signal length |
| Energy Mean | Average energy of the β wave |

The calculation formulas of some of these statistics are as follows:

$$\mathrm{Mean} = \frac{1}{N}\sum_{i=1}^{N} E_i, \qquad \mathrm{Std} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(E_i - \mathrm{Mean}\right)^2}, \qquad \mathrm{Energy\ Mean} = \frac{1}{N}\sum_{i=1}^{N} E_i^2, \tag{1}$$

where $E_i$ represents the $i$th sample of the EEG signal data and $N$ represents the length of the EEG signal data.
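A sketch of this extraction for the β band follows, assuming PyWavelets with a db4 mother wavelet and a 4-level wavelet packet decomposition; the paper does not state these parameters.

```python
# Sketch of the beta-band statistics in Table 3 via a wavelet packet
# decomposition. The mother wavelet (db4) and depth (4) are assumptions.
import numpy as np
import pywt

def beta_features(signal, fs=128, wavelet="db4", level=4):
    wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
    width = (fs / 2) / 2 ** level  # frequency width of each terminal node (Hz)
    # Concatenate coefficients of the nodes lying inside the 13-30 Hz band.
    coeffs = np.concatenate([
        node.data
        for i, node in enumerate(wp.get_level(level, order="freq"))
        if i * width >= 13 and (i + 1) * width <= 30
    ])
    n = len(signal)
    return {
        "Mean": coeffs.mean(),
        "Median": np.median(coeffs),
        "Std": coeffs.std(),
        "Min": coeffs.min(),
        "Max": coeffs.max(),
        "Min Ratio": np.sum(coeffs == coeffs.min()) / n,
        "Max Ratio": np.sum(coeffs == coeffs.max()) / n,
        "Energy Mean": np.mean(coeffs ** 2),
    }

print(beta_features(np.random.randn(128 * 60)))
```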

2.4. Learning and Classification of EEG Signals

A support vector machine (SVM) is one of the most common classification methods in emotion recognition. However, the classic SVM is susceptible to noise interference, and EEG signals collected in real production environments usually contain noise. In order to improve the classification accuracy, this paper therefore uses an SVM with fuzzy membership (FSVM).

Let the training sample set be $\{(x_i, y_i, s_i)\}_{i=1}^{l}$, where $x_i \in \mathbb{R}^n$ is the feature vector of the $i$th sample, $y_i \in \{-1, +1\}$ is its class label, and $s_i \in (0, 1]$ is the fuzzy membership degree of the $i$th sample, representing the reliability with which the $i$th sample belongs to class $y_i$. Following the SVM principle, the training samples are mapped into a high-dimensional feature space by a feature mapping $\varphi(\cdot)$, so that each training sample is converted to $(\varphi(x_i), y_i, s_i)$, and the kernel function is $K(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j)$. The classification hyperplane is $w \cdot \varphi(x) + b = 0$, and the optimal hyperplane is obtained by solving the following objective function with the Lagrangian multiplier method:

$$\min_{w,\, b,\, \xi} \; \frac{1}{2}\|w\|^2 + C^{+}\sum_{i:\, y_i = +1} s_i \xi_i + C^{-}\sum_{i:\, y_i = -1} s_i \xi_i \quad \text{s.t.} \; y_i\left(w \cdot \varphi(x_i) + b\right) \ge 1 - \xi_i, \; \xi_i \ge 0, \; i = 1, \dots, l, \tag{2}$$

where $C^{+}$ and $C^{-}$ represent the penalty factors of positive and negative samples, respectively, and $\xi_i$ is the relaxation (slack) factor.

According to the degree of influence of each sample on the classification surface, each sample point is given a different membership degree: sample points with a larger influence receive a larger membership degree, while sample points with a smaller influence (for example, noisy points) receive a smaller one.
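The sketch below approximates the FSVM with scikit-learn, whose SVC accepts per-sample weights that play the role of the memberships $s_i$; the distance-to-class-centre membership rule is a common heuristic assumed here, not necessarily the rule used in the paper.

```python
# Approximating the FSVM: scikit-learn's SVC supports per-sample weights,
# which act like the memberships s_i scaling the slack penalty C * s_i * xi_i.
# The distance-to-class-centre membership rule below is a common heuristic
# and an assumption, not necessarily the paper's rule.
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y, delta=1e-6):
    s = np.empty(len(y))
    for c in np.unique(y):
        idx = y == c
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        s[idx] = 1.0 - d / (d.max() + delta)  # far from centre => likely noise
    return s

X = np.random.randn(200, 8)            # synthetic rhythm feature vectors
y = np.random.randint(0, 2, size=200)  # binary labels

clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
```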

2.5. The D-S Evidence Combination Theory

Dempster first described the evidence combination theory in his article [43]. Later, Shafer further developed and perfected it, forming the Dempster–Shafer evidence combination theory as it is now known, also called the D-S evidence theory. It expands the space of data fusion solutions and is widely used in multisource data fusion. The D-S evidence theory starts from the trust functions of different observations and fuses them with Dempster's evidence combination rule; a judgment is then made on the fused result according to a decision rule, yielding the final decision. The principle is described as follows [44].

Suppose a finite frame of discernment $\Theta$, and let $2^{\Theta}$ be the set of all subsets of $\Theta$, including the empty set. For a subset $A \subseteq \Theta$, define the function $m: 2^{\Theta} \to [0, 1]$ satisfying

$$m(\varnothing) = 0, \tag{3}$$
$$\sum_{A \subseteq \Theta} m(A) = 1. \tag{4}$$

The function $m$ is the basic confidence assignment on $2^{\Theta}$, and $m(A)$ is the precise trust level of the subset $A$. In this theory, the basic confidence assignment to $A$ is fixed as its evidence information; however, different people will assign inconsistent confidence to the same evidence because of their particular experience and knowledge. The aim is to maximize the use of independent and different sources of evidence to improve the accuracy or confidence in the target event.

Assume $m_1$ and $m_2$ are two basic confidence assignments derived from independent sources of evidence. Dempster's rule combines them as

$$m(A) = \frac{1}{1 - K}\sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad K = \sum_{B \cap C = \varnothing} m_1(B)\, m_2(C), \tag{5}$$

where $K$ measures the conflict between the two bodies of evidence.
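A minimal sketch of formula (5), restricted to the four singleton emotion classes used later in the paper (masses on composite subsets are omitted for brevity; the numbers are illustrative):

```python
# Dempster's rule for two basic confidence assignments over singleton
# hypotheses. For singletons, B intersect C is nonempty only when B == C.
def dempster_combine(m1, m2):
    classes = list(m1)
    K = sum(m1[b] * m2[c] for b in classes for c in classes if b != c)
    if K >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: m1[a] * m2[a] / (1.0 - K) for a in classes}

m_rhythm1 = {"HAHV": 0.4, "LAHV": 0.3, "LALV": 0.2, "HALV": 0.1}
m_rhythm2 = {"HAHV": 0.5, "LAHV": 0.1, "LALV": 0.2, "HALV": 0.2}
fused = dempster_combine(m_rhythm1, m_rhythm2)
print(max(fused, key=fused.get))  # decision: class with maximal fused trust
```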

3. The Proposed Emotion Recognition Method

Different rhythms in EEG data correspond differently to emotional states, and emotion recognition based on a single rhythm often suffers from a low recognition rate and poor stability. Using multiple EEG rhythms as recognition features improves both the results and their stability. At present, most emotion recognition based on multiple EEG rhythms simply concatenates the features extracted from these rhythms, without a more effective fusion strategy. This makes the dimension of the feature space, and hence the input dimension of the classifier, too high, so the accuracy and stability of the discrimination results are poor. In order to make full use of the advantages of EEG data and improve the decision results, this study applies the D-S theory at the decision level. The main idea of the proposed method is as follows. First, extract the four characteristic rhythms in the EEG electrode signals, namely, the θ, α, β, and γ rhythms, and extract features from each characteristic wave separately. Second, input each feature vector into the corresponding FSVM classifier for recognition. Finally, obtain the basic confidence assignment of each emotion class under each classifier and use the D-S evidence combination theory to fuse the classification results into the final decision. The framework of the proposed method is shown in Figure 2.


Table 4: Features extracted from each rhythm (θ, α, β, and γ).

| Feature type | Feature details |
| Time domain | Peak value, mean value, and standard deviation of the time-domain signal |
| Frequency domain | Power spectral density, center-of-gravity frequency, and frequency-band energy |
| Nonlinear dynamics | Approximate entropy and sample entropy |

The steps of the proposed algorithm are as follows:
Step 1: prepare the electrodes FC5-FC6 to be analyzed. Owing to the time-varying, nonstationary characteristics of the EEG, preprocessing, mainly framing and windowing, is essential before waveform extraction. Set the frame length to 512, the frame shift to 256, and the window function to the Hamming window (a sketch of this step follows the list).
Step 2: extract the θ, α, β, and γ rhythms of each preprocessed electrode signal, and use them as the EEG characteristic bands. The extracted features are shown in Table 4.
Step 3: assign an FSVM classifier to each rhythm. Each FSVM can be regarded as an independent piece of evidence, and its output value is transformed into the basic confidence function of each emotion class under that evidence.
Step 4: after obtaining the basic assignment function of each FSVM classifier in Step 3, perform fusion according to formula (5).
Step 5: judge the fusion result according to the maximum-trust rule: the class with the maximum trust degree after fusion is selected as the target class.
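The framing sketch referenced in Step 1, using the frame length, frame shift, and window stated in the text:

```python
# Sketch of Step 1: frame the EEG into 512-sample windows with a
# 256-sample shift and apply a Hamming window, as specified above.
import numpy as np

def frame_signal(signal, frame_len=512, frame_shift=256):
    window = np.hamming(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // frame_shift
    return np.stack([
        signal[i * frame_shift : i * frame_shift + frame_len] * window
        for i in range(n_frames)
    ])

frames = frame_signal(np.random.randn(128 * 60))
print(frames.shape)  # (n_frames, 512)
```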

4. Experiment Analysis

4.1. Experimental Data and Parameter Settings

The data set used in this study is the public data set provided by Koelstra et al. for analyzing human emotional states. The data set contains audio and video, 32-lead EEG data, and 8-lead peripheral physiological signals. It is divided into 4 emotions, namely, high arousal and high valence (HAHV), low arousal and high valence (LAHV), low arousal and low valence (LALV), and high arousal and low valence (HALV). In the experiment, 312 training samples and 144 test samples were selected, with 78 training samples and 36 test samples per class. This study used the data preprocessed by Koelstra et al., obtained from the original recordings by removing the electrooculogram, downsampling, and filtering. The sampling frequency of each segment is reduced to 128 Hz, and each segment lasts 63 seconds: the first 3 seconds are the baseline, and the following 60 seconds are the experimental data.
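A hedged sketch of slicing one preprocessed trial into the baseline and experimental portions described above; the single-channel array layout is an assumption for illustration.

```python
# Split one preprocessed 63 s trial (128 Hz) into the 3 s baseline and the
# 60 s of experimental data, as described in the text.
import numpy as np

FS = 128
trial = np.random.randn(63 * FS)         # one preprocessed EEG channel
baseline, data = trial[:3 * FS], trial[3 * FS:]
print(baseline.shape, data.shape)        # (384,) (7680,)
```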

The comparison classifiers used in this study are the SVM, the Gaussian mixture model (GMM), and the BP neural network (BPNN). For the SVM, a penalty parameter and a kernel parameter were set; the number of Gaussian components of the GMM is 6. The evaluation index is the recognition rate.

4.2. Experimental Program and Result Analysis

This study designed experiments from three perspectives: single rhythms versus multirhythm combinations as data input, different fusion strategies for the classification results, and different classifiers. The specific designs are as follows:
Scheme 1: in order to verify the influence of single and multirhythm data input on emotion recognition, the experiment compares the recognition rates of single rhythms and of different multirhythm combinations. The multirhythm classification results in this experiment are fused with the D-S evidence combination theory, and the classifier is the FSVM. The experimental results are shown in Table 5 and Figure 3.
Scheme 2: in order to verify the effectiveness of the adopted D-S fusion strategy, the two result fusion methods, ordinary linear combination and D-S evidence combination, are compared. The experimental data use the combination of the three rhythms θ + β + γ. The experimental results are shown in Table 6.
Scheme 3: in order to verify the robustness of the FSVM classifier, the classic SVM, GMM, and BPNN are selected as comparison classifiers. The experimental data are the multirhythm fusion classification results. The experimental results are shown in Table 7 and Figure 4.


Table 5: Recognition rates of single rhythms and multirhythm fusion.

| Rhythm | HAHV | LAHV | LALV | HALV | Mean |
| Single rhythm: | | | | | |
| θ | 0.5280 | 0.5426 | 0.4985 | 0.5446 | 0.5284 |
| α | 0.5390 | 0.5566 | 0.5078 | 0.5335 | 0.5342 |
| β | 0.5618 | 0.6309 | 0.4775 | 0.5821 | 0.5631 |
| γ | 0.5885 | 0.5321 | 0.5347 | 0.5538 | 0.5523 |
| Multirhythm fusion: | | | | | |
| θ + α | 0.5311 | 0.5489 | 0.5026 | 0.5523 | 0.5337 |
| θ + β | 0.5602 | 0.6243 | 0.4724 | 0.5953 | 0.5631 |
| θ + γ | 0.5925 | 0.5341 | 0.5387 | 0.5622 | 0.5569 |
| α + β | 0.5454 | 0.6243 | 0.4856 | 0.5754 | 0.5577 |
| α + γ | 0.5565 | 0.5287 | 0.5295 | 0.5432 | 0.5395 |
| β + γ | 0.5743 | 0.6265 | 0.5043 | 0.5754 | 0.5701 |
| θ + α + β | 0.6117 | 0.5876 | 0.5906 | 0.5350 | 0.5812 |
| θ + α + γ | 0.6243 | 0.5973 | 0.5667 | 0.5465 | 0.5837 |
| θ + β + γ | 0.7088 | 0.5630 | 0.5894 | 0.6421 | 0.6258 |
| θ + α + β + γ | 0.7008 | 0.5534 | 0.5878 | 0.6346 | 0.6192 |


Table 6: Recognition rates of the two result fusion methods.

| Result fusion method | HAHV | LAHV | LALV | HALV | Mean |
| Linear combination | 0.6523 | 0.5532 | 0.5641 | 0.6330 | 0.6007 |
| D-S evidence combination theory | 0.7088 | 0.5630 | 0.5894 | 0.6421 | 0.6258 |


Table 7: Recognition rates of different classifiers under multirhythm fusion.

| Classifier | HAHV | LAHV | LALV | HALV | Mean |
| SVM | 0.6818 | 0.5432 | 0.5678 | 0.6346 | 0.6069 |
| GMM | 0.6778 | 0.5528 | 0.5584 | 0.6302 | 0.6048 |
| BPNN | 0.6972 | 0.5578 | 0.5708 | 0.6445 | 0.6176 |
| FSVM | 0.7088 | 0.5630 | 0.5894 | 0.6421 | 0.6258 |

From Table 5 and Figure 3, the following conclusions can be drawn:
(1) Among the single rhythms, the recognition rates differ, which shows that the information carried by different rhythms contributes differently. The β and γ rhythms have the better recognition rates, which shows that they most faithfully reflect emotional changes; these two rhythms should be given priority in multifeature recognition.
(2) The recognition rates of the rhythm combinations are generally higher than those of single rhythms, which shows that emotion recognition based on multiple rhythms performs better. Among the combinations, the D-S fusion of the three rhythms θ + β + γ has the highest recognition rate, while the fusion of the four rhythms θ + α + β + γ is lower. This shows that more rhythms are not always better: rhythms that carry little useful information weaken the final decision. The recognition rate of any two-rhythm combination is lower than that of θ + β + γ, and among the three-rhythm combinations, θ + β + γ is significantly higher than the others. Thus, to obtain the optimal decision result, one must not only determine the number of combined rhythms but also select the most representative rhythms.

It can be concluded from Table 6 that the fusion method based on the D-S evidence combination has the higher recognition rate. It fuses the separate discrimination results of the three rhythms and takes the class with the largest trust degree after fusion as the target class. The linear combination experiment simply concatenates the three rhythms' features into one feature vector; in the pattern recognition process, inconsistencies between certain feature dimensions may then cause misjudgments. Comparing the two experiments shows that the D-S evidence combination theory can reduce, to a certain extent, the misjudgments caused by inconsistencies between features, thereby improving the recognition rate.

From Table 7 and Figure 4, the following conclusions can be drawn. The emotion recognition rate under the FSVM classifier is the highest: 3.02% higher than the SVM, 3.47% higher than the GMM, and 1.33% higher than the BPNN. The overall improvement is not large, which shows that the choice of classifier has a limited effect on the final recognition rate. Among the classifiers, the recognition rates of the BPNN and FSVM differ little, which shows that although a neural network classifier has high computational time complexity, its final decision-making effect is good; for applications that are not sensitive to time cost, a neural network classifier can be considered. Comparing the four emotion classes, the recognition rates of HAHV and HALV are generally higher. In terms of arousal, both categories belong to the high-arousal range, which shows that EEG emotional data with high arousal are recognized better.

5. Conclusion

As the pressure on college students increases, negative events occur frequently, so emotion recognition for college students is particularly important and meaningful. In this context, this article proposes an EEG-based emotion recognition method for college students. First, in the use of the data set, this study takes multiple rhythms as data input; through experimental comparison, three rhythms were finally selected. Second, the wavelet transform is used to extract features from each rhythm. Third, the FSVM classifier classifies the input feature data to obtain the classification result of each rhythm. Fourth, the D-S evidence combination theory fuses the classification results of the three rhythms to produce the final decision. This research contains three innovations: the use of multiple rhythms as input, the introduction of the FSVM classifier with strong noise immunity, and the result fusion strategy based on the D-S evidence theory. Experimental comparison shows that the proposed emotion recognition method effectively improves the recognition rate and has reference value. However, this study also has shortcomings; for example, classifying the rhythms separately discards the relationships between different rhythms and may reduce the final recognition effect. In future work, a classifier based on a collaborative learning mechanism will be adopted for classification and recognition.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC grant nos. 51705021, U1764261, 61702055, 61972059, and 61773272) and the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, Jilin University (93K172017K18).

References

1. R. Giner-Sorolla, "The past thirty years of emotion research: appraisal and beyond," Cognition and Emotion, vol. 33, no. 1, pp. 48–54, 2019.
2. I. Blanchette and A. Richards, "The influence of affect on higher level cognition: a review of research on interpretation, judgement, decision making and reasoning," Cognition & Emotion, vol. 24, no. 4, pp. 561–595, 2010.
3. H. Jazaieri, A. S. Morrison, P. R. Goldin, and J. J. Gross, "The role of emotion and emotion regulation in social anxiety disorder," Current Psychiatry Reports, vol. 17, no. 1, p. 531, 2014.
4. Z. Rakovec-Felser, "The sensitiveness and fulfillment of psychological needs: medical, health care and students," Collegium Antropologicum, vol. 39, no. 3, pp. 541–550, 2015.
5. J. A. Healey and R. W. Picard, "Detecting stress during real-world driving tasks using physiological sensors," IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 2, pp. 156–166, 2005.
6. H. Sandler, S. Tamm, U. Fendel, M. Rose, B. F. Klapp, and R. Bösel, "Positive emotional experience, induced by vibroacoustic stimulation using a body monochord in patients with psychosomatic disorders, is associated with an increase in EEG-theta and a decrease in EEG-alpha power," Brain Topography, vol. 29, no. 4, pp. 524–538, 2016.
7. O. K. Akputu, K. P. Seng, Y. Lee, and L.-M. Ang, "Emotion recognition using multiple kernel learning toward E-learning applications," ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 14, no. 1, 2018.
8. S. Nakagawa, L. Wang, and S. Ohtsuka, "Speaker identification and verification by combining MFCC and phase information," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 4, pp. 1085–1095, 2012.
9. R. Xia and Y. Liu, "A multi-task learning framework for emotion recognition using 2D continuous space," IEEE Transactions on Affective Computing, vol. 8, no. 1, pp. 3–14, 2017.
10. H.-W. Yoo and S.-B. Cho, "Video scene retrieval with interactive genetic algorithm," Multimedia Tools and Applications, vol. 34, no. 3, pp. 317–336, 2007.
11. M. Xu, C. Xu, X. He, J. S. Jin, S. Luo, and Y. Rui, "Hierarchical affective content analysis in arousal and valence dimensions," Signal Processing, vol. 93, no. 8, pp. 2140–2150, 2013.
12. S.-H. Wang, P. Phillips, Z.-C. Dong, and Y.-D. Zhang, "Intelligent facial emotion recognition based on stationary wavelet entropy and Jaya algorithm," Neurocomputing, vol. 272, pp. 668–676, 2018.
13. Y. Sun, G. Wen, and J. Wang, "Weighted spectral features based on local Hu moments for speech emotion recognition," Biomedical Signal Processing and Control, vol. 18, pp. 80–90, 2015.
14. A. R. Damasio, T. J. Grabowski, A. Bechara et al., "Subcortical and cortical brain activity during the feeling of self-generated emotions," Nature Neuroscience, vol. 3, no. 10, pp. 1049–1056, 2000.
15. C.-H. Wu, Z.-J. Chuang, and Y.-C. Lin, "Emotion recognition from text using semantic labels and separable mixture models," ACM Transactions on Asian Language Information Processing, vol. 5, no. 2, pp. 165–182, 2006.
16. D. Zeng, H. Chen, R. Lusch, and S.-H. Li, "Social media analytics and intelligence," IEEE Intelligent Systems, vol. 25, no. 6, pp. 13–16, 2010.
17. W.-L. Zheng and B.-L. Lu, "Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks," IEEE Transactions on Autonomous Mental Development, vol. 7, no. 3, pp. 162–175, 2015.
18. W.-L. Zheng, J.-Y. Zhu, and B.-L. Lu, "Identifying stable patterns over time for emotion recognition from EEG," IEEE Transactions on Affective Computing, vol. 10, no. 3, pp. 417–429, 2019.
19. S. Zhalehpour, O. Onder, Z. Akhtar, and C. E. Erdem, "BAUM-1: a spontaneous audio-visual face database of affective and mental states," IEEE Transactions on Affective Computing, vol. 8, no. 3, pp. 300–313, 2017.
20. Y. Wang, L. Guan, and A. N. Venetsanopoulos, "Kernel cross-modal factor analysis for information fusion with application to bimodal emotion recognition," IEEE Transactions on Multimedia, vol. 14, pp. 597–607, 2012.
21. P. Qian, Y. Chen, J.-W. Kuo et al., "mDixon-based synthetic CT generation for PET attenuation correction on abdomen and pelvis jointly using transfer fuzzy clustering and active learning-based classification," IEEE Transactions on Medical Imaging, vol. 39, no. 4, pp. 819–832, 2020.
22. Y. Jiang, K. Zhao, K. Xia et al., "A novel distributed multitask fuzzy clustering algorithm for automatic MR brain image segmentation," Journal of Medical Systems, vol. 43, no. 5, 2019.
23. P. Qian, C. Xi, M. Xu et al., "SSC-EKE: semi-supervised classification with extensive knowledge exploitation," Information Sciences, vol. 422, pp. 51–76, 2018.
24. Y. Jiang, Z. Deng, F.-L. Chung et al., "Recognition of epileptic EEG signals using a novel multiview TSK fuzzy system," IEEE Transactions on Fuzzy Systems, vol. 25, no. 1, pp. 3–20, 2017.
25. P. Qian, J. Zhou, F. Y. Liang et al., "Multi-view maximum entropy clustering by jointly leveraging inter-view collaborations and intra-view-weighted attributes," IEEE Access, vol. 6, pp. 28594–28610, 2018.
26. Y. Jiang, D. Wu, Z. Deng et al., "Seizure classification from EEG signals using transfer learning, semi-supervised learning and TSK fuzzy system," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 12, pp. 2270–2284, 2017.
27. P. Qian, Y. Jiang, Z. Deng et al., "Cluster prototypes and fuzzy memberships jointly leveraged cross-domain maximum entropy clustering," IEEE Transactions on Cybernetics, vol. 46, no. 1, pp. 181–193, 2016.
28. P. Qian, S. Sun, Y. Jiang et al., "Cross-domain, soft-partition clustering with diversity measure and knowledge reference," Pattern Recognition, vol. 50, pp. 155–177, 2016.
29. D. Garg and G. K. Verma, "Emotion recognition in valence-arousal space from multi-channel EEG data and wavelet based deep learning framework," Procedia Computer Science, vol. 171, pp. 857–867, 2020.
30. S. B. Wankhade and D. D. Doye, "Deep learning of empirical mean curve decomposition-wavelet decomposed EEG signal for emotion recognition," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 28, no. 1, pp. 153–177, 2020.
31. P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186–197, 2010.
32. Y. Lin, C.-H. Wang, T.-P. Jung et al., "EEG-based emotion recognition in music listening," IEEE Transactions on Biomedical Engineering, vol. 57, no. 7, pp. 1798–1806, 2010.
33. Y. Yang, Q. M. J. Wu, W.-L. Zheng, and B.-L. Lu, "EEG-based emotion recognition using hierarchical network with subnetwork nodes," IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 2, pp. 408–419, 2018.
34. Y. Zhang, Z. Zhou, W. Pan et al., "Epilepsy signal recognition using online transfer TSK fuzzy classifier underlying classification error and joint distribution consensus regularization," IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2020.
35. Y. Jiang, Y. Zhang, C. Lin, D. Wu, and C. Lin, "EEG-based driver drowsiness estimation using an online multi-view and transfer TSK fuzzy system," IEEE Transactions on Intelligent Transportation Systems, 2020.
36. Y. Zhang, J. Dong, J. Zhu, and C. Wu, "Common and special knowledge-driven TSK fuzzy system and its modeling and application for epileptic EEG signals recognition," IEEE Access, vol. 7, pp. 127600–127614, 2019.
37. A. Greco, G. Valenza, L. Citi, and E. P. Scilingo, "Arousal and valence recognition of affective sounds based on electrodermal activity," IEEE Sensors Journal, vol. 17, no. 3, pp. 716–725, 2017.
38. J. Kim and E. André, "Emotion recognition based on physiological changes in music listening," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 12, pp. 2067–2083, 2008.
39. M. Wyczesany and T. S. Ligeza, "Towards a constructionist approach to emotions: verification of the three-dimensional model of affect with EEG-independent component analysis," Experimental Brain Research, vol. 233, no. 3, pp. 723–733, 2015.
40. J. Tao and T. Tan, "Affective computing: a review," in Affective Computing and Intelligent Interaction, vol. 3784 of Lecture Notes in Computer Science, Springer, Berlin, Germany, 2005.
41. K. H. Kim, S. W. Bang, and S. R. Kim, "Emotion recognition system using short-term monitoring of physiological signals," Medical & Biological Engineering & Computing, vol. 42, no. 3, pp. 419–427, 2004.
42. S. Paul, A. Banerjee, and D. N. Tibarewala, "Emotional eye movement analysis using electrooculography signal," International Journal of Biomedical Engineering and Technology, vol. 23, no. 1, pp. 59–70, 2017.
43. A. P. Dempster, "Upper and lower probabilities induced by a multivalued mapping," The Annals of Mathematical Statistics, vol. 38, no. 2, pp. 325–339, 1967.
44. J. Inglis, "A mathematical theory of evidence," Technometrics, vol. 20, no. 1, p. 242, 1976.

Copyright © 2020 Yan Ding et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

