Applied Computational Intelligence and Soft Computing


Review Article | Open Access

Volume 2021 | Article ID 9917246 | https://doi.org/10.1155/2021/9917246

Ebenezer Owusu, Jacqueline Asor Kumi, Justice Kwame Appati, "On Facial Expression Recognition Benchmarks", Applied Computational Intelligence and Soft Computing, vol. 2021, Article ID 9917246, 20 pages, 2021. https://doi.org/10.1155/2021/9917246

On Facial Expression Recognition Benchmarks

Academic Editor: Ridha Ejbali
Received: 04 Apr 2021
Revised: 04 Aug 2021
Accepted: 07 Sep 2021
Published: 18 Sep 2021

Abstract

Facial expression is an important form of nonverbal communication; it has been estimated that 55% of what humans communicate is conveyed through facial expressions. Facial expressions have applications in diverse fields, including medicine, security, gaming, and business. Automatic facial expression recognition is therefore an active research area attracting substantial funding, which makes it important to understand its trends. This study reviews selected works published in the domain from 2010 to 2021 and extracts, analyzes, and summarizes the findings according to the most used techniques for feature extraction, feature selection, classification, and validation, and the most used databases. The results strongly indicate that local binary pattern (LBP), principal component analysis (PCA), support vector machine (SVM), CK+, and 10-fold cross-validation are, respectively, the most widely used feature extraction method, feature selection method, classifier, database, and validation method. In line with these findings, the study provides recommendations, particularly for new researchers with little or no background, on which methods to employ and strive to improve.

1. Introduction

The discovery of expression and emotions in humans and animals by Darwin [1] in the nineteenth century served as the premise for research on emotions. In his work, Darwin indicated that both humans and animals exhibit emotions of similar behaviour [2]. Since then, there has been significant progress in the research on emotions, with the past two decades witnessing immense contributions from multidisciplinary fields, such as psychology, medicine, sociology, business, neuroscience, endocrinology, and computer science, resulting in a colossal number of algorithms for automatic facial expression recognition [3].

Emotions can be described as feelings arising from neural activity in the amygdala, the emotion centre of the brain. Emotion can also be described as a complex experience involving related feelings, one that tends to move a person beyond their ordinary state [4, 5]. Emotions come with physical and physiological changes that regulate our behaviour in reaction to internal and external stimuli [6]. Emotion is a salient characteristic of humans: it plays a useful role in human communication as well as in the growth and regulation of interpersonal relationships [7–9], and it affects thoughts, actions, and decision making [10]. Several sources of emotional information have been proposed for recognising emotions. These sources serve as the primary data from which emotions can be inferred and can be broadly classified into three groups: biological indicators, behavioural indicators, and physiological signals (see Figure 1) [3, 11]. The biological indicators comprise facial expressions and body postures or gestures. The physiological signals are measurements of electrical signals produced by the heart, brain, muscles, and skin; they include electroencephalography (EEG), electromyography (EMG), electrocardiography (ECG), respiration rate, skin conductance, electrooculography (EOG), blood pressure, Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), Magnetoencephalography (MEG), Functional Magnetic Resonance Imaging (fMRI), and Near-Infrared Spectroscopy (NIRS). Speech signals and text, in turn, represent the behavioural indicators for emotion recognition.

Research on facial expression dates back to ancient times, making facial expression a recognised and important modality among the nonverbal forms of communication. Moreover, it can be inferred from the literature that facial expression is the modality most often combined with other modalities when performing emotion recognition [12]. Darwin's work on the universality of facial expressions of emotion across different cultures and tribes served as a foundation for the empirical study of facial expressions [1]. As a result, facial expression is the measure with the most developed frameworks, having been researched thoroughly over the past few decades [13]. Additionally, among the indicators for emotion recognition, facial expressions are argued to be the leading measure, as they convey 55% of what humans communicate, while language and speech account for 7% and 38%, respectively [14, 15]. Emotions can be detected easily and accurately from the face [16, 17]. Furthermore, using facial expressions for emotion recognition has several advantages, such as noninvasiveness and relatively low cost: it requires neither physical contact with the user through sensors, as in collecting EEG signals, nor expensive hardware [18]. Facial expressions are useful in deciphering an individual's thoughts or state of mind during a conversation [19]. They also serve as among the most reliable indicators of age, truthfulness, temperament, personality, and emotional state [20, 21]. Hence, the face is an important feature of the body, conveying an individual's personality, emotions, thoughts, and ideas even before they are verbalized, and playing a significant role in human communication and social interaction [3, 22].

Darwin's research established the foundation for the conceptualisation of emotions and thus received attention from various psychologists. Ekman [23] validated Darwin's theory on the universality of emotions across tribes and cultures when he proposed the discrete theory of emotion, namely, the basic emotions. Since then, several psychologists have theorised variants of emotion based on the basic theories, for example, the models of Ortony et al. [24–26]. These conceptualised emotions vary in type and number, even though they all stem from Darwin's and Ekman's universality of emotions. Nonetheless, the emotions most employed in research based on these discrete theories are the basic emotions, modelled by six classes: happiness, disgust, fear, surprise, sadness, and anger [7, 27]. The basic emotions are considered universal across different cultures and peoples and are used to describe the affective states of individuals [23, 28]. Each basic emotion is characterised by a unique facial expression [29].

Advances in technology have contributed immensely to the analysis of emotions, begetting automated facial expression recognition. The general framework of facial expression classification or recognition involves the following stages (see Figure 2): image acquisition, preprocessing, feature extraction, feature selection, and classification [30].
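The staged framework above can be sketched as simple function composition. This is an illustrative outline only, not code from any surveyed paper; all stage functions (`preprocess`, `extract`, `select`, `classify`) are hypothetical placeholders supplied by the caller.

```python
def recognize_expression(image, preprocess, extract, select, classify):
    """Five-stage FER pipeline: image acquisition is assumed done (the
    image is already in hand); the remaining stages run in sequence."""
    face = preprocess(image)      # e.g. face detection, cropping, normalization
    features = extract(face)      # e.g. LBP, HOG, or Gabor features
    reduced = select(features)    # e.g. PCA or LDA dimensionality reduction
    return classify(reduced)      # e.g. an SVM mapping to a basic emotion
```

Each surveyed method can be seen as a particular choice for one or more of these four stages, which is why the review questions in Section 2 are organised stage by stage.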

Although a considerable number of surveys on facial expression classification exist, to the best of our knowledge there is no comprehensive systematic review in the field. Having evaluated the existing surveys and reviews using the evaluation checklist of Kitchenham [31], we observed that the majority of authors conducted narrative reviews rather than systematic reviews, providing general information on the various aspects of facial expression classification [32–42]. Some works did perform a systematic review, but they examined the general methods for the various facial expression recognition stages rather than the most utilized ones [43]. This background informed our decision to conduct a systematic review investigating the most utilized feature extraction methods, feature selection techniques, classification algorithms, validation methods, and databases from 2010 to the first half of 2021.

The rest of this paper is structured as follows. Section 2 describes the methodology used in this review. Section 3 presents the results and discussion, and the conclusions and future work are in Section 4.

2. Method

The overall objective of the work is to summarize, analyze, and assess the domain of facial expression recognition, providing an up-to-date review of (1) the most dominant feature extraction methods utilized for facial expression recognition, (2) the most employed feature selection technique, (3) the most used classification algorithm, (4) the most utilized database, and (5) the most dominant model validation method. Guidelines are further provided for novices on which techniques to use when conducting facial expression classification research. The five research questions for this systematic literature review are presented in Table 1.


Question number | Question | Objective

RQ1 | What is the most used feature extraction method for facial expression classification? | Identify the trends and opportunities in feature extraction techniques for facial expression classification
RQ2 | What is the most employed feature selection or reduction technique for facial expression classification? | Examine the trends in dimensionality reduction of the extracted features for the classification of facial expressions
RQ3 | What is the most dominant algorithm utilized for facial expression classification? | Review the algorithms deployed for classifying facial expressions into the basic emotions
RQ4 | What is the most utilized database for facial expression classification? | Explore the kind of database used (posed or spontaneous, 2D or 3D) for facial expression classification
RQ5 | What is the most dominant validation method for facial expression classification? | Investigate the most employed technique for evaluating the classification as well as partitioning the dataset

This systematic review was planned, conducted, and reported following the procedures proposed in [31]. In planning the review, a justification for the review was first established, together with the development of a review protocol. The review protocol entails the definition of the research questions, search strategy design, study selection, quality assessment, data extraction, and data synthesis. Figure 3 shows the review protocol.

To start with, we formulated focused research questions based on the aim of this work. The search strategy was then designed, involving the determination of search terms and the selection of appropriate search engines to retrieve relevant literature in the subsequent search process. Next, a study selection criterion was defined to select the studies that contribute to answering the research questions; a pilot study selection was first carried out to further polish these criteria. Afterwards, quality checklists were established to assess relevant studies during the quality assessment process. A data extraction form was devised for the data extraction stage and later refined through piloting to address technical issues, such as the ordering of the questions. Finally, the collected data were synthesized in the data synthesis stage; the appropriate synthesis methodologies were determined by the types of data collected and the research questions they address. The details of the review protocol are presented in the following sections [44].

2.1. Search Strategy

The search strategy comprises the search terms, the search engines, and the search process; each is detailed below.

2.1.1. Search Terms

Our search terms were derived using the steps proposed by Kitchenham et al. [45]:
(i) Identifying relevant keywords from papers
(ii) Using alternative synonyms for the keywords
(iii) Deriving major terms from the research questions

The resulting search strategy was developed using keywords including facial expression, machine learning, deep learning, and classification. The search strings used on the search engines followed this protocol: (i) facial expression AND classifier AND facial expression databases and (ii) facial expression AND feature extraction AND feature selection. The search strings were designed to balance a manageable result size against coverage.
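As a small illustration (ours, not from the paper), the two search strings in the protocol above can be generated by joining each keyword group with the boolean AND operator:

```python
def build_queries(term_groups):
    """Join each group of keywords with AND, per the search protocol."""
    return [" AND ".join(group) for group in term_groups]

queries = build_queries([
    ["facial expression", "classifier", "facial expression databases"],
    ["facial expression", "feature extraction", "feature selection"],
])
# queries[0] is "facial expression AND classifier AND facial expression databases"
```

Keeping the keyword groups as data makes it easy to regenerate the strings when a synonym is added, which matters because each engine's query is built from the same groups.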

2.1.2. Search Engines

After formulating the search terms, appropriate and relevant search engines were selected; the selection was not restricted by their availability at the home university. The search for primary studies was done using the following databases: (i) Institute of Electrical and Electronics Engineers (IEEE), (ii) SpringerDirect, and (iii) ScienceDirect. The generated search strings were applied to these databases, restricted to journal articles published from January 1, 2010, to June 2021, inclusive, because we wanted to investigate the latest developments and trends in the domain of facial expression classification.

2.1.3. Search Process

An initial informal search was conducted to ascertain whether the task would yield enough literature resources for the study. Once the search engines and search strings were defined, we searched the three electronic databases separately for articles. The retrieved candidate articles were downloaded and exported to Excel. For ScienceDirect, however, the software package JabRef (https://www.jabref.org/) was used instead of Excel, since that database has no option to export to Excel and allows only 100 articles to be downloaded and exported at a time. The downloaded articles were later combined and exported to Excel for manual scanning and selection of the relevant articles. The software package Mendeley (https://www.mendeley.com/) was used for storing and managing the relevant articles. Figure 4 presents the search process and the total number of papers identified at each phase.

2.2. Study Selection

The study selection filters out candidate papers that provide no useful information for answering the research questions of this review. The selection was conducted in two phases: Selection stage 1 and Selection stage 2. Stage 1 eliminated candidate articles irrelevant to the research questions based on the inclusion and exclusion criteria; stage 2 then selected relevant papers based on the quality assessment criteria. The search process produced 361 candidate articles. Candidate articles were screened by title, duplicates were eliminated, potentially relevant papers were selected by scrutiny of the abstract against the inclusion criteria, and the papers selected in the previous step were then carefully perused for quality assessment. The aim of the process was to select relevant papers of acceptable quality for data extraction (see Figure 4).

To select relevant articles to be included in the systematic review, the following search limits were applied as formulated from the research questions.

2.2.1. Inclusion Criteria

The following inclusion criteria were applied:
(i) Papers whose main objective is facial expression analysis and classification of the basic emotions using machine learning or deep learning algorithms
(ii) Articles published between 2010 and 2021
(iii) Studies whose databases used for the classification task contain human faces
(iv) Papers written in English

2.2.2. Exclusion Criteria

The following exclusion criteria were applied:
(i) Papers whose content was an extended abstract or a PowerPoint presentation
(ii) Books and magazines
(iii) Papers using facial expressions for pain analysis or for conditions such as autism, depression, and Parkinson's disease
(iv) Review papers

Therefore, a total of 240 potentially relevant articles remained after selection stage 1 and proceeded to the quality assessment review in selection stage 2. After applying the quality assessment criteria in stage 2, we obtained 233 final relevant studies, as listed in the Appendix.

2.3. Study Quality Assessment

Measures to ensure the quality of the search were taken throughout the review. Article screening was performed manually after the initial automated search: papers were verified for inclusion or exclusion after evaluation and analysis of their titles and abstracts. The automated search was conducted in private browsing mode to avoid the influence of search history. Additionally, a quality questionnaire was formed to assess the relevance of the included studies; its questions were formulated to test the relevance, rigour, and credibility of the papers, and some were derived from Wen et al. [44, 46]. Each question was scored with one of three answers: "yes" = 1, "partly yes" = 0.5, and "no" = 0. The answers were summed to give a quality score for each included study, and studies with a quality score above 5 were considered for the data extraction and synthesis processes.
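The scoring scheme above fits in a few lines of code. The function names here are illustrative, but the per-answer scores ("yes" = 1, "partly yes" = 0.5, "no" = 0) and the inclusion threshold (a score above 5) follow the text:

```python
ANSWER_SCORE = {"yes": 1.0, "partly yes": 0.5, "no": 0.0}

def quality_score(answers):
    """Sum the per-question scores for one study's answers to QA1-QA12."""
    return sum(ANSWER_SCORE[a] for a in answers)

def include_study(answers, threshold=5.0):
    """A study proceeds to data extraction only if its score exceeds 5."""
    return quality_score(answers) > threshold
```

For example, a study answering "yes" to six of the twelve questions scores 6.0 and is included, while ten "partly yes" answers score exactly 5.0 and are excluded under a strict "above 5" reading.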

2.4. Data Extraction

We designed a data extraction form in Excel to independently collect, from each included paper, the data that answer the review questions (Table 2). We summarized information on the feature extraction and reduction techniques, the classification algorithms, the validation methods, and the reported databases. Standard information, such as publication details, date of publication, title, author names, and publication venue, was also collected. During extraction, we observed that not all included studies answered all the review questions. Another issue was that some papers used different terminologies; for instance, dimensionality reduction was synonymous with feature selection in some papers. To avoid ambiguity, we adopted the term feature selection throughout.


Number | Question

QA1 | Are the aims of the study reported?
QA2 | Is the feature extraction or selection method (or methods) communicated?
QA3 | Is the feature extraction or selection technique (or techniques) justified?
QA4 | Is the utilized classification algorithm (or algorithms) indicated?
QA5 | Is there any justification for the chosen algorithm (or algorithms)?
QA6 | Is the used database (or databases) specified?
QA7 | Is there any explanation for the selected database?
QA8 | Is the classification accuracy reported?
QA9 | Is the validation method reported?
QA10 | Is the research methodology repeatable?
QA11 | Are the results and findings clearly stated?
QA12 | Are the limitations of the study specified?

2.5. Data Synthesis

The collected data were saved for the data synthesis stage. The purpose of data synthesis is to aggregate and summarize the data collected from the included studies to answer the formulated review questions. Answers from included studies with similar or comparable evidence are accumulated to provide conclusive answers. Because the extracted data were quantitative, the synthesized outcomes are presented in a comparable way [31]; we additionally used a narrative synthesis method where the nature of the extracted data required it. Results are therefore presented with visualization techniques such as funnel graphs, clustered bar and column graphs, line graphs, and pie charts, as well as summary tables [44, 47].

3. Results and Discussion

3.1. Description of the Included Studies

This section gives a brief overview of the included studies. We identified 233 articles in the area of facial expression recognition published between 2010 and 2021, inclusive. The research questions answered by each included study are presented in Table 3.


[Table 3: for each included study (P1-P233), an X marks the research questions (RQ1-RQ5) that the study answers. The column positions of the X marks were lost during text extraction, so the per-question mapping is not reproduced here.]

3.2. Publication Year

The distribution of the articles published from 2010 to 2021 is shown in Figure 5. Overall, the distribution shows an upward trend of research in the domain. The line graph in Figure 5 shows a steep rise in 2011 and a fall in 2020; the 2020 fall could be attributed to the general disruption caused by the COVID-19 pandemic. The publications recorded for 2021 cover only half of the year, implying that the count is likely to rise considerably by the end of the year. Given the upward trend, we anticipate even more articles as facial expression analysis finds growing application in areas such as human-computer interaction, pain detection in medicine, autism, and security [47, 48].

3.3. Publication Source

The journals in which the primary studies were published, along with the number of studies in each, are summarized in Table 4. The included studies were published in fifty-four different journals. The journal with the most publications is Multimedia Tools and Applications, with a whopping sixty-three; it is followed by Neurocomputing (18) and Visual Computer (16). The majority of the primary studies were obtained from ScienceDirect and SpringerDirect.


Journal name | Frequency

Multimedia Systems | 1
Pattern Analysis and Applications | 1
Applied Computing and Informatics | 1
Applied Intelligence | 9
Applied Soft Computing | 4
Artificial Intelligence Review | 1
Cluster Computing | 3
Cognitive Computation | 1
Cognitive Systems Research | 1
Computer Methods and Programs in Biomedicine | 1
Computer Vision and Image Understanding | 5
Computers & Electrical Engineering | 3
Digital Signal Processing | 1
Engineering Applications of Artificial Intelligence | 2
Engineering Science and Technology, an International Journal | 1
Frontiers of Information Technology & Electronic Engineering | 1
Frontiers of Computer Science | 2
Human-Centric Computing and Information Sciences | 1
IEEE Access | 1
IEEE Signal Processing Letters | 1
Image and Vision Computing | 4
International Journal of Cognitive Computing in Engineering | 1
International Journal of Computer Vision | 1
International Journal of Multimedia Information Retrieval | 1
Journal of Computer Science and Technology | 2
Journal of King Saud University—Computer and Information Sciences | 1
Journal of Neuroscience Methods | 1
Journal of Parallel and Distributed Computing | 1
Journal of Real-Time Image Processing | 2
Journal of Supercomputing | 2
Journal of Visual Communication and Image Representation | 7
Journal on Multimodal User Interfaces | 1
Knowledge and Information Systems | 1
Knowledge-Based Systems | 2
Machine Vision and Applications | 6
Multimedia Tools and Applications | 63
Neural Computing and Applications | 8
Neural Processing Letters | 1
Neurocomputing | 18
Optik | 4
Pattern Analysis and Applications | 4
Pattern Recognition | 14
Pattern Recognition and Image Analysis | 4
Personal and Ubiquitous Computing | 2
Procedia Computer Science | 2
Procedia Technology | 1
Recognition and Image Analysis | 1
Signal, Image and Video Processing | 1
Signal Processing | 1
Signal Processing: Image Communication | 2
Signal, Image and Video Processing | 14
SN Computer Science | 1
The Journal of Supercomputing | 1
Visual Computer | 16
Visual Computing for Industry, Biomedicine and Art | 1

3.4. Analysis on Systematic Review Questions
3.4.1. What Is the Most Used Feature Extraction Method for Facial Expression Classification (RQ1)?

To develop better models, a considerable number of techniques have been proposed and utilized over the years [47]. Feature extraction is usually considered the second and most important step in facial expression recognition, as the choice of features is critical: it represents the facial image effectively by encoding the subtle changes of the face into a feature vector [40, 49]. The results displayed in Figure 6 show that local binary pattern (LBP) is the most commonly used feature extraction method, accounting for 22.9% of all twenty-nine (29) methods used by researchers in the period. It is followed by geometry-based methods, which accounted for 14%; though less reported than LBP, the study found 24 different geometric methods formulated within the 12 years covered. The third and fourth most frequently utilized techniques are the Histogram of Oriented Gradients (HOG, 12.9%) and Gabor filters (10.5%), respectively. Other employed methods are Scale-Invariant Feature Transform (SIFT), Convolutional Neural Network (CNN) features, combinations of LBP and HOG, Linear Discriminant Analysis (LDA), Discrete Wavelet Transform (DWT), Local Ternary Pattern (LTP), and Pyramid Histogram of Oriented Gradients (PHOG); the rest of the methods are shown in Figure 6. The study also finds that variants of feature extraction methods continue to be developed: methods such as LBP, CNN, HOG, and Gabor all have advanced variants.

According to Shan et al. [50] and Zavaschi et al. [51], the original LBP proposed by Ojala et al. [52] frequently surpasses the widely adopted Gabor features because of its ability to save computational resources while retaining facial information, as well as its tolerance to illumination changes. Chengeta and Viriri [37] also state that LBP has been widely adopted because it possesses rotation- and grayscale-invariance properties. Zavaschi et al. [51] state that the Gabor filter has superior performance for facial expression classification, which explains its place among the most adopted techniques; in comparison to LBP, however, the authors of [41] mention that the Gabor filter usually attains an accuracy between 82.5% and 99% and is less sophisticated. The feature extraction techniques are presented in Figure 6.
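To make the LBP idea concrete, here is a minimal NumPy sketch of the basic 3×3 operator of Ojala et al.: each pixel's eight neighbours are compared to the centre to form an 8-bit code, and the normalized histogram of codes serves as the feature vector. This is an illustrative re-implementation of the textbook operator, not code from any surveyed paper, and it omits the uniform-pattern and multiscale refinements used by LBP variants.

```python
import numpy as np

def lbp_codes(gray):
    """Basic 3x3 LBP: each interior pixel gets an 8-bit code whose bits
    record whether each neighbour is >= the centre pixel."""
    h, w = gray.shape
    centre = gray[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << np.uint8(bit)
    return codes

def lbp_histogram(gray):
    """Normalized 256-bin histogram of LBP codes: the feature vector."""
    hist = np.bincount(lbp_codes(gray).ravel(), minlength=256)
    return hist / hist.sum()
```

Because the codes depend only on sign comparisons against the centre pixel, any monotonic change in illumination leaves them unchanged, which is the tolerance to illumination noted above.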

3.4.2. What Is the Most Employed Feature Selection or Reduction Technique for Facial Expression Classification (RQ2)?

In this section, the various feature selection techniques utilized within the twelve-year period are summarized. Feature selection selects the most important features and discards the unimportant ones [47, 53]. As Figure 7 shows, only 33% of the included articles used a feature selection method. Among these, principal component analysis (PCA) dominated research attention over the years with a share of 30.6%, followed by Linear Discriminant Analysis with 18.1%. In affirmation of our result, other relevant reviews by Revina and Emmanuel [41] and Fan and Tjahjadi [54] stated that PCA was the most adopted feature selection technique among algorithms such as LDA and AdaBoost. As noted above, PCA can also be used for feature extraction, as it extracts global, low-dimensional features. PCA has probably gained so much recognition as a feature selection algorithm in facial expression recognition because it decorrelates the features, discards redundant dimensions, and improves visualization; it is also known to reduce overfitting. These are key properties that improve facial expression recognition, making PCA well suited for adoption.

3.4.3. What Is the Most Dominant Algorithm Utilized for Facial Expression Recognition (RQ3)?

This section summarizes the various algorithms employed in facial expression recognition. The distribution of the most used algorithms is shown in Figure 8. Classification is typically the final stage in facial expression recognition [40]: the classifier is trained to categorize expressions into sadness, anger, fear, happiness, disgust, surprise, neutral, and sometimes other states such as joy and smiling [27, 41]. The results show that the support vector machine (SVM) was the most dominant algorithm, alone accounting for 48.6%; variants of SVM, such as the iterative universum twin support vector machine, were also used. SVM is followed by the convolutional neural network (CNN), which accounts for 20.6%. The K-Nearest Neighbor (KNN) classifier and the Hidden Markov Model (HMM) were the third and fourth most frequently employed algorithms, respectively. Other, less used algorithms from the included studies appear in Figure 8. Additionally, some algorithms were fused with others for classification; for instance, naïve Bayes was boosted with a neural network ensemble, and random forest was combined with SVM labelers.

In summary, SVM was the popular choice for classifying facial expressions. The authors in [41, 55] affirm that SVM is the most used classifier for facial expression classification, as it produces better classification and recognition accuracy.
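As a hedged illustration of the SVM idea (a maximum-margin linear classifier), the toy below trains a linear SVM by sub-gradient descent on the regularized hinge loss. Surveyed studies typically rely on library solvers with kernel support (e.g. LIBSVM) and one-vs-rest schemes for the six basic emotions; this sketch shows only the binary linear core.

```python
import numpy as np

def train_linear_svm(X, y, epochs=200, lam=0.01, lr=0.1, seed=0):
    """Hinge-loss sub-gradient descent; labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:      # margin violated: push
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                               # margin met: only shrink w
                w = (1 - lr * lam) * w
    return w, b

def svm_predict(w, b, X):
    """Sign of the decision function, mapped to {-1, +1}."""
    return np.where(X @ w + b >= 0, 1, -1)
```

The shrinkage term `(1 - lr * lam)` implements the margin-maximizing regularizer: among all separating hyperplanes, the weight vector is driven toward the smallest norm that still keeps the margins at 1.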

3.4.4. What Is the Most Utilized Database for Facial Expression Classification (RQ4)?

Databases are used for the validation of proposed methods. The frequency with which each database was used within the period is summarized in Figure 9; our findings identified forty-three (43) different databases.

From our results, CK+ and JAFFE performed better than the other databases, with accuracies between 90% and 100%. This is perhaps due to their two-dimensional nature, as even now much research focuses on 2-dimensional facial expression recognition. Additionally, it was observed that CK+ outperformed JAFFE in several experiments. CK+ consists of both posed and spontaneous (only smile expressions) video sequences of African Americans, Euro-Americans, and others (6%) aged 18–50, and JAFFE consists of 213 posed images from 10 Japanese females [56, 57]. Although CK+ combines posed and spontaneous smile expressions, it is normally classified as a posed database [34, 36]. The variety of expressions in CK+ makes it useful for all kinds of facial expression studies, namely, pose, emotion, age, 2D, and 3D. The combination of subjects of different skin tones and ethnicities makes it even more realistic for facial expression studies.

A few years ago, Revina and Emmanuel [41] and Kumar and Sharma [55] reported that JAFFE and CK were the most utilized databases. However, this study has shown otherwise.

3.4.5. What Is the Most Dominant Validation Method for Model Evaluation for Facial Expression Classification (RQ5)?

Cross-validation is useful in evaluating a model’s accuracy. This is done by splitting the database into two sets: one for training the model and the other for testing [44, 56]. The usual cross-validation methods are leave-one-out and k-fold validation, specifically ten-fold and five-fold. The published research papers that employed validation are shown in Table 3. Figure 10 shows how the various validation methods were used over the 12-year period. By training and testing k times, k-fold validation helps avoid overfitting and underfitting [34]. Consistent with our results, Bengio and Lecun [57] affirm that k-fold is the most often adopted validation method.

Put simply, k = 10 strikes an appealing balance: lower k values, such as 2 or 3, are computationally efficient but carry a large bias, whereas k = 10 keeps the bias low at an acceptable computational cost.
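The 10-fold scheme described above can be sketched in a few lines with scikit-learn; the data here is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

# Illustrative data: 120 samples, 20 features, 2 expression classes.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 20))
y = rng.integers(0, 2, size=120)

# 10-fold cross-validation: train on 9 folds, test on the held-out fold,
# repeated 10 times so every sample is used for testing exactly once.
cv = KFold(n_splits=10, shuffle=True, random_state=2)
scores = cross_val_score(SVC(), X, y, cv=cv)
print(len(scores))  # 10 per-fold accuracy estimates
```

The mean and standard deviation of the 10 per-fold scores are what papers typically report as the cross-validated accuracy.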

4. Conclusions

In this study, we systematically reviewed the domain of facial expression recognition by investigating the most dominant techniques utilized at the various phases, such as feature extraction, feature selection, and classification. First, we identified 233 papers published from 2010 to 2021 after executing a series of systematic steps and a quality assessment. Then, we extracted, analyzed, and summarized the collated data from the included studies based on the most utilized feature extraction techniques, feature selection methods, classification algorithms, databases, and validation methods. The relevant findings are as follows:
(i) Twenty-nine (29) techniques were utilized for feature extraction, and the most utilized ones are local binary pattern (LBP), geometric methods, HOG, Gabor, Scale Invariant Feature Transform (SIFT), convolutional neural network (CNN), Principal Component Analysis (PCA), Wavelet Transform, Curvelet Transform, and Active Appearance Model (AAM).
(ii) The commonly used feature selection methods are PCA and LDA. Others are AdaBoost, CNN, Discrete Cosine Transform (DCT), and Genetic Algorithm (GA).
(iii) A total of nineteen (19) classifiers were used, and the most popular ones are support vector machine (SVM) and CNN. These two techniques alone account for 69.8% of usage, and some of the least commonly adopted techniques are Decision Tree and Artificial Neural Network.
(iv) A total of forty-three (43) databases were used, with CK+, JAFFE, MMI, CK, BU-3DFE, FER-2013, and SFEW alone accounting for 68.4%.
(v) Five different validation methods were used, with 10-fold validation being the most popular. The others are LOSO, 5-fold, 4-fold, and 3-fold.
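Since LBP tops the feature extraction list, a minimal sketch of the basic 3×3 LBP operator may help new researchers see what the descriptor computes. `lbp_3x3` is a hypothetical helper written for illustration; real systems typically histogram these codes over image regions and often use uniform or multi-scale LBP variants:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour LBP code for every interior pixel of a 2-D image.

    Each neighbour that is >= the centre pixel contributes one bit to an
    8-bit code; histograms of these codes form the LBP texture feature.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

img = np.arange(25, dtype=np.int32).reshape(5, 5)
codes = lbp_3x3(img)
print(codes.shape)  # (3, 3): one code per interior pixel
```

Because LBP compares each pixel only with its neighbours, the resulting codes are invariant to monotonic illumination changes, which is a large part of why it remains popular for facial expression features.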

This review provides recommendations and guidelines to researchers, especially new researchers without much background in facial expression classification, as to which methods to adopt for their research work, since these best-performing and most used techniques will be useful for correctly and efficiently recognizing facial expressions [38].

For future work, we will investigate how combinations of these dominant methods and databases perform in terms of classification accuracy.

Appendix

A sample of the 233 selected and reviewed papers is as follows:
[P1] Berretti, S., Amor, B. B., Daoudi, M., & Del Bimbo, A. (2011). 3D facial expression recognition using SIFT descriptors of automatically detected keypoints. The Visual Computer, 27 (11), 1021–1036.
[P2] Jeni, L. A., Lőrincz, A., Nagy, T., Palotai, Z., Sebők, J., Szabó, Z., & Takács, D. (2012). 3D shape estimation in video sequences provides high precision evaluation of facial expressions. Image and Vision Computing, 30 (10), 785–795.
[P3] Fang, T., Zhao, X., Ocegueda, O., Shah, S. K., & Kakadiaris, I. A. (2012). 3D/4D facial expression analysis: an advanced annotated face model approach. Image and Vision Computing, 30 (10), 738–749.
[P4] Zarbakhsh, P., & Demirel, H. (2019). 4D facial expression recognition using multimodal time series analysis of geometric landmark-based deformations. The Visual Computer, 1–15.
[P5] Nejadgholi, I., SeyyedSalehi, S. A., & Chartier, S. (2017). A brain-inspired method of facial expression generation using chaotic feature extracting bidirectional associative memory. Neural Processing Letters, 46 (3), 943–960.
[P6] Uddin, M. Z., & Hassan, M. M. (2015). A depth video-based facial expression recognition system using radon transform, generalized discriminant analysis, and hidden Markov model. Multimedia Tools and Applications, 74 (11), 3675–3690.
[P7] Fan, X., Yang, X., Ye, Q., & Yang, Y. (2018). A discriminative dynamic framework for facial expression recognition in video sequences. Journal of Visual Communication and Image Representation, 56, 182–187.
[P8] Rabhi, Y., Mrabet, M., & Fnaiech, F. (2018). A facial expression controlled wheelchair for people with disabilities. Computer Methods and Programs in Biomedicine, 165, 89–105.
[P9] Chen, A., Xing, H., & Wang, F. (2020). A facial expression recognition method using deep convolutional neural networks based on edge computing. IEEE Access, 8, 49741–49751.
[P10] Uddin, M. Z., Hassan, M. M., Almogren, A., Zuair, M., Fortino, G., & Torresen, J. (2017). A facial expression recognition system using robust face features from depth videos and deep learning. Computers & Electrical Engineering, 63, 114–125.
[P11] Ghazouani, H. (2021). A genetic programming-based feature selection and fusion for facial expression recognition. Applied Soft Computing, 103, 107173.
[P12] Ruiz-Garcia, A., Elshaw, M., Altahhan, A., & Palade, V. (2018). A hybrid deep learning neural approach for emotion recognition from facial expressions for socially assistive robots. Neural Computing and Applications, 29 (7), 359–373.
[P13] Zhou, J., Zhang, S., Mei, H., & Wang, D. (2016). A method of facial expression recognition based on Gabor and NMF. Pattern Recognition and Image Analysis, 26 (1), 119–124.
[P14] Zhan, Y. Z., Cheng, K. Y., Chen, Y. B., & Wen, C. J. (2010). A new classifier for facial expression recognition: fuzzy buried Markov model. Journal of Computer Science and Technology, 25 (3), 641–650.
[P15] Uçar, A., Demir, Y., & Güzeliş, C. (2016). A new facial expression recognition based on curvelet transform and online sequential extreme learning machine initialized with spherical clustering. Neural Computing and Applications, 27 (1), 131–142.
[P16] Zou, W., Zhang, D., & Lee, D. J. (2021). A new multi-feature fusion based convolutional neural network for facial expression recognition. Applied Intelligence, 1–12.
[P17] Greche, L., Akil, M., Kachouri, R., & Es-Sbai, N. (2019). A new pipeline for the recognition of universal expressions of multiple faces in a video sequence. Journal of Real-Time Image Processing, 1–14.
[P18] Kola, D. G. R., & Samayamantula, S. K. (2021). A novel approach for facial expression recognition using local binary pattern with adaptive window. Multimedia Tools and Applications, 80 (2), 2243–2262.
[P19] Silwal, R., Alsadoon, A., Prasad, P. W. C., Alsadoon, O. H., & Al-Qaraghuli, A. (2020). A novel deep learning system for facial feature extraction by fusing CNN and MB-LBP and using enhanced loss function. Multimedia Tools and Applications, 79 (41), 31027–31047.
[P20] Ilbeygi, M., & Shah-Hosseini, H. (2012). A novel fuzzy facial expression recognition system based on facial feature extraction from color face images. Engineering Applications of Artificial Intelligence, 25 (1), 130–146.
[P21] Alphonse, A. S., & Starvin, M. S. (2019). A novel maximum and minimum response-based Gabor (MMRG) feature extraction method for facial expression recognition. Multimedia Tools and Applications, 78 (16), 23369–23397.
[P22] Zia, M. S., Hussain, M., & Jaffar, M. A. (2018). A novel spontaneous facial expression recognition using dynamically weighted majority voting based ensemble classifier. Multimedia Tools and Applications, 77 (19), 25537–25567.
[P23] Vedantham, R., & Reddy, E. S. (2020). A robust feature extraction with optimized DBN-SMO for facial expression recognition. Multimedia Tools and Applications, 79 (29), 21487–21512.
[P24] An, G., Liu, S., & Ruan, Q. (2017). A sparse neighborhood preserving non-negative tensor factorization algorithm for facial expression recognition. Pattern Analysis and Applications, 20 (2), 453–471.
[P25] Fan, X., & Tjahjadi, T. (2015). A spatial-temporal framework based on histogram of gradients and optical flow for facial expression recognition in video sequences. Pattern Recognition, 48 (11), 3407–3416.
[P26] Hu, M., Ge, P., Wang, X., Lin, H., & Ren, F. (2021). A spatio-temporal integrated model based on local and global features for video expression recognition. The Visual Computer, 1–18.
[P27] Danelakis, A., Theoharis, T., & Pratikakis, I. (2016). A spatio-temporal wavelet-based descriptor for dynamic 3D facial expression retrieval and recognition. The Visual Computer, 32 (6), 1001–1011.
[P28] Zhao, X., Dellandréa, E., Zou, J., & Chen, L. (2013). A unified probabilistic framework for automatic 3D facial expression analysis based on a Bayesian belief inference and statistical feature models. Image and Vision Computing, 31 (3), 231–245.
[P29] Yu, J., & Wang, Z. (2017). A video-based facial motion tracking and expression recognition system. Multimedia Tools and Applications, 76 (13), 14653–14672.
[P30] Sun, W., Zhao, H., & Jin, Z. (2018). A visual attention based ROI detection method for facial expression recognition. Neurocomputing, 296, 12–22.
[P31] Jiang, P., Liu, G., Wang, Q., & Wu, J. (2020). Accurate and reliable facial expression recognition using advanced softmax loss with fixed weights. IEEE Signal Processing Letters, 27, 725–729.
[P32] Siddiqi, M. H. (2018). Accurate and robust facial expression recognition system using real-time YouTube-based datasets. Applied Intelligence, 48 (9), 2912–2929.
[P33] Ouyang, Y., Sang, N., & Huang, R. (2015). Accurate and robust facial expressions recognition by fusing multiple sparse representation based classifiers. Neurocomputing, 149, 71–78.
[P34] Peng, Y., & Yin, H. (2018). Facial expression analysis and expression-invariant face recognition by manifold-based synthesis. Machine Vision and Applications, 29 (2), 263–284.
[P35] Kommineni, J., Mandala, S., Sunar, M. S., & Chakravarthy, P. M. (2021). Accurate computing of facial expression recognition using a hybrid feature extraction technique. The Journal of Supercomputing, 77 (5), 5019–5044.
[P36] Yao, L., Wan, Y., Ni, H., & Xu, B. (2021). Action unit classification for facial expression recognition using active learning and SVM. Multimedia Tools and Applications, 1–15.
[P37] Yao, L., Wan, Y., Ni, H., & Xu, B. (2021). Action unit classification for facial expression recognition using active learning and SVM. Multimedia Tools and Applications, 1–15.
[P38] Deng, X., Da, F., & Shao, H. (2017). Adaptive feature selection based on reconstruction residual and accurately located landmarks for expression-robust 3D face recognition. Signal, Image and Video Processing, 11 (7), 1305–1312.
[P39] Kommineni, J., Mandala, S., Sunar, M. S., & Chakravarthy, P. M. (2020). Advances in computer–human interaction for detecting facial expression using dual tree multi band wavelet transform and Gaussian mixture model. Neural Computing and Applications, 1–12.
[P40] Zia, M. S., & Jaffar, M. A. (2015). An adaptive training based on classification system for patterns in facial expressions using SURF descriptor templates. Multimedia Tools and Applications, 74 (11), 3881–3899.
[P41] Sun, Z., Hu, Z. P., Chiong, R., Wang, M., & Zhao, S. (2018). An adaptive weighted fusion model with two subspaces for facial expression recognition. Signal, Image and Video Processing, 12 (5), 835–843.
[P42] Owusu, E., & Wiafe, I. (2021). An advance ensemble classification for object recognition. Neural Computing and Applications, 1–12.
[P43] Roy, S. D., Bhowmik, M. K., Saha, P., & Ghosh, A. K. (2016). An approach for automatic pain detection through facial expression. Procedia Computer Science, 84, 99–106.
[P44] Danelakis, A., Theoharis, T., Pratikakis, I., & Perakis, P. (2016). An effective methodology for dynamic 3D facial expression retrieval. Pattern Recognition, 52, 174–185.
[P45] Shanthi, P., & Nickolas, S. (2021). An efficient automatic facial expression recognition using local neighborhood feature fusion. Multimedia Tools and Applications, 80 (7), 10187–10212.
[P46] Ch, S. (2021). An efficient facial emotion recognition system using novel deep learning neural network-regression activation classifier. Multimedia Tools and Applications, 80 (12), 17543–17568.
[P47] Sun, W., Zhao, H., & Jin, Z. (2017). An efficient unconstrained facial expression recognition algorithm based on stack binarized auto-encoders and binarized neural networks. Neurocomputing, 267, 385–395.
[P48] Saeed, S., Mahmood, M. K., & Khan, Y. D. (2018). An exposition of facial expression recognition techniques. Neural Computing and Applications, 29 (9), 425–443.
[P49] Sun, Z., Chiong, R., & Hu, Z. P. (2018). An extended dictionary representation approach with deep subspace learning for facial expression recognition. Neurocomputing, 316, 1–9.
[P50] Owusu, E., Zhan, Y., & Mao, Q. R. (2014). An SVM-AdaBoost facial expression recognition system. Applied Intelligence, 40 (3), 536–545.
[P51] Bandini, A., Orlandi, S., Escalante, H. J., Giovannelli, F., Cincotta, M., Reyes-Garcia, C. A., ... & Manfredi, C. (2017). Analysis of facial expressions in Parkinson’s disease through video-based automatic methods. Journal of Neuroscience Methods, 281, 7–20.
[P52] Alugupally, N., Samal, A., Marx, D., & Bhatia, S. (2011). Analysis of landmarks in recognition of face expressions. Pattern Recognition and Image Analysis, 21 (4), 681–693.
[P53] Soyel, H., Tekguc, U., & Demirel, H. (2011). Application of NSGA-II to feature selection for facial expression recognition. Computers & Electrical Engineering, 37 (6), 1232–1240.
[P54] Liu, M., Li, S., Shan, S., & Chen, X. (2015). Au-inspired deep networks for facial expression feature learning. Neurocomputing, 159, 126–136.
[P55] Arora, M., & Kumar, M. (2021). AutoFER: PCA and PSO based automatic facial emotion recognition. Multimedia Tools and Applications, 80 (2), 3039–3049.
[P56] Mlakar, U., & Potočnik, B. (2015). Automated facial expression recognition based on histograms of oriented gradient feature vector differences. Signal, Image and Video Processing, 9 (1), 245–253.
[P57] Berretti, S., Del Bimbo, A., & Pala, P. (2013). Automatic facial expression recognition in real-time from dynamic sequences of 3D face scans. The Visual Computer, 29 (12), 1333–1350.
[P58] Mayya, V., Pai, R. M., & Pai, M. M. (2016). Automatic facial expression recognition using DCNN. Procedia Computer Science, 93, 453–461.
[P59] Lajevardi, S. M., & Hussain, Z. M. (2012). Automatic facial expression recognition: feature extraction and selection. Signal, Image and Video Processing, 6 (1), 159–169.
[P60] Chen, J., Lv, Y., Xu, R., & Xu, C. (2019). Automatic social signal analysis: facial expression recognition using difference convolution neural network. Journal of Parallel and Distributed Computing, 131, 97–102.
[P61] Li, S., & Deng, W. (2019). Blended emotion in-the-wild: multi-label facial expression recognition using crowdsourced annotations and deep locality feature learning. International Journal of Computer Vision, 127 (6), 884–906.
[P62] Ali, G., Iqbal, M. A., & Choi, T. S. (2016). Boosted NNE collections for multicultural facial expression recognition. Pattern Recognition, 55, 14–27.
[P63] Wang, Y., Dong, X., Li, G., Dong, J., & Yu, H. (2021). Cascade regression-based face frontalization for dynamic facial expression analysis. Cognitive Computation, 1–14.
[P64] Mermillod, M., Bonin, P., Mondillon, L., Alleysson, D., & Vermeulen, N. (2010). Coarse scales are sufficient for efficient categorization of emotional facial expressions: evidence from neural computation. Neurocomputing, 73 (13–15), 2522–2531.
[P65] Liu, Y., Yuan, X., Gong, X., Xie, Z., Fang, F., & Luo, Z. (2018). Conditional convolution neural network enhanced random forest for facial expression recognition. Pattern Recognition, 84, 251–261.
[P66] Shahid, A. R., Khan, S., & Yan, H. (2020). Contour and region harmonic features for sub-local facial expression recognition. Journal of Visual Communication and Image Representation, 73, 102949.
[P67] Liang, D., Liang, H., Yu, Z., & Zhang, Y. (2020). Deep convolutional BiLSTM fusion network for facial expression recognition. The Visual Computer, 36 (3), 499–508.
[P68] Liao, H., Wang, D., Fan, P., & Ding, L. (2021). Deep learning enhanced attributes conditional random forest for robust facial expression recognition. Multimedia Tools and Applications, 1–19.
[P69] Chen, J., Xu, R., & Liu, L. (2018). Deep peak-neutral difference feature for facial expression recognition. Multimedia Tools and Applications, 77 (22), 29871–29887.
[P70] Li, H., & Xu, H. (2020). Deep reinforcement learning for robust emotional classification in facial expression recognition. Knowledge-Based Systems, 204, 106172.
[P71] Kumar, M. P., & Rajagopal, M. K. (2019). Detecting facial emotions using normalized minimal feature vectors and semi-supervised twin support vector machines classifier. Applied Intelligence, 49 (12), 4150–4174.
[P72] Sun, Z., Hu, Z. P., Wang, M., & Zhao, S. H. (2019). Dictionary learning feature space via sparse representation classification for facial expression recognition. Artificial Intelligence Review, 51 (1), 1–18.
[P73] Imran, S. M., Rahman, S. M., & Hatzinakos, D. (2016). Differential components of discriminative 2D Gaussian–Hermite moments for recognition of facial expressions. Pattern Recognition, 56, 100–115.
[P74] Sánchez, A., Ruiz, J. V., Moreno, A. B., Montemayor, A. S., Hernández, J., & Pantrigo, J. J. (2011). Differential optical flow applied to automatic facial expression recognition. Neurocomputing, 74 (8), 1272–1282.
[P75] Zhou, L., Fan, X., Tjahjadi, T., & Choudhury, S. D. (2021). Discriminative attention-augmented feature learning for facial expression recognition in the wild. Neural Computing and Applications, 1–12.
[P76] Saurav, S., Gidde, P., Saini, R., & Singh, S. (2021). Dual integrated convolutional neural network for real-time facial expression recognition in the wild. The Visual Computer, 1–14.
[P77] Shao, J., & Cheng, Q. (2021). E-FCNN for tiny facial expression recognition. Applied Intelligence, 51 (1), 549–559.
[P78] Jain, N., Kumar, S., & Kumar, A. (2019). Effective approach for facial expression recognition using hybrid square-based diagonal pattern geometric model. Multimedia Tools and Applications, 78 (20), 29555–29571.
[P79] Meena, H. K., Sharma, K. K., & Joshi, S. D. (2020). Effective curvelet-based facial expression recognition using graph signal processing. Signal, Image and Video Processing, 14 (2), 241–247.
[P80] Wang, Y., See, J., Oh, Y. H., Phan, R. C. W., Rahulamathavan, Y., Ling, H. C., ... & Li, X. (2017). Effective recognition of facial micro-expressions with video motion magnification. Multimedia Tools and Applications, 76 (20), 21665–21690.
[P81] Hsieh, C. C., Hsih, M. H., Jiang, M. K., Cheng, Y. M., & Liang, E. H. (2016). Effective semantic features for facial expressions recognition using SVM. Multimedia Tools and Applications, 75 (11), 6663–6682.
[P82] Li, M., Li, X., Sun, W., Wang, X., & Wang, S. (2021). Efficient convolutional neural network with multi-kernel enhancement features for real-time facial expression recognition. Journal of Real-Time Image Processing, 1–12.
[P83] Nigam, S., Singh, R., & Misra, A. K. (2018). Efficient facial expression recognition using histogram of oriented gradients in wavelet domain. Multimedia Tools and Applications, 77 (21), 28725–28747.
[P84] Saurav, S., Saini, R., & Singh, S. (2021). EmNet: a deep integrated convolutional neural network for facial emotion recognition in the wild. Applied Intelligence, 1–28.
[P85] Yadav, S. P. (2021). Emotion recognition model based on facial expressions. Multimedia Tools and Applications, 1–23.
[P86] Sharma, M., Jalal, A. S., & Khan, A. (2019). Emotion recognition using facial expression by fusing key points descriptor and texture features. Multimedia Tools and Applications, 78 (12), 16195–16219.
[P87] Yurtkan, K., & Demirel, H. (2014). Entropy-based feature selection for improved 3D facial expression recognition. Signal, Image and Video Processing, 8 (2), 267–277.
[P88] Patil, H. Y., Kothari, A. G., & Bhurchandi, K. M. (2016). Expression invariant face recognition using local binary patterns and contourlet transform. Optik, 127 (5), 2670–2678.
[P89] Wang, W., Chang, F., Liu, Y., & Wu, X. (2017). Expression recognition method based on evidence theory and local texture. Multimedia Tools and Applications, 76 (5), 7365–7379.
[P90] Martins, J. A., Lam, R. L., Rodrigues, J. M. F., & du Buf, J. H. (2018). Expression-invariant face recognition using a biological disparity energy model. Neurocomputing, 297, 82–93.
[P91] Deng, X., Da, F., & Shao, H. (2017). Expression-robust 3D face recognition based on feature-level fusion and feature-region fusion. Multimedia Tools and Applications, 76 (1), 13–31.
[P92] Samad, R., & Sawada, H. (2011). Extraction of the minimum number of Gabor wavelet parameters for the recognition of natural facial expressions. Artificial Life and Robotics, 16 (1), 21–31.
[P93] Lekdioui, K., Messoussi, R., Ruichek, Y., Chaabi, Y., & Touahni, R. (2017). Facial decomposition for expression recognition using texture/shape descriptors and SVM classifier. Signal Processing: Image Communication, 58, 300–312.
[P94] Fan, W., & Bouguila, N. (2015). Face detection and facial expression recognition using simultaneous clustering and feature selection via an expectation propagation statistical learning framework. Multimedia Tools and Applications, 74 (12), 4303–4327.
[P95] Kar, N. B., Babu, K. S., Sangaiah, A. K., & Bakshi, S. (2019). Face expression recognition system based on ripplet transform type II and least square SVM. Multimedia Tools and Applications, 78 (4), 4789–4812.
[P96] Revina, I. M., & Emmanuel, W. S. (2019). Face expression recognition with the optimization based multi-SVNN classifier and the modified LDP features. Journal of Visual Communication and Image Representation, 62, 43–55.
[P97] Smith, R. S., & Windeatt, T. (2015). Facial action unit recognition using multi-class classification. Neurocomputing, 150, 440–448.
[P98] Sajjad, M., Shah, A., Jan, Z., Shah, S. I., Baik, S. W., & Mehmood, I. (2018). Facial appearance and texture feature-based robust facial expression recognition framework for sentiment knowledge discovery. Cluster Computing, 21 (1), 549–567.
[P99] Lekdioui, K., Messoussi, R., Ruichek, Y., Chaabi, Y., & Touahni, R. (2017). Facial decomposition for expression recognition using texture/shape descriptors and SVM classifier. Signal Processing: Image Communication, 58, 300–312.
[P100] Sen, D., Datta, S., & Balasubramanian, R. (2019). Facial emotion classification using concatenated geometric and textural features. Multimedia Tools and Applications, 78 (8), 10287–10323.

Data Availability

There are no research data for this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. C. Darwin, The Expression of the Emotions in Man and Animals, John Murray, London, UK, 1872.
  2. V. A. Petrushin, “Emotion recognition in speech signal: experimental study, development, and application,” in Proceedings of the 6th International Conference on Spoken Language Processing, Beijing, China, 2000.
  3. S. Mitsuyoshi and F. Ren, “Emotion recognition,” The Journal of The Institute of Electrical Engineers of Japan, vol. 125, no. 10, pp. 641–644, 2005.
  4. D. A. Moritz, “Understanding anger,” American Journal of Nursing, vol. 78, no. 1, p. 81, 1978.
  5. A. M. Shahsavarani, S. Noohi, S. Jafari, M. H. Kalkhoran, and S. Hatefi, “Assessment & measurement of anger in behavioral and social sciences: a systematic review of literature,” International Journal of Medical Reviews, vol. 2, no. 3, pp. 279–286, 2015.
  6. J. A. Domínguez-Jiménez, K. C. Campo-Landines, J. C. Martínez-Santos, E. J. Delahoz, and S. H. Contreras-Ortiz, “A machine learning model for emotion recognition from physiological signals,” Biomedical Signal Processing and Control, vol. 55, Article ID 101646, 2020.
  7. M. Kowalska and M. Wróbel, “Basic emotions,” Encyclopedia of Personality and Individual Differences, Springer, Berlin, Germany, 2017.
  8. D. K. Kirange and R. R. Deshmukh, “Emotion classification of news headlines using SVM,” Asian Journal of Computer Science and Information Technology, vol. 5, pp. 104–106, 2012.
  9. S. G. Mangalagowri and P. C. P. Raj, “EEG feature extraction and classification using feed forward backpropagation algorithm for emotion detection,” in Proceedings of the 2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques, pp. 183–187, Mysuru, India, 2016.
  10. C. E. Izard, “Basic emotions, natural kinds, emotion schemas, and a new paradigm,” Perspectives on Psychological Science, vol. 2, no. 3, pp. 260–280, 2007.
  11. M. Feidakis, A Review of Emotion-Aware Systems for E-Learning in Virtual Environments, Elsevier Inc., Amsterdam, Netherlands, 2016.
  12. W.-L. Zheng, W. Liu, Y. Lu, B.-L. Lu, and A. Cichocki, “EmotionMeter: a multimodal framework for recognizing human emotions,” IEEE Transactions on Cybernetics, vol. 49, no. 3, pp. 1110–1122, 2019.
  13. T. Keshari and S. Palaniswamy, “Emotion recognition using feature-level fusion of facial expressions and body gestures,” in Proceedings of the 2019 International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 2019.
  14. A. Mehrabian, “Some referents and measures of nonverbal behavior,” Behavior Research Methods and Instrumentation, vol. 1, 1968.
  15. M. Pantic, A. Pentland, A. Nijholt, and T. S. Huang, “Human computing and machine understanding of human behaviour,” 2007, https://ibug.doc.ic.ac.uk/media/uploads/documents/LNAI-PanticEtAl-CAMERA.pdf.
  16. P. A. Abhang, B. W. Gawali, and S. C. Mehrotra, “Multimodal emotion recognition,” Introduction to EEG- and Speech-Based Emotion Recognition, Academic Press, Cambridge, MA, USA, 2016.
  17. S. L. Happy, P. Patnaik, A. Routray, and R. Guha, “The Indian spontaneous expression database for emotion recognition,” IEEE Transactions on Affective Computing, vol. 8, no. 1, pp. 131–142, 2017.
  18. J. Gonzalez-Sanchez, M. Baydogan, M. E. Chavez-Echeagaray, K. Robert, and W. Burleson, Affect Measurement: A Roadmap Through Approaches, Technologies and Data Analysis, Elsevier Inc., Amsterdam, Netherlands, 2017.
  19. R. Jameel, A. Singhal, and A. Bansal, “A comprehensive study on facial expressions recognition techniques,” in Proceedings of the 2016 6th International Conference on Cloud System and Big Data Engineering (Confluence), Noida, India, 2016.
  20. A. Apte, A. Basavaraj, and R. K. Nithin, “Efficient facial expression recognition and classification system based on morphological processing of frontal face images,” in Proceedings of the 2015 IEEE 10th International Conference on Industrial and Information Systems (ICIIS), Peradeniya, Sri Lanka, 2016.
  21. M. Pantic and M. S. Bartlett, Machine Analysis of Facial Expressions, vol. 5, IntechOpen, London, UK, 2007.
  22. S. Dhall and P. Sethi, “Geometric and appearance feature analysis for facial expression recognition,” International Journal of Advances in Engineering & Technology, vol. 5, no. 3, pp. 1–11, 2014.
  23. P. Ekman, “Universal facial expressions of emotions,” California Mental Health Research Digest, vol. 8, pp. 151–158, 1970.
  24. A. Ortony and T. J. Turner, “What’s basic about basic emotions?” Psychological Review, vol. 97, no. 3, pp. 315–331, 1990.
  25. R. Plutchik, “The nature of emotions,” Philosophical Studies, vol. 52, no. 3, pp. 393–409, 1987.
  26. J. A. Russell and G. Pratt, “A description of the affective quality attributed to environments,” Journal of Personality and Social Psychology, vol. 38, no. 2, pp. 311–322, 1980.
  27. http://ugspace.ug.edu.gh/bitstream/handle/123456789/36407/Detecting%20Anger%20in%20Persuasive%20Spaces%20An%20Evaluation%20of%20Facial%20Expression%20Algorithms.pdf1.
  28. S. Haq and P. Jackson, “Machine audition: principles, algorithms and systems, chapter 8,” 2010, http://scholar.google.com/scholar?hl=en&btnG=Search&q=intitle:Machine+Audition+:+Principles+,+Algorithms+and+Systems#1.
  29. P. Ekman, Facial Expression, Siegman & Feldstein, Hillsdale, NJ, USA, 1977.
  30. H. G. Valero, Automatic Facial Expression Recognition, University of Manchester, Manchester, UK, 2016.
  31. B. Kitchenham, “Procedures for performing systematic literature reviews,” Technical Report, Keele University, Keele, UK, 2004.
  32. A. Verma and L. K. Sharma, “A comprehensive survey on human facial expression detection,” International Journal of Image Processing, vol. 7, no. 7, pp. 171–182, 2013.
  33. A. Lonare and S. V. Jain, “A survey on facial expression analysis for emotion recognition,” International Journal of Advanced Research in Computer and Communication Engineering, vol. 2, no. 12, pp. 4647–4650, 2013.
  34. Y. Huang, F. Chen, S. Lv, and X. Wang, “Facial expression recognition: a survey,” Symmetry, vol. 11, no. 10, pp. 1189–1228, 2019.
  35. X. Zhao and S. Zhang, “A review on facial expression recognition: feature extraction and classification,” IETE Technical Review, vol. 33, no. 5, pp. 505–517, 2016.
  36. O. Ekundayo and S. Viriri, “Facial expression recognition: a review of methods, performances and limitations,” in Proceedings of the Conference on Information Communications Technology and Society (ICTAS) 2019, Durban, South Africa, 2019.
  37. K. Chengeta and S. Viriri, “A review of local, holistic and deep learning approaches in facial expressions recognition,” in Proceedings of the Conference on Information Communications Technology and Society (ICTAS) 2019, Durban, South Africa, 2019.
  38. G. Hemalatha and C. P. Sumathi, “A study of techniques for facial detection and expression classification,” International Journal of Computer Science & Engineering Survey, vol. 5, no. 2, pp. 27–37, 2014.
  39. J. Kumari, R. Rajesh, and K. M. Pooja, “Facial expression recognition: a survey,” Procedia Computer Science, vol. 58, pp. 486–491, 2015.
  40. N. Bhardwaj and M. Dixit, “A review: facial expression detection with its techniques and application,” International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 9, no. 6, pp. 149–158, 2016.
  41. I. M. Revina and W. R. S. Emmanuel, “A survey on human face expression recognition techniques,” Journal of King Saud University—Computer and Information Sciences, vol. 33, pp. 619–628, 2018.
  42. V. Mokaya and H. Singh, “Facial expression recognition and analysis techniques,” International Journal of Management, Technology and Engineering, vol. 8, no. 12, pp. 1336–1348, 2018.
  43. D. Canedo and A. J. R. Neves, “Facial expression recognition using computer vision: a systematic review,” Applied Sciences, vol. 9, no. 21, pp. 4678–4731, 2019.
  44. J. Wen, S. Li, Z. Lin, Y. Hu, and C. Huang, “Systematic literature review of machine learning based software development effort estimation models,” Information and Software Technology, vol. 54, no. 1, pp. 41–59, 2012.
  45. B. A. Kitchenham, E. Mendes, and G. H. Travassos, “Cross versus within-company cost estimation studies: a systematic review,” IEEE Transactions on Software Engineering, vol. 33, no. 5, pp. 316–329, 2007.
  46. T. Dybå and T. Dingsøyr, “Empirical studies of agile software development: a systematic review,” Information and Software Technology, vol. 50, no. 9-10, pp. 833–859, 2008.
  47. R. Malhotra, “A systematic review of machine learning techniques for software fault prediction,” Applied Soft Computing, vol. 27, pp. 504–518, 2015. View at: Publisher Site | Google Scholar
  48. M. Islam, M. Hasan, X. Wang, H. Germack, and M. Noor-E-Alam, “A systematic review on healthcare analytics: application and theoretical perspective of data mining,” Healthcare, vol. 6, no. 2, p. 54, 2018. View at: Publisher Site | Google Scholar
  49. A. Abouyahya, S. El Fkihi, R. O. H. Thami, and D. Aboutajdine, “Features extraction for facial expressions recognition,” in Proceedings of the 2016 5th International Conference on Multimedia Computing and Systems (ICMCS), vol. 16, pp. 46–49, Marrakech, Morocco, 2016. View at: Publisher Site | Google Scholar
  50. C. Shan, S. Gong, and P. W. McOwan, “Facial expression recognition based on local binary patterns: a comprehensive study,” Image and Vision Computing, vol. 27, no. 6, pp. 803–816, 2009. View at: Publisher Site | Google Scholar
  51. T. H. H. Zavaschi, A. S. Britto, L. E. S. Oliveira, and A. L. Koerich, “Fusion of feature sets and classifiers for facial expression recognition,” Expert Systems with Applications, vol. 40, no. 2, pp. 646–655, 2013. View at: Publisher Site | Google Scholar
  52. T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognition, vol. 29, no. 1, pp. 51–59, 1996. View at: Publisher Site | Google Scholar
  53. B. Remeseiro and V. Bolon-Canedo, “A review of feature selection methods in medical applications,” Computers in Biology and Medicine, vol. 112, Article ID 103375, 2019. View at: Publisher Site | Google Scholar
  54. X. Fan and T. Tjahjadi, “A dynamic framework based on local Zernike moment and motion history image for facial expression recognition,” Pattern Recognition, vol. 64, pp. 399–406, 2017. View at: Publisher Site | Google Scholar
  55. Y. Kumar and S. Sharma, “A systematic survey of facial expression recognition techniques,” in Proceedings of the International Conference on Computing Methodologies and Communication (ICCMC) 2017, vol. 17, Erode, India, 2017. View at: Publisher Site | Google Scholar
  56. P. Refaeilzadeh, L. Tang, and H. Liu, “Cross-validation,” in Encyclopedia of Database Systems, Springer, Boston, MA, USA, 2009.
  57. Y. Bengio and Y. LeCun, “Scaling learning algorithms towards AI,” in Large-Scale Kernel Machines, MIT Press, Cambridge, MA, USA, 2007.

Copyright © 2021 Ebenezer Owusu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.