ISRN Signal Processing
Volume 2011 (2011), Article ID 753819, 15 pages
Research Article

Classification of Emotional Speech Based on an Automatically Elaborated Hierarchical Classifier

1Université de Lyon, CNRS, Ecole Centrale de Lyon, LIRIS, UMR5205, 69134, France
2School of Physical Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
3Tsinghua National Laboratory for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China

Received 4 November 2010; Accepted 15 December 2010

Academic Editors: Y. H. Ha, R. Palaniappan, and F. Palmieri

Copyright © 2011 Zhongzhe Xiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Current machine-based techniques for vocal emotion recognition consider only a finite number of clearly labeled emotional classes, whereas the kinds of emotional classes and their number are typically application dependent. Previous studies have shown that, because of the ambiguous nature of affect classes, a multistage classification scheme helps to improve emotion classification accuracy. However, these multistage classification schemes were manually elaborated, taking into account the underlying emotional classes to be discriminated. In this paper, we propose an automatically elaborated hierarchical classification scheme (ACS), driven by an evidence theory-based embedded feature-selection scheme (ESFS), for the recognition of application-dependent emotions. Evaluated on the Berlin dataset with 68 features and six emotion states, this automatically elaborated hierarchical classifier (ACS) proved effective, achieving a 71.38% classification accuracy, compared to the 71.52% achieved by our previous dimensional-model-driven, but still manually elaborated, multistage classifier (DEC). On the DES dataset with five emotion states, the ACS achieved a 76.74% recognition rate, compared to the 81.22% achieved by a manually elaborated multistage classification scheme (DEC).