Mathematical Problems in Engineering
Volume 2013, Article ID 265819, 9 pages
http://dx.doi.org/10.1155/2013/265819
Research Article

Practical Speech Emotion Recognition Based on Online Learning: From Acted Data to Elicited Data

School of Information Science and Engineering, Southeast University, Nanjing 210096, China

Received 7 March 2013; Revised 26 May 2013; Accepted 4 June 2013

Academic Editor: Saeed Balochian

Copyright © 2013 Chengwei Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
