Journal of Electrical and Computer Engineering
Volume 2017 (2017), Article ID 8639782, 6 pages
https://doi.org/10.1155/2017/8639782
Research Article

Cross-Corpus Speech Emotion Recognition Based on Multiple Kernel Learning of Joint Sample and Feature Matching

College of Big Data and Information Engineering, Guizhou University, Guiyang 550002, China

Correspondence should be addressed to Ping Yang

Received 5 April 2017; Revised 3 August 2017; Accepted 13 September 2017; Published 1 November 2017

Academic Editor: Ping-Feng Pai

Copyright © 2017 Ping Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Linked References

  1. J. Deng, Z. Zhang, F. Eyben, and B. Schuller, “Autoencoder-based unsupervised domain adaptation for speech emotion recognition,” IEEE Signal Processing Letters, vol. 21, no. 9, pp. 1068–1072, 2014.
  2. Z. Zhang, F. Weninger, M. Wöllmer, and B. Schuller, “Unsupervised learning in cross-corpus acoustic emotion recognition,” in Proceedings of the 2011 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), IEEE, 2011.
  3. P. Song, Y. Jin, L. Zhao, and M. Xin, “Speech emotion recognition using transfer learning,” IEICE Transactions on Information and Systems, vol. 97, no. 9, pp. 2530–2532, 2014.
  4. B. Schuller et al., “Selecting training data for cross-corpus speech emotion recognition: prototypicality vs. generalization,” in Proceedings of the Afeka-AVIOS Speech Processing Conference, Tel Aviv, Israel, 2011.
  5. Y. Zong, W. Zheng, T. Zhang, and X. Huang, “Cross-corpus speech emotion recognition based on domain-adaptive least-squares regression,” IEEE Signal Processing Letters, vol. 23, no. 5, pp. 585–589, 2016.
  6. A. Hassan, R. Damper, and M. Niranjan, “On acoustic emotion recognition: compensating for covariate shift,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 7, pp. 1458–1468, 2013.
  7. F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan, “Multiple kernel learning, conic duality, and the SMO algorithm,” in Proceedings of the Twenty-First International Conference on Machine Learning (ICML ’04), ACM, 2004.
  8. L. Duan, D. Xu, I. W. Tsang, and J. Luo, “Visual event recognition in videos by learning from web data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, pp. 1667–1680, 2012.
  9. M. Chen, K. Q. Weinberger, and J. Blitzer, “Co-training for domain adaptation,” in Advances in Neural Information Processing Systems, 2011.
  10. K. M. Borgwardt et al., “Integrating structured biological data by kernel maximum mean discrepancy,” Bioinformatics, vol. 22, no. 14, pp. e49–e57, 2006.
  11. S. Steidl, Automatic Classification of Emotion-Related User States in Spontaneous Children’s Speech, University of Erlangen-Nuremberg, Erlangen, Germany, 2009.
  12. B. Schuller, S. Steidl, and A. Batliner, “The INTERSPEECH 2009 emotion challenge,” in Proceedings of the Tenth Annual Conference of the International Speech Communication Association, 2009.
  13. C.-C. Lee, E. Mower, C. Busso, S. Lee, and S. Narayanan, “Emotion recognition using a hierarchical binary decision tree approach,” Speech Communication, vol. 53, no. 9-10, pp. 1162–1171, 2011.
  14. N. V. Chawla et al., “SMOTE: synthetic minority over-sampling technique,” Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.
  15. F. Eyben, M. Wöllmer, and B. Schuller, “openEAR: introducing the Munich open-source emotion and affect recognition toolkit,” in Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII ’09), pp. 1–6, IEEE, Amsterdam, The Netherlands, September 2009.