Applied Computational Intelligence and Soft Computing
Volume 2012, Article ID 650818, 6 pages
Research Article

Environmental Sound Recognition Using Time-Frequency Intersection Patterns

1Graduate Department of Computer and Information Systems, Graduate School of Computer Science and Engineering, The University of Aizu, Aizu-Wakamatsu 965-8580, Japan
2Department of Computer Science and Engineering, Shanghai Jiaotong University, 200240 Shanghai, China

Received 13 January 2012; Accepted 27 February 2012

Academic Editor: Zhishun She

Copyright © 2012 Xuan Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. J. Huang, N. Ohnishi, and N. Sugie, “Building ears for robots: sound localization and separation,” Artificial Life and Robotics, vol. 1, no. 4, pp. 157–163, 1997.
  2. S. Nakamura, K. Hiyane, F. Asano, and T. Endo, “Sound scene data collection in real acoustical environments,” Journal of the Acoustical Society of Japan, vol. 20, no. 3, pp. 225–232, 1999.
  3. K. Hiyane and J. Iio, “Non-speech sound recognition with microphone array,” in Proceedings of the IEEE International Workshop on Hands-Free Speech Communication, 2001.
  4. K. J. Lang, A. H. Waibel, and G. E. Hinton, “A time-delay neural network architecture for isolated word recognition,” Neural Networks, vol. 3, pp. 23–43, 1990.
  5. K. Miki, T. Nishiura, S. Nakamura, and G. Kashino, “Environmental sound recognition by HMM,” in Proceedings of the Spring Meeting of The Acoustical Society of Japan, no. 1-8-8, 2000.
  6. A. Sasou and K. Tanaka, “Environmental sound recognition based on AR-HMM,” in Proceedings of the Autumn Meeting of The Acoustical Society of Japan, no. 3-Q-7, 2002.