Journal of Robotics
Volume 2017, Article ID 2061827, 7 pages
https://doi.org/10.1155/2017/2061827
Research Article

Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition

1 School of Information Science and Technology, Beijing Forestry University, No. 35 Qinghuadong Road, Haidian District, Beijing 100083, China
2 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, No. 95 Zhongguancundong Road, Haidian District, Beijing 100190, China
3 College of Information Science and Technology, Jinan University, No. 601, West Huangpu Avenue, Guangzhou, Guangdong 510632, China

Correspondence should be addressed to Yanyan Xu; xuyanyan@bjfu.edu.cn

Received 10 May 2017; Revised 30 July 2017; Accepted 6 August 2017; Published 12 September 2017

Academic Editor: Keigo Watanabe

Copyright © 2017 YuKang Jia et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
