Journal of Robotics
Volume 2019, Article ID 8591035, 12 pages
https://doi.org/10.1155/2019/8591035
Research Article

An Indoor Scene Classification Method for Service Robot Based on CNN Feature

Shaopeng Liu and Guohui Tian

School of Control Science and Engineering, Shandong University, Jinan 250061, China

Correspondence should be addressed to Guohui Tian; g.h.tian@sdu.edu.cn

Received 5 November 2018; Revised 4 April 2019; Accepted 15 April 2019; Published 24 April 2019

Academic Editor: Keigo Watanabe

Copyright © 2019 Shaopeng Liu and Guohui Tian. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
