Complexity
Volume 2018, Article ID 1627185, 12 pages
https://doi.org/10.1155/2018/1627185
Research Article

Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots

1State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
2HNA Technology Group, Shanghai 200122, China
3Key Laboratory of Autonomous Systems and Networked Control, College of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China

Correspondence should be addressed to Lijun Zhao; zhaolj@hit.edu.cn

Received 14 July 2017; Accepted 11 February 2018; Published 22 April 2018

Academic Editor: Thierry Floquet

Copyright © 2018 Li Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

To improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, comprising a place recognition model, a rotation region recognition model, and a “side” recognition model. The first model recognizes different regions in rooms and corridors, the second determines where the robot should rotate, and the third decides which side of a corridor or aisle the robot should travel on. Furthermore, the “side” recognition model corrects the robot’s motion in real time, ensuring accurate arrival at the specified target. Notably, semantic navigation is accomplished using only a single sensor (a camera). Several experiments conducted in a real indoor environment demonstrate the effectiveness and robustness of the proposed perception framework.
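To give a concrete picture of the three-layer framework the abstract describes, the following is a minimal sketch of one plausible realization, assuming PyTorch and a torchvision pretrained ResNet-18 backbone for transfer learning. The class counts, model names, and dispatch logic are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of the three-layer perception pipeline: three fine-tuned
# classifiers (place / rotation region / "side") sharing a common
# transfer-learning recipe. All specifics here are assumptions.
import torch
import torch.nn as nn
from torchvision import models


def make_classifier(num_classes: int) -> nn.Module:
    """Transfer learning: freeze a pretrained ResNet-18 backbone and
    replace its final fully connected layer with a trainable head."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in net.parameters():
        p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # new trainable head
    return net


# Hypothetical class counts for the three recognition models.
place_model = make_classifier(num_classes=8)     # regions in rooms/corridors
rotation_model = make_classifier(num_classes=3)  # rotate left / right / none
side_model = make_classifier(num_classes=3)      # keep left / center / right


def perceive(image: torch.Tensor) -> dict:
    """Run one camera frame (shape [1, 3, 224, 224]) through all three
    models and return the semantic cues used to drive the robot."""
    with torch.no_grad():
        return {
            "place": place_model(image).argmax(dim=1).item(),
            "rotation": rotation_model(image).argmax(dim=1).item(),
            "side": side_model(image).argmax(dim=1).item(),
        }
```

In such a design, a single camera frame feeds all three models: the place label localizes the robot semantically, the rotation label triggers turning behavior, and the side label provides the real-time motion correction mentioned above.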