
A corrigendum for this article has been published.

Journal of Sensors
Volume 2017, Article ID 2157243, 18 pages
Research Article

Visual Localization by Place Recognition Based on Multifeature (D-λLBP++HOG)

1Laboratoire Électronique, Informatique et Image, Université de Technologie de Belfort-Montbéliard, 90000 Belfort, France
2College of Electron and Electricity Engineering, Baoji University of Arts and Sciences, Baoji 721016, China

Correspondence should be addressed to Yongliang Qiao; yongliang.qiao@utbm.fr

Received 6 June 2017; Accepted 17 August 2017; Published 22 October 2017

Academic Editor: Stephane Evoy

Copyright © 2017 Yongliang Qiao and Zhao Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Visual localization is widely used in autonomous navigation systems and Advanced Driver Assistance Systems (ADAS). This paper presents a visual localization method based on multifeature fusion and disparity information from stereo images. We integrate disparity information into complete center-symmetric local binary patterns (CSLBP) to obtain a robust global image description (D-CSLBP). To represent the scene in depth, the fusion of D-CSLBP and HOG features provides complementary information and reduces the effect of typical place-recognition problems such as perceptual aliasing, improving recognition performance by exploiting depth, texture, and shape cues. In addition, for real-time visual localization, locality-sensitive hashing (LSH) is used to compress the high-dimensional multifeature descriptors into binary vectors, which speeds up image matching. To show its effectiveness, the proposed method is tested and evaluated on real datasets acquired in outdoor environments. The obtained results show that our approach achieves more effective visual localization than the state-of-the-art FAB-MAP method.
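The matching step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes fused descriptors are already available as real-valued vectors, and uses random-hyperplane projections as the LSH binarization (the dimensions and database are toy values invented for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_binarize(features, planes):
    """Project features onto random hyperplanes and keep the sign bits,
    yielding a compact binary code per descriptor."""
    return (features @ planes.T) > 0

def hamming_distance(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

# Toy database of fused descriptors (e.g., D-CSLBP concatenated with HOG);
# the feature dimension and code length are illustrative only.
feature_dim, n_bits = 512, 64
database = rng.standard_normal((100, feature_dim))
planes = rng.standard_normal((n_bits, feature_dim))

db_codes = lsh_binarize(database, planes)

# A query descriptor close to database entry 42 (slightly perturbed)
# should be matched to that entry via nearest Hamming distance.
query = database[42] + 0.01 * rng.standard_normal(feature_dim)
q_code = lsh_binarize(query, planes)
best = int(np.argmin([hamming_distance(q_code, c) for c in db_codes]))
```

Matching binary codes with Hamming distance replaces costly floating-point comparisons with bit operations, which is what makes the real-time claim plausible for large place databases.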