Mobile Information Systems

Volume 2016 (2016), Article ID 6463945, 11 pages

http://dx.doi.org/10.1155/2016/6463945

## Novel Point-to-Point Scan Matching Algorithm Based on Cross-Correlation

^{1}Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 70833 Ostrava, Czech Republic

^{2}Department of Computer Science, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 70833 Ostrava, Czech Republic

^{3}Department of Electrical and Computer Engineering, Faculty of Engineering, University of Alberta, 9107-116 Street, Edmonton, AB, Canada T6G 2V4

Received 4 February 2016; Accepted 5 April 2016

Academic Editor: Peter Brida

Copyright © 2016 Jaromir Konecny et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The localization of mobile robots in outdoor and indoor environments is a complex issue. Many sophisticated approaches, based on various types of sensory inputs and different computational concepts, are used to accomplish this task. However, many of the most efficient methods for mobile robot localization suffer from high computational costs and/or the need for high-resolution sensory inputs. Scan cross-correlation is a traditional approach that can, in special cases, be used to match temporally aligned scans of a robot's environment. This work proposes a set of novel modifications to the cross-correlation method that extend its capability beyond these special cases to general scan matching and mitigate its computational costs so that it is usable in practical settings. The properties and validity of the proposed approach are illustrated in this study on a number of computational experiments.

#### 1. Introduction

Accurate and efficient positioning and localization is a fundamental problem of mobile robotics. It involves the estimation of a robot's position relative to a map of the environment [1]. To accomplish this task, mobile robots adopt two high-level localization approaches. They can determine their position by receiving signals from beacons, as in the case of fingerprinting algorithms [2], or they can employ various sensory subsystems that inform them about their vicinity [3]. Devices commonly used for beaconless localization are wheel sensors (odometers) and ultrasonic and optical rangefinders [4].

Optical rangefinders perform 2D laser scans of robot surroundings and provide data with high resolution and at high sampling rates. In general, the processing of such data is computationally expensive and usually requires massive computing resources [5]. In contrast, control systems of mobile robots are usually low-consumption embedded devices with limited resources, low performance, and small memory. Therefore, there is a clear need for innovative laser scan processing methods with a good trade-off between accuracy and computational complexity. In this paper, a novel cross-correlation-based scan matching method suitable for low-performance microcontrollers is proposed and evaluated. It is an efficient point cloud matching algorithm that can be used in mobile robots instead of traditional methods such as the Iterative Closest Point (ICP) [6], Cox [7], complete line segment (CLS) [8], Normal Distributions Transform (NDT) [9], Perimeter-Based Polar Scan Matching (PB-PSM) [10], and, for example, pIC [11] algorithms.

The novel cross-correlation-based [12] scan matching method proposed in this work uses laser scans obtained by an optical rangefinder to solve the simultaneous localization and mapping (SLAM) problem [13] and to determine robot position in an unknown environment. The proposed algorithm has been implemented in C# and evaluated in a series of computational experiments involving a realistic mobile robot platform equipped with a specific optical rangefinder (SICK LMS 100 [14]). The accuracy and performance of the proposed method have been compared to a standard scan matching algorithm, ICP, and found superior in terms of both processing time and the accuracy of the estimated position.

The rest of this paper is organized as follows. The scan matching problem, a general classification of scan matching methods, and the definitions of selected baseline scan matching methods are provided in Section 2. Section 3 gives a brief overview of recent related work and relevant approaches. An efficient and robust scan matching algorithm, based on the cross-correlation of rasterized LiDAR scans, is proposed in Section 4. Section 5 describes the experiments conducted in order to verify the approach and to assess its properties. Finally, conclusions are drawn and future work is outlined in Section 6.

#### 2. Scan Matching

Informally, scan matching (point cloud matching) is a general procedure that aims at aligning a current scan of an environment with a reference scan [15]. Many methods, based on various principles and different formal approaches, have been proposed for scan matching in the past. However, most of them suffer from high computational costs [16] and only a limited ability to work efficiently in different environments [17] (e.g., the method described in [18] requires an environment with perpendicular walls).

Scan matching-based robot localization methods utilize information about the distance between the device and the nearest obstacle. This information can be obtained with high accuracy using a laser rangefinder (LiDAR) [14]. In these devices, a measuring beam is swept in one axis and provides the information about the distance to the nearest obstacle at every measured angle. Common LiDARs provide approximately 10–50 such scans per second. Each scan contains information about the distances to the nearest obstacles within a plane in front of the device (2D LiDAR). A typical LiDAR, such as the SICK LMS 100, has a scanning angle of 270° with an angular resolution of 0.25° or 0.5° [14]. The effective measurement distance ranges from several meters to tens of meters, depending on sensor type and properties. Besides traditional 2D LiDARs, devices able to provide 3D scans of their environment are becoming increasingly popular [19].

Thanks to their favourable properties, 2D LiDARs have become considerably popular in robotics. There are several methods, based on various heuristics and principles, that can be employed to determine the position of a robot in an environment. Methods that align the current scan of an environment with a reference scan or with a map are called scan matching methods.

Scan matching methods can be divided into two large groups. *Conventional scan matching* methods use the apparatus of classical mathematics, while *probabilistic scan matching* methods evaluate the likelihood of a robot being at a certain place. Typical examples of conventional and probabilistic methods are the Iterative Closest Point algorithm [6] and the Normal Distributions Transform algorithm [20], respectively.

Another classification of scan matching procedures is based on the way scan data is processed. The *point-to-point scan matching* strategies process all individual points in environment scans. They provide localization with high accuracy but suffer from high computational costs. However, they are very well usable in both complex and nonstructured environments. The *feature-to-feature* methods extract higher-level features from the scans before the actual matching and localization take place. The extracted features can be diverse. They usually include basic geometric shapes such as lines, arcs, edges, polygons, and, for example, 3D features. These algorithms have a lower computational cost in the matching phase but can operate only in sufficiently feature-rich (i.e., structured) environments. They perform well in buildings with well-structured environmental elements consisting of large, flat surfaces and regular, geometric shapes. In the following, the standard scan matching methods from both categories are summarized.

##### 2.1. Point-Based Scan Matching Methods

###### 2.1.1. Iterative Closest Point Algorithm

The ICP is an iterative algorithm that looks for pairs of closest points in a pair of environment scans. An affine transformation, $T(\mathbf{p}, \cdot)$, that projects the points of one scan onto the other is calculated between two different scans, A and B. The algorithm minimizes a loss function, $E(\mathbf{p})$, defined as

$$E(\mathbf{p}) = \sum_{i} \left\| T(\mathbf{p}, \mathbf{a}_i) - \mathbf{b}_{c(i)} \right\|^2, \qquad \mathbf{p} = (\Delta x, \Delta y, \varphi), \tag{1}$$

where $T$ is the affine transformation, $\Delta x$ and $\Delta y$ are translations in the direction of the $x$- and $y$-axes, respectively, $\varphi$ is rotation, and $c(i)$ is a function that finds in the second scan, B, the index of the point that is closest to the point with index $i$ from the original scan, A.

The result of the minimization is a three-element vector, $\mathbf{p} = (\Delta x, \Delta y, \varphi)$, that represents the translations along the $x$- and $y$-axes and the rotation of the test scan with respect to the reference scan [21]. The ICP algorithm can be summarized as follows [6]:

(1) Preprocessing: removal of the remote points.
(2) Assignment: finding pairs of the closest points (the first point is from the reference scan; the second point is from the test scan).
(3) Rejection: removal of the pairs with a long distance.
(4) Loss function calculation: equation (1).
(5) Loss function minimization: an iterative process (e.g., the Newton method or a Lorentzian estimator [22]).

Loss function minimization is the key part of the algorithm. Minimization methods with a good trade-off between accuracy (i.e., the ability to find good transformations) and computational costs are required for mobile robots equipped with energy- and resource-constrained microcontrollers.
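The steps above can be sketched compactly. The paper's implementation is in C#; the following Python sketch shows a single ICP iteration, with a closed-form least-squares alignment standing in for the iterative minimizers mentioned in step (5). The function name and the rejection threshold are our illustrative assumptions, not the authors' code.

```python
import math

def icp_step(ref, test, reject_dist=1.0):
    """One ICP iteration: pair closest points, reject distant pairs,
    and estimate the rigid transform (dx, dy, phi) mapping test onto ref."""
    # Assignment: for every test point, find the closest reference point.
    pairs = []
    for tx, ty in test:
        rx, ry = min(ref, key=lambda r: (r[0] - tx) ** 2 + (r[1] - ty) ** 2)
        # Rejection: drop pairs with a long distance.
        if math.hypot(rx - tx, ry - ty) <= reject_dist:
            pairs.append(((tx, ty), (rx, ry)))
    # Closed-form least-squares rigid alignment of the surviving pairs.
    n = len(pairs)
    mtx = sum(p[0][0] for p in pairs) / n
    mty = sum(p[0][1] for p in pairs) / n
    mrx = sum(p[1][0] for p in pairs) / n
    mry = sum(p[1][1] for p in pairs) / n
    sxx = sxy = syx = syy = 0.0
    for (tx, ty), (rx, ry) in pairs:
        sxx += (tx - mtx) * (rx - mrx)
        sxy += (tx - mtx) * (ry - mry)
        syx += (ty - mty) * (rx - mrx)
        syy += (ty - mty) * (ry - mry)
    phi = math.atan2(sxy - syx, sxx + syy)
    dx = mrx - (mtx * math.cos(phi) - mty * math.sin(phi))
    dy = mry - (mtx * math.sin(phi) + mty * math.cos(phi))
    return dx, dy, phi
```

In a full ICP loop, the estimated transform would be applied to the test scan and the step repeated until the loss stops decreasing.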

###### 2.1.2. Histogram Correlation

Correlation methods based on histograms, such as the Extended Histogram Matching algorithm, can be used to accomplish scan matching as well [12]. However, traditional correlation can be applied only to scans that differ in rotation alone. For two scans that differ only in rotation, the functions representing the laser scans differ by a shift along the angular axis. If the scans differ in both rotation and translation, the functions differ in distribution and the algorithms may produce misleading results (i.e., a wrong match).

Histogram-based correlation methods therefore use histograms, including the angle histogram [12], to determine the rotation and translation of the matched scans. They compute a histogram of the angles between every pair of consecutive points measured in the scan. The function obtained in this way is invariant to displacement. The $x$- and $y$-axis histograms show the distributions of points scanned in these two directions. Histogram-based scan correlation is described in detail in [12].
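A minimal sketch of the displacement invariance, assuming the histogram is built from the directions between consecutive scan points (the binning and names are ours, not the exact formulation of [12]):

```python
import math
from collections import Counter

def angle_histogram(points, bin_deg=10):
    """Histogram of directions between consecutive scan points.
    Translating all points leaves every direction, and hence the
    histogram, unchanged."""
    hist = Counter()
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360.0
        hist[int(ang // bin_deg)] += 1
    return hist
```

Correlating two such histograms over circular shifts then yields the rotation between the scans, independent of any translation.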

##### 2.2. Feature-Based Methods

A typical example of feature-based scan matching methods is the complete line segment (CLS) algorithm, which compares complete line segments extracted from two different scans. It can also be used to match a scan with a reference map of the environment. This method has been successfully applied to scan matching [8] and to SLAM [23].

The CLS extracts from each LiDAR scan line segments that represent high-level real-world objects found in the robot's environment. The lines can be either complete or incomplete. An incomplete line is a sign of one object occluded by another. A complete line segment, on the other hand, describes a real-world object in plain view of the robot. The algorithm is especially concerned with complete line segments. It assumes that a complete line segment has a unique Euclidean length within the environment. The map of an environment is made up of a set of lines with defined start and end points and corresponding lengths. However, the lines can also be defined by their center point, orientation, and length.

Line comparisons (i.e., scan matching) are performed in CLS using the length of line segments, the relative position of their center points, and their relative rotation. Let us consider two complete line segments, $l$ and $g$. $l$ is a segment from a local map, and $g$ is a segment from a global map. Together, they form a pair. The CLS works in the following way. For each line segment from the local map, $l$, one by one, it builds a set of segments from the global map, $G_l$, with similar length. Then, it calculates for each pair $l$ and $g \in G_l$ the relative position of their centers and their relative rotation. The segment, $g$, is matched if the following condition is satisfied:

$$\left\| l_c - g_c \right\| \le \varepsilon_c \quad \text{and} \quad \left| l_\varphi - g_\varphi \right| \le \varepsilon_\varphi, \tag{2}$$

where the midpoint of a segment is denoted by subscript $c$, the relative rotation by subscript $\varphi$, and $\varepsilon_c$ and $\varepsilon_\varphi$ are matching thresholds. The more segments meet the condition given by (2), the greater the credibility of the test match.

If the test match contains at least two corresponding pairs, it is possible to calculate the rotation angle, $\varphi$, and the displacement parameters, $\Delta x$ and $\Delta y$, respectively. The angle, $\varphi$, is calculated from two pairs of complete line segments as the difference between an orientation vector created from the midpoints of the local line segments, $l^1_c$ and $l^2_c$, and a vector created from the midpoints of the global line segments, $g^1_c$ and $g^2_c$. It is possible to use either of those two pairs of segments for the calculation. The displacement parameters are computed using

$$\Delta x = g_{c,x} - \left( l_{c,x} \cos\varphi - l_{c,y} \sin\varphi \right), \qquad \Delta y = g_{c,y} - \left( l_{c,x} \sin\varphi + l_{c,y} \cos\varphi \right). \tag{3}$$
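Given two matched pairs, the rotation angle and displacement parameters can be recovered from the segment midpoints alone. A Python sketch of this step, under the assumption that each segment is represented by its midpoint (function and variable names are ours):

```python
import math

def pose_from_pairs(l1, l2, g1, g2):
    """Recover the rotation phi and displacement (dx, dy) from two
    matched pairs of segment midpoints: (l1, g1) and (l2, g2)."""
    # Orientation of the vector between local midpoints versus the
    # vector between the corresponding global midpoints.
    a_local = math.atan2(l2[1] - l1[1], l2[0] - l1[0])
    a_global = math.atan2(g2[1] - g1[1], g2[0] - g1[0])
    phi = (a_global - a_local + math.pi) % (2.0 * math.pi) - math.pi
    # Displacement: rotate one local midpoint by phi and compare it
    # with its global counterpart.
    dx = g1[0] - (l1[0] * math.cos(phi) - l1[1] * math.sin(phi))
    dy = g1[1] - (l1[0] * math.sin(phi) + l1[1] * math.cos(phi))
    return phi, dx, dy
```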

The scan matching procedure proposed in this work is a novel point-based method developed especially for energy- and power-constrained devices such as mobile robots. In the next section, we briefly summarize relevant related approaches.

#### 3. Related Work

A SLAM method based on stereo vision and the ICP algorithm has been described in [24]. A SLAM method based on laser scan matching has been introduced in [25], where the authors use polar coordinates for scan matching. The combination of the ICP algorithm and a correlation histogram is used in [26] for large-scale SLAM. In [27], a SLAM method based on entropy is presented. In [28], the authors propose a beam selection method: the laser sensor beams are filtered and only the most important beams are used for SLAM. A representative of the multiagent approach is presented in [29].

A 6-DoF low dimensionality SLAM (L-SLAM) is introduced in [19]. The authors use a 3D kinematic model instead of a 2D one, and a particle filter and a Kalman filter are employed in that SLAM. An alternative approach, introduced in [30], uses the Extended Kalman Filter (EKF); the authors also present a comparison of SLAM methods.

Another frequently used approach is based on the extraction of geometric primitives. For example, these primitives can take the form of line segments [8] or more complex 3D segments [31]. In [31], the authors use 3D landmarks for feature-based SLAM. Another example of a feature-based SLAM method is in [32], where the linear group algorithm (LGA) and stereo vision are used for SLAM. In [33], the authors deal with the kidnapped robot problem. They use an upward-looking camera for the first pose estimation.

A variety of additional information can be included in the maps. These pieces of information can be used in subsequent analyses of the explored area. A mobile robot that explores a waste rock site is described in [34]; the concentrations of carbon monoxide and methane are measured and collected, and the Global Positioning System (GPS) and online maps are used for localization. Another study is presented in [35], where a mobile robotic device for mapping the distribution of a gas is described.

In [36], wireless node localization is proposed. This method is suitable for indoor use, where a GPS signal is not available. Monte Carlo localization is used for wireless node identification. The localized nodes can afterwards be used for backward localization.

The following section describes the proposed novel cross-correlation-based scan matching method in detail. It provides an efficient and accurate algorithm for evaluating the degree of similarity between two laser scans, called the correlation coefficient. The correlation coefficient is in this work a number that represents the overlay of two LiDAR scans. The larger the correlation coefficient, the better the match between the investigated LiDAR scans. The calculation of the correlation coefficient is crucial for the computational costs of the scan matching procedure. In this approach, the 2D point cloud generated by the sensor is transformed into a lower-resolution raster and then used to evaluate how much the borders of the investigated scans collide.
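As a rough sketch of this idea, a point cloud can be projected onto a coarse grid and the overlap of two rasterized scans counted. The cell size and the set-intersection reading of the coefficient are our illustrative assumptions; the exact definition is the subject of the following section.

```python
def rasterize(points, cell=0.25):
    """Project a 2D point cloud onto a coarse grid of square cells."""
    return {(int(x // cell), int(y // cell)) for x, y in points}

def correlation_coefficient(scan_a, scan_b, cell=0.25):
    """Overlay measure of two scans: the number of grid cells occupied
    by both rasterized scans; a larger value means a better match."""
    return len(rasterize(scan_a, cell) & rasterize(scan_b, cell))
```

Lowering the raster resolution trades matching precision for a smaller memory footprint and fewer cell comparisons, which is the trade-off targeted at embedded controllers.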

#### 4. Point-to-Point Scan Matching Algorithm Based on Cross-Correlation

In this work, we propose a robot localization strategy using a novel point-to-point scan matching algorithm based on cross-correlation. The proposed method has low computational requirements and high accuracy and is therefore suitable for use with the embedded devices that are frequently found in mobile robot platforms. The cross-correlation is in this approach used to determine the relative translation and rotation of consecutive LiDAR scans performed by a moving robot. Each LiDAR scan can in this context be understood as a momentary snapshot of a floor plan of the room (more generally, the environment) where the robot is located. The scans usually have a fine angular resolution and cover the entire neighborhood of the robot (i.e., 360°). They contain for each measured angle a point that indicates the distance between the robot and the nearest obstacle in the corresponding direction.

Intuitively, two LiDAR scans performed in the same environment shortly after each other will be similar. The proposed scan matching approach finds an affine transformation vector, $\mathbf{p}$, that is the best projection between an actual and a reference LiDAR scan. The transformation vector consists of three elements, the transformation parameters, that correspond to the translations, $\Delta x$ and $\Delta y$, and the rotation, $\varphi$.

Consider a set of all possible affine transformation vectors, $P$, and a vector, $\mathbf{p} \in P$:

$$\mathbf{p} = (\Delta x, \Delta y, \varphi), \tag{4}$$

where $\Delta x$ and $\Delta y$ are the translations and $\varphi$ is the rotation.

An affine transformation, $T$, based on a parameter vector, $\mathbf{p}$, is defined by

$$\begin{pmatrix} x_B \\ y_B \end{pmatrix} = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} x_A \\ y_A \end{pmatrix} + \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}, \tag{5}$$

where $(x_A, y_A)$ and $(x_B, y_B)$ are the coordinates of a point in the two LiDAR scans, A and B, respectively.
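Applied to a single point, the transformation amounts to a rotation followed by a translation. A minimal Python sketch (the paper's implementation language is C#; the function name is ours):

```python
import math

def apply_transform(p, point):
    """Apply the transformation vector p = (dx, dy, phi) to a scan
    point (x, y): rotate by phi, then translate by (dx, dy)."""
    dx, dy, phi = p
    x, y = point
    return (x * math.cos(phi) - y * math.sin(phi) + dx,
            x * math.sin(phi) + y * math.cos(phi) + dy)
```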

There is a handful of methods, based on different formal approaches and designed for various applications, that can find the parameter vector of the affine transformation between A and B. Some of them are summarized in Section 2. In the following, a novel cross-correlation-based method suitable for embedded microcontrollers is proposed. The method is first defined for two scans that differ only in rotation and then extended to match scans in which both translation and rotation are present.

##### 4.1. Cross-Correlation of Rotated-Only LiDAR Scans

Consider two LiDAR scans, A and B, that differ only by rotation. Assume that they were captured in a sufficiently indented environment so that the functions $f_A(\alpha)$ and $f_B(\alpha)$, representing the scans A and B, respectively, as functions of the rotation angle, $\alpha$, have a period of $2\pi$. An example of this scenario is illustrated in Figure 1. The figure shows three points, $P_1$, $P_2$, and $P_3$, in scans A and B, respectively. Each point, $P_i$, is represented in scan A by $f_A(\alpha_i)$ and in scan B by $f_B(\alpha_i + \varphi)$. The cross-correlation of two rotated-only 2D scans, A and B, can then be evaluated using [37]

$$R_{AB}(\varphi) = \int_0^{2\pi} f_A(\alpha)\, f_B(\alpha + \varphi)\, \mathrm{d}\alpha. \tag{6}$$

Formula (6) is a function of scan similarity that depends on the angle, $\varphi$, only. The rotation between the matched scans A and B, $\varphi^*$, is then simply calculated by

$$\varphi^* = \arg\max_{\varphi} R_{AB}(\varphi). \tag{7}$$

The value of $\varphi^*$ can be easily obtained from (7) using a single scan along the domain of $\varphi$ at a selected angular resolution. This intuitive approach requires only a single program loop and is computationally acceptable even for low-power embedded devices. Unfortunately, the cross-correlation problem is significantly more complex for the general case of two LiDAR scans that differ in both rotation and translation.
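For discretized scans, the single-loop search described above reduces to finding the circular shift that maximizes the correlation sum. A Python sketch, under the assumption that both scans are sampled at the same equal angular steps over the full circle:

```python
def rotation_by_correlation(fa, fb):
    """Return the circular shift (in angular steps) that maximizes the
    cross-correlation of two distance functions sampled over 360°."""
    n = len(fa)
    best_shift, best_corr = 0, float("-inf")
    for s in range(n):  # the single program loop over candidate rotations
        corr = sum(fa[i] * fb[(i + s) % n] for i in range(n))
        if corr > best_corr:
            best_shift, best_corr = s, corr
    return best_shift
```

Multiplying the returned shift by the angular step size yields the rotation angle between the two scans.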