#### Abstract

A novel point-to-point scan matching approach is proposed to address the pose estimation and map building problems of mobile robots. Polar Scan Matching (PSM) and Metric-Based Iterative Closest Point (Mb-ICP) are usually employed for point-to-point scan matching tasks. However, because PSM relies on the distribution similarity of polar radii, which can also occur in irrelevant regions of the reference and current scans, and Mb-ICP assumes a constant weight in the norm for the rotation angle, both may mismatch the reference and current scans in real-world scenarios. To obtain better matching results and accurate estimation of the robot pose, we introduce a new metric rule, the Polar Metric-Weighted Norm (PMWN), which takes both rotation and translation into account to match the reference and current scans. For robot pose estimation, the heading rotation angle is estimated from the established correspondences and further corrected by an absolute-value function, and then a geometric property of PMWN called the projected circle is used to estimate the robot translation. Extensive experiments are conducted to evaluate the performance of the PMWN-based approach. The results show that the proposed approach outperforms PSM and Mb-ICP in terms of accuracy, efficiency, and loop closure error of mapping.

#### 1. Introduction

Localization and environment mapping are foundational functions of mobile robots. The dominant way to implement these functionalities is to use a scan matching approach [1–4], where the robot pose is iteratively estimated by the established correspondences between reference and current scans. In each iteration, the most popular way is to apply a set of predefined nearest neighbor rules to three types of features [5], that is, segments (represented as lines or curves), corners (environmental significant feature changes), and raw points, to achieve the scan matching.

Generally, scan matching approaches fall into three categories based on the feature representation [6]: feature-to-feature approaches [7, 8], point-to-feature approaches [9, 10], and point-to-point approaches [11–14]. Feature-to-feature approaches adopt location criteria to match features in both the reference and current scans, while point-to-feature approaches use geometric distance criteria to match segments in the reference scan with current scan points. However, both kinds of approaches suffer from uncertain or misrecognized features, which inevitably lead to a rapid decline of pose estimation accuracy in real indoor environments. Therefore, those approaches are only applicable in structural environments with simple feature representations. Point-to-point approaches, in contrast, are more suitable for unknown environments because they are independent of environmental features. The earliest point-to-point scan matching approach is Iterative Closest Points (ICP) [11], whose real-world application is limited by its long runtime and low convergence rate. Currently, the two most popular point-to-point algorithms are Mb-ICP [13] and PSM [14]. The former assumes a constant weight in the norm for the rotation angle, but the rotation angle actually differs in every iteration, which may cause mismatching between the reference and current scans. Moreover, matching and evaluating every pair of laser scanning points between the reference and current scans is computationally expensive. The latter mainly considers the rotation of the mobile robot and may lead to mismatching when the distribution of polar radii in irrelevant regions of the reference and current scans is similar.

In order to solve the problems of Mb-ICP and PSM mentioned above, a novel point-to-point scan matching approach called PMWN is proposed in this paper. This approach treats the scans as a whole during the correspondence establishment process, rather than matching point by point, and correspondences between the reference and current scans are established by means of one adaptive metric weight. Furthermore, the rotation and translation estimations are separated: the former is achieved by a linear absolute-value function method, and the latter is implemented by a projected circle method.

The rest of this paper is organized as follows: related work is discussed in Section 2; the scan matching method based on PMWN is described in Section 3, where the relevant derivation of PMWN is presented to facilitate understanding of the proposed methods; pose estimation based on PMWN, including the rotation and translation estimation methods, is introduced in Section 4; the empirical experiments are reported in Section 5, and conclusions are presented at the end.

#### 2. Related Works

The objective of scan matching techniques is to estimate the relative motion of a mobile robot between two consecutive sensor scans. More precisely, given a reference scan, the current scan, and a rough intermediate pose estimate, the objective is to estimate the real pose (x, y, θ), where x and y indicate the translation displacement and θ is the rotation angle of the robot. Currently, many typical localization methods are conducted in Cartesian coordinate frames or polar coordinate frames.

##### 2.1. Scan Matching Method in Cartesian Coordinate Frame

In a Cartesian coordinate frame, much research has been conducted on point-to-point scan matching approaches, such as ICP [11], Iterative Dual Correspondence (IDC) [12], and Mb-ICP [13]. The classical ICP is an iterative algorithm for robot pose estimation, where each iteration includes two steps: searching correspondences and calculating the relative pose. The first step establishes the correspondences between the reference and current scans. Then, a minimization process is adopted in the second step to improve the estimate of the relative pose until convergence. The family of ICP algorithms uses the minimum Euclidean distance criterion to establish the correspondences and applies least squares to estimate the pose. However, many factors limit the applications of ICP [12]. For example, the Euclidean distance used by ICP does not take the sensor rotation into account, and correspondences are established by matching every pair of points between the reference and current scans, which may lead to a high computational burden. Moreover, the convergence rate of ICP is slow in structured indoor environments because consecutive scans are less distinguishable in a small region.
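As a concrete illustration of the two-step iteration described above, the following minimal Python sketch implements one ICP iteration: nearest-neighbour correspondence search followed by a closed-form least-squares rigid transform. The function name and structure are illustrative, not the implementation used by any of the cited papers.

```python
import math

def icp_step(ref, cur):
    """One ICP iteration for 2-D point sets given as (x, y) tuples:
    nearest-neighbour matching, then a closed-form least-squares
    rigid transform mapping current points onto reference points."""
    # Step 1: for each current point, take the closest reference point
    # (brute force -- this is the expensive all-pairs search criticised above).
    pairs = []
    for p in cur:
        q = min(ref, key=lambda r: (r[0] - p[0])**2 + (r[1] - p[1])**2)
        pairs.append((p, q))
    n = len(pairs)
    # Step 2: closed-form least squares on the centred point sets.
    mpx = sum(p[0] for p, _ in pairs) / n
    mpy = sum(p[1] for p, _ in pairs) / n
    mqx = sum(q[0] for _, q in pairs) / n
    mqy = sum(q[1] for _, q in pairs) / n
    sxx = sum((p[0]-mpx)*(q[0]-mqx) + (p[1]-mpy)*(q[1]-mqy) for p, q in pairs)
    sxy = sum((p[0]-mpx)*(q[1]-mqy) - (p[1]-mpy)*(q[0]-mqx) for p, q in pairs)
    theta = math.atan2(sxy, sxx)            # optimal rotation
    tx = mqx - (mpx*math.cos(theta) - mpy*math.sin(theta))
    ty = mqy - (mpx*math.sin(theta) + mpy*math.cos(theta))
    return theta, tx, ty
```

In a full ICP loop this step is repeated, re-matching after each pose update, until the pose increment falls below a threshold.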

To address the inherent rotation issue in traditional ICP methods, Lu and Milios proposed the IDC algorithm to estimate the translation and rotation components of the robot pose. The novelty of IDC is that it evaluates the translation with ICP and the rotation with IMRP [12], which guarantees that both the translation and the rotation are estimated accurately. However, the efficiency of IDC is lower than that of ICP since IDC integrates IMRP and ICP. To improve IDC, Gutmann and Schlegel proposed a special filter, which projects the original points onto the center [15]. Unfortunately, its accuracy of pose estimation depends on the center estimation.

Mb-ICP is designed to address the rotation estimation issue of ICP. Unlike ICP, Mb-ICP adopts a new metric distance to establish the correspondences, which is defined in the configuration space of the sensor. It takes into account both the translation and rotation error of the sensor, as shown in formula (1). Zhuang et al. presented a hybrid sensing system for mobile robot localization in an indoor, semistructured environment based on the Mb-ICP algorithm, which is a typical application of Mb-ICP [16]:

‖(x, y, θ)‖ = √(x² + y² + L²θ²),  (1)

where (x, y, θ) is a relative sensor pose and L is a constant weight.

However, for Mb-ICP, as the number of pose estimation iterations increases, the rotation angle changes whereas the constant weight in the metric remains unchanged; this discrepancy may lead to mismatching during the pose estimation process. Moreover, like ICP, Mb-ICP prefers unstructured environments.

In recent years, some scan matching applications in the Cartesian coordinate frame have been proposed. For example, Pedraza et al. proposed an approach called BS-SLAM [17], which uses B-splines [18] to describe the environment, in which equal-interval control points are viewed as landmarks. Its correspondence establishment method calculates the smallest Euclidean distance between the control points of the reference and current scans, and its performance depends on the selected endpoints and control points of the spline. This algorithm is only suitable for environments that are not partially occluded. Olson proposed a probabilistically motivated scan matching algorithm [19], which produces higher-quality and more robust results at the cost of additional computation time. Grzonka et al. used this laser scan matching method to solve the indoor navigation of aerial vehicles [20]; the 6-degree-of-freedom navigation problem can be solved with laser scan matching because their aerial vehicle carried multiple sensors.

##### 2.2. Scan Matching in Polar Coordinate Frame

Rotation and scaling of data are easier to accomplish in a polar coordinate frame than in a Cartesian coordinate frame [21]. For example, the three squares in the Cartesian coordinate frame in Figure 1(a) are converted to the curves in the polar coordinate frame in Figure 1(b). A rotation or a scaling in the Cartesian coordinate frame can be viewed as a translation along the polar-angle direction or the polar-radius direction, respectively, in the polar coordinate frame, as shown in Figure 1.
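The rotation property described above is easy to verify numerically. The following short Python sketch (illustrative, not from the paper) converts points to polar form and checks that a rotation about the origin leaves every polar radius unchanged while shifting every polar angle by the same amount, i.e. a pure translation along the angular axis of the polar frame.

```python
import math

def to_polar(points):
    """Convert 2-D Cartesian (x, y) points to (phi, r) pairs."""
    return [(math.atan2(y, x), math.hypot(x, y)) for x, y in points]

def rotate(points, theta):
    """Rotate points about the origin by theta (counterclockwise)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c*x - s*y, s*x + c*y) for x, y in points]

# A rotation about the origin only shifts the polar angles; the example
# points are chosen so no angle wraps past pi, keeping the check simple.
pts = [(1.0, 0.0), (0.0, 2.0), (-1.5, 1.5)]
shifted = to_polar(rotate(pts, 0.3))
```

PSM's shift-based matching and the PMWN angular term both build on exactly this property.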

Diosi and Kleeman proposed PSM, which has better convergence performance in larger areas and has already been used in SLAM [14]. In PSM, the correspondences are established by shifting the current scan until it covers the reference scan, based on the rotation property of the polar coordinate frame mentioned above. Unlike ICP or Mb-ICP, PSM estimates the rotation and translation of the robot pose separately at each iteration. However, PSM will fail if two consecutive scans have a similar distribution of polar radii in irrelevant regions of the polar coordinate frame. For example, in Figure 1(b), the full curve in the red rectangle is similar to the dotted curves in the blue dotted rectangles. In this case, a point and its neighboring points may not find the correct correspondences.

Our work differs from the previous methods in the following aspects:

(1) In contrast to Mb-ICP, which uses a constant weight in the norm for the rotation angle, the metric weight of PMWN adapts as the iterations increase.

(2) Instead of considering only the rotation effect as PSM does, our scan matching approach deals with rotations and translations together by calculating the polar radius and polar angle variations of two consecutive scans.

#### 3. PMWN-Based Scan Matching

Our work focuses on pose estimation in the polar coordinate frame using a 2D laser sensor. The scan matching workflow is as follows (Figure 2).

*Step 1. *Reference and current scans are obtained by the 2D laser sensor.

*Step 2. *Reference and current scans are preprocessed by a median filter and then clustered [14].

*Step 3. *The scan projection transforms current scan into the reference frame [14].

*Step 4. *Correspondences are established between reference and current scans by PMWN.

*Step 5. *The metric weight is updated as the iterations increase, in order to adapt to the variation of the rotation angle.

*Step 6. *One linear absolute function is used to estimate the rotation angle.

*Step 7. *The translation is estimated by one kind of circle projection.

*Step 8. *If the termination condition is satisfied, the estimate of the current pose is obtained; otherwise, go to Step 3.
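The steps above can be sketched as an iterative loop. In the illustrative Python skeleton below, the helper functions are placeholder stubs standing in for the methods of the later sections (scan projection, PMWN matching, rotation and translation estimation); they are not the paper's code, only scaffolding that makes the control flow of Steps 3–8 concrete.

```python
import math

# Placeholder stubs so the skeleton runs; each stands in for the
# corresponding method described later in the paper.
def project_scan(scan, pose):          # Step 3: project into reference frame
    return scan
def match_pmwn(ref, cur):              # Steps 4-5: PMWN correspondences
    return list(zip(ref, cur))
def estimate_rotation(pairs):          # Step 6: absolute-value correction
    return 0.0
def estimate_translation(pairs):       # Step 7: projected circle
    return 0.0, 0.0

def pmwn_pose(ref_scan, cur_scan, pose0, eps=1e-3, max_iter=60):
    """Iterative loop of Steps 3-8: refine (x, y, theta) until the
    pose increment falls below eps or max_iter is reached."""
    x, y, th = pose0
    for _ in range(max_iter):
        cur = project_scan(cur_scan, (x, y, th))   # Step 3
        pairs = match_pmwn(ref_scan, cur)          # Steps 4-5
        dth = estimate_rotation(pairs)             # Step 6
        dx, dy = estimate_translation(pairs)       # Step 7
        x, y, th = x + dx, y + dy, th + dth
        if math.hypot(dx, dy) < eps and abs(dth) < eps:  # Step 8
            return (x, y, th)
    return (x, y, th)
```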

In this section, we give the definition of PMWN. Then, PMWN is derived and its metric weight update is obtained. Finally, a geometric property of this norm called the projected circle is given.

##### 3.1. Scan Matching Method

The PMWN norm is defined as

‖d‖ = Σᵢ ( |r_ref,i − r_cur,i| + k·|φ_ref,i − φ_cur,i| ),  (2)

where r_ref and r_cur express the polar radius vectors of the reference and current scans, respectively, and φ_ref and φ_cur represent the polar angle vectors of the reference and current scans, respectively; k denotes the metric weight of PMWN and k > 0.
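Assuming the norm combines the absolute polar-radius differences with k-weighted absolute polar-angle differences over the established correspondences, averaged because the number of correspondences varies (a reading of the definition above; the exact weighting follows the derivation in Section 3.2), a minimal Python sketch is:

```python
def pmwn(r_ref, phi_ref, r_cur, phi_cur, k):
    """Average PMWN value over established correspondences: the
    polar-radius difference plus the k-weighted polar-angle difference.
    A hedged sketch of the definition in the text, not the exact formula."""
    n = len(r_ref)
    return sum(abs(rr - rc) + k * abs(pr - pc)
               for rr, pr, rc, pc in zip(r_ref, phi_ref, r_cur, phi_cur)) / n
```

The two terms play the two roles described below: the radius term rewards similar range patterns, while the weighted angle term penalises angular displacement.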

In order to illustrate the principle of the PMWN-based scan matching method, a matching model is used to show that our scan matching method can correct the mismatching of PSM as shown in Figure 3.

**(a) Reference and current scans**

**(b) Reference scan in polar coordinate frame**

**(c) Current scan in polar coordinate frame**

**(d) Correspondences establishing result of PSM**

**(e) Correspondences establishing result of PMWN**

As shown in Figure 3(a), the mobile robot moves from P1 to P2, where the reference scan and the current scan are obtained; the reference scanned points are taken in counterclockwise order, and the current scanned points are taken in the same order. Figures 3(b) and 3(c) show the reference and current scans of Figure 3(a) in the polar coordinate frame, respectively. According to PSM, some scanned points are mismatched, as shown in Figure 3(d), because the patterns of the polar radii distribution in the two red rectangles appear similar in Figures 3(b)-3(c). In contrast, our scan matching method ensures that true correspondences are established between the reference and current scans, which depends on two factors of PMWN:

(1) the polar-radius term in formula (2) ensures similar patterns of the polar radii distribution in the polar coordinate frame;

(2) the weighted polar-angle term in formula (2) ensures a minimum distance along the angular direction between the reference and current scans in the polar coordinate frame.

Correspondences between the reference and current scans are established by left- or right-shifting the current scan relative to the reference scan until the sum of the absolute variations of both polar radii and polar angles reaches a minimum in the polar coordinate frame based on formula (2), as with the polar radii in the two blue dotted rectangles in Figures 3(b)-3(c). These two factors of the norm ensure that the reference and current scans have the least variation in terms of polar radius and polar angle. Therefore, most correspondences established using PMWN are correct, as shown in Figure 3(e).
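A toy Python sketch of this shift-based correspondence search follows. It is illustrative only: the cost here is the mean absolute radius difference over the overlap plus a k-weighted penalty on the angular shift (each index step corresponds to one angular-resolution increment), a simplification of formula (2).

```python
def best_shift(r_ref, r_cur, k, dphi, max_shift=20):
    """Slide the current scan's radii over the reference scan's radii
    and return the index shift minimising the mean |delta r| over the
    overlap plus the k-weighted angular displacement |shift| * dphi,
    where dphi is the angular resolution of the sensor."""
    best = (float('inf'), 0)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(r_ref[i + s], r_cur[i]) for i in range(len(r_cur))
                 if 0 <= i + s < len(r_ref)]
        if not pairs:
            continue   # no overlap at this shift
        cost = (sum(abs(a - b) for a, b in pairs) / len(pairs)
                + k * abs(s) * dphi)
        best = min(best, (cost, s))
    return best[1]
```

With a small k the radius pattern dominates, while the angular term breaks ties between similar-looking regions, which is exactly the failure mode of radius-only matching discussed above.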

##### 3.2. Derivation of PMWN

The derivation of PMWN is given in this subsection, assuming that the correct correspondences have already been established. The robot's motion in a plane can be regarded as a composition of rotation and translation. In the polar coordinate frame, a scan is described by the relationship between the polar angle and the polar radius. In the range data of a 2D laser sensor, considering only a rotation of the robot, the polar radii of the reference and current scans should be equal once the clockwise rotation of the robot is compensated, which is expressed in formula (3), where φ and r denote the polar angle and the polar radius of the reference and current scans, respectively. Formula (3) can be converted to formula (4).

The translation of the mobile robot in polar coordinate frame is presented in Figure 4.

Here the mobile robot translates a certain distance from one position to another without any rotation, where the two components of the displacement are the translations along the coordinate axes. For any scanned point, the variation in the polar radius of each correspondence can be expressed as formula (5). In general, the translation between consecutive scans is much smaller than the measured ranges, so equation (5) can be simplified and combined with the rotation as formula (6).

Now, considering the rotation and the translation together, substituting the corresponding term in (4) with (6) gives formula (7). Formula (7) shows that the robot translates by a distance and rotates by an angle; in this function, we only consider the variable quantities. According to the mean value theorem of integrals, we obtain formula (8), and then formula (9), whose terms involve the angular resolution of the laser sensor and the number of scanned points in the scanning range. Formula (8) can be rearranged into formula (10), which is in turn rearranged into formula (11).

Formula (11) is then rearranged into formula (12), which can be further rewritten as formula (13).

Under the condition of counterclockwise rotation, formula (13) can be expressed as formula (14).

According to the conditions of clockwise or counterclockwise rotation, we obtain formula (15), in which the left-hand side is the sum of the variations of the polar radius and the polar angle of each correspondence, the metric weight balances the variation of the polar radius against that of the polar angle, and the bounds denote the maxima of the counterclockwise and clockwise rotation angles. Furthermore, because the number of correspondences is variable, we finally calculate the average variation of the norm, as shown in formula (16).

The above derivation assumes that correct correspondences have been established, although in practice they are still unknown, so several iterations are needed. Valid correspondences between the reference and current scans are established when the norm reaches its minimum at each iteration, and the rotation angle corresponding to that minimum is the heading rotation angle. As the iterations increase, the value of the metric weight is adaptively adjusted; according to formulas (12) and (13), the metric weight can be derived at each iteration as formula (17).

##### 3.3. Projected Circle

In this subsection, a kind of projected circle is introduced, which will be used for the translation estimation. The projected circle is in fact a geometric property of PMWN. From formula (7), if the rotation of the robot is not considered, we obtain formula (18). By expanding formula (18), we can rewrite it as formula (19), in which the number of scanned points in the scan appears. The geometric property of PMWN is shown in Figure 5.

In Figure 5, the marked points are the projections of the translation displacement onto the polar radii of the reference scan. Those formulas guarantee that such projected points must lie on a circle, which is called the projected circle. The projected circle can be formed from the correspondences, and the translation of the mobile robot will be estimated with this projected circle in Section 4.2.
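The geometry behind the projected circle can be checked numerically. In the illustrative Python sketch below, projecting a fixed translation vector onto scan directions produces points that lie on a circle through the origin whose centre is half the translation; when the directions cover the circle uniformly, the translation is recovered as twice the mean of the projected points. This is a simplification of the weighted estimator of Section 4.2, not the paper's exact formulas.

```python
import math

def translation_from_projections(t, angles):
    """Project a fixed translation vector t onto unit directions u(phi);
    the projected points lie on a circle of diameter |t| through the
    origin with centre t/2.  For angles uniformly covering the circle,
    the mean of the projected points equals the centre exactly, so the
    translation is recovered as twice that mean."""
    tx, ty = t
    pts = []
    for phi in angles:
        ux, uy = math.cos(phi), math.sin(phi)
        d = tx * ux + ty * uy            # scalar projection onto u(phi)
        pts.append((d * ux, d * uy))     # projected point on the circle
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return 2 * cx, 2 * cy                # centre doubled = translation
```

With a real scan the directions are the correspondence bearings and the projections come from range differences, so a robust (weighted) circle fit replaces the plain mean.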

#### 4. Pose Estimation Based on PMWN

In this section, PMWN is used to estimate the robot pose (x, y, θ). Assume that the initial pose of the robot is known in world coordinates.

##### 4.1. Rotation Estimation

The rotation estimation is realized by a linear absolute-value function at each iteration. Figure 6 shows the plot of the average norm value against the rotation angle. The rotation of the robot can be estimated by searching for the minimum along the angular direction. However, due to the discretization of the angles, the rotation estimate further needs to be corrected based on a linear absolute-value function.

Generally, the points near the minimum of the curve, shown in the red rectangle in Figure 6, can be approximated by a linear absolute-value function. Therefore, the minimum can be estimated using the model in Figure 7.

The linear absolute-value function is defined as formula (20), which can be classified into two cases.

*Case 1. *When the left neighbor of the discrete minimum is larger than the right one, substituting the corresponding sample values into (20) gives formula (21). The linear absolute-value function attains its minimum at the vertex, and from (21) we get formula (22).

*Case 2. *When the right neighbor of the discrete minimum is larger than the left one, the correction can be calculated as in Case 1, giving formula (23).

Combining (22) with (23), the correction can be defined as formula (24), in which the larger of the two neighboring values is used. The rotation angle of the robot is then given by formula (25), where the angular resolution of the laser sensor converts the correction into an angle.
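The sub-resolution correction can be illustrated with a symmetric absolute-value model fitted to the three samples bracketing the discrete minimum. The function below is a hedged sketch consistent with the use of the larger neighboring value described above, not the paper's exact formulas (24)-(25).

```python
def vmin_subsample(xm, ym, x0, y0, xp, yp):
    """Sub-resolution minimum of a symmetric absolute-value model
    y = a*|x - x_star| + c fitted to three uniformly spaced samples
    (xm, ym), (x0, y0), (xp, yp), where y0 is the discrete minimum.
    The slope a comes from the farther (larger) neighbour; the offset
    of the true vertex from x0 is (ym - yp) / (2a)."""
    h = x0 - xm                      # sample spacing (equals xp - x0)
    a = (max(ym, yp) - y0) / h       # slope from the larger neighbour
    return x0 + (ym - yp) / (2 * a)  # vertex of the fitted V-shape
```

For the rotation estimate, the samples are norm values at consecutive candidate angles, so the returned vertex offset is scaled by the angular resolution of the sensor.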

##### 4.2. Translation Estimation

With regard to translation, our estimation method computes a circle center, which we call the projected circle method. In Section 3.3, the projected circle was derived given the translation; conversely, in this subsection the translation is solved under the condition of known correspondences. The translation of the robot is represented as a vector, and the locations of the projected points can be calculated from a series of corresponding points, as shown in Figure 5. As shown in Figure 8, each correspondence yields two related points on the circle; for example, one scanned point has correspondence points on the circle from the reference and current scans. For all correspondences, the coordinates of the related points on the circle can be calculated by formula (26), where the left-hand sides are the locations of the points on the circle and the right-hand sides involve the correspondences in the reference and current scans, together with a weight [22] given in formula (27). The range of the weight and its variation rate are determined by its two parameters.

In Figure 8, the center location of the projected circle is determined by formula (28), where the divisor is the number of correspondences. In the reference frame, the translation displacement is then given by formula (29).

#### 5. Experimental Evaluation

Extensive experiments are conducted to evaluate the performance of the proposed PMWN-based approach, aimed at determining the accuracy, efficiency, and loop closure error of mapping. In Section 5.1, in order to ensure a comprehensive evaluation, we compare the PMWN method with PSM and Mb-ICP in terms of accuracy and efficiency in 8 different scenes selected from indoor environments. In Section 5.2, we design a closed path consisting of four curved segments and one straight segment; in two different scenes, the laser sensor is placed at three intervals along the same path to test the loop closure of PMWN compared with PSM and Mb-ICP. In Section 5.3, we use PMWN to build two indoor maps.

The laser sensor is a UTM-30LX mounted on the front of the robot, with a measuring range of 0.1–30 m and an angular resolution of 0.25°. PMWN is compiled on the Microsoft VS 2010 software platform under Windows 7. The robot uses a differential drive system. The programs run on an Intel Core i3 processor with 4 GB of RAM. The configuration parameters of the PMWN, PSM, and Mb-ICP algorithms are shown in Table 1. The maximum distance difference decides whether two consecutive points in the reference or current scan belong to the same object; the table also lists the maximum rotation angle, the range of the laser sensor, the weight of Mb-ICP in formula (1), and the metric weight of this paper. Max iterations is the maximum number of iterations for the three approaches, and the termination condition stops the iteration.

##### 5.1. The Scene Experiment

In this subsection, experiments with ground truth information are carried out using 8 different scenes in 5 typical indoor environments to comprehensively compare the proposed approach with PSM and Mb-ICP.

*Environment 1*. An artificial environment including 2 scenes. In order to illustrate the correspondence establishment problem of PSM and Mb-ICP with respect to convergence rate, Scene 1 and Scene 2 are artificial structured environments where the laser sensor placements are precalculated.

*Environment 2*. A complex room environment characterized by disordered objects and narrow space, such as a laboratory (Scene 3) and an office (Scene 4), used to show the performance of PMWN, PSM, and Mb-ICP in such conditions.

*Environment 3*. An open environment with few objects and open space, including 2 scenes: Scene 5 is a corridor and Scene 6 is a hall, used to show the performance of PMWN, PSM, and Mb-ICP in open environments.

*Environment 4*. A dynamic environment including some moving objects. Scene 7 tests the translation and rotation estimation of PMWN, PSM, and Mb-ICP in a dynamic environment.

*Environment 5*. A special environment including a long wall with few scanning points in one coordinate direction (Scene 8).

Our experiments compare the performance of the proposed approach with PSM and Mb-ICP in terms of pose estimation accuracy, running time, and convergence rate, because PSM and Mb-ICP are classic methods similar to PMWN. The correspondence establishment results for the current and reference scans of each scene are shown in Figure 9 for Environment 1, Figure 11 for Environment 2, Figure 12 for Environment 3, Figure 13 for Environment 4, and Figure 14 for Environment 5. The odd rows of Figures 9 and 11–14 show the scenes, where the blue points are the reference scans, the red points are the current scans, and the units of both axes are cm. The even rows of those figures show the worst-case convergence procedure for PMWN, PSM, and Mb-ICP, respectively, where the pose components x (stars), y (circles), and θ (triangles) are expressed in cm and °, and the unit of the horizontal axis is ms. The pose estimation results are shown in Table 2, and some statistical information is shown in Table 3.

**(a) Pose estimation of PMWN**

**(b) Pose estimation of PSM**

**(c) Pose estimation of Mb-ICP**

**(d) Convergence procedure of PMWN**

**(e) Convergence procedure of PSM**

**(f) Convergence procedure of Mb-ICP**

**(g) Pose estimation of PMWN**

**(h) Pose estimation of PSM**

**(i) Pose estimation of Mb-ICP**

**(j) Convergence procedure of PMWN**

**(k) Convergence procedure of PSM**

**(l) Convergence procedure of Mb-ICP**

According to Figure 9(b), the pose estimation of PSM fails because the polar radii of the current scan (red points) in the red frame are approximately equal to the polar radii of the reference scan (blue points) in the blue frame, as shown in Figure 10, in which the reference and current scans of Figures 9(a)–9(c) are converted to the polar coordinate frame. According to the iterative procedure of PSM shown in Figure 9(e), the initial rotation estimate is 50°, and the iteration converges to an incorrect result. In Figure 9(h) (Scene 2), this situation occasionally occurs for PSM. However, the PMWN and Mb-ICP approaches both obtain correct estimates in these scenes, where the convergence times of Mb-ICP for Scene 1 and Scene 2 are 275.4 ms and 301.6 ms while those of PMWN are 52.8 ms and 48 ms, respectively. Therefore, Figures 9(d), 9(f), 9(j), and 9(l) show that the convergence rate of PMWN is better than that of Mb-ICP.

**(a) Pose estimation of PMWN**

**(b) Pose estimation of PSM**

**(c) Pose estimation of Mb-ICP**

**(d) Convergence procedure of PMWN**

**(e) Convergence procedure of PSM**

**(f) Convergence procedure of Mb-ICP**

**(g) Pose estimation of PMWN**

**(h) Pose estimation of PSM**

**(i) Pose estimation of Mb-ICP**

**(j) Convergence procedure of PMWN**

**(k) Convergence procedure of PSM**

**(l) Convergence procedure of Mb-ICP**

**(a) Pose estimation of PMWN**

**(b) Pose estimation of PSM**

**(c) Pose estimation of Mb-ICP**

**(d) Convergence procedure of PMWN**

**(e) Convergence procedure of PSM**

**(f) Convergence procedure of Mb-ICP**

**(g) Pose estimation of PMWN**

**(h) Pose estimation of PSM**

**(i) Pose estimation of Mb-ICP**

**(j) Convergence procedure of PMWN**

**(k) Convergence procedure of PSM**

**(l) Convergence procedure of Mb-ICP**

**(a) Pose estimation of PMWN**

**(b) Pose estimation of PSM**

**(c) Pose estimation of Mb-ICP**

**(d) Convergence procedure of PMWN**

**(e) Convergence procedure of PSM**

**(f) Convergence procedure of Mb-ICP**

**(a) Pose estimation of PMWN**

**(b) Pose estimation of PSM**

**(c) Pose estimation of Mb-ICP**

**(d) Convergence procedure of PMWN**

**(e) Convergence procedure of PSM**

**(f) Convergence procedure of Mb-ICP**

Scene 3 and Scene 4 are correspondence establishment experiments in complex environments. In terms of correspondence errors, PSM's pose estimation errors are larger than those of PMWN and Mb-ICP, while the correspondence results of PMWN are similar to Mb-ICP's, as seen from Figures 11(a)–11(c) and 11(g)–11(i). In Scene 3 and Scene 4, large correspondence errors occur at the initial iteration stage of PSM in the orientation estimation, owing to a situation similar to that of Figure 10, which results in a longer convergence time in Figures 11(e) and 11(k). Although the PMWN and Mb-ICP methods both have small estimation errors in these scenes, the convergence rate of PMWN is better than that of Mb-ICP in complex environments.

Scene 5 and Scene 6 are open environments. PSM and Mb-ICP obtain convergent results, while PMWN is nearly convergent for pose estimation in Scene 5. The pose estimation errors of PMWN, PSM, and Mb-ICP are (−0.9 cm, −1.7 cm, 1.84°), (0.3 cm, −5.2 cm, 1.62°), and (0.4 cm, −54.3 cm, 1.48°) from Table 2, respectively. The errors in the second translation component are larger than the others, so all the approaches are affected by such open scenes owing to fewer objects in front, as seen in Figures 12(a)–12(c). In Scene 6, PMWN, PSM, and Mb-ICP all obtain good pose estimation results in Figures 12(g)–12(i), but their convergence times are 41 ms, 116 ms, and 87 ms, respectively, in Figures 12(j)–12(l). Clearly, the convergence rate of PMWN is faster than those of the other two approaches.

Experiments in Environment 4 are carried out in Scene 7 with a few moving objects. The laser sensor displacement combines translation with rotation (50 cm, 20 cm, 20°) while dynamic objects walk at a speed of 3 km/h. The pose estimation results of PMWN, PSM, and Mb-ICP are shown in Figures 13(d)–13(f). All three approaches can efficiently deal with such dynamic factors without a special detection and tracking algorithm.

In a corridor environment, the scanned points can be unevenly distributed when the robot faces the wall. In such an extreme case (Scene 8), both PSM and Mb-ICP have large pose estimation errors because the scanned points are distributed densely along one coordinate axis and sparsely along the other, leaving insufficient correspondences along the sparse axis. However, PMWN achieves a smaller pose estimation error than both PSM and Mb-ICP, at the cost of more iterations and a longer running time.

The estimation results of the comparison experiments on the 8 scenes are given in Table 2. In order to obtain valid data, all the experiments above are carried out under the same conditions, including given translations and rotations. Table 2 shows that the pose estimation errors of PMWN are obviously smaller than those of PSM and Mb-ICP in the 8 scenes, and PMWN is more stable than both approaches in the artificial, open, and special environments of this paper, such as Scene 1, Scene 5, and Scene 8. As far as average iterations are concerned, PMWN needs only 115 iterations while PSM and Mb-ICP need 180 and 268, respectively. Furthermore, the average runtime of PMWN is 59.5 ms, the least of these approaches, owing to the use of the metric weight. In particular, although Mb-ICP has a high convergence rate in Scene 5 and Scene 8, its pose estimation error is large in both scenes. In most scenes, Mb-ICP is slower than PMWN and PSM.

In consideration of convergence rate and errors in the above 8 scenes, Table 3 shows the performance of the three approaches in different scenes.

##### 5.2. Loop Closure Experiments

Loop closure is one of the key steps for mapping unknown environments with a mobile robot, so we choose it as the evaluation criterion for the comparison experiments in this subsection. In order to verify that the PMWN algorithm can obtain correct pose estimates even if the robot moves along different tracks, comparison experiments are conducted by taking the same path, containing four curved segments and one straight segment, in two different indoor environments. We drew a series of footprints, marked with numbers from 1 to 115, to form a closed path on a flat board of size 500 cm × 150 cm, as shown in Figure 15. This board was then placed in two different indoor environments, a laboratory room and a hall. In both scenes, to simulate pose estimation at different speeds, footprints were sampled sequentially along the path at two different intervals, denoted Series 1 and Series 2. The initial pose in the iteration procedure is deliberately set to a fixed value.

We compare the performance of PMWN with those of PSM and Mb-ICP in terms of map closure, map distortion, precision of pose estimation, and runtime. Maps of the hall environment are shown in Figures 16 and 17, and maps of the laboratory are displayed in Figures 18 and 19. These maps are built by PMWN, PSM, and Mb-ICP, respectively, in the odd rows of Figures 16–19. Below the maps, comparisons of the pose estimates with the true poses are given for PMWN, PSM, and Mb-ICP, respectively, where dotted lines and solid lines indicate the true data and the estimated values, and the blue, red, and black lines denote the pose estimates in x, y, and θ, respectively. The solid red curve denotes the estimated track of the moving sensor in Figures 16(a)–16(c) to 19(a)–19(c). The path on the board should theoretically be absolutely closed; therefore, the error of the closed loop is an important criterion for evaluating localization algorithms. The loop closure errors of the maps and the runtimes for Series 1 and 2 are given in Table 4.

*Figure 16: Hall environment, Series 1. Maps built by (a) PMWN, (b) PSM, and (c) Mb-ICP; pose estimation versus true pose for (d) PMWN, (e) PSM, and (f) Mb-ICP.*

*Figure 17: Hall environment, Series 2. Maps built by (a) PMWN, (b) PSM, and (c) Mb-ICP; pose estimation versus true pose for (d) PMWN, (e) PSM, and (f) Mb-ICP.*

*Figure 18: Laboratory environment, Series 1. Maps built by (a) PMWN, (b) PSM, and (c) Mb-ICP; pose estimation versus true pose for (d) PMWN, (e) PSM, and (f) Mb-ICP.*

*Figure 19: Laboratory environment, Series 2. Maps built by (a) PMWN, (b) PSM, and (c) Mb-ICP; pose estimation versus true pose for (d) PMWN, (e) PSM, and (f) Mb-ICP.*

The maximum pose estimation errors of PMWN are smaller than those of PSM and Mb-ICP, as shown in Figures 16(d)–16(f). The estimated track (red line) of PMWN is closer to the actual track than those of PSM and Mb-ICP in Figures 16(a)–16(c). Moreover, the runtime of PMWN is shorter than those of PSM and Mb-ICP, although the loop closure errors of PMWN are slightly larger than those of Mb-ICP, as shown in Table 4.

In Figure 17(b), the PSM pose estimation for Series 2 failed on the straight segment for the same reason as in Scene 1 of the experiment in Section 5.1, although it is correct on the curved path. In Figure 17(c), the map built by Mb-ICP is totally distorted because the environment consists of many segments, so many points of the reference scan cannot find their correct correspondences in the current scan. Although the map built by PMWN is slightly blurry, its estimated track (red line) is close to the real one in Figure 17(a), and the loop closure error and runtime of PMWN are smaller than those of PSM and Mb-ICP in Table 4.

In Figures 18(a)–18(c), the clearest map is Figure 18(c), built by Mb-ICP on the Series 1 path, and the loop closure error of the Mb-ICP map is the smallest. However, the runtime of Mb-ICP is much longer than those of PMWN and PSM. The map built by PMWN is slightly blurry, but it is much clearer than that of PSM and still adequately depicts the real environment.

In Figures 19(a)–19(c), the map quality of PMWN is better than those of PSM and Mb-ICP in Series 2, and its estimated track (red line) is closer to the real track. The maximum pose estimation errors of PMWN are smaller than those of PSM and Mb-ICP, as shown in Figures 19(d) and 19(e), and its loop closure errors and runtime are also smaller, as shown in Table 4.

Table 4 lists the loop closure errors and runtimes in the hall environment and the laboratory environment, respectively. Considering map closure, map distortion, pose estimation precision, and runtime together, Table 5 summarizes the performance of the three algorithms in the different scenes.

##### 5.3. Map Building in the Indoor Environment

Mapping experiments with PMWN are conducted in two indoor environments, a corridor and a hall, with a differential wheeled robot. The mapping algorithm employs a Kalman filter that fuses the PMWN pose estimate with odometry [23]. Performance is measured by the end-point localization error and the map distortion. The robot runs three cycles in each environment to verify the mapping ability of the proposed algorithm. The results are shown in Figure 20, where blue, red, and green dots denote the robot paths of the first, second, and third cycles, respectively, and black points represent the environments as metric maps. The start point location is set . The actual end point locations are (92.1 cm, 140.7 cm) in Figure 20(a) and (156.7 cm, 120.6 cm) in Figure 20(b), while the estimated end point locations are (65.3 cm, 118.4 cm) in Figure 20(a) and (195.6 cm, 108.6 cm) in Figure 20(b) after the experiments are completed. The end-point location errors are therefore (26.8 cm, 22.3 cm) in Figure 20(a) and (−38.9 cm, 12 cm) in Figure 20(b), while the robot travels about 200 m in each environment. The end-point location error is thus small, even though error accumulates to some extent during mapping. The maximum robot pose estimation errors are (6.4 cm, 7.2 cm, 6.1°) and (7.2 cm, 7.1 cm, 7.2°), occurring as the robot rounds corners during mapping, in Figures 20(a) and 20(b), respectively. Both the end-point location errors and the pose estimation errors are acceptable for coarse localization of a mobile robot operating in indoor environments. The whole maps are closed, and their structures are described correctly and without distortion by the PMWN-based mapping algorithm.
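The fusion step can be sketched as a standard linear Kalman filter cycle, under the simplifying assumption (not spelled out in the text or in [23]) that odometry provides the prediction increment and the PMWN pose is treated as a direct observation of the full state with an identity measurement model; all matrices and values below are illustrative:

```python
import numpy as np

def kf_fuse(x, P, u, z, Q, R):
    """One predict-update cycle: predict the pose (x, y, theta) with the
    odometry increment u, then correct it with the scan-matching pose z.
    P, Q, R are the state, process, and measurement covariances."""
    # predict: dead reckoning with the odometry increment
    x_pred = x + u
    P_pred = P + Q
    # update: scan-matching pose observed directly (H = I)
    K = P_pred @ np.linalg.inv(P_pred + R)   # Kalman gain
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(3) - K) @ P_pred
    return x_new, P_new

# illustrative step: odometry says "moved 1 unit in x",
# scan matching observes a slightly different pose
x0 = np.zeros(3)
P0 = 0.01 * np.eye(3)
u = np.array([1.0, 0.0, 0.0])
z = np.array([1.2, 0.1, 0.0])
Q = 0.01 * np.eye(3)
R = 0.01 * np.eye(3)
x1, P1 = kf_fuse(x0, P0, u, z, Q, R)
```

With these covariances the gain is 2/3, so the fused estimate lands between the odometry prediction and the scan-matching observation, weighted toward whichever source is currently more trusted.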

**(a) Corridor**

**(b) Hall**

#### 6. Conclusions

In this work, we presented a PMWN-based scan matching approach for pose estimation. PMWN considers both the translation and the rotation of the robot pose during the correspondence establishment process, which gives it an advantage over the classical PSM algorithm, as PSM only considers the rotation effect. Compared with the Mb-ICP scan matching method, PMWN defines an adaptive weight in the distance metric that evolves as the iterative optimization proceeds, which ensures that correspondences are established more correctly. The computational complexities of the translation estimation and rotation estimation are and , respectively, where is the threshold of the matching window. Real-environment experiments were used to test the performance of PMWN. The scan matching experiments in 8 different indoor scenes demonstrate that PMWN has a faster convergence rate and shorter runtime with large moving steps than PSM and Mb-ICP. The scan matching experiments with a known track in two different scenes demonstrate that PMWN performs better for different footprint series than PSM and Mb-ICP. The usability of the proposed approach was tested in a third experiment by building maps with PMWN in a corridor environment and a hall environment. According to the experimental results, the following conclusions can be drawn:

(1) Compared with PSM and Mb-ICP, our approach performs better in convergence rate and runtime, so it can meet real-time requirements.

(2) The map built with our scan matching method achieves loop closure better under large footstep intervals, so our method is more suitable for a high-speed robot than PSM and Mb-ICP.

In future work, the tracking and tagging of moving objects will be explored further. Because PMWN is more efficient than conventional matching approaches, it would also be interesting to apply it to the 3D scan matching problem, for example, 4-DOF 3D scan matching.

#### Competing Interests

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence this work, and no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (61305103 and 61473103) and the Natural Science Foundation of Heilongjiang Province of China (QC2014C072 and F2015010).