Journal of Advanced Transportation / 2019 / Article

Special Issue: Sensors in Intelligent Transportation Systems

Research Article | Open Access

Volume 2019 | Article ID 3680181 | 11 pages | https://doi.org/10.1155/2019/3680181

A Particle Filter Localization Method Using 2D Laser Sensor Measurements and Road Features for Autonomous Vehicle

Academic Editor: Krzysztof Okarma
Received: 06 Aug 2018
Revised: 11 Nov 2018
Accepted: 17 Dec 2018
Published: 14 Feb 2019

Abstract

This paper presents a method of particle filter localization for autonomous vehicles, based on two-dimensional (2D) laser sensor measurements and road features. To navigate an urban environment, an autonomous vehicle should be able to estimate its location with a reasonable accuracy. By detecting road features such as curbs and road markings, a grid-based feature map is constructed using 2D laser range finder measurements. Then, a particle filter is employed to accurately estimate the position of the autonomous vehicle. Finally, the performance of the proposed method is verified and compared to accurate Differential Global Positioning Systems (DGPS) data through real road driving experiments.

1. Introduction

The development of autonomous driving technology is one of the forefront fields of research, bound to have a significant influence on the automobile industry. Presently, autonomous driving technology is being developed both by traditional automobile companies such as Daimler and Toyota and by information technology (IT) companies such as Uber and Apple. The four essential technologies currently under development are environmental perception, localization, path planning, and vehicle control. Localization may be accomplished using various methods, including the Global Positioning System (GPS), vision systems, the road infrastructure, and road environment maps. Accurate localization is one of the most essential capabilities influencing the performance of a self-driving car. The most popular localization methods use GPS sensors, which are advantageous because of their wide availability on the market and their low price compared to other sensors. However, GPS-based methods do not perform well in shaded areas (overpasses, tunnels, skyscrapers, etc.), where the satellite signal may not be received, which causes a large position error [1, 2]. The operation of autonomous vehicles in an urban environment requires a seamless localization capability with an accuracy of several centimeters or less. To this end, researchers have recently been studying localization methods based on road environment maps, using on-board sensor measurements [3, 4].

A road environment map includes a variety of road features (curbs, road marking, structure, etc.) and helps a vehicle localize itself seamlessly and accurately. Many studies have been performed on map-based localization techniques that use a vision system or three-dimensional (3D) Light Detection and Ranging (LiDAR) sensors, which can acquire point clouds in a single scan [5]. However, the price of such a sensor is too high and the processing of such large amounts of data requires a lot of computational power. On the other hand, localization based on a vision system does not provide depth information as accurately as laser sensors and its performance is significantly affected by the available illumination.

In this paper we use two-dimensional (2D) laser sensor measurements to detect road features such as the curb position and the markings on the road surface. The curb, i.e., the boundary of the road, can be detected using the 2D laser sensor measurements and stored in the road environment map. Various studies have demonstrated the extraction of the curb position by applying a line extraction algorithm, including linear regression, the Hough transform, split-and-merge algorithms, the random sample consensus (RANSAC) algorithm, and the expectation–maximization (EM) algorithm. The Hough transform and the RANSAC algorithm are the most widely used procedures for extracting the curb position [6–8]. The Hough transform, the RANSAC algorithm, and the EM algorithm have been demonstrated to extract a straight line more accurately than other techniques; however, their accuracy comes at a large computational cost [9].

In general, the location of the vehicle is estimated by a Bayesian estimator, such as the Kalman filter, the extended Kalman filter, or a particle filter. The Kalman filter relies on the assumptions that the system is linear and the posterior probability Gaussian, meaning that it achieves a high accuracy in environments that satisfy these assumptions; however, its performance may be degraded in environments with nonlinear dynamics or non-Gaussian posterior probabilities. A particle filter can overcome the disadvantages of the Kalman filter in non-Gaussian environments by representing the posterior probability with samples. However, as the number of samples increases, so does the computational complexity of the estimator [10, 11].

We propose a map-based localization method that uses a 2D laser sensor to detect the road boundary and road markings for autonomous driving in an urban environment. Our main contributions are the following.

(i) We developed a seamless localization method that uses on-board sensor measurements with a reasonable accuracy and price. To detect road features, we used an affordable 2D laser sensor.

(ii) To detect road environment features such as the lane and road markings, we used the reflectivity information of the laser sensor measurements instead of relying only on point measurements. We applied a classification method to the sensor measurements to extract the road feature information and construct a road feature map.

(iii) We employed a method based on a particle filter to calculate the posterior distribution given the road features and estimate the position of the autonomous vehicle. The posterior distribution is generally not Gaussian, meaning that a localization method based on the Kalman filter would be inefficient.

2. Sensor System Configuration

To obtain the road boundaries and markings, we used an LMS-151 2D laser sensor manufactured by SICK. This sensor determines the distance to an object using time-of-flight measurements, i.e., by measuring the propagation time of the infrared light that travels to the object and returns to the receiver at the sensor [12]. It is also possible to measure the reflectivity of a point on the road by determining the intensity of the reflected laser light. The 2D laser sensor was installed at a height of 1.6 m above the ground and set to scan the road surface 10 m in front of the vehicle, as shown in Figure 1.

The moving distance of the vehicle can be measured using the encoder sensor and the inertial measurement unit (IMU). To predict the location of the vehicle in the particle filter method, it is necessary to obtain the moving distance of the vehicle during each sampling interval. The vehicle moving distance and velocity were calculated by Runge–Kutta integration of the dead reckoning obtained from the encoder and inertial measurements [13].
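As an illustration of such a dead-reckoning step, the following is a minimal sketch using a second-order Runge–Kutta (midpoint) update; the state layout, function name, and integration order are our assumptions, since the paper does not specify them:

```python
import math

def dead_reckoning_step(x, y, theta, v, omega, dt):
    """One dead-reckoning update via a midpoint (2nd-order Runge-Kutta) step.

    v     : vehicle speed from the wheel encoder (m/s)
    omega : yaw rate from the IMU (rad/s)
    dt    : sampling interval (s)
    """
    # Evaluate the heading at the midpoint of the sampling interval
    theta_mid = theta + 0.5 * omega * dt
    x += v * dt * math.cos(theta_mid)
    y += v * dt * math.sin(theta_mid)
    theta += omega * dt
    return x, y, theta
```

Compared to a first-order Euler step, evaluating the heading at the interval midpoint reduces the position drift accumulated during turns.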

3. Road Information Extraction by a 2D Laser Sensor

The road environment includes a variety of features, such as curbs, signal lights, lane markings, and guard rails. The locations of the curb and the lane were estimated using the distance and reflectivity measured by the 2D laser sensor. The road boundaries were identified by the characteristic pattern of a long, continuous straight line followed by an abrupt change in its slope. The observed lane markings are painted with a special material that contains glass beads, which gives them a greater reflectivity than the remaining road surface. The retroreflection phenomenon of the glass beads is demonstrated in Figure 2(a), whereas Figure 2(b) shows a section of the experimental site used for the road feature extraction.

We estimate the road boundaries by identifying the curb position based on the measurements of the 2D laser sensor. The Adaptive Breakpoint Detector (ABD) and the Iterative Endpoint Fit (IEPF) algorithm were employed for their high computational speed in extracting straight lines [14]. The ABD algorithm classifies a point as a breakpoint when the gap between consecutive distance measurements is larger than the adaptive threshold circle of radius D_max. Figure 3 shows the threshold circle described by the following equation:

D_max,n = r_{n-1} · sin(Δφ) / sin(λ − Δφ) + σ_r,

where r_{n-1} denotes the distance from the sensor to the previous point on the road surface, Δφ represents the angle between the measurement vectors p_{n-1} and p_n, and λ and σ_r are hyperparameters that control the size of the threshold circle. If the distance ||p_n − p_{n-1}|| between the point data is larger than D_max,n, the points p_{n-1} and p_n are considered to be disconnected and classified as a breakpoint. Table 1 shows the pseudocode of the ABD algorithm, where n is the ordinal number of a point and N is the total number of points.


1:   for n = 1 to N do
2:     D_max,n = r_{n-1} · sin(Δφ) / sin(λ − Δφ) + σ_r
3:     if ||p_n − p_{n-1}|| > D_max,n then
4:       flag(p_n) ← BREAKPOINT and flag(p_{n-1}) ← BREAKPOINT
5:     else
6:       flag(p_n) ← CONTINUOUS
7:     end
8:   end
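A minimal Python sketch of the ABD loop in Table 1 follows; the function name and argument handling are our own, and the adaptive threshold is the expression from [14]:

```python
import math

def adaptive_breakpoints(ranges, d_phi, lam, sigma_r):
    """Flag breakpoints in one ordered 2D laser scan (ABD sketch).

    ranges  : range measurements r_n (m), ordered by bearing
    d_phi   : angular resolution of the scanner (rad)
    lam     : worst-case incidence angle, a tuning parameter (rad)
    sigma_r : range noise margin (m)
    Returns a list of booleans, True where a breakpoint is detected.
    """
    breaks = [False] * len(ranges)
    for n in range(1, len(ranges)):
        # Adaptive threshold circle radius around the previous point
        d_max = ranges[n - 1] * math.sin(d_phi) / math.sin(lam - d_phi) + sigma_r
        # Euclidean gap between consecutive points (law of cosines)
        gap = math.sqrt(ranges[n - 1] ** 2 + ranges[n] ** 2
                        - 2.0 * ranges[n - 1] * ranges[n] * math.cos(d_phi))
        if gap > d_max:
            breaks[n] = True
    return breaks
```

Because the threshold grows with the previous range r_{n-1}, distant points, which are naturally farther apart, are not falsely split into separate clusters.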

We used the IEPF algorithm to extract straight lines from the M groups {G_1, …, G_M} clustered by the ABD algorithm, where M is the total number of groups. For each group, the IEPF algorithm finds the point p_k with the longest perpendicular distance d_max to the line connecting the endpoints of the group, p_1 and p_N. If d_max is larger than the set threshold d_th, the algorithm splits the group at the point p_k into two groups, G_m1 and G_m2. Then, a new line and a new d_max are calculated for one of the divided groups, G_m1. If the perpendicular distance is smaller than d_th, the splitting process is started in the next group, G_m2. The IEPF algorithm is applied to the remaining groups, which enables us to represent the point data of the laser sensor as a set of straight-line segments. When the angular difference between adjacent straight lines is small, they are merged into one segment. Figure 4 presents a schematic diagram of the IEPF algorithm and Table 2 shows its pseudocode.


1:   choose cluster G_m from the set S
2:   while S is not empty
3:     find the line connecting the endpoints p_1 and p_N of G_m
4:     detect the point p_k with maximum distance d_max to the line
5:     if d_max > d_th then split
6:       G_m → G_m1 and G_m2
7:     else
8:       if G_m is the last group, break
9:       go to the next group
10:    end
11:   end
12:   when all sets in S have been checked, merge
     collinear segments
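The split step of the IEPF algorithm can be sketched recursively as follows; this is a simplified illustration in which the merge step and the group bookkeeping of Table 2 are omitted:

```python
import math

def iepf_split(points, d_th):
    """Recursive Iterative Endpoint Fit (split step only) sketch.

    points : list of (x, y) tuples from one ABD cluster
    d_th   : split threshold on perpendicular distance (m)
    Returns a list of segments, each a list of points.
    """
    if len(points) <= 2:
        return [points]
    (x1, y1), (x2, y2) = points[0], points[-1]
    norm = math.hypot(x2 - x1, y2 - y1)
    if norm == 0.0:           # degenerate cluster: endpoints coincide
        return [points]
    # Perpendicular distance of each interior point to the endpoint line
    d_max, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / norm
        if d > d_max:
            d_max, idx = d, i
    if d_max > d_th:
        # Split at the farthest point and recurse on both halves;
        # the split point is shared by the two resulting segments
        return iepf_split(points[:idx + 1], d_th) + iepf_split(points[idx:], d_th)
    return [points]
```

Applied to an L-shaped cluster, the routine returns two segments that meet at the corner point.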

Figure 5(a) shows the extracted road area obtained by sequentially processing the distance measurement data using the ABD and the IEPF algorithms. The black circles in the figure represent the distance measurement data, the red stars are the breakpoints, and the blue line shows the road area extracted by applying the line extraction algorithm. As described, the discrimination of the lane markings on the road is enhanced by the use of a paint that contains glass beads [15]. The classification of the road and the road markings depends on the intensity of the reflected laser pulse [16]. The results of the lane marking classification are shown in Figure 5(b), where the vertical axis represents the reflectivity of the returned laser light. The reflectivity values larger than the threshold value are denoted by red stars, whereas those lower than the threshold value are denoted by black circles. As the figure shows, four road surface markings with high reflectivity were classified, and the location of the vehicle relative to the lane marking can be estimated.
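The reflectivity-based classification can be sketched as follows; the function name and the single scalar threshold are our own simplifications, as the paper does not specify the decision rule beyond thresholding:

```python
def classify_lane_points(points, reflectivity, threshold):
    """Split scan points into lane-marking and road-surface sets by reflectivity.

    points       : list of (x, y) scan points on the road surface
    reflectivity : matching list of returned-intensity values
    threshold    : intensity above which a point is treated as lane marking
    """
    lane, road = [], []
    for p, r in zip(points, reflectivity):
        (lane if r > threshold else road).append(p)
    return lane, road
```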

4. Construction of a Road Feature Map

The road region and the lane markings were extracted using the method described in the previous section. Based on those results, a grid map was constructed with a resolution of 10 cm × 10 cm for each grid cell. Figure 6(a) shows the classification of the grid states, according to the extracted road environmental conditions, whereas Figure 6(b) shows photos of the lane markings on the road and the lane marking data measured by the laser sensor. The information map of the grid state of the road contains data for the following four states:

(i) Road surface: grid cells that represent a traversable road surface.
(ii) Road boundary: grid cells of the road boundary distinguished by the curb.
(iii) Lane marking: grid cells that contain lane markings.
(iv) Unknown space: grid cells beyond the sensor measurements.

Figure 7 depicts the road environment information grid map created by measuring approximately 240 m of the road. The road surface, road boundary, and unknown space in the map are expressed in gray, black, and white, respectively. The cells that contain road markings are shown in red.
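As an illustration of how such a grid map can be indexed and populated, the following sketch uses a sparse dictionary at the stated 10 cm resolution; the numeric state encoding, origin convention, and dictionary representation are our assumptions, not the paper's:

```python
import math

# Illustrative numeric codes for the four grid states; the paper defines
# the states but not their encoding.
ROAD_SURFACE, ROAD_BOUNDARY, LANE_MARKING, UNKNOWN = 0, 1, 2, 3

def world_to_cell(x, y, origin=(0.0, 0.0), res=0.10):
    """Map a world coordinate (m) to a grid index at 10 cm resolution."""
    ix = int(math.floor((x - origin[0]) / res))
    iy = int(math.floor((y - origin[1]) / res))
    return ix, iy

# A sparse grid: cells that were never observed are implicitly UNKNOWN.
grid = {}
for x, y in [(1.23, 0.05), (1.31, 0.05)]:  # two lane-marking returns
    grid[world_to_cell(x, y)] = LANE_MARKING
```

A sparse representation keeps the 240 m map small, since only observed cells are stored while everything else defaults to the unknown state.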

5. Particle Filter Localization

The probabilistic localization method we used employs a Bayesian filter [17], which relies on a belief, i.e., a probability distribution of the vehicle location x_t at a time t, that can be expressed as follows:

bel(x_t) = p(x_t | z_{1:t}, u_{1:t}),

where the sensor measurements up to the time t and the vehicle movements are denoted by z_{1:t} and u_{1:t}, respectively. By using the previous location, x_{t-1}, and the movement, u_t, the location of the vehicle can be expressed in a recursive form:

bel(x_t) = η p(z_t | x_t) ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}.

The predicted distribution, bel¯(x_t), at the following time, t, can be estimated using the motion probability p(x_t | u_t, x_{t-1}) and the belief at the previous position of the vehicle, by the theorem of total probability:

bel¯(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}.

In addition, the current positional probability, bel(x_t), can be calculated by employing the Bayes rule with the normalization constant η and the conditional probability of the measured values, p(z_t | x_t):

bel(x_t) = η p(z_t | x_t) bel¯(x_t).

In this study we consider various features present in real road environments. In many cases, the posterior probability distributions are not Gaussian [18, 19], which is why we use a particle filter, suitable for nonlinear systems, to estimate the location of the vehicle. Our particle filter calculates the posterior probability without assuming the distribution to be Gaussian [10, 11, 19]. The posterior probability in the particle filter is represented by the particles sampled from the non-Gaussian distribution:

X_t := {x_t^[1], x_t^[2], …, x_t^[M]},

where M represents the number of particles and each particle is denoted by x_t^[m], with 1 ≤ m ≤ M. As is the case for other Bayesian filters, a particle filter recursively obtains bel(x_t) from bel(x_{t-1}). Since the belief of a particle filter is represented by the particles, X_t is also recursively obtained from X_{t-1}. The particle filter method comprises five steps: Initialization, Prediction, Update, Estimation, and Resampling; Figure 8 illustrates the importance sampling process for a particle filter [20].

(i) Initialization: the initial position and direction of the vehicle are not known in advance, so the particles are placed at arbitrary positions. The initialization is performed only at the beginning, and the individual weights of the particles are all equal to 1/M.

(ii) Prediction: the following particle locations, x_t^[m], are predicted by applying the vehicle movement, u_t, to the particle locations at the previous time step, x_{t-1}^[m]. The predicted belief can be expressed through the distribution of the moved particles:

x_t^[m] ~ p(x_t | u_t, x_{t-1}^[m]).

(iii) Update: the importance factor of a particle m, i.e., its weight, is denoted by w_t^[m]. The weights are computed by comparing the information measured by the sensor with the measurement information available on the map. By comparing the road boundary and road marking information measured by the laser sensor with the road environment map constructed in advance, the weights can be calculated as follows:

w_t^[m] = p(z_t | x_t^[m]).

(iv) Estimation: by use of the normalization constant η and the weights w_t^[m], we can obtain the belief, i.e., the estimated position of the vehicle, as the weighted mean of the particles:

x̂_t = η Σ_{m=1}^{M} w_t^[m] x_t^[m].

(v) Resampling: the particles with high weights are cloned by creating a new set of particles that excludes those with a low weight. This new set of particles is used for the next prediction, at the time t+1.
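The five steps can be sketched for a one-dimensional state as follows; this is a simplified illustration in which the function names, the likelihood model, and the use of multinomial resampling are our assumptions:

```python
import random

def particle_filter_step(particles, motion, likelihood):
    """One Prediction-Update-Estimation-Resampling cycle for a scalar state.

    particles  : list of states x_{t-1}^[m]
    motion     : function sampling x_t from p(x_t | u_t, x_{t-1})
    likelihood : function returning p(z_t | x_t), e.g. obtained by
                 comparing detected road features with the feature map
    """
    # Prediction: propagate every particle through the motion model
    particles = [motion(x) for x in particles]
    # Update: weight each particle by its measurement likelihood, then normalize
    weights = [likelihood(x) for x in particles]
    eta = sum(weights)
    weights = [w / eta for w in weights]
    # Estimation: weighted mean of the particle set
    estimate = sum(x * w for x, w in zip(particles, weights))
    # Resampling: draw an equally weighted new set, favouring high weights
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, estimate
```

With a likelihood concentrated near the true position, the resampling step rapidly discards particles that disagree with the map, which is the convergence behavior observed in the experiments below.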

6. Results and Discussion

We verified the performance of the proposed particle filter localization method on a public road. Both sides of the public road contain curbs that separate the road region from the nonroad region. The road region has center lines, lanes that help distinguish the direction of travel, and various other road markings. The relative weights of the particles were assigned according to the detected road markings and curbs, causing the particle set to converge or diverge. Figure 9 shows the convergence and divergence of the particles according to the road surface information in several sections of the experiment. The yellow color in the figure represents the detected road markings and curbs, whereas the black color denotes the road feature map information collected a priori, using a 2D laser sensor. The positions of the particles are shown in gray and the positions estimated by the particle filter are shown in red. The measurements obtained using the DGPS, showing the actual position of the vehicle with an accuracy of up to 10 cm, are presented in blue.

The lateral position error shown in Figure 9(a) is small, but the repeated lane markings and curbs caused the particles to diverge longitudinally, and the longitudinal position error between the DGPS and the particle filter estimate is approximately 1.1 m. Figure 9(b) shows a directional arrow marking between 34 m and 36 m of longitudinal and between 28 m and 29 m of lateral position. Road markings that convey road surface information (directional arrow markings, symbol markings, stop lines, etc.) can be used to estimate the longitudinal position of the vehicle. Compared to Figure 9(a), such road markings produce a gradual decrease in the longitudinal position error in Figure 9(b). Further road markings in Figure 9(c) caused the particles to converge, which resulted in a significant reduction in the error between the DGPS and the particle filter position estimate.

The root mean square (RMS) error between the DGPS location measurements and the results obtained by the method presented in this paper is shown for the longitudinal and the lateral position in Figures 10(a) and 10(b), respectively. As Figure 10 shows, the longitudinal error is greater than the lateral error for the initial 40 seconds. The RMS value of the lateral error is 0.125 m, and it does not deviate significantly from this value after the initial 40 seconds have passed. However, the RMS of the longitudinal error equals 1.79 m before the initial 40 seconds have passed and 0.178 m afterwards. Thus, as the figure shows, it takes some time for the longitudinal estimate to converge to the true value. This effect stems from the 2D laser sensor scanning the road surface only in the lateral direction, thereby not providing information in the longitudinal direction. In addition, the road features along the travel direction are often identical, which makes the identification of the longitudinal position difficult.

Our experimental data are divided into three sections: the first, second, and third sections comprise the data obtained during the temporal intervals from 20 s to 60 s, from 70 s to 140 s, and from 150 s to 200 s, respectively. Figure 11 shows the convergence of the particle filter estimates to the DGPS reference in the first section. The present longitudinal errors were significantly reduced by the detection of an arrow marking on the road between 30 s and 40 s. Then, the longitudinal error increased until 45 s, when a new feature was detected which allowed it to converge between 50 s and 60 s. The localization error during the second section is shown in Figure 12. Both the lateral and the longitudinal error maintained the value of approximately 0.1 m for the first 20 seconds of the second section. At that point, the road marking used to distinguish the center line began to widen, which made it difficult to estimate the exact lateral position of the vehicle. Once the vehicle passed the area where the center line widened, the lateral error converged to a small value, which did not happen for the longitudinal error, because no road feature in the longitudinal direction was detected.

The third section, presented in Figure 13, mapped an intersection of the road. As can be seen in Figure 13(a), the error increased in both the lateral and the longitudinal direction between 160 s and 185 s, as the vehicle was approaching the intersection area, where the curb and the lane do not exist. The lack of features in the sensor field of view caused the localization algorithm to assign even weights to all particles, which resulted in the growth of the lateral error. The lateral error greatly diminished around 185 s, after the vehicle passed the intersection and the curb, when the lane markings could be detected again. The error in the longitudinal direction did not converge because no features were found to provide longitudinal information.

As has been shown, the proposed localization algorithm provided good estimates, with an accuracy of approximately 0.1 m, which is equal to the size of the grid cell, whenever a road feature exists in the sensor field of view. In general, lane markings and curbs can be found in most parts of a road, except for the intersections. Therefore, we expect the proposed algorithm to provide good localization estimates in the lateral direction. However, on parts of the road that contained no features providing longitudinal information, it took time for the longitudinal position estimate to converge to the true value. Despite this, the accuracy of 0.1 m in the lateral direction was good enough to enable the vehicle to follow the road successfully. Because accurate lateral estimates are more important than the longitudinal error for the urban navigation of an autonomous vehicle, this method can be expected to successfully determine the location of the vehicle on roads in these environments.

7. Conclusions

We studied a localization method for autonomous vehicles, based on a road environment information map. Many map-based localization methods use expensive 3D LiDAR sensors, which also require a lot of computational power, whereas our proposed method uses a 2D laser sensor as a more affordable alternative. To overcome the difficulties stemming from the smaller number of measurements obtained using the 2D laser sensor, we utilized different types of road features, stored in the form of a grid map. By comparing the road features obtained during the vehicle motion to the information of the stored map and using our proposed particle filter method, we efficiently estimated the location information for the vehicle. The experiments were conducted on a public road, for which we could construct a map in advance, allowing for an evaluation of the localization performance. Another experiment was carried out with an accurate DGPS sensor, and the accuracy of the proposed method was evaluated by comparing the localization results with the accurate DGPS measurements.

Our analysis showed that the lateral error can reach the accuracy required for lateral vehicle control. However, the longitudinal error is greater than the lateral error because suitable road features are not continuously found along the road. Despite this, as soon as a road feature relevant for the longitudinal direction appears, the error converges quickly and the longitudinal localization maintains a good accuracy. The 2D laser sensor used provides a reliable return of the laser light from the road surface, which makes the proposed method more reliable than those that use inexpensive GPS sensors. In conclusion, the proposed method can be expected to provide reliable localization data for autonomous vehicles in an urban environment, it comes at a reasonable price, and it can be combined with other sensing methods, such as camera vision or assistance from the road infrastructure.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Basic Science Research Program and the BK 21 Plus Program (Security Enhanced Smart Electric Vehicle Specialist Education Team) through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (NRF-2018R1A2B6001888 and 31Z20130012993).

References

  1. B. J. Clark and D. M. Bevly, “GPS/INS integration with fault detection and exclusion in shadowed environments,” in Proceedings of the IEEE/ION Position, Location and Navigation Symposium, PLANS, pp. 1–8, Monterey, CA, USA, May 2008. View at: Google Scholar
  2. S. Choi, G. Kim, Y. Kim, and J. Lee, “Outdoor positioning estimation of multi-GPS / INS integrated system by EKF / UPF filter conversion,” Journal of Institute of Control, Robotics and Systems, vol. 20, no. 12, pp. 1284–1289, 2014. View at: Publisher Site | Google Scholar
  3. M. Stubler, J. Wiest, and K. Dietmayer, “Feature-based mapping and self-localization for road vehicles using a single grayscale camera,” in Proceedings of the IEEE Intelligent Vehicles Symposium, IV 2015, pp. 267–272, Seoul, South Korea, July 2015. View at: Google Scholar
  4. N. Suganuma and D. Yamamoto, “Map based localization of autonomous vehicle and its public urban road driving evaluation,” in Proceedings of the IEEE International Symposium on System Integration, pp. 467–471, Nagoya, Japan, December 2015. View at: Google Scholar
  5. J. Ziegler, H. Lategahn, M. Schreiber et al., “Video based localization for Bertha,” in Proceedings of the IEEE Intelligent Vehicles Symposium (IV), pp. 1231–1238, Dearborn, Michigan, USA, June 2014. View at: Publisher Site | Google Scholar
  6. A. Harati and R. Siegwart, “A new approach to segmentation of 2D range scans into linear regions,” in Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2007, pp. 2083–2088, San Diego, CA, USA, November 2007. View at: Google Scholar
  7. Y. Kang, C. Roh, S.-B. Suh, and B. Song, “A lidar-based decision-making method for road boundary detection using multiple Kalman filters,” IEEE Transactions on Industrial Electronics, vol. 59, no. 11, pp. 4360–4368, 2012. View at: Publisher Site | Google Scholar
  8. J.-K. Park and T.-H. Park, “Autonomous navigation system of mobile robot using laser scanner for corridor environment,” Journal of Institute of Control, Robotics and Systems, vol. 21, no. 11, pp. 1044–1049, 2015. View at: Publisher Site | Google Scholar
  9. V. Nguyen, A. Martinelli, N. Tomatis, and R. Siegwart, “A comparison of line extraction algorithms using 2D laser rangefinder for indoor mobile robotics,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1929–1934, Edmonton, Alta., Canada, August 2005. View at: Google Scholar
  10. M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, 2002. View at: Publisher Site | Google Scholar
  11. J.-W. Park, C.-K. Yang, and D.-S. Shim, “Particle filter performance for ultra-tightly GPS/INS integration,” Journal of Institute of Control, Robotics and Systems, vol. 14, no. 8, pp. 785–791, 2008. View at: Publisher Site | Google Scholar
  12. B.-J. Jung, J.-H. Park, T.-Y. Kim, D.-Y. Kim, and H. Moon, “Lane marking detection of mobile robot with single laser rangefinder,” Journal of Institute of Control, Robotics and Systems, vol. 17, no. 6, pp. 521–525, 2011. View at: Publisher Site | Google Scholar
  13. S. W. Jang, J. S. Lee, K. J. Ahn, and Y. Kang, “A study on integration of particle filters and dead reckoning for efficient localization of automated guided vehicles,” in Proceedings of the IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), pp. 81–86, Langkawi, Malaysia, October 2015. View at: Publisher Site | Google Scholar
  14. G. A. Borges and M.-J. Aldon, “Line extraction in 2D range images for mobile robotics,” Journal of Intelligent & Robotic Systems, vol. 40, no. 3, pp. 267–297, 2004. View at: Google Scholar
  15. F. Homm, N. Kaempchen, and D. Burschka, “Fusion of laserscannner and video based lanemarking detection for robust lateral vehicle control and lane change maneuvers,” in Proceedings of the IEEE Intelligent Vehicles Symposium, IV'11, pp. 969–974, Germany, June 2011. View at: Google Scholar
  16. S.-H. Im, J.-H. Im, S.-H. Yoo, and G.-I. Jee, “Pedestrian safety road marking detection using LRF range and reflectivity,” Journal of Institute of Control, Robotics and Systems, vol. 18, no. 1, pp. 62–68, 2012. View at: Publisher Site | Google Scholar
  17. S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, MIT Press, Cambridge, MA, USA, 2005.
  18. S. Thrun, “Particle filters in robotics,” in Proceedings of 17th Annual Conference on Uncertainty in AI, pp. 511–518, Alberta, Canada, 2002. View at: Google Scholar
  19. M. W. Khan, N. Salman, A. Ali, A. M. Khan, and A. H. Kemp, “A comparative study of target tracking with Kalman filter, extended Kalman filter and particle filter using received signal strength measurements,” in Proceedings of the International Conference on Emerging Technologies (ICET), pp. 1–6, Peshawar, Pakistan, December 2015. View at: Publisher Site | Google Scholar
  20. I. Rekleitis, G. Dudek, and E. Milios, “Probabilistic cooperative localization and mapping in practice,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '03), vol. 2, pp. 1907–1912, Taipei, Taiwan, September 2003. View at: Publisher Site | Google Scholar

Copyright © 2019 KyungJae Ahn and Yeonsik Kang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
