Journal of Sensors
Volume 2016, Article ID 1963450, 8 pages
Research Article

Detection and Tracking of Road Barrier Based on Radar and Vision Sensor Fusion

Department of Mechanical Engineering, Ajou University, Suwon 16499, Republic of Korea

Received 19 June 2016; Accepted 16 August 2016

Academic Editor: Antonio Fernández-Caballero

Copyright © 2016 Taeryun Kim and Bongsob Song. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Detection and tracking algorithms for road barriers, including tunnels and guardrails, are proposed to enhance the performance and reliability of driver assistance systems. Although the road barrier is one of the key features for determining a safe drivable area, it may be recognized incorrectly due to performance degradation of commercial sensors such as radar and monocular cameras. Two frequent cases among many challenging problems with commercial sensors are considered. The first case is that few radar tracks for the road barrier are detected, depending on the material type of the road barrier. The second is the inaccuracy of the relative lateral position measured by radar, which results in a large variance of the distance between the vehicle and the road barrier. To overcome these problems, detection and estimation algorithms for the tracks corresponding to the road barrier are proposed. Then, a tracking algorithm based on a probabilistic data association filter (PDAF) is used to reduce the variation of the lateral distance between the vehicle and the road barrier. Finally, the proposed algorithms are validated via field test data, and their performance is compared with that of the road barrier measured by lidar.

1. Introduction

Driver assistance systems (DAS) such as adaptive cruise control (ACC), forward collision warning, and lane departure warning systems have been commercialized on the market [1]. They have evolved into more intelligent DAS such as automatic emergency braking (AEB), lane change assistance (LCA), and lane keeping assistance (LKA) systems [2]. As prototypes of highly automated vehicles have recently been introduced in the media, the reliability of their performance becomes more important. That is, once a false decision is made by the computer or vehicle, the driver's trust in the system is lowered, which may result in the system not being used at all. The reliability of the decision depends mainly on accurate detection and recognition of multiple obstacles and vehicles. For instance, Honda Motor Company had to recall certain model year 2014-2015 Acura vehicles with AEB in the United States. The reason was that the collision mitigation braking system (CMBS) could inappropriately interpret certain roadside infrastructure, such as iron fences or metal guardrails, as obstacles and unexpectedly apply the brakes [3]. Furthermore, NHTSA in the United States investigated complaints alleging unexpected braking incidents of the autonomous braking system in Jeep Grand Cherokee vehicles with no visible objects on the road [3].

Detection and tracking algorithms for a road barrier, which may also be called a road border or boundary in the literature, depend on the sensor configuration and the models used for the road barrier. Most sensor configurations are a single sensor or a combination of radar [1, 2, 4, 5], camera [6], and lidar (or laser scanners) [7, 8] to recognize the drivable area via reflections from guardrails and curbs. Extended objects such as the road and road barrier are described by clothoid, circular, and elliptical models, and their tracking algorithms are based on the Kalman filter, the probabilistic data association filter (PDAF), and the interacting multiple model (IMM) PDAF [1, 2, 4, 9]. In this study, it is assumed that only a front radar and a monocular camera are used for detection and tracking of the road barrier. Although additional sensors could be implemented for better performance, or lidar could be used as in the literature, the sensor configuration is restricted from the viewpoint of commercialization in the near future. For instance, sensor cost, robustness to weather, installation inside the bumper, and popularity in the automotive market are considered in choosing the sensor configuration.

The contribution of this paper is to enhance the reliability of road barrier detection when only a few tracks for the road barrier are generated by the radar and to improve the lateral position accuracy of the road barrier. Since the performance of a commercial radar relies on the material and geometry of tunnels and guardrails, different numbers of tracks are generated depending on the driving environment. Thus, estimation of stationary tracks outside the detection range is proposed for better road barrier detection. Furthermore, a tracking algorithm for the road barrier based on a probabilistic data association filter (PDAF) is proposed to reduce the variation of the lateral offset, that is, the lateral distance between the ego vehicle and the road barrier.

2. Problem Statement

When commercial radars for driver assistance systems such as ACC and AEB are used, two challenging problems arise, which are considered in this paper. A normal detection scenario is shown in Figure 1(a), and the corresponding tracks are shown in Figure 1(b). Two tracks corresponding to front vehicles are marked as squares, and the others, marked as ×, correspond to the left guardrail in Figure 1(a). However, as shown in Figures 1(c) and 1(d), few radar tracks are generated for the road barrier. The detection performance may depend on the material type and/or shape of the road barrier. This problem may lead to difficulty in determining whether there is a road barrier on either the left or the right side.

Figure 1: Detection characteristics to road barrier by radar.

Next, the radar tracks of the road barrier are compared with the point cloud (magenta) measured by a front lidar sensor in Figures 2(b) and 2(d). As shown in Figures 2(a) and 2(c), the lateral position measured by the radar can deviate considerably from the lidar measurements in the same driving scenario. Consequently, a large variance of the lateral position of the road barrier is expected when only the radar is used to recognize it.

Figure 2: Road barrier detection by radar and lidar.

3. Road Barrier Detection

The proposed road barrier detection based on sensor fusion of radar and monocular camera consists of four steps: selection of a region of interest (ROI), estimation, clustering, and representation. First, the selection of the ROI is roughly described in Figure 3. That is, based on the assumption that the road barrier is placed on either the left or the right side, zone ② is defined with respect to a body-fixed coordinate frame. The road in Figure 3 is modeled as [4]

y_r(x) = φx + (c₀/2)x²,  (1)

where x and y are the longitudinal and lateral position, respectively, in the body-fixed coordinate frame of Figure 4, c₀ is the curvature, and φ is the angle between the longitudinal axis of the vehicle and the road lane measured by the monocular camera, as shown in Figure 4. Then, zone ② is written as

zone ② = {(x, y) : |y − y_r(x)| ≤ y_off},  (2)

where y_off is a lateral offset which determines the width of zone ②.
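As a concrete illustration, the ROI test for zone ② can be sketched as follows. This is a minimal sketch assuming the camera-based road model y_r(x) = φx + (c₀/2)x²; the half-width value Y_OFF is an illustrative assumption, not a value from the paper.

```python
import numpy as np

Y_OFF = 1.5  # lateral half-width of zone 2 [m] (assumed value)

def road_lateral(x, phi, c0):
    """Lateral position of the road model at longitudinal distance x."""
    return phi * x + 0.5 * c0 * x ** 2

def in_zone2(x, y, phi, c0, y_off=Y_OFF):
    """True if a track at (x, y) lies within +/- y_off of the road model."""
    return abs(y - road_lateral(x, phi, c0)) <= y_off
```

With φ and c₀ taken from the camera lane model, each radar track is kept or discarded by a single call to `in_zone2`.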

Figure 3: Region of interest (ROI) for road barrier detection.
Figure 4: Definitions of lateral offset () and projection point.

Next, based on the detection range of the radar, zone ③ in Figure 3 is defined for the estimation of stationary tracks. Before the estimation, a motion attribute determining whether a track in zone ② is stationary or dynamic is decided by checking whether the over-the-ground speed of the track is close to zero, that is,

|ṙ_i + v cos θ_i| < ε,  (3)

where v is the vehicle velocity compensated by the yaw rate γ, ṙ is the range rate, θ_i is the azimuth angle of the ith track, and the subscript i stands for the ith radar track. It is remarked that there is uncertainty in the detection range of the radar, so it is essential to divide the regions into those to be tracked and those to be estimated. Once a stationary track in zone ② enters zone ③, it is estimated based on a discrete Kalman filter as follows [10]:

X̂_k^- = A X̂_{k-1},  P_k^- = A P_{k-1} Aᵀ + Q,  (4)

where X = [x y v_x v_y]ᵀ, x and y denote the relative longitudinal and lateral position, respectively, and v_x and v_y are the relative longitudinal and lateral velocity.

The measurement update equations are given by

K_k = P_k^- Hᵀ (H P_k^- Hᵀ + R)^{-1},
X̂_k = X̂_k^- + K_k (z_k − H X̂_k^-),  (5)
P_k = (I − K_k H) P_k^-,

where z_k is the measured relative position and R is the measurement noise covariance.

Since a constant velocity (CV) model is considered, the system matrix and measurement matrix are written as [10, 11]

A = [1 0 T 0; 0 1 0 T; 0 0 1 0; 0 0 0 1],  H = [1 0 0 0; 0 1 0 0],  (6)

where T is the sampling time.
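The motion-attribute test and the CV Kalman filter described above can be sketched as follows. The threshold EPS, sampling time T, and noise covariances Q and R are illustrative assumptions, and the stationary test compensates the range rate only by the ego speed projected along the line of sight, a simplification of the yaw-rate-compensated form.

```python
import numpy as np

EPS = 0.5   # stationary-speed threshold [m/s] (assumed)
T = 0.05    # sampling time [s] (assumed)

A = np.array([[1, 0, T, 0],
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # CV system matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # position-only measurement matrix
Q = 0.01 * np.eye(4)   # process noise covariance (assumed)
R = 0.25 * np.eye(2)   # measurement noise covariance (assumed)

def is_stationary(range_rate, ego_speed, azimuth=0.0):
    """Motion attribute: the track is stationary when its over-the-ground
    speed, approximated from the measured range rate plus the ego speed
    projected along the line of sight, is near zero."""
    return abs(range_rate + ego_speed * np.cos(azimuth)) < EPS

def kf_predict(x, P):
    """Time update of the state and error covariance."""
    return A @ x, A @ P @ A.T + Q

def kf_update(x, P, z):
    """Measurement update with a relative-position measurement z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Once a stationary track leaves the detection range, only `kf_predict` is applied, which propagates the track at constant velocity until it is deleted or reacquired.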

In the third step, clustering, in order to group the tracks corresponding to the road barrier among the stationary tracks, including the estimated tracks, projection points are calculated as follows (see the circles in Figure 4):

p_i = (x_i, y_r(x_i)),  (7)

where the subscript i stands for the ith track of radar; that is, each track is projected laterally onto the road model of (1). After that, the projection points are classified as left or right if they are positioned in zone ②. If the distance between the ith and (i+1)th projection points is less than a threshold d_max, i is increased; if the distance is greater than d_max, a breakpoint is generated at the ith point as follows:

b = i  if  ‖p_{i+1} − p_i‖ > d_max.  (8)
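The breakpoint search over projection points sorted by longitudinal position can be sketched as follows; D_MAX is a hypothetical gap threshold, since the paper's value is not given here.

```python
D_MAX = 10.0  # maximum gap between consecutive projection points [m] (assumed)

def find_breakpoints(xs, d_max=D_MAX):
    """Return the indices i where the gap between consecutive projection
    points xs[i] and xs[i+1] exceeds d_max, splitting the sorted points
    into clusters; xs must be sorted by longitudinal position."""
    breaks = []
    for i in range(len(xs) - 1):
        if xs[i + 1] - xs[i] > d_max:
            breaks.append(i)
    return breaks
```

Each index returned marks the end of one cluster of projection points; the tracks between two breakpoints form one road barrier candidate.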

Finally, if there are two projection points at the two breakpoints b₁ and b₂, all ith tracks satisfying the following inequality are regarded as the road barrier [2]:

x_{b1} ≤ x_i ≤ x_{b2}.  (9)

Considering the y-coordinates as erroneous in comparison with the x-coordinates, the clothoid model, which can be approximated by a second-order polynomial, is calculated as

y_rb(x) = a₀ + a₁x + a₂x²,  (10)

where a₀, a₁, and a₂ are obtained by least squares over the tracks satisfying (9). It is noted that the calculation of the clothoid model corresponds to the creation of the road barrier.
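The second-order polynomial (clothoid approximation) can be fitted with an ordinary least squares fit; the coordinates used in the test below are illustrative, not measured data.

```python
import numpy as np

def fit_clothoid(xs, ys):
    """Fit y = a0 + a1*x + a2*x^2 to the clustered barrier tracks by
    least squares. np.polyfit returns the highest-degree coefficient
    first, so the tuple is reordered to (a0, a1, a2)."""
    a2, a1, a0 = np.polyfit(xs, ys, 2)
    return a0, a1, a2
```

Fitting in the x-direction reflects the assumption above that the lateral (y) coordinates are the noisy ones while the longitudinal (x) coordinates are comparatively accurate.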

The procedure to detect the road barrier is shown in Figure 5. First, six of the seven tracks coming from the radar are classified as stationary objects. Then, they are projected onto the x-axis along the road model, and the corresponding projection points are shown as circles in Figure 5(a). Next, based on the distance between consecutive projection points in (7), two breakpoints are determined as shown in Figure 5(b), and the six tracks are thus classified as the road barrier based on (9). Then the clothoid model in (10) is determined and shown as a solid line in Figure 5(c). After a few seconds, the closest stationary track in zone ② enters zone ③ (refer also to Figure 3); it then moves out of the detection range and is estimated by the discrete Kalman filter (see the diamond mark in Figure 5(d)).

Figure 5: Procedure of road barrier detection.

4. Tracking Road Barrier

Tracking of the road barrier is composed of creation, maintenance, and deletion. The creation step uses the result of the road barrier detection in (10). The maintenance step tracks the lateral offset of the road barrier based on the PDAF, so (10) is rewritten as

y_rb(x) = ŷ_off + a₁x + a₂x²,  (11)

where ŷ_off is the value tracked via the PDAF, which will be derived later.

While the lane information detected by a monocular camera is useful for modeling roads, its performance depends on the light and road conditions. For example, if the lane mark is worn out or covered by soil or snow, false detection of the lane mark may result. Thus, considering the condition of the lane marks, whether the measurement by the monocular camera or an estimated value is used is decided as follows [12]: if the lane confidence C is high enough, the camera measurement is used; otherwise, the road curvature is estimated from the vehicle motion as

c₀ = γ/v,  (12)

where C is the confidence of the right or left lane, γ is the yaw rate, and v is the vehicle speed.
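The lane-source decision can be sketched as follows. The confidence threshold C_TH and the kinematic fallback c₀ = γ/v follow the description above; the speed floor is a guard against division by zero added purely for illustration.

```python
C_TH = 2  # minimum lane-confidence level for trusting the camera (assumed)

def road_curvature(cam_c0, confidence, yaw_rate, speed):
    """Return the camera lane curvature when lane confidence is high,
    otherwise fall back to the kinematic estimate yaw_rate / speed."""
    if confidence >= C_TH:
        return cam_c0                   # trust the camera lane model
    return yaw_rate / max(speed, 0.1)   # kinematic fallback (speed floor assumed)
```

The fallback degrades gracefully when lane marks are worn or occluded, at the cost of reflecting the vehicle's own motion rather than the road geometry ahead.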

The PDAF is based on a discrete Kalman filter, and the state variable is defined as

X = [y_off  ẏ_off]ᵀ,  (13)

where y_off and ẏ_off are the relative lateral position and velocity, respectively. The time update of the state variable and the error covariance matrix is

X̂_k^- = A X̂_{k-1},  P_k^- = A P_{k-1} Aᵀ + Q.  (14)

The system and measurement matrices are as follows:

A = [1 T; 0 1],  H = [1 0],  (15)

where T is the sampling time. If the number of projection points between the two breakpoints is M, the measurement is defined as

z_k = {z_k^1, z_k^2, …, z_k^M},  (16)

where M means the number of projection points.

In the gating region it has to be decided which measurements (i.e., clustered projection points) are associated with the existing track. The corresponding residual and residual covariance matrix are calculated as

ν_k^j = z_k^j − H X̂_k^-,  S_k = H P_k^- Hᵀ + R.  (17)

All measurements are checked as to whether the normalized residual satisfies the following thresholding condition:

(ν_k^j)ᵀ S_k^{-1} ν_k^j ≤ γ_g.  (18)

After that, the valid measurements in the gating region are combined into a single residual, weighted according to the likelihood values of the corresponding measurements. Finally, the measurement update of the state variable calculates the estimated track using the combined residual [13]:

X̂_k = X̂_k^- + K_k ν_k,  (19)

where ν_k = Σ_j β_j ν_k^j is the combined residual, K_k = P_k^- Hᵀ S_k^{-1} is the filter gain, and β_j is the association probability of the jth validated measurement.
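The PDAF maintenance step can be sketched as follows. This is a simplified sketch: the no-detection probability β₀ and the PDAF spread-of-the-means covariance term are omitted for brevity (the covariance update uses the standard Kalman form), the association weights are normalized Gaussian likelihoods over the gated measurements only, and the gate, sampling time, and noise values are illustrative assumptions.

```python
import numpy as np

T = 0.05                               # sampling time [s] (assumed)
A = np.array([[1.0, T], [0.0, 1.0]])   # CV model for the lateral offset
Hm = np.array([[1.0, 0.0]])            # offset-only measurement matrix
Q = 0.01 * np.eye(2)                   # process noise (assumed)
R = np.array([[0.25]])                 # measurement noise (assumed)
GATE = 6.63                            # chi-square gate, 1 dof, 99% (assumed)

def pdaf_update(x, P, zs):
    """One predict/update cycle on the lateral-offset state, where zs
    is the list of lateral-offset measurements (projection points)."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = Hm @ P_pred @ Hm.T + R
    S_inv = np.linalg.inv(S)
    K = P_pred @ Hm.T @ S_inv
    # gate each residual and keep its Gaussian likelihood as a weight
    nus, likes = [], []
    for z in zs:
        nu = z - (Hm @ x_pred)
        d2 = float(nu.T @ S_inv @ nu)
        if d2 <= GATE:
            nus.append(nu)
            likes.append(np.exp(-0.5 * d2))
    if not nus:                     # no validated measurement: predict only
        return x_pred, P_pred
    w = np.array(likes) / sum(likes)
    nu_comb = sum(wi * ni for wi, ni in zip(w, nus))  # combined residual
    x_new = x_pred + K @ nu_comb
    P_new = (np.eye(2) - K @ Hm) @ P_pred
    return x_new, P_new
```

Because outlying projection points fall outside the gate, a single spurious radar return barely moves the tracked lateral offset, which is the variance-reduction effect sought in this section.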

5. Experimental Validation

As listed in Table 1, a radar and a monocular camera, both commercially available, are installed on a test vehicle, and a front lidar is additionally used for performance comparison. Although a large amount of driving data should be used for validation, a preliminary evaluation of the performance is conducted with 13 minutes of driving data, including the driving scenarios in Figures 1 and 2. Most of the driving data were collected on a highway. It is also noted that driving data corresponding to tollgates, exit zones, and areas without any tunnel or guardrail are not considered for the performance evaluation.

Table 1: Specification of environment sensors.

Based on the detection characteristics of the radar, five environment scenarios on the highway are considered. The first and second scenarios are shown in Figure 1 and are called concrete+steel and concrete guardrail, respectively, in this paper. It is interesting to remark that different detection characteristics for the concrete guardrail are shown in Figures 1(c) and 2. In addition, steel guardrail, curb, and tunnel scenarios are included for validation, as shown in Figure 6. It is thought that most road barriers on highways in Korea can be described by one of these five environment scenarios.

Figure 6: Additional environment scenarios for detection and tracking of road barrier.

The first case, shown in Figures 1(c) and 1(d), is revisited; that is, few tracks for the road barrier are generated by the radar. As shown in Figure 7, only two tracks with respect to the left guardrail are generated at t = 123.7 s. After 0.8 s, the nearest stationary track moves out of the detection range of the radar, as shown in Figure 7(d). The solid lines represent the trajectories of the tracks from t = 123.7 to 124.5 s. Since the track corresponding to the road barrier is located in zone ③ in Figure 3, it becomes a road barrier candidate (see the diamond mark in the figure). There are then still two breakpoints for the road barrier, and the detected road barrier is represented as shown in Figure 7(d).

Figure 7: Road barrier detection via estimation.

Two driving scenarios in which the lateral position of the radar tracks may be inaccurate are considered, namely, when the ego vehicle drives along a tunnel or a guardrail, as shown in Figures 8(a) and 8(c). The proposed tracking algorithm for the road barrier is compared with the lidar and with the Kalman filter based approach in [4]. It is shown in Figures 8(b) and 8(d) that the performance of the proposed detection and tracking is closer to that of the lidar. Furthermore, four different approaches are compared with respect to the lateral offset in Figure 9. Their relative quantitative performances with respect to the lidar are evaluated in terms of the root mean square error (RMSE) of the lateral offset and the recognition accuracy, and they are summarized with respect to the environment scenarios in Table 2.

Table 2: Performance comparison of tracking of road barrier.
Figure 8: Performance comparison of road barrier tracking.
Figure 9: Time response of lateral offset of road barrier.

Two performance measures are used for validation depending on the environment scenarios. The first one is the perception rate of the road barrier (RB), defined as the ratio of the period during which the RB is detected by the proposed algorithm to the period identified manually from camera images. To describe the tracking performance, the RMSE of the lateral offset between the sensor fusion of radar and vision and that of lidar and vision is used as the second performance measure. The performance comparison is summarized in Table 2. It is validated that the proposed algorithm improves the perception of the road barrier particularly when few tracks are generated by the radar, as in the cases of the concrete guardrail, tunnel, and curb on the highway. Furthermore, it is shown that the tracking performance of the proposed algorithm is more robust than the others across the different environment scenarios.
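The RMSE measure used above can be computed as in the following sketch, where the two input sequences are assumed to be time-aligned lateral offsets from the radar-vision fusion and from the lidar reference.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two lateral-offset sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```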

6. Conclusion

Detection and tracking algorithms are proposed to overcome two problems: the first is that few radar tracks are generated for the road barrier, and the second is that the lateral position accuracy frequently degrades within a short time. Estimation and clustering methods are combined to handle the first problem, and a tracking algorithm for the lateral offset of the road barrier based on the PDAF is proposed to deal with the second. The performance is evaluated in comparison with that of lidar and other approaches in the literature. Although it is shown via field test data that the proposed algorithm is good enough to recognize the road barrier without lidar, it still needs to be validated with massive field test data in the near future in order to cover various driving scenarios.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was supported by the Hyundai Motor Company under Grant no. 13RPHMCEL017 and in part by the Formation of Technological Infrastructure Program (N019400027) funded by the Ministry of Trade, Industry and Energy (MOTIE), Republic of Korea.


References

  1. C. Lundquist, L. Hammarstrand, and F. Gustafsson, “Road intensity based mapping using radar measurements with a probability hypothesis density filter,” IEEE Transactions on Signal Processing, vol. 59, no. 4, pp. 1397–1408, 2011.
  2. F. Breyer, C. Blaschke, B. Färber, J. Freyer, and R. Limbacher, “Negative behavioral adaptation to lane-keeping assistance systems,” IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 2, pp. 21–32, 2010.
  3. T. Winkle, Autonomous Driving, Legal and Social Aspects, Springer, Berlin, Germany, 2016.
  4. C. Lundquist, U. Orguner, and F. Gustafsson, “Extended target tracking using polynomials with applications to road-map estimation,” IEEE Transactions on Signal Processing, vol. 59, no. 1, pp. 15–26, 2011.
  5. A. Polychronopoulos, A. Amditis, N. Floudas, and H. Lind, “Integrated object and road border tracking using 77 GHz automotive radars,” IEE Proceedings—Radar, Sonar and Navigation, vol. 151, no. 6, pp. 375–381, 2004.
  6. G. Alessandretti, A. Broggi, and P. Cerri, “Vehicle and guard rail detection using radar and vision data fusion,” IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 1, pp. 95–105, 2007.
  7. J. Han, D. Kim, M. Lee, and M. Sunwoo, “Enhanced road boundary and obstacle detection using a downward-looking LIDAR sensor,” IEEE Transactions on Vehicular Technology, vol. 61, no. 3, pp. 971–985, 2012.
  8. K. R. S. Kodagoda, S. S. Ge, W. S. Wijesoma, and A. P. Balasuriya, “IMMPDAF approach for road-boundary tracking,” IEEE Transactions on Vehicular Technology, vol. 56, no. 2, pp. 478–486, 2007.
  9. R. Schubert, K. Schulze, and G. Wanielik, “Situation assessment for automatic lane-change maneuvers,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 3, pp. 607–616, 2010.
  10. R. Faragher, “Understanding the basis of the Kalman filter via a simple and intuitive derivation,” IEEE Signal Processing Magazine, vol. 29, no. 5, pp. 128–132, 2012.
  11. H. Kim, B. Song, H. Lee, and H. Jang, “Multiple vehicle tracking and estimation for all-around perception,” in Proceedings of the 12th International Symposium on Advanced Vehicle Control (AVEC '14), pp. 480–485, Tokyo, Japan, September 2014.
  12. H.-T. Kim, O. Kwon, B. Song, H. Lee, and H. Jang, “Lane confidence assessment and lane change decision for lane-level localization,” in Proceedings of the 14th International Conference on Control, Automation and Systems (ICCAS '14), pp. 1448–1451, Seoul, Republic of Korea, October 2014.
  13. R. Möbus and U. Kolbe, “Multi-target multi-object tracking, sensor fusion of radar and infrared,” in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 732–737, IEEE, June 2004.