International Journal of Vehicular Technology
Volume 2013 (2013), Article ID 749896, 9 pages
http://dx.doi.org/10.1155/2013/749896
Research Article

Lidar Data Analysis for Time to Headway Determination in the DriveSafe Project Field Tests

1Department of Mechanical Engineering, Istanbul Technical University, Istanbul TR-34437, Turkey
2Faculty of Engineering and Architecture, Okan University, Istanbul TR-34959, Turkey

Received 9 October 2012; Revised 21 December 2012; Accepted 30 December 2012

Academic Editor: Tang-Hsien Chang

Copyright © 2013 İlker Altay et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The DriveSafe project was carried out by a consortium of university research centers and automotive OEMs in Turkey to reduce accidents caused by driver behavior. A large amount of driving data was collected from 108 drivers who drove the instrumented DriveSafe vehicle on the same 25 km route of urban and highway traffic in Istanbul. One of the sensors used in the DriveSafe vehicle was a forward-looking Lidar, and its data are used here to determine and record the headway time characteristics of different drivers. This paper concentrates on the analysis of the Lidar data from the DriveSafe vehicle. A simple algorithm that only looks in the forward direction along a straight line is used first, and headway times based on this simple approach are presented for an example driver. A more accurate detection and tracking algorithm taken from the literature is presented later in the paper. Grid-based and point distance-based methods are presented first; then, a detection and tracking algorithm based on the Kalman filter is presented. The results are demonstrated using experimental data.

1. Introduction

The National Highway Traffic Safety Administration (NHTSA) reported six million vehicle crashes in 2005 that resulted in 43,000 deaths and 2.5 million injuries in the USA [1]. Moreover, the International Road Traffic and Accident Database (IRTAD) reported 40,000 deaths and 1.4 million injuries all over Europe in the same year. Driver mistakes caused 90 percent of these crashes [2]. Driving dynamics, that is, the interaction between the vehicle, road conditions, and the driver [3], should therefore be investigated so that accidents caused by driver mistakes can be prevented before they take place. A theoretical model of driving behavior based on a microsimulator is presented in [4], where driver error is separated into perception, decision-making, and action parts. According to [5], thanks to developments in passive and active safety systems, car accidents and fatalities per distance traveled decrease even as the total distance traveled per year increases.

The DriveSafe project was started to reduce accidents caused by driver behavior, with the collaboration of OTAM, Sabanci University, İstanbul Technical University, Ford A.Ş., Renault A.Ş., and Tofas A.Ş. A large amount of driving data was collected in 2006: a total of 108 drivers, 89 men and 19 women, drove the DriveSafe vehicle on the same 25 km route in İstanbul [6, 7]. This instrumented vehicle, used for multimodal data collection, is a Renault Megane (see Figure 1) equipped with a large array of sensors and a data acquisition system. It includes two cameras looking at the driver, one camera looking at the road, and several microphones. Additional sensors include an EEG and a heartbeat sensor used for validation of driver state, a GPS receiver, and a laser scanner. In addition, gas and brake pedal pressure, steering angle, and vehicle speed data were collected through the vehicle CAN bus [6, 7].

Figure 1: DriveSafe project vehicle named “UYANIK.”

Time to headway is an important driving characteristic that shows significant differences between drivers when other conditions such as the road (urban, highway), traffic (dense, light), and so forth are kept the same. Time to headway (TTH) is the main controlled variable in Advanced Driver Assistance Systems (ADAS) like Adaptive Cruise Control (ACC) and Cooperative Adaptive Cruise Control (CACC). In ACC and CACC, the claim is that safer TTH values are used as compared to manual speed control by the driver [8]. It is, therefore, important to first analyze TTH characteristics in manual driving; this forms a benchmark against which TTH values regulated by ACC and CACC systems can be compared.

This paper concentrates on extracting TTH values from the data collected in the DriveSafe project field tests. The Lidar in front of the vehicle (see Figure 1) is chosen as the sensor used for TTH determination. This paper presents both a simple approach and a detection-and-tracking-based approach for computing TTH values from forward-looking Lidar data.

TTH has been analyzed in detail in previous work in the literature. In most of these references, TTH data are computed either using a simulator or from measurements based on sensors embedded in the road, and are correlated with traffic congestion [9, 10]. In other references, TTH is investigated as part of an ACC or CACC system, and TTH values for manual driving are not treated [8, 11]. In contrast to these and similar references, the TTH computations in this paper are for manual driving by a human driver. TTH is used here to characterize the driving behavior of individual drivers rather than to investigate traffic congestion, and instead of sensors embedded in the road or a radar (as in vehicles with ACC), a Lidar is used to determine TTH values.

The outline of the rest of the paper is as follows. The DriveSafe experiments and a simple approach to TTH computation are presented in Section 2. The grid-based method and the point distance-based segmentation method are outlined in Section 3. The Kalman filter approach to detection and tracking of vehicles in front is outlined in Section 4. The Kalman filter-tracking results are presented in Sections 5 and 6. The paper ends with conclusions.

2. Driver Characteristics Based on Collected Lidar Data

A single-layer laser scanner made by SICK, with a 1 Hz sampling frequency and a range of view of 81 meters, was used to detect the distance to the vehicle in front of the DriveSafe data collection vehicle. This distance and the recorded speed of the experimental vehicle can be used to determine the headway time. Figure 2 shows two different plots of one Lidar scan of the area in front of the ego vehicle. This section presents an analysis of the time to headway (TTH) characteristics of different drivers in the DriveSafe experiments. TTH is the amount of time the experimental vehicle would take, at its current speed and vehicle-to-vehicle distance, to reach the vehicle in front, assuming that the vehicle in front is stationary. Using a simplistic approach, the Lidar data at 90 degrees (the vertical line extending from the scanner position in Figure 2), that is, directly ahead of the ego vehicle, is selected for relative distance calculation.

Figure 2: Plotting samples of lidar data.
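As a minimal illustration of this simple forward-looking approach, the sketch below picks out the range measurement of the beam closest to 90 degrees from a single scan. The beam count, angular ordering, and the Python/NumPy setting are assumptions made for illustration and are not taken from the DriveSafe data format.

```python
import numpy as np

def forward_range(ranges, fov_deg=180.0):
    """Return the range measurement closest to 90 degrees (straight ahead).

    `ranges` is one Lidar scan as a 1-D array ordered from 0 to `fov_deg`
    degrees; this ordering is an assumption, not the SICK configuration
    used in the DriveSafe vehicle.
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.linspace(0.0, fov_deg, ranges.size)   # beam angles in degrees
    idx = np.argmin(np.abs(angles - 90.0))            # beam pointing straight ahead
    return ranges[idx]

# Example: a synthetic 181-beam scan with a target 23 m directly ahead.
scan = np.full(181, 81.0)     # 81 m is the scanner's maximum range
scan[90] = 23.0
print(forward_range(scan))    # -> 23.0
```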

Time to headway is the ratio of relative distance to ego vehicle speed. In the literature, there is a 2-second rule for a safe headway in manual driving. The driving situation is safe if TTH is equal to or more than 2 seconds. It is possible to use lower headway times on the order of 0.6 to 2 sec in semiautomated driving using Cooperative Adaptive Cruise Control (CACC) or Adaptive Cruise Control (ACC), for example.
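A minimal sketch of this computation and the 2-second rule check is given below, assuming distance in meters and speed in m/s; the numerical example is illustrative only.

```python
def time_to_headway(relative_distance_m, ego_speed_mps):
    """TTH in seconds: relative distance divided by ego vehicle speed."""
    if ego_speed_mps <= 0.0:
        return float("inf")          # standing still: no meaningful TTH
    return relative_distance_m / ego_speed_mps

# A 23 m gap at 50 km/h (about 13.9 m/s) gives roughly 1.66 s,
# which violates the 2-second rule for manual driving.
tth = time_to_headway(23.0, 50.0 / 3.6)
print(f"TTH = {tth:.2f} s, safe = {tth >= 2.0}")
```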

2.1. Time to Headway (TTH)

In Figure 3, how TTH changes with the ego vehicle speed is shown for the driver with data code IM1088, where M denotes a male driver and 1088 denotes the 88th driver in the dataset. The data in Figure 3 shows that the upper value of TTH decreases as the ego vehicle speed increases. If one concentrates on the lower envelope of the data in Figure 3, however, it is seen that this driver uses an almost constant lower bound on TTH, with a slightly increasing trend at higher speeds.

Figure 3: TTH versus ego vehicle speed.

Typical data used in the computations is displayed in Figure 4, including relative distance (Lidar), brake pressure (CAN), and ego vehicle speed (CAN) versus time. The last subplot in Figure 4 shows the relative distance between the vehicles versus ego vehicle speed. A look at the lower envelope of the data in that subplot shows that the relative distance kept by the driver increases with vehicle speed, as expected.

Figure 4: CAN bus and lidar data plots to investigate driver behavior.

If Figure 3 is zoomed in and different speed ranges are concentrated upon, as in Figure 5, it can be seen that the driver uses a lower bound of 1 sec at higher speeds, which can be considered dangerous for manual driving. Note that the higher speeds in the zoomed plot in Figure 5 correspond to highway driving. These results illustrate the usefulness of equipping vehicles with ACC and CACC for safer vehicle following, and with Collision Warning (CW) and Collision Avoidance (CA) as accident-preventing measures in the case of manual driving.

Figure 5: TTH versus ego vehicle speed for different speed ranges.

GPS data was also collected with the DriveSafe vehicle. When placed on a map, this GPS data allowed us to extract the highway driving part of the data. The GPS location data for the same driver is illustrated in Figure 6, where the highway driving part of the trajectory is marked separately.

Figure 6: Bold points represent D100 highway.

Figure 7 shows the computed TTH values versus ego vehicle speed in the highway driving part of the data. There is no information below an ego vehicle speed of 10 km/h, as traffic congestion did not occur during highway driving. The minimum TTH is detected as 0.5143 sec, which is, of course, a very low value and is not safe in manual driving. The average TTH values are much higher: average TTH values of about 2 sec, which correspond to safe manual driving, are observed at high speeds (70–100 km/h) in Figure 7. The maximum TTH values are quite high and range from 3.5 sec at 100 km/h to 14 sec at 20 km/h. These large variations in TTH imply that the use of semiautomated ACC or CACC driving in platoons of vehicles would help driving at both the individual vehicle and traffic flow levels. This is based on the expectation that ACC- and CACC-equipped vehicles will be able to reach TTH values of 0.6 to 1.0 sec with very little scatter in TTH at individual speeds, as compared to the very large scatter in the vertical direction in the manual driving example of Figure 7.

Figure 7: TTH versus ego vehicle speed on the D100 highway.

The minimum following distance differs for each driver. Figure 8 displays how the relative distance between vehicles changes with ego vehicle speed; it is seen that the lower bound of the relative distance increases with speed.

Figure 8: Local minimum of relative distance versus ego vehicle speed.

3. Lidar Data Processing

The previous section concentrated on using the Lidar for vehicle-to-vehicle distance and TTH calculations, where only the 90 degree (directly ahead of the ego vehicle) information from the Lidar was used. In contrast, the data in the complete 180 degree sweep of the Lidar is used in this section to detect vehicles in front and to detect road limits on both sides as well as vehicles in other lanes. The grid-based and point distance-based methods for vehicle segmentation in Lidar data processing, both available in the literature, are used and presented in this section.

3.1. Grid-Based Method

The laser scans the area in front of the ego vehicle, and this area is divided into grid cells. In this study, the grid resolution is 1 m²; this resolution would need to be improved to distinguish smaller objects. If a scan point falls within a grid cell, the probability of having an object inside that cell, called the grid occupancy probability, is assigned the value 0.9. The cells between the scanner and an occupied cell are assigned an occupancy probability of 0.1, and cells whose state is unknown are assigned an occupancy probability of 0.5. The grid-based method is commonly used in image processing.
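The following sketch builds such a coarse occupancy grid from one scan. The grid extent, the sampling used to mark free cells along each beam, and the NumPy representation are illustrative assumptions, not the exact implementation used in the project.

```python
import numpy as np

def occupancy_grid(points, x_max=85.0, y_half=30.0, res=1.0):
    """Coarse occupancy grid (1 m^2 cells) from one Lidar scan.

    `points` is an (N, 2) array of (x, y) hits in meters, scanner at the
    origin with x pointing forward; grid extent and the beam sampling are
    illustrative assumptions.
    """
    nx, ny = int(x_max / res), int(2 * y_half / res)
    grid = np.full((nx, ny), 0.5)                        # 0.5 = unknown state
    for x, y in points:
        r = float(np.hypot(x, y))
        if r < res:
            continue
        # cells along the beam before the hit are treated as free (0.1),
        # but only if they are still unknown, so occupied cells are kept
        for t in np.arange(0.0, r - res, res / 2.0):
            i = int(t * x / r / res)
            j = int((t * y / r + y_half) / res)
            if 0 <= i < nx and 0 <= j < ny and grid[i, j] == 0.5:
                grid[i, j] = 0.1
        i, j = int(x / res), int((y + y_half) / res)     # cell containing the hit
        if 0 <= i < nx and 0 <= j < ny:
            grid[i, j] = 0.9                             # occupied
    return grid

# Example: two hits straight ahead and one on the left.
print(occupancy_grid(np.array([[20.0, 0.5], [20.0, -0.5], [10.0, -6.0]]))[20, 30])
```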

Figure 9 displays the grid-based representation of a Lidar scan; the white cells near position 83 on the vertical axis mark the ego vehicle. After the cells are assigned occupancy probabilities, the connected components algorithm (CCA) in [12] is used to determine whether cells are connected or not. This method gives object size and indicates whether the detected object is a vehicle, a truck, or a road limit. In Figure 9, the differently colored grid points represent possible objects (sides of the road and vehicles in front). Note that the grid-based method gives only a coarse image of the vehicles in front and is therefore not preferred here for TTH computations.

Figure 9: Grid-based representation of ego vehicle front.
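A minimal sketch of the labelling step described above is given below, using SciPy's connected-component labelling on a synthetic occupancy grid. The size thresholds used to separate a vehicle from a road limit or truck are placeholders, not the classifier of [12].

```python
import numpy as np
from scipy import ndimage

grid = np.full((40, 30), 0.5)
grid[20:22, 10:12] = 0.9          # a car-sized cluster
grid[5:35, 0] = 0.9               # a long cluster along the left road limit

occupied = grid > 0.8                                  # boolean occupancy mask
labels, n_objects = ndimage.label(occupied)            # 4-connectivity by default

for k in range(1, n_objects + 1):
    cells = np.argwhere(labels == k)
    length = cells[:, 0].ptp() + 1                     # extent in 1 m grid cells
    width = cells[:, 1].ptp() + 1
    kind = "vehicle" if max(length, width) <= 6 else "road limit or truck"
    print(f"object {k}: {length} m x {width} m -> {kind}")
```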
3.2. Point-Distance-Based Segmentation Method

Segments are sets of planar range measurements (points) close to each other, with the property that they have a high probability of belonging to one single object. Many segmentation methods exist in the literature; the point-distance-based segmentation (PDBS) method as presented in [13] is selected and used here. It is based on calculating the distance between two consecutive scanned points, which is then compared to a threshold value.

The vertical axis in Figure 10 represents the forward direction of the ego vehicle. If the distance D(r_{n-1}, r_n) between two successive scan points r_{n-1} and r_n is greater than a threshold value D_thd, that is, if D(r_{n-1}, r_n) > D_thd, then the two points are assigned to separate segments; otherwise they belong to the same segment.

Figure 10: Schematic representation of laser scanning and some parameters.

The following threshold definition, taken from [13], is used in this paper:

D_thd = C_0 + C_1 · min{r_{n-1}, r_n},

where C_0 is a constant parameter used for noise reduction and C_1 is given by C_1 = sqrt(2(1 − cos Δα)), with Δα denoting the angular resolution of the scanner. An angle shown in Figure 10 is used to represent the orientation of the detected object [14].
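A compact sketch of this segmentation rule is given below. The noise constant C_0 and the Python representation are assumptions; the threshold form follows the point-distance criterion described above.

```python
import numpy as np

def segment_scan(ranges, angles_rad, c0=0.5):
    """Split one scan into segments with the point-distance threshold of [13]."""
    segments = [[0]]
    for n in range(1, len(ranges)):
        d_alpha = angles_rad[n] - angles_rad[n - 1]
        # Euclidean distance between consecutive scan points (law of cosines)
        d = np.sqrt(ranges[n] ** 2 + ranges[n - 1] ** 2
                    - 2 * ranges[n] * ranges[n - 1] * np.cos(d_alpha))
        d_thd = c0 + np.sqrt(2 * (1 - np.cos(d_alpha))) * min(ranges[n], ranges[n - 1])
        if d > d_thd:
            segments.append([n])       # start a new segment
        else:
            segments[-1].append(n)     # same object as the previous point
    return segments

# Example: three close points followed by a jump in range start a new segment.
angles = np.radians([89.0, 89.5, 90.0, 90.5])
print(segment_scan(np.array([20.0, 20.1, 20.05, 45.0]), angles))
# -> [[0, 1, 2], [3]]
```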

3.3. Detecting Road Limits

Detecting road limits is also important for staying on the road. Where there is a barrier or a road limit, the number of scanned points clustered near a single lateral position increases, as seen on the left and right hand sides of Figure 11. The ego vehicle's Lidar is at the origin in Figure 11, and both axes are in meters. At lateral positions of approximately −6 m and 16 m, road limits are observed in Figure 11 in the form of vertical lines.

Figure 11: Laser scan of ego vehicle.

Road limits can also be recognized by histogram plotting, as shown in Figure 12 (see [15] for histogram plotting). At lateral positions approximately equal to −6 m and 16 m, the frequency of reflected laser scanner measurements increases; these positions correspond to the left and right hand road limits.

Figure 12: Frequency of coordinates of objects’ appearance.
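The histogram idea described above can be sketched as follows; the bin width and the minimum-hit threshold are illustrative assumptions.

```python
import numpy as np

def road_limit_candidates(points, bin_width=1.0, min_hits=10):
    """Find lateral positions where many scan points pile up (likely barriers).

    `points` is an (N, 2) array of (x, y) hits in meters.
    """
    y = points[:, 1]
    edges = np.arange(y.min(), y.max() + bin_width, bin_width)
    counts, edges = np.histogram(y, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[counts >= min_hits]

# Example: many points near y = -6 m (left barrier), a few scattered elsewhere.
pts = np.array([[x, -6.0 + 0.1 * (x % 3)] for x in range(1, 21)]
               + [[15.0, 2.0], [25.0, 3.5]])
print(road_limit_candidates(pts))   # -> [-5.5], the bin centre near the barrier
```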

4. Kalman Filter Tracking

The Kalman filter is an algorithm that estimates the state of a dynamic system. A prediction-correction type Kalman filter is used in this paper to estimate the relative positions, velocities, and uncertainties of moving vehicles. The prediction part of the Kalman filter produces a search region for each detected vehicle, which is then concentrated on in the next laser scan to detect the vehicle with less computational effort. The region that covers a detected vehicle is called the "bounding box." One control point per bounding box, computed as the average of that vehicle's scan points, is used in the analysis.
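The bounding box and control point for a detected vehicle can be sketched as follows; using the average of the scan points as the control point follows the description above, while the axis-aligned box shape is an assumption.

```python
import numpy as np

def bounding_box_and_control_point(segment_points):
    """Axis-aligned bounding box and control point for one detected vehicle."""
    pts = np.asarray(segment_points, dtype=float)
    box = (pts.min(axis=0), pts.max(axis=0))    # (lower-left, upper-right) corners
    control_point = pts.mean(axis=0)            # average of the vehicle scan points
    return box, control_point

box, cp = bounding_box_and_control_point([[12.0, 1.0], [12.3, 1.8], [12.1, 2.4]])
print(box, cp)    # corners of the box and the control point fed to the tracker
```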

4.1. Vehicle Dynamic State

Detected vehicles are assumed to have linear and uniform motion between two consecutive scans. The dynamics of the vehicles are represented with a constant velocity model,

x_{k+1} = x_k + v_{x,k} Δt,   y_{k+1} = y_k + v_{y,k} Δt,

where the subscript k is the current scan number, x_k and y_k represent the x and y coordinates of the vehicle, v_{x,k} and v_{y,k} are the vehicle speed components along the x and y directions, and Δt is the time step used. With the state vector s_k = [x_k, y_k, v_{x,k}, v_{y,k}]^T, the state space representation of this constant velocity model is s_{k+1} = A s_k with (see [14])

A = [ 1  0  Δt  0
      0  1  0   Δt
      0  0  1   0
      0  0  0   1 ],

and the measurement matrix is

H = [ 1  0  0  0
      0  1  0  0 ].

The first two equations in the state space formulation are the position updates above, while the last two equations represent constant velocity.
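In code, the matrices implied by this model could be written as follows; this is a sketch, and the state ordering [x, y, vx, vy] and the 1 Hz time step are assumptions consistent with the text.

```python
import numpy as np

dt = 1.0   # time step between scans (1 Hz Lidar update rate)

# State vector s = [x, y, vx, vy]^T; the ordering is an assumption.
A = np.array([[1.0, 0.0, dt, 0.0],
              [0.0, 1.0, 0.0, dt],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# Only the position of the bounding-box control point is measured.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
```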

4.1.1. Initial Estimates

After the recognition of the vehicles, the current state vector and the covariance matrices are initialized: Q represents the process noise covariance matrix, R represents the measurement noise covariance matrix, and P represents the error covariance. Tracking starts in the scan following detection.

4.1.2. Prediction

Prediction is achieved by repeating the following prediction equations for each frame in the scan sequence:

s_k^- = A s_{k-1},
P_k^- = A P_{k-1} A^T + Q.

The position of the detected vehicle is predicted using the constant velocity model described above. Then, the search area around the predicted vehicle position is calculated as mentioned in the hypothesis verification.

4.1.3. Measurement Update (Correction)

In the determined search area, the corresponding vehicle is searched for by extracting horizontal and vertical edges. The position of the tracked vehicle is then updated based on the measurements:

K_k = P_k^- H^T (H P_k^- H^T + R)^{-1},
s_k = s_k^- + K_k (z_k − H s_k^-),
P_k = (I − K_k H) P_k^-.

In the above equations, K_k is the filter gain, z_k is the measurement vector, and H is the measurement matrix (see [16] for more details of the Kalman filter).
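A minimal NumPy sketch of one predict-correct cycle with these equations is given below; the process and measurement noise values are placeholders, not the tuning used in the paper.

```python
import numpy as np

dt = 1.0                                              # 1 Hz scan rate
A = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)

def kf_predict(s, P, Q):
    """Time update: propagate state and error covariance."""
    return A @ s, A @ P @ A.T + Q

def kf_update(s_pred, P_pred, z, R):
    """Measurement update: correct the prediction with the scan measurement z."""
    S = H @ P_pred @ H.T + R                          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)               # Kalman gain
    s = s_pred + K @ (z - H @ s_pred)
    P = (np.eye(len(s)) - K @ H) @ P_pred
    return s, P

# One cycle for a track initialised at (10 m, 2 m); noise values are placeholders.
s, P = np.array([10.0, 2.0, 0.0, 0.0]), np.eye(4)
Q, R = 0.1 * np.eye(4), 1.0 * np.eye(2)
s, P = kf_predict(s, P, Q)
s, P = kf_update(s, P, np.array([10.5, 2.1]), R)
print(np.round(s, 2))
```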

5. Kalman Filter Tracking Results

Results of Kalman filter tracking of objects in the Lidar data are presented in this section. Corresponding camera images are also presented for better visualization of the tracked objects. A limited region along the longitudinal and lateral axes is used for investigation of the Lidar data.

In Figure 13, one object is tracked. Security barriers on the left side of the ego vehicle can be seen in the Lidar image, and a traffic warning pylon is located at approximately −2 m along the lateral axis. The corresponding camera image is shown in the upper subplot of Figure 13. The black box surrounding the vehicle marked with 1 is the bounding box in the Lidar data [17]. The distance to the tracked vehicle in front is also computed in the Kalman filter algorithm; this distance, along with the ego vehicle speed obtained from the vehicle CAN bus, is used to compute TTH.

Figure 13: Tracking one vehicle.

In Figure 14, two vehicles are tracked. The Kalman filter helps predict where a vehicle is likely to be in the next scan; by defining a search area around this prediction, the vehicle can be found in the next scan. Figure 15 shows the tracking of the vehicles over 4 sec: four scan results, recorded at an update rate of 1 Hz, are superimposed on each other. Blue dots illustrate the initial state of the Lidar scan. The first and second vehicles are faster than the ego vehicle, while the third and fourth vehicles entered the scanning range of the Lidar at the fourth second. During these computations, the nearest vehicle in front is the vehicle being detected and tracked, and its distance to the ego vehicle is used in the subsequent TTH computations.

Figure 14: Tracking two vehicles.
Figure 15: Kalman filter tracking of vehicles for 4 sec.

6. Results and Discussion

Figure 16 shows the TTH histogram for driver IM1088, where the Kalman filter-based algorithm was used to detect the nearest vehicle in front and the distance to it. The TTH histogram in Figure 16 has a peak at a TTH of less than 1 sec, showing that this driver has a tendency to drive too close to the vehicle in front.

Figure 16: TTH histogram for driver IM1088.
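A minimal sketch of how such a TTH histogram could be produced is given below; the synthetic TTH samples are placeholders standing in for the DriveSafe measurements.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder TTH samples; in practice these would come from the Kalman-filter
# distance to the nearest tracked vehicle divided by the CAN ego speed.
rng = np.random.default_rng(0)
tth_values = rng.gamma(shape=2.0, scale=0.7, size=2000)

plt.hist(tth_values, bins=np.arange(0, 8, 0.25), edgecolor="k")
plt.axvline(2.0, linestyle="--", label="2-second rule")
plt.xlabel("time to headway (s)")
plt.ylabel("count")
plt.legend()
plt.show()
```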

Figure 17 shows the TTH histograms of four selected drivers in the DriveSafe project data set. TTH statistics of these four drivers are presented in Table 1. Driver IM1089 drives more safely than the others, using larger TTH values with a mean of 2.031 sec, while driver IM1084 drives more dangerously than the others with a mean TTH of 1.247 sec.

Table 1: Statistical information of the drivers' TTH (in seconds).
Figure 17: TTH histogram for four selected drivers.
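The per-driver figures of Table 1 amount to simple summary statistics over each driver's TTH samples, as sketched below with placeholder values rather than the DriveSafe data.

```python
import numpy as np

def tth_summary(tth_by_driver):
    """Mean, min, max, and standard deviation of TTH per driver."""
    return {driver: {"mean": float(np.mean(t)), "min": float(np.min(t)),
                     "max": float(np.max(t)), "std": float(np.std(t))}
            for driver, t in tth_by_driver.items()}

# Placeholder samples for two driver codes; not the DriveSafe measurements.
print(tth_summary({"IM1084": np.array([0.9, 1.2, 1.6]),
                   "IM1089": np.array([1.8, 2.1, 2.2])}))
```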

7. Conclusions

In this paper, the Lidar data collected during the DriveSafe project was used to investigate driver behavior and to analyze the vehicle's surrounding environment. Time to headway was investigated to obtain information about the driver, and the Lidar data was used to obtain information on what lies ahead of the ego vehicle. Objects in front of the ego vehicle were detected with grid-based and point distance-based segmentation methods, and the point distance-based method was selected for use in tracking the detected vehicles. Tracking of these objects was carried out with a Kalman filter using the constant velocity model, which estimated the relative positions and velocities of the tracked vehicles. The method was applied to example driving data from the DriveSafe project database.

The method used in this paper for TTH computation will be used in our future studies to investigate all the driver data in the DriveSafe project database.

Acknowledgments

The data used in the analyses presented in this paper was from one of the drivers who took part in the DriveSafe project data collection experiments. The authors would, thus, like to thank the Turkish National Planning Association (DPT) for funding the DriveSafe project (EACF05—00322/1) and all the project partners of the DriveSafe project.

References

  1. National Center for Statistics and Analysis, National Highway Traffic Safety Administration, http://www.nhtsa.gov/.
  2. International Traffic Safety Data and Analysis Group, http://internationaltransportforum.org/.
  3. T. Wakita, K. Ozawa, C. Miyajima et al., “Driver identification using driving behavior signals,” IEICE Transactions on Information and Systems, vol. E89-D, no. 3, pp. 1188–1194, 2006.
  4. J. Archer and I. Kosonen, “The potential of micro-simulation in relation to traffic safety assessment,” ESS Conference Proceedings, vol. 45, no. 7, pp. 569–590, 2006.
  5. O. Gietelink, J. Ploeg, B. de Schutter, and M. Verhaegen, “Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations,” Vehicle System Dynamics, vol. 44, no. 7, pp. 569–590, 2006.
  6. B. Aytekin, E. Dinçmen, B. A. Güvenç et al., “Framework for development of driver adaptive warning and assistance systems that will be triggered by a driver inattention monitor,” International Journal of Vehicle Design, vol. 52, no. 1–4, pp. 20–37, 2010.
  7. H. Abut, H. Erdoğan, A. Erçil et al., “Data collection with UYANIK: too much pain, but gains are coming,” in Biennial on DSP for In-Vehicle and Mobile Systems, 2007.
  8. I. M. C. Uygan, K. Kahraman, R. Karaahmetoglu et al., “Cooperative adaptive cruise control implementation of Team Mekar at the Grand Cooperative Driving Challenge,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 3, pp. 1062–1074, 2012.
  9. L. Neubert, L. Santen, A. Schadschneider, and M. Schreckenberg, “Single-vehicle data of highway traffic: a statistical analysis,” Physical Review E, vol. 60, no. 6, pp. 6480–6490, 1999.
  10. R. Riccardo and G. Massimiliano, “An empirical analysis of vehicle time headways on rural two-lane two-way roads,” in Proceedings of the 15th Meeting of the EURO Working Group on Transportation (EWGT '12), September 2012.
  11. N. Tricot, B. Rajaonah, M. P. Pacaux, and J. C. Popieul, “Driver's behaviors and human-machine interactions characterization for the design of an advanced driving assistance system,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '04), pp. 3976–3981, Valenciennes, France, October 2004.
  12. P. Lindner and G. Wanielik, “3D LIDAR processing for vehicle safety and environment recognition,” in Proceedings of the IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems (CIVVS '09), pp. 66–71, Nashville, Tenn, USA, April 2009.
  13. K. C. J. Dietmayer, J. Sparbert, and D. Streller, “Model based object classification and object tracking in traffic scenes from range images,” in Proceedings of the 4th IEEE Intelligent Vehicles Symposium, Tokyo, Japan, 2001.
  14. C. Premebida and U. Nunes, “Segmentation and geometric extraction from 2D laser range data for mobile robot applications,” in Proceedings of the 4th National Festival of Robotics Scientific Meeting (ROBOTICA '05), pp. 17–25, Coimbra, Portugal, 2005.
  15. F. Fayad and V. Cherfaoui, “Tracking objects using a laser scanner in driving situation based on modeling target shape,” in Proceedings of the IEEE Intelligent Vehicles Symposium (IVS '07), pp. 44–49, Istanbul, Turkey, June 2007.
  16. B. Aytekin and E. Altuğ, “Increasing driving safety with a multiple vehicle detection and tracking system using ongoing vehicle shadow information,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '10), pp. 3650–3656, Istanbul, Turkey, October 2010.
  17. D. Streller, K. Fürstenberg, and K. Dietmayer, “Vehicle and object models for robust tracking in traffic scenes using laser range images,” in Proceedings of the 5th IEEE International Conference on Intelligent Transportation Systems, pp. 118–123, 2002.