Mathematical Problems in Engineering
Volume 2016, Article ID 5810910, 13 pages
http://dx.doi.org/10.1155/2016/5810910
Research Article

Study on Leading Vehicle Detection at Night Based on Multisensor and Image Enhancement Method

1School of Transportation, Jilin University, Changchun 130022, China
2China-Japan Union Hospital of Jilin University, Changchun 130033, China

Received 2 June 2016; Revised 16 August 2016; Accepted 23 August 2016

Academic Editor: Jinyang Liang

Copyright © 2016 Mei Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Low visibility is one of the causes of rear-end accidents at night. In this paper, we propose a multisensor-based method for detecting the leading vehicle in order to reduce rear-end accidents at night, and we apply image enhancement algorithms to improve the driver's vision. First, millimeter wave radar provides the world coordinates of the preceding vehicles; by establishing the transformation between world coordinates and image pixel coordinates, we convert the radar targets' world coordinates to image coordinates and form regions of interest in the image. Image processing within these regions then reduces interference from the outside environment. Using D-S evidence theory, we obtain an overall confidence value to verify the candidate vehicles. The experimental results show that the method effectively eliminates the influence of nighttime illumination conditions, accurately detects leading vehicles, and determines their locations. To further improve nighttime driving, compensate for the driver's limited vision, and reduce rear-end accidents, we enhance nighttime color images with three algorithms and present a comparative study and evaluation of them. The evaluation demonstrates that the results after image enhancement satisfy human visual habits.

1. Introduction

According to the statistics, the number of traffic accidents in China reached 196,812 in 2014; they caused 58,523 fatalities, 211,882 injuries, and 1075.42 million in direct property losses [1]. In 1995, rear-end collisions represented 17.27% of all traffic accidents in China but accounted for 9.84% of fatalities, 10.7% of injuries, and 20.8% of direct economic losses [2].

With the development of processor and sensor technology, more and more safety systems are being applied to vehicles. To reduce rear-end accidents, we propose a nighttime preceding-vehicle detection method based on multiple sensors together with an image enhancement method. Owing to the lack of light at night, most of the vehicle features available during the daytime cannot be observed, so daytime vehicle detection algorithms are largely ineffective. Taillights are the most salient vehicle feature at night; at present, studies on nighttime leading-vehicle detection and recognition mainly use a single vision sensor to obtain visual information and identify the preceding vehicle by extracting taillight features from the image. Liu et al. combine taillight color and brightness to detect taillights [3]; Wu et al. track vehicles using a pair of headlights [4]; Tang et al. extract the region of interest with the frame difference method [5]; Wang presents an image segmentation method based on fuzzy theory that extracts license plate and taillight features [6]; Qi and Chen locate vehicles by segmenting taillight color information with the HSV color model [7]; Zhou segments the image using an adaptive threshold on the R-channel histogram in RGB color space, with unsatisfactory results [8]. A digital camera is an effective sensor for detecting vehicles, but it has limitations, so combining a digital camera with laser radar to detect the target vehicle has been proposed in the literature [9, 10]. Although laser radar and digital camera are complementary to a certain extent, laser radar is very sensitive to weather, lighting, and the surface smoothness of obstructions, so it is not suitable for complex road environments. Millimeter wave radar, in contrast, is less susceptible to outside interference, offers high distance measurement accuracy, and also provides the exact speed and angle of the preceding vehicle. We therefore present a technique for nighttime leading-vehicle detection that fuses millimeter wave radar and digital camera data: obstacle data detected by the radar are screened using prior knowledge; an initial dynamic region of interest (ROI) is established from the radar data and the image; vehicle features are extracted within this narrowed range by the vision sensor; and D-S evidence theory is used to fuse the feature information. This reduces the amount of computation and the influence of subjective thresholds on detection accuracy, and improves vehicle localization accuracy and execution efficiency.

While a preceding-vehicle detection system can reduce the risk of rear-end collisions, it cannot reduce the driver's psychological stress when driving at night. Since the detection range of the system is about 60 m–70 m, a rear-end collision may already be imminent by the time a vehicle is detected, so the system alone cannot completely avoid rear-end accidents. The statistics show that more than 80% of road environment information is acquired through the driver's vision. Thus, to avoid traffic accidents fundamentally, we also need to improve the driver's visual perception of the traffic scene at night.

A driver's vision is limited at night and the visual range is short, so drivers are prone to fatigue and find it hard to observe road traffic conditions. According to the statistics, drivers are more tense at night than during the day, and their reaction time under emergency braking is longer, which may result in serious accidents. Nighttime image enhancement algorithms therefore remain to be studied.

This paper is organized as follows. Section 2 presents the nighttime preceding-vehicle detection method based on multiple sensors. Section 3 presents image enhancement theories. The results of the image enhancement methods are illustrated in Section 4. Finally, we conclude the paper in Section 5.

2. Nighttime Vehicle Detection Algorithm

The approach consists of hypothesis generation (HG) and hypothesis verification (HV). In the hypothesis generation step, we obtain the candidate target's distance, angle, speed, and other information from the radar and then compute the candidate target's world coordinates; by inverting the camera calibration model, we obtain the conversion relationship between world coordinates and image pixel coordinates and initially identify the region of the candidate target on the image, namely, the region of interest (ROI). In the hypothesis verification step, the ROI is segmented with an improved OTSU method, and then image processing, prior knowledge, and D-S evidence theory are used to detect the presence of vehicle features in the ROI. The algorithm flowchart is shown in Figure 1.

Figure 1: Night preceding vehicle detection method flowchart based on information fusion of the millimeter wave radar and digital camera.
2.1. Processing Radar Data and Determining Candidate Target

The millimeter wave radar outputs hexadecimal data; according to the communication protocol, we decode the radar data and extract the information useful for vehicle detection, including the angle of the preceding vehicle relative to our vehicle, its distance, its speed, and the reflection intensity. In actual measurements, part of the millimeter wave radar signal consists of empty targets, inactive targets, and stationary targets. First, we remove the interference from these three kinds of target signals. If a signal value lies within the distance threshold and the relative velocity threshold, the data are stored. Using the lane-width threshold, we judge whether the target vehicle is in the same lane as our vehicle and then record the screened preceding targets in order from near to far. The radar scan plane is shown in Figure 2, which indicates the radar scan radius within which vehicles are detected (the distance threshold is 50 m, the relative velocity threshold is 30 m/s, and the lane width is 4 m).

Figure 2: Radar scan plane.

The valid targets are shown in Table 1. Assuming the front vehicle is stationary, the relative velocity is the speed of our vehicle relative to the leading vehicle; it can be positive or negative. A negative value indicates that our vehicle is faster than the front vehicle, and a positive value indicates that our vehicle is slower than the leading vehicle.
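As a rough illustration of this screening step, the following Python sketch filters radar returns by the distance, relative-velocity, and same-lane thresholds given above and sorts the survivors from near to far. The record layout and field names are hypothetical; the actual radar protocol used in this work is not reproduced here.

```python
import math
from dataclasses import dataclass

@dataclass
class RadarTarget:
    distance_m: float      # longitudinal distance to the target
    angle_deg: float       # azimuth of the target relative to our vehicle
    rel_speed_mps: float   # relative velocity (negative: we are closing in)
    reflect: float         # reflection intensity (0 marks an empty/invalid return)

def screen_targets(targets, max_dist=50.0, max_rel_speed=30.0, lane_width=4.0):
    """Keep plausible same-lane vehicles and order them from near to far."""
    candidates = []
    for t in targets:
        if t.reflect <= 0.0:                        # empty / inactive return
            continue
        if not (0.0 < t.distance_m <= max_dist):    # distance threshold (50 m)
            continue
        if abs(t.rel_speed_mps) > max_rel_speed:    # relative-velocity threshold (30 m/s)
            continue
        lateral = t.distance_m * math.sin(math.radians(t.angle_deg))
        if abs(lateral) > lane_width / 2.0:         # outside our 4 m lane
            continue
        candidates.append(t)
    return sorted(candidates, key=lambda t: t.distance_m)
```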

Table 1: Primary effective target signal.
2.2. Fusion of Digital Image and Radar Data

The coordinate systems of the radar sensor and the digital image are different, so we must establish a conversion model between the two sensor coordinate systems to achieve spatial fusion of radar and machine vision, converting radar coordinates into image pixel coordinates [11]. We establish the coordinate systems according to the right-handed convention and then relate the radar coordinate system to the image pixel coordinates through formulas (1)-(2).

A point $(X_w, Y_w, Z_w)$ in the world coordinate system is converted to a point $(u, v)$ in image pixel coordinates. Let $f$ be the focal length, that is, the distance between the image plane and the projection center; $(X_c, Y_c, Z_c)$ denotes the camera coordinate system; let $(u_0, v_0)$ be the principal point, let $dx$ and $dy$ be the distances between adjacent pixels in the horizontal and vertical directions of the image sensor, and let the $Z_c$ axis coincide with the optical axis of the camera. The conversion is

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}.$$

As shown in Figure 3, the world coordinate system and the camera coordinate system are related by a rotation matrix $R$ and a translation vector $T$:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T.$$

Figure 3: The relationship between the world coordinate system and the camera coordinate system.

The transform between camera coordinates and image physical coordinates is

$$x = \frac{f X_c}{Z_c}, \qquad y = \frac{f Y_c}{Z_c}.$$

The transform between image physical coordinates and image pixel coordinates is

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0.$$

The preceding-obstacle information acquired by the millimeter wave radar is two-dimensional information in polar coordinates; the information on each obstacle (Figure 2) is converted from polar coordinates into a two-dimensional Cartesian (rectangular) coordinate system.

The radar coordinate plane and the corresponding plane of the world coordinate system are parallel (Figure 4), with a distance $H$ between the two planes. The center point of the preceding vehicle projects onto the radar plane as a point $P$; the radar gives the relative distance $d$ and angle $\alpha$ of $P$ with respect to the radar, from which the coordinates of $P$ in the world coordinate system follow as

$$X_w = d \sin\alpha, \qquad Y_w = H, \qquad Z_w = d \cos\alpha.$$

Figure 4: Positional relationship between the millimeter wave radar coordinate system and the world coordinate system.

Taking the center point of the preceding vehicle obtained from the radar as input, and using the relationship between radar coordinates and image pixel coordinates, we obtain the projection of the preceding vehicle onto the pixel plane. Based on a common vehicle shape (aspect ratio) projected onto the pixel plane, we establish a dynamic region of interest that changes with distance, which shortens the search time on the image and reduces the amount of computation. The dynamic region of interest in pixel coordinates is shown in Figure 5.
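To make the chain from a radar measurement to a pixel location concrete, the following Python sketch applies the polar-to-world conversion and the pinhole projection of formulas (1)-(2). All calibration values (rotation, translation, focal length, pixel pitch, principal point) and the axis convention are placeholder assumptions; in practice they are obtained by camera calibration [11].

```python
import math
import numpy as np

def radar_to_world(dist_m, angle_deg, plane_offset_m):
    """Polar radar measurement -> world coordinates (axis convention is illustrative)."""
    a = math.radians(angle_deg)
    return np.array([dist_m * math.sin(a),   # lateral offset
                     plane_offset_m,         # offset between radar plane and world plane
                     dist_m * math.cos(a)])  # longitudinal distance

def world_to_pixel(P_w, R, T, f, dx, dy, u0, v0):
    """Pinhole projection of a world point to pixel coordinates (u, v)."""
    Xc, Yc, Zc = R @ P_w + T             # world -> camera coordinates
    x, y = f * Xc / Zc, f * Yc / Zc      # camera -> image physical coordinates
    return x / dx + u0, y / dy + v0      # image physical -> pixel coordinates

# Placeholder calibration: identity rotation, small translation, 6 mm lens, 6 um pixels.
R, T = np.eye(3), np.array([0.0, 0.5, 0.0])
P_w = radar_to_world(20.0, 3.0, plane_offset_m=0.4)
u, v = world_to_pixel(P_w, R, T, f=0.006, dx=6e-6, dy=6e-6, u0=320, v0=240)
```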

Figure 5: Dynamic region of interest in pixel coordinates.

According to the statistics, the aspect ratio of a vehicle is generally in the range of 0.7 to 2.0; for example, the aspect ratio of cars, SUVs, vans, and commercial vehicles is in the range of 0.7 to 1.3 [12]. To avoid missing the target vehicle's taillights in the subsequent taillight detection step, we select 1.3, the maximum aspect ratio of these common models. According to (3), the resulting dynamic region of interest is shown in Figure 3. Let $h$ denote the height at which the object is projected onto the image plane and let $d$ be the scan distance of the radar; the vehicle height is assumed to be 2 m in order to accommodate most vehicles. Then $h$, and with it the dynamic region of interest, changes with the distance at which the radar detects the preceding vehicle, so that the identification frame of the front vehicle becomes larger as the distance decreases and smaller as it increases. Figure 6 shows the derivation of $h$.

Figure 6: Derivation of $h$.

$(u_1, v_1)$ and $(u_2, v_2)$ represent the pixel coordinates of the top-left and bottom-right corner points of the rectangular dynamic region of interest, $(u_c, v_c)$ represents the pixel coordinates of the vehicle center point, and $\lambda$ is the aspect ratio of the common vehicle shape. The target vehicle in radar coordinates is mapped to the region of interest in the image by the coordinate transformation. Figures 7(a)–7(c) show the radar targets, and Figures 7(d)–7(f), respectively, show enlarged views of (a), (b), and (c). The size of the dynamic region of interest varies with the distance from our vehicle to the target vehicle, so that a region of interest of appropriate size is obtained for the subsequent verification; by narrowing the detection range, we reduce the amount of computation and improve real-time performance. The region of interest is shown in Figure 8.

Figure 7: Radar targets on the ROI image. From top to bottom, the distance between the detected leading vehicle and our vehicle was 5.6 m, 7.6 m, and 18.3 m.
Figure 8: ROI image.
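A minimal sketch of the dynamic ROI sizing follows, under the same placeholder calibration assumptions as above: the box height comes from the assumed 2 m vehicle height scaled by $f/(Z\,dy)$, the width from the selected aspect ratio of 1.3, and both shrink as the radar distance grows.

```python
def dynamic_roi(u_c, v_c, dist_m, f=0.006, dx=6e-6, dy=6e-6,
                vehicle_height_m=2.0, aspect_ratio=1.3):
    """Return (u1, v1, u2, v2): top-left and bottom-right ROI corners in pixels."""
    h_px = vehicle_height_m * f / (dist_m * dy)                  # projected vehicle height
    w_px = aspect_ratio * vehicle_height_m * f / (dist_m * dx)   # width via aspect ratio
    return (u_c - w_px / 2.0, v_c - h_px / 2.0,                  # top-left corner
            u_c + w_px / 2.0, v_c + h_px / 2.0)                  # bottom-right corner

# The nearer the target, the larger the identification frame:
print(dynamic_roi(320, 240, dist_m=5.6))    # large box
print(dynamic_roi(320, 240, dist_m=18.3))   # smaller box
```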
2.3. Image Segmentation

There is a significant gray-level difference between vehicle taillights and the road surface background, so a threshold segmentation method can segment taillights quickly and accurately. In this paper, we use an improved OTSU algorithm to segment the image and highlight the taillight regions that represent vehicle features. The improved OTSU algorithm is based on the traditional OTSU algorithm; it traverses every gray level from the minimum gray value to the maximum gray value according to the between-class variance

$$\sigma^2(t) = \omega_0(t)\,\omega_1(t)\,\bigl(\mu_0(t) - \mu_1(t)\bigr)^2,$$

where $\omega_0$ is the proportion of foreground pixels in the total image after segmentation at threshold $t$, $\omega_1$ is the proportion of background pixels, $\mu_0$ is the average gray of the foreground pixels, and $\mu_1$ is the average gray of the background pixels. When $\sigma^2(t)$ is maximal, the distinction between the vehicle taillights and the road surface background is greatest, so we obtain a threshold $T_1$ and an initial vehicle taillight image. Applying the conventional OTSU algorithm again to this result, we obtain a threshold $T_2$ that is larger than $T_1$ and binarize the region of interest with $T_2$; 1 represents target gray and 0 represents background gray, as shown in Figure 9.

Figure 9: Segmented image.
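One possible reading of this improved OTSU step is sketched below: a first Otsu pass separates bright pixels from the dark road background, and a second Otsu pass restricted to those bright pixels yields a larger threshold that keeps only the brightest, taillight-like regions. This is an interpretation under stated assumptions, not the paper's exact implementation.

```python
import cv2
import numpy as np

def double_otsu_segment(gray_roi):
    """Segment bright taillight candidates with two successive Otsu thresholds."""
    # First pass over the whole ROI (8-bit, single channel).
    t1, _ = cv2.threshold(gray_roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    bright = gray_roi[gray_roi > t1]
    if bright.size == 0:
        t2 = t1
    else:
        # Second pass restricted to the bright pixels gives a larger threshold.
        t2, _ = cv2.threshold(bright.reshape(1, -1), 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, binary = cv2.threshold(gray_roi, max(t1, t2), 1, cv2.THRESH_BINARY)
    return binary  # 1 = taillight candidate, 0 = background
```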
2.4. Image Processing Based on Prior Knowledge and Image Morphology

Due to noise, the image boundaries after thresholding are not smooth: there are noise holes inside object regions, and the background is dotted with small noise objects. Therefore, after image segmentation we process the image with morphological opening and closing operations to eliminate small objects and isolated thin object points and to smooth the borders of larger objects without significantly changing their areas. The erosion operation removes object edge points; all the points of small objects are regarded as edge points, so small objects are removed entirely. The subsequent dilation restores the remaining large objects to their original size, while the deleted small objects are gone for good; this is the opening operation. Dilation expands object boundaries outward, so small holes inside an object are filled and no longer form borders; the subsequent erosion then restores the external border to its original appearance while the internal voids remain filled; this is the closing operation [13]. The operational rules of the opening and closing operations are as follows:

$$A \circ B = (A \ominus B) \oplus B, \qquad A \bullet B = (A \oplus B) \ominus B,$$

where $A$ is the input image, $B$ is the structural element, $\oplus$ denotes morphological dilation, and $\ominus$ denotes morphological erosion.

We collected 253 images at different distances, and after processing we conclude that the area of a vehicle taillight bright block is not less than 10 and does not exceed 300; for the same vehicle, the horizontal distance between the left and right taillight bright blocks is not less than 20 and not greater than 300. The literature [14] likewise reports that, in the range of 0–100 meters, the bright block area of vehicle taillights collected at different distances is not less than 10. Therefore, according to the area threshold and the horizontal distance threshold of the bright blocks, we can remove some interfering bright spots in the region of interest, as shown in Figure 10.

Figure 10: Image after image processing based on a priori knowledge and morphology.
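A minimal sketch of this cleanup stage, assuming the binary ROI from the segmentation step above: morphological opening and closing with OpenCV, followed by connected-component labeling and the area thresholds quoted in the text. The 3x3 structural element is an illustrative choice; a subsequent pairing step would additionally check that the horizontal distance between two retained blobs lies within the 20–300 range given above.

```python
import cv2
import numpy as np

def clean_and_filter(binary, min_area=10, max_area=300):
    """Open/close to remove noise, then keep blobs with plausible taillight area."""
    binary = binary.astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))   # illustrative element
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove isolated specks
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)   # fill small holes
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
    blobs = []
    for i in range(1, n):                                        # label 0 is background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            blobs.append((tuple(centroids[i]), int(area)))
    return blobs
```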
2.5. D-S Evidence Theory Fusion Characteristic Information

We label the connected regions in the region of interest after image processing and extract vehicle features such as the area ratio of connected regions and their overlap rate in the vertical direction. We obtain a total confidence value by fusing the vehicle feature information with D-S evidence theory.

Definition 1. Assume that the elements of the frame of discernment $\Theta$ are mutually incompatible. A basic probability assignment is a mapping $m: 2^{\Theta} \to [0,1]$ that satisfies

$$m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1.$$

In this paper, we define the frame of discernment as $\Theta$; the area ratio of the connected regions and their overlap rate in the vertical direction are two propositions under this frame, with evidence probability functions $m_1$ and $m_2$, respectively. When the area ratio of the connected regions is close to 1, the probability that the two connected regions belong to the same vehicle is relatively large; likewise, when the overlap rate in the vertical direction is close to 1, there is a greater probability that the two connected regions lie at the same level. The basic probability assignments of the two propositions are therefore determined as functions of $S$, the area ratio of the connected regions, and $O$, their overlap rate in the vertical direction: when the area ratio of two taillights is close to 1, the residual probability mass assigned against the same-vehicle hypothesis is close to 0, and when the vertical overlap ratio of two taillights is close to 1, that residual mass is likewise close to 0. These assignments are consistent with the D-S evidence theory conditions of formula (8).

According to the D-S evidence theory combination rule, we integrate the probability assignments of compatible propositions to obtain the probability assignments of their intersections. Let the focal elements of the two basic probability assignments $m_1$ and $m_2$ be $A_i$ and $B_j$, respectively; using the orthogonal sum rule, the two bodies of evidence are combined as

$$m(C) = \frac{\sum_{A_i \cap B_j = C} m_1(A_i)\, m_2(B_j)}{1 - K}, \qquad K = \sum_{A_i \cap B_j = \emptyset} m_1(A_i)\, m_2(B_j),$$

where $m(C)$ is the integrated probability assigned to proposition $C$, $m_1$ corresponds to (9), and $m_2$ corresponds to (10). In this article, if the integrated probability of the same-vehicle proposition is more than 0.9, we conclude that the two taillights come from the same car. Finally, we apply this trust threshold to verify the vehicle, as shown in Figure 11.

Figure 11: Detection of the vehicle taillight by the use of the D-S evidence theory.
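The combination rule above can be illustrated with a small Python sketch. The mass values and the trust threshold below are illustrative placeholders; only the 0.9 acceptance threshold comes from the text.

```python
def ds_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's orthogonal sum.

    m1 and m2 map frozenset propositions (subsets of the frame) to masses.
    """
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb                 # mass assigned to empty intersections
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative masses: 'V' = "the two blobs are the taillights of one vehicle".
frame = frozenset({"V", "notV"})
m_area    = {frozenset({"V"}): 0.85, frame: 0.15}   # evidence from the area ratio
m_overlap = {frozenset({"V"}): 0.90, frame: 0.10}   # evidence from vertical overlap
m = ds_combine(m_area, m_overlap)
accept = m[frozenset({"V"})] > 0.9                  # trust threshold from the text
```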
2.6. Results

In this paper, we present a method to fuse data from millimeter wave radar and digital images, focusing on how to identify the preceding vehicle in a complex nighttime environment. The hardware environment is an Intel Pentium E6500 CPU; the software environment includes the Windows XP operating system, the VC++ 6.0 integrated development environment, and the OpenCV open-source computer vision library [15]. Data are exchanged between the millimeter wave radar system and the vision system to identify the preceding vehicle at night, as shown in Figure 12.

Figure 12: Preceding vehicle identification at night.

3. Color Image Enhancement

The leading-vehicle detection system captures real-time gray images to detect the preceding vehicle. Although the nighttime vehicle detection system can effectively identify the leading vehicle, the statistics show that a driver's reaction time at night is significantly longer than during the day. Figure 13 shows the different braking distances caused by different reaction times at different initial braking velocities. Even when the vehicle detection system detects the leading vehicle, the extended reaction time lengthens the total stopping distance and increases the risk of a rear-end collision.

Figure 13: Different braking distances caused by different reaction times at different initial braking velocities.
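The trend in Figure 13 can be reproduced qualitatively with the standard stopping-distance relation (reaction distance plus braking distance). The deceleration value below is an assumption for illustration, not a figure from the paper.

```python
def stopping_distance(v_mps, t_react_s, decel_mps2=7.0):
    """Reaction distance plus braking distance under constant deceleration."""
    return v_mps * t_react_s + v_mps ** 2 / (2.0 * decel_mps2)

# At 20 m/s (72 km/h), an extra 0.5 s of reaction time adds 10 m of stopping distance:
extra = stopping_distance(20.0, 1.5) - stopping_distance(20.0, 1.0)   # = 10.0 m
```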

Meanwhile, the human eye is more sensitive to color images than to gray images. Currently, most image enhancement methods are used to enhance gray images. In terms of human color perception, while the human visual system can perceive only about twenty different gray levels, it can identify thousands of colors. In general, a three-channel RGB color image can be obtained by an in-vehicle camera, such as a driving recorder (dashcam). In this paper, we therefore enhance color images captured under nighttime conditions.

There are several methods to enhance images degraded by irregular illumination, including image contrast enhancement, histogram equalization [16], and Retinex [17, 18]. These methods usually enhance an input image by increasing its contrast. Retinex can process color images and can improve the quality of images degraded by insufficient lighting at night; it has become a hot topic in image enhancement research. This part discusses how to enhance whole nighttime images with McCann99 Retinex, Frankle-McCann Retinex, and single-scale Retinex (SSR).

3.1. The SSR Algorithm

Retinex theory plays an important role in the development of image enhancement. The color of an object is determined by its reflectivity, an inherent property of the object within a given band, and does not depend on the light source. In SSR, based on the image formation model, each color component image is

$$I_i(x, y) = L_i(x, y) \cdot R_i(x, y), \qquad i \in \{R, G, B\},$$

where $I_i$ is the input color component image, $L_i$ is the illumination, and $R_i$ is the reflectance component. The illumination is estimated by convolving the input color component image with a Gaussian function:

$$\hat{L}_i(x, y) = F(x, y) * I_i(x, y),$$

where $*$ is the convolution operator, $\hat{L}_i$ is the estimated illumination, and $F$ is the Gaussian filter

$$F(x, y) = K \exp\!\left(-\frac{x^2 + y^2}{c^2}\right),$$

with $c$ the scale parameter and $K$ the normalization factor such that $\iint F(x, y)\, dx\, dy = 1$. Figure 14 shows the effect of the scale: in (a) the Gaussian template is very smooth, the dynamic range of the enhanced image is compressed, and the image becomes locally blurred; in (b) the Gaussian function is relatively smooth, and although the pixel dynamic range is smaller than in (a), image fidelity is better; in (c) the Gaussian function is sharper, the central pixel is influenced more by its neighboring pixels, and the details and dynamic range of the enhanced image are better, but the enhanced image is dark and more distorted. In this paper, we select 110 as the scale parameter. Finally, the output color component image is

$$r_i(x, y) = \log I_i(x, y) - \log\bigl(F(x, y) * I_i(x, y)\bigr),$$

where $r_i$ is the output of Retinex. By using the SSR algorithm to enhance nighttime images, we can overcome the situation in which the leading vehicle cannot be recognized properly at night and ultimately obtain a visual effect similar to daytime.

Figure 14: Gaussian function with different scales; scale parameters of (a), (b), and (c) are chosen as 250, 110, and 30.
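A minimal SSR sketch in Python follows: each channel's log image minus the log of a Gaussian-blurred illumination estimate. Treating the scale parameter c directly as the Gaussian standard deviation, and the final linear stretch to a displayable range, are simplifying assumptions rather than choices specified in the text.

```python
import cv2
import numpy as np

def single_scale_retinex(bgr, c=110.0):
    """SSR sketch: per-channel log(image) minus log(Gaussian-blurred illumination)."""
    img = bgr.astype(np.float64) + 1.0                 # avoid log(0)
    out = np.zeros_like(img)
    for ch in range(3):                                # process the color channels independently
        blur = cv2.GaussianBlur(img[:, :, ch], (0, 0), sigmaX=c)   # estimated illumination
        out[:, :, ch] = np.log(img[:, :, ch]) - np.log(blur)       # reflectance estimate
    out = (out - out.min()) / (out.max() - out.min())  # stretch to a displayable range
    return (out * 255).astype(np.uint8)
```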
3.2. McCann99 Retinex Algorithm and Frankle-McCann Retinex Algorithm

The McCann99 Retinex algorithm and the Frankle-McCann Retinex algorithm are both Retinex algorithms based on a multiple-iteration strategy. Essentially, there is no difference between them: both consist of the same basic operations of selecting comparison pixels, comparing, and averaging over the iterations. However, the McCann99 algorithm is more time-consuming than the Frankle-McCann Retinex algorithm.

McCann99 Retinex selects pixels layer by layer using an image pyramid model. The topmost layer has the minimum resolution and the bottom layer has the maximum (original) resolution, the number of layers being determined by the image size. In the calculation, each pixel is compared with its eight neighboring pixels to obtain an estimated reflectance component, proceeding from the top layer to the bottom layer. The estimated reflectance component of the previous layer is interpolated so that it matches the size of the next, lower layer, and the interpolation and comparison operations are repeated until the bottom of the pyramid is reached. Eventually, we obtain the final color-enhanced image after comparison with the original image (Figure 15).

Figure 15: Pyramid model of McCann99 Retinex algorithm.

Frankle-McCann Retinex uses a spiral path to select the gray values of the pixels used to estimate and remove the luminance component of the image. The closer a point is to the center point being predicted, the more points should be selected there, because points near the center point are more relevant to it. At each step the path rotates 90 degrees clockwise and the step distance is halved, until it reaches a distance of one pixel (Figure 16).

Figure 16: Spiral structure path of Frankle-McCann Retinex algorithm.
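For illustration only, a small sketch of how such a spiral comparison path could be generated is given below: the step direction rotates by 90 degrees at each step and the step length is halved until it reaches one pixel. The starting direction and offsets are assumptions; the paper's exact path construction is not reproduced here.

```python
def spiral_offsets(start_dist):
    """Offsets of comparison pixels along a Frankle-McCann-style spiral path."""
    offsets = []
    x = y = 0
    dx, dy = 0, -1            # start by moving "up" in image coordinates (illustrative)
    d = int(start_dist)
    while d >= 1:
        x, y = x + dx * d, y + dy * d
        offsets.append((x, y))
        dx, dy = -dy, dx      # rotate the step direction 90 degrees (clockwise on screen)
        d //= 2               # halve the step length
    return offsets

print(spiral_offsets(64))     # offsets relative to the pixel being estimated
```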
3.3. Comparison and Analysis of Results

The software environment includes the MATLAB R2009a development environment and the Windows XP operating system. The images were provided by the camera system and captured under nighttime conditions. We compared the performance of the SSR, Frankle-McCann Retinex, and McCann99 algorithms, as shown in Figures 17–20.

Figure 17: The comparative results of nighttime images with different algorithms. From left to right, respectively, the following are represented: original image, result image of the SSR, result image of Frankle-McCann, and result image of McCann99.
Figure 18: The comparative results of nighttime images with different algorithms. From left to right, respectively, the following are represented: original image, result image of the SSR, result image of Frankle-McCann, and result image of McCann99.
Figure 19: The comparative results of nighttime images with different algorithms. From left to right, respectively, the following are represented: original image, result image of the SSR, result image of Frankle-McCann, and result image of McCann99.
Figure 20: The comparative results of nighttime images with different algorithms. From left to right, respectively, the following are represented: original image, result image of the SSR, result image of Frankle-McCann, and result image of McCann99.

4. Image Enhancement Evaluation

Looking at the results of the enhanced images, we can see that the nighttime color images are restored well, approaching the appearance of daytime images. Although the human visual system is an effective image evaluation standard, it is a subjective one. The distribution of visual effect proposed in [19] is shown in Figure 21; the region with a gray average between 100 and 200 and a standard deviation between 35 and 80 corresponds to the optimal visual effect.

Figure 21: Distribution of visual effect.

It is difficult to acquire a reference image of the same scene in normal daytime conditions. Therefore, we evaluate the enhanced images with no-reference objective quality evaluation methods. To illustrate the results objectively, we use objective evaluation criteria to assess image quality and the effectiveness of the algorithms. Five simple and effective indicators are used: processing time, average gray, standard deviation, average gradient, and color image information entropy. Specifically, the average gray denotes the amount of lighting; the standard deviation indicates the contrast of the image; the average gradient indicates the structural features of the image; and the color image information entropy indicates the image information content [17] (a larger value means more information). The results are shown in Tables 2–6.

Table 2: Value of objective quality evaluation indicators in Figure 17.
Table 3: Value of objective quality evaluation indicators in Figure 18.
Table 4: Value of objective quality evaluation indicators in Figure 19.
Table 5: Value of objective quality evaluation indicators in Figure 20.
Table 6: Average value of objective quality evaluation indicators.
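As a rough guide to how the four image-based indicators can be computed, a Python sketch is given below. The particular average-gradient formula and the use of the mean of per-channel entropies for the color entropy are common conventions assumed here, not necessarily the exact definitions used for Tables 2–6.

```python
import cv2
import numpy as np

def quality_metrics(bgr):
    """Illustrative no-reference indicators: average gray, std dev, average gradient, entropy."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    avg_gray = gray.mean()                       # average gray: overall brightness
    std_dev = gray.std()                         # standard deviation: global contrast
    gx = np.diff(gray, axis=1)[:-1, :]           # horizontal differences
    gy = np.diff(gray, axis=0)[:, :-1]           # vertical differences
    avg_grad = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))   # average gradient: detail
    entropy = 0.0                                # color entropy: mean of channel entropies
    for ch in range(3):
        hist = np.bincount(bgr[:, :, ch].ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy += -(p * np.log2(p)).sum()
    return avg_gray, std_dev, avg_grad, entropy / 3.0
```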

The evaluation results are presented in Tables 2–6. Specifically: (1) the average gray results show that the entire image becomes excessively bright with the McCann99 Retinex and Frankle-McCann Retinex algorithms, whereas, compared with the original image, the average gray of the SSR algorithm improves significantly, the overall image brightness is moderate, and the result is consistent with human visual experience. (2) For standard deviation, Table 6 shows that the SSR algorithm is better than the other two algorithms, indicating that SSR significantly enhances image contrast and restores image detail. (3) The average gradient after image enhancement is significantly higher than in the original image, meaning that better structural features are obtained. (4) In terms of entropy, the McCann99 algorithm is better than the other two, indicating that its enhanced images contain more information. (5) In terms of processing time, the SSR algorithm is faster than the other two. To reduce statistical errors, the data in Table 6 are the averages of Tables 2–5.

According to the five objective evaluation criteria, the restored images produced by the SSR enhancement algorithm appear more natural than those of the other two algorithms, and, according to subjective judgment by the human visual system, its color scene reproduction capability is the strongest. SSR can effectively improve the visibility of nighttime color images and restore image details, and its enhancement results satisfy human visual habits.

5. Conclusion

In this paper, in order to reduce rear-end accidents at night, a nighttime leading-vehicle detection method based on millimeter wave radar and vision is proposed, fusing data from multiple sensors to detect preceding vehicles. Our results show that the radar can determine the preceding vehicle's distance and speed and form a region of interest. Within the region of interest, we verify the vehicle based on digital image information, which not only reduces interference from the external environment but also narrows the inspection range and reduces the amount of computation. Test results show that the method based on the fusion of millimeter wave radar and digital images can identify preceding vehicles effectively at night. The method also performs well for taillights of other shapes; however, overlapping or occluded taillights can cause verification errors, which will be the focus of our future work.

Using the image enhancement algorithms, nighttime images were enhanced, and the results were assessed by both objective and subjective evaluation. The evaluation results show that the enhanced images satisfy human visual habits, and we consider the SSR algorithm the best of the three, ahead of Frankle-McCann and McCann99. The visual enhancement algorithms in this paper still have disadvantages: they cannot yet be applied in real time or to all nighttime images. In particular, the efficiency of the algorithms needs to be optimized and improved for the actual application process. Applying visual image enhancement algorithms to general images is worth further study.

Competing Interests

The authors declare that they have no competing interests regarding the content of this paper.

Acknowledgments

This research is supported by the National Natural Science Foundation (no. 51575229), the Fundamental Research Funds for the Central Universities, the National Distinguished Young Scholar Foundation Candidate Cultivation Program of Jilin University, and the Education Department of Jilin Province "Thirteenth Five-Year" Scientific and Technological Research Project (no. 419).

References

  1. National Bureau of Statistics of China, http://data.stats.gov.cn/
  2. Road Traffic Accident Statistics of the People's Republic of China, 1995, rear-end collision accident data, p. 39.
  3. Z.-Y. Liu, Q. Ye, F. Li, M.-H. Zhao, J.-S. Nie, and X.-Q. Sun, "Taillight detection algorithm based on four thresholds of brightness and color," Computer Engineering, vol. 36, no. 21, pp. 202–206, 2010.
  4. H. Wu, H. Huo, T. Fang et al., "Nighttime video vehicle detection in complex environment," Application Research of Computers, vol. 24, no. 12, pp. 386–389, 2007.
  5. J. Tang, X. Li, and D. Luo, "Vehicle detection at night based on frame difference," Computer Measurement & Control, vol. 16, no. 12, pp. 1811–1813, 2008.
  6. X. Wang, Detection of Preceding Vehicles at Night Based on Infrared CCD, Jilin University, Changchun, China, 2009.
  7. Q. Qi and Q. Chen, "Vehicle detection based on two-way multilane at night," Communications Technology, vol. 45, no. 10, pp. 58–60, 2012.
  8. J. Zhou, A Method to Identify Vehicle and Vehicle Distance at Night Based on Monocular Vision, Nanjing University of Science and Technology, Nanjing, China, 2009.
  9. D.-Z. Gao, J.-M. Duan, and H.-X. Wang, "Preceding vehicle detection based on laser and CCD," Journal of Beijing University of Technology, vol. 38, no. 9, pp. 1337–1342, 2012.
  10. Z. Cui, Study on Detection and Identification of Vehicles at Night Based on Laser and Vision, Jilin University, Changchun, China, 2007.
  11. Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
  12. Z. Ou, J. An, and F. Zhou, "Night-time vehicle detection using D-S evidence theory," Application Research of Computers, vol. 25, no. 5, pp. 1943–1946, 2012.
  13. Z. Liu, Z. Yu, J. Geng, and Z. Zhu, "Target detection in infrared image based on morphological filter algorithm," Infrared and Laser Engineering, vol. 42, no. 1, pp. 249–252, 2013.
  14. R. O'Malley, E. Jones, and M. Glavin, "Rear-lamp vehicle detection and tracking in low-exposure color video for night conditions," IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 453–462, 2010.
  15. S. Yu and R. Liu, Learning OpenCV, Tsinghua University Press, Beijing, China, 2009.
  16. A. K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, USA, 1989.
  17. W. T. Yang, R. G. Wang, S. Fang, and X. Zhang, "Variable filter Retinex algorithm for foggy image enhancement," Journal of Computer-Aided Design & Computer Graphics, vol. 22, no. 6, pp. 965–971, 2010.
  18. D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, "Properties and performance of a center/surround retinex," IEEE Transactions on Image Processing, vol. 6, no. 3, pp. 451–462, 1997.
  19. D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, "Statistics of visual representation," in Visual Information Processing XI, vol. 4736 of Proceedings of SPIE, pp. 25–35, Orlando, Fla, USA, 2002.