Research Article | Open Access
Cao Liu, Hong Zheng, Dian Yu, Xiaohang Xu, "A Novel Method of Adaptive Traffic Image Enhancement for Complex Environments", Journal of Sensors, vol. 2015, Article ID 516326, 9 pages, 2015. https://doi.org/10.1155/2015/516326
A Novel Method of Adaptive Traffic Image Enhancement for Complex Environments
There exist two main drawbacks for traffic images in classic image enhancement methods. The first is the performance degradation that occurs under frontlight, backlight, and extremely dark conditions. The second is complicated manual settings, such as transform functions and multiple parameter selection mechanisms. Thus, this paper proposes an effective and adaptive parameter optimization enhancement algorithm based on adaptive brightness baseline drift (ABBD) for color traffic images under different luminance conditions. This method consists of two parts: brightness baseline model acquisition and adaptive color image compensation. The brightness baseline model can be attained by analyzing changes in light along a timeline. The adaptive color image compensation involves color space remapping and adaptive compensation of specific color components. Experiments were conducted on various traffic images under frontlight, backlight, and nighttime conditions. The experimental results show that the proposed method achieved better effects than other available methods under different luminance conditions and also effectively reduced the influence of the weather.
1. Introduction
Video-based traffic surveillance has become essential in city management due to the rapid development of transportation. However, image sequences taken from surveillance systems display various properties in various environments, including frontlight, backlight, and extremely dark environments. These fluctuating properties may reduce the surveillance quality and affect subsequent detection processes, such as vehicle detection and vehicle plate recognition. As a result, the enhancement of traffic image sequences under complex environments plays a critical role in video-based surveillance.
There are two tracks for this topic. One is image enhancement under certain conditions. Xu et al. [1] proposed methods that involved a denoiser and a tone mapper, which enhanced very dark image sequences in particular. Klaus and Warrant [2] proposed a theoretical biological model of visual systems in nocturnal animals to determine the optimum spatiotemporal receptive fields for vision in dim light. He et al. [3] presented an effective method using the dark channel prior to remove haze from a single image. Tan [4] proposed a method that maximized the local contrast of the image in inclement weather, such as fog and haze. Im et al. [5] described a contrast enhancement method for backlit images that first extracted underexposed regions using a dark channel prior map and then performed spatially adaptive contrast enhancement. Tsai and Yeh [6] gave a contrast compensation method using fuzzy classification and image illumination analysis, especially for backlit and front-lit color faces. All these proposals perform well under certain conditions.
The other track involves the enhancement of images for certain objects. Yang [7] introduced a method that maximized the image contrast and then employed morphology processing for road sign recognition in natural scenes. The authors in [8] used normalization of local luminance to eliminate the impact of light changes so as to achieve the automatic detection and recognition of signs in natural scenes. In [9, 10], studies on license plate images in natural scenes were performed by means of histogram equalization and contrast stretching. In [11], Tan et al. employed a combination of high-frequency enhancement and neural networks to strengthen the contrast in van images under different light conditions.
Additionally, histogram equalization and its variations are among the most commonly used algorithms for contrast enhancement due to their simplicity and effectiveness [12–16]. Recently, Wu and Juang [17] proposed a histogram extension method specifically for vehicle detection systems. Wu [18] proposed an image enhancement method via optimal contrast-tone mapping. Xu et al. [19] analyzed the relationships between image histograms and contrast enhancement/white balancing and then proposed a generalized equalization model for image enhancement.
These research topics have contributed a great deal to improving image quality. However, as regards traffic images, these methods are not robust enough to deal with luminance changes during the course of a day or with the impact of extreme weather. In this paper, we present a new method to improve the quality of traffic surveillance images based on adaptive brightness baseline drift. In this method, the impacts of both brightness variation and time change are considered simultaneously, and a brightness variation model automatically obtains the parameters for enhancement. This method removes the effects of both weather and light and can consequently produce satisfactory enhancement results in complex environments.
The remainder of this paper is organized as follows. Section 2 describes the proposed ABBD method. Section 3 presents the experimental results. Conclusions are finally drawn in Section 4, along with recommendations for future research.
2. Adaptive Brightness Baseline Drift
The traffic image enhancement approach discussed in this paper consists of two steps: brightness baseline model acquisition and adaptive image compensation based on that model. Figure 1 illustrates the flow diagram of the proposed approach. In the first step, the brightness baseline model was obtained from an offline brightness baseline curve and online brightness feedback. In the second step, the color image to be processed was first transformed into the HSV color space, in which the brightness component was adjusted according to the baseline value, and then the backward color transformation was applied to produce the enhanced image.
2.1. Brightness Baseline Model Acquisition
Because brightness changes with time, a baseline model is used to describe the overall trend of illumination. The model comprises two parts: brightness baseline analysis and compensation, and real-time brightness feedback.
2.1.1. Brightness Baseline Analysis and Compensation
In order to precisely measure the illumination variations with time, two groups of images were obtained from a north-south intersection and an east-west intersection, respectively, in different light directions. The mean values of the brightness were calculated after certain intervals for both groups. The statistical result is illustrated in Figure 2. It can be concluded from this figure that the mean brightness values do vary significantly over time. Therefore, a baseline of this value is proposed in this paper, and these baseline values form a brightness baseline curve.
Define the brightness baseline value at time $t$ as $B_b(t)$. This value is relatively invariable at night, since it is free from the impact of sunlight, while it changes during daytime. A camera that faces north or south has the brightest surveillance image at noon but much darker ones in the morning and in the evening. The change in brightness goes from low to high and then drops back to the low level. This process can be described as a Gaussian function during daytime and as a constant during nighttime:

$$B_b(t)=\begin{cases}A\exp\!\left(-\dfrac{(t-\mu)^2}{2\sigma^2}\right), & \text{daytime},\\ C, & \text{nighttime},\end{cases}\tag{1}$$

where $A$, $t$, $\mu$, and $\sigma$ are the brightness value at the brightest moment, the current moment, the mean value of this curve, and the standard deviation of the Gaussian function, respectively. Here, $A$ is set to 150, according to experiments, and $\sigma$ was chosen as follows: assume that daytime lasts from $t_1$ to $t_2$, and the interval $[\mu-3\sigma,\,\mu+3\sigma]$ covers more than 99.7% of the area under the Gaussian distribution. Thus $\sigma=(t_2-t_1)/6$. The constant $C$ denotes the brightness value at night.
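As a minimal sketch of the baseline model above (the function name, the nighttime constant C = 40, and the daytime window 6 h to 18 h are illustrative assumptions, not values or code from the paper), the piecewise Gaussian baseline could be evaluated as:

```python
import math

def brightness_baseline(t, t1=6.0, t2=18.0, A=150.0, C=40.0):
    """Brightness baseline B_b(t): Gaussian during daytime [t1, t2],
    constant C at night. A is the peak (noon) brightness; mu is the
    midpoint of daytime; sigma spans daytime within +/-3 standard
    deviations. C = 40 is an illustrative nighttime level."""
    mu = (t1 + t2) / 2.0       # brightest moment of the day
    sigma = (t2 - t1) / 6.0    # [mu - 3*sigma, mu + 3*sigma] covers daytime
    if t1 <= t <= t2:
        return A * math.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))
    return C
```

With these defaults the baseline peaks at 150 at noon and stays at the nighttime constant outside the daytime window.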
Figure 3 illustrates the condition for the camera facing east and west. When the luminance angle changes, frontlight and backlight occur. The image is bright under frontlight conditions, as is the area exposed to the direct light present in backlight images, but the area in shadow is dark. Figure 3(a) shows the relationship between camera orientation, the object, and the sun. The traffic image shows frontlight in the morning and backlight in the afternoon when the camera is facing west; the opposite occurs when it is facing east. Figures 3(b) and 3(c) give examples of the frontlight image and backlight image, respectively. The brightness histograms shown in Figures 3(d) and 3(e) can quantitatively depict brightness distributions.
To move the traffic image into a reasonable range of brightness, the overall brightness of the frontlight image should be reduced. For the backlight image, the brightness in the area exposed to direct sunlight should be reduced, while it should be increased in the shadow. The vehicle plate and other useful information are found in shadow, whereas the highlighted areas are on the ground; under backlight conditions the brightness baseline value should therefore be increased. Accordingly, the brightness baseline value should be reduced in the morning and increased in the afternoon when the camera is facing west; the opposite should occur when facing east. Because the overall trend of illumination changes slowly, the trigonometric function in the following equation is utilized to compensate the image:

$$\Delta B(t)=k\sin\!\left(\frac{2\pi\,(t-t_s)}{t_e-t_s}-\pi\right),\tag{2}$$

where $k$ is the compensation factor, which varies with the angle of the camera and the season. It is larger in summer, because the backlight and frontlight are more intense, and smaller in winter, when the lighting is mild. This value is greater than zero when the camera is facing west, less than zero when facing east, and equal to zero when facing north or south. $t_s$ indicates when the time compensation starts and $t_e$ when it ends. Then we have

$$\Delta B(t)=0,\quad t<t_s\ \text{or}\ t>t_e.\tag{3}$$
Therefore, the brightness baseline value after compensation at time $t$ is

$$B_c(t)=B_b(t)+\Delta B(t),\tag{4}$$

where $B_b(t)$ represents the brightness baseline in normal situations and $\Delta B(t)$ represents the compensation brightness. The curves of $\Delta B(t)$ and $B_c(t)$ are illustrated in Figure 4. The red curve is the compensation brightness curve under frontlight, while the blue curve is the compensation brightness curve under backlight; correspondingly, the red curve stands for the compensated baseline under frontlight and the blue curve for the compensated baseline under backlight.
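The compensated baseline can be sketched in the same spirit. The sine form, the window [t_s, t_e], and k = 20 below are assumptions chosen only to satisfy the sign conventions described above (for a west-facing camera with k > 0, the correction is negative in the morning frontlight and positive in the afternoon backlight):

```python
import math

def compensation(t, k=20.0, ts=8.0, te=18.0):
    """Slowly varying trigonometric compensation Delta_B(t).
    k > 0 for a west-facing camera, k < 0 facing east, k = 0 facing
    north/south. Zero outside the compensation window [ts, te].
    The exact sine form and k = 20 are illustrative assumptions."""
    if ts <= t <= te:
        # negative in the first half (reduce frontlight brightness),
        # positive in the second half (lift the backlight baseline)
        return k * math.sin(2.0 * math.pi * (t - ts) / (te - ts) - math.pi)
    return 0.0

def compensated_baseline(t, baseline):
    """B_c(t) = B_b(t) + Delta_B(t), given any baseline function B_b."""
    return baseline(t) + compensation(t)
```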
2.1.2. Real-Time Brightness Feedback
Considering that images taken at the same time differ under different weather and luminance conditions, it is not sufficient to determine the brightness reference value solely from the brightness baseline curve. Therefore, a real-time feedback mechanism was introduced to resolve this problem. First, the reference value $B_c(t)$ was obtained from the brightness baseline curve. Next, the mean brightness of the real-time image was calculated as $B_m$, along with its difference from the baseline, $D=B_m-B_c(t)$. Defining the correction value for brightness as $\Delta_c$, we have

$$\Delta_c=\alpha D,\tag{5}$$

where $\alpha$ is the correction step; it takes a value between 0 and 1.
However, in a real scene, two consecutive frames may differ greatly due to abrupt changes in lighting and object movement. Therefore, the previous correction value, defined as the “inertia term,” was taken into consideration. Figure 5 shows this process, where $\Delta_c$ and $\Delta_p$ stand for the correction values of the current frame and the previous frame, respectively, and $\beta$ is the “inertia coefficient” that ranges from 0 to 1. Thus

$$\Delta=(1-\beta)\,\Delta_c+\beta\,\Delta_p.\tag{6}$$
The final brightness reference value, shown in the following equation, is composed of the value from the curve and the feedback value:

$$B_{\mathrm{ref}}(t)=B_c(t)+\Delta.\tag{7}$$
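The feedback described above can be sketched as one update per frame; the defaults α = 0.3 and β = 0.5 and the variable names are illustrative assumptions, not values from the paper:

```python
def feedback_correction(bc, bm, prev_delta, alpha=0.3, beta=0.5):
    """One real-time feedback step. bc is the baseline-curve value B_c(t),
    bm the measured mean brightness of the frame, prev_delta the previous
    frame's correction (the "inertia term"). alpha is the correction step
    and beta the inertia coefficient, both in (0, 1).
    Returns (brightness reference value, correction for the next frame)."""
    delta_c = alpha * (bm - bc)                         # current correction
    delta = (1.0 - beta) * delta_c + beta * prev_delta  # blend in inertia
    return bc + delta, delta
```

Per frame, the returned correction is fed back in as `prev_delta`, so sudden lighting or object changes are damped rather than applied at once.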
2.2. Adaptive Color Image Compensation
After the reference value for brightness was attained, it was used with the brightness histogram to enhance the image. In order to preserve the color, the color image must first be transformed into the HSV color space; the enhancement is performed on the V component. The steps for image compensation are as follows.
Firstly, the RGB image was transformed into the HSV color space, and the mean brightness was calculated using

$$B_m=\frac{\sum_{v=0}^{255} v\,n_v}{\sum_{v=0}^{255} n_v},\tag{8}$$

where $v$ is the brightness value and $n_v$ is the pixel count at brightness $v$.
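Assuming a 256-bin histogram of the V component, the mean brightness described above is a weighted average over the bins:

```python
def mean_brightness(hist):
    """Mean brightness B_m from a 256-bin histogram, where hist[v] is the
    pixel count at brightness v (the V channel of an HSV image)."""
    total = sum(hist)
    return sum(v * n for v, n in enumerate(hist)) / total
```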
Secondly, the adaptive reference value for brightness was obtained by (7).
Thirdly, the histogram of brightness was shifted by the difference between $B_{\mathrm{ref}}$ and $B_m$. Consequently, the drift coefficients $k_l$ and $k_r$ were calculated using (9), where the two regions are separated by the brightness baseline:

$$k_l=\frac{B_{\mathrm{ref}}-v_{\min}}{B_m-v_{\min}},\qquad k_r=\frac{v_{\max}-B_{\mathrm{ref}}}{v_{\max}-B_m}.\tag{9}$$
Then the brightness transformation was processed in both regions using (10):

$$v_l'=v_{\min}+k_l\,(v_l-v_{\min}),\qquad v_r'=v_{\max}-k_r\,(v_{\max}-v_r),\tag{10}$$

where $v_{\min}$ and $v_{\max}$ are the minimum and maximum of the brightness, respectively; $v_l$ is a brightness value less than the reference brightness and $v_r$ one greater than it; and $v_l'$ and $v_r'$ are the corresponding values after the shift. Figure 6 is an example of adaptive image compensation, where the left region takes values from 0 to $B_{\mathrm{ref}}$, while the right one ranges from $B_{\mathrm{ref}}$ to 255.

Finally, the image was transformed back into the RGB color space.
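A hedged sketch of the two-segment brightness drift follows; the drift-coefficient form is our reconstruction of the step described above (map values below the image mean onto [v_min, B_ref] and values above it onto [B_ref, v_max]), and the function operates on a single V value:

```python
def shift_brightness(v, b_ref, b_m, v_min=0.0, v_max=255.0):
    """Two-segment linear drift of the V histogram. Values below the image
    mean b_m are remapped onto [v_min, b_ref]; values above it onto
    [b_ref, v_max]. Maps b_m to b_ref and keeps the endpoints fixed."""
    if v <= b_m:
        k_l = (b_ref - v_min) / (b_m - v_min)  # left drift coefficient
        return v_min + k_l * (v - v_min)
    k_r = (v_max - b_ref) / (v_max - b_m)      # right drift coefficient
    return v_max - k_r * (v_max - v)
```

In a full pipeline this would be applied per pixel to the V channel between the HSV forward and backward color transforms (e.g. with OpenCV's `cvtColor`), leaving H and S untouched so the color is preserved.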
3. Experimental Results
3.1. Effectiveness of Traffic Image Enhancement
To verify the effectiveness of this enhancement method, experiments were conducted at one crossroad in Hubei Province, China. Frontlight images at 11 h, backlight images at 16 h, and nighttime images at 21 h were randomly selected among the samples. The methods in [1–6] only focused on certain environments, and those in [7–11] aimed at certain objects; therefore, the two latest and typical enhancement methods that are effective under different luminance conditions, histogram extension [17] and ESIHE [16], were employed for comparison.
Figure 7(a) is the original frontlight image, which is considerably bright because of the intense sunlight and the installation angle of the camera. Figure 7(b) is the image enhanced by our method, while Figures 7(c) and 7(d) are the results of histogram extension and ESIHE, respectively. From these figures, it can be concluded that the proposed method reduces the overall brightness while increasing the contrast at the same time. The color of the vehicles and the ground processed by our method is superior to that of the comparison methods in visual perception. To show the details more clearly, Figures 7(e) and 7(f) show magnified images of the red rectangle areas. In comparison with the other methods, our proposed method achieves a significant reduction of exposure and a wider range of brightness.
Figure 8(a) is the original backlight image; most areas are extremely bright except for those in shadow. Figure 8(b) is the image enhanced by our method, which has a smoother transition in the area where brightness and shadow meet; moreover, there is less road texture and better contrast in the shadow area. For example, the vehicle plate and other features become clearer. Figures 8(c) and 8(d) show the results of histogram extension and ESIHE. As can be observed, the histogram extension method barely changed the brightness values, and the ESIHE method had a limited effect in this case. As observed in Figures 8(e) and 8(f), the vehicle details in shadow are enhanced, achieving a better visual effect.
Figure 9(a) is the original image during nighttime. The vehicle is too dark to be observed. Figure 9(b) is the result processed by our method, which shows clear details without causing distortion. Figures 9(c) and 9(d) are the histogram extension and ESIHE results. Even though the former increased the brightness, it added a severe blocking effect, while ESIHE enhanced the contrast but brought in more noise. Figures 9(e) and 9(f) indicate again that our method increases the contrast without causing overexposure in the vehicle lamp and vehicle plate areas.
3.2. Experimental Results for Eliminating Effects of Weather
The quality of traffic surveillance videos can be greatly affected by environmental changes, which results in reduced precision in vehicle detection and plate recognition. Experiments were conducted to demonstrate that ABBD eliminates the effects of weather, with samples collected at 10 h, 12 h, 16 h, and 21 h on sunny days (S), rainy days (R), and cloudy days (C), respectively. The root mean square error (RMSE) and absolute mean brightness error (AMBE) in the following equations were employed to measure the differences between two images of different sceneries:

$$\mathrm{RMSE}=\sqrt{\frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\bigl(I_1(i,j)-I_2(i,j)\bigr)^2},$$

$$\mathrm{AMBE}=\bigl|\,\overline{I}_1-\overline{I}_2\,\bigr|,$$

where $I_1(i,j)$ and $I_2(i,j)$ are defined as the pixel values at $(i,j)$ in images $I_1$ and $I_2$, $H$ and $W$ stand for the height and width of the images, and $\overline{I}_1$ and $\overline{I}_2$ are the mean brightness values of the two images. Table 1 lists the RMSE result for each sample after applying ABBD, HE, and ESIHE. The corresponding AMBE values are listed in Table 2.
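Both metrics are straightforward to compute; the following is a small sketch on plain 2-D lists of pixel values (not the authors' implementation):

```python
def rmse(img1, img2):
    """Root mean square error between two equal-sized grayscale images
    given as 2-D lists of pixel values."""
    h, w = len(img1), len(img1[0])
    se = sum((img1[i][j] - img2[i][j]) ** 2
             for i in range(h) for j in range(w))
    return (se / (h * w)) ** 0.5

def ambe(img1, img2):
    """Absolute mean brightness error: |mean(img1) - mean(img2)|."""
    def mean(img):
        return sum(map(sum, img)) / (len(img) * len(img[0]))
    return abs(mean(img1) - mean(img2))
```

Lower values of either metric between images of the same scene under different weather indicate that the enhancement has made the images more similar.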
It can be concluded from Table 1 that the RMSE varies at the same time of day under different weather conditions; however, the RMSE was reduced after our method was applied, yielding the lowest RMSE for all images. The HE method also reduced the original RMSE values, but they remained higher than those of the proposed method. The ESIHE method seldom reduced the RMSE values.
Table 2 shows the absolute mean brightness difference between images under different weather conditions. The smaller the value, the greater the similarity between the images. The AMBE values of the HE and ESIHE methods were higher than those of the proposed method for all images; therefore, our method proved effective in diminishing the absolute mean brightness difference.
Based on the experimental results, the weather effects were significantly reduced in all the source images. These results benefit subsequent processes such as moving object segmentation, vehicle detection, and plate recognition. Moreover, the parameter settings of the surveillance system could be simplified, and better robustness could be achieved in complex environments.
4. Conclusions
This paper proposed a new method based on adaptive brightness baseline drift for traffic image enhancement. First, changes in lighting were analyzed, and then an adaptive model was applied. After the modeling, a nonlinear enhancement was applied to the image. The performance analysis indicated that our method achieved better effects than other available methods under different luminance conditions and effectively reduced the influence of the weather. In future work, we will continue to optimize this method, especially as regards evening performance; moreover, we will apply it to vehicle detection and license plate recognition.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work was supported by the Major State Basic Research Development Program of China (973 Program) (no. 2012CB719900).
References
1. Q. Xu, H. Jiang, R. Scopigno, and M. Sbert, “A novel approach for enhancing very dark image sequences,” Signal Processing, vol. 103, pp. 309–330, 2014.
2. A. Klaus and E. J. Warrant, “Optimum spatiotemporal receptive fields for vision in dim light,” Journal of Vision, vol. 9, article 18, 2009.
3. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
4. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 2347–2354, Anchorage, Alaska, USA, June 2008.
5. J. Im, I. Yoon, M. H. Hayes, and J. Paik, “Dark channel prior-based spatially adaptive contrast enhancement for back lighting compensation,” in Proceedings of the 38th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '13), pp. 2464–2468, Vancouver, Canada, May 2013.
6. C.-M. Tsai and Z.-M. Yeh, “Contrast compensation by fuzzy classification and image illumination analysis for back-lit and front-lit color face images,” IEEE Transactions on Consumer Electronics, vol. 56, no. 3, pp. 1570–1578, 2010.
7. X. Yang, “Enhancement for road sign images and its performance evaluation,” Optik, vol. 124, no. 14, pp. 1957–1960, 2013.
8. X. Chen, J. Yang, J. Zhang, and A. Waibel, “Automatic detection and recognition of signs from natural scenes,” IEEE Transactions on Image Processing, vol. 13, no. 1, pp. 87–99, 2004.
9. M. Cinsdikici, A. Ugur, and T. Tunali, “Automatic number plate information extraction and recognition for intelligent transportation system,” Imaging Science Journal, vol. 55, no. 2, pp. 102–113, 2007.
10. J. S. Kim, S. C. Park, and S. H. Kim, “Text locating from natural scene images using image intensities,” in Proceedings of the 8th International Conference on Document Analysis and Recognition, vol. 2, pp. 655–659, Seoul, Republic of Korea, August 2005.
11. H.-S. Tan, F.-Q. Zhou, Y. Xiong, and X.-K. Li, “Adaptive enhancement of image brightness and contrast based on neural networks,” Journal of Optoelectronics Laser, vol. 21, no. 12, pp. 1881–1884, 2010.
12. Y. Wang, Q. Chen, and B. Zhang, “Image enhancement based on equal area dualistic sub-image histogram equalization method,” IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, 1999.
13. S.-D. Chen and A. R. Ramli, “Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation,” IEEE Transactions on Consumer Electronics, vol. 49, no. 4, pp. 1301–1309, 2003.
14. S.-D. Chen and A. R. Ramli, “Preserving brightness in histogram equalization based contrast enhancement techniques,” Digital Signal Processing, vol. 14, no. 5, pp. 413–428, 2004.
15. Q. Wang and R. K. Ward, “Fast image/video contrast enhancement based on weighted thresholded histogram equalization,” IEEE Transactions on Consumer Electronics, vol. 53, no. 2, pp. 757–764, 2007.
16. K. Singh and R. Kapoor, “Image enhancement using exposure based sub image histogram equalization,” Pattern Recognition Letters, vol. 36, no. 1, pp. 10–14, 2014.
17. B.-F. Wu and J.-H. Juang, “Adaptive vehicle detector approach for complex environments,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 2, pp. 817–827, 2012.
18. X. L. Wu, “A linear programming approach for optimal contrast-tone mapping,” IEEE Transactions on Image Processing, vol. 20, no. 5, pp. 1262–1272, 2011.
19. H. Xu, G. Zhai, X. Wu, and X. Yang, “Generalized equalization model for image enhancement,” IEEE Transactions on Multimedia, vol. 16, no. 1, pp. 68–82, 2014.
Copyright © 2015 Cao Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.