Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 861321, 8 pages
http://dx.doi.org/10.1155/2013/861321
Research Article

A Method for Real-Time Vehicle-Flow Detection and Tracking Based on Gaussian Mixture Distribution

1Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Science, Xinjiang 830011, China
2College of Electromechanical & Information Engineering, Dalian Nationalities University, Dalian 116024, China
3College of Transportation, Jilin University, Changchun 130022, China

Received 15 July 2013; Revised 9 October 2013; Accepted 27 October 2013

Academic Editor: Wuhong Wang

Copyright © 2013 Ronghui Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Vehicle-flow detection and tracking from digital images is one of the most important technologies in traffic monitoring systems. In this paper, a Gaussian mixture distribution method is first used to eliminate the influence of moving vehicles and to build the background image for the vehicle flow. Combining the advantages of the background difference algorithm with the interframe difference operator, the real-time background is segmented completely and updated dynamically and accurately by matching the reconstructed image with the current background. To ensure robust vehicle detection, a 3 × 3 window template is adopted to remove isolated noise spots in the vehicle contour image, and a template structural element is used for morphological filtering, yielding the erosion and dilation sets. To narrow the target search scope and improve the calculation speed and precision of the algorithm, a Kalman filtering model is used to track fast-moving vehicles. Experimental results show that the method has good real-time performance and reliability.

1. Introduction

At present, vehicle-flow detection methods are mainly divided into three categories, based respectively on wave frequency, magnetic frequency, and machine vision [1]. Compared with the other two, the machine-vision method based on video is a noncontact detection method with the following advantages.
(1) Easy to install, repair, and maintain. CCD cameras can be installed in areas adjacent to the road to acquire vehicle images and video. In addition, they are installed without closing the expressway or excavating the road, so orderly traffic is not disturbed.
(2) Wide scope of video monitoring. The imaging area of a CCD camera can cover multiple lanes simultaneously and obtain more parameters related to traffic flow.
(3) It can record live video, which provides evidence for the causes of accidents [2].
Additionally, with the continuous development of computer hardware and software, image processing, artificial intelligence, and pattern recognition, video-based methods now satisfy real-time and accuracy requirements. With growing attention from transportation operators, governmental authorities, and researchers, video-based vehicle detection is applied more and more widely in ITS [3–6].

In recent years, experts and scholars in related fields have paid increasing attention to moving-vehicle detection. Fruitful research results have been achieved, such as the optical flow method [7], the background difference method [8], and the interframe difference method [9]. Each of these has its own advantages and disadvantages. The optical flow method has better accuracy, but its real-time performance is poor. The background difference method is commonly used when the background changes little, and its results depend on the background update algorithm: the more reliable the background update, the better the results. The interframe difference method can detect vehicles well, but it easily misses slow vehicles and causes false detections because it often segments a vehicle into multiple parts.

In [10], Magee put forward a tracking algorithm based on a background model and on the shape, position, and velocity of an object. Each pixel in the traffic scene was classified as background, foreground, or noise. The calibrated background information was also used in the foreground models to verify consistency assumptions on area and detected vehicle speed. Kamijo and Sakauchi and Bennett et al. [11, 12] split the vehicle image into multiple regions using a spatio-temporal Markov random field model; each region was labeled based on texture information, and the correspondence of regions was identified along the time axis. This method achieved good recognition performance, with vehicle-flow identification accuracy as high as 95%. Schclar et al. [13] put forward a 3D vehicle model based on a probability density function and a detection process that adopts linear motion characteristics. In [14], Huang et al. detected vehicles in aerial images using rectangular shape, the front window, and shadow information, but the process is slow (an image of 1000 × 870 pixels is processed in about 30 seconds). Stauffer and Grimson [15] tracked vehicle flow and pedestrians with a mixed Gaussian method based on a background-foreground model, at a processing speed of 2 frames/s. Although video-based vehicle-flow detection algorithms have laid a sound research foundation, the real-time performance of such algorithms still needs further improvement.

The main contents of this paper include moving-background reconstruction, real-time background updating, moving-vehicle-flow extraction, noise suppression, and vehicle-flow tracking. Firstly, we apply the Gaussian mixture distribution method to eliminate the impact of moving vehicles and then establish the background image. Secondly, by matching the reconstructed background with the current background of the image sequence and combining the advantages of the background difference method with the interframe difference method, the real-time background can be segmented completely and accurately and updated dynamically. To ensure the robustness of subsequent vehicle detection, a 3 × 3 window template is applied to eliminate isolated noise points in the vehicle contour image. Moreover, to obtain the erosion and dilation sets of the image, a template structural element is used for morphological filtering, which effectively restrains noise while keeping the marginal information of the moving vehicles as far as possible. Finally, to narrow the target search range and improve the calculation speed and precision of the algorithm, this paper uses a Kalman filtering model to track fast-moving vehicle flow. Apart from extreme weather, the experimental statistics show that the algorithm performs very well in real time and reliability, and the acquired traffic parameters can meet the needs of transportation management departments.

This paper is organized as follows. Following the introduction, the detection algorithm for moving vehicle flow is described in Section 2. Section 3 illustrates moving-vehicle-flow tracking based on the Kalman filter, and Section 4 presents the experimental results. Section 5 concludes the paper; the conflict of interests statement and acknowledgments appear at the end.

2. Detection Algorithm of Moving Vehicle Flow

Background subtraction [16–18] calculates the difference image between the background image and the current image and thus segments the moving pixels in the scene. First, this paper applies a Gaussian mixture model to establish the background image for moving vehicles, uses a background-adaptive method to update the background in real time, and then classifies each pixel in the image according to the Gaussian motion model. Secondly, this paper proposes a two-step noise-reduction method; noise-suppression mechanisms are added to the foreground detection part to reduce the impact of noise on foreground detection. Finally, to obtain an accurate description of the moving-vehicle regions, the system compares the former frame image with the reference background and determines the pixels whose brightness changes drastically. The moving vehicles can be identified as follows:

D_k(x, y) = |I_k(x, y) − B_k(x, y)|,

where I_k, B_k, and D_k are the current image, the background image, and the extraction image of moving vehicles, respectively.
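The extraction step above can be sketched as an absolute difference between the current frame and the background followed by thresholding. This is a minimal illustration; the threshold value 30 and the function name are assumptions, not values from the paper.

```python
import numpy as np

def extract_moving_vehicles(current, background, threshold=30):
    """Binary extraction image: 1 where |current - background| exceeds threshold."""
    diff = np.abs(current.astype(np.int32) - background.astype(np.int32))
    return (diff > threshold).astype(np.uint8)

# toy example: flat background, one bright "vehicle" patch in the current frame
background = np.full((5, 5), 100, dtype=np.uint8)
current = background.copy()
current[1:3, 1:3] = 200          # moving object
mask = extract_moving_vehicles(current, background)
```

Only the four patch pixels survive the threshold; everything else is classified as background.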

2.1. Median Filtering

Because of the influence of conversion devices (such as photodetectors and A/D converters) and the surrounding environment [19], the collected images contain various kinds of noise and distortion. A median filter can overcome these negative influences and protect the edge information of vehicles [4, 20]; furthermore, its processing speed is relatively fast. Therefore, we use a median filter for image preprocessing. Figure 1 is the original image, and Figure 2 shows the result of applying a 3 × 3 median filter.

861321.fig.001
Figure 1: Original image.
861321.fig.002
Figure 2: Vehicle image using 3 by 3 median filter.
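The 3 × 3 median filtering step can be sketched as follows. This is a straightforward reference implementation (border pixels are simply left unchanged, which is one of several common border policies):

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighbourhood."""
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# salt noise: a single bright outlier in a flat region is removed,
# while a flat region (an "edge-free" area) is left untouched
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
filtered = median_filter_3x3(img)
```

The median of eight 10s and one 255 is 10, so the outlier disappears without blurring the surrounding values, which is why the median filter preserves vehicle edges better than a mean filter.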
2.2. Background Reconstruction

The Gaussian mixture model is capable of handling environmental changes caused by light and has been used more and more frequently in recent years for detecting moving targets, such as ships, pedestrians, and gait [2, 5, 17–19]. Therefore, this paper collects images over a period of time and establishes the background model for moving-vehicle detection by applying a Gaussian mixture model to each pixel [20]. Its basic principle is as follows.
(1) Assume that each pixel in the images has K states; K is usually taken as 3 to 5. The greater the value of K, the stronger the ability to deal with fluctuations, but the longer the required processing time.
(2) Each state is represented by a Gaussian function. Some of the states model the pixel value of the background, while the rest model the pixel values of moving vehicles in the foreground.
(3) If the pixel color value is denoted by the variable x_t, its probability density function is

p(x_t) = Σ_{i=1}^{K} w_{i,t} · η(x_t; μ_{i,t}, Σ_{i,t}),

where μ_{i,t} is the mean of the ith Gaussian distribution at time t, Σ_{i,t} is its covariance matrix, w_{i,t} is its weight with Σ_{i=1}^{K} w_{i,t} = 1, and η is a d-dimensional Gaussian density; d is the dimension of x_t.

For a gray image, d = 1; for a color image, d = 3. To reduce the computational cost, we consider the color channels of each pixel in the acquired image to be independent of each other and to have the same variance, so the covariance matrix is

Σ_{i,t} = σ_{i,t}² · I.

We put the K Gaussian distributions in descending order according to the value of w_{i,t}/σ_{i,t}. The first B distributions whose cumulative weight exceeds the threshold are considered to be background distributions:

B = argmin_b ( Σ_{i=1}^{b} w_{i,t} > T_w ),

where T_w is the weight threshold, that is, the minimum cumulative weight of the Gaussian distributions that can describe the scene background; the remaining distributions model the foreground. Figure 3 shows the background image constructed with the Gaussian mixture method. We can see from the figure that there is no impact of vehicles in the reconstructed background image.
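The background-distribution selection rule (sort by w/σ, take the smallest prefix whose cumulative weight exceeds the threshold) can be sketched as follows. The weights, variances, and threshold value below are illustrative:

```python
import numpy as np

def background_distributions(weights, sigmas, T=0.7):
    """Indices of the Gaussians treated as background: sort by w/sigma
    descending, keep the first B whose cumulative weight first exceeds T."""
    order = np.argsort(-(weights / sigmas))
    cum = np.cumsum(weights[order])
    B = int(np.searchsorted(cum, T) + 1)
    return order[:B]

# three states for one pixel: two stable, low-variance (background)
# and one transient, high-variance (passing vehicles)
weights = np.array([0.5, 0.3, 0.2])
sigmas = np.array([2.0, 2.5, 10.0])
bg = background_distributions(weights, sigmas, T=0.7)
```

With these values the cumulative ordered weights are 0.5 and 0.8, so the first two distributions are selected as background and the high-variance one models the foreground.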

861321.fig.003
Figure 3: Background image constructed with the mixture Gaussians distribution model.
2.3. Background Updating

Since the vehicle motion detection system operates in the open air, it is greatly influenced by random factors, such as changes in the intensity and direction of illumination, strong winds, vehicle-induced vibration, and roadside flowers, grass, and trees moving with the wind [21]. Therefore, the designed algorithm must be able to update the background model to adapt to environmental changes.

Suppose B_k(x, y) and I_k(x, y) are the current background image and the corresponding image at moment k. The moving pixels at time k are calculated as follows:

M_k(x, y) = 1 if |I_k(x, y) − B_k(x, y)| > T, and M_k(x, y) = 0 otherwise,

where T is the segmentation threshold.

In this paper, we generate the mask template of the image by combining the advantages of the background difference method and the interframe difference method. The background difference, the interframe difference, and the generated mask template are

D_k(x, y) = |I_k(x, y) − B_k(x, y)|,
F_k(x, y) = |I_k(x, y) − I_{k−1}(x, y)|,
Mask_k(x, y) = 1 if D_k(x, y) > T or F_k(x, y) > T, and 0 otherwise.

Figure 4 shows the image by background difference, and Figure 5 is the image using inter-frame difference. The generated mask template is shown as in Figure 6.

861321.fig.004
Figure 4: Background difference image for vehicle detection.
861321.fig.005
Figure 5: Inter-frame difference image for vehicle detection.
861321.fig.006
Figure 6: The mask template combined background difference image with inter-frame difference image.

Here B_k, I_k, and I_{k−1} are the current background image, the corresponding image at moment k, and the corresponding image at moment k − 1. The cut image corresponding to B_k in the mask region and the cut image corresponding to I_k in the non-mask region are

B_k^mask(x, y) = B_k(x, y) · Mask_k(x, y),
I_k^nonmask(x, y) = I_k(x, y) · (1 − Mask_k(x, y)),

and their sum forms the real-time background image.

We define B_{k+1} as the updated background image, which is the weighted sum of the current background image and the real-time background image:

B_{k+1}(x, y) = (1 − α_k) · B_k(x, y) + α_k · R_k(x, y),

where R_k is the real-time background image and α_k is the updating weight.

The weight α_k in (8) affects the background updating speed. When α_k is too small, the ability to adapt to background changes is weak; in particular, when the light changes greatly, the adaptation time of the background image increases obviously. If α_k is too large, vehicle drag marks appear in the background image. Therefore, we can control background image quality and adjust the updating speed automatically by regulating the value of α_k. Considering that the impact of moving targets on light changes can be ignored, the influence of light changes is estimated from the pixels in the no-moving-target area:

δ_k = Σ_{(x,y)∉M} |I_k(x, y) − I_{k−1}(x, y)| / (255 · N_k).

Here, Σ_{(x,y)∉M} |I_k(x, y) − I_{k−1}(x, y)| is the variation of illumination between image I_k collected at moment k and image I_{k−1} collected at moment k − 1, N_k is the number of pixels of image I_k in the no-moving-target area, δ_k is the normalized variation of illumination between I_k and I_{k−1}, and M denotes the set of moving pixels. To obtain smoothly changing background images, we link α_k with historical data using

α_k = (1 − λ) · α_{k−1} + λ · δ_k,

where α_k is the weight value used when the background image is updated at moment k and λ is a smoothing factor. Figure 7 is the reconstructed real-time background image.

861321.fig.007
Figure 7: Reconstruction background image in real time.
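A simplified sketch of the background-updating idea: non-moving pixels are blended toward the current frame, and the blend weight grows with the normalized illumination change measured over the no-moving-target area. The function name, `alpha_base`, and the specific weighting rule are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def update_background(background, frame, moving_mask, alpha_base=0.05):
    """Weighted background update. Pixels flagged as moving are kept
    unchanged; still pixels are blended toward the current frame with a
    weight that rises when the scene illumination changes strongly."""
    still = moving_mask == 0
    # normalized illumination change over the no-moving-target area
    delta = np.abs(frame[still].astype(float)
                   - background[still].astype(float)).mean() / 255.0
    alpha = min(1.0, alpha_base + delta)   # faster update under light changes
    out = background.astype(float)
    out[still] = (1.0 - alpha) * out[still] + alpha * frame[still].astype(float)
    return out.astype(np.uint8), alpha

background = np.full((4, 4), 100, dtype=np.uint8)
frame = np.full((4, 4), 120, dtype=np.uint8)   # globally brighter frame
mask = np.zeros((4, 4), dtype=np.uint8)
mask[0, 0] = 1                                 # this pixel is a moving target
new_bg, alpha = update_background(background, frame, mask)
```

Keeping the moving pixels out of the update is what prevents vehicle drag marks, while the illumination-driven weight lets the background catch up quickly when the light changes.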
2.4. Segmentation Based on Background

To subtract the background image from the original image, we adopt a binary segmentation method aiming to establish a segmentation model based on pixel distribution characteristics; finally, the binary image of the vehicle is acquired. Although the extraction will inevitably lose some image information of the vehicle and may even impair detection accuracy, the reduced information lowers the computational complexity of the detection algorithm, which is significant for a system with high real-time requirements.

Otsu’s method is used to automatically perform histogram-shape-based image thresholding, that is, to reduce a gray-level image to a binary image. The algorithm assumes that the image contains two pixel classes (e.g., foreground and background), forming a bimodal histogram, and calculates the optimum threshold separating the two classes so that their combined spread (intraclass variance) is minimal [22]. Specifically, the image has gray levels from 1 to L. The number of pixels at level i is n_i, and the total number of pixels is

N = Σ_{i=1}^{L} n_i.

The probability of level i is

p_i = n_i / N.

Then, we use an integer t to divide the levels into two groups, C_0 and C_1, namely, C_0 = {1, …, t} and C_1 = {t + 1, …, L}.

The sum of probabilities for group C_0 is

ω_0 = Σ_{i=1}^{t} p_i.

And the mean of group C_0 is

μ_0 = Σ_{i=1}^{t} i · p_i / ω_0.

Correspondingly, the probability sum and mean of group C_1 are

ω_1 = 1 − ω_0,    μ_1 = Σ_{i=t+1}^{L} i · p_i / ω_1.

The statistical mean of the whole image is the weighted sum of the two group means:

μ_T = ω_0 · μ_0 + ω_1 · μ_1.

Otsu's method shows that minimizing the intraclass variance is equivalent to maximizing the interclass variance:

σ_B²(t) = ω_0 · ω_1 · (μ_0 − μ_1)².

We compute (17) for each candidate t (varying from 1 to 255). The t that maximizes (17) is the optimal threshold t*.
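The exhaustive search of (12)–(17) can be sketched directly: accumulate ω_0 and the partial mean sum as t grows, and keep the t that maximizes the between-class variance.

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu: return the t that maximizes between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # level probabilities p_i
    mu_T = (np.arange(256) * p).sum()          # whole-image mean
    best_t, best_var = 0, -1.0
    w0, mu0_sum = 0.0, 0.0
    for t in range(255):
        w0 += p[t]                             # omega_0 for threshold t
        mu0_sum += t * p[t]
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = mu0_sum / w0
        mu1 = (mu_T - mu0_sum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# clearly bimodal data: dark background pixels and bright foreground pixels
img = np.concatenate([np.full(100, 40, dtype=np.uint8),
                      np.full(50, 200, dtype=np.uint8)])
t = otsu_threshold(img)
```

For this bimodal input the chosen threshold separates the two modes, so pixels above t are foreground and pixels at or below t are background.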

2.5. Vehicle Characteristic Extraction

Read the current image and judge each pixel with the segmentation condition

|I_k(x, y) − B_k(x, y)| > t*,

where t* is the optimal threshold obtained above. If a pixel meets this condition, we regard it as a moving-vehicle pixel; otherwise it is a background pixel. Considering that noise will not appear continuously at the same pixel position three times or more, we establish a buffer over consecutive frames: the images collected at moments k − 2, k − 1, and k are read and judged with the same condition, and a pixel is accepted as a vehicle pixel only if the condition holds at that position in all three frames; otherwise, the point is regarded as a noise point. Figure 8 is the image of the moving vehicles we extracted.

2.6. Noise Suppression

In Figure 8, there are some isolated noise points near the vehicle outlines, and a 3 × 3 window template is used to eliminate them. The center element of the window template is aligned with the current pixel, and the white pixels (moving-vehicle pixels) under the template window are counted; the count is recorded as sum. With T set as the threshold value, if sum < T, we consider the pixel under the template center an isolated point and set its value to 0. Traversing the whole image in this way, most of the noise points are eliminated, as shown in Figure 9.

861321.fig.008
Figure 8: Extraction of moving vehicle.
861321.fig.009
Figure 9: Vehicle image processed by noise suppression.
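The isolated-point removal above can be sketched as follows (the threshold T = 2 is illustrative; border pixels are left unchanged for simplicity):

```python
import numpy as np

def remove_isolated_points(binary, T=2):
    """Zero a foreground pixel when fewer than T foreground pixels fall
    under the 3x3 window centred on it (the pixel itself included)."""
    out = binary.copy()
    h, w = binary.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if binary[i, j] and binary[i - 1:i + 2, j - 1:j + 2].sum() < T:
                out[i, j] = 0
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 1            # isolated speck: removed
img[0:2, 0] = 1          # touching pixels near the border: kept
cleaned = remove_isolated_points(img)
```

The single speck has no foreground neighbours, so its window count (1) falls below T and it is erased, while connected vehicle pixels survive.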
2.7. Image Filling

After the above processing, discontinuous blobs appear in areas where the gray value of the moving vehicles is close to that of the background, which significantly affects vehicle detection accuracy. Therefore, in this paper the morphological filtering method is used to fill the moving-vehicle image [23]. The computational procedure is as follows.
(1) An erosion operation is performed on the image with the template-window structural element; the noise is eliminated and the eroded image of the moving vehicles is obtained.
(2) A dilation operation is then performed with the same structural element; the pores inside the moving vehicles are filled and the dilated image of the moving vehicles is obtained.
(3) The moving-vehicle pixels in the dilated image are traversed, and connected regions are searched and marked.
(4) The connected regions are projected onto the original thresholded moving-vehicle image.

Figure 10 shows that the filling algorithm both avoids noise influence on the moving vehicles and retains boundary details as much as possible.

861321.fig.0010
Figure 10: Vehicle image processed by erosion and dilation.
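Steps (1) and (2) of the filling procedure can be sketched with a 3 × 3 structural element. This toy example shows the noise-removal side of the erosion/dilation pair: a stray pixel is erased by erosion, and dilation then restores the vehicle blob to its original extent (border pixels are ignored for brevity):

```python
import numpy as np

def erode(img):
    """3x3 erosion: a pixel survives only if its whole 3x3 window is set."""
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].min()
    return out

def dilate(img):
    """3x3 dilation: a pixel is set if any pixel in its 3x3 window is set."""
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].max()
    return out

img = np.zeros((9, 9), dtype=np.uint8)
img[2:7, 2:7] = 1        # 5x5 vehicle blob
img[1, 1] = 1            # isolated noise pixel
opened = dilate(erode(img))
```

Erosion shrinks the blob to its 3 × 3 core and deletes the lone pixel; dilation grows the core back to the full 5 × 5 blob, so only the noise is lost.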
2.8. Neighborhood Analysis

The neighborhood analysis algorithm, which finds a path between two pixels within a region, aims to calculate the position of the moving vehicle in each image acquired by the CCD. There are two kinds of neighborhood paths: 4-neighbor and 8-neighbor. We analyze the connected regions with the 8-neighbor method to detect the size of each vehicle. To eliminate noise effects, we discard regions smaller than 20 pixels (the threshold was determined by experiment). Each moving vehicle is then marked with its minimum enclosing rectangle (MER), which is critical for the search range of vehicle tracking. The analysis result is shown in Figure 11.

861321.fig.0011
Figure 11: Moving vehicle marked by MER.
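The 8-neighbor region labeling with small-region rejection and MER computation can be sketched with a breadth-first search (function and parameter names are illustrative):

```python
import numpy as np
from collections import deque

def label_regions(binary, min_size=20):
    """8-neighbour connected components; return the minimum enclosing
    rectangle (top, left, bottom, right) of each region >= min_size pixels."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and not seen[si, sj]:
                q = deque([(si, sj)])
                seen[si, sj] = True
                pixels = []
                while q:
                    i, j = q.popleft()
                    pixels.append((i, j))
                    for di in (-1, 0, 1):       # all 8 neighbours
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < h and 0 <= nj < w
                                    and binary[ni, nj] and not seen[ni, nj]):
                                seen[ni, nj] = True
                                q.append((ni, nj))
                if len(pixels) >= min_size:     # reject noise regions
                    rows = [p[0] for p in pixels]
                    cols = [p[1] for p in pixels]
                    boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes

img = np.zeros((20, 20), dtype=np.uint8)
img[2:8, 3:10] = 1       # 42-pixel "vehicle" region
img[15, 15] = 1          # 1-pixel noise region, discarded
boxes = label_regions(img, min_size=20)
```

Only the 42-pixel region passes the size threshold, and its MER is exactly the bounding box used to seed the tracker's search window.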

3. Moving Vehicles Flow Tracking Based on Kalman Filter

Because of the high image acquisition frequency, the time interval between two frames of the image sequence is small, and the change of a moving target between two frames is also small, so we can regard the motion as uniform. Accordingly, the state vector and the observation vector are defined as in [24]. In a direct search, a large amount of redundant data still has to be handled, and the search time is about 100 ms, which cannot meet real-time requirements. Therefore, in this paper we establish a dynamic vehicle movement model based on the Kalman filter and predict the motion of the vehicle; this reduces the target search scope and improves the reliability and robustness of the algorithm. The state and measurement equations are

X_{k+1} = A · X_k + W_k,    Z_k = H · X_k + V_k,

where the state vector is X_k = (x_k, y_k, dx_k, dy_k, w_k, h_k, dw_k, dh_k)^T: (x_k, y_k) are the center-of-mass coordinates of the moving vehicle, (dx_k, dy_k) are the unit displacements of the center of mass in the x and y directions, (w_k, h_k) are the width and length of the tracking region in the x and y directions, and (dw_k, dh_k) are the unit displacements of the tracking region. The state transition matrix A follows the constant-velocity model, propagating each position and size component by its unit displacement, and the observation matrix H selects the position and size components.

In this model, the state error W_k and the observation error V_k are assumed to be Gaussian noise. The results of vehicle tracking based on the Kalman filter are shown in Figure 12. The search time is about 50–80 ms under the test conditions described in Section 4.

861321.fig.0012
Figure 12: Vehicle tracking based on Kalman filter.
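A reduced sketch of the constant-velocity Kalman tracker: the state here carries only the centroid [x, y, vx, vy] (the paper's full state also carries the tracking-region width/height terms), and the noise covariances are assumed values for illustration:

```python
import numpy as np

F = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])   # state transition (dt = 1 frame)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])   # observation: position only
Q = np.eye(4) * 1e-3                   # process noise (assumed)
R = np.eye(2) * 1.0                    # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle of the Kalman filter."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.zeros(4)
P = np.eye(4) * 10.0
# feed centroids moving 2 px/frame along x: the filter locks onto the motion
for k in range(1, 20):
    x, P = kalman_step(x, P, np.array([2.0 * k, 0.0]))
```

The predicted position for the next frame centers the search window, which is how the tracker narrows the target search range instead of scanning the whole image.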

4. Experimental Results

In this experiment, the video format is PAL and the frame rate is 25 frames/s. The algorithm is implemented in Visual Studio 2005; the hardware platform is a P4 2.8 GHz machine with 2 GB of memory. Each experiment lasts 120 minutes, run in the morning and in the afternoon.

We update the background rapidly by combining the background difference method with the frame difference method; the background updating time is on the millisecond level. To validate the algorithm, we ran a large number of tests. The testing site is a two-way ten-lane section of the Shen-Shanxi expressway. There are nine statistical runs, covering shadow, thunderstorm, lightning, and other conditions; each run lasts 120 minutes of sampled data. For the various weather conditions, a BP neural network was adopted as the classifier in the image detection algorithm [3, 19–22]. The statistical results are shown in Table 1 (in this table, vehicle tracking accuracy = number of tracked vehicles / number of detected vehicles). From the statistics we can see that, in various environments, the vehicle detection accuracy is higher than 96.1% and the accuracy after vehicle tracking is higher than 97.8%, which shows that the algorithm has good reliability and robustness. In the cases of rain and lightning, detection and tracking accuracy decline somewhat, mainly because strong flashes and reflections in these environments seriously disturb image collection. Besides, this method is mainly applicable to structured roads. How to identify, suppress, and eliminate such disturbances is a topic for further research, which we will address in another project.

tab1
Table 1: Statistics of vehicle detection and tracking results in two hours.

5. Conclusion

In this paper, the Gaussian mixture method is used to eliminate the impact of moving vehicles and to set up the background image. The real-time background is segmented completely and accurately and updated dynamically by matching the reconstructed background with the current background of the image sequence and combining the advantages of the background difference method with the interframe difference method. To ensure the robustness of subsequent vehicle detection, a 3 × 3 window template is applied to eliminate isolated noise points in the vehicle contour image, and to obtain the erosion and dilation sets of the image, a template structural element is used for morphological filtering. To narrow the target search range and improve the calculation speed and precision of the algorithm, this paper uses a Kalman filtering model to track fast-moving vehicles. The statistical results of the experiments show that the algorithm has good real-time performance and reliability.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (Grant no. 51208500), the Western High Technique Program of Chinese Academy of Science (Grant no. XBBS201119) and the Urumqi City Science and Technology Project (Grant no. G121310001). The first author would like to thank Dr. Pingshu Ge for the valuable discussions in improving the quality and presentation of the paper.

References

  1. W. Wang, X. Jiang, S. Xia, and Q. Cao, “Incident tree model and incident tree analysis method for quantified risk assessment: an in-depth accident study in traffic operation,” Safety Science, vol. 48, no. 10, pp. 1248–1262, 2010.
  2. T. M. Deng and B. Li, “A detection method of traffic parameters based on EPI,” in Proceedings of the International Workshop on Information and Electronics Engineering (IWIEE '12), pp. 3054–3059, Harbin, China, March 2012.
  3. Z. Zivkovic and F. van der Heijden, “Efficient adaptive density estimation per image pixel for the task of background subtraction,” Pattern Recognition Letters, vol. 27, no. 7, pp. 773–780, 2006.
  4. D. Y. Chen and C. H. Chen, “Salient video cube guided nighttime vehicle braking event detection,” Journal of Visual Communication and Image Representation, vol. 23, no. 3, pp. 586–597, 2012.
  5. S. Gupte, O. Masoud, R. F. K. Martin, and N. P. Papanikolopoulos, “Detection and classification of vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 1, pp. 37–47, 2002.
  6. A. Leone, C. Distante, and F. Buccolieri, “A shadow elimination approach in video-surveillance context,” Pattern Recognition Letters, vol. 27, no. 5, pp. 345–355, 2006.
  7. K. G. Gokhan and S. Afsar, “An FPGA based high performance optical flow hardware design for computer vision applications,” Microprocessors and Microsystems, vol. 37, no. 3, pp. 270–286, 2013.
  8. T. van Hertem, V. Alchanatis, A. Antler, et al., “Comparison of segmentation algorithms for cow contour extraction from natural barn background in side view images,” Computers and Electronics in Agriculture, vol. 91, pp. 65–74, 2013.
  9. A. J. Lipton, H. Fujiyoshi, and R. S. Patil, “Moving target classification and tracking from real-time video,” in Proceedings of the IEEE Workshop on Applications of Computer Vision, pp. 8–14, 1998.
  10. D. R. Magee, “Tracking multiple vehicles using foreground, background and motion models,” Image and Vision Computing, vol. 22, no. 2, pp. 143–155, 2004.
  11. S. Kamijo and M. Sakauchi, “Illumination invariant and occlusion robust vehicle tracking by spatio-temporal MRF model,” in Proceedings of the 9th World Congress on ITS, Chicago, Ill, USA, 2002.
  12. B. Bennett, D. Magee, A. G. Cohn, and D. C. Hogg, “Using spatiotemporal continuity constraints to enhance visual tracking of moving objects,” in Proceedings of the European Conference on Artificial Intelligence, pp. 922–926, 2004.
  13. A. Schclar, A. Averbuch, N. Rabin, V. Zheludev, and K. Hochman, “A diffusion framework for detection of moving vehicles,” Digital Signal Processing, vol. 20, no. 1, pp. 111–122, 2010.
  14. D.-Y. Huang, C.-H. Chen, W.-C. Hu, and S.-S. Su, “Reliable moving vehicle detection based on the filtering of swinging tree leaves and raindrops,” Journal of Visual Communication and Image Representation, vol. 23, no. 4, pp. 648–664, 2012.
  15. C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), pp. 246–252, June 1999.
  16. W. Zhang, X. Z. Fang, and X. Yang, “Moving vehicles segmentation based on Bayesian framework for Gaussian motion model,” Pattern Recognition Letters, vol. 27, no. 9, pp. 956–967, 2006.
  17. D. Fischer, M. Börner, J. Schmitt, and R. Isermann, “Fault detection for lateral and vertical vehicle dynamics,” Control Engineering Practice, vol. 15, no. 3, pp. 315–324, 2007.
  18. X. Xu, S. Xu, L. Jin, and E. Song, “Characteristic analysis of Otsu threshold and its applications,” Pattern Recognition Letters, vol. 32, no. 7, pp. 956–961, 2011.
  19. H. Moon, R. Chellappa, and A. Rosenfeld, “Performance analysis of a simple vehicle detection algorithm,” Image and Vision Computing, vol. 20, no. 1, pp. 1–13, 2002.
  20. W. Wang, W. Zhang, H. Guo, H. Bubb, and K. Ikeuchi, “A safety-based approaching behavioural model with various driving characteristics,” Transportation Research C, vol. 19, no. 6, pp. 1202–1214, 2011.
  21. H. Guo, W. Wang, W. Guo, X. Jiang, and H. Bubb, “Reliability analysis of pedestrian safety crossing in urban traffic environment,” Safety Science, vol. 50, no. 4, pp. 968–973, 2012.
  22. G. Guo and M. Mandal, “On optimal threshold and structure in threshold system based detector,” Signal Processing, vol. 92, no. 1, pp. 170–178, 2012.
  23. N. Loménie and D. Racoceanu, “Point set morphological filtering and semantic spatial configuration modeling: application to microscopic image and bio-structure analysis,” Pattern Recognition, vol. 45, no. 8, pp. 2894–2911, 2012.
  24. L. Guo, P.-S. Ge, M.-H. Zhang, L.-H. Li, and Y.-B. Zhao, “Pedestrian detection for intelligent transportation systems combining AdaBoost algorithm and support vector machine,” Expert Systems with Applications, vol. 39, no. 4, pp. 4274–4286, 2012.