Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 546752, 7 pages
http://dx.doi.org/10.1155/2013/546752
Research Article

A Rapid Method Based on Vehicle Video for Multiobjects Detection

1College of Information Engineering, North China University of Technology, Beijing 100144, China
2Beijing Urban Engineering Design and Research Institute, Beijing 100037, China
3Systems Engineering Research Institute, CSSC, Beijing 100094, China

Received 24 September 2013; Accepted 1 November 2013

Academic Editor: Fenyuan Wang

Copyright © 2013 Qing Tian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An efficient and rapid method for car detection in video is presented in this paper. The rear-side view of cars is used in the detection phase, and a linear support vector machine (SVM) is combined with histograms of oriented gradients (HOG), one of the most discriminative features, for object classification. In addition, a Kalman filter is used to track the objects so that cars are not missed. Since the calculation of HOG features is complex and accounts for most of the run time, the processing time of this method is reduced by using information about object areas from previous frames. The experimental results show that the detection rate can reach 96.20% and is more accurate when a suitable interval number, such as 5, is chosen. They also show that, compared with the traditional HOG-plus-SVM method, this method reduces the calculation time to a large degree while keeping the accuracy at about 94.90%.

1. Introduction

Robust on-road vehicle detection and tracking is a key issue in developing driving assistance systems and ensuring safe driving in urban areas. Because of the huge variability in object appearance, on-road multiobject detection and tracking with optical sensors is very challenging [1]. Usually, background subtraction [2, 3] and frame differencing [4] are used to detect moving objects in video. For example, Collins et al. proposed an algorithm named VSAM [5], and Zhu et al. obtain the changed regions for the background update of background subtraction from frame differencing [6]. Both improve the speed and accuracy of detection to a certain extent. However, the key requirement of background subtraction and frame differencing is a relatively stable background image, while the images of vehicle-mounted video change constantly, so these two methods are not applicable here. Shadows have also been used to estimate the size and position of vehicles [7], but this method depends heavily on lighting and is strongly affected by brightness changes. Histograms of oriented gradients (HOG) features [8–10] can be used to detect vehicles even while the background and brightness are changing; however, the traditional way of computing HOG features takes too much time. Finding a rapid and efficient way to deal with this problem is therefore very important.

In this paper, in combination with histograms of oriented gradients (HOG), one of the most discriminative features, a linear support vector machine (SVM) is used for object classification. Besides, a Kalman filter is used to track the objects so that cars are not missed. To save computing time, an efficient method is presented that uses feedback information from the previous frames to reduce the detection scope. This method improves the detection efficiency greatly while ensuring the detection accuracy.

2. Car Detection Based on Histograms of Oriented Gradients (HOG) Features

At present, the approach combining HOG with SVM has been widely applied to image recognition and has achieved great success, especially in object detection. In this paper, a linear SVM classifier with HOG features is used to realize classification and recognition. In the detection phase, the two important parts are feature description and classifier training.

2.1. Histograms of Oriented Gradients (HOG)

Histogram of oriented gradients (HOG) is a dense descriptor for local, overlapped image regions; the features are obtained by calculating the local distribution of gradient directions. Compared with other features, the HOG description method has the following advantages: HOG is built from the structural characteristics of the edges (gradients) and thus can describe local shape information; the impact of translation and rotation, quantized over position and orientation, can be suppressed to a certain extent; and local histogram normalization can offset the influence of illumination changes. If HOG features are taken as local descriptors, car features can be constituted by computing the local direction of the gradient, so the local gradient direction is the key factor in constituting car features. The gradient of the pixel (x, y) in an image can be denoted as

G_x(x, y) = I(x + 1, y) − I(x − 1, y),
G_y(x, y) = I(x, y + 1) − I(x, y − 1),

where G_x(x, y) denotes the horizontal gradient of the input image at pixel (x, y), G_y(x, y) denotes the vertical gradient, and I(x, y) denotes the pixel value. Then the gradient magnitude G(x, y) and direction α(x, y) of the pixel (x, y) can be represented as

G(x, y) = √(G_x(x, y)² + G_y(x, y)²),
α(x, y) = arctan(G_y(x, y) / G_x(x, y)).
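The gradient computation above can be sketched in a few lines of NumPy (a sketch under assumptions: a grayscale float image and unsigned orientations folded into 0°–180°, as used by the histograms described later):

```python
import numpy as np

def gradients(img):
    """Per-pixel gradient magnitude and direction for a grayscale image,
    using the centred differences G_x and G_y described above."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient G_x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient G_y
    magnitude = np.hypot(gx, gy)             # sqrt(G_x^2 + G_y^2)
    direction = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned, [0, 180)
    return magnitude, direction
```

Border pixels are left at zero here; how the borders are handled in the original implementation is not stated in the paper.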

In our experiment, the size of the images we select is 64 × 64 pixels. The procedure of extracting HOG features is as follows.

First, an image is input, which is either one of the sample library images or a region of interest (ROI).

Second, the image is filtered with a median filter to suppress noise before the gradient computation.

Third, the vertical and horizontal gradients of the image are computed, along with the gradient direction and magnitude of every pixel.

Fourth, the range of 0°–180° is divided into 9 equal parts, so there are 9 channels in total. Because one block has four cells (2 × 2 cells), there are 4 × 9 = 36 features in each block. In a 64 × 64 image, there will be 7 scanning positions in both the horizontal and the vertical direction when the block is shifted by 8 pixels. So the number of features in a 64 × 64 image is 7 × 7 × 36 = 1764.
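The feature-count arithmetic above can be checked directly; the cell size, block layout, stride, and 64 × 64 window are the values assumed in this section:

```python
# Assuming 8x8-pixel cells, 2x2-cell (16x16-pixel) blocks with an
# 8-pixel stride, 9 orientation bins, and a 64x64 detection window.
cell, block_cells, stride, bins, window = 8, 2, 8, 9, 64

block = cell * block_cells                   # 16-pixel block side
positions = (window - block) // stride + 1   # scanning positions per direction
per_block = block_cells * block_cells * bins # 4 cells x 9 bins per block
total = positions * positions * per_block    # full descriptor length
print(positions, per_block, total)           # 7 36 1764
```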

Fifth, the histogram of oriented gradients is accumulated over every pixel in each cell.

Sixth, in step four the range 0°–180° was divided into 9 equal parts, and each part now holds the summation of the gradient magnitudes falling into it. Therefore, we have a series of vectors, which should be normalized within blocks. A common normalization is the L2 norm: v ← v / √(‖v‖₂² + ε²), where ε is a small constant that avoids division by zero.

Finally, all the vectors processed above are concatenated, and the resulting vector constitutes the HOG features of this image.
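Taken together, the steps above can be sketched as a minimal HOG extractor. This is an illustrative implementation, not the authors' code: it uses hard binning without interpolation and plain L2 block normalization, and skips the median-filter step.

```python
import numpy as np

def hog_features(img, cell=8, bins=9, eps=1e-6):
    """Minimal HOG descriptor: gradients, 9-bin cell histograms over
    0-180 degrees, 2x2-cell blocks normalized with the L2 norm,
    blocks shifted by one cell (8 pixels)."""
    img = img.astype(float)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0

    # Per-cell orientation histograms, weighted by gradient magnitude.
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((ch, cw, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()

    # 2x2-cell blocks, L2-normalized, concatenated into one vector.
    feats = []
    for i in range(ch - 1):
        for j in range(cw - 1):
            block = hist[i:i+2, j:j+2].ravel()
            feats.append(block / np.sqrt(np.sum(block**2) + eps**2))
    return np.concatenate(feats)
```

For a 64 × 64 image this yields the 7 × 7 × 36 = 1764-dimensional descriptor derived in step four.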

2.2. Support Vector Machine (SVM)

SVM [11–13] was developed from statistical learning theory and has relatively good performance. Through the learning process, an SVM automatically finds the support vectors with better discrimination ability and constructs a good classifier.

In this paper, a linear SVM combined with HOG features is used to realize classification and recognition. The essence of the linear SVM is that, when the samples are linearly separable, it finds a hyperplane that separates the training samples completely.

In order to get good performance, the classifier is trained in three steps. First, 11953 positive samples are manually made, and 94982 negative samples are automatically generated from image frames without vehicles. All the samples are scaled to the size of 64 × 64 pixels and then used for classifier training. Second, since easy cases are usually classified correctly, the training process should focus on the difficult samples. Thus, the false positive and false negative samples after the first round of training are selected to train the classifier one more time; this may take a few rounds. Finally, all the trained HOG features are combined to train the final SVM classifier.
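The hard-example retraining scheme above can be sketched with a linear SVM. This is an illustrative sketch on hypothetical feature arrays using scikit-learn's LinearSVC; for brevity it mines only false positives on the negative set, whereas the paper also reuses false negatives.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_with_hard_negatives(pos, neg, rounds=2):
    """Retrain a linear SVM on hard examples: after each round,
    negatives misclassified as positive are appended to the training
    set (with their true label) and the classifier is refit."""
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    clf = LinearSVC(C=0.01).fit(X, y)
    for _ in range(rounds):
        hard = neg[clf.predict(neg) == 1]   # false positives on negatives
        if len(hard) == 0:
            break                           # no hard examples left
        X = np.vstack([X, hard])            # duplicating them raises their weight
        y = np.concatenate([y, np.zeros(len(hard))])
        clf = LinearSVC(C=0.01).fit(X, y)
    return clf
```

The value C=0.01 is an assumed regularization setting for the sketch; the paper does not report its SVM parameters.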

3. Car Tracking Based on Kalman Filter

In this paper, a Kalman filter [14–16] is introduced for object tracking. It helps the system improve the accuracy of car tracking and decrease the number of frames in which objects are lost. The filter estimates the process state at some time and then obtains feedback in the form of noisy measurements.

The Kalman filter is based on a state equation and an observation equation, which are given by the following:

x_k = A x_{k−1} + w_{k−1},  (3)
z_k = H x_k + v_k.  (4)

In these formulas, x_{k−1} and x_k denote the state vectors at moments k − 1 and k; z_k denotes the observation vector at moment k; A stands for the state transition matrix from moment k − 1 to moment k; H stands for the observation matrix; w stands for the system noise, w ~ N(0, Q); v stands for the observation noise, v ~ N(0, R). Q and R are the covariance matrices of w and v.

Usually, a Kalman filter works in two parts. The first one is prediction. It is based on the state equation (3) and used to get the prediction of the state vector and the error covariance. The prediction part is as follows.

State vector prediction equation:

x̂_k⁻ = A x̂_{k−1}.

Error covariance prediction equation:

P_k⁻ = A P_{k−1} Aᵀ + Q.

The second one is correction. It is based on the observation equation (4) and used to correct the predicted vector and obtain the corrected estimate x̂_k and the minimum error covariance matrix. The correction part is as follows.

Kalman filter gain equation:

K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹.

Correction of the state vector:

x̂_k = x̂_k⁻ + K_k (z_k − H x̂_k⁻).

Correction of the error covariance matrix:

P_k = (I − K_k H) P_k⁻.
According to the principles of the Kalman filter, the position of an object in the next frame can be predicted. Thus the Kalman filter can be introduced into object tracking to improve tracking accuracy and decrease the rate of undetected objects.
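One predict/correct cycle of the filter described above can be sketched as follows. The constant-velocity state [x, y, vx, vy] used in the usage note is a hypothetical choice for tracking a car's centre; the paper does not specify its state model.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/correct cycle of a Kalman filter.
    x: state estimate, P: error covariance, z: new measurement,
    A/H: transition and observation matrices, Q/R: noise covariances."""
    # Prediction (state equation)
    x_pred = A @ x                        # x^-_k = A x_{k-1}
    P_pred = A @ P @ A.T + Q              # P^-_k = A P A^T + Q
    # Correction (observation equation)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred) # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With A encoding constant velocity and H extracting the measured position, calling `kalman_step` once per frame with the detector's box centre as z yields the predicted position for the next frame.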

4. New Method

According to the experiments, a great deal of time is spent on calculating the HOG features when processing each video frame. To solve this problem, an improved method is developed in this paper.

It is known that the objects occupy only a small part of the picture and most regions are useless. Therefore, much time can be saved by not computing the HOG features in the useless regions. Our method is divided into the following four steps.

(1) Compute the HOG features over the whole picture in the first frame and detect all objects without missing any.

(2) According to the information obtained from the first frame, set the regions of interest (ROI) in the second frame, compute the HOG features only in these regions, detect the objects related to those in the first frame, record their sizes and coordinates, and then set new ROI. Repeat these operations in the next few frames.

(3) Computing the HOG features only in the ROI may cause mistakes, because new objects can appear in other areas. To avoid this problem, the HOG features of the whole picture are calculated every few frames, in certain predetermined frames (such as frames 5, 10, 15, … when the interval number is 5).

(4) Occasionally, some objects go undetected for a few frames. To avoid missing these objects, a Kalman filter is used to track them, which yields higher accuracy.
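The four steps above can be sketched as a frame loop. In this sketch, `detect` stands in for a hypothetical HOG+SVM detector returning bounding boxes, and the ROI for the next frame is formed by doubling each detected box around its position; that expansion rule is an assumption, not stated by the paper.

```python
def detect_video(frames, detect, interval=5):
    """Run full-image detection every `interval` frames and ROI-only
    detection in between, feeding each frame's detections forward
    as the next frame's regions of interest."""
    results, rois = [], None
    for k, frame in enumerate(frames):
        if interval == 0 or k % interval == 0:
            boxes = detect(frame, None)       # full-image search
        else:
            boxes = detect(frame, rois)       # search only inside the ROI
        # Enlarge each detection (x, y, w, h) to form the next frame's
        # ROI, since objects move only slightly between frames.
        rois = [(x - w // 2, y - h // 2, 2 * w, 2 * h)
                for (x, y, w, h) in boxes]
        results.append(boxes)
    return results
```

A Kalman tracker (step 4) would sit alongside this loop, predicting boxes for objects the detector misses in a given frame.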

The algorithm flow is shown in Figure 1.

Figure 1: Flow chart of setting regions of interest.

Now, four images are used to illustrate this new method. In the following images there are three cars, and most of the image area is useless: only the areas where the objects are located are valuable. So removing the useless regions is very important for decreasing the calculation time.

Figure 2(a) shows the original image. HOG features are computed over the whole image, and three cars are detected. The detection result is shown in Figure 2(b) with blue rectangles, and the tracking result in Figure 2(c) with yellow rectangles. The regions of interest (ROI) in the next frame are then set based on the feedback information from this frame's detection results; these regions are shown in Figure 2(d) with red rectangles.

Figure 2: Set the regions of interest (ROI).

The red rectangles are the ROI, and objects will most likely appear in these regions in the next frame. So we only need to compute HOG features in the ROI instead of over the entire image, which greatly reduces the calculation time.

5. Experiments

In our experiments, a clip of vehicle video taken in a realistic environment is chosen as the test material. The improved method described above is adopted to detect vehicles running on the road. In the experiments, the interval number is taken as a variable parameter to test the relation between efficiency and accuracy. The experimental results show that the larger the interval number, the lower the time cost; but they also show that the detection accuracy decreases as the interval number increases. In the following example, the interval number is chosen as 0, 5, 10, and 15.

In the start frame, 3241, shown in Figure 3, there are three cars. The left and middle cars have already been detected and tracked in previous frames; the right car appears for the first time in this frame. According to our method, HOG features are not computed over the whole picture in frame 3241, so the newly appearing car cannot be detected in this frame.

Figure 3: Detection and tracking results in frame 3241 when the interval number is 0.

If a relatively small interval number is chosen, such as 5, 10, or 15, the right car will first be detected in frame 3246, 3251, or 3256, respectively. The reason is as follows. In frame 3241, the algorithm computes HOG features over the entire image. When the interval number is 5, 10, or 15, it will next compute HOG features over the whole image in frame 3246 (3241 + 5), 3251 (3241 + 10), or 3256 (3241 + 15), respectively. The results are shown in Figure 4. They suggest that when 5, 10, or 15 is selected as the interval number, the method achieves good detection and tracking performance.

Figure 4: Detection and tracking results when the interval number is 5, 10, and 15.

However, if 30 is chosen as the interval number, our method will compute HOG features over the whole picture again only in frame 3271 after frame 3241. Because the car starts to disappear from frame 3267, only half of the car remains in frame 3271, so it cannot be detected. This is shown in Figures 5 and 6. Based on these results, missed detections will probably occur if the interval number is too large.

Figure 5: The detection result in frame 3267 when the interval number is 30.
Figure 6: The detection result in frame 3271 when the interval number is 30.

In order to study this trade-off, we conducted several experiments with different interval numbers. First, the common method is used. Then our new approach is adopted, computing HOG features over the whole image every 5, 10, 15, and 30 frames. The resolution ratio of the test video is in our experiments. The duration of the video is 8 minutes and 50 seconds, and its frame rate is 18 frames per second. The experiments are carried out on a computer with an AMD Athlon(tm) X2 250 processor, MMX, 3DNow (2 CPUs), ~3.0 GHz, and 3.25 GB RAM. The results are shown in Table 1.

Table 1: Experimental results of time consuming and detection accuracy.

Table 1 shows that the calculation time decreases as the interval number becomes larger, while the detection rate also decreases, though all settings achieve good detection performance. Moreover, the calculation time is reduced by a factor of about 2.65 when the interval number changes from 0 to 5, which greatly reduces the cost. But the computation time is reduced by only tens of milliseconds when the interval number changes from 5 to 10, 10 to 15, or 15 to 30; the improvement is small. So it is unnecessary to use a huge interval number to obtain a shorter time. Taking the detection rate into account, 10 is chosen as the interval number. Under this condition, the method achieves both a short processing time and a high detection rate.

6. Conclusion

This paper presents a method based on vehicle video for multivehicle detection. It uses HOG features to describe the objects and a linear support vector machine (SVM) for classification and recognition. To reduce the computation time, a new method is provided in which the detection area of the current frame is reduced using feedback information from the previous frame. According to the experimental results, the computation time is greatly reduced and the detection rate remains high with a low interval number. Therefore, when a balance point is found, our method offers both higher efficiency and better accuracy.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by Project of Beijing Municipal Commission of Education (no. KM201210009008) and Natural Science Foundation of China (no. 61103113).

References

  1. Z. Sun, G. Bebis, and R. Miller, “On-road vehicle detection: a review,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694–711, 2006.
  2. Z. Chen, V. Pears, M. Freeman, and J. Austin, “Background subtraction in video using recursive mixture models, spatio-temporal filtering and shadow removal,” in Proceedings of the 5th International Symposium on Visual Computing (ISVC '09), vol. 5876 of Lecture Notes in Computer Science, pp. 1141–1150, 2009.
  3. T. Horprasert, D. Harwood, and L. S. Davis, “A statistical approach for real-time robust background subtraction and shadow detection,” in Proceedings of the IEEE International Conference on Computer Vision Frame Rate Workshop (ICCV '99), pp. 1–19, 1999.
  4. J. Tan, C. Wu, Y. Zhou, J. Hou, and Q. Wang, “Research of abnormal target algorithm in intelligent surveillance system,” Mechanical & Electrical Engineering Magazine, vol. 26, pp. 12–15, 2009.
  5. R. T. Collins, A. J. Lipton, and T. Kanade, “Special section on video surveillance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 745–746, 2000.
  6. M.-H. Zhu, D.-Y. Luo, and Q. Cao, “Moving targets detection algorithm based on two consecutive frames subtraction and background subtraction,” Computer Measurement & Control, vol. 13, pp. 215–217, 2005.
  7. B. Johansson, J. Wiklund, P.-E. Forssén, and G. Granlund, “Combining shadow detection and simulation for estimation of vehicle size and position,” Pattern Recognition Letters, vol. 30, no. 8, pp. 751–759, 2009.
  8. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 1, pp. 886–893, 2005.
  9. S. Tuermer, F. Kurz, P. Reinartz, and U. Stilla, “Airborne vehicle detection in dense urban areas using HoG features and disparity maps,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, pp. 1–11, 2013.
  10. C. Conde, D. Moctezuma, I. M. de Diego, and E. Cabello, “HoGG: Gabor and HoG-based human detection for surveillance in non-controlled environments,” Neurocomputing, vol. 100, pp. 19–30, 2013.
  11. L. Yun, G. Zhijie, and S. Yu, “Static hand gesture recognition and its application based on support vector machines,” in Proceedings of the 9th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, Phuket, Thailand, August 2008.
  12. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  13. Y. Liang, M. L. Reyes, and J. D. Lee, “Real-time detection of driver cognitive distraction using support vector machines,” IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 2, pp. 340–350, 2007.
  14. G. Welch and G. Bishop, An Introduction to the Kalman Filter, TR95-041, UNC-Chapel Hill, 2006.
  15. I. Arasaratnam and S. Haykin, “Cubature Kalman filters,” IEEE Transactions on Automatic Control, vol. 54, no. 6, pp. 1254–1269, 2009.
  16. Q. Xiao and B. Lei, “Pickup camera tracking object based on Kalman filter,” Journal of Xi’an Institute of Technology, vol. 26, pp. 1–4, 2006.