Advances in Multimedia
Volume 2012 (2012), Article ID 343724, 14 pages
http://dx.doi.org/10.1155/2012/343724
Research Article

Multitarget Tracking of Pedestrians in Video Sequences Based on Particle Filters

School of Computer Science and Technology, Wuhan University of Technology, Wuhan 430070, China

Received 31 August 2012; Revised 4 November 2012; Accepted 9 November 2012

Academic Editor: Weidong Cai

Copyright © 2012 Hui Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Video target tracking is a critical problem in computer vision. Particle filters have proven very useful for target tracking in nonlinear and non-Gaussian estimation problems. Although most existing algorithms can track targets well in controlled environments, it is often difficult to achieve automated and robust tracking of pedestrians in video sequences when the target appearance or the surrounding illumination varies. To surmount these difficulties, this paper presents multitarget tracking of pedestrians in video sequences based on particle filters. To improve the efficiency and accuracy of detection, the algorithm first obtains target regions in training frames by combining background subtraction with the Histogram of Oriented Gradients (HOG) detector and then establishes a discriminative appearance model by generating patches and constructing codebooks from superpixel and Local Binary Pattern (LBP) features in those target regions. During tracking, the algorithm uses the similarity between candidates and the codebooks as the observation likelihood function and handles severe occlusion to prevent the drift and loss caused by target occlusion. Experimental results demonstrate that our algorithm improves tracking performance in complicated real scenarios.

1. Introduction

Video target tracking is an important research field in computer vision for its wide range of application demands and prospects in many industries, such as military guidance, visual surveillance, visual navigation of robots, human-computer interaction, and medical diagnosis [1–3]. The main task of target tracking is to track one or more mobile targets in video sequences so that the position, velocity, trajectory, and other parameters of each target can be obtained. Two main tasks need to be completed during the processing procedure: the first is target detection and classification, which locates the relevant targets in the image frames; the second is the association of target locations across consecutive image frames, which identifies the target points in the image and determines their location coordinates, so that the trajectory of the target over time can be determined. However, automated detection and tracking of pedestrians in video sequences is still a challenging task for the following reasons [4]. (1) Large intraclass variability, which refers to the varied appearance of pedestrians due to different poses, clothing, viewpoints, illumination, and articulation. (2) Interclass similarity, which is the likeness between pedestrians and other background objects in heavily cluttered environments. (3) Partial occlusions of pedestrians, caused by other interclass or intraclass targets, which may change frequently in a dynamic scene.

Considering the difficulties mentioned above, pedestrian detection and tracking has been studied intensively and a number of elegant algorithms have been established. One popular tracking method is the mean shift procedure [5], which finds the local maximum of a probability distribution in the direction of its gradient. Comaniciu and Ramesh [6] gave a strict proof of the convergence of the algorithm and proposed a mean-shift-based tracking method. As a deterministic method, mean shift keeps a single hypothesis and is thus computationally efficient, but it may run into trouble when similar targets are present in the background or occlusion occurs. Another common approach is the Kalman filter [7]. This approach assumes that the probability distribution of the target state is Gaussian, so the mean and covariance, computed recursively by the Kalman filter equations, fully characterize the behavior of the tracked target. However, real-world tracking targets rarely satisfy the Gaussian assumption required by the Kalman filter, because background clutter may resemble part of the foreground features. One promising category is the sequential Monte Carlo approach, also known as the particle filter [8], which recursively estimates the target posterior with discrete sample-weight pairs in a dynamic Bayesian framework. Owing to their non-Gaussian, nonlinear assumptions and multiple-hypothesis property, particle filters have been successfully applied to video target tracking [9].

2. Previous Work

Various researchers have attempted to extend particle filters to target tracking. Among others, one of the most successful features used in target tracking is color. Nummiaro et al. [10] proposed a tracking algorithm that used color histograms as the feature tracked by the particle filter. Although the algorithm is robust to partially occluded targets and target shape changes, it is highly sensitive to illumination changes, which may cause the tracker to fail. Vermaak et al. [11] introduced a mixture particle filter (MPF), where each component was modeled with an individual particle filter that formed part of the mixture. The filters in the mixture interacted only through the computation of the importance weights. By distributing the resampling step to individual filters, the MPF avoids the problem of sample depletion. Okuma et al. [12] extended the approach of Vermaak et al. and proposed a boosted particle filter, which combined the strengths of two successful algorithms: mixture particle filters and AdaBoost. It is a simple and automatic multiple-target tracking system, but it fails easily when the background is complex.

Therefore, a more effective method for target recognition is needed. The superpixel has been one of the most promising representations, with demonstrated success in image segmentation and target recognition [13–15]. For this reason, Ren and Malik [16] proposed a superpixel-based tracking method, which regards the tracking task as a figure/ground segmentation across frames. However, as it processes every entire frame individually, with Delaunay triangularization and a conditional random field (CRF) for region matching, its computational complexity is rather high. Further, it is not designed to handle complex scenes that include heavy occlusion, cluttered backgrounds, or large lighting changes. Wang et al. [17] proposed a tracking method from the perspective of mid-level vision, with structural information captured in superpixels; the method is able to handle heavy occlusion and recover from drifts. Thus, in this paper, the observation model adopts superpixels combined with LBP to extract the target feature.

In recent years, the bag-of-features (BoF) representation has been successfully applied to object and natural scene classification owing to its simplicity, robustness, and good practical performance. Yang et al. [18] proposed a visual tracking approach based on BoF. Their algorithm randomly samples image patches within the object region in training frames to construct two codebooks, using RGB and LBP features, instead of the single codebook of traditional BoF. It is more robust in handling occlusion, scaling, and rotation, but it can only track one target. Building on the advantages of BoF in target tracking, this paper employs BoF to establish the discriminative appearance model, which converts high-dimensional feature vectors into low-dimensional histogram comparisons and thereby overcomes the high computational cost of superpixels in the observation model.

Therefore, to achieve automated and robust tracking of pedestrians in complex scenarios, we present multitarget tracking of pedestrians in video sequences based on particle filters. The algorithm uses BoF to create a discriminative appearance model, which is then combined with the particle filter to achieve target tracking. To improve the efficiency and accuracy of detection, background subtraction and HOG detection are first combined to obtain the target motion regions in the training frames. The discriminative appearance model established from these target regions is then used to discriminate the candidate targets. During tracking, severe occlusion is handled to prevent the drift and loss caused by pedestrians' mutual occlusion. Figure 1 shows the entire algorithmic flowchart.

Figure 1: The flowchart of algorithm.

The paper is organized as follows: Section 3 introduces the detection of pedestrians; Section 4 describes our particle filter algorithm; Section 5 presents the experimental results and performance evaluation; and conclusions are given in Section 6.

3. Detection of Pedestrians

This section has two main parts: target region extraction and construction of the discriminative appearance model. The former determines the target regions in the first frames of the video sequence; the latter treats these target regions as a training set, performs sampling and feature extraction on them, and finally establishes the discriminative appearance model.

3.1. Target Regions Extraction

Before tracking, we need to detect the targets in the first frames and get the target regions in each frame for later trainings. Figure 2 shows the whole flow diagram of target regions extraction of the first frames.

Figure 2: Flow diagram of target regions extraction of frames.

We can see from Figure 2 that, first of all, a simple and fast approach to obtaining the motion region is background subtraction, which identifies moving targets as the portions of the video frames that differ significantly from a background model, as shown in Figure 3. Then we use the HOG descriptor [19] and Support Vector Machines (SVMs) to build a pedestrian detector. Since this method has proven capable but time-consuming, we run the HOG detector only on the motion regions acquired by background subtraction. This not only reduces the HOG detection region but also improves the efficiency and accuracy of detection. Figure 4 shows that applying HOG detection after background subtraction improves the accuracy of pedestrian detection, whereas using HOG directly can lead to false detections.
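As a concrete sketch of the pipeline above (illustrative only: `hog_detector` is a placeholder for a real HOG+SVM detector, and frames are plain 2D intensity arrays rather than decoded video):

```python
# Sketch: background subtraction yields a motion mask, and a (hypothetical)
# HOG+SVM detector is then run only inside the motion region instead of over
# the whole frame, which is the speedup described in the text.

def background_subtract(frame, background, threshold=25):
    """Binary motion mask: 1 where |frame - background| exceeds the threshold."""
    return [[1 if abs(f - b) > threshold else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def motion_bounding_box(mask):
    """Bounding box (top, left, bottom, right) of all motion pixels, or None."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

def detect_pedestrians(frame, background, hog_detector):
    """Run the supplied detector only on the motion region found by subtraction."""
    mask = background_subtract(frame, background)
    box = motion_bounding_box(mask)
    if box is None:
        return []
    return hog_detector(frame, box)  # detections restricted to the motion box
```

In practice the mask would come from a maintained background model and the detector from a trained HOG+SVM; the restriction of the search window to the motion box is the point being illustrated.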

Figure 3: Background subtraction result.
Figure 4: The HOG detection result. (a) Detection results of using the HOG directly. (b) Detection results of adopting the HOG after background subtracting.
3.2. Discriminative Appearance Model

During this stage, a discriminative appearance model is created from the target regions extracted from the first frames to distinguish targets from cluttered backgrounds. Denote the i-th pedestrian in the t-th frame by R_t^i, where t = 1, …, T (T is the number of training frames) and i = 1, …, M (M is the number of target pedestrians in the training frames). From all T frame regions in which pedestrian i appears, we build that pedestrian's discriminative appearance model (we assume that the number of targets in the training frames is invariable), and therefore we need M discriminative appearance models.

3.2.1. Patch Generation

In the training stage, N patches with a constant scale are randomly sampled within the region of pedestrian i. For pedestrian i, the N image patches collected in each training frame are represented by a superpixel descriptor and an LBP descriptor, respectively. The extraction process of the two descriptors in the training frames is illustrated in Figure 5.

Figure 5: Extraction process of superpixel descriptor and LBP descriptor in training frames.

The superpixel segmentation method we adopt in this paper is SLIC (Simple Linear Iterative Clustering) [15], which clusters pixels in a combined five-dimensional color and image-plane space to efficiently generate compact, nearly uniform superpixels. For the superpixel descriptor, we segment the target region in the t-th training frame into superpixels, as shown in Figure 5. Because a superpixel has no fixed shape and its distribution is often irregular, it is unsuitable for extracting local template information. On the other hand, since the pixels inside a superpixel have similar texture and color characteristics, more stable superpixel information can be obtained by extracting a color-space histogram. However, the RGB color space does not accord with human visual perception and is not robust to illumination changes, so we instead use the normalized histogram of the HSV color space, which is simple and accords with human vision, as the feature for all superpixels.
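The HSV histogram feature described above can be sketched as follows; the bin counts and the (h, s, v) value ranges are our own illustrative choices, not taken from the paper:

```python
def hsv_histogram(pixels, bins=(8, 4, 4)):
    """Normalized HSV color histogram of one superpixel.

    pixels: iterable of (h, s, v) tuples with h in [0, 360) and s, v in [0, 1].
    bins:   number of quantization bins per channel (illustrative defaults).
    """
    hist = [0.0] * (bins[0] * bins[1] * bins[2])
    for h, s, v in pixels:
        hi = min(int(h / 360.0 * bins[0]), bins[0] - 1)
        si = min(int(s * bins[1]), bins[1] - 1)
        vi = min(int(v * bins[2]), bins[2] - 1)
        hist[(hi * bins[1] + si) * bins[2] + vi] += 1
    total = sum(hist) or 1.0           # guard against an empty superpixel
    return [x / total for x in hist]   # normalize so the bins sum to 1
```

Normalization makes the descriptor comparable across superpixels of different sizes, which matters for the histogram-intersection similarity used later.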

LBP is widely used for texture description and performs well in texture classification, fabric defect detection, and moving-region detection. LBP is an illumination-invariant descriptor that is insensitive to intensity changes caused by lighting; it remains stable as long as the differences among the image pixel values do not change much. In addition, LBP and color features are complementary to a certain extent, so we also adopt the LBP descriptor as a feature. The LBP descriptor is defined as

LBP(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p,  with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,

where g_c is the intensity value of the center pixel (x_c, y_c) and g_p (p = 0, …, P − 1) are the intensities of the P neighboring pixels.

The image histogram obtained from the computed LBP values is defined as

H(k) = Σ_{x,y} I{LBP(x, y) = k},  k = 0, 1, …, 2^P − 1,

where P is the number of pixels in the neighborhood (and thus the length of the code generated by the LBP operator, in bits), LBP(x, y) is the LBP value at pixel (x, y), and I{·} is the indicator function. In this way, H(k) counts the pixels whose LBP value equals k, and the histogram reflects the distribution of the LBP values.
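A minimal implementation of the 8-neighbor, radius-1 LBP operator and its histogram, matching the definition above (P = 8, so codes range over 0…255):

```python
def lbp_value(img, r, c):
    """8-neighbor LBP code of pixel (r, c): bit p is set when g_p >= g_c."""
    center = img[r][c]
    neighbors = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
                 img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum((1 if g >= center else 0) << p for p, g in enumerate(neighbors))

def lbp_histogram(img):
    """Histogram H(k) over all interior pixels, k = 0..255."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_value(img, r, c)] += 1
    return hist
```

Note the illumination invariance: adding a constant to every pixel leaves every comparison g_p ≥ g_c, and hence the histogram, unchanged.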

3.2.2. Codebook Construction

As frames pass, patches accumulate. For the extracted collection of sample features F = {f_1, f_2, …, f_n}, the features are gathered into a number of clusters by mean shift clustering, and the cluster centers compose the codebook; the number of cluster centers K is also the size of the codebook. The cluster centers, which represent the most typical features, are regarded as the keywords of the codebook and used to create bags. In this way, a large collection of sample features is converted into a comparatively small codebook. Figure 6 shows the process of codebook construction.
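A toy mean shift clustering that produces a codebook from feature vectors, in the spirit of the construction above; the flat kernel, bandwidth, and merge tolerance are illustrative assumptions:

```python
def mean_shift_cluster(points, bandwidth=1.0, iters=30):
    """Cluster feature vectors by mean shift; return the distinct modes.

    points: list of feature vectors (lists of floats).
    A flat kernel is used: each mode moves to the mean of all points
    within `bandwidth` of it, repeated for `iters` iterations.
    """
    modes = [list(p) for p in points]
    for _ in range(iters):
        for i, m in enumerate(modes):
            neighbors = [p for p in points
                         if sum((a - b) ** 2 for a, b in zip(p, m)) <= bandwidth ** 2]
            if neighbors:
                modes[i] = [sum(col) / len(neighbors) for col in zip(*neighbors)]
    # merge modes that converged to (numerically) the same point -> codebook
    codebook = []
    for m in modes:
        if not any(sum((a - b) ** 2 for a, b in zip(m, c)) < 1e-3 for c in codebook):
            codebook.append(m)
    return codebook
```

The surviving modes play the role of the codewords: each is the center of a dense group of patch features.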

Figure 6: Codebook construction of target 1.

After codebook construction, for each feature of a training sample image, find the codeword at the nearest Euclidean distance, and then count how many features map to each nearest codeword to obtain the final histogram. Repeating these steps for all training sample images converts the set of training images into a set of histograms called bags. A bag records the occurrence frequency of codewords in an image and can be represented as a histogram; the training images are thus converted into a set of bags by raw counts.
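The nearest-codeword counting step just described can be sketched as:

```python
def nearest_codeword(feature, codebook):
    """Index of the codeword with the smallest Euclidean distance to `feature`."""
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(feature, codebook[k])))

def bag_of_features(features, codebook):
    """Bag (raw-count histogram over codewords) for one image's patch features."""
    bag = [0] * len(codebook)
    for f in features:
        bag[nearest_codeword(f, codebook)] += 1
    return bag
```

Each image, however many patches it contributes, is thereby reduced to a fixed-length vector of length K, which is what makes the later histogram comparisons cheap.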

Here the discriminative appearance model has been established for subsequent classification decisions.

3.2.3. Updating

Since the appearance and pose of a target change all the time, updating is necessary and even crucial. After every U frames, a new collection of patches P_new is obtained. We then perform mean shift clustering again on P_new together with the old codebook C_old:

C_new = MeanShift(P_new ∪ C_old),

with each old codeword weighted by a forget factor ρ ∈ (0, 1). Here C_new denotes the new codebook; the forget factor gradually reduces the importance of the old codebook so that the newly constructed codebook pays more attention to the latest patches.
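One plausible reading of the update rule above, with the old codewords carried into the reclustering at reduced weight ρ; the exact weighting scheme is our assumption, since the paper's formula is not reproduced here:

```python
def update_codebook(old_codebook, new_patches, rho=0.8, bandwidth=1.0, iters=20):
    """Recluster new patches (weight 1.0) with old codewords (weight rho)."""
    points = [(p, 1.0) for p in new_patches] + [(c, rho) for c in old_codebook]
    modes = [list(p) for p, _ in points]
    for _ in range(iters):
        for i, m in enumerate(modes):
            nb = [(p, w) for p, w in points
                  if sum((a - b) ** 2 for a, b in zip(p, m)) <= bandwidth ** 2]
            wsum = sum(w for _, w in nb)
            if wsum > 0:
                # weighted mean shift: old codewords pull with reduced force rho
                modes[i] = [sum(w * p[d] for p, w in nb) / wsum
                            for d in range(len(m))]
    codebook = []
    for m in modes:
        if not any(sum((a - b) ** 2 for a, b in zip(m, c)) < 1e-3 for c in codebook):
            codebook.append(m)
    return codebook
```

With ρ < 1, a codeword that no recent patch supports still survives, but its influence on nearby cluster centers shrinks, matching the "reduce its importance gradually" behavior in the text.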

4. Particle Filter Tracking

The particle filter [8] is a Bayesian sequential importance sampling technique that recursively approximates the posterior distribution with a finite set of weighted samples. It consists of two essential steps: prediction and update. We use X_t = {x_t^i, i = 1, …, M_t} to express the set of states of the target system at moment t, where M_t stands for the number of target states at moment t and x_t^i stands for the state of the i-th target at moment t.

Given all available observations Z_{1:t−1} = {z_1, …, z_{t−1}} up to time t − 1, the prediction stage uses the probabilistic system transition model p(X_t | X_{t−1}) to predict the posterior at time t as

p(X_t | Z_{1:t−1}) = ∫ p(X_t | X_{t−1}) p(X_{t−1} | Z_{1:t−1}) dX_{t−1}.

At time t, the observation z_t becomes available, and the state can be updated using Bayes' rule:

p(X_t | Z_{1:t}) ∝ p(z_t | X_t) p(X_t | Z_{1:t−1}),

where p(z_t | X_t) is described by the observation equation.

4.1. State-Space Model

In the video scene, the movement of each target can be considered an independent process, and therefore the state-space model can be written as the joint product of single-target motion models:

p(X_t | X_{t−1}) = ∏_{i=1}^{M} p(x_t^i | x_{t−1}^i).

Suppose the number of target states at both moment t − 1 and moment t is M, that is, M_{t−1} = M_t = M, and let x_t^i = (u_t^i, v_t^i, w_t^i, h_t^i) be the state of the i-th video target at moment t, where u_t^i and v_t^i are the rectangle center's position in the x and y directions of the image, and w_t^i and h_t^i are the length and width of the rectangle.

To obtain the state transition density of the i-th target at moment t, a random perturbation model is used to describe the state transition of the i-th target from moment t − 1 to moment t, that is,

p(x_t^i | x_{t−1}^i) = N(x_t^i; x_{t−1}^i, Σ),

where N(·; μ, Σ) is the normal density function with mean μ and covariance Σ. Σ is a diagonal matrix whose diagonal elements (σ_u², σ_v², σ_w², σ_h²) are the variances of the four state parameters. The random perturbation model is used to describe the motion of each target mainly because the tracked targets are pedestrians, whose movement is random, making it difficult to predict the state of motion for the next moment with a constant-velocity or constant-acceleration model.
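Sampling from this transition density amounts to adding independent zero-mean Gaussian noise to each state component:

```python
import random

def perturb_state(state, sigmas):
    """Sample x_t from N(x_{t-1}, diag(sigmas^2)).

    state:  (u, v, w, h) rectangle state at the previous moment.
    sigmas: per-component standard deviations (sigma_u, sigma_v, sigma_w, sigma_h).
    """
    return tuple(x + random.gauss(0.0, s) for x, s in zip(state, sigmas))
```

Each candidate particle in the filter is drawn this way around the target's previous state, with no assumed velocity.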

4.2. Observation Model

When a new frame arrives, for each target i, the state-space model is first used, according to the target's location in the last frame, to randomly sample candidate targets, as illustrated in Figure 7.

Figure 7: Collection of candidate targets.

Secondly, each candidate target is handled as follows.

(1) Extract superpixel patches: apply superpixel segmentation to each candidate target to obtain its superpixels, then extract each superpixel's HSV color histogram and normalize it.

(2) Extract LBP patches: extract patches from each candidate target, then calculate each patch's LBP histogram and normalize it.

The color histogram and the LBP histogram of the patches (each superpixel is also referred to as a patch) are then processed separately according to the following procedure.

We calculate the patches' similarities with the codewords using a similarity function defined as

ρ(p_j, c_k) = Σ_b min(h_j(b), h_k(b)),  k = 1, …, K,

where ρ(p_j, c_k) denotes the similarity between patch p_j and codeword c_k, h_j denotes the feature histogram of the test patch p_j, h_k denotes the feature histogram of codeword c_k in the codebook, and the sum is the histogram intersection distance between the two histograms.
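The histogram intersection used as the similarity function can be written directly:

```python
def histogram_intersection(h1, h2):
    """Histogram intersection: sum of bin-wise minima of two histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

For normalized histograms the value lies in [0, 1], equaling 1 for identical histograms and 0 for histograms with disjoint support, so it serves directly as a similarity score.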

Thus, every patch in each candidate target has a most similar codeword. Counting the occurrence frequency of the codewords in each candidate target gives its bag of features

B = (b_1, b_2, …, b_K),

where b_k is the number of patches whose most similar codeword is c_k.

Then we compute the similarity of the bags to get the weight of each candidate target:

ρ(B, B*) = Σ_{k=1}^{K} min(B(k), B*(k)),

where B denotes the bag of features of the test sample, B* denotes that of the template, and ρ(B, B*) is the bag-of-features intersection distance between the two.

The observation likelihood function is defined as

p(z_t | x_t^i) ∝ exp(−λ (1 − ρ(B, B*))),

where λ is a scaling constant.

In this way, we obtain the superpixel-based likelihood p_s(z_t | x_t^i) with weight α_s and the LBP-based likelihood p_l(z_t | x_t^i) with weight α_l, respectively. Given the target state x_t^i, the total observation likelihood function of the target is defined as

p(z_t | x_t^i) = α_s p_s(z_t | x_t^i) + α_l p_l(z_t | x_t^i),  α_s + α_l = 1,

where p_s and p_l are the observation likelihood functions of the superpixel and LBP features, respectively, and α_s and α_l are the weights of the two feature cues in the fusion. The feature weights can be dynamically calculated through the weight distribution of the particle sets.
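A hedged sketch of the two-cue fusion: the exponential form of each single-cue likelihood and the parameter λ are our assumptions; only the weighted combination of a superpixel cue and an LBP cue follows the text:

```python
import math

def fused_likelihood(sim_sp, sim_lbp, w_sp=0.5, w_lbp=0.5, lam=5.0):
    """Fuse superpixel and LBP similarities into one observation likelihood.

    sim_sp, sim_lbp: bag-of-features intersection similarities in [0, 1].
    w_sp, w_lbp:     cue weights, assumed to sum to 1.
    lam:             assumed sharpness of the per-cue exponential likelihood.
    """
    p_sp = math.exp(-lam * (1.0 - sim_sp))    # per-cue likelihood, peaks at sim = 1
    p_lbp = math.exp(-lam * (1.0 - sim_lbp))
    return w_sp * p_sp + w_lbp * p_lbp
```

Raising the weight of one cue makes the fused score track that cue more closely, which is the mechanism the dynamic weight adjustment exploits.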

4.3. Occlusion Handling

The above procedure can handle partial occlusion of the target. However, under severe or complete occlusion, the total observation likelihood of the target becomes extremely small. In that situation, when the total observation likelihood falls below a certain threshold, we keep the target's last tracking state unchanged while the particles continue their state transitions. The tracking result and the particles' movements under severe occlusion are illustrated in Figures 8 and 9, respectively.
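The occlusion rule reduces to a threshold test on the total likelihood (the threshold value itself is a tuning parameter the paper lists in Table 1):

```python
def handle_occlusion(total_likelihood, threshold, last_state, new_state):
    """Keep the last confirmed state under severe occlusion.

    If the total observation likelihood is below the threshold, the target is
    treated as severely occluded: its reported state is frozen at the last
    confirmed state, while the particles keep propagating in the background.
    """
    return last_state if total_likelihood < threshold else new_state
```

Freezing the reported state, rather than trusting a near-zero-likelihood estimate, is what prevents the drift that would otherwise pull the tracker onto the occluder.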

Figure 8: Tracking result in severe occlusion condition.
Figure 9: Particles’ movements in severe occlusion condition.
4.4. The Algorithmic Process

The entire algorithmic process can be summarized as in Algorithm 1.

Algorithm 1: Our algorithm.

5. Experimental Verification and Analysis

To verify the performance of our algorithm, we evaluate it on several video sequences. These sequences are acquired from our own dataset, the PETS 2012 Benchmark data, and the CAVIAR database; in them the target pedestrians move under different conditions, including complex backgrounds, severe occlusion, illumination changes, and changes of walking speed.

In our algorithm, parameter settings are shown in Table 1. These parameters are fixed for all video sequences.

Table 1: Parameters of our algorithm.
5.1. Comparison with Other Trackers

For comparison, these sequences are used to evaluate the performance of superpixel tracking, the boosted particle filter (BPF), and our algorithm under occlusion.

The video parameters in the evaluation are shown in Table 2.

Table 2: Video parameters in simulation.

First of all, the sequence "three pedestrians in the hall," from our own dataset, is tested, in which three pedestrians walk through a hall. In Figure 10, the first and second rows contrast the outcomes of our algorithm with those of superpixel tracking and BPF, respectively. We can see from these frames that the BPF tracker drifts when pedestrians occlude or distract one another, because the BPF tracker constructs its proposal distribution from a mixture model that incorporates the dynamic model of each pedestrian and the detection hypotheses generated by AdaBoost; when partial occlusion occurs, it cannot obtain enough pedestrian feature descriptions, which leads to failure. By contrast, both superpixel tracking and our algorithm keep tracking the targets because they require only part of the features, and both are able to handle severe occlusion and recover from drifts. Therefore, both can track the targets accurately, but our algorithm has better tracking accuracy and robustness than superpixel tracking.

Figure 10: Sequence 1: tracking results. The results by our algorithm, superpixel tracking, and BPF methods are represented by solid line, dashed line, and dotted line rectangles. Rectangles in different colors denote the tracking results of different pedestrians.

The variation curves of the pedestrians' superpixel weight and LBP weight during tracking are illustrated in Figure 11. Because no occlusion occurs while tracking pedestrian 3, its superpixel and LBP weights show no obvious fluctuation. The superpixel weight begins to decline, and the LBP weight begins to increase, after the 107th frame, when occlusions between pedestrians emerge. As the targets move on, the interference of these occlusions goes away after the 123rd frame, and the superpixel weight again becomes higher than the LBP weight.

Figure 11: Sequence 1: pedestrians’ weight variation curves. (a) pedestrian 1, (b) pedestrian 2, and (c) pedestrian 3.

Figure 12 shows the position errors of the three pedestrians during tracking. For each pedestrian, the position error is defined as

e_t = sqrt((û_t − u_t)² + (v̂_t − v_t)²),

where (û_t, v̂_t) denotes the estimated target position at moment t, (u_t, v_t) denotes the real position at moment t, and e_t denotes the root-mean-square error at moment t.
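The per-frame center-location error, and the sequence-level average used for the quantitative comparison in Table 3, can be computed as:

```python
def position_error(estimated, truth):
    """Per-frame center-location error: Euclidean distance in pixels."""
    return ((estimated[0] - truth[0]) ** 2 + (estimated[1] - truth[1]) ** 2) ** 0.5

def average_error(est_track, true_track):
    """Mean of the per-frame errors over a tracked sequence."""
    errs = [position_error(e, t) for e, t in zip(est_track, true_track)]
    return sum(errs) / len(errs)
```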

Figure 12: Sequence 1: pedestrians’ error curves. (a) pedestrian 1, (b) pedestrian 2, and (c) pedestrian 3.

We can see that our algorithm is more accurate than either superpixel tracking or BPF tracking, and that the robustness of tracking is improved by using our algorithm.

Figure 13 shows target motion trajectories from the first frame to the last by using our algorithm. The different colors represent different pedestrian trajectories. The points in the graph constitute target motion trajectory, and each point represents the target location of each frame.

Figure 13: Sequence 1: target motion trajectories.

Secondly, the sequence "five pedestrians in the corridor," from the CAVIAR database, is tested, in which severe occlusion occurs twice. Figure 14 shows that our algorithm retains better tracking accuracy and robustness even though the pedestrians severely occlude one another. Figure 15 shows the target motion trajectories from the first frame to the last produced by our algorithm.

Figure 14: Sequence 2: tracking results. The results by our algorithm, superpixel tracking, and BPF methods are represented by solid line, dashed line, and dotted line rectangles. Rectangles in different colors denote the tracking results of different pedestrians.
Figure 15: Sequence 2: target motion trajectories.

Thirdly, the sequence "sparse crowd," from the PETS 2012 Benchmark data, is tested. It can be seen from Figure 16 that both superpixel tracking and BPF tracking fail on some targets, whereas our algorithm tracks all the targets under severe occlusion, pose variation, and changes of walking speed. Figure 17 shows the target motion trajectories from the first frame to the last produced by our algorithm.

Figure 16: Sequence 3: tracking results. The results by our algorithm, superpixel tracking, and BPF methods are represented by solid line, dashed line, and dotted line rectangles. Rectangles in different colors denote the tracking results of different pedestrians.
Figure 17: Sequence 3: target motion trajectories.

Finally, the sequence "two pedestrians in the square" is tested, in which one pedestrian is at one point severely occluded by the other. It differs from the first sequence in that the illumination of the pedestrians' environment changes, from strong to weak. Figure 18 shows that our algorithm maintains better tracking accuracy and robustness: although the illumination changes and severe mutual occlusion occurs, the pedestrians are tracked with accurate locations. Figure 19 shows the target motion trajectories from the first frame to the last produced by our algorithm.

Figure 18: Sequence 4: tracking results. The results of our algorithm, superpixel tracking, and BPF methods are represented by solid line, dashed line, and dotted line rectangles. Rectangles in different colors denote the tracking results of different pedestrians.
Figure 19: Sequence 4: target motion trajectories.

The quantitative evaluations of superpixel tracking, BPF, and our algorithm are presented in Table 3. It can be seen from the table that our algorithm has smaller average center-location errors in pixels than the other two algorithms, and thus better tracking accuracy. For each pedestrian, the average position error is defined as

ē = (1/T) Σ_{t=1}^{T} e_t,

where T denotes the total number of frames of the tracked video sequence and ē denotes the average root-mean-square error, which measures the experimental error; the smaller ē is, the better the tracking effect.

Table 3: Tracking average error. The numbers denote average errors of center location in pixels.
5.2. More Tracking Results

Our algorithm is tested on further sequences acquired from our own dataset, the PETS 2012 Benchmark data, and the CAVIAR database. Tracking results are shown in Figure 20.

Figure 20: More tracking results of our algorithm.

It can be seen from the above test results that our algorithm performs better in dealing with complex situations such as target translation, severe occlusion, illumination changes, changes of walking speed, and interference from similar objects.

6. Conclusions

In this paper, we propose multitarget tracking of pedestrians in video sequences based on particle filters. The contributions of our work are as follows: (1) we apply background subtraction and HOG to obtain target regions in training frames rapidly and accurately; (2) our algorithm builds a discriminative appearance model by collecting training samples and constructing two codebooks using superpixel and LBP features; (3) we integrate BoF into the particle filter to get better observation results and automatically adjust the weight of each feature according to the current tracking environment. Our algorithm was tested on a pedestrian tracking application in a campus environment, where it reliably tracked multiple targets and recovered their motion trajectories in difficult sequences with dramatic illumination changes, partial or severe occlusions, and cluttered backgrounds. Experimental results demonstrate the effectiveness and robustness of our algorithm.

Acknowledgments

This work was supported in part by the National Science Foundation of China under Grant no. 61170202 and Wuhan Municipality Programs for Science and Technology Development under Grant no. 201210121029.

References

  1. B. Babenko, S. Belongie, and M. H. Yang, “Visual tracking with online multiple instance learning,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009, pp. 983–990, June 2009. View at Publisher · View at Google Scholar · View at Scopus
  2. J. Kwon and K. M. Lee, “Visual tracking decomposition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1269–1276, June 2010. View at Publisher · View at Google Scholar · View at Scopus
  3. A. Makris, D. Kosmopoulos, S. Perantonis, and S. Theodoridis, “A hierarchical feature fusion framework for adaptive visual tracking,” Image and Vision Computing, vol. 29, no. 9, pp. 594–606, 2011. View at Publisher · View at Google Scholar · View at Scopus
  4. B. Leibe, E. Seemann, and B. Schiele, “Pedestrian detection in crowded scenes,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 878–885, June 2005. View at Publisher · View at Google Scholar · View at Scopus
  5. H. Zhou, Y. Yuan, and C. Shi, “Object tracking using SIFT features and mean shift,” Computer Vision and Image Understanding, vol. 113, no. 3, pp. 345–352, 2009. View at Publisher · View at Google Scholar · View at Scopus
  6. D. Comaniciu and V. Ramesh, “Mean shift and optimal prediction for efficient object tracking,” in Proceedings of the International Conference on Image Processing (ICIP '00), pp. 70–73, Vancouver, Canada, September 2000. View at Scopus
  7. B. O. S. Teixeira, M. A. Santillo, R. S. Erwin, and D. S. Bernstein, “Spacecraft tracking using sampled-data Kalman filters,” IEEE Control Systems Magazine, vol. 28, no. 4, pp. 78–94, 2008. View at Publisher · View at Google Scholar · View at Scopus
  8. M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, 2002. View at Publisher · View at Google Scholar · View at Scopus
  9. S. L. Tang, Z. Kadim, K. M. Liang, and M. K. Lim, “Hybrid blob and particle filter tracking approach for robust object tracking,” in Proceedings of the 10th International Conference on Computational Science (ICCS '10), pp. 2559–2567, June 2010. View at Publisher · View at Google Scholar · View at Scopus
  10. K. Nummiaro, E. Koller-Meier, and L. Van Gool, “An adaptive color-based particle filter,” Image and Vision Computing, vol. 21, no. 1, pp. 99–110, 2003. View at Publisher · View at Google Scholar · View at Scopus
  11. J. Vermaak, A. Doucet, and P. Pérez, “Maintaining multi-modality through mixture tracking,” in Proceedings of the 9th IEEE International Conference on Computer Vision, pp. 1110–1116, October 2003. View at Scopus
  12. K. Okuma, A. Taleghani, N. De Freitas, et al., “A boosted particle filter: Multitarget detection and tracking,” in Proceedings of the European Conference on Computer Vision, pp. 28–39, 2004. View at Scopus
  13. A. Vedaldi and S. Soatto, “Quick shift and kernel methods for mode seeking,” in Proceedings of the European Conference on Computer Vision, pp. 705–718, 2008. View at Publisher · View at Google Scholar · View at Scopus
  14. A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi, “TurboPixels: fast superpixels using geometric flows,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2290–2297, 2009. View at Publisher · View at Google Scholar · View at Scopus
  15. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, “Slic superpixels,” EPFL Technical Report 149300, 2010.
  16. X. Ren and J. Malik, “Tracking as repeated figure/ground segmentation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1–8, June 2007. View at Publisher · View at Google Scholar · View at Scopus
  17. S. Wang, H. C. Lu, F. Yang, and M. H. Yang, “Superpixel tracking,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1323–1330, 2011.
  18. F. Yang, H. Lu, and Y. W. Chen, “Bag of features tracking,” in Proceedings of the International Conference on Pattern Recognition (ICPR '10), pp. 153–156, August 2010. View at Publisher · View at Google Scholar · View at Scopus
  19. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), pp. 886–893, June 2005. View at Publisher · View at Google Scholar · View at Scopus