Journal of Electrical and Computer Engineering
Volume 2018, Article ID 5381962, 9 pages
https://doi.org/10.1155/2018/5381962
Research Article

Target Tracking via Particle Filter and Convolutional Network

1College of Automation, Harbin Engineering University, Harbin, China
2College of Electrical and Information Engineering, Heilongjiang Institute of Technology, Harbin, China

Correspondence should be addressed to Hongxia Chu; chx0420@163.com

Received 3 June 2017; Revised 17 August 2017; Accepted 14 November 2017; Published 9 January 2018

Academic Editor: Tongliang Liu

Copyright © 2018 Hongxia Chu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a more effective tracking algorithm that works robustly in complex scenes involving illumination change, appearance variation, and partial occlusion. The algorithm is based on an improved particle filter with an efficiently designed observation model. Predefined convolutional filters are used to extract high-order features, and a global representation is generated by combining local features without changing their structures or spatial arrangements; this increases feature invariance while maintaining specificity. The features extracted by the convolutional network, which requires no offline training in our approach, are introduced into the particle filter, and the observation model is constructed by fusing them with the color features of the target. During tracking, the template is updated in real time, which improves the robustness of the algorithm. Experiments show that the algorithm achieves good tracking performance when targets are in complex environments.

1. Introduction

Object tracking has wide application prospects in computer vision, and many researchers have recently carried out extensive research on it in real-world settings [1]. Detecting and tracking a target is a difficult task in practical applications [2]. Many factors can degrade the performance of a tracking algorithm, including pose changes, appearance variation owing to illumination changes, partial occlusion, and background noise [3, 4]. To address these problems, we need more efficient machine learning [5] and feature extraction [6] methods to describe the target.

At present, tracking algorithms mainly fall into two types: generative models and discriminative models [5]. The particle filter is a representative generative tracking algorithm and has been used widely in tracking problems. It has the advantages of simplicity and flexibility, and it easily handles non-Gaussian and multimodal system models; much related work is presented in [7–11]. Information from different measurement sources can be fused within the particle filter framework, which has greatly improved tracking performance, but in practice there is still much room to improve the effectiveness of tracking algorithms. In addition, the classical particle filter usually adopts a dynamic model with global information: regardless of whether the target is occluded or deformed, it treats the target as a whole. This neglects the local information of the target, so when the target is partially occluded or its local appearance changes, the particle filter cannot track it accurately.

Discriminative methods distinguish targets from backgrounds by training classifiers. At present, most deep learning methods for target tracking also belong to the discriminative family. Deep learning has made outstanding achievements in image classification and target detection and has become one of the most powerful automatic feature extraction methods. A deep network gradually obtains high-level abstract features from low-level features through learning and mapping over multiple levels. These abstract features are high-dimensional and highly discriminative, so high accuracy in classification and regression tasks can be achieved with a simple classifier. Some tracking methods based on learned features have been proposed that use convolutional networks trained offline [12, 13]. In tracking, target localization is achieved by extracting features of the target from different layers of the network.

The key difficulty in all of these methods is learning an effective feature extractor offline with a great deal of auxiliary data, which consumes a lot of time. These methods also give no consideration to the similar local structure and inner geometric distribution information of the target over consecutive frames, which is convenient and effective for distinguishing the target from the background in visual tracking. In addition, using deep learning alone does not solve the problem of tracking drift; it needs to be combined with other methods for the deep network to play its role fully [14, 15].

In summary, a target tracking algorithm combining a particle filter and a convolutional network is proposed in this paper. The features extracted by the convolutional network are introduced into the particle filter framework, and the target blocks are encoded by sparse representation. The local and spatial information of the target is fully exploited to represent changes of the object state, and different information is handled according to the target state. Because the global information in the particle filter is combined to determine the current target position, local appearance changes and partial occlusion of the target are better handled. In addition, during tracking the template is updated according to the tracking results, which improves the robustness of the algorithm. Experiments show that the algorithm achieves good tracking performance when the target is in a complex environment.

2. Particle Filtering Tracking Formula

The tracking problem for the particle filter is to estimate the posterior probability density $p(x_t \mid z_{1:t})$ at time $t$, which is obtained in two steps [10].

Step 1 (prediction). First, suppose the initial probability density $p(x_0)$ is known and the posterior probability density function $p(x_{t-1} \mid z_{1:t-1})$ is known at time $t-1$. The state $x_t$ is described as a three-dimensional vector $x_t = (x, y, s)$, where $(x, y)$ expresses the position of the object and $s$ expresses the size change of the object. Then the prior probability is

$$p\left(x_t \mid z_{1:t-1}\right) = \int p\left(x_t \mid x_{t-1}\right) p\left(x_{t-1} \mid z_{1:t-1}\right) dx_{t-1}, \tag{1}$$

where $p(x_t \mid x_{t-1})$ is defined by the state equation of the target.

Step 2 (updating). Then $p(x_t \mid z_{1:t})$ is obtained from the observation model of the system:

$$p\left(x_t \mid z_{1:t}\right) = \frac{p\left(z_t \mid x_t\right) p\left(x_t \mid z_{1:t-1}\right)}{p\left(z_t \mid z_{1:t-1}\right)}. \tag{2}$$

The observation likelihood function $p(z_t \mid x_t)$ is determined by the observation of the target, and $p(z_t \mid z_{1:t-1})$ is a normalization constant.

In fact, since the integral in formula (1) is difficult to compute, recursive Bayesian filtering (i.e., the particle filter) is approximated by the nonparametric Monte Carlo method. The basic formula is

$$p\left(x_t \mid z_{1:t}\right) \approx \sum_{i=1}^{N} w_t^i\, \delta\left(x_t - x_t^i\right), \tag{3}$$

where $w_t^i$ is the weight of the corresponding particle $x_t^i$. The weight of the particle is updated according to the observation value; that is,

$$w_t^i \propto w_{t-1}^i\, \frac{p\left(z_t \mid x_t^i\right) p\left(x_t^i \mid x_{t-1}^i\right)}{q\left(x_t^i \mid x_{t-1}^i, z_t\right)}, \tag{4}$$

where $q(x_t^i \mid x_{t-1}^i, z_t)$ is the proposal distribution (importance density) in Bayesian importance sampling. A simple and common choice is to take the proposal distribution as the prior density, that is, $q(x_t^i \mid x_{t-1}^i, z_t) = p(x_t^i \mid x_{t-1}^i)$. Then the weight becomes

$$w_t^i \propto w_{t-1}^i\, p\left(z_t \mid x_t^i\right). \tag{5}$$

Finally, the state estimate of the target is obtained; that is,

$$\hat{x}_t = \sum_{i=1}^{N} w_t^i\, x_t^i. \tag{6}$$
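The predict-update-estimate cycle above can be sketched as a bootstrap particle filter. The snippet below is an illustrative NumPy implementation, not the paper's code; the random-walk motion model, the effective-sample-size resampling rule, and all names are our own simplifications:

```python
import numpy as np

def particle_filter_step(particles, weights, likelihood, sigma=1.0, rng=None):
    """One predict-update cycle of a bootstrap particle filter.

    particles: (N, d) array of states x_t^i; weights: (N,) normalized weights.
    likelihood: maps an (N, d) state array to (N,) values p(z_t | x_t^i).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(particles)
    # Prediction: propagate each particle through a random-walk motion model.
    particles = particles + rng.normal(scale=sigma, size=particles.shape)
    # Update: with the prior as proposal, w_t^i ∝ w_{t-1}^i * p(z_t | x_t^i).
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # State estimate: the weighted mean of the particles.
    estimate = weights @ particles
    # Resample when the effective sample size drops below half the particles.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights, estimate
```

Taking the prior as the proposal keeps the weight update to a single multiplication by the likelihood, exactly as in the simplified weight formula.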

3. Model Construction and Implementation

3.1. Target Motion Model

For each new frame, every particle carries out a state transition according to the following motion model:

$$x_t^i = x_{t-1}^i + v\, r, \tag{7}$$

where $v$ is Gaussian white noise and $r$ is the propagation radius of the particle, which is proportional to the average state change of the target at the previous moment.

3.2. Target Observation Model

Each input image is warped to a fixed size of $n \times n$ pixels, denoted as $I$. A set of local image blocks $Y = \{Y_1, \ldots, Y_l\}$ is obtained by densely sampling with a sliding window of size $w \times w$ ($w$ is called the receptive field size), where $Y_i$ is the $i$th image block and $l = (n - w + 1)^2$. Each block is preprocessed by subtracting its mean and normalizing it [11].

3.2.1. Color Model

(1) Target Model. Let $\{x_i\}_{i=1,\ldots,n_p}$ be the pixel locations of the vectorized image patch, with zero as the center. The number of feature-value bins is $m$. The probability of the feature value $u$ in the target model is [16]

$$q_u = C \sum_{i=1}^{n_p} k\left(\left\|x_i\right\|^2\right) \delta\left[b\left(x_i\right) - u\right], \tag{8}$$

where $k(\cdot)$ is a kernel function used to adjust the size of the weights, $\delta(\cdot)$ is the delta function, $b(x_i)$ represents the color value of the pixel at $x_i$, $u = 1, \ldots, m$ is the color index of the histogram, and $C$ is a normalization constant.

(2) Target Candidate Model. Similarly, taking $y$ as the center, the probability of the target candidate model is

$$p_u(y) = C_h \sum_{i=1}^{n_p} k\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \delta\left[b\left(x_i\right) - u\right], \tag{9}$$

where $h$ is the window radius and $C_h$ is a normalization constant.

(3) Similarity Function. In this paper, the Bhattacharyya coefficient is used to calculate the similarity of the two models [17]:

$$\rho\left(p, q\right) = \sum_{u=1}^{m} \sqrt{p_u(y)\, q_u}. \tag{10}$$

Its value lies in $[0, 1]$. We then define the distance between the two target templates as

$$d = \sqrt{1 - \rho\left(p, q\right)}. \tag{11}$$

The corresponding color observation probability of the particle is

$$p_{\mathrm{color}}\left(z_t \mid x_t^i\right) \propto \exp\left(-\frac{d^2}{2\sigma^2}\right), \tag{12}$$

where $\sigma$ is the mean square deviation.
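The kernel-weighted histogram, the Bhattacharyya similarity, and the exponential likelihood above can be sketched as follows. This is a hedged illustration: the Epanechnikov profile for the kernel and the single-channel color quantization are common mean-shift choices [16], not details confirmed by the paper:

```python
import numpy as np

def color_likelihood(patch, template_hist, bins=16, sigma=0.2):
    """Kernel-weighted color histogram likelihood for one candidate patch.

    patch: (H, W) array of quantized color indices in [0, bins).
    template_hist: (bins,) normalized target histogram q.
    Returns a value proportional to exp(-d^2 / (2 sigma^2)),
    with d = sqrt(1 - Bhattacharyya(p, q)).
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized squared distance of each pixel from the patch center.
    r2 = ((ys - h / 2) / (h / 2)) ** 2 + ((xs - w / 2) / (w / 2)) ** 2
    kernel = np.maximum(0.0, 1.0 - r2)          # Epanechnikov profile
    hist = np.bincount(patch.ravel(), weights=kernel.ravel(), minlength=bins)
    hist = hist / hist.sum()                    # candidate model p_u(y)
    bc = np.sum(np.sqrt(hist * template_hist))  # Bhattacharyya coefficient
    d = np.sqrt(max(0.0, 1.0 - bc))             # template distance
    return np.exp(-d ** 2 / (2 * sigma ** 2))
```

The kernel down-weights pixels near the patch border, so partial occlusions at the boundary perturb the histogram less than central changes do.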

3.2.2. Convolutional Networks Model

To describe the target better, we apply convolutional networks to learn robust representations for visual tracking without offline training on a large amount of auxiliary data, inspired by recent studies [11, 18]. First, we use predefined convolutional filters to extract high-order features. Second, we generate a global representation by combining local features without changing their structures or spatial arrangements. This increases feature invariance while maintaining specificity.

Step 1 (target layer). The $k$-means clustering method is used to cluster a bank of filters $F = \{F_1, \ldots, F_d\}$ as convolution kernels from the $w \times w$ patches sampled from the target region. Given the $i$th filter $F_i$, its response on the input image $I$ is denoted by a feature map $S_i = F_i \otimes I$, where $S_i \in \mathbb{R}^{(n-w+1) \times (n-w+1)}$ and $\otimes$ is the convolution operator.

Step 2 (background layer). At the same time, useful background information around the target is used to distinguish the target from the background. Samples are selected from the background around the target, and $k$-means is used to select a bank of filters $F^b = \{F_1^b, \ldots, F_d^b\}$. We use average pooling to summarize the filters in $F^b$ and generate the background context filter set $\{\bar{F}_i^b\}$, which is then convolved with the input image $I$. Finally, the simple cell feature maps are defined as

$$S_i = F_i \otimes I - \bar{F}_i^b \otimes I, \quad i = 1, \ldots, d. \tag{13}$$
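The target and background layers above can be sketched with a naive k-means and a direct valid-mode convolution. This is a simplified illustration (real code would use optimized clustering and FFT-based convolution); by linearity, subtracting a pooled background filter from each target filter before convolving is equivalent to subtracting its response map afterwards:

```python
import numpy as np

def kmeans_filters(patches, d, iters=20, rng=None):
    """Cluster w×w patches into d filters with plain k-means (a stand-in for
    the paper's filter bank; a production version might use sklearn)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = patches.reshape(len(patches), -1)
    centers = x[rng.choice(len(x), d, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(d):
            if np.any(labels == k):              # keep old center if empty
                centers[k] = x[labels == k].mean(0)
    return centers.reshape(d, *patches.shape[1:])

def feature_maps(image, filters, bg_filter=None):
    """Valid-mode convolution of each filter with the image; subtracting the
    pooled background filter follows the background-layer idea."""
    d, w, _ = filters.shape
    n = image.shape[0]
    out = np.empty((d, n - w + 1, n - w + 1))
    for k in range(d):
        f = filters[k] if bg_filter is None else filters[k] - bg_filter
        for i in range(n - w + 1):
            for j in range(n - w + 1):
                out[k, i, j] = np.sum(f * image[i:i + w, j:j + w])
    return out
```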

Step 3 (convolution layer). The simple cell feature maps produced by the filter set are stacked to construct a three-dimensional tensor $C \in \mathbb{R}^{(n-w+1) \times (n-w+1) \times d}$, that is, the combination of the feature maps. This combined representation is specific to the target and sensitive to shifts. In addition, because the warped region has a fixed size, the feature is robust to changes of target scale.

To increase robustness to appearance change, we represent the feature using a sparse representation:

$$\min_{c}\; \frac{1}{2}\left\|\operatorname{vec}(C) - c\right\|_2^2 + \lambda \left\|c\right\|_1. \tag{14}$$

The solution of this model can be obtained by soft shrinkage [19]:

$$c = \operatorname{sign}\left(\operatorname{vec}(C)\right) \odot \max\left(0,\; \left|\operatorname{vec}(C)\right| - \lambda\right), \tag{15}$$

where $\operatorname{sign}(\cdot)$ is the sign function applied elementwise and the threshold $\lambda$ is set to the median value of $\left|\operatorname{vec}(C)\right|$.
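The soft-shrinkage solution above is a one-liner in NumPy. The median-based threshold in the second helper follows our reading of the paper's (partly garbled) description and should be treated as an assumption:

```python
import numpy as np

def soft_shrink(c, lam):
    """Soft-thresholding: sign(c) * max(|c| - lam, 0), the closed-form
    solution of min_x 0.5*||x - c||^2 + lam*||x||_1, applied elementwise."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def sparse_code(feature_vec):
    """Sparse code of a vectorized feature tensor with a median-based
    threshold (our assumption), which zeroes at least half of the entries."""
    return soft_shrink(feature_vec, np.median(np.abs(feature_vec)))
```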

Step 4 (model update). The update strategy has a low-pass filtering form:

$$c_t = \rho\, c_{t-1} + \left(1 - \rho\right) \hat{c}_{t-1}, \tag{16}$$

where $c_t$ is the target template at frame $t$, $C_{t-1}$ is the feature of the previous frame, $\hat{c}_{t-1}$ is the sparse representation of $C_{t-1}$, and $\rho$ is a learning parameter.

In the convolution model, the observation model is defined by (17):

$$p_{\mathrm{conv}}\left(z_t \mid x_t^i\right) \propto \exp\left(-\left\|\left(c_t - c_t^i\right) \odot \mathbf{1}\left(c_t\right)\right\|_2^2\right), \tag{17}$$

where $c_t^i$ is the $i$th candidate sample representation at frame $t$ based on the complex cell features, $\odot$ denotes the elementwise product, and $\mathbf{1}(\cdot)$ is an indicator function whose $j$th element is defined as

$$\left[\mathbf{1}(c)\right]_j = \begin{cases} 1, & c_j \neq 0, \\ 0, & c_j = 0. \end{cases} \tag{18}$$
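A minimal sketch of the template update and the indicator-masked similarity described above, assuming an elementwise low-pass blend with learning parameter rho and a Gaussian-shaped likelihood (the exact blending direction and normalization in the paper's equations were lost in extraction):

```python
import numpy as np

def update_template(template, sparse_code, rho=0.95):
    """Low-pass template update: keep a rho fraction of the old template and
    blend in the newest sparse code (rho = 0.95 in the experiments; the
    blending direction is our assumption)."""
    return rho * template + (1.0 - rho) * sparse_code

def conv_likelihood(template, candidate, sigma=1.0):
    """Similarity of template and candidate sparse codes, restricted by an
    indicator mask to the template's nonzero (informative) entries."""
    mask = (template != 0).astype(float)       # the indicator function
    diff = mask * (template - candidate)
    return np.exp(-np.sum(diff ** 2) / (2.0 * sigma ** 2))
```

Masking with the template's nonzero entries means a candidate is penalized only where the template itself carries information, which makes the score insensitive to background entries that were shrunk to zero.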

3.2.3. System Observation Model

The system observation probability density function of each particle is

$$p\left(z_t \mid x_t^i\right) = \alpha\, p_{\mathrm{color}}\left(z_t \mid x_t^i\right) + \left(1 - \alpha\right) p_{\mathrm{conv}}\left(z_t \mid x_t^i\right). \tag{19}$$

The parameter $\alpha$ regulates the proportion of the observation probability of each feature in the total observation probability. When the background is complex and the target is partially occluded, the global positioning advantage of the color distribution should be fully exploited, and $\alpha$ should be increased. When the color of the target differs from the background color, $\alpha$ should be reduced so that the localization advantage of the convolutional feature can be fully exploited. Under normal circumstances, a fixed intermediate value of $\alpha$ is used.
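The fusion can be sketched as a convex combination, matching the description that the parameter regulates the proportion of each cue; alpha = 0.5 here is only an illustrative default, since the paper's exact value did not survive extraction:

```python
def fused_likelihood(p_color, p_conv, alpha=0.5):
    """Convex combination of the color and convolutional observation
    probabilities. alpha is raised under occlusion or background clutter
    (favoring the color cue) and lowered when color alone is discriminative.
    The default of 0.5 is an illustrative assumption, not the paper's value."""
    return alpha * p_color + (1.0 - alpha) * p_conv
```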

The flow chart of particle filter algorithm based on feature fusion is displayed in Figure 1.

Figure 1: Flow chart of the algorithm.

4. Experimental Results

4.1. Implementation Parameters Setup

We utilized the two-layer convolutional network as a feature extractor in the experiments, with warped image patches obtained at the initial frame. The warped-image size, the receptive field size $w \times w$, and the number of filters $d$ are fixed in advance. The learning parameter $\rho$ in (16) is set to 0.95, and the template is updated every frame. The standard deviations of the target state and the number of particles in the particle filter are likewise fixed for all sequences. We compared the improved algorithm with five other tracking algorithms (CNT [11], MS [16], MTT [20], VTS [21], and CPF [22]). All five use the particle filter as the search mechanism, and MTT, CNT, and VTS use sparse representation.

4.2. Evaluation Metrics

The tracking benchmark dataset [23] is used for experimental validation. To better analyze the performance of the algorithm, we employ videos annotated with 11 attributes covering complex scenes [11].

To evaluate the system quantitatively, we use two graphs: the success plot and the precision plot. The success plot is based on the overlap rate $S = \operatorname{area}(BT \cap BG) / \operatorname{area}(BT \cup BG)$, where $BT$ denotes the tracked bounding box and $BG$ denotes the ground-truth bounding box. The success rate is the percentage of frames with $S > t_0$, plotted over all thresholds $t_0 \in [0, 1]$. The precision plot shows the percentage of frames whose tracked position lies within a given threshold distance of the ground truth. To rank the trackers, we set the threshold to 20 pixels in the precision plot. One-pass evaluation (OPE) is used to report the average success and precision ratios with respect to the ground-truth target state [23].
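The overlap-based success rate and the 20-pixel precision metric can be computed as below; boxes are assumed to be (x, y, w, h) tuples, which is the benchmark's annotation format [23]:

```python
import numpy as np

def overlap_ratio(bt, bg):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    x1, y1 = max(bt[0], bg[0]), max(bt[1], bg[1])
    x2 = min(bt[0] + bt[2], bg[0] + bg[2])
    y2 = min(bt[1] + bt[3], bg[1] + bg[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / (bt[2] * bt[3] + bg[2] * bg[3] - inter)

def success_rate(tracked, truth, t0=0.5):
    """Fraction of frames whose overlap exceeds the threshold t0."""
    return np.mean([overlap_ratio(a, b) > t0 for a, b in zip(tracked, truth)])

def precision(tracked, truth, thresh=20.0):
    """Fraction of frames whose center error is within `thresh` pixels."""
    err = [np.hypot(a[0] + a[2] / 2 - b[0] - b[2] / 2,
                    a[1] + a[3] / 2 - b[1] - b[3] / 2)
           for a, b in zip(tracked, truth)]
    return np.mean([e <= thresh for e in err])
```

Sweeping `t0` from 0 to 1 (or `thresh` over a pixel range) and plotting the resulting rates reproduces the success and precision curves used in the evaluation.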

4.3. Quantitative Analysis

(1) Overall Performance. Figure 2 shows the performance of the top 6 implemented tracking algorithms according to the success and precision plots. It must be pointed out that all the graphs are produced using the benchmark evaluation [23]; our proposed algorithm ranks first in success rate and second in precision rate. Note that the proposed algorithm exploits only a simple sparse image representation that encodes the local structural and geometric layout information of the target, yet achieves competitive performance compared with the other methods. Furthermore, even using only target-specific information from the first frame, without learning from auxiliary training data, our method performs well in contrast to the other methods. This is mainly because generic features learned offline from numerous auxiliary data may not adapt well to object appearance variations during tracking.

Figure 2: OPE of the success plots and precision plots in the top 6 trackers.

(2) Performance per Attribute. To analyze the performance of the improved method, we evaluate the trackers on videos with different attributes. In this paper, 11 attributes are selected. Figures 3 and 4 show, respectively, the success plots and the corresponding precision plots [24]. Our improved algorithm ranks first on 3 of the attributes in the success plots and first on 3 in the precision plots.

Figure 3: The success plots of videos with different attributes.
Figure 4: The precision plots of videos with multiple attributes.

Our method also ranks first among all evaluated trackers on the video sequences with low resolution. It is difficult to extract useful features of the targets when the resolution of the videos is low. In contrast, our algorithm extracts dense information across the entire target region with convolution operators to separate the target from the complex scene.

Our method ranks second, behind the CNT method, on the video sequences with background clutter. The proposed algorithm uses background context information that is updated online and pooled in every frame and hence provides effective features to locate objects precisely amid clutter.

4.4. Qualitative Analysis

(1) Variation of Illumination and Posture. Figure 5 shows successful tracking results on the shaking and skating_1 sequences. When great changes take place in the stage lighting conditions, the posture of the target also changes drastically owing to dancing or head shaking. Our algorithm handles the posture changes effectively because the observation model is updated online. In addition, the proposed algorithm is robust to illumination changes because the observation model uses a hybrid template, whereas the other algorithms fail to track the target when the lighting changes occur simultaneously.

Figure 5: Qualitative evaluation of the tracker on the shaking and skating_1 sequences.

(2) Occlusion. Figure 6 shows successful tracking results of the improved algorithm while the object is seriously occluded by other objects. Our method and CNT track the woman and jogging-1 sequences robustly, and the positioning of our method is more accurate. The two methods are robust in the presence of occlusion because of their efficient observation models: the model uses local features and records the appearance of the target over time, including occlusion, appearance change, and mixtures of the two, so it can cope with various occlusions. The other algorithms fail to track the targets accurately.

Figure 6: Qualitative evaluation of the trackers on the woman and jogging-1 sequences.

(3) Background Clutters. The carDark sequence is tested in Figure 7. The background clutter is severe, and its appearance is similar to that of the target. Under these circumstances, the other tracking methods fail because of interference from similar targets around the car, but our algorithm succeeds in tracking.

Figure 7: Qualitative evaluation of the tracker on the carDark sequence.

(4) Deformation. Some successful tracking results on the singer2 sequence are shown in Figure 8. There are illumination changes and deformation in the sequence. Only our method performs well throughout.

Figure 8: Qualitative evaluation of the tracker on the singer2 sequence.

(5) Scale Changes. The tracking results on the freeman3 and singer1 sequences are shown in Figure 9. In these sequences, the tracked targets undergo severe scale changes. In the freeman3 sequence, a person moves towards the camera with a large scale variation in his facial appearance, and the appearance also varies as the posture changes. The MTT, CPF, MS, and VTS algorithms deviate from the target from frame #330, whereas the proposed and CNT algorithms succeed in tracking. The MTT, CPF, MS, and VTS algorithms also fail to track efficiently when the target undergoes a large scale change in the singer1 sequence, but our proposed method and CNT achieve better performance. The proposed algorithm handles scale variation effectively because the representation model is built on scale-invariant complex cell features.

Figure 9: Qualitative evaluation of the tracker on the freeman3 and singer1 sequences.

5. Conclusion

We have presented an effective tracking method using a particle filter and a convolutional network. A deep learning method is used to extract effective features for robust tracking, and the algorithm efficiently handles severe appearance change and occlusion. The experimental results show that the improved method outperforms traditional tracking methods in challenging tracking environments. Since the algorithm is easily extended by adding more effective feature information, the tracking results could be further enhanced.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the Research Foundation of the Young Reserve Talents Project of the Scientific and Technical Bureau in Harbin (no. 2017RAQXJ134) and by the National Natural Science Foundation of China (no. 61573114).

References

  1. A. Yilmaz, O. Javed, and M. Shah, “Object tracking: a survey,” ACM Computing Surveys, vol. 38, no. 4, article 13, 2006.
  2. X. Ben, W. Meng, and R. Yan, “Dual-ellipse fitting approach for robust gait periodicity detection,” Neurocomputing, vol. 79, pp. 173–178, 2012.
  3. A. D. Jepson, D. J. Fleet, and T. F. El-Maraghi, “Robust online appearance models for visual tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1296–1311, 2003.
  4. D. A. Ross, J. Lim, R.-S. Lin, and M.-H. Yang, “Incremental learning for robust visual tracking,” International Journal of Computer Vision, vol. 77, no. 1–3, pp. 125–141, 2008.
  5. T. Liu and D. Tao, “Classification with noisy labels by importance reweighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 3, pp. 447–461, 2016.
  6. T. L. Liu, Q. Yang, and D. C. Tao, “Understanding how feature structure transfers in transfer learning,” in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI '17), pp. 2365–2371, Melbourne, Australia, 2017.
  7. M. Nieto, A. Cortés, O. Otaegui, J. Arróspide, and L. Salgado, “Real-time lane tracking using Rao-Blackwellized particle filter,” Journal of Real-Time Image Processing, vol. 11, no. 1, pp. 179–191, 2016.
  8. M. Lucena, J. M. Fuertes, and N. P. de la Blanca, “Optical flow-based observation models for particle filter tracking,” Pattern Analysis and Applications, vol. 18, no. 1, pp. 135–143, 2015.
  9. M. E. Yildirim, I. F. Ince, Y. B. Salman, J. K. Song, J. S. Park, and B. W. Yoon, “Direction-based modified particle filter for vehicle tracking,” ETRI Journal, vol. 38, no. 2, pp. 356–365, 2016.
  10. J. Kwon and K. M. Lee, “Visual tracking decomposition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1269–1276, June 2010.
  11. K. Zhang, Q. Liu, Y. Wu, and M.-H. Yang, “Robust visual tracking via convolutional networks without training,” IEEE Transactions on Image Processing, vol. 25, no. 4, pp. 1779–1792, 2016.
  12. H. Nam and B. Han, “Learning multi-domain convolutional neural networks for visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '16), pp. 4293–4302, July 2016.
  13. L. J. Wang, W. L. Ouyang, X. G. Wang, and H. C. Lu, “Visual tracking with fully convolutional networks,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '15), pp. 3119–3127, Santiago, Chile, December 2015.
  14. C. Ma, J. B. Huang, X. K. Yang, and M. H. Yang, “Hierarchical convolutional features for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '15), pp. 3074–3082, Santiago, Chile, December 2015.
  15. H. Li, Y. Li, and F. Porikli, “DeepTrack: learning discriminative feature representations online for robust visual tracking,” IEEE Transactions on Image Processing, vol. 25, no. 4, pp. 1834–1848, 2016.
  16. D. Comaniciu, V. Ramesh, and P. Meer, “Real-time tracking of non-rigid objects using mean shift,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), pp. 142–149, Hilton Head Island, SC, USA, June 2000.
  17. X. Ben, P. Zhang, W. Meng et al., “On the distance metric learning between cross-domain gaits,” Neurocomputing, vol. 208, pp. 153–164, 2016.
  18. D. Held, S. Thrun, and S. Savarese, “Learning to track at 100 FPS with deep regression networks,” in European Conference on Computer Vision (ECCV '16), vol. 9905, pp. 749–765, Springer International Publishing, 2016.
  19. H. Zaimaga and M. Lambert, “Soft shrinkage thresholding algorithm for nonlinear microwave imaging,” Journal of Physics: Conference Series, vol. 756, no. 1, Article ID 012011, 2016.
  20. T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, “Robust visual tracking via multi-task sparse learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '12), pp. 2042–2049, Providence, RI, USA, June 2012.
  21. J. Kwon and K. M. Lee, “Tracking by sampling trackers,” in Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV '11), pp. 1195–1202, November 2011.
  22. P. Pérez, C. Hue, J. Vermaak, and M. Gangnet, “Color-based probabilistic tracking,” in Proceedings of the European Conference on Computer Vision (ECCV '02), pp. 661–675, 2002.
  23. Y. Wu, J. Lim, and M.-H. Yang, “Online object tracking: a benchmark,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 2411–2418, Portland, Ore, USA, June 2013.
  24. W. Chen, K. Zhang, and Q. Liu, “Robust visual tracking via patch based kernel correlation filters with adaptive multiple feature ensemble,” Neurocomputing, vol. 214, pp. 607–617, 2016.