Discrete Dynamics in Nature and Society
Volume 2014 (2014), Article ID 976574, 11 pages
Moving Target Detection and Active Tracking with a Multicamera Network
1Science and Technology on Aircraft Control Laboratory, Beihang University, Beijing 100191, China
2Digital Navigation Center, Beihang University, Beijing 100191, China
3School of Aeronautical Science and Engineering, Beihang University, Beijing 100191, China
Received 16 January 2014; Revised 7 April 2014; Accepted 7 April 2014; Published 30 April 2014
Academic Editor: Wei Lin
Copyright © 2014 Long Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We propose a systematic framework for an Intelligent Video Surveillance System (IVSS) with a multicamera network. The proposed framework consists of low-cost static and PTZ cameras, target detection and tracking algorithms, and a low-cost PTZ camera feedback control algorithm based on target information. Target detection and tracking is realized by the fixed cameras using a moving target detection and tracking algorithm; the PTZ camera is manoeuvred to actively track the target based on the tracking results of the static cameras. The experiments are carried out using practical surveillance system data, and the experimental results show that the systematic framework and algorithms presented in this paper are effective.
1. Introduction
Moving target detection and tracking has a variety of applications in the field of computer vision, such as intelligent video surveillance, motion analysis, action recognition, environmental monitoring, and disaster response. Normally, it is quite easy and intuitive for humans to see and track targets and recognize their actions. However, establishing an automatic system that requires no human intervention is very challenging. In particular, as camera networks grow with the development of safe and smart cities, it becomes infeasible for human operators to manually monitor multiple video streams and identify all events of possible interest, or even to control individual cameras in performing advanced surveillance tasks, such as actively tracking a moving target of interest to capture one or more close-up snapshots. Therefore, an important task of the Intelligent Video Surveillance System (IVSS) is to design multicamera sensor networks capable of performing visual surveillance tasks automatically or at least with minimal human intervention. The design of an autonomous visual sensor network as a problem in resource allocation and scheduling can be found in [1]. Existing camera networks generally consist of fixed cameras covering a large area. This results in situations where targets are often not covered at desirable resolutions or viewpoints, making it difficult to analyze the videos, particularly when there are special requirements on the targets, such as detection and tracking precision, target positioning, and target identification. Since the total number of cameras is usually restricted by various factors, for example, cost and placement, works have addressed this problem by combining a pan-tilt-zoom (PTZ) camera with multiple PTZ or fixed cameras in a master-slave manner to complete practical tasks [2–7].
A typical system contains multiple static and PTZ cameras. The static cameras cover a large area and perform moving target detection and tracking, while the PTZ cameras are manoeuvred to actively track the target, based on the tracking results of the static cameras, under a central supervisor unit. For this purpose, it is necessary to determine the geometrical relations between cameras using camera calibration techniques [8–11].
To detect moving targets in the video frames of the static cameras, one of the most widely used algorithms is the background subtraction approach [12, 13]. When the video camera is stationary, the background scene does not change, and thus it is easy to construct a background model [14, 15]. For any robust background subtraction algorithm, it is critical that the background image be estimated efficiently and accurately. A model of the recent history is built for each pixel location, and new pixel values are classified by comparing each of them with the corresponding pixel models. Background modeling techniques can be divided into two categories: parametric techniques, which use a parametric model for each pixel location, and sample-based techniques, which build their model by aggregating previously observed values for each pixel location. A well-known method presented by Stauffer and Grimson [14] uses an adaptive strategy for parametric background modeling. In this technique, each pixel is modeled by a separate Gaussian mixture, which is continuously learnt by an online approximation. Target detection in the current frame is then performed at the pixel level by comparing each pixel value against the most likely background Gaussians, determined by a threshold. However, since its sensitivity cannot be accurately tuned, its ability to successfully handle both high- and low-frequency changes in the background is debatable. To overcome these shortcomings, sample-based techniques [16, 17] circumvent part of the parameter estimation step by building their models directly from observed pixel values, which enhances their robustness to noise. They provide fast responses to high-frequency events in the background by directly including newly observed values in their pixel models.
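To illustrate the parametric, per-pixel idea, the following is a minimal sketch in the spirit of the Stauffer-Grimson approach, simplified to a single adaptive Gaussian per pixel rather than the full mixture. The class name, learning rate, and match threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class GaussianBackground:
    """Per-pixel adaptive Gaussian background model (a simplified,
    single-Gaussian version of the Stauffer-Grimson mixture approach)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)       # per-pixel mean
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # per-pixel variance
        self.alpha = alpha   # learning rate of the online approximation
        self.k = k           # match threshold, in standard deviations

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        # A pixel is foreground if it deviates more than k sigma from the model.
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Update the mean/variance only where the pixel matched the background.
        bg = ~foreground
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] = (1 - self.alpha) * self.var[bg] + self.alpha * diff[bg] ** 2
        return foreground.astype(np.uint8) * 255
```

Fed a sequence of frames, the model adapts to slow background change while flagging pixels that jump outside the learned Gaussian as foreground.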
However, their ability to successfully handle concomitant events evolving at various speeds is limited, since they update their pixel models in a first-in, first-out manner. In order to address this issue, a random background modeling scheme that improves on sample-based algorithms can be found in [16].
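A sample-based model with random (rather than first-in, first-out) updates, in the spirit of ViBe [16], might be sketched as follows. All parameter values, names, and the exact update policy here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class SampleBackground:
    """ViBe-style sample-based background model: each pixel keeps n_samples
    observed values; a new value is background if it lies within `radius`
    of at least `min_matches` of them. Matched pixels overwrite one
    randomly chosen sample (random, not FIFO, replacement)."""

    def __init__(self, first_frame, n_samples=20, radius=20, min_matches=2,
                 subsample=16):
        # Initialise every sample with the first frame.
        self.samples = np.repeat(first_frame[None].astype(np.int32),
                                 n_samples, axis=0)
        self.radius = radius
        self.min_matches = min_matches
        self.subsample = subsample  # update roughly 1-in-subsample matched pixels

    def apply(self, frame):
        frame = frame.astype(np.int32)
        close = np.abs(self.samples - frame[None]) < self.radius
        background = close.sum(axis=0) >= self.min_matches
        # Random-in-time update: each background pixel has a 1/subsample chance
        # of overwriting one randomly chosen sample with the new value.
        update = background & (rng.integers(0, self.subsample, frame.shape) == 0)
        idx = rng.integers(0, self.samples.shape[0], frame.shape)
        ys, xs = np.nonzero(update)
        self.samples[idx[ys, xs], ys, xs] = frame[ys, xs]
        return (~background).astype(np.uint8) * 255
```

Because replacement is random rather than FIFO, old background values can survive arbitrarily long, which is what lets such models absorb events evolving at different speeds.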
Moving target tracking is an important processing step in the field of computer vision and has been widely applied in practice, for example, in video surveillance, intelligent transportation [18], and multiagent systems tracking and control [19]. The purpose of target tracking is to estimate the position and the shape of each foreground region in subsequent image frames. A track terminates when a target can no longer be detected, because it leaves the field of view, stops and becomes static, or can no longer be distinguished from the background. The challenges in designing a robust target tracking algorithm arise from occlusion, varying viewpoints, background clutter, and illumination changes. During tracking, a target is accurately tracked by correctly associating the target detected in subsequent image frames with the same identified track. There are various approaches to this task. Classic approaches include the multiple hypothesis tracker [20] and the joint probabilistic data association filter [21]. These methods and their variations commonly rely on the one-to-one assumption, namely, that a target generates at most one measurement in each frame and a measurement originates from at most one target. However, the one-to-one assumption rarely holds in practice, owing to splitting and merging processes as well as the presence of multiple targets in practical application scenes. In recent years, several approaches have been proposed for multiple-target tracking [22–25], and some applications related to multitarget tracking have been realized using distributed cameras [1–3, 7, 26–28].
In this paper, we focus on a real-time surveillance system with a multicamera network, which includes static and PTZ cameras, and on the control system of the active cameras. Target detection and tracking is done by the fixed cameras using a moving target detection and tracking algorithm. Target coordinates are transformed into appropriate pan and tilt values using a geometrical transformation, and the camera is then moved accordingly. The contribution of this paper is a real-time control strategy for active cameras based on the target information obtained by the detection and tracking algorithms.
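The geometrical transformation from target coordinates to pan and tilt values can be illustrated with a minimal sketch, assuming the camera's world position is known from calibration, pan is measured in the horizontal plane from the +X axis, and tilt is measured up from the horizontal. The function name and angle conventions are hypothetical, not the paper's.

```python
import math

def pan_tilt_to_target(cam_pos, target_pos):
    """Compute the pan and tilt angles (degrees) that point a PTZ camera
    at a 3-D world point (x, y, z)."""
    dx = target_pos[0] - cam_pos[0]
    dy = target_pos[1] - cam_pos[1]
    dz = target_pos[2] - cam_pos[2]
    pan = math.degrees(math.atan2(dy, dx))                  # horizontal angle
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # elevation angle
    return pan, tilt
```

For example, a target one unit ahead and one unit to the side at the same height yields a 45° pan and 0° tilt.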
The paper is organized as follows. The system framework is presented in Section 2. The low-cost PTZ camera control strategy based on target information is presented in Section 3. Test results of target detection and tracking with a multicamera network are detailed in Section 4. Finally, we draw conclusions and discuss future work in Section 5.
2. System Framework and Problem Statement
The work presented in this paper originates from a research project on video surveillance applications at the Digital Navigation Center (DNC) of Beihang University. The primary goal of the project is the development of an IVSS platform. Intelligent video surveillance in a large or complex environment requires distributed multiple cameras. Since the focal length of a static camera is fixed, static cameras cannot realize some advanced surveillance tasks, such as capturing high-quality videos of moving targets of interest, actively tracking one or more moving targets of interest, and capturing close-up images. For this reason, substantial research has been devoted to combining a PTZ camera with multiple PTZ or fixed cameras in a master-slave manner to complete practical tasks [2–7, 25–28]. In this paper, we focus on the problems confronting a real-time surveillance system with a multicamera network in practical applications, in which the surveillance system includes low-cost static and PTZ cameras as well as the associated algorithms. Target detection and tracking is done by the fixed cameras using a moving target detection and tracking algorithm, and the target of interest is actively tracked by a PTZ camera using a simple feedback control strategy. The whole structure is depicted in Figure 1.
3. Multicamera Target Tracking and PTZ Camera Control
3.1. Multicamera and Multitarget Tracking
In this paper, we focus on the design and application of a practical IVSS with a multicamera network, which consists of low-cost static and PTZ cameras as well as the associated algorithms. The low-cost static cameras are placed at the perimeter and in indoor and outdoor areas, and are used to realize target detection and tracking by means of a moving target detection and tracking algorithm.
An experiment is carried out using two sets of video data. The Gaussian mixture model [14], the random background model [16], and an improved algorithm for tracking moving targets under occlusions [29] are used for multitarget detection and tracking. Video data 1 is evaluation data from the PETS database, with a resolution of 768 × 576 pixels and a frame rate of 25 frames/s; video data 2 is practical surveillance system data from the DNC of Beihang University, with a resolution of 352 × 288 pixels and a frame rate of 25 frames/s.
The experimental results are shown in Figures 2, 3, 4, and 5. As can be seen from Figures 2 and 3, the tree which is swinging in the wind is classified as foreground motion by the Gaussian mixture model but is detected as the background by the random background model. As can be seen from Figures 4 and 5, the target tracking algorithm is effective and has good performance under occlusions.
3.2. Low-Cost PTZ Camera Control Strategy
For low-cost PTZ cameras, a feedback signal is unavailable, and such cameras can execute only one instruction within a certain time interval. In addition, the relationship between time and the change in pan, tilt, and zoom is indeterminate. In order to solve this problem, we propose a PTZ control algorithm based on target information feedback. The principle of acquiring the feedback signal is illustrated in Figure 6.
The feedback signal of the PTZ control algorithm based on target information feedback is obtained by computing the distance (the horizontal distance Δx and the vertical distance Δy) and the orientation between the center of the image and the center of the region of the target of interest. Here, the area of the target of interest is computed as S = w × h, where w and h denote the width and height of the target region. The PTZ camera receives a zoom instruction when the target is smaller than a threshold. The position of the target in the next frame is estimated by Kalman filtering, and the PTZ control instruction for the first frame can then be calculated.
3.2.1. Determination of Directions
When calculating the offsets Δx and Δy of the centroid of the target (COT) from the center of the image (COI), in order to respond to the direction with the larger offset, we choose the larger of |Δx| and |Δy| to determine the rotational direction of the PTZ camera.
3.2.2. Determination of Velocity
Based on the rotational speed levels of the PTZ camera, we adopt a linear approximation to map the relationship between the speed and the central offsets Δx and Δy. In real applications, all 16 rotational speed levels are calibrated offline.
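The direction and speed selection described in Sections 3.2.1 and 3.2.2 could be sketched as follows, assuming a 16-level speed range, a linear offset-to-speed map, and the 10-pixel tolerance used later in the paper. The function signature and the mapping of signs to up/down and left/right are illustrative assumptions.

```python
def ptz_command(dx, dy, max_offset, tolerance=10, levels=16):
    """Choose the rotation direction from the larger of the horizontal and
    vertical centre offsets (COT minus COI), and map the offset magnitude
    linearly onto the camera's discrete rotational-speed levels."""
    if max(abs(dx), abs(dy)) <= tolerance:
        return None  # target already close enough to the image centre
    if abs(dx) >= abs(dy):
        direction = "right" if dx > 0 else "left"
        offset = abs(dx)
    else:
        # Image y grows downward, so a positive dy means the target is below
        # the centre (sign convention assumed here).
        direction = "down" if dy > 0 else "up"
        offset = abs(dy)
    # Linear map: offset magnitude -> speed level in [1, levels].
    speed = min(levels, max(1, round(offset / max_offset * levels)))
    return direction, speed
```

For a 200-pixel maximum offset, an offset of 100 pixels lands on level 8 of 16, halfway up the calibrated speed range.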
3.2.3. Realization of the Moving Prediction Based PTZ Camera Control
The distance between the COT and the COI is chosen as the feedback, and the corresponding up-down, left-right, and zoom in-out control instructions are sent according to the calibrated rotational speeds.
In order to bring the target to the COI in the first frame, after the position of the target of interest is obtained, the average speed of the target can be estimated from its motion history as

v̄_x(t) = (1 − β)·v̄_x(t − 1) + β·[x(t) − x(t − 1)],
v̄_y(t) = (1 − β)·v̄_y(t − 1) + β·[y(t) − y(t − 1)],

where v̄_x(t) and v̄_y(t) denote the average speeds in the x and y directions at time t; β denotes the update rate of the speed; and x(t) and y(t) denote the positions in the x and y directions at time t.
The position in the next frame can be estimated by Kalman filtering in order to obtain the relative offset between the target and the camera. The state vector and observation vector of the Kalman filter can be represented as

X = [x, y, w, h, v_x, v_y]^T,  Z = [x, y, w, h]^T,

where x and y denote the horizontal and vertical coordinates of the centroid of the moving target; w and h denote the width and height of the bounding rectangle of the moving target; and v_x and v_y denote the speeds of the target in the x and y directions.
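A constant-velocity Kalman filter with this state and observation structure can be sketched as follows; the noise parameters q and r and the initial covariance are illustrative assumptions, not calibrated values from the paper.

```python
import numpy as np

class TargetKalman:
    """Constant-velocity Kalman filter with state [x, y, w, h, vx, vy] and
    observation [x, y, w, h], used to predict the target position one frame
    ahead so the PTZ command can lead the motion."""

    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.F = np.eye(6)
        self.F[0, 4] = dt            # x += vx * dt
        self.F[1, 5] = dt            # y += vy * dt
        self.H = np.zeros((4, 6))
        self.H[:4, :4] = np.eye(4)   # observe x, y, w, h directly
        self.Q = q * np.eye(6)       # process noise
        self.R = r * np.eye(4)       # measurement noise
        self.x = np.zeros(6)
        self.P = np.eye(6) * 100.0   # large initial uncertainty

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]            # predicted [x, y, w, h]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```

After a few frames of measurements from the detector, `predict()` gives the expected position and size of the bounding rectangle in the next frame.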
From the calibrated linear speed mapping, the nearest integer value is taken as the rotational speed level of the PTZ camera in the first frame.
Regarding the zoom control of a PTZ camera, in order to alleviate the difficulties of detection and tracking in the process of rotation control due to the changing size of targets, we first realize the P/T rotation control of the PTZ camera and then realize zoom control only if the distance between the COT and COI is less than a predefined threshold.
In the process of zoom control, the size of targets may change drastically if the camera zooms quickly, which poses great challenges to the target matching and tracking algorithm. In order to solve this problem, we adopt a gradual control strategy: the control signal is sent one minimal unit at a time, and the process is repeated until the required zoom is reached. The feedback signal is computed as r = S_t/S_v, where S_t and S_v denote the target area and the area of the field of view, respectively. If r is smaller than the threshold, a zoom-in instruction is sent; if r equals the threshold, the zoom-in operation is terminated; if r is larger than the threshold, a zoom-out instruction is sent.
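The gradual zoom strategy can be sketched as a single decision step; the threshold value and the tolerance band around it are illustrative assumptions.

```python
def zoom_step(target_area, fov_area, threshold=0.05, band=0.01):
    """One step of the gradual zoom strategy: compare the ratio r = S_t/S_v
    of target area to field-of-view area against a threshold and return the
    minimal zoom instruction ('in', 'out', or None when within tolerance)."""
    ratio = target_area / fov_area
    if ratio < threshold - band:
        return "in"    # target too small: zoom in by one minimal unit
    if ratio > threshold + band:
        return "out"   # target too large: zoom out by one minimal unit
    return None        # within tolerance: stop zooming
```

Calling this once per control cycle, and sending only one minimal zoom unit each time, reproduces the repeat-until-satisfied loop described above while keeping the apparent target size from changing drastically between frames.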
During continuous frame tracking, the PTZ camera adopts a slightly adjusted tracking plan: the target offsets Δx and Δy are recalculated, from which the corresponding rotation direction and speed are obtained. The tolerance of the COI to the COT is set to 10 pixels, and the direction of rotation is determined by the signs of Δx and Δy, as shown in Table 1.
Once the system sends a control instruction, the PTZ camera responds within a certain interval. A complete PTZ control package requires at most three instructions, and the response time is about 40 ms. Hence, the PTZ tracking system can run in real time.
The control algorithm is tested in this paper. The parameters of the PTZ camera are listed in Table 2. When the P/T rotational control is finished, the results of zoom = 7 to zoom = 28 are shown in Figure 7, and the active tracking results of moving targets are illustrated in Figure 8.
From the experimental results, we find that the zooming is smooth and the visual effect accords with the characteristics of human vision. The PTZ control ensures that the camera rotates with the moving target and keeps the target at the center of the field of view. In the control process of the PTZ camera, the performance of the detection and tracking algorithm strongly affects the result: if the algorithm performs unsatisfactorily and the target is lost, the PTZ camera receives no feedback from the sampled video, which hinders its control.
4. Experimental Test
The system presented in this paper is tested with the PTZ camera parameters shown in Table 2. All cameras, both static and PTZ, are calibrated, and their coordinates are unified into the world coordinate system. Target detection and tracking is done by the static cameras using a moving target detection and tracking algorithm; the target of interest is actively tracked by the PTZ cameras using a simple feedback control strategy.
In the surveillance area, we designate several important regions and entrance/exit regions, and design a joint tracking system consisting of the PTZ cameras and the static cameras. The entrance and exit regions are the regions through which targets arrive and depart. In order to begin tracking targets as early as possible, these regions are set as the initial regions for the PTZ camera, as illustrated in Figure 9.
The area of surveillance is between office building A and the wall. The regions “a,” “b,” “c,” “d,” and “e” are the regions covered by the static cameras (camera 1, camera 2,…, camera 5, respectively), where region “a” is the start of the road that connects the gate with the other roads and is also the region targets must cross when entering or departing. Therefore, region “a” is set as the entrance region and as preset 1, the initial preset of the PTZ camera. Region “c” is more important than the other regions, and thus it is set as the important preset, that is, preset 2. The important preset possesses higher monitoring authority than the other presets.
When a target enters the surveillance area and appears in the entrance region “a,” the static camera covering region “a” will detect and track the target. Meanwhile, a “Call initial preset 1” instruction will be sent to the control system of the PTZ camera. The PTZ camera will turn to initial preset 1 and the target will be actively tracked by the active control algorithm; after that, the channel for the “Call initial preset 1” instruction is disabled to avoid ambiguity about which target is being tracked. When a target enters region “c,” the static camera covering region “c” takes charge and a “Call preset 2” instruction is sent to the control system of the PTZ camera. The PTZ camera then turns to preset 2 and the target is actively tracked.
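The preset handoff logic described above can be sketched as a simple decision function. The instruction strings follow the paper's wording, while the function structure, argument names, and the behavior for the remaining regions are illustrative assumptions.

```python
def handoff_instruction(region, tracking_active):
    """Decide which preset-call instruction the static cameras send to the
    PTZ controller when a target is reported in a monitored region
    (region names follow the paper's layout in Figure 9)."""
    if region == "a" and not tracking_active:
        return "Call initial preset 1"  # entrance region starts active tracking
    if region == "c":
        return "Call preset 2"          # important region: higher authority
    return None                         # other regions: keep current tracking
```

Once active tracking has started, the entrance-region channel is suppressed (the `tracking_active` flag), while region “c” can still preempt it because the important preset has higher monitoring authority.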
The relay tracking results of a single walking man in cameras 1 and 2 are illustrated in Figure 10, and those in cameras 2 and 3 are shown in Figure 11. From the experimental results, we can find that the system is capable of continuously tracking targets in different camera views.
When a target enters the entrance region of the surveillance area, the static camera will detect the target and the PTZ camera will be adjusted from patrol state to initial preset 1. When the target turns up in the view of camera 3, namely, the important region, it will be detected, and corresponding instructions will be sent. The PTZ camera will be shifted to preset 2. The feedback instruction will be formed by the target information and the PTZ camera will be controlled to track the targets. The test result is shown in Figure 12.
5. Conclusion and Future Work
In this paper, the comprehensive design and implementation of an IVSS platform based on a multicamera network were presented. The system is composed of low-cost static and PTZ cameras, target detection and tracking algorithms, and a low-cost PTZ camera feedback control algorithm based on target information. Target detection and tracking is done by the static cameras using a moving target detection and tracking algorithm; the PTZ camera is commanded to actively track the target based on the tracking results of the static cameras, and the target information is transformed into appropriate pan and tilt values using a geometrical transformation, such that the camera is moved accordingly. Test results for target detection and tracking, the active target tracking algorithm, and the multicamera target tracking system were reported. Although multiple-target active tracking based on a multicamera network remains challenging when there are more targets to be monitored in the scene than PTZ cameras, we believe that the developed low-cost PTZ control algorithm and scheduling strategy can be widely applied to IVSS and extended to other visual analysis systems.
The multicamera system, which realizes multitarget tracking and active target tracking, was verified on a practical IVSS. In addition, the low-cost PTZ camera control algorithm and scheduling strategy were preliminarily realized. However, an algorithm for controlling and scheduling multiple PTZ cameras remains undeveloped. Further research will be required to develop these algorithms and to test them in the practical IVSS.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This project is supported by the Key Program of the National Natural Science Foundation of China (Grant no. 61039003), the National Natural Science Foundation of China (Grant no. 41274038), the Aeronautical Science Foundation of China (Grant no. 2013ZC51027), the Aerospace Innovation Foundation of China (CASC201102), and the Fundamental Research Funds for the Central Universities.
References
- F. Z. Qureshi and D. Terzopoulos, “Surveillance in virtual reality: system design and multi-camera control,” in Proceedings of the 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, Minneapolis, Minn, USA, June 2007.
- P. D. Z. Varcheie and G. A. Bilodeau, “People tracking using a network-based PTZ camera,” Machine Vision and Applications, vol. 22, no. 4, pp. 671–690, 2011.
- C. H. Chen, Y. Yao, D. Page, B. Abidi, A. Koschan, and M. Abidi, “Heterogeneous fusion of omnidirectional and PTZ cameras for multiple object tracking,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 8, pp. 1052–1063, 2008.
- X. Clady, F. Collange, F. Jurie, and P. Martinet, “Object tracking with a Pan-tilt-zoom camera: application to car driving assistance,” in Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA '01), pp. 1653–1658, Seoul, Korea, May 2001.
- N. Bellotto, E. Sommerlade, B. Benfold et al., “A distributed camera system for multi-resolution surveillance,” in Proceedings of the 2009 3rd ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC '09), Como, Italy, September 2009.
- C. Ding, B. Song, A. Morye, J. A. Farrell, and A. K. Roy-Chowdhury, “Collaborative sensing in a distributed PTZ camera network,” IEEE Transactions on Image Processing, vol. 21, no. 7, pp. 3282–3295, 2012.
- E. B. Ermis, P. Clarot, P. M. Jodoin, and V. Saligrama, “Activity based matching in distributed camera networks,” IEEE Transactions on Image Processing, vol. 19, no. 10, pp. 2595–2613, 2010.
- Y. I. Abdel-Aziz and H. M. Karara, “Direct linear transformation into object space coordinates in close-range photogrammetry,” in Proceedings of the Symposium on Close-Range Photogrammetry, pp. 1–18, Urbana, Ill, USA, 1971.
- R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Journal of Robotics and Automation, vol. 3, no. 4, pp. 323–344, 1987.
- J. Long, X. Zhang, and L. Zhao, “A fast calibration algorithm based on vanishing point for scene camera,” Applied Mechanics and Materials, vol. 58–60, pp. 1148–1153, 2011.
- Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
- D. M. Tsai and S. C. Lai, “Independent component analysis-based background subtraction for indoor surveillance,” IEEE Transactions on Image Processing, vol. 18, no. 1, pp. 158–167, 2009.
- J. Migdal and W. E. L. Grimson, “Background subtraction using Markov thresholds,” in Proceedings of the IEEE Workshop on Motion and Video Computing (MOTION '05), pp. 58–65, Breckenridge, Colo, USA, January 2005.
- C. Stauffer and W. E. L. Grimson, “Learning patterns of activity using real-time tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747–757, 2000.
- D. S. Lee, “Effective Gaussian mixture learning for video background subtraction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 827–832, 2005.
- O. Barnich and M. Van Droogenbroeck, “ViBe: a powerful random technique to estimate the background in video sequences,” in Proceedings of the 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 945–948, Taipei, Taiwan, April 2009.
- A. Elgammal, D. Harwood, and L. Davis, “Non-parametric model for background subtraction,” in Proceedings of the 6th European Conference on Computer Vision-Part II, pp. 751–767, London, UK, 2000.
- S. Sivaraman and M. M. Trivedi, “Looking at vehicles on the road: a survey of vision-based vehicle detection, tracking, and behavior analysis,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 4, pp. 1773–1795, 2013.
- G. H. Wen, Z. S. Duan, G. R. Chen, and W. W. Yu, “Consensus tracking of multi-agent systems with Lipschitz-type node dynamics and switching topologies,” IEEE Transactions on Circuits and Systems, vol. 60, no. 9, pp. 1–13, 2013.
- D. B. Reid, “An algorithm for tracking multiple targets,” IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 843–854, 1979.
- C. Rasmussen and G. D. Hager, “Probabilistic data association methods for tracking complex visual objects,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 560–576, 2001.
- S. Oh, S. Russell, and S. Sastry, “Markov chain Monte Carlo data association for multi-target tracking,” IEEE Transactions on Automatic Control, vol. 54, no. 3, pp. 481–497, 2009.
- A. Andriyenko, K. Schindler, and S. Roth, “Discrete-continuous optimization for multi-target tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1926–1933, Providence, RI, USA, 2012.
- H. Zhang and L. Zhao, “Integral channel features for particle filter based object tracking,” in Proceedings of the 5th International Conference on Intelligent Human-Machine Systems and Cybernetics, pp. 190–193, Hangzhou, China, 2013.
- B. Song and A. K. Roy-Chowdhury, “Stochastic adaptive tracking in a camera network,” in Proceedings of the 2007 IEEE 11th International Conference on Computer Vision (ICCV '07), pp. 1–8, October 2007.
- M. Taj and A. Cavallaro, “Distributed and decentralized multicamera tracking,” IEEE Signal Processing Magazine, vol. 28, no. 3, pp. 46–58, 2011.
- C. Micheloni, B. Rinner, and G. L. Foresti, “Video analysis in pan-tilt-zoom camera networks,” IEEE Signal Processing Magazine, vol. 27, no. 5, pp. 78–90, 2010.
- N. Krahnstoever, T. Yu, S. Lim, K. Patwardhan, and P. Tu, “Collaborative real-time control of active cameras in large scale surveillance systems,” in Proceedings of the Workshop on Multi-Camera and Multi-modal Sensor Fusion Algorithms and Applications, pp. 1–12, Marseille, France, 2008.
- L. Zhao and J. B. Xiao, “Improved algorithm of tracking moving objects under occlusions,” Journal of Beijing University of Aeronautics and Astronautics, vol. 39, no. 4, pp. 517–520, 2013.