Research Article  Open Access
Yifan Liu, Zhenjiang Cai, Xuesong Suo, "A Multiframes Integration Object Detection Algorithm Based on Time-Domain and Space-Domain", Mathematical Problems in Engineering, vol. 2018, Article ID 4127305, 15 pages, 2018. https://doi.org/10.1155/2018/4127305
A Multiframes Integration Object Detection Algorithm Based on Time-Domain and Space-Domain
Abstract
In order to overcome the disadvantages of commonly used object detection algorithms, this paper proposes a multiframes integration object detection algorithm based on the time-domain and the space-domain (MFITS). First, consecutive multiple frames are observed in the time-domain. Then the horizontal and vertical four-direction extension neighborhood of each target pixel is selected in the space-domain. Transverse and longitudinal sections are formed by fusing the time-domain and the space-domain, and the mean and standard deviation of the pixels in each section are calculated. An improved median filter then generates a new pixel at each target pixel position, eventually producing a new image. Compared with the RPCA method, this approach is less affected by lights, shadows, and noise; compared with the interframe difference method, it retains the object information to the maximum; and compared with adaptive background modeling algorithms, it overcomes the difficulty of dealing with high-frequency noise. The experimental results show that the proposed algorithm retains the moving-object information well and removes the background to the maximum.
1. Introduction
Research and application of moving-object detection and tracking are on the rise in various fields, and object detection is one of the essential technical issues. Moving-object detection extracts the moving objects (the foreground) from the scene (the background) in every frame of a video. The more object information and the less background information the result retains, the better the detection.
Common moving-object detection algorithms include the background difference method, the interframe difference (IFD) method, the RPCA method, and adaptive background modeling methods. The background difference method is simple, maintains the integrity of moving objects to the maximum, and locates them accurately; it is suitable for moving-object detection with fixed cameras and an unchanging background. Its shortcomings are obvious: in most practical applications, the background information is influenced by illumination changes, target shadows, and outside impurities and noise [1–3]. The interframe difference method is simple to operate, is suitable for unfixed cameras, and is robust and adaptable. Its disadvantage is that it cannot extract complete object information: when an object moves too slowly or too fast, it may miss the object or detect one object as two [4, 5].
The robust principal component analysis (RPCA) method transforms video frames into vectors and decomposes the resulting matrix so that the background is recovered as a low-rank matrix and the foreground as a sparse matrix. Rodriguez and Wohlberg proposed a simple alternating minimization algorithm for solving a minor variation on Principal Component Pursuit, the Fast Principal Component Pursuit (FPCP) [6, 7]. This method recovers the sparse matrix even after the first outer loop, but the detection results show difficulty in dealing with high-frequency noise. He et al. proposed the Grassmannian Robust Adaptive Subspace Tracking Algorithm (GRASTA) [8]; this method improves the processing speed and is effective at removing the background, but its extraction of moving-object information is poor.
In recent years, adaptive background modeling methods have received widespread attention and research. They mainly include the following algorithms. Gaussian mixture model (GMM) algorithm [9–12]: an initial background frame is established by single or multiple Gaussian filters and adaptively updated as the background changes; this algorithm can extract the moving-object information well and remove most of the background noise. Codebook algorithm [13, 14]: a compressed-sample background extraction algorithm based on a background codebook, which is updated correspondingly. Visual background extraction (ViBe) algorithm [15–17]: the background model is updated adaptively and moving objects are detected based on consecutive pixels having similar characteristics in the space-domain; it possesses good real-time performance and robustness. Essentially, these three algorithms are all background difference methods, so they share some common faults: they easily produce "ghosting" in the background model, have difficulty dealing with high-frequency noise (flickering leaves, fluctuating water surfaces), and remove background noise with hysteresis.
In this paper, a multiframes integration object detection algorithm based on the time-domain and the space-domain (MFITS) is presented, motivated by the advantages and disadvantages of these common object detection algorithms. First, consecutive multiple frames are observed in the time-domain. Then the horizontal and vertical four-direction extension neighborhood of each target pixel is selected in the space-domain. Transverse and longitudinal sections are formed by fusing the time-domain and the space-domain, and the mean and standard deviation of the pixels in each section are calculated. An improved median filter then generates a new pixel at each target pixel position, eventually producing a new image. The proposed algorithm is fundamentally different from the common algorithms: it does not process each image frame on its own, but operates on transverse and longitudinal sections of multiple frames.
The MFITS algorithm not only overcomes the RPCA method's sensitivity to light, shadows, and noise, but also retains the object information to the maximum compared with the interframe difference method, and overcomes the difficulty of dealing with high-frequency noise compared with adaptive background modeling algorithms.
2. Principles of Common Object Detection Algorithms
Common object detection algorithms often use the difference between frames to detect moving objects, in other words, a time-domain operation. For example, the background difference method and the interframe difference method both compute the greyscale change of each pixel between two frames. The optical flow method computes the trend of pixel-value changes over multiple frames in the time-domain. Adaptive background modeling algorithms generate an adaptive background model and then difference it with other frames. The principle is generally described as follows: f_k(x, y) is the pixel value at position (x, y) in frame k, and f_{k+1}(x, y) is the pixel value at position (x, y) in frame k + 1.
Then the difference between the two frames,

D_k(x, y) = |f_{k+1}(x, y) − f_k(x, y)|,

is compared with a threshold value T. Pixels whose grey-value difference is less than the threshold are treated as background and their grey value is set directly to 0; pixels whose difference is greater than the threshold are treated as foreground and set directly to 255. R_k(x, y) denotes the pixel value at position (x, y) of the difference image.
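As a minimal sketch of this frame-difference thresholding (frames represented as plain lists of lists of grey values; the function name and data layout are ours, not from the paper):

```python
def frame_difference(frame_k, frame_k1, threshold):
    """Binarize the absolute grey-level difference between two frames:
    changes above the threshold become foreground (255), the rest
    background (0)."""
    rows, cols = len(frame_k), len(frame_k[0])
    result = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            diff = abs(frame_k1[y][x] - frame_k[y][x])
            result[y][x] = 255 if diff > threshold else 0
    return result
```

This is the common building block that the background difference and interframe difference methods share; only the choice of reference frame differs.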
3. MFITS Algorithm Principle
First, consecutive multiple frames are observed in the time-domain. Then the horizontal and vertical four-direction extension neighborhood of each target pixel is selected in the space-domain. Transverse and longitudinal sections are formed by fusing the time-domain and the space-domain, and the mean and standard deviation of the pixels in each section are calculated. An improved median filter then generates a new pixel at each target pixel position, eventually producing a new image.
In this paper, five consecutive frames are used in the time-domain. Five frames reflect enough displacement of the moving target, unlike the traditional frame difference method, in which the motion between two adjacent frames may be too small for the displacement to be detected; at the same time, the moving target does not move too far within 5 frames, so its displacement stays in a reasonable range. The pixel sequence is expressed as follows:

S(x, y) = {f_k(x, y), f_{k+1}(x, y), f_{k+2}(x, y), f_{k+3}(x, y), f_{k+4}(x, y)},

where S(x, y) is the pixel sequence at position (x, y) over the five consecutive frames. Figure 1 shows the schematic of five consecutive frames.
In practical applications, the target can be affected by noise, and even in a static background disturbances occur in the time-domain. The target can be considered a stationary signal when its pixel value does not change much (background); on the other hand, it can be considered an impact signal when its pixel value changes a lot (foreground). So, differing from the common difference-based moving-target detection in (1), we adopt the mean and standard deviation of the pixel values in the time-domain to distinguish the signal characteristics:

μ(x, y) = (1/5) Σ_{i=0}^{4} f_{k+i}(x, y),

σ(x, y) = sqrt( (1/5) Σ_{i=0}^{4} (f_{k+i}(x, y) − μ(x, y))² ),

where μ(x, y) is the mean and σ(x, y) is the standard deviation of the target over the five consecutive frames.
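These per-pixel time-domain statistics can be sketched as follows (frames again as lists of lists of grey values; the function name is ours, and `pstdev` is the population standard deviation, i.e., dividing by the number of frames):

```python
from statistics import mean, pstdev

def temporal_stats(frames, x, y):
    """Mean and standard deviation of the grey values at position
    (x, y) over a sequence of consecutive frames (five in the paper)."""
    values = [frame[y][x] for frame in frames]
    return mean(values), pstdev(values)
```

A large standard deviation at a pixel signals foreground-like change; a small one signals a stationary background pixel.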
Analysis combining (4) and (5) shows that the standard deviation of a foreground target is greater than that of the background, so moving targets can be segmented by setting a threshold T, finally generating a new binary image. But a moving target is mistaken for background when its neighborhood pixels show no significant difference, leading to a small standard deviation; conversely, background pixels affected by high-frequency noise have a large standard deviation and are mistaken for moving targets. To address this problem, the pixel information of the target's extension neighborhood in each frame (space-domain) is combined with the five consecutive frames (time-domain), which improves the detection accuracy. To obtain enough pixel information while minimizing computation, two pixels are selected in each of the four horizontal and vertical directions of the target's extension neighborhood:

H(x, y) = {f(x − 2, y), f(x − 1, y), f(x, y), f(x + 1, y), f(x + 2, y)},

V(x, y) = {f(x, y − 2), f(x, y − 1), f(x, y), f(x, y + 1), f(x, y + 2)},

where H(x, y) is the horizontal-direction extension neighborhood of the target and V(x, y) is the vertical-direction extension neighborhood.
Transverse and longitudinal sections are formed by fusing the time-domain and the space-domain. Analysis combining (3) and (6) shows that the transverse and longitudinal sections each contain 25 pixels: the transverse section of a target consists of its horizontal neighborhoods taken over the five consecutive frames, and the longitudinal section of its vertical neighborhoods over the same frames. The details are shown in Figure 2.
(a)
(b)
Figure 2(a) is a sketch of the positions of the transverse and longitudinal sections in the consecutive frames. Figure 2(b) is a magnified view of the transverse and longitudinal sections. Each point in the figure represents a pixel. The longitudinal section consists of 25 pixels, and the transverse section is the same. The dark blue points represent the position of the target in the 5 frames.
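The construction of the two sections can be sketched as follows (a plain-Python illustration under the paper's choice of five frames and a two-pixel extension in each direction; boundary handling is omitted, so the target must lie at least two pixels from the image edge):

```python
def build_sections(frames, x, y):
    """Transverse and longitudinal sections for the pixel at (x, y):
    the horizontal (resp. vertical) 5-pixel neighborhood of the target,
    taken in each of five consecutive frames, giving 5 rows of 5 grey
    values (25 pixels) per section."""
    transverse = [[f[y][x + dx] for dx in range(-2, 3)] for f in frames]
    longitudinal = [[f[y + dy][x] for dy in range(-2, 3)] for f in frames]
    return transverse, longitudinal
```

Each returned section is a 5 × 5 grid: one row per frame (time-domain), one column per neighborhood pixel (space-domain), matching Figure 2(b).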
Combining (4) and (5), the mean and standard deviation of the target pixels over five consecutive frames (time-domain) are expanded to the transverse and longitudinal sections (time-domain and space-domain). As shown in Figure 2, each transverse section contains five sets of data, each with 5 samples; each sample is the grey level of a pixel. The longitudinal section is the same, giving 10 groups in total. The standard deviation of each group is computed first, and then the mean of the 10 standard deviations, denoted σ̄, is obtained. If σ̄ is less than the threshold T (2 in the experiments), the target is regarded as background, and the mean of the target over the five consecutive frames in (4) is used as the new grey value g(x, y) at the corresponding location of the new image.
If the mean of the standard deviations is greater than the threshold T, the target is regarded as a moving object. The medians of the data in the transverse and longitudinal sections are calculated separately by median filtering, and these medians determine the new grey value at the corresponding location of the new image. In the traditional median filter, the median value is usually calculated over a square template in which every pixel is given the same weight.
In this paper, the pixels in the transverse and longitudinal sections are median-filtered separately, and the mean of the two results is used as the new grey value. In the traditional median filter, all weights are 1 (shown in (10)); because the pixels in the transverse and longitudinal sections differ from a traditional image plane, equal weights are not suitable for this algorithm. So we propose an improved median filtering algorithm [18] to find the most suitable pixel value to place at the corresponding location of the new image, one that best represents the moving object. The square template in (5) is still adopted, but the median filter and the mean filter are combined with different weights to compute the new grey value: the traditional median filter is given a weight of 0.3 and the mean filter a weight of 0.7. The value calculated by the improved median filter both reflects the changed pixels of the moving object and suppresses the background noise to a great extent. Here the improved median value of the transverse section and that of the longitudinal section are each the weighted combination of the traditional median filter and the mean filter over that section, and the new grey value at the corresponding location of the new image is the mean of these two values.
The new grey values fill the corresponding locations to complete a new image. Then we compute the binary images of the difference between the new image and each of the five consecutive frames, from which the information of the moving objects can be extracted.
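The whole per-pixel update described above can be sketched as follows (sections are 5 rows of 5 grey values as in Figure 2; the function names, the use of `pstdev`, and the flat data layout are our illustrative assumptions):

```python
from statistics import mean, median, pstdev

def new_pixel(transverse, longitudinal, temporal_mean, threshold=2.0):
    """MFITS pixel update: if the mean of the 10 per-row standard
    deviations is below the threshold, the pixel is background and the
    time-domain mean is kept; otherwise the improved median
    (0.3 * median + 0.7 * mean per section, averaged over the two
    sections) becomes the new grey value."""
    rows = transverse + longitudinal          # 10 groups of 5 samples
    sigma_bar = mean(pstdev(row) for row in rows)
    if sigma_bar < threshold:                 # background pixel
        return temporal_mean

    def improved(section):
        values = [v for row in section for v in row]
        return 0.3 * median(values) + 0.7 * mean(values)

    return (improved(transverse) + improved(longitudinal)) / 2
```

Applied at every target position, this produces the new image, which is then differenced with the original frames to extract the moving objects.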
4. Experiment
In this section, several typical object detection algorithms are presented and their features are compared with the MFITS algorithm. The videos were shot with a CCD industrial camera from Microvision Co., Ltd. (model MVVS220); the computer used in the experiments has an Intel i5 processor, 2 GB of memory, and the Windows 7 operating system. The programs were written in MATLAB 2014.
The experiments selected frame number 46 as the key frame in the outdoor video with small background disturbance. For comparison, the experiments also selected frame number 34 as the key frame in the indoor video with large background disturbance, frame number 41 in the shop video, and frame number 17 in the escalator video. The shop and escalator videos were both sourced from the LRSLibrary. The experiments compared several commonly used algorithms on the key frames. The MFITS algorithm processes multiple consecutive frames of video, so, to facilitate comparison with the other algorithms, frames 46–50 of the outdoor video, frames 33–37 of the indoor video, frames 39–43 of the shop video, and frames 16–20 of the escalator video were chosen separately. The binary images of the different algorithms are listed to give a simpler, more intuitive comparison; the binarization threshold is a grey value of 10. In the Gaussian mixture model (GMM) algorithm, the number of Gaussian filters is 3, the initial background is modeled from 10 frames, and the learning rate is 0.7. In the ViBe algorithm, the number of sample adjacent pixels is 20, the matching threshold is 20 with #min = 2, and the update rate is 16. Details are shown in Figures 3–6.
(a) The original frames
(b) The IFD algorithm
(c) The FPCP algorithm
(d) The GRASTA algorithm
(e) The GMM algorithm
(f) The ViBe algorithm
(a) Number 47 frame
(b) Number 48 frame
(c) Number 49 frame
(d) Number 50 frame
(e) New generated image
(f) Difference image with number 46 frame
(g) Binary image
(a) The original frames
(b) The IFD algorithm
(c) The FPCP algorithm
(d) The GRASTA algorithm
(e) The GMM algorithm
(f) The ViBe algorithm
(a) Number 34 frame
(b) Number 35 frame
(c) Number 36 frame
(d) Number 37 frame
(e) New generated image
(f) Difference image with number 34 frame
(g) Binary image
To compare and analyze these algorithms more efficiently and accurately, this paper adopts the accuracy rate and the recall rate as quantitative indicators:

A = N_c / N_d,  R = N_c / N_g,

where A is the accuracy rate, R is the recall rate, N_c is the number of correctly detected moving-object pixels, N_d is the number of actually detected moving-object pixels, and N_g is the number of ground-truth moving-object pixels. The accuracy rate reflects the denoising performance of an algorithm; the recall rate reflects the retention of moving-object information. Details are shown in Table 1.

Comparing the algorithms using Figures 3–10 and Table 1, the interframe difference (IFD) method removes noise effectively (mean accuracy rate 91.63%), but its object-information retention is poor (mean recall rate 61.37%). The FPCP algorithm performed moderately in all aspects. The GRASTA algorithm performed well compared with FPCP in both background denoising (mean accuracy rate 78.19%) and object-information retention (mean recall rate 70.52%); however, it performed worse when the background noise disturbance was large (recall rate of 42.61% on the escalator video). The GMM algorithm performed well in both background denoising and object-information retention when the background noise disturbance was small (outdoor video), but its object-information retention shows shortcomings when the background noise disturbance is large (the other three videos). The ViBe algorithm had the lowest recall rate (57.11%) among these algorithms. The MFITS algorithm showed stable performance under the different conditions and higher robustness: it achieved the best recall rate (82.98%), a large improvement over the other algorithms, while its denoising effect was almost the same as that of the GMM algorithm (mean accuracy rate 95.24%).
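The two indicators defined above can be computed as follows (variable names are ours; the counts would come from comparing a binary detection mask against a ground-truth mask):

```python
def accuracy_rate(correct, detected):
    """Fraction of detected moving-object pixels that are correct
    (reflects denoising performance)."""
    return correct / detected

def recall_rate(correct, ground_truth):
    """Fraction of ground-truth moving-object pixels that were
    detected (reflects retention of object information)."""
    return correct / ground_truth
```

For example, a detector that marks 100 pixels, 90 of them correctly, against 120 ground-truth pixels has an accuracy rate of 0.9 and a recall rate of 0.75.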
(a) The original frames
(b) The IFD algorithm
(c) The FPCP algorithm
(d) The GRASTA algorithm
(e) The GMM algorithm
(f) The ViBe algorithm
(a) Number 39 frame
(b) Number 40 frame
(c) Number 42 frame
(d) Number 43 frame
(e) New generated image
(f) Difference image with number 41 frame
(g) Binary image
(a) The original frames
(b) The IFD algorithm
(c) The FPCP algorithm
(d) The GRASTA algorithm
(e) The GMM algorithm
(f) The ViBe algorithm
(a) Number 16 frame
(b) Number 18 frame
(c) Number 19 frame
(d) Number 20 frame
(e) New generated image
(f) Difference image with number 17 frame
(g) Binary image
According to (9) and (11), threshold selection is very important for generating the new image. The data were compared at three different thresholds (0.5, 2, and 4). To facilitate observation, the new images generated at the three thresholds were binarized. Details are shown in Figure 11.
(a)
(b)
(c)
Figure 11 shows the image processing effect at the different thresholds. When the threshold is 0.5, in Figure 11(a), the generated image contains a large amount of background noise. When the threshold is 4, in Figure 11(c), the background suppression is good, but part of the moving-object information is lost compared with the two previous images (the center area of the moving object). When the threshold is 2, in Figure 11(b), the generated image not only retains the effective information to the utmost extent, but also suppresses the background noise effectively. In conclusion, this paper set the threshold to 2.
Figure 12 shows the transverse and longitudinal sections (center coordinates (350, 246)) in frames 46–50 of the outdoor video.
(a) Transverse section
(b) Longitudinal section
Figure 12(a) is a magnified view of the transverse section (the actual size is 5 × 5 pixels); Figure 12(b) is a magnified view of the longitudinal section. In Figure 12, the boundary lines of the transverse section are vertical and the boundary lines of the longitudinal section are horizontal. Neither section contains any moving object.
To evaluate the improved median filter proposed in this paper, we compared the new images generated from outdoor video frames by the mean filter, the traditional median filter, and the improved median filter. Details are shown in Figure 13.
(a) Mean filtering
(b) Traditional median filtering
(c) Improved median filtering
Figure 13(a) is the new image generated by the mean filter; this filter retains a relatively complete foreground, but much of the background is mistaken for the moving object, which blurs the edges of the moving object. Figure 13(b) is the new image generated by the traditional median filter; much of the foreground is mistaken for background, which loses moving-object information. Figure 13(c) is the new image generated by the improved median filter; of the three filters, it best removes the background while retaining the most moving-object information.
In order to verify the denoising performance of the improved median filtering algorithm, we compared the denoising effect of the different median filters on the Lena image with salt-and-pepper noise and on outdoor video frames. The mean square error (MSE) and the peak signal-to-noise ratio (PSNR) were adopted as the criteria for evaluating the denoising effect and the retention of object information. The mean square error gauges the distortion of the image by computing the mean square error between the original frame and the denoised frame; a smaller value indicates better noise suppression. The peak signal-to-noise ratio is the ratio of the maximum possible signal power to the noise power; a larger value indicates better noise suppression. Details are shown in Figure 14 and Table 2.

(a) Original image
(b) Traditional median filter
(c) Improved median filter
Comparing the denoising effects of the different median filters in Figure 14 and Table 2, the improved median filter is superior to the traditional median filter in both denoising effect and object-information retention.
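The two criteria used above can be sketched as follows (images as lists of lists of grey values; for 8-bit images the peak value is 255, and the function names are ours):

```python
import math

def mse(original, denoised):
    """Mean square error between two equal-sized grey-level images;
    smaller means less distortion."""
    n = len(original) * len(original[0])
    return sum((o - d) ** 2
               for row_o, row_d in zip(original, denoised)
               for o, d in zip(row_o, row_d)) / n

def psnr(original, denoised, peak=255):
    """Peak signal-to-noise ratio in decibels; larger means better
    noise suppression. Identical images give infinite PSNR."""
    err = mse(original, denoised)
    return float("inf") if err == 0 else 10 * math.log10(peak ** 2 / err)
```

These are the standard definitions; the table values in the paper would be obtained by applying them to the original and filtered frames.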
Because the transverse and longitudinal sections of multiple frames must be traversed in the MFITS algorithm, its computational time and complexity are increased. To analyze the complexity of the MFITS algorithm, the computational times of the different algorithms on the different videos were compared separately. Details are shown in Table 3.

Comparing the computational times of the different algorithms in Table 3, the MFITS algorithm improves the accuracy and completeness of moving-foreground detection but spends much more time on computation. The computational time and complexity of the MFITS algorithm will be reduced in a following study.
5. Conclusion
In this paper, a multiframes integration object detection algorithm based on the time-domain and the space-domain (MFITS) was presented, motivated by the advantages and disadvantages of commonly used object detection algorithms. The MFITS algorithm differs from the commonly used algorithms based on differences between frames: instead of processing sequential video frames individually, it forms transverse and longitudinal sections across the frames and generates a new image by traversing the multiple frames. An improved median filter combining the traditional median filter and the mean filter was also added. The experimental results show that the MFITS algorithm retains the object information well and removes the background to the maximum, and that it is robust in dealing with different video conditions.
Conflicts of Interest
The authors Yifan Liu, Zhenjiang Cai, and Xuesong Suo declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This paper is supported by the Hebei Province Key Research and Development Project (17227206D).
References
[1] Y. Benezeth, P. M. Jodoin, B. Emile, H. Laurent, and C. Rosenberger, "Review and evaluation of commonly-implemented background subtraction algorithms," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), pp. 1–4, Tampa, Fla, USA, December 2008.
[2] Z. Yusi, The Illumination Robustness Based on Background Subtraction Motion Target Detection and Tracking Technology Research, Southwest Jiaotong University, Sichuan, China, 2011.
[3] L. Chen and Z. Jibei, "Symmetric difference and background difference method based on dynamic threshold of motion object detection algorithm," Application Research of Computers, vol. 25, no. 2, pp. 488–494, 2008.
[4] E. Stringa and C. S. Regazzoni, "Real-time video-shot detection for scene surveillance applications," IEEE Transactions on Image Processing, vol. 9, no. 1, pp. 69–79, 2000.
[5] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers, "Wallflower: principles and practice of background maintenance," in Proceedings of the 7th International Conference on Computer Vision (ICCV '99), vol. 1, pp. 255–261, Kerkyra, Greece, September 1999.
[6] P. Rodriguez and B. Wohlberg, "Fast principal component pursuit via alternating minimization," in Proceedings of the 20th IEEE International Conference on Image Processing (ICIP 2013), pp. 69–73, Australia, September 2013.
[7] P. Rodriguez and B. Wohlberg, "Incremental principal component pursuit for video background modeling," Journal of Mathematical Imaging and Vision, vol. 55, no. 1, pp. 1–18, 2016.
[8] J. He, L. Balzano, and A. Szlam, "Incremental gradient on the Grassmannian for online foreground and background separation in subsampled video," in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), pp. 1568–1575, USA, June 2012.
[9] Y. Chen, K. Ren, G. Gu, W. Qian, and F. Xu, "Moving object detection based on improved single Gaussian background model," Chinese Journal of Lasers, vol. 41, no. 11, Article ID 1109002, pp. 245–253, 2014.
[10] W. Tong and W. Ling, "Based on the frame difference of mixed Gaussian background model," Computer Engineering and Application, vol. 50, no. 23, pp. 176–180, 2014.
[11] H. Yuanlei and L. Wanjun, "Improvement of the Gaussian mixture model moving target detection algorithm," Computer Application, vol. 34, no. 2, pp. 580–584, 2014.
[12] L. Yanan, Zhouyong, and T. Ruijuan, "Background modeling based on Gaussian mixture model and three frame difference," Ordnance Industry Automation, vol. 34, no. 4, pp. 33–35, 2015.
[13] K. Kim, T. H. Chalidabhongse, D. Harwood, and L. Davis, "Real-time foreground-background segmentation using codebook model," Special Issue on Video Object Processing, vol. 11, no. 3, pp. 172–185, 2005.
[14] T. Sun, S.-C. Hsu, and C.-L. Huang, "Hybrid codebook model for foreground object segmentation and shadow/highlight removal," Journal of Information Science and Engineering, vol. 30, no. 6, pp. 1965–1984, 2014.
[15] O. Barnich and M. Van Droogenbroeck, "ViBe: a universal background subtraction algorithm for video sequences," IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1709–1724, 2011.
[16] Y. Qin, S. Sun, X. Ma, S. Hu, and B. Lei, "A background extraction and shadow removal algorithm based on clustering for ViBe," in Proceedings of the 13th International Conference on Machine Learning and Cybernetics (ICMLC 2014), pp. 52–57, China, July 2014.
[17] A. Sanin, C. Sanderson, and B. Lovell, "Shadow detection: a survey and comparative evaluation of recent methods," Pattern Recognition, vol. 45, no. 4, pp. 1684–1695, 2012.
[18] Z. Gaochang, Z. Lei, and W. Fengbo, "The improved median filtering algorithm in the application of image denoising," Applied Optics, vol. 32, no. 4, pp. 678–682, 2011.
Copyright
Copyright © 2018 Yifan Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.