Research Article | Open Access
Video Noise Reduction Method Using Adaptive Spatial-Temporal Filtering
We propose a novel video noise reduction method based on a spatial Wiener filter combined with a temporal filter. The proposed spatial Wiener filter takes both the amount of noise and the size of the filtering mask into consideration, so the model adapts to each area in accordance with its noise level. In the proposed model, a motion detector controls the noise removal process according to the nature of each area (i.e., static or movable): more noise is removed in areas that are likely still and less in areas that are likely in motion. The proposed model achieves a maximum gain of 7.6 dB while conserving the significant image features (e.g., edges). The experimental results demonstrate that the new approach outperforms the reference methods in terms of both noise removal and edge preservation.
1. Introduction
Noise introduced during image acquisition, broadcasting over analog channels, and encoding or decoding often corrupts video sequences, leading to significant degradation of image quality. This highlights the important role of noise reduction methods for video sequences.
Noise reduction is a useful tool for enhancing perceptual quality and increasing compression effectiveness, and it also benefits pattern recognition processes.
Many video noise reduction algorithms already exist in the literature. These algorithms can be classified into three categories: the first category operates in the spatial domain, the second in the temporal domain [4, 5], and the third in a combined spatial-temporal domain [6, 7].
As an example of a spatial filter, the Wiener filter is considered a classical approach to spatial noise filtering [8]. This filter can achieve a high gain in noise removal. However, it can seriously damage image edges during the noise removal process, especially in noise-free areas.
To improve the traditional Wiener filter, the authors in [9] proposed a new approach based on a multidirectional Wiener filter. The main idea of this filter is to conserve the essential structures of the image by choosing only the homogeneous directions for filtering.
On the other hand, Yan and Yanfeng [10] proposed a noise reduction method based on temporal filtering. The basic idea of this method is to remove the noise across consecutive frames. The proposed filter succeeded in reducing the noise in real video sequences; nevertheless, it suffers from dragging effects on moving objects [11].
Rakhshanfar and Amer [12] proposed a temporal video denoising filter based on temporal data blocks. The authors succeeded in reducing the noise and minimizing the blocking artifacts. However, this method fails to prevent blurred edges.
Frames in a video sequence are temporally correlated. Therefore, in video noise reduction algorithms, the temporal filter should be exploited to remove as much noise as practicable. However, a temporal filter cannot be used alone, since it may cause blurring in the motion areas. On the other hand, using the spatial filter alone often causes spatial blurring. For this reason, the spatial filter must be used in combination with the temporal filter [13, 14].
Zuo et al. [15] introduced a new video denoising method that exploits the strong spatiotemporal correlations between neighboring frames. In this method, the authors first apply motion estimation to the previously denoised frames and the current noisy frame. Then a Kalman filter and a bilateral filter are both applied to the current noisy frame. Finally, to obtain a satisfying result, the denoised frames from the Kalman filtering and the bilateral filtering are weighted together.
Maggioni et al. [16] proposed a framework for denoising videos corrupted by random and fixed-pattern noise. This video denoising approach is based on motion-compensated 3D spatiotemporal volumes. To sparsify the data in the 3D spatiotemporal transform domain, the authors leverage both the spatial and temporal correlations within each volume. An adaptive 3D threshold array is then used to shrink the coefficients of the 3D volume spectrum.
Hong-zhi et al. [17] also addressed the problem of noise reduction in video sequences. Their method, based on a spatial-temporal combination, can discriminate the static regions of video frames from the motion regions. Temporal bilateral Kalman filtering is performed on the static regions, while spatial bilateral adaptive nonlocal means filtering is performed on the motion regions.
Esche et al. [18] proposed a new adaptive loop filter that requires only a small overhead at the slice level. In this filter, the temporal information conveyed in the bit stream is used to reconstruct the individual motion trajectory of every pixel in a frame at both the encoder and the decoder; this temporal information is then exploited to perform pixelwise adaptive motion-compensated temporal filtering.
Cong et al. [19] proposed a surveillance video denoising method based on hierarchical motion estimation. The main idea of this method is to track matching blocks and filter along the motion trajectory. The hierarchical motion estimation proceeds from large blocks to small blocks, so the corresponding motion vector field is refined from coarse to fine.
Wang et al. [20] proposed jointly using depth and texture information in the spatial-temporal domain to develop a spatial-temporal depth filter. Based on the similarity of pixel vectors, the authors select the reference pixels of a to-be-filtered pixel in the spatial-temporal domain, and only the most relevant of these reference pixels are retained. Finally, a median filter is applied over the retained pixels to obtain the result for the to-be-filtered pixel.
Lin et al. [21] addressed the effect of noise on the depth pixels in an image. The authors use an inpainting method to remove the noise from object-removed images; moreover, the holes in the depth image are filled in order to enhance its quality.
Maggioni et al. [22] proposed VBM4D, which exploits the spatiotemporal redundancy characterizing natural video sequences and represents the state of the art in video denoising. This filter implements the paradigm of nonlocal grouping and collaborative filtering: blocks tracked along motion trajectories are used to construct 3D spatial-temporal volumes, and mutually similar volumes are grouped together by stacking them along a fourth dimension.
Yahya et al. [23] proposed a video denoising method based on total variation (TV) and temporal filtering. To minimize the blurring of edges, both the previously denoised frame and the current noisy frame are filtered by the TV algorithm; the TV output is then filtered by the temporal filter for further improvement. Finally, a motion detector and recursive time averaging are applied for additional noise suppression.
Most of the existing spatial-temporal methods succeed in removing noise but, unfortunately, often degrade image quality by staining the noise-free areas.
To avoid this drawback, we propose a motion-based spatial-temporal filtering approach that takes the quality of each area into consideration. In the proposed model, recursive time averaging is applied in areas where no motion has been detected, so the model adapts to each area according to its content. More precisely, our spatial-temporal recursive filter removes more noise in still areas and less in motion areas; in this way, we avoid blurring the edges.
As illustrated in Figures 1 and 2, our proposed model passes through three stages. In the first stage, we use the spatial Wiener filter to filter the previous and current degraded frames; the proposed adaptive Wiener filter adapts to each area in accordance with the amount of noise and takes the size of the mask into consideration. More precisely, a larger mask is applied in areas with high noise levels, while a smaller mask is used in areas with low noise levels. In the second stage, we enhance the result of the spatial filter with the temporal filter. In the last stage, we use the motion detector and recursive time averaging to improve the temporal filter’s output.
The remainder of this paper is organized as follows. In Section 2, we briefly describe the spatial Wiener filter. The proposed model is described in Section 3. The experimental results are presented in Section 4. Some concluding remarks are outlined in Section 5.
2. Spatial (Wiener) Filter
Suppose that f(x, y) is the original image, corrupted by white Gaussian noise n(x, y) with zero mean and variance σ_n². The observed noisy image g(x, y) is known as the sum of the original image and the noise; that is, g(x, y) = f(x, y) + n(x, y).
The main purpose of noise reduction algorithms is to restore an estimate f̂(x, y) of the original image f(x, y) from the degraded image g(x, y). The most efficient algorithm is the one that yields an estimate f̂(x, y) as close as possible to the original image f(x, y). The Wiener filter is based on this principle.
The Wiener filter formulation can be written as

f̂(x, y) = μ(x, y) + ((σ²(x, y) − σ_n²) / σ²(x, y)) (g(x, y) − μ(x, y)),   (1)

where μ(x, y) and σ²(x, y) are the local mean and local variance of the noisy image g(x, y) computed over the filtering mask, and σ_n² is the variance of the noise, which, when unknown, can be estimated as the average of all the local variances over the input (noisy) image g(x, y).
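To make the formulation concrete, the local-statistics Wiener estimate can be sketched in a few lines of NumPy. This is a minimal illustration; the helper names, the edge padding, and the clamping of negative local variances are our implementation choices, not prescriptions from the paper.

```python
import numpy as np

def box_mean(img, k):
    """Mean over a k x k neighborhood with edge padding (local-statistics helper)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def wiener_filter(g, noise_var, k=3):
    """Pixelwise Wiener estimate: mu + max(var - noise_var, 0) / var * (g - mu)."""
    mu = box_mean(g, k)
    var = np.maximum(box_mean(g * g, k) - mu ** 2, 0.0)
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (g - mu)
```

In a flat region the local variance falls below the noise variance, so the gain is zero and the pixel is fully smoothed to the local mean; on a strong edge the local variance dominates, the gain approaches one, and the pixel passes through nearly unchanged.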
3. Proposed Model
As illustrated in Figures 1 and 2, in this section we propose a new video noise reduction algorithm based on the combination of a spatial Wiener filter and a temporal filter, whose combined output is then refined under the control of a motion detector.
The spatial Wiener filter is the most common filter in the field of spatial noise filtering. It reduces noise effectively but, unfortunately, can cause blurring, especially in areas with low noise levels.
To avoid this blurring, we propose increasing the filtering action in areas with high noise levels and decreasing it in areas with low noise levels.
The regulation of the filtering action is carried out by applying a larger mask in areas that contain high levels of noise and few image features, and a smaller mask in areas that contain less noise and many image features.
In areas with high noise levels, the larger mask outperforms the smaller one in terms of noise removal. However, applying the larger mask often leads to blurring, especially in areas with low noise levels. To combine the advantages of both masks, we propose using a threshold to determine which mask should be applied according to the noise level. More precisely, the larger mask is applied in areas with high noise levels in order to remove the greatest amount of noise, while the smaller mask is applied in areas with low noise levels, which plays an important role in the conservation of the image edges.
For controlling the filtering action, we use the following selection function:

mask(x, y) = larger mask, if σ²(x, y) > T1,
mask(x, y) = smaller mask, otherwise,   (2)

where σ²(x, y) is the amount of noise at position (x, y) and T1 is an optional test threshold.
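The mask-selection rule can be sketched as follows. The concrete mask sizes 3 and 5 and the argument names are illustrative placeholders, since the paper leaves the actual values to the experiments.

```python
import numpy as np

def select_mask(local_noise, T, small=3, large=5):
    """Per-pixel switch in the spirit of the selection function above:
    the larger mask where the measured noise exceeds the threshold T,
    the smaller mask elsewhere (the sizes 3 and 5 are illustrative)."""
    return np.where(local_noise > T, large, small)

def blend_filtered(f_small, f_large, local_noise, T):
    """Combine two precomputed Wiener outputs according to the same switch."""
    return np.where(local_noise > T, f_large, f_small)
```

In practice one can filter the frame once with each mask size and then pick, per pixel, whichever result the switch selects, which keeps the per-area adaptivity without branching inside the filtering loop.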
Now let f̂_k denote the kth (current) frame filtered by the spatial Wiener filter, and let f̂_{k−1} denote the previous frame filtered by the spatial Wiener filter. The temporally filtered kth frame can be expressed as

f̃_k(x, y) = α f̂_k(x, y) + (1 − α) f̂_{k−1}(x, y),

where α ∈ [0, 1] is a temporal weighting coefficient.
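A minimal sketch of such a temporal blend is shown below. The fixed weight alpha is a simplifying assumption for illustration; the paper's weight may be defined adaptively.

```python
import numpy as np

def temporal_filter(f_cur, f_prev, alpha=0.5):
    """Blend the spatially filtered current frame with the spatially filtered
    previous frame; alpha = 1 keeps only the current frame, while smaller
    values average in more of the past (alpha = 0.5 is an illustrative default)."""
    return alpha * f_cur + (1.0 - alpha) * f_prev
```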
For further improvement of the temporally filtered frame f̃_k, we select the following motion field to control the process of removing noise according to the area information (static or movable):

m(x, y) = 1, if |f̃_k(x, y) − f̂_{k−1}(x, y)| > T2,
m(x, y) = 0, otherwise,

where T2 is an optional test threshold.
From the above motion field we can observe the following:
(i) In the case of m(x, y) = 0, the change between f̃_k and f̂_{k−1} at the spatial position (x, y) is nearly zero; in other words, f̃_k(x, y) ≈ f̂_{k−1}(x, y).
(ii) In the case of m(x, y) = 1, the change between f̃_k and f̂_{k−1} at the spatial position (x, y) is significant. Noise removal should therefore be stopped in this case; otherwise, the significant image information in this area would be removed, leading to a blurry image.
In the case of m(x, y) = 0, we enforce recursive time averaging by taking the weighted average of the temporally filtered frame f̃_k and the previous output frame F̂_{k−1} as follows:

F̂_k(x, y) = β f̃_k(x, y) + (1 − β) F̂_{k−1}(x, y).
Here the weighting factor is

β = σ_r² / (σ_r² + σ_n²),

where σ_n² is the noise variance and σ_r² is the residue variance.
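The motion test and the recursive time averaging it gates can be sketched as follows. The threshold T2, the fixed weight beta, and the pass-through behavior on motion pixels follow our reading of the description above rather than verbatim equations from the paper.

```python
import numpy as np

def motion_field(f_cur, f_prev, T2):
    """1 where the interframe change is significant (likely motion), 0 where static."""
    return (np.abs(f_cur - f_prev) > T2).astype(np.uint8)

def recursive_average(f_cur, f_acc, m, beta):
    """Static pixels (m == 0): weighted average with the accumulated output frame.
    Motion pixels (m == 1): pass the current value through to avoid dragging."""
    return np.where(m == 0, beta * f_cur + (1.0 - beta) * f_acc, f_cur)
```

Gating the averaging on the motion field is what prevents the trailing "ghost" artifacts that plain recursive averaging produces behind moving objects.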
4. Experimental Results
For PSNR, we use the following formula:

PSNR = 10 log10(255² / MSE),  MSE = (1 / (M N)) Σ_i Σ_j (f̂(i, j) − f(i, j))²,

where M and N are the numbers of pixels horizontally and vertically, respectively, and f̂ and f are the denoised frame and the original frame, respectively.
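For completeness, this PSNR measure is straightforward to implement; the sketch below assumes 8-bit frames (peak value 255) and returns infinity for identical frames.

```python
import numpy as np

def psnr(denoised, original, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE), with MSE averaged over the M x N frame."""
    mse = np.mean((denoised.astype(float) - original.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak * peak / mse)
```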
To assess the superiority of our new model, we take the common video sequences Miss America, Salesman, Flower Garden, and Foreman, degrade them with different types of noise, and filter them with the algorithms in [26–28] and with the proposed algorithm.
The experimental results are shown in Figures 3–6; each figure illustrates the original frame, the noisy frame, the results of the algorithms in [26], [27], and [28], and the result of the new algorithm, respectively.
From Figures 3–6, we can see that the algorithms in [26–28] all leave serious residual noise, unlike the proposed model, which removes almost all of the noise while preserving the edges at the same time.
The quantitative results of the four models are shown in Tables 1–4. From Tables 1 and 2, we can observe that the proposed model achieves gains of 7.6 dB, 4.48 dB, and 4.61 dB over the three algorithms in [26–28], respectively, while Tables 3 and 4 show gains of 4.92 dB, 2.13 dB, and 3.96 dB over the same three algorithms, respectively.
Figures 7–10 compare the PSNR of the algorithms in [26–28] and of the proposed algorithm over a range of noise levels. The relative ranking of the reference algorithms varies with the noise level; nevertheless, the proposed algorithm outperforms all of them in terms of PSNR.
The experimental results demonstrate the superiority of the new approach in terms of edge preservation and noise suppression, owing to its ability to control the amount of noise removal according to the area information (static or movable).
Step 1. Filter the previous frame by (2) as follows:
(a) If σ²(x, y) > T1, filter the previous frame with the larger mask.
(b) If σ²(x, y) ≤ T1, filter the previous frame with the smaller mask.
Step 2. Filter the current frame by (2) as follows:
(a) If σ²(x, y) > T1, filter the current frame with the larger mask.
(b) If σ²(x, y) ≤ T1, filter the current frame with the smaller mask.
Step 5. In the case of a static area (m(x, y) = 0), return to Step 3.
Step 6. Otherwise (m(x, y) = 1), go to the output.
5. Conclusion
This paper presented a novel video noise reduction model based on a spatial Wiener filter and a temporal filter. The proposed algorithm removes noise very efficiently while maintaining the important image features. The proposed spatial Wiener filter is applied according to the amount of noise in each area: a larger mask is applied in areas with high noise levels, while a smaller mask is applied in areas with low noise levels. In the proposed temporal filter, a motion detector controls the noise removal process according to the quality of each area: static areas are heavily filtered to remove a greater amount of noise, unlike motion areas, which are filtered less in order to maintain the features of the image. Numerical experiments with four different video sequences and various levels of speckle noise and white Gaussian noise show that the proposed model achieves a higher noise removal gain than the algorithms in [26–28]. To emphasize the superiority of the proposed algorithm, we use the Peak Signal-to-Noise Ratio (PSNR) as a quantitative measure and visual quality as a qualitative measure.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The project is supported by the NSFC-Guangdong Joint Foundation (Key Project) under Grant no. U1135003 and the National Natural Science Foundation of China under Grant nos. 61472466 and 61070227.
References
1. S. Yu, M. O. Ahmad, and M. N. S. Swamy, “Video denoising using motion compensated 3-D wavelet transform with integrated recursive temporal filtering,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 6, pp. 780–791, 2010.
2. R. V. Arjunan and V. V. Kumar, “Adaptive spatio-temporal filtering for video denoising using integer wavelet transform,” in Proceedings of the International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT '11), pp. 842–846, IEEE, Tamil Nadu, India, March 2011.
3. E. J. Balster, Y. F. Zheng, and R. L. Ewing, “Feature-based wavelet shrinkage algorithm for image denoising,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2024–2039, 2005.
4. L. Guo, O. C. Au, M. Ma, and Z. Liang, “Temporal video denoising based on multihypothesis motion compensation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 10, pp. 1423–1429, 2007.
5. R. Rajagopalan and M. T. Orchard, “Synthesizing processed video by filtering temporal relationships,” IEEE Transactions on Image Processing, vol. 11, no. 1, pp. 26–36, 2002.
6. E. J. Balster and R. L. Ewing, “Combined spatial and temporal domain wavelet shrinkage algorithm for video denoising,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, no. 2, pp. 220–230, 2006.
7. S.-W. Lee, V. Maik, J.-H. Jang, J. Shin, and J. Paik, “Noise-adaptive spatio-temporal filter for real-time noise removal in low light level images,” IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 648–653, 2005.
8. J. S. Lee, “Digital image enhancement and noise filtering by use of local statistics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, no. 2, pp. 165–168, 1980.
9. M. Ghazal, A. Amer, and A. Ghrayeb, “Structure-oriented multidirectional Wiener filter for denoising of image and video signals,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 12, pp. 1797–1802, 2008.
10. L. Yan and Q. Yanfeng, “Novel adaptive temporal filter based on motion compensation for video noise reduction,” in Proceedings of the International Symposium on Communications and Information Technologies (ISCIT '06), pp. 1031–1034, Bangkok, Thailand, October 2006.
11. S.-C. Hsia, W.-C. Hsu, and C.-L. Tsai, “High-efficiency TV video noise reduction through adaptive spatial-temporal frame filtering,” Journal of Real-Time Image Processing, vol. 10, no. 3, pp. 561–572, 2015.
12. M. Rakhshanfar and A. Amer, “Motion blur resistant method for temporal video denoising,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '14), pp. 2694–2698, Paris, France, October 2014.
13. G. Varghese and Z. Wang, “Video denoising based on a spatiotemporal Gaussian scale mixture model,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 7, pp. 1032–1040, 2010.
14. S. Mishra and P. D. Swami, “Spatio-temporal video denoising by block-based motion detection,” International Journal of Engineering Trends and Technology, vol. 4, no. 8, pp. 3371–3382, 2013.
15. C. Zuo, Y. Liu, X. Tan, W. Wang, and M. Zhang, “Video denoising based on a spatiotemporal Kalman-bilateral mixture model,” The Scientific World Journal, vol. 2013, Article ID 438147, 10 pages, 2013.
16. M. Maggioni, E. Sánchez-Monge, and A. Foi, “Joint removal of random and fixed-pattern noise through spatiotemporal video filtering,” IEEE Transactions on Image Processing, vol. 23, no. 10, pp. 4282–4296, 2014.
17. W. Hong-zhi, C. Ling, and X. Shu-liang, “Improved video denoising algorithm based on spatial-temporal combination,” in Proceedings of the 7th International Conference on Image and Graphics (ICIG '13), pp. 64–67, IEEE, Qingdao, China, July 2013.
18. M. Esche, A. Glantz, A. Krutz, and T. Sikora, “Adaptive temporal trajectory filtering for video compression,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 5, pp. 659–670, 2012.
19. Z. Cong, Z. Gao, and X. Zhang, “A practical video denoising method based on hierarchical motion estimation,” in Proceedings of the 8th IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB '13), pp. 1–5, London, UK, June 2013.
20. X. Wang, C. Zhu, S. Li, J. Xiao, and T. Tillo, “Depth filter design by jointly utilizing spatial-temporal depth and texture information,” in Proceedings of the IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB '15), pp. 1–5, Ghent, Belgium, June 2015.
21. B. S. Lin, W. R. Chou, C. Yu, P. H. Cheng, P. J. Tseng, and S. J. Chen, “An effective spatial-temporal denoising approach for depth images,” in Proceedings of the IEEE International Conference on Digital Signal Processing (DSP '15), pp. 647–651, IEEE, Singapore, July 2015.
22. M. Maggioni, G. Boracchi, A. Foi, and K. Egiazarian, “Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms,” IEEE Transactions on Image Processing, vol. 21, no. 9, pp. 3952–3966, 2012.
23. A. A. Yahya, J. Tan, L. Li, and M. Hu, “A novel video denoising method based on total variation and recursive temporal filtering,” Journal of Information & Computational Science, vol. 12, no. 13, pp. 5063–5071, 2015.
24. A. McAndrew, An Introduction to Digital Image Processing with Matlab, School of Computer Science and Mathematics, Victoria University of Technology, 2004.
25. A. Pizurica, V. Zlokolica, and W. Philips, “Combined wavelet domain and temporal video denoising,” in Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS '03), pp. 334–341, IEEE, Miami, Fla, USA, July 2003.
26. J.-S. Lee, “Digital image smoothing and the sigma filter,” Computer Vision, Graphics & Image Processing, vol. 24, no. 2, pp. 255–269, 1983.
27. D. Zhang, J.-W. Han, O.-J. Kwon, H.-M. Nam, and S.-J. Ko, “A saliency based noise reduction method for digital TV,” in Proceedings of the IEEE International Conference on Consumer Electronics (ICCE '11), pp. 743–744, Las Vegas, Nev, USA, January 2011.
28. A. A. Yahya, J. Tan, and L. Li, “An amalgam method based on anisotropic diffusion and temporal filtering for video denoising,” Journal of Computational Information Systems, vol. 11, no. 17, pp. 6467–6475, 2015.
Copyright © 2015 Ali Abdullah Yahya et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.