Research Article  Open Access
Meiyu Liang, Junping Du, Honggang Liu, "Spatiotemporal Super-Resolution Reconstruction Based on Robust Optical Flow and Zernike Moment for Video Sequences", Mathematical Problems in Engineering, vol. 2013, Article ID 745752, 14 pages, 2013. https://doi.org/10.1155/2013/745752
Spatiotemporal Super-Resolution Reconstruction Based on Robust Optical Flow and Zernike Moment for Video Sequences
Abstract
To improve the spatiotemporal resolution of video sequences, a novel spatiotemporal super-resolution reconstruction model (STSR) based on robust optical flow and Zernike moments is proposed in this paper, which integrates spatial and temporal resolution reconstruction into a unified framework. The model does not rely on accurate estimation of subpixel motion and is robust to noise and rotation. Moreover, it effectively overcomes hole and block artifacts. First, we propose an efficient robust optical flow motion estimation model based on motion-detail preservation; then we introduce a bi-weighted fusion strategy to implement spatiotemporal motion compensation. Next, combining a self-adaptive region correlation judgment strategy, we construct a fast fuzzy registration scheme based on Zernike moments for better STSR with higher efficiency; the final video sequences with high spatiotemporal resolution are then obtained by fusing the complementary and redundant information, with non-local self-similarity, between adjacent video frames. Experimental results demonstrate that the proposed method outperforms existing methods in terms of both subjective visual and objective quantitative evaluations.
1. Introduction
The resolution quality of video sequences collected by multi-source vision sensors plays an important role in accurate moving-target recognition and tracking for intelligent monitoring and control systems. However, factors such as lighting, inaccurate focusing, optical or motion blur, subsampling, and noise disturbance can negatively affect the visual quality of video sequences. In this situation, spatiotemporal super-resolution (SR) reconstruction technology [1], described in Figure 1, provides an excellent solution. By making full use of the complementary and redundant information, with similar but not exactly identical details, in the different spatiotemporal scales between adjacent video frames, high-resolution (HR) video sequences can be produced by fusing several low-resolution (LR) video frames; this has great research significance and application potential for intelligent monitoring and control.
In recent years, spatiotemporal super-resolution reconstruction technology has become the focus of much research [2–5]. Many researchers exploit the complementarity and redundancy between multi-frame images and study SR reconstruction via fusion of multiple frames, aiming to improve the spatial resolution of each image or video [6–9]. However, traditional methods [10, 11] usually rely on accurate estimation of subpixel motion, which constrains their applicability to video sequences with relatively simple motions such as global translation. Thus, in recent years, some scholars have proposed a novel fuzzy registration scheme for probabilistic motion estimation based on similarity matching and introduced it into super-resolution methods [12] to further improve the spatial resolution of images or video, effectively avoiding accurate subpixel motion estimation. Using such a scheme, Protter et al. [13] proposed a non-local means (NLM) based SR framework by extending the NLM filter [14, 15], successfully applied in denoising, to the SR field. Su et al. proposed a spatially adaptive block-based SR model [16].
However, some limitations exist in the newly developed fuzzy registration scheme. If rotations at different angles exist in the image or video sequence, the correlation between corresponding pixels becomes weak, and it is then difficult to use the LR images or video frames effectively during SR reconstruction. Moreover, if the LR images or video frames are noisy, the reconstruction quality is seriously affected. Thus, considering the good rotation, translation, and scale invariance properties of Zernike moments (ZM) [17, 18], we propose a fast fuzzy registration scheme based on ZM using a self-adaptive region correlation judgment strategy, which provides an efficient similarity measure between region features in the spatiotemporal non-local domain for weight calculation. Based on that, we construct a novel spatiotemporal SR reconstruction model based on robust optical flow and ZM, which makes full use of the non-local self-similarity and redundant information in the different spatiotemporal scales between adjacent video frames and produces high-resolution video sequences via fusion of several LR video frames. Meanwhile, the new model integrates spatial SR and temporal SR into a unified framework, which improves both the spatial and temporal resolution and makes the video sequences clearer and more fluent. Different from traditional SR reconstruction methods, the proposed method does not rely on accurate estimation of subpixel motion and adapts to many kinds of complex motion patterns. Traditional motion-vector-based frame interpolation technology [19, 20] can also improve temporal resolution, but because of the inevitable influence of motion estimation errors, visual block or hole artifacts usually appear in the interpolated frames. Our proposed method effectively overcomes these artifacts while improving the spatial and temporal resolution.
The contributions of this paper are as follows. (1) We propose a novel spatiotemporal SR reconstruction model for video sequences based on robust optical flow and ZM. (2) We propose a robust optical flow motion estimation and compensation model based on motion-detail preservation. (3) By introducing the self-adaptive region correlation judgment strategy, we construct a fast fuzzy registration scheme based on ZM for better STSR with higher efficiency. (4) An efficient iterative curvature-based interpolation (ICBI) scheme is introduced to obtain the initial HR estimate of each LR video frame.
The remainder of the paper is organized as follows. Section 2 presents our proposed model architecture. Section 3 describes the algorithm implementation of our model. Section 4 gives the experimental results and analysis. Conclusions are presented in Section 5.
2. The Model Architecture
Given a low-resolution video sequence that has been degraded from a high-resolution video sequence by blurring, downsampling, noise disturbance, and frame dropping, our proposed model aims to reconstruct a high-resolution video sequence from the degraded low-resolution one using spatiotemporal reconstruction technology. Thus, in this paper, a novel spatiotemporal SR reconstruction model (STSR) based on robust optical flow and Zernike moments is proposed, which integrates spatial SR and temporal SR into a unified framework. The architecture of the proposed model is shown in Figure 2. It mainly includes the following three processes for spatiotemporal SR reconstruction modeling.
First, by performing motion analysis in the spatiotemporal domain of the video sequence, we propose a robust multi-layered optical flow motion estimation method based on motion-detail preservation to obtain the motion vectors.
Then, according to the obtained motion vectors, we introduce an efficient bi-weighted fusion strategy to implement spatiotemporal motion compensation, obtaining the compensated video sequence.
Finally, a fast fuzzy registration scheme based on ZM is proposed to implement efficient spatiotemporal SR reconstruction and optimization of the compensated sequence, via fusion of the non-local complementary and redundant information between adjacent video frames, producing a high-quality video sequence with high spatiotemporal resolution. This process mainly includes three operations: initial HR estimation based on the ICBI scheme, iterative multi-frame fusion, and deblurring.
3. The Algorithm Implementation of the Model
3.1. Robust Optical Flow Motion Estimation Model Based on Motion-Detail Preservation
Owing to the brightness constancy and motion smoothness constraints, traditional optical flow motion estimation methods are usually not robust to lighting changes and noise and lack a strong ability to preserve motion details [21]. To solve this problem, we propose a novel robust optical flow motion estimation model. In our model, an iterative multi-resolution coarse-to-fine strategy and the Total Variation (TV) idea are applied in the model framework, which effectively avoids falling into local optima and further improves time efficiency as well.
Traditional TV-based optical flow calculation methods usually have high computational complexity, and when large displacements exist in the video sequence, the motion estimation error grows. To overcome this problem, an iterative multi-resolution layered mechanism based on a Gaussian pyramid is introduced in our model to calculate optical flow from coarse to fine across multi-resolution scales. The final motion vectors are obtained by adding the offset obtained at the higher resolution to the motion vector obtained at the lower resolution. The new mechanism not only effectively improves time efficiency but also yields more reliable motion vectors.
Given two adjacent frames in the video sequence, obtaining the motion vectors with the traditional TV-based optical flow model is equivalent to finding the velocity field that minimizes an objective energy function consisting of a data term and a regularization term, calculated, respectively, as follows:
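The coarse-to-fine pyramid mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-level TV solver is left as a pluggable callback (`refine`), and the helper names are our own.

```python
import numpy as np

def gaussian_downsample(img):
    """Blur with a small binomial kernel, then keep every second pixel."""
    k = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img[::2, ::2]

def upsample_flow(flow):
    """Nearest-neighbour upsample the flow field and double its magnitude."""
    return 2.0 * flow.repeat(2, axis=0).repeat(2, axis=1)

def coarse_to_fine_flow(f1, f2, levels=3, refine=None):
    """Estimate flow over a Gaussian pyramid: solve at the coarsest level,
    then repeatedly upsample the result and refine it at the next level.

    `refine(f1, f2, init)` is the per-level solver (e.g. a TV update);
    an identity placeholder is used when none is supplied."""
    refine = refine or (lambda a, b, init: init)
    pyr1, pyr2 = [f1], [f2]
    for _ in range(levels - 1):
        pyr1.append(gaussian_downsample(pyr1[-1]))
        pyr2.append(gaussian_downsample(pyr2[-1]))
    flow = np.zeros(pyr1[-1].shape + (2,))          # start at the coarsest level
    for l1, l2 in zip(reversed(pyr1), reversed(pyr2)):
        flow = flow[:l1.shape[0], :l1.shape[1]]      # crop after upsampling
        flow = refine(l1, l2, flow)                  # per-level TV solve goes here
        if l1.shape != f1.shape:                     # not yet at the finest level
            flow = upsample_flow(flow)
    return flow
```

With a real `refine`, each level only has to recover a small residual displacement, which is why the scheme copes with large motions.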
In the above formulation, the minimization of the data term is based on the brightness constancy constraint, so the traditional model is strongly influenced by factors such as lighting, shadows, or occlusion. The regularization term is based on the motion smoothness constraint, but motion is often discontinuous near the contours or edges of a video frame, so motion estimation usually performs poorly at motion discontinuities.
Thus, building on the traditional methods, we improve and optimize the optical flow model to further enhance its robustness and motion estimation accuracy. The specific details are as follows.
First, to enhance the model's robustness to factors such as lighting and noise, we improve the data term of the optical flow model and construct a new data term with combined brightness constancy and gradient constancy constraints. A weight adjustment factor balances the two constraints; the term involves the luminance value of each pixel at a given time point, the partial derivatives of the luminance, and the motion vector obtained by optical flow estimation.
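The equation for this data term was rendered as an image and lost in extraction. A standard form of such a combined brightness/gradient-constancy constraint, following Brox et al. and therefore only an assumed reconstruction (with $\Psi$ a robust penalty, $\gamma$ the weight adjustment factor, and $\mathbf{w}=(u,v)^{\top}$ the flow), is:

```latex
E_{\text{data}}(\mathbf{w}) = \int_{\Omega}
    \Psi\!\left( \left| I(\mathbf{x}+\mathbf{w}) - I(\mathbf{x}) \right|^{2} \right)
  + \gamma\, \Psi\!\left( \left| \nabla I(\mathbf{x}+\mathbf{w}) - \nabla I(\mathbf{x}) \right|^{2} \right)
  \,\mathrm{d}\mathbf{x}
```

The gradient term keeps the matching meaningful when additive illumination changes break brightness constancy.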
Second, to effectively preserve motion discontinuities and edge details, inspired by the ideas of [22, 23], a motion-structure-adaptive strategy is introduced into the regularization term of our optical flow model. The new regularization term combines the traditional TV regularization operation with an adaptive weight that protects motion details, calculated as follows:
Results from a large number of experiments show that motion estimation achieves the best performance when the parameter is set to 0.8.
Finally, a heuristic non-local median filtering term is introduced into our model, which optimizes the optical flow motion vectors at each level using adaptive weighted median filtering over the set of neighbors of each pixel in a large non-local region. Inspired by the idea in [24], the adaptive weight of each neighboring pixel is determined according to the spatial distance, the color-value distance (with color vectors in the Lab space), and the occlusion state, calculated as follows:
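As an illustration of this filtering term, the sketch below applies a weighted median to one flow component in a local window. The Gaussian-of-distance-and-color weight is a simplification of the paper's weight (the occlusion factor is omitted), so the function names and parameters here are our own assumptions.

```python
import numpy as np

def weighted_median(values, weights):
    """Return the value at which the cumulative weight first reaches half."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def filter_flow_component(u, color, radius=2, sigma_s=4.0, sigma_c=0.1):
    """Adaptive weighted median of flow component `u` over a
    (2*radius+1)^2 window; weights fall off with spatial distance and
    color difference, a simplified stand-in for the paper's Eq. (7)."""
    h, w = u.shape
    out = np.empty_like(u)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            d_sp = (ii - i) ** 2 + (jj - j) ** 2
            d_col = (color[ii, jj] - color[i, j]) ** 2
            wgt = np.exp(-d_sp / (2 * sigma_s**2) - d_col / (2 * sigma_c**2))
            out[i, j] = weighted_median(u[ii, jj].ravel(), wgt.ravel())
    return out
```

The median (rather than a mean) is what lets the filter remove flow outliers without blurring across motion boundaries.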
Because pixels in the occluded regions between adjacent video frames lack correspondence, the motion vectors estimated in these regions are usually inaccurate. Thus, we further optimize the estimated motion vectors by occlusion-aware refinement. Comprehensively considering the flow divergence and the pixel projection difference, we apply the following method to detect occluded regions and solve for the occlusion variable.
The occlusion variable is modeled with a zero-mean Gaussian prior over two quantities, the flow divergence and the pixel projection difference; the corresponding variance parameters were fixed empirically in our experiments.
Based on the above three optimizations, we construct the objective energy function of our new optical flow model, shown in (9), which combines the data term, the regularization term, and the non-local median filtering term through weight adjustment factors. The final high-accuracy optical flow motion vectors are obtained by minimizing this energy.
In the optimization process using the heuristic non-local median filtering term, we design a strategy to further improve the time efficiency of our optical flow model. Given the estimated optical flow, we first detect the motion boundaries using a Canny edge detector and then dilate these edges with a mask to obtain flow boundary regions. In these regions, we apply the adaptive weight in (7) within a non-local window; in the non-boundary regions, we apply equal weights in a smaller window for the median calculation.
3.2. Spatiotemporal Motion Compensation Using the Bi-Weighted Fusion Strategy
After the motion vectors are obtained using the optical flow model introduced in Section 3.1, a spatiotemporal motion compensation scheme is introduced to predict the missing intermediate frames in the video sequence, obtaining their initial estimates.
Traditional motion compensation strategies based on unidirectional motion vectors usually produce visual artifacts such as block effects, frame distortion, and motion blurring, which significantly degrade visual quality. More complex strategies can obtain somewhat better performance, but at too high a time cost. To obtain better compensation while improving time efficiency, we introduce an efficient bi-weighted fusion strategy for spatiotemporal motion compensation. The energy value of each pixel in the predicted frame is determined according to the following calculation method:
Comprehensively considering the algorithm's time complexity and compensation accuracy, the two fusion weights are both set to 0.5 in our experiments.
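A minimal sketch of the bi-directional, bi-weighted compensation is given below, assuming (as the text suggests) that the intermediate frame is a 0.5/0.5 blend of the forward- and backward-compensated neighbours; the nearest-neighbour warping helper is our own simplification.

```python
import numpy as np

def warp(frame, flow, t):
    """Fetch each output pixel from `frame` displaced by t * flow
    (nearest-neighbour backward warping, clamped at the border)."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ys = np.clip(np.rint(yy + t * flow[..., 1]).astype(int), 0, h - 1)
    xs = np.clip(np.rint(xx + t * flow[..., 0]).astype(int), 0, w - 1)
    return frame[ys, xs]

def interpolate_frame(prev, nxt, flow_fwd, flow_bwd, w_f=0.5, w_b=0.5):
    """Bi-weighted fusion: blend the frame compensated forward from the
    previous frame with the frame compensated backward from the next one."""
    from_prev = warp(prev, flow_fwd, 0.5)   # halfway along the forward flow
    from_next = warp(nxt, flow_bwd, 0.5)    # halfway along the backward flow
    return w_f * from_prev + w_b * from_next
```

Averaging two independently compensated predictions is what damps the block and distortion artifacts that a single unidirectional compensation leaves behind.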
Based on Sections 3.1 and 3.2, the general framework of our proposed optical flow motion estimation and spatiotemporal motion compensation is shown in Figure 3.
3.3. Enhanced Spatiotemporal Super-Resolution Reconstruction Based on Robust Optical Flow and Zernike Moment
Through experimental analysis, we observe that hole artifacts usually exist in the spatiotemporally predicted frames, mainly caused by inevitable optical flow motion estimation errors. To overcome this problem, making full use of the non-local self-similarity and redundant information in the spatiotemporal domain between adjacent video frames, we propose a fast fuzzy registration scheme based on ZM and then apply a multi-frame information fusion strategy to construct an efficient enhanced spatiotemporal super-resolution reconstruction model that reconstructs and optimizes the predicted frames, yielding predicted frames with more pleasing visual quality and further improving the temporal resolution of the video sequence. Moreover, this new model also implements spatial resolution reconstruction for the initial LR video sequence, finally obtaining a video sequence with high spatiotemporal resolution.
Different from traditional SR reconstruction approaches, the new scheme does not depend on accurate estimation of subpixel motion; it implements SR reconstruction by mining the non-local self-similarities among several adjacent video frames. Because of the good rotation, translation, and scale invariance properties of ZM, the proposed model performs better when complex motion patterns, such as rotations at arbitrary angles, exist in the video sequence. Moreover, ZM is not sensitive to noise, so our model also has good noise robustness. The implementation of our proposed STSR model mainly includes the following three steps.
Step 1. An efficient iterative curvature-based interpolation scheme is introduced to obtain the initial HR estimate of the LR video sequence after motion compensation.
Step 2. On the basis of the initial HR estimate, each video frame is super-resolved using the multi-frame information fusion strategy based on the proposed fast fuzzy registration scheme.
Step 3. A deblurring operation and an iterative refinement mechanism are applied to optimize the super-resolved video sequence, further improving the reconstruction quality. Finally, a high-quality video sequence with high spatiotemporal resolution is obtained.
In the first step, we introduce an efficient iterative curvature-based interpolation (ICBI) scheme [25] to provide a better HR initial estimate for the second step, which significantly influences the weight calculation in the first iteration of the subsequent fusion process. Compared with traditional interpolation schemes, the ICBI scheme used in our method, based on the continuity of the second-order derivatives and energy curvature, is not only simple and extremely effective at removing blurring and jaggy artifacts but also offers real-time performance. In this scheme, a rough estimate of the energy of each interpolated pixel is calculated from local approximations of the second-order derivatives along the two diagonal directions, using the eight neighboring pixels, as follows:
However, the pixel energy obtained above is only a rough estimate and needs continuous iterative refinement. Following Giachetti and Asuni [25], the rough energy estimate of each pixel is modified according to (12), and we then obtain a higher-quality initial HR estimate of each LR video frame. The modification involves adjustment factors that control the proportions of the curvature continuity energy, the curvature enhancement energy, and the iso-level curve smoothing energy; we calculate these three energies using (5), (9), and (10) in [25].
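The diagonal second-order derivatives at the heart of ICBI can be sketched as below. The stencils follow the general idea of [25], but the direction test and coefficients here are our own simplification, not the paper's exact formulation.

```python
import numpy as np

def diagonal_second_derivatives(img, i, j):
    """Approximate the second-order derivatives along the two diagonal
    directions at (i, j) using central differences over the diagonal
    neighbours."""
    d1 = img[i - 1, j - 1] - 2.0 * img[i, j] + img[i + 1, j + 1]  # NW-SE
    d2 = img[i - 1, j + 1] - 2.0 * img[i, j] + img[i + 1, j - 1]  # NE-SW
    return d1, d2

def icbi_fill_value(img, i, j):
    """Average along the diagonal whose second derivative is smaller in
    magnitude -- interpolating along, not across, the local edge."""
    d1, d2 = diagonal_second_derivatives(img, i, j)
    if abs(d1) <= abs(d2):
        return 0.5 * (img[i - 1, j - 1] + img[i + 1, j + 1])
    return 0.5 * (img[i - 1, j + 1] + img[i + 1, j - 1])
```

Interpolating along the direction of smaller curvature is what removes the jaggy staircase artifacts that axis-aligned interpolation produces on diagonal edges.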
Once the initial HR estimate of each LR video frame is obtained, the next step (Step 2) is the core of the algorithm, built on the fast fuzzy registration scheme based on ZM. We perform the weight calculation by mining non-local self-similarity patterns between the frame to be super-resolved and each LR video frame; the HR estimate of the frame to be super-resolved is then obtained by weighted averaging. The weight of each pixel in the non-local neighboring region is calculated from its ZM-based similarity. On the basis of [17], to improve time efficiency, we introduce a region correlation judgment strategy controlled by a self-adaptive threshold, which yields a spatiotemporally adaptive model. Moreover, this strategy is beneficial for mining the most similar patterns used in the similarity-based weight calculation, so SR quality can also be improved to some extent. To describe our improved method, we first provide the following definitions.
Definition 1. A video frame is divided into many regions of equal size, and each region is divided into patches. The total number of pixels in each region is Num. We define the average energy of the region centered on a pixel as the mean of the energies of its Num pixels, calculated as
Definition 2. Given two regions centered on two pixels, with a corresponding feature vector extracted from each region, the feature similarity between the two regions is defined as
where the two ZM feature vectors, for the pixel to be super-resolved and for a pixel in the non-local neighboring search region, are calculated as
where the ZM of a given order and repetition of a video frame is defined as
where the summation is taken over the unit disk and the complex conjugate of the corresponding Zernike basis function is used.
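A direct implementation of the ZM definition (order n, repetition m, with the patch mapped onto the unit disk) can be sketched as follows. This follows the standard discrete approximation of the continuous definition, not the paper's exact code.

```python
import numpy as np
from math import factorial

def zernike_moment(patch, n, m):
    """Zernike moment Z_nm of a square patch mapped onto the unit disk.

    Pixels outside the disk are ignored. Requires n >= |m| >= 0 and
    n - |m| even (otherwise R_nm is identically zero)."""
    assert n >= abs(m) >= 0 and (n - abs(m)) % 2 == 0
    size = patch.shape[0]
    ys, xs = np.mgrid[0:size, 0:size]
    x = (2.0 * xs - size + 1) / (size - 1)   # map pixel centres to [-1, 1]
    y = (2.0 * ys - size + 1) / (size - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # radial polynomial R_nm(rho)
    r = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        r += c * rho ** (n - 2 * s)
    # Z_nm = (n+1)/pi * sum f * conj(V_nm) * dA  (discrete area element)
    basis = r * np.exp(-1j * m * theta)
    area = (2.0 / (size - 1)) ** 2
    return (n + 1) / np.pi * np.sum(patch[inside] * basis[inside]) * area
```

Rotating the patch only multiplies Z_nm by a phase factor, so the magnitude |Z_nm| is the rotation-invariant feature used for matching.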
In our SR reconstruction model, the region correlation judgment is first applied to divide the regions centered on all pixels in the search region into related and unrelated regions; only the related regions are used to calculate the weights, which further improves time efficiency. For the region correlation judgment, a self-adaptive threshold is introduced, and the set of related regions is defined as
The self-adaptive threshold is determined adaptively by the average energy of the region centered on the pixel to be super-resolved, which leads to more accurate judgment of region correlation. The self-adaptive threshold is calculated as
where an adjustment factor controls the threshold. Experiments confirmed that better SR quality is obtained when this factor is set to 0.08.
Based on the above ideas, we construct the following enhanced weight calculation formula based on ZM similarity, relating the pixel to be super-resolved to each pixel in its non-local neighboring region; a decay parameter controls the decay rate of the exponential function and hence the weight, and a normalization constant ensures the weights sum to one.
It is worth noting that the higher the ZM order, the more sensitive it is to noise. Thus, in our experiments we calculated only the moments up to the third order.
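The similarity-based weighting just described can be sketched as below: an exponential of the feature distance, normalized over the related regions, followed by a weighted average. The feature vectors stand in for the low-order ZM magnitudes, and `h` plays the role of the decay parameter; both names are our own.

```python
import numpy as np

def fuzzy_registration_weights(feat_ref, feats, h=0.5):
    """Exponential similarity weights between a reference region's feature
    vector and the features of the related candidate regions, normalized
    to sum to one.

    feat_ref : (d,)   ZM feature vector of the pixel to be super-resolved.
    feats    : (k, d) features of the related regions in the search area.
    h        : decay parameter controlling how fast weights fall off."""
    d2 = np.sum((feats - feat_ref) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))
    return w / w.sum()

def superresolve_pixel(weights, candidate_values):
    """HR estimate of a pixel as the weighted average of candidates."""
    return float(np.dot(weights, candidate_values))
```

Because only related regions enter `feats`, the weight computation is roughly proportional to the number of related regions rather than the full search window, which is the source of the speed-up reported later.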
When the weights are determined, the HR estimate of each pixel in the video frame to be super-resolved can be obtained by the weighted average of the pixels in its non-local neighboring region in each LR video frame. The objective function for the blurred HR estimate is given by the following energy function:
where represents the video frame to be superresolved.
Finally, in the third step, we introduce a stronger adaptive kernel regression (AKR) deblurring mechanism [26] to deal with the blur, applied to the results of the second step of multi-frame fusion. The desired HR video frame is obtained by minimizing the following objective function:
where the weight parameter controls the strength of the AKR deblurring process.
To further improve the SR reconstruction quality, the result is iteratively refined. The result after each iteration provides the basis for a more accurate weight calculation in the next iteration.
4. Experimental Results and Analysis
4.1. Experimental Data Set and Evaluation Indices
To validate the effectiveness of the proposed model and algorithm, two groups of experiments were designed. In the first group, we evaluated our proposed optical flow motion estimation model in terms of estimation accuracy and time efficiency and compared it with existing methods. The second group was designed to evaluate our proposed spatiotemporal SR reconstruction model in terms of subjective visual evaluation and three objective quantitative indices: the peak signal-to-noise ratio (PSNR), mean structural similarity index (MSSIM), and root-mean-square error (RMSE). In the experiments, we used spatial video sequences taken from the YOUKU website (http://www.youku.com/) and standard video sequences taken from http://trace.eas.asu.edu/yuv/index.html.
The SR methods were assessed by subjective visual evaluation and the three quantitative indices, PSNR, MSSIM, and RMSE, calculated as follows: the computation involves the length and width of the video frame; the reconstructed and original frames; their means and standard deviations; the covariance between the original and reconstructed frames; two stabilizing constants; and the number of frame blocks. A greater PSNR means the reconstructed frame is closer to the original. The closer MSSIM is to 1, the greater the structural similarity between the original and reconstructed frames. A smaller RMSE means the reconstructed frame is closer to the original.
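For reference, PSNR and RMSE for 8-bit frames can be computed as below (MSSIM requires the windowed SSIM machinery of Wang et al. and is omitted here); this is a minimal sketch, not the paper's evaluation code.

```python
import numpy as np

def rmse(ref, rec):
    """Root-mean-square error between the original and reconstructed frame."""
    diff = ref.astype(float) - rec.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum possible
    pixel value (255 for 8-bit frames)."""
    e = rmse(ref, rec)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```

Note that PSNR is a monotone function of RMSE, so the two indices always rank methods identically; MSSIM adds the structural comparison that they miss.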
4.2. Experimental Results
In the first group of experiments, our improved optical flow motion estimation model was assessed by two quantitative indices: average endpoint error (EPE) and average angular error (AAE). Four existing methods (HS [27], BA [27], Classic+C [28], and Classic+NL [24]) were used for comparison with our approach. In this experiment, we chose two video sequences, Rubber Whale and Grove2, from the standard Middlebury optical flow database [29] to evaluate the performance of our model. The three weight parameters in our optical flow model were set to 1, 10^2, and 1, respectively. The EPE and AAE values and the time efficiency of the different optical flow methods on the two sequences, without and with noise, are shown in Tables 1 and 2, respectively. The results in Table 1 show that, compared with the existing methods, the overall performance of the proposed method is the best, with lower average EPE and AAE values and a smaller time cost. The results in Table 2 show that the proposed model has good noise robustness, performing better even under noise disturbance.


Figure 4 shows the optical flow maps and motion vector graphs obtained by the proposed optical flow model for the spatial sequences. As can be seen from the optical flow maps and flow vectors in Figure 4, the proposed method accurately detects the motion area of the spatial target and preserves the motion edges well; thus, it is effective for motion estimation of spatial sequences.
In the second group of experiments, to verify the performance of the proposed SR reconstruction model and algorithm, three experiments were designed to compare our method (STSR) with existing approaches: POCS, NLSR [13], and ZMSR [17]. In our method, to obtain better SR reconstruction results while improving time efficiency, we applied STSR to each frame using its six adjacent frames in the video sequence; the region size for the weight calculation and the search region size were set to their empirically optimal values.
In Experiment 1, we used two spatial video sequences, Satellite1 and Satellite2 (20 frames/s), and two standard video sequences, Forman and Suzie (20 frames/s); each was blurred with a uniform mask, decimated by a factor of 1:2 (for each frame), and then contaminated by additive Gaussian noise. Meanwhile, the even frames were removed, compressing the frame rate to 10 frames/s. Then, by two-times spatial and temporal super-resolution reconstruction, we reconstructed the HR video sequences from the LR video sequences.
Figure 5 gives the PSNR, MSSIM, and RMSE values of the POCS, NLSR, ZMSR, and STSR methods for the four video sequences. The average PSNR, MSSIM, and RMSE values of the four methods are shown in Table 3. Table 4 shows the time cost of the multi-frame fusion process in the ZMSR and STSR methods, listing the average SR time per iteration per video frame for the spatial sequences and the standard Forman and Suzie sequences. Further analysis of the experimental results shows that, compared with the traditional methods, the proposed STSR method has obvious advantages. On the one hand, it yields better results, with higher PSNR and MSSIM values and lower RMSE values; on the other hand, it has higher time efficiency, costing only half as much as ZMSR. The main reason is the region correlation judgment strategy controlled by a self-adaptive threshold: it is beneficial for mining the most similar patterns for the similarity-based weight calculation, and only the most related regions, rather than all regions, are used for the weight calculation.


Figure 6 shows the visual effects of spatial resolution reconstruction by the four methods (POCS, NLSR, ZMSR, and STSR) on the four video sequences. From Figure 6 we can see that, because it relies on accurate estimation of subpixel motion, the POCS method is affected by motion estimation errors and produces some ghosting (see the red rectangular box), so it is not suited to sequences with complex motion patterns. Compared with POCS, the NLSR, ZMSR, and STSR methods perform better in SR reconstruction, because they do not rely on accurate subpixel motion estimation and also have a denoising effect to some extent. However, compared with NLSR and ZMSR, which often produce blur and jagged effects in some textures (see the local textures marked in the red rectangular box), the proposed STSR method yields more pleasing visual results with richer details and clearer edges and contours.
(a) POCS
(b) NLSR
(c) ZMSR
(d) STSR
Figure 7 shows the visual effects of temporal resolution reconstruction by the two schemes (ICBI and STSR) on the two spatial video sequences, which aim to predict and reconstruct the missing or distorted video frames. From Figure 7, we can see that, because of the influence of noise and optical flow motion estimation errors, the traditional single-frame ICBI scheme usually produces black hole effects in the reconstructed frames (see the local textures marked in the red rectangular box). Our method effectively overcomes this problem, mainly because we apply the multi-frame information fusion strategy for SR reconstruction, making full use of the non-local self-similarity and redundant information in the spatiotemporal domain between adjacent video frames.
In Experiment 2, we used the two spatial sequences, Satellite1 and Satellite2, to test the noise robustness and rotation invariance of the proposed method and compared it with the traditional POCS, NLSR, and ZMSR methods. The two sequences, Satellite1 (frames 55–60) and Satellite2 (frames 62–67), were blurred with a uniform mask, decimated by a factor of 1:2 (for each frame), and contaminated by additive noise with mean 0 and standard deviations 0.2, 0.4, 0.6, 0.8, 1.0, and 1.2; some frames were then rotated by a slight angle. Moreover, the even frames were removed. We then performed two-times spatial and temporal SR reconstruction on the two sequences. The PSNR, MSSIM, and RMSE values of the POCS, NLSR, ZMSR, and STSR methods under the different noise levels are shown in Figure 8, which demonstrates that, compared with the traditional methods, the STSR method shows better performance, with higher PSNR and MSSIM values and lower RMSE values at all noise levels tested. The results of the first two experiments demonstrate that the STSR method performs better regardless of whether rotations occur. Thus, our method has high rotation-invariance effectiveness and is also insensitive to noise.
Furthermore, we conducted Experiment 3 to test the performance of the proposed STSR model under several noise models (Gaussian, Poisson, and mixed Poisson-Gaussian) on the Satellite1, Satellite2, Foreman, and Suzie sequences; the experimental results are shown in Table 5. The PSNR, MSSIM, and RMSE results in Table 5 demonstrate that the proposed method also performs well under Gaussian, Poisson, and mixed Poisson-Gaussian noise. Thus our method is applicable to noise models beyond white Gaussian noise and still performs well.
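The three noise models used in Experiment 3 can be simulated as follows (an illustrative sketch of how such degraded test frames are typically generated; the function names and the `scale` parameter are our own choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(img, sigma):
    """Additive white Gaussian noise: signal-independent variance sigma^2."""
    return img + rng.normal(0.0, sigma, img.shape)

def add_poisson(img, scale=1.0):
    """Poisson (shot) noise: the variance grows with the signal intensity.
    scale converts pixel intensities to expected photon counts."""
    return rng.poisson(np.clip(img, 0, None) * scale) / scale

def add_mixed(img, sigma, scale=1.0):
    """Mixed Poisson-Gaussian noise: shot noise followed by read-out noise."""
    return add_gaussian(add_poisson(img, scale), sigma)
```

All three corruptions preserve the mean intensity, so reconstruction quality differences come from how each method handles the differing noise statistics rather than from a brightness bias.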

5. Conclusions
A novel model and algorithm are proposed in this paper to implement spatiotemporal super-resolution reconstruction for video sequences. In our model, a motion-details-preserving optical flow motion estimation model is first proposed to obtain the motion vectors, and an efficient biweighted fusion strategy is then introduced to implement spatiotemporal motion compensation. Next, exploiting the rotation, translation, and scale invariance of Zernike moments (ZM), we propose a fast fuzzy registration scheme based on ZM with a self-adaptive region correlation judgment strategy; the final video sequences with high spatiotemporal resolution are then obtained by fusing the complementary and redundant information, with nonlocal self-similarity, between adjacent video frames. Moreover, the new model integrates spatial SR and temporal SR into a unified framework, improving both the spatial and the temporal resolution and making the video sequences clearer and more fluent. Unlike traditional SR reconstruction methods, the proposed method does not rely on accurate estimation of subpixel motion and adapts to many kinds of complex motion patterns. Experimental results demonstrate that the proposed method outperforms existing methods in both subjective visual and objective quantitative evaluations and offers stronger rotation invariance and noise robustness.
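The rotation invariance that the ZM-based registration relies on can be checked numerically: rotating an image multiplies the complex moment Z_nm by a phase factor e^{-jm·alpha}, so its magnitude is unchanged. The sketch below computes the magnitude of a single low-order moment, Z_22, on the unit disk (a minimal illustration under our own discretization choices, not the paper's registration scheme, which uses multiple moments and a region correlation judgment):

```python
import numpy as np

def zernike_magnitude(img, n=2, m=2):
    """|Z_nm| of a square image sampled on the unit disk; the magnitude is
    rotation-invariant. Illustrative case n = m = 2, where R_22(rho) = rho**2."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    # map the pixel grid onto [-1, 1] x [-1, 1], centered on the image
    xs = (2 * x - w + 1) / (w - 1)
    ys = (2 * y - h + 1) / (h - 1)
    rho = np.hypot(xs, ys)
    theta = np.arctan2(ys, xs)
    mask = rho <= 1.0                       # restrict to the unit disk
    basis = (rho ** 2) * np.exp(-1j * m * theta)   # R_22(rho) e^{-j m theta}
    moment = (n + 1) / np.pi * np.sum(img[mask] * basis[mask])
    return abs(moment)
```

For a square image, a 90-degree rotation (`np.rot90`) maps the sample grid exactly onto itself, so the two magnitudes agree up to floating-point rounding; this is the property that lets the registration scheme match rotated regions without explicit subpixel motion estimation.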
Acknowledgments
This work was supported by the National Basic Research Program of China (973 Program) 2012CB821200 (2012CB821206), National Natural Science Foundation of China (nos. 91024001 and 61070142), and Beijing Natural Science Foundation (no. 4111002).
References
[1] O. Shahar, A. Faktor, and M. Irani, “Space-time super-resolution from a single video,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 3353–3360, June 2011.
[2] Y. An, Y. Lu, and Z. Yan, “Spatial-temporal motion compensation based video super resolution,” in Proceedings of the 10th Asian Conference on Computer Vision, vol. 6493 of Lecture Notes in Computer Science, no. 2, pp. 282–292, 2011.
[3] U. Mudenagudi, S. Banerjee, and P. K. Kalra, “Space-time super-resolution using graph-cut optimization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 995–1008, 2011.
[4] J. Chen, J. Nunez-Yanez, and A. Achim, “Video super-resolution using generalized Gaussian Markov random fields,” IEEE Signal Processing Letters, vol. 19, no. 2, pp. 63–66, 2012.
[5] S. P. Belekos, N. P. Galatsanos, and A. K. Katsaggelos, “Maximum a posteriori video super-resolution using a new multichannel image prior,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1451–1464, 2010.
[6] Q. Yuan, L. Zhang, and H. Shen, “Multiframe super-resolution employing a spatially weighted total variation model,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 3, pp. 379–392, 2012.
[7] C. Liu and D. Sun, “A Bayesian approach to adaptive video super resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 209–216, June 2011.
[8] Z. Xiong, X. Sun, and F. Wu, “Robust web image/video super-resolution,” IEEE Transactions on Image Processing, vol. 19, no. 8, pp. 2017–2028, 2010.
[9] M. Shimano, T. Okabe, I. Sato, and Y. Sato, “Video temporal super-resolution based on self-similarity,” in Proceedings of the 10th Asian Conference on Computer Vision, vol. 6492 of Lecture Notes in Computer Science, no. 1, pp. 93–106, 2011.
[10] J. Tian, T. Hou, and M. Li, “Spatio-temporal adaptive super-resolution reconstruction of video based on POCS frame,” Application Research of Computers, vol. 28, no. 7, pp. 2778–2781, 2011.
[11] A. L. D. Martins, A. L. M. Levada, M. R. P. Homem, and N. D. A. Mascarenhas, “MAP-MRF super-resolution image reconstruction using maximum pseudo-likelihood parameter estimation,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 1165–1168, November 2009.
[12] H. Takeda, P. Milanfar, M. Protter, and M. Elad, “Super-resolution without explicit subpixel motion estimation,” IEEE Transactions on Image Processing, vol. 18, no. 9, pp. 1958–1975, 2009.
[13] M. Protter, M. Elad, H. Takeda, and P. Milanfar, “Generalizing the nonlocal-means to super-resolution reconstruction,” IEEE Transactions on Image Processing, vol. 18, no. 1, pp. 36–51, 2009.
[14] N. Dowson and O. Salvado, “Hashed nonlocal means for rapid image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 485–499, 2011.
[15] H. Zheng, A. Bouzerdoum, and S. L. Phung, “Wavelet based nonlocal-means super-resolution for video sequences,” in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 2817–2820, September 2010.
[16] H. Su, L. Tang, Y. Wu, D. Tretter, and J. Zhou, “Spatially adaptive block-based super-resolution,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 1031–1045, 2012.
[17] X. Gao, Q. Wang, X. Li, D. Tao, and K. Zhang, “Zernike-moment-based image super resolution,” IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2738–2747, 2011.
[18] N. Vretos, N. Nikolaidis, and I. Pitas, “3D facial expression recognition using Zernike moments on depth images,” in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 773–776, September 2011.
[19] C. Xu, Y. Chen, Z. Gao, Y. Ye, and T. Shan, “Frame rate up-conversion with true motion estimation and adaptive motion vector refinement,” in Proceedings of the 4th International Congress on Image and Signal Processing (CISP '11), pp. 353–356, October 2011.
[20] N. Jacobson and T. Q. Nguyen, “Scale-aware saliency for application to frame rate up-conversion,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2198–2206, 2012.
[21] C. Lei and Y.-H. Yang, “Optical flow estimation on coarse-to-fine region-trees using discrete optimization,” in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 1562–1569, October 2009.
[22] A. Wedel, D. Cremers, T. Pock, and H. Bischof, “Structure- and motion-adaptive regularization for high accuracy optic flow,” in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 1663–1668, October 2009.
[23] L. Xu, J. Jia, and Y. Matsushita, “Motion detail preserving optical flow estimation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1293–1300, June 2010.
[24] D. Sun, S. Roth, and M. J. Black, “Secrets of optical flow estimation and their principles,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2432–2439, June 2010.
[25] A. Giachetti and N. Asuni, “Corrections to ‘Real-time artifact-free image upscaling’,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2361–2369, 2012.
[26] H. Takeda, S. Farsiu, and P. Milanfar, “Deblurring using regularized locally adaptive kernel regression,” IEEE Transactions on Image Processing, vol. 17, no. 4, pp. 550–563, 2008.
[27] D. Sun, S. Roth, J. Lewis, and M. J. Black, “Learning optical flow,” in Proceedings of the 10th European Conference on Computer Vision (ECCV '08), vol. 5304 of Lecture Notes in Computer Science, pp. 83–97, Marseille, France, 2008.
[28] A. Wedel, T. Pock, and C. Zach, “An improved algorithm for TV-L1 optical flow,” in Statistical and Geometrical Approaches to Visual Motion Analysis, vol. 5604 of Lecture Notes in Computer Science, pp. 23–45, 2009.
[29] S. Baker, S. Roth, D. Scharstein, M. J. Black, J. P. Lewis, and R. Szeliski, “A database and evaluation methodology for optical flow,” in Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV '07), pp. 1–8, October 2007.
Copyright
Copyright © 2013 Meiyu Liang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.