Research Article  Open Access
Yuanjie Shao, Nong Sang, Juncai Peng, Changxin Gao, "Joint Image Deblurring and Matching with Blurred Invariant-Based Sparse Representation Prior", Complexity, vol. 2019, Article ID 3829263, 12 pages, 2019. https://doi.org/10.1155/2019/3829263
Joint Image Deblurring and Matching with Blurred Invariant-Based Sparse Representation Prior
Abstract
Image matching is important for vision-based navigation. However, most image matching approaches do not account for real-world degradations such as image blur, so their performance often drops sharply on blurred inputs. Recent methods address this problem with a two-stage framework: first performing image deblurring and then image matching, which is effective but depends heavily on the quality of the deblurring step. An emerging alternative is to perform image deblurring and matching jointly, using a sparse representation prior to exploit the correlation between the two tasks. However, these approaches compute the sparse representation prior in the original pixel space, which does not adequately account for the influence of image blurring and may therefore yield an inaccurate prior. Fortunately, we can extract blur-invariant pseudo-Zernike moments from images and obtain a reliable sparse representation prior in the blur-invariant space. Motivated by this observation, we propose a joint image deblurring and matching method with a blurred invariant-based sparse representation prior (JDMBISR), which computes the prior in the robust blur-invariant space rather than the original pixel space and thus effectively improves both the quality of image deblurring and the accuracy of image matching. Moreover, since the dimension of the pseudo-Zernike moment feature is much lower than that of the original image feature, our model also increases computational efficiency. Extensive experimental results demonstrate that the proposed method performs favorably against state-of-the-art blurred image matching approaches.
1. Introduction
Image matching has been an active research area in computer vision, with applications such as image mosaicing [1, 2], object tracking [3, 4], and character recognition [5–7]. Recent years have witnessed great progress in this task [8–14]. However, these methods always assume an ideal input, whereas in practical applications images are inevitably blurred by camera shake or defocus. To deal with this problem, two-stage methods have been proposed: they first perform image deblurring [15, 16] to obtain a latent sharp image and then perform image matching on the recovered image. Unfortunately, this straightforward approach depends heavily on the quality of the recovered image, and many deblurring methods are designed to improve human visual perception rather than machine perception; thus, there is no guarantee of improved matching accuracy. Since the purpose of image deblurring here is to improve the accuracy of image matching, some works propose to exploit the correlation between image deblurring and matching [17, 18]. Shao et al. [18] proposed a joint image restoration and matching method based on distance-weighted sparse representation (JRMDSR), which utilizes the sparse representation prior to exploit the correlation between restoration and matching and performs restoration and matching simultaneously. The prior assumes that the blurry image, if correctly restored, can be well represented as a sparse linear combination of a dictionary constructed from the reference image. The key to this method is to obtain reliable representation coefficients that help image restoration and further improve matching accuracy. However, the JRMDSR method computes the sparse representation coefficients in the original pixel space and does not adequately account for the influence of image blurring.
Due to image blurring, the sparse representation coefficients obtained in pixel space may not accurately reflect the similarity between the real-time image and the reference image, making it difficult to obtain a reliable sparse representation prior. Fortunately, we can extract blur-invariant pseudo-Zernike moments [19] from images and calculate the sparse representation coefficients in the blur-invariant space.
The pseudo-Zernike blur invariant is derived from the pseudo-Zernike moments of the blurred image; it is invariant to convolution with a circularly symmetric point spread function. Thus, it can effectively alleviate the influence of image blurring and improve the accuracy of the sparse representation coefficients.
Motivated by the above analysis, we propose a joint image deblurring and matching method with a blurred invariant-based sparse representation prior (JDMBISR). The framework of our JDMBISR is shown in Figure 1. Inspired by JRMDSR [18], our JDMBISR also assumes that the blurry image, if correctly restored, admits a sparse representation over a dictionary constructed from the reference image. Different from JRMDSR, we obtain the sparse representation coefficients in the blur-invariant space rather than the original pixel space, thus improving the accuracy of the sparse representation prior and thereby facilitating the subsequent deblurring and matching tasks. Moreover, since the dimension of the blur invariant is much lower than that of the original pixel vector, our method also reduces the computation time of sparse representation and speeds up matching. We adopt an alternating minimization algorithm to solve the JDMBISR model. The experimental results demonstrate that our JDMBISR method performs favorably against state-of-the-art blurred image matching approaches.
The main contributions of this paper are as follows:
(i) We propose a joint image deblurring and matching method with a blurred invariant-based sparse representation prior to deal with the problem of blurred image matching.
(ii) We extract blur-invariant pseudo-Zernike moments from images and obtain the sparse representation coefficients in the blur-invariant space, which alleviates the influence of image blurring and improves the reliability of the sparse representation prior.
The remainder of the paper is organized as follows. Section 2 reviews related work on pseudo-Zernike blur invariants and image matching. Section 3 details the proposed joint image deblurring and matching model with the blurred invariant-based sparse representation prior. Experimental results and analysis are presented in Section 4. Finally, Section 5 concludes the paper.
2. Related Work
In this section, we first introduce the definition of the pseudo-Zernike blur invariants used in this paper and then review image matching methods.
2.1. Pseudo-Zernike Blurred Invariants
Pseudo-Zernike blur invariants are based on the orthogonal pseudo-Zernike moments, are suited to blur point spread functions with circular symmetry, and offer both blur invariance and noise robustness. Computing them requires first computing the pseudo-Zernike moments and then generating invariants of different orders in an iterative way. Specifically, for an image f(r, θ) in polar coordinates, the pseudo-Zernike moment of order p with repetition q is defined as follows [20]:

A_{pq} = ((p + 1)/π) ∫₀^{2π} ∫₀^1 f(r, θ) R_{pq}(r) e^{−jqθ} r dr dθ,   (1)

where 0 ≤ |q| ≤ p and R_{pq}(r) is the pseudo-Zernike radial polynomial. Since A_{pq} is symmetrical with respect to q, we only consider the case where q ≥ 0.
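As an illustrative sanity check (not code from the paper), the moment definition can be approximated numerically on a Cartesian pixel grid mapped onto the unit disk. The function names below are hypothetical, and the radial polynomial uses the standard pseudo-Zernike coefficient formula:

```python
import numpy as np
from math import factorial

def pzernike_radial(p, q, r):
    """Standard pseudo-Zernike radial polynomial R_pq(r), 0 <= q <= p."""
    out = np.zeros_like(r, dtype=float)
    for k in range(p - q + 1):
        c = ((-1) ** k * factorial(2 * p + 1 - k)
             / (factorial(k) * factorial(p + q + 1 - k) * factorial(p - q - k)))
        out += c * r ** (p - k)
    return out

def pzernike_moment(img, p, q):
    """Approximate A_pq of a square image mapped onto the unit disk.

    The polar-coordinate integral is evaluated as a Cartesian sum, using
    r dr dtheta = dx dy, over pixels inside the disk."""
    n = img.shape[0]
    coords = (np.arange(n) + 0.5) / n * 2 - 1   # pixel centers in [-1, 1]
    x, y = np.meshgrid(coords, coords)
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    mask = r <= 1.0                              # keep only the unit disk
    dA = (2.0 / n) ** 2                          # area of one pixel
    basis = pzernike_radial(p, q, r) * np.exp(-1j * q * theta)
    return (p + 1) / np.pi * np.sum(img * basis * mask) * dA
```

For a constant image, A_00 should be close to 1 and all moments with nonzero repetition should vanish by symmetry, which gives a quick correctness check of the discretization.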
Expanding the radial polynomial as R_{pq}(r) = Σ_{k=q}^{p} B_{pqk} r^k, equation (1) can be reformulated as

A_{pq} = ((p + 1)/π) Σ_{k=q}^{p} B_{pqk} D_{kq},   (2)

where D_{kq} = ∫₀^{2π} ∫₀^1 f(r, θ) r^k e^{−jqθ} r dr dθ denotes the radial moment of order k with repetition q.
According to [20], the radial moments D_{kq} can in turn be expressed as a linear combination of the pseudo-Zernike moments A_{pq} (equation (4)).
Generally speaking, the blurred image g can be regarded as the convolution of the original image f with the point spread function h of the blur kernel, i.e., g(x, y) = f(x, y) ∗ h(x, y). Considering the rotation invariance of the pseudo-Zernike moments, the moments of a circularly symmetric h vanish for all repetitions q ≠ 0 (equation (6)).
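The blur model g = f ∗ h can be sketched in a few lines. This is a minimal illustration with hypothetical helper names, using circular convolution via the FFT and a circularly symmetric Gaussian PSF of the kind the invariants are designed for:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Circularly symmetric Gaussian point spread function, normalized to sum 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return h / h.sum()

def blur(f, h):
    """Blurred image g = f * h, computed as a circular convolution via the FFT.

    Zero-padding h to the image size (the `s` argument) makes the spectra
    the same shape before the pointwise product."""
    H = np.fft.fft2(h, s=f.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(f) * H))
```

Because the PSF sums to one, the blur preserves the total intensity of the image (the DC Fourier coefficient) while smoothing out local detail, which is exactly the degradation the blur invariants are built to cancel.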
According to [21], the radial moments of the blurred image can be written in terms of the radial moments of the original image and of the point spread function (equation (7)).
By substituting equations (4) and (7) into equation (2), the pseudo-Zernike moments of the blurred image can be expressed through those of the original image and of the point spread function (equation (8)).
Based on the above relations, Dai et al. [19] defined the pseudo-Zernike blur invariants for blur point spread functions with circular symmetry (equation (10)), where p denotes the order of the pseudo-Zernike blur invariant.
2.2. Image Matching
Image matching has been intensively studied over the past decade due to its crucial role in computer vision. Traditional image matching methods fall into two classes [22]: feature-based methods and pixel-based methods. Feature-based methods first extract feature vectors from the real-time image and the reference image and then measure the similarity among the feature vectors, thereby obtaining the position of the real-time image. Representative feature-based methods include the Canny operator [23], the Harris operator [24], the SUSAN operator [25], the SIFT descriptor [26], the SURF operator [27], and the ridgelet transform [28]. However, these methods perform poorly when the input image is blurred, since it is hard to extract robust feature vectors from degraded images. Since pixel-based approaches utilize all of the pixels in a local window, they can achieve better performance than feature-based approaches under occlusion. Many pixel-based methods have been proposed, e.g., template matching (TM) [29], increment sign correlation [10], binary coding and phase correlation [30], and the selective correlation coefficient [9]. Recently, some cross-correlation-based methods [8, 31, 32] have also been proposed to improve matching performance. Yoo and Ahn [8] utilized the correlation coefficient of occlusion-free matching to determine the position of the real-time image. Bilal and Masud [31] improved the search speed by applying a monotonically increasing cross-correlation function. Zhu and Deng [32] proposed a gradient-direction-selection cross-correlation method for image matching. However, none of the above methods can efficiently deal with blurred image matching.
An intuitive idea to solve this problem is to first perform image restoration [33–35] and then perform image matching. Unfortunately, this straightforward approach depends heavily on the quality of the image restoration, and many restoration methods are designed to improve human visual perception rather than machine perception; thus, there is no guarantee of improved matching accuracy. Therefore, some works attempt to exploit the correlation between image deblurring and matching [17, 18]. Yang et al. [17] utilized a sparse representation prior to achieve joint face image restoration and recognition. However, to achieve sparsity, the sparse representation may select the wrong images to represent the input, resulting in an inaccurate recognition result that cannot give meaningful guidance for restoration. Since local information can ensure that similar samples have similar representation coefficients, Shao et al. [18] proposed a joint image restoration and matching method based on distance-weighted sparse representation (JRMDSR); they considered both local and sparse information, adopting a distance-weighted sparse representation to obtain better representation coefficients.
However, both methods compute the sparse representation coefficients in the original pixel space, which does not adequately account for the influence of image blurring and thus leads to an inaccurate estimation of the sparse representation prior. In this paper, we obtain the sparse representation coefficients in the blur-invariant space rather than the original pixel space, thus improving the accuracy of the sparse representation prior and thereby facilitating the subsequent deblurring and matching tasks.
3. The Proposed Method
In this section, we will present our JDMBISR model for blurred image matching. For completeness, we first give a brief overview of JRMDSR.
3.1. JRMDSR: An Overview
The JRMDSR method aims to solve the problem of blurred image matching by fully exploiting the correlation between restoration and matching. Given the blurred input image y and the dictionary D, which is constructed by using a sliding window with step size 1 to extract small image blocks from the reference image, the JRMDSR method seeks the recovered clear image x, the sparse representation coefficients α, and the blur kernel k by solving the following optimization problem:

min_{x, α, k} ‖y − x ⊗ k‖₂² + η‖x − Dα‖₂² + λ‖w ⊙ α‖₁ + τ‖∇x‖_s^s + γ‖k‖₂²,   (11)

where w represents the Euclidean distances between the restored image x and the atoms of the dictionary D, ⊙ indicates point multiplication, and s denotes the sparse exponent of the responses of the derivative filters. The matching position of the blurred image is then obtained from the sparse representation coefficients α. The first term is the image reconstruction constraint. The second term expresses that the blurred image, if correctly restored, should be representable as a linear combination of a few atoms of the dictionary. The third, weighted sparse regularization enforces that the representation coefficients are sparse and that similar images have similar representation coefficients. The fourth term represents the sparse prior of natural images on the derivative-filter responses ∇x, with 0 < s < 1. The last term regularizes the blur kernel k, whose norm is required to be as small as possible. The parameters η, λ, τ, and γ control the effects of the last four regularization terms.
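For intuition, the distance-weighted sparse term λ‖w ⊙ α‖₁ can be minimized by iterative soft-thresholding (ISTA), where each coefficient is shrunk in proportion to its weight. The sketch below is a hypothetical stand-in for the solver actually used in the paper (the SPAMS toolbox), not the authors' implementation:

```python
import numpy as np

def weighted_ista(D, y, w, lam=0.1, n_iter=200):
    """Distance-weighted sparse coding by ISTA:
        min_a  0.5 * ||y - D a||^2 + lam * ||w * a||_1
    Larger weights w[i] (atoms far from the input) are shrunk harder,
    so nearby atoms are preferred in the representation."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        t = lam * w / L                    # per-coefficient threshold
        a = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # weighted soft-threshold
    return a
```

With an orthonormal dictionary the iteration reduces to a single weighted soft-threshold, which makes the effect of the weights easy to verify: doubling a weight moves that coefficient further toward zero.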
The basic idea of the JRMDSR is that the blurred image, if correctly recovered, should be represented as a sparse linear combination of the dictionary. Meanwhile, a better restored image can lead to more accurate representation coefficients, which in turn can also improve the quality of image restoration. The JRMDSR method iteratively recovers the input image by seeking the sparsest representation, thus correcting the initial mismatch and improving the confidence of image matching.
However, in real applications there is always some residual blur in the recovered image; thus, the sparse representation coefficients obtained in pixel space may not accurately reflect the similarity between the real-time image and the reference image. To overcome this problem and improve image matching performance, we next propose a joint image deblurring and matching method that obtains the sparse representation prior in a blur-invariant space rather than the original pixel space.
3.2. The Proposed JDMBISR Model
In this section, we compute the sparse representation coefficients in the blur-invariant space and propose the joint image deblurring and matching method with a blurred invariant-based sparse representation prior (JDMBISR). The key idea of JDMBISR is to obtain the sparse representation prior in the blur-invariant space rather than the original pixel space. The JRMDSR approach achieves good performance by obtaining the sparse representation prior in the original pixel space. However, in practical applications the restored image often retains some blur, so the sparse representation coefficients obtained in pixel space may not accurately reflect the similarity between the real-time image and the reference image. The blur invariant [21, 36, 37] is a special image feature that remains unchanged when the image is blurred. Generally, blur invariants can be divided into orthogonal and nonorthogonal ones, with the former superior to the latter. Therefore, we extract the blur-invariant pseudo-Zernike moments [19] from images and perform sparse representation in this blur-invariant space. We formulate our JDMBISR model as follows:

min_{x, α, k} ‖y − x ⊗ k‖₂² + η‖x − Dα‖₂² + λ‖w_B ⊙ α‖₁ + τ‖∇x‖_s^s + γ‖k‖₂²,   (12)

where the sparse representation coefficients α and the distance weights w_B are obtained in the blur-invariant space. Given the image dictionary D, we obtain the blur-invariant dictionary D_B by extracting the blur-invariant pseudo-Zernike moments from all image patches in D. Similarly, we extract the blur-invariant pseudo-Zernike moments from the blurred real-time image. Therefore, we can compute the sparse representation coefficients in this blur-invariant space in each iteration and utilize this prior to aid image deblurring and matching. As we can see from (12), similar to JRMDSR, JDMBISR also iteratively recovers the input image by seeking the sparsest representation among the small image blocks of the reference image.
3.3. Optimization
In this section, we adopt an alternating minimization algorithm [38, 39] to solve the proposed model; it divides the original problem into three subproblems and solves each subproblem separately while keeping the others fixed. By alternately optimizing the subproblems, our model finally converges and outputs the result of image deblurring and matching.
Firstly, according to [19], we extract the blur-invariant pseudo-Zernike feature vector from the blurred image y and the blur-invariant dictionary D_B from the image dictionary D. Then, we initialize the sparse representation coefficients α by solving the sparse representation of the blurred feature vector with respect to D_B, and we initialize the restored image as x = Dα. In the following, we update k, x, and α iteratively.

Updating k. To update the blur kernel k, we fix all other variables and solve the following objective function:

min_k ‖y − x ⊗ k‖₂² + γ‖k‖₂².

Given the restored image x and the blurred image y, this problem has a closed-form solution, so we update k by

k = F⁻¹( conj(F(x)) ⊙ F(y) / (conj(F(x)) ⊙ F(x) + γ) ),

where F denotes the fast Fourier transform, F⁻¹ denotes the inverse fast Fourier transform, conj(·) denotes the complex conjugate, and ⊙ indicates the element-wise product.

Updating x. We update x by solving

min_x ‖y − x ⊗ k‖₂² + η‖x − Dα‖₂² + τ‖∇x‖_s^s.

To solve this problem, we introduce an auxiliary variable h for the derivative responses ∇x. With the blur kernel k, the blurred template image y, and the representation coefficients α fixed, we decompose the problem into an x-subproblem and an h-subproblem. To update the recovered template image x, we first fix the auxiliary variable h and solve the x-subproblem, a least-squares problem that admits a closed-form solution in the Fourier domain. Secondly, we fix x and update each dimension of h separately via a shrinkage operation.

Updating α. Finally, we update the sparse representation coefficients α.
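The closed-form kernel update can be written directly with NumPy FFTs. This is an illustrative sketch with hypothetical names; the nonnegativity clipping and sum-to-one normalization are common practical post-processing steps for physical PSFs, assumed here rather than taken from the paper:

```python
import numpy as np

def update_kernel(x, y, gamma=1e-2):
    """Closed-form blur-kernel update in the Fourier domain:
        k = F^-1( conj(F(x)) * F(y) / (|F(x)|^2 + gamma) ),
    the minimizer of ||y - x (*) k||^2 + gamma * ||k||^2 under
    circular convolution."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    K = np.conj(X) * Y / (np.conj(X) * X + gamma)
    k = np.real(np.fft.ifft2(K))
    k = np.maximum(k, 0.0)          # physical PSFs are nonnegative (assumption)
    s = k.sum()
    return k / s if s > 0 else k    # normalize to sum 1 (assumption)
```

A quick check: if the "blurred" image equals the sharp image, the recovered kernel should be close to a delta, i.e., nearly all of its mass concentrated at the origin.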
Since sparse representation coefficients obtained by solving this subproblem in the original pixel space would be inaccurate, our method instead computes the sparse representation prior in the robust blur-invariant space as follows:

min_α ‖B(x) − D_B α‖₂² + λ‖w_B ⊙ α‖₁,

where B(x) denotes the blur-invariant feature of the current restored image and D_B the blur-invariant dictionary.
More specifically, we extract the pseudo-Zernike blur invariants of the restored image and of each dictionary atom according to equation (10), using the same orders and repetitions for all invariants, and obtain the weights by calculating the Euclidean distances between the blur-invariant feature of the restored image and the blur-invariant dictionary atoms. Then, the SPAMS toolbox [40] is applied to solve this weighted sparse representation. Finally, the matching position of the real-time image in the reference image is obtained from the representation coefficients over the set of central coordinates of the small image blocks of the reference image. Algorithm 1 summarizes the procedure of our joint image deblurring and matching method with the blurred invariant-based sparse representation prior.
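One common way to turn the representation coefficients into a matching position, assumed here for illustration (the paper does not spell out the selection rule), is to take the central coordinate of the dictionary block with the largest coefficient magnitude:

```python
import numpy as np

def match_position(alpha, centers):
    """Return the center coordinate of the dictionary block whose
    sparse-representation coefficient has the largest magnitude.

    alpha   : 1-D array of representation coefficients, one per block
    centers : list of (row, col) center coordinates, one per block
    """
    i = int(np.argmax(np.abs(alpha)))
    return centers[i]
```

Because the weighted sparse representation concentrates mass on the few blocks most similar to the input, the dominant coefficient is a natural point estimate of the matching position.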

4. Experiments and Analysis
In this section, we conduct extensive experiments on six aerial images to demonstrate the effectiveness of the proposed JDMBISR method. In the experiments, all six reference images share the same fixed size. Firstly, we generate a blurry version of each reference image using a Gaussian blur kernel, and then we randomly select 100 small images from each blurry reference image as the blurred real-time images, whose size is set as 50 × 50. Next, we construct the dictionary by using a sliding window with step size 1 to extract image blocks from the reference image; the size of each image block is the same as that of the blurred real-time image.
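The step-1 sliding-window dictionary construction can be sketched as follows (hypothetical names; each block is vectorized into one dictionary column and paired with its center coordinate):

```python
import numpy as np

def build_dictionary(ref, patch):
    """Extract every patch x patch block from the reference image with a
    step-1 sliding window. Each block becomes one column of the dictionary;
    its center coordinate is recorded for later position lookup."""
    H, W = ref.shape
    cols, centers = [], []
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            cols.append(ref[i:i + patch, j:j + patch].ravel())
            centers.append((i + patch // 2, j + patch // 2))
    return np.stack(cols, axis=1), centers
```

For an H × W reference image and p × p blocks this yields (H − p + 1)(W − p + 1) atoms, which is why reducing the feature dimension (pixels vs. blur invariants) matters so much for the sparse-coding cost.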
We empirically set the parameters η, λ, τ, γ, and s, as well as the number of iterations. We evaluate the performance of our JDMBISR against state-of-the-art image matching methods, including template matching based on the normalized correlation coefficient (NCC) [41], sparse representation-based classification (SRC) [22, 42], deblurring followed by NCC (DNCC), and JRMDSR [18]. For image matching, we adopt the position deviation (PD), the Manhattan distance between the central coordinates of the image localization and the real position:

PD = |u − u′| + |v − v′|,

where (u, v) denotes the central coordinates of the image localization and (u′, v′) the central coordinates of the real position. For image deblurring, we utilize the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) between the recovered template image and the latent template image to evaluate performance.
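The PD metric, i.e., the Manhattan distance between predicted and true center coordinates, is one line of code (hypothetical name):

```python
def position_deviation(pred, truth):
    """Manhattan (L1) distance between predicted and ground-truth
    (row, col) center coordinates."""
    return abs(pred[0] - truth[0]) + abs(pred[1] - truth[1])
```

A PD of 0 means the predicted center coincides exactly with the ground-truth position.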
4.1. An Illustrative Example
Firstly, we illustrate the proposed JDMBISR method with a simple example in Figure 2. Given a reference image and a blurry image, we jointly estimate the blur kernel, the latent sharp image, and the matching position in an iterative way. Figure 2 shows the restored images and the matching results on the reference image in each iteration, and Figure 3 shows the image matching deviation and the image deblurring result of the example. For image matching, we can observe that the position deviation becomes smaller and smaller as the optimization iterations proceed, which means that the underlying position of the blurry image can be determined with increasing confidence. Meanwhile, the restored image resembles the clear image more and more closely, as indicated by the increasing PSNR and SSIM. In the initialization stage, the distance between the predicted position and the ground truth is 3 pixels. After two iterations, with a better restored image, the approach finds the accurate position. This implies that our approach can effectively regularize image deblurring while seeking the sparsest representation for image matching. On the one hand, a better recovered image yields better sparse representation coefficients for image matching; on the other hand, the sparse representation coefficients, tightly connected with image matching, provide a powerful regularization for image deblurring.
4.2. Efficiency Analysis of Sparse Representation Prior in Blurred Invariant Space
In this section, we analyze the effectiveness of the sparse representation prior in the blur-invariant space. Specifically, we compare two sparse representation-based image matching methods on the above six aerial images: one obtains the sparse representation in the original pixel space (SRPIXEL) [22, 42], and the other obtains it in the blur-invariant space (SRBI). Both methods use sparse representation to solve the matching problem, but the SRBI method extracts blur-invariant pseudo-Zernike moments and performs the sparse representation in this blur-invariant space rather than the pixel space.
The matching results of the two methods are listed in Tables 1 and 2. In the experiments, σ is the standard deviation of the Gaussian blur kernel and ranges from 1 to 5. The dimension of the pixel vector in the SRPIXEL method is 2500, while the dimension of the blur invariant in the SRBI method is only 50. The results show that the matching accuracy of the two methods is similar for small σ, but the accuracy of the SRBI method exceeds that of the SRPIXEL method as σ increases. For instance, at the largest blur level, the SRPIXEL method achieves 31.67, while the SRBI method achieves 42.17 under the same conditions. This is because the SRBI method extracts blur-invariant features from the image, thereby alleviating the influence of image blurring on matching. From these observations, we conclude that the sparse representation prior obtained in the blur-invariant space is more accurate than that obtained in pixel space, especially when the image is severely blurred.


4.3. Results of Experiments
In this section, we conduct experiments on joint image deblurring and matching under different degradation settings. In our JDMBISR algorithm, image deblurring and matching are tightly coupled. Thus, we present the results for image matching and deblurring separately. In addition, we also give a comparison of matching speed.
4.3.1. Image Matching Results Comparison
Tables 3 and 4 present the image matching accuracy for 600 blurry images on the six reference images, where the standard deviation of the Gaussian blur kernel is set as 3 and 4, respectively. From these tables, we observe that the performance of the DNCC method is very poor: when the image is severely blurred, the poor quality of image deblurring seriously harms matching performance. We also observe that our JDMBISR algorithm performs best among all methods in all cases, which indicates that the sparse representation obtained in the blur-invariant space is more reliable than that obtained in the original pixel space, thus improving the quality of image deblurring; in turn, a better restored image leads to better matching results.


To visually demonstrate the effectiveness of the proposed JDMBISR method, we choose a blurry image and its corresponding reference image as an illustrative example, where the standard deviation of the Gaussian blur kernel is set as 3. Figure 4 shows the image matching and restoration results of our JDMBISR method and the other four methods. We observe that only our method obtains the correct matching position together with a restored image of better quality.
4.3.2. Image Deblurring Results Comparison
For image deblurring, we randomly select 600 blurry images for each blur level, with the standard deviation of the Gaussian blur kernel ranging from 1 to 5, and utilize PSNR and SSIM to compare the deblurring performance of our JDMBISR method and the JRMDSR method. Table 5 summarizes the average PSNR of the two methods as σ ranges from 1 to 5. From the table, we observe that the image deblurring performance of JDMBISR is better than that of JRMDSR in all cases, in accordance with the SSIM results in Table 6. This implies that the sparse representation prior obtained in the blur-invariant space is more accurate than that obtained in pixel space, thus effectively improving the quality of image deblurring.


4.3.3. Matching Speed Comparison
In practical applications, we should consider not only the matching accuracy but also the matching speed. Therefore, we compare the computing time of the JRMDSR and JDMBISR methods; the results are listed in Table 7. In the experiment, the size of the blurry input image is set as 50 × 50; thus, the dimension of the pixel vector in the JRMDSR method is 2500. As shown in Table 7, the JRMDSR method takes 43.65 seconds for joint image deblurring and matching, while JDMBISR takes only 5.6 seconds, since the dimension of the blur-invariant vector is much lower than that of the pixel vector. Our method is thus much faster than the JRMDSR method and can meet the requirements of practical applications.

4.4. Robustness Analysis of the Proposed Approach
In this section, we analyze the influence of blur kernel size and scale variation on image matching.
4.4.1. Influence of Blur Kernel Size
To verify the robustness of our method to the kernel size, we perform image matching on images with different degrees of blur, in which σ is set as 1, 2, 3, 4, and 5, respectively, with the kernel size growing accordingly. For each kernel size, 600 corresponding blurry images are used. The matching results of NCC, SRC, DNCC, JRMDSR, and JDMBISR are listed in Table 8. From Table 8, we observe that the matching accuracy of all methods decreases as σ increases, which means that image blurring poses great challenges to image matching. However, our JDMBISR method achieves higher matching accuracy than the other methods in all cases. For example, at the largest blur level, our JDMBISR method achieves 59.33, while the highest accuracy among the other methods is only 48.50. From these results, we conclude that our JDMBISR method is more robust to kernel size variation than the other methods.

4.4.2. Influence of Scale Variation
To verify the robustness of our method to scale variation, we conduct image matching experiments on blurry input images of different sizes. In the experiment, we evaluate two reduced sizes of the blurry input image, with the standard deviation of the Gaussian blur kernel set as 3. The matching results of the NCC, SRC, DNCC, JRMDSR, and JDMBISR methods are listed in Tables 9 and 10. From these tables, we observe that the matching accuracy of the JDMBISR method is higher than that of the other methods in all cases, especially for the smaller input size. Besides, we also see that the matching accuracy decreases as the blurry input image becomes smaller. This is because, with the same blur kernel size, the smaller the blurry input image is, the more blurred it is. Nevertheless, the matching accuracy of our JDMBISR method still outperforms the other methods when the blurry input image becomes smaller.


5. Conclusions
In this paper, we propose a joint image deblurring and matching method with a blurred invariant-based sparse representation prior (JDMBISR). Our method obtains the sparse representation prior in the robust blur-invariant space rather than the original pixel space, thus improving the accuracy of the sparse representation prior and thereby facilitating the subsequent image deblurring and matching tasks. Moreover, since the dimension of the pseudo-Zernike moment feature is much lower than that of the original image feature, our model also increases computational efficiency. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art blurred image matching approaches in terms of both deblurring and matching.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments
This study was supported by the project of the National Natural Science Foundation of China (nos. 61433007 and 61901184).
References
 M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” International Journal of Computer Vision, vol. 74, no. 1, pp. 59–73, 2007. View at: Publisher Site  Google Scholar
 R. Szeliski, “Image alignment and stitching: a tutorial,” Foundations and Trends® in Computer Graphics and Vision, vol. 2, pp. 1–104, 2007. View at: Google Scholar
 T. Zhang, K. Jia, C. Xu, Y. Ma, and N. Ahuja, “Partial occlusion handling for visual tracking via robust part matching,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1258–1265, Columbus, OH, USA, June 2014. View at: Google Scholar
 T. Zhang, S. Liu, N. Ahuja, M.H. Yang, and B. Ghanem, “Robust visual tracking via consistent lowrank sparse learning,” International Journal of Computer Vision, vol. 111, pp. 171–190, 2015. View at: Google Scholar
 M. Ryan and N. Hanafiah, “An examination of character recognition on id card using template matching approach,” Procedia Computer Science, vol. 59, pp. 520–529, 2015. View at: Google Scholar
 R. Boia, C. Florea, L. Florea, and R. Dogaru, “Logo localization and recognition in natural images using homographic class graphs,” Machine Vision and Applications, vol. 27, pp. 287–301, 2016. View at: Google Scholar
 A. Alaei and M. Delalandre, “A complete logo detection/recognition system for document images,” in Proceedings of the 2014 11th IAPR International Workshop on Document Analysis Systems, pp. 324–328, IEEE, Tours, France, April 2014. View at: Google Scholar
 J.C. Yoo and C. W. Ahn, “Image matching using peak signaltonoise ratiobased occlusion detection,” IET Image Processing, vol. 6, no. 5, pp. 483–495, 2012. View at: Publisher Site  Google Scholar
 S. I. Kaneko, Y. Satoh, and S. Igarashi, “Using selective correlation coefficient for robust image registration,” Pattern Recognition, vol. 36, no. 5, pp. 1165–1173, 2003. View at: Publisher Site  Google Scholar
 S. I. Kaneko, I. Murase, and S. Igarashi, “Robust image registration by increment sign correlation,” Pattern Recognition, vol. 35, no. 10, pp. 2223–2234, 2002. View at: Publisher Site  Google Scholar
 J. Cheng, Y. Wu, W. Abd-Almageed, and P. Natarajan, "QATM: quality-aware template matching for deep learning," 2019, https://arxiv.org/abs/1903.07254.
 R. Kat, R. Jevnisek, and S. Avidan, "Matching pixels using co-occurrence statistics," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1751–1759, Salt Lake City, UT, USA, June 2018.
 I. Talmi, R. Mechrez, and L. Zelnik-Manor, "Template matching with deformable diversity similarity," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 175–183, Honolulu, HI, USA, July 2017.
 T. Dekel, S. Oron, M. Rubinstein, S. Avidan, and W. T. Freeman, "Best-buddies similarity for robust template matching," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2021–2029, Boston, MA, USA, June 2015.
 J. Pan, D. Sun, H. Pfister, and M.-H. Yang, "Blind image deblurring using dark channel prior," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1628–1636, Las Vegas, NV, USA, June 2016.
 L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1107–1114, Portland, OR, USA, June 2013.
 J. Yang, Y. Zhang, N. M. Nasrabadi, and T. S. Huang, "Close the loop: joint blind image restoration and recognition with sparse representation prior," in Proceedings of the IEEE International Conference on Computer Vision, pp. 770–777, Barcelona, Spain, November 2011.
 Y. Shao, N. Sang, C. Gao, and W. Lin, "Joint image restoration and matching based on distance-weighted sparse representation," in Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), pp. 2498–2503, IEEE, Beijing, China, August 2018.
 X. Dai, T. Liu, H. Shu, and L. Luo, "Pseudo-Zernike moment invariants to blur degradation and their use in image recognition," in Proceedings of the International Conference on Intelligent Science and Intelligent Data Engineering, pp. 90–97, Springer, Nanjing, China, October 2012.
 H. Zhang, Z. Dong, and H. Shu, "Object recognition by a complete set of pseudo-Zernike moment invariants," in Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 930–933, IEEE, Dallas, TX, USA, March 2010.
 B. Chen, H. Shu, H. Zhang, G. Coatrieux, L. Luo, and J. L. Coatrieux, "Combined invariants to similarity transformation and to blur using orthogonal Zernike moments," IEEE Transactions on Image Processing, vol. 20, no. 1, pp. 345–360, 2011.
 S. Yang, B. Xiao, L. Yan, Y. Xia, M. Fu, and Y. Liu, "Robust scene matching method based on sparse representation and iterative correction," Image and Vision Computing, vol. 60, pp. 115–123, 2017.
 J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
 C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the Alvey Vision Conference, pp. 147–151, Manchester, UK, September 1988.
 S. M. Smith and J. M. Brady, "SUSAN: a new approach to low level image processing," International Journal of Computer Vision, vol. 23, no. 1, pp. 45–78, 1997.
 D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
 H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: speeded up robust features," in Proceedings of the European Conference on Computer Vision, pp. 404–417, Springer, Graz, Austria, May 2006.
 D. L. Donoho, "Orthonormal ridgelets and linear singularities," SIAM Journal on Mathematical Analysis, vol. 31, no. 5, pp. 1062–1099, 2000.
 L. G. Brown, A Survey of Image Registration Techniques, ACM, New York, NY, USA, 1992.
 B. S. Liu, L. P. Yan, and D. H. Zhou, "Robust scene matching method based on binary coding and phase correlation," Fire Control & Command Control, Taiyuan, Shanxi, China, 2007.
 M. Bilal and S. Masud, "Efficient computation of correlation coefficient using negative reference in template matching applications," IET Image Processing, vol. 6, no. 2, pp. 197–204, 2012.
 H. Zhu and L. Deng, "Image matching using gradient orientation selective cross correlation," Optik, vol. 124, no. 20, pp. 4460–4464, 2013.
 R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, "Removing camera shake from a single photograph," ACM Transactions on Graphics, vol. 25, no. 3, pp. 787–794, 2006.
 Q. Shan, Z. Li, J. Jia, and C.-K. Tang, "Fast image/video upsampling," ACM Transactions on Graphics, vol. 27, no. 5, pp. 1–10, 2008.
 J. Yang, J. Wright, T. Huang, and Y. Ma, "Image super-resolution as sparse representation of raw image patches," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, Anchorage, AK, USA, June 2008.
 H. Zhang, H. Shu, G. N. Han, G. Coatrieux, L. Luo, and J. L. Coatrieux, "Blurred image recognition by Legendre moment invariants," IEEE Transactions on Image Processing, vol. 19, no. 3, pp. 596–611, 2010.
 H. Zhu, M. Liu, H. Ji, and Y. Li, "Combined invariants to blur and rotation using Zernike moment descriptors," Pattern Analysis and Applications, vol. 13, no. 3, pp. 309–319, 2010.
 Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
 D. Krishnan and R. Fergus, "Fast image deconvolution using hyper-Laplacian priors," in Advances in Neural Information Processing Systems, pp. 1033–1041, Vancouver, Canada, December 2009.
 J. Mairal, F. Bach, J. Ponce, and G. Sapiro, "Online learning for matrix factorization and sparse coding," Journal of Machine Learning Research, vol. 11, pp. 19–60, 2010.
 K. Briechle and U. D. Hanebeck, "Template matching using fast normalized cross correlation," in Proceedings of SPIE, vol. 4387, pp. 95–103, 2001.
 J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
Copyright
Copyright © 2019 Yuanjie Shao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.