Research Article  Open Access
Image Deconvolution by Means of Frequency Blur Invariant Concept
Abstract
Various blur invariant descriptors have been proposed to date, defined either in the spatial domain or based on properties available in the moment domain. In this paper, a frequency framework is proposed to develop blur invariant features that are used to deconvolve a degraded image caused by a Gaussian blur. These descriptors are obtained by establishing an equivalent relationship between the Fourier transforms of the blurred and original images, each normalized by its value at a fixed frequency set to one. An advantage of the proposed invariant descriptors is that both the point spread function (PSF) and the original image can be estimated. The performance of the frequency invariants is demonstrated through experiments, and an image deconvolution is performed as an additional application to verify the proposed blur invariant features.
1. Introduction
Blur is a degradation classified as radiometric, created by factors such as motion, overexposure, camera vibration, and strong illumination. The point spread function (PSF) of an imaging system introduces some level of blurring in the captured images. The PSF is most often modeled as a Gaussian distribution, which is widely applicable in imaging devices [1]. Restoration of images can be performed using nonblind and blind techniques. In the nonblind case [2–4], the estimation of the original image is obtained using prior knowledge of the PSF, which can be derived from various modeling algorithms [5, 6]. In most cases, however, the PSF is unknown; hence, in order to estimate both the PSF and the original image, blind deconvolution techniques are adopted [7–13]. The general model for the observed blurred image $g(x,y)$ of a scene $f(x,y)$ is described by a convolution integral:
$$g(x,y) = f(x,y) * h(x,y) + n(x,y), \tag{1}$$
where $h(x,y)$ and $n(x,y)$ are the PSF kernel and random noise, respectively. Here, $*$ denotes the 2D linear convolution.
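The degradation model of (1) can be simulated in a few lines. The following is a minimal sketch (not the authors' implementation), assuming NumPy and using circular FFT-based convolution in place of the linear convolution integral:

```python
import numpy as np

def degrade(f, h, noise_sigma=0.0, rng=None):
    """Simulate g = f * h + n: circular convolution of image f with PSF h,
    plus optional additive Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Convolution theorem: multiplication in the Fourier domain.
    G = np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)
    g = np.real(np.fft.ifft2(G))
    if noise_sigma > 0:
        g = g + rng.normal(0.0, noise_sigma, f.shape)
    return g
```

A delta-function PSF leaves the image unchanged, which gives a quick sanity check on the model.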
Earlier work on image features invariant to blur falls into three categories. The first concerns deriving blur invariant properties of the PSF; the second is estimating the PSF; and the third is recovering the original image via deblurring. A frequency domain approach to blur invariants has been used to develop features for object recognition [14]. Blur invariants in the moment domain were proposed by Flusser and Suk, which are invariant to convolution of an image with an arbitrary symmetric PSF kernel [15]. Two recognition methods for motion-blurred images have been developed based on a relation between the moments of the blurred and original images [16]. A set of invariants derived from Zernike moments is simultaneously invariant to similarity transformation and to convolution with a circularly symmetric PSF [17]. Dai et al. [18] developed a blur invariant feature set for degraded image recognition systems based on the orthogonal Legendre moments. Yan et al. [19] used second-order central moment minimization to restore astronomical images degraded by atmospheric turbulence. Khan et al. presented a biometric technique for identifying a person from an iris image by means of image moments and the k-means algorithm [20]; in this method, the iris is segmented from the acquired eye image using an edge detection algorithm. Wavelet-domain blur invariants have been proposed for 2D discrete signals which are invariant to centrally symmetric blurs [21]. Honarvar et al. [22] derived new algorithms for image deblurring by means of image reconstruction from the complete set of geometric and complex moments. The adaptive total variation (TV) minimization technique of Yoon et al. [23] has been used for image enhancement from flash and no-flash pairs. Accurate sparse-projection image reconstruction via nonlocal TV regularization was proposed by Zhang et al. for tomography applications [24].
In this paper, blur invariant features in the Fourier domain are proposed. By establishing a relationship between the normalized Fourier transforms of the blurred and the original images, it is possible to estimate the original image.
2. Frequency Domain Concerns
In this section, we establish a new frequency blur invariant for degraded images based on a Gaussian PSF. Since an imaging system can be modeled as the 2D convolution in (1), this equation can be transformed to the Fourier domain. For the frequency analysis, we consider the imaging system both in the presence and in the absence of noise.
2.1. Noise Effect
In the presence of noise, the degradation model can be expressed in the Fourier domain as
$$G(u,v) = F(u,v)\,H(u,v) + N(u,v), \tag{2}$$
where $G(u,v)$, $F(u,v)$, $H(u,v)$, and $N(u,v)$ are the frequency responses of the observed image, original image, PSF, and noise, respectively. The Wiener deconvolution method has widespread use in image deconvolution applications, as the frequency spectrum of most visual images is fairly well behaved and may be estimated easily [25, 26]. Here, the target is to find a filter $q(x,y)$ such that the estimate can be written as a convolution, $\hat{f}(x,y) = q(x,y) * g(x,y)$, minimizing the mean square error, where $\hat{f}(x,y)$ is an estimation of $f(x,y)$. The Wiener deconvolution filter provides such a $q(x,y)$; it is described in the frequency domain by
$$Q(u,v) = \frac{H^{*}(u,v)}{|H(u,v)|^{2} + S_{n}(u,v)/S_{f}(u,v)}, \tag{3}$$
where $S_{f}(u,v)$ and $S_{n}(u,v)$ are the mean power spectral densities of the original image and the noise, and the superscript $*$ denotes complex conjugation. Using this technique to find the best reconstruction of a noisy image can be compared with other algorithms such as Gaussian filtering.
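A minimal sketch of the Wiener deconvolution filter described above, assuming NumPy. The unknown ratio $S_n/S_f$ is replaced by a scalar noise-to-signal constant `nsr`, a common simplification rather than the paper's exact setup:

```python
import numpy as np

def wiener_deconvolve(g, h, nsr=1e-3):
    """Wiener filter Q = H* / (|H|^2 + S_n/S_f), with the spectral ratio
    S_n/S_f approximated by the scalar `nsr`."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    Q = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(Q * G))
```

With `nsr` set very small and no noise present, the filter reduces to an inverse filter and recovers the original image almost exactly.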
2.2. Proposed Frequency Blur Invariant
If noise is neglected, (2) reduces to
$$G(u,v) = F(u,v)\,H(u,v). \tag{4}$$
Here, we consider a Gaussian distribution for the PSF as
$$h(x,y) = \frac{1}{2\pi\sigma^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right). \tag{5}$$
Assume that the imaging system does not change the overall brightness of the image; that is,
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(x,y)\,dx\,dy = 1. \tag{6}$$
It is clear that (5) is a separable function in terms of $x$ and $y$, and we can rewrite it as follows:
$$h(x,y) = h_{1}(x)\,h_{1}(y), \qquad h_{1}(t) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{t^{2}}{2\sigma^{2}}\right). \tag{7}$$
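The two defining properties of the Gaussian PSF in (5)–(7), unit sum (brightness preservation) and separability, can be sketched as follows; this is an illustrative NumPy construction in which the sampling grid and truncation to a finite mask are assumptions:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Unit-sum discrete Gaussian PSF of shape (size, size), built from the
    separable 1-D factor h1 of Eq. (7)."""
    x = np.arange(size) - size // 2
    h1 = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    h = np.outer(h1, h1)      # separability: h(x, y) = h1(x) * h1(y)
    return h / h.sum()        # normalize so overall brightness is preserved (Eq. (6))
```

The resulting mask is symmetric, peaks at its center, and is exactly rank one, reflecting the separability used throughout the derivation.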
In order to obtain the Fourier transform of the 2D Gaussian PSF, it is easiest to first consider the 1D PSF; using the formula in [27], we have
$$H_{1}(u) = \exp\!\left(-\frac{\sigma^{2}u^{2}}{2}\right). \tag{8}$$
For a 2D PSF, because of its separability property, the Fourier transform of (7) can be written as
$$H(u,v) = \exp\!\left(-\frac{\sigma^{2}(u^{2}+v^{2})}{2}\right). \tag{9}$$
Substituting (9) into (4), we get
$$G(u,v) = F(u,v)\exp\!\left(-\frac{\sigma^{2}(u^{2}+v^{2})}{2}\right). \tag{10}$$
To obtain the frequency domain blur invariant, we set both frequencies to one in (10), which leads to
$$\exp(-\sigma^{2}) = \frac{G(1,1)}{F(1,1)}. \tag{11}$$
The PSF kernel can be eliminated by substituting (11) into (10), which yields
$$\frac{G(u,v)}{G(1,1)^{(u^{2}+v^{2})/2}} = \frac{F(u,v)}{F(1,1)^{(u^{2}+v^{2})/2}}. \tag{12}$$
Equation (12) gives the proposed blur invariant features in the Fourier domain for the full range of frequencies, independent of the Gaussian blur kernel ($\sigma$). In this paper, these features are obtained by normalizing the Fourier transforms of the original and blurred images by their respective values at the fixed frequency $(1,1)$, namely $F(1,1)$ and $G(1,1)$.
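The invariance claimed in (12) can be checked numerically without any image by working directly with the continuous spectra; the toy original spectrum `F` below is an arbitrary illustrative choice, not taken from the paper:

```python
import numpy as np

def blur_invariant(S, u, v):
    """Eq. (12): the spectrum S normalized by its value at frequency (1, 1)."""
    return S(u, v) / S(1.0, 1.0) ** ((u ** 2 + v ** 2) / 2.0)

sigma = 1.3
F = lambda u, v: np.exp(-(u ** 2 + 2.0 * v ** 2) / 10.0)          # toy original spectrum
H = lambda u, v: np.exp(-sigma ** 2 * (u ** 2 + v ** 2) / 2.0)    # Gaussian transfer function, Eq. (9)
G = lambda u, v: F(u, v) * H(u, v)                                # blurred spectrum, Eq. (10)
```

Because $H(1,1)^{(u^{2}+v^{2})/2} = H(u,v)$ for the Gaussian transfer function, the normalization cancels the blur exactly, so the invariant computed from `G` matches the one computed from `F` at every frequency.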
3. Image Deconvolution
In this section, we present an image deconvolution method based on the derivatives of the blurred image function, which are defined in terms of differences. We begin with the 1D version of (10), because the separability of the Gaussian distribution allows an easy 2D implementation. To deconvolve the original signal, we rewrite (10) in 1D form as
$$G(u) = F(u)\exp\!\left(-\frac{\sigma^{2}u^{2}}{2}\right), \qquad\text{that is,}\qquad F(u) = G(u)\exp\!\left(\frac{\sigma^{2}u^{2}}{2}\right). \tag{13}$$
Since the inverse Fourier transform of $\exp(\sigma^{2}u^{2}/2)$ does not exist, we cannot obtain the original signal from (13) in explicit form. If we use the Taylor series expansion of the squared exponential function, however, it is possible to connect the degraded signal to its original form. Equation (13) leads to
$$F(u) = \sum_{k=0}^{\infty}\frac{1}{k!}\left(\frac{\sigma^{2}}{2}\right)^{k} u^{2k}\,G(u). \tag{14}$$
By using the high-order derivative (differentiation) property of the Fourier transform, we have
$$\mathcal{F}\{g^{(2k)}(x)\} = (ju)^{2k}G(u) = (-1)^{k}u^{2k}G(u). \tag{15}$$
Using (15) in (14) and taking the inverse Fourier transform yields
$$f(x) = \sum_{k=0}^{\infty}\frac{1}{k!}\left(-\frac{\sigma^{2}}{2}\right)^{k} g^{(2k)}(x). \tag{16}$$
Equation (16) involves the even-order derivatives of the degraded signal, which can be defined as differences. For example, the definition of a second-order derivative as a difference is [28]
$$g''(x) \approx g(x+1) - 2g(x) + g(x-1). \tag{17}$$
By generalizing this definition to high-order derivatives, we can approximate the continuous derivatives with discrete differences as
$$g^{(2k)}(x) \approx \sum_{m=0}^{2k}(-1)^{m}\binom{2k}{m}\, g(x+k-m). \tag{18}$$
Substituting (18) into (16) yields the original signal in terms of the degraded signal as
$$f(x) = \sum_{k=0}^{\infty}\frac{1}{k!}\left(-\frac{\sigma^{2}}{2}\right)^{k}\sum_{m=0}^{2k}(-1)^{m}\binom{2k}{m}\, g(x+k-m). \tag{19}$$
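A 1D sketch of the series deconvolution in (16)–(19), assuming NumPy; the truncation length `terms`, the kernel radius, and the circular boundary handling are illustrative choices, not the paper's:

```python
import numpy as np
from math import comb, factorial

def gaussian_blur_1d(f, sigma, radius=3):
    """Circularly blur a 1-D signal with a truncated, unit-sum Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    h = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    h /= h.sum()
    return sum(w * np.roll(f, s) for w, s in zip(h, x))

def even_derivative(g, k):
    """Discrete 2k-th derivative of g via the centered difference of Eq. (18)."""
    return sum((-1.0) ** m * comb(2 * k, m) * np.roll(g, m - k)
               for m in range(2 * k + 1))

def deconvolve_1d(g, sigma, terms=8):
    """Truncated series of Eq. (19): f ≈ sum_k (-sigma^2/2)^k / k! * g^(2k)."""
    f = np.zeros_like(g, dtype=float)
    for k in range(terms):
        f += (-sigma ** 2 / 2.0) ** k / factorial(k) * even_derivative(g, k)
    return f
```

By separability, the 2D deconvolution of (20) amounts to applying the same operator along the rows and then along the columns. For a smooth signal and a moderate blur, a few series terms already bring the estimate much closer to the original than the blurred input.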
Similarly, for a 2D blurred image, the desired image deconvolution can be obtained from
$$f(x,y) = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\frac{1}{k!\,l!}\left(-\frac{\sigma^{2}}{2}\right)^{k+l}\sum_{m=0}^{2k}\sum_{p=0}^{2l}(-1)^{m+p}\binom{2k}{m}\binom{2l}{p}\, g(x+k-m,\; y+l-p). \tag{20}$$
4. Experimental Studies
Different numerical experiments are conducted to demonstrate the validity and efficiency of the proposed methods; a detailed description of these experiments is presented in this section. The performance of the proposed methods is evaluated on binary, grayscale, and real images. This section is divided into two subsections.

In the first subsection, the accuracy of the proposed blur invariant features in the Fourier domain is validated using frequency analysis of the blurred and original images. In the second, the efficiency of the proposed image deconvolution algorithm based on the Gaussian PSF is examined through different experiments. The results of nine numerical experiments are used to confirm the efficiency of the proposed image deconvolution method.
4.1. Experiments on the Frequency Blur Invariant
In order to verify the proposed blur invariant in (12), binary and grayscale test images are used. The blurred images are obtained for different variances $\sigma^{2}$ of a Gaussian mask. Table 1 shows the original binary image and its corresponding blurred versions. In this table, the blur invariants of (12) are evaluated at randomly varied frequencies $(u,v)$. Each row of the table shows the amplitude and phase of the blur invariants; it can be observed that their values remain the same or change only slightly for different $\sigma$.

The results shown in Table 2 for the “Saturn” grayscale image indicate observations of the blur invariants similar to those in Table 1. One thing to note for both tables is that the values of the blur invariants vary slightly with different $\sigma$. This is because (8) is based on the continuous integral form, whereas all invariant computations are executed in discrete form, which may introduce numerical error into the calculation. Nevertheless, the proposed blur invariant values are fairly stable with respect to different Gaussian kernels $\sigma$.

4.2. Experiments on the Proposed Image Deconvolution
To validate the proposed image deconvolution shown in (20), an iterative procedure is performed. Rewriting (20) in terms of iterations, we obtain the estimated restored image as
$$\hat{f}_{n}(x,y) = \sum_{k=0}^{n}\sum_{l=0}^{n}\frac{1}{k!\,l!}\left(-\frac{\sigma^{2}}{2}\right)^{k+l}\sum_{m=0}^{2k}\sum_{p=0}^{2l}(-1)^{m+p}\binom{2k}{m}\binom{2l}{p}\, g(x+k-m,\; y+l-p), \tag{21}$$
where $\hat{f}_{0}(x,y) = g(x,y)$ and $n$ is the iteration number. In this technique, an estimation of the original image is obtained after every iteration. To estimate the value of $\sigma$ in (11), the only unknown parameter is $F(1,1)$, which can be approximated by a suitable Fourier transform obtained from the blurred version of the image. To understand the behavior of this factor as the standard deviation $\sigma$ varies, we plot its amplitude and phase for amounts of blur from 0.1 to 10 for the “Saturn” image shown in Table 2. Figures 1 and 2 show the variation of the amplitude and phase of the original and blurred images' Fourier transforms in terms of $\sigma$. It can be seen that the amplitude decreases uniformly up to a certain standard deviation, whereas the phase increases up to the same point.
To measure the improved quality of the restored images, the normalized mean square error (NMSE) is used as a reference metric. The iteration stops once a minimum value of the NMSE is reached.
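The paper does not spell out the normalization used for the NMSE; one common definition, shown here as an assumption, normalizes the squared error by the energy of the reference image:

```python
import numpy as np

def nmse(f_est, f_ref):
    """Normalized mean square error: ||f_est - f_ref||^2 / ||f_ref||^2."""
    return float(np.sum((f_est - f_ref) ** 2) / np.sum(f_ref ** 2))
```

Under this definition the metric is zero for a perfect restoration and grows with the residual error, so the stopping rule simply keeps iterating while the NMSE decreases.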
This iterative approach is applied to three real astronomical images: Tropical Storm Lorenzo (Image A), Milky Way (Image B), and Galaxy (Image C); see Figure 3.
In this experiment, each of the aforementioned astronomical images is degraded by artificial Gaussian blur with kernels of different mask sizes. Image A is degraded by blur kernels of three different mask sizes and corresponding values of $\sigma$. Table 3 illustrates the results for the deconvolved images using the proposed frequency blur invariant features. It is clear that the quality of the deconvolved image improves after every step, and a fine-quality deconvolved image is finally obtained at the minimum value of the NMSE. Image B is likewise degraded by blur kernels of three mask sizes and values of $\sigma$. As can be seen from Table 4, the process converges for the different mask sizes, yielding visually very good results with small NMSE. Finally, Image C is degraded by blur kernels of three further mask sizes and values of $\sigma$; the same deblurring processes are shown in Table 5 for the different levels of blur of the Galaxy image. One can observe from these three tables that the method yields very good results when the value of $\sigma$ is overestimated. The advantage of overestimation can be seen in the first and second rows of Tables 3 and 4 and in the last row of Table 5.



The NMSE curves for the three images are displayed in Figure 4. It should be noted that the three curves are plotted in the same figure, as functions of the standard deviation of the blur kernel, for easier comparison. As shown in Figure 4, the NMSE curves of the restored images approach zero as $\sigma$ increases. The results of these experiments confirm the robustness of the proposed Fourier domain blur invariant.
5. Conclusion
In summary, we presented a novel blur invariant technique in the frequency domain using the Fourier transform properties of a Gaussian PSF kernel. The proposed features are equal for the original and blurred images when both are described in the Fourier domain. They are obtained by normalizing the Fourier transforms of the original and degraded images by their respective values at a fixed frequency set to one. In addition, the obtained blur invariant features enable us to estimate the original image degraded by a Gaussian kernel. We use this invariant not only to restore degraded images but also to estimate the variance of the PSF. Since the proposed image deblurring algorithm is similar to nonblind deconvolution, we applied the NMSE to measure the error and image quality in these analyses. Finding other image quality measures to determine an appropriate range for real image deconvolution is a major direction for further practical application of the proposed method.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work has been supported by the University of Malaya High Impact Research Grant (MOHEHIRG A00000750001).
References
[1] C. Tang, C. Hou, and Z. Song, “Defocus map estimation from a single image via spectrum contrast,” Optics Letters, vol. 38, no. 10, pp. 1706–1708, 2013.
[2] A. Kumar, R. Paramesran, and B. H. Shakibaei, “Moment domain representation of nonblind image deblurring,” Applied Optics, vol. 53, no. 10, pp. B167–B171, 2014.
[3] M. Almeida and M. Figueiredo, “Parameter estimation for blind and non-blind deblurring using residual whiteness measures,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2751–2763, 2013.
[4] S. Tang, W. Gong, W. Li, and W. Wang, “Non-blind image deblurring method by local and nonlocal total variation models,” Signal Processing, vol. 94, no. 1, pp. 339–349, 2014.
[5] N. Meitav and E. N. Ribak, “Estimation of the ocular point spread function by retina modeling,” Optics Letters, vol. 37, no. 9, pp. 1466–1468, 2012.
[6] B. H. Shakibaei and J. Flusser, Image Deconvolution in the Moment Domain, chapter 5, Science Gate Publishing, 2014.
[7] W. He, Z. Zhao, J. Wang et al., “Blind deconvolution for spatial distribution of $K_{\alpha}$ emission from ultraintense laser-plasma interaction,” Optics Express, vol. 22, no. 5, pp. 5875–5882, 2014.
[8] J. Zhang, Q. Zhang, and G. He, “Blind deconvolution of a noisy degraded image,” Applied Optics, vol. 48, no. 12, pp. 2350–2355, 2009.
[9] L. Yan, H. Fang, and S. Zhong, “Blind image deconvolution with spatially adaptive total variation regularization,” Optics Letters, vol. 37, no. 14, pp. 2778–2780, 2012.
[10] X. Gong, B. Lai, and Z. Xiang, “A l0 sparse analysis prior for blind poissonian image deconvolution,” Optics Express, vol. 22, no. 4, pp. 3860–3865, 2014.
[11] H. Fang, L. Yan, H. Liu, and Y. Chang, “Blind Poissonian images deconvolution with framelet regularization,” Optics Letters, vol. 38, no. 4, pp. 389–391, 2013.
[12] J. Chen, R. Lin, H. Wang, J. Meng, H. Zheng, and L. Song, “Blind-deconvolution optical-resolution photoacoustic microscopy in vivo,” Optics Express, vol. 21, no. 6, pp. 7316–7327, 2013.
[13] S. V. Vorontsov, V. N. Strakhov, S. M. Jefferies, and K. J. Borelli, “Deconvolution of astronomical images using SOR with adaptive relaxation,” Optics Express, vol. 19, no. 14, pp. 13509–13524, 2011.
[14] V. Ojansivu and J. Heikkilä, “A method for blur and similarity transform invariant object recognition,” in Proceedings of the 14th International Conference on Image Analysis and Processing (ICIAP '07), pp. 583–588, September 2007.
[15] J. Flusser and T. Suk, “Degraded image analysis: an invariant approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 6, pp. 590–603, 1998.
[16] A. Stern, I. Kruchakov, E. Yoavi, and N. S. Kopeika, “Recognition of motion-blurred images by use of the method of moments,” Applied Optics, vol. 41, no. 11, pp. 2164–2171, 2002.
[17] B. Chen, H. Shu, H. Zhang, G. Coatrieux, L. Luo, and J. L. Coatrieux, “Combined invariants to similarity transformation and to blur using orthogonal Zernike moments,” IEEE Transactions on Image Processing, vol. 20, no. 2, pp. 345–360, 2011.
[18] X. Dai, H. Zhang, T. Liu, H. Shu, and L. Luo, “Legendre moment invariants to blur and affine transformation and their use in image recognition,” Pattern Analysis and Applications, vol. 17, no. 2, pp. 311–326, 2014.
[19] L. Yan, M. Jin, H. Fang, H. Liu, and T. Zhang, “Atmospheric-turbulence-degraded astronomical image restoration by minimizing second-order central moment,” IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 4, pp. 672–676, 2012.
[20] Y. D. Khan, S. A. Khan, F. Ahmad, and S. Islam, “Iris recognition using image moments and k-means algorithm,” The Scientific World Journal, vol. 98, pp. 224–232, 2014.
[21] I. Makaremi and M. Ahmadi, “Wavelet-domain blur invariants for image analysis,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 996–1006, 2012.
[22] B. Honarvar, R. Paramesran, and C.-L. Lim, “Image reconstruction from a complete set of geometric and complex moments,” Signal Processing, vol. 98, pp. 224–232, 2014.
[23] S. M. Yoon, Y. J. Lee, G.-J. Yoon, and J. Yoon, “Adaptive total variation minimization-based image enhancement from flash and no-flash pairs,” The Scientific World Journal, vol. 98, pp. 224–232, 2014.
[24] Y. Zhang, W. Zhang, and J. Zhou, “Accurate sparse-projection image reconstruction via nonlocal TV regularization,” The Scientific World Journal, vol. 2014, Article ID 458496, 7 pages, 2014.
[25] Y. Zhang, Z.-M. Tang, Y.-P. Li, and Y. Luo, “A hierarchical framework approach for voice activity detection and speech enhancement,” The Scientific World Journal, vol. 2014, Article ID 723643, 18 pages, 2014.
[26] T. Chen, K.-K. Ma, and L.-H. Chen, “Tri-state median filter for image denoising,” IEEE Transactions on Image Processing, vol. 8, no. 12, pp. 1834–1838, 1999.
[27] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, vol. 55, 1964.
[28] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley Longman, Boston, Mass, USA, 2nd edition, 1992.
Copyright
Copyright © 2014 Barmak Honarvar Shakibaei and Peyman Jahanshahi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.