Journal of Computational Engineering
Volume 2014, Article ID 125356, 9 pages
http://dx.doi.org/10.1155/2014/125356
Research Article

An Image Dehazing Model considering Multiplicative Noise and Sensor Blur

1Department of Mathematical and Computational Sciences, National Institute of Technology, Karnataka, Srinivasanagar, Mangalore 575025, India
2Department of Electronics and Communication Engineering, National Institute of Technology, Karnataka, Srinivasanagar, Mangalore 575025, India

Received 15 September 2014; Revised 25 November 2014; Accepted 28 November 2014; Published 22 December 2014

Academic Editor: Quan Yuan

Copyright © 2014 P. Jidesh and A. A. Bini. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A restoration model considering data-dependent multiplicative noise, shift-invariant blur, and haze is introduced in this paper. The proposed strategy adopts a two-step model to perform single image dehazing under blurred and noisy observations. The first step uses the well-known dark channel prior method to estimate the transmission of the medium and the atmospheric light, which signifies the global color of the haze, and to dehaze the images. The second step performs denoising and deblurring under a Gamma-distributed noise setup and a linear blurring artefact. Restoration under the above-mentioned setup has quite a few applications in satellite and long-distance telescopic imaging systems, where the captured images are noisy due to turbulence in atmospheric pressure, hazy due to the presence of atmospheric dust, and further blurred due to common device artefacts. The proposed strategy is tested on a large set of available images, and the performance of the model is analysed in detail in the results section.

1. Introduction

Image dehazing is a common inverse problem in the image processing literature. The light received by a sensor from outdoor scenes is often absorbed and scattered by the medium through which the light ray travels; common examples are dust, mist, fog, and fumes. Therefore, captured outdoor images are commonly found to be degraded by fog, mist, or haze, and the visibility of the scene contents is severely reduced in such images. Nevertheless, most outdoor imaging systems, such as surveillance and transport navigation, must extract semantic information for proper analysis; dehazing therefore becomes an inevitable part of such image restorations. Moreover, in long-distance photography of foggy scenes, haze has a substantial effect on the image: contrast is reduced and surface colors become faint. In addition to the dense haze present in captured data, variations in atmospheric pressure conditions or transmission errors can also introduce noise into images. Such images are both noisy and hazy, and their restoration involves a denoising and dehazing process. Moreover, since imaging systems are not free from device artifacts, a deblurring process is needed along with denoising and dehazing. In this work we assume a data-correlated noise and a spatially invariant out-of-focus blur. The noise intensity distribution is assumed to be Gamma and multiplicative in nature. The blur is modeled as a linear and bounded operator. The restoration of images under the above-mentioned scenario is an ill-posed problem in the sense of Hadamard [1]; that is, a small perturbation in the initial data can cause large disturbances in the restored version. This demands a regularization in the restoration process. Many image dehazing methods require multiple input images and additional prior information to perform efficiently. The additional prior information for dehazing includes degrees of polarization [2, 3], multiple images captured under different weather conditions [4-6], and the depth information in 3D data [7, 8].

Single image dehazing is more relevant and useful in most image processing scenarios because of its generalized behavior and its ability to dehaze with limited prior information. Single image dehazing is highly underconstrained and ill-posed in the sense that the local transmission, which depends on the scene depth for a homogeneous atmosphere, needs to be estimated. Tan [9] developed a dehazing model in the Markov random field framework, under the weak assumption that the airlight in the atmospheric scattering model is constant [10]. In another study, Fattal [11] estimated the albedo and the transmission, assuming that shading and transmission are locally uncorrelated. Though this is a strong assumption, the method is very effective for light haze, but not for dense haze.

To solve this underconstrained single image dehazing problem, many prior-based techniques have been proposed. Among these, the dark channel prior is a prominent one; see He et al. [12, 13]. The dark channel prior serves as an effective model for estimating the local transmissions of hazy images. However, refining the coarse transmission map with soft matting as in [15] is computationally expensive. A further refinement was proposed for the dark channel prior using the guided image filter; the details can be found in He et al. [13] and Pang et al. [14].

The rest of the paper is organized as follows. In Section 1.1 the commonly used image dehazing model and its mathematical formulations are explained and the dark channel prior based dehazing model is discussed in Section 1.2. A brief description of current denoising and deblurring literature is provided in Section 1.3. Section 2 portrays the proposed strategy and its theoretical analysis including the numerical implementation. The experimental analysis of various models including the proposed one is done in Section 3. Finally we conclude the work in Section 4.

1.1. Image Dehazing Model

Though the physical mechanism of haze formation is rather complicated, a commonly used hazy image formation model is

  I(x) = J(x) t(x) + A (1 - t(x)),  (1)

where I(x) denotes the observed color, J(x) is the true scene color that we have to restore, t(x) is the transmission of the medium, and A is the atmospheric light, which in turn represents the global "color" of the haze. The aim of a dehazing method is to restore J, t, and A from the hazy observation I; apparently the problem is an inverse problem which is highly underconstrained; refer to [12] for more details of the haze model formulation.

1.2. Image Dehazing with Dark Channel Priors

The dark channel prior is a statistic of outdoor haze-free images: for most of the non-sky patches in an outdoor haze-free image, at least one color channel contains some pixels whose intensities are very low [12]. Mathematically, the dark channel of an image J is given by

  J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) ),  (2)

where J^c denotes a color channel of the image J and Ω(x) is a local patch centered at pixel x. For a haze-free image the dark channel tends to zero (this is the dark channel prior proposed in [12, 13]), whereas for hazy images the dark channel will never be zero.
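To make the computation in (2) concrete, a minimal sketch is given below; the default patch size of 15 and the brute-force sliding minimum are illustrative implementation choices, not prescribed by [12].

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an H x W x 3 image in [0, 1]:
    per-pixel minimum over the color channels, followed by a
    minimum filter over a patch x patch neighborhood."""
    min_rgb = img.min(axis=2)                  # min over c in {r, g, b}
    r = patch // 2
    padded = np.pad(min_rgb, r, mode='edge')   # replicate borders
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

Consistent with the prior, an image with one dark color channel everywhere yields a dark channel of zero, while a uniformly bright image does not.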

Further, assuming the haze model (1), He et al. prove that the transmission map can be simply estimated as

  t̃(x) = 1 - ω min_{y∈Ω(x)} ( min_c I^c(y) / A^c ),  (3)

in which t̃(x) denotes the constant transmission in the patch Ω(x). The parameter ω in (3) is introduced to prevent removing the haze thoroughly and to keep the feeling of depth; it is set to 0.95 in their work. The global atmospheric light A in (1) is estimated by the technique proposed in [12], which selects candidate pixels using the dark channel of the input hazy image. The transmission obtained by (3) is only a coarse estimate; images recovered using the coarse transmission map suffer from severe halo artifacts; hence, a refinement of the transmission map is necessary. Moreover, this refinement also helps to capture the depth changes at object edges. The closed-form matting framework used in [15] is a general way to suppress block artifacts in the coarse map; it minimizes the cost function

  E(t) = t^T L t + λ (t - t̃)^T (t - t̃),  (4)

where L is the matting Laplacian and t and t̃ are the refined and coarse transmission maps written as vectors. However, the time complexity of this approach is extremely high, because the process involves the Laplacian matrix of size N × N (N being the number of pixels) in each iteration. Therefore, a faster approach is introduced in [14], which can refine the transmission with a worst-case complexity of O(N). Finally, the transmission and atmospheric light are estimated, and the haze-free image is obtained by inverting (1) as

  J(x) = (I(x) - A) / max(t(x), t₀) + A,  (5)

where t₀ is a small lower bound on the transmission.
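The estimation in (3) and the recovery in (5) translate directly into code. In the sketch below, ω = 0.95 follows the text, while the lower bound t0 = 0.1 and the patch-minimum helper are assumptions chosen for illustration.

```python
import numpy as np

def _patch_min(a, patch):
    """Minimum filter over a patch x patch neighborhood (replicated borders)."""
    r = patch // 2
    p = np.pad(a, r, mode='edge')
    return np.array([[p[i:i + patch, j:j + patch].min()
                      for j in range(a.shape[1])]
                     for i in range(a.shape[0])])

def estimate_transmission(hazy, A, omega=0.95, patch=15):
    """Coarse transmission (3): 1 - omega * min over patch and channels of I^c / A^c."""
    normed = (hazy / A.reshape(1, 1, 3)).min(axis=2)
    return 1.0 - omega * _patch_min(normed, patch)

def recover(hazy, A, t, t0=0.1):
    """Invert the haze model (5): J = (I - A) / max(t, t0) + A."""
    t = np.maximum(t, t0)[:, :, None]
    return (hazy - A.reshape(1, 1, 3)) / t + A.reshape(1, 1, 3)
```

Synthesizing a hazy image with the forward model (1) and a known transmission, `recover` returns the original scene radiance exactly, which is a useful sanity check.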

1.3. Image Restoration under Multiplicative Noise and Blur

Image denoising is an inevitable preprocessing activity in most image processing applications. The restoration of images under data-independent additive noise is a well-explored concept. In this paper we assume that the image is corrupted by data-dependent multiplicative noise that follows a Gamma distribution. This kind of noise generally appears in long-distance photography, especially in telescopic imaging systems where the light travels through a medium with high pressure variations. The noisy image formation under this assumed model is

  f = u η,  (6)

where η is the multiplicative noise, f is the observed image, and u is the actual image (without noise and blur). Further, considering a blurring artifact in the observed image as a general phenomenon, one can write the degradation process as

  f = H(u) η,  (7)

where H is a linear blurring operator. Assuming shift invariance of the operator, the above equation can be reformulated using a linear convolution with a Gaussian blurring kernel, that is,

  f = (h * u) η,  (8)

where h is a linear blurring kernel whose support is assumed to be compact and * denotes linear convolution (the image is assumed to be blurred first and then corrupted by noise). Image restoration under a multiplicative data-dependent noise setup is not explored much, unlike the additive random noise models.
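As a sanity check of the degradation model (8), the forward process can be simulated as below. The Gaussian PSF, the periodic (circular) convolution via the FFT, and the number-of-looks parameterization of the unit-mean Gamma noise are illustrative assumptions, not details taken from the text.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Normalized Gaussian point spread function on the image grid."""
    h, w = shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def degrade(u, sigma=3.0, looks=10, seed=0):
    """f = (h * u) * eta: circular Gaussian blur, then Gamma noise
    with unit mean and variance 1/looks (blur first, then noise)."""
    psf = gaussian_psf(u.shape, sigma)
    H = np.fft.fft2(np.fft.ifftshift(psf))        # kernel spectrum
    blurred = np.real(np.fft.ifft2(np.fft.fft2(u) * H))
    rng = np.random.default_rng(seed)
    eta = rng.gamma(shape=looks, scale=1.0 / looks, size=u.shape)
    return blurred * eta
```

Since the kernel is normalized and the Gamma noise has unit mean, a degraded constant image keeps (approximately) its mean value while remaining strictly positive.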

The first restoration model proposed under the multiplicative noise setup is the RLO model [16]. In this model the noise is assumed to be multiplicative with unit mean and known variance, and the blur is a linear shift-invariant one. The evolution equation of the model is formulated as

  ∂u/∂t = div(∇u/|∇u|) + λ₁ f/u² + λ₂ (f/u - 1) f/u²,  (9)

where div is the divergence operator and the Lagrange multipliers λ₁ and λ₂ are dynamically updated to satisfy the mean and variance constraints on the noise f/u. The PDE in this model is solved with the homogeneous Neumann boundary condition ∂u/∂n = 0, where n is the unit vector normal to the image boundary, and the initial condition u(x, 0) = u₀(x), where u₀ is the initial observed image. Furthermore, this boundary condition and initial condition are assumed for all the PDEs described in this paper, if not stated otherwise. Since the model assumes a Gaussian multiplicative noise, its practical applicability is quite limited.

Another advancement in this direction is the Aubert and Aujol (AA) model [17]. This model is devised to address data-dependent multiplicative noise that follows a Gamma distribution, and it finds many applications in the contemporary imaging world; quite a few imaging modalities face similar noise distortions in the captured data, a prominent example being ultrasound imaging. The devised model follows the evolution equation

  ∂u/∂t = div(∇u/|∇u|) + λ (f - u)/u²,  (10)

where λ is a regularization parameter and the other symbols have their usual meanings. Considering a spatially invariant blur, the above equation can be rewritten as

  ∂u/∂t = div(∇u/|∇u|) + λ h̃ * ((f - h * u)/(h * u)²),  (11)

where h is the blurring kernel and h̃ its adjoint. This model is conditionally convex; the diffusion part is the usual total variation denoising model proposed in [18]. It is well known from the restoration literature that the total variation (TV) functional is convex (though not strictly). The reactive term is derived from a maximum a posteriori (MAP) estimate, and one can easily show that this term is conditionally convex. Therefore, a unique solution exists conditionally.
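One explicit gradient-descent step of (10) can be sketched as follows. The periodic boundaries via np.roll and the small eps regularizing |∇u| are implementation conveniences, not part of the model (the PDEs in this paper assume Neumann boundary conditions instead).

```python
import numpy as np

def aa_step(u, f, lam=1.0, dt=0.1, eps=1e-6):
    """One step of u_t = div(grad u / |grad u|) + lam * (f - u) / u^2."""
    # Central differences for the gradient (periodic boundaries).
    ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
    uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    # Divergence of the normalized gradient field.
    div = (np.roll(px, -1, axis=1) - np.roll(px, 1, axis=1)) / 2.0 \
        + (np.roll(py, -1, axis=0) - np.roll(py, 1, axis=0)) / 2.0
    return u + dt * (div + lam * (f - u) / u ** 2)
```

A flat image equal to the data is a fixed point of the step (both terms vanish), and a flat image below the data is pushed up by the fidelity term, as expected.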

Quite a few models have been proposed to improve the AA model in terms of visual appearance and theoretical stability [19-21]. In Xiao et al. [19] the authors replace the TV regularizer with a Weberized TV regularizer for a better restoration in terms of visual quality. In Huang et al. [20] the authors provide a modified fitting term which is convex under all conditions; therefore, the solution is unique. In [21] the author fits a convex diffusion term to reduce the staircase effect due to the piecewise constant approximation of nonlinear second order filters.

Many of the dehazing models discussed in the literature ignore medium-induced errors like noise and device-induced artifacts like blur. However, noise and blur are quite common and, precisely, a significant issue while removing haze. There are some haze removal methods which consider noise, see [22, 23]; however, they are not meant for single image dehazing problems. In another work, Matlin and Milanfar [24] proposed a dehazing and denoising method for a single image with the help of the BM3D denoising algorithm; however, they do not address sensor blur in the captured images. Sensor blur is also an important degradation factor, especially for thin haze. Another model in this direction is [25]; it assumes an additive noise and a shift-invariant blur. However, as is known from the literature, the additive noise model is mostly of theoretical interest, and in many imaging setups the captured images are corrupted by multiplicative noise. Motivated by the blurred (spatially invariant), noisy (multiplicative), and hazy nature of the input data, we propose a novel strategy to restore images under these unfavorable conditions.

2. The Proposed Method

In this section we propose a novel restoration model for hazy, blurred, and noisy images. We assume the degradation model

  I(x) = ((h * u) η)(x) t(x) + A (1 - t(x)),  (12)

where h is the impulse response of the blurring system (as already mentioned) and η is the multiplicative noise (the image is assumed to be blurred first and then corrupted by noise). We employ a two-stage restoration process to retrieve the estimated original image. In the first step we apply the dark channel prior method to estimate the dehazed image; the dark channel prior model is explained in Section 1.2. The estimated dehazed image is obtained by solving (5) using the airlight and transmission obtained from the dark channel prior. With the dehazed image in hand, we begin the second step of the algorithm, that is, to deblur and denoise the dehazed image obtained in the first step. Let v denote the estimated dehazed image (obtained from the first step); the denoising and deblurring model can then be written as v = (h * u) η. Now the restoration problem can be stated as an energy minimization problem with the following cost functional (derived using the TV norm) and constraint (derived using the MAP estimator); see [17] for details:

  min_{u∈BV(Ω)} ∫_Ω |∇u| dx + λ ∫_Ω ( log(h * u) + v/(h * u) ) dx,  (13)

where BV(Ω) stands for the space of functions of bounded variation over the image support domain Ω and λ is the regularization parameter. The gradient descent solution for this problem can be stated as

  ∂u/∂t = div(∇u/|∇u|) + λ h̃ * ( (v - h * u)/(h * u)² ),  (14)

where div denotes the divergence of a vector field and h̃ is the adjoint of the kernel h. The restored image is obtained at the steady state of the above gradient descent solution.

The functional defined in (13) is conditionally convex. The TV term is convex (not in the strict sense). However, the constraint term is conditionally convex: the term log(h * u) + v/(h * u) is convex provided h * u < 2v, since its second derivative with respect to h * u is positive exactly in this range.
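The convexity condition on the constraint term can be checked directly. Writing $z = h * u$, the integrand of the constraint term is $\varphi(z) = \log z + v/z$, and

\[
\varphi'(z) = \frac{1}{z} - \frac{v}{z^2},
\qquad
\varphi''(z) = -\frac{1}{z^2} + \frac{2v}{z^3} = \frac{2v - z}{z^3},
\]

so $\varphi''(z) > 0$, and the term is convex, exactly when $0 < z < 2v$, that is, when $h * u < 2v$ pointwise.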

2.1. Numerical Implementation

We have used an explicit time marching scheme for solving the PDE. Due to its reasonable accuracy and moderate computational cost, it serves as a good choice. The steady state solution of the PDE in (14) gives the restored version of the image. Central differencing of the divergence term gives

  div(∇u/|∇u|)_{i,j} ≈ D_x(p₁)_{i,j} + D_y(p₂)_{i,j},  (15)

where

  p₁ = D_x(u)/√(D_x(u)² + D_y(u)² + ε),  p₂ = D_y(u)/√(D_x(u)² + D_y(u)² + ε),  (16)

with D_x and D_y the central difference operators in the x and y directions and ε a small positive constant avoiding division by zero. Finally, the gradient descent method gives

  u^{n+1} = u^n + Δt [ div(∇u^n/|∇u^n|) + λ h̃ * ((v - h * u^n)/(h * u^n)²) ],  (17)

where u^n stands for the image at the nth iteration and Δt is the time step. The implementation of the reactive/fidelity term is straightforward and, therefore, its explanation is skipped for brevity.
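A single explicit update of (17) can be sketched as follows, with the circular convolution h * u and its adjoint computed in the Fourier domain. The periodic boundaries, the eps safeguards, and the FFT-based convolution are illustrative assumptions of this sketch.

```python
import numpy as np

def step(u, v, psf_fft, lam=1.0, dt=0.1, eps=1e-6):
    """u^{n+1} = u^n + dt * [ div(grad u / |grad u|)
                             + lam * h~ * ((v - h*u) / (h*u)^2) ]."""
    # TV diffusion term via central differences (periodic boundaries).
    ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
    uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    div = (np.roll(ux / mag, -1, axis=1) - np.roll(ux / mag, 1, axis=1)) / 2.0 \
        + (np.roll(uy / mag, -1, axis=0) - np.roll(uy / mag, 1, axis=0)) / 2.0
    # Fidelity term: h * u via the kernel spectrum, adjoint via its conjugate.
    hu = np.real(np.fft.ifft2(np.fft.fft2(u) * psf_fft))
    resid = (v - hu) / np.maximum(hu, eps) ** 2
    fid = np.real(np.fft.ifft2(np.fft.fft2(resid) * np.conj(psf_fft)))
    return u + dt * (div + lam * fid)
```

With an identity PSF (a spectrum of ones) the scheme reduces to the blur-free case, so a flat image matching the data is a fixed point of the update.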

3. Results and Discussions

We have used some standard test images for comparing the proposed model with the existing models. To the best of our knowledge, no model has so far been proposed for dehazing images under multiplicative noise and spatially invariant blur. We have used hazy images from well-known databases, and the noise and blur were added to the test images. The intensities of all the test images were normalized to the range [0, 1]. The Gamma-distributed multiplicative noise was added using a function written in Matlab. The sensor blur was created by converting the input image to the Fourier domain using the Discrete Fourier Transform (DFT), multiplying the Fourier coefficients by a Gaussian blurring function, and taking the inverse Fourier transform of the result. Though we have tested different test images with varying characteristics, we show the results only for three test images, "cinque coast," "peak," and "toys," for brevity of explanation. The hazy images used for testing are shown in Figure 1. Further, we have tested for various noise variances and different sizes of the blurring kernel. The regularization is controlled by the parameter λ, which is tuned according to the noise variance σ², and the time step Δt in (17) is chosen small enough to satisfy the stability condition of the explicit scheme and obtain the optimal restoration.

Figure 1: Hazy observation of the test images: “cinque coast,” “peak,” and “toys.”

Furthermore, a statistical quantitative analysis is performed on the filtered outputs to quantify the results. We have used the statistical measures Peak Signal to Noise Ratio (PSNR) and structural similarity (SSIM) index to demonstrate the performance of the various filters considered in this work along with the one proposed in this paper. The PSNR values indicate the noise removal and signal strengthening capacity of the filter under consideration. The PSNR value increases as the noise content decreases or, alternatively, as the signal strength increases; so a high PSNR value indicates a better noise removal capability. PSNR is defined as

  PSNR = 20 log₁₀ ( I_max / √MSE ),

where I_max is the maximum intensity value and MSE is the mean square error defined as

  MSE = (1/(MN)) Σ_{i,j} ( u₀(i,j) - u(i,j) )²,

where M × N is the size of the image and u₀ and u are the original and restored images; see [26] for details. Similarly, diffusion results in smoothing of image structures like edges and fine details; therefore, structure preservation is crucial for such diffusion filters. So we verify the structural similarity (SSIM) index of the filter for the test images. The SSIM takes values in the range [0, 1]: a value of 1 for perfect structure preservation and zero for very poor preservation of structures.
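Both measures translate directly into code; a minimal sketch for images normalized to [0, 1], so that the peak value is taken as 1.0:

```python
import numpy as np

def mse(ref, rec):
    """Mean squared error over an M x N image."""
    return float(np.mean((ref - rec) ** 2))

def psnr(ref, rec, peak=1.0):
    """PSNR = 20 * log10(peak / sqrt(MSE)), in dB."""
    return 20.0 * np.log10(peak / np.sqrt(mse(ref, rec)))
```

For example, a uniform error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB.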

The motivation for using the SSIM approach is to find a more direct way to compare the structures of the reference and the distorted images; we refer readers to [27] for further details. This framework for the design of image quality measures was proposed based on the assumption that the human visual system is highly adapted to extract structural information from the viewing field. The SSIM is formulated as

  SSIM(x, y) = ( (2 μ_x μ_y + C₁)(2 σ_xy + C₂) ) / ( (μ_x² + μ_y² + C₁)(σ_x² + σ_y² + C₂) ),

where x and y denote the content of local windows in the original and reconstructed images, respectively, μ_x and μ_y denote their means, σ_xy is the covariance of x and y, σ_x² and σ_y² denote the variances of x and y, respectively, and C₁ = (K₁L)² and C₂ = (K₂L)², where L is the dynamic range of pixel values (L = 255 for an 8-bit gray scale image) and K₁ and K₂ are small constants. The measure is applied over nonoverlapping windows in both images (original and reconstructed). In this work we measure the mean SSIM (MSSIM), which is an index evaluating the overall image quality. It is defined as

  MSSIM(U, V) = (1/M) Σ_{j=1}^{M} SSIM(u_j, v_j),

where U and V are the original and reconstructed images, u_j and v_j denote the content of the jth local window in the reference and distorted images, respectively, and M is the number of local windows in the image.
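A compact sketch of SSIM over nonoverlapping windows follows. The window size of 8 is an assumption of this sketch; the constants K1 = 0.01 and K2 = 0.03 are the usual choices from [27], with L = 1 for normalized images.

```python
import numpy as np

def ssim_window(x, y, L=1.0, K1=0.01, K2=0.03):
    """SSIM of two local windows (means, variances, covariance form)."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def mssim(u, v, win=8):
    """Mean SSIM over non-overlapping win x win windows."""
    h, w = u.shape
    vals = [ssim_window(u[i:i + win, j:j + win], v[i:i + win, j:j + win])
            for i in range(0, h - win + 1, win)
            for j in range(0, w - win + 1, win)]
    return float(np.mean(vals))
```

Identical images score exactly 1, and any structural distortion lowers the index.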

The input images are tested at two different noise variance values (0.15 and 0.25) of the Gamma noise. Furthermore, a blurring kernel (spread value 3) is used to generate the sensor blur in the images. Results of the different statistical measures (namely, PSNR, MSSIM, and MSE) are provided in Tables 1, 2, and 3, respectively. We show the results for the proposed method and one other relevant method proposed by Lan et al. [25] for dehazing and denoising. The model by Lan et al. [25] is designed for data-independent additive noise with spatially invariant blur. Strictly speaking, we have no methods to compare our results against, since so far no model has been proposed for multiplicative noise removal combined with dehazing. Nevertheless, we have tested the model of Lan et al. [25] on multiplicative Gamma noise. We observe that the performance of this model is not comparable to the proposed model in terms of these statistical measures, as the noise is multiplicative and data-dependent in nature. The restored versions of the test images for two different noise variances and for the blurring kernel spread value 3 are shown in Figure 2 (for the test image "cinque coast") and Figure 3 (for the test image "peak"). From these restored versions we can confirm that the proposed model performs well in terms of denoising, deblurring, and dehazing. We also demonstrate the performance of the proposed model when the image is corrupted by higher density speckle noise (of variances 0.3 and 0.4, resp.) and blurred by Gaussian kernels of bigger spreads (i.e., 4 and 5). The results of this experiment are shown in Figure 4. The response of the proposed model remains superior to the other compared models in terms of visual and quantitative results.

Table 1: PSNR (in dB) for noisy and restored images for various noise variance values and kernel size 3.
Table 2: MSSIM for noisy and restored images for various noise variance values and kernel size 3.
Table 3: MSE for noisy and restored images for various noise variance values and kernel size 3.
Figure 2: (a) and (b) two hazy, blurred (spread of kernel 3), and noisy images (Gamma noise variances 0.15 and 0.25): “cinque coast.” (c) and (d) restored using Lan et al. [25] (for additive noise and blur). (e) and (f) restored using the proposed model.
Figure 3: (a) and (b) two hazy, blurred (spread of kernel 3), and noisy images (Gamma noise variances 0.15 and 0.25): “peak.” (c) and (d) restored using Lan et al. [25] (for additive noise and blur). (e) and (f) restored using the proposed model.
Figure 4: (a) and (d) hazy, blurred (spread of kernel 4), and noisy images (Gamma noise variances 0.3 and 0.4, resp.): "toys"; (g) and (j) hazy, blurred (spread of kernel 5), and noisy images (Gamma noise variances 0.3 and 0.4, resp.); (b), (e), (h), and (k) images restored by Lan et al. [25] (for additive noise and blur); (c), (f), (i), and (l) images restored using the proposed model.

4. Concluding Remarks

A novel dehazing method was proposed and analyzed, considering data-dependent multiplicative Gamma noise and shift-invariant sensor blur. A two-step method was devised: the first step uses the dark channel prior to handle the haze present in images, and in the second step a regularization filter based on the maximum a posteriori estimate was devised to handle the data-dependent multiplicative noise and blur. The proposed strategy is demonstrated to be effective in handling hazy and speckled images. The model is applicable to many satellite and telescope images which are misty or foggy due to bad weather conditions, noisy due to atmospheric pressure variations, and blurred due to sensor artifacts. The model was tested with several natural and synthetic images, and the results show remarkable progress in dehazing and restoring images.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. J. Hadamard, Lectures on Cauchy's Problem in Linear Partial Differential Equations, vol. 1, Dover, 1953.
  2. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Instant dehazing of images using polarization," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), vol. 1, pp. I-325–I-332, December 2001.
  3. S. Shwartz, E. Namer, and Y. Y. Schechner, "Blind haze separation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), pp. 1984–1991, June 2006.
  4. S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 820–827, IEEE, Kerkyra, Greece, September 1999.
  5. S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), pp. 598–605, Hilton Head Island, SC, USA, June 2000.
  6. S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713–724, 2003.
  7. S. Narasimhan and S. Nayar, "Interactive deweathering of an image using physical models," in Proceedings of the IEEE Workshop on Color and Photometric Methods in Computer Vision, pp. 598–605, IEEE, 2003.
  8. J. Kopf, B. Neubert, B. Cohen et al., "Model-based photograph enhancement and viewing," in Proceedings of SIGGRAPH Asia, pp. 1–10, ACM, 2008.
  9. R. T. Tan, "Visibility in bad weather from a single image," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, Anchorage, Alaska, USA, June 2008.
  10. J. P. Oakley and H. Bu, "Correction of simple contrast loss in color images," IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 511–522, 2007.
  11. R. Fattal, "Single image dehazing," ACM Transactions on Graphics, vol. 27, no. 3, article 72, 2008.
  12. K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 1956–1963, Miami, Fla, USA, June 2009.
  13. K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
  14. J. Pang, O. C. Au, and Z. Guo, "Improved single image dehazing using guided filter," in Proceedings of the APSIPA Annual Summit and Conference (APSIPA ASC '11), pp. 1–4, Xi'an, China, October 2011.
  15. A. Levin, D. Lischinski, and Y. Weiss, "A closed-form solution to natural image matting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228–242, 2008.
  16. L. Rudin, P.-L. Lions, and S. Osher, "Multiplicative denoising and deblurring: theory and algorithms," in Geometric Level Set Methods in Imaging, Vision, and Graphics, Springer, Berlin, Germany, 2003.
  17. G. Aubert and J.-F. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
  18. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1–4, pp. 259–268, 1992.
  19. L. Xiao, L.-L. Huang, and Z.-H. Wei, "Multiplicative noise removal via a novel variational model," EURASIP Journal on Image and Video Processing, vol. 2010, Article ID 250768, 2010.
  20. Y.-M. Huang, M. K. Ng, and Y.-W. Wen, "A new total variation method for multiplicative noise removal," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20–40, 2009.
  21. P. Jidesh, "A convex regularization model for image restoration," Computers and Electrical Engineering, vol. 40, pp. 66–78, 2014.
  22. Y. Y. Schechner and Y. Averbuch, "Regularized image recovery in scattering media," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 9, pp. 1655–1660, 2007.
  23. N. Joshi and M. Cohen, "Seeing Mt. Rainier: lucky imaging for multi-image denoising, sharpening, and haze removal," in Proceedings of the IEEE International Conference on Computational Photography, pp. 1–8, Cambridge, Mass, USA, March 2010.
  24. E. Matlin and P. Milanfar, "Removal of haze and noise from a single image," in Computational Imaging X, vol. 8296 of Proceedings of SPIE, 2012.
  25. X. Lan, L. Zhang, H. Shen, Q. Yuan, and H. Li, "Single image haze removal considering sensor blur and noise," EURASIP Journal on Advances in Signal Processing, vol. 2013, no. 1, article 86, 2013.
  26. R. Gonzalez and R. Woods, Digital Image Processing, Prentice Hall, Upper Saddle River, NJ, USA, 2nd edition, 2001.
  27. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.