Abstract

Existing single image dehazing algorithms can only satisfy the demand for dehazing, not for denoising. To solve this problem, a Bayesian framework for single image dehazing that accounts for noise is proposed. First, the Bayesian framework is adapted to the dehazing problem. Then, the probability density functions of the improved atmospheric scattering model are estimated using statistical priors and objective assumptions about the degraded image. Finally, the reflectance image is obtained by an iterative approach with feedback that reaches a balance between dehazing and denoising. Experimental results demonstrate that the proposed method can remove haze and noise simultaneously and effectively.

1. Introduction

As one of the most important and basic topics in image processing, single image dehazing has two aims. One is to create visually pleasing images suited to human visual perception; the other is to improve the interpretability of images for computer vision and preprocessing tasks. Advanced single image dehazing techniques are therefore in urgent demand. Existing work can be roughly classified into two categories. The first, based on image enhancement techniques, aims to improve the visual effect of the image directly, for example, gamma correction [1], histogram equalization [2], and Retinex [3, 4]. This scheme is fast and simple but is strongly scene-dependent and can hardly adjust all image characteristics to a range proper for the human visual system simultaneously. The second category is based on image restoration techniques. Strong priors or assumptions on the atmospheric transmission and environmental luminance model make it possible to solve the ill-posed problem caused by atmospheric scattering, for instance, Tan's optimization based on Markov random fields (MRF) [5], Fattal's estimation based on independent component analysis (ICA) [6], and He et al.'s solution based on the dark channel prior (DCP) [7]. This scheme has been a recent research hotspot, but it depends heavily on the model and is vulnerable to the external environment [8–11].

By analyzing recent dehazing algorithms based on image restoration, we find that most of them consider only improving the contrast and luminance of the degraded image; in fact, however, noise is a universal phenomenon and a significant issue in dehazing [12–17]. In 2012, Fang et al. [15] realized simultaneous dehazing and denoising based on the joint bilateral filter [16], but it may cause excessive enhancement because the parameters of the joint bilateral filter are unknown. In the same year, Matlin and Milanfar [17] proposed two methods for removing haze and noise from a single image: one denoises the image prior to dehazing based on BM3D [18] and He's algorithm; the other is an iterative regression method. Both perform well when the noise level is precisely known, but when the noise level is not given, latent errors from either under-denoising or over-denoising can be amplified. In 2013, Lan et al. [19] presented a haze image model considering both sensor blur and noise. Based on this degradation model, a three-stage haze removal algorithm was proposed; the algorithm is effective, but it denoises the image prior to dehazing, which causes a loss of image detail.

In this paper, we propose a novel Bayesian framework that avoids the dynamic range compression of He's algorithm. The accuracy of the restored image is ensured by removing haze and noise simultaneously, and the robustness of our approach is guaranteed by an iterative approach with feedback. This paper is arranged as follows. Section 2 reviews the development of image dehazing and proposes an improved atmospheric scattering model based on McCartney's model. In Section 3, a Bayesian framework for single image dehazing considering noise is proposed. Experiments are presented in Section 4, and the conclusion is summarized in Section 5.

2. Background

Single image dehazing was classified as an image enhancement technique in its early days. In 1952, Middleton [20] modeled it as an image restoration problem, and in 1976 McCartney [21] developed it into a mature model based on Rayleigh scattering, which has since been widely used to describe the formation of degraded images. In this section, we briefly introduce McCartney's atmospheric scattering model and then propose our improved atmospheric scattering model, motivated by its defects.

2.1. McCartney's Atmospheric Scattering Model

As is well known, the light reflected from scene points is often absorbed and scattered by a complex medium before it reaches the sensor. In computer vision and atmospheric optics, McCartney's atmospheric scattering model plays a major role in describing image degradation. It is modeled as follows [22]:
$$I(x) = J(x)\,t(x) + A\,\big(1 - t(x)\big), \tag{1}$$
where $I(x)$ denotes the observed degraded image, $J(x)$ denotes the scene radiance, which represents the original appearance of the scene, $A$, the global atmospheric light, is commonly taken as the mean of the top 0.6% brightest pixels in the haze image [15], and $t(x)$ is the atmospheric transmission map. The problem then becomes how to estimate the latent image $J(x)$ from $I(x)$ alone, with $A$ and $t(x)$ unknown, which makes (1) an ill-posed equation.
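
To make the roles of $J(x)$, $t(x)$, and $A$ concrete, the following minimal Python sketch synthesizes a hazy observation from a haze-free image according to (1); the random input image, the constant transmission value, and the airlight color are illustrative assumptions, not data from the paper.

```python
import numpy as np

def apply_scattering_model(J, t, A):
    """Synthesize a hazy observation I = J*t + A*(1 - t).

    J : haze-free image, float array in [0, 1], shape (H, W, 3)
    t : transmission map in (0, 1], shape (H, W) or scalar
    A : global atmospheric light, scalar or length-3 vector
    """
    t = np.atleast_1d(np.asarray(t, dtype=float))
    if t.ndim == 2:                      # broadcast a per-pixel map over channels
        t = t[..., None]
    return J * t + np.asarray(A) * (1.0 - t)

# Illustrative usage with random data standing in for a real image.
J = np.random.rand(64, 64, 3)            # pretend haze-free scene radiance
t = np.full((64, 64), 0.4)               # heavier haze -> smaller transmission
A = np.array([0.90, 0.90, 0.92])         # bright, slightly bluish airlight
I = apply_scattering_model(J, t, A)
```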

2.2. The Improved Atmospheric Scattering Model

Noise from the environment and the sensor is also an important degradation factor, but it is not considered in McCartney's atmospheric scattering model. Therefore, our improved atmospheric scattering model is proposed as follows:
$$I(x) = J(x)\,t(x) + A\,\big(1 - t(x)\big) + N(x), \tag{2}$$
where $N(x)$ denotes zero-mean Gaussian noise, as it comes from the environment and the sensor [25, 26]. There are two kinds of approaches to solving (2): one is to dehaze and denoise step by step; the other is to dehaze and denoise simultaneously. The former includes denoising prior to dehazing and dehazing prior to denoising. Denoising prior to dehazing may cause a loss of image detail. Dehazing prior to denoising can be analyzed as follows:
$$J(x) = \frac{I(x) - A}{t(x)} + A - \frac{N(x)}{t(x)}, \tag{3}$$
where $t(x)$ is a value between 0 and 1 that varies inversely with the density of haze. Equation (3) implies that the noise will be amplified if it is not removed before dehazing, especially in very hazy regions, where $t(x)$ is close to 0 and the noise contribution can dominate the result. Therefore, the main focus of our work is to dehaze and denoise simultaneously.
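
The amplification predicted by (3) is easy to reproduce numerically. The sketch below, using a toy single-channel scene and assumed parameter values, dehazes a noisy observation without denoising and shows the residual noise growing roughly as $\sigma_n / t$.

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma_n = 0.9, 0.02
J_true = rng.random((128, 128))          # single-channel toy scene

for t in (0.8, 0.3, 0.1):                # from light to heavy haze
    I_clean = J_true * t + A * (1.0 - t)
    I_noisy = I_clean + rng.normal(0.0, sigma_n, I_clean.shape)
    J_rec = (I_noisy - A) / t + A        # dehaze without denoising, cf. (3)
    residual_std = np.std(J_rec - J_true)
    print(f"t = {t:.1f}: residual noise std = {residual_std:.4f} "
          f"(input noise std = {sigma_n})")
# The residual noise grows roughly as sigma_n / t and dominates as t -> 0.
```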

3. Our Approach

The key to our approach is that it combines the Bayesian framework, statistical priors and objective assumptions about the degraded image, and an iterative algorithm with feedback to achieve a balance between dehazing and denoising. This section is arranged as follows: the formulation of dehazing in the Bayesian framework is given in Section 3.1, the definition of the framework's probability density functions in Section 3.2, and the solution of our approach in Section 3.3.

3.1. Image Dehazing Based on Bayesian Framework

Rearranging (2), we obtain the following expression:
$$I(x) - A = \big(J(x) - A\big)\,t(x) + N(x). \tag{4}$$

In order to keep (4) nonnegative, we reverse it as
$$I_A(x) = J_A(x)\,t(x) - N(x), \tag{5}$$
where $I_A(x) = A - I(x)$ and $J_A(x) = A - J(x)$; since $N(x)$ is zero-mean Gaussian, the sign change does not alter its distribution. According to Bayesian law, the posterior probability is defined [27] as
$$P(J_A, t \mid I_A) = \frac{P(I_A \mid J_A, t)\,P(J_A)\,P(t)}{P(I_A)}, \tag{6}$$
where $P(I_A)$ is a constant, as $I_A$ is given, and $P(J_A, t) = P(J_A)\,P(t)$, as $J_A$ and $t$ are uncorrelated. In order to obtain $J_A$ and $t$, we maximize (6) as follows:
$$(\hat{J}_A, \hat{t}) = \arg\max_{J_A,\,t} P(I_A \mid J_A, t)\,P(J_A)\,P(t). \tag{7}$$

3.2. Obtaining the Probability Density Functions
3.2.1. Obtaining the Likelihood Based on Noise Level Estimation

Assuming that the signal and the noise are uncorrelated, the variance of (5) projected onto a direction $u$ can be expressed as
$$V\!\left(u^{T} I_A\right) = V\!\left(u^{T} J_A t\right) + \sigma_n^{2}, \tag{8}$$
where $V(\cdot)$ represents the variance of a dataset and $\sigma_n$ is the standard deviation of the Gaussian noise. We define the minimum variance direction as
$$u_{\min} = \arg\min_{\|u\| = 1} V\!\left(u^{T} I_A\right). \tag{9}$$
The variance of $I_A$ along a principal direction can be calculated using principal component analysis (PCA) [28]:
$$V\!\left(u_i^{T} I_A\right) = \lambda_i\!\left(\Sigma_{I_A}\right), \tag{10}$$
where $\lambda_i \geq \lambda_j$ when $i < j$, $\Sigma_{I_A}$ denotes the covariance matrix of $I_A$, and $\lambda_i(\Sigma_{I_A})$ represents the $i$th eigenvalue of the matrix $\Sigma_{I_A}$. The variance of the data projected onto the minimum variance direction equals the minimum eigenvalue of the covariance matrix. Then we can rewrite (8) as follows:
$$\lambda_{\min}\!\left(\Sigma_{I_A}\right) = \lambda_{\min}\!\left(\Sigma_{J_A t}\right) + \sigma_n^{2}. \tag{11}$$
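
The relation in (10)-(11) can be checked numerically. The short NumPy sketch below, using arbitrary synthetic patch data, verifies that the variance of the data projected onto the minimum-variance direction equals the smallest eigenvalue of the covariance matrix and approaches $\sigma_n^{2}$ when the signal spans only a low-dimensional subspace.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy patch data: low-dimensional signal plus i.i.d. Gaussian noise.
n_patches, dim, sigma_n = 5000, 16, 0.05
basis = rng.normal(size=(3, dim))                  # signal spans 3 directions
signal = rng.normal(size=(n_patches, 3)) @ basis
data = signal + rng.normal(0.0, sigma_n, size=(n_patches, dim))

cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
u_min = eigvecs[:, 0]                              # minimum-variance direction

proj_var = np.var(data @ u_min, ddof=1)
print(f"min eigenvalue      : {eigvals[0]:.6f}")
print(f"projected variance  : {proj_var:.6f}")     # the two values agree
print(f"noise variance      : {sigma_n**2:.6f}")   # and are close to sigma_n^2
```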

The noise level can be estimated easily if we can decompose the minimum eigenvalue of the covariance matrix of the noisy patches as in (11). Weak-textured patches are known to span only a low-dimensional subspace, so the minimum eigenvalue of the covariance matrix of such patches is approximately zero. The noise level can then be estimated simply as
$$\sigma_n^{2} = \lambda_{\min}\!\left(\Sigma'_{I_A}\right), \tag{12}$$
where $\Sigma'_{I_A}$ is the covariance matrix of the selected weak-textured patches, which can be selected as in [29]. Once the noise level is known, we model the inherent noise in the observations with a Gaussian distribution of the same variance $\sigma_n^{2}$. The likelihood of $I_A$ then becomes
$$P(I_A \mid J_A, t) \propto \exp\!\left(-\frac{\|I_A - J_A\,t\|^{2}}{2\sigma_n^{2}}\right), \tag{13}$$
where the probability density functions in the RGB channels are independent owing to the randomness of the noise distribution.
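
As an illustration of (12), the sketch below estimates the noise standard deviation from approximately flat patches; the gradient-energy criterion used to pick weak-textured patches is a simple stand-in for the selection rule of [29], and the patch size and keep ratio are illustrative assumptions.

```python
import numpy as np

def estimate_noise_sigma(img, patch=5, keep_ratio=0.1):
    """Estimate the Gaussian noise std from weak-textured patches.

    img: 2-D float array. Patches with the lowest gradient energy are
    treated as weak textured; sigma^2 is taken as the minimum eigenvalue
    of their covariance matrix, in the spirit of (11)-(12).
    """
    H, W = img.shape
    gy, gx = np.gradient(img)
    grad_energy = gx**2 + gy**2

    patches, scores = [], []
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            patches.append(img[i:i + patch, j:j + patch].ravel())
            scores.append(grad_energy[i:i + patch, j:j + patch].sum())
    patches, scores = np.asarray(patches), np.asarray(scores)

    # Keep the flattest patches as the weak-textured set.
    keep = max(patch * patch + 1, int(keep_ratio * len(scores)))
    idx = np.argsort(scores)[:keep]
    cov = np.cov(patches[idx], rowvar=False)
    lam_min = np.linalg.eigvalsh(cov)[0]
    return np.sqrt(max(lam_min, 0.0))

# Sanity check on a smooth synthetic image plus Gaussian noise.
rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
noisy = clean + rng.normal(0.0, 0.03, clean.shape)
print("estimated sigma:", estimate_noise_sigma(noisy))   # close to 0.03
```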

3.2.2. Obtaining $J_A$'s Probability Density Function Based on the Distribution of the Chromaticity Gradient Histogram

After analyzing 200 randomly selected haze images and their haze-free counterparts, we find that the distribution of the chromaticity gradient histogram of haze images is the same as that of their haze-free images: both follow the exponential power distribution. To explain this, we define the chromaticity of the input image as follows [27]:
$$C_c(x) = \frac{I_c(x)}{\sum_{c' \in \{r, g, b\}} I_{c'}(x)}, \quad c \in \{r, g, b\}. \tag{14}$$
The gradient of $C$ is defined as
$$\nabla C = \left[ D_h C,\; D_v C \right], \tag{15}$$
where $D_h$ and $D_v$, respectively, represent the matrices of horizontal and vertical derivative operators. For example, the distributions of the chromaticity gradient histograms of haze images and their haze-free images are shown in Figure 1. We can see that all of them are exponentially distributed; the only difference is that they have different rate parameters $\alpha$ and normalization parameters $\beta$. Figure 2 shows the mean squared error (MSE) between the distribution of the chromaticity gradient histogram and its fitted exponential power distribution for the 200 haze images and their haze-free images. The result demonstrates the reliability of the exponential power fit, as the MSE values remain at a low level.
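
As a small illustration of (14)-(15), the sketch below computes the chromaticity of an image and pools its gradient magnitudes into a histogram; the random input image and the bin count are placeholders for real data.

```python
import numpy as np

def chromaticity(I, eps=1e-6):
    """Per-pixel chromaticity: each channel divided by the channel sum."""
    s = I.sum(axis=2, keepdims=True) + eps
    return I / s

def chroma_gradient_magnitudes(I):
    """Gradient magnitudes of the chromaticity, pooled over all channels."""
    C = chromaticity(I)
    gy, gx = np.gradient(C, axis=(0, 1))
    return np.sqrt(gx**2 + gy**2).ravel()

# Histogram of chromaticity gradients for a toy image; on real haze images
# this histogram is reported to follow an exponential power distribution.
rng = np.random.default_rng(3)
I = rng.random((64, 64, 3))
mags = chroma_gradient_magnitudes(I)
hist, edges = np.histogram(mags, bins=50, density=True)
```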

Therefore, $J_A$'s probability density function can be obtained as follows:
$$P(J_A) = \beta \exp\!\left(-\alpha\,\|\nabla C(J_A)\|\right), \tag{16}$$
where $\nabla C(J_A)$ is the chromaticity gradient of $J_A$, and $\alpha$ and $\beta$, respectively, represent the rate parameter and normalization parameter of the exponential power distribution.

3.2.3. Obtaining $t$'s Probability Density Function Based on the Sensitivity to the Green Wavelength

The human visual system (HVS) has a specific response sensitivity within a small interval of light wavelengths [30]. Figure 3(a) shows the segment of wavelengths where the HVS has its maximum sensitivity; one curve represents the sensitivity of photopic vision and the other that of scotopic vision. Although scotopic vision has a much higher luminous efficiency than photopic vision, both have their maximum sensitivity in the green-blue range rather than at the red or blue ends, and the combined overall sensitivity ranges from 505 nm to 555 nm. Figure 3(b) shows the symmetric forward-scattered intensity from an aerosol particle in the incident light beam: the blue wavelengths tend to be scattered more into the 90° (resp. 270°) direction relative to the incident light in the plane of observation, whereas the red wavelengths are scattered forward (0°) in the plane of observation. As the angle increases from 0° to 90°, the scattered intensity decreases, while the dominant scattered wavelength shifts from red to blue. Owing to the HVS response to the green wavelength and the forward-scattered intensity, the green light component of the image is taken as the input for estimating $t$, which not only improves efficiency (reducing the number of transmission map estimations from three to one) but also corresponds to the statistical prior.

In order to satisfy the global spatial smoothness of the image, which is the basic assumption on the atmospheric transmission map, and meanwhile to preserve the detail and edge information of $t$ when denoising, we combine the sensitivity to the green wavelength with bilateral filtering to estimate the initial atmospheric transmission map $t_0$, which can be formulated as follows [31]:
$$t_0(x) = \frac{1}{W(x)} \sum_{y \in \Omega(x)} G_s(x, y)\, G_r(x, y)\, I_g(y), \tag{17}$$
where $I_g$ is the green light component of the haze image, $W(x)$ is a normalization parameter, $\Omega(x)$ is a 7 × 7 local patch centered at $x$, and $G_s$ and $G_r$, respectively, represent the spatial and luminance functions; they can be defined [32] as
$$G_s(x, y) = \exp\!\left(-\frac{\|x - y\|^{2}}{2\sigma_s^{2}}\right), \qquad G_r(x, y) = \exp\!\left(-\frac{|I_g(x) - I_g(y)|^{2}}{2\sigma_r^{2}}\right), \tag{18}$$
where $\sigma_s$ and $\sigma_r$, respectively, represent the standard deviations of the spatial and luminance functions. According to the exponential damping of [33], $t$'s probability density function is formulated as follows:
$$P(t) \propto \exp\!\left(-\frac{\|t - t_0\|}{\sigma_{t_0}}\right), \tag{19}$$
where $\sigma_{t_0}$ is the standard deviation of the initial estimation of $t$, which can be calculated by (12).
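
The initial transmission estimate of (17)-(18) is essentially a bilateral filter applied to the green channel. The sketch below implements a plain bilateral filter in NumPy for a single-channel image; the clipping that maps the filtered green channel into a valid transmission range, as well as the $\sigma_s$ and $\sigma_r$ values, are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=3.0, sigma_r=0.1):
    """Plain bilateral filter of a single-channel float image in [0, 1]."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian over the (2r+1) x (2r+1) window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    for i in range(H):
        for j in range(W):
            window = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rangew = np.exp(-(window - img[i, j])**2 / (2.0 * sigma_r**2))
            w = spatial * rangew
            out[i, j] = (w * window).sum() / w.sum()
    return out

# Initial transmission from the green channel (illustrative normalization).
rng = np.random.default_rng(4)
I = rng.random((32, 32, 3))                 # stand-in for a hazy RGB image
green = I[:, :, 1]
t0 = bilateral_filter(green, radius=3)      # 7 x 7 window, as in (17)
t0 = np.clip(t0, 0.05, 1.0)                 # keep transmission in (0, 1]
```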

3.3. The Iterative Approach with Feedback Based on the Law of Minimum Noise Level

Substituting the likelihood (13) and the priors (16) and (19) into (7) and taking the negative logarithm, we can estimate $J_A$ and $t$ simply by
$$(\hat{J}_A, \hat{t}) = \arg\min_{J_A,\,t} \left\{ \frac{\|I_A - J_A\,t\|^{2}}{2\sigma_n^{2}} + \alpha\,\|\nabla C(J_A)\| + \frac{\|t - t_0\|}{\sigma_{t_0}} \right\}. \tag{20}$$

Optimizing (20) directly is not possible, as $J_A$ and $t$ are unknown at the same time. To solve this, we estimate each variable with the other one fixed. Thus, (20) splits into two partial energy minimization problems:
$$\hat{J}_A = \arg\min_{J_A} \left\{ \frac{\|I_A - J_A\,t\|^{2}}{2\sigma_n^{2}} + \alpha\,\|\nabla C(J_A)\| \right\}, \tag{21}$$
$$\hat{t} = \arg\min_{t} \left\{ \frac{\|I_A - J_A\,t\|^{2}}{2\sigma_n^{2}} + \frac{\|t - t_0\|}{\sigma_{t_0}} \right\}. \tag{22}$$

When solving (21), a large computational complexity is expected, as we have to traverse every pixel's light level in the RGB channels simultaneously. To avoid this problem, we choose to solve (22), as the transmission map is the same in the three channels. We assume that $t$'s value is traversed between −5% and +5% of its initial value $t_0$, which improves the efficiency greatly. Considering dehazing, we fix $J_A$ by BM3D [18]:
$$J_A(x) = \frac{\mathrm{BM3D}\big(I_A(x)\big)}{t(x)}, \tag{23}$$
where $\mathrm{BM3D}(\cdot)$ denotes the BM3D denoising operator. According to (22) and (23), two terms can be identified in the energy: the first term preserves more edge information in the transmission map; the second term ensures that more noise is removed as the value of $t$ comes near to $t_0$. Finally, we adopt the iterative approach with feedback shown in Figure 4 to achieve the balance between dehazing and denoising.
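
A minimal single-channel sketch of the alternating scheme is given below, under the reconstructions above: $t$ is grid-searched within ±5% of its initial value $t_0$, and SciPy's Gaussian filter stands in for BM3D so the example stays self-contained; the energy terms, parameter values, and the division-by-$t$ step of (23) follow the assumptions made in this section rather than a verified implementation of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter   # stand-in for BM3D in this sketch

def refine_transmission(I_A, J_A, t0, sigma_n, sigma_t, n_candidates=11):
    """Per-pixel grid search of t within +/-5% of t0, minimizing a two-term
    energy: data fidelity ||I_A - J_A t||^2 plus closeness to t0 (cf. (22))."""
    factors = np.linspace(0.95, 1.05, n_candidates)
    best_t, best_e = np.copy(t0), np.full(t0.shape, np.inf)
    for f in factors:
        t = np.clip(t0 * f, 1e-3, 1.0)
        energy = (I_A - J_A * t)**2 / (2 * sigma_n**2) + np.abs(t - t0) / sigma_t
        mask = energy < best_e
        best_t[mask], best_e[mask] = t[mask], energy[mask]
    return best_t

def dehaze_denoise(I, A, t0, sigma_n, sigma_t, n_iter=3):
    """Alternate: fix J_A from the denoised observation, then refine t.

    Single-channel sketch; the feedback is the refined t fed into the next
    dehazing step.  A real implementation would call BM3D where the Gaussian
    filter is used below.
    """
    I_A = A - I
    t = np.copy(t0)
    for _ in range(n_iter):
        I_A_dn = gaussian_filter(I_A, sigma=1.0)   # BM3D would be used here
        J_A = I_A_dn / t                           # dehazing step, cf. (23)
        t = refine_transmission(I_A, J_A, t0, sigma_n, sigma_t)
    J = A - J_A
    return np.clip(J, 0.0, 1.0), t
```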

Figures 5 and 6 show that He's result recovers more details than our 3rd-iteration result but with more noise; He-BM3D's result has the same dehazing effect as our 3rd-iteration result but less edge and texture information; meanwhile, our result has a better dehazing effect than Lan's result at nearly the same noise level. Besides, the relationship between the number of iterations of our approach and the final restored effect for Figure 5(a) is shown in Figure 7. The "noise level" is estimated by (12), and a lower value indicates a better denoising effect. The "PDCP" is the proportion of pixels whose luminance is lower than 25 in the dark channel prior (DCP) image; a higher PDCP indicates a better dehazing result. Figure 7 shows that a stable and effective result is achieved when the number of iterations is 3.
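
As an illustration, the PDCP measure could be computed as follows; the 15 × 15 dark-channel window and the use of the raw dark channel are assumptions, since the paper does not specify these details here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def pdcp(img_uint8, patch=15, thresh=25):
    """Proportion of dark-channel pixels below `thresh` (0-255 scale).

    The dark channel is the per-pixel channel minimum followed by a local
    minimum filter; a higher PDCP is read here as a stronger dehazing effect.
    """
    channel_min = img_uint8.min(axis=2)
    dark = minimum_filter(channel_min, size=patch)
    return float(np.mean(dark < thresh))

# Illustrative call on random data standing in for a restored image.
restored = (np.random.default_rng(5).random((120, 160, 3)) * 255).astype(np.uint8)
print("PDCP:", pdcp(restored))
```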

4. Experimental Results

To validate the performance of our approach, four groups of experiments are conducted: synthetic images with haze and noise (Figure 8), close-depth images (Figure 9), close-depth images with noise (Figure 10), and deep-depth images together with their local enlargements (Figures 11 and 12).

Figure 8 shows that our approach removes haze and noise more effectively than the other methods. In addition, we apply PSNR as a typical objective evaluation, as listed in Table 1.
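
For reference, PSNR as listed in Table 1 follows the standard definition; a minimal implementation (assuming 8-bit images with a peak value of 255) is:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same size."""
    mse = np.mean((reference.astype(float) - restored.astype(float))**2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```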

We compare the algorithms under different conditions in Figures 9 and 10. The results show that there are fewer details in the local dark areas of He's result, as He's algorithm can lead to a lower mean luminance than the original haze image; for example, the texture of the leaves at close range is hidden after processing. In addition, when dealing with a noisy image, the noise contribution is amplified in He's result and can dominate the output. He-BM3D's result has the same effect as He's result in Figure 9 but shows more smoothing and detail loss after BM3D in Figure 10, as it aims to reach the same noise level as our result. Lan's algorithm denoises prior to dehazing, which may cause a loss of detail information; this drawback may not be evident in Figure 9, but it is obvious in Figure 10. In both Figure 9 and Figure 10, however, our approach achieves good results in both dehazing and denoising, which demonstrates its capacity for scene restoration and detail preservation.

Figures 11 and 12 show a deep-depth image containing many details and complex noise. He's algorithm may amplify the noise and lose texture information, and subsequent denoising by BM3D leads to smoothing and detail loss. Even Lan's algorithm cannot restore the scene and details in the large-depth area effectively. Our result, in contrast, presents a more vivid restored image with high contrast and obtains nearly the same haze-free appearance as He's result. In particular, the proposed algorithm achieves wider dynamic range compression in dark regions and also shows a strong ability to resist noise. In addition to the subjective evaluation, the typical objective evaluation is shown in Figure 13. The figure shows that our result achieves almost the same noise level as Lan's result and nearly the same dehazing effect as He's result. Thus, the subjective evaluation agrees with the objective one.

5. Conclusion

In this paper, we present a novel single image dehazing approach that considers noise within a Bayesian framework. We build on an improved atmospheric scattering model that accounts for noise and haze simultaneously. The likelihood and priors of the posterior probability in the Bayesian framework are estimated from statistical priors and objective assumptions about the degraded image. Meanwhile, we improve efficiency by solving for the transmission map to recover the scene radiance. BM3D is used to fix the initial input of the iterative approach with feedback, which helps to achieve a balance between dehazing and denoising. The experimental results demonstrate that our approach is effective, especially in challenging scenes with both haze and noise. However, color distortion still exists, which will be addressed in our future work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by National Natural Science Foundation of China (Grants no. 61372167, no. 61379104, no. 61203268, and no. 61202339).