Mathematical Problems in Engineering
Volume 2019, Article ID 9721503, 11 pages
https://doi.org/10.1155/2019/9721503
Research Article

Single Image Dehazing and Edge Preservation Based on the Dark Channel Probability-Weighted Moments

1Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
2Department of Computer Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
3Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah 21589, Saudi Arabia

Correspondence should be addressed to Rehan Mehmood Yousaf; 16f-phd-se-14@uettaxila.edu.pk

Received 30 May 2019; Revised 24 September 2019; Accepted 16 October 2019; Published 2 December 2019

Academic Editor: Andras Szekrenyes

Copyright © 2019 Rehan Mehmood Yousaf et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Single image dehazing has been widely addressed over the last two decades because haze properties vary extremely across environments. Different factors, such as unbalanced airlight, low contrast, and darkness in hazy images, make the dehazing process cumbersome. Many estimation- and learning-based techniques used to dehaze images and overcome these problems suffer from halo artifacts and weak edges. The proposed technique preserves edges and illumination better and retains the original color of the image. Dark channel prior (DCP) and probability-weighted moments (PWMs) are applied to each channel of an image to suppress the hazy regions and enhance the true edges. PWM is very effective, as it suppresses the low variations present in haze-affected images. The method proposed in this article performs well compared with state-of-the-art image dehazing techniques under various conditions, including illumination changes and contrast variation, and preserves edges without producing halo effects within the image. Qualitative and quantitative analyses carried out on standard image databases prove its robustness in terms of standard performance evaluation metrics.

1. Introduction

Natural outdoor images and their perception are a key factor in image understanding: they are a true representation of what the human visual system is capable of and what it perceives. A better understanding of images makes it easier to execute visual techniques such as recognition, detection, and surveillance [1]. Hazy and foggy particles reduce atmospheric visibility in real-world scenes, whether in the form of haze, fog, smog, or mist. When light strikes these particles, it is scattered in different directions, forming images that suffer from scattered luminance, faded color, and low contrast. The camera receives irradiance from the scene point as the scene light combines with the airlight [2]. Visibility can be reduced to a level that is harmful and causes mismanagement on roads in the case of camera-guided or autonomous vehicles and in navigation-based systems.

Haze removal (image dehazing) is a basic requirement in image processing and computer vision applications: once a haze removal algorithm removes the haze, it becomes feasible for computer vision algorithms to analyze the images. Haze has become a major issue because low-level image analysis, such as deblurring, sharpening, and enhancement, assumes the input image is at its natural radiance; similarly, high-level image analysis, such as target detection, recognition, and surveillance, relies on high-quality images. Haze removal techniques can also help in the depth analysis of an image [3] and can play a vital role in many image-analysis-related fields and applications.

Color preservation, illumination changes, depth analysis, and edge clarity are the challenging issues of hazy images. Prediction techniques already proposed for image dehazing [4] normally require the same haze-free scene captured under different scenarios and conditions so that the visibility parameter improvement can be compared. However, their computational cost is high, as different images have various depths of fog and it is difficult to evaluate each individual case separately. Different methods in the literature have proposed techniques for haze removal in images with various depth levels of haze [1, 4, 5]. The mathematical model of a hazy input image is defined as follows [1, 4, 6]:

I(x) = J(x) t(x) + A (1 − t(x)),  (1)

where J(x) is the surface scene radiance, I(x) is the hazy image, A is the atmospheric airlight representing the ambient light in the atmosphere, t(x) is the transmission of the medium (the portion of light that reaches the camera without deviating), and x is the pixel position. In image dehazing, the significant part is to estimate J, A, and t by processing I. Equation (1) has two parts: J(x) t(x), known as direct attenuation, and A (1 − t(x)), the airlight. Direct attenuation is the response of the medium and explains the amount of light attenuated by the medium. Airlight is the illumination effect produced by scattered atmospheric light that causes color degradation and fading in hazy images. Under an assumption of homogeneous light, the transmission is defined by

t(x) = e^(−β d(x)),  (2)

where d(x) represents the depth of the scene and β is the scattering coefficient of the medium.
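Equations (1) and (2) can be sketched as a minimal NumPy illustration of this standard haze imaging model (the function and variable names here are ours, not the paper's):

```python
import numpy as np

def transmission(depth, beta=1.0):
    """Equation (2): t(x) = exp(-beta * d(x)) under homogeneous scattering."""
    return np.exp(-beta * np.asarray(depth, dtype=float))

def apply_haze(J, t, A):
    """Equation (1): I(x) = J(x) t(x) + A (1 - t(x))."""
    J = np.asarray(J, dtype=float)
    t = np.asarray(t, dtype=float)
    if J.ndim == 3 and t.ndim == 2:       # broadcast per-pixel t over RGB
        t = t[..., np.newaxis]
    return J * t + A * (1.0 - t)
```

With t(x) = 1 (no haze) the model returns the scene radiance unchanged, and with t(x) = 0 every pixel collapses to the airlight A, matching the two limiting cases described above.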

Figure 1: Image dehazing on a sample image using the proposed method. (a) Hazy image. (b) He et al. [7]. (c) The output of the proposed method based on the DCPWM.

The process of image dehazing is based on estimating the transmission map and using prior knowledge to estimate the depth of the haze. In this article, we propose an effective method for image dehazing known as dark channel probability-weighted moments (DCPWMs). The dark channel contains the low-intensity values of pixels that are close to zero in at least one color channel (red, green, or blue), caused by the airlight present in the scene of a hazy environment. These intensity values are almost of the same kind and can estimate the transmission map accurately. The DCPWM is applied to each channel to estimate the transmission map and eliminate the low-intensity values, regenerating a good-quality output image with color, contrast, and illumination close to the ground truth. The DCPWM method uses the same assumptions as most previous image dehazing methods [7]; however, it performs well when it comes to preserving sharp edges, color, contrast, and illumination. It also performs well on images containing objects similar in color to the airlight, and it produces physically valid results, handling distant objects by preserving true edges without producing halo effects.

The result of the proposed method based on the DCPWM and its comparison with the method of He et al. [7] are shown in Figure 1. The main contributions of the DCPWM method in this article are as follows:
(1) The DCPWM method performs well in preserving true edges and suppressing outliers by applying probability-weighted moments
(2) It uses probability-weighted moments for contrast restoration
(3) It uses a log transform to restore the color and illumination of the hazy image
(4) It eliminates outliers using PWM to reduce halo effects and artifacts
(5) It handles distant objects accurately during image dehazing

The remaining sections of this article are organized as follows: Section 2 describes the related work of the state-of-the-art image dehazing methods. Section 3 describes the methodology of the proposed method based on DCPWM. Section 4 presents the performance evaluation metrics, experimental results of the proposed method on the state-of-the-art image databases followed by discussion. Section 5 concludes the proposed method and presents future directions.

2. Related Work

The earliest visibility enhancements for image dehazing were addressed in [8], in which visibility improvement is carried out through dark-object subtraction to eliminate scattered light in multiple images under various weather conditions. Schechner et al. [9] introduced an onboard haze-free system that uses a weather-estimating technique to remove haze by contrast restoration; it is based on a flat-world assumption, and building models from 3D geometrical information is difficult in practice, which makes the approach challenging. Tan [10] proposed a technique based on maximizing local contrast under homogeneous airlight, improving visibility but producing saturation and halo effects. Fattal [4] proposed a method based on optical transmission estimation that eliminates scattered light and restores contrast in images with high visibility but fails in nonhomogeneous and dense hazy areas. He et al. [7] introduced a novel method that defines the dark channel prior: the key idea is that at least one color channel should contain pixels with very small intensity values. This information helps in estimating the depth of haze and restoring a good-quality dehazed image; however, increased sunlight and nonhomogeneous haze in images may affect the efficiency of the method. Tarel and Hautière [11] presented a real-time, less complex image dehazing technique based on enhanced visibility for both color and grey images. The algorithm is based on a maximum-contrast assumption and normalized airlight with edge preservation, but the restored depth map is not smooth along the edges.

Kratz and Nishino [12] addressed the scene washout effect and density in an image by modeling them with a Markov random field as two different layers. The results are promising, but the algorithm creates dark artifacts at locations with high depth. Ancuti and Ancuti [13] proposed a technique based on the fusion of two hazy input images, considering three important measures for feature extraction, namely, saliency, luminance, and chromaticity. The results are pleasing; however, the image is overenhanced and the natural color contrast of the image is not restored. Meng et al. [14] presented a technique that regularizes and optimizes the unknown scene transmission. It produces high-quality images with natural colors and fine edges; however, it does not perform well for images with large sky and white areas, as the resultant image is enhanced to an artificial level. Tang et al. [15] proposed a machine learning framework that extracts the combination of best-selected features for image dehazing, focusing on dark channel features as the most important part of image dehazing. It restores good-quality dehazed images but enhances noise where the haze depth is high. Cai et al. [16] introduced a novel technique that uses convolutional neural networks (CNNs) in place of hand-crafted assumptions and priors; the CNN layers extract haze-relevant features. This technique outperforms the state-of-the-art techniques in restoring sky areas and white patches but distorts the dark colors in the image.

Bansal et al. [17] discussed a number of single-image and multiple-image dehazing algorithms for image restoration; the paper compares different state-of-the-art techniques, elaborates their advantages and disadvantages, and summarizes the future scope. Salazar-Colores et al. [18] proposed a fast technique using morphological operations for restoring quality images, with performance measured by the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The technique performs well as far as speed is concerned; however, it is unable to handle sky regions and white areas due to DCP limitations. Berman et al. [19] introduced a novel technique based on a non-local prior: the colors of the pixels in a haze-free image can be represented by a few hundred distinct colors, which form haze lines in the hazy image; these haze lines are used to restore the haze-free image. It performs well on a variety of images but fails in parts with brighter airlight. Li et al. [20] gave a new approach called REalistic Single-Image DEhazing (RESIDE), based on a training set with two different quality evaluations, objective and subjective. The model is trained on synthetic and nonsynthetic images, and the results are better than those of the state-of-the-art methods.

3. The Proposed Method Based on the DCPWM

In this paper, a novel method based on the dark channel probability-weighted moments (DCPWMs) is proposed, which restores a dehazed image with preserved edges, original illumination, and the original color of the image. Initially, an input hazy image is taken and preprocessing is applied to normalize the pixel values in each color channel. Next, the dark channel is computed, based on the observation that at least one color channel should contain low-intensity pixels that may be close to zero. The dark channel also yields, as a byproduct, an estimate of the depth of haze in a hazy image. The transmission map is estimated by considering the transmission to be constant over a local patch and is refined using a kernel matrix. Probability-weighted moments are applied to these refined maps to restore the sharp edges and suppress the low variations that are probably hazy patches in the image. PWM [21–23] is applied to each channel individually to capture the outlines of objects and suppress haze in each color channel. The log transform, which is very helpful for skewed data, is then applied to the resultant image; it conforms the skewed data to normality and is applied separately to each channel. The restored image shows better results compared with the state-of-the-art techniques. The framework of the proposed technique is shown in Figure 2.

Figure 2: The methodology of the proposed method based on DCPWM.
3.1. Preprocessing of a Hazy Input Image

In the preprocessing stage, colored hazy images are used as input. There are two types of hazy images: natural and synthetic. The pixel values of the image are normalized in each color channel. Normalization not only helps to remove noise but also maintains intensity values within a range that follows a normal distribution. The haze imaging equation (1) is normalized as follows:

I^C(x) / A^C = t(x) J^C(x) / A^C + 1 − t(x),  (3)

where C in equation (3) represents each color channel (R, G, B) and A^C is the corresponding airlight component.

3.2. Dark Channel Computation

The computational criterion of the dark channel prior is based on an observation in RGB images: at least one color channel consists of very low intensity values, almost close to zero, so the minimum intensity value in that particular patch is almost zero [7]. This observation is formulated as follows.

Considering a normalized image from equation (3), its dark channel is mathematically defined as

J^dark(x) = min_{y ∈ Ω(x)} ( min_{C ∈ {R, G, B}} J^C(y) ),  (4)

where y denotes a pixel of the normalized image, C denotes the specific color channel from (R, G, B), and Ω(x) denotes the specific patch centered at x. The dark channel is the combination of two minimum operators applied on all color channels, with a minimum filter on each pixel. According to the observation, DCP is mathematically expressed by

J^dark(x) → 0.  (5)

The concept of DCP is mainly an inspiration taken from a technique called dark-object subtraction [24] that is used in multispectral remote sensing systems. In dark-object subtraction, a constant value is subtracted to remove homogeneous haze. The darkest object in the scene estimates the value to be subtracted. DCP uses a generic assumption, focuses on the whole scene, and specifies a certain channel.
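The dark channel of equation (4) can be sketched in NumPy as follows (a minimal illustration; the function name, the sliding-window implementation, and the edge-replication padding are our choices, not the paper's):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Equation (4): per-pixel minimum over the color channels,
    followed by a minimum filter over a patch x patch neighborhood."""
    min_rgb = np.asarray(img, dtype=float).min(axis=2)   # min over C in {R, G, B}
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")           # replicate borders
    windows = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    return windows.min(axis=(2, 3))                      # min over each local patch
```

For a haze-free image with at least one dark channel per patch (e.g., a saturated red region with zero green and blue), the result is zero everywhere, which is exactly the prior of equation (5).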

3.3. Estimating the Transmission Map

The transmission map is estimated by assuming the atmospheric light A is known. The normalization of the haze imaging equation by A is given in equation (3). Assuming the transmission to be constant over a small local patch, we apply the dark channel operator to both sides of equation (3) [7]:

min_{y ∈ Ω(x)} min_C ( I^C(y) / A^C ) = t̃(x) min_{y ∈ Ω(x)} min_C ( J^C(y) / A^C ) + 1 − t̃(x).  (6)

The minimum operator does not apply to t̃(x), as it is considered constant over the patch. As defined in the dark channel prior, the dark channel of the scene radiance J is very close to zero and is expressed by

min_{y ∈ Ω(x)} min_C J^C(y) → 0.  (7)

A^C is always positive, so

min_{y ∈ Ω(x)} min_C ( J^C(y) / A^C ) → 0.  (8)

By substituting equation (8) into equation (6), we can estimate t̃(x), which is equal to

t̃(x) = 1 − min_{y ∈ Ω(x)} min_C ( I^C(y) / A^C ).  (9)

Equation (9) gives us the dark channel of the normalized hazy image, which estimates the transmission. The transmission estimates achieved in equation (9) are very reasonable and recover the original color and low-contrast edges. The main issue with this transmission map is halo effects and artifacts, which arise from the assumption that the transmission is constant over a patch, which is not always true. So, a soft matting technique is used to solve this problem [25], which is mathematically defined as

I = αF + (1 − α)B,  (10)

where B represents the background color, F denotes the foreground colors of the image, and α refers to the foreground opacity. As the soft matting equation (10) has the same form as the haze imaging equation (1), the α map and the t map are almost similar. We use a closed-form framework to improve the transmission map [25]. The following cost function represents t and t̃ in their vector form:

E(t) = tᵀ L t + λ (t − t̃)ᵀ (t − t̃),  (11)

where t denotes the refined transmission map and t̃ is the transmission map achieved in equation (9). The Laplacian matrix used for soft matting is represented by L, and λ is a weight. The following linear system, derived from equation (11), gives us the optimized transmission. U is an identity matrix of the same dimensions as L, and λ is set to 10⁻⁴, a small value, in order to achieve refined transmission maps:

(L + λU) t = λ t̃.  (12)
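The patch-based estimate of equation (9) can be sketched as follows (a minimal illustration; the helper names are ours, and the retention factor omega follows He et al. [7], who keep a trace of haze on distant objects — set omega=1.0 for the bare equation (9)):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Min over color channels followed by a local-patch minimum (equation (4))."""
    min_rgb = np.asarray(img, dtype=float).min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    return windows.min(axis=(2, 3))

def estimate_transmission(img, A, omega=0.95, patch=15):
    """Equation (9): t~(x) = 1 - omega * dark_channel(I(x) / A)."""
    return 1.0 - omega * dark_channel(np.asarray(img, dtype=float) / A, patch)
```

For a region whose color equals the airlight (pure haze), the normalized dark channel is 1 and the estimated transmission goes to zero, as the derivation above predicts.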

3.4. Probability-Weighted Moments

While recovering the contrast of a hazy image, it is very important to preserve the true edges. Two main types of estimation are used: PWM and maximum likelihood estimation (MLE). It is a proven fact that PWM has a better capability to restore and estimate the contrast in an image [21], and it outperforms MLE when there is more intraclass variability. PWM is capable of estimating the data uniquely, so it can properly estimate the middle of the distribution. This quality gives it an edge in estimating the standard deviation of the data, which helps in suppressing false edges and restoring sharp true edges, and eliminating outliers helps in reducing halo effects in the image. For these reasons, PWM performs better for contrast restoration than other methods.

PWM can be mathematically defined as a linear estimate of the standard deviation as follows:

σ̂ = √π (1/n) Σ_{i=1}^{n} (2F_i − 1) x_(i).  (13)

In equation (13), n is the total sample size, x_(i) is the i-th ordered observation, F_i is the empirical distribution function evaluated at x_(i), and π is a constant whose value is 3.1416.

The transmission maps achieved in equation (12) are further refined and estimated by applying PWM on each color channel thus restoring the original contrast of the original image for each color channel. The slight halo effects and artifacts are also suppressed in this process as PWM identifies the outliers and uniquely categorizes the distribution in a given scenario.
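A minimal sketch of the PWM standard-deviation estimate of equation (13), under the assumption that the empirical distribution positions are taken as F_i = (i − 1)/(n − 1) (the paper does not state the plotting positions explicitly):

```python
import numpy as np

def pwm_sigma(sample):
    """PWM estimate of a Gaussian standard deviation (equation (13)):
    sigma_hat = sqrt(pi) * (1/n) * sum_i (2 F_i - 1) * x_(i),
    with x_(i) the ordered observations and F_i = (i - 1) / (n - 1)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    F = np.arange(n) / (n - 1)            # empirical distribution positions
    return np.sqrt(np.pi) * np.mean((2.0 * F - 1.0) * x)
```

Being a linear combination of order statistics, this estimate down-weights no single extreme observation as heavily as a squared-deviation estimator would, which is consistent with the outlier-suppression property the text attributes to PWM.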

3.5. Log Transform

Log transform is a very effective and useful method for dealing with skewed data. The restored image from equation (12) is divided into three color channels, represented by (Ri, Gi, Bi). The output images of equation (13) are achieved after applying PWM on each color channel; they are meaningful but suffer from skewed data and high intensities. To overcome these issues, the log transform is applied to each color channel with a constant factor multiplier, found by trial and error, that works for almost all kinds of images with thin, thick, synthetic, and nonsynthetic haze, as given in equations (14)–(16).

In equations (14)–(16), similar color channels are subtracted to discard the unnecessary high intensities.

The final output image is achieved by concatenating the results of equations (14)–(16) and forming an RGB dehazed image, which consists of original contrast, illumination, and restored edges (Algorithm 1).
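The generic per-channel log transform can be sketched as follows (a minimal illustration only; the paper's exact equations (14)–(16) additionally subtract similar color channels, and the gain c below stands in for the constant multiplier the authors tune by trial and error):

```python
import numpy as np

def log_transform(channel, c=1.0):
    """Per-channel log transform to compress skewed, high intensities:
    s = c * log(1 + r). log1p avoids log(0) at black pixels."""
    return c * np.log1p(np.asarray(channel, dtype=float))
```

Because the transform is monotonic, the ordering of intensities (and hence edge locations) is preserved while the skewed high end of the histogram is compressed toward normality, which is the stated purpose of this step.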

Algorithm 1: Algorithm of the proposed method based on DCPWM.

4. Experimental Results, Evaluation Parameters, and Discussions

The datasets used to assess DCPWM against the state-of-the-art methods are Frida and Frida 2, created from freely available 3D models in which homogeneous haze is assumed. Frida and Frida 2 belong to the synthetic homogeneous haze category, and their ground truths are available. Frida comprises 90 images containing synthetic haze over 18 urban road scenes; Frida 2 comprises 330 synthetic images containing synthetic haze over 66 diverse road scenes. Corpora of 100 and 500 random hazy images of real scenes are also used to verify the results of DCPWM. Some random hazy images are also taken from Fattal [4], He et al. [7], Tarel and Hautiere [11], Kratz and Nishino [12], Meng et al. [14], Cai et al. [16], Yuan and Huang [26], Li et al. [20], Salazar-Colores et al. [18], Berman et al. [19], and Ancuti et al. [27] to compare DCPWM with the state-of-the-art methods.

4.1. Performance Evaluation Parameters

Qualitative and quantitative evaluations are the two types of performance evaluation. Qualitative evaluation is carried out by visual analysis, and quantitative evaluation is carried out using metric parameters such as the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), e, Σ, and r̄. The metric e denotes the rate of new edges that become visible in a dehazed image, Σ represents the percentage of pixels changing to black or white while dehazing, and r̄ represents the ratio of mean gradients before and after dehazing [28].

PSNR is the ratio between the maximum power of an image and the noise, expressed through the mean square error (MSE); J is the haze-free image:

PSNR = 10 log₁₀ (MAX²_J / MSE).  (17)

SSIM gives the similarity index between two images on the basis of three parameters, namely, luminance, contrast, and structure:

SSIM(x, y) = (2μ_x μ_y + c₁)(2σ_xy + c₂) / ((μ_x² + μ_y² + c₁)(σ_x² + σ_y² + c₂)).  (18)

The rate of increased visible edges after dehazing is computed as

e = (n_r − n_o) / n_o,  (19)

where n_r is the number of visible edges in the restored image and n_o is the number of visible edges in the original image.

In equation (20), Σ is known as a blind assessment indicator that helps to assess the color restoration of an algorithm, n_s is the total number of pixels changing to black or white, and dim_x × dim_y is the size of the image:

Σ = n_s / (dim_x × dim_y).  (20)

The metric parameter r̄ indicates how well an image is restored while preserving edges and texture after dehazing and represents the ratio of the restored image gradient ḡ_r and the original image gradient ḡ_o:

r̄ = ḡ_r / ḡ_o.  (21)
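A minimal NumPy sketch of these evaluation metrics as described above (the edge and saturation counts are assumed to be supplied by an external edge/saturation detector, which is outside the scope of this sketch):

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """PSNR = 10 log10(MAX^2 / MSE) between a haze-free reference and a restored image."""
    ref = np.asarray(reference, dtype=float)
    out = np.asarray(restored, dtype=float)
    mse = np.mean((ref - out) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def edge_rate(n_restored, n_original):
    """e = (n_r - n_o) / n_o: relative gain in visible edges after dehazing."""
    return (n_restored - n_original) / n_original

def saturation_rate(n_saturated, height, width):
    """Sigma = n_s / (dim_x * dim_y): fraction of pixels turned black or white."""
    return n_saturated / (height * width)
```

Higher PSNR indicates a restored image closer to the reference, a positive e indicates newly visible edges, and a Σ near zero indicates few saturated pixels, matching the interpretations given for Table 2.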

4.2. Experimental Results of the Proposed Method

In this section, we evaluate the performance of DCPWM by applying it to standard datasets. DCPWM is applied to the synthetic images taken from Frida and Frida 2. Five random images are taken from Frida and Frida 2 with different haze density levels (K, M, L, U), with their ground truths, as seen in Figures 3–6.

Figure 3: Experimental results on images from Frida dataset with different haze depths. (a) K level hazy images. (b) DCPWM results. (c) Ground truth. (d) M level hazy images. (e) DCPWM results. (f) Ground truth.
Figure 4: Experimental results on images from Frida dataset with different haze depths. (a) L level hazy images. (b) DCPWM results. (c) Ground truth. (d) U level hazy images. (e) DCPWM results. (f) Ground truth.
Figure 5: Experimental results on images from Frida 2 dataset with different haze depths. (a) K level hazy images. (b) DCPWM results. (c) Ground truth. (d) M level hazy images. (e) DCPWM results. (f) Ground truth.
Figure 6: Experimental results on images from Frida dataset with different haze depths. (a) L level hazy images. (b) DCPWM results. (c) Ground truth. (d) U level hazy images. (e) DCPWM results. (f) Ground truth.
4.3. Performance Comparisons and Discussions

The proposed method based on the DCPWM outperforms 7 state-of-the-art image dehazing methods both qualitatively and quantitatively. Images taken from different real-world scenes used by the state-of-the-art image dehazing methods are presented in Figure 7. According to the experimental results, the DCPWM method outperforms its competitor image dehazing methods in terms of illumination, natural color, edges, and an originally clear sky. The original hazy images are presented in the first column, Figure 7(a), with 9 different images used for the qualitative comparison.

Figure 7: Comparison with state-of-the-art methods. (a) Original hazy image. (b) He et al. [7]. (c) Kim et al. [29]. (d) Meng et al. [14]. (e) Tang et al. [15]. (f) Tarel and Hautiere [11]. (g) Xiao and Gan [30]. (h) DCPWM (proposed).

The method of He et al. [7] produced comparable results, presented in Figure 7(b), in areas with dense haze, but the sky is dull and noisy, with a clear color change and low contrast in the overall image. The dark channel prior is not applicable to the whole image, and the transmission is not accurately estimated; the brighter portions of the image are also not correctly estimated or restored. On the contrary, DCPWM is very careful with sky regions and restores a proper contrast with sharper edges for both low and high haze densities; it enhances the brightness and preserves true edges while suppressing outliers. The method of Kim et al. [29] in Figure 7(c) is based on contrast restoration. While compensating the contrast in a hazy image, some pixels are truncated, resulting in information loss. This method is more efficient than the other state-of-the-art methods and also controls artifacts, but it is unable to preserve good-quality edges. The results of Meng et al. [14], presented in Figure 7(d), are almost similar to those of He et al. [7]. The method also produces an overenhanced image, and halo artifacts can be seen in the restored image; the sky color is almost white, changing the original color present in the hazy image.

Experimental results of the Tang et al. [15] method, based on a machine learning framework that extracts the combination of best-selected features for image dehazing, can be seen in Figure 7(e). It clearly restores a good-quality image but enhances noise where the haze depth is high, whereas DCPWM maintains its quality of performance at higher haze densities, as demonstrated on Frida and Frida 2. Tarel and Hautiere [11], in Figure 7(f), proposed an algorithm based on visibility restoration that is real-time and less complex for both color and grey images. This method works well for low-haze images, but the restored depth map is not smooth along the edges, and the restored color contrast and illumination are not up to the mark. On the contrary, DCPWM performs well when it comes to edge clarity and restoration of the original illumination and contrast. Cai et al. [16], presented in Figure 7(g), is a learning-based algorithm using deep convolutional neural networks. The results show that some haze is still present in the restored image and that the sky areas are badly affected by the algorithm in terms of color restoration.

Overall, the proposed method based on the DCPWM outshines the state-of-the-art methods mentioned above in all aspects, particularly illumination, edge preservation, and contrast restoration. The quantitative evaluation is carried out on the basis of PSNR, SSIM, e, Σ, and r̄. The comparison of PSNR and SSIM with 7 state-of-the-art methods is given in Table 1, and the comparison of e, Σ, and r̄ is given in Table 2. A higher PSNR and an SSIM closer to 1 indicate better image quality. Values in bold show the better performance of DCPWM or a state-of-the-art method.

Table 1: Performance comparison of DCPWM in terms of PSNR and SSIM with state-of-the-art image dehazing methods for 5 images (values in bold indicate the best performances).
Table 2: Performance comparison of DCPWM in terms of e, Σ, and r̄ with state-of-the-art image dehazing methods for 5 images (values in bold indicate the best performances).

These descriptors are used to assess visibility restoration as proposed in [28]. e is a metric parameter that denotes the rate of new visible edges after removing haze; similarly, r̄ addresses how well the contrast of an image is restored after dehazing, and Σ presents the rate of pixels converted to black or white after dehazing. The main objective is to restore the contrast, original color, and illumination of the image, as haze mostly affects these properties; restoring all visual information is also part of a good dehazing algorithm. Higher values of e and r̄ demonstrate increased edges and better contrast restoration, respectively, whereas Σ, the rate of saturated pixels, should be closer to zero. So, from Table 2, we can conclude that the proposed method based on the DCPWM outperforms the state-of-the-art methods in terms of the three performance evaluation metrics, which verifies better illumination, color, and contrast restoration for hazy images.

5. Conclusion and Future Directions

In this article, a novel method for image dehazing based on the dark channel prior (DCP) is proposed. It uses DCP with PWM to achieve a complementary effect for image dehazing. DCP is a concept based on the statistics of natural haze-free images. The prior and the haze imaging model, combined with PWM, make the proposed method more effective and robust. PWM focuses on intraclass variability and performs well in preserving true edges and suppressing low variations. DCPWM has shown better output than the state-of-the-art image dehazing methods in all parameters, such as illumination, true edges, color, and contrast, and it outperforms them in preserving details close to the original haze-free image. Hence, the DCPWM method can be utilized in smart cars for haze removal and in remote surveillance systems. As DCP is a critical part of this algorithm, images whose scene albedo is similar to the airlight may be misjudged in some cases; although PWM covers up this deficiency to some extent, it may still fail, and in such cases edges may become vague and blurred. A future contribution to DCPWM could be the estimation of scene radiance in images with such scene albedo, which is sometimes problematic.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All the authors contributed equally.

References

  1. S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, IEEE, Kerkyra, Greece, September 1999.
  2. H. Koschmieder, in Beitrage zur Physik der Freien Atmosphare, Walter de Gruyter, Berlin, Germany, 1924.
  3. X. Lan, L. Zhang, H. Shen, Q. Yuan, and H. Li, “Single image haze removal considering sensor blur and noise,” EURASIP Journal on Advances in Signal Processing, vol. 2013, no. 1, p. 86, 2013.
  4. R. Fattal, “Single image dehazing,” ACM Transactions on Graphics (TOG), vol. 27, no. 3, p. 72, 2008.
  5. Y. Liu, J. Shang, L. Pan, A. Wang, and M. Wang, “A unified variational model for single image dehazing,” IEEE Access, vol. 7, pp. 15722–15736, 2019.
  6. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” International Journal of Computer Vision, vol. 48, no. 3, pp. 233–254, 2002.
  7. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
  8. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, New York, NY, USA, June 2006.
  9. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Kauai, HI, USA, December 2001.
  10. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Anchorage, AK, USA, June 2008.
  11. J.-P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, IEEE, Kyoto, Japan, September–October 2009.
  12. L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a single foggy image,” in Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, IEEE, Kyoto, Japan, September–October 2009.
  13. C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3271–3282, 2013.
  14. G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient image dehazing with boundary constraint and contextual regularization,” in Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, December 2013.
  15. K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, June 2014.
  16. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: an end-to-end system for single image haze removal,” IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187–5198, 2016. View at Publisher · View at Google Scholar · View at Scopus
  17. B. Bansal, J. Singh Sidhu, and K. Jyoti, “A review of image restoration based image defogging algorithms,” International Journal of Image, Graphics and Signal Processing, vol. 9, no. 11, pp. 62–74, 2017. View at Publisher · View at Google Scholar
  18. S. Salazar-Colores, E. Cabal-Yepez, J. M. Ramos-Arreguin, G. Botella, L. M. Ledesma-Carrillo, and S. Ledesma, “A fast image dehazing algorithm using morphological reconstruction,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2357–2366, 2018. View at Publisher · View at Google Scholar · View at Scopus
  19. D. Berman, T. Treibitz, and S. Avidan, “Single image dehazing using haze-lines,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018. View at Publisher · View at Google Scholar · View at Scopus
  20. B. Li, W. Ren, D. Fu et al., “Benchmarking single-image dehazing and beyond,” IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492–505, 2019. View at Publisher · View at Google Scholar · View at Scopus
  21. F. Downton, “Linear estimates with polynomial coefficients,” Biometrika, vol. 53, no. 1/2, pp. 129–141, 1966. View at Publisher · View at Google Scholar
  22. H. Dawood, H. Dawood, and P. Guo, “Combining the contrast information with WLD for texture classification,” in Proceedings of the 2012 IEEE International Conference on Computer Science and Automation Engineering (CSAE), IEEE, Zhangjiajie, China, May 2012. View at Publisher · View at Google Scholar · View at Scopus
  23. H. Dawood, H. Dawood, G. Ping et al., Probability Weighted Moments Regularization Based Blind Image De-blurring, Multimedia Tools and Applications, 2019.
  24. P. S. Chavez Jr., “An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data,” Remote Sensing of Environment, vol. 24, no. 3, pp. 459–479, 1988. View at Publisher · View at Google Scholar · View at Scopus
  25. A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228–242, 2008. View at Publisher · View at Google Scholar · View at Scopus
  26. F. Yuan and H. Huang, “Image haze removal via reference retrieval and scene prior,” IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4395–4409, 2018. View at Publisher · View at Google Scholar · View at Scopus
  27. C. O. Ancuti, C. Ancuti, M. Sbert, and R. Timofte, “Dense haze: a benchmark for image dehazing with dense-haze and haze-free images,” 2019, http://arxiv.org/abs/1904.02904. View at Google Scholar
  28. N. Hautière, J.-P. Tarel, D. Aubert, and É. Dumont, “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Analysis & Stereology, vol. 27, no. 2, pp. 87–95, 2011. View at Publisher · View at Google Scholar
  29. J.-H. Kim, W.-D. Jang, J.-Y. Sim, and C.-S. Kim, “Optimized contrast enhancement for real-time image and video dehazing,” Journal of Visual Communication and Image Representation, vol. 24, no. 3, pp. 410–425, 2013. View at Publisher · View at Google Scholar · View at Scopus
  30. C. Xiao and J. Gan, “Fast image dehazing using guided joint bilateral filter,” The Visual Computer, vol. 28, no. 6–8, pp. 713–721, 2012. View at Publisher · View at Google Scholar · View at Scopus
  31. W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in European Conference on Computer Vision, pp. 154–169, Springer, Berlin, Germany, 2016. View at Publisher · View at Google Scholar · View at Scopus
  32. D. Berman, T. Treibitz, and S. Avidan, “Non-local image dehazing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, June 2016. View at Publisher · View at Google Scholar · View at Scopus
  33. L. K. Choi, J. You, and A. C. Bovik, “Referenceless prediction of perceptual fog density and perceptual image defogging,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3888–3901, 2015. View at Publisher · View at Google Scholar · View at Scopus