Special Issue: Advanced Techniques for Computational and Information Sciences
Research Article | Open Access
Multiscale Single Image Dehazing Based on Adaptive Wavelet Fusion
Removing haze effects from images or videos is a challenging and meaningful task for image processing and computer vision applications. In this paper, we propose a multiscale fusion method to remove the haze from a single image. Based on the existing dark channel prior and optics theory, two atmospheric veils with different scales are first derived from the hazy image. Then, a novel adaptive local similarity-based wavelet fusion method is proposed for preserving the significant scene depth property and avoiding blocky artifacts. Finally, the clear haze-free image is restored by solving the atmospheric scattering model. Experimental results demonstrate that the proposed method yields comparable or even better results than several state-of-the-art methods under both subjective and objective evaluations.
1. Introduction

Outdoor scene images captured in bad weather are often degraded by haze, fog, mist, or other media. The main reason is that light rays reflected by the scene are absorbed by aerosols in the medium, resulting in direct attenuation. At the same time, the airlight is scattered by these aerosols during its propagation from the scene points to the observer. Consequently, the captured images exhibit poor quality such as low contrast, distorted color, and inferior visibility.
Moreover, most vision systems, such as outdoor surveillance, remote sensing, object detection, and tracking systems, do not work effectively under the influence of haze. Accordingly, image dehazing techniques have received wide attention in computer vision and image processing applications in recent years. However, recovering the original clear scene is a challenging task, especially when only one hazy image is given. The existing dehazing methods are mainly divided into two categories: multiple image dehazing and single image dehazing. Narasimhan and Nayar [1, 2] presented a physics-based model that describes the appearance of a scene in fog or haze weather conditions. Based on the observation that the scene depth is invariant across different bad weather conditions while the camera is static, they estimated the depth map and restored the scene contrast using two or more captured images of the same scene. Although these methods can produce good results in certain circumstances, it is hard to obtain such multiple images of the same scene under different weather conditions. Assuming that the airlight is partially polarized whereas the scene radiance is unpolarized, polarization-based methods [3–7] were proposed to remove haze effects by using multiple images taken with different angles of a polarizing filter. However, polarization filtering may lose efficacy in dense fog due to heavy scattering. One of the key points for removing the unwanted influence of haze is to obtain an accurate transmission map and atmospheric light. Kopf et al. [8] obtained the scene depth map and corrected the color bias with the aid of an existing approximate 3D model or georeferenced digital terrain and urban models. Nevertheless, this approach is limited when no accurate or approximate model is available for an existing image.
Recently, a number of single image dehazing methods have been developed to overcome the need for multiple images or a 3D model obtained in advance. Fattal [9] estimated the scene albedo and depth under the hypothesis that the surface shading is locally uncorrelated with the transmission. This method is effective but fails when the hypothesis does not hold. Tan [10] observed that images on clear days have stronger contrast than those in bad weather and that the variation of airlight mainly depends on the scene depth and tends to be smooth. Consequently, he developed a local contrast maximization method for restoring the scene. Although this method can enhance visibility, it yields serious halos and oversaturated colors. Based on the dark channel prior of haze-free natural images, He et al. [11] estimated the thickness of the haze and recovered a high quality haze-free image. Although this method can achieve ideal dehazing results, the cost of soft matting for refining the transmission map is very high and unsuitable for real-time applications. Tarel and Hautiere [12] proposed a linear complexity visibility restoration method based on the median filter. While this method can meet real-time requirements, the median filter is neither conformal nor edge-preserving. To get around this problem, Xiao and Gan [13] proposed a guided joint bilateral filter for haze removal. Using a bilateral filter, Zhang et al. introduced a new algorithm to remove haze from a single image based on the assumption that only the variation of transmission can result in large-scale luminance variation in the fog-free image. However, this method fails when the assumption does not hold. Nishino et al. [15] introduced a Bayesian probabilistic method that jointly estimates the scene albedo and depth from a single hazy image. However, since it is hard to estimate the multiple statistical prior models accurately, the dehazing results obtained by this method may contain unnaturally bright colors.
More recently, C. O. Ancuti and C. Ancuti [16] presented a fusion-based strategy that is able to accurately restore images using only the original hazy or foggy image. This method has high computing speed and yields results similar to [8, 9, 11]. Kim et al. [17] restored the hazy image by an optimized contrast enhancement procedure that balances the cost of contrast against information loss. Besides, they proposed a quad-tree subdivision based searching method for estimating the atmospheric light. Nevertheless, the quad-tree searching method is not robust when the maximum value in the final subblock is noise. Zhang et al. described a hazy layer estimation method using a low-rank technique and an overlap averaging scheme. Its disadvantage is that it may not work well for far scenes with heavy fog and large depth jumps.
In this paper, we present a novel single image dehazing method based on the atmospheric scattering model. On the basis of the existing dark channel prior and optics theory, two atmospheric veils with different scales are first derived from the hazy image. Then, a local similarity-based wavelet method is proposed to fuse the two atmospheric veils adaptively, which reflects the scene depth information well. At last, the scene is restored by solving the atmospheric scattering model.
Compared with previous single image dehazing methods, the proposed method exhibits the following advantages. Firstly, the proposed adaptive local similarity-based wavelet fusion method shows better performance in preserving the most discriminative scene depth information. Secondly, our dehazing method performs per-pixel computation efficiently and yields very few artifacts. Moreover, the proposed method does not need postprocessing such as He et al.'s exposure correction [11] or Tarel and Hautiere's tone mapping [12]. Finally, we test our method on a number of natural and synthetic hazy images. The comparisons show comparable or even better dehazing results under both subjective and objective evaluations.
The rest of this paper is organized as follows. In Section 2, we review the atmospheric scattering model widely used in dehazing task. Section 3 describes the proposed dehazing method in detail, mainly including atmospheric light and atmospheric veil estimations, image fusion, and scene restoration. In Section 4, we report and discuss the experiment results. Finally, conclusions are provided in Section 5.
2. Atmospheric Scattering Model
In computer vision and image processing, the formation of a hazy image is commonly described by the atmospheric scattering model [9–12]:

$$I(x) = J(x)\,t(x) + A\left(1 - t(x)\right),\quad (1)$$

where $I(x)$ is the observed hazy image at location $x$, $J(x)$ is the scene radiance (the clear haze-free image), $A$ is the global atmospheric light, and $t(x)$ is the medium transmission, which depends on the distance $d(x)$ between the object scene and the observer when the atmosphere is homogeneous. The transmission is described as follows:

$$t(x) = e^{-\beta d(x)},\quad (2)$$

where $\beta$ is the scattering coefficient of the atmosphere.
The first term $J(x)t(x)$ describes how the light reflected by the scene points is partially attenuated during propagation. The second term, $V(x) = A(1 - t(x))$, is called the atmospheric veil; it is generated by atmospheric scattering and leads to color shift. The task of dehazing is to recover $J(x)$ from $I(x)$.
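As a minimal sketch of the scattering model above, the following Python snippet synthesizes a hazy image from a clear one given a depth map; the function name and the use of a scalar atmospheric light $A$ are our own simplifying assumptions, not part of the paper.

```python
import numpy as np

def add_haze(J, A, beta, depth):
    """Synthesize a hazy image from the atmospheric scattering model:
    I(x) = J(x) t(x) + A (1 - t(x)),  with  t(x) = exp(-beta * d(x)).
    J: clear H x W x 3 image in [0, 1]; A: scalar atmospheric light;
    beta: scattering coefficient; depth: H x W scene depth d(x)."""
    t = np.exp(-beta * depth)                     # medium transmission, eq. (2)
    return J * t[..., None] + A * (1.0 - t[..., None])
```

Note that as `depth` grows, `t` approaches 0 and the observed pixel converges to the atmospheric light `A`, which is exactly why distant scene points look washed out.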
3. Proposed Dehazing Method
The proposed method can be divided into four main steps. Firstly, the atmospheric light is estimated by an improved quad-tree subdivision method. Then, atmospheric veils with different scales are derived from the hazy image. Next, the accurate atmospheric veil is obtained by fusing the two veils. Finally, the scene radiance is restored by solving the atmospheric scattering model. The flow chart of the proposed dehazing method is shown in Figure 1.
3.1. Estimation of Atmospheric Light
Generally, the atmospheric light is considered to be the brightest intensity in the entire image, because a large amount of haze makes the object scene brighter than it really is. However, this is not reliable when a white object is present in the scene. Kim et al. [17] proposed a quad-tree subdivision method to estimate the atmospheric light while avoiding choosing the white object. The basic idea of this method is that the atmospheric light should be estimated from a region with bright intensity and little texture. It iteratively searches for the region with the largest score, computed as the difference between the region's mean and its standard deviation. The atmospheric light is then estimated as the value that has the least distance to pure white.
In this work, we opt for this method since it is simple yet efficient. However, it is not fully robust because the estimated atmospheric light may be noise. Therefore, we take the mean of the top 5% brightest values in the selected region as the atmospheric light, which rejects the influence of noise effectively.
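The quad-tree search plus the top-5% averaging described above can be sketched as follows; the function name, the stopping size, and the use of channel sum as the brightness ranking are our own assumptions for illustration.

```python
import numpy as np

def estimate_airlight(img, min_size=32, top_frac=0.05):
    """Quad-tree search in the spirit of Kim et al.: repeatedly keep the
    quadrant whose (mean - std) score is largest, i.e. bright and
    textureless; then (our modification) average the top 5% brightest
    pixels of the final region instead of taking a single pixel."""
    gray = img.mean(axis=2)
    r0, r1, c0, c1 = 0, gray.shape[0], 0, gray.shape[1]
    while min(r1 - r0, c1 - c0) > min_size:
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        quads = [(r0, rm, c0, cm), (r0, rm, cm, c1),
                 (rm, r1, c0, cm), (rm, r1, cm, c1)]
        # score = mean - std: bright, low-texture regions win
        r0, r1, c0, c1 = max(
            quads,
            key=lambda q: gray[q[0]:q[1], q[2]:q[3]].mean()
                          - gray[q[0]:q[1], q[2]:q[3]].std())
    region = img[r0:r1, c0:c1].reshape(-1, 3)
    k = max(1, int(top_frac * region.shape[0]))
    idx = np.argsort(region.sum(axis=1))[-k:]   # top 5% brightest pixels
    return region[idx].mean(axis=0)
```

Averaging a small set of bright pixels, rather than taking the single maximum, is what makes the estimate robust to isolated noisy pixels.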
3.2. Dark Channel Prior
He et al. [11] proposed the dark channel prior for haze-free outdoor images, which states that most local regions excluding the sky have very low intensity in at least one color (RGB) channel. The intensities of these dark pixels are mainly caused by (1) shadows, (2) colorful objects or surfaces, and (3) dark objects or surfaces. Conversely, it has been observed that pixels in sky or hazy regions have high values in all color channels. The dark channel is defined as

$$J^{\mathrm{dark}}(x) = \min_{c\in\{r,g,b\}}\min_{y\in\Omega(x)} J^{c}(y),\quad (3)$$

where $J^{c}$ is a color channel of $J$ and $\Omega(x)$ is a local neighborhood centered at $x$. Based on the dark channel prior, the transmission is assumed constant in a local image patch. Thus, we define the local transmission as $\tilde{t}(x)$ and take the min operation with a local patch on (1):

$$\min_{y\in\Omega(x)}\min_{c} I^{c}(y) = \tilde{t}(x)\min_{y\in\Omega(x)}\min_{c} J^{c}(y) + A\left(1 - \tilde{t}(x)\right).\quad (4)$$
Since $J^{\mathrm{dark}}$ is close to zero, we can estimate the atmospheric veil term as

$$V(x) = A\left(1 - \tilde{t}(x)\right) \approx \min_{y\in\Omega(x)}\min_{c} I^{c}(y).\quad (5)$$
This estimate is roughly correct but contains block artifacts, since the atmospheric veil is not always constant within a patch. That is to say, a local patch may span a region where the scene depth is discontinuous.
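A compact sketch of the dark channel computation in (3): a per-pixel minimum over the color channels followed by a patch-wise minimum filter. The edge padding and patch size are implementation assumptions, not specified by the paper.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of He et al.: min over RGB, then a patch-wise minimum.
    img: H x W x 3 array in [0, 1]; patch: odd neighborhood size."""
    m = img.min(axis=2)                       # min over color channels
    p = patch // 2
    mp = np.pad(m, p, mode='edge')            # edge padding (an assumption)
    win = np.lib.stride_tricks.sliding_window_view(mp, (patch, patch))
    return win.min(axis=(2, 3))               # min over each local patch
```

Because the result is a minimum over whole patches, it is piecewise flat across patch boundaries, which is exactly the source of the blocky artifacts discussed above.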
To avoid this problem, Tarel and Hautiere [12] estimated the atmospheric veil by applying the median filter to the minimal color component of the hazy image. Although this method is pixel-based and yields a smooth atmospheric veil, the scene depth cannot be preserved well due to the two median filter operations. Inspired by the above two methods [11, 12], we present an adaptive wavelet fusion method to obtain an accurate atmospheric veil in the next subsection.
3.3. Estimation of Atmospheric Veil
Now we describe an adaptive wavelet image fusion method to estimate the atmospheric veil accurately. Firstly, we acquire two atmospheric veils with different scales. Then, a local similarity-based wavelet fusion method is introduced to preserve the most significant scene depth features.
3.3.1. Coarse Scale Atmospheric Veil
The image pyramid [19] is a highly efficient and remarkable tool for image processing tasks such as enhancement, multiresolution representation, and compression. It extracts the image structure to represent image information at different scales. Therefore, we reduce the resolution level by level in the pyramid representation to obtain a coarse scale version of the original image. A simple iterative image pyramid algorithm is used for obtaining a relatively smooth image. The zero level of the pyramid, $G_0$, is equal to the original image. Each subsequent pyramid level, $G_l$, is subsampled by a factor of 2 from the previous level $G_{l-1}$. Figure 2 shows the levels of the image pyramid expanded to the size of the original image. By this procedure, the image becomes smoother and fewer details remain at larger scales. At last, the coarse scale version of the original image is obtained by

$$\bar{I}(x) = \frac{1}{N}\sum_{l=0}^{N-1} E(G_l)(x),\quad (6)$$

where $N$ is the number of pyramid levels, $G_l$ is each level of the image pyramid, and $E(\cdot)$ denotes expansion to the original image size.
The coarse scale atmospheric veil $V_c$ is then defined by the minimum operation:

$$V_c(x) = \min_{c\in\{r,g,b\}} \bar{I}^{c}(x),\quad (7)$$

where $\bar{I}$ is the coarse scale version of the original hazy image. Figure 3(a) shows an example of a hazy image, and Figure 3(b) is the coarse scale atmospheric veil estimated by our method.
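The pyramid-then-minimum procedure of (6) and (7) can be sketched as below. The paper does not specify the pyramid kernel, so a 2x2 box average for reduction and nearest-neighbour repetition for expansion are our own simplifying assumptions.

```python
import numpy as np

def coarse_veil(img, levels=3):
    """Coarse scale atmospheric veil: average the re-expanded levels of a
    simple image pyramid (eq. 6), then take the per-pixel minimum over the
    color channels (eq. 7). img: H x W x 3 array."""
    h, w, _ = img.shape
    expanded = [img]
    g = img
    for _ in range(levels - 1):
        # reduce: crop to even size, then 2x2 box average
        g = g[: g.shape[0] // 2 * 2, : g.shape[1] // 2 * 2]
        g = 0.25 * (g[0::2, 0::2] + g[1::2, 0::2]
                    + g[0::2, 1::2] + g[1::2, 1::2])
        # expand back to the original resolution by pixel repetition
        up = g
        while up.shape[0] < h:
            up = up.repeat(2, axis=0).repeat(2, axis=1)
        expanded.append(up[:h, :w])
    coarse = np.mean(expanded, axis=0)        # eq. (6): mean of expanded levels
    return coarse.min(axis=2)                 # eq. (7): min over color channels
```

Each averaging step discards fine detail, so the mean of the expanded levels retains the large-scale depth structure while smoothing out texture.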
3.3.2. Fine Scale Atmospheric Veil
During the propagation of the visible spectrum from the light source to the sensor, light of different wavelengths is scattered to different degrees. Rayleigh scattering [20] describes the physical phenomenon that light is scattered along its propagation direction by very small particles. According to the Rayleigh law, the scattered intensity is inversely proportional to the fourth power of the wavelength. The blue and purple components of the visible spectrum are therefore scattered strongly from the very distant sky, while little light is scattered over short distances. However, the human visual system is not sensitive to purple light. This is why the distant sky appears blue on sunny days (grey on cloudy or hazy days). Under hazy weather, the blue light is scattered severely in distant scenes while it is scattered weakly in close scenes, which indirectly reflects the change of scene depth. Hence, the blue channel of the original hazy image is chosen as the fine scale atmospheric veil $V_f$. Figure 3(c) shows an example of the fine scale atmospheric veil.
3.3.3. Local Similarity-Based Wavelet Image Fusion
Over the past several decades, the wavelet transform, a multiscale frequency domain processing technique, has been a popular tool for image fusion [21–23]. Moreover, its results are considered more suitable for human and machine perception and for further image processing tasks such as segmentation, feature extraction, and object tracking. In this section, we describe a local similarity-based wavelet image fusion method for preserving the most important features of the coarse and fine scale atmospheric veils.
(A) Local Similarity Definition. For each pixel $x$, we define the local similarity as

$$LS(x) = \exp\left(-\left\|P_1(x) - P_2(x)\right\|_2\right),\quad (8)$$

where $P_1(x)$ and $P_2(x)$ are two corresponding local image patches at the same position, each covering a neighborhood $\Omega(x)$ of $x$, and $\|\cdot\|_2$ represents the Euclidean distance between $P_1(x)$ and $P_2(x)$. Alternatively, the 2-norm can be replaced by other norms; since the 2-norm is a common measurement, and without loss of generality, we adopt the 2-norm in our experiments. The value range of $LS$ is $(0, 1]$, which ensures that the neighborhoods of two patches obtain high similarity if they are close to each other and vice versa. This local similarity is more robust than computing the similarity of the two corresponding pixels, since some pixels in the coarse scale veil might not hold true values after the image pyramid operation.
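A per-pixel sketch of the local similarity. The exponential form of the measure is an assumption (it maps patch distance into (0, 1], matching the behaviour described above); the edge padding is likewise an implementation choice.

```python
import numpy as np

def local_similarity(a, b, x, y, n=3):
    """Local similarity between images a and b at pixel (x, y): compare the
    two n-by-n neighbourhoods rather than the single pixels.
    Assumed form: exp(-||P_a - P_b||_2), in (0, 1], increasing as the
    patches get closer to each other."""
    p = n // 2
    pa = np.pad(a, p, mode='edge')[x:x + n, y:y + n]
    pb = np.pad(b, p, mode='edge')[x:x + n, y:y + n]
    return float(np.exp(-np.linalg.norm(pa - pb)))
```

Identical neighbourhoods give a similarity of exactly 1, and the value decays smoothly toward 0 as the patches diverge, which is what makes it usable as a blending weight.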
(B) Adaptive Wavelet-Based Fusion Algorithm. As analyzed in Sections 3.3.1 and 3.3.2, the two atmospheric veils reflect the scene depth information at different scales. Fusing them effectively is crucial for obtaining an accurate atmospheric veil. Since wavelet-based fusion strategies can fuse two images naturally and without artifacts, they have been widely explored and applied in the fields of image and video restoration [24, 25]. In this study, we propose an adaptive wavelet-based fusion method. Given the coarse scale atmospheric veil $V_c$ and the fine scale atmospheric veil $V_f$, a multilevel wavelet decomposition is performed on the two images, respectively. For the two approximation subbands, the mean of the corresponding coefficients is used as the fused result. For each pair of corresponding high frequency subbands, the local similarity of each pixel is calculated using (8). Since the edges in the coarse scale reflect scene depth jumps with large probability, while in the fine scale only the edges related to the outer contour reflect depth jumps, we use the local similarity to measure the edge consistency between the two atmospheric veils. Due to the side effect of pyramid decomposition, the edges in the coarse scale might not be the original ones; hence, the edges in the fine scale atmospheric veil should be preserved as much as possible when a large local similarity is obtained between the two veils. A small similarity indicates that the structure might be texture in the fine scale atmospheric veil, and the value in the coarse scale atmospheric veil should be maintained as much as possible. Therefore, the fused coefficient at the $l$th level of wavelet decomposition can be computed as

$$H_F^{l}(x) = LS(x)\,H_f^{l}(x) + \left(1 - LS(x)\right)H_c^{l}(x),\quad (9)$$

where $H_c^{l}$ and $H_f^{l}$ denote the high frequency coefficients of the coarse and fine scale veils, respectively.
After fusing each pair of corresponding high frequency subbands, the inverse wavelet transform is carried out. The fusion procedure is described in Algorithm 1.
Algorithm 1 (the adaptive wavelet-based fusion algorithm). Consider the following.
Input. Two images and , the number of decomposition levels in wavelet transformation , and the size of neighborhood in computing local similarity .
Output. The fused image $V$.

(1) Decompose $V_c$ and $V_f$ into approximation subbands $A_c$ and $A_f$ and high frequency subbands $H_c^{l}$ and $H_f^{l}$ ($l = 1, \ldots, L$) using a wavelet transformation with $L$ levels:

$$[A_c, H_c] = \mathrm{DWT2}(V_c, L), \qquad [A_f, H_f] = \mathrm{DWT2}(V_f, L),\quad (10)$$

where $\mathrm{DWT2}(\cdot)$ is a function of two-dimensional wavelet decomposition.
(2) Calculate the approximation coefficient: $A_F = (A_c + A_f)/2$.
(3) Initialize: $l = 1$.
(4) Do.
(5) Compute the local similarity $LS(x)$ between $H_c^{l}$ and $H_f^{l}$ with neighborhood $\Omega$ using (8).  (11)
(6) Compute the high frequency coefficients at the $l$th level of wavelet decomposition: $H_F^{l}(x) = LS(x)H_f^{l}(x) + (1 - LS(x))H_c^{l}(x)$.  (12)
(7) Set $l = l + 1$; until $l > L$.
(8) Reconstruct the fused image using the inverse wavelet transform on $[A_F, H_F]$:

$$V = \mathrm{IDWT2}(A_F, H_F),\quad (13)$$

where $\mathrm{IDWT2}(\cdot)$ is a function of two-dimensional wavelet reconstruction.
(9) Return $V$.
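Algorithm 1 can be sketched end to end as follows. To stay self-contained this sketch uses a single-level hand-written Haar transform instead of the paper's multilevel wavelet decomposition, and the exponential local-similarity form is an assumption; both are simplifications, not the authors' implementation.

```python
import numpy as np

def haar2(img):
    # One level of a 2-D Haar transform: approximation + 3 detail subbands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a + b + c + d) / 4, ((a - b + c - d) / 4,
                                 (a + b - c - d) / 4,
                                 (a - b - c + d) / 4)

def ihaar2(A, details):
    # Exact inverse of haar2.
    H, V, D = details
    out = np.empty((A.shape[0] * 2, A.shape[1] * 2))
    out[0::2, 0::2] = A + H + V + D
    out[0::2, 1::2] = A - H + V - D
    out[1::2, 0::2] = A + H - V - D
    out[1::2, 1::2] = A - H - V + D
    return out

def fuse_veils(Vc, Vf, n=3):
    """Average the approximation subbands; blend each detail subband
    per-coefficient by the local similarity LS in (0, 1]: fine-scale
    details dominate where LS is large, coarse-scale ones where small."""
    Ac, dets_c = haar2(Vc)
    Af, dets_f = haar2(Vf)
    A = (Ac + Af) / 2
    p = n // 2
    fused = []
    for hc, hf in zip(dets_c, dets_f):
        pc = np.pad(hc, p, mode='edge')
        pf = np.pad(hf, p, mode='edge')
        wc = np.lib.stride_tricks.sliding_window_view(pc, (n, n))
        wf = np.lib.stride_tricks.sliding_window_view(pf, (n, n))
        ls = np.exp(-np.sqrt(((wc - wf) ** 2).sum(axis=(2, 3))))
        fused.append(ls * hf + (1 - ls) * hc)    # eq. (12)
    return ihaar2(A, tuple(fused))
```

When both inputs agree, the similarity is 1 everywhere and the fusion reduces to an exact reconstruction of the input, so the transform round-trip introduces no error of its own.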
As we can see, the coarse scale atmospheric veil (Figure 3(b)) preserves rough image details and ensures that the scene depth and boundaries are not destroyed. In Figure 3(c), the close shots are not affected. However, the cloud region of the fine scale veil in the white rectangle shows almost no visibility due to the scattering of the blue and purple light. Fortunately, the coarse scale atmospheric veil can compensate for this lost information. Figure 3(d) shows the atmospheric veil fused by applying Algorithm 1 to $V_c$ and $V_f$. It exhibits the complementary strengths of both: the clouds in the white rectangle show better visibility than in the fine scale veil, while the scene depth information is preserved better than in the coarse scale veil.
3.4. The Scene Restoration
Once the atmospheric veil $V(x)$ and the atmospheric light $A$ are obtained, the transmission map can be estimated simply by

$$t(x) = 1 - \frac{V(x)}{A}.\quad (14)$$
As we know, the observed scene is shrouded by the atmosphere to some extent even on sunny days. If we remove the haze directly using (14), the restored image may seem unnatural. Therefore, a reference parameter $\omega$ ($0 < \omega \le 1$) [11, 12] is introduced:

$$t(x) = 1 - \omega\frac{V(x)}{A}.\quad (15)$$
According to the transmission map $t(x)$ and the atmospheric light $A$, we can recover the scene radiance from (1). However, the direct attenuation term tends to zero when $t(x)$ is close to zero, so the sky or infinitely distant regions of the directly recovered image tend to be incorrect. Consequently, the scene radiance is recovered by

$$J(x) = \frac{I(x) - A}{\max\left(t(x), t_0\right)} + A,\quad (16)$$

where $t_0$ is a lower bound used for restricting the transmission map. Algorithm 2 shows our dehazing procedure.
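Equations (15) and (16) translate directly into a few lines of code; for simplicity this sketch assumes a scalar atmospheric light $A$, and the function name is our own.

```python
import numpy as np

def restore(I, V, A, omega=0.95, t0=0.1):
    """Recover the scene radiance from the fused atmospheric veil:
    t(x) = 1 - omega * V(x) / A            (eq. 15)
    J(x) = (I(x) - A) / max(t(x), t0) + A  (eq. 16)
    omega < 1 keeps a trace of haze for distant objects so the result
    looks natural; t0 bounds t away from 0 to avoid amplifying noise."""
    t = 1.0 - omega * V / A
    t = np.maximum(t, t0)                  # lower bound on the transmission
    return (I - A) / t[..., None] + A
```

With `V = 0` (no haze) the transmission is 1 and the input is returned unchanged, while near-saturated veil values are clamped by `t0` so the division stays well behaved.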
Algorithm 2 (the proposed single image dehazing algorithm). Consider the following.
Inputs. Hazy image $I$, the number of pyramid levels $N$, the number of wavelet transformation levels $L$, the size of the neighborhood $\Omega$ for computing local similarity, and the parameter $\omega$.
Outputs. Haze-free image $J$.

(1) Calculate the atmospheric light $A$ as described in Section 3.1.
(2) Generate the coarse scale atmospheric veil $V_c$ using the image pyramid with (6) and the min operation with (7).
(3) Yield the fine scale atmospheric veil $V_f$ by extracting the blue channel of the hazy image.
(4) Get the final atmospheric veil $V$ by fusing $V_c$ and $V_f$ using Algorithm 1.
(5) Compute the transmission map $t$ with (15).
(6) Recover the scene radiance $J$ with (16).
4. Results and Discussion
In this section, the performance of the proposed dehazing method is evaluated on a series of hazy images and compared with several well-known single image dehazing methods.
In the experiments, we found that the number of pyramid levels $N$ and the number of wavelet decomposition levels $L$ have little effect on the results; for example, different settings of these two parameters show no obvious subjective difference. Therefore, we keep these two parameters fixed in all experiments. In contrast, $\omega$ has a greater impact on the results. Generally, the value of $\omega$ is proportional to the haze density and is set between 0.5 and 1.0; that is, a small $\omega$ is assigned for removing thin haze. Figure 4 shows the dehazing results as $\omega$ changes. The size of the neighborhood $\Omega$ for computing local similarity should be small. Since a large $\Omega$ generates small similarity, the edges in the coarse scale would then be preserved according to Algorithm 1, and the edges related to the outer contour in the fine scale would be neglected, making the scene depth inaccurate. In addition, a large $\Omega$ increases the computational cost. Therefore, $\Omega$ is set to 3 × 3 in the experiments. Typically, $t_0$ is equal to 0.1 as mentioned in [11].
4.1. Subjective Observation
Figure 5 shows the dehazing results of several well-known methods and the proposed method in terms of subjective visibility on the so-called “house” image. From this figure, we can see that there are some color biases and halos in the results obtained by Fattal [9] and Tarel and Hautiere [12], as shown in Figures 5(b) and 5(c). Among the other methods, Ancuti et al. and Wang and Feng recover the original scene relatively well, as shown in Figures 5(f) and 5(g). However, Ancuti et al.'s method introduces oversaturated color on the close tree, and Wang and Feng's method cannot remove the haze completely. Kim et al.'s method [17] yields serious artifacts, and the color of the close tree is also oversaturated (Figure 5(d)). Similarly, the haze is not removed completely in Zhang et al.'s result (Figure 5(e)); worse, the color is shifted severely on some boundaries with abrupt scene depth jumps. Despite their dark tones, Meng et al.'s and Lan et al.'s results exhibit similar effects and look real and natural as a whole, as shown in Figures 5(h) and 5(i). Nevertheless, the color of the white rectangular regions above the windows is not restored well. By contrast, our method removes most of the haze without introducing artifacts (Figure 5(j)). The restored images show better visibility and higher contrast and preserve the real colors. The contents of the red rectangles highlight the obvious differences among the methods.
Figure 6 shows the comparative results of the proposed method and four other methods on the “mountain” image. In the mountain region, the color is obviously distorted in Yeh et al.'s result and oversaturated in Shiau et al.'s result, as shown in Figures 6(d) and 6(e). On the whole, the other three methods produce similar results except in the sky region (Figures 6(b), 6(c), and 6(f)). However, a veil of mist seems to remain over the scene in Figure 6(b), and Lan et al.'s result is slightly oversaturated in the sky (Figure 6(c)). Taken as a whole, our method attains a relatively satisfactory result (Figure 6(f)). It is worth noting that $\omega$ is set to 0.3 for the “mountain” image.
Next, we compare the proposed method with Tan's [10] and Nishino et al.'s [15] methods on four representative hazy images. Clearly, the color is distorted in Tan's results (Figure 7(b)), which might be caused by the local contrast maximization operation. Nishino et al.'s results look more natural, but their method produces halos in the sky (third row of Figure 7(c)). Again, our method exhibits better dehazing results (Figure 7(d)).
Figure 8(a) shows a nonhomogeneous hazy image, and Figures 8(b)–8(e) are the results of different methods. Compared with the others, the haze is removed most completely by Fattal's method [9]. However, a visible color shift appears in Figure 8(b). Tarel and Hautiere's [12] and Zhang et al.'s results yield unpleasant artifacts, and C. O. Ancuti and C. Ancuti's method [16] does not handle this situation well. Although our method also cannot remove the haze completely, the color and depth are kept more real and natural.
4.2. Objective Evaluation
In addition to the natural outdoor scenes, we also test our method on synthetic hazy images and evaluate it with objective criteria. The synthetic images are from the project website provided by Tang et al. Figures 9(a) and 9(b) show the synthetic hazy images and their corresponding haze-free images. Figures 9(c)–9(e) give the comparative results of He et al.'s method, Tang et al.'s method, and ours. We can see that our results are closer to the haze-free images, which is especially clear in the red rectangles.
Images captured in hazy weather often suffer from degraded contrast, distorted color, and missing image content. Focusing on these issues, three criteria are applied for objective evaluation as follows.

(i) Average Gradient (AG). To compare the image contrast of the recovered haze-free image against the original haze-free image, the image average gradient is calculated:

$$AG = \frac{1}{N}\sum_{x}\sqrt{\frac{\left(\partial_h I(x)\right)^2 + \left(\partial_v I(x)\right)^2}{2}},\quad (17)$$

where $x$ is the location of a pixel, $I(x)$ is the intensity at $x$, $\partial_h$ and $\partial_v$ denote horizontal and vertical differences, and $N$ is the number of pixels of the image.

(ii) Color Consistency (CC). To measure the color consistency between the restored haze-free image $\hat{J}$ and the original haze-free image $J$, the color descriptor on the HSV color space and the color histogram intersection are used:

$$CC = \frac{1}{N}\sum_{l}\min\left(h_J(l),\, h_{\hat{J}}(l)\right),\quad (18)$$

where $l$ is the color level and $h_J(l)$ and $h_{\hat{J}}(l)$ are the numbers of pixels in $J$ and $\hat{J}$ with level $l$, respectively.

(iii) Structure Similarity (SSIM). Generally, human visual perception is highly adapted for extracting structural information from a scene, and the dehazing task is closely connected with the human visual system. Therefore, the SSIM index is employed to measure the restored image quality:

$$SSIM(J, \hat{J}) = \frac{\left(2\mu_J\mu_{\hat{J}} + C_1\right)\left(2\sigma_{J\hat{J}} + C_2\right)}{\left(\mu_J^2 + \mu_{\hat{J}}^2 + C_1\right)\left(\sigma_J^2 + \sigma_{\hat{J}}^2 + C_2\right)},\quad (19)$$

where $J$ and $\hat{J}$ are the original and restored haze-free images, respectively; $\mu_J$ and $\mu_{\hat{J}}$ are their mean intensities; $\sigma_J^2$ and $\sigma_{\hat{J}}^2$ are their variances; $\sigma_{J\hat{J}}$ is the covariance of $J$ and $\hat{J}$; and $C_1 = (K_1 L)^2$ and $C_2 = (K_2 L)^2$ are two variables stabilizing the division with a weak denominator, where $L$ is the dynamic range of the pixel values, $K_1 = 0.01$, and $K_2 = 0.03$ by default.
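The three criteria can be sketched as follows. Two simplifications are ours, not the paper's: the color consistency intersects grayscale histograms rather than HSV color descriptors, and SSIM is computed globally over the whole image rather than averaged over local windows.

```python
import numpy as np

def avg_gradient(img):
    """Average gradient: mean finite-difference gradient magnitude."""
    gx = np.diff(img, axis=1)[:-1, :]          # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]          # vertical differences
    return np.sqrt((gx ** 2 + gy ** 2) / 2).mean()

def color_consistency(a, b, bins=32):
    """Normalized histogram intersection (grayscale sketch of CC)."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 1))
    hb, _ = np.histogram(b, bins=bins, range=(0, 1))
    return np.minimum(ha, hb).sum() / ha.sum()

def ssim(x, y, L=1.0, K1=0.01, K2=0.03):
    """Global (single-window) SSIM index, eq. (19)."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()         # covariance
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

Each measure equals its ideal value on a perfect restoration: AG matches the reference's contrast, CC reaches 1 for identical histograms, and SSIM reaches 1 when the two images coincide.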
Table 1 lists the objective comparisons of the different methods on “dolls,” “teddy,” and “venus.” For the “dolls” image, our results are slightly inferior to the other methods on AG, CC, and SSIM. For the “teddy” image, our results yield the best performance in terms of AG and CC, which means that our restored images hold higher contrast and less color distortion. For the structure similarity criterion, all methods display very similar values close to 1, which indicates that the structure information of the restored images is well maintained; visually, the three methods yield very similar results on the “teddy” image (the second row). For the “venus” image, our method yields much better results in terms of AG and SSIM. The CC value is lower than He et al.'s by 0.23, which suggests that He et al.'s method better maintains the color information. However, as seen in Figure 9(c), the color is obviously distorted, especially in the upper left part. This illustrates that some objective criteria are not consistent with subjective visualization; that is, these criteria only partially reveal the quality of dehazing. Therefore, it is necessary to explore integrated criteria for evaluating restored results.
4.3. Running Time
Furthermore, we consider the running time. For a fair comparison, we download the code from the original authors' project websites and run all procedures on the same platform (Intel Core i7 CPU, 16 GB RAM). Table 2 lists the running time of four methods. Because the cost of our proposed method is a linear function of the number of input image pixels, it is obviously faster than the others: for an image with $n$ pixels, the complexity of the proposed dehazing algorithm is $O(n)$, which is comparable to or lower than that of conventional image dehazing methods. Although the complexity of Tarel and Hautiere's method is also linear, its two median filter operations are very time-consuming. Zhang et al.'s method costs the longest time, which might be caused by the overlap scheme and median filter. By contrast, Meng et al.'s method is much more time-saving than Tarel and Hautiere's and Zhang et al.'s methods. In particular, our method runs about five times as fast as Meng et al.'s method on the same image.
5. Conclusions

In this paper, we have presented a novel single image dehazing method based on wavelet fusion. Two atmospheric veils with different scales are derived from the hazy image: one is the minimum channel after image pyramid decomposition, and the other is the blue component. To obtain an accurate atmospheric veil, we proposed an effective local similarity-based adaptive wavelet fusion method that reduces blocky artifacts, which is the major novelty of this work. The experimental results indicate that our method achieves comparable or even better results in terms of both visual effect and objective assessment.
However, our method cannot remove the haze completely for nonhomogeneous haze and heavy fog images. In addition, unnecessary textures remain in the estimated scene depth. To overcome these limitations, a more robust and practical dehazing method will be studied in future work.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This work is supported by the Science and Technology Research Project of Liaoning Province Education Department (no. L2014450), the National Natural Science Foundation of China (no. 61403078), the Science and Technology Development Plan of Jilin Province (no. 20140101181JC), and the Science and Technology Project of Jinzhou New District of China (no. KJCX-ZTPY-2014-0004).
References

- S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713–724, 2003.
- S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” International Journal of Computer Vision, vol. 48, no. 3, pp. 233–254, 2002.
- L. J. Denes, M. S. Gottlieb, B. Kaminsky, P. Metes, and R. J. Mericsko, “AOTF polarization difference imaging,” in Proceedings of the 27th AIPR Workshop: Advances in Computer-Assisted Recognition, Washington, DC, USA, 1998.
- Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), vol. 1, pp. 325–332, IEEE, Kauai, Hawaii, USA, 2001.
- J. G. Walker, P. C. Y. Chang, and K. I. Hopcraft, “Visibility depth improvement in active polarization imaging in scattering media,” Applied Optics, vol. 39, no. 27, pp. 4933–4941, 2000.
- D. B. Chenault and J. L. Pezzaniti, “Polarization imaging through scattering media,” in Polarization Analysis, Measurement, and Remote Sensing III, vol. 4133 of Proceedings of SPIE, p. 124, San Diego, Calif, USA, July 2000.
- M. P. Rowe, E. N. Pugh Jr., J. S. Tyo, and N. Engheta, “Polarization-difference imaging: a biologically inspired technique for observation through scattering media,” Optics Letters, vol. 20, no. 6, pp. 608–610, 1995.
- J. Kopf, B. Neubert, B. Chen et al., “Deep photo: model-based photograph enhancement and viewing,” ACM Transactions on Graphics, vol. 27, no. 5, article 116, 2008.
- R. Fattal, “Single image dehazing,” ACM Transactions on Graphics, vol. 27, no. 3, article 72, 2008.
- R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, IEEE, Anchorage, Alaska, USA, June 2008.
- K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 1956–1963, IEEE, Miami, Fla, USA, June 2009.
- J. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of the 12th International Conference on Computer Vision (ICCV '09), pp. 2201–2208, IEEE, Kyoto, Japan, September 2009.
- C. X. Xiao and J. J. Gan, “Fast image dehazing using guided joint bilateral filter,” Visual Computer, vol. 28, no. 6–8, pp. 713–721, 2012.
- J. W. Zhang, L. Li, G. Q. Yang, Y. Zhang, and J. Z. Sun, “Local albedo-insensitive single image dehazing,” Visual Computer, vol. 26, no. 6–8, pp. 761–768, 2010.
- K. Nishino, L. Kratz, and S. Lombardi, “Bayesian defogging,” International Journal of Computer Vision, vol. 98, no. 3, pp. 263–278, 2012.
- C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3271–3282, 2013.
- J.-H. Kim, W.-D. Jang, J.-Y. Sim, and C.-S. Kim, “Optimized contrast enhancement for real-time image and video dehazing,” Journal of Visual Communication and Image Representation, vol. 24, no. 3, pp. 410–425, 2013.
- Y.-Q. Zhang, Y. Ding, J.-S. Xiao, J. Liu, and Z. Guo, “Visibility enhancement using an image filtering approach,” EURASIP Journal on Advances in Signal Processing, vol. 2012, article 220, 2012.
- B. K. Choudhary, N. K. Sinha, and P. Shanker, “Pyramid method in image processing,” Journal of Information Systems and Communication, vol. 3, no. 1, pp. 269–273, 2012.
- A. Bucholtz, “Rayleigh-scattering calculations for the terrestrial atmosphere,” Applied Optics, vol. 34, no. 15, pp. 2765–2773, 1995.
- L. J. Chipman, T. M. Orr, and L. N. Graham, “Wavelets and image fusion,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '95), vol. 3, pp. 248–251, IEEE, Washington, DC, USA, 1995.
- K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques—an introduction, review and comparison,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 62, no. 4, pp. 249–263, 2007.
- G. Pajares and J. M. de la Cruz, “A wavelet-based image fusion tutorial,” Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
- Y. Du, B. Guindon, and J. Cihlar, “Haze detection and removal in high resolution satellite image with wavelet analysis,” IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 1, pp. 210–216, 2002.
- N. Anantrasirichai, A. Achim, N. G. Kingsbury, and D. R. Bull, “Atmospheric turbulence mitigation using complex wavelet-based fusion,” IEEE Transactions on Image Processing, vol. 22, no. 6, pp. 2398–2408, 2013.
- C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “A fast semi-inverse approach to detect and remove the haze from a single image,” in Proceedings of the 10th Asian Conference on Computer Vision (ACCV '10), pp. 501–514, Springer, Berlin, Germany, November 2010.
- Z. Wang and Y. Feng, “Fast single haze image enhancement,” Computers and Electrical Engineering, vol. 40, no. 3, pp. 785–795, 2014.
- G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient image dehazing with boundary constraint and contextual regularization,” in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 617–624, IEEE, Sydney, Australia, December 2013.
- X. Lan, L. Zhang, H. Shen, Q. Yuan, and H. Li, “Single image haze removal considering sensor blur and noise,” EURASIP Journal on Advances in Signal Processing, vol. 2013, article 86, 2013.
- C.-H. Yeh, L.-W. Kang, M.-S. Lee, and C.-Y. Lin, “Haze effect removal from image via haze density estimation in optical model,” Optics Express, vol. 21, no. 22, pp. 27127–27141, 2013.
- Y.-H. Shiau, P.-Y. Chen, H.-Y. Yang, C.-H. Chen, and S.-S. Wang, “Weighted haze removal method with halo prevention,” Journal of Visual Communication and Image Representation, vol. 25, no. 2, pp. 445–453, 2014.
- K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 2995–3002, IEEE, Columbus, Ohio, USA, June 2014.
- R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson Education, Upper Saddle River, NJ, USA, 2nd edition, 2002.
- B. S. Manjunath, J.-R. Ohm, V. V. Vasudevan, and A. Yamada, “Color and texture descriptors,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 6, pp. 703–715, 2001.
- Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
Copyright © 2015 Wei Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.