Abstract

Foggy images taken in bad weather inevitably suffer from contrast loss and color distortion. Existing defogging methods mostly concentrate on estimating an accurate scene transmission, while overlooking the unpleasant distortion and high computational complexity this pursuit incurs. Different from previous works, we propose a simple but powerful method based on histogram equalization and the physical degradation model. By revising two constraints in a variational histogram equalization framework, the intensity component of a fog-free image can be estimated in HSI color space, once the airlight has been inferred in advance through a color attenuation prior. To cut down the time consumption, a general variation filter is proposed to obtain a numerical solution of the revised framework efficiently. Given the estimated intensity component, the saturation component follows directly from the physical degradation model in the saturation channel. Accordingly, the fog-free image can be restored from the estimated intensity and saturation components. Finally, the proposed method is tested on several foggy images and assessed by two no-reference indexes. Experimental results reveal that our method compares favorably with three groups of relevant and state-of-the-art defogging methods.

1. Introduction

Perceiving natural scenes from captured images matters greatly to computer vision applications such as image retrieval, video analysis, object recognition, and car navigation. In foggy weather, the visual quality of images is impaired by suspended atmospheric particles, resulting in a loss of image contrast and color fidelity. This kind of degradation significantly reduces the effectiveness of those applications. Therefore, the technique of removing foggy image degradation, namely, "image defogging," has attracted much attention in the image processing field [1–3].

Since the fog degradation depends largely on the distance from the object to the camera, many existing approaches rely on the physical degradation model [4]. In the model, the fog-free scene can be recovered as long as the airlight and the scene transmission (or, alternatively, the depth map) are estimated in advance. Compared with the airlight, the scene transmission is considerably harder to infer, so most effort concentrates on obtaining an accurate scene transmission. In early work, two or more images captured of the same scene, yet at different times or at different angles of a polarizing filter, were used to estimate the scene transmission [5, 6]. Obviously, fetching multiple images on demand is impractical. Single-image-based approaches soon became mainstream; they fall into two categories: one class obtains the scene transmission through third parties such as satellites and 3D models [7, 8]; the other estimates the scene transmission under priors or assumptions drawn from empirical statistics and careful analysis [9, 10]. Because such auxiliary information stays unavailable in most practical cases, the second class has drawn greater attention. For example, He et al. proposed the dark channel prior in 2010 to obtain a rough scene transmission, which is then refined by a soft-matting algorithm [9]. Three years later, Nishino et al. proposed a Bayesian probabilistic approach that jointly estimates the scene transmission and albedo under scene-specific albedo priors [10]. Although these methods produce good results, the color of their outputs is quite dim and crucial features are almost concealed. Worse still, achieving a precisely estimated scene transmission takes so long that this kind of approach can hardly meet the time requirements of practical applications.

Traditional image enhancement algorithms such as intensity transformation functions and histogram equalization do not have to evaluate the scene transmission and therefore do not suffer from the problems mentioned above. However, they are likely to produce distorted results, since they ignore the physical degradation mechanism. Recent work therefore combines image enhancement algorithms with the physical degradation model. In 2014, Arigela and Asari designed a sine nonlinear function to refine a rough scene transmission [11], but the method tends to produce results with dim color and a distorted sky region, mainly because it still relies on precisely estimated scene transmission information. In 2015, Liu et al. abandoned the pursuit of an accurate scene transmission and instead introduced a contrast-stretching transformation function based on rough depth layers of the scene to enhance the local contrast of foggy images [12]. It is an intuitive and effective method that achieves clear visibility, but the corresponding results suffer from oversaturation, chiefly because the transformation function contains no constraint that guarantees color fidelity. Different from Arigela's and Liu's methods, histogram equalization is able not only to enhance image contrast but also to preserve the color fidelity of the input images. In 2015, Ranota and Kaur decomposed a foggy image into three components in LAB color space and applied adaptive histogram equalization channel by channel to enhance contrast [13]. A rough scene transmission is then obtained by the dark channel prior and refined by an adaptive Gaussian filter. The method is capable of reconstructing many fine details, but the processed results still suffer from heavy color distortion, for two reasons: for one thing, the method overlooks the fact that the three HSI components are attenuated by fog degradation to different degrees; for another, histogram equalization preserves the color fidelity of the foggy image instead of the fog-free one. To address the first factor, we use the physical degradation model in HSI space to process the three channels separately. As to the second, we must revise the color preservation mechanism of histogram equalization. Thanks to the work of Wang and Ng in 2013 [14], we can modify the mechanism with the aid of their variational histogram equalization framework.

Here, we propose an improved variational histogram equalization framework for single color image defogging. Similar to the methods in [11–13], histogram equalization and the physical degradation model are merged into an effective and fast defogging framework. Our major contributions can be summarized in four aspects. (1) A strategy that treats the saturation and intensity components differently is adopted to avoid artificial color distortion. (2) According to the physical degradation model, we modify the mean brightness constraint that preserves color fidelity in the variational framework. (3) Unlike the work of Wang and Ng, we design a mixed norm and substitute it for the total variation (TV) and $H^1$ norms in the original variational framework, so there is no need to choose a proper norm as the regularization term manually. Moreover, a general variation filter, an extension of the TV filter, is established to solve the framework efficiently. (4) Different from the global constant airlight in many existing methods, a local airlight associated with the density of fog is estimated under a color attenuation prior and pixel-based dark and bright channels.

The remainder of this paper is organized as follows. In the next section, the physical degradation model is expressed in HSI color space and our strategy for color image defogging is briefly illustrated. In Section 3, we present the proposed variational histogram equalization framework in detail. In Section 4, our method is compared with other representative and relevant approaches in simulation experiments, and the paper is summarized in the last section.

2. Physical Degradation Model in HSI Color Space

Generally, because of absorption and scattering by suspended particles in foggy weather, the scene-reflected light undergoes undesirable attenuation and dispersal. Worse still, the airlight scattered by those particles adds a considerable amount of extraneous light toward the observer. Both of these factors are closely associated with the scene depth $d(x)$, so the observed scene appearance can be expressed as follows, according to Koschmieder's law [4]:

\[ I(x) = J(x)\,t(x) + A\,\big(1 - t(x)\big), \tag{1} \]

where $I(x)$ is the observed image, $J(x)$ is the fog-free scene radiance, $A$ is the airlight, and $t(x)$ is the scene transmission related to $d(x)$ at position $x$; it can be described as $t(x) = e^{-\beta d(x)}$ with $\beta$ denoting the scattering coefficient of the atmosphere. The additive model in formula (1) comprises two major parts: the direct attenuation $J(x)t(x)$ and the veiling light $A(1-t(x))$. The former indicates that $J(x)$ is attenuated and dispersed, while the latter is the main cause of color distortion. Under the physical degradation model, image restoration in foggy scenes is essentially an ill-posed problem: recovering $J$, $t$, and $A$ from the observed image $I$. Notice that HSI space matches human interpretation well, which makes it an ideal tool for processing images [15]. The space contains three components (hue, saturation, and intensity) that can be transformed from the RGB channels. With the equivalent transformation from RGB to HSI, the physical degradation model can be expressed in HSI color space by

\[ H_I(x) = H_J(x), \tag{2} \]
\[ S_I(x) = \frac{S_J(x)\, I_J(x)\, t(x)}{I_I(x)}, \tag{3} \]
\[ I_I(x) = I_J(x)\, t(x) + A\,\big(1 - t(x)\big), \tag{4} \]

where $H_I$, $S_I$, and $I_I$ are the HSI components of $I$ while $H_J$, $S_J$, and $I_J$ represent those of $J$, respectively. Formula (2) implies that the hue component stays constant, according to the color constancy theory, while formula (3) verifies that fog contaminates the saturation component, a fact easily overlooked in some existing methods. Obviously, formula (4) is tractable, because the intensity channel can be treated as a gray-level image. Based on this model, we propose a color image defogging scheme: $I_J$ is first inferred through a variational framework of histogram equalization, and then $S_J$ can be obtained, provided that $A$ is estimated in advance:

\[ S_J(x) = \frac{S_I(x)\, I_I(x)}{I_J(x)\, t(x)} = \frac{S_I(x)\, I_I(x)\,\big(I_J(x) - A\big)}{I_J(x)\,\big(I_I(x) - A\big)}, \tag{5} \]

where $S_I$ and $I_I$ are given by the decomposition of $I$ in the $S$ and $I$ channels, and $t(x) = \big(I_I(x) - A\big)/\big(I_J(x) - A\big)$ follows from formula (4). Together with the estimated $H_J$ and $S_J$, the fog-removal image is finally recovered.
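To make the decomposition concrete, the following minimal NumPy sketch implements the standard RGB-to-HSI conversion and the saturation recovery of formula (5). The function names and the small stabilizing constants are ours, not part of the method, and the recovery step simply encodes the relation derived above.

```python
import numpy as np

def rgb_to_hsi(img):
    """Decompose an RGB image (floats in [0, 1]) into H, S, I components.

    Standard conversion: I = (R+G+B)/3 and S = 1 - min(R,G,B)/I;
    hue is returned in radians.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(img, axis=2) / np.maximum(i, 1e-6)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - h, h)
    return h, s, i

def recover_saturation(s_foggy, i_foggy, i_clear, airlight):
    """Recover the fog-free saturation from formula (5).

    Uses S_I * I_I = S_J * I_J * t with t = (I_I - A) / (I_J - A),
    as derived from the degradation model applied to each RGB channel.
    """
    t = np.clip((i_foggy - airlight) / (i_clear - airlight + 1e-6), 1e-3, 1.0)
    return np.clip(s_foggy * i_foggy / (i_clear * t + 1e-6), 0.0, 1.0)
```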

3. The Proposed Variational Framework for Image Defogging

Histogram equalization is one of the most representative methods in the image enhancement field, but the original formulation is limited, since a mean brightness constraint is not considered. In 2007, Jafar and Ying proposed a constrained variational framework of histogram equalization for image contrast enhancement [16]. With the attached constraint, the mean brightness of the output is approximately equal to that of the input, resulting in realistic color restoration. Nevertheless, it may fail to enhance contrast because it neglects the differences among the local transformations at nearby pixel locations. Building on Jafar's work, Wang and Ng modified the framework with another constraint in 2013 [14]. Their variational framework can be written, schematically, with the histogram equalization energy of [14] denoted $E_{\mathrm{hist}}$, as

\[ \min_{T}\; E(T) \;=\; E_{\mathrm{hist}}(T) \;+\; \alpha \int_{\Omega} \big( O(x) - \mu \big)^{2}\, dx \;+\; \beta \int_{\Omega} \big|\nabla T(x)\big|^{p}\, dx, \qquad p \in \{1, 2\}, \tag{6} \]

where $I$ denotes the gray-level input image while $O$ is the enhanced output. $h_I$ and $h_O$ are the histograms of the input and output, respectively. $T$ represents the local transformation function and $O(x) = T(x, I(x))$, where $x$ is a pixel location and $\Omega$ denotes the image domain. $T_s$ denotes the first derivative of $T$ with respect to the gray level $s$. $\nabla T$ is the gradient of $T$ with respect to the horizontal and vertical directions. $\mu$ is the mean brightness of the input. $\alpha$ and $\beta$ are positive constant parameters. The functional $E$ consists of three positive parts. The first part makes $h_O$ distribute uniformly through the local transformation function, thereby enhancing local details of image scenes. The second part is the same as in Jafar's method and aims at preserving $\mu$. For traditional image enhancement tasks, this part is necessary and helpful, but it may be incorrect or even harmful for foggy image recovery: the mean brightness of a foggy image with whitened color is generally higher than that of the fog-free image. We discuss and modify this term in Section 3.1. The last part of $E$ keeps structures consistent by narrowing the differences among pairs of $T$ in local regions. However, the selection between the two norms in formula (6) ($p = 1$ for TV, $p = 2$ for $H^1$) still needs manual intervention. Thus, a mixed norm is designed for an automatic process in Section 3.2. With those two improvements, a modified variational framework is built in Section 3.3. Moreover, the airlight is estimated by a color attenuation prior and the pixel-based dark and bright channels in Section 3.4. Notice that a general variation filter is designed to solve the proposed framework efficiently, finally recovering the intensity of the fog-removal image. The flowchart of our proposed method is depicted in Figure 1.

3.1. Improvement for a Mean Brightness Constraint

As is well known, a foggy image possesses a high mean brightness, so the mean brightness constraint in the framework should be revised through the physical degradation model. Since $t(x) > 0$, formula (4) can be rearranged as

\[ I_J(x) = \frac{I_I(x) - A\,\big(1 - t(x)\big)}{t(x)}, \tag{7} \]

where $A$ is a local constant that will be estimated in Section 3.4. Moreover, $t$ is assumed to be piecewise smooth [17, 18], which means that it can be treated as a constant in local regions. Therefore, after averaging each term of formula (7) over a local region $\omega$, we get

\[ \bar{I}_J = \frac{\bar{I}_I - \bar{A}\,\big(1 - \bar{t}\,\big)}{\bar{t}}, \tag{8} \]

where $\bar{I}_I$ and $\bar{I}_J$ represent the mean intensity values of $I_I$ and $I_J$ in $\omega$, respectively, and $\bar{A}$ and $\bar{t}$ denote the mean values of the airlight and the scene transmission in $\omega$. Apparently, the remaining problem is how to fetch $\bar{t}$ from a foggy image. Fortunately, the dark channel prior makes it possible to get a rough $t$, which can then be refined by the soft-matting algorithm [9], as shown in Figure 2. From the figure, the mean value of the rough $t$ in the red or blue boxes is close to that of the refined $t$. Accordingly, the mean of the rough $t$ is adequate to stand in for $\bar{t}$. Since there is then no need to calculate the refined $t$, which is the main cause of the large time consumption in [9], $\bar{t}$ can be obtained patchwise and promptly. Thus, when $\bar{I}_J$ is substituted for $\mu$ in formula (6), a proper mean brightness constraint can be described as

\[ \alpha \int_{\Omega} \big( O(x) - \bar{I}_J(x) \big)^{2}\, dx, \qquad \bar{I}_J(x) = \frac{\bar{I}_I(x) - \bar{A}\,\big(1 - \bar{t}(x)\big)}{\bar{t}(x)}. \tag{9} \]

When $\bar{t} = 1$, formula (9) reduces to the original constraint, which is only appropriate for regular image enhancement such as Jafar's or Wang's works. Since $\bar{t} < 1$ in foggy images, the modified constraint in formula (9) is clearly more beneficial for foggy image restoration.
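The patchwise computation of the fog-free target brightness in formulas (8) and (9) can be sketched as follows, assuming He et al.'s usual dark-channel settings (patch size 15, $\omega = 0.95$) and a scalar airlight for simplicity; the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def rough_transmission(img, airlight, patch=15, omega=0.95):
    """Rough scene transmission from the dark channel prior [9], no refinement."""
    dark = minimum_filter(np.min(img, axis=2), size=patch)
    return 1.0 - omega * dark / airlight

def target_mean_brightness(i_foggy, airlight, t_rough, patch=15):
    """Local fog-free mean brightness, formula (8): the rearranged model averaged
    over a patch, with the rough mean transmission standing in for the refined one."""
    mu_i = uniform_filter(i_foggy, size=patch)            # mean foggy intensity in the patch
    mu_t = np.clip(uniform_filter(t_rough, size=patch), 0.1, 1.0)
    return (mu_i - airlight * (1.0 - mu_t)) / mu_t        # formula (8)
```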

3.2. Design for a Spatial Regularization Term

First emerging in the image restoration field, the TV and $H^1$ norms perform well and have their own merits. On the one hand, the TV norm allows discontinuities in images and preserves more edges in textural regions, as proven in [19]. On the other hand, [20] validates that the $H^1$ norm keeps structural consistency in flat regions and costs fewer computing resources when its regularization is minimized. We therefore seek a spatial regularization term that imitates both norms; specifically, the expected regularization term should approach the TV norm in textural regions and behave like the $H^1$ norm in flat areas.

Here, we take $\varphi$ to be a function of $|\nabla T|$, written $\varphi(|\nabla T|)$, for clarity. If $\varphi(s) = s$, the TV norm is formed. Similarly, the $H^1$ norm is established when $\varphi(s) = s^{2}$. In order to imitate the TV and $H^1$ norms, it is reasonable to analyze the diffusion behavior of $\varphi$ through its corresponding Euler–Lagrange equation:

\[ \operatorname{div}\!\left( \frac{\varphi'(|\nabla T|)}{|\nabla T|}\, \nabla T \right) = 0. \tag{10} \]

First of all, we decompose the divergence term into two orthogonal components along the level set curve:

\[ \operatorname{div}\!\left( \frac{\varphi'(|\nabla T|)}{|\nabla T|}\, \nabla T \right) = \frac{\varphi'(|\nabla T|)}{|\nabla T|}\, T_{\xi\xi} + \varphi''(|\nabla T|)\, T_{\eta\eta}, \tag{11} \]

where $\xi$ and $\eta$ represent the tangential and normal directions, respectively. Notice that the diffusion speeds in the two directions, $\varphi'(s)/s$ along $\xi$ and $\varphi''(s)$ along $\eta$ with $s = |\nabla T|$, can thus be controlled. For one thing, if both speeds gradually go to zero as $s$ grows, with the speed in the $\eta$ direction decaying faster than that in the $\xi$ direction, then $\varphi$ stays close to the TV norm in textural areas. Hence, the first rule is

\[ \lim_{s \to \infty} \frac{\varphi'(s)}{s} = \lim_{s \to \infty} \varphi''(s) = 0, \qquad \lim_{s \to \infty} \frac{\varphi''(s)}{\varphi'(s)/s} = 0. \tag{12} \]

For another, if the speed in the $\eta$ direction keeps pace with that in the $\xi$ direction in flat regions, $\varphi$ can be treated approximately as the $H^1$ norm. Therefore, the second rule is

\[ \lim_{s \to 0^{+}} \frac{\varphi'(s)}{s} = \lim_{s \to 0^{+}} \varphi''(s) = c > 0. \tag{13} \]

Based on those two rules of diffusion behavior mentioned above, a satisfactory function is designed; the hypersurface minimal function meets both rules:

\[ \varphi(s) = \sqrt{1 + s^{2}} - 1. \tag{14} \]

It is easy to examine whether $\varphi$ obeys those two rules. Plugging $\varphi$ into formulas (12) and (13), we get

\[ \frac{\varphi'(s)}{s} = \frac{1}{\sqrt{1+s^{2}}}, \qquad \varphi''(s) = \frac{1}{(1+s^{2})^{3/2}}, \qquad \frac{\varphi''(s)}{\varphi'(s)/s} = \frac{1}{1+s^{2}}, \tag{15} \]

so both speeds vanish as $s \to \infty$ with the normal speed decaying faster, and both tend to $1$ as $s \to 0^{+}$.

Apparently, the function is suitable. Thus, the spatial regularization term in formula (6) is changed into the new version

\[ \beta \int_{\Omega} \left( \sqrt{1 + |\nabla T(x)|^{2}} - 1 \right) dx. \tag{16} \]
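Since formula (14) is our reconstruction of the designed function, the two rules can also be checked numerically. The short snippet below evaluates the tangential speed $\varphi'(s)/s$, the normal speed $\varphi''(s)$, and their ratio for small and large $s$.

```python
import numpy as np

tangential = lambda s: 1.0 / np.sqrt(1.0 + s ** 2)   # phi'(s)/s
normal = lambda s: (1.0 + s ** 2) ** -1.5            # phi''(s)

for s in (1e-3, 1.0, 1e3):
    # Near s = 0 both speeds approach 1 (H^1-like isotropic smoothing);
    # for large s both vanish and the normal speed dies faster:
    # phi''/(phi'/s) = 1/(1+s^2) -> 0, i.e. TV-like behavior at edges.
    print(f"s={s:g}: phi'/s={tangential(s):.6f}, phi''={normal(s):.6f}, "
          f"ratio={normal(s)/tangential(s):.6f}")
```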

3.3. Construction and Calculation of the Proposed Framework

Combining the mean brightness constraint in formula (9) with the spatial regularization term in formula (16), our variational framework of histogram equalization for image defogging is finally built up:

\[ \min_{T}\; E(T) \;=\; E_{\mathrm{hist}}(T) \;+\; \alpha \int_{\Omega} \big( O(x) - \bar{I}_J(x) \big)^{2}\, dx \;+\; \beta \int_{\Omega} \left( \sqrt{1 + |\nabla T(x)|^{2}} - 1 \right) dx. \tag{17} \]

From formula (17), our model is more concise than Wang's framework. The first term enhances contrast through local histogram equalization, the second recovers the true brightness by forcing the output brightness close to $\bar{I}_J$, and the last preserves structural consistency by minimizing the differences among local transformation functions.

As for solving the proposed framework, we can learn from Wang's algorithm. Following the alternating direction method of multipliers (ADMM) [21], formula (17) is converted into an unconstrained minimization problem through a pair of quadratic penalty functions. The whole minimization is then a loop iteration involving two corresponding Euler–Lagrange equations; background on solving Euler–Lagrange equations can be found in [22, 23]. However, the resulting time consumption is too expensive to accept. A possible way to accelerate the process is to handle the Euler–Lagrange equation with a TV filter [24]. Nevertheless, the regularization term in our framework is not exactly the TV norm. If the fitted TV energy in the filter is replaced by a new energy, the filter coefficients, especially the weights $w_{\alpha\beta}$, have to be adjusted. First of all, we define the general form of a regularization term:

\[ R(u) = \sum_{\alpha \in \Omega} \varphi\big(|\nabla_{\alpha} u|\big), \tag{18} \]

where $\varphi$ is a monotone function. Then, for the energy $R(u) + (\lambda/2) \sum_{\alpha} (u_{\alpha} - f_{\alpha})^{2}$ with input image $f$, it is easy to obtain the Euler–Lagrange equation in the discrete case [25]:

\[ \sum_{\beta \sim \alpha} \frac{\partial \varphi\big(|\nabla_{\beta} u|\big)}{\partial u_{\alpha}} + \lambda\,\big(u_{\alpha} - f_{\alpha}\big) = 0, \tag{19} \]

where $\alpha$ is one node of the grid and $\beta \sim \alpha$ runs over $\alpha$ and the nodes sharing an edge with it, the edge derivative being taken along each such edge. Focusing on the first term of formula (19), we proceed to define the discrete versions of $\nabla_{\alpha} u$ and $|\nabla_{\alpha} u|$ as

\[ \nabla_{\alpha} u = \big( u_{\beta} - u_{\alpha} \big)_{\beta \sim \alpha}, \qquad |\nabla_{\alpha} u| = \Big( \sum_{\beta \sim \alpha} \big(u_{\beta} - u_{\alpha}\big)^{2} \Big)^{1/2}. \tag{20} \]

With formula (20), we can get

\[ \frac{\partial \varphi\big(|\nabla_{\beta} u|\big)}{\partial u_{\alpha}} = \frac{\varphi'\big(|\nabla_{\beta} u|\big)}{|\nabla_{\beta} u|}\,\big(u_{\alpha} - u_{\beta}\big) \;\; (\beta \neq \alpha), \qquad \frac{\partial \varphi\big(|\nabla_{\alpha} u|\big)}{\partial u_{\alpha}} = \frac{\varphi'\big(|\nabla_{\alpha} u|\big)}{|\nabla_{\alpha} u|} \sum_{\beta \sim \alpha} \big(u_{\alpha} - u_{\beta}\big). \tag{21} \]

If formula (21) is plugged into formula (19), the discrete equation becomes

\[ \sum_{\beta \sim \alpha} w_{\alpha\beta}\,\big(u_{\alpha} - u_{\beta}\big) + \lambda\,\big(u_{\alpha} - f_{\alpha}\big) = 0, \qquad w_{\alpha\beta} = \frac{\varphi'\big(|\nabla_{\alpha} u|\big)}{|\nabla_{\alpha} u|} + \frac{\varphi'\big(|\nabla_{\beta} u|\big)}{|\nabla_{\beta} u|}. \tag{22} \]

Now the expression for $u_{\alpha}$ follows from formula (22):

\[ u_{\alpha} = \sum_{\beta \sim \alpha} \frac{w_{\alpha\beta}}{\lambda + \sum_{\gamma \sim \alpha} w_{\alpha\gamma}}\; u_{\beta} \;+\; \frac{\lambda}{\lambda + \sum_{\gamma \sim \alpha} w_{\alpha\gamma}}\; f_{\alpha}. \tag{23} \]

Therefore, a general variation filter is formed that computes a numerical solution of the energy functional precisely and promptly. In particular, $w_{\alpha\beta}$ reduces to $1/|\nabla_{\alpha} u| + 1/|\nabla_{\beta} u|$ when $\varphi(s) = s$, which coincides with the weights of the TV filter. Since $\varphi(s) = \sqrt{1+s^{2}} - 1$ in our regularization term, the newly configured weights should be

\[ w_{\alpha\beta} = \frac{1}{\sqrt{1 + |\nabla_{\alpha} u|^{2}}} + \frac{1}{\sqrt{1 + |\nabla_{\beta} u|^{2}}}. \tag{24} \]
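The filter can be sketched as below, following the structure of Chan and Osher's digital TV filter [24] with the generalized weights of formula (24). A 4-neighbor stencil and periodic boundaries are assumed for brevity, and the function name is ours; this is an illustrative reading of formulas (19)–(24), not the authors' exact implementation.

```python
import numpy as np

def general_variation_filter(f, lam=1.0, n_iter=50, eps=1e-6):
    """Fixed-point iteration of formula (23):
    u_a = (sum_b w_ab * u_b + lam * f_a) / (sum_b w_ab + lam),
    with w_ab = g(|grad u|_a) + g(|grad u|_b) and g(s) = phi'(s)/s.

    For phi(s) = sqrt(1+s^2) - 1, g(s) = 1/sqrt(1+s^2); substituting
    g(s) = 1/s recovers the digital TV filter."""
    u = f.copy()
    for _ in range(n_iter):
        # Edge differences to the four neighbors (periodic wraparound).
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Local gradient magnitude at each node, formula (20).
        grad = np.sqrt(dn**2 + ds**2 + de**2 + dw**2) + eps
        g = 1.0 / np.sqrt(1.0 + grad**2)             # phi'(s)/s for our phi
        num = lam * f
        den = np.full_like(f, lam)
        for d, axis, shift in ((dn, 0, -1), (ds, 0, 1), (de, 1, -1), (dw, 1, 1)):
            w = g + np.roll(g, shift, axis)          # w_ab, symmetric in a and b
            num += w * (u + d)                       # u + d is the neighbor value u_b
            den += w
        u = num / den                                # formula (23) update
    return u
```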

3.4. The Estimation of Airlight

To recover the fog-free scene without yielding color shift, the airlight is another important factor, one that is often neglected. It is simply inferred by selecting the brightest pixel of the entire image in [10]. He et al. instead pick the pixel that corresponds to the brightest one in the dark channel as the estimated airlight [9]. Kim et al. merge quad-tree subdivision into a hierarchical searching strategy to obtain a roughly inferred airlight [26]. More recently, effort has gone into accurate estimation; for example, Sulami et al. take a two-step approach that recovers the airlight through a geometric constraint and a global image prior [27]. Although these methods are remarkable in some situations, it is worth noting that they all provide a global constant airlight. Unfortunately, this is contrary to the fact that the airlight ought to vary with the fog density. Thus, we need to estimate a local airlight associated with the fog density.

To recover the local airlight, the first step is to measure the fog density. We introduce the color attenuation prior [28] to measure the density at each pixel. The prior finds that the difference between the brightness and the saturation is directly proportional to the depth. Moreover, it is well known that as the depth increases gradually, the fog density grows higher and higher. Based on these two observations, we can relate the fog density $c(x)$, the depth $d(x)$, and the difference between the brightness $v(x)$ and the saturation $s(x)$:

\[ c(x) \propto d(x) \propto v(x) - s(x). \tag{25} \]

Because $A$ varies along with the changes of $c(x)$, it is reasonable to assume that $A$ is positively and linearly proportional to $c$, giving

\[ A(x) = k\,c(x) + b. \tag{26} \]

Since there is a one-to-one correspondence between $A$ and $c$, the maximum and minimum of $A$, denoted $A_{\max}$ and $A_{\min}$, correspond to the highest and lowest fog density, respectively. Based on the pixel-based dark channel and the pixel-based bright channel [29], $A_{\max}$ and $A_{\min}$ are simply defined as the pixels with the highest and lowest values in the bright and dark channels, respectively:

\[ A_{\max} = \max_{x \in \Omega}\; \max_{ch \in \{r,g,b\}} I^{ch}(x), \qquad A_{\min} = \min_{x \in \Omega}\; \min_{ch \in \{r,g,b\}} I^{ch}(x). \]

According to formula (26) with the two known points $(c_{\max}, A_{\max})$ and $(c_{\min}, A_{\min})$, a local $A$ can be estimated by

\[ A(x) = A_{\min} + \frac{A_{\max} - A_{\min}}{c_{\max} - c_{\min}}\,\big(c(x) - c_{\min}\big), \tag{27} \]

where the slope and intercept of formula (26), $k = (A_{\max} - A_{\min})/(c_{\max} - c_{\min})$ and $b = A_{\min} - k\,c_{\min}$, are constants. With the estimated $A$, we can infer $I_J$ from the variational framework, and $S_J$ is then obtained by formula (5). At last, $J$ can be easily recovered with $H_J$, $S_J$, and $I_J$.
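One possible reading of formulas (25)–(27) in code follows, assuming the pixel-based bright channel serves as the brightness $v(x)$ and the usual $(\max - \min)/\max$ definition of saturation in the color attenuation prior; these conventions and the names are our assumptions.

```python
import numpy as np

def local_airlight(img):
    """Local airlight A(x) linear in the fog-density cue c(x) = v(x) - s(x).

    A_max and A_min come from the pixel-based bright and dark channels,
    and the two points (c_min, A_min), (c_max, A_max) fix the line of
    formula (27)."""
    bright = np.max(img, axis=2)                 # pixel-based bright channel
    dark = np.min(img, axis=2)                   # pixel-based dark channel
    v = bright
    s = (bright - dark) / np.maximum(bright, 1e-6)
    c = v - s                                    # fog-density cue, formula (25)
    a_max, a_min = bright.max(), dark.min()      # extremes of the two channels
    c_max, c_min = c.max(), c.min()
    k = (a_max - a_min) / max(c_max - c_min, 1e-6)   # slope of formula (26)
    b = a_min - k * c_min                             # intercept
    return k * c + b                                  # formula (27)
```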

4. Experiments and Analysis

To perform a qualitative and quantitative analysis of the proposed method, we run simulation experiments on color foggy images against three pairs of state-of-the-art defogging approaches. The first pair, Ranota and Kaur's [13] and Wang and Ng's [14], is directly based on the histogram equalization technique, while the second pair, Arigela and Asari's [11] and Liu et al.'s [12], belongs to intensity transformation functions. These are natural comparison groups, since they are closely related to our method. He et al.'s [9] and Nishino et al.'s [10] in the last pair are classical and representative, as is well known in the image defogging field. Thus, we compare our method with all of them, pair by pair, on a foggy image set containing benchmark and realistic images chosen from [9–14].

4.1. Test of Parameters

Before the comparison, we state the experimental conditions and the parameter selection. All the mentioned approaches are implemented in the MATLAB R2014a environment on a 3.5 GHz computer with 4 GB RAM. On this platform, the parameters of our method are fixed; in particular, $\alpha$ and $\beta$ are picked from several candidate pairs $(\alpha, \beta)$ in a parameter-testing experiment. Specifically, two synthetic images are chosen from the Frida database, as shown in Figure 3. Besides, three assessment indexes are introduced to evaluate the effectiveness of our method initialized with different pairs $(\alpha, \beta)$. The first index is the absolute mean brightness error (AMBE), the difference of mean brightness between the output and the ground-truth image; the second is the edge intensity (EI), which quantifies structural information. The last is the mean square error (MSE), which measures the deviation of the output from the ground-truth image. Notice that the AMBE and MSE indexes are lower-is-better measures: their scores range from 0 to 1, and the lower they are, the better the image quality. The EI index is the opposite, where higher scores imply better results.
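For reference, the three indexes can be sketched as follows for images normalized to $[0, 1]$; the Sobel-based definition of EI is a common convention and an assumption on our part.

```python
import numpy as np
from scipy.ndimage import sobel

def ambe(out, ref):
    """Absolute mean brightness error: |mean(out) - mean(ref)| (lower is better)."""
    return abs(out.mean() - ref.mean())

def edge_intensity(img):
    """Edge intensity: mean Sobel gradient magnitude (higher is better)."""
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    return np.hypot(gx, gy).mean()

def mse(out, ref):
    """Mean square error against the ground truth (lower is better)."""
    return np.mean((out - ref) ** 2)
```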

Firstly, we fix $\beta$ and adjust $\alpha$ freely. Figure 4 shows a sample of results processed with different $\alpha$, and Figure 5 gives the quantitative evaluation by the AMBE and MSE indexes. From Figure 5(a), the AMBE index decreases as $\alpha$ increases, since $\alpha$ directly weights the second term in formula (17): the higher $\alpha$ is, the better the mean brightness of the results. However, Figure 5(b) shows that the MSE index keeps decreasing only slowly once $\alpha$ passes a certain point, which implies that merely increasing $\alpha$ is not enough to remove the fog degradation. Worse still, a higher $\alpha$ induces more iterations in the solution step. Therefore, it is logical to set $\alpha$ to a moderate value in the proposed framework.

Secondly, $\alpha$ is fixed and $\beta$ is adjusted over a range. Results of our method initialized with different $\beta$ are partially presented and evaluated in Figures 6 and 7, respectively. At first sight of Figure 6, the results suffer a great loss of contrast as $\beta$ increases gradually. This observation is strengthened by Figure 7(a), where the EI index, as a measurement of image contrast, becomes smaller and smaller. Nevertheless, this does not mean that a higher $\beta$ always worsens image quality: from Figure 6, the structural consistency improves as $\beta$ increases. This is because $\beta$ affects both image contrast and structural consistency, according to the first and third terms in formula (17). The contradictory relationship between contrast and consistency is validated by Figure 7(b), where the MSE index is U-shaped, high at the two ends and low in the middle; as a compromise, $\beta$ is set to a middle value in the proposed method.

4.2. Qualitative Comparison

4.2.1. Qualitative Comparison with Histogram-Equalization-Based Methods

Since our method is based on a variational histogram equalization framework, it is natural to compare it with Ranota and Kaur's [13] and Wang and Ng's [14] on the house and trees images. From Figures 8 and 9, the results obtained by Ranota's method show artificial color and dark patches. In contrast, the results processed by Wang's method and ours are clearly superior in color fidelity and local structure. The reasons can be summed up in two aspects: for one thing, Wang's method and ours avoid distorted color by treating the $S$ and $I$ components differently, unlike Ranota's, which processes all color components in the same way; for another, the Gaussian filter in Ranota's method tends to smooth the rough scene transmission and the local contrast at the same time, whereas Wang's method and ours, which need not refine the scene transmission, do not produce such structural distortion. However, compared with our results, Wang's appear brighter in global illumination. For example, the grass in the blue box of Figure 8(c) is too bright to exhibit its real color information, as is the road in the blue box of Figure 9(c). This is largely because Wang's method preserves the mean brightness of the input foggy image; thanks to the revised brightness constraint in our framework, the color is recovered properly. Moreover, in the results of Wang's method based on the TV norm for the house image and the $H^1$ norm for the trees image, the structural consistency between the wall and the branch is violated in the middle red box of Figure 8(c), and many details of the branches are lost in the middle red box of Figure 9(c). The main reason is that the TV and $H^1$ norms cannot simultaneously preserve consistency and fine details at the discontinuities of textural regions. With the help of the designed regularization term in our method, there is no need to sacrifice consistency for fine details or vice versa.

4.2.2. Qualitative Comparison with Intensity-Transformation-Based Methods

Histogram equalization and intensity-transformation-based methods are similar enhancement algorithms, so we compare Arigela's method [11], Liu et al.'s method [12], and ours on the street and train images. As exhibited in Figure 10 (top) and Figure 11 (top), Arigela's results are still in dim color, with the sky and white objects accompanied by halo effects. This is because Arigela's method substitutes an intensity transformation function for the soft-matting algorithm used in He's method [9] to refine a rough scene transmission. The method is thus closely related to He's, and its results inevitably suffer exactly the same unwanted distortion as He's; a further explanation is given in Section 4.2.3. Liu's method does not seek an accurate scene transmission, and its results are displayed in Figure 10 (middle) and Figure 11 (middle). They show fine local details and abundant color information, because the method removes the fog degradation with an intensity transformation function guided by scene depth layers. Nevertheless, Liu's results present an oversaturated appearance, due to the lack of a color fidelity constraint. Compared with these two defogging methods, ours produces pleasing results with vivid color and great contrast. The success is owed to the constrained variational framework with its color fidelity term.

4.2.3. Qualitative Comparison with Classical and Representative Methods

To make the performance of the proposed method more persuasive, we must compare it with several classical and representative methods such as He et al.'s [9] and Nishino et al.'s [10]. From Figures 12 and 13, the results delivered by He's and Nishino's methods are recovered to a reasonable level, as are those of the proposed method. However, the scene color of He's and Nishino's results, which should be bright, stays quite dim instead. For instance, the color of the grass on the rocks is nearly concealed, so that we cannot tell the grass from the rocks in the blue boxes of Figure 12 (top) and Figure 12 (middle); the same holds for the buildings in the blue boxes of Figure 13 (top) and Figure 13 (middle). The over-defogged results of He's method are mainly due to the dark channel prior, which overestimates the thickness of the fog: the predicted transmission value is lower, sometimes much lower, than the true one when it is filtered by "min" filters, as pointed out in [30]. The same holds for Nishino's method, since statistical distributions serve as the depth prior absorbed into its probabilistic minimization. Worse still, false color and blocking artifacts occur in regions of He's and Nishino's results, as shown in the red boxes of Figures 12 and 13. This stems from the way they estimate the airlight: both consider the roughly estimated airlight to be a global constant, which goes against the basic fact that the airlight varies with the fog density. Different from their methods, the proposed one estimates a local airlight associated with the fog density under a color attenuation prior and pixel-based bright and dark channels. Apparently, our method is capable of removing fog effects without yielding unappealing distortion or information loss.

4.3. Quantitative Comparison

To strengthen the qualitative analyses above, two no-reference assessment indexes are introduced: the EI index and the no-reference image quality evaluator (NIQE) index [31]. It is worth noting that, among the three indexes in Section 4.1, the AMBE and MSE indexes are full-reference evaluators; they are inappropriate for assessing image quality in the absence of ground-truth images. Accordingly, only the EI index is adopted again to evaluate structural contrast in this section. In addition, the NIQE index measures the distortion degree of the processed results through the scene statistics of locally normalized luminance coefficients. The distribution of those coefficients in a foggy image is thinner than in a defogged image, which implies that the NIQE index can quantify the loss of naturalness in a distorted image. Scores of the index range from 0 to 100, and a zero score represents the best result. Figures 14–16 give a series of quantitative assessments for the images in Figures 8–13. As displayed in the plots of Figures 14–16, the results processed by our method gain the best EI and NIQE scores. This fully shows that the proposed method produces more plausible results than the other algorithms.

The time consumption must also be taken into consideration if the method is to be put into practice. Figure 17 exhibits the running time of the defogging methods mentioned above. From the figure, He's and Nishino's methods cost the most time of all, because they depend on an accurate scene transmission refined by the soft-matting algorithm or jointly estimated within a Bayesian framework, both of which take plenty of time. Apart from He's and Nishino's methods, Wang's method also consumes much time, up to 15 seconds, three times the cost of the remaining four methods, because two Euler–Lagrange equations must be tackled in every loop iteration. Compared with Wang's work, our method solves the Euler–Lagrange equation with the proposed general variation filter, saving considerable time. Moreover, the time costs of Ranota's, Arigela's, and Liu's methods also stay at a comfortably low level, since they are based on image enhancement algorithms.

5. Conclusion

In this paper, we propose an image defogging method using a variational histogram equalization framework. A previous variational framework for image enhancement inspires us to establish a constrained energy functional that combines histogram equalization and the physical degradation model. The mean brightness constraint in the framework is revised to preserve the brightness of the fog-free image, while the regularization term is redesigned to avoid manual intervention. To pursue processing efficiency, a general variation filter is proposed to solve the constrained framework promptly. As for the other important unknown quantity, the airlight $A$, a color attenuation prior and pixel-based dark and bright channels are introduced to infer a local constant reasonably. Finally, the proposed method is tested on several benchmark and realistic images in comparison with three groups of representative defogging methods. From the qualitative and quantitative comparisons, it is safe to conclude that our method performs much better in terms of color adjustment and contrast enhancement. In the future, more attention will be put on accelerating the processing speed to a real-time level for computer vision applications.

Competing Interests

The authors declare that they have no competing interests.