Abstract

Images captured in foggy conditions often suffer from poor visibility: the weather fades the colors and reduces the contrast of the scene. This paper proposes a novel regularization method that uses a space transformation to restore the hidden scene with high dynamic range and enhanced edge information. To improve visualization efficiently, the proposed method is built upon contrast stretching, which yields a better estimation map and makes the problem tractable. Using minimum-energy constraints, the algorithm recovers scene albedo on a number of hazy images. Experimental results show that the method achieves an accurate and faithful representation.

1. Introduction

Bad weather is a significant factor in the degradation of optical image quality. It has been an unavoidable issue for decades and obscures scene structure in many applications involving scene understanding [1, 2]. For instance, it limits visibility when a robot or vehicle navigates autonomously [3], it decreases object recognition and detection accuracy, and the scene irradiance no longer provides valid information. The indisputable fact is that a large number of uncertain elements, such as fog and haze, attenuate and constrain scene appearance and severely alter the scene radiance. In other words, the clear observation of objects in the scene is hidden by aerosol particles. Furthermore, atmospheric absorption and scattering cause contrast loss and color deviation in images [4]. Restoring the hidden scene appearance through analysis of the atmospheric propagation model is therefore key to solving the problem.

Scattering is a process by which a particle redistributes a fraction of the incident energy over a total solid angle [5, 6]. The scattering properties depend on the refractive index and size of the particles. According to the haze particle size and the type of camera, the monochrome atmospheric scattering model that follows Koschmieder's law is used to describe the colors and contrast of scene points in bad weather [7, 8]. Nevertheless, this equation is ill-posed, especially for single-image dehazing. Earlier techniques targeting the removal of light-scattering distortion in the optical model exploit polarization effects to compensate for visibility degradation [9, 10]; however, they need multiple photographs to provide several polarized haze priors. Other treatments changed enormously once a rigorous physical model describing atmospheric absorption and scattering was established [8]. These works continued McCartney [7], drawing on what is already known about atmospheric optics to develop models and methods for recovering pertinent scene properties. Many successful approaches rely on this model together with stronger priors. Tan [11] assumes that a clear scene has higher contrast than the hazy image, meaning that maximizing the contrast can restore image quality; nevertheless, the results inevitably suffer from halo artifacts if overcompensated. Fattal [12] decomposed the surface albedo vector into a sum of two components, one parallel to the air-light and one perpendicular to it, and assumed local statistical independence of the shading and scene transmission to solve the problem. He et al. [13] propose a dark channel prior (DCP) to estimate the transmission function with soft matting and improve the speed with the guided filter [14]. Similarly, the dark channel prior was also employed in [15-17] for single-image dehazing. Nishino et al. [18] and Caraffa and Tarel [19] assume the image obeys an MRF (FMRF) distribution and use the energy of hidden fields to iterate toward the optimal observation fields. In order to infer the atmospheric veil, Liu et al. [20] and Li et al. [21] use regularization to refine the minimal component image while preserving the edges and gradients of the image.

According to the analysis of the recent methods mentioned above, most existing work is devoted to finding priors and constraints for restoring the image from statistics. Indeed, prior-based filters have also been demonstrated to be successful for single images, such as the median filter [22], guided filter [14], bilateral filter [15], and TV filter [20, 21]. However, most methods only exploit similarities and differences between pixels [20] or patches [18] as priors and do not consider the many striking features (edges) covered or degraded by the scattering effect. Hence only a small number of edges remain available as constraints to correct the restoration. As a result, these methods may cause halo artifacts, blurred visibility, and lower contrast.

Our method therefore adds extra restrictions and defines compact sets in order to recover fine structure. Specifically, we follow regularization theory and obtain the final solution from the observed data in a way that is both physically meaningful and mathematically tractable [23, 24]. In this paper we present a Hardy space method which allows a more general differential operator (a pseudodifferential operator) to generate a wealth of edge and texture information as the constraint prior. Compared with previous dehazing algorithms, the main contributions of this paper are the following: (I) to compensate for the lack of edge information in a foggy image, the proposed algorithm provides plenty of edges and textures by applying an anisotropic pseudodifferential kernel on Hardy space, yielding abundant prior constraints; (II) by exploiting the similarity between the Hardy space H1 and the Lebesgue space L1, we associate H1 with the Hardy-Littlewood maximal operator to characterize the infimum over H1 and thus avoid the rough estimation of the cost function caused by previous algorithms. Compared with recent implementations that use complex constraints such as MRF [11, 20], FMRF [19], and graph theory [12], our technique restores hazy images with more visually plausible results. Experimental results demonstrate that our compound regularization method not only restrains noise and blurring but also reveals important image features such as edges and textures.

The outline of the paper is as follows. In the next section we present our new defogging algorithm based on a variational framework and discuss the details of the optimization technique. In Section 3 we report and discuss the results, and in Section 4 the method is summarized.

2. Single Image Dehazing

The task of image restoration is to estimate the latent high-quality image given a low-quality observation. For simplicity and clarity, in the following review of previous work we take the minimum-energy restriction based on the Hardy space H1 as a canonical restoration problem. In this setting, we wish to obtain a number of edges and textures as priors of the diffusion model in order to restore the degraded image. We therefore focus on the analysis of the haze optical model and point out the disadvantage of the typical regularization model. Afterwards, we argue that the classical H1 space can completely replace the L1 space and is better suited to characterizing the geometric features of foggy images. More importantly, through rigorous analysis we find a convenient way to seek the infimum over this space and thereby solve the minimization problem.

2.1. Haze Optical Model

In general, objects appear hazier because of scattering and absorption along the viewing ray as light travels from the source to the viewer. Scattering and absorption of light by the propagation medium are the main reasons for image degradation in foggy scenes, so it is necessary to analyze the mechanisms of scattering in the atmosphere. Although the exact way in which molecules and particles cause light to lose intensity and undergo a spectral shift is highly complex, in effect the energy is spread out from its original direction.

Following this assumption [7] and on the basis of Bouguer's exponential law of attenuation, Narasimhan and Nayar [8] proposed that the degraded image at any pixel recorded by a monochrome camera decomposes into "Attenuation" and "Air-light." The former is the direct attenuation of light traveling in a straight line from the scene to the observer; the latter is ambient light scattered by the atmosphere that reaches the observer in addition to the radiance propagated from the scene. As a result, the total irradiance is usually described by the sum of the direct attenuated irradiance and the air-light irradiance, as depicted in Figure 1. The atmospheric scattering model widely used for hazy images is defined as

$$I(x) = J(x)\,e^{-\beta d(x)} + A\bigl(1 - e^{-\beta d(x)}\bigr), \tag{1}$$

where $I(x)$ is the observed intensity at pixel $x$, $J(x)$ is the scene radiance or haze-free image, $A$ is the sky brightness for the whole image, $\beta$ is the scattering coefficient of the atmosphere, and $d(x)$ is the depth of the scene point. Note that the monochrome model assumes that the scattering coefficient is the same for all color channels.
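For illustration, the following minimal sketch (in Python with NumPy; the function and variable names are ours, not the paper's) synthesizes a hazy observation from a clear image and a depth map according to the monochrome model (1), assuming a single scattering coefficient shared by all color channels.

```python
import numpy as np

def synthesize_haze(J, depth, A=0.95, beta=1.0):
    """Apply the monochrome atmospheric scattering model of Eq. (1).

    J     : clear scene radiance, float array in [0, 1], shape (H, W, 3)
    depth : scene depth d(x) in arbitrary units, shape (H, W)
    A     : sky brightness (air-light), assumed identical for all channels
    beta  : atmospheric scattering coefficient
    """
    t = np.exp(-beta * depth)              # transmission t(x) = exp(-beta d(x))
    t = t[..., None]                       # broadcast over color channels
    I = J * t + A * (1.0 - t)              # direct attenuation + air-light
    return np.clip(I, 0.0, 1.0), t.squeeze()

# Example: a synthetic scene whose depth increases from left to right.
if __name__ == "__main__":
    H, W = 64, 96
    J = np.random.rand(H, W, 3) * 0.6 + 0.2     # stand-in for a clear image
    depth = np.tile(np.linspace(0.1, 3.0, W), (H, 1))
    I, t = synthesize_haze(J, depth)
    print("transmission range:", t.min(), t.max())
```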

Calculating $J(x)$, the true intensity on a clear day, is impossible without any knowledge of the three parameters $A$, $\beta$, and $d(x)$. According to previous research [21], (1) can be rewritten as

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr). \tag{2}$$

For convenience, we write $t(x) = e^{-\beta d(x)}$ for the transmission. As suggested by [13], the air-light $A$ can be estimated automatically, so only two unknowns, $J(x)$ and $t(x)$, remain. If we take the attenuation term as the starting point, its physical meaning implies $0 < t(x) \le 1$. Accordingly, we use a minimization of (2) to characterize the maximum of the attenuation-related term (3); deriving from (3) gives its admissible range (4). As described in [21], maximizing the contrast of the resulting image is equivalent to this maximization under a smoothness assumption (5).
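The air-light estimate borrowed from [13] is commonly implemented as a dark-channel selection of the haziest pixels; the sketch below is one such hedged approximation (our own simplification, not necessarily the exact procedure used in this paper).

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_airlight(I, patch=15, top_fraction=0.001):
    """Dark-channel-style air-light estimate in the spirit of He et al. [13].

    I : hazy image, float array in [0, 1], shape (H, W, 3)
    """
    dark = minimum_filter(I.min(axis=2), size=patch)      # dark channel
    n = max(1, int(top_fraction * dark.size))
    idx = np.argpartition(dark.ravel(), -n)[-n:]          # haziest pixels
    candidates = I.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]    # brightest of those
```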

However, the problem in [21] is a nonconvex optimization problem that is very hard to solve. More importantly, the available gradient information decreases as the fog concentration grows, so it cannot provide sufficient information about the degradation, as shown in Figure 2. Here we adopt a modified minimum-energy restriction to pursue the maximum of this term while allowing discontinuities along edges.

2.2. Edge Diffusion Model via Anisotropic Pseudodifferential Operator on Hardy Space

Total variation (TV) is a well-known model proposed by Rudin et al. [25]; it is a very successful and efficient method for image restoration because it preserves edges through the piecewise-constant regularization property of the TV norm [26]. For any degradation, we consider the following total-variation-based image restoration problem:

$$\min_{J}\ \int_{\Omega} |\nabla J|\,dx \;+\; \frac{\lambda}{2}\int_{\Omega} \bigl(J - I\bigr)^{2}\,dx, \tag{6}$$

where $\nabla$ is the first-order derivative operator and $\lambda$ is a parameter that balances the data-fidelity term against the regularization term. The regularity term is understood not in the conventional Sobolev sense but as a TV Radon measure. The main characteristic of the TV image model is that it preserves singularities and edges, which are an important visual cue in computer vision. However, if the estimated gradient does not satisfy the requirement, the restored image tends to be over-smoothed and to lose texture [27]. Evidently, the heavier the mist concentration, the fewer gradient priors are available, pushing the quality of the restored image into decline.
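As a point of reference for the discussion above, the following toy sketch minimizes a discretized, smoothed version of the TV objective (6) by plain gradient descent; it is only illustrative and is not the solver used later in the paper (the smoothing parameter eps and step size tau are our choices).

```python
import numpy as np

def tv_restore(f, lam=0.1, eps=1.0, tau=0.2, steps=200):
    """Gradient descent on  E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2)||u - f||^2."""
    u = f.astype(float).copy()
    for _ in range(steps):
        ux = np.roll(u, -1, axis=1) - u                # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)          # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * (lam * (u - f) - div)               # descend the energy
    return u
```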

In order to add extra edge restrictions, a pseudodifferential operator is a good way to solve this problem, since it can capture more geometric features of singularities and local variations for restoring the foggy image. However, such an operator is in fact not self-mapping on L1; it is, in contrast, bounded on the Hardy space H1, the analogue of L1. More importantly, as far as the energy functional is concerned, the regularization term corresponds to a local minimum of the energy. Although lower semicontinuity and compactness can be guaranteed in L1, the minimum value still needs an infimum to measure its accuracy in the space. Inspired by the works above, we propose a novel variational model on H1 as a regularizer to obtain more features and a stable infimum for foggy image restoration.

Typically, the Hardy space is associated with the Laplace operator and the Riesz transform, but both depend on the smoothness of a kernel whose integral equals 1, and such spaces cannot describe the singular features of foggy images. Therefore, how to use a pseudodifferential operator to characterize the Hardy space becomes a significant point. We consider a pseudodifferential operator satisfying the following assumptions: (1) its kernel is a nonsmooth function, so that more geometric features of singularities can be represented; (2) the classical Hardy space can be characterized by the area integral, the square function, and the maximal function defined by the associated semigroup; (3) the kernel should be anisotropic, which is robust and overcomes the defect that an isotropic kernel does not have enough capability to extract fine multidirectional intensity variations of an image.

In view of these considerations, we consider the Hardy space characterized by an anisotropic pseudodifferential operator as a general regularization term satisfying these assumptions. In this paper the operator is defined as in [28]: its kernel is given by an inverse Fourier transform of the symbol of a positive pseudodifferential operator, the symbol being a positively homogeneous function of a given degree; depending on the value of this degree, the kernel is defined by one of two inverse-Fourier-transform expressions. In that case, the pointwise estimates for the kernels of the operator family can be stated as follows.

Lemma 1. The kernel is a smooth function away from the origin, and there is a constant such that the following pointwise estimate holds.

Lemma 2. In the complementary range of the homogeneity degree, the kernel is a smooth function and there is a constant such that the corresponding estimate holds. Using these estimates, we can consider the nontangential maximal function defined by the operator family. Recalling that the kernel is defined by (7), we justify the conclusion by establishing the following bound.
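For reference, the standard textbook definitions behind this maximal-function characterization (stated with the usual approximate identity in place of the paper's operator family) are:

```latex
% Nontangential maximal function associated with a family {P_t}:
N f(x) \;=\; \sup_{t>0,\ |x-y|<t} \bigl|(P_t f)(y)\bigr|,
\qquad
% Hardy-Littlewood maximal operator:
M f(x) \;=\; \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\,dy,
% and f belongs to H^1 precisely when its maximal function is integrable:
\|f\|_{H^{1}} \;\simeq\; \|N f\|_{L^{1}}.
```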

Proof. On the one hand, we first note the decomposition below; from the expression in Lemma 1 and with reference to [29], the required bounds follow immediately. On the other hand, we consider a continuous integrable function defined on the half-space satisfying the stated conditions; it is easy to verify that it is a Schwartz function [30], which proves the claim. Moreover, inspired by [31], we believe that anisotropic pseudodifferential kernels (ANPDKs) can produce an edge-map strength (EMS). From the ANPDKs, anisotropic directional derivatives (ANDDs) are derived to capture the local directional intensity variation of an image, and the EMS is then obtained from them. For this purpose, (10) can be specialized with an anisotropic factor, and the ANPDKs are obtained through rotation by a rotation matrix. Furthermore, the ANDDs are obtained as the first-order derivatives of the ANPDKs. This suggests that noise robustness is highly dependent on the scale, while the edge resolution depends on the ratio of the scale to the anisotropic factor. More importantly, convolving the image with the ANPDKs reveals an edge-stretch effect.
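The rotation-based construction of ANPDKs and ANDDs can be illustrated with anisotropic Gaussian kernels in the spirit of [31]; the sketch below is our simplification (a Gaussian stands in for the paper's pseudodifferential kernel, and sigma, rho, and the orientation count are assumed parameters), showing how oriented kernels, their directional derivatives, and an edge-strength map can be built.

```python
import numpy as np
from scipy.signal import fftconvolve

def anisotropic_kernel(sigma=2.0, rho=4.0, theta=0.0, size=21):
    """Anisotropic Gaussian kernel: scale sigma, anisotropy rho, orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    c, s = np.cos(theta), np.sin(theta)
    u = c * x + s * y                       # coordinates rotated by theta
    v = -s * x + c * y
    g = np.exp(-(rho * u**2 + v**2 / rho) / (2.0 * sigma**2))
    return g / g.sum()

def directional_derivative(sigma=2.0, rho=4.0, theta=0.0, size=21):
    """First-order derivative of the oriented kernel along its main direction (ANDD-style)."""
    g = anisotropic_kernel(sigma, rho, theta, size)
    gx = np.gradient(g, axis=1)
    gy = np.gradient(g, axis=0)
    return np.cos(theta) * gx + np.sin(theta) * gy

def edge_strength(img, n_orient=8, sigma=2.0, rho=4.0):
    """Edge-strength map: maximal directional response over a set of orientations."""
    responses = [
        np.abs(fftconvolve(img, directional_derivative(sigma, rho, k * np.pi / n_orient),
                           mode="same"))
        for k in range(n_orient)
    ]
    return np.max(responses, axis=0)
```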

2.3. Infimum for Constrained Energy Minimization

Following the demonstration above, formula (6) can be rewritten with the Hardy-space regularization term in place of the TV term. Generally, a cross-iteration method is adopted to solve this problem, but such iteration is very complex. If we can instead turn the minimization of a diffusion equation into finding an infimum in the H1 space, the problem becomes much easier to solve. This suggests that, in order to establish the existence of an H1 representation of the infimum problem, we can look for the maximal operator in the following way.

Lemma 3. Define a normalized surface measure on the sphere with the given center point and radius, and let the telescoping function be represented as stated; then the conclusion of [32] holds.

Lemma 4. For any fixed parameters as above, the stated estimates obviously hold [33].

By applying Lemmas 3 and 4, we now prove the following alternative theorem: under the stated hypotheses the required bound holds; in particular, it can be expressed as follows.

Proof. It follows directly from the atomic decomposition of the Hardy space, together with the stated inclusions, that the conclusion holds once we prove the required inequality. In this case, we let the support set of the atoms lie in a cube satisfying the stated condition. Combining Lemma 4, we immediately obtain the corresponding bound on the function. By translation invariance, we may take the center of the cube to be the origin, and we therefore use a decomposition strategy, which leads to reliable inequalities specified by the Cauchy-Schwarz inequality. For the remaining term, the stated size conditions give a common representation, and as a result (18) equals the claimed expression.

3. Numerical Calculation Method and Results

In this section, we present the numerical method and the results of seven experiments to validate the theoretical results and to show that the proposed model recovers the original image from its degraded version well, especially for piecewise-constant images.

3.1. Parameters Determination

In order to estimate the attenuation factor from (24) efficiently, we first need to transform the image into Hardy space. Thus (7) can be rewritten as in [34] in terms of a kernel function, where each pixel is described by an amplitude and a phase; in practice, the phase represents the difference of position. Taking the relationship between adjacent pixels into account, the simplest way is to convert the pixels into the spectral domain and use a training block centered on each pixel to determine the kernel parameters. After that, the anisotropic pseudodifferential kernel can be computed following (17). In addition, to guarantee the accuracy of the minimum, we obtain the infimum of the Hardy space with the Hardy-Littlewood maximal operator.
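A discrete stand-in for the Hardy-Littlewood maximal operator used in this step could look as follows; this is our own approximation (a supremum of local means over a few square window sizes), not the exact construction of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hl_maximal(f, radii=(1, 2, 4, 8, 16)):
    """Discrete Hardy-Littlewood maximal function: sup of local means of |f|."""
    absf = np.abs(f.astype(float))
    out = np.zeros_like(absf)
    for r in radii:
        out = np.maximum(out, uniform_filter(absf, size=2 * r + 1))
    return out
```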

As shown in Figure 3, in the numerical scheme we evaluate the transmission map with the split Bregman (SB) method. The SB approach converts the nonlinear minimization problem into a sequence of linear problems, combining an accelerated iterative strategy with fast convergence. To implement the SB method, we construct more accurate forward differences of the objective function to constrain the attenuation factor; from this aspect, the correctness of the above discussion is verified. Next we present the detailed algorithm of the proposed scheme.
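As a hedged illustration of the split Bregman strategy named here, the sketch below applies SB to a generic anisotropic TV denoising objective rather than to the paper's exact Hardy-space functional; the parameters lam, mu, and tau are our own choices.

```python
import numpy as np

def shrink(v, gamma):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def split_bregman_tv(f, lam=0.1, mu=0.05, n_outer=30, n_inner=2, tau=0.2):
    """Anisotropic TV denoising, min_u |grad u|_1 + (lam/2)||u - f||^2, via split Bregman."""
    u = f.astype(float).copy()
    dx = np.zeros_like(u); dy = np.zeros_like(u)
    bx = np.zeros_like(u); by = np.zeros_like(u)
    for _ in range(n_outer):
        for _ in range(n_inner):
            # gradient step on the quadratic subproblem for u
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            divdb = ((dx - bx) - np.roll(dx - bx, 1, axis=1)
                     + (dy - by) - np.roll(dy - by, 1, axis=0))
            u = u + tau * (lam * (f - u) + mu * (lap - divdb))
        # shrinkage step for the auxiliary gradient variables d = (dx, dy)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        dx = shrink(ux + bx, 1.0 / mu)
        dy = shrink(uy + by, 1.0 / mu)
        # Bregman update of the dual variables b = (bx, by)
        bx = bx + ux - dx
        by = by + uy - dy
    return u
```

Each outer iteration thus alternates a linear solve for the image, a shrinkage step for the auxiliary gradients, and a Bregman update, which is the structure that carries over to the objective of this paper.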

(1) Initialization. (a) Estimate the air-light through [13]. (b) Obtain the gradient of the input and transform the image pixels into the Hardy-space form, using the reciprocal basis defined in (10). (c) Find the infimum gradient of the Hardy space. (d) Set the initial estimates of the iteration variables.

(2) Cross-Iteration. (a) Run the outer loop. (b) Update the parameters. (c) Run the inner loop, calculating the auxiliary variables. (d) Estimate the transmission map.

(3) Refine the Supplementary Term. (a) Adjust the exponent in the range 0.8-1.4, which is regarded as a parameter controlling the strength of the visibility restoration. (b) Acquire the real image by inverting (1) with the refined transmission map; a minimal sketch of this recovery step is given below.
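The sketch below covers step (3)(b), assuming the transmission map and air-light have already been estimated; the exponent p plays the role of the visibility-control parameter in (3)(a), and the clamp t_min is our own safeguard against division by very small transmissions.

```python
import numpy as np

def recover_scene(I, t, A, p=1.0, t_min=0.1):
    """Invert Eq. (1): J = (I - A) / max(t, t_min)**p + A, then clip to [0, 1].

    I : hazy image, shape (H, W, 3); t : transmission map, shape (H, W)
    A : air-light (scalar or length-3); p : visibility-control exponent (~0.8-1.4)
    """
    t_adj = np.maximum(t, t_min) ** p
    J = (I - A) / t_adj[..., None] + A
    return np.clip(J, 0.0, 1.0)
```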

An illustration of the adjustment step for obtaining the transmission map and the real scene is shown in Figure 4. The many edges and textures on the hill make the transmission map challenging to estimate. Obviously, after converting from the L1 space to the Hardy space, the proposed method obtains sharp depth edges consistent with the foggy scene. By varying the parameter, our approach isolates fine edges between objects at different depths with much finer detail.

3.2. Experiment on Density of Aerosol Particles

Figure 5 compares the proposed method with the approaches of Nishino and Fattal. Fattal [12] estimates the density map (called the depth map in his article) through the uncorrelatedness of object shading and scene transmission in small local regions. Nishino's algorithm [18] obtains the density map with a factorial Markov random field. The estimated images are broadly similar, but the proposed method produces better consistency. For instance, in Figure 5(c), more detail can be seen across the orange colors of the different pumpkins because the density map contains plenty of structural information. Similarly, the proposed haze-free images are vivid and have stronger contrast, thanks to the edge and texture prior over the entire image. Figure 6 illustrates the importance of the edge prior for defogging. As shown in the zoomed comparison, the edge prior largely determines the quality of the recovered image. For example, the Tarel and TV-L1 algorithms leave different degrees of fog-opaque pixels around foreground objects. The challenge in Figure 6 is the depth discontinuity caused by the green leaves in front of the scene. The proposed method obtains plenty of consistent edges by means of the anisotropic pseudodifferential operator. Using gradient priors over different regions, our approach isolates fine edges between objects at different depths with much finer detail than the TV-L1 method, especially in the areas marked by red rectangles. Many branches and leaves, as well as a large number of weeds around the pumpkins in Figure 5, can be observed. In contrast, Tarel computes depth values but cannot generate these gradient details at all. Although TV-L1 considers this factor, the edges are not highlighted, which leads to weak contrast.

Figures 7 and 8 compare the proposed procedure with that of He et al. Though the theories are different, both estimate the transmission map to solve the problem. The density map was transformed into transmission values by $t(x) = e^{-\beta d(x)}$ (in fact the proposed method estimates this attenuation term directly), following (1).

Obviously, the transmission map in (b) is already captured, but the details of the edges and textures are not described. Consequently, the improved estimate can be represented well by the gradient constraint, so the quality of (f) is better than (c). Panel (b) shows the depth map obtained by guided filtering [14] with dark channel estimation, and (e) the depth map obtained by the proposed method. The guided filter is an edge-preserving filter that can be applied to improve dark-channel results, but it requires careful tuning of many parameters.

More importantly, once the DCP loses effectiveness it leads to the disastrous consequences illustrated in (c). Our approach estimates the depth variation of the scene more accurately and at finer granularity across the whole scene because of the efficient gradient information. The results shown in (e) and (f) demonstrate this accuracy.

3.3. Experiment on Comparison of Conventional Algorithms

To quantitatively evaluate the performance of our approach, we used the synthetic image dataset built by Tarel et al. [34]. The results of the proposed method on typical images are illustrated in Figures 9 and 10; its performance is robust and significantly outperforms the others. Figure 9 is a landscape image with loss of visibility near the skyline. All methods successfully improve the visibility and local contrast of the scene, but many of them introduce distortion and halo artifacts, especially on the clouds and the large stone. Compared with Tan's approach, although both methods increase the edges to enhance visibility, Tan's algorithm [11] produces many saturated pixels, owing to simply maximizing the contrast of every image patch without any extra restriction. More specifically, Tan models the air-light using an MRF, but driving every term toward higher contrast produces a larger number of spurious edges. In contrast, the proposed method promotes the confidence level of edges through the prior restriction, so the result exhibits greater color consistency in addition to haze removal. Moreover, colorfulness and contrast are also indicators of color image quality. Although Fattal and Kopf perform well in regions close to the sensor, the haze is not removed effectively in faraway regions. While Tarel's and He's methods show reasonably good results, they exhibit color casts in the distant range. Our proposed method provides distinctly superior performance, as shown in Figure 9. Furthermore, because the proposed method recovers the image structure with per-pixel computations in a variational framework, it does not generate artifacts. Recovery of Figure 10 is prone to contamination of the clouds and to artifacts in the mountains.

Figure 11 shows the results compared with the abovementioned approaches. It plots the objective indicators for all of the methods and clearly reflects the comprehensive performance of each algorithm. The proposed method recovers more edges and gradients, mainly because sparse details are brought out in the density-estimation step in Hardy space. Although Tan also yields many edges and gradients, Tan's method ties image contrast to the number of edges and does not directly consider the influence of attenuation. In addition, the proposed method not only prevents the recovered image from contamination and artifacts but also improves blue tints and local contrast. These results obtain better quantitative scores in DIIVINE, SSIM, and PSNR, which were used in [35]. In simpler terms, the energy minimization method is well suited for dehazing.
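The full-reference scores mentioned here can be reproduced with standard implementations; a minimal sketch using scikit-image is given below (DIIVINE is a no-reference metric and is not included; the channel_axis argument assumes scikit-image >= 0.19).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score(restored, reference):
    """PSNR and SSIM between a dehazed result and the haze-free ground truth (floats in [0, 1])."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    # older scikit-image versions use multichannel=True instead of channel_axis=-1
    ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=1.0)
    return psnr, ssim
```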

4. Conclusion

This paper proposed a dehazing algorithm based on a variational framework that can estimate the density of haze and restore the actual scene. A rough transmission map is first estimated and then refined in Hardy space, where rich details and textures are brought out so as to satisfy the minimum-energy constraints. Afterwards, by refining the density map, we obtain a real image of higher quality. The proposed algorithm not only recovers the scene albedo by means of variational iteration but also retrieves the transmission map. Experimental results verified that the method achieves an accurate and faithful representation. In the future, this method will be studied further to improve its computational efficiency and to extend it to video dehazing.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.