Research Article | Open Access

# Hardy Variation Framework for Restoration of Weather Degraded Images

**Academic Editor:** Chih-Cheng Hung

#### Abstract

Images captured in foggy conditions suffer from poor visibility, which fades the colors and reduces the contrast of the scene. This paper proposes a novel regularization method that uses a space transformation to restore the hidden scene with high dynamic range and enhanced edge information. To efficiently improve visualization, the proposed method builds on contrast stretching, which yields a better estimation map and resolves the ill-posedness of the problem. Using minimum-energy constraints, the algorithm recovers scene albedo on a number of hazy images. Experimental results show that the method achieves an accurate and faithful representation.

#### 1. Introduction

Bad weather is a significant factor in the degradation of optical image quality. This has been an unavoidable issue for decades and deprives many applications involving scene understanding of clear structure [1, 2]. For instance, it impairs visibility when autonomously navigating a robot or vehicle [3], and it decreases object recognition and detection accuracy, since the scene irradiance cannot provide valid information. The indisputable fact is that many uncertain elements, such as fog and haze, attenuate and constrain scene appearance and severely alter the scene radiance. In other words, clear observation of objects in the scene is hidden by aerosol particles. Furthermore, atmospheric absorption and scattering cause contrast loss and deviation of visual vividness in images [4]. Restoring the hidden scene appearance through analysis of the atmospheric propagation model is therefore a key step in solving the problem.

Scattering is a process by which a particle redistributes a fraction of the incident energy into a total solid angle [5, 6]. The scattering properties depend on the refractive index and size of the particles. According to the haze particles' size and the type of camera, the monochrome atmospheric scattering model, which follows Koschmieder's law, is used to describe the colors and contrast of scene points in bad weather [7, 8]. Nevertheless, this equation is ill-posed, especially for single-image dehazing. Earlier techniques targeting the removal of light-scattering distortion in the optical model exploited polarization effects to compensate for visibility degradation [9, 10]; however, these need multiple photographs to supply several polarized haze priors. Other treatments changed enormously once a rigorous physical model describing atmospheric absorption and scattering was established [8]. These works continued McCartney [7], drawing on what is already known about atmospheric optics to develop models and methods for recovering pertinent scene properties. Many successful approaches rely on this model together with stronger priors. Tan [11] assumes that a clear scene has a higher contrast ratio than the hazy image, meaning that maximizing the contrast can lift and restore image quality; nevertheless, the results inevitably suffer from halo artifacts if overcompensated. Fattal [12] decomposed the vector of surface albedo coefficients into a sum of two components, one parallel to the air-light and one perpendicular to it, positing local decorrelation of the shading and scene transmission to solve the problem. He et al. [13] propose a dark channel prior (DCP) to estimate the transmission function with soft matting and improve the speed with a guided filter [14]. Similarly, the dark channel prior was also employed in [15–17] for single image dehazing. The key point of Nishino et al. [18] and Caraffa and Tarel [19] is to assume that the image obeys a (factorial) Markov random field (MRF/FMRF) distribution and to use hidden-field energies to iterate toward the optimal observation fields. In order to infer the atmospheric veil, Liu et al. [20] and Li et al. [21] use regularization to refine the minimal-component image while preserving the edges and gradients of the image.
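As an aside on the dark channel prior of He et al. [13] mentioned above, a minimal sketch of the dark channel computation is the per-pixel minimum over the color channels followed by a local minimum filter; the `patch` size and the brute-force loop here are illustrative choices, not the paper's implementation:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel: per-pixel minimum over the color channels, followed by
    a minimum filter over a local patch (brute force, for clarity)."""
    min_rgb = image.min(axis=2)              # min over R, G, B at each pixel
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    h, w = min_rgb.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

On haze-free outdoor images this map is close to zero almost everywhere, which is exactly the statistical observation the prior rests on.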

According to the analysis of the recent methods above, most existing efforts are devoted to finding priors and constraints for restoring the image from statistics. Indeed, prior-filter methods have also proven successful for single images, such as the median filter [22], guided filter [14], bilateral filter [15], and TV filter [20, 21]. However, most methods only exploit similarities and differences between pixels [20] or patches [18] as priors and do not consider the many striking features (edges) covered or degraded by the scattering effect. Hence, only a small number of edges are available as constraints to correct the restoration. As a result, these methods may cause halo artifacts, blurred visibility, and lower contrast.

Our method therefore adds extra restrictions and defines compact sets in order to refine fine structure. Specifically, we take advantage of regularization theory and obtain the final solution from observation data in a way that has both physical reality and mathematical tractability [23, 24]. In this paper we present a Hardy-space method that allows a more general differential operator (a pseudodifferential operator) to generate a great deal of edge and texture information as a constraint prior. Compared with previous dehazing algorithms, the main contributions of this paper are the following. (I) To remedy the lack of edge information in a foggy image, the proposed algorithm provides plenty of edges and textures by applying an anisotropic pseudodifferential kernel on Hardy space to obtain abundant prior constraints. (II) By exploiting the similarity between the Hardy space *H*^{1} and the Lebesgue space *L*^{1}, we associate *H*^{1} with the Hardy-Littlewood maximal operator to describe the infimum of the *H*^{1} space, and thereby resolve the rough estimation of the cost function produced by previous algorithms. Compared with recent effective implementations with complex constraints such as MRF [11, 20], FMRF [19], and graph theory [12], our technique restores hazy images with more visually plausible results. Experimental results demonstrate that our compound regularization method is able not only to restrain noise and blurring but also to reveal important image features such as edges and textures.

The outline of the paper is as follows. In the next section, we present our new defogging algorithm based on a variational framework, together with the details of our optimization technique. In Section 3 we report and discuss the results, and in Section 4 we summarize the method.

#### 2. Single Image Dehazing

The task of image restoration is to estimate the latent high-quality image given the low-quality observations. For simplicity and clarity, in the following review of previous works, we take the minimum-energy restriction based on the Hardy space *H*^{1} as the canonical restoration problem. In this setting, we want to obtain a number of edges and textures as priors for a diffusion model in order to restore the degraded image. Therefore, we focus on the analysis of the haze optical model and point out the disadvantages of the typical regularization model. Afterwards, we argue that a classical *H*^{1} space can completely replace the *L*^{1} space and is better suited to characterizing the geometric features of a foggy image. More importantly, through rigorous analysis, we find a handy way to seek the infimum of the space so as to solve the minimization problem.

##### 2.1. Haze Optical Model

In general, objects appear hazier because of scattering and absorption along the viewing ray as light travels from the source to the viewer. Scattering and absorption of light by the propagation medium are the main causes of image degradation in foggy scenes, so it is necessary to analyze the mechanisms of scattering in the atmosphere. Although the exact nature of the molecules and particles that cause light to lose intensity and undergo a spectral shift is highly complex, the net effect is that energy is spread out from its original direction.

Following this assumption [7] and on the basis of Bouguer's exponential law of attenuation, Narasimhan and Nayar [8] proposed that the degraded image at any pixel recorded by a monochrome camera decomposes into "attenuation" and "air-light." The former is the direct attenuation factor when light travels in a straight line from the scene to the observer; the latter is ambient light scattered in the atmosphere that reaches the observer in addition to the radiance propagated from the scene. As a result, the total irradiance is usually described as the sum of the directly attenuated irradiance and the air-light irradiance, as depicted in Figure 1. The atmospheric scattering model widely used for hazy images is defined as

$$I(x) = J(x)\,e^{-\beta d(x)} + A\bigl(1 - e^{-\beta d(x)}\bigr), \qquad (1)$$

where $I(x)$ is the observed intensity at pixel $x$, $J(x)$ is the scene radiance or haze-free image, $A$ is the sky brightness for the whole image, $\beta$ is the scattering coefficient of the atmosphere, and $d(x)$ is the depth of the scene point. Note that the monochrome model assumes that the scattering coefficient is the same for all the color channels.
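Model (1) can be exercised directly, for example to synthesize hazy test images. A minimal sketch, assuming `J` is an RGB image in [0, 1], `A` a scalar air-light, and `d` a per-pixel depth map (all names are illustrative):

```python
import numpy as np

def add_haze(J, A, beta, d):
    """Synthesize a hazy image with the monochrome scattering model:
    I(x) = J(x) * t(x) + A * (1 - t(x)),  where  t(x) = exp(-beta * d(x))."""
    t = np.exp(-beta * d)                 # transmission along the viewing ray
    return J * t[..., None] + A * (1.0 - t[..., None])
```

At zero depth the observation equals the scene radiance; as depth grows, every pixel converges to the air-light, which is the visual signature of dense haze.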

It is an impossible task to calculate $J(x)$, the true intensity on a clear day, without any knowledge of the three parameters $A$, $\beta$, and $d(x)$. According to the results of previous research [21], (1) can be rewritten as

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr). \qquad (2)$$

For convenience, we write $t(x) = e^{-\beta d(x)}$ for the transmission. As suggested by [13], the air-light $A$ can easily be estimated automatically. Hence, only two unknowns, $J(x)$ and $t(x)$, remain to be calculated. If we take the attenuation term as the breakthrough point, the physical constraint $0 < t(x) \le 1$ is immediate. Accordingly, we intend to use the minimization in (2) to characterize the maximum of the attenuation term: as described in [21], maximizing the contrast of the resulting image is equivalent to maximizing $1 - t(x)$, assuming that $t$ is smooth.

However, the formulation of [21] is a nonconvex optimization problem that is very hard to solve. More importantly, the attenuation information becomes weaker as the fog concentration grows, and it cannot provide efficient information about the degradation, as shown in Figure 2. Here, we adopt a modified minimum-energy restriction to pursue the maximum of the attenuation term while allowing discontinuities along edges.
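Once $A$ and the transmission are estimated, the scene radiance follows by inverting (2). A sketch of that inversion; the lower clamp `t_min` is a standard stabilization assumed here, not a value prescribed by this paper:

```python
import numpy as np

def recover_scene(I, A, t, t_min=0.1):
    """Invert I = J t + A (1 - t) for the scene radiance J.  Clamping the
    transmission from below guards against amplifying noise where t -> 0,
    i.e. in the densest haze."""
    t = np.maximum(t, t_min)
    return (I - A) / t[..., None] + A
```

This is exactly why a weak attenuation signal hurts: where $t$ approaches zero, the division blows up any error in the estimated transmission.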

##### 2.2. Edge Diffusion Model via Anisotropic Pseudodifferential Operator on Hardy Space

Total variation (TV) is a well-known model proposed by Rudin et al. [25]; it is a very successful and efficient method for image restoration because it preserves edges, owing to the piecewise-constant regularization property of the TV norm [26]. For any degradation factor, we consider the following total-variation-based image restoration problem:

$$\min_{u} \int_{\Omega} |\nabla u|\,dx + \frac{\lambda}{2}\,\|u - f\|_{2}^{2}, \qquad (6)$$

where $\nabla$ is the first-order derivative operator and $\lambda$ is a parameter that balances the data-fidelity term against the regularization term. The regularity term is understood not in the conventional Sobolev sense but as a Radon measure. The main characteristic of the TV image model is that it preserves singularities and edges, which are important visual cues in computer vision. However, if the estimated gradient does not satisfy the requirement, the restored image tends to be over-smooth and to lose texture [27]. Evidently, the denser the mist, the fewer gradient priors are available, pushing the quality of the restored image into decline.
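As a point of reference, this ROF-type problem can be minimized by plain gradient descent on a smoothed TV energy. The sketch below is generic TV denoising, not the proposed Hardy-space model; `eps` smooths the non-differentiable TV term, and all parameter values are illustrative:

```python
import numpy as np

def tv_denoise(f, lam=2.0, step=0.1, n_iter=100, eps=1e-6):
    """Explicit gradient descent on the smoothed ROF energy
        min_u  sum sqrt(|grad u|^2 + eps) + (lam/2) ||u - f||^2
    with periodic boundaries (np.roll)."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u           # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag               # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * (div - lam * (u - f))         # descend the energy
    return u
```

Because the fidelity term starts at zero and the total energy decreases along the descent, the TV of the output is necessarily below that of the noisy input.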

In order to add extra edge restrictions, a pseudodifferential operator is a good way to solve this problem, since it can capture more geometric features of singularity and local mutation characteristics for restoring the foggy image. Such an operator is, in fact, not self-mapping on *L*^{1}; it is, however, bounded on the Hardy space *H*^{1}, the analogue of *L*^{1}. More importantly, as far as the energy functional is concerned, the regular terms correspond to local minimum energies. Although lower semicontinuity and compactness can be guaranteed in the *L*^{1} setting, the minimum value still needs an infimum to measure its accuracy in the space. Inspired by the works above, we propose a novel variational model with *H*^{1}-space regularization to seek more features and a stable infimum for foggy image restoration.

Typically, a Hardy space is associated with the Laplace operator and the Riesz transform, but these depend on the smoothness of an operator whose integral equals 1, and such spaces cannot describe the singular features of foggy images. Therefore, how to utilize a pseudodifferential operator to represent a Hardy space becomes a significant point. We note an operator subject to the following assumptions:

(1) it is a nonsmooth function, so that more geometric features of singularity can be demonstrated;
(2) the classic Hardy space can be characterized by the area integral, the square function, and the maximal function defined by the associated semigroup;
(3) it should have an anisotropic kernel, which is robust and overcomes the defect that an isotropic kernel lacks the capability to extract fine multidirectional intensity variations of an image.

In view of these conceptions, we consider the Hardy space induced by an anisotropic pseudodifferential operator as a general regularization term satisfying these assumptions. In this paper, we denote the operator as follows [28], where is the result of an inverse Fourier transform, is a pseudodifferential positive homogeneous operator, and its symbol is a positive homogeneous function of the stated degree. In that case, the pointwise estimate of the kernel in the operator family can be stated as follows.

Lemma 1. * is a smooth function in , and there is a constant such that:*

Lemma 2. *If , , then is a smooth function and there is a constant such that:* *Using these estimates, we can consider the nontangential maximal function defined by the operator family :* *where is defined by (7); we justify the conclusion in particular by showing:*

*Proof.* On the one hand, we first notice that the following holds, where . In that case, we immediately know from the expression of Lemma 1 that and . With reference to [29], the corresponding bound follows. On the other hand, we take a continuous integrable function defined in satisfying the normalization below. We also denote . Clearly, , and it is easy to verify that ; is a Schwartz function [30]. So we prove the claim. Moreover, inspired by [31], we believe that anisotropic pseudodifferential kernels (ANPDKs) can attain the edge map strength (EMS). From the ANPDKs, anisotropic directional derivatives (ANDDs) are derived to capture the local directional intensity variation of an image, and the EMS is then obtained from the ANDDs. For this purpose, (10) can be specified with the anisotropic factor , and the ANPDKs can be obtained through rotation, where is the rotation matrix. Furthermore, the ANDDs are obtained as first-order derivatives of the ANPDKs. This suggests that the noise robustness depends strongly on the scale, while the edge resolution depends on the ratio of the scale to the anisotropic factor. More importantly, image convolution with the ANPDKs reveals an edge stretch effect.
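For concreteness, an anisotropic Gaussian-style kernel in the spirit of the ANPDKs of [31] can be sketched as below. The exact parameterization (`rho` as the anisotropic factor, `theta` the rotation angle) is an assumption, not the paper's formula, and the ANDDs would follow by differentiating along `theta`:

```python
import numpy as np

def anisotropic_kernel(size, sigma, rho, theta):
    """Anisotropic Gaussian-style kernel: scale sigma, anisotropic factor
    rho >= 1, orientation theta.  rho = 1 recovers the isotropic Gaussian;
    larger rho stretches the kernel, which is what lets a rotated family
    of kernels probe directional intensity variation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates into the kernel frame
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(rho * xr**2 + yr**2 / rho) / (2.0 * sigma**2))
    return g / g.sum()          # normalize so the kernel sums to 1
```

Convolving an image with a bank of such kernels over several `theta` values is what yields the multidirectional responses from which an edge map can be assembled.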

##### 2.3. Infimum for Constrained Energy Minimization

Following the demonstration above, formula (6) can be rewritten in Hardy-space form. Generally, a cross-iteration method is adopted to solve this problem, but the iteration is very complex. If we can instead turn the minimization of the diffusion equation into finding an infimum in the *H*^{1} space, the problem becomes easy to resolve. This suggests that, in order to establish the existence of an *H*^{1} representation of the infimum problem, we can look for the maximal operator in the following way.

Lemma 3. *Define , a normalized surface measure on the sphere with center point and radius . Let the telescoping function be , which can be represented as ; then [32].*

Lemma 4. *For any fixed , , it is obvious that and [33].*

By applying Lemmas 3 and 4, we now prove the following alternative theorem: if , when , then . In particular, one can express

*Proof.* It follows directly from the atomic decomposition characterization of Hardy space that . In addition, due to the fact that , . So , and we obtain the conclusion if we can prove the inequality . In this case, we let the support set of the atoms lie in a cube , which satisfies . Combining this with Lemma 4, we immediately conclude the following. By the translation-invariance property, we may take the center of to be the origin. Therefore, we use a decomposing strategy instead of :where leads to reliable inequalities specified by the Cauchy-Schwarz inequality. For , because and , we have , which reduces to a common representation for . As a result, (18) is equal to

#### 3. Numerical Calculation Method and Results

In this section, we present the numerical method and the results of seven experiments to validate the theoretical results and show that the proposed model can recover the original image from its degraded version well, especially for piecewise-constant images.

##### 3.1. Parameters Determination

In order to efficiently estimate the factor from (24), we first need to transform the image into Hardy space. Thus (7) can be rewritten as in [34], where is a kernel function, is the amplitude of each pixel, and is a phase. In practice, denotes the difference of position. Taking the relationship between adjacent pixels into consideration, the simplest way is to convert the pixels into a spectrum, thereby calculating and using a training block centered on to find , where . After that, the anisotropic pseudodifferential kernel can be described following (17). As a result, . In addition, to guarantee the accuracy of the minimum, we obtain the infimum of the Hardy space via the Hardy-Littlewood maximal operator.

As shown in Figure 3, in the numerical calculation we evaluate the transmission map by the split Bregman (SB) method. The SB approach splits the nonlinear minimization problem into a sequence of linear subproblems, combining an accelerated iterative strategy with fast convergence. To implement the SB method, we need to construct more accurate forward differences of the objective function to constrain the attenuation factor. From this aspect, the correctness of the above discussion is verified. Next we present the detailed algorithm of the proposed scheme.

*(1) Initialization*. (a) Estimate the air-light through [13]. (b) Obtain the gradient of and then transform the image pixels into the Hardy-space form , where is a reciprocal basis of , defined in (10). (c) Find the infimum gradient of the Hardy space . (d) Set the initial estimates , , , , .

*(2) Cross-Iteration*. (a) Outer loop: . (b) Update parameters. (c) Inner loop: calculate , , , . (d) Estimate .

*(3) Refine the Supplementary Term*. (a) Adjust the parameter = 0.8~1.4, which controls the strength of the visibility restoration. (b) Acquire the real image: .
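The refinement and restoration step can be sketched as follows; note that treating the strength parameter as an exponent `p` on the transmission is an assumption made here for illustration, since the extracted text does not spell out how the 0.8~1.4 parameter enters:

```python
import numpy as np

def refine_and_restore(I, A, t, p=1.0, t_min=0.1):
    """Step (3) sketch: apply the visibility-strength parameter p
    (0.8-1.4) and invert the optical model I = J t + A (1 - t).
    ASSUMPTION: p acts as an exponent on the (clamped) transmission."""
    tp = np.maximum(t, t_min) ** p
    J = (I - A) / tp[..., None] + A
    return np.clip(J, 0.0, 1.0)     # keep the result a valid image
```

Under this reading, raising `p` above 1 shrinks the effective transmission and therefore strengthens the restoration, while `p` below 1 is more conservative.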

An illustration of the adjusting step for obtaining the transmission map and the real scene is shown in Figure 4. The abundant edges and textures on the hill make the transmission map challenging to estimate. Obviously, after the *L*^{1} space is converted to the Hardy space, the proposed method obtains sharp depth edges consistent with the fog scene. By varying the parameter , our approach isolates fine edges between objects at different depths with much finer detail.

##### 3.2. Experiment on Density of Aerosol Particles

Figure 5 compares the proposed method with the approaches of Nishino and Fattal. Fattal [12] estimates the density map (called the depth map in his article) through the uncorrelated characteristics of object shading and scene transmission in a simple local region. Nishino's algorithm [18] obtains the density map mainly via a factorial Markov random field. Apparently, the estimated images are more or less the same, but the proposed method produces better consistency. For instance, in Figure 5(c), more details can be seen across the orange colors of the different pumpkins because the density map exhibits plenty of structural information. Similarly, the proposed haze-free images are vivid and their contrast is stronger, thanks to the edge and texture prior over the entire image. Figure 6 illustrates the importance of the edge prior for defogging. As the zoomed comparison shows, the edge prior determines the quality of the recovered image. For example, the Tarel and TV-*L*^{1} algorithms show different degrees of fog-opaque pixels around foreground objects. The challenge in Figure 6 is the depth discontinuity caused by the green leaves in front of the scene. The proposed method obtains plenty of consistent edges by means of the anisotropic pseudodifferential operator. Using gradient priors of different regions, our approach isolates fine edges between objects at different depths with much finer detail than the TV-*L*^{1} method, especially in the areas marked by red rectangles. Many branches and leaves, as well as a large number of weeds around the pumpkins in Figure 5, can be observed. In contrast, Tarel computes depth values but cannot generate the details for these gradients at all. Although TV-*L*^{1} considers this factor, the edges are not highlighted, which causes insensitive contrast.


Figures 7 and 8 compare the proposed procedure with that of He et al. Though the theories are different, both estimate the transmission map to solve the problem. The density map is transformed into transmission values via the exponential attenuation in (1) (in fact the proposed method estimates the complementary term).

Obviously, both methods capture the transmission map in (b), but neither describes the details of the edges and textures there. Consequently, the improved estimate is represented well by the gradient constraint, so the quality of (f) is better than that of (c). Panel (b) shows the depth map obtained by guided filtering [14] with dark channel estimation, and (e) shows the depth map obtained by the proposed method. The guided filter is an edge-preserving filter that can be applied to improve dark channel results, although careful tuning of many parameters is necessary.

More importantly, once the DCP loses effectiveness, it leads to the disastrous consequences illustrated in (c). Our approach estimates the depth variation of the scene more accurately and at finer granularity across the whole scene because of the efficient gradient information. The results shown in (e) and (f) demonstrate this accuracy.

##### 3.3. Experiment on Comparison of Conventional Algorithms

To quantitatively evaluate the performance of our approach, we used the synthetic image dataset built by Tarel et al. [34]. The results of the proposed method are illustrated on typical images in Figures 9 and 10. The performance is robust and significantly outperforms the others. Figure 9 is a landscape image with loss of visibility near the skyline. All methods successfully improve visibility and local contrast of the scene, but many of them introduce distortion and halo artifacts, especially on the clouds and the huge stone. Although both Tan's approach and ours increase the edges to enhance visibility, Tan's algorithm [11] produces plenty of saturated pixels, owing to simply maximizing the contrast of every image patch without considering extra restrictions. More specifically, Tan models the air-light using an MRF, but every term that favors more contrast also produces a larger number of edges. In contrast, the proposed method promotes the confidence level of edges based on the prior restriction, so the result exhibits greater consistency in colors besides the removal of haze. Moreover, colorfulness and contrast are also evaluation indicators measuring color image quality. Although Fattal and Kopf provide good performance in the region close to the sensor, the haze is not removed effectively in faraway regions. While Tarel's and He's methods each show reasonably good results, they exhibit a color cast in the distant range. Our proposed method provides distinctively superior performance, as shown in Figure 9. Furthermore, in sharp contrast, the proposed method recovers the image structure through per-pixel variational-framework computations and does not generate artifacts. Recovery of Figure 10 is prone to contaminating the clouds and producing artifacts in the mountains.

Figure 11 shows the results compared with the abovementioned approaches. It plots the objective indicators for all of the methods and clearly reflects the comprehensive performance of each algorithm. The proposed method recovers more edges and gradients, mainly owing to the fine details brought out in the density estimation step in Hardy space. Although Tan's method also shows enough edges and gradients, it regulates image contrast in association with the number of edges and does not directly consider the influence of attenuation. In addition, the proposed method not only prevents the recovered image from contamination and artifacts but also improves blue tints and local contrast. These results obtain better quantitative scores on DIIVINE, SSIM, and PSNR, the metrics used in [35]. In simpler terms, the energy minimization method is well suited for dehazing.
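Of the reported scores, PSNR is simple to reproduce; a minimal sketch is below (SSIM and DIIVINE need their respective reference implementations, so they are not reimplemented here):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    restored test image, assuming pixel values scaled to [0, peak]."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For synthetic datasets such as [34], `ref` is the known haze-free ground truth and `test` is each algorithm's output, so higher PSNR means a more faithful restoration.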

#### 4. Conclusion

This paper proposed a dehazing algorithm based on a variational framework which can estimate the density of haze and restore the actual scene. The rough transmission map is first estimated and then refined in Hardy space, where the rich details and textures are brought out in order to satisfy the minimum-energy constraints. Afterwards, by refining the density map, we obtain the real scene with higher image quality. The proposed algorithm not only recovers the scene albedo by taking advantage of variational iteration but also retrieves the transmission map. Experimental results verify that the method achieves an accurate and faithful representation. In the future, this method will be studied in more depth to improve its efficiency and to extend it to video dehazing.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

- K. B. Gibson, D. T. Võ, and T. Q. Nguyen, “An investigation of dehazing effects on image and video coding,” *IEEE Transactions on Image Processing*, vol. 21, no. 2, pp. 662–673, 2012.
- Z. Chen, B. R. Abidi, D. L. Page, and M. A. Abidi, “Gray-level grouping (GLG): an automatic method for optimized image contrast enhancement—Part II: the variations,” *IEEE Transactions on Image Processing*, vol. 15, no. 8, pp. 2303–2314, 2006.
- M. S. Shehata, J. Cai, W. M. Badawy et al., “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” *IEEE Transactions on Intelligent Transportation Systems*, vol. 9, no. 2, pp. 349–360, 2008.
- J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” *IEEE Transactions on Image Processing*, vol. 7, no. 2, pp. 167–179, 1998.
- E. J. McCartney, *Optics of the Atmosphere: Scattering by Molecules and Particles*, John Wiley and Sons, New York, NY, USA, 1976.
- A. J. Preetham, P. Shirley, and B. E. Smits, “A practical analytical model for daylight,” in *Proceedings of SIGGRAPH*, pp. 91–100, Los Angeles, Calif, USA, 1999.
- E. J. McCartney, *Optics of the Atmosphere: Scattering by Molecules and Particles*, John Wiley and Sons, New York, NY, USA, 1976.
- S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 25, no. 6, pp. 713–724, 2003.
- E. Namer, S. Shwartz, and Y. Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” *Optics Express*, vol. 17, no. 2, pp. 472–493, 2009.
- Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” *Applied Optics*, vol. 42, no. 3, pp. 511–525, 2003.
- R. T. Tan, “Visibility in bad weather from a single image,” in *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1–8, Anchorage, Alaska, USA, June 2008.
- R. Fattal, “Single image dehazing,” *ACM Transactions on Graphics*, vol. 27, no. 3, article 72, 2008.
- K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 33, no. 12, pp. 2341–2353, 2011.
- K. He, J. Sun, and X. Tang, “Guided image filtering,” in *Proceedings of the European Conference on Computer Vision*, pp. 1–14, Crete, Greece, 2010.
- C.-H. Yeh, L.-W. Kang, M.-S. Lee, and C.-Y. Lin, “Haze effect removal from image via haze density estimation in optical model,” *Optics Express*, vol. 21, no. 22, pp. 27127–27141, 2013.
- K. Gibson, D. Võ, and T. Nguyen, “An investigation in dehazing compressed images and video,” in *Proceedings of the IEEE OCEANS Conference*, pp. 1–8, Seattle, Wash, USA, September 2010.
- T. H. Kil, S. W. Lee, and N. I. Cho, “A dehazing algorithm using dark channel prior and contrast enhancement,” in *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '13)*, pp. 2484–2487, Vancouver, Canada, May 2013.
- K. Nishino, L. Kratz, and S. Lombardi, “Bayesian defogging,” *International Journal of Computer Vision*, vol. 98, no. 3, pp. 263–278, 2012.
- L. Caraffa and J.-P. Tarel, “Markov random field model for single image defogging,” in *Proceedings of the IEEE Intelligent Vehicles Symposium (IV '13)*, pp. 994–999, Gold Coast, Australia, June 2013.
- X. Liu, F. Zeng, Z. Huang, and Y. Ji, “Single color image dehazing based on digital total variation filter with color transfer,” in *Proceedings of the 20th IEEE International Conference on Image Processing (ICIP '13)*, pp. 909–913, Melbourne, Australia, September 2013.
- L. Li, W. Feng, and J. Zhang, “Contrast enhancement based single image dehazing via TV-*L*^{1} minimization,” in *Proceedings of the IEEE International Conference on Multimedia & Expo*, pp. 435–440, Chengdu, China, 2014.
- J.-P. Tarel, N. Hautière, L. Caraffa, A. Cord, H. Halmaoui, and D. Gruyer, “Vision enhancement in homogeneous and heterogeneous fog,” *IEEE Intelligent Transportation Systems Magazine*, vol. 4, no. 2, pp. 6–20, 2012.
- M. Bertero, T. Poggio, and V. Torre, “Ill-posed problems in early vision,” *Proceedings of the Royal Society B: Biological Sciences*, vol. 226, no. 1244, pp. 303–323, 1985.
- F. Cucker and S. Smale, “Best choices for regularization parameters in learning theory: on the bias-variance problem,” *Foundations of Computational Mathematics*, vol. 2, no. 4, pp. 413–428, 2002.
- L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” *Physica D: Nonlinear Phenomena*, vol. 60, no. 1–4, pp. 259–268, 1992.
- S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Parameter estimation in TV image restoration using variational distribution approximation,” *IEEE Transactions on Image Processing*, vol. 17, no. 3, pp. 326–339, 2008.
- L. Huang, L. Xiao, Z. Wei, and Z. Zhang, “Variational image restoration based on Poisson singular integral and curvelet-type decomposition space regularization,” in *Proceedings of the 18th IEEE International Conference on Image Processing*, pp. 685–688, Brussels, Belgium, 2011.
- Q. Deng, Y. Ding, and X. Yao, “Characterizations of Hardy spaces associated to higher order elliptic operators,” *Journal of Functional Analysis*, vol. 263, no. 3, pp. 604–674, 2012.
- E. M. Stein, *Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals*, Princeton University Press, Princeton, NJ, USA, 1993.
- C. Fefferman and E. M. Stein, “*H*^{p} spaces of several variables,” *Acta Mathematica*, vol. 129, pp. 137–193, 1972.
- P.-L. Shui and W.-C. Zhang, “Noise-robust edge detector combining isotropic and anisotropic Gaussian kernels,” *Pattern Recognition*, vol. 45, no. 2, pp. 806–820, 2012.
- M. Huixi, “Commutators of generalized Hardy operators on homogeneous groups,” *Acta Mathematica Scientia, Series B*, vol. 30, no. 3, pp. 897–906, 2010.
- R. Burkholder, “The maximal function characterization of the class *H*^{p},” *Transactions of the American Mathematical Society*, vol. 157, pp. 137–153, 1971.
- J.-P. Tarel, N. Hautière, A. Cord, D. Gruyer, and H. Halmaoui, “Improved visibility of road scene images under heterogeneous fog,” in *Proceedings of the IEEE Intelligent Vehicles Symposium (IV '10)*, pp. 478–485, San Diego, Calif, USA, June 2010.
- Y.-K. Wang and C.-T. Fan, “Single image defogging by multiscale depth fusion,” *IEEE Transactions on Image Processing*, vol. 23, no. 11, pp. 4826–4837, 2014.

#### Copyright

Copyright © 2015 Lin-Yuan He et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.