Research Article | Open Access
Zhixiang Chen, Binna Ou, "Visibility Detection Algorithm of Single Fog Image Based on the Ratio of Wavelength Residual Energy", Mathematical Problems in Engineering, vol. 2021, Article ID 5531706, 13 pages, 2021. https://doi.org/10.1155/2021/5531706

Visibility Detection Algorithm of Single Fog Image Based on the Ratio of Wavelength Residual Energy

Academic Editor: Shujun Fu
Received: 12 Jan 2021
Revised: 12 Mar 2021
Accepted: 18 Jun 2021
Published: 25 Jun 2021

Abstract

Different visibilities and different wavelength attenuations can cause color deviation problems in some dehazing algorithms. A visibility detection algorithm based on a single fog image is proposed. First, the visibility range of the image is preliminarily determined according to the transmissivity; then, the normalized differences between the residual energy ratios of the different wavelengths of the RGB channels are calculated, and pixels with a large gray deviation in a single channel are filtered out to improve the calculation accuracy; finally, the image visibility detection value is calculated. The experimental results show that the proposed algorithm not only effectively reflects the visibility of fog images but is also well suited for evaluating the effectiveness of image defogging algorithms and the degree to which the defogging color deviation is restored.

1. Introduction

Visibility, also known as meteorological optical range [1], refers to the maximum horizontal distance at which a normal observer with a contrast threshold of 0.05 can distinguish a moderately sized black target from the background. Visibility reflects the transparency of the atmosphere. In severe weather, such as rainstorms, haze, and sandstorms, the transparency of the atmosphere decreases, and the visibility drops sharply. When the visibility is less than 100 m, it is reported as zero; hazy weather conditions may therefore reduce the visibility to zero. This has a negative impact on transportation, navigation, aviation, and daily life. Therefore, accurately detecting visibility in fog is of great significance for environmental protection and traffic management.

Since Koschmieder put forward the sky-based visibility measurement theory in 1924 [2], research on visibility measurement has never stopped. Currently, visibility can be detected by visual inspection, visibility meter measurement, or image processing. Visual inspection relies on subjective human judgment and lacks rigor, standardization, and stability. The widely used instrument-based methods mainly include the transmission visibility meter, the scattering visibility meter, and the lidar visibility meter, but, in practice, a measurement instrument must be installed roughly every 50 km, and the cost of purchase and maintenance is high, making it difficult to meet the needs of large-scale coverage [3]. In contrast, visibility detection based on image processing can use existing camera monitoring systems to obtain images and detect visibility, which is the main direction of current research. In 2013, Kim proposed a new method of obtaining coarse transmittance that is not based on the dark channel prior and does not focus on the study of transmittance, but on enhancing contrast [4]. Negru et al. [5] proposed an inflection-point-based algorithm to detect the occurrence and concentration of fog, but the algorithm is inefficient and prone to block effects. In 2014, Mao et al. [6] proposed a visibility evaluation index and classification algorithm for foggy images. This method is based on the atmospheric scattering model and data analysis of the dark and light channels, but it is not suitable for monochromatic light sources. In 2015, Zhu et al. [7] observed the HSV color channels and found that the difference between brightness and saturation increases with fog concentration, and proposed a color attenuation prior algorithm.
In 2016, to improve the visibility of sea fog images for the unmanned surface vessel visual system, Ma et al. [8] presented a novel defogging algorithm based on a fusion strategy. Feature fusion attention (FFA) [9] verifies the effectiveness of using different weights for thick fog and mist when designing a network. It performs well on the Benchmarking Single Image Dehazing and Beyond (RESIDE) dataset [10], but when dealing with real-world foggy images, the defogging effect is not obvious, and it can even damage the image. To sum up, visibility detection based on image processing has achieved some results but is still in the theoretical and experimental stage, and significant challenges and room for development remain.

The higher the visibility, the more easily the human eye can distinguish objects. When visibility decreases, the ability of the human eye to distinguish objects decreases, and it becomes harder to distinguish different scenes in the image obtained by the camera. Therefore, visibility can serve as an evaluation index of the effectiveness of image defogging algorithms. Image quality evaluation is of great significance for subsequent object detection, recognition, and analysis, so researchers attach great importance to it and have put forward many quality evaluation indexes for defogged images. Currently, there are two kinds of methods: subjective evaluation and objective evaluation. The subjective evaluation method suffers from strong individual differences and large errors and is not reliable. For objective evaluation, the most widely used basic indexes, such as peak signal-to-noise ratio (PSNR), information entropy (IE), and structural similarity (SSIM), as well as newer indexes such as the visible edge ratio, the normalized gradient mean of visible edges, and the saturated pixel ratio, only compare the defogged image with the original image: there is no visibility index, and these indexes do not take into account the color offset caused by different defogging algorithms.

At present, there is little research on visibility measurement and quantitative visibility evaluation based on a single image. In this paper, a visibility detection algorithm based on the ratio of wavelength residual energy in the image is proposed. It includes four parts: haze weather judgment, pixel filtering, color bias, and visibility calculation. First, in the haze judgment stage, the range of fog concentration is determined from the atmospheric transmittance obtained by a statistical guided filtering algorithm. Second, the pixels with larger fading are filtered out to improve the accuracy of the calculation results. Finally, in the visibility calculation stage, the residual energy ratio of the three channels of each image is calculated, and the visibility is computed from the differences between the three channels at each pixel. The visibility value obtained can be used as an evaluation index of the effectiveness of defogging algorithms, and the different transmittances obtained from the residual energy ratios of different wavelengths can also be used to evaluate how well different defogging algorithms restore the color offset of the image. The algorithm does not require manually placed measurement targets, so it is convenient to apply and has broad application prospects.

2. Visibility Detection

2.1. Solution of Transmissivity

Kaiming He et al. [11] put forward the dark channel prior theory, which holds that, in most outdoor fog-free images, there are always pixels whose luminance value in at least one color channel is very low and close to 0. Wang et al. [12] used a weight map to optimize the transmission image in the dark channel prior algorithm and overcame challenges of dark channel prior-based algorithms, such as the block effect and color distortion. In fog and other bad weather conditions, before the light reaches the camera, it is absorbed, scattered, and refracted by large-diameter particles in the air, which degrades the outdoor scene image. The process is described by the atmospheric scattering model [13]:

I(x) = J(x)t(x) + A(1 − t(x)), (1)

where x is the coordinate of the pixel, I(x) is the input image with fog, J(x) is the restored image without fog, A is the atmospheric light value, and t(x) is the transmittance of the scene light. Defogging based on this model recovers the original fog-free image J(x) from the observed image I(x).

For a fog-free image J, the dark channel Jdark of an outdoor fog-free image is expressed as

Jdark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} Jc(y) ), (2)

where c indexes the three channels red, green, and blue, Jc(y) is the channel c image of the restored clear image J, and Ω(x) is the local area centered on the pixel point x. According to the dark channel prior theory, the intensity value of Jdark is always very low and close to 0 in the nonsky part of most outdoor fog-free images.

Given that the atmospheric light value A is constant in the local region Ω(x) and the transmittance there is fixed, taking the minimum of (1), computed independently in the three color channels and normalized by A, gives

min_{y∈Ω(x)} ( min_c Ic(y)/Ac ) = t(x) min_{y∈Ω(x)} ( min_c Jc(y)/Ac ) + 1 − t(x), (3)

where Ic(y) is the channel c image of the input image I and Ac is the atmospheric light value of channel c. Then, taking the minimum over the three color channels and applying the dark channel prior, Jdark(x) = 0, we get

t(x) = 1 − min_{y∈Ω(x)} ( min_c Ic(y)/Ac ). (4)

With the transmissivity calculated by (4), the fog in the image is completely removed, which may leave the processed image lacking a sense of depth and realism. To preserve spatial perspective and maintain a natural subjective visual effect after defogging, a retention factor ω (0 < ω ≤ 1) is introduced and chosen according to the actual situation, so that a certain proportion of fog is retained; ω is usually taken as 0.90–0.97:

t(x) = 1 − ω min_{y∈Ω(x)} ( min_c Ic(y)/Ac ). (5)
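As a minimal sketch of the dark channel prior and the transmissivity estimate with the retention factor ω (assuming a float RGB image normalized to [0, 1] and a per-channel atmospheric light vector A, both hypothetical inputs for illustration):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a minimum filter
    over a patch x patch neighborhood (the dark channel prior)."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, A, omega=0.95, patch=15):
    """t(x) = 1 - omega * min_y min_c I^c(y) / A^c, with retention
    factor omega keeping a small proportion of fog."""
    norm = img / A  # normalize each channel by its atmospheric light
    return 1.0 - omega * dark_channel(norm, patch)
```

For a uniform gray image the estimate reduces to 1 − ω times the normalized gray level, which is a quick sanity check on the implementation.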

2.2. Visibility Detection Based on Transmissivity

With the sky as the background, Koschmieder established the relationship between the luminance L0 of the scene and the distance d [2]:

L = L0 e^(−μd) + Lf (1 − e^(−μd)), (6)

where L is the received brightness, Lf is the sky brightness, and μ is the extinction coefficient.

Duntley proposed the attenuation law of contrast with distance [2]:

C = C0 e^(−μd), (7)

where C is the apparent brightness contrast of the scene at distance d and C0 is the intrinsic brightness contrast between the scene and the background. The ratio C/C0 = ε is the contrast threshold, which corresponds to the transmittance:

ε = C/C0 = e^(−μd) = t. (8)

At the maximum visibility of the image, the transmissivity approaches 0; this limit is shared by all scenes in the image.

Each point in the image may have a different d(x) and a different t(x). When t(x) is close to 1, d(x) is close to 0, which means that there is no fog at point x. Conversely, when the transmissivity of point x is close to 0, point x is the farthest point in the image, and d(x) is the largest. The visibility V(x) of the image must be less than or equal to this maximum distance d(x)|t→0. Koschmieder recommends using the distance at which the contrast threshold is 0.02 as the visibility:

V = (1/μ) ln(1/0.02) = 3.912/μ. (9)

The World Meteorological Organization uses the distance at which the contrast threshold is 0.05 as the meteorological optical range [14]:

V = (1/μ) ln(1/0.05) ≈ 2.996/μ ≈ 3/μ. (10)
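Both visibility formulas follow from the relation ε = e^(−μV): solving for V gives V = ln(1/ε)/μ. A small helper (assuming μ in km⁻¹, a unit choice made here for illustration) covers both contrast thresholds:

```python
import math

def visibility_from_extinction(mu, eps=0.05):
    """V = ln(1/eps) / mu. eps = 0.05 gives the WMO meteorological
    optical range; eps = 0.02 gives Koschmieder's recommendation."""
    return math.log(1.0 / eps) / mu
```

With ε = 0.05, ln(1/0.05) ≈ 2.996, which is why the text rounds the constant to 3.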

2.3. Residual Energy Ratio of Different Wavelengths

According to the Tyndall effect [15], the degree of scattering by suspended particles in the atmosphere differs for different wavelengths: the longer the wavelength, the less obvious the scattering effect. Therefore, the transmittance of light of different wavelengths differs and cannot be regarded as a single t. Taking the RGB color channels as an example, the transmittances of the three channels should be different [16].

Kruse [17] gives the relationship between the extinction coefficient and visibility in fog when the propagation contrast threshold is 0.02:

μ(λ) = (3.912/V)(λ/0.55)^(−q), (11)

q = 0.585 V^(1/3), V < 6 km. (12)

The spectral distribution characteristics of different wavelengths clearly differ, so the extinction coefficient here is monochromatic. In this paper, the relationship between the extinction coefficient and visibility at a contrast threshold of 0.05 is used:

μ(λ) = (2.996/V)(λ/0.55)^(−q), (13)

where λ is the wavelength of light, 0.55 μm is the central wavelength of visible light, and q is the wavelength correction factor, expressed as

q = 1.6, V > 50 km; q = 1.3, 6 km < V ≤ 50 km. (14)

Kim et al. [18] gave a more accurate piecewise division of q for visibilities below 6 km. When V is greater than 6 km, the visibility is already good and there is no need to remove fog. In this paper, the following formula is adopted:

q = 0.16V + 0.34, 1 km ≤ V < 6 km;
q = V − 0.5, 0.5 km ≤ V < 1 km;
q = 0, V < 0.5 km. (15)

According to (8) and (15), the transmissivity is as follows:

t(λ, x) = e^(−μ(λ)d(x)) = exp[−(2.996/V)(λ/0.55)^(−q) d(x)]. (16)
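The wavelength-dependent transmissivity can be sketched as follows (a sketch under the stated assumptions: the piecewise q follows the Kim et al. refinement for V < 6 km and the 1.6/1.3 cases for higher visibility; λ in micrometers, V and d in kilometers):

```python
import math

def q_factor(V):
    """Wavelength correction factor q as a function of visibility V (km);
    the V < 6 km branches follow Kim et al.'s refinement."""
    if V > 50:
        return 1.6
    if V > 6:
        return 1.3
    if V >= 1:
        return 0.16 * V + 0.34
    if V >= 0.5:
        return V - 0.5
    return 0.0

def transmissivity(V, lam_um, d_km):
    """t = exp(-(2.996 / V) * (lam / 0.55)^(-q) * d). Longer wavelengths
    scatter less, so they retain a larger residual energy ratio."""
    mu = (2.996 / V) * (lam_um / 0.55) ** (-q_factor(V))
    return math.exp(-mu * d_km)
```

At the same visibility and distance, a red wavelength (e.g., 0.65 μm) yields a higher t than a blue one (e.g., 0.45 μm), which is the channel difference the algorithm exploits.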

The dark channel algorithm is used to calculate the different transmissivity corresponding to different wavelengths of the three channels, and the obtained transmissivity is modified by median filtering, as shown in Figure 1.

Figure 2 shows the histograms corresponding to the three-channel transmissivity maps in Figure 1. It can be seen from Figures 1 and 2 that the transmissivities of the three channels of the original images vary to different degrees. Theoretically, the transmittance values of the three RGB channels should be the same, but they differ because the light energy attenuation rates of different wavelengths differ. A smaller difference among the three channel transmittances indicates a smaller difference in light attenuation, a lower fog concentration, and less color shift in the image; a larger difference indicates a greater difference in light attenuation, a higher fog concentration, and more color shift.

The transmissivity t of light is also called the residual energy ratio of light [19]. The ratio of the residual energy of a light beam with wavelength λ after propagating a distance d(x) to its initial energy is expressed as follows:

E(d, λ)/E(0, λ) = e^(−μ(λ)d(x)) = Nrer(λ)^(d(x)), (17)

where μ(λ) is the scattering coefficient of light with wavelength λ and Nrer(λ) is the standard residual energy ratio per unit distance. From (16) and (17),

t(λ, x) = Nrer(λ)^(d(x)). (18)

3. Visibility Detection Based on Residual Energy Ratio

3.1. Rough Estimation of Fog Concentration

In the transmissivity map, the higher the fog concentration in the original image, the lower the transmissivity and the lower the visibility. If there is no fog or the fog concentration is low, the transmissivity is higher and the visibility is greater. According to [18], the visual effect is generally considered good when the visibility is greater than 1 km. First, He's guided filtering algorithm [20] is used to obtain the transmittance map of each image in Figure 3.

Let γ be the proportion of pixels whose transmittance is less than 0.5. According to the calculation results, the visibility is roughly divided at 1 km. When γ is more than 30%, the image can be considered to have low transmittance, high fog concentration, and low visibility; in practical applications, such an image needs to be defogged to improve its visibility. When γ is less than 30%, the image can be considered to have high transmittance, low fog concentration, and high visibility. According to the calculations for the images shown in Figure 4, the values of γ in images (a) and (b) are less than 30%, and the visibility is greater than 1 km, whereas the values of γ in images (c) and (d) are greater than 30%, and the visibility is less than 1 km (Table 1). The calculation results are consistent with human visual judgment.


Figure 4    γ (%)    Visibility (km)

(a)         3.06     >1
(b)         16.31    >1
(c)         47.20    <1
(d)         63.86    <1
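The rough classification above can be sketched directly from a transmittance map (the 0.5 and 30% thresholds are taken from the text; the function name is illustrative):

```python
import numpy as np

def rough_fog_estimate(t_map, t_thresh=0.5, gamma_thresh=0.30):
    """gamma = proportion of pixels with transmittance below t_thresh.
    gamma > 30% -> high fog concentration, visibility likely below 1 km."""
    gamma = float(np.mean(t_map < t_thresh))
    low_visibility = gamma > gamma_thresh
    return gamma, low_visibility
```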

3.2. Residual Energy Ratio Detection Visibility

The residual energy ratio of light of different wavelengths after the propagation distance d(x) is different. Under a given visibility, the residual energy ratio of the three color channels per unit propagation distance is constant [16]. Taking 1 km as an example, the residual energy of the three channels in the visibility range of 0–6 km is shown in Figure 5.

It can be seen from Figure 5 that, under the same visibility, after light has attenuated over 1 km, the residual energy ratios of the three channels differ: channel R retains the most residual energy, channel G less than channel R, and channel B the least. With increasing propagation distance, the residual energy ratio of each channel decreases, and the differences among the three channels grow. Therefore, we calculate the differences between the residual energy ratios of the three channels, with 1 km as the unit distance, under the same visibility; the calculation results are shown in Figure 6.

It can be seen that the difference between channels R and B is the largest and that between channels G and B is the smallest. Taking 1 km as the dividing line, each difference value on each curve corresponds to a visibility. Because the blue channel attenuates fastest, to avoid large errors, the value of (R − G)/G is selected, and a more accurate image visibility can be calculated by combining it with the fog judgment of Section 3.1.

3.3. Filter Pixels

In an outdoor image, some pixels may have an excessively large value in one of the three channels, such as green trees or red bricks. The other two color channels of these pixels have relatively low values and will not change significantly after attenuation. To improve the accuracy of the visibility estimate, these pixels are removed. Through experiments, the normalized removal threshold σ is selected as a function of γ, the proportion of pixels with transmittance below 0.5. When the gray value of at least one of the three channels of a pixel exceeds σ, the pixel is filtered out. The images are processed accordingly, and the results are shown in Figure 7: black pixels are the filtered pixels, and the white areas are the pixels retained for calculation. This method effectively removes the pixels with the greatest influence and, at the same time, reduces the amount of computation.
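The filtering step can be sketched as a boolean mask (a sketch only: the paper derives σ from γ, but that formula is not reproduced here, so σ is passed in as a parameter):

```python
import numpy as np

def keep_mask(img, sigma):
    """True where a pixel is kept for the visibility estimate. Pixels where
    any channel's gray value exceeds sigma are filtered out (the black
    pixels in Figure 7). sigma is the normalized removal threshold."""
    return (img <= sigma).all(axis=2)
```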

4. Experimental Results and Evaluation Application

4.1. Visibility Measurement Results

We select images with different fog concentrations in Figure 8; the experimental results are shown in Table 2. The visibilities of Figures 8(a)–8(d) are more than 1 km, and those of Figures 8(e)–8(h) are less than 1 km. To improve the computation speed, pixels whose gray value exceeds σ in any channel are removed. Then, we calculate (R − G)/G of the remaining pixels and determine the corresponding visibility value V in Figure 8. The experimental results are consistent with the subjective judgment of the human eye.


Figure 8    γ (%)    (R − G)/G    V (km)    FADE

(a)         16.3     0.0213       2.79      0.4547
(b)         25.6     0.0247       2.05      0.4250
(c)         28.8     0.0173       4.76      0.4532
(d)         24.5     0.0259       1.88      1.0730
(e)         46.1     0.0164       0.64      1.3509
(f)         63.9     0.0303       0.82      1.4030
(g)         91.3     0.0311       0.84      1.5353
(h)         48.8     0.0363       0.95      1.5041

To verify the effectiveness of this algorithm for image visibility estimation, we compared it with the fog aware density evaluator (FADE) model [21], which is based on natural scene statistics and fog-aware statistical features. FADE evaluates fog density quantitatively; the lower the index value, the lower the fog density. The comparison results are also shown in Table 2. The visibilities of the images in Figures 8(a)–8(d) are higher than 1 km, and the corresponding FADE values are smaller, indicating lower fog concentrations. The visibilities of the images in Figures 8(e)–8(h) are lower than 1 km, and the corresponding FADE values are larger, indicating higher fog concentrations. However, for images of different scenes, the FADE values cannot reflect the fog concentrations without same-scene reference images, whereas the visibility estimated by our algorithm is consistent across different scenes.

To objectively evaluate the effect of several algorithms, we also compared Figures 8(e) and 8(f) using the peak signal-to-noise ratio (PSNR), information entropy (IE), effective detail intensity ratio (EDIR), histogram similarity (HS), and structural similarity (SSIM). The two images are restored by the dark channel, Tarel, guided filtering, and Chen algorithms. The comparison results are shown in Figure 9 and Table 3.


Figure 8    Fog removal algorithm    PSNR       IE        EDIR      HS        SSIM

Image e     Dark channel             11.9193    6.4788    0.2781    0.7901    0.8226
            Guided filtering         14.2131    6.6568    0.2772    0.9119    0.9054
            Tarel                    13.8469    7.0586    0.3288    0.8132    0.8475
            Chen                     14.8958    7.0114    0.3045    0.8985    0.9044

Image f     Dark channel             7.5822     6.9218    0.1479    0.7332    0.7269
            Guided filtering         10.6108    7.4774    0.1530    0.9276    0.9272
            Tarel                    15.2983    7.1450    0.1636    0.6182    0.6337
            Chen                     11.2834    7.5318    0.1514    0.9465    0.9438

From the objective evaluation data in Table 3, it can be seen that the PSNR and IE of Chen's algorithm are higher, which shows that the image quality after fog removal is better. The EDIR, HS, and SSIM values show that Chen's algorithm matches the original image features more closely and better preserves the original image structure.

Figure 10 shows four scenes in different weather conditions of the K277 + 749 down section of the Beijing–Kunshitai expressway. Table 4 compares the actual observed visibilities with those calculated by the PTZ algorithm [22], the curve response fitting algorithm [23], and our algorithm. It can be seen that the error between the actual observed values and the results of our algorithm is smaller. Therefore, our algorithm is suitable for visibility detection in fog.


           Actual observation value (km)    PTZ algorithm (km)    Curve response fitting algorithm (km)    Algorithm of this paper (km)

Scene 1    0.72                             0.80                  0.75                                     0.69
Scene 2    1.29                             1.41                  1.35                                     1.31
Scene 3    3.48                             3.08                  3.83                                     3.11
Scene 4    2.66                             2.86                  2.89                                     2.49

4.2. Using Visibility to Evaluate the Demisting Effect of Different Algorithms

The visibility of foggy images is low, and it is difficult to distinguish different scenes in the image; therefore, visibility can be used as an evaluation index of the effectiveness of image defogging algorithms. Our algorithm can calculate the visibility of images processed by different algorithms, so it can also be used to evaluate their effectiveness. The images restored by the dark channel prior algorithm, the guided filter algorithm, the DEFADE algorithm [21], and the Meng algorithm [24] are used for visibility estimation, and the comparison results are shown in Figure 11 and Table 5. Figure 11(a) shows the foggy images; the three images from top to bottom are image 1, image 2, and image 3. Figures 11(b)–11(e) show the results of the dark channel prior, guided filter, DEFADE, and Meng algorithms.


Figure 11                        Image 1    Image 2    Image 3

Foggy image        Visibility    0.79 km    0.66 km    0.82 km
                   FADE          1.8071     1.3697     3.0818

Dark channel       Visibility    1.48 km    2.67 km    4.00 km
                   FADE          0.7538     0.2330     0.6566

Guided filtering   Visibility    1.13 km    1.53 km    3.39 km
                   FADE          0.7491     0.4742     1.1067

DEFADE             Visibility    2.35 km    2.45 km    2.17 km
                   FADE          0.4934     0.2399     0.8394

Meng               Visibility    1.57 km    3.39 km    2.12 km
                   FADE          0.2221     0.2090     0.5498

In Table 5, the results of the visibility calculation are consistent with the FADE fog concentration results: the smaller the FADE value, the higher the visibility. From the evaluation data in Table 5, it can be seen that the visibilities of the foggy images in Figure 11 are less than 1 km, and the visibility detection values after defogging by the different algorithms are greater than 1 km. For example, the visibility of image 1 in the foggy state is 0.79 km, while the values are 1.48 km, 1.13 km, 2.35 km, and 1.57 km after defogging by the dark channel, guided filter, DEFADE, and Meng algorithms, respectively. In terms of defogging effect alone, the visibility of every defogged image is improved to some degree compared with the original image, which shows that all the algorithms have a defogging effect. For image 2, the visibility of the Meng algorithm is higher than that of the other algorithms, and its FADE value is also the smallest. For image 3, there is a block effect in the defogging result of Meng, and the DEFADE algorithm shows a better visual effect. The FADE values show that the fog removal effect of the Meng algorithm is good; however, when the atmospheric light is selected manually through human–computer interaction, an obvious halo effect appears in the sky area.

We also compared the visibility and FADE values with some neural network algorithms in Figure 12 and Table 6. Figure 12(a) shows the foggy images; the three images from top to bottom are image 4, image 5, and image 6. Figures 12(b)–12(e) show the results of the MSCNN algorithm [25], the AOD algorithm [26], the GFN algorithm [27], and the FFA algorithm [9].


Figure 12              Image 4    Image 5    Image 6

Foggy image  Visibility    0.42 km    0.97 km    0.56 km
             FADE          0.9006     1.4379     0.9603

MSCNN        Visibility    1.02 km    1.99 km    1.87 km
             FADE          0.5332     0.5708     0.4286

AOD          Visibility    1.80 km    1.62 km    2.01 km
             FADE          0.3464     0.6485     0.3258

GFN          Visibility    1.76 km    1.31 km    2.07 km
             FADE          0.3635     0.8996     0.3209

FFA          Visibility    1.93 km    1.48 km    1.15 km
             FADE          0.2932     0.7708     0.6113

Table 6 also shows that the smaller the FADE value, the higher the visibility. From the evaluation data in Table 6, it can be seen that the visibilities of the foggy images in Figure 12 are less than 1 km, and the visibility detection values after defogging by the different neural network algorithms are greater than 1 km. For example, the visibility of image 4 in the foggy state is 0.42 km, while the values are 1.02 km, 1.80 km, 1.76 km, and 1.93 km after defogging by the MSCNN, AOD, GFN, and FFA algorithms, respectively. The FADE values of different neural network algorithms can only be compared on the same image; for different images, the FADE results cannot be compared, whereas the visibilities can. Therefore, the visibility calculated by our algorithm can be used as an evaluation index of the effectiveness of defogging algorithms.

4.3. Evaluation of the Recovery Effect of Defog Color Difference

Fog not only reduces the visibility of an image but also causes color shift. Theoretically, the three RGB channels should share the same transmittance value t, but the longer the wavelength, the slower the energy attenuation. Therefore, the smaller the difference among the three channel transmittances after defogging, the better the defogging effect and the color restoration. Our algorithm calculates and counts the distribution of transmittance differences between channels to evaluate the color offset restoration effect of different algorithms. We compared the restored images of the dark channel prior algorithm, the guided filter algorithm, the Meng algorithm [24], the DEFADE algorithm [21], the Tarel algorithm [28], and the Chen algorithm [16]. Figure 13(a) shows the foggy images; the four images from top to bottom are images 7 to 10.

For each of the four defogged images in Figure 13 obtained by the different algorithms, the three channels yield three transmittance maps. For each pair of channels, the absolute difference of the transmittance values is calculated pixel by pixel, and the proportion of pixels falling in each difference interval is computed. The smaller the differences and the larger the proportion of pixels in the lowest interval, the less the color deviation after defogging and the better the color restoration. Taking channels R and G of image 9 in Figure 13(a) as an example, the calculation results are shown in Figure 14 and Table 7. The first row in Figure 14 is channel R, and the second row is channel G. It can be seen that our method can effectively judge the fog removal effect.
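The per-interval proportions reported in Table 7 can be computed as a histogram over the per-pixel absolute transmittance differences (the interval edges below match the table's rows; the function name is illustrative):

```python
import numpy as np

def diff_proportions(t_a, t_b, edges=(0.0, 0.1, 0.2, 0.3, 0.5, 1.0)):
    """Proportion of pixels whose |t_a - t_b| falls in each interval,
    for a pair of per-channel transmittance maps such as R and G."""
    diff = np.abs(np.asarray(t_a, float) - np.asarray(t_b, float)).ravel()
    counts, _ = np.histogram(diff, bins=edges)
    return counts / diff.size
```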


Figure 13   Ratio of t difference of R–G channels   Dark channel (%)   Guided filtering (%)   Meng (%)   DEFADE (%)   Tarel (%)   Chen (%)

Image 7     0–0.1                                   94.33              93.53                  91.64      90.64        97.44       97.23
            0.1–0.2                                 5.66               6.42                   8.29       9.28         2.5         2.77
            0.2–0.3                                 0.01               0.05                   0.07       0.08         0.06        0
            0.3–0.5                                 0                  0                      0          0            0           0
            >0.5                                    0                  0                      0          0            0           0

Image 8     0–0.1                                   80.53              81.88                  84.28      86.70        78.42       96.61
            0.1–0.2                                 17.28              16.52                  13.54      12.95        21.41       3.39
            0.2–0.3                                 1.84               1.58                   2.18       0.35         0.17        0
            0.3–0.5                                 0.35               0.02                   0          0            0           0
            >0.5                                    0                  0                      0          0            0           0

Image 9     0–0.1                                   43.21              42.89                  58.26      31.43        14.53       67.79
            0.1–0.2                                 41.25              40.80                  29.91      41.29        57.04       24.22
            0.2–0.3                                 14.29              15.18                  11.23      16.47        20.57       7.78
            0.3–0.5                                 1.25               1.12                   0.61       0.68         7.83        0.21
            >0.5                                    0                  0.01                   0          0.12         0.03        0

Image 10    0–0.1                                   57.84              54.82                  48.48      57.12        42.17       60.58
            0.1–0.2                                 41.96              44.94                  38.94      41.96        49.83       38.99
            0.2–0.3                                 0.19               0.22                   12.54      0.19         8.00        0.43
            0.3–0.5                                 0                  0.01                   0.04       0            0           0
            >0.5                                    0                  0                      0          0            0           0

In Figure 14, after image 9 in Figure 13 is defogged by each algorithm, the transmittances of channels R and G differ visibly, and the differences are consistent with the calculation results in Table 7. The visibilities of images 7 and 8 are relatively high (2.55 km and 4.76 km), corresponding to mist. Therefore, the transmittance difference between channels R and G is relatively small, mostly concentrated between 0 and 0.1, and the differences among the algorithms are not obvious.

The visibilities of images 9 and 10 are relatively low (0.58 km and 0.64 km), and the light attenuations of different wavelengths differ considerably, so the proportion of points where the transmittance difference between channels R and G is large increases. The Chen algorithm [16] compensates the image according to the different attenuations of different wavelengths and effectively corrects the color offset while removing the fog, with a good visual effect.

4.4. Discussion of Experimental Results

When light of different wavelengths passes through fog, it attenuates differently; light of the same wavelength passing through fog of different concentrations also attenuates differently. The main innovation of this algorithm is to detect image visibility from the different energy attenuations of the three RGB channels under different fog concentrations. The visibility values detected by the three methods, namely, the visual method, the instrumental method, and the image processing method, may deviate from one another. There are limitations in the visibility range of Figure 6; beyond this range, methods such as reverse extension of the curves can be used. In the visibility calculation, we used the value of (R − G)/G; in future work, we will discuss the different results of the three curves.

We compared the data of several different defogging algorithms, including classic image algorithms and neural network algorithms. The results of our algorithm are consistent with those of FADE, although FADE often has better indicators but a worse visual effect. The results show that the smaller the FADE value, the higher the visibility. The experimental data show that the visibility values detected by our algorithm agree better with the subjective perception of the human eye. Using the transmittance differences between channels as an evaluation index of the color restoration of defogged images is simple in principle but feasible.

5. Conclusions

Many factors can cause color deviation, such as different wavelengths, different optical attenuations, and different visibilities. Under different visibilities, the intensity of dehazing should differ, and the same fog removal method cannot be used for images of different visibilities. We therefore proposed a visibility detection algorithm for a single fog image. The method is based on the measurement of transmissivity, the same principle as a transmissive visibility meter. Based on the dark channel prior and the attenuation of light wave energy, a preliminary judgment of haze weather conditions and a final measurement of the visibility distance are obtained. The experimental results show that the measurements are reliable. Follow-up work will further study the application of dynamic video visibility in real-time monitoring.

Data Availability

The figure data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors appreciate Dr. Yingpin Chen for helpful suggestions. This research was funded by Minnan Normal University (no. MSYJG 8) and Natural Science Foundation of Zhangzhou (no. ZZ2020J33).

References

  1. J. Ting, W. Bing, W. Zhao et al., “Relationships between low-level jet and low visibility associated with precipitation, air pollution, and fog in Tianjin,” Atmosphere, vol. 11, Article ID 1197, 2020.
  2. N. Hautière, J.-P. Tarel, J. Lavenant, and D. Aubert, “Automatic fog detection and estimation of visibility distance through use of an onboard camera,” Machine Vision and Applications, vol. 17, no. 1, pp. 8–20, 2006.
  3. J. Zhao, M. Han, and X. Xin, “Multi-mode detection techniques of video visibility based on improved dual differential luminance algorithm,” International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 9, pp. 147–158, 2016.
  4. J.-H. Kim, W.-D. Jang, J.-Y. Sim, and C.-S. Kim, “Optimized contrast enhancement for real-time image and video dehazing,” Journal of Visual Communication and Image Representation, vol. 24, no. 3, pp. 410–425, 2013.
  5. M. Negru and S. Nedevschi, “Image based fog detection and visibility estimation for driving assistance systems,” in Proceedings of the 2013 IEEE 9th International Conference on Intelligent Computer Communication and Processing (ICCP), pp. 163–168, Cluj-Napoca, Romania, September 2013.
  6. J. Mao, U. Phommasak, S. Watanabe et al., “Detecting foggy images and estimating the haze degree factor,” Journal of Computer Science & Systems Biology, vol. 7, no. 6, pp. 226–228, 2014.
  7. Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522–3533, 2015.
  8. Z. Ma, J. Wen, C. Zhang, Q. Liu, and D. Yan, “An effective fusion defogging approach for single sea fog image,” Neurocomputing, vol. 173, pp. 1257–1267, 2016.
  9. X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, “FFA-Net: feature fusion attention network for single image dehazing,” in Proceedings of the AAAI Conference on Artificial Intelligence, pp. 11908–11915, New York, NY, USA, February 2020.
  10. B. Li, W. Ren, D. Fu et al., “Benchmarking single-image dehazing and beyond,” IEEE Transactions on Image Processing, vol. 28, pp. 492–505, 2018.
  11. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
  12. W. Wang, X. Yuan, X. Wu, and Y. Liu, “Dehazing for images with large sky region,” Neurocomputing, vol. 238, pp. 365–376, 2017.
  13. R. Tan, “Visibility in bad weather from a single image,” in Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 24–26, Anchorage, AK, USA, June 2008.
  14. M. M. Kugeiko, “Spectral nephelometric method for the determination of the meteorological optical range,” Journal of Optical Technology, vol. 87, no. 8, pp. 491–494, 2020.
  15. G. S. Smith, “Human color vision and the unsaturated blue color of the daytime sky,” American Journal of Physics, vol. 73, no. 7, pp. 590–597, 2005.
  16. Z. Chen, B. Ou, and Q. Tian, “An improved dark channel prior image defogging algorithm based on wavelength compensation,” Earth Science Informatics, vol. 12, no. 4, pp. 1–12, 2019.
  17. P. Kruse and R. Mcquistan, Elements of Infrared Technology: Generation, Transmission and Detection, John Wiley, New York, NY, USA, 1962.
  18. I. I. Kim, B. Mcarthur, and E. J. Korevaar, “Comparison of laser beam propagation at 785 nm and 1550 nm in fog and haze for optical wireless communications,” Optical Wireless Communications, vol. 4214, no. 2, pp. 26–37, 2001.
  19. J. Houghton, The Physics of Atmospheres, Cambridge University Press, Cambridge, UK, 2002.
  20. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
  21. L. K. Choi, J. You, and A. C. Bovik, “Referenceless prediction of perceptual fog density and perceptual image defogging,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3888–3901, 2015.
  22. Z. Chen, “PTZ visibility detection based on image luminance changing tendency,” in Proceedings of the 2016 International Conference on Optoelectronics and Image Processing, pp. 15–19, Warsaw, Poland, June 2016.
  23. X. Zhang, Z. Guo, X. Li et al., “A research on the detection of fog visibility,” in Proceedings of the 2020 Artificial Intelligence and Security 6th International Conference, pp. 430–440, Hohhot, China, July 2020.
  24. G. Meng, Y. Wang, J. Duan et al., “Efficient image dehazing with boundary constraint and contextual regularization,” in Proceedings of the 2013 IEEE International Conference on Computer Vision, pp. 617–624, Sydney, Australia, December 2013.
  25. W. Ren, S. Liu, H. Zhang et al., “Single image dehazing via multi-scale convolutional neural networks,” in Proceedings of the European Conference on Computer Vision, pp. 154–169, Amsterdam, Netherlands, September 2016.
  26. B. Li, X. Peng, Z. Wang et al., “An all-in-one network for dehazing and beyond,” Journal of Latex Class Files, vol. 14, no. 8, pp. 1–12, 2015.
  27. W. Ren, L. Ma, J. Zhang et al., “Gated fusion network for single image dehazing,” in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3253–3261, Salt Lake City, UT, USA, June 2018.
  28. J.-P. Tarel and N. Hautière, “Fast visibility restoration from a single color or gray level image,” in Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, pp. 2201–2208, Kyoto, Japan, September 2009.

Copyright © 2021 Zhixiang Chen and Binna Ou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
