Mathematical Problems in Engineering

Special Issue: Meta-Heuristic Techniques for Solving Computational Engineering Problems 2021

Review Article | Open Access

Vinay Kehar, Vinay Chopra, Bhupesh Kumar Singh, Shailendra Tiwari, "Efficient Single Image Dehazing Model Using Metaheuristics-Based Brightness Channel Prior", Mathematical Problems in Engineering, vol. 2021, Article ID 5584464, 12 pages, 2021. https://doi.org/10.1155/2021/5584464

Efficient Single Image Dehazing Model Using Metaheuristics-Based Brightness Channel Prior

Academic Editor: Hassène Gritli
Received: 23 Feb 2021
Revised: 28 Mar 2021
Accepted: 23 Apr 2021
Published: 08 May 2021

Abstract

Haze degrades the spatial and spectral information of outdoor images and may reduce the performance of existing imaging models. Therefore, various visibility restoration approaches have been designed to remove haze from still images, but haze removal remains an open area of research. Although the existing approaches perform significantly better, they are not so effective against a large haze gradient. The hyperparameter tuning issue is also ignored. Therefore, a brightness channel prior (BCP) based dehazing model is proposed. A gradient filter is utilized to refine the transmission map computed using the BCP. The Nondominated Sorting Genetic Algorithm (NSGA) is also used to optimize the initial parameters of the BCP approach. The comparative analysis shows that BCP performs effectively across a wide range of haze degradation levels without causing visible artifacts.

1. Introduction

Images captured in poor environmental conditions, such as haze, fog, and smog, suffer from poor visibility [1, 2]. Haze attenuates the scene radiance in proportion to an object's distance from the camera [3, 4]. The haze imaging model is defined as a linear per-pixel combination of the original scene radiance and an airlight [5, 6].
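The linear per-pixel haze model can be sketched numerically as follows; the array shapes and values are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

def apply_haze(J, t, A):
    """Synthesize a hazy image with the linear per-pixel haze model
    I(x) = J(x) * t(x) + A * (1 - t(x)):
    J is the scene radiance, t the transmission, A the global airlight."""
    t = t[..., np.newaxis]            # broadcast transmission over color channels
    return J * t + A * (1.0 - t)

# Illustrative values: a mid-grey scene half attenuated toward a bright airlight.
J = np.full((2, 2, 3), 0.4)           # scene radiance in [0, 1]
t = np.full((2, 2), 0.5)              # transmission map
A = np.array([0.9, 0.9, 0.9])         # airlight
I = apply_haze(J, t, A)               # each pixel: 0.4 * 0.5 + 0.9 * 0.5 = 0.65
```

As the transmission decreases with scene depth, each pixel is pulled toward the airlight, which is exactly the attenuation behavior described above.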

Various multiple-image based haze restoration approaches have been implemented [7]. These approaches require prior knowledge of the physical characteristics of the input images [8–10]. But in real life, no physical attributes of the input images are available in advance [11, 12].

Many techniques have been designed in the literature to remove haze from still images. Oakley and Satherley [13] designed a physical model to restore weather-degraded images. The depth map and atmospheric veil were estimated to remove the visibility degradation from weather-degraded images. It discovers the law of weather-degraded image formation by considering the visual manifestations under various environmental circumstances [14]. Due to the extensive computational complexity of the physical model, He et al. [15] implemented a novel channel prior, the dark channel prior (DCP). It assumes that, for an image taken in a sunny environment, the intensity of at least one color channel approaches zero. However, DCP suffers from a number of problems such as sky-region, halo, and gradient-reversal artifacts, and color, edge, and texture distortion issues [16]. Recently, researchers have proposed various channel priors to handle the issues associated with the standard DCP, such as the boosting dark channel [17], bounded optimization-based dark channel prior [18], gradient channel prior [19], adaptive bichannel priors [20], sparse dark channel prior [21], and dark channel prior guided variational framework [22]. However, the existing methods perform poorly, especially when images contain a large haze gradient, and most of them suffer from texture distortion issues [23, 24].

Guo et al. [25] proposed a fusion model to restore foggy images. It has shown significantly better edge and color preservation. Yoon [26] implemented a variational minimization based haze restoration model. However, both [25, 26] are computationally expensive [27].

The primary goal of this research work is to overcome dehazing artifacts and to preserve the significant information of restored images. The brightness channel prior (BCP) is used to obtain the physical attributes of hazy images. A gradient filter is used to refine the transmission map computed using the BCP. To optimize the initial parameters of BCP, the Nondominated Sorting Genetic Algorithm (NSGA) is utilized. A comparative analysis is also drawn to evaluate the performance of the NSGA based BCP model.

The rest of the paper is organized as follows: Section 2 discusses the related work. The proposed model is presented in Section 3. The comparative analysis is presented in Section 4. Section 5 concludes the proposed haze restoration model.

2. Related Work

Luan et al. [28] proposed a restoration model using a regression model; support vector regression is used for learning it. Jiang et al. [20] proposed an adaptive bichannel prior on superpixels for removing haze from a single image. The superpixels are used as local regions to estimate the atmospheric light and transmission map by combining bright and dark channel priors. Liu et al. [29] utilized a multiscale correlated wavelet approach for restoring weather-degraded images. In the multiscale wavelet decomposition, it was found that haze mainly presents in the low-frequency spectrum. To remove the haze effect, an open dark channel model (ODCM) is used.

Nair and Sankaran [14] implemented a haze removal approach using a dark channel prior and a surround filter. The approach has low computational complexity because it relies on simple convolution; the surround filter minimizes the memory requirements and enhances the speed of transmission estimation. Wu et al. [30] proposed a restoration model for UAV-based railway images using a densely pyramidal residual network (DPRnet), whose loss function is designed to preserve structural information. Shu et al. [31] used a multichannel total variation (MTV) regularizer to restore hazy images; the alternating direction method of multipliers is utilized for the nonsmooth optimization problem.

Hodges et al. [32] developed a deep learning-based restoration model (DLR) for weather-degraded images, in which a deep network is trained using unmatched images. Zhang et al. [33] addressed the bright distortion caused by DCP. To eliminate the bright distortion, four parameters, namely, mean square error (MSE), mean gradient, program running time, and peak signal-to-noise ratio (PSNR), are evaluated optimally. A logarithmic enhancement approach is used as the optimization approach.

Emberton et al. [34] used haze region segmentation to remove haze from images. To address the problem of spectral distortion, a semantic white balancing approach is applied. Guo et al. [35] utilized a deep convolutional network (DCN) to remove haze from images. Five maps are derived from the original hazy image: the saliency map and exposure map are used to focus on near-region scenes; the gamma correction map and white balance map are applied to recover the intensity and latent color components of the scene; and the haze veil map is used to enhance the global image contrast.

Alajarmeh et al. [36] proposed an image restoration model based on constant-time airlight and linear transmission (CLT). Two approaches are used: airlight estimation by image integrals, and bounded transmission to estimate the linear transmission maps. Gao et al. [37] proposed a dual-fusion approach (DFT) to restore hazy images. A segmentation approach is used to divide the image into sky and nonsky regions, and a multiregion fusion approach is used to optimally evaluate the transmission map.

From the above review, it has been observed that hyperparameter tuning of dehazing models can achieve better results [38, 39]. In this paper, we have considered various metaheuristic techniques such as the Multiobjective Harris Hawks Optimizer (MOHO) [40], Multiobjective Extremal Optimization (MOEO) [41], Multiobjective Particle Swarm Optimization (MOPSO) [42], Multiobjective Grey Wolf Optimizer (MOGWO) [43], Multiobjective Whale Optimization (MOWO) [44], and Nondominated Sorting Genetic Algorithm (NSGA)-III [45]. All these approaches are utilized in this paper to tune the initial parameters of BCP by designing a multiobjective fitness function.

3. Proposed Brightness Channel Prior-Based Dehazing Approach

This section discusses the designed dehazing model. Figure 1 demonstrates the overall flow of the designed dehazing model.

3.1. Depth Map Estimation

In the first step, BCP is defined to approximate the depth map of a hazy image $I$ as

$I^{\mathrm{bright}}(x) = \max_{y \in \Omega(x)} \big( \max_{c \in \{r,g,b\}} I^{c}(y) \big),$

where $\Omega(x)$ defines the local patch centered at pixel $x$, $I^{\mathrm{bright}}$ shows the BCP, and $I^{c}$ defines the color channels of $I$.
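The brightness channel (per-pixel maximum over color channels, followed by a local-patch maximum) can be sketched as follows; the default patch size of 15 is an assumed value, not one stated in the paper:

```python
import numpy as np

def brightness_channel(I, patch=15):
    """Brightness channel of an RGB image I (H x W x 3 floats): for each pixel,
    the maximum intensity over all color channels within a local patch.
    patch=15 is an assumed default, not a value taken from the paper."""
    h, w, _ = I.shape
    per_pixel_max = I.max(axis=2)                 # max over the color channels
    r = patch // 2
    padded = np.pad(per_pixel_max, r, mode="edge")
    out = np.empty((h, w))
    for y in range(h):                            # local-patch maximum filter
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].max()
    return out
```

A single bright pixel therefore raises the brightness channel over the whole patch neighborhood around it.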

3.2. Atmospheric Veil

Atmospheric veil ($V$) is then estimated as [15]

$V(x) = A \, (1 - t(x)),$

where $A$ denotes the global atmospheric light and $t$ the transmission.

3.3. Transmission Map

Transmission map ($t$) is then evaluated by using the computed BCP as

$t(x) = \dfrac{I^{\mathrm{bright}}(x) - A}{1 - A}.$

3.4. Coarse Atmospheric Veil

The coarse atmospheric veil ($\tilde{V}$) is then computed as [15]

$\tilde{V}(x) = \min_{y \in \Omega(x)} \big( \min_{c \in \{r,g,b\}} I^{c}(y) \big).$

In this paper, a gradient filter $G_{\sigma}$ is used to improve $\tilde{V}$ as

$V(x) = (G_{\sigma} * \tilde{V})(x),$

where $\sigma$ defines the standard deviation of the filter kernel.
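A minimal sketch of the refinement step, assuming a separable Gaussian smoothing kernel as a stand-in for the filter (the paper's exact gradient filter is not reproduced in this text); sigma is the kernel standard deviation:

```python
import numpy as np

def refine_veil(V, sigma=2.0):
    """Smooth the coarse atmospheric veil V with a separable Gaussian kernel.
    The Gaussian is an assumed stand-in for the paper's gradient filter;
    sigma is the standard deviation of the kernel."""
    r = max(1, int(3 * sigma))
    xs = np.arange(-r, r + 1)
    k = np.exp(-xs ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()                                  # normalize the kernel
    Vp = np.pad(V, r, mode="edge")
    # separable convolution: filter rows, then columns
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, Vp)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)
```

Smoothing the coarse veil suppresses the blockiness introduced by the patchwise minimum, which is the role the refinement step plays in the pipeline.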

3.5. Restoration Model

In the last step of BCP, a haze-free image ($J$) can be evaluated as

$J(x) = \dfrac{I(x) - A}{\max(t(x), t_{0})} + A,$

where $t_{0}$ is a small lower bound on the transmission.
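The recovery step can be sketched by inverting the haze model; the lower bound t0 = 0.1 follows common dark-channel practice and is an assumption of this sketch:

```python
import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """Invert the haze model to recover the scene radiance:
    J(x) = (I(x) - A) / max(t(x), t0) + A.
    The lower bound t0 keeps dense-haze pixels from amplifying noise;
    t0 = 0.1 is an assumed value following common dark-channel practice."""
    t = np.maximum(t, t0)[..., np.newaxis]        # broadcast over color channels
    return (I - A) / t + A
```

Synthesizing haze with known transmission and airlight and then applying this function recovers the original radiance wherever the transmission exceeds t0.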

3.6. Optimization of Initial Parameters of BCP

To optimize the initial parameters of BCP, NSGA [46] is used. Flowchart of NSGA based BCP is depicted in Figure 2.

NSGA-III [45] has been extensively utilized to solve many computationally complex problems. It is preferred over the existing multiobjective optimization approaches because it has good convergence speed and does not suffer from premature convergence issues [47–49]. It utilizes nondominated sorting to sort the solutions into nondomination fronts. Table 1 shows the nomenclature of NSGA-III.
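Nondominated sorting, the core operation NSGA-III relies on, can be sketched as follows for a minimization problem:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Fast nondominated sorting: split solutions into fronts, where front 0
    holds the nondominated set, front 1 those dominated only by front 0, etc.
    Returns a list of fronts, each a list of indices into `points`."""
    n = len(points)
    dominated = [[] for _ in range(n)]   # indices each solution dominates
    counts = [0] * n                     # how many solutions dominate each index
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(points[p], points[q]):
                dominated[p].append(q)
            elif dominates(points[q], points[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]                   # drop the trailing empty front
```

For example, `nondominated_sort([(1, 1), (2, 2), (1, 2), (3, 1)])` places the all-round best point alone in front 0, the two mutually nondominated points in front 1, and the dominated point in front 2.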


Symbol | Meaning

… | Random set of solutions
… | Group of …
… | Random variable
… | Permutation vector
… | Encoding
… | Elite population
… | Optimal BCP
… | Optimal parameters
… | Binary decision vector

Algorithm 1 demonstrates the initial population generation of the NSGA-III based BCP. Initially, a random population is computed. The obtained solutions are then encoded to the range of the hyperparameters of BCP. Algorithm 2 demonstrates the working of the proposed NSGA-III based BCP. New visible edges, saturated pixels, and new edge gradients are used as the fitness functions. Dominated and nondominated solutions are then computed. Crossover and mutation operators are then used to compute the child solutions. Nondominated sorting is then implemented using the dominance relation. Finally, when the termination criterion is achieved, the optimal initial parameters of BCP are returned.

Optimal BCP
Optimal parameters
while … do
  Assume BCP with maximum performance
  if … then
    …
  else
    …
  end if
end while
Select a random group of solutions from … utilizing a normal distribution
Obtain a group of random solutions
return …
Select random solutions from the given elite population
for all … do
  Decode … as hyperparameters of BCP
  for … to … do
    Obtain a random solution in …
    if … then
      …
    end if
  end for
  if … then
    …
  end if
  for … to … do
    Select randomly an …
    if … then
      …
    else
      …
    end if
    if … then
      …
    end if
  end for
end for
if … then
  Select solutions obtained from NSGA-III
end if

Decompose a random individual into the hyperparameters of BCP.
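The overall tuning loop can be illustrated with a deliberately simplified, single-objective evolutionary sketch (NSGA-III's reference-point machinery and multiobjective selection are omitted); the fitness function and parameter bounds are placeholders, not quantities from the paper:

```python
import random

def tune_params(fitness, bounds, pop_size=20, generations=30):
    """Deliberately simplified evolutionary loop for hyperparameter tuning:
    a single-objective stand-in for NSGA-III. `fitness` maps a parameter
    vector to a score to maximize; `bounds` gives (low, high) per parameter.
    Both are placeholders, not quantities defined by the paper."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]     # arithmetic crossover
            k = random.randrange(dim)                         # mutate one gene
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + random.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In the full method, the scalar score would be replaced by the multiobjective fitness (new visible edges, saturated pixels, new edge gradients) and elitist selection by nondominated sorting.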

4. Performance Analysis

The performance of the NSGA-BCP dehazing model is evaluated on a 2.66 GHz processor with 16 GB RAM. MATLAB is used to perform the experiments. The patch size is selected as … pixels. Seven well-known dehazing approaches are used for the comparative analyses: DCP [50], CTT [51], CNN [52], WT [53], FVID [54], norm [55], and TGV [56]. Fifteen benchmark synthetic and real-life hazy images are considered for the experimental analysis.

4.1. Visual Analyses of Proposed Dehazing Model

Figures 3–5 show the visual analysis. Restored images obtained from DCP [15] and CTT [51] show distortion of edge and texture information; both also suffer from the sky-region problem. CNN [52] and TGV [56] provide better results, but some texture distortion remains. WT [53], norm [55], and FVID [54] suffer from halo and gradient-reversal artifacts. The NSGA-BCP based restored images preserve the edge, texture, and color details efficiently.

4.2. Quantitative Analyses of Proposed Dehazing Model

The proposed model is compared with the competitive models by considering various performance metrics: percentage of saturated pixels, haze gradient, contrast gain (CG), visible edges, execution time, structural similarity index metric (SSIM), and peak signal-to-noise ratio (PSNR). Bold values in the tables represent the best performance.
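As an illustration, contrast gain can be computed as the increase in mean local contrast after dehazing; approximating local contrast by the mean absolute gradient is an assumption of this sketch, since the metric's exact definition is not reproduced here:

```python
import numpy as np

def contrast_gain(hazy, dehazed):
    """Contrast gain: increase in mean local contrast after dehazing.
    Local contrast is approximated here by the mean absolute image gradient,
    which is one common formulation and an assumption of this sketch."""
    def mean_contrast(img):
        gy, gx = np.gradient(img.astype(float))
        return float(np.mean(np.abs(gx) + np.abs(gy)))
    return mean_contrast(dehazed) - mean_contrast(hazy)
```

A flat (haze-washed) input compared against a restored image with visible intensity variation yields a positive contrast gain, matching the interpretation that higher CG indicates better restoration.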

Table 2 demonstrates the CG analysis. It is found that the proposed NSGA-BCP based dehazing model achieves higher CG values than the competitive dehazing approaches. The overall CG analysis shows that the proposed model achieves an average CG of 1.9745, compared to a maximum average CG of 1.8932 among the existing dehazing models.


Img. | DCP | FVID | WT | TGV | CTT | CNN | norm | NSGA-BCP

1 | 1.8745 | 1.8432 | 1.7745 | 1.8564 | 1.8745 | 1.8654 | 1.8856 | 1.9631
2 | 1.8756 | 1.8245 | 1.8754 | 1.8923 | 1.8156 | 1.7923 | 1.8125 | 1.9149
3 | 1.8749 | 1.7932 | 1.8830 | 1.8623 | 1.8923 | 1.7158 | 1.8856 | 1.9198
4 | 1.8752 | 1.8584 | 1.8523 | 1.8864 | 1.8625 | 1.8523 | 1.8456 | 1.9158
5 | 1.8285 | 1.8546 | 1.8524 | 1.8423 | 1.8518 | 1.8523 | 1.8523 | 1.9125
6 | 1.8823 | 1.8569 | 1.8569 | 1.8654 | 1.8853 | 1.8597 | 1.8829 | 1.9065
7 | 1.7926 | 1.7957 | 1.7859 | 1.7745 | 1.7852 | 1.8746 | 1.8489 | 1.9875
8 | 1.8531 | 1.8547 | 1.8687 | 1.8741 | 1.8568 | 1.8741 | 1.8847 | 1.9746
9 | 1.8741 | 1.8654 | 1.8852 | 1.8159 | 1.8547 | 1.8624 | 1.8357 | 1.9025
10 | 1.7788 | 1.8759 | 1.8748 | 1.8475 | 1.8897 | 1.8547 | 1.8749 | 1.9568
11 | 1.8956 | 1.8983 | 1.8642 | 1.8554 | 1.8759 | 1.7982 | 1.8369 | 1.9546
12 | 1.8412 | 1.8547 | 1.8963 | 1.8858 | 1.8859 | 1.8654 | 1.8521 | 1.9214
13 | 1.8624 | 1.8512 | 1.8456 | 1.8258 | 1.8258 | 1.8245 | 1.8014 | 1.9078
14 | 1.8214 | 1.8852 | 1.8369 | 1.8147 | 1.8325 | 1.8512 | 1.8521 | 1.9756
15 | 1.8412 | 1.8654 | 1.8647 | 1.8025 | 1.8745 | 1.8742 | 1.8854 | 1.9789

Table 3 reveals that the proposed NSGA-BCP based dehazing model achieves the minimum saturated-pixel values compared to the competitive dehazing models. The overall analysis shows that the proposed model achieves a lower average than the minimum average of 0.0463 obtained by the existing dehazing models.


Img. | DCP | FVID | WT | TGV | CTT | CNN | norm | NSGA-BCP

1 | 0.0568 | 0.0745 | 0.0456 | 0.0235 | 0.0547 | 0.0745 | 0.0478 | 0.0188
2 | 0.0958 | 0.0456 | 0.0745 | 0.0766 | 0.0456 | 0.0258 | 0.0178 | 0.0115
3 | 0.0741 | 0.1258 | 0.2456 | 0.1854 | 0.2324 | 0.1478 | 0.1147 | 0.0231
4 | 0.0514 | 0.1478 | 0.2149 | 0.2147 | 0.2759 | 0.2148 | 0.1789 | 0.0148
5 | 0.0148 | 0.1148 | 0.2148 | 0.1258 | 0.2185 | 0.1446 | 0.1258 | 0.0354
6 | 0.0459 | 0.1748 | 0.2479 | 0.2654 | 0.1412 | 0.2128 | 0.1741 | 0.0413
7 | 0.0147 | 0.2459 | 0.2147 | 0.2852 | 0.1743 | 0.2547 | 0.1423 | 0.0521
8 | 0.0413 | 0.2569 | 0.1312 | 0.1789 | 0.2147 | 0.2321 | 0.1426 | 0.0214
9 | 0.0874 | 0.2652 | 0.2741 | 0.1123 | 0.1258 | 0.2654 | 0.2742 | 0.0428
10 | 0.0742 | 0.1589 | 0.2742 | 0.2842 | 0.2742 | 0.2459 | 0.1424 | 0.0561
11 | 0.0759 | 0.1258 | 0.1512 | 0.1742 | 0.1358 | 0.2847 | 0.1426 | 0.0570
12 | 0.0456 | 0.2348 | 0.2428 | 0.1759 | 0.1783 | 0.1524 | 0.1478 | 0.0248
13 | 0.0648 | 0.1621 | 0.1759 | 0.1247 | 0.1897 | 0.2358 | 0.2478 | 0.0480
14 | 0.0587 | 0.1248 | 0.1324 | 0.2348 | 0.1214 | 0.1856 | 0.2574 | 0.0587
15 | 0.0654 | 0.2412 | 0.1324 | 0.1214 | 0.1225 | 0.1247 | 0.2415 | 0.0485

Tables 4 and 5 demonstrate that the proposed NSGA-BCP based dehazing model achieves better new visible edges (NVE) and average gradient (AG) values than the existing approaches, whose maximum average NVE and AG are 2.7367 and 2.7363, respectively.


Img. | DCP | FVID | WT | TGV | CTT | CNN | norm | NSGA-BCP

1 | 2.0076 | 2.6048 | 1.7726 | 2.1789 | 2.5778 | 1.9857 | 1.9758 | 2.9301
2 | 2.5187 | 2.8321 | 2.6789 | 2.7412 | 2.6546 | 2.1456 | 2.2325 | 3.0547
3 | 2.0324 | 2.2456 | 2.5154 | 2.6254 | 2.1789 | 2.2325 | 2.7471 | 3.0526
4 | 2.7326 | 1.8654 | 2.2258 | 1.7452 | 1.9857 | 1.8325 | 2.2402 | 3.0289
5 | 2.3507 | 2.3845 | 2.3245 | 2.0452 | 1.9245 | 2.6325 | 1.7654 | 2.9875
6 | 2.5725 | 2.7524 | 2.6589 | 1.8856 | 2.5146 | 2.0708 | 2.2301 | 2.9841
7 | 2.7456 | 2.6772 | 2.5546 | 1.8852 | 2.1265 | 1.856 | 2.5758 | 2.9854
8 | 2.1759 | 2.3478 | 2.3452 | 2.6254 | 2.8125 | 2.0088 | 2.1852 | 2.9742
9 | 1.9824 | 1.9654 | 2.3189 | 1.9248 | 2.3741 | 2.2544 | 2.1245 | 2.9547
10 | 2.3256 | 2.2145 | 2.2147 | 2.2485 | 1.9874 | 1.9856 | 2.0221 | 2.7456
11 | 2.5156 | 2.4985 | 1.8567 | 2.7998 | 2.8857 | 1.8742 | 1.8987 | 3.2148
12 | 2.4987 | 2.1952 | 2.5231 | 2.0324 | 2.1456 | 1.7987 | 2.1347 | 2.8534
13 | 2.5641 | 1.7852 | 1.9472 | 2.4652 | 2.3873 | 2.7814 | 2.6932 | 2.9856
14 | 2.2632 | 2.5214 | 2.1369 | 1.7963 | 2.5214 | 2.2266 | 2.6742 | 2.9872
15 | 2.4321 | 2.7421 | 2.7981 | 2.0325 | 2.2354 | 2.7561 | 2.5321 | 3.1456


Img. | DCP | FVID | WT | TGV | CTT | CNN | norm | NSGA-BCP

1 | 2.1963 | 1.9856 | 2.1854 | 2.8624 | 2.6541 | 2.7896 | 2.8561 | 3.1856
2 | 2.1452 | 2.1956 | 2.1852 | 2.1826 | 1.8452 | 2.0578 | 2.0436 | 2.5234
3 | 2.2756 | 2.4536 | 2.7852 | 2.5345 | 2.0175 | 2.6523 | 2.1102 | 3.1234
4 | 2.3245 | 2.3157 | 2.8563 | 2.6124 | 2.1475 | 2.7853 | 2.1632 | 3.0725
5 | 2.5234 | 1.8242 | 2.8213 | 1.7894 | 2.5632 | 2.7753 | 2.6321 | 3.1456
6 | 1.9354 | 2.4536 | 2.4369 | 2.8874 | 1.9632 | 1.8789 | 1.9653 | 3.1156
7 | 2.8524 | 2.8456 | 2.4951 | 2.1654 | 2.3956 | 1.8632 | 2.5634 | 3.1561
8 | 2.0085 | 2.3165 | 1.8632 | 2.4214 | 2.1532 | 1.8997 | 2.6547 | 2.8987
9 | 2.2187 | 2.5845 | 2.1475 | 2.5817 | 2.2985 | 2.4123 | 2.7456 | 2.8562
10 | 2.1563 | 2.5521 | 2.7165 | 2.1254 | 2.5423 | 2.6756 | 2.4457 | 2.8177
11 | 2.4852 | 2.2881 | 2.5236 | 2.4756 | 2.1756 | 2.4681 | 2.1533 | 2.8623
12 | 2.6653 | 2.8745 | 2.2241 | 1.8854 | 1.7742 | 2.1587 | 2.1354 | 3.1483
13 | 2.4325 | 2.4701 | 2.1324 | 2.0214 | 2.2147 | 2.6541 | 2.8521 | 3.1475
14 | 2.1856 | 1.8321 | 1.8572 | 2.7523 | 2.3753 | 2.7423 | 2.7145 | 2.8563
15 | 2.5321 | 2.2742 | 2.1253 | 1.8856 | 2.1258 | 2.7745 | 2.8230 | 3.0302

Table 6 demonstrates the execution time (in seconds) analysis. It is observed that the proposed NSGA-BCP based dehazing model is computationally faster than the existing approaches. The overall analysis shows that the proposed model achieves an average execution time of 1.0324 s, significantly lower than the minimum average of 1.3853 s obtained by the existing dehazing models.


Img. | DCP | FVID | WT | TGV | CTT | CNN | norm | NSGA-BCP

1 | 1.7532 | 1.3214 | 1.2941 | 1.2789 | 1.2143 | 1.1854 | 1.9874 | 1.0211
2 | 1.3214 | 1.3478 | 1.7852 | 1.0547 | 1.2847 | 1.1458 | 1.4412 | 1.0112
3 | 1.9765 | 1.8547 | 1.6475 | 1.5521 | 1.4874 | 1.6321 | 1.4365 | 1.2415
4 | 1.4842 | 1.3214 | 1.5641 | 1.8745 | 1.4123 | 1.0745 | 1.2777 | 1.0521
5 | 1.0742 | 1.4324 | 1.0547 | 1.7423 | 1.3547 | 1.3321 | 1.1324 | 1.0214
6 | 1.2567 | 1.2247 | 1.4658 | 1.3547 | 1.1187 | 1.3547 | 1.6571 | 1.1012
7 | 1.8423 | 1.3452 | 1.2310 | 1.5845 | 1.6621 | 1.6874 | 1.3276 | 1.1014
8 | 1.4741 | 1.2114 | 1.3214 | 1.1214 | 1.3841 | 1.3245 | 1.8623 | 1.0214
9 | 1.4321 | 1.3321 | 1.2214 | 1.4321 | 1.4632 | 1.4321 | 1.8287 | 1.1234
10 | 1.2456 | 1.5812 | 1.4632 | 1.2452 | 1.3214 | 1.7113 | 1.3214 | 1.1132
11 | 1.2547 | 1.2321 | 1.3756 | 1.4451 | 1.3475 | 1.1532 | 1.6321 | 1.0203
12 | 1.3214 | 1.5147 | 1.2147 | 1.1847 | 1.2214 | 1.2147 | 1.7145 | 1.1014
13 | 1.8074 | 1.6321 | 1.6321 | 1.0702 | 1.2756 | 1.5214 | 1.6654 | 1.0551
14 | 1.8114 | 1.1477 | 1.3475 | 1.2354 | 1.4435 | 1.3214 | 1.0766 | 1.0321
15 | 1.1214 | 1.7561 | 1.1832 | 1.7547 | 1.4123 | 1.2147 | 1.4471 | 1.1123

Table 7 demonstrates the haze gradient analysis. It is observed that the NSGA-BCP based dehazing model achieves lower haze gradient values than the existing approaches. The overall analysis shows that the proposed model achieves an average haze gradient of 1.7483, significantly lower than the minimum average of 1.9485 obtained by the existing dehazing models.


Img. | DCP | FVID | WT | TGV | CTT | CNN | norm | NSGA-BCP

1 | 2.0496 | 2.2412 | 2.1259 | 1.8333 | 1.7798 | 2.0864 | 2.0224 | 1.7786
2 | 1.9477 | 1.9919 | 1.7394 | 2.2863 | 1.9414 | 1.9901 | 1.9109 | 1.7382
3 | 1.9081 | 2.0821 | 2.2017 | 1.8236 | 1.8699 | 2.0223 | 1.7489 | 1.7477
4 | 1.8541 | 1.9036 | 2.2311 | 1.8296 | 2.2209 | 2.0962 | 1.9231 | 1.8284
5 | 1.9743 | 1.8502 | 1.9164 | 2.1104 | 1.8947 | 2.0978 | 1.9491 | 1.8494
6 | 1.9285 | 2.2606 | 2.1083 | 2.0425 | 1.9375 | 2.2805 | 1.9785 | 1.9273
7 | 2.2466 | 2.2695 | 1.7853 | 1.7281 | 1.9208 | 1.9918 | 1.9867 | 1.7269
8 | 2.1197 | 2.1937 | 2.2231 | 2.1431 | 1.1752 | 1.9886 | 2.1691 | 1.7744
9 | 2.2834 | 1.8309 | 1.8698 | 1.8264 | 1.8384 | 2.1682 | 2.2709 | 1.8252
10 | 2.0828 | 1.8719 | 1.7288 | 1.7963 | 2.2443 | 2.0386 | 2.2368 | 1.7276
11 | 1.7864 | 1.8741 | 1.7926 | 2.1949 | 2.0954 | 1.8999 | 2.2163 | 1.7852
12 | 1.8189 | 2.2118 | 1.8735 | 2.1447 | 1.8708 | 1.8999 | 1.9918 | 1.8177
13 | 2.1464 | 2.1439 | 2.1122 | 2.1986 | 1.9486 | 2.2692 | 2.2327 | 1.9474
14 | 1.9264 | 2.0314 | 1.9543 | 1.8266 | 1.9892 | 1.8087 | 1.8531 | 1.8075
15 | 2.2915 | 1.9563 | 1.7242 | 2.0586 | 2.1525 | 2.2626 | 1.8594 | 1.7238

Table 8 shows the PSNR analyses of the proposed and the competitive dehazing models. It is found that the proposed NSGA-BCP based dehazing model achieves higher PSNR values than the competitive dehazing approaches, whose maximum average PSNR is 23.2943.


Img. | DCP | FVID | WT | TGV | CTT | CNN | norm | NSGA-BCP

1 | 21.4352 | 21.3526 | 17.4847 | 25.2233 | 19.7988 | 25.9443 | 18.2123 | 27.1617
2 | 19.3116 | 17.0435 | 20.2494 | 20.6626 | 23.4879 | 25.8382 | 17.2867 | 27.0597
3 | 18.8484 | 25.6161 | 23.9796 | 23.4317 | 23.2487 | 18.2346 | 22.2156 | 26.8378
4 | 18.2339 | 27.5254 | 26.7262 | 20.0566 | 24.6671 | 21.1583 | 16.8985 | 28.7417
5 | 19.5562 | 22.7405 | 27.0682 | 26.7358 | 16.9676 | 17.7742 | 17.4003 | 28.2899
6 | 24.6329 | 24.1535 | 24.2207 | 25.0819 | 18.6774 | 22.1306 | 19.8634 | 26.3036
7 | 19.8772 | 25.4108 | 25.7341 | 19.6746 | 24.0464 | 22.7067 | 21.4376 | 26.9558
8 | 23.7912 | 27.6807 | 21.8149 | 27.5865 | 21.3945 | 20.0567 | 25.7018 | 28.9024
9 | 27.7838 | 26.8985 | 17.2131 | 21.0298 | 20.6644 | 20.2541 | 27.3327 | 29.0055
10 | 19.3601 | 24.0618 | 27.6223 | 25.4101 | 21.8054 | 24.1566 | 26.9122 | 28.8449
11 | 19.1564 | 24.7815 | 19.8693 | 23.0911 | 18.9015 | 21.2278 | 24.9664 | 26.1881
12 | 25.6242 | 20.3746 | 22.8543 | 21.1459 | 21.6141 | 24.0739 | 19.1215 | 26.8459
13 | 21.5967 | 18.4221 | 17.6245 | 27.7589 | 19.6873 | 24.5459 | 18.4737 | 28.9806
14 | 25.2212 | 23.8978 | 21.1918 | 25.1502 | 23.8473 | 18.0318 | 17.5223 | 26.4429
15 | 26.5971 | 27.3211 | 27.0028 | 24.3435 | 18.3536 | 21.7065 | 20.2388 | 28.5428

Table 9 shows the SSIM analyses of the proposed and the competitive dehazing models. It is found that the proposed NSGA-BCP based dehazing model achieves higher SSIM values than the competitive dehazing approaches, whose maximum average SSIM is 0.8395.


Technique | CG | NVE | AG | PSNR | SSIM

MOHO [40]-BCP | 2.42 ± 0.077 | 2.43 ± 0.191 | 2.19 ± 0.111 | 30.9 ± 0.67 | 0.883 ± 0.079
MOEO [41]-BCP | 2.39 ± 0.181 | 2.24 ± 0.189 | 2.56 ± 0.133 | 31.3 ± 1.23 | 0.882 ± 0.064
MOPSO [42]-BCP | 2.74 ± 0.193 | 2.16 ± 0.115 | 2.66 ± 0.079 | 29.9 ± 1.18 | 0.881 ± 0.091
MOGWO [43]-BCP | 2.33 ± 0.129 | 2.42 ± 0.137 | 2.07 ± 0.162 | 30.2 ± 0.77 | 0.875 ± 0.079
MOWO [44]-BCP | 2.38 ± 0.061 | 2.14 ± 0.112 | 2.38 ± 0.101 | 29.4 ± 0.94 | 0.873 ± 0.098
NSGA-BCP | 2.51 ± 0.172 | 2.67 ± 0.067 | 2.69 ± 0.136 | 31.8 ± 1.83 | 0.887 ± 0.082

From Tables 2–10, it has been found that NSGA-BCP outperforms the competitive dehazing models in terms of contrast gain (CG), new visible edges (NVE), average gradient (AG), peak signal-to-noise ratio (PSNR), and structural similarity index metric (SSIM). Compared to the competitive models, NSGA-BCP also achieves a lower haze gradient, fewer saturated pixels, and a shorter execution time.


Img. | DCP | FVID | WT | TGV | CTT | CNN | norm | NSGA-BCP

1 | 0.7502 | 0.7621 | 0.8674 | 0.7793 | 0.7619 | 0.8121 | 0.8067 | 0.8691
2 | 0.8522 | 0.7959 | 0.8665 | 0.7255 | 0.8844 | 0.7894 | 0.8506 | 0.8861
3 | 0.8162 | 0.8257 | 0.7331 | 0.8518 | 0.7529 | 0.7478 | 0.8314 | 0.8535
4 | 0.8715 | 0.7578 | 0.7952 | 0.7856 | 0.7384 | 0.8636 | 0.8344 | 0.9127
5 | 0.8096 | 0.8764 | 0.8039 | 0.7401 | 0.8591 | 0.7449 | 0.7541 | 0.9081
6 | 0.7609 | 0.8439 | 0.8329 | 0.8803 | 0.7946 | 0.7783 | 0.8394 | 0.8826
7 | 0.8386 | 0.7307 | 0.8905 | 0.8617 | 0.7783 | 0.8983 | 0.7417 | 0.9545
8 | 0.8365 | 0.8015 | 0.7511 | 0.8364 | 0.8228 | 0.8279 | 0.8141 | 0.8982
9 | 0.7311 | 0.7582 | 0.7299 | 0.7662 | 0.7614 | 0.7832 | 0.8893 | 0.8917
10 | 0.7383 | 0.7258 | 0.7557 | 0.7393 | 0.7712 | 0.8029 | 0.7764 | 0.8046
11 | 0.8988 | 0.7244 | 0.7391 | 0.8895 | 0.7345 | 0.8714 | 0.7234 | 0.9005
12 | 0.7908 | 0.8567 | 0.8973 | 0.8488 | 0.8235 | 0.7834 | 0.8265 | 0.8993
13 | 0.7582 | 0.7423 | 0.8575 | 0.7468 | 0.8319 | 0.7338 | 0.4523 | 0.8592
14 | 0.7274 | 0.8645 | 0.8914 | 0.7522 | 0.7959 | 0.8781 | 0.7421 | 0.8933
15 | 0.7277 | 0.8823 | 0.8005 | 0.8784 | 0.7616 | 0.7813 | 0.8444 | 0.8844

Besides this, we have also compared the performance of the proposed model with the different optimization approaches such as Multiobjective Harris Hawks Optimizer (MOHO) [40], Multiobjective Extremal Optimization (MOEO) [41], Multiobjective Particle Swarm Optimization (MOPSO) [42], Multiobjective Grey Wolf Optimizer (MOGWO) [43], and Multiobjective Whale Optimization (MOWO) [44]. We have applied the selected metaheuristic techniques on BCP to obtain the restored images. It has been observed that the proposed model achieves significantly better results as compared to the existing metaheuristic techniques.

5. Conclusion

A brightness channel prior (BCP) based dehazing model was implemented. The transmission map refinement was achieved using a gradient filter, and the hyperparameters of BCP were tuned using NSGA. The obtained results revealed that BCP outperforms the competitive dehazing models in terms of contrast gain, new visible edges, average gradient, peak signal-to-noise ratio, and structural similarity index metric. Compared to the competitive models, BCP also minimizes the haze gradient, saturated pixels, and execution time.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Ethical Approval

The research was conducted according to the principles expressed in the Declaration of Helsinki.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. M. Jian, Y. Yin, J. Dong, and W. Zhang, "Comprehensive assessment of non-uniform illumination for 3d heightmap reconstruction in outdoor environments," Computers in Industry, vol. 99, pp. 110–118, 2018.
2. S. Liu and Y. Zhang, "Detail-preserving underexposed image enhancement via optimal weighted multi-exposure fusion," IEEE Transactions on Consumer Electronics, vol. 65, no. 3, 2019.
3. J.-P. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," in Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, pp. 2201–2208, IEEE, Kyoto, Japan, September 2009.
4. C. O. Ancuti, C. Ancuti, and P. Bekaert, "Enhancing by saliency-guided decolorization," in Proceedings of CVPR 2011, pp. 257–264, IEEE, Colorado Springs, CO, USA, June 2011.
5. M. Jian, K.-M. Lam, and J. Dong, "Illumination-insensitive texture discrimination based on illumination compensation and enhancement," Information Sciences, vol. 269, pp. 60–72, 2014.
6. Y. Zhang and S. Liu, "Non-uniform illumination video enhancement based on zone system and fusion," in Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), pp. 2711–2716, IEEE, Beijing, China, August 2018.
7. X. Liu, H. Zhang, Y. Y. Tang, and J. X. Du, "Scene-adaptive single image dehazing via opening dark channel model," IET Image Processing, vol. 10, no. 11, pp. 877–884, 2016.
8. I. Riaz, X. Fan, and H. Shin, "Single image dehazing with bright object handling," IET Computer Vision, vol. 10, no. 8, pp. 817–827, 2016.
9. S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 6, pp. 713–724, 2003.
10. S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 820–827, IEEE, Corfu, Greece, September 1999.
11. B. Li, S. Wang, J. Zheng, and L. Zheng, "Single image haze removal using content-adaptive dark channel and post enhancement," IET Computer Vision, vol. 8, no. 2, pp. 131–140, 2014.
12. D. Wang and J. Zhu, "Fast smoothing technique with edge preservation for single image dehazing," IET Computer Vision, vol. 9, no. 6, pp. 950–959, 2015.
13. J. P. Oakley and B. L. Satherley, "Improving image quality in poor visibility conditions using a physical model for contrast degradation," IEEE Transactions on Image Processing, vol. 7, no. 2, pp. 167–179, 1998.
14. D. Nair and P. Sankaran, "Color image dehazing using surround filter and dark channel prior," Journal of Visual Communication and Image Representation, vol. 50, pp. 9–15, 2018.
15. K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
16. J. Bala and K. Lakhwani, "Single image desmogging using oblique gradient profile prior and variational minimization," Multidimensional Systems and Signal Processing, vol. 31, no. 9, pp. 1–17, 2020.
17. M. Zhu, B. He, J. Liu, and J. Yu, "Boosting dark channel dehazing via weighted local constant assumption," Signal Processing, vol. 171, p. 107453, 2020.
18. A. Mathias and D. Samiappan, "Underwater image restoration based on diffraction bounded optimization algorithm with dark channel prior," Optik, vol. 192, p. 162925, 2019.
19. D. Singh, V. Kumar, and M. Kaur, "Single image dehazing using gradient channel prior," Applied Intelligence, vol. 49, no. 12, pp. 4276–4293, 2019.
20. Y. Jiang, C. Sun, Y. Zhao, and L. Yang, "Image dehazing using adaptive bi-channel priors on superpixels," Computer Vision and Image Understanding, vol. 165, pp. 17–32, 2017.
21. Y. Wang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and T.-Y. Ji, "A convex single image dehazing model via sparse dark channel prior," Applied Mathematics and Computation, vol. 375, p. 125085, 2020.
22. G. Hou, J. Li, G. Wang, H. Yang, B. Huang, and Z. Pan, "A novel dark channel prior guided variational framework for underwater image restoration," Journal of Visual Communication and Image Representation, vol. 66, p. 102732, 2020.
23. S. Ghosh, P. Shivakumara, P. Roy, U. Pal, and T. Lu, "Graphology based handwritten character analysis for human behaviour identification," CAAI Transactions on Intelligence Technology, vol. 5, no. 1, pp. 55–65, 2020.
24. T. Wiens, "Engine speed reduction for hydraulic machinery using predictive algorithms," International Journal of Hydromechatronics, vol. 2, no. 1, pp. 16–31, 2019.
25. J.-M. Guo, J.-Y. Syue, V. Radzicki, and H. Lee, "An efficient fusion-based defogging," IEEE Transactions on Image Processing, vol. 99, 2017.
26. S. M. Yoon, "Visibility enhancement of fog-degraded image using adaptive total variation minimisation," The Imaging Science Journal, vol. 64, no. 2, pp. 82–86, 2016.
27. B. Jiang, H. Meng, J. Zhao et al., "Single image fog and haze removal based on self-adaptive guided image filter and color channel information of sky region," Multimedia Tools and Applications, vol. 77, pp. 13513–13530, 2017.
28. Z. Luan, Y. Shang, X. Zhou, Z. Shao, G. Guo, and X. Liu, "Fast single image dehazing based on a regression model," Neurocomputing, vol. 245, pp. 10–22, 2017.
29. X. Liu, H. Zhang, Y. M. Cheung, X. You, and Y. Y. Tang, "Efficient single image dehazing and denoising: an efficient multi-scale correlated wavelet approach," Computer Vision and Image Understanding, vol. 162, pp. 23–33, 2017.
30. Y. Wu, Y. Qin, Z. Wang, X. Ma, and Z. Cao, "Densely pyramidal residual network for UAV-based railway images dehazing," Neurocomputing, vol. 371, pp. 124–136, 2020.
31. Q. Shu, C. Wu, Q. Zhong, and R. W. Liu, "Alternating minimization algorithm for hybrid regularized variational image dehazing," Optik, vol. 185, pp. 943–956, 2019.
32. C. Hodges, M. Bennamoun, and H. Rahmani, "Single image dehazing using deep neural networks," Pattern Recognition Letters, vol. 128, pp. 70–77, 2019.
33. J. Zhang, X. Wang, C. Yang, J. Zhang, D. He, and H. Song, "Image dehazing based on dark channel prior and brightness enhancement for agricultural remote sensing images from consumer-grade cameras," Computers and Electronics in Agriculture, vol. 151, pp. 196–206, 2018.
34. S. Emberton, L. Chittka, and A. Cavallaro, "Underwater image and video dehazing with pure haze region segmentation," Computer Vision and Image Understanding, vol. 168, pp. 145–156, 2018.
35. F. Guo, X. Zhao, J. Tang, H. Peng, L. Liu, and B. Zou, "Single image dehazing based on fusion strategy," Neurocomputing, vol. 378, 2020.
36. A. Alajarmeh, R. Salam, K. Abdulrahim, M. Marhusin, A. Zaidan, and B. Zaidan, "Real-time framework for image dehazing based on linear transmission and constant-time airlight estimation," Information Sciences, vol. 436-437, pp. 108–130, 2018.
37. Y. Gao, Q. Li, and J. Li, "Single image dehazing via a dual-fusion method," Image and Vision Computing, vol. 94, p. 103868, 2020.
38. H. S. Basavegowda and G. Dagnew, "Deep learning approach for microarray cancer data classification," CAAI Transactions on Intelligence Technology, vol. 5, no. 1, pp. 22–33, 2020.
39. R. Wang, H. Yu, G. Wang, G. Zhang, and W. Wang, "Study on the dynamic and static characteristics of gas static thrust bearing with micro-hole restrictors," International Journal of Hydromechatronics, vol. 2, no. 3, pp. 189–202, 2019.
40. U. Yüzgeç and M. Kusoglu, "Multi-objective harris hawks optimizer for multiobjective optimization problems," BSEU Journal of Engineering Research and Technology, vol. 1, no. 1, pp. 31–41, 2020.
41. M.-R. Chen and Y.-Z. Lu, "A novel elitist multiobjective optimization algorithm: multiobjective extremal optimization," European Journal of Operational Research, vol. 188, no. 3, pp. 637–651, 2008.
42. P. K. Tripathi, S. Bandyopadhyay, and S. K. Pal, "Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients," Information Sciences, vol. 177, no. 22, pp. 5033–5049, 2007.
43. S. Khalilpourazari and S. H. R. Pasandideh, "Multi-objective optimization of multi-item EOQ model with partial backordering and defective batches and stochastic constraints using MOWCA and MOGWO," Operational Research, vol. 20, no. 2, pp. 1–33, 2018.
44. I. R. Kumawat, S. J. Nanda, and R. K. Maddila, "Multi-objective whale optimization," in Proceedings of TENCON 2017 - 2017 IEEE Region 10 Conference, pp. 2747–2752, IEEE, Penang, Malaysia, November 2017.
45. Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2015.
  46. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: Nsga-ii,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002. View at: Google Scholar
  47. M. Kaur, D. Singh, V. Kumar, and K. Sun, “Color image dehazing using gradient channel prior and guided l0 filter,” Information Sciences, vol. 521, pp. 326–342, 2020. View at: Google Scholar
  48. S. Osterland and J. Weber, “Analytical analysis of single-stage pressure relief valves,” International Journal of Hydromechatronics, vol. 2, no. 1, pp. 32–53, 2019. View at: Google Scholar
  49. B. Gupta, M. Tiwari, and S. S. Lamba, “Visibility improvement and mass segmentation of mammogram images using quantile separated histogram equalisation with local contrast enhancement,” CAAI Transactions on Intelligence Technology, vol. 4, no. 2, pp. 73–79, 2019. View at: Google Scholar
  50. A. Golts, D. Freedman, and M. Elad, “Unsupervised single image dehazing using dark channel prior loss,” IEEE Transactions on Image Processing, vol. 29, pp. 2692–2701, 2020. View at: Publisher Site | Google Scholar
  51. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and M. Sbetr, “Color channel transfer for image dehazing,” IEEE Signal Processing Letters, vol. 26, no. 9, pp. 1413–1417, 2019. View at: Google Scholar
  52. L. Li, Y. Dong, W. Ren et al., “Semi-supervised image dehazing,” IEEE Transactions on Image Processing, vol. 29, pp. 2766–2779, 2020. View at: Google Scholar
  53. H. Khan, M. Sharif, N. Bibi et al., “Localization of radiance transformation for image dehazing in wavelet domain,” Neurocomputing, vol. 381, pp. 141–151, 2020. View at: Publisher Site | Google Scholar
  54. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Fusion-based variational image dehazing,” IEEE Signal Processing Letters, vol. 24, no. 2, pp. 151–155, 2017. View at: Google Scholar
  55. T. Cui, J. Tian, E. Wang, and Y. Tang, “Single image dehazing by latent region-segmentation based transmission estimation and weighted l1-norm regularisation,” IET Image Processing, vol. 11, no. 2, pp. 145–154, 2017. View at: Google Scholar
  56. Y. Gu, X. Yang, and Y. Gao, “A novel total generalized variation model for image dehazing,” Journal of Mathematical Imaging and Vision, vol. 61, no. 9, pp. 1329–1341, 2019. View at: Google Scholar

Copyright © 2021 Vinay Kehar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
