Journal of Sensors


Research Article | Open Access


Yubin Yuan, Yu Shen, Jing Peng, Lin Wang, Hongguo Zhang, "Defogging Technology Based on Dual-Channel Sensor Information Fusion of Near-Infrared and Visible Light", Journal of Sensors, vol. 2020, Article ID 8818650, 17 pages, 2020. https://doi.org/10.1155/2020/8818650

Defogging Technology Based on Dual-Channel Sensor Information Fusion of Near-Infrared and Visible Light

Academic Editor: Abdellah Touhafi
Received 22 Mar 2020
Revised 03 Oct 2020
Accepted 12 Oct 2020
Published 16 Nov 2020

Abstract

Since single-image defogging methods are complicated and the defogged images often suffer detail loss and color distortion, a defogging method based on near-infrared and visible image fusion is put forward in this paper. The algorithm uses the near-infrared image, which is rich in detail, as an additional data source and adopts an image fusion method to obtain a defogged image with rich details and high color fidelity. First, the color visible image is converted into HSI color space to obtain an intensity channel image, a hue channel image, and a saturation channel image. The intensity channel image is fused with the near-infrared image and defogged; to this end, it is decomposed by the Nonsubsampled Shearlet Transform. The obtained high-frequency coefficients are filtered with an edge-preserving double-exponential smoothing filter, while unsharp masking is conducted on the low-frequency coefficients. The new intensity channel image is obtained through the fusion rules and the inverse transform. Then, for the color treatment of the visible image, a degradation model of the saturation image is established, whose parameters are estimated based on the dark channel prior to obtain the estimated saturation image. Finally, the new intensity channel image, the estimated saturation image, and the original hue image are mapped back to RGB space to obtain the fused image, which is enhanced by color and sharpness correction. To prove the effectiveness of the algorithm, dense-fog and thin-fog images are processed and compared against popular single-image and multi-image defogging algorithms and a deep-learning-based visible/near-infrared fusion defogging algorithm. The experimental results show that the proposed algorithm improves edge contrast and visual sharpness better than the existing high-efficiency defogging methods.

1. Introduction

Fog is an atmospheric phenomenon in which light is absorbed or scattered by floating particles composed of water vapor, suspended matter, and aerosols in the air. Under such adverse conditions, atmospheric scattering sharply weakens or pollutes the scene radiance, so captured images are easily blurred; for example, image contrast, content detail, and color saturation deteriorate gradually as the scene distance and haze level increase. On the one hand, image degradation caused by fog damages the visual quality of images; on the other hand, it reduces the effectiveness of subsequent image processing and computer vision tasks (such as feature extraction, recognition, and classification). It is therefore of great significance to study foggy image sharpening algorithms [1–3].

At present, there are two mainstream traditional image defogging approaches: one is based on image enhancement [4–7], which processes image pixels directly in the spatial domain; the other is based on image restoration [8–13], which first establishes an atmospheric scattering degradation model of the image and obtains the fogless image by inverting the imaging process.

Enhancement-based defogging algorithms improve the visual effect mainly by improving the image contrast, while neglecting the cause of the image-quality degradation. Such methods mainly include the histogram equalization method [4, 5], the Retinex method [6, 7], and improved variants of these methods. Based on the mean enhancement method and adaptive histogram equalization, Shrivastava and Jain [4] used the YCbCr model to eliminate fog and display clear image targets with the intensity image; the other two channels are used to preserve the colors associated with specific color values so as to obtain a fogless version of the misty image. To solve the problem of excessive contrast enhancement, Babu and Rajamani [5] put forward an improved histogram equalization technique based on a real-number-encoded genetic algorithm, obtaining an enhancement method that maintains the original intensity. This method can control the level of contrast enhancement and is applicable to all types of images, including low-contrast MRI brain images. Galdran et al. [6] connected the Retinex model with image enhancement through a linear relationship and put forward a Retinex image enhancement algorithm on inverted intensities, which solves the problem of image deviation caused by uneven illumination while removing fog. Wang et al. [7] adopted a multiscale Retinex-based color restoration algorithm, which, while considering the dynamic range of the image, effectively improves the quality of the fog-degraded image and retains sufficient image detail by calculating the atmospheric light value and the transmission map.

The realization of the image restoration-based method is relatively simple, and it demands little of the hardware. The major research directions include the dark channel prior model raised by He et al. [8], the fast image-defogging algorithm raised by Tarel and Hautière [9], and the improved algorithms put forward by researchers based on these two. From statistical laws, He et al. [8] derived the defogging method based on the dark channel prior, which has an obvious defogging effect and little color distortion, so it is considered among the best defogging methods at present. However, this method adopts soft matting to estimate the transmissivity, which has high computational complexity and cannot meet real-time requirements. Tarel and Hautière [9] assumed that the ambient light approaches its maximum value within a certain region and that the local change is gentle; they used median filtering to estimate the transmissivity and put forward a fast image defogging algorithm. This method is simple to operate and less time-consuming, so it is applied extensively. However, Tarel's algorithm has many parameters and cannot adapt automatically, so the parameters must be adjusted for different images to achieve good results. Meanwhile, halos easily form where the depth of field jumps, so the algorithm performs poorly on dense fog. Subsequently, researchers at home and abroad have improved and optimized the above two typical defogging algorithms and achieved better effects. Zhang et al. [10] put forward a fast defogging algorithm based on the dark channel prior, substituting an edge-substitution method for the matting process of the original algorithm, which reduces the computational complexity.
Then, aiming at the problem of dark channel failure in bright regions, a double-threshold identification method for bright regions and a transmittance correction mechanism were put forward to broaden the applicability of the dark channel prior and improve the visual effect of the defogged image. Xu et al. [11] put forward a dark channel and light channel defogging algorithm based on the dark channel prior, where the light channel is obtained from statistics of outdoor blurred images. In addition, a guided filter is introduced to refine the dark channel and the light channel, which relieves the high computational complexity of He's algorithm without overexposure and achieves good effects. Singh and Kumar [12] raised a visibility restoration model that solves the problem of the glow effect in sky regions and improves computation speed and edge preservation by taking advantage of the DCP, the bright channel prior (BCP), and a gain intervention filter, with greater potential for real-time application. Zhu et al. [13] established an effective dark channel model integrating the information of various dark channels by analyzing various dark channel methods; it estimates the transmission map of the input image, refines it with an improved steerable filter, and restores the radiance image with a single-color atmospheric scattering model. Based on a fusion method, Ancuti C.O. and Ancuti C. [14] defogged a single image by deriving two inputs from the original blurred image with white balance and contrast enhancement procedures. The important features are identified by calculating luminance, chromaticity, and saliency. A multiscale transform represented by the Laplacian pyramid is introduced to minimize the artifacts introduced by the weight maps, which achieves a good defogging effect and is suitable for real-time applications.

From the perspective of image sources, image defogging can be divided into multi-image defogging and single-image defogging. The defogging effect can be achieved by using multiple images, such as polarized images [15, 16], sunny-day images [17], image frame sequences [18], and infrared images [19–22], as auxiliary images. Such algorithms are complex to realize but give good restoration effects. Due to hardware limitations and processing complexity, most studies at present focus on single-image defogging; however, a few scholars have conducted studies on multi-image defogging.

Miyazaki et al. [15] put forward an image defogging algorithm based on multiple polarized images, which conducts defogging on two images captured under different polarization conditions, and pointed out that the polarizer is effective only under certain conditions; its operation is demanding, and improper handling easily introduces human error. Yadav et al. [16] captured a foggy image sequence, processed the foggy images in four steps, and enhanced and filtered the images based on estimates of image scattering and visibility. Zhang et al. [17] raised an atmospheric-model defogging algorithm based on multiple images, which estimates the parameter values by referring to an image of the scene on a sunny day and the blurred image on a foggy day. Jiang and Ma [18] put forward a defogging method for remote sensing images: a color-compensated image is selected from the remote sensing image sequence, MSR is stretched near the average brightness value to enhance the image, and a new color-compensated image is obtained; the defogged image performs well in terms of information entropy. Shao et al. [19] introduced focus measure operators into the fusion of visible-light and near-infrared images. First, the fast discrete Curvelet transform is conducted on the original images, and the focus measure values of each coefficient subband are calculated. Then, the low-frequency coefficients are fused by local variance weighting, and the high-frequency coefficient subbands are matched by a fourth-order correlation coefficient matching method, with good fusion effect. Zhou et al. [20] separated texture details from edge features by using multiscale Gaussian filters and bilateral filters jointly; by fusing multiscale infrared spectral features into visible images, a better human visual fusion effect is obtained while retaining (or appropriately enhancing) important perceptual cues of the background scenes and details. Schaul et al.
[21] achieved the fusion of near-infrared images and color visible-light images by using a multiresolution method with edge-preserving filters, which minimizes artifacts and achieves the purpose of defogging. Son and Zhang [22] raised a near-infrared fusion model combining new color and depth regularization with the traditional fog degradation model and proposed a color regularization method based on the combination of the color near-infrared image and the captured visible image. The color range of the unknown fogless image is constrained so that the continuously estimated depth map does not deviate greatly; thereby, the natural color of the image is transmitted and the visibility of the image is improved. Li and Wu [23] proposed an image fusion method based on latent low-rank representation (LatLRR), which decomposes the source image into a low-rank part (global structure) and a salient part (local structure). A weighted-average strategy is used to fuse the low-rank parts to retain more contour information, and a sum strategy is used to fuse the salient parts, achieving a better fusion effect.

From the above analysis, it can be seen that multi-image defogging algorithms take various forms. One is to defog images captured under different polarization conditions through a polarizer; this method relies on a polarizing plate, so it has a small scope of application and handles dynamic scenes and dense-fog images poorly. Another is to defog multiple images taken of the same scene on foggy and sunny days under different atmospheric conditions; this method is limited by weather and time, with poor real-time performance and weak applicability. Some approaches also require reference images captured under different conditions, which are hard to acquire. So multi-image defogging faces various problems, such as difficult hardware implementation and limited acquisition paths. In addition, when comparing multiple images, image registration must also be considered. In a word, such algorithms have poor real-time performance and high computational complexity.

With the improvement of hardware capabilities, deep learning is widely used in digital image processing. Compared with traditional methods, deep learning can capture more global and detailed information and can exploit large amounts of easily available image and video data. Unlike traditional image processing techniques designed for narrow domains, the models and frameworks obtained by deep learning can be retrained with new custom data, which gives them great flexibility [24, 25]. Li et al. [26] proposed a deep learning framework to generate a single image containing all the features of infrared and visible images. This method first decomposes the source images into low-frequency and high-frequency subbands. A weighted average method is used to fuse the low-frequency subbands, and a deep learning network is used on the high-frequency subbands to extract multilayer features. Using the extracted features, the l1-norm and a weighted average strategy generate multiple candidates for the fused details, and the final high-frequency fused image is obtained through a maximum selection strategy. Liu et al. [27] proposed an infrared and visible image fusion method based on a convolutional neural network, which is applied to obtain a weight map that integrates pixel activity information from the two source images. This CNN-based method can handle two key issues in image fusion jointly, namely, activity-level measurement and weight assignment. The fusion is carried out in a multiscale manner through an image pyramid, and the fusion of the decomposition coefficients is adaptively adjusted by a strategy based on local similarity. Li et al. [28] proposed a fusion framework based on deep features and zero-phase component analysis (ZCA) to address the problem that most deep learning networks neglect feature extraction and thus lose part of the information.
First, a residual network is used to extract deep features from the source images. Then ZCA is used to normalize the deep features to obtain the initial weight maps. The final weight maps are obtained by applying the soft-max operation to the initial weight maps. Finally, a weighted average strategy is used to reconstruct the fused image.

However, deep learning also has problems. The first is its dependence on training data. Traditional algorithms are relatively simple in concept, mainly using color thresholds or pixel-level techniques; simple code can solve the problem, and performance is consistent across various images. With deep learning, by contrast, the learned features are strongly affected by the training set. Second, when dealing with specific problems, large task-specific training sets are often lacking, and deep learning may overfit the training data, yielding models with a large computational cost and poor generalization. Moreover, deep learning models have huge numbers of parameters, so improving them by manual parameter tuning is impractical. Therefore, when computing power is low or the problem has strong feature constraints, traditional algorithms are still the first choice.

So, to address the problems of multi-image defogging, a fusion defogging algorithm based on near-infrared and visible binocular image sensors is put forward in this paper. First, the color image captured by the visible-light sensor is converted into HSI color space. To inject the detailed features of the near-infrared sensor image into the visible color image, the intensity channel image of the visible image and the image captured by the near-infrared sensor are fused by means of the Nonsubsampled Shearlet Transform (NSST), and the intensity image is defogged and enhanced by filtering the high-frequency components and enhancing the low-frequency components. A degradation model is established for the saturation channel of the visible image, and its parameters are estimated using the dark channel prior to obtain the defogged saturation map. Then, the defogged intensity channel image, the saturation channel image, and the original hue channel image are mapped back to RGB color space. Finally, the RGB image is enhanced by color and sharpness correction (CSC).

The major innovation points of this research are as follows: (1) The near-infrared image is taken as the auxiliary image, which enhances the detailed information of the defogged image, such as edges and outlines; the near-infrared sensor can penetrate fog to some degree and capture image details that cannot be captured by a visible-light sensor on foggy and hazy days. (2) The visible-light image is converted into HSI color space and each component is processed separately; the components of the HSI color space do not affect each other, which not only retains the color information of the original image but also improves the image contrast. (3) For the decomposed high-frequency coefficients, the double-exponential edge-preserving smoothing filter is used for filtering for the first time, and then the gray-similarity rule is used for high-frequency fusion to obtain the fused high-frequency coefficients. (4) For the decomposed low-frequency coefficients, an improved unsharp masking method is proposed, and then the local-region standard deviation fusion rule is used to obtain the fused low-frequency coefficients. (5) A color correction method is proposed: CSC color correction is conducted on the fused image, which alleviates the detail loss caused by uneven illumination due to fog, making the image color more natural and enhancing the detail features of shadow regions. (6) In addition to qualitative analysis of each algorithm, several evaluation indicators are integrated into the visible and infrared image fusion benchmark (VIFB) evaluation system [29]; through the function interface provided by VIFB, the evaluation indicators are computed and a quantitative, objective, and comprehensive evaluation of the algorithms is completed.

The major contents of this paper are as follows: Section 2 briefly introduces the relevant theoretical background, including HSI color space theory, the Nonsubsampled Shearlet Transform, and CSC color correction. Section 3 introduces the quality evaluation system. Section 4 introduces the proposed fusion defogging algorithm based on near-infrared and visible binocular image sensors. The experimental results and analysis are given in Section 5. Finally, Section 6 concludes and outlines future work.

2. Materials and Methods

2.1. HSI Color Space

HSI is a color model based on the human visual system, which uses hue (H), saturation (S), and intensity (I) to represent color with independently adjustable components. HSI can be converted to and from the RGB color model. The HSI model has particular properties: first, its intensity component is unrelated to the color information, the channels are independent, and processing the intensity component separately does not affect the color information of the image; second, the HSI color model accords better with human visual characteristics, which benefits image processing. More importantly, a visible-light color RGB image can be mapped into HSI space, where the hue channel does not degrade under the influence of fog [30–32]. It is then only necessary to process the intensity channel and the saturation channel separately, which simplifies the subsequent calculations.

Figure 1 shows the images obtained after the color visible-light image is mapped to the three channels.

2.2. Nonsubsampled Shearlet Transform

Nonsubsampled Shearlet Transform (NSST) is developed from the Shearlet transform. It is one of the multiscale geometric analysis methods constructed by Fei et al. [33] based on a composite expansion affine system. Featured by a simple structure, it can generate basis functions conveniently by translation, rotation, and stretching, so it offers sparse representation of multidimensional functions. A two-dimensional composite expansion affine system can be expressed as

$$\Psi_{AB}(\psi) = \left\{ \psi_{j,l,k}(x) = |\det A|^{j/2}\, \psi\!\left(B^{l} A^{j} x - k\right) : j, l \in \mathbb{Z},\ k \in \mathbb{Z}^{2} \right\},$$

where $\psi \in L^{2}(\mathbb{R}^{2})$, and $A$ and $B$ are $2 \times 2$ invertible matrices. If, for any $f \in L^{2}(\mathbb{R}^{2})$, $\sum_{j,l,k} |\langle f, \psi_{j,l,k} \rangle|^{2} = \|f\|^{2}$, i.e., $\Psi_{AB}(\psi)$ forms a Parseval frame, the elements of $\Psi_{AB}(\psi)$ are called synthetic wavelets. The Shearlet is a special form of synthetic wavelets: $A$ is an anisotropic expansion matrix that determines the multiscale decomposition of the image, and $B$ is the shear matrix that determines the multidirectional decomposition of the image.

NSST is composed of nonsubsampled pyramid (NSP) filters and Shearlet filters (SF), as shown in Figure 2. Multiscale decomposition is realized through the NSP. To extract the singularity features in the image, the low-frequency coefficient is decomposed iteratively by the NSP, so that one low-frequency coefficient and a set of high-frequency coefficients are obtained at each level of decomposition. The SF is mapped to the Cartesian coordinate system and applied by two-dimensional convolution after Fourier transformation, which avoids the subsampling operation and makes NSST shift-invariant. In addition, the transformed subbands have the same size as the source image, which is favorable for the image fusion operation.
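A full NSST implementation requires directional shearing filters; the nonsubsampled pyramid (NSP) stage alone, however, can be illustrated with a shift-invariant multiscale decomposition in which simple box blurs stand in for the NSP low-pass filters (an illustrative sketch, not the paper's filters):

```python
import numpy as np

def _blur(img, radius):
    # Box blur with edge padding; a cheap stand-in for the NSP low-pass
    # filter (no subsampling, so the image size is preserved).
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def nonsubsampled_pyramid(img, levels=3):
    """Decompose `img` into `levels` full-resolution band-pass layers
    plus one low-pass residual (shift-invariant, like the NSP stage).
    Summing all outputs reconstructs the input exactly."""
    img = img.astype(float)
    bands = []
    current = img
    for lvl in range(levels):
        low = _blur(current, radius=2 ** lvl)  # coarser blur at each level
        bands.append(current - low)            # high-frequency detail layer
        current = low
    bands.append(current)                      # final low-frequency layer
    return bands
```

Because no subsampling occurs, every subband keeps the source image size, and summing all subbands reconstructs the input exactly, mirroring the shift-invariance property noted above.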

2.3. CSC Color Correction

When the fused intensity component is mapped from HSI color space back to RGB color space together with the hue and the saturation, the overall color of the original image changes and the saturation is reduced. So, a color correction mechanism is introduced, whose working mode is similar to hue mapping of the fused color image. In this way, the saturation is improved and the image color looks brighter. The color correction is applied to each of the color components $R$, $G$, and $B$ and is controlled by an index $\lambda$, the style control parameter, which is larger than or equal to 1. It controls the color saturation of the composite image: the larger its value, the higher the saturation. In practice, $\lambda$ is set to 1.5. Users can also obtain the required color saturation by adjusting this parameter.

As mentioned above, multiresolution fusion could also result in smoothing of the fused image. So, a sharpening mechanism is adopted to improve the visual quality of the fused image. The image after color correction is shifted again into HSI color space. First, the intensity component is filtered by a high-pass filter to extract the high-frequency component; then, the scaled version of the high-frequency component that has been extracted is added back to the image.

The image is then shifted back into RGB color space to form the final result. The sharpening uses a high-pass filter and a sharpening control parameter that is larger than or equal to 0 and is set to 0.4 here. If there is noise in the original image, a linear high-pass filter would amplify the interference. In contrast, sharpening based on the weighted median (WM) filter has better noise immunity, so a WM filter is used in place of the linear high-pass filter.

CSC color correction not only improves the visibility of the output image but also enhances its visual quality. The output image after CSC color correction is more vivid because the blur is reduced and the colors are more saturated. This CSC procedure can be regarded as optional; the results of the study are exhibited both with and without CSC.
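Since the exact CSC equations are not reproduced in this excerpt, the following sketch illustrates the two described operations in a plausible form: a power-law saturation boost governed by a style parameter λ ≥ 1, and high-boost sharpening with a control parameter k ≥ 0 (a 3×3 mean filter stands in for the high-pass/WM filtering; all specifics are illustrative assumptions):

```python
import numpy as np

def csc_saturation_boost(s, lam=1.5):
    """Illustrative saturation correction in HSI space: S' = S**(1/lam)
    with lam >= 1, so larger `lam` yields higher saturation, matching
    the described behaviour of the style control parameter."""
    assert lam >= 1.0
    return np.clip(s, 0.0, 1.0) ** (1.0 / lam)

def sharpen(i, k=0.4):
    """High-boost sharpening of the intensity channel: add back a
    scaled high-frequency component, obtained here as (image - 3x3
    mean), with control parameter k >= 0 (k = 0.4 as in the text)."""
    pad = np.pad(i, 1, mode='edge')
    mean = sum(pad[dy:dy + i.shape[0], dx:dx + i.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return i + k * (i - mean)
```

With `lam = 1` the saturation map is returned unchanged, matching the role of λ = 1 as the neutral setting.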

3. NSST-Based Image Defogging Algorithm

The algorithm flow chart is shown in Figure 3.

Step 1. Color space conversion. Convert the color image captured by the visible-light sensor into HSI color space to obtain the hue image, the saturation image, and the intensity image. The three channels do not interfere with each other and completely retain the color information of the original visible-light image.

Step 2. Multiscale decomposition. Decompose the intensity image of the visible-light image and the near-infrared image by the NSST method to obtain the corresponding high-frequency and low-frequency coefficients.

Step 3. High-frequency coefficient fusion. The double-exponential edge-preserving smoothing filter and the gray-similarity rule are introduced for the first time, preserving the high-frequency information of the two groups of images as much as possible. Filter the decomposed high-frequency coefficients with the double-exponential edge-preserving smoothing filter; then conduct high-frequency fusion based on the gray-similarity rule to obtain the fused high-frequency coefficient.

Step 4. Low-frequency coefficient fusion. An improved low-frequency unsharp masking enhancement algorithm is proposed, and the local-region standard deviation rule is used for fusion. Conduct low-frequency unsharp masking on the decomposed low-frequency coefficients; then fuse them based on the local-region standard deviation fusion rule to obtain the fused low-frequency coefficient.

Step 5. Inverse NSST. Conduct the inverse NSST on the fused high-frequency and low-frequency coefficients to obtain the processed intensity channel image.

Step 6. Saturation channel processing. For the first time, the dark channel principle is applied to a defogging algorithm based on the fusion of visible-light and near-infrared images. Construct a degradation model of the saturation channel of the visible-light image, estimate its parameters by the dark channel principle, and obtain the estimated saturation image.
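The dark channel mentioned in Step 6 follows He et al.'s definition: the per-pixel minimum over the color channels followed by a local minimum filter over a patch. A minimal NumPy sketch (patch size illustrative):

```python
import numpy as np

def dark_channel(rgb, patch=3):
    """Dark channel of an RGB image (He et al. [8]): per-pixel minimum
    over the color channels, then a minimum filter over a local patch.
    For haze-free outdoor images this is close to zero; haze lifts it."""
    mins = rgb.min(axis=2)                  # channel-wise minimum
    r = patch // 2
    pad = np.pad(mins, r, mode='edge')
    out = np.stack([pad[dy:dy + mins.shape[0], dx:dx + mins.shape[1]]
                    for dy in range(patch) for dx in range(patch)])
    return out.min(axis=0)                  # patch-wise minimum
```

The gap between the dark channel and zero is what the prior uses to estimate the degradation parameters.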

Step 7. Color space reverse conversion. Map the processed intensity channel image, the estimated saturation image, and the original hue image back into RGB space to obtain the fused RGB image.

Step 8. Color compensation. A color compensation method is proposed to restore the color information of the defogged image to the greatest extent. Apply CSC color compensation to the fused RGB image to obtain the final defogged image.

The details of each step of the algorithm are as follows.

3.1. Edge-Preserving Filtering of NSST High-Frequency Components

The high-frequency components after NSST multidirectional, multiscale decomposition contain most of the high-frequency details of the edges and outlines of the image, but noise is also a high-frequency component. Most filters lose fine linear details such as high-frequency edges while filtering out the noise. Therefore, a bi-exponential edge-preserving smoother (BEEPS) is adopted to filter the high-frequency coefficients after multiscale decomposition. The principle of BEEPS is similar to the range filter in the bilateral filter, but it avoids the gradient-reversal effect of the bilateral filter, which also reduces computational complexity: the calculation at each pixel depends only on the result at the previous pixel. The one-pass BEEPS result for a discrete series is the recursion $y[k] = (1 - \rho_k)\,x[k] + \rho_k\, y[k-1]$, where the gain $\rho_k$ decreases as the local gray-level difference $|x[k] - y[k-1]|$ increases.

When processing a two-dimensional image, the BEEPS filter treats it as a two-dimensional matrix constituted by several discrete series. The matrix is filtered with BEEPS first horizontally and then vertically, and the result is recorded; then the original matrix is filtered again, first vertically and then horizontally, and that result is recorded. Finally, the average of the two results is taken as the final output.
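The scheme described above can be sketched as a forward/backward recursive smoother whose gain collapses at edges, applied horizontally-then-vertically and vertically-then-horizontally with the two results averaged (parameter names and the Gaussian range kernel are illustrative assumptions, in the spirit of Thévenaz et al.'s BEEPS):

```python
import numpy as np

def _beeps_1d(x, lam=0.8, sigma=0.1):
    """Forward+backward recursive pass over a 1-D signal."""
    def sweep(seq):
        y = np.empty_like(seq)
        y[0] = seq[0]
        for k in range(1, len(seq)):
            # Range gain: smooth strongly across small differences,
            # weakly across edges (large differences).
            r = np.exp(-0.5 * ((seq[k] - y[k - 1]) / sigma) ** 2)
            rho = lam * r
            y[k] = (1.0 - rho) * seq[k] + rho * y[k - 1]
        return y
    fwd = sweep(x)
    bwd = sweep(x[::-1])[::-1]
    return 0.5 * (fwd + bwd)

def beeps_2d(img, lam=0.8, sigma=0.1):
    """Apply the 1-D smoother horizontally-then-vertically and
    vertically-then-horizontally, then average the two results,
    as described for the two-dimensional case above."""
    def hv(a):
        a = np.apply_along_axis(_beeps_1d, 1, a, lam, sigma)
        return np.apply_along_axis(_beeps_1d, 0, a, lam, sigma)
    def vh(a):
        a = np.apply_along_axis(_beeps_1d, 0, a, lam, sigma)
        return np.apply_along_axis(_beeps_1d, 1, a, lam, sigma)
    return 0.5 * (hv(img.astype(float)) + vh(img.astype(float)))
```

On a flat region the gain is maximal and the signal is smoothed; across a strong step the gain vanishes and the edge passes through untouched, which is precisely the edge-preserving behaviour wanted for the high-frequency subbands.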

3.2. High-Frequency Coefficient Fusion Rules

The gray-similarity value is calculated as the fusion rule for the high-frequency coefficients. The high-frequency component of the near-infrared image and the high-frequency component of the visible-light image are compared through the gray-value difference of each component between two neighboring points: for each image, the gray-value difference of its high-frequency component between the two points measures the local edge strength, and the component with the larger difference dominates the fusion. According to this fusion rule, the fused high-frequency coefficient is obtained.
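The exact gray-similarity expression is not legible in this excerpt; as a stand-in with the same intent (keep the coefficient carrying the stronger local detail), a magnitude-based selection rule can be sketched as:

```python
import numpy as np

def fuse_high_frequency(h_nir, h_vis):
    """Stand-in high-frequency fusion rule: at each position keep the
    coefficient with the larger magnitude, since strong high-frequency
    responses mark edges and texture worth preserving. The paper's
    gray-similarity weighting is not reproduced here."""
    return np.where(np.abs(h_nir) >= np.abs(h_vis), h_nir, h_vis)
```

A soft variant would instead blend the two coefficients with weights derived from the similarity measure, rather than selecting hard per-pixel winners.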

3.3. NSST Low-Frequency Unsharp Masking Enhancement

The low-frequency components after NSST multiscale, multidirectional decomposition contain most of the energy of the image; at the same time, being contaminated by fog, they need to be filtered and enhanced. In this paper, an improved low-frequency unsharp masking enhancement algorithm is adopted to process the NSST low-frequency components.

Unsharp masking enhances an image by combining the original image with information derived from its blurred version. Generally, the procedure of unsharp masking is as follows: (1) Blur the original image with a blurring template to obtain a blurred image. (2) Subtract the blurred image from the original image to obtain the difference image, also called the mask template. (3) Multiply the mask template by an appropriate coefficient and add it to the original image to obtain an image with enhanced high-frequency details.

In the following, the unsharp mask method will be improved to make it applicable for the enhancement of the low-frequency component of NSST.

The NSST low-frequency image is degraded by thin fog: atmospheric scattering makes the image blurry, the edges and details unclear, and the contrast low. The atmospheric scattering model of the NSST low-frequency image is established [18]:

$$E(d, \lambda) = E_{0}(\lambda)\, e^{-\beta(\lambda)\, d},$$

where $d$ is the distance from the image sensor to the scene, $\lambda$ is the wavelength of the light, $\beta(\lambda)$ is the atmospheric scattering coefficient, and $E_{0}(\lambda)$ is the radiant emittance of the incident light at the scene point.

For an image $I$, the blurred image is $I_{b} = I * h$, where $I$ is the original image and $h$ is the equivalent blurring filter, i.e., the point spread function mentioned above.

The mask template is $M = I - I_{b}$.

The image after low-frequency unsharp mask is

Low-frequency unsharp masking is conducted on the two NSST low-frequency coefficients to obtain the enhanced low-frequency images.

3.4. Rule of Low-Frequency Fusion

The simple average is the most common fusion method for low-frequency subband coefficients, but it can reduce the contrast of the image to a large extent and cannot carry the useful information of the source images into the fused image. As a measure of image sharpness, the local-area standard deviation represents the intensity of grayscale change within an image region. Regions with obvious grayscale variation are usually regions where image features are concentrated, so image features can be extracted effectively by taking advantage of the local-area standard deviation. Therefore, in this section, the local-region standard deviation is used as the fusion rule for the low-frequency coefficients. The specific procedure is as follows:

First, calculate the local-region standard deviations of the two low-frequency coefficients over a small fixed-size neighborhood. The formula is as follows:

Then, select the fused low-frequency coefficient according to formulas (15) and (16). The fusion rule compares the difference between the local-area standard deviations of the infrared image and the visible light image with a threshold: if the difference exceeds the threshold, the coefficient of the image with the larger standard deviation is taken; otherwise, the average of the two coefficients is taken. The choice of the threshold value is therefore important; at present, it is selected empirically and generally ranges from 0.1 to 0.3.

The low-frequency coefficient after fusion is obtained according to the above procedure.

3.5. Image Reconstruction in NSST Domain

The processed intensity channel image is obtained by conducting inverse NSST on the fused high-frequency and low-frequency coefficients.

Figure 4 shows the fusion result of the intensity channel of the visible light image and the near-infrared image under the NSST fusion algorithm. The results show that the edge details of the image are enhanced, and its clarity and contrast are improved.

3.6. Defogging of Saturation Channel

The degradation model of the saturation channel image has the same form as the image degradation model in foggy weather, whose parameters are the intensity of the light-source illumination, the saturation channel image on foggy days, the fog-free saturation channel image, and the transmittance (also known as the medium transmission map), which usually decays exponentially with the scene depth (the distance between the image sensor and the scene).

Based on statistics over a large number of images, He et al. [8] observed that, in local areas of the non-sky portion of a fog-free image, at least one color channel of some pixels has a very low intensity value, close to 0. This is the dark channel, defined as the minimum over the R, G, and B channels of the image within a neighborhood centered on each pixel.

According to the dark channel prior, the transmittance can be estimated:

The intensity of the light source is taken as the highest intensity in the dark channel among the color channels:

Substituting the estimated transmittance and the light-source intensity into formula (20), the restored saturation channel image is obtained:
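A sketch of this saturation-channel defogging, adapting the dark channel prior of He et al. [8] to a single channel; the weight `omega` and lower bound `t0` are common illustrative values, not parameters stated in the paper:

```python
import numpy as np

def local_min(img, r=1):
    """Dark-channel-style local minimum over a (2r+1) x (2r+1) window."""
    pad = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.full((h, w), np.inf)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, pad[dy:dy + h, dx:dx + w])
    return out

def defog_saturation(s_foggy, omega=0.95, t0=0.1, r=1):
    """Restore the saturation channel using dark-channel-estimated
    transmission: J = (I - A) / t + A."""
    A = float(s_foggy.max())                                 # light-source intensity
    t = 1.0 - omega * local_min(s_foggy / max(A, 1e-6), r)   # estimated transmission
    t = np.maximum(t, t0)                                    # keep t away from zero
    return (s_foggy - A) / t + A

s = np.array([[0.2, 0.2], [0.2, 0.9]])
restored = defog_saturation(s)
```

The brightest pixel keeps its value while hazy low-saturation pixels are pushed back down, which is the behavior the restoration formula describes.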

3.7. Reverse Mapping of HSI Image

The fused image of the near-infrared image and the intensity component, the stretched (defogged) saturation component, and the hue component of the original image are taken as the new intensity, saturation, and hue components in HSI color space. The HSI image is then transformed back into an RGB image, as follows [24]:

When 0° ≤ H < 120°:

When 120° ≤ H < 240°:

When 240° ≤ H < 360°:
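The standard sector-wise HSI-to-RGB conversion (as described in [24]) can be written per pixel as follows, with H in radians:

```python
import math

def hsi_to_rgb(h, s, i):
    """Per-pixel HSI -> RGB using the three hue sectors (H in radians,
    S and I in [0, 1])."""
    def f(h):
        # Shared term: 1 + S*cos(H) / cos(60 deg - H)
        return 1 + s * math.cos(h) / math.cos(math.pi / 3 - h)
    if h < 2 * math.pi / 3:                # sector 1: RG (0 to 120 deg)
        b = i * (1 - s)
        r = i * f(h)
        g = 3 * i - (r + b)
    elif h < 4 * math.pi / 3:              # sector 2: GB (120 to 240 deg)
        h -= 2 * math.pi / 3
        r = i * (1 - s)
        g = i * f(h)
        b = 3 * i - (r + g)
    else:                                  # sector 3: BR (240 to 360 deg)
        h -= 4 * math.pi / 3
        g = i * (1 - s)
        b = i * f(h)
        r = 3 * i - (g + b)
    return r, g, b

gray = hsi_to_rgb(0.0, 0.0, 0.5)          # zero saturation -> equal channels
red = hsi_to_rgb(0.0, 1.0, 1.0 / 3.0)     # pure red: H=0, S=1, I=1/3
```

Zero saturation maps to a gray pixel and a fully saturated H = 0 maps to pure red, consistent with the sector formulas.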

4. Image Quality Evaluation System

Image quality evaluation standards can be classified into three categories: full reference, reduced reference, and no reference image quality evaluation. Full reference and reduced reference evaluations require a clear image corresponding to the foggy image to be used as the reference image; unless a synthetic foggy image is available, this requirement is hard to meet in practical applications. So, in the field of image defogging, no-reference indicators are used extensively, such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), average gradient (AVG), information entropy, and global contrast.

The main purpose of image defogging is to improve the visibility of foggy images. A good defogging algorithm is required not only to improve the visibility, edge, and texture information of the image but also to maintain its structure and color information. An image with higher visibility also has more obvious edge and texture information. Therefore, image quality evaluation methods should compare the visibility, color restoration, and structural similarity achieved by different defogging algorithms.

4.1. Evaluation Standards of Image Visibility

The first two indicators of blind evaluation [34], image visibility measurement (IVM), image contrast gain, and visual contrast measurement (VCM) can be used to compare the visibility of images.

4.1.1. Indicators of Blind Evaluation

The first two indicators of blind evaluation use the enhanced intensity of image edges to express the enhancement of image visibility. The first indicator is the increasing rate of visible edges after defogging, computed from the numbers of visible edges in the defogged image and the original image. For some dense foggy images, the number of visible edges in the original foggy image can be 0; in that case, equation (28) is converted to normalize by the image size instead. The larger the value is, the more the visibility is improved. This indicator represents the enhancement of image visibility through the increase in the number of visible edges.
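A sketch of this edge-increase indicator; the gradient-magnitude threshold used to decide which edge pixels count as "visible" is an illustrative choice, not the detector specified in [34]:

```python
import numpy as np

def visible_edges(img, thresh=10.0):
    """Count 'visible' edge pixels via a simple gradient-magnitude
    threshold (the exact edge detector is a free choice here)."""
    gy, gx = np.gradient(img.astype(float))
    return int(np.count_nonzero(np.hypot(gx, gy) > thresh))

def edge_increase_rate(defogged, foggy):
    """(n_r - n_0) / n_0; falls back to normalizing by image size when
    the foggy image has no visible edges (dense fog)."""
    n0 = visible_edges(foggy)
    nr = visible_edges(defogged)
    if n0 == 0:
        return nr / (foggy.shape[0] * foggy.shape[1])
    return (nr - n0) / n0

foggy = np.full((8, 8), 50.0)                 # dense fog: no visible edges
defogged = np.zeros((8, 8))
defogged[:, 4:] = 100.0                       # defogging reveals a sharp edge
rate = edge_increase_rate(defogged, foggy)
```

Because the foggy image has no visible edges, the rate is normalized by the image size, matching the dense-fog fallback in the text.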

The second indicator uses the degree of gradient enhancement to express the degree of restoration of image edges and texture; a large value shows that the corresponding defogging algorithm preserves edges better. In its calculation formula, the terms are the gradients of the defogged image and the original image, respectively, evaluated over the visible-edge set of the defogged image.

This indicator can thus be used to evaluate the restoration of edges.

4.1.2. Image Visibility Measurement (IVM)

Inspired by the blind evaluation indicators, Yu et al. [35] proposed another image visibility measuring method based on visible-edge segmentation. IVM is defined in terms of the number of visible edges, the total number of edges, the average contrast, and the image region of the visible edges.

4.1.3. Image Contrast

The contrast of clear images is usually much higher than that of foggy images, so image contrast can be used to compare different defogging algorithms: the higher the contrast of the defogged image is, the better the performance of the defogging algorithm is. The contrast gain is the average contrast difference between the defogged image and the original foggy image, which represents the difference in image contrast more intuitively. It is computed from the average contrasts of the defogged image and the foggy image, where the average contrast of an image is the mean of the local contrast of the image in each mini-window.

where the radius defines the local region. The larger the contrast gain is, the better the effect of the defogging algorithm is.
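One way to sketch the contrast gain, using |pixel − local mean| / local mean as the local contrast (an assumption; the paper's exact mini-window formula is not reproduced here):

```python
import numpy as np

def mean_local_contrast(img, r=1):
    """Average local contrast over (2r+1) x (2r+1) windows:
    mean of |pixel - local mean| / local mean (one common definition)."""
    k = 2 * r + 1
    p = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    local_sum = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            local_sum += p[dy:dy + h, dx:dx + w]
    local_mean = local_sum / (k * k)
    return float(np.mean(np.abs(img - local_mean) / np.maximum(local_mean, 1e-6)))

def contrast_gain(defogged, foggy):
    """Average contrast of the defogged image minus that of the foggy one."""
    return mean_local_contrast(defogged) - mean_local_contrast(foggy)

flat = np.full((4, 4), 50.0)                       # foggy: uniform, zero contrast
checker = np.tile([[10.0, 90.0], [90.0, 10.0]], (2, 2))  # defogged: high contrast
gain = contrast_gain(checker, flat)
```

A uniform foggy image has zero average contrast, so any recovered structure yields a positive gain.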

4.2. Evaluation Standard for Color Restoration

A blind evaluation indicator can be used to evaluate the color restoration performance of defogging algorithms: the proportion of saturated pixels in the defogged image, which is calculated as follows [36]:

where the denominator is the size of the image and the numerator counts the black and white pixels of the enhanced image that are not pure black or white pixels in the original foggy image. The smaller the value is, the better the effect of the defogging algorithm is. However, when different algorithms are compared, this indicator is not flawless: the defogged outputs of some algorithms, especially Retinex-based ones, need to be mapped into the display range [0, 255] through a gain/offset operation, which can turn some pixels into black or white pixels to increase the displayed dynamic range, and this inflates the indicator's value.
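A sketch of this indicator, counting pixels that become pure black or white only after defogging:

```python
import numpy as np

def saturated_pixel_ratio(defogged, foggy, lo=0, hi=255):
    """Fraction of pixels that are pure black or white in the defogged
    image but were not saturated in the original foggy image."""
    newly_saturated = (((defogged <= lo) | (defogged >= hi))
                       & (foggy > lo) & (foggy < hi))
    return newly_saturated.sum() / defogged.size

foggy = np.array([[10, 100], [200, 250]])
defogged = np.array([[0, 100], [255, 250]])   # two pixels newly clipped
sigma = saturated_pixel_ratio(defogged, foggy)
```

Here two of the four pixels become saturated only after defogging, so the indicator is 0.5; a lower value indicates better color restoration.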

4.3. Structural Similarity of Image

The structural similarity of image (SSIM) [37] and universal quality index (UQI) [38] are used to evaluate the structural similarity between the original foggy image and the enhanced image. Both traditional SSIM and UQI standards use a high-quality image as the reference, so the higher the SSIM and UQI are, the higher the image quality is. However, in practical image defogging, the original foggy image is generally used as the reference, so large SSIM and UQI values do not imply high image quality. For example, the SSIM index of two identical foggy images is necessarily larger than the SSIM index between the foggy image and the enhanced image. So, the enhanced image with the best visibility might have the smallest SSIM and UQI. In addition, removing fog can change the image structure, which also makes SSIM and UQI small.
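For illustration, the global universal quality index of Wang and Bovik [38] can be computed directly (SSIM behaves similarly but over local windows); identical images score exactly 1:

```python
import numpy as np

def uqi(x, y):
    """Global Universal Quality Index: combines correlation, luminance,
    and contrast; equals 1 only when x == y."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

a = np.arange(16.0).reshape(4, 4)
q_same = uqi(a, a)          # identical images
q_shift = uqi(a, a + 20.0)  # a luminance shift lowers the index
```

This illustrates the caveat in the text: even a simple brightness change, as defogging typically produces, drives the index below 1 regardless of visual quality.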

The above evaluation indicators are integrated into the VIFB quantitative evaluation system [29] to evaluate each comparison algorithm more objectively and faithfully. VIFB is specifically designed to evaluate image fusion performance and provides a functional interface for each evaluation indicator. Together with the indicators above, a total of 12 evaluation indicators form the quantitative evaluation system of this article, as shown in Table 1.


Indicator | Meaning | Indicator | Meaning

e | Increasing rate of visible edges | r̄ | Edge preserving
σ | Color restoration | SSIM | Structural similarity index measure
CE | Cross entropy | AG | Average gradient
EN | Entropy | EI | Edge intensity
MI | Mutual information | SD | Standard deviation
PSNR | Peak signal-to-noise ratio | UQI | Universal quality index

5. Results and Discussion

To verify the effectiveness of the proposed method, dense fog and thin fog data are selected for comparison; the dense fog images and the thin fog images are taken from the EPFL database. In the experiments, the algorithm is implemented in Matlab 2018b on a computer with a 3.2 GHz Intel i7 processor and 64 GB RAM, with and without CSC, respectively. The results of the proposed algorithm are compared with those of current state-of-the-art single/multi-image defogging algorithms. For single-image defogging, the results of He et al. [8], Tarel and Hautière [9], and Ancuti C.O. and Ancuti C. [14] are selected for comparison; the method of Ancuti C.O. and Ancuti C. defogs a single image by fusion. For visible-near-infrared fusion defogging, the results of Zhou et al. [20], Schaul et al. [21], Son and Zhang [22], and Li and Wu [23] are selected. For deep learning-based visible-near-infrared fusion defogging, the results of Li et al. [26], Liu et al. [27], and Li et al. [28] are used. The number of training iterations is set to 400, and the Adam optimizer is used for all optimization. The training set uses the images provided by the VIFB and EPFL datasets. To improve training efficiency, all pictures are cropped to a uniform size for training; to improve dataset utilization and avoid overfitting of the deep learning networks, the cropped images are augmented by rotation, enlarging the data to 6 times the original dataset, i.e., 46,080 groups. The data augmentation method is shown in Figure 5.

Four pictures each are selected from the dense and thin fog databases. As shown in Figures 5–13, in each figure, (a) is the visible light image, (b) is the near-infrared image, (c) is the processing result of He et al. [8], (d) is the processing result of Tarel and Hautière [9], (e) is the processing result of Ancuti C.O. and Ancuti C. [14], (f) is the processing result of Zhou et al. [20], (g) is the processing result of Schaul et al. [21], (h) is the processing result of Son and Zhang [22], (i) is the processing result of Li and Wu [23], (j) is the processing result of Li et al. [26], (k) is the processing result of Liu et al. [27], (l) is the processing result of Li et al. [28], (m) is the processing result of the proposed algorithm without CSC, and (n) is the processing result of the proposed algorithm with CSC. All algorithms use their default settings, and the average execution time over 360 runs is recorded. The running times of these algorithms are compared with that of the proposed algorithm in Table 2.


Method | Time [s]

He et al. [8] | 470.4504
Tarel and Hautière [9] | 17.7399
Ancuti C.O. and Ancuti C. [14] | 29.4427
Zhou et al. [20] | 48.5624
Schaul et al. [21] | 69.7975
Son and Zhang [22] | 98.8906
Li and Wu [23] | 290.1047
Li et al. [26] | 20.8263
Liu et al. [27] | 30.9012
Li et al. [28] | 23.7148
Ours without CSC | 9.2482
Ours with CSC | 11.0204

The results of He et al. [8] and Tarel and Hautière [9] enhance the contrast of foggy images. He's algorithm can make the colors of non-sky regions look darker and lose some details, so the final image can seem unnatural, while Tarel and Hautière's method [9] retains more details and noticeably improves the image appearance in terms of color saturation. The results of Ancuti C.O. and Ancuti C. [14] are similar to those of Tarel and Hautière [9] on foggy images but lack color reproduction; in some cases, the processed images look brighter, yet, compared with other results, details in some regions may be lost. For non-sky regions, the algorithm of Ancuti C.O. and Ancuti C. [14] achieves better contrast and color reproducibility than the other algorithms. Zhou et al. [20] give results similar to Tarel and Hautière [9] on foggy images, with plenty of detail and color output. For distant objects, Zhou et al. [20] and Li and Wu [23] recover more details than He et al. [8], Tarel and Hautière [9], and Ancuti C.O. and Ancuti C. [14], while for non-sky regions, the output of Zhou et al. [20] appears darker.

Compared with He et al. [8], Tarel and Hautière [9], and Ancuti C.O. and Ancuti C. [14], the methods of Zhou et al. [20], Schaul et al. [21], Son and Zhang [22], and Li and Wu [23] transmit more details, which is mainly contributed by the near-infrared image. The results of Son and Zhang [22] show that details transmit better in regions with thin fog. The algorithm of Yadav et al. [16] performs worse in non-sky regions. Compared with Yadav et al. [16], the method of Schaul et al. [21] achieves better color saturation, but its saturation is lower than that of He et al. [8], Tarel and Hautière [9], and Ancuti C.O. and Ancuti C. [14]. Compared with the traditional methods, the three deep learning-based defogging methods retain more detailed information in the fusion but still have certain defects in color detail.

Compared with the above algorithms, the visibility of our results is greatly increased even without CSC; meanwhile, the results obtained by the proposed algorithm preserve more details and textures under fuzzy conditions. It can be clearly seen that our results show the maximum visibility in foggy regions, both with and without CSC.

At the same time, we also use the VIFB evaluation system introduced in the previous section to quantitatively evaluate the image defogging effect. Figure 14 shows a quantitative comparison between the various methods and the method of this article. To further demonstrate the quantitative comparison, Table 3 shows the averages of the 12 evaluation indicators for all methods over the 8 image pairs. It can be seen that the proposed algorithm scores relatively high on multiple indicators, especially those reflecting color performance after defogging. Among the comparison methods, some algorithms perform better on a single indicator; their authors may have paid more attention to a certain type of information when designing them. In addition, it can be seen from the table that the deep learning-based methods do not show better performance than the traditional fusion algorithms, revealing certain limitations of current deep learning methods in this field.


Method | e | σ | r̄ | SSIM | UQI | CE | AG | EN | EI | MI | SD | PSNR

He et al. [8] | 1.4732 | 0.0446 | 1.6338 | 1.6316 | 0.9188 | 1.6479 | 6.4491 | 6.9959 | 44.6196 | 1.5417 | 42.8358 | 57.2518
Tarel and Hautière [9] | 1.9589 | 0.0000 | 3.4843 | 1.7318 | 0.8976 | 2.0167 | 8.2279 | 7.5873 | 58.9731 | 1.6047 | 50.3948 | 56.5191
Ancuti C.O. and Ancuti C. [14] | 1.5277 | 0.4413 | 3.3632 | 1.8601 | 0.8189 | 1.9802 | 9.4472 | 7.4199 | 84.4507 | 1.3055 | 61.3855 | 57.3773
Zhou et al. [20] | 1.4265 | 0.0395 | 3.7925 | 1.6001 | 0.7275 | 1.9378 | 9.3067 | 8.1847 | 72.4889 | 1.4054 | 61.0167 | 59.1760
Schaul et al. [21] | 1.1888 | 0.0000 | 2.6428 | 1.6929 | 0.6926 | 1.9397 | 7.7943 | 6.5957 | 74.8722 | 2.1156 | 58.4320 | 57.5806
Son and Zhang [22] | 1.0602 | 0.0585 | 3.6798 | 1.7517 | 0.7027 | 1.9862 | 7.7717 | 7.1681 | 76.2423 | 2.4377 | 40.2686 | 59.6547
Li and Wu [23] | 1.1085 | 0.0000 | 3.6014 | 1.7203 | 0.7595 | 2.1939 | 9.2300 | 7.8570 | 100.9518 | 2.3862 | 67.3378 | 58.9492
Li et al. [26] | 1.2620 | 0.1401 | 3.4825 | 1.6755 | 0.7135 | 1.9755 | 6.7187 | 7.5157 | 57.4804 | 1.7159 | 72.2351 | 59.3997
Liu et al. [27] | 1.3505 | 0.1136 | 3.3020 | 1.7447 | 0.7870 | 1.5075 | 8.2411 | 7.7173 | 78.1802 | 2.1345 | 55.9789 | 59.1252
Li et al. [28] | 1.6731 | 0.0577 | 4.2859 | 1.7264 | 0.8229 | 2.0047 | 5.9415 | 6.9831 | 50.2864 | 2.1242 | 41.7718 | 59.6178
Ours | 1.8175 | 0.0000 | 4.5035 | 1.7026 | 0.7740 | 1.7080 | 9.6249 | 7.8970 | 94.5903 | 1.9963 | 68.9191 | 59.5652
Ours + CSC | 2.1447 | 0.0000 | 4.6751 | 1.7936 | 0.7869 | 1.7522 | 9.6450 | 7.9728 | 95.5726 | 2.4346 | 70.4060 | 59.8374

Through the above simulation and analysis, it can be concluded that, compared with existing methods, the proposed method transmits most of the detailed edge and outline information of the near-infrared image to the final defogged image, with less information loss and fewer fusion artifacts. It effectively improves the visibility of the foggy image and retains its color information while greatly reducing the amount of calculation, so it can be used for real-time image defogging.

6. Conclusions

By combining near-infrared and visible image fusion with defogging, a new defogging algorithm is put forward in this paper: it converts the color visible image into HSI color space, acquires edges, outlines, and other detailed information by fusing its intensity component with the near-infrared image, maps the result back to RGB color space, and finally conducts CSC color correction on the fused image.

This algorithm has the following advantages: (1) the near-infrared image is introduced as an auxiliary defogging image; it contains edges, outlines, and other detailed information that cannot be acquired from the visible image, so the detail of the defogged image is increased; (2) the visible image is converted into HSI color space and each component is processed separately, so the color information of the original image is preserved completely and the image contrast is enhanced; (3) CSC color correction is conducted on the fused image, which not only compensates for the detail loss caused by the uneven illumination resulting from fog but also makes the colors of the image more natural; (4) while each algorithm is analyzed qualitatively, the evaluation indicators are also computed through the function interface provided by the VIFB evaluation system, so the various algorithms are evaluated objectively and comprehensively.

This paper also compares the proposed algorithm with current efficient single- and multiple-image defogging methods, and the experimental results show that the proposed algorithm achieves good results on multiple evaluation indicators, providing a new technological approach for image defogging. In follow-up research, we will focus on combining traditional algorithms with deep learning methods.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61861025, 61562057, 61761027, and 51669010; in part by the Program for Changjiang Scholars and Innovative Research Team in University of the Ministry of Education of China under Grant IRT_16R36; and in part by the Opening Foundation of the Key Laboratory of Opto-Technology and Intelligent Control (Lanzhou Jiaotong University), Ministry of Education, under Grant KFKT2018-.

References

1. D. Zhao, L. Xu, Y. Yan, J. Chen, and L. Y. Duan, “Multi-scale optimal fusion model for single image dehazing,” Signal Processing: Image Communication, vol. 74, pp. 253–265, 2019.
2. S. Santra, R. Mondal, and B. Chanda, “Learning a patch quality comparator for single image dehazing,” IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4598–4607, 2018.
3. Z. Li and J. Zheng, “Single image de-hazing using globally guided image filtering,” IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 442–450, 2018.
4. A. Shrivastava and S. Jain, “Single image dehazing based on one dimensional linear filtering and adoptive histogram equalization method,” in International Conference on Electrical, pp. 4074–4078, Chennai, India, 2016.
5. P. Babu and V. Rajamani, “Contrast enhancement using real coded genetic algorithm based modified histogram equalization for gray scale images,” International Journal of Imaging Systems and Technology, vol. 25, no. 1, pp. 24–32, 2015.
6. A. Galdran, A. Bria, A. Alvarez-Gila, J. Vazquez-Corral, and M. Bertalmío, “On the duality between retinex and image dehazing,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8212–8221, Salt Lake City, UT, 2018.
7. J. Wang, K. Lu, J. Xue, N. He, and L. Shao, “Single image dehazing based on the physical model and MSRCR algorithm,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 9, pp. 2190–2199, 2018.
8. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
9. J. Tarel and N. Hautière, “Fast visibility restoration from a single color or gray level image,” in 2009 IEEE 12th International Conference on Computer Vision, pp. 2201–2208, Kyoto, Japan, 2009.
10. D. Y. Zhang, J. M. Zhang, and X. M. Wang, “Single image dehazing algorithm based on dark channel prior and inverse image,” Chinese Journal of Electronics, vol. 30, no. 10, pp. 1437–1443, 2017.
11. Y. Xu, X. Guo, H. Wang, F. Zhao, and L. Peng, “Single image haze removal using light and dark channel prior,” in 2016 IEEE/CIC International Conference on Communications in China (ICCC), pp. 1–6, Chengdu, China, 2016.
12. D. Singh and V. Kumar, “Single image haze removal using integrated dark and bright channel prior,” Modern Physics Letters B, vol. 32, no. 4, p. 1850051, 2018.
13. X. Zhu, R. Xiang, F. Wu, and X. Jiang, “Single image haze removal based on fusion darkness channel prior,” Modern Physics Letters B, vol. 31, no. 19-21, p. 1740037, 2017.
14. C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3271–3282, 2013.
15. D. Miyazaki, D. Akiyama, M. Baba, R. Furukawa, S. Hiura, and N. Asada, “Polarization-based dehazing using two reference objects,” in 2013 IEEE International Conference on Computer Vision Workshops, pp. 852–859, Sydney, NSW, 2013.
16. G. Yadav, S. Maheshwari, and A. Agarwal, “Fog removal techniques from images: a comparative review and future directions,” in 2014 International Conference on Signal Propagation and Computer Technology (ICSPCT 2014), pp. 44–52, Ajmer, 2014.
17. T. Zhang, C. Shao, and X. Wang, “Atmospheric scattering-based multiple images fog removal,” in 2011 4th International Congress on Image and Signal Processing, pp. 108–112, Shanghai, China, 2011.
18. X. Jiang and W. Ma, “Research of new method for removal thin cloud and fog of the remote sensing images,” in 2010 Symposium on Photonics and Optoelectronics, pp. 1–4, Chengdu, China, 2010.
19. S. Zhenfeng, L. Jun, and C. Qimin, “Fusion of infrared and visible images based on focus measure operators in the curvelet domain,” Applied Optics, vol. 51, no. 12, pp. 1910–1921, 2012.
20. Z. Zhou, B. Wang, S. Li, and M. Dong, “Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters,” Information Fusion, vol. 30, pp. 15–26, 2016.
21. L. Schaul, C. Fredembach, and S. Süsstrunk, “Color image dehazing using the near-infrared,” in 16th IEEE International Conference on Image Processing (ICIP), pp. 1629–1632, Cairo, 2010.
22. C.-H. Son and X.-P. Zhang, “Near-infrared fusion via color regularization for haze and color distortion removals,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 11, pp. 3111–3126, 2018.
23. H. Li and X. Wu, “Infrared and visible image fusion using latent low-rank representation,” 2018, https://arxiv.org/abs/1804.08992v4.
24. J. Fu, X. Gao, M. Xu, and W. Wang, “Multi focus and multi-source image fusion based on deep learning model,” in 2nd World Conference on Mechanical Engineering and Intelligent Manufacturing (WCMEIM), pp. 512–515, Shanghai, China, 2019.
25. M. Rout, S. Nahak, S. Priyadarshinee, P. Mohapatra, K. D. Sa, and D. Dash, “A deep learning approach for SAR image fusion,” in 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), pp. 335–339, Kannur, Kerala, India, 2019.
26. H. Li, X. Wu, and J. Kittler, “Infrared and visible image fusion using a deep learning framework,” in 24th International Conference on Pattern Recognition (ICPR), pp. 2705–2710, Beijing, 2018.
27. Y. Liu, X. Chen, J. Cheng, H. Peng, and Z. Wang, “Infrared and visible image fusion with convolutional neural networks,” International Journal of Wavelets, Multiresolution and Information Processing, vol. 16, no. 3, p. 1850018, 2018.
28. H. Li, X.-j. Wu, and T. S. Durrani, “Infrared and visible image fusion with ResNet and zero-phase component analysis,” Infrared Physics & Technology, vol. 102, p. 103039, 2019.
29. X. Zhang, P. Ye, and G. Xiao, “VIFB: a visible and infrared image fusion benchmark,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 2020.
30. N. Nakajima and A. Taguchi, “A novel color image processing scheme in HSI color space with negative image processing,” in 2014 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), pp. 29–33, Kuching, 2014.
31. F. Wu and U. KinTak, “Low-light image enhancement algorithm based on HSI color space,” in 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pp. 1–6, Shanghai, China, 2017.
32. A. Palsokar and C. S. Warnekar, “A RGB-HSI conversion based model to map the colour perception into audio equivalent to assist visually impaired persons,” in 2016 IEEE 6th International Conference on Advanced Computing (IACC), pp. 484–487, Bhimavaram, 2016.
33. C. Fei, P. Zhang, M. Tian, X. Wang, and J. Wu, “Infrared and visible image fusion using saliency detection based on shearlet transform,” in 2016 13th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), pp. 273–276, Chengdu, China, 2016.
34. N. Hautière, J. P. Tarel, D. Aubert, and É. Dumont, “Blind contrast enhancement assessment by gradient ratioing at visible edges,” Image Analysis & Stereology, vol. 27, no. 2, pp. 87–95, 2008.
35. X. Yu, C. Xiao, M. Deng, and L. Peng, “A classification algorithm to distinguish image as haze or non-haze,” in 2011 Sixth International Conference on Image and Graphics, pp. 286–289, Hefei, China, 2011.
36. R. Wu, D. Yu, J. Liu, H. Wu, W. Chen, and Q. Gu, “An improved fusion method for infrared and low-light level visible image,” in 2017 14th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), pp. 147–151, Chengdu, China, 2017.
37. J. Jiang, C. Huang, and T. Liu, “Fusion evaluation of X-ray backscatter image and holographic subsurface radar image,” in 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP), pp. 793–797, Wuxi, China, 2019.
38. Y. Zhou, K. Gao, Z. Dou, Z. Hua, and H. Wang, “Target-aware fusion of infrared and visible images,” IEEE Access, vol. 6, pp. 79039–79049, 2018.

Copyright © 2020 Yubin Yuan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

