Journal of Sensors

Special Issue: Sensors, Signal, and Artificial Intelligent Processing


Research Article | Open Access

Volume 2021 |Article ID 5563698 | https://doi.org/10.1155/2021/5563698

Wencheng Wang, Xiaohui Yuan, Zhenxue Chen, XiaoJin Wu, Zairui Gao, "Weak-Light Image Enhancement Method Based on Adaptive Local Gamma Transform and Color Compensation", Journal of Sensors, vol. 2021, Article ID 5563698, 18 pages, 2021. https://doi.org/10.1155/2021/5563698

Weak-Light Image Enhancement Method Based on Adaptive Local Gamma Transform and Color Compensation

Academic Editor: Bin Gao
Received: 27 Jan 2021
Revised: 06 Feb 2021
Accepted: 10 Mar 2021
Published: 25 Jun 2021

Abstract

In weak-light environments, images suffer from low contrast and the loss of detail. Traditional image enhancement models usually fail to avoid the issue of overenhancement. In this paper, a simple and novel correction method is proposed based on an adaptive local gamma transformation and color compensation, inspired by the illumination-reflection model. The proposed method converts the source image into the YUV color space, where the luminance (Y) component is estimated with a fast guided filter. A local gamma transform function is then used to improve the brightness of the image by adaptively adjusting its parameters. Finally, the dynamic range of the image is optimized by a color compensation mechanism and a linear stretching strategy. Comparison with state-of-the-art algorithms demonstrates that the proposed method adaptively reduces the influence of uneven illumination, avoids overenhancement, and improves the visual effect of low-light images.

1. Introduction

The computer vision system has been widely used in a variety of fields, such as industrial production, video surveillance, intelligent transportation, and remote sensing [1], and plays an increasingly important role in human life. Nevertheless, during image acquisition, many uncontrollable factors lead to various defects in the acquired images. Especially under poor and complex light conditions, such as low light, uneven light, backlight, and hazy conditions, the weak reflection of light from the object’s surface causes color distortion and noise amplification in the images, which seriously affects image quality [2]. As shown in Figure 1, the top row includes uneven-light images, in which uneven illumination can cause some areas of an image to be overexposed while others are underexposed, affecting not only human visual perception but also the accuracy of image segmentation and object recognition, sometimes resulting in the failure of a machine vision system. Therefore, it is of great importance to enhance the contrast and observability of images collected under poor lighting conditions [3–5].

Weak-light image enhancement has become a focus of research in the image processing field, and its interdisciplinary characteristics have attracted considerable attention from researchers worldwide. For example, in a facial recognition system, Oloyede et al. [6] applied a new evaluation function in conjunction with metaheuristic-based optimization algorithms to automatically select the best-enhanced face image. To enhance underwater images, Hou et al. [7] presented a novel underwater color image enhancement approach based on hue preservation by combining the HSI and HSV color models. Fu and Cao [8] combined the merits of deep learning and conventional image enhancement technology to improve the quality of underwater images. To improve the contrast of retinal fundus images, Soomro et al. [9] used independent component analysis (ICA) for image enhancement to effectively achieve quick and accurate segmentation of the eye vessels. Kallel et al. [10] proposed a new enhancement algorithm dedicated to computed tomography (CT) scans based on the discrete wavelet transform with singular value decomposition (DWT–SVD) followed by adaptive gamma correction (AGC), which consistently produced good contrast enhancement with excellent brightness and edge detail conservation.

However, those conventional methods are limited in adaptivity and tend to overenhance some local areas in the case of uneven illumination. It is also difficult for them to strike a balance between computational complexity and visual effect. Therefore, in this work, a self-adaptive enhancement method inspired by the illumination-reflection model is proposed for processing images from uneven-light environments. Figure 1 shows samples processed with our method. This method can effectively enhance the visual effect of an image, revealing more details in dark areas while preserving the overall detail information, thus providing a valuable reference for the study of the correction of images acquired under uneven lighting conditions. The contributions are as follows:
(1) It is a simple and effective image enhancer based on a novel local gamma transformation and the illumination-reflection model.
(2) The method includes a color compensation mechanism, making it suitable for processing color images captured by monitoring systems.
(3) The proposed method adjusts its parameters according to the light distribution and adaptively reduces the influence of uneven illumination on the image.
(4) Our method produces satisfactory results with low computational complexity.

The rest of this paper is organized as follows. In Section 2, some related works are summarized. Section 3 introduces the flowchart of the proposed method. In Section 4, the comparisons of the experimental results are presented. Finally, the research work is concluded in Section 5.

2. Related Works

Traditional enhancement methods for weak-light images include histogram equalization (HE) and grayscale transformation (GT) [11, 12], which usually obtain their correction parameters from the cumulative probability distribution of gray values. For example, Huang et al. [13] proposed a gamma correction algorithm that adaptively obtains the gamma correction parameters based on the cumulative probability distribution. Later, Liu et al. proposed a low-light image enhancement method based on the optimal hyperbolic tangent function [14]. In Ref. [15], a block-iterative histogram method was used to enhance the contrast of an image while processing each part of the image with partially overlapped subblock histogram equalization (POSHE) using a moving template. Subsequently, Chen and Ramli proposed the minimum mean brightness error bihistogram equalization (MMBEBHE) algorithm [16] to minimize the error between the mean brightness values of the output image and the original image. Celik and Tjahjadi [17] proposed the contextual and variational contrast enhancement (CVC) algorithm, which performs nonlinear data mapping using context information and a 2D gray histogram to achieve contrast improvement. These methods are simple in their computation rules and low in computational complexity but are prone to various processing issues, such as color loss and noise amplification. Huang et al. proposed an effective three-step image enhancement strategy named contrast-limited dynamic quadri-histogram equalization (CLDQHE), which yields pleasing results while preserving brightness and structures [18].

Based on the computational theory of color constancy, Jobson et al. proposed the single-scale retinex (SSR) algorithm, which was further developed into various multiple-scale retinex (MSR) algorithms, such as the multiscale retinex with color restoration (MSRCR) algorithm [19, 20] and the multiscale retinex with chromaticity preservation (MSRCP) [21]. Later, Fu et al. presented a weighted variational model to estimate illumination from a weak-light image, which can not only extract the reflection information accurately but also suppress the amplification of noise [22]. Wang et al. [23] introduced an NPE (naturalness preserved enhancement) algorithm with a bright-pass filter to preserve the image naturalness by integrating the neighborhood information of pixels, thereby improving the image contrast while avoiding excessive local enhancement. In 2011, Dong et al. [24] inverted low-light images to generate images similar to those acquired on foggy days and then used a defogging algorithm to enhance the contrast of the original images. Later, Zhang et al. proposed a real-time enhancement framework combining dehazing methods with bilateral filtering techniques, in which the DCP (dark channel prior) model is used for parameter optimization and a joint bilateral filter is used to reduce noise interference [25]. Park et al. introduced the bright channel prior (BCP) and combined BCP estimation with retinex theory to realize weak-light image enhancement [26], achieving good results. Such methods effectively enhance the details in the dark areas of an image, but they also incur high computational complexity and tend to produce halo effects in dark areas.

In the past decade, machine-learning-based techniques have been widely adopted to improve the contrast of weak-light images [27]. For example, Lore et al. [28] adopted the SSDA (stacked sparse denoising autoencoder) method to develop an image enhancer based on the simulation of a low-light environment, in which a machine-learning algorithm was used to train a self-encoder that adaptively adjusts the brightness of low-illumination image signals. Shen et al. [29] analyzed the performance of the MSR algorithm from the perspective of CNNs and designed an MSR network with a CNN architecture for enhancing low-light images. Tao et al. proposed an LLCNN (low-light convolutional neural network) model for image enhancement based on deep learning, in which the enhanced images are generated from multilevel feature maps after learning on a low-light image database [30]. Park et al. introduced retinex theory into the deep learning framework and proposed a double self-encoding network [31], in which a convolutional autoencoder and a stacked autoencoder are used to achieve brightness enhancement and noise suppression. Inspired by image-fusion-based methods, the authors of [32] developed a single-image enhancer by combining an image-fusion-based technique with an end-to-end CNN model, built on a multiexposure image dataset with different contrast scales. Methods of this kind offer a good image enhancement effect, but their computational models often require an excessively long training time or too many expensive resources.

3. Framework of Proposed Method

According to the basic principle of imaging, an image is produced by the light that is reflected or emitted from the surface of an object in a scene and reaches the camera. Generally, an image is regarded as a two-dimensional function F(x, y), where the value of the function is the brightness of the pixel at coordinates (x, y), and F(x, y) is the composition of the illumination component I(x, y) that enters the scene and the reflection component R(x, y) from the object surface. The mathematical expression of this illumination-reflection model is as follows:

F(x, y) = I(x, y) · R(x, y) (1)

The spatial relations of this model are illustrated in Figure 2.

It is shown that the intensity of incident light mainly relies on the light source, and its distribution function I(x, y) shows little spatial variation. The spectrum of I(x, y) is mainly concentrated in the low-frequency region, reflecting the lighting environment during the imaging process, while that of the reflection component R(x, y) is mainly concentrated over a wide range in the high-frequency band, corresponding to the image details that reflect the natural attributes of the target. If the illumination in a scene is even, then the illumination component is uniformly distributed in space, and the acquired image is considered to have natural lighting and high visual quality; however, if the illumination in an imaged scene is uneven, then areas with excessively strong illumination will be overexposed, while those with insufficient illumination will be underexposed, causing various visual problems for the human eye. If we can estimate the reflection component, i.e., separate R(x, y) from F(x, y), then we can eliminate the effects of light on imaging, thus helping to achieve the goal of image enhancement [33].

Inspired by the above model and theory, we propose a framework for image enhancement based on an adaptive local gamma transform and color compensation. The proposed method eliminates the associations among color components by changing the color space; thus, the goal of image enhancement is achieved by processing the color components in a different space. First, the source color image is transformed into the YUV space, where the brightness of the scene is estimated from the Y component using a fast guided filtering function; then, local gamma transform enhancement is performed on the image through adaptive adjustment according to the gray distribution of the brightness component. Finally, the contrast of the image is adjusted via grayscale linear stretching, and a color compensation strategy is applied to the RGB image. The whole flowchart is shown in Figure 3.

3.1. Color Space Conversion

As known from the neural mechanism of the visual perception system, the human eye is more sensitive to luminance than to color; thus, the enhancement of luminance is the key to the proposed algorithm for the correction of unevenness in illumination. For color images, the chrominance information and brightness information cannot be effectively distinguished in the RGB (red, green, blue) color space; consequently, applying a direct correction to the three channels in the RGB color space not only leads to color distortion but also increases the computational load. By contrast, in the YUV color space, each color corresponds to two chrominance components (U and V) and one brightness component (Y); hence, the separation of brightness and chrominance makes it possible to alter the illumination intensity without affecting the color. Therefore, in this study, we propose a YUV-space grayscale-mapping-based chrominance-luminance recombination algorithm in which the luminance component Y is processed for enhancement while leaving the chrominance components U and V unchanged. The relation between the RGB color space and the YUV color space is [33]:

Y = 0.299R + 0.587G + 0.114B
U = −0.147R − 0.289G + 0.436B (2)
V = 0.615R − 0.515G − 0.100B

After the conversion to YUV space, the images corresponding to each component are as shown in Figure 4. In Figure 4, (a) is the source color image, (e) is the corresponding gray image of (a), (b–d) are the three components R, G, and B, respectively, and (f–h) are the components Y, U, and V, respectively.
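To make the conversion concrete, the forward transform can be sketched as follows. This is a minimal example assuming the common BT.601-style coefficients shown in Eq. (2); the paper's exact matrix is taken from Ref. [33] and may differ slightly in rounding.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an RGB image (float values in [0, 1], shape (H, W, 3))
    into separate Y, U, V planes using BT.601-style coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b        # luminance
    u = -0.147 * r - 0.289 * g + 0.436 * b       # blue-difference chrominance
    v = 0.615 * r - 0.515 * g - 0.100 * b        # red-difference chrominance
    return y, u, v
```

For a neutral gray pixel (R = G = B), U and V are zero and Y equals the gray level, which is why the luminance can be enhanced independently of color.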

3.2. Estimation of the Illumination Component

To effectively reduce the effect of uneven illumination on image quality, the accurate extraction of the lighting information from a scene is particularly important. Currently, the main methods for extracting the illumination component include the average filter, the bilateral filter, and the Gaussian filter. The average filtering method smooths images by replacing each pixel with the mean value of its neighborhood. It is fast but can be strongly influenced by neighboring pixels. The Gaussian filtering method is poor at retaining edges, causing the extracted illumination component to have fuzzy edges and thus to perform poorly in the retention of detailed information. The bilateral filtering algorithm shows better edge-preservation characteristics but has a very high computational complexity, which limits its use in practical engineering applications. The guided filtering algorithm is a local linear transformation based on a guidance image that obtains the low-frequency information of the image while retaining the edge information, and it has low computational complexity. It is the fastest available edge-retaining filtering algorithm and was therefore used in this study to extract the illumination component [34, 35].

Let the input image, the output image, and the guidance image be denoted by p, q, and I, respectively. Then, for any pixel i within a window ω_k centered on pixel k, the guided filtering process is

q_i = a_k I_i + b_k, ∀i ∈ ω_k

where i is the pixel index and a_k and b_k are the linear transformation factors. Minimizing the reconstruction difference between q and p gives

a_k = ( (1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k ) / (σ_k² + ε)
b_k = p̄_k − a_k μ_k

where σ_k² and μ_k are the variance and the mean value of the guidance image I within the window ω_k, respectively; ε is a parameter that controls the degree of smoothness of the filter; |ω| is the number of pixels in ω_k; and p̄_k is the mean value of the input image p within ω_k. Thus, the output of the filter is

q_i = ā_i I_i + b̄_i

where ā_i and b̄_i are the mean values of a_k and b_k, respectively, within the neighborhood window centered on pixel i.

Therefore, the guided filtering process can be seen as the convolution of the guided filtering function and the original image, which gives rise to the following estimate of the illumination component:

G(x, y) = GF(x, y) ⊗ Y(x, y)

where GF is the guided filter, Y(x, y) is the input image, and G(x, y) is the output image, which denotes the estimate of the luminance component.
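A compact self-guided filter along these lines can be sketched as follows. This is a simplified single-scale version, not the authors' exact fast implementation; the box mean uses an integral image so each window sum costs O(1) regardless of the radius r.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded, via an integral image."""
    pad = np.pad(img, r, mode='edge')
    s = pad.cumsum(0).cumsum(1)
    s = np.pad(s, ((1, 0), (1, 0)))            # zero row/col so S[i, j] = sum(pad[:i, :j])
    n = 2 * r + 1
    h, w = img.shape
    return (s[n:n+h, n:n+w] - s[:h, n:n+w] - s[n:n+h, :w] + s[:h, :w]) / n**2

def guided_filter(p, I, r=8, eps=0.01):
    """He et al.-style guided filter; p is the input, I the guidance (2-D floats).
    With I = p (self-guidance) it acts as an edge-preserving smoother."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2           # sigma_k^2
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                         # linear coefficients a_k
    b = mean_p - a * mean_I                            # offsets b_k
    return box_mean(a, r) * I + box_mean(b, r)         # q_i = a_bar * I_i + b_bar
```

Applying it to the Y plane with a large radius yields a smooth illumination estimate G(x, y) that still follows strong edges.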

To consider both the local and global characteristics of the estimated luminance values, we introduce multiscale guided filtering, which extracts the illumination components of the scene using filtering windows of different scales and weights, ultimately giving rise to the following estimate of the illumination component:

G(x, y) = Σ_{n=1}^{N} w_n · GF_n(Y(x, y)) (8)

where w_n is the weight for the illumination component extracted at the nth scale and N is the number of scales used; G(x, y) is the weighted combination of the illumination components at point (x, y) extracted using the guided filtering function with windows of various scales. In Figure 5, the illumination components extracted using three different scales are shown. In Figure 5(f), the result of the fusion of these three different scales (15, 80, and 250) is shown, where the weight of the illumination component extracted at each scale was set to 1/3.

From Figure 5, it can be seen that the method based on multiscale guided filtering can extract the illumination component from the source image well, describing the variation of illumination while discarding the details, so as to meet the requirements of practical application. However, this method requires multiple filtering operations to be performed on the image. Based on comprehensive consideration of both computational complexity and performance, we propose a calculation method with an adaptive window, in which the window size is 1/4 of the smaller dimension of the image, as follows:

r = ⌊min(H, W)/4⌋

where ⌊·⌋ is the floor function, which rounds down to the nearest integer, and H and W are the height and width of the image, respectively. To demonstrate the advantages of guided filtering, the effects of various filtering methods (including the Gaussian filter, average filter, median filter, and bilateral filter) are compared in Figure 6.

It shows that the guided filter, bilateral filter, and Gaussian filter all yield good descriptions of the illumination variations in the scene, consistent with the distribution of the illumination component. To further compare the edge-retaining characteristics of the guided filter, bilateral filter, and Gaussian filter, we consider the pixels on Line 110 in Figures 6(a) and 6(d)–6(f) as examples. In Figure 7, we present a one-dimensional brightness diagram generated from the grayscale values acquired at the corresponding positions in these images.

Figure 7 shows that Gaussian filtering results in larger deviations in sharp edge regions compared with the edges in the original image, while the fast guided filtering algorithm performs best at approximating the brightness distribution of the original image, especially in the edge areas, while maintaining low computational complexity and a high speed.

3.3. Local Gamma Transform

To adaptively increase the brightness of low-illumination areas while decreasing the brightness of high-illumination areas based on the gray distribution, we attempt to improve and extend the conventional method of gamma correction, which has the following standard form:

O(x, y) = Y(x, y)^γ

where O(x, y) is the corrected brightness with a range of [0, 1], Y(x, y) is the source image to be enhanced, and γ is a control parameter. When γ is less than 1 but greater than 0, the overall brightness increases, and when γ is greater than 1, the overall brightness decreases.

For uniformly overexposed or underexposed images, this algorithm can produce satisfactory results through the adjustment of the parameter γ, but when both overexposed and underexposed areas are present in the same image, it is difficult to achieve satisfactory results with a single parameter applied across the entire image. Therefore, we introduce an algorithm that allows γ to vary with the local information of the image, as follows:

γ(x, y) = c^(2G(x, y) − 1) (12)

where G(x, y) is the illumination extracted from Y(x, y) and c is the base of the exponential function. In general, areas with low illumination need more aggressive correction, so a small γ value should be adopted there; since 2G(x, y) − 1 < 0 in such areas, a greater value of c in Eq. (12) yields a smaller γ. Conversely, in areas with excessively strong illumination, a γ value greater than 1 is needed to suppress the illumination intensity, which Eq. (12) provides when G(x, y) > 0.5. In Ref. [36], a piecewise function is formulated based on whether the mean value of the input image is greater than 0.5. However, for images with both overexposed and underexposed areas, the mean value can be very close to 0.5, and if such images are processed using that algorithm, it is possible that no notable change results, meaning that the actual needs of image correction are not met. Therefore, we formulate the gamma correction parameter so that it varies with the illumination component of the scene and propose an adaptive brightness adjustment function based on this local gamma transformation, which adaptively adjusts the control parameters according to the illumination distribution of the input image, as follows:

O(x, y) = Y(x, y)^γ(x, y) = Y(x, y)^(c^(2G(x, y) − 1)) (13)

According to Eq. (12), when different base values are used, the changes in the output γ with the input illumination G are as shown in Figure 8.

According to Eq. (13), when the base c is set to increasing values, the changes in the correction effect are as shown in Figure 9. Figure 9 shows that as c increases, low pixel values are enhanced and high pixel values are suppressed. This compresses the image’s dynamic range and leads to an overall enhancement in the image brightness, at the cost of reduced contrast.
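The local gamma step can be sketched as follows. Note the adaptive law γ(x, y) = c^(2G(x, y) − 1) is an assumed reconstruction of the paper's Eq. (13) that matches the described behavior (dark areas brightened, bright areas suppressed).

```python
import numpy as np

def local_gamma(y, illum, base=2.0):
    """Adaptive local gamma transform.
    y     : luminance plane, floats in [0, 1]
    illum : estimated illumination G(x, y), floats in [0, 1]
    base  : exponential base c (assumed form: gamma = c**(2G - 1))"""
    gamma = base ** (2.0 * illum - 1.0)   # G < 0.5 -> gamma < 1 (brighten)
    return np.clip(y, 0.0, 1.0) ** gamma  # G > 0.5 -> gamma > 1 (suppress)
```

With base = 2, a dark pixel (Y = G = 0.2) gets γ ≈ 0.66 and is raised, while a bright pixel (Y = G = 0.9) gets γ ≈ 1.74 and is pulled down, exactly the behavior shown in Figure 9.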

3.4. Grayscale Linear Stretching

To mitigate the problem of image gray value concentration, we use a grayscale stretching function to improve the image. A simple linear pointwise operation is performed to expand the histogram of the image to include the entire grayscale range. The rationale for this action is to improve the dynamic grayscale range for image processing.

Let O(x, y) denote the input image, whose minimum grayscale value O_min and maximum grayscale value O_max are defined as follows:

O_min = min O(x, y), O_max = max O(x, y) (14)

By linearly mapping the dynamic range from [O_min, O_max] to the full range [0, 1], the output image is obtained as

S(x, y) = (O(x, y) − O_min) / (O_max − O_min) (15)

As shown in Figure 10, processing with the proposed algorithm expands the dynamic range of the image, facilitating the identification of details in overexposed and underexposed areas of the image.
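The stretching step amounts to a single pointwise affine map; a minimal sketch (with a guard for flat images, an implementation detail not discussed in the text):

```python
import numpy as np

def linear_stretch(img):
    """Map [img.min(), img.max()] linearly onto the full [0, 1] range."""
    lo, hi = float(img.min()), float(img.max())
    if hi - lo < 1e-12:              # flat image: nothing to stretch
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)
```

For example, an image whose values span only [0.2, 0.6] is expanded so that 0.2 maps to 0, 0.4 to 0.5, and 0.6 to 1, which is what widens the histogram in Figure 10.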

3.5. Color Compensation

Using the following formulas, we convert the image back from the YUV color space into the RGB color space using the enhanced Y component while leaving the U and V components unchanged:

R = Y + 1.140V
G = Y − 0.395U − 0.581V (16)
B = Y + 2.032U

where Y is the brightness component and U and V are the two chrominance signal components in YUV space.

However, after the conversion to RGB space using the above method, the image may show a decrease in color saturation. To ensure that the color saturation of the output image is consistent with that of the input image, we adopt the following expressions:

R′ = Y′ · (R/Y)^s
G′ = Y′ · (G/Y)^s (17)
B′ = Y′ · (B/Y)^s

where s is set to an empirical value of 0.5; R, G, and B denote the red, green, and blue components in the RGB space, respectively; and Y and Y′ are the brightness components in YUV space before and after enhancement, respectively.

The images obtained using Eqs. (16) and (17) and their corresponding grayscale histograms are shown in Figure 11. The image obtained using Eq. (17) has better color saturation and higher contrast than that obtained using Eq. (16).
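The compensation step can be sketched as follows, under the assumption that each channel is recombined as C′ = Y′ · (C/Y)^s (a common saturation-preserving form consistent with the symbols described for Eq. (17)); the small epsilon guarding the division is an implementation detail.

```python
import numpy as np

def color_compensate(rgb, y_old, y_new, s=0.5):
    """Recombine enhanced luminance with the original chromatic ratios.
    rgb   : original image, floats in [0, 1], shape (H, W, 3)
    y_old : luminance before enhancement, shape (H, W)
    y_new : luminance after enhancement,  shape (H, W)
    s     : saturation exponent (empirical value 0.5 in the text)"""
    eps = 1e-6                                   # avoid division by zero
    ratio = rgb / (y_old[..., None] + eps)       # per-channel chromatic ratio
    return np.clip(y_new[..., None] * ratio ** s, 0.0, 1.0)
```

A neutral gray pixel keeps a ratio of 1 for every channel, so its output simply takes the new luminance; colored pixels keep part of their original chroma through the exponent s.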

The specific steps of implementation of the above-described adaptive enhancement method for low-light images acquired under uneven illumination are summarized as follows:

(1) Input the weak-light image F
(2) Transform the source image into YUV space using Eq. (2) and separate the luminance component Y and the chrominance components U and V
(3) Extract the illumination component from Y using Eq. (8) to obtain image G
(4) Obtain the enhanced brightness image O using Eq. (13)
(5) Perform linear stretching on image O using Eqs. (14) and (15) to generate the stretched image
(6) Apply color compensation to enhance the saturation of the stretched image using Eq. (17)
(7) Output the enhanced image J
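The seven steps above can be sketched end-to-end as follows. This is a minimal, self-contained approximation: a single box mean with the adaptive window stands in for the multiscale guided filter of Eq. (8), the gamma law γ = c^(2G − 1) is an assumed reconstruction of Eq. (13), and s = 0.5 follows the text.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance(rgb, base=2.0, s=0.5):
    """Sketch of the full pipeline; rgb holds floats in [0, 1], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b                 # step (2): luminance Y
    radius = max(1, min(y.shape) // 4)                    # adaptive window rule
    G = uniform_filter(y, size=2 * radius + 1,
                       mode='nearest')                    # step (3): illumination
    o = np.clip(y, 1e-6, 1.0) ** (base ** (2.0 * G - 1))  # step (4): local gamma
    lo, hi = o.min(), o.max()                             # step (5): linear stretch
    o = (o - lo) / max(hi - lo, 1e-12)
    ratio = rgb / (y[..., None] + 1e-6)                   # step (6): compensation
    return np.clip(o[..., None] * ratio ** s, 0.0, 1.0)   # step (7): output J
```

Swapping the box mean for the guided filter of Section 3.2 recovers the edge-preserving behavior the paper relies on; the overall data flow is unchanged.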

4. Experiments and Analysis

To test the performance of our method, we used an experimental platform consisting of a computer (with an Intel(R) Core(TM) i7-6700 and 16 GB of RAM) and the MATLAB simulation software. The images used for testing included an urban streetscape, some natural scenery, and an indoor scene, all of which share the common features of a large dynamic range and uneven illumination. Some of the experimental results are shown in Figure 12 for the images “Night,” “Bridge,” “Castle,” “Town,” “Girl” [37], “Street,” “Pine,” and “Dawn.” As shown in Figure 12, after processing with the proposed algorithm, the areas with low illumination are enhanced, and those with high illumination are suppressed. The enhanced images are natural in color and clear in detail, indicating that the proposed method can adaptively mitigate the impact of uneven scene illumination on image quality. Next, we compare the processing results of the proposed algorithm with those of various mainstream algorithms in terms of both subjective visual assessment and objective quantitative analysis.

4.1. Subjective Evaluation
4.1.1. Comparison with Traditional Enhancement Methods

In Figure 13, the results of the proposed method and other conventional image enhancement methods are shown. Figure 13(a) shows the original images [37], and Figures 13(b)–13(h) show the experimental results of a linear transformation (LT), histogram equalization (HE), adaptive histogram equalization (AHE), homomorphic filtering (HF), the wavelet transform (WT), the Retinex method, and the proposed method, respectively. The corresponding amplification effects in the areas demarcated by boxes in Figure 13(a) are shown in rows 3 and 6. The results indicate that the images processed using the various methods show changes of varying degrees relative to the original image. For example, Figures 13(c) and 13(g) are significantly enhanced in terms of contrast, showing greater detail but a shift in hue. In addition, the severe “halo” noise in Figure 13(g) results in poor visual quality. Figures 13(e) and 13(f) show no overall hue shift but exhibit inadequate improvement in details and are fuzzy. Figures 13(b) and 13(d) show good overall effects, but excessive enhancement is evident in bright regions due to the linear transformation method, whereas AHE causes the color to be significantly darkened. In contrast, the proposed method yields remarkable improvements in both color and contrast, achieving a better visual effect than the other methods.

4.1.2. Comparison with State-of-the-Art Methods

We further compared the enhancement effect of the proposed method with those of some state-of-the-art methods using “Window” and “Furniture” as test images. The results are shown in Figures 14 and 15. In each of these figures, (a) shows the original image and enlarged views of the areas demarcated by the boxes, and (b–h) show the results obtained using CegaHE [38], CVC [16], the linear dynamic range (LDR) technique [39], DCP [24], MSRCP [21], SRIE [22] and the proposed algorithm, respectively, along with the corresponding enlarged areas. The results show that compared with the original image, the overall visibility and contrast of the enhanced images obtained using the various enhancement methods are greatly improved, achieving good enhancement effectiveness. However, the CegaHE method results in a severe hue shift. The CVC and LDR methods achieve only a slight enhancement while amplifying the noise in the dark regions, while the CVC method is additionally unable to restore color to low-light pixels. The MSRCP and DCP methods enhance the overall image brightness, but the MSRCP method results in overenhancement, while the DCP method shows a significant overenhancement effect in edge regions. Relative to the other methods, the SRIE method and the proposed method both strike a balance between color information and brightness information, thereby achieving good enhancement effects. However, the SRIE method is unable to achieve uniform results for images with alternating bright and dark regions, resulting in inferior overall performance compared to the proposed method. With regard to local details, in the areas of the images demarcated by boxes, the DCP method results in overenhancement and consequent noise at the edges. The CVC and LDR methods lead to underenhancement, the CegaHE and MSRCP methods lead to local overenhancement, and the SRIE method generates shadows in some local areas. 
By contrast, the proposed method shows no excessive amplification of the noise in dark areas in the enhanced image while significantly enhancing the areas that need highlighting without overenhancement, thereby achieving superior sharpness, contrast, and image color.

To further compare the processing effects of the different algorithms, we also tested the algorithms on artificially synthesized images, as shown in Figure 16. In this figure, (a) shows two images acquired under proper lighting, and (b) shows the corresponding low-light images that were synthesized through gamma transformation (with a γ value of 2). (c–h) show the image enhancement results obtained using the different methods. The results indicate that the proposed method can adaptively enhance the brightness of low-light areas while suppressing that of high-illuminance areas, and the enhancement effects are consistent with those observed on the actual images presented above.

4.2. Objective Evaluation

Because different methods focus on different aspects of an image, a subjective evaluation is likely to be biased [40]. Therefore, we adopt several objective evaluation criteria to further examine the processing effects of the different methods: the mean squared error (MSE), the peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM) [41]. The objective evaluation data corresponding to Figure 16 are shown in Table 1, where the best result for each metric is marked with an asterisk.

Table 1

Metric  CegaHE    CVC       LDR     DCP     MSRCP     Our method
MSE     1087.535  2462.565  864.13  718.64  1088.675  329.58*
PSNR    17.8      14.585    18.78   19.635  17.77     23.13*
SSIM    0.85      0.825     0.895   0.835   0.87      0.97*
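The first two metrics can be computed in a few lines of NumPy, as sketched below (SSIM involves local windowed statistics; an off-the-shelf implementation such as scikit-image's `structural_similarity` is a common choice).

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Lower MSE and higher PSNR both indicate that an enhanced result is closer to the reference image, which is how the Table 1 and Table 2 comparisons are read.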

To conduct a more general test, we subjected a number of synthesized images to processing with various methods, including CegaHE [38], CVC [16], LDR [39], DCP [24], EFF [42], MSRCP [21], SRIE [22], and the proposed algorithm. Some of the experimental results are shown in Figure 17, where Figure 17(a) shows the original images, Figure 17(b) shows the artificially quality-reduced images, and Figure 17(c) shows the results obtained after the enhancement of the images in Figure 17(b). The objective evaluation metrics achieved by the various methods on these images are shown in Table 2, in which the best value in each row is marked with an asterisk.

Table 2

Group    Metric  CegaHE   CVC      LDR      DCP      EFF      MSRCP    SRIE     Our method
Group 1  MSE     839.31   4554.18  732.72   873.17   1292.50  1131.82  1135.16  368.12*
         PSNR    18.89    11.55    19.48    18.72    17.02    17.59    17.58    22.47*
         SSIM    0.82     0.74     0.85     0.83     0.86     0.82     0.80     0.95*
Group 2  MSE     1212.11  1389.11  695.13   976.86   427.98*  1167.74  1184.34  683.90
         PSNR    17.30    16.70    19.71    18.23    21.82*   17.46    17.40    19.78
         SSIM    0.90     0.93     0.93     0.90     0.96*    0.88     0.92     0.96*
Group 3  MSE     712.05   3084.25  487.98   453.46   1172.70  712.31   588.00   142.47*
         PSNR    19.61    13.24    21.25    21.57    17.44    19.60    20.44    26.59*
         SSIM    0.80     0.77     0.86     0.85     0.87     0.86     0.88     0.95*
Group 4  MSE     914.60   1518.88  1193.66  672.35*  685.98   992.04   1105.18  748.61
         PSNR    18.52    16.32    17.36    19.85*   19.77    18.17    17.70    19.39
         SSIM    0.89     0.88     0.89     0.91     0.93     0.88     0.90     0.95*
Group 5  MSE     1016.39  3160.82  1106.58  492.06   651.73   1061.99  655.86   478.82*
         PSNR    18.06    13.13    17.69    21.21    19.99    17.87    19.96    21.33*
         SSIM    0.81     0.78     0.83     0.82     0.90     0.83     0.89     0.96*

Tables 1 and 2 indicate that the enhanced images generated using the proposed method most closely match the original images in terms of both gray value distribution and structure. Overall, the proposed method outperforms the other methods, producing the best comprehensive results. These results show that the proposed algorithm can mitigate the influence of uneven illumination on images and achieve effective correction for images of diverse scenes acquired under uneven lighting.

4.3. Computational Complexity

To compare the computational complexity of the above methods, we tested them on images of different sizes in the MATLAB experimental environment and report the average run time over 20 runs on images of the same size. The results presented in Table 3 show that the SRIE method has the lowest computational efficiency when processing a single image, requiring 242.02 seconds for the largest test image, whereas CVC, MSRCP, DCP, EFF, and the proposed method all require only a few seconds for the same image. As the image size increases, the processing time of the MSRCP method grows more rapidly, while that of the other methods increases approximately linearly. The proposed method requires the least run time and thus has the lowest time complexity.
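The timing protocol (average over 20 runs at each image size) amounts to the following sketch, where `enhance` is a hypothetical placeholder for any of the compared methods:

```python
import time
import numpy as np

def average_run_time(enhance, image: np.ndarray, repeats: int = 20) -> float:
    """Average wall-clock time, in seconds, of enhance(image) over `repeats` runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        enhance(image)
    return (time.perf_counter() - start) / repeats

# Example with a trivial placeholder method on a small synthetic image:
# average_run_time(lambda img: img[::-1], np.zeros((480, 640), dtype=np.uint8))
```

Averaging over repeated runs smooths out scheduler jitter; `time.perf_counter` is used because it is a monotonic, high-resolution clock.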



Table 3: Average run time (in seconds) of each method, for image sizes increasing from left to right.

CVC                0.27    0.40    0.60    1.27     2.33
MSRCP              0.17    0.40    0.88    3.29     7.41
DCP                0.33    0.60    1.06    2.42     3.89
SRIE               7.08    13.61   22.38   101.91   242.02
EFF                0.42    0.62    0.94    1.86     3.03
Proposed method    0.19    0.32    0.51    1.09     2.14

4.4. Adaptivity of Our Method

We also tested the proposed method on images acquired under extremely low illumination as well as images obtained under normal lighting conditions; the experimental results are shown in Figure 18. In the top panel of this figure, the first row contains the original images acquired under extremely low illumination, and the second row shows the corresponding enhancement results.

The results show that for the enhancement of images acquired under extremely low illumination, which remains a great challenge in the field of image processing, the enhancement effect of the proposed method is unsatisfactory; nevertheless, no blocky artifacts appear in the restored images, so in this sense they remain consistent with human visual perception. In the bottom panel of the figure, the first and second rows show images acquired under normal illumination and the corresponding enhancement results obtained using the proposed method, respectively. For these images, the processing results are essentially identical to the originals, indicating that the proposed method can adaptively adjust its parameters for different scenes and thus shows good robustness and adaptability.

5. Conclusion

In this paper, we propose a color image correction method based on local gamma transformation and color compensation, in which the illumination-reflection model is adopted to address the local overenhancement caused by uneven illumination in low-light images and the lack of adaptability in the parameter settings of previous methods. First, we convert the original RGB image into the YUV color space and extract the illumination distribution of the scene from the Y component using a guided filter. Then, we perform illuminance enhancement based on an adaptive local gamma transformation and expand the dynamic range. Finally, we enhance the color saturation of the image. Comparisons between the proposed method and other conventional algorithms indicate that the proposed algorithm not only effectively improves the visual effect of the processed image but also reveals more detailed information in dark regions. Because the algorithm uses the distribution characteristics of the illumination component of the scene to dynamically adjust the parameters of the gamma function, it can effectively improve the visual quality of an image, allowing better identification of details in both overexposed and underexposed areas of the image.
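To make the general idea concrete, the sketch below applies a per-pixel gamma, driven by an estimated illumination map, to a luminance channel in [0, 1]. It is only an illustration under stated assumptions: a naive box filter stands in for the fast guided filter, and the schedule gamma = 2^(L − 0.5) is a generic adaptive form commonly seen in the literature, not the authors' exact parameterization.

```python
import numpy as np

def enhance_luminance(y: np.ndarray, window: int = 15) -> np.ndarray:
    """Illustrative local gamma correction on a luminance channel in [0, 1]."""
    pad = window // 2
    padded = np.pad(y, pad, mode="edge")
    # Naive box filter as the illumination estimate L(x, y); the paper uses
    # a fast guided filter for this step instead.
    illumination = np.empty_like(y)
    h, w = y.shape
    for i in range(h):
        for j in range(w):
            illumination[i, j] = padded[i:i + window, j:j + window].mean()
    # Per-pixel exponent: gamma < 1 in dark regions (brightening), gamma > 1
    # in bright regions (mild compression), so enhancement adapts locally.
    gamma = np.power(2.0, illumination - 0.5)
    return np.power(np.clip(y, 0.0, 1.0), gamma)
```

Because the exponent depends on the local illumination rather than a single global value, dark regions are lifted without pushing already-bright regions into saturation.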

Data Availability

Some or all data, models, or code generated or used during the study are available from the corresponding author by request (Wencheng Wang).

Conflicts of Interest

The authors declare that they have no conflicts of interest related to this work.

Acknowledgments

This research was funded by the Shandong Provincial Natural Science Foundation (No. ZR2019FM059), the Science and Technology Plan for Youth Innovation of Shandong Universities (No. 2019KJN012), and the National Natural Science Foundation of China (No. 61403283).

References

  1. Z. Huang, Y. Zhang, Q. Li et al., “Joint analysis and weighted synthesis sparsity priors for simultaneous denoising and destriping optical remote sensing images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 10, pp. 6958–6982, 2020.
  2. W. Wang, X. Yuan, X. Wu, and Y. Liu, “Fast image dehazing method based on linear transformation,” IEEE Transactions on Multimedia, vol. 19, no. 6, pp. 1142–1155, 2017.
  3. Z. Zhu, H. Wei, G. Hu, Y. Li, G. Qi, and N. Mazur, “A novel fast single image dehazing algorithm based on artificial multiexposure image fusion,” IEEE Transactions on Instrumentation and Measurement, vol. 70, article 5001523, pp. 1–23, 2021.
  4. W. Wang, F. Chang, T. Ji, and X. Wu, “A fast single-image dehazing method based on a physical model and gray projection,” IEEE Access, vol. 6, pp. 5641–5653, 2018.
  5. M. Zheng, G. Qi, Z. Zhu, Y. Li, H. Wei, and Y. Liu, “Image dehazing by an artificial image fusion method based on adaptive structure decomposition,” IEEE Sensors Journal, vol. 20, no. 14, pp. 8062–8072, 2020.
  6. M. Oloyede, G. Hancke, H. Myburgh, and A. Onumanyi, “A new evaluation function for face image enhancement in unconstrained environments using metaheuristic algorithms,” EURASIP Journal on Image and Video Processing, vol. 2019, no. 1, 2019.
  7. G. Hou, Z. Pan, B. Huang, G. Wang, and X. Luan, “Hue preserving-based approach for underwater colour image enhancement,” IET Image Processing, vol. 12, no. 2, pp. 292–298, 2017.
  8. X. Fu and X. Cao, “Underwater image enhancement with global-local networks and compressed-histogram equalization,” Signal Processing: Image Communication, vol. 86, article 115892, 2020.
  9. T. Soomro, T. Khan, M. Khan, J. Gao, M. Paul, and L. Zheng, “Impact of ICA-based image enhancement technique on retinal blood vessels segmentation,” IEEE Access, vol. 6, pp. 3524–3538, 2018.
  10. F. Kallel, M. Sahnoun, A. Ben Hamida, and K. Chtourou, “CT scan contrast enhancement using singular value decomposition and adaptive gamma correction,” Signal, Image and Video Processing, vol. 12, no. 5, pp. 905–913, 2018.
  11. Z. Huang, T. Zhang, Q. Li, and H. Fang, “Adaptive gamma correction based on cumulative histogram for enhancing near-infrared images,” Infrared Physics and Technology, vol. 79, pp. 205–215, 2016.
  12. W. Wang, X. Wu, X. Yuan, and Z. Gao, “An experiment-based review of low-light image enhancement methods,” IEEE Access, vol. 8, pp. 87884–87917, 2020.
  13. S. Huang, F. Cheng, and Y. Chiu, “Efficient contrast enhancement using adaptive gamma correction with weighting distribution,” IEEE Transactions on Image Processing, vol. 22, no. 3, pp. 1032–1041, 2013.
  14. S. C. Liu, S. Liu, H. Wu et al., “Enhancement of low illumination images based on an optimal hyperbolic tangent profile,” Computers & Electrical Engineering, vol. 70, no. 8, pp. 538–550, 2018.
  15. J. Kim, L. Kim, and S. Hwang, “An advanced contrast enhancement using partially overlapped sub-block histogram equalization,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 4, pp. 475–484, 2001.
  16. S.-D. Chen and A. Ramli, “Minimum mean brightness error bi-histogram equalization in contrast enhancement,” IEEE Transactions on Consumer Electronics, vol. 49, no. 4, pp. 1310–1319, 2003.
  17. T. Celik and T. Tjahjadi, “Contextual and variational contrast enhancement,” IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3431–3441, 2011.
  18. Z. Huang, Z. Wang, J. Zhang, Q. Li, and Y. Shi, “Image enhancement with the preservation of brightness and structures by employing contrast limited dynamic quadri-histogram equalization,” Optik, vol. 226, article 165877, 2021.
  19. D. Jobson, Z. Rahman, and G. Woodell, “A multiscale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Transactions on Image Processing, vol. 6, no. 7, pp. 965–976, 1997.
  20. Z. Rahman, D. Jobson, and G. Woodell, “Retinex processing for automatic image enhancement,” Journal of Electronic Imaging, vol. 13, no. 1, pp. 100–110, 2004.
  21. A. Petro, C. Sbert, and J. Morel, “Multiscale retinex,” Image Processing On Line, vol. 4, pp. 71–88, 2014.
  22. X. Fu, D. Zeng, Y. Huang, X. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2782–2790, Las Vegas, NV, USA, 2016.
  23. S. Wang, J. Zheng, H. Hu, and B. Li, “Naturalness preserved enhancement algorithm for non-uniform illumination images,” IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3538–3548, 2013.
  24. X. Dong, G. Wang, Y. Pang et al., “Fast efficient algorithm for enhancement of low lighting video,” in 2011 IEEE International Conference on Multimedia and Expo, pp. 1–6, Barcelona, Spain, 2011.
  25. L. Zhang, P. Shen, X. Peng et al., “Simultaneous enhancement and noise reduction of a single low-light image,” IET Image Processing, vol. 10, no. 11, pp. 840–847, 2016.
  26. S. Park, B. Moon, S. Ko, S. Yu, and J. Paik, “Low-light image restoration using bright channel prior-based variational retinex model,” EURASIP Journal on Image and Video Processing, vol. 2017, no. 1, 2017.
  27. Z. Huang, Y. Zhang, Q. Li et al., “Unidirectional variation and deep CNN denoiser priors for simultaneously destriping and denoising optical remote sensing images,” International Journal of Remote Sensing, vol. 40, no. 15, pp. 5737–5748, 2019.
  28. K. Lore, A. Akintayo, and S. Sarkar, “LLNet: a deep autoencoder approach to natural low-light image enhancement,” Pattern Recognition, vol. 61, pp. 650–662, 2017.
  29. L. Shen, Z. Yue, F. Feng, Q. Chen, S. Liu, and J. Ma, “MSR-net: low-light image enhancement using deep convolutional network,” 2017, https://arxiv.org/abs/1711.02488.
  30. L. Tao, C. Zhu, G. Xiang, Y. Li, H. Jia, and X. Xie, “LLCNN: a convolutional neural network for low-light image enhancement,” in 2017 IEEE Visual Communications and Image Processing (VCIP), pp. 1–4, St. Petersburg, FL, USA, 2017.
  31. S. Park, S. Yu, M. Kim, K. Park, and J. Paik, “Dual autoencoder network for Retinex-based low-light image enhancement,” IEEE Access, vol. 6, pp. 22084–22093, 2018.
  32. J. Cai, S. Gu, and L. Zhang, “Learning a deep single image contrast enhancer from multi-exposure images,” IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 2049–2062, 2018.
  33. W. Wang, Z. Chen, X. Yuan, and X. Wu, “Adaptive image enhancement method for correcting low-illumination images,” Information Sciences, vol. 496, pp. 25–41, 2019.
  34. W. Wang and X. Yuan, “Recent advances in image dehazing,” IEEE/CAA Journal of Automatica Sinica, vol. 4, no. 3, pp. 410–436, 2017.
  35. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
  36. R. Schettini, F. Gasparini, S. Corchs, F. Marini, A. Capra, and A. Castorina, “Contrast image correction method,” Journal of Electronic Imaging, vol. 19, no. 2, article 023005, 2010.
  37. T. Pu and S. Wang, “Perceptually motivated enhancement method for non-uniformly illuminated images,” IET Computer Vision, vol. 12, no. 4, pp. 424–433, 2017.
  38. C. Chiu and C. Ting, “Contrast enhancement algorithm based on gap adjustment for histogram equalization,” Sensors, vol. 16, no. 6, p. 936, 2016.
  39. C. Lee, C. Lee, and C. Kim, “Contrast enhancement based on layered difference representation of 2D histograms,” IEEE Transactions on Image Processing, vol. 22, no. 12, pp. 5372–5384, 2013.
  40. W. Wang, X. Yuan, X. Wu, and Y. Liu, “Dehazing for images with large sky region,” Neurocomputing, vol. 238, pp. 365–376, 2017.
  41. W. Wang, Z. Chen, X. Yuan, and F. Guan, “An adaptive weak light image enhancement method,” in The 12th International Conference on Signal Processing Systems, vol. 11719, Shanghai, China, November 2020.
  42. Z. Ying, G. Li, Y. Ren, R. Wang, and W. Wang, “A new image contrast enhancement algorithm using exposure fusion framework,” in International Conference on Computer Analysis of Images and Patterns, vol. 10425, pp. 36–46, Ystad, Sweden, August 2017.

Copyright © 2021 Wencheng Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
