Abstract

In weak-light environments, images suffer from low contrast and the loss of details. Traditional image enhancement models usually fail to avoid the issue of overenhancement. In this paper, a simple and novel correction method based on an adaptive local gamma transformation and color compensation is proposed, inspired by the illumination-reflection model. Our proposed method converts the source image into the YUV color space, where the illumination component is estimated from the $Y$ channel with a fast guided filter. A local gamma transform function is used to improve the brightness of the image by adaptively adjusting its parameters. Finally, the dynamic range of the image is optimized by a color compensation mechanism and a linear stretching strategy. Comparisons with state-of-the-art algorithms demonstrate that the proposed method adaptively reduces the influence of uneven illumination, avoiding overenhancement and improving the visual effect of low-light images.

1. Introduction

The computer vision system has been widely used in a variety of fields such as industrial production, video surveillance, intelligent transportation, and remote sensing [1] and plays an increasingly important role in human life. Nevertheless, during image acquisition, many uncontrollable factors can lead to various defects in the acquired images. Especially under poor and complex light conditions, such as low light, uneven light, backlight, and hazy conditions, the weak reflection of light from the object’s surface causes color distortion and noise amplification in the images, which seriously affects the image quality [2]. As shown in Figure 1, the top row includes uneven-light images, in which uneven illumination can cause some areas of an image to be overexposed while others are underexposed, affecting not only human visual perception but also the accuracy of image segmentation and object recognition, sometimes resulting in the failure of a machine vision system. Therefore, it is of great importance to enhance the contrast and observability of images collected under poor lighting conditions [3–5].

Weak-light image enhancement has become a focus of research in the image processing field, and its interdisciplinary characteristics have attracted considerable attention from researchers worldwide. For example, in a facial recognition system, Oloyede et al. [6] applied a new evaluation function in conjunction with metaheuristic-based optimization algorithms to automatically select the best-enhanced face image. To enhance underwater images, Hou et al. [7] presented a novel underwater color image enhancement approach based on hue preservation by combining the HSI and HSV color models. Fu and Cao [8] combined the merits of deep learning and conventional image enhancement technology to improve the quality of underwater images. To improve the contrast of retinal fundus images, Soomro et al. [9] used independent component analysis (ICA) for image enhancement to effectively achieve quick and accurate segmentation of the eye vessels. Kallel et al. [10] proposed a new enhancement algorithm dedicated to computed tomography (CT) scans based on the discrete wavelet transform with singular value decomposition (DWT–SVD) followed by adaptive gamma correction (AGC), which consistently produced good contrast enhancement with excellent brightness and edge detail conservation.

However, those conventional methods are limited in adaptivity and tend to overenhance some local areas in the case of uneven illumination. It is also difficult for them to strike a balance between computational complexity and visual effect. Therefore, in this research work, a self-adaptive enhancement method inspired by the illumination-reflection model is proposed for processing images from uneven light environments. Figure 1 shows samples processed with our method. The contributions are as follows:

(1) It is a simple and effective image enhancer based on a novel local gamma transformation and the illumination-reflection model. This method can effectively enhance the visual effect of an image, revealing more details in dark areas while preserving the overall detail information.

(2) The method has a color compensation mechanism, making it suitable for processing color images captured with monitoring systems.

(3) The proposed method can adjust its parameters according to the light distribution and adaptively reduce the influence of uneven illumination on the image, thus providing a valuable reference for the study of the correction of images acquired under uneven lighting conditions.

(4) Our method can produce satisfactory results with low computational complexity.

The rest of this paper is organized as follows. In Section 2, some related works are summarized. Section 3 introduces the flowchart of the proposed method. In Section 4, the comparisons of the experimental results are presented. Finally, the research work is concluded in Section 5.

2. Related Works

Traditional enhancement methods for weak-light images include histogram equalization (HE) and grayscale transformation (GT) [11, 12], which usually obtain the correction parameters based on the cumulative probability distribution of gray values. For example, Huang et al. [13] proposed a gamma correction algorithm that adaptively obtains gamma correction parameters based on the cumulative probability distribution. Later, Liu et al. proposed a low-light image enhancement method based on the optimal hyperbolic tangent function [14]. In Ref. [15], a block-iterative histogram method was used to enhance the contrast of an image while processing each different part of the image with partially overlapped subblock histogram equalization (POSHE) using a moving template. Subsequently, Chen and Ramli proposed the minimum mean brightness error bihistogram equalization (MMBEBHE) algorithm [16] to minimize the error between the brightness mean values of the output image and the original image. Celik and Tjahjadi [17] proposed the contextual and variational contrast enhancement (CVC) algorithm, which performs nonlinear data mapping using context information and a 2D gray histogram to achieve contrast improvement. These methods are simple in their computation rules and low in computational complexity but are prone to various processing issues, such as color loss and noise amplification. Huang et al. proposed an effective three-step image enhancement strategy named contrast-limited dynamic quadri-histogram equalization (CLDQHE), which can yield pleasing results while preserving brightness and structure [18].

Based on the computational theory of color constancy, Jobson et al. proposed the single-scale retinex (SSR) algorithm, which was further developed into various multiple-scale retinex (MSR) algorithms, such as the multiscale retinex with color restoration (MSRCR) algorithm [19, 20] and multiscale retinex with chromaticity preservation (MSRCP) [21]. Later, Fu et al. presented a weighted variational model to estimate illumination from a weak-light image, which can not only extract the reflection information accurately but also suppress the amplification of noise [22]. Wang et al. [23] introduced an NPE (naturalness preserved enhancement) algorithm with a bright-pass filter that preserves the image naturalness by integrating the neighborhood information of pixels, thereby improving the image contrast while avoiding excessive local enhancement. In 2011, Dong et al. [24] inverted low-light images to generate images similar to those acquired on foggy days and then used a defogging algorithm to enhance the contrast of the original images. Later, Zhang et al. proposed a real-time enhancement framework combining dehazing methods with bilateral filtering techniques, in which the DCP (dark channel prior) model is used for parameter optimization and a joint bilateral filter is used to reduce noise interference [25]. Park et al. introduced the bright channel prior (BCP) and combined BCP estimation with retinex theory to realize weak-light image enhancement [26], achieving good results. Such methods effectively enhance the details in the dark areas of an image, but they also incur high computational complexity and tend to produce halo effects in dark areas.

In the past decade, machine-learning-based techniques have been widely adopted to improve the contrast of weak-light images [27]. For example, Lore et al. [28] adopted the SSDA (stacked sparse denoising autoencoder) method to develop an image enhancer based on the simulation of a low-light environment, in which a machine-learning algorithm was used to train a self-encoder to adjust the brightness adaptively for several low-illumination image signals. Shen et al. [29] analyzed the performance of the MSR algorithm from the perspective of CNNs and designed an MSR network with a CNN architecture for enhancing low-light images. Tao et al. proposed an LLCNN (low-light convolutional neural network) model for image enhancement based on a deep learning technique, in which the enhanced images are generated from multilevel feature graphs after learning on a low-light image database [30]. Park et al. introduced retinex theory into the deep learning framework and proposed a double self-encoding network [31], in which a convolutional autoencoder and a stacked autoencoder are used to achieve brightness enhancement and noise suppression. Inspired by image-fusion-based methods, the authors of [32] developed a single-image enhancer by combining an image-fusion-based technique with an end-to-end CNN model, trained on a multiexposure image dataset containing images at different contrast scales. Methods of this kind offer a good image enhancement effect, but their computational models often require an excessively long time or too many expensive resources for training.

3. Framework of Proposed Method

According to the basic principle of imaging, an image is produced by the light that is reflected or emitted from the surface of an object in a scene and reaches the camera. Generally, it is regarded as a two-dimensional function $F(x, y)$, where the value of this function is the brightness of the pixel at coordinates $(x, y)$ in the image and is the composition of the illumination component $I(x, y)$ that enters the scene and the reflection component $R(x, y)$ from the object surface. The mathematical expression of this illumination-reflection model is as follows:

$$F(x, y) = I(x, y) \cdot R(x, y). \tag{1}$$

The spatial relations of this model are illustrated in Figure 2.

It is known that the intensity of incident light mainly relies on the light source, and its distribution function $I(x, y)$ shows little spatial variation. The spectrum of $I(x, y)$ is mainly concentrated in the low-frequency region and reflects the lighting environment during the imaging process, while that of the reflection component $R(x, y)$ is mainly concentrated over a wide range in the high-frequency band, corresponding to the image details that reflect the natural attributes of the target. If the illumination in a scene is even, then the illumination component is uniformly distributed in space, and the acquired image is considered to have natural lighting and high visual quality; however, if the illumination in an imaged scene is uneven, then areas with excessively strong illumination will be overexposed, while those with insufficient illumination will be underexposed, causing various visual problems for the human eye. If we can find a way to estimate the reflection component, i.e., to separate $R(x, y)$ from $F(x, y)$, then we can eliminate the effects of light on imaging, thus helping to achieve the goal of image enhancement [33].
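To make this separation concrete, the following is a minimal NumPy sketch of the decomposition implied by Eq. (1), assuming an illumination estimate is already available. Note that the method proposed below does not divide the illumination out directly; it instead uses the illumination estimate to drive a local gamma transform, so this fragment only illustrates the model.

```python
import numpy as np

# Placeholder data standing in for a real image and illumination estimate.
F = np.random.rand(4, 4)      # observed luminance F(x, y), scaled to [0, 1]
G = np.full((4, 4), 0.5)      # estimate of the illumination component I(x, y)

# From F = I * R, the reflection component can be approximated pointwise;
# the small epsilon is an implementation guard against division by zero.
R = F / (G + 1e-8)
```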

Inspired by the above model and theory, we propose an image enhancement framework based on an adaptive local gamma transform and color compensation in this paper. The proposed method eliminates the associations among color components by changing the color space; thus, the goal of image enhancement is achieved by processing the color components in a different space. First, the source color image is transformed into the YUV space, where the brightness distribution of the scene is estimated from the $Y$ component using a fast guided filtering function; then, local gamma transform enhancement is performed on the image through adaptive adjustment according to the gray distribution of the brightness component. Finally, the contrast of the image is adjusted via grayscale linear stretching, and a color compensation strategy is applied to the RGB image. The whole flowchart is shown in Figure 3.

3.1. Color Space Conversion

As is known from the neural mechanism of the visual perception system, the human eye is more sensitive to luminance than to color; thus, the enhancement of luminance is the key to the proposed algorithm for the correction of uneven illumination. For color images, the chrominance information and brightness information cannot be effectively distinguished in the RGB (red, green, blue) color space; consequently, applying a direct correction to the three channels in the RGB color space not only leads to color distortion but also increases the computational load. By contrast, in the YUV color space, each color corresponds to two chrominance components ($U$ and $V$) and one brightness component ($Y$); hence, the separation of brightness and chrominance makes it possible to alter the illumination intensity without affecting the color. Therefore, in this study, we propose a YUV-space grayscale-mapping-based chrominance-luminance recombination algorithm in which the luminance component $Y$ is processed for enhancement while the chrominance components $U$ and $V$ are left unchanged. The relation between the RGB color space and the YUV color space is [33]:

$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}. \tag{2}$$

After the conversion to YUV space, the images corresponding to each component are as shown in Figure 4. In Figure 4, (a) is the source color image, (e) is the corresponding gray image of (a), (b–d) are the three components $R$, $G$, and $B$, respectively, and (f–h) are the components $Y$, $U$, and $V$, respectively.
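For illustration, a minimal NumPy sketch of the conversion in Eq. (2); the matrix coefficients are the standard analog YUV ones assumed in the reconstruction above, and the paper's own implementation was in MATLAB.

```python
import numpy as np

def rgb_to_yuv(img):
    """Convert an H x W x 3 float RGB image in [0, 1] to YUV via Eq. (2)."""
    M = np.array([[ 0.299,  0.587,  0.114],   # Y row
                  [-0.147, -0.289,  0.436],   # U row
                  [ 0.615, -0.515, -0.100]])  # V row
    return img @ M.T  # last axis: (R, G, B) -> (Y, U, V)
```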

3.2. Estimation of the Illumination Component

To effectively reduce the effect of uneven illumination on image quality, the accurate extraction of the lighting information from a scene is particularly important. Currently, the main methods for extracting the illumination component include the average filter, the bilateral filter, and the Gaussian filter. The average filtering method smooths images by calculating the mean value of each pixel with its neighbors; it is fast but can be strongly influenced by neighboring pixels. The Gaussian filtering method is poor at retaining edges, causing the extracted illumination component to have fuzzy edges and thus to perform poorly in the retention of detailed information. The bilateral filtering algorithm shows better edge-preservation characteristics but has a very high computational complexity, which limits its use in practical engineering applications. The guided filtering algorithm is a guided-image-based local linear transformation that obtains the low-frequency information from the image while retaining the edge information, and it has low computational complexity. It is the fastest available edge-retaining filtering algorithm and was therefore used in this study to extract the illumination component [34, 35].

Let the input, output, and guidance images be denoted by $p$, $q$, and $I$, respectively. Then, for any given pixel $i$ in a window $\omega_k$ centered on pixel $k$, the guided filtering process is

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k, \tag{3}$$

where $i$ is the pixel index and $a_k$ and $b_k$ are the linear transformation factors. The minimum reconstruction difference between $p$ and $q$ is obtained by minimizing the following cost function:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right], \tag{4}$$

whose solution is

$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}, \tag{5}$$

$$b_k = \bar{p}_k - a_k \mu_k, \tag{6}$$

where $\sigma_k^2$ and $\mu_k$ are the variance and the mean value of the guidance image within the window $\omega_k$, respectively; $\epsilon$ is a parameter that controls the degree of smoothness of the filter; $|\omega|$ is the pixel number of $\omega_k$; and $\bar{p}_k$ is the mean value of the input image $p$ within $\omega_k$. Thus, the output of the filter will be

$$q_i = \bar{a}_i I_i + \bar{b}_i, \tag{7}$$

where $\bar{a}_i$ and $\bar{b}_i$ are the mean values of $a_k$ and $b_k$, respectively, within the neighborhood window centered on pixel $i$.

Therefore, the guided filtering process can be seen as the convolution of the guided filtering function and the original image, which gives rise to the following estimate of the illumination component:

$$G(x, y) = F_G(x, y) * Y(x, y), \tag{8}$$

where $F_G$ is the guided filtering function, $Y$ is the input image, and $G$ is the output image, which denotes the estimation of the luminance component.
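The following is a compact gray-scale guided filter sketch following Eqs. (3)–(7), with mean filtering performed by scipy.ndimage.uniform_filter; the radius and epsilon values in the usage comment are assumptions for illustration, not parameters reported by the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius, eps):
    """Gray-scale guided filter: I is the guidance image, p the input image,
    both float arrays in [0, 1]; radius sets the window, eps the smoothness."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)                         # mu_k
    mean_p = uniform_filter(p, size)                         # p_bar_k
    var_I  = uniform_filter(I * I, size) - mean_I ** 2       # sigma_k^2
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                               # Eq. (5)
    b = mean_p - a * mean_I                                  # Eq. (6)
    return uniform_filter(a, size) * I + uniform_filter(b, size)  # Eq. (7)

# Self-guided use, as in Eq. (8): estimate the illumination G from luminance Y
# G = guided_filter(Y, Y, radius=15, eps=0.01)   # radius and eps are assumptions
```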

To consider both the local and global characteristics of the estimated luminance values, we introduce multiscale guided filtering, which extracts the illumination components of the scene using filtering windows of different scales and weights, ultimately giving rise to the following estimate of the illumination component:

$$G(x, y) = \sum_{i=1}^{N} w_i G_i(x, y), \tag{9}$$

where $w_i$ is the weight for the illumination component extracted at the $i$th scale, $N$ is the number of scales used, and $G(x, y)$ is the weighted combination of the illumination components at point $(x, y)$ extracted using the guided filtering function with windows of various scales. In Figure 5, the values of the illumination components extracted using three different scales are shown. In Figure 5(f), the result of the fusion of these three different scales (15, 80, and 250) is shown, where the weight of the illumination component extracted at each scale was set to 1/3.
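A sketch of the multiscale fusion of Eq. (9), continuing from the guided_filter sketch above and using the scales and equal weights reported for Figure 5(f); whether the reported scales map to the filter radius, and the eps value, are assumptions.

```python
# Y: luminance channel from Section 3.1; guided_filter: sketch above.
scales  = [15, 80, 250]        # window scales used in Figure 5(f)
weights = [1/3, 1/3, 1/3]      # equal weights, as in the paper
G = sum(w * guided_filter(Y, Y, radius=r, eps=0.01)
        for w, r in zip(weights, scales))
```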

From Figure 5, it can be seen that the method based on multiscale guided filtering can extract the illumination component from the source image well; this component describes the variation of illumination while discarding the details, meeting the requirements of practical applications. However, this method requires multiple filtering operations to be performed on the image. Based on a comprehensive consideration of both computational complexity and performance, we propose a calculation method with an adaptive window, in which the window size is 1/4 of the smaller dimension of the image, as follows:

$$r = \left\lfloor \frac{\min(H, W)}{4} \right\rfloor, \tag{10}$$

where $\lfloor \cdot \rfloor$ is the floor function, which rounds down to the nearest integer, and $H$ and $W$ are the height and width of the image, respectively. To demonstrate the advantages of guided filtering, the effects of various filtering methods (including the Gaussian filter, average filter, median filter, and bilateral filter) are compared in Figure 6.
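With the adaptive window of Eq. (10), the single-pass alternative to the multiscale fusion reduces to a one-liner; as before, the mapping of "window size" to the filter radius and the eps value are assumptions.

```python
# Single-pass illumination estimate with a window adapted to the image size.
H, W = Y.shape
G = guided_filter(Y, Y, radius=min(H, W) // 4, eps=0.01)  # Eq. (10) window
```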

Figure 6 shows that the guided filter, bilateral filter, and Gaussian filter all yield good descriptions of the illumination variations in the scene, consistent with the distribution of the illumination component. To further compare the edge-retaining characteristics of the guided filter, bilateral filter, and Gaussian filter, we consider the pixels on Line 110 in Figures 6(a) and 6(d)–6(f) as examples. In Figure 7, we present a one-dimensional brightness diagram generated from the grayscale values acquired at the corresponding positions in these images.

Figure 7 shows that Gaussian filtering produces larger deviations in sharp edge regions compared with the edges in the original image, while the fast guided filtering algorithm performs best at approximating the brightness distribution of the original image, especially in the edge areas, while maintaining low computational complexity and high speed.

3.3. Local Gamma Transform

To adaptively increase the brightness of low-illumination areas while decreasing the brightness of high-illumination areas based on the gray distribution, we attempt to improve and expand the conventional method of gamma correction, which has the following standard form:

$$O(x, y) = Y(x, y)^{\gamma}, \tag{11}$$

where $O(x, y)$ is the corrected brightness with a range of $[0, 1]$, $Y(x, y)$ is the source image to be enhanced, and $\gamma$ is a control parameter. When $\gamma$ is less than 1 but greater than 0, the overall brightness increases, and when $\gamma$ is greater than 1, the overall brightness decreases.

For uniformly overexposed or underexposed images, this algorithm can produce satisfactory results through the adjustment of the parameter $\gamma$, but when both overexposed and underexposed areas are present in the same image, it is difficult for the algorithm to achieve satisfactory effectiveness when the same parameter is used across the entire image. Therefore, we introduce an algorithm that allows $\gamma$ to vary with the local information of the image, as follows:

$$\gamma(x, y) = \lambda^{2G(x, y) - 1}, \tag{12}$$

where $G(x, y)$ is the illumination extracted from $Y(x, y)$ and $\lambda$ is the base of the exponential function. In general, areas with low illumination need more aggressive correction, so a small $\gamma$ value should be adopted, i.e., a greater value of $\lambda$ should be adopted in Eq. (12); for images with excessively high contrast, a $\gamma$ value greater than 1 should be adopted, i.e., the value of $\lambda$ should be low, to suppress the illumination intensity. In Ref. [36], a piecewise function is formulated based on whether the mean value of the input image is greater than 0.5. However, for images with both overexposed and underexposed areas, the mean value can be very close to 0.5, and if such images are processed using this algorithm, it is possible that no notable change may result, meaning that the actual needs of image correction will not be met. Therefore, we present a formulation of the gamma correction parameter that varies with the illumination component of the scene and propose an adaptive brightness adjustment function based on this local gamma transformation, which adaptively adjusts the control parameters according to the illumination distribution of the input image, as follows:

$$O(x, y) = Y(x, y)^{\gamma(x, y)} = Y(x, y)^{\lambda^{2G(x, y) - 1}}. \tag{13}$$

According to Eq. (12), the variation of the correction parameter $\gamma$ with the input illumination $G$ for different values of the base $\lambda$ (including $\lambda = 2$) is shown in Figure 8.

According to Eq. (13), the change in the correction effect for several values of the base $\lambda$ (including $\lambda = 2$) is shown in Figure 9. Figure 9 shows that as $\lambda$ increases, low pixel values are enhanced and high pixel values are suppressed. This compresses the image’s dynamic range and leads to an overall enhancement of the image brightness, at the cost of reduced contrast.
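A brief sketch of the transform under the reconstructed forms of Eqs. (12) and (13), continuing from the earlier fragments (Y and G as above); the exact analytic form of Eq. (12) is our reconstruction from the behavior described in the text.

```python
# Adaptive local gamma transform on the luminance channel.
lam = 2.0                          # base lambda of the exponential function
gamma = lam ** (2.0 * G - 1.0)     # Eq. (12): gamma < 1 where G < 0.5 (dark areas)
O = Y ** gamma                     # Eq. (13): per-pixel gamma correction
```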

3.4. Grayscale Linear Stretching

To mitigate the problem of image gray value concentration, we use a grayscale stretching function to improve the image. A simple linear pointwise operation is performed to expand the histogram of the image to include the entire grayscale range. The rationale for this action is to improve the dynamic grayscale range for image processing.

Let $O$ denote the input image, whose minimum grayscale value $O_{\min}$ and maximum grayscale value $O_{\max}$ are defined as follows:

$$O_{\min} = \min_{(x, y)} O(x, y), \qquad O_{\max} = \max_{(x, y)} O(x, y). \tag{14}$$

By linearly mapping the dynamic range from $[O_{\min}, O_{\max}]$ to $[0, 1]$, the output image is obtained as

$$O'(x, y) = \frac{O(x, y) - O_{\min}}{O_{\max} - O_{\min}}. \tag{15}$$
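A sketch of the stretching step in Eqs. (14) and (15), continuing from the fragment above; the small epsilon guarding against a constant image is an implementation detail we add, not from the paper.

```python
# Linear stretching of the corrected luminance O to the full [0, 1] range.
O_min, O_max = O.min(), O.max()                  # Eq. (14)
O_str = (O - O_min) / (O_max - O_min + 1e-8)     # Eq. (15)
```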

As shown in Figure 10, processing with the proposed algorithm expands the dynamic range of the image, facilitating the identification of details in overexposed and underexposed areas of the image.

3.5. Color Compensation

Using the following formulas, we convert the image back from the YUV color space into the RGB color space using the enhanced $Y$ component while leaving the $U$ and $V$ components unchanged:

$$\begin{aligned} R &= Y + 1.140V, \\ G &= Y - 0.395U - 0.581V, \\ B &= Y + 2.032U, \end{aligned} \tag{16}$$

where $Y$, $U$, and $V$ are the brightness component and the two chrominance signal components, respectively, in YUV space.

However, after the conversion to RGB space using the above method, the image may show a decrease in color saturation. To ensure that the color saturation of the output image is consistent with that of the input image, we adopt the following expressions:

$$R' = \left(\frac{R}{Y}\right)^{s} Y', \qquad G' = \left(\frac{G}{Y}\right)^{s} Y', \qquad B' = \left(\frac{B}{Y}\right)^{s} Y', \tag{17}$$

where $s$ is set to an empirical value of 0.5; $R$, $G$, and $B$ denote the red, green, and blue components in the RGB space, respectively; and $Y$ and $Y'$ are the brightness components in YUV space before and after enhancement, respectively.
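A sketch of the compensation in Eq. (17) as reconstructed above; the epsilon guard is an assumption, and the green channel is renamed Gc to avoid clashing with the illumination estimate G used earlier.

```python
# Color compensation: rescale each RGB channel by the luminance ratio.
s, eps = 0.5, 1e-8                     # s = 0.5 per the paper; eps is a guard
R_out = (R  / (Y + eps)) ** s * Y_enh  # Y, Y_enh: luminance before/after enhancement
G_out = (Gc / (Y + eps)) ** s * Y_enh
B_out = (B  / (Y + eps)) ** s * Y_enh
```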

The images obtained using Eqs. (16) and (17) and their corresponding grayscale histograms are shown in Figure 11. The image obtained using Eq. (17) has better color saturation and higher contrast than that obtained using Eq. (16).

The specific implementation steps of the above-described adaptive enhancement method for low-light images acquired under uneven illumination are summarized as follows (a consolidated code sketch follows the list):

(1) Input weak-light image F
(2) Transform the source image into YUV space using Eq. (2) and separate the luminance component Y and the chrominance components U and V
(3) Extract the illumination component from Y using Eq. (8) to obtain image G
(4) Obtain the enhanced brightness image O using Eq. (13)
(5) Perform linear stretching on image O using Eqs. (14) and (15) to generate image O′
(6) Apply color compensation to enhance the saturation of image O′ using Eq. (17)
(7) Output the enhanced image J
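Putting the steps together, the following is a minimal end-to-end sketch under the assumptions stated above (the reconstructed equations, the guided_filter helper, lambda = 2, s = 0.5, and assumed eps values); the paper's own implementation was in MATLAB.

```python
import numpy as np

def enhance(F_rgb):
    """Steps 1-7 on a float RGB image in [0, 1]; returns the enhanced image J."""
    R, Gc, B = F_rgb[..., 0], F_rgb[..., 1], F_rgb[..., 2]
    Y = 0.299 * R + 0.587 * Gc + 0.114 * B          # step 2: Eq. (2), Y channel
    r = min(Y.shape) // 4                           # step 3: adaptive window, Eq. (10)
    G = guided_filter(Y, Y, radius=r, eps=0.01)     # step 3: Eq. (8); eps assumed
    O = Y ** (2.0 ** (2.0 * G - 1.0))               # step 4: Eqs. (12)-(13), lambda = 2
    O = (O - O.min()) / (O.max() - O.min() + 1e-8)  # step 5: Eqs. (14)-(15)
    J = np.stack([(c / (Y + 1e-8)) ** 0.5 * O       # step 6: Eq. (17), s = 0.5
                  for c in (R, Gc, B)], axis=-1)
    return np.clip(J, 0.0, 1.0)                     # step 7: output J
```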

4. Experiments and Analysis

To test the performance of our method, we used an experimental platform consisting of a computer (with an Intel(R) Core(TM) i7-6700 and 16 GB of RAM) and the MATLAB simulation software. The images used for testing included an urban streetscape, some natural scenery, and an indoor scene, all sharing the common features of a large dynamic range and uneven illumination. Some of the experimental results are shown in Figure 12 for the images “Night,” “Bridge,” “Castle,” “Town,” “Girl” [37], “Street,” “Pine,” and “Dawn.” As shown in Figure 12, after processing with the proposed algorithm, the areas with low illumination are enhanced, and those with high illumination are suppressed. The enhanced images are natural in color and clear in detail, indicating that the proposed method can adaptively mitigate the impact of uneven scene illumination on image quality. Next, we compare the processing results of the proposed algorithm with those of various mainstream algorithms in terms of both a subjective visual assessment and an objective quantitative analysis.

4.1. Subjective Evaluation
4.1.1. Comparison with Traditional Enhancement Methods

In Figure 13, the results of the proposed method and other conventional image enhancement methods are shown. Figure 13(a) shows the original images [37], and Figures 13(b)–13(h) show the experimental results of a linear transformation (LT), histogram equalization (HE), adaptive histogram equalization (AHE), homomorphic filtering (HF), the wavelet transform (WT), the Retinex method, and the proposed method, respectively. The corresponding amplification effects in the areas demarcated by boxes in Figure 13(a) are shown in rows 3 and 6. The results indicate that the images processed using the various methods show changes of varying degrees relative to the original image. For example, Figures 13(c) and 13(g) are significantly enhanced in terms of contrast, showing greater detail but a shift in hue. In addition, the severe “halo” noise in Figure 13(g) results in poor visual quality. Figures 13(e) and 13(f) show no overall hue shift but exhibit inadequate improvement in details and are fuzzy. Figures 13(b) and 13(d) show good overall effects, but excessive enhancement is evident in bright regions due to the linear transformation method, whereas AHE causes the color to be significantly darkened. In contrast, the proposed method yields remarkable improvements in both color and contrast, achieving a better visual effect than the other methods.

4.1.2. Comparison with State-of-the-Art Methods

We further compared the enhancement effect of the proposed method with those of some state-of-the-art methods using “Window” and “Furniture” as test images. The results are shown in Figures 14 and 15. In each of these figures, (a) shows the original image and enlarged views of the areas demarcated by the boxes, and (b–h) show the results obtained using CegaHE [38], CVC [17], the linear dynamic range (LDR) technique [39], DCP [24], MSRCP [21], SRIE [22], and the proposed algorithm, respectively, along with the corresponding enlarged areas. The results show that compared with the original image, the overall visibility and contrast of the enhanced images obtained using the various enhancement methods are greatly improved, achieving good enhancement effectiveness. However, the CegaHE method results in a severe hue shift. The CVC and LDR methods achieve only a slight enhancement while amplifying the noise in the dark regions, while the CVC method is additionally unable to restore color to low-light pixels. The MSRCP and DCP methods enhance the overall image brightness, but the MSRCP method results in overenhancement, while the DCP method shows a significant overenhancement effect in edge regions. Relative to the other methods, the SRIE method and the proposed method both strike a balance between color information and brightness information, thereby achieving good enhancement effects. However, the SRIE method is unable to achieve uniform results for images with alternating bright and dark regions, resulting in inferior overall performance compared to the proposed method. With regard to local details, in the areas of the images demarcated by boxes, the DCP method results in overenhancement and consequent noise at the edges. The CVC and LDR methods lead to underenhancement, the CegaHE and MSRCP methods lead to local overenhancement, and the SRIE method generates shadows in some local areas. By contrast, the proposed method shows no excessive amplification of the noise in dark areas in the enhanced image while significantly enhancing the areas that need highlighting without overenhancement, thereby achieving superior sharpness, contrast, and image color.

To further compare the processing effects of the different algorithms, we also tested the algorithms on artificially synthesized images, as shown in Figure 16. In this figure, (a) shows two images acquired under proper lighting, and (b) shows the corresponding low-light images that have been synthesized through gamma transformation (with a $\gamma$ value of 2). (c–h) show the image enhancement results obtained using the different methods. The results indicate that the proposed method can adaptively enhance the brightness of low-light areas while suppressing that of high-illuminance areas, and the enhancement effects are consistent with those observed on the actual images presented above.

4.2. Objective Evaluation

Because different methods focus on different aspects of an image, a subjective evaluation is likely to be biased [40]. Therefore, we adopt several objective evaluation criteria to further examine the processing effects of different methods. We adopt the mean squared error (MSE), the peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM) as objective evaluation metrics for comparison and evaluation [41]. The objective evaluation data corresponding to Figure 16 are shown in Table 1, where the best results are italicized.
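For reference, MSE and PSNR can be computed directly with NumPy, and an SSIM implementation is available in scikit-image; a brief sketch (the variable names are illustrative):

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(a, b):
    """Mean squared error between two images (lower is better)."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

# ssim = structural_similarity(ref, enhanced, data_range=255)  # closer to 1 is better
```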

To conduct a more general test, we subjected a number of synthesized images to processing with various methods, including CegaHE [38], CVC [17], LDR [39], DCP [24], EFF [42], MSRCP [21], SRIE [22], and the proposed algorithm. Some of the experimental results are shown in Figure 17, where Figure 17(a) shows the original images, Figure 17(b) shows the artificial quality-reduced images, and Figure 17(c) shows the results obtained after the enhancement of the images in Figure 17(b). The objective evaluation metrics achieved by the various methods based on these images are shown in Table 2, in which the values indicating the best performance are italicized.

Tables 1 and 2 indicate that the enhanced images generated using the proposed method most closely match the original images in terms of both gray value distribution and structure. The proposed method greatly outperforms the other methods in terms of its comprehensive effect, generating the best results. These results show that the proposed algorithm can mitigate the influence of uneven illumination on images and achieve effective correction for images of diverse scenes acquired under uneven lighting.

4.3. Computational Complexity

To compare the computational complexity of the above methods, we tested them on images of different sizes in the MATLAB experimental environment and report the average run time calculated from 20 operations on images of the same size. The results presented in Table 3 show that the SRIE method has the lowest computational efficiency, requiring 242.22 seconds to process a single image, while CVC, MSRCP, DCP, EFF, and the proposed method all require similar, far shorter times to process the same image. As the image size increases, the processing time of the MSRCP method increases more rapidly, while that of the other methods increases linearly. The proposed method requires the least run time and thus has the lowest time complexity.

4.4. Adaptivity of Our Method

We also tested the methods on images acquired under extremely low illumination as well as images obtained in normal light conditions; the experimental results are shown in Figure 18. In the top panel of this figure, the first row contains the original images acquired under extremely low illumination, and the second row shows the corresponding enhancement results.

The results show that for the enhancement of images acquired under extremely low illumination, which has presented great challenges in the field of image processing, the enhancement effect of the proposed method is unsatisfactory, but no blocky artifacts are present in the restored images; in this sense, they are consistent with human visual perception. In the bottom panel of the figure, the first and second rows show images acquired under normal illumination and the corresponding enhancement results obtained using the proposed method, respectively. These results show that for images acquired under normal illumination conditions, the processing results of the proposed method are identical to the original images, indicating that the proposed method can adaptively adjust its parameters for different scenes and thus shows good robustness and adaptability.

5. Conclusion

In this paper, we propose a color image correction method based on a local gamma transformation and color compensation, in which the illumination-reflection model is adopted to address the problems of local overenhancement due to uneven illumination in low-light images and the lack of adaptability in the parameter settings of previous methods. First, we convert the original RGB color image into the YUV color space and extract the illumination distribution of the scene from the $Y$ component using a guided filtering function. Then, we perform illuminance enhancement based on an adaptive local gamma transformation and expansion of the dynamic range. Finally, we enhance the color saturation of the image. Comparisons between the proposed method and other conventional algorithms indicate that the proposed algorithm can not only effectively improve the visual effect of the processed image but also reveal more detailed information in dark regions. Because the proposed algorithm uses the distribution characteristics of the illumination component of the scene to dynamically adjust the parameters of the gamma function, it can effectively improve the visual quality of an image, allowing better identification of details in both overexposed and underexposed areas of the image.

Data Availability

Some or all data, models, or code generated or used during the study are available from the corresponding author by request (Wencheng Wang).

Conflicts of Interest

The authors declare that they have no conflicts of interest related to this work.

Acknowledgments

This research was funded by the Shandong Provincial Natural Science Foundation (No. ZR2019FM059), the Science and Technology Plan for Youth Innovation of Shandong Universities (No. 2019KJN012), and the National Natural Science Foundation of China (No. 61403283).