Abstract

We propose a novel approach to low-light image enhancement. Based on the illumination-reflection model, a guided image filter is employed to extract the illumination component of the underlying image. Afterwards, we obtain the reflection component and enhance the illumination and reflection components by nonlinear functions, sigmoid and gamma, respectively. We use a first-order edge-aware constraint in the gradient domain to preserve the edges of enhanced images and to effectively eliminate halo artefacts. Moreover, the resulting images have high contrast and ample details owing to the enhanced illumination and reflection components. We evaluate our method on a large number of low-light images and compare it with other popular methods. The experimental results show that our approach outperforms the others in terms of both visual perception and objective evaluation.

1. Introduction

Video surveillance is now widely used in many fields, such as public security and transportation. Surveillance systems are required to perform not only in the daytime but also at night. However, videos captured in dark places or at night are of such poor quality that the objects in them can hardly be perceived. It is therefore necessary to enhance low-light images in image processing and video surveillance.

Recently, the technique of low-light image enhancement has made remarkable progress. Commonly used methods include the dark channel prior model [1, 2], neural network models [3, 4], histogram equalization (HE) [5, 6], image fusion [7, 8], wavelet domain algorithms [9, 10], and the illumination-reflection model [11–14]. It is noted that the adaptability of the dark channel prior model is poor when processing images with rich details and high brightness [2]. The design and use of a neural network require domain knowledge so that the underlying network generalizes well. The idea of histogram equalization is to merge several grayscale bins in order to increase contrast, but this process may cause loss of detail. Image fusion needs multiple frames, which makes it inapplicable to a single image. The wavelet transform is an alternative technique for low-light image enhancement, as shown in [9]. The most popular approach to enhancing low-light images is to decompose images with the illumination-reflection model.

It was proposed by Land [15] that an image $S(x, y)$ can be decomposed into the product of the reflection component $R(x, y)$ and the illumination component $L(x, y)$:

$$S(x, y) = R(x, y) \cdot L(x, y) \quad (1)$$

where $(x, y)$ represents the coordinates of a pixel in the image. Normally, the illumination component is determined by the dynamic range of the underlying image, while the reflection component depends on the intrinsic characteristics of the objects within the underlying image. Equation (1) is generally converted to the logarithmic domain, $\log S = \log L + \log R$, so that the multiplication becomes an addition.

The illumination component can be obtained by various methods, such as the multiscale Gaussian function in [11], an improved Gaussian function in [12], and the bilateral filtering-based method in [14]. The key idea behind the Gaussian function is to use a low-pass filter as shown in (2):

$$L(x, y) = S(x, y) * G(x, y) \quad (2)$$

where $G(x, y)$ represents a center-surround function, which is also a low-pass filter, and the symbol $*$ represents convolution. Then, the reflection component can be obtained from (3) with the illumination component:

$$\log R(x, y) = \log S(x, y) - \log L(x, y) \quad (3)$$
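As an illustration of this classical estimate, the following minimal Python sketch approximates the illumination component of a grayscale image with a Gaussian low-pass filter and recovers the log-domain reflection component along the lines of (2) and (3); the sigma value is an arbitrary choice for illustration, not a parameter from this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(intensity, sigma=80.0):
    """Single-scale Retinex-style decomposition of a grayscale image.

    intensity: 2-D float array in [0, 1]; sigma is illustrative only.
    Returns (illumination, log_reflection) following (2) and (3).
    """
    eps = 1e-6                                          # avoid log(0)
    illumination = gaussian_filter(intensity, sigma)    # low-pass estimate, (2)
    log_reflection = np.log(intensity + eps) - np.log(illumination + eps)  # (3)
    return illumination, log_reflection
```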

While calculating the illumination component and the reflection component of an image, what matters most is preserving edges while smoothing flat regions. The multiscale Retinex (MSR) algorithm in [11] processes each color channel individually and then removes the illumination component to keep the reflection component. Such processing is liable to color distortion. In addition, the method cannot preserve edges and easily yields halo artefacts, as shown in Figure 1(b). Bilateral filtering (BLF), proposed by Tomasi and Manduchi [16], is a good method for preserving edges and eliminating halo artefacts. However, it was pointed out in [17] that BLF may cause gradient reversal, as shown in Figure 2(b). Accordingly, the guided image filter (GIF) [17] and its variants, namely, the weighted guided image filter (WGIF) [18] and the guided image filter in the gradient domain (GDGIF) [19], were proposed to achieve good edge preservation.

In this paper, we present a new method for low-light image enhancement, whose main contributions are threefold: (1) in order to effectively cope with halo artefacts and gradient reversal, the proposed method uses the illumination-reflection model and selects the GIF in the gradient domain, characterized by smoothness and edge preservation, to estimate the illumination component; (2) the proposed method works in the HSI color space to eliminate color distortion; (3) the illumination component and the reflection component are enhanced by nonlinear sigmoid and gamma transforms, respectively, to improve image contrast and enhance image details.

The remainder of the paper is organized as follows: Section 2 describes the proposed low-light image enhancement algorithm. Section 3 presents experimental results, followed by the conclusion drawn in Section 4.

2. The Proposed Approach

The proposed method for enhancing low-light images consists of the following steps (see the sketch after this list):

(1) Converting the low-light image from the RGB color space to the HSI color space
(2) For the intensity (illumination) layer, estimating the illumination component with the guided image filter in the gradient domain, followed by extracting the reflection component
(3) Enhancing the illumination component with a nonlinear sigmoid transform
(4) Enhancing the reflection component with a nonlinear gamma transform
(5) Taking the antilog of the sum of the results of steps (3) and (4) as the enhanced intensity layer
(6) Converting the new HSI image back to RGB, which produces the final enhanced image
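A high-level Python sketch of this pipeline is given below. The helpers rgb_to_hsi, hsi_to_rgb, gdgif, sigmoid_stretch, and gamma_stretch are hypothetical names standing in for the operations detailed in Sections 2.1–2.3, and the filter parameters shown are illustrative; this is a structural sketch, not the authors' released implementation.

```python
import numpy as np

def enhance_low_light(rgb):
    """Structural sketch of the proposed pipeline; rgb_to_hsi, hsi_to_rgb,
    gdgif, sigmoid_stretch, and gamma_stretch are placeholders for the
    operations detailed in Sections 2.1-2.3."""
    eps = 1e-6
    h, s, i = rgb_to_hsi(rgb)                # step (1): RGB -> HSI
    illum = gdgif(i, i, r=16, eps=0.01)      # step (2): illumination (guide = input);
                                             #           parameter values illustrative
    reflect = (i + eps) / (illum + eps)      # step (2): reflection component
    illum_en = sigmoid_stretch(illum)        # step (3): sigmoid transform
    reflect_en = gamma_stretch(reflect)      # step (4): gamma transform
    log_i_en = np.log(illum_en + eps) + np.log(reflect_en + eps)
    i_en = np.exp(log_i_en)                  # step (5): antilog of the sum
    return hsi_to_rgb(h, s, np.clip(i_en, 0.0, 1.0))  # step (6): HSI -> RGB
```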

2.1. Guided Image Filter in Gradient Domain

GDGIF [19] and WGIF [18] are both improvements of the guided image filter. Basically, the output image $q$ is a linear transform of the guided image $I$ in a square window $\omega_k$ centered at the pixel $k$:

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k \quad (4)$$

In (4), the guided image $I$ can be the input image $p$ itself. The pixel $i$ is located in the window $\omega_k$ of side length $2r + 1$, in which $r$ is the filter radius. $a_k$ and $b_k$ are the linear coefficients for the window centered at pixel $k$. For GIF, WGIF, and GDGIF, a value of $a_k$ close to 1 implies better edge preservation. On the contrary, if the value of $a_k$ is much closer to 0, the filters have good smoothing performance in flat regions.

According to the gradient-domain optimization framework in [20], filtering an image means converting an input image $p$ into a final image $q$, which can be expressed as minimizing an energy function comprising a zero-order data cost term and a first-order gradient cost term:

$$E = \sum_i \left[ (q_i - p_i)^2 + \lambda (\nabla q_i)^2 \right] \quad (5)$$

where $\nabla q_i$ is the gradient magnitude and $\lambda$ is the gradient weight constraint. Under the linear model (4), the gradient of $q$ is $\nabla q_i = a_k \nabla I_i$.

2.1.1. Guided Image Filter

In the guided image filter [17], the expressions of $a_k$ and $b_k$ are obtained by minimizing, in each window $\omega_k$, the regularized cost built on the linear model (4):

$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon} \quad (6)$$

$$b_k = \bar{p}_k - a_k \mu_k \quad (7)$$

where $\mu_k$ is the mean of the guided image $I$ in the window $\omega_k$, $\sigma_k^2$ is the variance of $I$ in $\omega_k$, $|\omega|$ is the total number of pixels in the window, and $\bar{p}_k$ is the mean of the input image $p$ in $\omega_k$. $\varepsilon$ is a regularization parameter, which controls the trade-off between edge preservation and smoothness. As $\varepsilon$ is generally fixed rather than spatially varying in the filtering process, halo artefacts are unavoidable at edges.
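For concreteness, the following is a minimal box-filter implementation of the guided image filter in Python along the lines of (4), (6), and (7); the per-pixel averaging of the overlapping-window coefficients follows He et al. [17], while the function and parameter names are our own.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Guided image filter: I is the guide, p the input, r the window
    radius, eps the regularization parameter (images are floats in [0, 1])."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)                     # mu_k
    mean_p = uniform_filter(p, size)                     # p-bar_k
    corr_Ip = uniform_filter(I * p, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2    # sigma_k^2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)      # (6)
    b = mean_p - a * mean_I                              # (7)
    # Average the coefficients of all windows covering each pixel,
    # then apply the linear model (4).
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b
```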

2.1.2. Weighted Guided Image Filter

Li et al. [18] proposed the WGIF, where a spatially varying edge-aware weighting $\Gamma_I(k)$ scales the regularization term of (6). The weighting is defined by using local variances in $3 \times 3$ windows:

$$\Gamma_I(k) = \frac{1}{N} \sum_{k'=1}^{N} \frac{\sigma_{I,1}^2(k) + \varepsilon_0}{\sigma_{I,1}^2(k') + \varepsilon_0} \quad (8)$$

where $\sigma_{I,1}^2(k)$ is the variance of the guided image $I$ in the $3 \times 3$ window centered at $k$, $N$ is the number of image pixels, and $\varepsilon_0$ is a small constant whose value is $(0.001 \times L)^2$, with $L$ the dynamic range of the input image. The expressions of $a_k$ and $b_k$ in the WGIF then become

$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon / \Gamma_I(k)} \quad (9)$$

$$b_k = \bar{p}_k - a_k \mu_k \quad (10)$$

It is noted that the edge-aware weighting is spatially varying in the WGIF; that is, $\Gamma_I(k)$ is larger than 1 when the pixel $k$ lies in an edge area and smaller than 1 when it lies in a flat area. As a result, $a_k$ is closer to 1 than in the GIF, which implies that the WGIF has better edge preservation than the GIF.
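A direct translation of (8) into Python, computing the local variances with box filters; the window scale and variable names mirror the description above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_aware_weight(I, eps0):
    """Single-scale edge-aware weighting of the WGIF, following (8).
    I: guide image, float in [0, 1]; eps0 = (0.001 * L)**2 with L = 1 here."""
    var1 = uniform_filter(I * I, 3) - uniform_filter(I, 3) ** 2  # 3x3 variance
    return (var1 + eps0) * np.mean(1.0 / (var1 + eps0))          # Gamma_I(k)
```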

The WGIF can reduce halo artefacts to some extent. However, neither the GIF nor the WGIF has explicit constraints to cope with edges. Neither can preserve edges well, because the filtering is also performed across edges, which inevitably smooths them [19, 21].

2.1.3. Guided Image Filter in Gradient Domain

Kou et al. [19] proposed the guided image filter in the gradient domain (GDGIF) by adding an explicit first-order (gradient-domain) edge-aware constraint $\gamma$ to the gradient-domain energy (5):

$$E = \sum_i \left[ (q_i - p_i)^2 + \lambda \left( \nabla q_i - \gamma_i \nabla p_i \right)^2 \right] \quad (11)$$

where $\gamma$ is the edge-aware constraint. The aim is to perceive the changes in local neighbourhoods so that similar filtering is performed in similar regions. $\lambda$ is a weight value, and $\nabla q$ and $\nabla p$ represent the gradients of the output image $q$ and the input image $p$.

In the GDGIF, a multiscale, spatially varying edge-aware weighting $\hat{\Gamma}_I(k)$ takes the place of the single-scale weighting in (9) and (10). The energy function of the GDGIF is shown in (12), in which the second term combines the first-order gradient cost and the edge-aware constraint $\gamma_k$:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \frac{\varepsilon}{\hat{\Gamma}_I(k)} (a_k - \gamma_k)^2 \right] \quad (12)$$

$$\hat{\Gamma}_I(k) = \frac{1}{N} \sum_{k'=1}^{N} \frac{\chi(k) + \varepsilon_0}{\chi(k') + \varepsilon_0}, \quad \chi(k) = \sigma_{I,1}(k)\, \sigma_{I,r}(k) \quad (13)$$

where $\sigma_{I,1}(k)$ and $\sigma_{I,r}(k)$ are the standard deviations of $I$ in the $3 \times 3$ window and in the $(2r+1) \times (2r+1)$ window centered at $k$, and $r$ is the filter radius. It is noted that $\hat{\Gamma}_I(k)$ is a multiscale, spatially varying edge-aware weighting. It detects edges more accurately than a single-scale weighting: a pixel is detected as an edge pixel only when the variances at both scales are large.

The comparison of the multiscale weighting $\hat{\Gamma}_I$ in the GDGIF and the single-scale weighting $\Gamma_I$ in the WGIF on an image is shown in Figure 3. The edges of the image are detected more accurately by the multiscale edge-aware weighting than by the single-scale edge-aware weighting of the WGIF.

The edge-aware constraint $\gamma_k$ in (14) serves to preserve edges:

$$\gamma_k = 1 - \frac{1}{1 + \exp\!\left( \eta \left( \chi(k) - \mu_\chi \right) \right)} \quad (14)$$

where $\mu_\chi$ is the mean of all $\chi(k)$ and the value of $\eta$ is $4 / (\mu_\chi - \min_k \chi(k))$. The value of $\gamma_k$ is close to 1 when the pixel $k$ lies in an edge area and close to 0 when it lies in a flat area.

The expressions of $a_k$ and $b_k$ in the GDGIF can be obtained according to (12)–(14), shown as follows:

$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k + \frac{\varepsilon}{\hat{\Gamma}_I(k)} \gamma_k}{\sigma_k^2 + \frac{\varepsilon}{\hat{\Gamma}_I(k)}} \quad (15)$$

$$b_k = \bar{p}_k - a_k \mu_k \quad (16)$$
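Putting (12)–(16) together, a compact Python sketch of the GDGIF might look as follows. It reuses box-filter means as in the GIF sketch and follows our reconstruction of the equations above, so the exact constants should be checked against Kou et al. [19]; the default eps0 assumes a dynamic range of 1 for float images.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gdgif(I, p, r, eps, eps0=(0.001 * 1.0) ** 2):
    """Gradient-domain guided image filter, following (12)-(16).
    I: guide, p: input (floats in [0, 1]); r: radius; eps: regularization;
    eps0 = (0.001 * L)**2 with dynamic range L = 1 for float images."""
    size = 2 * r + 1

    def var(x, s):  # local variance, clipped for numerical safety
        return np.maximum(uniform_filter(x * x, s) - uniform_filter(x, s) ** 2, 0.0)

    chi = np.sqrt(var(I, 3)) * np.sqrt(var(I, size))        # chi(k) in (13)
    Gamma = (chi + eps0) * np.mean(1.0 / (chi + eps0))      # multiscale weighting, (13)
    mu_chi = chi.mean()
    eta = 4.0 / (mu_chi - chi.min() + 1e-12)
    gamma_k = 1.0 - 1.0 / (1.0 + np.exp(eta * (chi - mu_chi)))  # (14)

    mean_I, mean_p = uniform_filter(I, size), uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    lam = eps / Gamma                                        # eps / Gamma-hat
    a = (cov_Ip + lam * gamma_k) / (var(I, size) + lam)      # (15)
    b = mean_p - a * mean_I                                  # (16)
    return uniform_filter(a, size) * I + uniform_filter(b, size)  # (4), averaged
```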

When the input image and the guided image are the same, the GDGIF has better edge-preserving and smoothing properties than the GIF and the WGIF, owing to the following two points.

(1) When pixel $k$ lies in an edge area, $\gamma_k$ is close to 1, so with $I = p$ the expression of $a_k$ becomes

$$a_k = \frac{\sigma_k^2 + \frac{\varepsilon}{\hat{\Gamma}_I(k)} \gamma_k}{\sigma_k^2 + \frac{\varepsilon}{\hat{\Gamma}_I(k)}}$$

It is seen that when $\gamma_k = 1$ the value of $a_k$ is exactly 1, independent of the parameter $\varepsilon$. In fact, the value of $a_k$ in the GDGIF is much closer to 1 than it is in the GIF and the WGIF. Hence, the GDGIF has the best edge-preserving feature.

(2) When a pixel $k$ lies in a flat area, $\gamma_k$ is close to 0 and $\sigma_k^2$ is small, so $a_k$ is close to 0 largely independently of the choice of $\varepsilon$. As a result, we can select a larger $\varepsilon$ in the GDGIF than in the WGIF and the GIF, so that better smoothing is achieved without affecting edge preservation [19].

For example, we filter an image with the GIF, the WGIF, and the GDGIF using the same filter radius $r$ and regularization parameter $\varepsilon$. The result is shown in Figure 4. It is observed that the image filtered by the GDGIF preserves edges best.

In summary, the GDGIF has the best edge preserving and smoothing features. As a result, we choose the GDGIF to estimate the illumination component.

2.2. Enhancing the Intensity Layer

Generally, processing an RGB image means operating on the R, G, and B channels separately, which is time-consuming. In this work, the low-light image is converted from the RGB color space to the HSI color space. The HSI color space stems from the human visual system and describes a color by three elements, hue (H), saturation (S), and intensity (I), which is more consistent with human visual perception than the RGB color space. In the HSI color space, only the intensity component is enhanced, while the hue and saturation components are kept without further processing.

2.2.1. Estimating Illumination Component

Based on the illumination-reflection model and combining (2), (4), and (15), we use the GDGIF to estimate the illumination component $L_I$ of the intensity-layer image $I_{in}$ as follows:

$$L_I(x, y) = \mathrm{GDGIF}\big(I_{in}(x, y)\big) \quad (17)$$

where $\mathrm{GDGIF}(\cdot)$ denotes the filtering operation of (4) with the coefficients of (15) and (16), using the intensity layer as both guide and input. Then the reflection component of the intensity-layer image can be obtained according to (3) and (17) as follows:

$$\log R_I(x, y) = \log I_{in}(x, y) - \log L_I(x, y) \quad (18)$$
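In code, this step amounts to one GDGIF call followed by a log-domain subtraction; gdgif here is the sketch from Section 2.1.3, the small constant guards against log(0), and the r and eps values are illustrative rather than the paper's settings.

```python
import numpy as np

def decompose_intensity(i_layer, r=16, eps=0.01):
    """Estimate illumination via GDGIF, (17), and the log-domain
    reflection component, (18). r and eps are illustrative values only."""
    eps_log = 1e-6
    illum = gdgif(i_layer, i_layer, r, eps)                # (17): guide = input
    log_reflect = np.log(i_layer + eps_log) - np.log(illum + eps_log)  # (18)
    return illum, log_reflect
```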

2.2.2. Enhancing Illumination Component

Normally, methods based on the illumination-reflection model extract and then enhance the reflection component without considering the illumination component. Such processing leads to a lack of coordination between gray levels and yields color distortion. To cope with this problem, Wu et al. [13] proposed enhancing the reflection component together with the illumination component. Inspired by this idea, we process the illumination component with a nonlinear tensile sigmoid transform, as shown in Figure 5. It is indicated in [22] that the sigmoid transform has the ability to sharpen images, highlight local details, and stretch image contrast. The transform maps the illumination component through a self-defined sigmoid function, whose range is $(0, 1)$, and rescales the output between the minimum and maximum values of the reflection component, producing the enhanced illumination component from the initial one. The transform has two important parameters, denoted here as $a$ and $b$: the parameter $a$ controls how strongly an image is enhanced, and the parameter $b$ controls the contrast enhancement. Generally, a large value of $a$ enhances an image greatly; a small value of $b$ enhances the contrast of the dark regions, and a large value of $b$ enhances the contrast of the light regions. Figure 6 shows examples with different parameter values. In this study, the parameters $a$ and $b$ are selected as 2 and 0.004, respectively.
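Since the exact functional form of the self-defined sigmoid is not reproduced here, the following Python sketch assumes a standard logistic curve with the two parameters described above, rescaled to the stated output range; it illustrates the shape of the mapping rather than the authors' exact function.

```python
import numpy as np

def sigmoid_stretch(illum, a=2.0, b=0.004, out_min=0.0, out_max=1.0):
    """Illustrative sigmoid enhancement of the illumination component.
    ASSUMPTION: a logistic curve stands in for the paper's self-defined
    sigmoid; a controls enhancement strength, b shifts the curve so that
    a small b favors the contrast of dark regions."""
    s = 1.0 / (1.0 + np.exp(-a * (illum - b)))    # logistic, range (0, 1)
    return out_min + (out_max - out_min) * s      # rescale to the output range
```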

2.2.3. Enhancing the Reflection Component

It is known that human eyes are less sensitive to brightness differences in bright regions but sensitive to small differences at low intensities. Thus, the gamma transform [23] is normally employed to enhance the reflection component as follows:

$$R'(x, y) = c \cdot \big[ R(x, y) \big]^{\gamma} \quad (19)$$

where $c$ is a positive constant and $\gamma$ is a parameter that controls the image contrast.
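The gamma transform is a one-liner; the sketch below applies (19) to the (nonnegative, normalized) reflection component, with c and g as free parameters whose default values are placeholders, not the paper's choices.

```python
import numpy as np

def gamma_stretch(reflect, c=1.0, g=0.8):
    """Gamma enhancement of the reflection component, following (19).
    g < 1 boosts contrast at low intensities; c is a positive scale.
    The default parameter values are placeholders, not the paper's choices."""
    return c * np.power(np.clip(reflect, 0.0, None), g)
```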

It is seen from Figure 7 that when $\gamma$ is less than 1, the contrast at low intensities is increased; on the contrary, the contrast at high intensities is enhanced when $\gamma > 1$. The effect of the parameter $c$ is shown in Figure 8. In this work, the parameters $c$ and $\gamma$ are selected experimentally.

2.3. Final Enhanced Image

By taking the antilog of the enhanced illumination component combined with the enhanced reflection component, we obtain the enhanced intensity image. The new HSI image is composed of the enhanced intensity layer and the original hue and saturation layers. The enhanced HSI image is then converted into an RGB image to obtain the final enhanced image. The whole process is shown in Figure 9.

3. Experimental Results and Discussions

As there is no public low-light image database, we collect 20 images from the Library of Congress and the Internet, as shown in Figure 10. In the simulation, the parameters are fixed in advance: the window radius $r$, the regularization parameter $\varepsilon$, the sigmoid parameters $a = 2$ and $b = 0.004$, and the gamma parameters $c$ and $\gamma$. All experiments are performed in Matlab on the Windows 7 operating system. The computer has an Intel® Core™ i5-4570 3.20 GHz CPU and 4.00 GB of RAM. The popular algorithms, namely, traditional MSR [11], Hao's algorithm [14], He's algorithm [17], histogram equalization (HE), improved MSR [12], and Kim's algorithm [9], are implemented for performance comparison.

3.1. Subjective Evaluation

It is observed from Figures 11 and 12 that the images enhanced by the traditional MSR algorithm show an obvious "whitening" phenomenon, which indicates color distortion. The images enhanced by Hao's method suffer from gradient reversal artefacts, as seen in the red square in image 4 (white tower) in Figure 11, which indicates that bilateral filtering has a poor edge-preserving property. Moreover, the resulting images of Hao's method are blurred, as can be seen in images 1 (church) and 5 (the study) in Figure 11. He's algorithm also shows halo artefacts, visible in image 1 (church), and it cannot enhance image contrast and brightness, as shown in Figure 12.

The histogram equalization algorithm can enhance image contrast, but it loses details and amplifies noise, as can be observed in Figures 11 and 12. The improved MSR algorithm is better than the traditional MSR algorithm, but it causes halo artefacts, as seen in image 1 (church) in Figure 11 (highlighted box). Moreover, the "whitening" phenomenon persists, as shown in image 13 (boy), where the tire is over-enhanced and loses detail. Our results show that the improved MSR algorithm is not effective on night images, as observed in Figure 12.

Kim's algorithm also causes color distortion, as can be seen in image 1 (church) in Figure 11, where the sky in the red square becomes gray. The enhanced images may also be blurred, as seen in image 1 (church), which implies that Kim's algorithm has poor edge preservation.

As seen from Figures 11 and 12, the proposed algorithm has the best color fidelity and edge preservation. Also, the enhanced images have less noise and clearer details than those enhanced by the histogram equalization algorithm.

3.2. Objective Evaluation

Currently, no standard objective metrics exist for assessing the enhancement of low-light images. In this study, we employ the information entropy (IE) to evaluate image details and choose the average edge intensity (AEI) to evaluate the edge preservation of enhanced images. The IE is defined as follows:

$$\mathrm{IE} = -\sum_{i=0}^{K-1} p(i) \log_2 p(i) \quad (20)$$

where $p(i)$ represents the probability of gray level $i$ and $K$ is the total number of gray levels. A larger IE indicates more information, which implies that there are more details in the underlying image.
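Computing the IE from a grayscale histogram is straightforward; the following Python sketch assumes 8-bit images (K = 256).

```python
import numpy as np

def information_entropy(gray_u8):
    """Information entropy of an 8-bit grayscale image, following (20)."""
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()      # gray-level probabilities
    p = p[p > 0]               # skip empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())
```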

The AEI is defined as follows:

$$\mathrm{AEI} = E\!\left[ \sqrt{g_x^2 + g_y^2} \right] \quad (21)$$

where $g_x$ and $g_y$ represent the gradients in the $x$ and $y$ directions on image edges and $E[\cdot]$ represents the expectation. In this study, the Sobel operator is used for gradient computation. A larger AEI indicates good edge preservation. In addition, the efficiency of the proposed method is compared with that of the other methods.
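A matching sketch of the AEI using Sobel gradients follows; averaging over all pixels (rather than only detected edge pixels) is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import sobel

def average_edge_intensity(gray):
    """Average edge intensity, following (21): mean Sobel gradient magnitude.
    ASSUMPTION: the mean is taken over all pixels, not a detected edge mask."""
    gx = sobel(gray.astype(np.float64), axis=1)   # horizontal gradient
    gy = sobel(gray.astype(np.float64), axis=0)   # vertical gradient
    return float(np.mean(np.hypot(gx, gy)))
```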

Table 1 shows the average IE, AEI, and operating time over the 20 test images. It is observed that He's method does not perform well in terms of IE: the images it enhances are usually dark and lose many details. Figure 13 shows the IE performance of the best three methods, that is, Hao's algorithm, Kim's algorithm, and the proposed method. It is seen that the median IE (marked by a red line) of the proposed method is the largest. The mean IE (marked by a green line) of the proposed method is 9.8% higher than that of Hao's algorithm and 1.8% higher than that of Kim's algorithm over the 20 test images.

On the other hand, it is observed from Table 1 that the improved MSR, histogram equalization (HE), Kim's algorithm, and the proposed algorithm achieve the top four performances in terms of AEI. It is noted that the high AEI of histogram equalization is largely accounted for by noise. The AEI of the improved MSR, Kim's method, and the proposed method on each individual image is shown in Figure 14. The results show that the median AEI (marked by a red line) of the proposed method is the largest, and its mean AEI (marked by a green line) is 26.8% higher than that of the improved MSR and 15.6% higher than that of Kim's algorithm.

In summary, the proposed method achieves the best performance in terms of both IE and AEI. Furthermore, the proposed method is very efficient, ranking 3rd among the 7 methods in operating time, as shown in Table 1. It is noted that He's algorithm produces the worst enhanced image quality. Figure 15 shows the median operating time (marked by a red line) and the mean operating time (marked by a green line) of HE, Kim's algorithm, and the proposed method. Generally, bilateral filtering is not as efficient as the GDGIF: the computational complexity of brute-force bilateral filtering is $O(N r^2)$, where $r$ refers to the window radius of the bilateral filter and $N$ is the number of image pixels, while the GDGIF's computational complexity is $O(N)$. In other words, the GDGIF's computational complexity is independent of the filter size. The results show that the average time of the proposed method is 59.7% higher than that of HE and 19.4% less than that of Kim's algorithm.

4. Conclusion

A low-light image enhancement algorithm is presented in this paper. By decomposing a low-light image into an illumination component and a reflection component, it enhances the illumination and the image details separately. Specifically, the illumination component is estimated using the guided image filter in the gradient domain and then processed by a nonlinear sigmoid transform, while the reflection component is enhanced by the gamma transform. This solution enhances low-light images while effectively avoiding distortions (e.g., of color) and annoying artefacts (e.g., blurring and halos). The final result is obtained by taking the antilog of the sum of the two enhanced components. Experimental results demonstrate that the images enhanced by the proposed method are visually pleasing in subjective tests and that the proposed method outperforms existing methods in terms of both IE and AEI. Moreover, the proposed algorithm is efficient because its computational complexity is independent of the filter size. The proposed method therefore has great potential for real-time low-light video processing.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61371190, the Natural Science Foundation of Shanghai under Grants 13ZR1455200 and 17ZR1411900, the Opening Project of the Shanghai Key Laboratory of Integrated Administration Technologies for Information Security (AGK2015006), the Funding Program for the Cultivation of Young University Teachers of Shanghai (ZZGCD15090), and the Research Start-Up Funding Program of Shanghai University of Engineering Science (2016-56). The authors would like to thank Se Eun Kim for sharing his code.