The Scientific World Journal
Volume 2014 (2014), Article ID 979081, 9 pages
Green Channel Guiding Denoising on Bayer Image
College of Information System and Management, National University of Defense Technology, Changsha, Hunan 410073, China
Received 30 August 2013; Accepted 8 January 2014; Published 11 March 2014
Academic Editors: R. Cabeza and J. M. Corchado
Copyright © 2014 Xin Tan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Denoising is an indispensable function for digital cameras. Since noise is diffused during demosaicking, denoising ought to work directly on bayer data. The difficulty of denoising a bayer image lies in its interlaced mosaic pattern of red, green, and blue. The guided filter is a novel, time-efficient, explicit filter kernel that can incorporate additional information from a guidance image, but it has not yet been applied to bayer images. In this work, we observe that the green channel of the bayer pattern is higher in both sampling rate and Signal-to-Noise Ratio (SNR) than the red and blue ones. Therefore, the green channel can be used to guide denoising. This kind of guidance integrates the different color channels together. Experiments on both actual and simulated bayer images indicate that the green channel acts well as the guidance signal and that the proposed method is competitive with other popular filter kernel denoising methods.
1. Introduction
Image denoising is one of the most active topics in image processing. It can not only enhance image quality but also increase compression efficiency and improve the robustness of subsequent intelligent analysis algorithms, such as object detection and pattern classification. Many high-performance algorithms have been proposed; however, they often carry a heavy computational burden. For example, Non-Local Means (NLM) , BM3D [2, 3], and K-SVD  are based on searching for locally similar patches; others are based on statistical learning or dictionary training [4–6], which requires iterative optimization; some also rely on domain transformation [7, 8]. For digital cameras, especially video cameras, the frame rate must be at least 24 fps. Additionally, there are many image processing steps in the camera, including denoising, demosaicking, automatic white balance, automatic exposure, gamma correction, color correction, brightness and contrast adjustment, and edge enhancement. All of these steps must be computed in less than 42 ms (1/24 s). For some high-end cameras this demand is much stricter, for example, at least 60 fps for motion-capture cameras. Thus, algorithms based on block matching, domain transformation, or model training are not suitable for real-time image processing in digital cameras.
Therefore, digital camera denoising often uses explicit filter kernels, such as the mean filter, median filter, Gaussian filter, and the more effective edge-preserving bilateral filter . Building on the bilateral filter, He et al.  proposed a new type of explicit image filter named the guided filter. It incorporates additional information from a given guidance image, so that the filtered image has the same edge information as the guidance image. The guided filter has the edge-preserving property of the bilateral filter but does not suffer from gradient reversal artifacts. Moreover, it has higher time efficiency than the bilateral filter .
Now consider the image sensor in digital cameras. Most cameras use a single sensor with a color filter matrix, and most sensors adopt the RGB three-color bayer pattern shown in Figure 1.
The image obtained by a bayer pattern sensor is called a bayer image. Each pixel records only a single color, so the full three-color representation must be reconstructed by estimating the missing components from adjacent pixels. This process is called demosaicking. Research has focused on this interpolation problem for a long time [11–13].
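For concreteness, a bayer image can be simulated from a full RGB image by keeping only one color sample per pixel. Below is a minimal sketch in Python/NumPy, assuming an RGGB layout (the actual layout of Figure 1 may differ):

```python
import numpy as np

def rggb_masks(h, w):
    """Boolean sampling masks for an assumed RGGB bayer pattern of size h x w."""
    r = np.zeros((h, w), bool)
    g = np.zeros((h, w), bool)
    b = np.zeros((h, w), bool)
    r[0::2, 0::2] = True   # red on even rows, even cols: 1/4 of the pixels
    g[0::2, 1::2] = True   # green appears on every row...
    g[1::2, 0::2] = True   # ...covering 1/2 of the pixels in total
    b[1::2, 1::2] = True   # blue on odd rows, odd cols: 1/4 of the pixels
    return r, g, b

def mosaic(rgb):
    """Flatten an h x w x 3 RGB image into a single-channel bayer image."""
    h, w, _ = rgb.shape
    r, g, b = rggb_masks(h, w)
    bayer = np.zeros((h, w), rgb.dtype)
    bayer[r] = rgb[..., 0][r]
    bayer[g] = rgb[..., 1][g]
    bayer[b] = rgb[..., 2][b]
    return bayer
```

Counting the `True` entries in the masks confirms the sampling-rate claim used later in the paper: the green mask covers half of all pixels, while red and blue each cover a quarter.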
At present, almost all denoising algorithms [1–10] are aimed at monochromatic or color images. Because of the interlaced red, green, and blue pattern, these denoising methods cannot work directly on a bayer image. There are two options to handle this problem. One is to remove noise from the demosaicked color image. However, this is not a good choice, because noise is spread across the channels during demosaicking and its characteristics become more complicated; removing noise after demosaicking makes the denoising more difficult [5, 14]. The other option is to denoise before demosaicking, and many algorithms adopting this option have been presented [5, 14–17]. The aim of this paper is also to design a denoising method for bayer images. In view of the fact that the guided filter is an excellent filter kernel with high time efficiency, we concentrate on how to use this filter in our work.
During our research, we observe that the green channel of a bayer image not only has a higher sampling rate than the red and blue channels, but in most cases also higher photosensitivity, that is, ISO. Another study  shows that image Signal-to-Noise Ratio (SNR) increases with ISO at the same exposure time. Thus, for the guided filter, the green channel can act well as a guidance signal, and we propose a high-SNR-channel guided denoising method based on the guided filter.
The rest of the paper is structured as follows. Section 2 briefly introduces the guided filter. Section 3 presents the characteristics of bayer image’s green channel and describes the algorithm flow. In Section 4, experiments on real bayer image and Kodak test sets demonstrate the efficiency of the proposed method. Finally, conclusions are drawn in Section 5.
2. Guided Filter Overview
In some cases, we need to merge extra information into the original image. For example, in colorization the output chromatic channels require having the same edge as the luminance channel. He et al.  propose an explicit filter kernel to solve this problem.
Define the guidance image $I$, the input image $p$, and the output image $q$. The assumption of the guided filter is a local linear model between the guidance image and the output image. Namely, in a local window $\omega_k$ centered at the pixel $k$, the output at a pixel $i$ is
$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k, \tag{1}$$
where $(a_k, b_k)$ are the constant linear coefficients in $\omega_k$. The local window is a square area with radius $r$. From (1) we know $\nabla q = a_k \nabla I$; namely, the edge information of the output image is linear with that of the guidance image. To determine the linear coefficients $(a_k, b_k)$, the following cost function is minimized:
$$E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right). \tag{2}$$
Here $\epsilon$ is a regularization parameter preventing $a_k$ from being too large. The solution of (2) is
$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k. \tag{3}$$
Here $\mu_k$ and $\sigma_k^2$ are the mean and variance of $I$ in $\omega_k$, $|\omega|$ is the number of pixels in $\omega_k$, $\bar{p}_k$ is the mean of $p$ in $\omega_k$, and the numerator of $a_k$ is the covariance between $I$ and $p$ in $\omega_k$.
In (3), there is only one parameter, $\epsilon$. Increasing $\epsilon$ reduces the gradient of the output image. When $\epsilon \to \infty$, $a_k \to 0$ and $b_k \to \bar{p}_k$, so the guided filter degenerates to a mean filter. Thus $\epsilon$ is the criterion of smoothness versus sharpness. In the other extreme, when $I = p$, that is, taking the input image itself as guidance, and without smoothing ($\epsilon = 0$), then $a_k = 1$ and $b_k = 0$, so $q = p$; the guided filter degenerates to the identity.
Because the pixel $i$ is involved in many windows $\omega_k$ and the coefficients $(a_k, b_k)$ differ from window to window, the outputs $q_i$ given by (1) differ as well. He et al.  take the simple strategy of averaging all the possible values:
$$q_i = \frac{1}{|\omega|} \sum_{k : i \in \omega_k} (a_k I_i + b_k) = \bar{a}_i I_i + \bar{b}_i, \tag{4}$$
where $\bar{a}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} a_k$ and $\bar{b}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} b_k$.
However, this averaging means $\nabla q$ is no longer simply a scaling of $\nabla I$. Since $\bar{a}_i$ and $\bar{b}_i$ are averages of $a_k$ and $b_k$, their gradients are much smaller than those of $I$ near strong edges, so we can still expect $\nabla q \approx \bar{a} \nabla I$. Therefore the guided filter preserves edges well and does not have the gradient reversal problem of the bilateral filter.
For time efficiency, He et al.  show that the time complexity is independent of the window radius $r$ thanks to the use of the box filter, as described in their paper. For the bilateral filter, in contrast, the computational complexity increases as the kernel becomes larger. So the guided filter is faster than the bilateral filter.
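The construction above can be sketched in a few lines of Python/NumPy. This is a hypothetical minimal implementation for gray-scale guidance, not the authors' code: the whole filter reduces to a handful of box means computed from integral images, so the runtime does not grow with the radius.

```python
import numpy as np

def box_mean(img, r):
    """Mean over each (2r+1)x(2r+1) window via an integral image,
    so the cost per pixel is independent of the radius r."""
    h, w = img.shape
    d = 2 * r + 1
    p = np.pad(img, r, mode='edge')                 # replicate borders
    s = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (s[d:d + h, d:d + w] - s[:h, d:d + w]
            - s[d:d + h, :w] + s[:h, :w]) / d ** 2

def guided_filter(I, p, r, eps):
    """Guided filter of He et al.: the output carries the edges of guidance I."""
    mu_I, mu_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mu_I ** 2          # per-window variance of I
    cov_Ip = box_mean(I * p, r) - mu_I * mu_p       # per-window covariance of I, p
    a = cov_Ip / (var_I + eps)                      # per-window linear coefficients
    b = mu_p - a * mu_I
    return box_mean(a, r) * I + box_mean(b, r)      # average overlapping windows
```

With `I = p` and a tiny `eps`, the output stays close to the input; with large `eps`, the filter approaches a plain mean filter, matching the limiting behavior discussed above.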
In fact, the guided filter is an advance over the joint bilateral filter (JBF) , which also employs extra guidance information to improve filter performance but suffers from gradient reversal artifacts.
3. Green Channel Guiding Denoising
When using the guided filter as a denoising method on a bayer image, the key is to obtain appropriate guiding signals. One simple scheme is to treat each color channel as a gray-scale subimage and denoise the channels separately. This solution, however, does not exploit the spectral correlation among channels; a scheme that makes full use of the interchannel correlation is preferable. Moreover, a high quality guiding signal needs to have less noise, namely, higher SNR, than the guided image. It should also be simple, owing to the real-time and low resource consumption requirements.
According to the bayer color matrix in Figure 1, the green channel carries half of the information of the entire sensor and twice that of the red or blue channel. During demosaicking, the remaining three-fourths of the full image must be estimated for the red and blue channels, while for the green channel only half of the full image is estimated. So the estimation error of the green channel is less than that of the other two. Furthermore, the green band tracks perceived brightness well, and the human eye is more sensitive to brightness than to chromaticity, so the green channel matters more perceptually than red and blue. For this reason, many demosaicking methods recover the full-resolution green channel first and then reconstruct the red and blue channels from the green one [12, 22, 23].
On the other hand, Hasinoff et al. found in their research  that, at a given scene brightness and exposure time, images captured with higher ISO have higher SNR. In their noise model, they define the scene brightness as $L$. For dark pixels, additive (read) noise is dominant, so SNR increases linearly with $L$; for bright pixels, photon noise is dominant, so SNR increases with $\sqrt{L}$. Namely, the brighter the scene is, the higher the SNR is. Hasinoff et al. therefore raise the ISO to increase the recorded brightness, so as to raise the SNR .
According to the Lambertian reflection model , the pixel value of color channel $c$ at a spatial point depends on the integration of the incident ray spectral distribution $E(\lambda)$, the surface reflectivity function $S(x, \lambda)$ at point $x$ and wavelength $\lambda$, and the sensor sensitivity function $Q_c(\lambda)$, as follows:
$$\rho_c(x) = \int_{\Omega} E(\lambda)\, S(x, \lambda)\, Q_c(\lambda)\, d\lambda.$$
Here $x$ denotes the spatial coordinates, $\lambda$ is the spectral wavelength of the light, and $\Omega$ is the entire visible spectrum range (wavelengths from 380 nm to 780 nm). Obviously, the red, green, and blue photosensitive diodes have different sensitive wavelength ranges, as shown in Figure 2. This means that the sensor sensitivity $Q_c(\lambda)$ of each kind of diode is unique.
Thus, the different kinds of color pixels measure different brightness values even when the spectral distribution and the reflectivity are identical in the same local area. This indicates that the ISO of each color channel differs under the same circumstances, because of the photodiodes' different sensitivity characteristics.
Figure 3 and Table 1 show that the green channel of the bayer image is brighter than the others, except at the lowest (F) color temperature. It means that, under most illumination conditions, green dominates the original bayer image. Therefore, the green channel possesses higher ISO than the red and blue channels at the same exposure time. The original images captured by a Canon single lens reflex (SLR) camera indoors and outdoors , shown in Figure 4, also illustrate the high ISO of the green channel. Thus, according to Hasinoff et al.'s conclusion that, at the same exposure time, high ISO images have higher SNR , the green channel has higher SNR. Based on its high sampling rate and high SNR, the green channel can be used as the guidance signal.
To employ the green channel as the guidance image, the green values at the red and blue locations are interpolated first; then the guided filter is applied. The flow chart of red channel denoising is shown in Figure 5. The blue channel follows the same approach, and the green channel takes itself as the guidance image.
As for green channel interpolation, many methods have been proposed. We select the simple strategy presented in , which is also the one chosen by the Matlab image toolbox for demosaicking. The filter coefficient kernel of  is shown in Figure 6.
So the interpolated green value is
$$\hat{G}(i, j) = \frac{1}{8} \Big( 4 X(i, j) + 2 \big( G(i-1, j) + G(i+1, j) + G(i, j-1) + G(i, j+1) \big) - X(i-2, j) - X(i+2, j) - X(i, j-2) - X(i, j+2) \Big),$$
where $(i, j)$ are the spatial coordinates, and $X$ denotes the red channel when interpolating the green value at red locations and the blue channel when interpolating at blue locations.
This green interpolation correlates the red and blue channels with the green channel. Furthermore, since green is the guidance for all color channels, the denoising is unified across channels by the green channel.
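The interpolation step can be sketched as follows. This assumes an RGGB layout and the 5-tap cross-shaped kernel of Malvar et al. (4x the center sample, 2x the four green neighbors, minus the four same-color samples two pixels away, divided by 8), which should be checked against the kernel actually shown in Figure 6:

```python
import numpy as np

def interp_green(bayer):
    """Estimate green at the red/blue sites of an assumed-RGGB bayer image
    using the Malvar et al. cross kernel; green samples are kept as-is."""
    f = np.pad(bayer.astype(float), 2, mode='reflect')
    c = f[2:-2, 2:-2]                                  # center samples
    est = (4 * c
           + 2 * (f[1:-3, 2:-2] + f[3:-1, 2:-2]       # green above/below
                  + f[2:-2, 1:-3] + f[2:-2, 3:-1])    # green left/right
           - (f[:-4, 2:-2] + f[4:, 2:-2]              # same-color 2 away
              + f[2:-2, :-4] + f[2:-2, 4:])) / 8.0
    green = bayer.astype(float).copy()
    rb = np.zeros(bayer.shape, bool)                  # red and blue positions
    rb[0::2, 0::2] = True
    rb[1::2, 1::2] = True
    green[rb] = est[rb]
    return green
```

Because the kernel weights sum to 1 and are symmetric, the estimate exactly reproduces any locally linear (ramp-like) signal, which is the usual sanity check for this interpolator.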
4. Experiments and Analysis
In this section, the performance of the proposed method is tested on an actual bayer image and on the Kodak lossless true color image suite . Two aspects are examined: the selection of the guiding signal and the comparison with other denoising methods.
4.1. Guiding Signals Comparison
To show the advantage of green channel guidance, three other guiding signals are compared: the red channel, the blue channel, and each channel guiding itself. The test image is an actual bayer image from an Aptina MT9P031 sensor. A bayer image recovered from a processed RGB picture is not used, because the nonlinear image processing destroys the original features of the bayer image; for example, AWB destroys the high SNR characteristic of the green channel, and demosaicking diffuses dotted noise into patches.
In our experiment, the local window radius $r$ is fixed, and the regularization parameter $\epsilon$ is set to the best value for each guiding signal. The interpolation for each color channel uses the method in . Figure 7 shows the comparison result: green channel guidance obtains the best result among all guiding signals. Therefore, the excellent properties of the green channel (high sampling rate, high ISO, and high SNR) really do play an important role in guided filter denoising.
Another observation is that, no matter whether the guiding image is red, blue, or green, the denoised results are all better than having each channel guide itself (Figure 7(f)). The reason is that the correlation among channels is not considered when each color channel is treated as an independent subimage. When all channels are guided by one color channel, they share the same gradients, which is consistent with natural images, where the color channels largely share the same edges. When each channel guides itself, the channels can end up with different gradients in noisy areas.
4.2. Comparison with Other Denoising Methods
To further evaluate the proposed method, it is compared with other denoising methods. The test dataset is the Kodak lossless true color image suite, which contains 24 images. Some samples are shown in Figure 8.
The original images are corrupted by additive Gaussian white noise (AGWN). First, the RGB color image is mosaicked into a bayer image, and then AGWN with standard deviation $\sigma$ is added. To simulate the high SNR feature of the green channel, the standard deviation for the green samples is set lower than for the other channels: $\sigma_G = c\sigma$, where $c$ is a constant between 0 and 1. The test is run under two noise levels, one low and one high.
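The noise simulation above can be sketched as follows. The values of $\sigma$ and $c$ are free parameters here (the exact values used in the paper are not recoverable from this copy), and an RGGB layout is assumed:

```python
import numpy as np

def add_bayer_noise(bayer, sigma, c, seed=0):
    """Add AGWN to an assumed-RGGB bayer image, using std c*sigma (c < 1)
    on the green samples to mimic the green channel's higher SNR."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, bayer.shape)
    g = np.zeros(bayer.shape, bool)       # green sample positions
    g[0::2, 1::2] = True
    g[1::2, 0::2] = True
    noise[g] *= c                         # attenuate noise on green
    return bayer + noise
```

On a large image, the empirical noise standard deviation measured at the green positions comes out close to `c * sigma`, and close to `sigma` elsewhere.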
CPSNR is employed as the objective criterion. It is defined as
$$\mathrm{CPSNR} = 10 \log_{10} \frac{D^2}{\mathrm{CMSE}},$$
where $D$ is the dynamic range of the image; here $D = 255$ for 8-bit pixel depth. $\mathrm{CMSE}$ is the mean squared error between the original image $I$ and the distorted image $\tilde{I}$ over all three channels:
$$\mathrm{CMSE} = \frac{1}{3MN} \sum_{c \in \{R, G, B\}} \sum_{i=1}^{M} \sum_{j=1}^{N} \big( I(i, j, c) - \tilde{I}(i, j, c) \big)^2,$$
where $(i, j)$ are the spatial coordinates and the image size is $M \times N$. A 20-pixel wide band around the border of the images is ignored when computing the CPSNR, since some of the tested algorithms do not perform well in this band. The pixel values are clipped to integers in $[0, 255]$ before computing the CPSNR.
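The metric above can be computed as in the following sketch, which applies the integer clipping and the 20-pixel border crop described in the text:

```python
import numpy as np

def cpsnr(ref, dist, border=20, depth=255):
    """Color PSNR over all channels, ignoring a band around the border."""
    ref = np.clip(np.rint(ref), 0, depth).astype(float)
    dist = np.clip(np.rint(dist), 0, depth).astype(float)
    b = border
    r, d = ref[b:-b, b:-b], dist[b:-b, b:-b]   # crop the side band
    mse = np.mean((r - d) ** 2)                # mean over rows, cols, channels
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(depth ** 2 / mse)
```

For example, two constant images differing by 10 gray levels give an MSE of 100 and hence a CPSNR of about 28.13 dB.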
For the compared methods, we exclude complex algorithms such as BM3D and K-SVD, since they cannot meet the real-time requirements of digital imaging devices. Some simple and fast explicit filter kernel methods are selected as follows.
(1) Wiener filter: a classical denoising method provided in the Matlab imaging toolbox. The Wiener filter is executed on the RGB color image, since it cannot be used directly on the bayer pattern; the input noisy image is demosaicked from the noisy bayer image.
(2) Joint demosaicking and denoising with space-varying filters (JDD) : a typical combined demosaicking and denoising method, which can be downloaded at the authors' website: http://www.danielemenon.netsons.org/pub/jdd/jdd.php. It works directly on bayer images.
(3) Joint bilateral filter (JBF): similar to the guided filter in its capability of merging extra information, so in the experiment it also uses the green channel as the guiding signal. We implemented this method according to .
For the parameter settings, the filter kernel size is the same for all methods. The Wiener filter and JDD have no other parameters to set. Our method needs the parameter $\epsilon$, and JBF needs the geometric spread parameter and the photometric spread parameter. Each parameter is kept the same across all test images. We traverse all possible values of each parameter and report the best averaged CPSNR for comparison. A test example is shown in Figure 9.
From Tables 2 and 3, it can be concluded that the proposed method is significantly better than the RGB-image-based Wiener filter and the bayer-image-based filters JDD and JBF. The larger the noise, the bigger the advantage of the proposed method. The Wiener filter performs worst, since it works on the demosaicked RGB image, where the noise characteristics are complicated. JBF is better than JDD because, in this experiment, JBF also incorporates the green channel information when calculating the filter coefficients. Our algorithm is superior to JBF thanks to the use of a more advanced filter kernel: the guided filter.
5. Conclusion
This paper proposes a bayer image denoising method based on the guided filter. The excellent properties of the green channel in the bayer pattern are exploited. The green channel has the advantage of a high sampling rate, which means more direct information about the real world and less interpolation error. More importantly, it has higher ISO than the other channels, which means higher SNR . On the other hand, the guided filter is an outstanding explicit filter kernel that outperforms the bilateral filter. In our method, the green channel is used as the guiding signal of the guided filter: first, the green values at the red and blue locations are interpolated; then the full-resolution green image guides the denoising of each channel by the guided filter, with the green channel taking itself as the guiding signal. The experiment on an actual bayer image demonstrates that the green channel acts well as the guiding signal compared with the other color channels, and the proposed method is competitive with other explicit filter kernels such as the Wiener filter, JDD, and JBF. Methods based on block matching, domain transformation, or statistical learning and dictionary training suffer a heavy computational burden and are not suitable for digital imaging devices. Further research will aim at designing a better green channel interpolation algorithm, similar to demosaicking, or at searching the bayer image for a guiding signal even better than the green channel.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (NSFC) under Grants nos. 61175006 and 61175015.
- A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 2, pp. 60–65, June 2005.
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
- K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “BM3D image denoising with shape-adaptive principal component analysis,” in Proceedings of the Workshop on Signal Processing with Adaptive Sparse Structured Representations (SPARS '09), Saint-Malo, France, April 2009.
- M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.
- L. Zhang, R. Lukac, X. Wu, and D. Zhang, “PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras,” IEEE Transactions on Image Processing, vol. 18, no. 4, pp. 797–812, 2009.
- A. Barbu, “Training an active random field for real-time image denoising,” IEEE Transactions on Image Processing, vol. 18, no. 11, pp. 2451–2462, 2009.
- R. Öktem, L. Yaroslavsky, K. Egiazarian, and J. Astola, “Transform domain approaches for image denoising,” Journal of Electronic Imaging, vol. 11, no. 2, pp. 149–156, 2002.
- A. K. Mandava and E. E. Regentova, “Image denoising based on adaptive nonlinear wavelet domain,” Journal of Electronic Imaging, vol. 20, no. 3, Article ID 033016, 2011.
- C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the IEEE 6th International Conference on Computer Vision (ICCV ’98), pp. 839–846, Bombay, India, January 1998.
- K. He, J. Sun, and X. Tang, “Guided image filtering,” in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), vol. 6311 of Lecture Notes in Computer Science, pp. 1–14, 2010.
- D. Menon and G. Calvagno, “Color image demosaicking: an overview,” Signal Processing, vol. 26, no. 8-9, pp. 518–533, 2011.
- N.-X. Lian, L. Chang, Y.-P. Tan, and V. Zagorodnov, “Adaptive filtering for color filter array demosaicking,” IEEE Transactions on Image Processing, vol. 16, no. 10, pp. 2515–2525, 2007.
- H. S. Malvar, L.-W. He, and R. Cutler, “High-quality linear interpolation for demosaicing of Bayer-patterned color images,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’04), vol. 3, pp. III485–III488, May 2004.
- P. Chatterjee, N. Joshi, S. B. Kang, and Y. Matsushita, “Noise suppression in low-light images through joint denoising and demosaicing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 321–328, June 2011.
- L. Condat, “A simple, fast and efficient approach to denoisaicking: Joint demosaicking and denoising,” in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 905–908, September 2010.
- D. Menon and G. Calvagno, “Joint demosaicking and denoising with space-varying filters,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), pp. 477–480, November 2009.
- D. Paliy, A. Foi, R. Bilcu, and V. Katkovnik, “Denoising and interpolation of noisy Bayer data with adaptive cross-color filters,” in Visual Communications and Image Processing, vol. 6822 of Proceedings of SPIE, January 2008.
- S. W. Hasinoff, F. Durand, and W. T. Freeman, “Noise-optimal capture for high dynamic range photography,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 553–560, June 2010.
- Aptina imaging corporation, “1/2.5-Inch 5MP Digital Image Sensor: MT9P031,” 2013, http://www.aptina.com/products/image_sensors/mt9p031i12stc/.
- R. Franzen, “Kodak lossless true color image suite,” http://r0k.us/graphics/kodak/.
- G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 664–672, 2004.
- D. R. Newlin and E. C. Monie, “An efficient adaptive filtering for CFA demosaicking,” International Journal on Computer Science and Engineering, vol. 2, no. 4, pp. 954–958, 2010.
- K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Transactions on Image Processing, vol. 14, no. 3, pp. 360–369, 2005.
- D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Prentice Hall, 2003.
- P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, “Bayesian color constancy revisited,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.