Volume 2013 (2013), Article ID 738057, 9 pages
Discrete Wavelet Transform on Color Picture Interpolation of Digital Still Camera
Department of Electronic Engineering and Graduate Institute of Computer and Communication Engineering, National Taipei University of Technology, 1, Section 3, Chung-Hsiao East Road, Taipei 10608, Taiwan
Received 5 October 2012; Accepted 18 January 2013
Academic Editor: Yeong-Kang Lai
Copyright © 2013 Yu-Cheng Fan and Yi-Feng Chiang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Many people use digital still cameras to take photographs in contemporary society. Significant amounts of digital information have led to the emergence of a digital era. Because of the small size and low cost of the product hardware, most image sensors use a color filter array to obtain image information. However, employing a color filter array results in the loss of image information; thus, a color interpolation technique must be employed to retrieve the original picture. Numerous researchers have developed interpolation algorithms in response to various image problems. The method proposed in this study integrates the discrete wavelet transform (DWT) into the interpolation algorithm. The method was developed based on edge weights and partial gain characteristics and uses the basic wavelet function to enhance edge performance, both when the direction gradients are nearly equal and when one gradient is markedly larger or smaller than the other. The experimental results were compared with those of other methods to verify that the proposed method can improve image quality.
The basic principles of digital still cameras and traditional cameras are analogous. Traditional cameras use sensitization negatives to sense the input image. Digital still cameras project the input image onto a charge-coupled device (CCD), where it is transformed into a digital signal. The digital signal is then stored in a memory component after compression. However, this signal indicates the light intensity and not the color variation. Therefore, a color filter array must be employed for digital sampling. Color filter arrays typically employ the RGB original color separation technique, where red, green, and blue values are mixed into a complete color image after the original image is passed through three color filter arrays. Because of the high costs and large space required to use three color filter arrays with CCDs, only one color filter array with a CCD is employed. Consequently, each pixel possesses only one of the red, green, and blue color elements. The general color filter array in digital still cameras possesses a Bayer pattern, as shown in Figure 1. An interpolation algorithm must be employed to identify the two missing colors based on the surrounding pixels. The zipper effect or false colors are typically observed in images after interpolation. Numerous interpolation algorithms have been proposed to resolve these problems and obtain good image quality.
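The sampling step described above can be sketched in a few lines. The snippet below mosaics a full-color image with a Bayer-style pattern; the GRBG layout used here is an assumption for illustration (the paper's actual pattern is the one shown in Figure 1), and the function name is ours.

```python
import numpy as np

def bayer_sample(rgb):
    """Sample a full-color image with an assumed GRBG Bayer layout.

    rgb: (H, W, 3) array. Returns a single-channel (H, W) mosaic in
    which each pixel keeps only one of its three color components,
    mimicking a single CCD behind one color filter array.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 1]  # green at even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 0]  # red at even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 2]  # blue at odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 1]  # green at odd row, odd col
    return mosaic
```

Two thirds of the original samples are discarded by this step, which is exactly what the interpolation algorithm must later reconstruct.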
Image interpolation methods possess spatial and frequency characteristics. Edge direction and nonedge direction interpolation methods adopt spatial characteristics. The adjacent pixels selected by nonedge direction interpolation methods are constant. Examples of this method type include the bilinear interpolation method and the color difference interpolation method. Because these methods do not detect edges, the edges of partial images are blurred following interpolation. The adjacent pixels selected by edge direction interpolation methods are nonconstant. These methods can detect and reduce blurred edges in the horizontal and vertical directions of an image. Examples of this method type include the edge sensing interpolation method and the edge correlation sensing correction interpolation method. Frequency characteristic interpolation methods include the alternating projections interpolation method and a novel frequency-domain interpolation method. Interpolation methods of this type use high- or low-frequency correlation to improve image aliasing and contrived phenomena and can provide high-quality images. A number of studies have employed a combination of the described methods or have proposed methods that use a wavelet algorithm for the edge or frequency domains [8–10]. Other common techniques are based on the physical characteristics of interference [11, 12], or combine edge and frequency algorithms for interpolation [13–15]. One method first obtains the missing green samples based on the variances of color differences along the correct edge direction; the red and blue components are then interpolated based on the interpolated green plane, and a refinement scheme is employed to improve the interpolation performance. Another method obtains luminance values at the green sample locations and preserves high-frequency information. An adaptive filter was used to estimate the luminance values of the red and blue samples.
Then, the estimated full-resolution luminance was used to interpolate the red, green, and blue color components. These results indicate that many interpolation methods result in contrived colors or blurred edges because they cannot sensitively detect edges or perform appropriate color interpolation. Therefore, effective interpolation of the image edge cannot be achieved. In this study, the relationship between the surrounding interpolation pixel weights and discrete wavelet transform (DWT) was used to perform color interpolation and edge detection. The results were then compared with those reported by other studies using conventional methods.
2. Proposed Method and DWT
2.1. Discrete Wavelet Transform
DWT can use the basic wavelet function and scaling function to conduct decomposition and reconstruction of sampled signals. The basic wavelet function is used to detect detailed variations. The scaling function is used to approximate the original signal and can be denoted as
φ(t) = √2 Σ_n h(n) φ(2t − n). (1)
The basic wavelet function can be calculated from the scaling function as
ψ(t) = √2 Σ_n g(n) φ(2t − n),
where h(n) and g(n) are digital filter coefficients; their relationship is expressed as
g(n) = (−1)^n h(L − 1 − n). (2)
In wavelet transform, g(n) and h(n) approximately correspond to a high-pass filter and a low-pass filter, respectively. In (2), L denotes the filter length. DWT functions like a filter and can analyze the signal layer by layer. This filter comprises a high-pass filter and a low-pass filter. Figure 2 shows the operational manner and first-order wavelet transform decomposition of this filter. cA1 is an approximation coefficient: the signal passes through a low-pass filter and undergoes downsampling. Approximation coefficients retain the low-frequency information of the original signal and less high-frequency noise. cD1 is a detail coefficient: the signal passes through a high-pass filter and undergoes downsampling. Detail coefficients retain the high-frequency information of the original signal x(n). The symbol ↓2 in Figure 2 denotes downsampling, which involves retaining half of the low-frequency and half of the high-frequency data by sampling the odd or even terms.
The wavelet transform decomposition process is expressed as
cA1(k) = Σ_n x(n) h(n − 2k),  cD1(k) = Σ_n x(n) g(n − 2k). (3)
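The one-level decomposition above can be sketched directly: build g(n) from h(n) via (2), then filter and downsample by two. This is a minimal sketch assuming the paper's coefficient normalization (the four taps sum to 1) and simple "valid" boundary handling, which the paper does not specify.

```python
import numpy as np

# Closed-form db2 low-pass coefficients h(n); the high-pass g(n)
# follows g(n) = (-1)^n * h(L - 1 - n) with filter length L = 4.
h = np.array([(1 + np.sqrt(3)) / 8, (3 + np.sqrt(3)) / 8,
              (3 - np.sqrt(3)) / 8, (1 - np.sqrt(3)) / 8])
L = len(h)
g = np.array([(-1) ** n * h[L - 1 - n] for n in range(L)])

def dwt_level(x, h, g):
    """One decomposition level per Eq. (3): correlate with each
    filter (convolution with the reversed taps), then keep every
    second sample (the downsampling-by-2 step of Figure 2)."""
    cA = np.convolve(x, h[::-1], mode='valid')[::2]  # low-pass branch
    cD = np.convolve(x, g[::-1], mode='valid')[::2]  # high-pass branch
    return cA, cD
```

For a constant signal, cA reproduces the constant (the taps sum to 1) while cD vanishes (the high-pass taps sum to 0), which is the approximation/detail split described above.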
2.2. Proposed Method
Image information is obtained after passing through a Bayer pattern color filter array. Horizontal and vertical direction information is used to interpolate the green portion. For the red and blue portions, only information in the horizontal, vertical, and diagonal directions can be employed, as shown in Figure 3. Therefore, in this study, the wavelet sensitivity and the color correlation weight are used to identify the horizontal, vertical, and/or diagonal directions and to interpolate missing pixels.
cD1H is the horizontal gradient, and cD2V is the vertical gradient. Thd is determined experimentally and is used to limit the range of the nearest or smaller direction gradients. When the horizontal or vertical gradients are small, DWT is used to detect edges, and the edge is judged according to the gradients. Because of the good sensitivity of DWT, the detail coefficients cD1 and cD2 can be obtained to enhance the gradients. If an edge exists, the corresponding cD1 or cD2 determines the edge direction. G8 and G18 are processed by DWT, which produces cA1 and cD1; G12 and G14 follow the same procedure. Figure 5 shows the flowchart of the processes in Algorithm 1: first, the conditions are assessed, then the edge direction is determined, and finally interpolation is conducted.
When the gradients are relatively close, DWT is used to detect edges; otherwise, the color correlations are employed to interpolate directly. Thd1 is determined experimentally. Furthermore, the edge direction of large gradients can be adjusted to limit the range of the nearest or larger direction gradients. Figure 6 shows a flowchart of the processes in Algorithm 2: first, the conditions are assessed; if DWT is employed, the edge direction is determined before conducting interpolation; otherwise, interpolation is performed directly. The horizontal and vertical directions only possess information from the two adjacent pixels; thus, their correlation is directly employed for interpolation.
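The decision logic of the two flowcharts can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: the function and parameter names are ours, and `dwt_detail` stands in for the cD1/cD2 detail coefficients that the paper computes from the neighborhood.

```python
def interpolate_green(up, down, left, right, thd, thd1, dwt_detail):
    """Sketch of the edge-direction decision for a missing green pixel.

    up/down/left/right: the four neighboring green values.
    thd, thd1: experimentally chosen thresholds (223 and 1 in the
    paper's experiments).
    dwt_detail(a, b): a stand-in returning a DWT detail coefficient
    that measures the variation between two samples.
    """
    grad_h = abs(left - right)   # horizontal direction gradient
    grad_v = abs(up - down)      # vertical direction gradient
    if abs(grad_h - grad_v) <= thd1 and max(grad_h, grad_v) <= thd:
        # Gradients too close to call: let the more sensitive DWT
        # detail coefficients decide the edge direction instead.
        grad_h = abs(dwt_detail(left, right))
        grad_v = abs(dwt_detail(up, down))
    if grad_h < grad_v:          # edge runs horizontally
        return (left + right) / 2
    if grad_v < grad_h:          # edge runs vertically
        return (up + down) / 2
    return (up + down + left + right) / 4  # no dominant direction
```

Interpolating along, rather than across, the detected edge is what suppresses the zipper effect described in the introduction.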
The red and blue interpolation methods are identical in form to the green interpolation method. Diagonal interpolation of red follows the same method used for blue, and vice versa. Furthermore, the red horizontal or vertical interpolation method is identical to that of blue.
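The color-correlation idea behind the red and blue steps can be sketched generically. The helper below is a standard color-difference interpolation (estimate the missing red by shifting the known green by the average neighboring R−G difference), offered only as an illustration of the principle; it is not the authors' own formula, and its name and signature are ours.

```python
def interpolate_red_at_green(g_center, red_neighbors, green_at_red):
    """Generic color-difference sketch: estimate red at a green pixel.

    g_center: the known green value at this pixel.
    red_neighbors: known red values at the neighboring red sites.
    green_at_red: (already interpolated) green values at those sites.
    """
    # R and G are strongly correlated locally, so R - G varies slowly;
    # average the neighboring differences and add them to the green.
    diffs = [r - g for r, g in zip(red_neighbors, green_at_red)]
    return g_center + sum(diffs) / len(diffs)
```

This is why the green plane is interpolated first: the denser green channel anchors the color differences used for red and blue.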
3. Simulation Results
This study employed the 24 standard color pictures provided by Kodak for public use. These images measure 768 × 512 or 512 × 768 pixels, as shown in Figure 7, and have been included in numerous studies. To begin the simulation, raw image data are read and separated into red, green, and blue image planes before sampling using the Bayer pattern. Regarding wavelet function selection, this study adopted Daubechies wavelets for the basic functions and used the db2 function for simulation. Db2 is a second-order Daubechies wavelet with a filter length of 4 and four low-pass filter coefficients denoted as h(0), h(1), h(2), and h(3), whose values are 0.34151, 0.59151, 0.15849, and −0.091506. Employing the db2 wavelet for simulation provides the optimum result. Wavelets db2 to db7 were analyzed, and the results indicated that shorter filters provide superior results. Furthermore, only one transformation is required; when transformed numerous times, waveforms are stretched to the left and right. Figure 8 shows a db2 wavelet waveform. Wavelet waveforms and filter coefficients are closely linked, and employing an adaptive wavelet function to analyze signals provides superior results. The waveform compression and stretching characteristics capture the shortest time variations well and can be understood through observation. According to the experimental results, Thd and Thd1 were set to 223 and 1, respectively. The experimental process is shown in Tables 1 and 2, using Picture 01 as an example; 223 and 1 were within the stable range.
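The four db2 coefficients quoted above have simple closed forms; under the normalization used here they sum to 1. A quick check:

```python
import math

# Closed-form db2 low-pass coefficients, normalized so that the
# four taps sum to 1 (matching the values quoted in the text).
s3 = math.sqrt(3)
h = [(1 + s3) / 8, (3 + s3) / 8, (3 - s3) / 8, (1 - s3) / 8]
# Rounded: 0.34151, 0.59151, 0.15849, -0.091506
```

Evaluating these expressions reproduces the quoted values to the precision given in the text.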
Table 3 shows a comparison of the peak signal-to-noise ratio (PSNR) obtained in this study with that of other studies. The organizational sequence of the standard images in Figure 7 is from left to right and top to bottom; the images were named Picture 01 to Picture 24. After processing, the images differ from their original appearance; the PSNR is typically used to examine image quality. PSNR can be expressed as
PSNR = 10 log10(MAX^2 / MSE),
where the mean square error (MSE) is
MSE = (1/mn) Σ_i Σ_j [I(i, j) − K(i, j)]^2,
in which I(i, j) is the pixel value of the original image at position (i, j), and K(i, j) is the pixel value at position (i, j) after image processing. The unit of PSNR is the decibel (dB). A larger PSNR indicates less aliasing. MAX denotes the largest image pixel color value; if 8 bits are used to represent each sample pixel, MAX is 255. The results in Table 3 show that the proposed method provides superior image quality compared with previous studies. Most images have large low-frequency areas; thus, spatial domain interpolation algorithms can process most images well. However, for images with large fluid wave, wood, or grassy areas, such as Picture 22, frequency domain algorithms perform better. Table 4 shows a PSNR evaluation, including each RGB color component and their average.
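The PSNR formula above translates directly into code; a minimal sketch:

```python
import numpy as np

def psnr(original, processed, max_val=255.0):
    """PSNR in dB between an original and a processed image.

    Returns infinity when the images are identical (MSE = 0).
    max_val is the largest possible pixel value (255 for 8-bit data).
    """
    mse = np.mean((original.astype(np.float64) -
                   processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Comparing demosaicked output against the ground-truth image plane by plane yields the per-channel values reported in Table 4.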
Figures 9 and 10 show Pictures 1 and 6, respectively. These images were selected from among the 24 standard images. The proposed method and conventional interpolation methods were used for simulation. Magnifying the images shows that the proposed method improves image quality significantly, as shown in Figure 11.
The images processed by using the proposed method are extremely similar to the original images. Figure 11 shows a magnification of Figure 9. DWT exhibited good edge detection sensitivity and partially resolved the zipper effects, color shifts, aliasing artifacts, blur effects, and obvious unnatural color grains. Furthermore, DWT limited the unnatural colors of the window lattice and the color inaccuracies of the crisscross.
4. Conclusion
Science and technology change every day: chip processing speeds continue to accelerate while chip size and cost continue to decrease. The proposed method does not employ frequency characteristics; instead, image quality is enhanced using spatial characteristics. Previous studies have discussed the importance of edges and interpolation pixels and calculated the frequency and spatial characteristics. This study exploited the sensitivity of wavelet algorithms and the correlation between colors to obtain good results for image edges and interpolation pixels. Comparison of the simulation results with those of previous studies shows that the proposed method provides high-quality images.
Conflict of Interests
The authors do not have a direct financial relation with the commercial identities (Kodak Company and MATLAB/TOOLBOX) mentioned in this paper that might lead to a conflict of interests for any of the authors.
Acknowledgments
This study was supported by the Taiwan e-Learning and Digital Archives Program (TELDAP) and the National Science Council of Taiwan under Grant no. NSC 100-2631-H-027-003. The authors gratefully acknowledge the Chip Implementation Center (CIC) for supplying the technology models used for IC design.
References
- B. E. Bayer, “Color imaging array,” U.S. Patent 3 971 065, 1976.
- T. Sakamoto, C. Nakanishi, and T. Hase, “Software pixel interpolation for digital still cameras suitable for a 32-bit MCU,” IEEE Transactions on Consumer Electronics, vol. 44, no. 4, pp. 1342–1352, 1998.
- S. C. Pei and I. K. Tam, “Effective color interpolation in CCD color filter arrays using signal correlation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 6, pp. 503–513, 2003.
- J. E. Adams Jr., “Design of practical color filter array interpolation algorithms for digital cameras,” in 2nd Real-Time Imaging, vol. 3028 of Proceedings of SPIE, pp. 117–125, February 1997.
- R. Lukac, K. N. Plataniotis, D. Hatzinakos, and M. Aleksic, “A new CFA interpolation framework,” Signal Processing, vol. 86, no. 7, pp. 1559–1579, 2006.
- B. K. Gunturk, Y. Altunbasak, and R. M. Mersereau, “Color plane interpolation using alternating projections,” IEEE Transactions on Image Processing, vol. 11, no. 9, pp. 997–1013, 2002.
- E. Dubois, “Frequency-domain methods for demosaicking of Bayer-sampled color images,” IEEE Signal Processing Letters, vol. 12, no. 12, pp. 847–850, 2005.
- B. G. Jeong, S. H. Hyun, and I. K. Eom, “Edge adaptive demosaicking in wavelet domain,” in Proceedings of the 9th International Conference on Signal Processing (ICSP'08), pp. 836–839, Beijing, China, October 2008.
- L. Chen, K. H. Yap, and Y. He, “Color filter array demosaicking using wavelet-based subband synthesis,” in Proceedings of the IEEE International Conference on Image Processing (ICIP'05), pp. II-1002–II-1005, September 2005.
- J. Driesen and P. Scheunders, “Wavelet-based color filter array demosaicking,” in Proceedings of the International Conference on Image Processing (ICIP'04), vol. 5, pp. 3311–3314, October 2004.
- X. Wu and X. Zhang, “Joint color decrosstalk and demosaicking for CFA cameras,” IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3181–3189, 2010.
- K. H. Chung and Y. H. Chan, “Color demosaicing using variance of color differences,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 2944–2955, 2006.
- K. L. Chung, W. J. Yang, W. M. Yan, and C. C. Wang, “Demosaicing of color filter array captured images using gradient edge detection masks and adaptive heterogeneity-projection,” IEEE Transactions on Image Processing, vol. 17, no. 12, pp. 2356–2367, 2008.
- K. L. Chung, W. J. Yang, P. Y. Chen, W. M. Yan, and C. S. Fuh, “New joint demosaicing and zooming algorithm for color filter array,” IEEE Transactions on Consumer Electronics, vol. 55, no. 3, pp. 1477–1486, 2009.
- N. X. Lian, L. Chang, Y. P. Tan, and V. Zagorodnov, “Adaptive filtering for color filter array demosaicking,” IEEE Transactions on Image Processing, vol. 16, no. 10, pp. 2515–2525, 2007.