Special Issue

Advanced VLSI Design Methodologies for Emerging Industrial Multimedia and Communication Applications


Research Article | Open Access


Yu-Cheng Fan, Yi-Feng Chiang, "Discrete Wavelet Transform on Color Picture Interpolation of Digital Still Camera", VLSI Design, vol. 2013, Article ID 738057, 9 pages, 2013. https://doi.org/10.1155/2013/738057

Discrete Wavelet Transform on Color Picture Interpolation of Digital Still Camera

Academic Editor: Yeong-Kang Lai
Received: 05 Oct 2012
Accepted: 18 Jan 2013
Published: 26 Feb 2013


Many people use digital still cameras to take photographs in contemporary society, and the resulting volume of digital information has led to the emergence of a digital era. Because of the small size and low cost of the product hardware, most image sensors use a color filter array to obtain image information. However, employing a color filter array discards part of the image information; thus, a color interpolation technique must be employed to retrieve the original picture. Numerous researchers have developed interpolation algorithms in response to various image problems. The method proposed in this study integrates the discrete wavelet transform (DWT) into the interpolation algorithm. The method is based on edge weights and partial gain characteristics; it uses the basic wavelet function to enhance edge performance and to process the nearest or the larger and smaller direction gradients. The experimental results were compared with those of other methods to verify that the proposed method improves image quality.

1. Introduction

The basic principles of digital still cameras and traditional cameras are analogous. Traditional cameras use sensitized negatives to sense the input image. Digital still cameras project the input image onto a charge-coupled device (CCD), where it is transformed into a digital signal. The digital signal is then compressed and stored in a memory component. However, this signal indicates only the light intensity, not the color variation. Therefore, a color filter array must be employed for digital sampling. Color filter arrays typically employ the RGB primary-color separation technique, in which red, green, and blue values are mixed into a complete color image after the original image is passed through three color filter arrays. Because of the high cost and large space required to use three color filter arrays with CCDs, only one color filter array with one CCD is employed. Consequently, each pixel possesses only one of the red, green, and blue color elements. The general color filter array in digital still cameras uses the Bayer pattern [1], as shown in Figure 1. An interpolation algorithm must be employed to estimate the two missing colors from the surrounding pixels. The zipper effect or false colors are typically observed in images after interpolation. Numerous interpolation algorithms have been proposed to resolve these problems and obtain good image quality.
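The Bayer sampling described above can be sketched as follows. This is an illustrative helper, not code from the paper: the exact corner layout of the pattern (which site holds red) and the name bayer_mosaic are assumptions for illustration.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image through a Bayer-style color filter array.

    Keeps one color component per pixel. The layout assumed here places
    red at even-row/even-column sites, green at the two mixed sites, and
    blue at odd/odd sites; the actual pattern in the paper's Figure 1
    may differ in orientation.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic
```

Demosaicking then has to reconstruct the two dropped components at every pixel from this single-channel mosaic.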

Image interpolation methods exploit spatial or frequency characteristics. Edge-directed and non-edge-directed interpolation methods adopt spatial characteristics. The adjacent pixels selected by non-edge-directed interpolation methods are fixed. Examples of this type include the bilinear interpolation method [2] and the color difference interpolation method [3]. Because these methods do not detect edges, the edges of some images are blurred following interpolation. The adjacent pixels selected by edge-directed interpolation methods are not fixed. These methods can detect edges and reduce blurring in the horizontal and vertical directions of an image. Examples of this type include the edge sensing interpolation method [4] and the edge correlation sensing correction interpolation method [5]. Frequency-characteristic interpolation methods include the alternating projections interpolation method [6] and a frequency-domain interpolation method [7]. Interpolation methods of this type use high- or low-frequency correlation to reduce image aliasing and contrived artifacts and can provide high-quality images. A number of studies have combined the described methods or have proposed methods that apply a wavelet algorithm in the edge or frequency domains [8–10]. Other research techniques are based on the physical characteristics of interference [11, 12]. Furthermore, some methods combine edge and frequency algorithms for interpolation [13–15]. In [12], the missing green samples were first estimated based on the variances of color differences along the correct edge direction. The red and blue components were then interpolated based on the interpolated green plane, and a refinement scheme was employed to improve the interpolation performance. The method employed in [15] obtains luminance values at the green sample locations and preserves high-frequency information.
An adaptive filter was used to estimate the luminance values of the red and blue samples. Then, the estimated full-resolution luminance was used to interpolate the red, green, and blue color components. These results indicate that many interpolation methods result in contrived colors or blurred edges because they cannot sensitively detect edges or perform appropriate color interpolation. Therefore, effective interpolation of the image edge cannot be achieved. In this study, the relationship between the surrounding interpolation pixel weights and discrete wavelet transform (DWT) was used to perform color interpolation and edge detection. The results were then compared with those reported by other studies using conventional methods.

2. Proposed Method and DWT

2.1. Discrete Wavelet Transform

DWT uses a basic wavelet function and a scaling function to decompose and reconstruct sampled signals. The basic wavelet function is used to detect detailed variations. The scaling function is used to approximate the original signal and can be denoted as

φ(t) = ∑_n h(n) √2 φ(2t − n). (1)

The basic wavelet function can be calculated from the scaling function as ψ(t) = ∑_n g(n) √2 φ(2t − n). h(n) and g(n) are digital filter coefficients; their relationship is expressed as

g(n) = (−1)^n h(L − 1 − n). (2)

In the wavelet transform, g(n) and h(n) are approximately equal to a high-pass filter and a low-pass filter, respectively. In (2), L denotes the filter length. DWT acts like a filter bank and can analyze the signal layer by layer. This filter bank comprises a high-pass filter and a low-pass filter. Figure 2 shows the operation of this filter bank and the first-order wavelet transform decomposition. cA is an approximation coefficient, indicating that the signal has passed through the low-pass filter and undergone downsampling; approximation coefficients retain the low-frequency information of the original signal x(n) and less high-frequency noise. cD is a detail coefficient, indicating that the signal has passed through the high-pass filter and undergone downsampling; detail coefficients retain the high-frequency information of the original signal x(n). In Figure 2, ↓2 denotes downsampling, which retains half of the low-frequency and half of the high-frequency data by sampling the odd or even terms.

The wavelet transform decomposition process is expressed as

cA(k) = ∑_n x(n) h(n − 2k), cD(k) = ∑_n x(n) g(n − 2k). (3)
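One decomposition level of this filter bank can be sketched as follows. The low-pass coefficients are the db2 values listed in Section 3 (normalized to sum to 1; other toolbox conventions scale them by √2), the high-pass filter follows relation (2), and dwt1 is an illustrative helper name, not the paper's implementation.

```python
import numpy as np

# db2 analysis low-pass coefficients as listed in Section 3
# (normalized so that they sum to 1).
H = np.array([0.34151, 0.59151, 0.15849, -0.091506])
# Quadrature-mirror high-pass from (2): g(n) = (-1)^n h(L-1-n)
G = np.array([(-1) ** n * H[len(H) - 1 - n] for n in range(len(H))])

def dwt1(x):
    """One decomposition level: filter with h and g, keep every second sample."""
    cA = np.convolve(x, H)[::2]  # approximation: low-pass + downsample
    cD = np.convolve(x, G)[::2]  # detail: high-pass + downsample
    return cA, cD
```

For a constant signal the detail coefficients vanish away from the boundaries, while a step or edge produces large detail magnitudes; this is the sensitivity the proposed method exploits for edge detection.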

2.2. Proposed Method

Image information is obtained after passing through a Bayer pattern color filter array. Horizontal and vertical direction information is used to interpolate the green portion. For the red and blue portions, only information in the horizontal, vertical, and diagonal directions can be employed, as shown in Figure 3. Therefore, in this study, the wavelet sensitivity and color correlation weight [4] are used to identify the horizontal, vertical, and/or diagonal directions and interpolate missing pixels.

Please refer to Figure 4. The green interpolation method can be expressed as shown in Algorithm 1.

if ( >0)&( >0)
 &( <Thd)&( <Thd)
 &( >0& >0& >0)
 cD1H= + ;
 cD2V= + ;
 [cA1, cD1]=DWT(G8, G18);
 [cA2, cD2]=DWT(G12, G14);
 cD1H= + ;
 cD2V= + ;
if (cD2V>cD1H)
elseif (cD1H>cD2V)

cD1H is the horizontal gradient and cD2V is the vertical gradient. Thd is determined through an experiment and is used to limit the range of the nearest or smaller direction gradients. When the horizontal and vertical gradients are small, DWT is used to detect edges, and the direction is judged according to the gradients. Because of the good sensitivity of DWT, the detail coefficients cD1 and cD2 can be obtained to enhance the gradients; if an edge exists, the corresponding cD1 or cD2 determines the edge direction. G8 and G18 are processed by DWT, which produces cA1 and cD1; G12 and G14 follow the same procedure. Figure 5 shows a flowchart of the processes in Algorithm 1: first, the conditions are assessed; then the edge direction is determined and interpolation is conducted.
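The decision flow above can be sketched as follows. Because the printed listing lost its exact gradient expressions, the gradients here are plain absolute differences of the neighbor pairs, a two-sample Haar detail stands in for the paper's db2 DWT, and the assignment of the G8/G18 and G12/G14 pairs to the vertical and horizontal directions is an assumption; interp_green and haar_detail are illustrative names.

```python
def haar_detail(a, b):
    """Two-sample Haar high-pass detail (stand-in for the paper's db2 DWT)."""
    return (a - b) / 2.0

def interp_green(g8, g18, g12, g14):
    """Interpolate the missing green value from its four neighbors.

    g8/g18 are taken as the vertical neighbors and g12/g14 as the
    horizontal ones (Figure 4 numbering, assumed). Each gradient is an
    absolute neighbor difference enhanced by the wavelet detail
    magnitude, mirroring cD2V and cD1H in Algorithm 1.
    """
    grad_v = abs(g8 - g18) + abs(haar_detail(g8, g18))    # cD2V-like
    grad_h = abs(g12 - g14) + abs(haar_detail(g12, g14))  # cD1H-like
    if grad_h < grad_v:
        return (g12 + g14) / 2.0  # edge runs horizontally: use horizontal pair
    if grad_v < grad_h:
        return (g8 + g18) / 2.0   # edge runs vertically: use vertical pair
    return (g8 + g18 + g12 + g14) / 4.0  # no dominant direction: average all four
```

Interpolating along the direction of the smaller gradient keeps the estimate on the same side of an edge, which is what suppresses the zipper effect.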

Please refer to Figure 4. The red and blue interpolation methods can be expressed as shown in Algorithm 2.

if ( <Thd1& <Thd1& <Thd1)
 [cA1, cD1]=DWT(B7, B19);
 [cA2, cD2]=DWT(B17, B9);
 cD1H= ;
 cD2V= ;
 if cD1H<cD2V
 elseif cD1H>cD2V

When the gradients are relatively close, DWT is used to detect edges; otherwise, the color correlations are employed to interpolate directly. Thd1 is determined through an experiment. Furthermore, the edge direction of large gradients can be adjusted to limit the range of the nearest or larger direction gradients. Figure 6 shows a flowchart of the processes in Algorithm 2: first, the conditions are assessed; if DWT is employed, the edge direction is determined before interpolation is conducted; otherwise, interpolation is performed directly. The horizontal and vertical directions possess information from only the two adjacent pixels; thus, their correlation is employed directly for interpolation. The interpolation method can be expressed as

The green interpolation method is identical to the red and blue interpolation methods. Diagonal interpolation of red and blue follows the same method used for blue and red. Furthermore, the red horizontal or vertical interpolation method is identical to that of blue.
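The red/blue decision of Algorithm 2 can be sketched in the same spirit. The gate mirrors the printed listing: when all diagonal gradients fall below Thd1 the region is treated as near-flat and the wavelet details pick the direction; otherwise the neighbors are averaged directly. As before, a two-sample Haar detail stands in for db2, and the gradient and averaging formulas plus the name interp_rb_site are assumptions, since the listing lost its expressions.

```python
def interp_rb_site(b7, b19, b17, b9, thd1=1.0):
    """Interpolate the missing blue (or red) value at a red (or blue) site
    from its four diagonal neighbors (Figure 4 numbering, assumed)."""
    d1 = abs(b7 - b19)             # gradient along one diagonal
    d2 = abs(b17 - b9)             # gradient along the other diagonal
    if d1 < thd1 and d2 < thd1:    # near-flat region: let DWT details decide
        cd1 = abs(b7 - b19) / 2.0  # Haar detail magnitude of the first pair
        cd2 = abs(b17 - b9) / 2.0
        if cd1 < cd2:
            return (b7 + b19) / 2.0
        if cd2 < cd1:
            return (b17 + b9) / 2.0
    return (b7 + b19 + b17 + b9) / 4.0  # direct color-correlation average
```
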

3. Simulation Results

This study employed 24 standard color pictures provided for popular use. These images are shown in Figure 7 and have been included in numerous studies. To begin the simulation, raw image data are read and separated into red, green, and blue image planes before sampling using the Bayer pattern. Regarding wavelet selection, this study adopted Daubechies wavelets for the basic functions and used the db2 function for simulation. Db2 is a second-order Daubechies wavelet with a filter length of 4; its four low-pass filter coefficients, h(0) to h(3), are 0.34151, 0.59151, 0.15849, and −0.091506. Employing the db2 wavelet for simulation provides the optimum result: wavelets db2 to db7 were analyzed, which indicated that shorter filters provide superior results. Furthermore, only one transformation level is required; when the signal is transformed numerous times, the waveforms are stretched to the left and right. Figure 8 shows the db2 wavelet waveform. Wavelet waveforms and filter coefficients are closely linked, and employing an adaptive wavelet function to analyze signals provides superior results. The waveform compression and stretch characteristics capture the shortest time variations and can be understood through observation. According to the experimental results, Thd and Thd1 were set to 223 and 1, respectively. The experimental process is shown in Tables 1 and 2, using Picture 01 as an example; 223 and 1 lay within the stable range of values.

Table 1: PSNR versus Thd (with Thd1 = 1).

Table 2: PSNR versus Thd1 (with Thd = 223).
Table 3 shows a comparison of the peak signal-to-noise ratio (PSNR) achieved in this study with that of other studies. The standard images in Figure 7 are ordered from left to right and top to bottom and are named Picture 01 to Picture 24. After processing, the images differ from their original appearance; PSNR is typically used to compare image quality. PSNR can be expressed as

PSNR = 10 log₁₀(MAX² / MSE), MSE = (1/(M·N)) ∑_i ∑_j [f(i, j) − f′(i, j)]²,

where f(i, j) is the pixel value of the original image at position (i, j) and f′(i, j) is the pixel value at position (i, j) after image processing. The unit of PSNR is the decibel (dB); a larger PSNR indicates less aliasing. MAX denotes the largest image pixel value; if 8 bits are used to represent each sample, MAX is 255. The results in Table 3 show that the proposed method provides superior image quality compared with previous studies. Most images contain large low-frequency areas; thus, a spatial-domain interpolation algorithm can process most images well. However, for images with large water, wood, or grassy areas, such as Picture 22, frequency-domain algorithms perform better. Table 4 shows a PSNR evaluation of each RGB color component and their average.
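The PSNR figures in Tables 3 and 4 follow directly from the formula above; a generic implementation (not code from the paper) is:

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB, per the MSE definition in Section 3."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```
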

Table 3: PSNR (dB) comparison with previous studies: Sakamoto et al. [2], Gunturk et al. [6], Pei and Tam [3], Dubois [7], Lukac et al. [5], Chung et al. [13], and the proposed method.

Picture      [2]     [6]     [3]     [7]     [5]     [13]    Proposed
Picture 01   30.85   38.05   34.08   38.02   37.41   39.88   38.26
Picture 02   36.03   37.55   35.54   37.88   37.78   37.96   38.96
Picture 03   35.88   42.06   39.78   42.17   42.32   42.91   43.70
Picture 04   28.62   34.73   31.57   35.13   34.29   36.22   34.62
Picture 05   35.60   41.75   39.36   41.71   41.59   42.05   43.09
Picture 06   32.57   39.22   36.06   38.86   38.74   40.17   39.72
Picture 07   36.03   39.07   39.16   39.32   40.01   39.79   41.92
Picture 08   32.00   37.94   35.25   39.96   37.45   40.75   39.19
Picture 09   35.76   42.11   40.29   42.17   42.19   42.32   43.35
Picture 10   33.13   35.89   35.93   36.04   37.07   36.24   39.28
Picture 11   32.30   37.39   35.43   37.27   37.09   37.68   38.27
Picture 12   34.21   37.93   36.93   38.22   38.52   38.45   40.80
Picture 13   36.73   41.45   41.23   41.69   42.39   42.31   43.25
Picture 14   36.36   41.57   40.72   41.97   42.48   41.67   43.96
Picture 15   33.36   39.31   37.86   39.81   39.14   40.56   40.41
Picture 16   35.02   38.94   38.92   39.27   39.25   39.33   40.91
Picture 17   32.50   40.38   35.51   40.42   39.34   41.75   41.98
Picture 18   37.75   41.71   42.12   42.19   42.93   42.07   43.73
Picture 19   36.51   40.38   39.94   40.52   40.79   40.15   42.66
Picture 20   28.59   37.26   31.04   35.27   34.57   37.56   36.76
Picture 21   36.34   42.52   40.33   42.98   42.27   43.54   44.55
Picture 22   34.83   42.10   38.32   43.71   40.45   44.31   41.63
Picture 23   33.03   40.83   38.99   40.39   41.18   41.58   40.80
Picture 24   30.66   34.92   33.95   35.34   34.89   35.29   36.46


Table 4: PSNR (dB) of each RGB color component and their average for the proposed method.

Picture      R         G         B         Average
Picture 01   37.6746   39.2743   37.8266   38.2585
Picture 02   38.4116   39.9794   38.4853   38.9588
Picture 03   43.3799   45.1411   42.5881   43.7030
Picture 04   34.4073   35.2576   34.1865   34.6171
Picture 05   43.0901   44.0063   42.1758   43.0907
Picture 06   39.3456   40.6480   39.1629   39.7188
Picture 07   40.0361   44.1844   41.5367   41.9191
Picture 08   38.7940   40.3259   38.4501   39.1900
Picture 09   42.8313   44.7694   42.4572   43.3526
Picture 10   37.8463   41.1899   38.8066   39.2809
Picture 11   38.1024   39.1079   37.6140   38.2748
Picture 12   39.4213   42.0331   40.9348   40.7964
Picture 13   42.7354   45.0743   41.9345   43.2481
Picture 14   43.0495   45.4716   43.3727   43.9646
Picture 15   39.3880   41.5611   40.2815   40.4102
Picture 16   39.4140   42.9145   40.4108   40.9131
Picture 17   41.4268   42.9398   41.5827   41.9831
Picture 18   42.5319   45.8168   42.8554   43.7347
Picture 19   41.5696   44.2054   42.2152   42.6634
Picture 20   36.1300   37.8601   36.2973   36.7625
Picture 21   43.4639   45.9502   44.2462   44.5534
Picture 22   41.4209   43.0842   40.3746   41.6266
Picture 23   39.7628   42.0993   40.5352   40.7991
Picture 24   36.2665   37.6725   35.4316   36.4569


Figures 9 and 10 show Pictures 01 and 06, respectively. These images were selected from the 24 standard images, and the proposed method and conventional interpolation methods were used for simulation. Magnifying the images shows that the proposed method improves image quality significantly, as shown in Figure 11.

The images processed by using the proposed method are extremely similar to the original images. Figure 11 shows a magnification of Figure 9. DWT exhibited good edge detection sensitivity and partially resolved the zipper effects, color shifts, aliasing artifacts, blur effects, and obvious unnatural color grains. Furthermore, DWT limited the unnatural colors of the window lattice and the color inaccuracies of the crisscross.

4. Conclusion

Science and technology change every day. Although chip processing speeds continue to accelerate, chip size and cost continue to decrease. The proposed method does not employ frequency characteristics; instead, image quality is enhanced using spatial characteristics. Previous studies have discussed the importance of edges and interpolation pixels and calculated the frequency and spatial characteristics. This study exploited the sensitivity of wavelet algorithms and the correlation between colors to obtain good results for image edges and interpolation pixels. Comparing the simulation results with those of previous studies, the experimental images and data indicate that the proposed method can provide high-quality images.

Conflict of Interests

The authors do not have a direct financial relation with the commercial identity (Kodak Company and MATLAB/TOOLBOX) mentioned in our paper that might lead to a conflict of interests for any of the authors.


Acknowledgments

This study was supported by the Taiwan e-Learning and Digital Archives Program (TELDAP) and the National Science Council of Taiwan under Grant no. NSC 100-2631-H-027-003. The authors gratefully acknowledge the Chip Implementation Center (CIC) for supplying the technology models used for IC design.


References

1. B. E. Bayer, "Color imaging array," U.S. Patent 3,971,065, 1976.
2. T. Sakamoto, C. Nakanishi, and T. Hase, "Software pixel interpolation for digital still cameras suitable for a 32-bit MCU," IEEE Transactions on Consumer Electronics, vol. 44, no. 4, pp. 1342–1352, 1998.
3. S. C. Pei and I. K. Tam, "Effective color interpolation in CCD color filter arrays using signal correlation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 6, pp. 503–513, 2003.
4. J. E. Adams Jr., "Design of practical color filter array interpolation algorithms for digital cameras," in Real-Time Imaging II, vol. 3028 of Proceedings of SPIE, pp. 117–125, February 1997.
5. R. Lukac, K. N. Plataniotis, D. Hatzinakos, and M. Aleksic, "A new CFA interpolation framework," Signal Processing, vol. 86, no. 7, pp. 1559–1579, 2006.
6. B. K. Gunturk, Y. Altunbasak, and R. M. Mersereau, "Color plane interpolation using alternating projections," IEEE Transactions on Image Processing, vol. 11, no. 9, pp. 997–1013, 2002.
7. E. Dubois, "Frequency-domain methods for demosaicking of Bayer-sampled color images," IEEE Signal Processing Letters, vol. 12, no. 12, pp. 847–850, 2005.
8. B. G. Jeong, S. H. Hyun, and I. K. Eom, "Edge adaptive demosaicking in wavelet domain," in Proceedings of the 9th International Conference on Signal Processing (ICSP'08), pp. 836–839, Beijing, China, October 2008.
9. L. Chen, K. H. Yap, and Y. He, "Color filter array demosaicking using wavelet-based subband synthesis," in Proceedings of the IEEE International Conference on Image Processing (ICIP'05), pp. II-1002–II-1005, September 2005.
10. J. Driesen and P. Scheunders, "Wavelet-based color filter array demosaicking," in Proceedings of the International Conference on Image Processing (ICIP'04), vol. 5, pp. 3311–3314, October 2004.
11. X. Wu and X. Zhang, "Joint color decrosstalk and demosaicking for CFA cameras," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3181–3189, 2010.
12. K. H. Chung and Y. H. Chan, "Color demosaicing using variance of color differences," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 2944–2955, 2006.
13. K. L. Chung, W. J. Yang, W. M. Yan, and C. C. Wang, "Demosaicing of color filter array captured images using gradient edge detection masks and adaptive heterogeneity-projection," IEEE Transactions on Image Processing, vol. 17, no. 12, pp. 2356–2367, 2008.
14. K. L. Chung, W. J. Yang, P. Y. Chen, W. M. Yan, and C. S. Fuh, "New joint demosaicing and zooming algorithm for color filter array," IEEE Transactions on Consumer Electronics, vol. 55, no. 3, pp. 1477–1486, 2009.
15. N. X. Lian, L. Chang, Y. P. Tan, and V. Zagorodnov, "Adaptive filtering for color filter array demosaicking," IEEE Transactions on Image Processing, vol. 16, no. 10, pp. 2515–2525, 2007.

Copyright © 2013 Yu-Cheng Fan and Yi-Feng Chiang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
