Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 635738, 14 pages
http://dx.doi.org/10.1155/2012/635738
Research Article

Haar-Wavelet-Based Just Noticeable Distortion Model for Transparent Watermark

1Department of Electrical Engineering, National Central University, Chungli City 320-01, Taiwan
2Department of Computer Science and Information Engineering, National United University, Miaoli 360-03, Taiwan
3Department of Electronics Engineering, Chung Hua University, Hsinchu City 300-12, Taiwan

Received 3 June 2011; Accepted 5 July 2011

Academic Editor: Carlo Cattani

Copyright © 2012 Lu-Ting Ko et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Watermark transparency is required mainly for copyright protection. Based on the characteristics of the human visual system, the just noticeable distortion (JND) can be used to verify the transparency requirement. More specifically, any watermark whose intensity is below the JND values of an image can be embedded without degrading the visual quality. Obtaining an appropriate JND model, however, requires extensive experimentation. Motivated by the texture masking effect and the spatial masking effect, which are key factors of JND, Chou and Li (1995) proposed the well-known full-band JND model for transparent watermark applications. In this paper, we propose a novel JND model based on the discrete wavelet transform. Experimental results show that the performance of the proposed JND model is comparable to that of the full-band JND model while requiring far less computation; it is about six times faster than the full-band JND model.

1. Introduction

Watermarking is a process that hides information in a host image for the purpose of copyright protection, integrity checking, or captioning [1–3]. In order to achieve watermark transparency, many commonly used techniques are based on the characteristics of the human visual system (HVS) [1–13]. Jayant et al. [14, 15] introduced a key concept known as the just noticeable distortion (JND), below which distortions are not perceptible to the human eye. The JND of an image generally depends on the background luminance, the contrast of luminance, and the dominant spatial frequency. Obtaining an appropriate JND model requires extensive experimentation.

Perceptual redundancies refer to the details of an image that are not perceivable by human eyes and can therefore be discarded without affecting the visual quality. Human visual perception is sensitive to the contrast of luminance rather than to its absolute values [16–18]. In addition, the visibility of stimuli can be reduced by nonuniform quantization of the background luminance [18–20]. These phenomena, known as the texture masking effect and the spatial masking effect, are key factors that affect the JND of an image. Chou and Li proposed an effective model, called the full-band JND model, for transparent watermark applications [21].

The wavelet transform provides an efficient multiresolution representation with various desirable properties, such as subband decompositions with orientation selectivity and joint space and spatial-frequency localization. In the wavelet domain, the finer details of a signal are projected onto shorter basis functions with higher spatial resolution, while the coarser information is projected onto longer basis functions with higher spectral resolution. This matches the characteristics of the HVS. Many wavelet-transform-based algorithms have been proposed for various applications [22–34].

In this paper, we propose a wavelet-transform-based JND model for watermark applications, which substantially reduces the computation time. The remainder of the paper proceeds as follows. In Section 2, the full-band JND model is reviewed briefly. In Section 3, the discrete-wavelet-transform- (DWT-) based JND model is proposed. The modified DWT-based JND model and its evaluation are presented in Section 4. Conclusions are drawn in Section 5.

2. Review of the Full-Band Just Noticeable Distortion (JND) Model

The full-band JND model [21] makes use of the properties of the HVS to measure the perceptual redundancies of an image. It produces the JND profile for image pixels as follows:

$$\mathrm{JND}(x,y)=\max\bigl\{f_1\bigl(bg(x,y),mg(x,y)\bigr),\ f_2\bigl(bg(x,y)\bigr)\bigr\}, \tag{2.1}$$

where

$$f_1\bigl(bg(x,y),mg(x,y)\bigr)=mg(x,y)\,\alpha\bigl(bg(x,y)\bigr)+\beta\bigl(bg(x,y)\bigr), \tag{2.2}$$

$$f_2\bigl(bg(x,y)\bigr)=\begin{cases}T_0\Bigl(1-\sqrt{bg(x,y)/127}\Bigr)+3, & bg(x,y)\le 127,\\[2pt] \gamma\,\bigl(bg(x,y)-127\bigr)+3, & bg(x,y)>127,\end{cases} \tag{2.3}$$

$$\alpha\bigl(bg(x,y)\bigr)=0.0001\,bg(x,y)+0.115, \tag{2.4}$$

$$\beta\bigl(bg(x,y)\bigr)=\lambda-0.01\,bg(x,y), \tag{2.5}$$

$$bg(x,y)=\frac{1}{32}\sum_{i=1}^{5}\sum_{j=1}^{5}p(x-3+i,\,y-3+j)\,B(i,j), \tag{2.6}$$

$$mg(x,y)=\max_{k=1,\dots,4}\bigl|\mathrm{grad}_k(x,y)\bigr|, \tag{2.7}$$

$$\mathrm{grad}_k(x,y)=\frac{1}{16}\sum_{i=1}^{5}\sum_{j=1}^{5}p(x-3+i,\,y-3+j)\,G_k(i,j), \tag{2.8}$$

where $f_1$ and $f_2$ are the texture mask and the spatial mask, respectively, as mentioned in Section 1, $bg(x,y)$ is the average background luminance obtained by using the low-pass filter $B$ given in Figure 1, $mg(x,y)$ is the maximum gradient obtained by using the set of high-pass filters $G_k$, $k=1,\dots,4$, given in Figure 2, the functions $\alpha$ and $\beta$ depend on the average background luminance, $p(x,y)$ is the luminance value at pixel position $(x,y)$, and the constants $T_0$, $\gamma$, and $\lambda$ are set to 17, 3/128, and 1/2 in [21].

Figure 1: Low-pass filter $B$ used in (2.6).
Figure 2: A set of high-pass filters $G_k$, $k=1,\dots,4$, used in (2.8).
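For reference, the full-band computation of (2.1)–(2.8) can be sketched as follows. This is a minimal NumPy/SciPy illustration rather than the authors' implementation: the 5×5 masks $B$ and $G_k$ of Figures 1 and 2 are supplied as arrays, the normalizations 1/32 and 1/16 follow (2.6) and (2.8), and the default constants are the values reported in [21]; treat them as assumptions if your copy of the model differs.

```python
import numpy as np
from scipy.signal import convolve2d

def full_band_jnd(p, B, G, T0=17.0, gamma=3.0 / 128.0, lam=0.5):
    """Sketch of (2.1)-(2.8): p is a grayscale image (0..255), B is the 5x5
    low-pass mask of Figure 1, and G is the list of four 5x5 directional
    high-pass masks of Figure 2 (both passed in by the caller)."""
    p = np.asarray(p, dtype=float)
    # (2.6) average background luminance via the weighted low-pass mask B
    bg = convolve2d(p, B, mode="same") / 32.0
    # (2.7)-(2.8) maximum weighted gradient around each pixel
    mg = np.max([np.abs(convolve2d(p, Gk, mode="same")) / 16.0 for Gk in G], axis=0)
    # (2.4)-(2.5) luminance-dependent slope and offset
    alpha = bg * 0.0001 + 0.115
    beta = lam - bg * 0.01
    # (2.2) texture (edge) masking threshold
    f1 = mg * alpha + beta
    # (2.3) spatial (luminance) masking threshold
    f2 = np.where(bg <= 127.0,
                  T0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                  gamma * (bg - 127.0) + 3.0)
    # (2.1) the JND is the dominant of the two masking effects
    return np.maximum(f1, f2)
```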

3. Discrete-Wavelet-Transform-Based JND Model

In this section, we propose a novel JND model based on discrete wavelet transform. It has the advantage of reducing computational complexity significantly.

3.1. Discrete Wavelet Transform

The discrete wavelet transform (DWT) provides an efficient multiresolution analysis of signals. Specifically, any finite-energy signal $f(t)$ can be written as

$$f(t)=\sum_{k}c_{J}[k]\,\phi_{J,k}(t)+\sum_{j\le J}\sum_{k}d_{j}[k]\,\psi_{j,k}(t),$$

where $j$ denotes the resolution index, with larger values meaning coarser resolutions, $k$ is the translation index, $\psi(t)$ is a mother wavelet, $\phi(t)$ is the corresponding scaling function, $\psi_{j,k}(t)=2^{-j/2}\psi(2^{-j}t-k)$, $\phi_{J,k}(t)=2^{-J/2}\phi(2^{-J}t-k)$, $c_{J}[k]$ is the scaling coefficient representing the approximation information of $f(t)$ at the coarsest resolution $J$, and $d_{j}[k]$ is the wavelet coefficient representing the detail information of $f(t)$ at resolution $j$. The coefficients $c_{j+1}[k]$ and $d_{j+1}[k]$ can be obtained from the scaling coefficients at the next finer resolution by using the 1-level DWT, which is given by

$$c_{j+1}[k]=\sum_{n}h[n-2k]\,c_{j}[n],\qquad d_{j+1}[k]=\sum_{n}g[n-2k]\,c_{j}[n],$$

where $h[n]=\langle\phi_{1,0},\phi_{0,n}\rangle$, $g[n]=\langle\psi_{1,0},\phi_{0,n}\rangle$, and $\langle\cdot,\cdot\rangle$ denotes the inner product. It is noted that $h[n]$ and $g[n]$ are the corresponding low-pass filter and high-pass filter, respectively. Moreover, $c_{j}[n]$ can be reconstructed from $c_{j+1}[k]$ and $d_{j+1}[k]$ by using the inverse DWT, which is given by

$$c_{j}[n]=\sum_{k}h[n-2k]\,c_{j+1}[k]+\sum_{k}g[n-2k]\,d_{j+1}[k],$$

where, for orthonormal wavelets such as the Haar wavelet, the same filter coefficients $h[n]$ and $g[n]$ are used for reconstruction.
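A minimal one-level analysis/synthesis sketch is given below. It is written directly for the Haar filter pair introduced in the next paragraph (an illustrative assumption), so the filter support is two samples and non-overlapping pairs suffice; a general filter pair would require the full convolutions above.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
H_HAAR = np.array([1.0, 1.0]) / SQRT2   # Haar analysis low-pass filter h[n]
G_HAAR = np.array([1.0, -1.0]) / SQRT2  # Haar analysis high-pass filter g[n]

def dwt1(c, h=H_HAAR, g=G_HAAR):
    """One analysis step: scaling coefficients c_j -> (c_{j+1}, d_{j+1})."""
    c = np.asarray(c, dtype=float)
    pairs = c.reshape(-1, 2)  # non-overlapping pairs are valid because the Haar filters have length 2
    return pairs @ h, pairs @ g

def idwt1(c_next, d_next, h=H_HAAR, g=G_HAAR):
    """One synthesis step: (c_{j+1}, d_{j+1}) -> c_j (perfect reconstruction)."""
    pairs = np.outer(c_next, h) + np.outer(d_next, g)
    return pairs.reshape(-1)

x = np.array([4.0, 2.0, 5.0, 7.0])
a, d = dwt1(x)
assert np.allclose(idwt1(a, d), x)  # the round trip recovers the signal
```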

For image applications, the 2D DWT can be obtained by the tensor product of 1D DWTs. Among wavelets, the Haar wavelet [22] is the simplest one and has been widely used in many applications. The low-pass filter and high-pass filter of the Haar wavelet are as follows:

$$h[n]=\Bigl[\tfrac{1}{\sqrt{2}},\ \tfrac{1}{\sqrt{2}}\Bigr],\qquad g[n]=\Bigl[\tfrac{1}{\sqrt{2}},\ -\tfrac{1}{\sqrt{2}}\Bigr].$$

Figures 3 and 4 show the row decomposition and the column decomposition using the Haar wavelet, respectively. Notice that in the 2D DWT the column decomposition may follow the row decomposition, or vice versa:

$$A=\frac{a+b+c+d}{2},\quad H=\frac{a-b+c-d}{2},\quad V=\frac{a+b-c-d}{2},\quad D=\frac{a-b-c+d}{2},$$

where $a$, $b$, $c$, and $d$ are the pixel values of a $2\times 2$ block ($a$ and $b$ in the upper row, $c$ and $d$ in the lower row), and $A$, $H$, $V$, and $D$ denote the approximation and the detail information in the horizontal, vertical, and diagonal orientations, respectively, of the input image. Figure 5 shows the 1-level 2D DWT using the Haar wavelet.

Figure 3: The row decomposition using the Haar wavelet ($a$, $b$, $c$, and $d$ are pixel values).
Figure 4: The column decomposition using the Haar wavelet ($a$, $b$, $c$, and $d$ are pixel values).
Figure 5: 1-level 2D DWT using the Haar wavelet ($a$, $b$, $c$, and $d$ are pixel values).

The approximation subband $A$ of an image can be further decomposed into four subbands $A_2$, $H_2$, $V_2$, and $D_2$ at the next coarser resolution, which together with $H$, $V$, and $D$ form the 2-level DWT of the input image. Thus, a higher-level DWT can be obtained by decomposing the approximation subband in this recursive manner.
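The row/column decomposition of Figures 3–5 and the recursive multi-level decomposition can be sketched as follows; even image dimensions at every level and the usual horizontal/vertical naming of the detail subbands are assumptions of this illustration.

```python
import numpy as np

def haar_dwt2_1level(x):
    """One level of the 2-D Haar DWT: returns (A, H, V, D).

    The row decomposition (Figure 3) is followed by the column decomposition
    (Figure 4); image sides are assumed even.  The naming of H and V follows
    one common horizontal/vertical-detail convention and may be swapped in
    other texts."""
    x = np.asarray(x, dtype=float)
    # row decomposition: combine horizontally adjacent pixels
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2.0)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2.0)
    # column decomposition: combine vertically adjacent rows of lo and hi
    A = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2.0)  # approximation
    V = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2.0)  # vertical detail
    H = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2.0)  # horizontal detail
    D = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2.0)  # diagonal detail
    return A, H, V, D

def haar_dwt2(x, levels):
    """Multi-level 2-D Haar DWT: keep decomposing the approximation subband."""
    details = []
    A = x
    for _ in range(levels):
        A, H, V, D = haar_dwt2_1level(A)
        details.append((H, V, D))
    return A, details
```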

3.2. Proposed DWT-Based JND Model

As mentioned in Section 2, the full-band JND model [21] consists of (2.1)–(2.8), which is computationally expensive. In order to reduce the computational complexity, a novel JND model based on the Haar wavelet is proposed. Analogously to (2.1), it takes the maximum of $\hat f_1$ and $\hat f_2$, the proposed DWT-based texture mask and spatial mask, respectively. The mask $\hat f_1$ uses the modified average background luminance $\hat{bg}(x,y)$, which is obtained from the approximation subband of the Haar DWT and replaces the low-pass filter of the full-band JND model, together with $\hat{mg}(x,y)$, which replaces the maximum gradient $mg(x,y)$ and is obtained from the detail subbands $H$, $V$, and $D$; these subbands take over the role of the directional high-pass filters $G_k$ of the full-band model. The mask $\hat f_2$ depends on $\hat{bg}(x,y)$ only.
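A hedged sketch of this structure is given below. Since the exact expressions of (3.6)–(3.10) are not reproduced here, the sketch only mirrors (2.1)–(2.5) with the Haar subbands substituted in; the rescaling of the approximation subband and the nearest-neighbour upsampling back to pixel resolution are assumptions made for illustration, not the authors' equations.

```python
import numpy as np

def dwt_based_jnd(A, H, V, D, T0=17.0, gamma=3.0 / 128.0, lam=0.5):
    """Hedged sketch of a DWT-based JND map built from 1-level Haar subbands
    (e.g. the output of haar_dwt2_1level above)."""
    # assumed rescaling: with unit-norm Haar filters, A/2 is the local 2x2 mean luminance
    bg_hat = A / 2.0
    # the three detail subbands stand in for the directional gradients of (2.7)-(2.8)
    mg_hat = np.maximum(np.abs(H), np.maximum(np.abs(V), np.abs(D)))
    alpha = bg_hat * 0.0001 + 0.115          # luminance dependence as in (2.4)
    beta = lam - bg_hat * 0.01               # luminance dependence as in (2.5)
    f1_hat = mg_hat * alpha + beta           # texture-mask analogue of (2.2)
    f2_hat = np.where(bg_hat <= 127.0,       # spatial-mask analogue of (2.3)
                      T0 * (1.0 - np.sqrt(bg_hat / 127.0)) + 3.0,
                      gamma * (bg_hat - 127.0) + 3.0)
    jnd_sub = np.maximum(f1_hat, f2_hat)     # dominant mask, analogue of (2.1)
    # assumed nearest-neighbour upsampling back to pixel resolution
    return np.repeat(np.repeat(jnd_sub, 2, axis=0), 2, axis=1)
```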

4. Modification of the DWT-Based JND Model

In this section, we introduce an adjustable parameter to modify the DWT-based JND model such that the computation time is reduced significantly while the performance remains comparable to that of the benchmark full-band JND model. The test images, namely, Lena, Cameraman, Baboon, Board, and Peppers, are shown in the first row of Figure 12.

4.1. Evaluation of JND Models

Figure 6 shows the distortion-tolerant model, which can be used to evaluate JND models. It takes the JND values as noise, adds them to the original image, and computes the peak signal-to-noise ratio (PSNR) defined as

$$\mathrm{PSNR}=10\log_{10}\frac{255^{2}}{\mathrm{MSE}}, \tag{4.1}$$

where MSE is the mean squared error between the original image $p(x,y)$ and the noisy image $\hat p(x,y)$, defined as

$$\mathrm{MSE}=\frac{1}{N^{2}}\sum_{x=1}^{N}\sum_{y=1}^{N}\bigl(p(x,y)-\hat p(x,y)\bigr)^{2}, \tag{4.2}$$

where the image size is $N\times N$. As shown in Table 1, the proposed DWT-based JND model differs somewhat from the benchmark full-band JND model in terms of the PSNR values.
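The evaluation of Figure 6 amounts to the following sketch: inject the JND map as noise and report the resulting PSNR. The random ± sign on the injected noise is an assumption of this illustration (it leaves the MSE unchanged compared with adding the JND magnitudes directly).

```python
import numpy as np

def mse(original, noisy):
    """Mean squared error (4.2) between two same-shaped images."""
    diff = np.asarray(original, dtype=float) - np.asarray(noisy, dtype=float)
    return np.mean(diff ** 2)

def psnr(original, noisy, peak=255.0):
    """Peak signal-to-noise ratio (4.1) in dB."""
    return 10.0 * np.log10(peak ** 2 / mse(original, noisy))

def evaluate_jnd(image, jnd_map, seed=0):
    """Distortion-tolerant evaluation (Figure 6): add JND-sized noise, report PSNR."""
    rng = np.random.default_rng(seed)
    # random +/- sign is an assumption; it keeps the mean luminance roughly
    # unchanged while the MSE equals the mean of jnd_map**2 either way
    signs = rng.choice([-1.0, 1.0], size=np.asarray(image).shape)
    noisy = np.clip(np.asarray(image, dtype=float) + signs * np.asarray(jnd_map), 0.0, 255.0)
    return psnr(image, noisy)
```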

Table 1: PSNR comparisons of the benchmark full-band JND model, the proposed DWT-based JND model, and the modified DWT-based JND model.
Figure 6: Distortion-tolerant evaluation model for the proposed JND model.
4.2. Modified DWT-Based JND Model

Based on (2.1)–(2.3) and (3.6)–(3.8), one can examine the influences of the dominant mask, the texture mask, and the spatial mask for the full-band JND model and the DWT-based JND model, respectively. Their respective MSE values are shown in Table 2. As one can see, the proposed texture mask of the DWT-based JND model is less significant than that of the full-band JND model. Thus, we propose an adjustable parameter to modify (3.17) and (3.18), as given in (4.3), so that the contribution of the DWT-based texture mask can be adjusted; the modified quantities replace their DWT-based counterparts. Figures 7, 8, 9, 10, and 11 show the MSE values obtained by modifying the DWT-based texture mask with various values of the adjustable parameter in (4.3). In this paper, the adjustable parameter is set to 4 after extensive simulations. With this setting, the performance of the modified DWT-based JND model is comparable to that of the full-band JND model in terms of the PSNR and MSE values, as shown in Tables 1 and 2, respectively.
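Because the exact form of (4.3) is not reproduced here, the following sketch encodes only one plausible reading of the modification, namely that the DWT-based texture mask is amplified by the adjustable parameter before the dominant mask is taken; this reading is an assumption for illustration, not the authors' equation.

```python
import numpy as np

def modified_dwt_jnd(f1_hat, f2_hat, adjustable_parameter=4.0):
    """Hedged sketch of the Section 4.2 modification: amplify the DWT-based
    texture mask by the adjustable parameter (set to 4 in this paper) before
    taking the dominant mask.  The true form of (4.3) may differ."""
    f1_mod = adjustable_parameter * f1_hat   # assumed scaling of the texture mask
    return np.maximum(f1_mod, f2_hat)        # dominant mask, as in (2.1)
```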

Table 2: The MSE values due to the dominant mask (case 1), the spatial mask (case 2), and the texture mask (case 3) using the full-band JND model, the DWT-based JND model, and the modified DWT-based JND model.
Figure 7: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of the adjustable parameter for the Lena image; dashed line: MSE value obtained by using the texture mask of the full-band JND model.
Figure 8: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of the adjustable parameter for the Cameraman image; dashed line: MSE value obtained by using the texture mask of the full-band JND model.
Figure 9: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of the adjustable parameter for the Baboon image; dashed line: MSE value obtained by using the texture mask of the full-band JND model.
Figure 10: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of the adjustable parameter for the Board image; dashed line: MSE value obtained by using the texture mask of the full-band JND model.
Figure 11: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of the adjustable parameter for the Peppers image; dashed line: MSE value obtained by using the texture mask of the full-band JND model.
Figure 12: (a) The original images, namely, Lena, Cameraman, Baboon, Board, and Peppers; (b), (c), and (d) the noisy images obtained by adding the full-band JND values, the DWT-based JND values, and the modified DWT-based JND values, respectively.

Figure 12 shows the noisy images obtained by adding the JND values to the original images (Figure 12(a)) using the full-band JND model (Figure 12(b)), the DWT-based JND model (Figure 12(c)), and the modified DWT-based JND model (Figure 12(d)). It is noted that the images in the second and fourth rows, that is, Figures 12(b) and 12(d), are almost indistinguishable from the original images. As a result, the modified DWT-based JND model is visually comparable to the full-band JND model.

4.3. Computational Complexity of the Proposed JND Models

In the full-band JND model, the computation of the average background luminance $bg(x,y)$ requires 9 multiplications per pixel, the computation of the maximum gradient $mg(x,y)$ requires 28 multiplications per pixel, and the computation required for (2.2)–(2.5) is 6 multiplications per pixel; for an $N\times N$ image, this amounts to $43N^{2}$ multiplications. In the proposed DWT-based JND model, the Haar subband decomposition and the modified background luminance require far fewer multiplications, and the computation required for (3.7)–(3.10) is also 6 multiplications per pixel, so the total number of multiplications for an $N\times N$ image is much smaller. In the modified DWT-based JND model, the computation of the modified masks requires no additional multiplication, so even fewer multiplications are needed for an $N\times N$ image. Figure 13 shows a log plot of the number of multiplications required by the three JND models versus image size. As a result, the DWT-based JND model is about 6 times faster than the full-band JND model, which is its main advantage.

Figure 13: Log plot of numbers of multiplications required for the three JND models versus different image sizes.

5. Conclusion

In this paper, an efficient DWT-based JND model is presented. It substantially reduces the computation time while achieving performance comparable to that of the benchmark full-band JND model. More specifically, the computational complexity of the proposed DWT-based JND model is only about one sixth of that of the full-band JND model. As a result, it is suitable for real-time applications.

Acknowledgment

This work was supported by the National Science Council of Taiwan under Grants NSC98-2221-E-216-037 and NSC99-2221-E-239-034.

References

  1. I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, “Secure spread spectrum watermarking for multimedia,” IEEE Transactions on Image Processing, vol. 6, no. 12, pp. 1673–1687, 1997.
  2. M. D. Swanson, M. Kobayashi, and A. H. Tewfik, “Multimedia data-embedding and watermarking technologies,” Proceedings of the IEEE, vol. 86, no. 6, pp. 1064–1087, 1998.
  3. M. Barni, F. Bartolini, and A. Piva, “Improved wavelet-based watermarking through pixel-wise masking,” IEEE Transactions on Image Processing, vol. 10, no. 5, pp. 783–791, 2001.
  4. M. Kutter and S. Winkler, “A vision-based masking model for spread-spectrum image watermarking,” IEEE Transactions on Image Processing, vol. 11, no. 1, pp. 16–25, 2002.
  5. C. De Vleeschouwer, J. F. Delaigle, and B. Macq, “Invisibility and application functionalities in perceptual watermarking: an overview,” Proceedings of the IEEE, vol. 90, no. 1, pp. 64–77, 2002.
  6. Q. Li, C. Yuan, and Y. Z. Zhong, “Adaptive DWT-SVD domain image watermarking using human visual model,” in Proceedings of the 9th International Conference on Advanced Communication Technology (ICACT '07), pp. 1947–1951, February 2007.
  7. F. Autrusseau and P. L. Callet, “A robust image watermarking technique based on quantization noise visibility thresholds,” Signal Processing, vol. 87, no. 6, pp. 1363–1383, 2007.
  8. H. S. Moon, T. You, M. H. Sohn, H. S. Kim, and D. S. Jang, “Expert system for low frequency adaptive image watermarking: using psychological experiments on human image perception,” Expert Systems with Applications, vol. 32, no. 2, pp. 674–686, 2007.
  9. H. Qi, D. Zheng, and J. Zhao, “Human visual system based adaptive digital image watermarking,” Signal Processing, vol. 88, no. 1, pp. 174–188, 2008.
  10. A. Koz and A. A. Alatan, “Oblivious spatio-temporal watermarking of digital video by exploiting the human visual system,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 3, pp. 326–337, 2008.
  11. S. Y. Chen, Y. F. Li, and J. Zhang, “Vision processing for realtime 3-D data acquisition based on coded structured light,” IEEE Transactions on Image Processing, vol. 17, no. 2, pp. 167–176, 2008.
  12. S. Y. Chen, H. Tong, Z. Wang, S. Liu, M. Li, and B. Zhang, “Improved generalized belief propagation for vision processing,” Mathematical Problems in Engineering, vol. 2011, Article ID 416963, 12 pages, 2011.
  13. S. Y. Chen and Q. Guan, “Parametric shape representation by a deformable NURBS model for cardiac functional measurements,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 3, pp. 480–487, 2011.
  14. N. Jayant, “Signal compression: technology targets and research directions,” IEEE Journal on Selected Areas in Communications, vol. 10, pp. 314–323, 1992.
  15. N. Jayant, J. Johnston, and R. Safranek, “Signal compression based on models of human perception,” Proceedings of the IEEE, vol. 81, no. 10, pp. 1385–1422, 1993.
  16. R. F. Boyer and R. S. Spencer, “Thermal expansion and second-order transition effects in high polymers: part II. Theory,” Journal of Applied Physics, vol. 16, no. 10, pp. 594–607, 1945.
  17. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1989.
  18. X. Yang, W. Lin, Z. Lu, E. Ong, and S. Yao, “Motion-compensated residue preprocessing in video coding based on just-noticeable-distortion profile,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, no. 6, pp. 742–751, 2005.
  19. J. Pandel, “Variable bit-rate image sequence coding with adaptive quantization,” Signal Processing, vol. 3, no. 2-3, pp. 123–128, 1991.
  20. B. Girod, “Psychovisual aspects of image communication,” Signal Processing, vol. 28, no. 3, pp. 239–251, 1992.
  21. C. H. Chou and Y. C. Li, “Perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, no. 6, pp. 467–476, 1995.
  22. B. S. Kim, I. J. Shim, M. T. Lim, and Y. J. Kim, “Combined preorder and postorder traversal algorithm for the analysis of singular systems by Haar wavelets,” Mathematical Problems in Engineering, vol. 2008, Article ID 323080, 16 pages, 2008.
  23. G. Mattioli, M. Scalia, and C. Cattani, “Analysis of large-amplitude pulses in short time intervals: application to neuron interactions,” Mathematical Problems in Engineering, vol. 2010, Article ID 895785, 15 pages, 2010.
  24. C. Cattani, “Harmonic wavelet approximation of random, fractal and high frequency signals,” Telecommunication Systems, vol. 43, no. 3-4, pp. 207–217, 2010.
  25. C. Cattani, “Shannon wavelets theory,” Mathematical Problems in Engineering, vol. 2008, Article ID 164808, 24 pages, 2008.
  26. A. Kudreyko and C. Cattani, “Application of periodized harmonic wavelets towards solution of eigenvalue problems for integral equations,” Mathematical Problems in Engineering, vol. 2010, Article ID 570136, 8 pages, 2010.
  27. M. Li and W. Zhao, “Representation of a stochastic traffic bound,” IEEE Transactions on Parallel and Distributed Systems, vol. 21, no. 9, Article ID 5342414, pp. 1368–1372, 2010.
  28. M. Li, “Generation of teletraffic of generalized Cauchy type,” Physica Scripta, vol. 81, no. 2, Article ID 025007, 2010.
  29. M. Li, “Fractal time series: a tutorial review,” Mathematical Problems in Engineering, vol. 2010, Article ID 157264, 26 pages, 2010.
  30. E. G. Bakhoum and C. Toma, “Specific mathematical aspects of dynamics generated by coherence functions,” Mathematical Problems in Engineering, vol. 2011, Article ID 436198, 10 pages, 2011.
  31. E. G. Bakhoum and C. Toma, “Dynamical aspects of macroscopic and quantum transitions due to coherence function and time series events,” Mathematical Problems in Engineering, vol. 2010, Article ID 428903, 13 pages, 2010.
  32. E. G. Bakhoum and C. Toma, “Mathematical transform of traveling-wave equations and phase aspects of quantum interaction,” Mathematical Problems in Engineering, vol. 2010, Article ID 695208, 15 pages, 2010.
  33. T. Y. Sung and H. C. Hsin, “A hybrid image coder based on SPIHT algorithm with embedded block coding,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E90-A, no. 12, pp. 2979–2984, 2007.
  34. H. C. Hsin and T. Y. Sung, “Adaptive selection and rearrangement of wavelet packets for quad-tree image coding,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E91-A, no. 9, pp. 2655–2662, 2008.