Research Article  Open Access
Lu-Ting Ko, Jwu-E Chen, Hsi-Chin Hsin, Yaw-Shih Shieh, Tze-Yun Sung, "Haar-Wavelet-Based Just Noticeable Distortion Model for Transparent Watermark", Mathematical Problems in Engineering, vol. 2012, Article ID 635738, 14 pages, 2012. https://doi.org/10.1155/2012/635738
Haar-Wavelet-Based Just Noticeable Distortion Model for Transparent Watermark
Abstract
Watermark transparency is required mainly for copyright protection. Based on the characteristics of the human visual system, the just noticeable distortion (JND) can be used to verify the transparency requirement. More specifically, any watermark whose intensity is less than the JND values of an image can be added without degrading the visual quality. Developing an appropriate JND model takes extensive experimentation. Motivated by the texture masking effect and the spatial masking effect, which are key factors of JND, Chou and Li (1995) proposed the well-known full-band JND model for transparent watermark applications. In this paper, we propose a novel JND model based on the discrete wavelet transform. Experimental results show that the performance of the proposed JND model is comparable to that of the full-band JND model, while it saves considerable computation time: it runs about six times faster than the full-band JND model.
1. Introduction
Watermarking is a process that embeds information in a host image for the purpose of copyright protection, integrity checking, or captioning [1–3]. In order to achieve watermark transparency, many commonly used techniques are based on the characteristics of the human visual system (HVS) [1–13]. Jayant et al. [14, 15] introduced a key concept known as the just noticeable distortion (JND), the threshold below which errors are not perceptible to the human eye. The JND of an image is in general dependent on the background luminance, the contrast of luminance, and the dominant spatial frequency. It takes extensive experimentation to obtain an appropriate JND model.
Perceptual redundancies refer to the details of an image that are not perceivable by human eyes and therefore can be discarded without affecting the visual quality. As noted, human visual perception is sensitive to the contrast of luminance rather than to absolute luminance values [16–18]. In addition, the visibility of stimuli can be reduced by nonuniformly quantizing the background luminance [18–20]. These two phenomena, known as the texture masking effect and the spatial masking effect, are key factors that affect the JND of an image. Chou and Li proposed an effective model, called the full-band JND model, for transparent watermark applications [21].
The wavelet transform provides an efficient multiresolution representation with various desirable properties, such as subband decompositions with orientation selectivity and joint space-spatial-frequency localization. In the wavelet domain, the finer detail of a signal is projected onto shorter basis functions with higher spatial resolution, while coarser information is projected onto longer basis functions with higher spectral resolution. This matches the characteristics of the HVS. Many wavelet-transform-based algorithms have been proposed for various applications [22–34].
In this paper, we propose a wavelet-transform-based JND model for watermark applications. It has the advantage of reducing computation time substantially. The remainder of the paper proceeds as follows. In Section 2, the full-band JND model is reviewed briefly. In Section 3, the discrete-wavelet-transform (DWT) based JND model is proposed. The modified DWT-based JND model and its evaluation are presented in Section 4. Conclusions are drawn in Section 5.
2. Review of the Full-Band Just Noticeable Distortion (JND) Model
The full-band JND model [21] makes use of the properties of the HVS to measure the perceptual redundancies of an image. It produces the JND profile for the image pixels as follows:

$$\mathrm{JND}(x, y) = \max\{f_{1}(bg(x, y), mg(x, y)),\ f_{2}(bg(x, y))\},$$

where $f_{1}$ and $f_{2}$ are the texture mask and the spatial mask, respectively, as mentioned in Section 1:

$$f_{1}(bg(x, y), mg(x, y)) = mg(x, y)\,\alpha(bg(x, y)) + \beta(bg(x, y)),$$

$$f_{2}(bg(x, y)) = \begin{cases} T_{0}\left(1 - \sqrt{bg(x, y)/127}\right) + 3, & bg(x, y) \le 127,\\ \gamma\,(bg(x, y) - 127) + 3, & bg(x, y) > 127,\end{cases}$$

where $bg(x, y)$ is the average background luminance obtained by using a low-pass filter, $B$, given in Figure 1; $mg(x, y)$ is the maximum gradient obtained by using a set of high-pass filters, $G_{k}$, $k = 1, \ldots, 4$, given in Figure 2; the functions $\alpha$ and $\beta$ are dependent on the average background luminance; and $p(x, y)$ is the luminance value at pixel position $(x, y)$.
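For concreteness, the full-band JND computation can be sketched in Python. This is an illustrative sketch, not the exact implementation of [21]: a plain 5x5 mean stands in for the weighted low-pass operator B of Figure 1, first-order differences stand in for the gradient operators of Figure 2, and the constants T0 = 17, gamma = 3/128, and lambda = 1/2 are those commonly reported for the Chou-Li model.

```python
import numpy as np

def jnd_fullband(img, t0=17.0, gamma=3.0 / 128.0, lam=0.5):
    """Per-pixel JND in the spirit of the Chou-Li full-band model.

    Simplifications: a plain 5x5 mean replaces the weighted low-pass
    operator B (Figure 1), and first-order differences replace the
    gradient operators G_k (Figure 2)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    # average background luminance bg(x, y): 5x5 local mean
    pad = np.pad(img, 2, mode='edge')
    bg = np.zeros_like(img)
    for dy in range(5):
        for dx in range(5):
            bg += pad[dy:dy + h, dx:dx + w]
    bg /= 25.0
    # maximum gradient mg(x, y): largest absolute first-order difference
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    mg = np.maximum(gx, gy)
    # texture mask f1 and spatial mask f2
    f1 = mg * (0.0001 * bg + 0.115) + (lam - 0.01 * bg)
    f2 = np.where(bg <= 127.0,
                  t0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                  gamma * (bg - 127.0) + 3.0)
    return np.maximum(f1, f2)
```

On a flat region the gradient vanishes, so the spatial mask $f_{2}$ dominates, as the model intends.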
3. Discrete-Wavelet-Transform-Based JND Model
In this section, we propose a novel JND model based on discrete wavelet transform. It has the advantage of reducing computational complexity significantly.
3.1. Discrete Wavelet Transform
The discrete wavelet transform (DWT) provides an efficient multiresolution analysis for signals. Specifically, any finite-energy signal $f(t)$ can be written as

$$f(t) = \sum_{k} s_{J}(k)\,\varphi_{J,k}(t) + \sum_{j \le J} \sum_{k} w_{j}(k)\,\psi_{j,k}(t),$$

where $j$ denotes the resolution index, with larger values meaning coarser resolutions, $k$ is the translation index, $\psi(t)$ is a mother wavelet, $\varphi(t)$ is the corresponding scaling function, $\psi_{j,k}(t) = 2^{-j/2}\psi(2^{-j}t - k)$, $\varphi_{J,k}(t) = 2^{-J/2}\varphi(2^{-J}t - k)$, $s_{J}(k)$ is the scaling coefficient representing the approximation information of $f(t)$ at the coarsest resolution $J$, and $w_{j}(k)$ is the wavelet coefficient representing the detail information of $f(t)$ at resolution $j$. Coefficients $s_{j}(k)$ and $w_{j}(k)$ can be obtained from the scaling coefficients at the next finer resolution by using the 1-level DWT, which is given by

$$s_{j}(k) = \sum_{n} h(n - 2k)\,s_{j-1}(n), \qquad w_{j}(k) = \sum_{n} g(n - 2k)\,s_{j-1}(n),$$

where $h(n) = \langle \varphi_{1,0}(t), \varphi(t - n)\rangle$, $g(n) = \langle \psi_{1,0}(t), \varphi(t - n)\rangle$, and $\langle\cdot,\cdot\rangle$ denotes the inner product. It is noted that $h(n)$ and $g(n)$ are the corresponding low-pass filter and high-pass filter, respectively. Moreover, $s_{j-1}(n)$ can be reconstructed from $s_{j}(k)$ and $w_{j}(k)$ by using the inverse DWT, which is given by

$$s_{j-1}(n) = \sum_{k} h(n - 2k)\,s_{j}(k) + \sum_{k} g(n - 2k)\,w_{j}(k).$$
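The 1-level analysis and synthesis equations above can be sketched directly for the Haar case (the function names are ours):

```python
import numpy as np

def haar_dwt1d(s):
    """1-level 1-D Haar analysis: s_j(k), w_j(k) from the finer-scale s_{j-1}(n)."""
    s = np.asarray(s, dtype=np.float64)
    assert s.size % 2 == 0, "signal length must be even"
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)  # low-pass branch h = [1, 1]/sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)  # high-pass branch g = [1, -1]/sqrt(2)
    return approx, detail

def haar_idwt1d(approx, detail):
    """Inverse 1-level Haar DWT: perfect reconstruction of the finer scale."""
    s = np.empty(2 * approx.size)
    s[0::2] = (approx + detail) / np.sqrt(2.0)
    s[1::2] = (approx - detail) / np.sqrt(2.0)
    return s
```

Because the Haar filters are orthonormal, the transform preserves signal energy and the synthesis step recovers the input exactly.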
For image applications, the 2-D DWT can be obtained by using the tensor product of 1-D DWTs. Among wavelets, the Haar wavelet [22] is the simplest one and has been widely used in many applications. The low-pass filter and high-pass filter of the Haar wavelet are as follows:

$$h(n) = \left\{\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right\}, \qquad g(n) = \left\{\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}\right\}.$$

Figures 3 and 4 show the row decomposition and the column decomposition using the Haar wavelet, respectively. Notice that the column decomposition may follow the row decomposition, or vice versa, in the 2-D DWT. For each $2 \times 2$ block of pixel values $x_{1}$, $x_{2}$, $x_{3}$, and $x_{4}$ (in raster order), the 1-level 2-D Haar DWT yields

$$LL = \tfrac{1}{2}(x_{1} + x_{2} + x_{3} + x_{4}), \qquad LH = \tfrac{1}{2}(x_{1} + x_{2} - x_{3} - x_{4}),$$

$$HL = \tfrac{1}{2}(x_{1} - x_{2} + x_{3} - x_{4}), \qquad HH = \tfrac{1}{2}(x_{1} - x_{2} - x_{3} + x_{4}),$$

where $LL$, $LH$, $HL$, and $HH$ denote the approximation and the detail information in the horizontal, vertical, and diagonal orientations, respectively, of the input image. Figure 5 shows the 1-level, 2-D DWT using the Haar wavelet.
The $LL$ subband of an image can be further decomposed into four subbands, $LL_{2}$, $LH_{2}$, $HL_{2}$, and $HH_{2}$, at the next coarser resolution, which together with $LH$, $HL$, and $HH$ forms the 2-level DWT of the input image. Thus, a higher-level DWT can be obtained by decomposing the approximation subband in this recursive manner.
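The 2x2-block form of the 1-level 2-D Haar DWT can be sketched as follows (the LL/LH/HL/HH orientation labels follow the common convention, which may differ from the paper's figures):

```python
import numpy as np

def haar_dwt2d(img):
    """1-level 2-D Haar DWT on 2x2 blocks; returns the (LL, LH, HL, HH) subbands."""
    x = np.asarray(img, dtype=np.float64)
    assert x.shape[0] % 2 == 0 and x.shape[1] % 2 == 0
    a, b = x[0::2, 0::2], x[0::2, 1::2]  # top-left, top-right of each block
    c, d = x[1::2, 0::2], x[1::2, 1::2]  # bottom-left, bottom-right
    ll = (a + b + c + d) / 2.0           # approximation
    lh = (a + b - c - d) / 2.0           # horizontal detail
    hl = (a - b + c - d) / 2.0           # vertical detail
    hh = (a - b - c + d) / 2.0           # diagonal detail
    return ll, lh, hl, hh
```

A 2-level DWT is obtained by calling `haar_dwt2d` again on the returned LL subband, mirroring the recursive decomposition described above.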
3.2. Proposed DWT-Based JND Model
As mentioned in Section 2, the full-band JND model [21] consists of (2.1)–(2.8), which is computationally demanding. In order to reduce the computational complexity, a novel JND model based on the Haar wavelet is proposed, in which the texture mask and the spatial mask of the full-band model are replaced by DWT-based counterparts. Specifically, the average background luminance, which the full-band model obtains with the low-pass filter, is replaced by a modified Haar approximation subband, and the maximum gradient $mg(x, y)$ is replaced by a gradient measure computed from the Haar detail subbands in the horizontal, vertical, and diagonal orientations.
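The following Python sketch illustrates the general idea only; it is not the authors' exact model (3.6)–(3.10). Here the Haar approximation subband, rescaled to the luminance range, stands in for the background luminance, the largest Haar detail magnitude stands in for the gradient, and the mask functions reuse the full-band forms from Section 2.

```python
import numpy as np

def jnd_dwt_sketch(img, t0=17.0, gamma=3.0 / 128.0, lam=0.5):
    """Illustrative DWT-based JND sketch, one value per 2x2 block.

    NOT the paper's exact (3.6)-(3.10): the Haar approximation subband
    (rescaled to the luminance range) stands in for bg(x, y), and the
    largest Haar detail magnitude stands in for mg(x, y)."""
    x = np.asarray(img, dtype=np.float64)
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    bg = (a + b + c + d) / 4.0            # block-average luminance
    wh = (a + b - c - d) / 2.0            # horizontal detail
    wv = (a - b + c - d) / 2.0            # vertical detail
    wd = (a - b - c + d) / 2.0            # diagonal detail
    mg = np.maximum(np.abs(wh), np.maximum(np.abs(wv), np.abs(wd)))
    f1 = mg * (0.0001 * bg + 0.115) + (lam - 0.01 * bg)    # texture mask
    f2 = np.where(bg <= 127.0,
                  t0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                  gamma * (bg - 127.0) + 3.0)              # spatial mask
    return np.maximum(f1, f2)
```

Note the structural source of the speedup: all quantities are computed once per 2x2 block rather than once per pixel, and the Haar combinations need only additions and a halving.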
4. Modification of the DWT-Based JND Model
In this section, we introduce an adjustable parameter to modify the DWT-based JND model such that the computation time is reduced significantly while the performance remains comparable to that of the benchmark full-band JND model. The test images, namely, Lena, Cameraman, Baboon, Board, and Peppers, are shown in the first row of Figure 12.
4.1. Evaluation of JND Models
Figure 6 shows the distortion-tolerant model, which can be used to evaluate JND models. It takes the JND values as noise, adds them to the original image, and computes the peak signal-to-noise ratio (PSNR), defined as

$$\mathrm{PSNR} = 10 \log_{10}\frac{255^{2}}{\mathrm{MSE}},$$

where MSE is the mean squared error, defined as

$$\mathrm{MSE} = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left[p(x, y) - \hat{p}(x, y)\right]^{2},$$

where the image size is $W \times H$, $p(x, y)$ is the original luminance, and $\hat{p}(x, y)$ is the noisy luminance. As shown in Table 1, the proposed DWT-based JND model is somewhat different from the benchmark full-band JND model in terms of the PSNR values.
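The two metrics above translate directly into code (the function names are ours):

```python
import numpy as np

def mse(orig, noisy):
    """Mean squared error between two equally sized grayscale images."""
    o = np.asarray(orig, dtype=np.float64)
    n = np.asarray(noisy, dtype=np.float64)
    return float(np.mean((o - n) ** 2))

def psnr(orig, noisy):
    """Peak signal-to-noise ratio for 8-bit images (peak value 255)."""
    m = mse(orig, noisy)
    return float('inf') if m == 0.0 else 10.0 * np.log10(255.0 ** 2 / m)
```

In the distortion-tolerant test of Figure 6, `noisy` would be the original image plus its JND profile, so a higher PSNR means the model injects less (tolerated) distortion.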

4.2. Modified DWT-Based JND Model
Based on (2.1)–(2.3) and (3.6)–(3.8), one can examine the influences of the dominant mask, the texture mask, and the spatial mask for the full-band JND model and the DWT-based JND model, respectively. Their respective MSE values are shown in Table 2. As one can see, the texture mask of the DWT-based JND model is less significant than that of the full-band JND model. Thus, we propose an adjustable parameter to modify (3.17) and (3.18) by scaling the DWT-based texture mask. Figures 7, 8, 9, 10, and 11 show the MSE values obtained by modifying the DWT-based texture mask with various values of the adjustable parameter in (4.3). In this paper, the adjustable parameter is set to 4 after extensive simulations. With this setting, the performance of the modified DWT-based JND model is comparable to that of the full-band JND model in terms of the PSNR and MSE values, as shown in Tables 1 and 2, respectively.
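The effect of such a parameter sweep can be illustrated with stand-in masks (the masks below are illustrative only, not (3.17) and (3.18)): scaling the texture mask up admits more noise, raising the MSE of the injected JND noise, which is the trend examined in Figures 7–11.

```python
import numpy as np

# Stand-in masks for illustration only (not the paper's (3.17)-(3.18)):
# a gradient-based texture mask and a constant spatial mask.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))  # horizontal gradient
texture = 0.1 * gx
spatial = np.full_like(img, 3.0)

# Sweep the adjustable parameter; the paper settles on the value 4.
mses = []
for a_param in (1, 2, 4, 8):
    jnd = np.maximum(a_param * texture, spatial)  # scaled texture mask
    mses.append(float(np.mean(jnd ** 2)))         # MSE of the injected noise
print([round(m, 2) for m in mses])
```

Because max(a * texture, spatial) is nondecreasing in the scale factor, the injected-noise MSE grows monotonically with the parameter, so the choice of 4 trades added masking strength against visible distortion.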

Figure 12 shows the noisy images obtained by adding the JND values to the original images (Figure 12(a)) using the full-band JND model (Figure 12(b)), the DWT-based JND model (Figure 12(c)), and the modified DWT-based JND model (Figure 12(d)). It is noted that the images in the second and fourth rows are almost indistinguishable from the original images. As a result, the modified DWT-based JND model is visually comparable to the full-band JND model.
4.3. Computational Complexity of the Proposed JND Models
In the full-band JND model, the computation of $bg(x, y)$ requires 9 multiplications per pixel, the computation of $mg(x, y)$ requires 28 multiplications per pixel, and the computation required for (2.2)–(2.5) is 6 multiplications per pixel. Thus, for an $N \times N$ image, it requires $43N^{2}$ multiplications. In the proposed DWT-based JND model, the computations of the approximation subband, the three detail subbands, and the DWT-based gradient measure require $N^{2}/4$ multiplications each for an $N \times N$ image, and the computation required for (3.7)–(3.10) is also 6 multiplications per pixel. Thus, for an $N \times N$ image, it requires $29N^{2}/4$ multiplications. In the modified DWT-based JND model, as the computations of the modified texture and spatial masks require no multiplication, even fewer multiplications are needed for an $N \times N$ image. Figure 13 shows the log plot of the numbers of multiplications required for the three JND models versus different image sizes. As a result, the DWT-based JND model is about 6 times faster than the full-band JND model, which is its main advantage.
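The roughly six-fold speedup can be checked arithmetically. The full-band count of 9 + 28 + 6 = 43 multiplications per pixel is stated in the text; the cost of N^2/4 multiplications per DWT-side quantity is an assumption made here, chosen because it reproduces the reported ratio.

```python
# Multiplication counts for an N x N image. The full-band figures
# (9 + 28 + 6 per pixel) come from the text; the N^2/4 cost per
# Haar-derived quantity is an assumption that reproduces the
# reported ~6x speedup.
def fullband_muls(n):
    return (9 + 28 + 6) * n * n            # bg, mg, and (2.2)-(2.5)

def dwt_muls(n):
    return 5 * (n * n // 4) + 6 * n * n    # five quarter-size quantities + 6 per pixel

for n in (256, 512, 1024):
    print(n, fullband_muls(n), dwt_muls(n),
          round(fullband_muls(n) / dwt_muls(n), 2))
```

Under these assumptions the ratio is 43 / (5/4 + 6), or about 5.93, independent of the image size, which is consistent with the "about 6 times faster" claim.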
5. Conclusion
In this paper, an efficient DWT-based JND model is presented. It saves considerable computation time while its performance remains comparable to that of the benchmark full-band JND model. More specifically, the computational complexity of the proposed DWT-based JND model is only about one-sixth of that of the full-band JND model. As a result, it is suitable for real-time applications.
Acknowledgment
This work was supported by the National Science Council of Taiwan under Grants NSC 98-2221-E-216-037 and NSC 99-2221-E-239-034.
References
 I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, “Secure spread spectrum watermarking for multimedia,” IEEE Transactions on Image Processing, vol. 6, no. 12, pp. 1673–1687, 1997.
 M. D. Swanson, M. Kobayashi, and A. H. Tewfik, “Multimedia data-embedding and watermarking technologies,” Proceedings of the IEEE, vol. 86, no. 6, pp. 1064–1087, 1998.
 M. Barni, F. Bartolini, and A. Piva, “Improved wavelet-based watermarking through pixel-wise masking,” IEEE Transactions on Image Processing, vol. 10, no. 5, pp. 783–791, 2001.
 M. Kutter and S. Winkler, “A vision-based masking model for spread-spectrum image watermarking,” IEEE Transactions on Image Processing, vol. 11, no. 1, pp. 16–25, 2002.
 C. De Vleeschouwer, J. F. Delaigle, and B. Macq, “Invisibility and application functionalities in perceptual watermarking: an overview,” Proceedings of the IEEE, vol. 90, no. 1, pp. 64–77, 2002.
 Q. Li, C. Yuan, and Y. Z. Zhong, “Adaptive DWT-SVD domain image watermarking using human visual model,” in Proceedings of the 9th International Conference on Advanced Communication Technology (ICACT '07), pp. 1947–1951, February 2007.
 F. Autrusseau and P. L. Callet, “A robust image watermarking technique based on quantization noise visibility thresholds,” Signal Processing, vol. 87, no. 6, pp. 1363–1383, 2007.
 H. S. Moon, T. You, M. H. Sohn, H. S. Kim, and D. S. Jang, “Expert system for low frequency adaptive image watermarking: using psychological experiments on human image perception,” Expert Systems with Applications, vol. 32, no. 2, pp. 674–686, 2007.
 H. Qi, D. Zheng, and J. Zhao, “Human visual system based adaptive digital image watermarking,” Signal Processing, vol. 88, no. 1, pp. 174–188, 2008.
 A. Koz and A. A. Alatan, “Oblivious spatio-temporal watermarking of digital video by exploiting the human visual system,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 3, pp. 326–337, 2008.
 S. Y. Chen, Y. F. Li, and J. Zhang, “Vision processing for real-time 3-D data acquisition based on coded structured light,” IEEE Transactions on Image Processing, vol. 17, no. 2, pp. 167–176, 2008.
 S. Y. Chen, H. Tong, Z. Wang, S. Liu, M. Li, and B. Zhang, “Improved generalized belief propagation for vision processing,” Mathematical Problems in Engineering, vol. 2011, Article ID 416963, 12 pages, 2011.
 S. Y. Chen and Q. Guan, “Parametric shape representation by a deformable NURBS model for cardiac functional measurements,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 3, pp. 480–487, 2011.
 N. Jayant, “Signal compression: technology targets and research directions,” IEEE Journal on Selected Areas in Communications, vol. 10, pp. 314–323, 1992.
 N. Jayant, J. Johnston, and R. Safranek, “Signal compression based on models of human perception,” Proceedings of the IEEE, vol. 81, no. 10, pp. 1385–1422, 1993.
 R. F. Boyer and R. S. Spencer, “Thermal expansion and second-order transition effects in high polymers: part II. Theory,” Journal of Applied Physics, vol. 16, no. 10, pp. 594–607, 1945.
 A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1989.
 X. Yang, W. Lin, Z. Lu, E. Ong, and S. Yao, “Motion-compensated residue preprocessing in video coding based on just-noticeable-distortion profile,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 15, no. 6, pp. 742–751, 2005.
 J. Pandel, “Variable bit-rate image sequence coding with adaptive quantization,” Signal Processing, vol. 3, no. 2-3, pp. 123–128, 1991.
 B. Girod, “Psychovisual aspects of image communication,” Signal Processing, vol. 28, no. 3, pp. 239–251, 1992.
 C. H. Chou and Y. C. Li, “Perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, no. 6, pp. 467–476, 1995.
 B. S. Kim, I. J. Shim, M. T. Lim, and Y. J. Kim, “Combined preorder and postorder traversal algorithm for the analysis of singular systems by Haar wavelets,” Mathematical Problems in Engineering, vol. 2008, Article ID 323080, 16 pages, 2008.
 G. Mattioli, M. Scalia, and C. Cattani, “Analysis of large-amplitude pulses in short time intervals: application to neuron interactions,” Mathematical Problems in Engineering, vol. 2010, Article ID 895785, 15 pages, 2010.
 C. Cattani, “Harmonic wavelet approximation of random, fractal and high frequency signals,” Telecommunication Systems, vol. 43, no. 3-4, pp. 207–217, 2010.
 C. Cattani, “Shannon wavelets theory,” Mathematical Problems in Engineering, vol. 2008, Article ID 164808, 24 pages, 2008.
 A. Kudreyko and C. Cattani, “Application of periodized harmonic wavelets towards solution of eigenvalue problems for integral equations,” Mathematical Problems in Engineering, vol. 2010, Article ID 570136, 8 pages, 2010.
 M. Li and W. Zhao, “Representation of a stochastic traffic bound,” IEEE Transactions on Parallel and Distributed Systems, vol. 21, no. 9, Article ID 5342414, pp. 1368–1372, 2010.
 M. Li, “Generation of teletraffic of generalized Cauchy type,” Physica Scripta, vol. 81, no. 2, Article ID 025007, 2010.
 M. Li, “Fractal time series: a tutorial review,” Mathematical Problems in Engineering, vol. 2010, Article ID 157264, 26 pages, 2010.
 E. G. Bakhoum and C. Toma, “Specific mathematical aspects of dynamics generated by coherence functions,” Mathematical Problems in Engineering, vol. 2011, Article ID 436198, 10 pages, 2011.
 E. G. Bakhoum and C. Toma, “Dynamical aspects of macroscopic and quantum transitions due to coherence function and time series events,” Mathematical Problems in Engineering, vol. 2010, Article ID 428903, 13 pages, 2010.
 E. G. Bakhoum and C. Toma, “Mathematical transform of traveling-wave equations and phase aspects of quantum interaction,” Mathematical Problems in Engineering, vol. 2010, Article ID 695208, 15 pages, 2010.
 T. Y. Sung and H. C. Hsin, “A hybrid image coder based on SPIHT algorithm with embedded block coding,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E90-A, no. 12, pp. 2979–2984, 2007.
 H. C. Hsin and T. Y. Sung, “Adaptive selection and rearrangement of wavelet packets for quadtree image coding,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E91-A, no. 9, pp. 2655–2662, 2008.
Copyright
Copyright © 2012 Lu-Ting Ko et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.