Abstract

Watermark transparency is required mainly for copyright protection. Based on the characteristics of human visual system, the just noticeable distortion (JND) can be used to verify the transparency requirement. More specifically, any watermarks whose intensities are less than the JND values of an image can be added without degrading the visual quality. It takes extensive experimentations for an appropriate JND model. Motivated by the texture masking effect and the spatial masking effect, which are key factors of JND, Chou and Li (1995) proposed the well-known full-band JND model for the transparent watermark applications. In this paper, we propose a novel JND model based on discrete wavelet transform. Experimental results show that the performance of the proposed JND model is comparable to that of the full-band JND model. However, it has the advantage of saving a lot of computation time; the speed is about 6 times faster than that of the full-band JND model.

1. Introduction

Watermarking is a process that hides information into a host image for the purpose of copyright protection, integrity checking, or captioning [13]. In order to achieve the transparency of watermark, many commonly used techniques are based on the characteristics of human visual system (HVS) [113]. Jayant et al. [14, 15] introduced a key concept known as the just noticeable distortion (JND), against which insignificant errors are not perceptible by human eyes. The JND of an image is in general dependent on background luminance, contrast of luminance, and dominant spatial frequency. It takes extensive experimentations to obtain an appropriate JND model.

Perceptual redundancies refer to the details of an image that are not perceivable by human eyes and therefore can be discarded without affecting the visual quality. As noted, human visual perception is sensitive to the contrast of luminance rather than their individual values [1618]. In addition, the visibility of stimuli can be reduced by nonuniformly quantizing the background luminance [1820]. The above known as the texture masking effect and the spatial masking effect are key factors that affect the JND of an image. Chou and Li proposed an effective model called the full-band JND model for the transparent watermark applications [21].

Wavelet transform provides an efficient multiresolution representation with various desirable properties such as subband decompositions with orientation selectivity and joint space-spatial frequency localization. In wavelet domain, the higher detailed information of a signal is projected onto the shorter basis function with higher spatial resolution; the lower detailed information is projected onto the larger basis function with higher spectral resolution. This matches the characteristics of HVS. Many wavelet-transform-based algorithms were proposed for various applications [2234].

In this paper, we propose a wavelet-transform-based JND model for the watermark applications. It has the advantage of saving a lot of computation time. The remainder of the paper proceeds as follows. In Section 2, the full-band JND model is reviewed briefly. In Section 3, the discrete-wavelet-transform- (DWT-) based JND model is proposed. The modified DWT-based JND model and its evaluation are presented in Section 4. Conclusion can be found in Section 5.

2. Review of the Full-Band Just Noticeable Distortion (JND) Model

The full-band JND model [21] makes use of the properties of the HVS to measure the perceptual redundancies of an image. It produces the JND file for image pixels as follows: where where and are the texture mask and the spatial mask, respectively, as mentioned in Section 1, is the average background luminance obtained by using a low-pass filter, , is given in Figure 1, is the maximum gradient obtained by using a set of high-pass filters, , are given in Figure 2, functions and are dependent on the average background luminance, and is the luminance value at pixel position .

3. Discrete-Wavelet-Transform-Based JND Model

In this section, we propose a novel JND model based on discrete wavelet transform. It has the advantage of reducing computational complexity significantly.

3.1. Discrete Wavelet Transform

Discrete wavelet transform (DWT) provides an efficient multiresolution analysis for signals, Specifically, any finite energy signal can be written by where denotes the resolution index with larger values meaning coarser resolutions, is the translation index, is a mother wavelet, is the corresponding scaling function, , , is the scaling coefficient representing the approximation information of at the coarsest resolution , and is the wavelet coefficient representing the detail information of at resolution . Coefficients and can be obtained from the scaling coefficient at the next finer resolution by using 1-level DWT, which is given by where , , and denotes the inner product. It is noted that and are the corresponding low-pass filter and high-pass filter, respectively. Moreover, can be reconstructed from and by using the inverse DWT, which is given by where and .

For image applications, 2D DWT can be obtained by using the tensor product of 1D DWT. Among wavelets, Haar wavelet [22] is the simplest one, which has been widely used for many applications. The low-pass filter and high-pass filter of Haar wavelet are as follows: Figures 3 and 4 show the row decomposition and the column decomposition using Haar wavelet, respectively. Notice that the column decomposition may follow the row decomposition, or vice versa, in 2D DWT: where , , , and are pixel values, and , , , and denote the approximation, detail information in the horizontal, vertical, and diagonal orientations, respectively, of the input image. Figure 5 shows 1-level, 2D DWT using Haar wavelet.

The subband of an image can be further decomposed into four subbands: , , , and at the next coarser resolution, which together with , , and forms the 2-level DWT of the input image. Thus, higher level DWT can be obtained by decomposing the approximation subband in the recursive manner.

3.2. Proposed DWT-Based JND Model

As mentioned in Section 2, the full-band JND model [21] consists of (2.1)–(2.8), which is essentially computation consuming. In order to simplify computational complexity, a novel JND model based on Haar wavelet is proposed as follows. where and are the proposed texture mask and spatial mask, respectively, based on DWT, which are given by where is the modified , which replaces the low-pass filter of the full-band JND model. replaces the maximum gradient, , which is given by where, , , and replace , , , and , respectively.

4. Modification of the DWT-Based JND Model

In this section, we introduce an adjustable parameter to modify the DWT-based JND model such that the computation time can be reduced significantly while the performance is comparable to that of the benchmark full-band JND model. The test images, namely, Lena, Cameraman, Baboon, Board, and Peppers are shown in the first row of Figure 12.

4.1. Evaluation of JND Models

Figure 6 shows the distortion-tolerant model, which can be used to evaluate JND models. It takes the JND value as noise, adds to the original image, and computes the peak-signal-to-noise ratio (PSNR) defined as where is the mean squared error, which is defined as where image size is . As shown in Table 1, the proposed DWT-based JND model is somewhat different from the benchmark full-band JND model in terms of the PSNR values.

4.2. Modified DWT-Based JND Model

Based on (2.1)–(2.3) and (3.6)–(3.8), one can examine the influences of the dominant mask, texture mask, and spatial mask for the full-band JND model and the DWT-based JND model, respectively. Their respective MSE values are shown in Table 2. As one can see, the proposed texture mask for the DWT-based JND model is less significant than that of the full-band JND model. Thus, we propose an adjustable parameter, , to modify (3.17) and (3.18) as follows: where and replace and , respectively. Figures 7, 8, 9, 10, and 11 show the MSE values obtained by modifying the DWT-based texture mask with various in (4.3). In this paper, the adjustable parameter is set to 4 after extensive simulations. The performance of the modified DWT-based JND model with is comparable to that of the full-band JND model in terms of the PSNR and MSE values as shown in Tables 1 and 2, respectively.

Figure 12 shows the noisy images obtained by adding the JND values to the original images (Figure 12(a)) using the full-band JND model (Figure 12(b)), the DWT-based JND model (Figure 12(c)), and the modified DWT-based JND model (Figure 12(d)). It is noted that the images in the second and fourth rows are almost indistinguishable from the original images. As a result, the modified DWT-based JND model is visually comparable to the full-band JND model.

4.3. Computation Complexity of the Proposed JND Models

In the full-band JND model, the computation of requires 9 multiplications per pixel, the computation of requires 28 multiplications per pixel, and the computation required for (2.2)–(2.5) is 6 multiplications per pixel. Thus, for an image, it requires multiplications. In the proposed DWT-based JND model, the computations of , , , , and require , , (, , and multiplications, respectively, for an image, and the computation required for (3.7)–(3.10) is also 6 multiplications per pixel. Thus, for an image, it requires multiplications. In the modified DWT-based JND model, as the computations of and require no multiplication, it only needs multiplications for an image. Figure 13 shows the log plot of numbers of multiplications required for the three JND models versus different image sizes. As a result, the speed of the DWT-based JND model is about 6 times faster than that of the full-band JND model, which is the main advantage.

5. Conclusion

In this paper, an efficient DWT-based JND model is presented. It has the advantage of saving a lot of computation time while the performance is comparable to the benchmark full-band JND model. More specifically, the computation complexity of the proposed DWT-based JND model is only one sixth of that of the full-band JND model. As a result, it is suitable for the real-time applications.

Acknowledgment

The National Science Council of Taiwan, under Grants NSC98-2221-E-216-037 and NSC99-2221-E-239-034, supported this work.