Abstract

We present a RAO hypothesis test detector for additive spread spectrum image watermarking by modeling the Nonsubsampled Contourlet Transform (NSCT) subband coefficients with the Cauchy distribution. First, the NSCT subband coefficients are modeled by the Cauchy distribution, and a goodness-of-fit analysis shows that the Cauchy distribution fits the NSCT subband coefficients more accurately than the commonly used Generalized Gaussian Distribution (GGD). Second, a blind RAO test watermark detector is derived in the NSCT domain, which does not require knowledge of the embedding strength at the receiving end. Finally, the robustness of the proposed watermarking scheme is evaluated against three state-of-the-art detectors when the watermarked images are attacked by JPEG compression, random noise, low pass filtering, and median filtering. Experimental results show that, compared with the other three detectors, the proposed RAO detector achieves a lower probability of miss for a given probability of false alarm.

1. Introduction

In the past decades, watermarking has become one of the most active research fields and has received a great deal of attention not only in the academic community but also in industry, because it provides an effective solution for the intellectual property protection of digital media.

Watermarking is accomplished by embedding digital information (the watermark, which can be text, an image, digital digits, etc.) in original multimedia content (the host) such as images, video, and audio, mostly without greatly affecting the visual or audio quality of the original data. Furthermore, during the distribution of the watermarked media, it may be subjected to intentional and unintentional signal processing (attacks).

Usually, embedding processes can be categorized into spatial domain and transform domain methods, and each can be implemented in an additive, multiplicative, or quantization-based manner, and so forth [1–4].

Additive watermarking in the transform domain is the simplest and most commonly used approach, since the transform domain makes it possible to enhance the robustness against attacks and to control the perceptual quality.

Many approaches for additive watermarking implemented in the Discrete Cosine Transform (DCT) domain, which concentrates most of the energy of the host signal in a few coefficients, have been proposed in the literature so far [5–8]. Although the DCT is a suboptimal orthogonal unitary transform, it cannot provide any directional information such as the contours of images, which is a drawback for the selection of embedding position and strength. The Discrete Wavelet Transform (DWT) is also limited in representing the intrinsic structure of images, since it captures only three kinds of directional information (horizontal, vertical, and diagonal). To overcome this limited directionality, many directional image representations have been proposed in recent years, including the dual-tree complex wavelets, bandlets, contourlets, and the Nonsubsampled Contourlet Transform (NSCT) [9–12].

Recently, a number of watermarking schemes have been proposed wherein the watermark is embedded in the NSCT coefficients of the image. Compared with the aforementioned transforms, the NSCT is a better choice for watermarking, because it possesses not only multidirectionality, locality, and multiscale properties, but also redundancy and shift-invariance.

At the receiving end, the watermark can be detected or extracted from the watermarked (possibly attacked) image to declare legal ownership. When the host data are not available at the receiver (e.g., in data monitoring or tracking applications), it is often unnecessary to identify the specific hidden information; it suffices to decide whether the watermark is present. Detecting the watermark without knowledge of the embedding strength or prior knowledge of the host, known as blind detection, therefore becomes a very practical task. Blind watermark detection is a crucial part of a watermarking system, and various optimal transform-based methods have been proposed in the literature [13–19].

From the perspective of a communication system, the transform coefficients are considered as noise and the hidden information is viewed as the signal transmitted through the channel. As a result, detecting the hidden information (the watermark) becomes the problem of detecting a weak signal buried in a large amount of noise, which is the focus of our research.

When detecting a weak signal, an accurate model of the noise strongly influences both the design of the corresponding detector and its performance.

If the transform coefficients of the host media follow a Gaussian law, the linear correlation detector has been proved to be optimal. However, the transform coefficients of natural images are generally not Gaussian distributed.

In previous years, various models have been developed to account for the non-Gaussian behavior of image statistics. Generally speaking, image statistics in the transform domain are modeled by more heavy-tailed distributions, leading to optimal or nearly optimal detectors that exploit these characteristics. In the additive watermarking problem, the Generalized Gaussian Distribution (GGD) [20, 21] has been widely adopted by many researchers over the years. Recently, the Cauchy distribution [22, 23], as a member of the family of Symmetric alpha-Stable (SαS) distributions, has become an alternative solution applicable to the same problem.

On the other hand, although many detectors in various transform domains have been proposed, similar modeling and detection work in the NSCT domain has not been published, even though the NSCT is much better suited for watermarking.

In this work, within the framework of weak signal detection and additive spread spectrum embedding in the NSCT domain, we first model the statistical properties of the NSCT coefficients using different distributions. We find that the Cauchy distribution, with its sharp peak and heavy tails, is the better choice for NSCT coefficients. Furthermore, we derive the RAO hypothesis test detector, which, compared to the Likelihood Ratio Test (LRT) detector, gives better performance without any prior knowledge of the embedding strength. Finally, the robustness of the proposed watermarking scheme against attacks, including JPEG compression, random noise, low pass filtering, and median filtering, is verified.

The rest of the paper is organized as follows. In Section 2, we give an overview of the watermarking model from the viewpoint of a communication system and derive the asymptotically optimum detection. In Section 3, a brief introduction to the NSCT is given and the statistics of the NSCT coefficients are modeled using different probability density functions. In Section 4, we derive the RAO statistic for the Cauchy distribution and propose the detection statistic for additive watermarking. Section 5 is devoted to the evaluation of the performance of the proposed approach in terms of invisibility and robustness for various settings. Finally, the main contributions of this work are summarized in Section 6, where some suggestions for future research are also outlined.

2. Modeling the Watermarking System

2.1. Modeling Watermarking as a Communication System

Some researchers have modeled watermarking as a communication system. The watermark embedding process is considered as the signal transmitter. The watermark is the signal to be transmitted, the host acts as the noise, any operations on the watermarked data (e.g., compression, noising, and filtering) are modeled as the communication channel, and the detection of the embedded watermark corresponds to the detection of a signal in the presence of noise at the receiver in the communication scenario.

Figure 1 shows a schematic overview of the watermarking system configuration. Generally speaking, we can evaluate the system performance from the angle of the transmitter, the receiver, and the whole system. However, our emphasis is on the detection problem, and in particular on blind detection of the transmitted watermark, which means we need to determine whether the watermark is present or not without reference to the original host information.

In our configuration, the host data act as interference to the watermark signal. Hence, for signal detection, modeling the host signal is crucial. Transform domains (such as the DWT domain, the Contourlet Transform domain, and the NSCT domain) facilitate the selection of significant signal components and locations for embedding the watermark. We follow the additive embedding strategy throughout this paper, which has been studied in the literature over the past few years, and a variety of detectors based on particular statistical models of the host transform coefficients have been proposed [24, 25].

2.2. Modeling Watermarking Detection as a Signal Detection Problem

To deduce the different signal detection strategies for the additive spread spectrum watermarking scheme, we start with the classic Neyman-Pearson detection approach, which assumes that all parameters are known at the detecting end. However, this is usually not the case due to unknown noise parameters or unknown embedding strength. Consequently, we gradually relax these assumptions according to the characteristics of the host signal. This leads to an asymptotically equivalent formulation of the Generalized Likelihood Ratio Test (GLRT) known as the RAO hypothesis test. It is worth noting that, even when the embedding strength is known at the detecting end, the host parameters are sometimes still unknown and have to be estimated from the received watermarked images. Therefore, we exploit estimate-and-plug methods.

Suppose that the transform coefficients of the host are referred to as $x = (x_1, \ldots, x_N)$, which follow a certain statistical distribution. For additive spread spectrum watermarking, we assume that the watermark, denoted by $w = (w_1, \ldots, w_N)$, takes the bipolar values $\pm 1$ with equal probability, so the corresponding probability function of $w_i$ can be written as
$$P(w_i = +1) = P(w_i = -1) = \tfrac{1}{2}. \tag{1}$$
The watermark signal is generated by a Pseudo-Random Number Generator (PRNG) with a private seed as the secret key $K$. The rule for additive embedding can be formulated as
$$y_i = x_i + \alpha w_i, \quad i = 1, \ldots, N, \tag{2}$$
where $\alpha$ denotes the embedding strength and $y_i$ denotes the watermarked transform coefficients.
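To make the embedding rule concrete, the following minimal Python sketch generates a keyed bipolar watermark and applies (2) to a vector of host coefficients; the function and variable names are illustrative and not part of the original scheme.

```python
import numpy as np

def embed_additive(x, key, alpha):
    """Additive spread spectrum embedding y_i = x_i + alpha * w_i, cf. (2).

    x     : 1-D array of host transform coefficients
    key   : secret seed for the pseudo-random number generator
    alpha : embedding strength
    Returns the watermarked coefficients y and the bipolar watermark w.
    """
    rng = np.random.default_rng(key)               # PRNG seeded with the secret key
    w = rng.integers(0, 2, size=x.shape) * 2 - 1   # equiprobable bipolar values {-1, +1}
    y = x + alpha * w                              # additive embedding rule
    return y, w

# Example: embed into a few dummy coefficients
x = np.array([1.2, -0.3, 0.8, 2.1, -1.7, 0.05, 0.6, -0.9])
y, w = embed_additive(x, key=1234, alpha=0.1)
```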

In terms of hypothesis testing, we can state the null hypothesis ($H_0$) and the alternative hypothesis ($H_1$) as
$$H_0: y_i = x_i, \qquad H_1: y_i = x_i + \alpha w_i. \tag{3}$$
This is equivalent to the (two-sided) parameter test
$$H_0: \alpha = 0, \qquad H_1: \alpha \neq 0. \tag{4}$$
In the rare case that the p.d.f.s (probability density functions) under both hypotheses can be completely specified, we can easily deduce a Neyman-Pearson (NP) detector, which is optimal in the sense that it maximizes the probability of detection under a given probability of false alarm. Suppose that $p(\mathbf{y}; \theta_0, H_0)$ and $p(\mathbf{y}; \theta_1, H_1)$ denote the p.d.f.s under $H_0$ and $H_1$, respectively; then, according to the Neyman-Pearson criterion, we get the Likelihood Ratio Test (LRT) with threshold $T$, which implies that the optimal detector favors $H_1$ if
$$L(\mathbf{y}) = \frac{p(\mathbf{y}; \theta_1, H_1)}{p(\mathbf{y}; \theta_0, H_0)} > T. \tag{5}$$
The terms $\theta_0$ and $\theta_1$ denote the fully specified parameter vector(s) of the noise model under $H_0$ and $H_1$, respectively.

In case we can deduce the distribution of the detection statistic under $H_0$, it is straightforward to determine a suitable threshold for a fixed probability of false alarm $P_{FA}$ as
$$T = F^{-1}_{H_0}(1 - P_{FA}), \tag{6}$$
where $F_{H_0}$ denotes the distribution function of the detection statistic under $H_0$.

In order to constrain the probability of false alarm, the Neyman-Pearson test requires that the distribution of the detection statistic under $H_0$ does not depend on any unknown parameters. However, in practice, the embedding strength as well as the distribution parameters of the assumed host data model might be unknown to the detector. Therefore, it is more realistic that we have to estimate the unknown parameters from the received watermarked media. One special case is the one-dimensional Gaussian distribution, for which we can design a Neyman-Pearson test as if all parameters were known and obtain the LRT detection statistic commonly referred to as the linear correlation (LC) detector. It is worth noting that, in the general case, it is not feasible to deduce the unknown parameters in this way.

When we cannot completely specify the host data distribution under both hypotheses, we have to resort to composite hypothesis testing approaches.

The RAO hypothesis test is a commonly used approach to the composite hypothesis testing problem because its performance is asymptotically equivalent to that of the GLRT.

The RAO test favors $H_1$ in case
$$\rho(\mathbf{y}) = \left[\left.\frac{\partial \ln p(\mathbf{y};\theta)}{\partial \theta_r}\right|_{\theta=\tilde{\theta}}\right]^2 \left[\mathbf{I}^{-1}(\tilde{\theta})\right]_{\theta_r\theta_r} > T, \tag{7}$$
where $\tilde{\theta} = [0, \hat{\theta}_s^T]^T$ denotes the ML estimates under $H_0$: for example, $\theta = [\theta_r, \theta_s^T]^T$ for a general parameter vector, with the parameter under test $\theta_r$ (here the embedding strength $\alpha$) and the nuisance parameters $\theta_s$. Further, the term $[\mathbf{I}^{-1}(\tilde{\theta})]_{\theta_r\theta_r}$ is given by
$$\left[\mathbf{I}^{-1}(\tilde{\theta})\right]_{\theta_r\theta_r} = \left(\mathbf{I}_{\theta_r\theta_r}(\tilde{\theta}) - \mathbf{I}_{\theta_r\theta_s}(\tilde{\theta})\,\mathbf{I}_{\theta_s\theta_s}^{-1}(\tilde{\theta})\,\mathbf{I}_{\theta_s\theta_r}(\tilde{\theta})\right)^{-1}, \tag{8}$$
where $\mathbf{I}_{\theta_r\theta_r}$, $\mathbf{I}_{\theta_r\theta_s}$, $\mathbf{I}_{\theta_s\theta_r}$, and $\mathbf{I}_{\theta_s\theta_s}$ denote the partitions of the Fisher information matrix, which are defined by
$$\mathbf{I}(\theta) = \begin{bmatrix} \mathbf{I}_{\theta_r\theta_r}(\theta) & \mathbf{I}_{\theta_r\theta_s}(\theta) \\ \mathbf{I}_{\theta_s\theta_r}(\theta) & \mathbf{I}_{\theta_s\theta_s}(\theta) \end{bmatrix}, \qquad \left[\mathbf{I}(\theta)\right]_{ab} = -E\!\left[\frac{\partial^2 \ln p(\mathbf{y};\theta)}{\partial \theta_a\,\partial \theta_b}\right]. \tag{9}$$
It is well known that the detection statistic asymptotically follows
$$\rho(\mathbf{y}) \sim \begin{cases} \chi_1^2 & \text{under } H_0,\\ \chi_1'^2(\lambda) & \text{under } H_1, \end{cases} \tag{10}$$
where $\chi_1^2$ denotes a Chi-Square distribution with one degree of freedom and $\chi_1'^2(\lambda)$ denotes a noncentral Chi-Square distribution with one degree of freedom and noncentrality parameter $\lambda$:
$$\lambda = \alpha^2 \left(\mathbf{I}_{\theta_r\theta_r}(\tilde{\theta}) - \mathbf{I}_{\theta_r\theta_s}(\tilde{\theta})\,\mathbf{I}_{\theta_s\theta_s}^{-1}(\tilde{\theta})\,\mathbf{I}_{\theta_s\theta_r}(\tilde{\theta})\right). \tag{11}$$
We see that the RAO test leads to a Constant False-Alarm Rate (CFAR) detector, since the distribution of the detection statistic under $H_0$ does not depend on any parameters, which avoids estimating the embedding strength $\alpha$. Hence, no matter what host data model we construct, the threshold needs to be calculated only once, reducing the computational effort.
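As a concrete illustration of the CFAR property just described, the short Python sketch below computes the fixed threshold for a target false-alarm probability from the asymptotic $\chi_1^2$ law in (10) and the resulting theoretical probability of miss from the noncentral $\chi_1'^2(\lambda)$ law; the use of SciPy here is our own choice and not part of the paper.

```python
from scipy.stats import chi2, ncx2

def rao_threshold(p_fa):
    """Threshold T such that P(rho > T | H0) = p_fa, with rho ~ chi-square(1) under H0."""
    return chi2.ppf(1.0 - p_fa, df=1)

def miss_probability(p_fa, lam):
    """Theoretical P_M = P(rho <= T | H1), with rho ~ noncentral chi-square(1, lam) under H1."""
    T = rao_threshold(p_fa)
    return ncx2.cdf(T, df=1, nc=lam)

# Example: P_FA = 1e-3 and noncentrality parameter lambda = 20
print(rao_threshold(1e-3))           # threshold, computed once for any host data model
print(miss_probability(1e-3, 20.0))  # corresponding probability of miss
```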

3. Modeling the NSCT Subband Coefficients as Statistical Distribution

3.1. NSCT

The Nonsubsampled Contourlet Transform (NSCT) is a flexible multiscale, multidirectional, and shift-invariant image representation derived from the Contourlet Transform.

The Contourlet Transform provides an efficient representation for 2D signals with smooth contours and outperforms the wavelet transform in this case. Compared to other structures that also provide multiscale and directional image representations, such as the dual-tree complex wavelet and the curvelet, the Contourlet Transform shows its advantages. Its coefficients are locally correlated due to the smoothness of the contours; the transform allows a different and flexible number of directions at each scale and can obtain a sparse expansion for natural images.

Contourlet Transform is composed of Laplacian Pyramid (LP) decomposition and directional filter bank (DFB). In the structure, LP is used first for the multiresolution decomposition of the image to capture the point discontinuities, followed by a DFB to gather the nearby basis functions at the same scale and link point discontinuities into linear structures. Finally, the image is represented as a set of directional subbands at multiple scales.

However, the pyramid directional filter bank structure of the Contourlet Transform has very little redundancy, which may become a drawback in watermarking. Moreover, downsamplers and upsamplers are present in both LP and DFB in Contourlet Transform. As a result, the lack of shift-invariance causes pseudo-Gibbs phenomena around singularities.

In order to get rid of the frequency aliasing of Contourlet Transform and enhance its directional selectivity and regularity, da Cunha et al. [12] proposed overcomplete Nonsubsampled Contourlet Transform (NSCT) targeting applications where redundancy is not a major issue or even is preferable.

The NSCT construction is built upon iterated nonseparable two-channel Nonsubsampled Filter Bank (NSFB) and can be divided into two parts: (1) a Nonsubsampled Pyramid (NSP) decomposition and (2) a NonSubsampled DFB (NSDFB).

The schematic overview of the framework of NSCT decomposition structure is illustrated in Figure 2.

The two-channel NSP ensures the multiscale property of NSCT by using the shift-invariant filter structure.

The image is first decomposed by the NSP, producing one low frequency image and one high frequency image at the first decomposition level. For the subsequent pyramid decomposition levels, the filters are upsampled and applied to the low frequency component of the previous stage, which is decomposed again. This process is carried out iteratively at each decomposition stage, capturing the singularities in the image.

The NSP decomposition of $k$ stages consists of one lowpass subband and $k$ bandpass subbands. Specifically, one bandpass image is produced at each stage, resulting in a redundancy of $k + 1$.

The shift-invariant directional property is obtained with a NSDFB. The NSDFB is constructed by switching off the downsamplers/upsamplers in each two-channel filter bank in the DFB tree structure and upsampling the filters accordingly.

The high frequency subbands from the NSP at each scale are decomposed by the NSDFB with $l$ stages, producing $2^{l}$ directional subbands with wedge-shaped frequency partition, as shown in Figure 3 (3 levels with 4, 4, and 8 directions per level, resp.).

Through this phase, the NSCT can extract more precise directional detail information, as shown in Figure 4 (enlarged by 100 times), which benefits image watermarking.

The NSCT is appropriate for the analysis of 2D signals with line, curve, or hyperplane singularities, and it offers high approximation precision and a sparse description.

Each subband produced by the NSCT has the same size as the original image, which makes the transform preferable for watermarking by relatively enlarging the watermark capacity. Moreover, the shift-invariance helps to enhance the robustness against common signal operations.

3.2. Modeling the NSCT Transform Subband Coefficients as Statistical Distribution

We discuss the statistical models of the NSCT coefficients, and, in particular, we take a closer look at the characteristic distributions that arise for natural images.

A common first step in finding a suitable statistical model for a set of transform coefficients is to analyze their empirical distribution. We use the classic histogram, where the amplitude range of the transform subband coefficients is divided into a certain number of bins (with equal bin width) and we count the number of coefficients falling into each bin. Plotting the bin count against each bin interval then conveys an impression of the coefficient distribution. We also check the Goodness of Fit (GoF) of the selected statistical model using this strategy.

Figure 5 illustrates the histogram of the finest subband coefficients of the NSCT. As we can see, the coefficients are distributed with a sharp peak at zero and heavy tails on each side of the peak. We take advantage of the kurtosis to evaluate whether the signal is Gaussian, which is formulated as
$$\kappa = \frac{E\left[(x - \mu)^4\right]}{\left(E\left[(x - \mu)^2\right]\right)^2}, \tag{12}$$
where $E[\cdot]$ denotes the expectation operation and $\mu = E[x]$. For a Gaussian signal, the kurtosis is around three. We calculate the kurtosis for the NSCT subband coefficients, respectively, and the values are much higher than three (as listed in Table 1), indicating that the marginal distribution of the NSCT subband coefficients is not Gaussian. Therefore, it is unsuitable to represent them with a Gaussian distribution, and we exploit the GGD and the Cauchy distribution to account for the observed behavior.
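A short Python check of this Gaussianity test, assuming the subband coefficients are available as a NumPy array (the variable names are illustrative):

```python
import numpy as np

def kurtosis(coeffs):
    """Kurtosis E[(x - mu)^4] / (E[(x - mu)^2])^2 as in (12); about 3 for Gaussian data."""
    c = np.asarray(coeffs, dtype=float).ravel()
    m = c.mean()
    return np.mean((c - m) ** 4) / np.mean((c - m) ** 2) ** 2

# subband = ...  # one NSCT subband of a test image
# print(kurtosis(subband))  # values well above 3 indicate heavy-tailed, non-Gaussian data
```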

3.2.1. Generalized Gaussian Distribution (GGD)

Clarke first modeled the Alternating Current (AC) coefficients of the Discrete Cosine Transform (DCT) using the GGD [26]. Since then, the GGD has become by far one of the most popular statistical models for many kinds of transform coefficients, including DCT AC coefficients, DWT detail subband coefficients, and Contourlet Transform subband coefficients.

We recapitulate the GGD parametrization of Nadarajah [27], where the p.d.f. with shape parameter $s$, scale parameter $\sigma$, and location parameter $\mu$ is given by
$$f(x; \mu, \sigma, s) = \frac{s}{2\sigma\,\Gamma(1/s)} \exp\!\left(-\left(\frac{|x - \mu|}{\sigma}\right)^{s}\right), \tag{13}$$
where $\Gamma(\cdot)$ denotes the Gamma function. The Laplace distribution and the Gaussian distribution arise as special cases of the GGD with $s = 1$ and $s = 2$, respectively.

Taking into account that parameter estimation is based on an i.i.d. sample $x_1, \ldots, x_N$, one commonly used method is Maximum Likelihood (ML) estimation. ML estimation is extensively covered by Varanasi and Aazhang [28], and a Newton-Raphson algorithm to compute a numerical solution of the ML equations is introduced by Do and Vetterli [29]. Starting values for the Newton-Raphson iteration are obtained using moment estimates based on a lookup-table approach.

We make use of Do’s approach to estimate the parameters of GGD.
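As a rough stand-in for the Newton-Raphson ML procedure of Do and Vetterli, the generalized normal distribution available in SciPy can be fitted to a coefficient sample; this is only an illustrative alternative under our own assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gennorm

def fit_ggd(coeffs):
    """ML fit of a generalized Gaussian (generalized normal) to the coefficients.

    Returns (shape, loc, scale); shape = 1 is Laplace, shape = 2 is Gaussian.
    The location is fixed at 0, matching the zero-centered subband histograms.
    """
    c = np.asarray(coeffs, dtype=float).ravel()
    shape, loc, scale = gennorm.fit(c, floc=0.0)  # numerical ML estimation
    return shape, loc, scale

# Example on synthetic Laplacian-like data: the estimated shape should come out near 1
data = np.random.default_rng(0).laplace(scale=2.0, size=10000)
print(fit_ggd(data))
```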

Figure 5 also shows the corresponding p.d.f.s of the GGDs fitted to the same NSCT subband coefficients.

3.2.2. Cauchy Distribution

Briassouli et al. introduced the Cauchy distribution as a possible alternative for modeling the AC coefficients of DCT transformed images in the context of digital image watermarking. In [23], the authors extended this approach to modeling DWT detail subband coefficients for the purpose of image watermarking. The p.d.f. of the Cauchy distribution with location parameter $\delta$ and shape parameter $\gamma > 0$ is given by
$$f(x; \gamma, \delta) = \frac{1}{\pi}\,\frac{\gamma}{\gamma^2 + (x - \delta)^2}. \tag{14}$$
In contrast to the Gaussian distribution, the tails of the Cauchy distribution decay at a rate slower than exponential.

We use estimation based on the ML. With the location parameter fixed at $\delta = 0$, the ML estimate of $\gamma$ is defined as the solution to
$$\sum_{i=1}^{N} \frac{\gamma^2}{\gamma^2 + x_i^2} - \frac{N}{2} = 0. \tag{15}$$
We solve the equation numerically using the Newton-Raphson algorithm. The update steps can easily be derived: first, we define the left-hand side of (15) as $g(\gamma)$ and then deduce
$$g'(\gamma) = \sum_{i=1}^{N} \frac{2\gamma x_i^2}{\left(\gamma^2 + x_i^2\right)^2}. \tag{16}$$
The update step follows as $\gamma_{k+1} = \gamma_k - g(\gamma_k)/g'(\gamma_k)$. Possible starting values can be found in the literature.
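A minimal Python sketch of this Newton-Raphson iteration for the Cauchy scale parameter, assuming zero location and a simple sample-based starting value (our choice):

```python
import numpy as np

def cauchy_scale_ml(x, iters=50, tol=1e-8):
    """Newton-Raphson solution of (15) for the Cauchy scale gamma (location fixed at 0)."""
    x = np.asarray(x, dtype=float).ravel()
    n = x.size
    gamma = np.median(np.abs(x))   # starting value: median absolute coefficient
    for _ in range(iters):
        g = np.sum(gamma**2 / (gamma**2 + x**2)) - n / 2.0        # g(gamma), cf. (15)
        dg = np.sum(2.0 * gamma * x**2 / (gamma**2 + x**2) ** 2)  # g'(gamma), cf. (16)
        step = g / dg
        gamma -= step                                             # Newton-Raphson update
        if abs(step) < tol:
            break
    return gamma

# Example: estimate the scale of synthetic Cauchy data with gamma = 1.5
sample = np.random.default_rng(1).standard_cauchy(100000) * 1.5
print(cauchy_scale_ml(sample))  # should be close to 1.5
```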

The GGD and Cauchy distribution parameters estimated with the ML approach are listed in Table 2 (averaged over 1000 experiments).

It is worth noting that the case $\delta = 0$ corresponds to a Cauchy distribution centered at zero, which is the form exploited in this paper.

To illustrate its characteristics, Figure 5 also shows the fitted Cauchy p.d.f.s for the same NSCT detail subband coefficients.

As illustrated by the visual GoF in Figure 5, we conclude that the Cauchy distribution is a better choice than the GGD for modeling the NSCT subband coefficients.

4. Hypothesis Test Detector for NSCT Subband Coefficients

4.1. RAO Hypothesis Test Detector for NSCT Subband Coefficients following Cauchy Distribution

Nikolaidis and Pitas derived a watermark detector for additive spread spectrum watermarks using the RAO test in the DWT domain, where the DWT coefficients were considered as GGD variates [8]. While a great number of watermark detection approaches in various transform domains have been published in the literature, similar work in the NSCT domain has not been reported, to the best of our knowledge.

One of our main contributions is to derive a suboptimal watermark detector for a host signal following the Cauchy distribution, which requires less computation for the ML estimation of the parameters and does not need the embedding strength at the receiving end.

As we verified before, the Cauchy distribution is a reasonable model for NSCT subband coefficients. Furthermore, its parameter estimation can be performed efficiently.

In the first step, we derive the first term of the detection statistic in (7) by plugging in the p.d.f. of the Cauchy distribution of (14):
$$\frac{\partial \ln p(\mathbf{y}; \alpha, \gamma)}{\partial \alpha} = \sum_{i=1}^{N} \frac{2 w_i (y_i - \alpha w_i)}{\gamma^2 + (y_i - \alpha w_i)^2}. \tag{17}$$
Inserting the ML estimate $\hat{\gamma}$ under $H_0$ and evaluating at $\alpha = 0$, we obtain
$$\left.\frac{\partial \ln p(\mathbf{y}; \alpha, \gamma)}{\partial \alpha}\right|_{\theta = \tilde{\theta}} = \sum_{i=1}^{N} \frac{2 w_i y_i}{\hat{\gamma}^2 + y_i^2}. \tag{18}$$
Next, we derive the second term. Since the cross-terms of the Fisher information matrix vanish in case of a symmetric p.d.f. ($\mathbf{I}_{\alpha\gamma}(\theta) = 0$), only $\mathbf{I}_{\alpha\alpha}(\theta)$ is retained:
$$\left[\mathbf{I}^{-1}(\theta)\right]_{\alpha\alpha} = \mathbf{I}_{\alpha\alpha}^{-1}(\theta) = \frac{2\gamma^2}{N}. \tag{19}$$
Plugging in the ML estimate under $H_0$, we get the second term:
$$\left[\mathbf{I}^{-1}(\tilde{\theta})\right]_{\alpha\alpha} = \frac{2\hat{\gamma}^2}{N}. \tag{20}$$
Then, we combine the above results and get the following expression for the detection statistic of the RAO hypothesis test conditioned on a Cauchy host signal:
$$\rho(\mathbf{y}) = \frac{8\hat{\gamma}^2}{N}\left(\sum_{i=1}^{N} \frac{w_i y_i}{\hat{\gamma}^2 + y_i^2}\right)^2. \tag{21}$$
Furthermore, it is straightforward to deduce the expression for the noncentrality parameter of the detection statistic under $H_1$ as
$$\lambda = \frac{N \alpha^2}{2\gamma^2}. \tag{22}$$
Moreover, exploiting (10), we obtain
$$P_M = 1 - Q_{\chi_1'^2(\lambda)}(T), \tag{23}$$
where
$$Q_{\chi_1'^2(\lambda)}(T) = \Pr\{\rho(\mathbf{y}) > T \mid H_1\} \tag{24}$$
denotes the right-tail probability of the noncentral Chi-Square distribution. Finally, we get the relationship between $P_M$ and $P_{FA}$ as
$$P_M = 1 - Q_{\chi_1'^2(\lambda)}\!\left(Q_{\chi_1^2}^{-1}(P_{FA})\right). \tag{25}$$
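Putting the pieces together, a small Python sketch of the RAO-Cauchy statistic in (21) and the corresponding CFAR decision is given below; the helper names are ours, and the scale estimate gamma_hat is assumed to come from the ML estimator sketched in Section 3.2.2.

```python
import numpy as np
from scipy.stats import chi2

def rao_cauchy_statistic(y, w, gamma_hat):
    """RAO-Cauchy detection statistic of (21); gamma_hat is the ML scale estimate under H0."""
    y = np.asarray(y, dtype=float).ravel()
    w = np.asarray(w, dtype=float).ravel()
    n = y.size
    score = np.sum(w * y / (gamma_hat**2 + y**2))
    return (8.0 * gamma_hat**2 / n) * score**2

def rao_cauchy_detect(y, w, gamma_hat, p_fa=1e-3):
    """Decide 'watermark present' if the statistic exceeds the CFAR threshold."""
    rho = rao_cauchy_statistic(y, w, gamma_hat)
    T = chi2.ppf(1.0 - p_fa, df=1)  # threshold depends only on P_FA, not on alpha or gamma
    return rho > T, rho, T
```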

4.2. Other Hypothesis Test Detectors

Similarly, other commonly used hypothesis test detectors can be derived; herein we only recapitulate their detection statistics, and the corresponding thresholds and error probabilities can be derived analogously.

RAO with Generalized Gaussian Distribution (RAO-GGD). Nikolaidis and Pitas first proposed to use a RAO hypothesis test to replace the LRT detector [8]. The detection statistic of the RAO test assuming a GGD host signal is given by
$$\rho(\mathbf{y}) = \frac{\left(\sum_{i=1}^{N} w_i\,\mathrm{sign}(y_i)\,|y_i|^{\hat{s}-1}\right)^2}{\sum_{i=1}^{N} |y_i|^{2(\hat{s}-1)}}. \tag{26}$$
LRT with Generalized Gaussian Distribution (LRT-GGD). This detector was introduced by Comesaña et al. based on the LRT and a Generalized Gaussian host signal noise model [16]. The detection statistic is given by
$$\rho(\mathbf{y}) = \frac{1}{\hat{\sigma}^{\hat{s}}}\sum_{i=1}^{N}\left(|y_i|^{\hat{s}} - |y_i - \alpha w_i|^{\hat{s}}\right), \tag{27}$$
where the distribution parameters $\hat{s}$ and $\hat{\sigma}$ are estimated from the received signal, regardless of whether a watermark is present or not.

LRT with Cauchy Distribution (LRT-Cauchy). This detector was introduced by Briassouli et al. as an extension of the LRT-GGD detector [24]. The host signal noise is modeled by a Cauchy distribution and the detection statistic is given as
$$\rho(\mathbf{y}) = \sum_{i=1}^{N} \ln\!\left(\frac{\hat{\gamma}^2 + y_i^2}{\hat{\gamma}^2 + (y_i - \alpha w_i)^2}\right). \tag{28}$$
From the derived detection statistics, we can see that the RAO detector does not need information about the embedding strength $\alpha$, whereas the LRT detectors do, as stated before; this simplifies the realization of blind watermark detection.
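For comparison, the two other statistics recapitulated above can be computed side by side; note that the LRT version needs the embedding strength alpha, while the RAO versions do not (a sketch under the same naming assumptions as before):

```python
import numpy as np

def lrt_cauchy_statistic(y, w, gamma_hat, alpha):
    """LRT-Cauchy detection statistic of (28); requires the embedding strength alpha."""
    y = np.asarray(y, dtype=float).ravel()
    w = np.asarray(w, dtype=float).ravel()
    return np.sum(np.log((gamma_hat**2 + y**2) / (gamma_hat**2 + (y - alpha * w)**2)))

def rao_ggd_statistic(y, w, s_hat):
    """RAO-GGD detection statistic of (26); requires only the estimated GGD shape s_hat."""
    y = np.asarray(y, dtype=float).ravel()
    w = np.asarray(w, dtype=float).ravel()
    num = np.sum(w * np.sign(y) * np.abs(y) ** (s_hat - 1.0)) ** 2
    den = np.sum(np.abs(y) ** (2.0 * (s_hat - 1.0)))
    return num / den
```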

5. Experimental Results

To comparatively evaluate the performance of various detectors for additive image watermarking in the NSCT domain, we carried out a large number of experiments with 14 different images to investigate the imperceptibility of the embedded watermark as well as the robustness of the proposed method against attacks. We choose the probability of miss as the evaluation criterion. We embedded a watermark with the same size as the host images, which come from standard image datasets and are of size $256 \times 256$ pixels. As a result, the watermark is 65536 bits long, much longer than in the DWT domain and the Contourlet Transform domain. The test images are shown in Figure 6.

To select the appropriate subband for embedding the watermark bits using the additive spread spectrum embedding rule, both the visual quality and the robustness of the watermarked image should be considered. In view of this, we embed the watermark through the following procedure (sketched in code after this list):
(1) Decompose the original image into a number of subbands by using the NSCT with three pyramidal levels followed by eight directions in each scale.
(2) Compute the energy of each subband and choose the subband with the highest energy for embedding the watermark.
(3) Embed the pseudo-random bipolar watermark in an additive manner.
(4) Apply the inverse NSCT to the modified coefficients to obtain the watermarked image.
Watermark detection is performed without requiring the original image (i.e., blind image watermarking). We implement the following detector combinations for additive spread spectrum watermarking; the naming convention is that the first part of the name denotes the type of hypothesis test and the second part denotes the host signal statistical model. For example, RAO-Cauchy signifies that the hypothesis test is a RAO test and the host signal is modeled by a Cauchy distribution. It is worth noting that the RAO detectors do not need knowledge of the embedding strength, while all mentioned LRT detectors are estimate-and-plug detectors for which we assume that the embedding strength is known at the receiving end.
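The embedding procedure above can be summarized in Python-like pseudocode; nsct_decompose and nsct_reconstruct stand for an NSCT implementation and are hypothetical placeholders, as are the other helper names.

```python
import numpy as np

# Hypothetical NSCT interface (not a real library): nsct_decompose returns a list of
# equal-size subband arrays plus bookkeeping info; nsct_reconstruct inverts the transform.
# from some_nsct_package import nsct_decompose, nsct_reconstruct

def embed_watermark(image, key, alpha, levels=3, directions=8):
    subbands, info = nsct_decompose(image, levels=levels, directions=directions)  # step (1)
    energies = [np.sum(s.astype(float) ** 2) for s in subbands]
    k = int(np.argmax(energies))                                                  # step (2)
    rng = np.random.default_rng(key)
    w = rng.integers(0, 2, size=subbands[k].shape) * 2 - 1                        # bipolar watermark
    subbands[k] = subbands[k] + alpha * w                                         # step (3)
    return nsct_reconstruct(subbands, info), w, k                                 # step (4)
```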

For a practical additive spread spectrum watermarking scenario, $\alpha$ is usually not chosen arbitrarily but is based on the Data-to-Watermark Ratio (DWR), expressed in decibels (dB), where the term data refers to the coefficients used for embedding in one specific NSCT subband, which is determined by the energy distribution. The DWR is given by the expression
$$\mathrm{DWR} = 10\log_{10}\frac{\sigma_x^2}{\alpha^2\sigma_w^2}, \tag{29}$$
where $\sigma_x^2$ denotes the variance of the NSCT subband coefficients and $\sigma_w^2$ denotes the variance of the watermark, which equals 1 in our strategy (i.e., bipolar watermark). Hence, we can express the embedding strength as a function of the DWR and the variance of the host signal as
$$\alpha = \sqrt{\sigma_x^2\,10^{-\mathrm{DWR}/10}}. \tag{30}$$
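A one-line computation of the embedding strength from a target DWR, following (30); the array and variable names are illustrative.

```python
import numpy as np

def embedding_strength(subband, dwr_db):
    """alpha = sqrt(var(subband) * 10^(-DWR/10)) for a unit-variance bipolar watermark, cf. (30)."""
    sigma_x2 = np.var(np.asarray(subband, dtype=float))
    return np.sqrt(sigma_x2 * 10.0 ** (-dwr_db / 10.0))

# e.g. alpha = embedding_strength(selected_subband, dwr_db=8)
```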

First of all, we test the visual quality of the watermarked images, as listed in Table 3. For the Lena image, DWRs of 6, 8, and 10 dB correspond to Peak Signal to Noise Ratios (PSNR) of 53.99 dB, 56.36 dB, and 59.69 dB, respectively. As we can see, the PSNR values are rather high, and the watermarked images are indistinguishable from the originals, showing the effectiveness of the additive NSCT watermarking in terms of the invisibility of the watermark. We obtained similar results on many other images.

Secondly, we analyze the performance of the detectors in the absence of attacks; see Figure 7. We determine the experimental receiver operating characteristic (ROC) curves by running 1000 randomly generated watermarks (simulating hypothesis $H_0$). To compute the ROC curves of the RAO detector, we estimate the noncentrality parameter, and we proceed similarly for those of the LRT detectors. It is worth noting that the p.d.f.s of the original and watermarked images are assumed to be the same; that is, embedding the watermark does not change the distribution of the original image coefficients.

We examined the ROC performance, where the probability of miss varies with the probability of false alarm, for different combinations of detector and distribution. It should be noted that the probability of miss should be kept at a relatively low level for a predefined probability of false alarm in order to increase the reliability of detection.

It is seen that the RAO detector with the Cauchy distribution yields a performance much better than the others, as evidenced by a lower probability of miss for any given probability of false alarm.

Furthermore, to compare the performance of the detectors for watermarks with different strengths, we examined the ROC performance for two other DWRs, as shown in Figures 8 and 9.

As we can see, as the DWR increases, which means the embedding strength decreases, the probability of miss increases. This is in accordance with the theoretical analysis, since the smaller the embedding strength, the more difficult the watermark is to detect, and vice versa. On the other hand, as seen from the probability of miss values, the RAO-Cauchy combination greatly outperforms the others for all DWRs, that is, at any level of watermark strength, which makes it a promising choice.

The image received by the watermark detector might have been subjected to image processing operations. Consequently, we analyzed the robustness of the detectors against various attacks using the same set of images. Similarly, we determine the experimental ROC curves by running 1000 randomly generated watermarks.

The robustness of the proposed detector under JPEG compression is investigated next; the watermarked images are compressed by JPEG with a quality factor of 70. Figure 10 shows the averaged ROC curves obtained using the various detectors.

Figure 10 shows that the proposed RAO-Cauchy detector is slightly less robust than LRT-Cauchy and LRT-GGD at low levels of the probability of miss, but is more robust against JPEG compression than the others at higher levels of the probability of miss, especially above $10^{-3}$.

The performance against random noise, low pass filtering, and median filtering attacks is also tested. Figures 11, 12, and 13 show the averaged ROC curves for the various detectors.

Figures 11, 12, and 13 show that the proposed RAO-Cauchy detector is more robust against random noise, low pass filtering, and median filtering, respectively, than the others, providing a lower probability of miss.

Furthermore, other experiments have been carried out to verify the performance of the proposed scheme, which are partly shown in Figure 14.

From all of the results mentioned above, we conclude that the RAO hypothesis test for NSCT subband coefficients following the Cauchy statistical model can be more robust and a better choice for detecting additive spread spectrum image watermarks.

6. Conclusion

In this paper, we proposed a blind detection approach for an additive spread spectrum image watermarking scheme in the NSCT domain.

We first studied the suitability of several statistical distributions for modeling the NSCT coefficients of an image. The results show that the Cauchy distribution provides a more accurate fit to the NSCT subband coefficients than the GGD, based on a visual goodness-of-fit comparison.

We then laid out the basis for the derivation of each hypothesis test and discussed parameter estimation issues in the ML framework.

Motivated by these modeling results, a blind additive spread spectrum watermark detector in the NSCT domain based on the Cauchy distribution was designed. The proposed detector employs the RAO hypothesis test criterion for watermark detection and does not need the embedding strength at the receiving end.

We also took a closer look at three other state-of-the-art detectors; the comparison shows that the detector based on both the RAO hypothesis test and the Cauchy distribution has the lowest probability of miss for a given probability of false alarm.

The performance of the proposed detector has been evaluated in detail through several experiments. Its robustness against JPEG compression, random noise, low pass filtering, and median filtering attacks has been investigated, and the results show that the proposed detector, using the RAO hypothesis test and the Cauchy p.d.f., is superior to the other approaches.

Given its better robustness against attacks and the fact that it does not need the embedding strength at the receiving end, we suggest the use of the RAO-Cauchy detector in additive spread spectrum image watermarking.

It is worth noting that a novel alpha stable distribution model for the Contourlet Transform subband coefficients has been proposed recently [13] and has shown great improvement, which may be seen as a potential way to further improve the performance of our proposed scheme. Our similar work in the NSCT domain is in progress.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is supported by the Heilongjiang Provincial Natural Science Foundation for Young Scholars under Grant no. QC2014C066.