Research Article  Open Access
Jessada Karnjana, Masashi Unoki, Pakinee Aimmanee, and Chai Wutiwiwatchai, "Audio Watermarking Scheme Based on Singular Spectrum Analysis and Psychoacoustic Model with Self-Synchronization," Journal of Electrical and Computer Engineering, vol. 2016, Article ID 5067313, 15 pages, 2016. https://doi.org/10.1155/2016/5067313
Audio Watermarking Scheme Based on Singular Spectrum Analysis and Psychoacoustic Model with Self-Synchronization
Abstract
This paper proposes a blind, inaudible, and robust audio watermarking scheme based on singular spectrum analysis (SSA) and the psychoacoustic model 1 (ISO/IEC 11172-3). In this work, SSA is used to analyze the host signals and to extract their singular spectra. A watermark is embedded into a host signal by modifying the singular values in the convex part of the singular spectrum curve so that this part becomes concave. This modification inevitably affects the inaudibility and robustness of the watermarking scheme. To satisfy both properties, the part of the singular spectrum to be modified is determined by a novel parameter selection method based on the psychoacoustic model. The test results showed that the proposed scheme achieves not only inaudibility and robustness but also blindness. In addition, this work showed that the extraction process of a variant of the proposed scheme can extract the watermark without the frame positions being known in advance and without embedding additional synchronization code into the audio content.
1. Introduction
Over the last decade, music sharing via the Internet has caused the music industry to lose more than 3 billion US dollars in annual sales [1], because the Internet is an efficient distribution system: it spreads audio signals widely and very rapidly. In addition, digital products share a special characteristic: the first copy is expensive to produce, but duplicates are cheap to reproduce [2]. One potential solution for protecting digital content is audio watermarking [3]. Audio watermarking has also been proposed for other purposes, such as ownership protection, content authentication, broadcast monitoring, information carrying, and covert communication [4–8].
An audio watermarking system consists of two processes, the embedding process and the extraction process, as illustrated in Figure 1. The first embeds the watermark into the host audio signal; the second extracts the watermark from the watermarked signal. Normally, the embedding process is frame-based, so the extraction process requires the frame positions in order to extract the watermark. This requirement raises the frame synchronization problem, which is discussed in detail in Section 3. Audio watermarking systems can be characterized by a number of properties [4]. Among them, five are particularly important [3, 9].
(i) Inaudibility. The watermark must not affect the perceptual quality of the host signal.
(ii) Robustness. The watermark can be extracted correctly even when attacks are performed on the watermarked signals.
(iii) Blindness. The extraction process is independent of the host signal. The system is blind when the extraction process does not require the host signal for comparison in order to correctly extract the watermark. If the extraction process requires the host signal, as illustrated by the dashed line in Figure 1, it is non-blind.
(iv) Confidentiality. The watermark is kept secret.
(v) Capacity. The quantity of hidden information that can be embedded into the host signals.
These required properties normally conflict with one another. Some techniques that achieve high robustness suffer in inaudibility [10]. Some are good at inaudibility but do not meet the blindness property [11]. Some with high capacity are not robust [12]. The method based on least-significant-bit coding [13] obtains good inaudibility but loses robustness. The phase-coding method [14] achieves inaudibility but fails in capacity. The phase modulation method [15] preserves inaudibility but does not satisfy the blindness property. The trade-off between inaudibility and robustness can also be found in methods based on adaptive phase modulation [16]. The method based on cochlear delay characteristics [17] is robust and inaudible; however, it has a significant trade-off between inaudibility and capacity. In addition, the blind cochlear-delay-based scheme reduces the sound quality of the watermarked signals compared with non-blind ones. The methods based on echo hiding [18, 19] are blind and robust, but they perform poorly in inaudibility and confidentiality. The spread-spectrum-based technique is good in robustness but poor in inaudibility and capacity [20]. These examples show that balancing the required properties has always been a difficult task.
A literature review of audio watermarking suggests that schemes based on Singular Value Decomposition (SVD) are robust [10, 11, 21–26]. In general, an SVD-based scheme extracts the singular values from the host signals and slightly changes some of those values with respect to the watermark bit. It is robust because the singular values are unchanged under common signal processing [27]. However, the balance between inaudibility and robustness for some audio signals needs further improvement, because such schemes have never taken human perception into consideration.
The motivation for this work stems from the idea of exploiting the advantages of the SVD-based method and combining it with a human perceptual model. We turn to SSA, which is SVD-based, and adopt it as the main analysis tool. We choose SSA because, when a signal is analyzed, its singular values can be interpreted and have physical meanings [28]. These physical meanings are important because they help us understand the relationship between SSA and the perceptual model. Recently, we proposed audio watermarking schemes based on SSA [28, 29] and showed the benefits of using SSA over SVD. To verify the effectiveness of the SSA-based scheme, we used differential evolution to adjust the balance, and the results were quite successful [29]. However, because the search space was very large, the embedding process was time-consuming.
This work aims to show that SSA equipped with the perceptual model also yields a good balance between inaudibility and robustness. It proposes a novel audio watermarking scheme based on SSA and the human perceptual model, as well as a new method for automatic frame detection; that is, the frame positions are not required in the extraction process.
The rest of this paper is organized as follows. The proposed scheme and the necessary background are detailed in Section 2. Section 3 shows that the proposed scheme can be slightly modified to make it self-synchronized. The performance evaluation and experimental results are given in Section 4. Observations from the experiments are discussed in Section 5. Last, the whole work is summarized in Section 6.
2. Proposed Scheme
The proposed scheme is mainly based on the SSA-based audio watermarking scheme proposed by Karnjana et al. [28, 29]. The first two subsections describe the embedding process, and the last two describe the extraction process. The proposed scheme with self-synchronization is presented in Section 3.
2.1. Embedding Process
The embedding process consists of two major parts, as shown in Figure 2. The first part is the core structure, in which the basic SSA is used to analyze the host signals and to extract their singular spectra. The basic SSA has been experimentally proven useful for extracting meaningful information from signals [30, 31]. The second part, shown in the gray box, is the parameter selection method based on a psychoacoustic model. In this work, we adopt the psychoacoustic model 1 (ISO/IEC 11172-3) [32]. Brief details of the psychoacoustic model and the parameter selection method are provided in the next subsection.
The core structure of the embedding process consists of six steps, described as follows.
(1) The host audio signal is segmented into non-overlapping frames. The number of frames is equal to the number of watermark bits, since one bit is embedded into one frame. Note that the embedding capacity is the sampling frequency of the host signal divided by the frame size.
(2) The trajectory matrix representing each frame is constructed. Following the basic SSA, for a frame of N samples and a window length L (the only parameter of the basic SSA, not greater than N), the trajectory matrix is the L-by-K Hankel matrix whose k-th column consists of samples k through k + L − 1 of the frame, where K = N − L + 1.
(3) SVD is performed on each trajectory matrix to obtain the singular spectrum, that is, the singular values in descending order; each singular value is the square root of an eigenvalue of the product of the trajectory matrix and its transpose.
(4) When the watermark bit is 1, we modify the singular values on a selected interval of the spectrum so that the convex part of the singular spectrum curve becomes concave. When the watermark bit is 0, the singular values are left unchanged. In this step, the two parameters defining the interval boundaries are determined by the parameter selection algorithm based on the psychoacoustic model.
(5) The modified trajectory matrix is constructed by reversing the SVD, and then it is hankelized. Hankelization (diagonal averaging) maps the modified trajectory matrix back to a signal: each sample of the reconstructed frame is the average of the elements of the modified trajectory matrix lying on the corresponding anti-diagonal.
(6) Finally, the watermarked signal is reconstructed by stacking the hankelized frames.

2.2. Parameter Selection Based on Psychoacoustic Model
The block diagram of the parameter selection method based on the psychoacoustic model is shown in the gray box of Figure 2. The psychoacoustic model 1, which is deployed in MPEG-1 audio coding, is adopted in order to deliver a signal-to-mask ratio (SMR) of the analyzed signal, and the SMR is then used as a criterion for selecting the two interval parameters.
Basically, the psychoacoustic model is built on three psychoacoustic principles: the absolute threshold of hearing, simultaneous masking, and the upward spread of masking [33]. It consists of five steps [33, 34], as shown in Figure 3. According to the ISO/IEC 11172-3 standard, the process can be summarized as follows. First, the FFT and the power spectral density (PSD) of the signal are calculated, and the PSD is normalized to a maximum sound pressure level (SPL) of 96 dB. Next, the PSD is used to identify the tonal (more sinusoid-like) and nontonal (more noise-like) components of the signal. This identification is used to calculate the masking levels due to the tonal and nontonal maskers. Then, the irrelevant maskers are removed by applying two psychoacoustic principles: maskers below the absolute threshold of hearing are removed, and only the strongest masker within a distance of 0.5 Bark is kept. Subsequently, the surviving maskers are used to calculate the individual masking levels. Finally, all masking levels are combined to calculate the global masking level. The output of the psychoacoustic model is the SMR, defined as the difference between the SPL of the global masking level and the PSD of the analyzed signal. Figure 4 shows an example of the SMR (red line) of one frame.
In perceptual audio coding, such as MP3 compression, the SMR is used to allocate the quantization bits: frequency components with lower SMRs are assigned fewer bits, since the human auditory system is less sensitive to those components. In this work, the SMR is used as guidance for determining the appropriate parameters, because embedding the watermark into components with low SMRs helps to improve inaudibility. The algorithm that delivers the two interval parameters consists of the following five steps.
(1) We first calculate the SMR of each frame. According to the standard, the frame size used for this calculation is samples. Note that the frame size from the segmentation step of the core structure need not be the same as that of the psychoacoustic model.
(2) We use the SMRs obtained from the previous step to calculate the average SMR of the host signal.
(3) We identify the frequency band whose average SMR is lower than a predefined value. If there is more than one such band, the band with the lowest frequency is selected. If the band is wider than a predefined bandwidth, it is limited to that predefined bandwidth. In our simulation, the predefined bandwidth is kHz. An example is shown in Figure 5.
(4) For each frame, we map the selected band to a singular-value interval. In this step, we have to find the relationship between frequencies and singular-value indices, because the output of the psychoacoustic model, the SMR, is expressed as a function of frequency. When the basic SSA is used to decompose a signal, the singular values of the matrix representing the signal can be interpreted as the scale factors of the oscillatory components of the signal [28]. After analyzing each oscillatory component by the Fourier transform, we found that the frequency band of each oscillatory component is quite narrow compared with the signal bandwidth, as shown in Figure 6.
We associate the index of each singular value with the peak frequency of its oscillatory component. Figure 7 shows an example of the resulting relationship between frequencies and singular-value indices. Thus, to map the selected frequency band to a singular-value interval, we first find the local minimum of this relationship closest to the lower band edge and take its index as the lower boundary of the interval. We then find the local maximum closest to the upper band edge, which must lie to the right of the lower boundary, and take its index as the upper boundary. An example of this mapping is shown in Figure 8. Note that different frames may have different intervals, and the word frame in this step means a frame from the segmentation process.
(5) Finally, the two parameters for embedding the watermark are selected as the arithmetic means of the boundaries of all intervals.
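As a concrete illustration of the SSA operations used in the steps above, the following Python sketch builds the Hankel trajectory matrix of a frame, computes its singular spectrum, and finds the peak frequency of the oscillatory component associated with one singular value. The function names, the window length, and the signal parameters are our own illustrative choices, not part of the published scheme.

```python
import numpy as np

def trajectory_matrix(frame, L):
    """Build the L x K Hankel trajectory matrix of a frame (K = N - L + 1)."""
    N = len(frame)
    K = N - L + 1
    return np.column_stack([frame[k:k + L] for k in range(K)])

def singular_spectrum(frame, L):
    """Singular values of the trajectory matrix, in descending order."""
    X = trajectory_matrix(frame, L)
    return np.linalg.svd(X, compute_uv=False)

def component_peak_freq(frame, L, index, fs):
    """Peak frequency of the oscillatory component tied to one singular value."""
    X = trajectory_matrix(frame, L)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xi = s[index] * np.outer(U[:, index], Vt[index])        # rank-1 component
    # Diagonal averaging (hankelization) back to a 1-D signal.
    comp = np.array([np.mean(Xi[::-1].diagonal(i))
                     for i in range(-Xi.shape[0] + 1, Xi.shape[1])])
    spec = np.abs(np.fft.rfft(comp))
    return np.fft.rfftfreq(len(comp), d=1.0 / fs)[np.argmax(spec)]
```

For a narrowband input, the peak frequency of the dominant component recovers the input frequency, which is the relationship exploited in step (4).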
2.3. Extraction Process
The plot of a singular spectrum is normally convex, as shown in Figure 9. However, after a watermark bit is embedded into an interval of the singular spectrum of a host frame, the embedding process causes a concave part on that interval of the singular spectrum of the reconstructed, watermarked frame [29], as shown in Figure 10. We exploit this property to extract the watermark bit. The extraction process consists of five steps, as shown in Figure 11, detailed as follows.
(1) We segment the watermarked signal into non-overlapping frames. At this stage, we assume that we know the frame positions and the frame size.
(2) We construct the trajectory matrix in the same way as in the embedding process.
(3) We perform SVD on the trajectory matrix to obtain the singular spectrum.
(4) If the two interval parameters are not provided, the automatic parameter estimation illustrated in the gray box of Figure 11 is used to estimate them. The details of this estimation are given in the next subsection.
(5) We approximate the singular values on the interval by a quadratic function of the singular-value index. Since the leading coefficient of the quadratic indicates the rate of change of the singular values, its sign is used to determine the watermark bit: a minus sign indicates concavity, hence watermark bit 1, and a plus sign indicates convexity, hence watermark bit 0.
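The quadratic test in the last extraction step can be sketched in a few lines. The helper name is ours, and the concave-implies-bit-1 convention is our reading of the scheme's pattern encoding; only the sign test itself is essential.

```python
import numpy as np

def extract_bit(sigma, i, j):
    """Fit the singular values sigma[i..j] with a quadratic in the index k.
    A negative leading coefficient means the spectrum is concave there,
    which we read as watermark bit 1 (convention assumed; convex -> 0)."""
    k = np.arange(i, j + 1)
    a = np.polyfit(k, sigma[i:j + 1], 2)[0]   # leading coefficient
    return 1 if a < 0 else 0
```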
2.4. Automatic Parameter Estimation
To automatically estimate the two interval parameters, we use the fact that when watermark bit 1 is embedded into a frame, there exists a concave part in the singular spectrum plot. In other words, when watermark bit 1 is embedded into the frame, we can find pairs of indices such that the singular values between them lie mostly above the line segment connecting the singular values at those two indices. Thus, the automatic parameter estimation estimates the parameters from the width of the concave part.
We first define the concavity density as a measure of the degree of concavity. Given a singular spectrum, the concavity density of the singular values from index i to index j is the average signed deviation of those singular values above the chord, that is, the sum over the interval of the differences between each singular value and the value of the line connecting the singular values at i and j, normalized by the interval length. A positive density indicates concavity, and a negative density indicates convexity.
Starting from the first singular value, the sequence of singular values used to calculate the concavity density is shifted to the right by one singular-value point at a time to determine the set of concavity densities. An example of the positive and negative concavity densities of two sequences of singular values is shown in Figure 12.
Figure 13 shows an example of the concavity density curve of the singular spectrum in Figure 12 for one fixed sequence length. It can be seen that the positive density roughly corresponds to the concave part of the singular spectrum. However, the concavity density depends upon the chosen length of the sequence used to calculate it. In this work, we get around this problem by averaging the densities obtained at different lengths. The average-density curve is then refined as follows. First, any negative density value is ignored, because it implies convexity. Second, any positive part of the density curve that is narrower than a user-defined minimum width is neglected, because, in practice, we can set this minimum in advance.
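A minimal sketch of the concavity density follows, assuming the lost equation is the mean signed height of the singular values above the chord joining the interval endpoints; the normalization is our assumption, and the function names are ours.

```python
import numpy as np

def concavity_density(sigma, i, j):
    """Mean signed height of sigma[i..j] above the chord joining
    sigma[i] and sigma[j] (positive -> concave bulge, negative -> convex).
    The mean normalization is an assumption; the original equation was
    not recoverable from the text."""
    k = np.arange(i, j + 1)
    chord = sigma[i] + (sigma[j] - sigma[i]) * (k - i) / (j - i)
    return np.mean(sigma[i:j + 1] - chord)

def density_curve(sigma, length):
    """Slide a window of the given length across the spectrum, one
    singular-value point at a time."""
    return np.array([concavity_density(sigma, i, i + length - 1)
                     for i in range(len(sigma) - length + 1)])
```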
Subsequently, the indices at the rising and falling edges of the resulting density curve, together with an offsetting constant, are used to estimate the interval parameters for the given frame. Finally, the parameters for the watermarked signal are calculated by averaging the estimated parameters from all frames. The averaging algorithm is depicted in Figure 14 and detailed as follows.
For each frame, the estimation may return several estimated parameter pairs, since more than one concave interval can be detected within one frame; the number of intervals detected within a frame is bounded by a frame-dependent maximum.
The general idea of the averaging algorithm is as follows. Given two integral intervals [a, b] and [c, d], where a, b, c, and d are integers with a ≤ b and c ≤ d, we say that the two intervals overlap if a ≤ c ≤ b or c ≤ a ≤ d. For a pair of overlapping intervals, we define the overlap degree as the ratio of the length of their intersection, min(b, d) − max(a, c), to the length of their union, max(b, d) − min(a, c), where max and min are the maximum and minimum functions, respectively.
Given the set of estimated parameter intervals from all frames, we can expect the set to contain many overlapping intervals; intervals whose boundaries do not satisfy the overlap condition are treated as distinct. The averaging algorithm is then just the process of recursively grouping the overlapping members of the set, as follows.
(1) We assign a frequency weight to each interval in the set. Initially, each frequency weight is set to 1.
(2) We calculate the overlap degree of a pair of estimated parameter intervals. If it is greater than a predefined value, the two intervals are merged to create a new interval, and the two old intervals are removed. The frequency weight of the new interval is the sum of the frequency weights of the two old intervals, and the averages of the lower bounds and of the upper bounds of the old intervals are used as the bounds of the new interval.
(3) Step (2) is repeated until the set has no overlapping members.
(4) The interval with the highest frequency weight is chosen as the estimated parameter pair. If there are multiple intervals with the highest weight, the estimated parameters are chosen randomly from among them.
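The averaging procedure above can be sketched as follows. The intersection-over-union form of the overlap degree and the merge threshold are our assumptions where the original equation was not recoverable; the function names are ours.

```python
def overlap_degree(p, q):
    """Intersection-over-union of two intervals (a plausible reading of
    the paper's overlap degree; the exact formula is assumed)."""
    (a, b), (c, d) = p, q
    inter = min(b, d) - max(a, c)
    union = max(b, d) - min(a, c)
    return inter / union if inter > 0 else 0.0

def merge_intervals(intervals, theta=0.5):
    """Recursively merge overlapping intervals; each merge sums the
    frequency weights. Returns the heaviest interval as (lower, upper)."""
    items = [[float(i), float(j), 1] for i, j in intervals]
    merged = True
    while merged:
        merged = False
        for x in range(len(items)):
            for y in range(x + 1, len(items)):
                if overlap_degree(items[x][:2], items[y][:2]) > theta:
                    a, b = items[x], items[y]
                    new = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2, a[2] + b[2]]
                    items = [it for n, it in enumerate(items)
                             if n not in (x, y)] + [new]
                    merged = True
                    break
            if merged:
                break
    best = max(items, key=lambda it: it[2])
    return round(best[0]), round(best[1])
```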
3. Self-Synchronization
The embedding and extraction processes described in the previous section are frame-based: the host signal is divided into frames, and one watermark bit is embedded into each frame. Thus, to extract the watermark correctly, the extraction process must know the frame positions. The assumption that the frame positions are known in advance may not be practical in some situations. For example, an attacker can attack watermarked signals by cutting out a few audio samples, which causes the extraction process to work improperly; this is known as a cropping attack. How the frame positions are acquired is the frame synchronization problem.
There are two kinds of solutions to the frame synchronization problem [11]. The first is to bind the watermark to some invariant audio features of the host signal [35] or to perform self-synchronization [36–38]. The second is to embed a frame synchronization code into the host signal [39, 40].
From experiments, we found that the proposed scheme can automatically detect the watermarked frames, provided that the scheme is modified slightly. To grasp the idea behind the new rules, let us start with the basic findings of this work.
Consider an audio signal with three frames of equal length, where watermark bit 1 is embedded in the middle frame by the method described in Section 2.1, as illustrated in Figure 15. According to the embedding and extraction processes, if we use the middle frame itself to construct the trajectory matrix, then we can detect the concave pattern in the singular spectrum plot. Now shift the frame boundary by a number of samples smaller than the frame length, so that the shifted frame and the middle frame overlap. We discovered that the singular spectrum curve of the trajectory matrix constructed from the shifted frame also has a concave part if the overlapping region is large enough; a similar effect occurs when shifting in the other direction. In general, if we construct matrices from frames starting at successive sample positions, many of those matrices can be interpreted as carrying the watermark bit, namely, those whose starting positions lie in the neighborhood of the true frame position. This overlapping effect is utilized in our automatic frame detection: by performing a scanning operation, that is, constructing frames at successive positions and extracting the watermark bit from each of them, we can localize the watermarked frame in which watermark bit 1 is embedded. This is also the reason why we need to modify the proposed scheme to make it self-synchronizing. The modification is as follows.
We first divide each frame into four equal subframes. Each watermark bit is represented by a four-bit string: if the watermark bit is 0, the four bits "0100" are embedded into the subframes, and if the watermark bit is 1, "0110" is embedded, as illustrated in Figure 16. For example, if the watermark bits are "001", then the subframe-embedding bits are "010001000110".
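The bit-to-pattern expansion can be written in one line; the mapping follows the example given above, and the function name is ours.

```python
def to_subframe_bits(watermark_bits):
    """Expand each watermark bit into its four-bit subframe pattern:
    0 -> "0100", 1 -> "0110" (so "001" -> "010001000110")."""
    return "".join("0110" if b == "1" else "0100" for b in watermark_bits)
```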
Given a frame, we define the subframe-scan operator on it as follows. A scanner that operates on windows of one subframe length moves through the frame with a given scan-step size; for each scanned subframe, it outputs 1 if the singular spectrum curve of the matrix constructed from that subframe is concave on the selected interval, and 0 otherwise. The result of the operator is thus a binary sequence reflecting the characteristics of the singular spectra of the scanned subframes.
We use the first appearance of "1" in "0100" and "0110" as the synchronization point of watermark bits 0 and 1, respectively. If we can detect a concavity in the next subframe, we interpret the watermark bit as 1; otherwise, it is 0. Since the first detected concavity is used as the synchronization point, a "0" is placed at the beginning and at the end of the four-bit patterns to ensure that all concavities are surrounded by convexities and that the distance between two concavities is large enough. This is the concept behind the proposed self-synchronization. An example of performing the subframe-scan operation is shown in Figure 17.
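Decoding one watermark bit from the scanned subframe bits can be sketched as follows; the helper name is ours, and the input is the binary output of the subframe-scan operator.

```python
def decode_pattern(scan_bits):
    """Decode one watermark bit from the scanned subframe bits: the first
    detected concavity ("1") is the synchronization point; a concavity in
    the very next subframe means the pattern "0110" (bit 1), otherwise
    "0100" (bit 0). Returns None if no concavity is found."""
    sync = scan_bits.find("1")
    if sync == -1:
        return None
    return 1 if sync + 1 < len(scan_bits) and scan_bits[sync + 1] == "1" else 0
```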
To detect a watermarked frame, we define another scanner, called the frame-scan operation, which operates on windows of one frame length. Given a watermarked audio signal longer than one frame, the frame-scan operation scans from the beginning of the signal with a given scan step until it detects the first watermarked frame.
Given a frame scanned at the current step, we first apply the subframe-scan operator to it, obtaining a binary sequence of concavity detections. We then define four rectangular windows, one for each subframe position, each widened on both sides by a positive integer called the overlap margin. Each window is multiplied element-wise with the subframe-scan output, and the products are summed within the window. If the sum for a window exceeds a threshold determined by the overlap margin, we say that, by looking through that window, the concavity of the singular spectrum is detected.
The scanner stops scanning and declares a watermarked frame only when the conditions described in Table 1 are satisfied. Watermark bit 0 is detected if and only if the concavity of the singular spectrum cannot be detected through the first, third, and fourth windows but can be detected through the second window. In comparison, watermark bit 1 is detected if and only if the concavity cannot be detected through the first and fourth windows but can be detected through the second and third windows. Otherwise, the scanner continues scanning with the given step size. The frame-scan operation is repeated until it reaches the end of the watermarked signal.

An example of applying the four windows to one frame is shown in Figure 18. In this figure, the second and third subframes carry concavities, so the frame-scan operation decodes the pattern "0110" as watermark bit 1.
4. Evaluation
Twelve host signals from the RWC music-genre database [41] were used in our experiments. All have a sampling rate of 44.1 kHz, 16-bit quantization, and two channels. Unless stated otherwise, the hidden information was embedded in one channel, starting from the initial segment of the host signals. The frame size was set to samples. The embedding capacity was bit per second (bps); we chose this capacity because it is neither too low nor too high and seems reasonable for general applications. The window length for the matrix formation was . One hundred and fifty bits of the watermark were embedded in total. The audio duration of each signal was about seconds.
The parameters obtained from the parameter selection based on the psychoacoustic model are shown in Table 2, and the estimated parameters obtained from the automatic parameter estimation are shown in Table 3. We implemented the proposed scheme using an adaptive criterion that selects the predefined SMR level according to the maximum SMR of the signal.


The proposed schemes were compared with our previously proposed schemes [28, 29] and the conventional SVD-based scheme [23]. There are three reasons for comparing with the conventional SVD-based scheme. First, it is one of the few blind SVD-based techniques. Second, its published results are promising. Last, both the SSA-based and SVD-based schemes belong to the same family of audio watermarking schemes; that is, they extract singular values from the host signals and embed the information by modifying those singular values. The following subsections report the performance evaluations in terms of sound quality, robustness, and self-synchronization.
4.1. Sound-Quality Evaluation
Three distance measures were chosen to evaluate the sound quality of the watermarked signals: the evaluation of audio quality (EAQUAL) [42], the log-spectral distance (LSD), and the signal-to-distortion ratio (SDR). EAQUAL measures the degradation of the watermarked signal compared with the original on a scale called the objective difference grade (ODG), from −4 (very annoying) to 0 (imperceptible).
The LSD is a distance measure between two spectra. Given the power spectra of the original and the watermarked signals, the LSD is defined as the root mean square, over frequency, of the difference between the two power spectra expressed in decibels.
The SDR is a power ratio between the signal and the distortion. Given the amplitudes of the original and watermarked signals, the SDR is defined as ten times the base-10 logarithm of the ratio between the total power of the original signal and the total power of the difference between the original and watermarked signals.
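Both quality measures can be sketched as follows. The exact frame averaging used for the LSD in the paper is not recoverable, so this sketch uses a single-spectrum RMS form; the FFT size and function names are our assumptions.

```python
import numpy as np

def sdr_db(original, watermarked):
    """Signal-to-distortion ratio: 10*log10(sum s^2 / sum (s - s_hat)^2)."""
    noise = original - watermarked
    return 10 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

def lsd_db(original, watermarked, nfft=1024):
    """Log-spectral distance in dB: RMS over frequency of the difference
    between the two power spectra expressed in dB. A single-spectrum
    sketch; per-frame averaging is omitted."""
    P = np.abs(np.fft.rfft(original, nfft)) ** 2
    Q = np.abs(np.fft.rfft(watermarked, nfft)) ** 2
    eps = 1e-12                       # guard against log of zero
    diff = 10 * np.log10((P + eps) / (Q + eps))
    return np.sqrt(np.mean(diff ** 2))
```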
The evaluation criteria for good sound quality are as follows: the ODG must be greater than −1, the LSD must be below a fixed threshold, and the SDR must be above a fixed threshold. An ODG of −1 indicates that the noise perceived in the watermarked signal is perceptible but not annoying, and, based on our simulations, ODG values between −1 and 0 mean excellent sound quality. We set the LSD and SDR thresholds because we found in preliminary experiments that exceeding either one can cause an annoying perception.
The comparison of the average ODGs, LSDs, and SDRs is shown in Table 4. The proposed scheme satisfies the inaudibility criteria and is considerably improved compared with the earlier SSA-based method [28]. Compared with the conventional SVD-based method and the SSA-based method with differential evolution [29], the proposed method is slightly inferior in inaudibility; however, the difference among them is insignificant. Based on our listening-test experiment [29], we found that signals satisfying all three criteria are hardly distinguishable in terms of sound quality. Therefore, these results show that the psychoacoustic model can be used to deliver the embedding parameters so as to improve the sound quality of the watermarked signal relative to the previously proposed SSA-based method [28]. Nevertheless, the parameters determined by differential evolution give the best performance in terms of sound quality.
4.2. Robustness Evaluation
The effectiveness of the proposed schemes in terms of robustness is measured by the watermark extraction precision, represented by the bit-error rate (BER). Given the embedded watermark bit string and the extracted bit string, the BER is the number of bit positions at which they differ, that is, the sum of their bitwise XOR, divided by the total number of bits and expressed as a percentage. The criterion for a robust scheme is that the BER must be below 10%. At this level of BER, it is possible to reduce the BER close to zero by adding an error-correcting code. Furthermore, at this level, the BER can be reduced practically and effectively by an embedding-repetition scheme: a frame is segmented into several subframes, a watermark bit is embedded repeatedly into those subframes, and the majority rule is applied in the extraction process to decode the extracted bit.
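The BER and the majority-rule decoding of the embedding-repetition scheme can be sketched as follows; the function names are ours.

```python
import numpy as np

def ber(embedded, extracted):
    """Bit-error rate in percent: mean of the bitwise XOR, times 100."""
    e = np.asarray(embedded)
    x = np.asarray(extracted)
    return 100.0 * np.mean(e ^ x)

def majority_decode(bits, repeat):
    """Embedding-repetition decoding: each watermark bit was embedded
    `repeat` times; take the majority vote within each group."""
    groups = np.asarray(bits).reshape(-1, repeat)
    return (groups.sum(axis=1) > repeat // 2).astype(int).tolist()
```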
Five attacks were performed on the watermarked signals: Gaussian-noise addition with an average signal-to-noise ratio (SNR) of 36 dB, resampling at 16 and 22.05 kHz, band-pass filtering at 100–6000 Hz with −12 dB/Oct, MP3 compression at 128 kbps joint stereo, and MP4 compression at 96 kbps.
The results of the robustness evaluation are shown in Table 5. The average BERs of the proposed schemes are below 10% under almost all attacks except MP3 compression and band-pass filtering (BPF), where they are slightly above 10%. Considering the overall average BER, that is, the average of the BERs over all types of attacks, our proposed methods remain below 10% and below that of the conventional SVD-based method. Table 6 shows the overall averages of all methods.

Compared with the conventional SVD-based method, the proposed schemes are slightly less robust in the cases of "no attack," "MP4," "AWGN," "RES16," and "BPF." However, the overall average BERs of the proposed schemes are better than that of the conventional SVD-based one. In general, when the BER is low enough (e.g., 10%), it can be reduced further by applying an error-correcting code or by employing embedding repetition. On the other hand, the proposed schemes outperform the conventional SVD-based method in the cases of "MP3" and "BPF." Since the average BERs of the conventional SVD-based method in both cases are close to the chance level, they are hard to improve further by those techniques.
Compared with the previously proposed SSA-based methods, the proposed scheme is somewhat less robust. Hence, its overall performance appears slightly poorer than that of the SSA-based scheme with differential evolution. This issue is discussed in Section 5.
When the extraction process does not assume the parameters to be known in advance, the average BER increases by about . The root-mean-square deviation between the estimated and actual parameter values is about 2.83. Thus, the extraction process is somewhat sensitive to the correctness of the parameter values: when it extracts the watermark with less information, the BER increases.
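The root-mean-square deviation cited above is the standard measure of estimation error; as a minimal sketch (the helper name `rmsd` is ours):

```python
import numpy as np

def rmsd(estimated, actual):
    """Root-mean-square deviation between estimated and actual parameter values."""
    d = np.asarray(estimated, dtype=float) - np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```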
4.3. Self-Synchronization Evaluation
To test the self-synchronization, we implemented the scheme with the settings shown in Table 7. Each test signal is a randomly chosen segment of samples (about seconds), and bits of the watermark were embedded into the segment.

To detect the watermarked frame and to extract the watermark, we randomly chose an initial sample before the embedded segment for the scan operation, as depicted in Figure 19. The accuracy of frame detection and watermark extraction is defined as the number of correctly extracted watermark bits divided by the total number of embedded watermark bits. Since concavity occurs naturally in singular spectra, it is possible for our proposed method to identify an unwatermarked segment as a watermarked frame; in this case, we have a misidentified frame. The false positive rate is defined as the number of misidentified frames divided by the total number of frames identified by the algorithm. The test results show that the accuracy of frame detection and watermark extraction is 80% and that the false positive rate is 6.42%.
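The two metrics just defined can be written directly as ratios; the sketch below mirrors the definitions in the text (the function name `detection_metrics` and the sample counts in the usage are our own illustration, not the actual test data):

```python
def detection_metrics(correct_bits, embedded_bits, misidentified_frames, identified_frames):
    """Accuracy and false positive rate, exactly as defined in the text.

    accuracy            = correctly extracted bits / total embedded bits
    false positive rate = misidentified frames / frames identified by the algorithm
    """
    accuracy = correct_bits / embedded_bits
    false_positive_rate = misidentified_frames / identified_frames
    return accuracy, false_positive_rate
```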
5. Discussion
Even though the proposed scheme satisfies the robustness and inaudibility criteria, other aspects still need to be improved. In this section, five issues concerning the performance and limitations of the proposed scheme are discussed. The first two issues concern the performance of the proposed scheme, the next two concern the limitations of the currently proposed self-synchronized scheme, and the last one is a general problem in terms of the confidentiality property.
First, we have shown that the psychoacoustic model can be used to determine the parameters. These parameters are host-signal-dependent and important because their values determine the balance between inaudibility and robustness. In our previously proposed method, differential evolution was used to determine them. Compared with using differential evolution, there are two advantages of using the psychoacoustic model.
(i) The computational time is reduced considerably because the differential evolution optimization has a large search space. A comparison of the computational time is shown in Table 8. To determine the parameters for one signal, differential evolution takes hours, whereas the psychoacoustic-model-based method takes about 4.3 seconds.
(ii) The optimal parameters from differential evolution depend on many factors, such as the simulations included in the optimizer [29]. Moreover, the cost function has two additional parameters. In this sense, using the psychoacoustic model reduces the number of scheme parameters.

However, the robustness of the proposed scheme is slightly poorer than that of the previously proposed methods because only the SMR is used as guidance for the parameter determination. A low SMR gains inaudibility but may lose robustness because lower SMRs are associated with lower singular-value indices. In addition, components with lower SMRs are more likely to be destroyed by perceptual coding. To improve the robustness of the proposed scheme, we may incorporate other masking phenomena, such as nonlinear excitatory masking, into the psychoacoustic model. This is part of our future work.
Second, unlike the previously proposed schemes, this scheme does not modify the singular spectrum when the watermark bit is embedded. We found that the effectiveness in terms of robustness is the same, but in terms of inaudibility, the objective scores improve slightly, as shown in Table 9. The previously proposed schemes, especially the one with differential evolution optimization, can benefit from this fact because the optimization function directly handles the trade-off between inaudibility and robustness.

Third, the proposed self-synchronization is time-consuming: the extraction process with self-synchronization takes up to times that of the one without it. In our simulation, the extraction process without self-synchronization took about seconds to extract one watermark bit, whereas the self-synchronized process took about minutes. This is why we simulated and evaluated the self-synchronization separately.
Fourth, although the synchronization rate of 80% of the proposed scheme with self-synchronization does not satisfy the criterion of a BER of less than 10%, it confirms the fundamental concepts on which the self-synchronized scheme is based. From our analysis, we found that the detection rate is determined by the algorithm that interprets the extracted bit string. In the proposed scheme, our algorithm uses the simplest rectangular windows to find the pattern in the bit string. Even when the algorithm could not detect a watermark bit, we found that the bit string correctly represented the concavity in the singular spectra. Therefore, effective pattern recognition techniques could help improve the situation.
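The rectangular-window scan mentioned above can be sketched as a moving-average pass over the extracted bit string; this is our own minimal illustration (the function name `scan_rectangular` and the 0.8 density threshold are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def scan_rectangular(bits, window_len, threshold=0.8):
    """Slide a rectangular (unweighted) window over a binary string and return
    the start indices where the density of 1s reaches the threshold,
    i.e., candidate positions of a watermarked frame."""
    b = np.asarray(bits, dtype=float)
    window = np.ones(window_len) / window_len   # rectangular window as a moving average
    density = np.convolve(b, window, mode='valid')
    return [i for i, d in enumerate(density) if d >= threshold]
```

A smarter classifier in place of the fixed threshold is exactly the kind of pattern recognition improvement the text suggests.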
The false positive rate also indicates that the algorithm sometimes detects a watermark bit where no hidden information is embedded. We investigated this problem by analyzing unwatermarked signals with the proposed automatic frame detection and found that, in those false positive cases, there are some natural concavities in the singular spectra. If false positive detection is a serious concern, the problem can be solved by first detecting the natural concavity and then hiding the watermark only in concavity-free frames. Otherwise, good pattern recognition is required, since we found that the bit-string patterns of the natural concavity differ from those of the embedded watermark. This problem will be investigated further in the future.
Fifth, since this work has shown that watermarked signals can be scanned and analyzed completely blindly to detect and extract the watermark, a question arises concerning the confidentiality of the watermark. If the secrecy of the watermark is a concern, we may need to encrypt the watermark with an encryption key before it is embedded into the host signals. Later, in the extraction process, a decryption key is required to decrypt the extracted, encrypted watermark and obtain the original one.
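As one possible realization, the watermark bits could be XOR-encrypted with a key-derived stream before embedding. The sketch below uses a SHA-256-based keystream purely for illustration; the paper does not specify a cipher, and a vetted scheme such as AES should be preferred in practice:

```python
import hashlib

def keystream(key, n_bits):
    """Derive n_bits pseudorandom bits from a key by chained SHA-256 hashing."""
    bits, block = [], key.encode("utf-8")
    while len(bits) < n_bits:
        block = hashlib.sha256(block).digest()
        bits.extend((byte >> k) & 1 for byte in block for k in range(8))
    return bits[:n_bits]

def xor_crypt(watermark_bits, key):
    """XOR the watermark with the keystream; applying the same key twice decrypts."""
    return [b ^ s for b, s in zip(watermark_bits, keystream(key, len(watermark_bits)))]
```

Because XOR is its own inverse, the same function serves as both the encryption step before embedding and the decryption step after extraction.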
6. Conclusion
The main objective of this work was to show that SSA, equipped with the psychoacoustic model, can give a good balance between inaudibility and robustness, so that it can overcome the problems of the previously proposed SSA-based method [28] and the SVD-based method. Even though the overall performance of the currently proposed schemes is poorer than that of the SSA-based scheme with differential evolution, the processing time is reduced considerably. Integrated with the psychoacoustic model, the SSA-based audio watermarking scheme achieves three required properties of an audio watermarking system: inaudibility, robustness, and blindness. This paper also presented a novel method for self-synchronization; the synchronization rate of the proposed self-synchronized scheme was about 80%. Improving the synchronization rate and reducing the computational time of the self-synchronized scheme are our future work.
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
This work was supported by an A3 Foresight Program made available by the Japan Society for the Promotion of Science and partially supported by a Grant-in-Aid for Scientific Research (A) (no. 25240026). It was also supported under a grant from the SIIT-JAIST-NECTEC Dual Doctoral Degree Program and by National Research University funding from Thailand.
References
[1] S. Bhattacharjee, R. D. Gopal, and G. L. Sanders, “Digital music and online sharing: software piracy 2.0?” Communications of the ACM, vol. 46, no. 7, pp. 107–111, 2003.
[2] R. D. Gopal and G. L. Sanders, “Global software piracy: you can't get blood out of a turnip,” Communications of the ACM, vol. 43, no. 9, pp. 83–89, 2000.
[3] A. M. Al-Haj, Advanced Techniques in Multimedia Watermarking: Image, Video and Audio Applications, IGI Global, 2010.
[4] I. Cox, M. Miller, J. Bloom, J. Fridrich, and T. Kalker, Digital Watermarking and Steganography, Morgan Kaufmann, 2007.
[5] X. Quan and H. Zhang, “Perceptual criterion based fragile audio watermarking using adaptive wavelet packets,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), pp. 867–870, IEEE, Cambridge, UK, August 2004.
[6] T. Kalker and J. Haitsma, “Efficient detection of a spatial spread-spectrum watermark in MPEG video streams,” in Proceedings of the International Conference on Image Processing (ICIP '00), pp. 434–437, September 2000.
[7] I. J. Cox and M. L. Miller, “The first 50 years of electronic watermarking,” EURASIP Journal on Advances in Signal Processing, vol. 2002, no. 2, pp. 1–7, 2002.
[8] I. J. Cox, “Watermarking, steganography and content forensics,” in Proceedings of the ACM WMsec, pp. 1–2, 2008.
[9] S. Craver, M. Wu, and B. Liu, “What can we reasonably expect from watermarks?” in Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA '01), pp. 223–226, 2001.
[10] A. Al-Haj, C. Twal, and A. Mohammad, “Hybrid DWT-SVD audio watermarking,” in Proceedings of the 5th International Conference on Digital Information Management (ICDIM '10), pp. 525–529, IEEE, Ontario, Canada, July 2010.
[11] B. Lei, I. Y. Soon, and E.-L. Tan, “Robust SVD-based audio watermarking scheme with differential evolution optimization,” IEEE Transactions on Audio, Speech and Language Processing, vol. 21, no. 11, pp. 2368–2378, 2013.
[12] M. Fallahpour and D. Megías, “High capacity audio watermarking using the high frequency band of the wavelet domain,” Multimedia Tools and Applications, vol. 52, no. 2-3, pp. 485–498, 2011.
[13] C. H. Yeh and C. J. Kuo, “Digital watermarking through quasi m-arrays,” in Proceedings of the IEEE Workshop on Signal Processing Systems (SiPS '99), pp. 456–461, 1999.
[14] W. Bender, D. Gruhl, N. Morimoto, and A. Lu, “Techniques for data hiding,” IBM Systems Journal, vol. 35, no. 3-4, pp. 313–335, 1996.
[15] S.-S. Kuo, J. D. Johnston, W. Turin, and S. R. Quackenbush, “Covert audio watermarking using perceptually tuned signal independent multiband phase modulation,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '02), pp. 1753–1756, Orlando, Fla, USA, May 2002.
[16] N. M. Ngo and M. Unoki, “Watermarking for digital audio based on adaptive phase modulation,” in Digital Forensics and Watermarking, vol. 9023 of Lecture Notes in Computer Science, pp. 105–119, Springer, 2015.
[17] M. Unoki and R. Miyauchi, “Robust, blindly-detectable, and semi-reversible technique of audio watermarking based on cochlear delay characteristics,” IEICE Transactions on Information and Systems, vol. 98, no. 1, pp. 38–48, 2015.
[18] H. O. Oh, J. W. Seok, J. W. Hong, and D. H. Youn, “New echo embedding technique for robust and imperceptible audio watermarking,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '01), pp. 1341–1344, Salt Lake City, Utah, USA, May 2001.
[19] H. J. Kim and Y. H. Choi, “A novel echo-hiding scheme with backward and forward kernels,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 8, pp. 885–889, 2003.
[20] P. Bassia, I. Pitas, and N. Nikolaidis, “Robust audio watermarking in the time domain,” IEEE Transactions on Multimedia, vol. 3, no. 2, pp. 232–241, 2001.
[21] H. Ozer, B. Sankur, and N. Memon, “An SVD-based audio watermarking technique,” in Proceedings of the 1st ACM Workshop on Information Hiding and Multimedia Security, pp. 51–56, 2005.
[22] A. Al-Haj and A. Mohammad, “Digital audio watermarking based on the discrete wavelets transform and singular value decomposition,” European Journal of Scientific Research, vol. 39, no. 1, pp. 6–21, 2010.
[23] V. Bhat K, I. Sengupta, and A. Das, “A new audio watermarking scheme based on singular value decomposition and quantization,” Circuits, Systems, and Signal Processing, vol. 30, no. 5, pp. 915–927, 2011.
[24] P. K. Dhar and T. Shimamura, “A DWT-DCT-based audio watermarking method using singular value decomposition and quantization,” Journal of Signal Processing, vol. 17, no. 3, pp. 69–79, 2013.
[25] F. E. Abd El-Samie, “An efficient singular value decomposition algorithm for digital audio watermarking,” International Journal of Speech Technology, vol. 12, no. 1, pp. 27–45, 2009.
[26] K. V. Bhat, I. Sengupta, and A. Das, “An audio watermarking scheme using singular value decomposition and dither-modulation quantization,” Multimedia Tools and Applications, vol. 52, no. 2-3, pp. 369–383, 2011.
[27] L. Lamarche, Y. Liu, and J. Zhao, “Flaw in SVD-based watermarking,” in Proceedings of the Canadian Conference on Electrical and Computer Engineering (CCECE '06), pp. 2082–2085, Ottawa, Canada, May 2006.
[28] J. Karnjana, M. Unoki, P. Aimmanee, and C. Wutiwiwatchai, “An audio watermarking scheme based on singular-spectrum analysis,” in Digital Forensics and Watermarking, vol. 9023 of Lecture Notes in Computer Science, pp. 145–159, 2015.
[29] J. Karnjana, P. Aimmanee, M. Unoki, and C. Wutiwiwatchai, “An audio watermarking scheme based on automatic parameterized singular-spectrum analysis using differential evolution,” in Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA '15), pp. 543–551, Hong Kong, December 2015.
[30] H. Hassani, “Singular spectrum analysis: methodology and comparison,” Journal of Data Science, vol. 5, no. 2, pp. 239–257, 2007.
[31] N. Golyandina, V. Nekrutkin, and A. Zhigljavsky, Analysis of Time Series Structure: SSA and Related Techniques, Chapman and Hall/CRC, Boca Raton, Fla, USA, 2001.
[32] K. Brandenburg and G. Stoll, “ISO/MPEG-1 audio: a generic standard for coding of high-quality digital audio,” Journal of the Audio Engineering Society, vol. 42, no. 10, pp. 780–792, 1994.
[33] A. Spanias, T. Painter, and V. Atti, Audio Signal Processing and Coding, John Wiley & Sons, 2006.
[34] Y. You, Audio Coding: Theory and Applications, Springer Science & Business Media, 2010.
[35] W. Li, X. Xue, and P. Lu, “Localized audio watermarking technique robust against time-scale modification,” IEEE Transactions on Multimedia, vol. 8, no. 1, pp. 60–69, 2006.
[36] S. Wu, J. Huang, D. Huang, and Y. Q. Shi, “Efficiently self-synchronized audio watermarking for assured audio data transmission,” IEEE Transactions on Broadcasting, vol. 51, no. 1, pp. 69–76, 2005.
[37] S. Shuifa and K. Sam, “A self-synchronization blind audio watermarking algorithm,” in Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS '05), pp. 133–136, Hong Kong, December 2005.
[38] S. Wu, J. Huang, D. Huang, and Y. Q. Shi, “Self-synchronized audio watermark in DWT domain,” in Proceedings of the 2004 IEEE International Symposium on Circuits and Systems (ISCAS '04), pp. V712–V715, Vancouver, Canada, May 2004.
[39] X.-Y. Wang and H. Zhao, “A novel synchronization invariant audio watermarking scheme based on DWT and DCT,” IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4835–4840, 2006.
[40] K. Hiratsuka, K. Kondo, and K. Nakagawa, “On the accuracy of estimated synchronization positions for audio digital watermarks using the modified patchwork algorithm on analog channels,” in Proceedings of the 4th International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP '08), pp. 628–631, Harbin, China, August 2008.
[41] RWC Music Database, https://staff.aist.go.jp/m.goto/RWCMDB/.
[42] A. Lerch, Software: EAQUAL—Evaluation of Audio Quality, v.0.1.3-alpha ed., 2002.
Copyright
Copyright © 2016 Jessada Karnjana et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.