Abstract

Zero-watermarking is a solution for image copyright protection that does not tamper with images, making it suitable for medical images, which commonly do not allow any distortion. Moment-based zero-watermarking is robust against both image processing and geometric attacks, but the discrimination of watermarks is often ignored by researchers, so host images and fake host images are likely to be indistinguishable to the verifier. To this end, this paper proposes a PCET- (polar complex exponential transform-) based zero-watermarking scheme built on the stability of the relationships between moment magnitudes of the same order and between moment magnitudes of the same repetition, and the scheme can handle multiple medical images simultaneously. The scheme first calculates the PCET moment magnitudes for each image in an image group. Then, the magnitudes of the same order and the magnitudes of the same repetition are compared to obtain content-related features. The features of all the images are added together to obtain the features of the image group. Finally, the scheme extracts a robust feature vector with a chaos system and takes the bitwise XOR of the robust feature vector and a scrambled watermark to generate a zero-watermark. The scheme produces robust features that both resist various attacks and show low similarity among different images. In addition, the one-to-many mapping between magnitudes and robust feature bits reduces the number of moments involved, which not only reduces the computation time but also further improves the robustness. The experimental results show that the proposed scheme meets the performance requirements of zero-watermarking in terms of robustness, discrimination, and capacity, and it outperforms state-of-the-art methods in robustness, discrimination, and computation time under the same payloads.

1. Introduction

Medical images such as those obtained via ultrasound and magnetic resonance imaging (MRI) provide substantial evidence for clinical diagnosis, medical treatment, and research. High-speed mobile networks such as 5G and 6G have further accelerated the adoption of telemedicine and medical resource sharing. In this context, researchers are concerned with security issues such as copyright protection, ownership identification, and tamper detection. Robust watermarking, a traditional tool that establishes a security barrier for information transmitted over open networks, has promising applications in medical image research [1, 2].

Medical images display the details and lesions of tissues and organs and have extremely high quality requirements; thus, traditional robust watermarking, which embeds watermarks by modifying images, is not suitable. Zero-watermarking, also known as lossless watermarking, does not modify image content yet remains stable under attacks, making it an effective method for medical image authentication.

To offer security without tampering with images, zero-watermarking was proposed by Wen et al. [3]. Zero-watermarking extracts robust features of an image to construct a zero-watermark and stores it with a third-party authority together with any side information. When ownership or integrity is disputed, the watermark recovered from the zero-watermark and the robust features serves as proof. Robustness, discrimination, security, payload, and computational time are the essential properties of zero-watermarking. Among them, discrimination and robustness are two high-priority and conflicting properties, and a promising scheme achieves a trade-off between them.

Inspired by Wen et al., a large number of zero-watermarking schemes have appeared, and a substantial portion of them create robust features based on the frequency domain [4, 5] or hybrids of image transforms and decomposition [6–12]. Common image transforms such as the DCT (discrete cosine transform) [9, 10], DWT (discrete wavelet transform) [4, 7, 8, 11], DFT (discrete Fourier transform) [4], CAT (cellular automata transform) [5], FrFT (fractional Fourier transform) [6], and CT (contourlet transform) [12] have been applied to zero-watermarking. Utilizing the invariance of the significant values in decompositions such as SVD [6–9, 11–13] and QR [10] to further improve the robustness is common in zero-watermarking research. Zero-watermarking in the frequency domain or image decomposition is robust and widely used. However, the coefficients and significant values are susceptible to scaling and rotation attacks, placing the watermark at risk when these attacks occur.

Moments with geometrically invariant properties can be used to solve this issue and are widely applied to zero-watermarking [14]. However, these schemes have a deficiency: they construct robust features from the relation between the moment magnitudes and a single reference value, which results in similar features among different images and insufficient discrimination. Low discrimination degrades the verification credibility. To this end, this paper takes PCET (polar complex exponential transform) [15] magnitudes of the same order and PCET magnitudes of the same repetition as reference values and proposes a novel PCET-based zero-watermarking scheme with discrimination, robustness, and low time cost for multiple medical images. The main contributions of this paper can be summarized as follows:
(1) A novel robust feature generation location selection method is proposed, which constructs content-related features, improving the discrimination and reducing the possibility of false positives.
(2) The proposed robust feature generation location selection method and the resistance of the PCET magnitudes to geometric attacks are successfully applied to zero-watermarking, and a zero-watermarking scheme with robustness, discrimination, and a low time cost is proposed.
(3) The proposed scheme processes multiple medical images concurrently by superposing their robust features, which improves the efficiency and practicability of the scheme.

The rest of this paper is organized as follows. Section 2 describes previous moment-based zero-watermarking schemes and summarizes their workflows. Section 3 presents a novel robust feature generation location selection method and analyses its feasibility. The framework and procedures of the proposed scheme are presented in Section 4. Section 5 verifies the robustness, discrimination, and capacity of the proposed method and discusses the experimental results. Section 6 concludes the work.

2. Moment-Based Zero-Watermarking

Moment-based zero-watermarking is the focus of this paper. This section briefly introduces previous moment-based schemes and gives their common framework, to illustrate the rationality of the proposed robust feature generation location selection method and zero-watermarking scheme.

Orthogonal moments are divided into discrete orthogonal moments and continuous orthogonal moments, and the continuous orthogonal moments are more effective in image representation [16]. Thus, this paper mainly studies continuous orthogonal moment-based zero-watermarking. Early continuous orthogonal moments such as the Zernike moments [17] have basis functions with factorial operations, and the zero-watermarking schemes based on such moments include the Bessel-Fourier moment-based scheme [14] and the OFMM- (orthogonal Fourier-Mellin moment-) based scheme [18]. The factorial operations may limit both computational efficiency and resistance to geometric attacks. Therefore, zero-watermarking that uses harmonic-based continuous orthogonal moments, such as the PCET [19] and the PHFM (polar harmonic Fourier moments) [20], has been developed. Considering the relation among the three channels of colour images, quaternion continuous orthogonal moments such as the QPHT (quaternion polar harmonic transform) [21] and QEM (quaternion exponential moment) [22] have been applied to zero-watermarking. Furthermore, the QPHFM- (quaternion polar harmonic Fourier moment-) based scheme by Xia et al. [23] utilizes wavelet numerical integration to improve the moment accuracy and robustness and uses QR codes to encode watermarks for a large payload and security. Yang et al. [24] used the DFT to calculate QGPCET (quaternion generic polar complex exponential transform) moments to lower the computational complexity. Additionally, to solve the low discrimination issue, their scheme exploits the property that the image areas best described by the GPCET are adjustable by a parameter and mixes GPCET magnitudes computed with different parameters. Shao et al. [25] combined quaternion continuous orthogonal moments with visual cryptography to give a zero-watermarking model.

The above moment-based zero-watermarking schemes differ in their specific details, but their frameworks, as shown in Figure 1, are similar. First, these schemes construct the robust feature domain, which mainly means selecting an appropriate moment. Then, the geometric invariants and reference values are calculated as the robust feature generation location, which is crucial to the performance of zero-watermarking. Moment magnitudes are stable against attacks, making them the first choice for geometric invariants. For the reference value, most schemes select one value as the reference of all the geometric invariants. For example, [19-21, 23] select the mean of the geometric invariants, [14, 18, 24, 25] select the median of the geometric invariants, and [22] selects the Otsu threshold as the reference value (Table 1). With the invariants and reference value, the schemes construct image features. Finally, a binary robust feature vector based on the image features is created and combined with a watermark to generate a zero-watermark. Verification differs from construction in the third step, which uses the generated robust features and the zero-watermark stored with a third-party authority to recover the watermark.

It can be seen from Figure 1 that when the robust feature generation domain is fixed, the robust feature generation location, that is, the selection of geometric invariants and reference values, determines the robustness and discrimination. Although existing zero-watermarking schemes have already achieved resistance to both image processing and geometric attacks, the regularity in the value distributions of the magnitudes results in insufficient discrimination. Therefore, to achieve both discrimination and robustness, this paper proposes a new robust feature generation location selection method based on the stability of the relations between magnitudes of the same order and between magnitudes of the same repetition and, on this basis, presents a novel PCET-based zero-watermarking scheme.

3. Proposed Robust Feature Generation Location Selection Method

3.1. Selection Procedure

Denote the moment of order $n$ with repetition $l$ as $M_{n,l}$. The flowchart of the proposed robust feature generation location selection method is shown in Figure 2, and the steps are as follows (a code sketch follows the list):
(1) Calculation of magnitudes and reference values: calculate the moment magnitudes $|M_{n,l}|$ and save them as a matrix $A$ whose element $a_{i,j}$ is the magnitude of the moment whose order corresponds to row $i$ and whose repetition corresponds to column $j$. The reference values of $a_{i,j}$ are the magnitudes with column coordinates greater than $j$ in its row and the magnitudes with row coordinates greater than $i$ in its column, that is, $\{a_{i,m} \mid m > j\} \cup \{a_{m,j} \mid m > i\}$ are the reference values of $a_{i,j}$.
(2) Image feature construction: subtract from $a_{i,j}$ each of its reference values with the same row coordinate; the result is a row vector $(a_{i,j}-a_{i,j+1}, a_{i,j}-a_{i,j+2}, \ldots)$. Concatenate the $2K$ row vectors generated by the $i$th row of $A$ into one row vector $f_i^{r}$, which has $2K^2 + K$ elements. Perform similar steps on $a_{i,j}$ and its reference values with the same column coordinate, thereby obtaining a row vector $f_j^{c}$ with $2K^2 + K$ elements. The vectors $f_i^{r}$ and $f_j^{c}$ compose an image feature matrix denoted as $F$.
(3) Robust feature construction: binarize $F$ by comparing each element with zero to get the robust features $F_b$. The binarization process is
$$F_b(i,j) = \begin{cases} 1, & F(i,j) \ge 0, \\ 0, & F(i,j) < 0. \end{cases}$$
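The following is a minimal NumPy sketch of the selection procedure. It assumes the magnitudes are already arranged in a square matrix A (orders along rows, repetitions along columns, so that the row-wise and column-wise parts have equal length); the function names location_features and binarize are illustrative, not from the paper.

```python
import numpy as np

def location_features(A):
    """Build the difference-based image feature matrix of Section 3.1.

    For each element a[i, j], the magnitudes to its right in the same row and
    the magnitudes below it in the same column serve as reference values.
    """
    rows = []
    # Row-wise part: a[i, j] minus every magnitude to its right.
    for i in range(A.shape[0]):
        diffs = [A[i, j] - A[i, j + 1:] for j in range(A.shape[1] - 1)]
        rows.append(np.concatenate(diffs))
    # Column-wise part: a[i, j] minus every magnitude below it.
    for j in range(A.shape[1]):
        diffs = [A[i, j] - A[i + 1:, j] for i in range(A.shape[0] - 1)]
        rows.append(np.concatenate(diffs))
    return np.stack(rows)            # image feature matrix F

def binarize(F):
    """Binarize F by comparing each element with zero."""
    return (F >= 0).astype(np.uint8)

# Example: a (2K+1) x (2K+1) magnitude matrix with K = 8 yields
# 2*(2K+1) rows of 2K^2 + K = 136 elements each, 4624 bits in total.
K = 8
A = np.abs(np.random.randn(2 * K + 1, 2 * K + 1))   # placeholder magnitudes
F = location_features(A)
print(F.shape, binarize(F).sum())
```

If the magnitude matrix were not square, the row-wise and column-wise parts would have different lengths and would have to be flattened into one vector instead of stacked into a matrix.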

3.2. Performance Comparison

Robust features of images should be stable against various attacks and closely related to the image contents. Most existing moment-based zero-watermarking schemes choose the median or average of the moment magnitudes as the reference value, resulting in low discrimination. To verify the performance of the proposed robust feature generation location selection method, this section designs three robust feature generation location selection methods based on the existing reference value selection methods (see Table 1) and compares them with the proposed method. The first method calculates the PCET magnitudes of images and uses the median of the magnitudes as the reference value to generate robust features. The second method also uses the PCET magnitudes as the geometric invariants but uses the mean of the magnitudes as the reference value. Inspired by Yang et al. [24], the third method calculates GPCET magnitudes with the parameter s set to 0.5, 1, 2, and 3 and then uses the mean of the medians of the four magnitude vectors as the reference value to construct the robust features. Robust features with sizes of 1000, 2000, 3000, and 4000 generated by the four methods are compared. The test set consists of 2125 medical images (1068 of size 128 × 128, 204 of size 256 × 256, 384 of size 160 × 128, and 469 of size 512 × 512). The Hamming distance is the quantitative measure of discrimination in this section: the larger the distance between the features of different images, the better the discrimination. The BER (bit error rate) under two attacks, JPEG compression (QF = 30) and rotation (−5°), is used to measure the robustness. The comparative results of the Hamming distance and BER for the four methods are shown in Figures 3 and 4. The Hamming distance between two binary feature vectors $F_1$ and $F_2$ of length $L$ and the BER of an extracted watermark $W'$ with respect to the original watermark $W$ of length $N$ are defined as
$$D_H(F_1, F_2) = \sum_{i=1}^{L} F_1(i) \oplus F_2(i), \qquad \mathrm{BER}(W, W') = \frac{1}{N}\sum_{i=1}^{N} W(i) \oplus W'(i).$$
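As a quick reference, both metrics can be computed from binary vectors as in the following sketch; the helper names hamming_distance and ber are illustrative.

```python
import numpy as np

def hamming_distance(f1, f2):
    """Number of positions at which two binary feature vectors differ."""
    return int(np.sum(f1 != f2))

def ber(w, w_extracted):
    """Bit error rate: fraction of watermark bits recovered incorrectly."""
    return float(np.mean(w != w_extracted))

# Example with random binary vectors of length 4096 (a 64 x 64 watermark).
rng = np.random.default_rng(0)
a, b = rng.integers(0, 2, 4096), rng.integers(0, 2, 4096)
print(hamming_distance(a, b), ber(a, b))
```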

Figure 3 shows the comparative results of the average Hamming distance. For the proposed method, the ratio of the Hamming distance to the robust feature size is more than 30%, at least 6.1% higher than the average of the other three methods. Although the third method mixes GPCET magnitudes with different parameters to improve the discrimination, its average Hamming distance is still 3.0% lower than that of the proposed method.

After the compression and rotation attacks, the comparative results of the average BER are shown in Figure 4. As can be seen, compared with the other three methods, the BER values of the proposed method are low for both JPEG compression and rotation. Specifically, the average BER for JPEG compression is 0.027, while the highest and lowest average BERs of the other three methods are 0.074 and 0.031, respectively. The average BER for rotation is 0.029, while the highest and lowest average BERs of the other three methods are 0.091 and 0.041, respectively. Therefore, the robustness of the proposed selection method is superior to that of the other three methods.

In summary, the proposed robust feature generation location selection method has both discrimination and robustness. Its application in the moment-based zero-watermarking is presented in Section 4.

4. Proposed Zero-Watermarking Scheme

To save computational resources and improve practicability, the proposed scheme processes medical images in groups. The zero-watermark generation and verification for an image group are introduced in this section, and Figure 5 shows the framework of the zero-watermark generation process.

4.1. Zero-Watermark Generation

For the image group $G = \{I_1, I_2, \ldots, I_T\}$ and a binary watermark $W$ of length $L$, the steps of zero-watermark generation are as follows (a code sketch follows the list):
(1) Robust feature generation domain selection. For the $t$th image, the PCET moments are calculated, and the moment of order $n$ with repetition $l$ is denoted as $M_{n,l}^{t}$.
(2) Robust feature generation location selection.
(2.1) Apply the proposed robust feature generation location selection method to each image of $G$. Extract the image feature matrix of each image and denote it as $F_t$; the extraction steps are given in Section 3.
(2.2) Superposition of image features. Repeat step 2.1 to get the $T$ image feature matrices $F_1, F_2, \ldots, F_T$, and add them together based on the principle of matrix addition:
$$F = \sum_{t=1}^{T} F_t.$$
(3) Construction of the zero-watermark.
(3.1) Robust feature generation. Binarize $F$ by comparing each element with zero, and denote the resulting binary sequence as $F_b$. The specific binarization process is
$$F_b(i,j) = \begin{cases} 1, & F(i,j) \ge 0, \\ 0, & F(i,j) < 0. \end{cases}$$
(3.2) Robust feature scrambling and selection: set the initial value of a chaos system such as the logistic map to $x_0$ to get a pseudorandom number sequence $X$ whose length equals the number of elements of $F_b$. $x_0$ is adopted as secret key 1 (Key1). The subscript sequence $I$ is obtained by sorting $X$ in ascending or descending order. $F_b$ is transformed into a one-dimensional vector and then is reordered according to $I$. The first $L$ elements of the reordered vector are the robust features of the image group and are denoted as $F_s$.
(3.3) Watermark scrambling: set the initial value of the chaos system to $y_0$ to get a pseudorandom number sequence $Y$. $y_0$ is secret key 2 (Key2). Similar to step 3.2, the binary watermark $W$ is scrambled according to the subscript sequence of the sorted $Y$ and is denoted as $W_s$.
(3.4) Zero-watermark generation: perform the bitwise XOR operation on the robust features $F_s$ and the scrambled watermark $W_s$ to get the final zero-watermark $ZW$. The $i$th bit of the zero-watermark is computed as
$$ZW(i) = F_s(i) \oplus W_s(i), \quad i = 1, 2, \ldots, L.$$
(4) $ZW$ is transmitted to a third-party authority together with the secret keys Key1 and Key2.
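The following is a minimal sketch of steps 3.1-3.4 in NumPy. It assumes the logistic map with parameter mu = 3.99 as the chaos system; the helper names logistic_sequence and generate_zero_watermark are illustrative rather than the authors' implementation.

```python
import numpy as np

def logistic_sequence(x0, length, mu=3.99):
    """Pseudorandom sequence from the logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    seq, x = np.empty(length), x0
    for k in range(length):
        x = mu * x * (1.0 - x)
        seq[k] = x
    return seq

def generate_zero_watermark(F, watermark, key1, key2):
    """Steps 3.1-3.4: binarize, scramble and select robust features, scramble watermark, XOR."""
    Fb = (F >= 0).astype(np.uint8).ravel()            # step 3.1: binarization
    I = np.argsort(logistic_sequence(key1, Fb.size))  # step 3.2: key-driven permutation
    Fs = Fb[I][:watermark.size]                       # keep the first L scrambled bits
    Iw = np.argsort(logistic_sequence(key2, watermark.size))
    Ws = watermark.ravel()[Iw]                        # step 3.3: watermark scrambling
    return np.bitwise_xor(Fs, Ws)                     # step 3.4: ZW(i) = Fs(i) XOR Ws(i)

# Toy usage: a random "image feature" matrix and a 64 x 64 binary watermark.
rng = np.random.default_rng(1)
F = rng.standard_normal((34, 136))
W = rng.integers(0, 2, (64, 64), dtype=np.uint8)
ZW = generate_zero_watermark(F, W, key1=0.3141, key2=0.2718)
print(ZW.shape)   # (4096,)
```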

4.2. Zero-Watermark Verification

(1) Robust feature generation domain and location selection. These are the same as Step 1 and Step 2 of the zero-watermark generation.
(2) Watermark generation.
(2.1) Robust feature generation and scrambling: with the secret key Key1 obtained from the authority, the robust features $F_s'$ of the potentially distorted host images are generated. The generation method of the robust features is the same as in the zero-watermark generation.
(2.2) Generation of the scrambled watermark: perform the bitwise XOR on the zero-watermark $ZW$ obtained from the authority and $F_s'$, and denote the resulting binary sequence as $W_s'$, where the $i$th bit is generated as follows:
$$W_s'(i) = ZW(i) \oplus F_s'(i), \quad i = 1, 2, \ldots, L.$$
(2.3) Watermark recovery: use Key2 obtained from the authority to descramble $W_s'$, thereby obtaining a binary watermark $W'$ for image authentication. A code sketch of the verification side is given below.
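A matching sketch of the verification side, under the same assumptions as the generation sketch (logistic map as the chaos system, illustrative helper names): the stored zero-watermark is XORed with the regenerated robust features and the result is descrambled with Key2.

```python
import numpy as np

def logistic_sequence(x0, length, mu=3.99):
    """Logistic-map pseudorandom sequence, identical to the generation side."""
    seq, x = np.empty(length), x0
    for k in range(length):
        x = mu * x * (1.0 - x)
        seq[k] = x
    return seq

def verify(ZW, F_test, key1, key2, wm_shape):
    """Recover a watermark from the stored zero-watermark and a (possibly attacked) image group."""
    Fb = (F_test >= 0).astype(np.uint8).ravel()
    I = np.argsort(logistic_sequence(key1, Fb.size))
    Fs = Fb[I][:ZW.size]                     # robust features of the test images
    Ws = np.bitwise_xor(ZW, Fs)              # step 2.2: W_s'(i) = ZW(i) XOR F_s'(i)
    Iw = np.argsort(logistic_sequence(key2, ZW.size))
    W_rec = np.empty_like(Ws)
    W_rec[Iw] = Ws                           # step 2.3: inverse scrambling with Key2
    return W_rec.reshape(wm_shape)           # pairs with generate_zero_watermark above
```

If F_test equals the feature matrix used at generation time, verify returns the original watermark bit for bit; attacks or fake host images show up as a nonzero BER.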

5. Experimental Results and Discussion

This section first comprehensively evaluates the robustness, discrimination, and capacity of the proposed scheme. Then, a comparison with other schemes is performed in terms of discrimination, robustness, and computational efficiency.

5.1. Experimental Settings

The test images comprise 10 grayscale medical images of size 256 × 256 and 345 grayscale medical images of size 512 × 512 from the TCIA library. The watermarks include 4 logo images and 8 randomly generated binary sequences. Both the discrimination and the robustness can be measured using the BER defined in Section 3.2. The PCET-based zero-watermarking [19], the QPHFM-based zero-watermarking [20], and the QGPCET-based zero-watermarking [24] are selected for performance comparison. Because the performance of moment-based zero-watermarking is related to the maximum order of the moment, this section sets the maximum order K to the minimum allowed. When the watermark size is 64 × 64, the minimum values of K for scheme [19], scheme [20], scheme [24], and the proposed scheme are 37, 45, 16, and 8, respectively. The specific experimental parameters are shown in Table 2.

5.2. Robustness

Robustness is one of the important properties of zero-watermarking and is inversely related to the BER. To verify the robustness of the proposed scheme against both common image processing and geometric attacks, 130 images of size 512 × 512, 10 images of size 256 × 256, and 4 watermarks of sizes 32 × 32, 48 × 32, 48 × 48, and 64 × 64 are chosen to test the resistance against JPEG compression (QF = 30), rotation (−5°), scaling (1/2), and salt-and-pepper noise (0.01). Figures 6 and 7 show the correct extraction rate Rc = Nc/N and the BER of the extracted watermarks, respectively, where Nc is the number of correct watermark bits and N is the total number of watermark bits.
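For reproducibility, the four attacks can be simulated as in the following sketch (PIL and NumPy); the parameter values match the text, while everything else, including the function name apply_attacks, is illustrative rather than the exact setup used in the experiments.

```python
import io
import numpy as np
from PIL import Image

def apply_attacks(img_array):
    """Return attacked copies of a grayscale image: JPEG (QF=30), rotation (-5 deg),
    scaling (1/2), and salt-and-pepper noise (density 0.01)."""
    img = Image.fromarray(img_array)
    attacked = {}

    buf = io.BytesIO()                       # JPEG compression with quality factor 30
    img.save(buf, format="JPEG", quality=30)
    attacked["jpeg_30"] = np.array(Image.open(buf))

    attacked["rot_-5"] = np.array(img.rotate(-5))        # rotation by -5 degrees

    w, h = img.size                                       # scaling to 1/2 of the size
    attacked["scale_0.5"] = np.array(img.resize((w // 2, h // 2)))

    noisy = img_array.copy()                              # salt-and-pepper, density 0.01
    mask = np.random.default_rng(0).random(img_array.shape)
    noisy[mask < 0.005] = 0
    noisy[mask > 0.995] = 255
    attacked["sp_0.01"] = noisy
    return attacked
```

The correct extraction rate Rc = Nc/N is then obtained by running the verification on each attacked copy and comparing the recovered watermark with the original.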

When 10 images of size 512 × 512 and 10 images of size 256 × 256 are divided into 10 groups (2 images in each group), the relationship between the correct extraction rate and the watermark size can be seen in Figure 6. As can be seen, the correct extraction rates are close to 1. In particular, the correct extraction rates under rotation and scaling are higher than 0.99. These data show that the proposed scheme maintains good resistance to different attacks under different payloads.

Figure 7 shows the relationship between the average BER and the number T of images in each group, with a 64 × 64 watermark and 120 images of size 512 × 512. The mean and standard deviation of the BER are 0.0132 and 0.0013 for compression, 0.0034 and 0.0003 for rotation, 0.0046 and 0.0005 for scaling, and 0.0121 and 0.0031 for salt-and-pepper noise, respectively. The highest point of the four curves, 0.0178, corresponds to salt-and-pepper noise. Thus, the robustness of the proposed scheme is not affected by the number of images in each group.

5.3. Discrimination

Another attribute of zero-watermarking is discrimination, which is proportional to the BER of the watermarks extracted from fake host images. To verify that the proposed scheme achieves significant discrimination, this section uses 10 images of size 512 × 512 to construct 10 image groups numbered 1 to 10 and executes 10 zero-watermark generation processes and 10 × 10 zero-watermark verification processes. The BERs obtained from the 10 × 10 verification processes are presented in Table 3.

The row headings of Table 3 indicate the host image number, and the column headings indicate the image number used in the verification process. When the generation and verification processes are performed on the same image, the BERs (the data on the diagonal) are zero; when they are performed on different images, the average and minimum BERs are 0.3631 and 0.2629, respectively. As can be seen, the proposed scheme can distinguish “true” and “false” images in the verification process.

5.4. Capacity

It is known that the robust features of the proposed scheme are generated from moments. Therefore, the number of robust feature bits increases with the maximum order K, and a theoretically unlimited capacity for the proposed scheme can be inferred. A scheme with broad application prospects should achieve a trade-off between capacity, robustness, and discrimination. To verify that the robustness and discrimination of the proposed scheme are not degraded by a high capacity, Figure 8 shows the relationship between the capacity and the robustness and between the capacity and the discrimination, based on 10 groups (T = 1) of 512 × 512 images and binary watermarks with different sizes.

Figure 8(a) presents the BERs corresponding to four attacks: compression (QF = 30), rotation (−5°), scaling (1/2), and salt-and-pepper noise (0.01). Figure 8(b) shows the average BER when one group is regarded as the host image group and the other 9 groups are used for verification. The robustness decreases slightly as the capacity increases, and the maximum BER is 0.0407. The discrimination increases slightly with increasing capacity, and the minimum BER is 0.3218. These results demonstrate that increasing the capacity does not degrade the robustness or the discrimination.

5.5. Comparison with Other Schemes

This section compares the performance of the proposed scheme with that of [19, 20, 24] on 215 test images in terms of robustness, discrimination, and computation time. For robustness, this section chooses 22 representative attacks, including common image processing attacks such as compression and geometric distortions such as rotation. The comparative results are presented in Tables 4-7. Because [20] takes 3 images as one group, its results come from 70 image groups. In addition, Table 8 presents the watermarks generated by [20] and the proposed scheme when the 3 images of one group are rotated by 90°, 45°, and 0°, respectively. A zero-watermarking scheme with high discrimination should distinguish not only original fake host images but also attacked fake host images. Table 9 presents the average BER when 210 attacked images are used as fake host images. Table 10 shows the computation time of the four schemes in the same software environment (MATLAB 2018) and hardware environment (2.90 GHz processor, 8 GB RAM). In Section 5, the moment calculation in the simulation experiments is based on the ZOA (zeroth-order approximation), except for that of [24]. For the schemes that process images in groups, the computation time per image is the execution time of its group divided by the number of images in the group. The group size of the proposed scheme is set to 1, 3, and 5, so the 210 test images are divided into 210 groups, 70 groups, and 42 groups, respectively. The watermark used in this section is of size 64 × 64.

It can be seen from Tables 4-7 that the robustness of the proposed scheme is significantly higher than that of the other three schemes for most attacks. For example, the BER of the proposed scheme is lower than 0.004 for a counterclockwise rotation of 3°, while the minimum BER of the other three schemes is 0.0103. Although all four schemes are based on harmonic-based continuous orthogonal moments, the proposed scheme assigns multiple reference values to each magnitude to build a one-to-many relationship between magnitudes and robust feature bits, which lowers the maximum order K of the moments involved in zero-watermarking. Therefore, the proposed scheme improves the robustness significantly by utilizing low-order moments to construct robust features.

Xia et al. [20] utilize the resistance of quaternion moment magnitudes to construct QPHFM-based robust features for three images at the same time. However, binding three images together may affect the robustness against rotation. In Table 8, when the three images of a group are rotated by 90°, 45°, and 0°, respectively, the BERs of the watermarks generated by the proposed scheme and [20] are 0.0017 and 0.1006, respectively. As can be seen, when the images of a group are rotated by different angles, the robustness of [20] is obviously degraded: [20] takes the three images rotated by different angles as the three imaginary parts of a quaternion to calculate a quaternion moment, which destroys the stability of the quaternion moment magnitudes. The proposed scheme calculates the moments for each image in a group separately before constructing the robust features of the group; thus, it not only can change the group size but also maintains robustness when the images in one group are rotated by different angles.

To verify the discrimination of the proposed scheme, Table 9 gives the average BER of the watermarks obtained from the distorted fake host images. The BERs of [19, 20] are significantly lower than that of the proposed scheme. Yang et al. [24] mixed QGPCET moments with different parameters to strengthen the correlation between robust features and host images, but the discrimination is still weaker than that of the proposed scheme for attacks other than Gaussian noise. Therefore, the robust feature generation method based on the relationship between magnitudes of the same order and the relationship between magnitudes of the same repetition is effective in improving the discrimination. In addition, the discrimination of the proposed scheme is independent of the number of images in each group.

When the moments of an M × N image are calculated by the ZOA, the time complexity for the PCET, PHFM, and GPCET is $O(K^2MN)$, where K is the maximum order. Therefore, their computation time is closely related to K. Table 10 shows that the proposed scheme is superior to the other schemes in terms of computation time. The computation time of [19] is the highest because its K is 37. Although [20] can process three images at the same time, its quaternion PHFM moment calculation includes the calculation of the PHFM moments with maximum order 45 for three images. Yang et al. [24] use the DFT to reduce the computation time of the GPCET moments, and its time efficiency is the highest among the other three schemes. However, it is still lower than that of the proposed scheme. The proposed scheme sets the maximum order of the moments to 8, which substantially lowers its time cost.
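As a rough illustration, assuming the $O(K^2MN)$ estimate above and ignoring the constant factors that differ between the moment families, the per-image moment cost of the ZOA-based schemes scales roughly with the square of the maximum order:
$$\frac{T_{[19]}}{T_{\text{proposed}}} \approx \left(\frac{37}{8}\right)^2 \approx 21.$$
This estimate does not apply to [24], whose GPCET moments are computed via the DFT, and the measured times also depend on the grouping strategy and implementation details.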

6. Conclusions

To solve the low discrimination issue that is common in previous moment-based zero-watermarking, this paper proposed a new robust feature generation location selection method that regards the magnitudes of the same order and the magnitudes of the same repetition as the reference values of each magnitude. This avoids the low discrimination caused by assigning a single reference value to all the magnitudes and improves the robustness and computational efficiency by reducing the number of magnitudes involved. Based on the proposed robust feature generation location selection method, this paper further proposed a PCET-based zero-watermarking scheme for multiple medical images. The experimental results indicate the significance of this work in terms of discrimination, robustness, capacity, and time cost. In the future, we will extend the robust feature construction method to other moments and apply it to other image processing domains.

Data Availability

The pseudocode used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 62002387 and 61872448) and the Science and Technology Innovation Talent Project of Henan Province (No. 184200510018).