Advances in AI-related Information Forensics and Security (Special Issue)
New Framework of Self-Embedding Fragile Watermarking Based on Reference Sharing Mechanism
In this paper, we propose a new self-embedding framework based on a reference sharing mechanism. The framework is highly flexible: it can not only estimate the optimal recovered image quality for a given tampering rate but also estimate the largest tampering rate the framework can resist for a given peak signal-to-noise ratio (PSNR) of the recovered image. When the tampering rate is given, we first calculate the largest number of character bits and then allocate an appropriate number of character bits according to the complexity of each image block to achieve the optimal recovered image quality. When the PSNR of the recovered image is given, the number of character bits is minimized subject to the corresponding constraints to achieve the largest tolerable tampering rate. Experimental results show the flexibility, effectiveness, and superiority of the proposed scheme compared with some reported schemes.
With the rapid development of digital communication and multimedia tools, we enjoy a wealth of information from every aspect of life. While this brings us the convenience of communication, it is also associated with information security issues: multimedia information may suffer tampering or manipulation during transmission. Therefore, authenticating the integrity and authenticity of multimedia data is vital in communication [1–3]. Image authentication and content protection are currently hot research topics. Many techniques are applied to verify the authenticity and integrity of images, such as reversible data hiding [4–6], perceptual image hashing, and fragile digital watermarking [8–35]. This paper mainly studies fragile digital watermarking. According to their functions, fragile watermarking schemes can be divided into two categories: tampering localization schemes [8–14] and self-embedding schemes [15–35]. The former detect and locate the tampered regions of an image, while the latter are designed to recover the image content to its original state in addition to identifying the tampered regions.
Walton proposed the first fragile watermarking scheme for tamper authentication. This scheme calculates the checksum of randomly selected pixels as a watermark and embeds it in the least significant bit (LSB). To resist the vector quantization (VQ) attack, Celik et al. proposed a hierarchical watermarking scheme for secure image authentication; the VQ attack is resisted by sharing the high-level signature at the low level. Zhang and Wang proposed a statistical fragile watermarking scheme to locate tampered regions with pixelwise accuracy. The watermark data of this scheme consist of tailor-made authentication data for each pixel and some additional test data that can be used to reveal the exact pattern of the tampered content. To improve the ability to detect tampered regions with equal modification in brightness, Hong et al. embedded the hash value of block features instead of the block-independent authentication code (AC) used in other methods, avoiding the undetectable tampered regions of prior works. However, in many real applications, just detecting tampering is not enough; it is highly desirable to recover the original content of the tampered regions. Therefore, many researchers have investigated ways of recovering the original content after tampering has been detected.
Fridrich and Goljan developed a fragile watermarking scheme with self-recovery capability and proposed the first self-embedding watermarking, which encoded the discrete cosine transform (DCT) coefficients of each block and embedded them into other blocks. When tampered blocks were detected, they could be recovered from the bits extracted from intact blocks. Zhu et al. proposed to use the exclusive-or (XOR) between a pseudorandom sequence and a polar sequence of DCT coefficients as a watermark. In such methods, data representing the principal content of a region are always hidden in a different region within the image; if both regions are tampered with, the restoration fails. This is called the tampering coincidence problem. To solve it, an effective dual watermark scheme was proposed in which there are two copies of the watermark for each nonoverlapping block in the image, thereby providing a second chance for block recovery in case one copy is destroyed. A fragile watermarking scheme with error-free restoration capability was also proposed, which achieves lossless recovery through the combination of an ingenious watermark design and a difference expansion algorithm; however, a necessary condition for perfect image restoration is that the proportion of tampered content be less than 3.2%.
Zhang et al. proposed a self-embedding fragile watermarking scheme based on a reference sharing mechanism, in which the watermark in the tampered area can be recovered accurately under a certain tampering rate; it will be introduced in Section 2. Based on the reference sharing mechanism, an adaptive scheme was later proposed that has two embedding modes, overlapping-free embedding and overlapping embedding. Moreover, it adaptively selects between the most significant bit (MSB) and LSB layers for embedding according to different tampering rates, resulting in better image recovery quality. In 2018, Qin et al. proposed a self-recovery scheme based on a nonuniform reference sharing mechanism. An optimal iterative block truncation coding (OIBTC) algorithm is used to generate recovery bits, including binary patterns and reconstruction levels, and a nonuniform sharing mechanism is used to interleave these recovery bits. The recovered image quality is in the range [31, 40] dB for tamper ratios of less than 50%. Since the traditional manner of concealing image content within the image is inflexible and fragile to diverse digital attacks, such as image cropping and JPEG compression, Ying et al. proposed a novel self-embedding algorithm based on deep learning: a tamper-resilient generative scheme that jointly trains a U-Net-backboned encoder, a tamper localization network, and a decoder for image recovery.
Self-embedding based on a reference sharing mechanism is effective in solving the tampering coincidence problem; however, existing algorithms can only achieve a fixed recovered quality under different tampering rates, and more flexible watermark embedding operations according to user customization are not possible. In some application scenarios, the user only needs an acceptable recovery quality when the tampering is large, and at the same time, the user may wish to estimate the largest tolerable tampering rate when specifying the recovery quality. To this end, we propose a new framework of self-embedding fragile watermarking based on the reference sharing mechanism. The main contributions of this work are summarized as follows:

(1) Our framework fully considers user preference and customization for tampering recovery; that is, the optimal recovered quality and the largest tolerable tampering rate can be achieved under given conditions by the proposed scheme.

(2) When the possible tampering rate is given, we can estimate the optimal visual quality of the recovered image by designing an optimization algorithm to obtain the largest number of character bits. Specifically, different numbers of character bits generated by different methods are adaptively allocated to the corresponding image blocks according to block complexity, while successful tampering recovery under the given tampering rate is still guaranteed.

(3) When the lowest acceptable PSNR value for the recovered image is decided, we can achieve the largest tolerable tampering rate by minimizing the total number of character bits used for tampering recovery.
The remainder of this paper is organized as follows. Section 2 describes the baseline of the reference sharing mechanism and the motivation of our work. Our scheme and framework are proposed in Section 3. Theoretical analysis is given in Section 4. Section 5 presents experimental results and comparisons. Finally, we conclude this paper in Section 6.
In this section, the basic idea of the reference sharing mechanism for image tampering recovery is first introduced, and then its limitations and the motivations of our work are presented.
2.1. Reference Sharing Mechanism
For an original image I sized N1 × N2, the 5 MSB layers of all N = N1 × N2 pixels are collected as character bits for reference bits generation, and the 3 LSB layers are set to zero for embedding the watermark bits, which include authentication bits and reference bits.
Specifically, all 5N character bits from the 5 MSB layers are first permuted by a secret key and divided into M = 5N/L subsets, each containing L bits, that is, cm,1, …, cm,L. For each subset, L/2 reference bits can be calculated as

[rm,1, rm,2, …, rm,L/2]^T = Am · [cm,1, cm,2, …, cm,L]^T, (1)

where Am is a pseudorandom L/2 × L binary matrix derived from the secret key and the arithmetic in equation (1) is modulo-2. Then, all 5N/2 generated reference bits are permuted and divided into N/64 groups, each containing 160 bits. The original image is divided into N/64 blocks sized 8 × 8, and the 160 reference bits in each group correspond to one block. In addition, 32 hash-based authentication bits for each 8 × 8 block are generated by feeding the 5 MSBs of the block and its 160 corresponding reference bits into the hash function. According to the secret key, the 160 reference bits and 32 authentication bits are permuted to form the 192 watermark bits, which are embedded into each block by LSB replacement. After all blocks are processed with the above steps, the watermarked image is obtained.
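As an illustration, the modulo-2 expansion of equation (1) can be sketched in a few lines; the function name, the key-derived PRNG, and the toy subset size L = 8 are our own illustrative choices, not part of the original specification.

```python
import random

def generate_reference_bits(character_bits, key):
    """Generate L/2 reference bits from L character bits, as in equation (1).

    A pseudorandom (L/2) x L binary matrix A_m is derived from the secret
    key, and the matrix-vector product A_m * c_m is taken modulo 2.
    """
    L = len(character_bits)
    rng = random.Random(key)  # key-derived PRNG (illustrative)
    A = [[rng.randint(0, 1) for _ in range(L)] for _ in range(L // 2)]
    # Each reference bit is the XOR (parity) of the character bits
    # selected by one row of A_m.
    return [sum(a * c for a, c in zip(row, character_bits)) % 2 for row in A]

# Toy example: L = 8 character bits expanded into 4 reference bits.
c = [1, 0, 1, 1, 0, 0, 1, 0]
r = generate_reference_bits(c, key=42)
```

Because the matrix is derived deterministically from the key, the receiver can regenerate the same Am for recovery.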
After receiving a suspicious image, the integrity of each block can be judged by comparing the extracted authentication bits with recalculated ones. If they differ, the corresponding block is judged as tampered; otherwise, it is judged as intact. The character bits of tampered blocks can be recovered from the reference bits extracted from intact blocks:

[rm,e(1), rm,e(2), …]^T = A′m · [cm,1, cm,2, …, cm,L]^T, (2)

where rm,e(1), rm,e(2), … are the reference bits extracted from intact blocks (their number may be less than L/2 due to tampering) and A′m is the matrix composed of the rows of Am corresponding to the extracted reference bits. Then, according to whether the character bits come from tampered or intact blocks, equation (2) can be reformulated as

A′T · CT = [rm,e(1), rm,e(2), …]^T + A′R · CR, (3)

with all arithmetic modulo-2, where CT is a column vector consisting of the nT character bits from tampered blocks, CR is a column vector consisting of the L − nT character bits from intact blocks, A′T and A′R are the matrices composed of the columns of A′m corresponding to the bits in CT and CR, respectively, and the number of columns of A′T is nT. Thus, the nT unknown character bits can be obtained by solving the equations in the binary system, and the tampered blocks can be recovered if and only if equation (3) has a unique solution.
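The recovery in equation (3) reduces to solving a binary linear system. A minimal Gaussian elimination sketch over GF(2) follows; it is illustrative, not the authors' implementation, and a real decoder would first assemble the coefficient matrix from the shared matrix Am.

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.

    A is a list of rows (lists of 0/1) and b a list of 0/1.  Returns the
    unique solution if the columns of A are linearly independent and the
    system is consistent; otherwise returns None.
    """
    rows, cols = len(A), len(A[0])
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    r = 0
    for c in range(cols):
        # Find a pivot row for column c.
        pr = next((i for i in range(r, rows) if M[i][c]), None)
        if pr is None:
            return None  # dependent column: no unique solution
        M[r], M[pr] = M[pr], M[r]
        # XOR the pivot row into every other row with a 1 in column c.
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        r += 1
    # Remaining rows must be all-zero for the system to be consistent.
    if any(M[i][cols] for i in range(r, rows)):
        return None
    return [M[i][cols] for i in range(cols)]
```

For example, the overdetermined system with rows (1,0), (1,1), (0,1) and right-hand side (1,0,1) has the unique solution (1,1).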
The probability that equation (3) has a unique solution was also analyzed: a unique solution exists if and only if the nT columns of A′T are linearly independent. The probability q(x, y) that the y columns of a random binary matrix sized x × y are linearly dependent can be calculated as

q(x, y) = 1 − ∏_(i=0)^(y−1) (1 − 2^(i−x)) for y ≤ x, and q(x, y) = 1 for y > x. (4)
Assume the tampering rate is γ; then, the number nE of reference bits extracted from intact blocks follows a binomial distribution:

P(nE = k) = C(L/2, k) · (1 − γ)^k · γ^(L/2−k), k = 0, 1, …, L/2. (5)
The number nT of unknown character bits from tampered blocks also follows a binomial distribution:

P(nT = k) = C(L, k) · γ^k · (1 − γ)^(L−k), k = 0, 1, …, L. (6)
The probability that the nT columns of A′T are linearly independent can then be calculated by averaging over these two distributions:

Ps = Σ_(nE=0)^(L/2) Σ_(nT=0)^(L) P(nE) · P(nT) · [1 − q(nE, nT)]. (7)
Thus, the probability that the unknown character bits in all M subsets, that is, all 5 MSBs, are recovered is

PR = (Ps)^M. (8)
Specifically, for an original image sized N = 512 × 512, when L = 512, the probability PR is essentially equal to 1 for tampering rates γ not greater than 24%.
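The recovery probability of equations (4)–(8) can be evaluated numerically. The sketch below follows the symbols of the text; the small parameters in the example are chosen only so that the computation runs quickly.

```python
from math import comb, prod

def q(x, y):
    """Probability that y random columns in GF(2)^x are linearly dependent."""
    if y > x:
        return 1.0
    return 1.0 - prod(1.0 - 2.0 ** (i - x) for i in range(y))

def recovery_probability(gamma, L, N):
    """P_R: probability that all M = 5N/L subsets are recoverable."""
    M = 5 * N // L
    half = L // 2
    p_subset = 0.0
    # Average [1 - q(n_E, n_T)] over the two binomial distributions.
    for nE in range(half + 1):
        pE = comb(half, nE) * (1 - gamma) ** nE * gamma ** (half - nE)
        if pE == 0.0:
            continue
        for nT in range(L + 1):
            pT = comb(L, nT) * gamma ** nT * (1 - gamma) ** (L - nT)
            if pT:
                p_subset += pE * pT * (1.0 - q(nE, nT))
    return p_subset ** M
```

The same computation with L = 512 and N = 512 × 512 can be used to check that PR stays near 1 for γ up to about 24%, as stated above, though that evaluation takes noticeably longer.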
2.2. Motivation of Our Work
Based on the above analysis, we identify an important parameter: the ratio by which the L character bits in each subset are expanded into reference bits, which we call the expansion coefficient ε; that is, ε is the ratio between the number of reference bits and the number of character bits. In the original scheme, ε is fixed at 1/2. However, the probability PR of successful tampering recovery is closely related not only to the tampering rate γ, the number of character bits L in each subset, and the image size N but also to the expansion coefficient ε. To ensure successful tampering recovery, PR(γ, L, N, ε) = 1 is used as a necessary condition of the objective function in our work.
In the original scheme, the number of character bits per subset and the expansion coefficient are fixed, so the largest tolerable tampering rate is always 24% for an original image sized N = 512 × 512. Even when the actual tampering rate is smaller than 24%, the PSNR of the recovered image is fixed at 40.7 dB. That is, this scheme is inflexible with respect to the tolerable tampering rate and the recovered image quality, which makes it inapplicable to scenarios with larger tampering rates or customized recovered quality: when a larger tampering rate or a target recovery quality is given, the user hopes to obtain the optimal recovery quality or the largest tolerable tampering rate under the corresponding conditions, but the original algorithm cannot realize this function. Considering the computational complexity, and that a PSNR of 40.7 dB already achieves a good visual effect, our work mainly studies optimizing the recovered image quality under large tampering rates and estimating the largest tolerable tampering rate when the given PSNR is less than 40.7 dB. To this end, the motivation of our work is to achieve flexible self-embedding based on a general reference sharing mechanism, which involves two main aspects:

(1) When the largest tolerable tampering rate is given, how can the best visual quality of the recovered image be achieved? From the analysis in Section 2.1, the quality of the recovered image is almost proportional to the total number of character bits, which can also be verified from Table 1. However, the largest possible number of reference bits is decided by the number t of LSB layers used for embedding; that is, when the tampering rate is given and the tampered image is required to be recoverable, the largest possible number of character bits is fixed. Therefore, we first calculate the largest allowed number of character bits and then allocate an appropriate number of character bits according to the complexity of each image block. The problem of achieving the highest PSNR of the recovered image under the given tampering rate can thus be transformed into two optimization problems: (a) maximize the number of character bits; (b) for blocks of different complexity, allocate an appropriate number of character bits to each block so as to improve the quality of the recovered image.

(2) When the lowest requirement on the visual quality of the recovered image is given, how can the largest tolerable tampering rate be achieved? First, satisfy the given recovered image quality, that is, the PSNR, with the least number of character bits. Then calculate the expansion coefficient ε from the number of reference bits and this least number of character bits. Finally, the largest tolerable tampering rate can be estimated according to equations (2)–(8). Thus, the problem of achieving the largest tolerable tampering rate under a given lowest PSNR requirement can be transformed into one optimization problem: minimize the number of character bits subject to satisfying the required recovered quality.
3. Proposed Framework
To solve the two problems raised in Section 2.2, we propose a general framework of self-embedding fragile watermarking based on a reference sharing mechanism; the overall framework of our scheme is shown in Figure 1. Since the framework is block based, the original image I is first divided into Nb nonoverlapping blocks of equal size, and two schemes are introduced: (1) given the possible tampering rate, a recovery quality optimization scheme is designed to obtain the optimally recovered image; (2) given the PSNR of the recovered image, a largest tampering rate estimation scheme is designed to obtain the largest tolerable tampering rate. More details are described in the next two subsections.
3.1. The Optimal Recovered Quality Given the Largest Requirement of Tolerable Tampering Rate
The problem of achieving the highest PSNR of the recovered image under the given tampering rate can be transformed into two optimization problems: (a) maximize the total number of character bits under the condition of satisfying the given possible tampering rate; (b) allocate an appropriate number of character bits to blocks of different complexity. This is for two reasons: first, the quality of the recovered image is roughly proportional to the total number of character bits; second, regions of different complexity usually differ in how difficult they are to recover in the self-embedding recovery process. Therefore, we collect many character bits generation methods to construct a library; these methods have different resilience for blocks of different complexity, and the appropriate method is selected from the library to achieve the best recovery quality for each block. The entire optimization process thus includes three optimization objectives: reference indicator bits generation, character bits generation, and recovered image quality. The optimization process is shown in Figure 2.
Firstly, to achieve the optimal quality of the recovered image, a set of character bits generation methods is collected to generate character bits, and the number of reference indicator bits is minimized. Secondly, the largest number of image content character bits that can guarantee successful recovery under the given largest tampering rate γ is determined. Thirdly, the average mean squared error (MSE) of the recovered image quality is calculated for each character bits generation method and used to initially allocate the number of blocks to each method; a refined MSE is then recalculated according to the number of allocated blocks and the complexity of the blocks to obtain the optimal recovered quality. Finally, the watermark bits are obtained, which contain two parts: (1) reference bits, consisting of image content reference bits and the corresponding reference indicator bits, and (2) hash-based authentication bits used to verify the authenticity of each block. After the watermark bits are embedded into the original image, the watermarked image is produced.
3.1.1. Reference Indicator Bits Generation
For image blocks of different complexity, different character bits generation methods have different recovery capabilities. To achieve better recovery quality, we collect a variety of typical character bits generation methods. Obviously, during the recovery process, it is necessary to identify which method was used for each block; thus, the indicator bits of the character bits must themselves be recoverable. Under this requirement, we should minimize the number of reference indicator bits to reserve more space for content reference bit embedding while still guaranteeing that the indicator bits resist the largest possible tampering attacks.
The number s of indicator bits used to mark each block can be calculated from the number of methods collected in the framework:

s = ⌈log2 n(M)⌉, (9)

nc(1) = s × Nb, (10)

where n(M) is the number of collected methods and Nb is the number of image blocks. As a result, there are nc(1) indicator bits in total, which are then interleaved into nr(1) reference indicator bits through the reference sharing mechanism. In fact, this interleaving is governed by the expansion coefficient ε, which can be written as ε = (γ, PR = 1, L, N) according to equations (4)–(8). Specifically, during bit interleaving, the nc(1) bits are divided into M(1) groups, each containing L(1) bits. The nr(1) reference indicator bits are embedded into the t LSB layers of the original image I together with the nr(2) content reference bits. Since the largest tolerable tampering rate of the reference indicator bits is set to 80%, they can be recovered even if they suffer a larger tampering rate than the given one.
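The indicator bit counts of equations (9) and (10) can be computed directly; with n(M) = 2 and Nb = 4096 (the values used later in Section 4.2), this gives s = 1 and nc(1) = 4096.

```python
from math import ceil, log2

def indicator_bit_counts(n_methods, n_blocks):
    """s = ceil(log2(n_methods)) indicator bits per block (eq. (9)),
    and s * n_blocks indicator bits in total (eq. (10))."""
    s = max(1, ceil(log2(n_methods)))
    return s, s * n_blocks

# Two methods over 4096 blocks -> (1, 4096); all 15 methods -> (4, 16384).
```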
To reduce the computational complexity and ensure the recoverability of the indicator bits, the tampering rate that the indicator bits can resist is set to 80%; that is, when the given tampering rate is less than 80%, the relevant parameters of the indicator bits, namely, the number of reference indicator bits, the expansion coefficient, M(1), and L(1), do not need to be recalculated. When the tampering rate γ = 80%, the least number of reference indicator bits nr(1) can be calculated as

nr(1) = min_(M(1)) ε(1) × nc(1), subject to L(1) = nc(1)/M(1) and PR(γ, L(1), N, ε(1)) = 1, (11)

where M(1) ∈ [MT1(1), MT2(1)] and ε(1) is the expansion coefficient of the indicator bits.
3.1.2. Character Bits Generation
To achieve the best recovery quality, the total number of character bits, nc(2) = L(2) × M(2), should be as large as possible. The expansion coefficient is obtained from equations (4)–(8) as ε(2) = (γ, PR = 1, L(2), N), and the largest possible number of reference bits is nr(lar) = [(sb × t − na) × Nb − nr(1)], where sb is the number of pixels per block and na is the number of authentication bits per block. Finally, the largest number of character bits can be calculated by adjusting the variable M(2) under two constraints:

nc(2) = max_(M(2)) L(2) × M(2), subject to ε(2) × L(2) × M(2) ≤ nr(lar) and PR(γ, L(2), N, ε(2)) = 1, (12)

where M(2) ∈ [MT1(2), MT2(2)].
3.1.3. Optimization of Recovered Image Quality
After the total number of character bits is calculated, an appropriate method should be chosen for each block to generate its character bits so as to improve the quality of the recovered image. In this work, PSNR is used to evaluate the visual quality of the recovered image with respect to the original image:

PSNR = 10 × log10(255^2/MSE), MSE = (1/(N1 × N2)) Σ_i Σ_j [I(i, j) − R(i, j)]^2, (13)

where I(i, j) and R(i, j) denote the pixels of the original image and the corresponding recovered image, respectively. We take MSE minimization as the optimization objective to obtain the best recovered image quality. The average MSE Ei of each method over whole images in image databases is first used to initially allocate the number of blocks to each selected method:

min Σ_i Ei × ni, subject to Σ_i ci × ni ≤ nc(2) and Σ_i ni = Nb, (14)

where ci is the number of character bits generated per block by the i-th selected method and ni is the number of blocks allocated to it.
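The PSNR/MSE measure used throughout this paper is straightforward to compute; a plain-Python sketch for 8-bit grayscale images follows (the list-of-lists image representation is our own choice).

```python
from math import log10

def psnr(original, recovered):
    """PSNR in dB between two equal-sized 8-bit grayscale images."""
    n_pixels = len(original) * len(original[0])
    mse = sum((o - r) ** 2
              for row_o, row_r in zip(original, recovered)
              for o, r in zip(row_o, row_r)) / n_pixels
    # Identical images have zero MSE and, by convention, infinite PSNR.
    return float("inf") if mse == 0 else 10 * log10(255 ** 2 / mse)
```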
The number of blocks allocated to each selected method can be obtained from the optimization of equation (14). Since the difficulty of recovering blocks differs with their complexity, our framework sorts the blocks by complexity and allocates the blocks with higher complexity to the methods with lower MSE to obtain a better recovered image quality. Specifically, a complexity measurement algorithm is first used to sort the complexities of all blocks. Then, a refined average MSE Êi can be calculated for each method based on its allocated blocks; that is, Êi is the average per-block MSE of the recovery quality for the blocks assigned to the i-th method. Unlike the previous Ei, which is only used to allocate the number of blocks, Êi matches each method to blocks of appropriate complexity to achieve better recovery quality. Finally, the optimal MSE of the recovered image can be estimated as

Ee = (1/Nb) Σ_i Êi × ni. (15)
3.2. The Largest Tolerable Tampering Rate Given the Lowest Requirement of Recovered Quality
If the user wants to tolerate a larger tampering rate, a larger expansion coefficient should be used, and hence fewer character bits; see equations (2)–(8). Thus, achieving the largest tolerable tampering rate under a given lowest requirement on the recovered image quality can be transformed into the problem of achieving the given recovered quality (PSNR) with the least number of character bits.
As shown in Figure 3, the PSNR is first converted to an MSE, and character bits are allocated to the methods according to the optimization in equation (16) to obtain the least number of character bits. Second, the largest tolerable tampering rate is calculated by the optimization in equation (17). The reference indicator bits and indicator bits are calculated as in Section 3.1. Note that since the largest tolerable tampering rate is to be estimated, the total number of reference bits takes its maximum value; that is, nr = (sb × t − na) × Nb, where sb is the number of pixels per block.
3.2.1. Minimization of the Number of Character Bits
To find the least number of character bits that satisfies a given PSNR, the average MSE Ei is first used to calculate the appropriate number of blocks for each method. The least number of character bits is then calculated from the block counts and the per-block character bit counts of the methods:

nc(min) = min Σ_i ci × ni, subject to (1/Nb) Σ_i Ei × ni ≤ MSEg and Σ_i ni = Nb, (16)

where MSEg is the MSE corresponding to the given PSNR, ci is the number of character bits generated per block by the i-th method, and ni is the number of blocks allocated to it.
3.2.2. Calculation of the Largest Tolerable Tampering Rate
After the least number of character bits satisfying the given PSNR has been obtained from equation (16), the largest expansion coefficient ε can be calculated as ε = nr/nc(min), where nr is the largest possible number of reference bits. The tolerable tampering rate is then derived from equations (2)–(8); that is, γ = (PR = 1, L, N, ε). Finally, according to the two constraints and this tampering rate calculation, the largest tolerable tampering rate γe is obtained by adjusting the variable M:

γe = max_M γ(PR = 1, L, N, ε), subject to L = nc(min)/M, (17)

where M ∈ [MT1, MT2].
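The estimation pipeline of this subsection can be sketched numerically: for an expansion coefficient ε = nr/nc(min), bisect for the largest γ that keeps the recovery probability of equations (4)–(8), generalized so that each subset of L character bits is expanded into εL reference bits, at essentially 1. The parameter values in the example are toy choices for speed, not the paper's configuration.

```python
from math import comb, prod

def q(x, y):
    """Probability that y random columns in GF(2)^x are linearly dependent."""
    return 1.0 if y > x else 1.0 - prod(1.0 - 2.0 ** (i - x) for i in range(y))

def subset_recovery_prob(gamma, L, eps):
    """Probability that one subset of L character bits, expanded into
    round(eps * L) reference bits, is recoverable at tampering rate gamma."""
    nref = round(eps * L)
    p = 0.0
    for nE in range(nref + 1):            # surviving reference bits
        pE = comb(nref, nE) * (1 - gamma) ** nE * gamma ** (nref - nE)
        for nT in range(L + 1):           # unknown character bits
            pT = comb(L, nT) * gamma ** nT * (1 - gamma) ** (L - nT)
            p += pE * pT * (1.0 - q(nE, nT))
    return p

def largest_tolerable_rate(L, eps, M, threshold=0.999):
    """Bisect for the largest gamma with P_R = p_subset ** M >= threshold."""
    lo, hi = 0.0, 1.0
    for _ in range(30):
        mid = (lo + hi) / 2
        if subset_recovery_prob(mid, L, eps) ** M >= threshold:
            lo = mid
        else:
            hi = mid
    return lo
```

As expected from the analysis, a larger ε yields a larger tolerable tampering rate.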
4. Theoretical Analysis
In this section, we analyze the performance of the proposed scheme from three aspects: (1) the character bits generation methods and the image block complexity measurement algorithm, (2) the optimal quality estimation of the recovered image under a given tampering rate, and (3) the estimation of the largest tolerable tampering rate under a given PSNR. Note that the results in this theoretical analysis are estimated from the numbers of character bits obtained by the optimization formulas in Section 3, reasonably allocated to the selected methods for reconstructing the image, without real watermark embedding or tampering recovery operations on specific images.
4.1. Character Bits Generation Methods
Suppose the original image I is sized 512 × 512 and is divided into 8 × 8 nonoverlapping blocks, and the 3 LSB layers are utilized for watermark embedding. That is, N1 = N2 = 512, t = 3, each block contains 64 pixels, and Nb = 4096; all subsequent experiments are based on these parameters. As mentioned in Section 3, an image block complexity measurement algorithm and a set of character bits generation methods are used to improve the quality of the recovered image.
Here, we define the block complexity measurement algorithm as F(C)(⋅). Denote the divided blocks as B(i), i = 1, …, Nb, and each pixel in a block as Bj(i), j = 1, …, 64. The minimum pixel value of each block B(i) is denoted as

Bmin(i) = min_j Bj(i). (18)
Then, the average difference value of each block can be calculated as

Di = (1/64) Σ_(j=1)^(64) [Bj(i) − Bmin(i)], (19)

where Di is the average difference and is regarded as the complexity degree of block B(i). All Nb blocks can then be sorted in descending order of Di (i.e., from rough to smooth), and we denote the sorted complexity degrees of all Nb blocks as the set Ds.
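The complexity measure Di, the average difference of a block's pixels from the block minimum, takes only a couple of lines; the flat-list block representation is our own choice.

```python
def block_complexity(block):
    """Average difference of the pixels from the block minimum (D_i)."""
    m = min(block)
    return sum(p - m for p in block) / len(block)

def sort_blocks_by_complexity(blocks):
    """Sort blocks from rough to smooth (descending D_i), as for D_s."""
    return sorted(blocks, key=block_complexity, reverse=True)
```

A flat block yields complexity 0, so smooth regions fall to the end of the sorted list.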
In our work, 15 kinds of character bits generation methods were collected for experimental analysis. For simplicity, 2 methods are combined at a time, that is, n(M) = 2, to generate character bits, giving a total of 105 combinations. The 15 methods are derived from 5 self-embedding algorithms, whose character bits generation methods are briefly described as follows:

(1) MSB-Based Algorithm. Five MSBs are collected as the character bits of each pixel, so a block has a total of 320 character bits.

(2) DCT-Based Algorithm. The quantized DCT coefficients of each image block are collected as the character bits. We construct 8 different levels of character bits generation by keeping different numbers of coefficients; that is, 8 kinds of character bits generation methods are designed.

(3) DWT-Based Algorithm. The low-frequency subband at level 1, LL1, of the DWT coefficients is selected, generating 128 character bits per block.

(4) AMBTC-Based Algorithm. AMBTC is applied to each block, generating 80 character bits.

(5) VQ-Based Algorithm. Different VQ codebooks are used. Four character bits generation methods are constructed from 4 codebooks containing 1024, 512, 256, and 128 codewords, respectively; that is, 4 kinds of character bits generation methods are designed.
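As an example of one library entry, the AMBTC-based method's 80 character bits per 8 × 8 block (a 64-bit bitmap plus two 8-bit reconstruction levels) can be sketched as follows. This is a generic AMBTC sketch, not necessarily the exact variant used in the cited work.

```python
def ambtc_encode(block):
    """AMBTC-style character bits for one 8x8 block (flat list of 64 pixels).

    Returns a 64-entry bitmap plus two 8-bit reconstruction levels,
    i.e. 64 + 8 + 8 = 80 character bits in total.
    """
    mean = sum(block) / len(block)
    bitmap = [1 if p >= mean else 0 for p in block]
    highs = [p for p, b in zip(block, bitmap) if b]
    lows = [p for p, b in zip(block, bitmap) if not b]
    high = round(sum(highs) / len(highs)) if highs else 0
    low = round(sum(lows) / len(lows)) if lows else high  # flat block
    return bitmap, high, low

def ambtc_decode(bitmap, high, low):
    """Reconstruct the block from its AMBTC character bits."""
    return [high if b else low for b in bitmap]
```

A block containing only two gray levels is reconstructed exactly, which makes the 80-bit budget attractive for a mid-complexity block.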
Testing on the UCID database, the statistical average MSE between the reconstructed image and the original image for each method described above is listed in Table 1, which sorts all 15 methods in descending order of estimated MSE. The number of character bits generated for one 8 × 8 block by each method is also given in Table 1.
As for the authentication bits generation method F(a)(⋅), the cryptographic MD5 hash function is utilized, and each 8 × 8 block produces 32 authentication bits for tampering detection; that is, na = 32. As a result, in the 3 LSB layers of each block, the remaining space for accommodating reference bits is 64 × t − na = 160 bits.
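A possible shape of F(a)(⋅) is to hash the block's 5-MSB content together with its 160 corresponding reference bits and keep the first 32 bits of the MD5 digest; the byte packing below is our own illustrative choice.

```python
import hashlib

def authentication_bits(block_msbs, reference_bits, n_bits=32):
    """Hash-based authentication bits for one block (sketch).

    Feeds the block's MSB content and its reference bits into MD5 and
    keeps the first n_bits bits of the digest.
    """
    payload = bytes(block_msbs) + bytes(reference_bits)
    digest = hashlib.md5(payload).digest()
    # Unpack the leading n_bits bits of the digest, MSB first.
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(n_bits)]
```

Because the reference bits enter the hash, forging a block requires matching both its content and its share of the reference data.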
4.2. Optimal Quality Estimation of Recovered Image
As mentioned in Section 4.1, 2 methods are applied at a time, that is, n(M) = 2; thus, s and nc(1) can be calculated by equations (9) and (10), giving s = 1 and nc(1) = 4096. Then, according to the optimization in equation (11), the number of reference indicator bits is nr(1) = 29568 for a tampering rate γ = 80%. The optimal recovered image quality under a given tampering rate can then be estimated by equations (12)–(15); Table 2 lists the optimal method combinations and the corresponding optimal recovered image quality. To further explain the optimization problem in Section 3.1.3, a concrete example is given. Suppose the given tampering rate is γ = 50%. Firstly, according to the optimization in equation (12) and the average MSE value of each method in Table 1, the numbers of blocks allocated to the optimal combination of two methods are calculated. Secondly, the blocks are sorted in descending order of complexity by F(C)(⋅); the first 2884 and the remaining 1212 of the 4096 blocks, corresponding to the two selected methods, respectively, are used to generate character bits and to estimate the refined MSE values. Finally, the optimal PSNR is estimated as PSNRe = 30.65 dB. More optimization results are given in Table 2, and the relationship between the tampering rate and the estimated optimal PSNRe is shown in Figure 4.
4.3. Estimation of the Largest Tolerable Tampering Rate
As described in Section 3.2, the given PSNR of the recovered image is first converted to the corresponding MSE. Then, the least number of character bits satisfying the given PSNR is calculated by the optimization in equation (16), and the largest tolerable tampering rate is estimated from this least number of character bits and the optimization in equation (17). More optimization results are given in Table 3.
We can know from  that the highest PSNR is 40.7 dB in our instance with the largest tampering rate being 24%. The relationship between the estimated largest tolerable tampering rate γe and given is shown in Figure 5.
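The first step of this procedure, converting the required PSNR into an MSE bound, is a direct inversion of the PSNR formula; a minimal sketch (the function name is ours, and the optimization of equations (16) and (17) is not reproduced):

```python
import math

def psnr_to_mse(psnr_db: float) -> float:
    """Convert a target PSNR (in dB, for 8-bit images) to the equivalent MSE bound."""
    return 255 ** 2 / (10 ** (psnr_db / 10))

# Example: requiring a recovered image of at least 30 dB permits an MSE of up to ~65,
# and the least number of character bits meeting that bound is then sought.
print(round(psnr_to_mse(30.0), 2))
```

A looser PSNR requirement yields a larger MSE bound, hence fewer character bits and a larger tolerable tampering rate, which is the trade-off Table 3 and Figure 5 illustrate.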
5. Experimental Results and Comparisons
All experiments in this paper were implemented on a computer with a 3.70 GHz Intel i9 processor, 32 GB of memory, and the Windows 10 operating system; the programming environment was Matlab R2020b. The relevant parameter settings are given in Section 4.1. As shown in Figure 6, all images used in the following experiments are of size 512 × 512.
5.1. Results of Our Framework
During the embedding process, the 5 MSBs of each pixel remain unchanged, and only the 3 LSBs are used to embed the watermark. Therefore, the PSNR of the watermarked image can be obtained by calculating the change of the 3 LSBs:

PSNR = 10 · log10(255² / E[(d1 − d2)²]),

where d1 and d2 are the decimal values of the three original LSBs and the three new LSBs of a pixel, respectively. Since the new LSBs are produced in a pseudorandom manner, the distribution of d2 is approximately uniform over {0, 1, …, 7}. It is assumed that the original distribution of the data in the three LSB layers is also uniform, which gives E[(d1 − d2)²] = 10.5 and thus PSNR ≈ 37.9 dB for the watermarked image.
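Under these uniformity assumptions the expected PSNR can be computed exactly; a minimal sketch (variable names are ours):

```python
import math

# Expected squared difference between the original and new 3-LSB decimal
# values d1, d2, assuming both are independent and uniform over 0..7.
mse = sum((d1 - d2) ** 2 for d1 in range(8) for d2 in range(8)) / 64.0

# PSNR of the watermarked image relative to the original (8-bit pixels).
psnr = 10 * math.log10(255 ** 2 / mse)

print(mse)              # 10.5
print(round(psnr, 2))   # 37.92
```

This matches the well-known result that pseudorandom replacement of the three LSB planes of an 8-bit image yields a PSNR of about 37.9 dB.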
In the experiments of this paper, two measures of recovered image quality are calculated: PSNRr, the PSNR of the recovered image compared with the whole original image, and PSNRb, the PSNR of the recovered parts of the image compared with the corresponding parts of the original image. As shown in Tables 4–6, the standard test images and 100 images from the UCID database are used for random tampering experiments under different tampering rates. The averages in Tables 4 and 5 are taken over the experiments on the 100 UCID images; since the complexity of the UCID images differs from that of the standard images, the experimental results differ slightly. Compared with the estimated PSNRe of Section 4.1, the experimental PSNRr is also slightly different, for two reasons. First, the UCID database is used in the estimation process, and the image complexity in that database is inconsistent with the 6 images used in the test. Second, the estimated PSNRe is calculated from the original image and the image reconstructed using the number of character bits obtained from equations (9)–(15) and the corresponding average MSE values, without the real watermark embedding and tampering recovery process.
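The distinction between the two quality measures can be sketched with a tamper mask; this is an illustration with synthetic data (names and the error model are ours, not the paper's):

```python
import math
import numpy as np

def psnr(a, b):
    """PSNR between two uint8 pixel arrays (whole images or subsets)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

# Synthetic original image, recovered image, and boolean tamper mask.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (512, 512), dtype=np.uint8)
recovered = original.copy()
mask = np.zeros((512, 512), dtype=bool)
mask[:256, :] = True            # pretend the top half was tampered and recovered
recovered[mask] ^= 4            # recovery error confined to the tampered area

psnr_r = psnr(original, recovered)              # whole image vs. whole image
psnr_b = psnr(original[mask], recovered[mask])  # recovered parts only
```

Because the untouched pixels are error-free, PSNRr is always at least as high as PSNRb; PSNRb isolates the fidelity of the recovery itself.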
As shown in Figure 7, the tampered image is on the left and the recovered image is on the right; the proposed scheme obtains a recovered image with good visual quality. For intentional tampering, we take the image Lake in Figure 6(d) as an example, with tampering rates set to 30%, 60%, and 80%. The results are shown in Figure 8: from left to right, the watermarked image, the tampered image, the tampering detection result, and the recovered image. The detected tampered parts are marked in white.
5.2. Comparison with State-of-the-Art Schemes
In Table 6, we compare MSB-based self-embedding watermarking algorithms. It can be observed from Table 6 that the proposed scheme achieves stronger robustness against high tampering rates and comparable recovered image quality relative to the reported schemes. Furthermore, 80% is not the upper limit of the tampering rate; it was chosen in consideration of the recovery quality. A larger tampering rate can still be handled by our framework at the cost of recovery quality.
Furthermore, we take Lena and Baboon as examples to compare the recovered quality with that of other schemes under several tampering rates. In Table 7, we compare self-embedding watermarking algorithms based on DCT coefficients. Considering both the tolerable tampering rate and the quality of the recovered image, the proposed scheme achieves a better trade-off.
6. Conclusion
In this paper, we proposed a new self-embedding framework based on a reference sharing mechanism. In the reported schemes, the PSNR of the recovered image can only be obtained by completing the entire embedding process. In contrast, because the number of character bits influences both the quality of the recovered image and the tolerable tampering rate, the proposed framework can estimate the highest PSNR of the recovered image when the tampering rate is given, and the largest tolerable tampering rate when the PSNR of the recovered image is given. The former problem is first transformed into calculating the largest number of character bits, after which the character bits are allocated to each block according to the complexity of the image block to achieve the best recovery quality. The latter problem is transformed into calculating the least number of character bits, from which the largest tolerable tampering rate is obtained. The experimental results show the flexibility and effectiveness of the proposed scheme.
Data Availability
The image datasets used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grants 62172280, U20B2051, and 62172281; the Natural Science Foundation of Shanghai under Grant 21ZR1444600; the Natural Science Foundation of Shandong under Grant ZR2020MF054; the STCSM Capability Construction Project for Shanghai Municipal Universities under Grant 20060502300; the Shandong Provincial Natural Science Foundation under Grant ZR2019BF017; and the Major Scientific and Technological Innovation Projects of Shandong Province under Grants 2019JZZY010127, 2019JZZY010132, and 2019JZZY010201.
References
S. Walton, “Image authentication for a slippery new age,” Dr. Dobb’s Journal, vol. 20, no. 4, pp. 18–26, 1995.
X. L. Liu, C. C. Lin, and S. M. Yuan, “Blind dual watermarking for color images’ authentication and copyright protection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 5, pp. 1047–1055, 2016.
J. Fridrich and M. Goljan, “Images with self-correcting capabilities,” in Proceedings of the IEEE International Conference on Image Processing, pp. 792–796, IEEE, Kobe, Japan, October 1999.
Q. C. Ying, Z. X. Qian, H. Zhou, H. S. Xu, X. P. Zhang, and S. Y. Li, “From image to imuge: immunized image generation,” in Proceedings of the 29th ACM International Conference on Multimedia, pp. 3565–3573, Association for Computing Machinery, New York, NY, USA, October 2021.
Z. N. You, Y. Liu, and T. G. Gao, “A lossless self-recovery watermarking scheme with JPEG-LS compression,” Journal of Information Security and Applications, vol. 58, Article ID 102733, 2021.
J. Molina-Garcia, B. P. Garcia-Salgado, V. Ponomaryov, R. Reyes-Reyes, S. Sadovnychiy, and C. Cruz-Ramos, “An effective fragile watermarking scheme for color image tampering detection and self-recovery,” Signal Processing: Image Communication, vol. 81, Article ID 115725, 2020.