Abstract

To address the difficulty of balancing the robustness, visibility, and transparency of existing visible watermarking technologies, this paper proposes an adaptive embedding method for visible watermarking. Firstly, the salient region of the host image is detected based on superpixel detection. Secondly, a flat region with relatively low complexity in the nonsalient region of the host image is selected as the embedding region. Then, the watermarking strength is adaptively calculated by considering the gray distribution and texture complexity of the embedding region. Finally, the visible watermark image is adaptively embedded into the host image with slight adjustment by the just noticeable difference (JND) coefficient. The experimental results show that our proposed method improves the robustness of visible watermarking technology and greatly reduces the risk of malicious removal of the visible watermark image. Meanwhile, a good balance between the visibility and transparency of the visible watermark image is achieved, giving the method high security and an ideal visual effect.

1. Introduction

Visible watermarking technology has important applications in many fields, such as content protection [1, 2], copyright identification [3], document security [4, 5], and advertising [6]. Over the past two decades, a large number of visible watermarking algorithms with different features have been proposed. These technologies can be divided into three categories [7]: permanent [8–11], removable [12–14], and reversible visible watermarks [15–20]. In permanent visible watermarking, the embedded watermark image is permanently retained in the watermarked image, and even the owner cannot completely erase it. In removable and reversible visible watermarking techniques, however, the embedded visible watermark image can be removed by an authorized person using the correct secret key. In addition, reversible technology can completely remove the visible watermark image and recover the original host image data without loss. Regardless of the type, visible watermarking technology must meet three requirements [21]: robustness, visibility, and transparency. Robustness means that it is difficult for an unauthorized person to remove the visible watermark image, maliciously or unintentionally, through conventional image processing methods, so the watermark can resist various destructive attacks. Visibility means that the watermark image remains clearly perceptible after embedding into the host image, which allows easy identification of the ownership of the host image. Finally, transparency means that the visible watermark image is minimally obtrusive, which allows the observer to easily identify the details of the host image. In existing algorithms, these three requirements are contradictory; nevertheless, it is necessary to achieve a balance between them.

Nowadays, many scholars have studied how to effectively remove the visible watermark image from the watermarked image [22–30]. Some visible watermark removal algorithms need to know the accurate position of the visible watermark image in advance, and the corresponding removal strategy is designed by combining the features of the visible watermark image itself [22, 23]. Another approach is based on traditional image inpainting [24–26], which mainly uses the surrounding information to fill the areas corresponding to the black pixels in the watermarked image. Therefore, only by knowing the specific location of the missing pixels in the image can the visible watermark image be successfully removed from the watermarked image. Pei and Zeng [27] propose to separate the host image from the watermarked image by using independent component analysis (ICA). However, such a method requires users to mark the watermark area manually, which is time-consuming. Clearly, it cannot process large watermarked areas or automatically remove visible watermark images in batches. To achieve batch removal of visible watermark images, Dekel et al. [28] propose a method to automatically estimate the watermark images and restore the host image with high precision. However, this method rests on a basic premise: only when all visible watermark images are embedded in many different host images in the same way can an effective model of visible watermark removal be established. When the visible watermark images are added at different positions and angles on different images, the existing algorithms cannot work well.

Therefore, to improve the robustness of the visible watermarking scheme against batch processing attacks, it is necessary to study the adaptive selection strategy of embedding region for visible watermarking. In addition, the visibility and transparency of the visible watermark image also should be comprehensively considered to obtain a more natural image fusion effect.

The rest of the paper is organized as follows. The related works are introduced in Section 2. Section 3 presents the embedding scheme of the visible watermark images in detail. Section 4 shows the experimental results as well as the comparisons with prior works. Finally, Section 5 concludes this paper.

2. Related Works

When the embedding position of the visible watermark image changes adaptively with the content of the host image, the existing batch processing methods of visible watermarking cannot accurately locate the watermarked area and cannot effectively erase the visible watermark image. For this purpose, Qi et al. [29] propose an improved visible watermark embedding scheme based on the human visual system (HVS) and region of interest (ROI) selection. This method often finds a relatively smooth region, according to the complexity of the host image, in the high-tone or low-tone image areas, and these regions usually contain the key objects in the host image. Therefore, the visible watermark image frequently occupies the salient area and occludes the important objects of the host image, which usually produces undesirable visual effects.

In terms of the visibility and transparency of visible watermark images, some scholars have proposed adaptive embedding methods [30–33]. In [30], the brightness and texture features of the host image in the DCT frequency domain are extracted to realize dynamic embedding of the visible watermark. In [31], the visible watermark is dynamically embedded according to JND coefficients in the DCT frequency domain. However, the neighbourhood features of the visibly watermarked area are not considered in the process of watermark embedding. The watermark strength of visible watermarking also needs to change adaptively according to different image features. In [32], the authors point out that it is necessary to dynamically adjust the embedding strength according to the brightness, contrast, texture complexity, and other related features of the host image. In [33], the visual saliency matrix of the host image is calculated first, and the embedding strength is proportional to the saliency intensity of the visibly watermarked region. However, the computational complexity of this method is relatively high.

3. Proposed Method

In most visual scenes, the human visual system can see every region in an image, but the important regions of interest account for only a small part, called the salient area [34]. Therefore, the visible watermark image should not obscure the salient objects in the host image; otherwise, it will affect the value of the host image itself. In this paper, the salient areas are detected first. Then, the most suitable region in the nonsalient areas is selected for visible watermark embedding. Finally, the watermark strength is adaptively calculated for the visible watermarking.

3.1. Salient Region Detection

Firstly, the image is segmented into a series of superpixels. Then, all corner points of the image are extracted, and the image is divided into inner and outer parts according to the corner point distribution. Next, the average score of all pixels in each superpixel is calculated and regarded as the score of the superpixel. Finally, all superpixel scores are normalized to the range of [0, 255], and a complete visual saliency map is formed. The specific process is as follows.

3.1.1. Superpixel Segmentation

Since there is a lot of redundant information in an image, the k-means clustering method is used to segment all pixels with close distances and similar colours into many superpixel regions. The average colour value of all pixels in the CIELAB space of each superpixel is recorded as the colour of the superpixel, which is marked as :

Suppose that an image is segmented into n superpixels as . For any two given superpixels and , there will usually be more than one path connecting them. The length of each path is the sum of the distances between every two adjacent superpixels along it. The shortest path from to is calculated; for example, , and its length is recorded as :
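The shortest-path computation described above can be sketched as a standard Dijkstra search over the superpixel adjacency graph. This is an illustrative sketch rather than the paper's exact procedure: the edge weight is assumed here to be the Euclidean distance between the mean CIELAB colours of adjacent superpixels, and the function name and data layout are hypothetical.

```python
import heapq

def shortest_path_length(adjacency, colours, src, dst):
    """Dijkstra over the superpixel adjacency graph.

    adjacency: dict node -> set of neighbouring superpixel ids
    colours:   dict node -> (L, a, b) mean colour tuple
    Edge weight = Euclidean colour distance between adjacent superpixels
    (an assumption; the exact metric is not given in this text).
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(colours[u], colours[v])) ** 0.5

    best = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                      # first settle of dst is optimal
            return d
        if d > best.get(u, float("inf")):
            continue                      # stale heap entry
        for v in adjacency[u]:
            nd = d + dist(u, v)
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                   # dst unreachable
```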

3.1.2. Image Region Segmentation

Since there are a lot of contour lines in the image, the different distribution of contour lines forms inflection points, intersection points, and other feature points, which are uniformly called corner points. After all corner points are recognized, a minimum polygon containing them can be constructed. Superpixels located inside the polygon are given larger weights, while those outside the polygon are given smaller weights.

3.1.3. Calculation of the Saliency Score of Each Superpixel

For a given superpixel in the image, the saliency score is calculated as follows.

Step 1. Calculate the scaling factor of as shown in the following equation:

where is the length of the shortest path from superpixel to .

Step 2. The initial score of superpixel is calculated as

where

and represents the sum of the squares of the distances between and all other superpixels; namely,

Step 3. The final visual saliency score of superpixel is calculated as shown in the following equation:

The saliency scores of all superpixels are obtained by equation (7), and then the scores of all pixels can also be obtained. All scores are normalized to the range of [0, 255] to obtain the salient area of the original image. For example, for the host image shown in Figure 1(a), the corresponding salient areas are extracted as shown in Figure 1(b). The map is further binarized to get the final salient areas shown in Figure 1(c), in which the white regions are the visually salient areas and the black regions are the visually nonsalient areas.
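The normalization and binarization steps above can be sketched as follows. The binarization threshold is an assumption, since this text does not state which threshold is used to produce Figure 1(c).

```python
import numpy as np

def normalize_saliency(scores):
    """Linearly rescale raw per-pixel saliency scores to [0, 255]."""
    s = np.asarray(scores, dtype=np.float64)
    lo, hi = s.min(), s.max()
    if hi == lo:                 # flat map: no region stands out
        return np.zeros_like(s)
    return (s - lo) / (hi - lo) * 255.0

def binarize_saliency(sal_map, thresh=127.5):
    """Threshold the normalized map into salient (255) / nonsalient (0).

    A mid-range cut is assumed here; the source does not give the threshold.
    """
    return (np.asarray(sal_map) > thresh).astype(np.uint8) * 255
```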

3.2. Adaptive Selection of Embedding Region

Next, the nonsalient regions of the host image are evenly segmented into subblocks according to the size of the visible watermark image. Based on the texture complexity and the gray distribution features of each image block, the relatively smooth image block is selected as the watermark embedding region.

3.2.1. Calculation of Image Texture Complexity

The edge density of an image is an important factor affecting its texture complexity. Therefore, the texture complexity is determined by calculating the density of the image boundary. The specific calculation steps are as follows.

Step 1: obtain the gradient feature map of the host image.

To eliminate the interference of noise, is obtained by Gaussian filtering of the host image ; then the gradient feature map is obtained with the Laplacian operator:

Step 2: obtain image boundary features.

Use the Otsu method to binarize into an image with only the two gray levels 0 and 255, and then perform a morphological closing operation on to obtain :

Here, denotes the morphological closing operation on the image. At this point, a large number of boundary features of the host image are preserved in .

Step 3: calculate texture complexity.

The boundary density of the subblock in the host image H, that is, the texture complexity , is calculated as follows:

where is the number of pixels located on the boundary extracted from in , is the size of , and . In practical applications, it sets .
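A simplified sketch of Steps 1 to 3 is given below. It substitutes a plain 4-neighbour Laplacian and a fixed threshold for the Gaussian filtering, Otsu binarization, and morphological closing described above, so it only illustrates how a per-subblock boundary density is formed; the names and parameters are hypothetical.

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian with zero-padded borders."""
    img = np.asarray(img, dtype=np.float64)
    p = np.pad(img, 1)
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def boundary_density(img, block_size, thresh):
    """Texture complexity per block = fraction of boundary pixels.

    Stand-in for the paper's pipeline: pixels whose absolute Laplacian
    response exceeds `thresh` are treated as boundary pixels.
    """
    edges = np.abs(laplacian(img)) > thresh
    h, w = edges.shape
    m = block_size
    rho = np.zeros((h // m, w // m))
    for i in range(h // m):
        for j in range(w // m):
            rho[i, j] = edges[i * m:(i + 1) * m, j * m:(j + 1) * m].mean()
    return rho
```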

3.2.2. The Distribution Features of Image Gray Value

The visible watermark can be embedded by modifying the pixel values of the host image in the spatial domain. Generally speaking, in dark tone areas, with a gray value interval of [0, 127], the pixel value should be appropriately increased. In bright tone areas, with a gray value interval of [128, 255], the pixel value needs to be appropriately reduced. The visibility of the visible watermark image can be ensured when the magnitude of the modification is large. To ensure the overall visual effect of the visible watermark, our proposed method tries to select pixel values in the middle tone interval range , where can be set to . Next, we discuss the calculation of the gray value distribution feature of the subblock .

The average gray level of all pixels in the subblock is calculated as follows:

where is the pixel value of the host image at point . The gray value distribution feature of the subblock is then calculated as follows:
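Since equation (12) is not reproduced in this text, the sketch below only illustrates the intent described in this subsection: the block mean is computed, and a hypothetical feature D grows as that mean approaches the midtone 127. The exact expression for D is an assumption.

```python
import numpy as np

def gray_distribution_feature(block, midtone=127):
    """Return (mean gray level, midtone-closeness feature D) for a block.

    D here is an assumed stand-in for equation (12): it is largest (255)
    when the block mean sits exactly at the midtone and shrinks as the
    mean drifts toward the dark or bright extremes.
    """
    mean = float(np.mean(block))
    d = 255 - 2 * abs(mean - midtone)   # assumption: linear midtone closeness
    return mean, d
```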

3.2.3. Smooth Area Selection

After obtaining the boundary density and the gray value distribution feature of each subblock using equations (10) and (12), the feature value of can be obtained by the following equation:

The feature value of each subblock is calculated in turn, and all feature values are sorted in ascending order. The image area corresponding to the smallest feature value is taken as the final visible watermark embedding area. In this paper, reducing and increasing serves to reduce . When the former value is smaller, the visible watermark can successfully avoid areas with high image texture complexity; when the latter value is larger, the selected embedding area will be far away from the high-tone and low-tone areas of the host image, thus giving the watermarked image a relatively ideal visual effect.
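The selection rule can be sketched as follows. Equation (13) is not reproduced here, so the combined feature F = rho / (D + eps) is an assumption chosen only to match the stated behaviour: F decreases when the texture complexity rho decreases or the gray distribution feature D increases.

```python
import numpy as np

def select_embedding_block(rhos, Ds, eps=1e-6):
    """Pick the index of the subblock minimizing the combined feature F.

    F = rho / (D + eps) is an assumed form of equation (13); low texture
    complexity and a midtone-friendly gray distribution both lower F.
    """
    F = np.asarray(rhos, dtype=np.float64) / (np.asarray(Ds, dtype=np.float64) + eps)
    return int(np.argmin(F))
```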

3.3. Adaptively Visible Watermark Embedding

In this paper, the visible watermark image is embedded into the host image as follows:

where is the pixel value of the host image, is the watermarked pixel value, and is the adaptive watermark strength. To adaptively calculate the watermark strength from the image features of the neighbourhood around the watermarked pixel, each pixel in the visible watermark image is embedded into a corresponding image block B of size pixels in the host image; that is, each pixel in is adjusted with the same watermark strength in equation (14). To ensure the visibility of the visible watermark image, the JND model is used to adjust the gray value of each pixel after watermark embedding. The general watermark embedding diagram is shown in Figure 2.
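Equation (14) is not reproduced in this text; the sketch below assumes the classic convex blend H' = (1 - gamma) H + gamma W, applied with a single strength value to every pixel of the corresponding host block, which matches the description above.

```python
import numpy as np

def embed_block(host_block, wm_pixel, gamma):
    """Blend one watermark pixel into its host block.

    Assumed form of equation (14): a convex combination of the host
    pixels and the watermark pixel, sharing one strength gamma across
    the whole block, clipped to the valid 8-bit range.
    """
    h = np.asarray(host_block, dtype=np.float64)
    out = (1.0 - gamma) * h + gamma * float(wm_pixel)
    return np.clip(out, 0, 255)
```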

3.3.1. Adaptive Calculation of Watermark Strength

The embedding strength of the visible watermarking adaptively changes with the image complexity of the embedding region. For the pixel in the visible watermark image , the corresponding watermark embedding area in the host image consists of four pixels . The specific calculation process of the watermark strength is as follows.

Step 1. Calculate image texture complexity.

and the 8 embedding regions in its neighbourhood constitute a region of size pixels. Through equation (10), the boundary density in of can be obtained; the texture complexity of region can then be expressed by ; that is,

Step 2. Calculate the change amplitude of the pixel values in the embedding region.

The intensity change of the pixel values in the embedding area of the visible watermark image is , which can be measured by the gradient of pixel values in the region. In the embedding region , the four pixels are sorted in ascending order of gray value, and the average gray value of the two smaller pixels and the average gray value of the two larger pixels are calculated, respectively. The calculation of is as follows:

where . As shown in equation (16), in a region where the pixel value changes smoothly, the value of is close to , and is approximately equal to , which corresponds to a larger . Similarly, when the pixel value in the region changes sharply, the difference between and is large, the denominator of equation (16) increases accordingly, and decreases accordingly.

Step 3. Calculate the final watermark embedding strength.

The embedding strength of visible watermarking is calculated as

It can be seen from equation (17) that, in a flat area of the host image, is small and is large, so the watermark embedding strength is relatively small. On the contrary, in a region of the host image with complex texture, is large and is small, so the watermark embedding strength is relatively large. All γ values in the embedding areas are calculated and normalized to the interval of .

3.3.2. Visual Watermark Image Embedding

For a host image in the RGB colour space, the specific steps of visible watermark image embedding are as follows.

Step 1. Transform the host image from the RGB colour space to the YUV colour space; the visible watermark is embedded in the luminance component .

Step 2. For the component, calculate the JND masking matrix .

Step 3. For each pixel in , the embedding process in the region of the host image is as follows:

(1) When , all pixel values of remain unchanged.

(2) When , the watermarked pixels , , are calculated by equation (14).

Step 4. Adjust the watermarked pixel values.

For any in , the change in the pixel values before and after watermark embedding is calculated:

(1) If , where is a fixed constant and is the value of the masking matrix at the point , then the change of the watermarked pixel value obtained by equation (14) is too small, and the pixel value after watermark embedding is recalculated in the following way:

(2) If , then .

So far, the embedding process of visible watermarking is finished.

For example, the Logo image in Figure 3(a) is embedded into host image in Figure 1(a) to get the watermarked image shown in Figure 3(b), where

4. Experimental Results and Discussion

4.1. Comparison of the Embedding Region Selection

In [33], the visual saliency matrix of the host image is calculated by using the ITTI visual model, and the region with the lowest visual saliency is selected as the embedding region for visible watermarking. In [29], the average gray value of each block image is calculated, and the region that meets the following conditions is selected as the embedding area: (1) the average gray value of the image block differs greatly from the middle tone 127; (2) as many pixels as possible in the area have gray values not equal to the average gray value. In this experiment, 24 images from the Kodak image set are selected as the host images. First of all, all host images are scaled to 800 × 800 pixels, and the binary visible watermark image is scaled to 120 × 120 pixels. Accordingly, the size of the watermarked region is set to 240 × 240 pixels. The adaptive selection effects of the visible watermark embedding region given by the methods in [29, 33] and our proposed method are shown in Figure 4.

In Figure 4, for the image kodim14, both the method in [33] and our proposed method bypass the key objects, such as the boat and the people on it, whereas the method in [29] does not avoid the boat. However, the texture complexity of the region selected in [33] is higher and its gray level is near the middle tone, which inevitably disturbs visual perception. For the image kodim17, in [33], the regions occupied by the statue's face and the ball are detected as the salient areas, and the watermarked area overlaps the region occupied by the clothes. The method in [29] and our proposed method bypass the statue, and the location of the region selected by our method is relatively more ideal. For the image kodim22, the most significant area should be the area occupied by the house. In [29], the watermarked region conflicts with the area of the house, while the method in [33] and our method both avoid the house successfully. In contrast, the region selected by our proposed method is more suitable: its background is relatively flat, and the visual effect of the watermarked image is more natural. By contrast, the method in [33] selects the area occupied by grassland for visible watermarking, and the texture of the background inevitably affects the visibility of the visible watermark image.

In addition, as mentioned before, whether based on ICA or on traditional image inpainting, removal methods need the location information of the embedding region to remove the visible watermark image from the watermarked image. In the proposed method, the embedding region of the visible watermarking is adaptively selected, which can resist watermark removal attacks to a certain extent, especially batch removal of the visible watermark image. Therefore, the proposed method is robust to the visible watermark removal attack.

To sum up, in [33], the ITTI visual model is used to detect the region of interest in the host image, but the gray distribution and texture complexity of the host image are not considered when selecting the watermark embedding region. Therefore, when the texture details of the host image are complex or the salient areas are widely distributed, the accuracy of the region of interest detected by the model is low. In [29], only the gray distribution features of the host image are used, and the texture complexity is not considered, so the selected area usually obscures important objects. On the basis of salient region detection, combined with the gray distribution and texture complexity of the background image, our proposed method can adaptively select the embedding area for visible watermarking, which effectively overcomes the defects of the existing methods.

4.2. Comparison of Visible Watermark Embedding

In [29], the visual effect factor (VEF) of the HVS is used to adaptively modify the pixel values of the host image to produce a better fusion effect between the visible watermark image and the host image. In [35], the visible watermark is embedded by using the method of dynamic pixel value mapping (DPVM). The visual effect of the visible watermark embedded by our proposed method is compared with that of the methods in [29, 35]. The subjective effects of the visibly watermarked images are shown in Figure 5, and the objective visual effect index parameters are as follows.

4.2.1. Peak Signal-to-Noise Ratio (PSNR)

PSNR is used to measure the distortion or noise level of an image and is often used to objectively evaluate its degradation. The greater the PSNR value between two images, the lower the degradation, that is, the higher the image quality. Given the original image with a size of pixels and the watermarked image , the peak signal-to-noise ratio is defined as

where MSE is the mean square error between the images, defined as
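The PSNR definition above can be implemented directly for 8-bit images:

```python
import numpy as np

def psnr(original, distorted):
    """Peak signal-to-noise ratio in dB for 8-bit grayscale images:
    PSNR = 10 * log10(255^2 / MSE)."""
    h = np.asarray(original, dtype=np.float64)
    w = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((h - w) ** 2)
    if mse == 0:
        return float("inf")      # identical images: no degradation
    return 10.0 * np.log10(255.0 ** 2 / mse)
```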

4.2.2. Structural Similarity (SSIM)

SSIM is composed of three comparison components: brightness, contrast, and structure. For the host image and the watermarked image to be compared, the brightness comparison result is as follows:

The comparison results of image contrast are as follows:

The comparison results of image structure are as follows:

The definition of structural similarity is as follows:

In practical applications, , , and SSIM can be expressed as

where is the average gray value of the host image , is the average gray value of the watermarked image , is the variance of , is the variance of , and is the gray covariance of the two images. Structural similarity ranges from 0 to 1; when the two images are identical, the structural similarity is 1.
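The single-window form of SSIM given above can be implemented directly; the constants follow the standard choice C1 = (0.01 L)^2 and C2 = (0.03 L)^2 with dynamic range L = 255.

```python
import numpy as np

def ssim(x, y, L=255, k1=0.01, k2=0.03):
    """Global (single-window) structural similarity between two images."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()                 # brightness terms
    vx, vy = x.var(), y.var()                   # contrast terms
    cov = ((x - mx) * (y - my)).mean()          # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```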

4.2.3. Obtrusiveness

In [7], a standard for evaluating the visual effect of watermarked images by their obtrusiveness is proposed. For the original image H and the watermarked image , the normalized mean square perceptual error (MSPE) value at the point is calculated as follows:

where is the gray value of point in the watermarked image, is the gray value of point in the host image, is the value of the JND matrix of the host image at , and is calculated as follows:

The obtrusiveness of the entire image is represented by the average of the values of all pixels. It can be seen from equation (27) that the larger the value of the watermarked image, the more obvious the obtrusiveness of the embedded visible watermark image and the worse the visual effect of the watermarked image.
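The exact expressions of equations (26) and (27) are not reproduced in this text, so the sketch below assumes the per-pixel perceptual error is the squared pixel difference normalized by the local JND value, averaged over the image; the function and argument names are hypothetical.

```python
import numpy as np

def mspe(host, watermarked, jnd):
    """Assumed mean square perceptual error: the pixel difference is
    normalized by the local JND value, squared, and averaged, so a larger
    result indicates a more obtrusive visible watermark."""
    h = np.asarray(host, dtype=np.float64)
    w = np.asarray(watermarked, dtype=np.float64)
    j = np.asarray(jnd, dtype=np.float64)
    return float(np.mean(((w - h) / j) ** 2))
```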

The visual quality evaluation effect of the watermarked image shown in Figure 5 is given in Table 1.

It can be seen from Figure 5 and Table 1 that the visibility and transparency of the visible watermark image are contradictory. The method in [35] calculates the visible watermarking strength using the overall features of the watermarked region but ignores the feature information of the local neighbourhood. Its watermark image has the strongest visibility, as shown in column (b) of Figure 5, but also the worst obtrusiveness and the maximum value. Due to the strong visibility, the watermarked image has the lowest similarity to the original host image, and the corresponding value is the minimum. To further increase the transparency of the watermarked image while ensuring the visibility of the visible watermark image, the texture and colour features around each pixel to be modified are considered in both the method in [29] and our proposed method. However, compared with [29], the proposed method is based on the continuous gray gradient of the host image, and the JND value is introduced to ensure the transparency of the watermark image. In addition, compared with the original host image, the watermarked image generated by our proposed method has the least distortion, so it has the highest similarity.

5. Conclusion

To improve the robustness, visibility, and transparency of visible watermarking technology, this paper proposes an adaptive embedding method for visible watermarking. Firstly, the salient region of the host image is detected based on superpixel detection. On the one hand, the visible watermark image avoids the key objects in the host image and does not damage the value of the host image itself. On the other hand, the embedding region of the visible watermark image changes with the content of the host image, which increases the difficulty of malicious removal of the visible watermark and, in particular, can effectively resist batch automatic removal attacks. Then, in the nonsalient area of the host image, a relatively flat area with low complexity is selected as the embedding region, because details in host image regions with high texture complexity affect the visibility of the visible watermark image. Finally, considering the JND coefficient, gray distribution, and texture complexity of the embedding region, the watermark embedding strength is calculated adaptively, and ideal transparency of the visible watermark image is obtained. In a word, the proposed method achieves a good balance among robustness, visibility, and transparency. However, how to further improve the security of visible watermarking algorithms is worthy of future research.

Data Availability

The software code and data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

All authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by National R&D Project of China under Contract no. 2018YFB0803702.