Advances in Multimedia
Volume 2016, Article ID 4901609, 17 pages
http://dx.doi.org/10.1155/2016/4901609
Research Article

A Novel Printable Watermarking Method in Dithering Halftone Images

Hui-Lung Lee and Ling-Hwei Chen

Department of Computer Science, National Chiao Tung University, Hsinchu 300, Taiwan

Received 11 January 2016; Revised 5 April 2016; Accepted 3 May 2016

Academic Editor: Martin Reisslein

Copyright © 2016 Hui-Lung Lee and Ling-Hwei Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Halftone images are commonly printed in books, newspapers, and magazines, so protecting the copyright of printed halftone images has become an important issue. Digital watermarking provides a solution for copyright protection. In this paper, we propose a novel printable watermarking method for dithering halftone images. Based on downsampling and the properties of a dispersed dithering screen, the method can resist cropping, tampering, and print-and-scan attacks. In addition, compared with Guo et al.'s method, the experimental results show that the proposed method provides higher robustness against the above-mentioned attacks and better visual quality in the high-frequency regions of halftone images.

1. Introduction

Digital halftoning is a method to convert continuous-tone images to two-tone ones; it is widely used in printing newspapers, magazines, books, and so forth. When viewed from a proper distance, halftone images resemble the original grayscale images. Today, many digital halftoning methods [1–3] have been developed. Error diffusion, ordered dithering, and iteration-based techniques are three common types of halftoning methods. Error diffusion [1] is a single-pass sequential algorithm in which the past error is diffused back to the unprocessed neighboring pixels. When processing the current pixel, the accumulated past error is added to its gray value, and the result is compared with a fixed threshold of 128 to determine the output. Ordered dithering [2] applies a threshold matrix to convert a gray image to a halftone image: each pixel value is compared with the corresponding entry of the threshold matrix to determine the output. Hence, it has better time efficiency. The iteration-based technique [3] is an iterative algorithm; it generates an initial halftone image and then iteratively performs a local search on the halftone space by swapping and toggling pixels to minimize the perceived error. It usually generates better quality images than error diffusion and ordered dithering, but it is time consuming.
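The per-pixel comparison that makes ordered dithering fast can be sketched in a few lines of Python. The 4 × 4 screen below is an assumed Bayer-style layout with illustrative threshold spacing, not the paper's exact screen (shown in its Figure 1):

```python
# Ordered dithering with a 4x4 dispersed screen (assumed Bayer-style layout).
ORDER = [[1, 9, 3, 11],
         [13, 5, 15, 7],
         [4, 12, 2, 10],
         [16, 8, 14, 6]]
# Map order k to a threshold in 0..255 (an illustrative, evenly spaced choice).
SCREEN = [[16 * k - 8 for k in row] for row in ORDER]

def dither(gray):
    """Compare each pixel with its tiled threshold: white (255) if the
    gray value reaches the threshold, black (0) otherwise."""
    return [[255 if gray[i][j] >= SCREEN[i % 4][j % 4] else 0
             for j in range(len(gray[0]))] for i in range(len(gray))]

half = dither([[128] * 8 for _ in range(8)])  # a constant mid-gray patch
```

Because half of the 16 thresholds lie below 128, a constant mid-gray patch halftones to about 50% white pixels; each pixel costs only one comparison, which is why ordered dithering is the fastest of the three families.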

Many watermarking techniques have been proposed for halftone image copyright protection and authentication; they can be divided into three categories: error diffusion based [4–11], ordered dithering based [4, 12–18], and iteration-based [19–21] techniques.

For error diffusion based watermark techniques, in 2002, Fu and Au [4] proposed several data hiding and watermarking methods for halftone images. When the original multitone image is not available, a data hiding smart pair toggling scheme was presented to hide data in halftone images. When the original multitone image is available and the halftoning method is error diffusion, a modified data hiding error diffusion method was provided to hide data in the halftone images by forced self-toggling, with the distortion diffused to the surrounding pixels. This method can also be applied to halftone images generated by ordered dithering. Experimental results show that the proposed methods have high data hiding capacity, low computational complexity, good visual quality, and reasonable resistance to noise. However, the scheme is not robust to any distortion unless an error correction code is applied, and some artifacts appear at the locations of pair toggling. Later, Fu and Au [5] embedded a single watermark or multiple watermarks in the parity domain of a halftone image during halftoning. However, this can only embed 1 bit to indicate whether a watermark, or one of two watermarks, exists; thus, the method is unsuitable for copyright authentication. In 2004, Fu and Au [6] proposed an improved method that embeds a watermark in the local correlation coefficient between the watermark bits and the halftone image. The local correlation coefficient is computed by an exclusive-OR operation between a security code and the halftone image. However, if the security code size is small, the visual quality of the watermarked halftone image is degraded. In 2006, Pei and Guo [7] proposed a data hiding method for several halftone images or color planes using minimal-error bit searching. It employs the gray code to divide code vectors into two groups, each group corresponding to watermark bit 0 or 1.
According to the watermark bit embedded, the most suitable code vectors with better visual quality are chosen to form the watermarked halftone images. However, the quality degrades significantly when the capacity increases up to 50%. In 2007, Li et al. [8] proposed a watermarking method for error diffusion halftone images; the method provides a block-overlapping parity check algorithm to reduce the number of pair togglings required in Fu and Au's method [4]. Experiments show that the method has better visual quality than Fu and Au's method [4]. To treat Pei and Guo's problem [7], Guo and Liu [9] proposed a data hiding method for several halftone images using overall minimal-error searching and secret sharing. Moreover, a least-mean-square based scheme is also employed to achieve even better quality and edge-enhanced embedded results. However, the homogeneous regions of the watermarked images may have artifacts.

For ordered dithering based watermark techniques, in 2001, Fu and Au [14] proposed a two-phase watermarking method for ordered dithering images. First, one out of every set of pseudorandom locations is selected by threshold selection to embed one data bit. Second, screen modification is applied to the local neighborhood of the selected location, changing the ordered dithering screen to achieve the desired data embedding. Some quality measures were proposed to evaluate the visual quality of a dithering image. Simulation results show that the method can hide a large amount of data while maintaining good visual quality. However, it is not robust to the print-and-scan process. Also in 2001, Hel-Or [13] proposed a method to embed a watermark in printed images. First, based on the watermark bits, a dithering screen is created by selecting different dither matrices; the screen is then used to produce a printed image. The method is robust under reconstruction errors. However, it is not robust to cropping and produces artifacts.

In 2005, Pei et al. [15] presented a method to embed watermarks into dithered halftone images. The method divides a dithered halftone image into several sub-subimages by bit-interleaving and sub-subimage-interleaving preprocesses, and each watermark bit is embedded into a pair of sub-subimages. The method has low computational complexity and flexible embedding capacity, but it requires knowledge of the original watermark for copyright authentication. To treat Pei et al.'s problem, in 2008, Guo et al. [12] proposed another watermarking method using the blind paired subimage matching ordered dithering (BPSMOD) technique. It does not require knowledge of the original watermark during watermark extraction. However, the visual quality at the boundaries of the embedded dithered image may be degraded.

In 2010, Bulan et al. [16] proposed a data hiding method that embeds bits through clustered-dot orientations during the halftoning process. To extract the embedded data, a moment-based method is used to detect the clustered-dot orientations. The method is only applicable to clustered-dot halftoning, and it relies on the ability to accurately control the printing of the halftone image, which may be restrictive in some applications. In 2013, Feng et al. [17] proposed a halftone watermarking algorithm based on particle swarm optimization. It is robust under smearing and cropping attacks. Unfortunately, it needs mean filtering and median filtering to remove noise from the recovered watermark image, which is only suitable for a watermark with a solid black/white object. Thus, the method is unsuitable for a watermark consisting of a random sequence.

For iteration-based watermark techniques, in 2003, Chun and Ha [19] proposed a watermark technique based on an iterative halftoning method. In the embedding stage, a pseudorandom number generator is used to locate the embedding locations, and the pixel values at these locations are forced to be 0 or 1 according to the watermark bits. Then, in the error minimizing stage, for each unembedded pixel, the method checks whether toggling the pixel value or swapping it with a neighboring pixel can reduce the perceived halftoning error. In 2012, Guo et al. [20] proposed a DBS-based orientation modulation watermarking method, in which the direction of the point spread function is used to represent different watermark values. To extract a watermark bit, LMS-trained filters and a naive Bayes classifier are used to classify the angle. In 2015, Guo et al. [21] proposed a halftoning-based multilayer watermarking method. An efficient direct binary search and lookup table method is applied to embed multiple watermarks; then, the least mean square and a naive Bayes classifier are used to extract the watermarks. Although all these methods provide excellent image quality, they are time consuming.

In this paper, we focus on ordered dithering halftone images, and a blind watermarking method is proposed to treat the disadvantages of the above-mentioned dithering based watermarking methods. First, a grayscale image is transformed into a dithering halftone image according to an M × M dispersed dithering screen (for convenience of illustration, we take M = 4; see Figure 1); then the halftone image is divided into several subimages through downsampling. To embed watermark bits, each subimage is first divided into several 4 × 4 blocks. Then, for each order i, the number of black pixels, N_i, over all blocks at the position P_i with the ith smallest value in the 4 × 4 dispersed dithering screen (see Figure 1(c)) is counted. Finally, we take (N_i, N_j) as a pair to embed a bit based on the sign of (N_j − N_i), where (i, j) is one of the order pairs determined by the screen (see Section 3.1.3). If the embedding bit is 0 and N_i < N_j, or the embedding bit is 1 and N_i > N_j, nothing is done. Otherwise, in each block, the pixel at position P_i and the pixel at position P_j are swapped. Since |N_j − N_i| is usually larger than the corresponding difference in Guo et al.'s method [12], this provides higher robustness for the print-and-scan process. In addition, since the number of black pixels in each block is not changed, the proposed method also provides higher visual quality at edge boundaries than Guo et al.'s method [12]. Furthermore, the downsampling technique provides higher robustness than Guo et al.'s method against cropping and tampering. The rest of the paper is organized as follows. Section 2 outlines Guo et al.'s methods [12, 15]. Section 3 describes the proposed method. Section 4 shows the experimental results and comparisons. Section 5 draws conclusions.

Figure 1: An example of dispersed dithering screens. (a) A 4 × 4 dispersed dithering screen. (b) An 8 × 8 dispersed dithering screen. (c) The order and position of each pixel in (a). (d) The order and position of each pixel in (b).

2. A Brief Description for Guo et al.’s Methods

As mentioned previously, Guo et al. [12] proposed a watermarking method using BPSMOD in dithering halftone images. At first, a dispersed dithering method is adopted to convert a grayscale image into a dithering halftone image. Then, the bit-interleaving algorithm proposed by Pei et al. [15] is used to arrange the dithering halftone image. After that, the BPSMOD is applied to the arranged image to embed watermark bits. To raise the embedding capacity, a sub-subimage interleaving algorithm [12, 15] is adopted. The details are described in the following subsections.

2.1. Bit-Interleaving Algorithm in a Dithering Halftone Image

As mentioned above, in Guo et al.'s method [12], an M × M dispersed dithering screen (DS) [1] is first applied to a grayscale image G to produce a dithering halftone image H according to the following equation:

H(i, j) = 255 if G(i, j) ≥ DS(i mod M, j mod M), and H(i, j) = 0 otherwise, (1)

where G(i, j) and H(i, j) are the gray levels of pixel (i, j) in G and H, respectively, and DS(i mod M, j mod M) is the value at the corresponding position in the dispersed dithering screen. Figure 2(a) shows a 128 × 128 grayscale image. Figure 2(b) is the resulting dithering halftone image obtained by applying Figure 1(a) to Figure 2(a).

Figure 2: An example of the bit-interleaving algorithm [12]. (a) Lena image. (b) Dithering halftone image of (a) using Figure 1(a). (c) Bit-interleaving result of (b). (d) Dithering halftone image divided into 16 subimages. (e) The result of applying bit-interleaving to each subimage in (d).

After obtaining the dithering halftone image, all pixels corresponding to the same screen value are grouped into a subimage; this results in M² subimages, each of which has (P/M) × (Q/M) pixels for a P × Q image. Finally, all subimages are sorted in ascending order of their screen values and arranged from left to right and bottom to top to form a binary image. Let S_k be the kth subimage, where k = 1, ..., M², and the bottom-left subimage is S_1. The above process is called bit-interleaving [12, 15]. Figure 2(c) shows the bit-interleaving result of Figure 2(b); it contains 16 subimages, and the bottom-left subimage corresponds to screen value 8.
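The grouping step of bit-interleaving can be sketched as follows (a minimal Python sketch; the 4 × 4 order matrix is an assumed Bayer-style dispersed layout standing in for Figure 1(c)):

```python
# Bit-interleaving: every pixel is assigned to the subimage of its screen order.
ORDER = [[1, 9, 3, 11],
         [13, 5, 15, 7],
         [4, 12, 2, 10],
         [16, 8, 14, 6]]

def bit_interleave(halftone):
    """Group pixels sharing the same screen order into one subimage.
    Returns {order: list of pixel values}, with orders 1..16 in
    ascending order of the screen values."""
    subs = {k: [] for k in range(1, 17)}
    for i, row in enumerate(halftone):
        for j, v in enumerate(row):
            subs[ORDER[i % 4][j % 4]].append(v)
    return subs

subs = bit_interleave([[255] * 8 for _ in range(8)])  # toy 8x8 halftone
```

For a 128 × 128 image and a 4 × 4 screen this yields 16 subimages of 32 × 32 pixels each, matching the arrangement shown in Figure 2(c).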

2.2. BPSMOD

In BPSMOD, subimages S_{2k−1} and S_{2k} are considered a pair (S_{2k−1}, S_{2k}), where k = 1, ..., M²/2. Since the screen value used to form S_{2k−1} is smaller than that used to form S_{2k}, S_{2k−1} will usually have fewer black pixels than S_{2k}. However, sometimes S_{2k−1} has more black pixels than (or as many as) S_{2k}; this kind of pair is called a nonincreased black pixel pair (NIP). Before embedding, if a pair is an NIP, it is modified by increasing the black pixels of S_{2k} or decreasing the black pixels of S_{2k−1} so that the modified S_{2k−1} has fewer black pixels than the modified S_{2k}. Then, in embedding, each pair embeds 1 bit. If the watermark bit being embedded is 1, S_{2k−1} and S_{2k} are swapped. Otherwise, nothing is done.
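The swap step of BPSMOD can be sketched as follows; NIP elimination is omitted, and the bit convention (fewer black pixels first reads as 0) is an assumption for illustration:

```python
def bpsmod_embed(sub_a, sub_b, bit):
    """Embed one bit in a subimage pair by whole-subimage swapping:
    swap the pair for bit 1, leave it in place for bit 0."""
    return (sub_b, sub_a) if bit == 1 else (sub_a, sub_b)

def bpsmod_extract(sub_a, sub_b):
    """Read the bit back from the pair order: bit 0 if the first
    subimage has fewer black (0) pixels, bit 1 otherwise."""
    blacks = lambda s: sum(v == 0 for v in s)
    return 0 if blacks(sub_a) < blacks(sub_b) else 1
```

The extraction is blind: it relies only on the sign of the black-pixel difference within the pair, which is exactly why NIPs must be eliminated before embedding.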

Since there are M²/2 pairs of subimages and each pair embeds a bit, the embedding capacity is M²/2 bits. To increase the capacity, Guo et al. [12] first divide the original halftone image into n subimages. Then, the above-mentioned bit-interleaving method is applied to each subimage to obtain its sub-subimages, so that each subimage embeds M²/2 bits. Hence, the capacity is increased to n · M²/2 bits. Figure 2(d) shows the result of dividing Figure 2(c) into 16 subimages. Figure 2(e) shows the result of applying bit-interleaving to each subimage in Figure 2(d).

Table 1 shows the numbers of black pixels in the eight sub-subimage pairs of the bottom-left subimage of Figure 2(e). In Table 1, the difference of the numbers of black pixels within several pairs is 0; these sub-subimage pairs are NIPs. To eliminate these NIPs, at least one white pixel is chosen and changed into a black pixel in each of them, which may degrade the visual quality. Furthermore, from Table 1, the average difference of the numbers of black pixels over the eight sub-subimage pairs is 5.25. Such a low difference could cause the sign of the difference to be altered when the embedded dithered image undergoes the print-and-scan process, making the extracted watermark bit wrong. Here, we propose a method to treat these disadvantages.

Table 1: The black pixel numbers of eight sub-subimage pairs in the bottom-left subimage of Figure 2(d).

3. The Proposed Method

The proposed method contains two parts: embedding and extraction. Figure 3 shows the block diagram of the proposed method. In the embedding part, a grayscale image is first converted into a halftone image. Secondly, the halftone image is segmented into subimages through downsampling. Thirdly, watermark bits are embedded into each subimage. Fourthly, all pixels in the embedded subimages are relocated to their original positions to form the embedded dithered image. Finally, the embedded dithered image can be printed on paper. In the extraction part, after transmission, the printed embedded dithered image is scanned to produce a scanned embedded dithered image. Since the print-and-scan process can cause distortion, the recovering algorithm proposed by Guo et al. [12] is used to correct the distortion in the scanned image. After that, the output is segmented into several subimages through downsampling. Finally, the watermark is extracted from each subimage.

Figure 3: The block diagram of the proposed method.

In this section, we first introduce the proposed embedding algorithm and then describe the proposed extraction algorithm.

3.1. Embedding Algorithm

The embedding algorithm contains four stages: halftone conversion, subimage segmentation, embedding, and relocation. They are described in the following.

3.1.1. Halftone Conversion

An M × M dispersed dithering screen (DS) [1] is first applied to a grayscale image to produce a dithering halftone image according to (1).

3.1.2. Subimage Segmentation

Suppose that a watermark with n · M²/2 bits will be embedded. In the subimage segmentation stage, the halftone image is segmented into n subimages through downsampling. First, the halftone image is divided into blocks. Then, each block is further divided into n subblocks. Let B_{k,t} denote the tth subblock in block k. Finally, through downsampling, all tth subblocks are grouped into a subimage SI_t, t = 1, ..., n. For the convenience of explanation, all SI_t's are arranged into an image.
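The block-wise downsampling described above can be sketched as follows (a minimal Python sketch; square blocks and square subblocks are assumed for simplicity):

```python
def segment(img, B, s):
    """Split img into B x B blocks, split each block into s x s subblocks,
    and group the t-th subblock of every block into subimage t."""
    n = (B // s) ** 2                       # subblocks per block
    subs = [[] for _ in range(n)]
    for bi in range(0, len(img), B):        # walk the blocks
        for bj in range(0, len(img[0]), B):
            t = 0
            for si in range(0, B, s):       # walk the subblocks in a block
                for sj in range(0, B, s):
                    subs[t].append([row[bj + sj: bj + sj + s]
                                    for row in img[bi + si: bi + si + s]])
                    t += 1
    return subs
```

Cropping a corner of the image then removes only a fraction of every subimage rather than deleting whole subimages, which is the source of the cropping robustness discussed in Section 4.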

Figure 4(a) shows an image divided into 3 × 3 blocks, each of which is further divided into 4 × 4 subblocks. Figure 4(b) shows the subimage segmentation result of Figure 4(a). Figure 4(c) shows the segmentation result of Figure 2(b); the rectangles in Figure 4(c) denote the 16 subimages.

Figure 4: An example of subimage segmentation. (a) An image divided into 3 × 3 blocks, each of which has 4 × 4 subblocks. (b) The segmentation result of (a). (c) The segmentation result of Figure 2(b).
3.1.3. Embedding

In the embedding stage, an M × M dispersed dithering screen, with M even, is used. The dispersed dithering screen is first divided into 2 × 2 cells. Secondly, in each 2 × 2 cell, the two elements with thresholds more than 128 are grouped as a pair, and the other two elements with thresholds less than 128 are grouped as a pair. Hence, we obtain M²/2 pairs.

Thirdly, sort all values in the dispersed dithering screen and give each value an order; each pair is then represented by its corresponding orders. For example, in Figure 1(b) with M = 8, the four elements in the top-left 2 × 2 cell marked by a red rectangle have thresholds 6, 238, 134, and 70. The two elements with thresholds 6 and 70 are grouped as a pair, and 134 and 238 are grouped as another pair. After sorting the values in the dispersed dithering screen in Figure 1(b), the corresponding order of each element is shown in Figure 1(d), and the two pairs are represented by their corresponding orders.
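The 2 × 2 grouping and order numbering can be sketched as follows, using an assumed 4 × 4 Bayer-style screen with illustrative threshold spacing in place of Figure 1(a):

```python
ORDER = [[1, 9, 3, 11],
         [13, 5, 15, 7],
         [4, 12, 2, 10],
         [16, 8, 14, 6]]
SCREEN = [[16 * k - 8 for k in row] for row in ORDER]  # thresholds 8..248

def screen_pairs(screen, order):
    """In every 2x2 cell, pair the two thresholds below 128 and the two
    at or above 128; return each pair as a tuple of order numbers."""
    pairs, M = [], len(screen)
    for i in range(0, M, 2):
        for j in range(0, M, 2):
            cell = [(screen[i + di][j + dj], order[i + di][j + dj])
                    for di in (0, 1) for dj in (0, 1)]
            pairs.append(tuple(sorted(o for v, o in cell if v < 128)))
            pairs.append(tuple(sorted(o for v, o in cell if v >= 128)))
    return pairs

pairs = screen_pairs(SCREEN, ORDER)   # a 4x4 screen -> 8 order pairs
```

For this illustrative screen, the two thresholds in every pair differ by 64, consistent with the minimum screen-value difference of 32 noted in the next step.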

Fourthly, let P_r be the position with order r in the dispersed dithering screen DS, where r = 1, ..., M². Figure 1(c) shows the order and position of each pixel in the 4 × 4 dispersed dithering screen shown in Figure 1(a). Fifthly, each subimage is divided into blocks of size M × M, and all pixels at position P_r of all blocks are grouped into a sub-subimage SS_r.

Sixthly, the number, N_r, of black pixels in SS_r is counted. Then, for each order pair (i, j) obtained above, (N_i, N_j) is taken as a pair. Note that, for each of the above-mentioned pairs (i, j), the difference, D, between the values of P_i and P_j in the dispersed dithering screen DS is not less than 32. However, for each consecutive-order pair used by Guo et al., the difference D is equal to 16. This makes each pair used in the proposed method have a larger |N_j − N_i| than the pairs used in Guo et al.'s method. One example is given in Table 2, which shows the black pixel numbers of each pair in the top-left subimage of Figure 4(c); the average difference of the numbers of black pixels over the eight pairs is 16.75, which is greater than the 5.25 of Guo et al.'s method (see Table 1). This means that the proposed method provides higher robustness than Guo et al.'s method for the print-and-scan process.

Table 2: The black pixel numbers in the eight pairs of the top-left subimage in Figure 4(c).

Finally, one bit is embedded into each pair (N_i, N_j). If the embedding bit is 0 and N_i < N_j, or the embedding bit is 1 and N_i > N_j, nothing is done. Otherwise, in each block, the pixel at position P_i and the pixel at position P_j are swapped. Note that if N_i = N_j, no bit is embedded into the pair. This kind of pair is called an equivalent black pixel pair (EBP).
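The embedding rule can be sketched as follows; the convention that bit 0 corresponds to N_i < N_j is an assumption for illustration, and `pos_i`/`pos_j` stand for the positions P_i and P_j (with ith and jth smallest screen values) inside each M × M block:

```python
def embed_bit(blocks, pos_i, pos_j, bit):
    """Embed one bit into the list of M x M blocks of a subimage by
    counting black (0) pixels at pos_i/pos_j and, if the sign of the
    difference does not already encode the bit, swapping the two
    pixels in every block (which flips the sign but preserves the
    black-pixel count of each block)."""
    Ni = sum(b[pos_i[0]][pos_i[1]] == 0 for b in blocks)
    Nj = sum(b[pos_j[0]][pos_j[1]] == 0 for b in blocks)
    want_less = (bit == 0)                 # assumed: bit 0 <-> N_i < N_j
    if Ni != Nj and (Ni < Nj) != want_less:
        for b in blocks:                   # swap the pair in every block
            b[pos_i[0]][pos_i[1]], b[pos_j[0]][pos_j[1]] = \
                b[pos_j[0]][pos_j[1]], b[pos_i[0]][pos_i[1]]
    return blocks
```

Because the swap moves a black pixel within its own block, the per-block black-pixel count is untouched, which is what preserves edge quality.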

3.1.4. Relocation

After embedding, all pixels are relocated to their original positions to form the embedded dithered image, which can then be printed on paper to form a printed embedded dithered image. Note that the embedding capacity is n · M²/2 bits, where n is the number of subimages.

Figure 5(a) shows the result of embedding 120 watermark bits into Figure 4(c). The rectangle marked in Figure 5(a) denotes the first subimage with 8 watermark bits 10101010 embedded. Figure 5(b) shows the embedded dithered image resulting from relocating Figure 5(a).

Figure 5: An example of embedding algorithm. (a) The result of embedding 120 bits into Figure 4(c). (b) The embedded dithered image by relocating (a).
3.2. Recovering Algorithm for Print-and-Scan Process

When we receive the paper with the printed embedded dithered image, a scanner is used to capture it and produce a scanned image. The scanned image usually has geometrical distortion and a dot gain effect due to the scanner and printer properties. Dot gain is a phenomenon in printing that causes the size of a printed dot to increase or decrease slightly. Here, we adopt the recovering algorithm proposed by Guo et al. [12] to recover the embedded dithered image.

3.3. Watermark Extraction Algorithm

The watermark extraction contains two steps: subimage segmentation and extraction. To extract the embedded watermark, the recovered embedded dithered image is segmented into several subimages through the downsampling mentioned in the embedding algorithm. Each subimage is divided into blocks of size M × M, and the number, N_r, of black pixels at the position P_r with the rth smallest screen value over all blocks is counted. Then, (N_i, N_j) is taken as a pair for each order pair (i, j) used in embedding. Finally, for each pair (N_i, N_j), a watermark bit is extracted and considered to be 0 if N_i < N_j and 1 if N_i > N_j. If N_i = N_j, no watermark bit was embedded.
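Extraction mirrors the embedding rule (same assumed convention: N_i < N_j reads as bit 0):

```python
def extract_bit(blocks, pos_i, pos_j):
    """Read one bit from the blocks of a subimage by comparing the
    black-pixel counts at pos_i and pos_j; None marks an EBP."""
    Ni = sum(b[pos_i[0]][pos_i[1]] == 0 for b in blocks)
    Nj = sum(b[pos_j[0]][pos_j[1]] == 0 for b in blocks)
    if Ni == Nj:
        return None        # equivalent black pixel pair: nothing embedded
    return 0 if Ni < Nj else 1
```

Because N_i and N_j are sums over all blocks of a subimage, losing or flipping a few pixels usually leaves the sign of N_j − N_i, and hence the extracted bit, unchanged.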

4. Experimental Results and Comparisons

Eight 512 × 512 test images used in [12, 15] are shown in Figure 6 and are also used in our experiments. Figure 7 shows the visual quality comparison of different methods when a 4 × 4 dispersed dithering screen is used. The regions marked by circles in Figures 7(e)–7(h) are high-frequency regions of Figures 7(a)–7(d), respectively. We can see that Figure 7(h) is similar to Figure 7(e), but the boundary area in Figure 7(f) is smeared and unclear. This means that the proposed method provides higher visual quality in high-frequency areas than Guo et al.'s method [12]. Besides, we can see that Figure 7(c) has salt-and-pepper noise. Hence, the proposed method also provides better visual quality than Hel-Or's method [13].

Figure 6: Thumbnail of eight test images. (a) Lena, Mandrill, Earth, and Tiffany. (b) Shuttle, Peppers, Milk, and Lake.
Figure 7: Visual quality comparison with 128 embedding bits when a 4 × 4 dispersed dithering screen is used. (a) Original dithered image. (b) Embedded dithered image using Guo et al.'s method [12]. (c) Embedded dithered image using Hel-Or's method [13]. (d) Embedded dithered image using the proposed method. (e) Enlarged partial image of (a). (f) Enlarged partial image of (b). (g) Enlarged partial image of (c). (h) Enlarged partial image of (d).

Figure 8 shows the visual quality comparison of different methods when an 8 × 8 dispersed dithering screen is used. We can see that Figure 8(h) is similar to Figure 8(e), but the boundary area in Figure 8(f) is smeared and unclear. Comparing Figures 7(f) and 8(f), we can see that Figure 8(f) is more smeared and unclear than Figure 7(f). Besides, Figure 8(c) has more salt-and-pepper noise. Hence, the proposed method provides better visual quality than Hel-Or's method [13].

Figure 8: Visual quality comparison with 128 embedding bits when an 8 × 8 dispersed dithering screen is used. (a) Original dithered image. (b) Embedded dithered image using Guo et al.'s method [12]. (c) Embedded dithered image using Hel-Or's method [13]. (d) Embedded dithered image using the proposed method. (e) Enlarged partial image of (a). (f) Enlarged partial image of (b). (g) Enlarged partial image of (c). (h) Enlarged partial image of (d).

Next, two objective measures [7, 14] are used to evaluate halftone image quality. One is the Pei-Guo-PSNR proposed by Pei and Guo [7]; it measures the quality of a halftone image and is evaluated as follows:

Pei-Guo-PSNR = 10 log10( P · Q · 255² / Σ_{i,j} [ G(i, j) − Σ_{(m,n)} w(m, n) H(i + m, j + n) ]² ), (2)

where w is a least-mean-square (LMS) filter obtained by a training process [7], G is the original P × Q grayscale image, and H is the corresponding halftone image. Here, a 7 × 7 LMS filter [7] (see Figure 9) is adopted to measure the quality of halftone images. Table 3 shows the quality comparisons of various methods using Pei-Guo-PSNR; a random bit stream is adopted as the watermark. From this table, we can see that the proposed method and Guo et al.'s method provide similar quality for the 4 × 4 screen, with the proposed method better than Hel-Or's method; for the 8 × 8 screen, the proposed method provides better quality than the other methods. Because Pei-Guo-PSNR is basically the PSNR between the original grayscale image and a low-pass version of the halftone image, it effectively measures distortions to the low-frequency image content [14] but is improper for measuring the high-frequency image content.
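A direct reading of the measure can be sketched as follows; the clamped border handling is a simplification, and any odd-sized kernel may be passed in where the paper uses the trained 7 × 7 LMS filter of Figure 9:

```python
import math

def pei_guo_psnr(gray, half, w):
    """PSNR between the original grayscale image and the halftone image
    low-pass filtered by kernel w (edges handled by clamping)."""
    H, W, r = len(gray), len(gray[0]), len(w) // 2
    mse = 0.0
    for i in range(H):
        for j in range(W):
            # Filtered estimate of the perceived gray value at (i, j).
            est = sum(w[m][n] * half[min(max(i + m - r, 0), H - 1)]
                               [min(max(j + n - r, 0), W - 1)]
                      for m in range(len(w)) for n in range(len(w)))
            mse += (gray[i][j] - est) ** 2
    mse /= H * W
    return float('inf') if mse == 0 else 10 * math.log10(255 ** 2 / mse)
```

With an identity kernel and identical inputs the MSE is zero (infinite PSNR); any low-pass kernel instead rewards halftones whose local averages track the original gray levels.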

Table 3: Quality comparisons of various algorithms using Pei-Guo-PSNR [12].
Figure 9: Coefficients of LMS filter.

Fu and Au [14] proposed another measure to treat the above-mentioned disadvantage. Let G be the original grayscale image and H be the embedded halftone image. Fu and Au [14] define two special classes of elements in H as follows: Class 1, a black pixel in a bright region of G; Class 2, a white pixel in a dark region of G. Based on these two classes, Fu and Au [14] define four scores S_1, ..., S_4, where S_k is the total number of Class 1 and Class 2 elements in H having k − 1 neighbors with the same pixel value in the 4-neighborhood. S_1 corresponds to the number of isolated Class 1 or Class 2 elements. S_3 and S_4 can be used to measure the visually disturbing "salt-and-pepper" clusters formed by neighboring pixels [14]. Thus, we adopt scores S_3 and S_4 to measure the quality of a halftone image; algorithms with smaller S_3 and S_4 are better. Table 4 shows the quality comparisons of various methods based on scores S_3 and S_4. From this table, we can see that the proposed method has smaller S_3 and S_4; thus, it is better than the other methods.
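The two classes and the score counting can be sketched as follows; the 128 brightness threshold used to decide "bright" and "dark" regions is an assumption for illustration:

```python
def sp_scores(gray, half):
    """Count Class 1 (black pixel in a bright region) and Class 2 (white
    pixel in a dark region) elements by how many 4-neighbours share
    their value; the returned list [S1, S2, S3, S4] puts an element
    with k-1 same-valued neighbours into S_k."""
    H, W = len(gray), len(gray[0])
    S = [0, 0, 0, 0, 0]                    # S[1]..S[4]; index 0 unused
    for i in range(H):
        for j in range(W):
            v = half[i][j]
            cls1 = v == 0 and gray[i][j] >= 128    # assumed threshold
            cls2 = v == 255 and gray[i][j] < 128
            if not (cls1 or cls2):
                continue
            same = sum(1 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= i + di < H and 0 <= j + dj < W
                       and half[i + di][j + dj] == v)
            S[same + 1] += 1
    return S[1:]
```

S_1 counts isolated misplaced dots; S_3 and S_4 grow only when such dots clump into the visible salt-and-pepper clusters the measure targets.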

Table 4: Quality comparisons of various algorithms based on scores S_3 and S_4 [14].

From the above experiments, we can see that the image quality of Guo et al.'s method becomes worse when the dispersed dithering screen size increases. The reason is that a larger screen makes the NIP problem in Guo et al.'s method more serious: many black pixels must be added to eliminate these NIPs, so the number of black pixels in some subimages is changed. In addition, the pixel swapping distance in Guo et al.'s method also increases with the screen size. Hence, the boundaries in Guo et al.'s results become more smeared and unclear, and the image quality is degraded. The proposed method does not have this problem, so the boundary remains clear when the dispersed dithering screen size increases.

Since Feng et al.'s method [17] can only embed 64 × 64 watermark bits, we compare the qualities of the embedded halftone images using Feng et al.'s method [17] and the others only for 64 × 64 watermark bits embedded. Table 5 shows the comparison results for the 512 × 512 halftone image shown in Figure 8(a). From this table, we can see that the proposed method provides better quality than the other methods.

Table 5: Quality comparisons of various algorithms with 64 × 64 watermark bits embedded in Figure 8(a).

To justify the pair selection, we compare three different kinds of selections of order pairs (i, j). The first selection, used in Guo et al.'s method, pairs consecutive orders. The second selection is the one used in the proposed method, obtained from the 2 × 2 cells of the dispersed dithering screen. The third selection pairs orders with the largest screen-value differences. From the dispersed dithering screens shown in Figure 1, we can see that when |j − i| is larger, the screen-value difference D is larger; this makes |N_j − N_i| larger. A larger |N_j − N_i| provides higher robustness for the print-and-scan process, but the quality is degraded. Thus, the second selection provides both high robustness and good quality. To prove this point, we have done some experiments based on these three kinds of selections by embedding 128 watermark bits into a 512 × 512 image.

Table 6 shows the quality comparisons of the three kinds of pair selections using Pei-Guo-PSNR and scores S_3 and S_4. From this table, we can see that the three selections provide similar quality in Pei-Guo-PSNR. But, in S_3 and S_4, the second selection used in the proposed method gives the best result, and the third selection gives the worst. Table 7 shows the black pixel numbers in the eight pairs of the top-left subimage for the various pair selections. The average difference of the numbers of black pixels over the eight pairs is 223.5 for the second selection used in the proposed method, 238.5 for the third selection, and 62.75 for the first selection used in Guo et al.'s method. This means that Guo et al.'s selection provides less robustness than the other two, while the third selection provides robustness similar to the second for the print-and-scan process. However, the image quality of the third selection is worse than that of the other two. Therefore, considering both image quality and robustness, the second selection used in the proposed method is better.

Table 6: Quality comparisons of three different kinds of pair selections using Pei-Guo-PSNR [12] and scores S_3 and S_4 [14].
Table 7: The black pixel numbers of the eight pairs in the top-left subimage for different kinds of selections.

Furthermore, in the experiments of Guo et al. [12] based on a 4 × 4 dispersed dithering screen, the average percentages of NIPs with 8, 32, 128, and 512 bits embedded into each of the eight test images shown in Figure 6 are 12.5%, 15.62%, 27.32%, and 45%, respectively. Guo et al. modify the number of black pixels in these NIPs before embedding the watermark, which may lower the visual quality. By contrast, the proposed method does not modify any pair before data embedding.

As to the embedding capacity, since data embedding depends on the difference of the black pixel numbers of each pair, a pair whose difference is zero cannot be used for data embedding in the proposed method; this reduces the embedding capacity. Fortunately, our experimental results show that this situation rarely appears in most images. Table 8 shows the numbers of equivalent black pixel pairs (EBPs) in the eight dithered test images. From this table, we can see that most images have zero EBPs, except the "Tiffany" image. The reason is that most pixels in the "Tiffany" image have gray values > 90, and the gray values of pixels in a local area are nearly constant. Hence, for a pair whose corresponding values in DS both lie below the local gray values (see Figure 1), every pixel in the two corresponding sub-subimages will be a white point. This makes N_i = N_j = 0 and the pair an EBP.

Table 8: Numbers of equivalent black pixel pairs in eight test images using the proposed method.

In the next experiment, we demonstrate the robustness of the proposed method and Guo et al.’s method under cropping and tampering attacks. To measure the integrity of the extracted watermark, the correct decoding rate (CDR) is defined as CDR = 1 − LD(W, W′)/|W|, where LD, W, and W′ denote the Levenshtein distance [22], the original watermark, and the extracted watermark, respectively, and |W| is the number of watermark bits. The Levenshtein distance is a string metric that estimates the minimum number of edit operations needed to transform one string into another. In Figures 10 and 11, the 32 × 32 watermark shown in Figure 10(a) is embedded into a halftone image using the proposed method and Guo et al.’s method, respectively; the results are shown in Figures 10(b) and 11(a). Figures 10(c) and 11(b) show the embedded dithered images cropped by a 1/4 portion. Figures 10(d) and 11(c) show the embedded dithered images tampered with several words. Note that when the number of embedded bits is greater than 8, the dithered image is first divided into subimages and the watermark bits are divided into the same number of parts; each part is embedded into one subimage. The resulting subimages using the proposed method and Guo et al.’s method are shown in Figures 10(e), 10(f), 11(d), and 11(e). From Figure 10(e), we can see that each subimage obtained by the proposed method is also cropped by a 1/4 portion; the reason is that each subimage is obtained by block downsampling (see Figure 4). The cropping thus removes 1/4 of all pixel pairs. Since a watermark bit is embedded through the sign of the accumulated black-pixel difference over the pairs of a subimage, losing 1/4 of the pairs will not change this sign, and the watermark can be extracted correctly (see Figure 10(g)). Figure 10(h) shows the watermark extracted from Figure 10(d); it is also extracted correctly. In contrast, Figure 11(d) shows that some subimages produced by Guo et al.’s method are totally removed, so the watermark parts embedded in these subimages are lost (see Figure 11(f)).
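The CDR can be computed from a standard dynamic-programming implementation of the Levenshtein distance [22]; a minimal sketch, assuming the CDR normalizes the distance by the watermark length:

```python
def levenshtein(s, t):
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions needed to turn string s into string t."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (cs != ct)))   # substitution
        prev = cur
    return prev[-1]

def cdr(original, extracted):
    """Correct decoding rate: 1 - LD(W, W') / |W| for watermark bit
    strings; 1.0 means the watermark was recovered perfectly."""
    return 1.0 - levenshtein(original, extracted) / len(original)

# One flipped bit in an 8-bit watermark gives CDR = 1 - 1/8.
print(cdr("10110100", "10010100"))  # 0.875
```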
Figures 11(f) and 11(g) show the watermarks extracted from Figures 11(b) and 11(c), respectively. Table 9 shows the average correct decoding rates for cropping 1/3, 1/4, and 1/2 portions. Note that in this experiment, each of the eight images is cropped at three different locations for each cropping portion; thus, there are 72 (3 × 3 × 8) cropped images in total. From this table, we can see that the proposed method is more robust than both Guo et al.’s method and Feng et al.’s method [17].

Table 9: Average correct decoding rates for different cropping sizes.
Figure 10: The robustness of the proposed method. (a) A 32 × 32 watermark. (b) Embedded dithered image using the proposed method. (c) Embedded dithered image cropped by 1/4 portion. (d) Embedded dithered image with tampering. (e) Downsampled subimages of (c). (f) Downsampled subimages of (d). (g) The watermark extracted from (c). (h) The watermark extracted from (d).
Figure 11: The robustness of Guo et al.’s method [12]. (a) Embedded dithered image using Guo et al.’s method. (b) Embedded dithered image cropped by 1/4 portion. (c) Embedded dithered image with tampering. (d) The result of applying bit-interleaving to each subimage of (b). (e) The result of applying bit-interleaving to each subimage of (c). (f) The watermark extracted from (b). (g) The watermark extracted from (c).

To measure the robustness of the proposed method and Guo et al.’s method [12] under the print-and-scan attack, for each of the eight test images, we first embed 8, 32, 128, and 512 bits into its corresponding halftone image; each embedded halftone image is then printed at 150 dpi. After printing, each printed image is scanned at 150, 450, and 750 dpi, respectively, and the embedded watermark is extracted from each scanned image. Table 10 shows the average correct decoding rates over the eight test images shown in Figure 6. From this table, we can see that the average correct decoding rates of the proposed method are higher than those of Guo et al.’s method; that is, the proposed method is more robust under the print-and-scan attack.

Table 10: Average correct decoding rates of all scanned embedded images with different scanning resolutions.

5. Conclusions

In this paper, a robust watermarking method has been proposed for dithered halftone images. Before embedding, a dithered halftone image is divided into subimages through downsampling; this step provides robustness to cropping and tampering. In the embedding step, each selected subimage pair embeds a watermark bit through the sign of its black-pixel difference. Guo et al.’s method also embeds one bit per pair but uses a different pair selection; since the average black-pixel difference of the pairs used in the proposed method is larger than that of the pairs used in Guo et al.’s method, the proposed method achieves a higher correct decoding rate after the print-and-scan process. Experimental results show that the proposed method indeed provides higher robustness than Guo et al.’s method against cropping, tampering, and the print-and-scan process. In addition, the experimental results also show that the proposed method provides higher visual quality in high-frequency areas than Guo et al.’s method.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the Ministry of Science and Technology of Taiwan under Contract MOST 103-2221-E-009-121-MY2.

References

  1. R. W. Floyd and L. Steinberg, “An adaptive algorithm for spatial grey scale,” in Proceedings of the SID International Symposium, Digest of Technical Papers, pp. 36–37, 1975.
  2. B. E. Bayer, “An optimum method for two-level rendition of continuous-tone pictures,” in Proceedings of the IEEE International Conference on Communications, pp. 2611–2615, June 1973.
  3. J. P. Allebach, R. Eschbach, and G. G. Marcu, “DBS: retrospective and future directions,” in Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts VI, Proceedings of SPIE, pp. 358–376, San Jose, Calif, USA, January 2001.
  4. M. S. Fu and O. C. Au, “Data hiding watermarking for halftone images,” IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 477–484, 2002.
  5. M. S. Fu and O. C. Au, “A robust public watermark for halftone images,” in Proceedings of the IEEE International Symposium on Circuits and Systems, pp. III/639–III/642, May 2002.
  6. M. S. Fu and O. C. Au, “Correlation-based watermarking for halftone images,” in Proceedings of the International Symposium on Circuits and Systems (ISCAS '04), vol. 2, pp. II-21–II-24, Vancouver, Canada, May 2004.
  7. S.-C. Pei and J.-M. Guo, “High-capacity data hiding in halftone images using minimal-error bit searching and least-mean square filter,” IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1665–1679, 2006.
  8. R. Y. M. Li, O. C. Au, C. K. M. Yuk, S.-K. Yip, and S.-Y. Lam, “Halftone image data hiding with block-overlapping parity check,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), vol. 2, pp. II-193–II-196, Honolulu, Hawaii, USA, April 2007.
  9. J.-M. Guo and Y.-F. Liu, “Halftone-image security improving using overall minimal-error searching,” IEEE Transactions on Image Processing, vol. 20, no. 10, pp. 2800–2812, 2011.
  10. Y. F. Guo, O. C. Au, and K. Tang, “Watermark embedding for multiscale error diffused halftone images by adopting visual cryptography,” International Journal of Digital Crime and Forensics, vol. 7, no. 1, pp. 51–68, 2015.
  11. Y. F. Guo, O. C. Au, J. T. Zhou, K. Tang, and X. P. Fan, “Halftone image watermarking via optimization,” Signal Processing: Image Communication, vol. 41, pp. 85–100, 2016.
  12. J.-M. Guo, S.-C. Pei, and H. Lee, “Paired subimage matching watermarking method on ordered dither images and its high-quality progressive coding,” IEEE Transactions on Multimedia, vol. 10, no. 1, pp. 16–30, 2008.
  13. H. Z. Hel-Or, “Watermarking and copyright labelling of printed images,” Journal of Electronic Imaging, vol. 10, no. 3, pp. 794–803, 2001.
  14. M. S. Fu and O. C. Au, “Data hiding in ordered dithered halftone images,” Circuits, Systems, and Signal Processing, vol. 20, no. 2, pp. 209–232, 2001.
  15. S.-C. Pei, J.-M. Guo, and H. Lee, “Novel robust watermarking technique in dithering halftone images,” IEEE Signal Processing Letters, vol. 12, no. 4, pp. 333–336, 2005.
  16. O. Bulan, G. Sharma, and V. Monga, “Orientation modulation for data hiding in clustered-dot halftone prints,” IEEE Transactions on Image Processing, vol. 19, no. 8, pp. 2070–2084, 2010.
  17. L. Feng, D. Cong, H. Shu, and B. Liu, “Adaptive halftone watermarking algorithm based on particle swarm optimization,” Journal of Multimedia, vol. 8, no. 3, pp. 183–190, 2013.
  18. C.-H. Son and H. S. Choo, “Watermark detection from clustered halftone dots via learned dictionary,” Signal Processing, vol. 102, pp. 77–84, 2014.
  19. I. G. Chun and S. Ha, “A watermarking method for halftone images based on iterative halftoning method,” in E-Commerce and Web Technologies, vol. 2738 of Lecture Notes in Computer Science, pp. 165–175, Springer, Berlin, Germany, 2003.
  20. J.-M. Guo, C.-C. Su, Y.-F. Liu, H. Lee, and J.-D. Lee, “Oriented modulation for watermarking in direct binary search halftone images,” IEEE Transactions on Image Processing, vol. 21, no. 9, pp. 4117–4127, 2012.
  21. J.-M. Guo, G.-H. Lai, K. Wong, and L.-C. Chang, “Progressive halftone watermarking using multilayer table lookup strategy,” IEEE Transactions on Image Processing, vol. 24, no. 7, pp. 2009–2024, 2015.
  22. V. I. Levenshtein, “Binary codes capable of correcting deletions, insertions, and reversals,” Soviet Physics—Doklady, vol. 10, no. 8, pp. 707–710, 1966.