
Dengyong Zhang, Xiao Chen, Feng Li, Arun Kumar Sangaiah, Xiangling Ding, "Seam-Carved Image Tampering Detection Based on the Cooccurrence of Adjacent LBPs", Security and Communication Networks, vol. 2020, Article ID 8830310, 12 pages, 2020. https://doi.org/10.1155/2020/8830310

Seam-Carved Image Tampering Detection Based on the Cooccurrence of Adjacent LBPs

Academic Editor: Honghao Gao
Received: 26 Aug 2020
Revised: 11 Nov 2020
Accepted: 11 Dec 2020
Published: 21 Dec 2020

Abstract

Seam carving has been widely used in image resizing because of its superior ability to avoid image distortion and deformation, but it can also be used maliciously, for example, to tamper with the content of an image. As a result, seam-carving detection is becoming crucially important for verifying image authenticity. However, existing methods do not achieve satisfactory detection accuracy, especially when the scaling ratio is low. In this paper, we propose an image forensic approach based on the cooccurrence of adjacent local binary patterns (LBPs), which employs LBP to better capture texture information. Specifically, a total of 24 energy-based, seam-based, half-seam-based, and noise-based features in the LBP domain are applied to seam-carving detection. Moreover, the cooccurrence features of adjacent LBPs are incorporated to highlight the local relationship between LBPs. A trained SVM is then adopted for feature classification to determine whether an image is seam-carved or not. Experimental results demonstrate the effectiveness of the approach in improving detection accuracy across different scaling ratios, especially low ones.

1. Introduction

As image processing technologies, such as scene graph prediction [1] and image tampering, have steadily developed, content-aware image retargeting techniques have emerged and attracted increasing attention. One of these, seam carving [2], avoids image distortion and deformation when applied to an image, producing no obvious difference in visual effect. However, this method can also be used for malicious tampering, for instance, removing specific objects from an image or modifying the semantic content conveyed by the original image, which creates great obstacles for image forensics. Therefore, seam-carving detection has become an important issue, and designing a method for detecting images that may have been subjected to seam carving is of vital importance.

Over the past years, several approaches have been proposed for the detection of seam carving. Sarkar et al. [3] proposed a forensic method that exploits a 324-dimensional Markov feature for classification in the block-based Discrete Cosine Transform domain. However, it achieves a less-than-ideal detection accuracy at small scaling ratios. As described in [4], Lu and Wu proposed a detection method based on a forensic hash to address seam-carving estimation and tampering localization. However, the forensic hash must be built ahead of time, making it an active rather than passive approach; furthermore, it can be detected so easily that counterfeiters can remove it. Wei et al. [5] proposed dividing the image into small squares of a fixed size and searching for the patch that could possibly recover the small squares from seam carving; the method is based on the patch transition probability among the three connected small squares. Yin et al. [6] first used the local binary pattern (LBP) to preprocess the image data and then defined six new half-seam-based features to reveal the energy change in half of an image; these were combined with the existing 18 energy features, and the resulting features were classified using a support vector machine (SVM). However, this method still yields unsatisfactory detection accuracy at low scaling ratios. Wattanachote and Shih [7] proposed a forensic approach based on the Blocking Artifact Characteristics Matrix (BACM). In an original JPEG image, the BACM exhibits regular symmetry; after seam carving, this symmetry is destroyed. From this observation, 22 features are proposed and a high recognition rate is obtained. However, this method is easily affected by the quality factor (QF). Ke et al. [8] put forward a forensic method based on an additional seam-carving operation.
First, an additional seam-carving operation is performed on the image under test. The approach then extracts 11-dimensional features by calculating the similarity, energy relative error, and seam distance difference between the image and its seam-carved version; this facilitates identifying whether the image has been tampered with using seam carving. This method has wide applicability. Ye and Shi [9] proposed a method that incorporates a local derivative pattern, Markov transition probabilities, and the subtractive pixel adjacency model. They also utilized recursive feature elimination based on a linear support vector machine to reduce the feature dimensionality, which greatly improves detection accuracy. Liu et al. [10] combined calibrated adjacent joint density with a rich-model-based method originally used for steganalysis and exploited a feature selection algorithm to reduce the dimensionality of the combined feature set, enabling further improvements in detection accuracy for the forensic task. Subsequently, as discussed in [11], Liu developed a hybrid large-feature-mining-based method. As there are many types of large features, ensemble learning is utilized to process the high-dimensional features, effectively solving the problem of differentiating between seam-carved and untouched JPEG images, although at considerable computational cost. Han et al. [12] further proposed a blind detection method based on the block artifact grid (BAG) mispairing characteristic, which first extracts the BAG from a JPEG image and then constructs 10-dimensional features of the BAG chart. Clustering is then applied to these features to obtain the classification results. This method can not only detect whether an image has been seam-carved but can also go a step further and locate the object removed from the image by seam carving. Cieslak et al. [13] proposed a forensic approach based on the combination of convolutional neural networks (CNNs) and the local binary pattern (LBP). This method first transforms images into the LBP domain and then feeds them to a CNN to determine whether they have undergone a seam-carving operation; the reported detection accuracy exceeds 81%. Similarly, Ye et al. [14] proposed a CNN-based deep learning architecture that jointly optimizes feature extraction and pattern classification to employ more effective features, thereby improving classification speed and accuracy. Zhang et al. [15] proposed a detection approach based on uniform local binary patterns (ULBP). Here, ULBP is exploited to reduce the number of binary patterns without losing information, thereby lowering the feature dimensionality and mitigating the effects of high-frequency noise. Lu and Niu [16] combined histogram features of the local neighborhood magnitude occurrence pattern (LNMOP) with a histogram of oriented gradients (HOG) and selected the final classifier features from the extracted LNMOP features. However, this approach does not consider the postprocessing of tampered images.

A large number of approaches thus exist for the detection of seam-carved images, which has strongly promoted the development of seam-carving forensics. However, even images that are rarely tampered with at low scaling ratios (such as higher-resolution biopsy slice images [17] with many details) can have their expressed content changed, so there remains scope for further improving detection accuracy. As we know, when an image undergoes seam carving, the minimum cumulative energy seams defined by the energy function are removed, changing not only the energy but also the local texture. Most existing methods that employ this characteristic directly use the local binary pattern (LBP) features of the image for forensic detection; however, such features cannot completely represent the change in local texture information.

Accordingly, we introduce the cooccurrence of adjacent LBPs into the forensic task. Cooccurrence is defined as the simultaneous occurrence of all forms of adjacent LBPs and is generally used to extract information that is bound up with the global structure from local-region characteristics; as described in [18], we obtain this information through autocorrelation matrices calculated from pairs of adjacent LBPs. The advantage of this feature is that it comprises both the original LBPs and the cooccurrence of adjacent LBPs, so the location relationship of adjacent LBP values can be determined more explicitly. In this paper, the cooccurrence feature is combined with energy-bias, noise-based, and half-seam-based features for feature extraction. Subsequently, a trained support vector machine (SVM) classifier is used to determine whether seam carving has been applied to the image. Our experimental results show that this forensic method achieves superior detection performance relative to existing methods. The contributions of the proposed approach are threefold: (1) we analyse the distortion introduced by seam-carving-based resizing and conclude that the relationship between adjacent pixels is an important clue for the forensics of the seam-carving operation; inspired by [18], we extract the cooccurrence feature of adjacent LBPs; (2) we combine the cooccurrence feature of adjacent LBPs with the existing energy features to form a new feature set; (3) we apply PCA to reduce the dimensionality of the features to obtain the final feature set. A series of experiments verifies that the proposed method achieves higher accuracy in identifying whether an image has undergone seam carving. Moreover, the experimental results demonstrate that the proposed approach is robust for both TIFF and JPEG images.

The remainder of this paper is organized as follows. Section 2 briefly introduces the theory of the seam-carving algorithm. Section 3 introduces some related backgrounds and discusses the seam-carving detection method based on the cooccurrence of adjacent LBPs. The experimental results are reported in Section 4. Finally, Section 5 summarizes the paper.

2. Seam Carving

The seam-carving algorithm preserves important areas of an image and operates on unimportant ones. Because unimportant areas are smooth, modifying them rarely produces a perceptible visual change. As discussed in [2], the seam-carving algorithm assigns an energy value to each pixel on the basis of its importance and then determines which parts can be removed according to these energy values. This type of algorithm bypasses limitations of traditional image processing, such as the deformation [19] or distortion caused by uniform image scaling, which is why it is so widely used.

A seam is an eight-connected path consisting of connected pixels that pass through the image from top to bottom or from left to right. The least important parts, namely, the lowest-energy pixel paths (the "optimal seams"), are deleted by the seam-carving algorithm to reduce the image scale to the desired width and height while visually retaining the important content of the image. In the calculation of a seam, the energy function is defined as

e(I) = |∂I/∂x| + |∂I/∂y|, (1)

where I is an n × m image. There are two types of seams, namely, vertical and horizontal. Taking the vertical seam as an example, its definition is given in equation (2), and the horizontal seam is defined analogously:

s^x = {s_i^x}_{i=1}^{n} = {(x(i), i)}_{i=1}^{n}, s.t. ∀i, |x(i) − x(i − 1)| ≤ 1, (2)

where i denotes the row coordinate and x(i) denotes the corresponding column coordinate of the seam pixel. On the basis of the above formula, the pixel value set of a seam s can be obtained as

I_s = {I(s_i)}_{i=1}^{n} = {I(x(i), i)}_{i=1}^{n}. (3)
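As a concrete illustration of the gradient-magnitude energy in equation (1), the following sketch computes an energy map with NumPy; the forward-difference border handling (replicating the last row/column) is an implementation choice, not part of the original definition.

```python
import numpy as np

def energy_map(gray):
    """L1 gradient-magnitude energy, e(I) = |dI/dx| + |dI/dy| (equation (1))."""
    gray = gray.astype(np.float64)
    # Forward differences; the last row/column is replicated at the border.
    dx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    dy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    return dx + dy

img = np.array([[10., 10., 80.],
                [10., 10., 80.],
                [10., 10., 80.]])
e = energy_map(img)
print(e[:, 1])  # the middle column borders the bright region, so its energy is high
```

Pixels on the boundary between the dark and bright regions receive high energy, so a vertical seam chosen by equation (4) would avoid them.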

It should be noted that, for a vertical seam, there is exactly one pixel in each row of the seam. In light of the energy function above, the optimal seam s* that minimizes the seam cost is defined as

s* = min_s E(s) = min_s Σ_{i=1}^{n} e(I(s_i)). (4)

Dynamic programming is exploited to select the optimal seam, which is then repeatedly eliminated to reduce the image size or remove a target object. Taking the vertical seam as an example, this step traverses the image from the second row to the last row and computes the cumulative minimum energy matrix M for each pixel (i, j):

M(i, j) = e(i, j) + min(M(i − 1, j − 1), M(i − 1, j), M(i − 1, j + 1)). (5)

Once the cumulative minimum energy M has been computed for all possible seams, the optimal seams are found by backtracking from the minimum value in the last row of M. The content of a seam-carved image is visually stable: any visual artifacts appear only near the removed seams, while the remainder of the image remains unchanged. This is because, when an optimal seam is removed, the pixel paths to the right of the seam shift left to compensate for the missing pixel path. Generally speaking, such weak changes are difficult to detect visually.
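The cumulative-minimum-energy construction and the backtracking step described above can be sketched as follows; `optimal_vertical_seam` is a hypothetical helper name, and the toy energy map is chosen so that the low-energy diagonal is the optimal seam.

```python
import numpy as np

def optimal_vertical_seam(e):
    """Build the cumulative minimum energy matrix M by dynamic programming,
    then backtrack from the minimum of the last row (a sketch of the
    procedure described above). `e` is the per-pixel energy map."""
    n, m = e.shape
    M = e.astype(np.float64).copy()
    for i in range(1, n):
        for j in range(m):
            lo, hi = max(j - 1, 0), min(j + 1, m - 1)
            M[i, j] += M[i - 1, lo:hi + 1].min()
    # Backtrack from the minimum value of the last row of M.
    seam = [int(np.argmin(M[-1]))]
    for i in range(n - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 1, m - 1)
        seam.append(lo + int(np.argmin(M[i, lo:hi + 1])))
    return seam[::-1]  # column index of the seam pixel in each row

e = np.array([[1., 9., 9.],
              [9., 1., 9.],
              [9., 9., 1.]])
print(optimal_vertical_seam(e))  # follows the low-energy diagonal: [0, 1, 2]
```

Removing the returned column in each row and shifting the remaining pixels left implements one seam-carving step.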

An example of seam carving is presented in Figure 1. Here, we perform seam carving on an image; the vertical seams to be eliminated are displayed and labeled in red. By removing these vertical seams, the width of the image is reduced by 15%. As can be seen from Figure 1(c), there is no obvious deformation or distortion of the refrigerator, and the most important content remains stable. This is because seam carving mainly removes the smooth areas of the image, making the change visually undetectable. Therefore, we study the texture information of the image, introduce the cooccurrence of adjacent LBPs, and combine it with the energy features to detect this visually imperceptible tampering.

3. Proposed Method

The energy feature describes inherent characteristics of the image. During seam carving, the lowest-energy seams defined by the energy function are removed, changing the energy information of the image. At the same time, the change in local texture is important evidence of a seam-carving operation: local texture reflects the characteristics of the image surface, depicts the repeated local patterns and their arrangement rules, and exhibits rotation invariance as well as excellent noise resistance. Combining energy features with local texture features embodies the image information more comprehensively, which allows the proposed method to achieve higher detection accuracy. Therefore, in this paper, we extract the energy features and the local texture features related to the global structure in the LBP domain, where the LBP is used to capture the change in local texture.

Our proposed approach involves several steps. First, the LBP values of the suspicious image are calculated to transform the image into the LBP domain. Second, we extract the above-mentioned features in the LBP domain: energy-based features, seam-based features, noise-based features, half-seam-based features, and the cooccurrence feature of adjacent LBPs. Among these, as introduced in Ryu and Lee's paper, the energy-based features comprise the average energy, average row energy, average column energy, and average energy difference, for a total of four features. The seam-based features are the minimum, maximum, mean, standard deviation, and the difference between the maximum and minimum in both the column and row directions of the cumulative minimum energy matrix constructed over all possible seams, for a total of 10 features. To obtain the noise-based features, the candidate image I is filtered with a Wiener filter F, the noise is computed as N = I − F(I), and the average value, standard deviation, skewness, and kurtosis of N are calculated. As described in Yin's paper, the half-seam-based features are defined with reference to half seams rather than the whole seams of the image; in other words, they are calculated from the cumulative minimum energy matrix M of half of the image. Three statistics (min, max, and mean) are then calculated for the specified column and row directions, for a total of six features. To these 24 features, we propose to add the cooccurrence feature of adjacent LBPs for seam-carving detection, giving a total of 25 LBP-based features. Following feature extraction, an SVM classifier is adopted for the classification task; because of its supervised nature, it must be trained before the testing stage.
In the final classification phase, we input the features extracted from the image under study into the trained SVM classifier. Finally, we obtain the detection results under different scaling ratios and compare them with the data derived through other methods. Figure 2 presents the general framework of our proposed method.
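As an illustration of the noise-based features, the following sketch filters an image with a Wiener filter and takes four statistics of the residual N = I − F(I); the 5 × 5 window size is an assumption, as the window used in the original feature set is not stated here.

```python
import numpy as np
from scipy.signal import wiener
from scipy.stats import skew, kurtosis

def noise_features(img, window=5):
    """Noise-based features: Wiener-filter the image and take the mean,
    standard deviation, skewness, and kurtosis of the residual.
    The 5x5 window is an assumed choice."""
    img = img.astype(np.float64)
    residual = img - wiener(img, (window, window))  # N = I - F(I)
    r = residual.ravel()
    return np.array([r.mean(), r.std(), skew(r), kurtosis(r)])

rng = np.random.default_rng(0)
feats = noise_features(rng.normal(128, 20, size=(64, 64)))
print(feats.shape)  # four statistics: mean, std, skewness, kurtosis
```

Seam removal perturbs the residual statistics near the carved paths, which is what these four features are meant to capture.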

3.1. Local Binary Pattern (LBP)

As described in [20], the LBP is an operator used to describe the local texture characteristics of an image. It is defined on a 3 × 3 window, with the center pixel value of the window used as the threshold. Each of the 8 adjacent pixel values is compared with the center value: if the adjacent value is greater than or equal to the center value, the position of that adjacent pixel is encoded as 1; otherwise, as 0. The formula can be expressed as

LBP(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, where s(x) = 1 if x ≥ 0 and 0 otherwise. (6)

Here, g_c is the center pixel value at location (x_c, y_c) of the image, g_p denotes the value of its p-th adjacent pixel, and P indicates the total number of adjacent pixels (here, P = 8).

In this way, we obtain an 8-bit binary number, which is generally converted into a decimal number. This value is retained as the LBP value of the window's center pixel to express the texture information of the region. By computing the LBP value of every pixel in turn, the input image can be transformed into the LBP domain. Figure 3 presents an example of the basic LBP operator; there, the LBP value of the center pixel is 83.
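The thresholding described above can be sketched for a single 3 × 3 window as follows; the starting neighbour and bit ordering are conventions, so the resulting decimal value depends on that choice (it need not match the figure's example).

```python
import numpy as np

def lbp_value(window3x3):
    """Basic LBP of a 3x3 window: threshold the 8 neighbours against the
    centre (1 if >= centre, else 0) and read them off as an 8-bit number.
    The clockwise-from-top-left ordering is one possible convention."""
    c = window3x3[1, 1]
    neigh = [window3x3[0, 0], window3x3[0, 1], window3x3[0, 2],
             window3x3[1, 2], window3x3[2, 2], window3x3[2, 1],
             window3x3[2, 0], window3x3[1, 0]]
    return sum(int(g >= c) << p for p, g in enumerate(neigh))

w = np.array([[6, 5, 2],
              [7, 6, 1],
              [9, 8, 7]])
print(lbp_value(w))  # 241 for this window, given this ordering
```

Applying this to every pixel (padding or skipping the image border) yields the LBP-domain image used in the rest of the pipeline.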

3.2. The Co-occurrence of Adjacent LBPs

In this paper, the cooccurrence characteristic of adjacent LBPs is applied to the detection of image seam-carving operation. Figure 4 illustrates the difference between the LBP histogram and the histogram of special cooccurrence of adjacent LBPs.

Figure 4(a) displays two example images composed of two LBP patterns (X and Y); here, the numbers of Xs and Ys in each image are the same. The gray and white surrounding squares represent the values of adjacent pixels that are larger and smaller than the center pixel value, respectively. As can be seen in Figure 4(b), when the numbers of different LBP values in two disparate images are identical, the same results are obtained utilizing the original LBP histograms. However, the histograms of the cooccurrence of adjacent LBPs extracted from the two images are different, as shown in Figure 4(c). In this case, it is clear that the cooccurrence of adjacent LBPs can more efficiently give expression to the local texture characteristics of an image.

To extract the cooccurrence feature of adjacent LBPs, the encoding method proposed in [18] is exploited. When calculating the LBP values of an image (as shown in Figure 5), there are two sparser configurations of the LBP: the four pixels in the diagonal directions, or the four pixels in the cross directions, of the eight adjacent pixels. One of these is selected to reduce the computational complexity, so the number of possible LBP values is N (N ≤ 16).

There are four possible locations of a neighbor LBP relative to each reference LBP, as shown in Figure 6. The configurations of adjacent LBPs are therefore combined in these four ways when calculating the autocorrelation matrices of adjacent LBPs.

The autocorrelation matrix is calculated for each of the four configurations and can be expressed as

C_a(p, q) = Σ_{r ∈ I} δ(LBP(r) = p) · δ(LBP(r + a) = q), (7)

where r is the position of a pixel in the image, a is the displacement vector from the reference LBP to its adjacent LBP, p and q are two LBP values, and δ(·) equals 1 when its condition holds and 0 otherwise. After all the autocorrelation matrices have been calculated, they are vectorized and concatenated to create 4N²-dimensional features.

In order to achieve better detection performance, we utilize Principal Component Analysis (PCA) [21], which extracts the important information from a data table and represents it as a new set of orthogonal variables called principal components. PCA is exploited here to reduce the dimensionality of the high-dimensional data, allowing the introduced feature to be better combined with the other features (Algorithm 1).

Input: an image, I
Output: the co-occurrence feature of adjacent LBPs of the image, CoALBP
(1) Transform I into the LBP domain based on Figure 5
(2) Define the four configurations of adjacent LBPs shown in Figure 6
(3) Calculate the autocorrelation matrices based on formula (7)
(4) Vectorize those matrices to obtain the CoALBP feature
(5) Reduce the dimensionality of the feature using PCA
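A minimal sketch of steps (1)-(4) of Algorithm 1, assuming the four-neighbour ("cross") sparse LBP configuration of Figure 5 (so N = 16) and unit displacements for the four configurations of Figure 6; `sparse_lbp` and `coalbp` are hypothetical helper names, and the PCA step (5) is omitted.

```python
import numpy as np

def sparse_lbp(gray):
    """4-neighbour (cross-shaped) LBP, giving N = 16 possible patterns."""
    g = gray.astype(np.float64)
    c = g[1:-1, 1:-1]
    bits = [g[:-2, 1:-1] >= c,   # up
            g[1:-1, 2:] >= c,    # right
            g[2:, 1:-1] >= c,    # down
            g[1:-1, :-2] >= c]   # left
    return sum(b.astype(int) << p for p, b in enumerate(bits))

def coalbp(gray, d=1, n_patterns=16):
    """Cooccurrence of adjacent LBPs: for each displacement a, count pairs
    (LBP(r), LBP(r+a)) into an N x N autocorrelation matrix (formula (7)),
    then concatenate the four matrices -> 4*N*N features."""
    lbp = sparse_lbp(gray)
    h, w = lbp.shape
    feats = []
    for a in [(0, d), (d, 0), (d, d), (d, -d)]:  # the four configurations
        C = np.zeros((n_patterns, n_patterns), dtype=np.int64)
        for i in range(max(0, -a[0]), min(h, h - a[0])):
            for j in range(max(0, -a[1]), min(w, w - a[1])):
                C[lbp[i, j], lbp[i + a[0], j + a[1]]] += 1
        feats.append(C.ravel())
    return np.concatenate(feats)  # length 4 * 16 * 16 = 1024 before PCA

rng = np.random.default_rng(1)
f = coalbp(rng.integers(0, 256, size=(32, 32)))
print(f.shape)  # (1024,)
```

The 1024-dimensional vector would then be reduced by PCA before being combined with the 24 energy-related features.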
3.3. SVM Classifier

After extracting the twenty-five features, a classifier is required to determine whether the image has undergone seam carving. As we know, the support vector machine (SVM) can be utilized not only in medical scenarios such as predicting extubation failure [22] but also in forensic tasks such as image security.

In this paper, we first adopt LibSVM [23] in MATLAB and use the svm-scale function to normalize the experimental data (features and labels) to be trained. The RBF kernel function is then selected. Grid search with 5-fold cross validation is exploited for parameter optimization in order to choose the best penalty factor c and kernel parameter g (gamma), which are used to train the entire training set with the svm-train function to obtain an SVM model. Finally, given the SVM model, the svm-predict function is used to classify the images under investigation, and a confusion matrix is used to evaluate the classification result and the classifier accuracy.
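The same training procedure can be mirrored outside MATLAB; the sketch below uses scikit-learn rather than LibSVM's command-line tools, and the synthetic 25-dimensional data stand in for the real feature vectors, so the grid values shown are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the 25-dimensional feature vectors described above:
# class 0 = untouched, class 1 = seam-carved.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 25)), rng.normal(2, 1, (60, 25))])
y = np.array([0] * 60 + [1] * 60)

# Scale features (the svm-scale step), RBF kernel, grid search over C and
# gamma with 5-fold cross validation (the parameter-optimization step).
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [1, 10, 100], "svc__gamma": [0.01, 0.1, 1]},
    cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 2))
```

`grid.best_estimator_` then plays the role of the trained SVM model, and its `predict` method corresponds to svm-predict.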

4. Experimental Results and Discussion

The hardware configuration used for experiments is a personal computer (Intel(R) Core(TM) i5-7400 CPU @ 3.00 GHz 8.00 GB RAM). We employ MATLAB 2016b as the experimental tool. Moreover, LibSVM [23] is used for the classification task. Python and Gnuplot are used for the parameter optimization.

In order to prove the robustness of our proposed method in different format images, experiments are carried out on TIFF and JPEG images. In addition, our method focuses on the detection of low scaling ratio images and strives to improve their detection accuracy.

As there is currently no complete image database for seam-carving forensics, the Uncompressed Color Image Database (UCID) is utilized here owing to its abundant content (e.g., people, plants, and goods). We obtain 1338 uncompressed original images from the UCID image set and then apply seam carving to resize these images in two cases: (1) after compressing the images with quality factors (QF) of 10, 20, 50, 75, and 100; (2) using the uncompressed TIFF images directly. The scaling ratios used for seam carving are 1%, 2%, 5%, 10%, 20%, 30%, and 50%. For example, a scaling ratio of 10% in the vertical direction means that the width of the image is reduced by 10%. Thus, 1338 compressed images with QFs of 10, 20, 50, 75, and 100 at scaling ratios of 1%, 2%, 5%, 10%, 20%, 30%, and 50% can be obtained, for a total of 46,830 images; there are likewise 1338 uncompressed images at each of these seven scaling ratios, for a total of 9366 images.

After adding the original images, we divide the entire image set into training and testing sets of identical size. For the compressed image sets at every QF and for the uncompressed image sets, we have several training sets with scaling ratios of 1%, 2%, 5%, 10%, 20%, 30%, and 50%, and several testing sets at the same scaling ratios. Furthermore, we also extract images from the uncompressed image sets with scaling ratios of 10%, 20%, 30%, and 50% to create a mixed set in which the number of images at each scaling ratio is equal; these sets are likewise divided equally into training and testing sets. Since the UCID image set is divided equally, each subset contains the same number of images: in both the training and the testing sets, each subset contains 669 images at a specific scaling ratio. During the experiments, we first extract the cooccurrence feature of adjacent LBPs and the twenty-four energy features of the images under investigation. We then use the trained SVM classifier to classify these features, determining which images in each subset have been tampered with and which have not, and use these experimental data to evaluate the detection performance of our proposed approach in different situations and to compare it with existing methods.

Generally speaking, the quality factor (QF) has a certain influence on detection. However, Table 1 shows that, for each specific QF, the method proposed in this paper achieves extremely high accuracy for images with different scaling ratios; in particular, all images with large scaling ratios are correctly detected. This also demonstrates the wide applicability of the method under different QFs (i.e., the method is stable and little affected by the quality factor). When QF is listed as 0, meaning that the images are not compressed, the approach is also applicable and achieves considerable accuracy.


Scaling ratio (%)   QF 0 (%)   QF 10 (%)   QF 20 (%)   QF 50 (%)   QF 75 (%)   QF 100 (%)
1                   91.26      92.52       94.96       93.26       96.14       92.26
2                   92.10      93.55       94.25       95.95       97.95       93.80
5                   94.33      95.31       97.31       97.01       97.01       97.01
10                  96.11      96.85       96.85       99.70       97.80       99.70
20                  98.83      98.95       99.10       98.65       98.65       98.80
30                  99.94      100         100         99.85       99.85       100
50                  100        100         100         100         100         100

To demonstrate more clearly that combining the cooccurrence features of adjacent LBPs with the 24 energy features yields the best results, we also carried out a comparison experiment in which the cooccurrence features were used alone, without the 24 energy features. EFCOFAL denotes the combination of the cooccurrence feature of adjacent LBPs with the 24 energy features, and COFAL denotes the cooccurrence features of adjacent LBPs alone.

As can be seen from Table 2, using the cooccurrence features of adjacent LBPs alone is less effective than combining them with the 24 energy features, especially at low scaling ratios.


Scaling ratio (%)   EFCOFAL (%)   COFAL (%)
1                   91.26         81.54
2                   92.10         85.63
5                   94.33         91.55
10                  96.11         94.14
20                  98.83         98.30
30                  99.94         99.24
50                  100           99.98

Table 3 summarizes the comparison results for the six forensic methods. As the image scaling ratio increases, the detection accuracy of all six approaches evidently increases as well. In general, the proposed method achieves the best accuracy, which is higher on average by 35.74%, 30.75%, 13.27%, and 9.87% than the other four methods; particularly under the small scaling ratios of 1%, 2%, and 5%, the proposed method performs outstandingly. Compared with the method proposed by Ye et al., the average accuracy of the proposed method is 0.25% higher over the scaling ratios between 5% and 50%.


Scaling ratio   Wei et al. [5]   Ryu and Lee [24]   Yin et al. [6]   Wattanachote et al. [7]   Ye et al. [14]   Ours
1%              50.07            52.39              51.12            87.59                     -                91.26
2%              50.97            52.77              52.47            87.86                     -                92.10
5%              50.20            58.30              63.83            88.54                     93.99            94.33
10%             57.91            65.22              80.00            89.50                     96.71            96.11
20%             74.18            75.37              94.48            89.70                     98.55            98.83
30%             91.34            85.52              98.66            90.13                     99.08            99.94
50%             94.93            96.27              99.85            94.90                     99.60            100
Average         63.88            68.87              86.35            89.75                     97.59            96.08

The method proposed by Wei et al. does not consider the alteration of the image's internal properties (such as energy changes) caused by seam carving. Ryu and Lee [24] proposed a method based on energy-bias and noise features in the LBP domain; compared with the former method, it takes advantage of the inherent changes in image characteristics following seam carving. On this basis, Yin et al. added six half-seam-based features that reflect the changes in energy and local texture more comprehensively. However, the energy distribution also changes when seams are inserted into a seam-carved image, which can offset the alteration of the energy distribution caused by seam carving and make the forensic task more difficult. Moreover, the local texture change alone cannot reflect the location relationship of adjacent LBP values within a local region, so the detection accuracy is not ideal at low scaling ratios. The method proposed by Wattanachote et al. relies on the regular symmetry of the Blocking Artifact Characteristics Matrix (BACM) of an original JPEG image, which is destroyed in blocks reconstructed by seam carving. Accordingly, 22 features are proposed and used for classification, and considerable accuracy is obtained. However, owing to the influence of the quality factor (QF), the accuracy fluctuates considerably, which is, in brief, why the performance of this method is not adequate. Ye et al. proposed a deep-learning-based method; essentially, a CNN is exploited, after which more effective features are used to substantially boost the classification rates and obtain high detection accuracy.

In essence, the method proposed in this paper is based on the local texture features and energy features of an image in the LBP domain and introduces the cooccurrence feature of adjacent LBPs, which reflects the location relationship of the LBP values of adjacent pixels. This improves the detection performance and thereby reduces the detection difficulty caused by the varied changes in the energy features. The experimental results confirm that our proposed approach achieves better detection accuracy.

The receiver operating characteristic (ROC) curves for the four approaches are plotted in Figure 7. The subfigures, Figures 7(a)-7(f), show the ROC performance under different scaling ratios from 1% to 50%. The area under the ROC curve (AUC) obtained by the proposed method is significantly larger than that of the other three methods, whether the scaling ratio is large or small, demonstrating that our method achieves higher accuracy and confirming its robustness.

In addition, we conduct cross experiments to compare our proposed approach with the other four methods across different training and testing sets. The cross-experiment image sets cover five scaling ratios, specifically 10%, 20%, 30%, 50%, and a mixed ratio, where images with scaling ratios of 10%, 20%, 30%, and 50% are uniformly distributed in the mixed set.

The detection accuracies under the various experimental settings are listed in Table 4, which reports the cross-experiment accuracy on the UCID database. Table 4(a) presents the results of Wei et al. [5]; this method achieves higher accuracy on the mixed sets than the other three baselines, though still lower than our proposed method. Table 4(b) shows the results of Ryu and Lee [24]; when the mixed set serves as the test or training set, the accuracy is generally low. Table 4(c) lists the results of Yin et al. [6], and Table 4(d) those of Wattanachote et al. [7]; neither method exhibits good robustness. Finally, Table 4(e) shows the results of our proposed approach. Overall, these results indicate that the detection accuracy of our method is higher than that of the other four methods, and that it is more robust in the cross experiments.


Table 4: Detection accuracy (%) of the cross experiments on the UCID database. Dashes mark the noncross settings (identical training and test ratios), which are not part of the cross experiment.

Test\Train   10%      20%      30%      50%      Mixed

(a) Wei et al. [5]
10%          —        64.86    63.75    61.58    72.47
20%          65.47    —        82.48    62.37    80.67
30%          61.65    92.03    —        71.92    91.92
50%          72.14    94.46    94.83    —        94.35
Mixed        66.21    84.99    83.15    72.57    84.85

(b) Ryu and Lee [24]
10%          —        62.18    62.93    62.48    59.49
20%          72.27    —        69.51    61.06    67.64
30%          79.15    79.75    —        72.94    77.73
50%          85.21    88.42    76.01    —        90.21
Mixed        62.03    61.88    60.62    59.49    62.33

(c) Yin et al. [6]
10%          —        60.24    56.05    51.64    63.83
20%          83.56    —        75.04    56.20    87.52
30%          92.60    92.23    —        84.68    86.47
50%          99.03    99.48    99.48    —        98.51
Mixed        69.28    67.94    66.52    59.04    76.01

(d) Wattanachote et al. [7]
10%          —        55.19    54.67    49.63    54.19
20%          58.52    —        64.05    56.95    63.45
30%          53.70    70.96    —        66.18    72.35
50%          38.15    76.94    82.81    —        75.97
Mixed        52.69    67.00    68.83    65.88    69.74

(e) Ours
10%          —        85.28    66.82    66.37    85.43
20%          99.78    —        80.34    89.39    87.59
30%          99.85    98.03    —        98.32    96.71
50%          100      100      100      —        99.48
Mixed        72.80    70.93    68.54    62.33    85.13
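The protocol behind Table 4 can be sketched as follows: fit a classifier on features extracted from one scaling-ratio set and evaluate it on another. The paper trains LIBSVM [23] on the full LBP-domain feature set; here a toy one-dimensional feature and a simple threshold classifier stand in for both, purely to show the cross train/test structure:

```python
import statistics

def train_threshold(features, labels):
    """Fit a decision threshold midway between the two class means."""
    pos = [f for f, l in zip(features, labels) if l == 1]
    neg = [f for f, l in zip(features, labels) if l == 0]
    return (statistics.mean(pos) + statistics.mean(neg)) / 2

def accuracy(threshold, features, labels):
    preds = [1 if f > threshold else 0 for f in features]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Toy feature values: seam carving (label 1) shifts the feature upward,
# more strongly at higher scaling ratios (entirely synthetic numbers).
train_10 = ([0.1, 0.2, 0.6, 0.7], [0, 0, 1, 1])  # "10% ratio" training set
test_50 = ([0.0, 0.3, 1.1, 1.4], [0, 0, 1, 1])   # "50% ratio" test set

thr = train_threshold(*train_10)
print(accuracy(thr, *test_50))  # a model trained on 10% still separates 50%
```

This mirrors the qualitative pattern in Table 4: heavier carving is easier to detect, so models trained on small ratios often generalize upward better than the reverse.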

As described in Section 3.2, even though the dimensionality of the cooccurrence feature of adjacent LBPs was reduced during its generation to limit the computational complexity, it remains large compared with the other energy features. As shown in Table 5, when the feature is used directly, without further dimension reduction, the detection accuracy is very low for images with low scaling ratios.


Table 5: Detection accuracy without further dimension reduction.

Scaling ratio (%)   1       2       5       10      20      30      50
Accuracy (%)        29.45   77.73   89.54   93.42   95.67   95.07   100
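To see why reduction matters, note that a cooccurrence histogram over raw 8-bit LBP codes is naively 256 × 256 = 65,536-dimensional, dwarfing the 24 other features. One standard way to shrink such vectors is PCA [21]; the sketch below, on random stand-in data, shows the mechanics and is not necessarily the paper's exact reduction procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((40, 256))  # 40 images, 256-dim cooccurrence vectors

# PCA via SVD of the mean-centred data matrix: rows of vt are the
# principal directions, ordered by explained variance.
centred = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)

k = 8                         # keep only the top-8 principal components
reduced = centred @ vt[:k].T  # project each image onto those components

print(reduced.shape)  # (40, 8)
```

The projected vectors keep most of the variance of the original histogram while being comparable in size to the remaining handcrafted features, so the SVM is not dominated by one oversized feature group.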

In this paper, we propose an image forensic approach based on the cooccurrence of adjacent LBPs, which effectively captures the positional relationship of adjacent LBP values. The experimental results above show that our method achieves good detection accuracy under different scaling ratios. When the scaling ratio is high, the detection accuracy of our method is almost 100%; when the scaling ratio is low, it still achieves higher accuracy than the other methods, which is of great significance for images that have been tampered with in nonobvious ways. The detection results on TIFF and JPEG images also show that our proposed approach is robust.

5. Conclusions and Future Work

Seam carving is widely used because it protects the visually important areas of an image, ensuring that its pivotal contents are neither distorted nor deformed. However, the same technology can be used maliciously to change the semantic content of an image in ways that may be imperceptible to the naked eye, making it more likely that people will be misled or that harmful, socially damaging behavior will occur. Despite the challenges, it is therefore necessary to develop and improve seam-carving detection research. In this paper, a forensic method for the seam-carving detection task based on the cooccurrence of adjacent LBPs is proposed. Experimental results demonstrate that our method offers better detection performance and good robustness under different QF values and scaling ratios, which is of great significance for forensic work in the field of image security. However, the proposed approach only detects whether an image has been seam-carved; it cannot locate where within the image the carving occurred. In the future, we will continue to research location detection [25] of the seam-carved image. Moreover, many video/image processing methods [26–31] will be adopted to extract the identified features, and we will also explore deep learning [32–37] methods to identify whether an image is seam-carved or not.

Data Availability

The software code and data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

All authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This project was supported by the National Natural Science Foundation of China under Grant nos. 61972057 and 62072055, Hunan Provincial Natural Science Foundation of China under Grant no. 2020JJ4626, Scientific Research Fund of Hunan Provincial Education Department of China under Grant no. 19B004, “Double First-class” International Cooperation and Development Scientific Research Project of Changsha University of Science and Technology under Grant no. 2018IC25, and the Young Teacher Growth Plan Project of Changsha University of Science and Technology under Grant no. 2019QJCZ076.

References

  1. W. Gao, Y. Zhu, W. Zhang, K. Zhang, and H. Gao, “A hierarchical recurrent approach to predict scene graphs from a visual-attention-oriented perspective,” Computational Intelligence, vol. 35, no. 3, pp. 496–516, 2019.
  2. S. Avidan and A. Shamir, “Seam carving for content-aware image resizing,” ACM Transactions on Graphics, vol. 26, no. 3, 2007.
  3. A. Sarkar, L. Nataraj, and B. Manjunath, “Detection of seam carving and localization of seam insertions in digital images,” in Proceedings of the 11th ACM Workshop on Multimedia and Security, pp. 107–116, Berkeley, CA, USA, 2009.
  4. W. Lu and M. Wu, “Seam carving estimation using forensic hash,” in Proceedings of the Thirteenth ACM Multimedia Workshop on Multimedia and Security, pp. 9–14, San Francisco, CA, USA, 2011.
  5. J.-D. Wei, Y.-J. Lin, and Y.-J. Wu, “A patch analysis method to detect seam carved images,” Pattern Recognition Letters, vol. 36, pp. 100–106, 2014.
  6. T. Yin, G. Yang, L. Li, D. Zhang, and X. Sun, “Detecting seam carving based image resizing using local binary patterns,” Computers & Security, vol. 55, pp. 130–141, 2015.
  7. K. Wattanachote and T. K. Shih, “Tamper detection of JPEG image due to seam modifications,” IEEE Transactions on Information Forensics and Security, vol. 10, 2015.
  8. Y. Ke, Q. Q. Shan, F. Qin et al., “Detection of seam carved image based on additional seam carving behavior,” Image Processing and Pattern Recognition, vol. 9, no. 2, pp. 167–178, 2016.
  9. J. Ye and Y.-Q. Shi, “An effective method to detect seam carving,” Journal of Information Security and Applications, vol. 35, pp. 13–22, 2017.
  10. Q. Liu, P. A. Cooper, and B. Zhou, An Improved Approach to Detecting Content-Aware Scaling-Based Tampering in JPEG Images, IEEE ChinaSIP, Beijing, China, 2013.
  11. Q. Liu, “An improved approach to exposing JPEG seam carving under recompression,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 7, 2019.
  12. R. Han, Y. Ke, L. Du, F. Qin, and J. Guo, “Exploring the location of object deleted by seam-carving,” Expert Systems with Applications, vol. 95, pp. 162–171, 2018.
  13. L. F. S. Cieslak, K. A. P. Da Costa, and J. P. Papa, “Seam carving detection using convolutional neural networks,” in Proceedings of the IEEE SACI 2018, Timisoara, Romania, 2018.
  14. J. Ye, Y. Shi, G. Xu et al., “A convolutional neural network based seam carving detection scheme for uncompressed digital images,” Lecture Notes in Computer Science, Springer Science+Business Media, Berlin, Germany, 2019.
  15. D. Zhang, G. Yang, F. Li, J. Wang, and A. K. Sangaiah, “Detecting seam carved images using uniform local binary patterns,” Multimedia Tools and Applications, vol. 79, no. 13-14, pp. 8415–8430, 2020.
  16. M. Lu and S. Niu, “Detection of image seam carving using a novel pattern,” Computational Intelligence and Neuroscience, vol. 2019, Article ID 9492358, 15 pages, 2019.
  17. J. Chen, H. Ying, X. Liu et al., “A transfer learning based super-resolution microscopy for biopsy slice images: the joint methods perspective,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 14, 2020.
  18. R. Nosaka, Y. Ohkawa, and K. Fukui, “Feature extraction based on co-occurrence of adjacent local binary patterns,” in Proceedings of PSIVT 2011, Part II, Lecture Notes in Computer Science, Springer Science+Business Media, Berlin, Germany, 2011.
  19. Z. Zhang, “The comparison of image retargeting algorithms based on seam carving,” in Proceedings of the 2015 International Conference on Test, Measurement and Computational Methods, Atlantis Press, Chiang Mai, Thailand, 2015.
  20. T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognition, vol. 29, no. 1, pp. 51–59, 1996.
  21. H. Abdi and L. J. Williams, “Principal component analysis,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 2, no. 4, pp. 433–459, 2010.
  22. T. Chen, J. Xu, H. Ying et al., “Prediction of extubation failure for intensive care unit patients using light gradient boosting machine,” IEEE Access, vol. 7, no. 1, pp. 150960–150968, 2019.
  23. C.-C. Chang and C.-J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1–27, 2011.
  24. S. J. Ryu and H. Y. Lee, “Detecting trace of seam carving for forensic analysis,” IEICE Transactions on Information and Systems, vol. 97, no. 5, pp. 1304–1311, 2014.
  25. J. Li, B. Yang, and X. Sun, “Segmentation-based image copy-move forgery detection scheme,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 3, pp. 507–518, 2015.
  26. Z. Bi, L. Yu, H. Gao et al., “Improved VGG model-based efficient traffic sign recognition for safe driving in 5G scenarios,” International Journal of Machine Learning and Cybernetics, pp. 1–12, 2020.
  27. B. Lin, S. Deng, H. Gao et al., “A multi-scale activity transition network for data translation in EEG signals decoding,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2020.
  28. J. Qin, H. Li, X. Xiang et al., “An encrypted image retrieval method based on Harris corner optimization and LSH in cloud computing,” IEEE Access, vol. 7, no. 1, pp. 24626–24633, 2019.
  29. Y. Tan, J. Qin, X. Xiang, W. Ma, W. Pan, and N. N. Xiong, “A robust watermarking scheme in YCbCr color space based on channel coding,” IEEE Access, vol. 7, no. 1, pp. 25026–25036, 2019.
  30. J. Zhang, Y. Wu, W. Feng, and J. Wang, “Spatially attentive visual tracking using multi-model adaptive response fusion,” IEEE Access, vol. 7, pp. 83873–83887, 2019.
  31. Y. Chen, W. Xu, J. Zuo et al., “The fire recognition algorithm using dynamic feature fusion and IV-SVM classifier,” Cluster Computing, vol. 22, no. 3, pp. 7665–7675, 2019.
  32. J. Wang, J. Qin, X. Xiang, Y. Tan, and N. Pan, “CAPTCHA recognition based on deep convolutional neural network,” Mathematical Biosciences and Engineering, vol. 16, no. 5, pp. 5851–5861, 2019.
  33. Y. Luo, J. Qin, X. Xiang, Y. Tan, Q. Liu, and L. Xiang, “Coverless real-time image information hiding based on image block matching and dense convolutional network,” Journal of Real-Time Image Processing, vol. 17, no. 1, pp. 125–135, 2020.
  34. L. Xiang, G. Guo, G. Yu, V. Sheng, and P. Yang, “A convolutional neural network-based linguistic steganalysis for synonym substitution steganography,” Mathematical Biosciences and Engineering, vol. 17, no. 2, pp. 1041–1058, 2020.
  35. S. He, Z. Li, Y. Tang, Z. Liao, F. Li, and S.-J. Lim, “Parameters compressing in deep learning,” Computers, Materials & Continua, vol. 62, no. 1, pp. 321–336, 2020.
  36. A. K. Sangaiah, D. V. Medhane, T. Han, M. S. Hossain, and G. Muhammad, “Enforcing position-based confidentiality with machine learning paradigm through mobile edge computing in real-time industrial informatics,” IEEE Transactions on Industrial Informatics, vol. 15, no. 7, pp. 4189–4196, 2019.
  37. L. Xiang, G. Zhao, Q. Li, W. Hao, and F. Li, “TUMK-ELM: a fast unsupervised heterogeneous data learning approach,” IEEE Access, vol. 6, pp. 35305–35315, 2018.

Copyright © 2020 Dengyong Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

