Abstract

Users transfer a large number of images every day over the Internet. Easy-to-use commercial and open source image editing tools have made the intactness of images questionable. Passive methods have been proposed in the literature to determine the authenticity of images. However, a specific type of forgery called "Object Removal with uniform Background" forgery is a problem for the keypoint based methods in the literature. In this paper, we propose an effective copy move forgery detection technique. The method uses AKAZE features and nonlinear scale space for the detection of copied and pasted regions. The proposed method detects "Object Removal with uniform Background" and "Replication" types of forgeries with high precision compared to similar works. Experimental results also indicate that the method yields better discriminative capability than others even if the forged image has been rotated, blurred, distorted by AWGN, or compressed by JPEG to hide the clues of forgery.

1. Introduction

Users transfer many images and video files every day over the Internet, as social networking services have made sharing multimedia content easier. High-resolution images and video files consume a significant part of the Internet bandwidth even if high efficiency coding is used. High quality image and video files are shared not only on social networks but also on news portals, in video surveillance transmission, and in telemedicine image and video transmission. High quality, low cost cameras and easy-to-use commercial and open source image editing tools such as Photoshop or GIMP have also made the intactness of images questionable. Many methods have been proposed in the literature to determine the authenticity of images. These methods can be classified into two groups: active and passive methods.

Digital signatures and watermarking methods fall into the first group. A digital signature requires transmission of not only the image but also a signature created from the image and a private key by an algorithm. Watermarking likewise requires embedding specially generated watermark data, robust to modification attacks, into an image for authentication. Both methods require extra information and are hence classified as active methods.

The methods in the second group use only selected statistical features of the image to determine possible forgery. Passive methods have recently attracted more attention from researchers because they do not need extra information to authenticate the image. A substantial amount of research is focused on the copy move type of forgery since it is easy to manipulate images with editing tools without altering global statistics. Figure 1 shows an example of a copy move forged image, where a portion of the image is copied and pasted onto another region to hide or to replicate some part of the image.

The first method to detect copy move forgery, proposed by Fridrich et al. [1], is based on the Discrete Cosine Transform (DCT). Their method divides the image into overlapping blocks of pixels. The DCT of these blocks forms feature vectors, which are lexicographically sorted to move similar vectors closer. The Euclidean distance among sets of neighboring vectors is used to determine the similarity expected as a result of forged regions. Popescu and Farid used Principal Component Analysis (PCA) to reduce the feature vector size of [1]. Their results show that the method can detect forged regions even if Additive Gaussian Noise, JPEG compression, and blurring have been applied to reduce visible clues of forgery [2]. Luo et al. used the intensity of the blocks to construct feature vectors: the average pixel intensity of the R, G, and B channels and some directional information constitute the feature vector corresponding to each block [3]. The method yields high accuracy ratios even if the forged images have been postprocessed. Blur moment invariants are utilized by Mahdian and Saic to make their method robust against blurring [4]. They used feature vectors to represent the blocks and exploited the dimension reduction property of PCA to speed up feature matching. Bayram et al. used the Fourier–Mellin Transform to represent the blocks [5]. Their method also used Counting Bloom filters to reduce comparison time, and results show that it can detect slightly rotated forged regions. Rotation invariant Local Binary Patterns (LBP) are used by Li et al. to detect forgery [6]. Zhang et al. used the Discrete Wavelet Transform (DWT) to extract the forged image's subbands [7]. Their method uses phase correlation to test similarity. Ryu et al. used Zernike moments of the blocks as rotation invariant features [8]. Bravo-Solorio and Nandi utilized the correlation coefficient of the Fourier Transform to test similarity between blocks [9]. Their method discards blocks with low entropy and, as shown in their results, also detects flipping. Wang et al. used circular blocks and Gaussian Pyramid Decomposition (GPD) while extracting features [10]. GPD decreases the computational complexity of the feature matching algorithm. Results show that the method yields high accuracy rates even if the forged image has been rotated, blurred, distorted by additive noise, or compressed by JPEG. Wang et al. also used Hu moments with GPD [11]: GPD is applied on the forged image, and the Hu moments of each block are calculated to construct the feature vectors. In 2011, Huang et al. proposed an improved DCT based method to detect copy move forgery [12]. Their method applies a truncation procedure to reduce the dimension of the features and quantizes the DCT coefficients to make the method more robust against JPEG compression attacks. In 2012, Cao et al. divided overlapping DCT transformed blocks into four equal sized regions and calculated the mean of the DCT coefficients in these regions to construct the feature vector [13]. Their results show that the method performs better when postprocessing operations are applied on the forged images. In 2013, Zhao and Guo applied the DCT on overlapping blocks and divided each block into nonoverlapping subblocks [14]. Singular Value Decomposition is applied on each subblock to construct a compact feature vector. Results show that the method performs better when compared to similar works. Hussain et al. used the multiscale Weber's law descriptor and the multiscale Local Binary Pattern for copy move forgery detection [15]. Their method also applied a Locally Learning Based (LLB) algorithm to reduce the dimension of the feature space. Suresh and Rao applied Local Binary Patterns (LBP) on the low frequency content of the DWT to extract feature vectors from the blocks [16]. Results show that their method is rotation invariant and that the correct detection ratio is approximately 99%. In 2016, Zhou et al. used color distribution information to divide the entire search space into smaller pieces [17]. The method assumes that copied and pasted regions will reside in the same cluster. Five image descriptors are utilized to construct feature vectors from the blocks. Results indicate that the method can detect the forgery operation even if gamma correction is applied on the forged image. In 2016, Emam et al. used the Polar Complex Exponential Transform (PCET) to extract features from the blocks [18]. An Approximate Nearest Neighbor (ANN) searching algorithm with Locality Sensitive Hashing is also utilized to determine similar blocks. Results show that the approach is robust to geometric transformations with low computational complexity.

All the methods mentioned above share a common framework. They divide the input image into circular or square overlapping blocks, extract spatial or frequency domain features from the blocks, compare the feature vectors with an appropriate technique to measure similarity, and mark regions as forged if the similarity exceeds a certain threshold.

Researchers have recently proposed keypoint feature comparison as an alternative to block feature comparison. Huang et al. used the Scale Invariant Feature Transform (SIFT) to detect and mark forged regions [19]. Amerini et al. subsequently carried out a more comprehensive analysis and used hierarchical clustering to analyze the SIFT correspondences [20]. Xu et al. used Speeded-Up Robust Features (SURF) to detect copied and pasted regions [21]. Their method divides the SURF keypoints into two groups, and nearest neighbor search is realized within these groups. In 2016, Zhu et al. used Oriented FAST and Rotated BRIEF (ORB) features to detect forged regions [22]. Their method also constructs a scale space before keypoint extraction to make ORB robust against scale attacks. Their results show that the method yields better results compared to other keypoint based methods. In [23], Cozzolino et al. used the PatchMatch algorithm to compute a high quality approximate nearest neighbor field for the image. A dense linear fitting based postprocessing procedure is applied by their method to reduce the complexity. Their results show that the proposed method gives better results compared to state-of-the-art dense field references.

The most important problem of keypoint based methods is their weakness when the forged region does not accommodate any keypoints. "Object Removal with uniform Background" forgery is realized when the operation aims to hide an object in the image. Zhu et al. emphasized the "Object Removal with uniform Background" problem and claimed that Scaled ORB (sORB) can detect such tampered regions [22]. However, the keypoint based methods in the literature exhibit weakness if the region used for removal has a smooth characteristic.

SIFT, SURF, and sORB features use Gaussian scale space, either by constructing the scale space in a pyramidal manner or by approximating Gaussian derivatives through box filters. The most important drawback of these methods is that they do not preserve object boundaries, since Gaussian blurring smoothens details and noise to the same extent. Alcantarilla et al. proposed KAZE features to overcome this problem in 2012 [24]. KAZE uses nonlinear diffusion filtering. In 2013, they also proposed a novel and fast multiscale feature detection and description method, Accelerated KAZE (AKAZE), exploiting the benefits of nonlinear scale spaces [25]. AKAZE features are faster to compute than KAZE, SIFT, and SURF but slower than ORB features, as indicated in [25].

In this work, we utilize AKAZE features to determine the copied and pasted regions on the forged image. Modified Local Difference Binary (M-LDB) is used to extract descriptors at the keypoints found by AKAZE in the input image. After the keypoints are matched, the Random Sample Consensus (RANSAC) algorithm is applied to eliminate false matches. Experimental results show that the proposed method yields better precision against rotation, blurring, noise addition, and JPEG compression attacks compared to similar works [20–22]. Above all, the most important advantage of the method is that it detects "Object Removal with uniform Background" forgery with high precision compared to other works. The proposed method takes advantage of nonlinear scale space creation to determine this type of forgery.

The paper is organized as follows. Sections 2 and 3 give the details of the keypoint extraction method and the feature extraction technique, respectively. Experimental results are given in Section 4, and the conclusion is drawn in Section 5.

2. Keypoint Extraction from the Forged Image

In this work, we adapt AKAZE features, which have the particular advantage of detecting keypoints on uniform regions, to copy move forgery detection. While the keypoint based methods in the literature use Gaussian scale space to detect keypoints on the test images, the proposed method utilizes the nonlinear scale space suggested by the AKAZE algorithm to determine keypoints even if the copied and pasted regions are uniform.

The proposed method has five steps: keypoint extraction from the image, feature extraction at the keypoints, feature matching, false match elimination with the RANSAC (Random Sample Consensus) algorithm, and tamper localization. The first two steps use the AKAZE keypoint and feature extraction algorithm. Hamming distance is used to match the descriptors of the AKAZE features. RANSAC is utilized at the fourth step to eliminate false matches. At the last step, a new forgery localization method is applied on the coordinates of the matched keypoints to localize the forged regions.
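As a rough illustration of the first three steps, the following Python/OpenCV sketch detects AKAZE keypoints, extracts their binary descriptors, and collects candidate copy move matches. The file name and the Hamming and spatial thresholds are our own illustrative assumptions, not the exact settings of the method.

```python
import cv2
import numpy as np

img = cv2.imread("forged.png", cv2.IMREAD_GRAYSCALE)

# Steps 1-2: AKAZE keypoint detection and M-LDB descriptor extraction.
akaze = cv2.AKAZE_create()
keypoints, descriptors = akaze.detectAndCompute(img, None)

# Step 3: compare each binary descriptor against the others with the
# Hamming distance; k=2 so the trivial self-match can be skipped.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(descriptors, descriptors, k=2)

candidates = []
for first, second in pairs:
    m = second if first.queryIdx == first.trainIdx else first
    # Keep matches that are close in descriptor space but spatially far
    # apart (a copied region should match a distant duplicate of itself).
    spatial = np.linalg.norm(np.subtract(keypoints[m.queryIdx].pt,
                                         keypoints[m.trainIdx].pt))
    if m.distance < 40 and spatial > 10:   # assumed thresholds
        candidates.append(m)
# Steps 4-5 (RANSAC filtering, tamper localization) follow in Sections 3.3-3.4.
```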

The proposed method is the first in the literature that adapts a nonlinear scale space based keypoint extraction method to copy move forgery detection. In this section and the next, we give a general outline of the AKAZE keypoint and feature extraction methods and the details of the false match elimination and forgery localization procedures.

Popular keypoint extraction methods in the literature, such as the Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB), build a scale space by filtering the image with a smoothing function. For example, SIFT creates the scale space of an image with Gaussian kernels: the original image is convolved with Gaussian kernels of increasing standard deviation. The SURF algorithm creates a Gaussian scale space by approximating Gaussian derivatives through box filters. ORB detects keypoints using the Features from Accelerated Segment Test (FAST) algorithm and employs the Harris corner measure to choose the best points. However, FAST does not produce multiscale features; therefore, the authors build a scale pyramid of the image and produce FAST features at each level of the pyramid.

Construction of the scale space with a linear approach has one drawback: Gaussian blurring does not preserve object boundaries, as indicated in [24, 25]. Increasing the scale in a linear scale space eliminates the noise, and the prominent structures of the image become more dominant. But Gaussian blurring does not take into account the difference between the natural boundaries of objects and noise; it smoothens both object details and noise to the same extent. Thus, it has a negative effect on localization as the scale level increases. The nonlinear scale space used by AKAZE, in contrast, adaptively blurs the image data, reducing noise while object details remain intact. The authors used Fast Explicit Diffusion (FED) to build the nonlinear scale space [25]. The details of the keypoint extraction phase of AKAZE are given below; the proposed method extracts keypoints from the test image using AKAZE.

The luminance of an image through increasing scale levels can be modeled by nonlinear diffusion, where the diffusion process is controlled by the divergence of a flow function. These approaches use nonlinear Partial Differential Equations (PDEs) because the nonlinear nature of the differential equations diffuses the luminance of the image through the nonlinear scale space. Nonlinear diffusion can be formulated as in (1):

$$\frac{\partial L}{\partial t} = \operatorname{div}\left(c(x, y, t) \cdot \nabla L\right), \tag{1}$$

where $\operatorname{div}$ and $\nabla$ denote the divergence and gradient operations, respectively, and $L$ is the luminance of the image. The conductivity function $c$ ensures the applicability of the diffusion according to the local image structure. $t$ denotes the time of the function and is also the scale parameter; the image can be represented in a simpler manner for larger values of $t$. The conductivity function is defined as in (2):

$$c(x, y, t) = g\left(\left|\nabla L_{\sigma}(x, y, t)\right|\right), \tag{2}$$

where $L_{\sigma}$ and $\nabla L_{\sigma}$ represent the Gaussian smoothed version of the image and its gradient, respectively. AKAZE uses the function in (3) as the conductivity function, which supports wide regions:

$$g_{2} = \frac{1}{1 + \left|\nabla L_{\sigma}\right|^{2} / k^{2}}. \tag{3}$$

The contrast parameter $k$ controls the elimination of edges. A histogram of $\left|\nabla L_{\sigma}\right|$ is constructed to determine the contrast parameter; the 70% percentile of the histogram is used to choose the appropriate value of $k$.
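A minimal sketch of (3) and the percentile rule follows, with our own helper names and OpenCV primitives standing in for AKAZE's internal implementation:

```python
import cv2
import numpy as np

def contrast_parameter(img, sigma=1.0, percentile=70):
    # k is chosen as the 70% percentile of the smoothed gradient magnitude.
    smoothed = cv2.GaussianBlur(np.float32(img), (0, 0), sigma)
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return np.percentile(magnitude[magnitude > 0], percentile)

def g2(grad_magnitude, k):
    # Conductivity of (3): close to 1 on flat (wide) regions, small on edges.
    return 1.0 / (1.0 + (grad_magnitude / k) ** 2)
```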

2.1. Scale Space Construction

The method constructs the scale space with $O$ octaves and $S$ sublevels. Let the images in the scales generated from the test image be $L^{0}, L^{1}, \ldots, L^{N-1}$, where $N = O \cdot S$. The first image in each octave is generated by subsampling the last image in the previous octave, and the contrast parameter used in the previous octave is multiplied by a fixed factor to detect finer edges. The equation given in (4) is used during the generation of the images in the scales:

$$L^{i} = \operatorname{FED}\left(t_{i},\, g\left(L^{i-1}, \sigma\right),\, c\left(g\left(L^{i-1}, \sigma\right), k\right)\right). \tag{4}$$

The first image in the first octave is the blurred version of the test image $I$, $L^{0} = g(I, \sigma_{0})$. The function $g$ used in (4) applies a Gaussian blurring operation on its first argument with standard deviation $\sigma$.

The conductivity function $c$ in (4) applies the function proposed by Perona and Malik, as explained in [26]: it gets the smoothed version of the current scale image and the current contrast value as arguments and applies the conductivity function given in (3). The Fast Explicit Diffusion (FED) function gets a time value for the $i$th scale, the smoothed version of the previous scale image, and the result of the conductivity function as arguments. The time value for the current scale image is determined from the current octave and sublevel number. The octave and sublevel indexes $(o, s)$ are first mapped to their corresponding scale as in (5):

$$\sigma_{i}(o, s) = \sigma_{0}\, 2^{o + s/S}, \quad o \in [0, \ldots, O-1],\; s \in [0, \ldots, S-1],\; i \in [0, \ldots, N-1]. \tag{5}$$

Scale levels in pixel units are converted into the time domain by the following equation:

$$t_{i} = \frac{1}{2} \sigma_{i}^{2}.$$
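In code, the mapping from octave and sublevel indexes to scales and evolution times is a direct transcription; the sketch below assumes the paper's four octaves and four sublevels and AKAZE's customary base scale of 1.6 (an assumption, since the value is not restated here):

```python
def scale_levels(octaves=4, sublevels=4, sigma0=1.6):
    # Returns (octave, sublevel, sigma, time) per (5) and t_i = sigma_i^2 / 2.
    levels = []
    for o in range(octaves):
        for s in range(sublevels):
            sigma = sigma0 * 2.0 ** (o + s / sublevels)
            levels.append((o, s, sigma, 0.5 * sigma ** 2))
    return levels
```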

Fast explicit diffusion is used by the AKAZE features to create the scale images at each scale. FED schemes use iterated box filters to approximate Gaussian kernels. The scheme performs cycles of $n$ explicit diffusion steps with varying step sizes $\tau_{j}$ to create the scale images. The method uses one cycle, and FED implements $n$ steps to find the current scale image, determining the step sizes to use in these steps. Each step can get a different step size, and the maximum available step size, denoted by $\tau_{\max}$, is chosen to be 0.25 (because of the image dimension) by the algorithm.

The appropriate cycle length $n$ for the current image is determined according to the time value of the current scale. Since one FED cycle with $n$ steps corresponds to a diffusion time of $\tau_{\max}\,(n^{2} + n)/3$, the cycle length for the current scale, with $T = t_{i+1} - t_{i}$, is determined using

$$n = \left\lceil \sqrt{\frac{3T}{\tau_{\max}} + \frac{1}{4}} - \frac{1}{2} \right\rceil.$$

In this regard, the step size $\tau_{j}$ for the $j$th step in the FED cycle is calculated using

$$\tau_{j} = \frac{\tau_{\max}}{2 \cos^{2}\left(\pi \dfrac{2j + 1}{4n + 2}\right)}, \quad j = 0, \ldots, n-1.$$

The FED cycle given in (8) is repeated $n$ times with the step sizes $\tau_{j}$, starting from the a priori estimate $L^{i+1, 0} = L^{i}$:

$$L^{i+1, j+1} = \left(I + \tau_{j}\, A\left(L^{i}\right)\right) L^{i+1, j}, \quad j = 0, \ldots, n-1, \tag{8}$$

where $I$ is the identity matrix and $L^{i+1, j}$ represents the temporary scale image at the $j$th step of the FED cycle. The matrix $A(L^{i})$, encoding the image conductivities, is built according to the method described in [27]. The scale image $L^{i+1} = L^{i+1, n}$ is created after the cycle.
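The FED bookkeeping transcribes directly into code. The sketch below (our own function name) computes the cycle length and step sizes for a desired diffusion time T, with τ_max = 0.25 as stated above and a final rescaling so that the steps sum exactly to T, as FED implementations commonly do:

```python
import math

def fed_step_sizes(T, tau_max=0.25):
    # Smallest n whose cycle time tau_max * (n^2 + n) / 3 reaches T.
    n = int(math.ceil(math.sqrt(3.0 * T / tau_max + 0.25) - 0.5))
    taus = [tau_max / (2.0 * math.cos(math.pi * (2 * j + 1)
                                      / (4 * n + 2)) ** 2)
            for j in range(n)]
    scale = T / sum(taus)          # rescale so the cycle covers exactly T
    return [tau * scale for tau in taus]
```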

2.2. Keypoint Extraction

Keypoint extraction is realized on each image in the nonlinear scale space. The Hessian matrix of each image is calculated and multiplied by a normalized scale factor, which is different for each image in the scale space. The formula given in (9) is used to calculate the normalized scale factor $\sigma_{i,\mathrm{norm}}$ for the $i$th image in the nonlinear scale space, where $o_{i}$ denotes the octave index of $L^{i}$:

$$\sigma_{i,\mathrm{norm}} = \frac{\sigma_{i}}{2^{o_{i}}}. \tag{9}$$

Let the Hessian matrix of the current scale image be $H(L^{i})$. The matrix is multiplied by the scaling factor $\sigma_{i,\mathrm{norm}}^{2}$. Then, the keypoint extraction algorithm computes the determinant of the scaled Hessian matrix to find the scale space extremes. The values in the determinant images are checked to find the points whose values are higher than a predefined threshold and that are maxima in their neighborhood. Candidate points in the former determinant image in the scale space are also investigated for maxima determination: the coordinates of the current keypoint are scaled appropriately to compare with the keypoints residing on the lower scale, and points with a higher response than nearby keypoints residing on lower scales are chosen as keypoints. After the keypoints are extracted from the images in the nonlinear scale space according to the lower scale levels, they are filtered with the upper scale levels. Each keypoint extracted from the $i$th scale image $L^{i}$ is checked against the keypoints residing on the $(i+1)$th scale image $L^{i+1}$: a point is selected as a keypoint if it has a higher response in $L^{i}$ compared to the other points in a window at $L^{i+1}$.
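A simplified response computation following the description above is sketched below; plain Sobel derivatives stand in for AKAZE's scaled Scharr filters, and the function name is ours:

```python
import cv2

def hessian_response(L, sigma, octave):
    # Normalized scale factor of (9), applied to the Hessian entries
    # before the determinant is taken.
    s2 = (sigma / 2.0 ** octave) ** 2
    Lxx = cv2.Sobel(L, cv2.CV_32F, 2, 0)
    Lyy = cv2.Sobel(L, cv2.CV_32F, 0, 2)
    Lxy = cv2.Sobel(L, cv2.CV_32F, 1, 1)
    return (s2 * Lxx) * (s2 * Lyy) - (s2 * Lxy) ** 2
```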

3. Extraction of the Descriptors and Matching the Features

The details of descriptor extraction, descriptor matching, and false match elimination are given in this section.

3.1. Descriptor Extraction Algorithm

The method uses the Modified Local Difference Binary (M-LDB) method proposed in [25] to extract features at the keypoints. The LDB descriptor proposed in [28] uses the same approach introduced by Binary Robust Independent Elementary Features (BRIEF) [29]. However, LDB uses averages over areas for the binary comparisons instead of the single pixel values used in BRIEF. The means of the horizontal and vertical derivatives in the corresponding areas are also used for comparison; thus, three bits represent each binary comparison result. LDB divides the patch into grids of various sizes, 2 × 2, 3 × 3, and so forth. Average computation over these subregions is very fast using integral images, but using integral images makes the feature extraction method vulnerable to rotation. In this regard, AKAZE uses the main orientation information to make LDB rotation invariant: M-LDB rotates the grid of LDB according to the main orientation. The M-LDB steps are given below.

Step 1. Main orientation for the current keypoint is determined.

Step 2. The subsampling step size is determined from the pattern size used in the algorithm. The pattern size is 12 for the method, and the sample step sizes are 5, 4, and 3. Assume that the coordinates of the current keypoint scaled with the octave value are $(x, y)$. A grid centered at $(x, y)$ is divided into 2 × 2, 3 × 3, and 4 × 4 subregions. Each subregion is rotated by the main orientation, and the average pixel values and derivatives in both horizontal and vertical directions are calculated. 12, 27, and 48 average values are generated for the 5, 4, and 3 step sized subregions, respectively, and they are embedded into a temporary vector T.

Step 3. The average values are compared in this step. The temporary vector has three parts: the first 12 elements for step size 5, elements 13 to 39 for step size 4, and elements 40 to 87 for step size 3. The elements in each part are compared with each other, and a descriptor is created for each keypoint as explained in [25].
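The comparison logic of Steps 2 and 3 can be sketched on an upright patch as below; the rotation by the main orientation and AKAZE's subsampling are omitted for brevity, and the cell layout and names are our own simplification:

```python
import numpy as np

def ldb_bits(patch, grid):
    # patch: square float array around the keypoint; grid: 2, 3, or 4.
    gy, gx = np.gradient(patch)
    step = patch.shape[0] // grid
    cells = []
    for r in range(grid):
        for c in range(grid):
            sl = (slice(r * step, (r + 1) * step),
                  slice(c * step, (c + 1) * step))
            # Mean intensity and mean horizontal/vertical derivatives.
            cells.append((patch[sl].mean(), gx[sl].mean(), gy[sl].mean()))
    bits = []
    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            # Three bits per cell pair, one per channel.
            bits.extend(int(a > b) for a, b in zip(cells[i], cells[j]))
    return bits

# The 2x2, 3x3, and 4x4 grids give 18 + 108 + 360 = 486 bits in total.
```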

3.2. Descriptor Matching Algorithm

The method extracts keypoints from the test image using AKAZE as described in Section 2 and obtains the corresponding descriptors with the method explained above. Assume that the descriptors at the keypoints are $D_{1}, D_{2}, \ldots, D_{K}$, where $K$ is the number of keypoints of the test image. Each descriptor is compared to the other vectors in the descriptor list. The method uses the Hamming distance to determine the similarity between two descriptor vectors: the corresponding binary elements of two vectors are XOR'ed to count the number of elements with different values. The equation given in (10) compares the $i$th and $j$th descriptor vectors, where $D_{i}^{k}$ is the $k$th element of the $i$th descriptor:

$$\operatorname{HD}\left(D_{i}, D_{j}\right) = \sum_{k} D_{i}^{k} \oplus D_{j}^{k}. \tag{10}$$

Keypoints are matched if the Hamming distance between their two descriptors is smaller than a predefined threshold. The coordinates of the keypoints corresponding to matched descriptors are stored in a match matrix $M$; each row of $M$ holds the coordinates of both matched keypoints.
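On packed binary descriptors (uint8 rows, as OpenCV's AKAZE produces), (10) amounts to an XOR followed by a bit count. A small sketch of the pairwise search that fills the rows of M, with an assumed threshold argument:

```python
import numpy as np

def hamming(d1, d2):
    # Equation (10): XOR the descriptors and count the differing bits.
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_keypoints(descriptors, threshold):
    matches = []                      # index pairs, i.e., rows of M
    for i in range(len(descriptors)):
        for j in range(i + 1, len(descriptors)):
            if hamming(descriptors[i], descriptors[j]) < threshold:
                matches.append((i, j))
    return matches
```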

3.3. False Match Elimination

Random Sample Consensus (RANSAC), proposed by Fischler and Bolles, estimates the general parameters of a certain model with an iterative approach [30]. The method randomly selects a set of matched keypoints (we use five points in the experiments) and estimates the transformation matrix $H$ as in (11):

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}, \tag{11}$$

where $(x, y)$ and $(x', y')$ are the coordinates of a matched keypoint pair.

The matched keypoints are then evaluated according to the transformation matrix $H$: each keypoint is transformed by $H$ and compared with its matching keypoint in terms of distance. A matched pair is considered an inlier if the distance is smaller than a predefined threshold; otherwise, it is considered an outlier and removed from $M$. Real matches and false matches in the matrix are called inliers and outliers, respectively.
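OpenCV's RANSAC-based affine estimator can play the role of this step; the sketch below is one possible realization, and the reprojection threshold is an illustrative stand-in for the paper's distance threshold:

```python
import cv2
import numpy as np

def ransac_filter(src_pts, dst_pts, reproj_threshold=3.0):
    src = np.float32(src_pts).reshape(-1, 1, 2)
    dst = np.float32(dst_pts).reshape(-1, 1, 2)
    # Estimate the transformation between the two keypoint sets with RANSAC.
    H, inlier_mask = cv2.estimateAffine2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=reproj_threshold)
    keep = inlier_mask.ravel().astype(bool)
    return H, keep   # keep[i] is True for rows of M retained as inliers
```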

Figure 2 shows the effect of RANSAC on false matched keypoints. Figures 2(a) and 2(b) show the original image and its tampered version, respectively. The result of the proposed method contains some false matches as can be seen in Figure 2(c). Figure 2(d) indicates that RANSAC eliminates false matched keypoints.

3.4. Tamper Localization

In this step of the proposed method, the coordinates of the matched keypoints are used to localize the forged regions. The N pairwise matched keypoints are represented by the method as two independent sets: the source keypoints (sx(j), sy(j)) in the first set and the corresponding keypoints (cx(j), cy(j)) in the second set. The tamper localization procedure uses the corresponding keypoints' coordinates and determines the exact region of the forgery. Matlab source code of the algorithm is given in Algorithm 1. Some functions in the code are predefined, namely extract_circle, max_index, and fill. The function extract_circle returns a matrix that represents a circle with center coordinate (x, y) and radius r in the image IM. The function max_index returns the index of the maximum valued element of its argument. The function fill colors a circular region at (x, y) with radius r on the image IM to black.

for j = 1:N
   i = 1; p = zeros(1, rmax - rmin + 1);   % rmin/rmax: radius search bounds
   for r = rmin:rmax
       C1 = extract_circle(sx(j), sy(j), r, IM);
       C2 = extract_circle(cx(j), cy(j), r, IM);
       p(i) = psnr(C1, C2);
       i = i + 1;
   end
   i = max_index(p);
   r = rmin + i - 1;                       % radius giving the maximum PSNR
   fill(sx(j), sy(j), r, IM);
   fill(cx(j), cy(j), r, IM);
end
IM = imopen(IM, strel('disk', r));

The algorithm extracts the corresponding circular regions with radius r around each pair of matched keypoints and calculates the Peak Signal to Noise Ratio (PSNR) between them. It proceeds to calculate the PSNR while incrementing the radius by one, from rmin up to rmax. At last, the radius value that gives the maximum PSNR is chosen, and the corresponding regions around the keypoints are colored black. An opening operation is applied on the image after all matched keypoints have been processed in the same manner. Figure 3 shows the result of the tamper localization algorithm. Figures 3(a) and 3(b) show the forged image and the mask used in the tampering process. The matched keypoints after false match elimination and the result of the tamper localization algorithm are given in Figures 3(c) and 3(d), respectively.

4. Experimental Results

Some images were obtained from Google image search, and CoMoFoD (http://www.vcl.fer.hr/comofod/) was used to create the test dataset [31, 32]. Forged images were created from the test images with GIMP, an open source image editing package, and the experiments were carried out on a notebook computer with a Core i7 2.3 GHz processor running OpenCV, an open source package for computer vision. The test dataset is processed to evaluate the effectiveness of the proposed copy move forgery detection method, and the results of these experiments are summarized in this section.

There are well-known attacks used in the literature to minimize forgery clues and make forged images harder to detect. 100 test images, denoted by TI, are used to create the forged image dataset to test the robustness of the method. The forged image dataset consists of the partitions listed below.

Replication Based Forgery (RBF Dataset). 60 test images (60% of the total dataset) are randomly selected from TI, and Replication Based Forgery is applied on them. Nonregular regions (not larger than 10% and not smaller than 5% of the original image) are chosen and replicated to create the 60 forged images.

RBFRot30 and RBFRot90. The regions chosen to create the forged images in RBF are rotated by 30° (or 90°) before pasting, yielding 60 forged images for RBFRot30 and 60 for RBFRot90.

RBF-JPG70 and RBF-JPG90. Forged images in RBF are resaved with JPEG quality factor 70 (or quality factor 90), creating the RBF-JPG70 (or RBF-JPG90) dataset.

RBF-AWGN25 and RBF-AWGN40. Forged images in RBF are postprocessed by AWGN at 25 dB (or at 40 dB), creating the RBF-AWGN25 (or RBF-AWGN40) dataset.

RBF-Blur0.5 and RBF-Blur2.0. Forged images in RBF are blurred by a Gaussian function with σ = 0.5 (or σ = 2.0), creating the RBF-Blur0.5 (or RBF-Blur2.0) dataset.

Object Removal with Uniform Background Based Forgery (ORBF Dataset). The remaining 40 test images in TI are used, and object removal with uniform Background based forgery is applied on them. Nonregular regions (not larger than 10% and not smaller than 5% of the original image) are chosen and removed to create the 40 forged images.

ORBFRot30 and ORBFRot90. The regions chosen to create the forged images in ORBF are rotated by 30° (or 90°) before pasting, yielding 40 forged images for ORBFRot30 and 40 for ORBFRot90.

ORBF-JPG70 and ORBF-JPG90. Forged images in ORBF are resaved with JPEG quality factor 70 (or quality factor 90), creating the ORBF-JPG70 (or ORBF-JPG90) dataset.

ORBF-AWGN25 and ORBF-AWGN40. Forged images in ORBF are postprocessed by AWGN at 25 dB (or at 40 dB), creating the ORBF-AWGN25 (or ORBF-AWGN40) dataset.

ORBF-Blur0.5 and ORBF-Blur2.0. Forged images in ORBF are blurred by a Gaussian function with σ = 0.5 (or σ = 2.0), creating the ORBF-Blur0.5 (or ORBF-Blur2.0) dataset.

Four scales and four octaves are used to create the scale space in the experiments. The performance of the detection method is measured with the True Positive Rate (TPR) and the False Positive Rate (FPR), calculated as in (12):

$$\mathrm{TPR} = \frac{\mathrm{IDF}}{\mathrm{FI}}, \qquad \mathrm{FPR} = \frac{\mathrm{IDFO}}{\mathrm{OI}}, \tag{12}$$

where IDF, FI, IDFO, and OI represent "the number of Images Detected as Forged being Forged," "the number of Forged Images," "the number of Images Detected as Forged being Original," and "the number of Original Images," respectively. Receiver Operating Characteristic (ROC) curves are also used in the experiments to make a fair comparison between the proposed method and the others.
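Equation (12) in code form, with the counters named as in the text:

```python
def tpr_fpr(IDF, FI, IDFO, OI):
    # TPR = IDF / FI over forged images; FPR = IDFO / OI over originals.
    return IDF / FI, IDFO / OI
```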

The experiments are reported in three sections. The matched keypoints of both the proposed method and the other methods are shown visually on postprocessed "Replication" and "Object Removal with uniform Background" types of forgery in the first two sections, respectively. Rotation, JPEG compression, blurring, and AWGN are used to minimize the traces of forgery. ROC curves created from the TPR and FPR values for both plain and postprocessed images compare the method with the others in the last section.

4.1. Visual Result of Replication Based Forgery

Visual results of both the proposed method and the other methods are presented for "Replication" forgery on the test images in this section. The number of matched keypoints is also given in the figures to compare the methods quantitatively.

The first experiment gives visual results, with the number of matched keypoints, for a randomly selected image from RBF. The forged image is shown in Figure 4(a), where the colored rectangular area designates the copied and pasted regions. Visual results for the other keypoint based methods are given in Figures 4(c), 4(d), and 4(e), and the number of matched keypoints is reported in the caption of Figure 4. The SIFT based forgery detection method matches a total of 110 keypoints, whereas the proposed method matches 324 keypoints. The visual results also support the numeric results.

The second set of experiments in this section applies rotation on the copied region before pasting it onto another region, as shown in Figure 5(a). The forged image given in Figure 5(a) is chosen from RBFRot30. When the SIFT, SURF, and sORB based methods are compared with each other, the method in [20] yields the best result according to the number of matched keypoints, as shown in Figure 5(b). The proposed method detects 95 matched keypoints on the forged image, while the method in [20] detects only 28, as indicated in Figures 5(b) and 5(e).

We compare the methods according to the average number of matched keypoints in the third experiment. The proposed method and the similar works are applied on the "Replication Based Forgery" datasets (RBF, RBFRot30, RBFRot90, RBF-JPG70, RBF-JPG90, RBF-AWGN25, RBF-AWGN40, RBF-Blur0.5, and RBF-Blur2.0) separately, and the average number of matched keypoints is reported in Table 1. When the average numbers of keypoints for the RBF-JPG70 and RBF-JPG90 datasets are considered, the proposed method gives better results compared to the similar works; indeed, it gives better results for all datasets. For example, while the average number of keypoints for the proposed method is approximately 77 for the RBF-Blur2.0 dataset, the similar works average approximately 58, 22, and 37. As a result, Table 1 indicates that the proposed method detects more matched keypoints on the forged images compared to the others [20–22] under various attacks.

The experiments in this section show that the proposed method detects more keypoints on "Replication" type forged regions compared to the similar works in the literature [20–22], even if rotation, AWGN, blurring, or JPEG compression is applied.

4.2. Visual Result of Object Removal with Uniform Background Forgery

Visual results of both the proposed method and the other methods are compared for "Object Removal with uniform Background" type forged test images in this section. Keypoint based methods in the literature cannot detect forged regions properly for removal types of forged images. Usually, regions that do not have high frequency components are used for object removal; thus, keypoint based forgery detection techniques become unsuccessful when the copied and pasted regions do not accommodate keypoints. The authors in [22] emphasized this fact and gave some results for object removal with uniform Background forgery.

The first experiment reports the results of a simple object removal with uniform Background forgery operation. The test image in Figure 6(a) is modified to create the forged image shown in Figure 6(b), an example forged image from ORBF. Figure 6(c) illustrates the result of the SIFT, SURF, and sORB based forgery detection methods in the literature. Because the regions do not accommodate any keypoints, these methods cannot detect the forged regions. However, the proposed method detects 53 matched keypoint pairs, as shown in Figure 6(d). The AKAZE features based method can extract keypoints from the forged regions thanks to the nonlinear scale space.

Object removal forgery detection performance is also tested for rotated forged regions. Figures 7(a) and 7(b) show the original and forged images, respectively, where the region is rotated by 30°; the forged image is randomly chosen from ORBFRot30. Figure 7(c) shows that the other methods in the literature do not detect the forgery, while the proposed method detects 37 matched keypoint pairs, as shown in Figure 7(d). Robustness against rotation is also tested in another experiment with a 90° rotated forged region. Figures 7(e) and 7(f) show the original and forged images, respectively. Figure 7(g) indicates the results of the other methods: the SIFT, SURF, and sORB based methods cannot detect any matched keypoints on the forged region, so one image is shown for all three methods. The proposed method detects 7 matched keypoint pairs, as shown in Figure 7(h).

The third experiment shows the effectiveness of the proposed method under JPEG compression of object removal with uniform Background forged images. Quality factors 90 and 70 are used for testing. Figure 8(a) shows the original image, and Figure 8(b) is the forged image recompressed with JPEG quality factor 90; the forged image is randomly chosen from ORBF-JPG90. The SIFT and SURF based methods detect false matches, as illustrated in Figures 8(c) and 8(d). Figure 8(e) shows that the sORB based method cannot detect any matched keypoints on the forged regions. However, the proposed method matches 15 keypoint pairs, as can be seen in Figure 8(f). To create an even lower quality forged image, quality factor 70 is used in the next experiment to create Figure 8(h) from the test image in Figure 8(g). The numbers of true matched keypoints for the three works in the literature [20–22] are 2, 0, and 0, respectively; their visual results are given in Figures 8(i), 8(j), and 8(k). The proposed method matches 8 keypoint pairs, as shown in Figure 8(l).

The performance of the method under blurring is also evaluated with two different tests, with σ = 0.5 and σ = 2.0, respectively. Figure 9(a) shows the test image, and Figure 9(b) shows the forged image blurred by a Gaussian kernel with σ = 0.5, selected randomly from ORBF-Blur0.5. The SIFT and SURF based methods cannot detect any keypoints on the forged region and thus do not mark the forged regions, as shown in Figures 9(c) and 9(d), respectively [20, 21]; these methods also find false matches. The sORB based method in [22] detects 9 matched keypoint pairs, as shown in Figure 9(e). However, Figure 9(f) shows that the proposed method detects 41 matched keypoint pairs in the forged region. The methods are also tested for σ = 2.0. Figures 9(g) and 9(h) show the test and forged images, respectively. The SIFT, SURF, and sORB based methods detect 15, 3, and 3 matched keypoint pairs, as shown in Figures 9(i), 9(j), and 9(k), while Figure 9(l) shows that the proposed method detects 84 matched keypoint pairs, more than the SIFT based method.

Figure 10 shows the comparison of the visual results of the proposed method and the other methods after white Gaussian noise is added onto the forged image. Figures 10(a) and 10(b) show the original image and the forged image, randomly chosen from ORBF-AWGN40. The method in [20] detects 15 matched keypoints on the forged regions, as shown in Figure 10(c). The SURF and sORB based methods in [21, 22] do not detect any keypoints on the grass, as shown in Figures 10(d) and 10(e). Figure 10(f) shows that the proposed method detects 119 matched keypoints thanks to the nonlinear scale space. 25 dB AWGN is also added on the forged test image given in Figure 10(g) to obtain Figure 10(h), an example image from ORBF-AWGN25. The SIFT based method detects 2 matched keypoints on the forged region, whereas 28 matched keypoint pairs are obtained by the proposed method, as shown in Figures 10(i) and 10(l), respectively. The methods in [21, 22] do not find any matched keypoints on the forged regions, as shown in Figures 10(j) and 10(k).

The visual experiments show that the proposed method detects more keypoints on forged regions compared to the similar works in the literature [20–22] for the "Object Removal with uniform Background" type of forgery, even if rotation, AWGN, blurring, or JPEG compression is used to reduce the traces of forgery.

In the last experiment, we apply the proposed method and the similar works on the ORBF, ORBFRot30, ORBFRot90, ORBF-JPG70, ORBF-JPG90, ORBF-AWGN25, ORBF-AWGN40, ORBF-Blur0.5, and ORBF-Blur2.0 datasets separately. The average numbers of matched keypoints for all methods are reported in Table 2. The experiment indicates that the proposed method matches more keypoints on the forged images compared to the similar works under various attacks. For example, while the average number of keypoints for the proposed method is approximately 42 for the ORBFRot30 dataset, the similar works average approximately 10, 2, and 1. When the ORBF-AWGN40 dataset is considered, the proposed method averages approximately 49, while the others average approximately 7, 2, and 5 [20–22].

4.3. Evaluation of the Method by TPR/FPR Values and ROC Curves

The first experiment in this section compares the proposed method with the similar works according to TPR/FPR values. A threshold value of 11 is used to classify a test image as forged or original; this value was determined to be the best for the dataset according to the TPR/FPR values of all of the previous works and the proposed method [20–22]. When the number of matched keypoints is greater than 11, the method classifies the image as forged; otherwise, the image is labeled as authentic.

Forged images in the datasets (RBF, RBFRot30, RBFRot90, RBF-JPG70, RBF-JPG90, RBF-AWGN25, RBF-AWGN40, RBF-Blur0.5, RBF-Blur2.0, ORBF, ORBFRot30, ORBFRot90, ORBF-JPG70, ORBF-JPG90, ORBF-AWGN25, ORBF-AWGN40, ORBF-Blur0.5, and ORBF-Blur2.0) are used to report the average TPR, and the 100 original test images in TI are used to report the average FPR. Table 3 shows the average TPR/FPR values for the proposed method and for the others in the literature. The TPR values show that the proposed method detects forged images with higher accuracy: while the TPR of the proposed method is 0.8, the SIFT, SURF, and ORB based methods have approximately 0.6, 0.3, and 0.4 TPR values, respectively. The proposed method also yields the same FPR value as the SIFT and SURF based methods [20, 21] and an approximately equal FPR to the ORB based method [22].

ROC curves plot the Sensitivity (TPR) against the FPR (1 − Specificity) for varying threshold values. The TPR = FPR diagonal divides the ROC space into two sections: points above the line represent good classification results, and points below it represent bad ones. A comparison of the ROC curves for the proposed method and the others in the literature is also given in this section.

Figures 11(a) and 11(b) present the ROC curves of the methods for both "Replication" and "Object Removal with uniform Background" types of forged images with rotated regions. Forged images in RBFRot30, RBFRot90, ORBFRot30, and ORBFRot90 are used together with their original versions during the experiment. The ROC curves of the methods in the literature are given along with the result of the proposed method in Figure 11. The proposed method exhibits better classification performance compared to the other methods for both 30° and 90° rotated forged regions.

The impact of JPEG compression on the discriminative capability of the proposed method is evaluated with two different JPEG quality factors in the second set of experiments. Forged images in RBF-JPG70, RBF-JPG90, ORBF-JPG70, and ORBF-JPG90 are used with their original versions to evaluate the methods, and the corresponding ROC curves are given in Figures 12(a) and 12(b), respectively. The ROC curves in Figure 12 show that the proposed method classifies forged images better than the similar works; it also yields the better ROC curve when quality factor 70 is used to recompress the forged images.

ROC curves for the blurred forged images in RBF-Blur0.5, RBF-Blur2.0, ORBF-Blur0.5, and ORBF-Blur2.0 are given in Figures 13(a) and 13(b), respectively; 80 forged images are used in this experiment. The proposed method has better discrimination compared to the similar works for both σ = 0.5 and σ = 2.0, as shown in Figure 13, and its discrimination improves as the value of σ increases.

The robustness of the method under the noise addition attack is compared with the others using ROC curves in the last experiment. Forged images in RBF-AWGN25, RBF-AWGN40, ORBF-AWGN25, and ORBF-AWGN40 and their original versions are used during the test. Figure 14 shows the ROC curves of the methods for the two noise levels. Figure 14(b) shows that the proposed method classifies the images with better accuracy even when the noise level is 25 dB SNR.

The ROC curves show that the proposed method classifies better than the similar works even when various attacks are performed to hide the traces of forgery. They indicate that the AKAZE features based method can detect forged regions even in the object removal with uniform Background type of forgery, whereas the other methods cannot detect keypoints in the forged regions used for this type of forgery. The nonlinear scale space underlying the AKAZE features blurs the image over various scales while preserving object boundaries. AKAZE features can be computed faster than SURF and SIFT features but slower than ORB features, as indicated in [25].

5. Conclusion

In this paper, an effective copy move forgery detection technique has been proposed. The method uses AKAZE features for the detection of copied and pasted regions and RANSAC for the elimination of false matches. The two kinds of forgery used in the literature are the "Replication" and "Object Removal with uniform Background" types. The latter is a problem for keypoint based methods, as the authors in [22] reported in their work. The proposed method detects both "Object Removal with uniform Background" and "Replication" types of forgeries with high precision compared to similar works [20–22], thanks to the nonlinear scale space creation. Experimental results also indicate that the method yields better discriminative capability than others even if the forged image has been rotated, blurred, distorted by AWGN, or compressed by JPEG to hide the clues of forgery.

Disclosure

The authors further confirm that the order of authors listed in the manuscript has been approved by all of them.

Competing Interests

There is no conflict of interest regarding the publication of this paper.

Authors’ Contributions

The authors confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed.