Security and Communication Networks

Special Issue

Privacy Protection and Security in Multimedia Processing and Artificial Intelligence


Research Article | Open Access

Volume 2020 |Article ID 8849902 | https://doi.org/10.1155/2020/8849902

Diqun Yan, Xiaowen Li, Li Dong, Rangding Wang, "An Antiforensic Method against AMR Compression Detection", Security and Communication Networks, vol. 2020, Article ID 8849902, 8 pages, 2020. https://doi.org/10.1155/2020/8849902

An Antiforensic Method against AMR Compression Detection

Academic Editor: Zhihua Xia
Received: 24 May 2020
Revised: 23 Jul 2020
Accepted: 19 Aug 2020
Published: 02 Sep 2020

Abstract

Adaptive multirate (AMR) compression traces have been exploited as effective forensic evidence for verifying audio authenticity. Little consideration has been given, however, to antiforensic techniques capable of fooling AMR compression forensic algorithms. In this paper, we present an antiforensic method based on a generative adversarial network (GAN) to attack AMR compression detectors. The GAN framework is utilized to modify double AMR compressed audio so that it exhibits the underlying statistics of single compressed audio. Three state-of-the-art AMR compression detectors are selected as the targets to be attacked. The experimental results demonstrate that the proposed method is capable of removing the forensically detectable artifacts of AMR compression under various compression ratios, with an average successful attack rate of about 94.75%; that is, the modified audios produced by our well-trained generator can effectively deceive the forensic detectors. Moreover, we show that the perceptual quality of the generated AMR audio is well preserved.

1. Introduction

AMR audio codec [1] is one of the most popular audio codec standards. It is optimized for speech signals and encodes narrowband (200–3400 Hz) signals at a sampling frequency of 8000 Hz [2]. As more and more AMR audio appears as evidence in forensic scenarios, it is of great importance to verify its integrity [3]. Generally, to manipulate an AMR audio, an attacker must first decompress it into a raw waveform, perform the forgery operations, and then recompress it into AMR format. Double compressed audio is therefore questionable, because manipulated audio has always gone through double compression. In the past decade, many forensic techniques have been proposed to detect the compression history of AMR audios, based on both traditional methods [3–5] and deep learning methods [2, 6, 7]. To represent the difference between single compressed and double compressed audios, traditional AMR compression detection techniques rely on low-level acoustic features such as sub-band energy and linear prediction coefficients (LPCs), which require professional acoustic knowledge. Recently, deep learning methods have been gaining popularity in forensic research, as they can capture highly complex features from raw samples by training a neural network on large-scale data.

However, as many forensic techniques have been proposed to verify the integrity of digital files, antiforensic methods have also been proposed to expose the shortcomings and weaknesses of existing forensic techniques and thus help investigators address these weaknesses and improve their tools. For example, Fontani et al. [8] first presented an antiforensic method for median filtering (MF), which made MF images undetectable by MF detectors [9–11] while keeping the image quality at a good PSNR. Luo et al. [12] applied a GAN framework to improve the quality of JPEG images and successfully fool JPEG compression detectors. Chen et al. [13] used the legacy traces of a designated camera to generate a forged image that can deceive existing camera identification techniques. Kim et al. [14] adopted a deep convolutional neural network (DCNN) to remove the forensic traces from MF images and effectively recover MF images visually similar to the original. Li et al. [15] modified forensic traces in a data-driven manner to mislead three advanced audio source identification techniques [16–18].

These antiforensic methods, however, give little consideration to exposing the weaknesses in the robustness of AMR compression detection. As more and more AMR audio appears as evidence in forensic scenarios, it is important to help investigators address the weaknesses of AMR compression detectors. Therefore, in this paper, we propose an antiforensic method utilizing a GAN framework comprising two networks: a generator and a discriminator. The generated data can statistically model the distribution of real data [19]. To improve the perceptual quality of the double compressed audio and remove the artifacts introduced by the AMR compression procedure, we adopt the GAN to modify double compressed audios so that they avoid forensic detection. To build our antiforensic attack, we design the architecture of the GAN and its loss functions. In particular, three state-of-the-art AMR compression detectors are selected as the attack targets to evaluate the performance of our method.

The rest of this paper is organized as follows. In Section 2, we introduce related work on AMR compression forensics and the GAN framework. The details of our proposed GAN framework are provided in Section 3. Section 4 presents the experimental settings and extensive experiments against three AMR compression detectors. Conclusions are given in Section 5.

2. Related Work

In this section, we briefly introduce three advanced detection methods, which are considered as the attack targets. Additionally, the GAN framework is briefly reviewed.

2.1. Detection of AMR Compression

In general, traditional detection of AMR compression consists of two primary steps: feature extraction and model classification.

In the first work on AMR compression detection, Shen et al. [3] used traditional acoustic features, including the average sub-band frequency energy ratio, average low-frequency sub-band energy ratio, bispectrum features, and the linear prediction spectrum, to represent the differences caused by AMR compression. A standard SVM modelling technique was employed for classification, achieving an accuracy of about 87% in distinguishing single compressed audio from double compressed audio.

In [2], Luo et al. adopted an autoencoder network for automatic feature extraction. They demonstrated that the deep features extracted from a well-trained autoencoder differ greatly between single compressed and double compressed audio, and they designed a majority voting strategy for classification.

In [6], the authors delved into the stacked autoencoder (SAE) network to obtain better deep features for the AMR compression forensic task. They then applied a universal background model-Gaussian mixture model (UBM-GMM) to identify the compression history, improving the classification accuracy to 98% on the TIMIT [20] database.

2.2. Generative Adversarial Network

The generative adversarial network (GAN) was first proposed by Goodfellow et al. [21] for generating realistic images. In GAN, two networks are trained against each other in a min-max two-player game. In the iterative training, the purpose of the generator G is to capture the distribution of real data, and that of the discriminator D is to decide whether a sample came from the real database rather than being generated by G. The generator tries to maximize the probability of making the discriminator mistakenly classify the generated data as real, while the discriminator guides the generator to produce more realistic samples. Generally, the adversarial training process can be denoted as a min-max game optimized by the following loss function:

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 − D(G(z)))],

where x denotes the real data and z denotes the random noise from which G generates samples similar to x after the adversarial training of the generator G and discriminator D. In the training process, the purpose of G is to minimize the loss value while that of D is to maximize it.
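As an illustration, the min-max value function V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))] can be evaluated numerically. The following sketch uses made-up discriminator scores (not from the paper) to show that a confident discriminator keeps V near 0, while a fully fooled one drives it to −2 log 2:

```python
import numpy as np

def gan_value(d_real, d_fake):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: discriminator scores on real samples, each in (0, 1)
    d_fake: discriminator scores on generated samples, each in (0, 1)
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# Confident discriminator: scores near 1 on real, near 0 on fake -> V near 0.
v_sharp = gan_value([0.99, 0.98], [0.01, 0.02])
# Fooled discriminator: 0.5 everywhere -> V = 2*log(0.5) ~ -1.386.
v_fooled = gan_value([0.5, 0.5], [0.5, 0.5])
```

D maximizes V (pushing it toward 0), while G minimizes it by making d_fake indistinguishable from d_real.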

Recently, GAN has gained growing popularity in various fields because of its effective generative capability. In this work, the GAN framework is treated as an approximate inverse of the AMR compression procedure, used to improve the perceptual quality of double compressed audio and remove the forensic artifacts. Specifically, the generator and the discriminator can be regarded as an antiforensic model and an AMR compression detector, respectively. Hence, the adversarial concept is well suited to the antiforensic task in AMR compression detection.

3. Proposed Antiforensic Framework

In this section, we present the proposed antiforensic framework, including its overall architecture, the network designs of the generator and discriminator, and the loss functions.

As illustrated in Figure 1, the double compressed audio x_d is first sent into the generator G to obtain a falsified audio x_f. x_f and the original audio x_o, selected from the uncompressed audio, are further fed into the discriminator D for classification. Then, with the parameters of the discriminator frozen, the loss from D is fed back to G, which is represented by the dotted lines.

3.1. Overall Architecture

The overall goal of our attack is to remove the artifacts left by AMR compression so that the resultant audio can fool the detectors. To deploy a successful attack, the generated audio must be compressed back to AMR format, because many investigators only accept AMR files for detection. Thus, the generated audio must statistically model the distribution of the original audio x_o so that, after recompression, it will be similar to the single compressed audio x_s.

As shown in Figure 1, the proposed framework consists of a generator G and a discriminator D. To remove the artifacts left by the compression, G is used to generate the falsified audio x_f by adding a generated perturbation to the double compressed audio x_d. The discriminator D is designed to distinguish an original audio x_o, which has never been through compression, from a falsified audio x_f. In the adversarial training of G and D, G is encouraged to learn how to minimize the difference between x_f and x_o and to optimize its parameters to achieve better perceptual quality of x_f.

3.2. Architecture of Proposed Framework
3.2.1. Generator

The generator G is used to generate the antiforensic audios. In this framework, we use SEGAN [22], which has been effectively applied in speech enhancement, as a reference architecture for our adversarial network. As shown in Figure 2, the generator takes x_d (size = 1 × 8000) as input and consists of 7 convolutional groups and 7 corresponding deconvolutional groups.

Each convolutional group includes a convolutional layer with 64 filters with 1 × 30 kernels and stride = 2, followed by a batch normalization (BN) layer, which stabilizes the training process and helps make the generated audios more realistic; Leaky-ReLU is chosen as the activation function. Each deconvolutional group consists of a deconvolutional layer configured like the convolutional layer, followed by a BN layer and a ReLU activation. To reconstruct the details of the audio and reduce the information loss as signals flow through the convolutional and deconvolutional groups, we apply skip connections in the generator, which route each convolutional group's output to its corresponding deconvolutional group. The skip connections improve the generator's performance, as gradients can flow deeper through them without suffering much vanishing [23]. Finally, a sigmoid activation is added to restrict the range of the output.
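As a rough sketch of how the encoder and decoder stages line up, the feature-map lengths through the 7 stride-2 groups can be traced as follows. The 'same' padding (so each convolution halves the length with ceiling rounding) is an assumption for this sketch; the paper does not state the padding scheme:

```python
import math

def encoder_decoder_lengths(n_samples=8000, groups=7, stride=2):
    """Trace feature-map lengths through 7 stride-2 convolutional groups
    and the 7 mirrored deconvolutional groups (assuming 'same' padding,
    so each conv halves the length with ceiling rounding)."""
    enc = [n_samples]
    for _ in range(groups):
        enc.append(math.ceil(enc[-1] / stride))
    # Each deconvolutional group restores the length of the matching
    # encoder stage, which the skip connections rely on.
    dec = list(reversed(enc[:-1]))
    return enc, dec

enc, dec = encoder_decoder_lengths()
# enc: [8000, 4000, 2000, 1000, 500, 250, 125, 63]
# dec: [125, 250, 500, 1000, 2000, 4000, 8000]
```

The matching lengths at each stage are what make the skip connections (elementwise concatenation or addition of encoder and decoder features) well defined.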

3.2.2. Discriminator

Since the key advantage of GAN is iterative training to obtain better generated samples, the architecture of D is an important constraint on our framework. The discriminator is intended to classify x_f and x_o and thereby force the generated audios to deceive the detector; hence, it must perform well in distinguishing x_f from x_o. We therefore build a CNN architecture for D. As shown in Figure 3, the discriminator is designed as a CNN-based compression detector. It comprises 6 convolutional groups followed by a global average pooling layer. At the end of the network, a dense layer coupled with a softmax activation function outputs the categorical probability.

Before the iterative training, we first test the capability of the designed discriminator to distinguish the original audio x_o from the double compressed audio x_d. The test uses a sub-dataset including 6000 original audios selected from the TIMIT database and their double compressed counterparts, with compression bit rates randomly selected from {4.75 kbps, 5.15 kbps, 5.9 kbps, 6.7 kbps, 7.4 kbps, 7.95 kbps, 10.2 kbps, 12.2 kbps}. The sub-dataset was divided into training (70%) and validation (30%) sets. The accuracy of the discriminator model is shown in Figure 4; it can be observed that our designed discriminator achieves good performance.

3.3. Loss Functions

In this section, we present the loss functions for the two networks. To achieve the goal of antiforensics, the generator should learn how to minimize the difference between the modified double compressed audio x_f and the original audio x_o while maintaining acceptable perceptual quality. In this work, we define the loss of the generator as

L_G = α L_p + β L_adv,

where L_p represents the perceptual loss of G, L_adv denotes the adversarial loss calculated from D, and α and β are the weights balancing the importance of L_p and L_adv.

Considering that the attack should introduce as few perceptual artifacts as possible to improve forensic undetectability, we employ the perceptual loss L_p to improve the quality of x_f. L_p is defined as

L_p = (1/N) Σ_{i=1}^{N} ||G(x_d^(i)) − x_d^(i)||²,

where G(x_d^(i)) represents the output of G, and N and i represent the batch size and the position of x_d^(i) in the batch, respectively.

Then, the adversarial loss L_adv is designed to force G to perform better in the iterative training. We define L_adv as

L_adv = −(1/N) Σ_{i=1}^{N} log D(G(x_d^(i))),

where D(G(x_d^(i))) denotes the class probability, calculated by D, that the modified audio is original.

In this adversarial task, to force G to modify x_d to be similar to x_o, D should be able to correctly distinguish the original audio x_o from the decompressed x_d or the generated x_f. Therefore, the loss of D is defined as follows:

L_D = −(1/N) Σ_{i=1}^{N} [log D(x_o^(i)) + log(1 − D(x_f^(i)))].
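For illustration, a generator loss combining a perceptual term and an adversarial term can be sketched as follows. The mean-squared-error perceptual term, the log adversarial term, and the tensor shapes are assumptions for this sketch, since the exact norms are not recoverable from the text:

```python
import numpy as np

def generator_loss(x_d, x_f, d_scores, alpha=1000.0, beta=1.0):
    """Sketch of L_G = alpha * L_p + beta * L_adv.

    x_d:      batch of double compressed audios, shape (N, T)
    x_f:      generator outputs G(x_d), shape (N, T)
    d_scores: discriminator probabilities D(x_f) of being original, shape (N,)
    """
    # Perceptual term: mean squared distance between x_f and x_d (assumed MSE).
    l_p = np.mean(np.sum((x_f - x_d) ** 2, axis=1))
    # Adversarial term: standard log loss pushing D(x_f) toward 1.
    l_adv = -np.mean(np.log(d_scores + 1e-12))
    return alpha * l_p + beta * l_adv

rng = np.random.default_rng(0)
x_d = rng.normal(size=(4, 8000))
x_f = x_d + 0.01 * rng.normal(size=(4, 8000))
loss = generator_loss(x_d, x_f, np.full(4, 0.9))
```

With alpha much larger than beta (as in the paper's weight settings), the perceptual term dominates early training, keeping x_f close to x_d while the adversarial term nudges it past the detector.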

4. Experimental Results

In this section, we evaluate our antiforensic method against three advanced forensic techniques [2, 3, 6]. First, we create an audio database designed especially for the experiment. Then, the successful attack rate (SAR) is used to measure the forensic undetectability of our antiforensic audios, and the perceptual evaluation of speech quality (PESQ) [24] is adopted to assess their quality.

4.1. Database

TIMIT [20] is a typical speech database consisting of 630 speakers of different dialects of American English (192 females and 438 males), each reading ten sentences of approximately three seconds. To build the forensic database, we first use the AMR codec to obtain single compressed audio x_s from the TIMIT database, with a compression bit rate randomly selected from {4.75 kbps, 5.15 kbps, 5.9 kbps, 6.7 kbps, 7.4 kbps, 7.95 kbps, 10.2 kbps, 12.2 kbps}. Then, we decode and recompress the AMR audios to get the double compressed AMR audio x_d, with bit rates also randomly selected from 4.75 to 12.2 kbps.

In the experiments, we first split these audios into 1 s clips and randomly divide the clips into a train set and a test set, obtaining 12000 1 s training audios and 6900 testing audios. The three detectors [2, 3, 6] are then trained on the train set; their average detection accuracies on the test set are 87.52%, 92.60%, and 98.54%, respectively, which essentially agree with the results reported in their original works.

4.2. Experimental Setup and Evaluation Metrics
4.2.1. Experimental Setup

We train our network on patch-sized audio pairs {x_d, x_o}. Considering that the audio might be split into different sizes by the investigator before detection, we stitch the 1 s audios together to obtain audios of different sizes, including 13800 0.5 s clips, 6900 1 s clips, 3450 2 s clips, and 2300 3 s clips. The generated audios x_f are then compressed back to AMR format with random bit rates chosen from 4.75 to 12.2 kbps.

Adam [25] is adopted as the optimizer, with a learning rate of 1 × 10−4 for G and 5 × 10−6 for D. Before the iterative training, we pretrain the generator with batch size = 64 and weight terms α = 1000 and β = 0 for 5 epochs. Then, G and D are trained iteratively for 30 epochs with α = 1000 and β = 1 and a G : D iteration ratio of 1 : 5, which gives the discriminator more iterations to achieve better performance.
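The alternating schedule can be sketched as a simple counter. Here steps_per_epoch is a made-up placeholder (the paper does not report it); the point is only that the 1 : 5 ratio gives the discriminator five updates for every generator update:

```python
def training_schedule(epochs=30, steps_per_epoch=100, d_steps_per_g_step=5):
    """Count parameter updates under a 1:5 G:D iteration ratio."""
    g_updates = d_updates = 0
    for _ in range(epochs):
        for _ in range(steps_per_epoch):
            d_updates += d_steps_per_g_step  # discriminator updated 5 times
            g_updates += 1                   # then one generator update
    return g_updates, d_updates

g, d = training_schedule()
# g -> 3000, d -> 15000: the discriminator receives five times more updates.
```

Giving D more updates per cycle is a common stabilization trick: a stronger discriminator supplies more informative gradients to the generator.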

4.2.2. Evaluation Metrics

The successful attack rate (SAR) is used as the evaluation metric, as it well represents the forensic undetectability of our antiforensic audios. We define the SAR as

SAR = (1/M) Σ_{j=1}^{M} f(x_f^(j)) × 100%,

where x_f^(j) represents the antiforensic audio decompressed with each bit rate selected from 4.75 to 12.2 kbps, M is the number of tested audios, and f(x_f^(j)) is the classification result of the forensic detector, that is, f(x_f^(j)) = 1 when x_f^(j) has been misclassified as single compressed audio x_s and 0 otherwise.
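The SAR computation reduces to counting misclassifications. A minimal sketch, with hypothetical string labels standing in for the detector's two classes:

```python
def successful_attack_rate(predictions, target_label="single"):
    """SAR = percentage of antiforensic audios that the detector
    misclassifies as single compressed ('single' is a placeholder label)."""
    hits = sum(1 for p in predictions if p == target_label)
    return 100.0 * hits / len(predictions)

# Hypothetical detector outputs for four antiforensic clips:
preds = ["single", "single", "double", "single"]
sar = successful_attack_rate(preds)  # 3 of 4 fooled -> 75.0
```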

Meanwhile, we apply PESQ to test the perceptual quality of the antiforensic audio x_f. PESQ is an industry-standard methodology for the assessment of speech quality. The default PESQ score ranges from −0.5 to 4.5, and a higher score means better perceptual quality.

4.3. Experimental Performance and Analysis

We perform our attack on three advanced forensic methods [2, 3, 6]. Specifically, for each clip in the testing set, we generate a copy with the well-trained generator and then decompress the copy at each of the eight bit rates. Finally, the three trained detectors are used to classify our antiforensic audios.

As shown in Table 1, the experimental results are in line with expectations. The antiforensic audios can significantly deceive the three advanced AMR compression detectors, with an average SAR of about 94.71%, which means the forensic techniques cannot correctly identify the antiforensic audios. Clearly, our method can make x_f undetectable by these forensic techniques.


Table 1: SAR (%) of the antiforensic audios against the three detectors, for different clip sizes and decompression bit rates (kbps).

Size (s)  Method   4.75   5.15    5.9    6.7    7.4   7.95   10.2   12.2  Average
0.5       [2]     97.07  93.96  98.41  95.28  96.52  97.40  96.88  97.59    96.63
0.5       [3]     95.82  92.74  93.95  94.26  93.53  96.83  91.75  94.85    94.20
0.5       [6]     94.95  96.49  92.60  97.25  96.55  91.64  98.28  93.75    95.69
1         [2]     97.66  98.05  98.15  91.72  95.78  98.42  92.17  94.82    95.82
1         [3]     98.33  97.94  98.30  93.69  96.72  96.59  96.08  91.01    96.21
1         [6]     94.05  96.88  92.20  90.63  86.08  91.26  91.73  90.23    91.63
2         [2]     96.12  90.95  97.17  91.56  95.86  98.49  98.12  97.32    95.70
2         [3]     96.23  99.35  98.59  93.41  92.32  91.34  96.91  93.27    95.15
2         [6]     94.77  92.88  95.26  89.14  89.00  95.25  90.19  96.85    92.91
3         [2]     98.86  96.06  93.32  98.05  95.35  93.36  85.95  96.74    94.71
3         [3]     97.57  96.37  94.72  98.45  97.51  98.05  96.74  93.19    96.68
3         [6]     93.36  96.61  94.72  92.20  93.86  91.56  94.14  92.53    93.62

To measure the quality of our antiforensic audios, we compute the PESQ score of x_f against the original audios. As shown in Figure 5, our antiforensic audios retain good perceptual quality, with most PESQ values above 3.3, which means that our method can improve the perceptual quality of the double compressed audio while achieving the antiforensic purpose. Figure 6 shows the spectrograms of an original audio x_o from the test set, its single compressed x_s and double compressed x_d, and its antiforensic audio x_f compressed with random bit rates. Compared with x_d, x_f presents fewer losses of content in the high frequencies and is more similar to x_s.

5. Conclusion and Future Work

In this paper, we have proposed a new method to expose the weaknesses of forensic detectors of AMR compression. To do this, we developed a GAN framework for the removal of AMR compression artifacts. Unlike conventional antiforensic methods, our method retains good perceptual quality while achieving better antiforensic capability in a data-driven manner. Extensive experimental results demonstrate that the antiforensic double compressed audio can effectively avoid detection by existing AMR compression detectors, with an average SAR of about 94.75%, while retaining good perceptual quality.

However, many open problems remain in the competition between forensics and antiforensics. In the future, we plan to study the robustness of AMR compression forensics, i.e., whether an adversarial framework could yield a robust discriminator that correctly detects antiforensic audios produced by a well-trained generator or other attack strategies, while still successfully distinguishing double compressed audios from single compressed audios.

Data Availability

The TIMIT dataset used to support the findings of the study is public and available at https://catalog.ldc.upenn.edu/LDC93S1.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was supported in part by the National Natural Science Foundation of China under grant nos. U1736215, 61672302, and 61901237, in part by the Zhejiang Natural Science Foundation under grant nos. LY20F020010 and LY17F020010, and in part by the K. C. Wong Magna Fund of Ningbo University.

References

  1. B. Bessette, R. Salami, R. Lefebvre et al., "The adaptive multirate wideband speech codec (AMR-WB)," IEEE Transactions on Speech and Audio Processing, vol. 10, no. 8, pp. 620–636, 2002.
  2. D. Luo, R. Yang, and J. Huang, "Detecting double compressed AMR audio using deep learning," in Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2669–2673, Florence, Italy, May 2014.
  3. Y. Shen, J. Jia, and L. Cai, "Detecting double compressed AMR-format audio recordings," in Proceedings of the 10th Phonetics Conference of China (PCC), pp. 1–5, Shanghai, China, April 2012.
  4. J. Sampaio and F. Nascimento, "Double compressed AMR audio detection using linear prediction coefficients and support vector machine," in Proceedings of the 22nd Brazilian Conference on Automation, João Pessoa, Brazil, September 2018.
  5. J. F. P. Sampaio and F. A. D. O. Nascimento, "Detection of AMR double compression using compressed-domain speech features," Forensic Science International: Digital Investigation, vol. 33, Article ID 200907, 2020.
  6. D. Luo, R. Yang, B. Li, and J. Huang, "Detection of double compressed AMR audio using stacked autoencoder," IEEE Transactions on Information Forensics and Security, vol. 12, no. 2, pp. 432–444, 2016.
  7. K. Valanchery, "Analysis of different classifier for the detection of double compressed AMR audio," International Journal of Advance Research, Ideas and Innovations in Technology, vol. 4, pp. 98–107, 2018.
  8. M. Fontani and M. Barni, "Hiding traces of median filtering in digital images," in Proceedings of the 2012 20th European Signal Processing Conference (EUSIPCO), pp. 1239–1243, IEEE, Bucharest, Romania, August 2012.
  9. M. Kirchner and J. Fridrich, "On detection of median filtering in digital images," in Proceedings of SPIE Electronic Imaging, Media Forensics and Security II, vol. 7541, pp. 1–6, San Jose, CA, USA, 2010.
  10. G. Cao, Y. Zhao, R. Ni, L. Yu, and H. Tian, "Forensic detection of median filtering in digital images," in Proceedings of the 2010 IEEE International Conference on Multimedia and Expo, pp. 89–94, Singapore, July 2010.
  11. H.-D. Yuan, "Blind forensics of median filtering in digital images," IEEE Transactions on Information Forensics and Security, vol. 6, no. 4, pp. 1335–1345, 2011.
  12. Y. Luo, H. Zi, Q. Zhang, and X. Kang, "Anti-forensics of JPEG compression using generative adversarial networks," in Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), pp. 952–956, IEEE, Rome, Italy, September 2018.
  13. C. Chen, X. Zhao, and M. C. Stamm, "MISLGAN: an anti-forensic camera model falsification framework using a generative adversarial network," in Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 535–539, Athens, Greece, October 2018.
  14. D. Kim, H.-U. Jang, S.-M. Mun, S. Choi, and H.-K. Lee, "Median filtered image restoration and anti-forensics using adversarial networks," IEEE Signal Processing Letters, vol. 25, no. 2, pp. 278–282, 2018.
  15. X. Li, D. Yan, L. Dong, and R. Wang, "Anti-forensics of audio source identification using generative adversarial network," IEEE Access, vol. 7, pp. 184332–184339, 2019.
  16. C. Hanilci, F. Ertas, T. Ertas, and O. Eskidere, "Recognition of brand and models of cell-phones from recorded speech signals," IEEE Transactions on Information Forensics and Security, vol. 7, no. 2, pp. 625–634, 2012.
  17. C. Kotropoulos and S. Samaras, "Mobile phone identification using recorded speech signals," in Proceedings of the 2014 19th International Conference on Digital Signal Processing, pp. 586–591, IEEE, Hong Kong, China, August 2014.
  18. D. Luo, P. Korus, and J. Huang, "Band energy difference for source attribution in audio forensics," IEEE Transactions on Information Forensics and Security, vol. 13, no. 9, pp. 2179–2189, 2018.
  19. S. Chintala, E. Denton, M. Arjovsky, and M. Mathieu, "How to train a GAN? Tips and tricks to make GANs work," 2016.
  20. J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, and D. S. Pallett, Getting Started with the DARPA TIMIT CD-ROM: An Acoustic Phonetic Continuous Speech Database, vol. 107, National Institute of Standards and Technology (NIST), Gaithersburg, MD, USA, 1988.
  21. I. Goodfellow, J. Pouget-Abadie, M. Mirza et al., "Generative adversarial nets," in Advances in Neural Information Processing Systems, pp. 2672–2680, Montreal, Canada, 2014.
  22. S. Pascual, A. Bonafonte, and J. Serra, "SEGAN: speech enhancement generative adversarial network," 2017, https://arxiv.org/abs/1703.09452.
  23. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, Las Vegas, NV, USA, June 2016.
  24. A. W. Rix, J. G. Beerends, M. P. Hollier, and A. P. Hekstra, "Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs," in Proceedings of the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 749–752, IEEE, Salt Lake City, UT, USA, 2001.
  25. D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," 2014, https://arxiv.org/abs/1412.6980.

Copyright © 2020 Diqun Yan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

