Journal of Electrical and Computer Engineering


Research Article | Open Access


Sai-Mei Jiao, Hai-feng Wang, Kun Zhang, Ya-qi Hu, "Neural Linguistic Steganalysis via Multi-Head Self-Attention", Journal of Electrical and Computer Engineering, vol. 2021, Article ID 6668369, 5 pages, 2021. https://doi.org/10.1155/2021/6668369

Neural Linguistic Steganalysis via Multi-Head Self-Attention

Academic Editor: Yang Li
Received: 29 Dec 2020
Revised: 07 Mar 2021
Accepted: 07 Apr 2021
Published: 19 Apr 2021

Abstract

Linguistic steganalysis can indicate the existence of steganographic content in suspicious text carriers. Precise linguistic steganalysis on suspicious carriers is critical for multimedia security. In this paper, we introduce a neural linguistic steganalysis approach based on multi-head self-attention. In the proposed approach, words in a text are first mapped into a semantic space through a hidden representation layer for better modeling of semantic features. Then, multi-head self-attention models the interactions between words in the carrier. Finally, a softmax layer categorizes the input text as cover or stego. Extensive experiments validate the effectiveness of our approach.

1. Introduction

Steganography is an ancient technique aiming at embedding secret messages into carriers; according to the carrier type, it can be divided into image steganography [1], text steganography [2], and audio steganography [3]. Conversely, steganalysis focuses on detecting hidden messages in suspicious carriers.

Text steganography embeds secret data into a cover text so that the existence of the data is invisible or undetectable to adversaries and casual viewers. It has been widely considered an attractive technology for complementing conventional cryptographic algorithms in multimedia security, concealing a secret message or watermark in a cover text file or message to protect confidential information. However, this technology can also be used by terrorists and other criminals for malicious purposes, which poses great threats to security in cyberspace. Moreover, text steganography has changed significantly with the rapid development of natural language processing technology. Thus, it is crucial to design linguistic steganalysis approaches built on the most recent technologies.

Traditional linguistic steganalysis first extracts statistical features directly from the carrier and then performs classification on those features. For example, Taskiran et al. [4] distinguished cover and stego text using a 3-gram language model as the feature and a Support Vector Machine (SVM) as the classifier. Chen et al. [5] proposed a steganalysis scheme (NFZ-WDA) that models language structure based on word distribution in different natural frequency zones. The authors in [6] used meta features, including word frequency, word length, and space rate, together with an immune mechanism to select proper features. The main difficulties of these previous approaches are that they require related domain knowledge and that their generalization to the latest text steganography methods is very limited.

Zuo et al. [7] were the first to use word embedding features, which better capture the semantic and statistical distortion introduced by linguistic steganography. Many other neural steganalysis approaches are based on CNNs, RNNs, or their combination [8]. Despite these studies, precise linguistic steganalysis remains an unsolved problem.

Our work is motivated by the observation that the interactions between words in a text are important for steganalysis, and multi-head self-attention has great potential to model these interactions [9, 10]. Thus, we propose a neural steganalysis approach based on multi-head self-attention. In the proposed approach, a hidden representation layer first maps the words of a text into a semantic space for better exploitation of semantic features. Second, multi-head self-attention models the relationships between different words in the text, which is crucial for linguistic steganalysis. Finally, we concatenate the word representations with the recalibrated representations from multi-head self-attention for classification, where a softmax layer categorizes the text as “cover” or “stego.” Experiments validate the effectiveness of the proposed approach.

The contributions of our work are as follows. (1) To the best of our knowledge, we are the first to propose an attention-based approach to model and extract correlation features for linguistic steganalysis. (2) Experiments show that the proposed steganalysis method achieves excellent performance in detecting generative linguistic steganography.

The rest of the paper is organized as follows. Section 2 describes the proposed linguistic steganalysis approach based on multi-head self-attention. Section 3 presents the experimental results and discussion. Finally, concluding remarks and future work are given in Section 4.

2. Proposed Approach

The architecture of the proposed linguistic steganalysis approach is shown in Figure 1. It contains three major modules, i.e., text representation, carrier encoder, and carrier prediction. Detailed analyses of the different components of the proposed architecture are presented in the subsequent subsections.

2.1. Text Representation

The core of text representation is word embedding, which converts a text carrier from a sequence of words into a sequence of low-dimensional embedding vectors. Denote a suspicious carrier text with n words as T = {w_1, w_2, ..., w_n}. Through this layer it is converted into a vector sequence E = [e_1, e_2, ..., e_n]. Besides, in order to provide the model with positional information of words in the text, we add a position embedding vector p_i to each word embedding, and thus obtain a new word representation sequence X = [x_1, x_2, ..., x_n], where x_i = e_i + p_i.
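As a minimal sketch of this step (toy vocabulary and dimensions, with randomly initialised tables standing in for learned embeddings; the paper itself uses an embedding size of 256), the representation x_i = e_i + p_i can be written as:

```python
import random

random.seed(0)
VOCAB_SIZE, MAX_LEN, DIM = 100, 16, 8  # toy sizes, not the paper's settings

# randomly initialised lookup tables standing in for learned parameters
word_emb = [[random.uniform(-0.1, 0.1) for _ in range(DIM)] for _ in range(VOCAB_SIZE)]
pos_emb = [[random.uniform(-0.1, 0.1) for _ in range(DIM)] for _ in range(MAX_LEN)]

def represent(word_ids):
    """Map word ids to representations x_i = e_i + p_i (word + position embedding)."""
    return [[e + p for e, p in zip(word_emb[w], pos_emb[i])]
            for i, w in enumerate(word_ids)]

X = represent([3, 17, 42, 7])
print(len(X), len(X[0]))  # 4 8
```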

In the carrier encoder layer, we adopt multi-head self-attention [9], which has recently achieved remarkable performance in modeling complicated relations between context words. Taking the i-th word representation x_i as an example, we explain how multiple meaningful correlation features involving x_i are identified under this mechanism. First, we define the correlation between feature x_i and feature x_j under a specific attention head k as

α^(k)_ij = exp(ψ^(k)(x_i, x_j)) / Σ_{l=1..n} exp(ψ^(k)(x_i, x_l)),

where ψ^(k)(·,·) is an attention function which defines the correlation between feature x_i and feature x_j. Attention functions take many different forms, and most of them are neural networks. In our case, we adopt the widely used inner-product form, which can be formulated as

ψ^(k)(x_i, x_j) = ⟨W^(k)_Q x_i, W^(k)_K x_j⟩,

where W^(k)_Q, W^(k)_K ∈ R^{d'×d} are transformation matrices which map the original embedding space R^d into a new semantic space R^{d'}.

After obtaining these correlation coefficients, we recalibrate the representation of feature x_i in subspace k by combining all relevant features guided by the coefficients α^(k)_ij:

x̃^(k)_i = Σ_{j=1..n} α^(k)_ij (W^(k)_V x_j),

where W^(k)_V ∈ R^{d'×d} is a value transformation matrix and n is the text length. Since x̃^(k)_i is a combination of feature x_i and its relevant features under head k, it represents a new combinatorial feature learned by our method. The multi-head representation of the i-th word is the concatenation of the representations produced by the h separate self-attention heads, i.e., x̃_i = x̃^(1)_i ⊕ x̃^(2)_i ⊕ ... ⊕ x̃^(h)_i.
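The recalibration above can be sketched in pure Python (a toy illustration with small dimensions; the matrices WQ, WK, WV are randomly initialised stand-ins for learned parameters, not the trained model):

```python
import math
import random

random.seed(1)
DIM, HEADS, SUB = 8, 2, 4   # toy model dim, number of heads, per-head subspace dim

def rand_mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

# per-head projections W_Q, W_K, W_V (random stand-ins for learned weights)
WQ = [rand_mat(SUB, DIM) for _ in range(HEADS)]
WK = [rand_mat(SUB, DIM) for _ in range(HEADS)]
WV = [rand_mat(SUB, DIM) for _ in range(HEADS)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def multi_head_self_attention(X):
    """Return the concatenated multi-head recalibrated representation per position."""
    out = []
    for i in range(len(X)):
        concat = []
        for h in range(HEADS):
            q = matvec(WQ[h], X[i])
            # correlation of position i with every position j under head h
            scores = [sum(a * b for a, b in zip(q, matvec(WK[h], xj))) for xj in X]
            mx = max(scores)
            exps = [math.exp(s - mx) for s in scores]
            alphas = [e / sum(exps) for e in exps]          # softmax weights
            vals = [matvec(WV[h], xj) for xj in X]
            # weighted combination of value vectors in the head subspace
            concat += [sum(a * v[d] for a, v in zip(alphas, vals)) for d in range(SUB)]
        out.append(concat)
    return out

X = [[random.uniform(-1.0, 1.0) for _ in range(DIM)] for _ in range(4)]
Y = multi_head_self_attention(X)
print(len(Y), len(Y[0]))  # 4 8
```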

An interesting intuition is that utilizing features from different levels may boost performance. Inspired by the residual connection structure in ResNet [11], we concatenate the calibrated representation and the original word representation in the concatenation layer. The final feature vector of this layer can be formulated as

u_i = x_i ⊕ x̃_i,

where u_i is taken as the text representation for the proposed linguistic steganalysis problem. Then, global average pooling is applied to reduce the feature dimension, because the dimension of u_i is very high, which puts the model at risk of overfitting. After that, the pooled features are fed into a classification layer to generate a probability distribution over the label set.
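The concatenation and global average pooling can be sketched as follows (toy dimensions; here pooling averages over positions, which is one common reading of the description):

```python
def concat_and_pool(X, X_tilde):
    """u_i = x_i concatenated with x~_i, then global average pooling over positions."""
    U = [xi + ti for xi, ti in zip(X, X_tilde)]   # list concatenation = feature concat
    dim = len(U[0])
    return [sum(u[d] for u in U) / len(U) for d in range(dim)]

# toy example: 3 positions, original dim 2, recalibrated dim 2 -> pooled dim 4
X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
Xt = [[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]]
print(concat_and_pool(X, Xt))  # [3.0, 4.0, 30.0, 40.0]
```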

2.2. Carrier Prediction

The main focus of the carrier prediction module is to categorize a text as “cover” or “stego.” The prediction layer is composed of two dense layers with ReLU and sigmoid activation functions, and can be formulated as

ŷ = σ(W_2 ReLU(W_1 u + b_1) + b_2),

where W_1, W_2 and b_1, b_2 are the weight and bias terms of the linear transformations and σ is the sigmoid function. The output value ŷ is the probability that the suspicious text is a “stego” carrier. The prediction label is finally determined by a threshold τ:

label = “stego” if ŷ > τ, and “cover” otherwise.
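A minimal pure-Python sketch of this prediction head (toy weights chosen for illustration, not the trained parameters) is:

```python
import math

def predict(u, W1, b1, W2, b2, threshold=0.5):
    """Two dense layers (ReLU then sigmoid) followed by thresholding."""
    h = [max(0.0, sum(w * x for w, x in zip(row, u)) + b) for row, b in zip(W1, b1)]
    z = sum(w * x for w, x in zip(W2, h)) + b2
    y_hat = 1.0 / (1.0 + math.exp(-z))          # probability of "stego"
    return ("stego" if y_hat > threshold else "cover"), y_hat

# toy weights: 2-dim feature -> 2 hidden units -> 1 output
W1, b1 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
W2, b2 = [2.0, -1.0], 0.0
label, prob = predict([1.0, 0.0], W1, b1, W2, b2)
print(label, round(prob, 3))  # stego 0.881
```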

2.3. Training Framework

Optimization of the proposed approach follows a supervised learning framework. The loss function of the network is the cross-entropy loss, and parameters of the model are updated by back propagation. Gradients are computed by minimizing the cross-entropy loss over the N training samples:

L = −(1/N) Σ_{i=1..N} [y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i)],

where y_i is the ground-truth label and ŷ_i the predicted probability.
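The loss above is the standard binary cross-entropy, which can be sketched as (the small eps guards against log(0); its value is an implementation choice, not from the paper):

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over (label, predicted probability) pairs."""
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for y, p in zip(y_true, y_pred)) / len(y_true)

loss = cross_entropy([1, 0, 1], [0.9, 0.1, 0.8])
print(round(loss, 4))  # 0.1446
```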

Moreover, in order to mitigate the overfitting issue, we applied batch normalization [12] and dropout technique [13] to regularize the proposed model.

3. Experiments and Analysis

3.1. Dataset and Experimental Settings

To evaluate the performance of the proposed approach, we first constructed a linguistic stegosystem based on the method of [14]. Three large-scale text datasets containing the most common text media on the Internet, namely, Twitter [15], Movie reviews [16], and News, were used to train the stegosystem.

Then, we used the linguistic stegosystem to construct our own steganalysis dataset: for each corpus, 10,000 stego samples were generated and 10,000 natural (cover) texts were randomly chosen. Note that the sentences differ across text types and embedding rates.

The hyperparameters of the proposed model were determined by cross validation on the validation set. Specifically, the number of heads in the multi-head self-attention is 8, the embedding size is 256, the dimension of the fully connected classification layer is 100, and the detection threshold is 0.5. The model is trained with the Adam optimizer [17], with an initial learning rate of 0.001, a dropout rate of 0.9, and a batch size of 256.
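For reference, the stated hyperparameters can be collected into a single configuration (the key names are illustrative, as no code release is assumed; the dropout value of 0.9 follows the paper's wording and in some frameworks would denote a keep probability):

```python
# Hyperparameters as reported in the text; names are illustrative.
CONFIG = {
    "num_heads": 8,
    "embedding_size": 256,
    "fc_dim": 100,
    "detection_threshold": 0.5,
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "dropout_rate": 0.9,
    "batch_size": 256,
}
print(CONFIG["num_heads"], CONFIG["batch_size"])  # 8 256
```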

3.2. Evaluation Metrics

Several evaluation metrics commonly used in classification tasks were used to evaluate the performance of the proposed model, including accuracy (Acc), precision (P), recall (R), and F1 score. The metrics are defined as

Acc = (TP + TN) / (TP + TN + FP + FN),
P = TP / (TP + FP),
R = TP / (TP + FN),
F1 = 2PR / (P + R),

where TP (true positive) is the number of positive samples predicted to be positive, FN (false negative) the number of positive samples predicted to be negative, TN (true negative) the number of negative samples predicted to be negative, and FP (false positive) the number of negative samples predicted to be positive. The F1 score is the harmonic mean of precision and recall.
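These definitions can be checked with a small helper (the counts in the example are arbitrary):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)  # harmonic mean of precision and recall
    return acc, p, r, f1

acc, p, r, f1 = metrics(tp=8, fp=2, tn=7, fn=3)
print(round(acc, 3), round(p, 3), round(r, 3), round(f1, 3))  # 0.75 0.8 0.727 0.762
```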

3.3. Performance Evaluation

Several representative steganalysis algorithms [18–21] were chosen as baseline models to validate the performance of the proposed model.

The results of the comparison are shown in Table 1. From the results, we can conclude that, compared with other linguistic steganalysis methods, the proposed model achieves the best detection performance on all metrics, across different text formats and embedding rates. We can also observe that steganalysis performance differs across datasets, which may be due to the different text lengths: longer texts may contain more clues for steganalysis, leading to higher detection accuracy. Besides, we also notice that detection performance improves as more steganographic information is embedded in the generated texts. One explanation of this phenomenon is that once more information is embedded, the distortion of the generated texts increases, which damages the coherence of the text semantics and provides more steganalysis clues.


Table 1: Detection performance (Acc / P / R) of each method at different embedding rates (bpw).

Format   bpw  [18]                 [19]                 [20]                 [21]                 Ours
News      1   0.532/0.517/0.382    0.763/0.739/0.812    0.840/0.869/0.801    0.858/0.858/0.858    0.913/0.930/0.894
          2   0.513/0.535/0.204    0.786/0.762/0.832    0.835/0.867/0.791    0.864/0.915/0.803    0.920/0.923/0.916
          3   0.597/0.679/0.367    0.824/0.767/0.931    0.897/0.909/0.882    0.920/0.922/0.918    0.962/0.966/0.958
          4   0.755/0.831/0.640    0.859/0.797/0.962    0.938/0.962/0.911    0.961/0.979/0.942    0.973/0.981/0.966
          5   0.847/0.918/0.761    0.881/0.829/0.959    0.961/0.976/0.945    0.973/0.988/0.958    0.985/0.983/0.987
IMDB      1   0.577/0.642/0.345    0.767/0.779/0.744    0.787/0.829/0.722    0.845/0.941/0.736    0.901/0.953/0.844
          2   0.713/0.807/0.560    0.849/0.934/0.871    0.869/0.911/0.818    0.918/0.947/0.886    0.957/0.972/0.940
          3   0.840/0.925/0.741    0.900/0.877/0.931    0.916/0.944/0.885    0.941/0.950/0.932    0.966/0.983/0.949
          4   0.909/0.969/0.845    0.937/0.905/0.975    0.962/0.975/0.947    0.976/0.986/0.966    0.987/0.990/0.983
          5   0.909/0.989/0.828    0.929/0.921/0.940    0.977/0.987/0.966    0.990/0.988/0.992    0.995/0.996/0.993
Twitter   1   0.538/0.520/0.387    0.654/0.652/0.658    0.665/0.664/0.670    0.745/0.811/0.621    0.786/0.873/0.657
          2   0.544/0.523/0.399    0.745/0.762/0.712    0.750/0.827/0.631    0.793/0.914/0.647    0.834/0.883/0.770
          3   0.577/0.669/0.303    0.809/0.798/0.826    0.834/0.889/0.764    0.879/0.939/0.812    0.908/0.950/0.861
          4   0.729/0.836/0.570    0.842/0.824/0.871    0.885/0.950/0.813    0.934/0.988/0.879    0.943/0.986/0.899
          5   0.850/0.916/0.770    0.851/0.839/0.870    0.899/0.961/0.832    0.921/0.960/0.879    0.936/0.958/0.911

We also conducted multi-classification experiments on the dataset, which can be regarded as an embedding-rate estimation task [22]: we mixed texts at various embedding rates, i.e., bpw = 0, 1, 2, 3, 4, 5. The experimental results are shown in Table 2. From Table 2, we can see that our model again outperforms all the baseline models.


Table 2: Multi-class (embedding-rate estimation) performance (P / R / F1) of each method.

Format    [18]                 [19]                 [20]                 [21]                 Ours
News      0.445/0.396/0.420    0.701/0.706/0.703    0.745/0.741/0.743    0.817/0.810/0.811    0.872/0.870/0.871
IMDB      0.490/0.512/0.501    0.742/0.745/0.743    0.767/0.760/0.763    0.848/0.842/0.843    0.879/0.873/0.873
Twitter   0.417/0.363/0.303    0.620/0.620/0.620    0.638/0.615/0.626    0.750/0.705/0.712    0.800/0.758/0.766

4. Conclusions

Precise linguistic steganalysis on suspicious carriers is critical for multimedia security. In this paper, we introduced a neural linguistic steganalysis approach based on multi-head self-attention. In the proposed approach, words in a text are first mapped into a semantic space through a hidden representation layer for better exploitation of semantic features. Then, multi-head self-attention models the interactions between words in the carrier. Finally, a softmax layer categorizes the input text as cover or stego. Extensive experiments validate the effectiveness of our approach. In the future, we will construct more general steganalysis approaches to detect a wider range of linguistic steganography methods.

Data Availability

The linguistic stegosystem was trained on three large-scale text datasets containing the most common text media on the Internet: Twitter [15], Movie reviews [16], and News (https://www.kaggle.com/snapcrack/all-the-news/data).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (no. 61861015) and Key Research and Development Project of Hainan Province (ZDYF2020017).

References

  1. J. Fridrich, Steganography in Digital Media, Cambridge University Press, Cambridge, UK, 2009.
  2. Z. Yang, X. Guo, Z. Chen, Y. Huang, and Y.-J. Zhang, “RNN-stega: linguistic steganography based on recurrent neural networks,” IEEE Transactions on Information Forensics and Security, vol. 99, 2018. View at: Publisher Site | Google Scholar
  3. H. Yang, Z. Yang, and Y. Huang, “Steganalysis of VOIP streams with CNN-LSTM network,” in Proceedings of the ACM Workshop on Information Hiding and Multimedia Security, pp. 204–209, ACM, Paris, France, July 2019. View at: Google Scholar
  4. C. Taskiran, U. Topkara, M. Topkara, and E. Delp, “Attacks on lexical natural language steganography systems,” in Proceedings of SPIE - The International Society for Optical Engineering, Bellingham, WA, USA, October 2006. View at: Google Scholar
  5. Z. Chen, L. Huang, P. Meng, W. Yang, and H. Miao, “Blind linguistic steganalysis against translation based steganography,” in Lecture Notes in Computer Science, Springer, Berlin, Germany, 2011. View at: Google Scholar
  6. H. Yang and X. Cao, “Linguistic steganalysis based on meta features and immune mechanism,” Chinese Journal of Electronics, vol. 19, no. 4, pp. 661–666, 2010. View at: Google Scholar
  7. X. Zuo, H. Hu, W. Zhang, and N. Yu, “Text semantic steganalysis based on word embedding,” in Proceedings of the International Conference on Cloud Computing and Security, IEEE, Haikou, China, June 2018. View at: Google Scholar
  8. Y. J. Bao, H. Yang, Z. Yang, S. Liu, and Y. Huang, “Text steganalysis with attentional LSTM-CNN,” 2019, https://arxiv.org/abs/1912.12871. View at: Google Scholar
  9. A. Vaswani, N. Shazeer, N. Parmar et al., “Attention is all you need,” in Proceedings of the Advances in Neural Information Processing Systems, pp. 5998–6008, IEEE, Long Beach, CA, USA, December 2017. View at: Google Scholar
  10. H. Yang, L. Zhong, Y. Yong, J. Bao, S. Liu, and Y. F. Huang, “Fcem: a novel fast correlation extract model for real time steganalysis of VOIP stream via multi-head attention,” in Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, Brighton, UK, May 2020. View at: Google Scholar
  11. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision & Pattern Recognition, IEEE, Las Vegas, NV, USA, June 2016. View at: Google Scholar
  12. S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in Proceedings of the International Conference on Machine Learning, pp. 448–456, JMLR.org, Lille, France, July 2015. View at: Google Scholar
  13. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014. View at: Google Scholar
  14. T. Fang, J. Martin, and K. Argyraki, “Generating steganographic text with LSTMs,” 2017, https://arxiv.org/abs/1705.10742. View at: Google Scholar
  15. A. Go, “Sentiment classification using distant supervision,” 2009. View at: Google Scholar
  16. A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, and C. Potts, “Learning word vectors for sentiment analysis,” in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, IEEE, Portland, OR, USA, June 2011. View at: Google Scholar
  17. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, May 2015. View at: Google Scholar
  18. P. Meng, L. Hang, W. Yang, Z. Chen, and Z. Hu, “Linguistic steganography detection algorithm using statistical language model,” in Proceedings of the 2009 International Conference on Information Technology and Computer Science, pp. 540–543, IEEE, Manchester, UK, May 2009. View at: Google Scholar
  19. S. Samanta, S. Dutta, and G. Sanyal, “A real time text steganalysis by using statistical method,” in Proceedings of the 2016 IEEE International Conference on Engineering and Technology (ICETECH), pp. 264–268, IEEE, Coimbatore, India, March 2016. View at: Google Scholar
  20. R. Din, S. Affendi Mohd Yusof, A. Amphawan et al., “Performance analysis on text steganalysis method using a computational intelligence approach,” Proceeding of the Electrical Engineering Computer Science and Informatics, vol. 2, no. 1, pp. 67–73, 2015. View at: Publisher Site | Google Scholar
  21. Z. Yang, Y. Huang, and Y.-J. Zhang, “A fast and efficient text steganalysis method,” IEEE Signal Processing Letters, vol. 26, no. 4, pp. 627–631, 2019. View at: Publisher Site | Google Scholar
  22. Z. Yang, K. Wang, J. Li, Y. Huang, and Y. Zhang, “TS-RNN: text steganalysis based on recurrent neural networks,” IEEE Signal Processing Letters, vol. 99, no. 1, 2019. View at: Google Scholar

Copyright © 2021 Sai-Mei Jiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
