Special Issue: Application-Aware Multimedia Security Techniques
Research Article | Open Access
Ahmed I. Iskanderani, Ibrahim M. Mehedi, Abdulah Jeza Aljohani, Mohammad Shorfuzzaman, Farzana Akther, Thangam Palaniswamy, Shaikh Abdul Latif, Abdul Latif, "Artificial Intelligence-Based Digital Image Steganalysis", Security and Communication Networks, vol. 2021, Article ID 9923389, 9 pages, 2021. https://doi.org/10.1155/2021/9923389
Artificial Intelligence-Based Digital Image Steganalysis
Recently, deep learning-based models have been extensively utilized for steganalysis. However, deep learning models suffer from overfitting and hyperparameter tuning issues. Therefore, in this paper, an efficient nondominated sorting genetic algorithm III (NSGA-III) based densely connected convolutional neural network (DCNN) model is proposed for image steganalysis. NSGA-III is utilized to tune the initial parameters of the DCNN model. It can control the accuracy and f-measure of the DCNN model by utilizing them as the multiobjective fitness function. Extensive experiments are conducted on the STEGRT1 dataset. The proposed model is also compared with competitive steganalysis models. Performance analyses reveal that the proposed model outperforms the existing steganalysis models in terms of various performance metrics.
1. Introduction
With the advancement in Internet technology and communication, a substantial number of images are transferred over public networks. Recently, it has been found that many criminal groups utilize images to transfer their dangerous data, hiding it inside the images. Generally, they utilize steganography approaches to hide their harmful contents in the images. Therefore, researchers have started utilizing steganalysis models to recognize images that contain embedded data. Thus, image steganalysis is an approach for recognizing data embedded in images. Consequently, steganalysis classifies a given image as either a stego-embedded image or a normal image.
Zhou et al. designed an ensemble learning model (ELM) based image steganalysis approach. SRNet and RESDET were utilized as base models, and a fusion of the base models was then performed to classify the embedded images. Zhang et al. designed a CNN model in which the convolution kernels were optimized in the preprocessing layer. Minimal convolution kernels were utilized to reduce the number of initial parameters. Spatial pyramid pooling was also used to integrate the local features. Gowda et al. designed an ensemble color space model (ECSM) to evaluate a weighted activation map. It can extract various features specific to each color space. Lévy-flight grey wolf optimization was utilized to minimize the number of features selected in the map.
Boroumand et al. proposed a deep residual model (DRM) to reduce the use of heuristics and externally enforced elements. This model computes the noise residuals by disabling pooling, thereby avoiding suppression of the stego signal. Yedroudj et al. designed a truncation activation-based ensemble model (TREM) trained with rich-model features. It utilizes a truncation activation function and batch normalization with a scale layer. Ye et al. utilized a high-pass filter-based CNN (HCNN) to achieve steganalysis. The weights of the initial layer were computed using high-pass filters for the evaluation of residual maps in a spatial rich model. This served as a regularizer to suppress the image content efficiently. A truncated linear unit was also utilized. Wu et al. utilized a CNN and a deep residual network for steganalysis. The network contains a substantial number of layers, which are significant for evaluating the complex statistics of images.
Yang et al. designed a thirty-two-layer CNN that integrates all features to enhance feature reuse and the gradient flow. The bottleneck layers enhance feature propagation and dramatically reduce the number of CNN parameters. Li et al. designed a novel CNN model to evaluate embedded artifacts in an efficient manner; information diversity was also achieved. A parallel subnet module was designed utilizing numerous filters, and the subnets were trained independently to improve computational speed. Zhang et al. designed a novel CNN model to enhance the classification accuracy of spatial-domain steganography. Spatial pyramid pooling was utilized to integrate the local features. Sharma et al. designed an aggregated residual transformation-based CNN model to obtain significant features for steganalysis. This model has a limited number of initial parameters, which enhances the classification rate. Residual skip connections were also utilized.
Liu et al. have shown the similarities and dissimilarities between SRM-EC and CNN models. An ensemble model was designed to integrate SRM-EC with CNN by averaging their resultant probabilities. Zeng et al. utilized a CNN with a rich-model feature set. A bottom-up strategy was utilized to train the output of each subnetwork toward the actual output. Yang et al. designed a max CNN for steganalysis, which allocates larger weights to features learned from complex texture regions. Yang et al. proposed image steganalysis using a transfer learning model with structure preservation. A discriminant projection matrix was utilized for building the model, and Frobenius-norm-based regularization was utilized to achieve better results. Ren et al. designed an efficient selection-channel network and steganalysis model. The steganalysis model, combined with the trained selection channels, estimates the final steganalysis outcomes.
From the extensive review, it has been observed that deep learning-based models can be utilized for steganalysis. However, deep learning models suffer from overfitting and hyperparameter tuning issues. Therefore, in this paper, an efficient NSGA-III-based densely connected convolutional neural network (DCNN) model is proposed for image steganalysis. This is the principal difference from the existing models available in the literature.
The main contributions of this paper are as follows:
(1) An efficient NSGA-III-based DCNN model is proposed for image steganalysis.
(2) NSGA-III is utilized to tune the initial parameters of the DCNN model.
(3) The accuracy and f-measure performance metrics are used as a multiobjective fitness function.
(4) Extensive experiments are conducted on the STEGRT1 dataset, and the proposed model is compared with competitive steganalysis models.
The remainder of the paper is organized as follows: Section 2 presents the proposed NSGA-III-based DCNN model for steganalysis. Experimental results and comparative analysis are presented in Section 3. Section 4 concludes the paper.
2. Proposed Model
In this paper, an efficient NSGA-III-based DCNN model is proposed for image steganalysis. The following subsections discuss the working of the DCNN and NSGA-III.
2.1. Densely Connected Convolutional Neural Network
The diagrammatic flow of the DCNN is shown in Figure 1.
Assume a stego/normal image x_0, which is fed to the CNN. The model has L layers, each of which implements a nonlinear transformation H_l(·), where l indexes the layers. H_l(·) denotes a composite of operators such as pooling, rectified linear units (ReLU), convolution (Conv), and batch normalization (BN). x_l denotes the output of the l-th layer. A conventional CNN passes the output of the (l − 1)-th layer as the input of the l-th layer, which gives the layer transition x_l = H_l(x_{l−1}). ResNets utilize a skip connection that bypasses the nonlinear transformation with an identity mapping:

x_l = H_l(x_{l−1}) + x_{l−1}.     (1)

ResNets achieve better gradient flow than conventional CNNs. However, summing the identity mapping with the output of H_l may hinder the information flow in the model.
Therefore, to enhance the information flow, the DenseNet was designed. It contains direct connections from any given layer to every subsequent layer. The l-th layer takes the feature maps of all preceding layers, x_0, …, x_{l−1}, as input:

x_l = H_l([x_0, x_1, …, x_{l−1}]).     (2)

Here, [x_0, x_1, …, x_{l−1}] denotes the concatenation of the feature maps obtained from layers 0, …, l − 1. H_l(·) is defined as a composite operator comprising BN, ReLU, and a Conv.
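For intuition, the dense connectivity rule in equation (2) can be sketched in a few lines of plain Python; this is a toy illustration with scalar "feature maps" and a stand-in for H_l, not the actual Conv-based transformation:

```python
# Toy illustration of dense connectivity: each layer receives the
# concatenation of ALL preceding feature maps (equation (2)).

def H(feature_maps, growth_rate):
    """Stand-in for the BN-ReLU-Conv composite: emits `growth_rate` new maps."""
    return [sum(feature_maps) + i for i in range(growth_rate)]

def dense_block(x0, num_layers, growth_rate):
    """Run a dense block; the global state accumulates every layer's output."""
    state = list(x0)                       # [x_0]
    for _ in range(num_layers):
        new_maps = H(state, growth_rate)   # x_l = H_l([x_0, x_1, ..., x_{l-1}])
        state += new_maps                  # concatenate, never overwrite
    return state

# With 1 input map and growth rate 2, three layers leave 1 + 3 * 2 = 7 maps:
print(len(dense_block([1.0], num_layers=3, growth_rate=2)))  # -> 7
```

Note that, unlike the ResNet summation in equation (1), the concatenation keeps every earlier feature map accessible to all later layers.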
The concatenation operator used in equation (2) is not viable when the sizes of the feature maps vary. However, an essential part of CNNs is the downsampling layers, which change the size of the feature maps. To facilitate downsampling, the model is divided into multiple densely connected blocks, termed dense blocks. The layers between the blocks are referred to as transition layers. In this paper, each transition layer utilizes BN and a 1 × 1 Conv followed by a 2 × 2 average pooling layer. There are no connections across dense blocks except through the transition layers.
If every H_l generates k feature maps, the l-th layer has k_0 + k × (l − 1) input feature maps, where k_0 denotes the number of channels of the input layer. The main advantage of DenseNet over a conventional CNN is that it can have very narrow layers, i.e., a small k. The hyperparameter k is referred to as the growth rate of the DenseNet. Every layer adds its k feature maps to the global state. The growth rate regulates how much each layer contributes to the global state. The global state can be accessed from anywhere in the network; therefore, it is not required to replicate it from layer to layer.
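The channel arithmetic above (k_0 + k × (l − 1) input feature maps at layer l) amounts to the following one-liner; the example values are illustrative, not the paper's configuration:

```python
def dense_layer_input_channels(k0, k, l):
    """Input feature maps seen by the l-th layer (1-based) of a dense block
    with k0 input channels and growth rate k: k0 + k * (l - 1)."""
    return k0 + k * (l - 1)

# Illustrative values: 16 input channels, growth rate 12.
print(dense_layer_input_channels(16, 12, 1))  # -> 16 (first layer sees only x_0)
print(dense_layer_input_channels(16, 12, 5))  # -> 64
```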
Every layer computes k output feature maps, but it may have many more inputs. A 1 × 1 Conv is utilized as a bottleneck layer prior to every 3 × 3 Conv to reduce the number of input feature maps and enhance the computational speed. This design is efficient for DenseNet; the variant of H_l with the bottleneck layer, i.e., BN-ReLU-Conv(1 × 1)-BN-ReLU-Conv(3 × 3), is referred to as DenseNet-B. In this paper, each 1 × 1 Conv produces 4k feature maps.
To further enhance model compactness, the number of feature maps is reduced at the transition layers. If a dense block yields m feature maps, then the following transition layer generates ⌊θm⌋ output feature maps, where 0 < θ ≤ 1 is referred to as the compression factor. If θ = 1, the number of feature maps passing through the transition layer stays constant.
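A minimal sketch of the transition-layer compression rule ⌊θm⌋, with illustrative values of m and θ:

```python
import math

def transition_output_maps(m, theta):
    """A transition layer reduces m feature maps to floor(theta * m),
    where 0 < theta <= 1 is the compression factor."""
    assert 0 < theta <= 1, "compression factor must lie in (0, 1]"
    return math.floor(theta * m)

print(transition_output_maps(256, 0.5))  # -> 128 (compressed)
print(transition_output_maps(256, 1.0))  # -> 256 (size unchanged)
```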
The DenseNet contains four dense blocks, each with an equal number of layers. Initially, a Conv with 16 output channels is applied to the input images. For Conv layers with a 3 × 3 kernel size, every side of the input is zero-padded to keep the feature map size fixed. A 1 × 1 Conv followed by a 2 × 2 average pooling is used between two adjacent dense blocks. Finally, a global average pooling is applied, and a softmax activation function is used. The feature map sizes in the dense blocks are 32 × 32, 16 × 16, and 8 × 8, respectively. DenseNets with different combinations of depth L and growth rate k are computed. The size of the input image is 256 × 256, and the initial Conv layer uses 5 × 5 convolutions with a stride of 2.
The exact network configuration and the other hyperparameters of the DenseNet are tuned using NSGA-III.
2.2. Nondominated Sorting Genetic Algorithm III (NSGA-III)
Table 1 presents the nomenclature of NSGA-III. Algorithm 1 illustrates the generation of the initial population of the NSGA-III-based DCNN. Initially, a random population is generated by sampling from a normal distribution. The generated solutions are then mapped to the set of initial parameters of the DCNN.
Algorithm 2 demonstrates the proposed NSGA-III-based DCNN model. Initially, the DCNN is evaluated by using the random population to train and test the model on a chunk of the steganography dataset. The fitness of each solution is then obtained, and the dominated and nondominated groups are evaluated. Thereafter, mutation and crossover operations are used to compute the child solutions. Nondominated sorting is used to sort the obtained nondominated solutions. If the number of fitness evaluations exceeds the maximum allowed, the tuned parameters of the DCNN are returned. Finally, the NSGA-III-based DCNN is trained on the steganalysis dataset.
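The selection step of the loop above relies on Pareto dominance over the two maximization objectives (accuracy and f-measure). A minimal sketch of extracting the nondominated front, with hypothetical fitness pairs; this omits NSGA-III's reference-point niching:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives are maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(population):
    """Return the first (Pareto) front of a list of fitness tuples."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Hypothetical (accuracy, f-measure) pairs for four candidate DCNN settings:
fits = [(0.90, 0.88), (0.92, 0.85), (0.91, 0.89), (0.89, 0.84)]
print(nondominated(fits))  # -> [(0.92, 0.85), (0.91, 0.89)]
```

The two surviving solutions trade accuracy against f-measure; neither beats the other in both objectives, which is why a multiobjective fitness function keeps both rather than collapsing them to a single score.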
3. Performance Analysis
3.1. Dataset
Rezaei et al. designed a reference dataset for image steganalysis, called STEGRT1, which contains both JPEG and BITMAP images. It has 8000 cover and stego images with different sizes and characteristics. These images were generated using various steganographic approaches with different payloads and quality factors.
3.2. Experimental Set-Up
The experiments on the proposed and the existing models are conducted on a MATLAB online server with the help of the deep learning toolbox. Additionally, to increase the size of the dataset, BitMix data augmentation is also implemented. The performance of the proposed model is compared with that of HCNN, TREM, CNN, ELM, ECSM, and DRM.
3.3. Comparative Analysis
In this section, the comparison between the proposed and the existing CNN-based steganalysis models is presented.
Figure 2 shows the performance analysis of the proposed model. It is found that the best performance is achieved at epoch 8. Therefore, the proposed model converges efficiently, with good convergence speed.
Figures 3 and 4 present the confusion matrices obtained by the proposed model with and without NSGA-III. It has been found that the majority of the obtained results lie in the true classes (i.e., on the diagonals of the matrices). This leads to good values of performance metrics such as accuracy, f-measure, precision, recall, and area under the curve (AUC). In Figure 4, each diagonal entry gives the number of correctly classified samples of the corresponding class, which helps in evaluating the various performance metrics. Assume that the stego-embedded image is the positive class; then the normal image belongs to the negative class. Overall, the analysis indicates that the proposed model with NSGA-III achieves better performance than without NSGA-III.
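With the stego-embedded class as positive and the normal class as negative, the reported metrics follow directly from the 2 × 2 confusion matrix; a sketch with made-up counts (not the paper's results):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and f-measure from a binary confusion
    matrix (positive class: stego-embedded; negative class: normal)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Made-up counts for illustration only:
acc, prec, rec, f1 = metrics(tp=90, fp=10, fn=5, tn=95)
print(round(acc, 4), round(prec, 4), round(rec, 4), round(f1, 4))
```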
Figures 5 to 9 show the comparative analysis between the existing and the proposed models using notched boxplots. The box shows the interquartile range (IQR), and the red line shows the median of the computed performance. The notch indicates a confidence interval around the median, extending to the median ± 1.57 × IQR/√n, where n is the number of experimental runs. A smaller notch indicates a tighter confidence interval around the median, i.e., more consistent results. To evaluate the significance of an improvement or reduction, we take the average value achieved by the proposed model and the best average value among the existing steganalysis models, and evaluate their absolute difference. To express this average improvement or reduction as a percentage, the absolute difference is divided by the maximum possible value and multiplied by 100.
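The notch and the percentage-improvement computation described above can be sketched as follows; the numeric inputs are illustrative, and 1.57 is the standard constant used for boxplot notches:

```python
import math

def notch_halfwidth(iqr, n):
    """Half-width of a boxplot notch around the median: 1.57 * IQR / sqrt(n)."""
    return 1.57 * iqr / math.sqrt(n)

def percent_improvement(proposed_avg, best_existing_avg, max_possible=100.0):
    """Absolute difference of averages, expressed relative to the maximum
    possible metric value (100 for percentage-valued metrics)."""
    return abs(proposed_avg - best_existing_avg) / max_possible * 100.0

# Illustrative values only:
print(round(notch_halfwidth(iqr=3.0, n=25), 3))   # -> 0.942
print(round(percent_improvement(93.5, 92.2), 4))  # -> 1.3
```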
Figure 5 represents the comparison between the existing and proposed steganalysis models in terms of accuracy. It reveals that the proposed model achieves better accuracy than the existing steganalysis models, outperforming them by 1.2643%.
Figure 6 represents the precision analysis of the proposed model and the existing steganalysis models. It is observed that the proposed model achieves more consistent precision values than the existing models and outperforms them by 1.1438%.
Figure 7 demonstrates the recall analysis of the proposed steganalysis model. It is observed that the proposed model outperforms the competitive models in terms of recall, showing an average enhancement in recall values of 1.2832%.
Figure 8 represents the f-measure analysis of the proposed model and the existing steganalysis models. It is observed that the proposed model achieves more consistent f-measure values than the existing models and outperforms them by 1.0245%.
Figure 9 demonstrates the AUC analysis of the proposed steganalysis model. It is observed that the proposed model outperforms the competitive models in terms of AUC, showing an average enhancement in AUC values of 1.2913%.
4. Conclusion
From the extensive review, it has been found that deep learning-based models have been extensively utilized for steganalysis. However, these models suffer from overfitting and hyperparameter tuning issues. Therefore, an NSGA-III-based DCNN model was proposed for image steganalysis. NSGA-III was utilized to optimize the initial parameters of the DCNN model. The accuracy and f-measure were utilized to design a multiobjective fitness function. Extensive experiments were conducted on the STEGRT1 dataset, and the proposed model was compared with competitive steganalysis models. Performance analyses have shown that the proposed model outperforms the existing steganalysis models in terms of accuracy, f-measure, precision, recall, and AUC by 1.2643%, 1.0245%, 1.1438%, 1.2832%, and 1.2913%, respectively. The results show that the proposed model can capture even small changes in image features.
In the near future, one may extend the proposed work by designing a novel deep learning model to enhance the results further. Additionally, one may test the proposed model on other steganography datasets.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research work was funded by Institutional Fund Projects under grant no. IFPRC-027-135-2020. The authors gratefully acknowledge the technical and financial support from the Ministry of Education and King Abdulaziz University, Jeddah, Saudi Arabia.
- S. Tan, W. Wu, Z. Shao, Q. Li, B. Li, and J. Huang, “Calpa-net: channel-pruning-assisted deep residual network for steganalysis of digital images,” IEEE Transactions on Information Forensics and Security, vol. 16, pp. 131–146, 2021.
- S. Ozcan and A. F. Mustacoglu, “Transfer learning effects on image steganalysis with pre-trained deep residual neural network model,” in Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), pp. 2280–2287, Seattle, WA, USA, December 2018.
- Z. Zhou, S. Tan, J. Zeng, C. Han, and S. Hong, “Ensemble deep learning features for real-world image steganalysis,” KSII Transactions on Internet and Information Systems (TIIS), vol. 14, no. 11, pp. 4557–4572, 2020.
- R. Zhang, F. Zhu, J. Liu, and G. Liu, “Efficient feature learning and multi-size image steganalysis based on cnn,” 2018, https://arxiv.org/pdf/1807.11428.pdf.
- S. N. Gowda and C. Yuan, “Stegcolnet: steganalysis based on an ensemble colorspace approach,” 2020, https://arxiv.org/abs/2002.02413.
- M. Boroumand, M. Chen, and J. Fridrich, “Deep residual network for steganalysis of digital images,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 5, pp. 1181–1193, 2018.
- M. Yedroudj, F. Comby, and M. Chaumont, “Yedroudj-net: an efficient cnn for spatial steganalysis,” in Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2092–2096, IEEE, Calgary, AB, Canada, April 2018.
- J. Ye, J. Ni, and Y. Yi, “Deep learning hierarchical representations for image steganalysis,” IEEE Transactions on Information Forensics and Security, vol. 12, no. 11, pp. 2545–2557, 2017.
- S. Wu, S. Zhong, and Y. Liu, “Steganalysis via deep residual network,” in Proceedings of the 2016 IEEE 22nd International Conference on Parallel and Distributed Systems (ICPADS), pp. 1233–1236, Wuhan, China, December 2016.
- J. Yang, Y.-Q. Shi, E. K. Wong, and X. Kang, “Jpeg steganalysis based on densenet,” 2017, https://arxiv.org/pdf/1711.09335.pdf.
- B. Li, W. Wei, A. Ferreira, and S. Tan, “Rest-net: diverse activation modules and parallel subnets-based cnn for spatial image steganalysis,” IEEE Signal Processing Letters, vol. 25, no. 5, pp. 650–654, 2018.
- R. Zhang, F. Zhu, J. Liu, and G. Liu, “Depth-wise separable convolutions and multi-level pooling for an efficient spatial cnn-based steganalysis,” IEEE Transactions on Information Forensics and Security, vol. 15, pp. 1138–1150, 2019.
- A. Sharma and S. K. Muttoo, “Spatial image steganalysis based on resnext,” in Proceedings of the 2018 IEEE 18th International Conference on Communication Technology (ICCT), pp. 1213–1216, IEEE, Chengdu, China, June 2018.
- K. Liu, J. Yang, and X. Kang, “Ensemble of cnn and rich model for steganalysis,” in Proceedings of the 2017 International Conference on Systems, Signals and Image Processing (IWSSIP), pp. 1–5, IEEE, London, UK, March 2017.
- J. Zeng, S. Tan, B. Li, and J. Huang, “Pre-training via fitting deep neural network to rich-model features extraction procedure and its effect on deep learning for steganalysis,” Electronic Imaging, vol. 2017, no. 7, pp. 44–49, 2017.
- J. Yang, K. Liu, X. Kang, E. Wong, and Y. Shi, “Steganalysis based on awareness of selection-channel and deep learning, digital forensics and watermarking,” in Proceedings of the International Workshop on Digital Watermarking, pp. 263–272, Springer, Jeju Island, Korea, October 2017.
- L. Yang, M. Men, Y. Xue, J. Wen, and P. Zhong, “Transfer subspace learning based on structure preservation for jpeg image mismatched steganalysis,” Signal Processing: Image Communication, vol. 90, Article ID 116052, 2021.
- W. Ren, L. Zhai, J. Jia, L. Wang, and L. Zhang, “Learning selection channels for image steganalysis in spatial domain,” Neurocomputing, vol. 401, pp. 78–90, 2020.
- A. Cohen, A. Cohen, and N. Nissim, “Assaf: advanced and slim steganalysis detection framework for jpeg images based on deep convolutional denoising autoencoder and siamese networks,” Neural Networks: The Official Journal of the International Neural Network Society, vol. 131, pp. 64–77, 2020.
- G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, Honolulu, HI, USA, July 2017.
- Y. Yuan, H. Xu, B. Wang, and X. Yao, “A new dominance relation-based evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2015.
- A. Gupta, D. Singh, and M. Kaur, “An efficient image encryption using non-dominated sorting genetic algorithm-iii based 4-d chaotic maps,” Journal of Ambient Intelligence and Humanized Computing, vol. 11, no. 3, pp. 1309–1324, 2020.
- M. Kaur, D. Singh, and V. Kumar, “Color image encryption using minimax differential evolution-based 7d hyper-chaotic map,” Applied Physics B, vol. 126, no. 9, pp. 1–19, 2020.
- M. Kaur, D. Singh, V. Kumar, and K. Sun, “Color image dehazing using gradient channel prior and guided l0 filter,” Information Sciences, vol. 521, pp. 326–342, 2020.
- M. Rezaei, M. Riahi, and H. Hayati, “Stegrt1: a dataset for evaluating steganalysis systems in real-world scenarios,” in Proceedings of the 2020 28th Iranian Conference on Electrical Engineering (ICEE), pp. 1–5, IEEE, Tabriz, Iran, August 2020.
- M. Yedroudj, M. Chaumont, and F. Comby, “How to augment a small learning set for improving the performances of a cnn-based steganalyzer?” Electronic Imaging, vol. 2018, no. 7, pp. 317–321, 2018.
Copyright © 2021 Ahmed I. Iskanderani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.