Mathematical Problems in Engineering

Volume 2017, Article ID 3675459, 12 pages

https://doi.org/10.1155/2017/3675459

## Batch Image Encryption Using Generated Deep Features Based on Stacked Autoencoder Network

^{1}School of Computer and Information Science, Southwest University, Chongqing, China
^{2}Network Centre, Chongqing University of Education, Chongqing, China

Correspondence should be addressed to Fei Hu; moc.361@1zte

Received 8 November 2016; Accepted 30 January 2017; Published 28 February 2017

Academic Editor: Maria L. Gandarias

Copyright © 2017 Fei Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Chaos-based algorithms have been widely adopted to encrypt images, but previous chaos-based encryption schemes are not secure enough for batch image encryption because the images are usually encrypted with a single sequence: once one encrypted image is cracked, all the others become vulnerable. In this paper, we propose a batch image encryption scheme in which a stacked autoencoder (SAE) network is introduced to generate two chaotic matrices; one matrix is used to produce a total shuffling matrix that shuffles the pixel positions of each plain image, and the other produces a series of independent sequences, each of which confuses the relationship between one permutated image and its encrypted image. The scheme is efficient because the parallel computing of the SAE leads to a significant reduction in run-time complexity; in addition, the hybrid application of shuffling and confusing enhances the encryption effect. To evaluate its efficiency, we compared our scheme with the prevalent logistic map and achieved a shorter running time. The experimental results and analysis show that our scheme provides a good encryption effect and is able to resist brute-force, statistical, and differential attacks.

#### 1. Introduction

Although communication systems have developed greatly, insecure channels such as the Internet remain prevalent, and a growing number of digital images are transmitted through them. As long as images are transmitted and stored over public networks, they are easy to intercept and tamper with by unauthorized parties. In particular, images containing confidential information need to be encrypted before being exchanged across the Internet. However, traditional encryption algorithms such as 3DES, AES, and IDEA are typically designed for text and are not well suited to image encryption because of intrinsic features of images such as high correlation between pixels and redundancy [1]. Many other image encryption schemes have been suggested, based, for example, on DNA cryptography, mathematical concepts, compression methodology, and transform domains, but most of them have security vulnerabilities [2]. Over the past two decades, chaos-based cryptography has been studied by more and more researchers because of the fundamental characteristics of chaos: ergodicity, pseudostochasticity, mixing, high sensitivity to initial conditions and parameters, and so forth [3–6]. In 1998, Fridrich first proposed the permutation-diffusion method, which encrypts images using chaos [7]. Based on his work, many improvements have been made for a wide variety of image encryption tasks, such as bit permutation [8–12], image pixel confusion [13, 14], extensive diffusion operations [15, 16], high-quality key-stream generation [17, 18], and the use of plain-image features [19–21].
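The chaotic properties mentioned above can be illustrated with the logistic map, the baseline chaotic system the paper later compares against. The sketch below (an illustrative example, not part of the proposed scheme; parameter values are our own choices) shows how a chaotic sequence is generated and how sensitive the orbit is to a tiny change in the initial condition:

```python
import numpy as np

def logistic_sequence(x0, mu, n, burn_in=100):
    """Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k).

    For mu close to 4 and x0 in (0, 1) the orbit is chaotic:
    ergodic, pseudorandom-looking, and highly sensitive to x0.
    """
    x = x0
    for _ in range(burn_in):          # discard transient iterations
        x = mu * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

# Sensitivity to initial conditions: two nearby seeds diverge quickly.
a = logistic_sequence(0.3456789, 3.99, 1000)
b = logistic_sequence(0.3456789 + 1e-10, 3.99, 1000)
print(np.max(np.abs(a - b)))  # large despite a 1e-10 perturbation
```

This sensitivity is what makes the initial condition and control parameter usable as secret keys; it is also why a scheme that reuses one such sequence for a whole batch of images is fragile, which motivates the per-image sequences proposed in this paper.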

Chaotic cryptography with artificial neural networks (ANNs) has been extensively developed because of the following characteristics of ANNs: nonlinear computation, associative memory, large-scale parallel processing, and high fault tolerance. These characteristics contribute much to enhancing the security of chaos-based schemes. For example, in [22], a discrete Hopfield neural network was used to build a nonlinear sequential cipher generator that, given a small number of stochastic parameters as cipher codes, generates a pseudostochastic sequence to confuse the plain image. ANN models were also used for image encryption in [23–25]. Among the various ANN models, the SAE provides a baseline method for unsupervised feature learning and, in addition, extracts a large number of feature parameters simultaneously, which is particularly helpful for batch image encryption.

With previous chaos-based encryption schemes, a batch of images is usually encrypted with a single chaotic sequence. In this paper, we propose a batch image encryption scheme that encrypts each image with an independent sequence. First, an SAE-based five-layer deep neural network is constructed to produce two chaotic matrices. The two matrices are then used together in a mixed encryption process: one generates a total shuffling matrix and the other generates multiple chaotic sequences. Finally, the shuffling matrix shuffles the pixel positions of each plain image, and each chaotic sequence confuses the relationship between the corresponding shuffled image and its encrypted image. Several security evaluations show that the proposed scheme fully meets the requirements of image encryption. Compared with traditional schemes such as the logistic map, the proposed scheme is more powerful: it can generate a large number of chaotic sequences in parallel and thus encrypt many images simultaneously. Our scheme achieves better run-time complexity and a significant effect on batch image encryption.
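The shuffle-then-confuse pipeline described above can be sketched as follows. This is a minimal illustration of the general permutation-diffusion structure only: it drives both stages with a logistic map rather than the paper's SAE-generated matrices, and the function names and key values are our own. A permutation is obtained by sorting a chaotic sequence (the "total shuffling" stage), and the shuffled pixels are then XORed with a quantized chaotic keystream (the "confusion" stage); in a batch setting, each image would get its own `stream_key`:

```python
import numpy as np

def chaotic_floats(x0, mu, n, burn_in=100):
    """Logistic-map orbit used as a stand-in for one row of a chaotic matrix."""
    x = x0
    for _ in range(burn_in):
        x = mu * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = x
    return out

def encrypt(img, perm_key, stream_key, mu=3.99):
    flat = img.ravel().astype(np.uint8)
    n = flat.size
    # Shuffling stage: sorting a chaotic sequence yields a pixel permutation.
    perm = np.argsort(chaotic_floats(perm_key, mu, n))
    # Confusion stage: XOR the shuffled pixels with a chaotic byte keystream.
    keystream = (chaotic_floats(stream_key, mu, n) * 256).astype(np.uint8)
    return np.bitwise_xor(flat[perm], keystream).reshape(img.shape)

def decrypt(cipher, perm_key, stream_key, mu=3.99):
    flat = cipher.ravel()
    n = flat.size
    perm = np.argsort(chaotic_floats(perm_key, mu, n))
    keystream = (chaotic_floats(stream_key, mu, n) * 256).astype(np.uint8)
    shuffled = np.bitwise_xor(flat, keystream)   # undo confusion
    plain = np.empty_like(shuffled)
    plain[perm] = shuffled                       # undo shuffling
    return plain.reshape(cipher.shape)
```

Because decryption only needs the keys, a receiver recomputes the same permutation and keystream and applies the two stages in reverse order.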

#### 2. Stacked Autoencoder

An autoencoder (AE) is a single-hidden-layer, unsupervised learning neural network; it can be viewed as two identical Restricted Boltzmann Machine models (RBMs) stacked together [26]. Several AEs compose the SAE network, which is like a multilayer AE. For example, in the SAE network in Figure 1, the hidden layer of each AE serves as the input layer of the next AE; these joined layers form the encoding section of the SAE, and a reversed copy of the encoding section forms the decoding section; the remaining two layers are the input and output layers, respectively. In Figure 1, the hollow circles are neurons and the solid blue circles are biases. Each decoding layer mirrors its corresponding encoding layer, so the two have the same number of neurons. The encoding and decoding sections together make up the SAE network, which thus performs both encoding and decoding. Greedy training methods such as back propagation (BP) can be used directly to train a single-layer AE to learn its weight parameters; however, training the whole SAE network this way is hard because the multilayer network consumes much more memory and computation time. To alleviate this problem, a two-step training method is widely adopted for SAE training: pretraining and fine-tuning.
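The pretraining step above can be sketched in code. The following is a minimal, self-contained illustration of greedy layer-wise pretraining, not the paper's actual network: layer sizes, learning rate, and the tied-weight design are our own simplifications, and the fine-tuning pass over the full stack is omitted. Each AE is trained to reconstruct its input, and its hidden codes become the training data for the next AE:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

class AE:
    """Single-hidden-layer autoencoder with tied weights (illustrative)."""
    def __init__(self, n_in, n_hid):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hid))
        self.b = np.zeros(n_hid)   # encoder bias
        self.c = np.zeros(n_in)    # decoder bias

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.c)

    def train_step(self, x, lr=0.1):
        """One gradient-descent step on squared reconstruction error."""
        h = self.encode(x)
        y = self.decode(h)
        dy = (y - x) * y * (1 - y)        # delta at the output layer
        dh = (dy @ self.W) * h * (1 - h)  # delta at the hidden layer
        self.W -= lr * (x.T @ dh + dy.T @ h) / len(x)  # tied-weight gradient
        self.b -= lr * dh.mean(axis=0)
        self.c -= lr * dy.mean(axis=0)
        return float(((y - x) ** 2).mean())

def pretrain_sae(data, layer_sizes, epochs=200):
    """Greedy layer-wise pretraining: each AE reconstructs the codes
    produced by the previously trained AE."""
    aes, x = [], data
    for n_hid in layer_sizes:
        ae = AE(x.shape[1], n_hid)
        for _ in range(epochs):
            ae.train_step(x)
        aes.append(ae)
        x = ae.encode(x)   # hidden codes feed the next layer
    return aes
```

After pretraining, the per-layer weights would initialize the full encoder-decoder stack, which fine-tuning (e.g., BP over the whole network) then adjusts jointly; training one shallow AE at a time is what keeps the memory and computation cost manageable.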