Computational Intelligence and Neuroscience

Volume 2017, Article ID 3105053, 8 pages

https://doi.org/10.1155/2017/3105053

## A Novel Active Semisupervised Convolutional Neural Network Algorithm for SAR Image Recognition

^{1}Electronic Information Engineering, Beihang University, Beijing 100191, China
^{2}Space Mechatronic Systems Technology Laboratory, Department of Design, Manufacture and Engineering Management, University of Strathclyde, Glasgow G1 1XJ, UK
^{3}School of Electronics, Electrical Engineering and Computer Science, Queen’s University, Belfast BT7 1NN, UK

Correspondence should be addressed to Jun Wang; wangj203@buaa.edu.cn

Received 5 May 2017; Revised 9 August 2017; Accepted 23 August 2017; Published 1 October 2017

Academic Editor: George A. Papakostas

Copyright © 2017 Fei Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Convolutional neural networks (CNNs) can achieve good performance in synthetic aperture radar (SAR) object recognition. However, a CNN requires a large number of labelled samples in its training phase, and its performance can decrease dramatically when labelled samples are insufficient. To solve this problem, in this paper, we present a novel active semisupervised CNN algorithm. First, active learning is used to query the most informative and reliable of the unlabelled samples to extend the initial training dataset. Next, a semisupervised method is developed by adding a new regularization term to the loss function of the CNN, so that the class probability information contained in the unlabelled samples can be maximally utilized. Experimental results on the MSTAR database demonstrate the effectiveness of the proposed algorithm despite the scarcity of initial labelled samples.

#### 1. Introduction

Synthetic aperture radar (SAR) has wide applications in both military and civilian fields due to its merits, such as strong penetrating ability and adaptation to severe weather. SAR automatic target recognition (SAR-ATR) technology aims at automatically recognizing targets from SAR images [1]. With the increasing amount of data acquired by SAR imaging systems, SAR-ATR has become a research hotspot.

Traditional machine learning methods for SAR-ATR include Support Vector Machines (SVM) [2], local texture features [3, 4], dictionary learning [5, 6], and sparse representation [7]. These methods have produced some promising results, but they rely heavily on hand-crafted feature extraction [8]. Because of the imaging mechanism, clutter and speckle noise exist in SAR images, which increases the difficulty of feature extraction even when experts are involved.

In recent years, with the development of deep learning techniques, the CNN has received great attention in object recognition [9–11]. It can automatically extract target features without expert intervention. Compared with traditional machine learning methods, the CNN is more effective and robust and has been successfully applied to SAR image recognition. In [12], a CNN method was proposed for improving SAR image classification accuracy. The experimental results showed that the CNN method outperforms the Gabor feature extraction-based SVM method, which demonstrated the great potential of the CNN for SAR image recognition. A convolutional network was designed in [13] to automatically extract features for SAR target recognition. Using the learned convolutional features, an accuracy of 84.7% was achieved on the 10 types of targets in the MSTAR dataset. Zhou et al. [14] studied the application of Deep Convolutional Neural Networks (DCNN) to polarimetric SAR image classification, in which the hierarchical spatial features of images could be automatically learned by the DCNN and the classification accuracy was improved significantly.

As can be seen, the CNN has made a great breakthrough in SAR image recognition. However, sample labelling for SAR images is still time-consuming, and the accuracy of image recognition decreases quickly when labelled samples are insufficient. Active learning (AL) addresses this by adding the most informative and reliable unlabelled samples to the labelled training set, and is therefore a promising way to solve the above-mentioned problem. Wang et al. proposed an AL method for SAR image classification based on an SVM classifier [15]. The most uncertain samples were chosen according to a confidence value, and the experimental results showed that the AL-based method can effectively improve classification accuracy when labelled samples are insufficient. Babaee et al. presented an active learning method employing a low-rank classifier as the training model. This method selects samples whose labels are predicted incorrectly but about which the classifier is highly certain, namely, the first certain wrong labelled (FCWL) selection criterion [16]. Samat et al. reported an active extreme learning machine (AELM) method for polarimetric SAR image classification, in which the class supports based on the posterior probability are utilized as the selection criterion. According to the experimental results, the proposed method was faster than the existing techniques in both the learning and classification phases [17].

Active learning effectively adds the most informative and reliable unlabelled samples to the training set. The remaining samples may be less informative, and applying active learning to them may incur too much computational complexity; however, the information they contain can still be used to improve the generalization ability of the classification algorithm. Semisupervised learning (SSL) is an effective way to utilize the information contained in the unlabelled samples. Commonly used SSL methods include semisupervised SVM [18], label propagation [19], and semisupervised clustering. Recently, SSL has been successfully applied to SAR image recognition. Duan et al. introduced a semisupervised classification method incorporating the likelihood space approach in the training and testing processes, so that the unlabelled samples can be effectively used to improve classification performance [20]. To overcome the complexity of the data and the difficulty of creating a fine-level ground truth, a semisupervised method for ice-water classification based on self-training was presented in [21]. By integrating the spatial context model, region merging, and the self-training technique, the proposed algorithm is capable of accurately distinguishing ice and open water in SAR images using a very small number of labelled samples. In [22], the unlabelled samples were analysed by an unsupervised clustering algorithm using all the available information, and each sample was then classified by a supervised method using the information available at the current phase of clustering. The experimental results on SAR images showed that the proposed semisupervised method leads to promising classification results.

Recently, inspired by the superiority of CNN, AL, and SSL, the combination of the three methods has become a research tendency. For example, deep active learning methods and semisupervised CNNs have been constructed [23–25], and the experimental results demonstrated the effectiveness of these methods for hyperspectral or optical image recognition. However, SAR images pose a number of difficulties, for example, difficult feature extraction, time-consuming sample labelling, and insufficient labelled samples, and these techniques are rarely applied to SAR image recognition. In this paper, a novel active semisupervised CNN algorithm for SAR image recognition is proposed. First, the most informative and reliable samples selected by the active learning method are labelled using an information entropy criterion. We believe that the information entropy can effectively measure the reliability of the unlabelled samples, and it can be calculated based on the output of the CNN framework. Then, the class probability information of the remaining unlabelled samples is obtained from the output of the softmax layer of the CNN. Afterwards, the class probability information is designed as a regularization term and added to the loss function of the CNN for retraining. Since the class probability information can effectively control the impact of the unlabelled samples in the training process, the unlabelled samples are well utilized at this stage.

The rest of this paper is arranged as follows. In Section 2, the convolutional neural network is briefly introduced. Section 3 describes the proposed method in detail. Then experiments are performed in Section 4. Finally, we summarize this paper in Section 5.

#### 2. Convolutional Neural Network

As a multilayer neural network structure, the CNN is mainly composed of an input layer, a convolution layer, a pooling layer, and an output layer, where both the convolution and pooling layers are hidden. The input layer receives the pixel values of the original image. The convolution layer extracts the image features by utilizing convolution kernels. The pooling layer uses local image correlation to reduce the amount of data to be processed. The output layer maps the extracted features to the corresponding labels. The training of the CNN involves two processes: forward propagation and backward propagation.
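To make the roles of the hidden layers concrete, the following is a minimal numpy sketch of the two hidden-layer operations described above: a single-channel "valid" convolution (cross-correlation, as is conventional in CNNs) and non-overlapping max pooling. The function names and shapes are illustrative, not from the paper.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' convolution (cross-correlation, the CNN
    convention): slide the kernel over the image and sum the
    elementwise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the largest activation in each
    size x size block, shrinking the map by a factor of `size` per axis."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    fm = feature_map[:h, :w]
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

A 4x4 input with a 3x3 kernel yields a 2x2 feature map, and 2x2 pooling halves each spatial dimension, illustrating how pooling reduces the amount of data passed to later layers.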

##### 2.1. Forward Propagation

The mapping of an image through the CNN is a forward propagation process, where the output of the previous layer is taken as the input of the current layer. To go beyond a purely linear model, a nonlinear activation function is applied to the neurons of each layer in the mapping process. Since the first layer only receives the pixel values of the image, it has no activation function; from the second layer of the CNN onwards, nonlinear activation functions are employed. The output of each layer can be expressed as follows:

$$a^{l} = \sigma\left(z^{l}\right) = \sigma\left(W^{l} a^{l-1} + b^{l}\right), \quad l = 2, \ldots, L, \tag{1}$$

where $l$ denotes the layer index. If $l = 1$, $a^{1}$ is the pixel value matrix of the image; if $l \geq 2$, $a^{l}$ represents the feature map matrix extracted from the $l$th layer. $W^{l}$, $b^{l}$, and $z^{l}$ represent the weight matrix, the bias matrix, and the weighted input of the $l$th layer, respectively; $\sigma(\cdot)$ is the nonlinear activation function, for which a rectified linear unit (ReLU) is selected in this paper. For $l = L$, the $L$th layer is the output layer, and $a^{L}$ denotes the final output vector.
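A minimal sketch of this forward pass, using the layer recursion $z^{l} = W^{l} a^{l-1} + b^{l}$, $a^{l} = \sigma(z^{l})$ with ReLU activations (fully connected layers are used here for brevity; the paper's network also contains convolution and pooling layers):

```python
import numpy as np

def relu(z):
    """Rectified linear unit, the activation chosen in the paper."""
    return np.maximum(0.0, z)

def forward(x, weights, biases):
    """Forward propagation: a^1 is the raw input; each later layer
    computes z^l = W^l a^{l-1} + b^l and a^l = relu(z^l).
    Returns the list of activations a^1, ..., a^L."""
    a = x
    activations = [a]
    for W, b in zip(weights, biases):
        z = W @ a + b
        a = relu(z)
        activations.append(a)
    return activations
```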

##### 2.2. Backpropagation

The standard backpropagation (BP) algorithm is used to update the parameters $W$ and $b$ of the CNN [10]. BP is a supervised learning method which first constructs a cost function based on the actual and expected outputs; a gradient descent (GD) method is then used to update $W$ and $b$ along the gradient descent direction of the cost function. In detail, suppose $C$ represents the cost function of the CNN structure. The error vector of the output layer can be expressed as follows:

$$\delta^{L} = \nabla_{a} C \odot \sigma'\left(z^{L}\right). \tag{2}$$

In the process of backward propagation, the error vector of the $l$th layer can be derived from the error vector of the $(l+1)$th layer. Thus, the error vector for each layer can be computed by the chain rule as follows:

$$\delta^{l} = \left(\left(W^{l+1}\right)^{T} \delta^{l+1}\right) \odot \sigma'\left(z^{l}\right), \tag{3}$$

where the symbol $\odot$ is the Hadamard product (or Schur product), which denotes the element-wise product of two vectors. The gradients of $C$ with respect to $b$ and $W$ are denoted by $\partial C / \partial b^{l}$ and $\partial C / \partial W^{l}$, respectively, and can be calculated using (1) and (3):

$$\frac{\partial C}{\partial b^{l}} = \delta^{l}, \qquad \frac{\partial C}{\partial W^{l}} = \delta^{l} \left(a^{l-1}\right)^{T}. \tag{4}$$

The change values of $W$ and $b$ can be calculated by

$$\Delta W^{l} = -\eta \frac{\partial C}{\partial W^{l}}, \qquad \Delta b^{l} = -\eta \frac{\partial C}{\partial b^{l}}, \tag{5}$$

where $\eta$ represents the learning rate.
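The update rules above can be sketched for a two-layer fully connected network with a quadratic cost, $C = \frac{1}{2}\|a^{L} - y\|^{2}$ (so $\nabla_{a} C = a^{L} - y$). This is an illustrative toy, not the paper's actual network or loss:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_prime(z):
    """Derivative of ReLU, used in the error-vector recursions."""
    return (z > 0).astype(float)

def backprop_step(x, y, W1, b1, W2, b2, eta=0.1):
    """One gradient-descent update for a two-layer network with a
    quadratic cost C = 0.5 * ||a2 - y||^2 and ReLU activations."""
    # Forward pass: z^l = W^l a^{l-1} + b^l, a^l = relu(z^l)
    z1 = W1 @ x + b1; a1 = relu(z1)
    z2 = W2 @ a1 + b2; a2 = relu(z2)
    # Output-layer error: delta^L = (a^L - y) * relu'(z^L)
    d2 = (a2 - y) * relu_prime(z2)
    # Chain rule: delta^l = (W^{l+1}.T @ delta^{l+1}) * relu'(z^l)
    d1 = (W2.T @ d2) * relu_prime(z1)
    # Gradients dC/dW^l = delta^l (a^{l-1})^T, dC/db^l = delta^l,
    # applied with learning rate eta along the descent direction
    W2 -= eta * np.outer(d2, a1); b2 -= eta * d2
    W1 -= eta * np.outer(d1, x);  b1 -= eta * d1
    return W1, b1, W2, b2
```

Iterating `backprop_step` on a fixed sample drives the cost down, which is the behaviour equations (2)-(5) describe.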

##### 2.3. The Output Layer

If the number of neurons in the output layer is $N$, the CNN eventually divides the input images into $N$ categories. In the forward propagation process, the input of the output layer is $z^{L}$. Since the output of the softmax activation function provides the probability of each class to which a sample belongs, unlike the middle layers of the CNN, we use the softmax activation function instead of the ReLU function in the output layer, which is key in our proposed method. The output is normalized by the softmax function, which can be expressed as

$$y_{j} = \frac{e^{z_{j}^{L}}}{\sum_{k=1}^{N} e^{z_{k}^{L}}}, \quad j = 1, \ldots, N, \tag{6}$$

where $y_{j}$ is the output of the $j$th neuron in the output layer and $z_{j}^{L}$ is the weighted input of the $j$th neuron. It is obvious that $\sum_{j=1}^{N} y_{j} = 1$, and if one item increases, all the other items decrease accordingly.
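The softmax normalization of (6) can be sketched as follows; the max-subtraction is a standard numerical-stability trick that leaves the probabilities unchanged, since shifting all inputs by a constant cancels in the ratio:

```python
import numpy as np

def softmax(z):
    """Softmax over the output-layer inputs z^L: exponentiate and
    normalise so the outputs form a probability vector summing to 1.
    Subtracting the max first avoids overflow for large inputs."""
    e = np.exp(z - np.max(z))
    return e / np.sum(e)
```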

#### 3. The Proposed Method

First, we define the symbols to be used in this section. The training dataset is composed of two parts, $X = X_{L} \cup X_{U}$, where $X_{L}$ represents the set of labelled samples and $X_{U}$ represents the set of unlabelled samples; $N_{T}$ is the total number of training samples. The training process of the proposed method is composed of two stages. As shown in Figure 1, first, the most informative and reliable samples selected by the active learning method are labelled based on the information entropy. Then the class probability information extracted from the remaining samples is designed as a regularization term, which is added to the loss function of the CNN for retraining. When the training process has finished, the unlabelled samples are fed into the CNN and obtain the labels, which can be calculated from the softmax layer of the CNN.
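As a sketch of the first stage, an information-entropy criterion over the CNN's softmax outputs can rank unlabelled samples by how uncertain the network is about them. The selection rule below (take the $k$ highest-entropy samples) is an assumption for illustration; the paper's exact querying criterion and thresholds may differ:

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of a softmax output vector (or a batch of them);
    higher entropy means the network is less certain, i.e. the sample
    is more informative for labelling."""
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return -np.sum(p * np.log(p), axis=-1)

def select_informative(softmax_outputs, k):
    """Indices of the k unlabelled samples with the highest predictive
    entropy (an assumed active-learning selection rule)."""
    h = entropy(np.asarray(softmax_outputs))
    return np.argsort(-h)[:k]
```

For example, a near-uniform softmax output ranks above a confident one, so the active learner queries the samples the current CNN finds hardest to classify.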