Mathematical Problems in Engineering

Research Article | Open Access

Volume 2020 | Article ID 6087680 | 9 pages

Remote Aircraft Target Recognition Method Based on Superpixel Segmentation and Image Reconstruction

Academic Editor: Vittorio Bianco
Received: 10 Sep 2019
Accepted: 24 Jan 2020
Published: 14 Feb 2020


Satellite images often contain complex backgrounds and shadow areas. These factors can break up target segmentation and lower recognition accuracy. To solve these problems, we propose an aircraft recognition method based on superpixel segmentation and image reconstruction. First, the orientation of an aircraft is estimated using histograms of oriented gradients. Then, an improved Simple Linear Iterative Clustering (SLIC) superpixel segmentation algorithm is presented: by comparing texture features instead of the colour feature space, pixels with the same features are clustered. Finally, the superpixels are reconstructed using target template images and the estimated orientation, and the template with the lowest matching error is taken as the recognized target. Test results show that the algorithm is robust to noise and recognizes more aircraft. In particular, when satellite images contain complex backgrounds and shadow areas, our method achieves better recognition accuracy than other methods and can satisfy the demands of satellite image aircraft recognition.

1. Introduction

Satellite images have the advantages of superior real-time performance, rich information, and objective coverage. However, with the large amount of satellite data, ground processing still mainly relies on manual interpretation, thereby resulting in time-consuming information acquisition and poor real-time performance. Meanwhile, target recognition is a hot topic in machine vision. The extraction of target features and automatic classification of the target are considered the key steps to improve recognition efficiency. Therefore, by studying target recognition technology on the basis of remote sensing images, we can shorten the information acquisition period and improve real-time performance, which are of considerable significance. However, given the difference between the imaging object and the imaging environment, target recognition in remote sensing images is more difficult than conventional image recognition. Remote sensing images usually have complex backgrounds, and completely segmenting the target is not easy, thereby resulting in recognition errors. In addition, when the solar altitude is low, satellite images become riddled with shadow areas, which increase the difficulty of target segmentation. These problems pose challenges for remote sensing image target recognition.

Traditional remote sensing image target recognition technology usually extracts the global features of the target and then obtains the image invariant features for target classification and recognition. Zeng et al. [1] used Zernike moments to extract the contour features of an aircraft target and then applied a Bayes classifier and the K-nearest neighbour algorithm to classify the target; this method can identify six types of aircraft targets. Fu et al. [2] improved the Zernike moment algorithm to extract the characteristics of aircraft targets; this modified version incorporates the support vector machine classification method to identify aircraft targets. Rong et al. [3] first eliminated the noise interference in an image by a contour tracking algorithm, then calculated the global invariant features of the image and used a neural network classifier to perform target recognition. In summary, using moment invariants to extract the global invariant features of aircraft targets offers good robustness and acceptable space complexity but still has disadvantages. First, recognition requires extracting the complete target contour, yet remote sensing images often have complex backgrounds and low target contrast, which affect the accuracy of target recognition. Second, these methods fail to effectively use the distinctive shape characteristics of aircraft targets, so the robustness of recognition needs to be further improved.

Certain methods exploit the shape feature of aircraft fuselage symmetry [4]. The aircraft fuselage direction is estimated before the global invariants of the target are extracted, and the shape and contour of the target are then described. This approach improves the tolerance to target segmentation errors to a certain extent and can identify the target even when some contour features are missing. However, given the limitations of satellite imaging conditions, the target is often affected by shadow regions, and accurately identifying the target type remains difficult. Deep learning has also been applied to aircraft target recognition in remote sensing images, such as the novel landmark-based aircraft recognition method in [5], which uses a variant of a convolutional neural network. Its detection accuracy is high, but the databases of aircraft remote sensing images needed for deep learning are still incomplete.

Satellite images have complex backgrounds and shadow areas. To solve these problems, we propose an aircraft recognition method based on superpixel segmentation and reconstruction for remote sensing image aircraft target recognition. In particular, we estimate the orientation of the aircraft by using histograms of oriented gradients, because the target to be reconstructed must be in the same direction as the template. We also provide an improved SLIC superpixel segmentation algorithm: to exploit the difference between the texture features of the target and the shadow region, the CIELAB colour space measurement is replaced by a texture similarity measure when separating the target.

The rest of this paper is organized as follows. Section 2 presents the target direction estimation method, and Section 3 describes the improved SLIC superpixel segmentation algorithm. Section 4 explains the target segmentation and recognition processes. Section 5 gives the experimental results, and Section 6 concludes the study and outlines directions for future work.

2. Target Direction Estimation

To ensure target reconstruction, the direction of the target should be consistent with the orientation of the template. This requires estimating the aircraft direction in the remote sensing image. Given the typical symmetry of the aircraft fuselage, we can draw a histogram of oriented gradients (HOG) from this feature [6, 7]. The dominant direction of the gradient vector distribution is the direction of the aircraft fuselage. The steps are as follows.

(1) Figure 1(a) is selected. To reduce the influence of low target contrast and brightness changes in remote sensing images, the Gamma algorithm is first used for normalization and correction to adjust the contrast of the image; noise interference is also suppressed. The result is shown in Figure 1(b).

(2) The gradients in the abscissa and ordinate directions of each pixel in the image are calculated:

G_x(x, y) = I(x + 1, y) − I(x − 1, y),
G_y(x, y) = I(x, y + 1) − I(x, y − 1),

where G_x(x, y), G_y(x, y), and I(x, y) express the input image horizontal gradient, vertical gradient, and pixel value, respectively. The gradient magnitude G(x, y) and gradient direction α(x, y) at the pixel are

G(x, y) = [G_x(x, y)^2 + G_y(x, y)^2]^(1/2),
α(x, y) = arctan(G_y(x, y) / G_x(x, y)).

By calculating the gradient of the pixel points, we can determine the outline and texture of the image, as shown in Figure 1(c).

(3) A gradient direction histogram is drawn from the gradient vectors of the image. Similar to an image grey histogram, the X-axis of the gradient direction histogram indicates the distribution of gradient vector directions from 0° to 180° in the image, and the Y-axis represents the number of gradient vectors in each direction. For convenience, every 20° of gradient direction is grouped into one bin; the gradient direction histogram, as shown in Figure 1(d), is therefore composed of 9 dimensions.

Given that extracted aircraft target gradient vector features are mainly composed of fuselage and wing features, the characteristic vector of the fuselage is in the same direction as the plane. Thus, the orientation of the plane is generally the highest proportion of the direction in the gradient direction histogram, and 90° is the direction of the aircraft fuselage in Figure 1(d).
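The direction-estimation steps above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' code: the function and parameter names are our own, central differences stand in for the paper's gradient operator, and the histogram is weighted by gradient magnitude so that strong fuselage edges dominate.

```python
import numpy as np

def estimate_fuselage_direction(img, n_bins=9):
    """Estimate the dominant gradient direction of a grayscale image.

    Sketch of the orientation step: central-difference gradients, a
    9-bin magnitude-weighted histogram over 0-180 degrees (20 degrees
    per bin), and the centre of the most populated bin as the estimate.
    """
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]        # vertical gradient
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # fold into [0, 180)
    hist, edges = np.histogram(ang, bins=n_bins, range=(0.0, 180.0),
                               weights=mag)
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1]), hist
```

For a real remote sensing image, Gamma correction would be applied before this step, as described in step (1) above.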

3. Improved SLIC Superpixel Segmentation Algorithm

The segmentation quality determines target recognition accuracy. The ideal segmentation method is pixel-level segmentation [8, 9], in which the characteristics of each pixel are examined to determine whether it belongs to the target. However, the amount of data in remote sensing images is large, and such point-by-point pixel segmentation makes recognition time-consuming. Therefore, we introduce superpixel segmentation. The SLIC algorithm [10–12] is a fast clustering algorithm based on local comparison. First, by selecting seed points and measuring the similarity between each seed point and the surrounding pixels, the image is segmented into superpixels. Each superpixel is then treated as a node during segmentation, thereby effectively reducing image complexity and easing image processing.

The SLIC algorithm uses the CIELAB colour space to measure the colour similarity between seed points and pixels when measuring the similarity between clustered pixels. Given the reduced number of spectral bands in remote sensing images, which leads to insufficient colour features, the CIELAB colour space measurement is not applicable. In remote sensing images, aircraft targets have abundant texture features, whereas the shadow regions, which affect target recognition, lack texture features. Therefore, this work improves the similarity measurement in the SLIC algorithm: a texture feature measure [13, 14] replaces the colour feature of the original algorithm. The specific steps are as follows.

3.1. Seed Point Initialization

For an image with N pixels, the number of superpixel seed points is expected to be K. Then, the area of each superpixel is N/K, and the distance between adjacent seed points is S = (N/K)^(1/2). The selected seed points are evenly distributed over the image. To avoid selecting a noise pixel as a seed point or having a seed point appear at the edge of the image, we move each seed point to the position with the smallest gradient value in its 3 × 3 window and then give each seed point an independent label.
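The seed initialization above can be sketched as follows. This is an illustrative implementation under our own naming, assuming a 2-D grayscale NumPy array: seeds are placed on a regular grid with spacing S = sqrt(N/K), then nudged to the lowest-gradient pixel in a 3 × 3 neighbourhood.

```python
import numpy as np

def init_seeds(img, k):
    """Place roughly K seed points on a grid of spacing S = sqrt(N/K),
    then move each to the lowest-gradient pixel in its 3x3 window so
    seeds avoid edges and noisy pixels."""
    h, w = img.shape
    s = int(np.sqrt(h * w / k))                  # grid spacing S
    gy, gx = np.gradient(img.astype(np.float64))
    grad = np.hypot(gx, gy)                      # gradient magnitude
    seeds = []
    for y in range(s // 2, h, s):
        for x in range(s // 2, w, s):
            # search the 3x3 window (clipped at the image borders)
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            win = grad[y0:y1, x0:x1]
            dy, dx = np.unravel_index(np.argmin(win), win.shape)
            seeds.append((y0 + dy, x0 + dx))
    return seeds, s
```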

3.2. Similarity Measure

Each pixel in the image is compared with its neighbouring seed points to enable acquisition of their texture and positional spatial similarity. The nearest seed point label of the total similarity is assigned to the pixel point. The means of determining the distance between two texture feature vectors is an important issue in texture research. Commonly used texture distance metrics include Euclidean distance, Mahalanobis distance, and other nonlinear metrics, but these methods have low retrieval efficiency and poor real-time performance. Texture is an intrinsic feature of object surfaces and can be considered a pattern caused by changes in grey or colour in certain forms. Therefore, the texture features in the image can be measured by histogram.

The texture similarity measure between seed points and pixels is as follows:

d_t = [Σ_g (H_i(g) − H_j(g))^2]^(1/2),

where d_t is the texture similarity measurement, H_i is the texture histogram of the seed pixel, H_j is the texture histogram of the pixels adjacent to the seed point within the window, and g is the texture value of a pixel (the histogram bin). The smaller d_t is, the higher the similarity between pixels and seed points.

The location space similarity metric is as follows:

d_xy = [(x_j − x_i)^2 + (y_j − y_i)^2]^(1/2),

where d_xy is the similarity measure for the location space; x_i and y_i are the abscissa and ordinate components of the seed point, respectively; and x_j and y_j are the horizontal and vertical components of the pixel points adjacent to the seed point.

The total similarity measure between seed points and pixels is as follows:

D = [d_t^2 + (d_xy / S)^2 m^2]^(1/2),

where S is the distance between adjacent seed points. Because S = (N/K)^(1/2), when the distribution of the K superpixel seed points is sparse, the value of S increases, and the spatial extent of each superpixel increases accordingly. m is the balance parameter, which balances the texture similarity against the positional space similarity. Its value range is usually (1, 20). After repeated trials, we set m = 10 for all superpixel segmentations in our implementation.
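The three measures can be sketched together as follows. A normalised grey-level histogram stands in for the paper's texture descriptor, which is not fully specified, and all function names and the bin count are our own assumptions; the combined distance follows the SLIC form, with the texture term replacing the CIELAB colour term and m weighting the spatial term.

```python
import numpy as np

def texture_hist(patch, n_bins=16):
    """Normalised grey-level histogram of a patch; a stand-in for the
    paper's texture descriptor (an assumption for illustration)."""
    h, _ = np.histogram(patch, bins=n_bins, range=(0, 256))
    return h / max(h.sum(), 1)

def total_distance(hist_seed, hist_pix, pos_seed, pos_pix, s, m=10.0):
    """SLIC-style combined similarity: texture-histogram distance d_t
    plus the spatial distance d_xy normalised by the seed spacing S and
    weighted by the balance parameter m (the paper uses m = 10)."""
    d_t = np.linalg.norm(hist_seed - hist_pix)   # texture distance
    d_xy = np.hypot(pos_seed[0] - pos_pix[0],
                    pos_seed[1] - pos_pix[1])    # spatial distance
    return np.sqrt(d_t ** 2 + (d_xy / s) ** 2 * m ** 2)
```

With this weighting, a pixel one grid spacing S away from a seed contributes a spatial term of exactly m, so larger m produces more compact, grid-like superpixels.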

3.3. Pixel Clustering

Pixels with the same seed point labels are clustered into one superpixel. However, to ensure the speed of the algorithm, we cannot select the entire image as the search window. Instead, we choose a 2S × 2S window with the seed point as the centre, where S = (N/K)^(1/2) is the distance between adjacent seed points, for searching pixels to cluster. The clustering results are shown in Figure 2.
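Putting the pieces together, one assignment pass over the 2S × 2S search windows might look like this. It is an illustrative sketch rather than the authors' implementation: texture is again approximated by a local grey-level histogram, and each pixel keeps the label of the nearest seed under the combined texture + spatial distance.

```python
import numpy as np

def assign_labels(img, seeds, s, n_bins=16, m=10.0, half=4):
    """One clustering pass: every pixel inside a 2S x 2S window around a
    seed receives the label of the seed with the smallest combined
    texture + spatial distance."""
    h, w = img.shape
    labels = np.full((h, w), -1)
    best = np.full((h, w), np.inf)
    pad = np.pad(img.astype(np.float64), half, mode="edge")

    def hist_at(y, x):
        # normalised grey-level histogram of the patch centred at (y, x)
        patch = pad[y:y + 2 * half + 1, x:x + 2 * half + 1]
        hst, _ = np.histogram(patch, bins=n_bins, range=(0, 256))
        return hst / max(hst.sum(), 1)

    for lab, (sy, sx) in enumerate(seeds):
        hs = hist_at(sy, sx)
        for y in range(max(sy - s, 0), min(sy + s, h)):
            for x in range(max(sx - s, 0), min(sx + s, w)):
                d_t = np.linalg.norm(hs - hist_at(y, x))
                d_xy = np.hypot(sy - y, sx - x)
                d = np.sqrt(d_t ** 2 + (d_xy / s) ** 2 * m ** 2)
                if d < best[y, x]:
                    best[y, x] = d
                    labels[y, x] = lab
    return labels
```

In a full SLIC-style loop this assignment step would alternate with recomputing each seed as the centroid of its cluster until convergence.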

4. Target Segmentation and Recognition

The target and the background, which have different pixel characteristics in a remote sensing image, can be separated by superpixel segmentation. The segmented target image can be identified, as shown in Figure 3, as long as it is reconstructed into a complete target.

Each divided subimage b_i may be a target image or a background image, and the corresponding indicator x_i takes the value 0 or 1. The process of target reconstruction can be represented by mathematical modelling, as shown in Figure 3. If T represents the target, then B represents the set of subimages after superpixel segmentation, B = {b_1, b_2, …, b_n}, and then

T ≈ Σ_i x_i b_i.

The principle of target reconstruction is to use as few segmented subimages as possible to mosaic a complete target while keeping the mosaic error as small as possible. Target reconstruction minimises the difference between T and Σ_i x_i b_i [15, 16]:

min_x ‖P(T) − P(Σ_i x_i b_i)‖^2 + λ‖x‖_0,

where P(·) represents the best square approximation polynomial of T and of Σ_i x_i b_i, and λ is the balance parameter that penalises the terms where x_i is not 0. The process is as follows.

(1) Scale Normalisation. The purpose of scale normalization is to make the mosaic image and the target template have the same scale and aspect ratio. Assume that the width and height of the template target are M and N, respectively, so that the aspect ratio is M/N. From the ratios of the template width and height to those of the original image, the scale factors of the X-axis and the Y-axis can be calculated and written as δ_x and δ_y, respectively. Hence, the scale normalization of the original image f(x, y) is as follows:

f′(x, y) = f(x/δ_x, y/δ_y).

(2) In the estimated target direction, the target is reconstructed by position; the divided subimages in the target area take the value 1, while those outside take 0. We use the method in [17] to obtain the best square approximation polynomials of the template and the stitched image. Then, we obtain the reconstructed image.

(3) The reconstruction mismatch rate is calculated, and the recognition target is determined. The reconstruction mismatch rate is an important indicator of reconstruction quality: it is the ratio of the difference between the reconstruction area S_r and the template area S_t to the template area,

R = |S_r − S_t| / S_t.

Therefore, aircraft target recognition calculates the mismatch rate R_k of each target template k and its corresponding reconstruction. The template with the minimum mismatch rate, i.e., the image with the minimum reconstruction difference,

k* = argmin_k R_k,

is the identified target.
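On binary masks, the mismatch rate and the final template selection can be sketched as follows. This assumes the reading R = |S_r − S_t| / S_t given above, with areas counted as numbers of foreground pixels; the function names are illustrative.

```python
import numpy as np

def mismatch_rate(template, recon):
    """R = |area(recon) - area(template)| / area(template), computed on
    0/1 masks of equal shape (our reading of the definition above)."""
    a_t = template.sum()
    return abs(recon.sum() - a_t) / a_t

def recognise(recons, templates):
    """Return the index of the template whose reconstruction has the
    smallest mismatch rate, plus all rates."""
    rates = [mismatch_rate(t, r) for t, r in zip(templates, recons)]
    return int(np.argmin(rates)), rates
```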

The specific identification process is shown in Figure 4.

5. Experimental Analysis

5.1. Parameter Setting

All experiments in this paper are based on the MATLAB 2013a + Visual Studio 2010 platform and completed on a personal computer with an Intel® Core™ i5-4570 processor, a 3.2 GHz main frequency, and 4 GB of memory. The algorithm is validated on the QuickBird satellite image database of DigitalGlobe, a remote sensing imaging company of the United States. The image resolution is 0.61 m. To verify the universality of the algorithm, we collect 5 types of aircraft target images, with each type having 30 images, as shown in Table 1.

Table 1: Templates and sample images of the five aircraft types (Types 1–5; images not reproduced here).

5.2. Results of Superpixel Segmentation and Target Reconstruction

We use the SLIC superpixel segmentation algorithm and the method proposed in this section to segment the target image with superpixels. Then, the target is constructed by the segmentation results. The findings are shown in Table 2.

Table 2: SLIC segmentation, segmentation by the proposed method, and reconstruction results for the five target images (Types 1–5; images not reproduced here).

As seen in Table 2, the superpixel segmentation based on the CIELAB colour space similarity measure is inaccurate, especially when the background of the remote sensing image is complex (Types 1 and 3) and the target is affected by a shadow area (Types 2 and 4). The main reason is that the number of spectra is reduced. The colour similarity measurement does not effectively distinguish between targets and shadow regions, thereby resulting in segmentation errors. In this paper, on the basis of the SLIC superpixel segmentation algorithm, texture and spatial features are used for measuring the similarity between pixel points. This procedure takes full advantage of the texture difference between the target and shadow areas. Under the influence of a complex background and shadow areas, the target can still be effectively segmented and the target stitching effect is enhanced.

5.3. Experiments of Matching Recognition and Accuracy Comparison

The five aircraft types in the dataset are tested for matching recognition, and the recognition results are shown in Figures 5–9. These figures show the results of (a) direct use of template and image matching recognition and (b) matching recognition with the template after superpixel segmentation and reconstruction.

The matching results show that the accuracy of matching recognition of aircraft targets after superpixel segmentation and reconstruction is significantly improved. After reconstruction, the influence of the shadow area is removed, and other interference information in the background is effectively filtered out, further improving the accuracy of matching recognition. We compared our results with the CNN method [5] and with the results obtained without segmentation; the comparison is shown in Table 3.

Table 3: Matching accuracy (%).

Image    Without segmentation   CNN with landmark   With segmentation
Type 1   77.8                   84.5                92.5
Type 2   71.3                   82.7                90.2
Type 3   76.6                   85.1                92.8
Type 4   70.5                   82.3                90.0
Type 5   75.1                   84.7                92.7

As seen in Table 3, the matching recognition accuracy is remarkably improved after superpixel segmentation, being approximately 17.3% higher than that of direct matching recognition. The CNN method also clearly improves the matching accuracy, reaching about 83.9%. However, this experiment used the network model trained in [5] to test our images, so the training parameters are based on the dataset in [5]; the number of remote sensing satellite images in this paper is not sufficient to meet the training requirements, and the matching accuracy of our test images under those parameters is therefore low. Compared with the CNN method, the method proposed in this paper reaches a higher accuracy.

To verify the accuracy of the proposed algorithm, an experimentally measured classification confusion matrix for aircraft target recognition [18] is used. Such a matrix is primarily adopted for evaluating the classification accuracy of an image. Each column represents a predicted category and each row represents the true category, so each entry is the fraction of samples of the row's true category assigned to the column's predicted category. For example, the value in the first row and first column indicates the probability that an aircraft actually belonging to the first category is predicted as the first category, and the value in the first row and second column indicates the probability that an aircraft actually belonging to the first category is mispredicted as the second category. The other values are read in the same way. The confusion matrix of this algorithm is shown in Table 4.

Table 4: Classification confusion matrix.

         Type 1   Type 2   Type 3   Type 4   Type 5
Type 1   0.89
Type 2   0.00     0.97     0.00     0.00     0.00

As shown in Table 4, Types 1 and 3 are vulnerable to classification errors. The main reason is that the wings of both types of aircraft are perpendicular to the aircraft fuselage, although those of Type 1 are wider. Types 2, 4, and 5, which also have similar fuselages, are likewise easily confused. In general, although some aircraft types have similarities, the classification accuracy of the algorithm is high, thereby satisfying the needs of remote sensing image aircraft target recognition.
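The row-normalised confusion matrix described above can be computed, for example, as follows (an illustrative helper, not from the paper):

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, n_classes):
    """Row-normalised confusion matrix: entry (i, j) is the fraction of
    samples of true class i predicted as class j, matching the reading
    of Table 4 described above."""
    m = np.zeros((n_classes, n_classes))
    for t, p in zip(true_labels, pred_labels):
        m[t, p] += 1
    row = m.sum(axis=1, keepdims=True)
    return m / np.where(row == 0, 1, row)  # avoid division by zero
```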

5.4. Experiment of Computational Complexity

The size of the remote sensing images selected in this paper is 128 × 128 pixels. In the computational complexity comparison experiment, the target matching and recognition times before and after superpixel segmentation are compared. In addition, we compare the CNN algorithm with both, as shown in Table 5.

Table 5: Matching time (ms).

Image    Without segmentation   CNN with landmark   With segmentation
Type 1   113                    318                 528
Type 2   125                    327                 537
Type 3   131                    341                 545
Type 4   118                    323                 531
Type 5   149                    356                 551

As detailed in Table 5, after superpixel segmentation, the time consumption of the algorithm is approximately 400 ms more than that of direct matching, and the CNN method takes less time than the proposed method. However, the time consumption of the three algorithms remains on the same order of magnitude, thus meeting the requirements of target matching and recognition.

6. Conclusion

In this research, we propose an aircraft recognition method that is based on superpixel segmentation and reconstruction. First, a gradient direction histogram is used for estimating the target direction, and the template direction is kept consistent with the reconstruction direction. The Gamma correction method is used for reducing the influence of low contrast and brightness changes of remote sensing images. Then, an improved SLIC superpixel segmentation algorithm is proposed; this algorithm is based on the difference of texture features between target and shadow regions in the satellite images and solves the problem of incomplete target contour extraction caused by shadow regions. Finally, the mismatch rate of the template area and the reconstruction area is calculated, in which the minimum mismatch rate is identified as the target. Experiments show that the proposed method positively affects object segmentation and reconstruction and can identify five types of aircraft.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.


Disclosure

The abstract of this paper was presented at COSPAR 2018.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


Acknowledgments

This work was partially supported by the Chinese National Natural Science Foundation (no. 61901081) and the Fundamental Research Funds for the Central Universities (no. 3132018180).


References

1. Y. Zeng, J. Lan, C. Han, K. Huang, J. Li, and X. Shi, "Aircraft recognition based on improved iterative threshold selection and skeleton Zernike moment," Optik, vol. 125, no. 14, pp. 3733–3737, 2014.
2. L. Fu, Y. Peng, and L. Kun, "Research concerning aircraft recognition of remote sensing images based on ICA Zernike invariant moments," Journal of Intelligent Systems, vol. 6, no. 1, pp. 51–56, 2013.
3. H.-J. Rong, Y.-X. Jia, and G.-S. Zhao, "Aircraft recognition using modular extreme learning machine," Neurocomputing, vol. 128, no. 27, pp. 166–174, 2014.
4. Q. Wu, H. Sun, X. Sun, Z. Daobing, F. Kun, and W. Hongqi, "Aircraft recognition in high-resolution optical satellite remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 1, pp. 112–116, 2015.
5. A. Zhao, K. Fu, S. Wang et al., "Aircraft recognition based on landmark detection in remote sensing images," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 8, pp. 1413–1417, 2017.
6. N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 886–893, San Diego, CA, USA, June 2005.
7. R. Kapoor, R. Gupta, L. H. Son, S. Jha, and R. Kumar, "Detection of power quality event using histogram of oriented gradients and support vector machine," Measurement, vol. 120, pp. 52–75, 2018.
8. P. O. Pinheiro and R. Collobert, "From image-level to pixel-level labeling with convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1713–1721, Boston, MA, USA, June 2015.
9. J. Uhrig, M. Cordts, U. Franke et al., "Pixel-level encoding and depth layering for instance-level semantic labeling," in Proceedings of the German Conference on Pattern Recognition, Springer, Berlin, Germany, 2016.
10. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274–2282, 2012.
11. A. Schick, M. Bäuml, and R. Stiefelhagen, "Improving foreground segmentations with probabilistic superpixel Markov random fields," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 27–31, Providence, RI, USA, June 2012.
12. O. Csillik, "Fast segmentation and classification of very high resolution remote sensing data using SLIC superpixels," Remote Sensing, vol. 9, no. 3, p. 243, 2017.
13. W. Zuo, L. Zhang, C. Song, and Z. David, "Texture enhanced image denoising via gradient histogram preservation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1203–1210, Portland, OR, USA, June 2013.
14. G. Cheng and Q. Yue, "Rock images analysis of FCM clustering algorithm based on weighted color texture features," Journal of Physics: Conference Series, vol. 1069, no. 1, Article ID 012185, 2018.
15. A. Mustafa, H. Kim, J. Y. Guillemaut, and H. Adrian, "General dynamic scene reconstruction from multiple view video," in Proceedings of the IEEE International Conference on Computer Vision, pp. 900–908, Santiago, Chile, December 2015.
16. B. A. Williams and D. G. Long, "Reconstruction from aperture-filtered samples with application to scatterometer image reconstruction," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 5, pp. 1663–1676, 2011.
17. W. R. Lee, V. Rehbock, K. L. Teo, and L. Caccetta, "A weighted least-square-based approach to FIR filter design using the frequency-response masking technique," IEEE Signal Processing Letters, vol. 11, no. 7, pp. 593–596, 2004.
18. T. R. Patil and S. S. Sherekar, "Performance analysis of naive Bayes and J48 classification algorithm for data classification," International Journal of Computer Science and Applications, vol. 6, no. 2, pp. 256–261, 2013.

Copyright © 2020 Yantong Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
