Scientific Programming, vol. 2021, Article ID 9974723, 10 pages, 2021
Research Article | Open Access | https://doi.org/10.1155/2021/9974723

Target Recognition of SAR Images Based on Azimuthal Constraint Reconstruction

Xinying Miao and Yunlong Liu

Academic Editor: Bai Yuan Ding
Received: 15 Mar 2021 | Accepted: 07 Apr 2021 | Published: 16 Apr 2021

Abstract

A synthetic aperture radar (SAR) target classification method based on dynamic target reconstruction is developed in this study. According to SAR azimuthal sensitivity, the training samples that are truly useful for reconstructing the test sample are those with similar azimuths and the same label. Hence, the proposed method performs linear representation of the test sample on a local dictionary established by training samples selected from each class under an azimuthal correlation constraint. By properly adjusting this constraint, the test sample can be reconstructed at different levels by different scales of training samples. During the classification phase, the reconstruction error vectors from the different levels are combined by linear fusion, and the label of the test sample is determined based on the fused errors. Experiments are set up on the moving and stationary target acquisition and recognition (MSTAR) dataset to evaluate the proposed method. The results confirm its effectiveness.

1. Introduction

Synthetic aperture radar (SAR) is able to acquire high-resolution images of an area of interest for battlefield surveillance. For the ground targets in the observed area, a target classification algorithm is often performed to obtain their labels for information analysis [1]. Over the past two decades, many high-performance SAR target classification methods have been developed. In the early stage, classification algorithms operated directly on the image pixels. In [2], template matching was designed to measure the intensity correlations between the test and template samples. In [3], a statistical approach, i.e., a conditional Gaussian model, was employed to approximate the distributions of SAR image intensities based on a large volume of training samples. The posterior probabilities of the test sample with respect to the different training classes could then be calculated for target classification. Considering the high dimension and redundancy of the original image intensities, feature extraction techniques were widely applied in SAR target classification methods. Different kinds of features were used, including geometrical, transformation-based, and electromagnetic ones. In [4–9], region features, the target outline contour, and the shadow were used as basic features for target classification. Various transformation features [10–17] were adopted in SAR target classification, including principal component analysis (PCA) [10], non-negative matrix factorization (NMF) [11], wavelet transform [12], the monogenic signal [13, 14], and empirical mode decomposition [15]. Scattering centers are typical representatives of electromagnetic features. In [18–20], several methods were developed based on attributed scattering centers for target classification. Alongside the extracted features, classifiers were either adopted from existing ones or specifically designed for SAR target classification. In [21, 22], the support vector machine (SVM) was employed as the basic classifier. Sun et al. applied adaptive boosting (AdaBoost) to SAR target classification [23]. Sparse representation-based classification (SRC), including its modified versions, operated as the classifier in [22, 24–28]. For unordered scattering centers, existing classifiers can hardly be used directly; as a remedy, researchers developed several matching schemes specifically for attributed scattering centers [18–20]. Recently, deep learning algorithms have triggered a surge of interest in the field of pattern recognition. They have also become the mainstream in remote sensing image interpretation [29], including SAR target recognition, and many deep learning models have been successfully applied, e.g., the autoencoder and the convolutional neural network (CNN). In [30], a stacked autoencoder was developed for feature fusion with application to SAR target classification. CNN has been the most popular deep learning model in SAR target classification, with a rich set of published works exploring different network architectures and training tricks. In [31], CNN was first applied to SAR target classification and its effectiveness was validated. Chen et al. proposed the structure of all-convolutional networks (A-ConvNets) for SAR target classification [32]. More recent works [33–36] use different CNN architectures to improve classification performance. Data augmentation provides another way to enhance the classification capability of CNN. Ding et al. conducted image translation and noise addition to augment the available samples for training a CNN for SAR target classification [37]. In [38], the training samples were augmented by noise addition, multiple resolutions, and occlusion simulation. In [39], CAD models were processed by electromagnetic simulation software to produce more SAR images. CNN can also be combined with other classifiers to further improve the classification performance. In [40, 41], CNN was combined with SVM and sparse coding, respectively, for SAR target classification. An updating strategy was designed by Cui et al. with the aid of SVM [42]. A hierarchical decision fusion algorithm for CNN and scattering center matching was developed in [43]. However, the classification performance of deep learning models is closely tied to the available training samples. In SAR target classification, there are many extended operating conditions (EOCs): configuration variants, noise corruption, etc. As a result, deep learning-based methods can hardly adapt to them well.

This study proposes an SAR target classification method based on dynamic target reconstruction under the constraint of azimuthal sensitivity. In traditional SRC, the test sample is linearly represented over the global dictionary formed by all the training classes [24, 25]. However, due to azimuthal sensitivity [44–47], only the few samples with azimuths close to that of the test sample are truly useful in the reconstruction. Therefore, sparse representation over the global dictionary may introduce false alarms and reduce the reconstruction precision. In this study, the target reconstruction is conducted on local dictionaries composed of those training samples from each class that share similar azimuths with the one estimated from the test sample. The linear representation is then performed with no sparsity constraint so as to minimize the reconstruction error. However, considering possible azimuth estimation errors and the instability of azimuthal sensitivity, the local reconstruction is repeated under different levels of the azimuthal constraint. In detail, the reconstruction is performed in different intervals around the estimated azimuth; thus, different scales of training samples are selected. Based on the reconstruction errors from the different azimuth intervals, a linear fusion algorithm combines them into a single error vector, which decides the target label according to the minimum error. The main contributions of this paper are as follows. First, the azimuthal sensitivity of SAR images is considered, which helps select the training samples that truly correspond to the test one, so the linear representation and reconstruction are more precise. Second, the azimuthal sensitivity constraint is dynamically adjusted to obtain reconstruction results at different levels; these results are fused so that the final reconstruction errors better capture the actual label of the sample to be classified. To investigate the performance of the proposed method, several experimental setups are designed on the moving and stationary target acquisition and recognition (MSTAR) dataset. The results confirm the validity of the proposed method.

2. Method Description

2.1. Azimuthal Sensitivity of SAR Images

SAR images are sensitive to azimuths [44–47], which indicate the relative aspect angles between the targets and the radar platform. For the same target at a fixed depression angle, when the azimuth changes greatly, its images tend to show significant differences. Figure 1 illustrates SAR images of BMP2 and T72 at different azimuths, i.e., 0°, 45°, 90°, 135°, and 180°, drawn from the MSTAR dataset. Images with large azimuth differences have notably different target appearances. In addition, SAR images from different targets with similar azimuths tend to have higher correlations than those from the same target but with quite different azimuths. Figure 2 plots the correlation curve between a BMP2 SAR image at 45° azimuth and those from 0° to 180° (the 45° azimuth itself is omitted). Quantitative results are summarized in Table 1. When the azimuth difference is below 5°, the correlation coefficients remain above 0.7. However, the correlation drops below 0.5 once the azimuth difference exceeds about 30°. Therefore, it is probably invalid to represent a test sample using training samples whose azimuths are far from that of the test sample. Properly selecting training samples highly related to the test sample is beneficial for enhancing the reconstruction precision.


Table 1. Correlation statistics under different absolute azimuth differences.

             Absolute azimuth difference
Correlation  [0°, 5°]   [5°, 10°]   >10°

Maximum      0.80       0.69        0.50
Minimum      0.69       0.51        0.12
Average      0.73       0.56        0.28
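
To make this correlation analysis concrete, the following is a minimal Python sketch (not the authors' original code) of the normalized correlation coefficient between two co-registered SAR magnitude chips; the chip arrays and the `chips` container are hypothetical.

    import numpy as np

    def correlation_coefficient(img_a: np.ndarray, img_b: np.ndarray) -> float:
        # Normalized cross-correlation between two co-registered SAR
        # magnitude chips of identical shape (zero-mean, unit-norm).
        a = img_a.astype(float).ravel()
        b = img_b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    # Example: correlate a 45-degree BMP2 chip against chips at other azimuths.
    # `chips` is a hypothetical dict mapping azimuth (deg) -> image array.
    # curve = {az: correlation_coefficient(chips[45], img)
    #          for az, img in chips.items() if az != 45}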

2.2. Target Reconstruction under Azimuthal Constraint

At present, the target azimuth of a SAR image can be estimated with good precision [23, 48, 49]. Denote the estimated azimuth of the test sample $y$ as $\hat{\theta}$; the training samples are then selected in the azimuth interval $[\hat{\theta} - \Delta\theta, \hat{\theta} + \Delta\theta]$. After the selection, the chosen training samples from the $C$ classes are used to build the dictionaries $A_1, A_2, \ldots, A_C$, one local dictionary per class. Then, the test sample is reconstructed on each local dictionary as follows:

$\hat{\alpha}_i = \arg\min_{\alpha_i} \|\alpha_i\|_2^2 \quad \text{s.t.} \quad \|y - A_i \alpha_i\|_2^2 \le \varepsilon, \qquad (1)$

where $\alpha_i$ represents the linear coefficient vector corresponding to the $i$th class and $\varepsilon$ is the permitted maximum reconstruction error.
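
A minimal sketch of this azimuth-constrained selection, assuming the training chips of one class are stored as columns of a matrix with a matching array of azimuth labels; the function names are illustrative, not from the paper.

    import numpy as np

    def azimuth_distance(a: float, b: float) -> float:
        # Absolute azimuth difference in degrees, wrapped to [0, 180].
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def local_dictionary(samples: np.ndarray, azimuths: np.ndarray,
                         theta_hat: float, delta: float) -> np.ndarray:
        # Keep the columns of `samples` (D x N, vectorized chips of one
        # class) whose azimuths lie within +/- delta of the estimate.
        mask = np.array([azimuth_distance(az, theta_hat) <= delta
                         for az in azimuths])
        return samples[:, mask]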

Although similar in formulation to traditional SRC, the proposed reconstruction algorithm has some differences. First, the linear representation is performed on the local dictionaries of the individual classes. As analyzed in Section 2.1, training samples from a false class may share higher correlations with the test sample than samples from the true class with large azimuth differences. Hence, a global dictionary may introduce many false atoms during the solution of the linear coefficients, whereas the local dictionary better reveals the reconstruction capability of each class. Second, no sparsity constraint is assigned to the optimization problem in equation (1). The atoms in the local dictionary are selected under the azimuthal constraint and share azimuths close to that of the test sample, so all of them are useful in the linear representation. For the problem in equation (1), a regularization algorithm can be employed to convert it to equation (2):

$\hat{\alpha}_i = \arg\min_{\alpha_i} \left( \|y - A_i \alpha_i\|_2^2 + \lambda \|\alpha_i\|_2^2 \right), \qquad (2)$

where $\lambda$ denotes the regularization coefficient. The analytic solution of equation (2) is obtained as follows:

$\hat{\alpha}_i = (A_i^{T} A_i + \lambda I)^{-1} A_i^{T} y, \qquad (3)$

where $I$ denotes the unit matrix. After solving the coefficient vectors of the different classes, their corresponding reconstruction errors are calculated as follows:

$r_i = \|y - A_i \hat{\alpha}_i\|_2^2, \quad i = 1, 2, \ldots, C. \qquad (4)$
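
The closed-form solution of equations (2)–(4) can be sketched in Python as follows, assuming $A_i$ is a D x N_i matrix of vectorized training chips; the regularization value is illustrative.

    import numpy as np

    def class_reconstruction_error(y: np.ndarray, A_i: np.ndarray,
                                   lam: float = 0.1) -> float:
        # Ridge solution of equation (2): alpha = (A^T A + lam*I)^{-1} A^T y,
        # followed by the residual of equation (4) for one class.
        gram = A_i.T @ A_i + lam * np.eye(A_i.shape[1])
        alpha = np.linalg.solve(gram, A_i.T @ y)
        return float(np.linalg.norm(y - A_i @ alpha) ** 2)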

2.3. Dynamic Reconstruction for Target Recognition

Considering the instability of azimuthal sensitivity and possible estimation errors of the azimuth, the neighborhood for selecting the correlated training samples is adjusted for dynamic reconstruction. A few choices of the neighborhood size are set as $\Delta\theta_1 < \Delta\theta_2 < \cdots < \Delta\theta_K$, and the available training samples are selected from the azimuthal interval $[\hat{\theta} - \Delta\theta_k, \hat{\theta} + \Delta\theta_k]$. Then, the target reconstruction is performed as in Section 2.2 to obtain the reconstruction error vectors at the different neighborhoods as $r^{(k)} = [r_1^{(k)}, r_2^{(k)}, \ldots, r_C^{(k)}]$, where $k = 1, 2, \ldots, K$. The linear combination of equation (5) is performed to fuse all the reconstruction errors into a unified one:

$r = \sum_{k=1}^{K} \omega_k \, r^{(k)}, \qquad (5)$

where $\omega = [\omega_1, \omega_2, \ldots, \omega_K]$ denotes the weight vector and $r$ is the fused reconstruction error vector, which determines the target label according to the minimum error.

In fact, the reconstruction results at the different azimuth neighborhoods reflect the relations between the test sample and the different classes under different levels of azimuthal sensitivity. At a small $\Delta\theta_k$, only a few training samples with azimuths notably close to that of the test sample are used. With the increase of $\Delta\theta_k$, more training samples become available for the reconstruction. In the sense of the nearest neighbor, the representation capability of a few close training samples tends to be more important. Hence, the weight vector of the dynamic reconstruction is decided as follows:

$\omega_k = \frac{1/N_k}{\sum_{j=1}^{K} 1/N_j}, \qquad (6)$

where $N_k$ represents the number of training samples at the $k$th reconstruction level; a smaller number results in a larger weight.
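
A sketch of the fusion step of equations (5) and (6); the inverse-proportional normalization follows the reconstruction above and is one plausible reading of the paper's statement that fewer samples yield a larger weight.

    import numpy as np

    def fuse_dynamic_errors(error_vectors, sample_counts):
        # error_vectors: K per-level error vectors r^(k), each of length C.
        # sample_counts: number of training samples N_k used at each level.
        inv = 1.0 / np.asarray(sample_counts, dtype=float)
        weights = inv / inv.sum()      # smaller N_k -> larger weight, eq. (6)
        R = np.stack(error_vectors)    # K x C matrix of reconstruction errors
        return weights @ R             # fused length-C error vector, eq. (5)

    # The target label is the class with the minimum fused error:
    # label = int(np.argmin(fuse_dynamic_errors(errors, counts)))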

Figure 3 illustrates the basic procedure of the proposed target classification method. The azimuth of the test sample is first estimated to choose the proper training samples from the different classes. Here, the estimation algorithm proposed in [23] is employed, which has been confirmed effective in several related works [46]. Afterwards, the test sample is reconstructed by the training samples of each class to obtain a reconstruction error vector at a specific azimuth neighborhood. By adjusting the neighborhood, the dynamic reconstruction yields several reconstruction error vectors, which are fused by linear weighting. Finally, the target label is determined based on the fused reconstruction errors.

3. Experiments

3.1. MSTAR Dataset

Since its release in the 1990s, the MSTAR dataset has long been the benchmark database for evaluating SAR target classification methods. The dataset comprises SAR images of ten stationary ground vehicles (shown in Figure 4) measured under different conditions. The resolution of these SAR images is about 0.3 m, so many details of the targets can be observed. With the MSTAR dataset, several experimental setups can be designed; a typical example is given in Table 2. All ten targets are used, with images at 17° and 15° depression angles available for training and testing, respectively. Because of the small differences between the training and test samples (only a 2° depression angle variance), the experimental setup in Table 2 is often adopted as the standard operating condition (SOC). Beyond the SOC setup, other experimental conditions can also be designed on the MSTAR dataset, such as configuration variants, depression angle variances, noise corruption, and target occlusion. A few reference methods are chosen from current studies for comparison. Traditional classifiers including SVM and SRC are selected. Three CNN-based methods drawn from [32, 38], and [41] are compared, denoted as "A-ConvNets", "DACNN" (Data Augmentation + CNN), and "SCCNN" (Sparse Coding + CNN), respectively. In the following, all the methods are evaluated under SOC and several EOCs to validate the effectiveness of the proposed method.


Table 2. Training and test sets under SOC.

          Depr.  BMP2           BTR70  T72           T62  BRDM2  BTR60  ZSU23/4  D7   ZIL131  2S1

Training  17°    233 (Sn_9563)  233    232 (Sn_132)  299  298    256    299      299  299     299
Test      15°    195 (Sn_9563)  196    196 (Sn_132)  273  274    195    274      274  274     274
                 196 (Sn_9566)         195 (Sn_812)
                 196 (Sn_c21)          191 (Sn_s7)

3.2. 10-Class Recognition under SOC

The classification task is first considered under SOC using the 10-class training and test samples shown in Table 2. The classification results of the proposed method are displayed as the confusion matrix in Figure 5. The probability of correct classification (denoted as $P_{cc}$) is adopted to evaluate the classification accuracy; it is defined as the proportion of correctly classified samples among all the test samples. By observing the diagonal elements, each target is classified with a $P_{cc}$ over 97%, and the average reaches 98.72%. The comparison of the average $P_{cc}$ values of the different methods is shown in Figure 6. The proposed method outperforms SVM, SRC, and A-ConvNets but has a slightly lower $P_{cc}$ than DACNN and SCCNN. As mentioned above, the classification capability of deep learning models is highly dependent on the amount and coverage of training samples. In this experimental setup, there are configuration differences within BMP2 and T72. Consequently, the performance of a CNN trained on the original training samples is affected. DACNN augments the limited training samples using image translation and noise addition, which enhances the representation capability of the networks. SCCNN complements CNN with sparse coding, which is beneficial for handling the existing configuration variance. Compared with traditional SRC, the dynamic target reconstruction in this study effectively enhances the overall classification accuracy. The target reconstruction under the azimuthal constraint focuses on the potential atoms that are truly useful for linear representation in SRC, so possible interferences from false atoms, whether from the true class or a false class, are greatly relieved. Therefore, the dynamic reconstruction tends to be more targeted, and the reconstructed results better reveal the correlations between the test sample and the different training classes.
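
For reference, $P_{cc}$ follows directly from the confusion matrix; the sketch below uses a hypothetical two-class matrix for illustration.

    import numpy as np

    def pcc(confusion: np.ndarray) -> float:
        # Probability of correct classification: correctly classified
        # samples (the diagonal) over all test samples.
        return float(np.trace(confusion) / confusion.sum())

    # Hypothetical 2-class example: pcc(np.array([[195, 5], [3, 193]]))
    # returns about 0.9798.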

3.3. Configuration Variants

In the MSTAR dataset, some targets have more than one configuration, e.g., BMP2 and T72, as shown in Table 2. An experimental condition can thus be set up to test the proposed method under configuration variants [38]. Table 3 displays the training and test sets, which include four targets; the BMP2 and T72 configurations to be classified differ from their training counterparts. Another two targets, i.e., BRDM2 and BTR70, are placed in the training set to further increase the classification difficulty. Table 4 presents the detailed classification results achieved by the proposed method for BMP2 and T72. All the tested configurations, which are absent from the training set, are classified with $P_{cc}$ values over 97%, and the average reaches 98.15%. Figure 7 compares the average $P_{cc}$ values of the different methods, confirming the best performance of the proposed method under configuration variants. The performance of DACNN and SCCNN degrades below that of the proposed method because of the severe configuration variances between the training and test samples. In the proposed dynamic reconstruction, the training samples for representation are constrained to a small azimuth interval around that of the test sample. Therefore, differences caused by configuration variants in the other training samples are not brought into the representation, as may occur in SRC. By representing the test sample on a focused local dictionary, the relation between the test sample and a specific training class can be fully considered.


Table 3. Training and test sets under configuration variants.

          Depr.     BMP2           BRDM2  BTR70  T72

Training  17°       233 (Sn_9563)  298    233    232 (Sn_132)
Test      15°, 17°  426 (Sn_9566)  0      0      424 (Sn_812)
                    427 (Sn_c21)                 572 (Sn_A04)
                                                 572 (Sn_A05)
                                                 572 (Sn_A07)
                                                 565 (Sn_A10)


Table 4. Classification results of the proposed method under configuration variants.

Test class  Serial no.  BMP2  BRDM2  BTR70  T72   Pcc (%)

BMP2        Sn_9566     417   5      2      2     97.66
            Sn_c21      421   2      3      1     98.14
T72         Sn_812      4     1      1      418   98.12
            Sn_A04      5     2      2      562   98.25
            Sn_A05      4     3      2      563   98.25
            Sn_A07      5     3      3      561   97.91
            Sn_A10      2     2      1      560   98.94
Average                                           98.15

3.4. Depression Angle Variances

The experimental results under SOC show that a small depression angle variance does not cause many differences between the training and test samples. However, with the increase of the depression angle divergence, the test samples may be notably deformed with reference to the training ones. Table 5 displays the experimental setup for depression angle variances. The training set comprises SAR images of 2S1, BRDM2, and ZSU23/4 at a 17° depression angle, while the test sets are collected at 30° and 45° depression angles, respectively. Figure 8 compares the average $P_{cc}$ values of the different methods at both depression angles. The performance at 45° degrades much more sharply than that at 30° because of the remarkable differences between the training and test samples caused by the 28° depression angle variance. The highest $P_{cc}$ values at both depression angles are achieved by the proposed method, validating its superior robustness to depression angle variances. Similar to the situation of configuration variants, the depression angle variances cause local differences between the training and test samples. In this case, it is hard to precisely evaluate the azimuthal stability. Therefore, the dynamic reconstruction over a series of azimuthal intervals can better find the true correlation between the test sample and the different classes.


Table 5. Training and test sets under depression angle variances.

          Depr.  2S1  BRDM2  ZSU23/4

Training  17°    297  296    297
Test      30°    286  285    286
          45°    301  301    301

3.5. Noise Corruption

In the public version of the MSTAR dataset, the SAR images were mainly acquired under cooperative conditions with high signal-to-noise ratios (SNRs). However, in actual applications, the measured images may be severely corrupted by noise. It is therefore essential to examine a target classification method under noise corruption. Following previous studies [50–52], this paper simulates noisy images by adding different extents of additive noise to the test samples in Table 2. The average $P_{cc}$ values of the different methods are plotted in Figure 9. Unsurprisingly, a lower SNR results in a lower $P_{cc}$ for each method. Because simulated noisy images are added to its training samples, DACNN achieves the best noise robustness among the reference methods. At each noise level, the proposed method achieves the highest performance, indicating its better noise robustness. The selection of training samples under the azimuthal constraint eliminates noise interference from the unselected samples. In addition, the dynamic reconstruction performs the optimization task of equation (2) at different scales of training samples. Their joint use effectively relieves the influences caused by noise, thus reaching decisions more robust to noise corruption.
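
A possible way to generate such noisy test samples, assuming additive white Gaussian noise scaled to a target SNR; this is a simplified stand-in for the corruption schemes of [50–52], not necessarily the paper's exact procedure.

    import numpy as np

    def add_noise(image: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
        # Add white Gaussian noise whose power is set by the target SNR (dB).
        rng = np.random.default_rng(seed)
        signal_power = np.mean(image.astype(float) ** 2)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        noise = rng.normal(0.0, np.sqrt(noise_power), size=image.shape)
        return image.astype(float) + noise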

3.6. Target Occlusion

Although SAR has some penetration capability, it is still possible that targets on the ground are occluded by nearby obstacles. As a result, in some measured SAR images, only a part of the target is present. Based on the occlusion models in [53, 54], this paper first simulates SAR images with occluded targets from the test samples in Table 2. Afterwards, the occluded samples at different occlusion levels are classified by the different methods, and their performance is shown in Figure 10. The proposed method outperforms the reference ones at each occlusion level. When the target is partially occluded, it becomes complex to find its corresponding training samples in the true class. In this method, the linear representation is performed on local dictionaries, so the representation capability of each training class can be better exploited. Furthermore, with a proper constraint on azimuthal sensitivity, the dynamic reconstruction is more targeted, which helps relieve the influences caused by target occlusion.
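
A simplified occlusion simulation in the spirit of the setup above, assuming the occluded part corresponds to the strongest-intensity (target) pixels; it is an illustrative stand-in for the models of [53, 54], not the exact scheme.

    import numpy as np

    def occlude(image: np.ndarray, fraction: float,
                background: float = 0.0) -> np.ndarray:
        # Replace a fraction of the brightest pixels with a background level.
        out = image.astype(float).copy()
        flat = out.ravel()
        n = int(fraction * flat.size)
        if n > 0:
            idx = np.argsort(flat)[-n:]   # indices of the n strongest pixels
            flat[idx] = background
        return out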

4. Conclusion

An SAR target classification method is proposed based on dynamic target reconstruction under the constraint of azimuthal sensitivity. With the estimated azimuth, only the training samples with azimuths close to that of the test sample are selected for target reconstruction. By adjusting the azimuthal intervals, the dynamic reconstruction is performed, and the results reflect the relations between the test sample and the different training classes at different levels. Finally, the reconstruction error vectors from the dynamic reconstruction are linearly fused to decide the target label. Experimental setups are designed on the MSTAR dataset to test the proposed method together with several reference methods. The following conclusions are drawn from the experimental results. Under SOC, the proposed method achieves a high classification accuracy of 99.12%. When the test samples differ significantly from the training set because of configurations, depression angles, noise, or occlusion, the proposed method still maintains superior robustness over the reference methods. In future work, a statistical or analytic approach should be further adopted or developed to obtain adaptive weights for the different levels of target reconstruction.

Data Availability

The MSTAR dataset used to support the findings of this study is available online at http://www.sdms.afrl.af.mil/datasets/mstar/.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work received no funding.

References

1. K. El-Darymli, E. W. Gill, D. Power, and C. Moloney, "Automatic target recognition in synthetic aperture radar imagery: a state-of-the-art review," IEEE Access, vol. 4, pp. 6014–6058, 2016.
2. L. M. Novak, G. J. Owirka, W. S. Brower et al., "The automatic target recognition system in SAIP," Lincoln Laboratory Journal, vol. 10, no. 2, pp. 187–202, 1997.
3. J. A. O'Sullivan, M. D. Devore, V. Kedia et al., "SAR ATR performance using a conditionally Gaussian model," IEEE Transactions on Aerospace and Electronic Systems, vol. 37, no. 1, pp. 91–108, 2001.
4. M. Amoon and G. A. Rezai-rad, "Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features," IET Computer Vision, vol. 8, no. 2, pp. 77–85, 2014.
5. B. Ding, G. Wen, C. Ma et al., "Target recognition in synthetic aperture radar images using binary morphological operations," Journal of Applied Remote Sensing, vol. 10, no. 4, Article ID 046006, 2016.
6. C. Shan, B. Huang, and M. Li, "Binary morphological filtering of dominant scattering area residues for SAR target recognition," Computational Intelligence and Neuroscience, vol. 2018, Article ID 9680465, 15 pages, 2018.
7. L. Jin, J. Chen, and X. Peng, "Synthetic aperture radar target classification via joint sparse representation of multi-level dominant scattering images," Optik, vol. 186, pp. 110–119, 2019.
8. J. Tan, X. Fan, S. Wang et al., "Target recognition of SAR images by partially matching of target outlines," Journal of Electromagnetic Waves and Applications, vol. 33, no. 7, pp. 865–881, 2019.
9. S. Papson and R. M. Narayanan, "Classification via the shadow region in SAR imagery," IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 2, pp. 969–980, 2012.
10. A. K. Mishra, "Validation of PCA and LDA for SAR ATR," in Proceedings of the IEEE TENCON, pp. 1–6, Hyderabad, India, November 2008.
11. Z. Cui, Z. Cao, J. Yang, J. Feng, and H. Ren, "Target recognition in synthetic aperture radar images via non-negative matrix factorisation," IET Radar, Sonar & Navigation, vol. 9, no. 9, pp. 1376–1385, 2015.
12. W. Xiong, L. Cao, Z. Hao et al., "Combining wavelet invariant moments and relevance vector machine for SAR target recognition," in Proceedings of the IET International Radar Conference, pp. 1–4, Guilin, China, April 2009.
13. G. Dong, G. Kuang, N. Wang, L. Zhao, and J. Lu, "SAR target recognition via joint sparse representation of monogenic signal," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 7, pp. 3316–3328, 2015.
14. Y. Zhou, Y. Chen, R. Gao et al., "SAR target recognition via joint sparse representation of monogenic components with 2D canonical correlation analysis," IEEE Access, vol. 7, pp. 25815–25826, 2019.
15. M. Chang, X. You, and Z. Cao, "Bidimensional empirical mode decomposition for SAR image feature extraction with application to target recognition," IEEE Access, vol. 7, pp. 135720–135731, 2019.
16. M. Yu, G. Dong, H. Fan et al., "SAR target recognition via local sparse representation of multi-manifold regularized low-rank approximation," Remote Sensing, vol. 10, p. 211, 2018.
17. Y. Huang, J. Pei, J. Yang, B. Wang, and X. Liu, "Neighborhood geometric center scaling embedding for SAR ATR," IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 1, pp. 180–192, 2014.
18. L. C. Potter and R. L. Moses, "Attributed scattering centers for SAR ATR," IEEE Transactions on Image Processing, vol. 6, no. 1, pp. 79–91, 1997.
19. B. Ding, G. Wen, J. Zhong, C. Ma, and X. Yang, "A robust similarity measure for attributed scattering center sets with application to SAR ATR," Neurocomputing, vol. 219, pp. 130–143, 2017.
20. B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, "Target recognition in synthetic aperture radar images via matching of attributed scattering centers," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 7, pp. 3334–3347, 2017.
21. Q. Zhao and J. C. Principe, "Support vector machines for SAR automatic target recognition," IEEE Transactions on Aerospace and Electronic Systems, vol. 37, no. 2, pp. 643–654, 2001.
22. H. Liu and S. Li, "Decision fusion of sparse representation and support vector machine for SAR image target recognition," Neurocomputing, vol. 113, pp. 97–104, 2013.
23. Y. Sun, Z. Liu, S. Todorovic et al., "Adaptive boosting for SAR automatic target recognition," IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 1, pp. 112–125, 2006.
24. J. J. Thiagarajan, K. N. Ramamurthy, P. Knee, A. Spanias, and V. Berisha, "Sparse representations for automatic target classification in SAR images," in Proceedings of the 4th International Symposium on Communications, Control, and Signal Processing, pp. 1–4, Cyprus, March 2010.
25. H. Song, K. Ji, Y. Zhang et al., "Sparse representation-based SAR image target classification on the 10-class MSTAR data set," Applied Sciences, vol. 6, no. 26, 2016.
26. B. Ding and G. Wen, "Sparsity constraint nearest subspace classifier for target recognition of SAR images," Journal of Visual Communication and Image Representation, vol. 52, pp. 170–176, 2018.
27. W. Li, J. Yang, and Y. Ma, "Target recognition of synthetic aperture radar images based on two-phase sparse representation," Journal of Sensors, vol. 2020, Article ID 2032645, 12 pages, 2020.
28. X. Zhang, Y. Wang, Z. Tan et al., "Two-stage multi-task representation learning for synthetic aperture radar (SAR) target images classification," Sensors, vol. 17, no. 11, p. 2506, 2017.
29. X. X. Zhu, D. Tuia, L. Mou et al., "Deep learning in remote sensing: a comprehensive review and list of resources," IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 4, pp. 8–36, 2017.
30. M. Kang, K. Ji, X. Leng et al., "Synthetic aperture radar target recognition with feature fusion based on a stacked autoencoder," Sensors, vol. 17, no. 1, p. 192, 2017.
31. D. E. Morgan, "Deep convolutional neural networks for ATR from SAR imagery," in Proceedings of the SPIE, pp. 1–13.
32. S. Chen, H. Wang, F. Xu et al., "Target classification using the deep convolutional networks for SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 6, pp. 1685–1697, 2016.
33. J. Zhao, Z. Zhang, W. Yu et al., "A cascade coupled convolutional neural network guided visual attention method for ship detection from SAR images," IEEE Access, vol. 6, pp. 50693–50708, 2018.
34. R. Min, H. Lan, Z. Cao et al., "A gradually distilled CNN for SAR target recognition," IEEE Access, vol. 7, pp. 42190–42200, 2019.
35. L. Wang, X. Bai, and F. Zhou, "SAR ATR of ground vehicles based on ESENet," Remote Sensing, vol. 11, no. 11, p. 1316, 2019.
36. P. Zhao, K. Liu, H. Zou, and X. Zhen, "Multi-stream convolutional neural network for SAR automatic target recognition," Remote Sensing, vol. 10, no. 9, p. 1473, 2018.
37. J. Ding, B. Chen, H. Liu et al., "Convolutional neural network with data augmentation for SAR target recognition," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 3, pp. 364–368, 2016.
38. Y. Yan, "Convolutional neural networks based on augmented training samples for synthetic aperture radar target recognition," Journal of Electronic Imaging, vol. 27, no. 2, Article ID 023024, 2018.
39. D. Malmgren-Hansen, A. Kusk, J. Dall, A. A. Nielsen, R. Engholm, and H. Skriver, "Improving SAR automatic target recognition models with transfer learning from simulated data," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 9, pp. 1484–1488, 2017.
40. S. A. Wagner, "SAR ATR by a combination of convolutional neural network and support vector machines," IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 6, pp. 2861–2872, 2016.
41. O. Kechagias-Stamatis, N. Aouf, Z. Liu et al., "Fusing deep learning and sparse coding for SAR ATR," IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 2, pp. 785–797, 2019.
42. Z. Cui, T. Cui, Z. Cao et al., "SAR unlabeled target recognition based on updating CNN with assistant decision," IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 10, pp. 1585–1589, 2018.
43. C. Jiang and Y. Zhou, "Hierarchical fusion of convolutional neural networks and attributed scattering centers with application to robust SAR ATR," Remote Sensing, vol. 10, no. 6, p. 819, 2018.
44. M. Z. Brown, "Analysis of multiple-view Bayesian classification for SAR ATR," in Proceedings of the SPIE, pp. 265–274, Orlando, FL, USA, December 2003.
45. B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, "Target recognition in SAR images by exploiting the azimuth sensitivity," Remote Sensing Letters, vol. 8, no. 9, pp. 821–830, 2017.
46. X. Miao and Y. Shan, "SAR target recognition via sparse representation of multi-view SAR images with correlation analysis," Journal of Electromagnetic Waves and Applications, vol. 33, no. 7, pp. 897–910, 2019.
47. B. Ding and G. Wen, "Exploiting multi-view SAR images for robust target recognition," Remote Sensing, vol. 9, p. 1150, 2017.
48. Y. Zhang, Y. Zhuang, H. Li et al., "A novel method for estimation of the target rotation angle in SAR image," in Proceedings of the IET International Radar Conference, pp. 1–4, Hangzhou, China, October 2015.
49. S. Chen, F. Lu, J. Wang et al., "Target aspect angle estimation for synthetic aperture radar automatic target recognition using sparse representation," in Proceedings of the ICSPCC, pp. 1–4, Hong Kong, China, August 2016.
50. X. Zhang, Y. Wang, D. Li, Z. Tan, and S. Liu, "Fusion of multifeature low-rank representation for synthetic aperture radar target configuration recognition," IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 9, pp. 1402–1406, 2018.
51. S. Doo, G. Smith, and C. Baker, "Target classification performance as a function of measurement uncertainty," in Proceedings of the IEEE APSAR, pp. 587–590, Singapore, September 2015.
52. X. Zhang, "Noise-robust target recognition of SAR images based on attribute scattering center matching," Remote Sensing Letters, vol. 10, no. 2, pp. 186–194, 2019.
53. Z. Zhang, "Joint classification of multiresolution representations with discrimination analysis for SAR ATR," Journal of Electronic Imaging, vol. 27, no. 4, Article ID 043030, 2018.
54. B. Bhanu and Y. Lin, "Stochastic models for recognition of occluded targets," Pattern Recognition, vol. 36, no. 12, pp. 2855–2873, 2003.

Copyright © 2021 Xinying Miao and Yunlong Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
