Block Sparse Bayesian Learning over Local Dictionary for Robust SAR Target Recognition

Chenyu Li and Guohua Liu

Research Article | Open Access | International Journal of Optics, vol. 2020, Article ID 5464010, 10 pages, 2020. https://doi.org/10.1155/2020/5464010

Academic Editor: Mark A. Kahan
Received: 28 Mar 2020
Accepted: 15 Jul 2020
Published: 01 Aug 2020

Abstract

This paper applies block sparse Bayesian learning (BSBL) to synthetic aperture radar (SAR) target recognition. Traditional sparse representation-based classification (SRC) operates on a global dictionary built collaboratively from all classes, and the similarities between the test sample and the various classes are then evaluated through the reconstruction errors. This paper instead reconstructs the test sample over local dictionaries formed by the individual classes. Considering the azimuthal sensitivity of SAR images, the linear coefficients over a local dictionary are sparse with a block structure, so BSBL is employed to solve for them. The proposed method better exploits the representation capability of each class, thus benefiting the recognition performance. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) dataset confirm the effectiveness and robustness of the proposed method.

1. Introduction

Synthetic aperture radar (SAR) has been used in Earth observation since it was first developed. Automatic target recognition (ATR) is a special application of SAR image interpretation, which aims to analyze the targets of interest in images and determine their labels. Since their beginnings in the 1990s, SAR ATR methods have been widely studied using feature extraction and classification algorithms [1, 2]. Different types of features have been applied to SAR ATR, including geometrical, transformation, and electromagnetic features. Target contour, region, shadow, etc. are typical geometrical features, which describe size or shape distributions [3–12]. Ding et al. developed a matching algorithm of binary target regions for SAR ATR [3], which was further improved using the Euclidean distance transform in [4]. The Zernike and Krawtchouk moments were employed to describe target regions in [5, 6], respectively. Anagnostopoulos extracted outline descriptors for SAR target recognition [7]. Papson and Narayanan validated the utility of the target shadow for SAR ATR [8]. Transformation features are usually obtained through mathematical or signal processing tools. The mathematical tools include principal component analysis (PCA) [13], kernel PCA (KPCA) [14], and nonnegative matrix factorization (NMF) [15]. In addition, some newly proposed manifold learning methods have been demonstrated to be effective for SAR ATR [16–19]. Image decomposition tools, including the wavelet transform [20], the monogenic signal [21, 22], and empirical mode decomposition (EMD) [23, 24], have been adopted in SAR ATR with good performance. The scattering center is the typical electromagnetic feature, with several applications in SAR ATR [25–31]. A Bayesian matching scheme of attributed scattering centers was developed in [26] for target recognition. Ding et al. used attributed scattering centers as the basic features and proposed several classification schemes [27, 28]. Zhang proposed a noise-robust method using attributed scattering centers [29]. Furthermore, attributed scattering centers have been employed to partially reconstruct the target in order to enrich the available training samples [30, 31]. In addition to single-type features, many multifeature SAR ATR methods have been designed in the literature [32–36].

The classification algorithms were mainly introduced from the pattern recognition field. Famous classifiers, including the support vector machine (SVM) [37, 38], adaptive boosting (AdaBoost) [39], and sparse representation-based classification (SRC) [40–42], have been successfully applied to SAR ATR. SVM was first used by Zhao and Principe for SAR target recognition [37]. Since then, SVM has been the most popular classifier for different kinds of features in SAR ATR [3, 5, 38]. Sun et al. developed AdaBoost for SAR target recognition, enhancing the classification performance by boosting several simple classifiers [39]. Based on compressive sensing theory, SRC was first validated in face recognition [43] and further used in SAR ATR in many related works [40–42]. With the progress in deep learning, many novel networks have been developed for SAR target recognition [44–59], among which the convolutional neural network (CNN) is the most widely used. Network architectures including the all-convolutional networks (A-ConvNets) [46], the enhanced squeeze and excitation network (ESENet) [47], a gradually distilled CNN [48], a cascade coupled CNN [49], and a multistream CNN [50] have been developed and applied. Other works enrich the effective training samples using transfer learning, data augmentation, and so forth, thus improving the classification ability of the networks [51–53]. However, the performance of deep learning models is closely tied to the scale of the training set; with scarce training SAR images, the final performance is significantly impaired.

This paper proposes a novel classification scheme for SAR target recognition by improving traditional SRC. SRC performs a linear representation of the test sample over a global dictionary established from all the training samples, and the reconstruction errors of the different classes are then analyzed to obtain the target label. In essence, SRC compares the relative representation capabilities of the classes, but the absolute representation capability of each class is not fully exploited. This paper therefore represents the test sample over the local dictionaries of the individual training classes, so that the capability of each class to describe and represent the input sample can be fully investigated. Considering the azimuthal sensitivity of SAR images [60, 61], the test sample is only related to those training samples that share similar azimuths with it. When the atoms in the local dictionary are sorted according to azimuth, the linear coefficients over the local dictionary are sparse with a block structure; i.e., the nonzero elements accumulate in a small azimuth interval. Accordingly, block sparse Bayesian learning (BSBL) [62] is employed to solve for the sparse coefficients on the local dictionary, as it can exploit the block structure with higher precision. Finally, the reconstruction errors of the individual classes are analyzed to determine the target type. To investigate the performance of the proposed method, the moving and stationary target acquisition and recognition (MSTAR) dataset is employed for testing and comparison. The results validate the superiority of the proposed method under the standard operating condition (SOC) and typical extended operating conditions (EOCs).

2. SRC

SRC can be regarded as a modification of the linear representation problem with the idea of compressive sensing [40–43]. The sample to be classified is represented over a global dictionary comprising all the training samples, while the linear coefficients are sparse with only a few nonzero entries. The global dictionary is denoted as $A = [A_1, A_2, \dots, A_C]$, in which $A_i$ is the local dictionary whose atoms are the training samples of the $i$th class. For the test sample $y$, the reconstruction process is as follows:

$$\hat{x} = \arg\min_{x} \|x\|_0 \quad \text{s.t.} \quad \|y - Ax\|_2 \le \varepsilon, \tag{1}$$

where $\hat{x}$ denotes the solved coefficient vector.

With the solution $\hat{x}$, the target label of $y$ is determined by calculating the reconstruction errors of the different classes and comparing them as follows:

$$\text{identity}(y) = \arg\min_{i} \|y - A_i \delta_i(\hat{x})\|_2, \tag{2}$$

where $\delta_i(\cdot)$ extracts the coefficient vector of the $i$th class.
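To make the procedure concrete, the following is a minimal sketch of SRC as described above, assuming the atoms are feature vectors stacked as columns. The helper name src_classify, the input A_list, and the sparsity level n_nonzero are hypothetical, and orthogonal matching pursuit stands in for whichever sparse solver a given implementation uses.

```python
# Minimal SRC sketch (eqs. (1)-(2)); A_list and n_nonzero are assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(A_list, y, n_nonzero=10):
    """A_list[i]: (d, n_i) local dictionary of class i; y: (d,) test feature."""
    A = np.hstack(A_list)                                   # global dictionary
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(A, y)
    x_hat = omp.coef_                                       # sparse solution of (1)
    errors, start = [], 0
    for A_i in A_list:                                      # class residuals, eq. (2)
        n_i = A_i.shape[1]
        errors.append(np.linalg.norm(y - A_i @ x_hat[start:start + n_i]))
        start += n_i
    return int(np.argmin(errors)), errors
```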

In SRC, the representation errors embody the relative capabilities of the different classes for reconstructing the test sample. However, the absolute representation capability of each class cannot be effectively exploited; in other words, how well each individual class alone can reconstruct the test sample should be evaluated further.

3. Block Sparse Bayesian Learning over Local Dictionary

3.1. Sparse Representation over Local Dictionary

Rather than representing the test sample over the global dictionary, this paper represents it over each local dictionary as follows:

$$y = A_i \alpha_i + \epsilon_i, \tag{3}$$

where $\alpha_i$ denotes the linear coefficient vector over the $i$th local dictionary $A_i$ and $\epsilon_i$ is the reconstruction error.

Figure 1 illustrates four SAR images of the BMP2 target from the MSTAR dataset, measured at different azimuths. As shown, SAR images of the same target at notably different azimuths have distinctly different appearances. Because SAR images are sensitive to azimuth changes, only those training samples (atoms in the dictionary) whose azimuths are close to that of the test sample are useful in the linear reconstruction. By arranging the atoms in the local dictionary in descending (or ascending) order of azimuth, the nonzero elements in $\alpha_i$ tend to cluster in a small azimuth interval, so the resulting $\alpha_i$ is a sparse vector with a block structure. To better reconstruct the test sample, BSBL is employed to estimate $\alpha_i$, as it has been demonstrated to be more suitable for the reconstruction of block sparse signals [62].

Compared with traditional global-dictionary SRC, sparse representation over the local dictionary can further exploit the representation abilities of the training classes. The reconstruction error $\epsilon_i$ in (3) reflects the absolute capability of the $i$th class to represent the test sample $y$. In addition, with the constraint of azimuthal sensitivity during the linear representation, the reconstruction errors from the different targets can be used to make reliable decisions on the target label.
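As an illustration of this step, the short sketch below builds the azimuth-sorted local dictionary of one class. The function name and the assumption that PCA features and azimuth labels are already available are ours, not the paper's.

```python
# Sketch: azimuth-sorted local dictionary of one class (an assumption-level
# illustration; the paper does not prescribe this exact interface).
import numpy as np

def build_local_dictionary(features_i, azimuths_i):
    """features_i: (n_i, d) PCA features of class i; azimuths_i: (n_i,) degrees.
    Returns a (d, n_i) dictionary whose columns are in ascending azimuth order,
    so a test sample's coefficients concentrate in one contiguous block."""
    order = np.argsort(azimuths_i)
    A_i = features_i[order].T                          # atoms as columns
    A_i /= np.linalg.norm(A_i, axis=0, keepdims=True)  # unit-norm atoms
    return A_i, azimuths_i[order]
```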

3.2. BSBL Framework

Assume $x$ is a block sparse signal with the following block structure:

$$x = [\underbrace{x_1, \dots, x_{d_1}}_{x^{(1)T}}, \underbrace{x_{d_1+1}, \dots, x_{d_1+d_2}}_{x^{(2)T}}, \dots, \underbrace{x_{N-d_g+1}, \dots, x_{N}}_{x^{(g)T}}]^T. \tag{4}$$

The signal in (4) has $g$ blocks, among which only a few are nonzero. Here, $d_i$ denotes the length of the $i$th block. Usually, the samples in the same block are closely related. To describe the block structure as well as the intrablock correlation, the BSBL framework [62] employs the parameterized Gaussian distribution

$$p(x^{(i)}; \gamma_i, B_i) \sim \mathcal{N}(0, \gamma_i B_i), \quad i = 1, \dots, g. \tag{5}$$

In (5), $\gamma_i$ and $B_i$ are unknown deterministic parameters, in which $\gamma_i$ represents the confidence of the relevance of the $i$th block and $B_i$ captures the intrablock correlation. Assuming that different blocks are mutually independent, the signal model can be rewritten as

$$p(x; \{\gamma_i, B_i\}_{i=1}^{g}) \sim \mathcal{N}(0, \Sigma_0), \tag{6}$$

where $\Sigma_0$ is a block diagonal matrix whose principal diagonal blocks are $\gamma_i B_i$.
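For concreteness, a small sketch of how the prior covariance $\Sigma_0$ in (6) can be assembled is given below. Modeling each $B_i$ as a first-order autoregressive (Toeplitz) matrix with coefficient r follows one option discussed in [62]; the shared $B$ across blocks and the parameter values are illustrative assumptions.

```python
# Sketch: block-diagonal prior covariance Sigma_0 = diag(gamma_1*B_1, ...).
# The AR(1) form of B_i and the value of r are illustrative assumptions.
import numpy as np
from scipy.linalg import block_diag, toeplitz

def prior_covariance(gammas, block_len, r=0.9):
    B = toeplitz(r ** np.arange(block_len))   # shared intrablock correlation B_i
    return block_diag(*[g * B for g in gammas])
```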

The observation is modeled as

$$y = \Phi x + n, \tag{7}$$

where $\Phi$ is a sensing matrix and $n$ denotes the noise term. The sensing matrix is underdetermined, and the noise is modeled as a zero-mean Gaussian distribution with variance $\lambda$, with $\lambda$ being an unknown parameter. Therefore, the likelihood is given by

$$p(y \mid x; \lambda) \sim \mathcal{N}(\Phi x, \lambda I). \tag{8}$$

The main body of the BSBL algorithm iterates between estimating the posterior of $x$ given the parameters $\Theta = \{\lambda, \{\gamma_i, B_i\}_{i=1}^{g}\}$ and maximizing the likelihood with respect to $\Theta$. The update rules for the parameters are derived using the Type II maximum likelihood method, which leads to the following cost function:

$$\mathcal{L}(\Theta) = \log \left|\lambda I + \Phi \Sigma_0 \Phi^T\right| + y^T \left(\lambda I + \Phi \Sigma_0 \Phi^T\right)^{-1} y. \tag{9}$$

Based on the estimated parameters $\lambda$ and $\{\gamma_i, B_i\}_{i=1}^{g}$, the maximum a posteriori (MAP) estimate of the coefficient vector is

$$\hat{x} = \Sigma_0 \Phi^T \left(\lambda I + \Phi \Sigma_0 \Phi^T\right)^{-1} y. \tag{10}$$
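The sketch below illustrates one EM-style realization of this loop under two simplifying assumptions stated in the comments: the intrablock matrices are fixed to identity ($B_i = I$) and the noise variance $\lambda$ is treated as known. It is therefore only the skeleton of the full BSBL algorithm of [62], not the algorithm itself.

```python
# Simplified BSBL-EM sketch: B_i = I and lam (noise variance) fixed -- both
# are simplifying assumptions; see [62] for the full update rules.
import numpy as np

def bsbl_em_simple(Phi, y, block_len, lam=1e-3, n_iter=50):
    M, N = Phi.shape
    assert N % block_len == 0, "sketch assumes equal-length blocks"
    g = N // block_len
    gammas = np.ones(g)
    for _ in range(n_iter):
        s0 = np.repeat(gammas, block_len)                 # diag of Sigma_0 (B_i = I)
        S = lam * np.eye(M) + (Phi * s0) @ Phi.T          # lam*I + Phi Sigma_0 Phi^T
        K = np.linalg.solve(S, Phi * s0).T                # Sigma_0 Phi^T S^{-1}
        mu = K @ y                                        # posterior mean, eq. (10)
        Sig_diag = s0 - np.sum(K.T * (Phi * s0), axis=0)  # diag of posterior cov.
        for i in range(g):                                # EM update of gamma_i
            blk = slice(i * block_len, (i + 1) * block_len)
            gammas[i] = max((mu[blk] @ mu[blk] + Sig_diag[blk].sum()) / block_len,
                            1e-12)                        # floor for stability
    return mu
```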

3.3. Target Recognition

By solving the block sparse coefficients on the local dictionaries, respectively, the reconstruction error of each training class is obtained as

$$r_i(y) = \|y - A_i \hat{\alpha}_i\|_2, \quad i = 1, \dots, C, \tag{11}$$

where $\hat{\alpha}_i$ is the coefficient vector solved over the $i$th local dictionary by BSBL. Afterwards, the target label is assigned to the minimum-error class, as in (2).

Figure 2 illustrates the main idea of the proposed method. During the implementation, PCA is performed as a feature extraction step for both training and test samples, and the detailed steps are summarized as follows (see the sketch after this list):

Step 1: Arrange the training samples of each class according to their azimuths in ascending order.
Step 2: Represent the test sample on the local dictionaries using BSBL.
Step 3: Reconstruct the test sample with each class to obtain the residuals.
Step 4: Make the classification decision according to the minimum error.
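Putting Steps 1-4 together, the sketch below shows one possible end-to-end recognizer built from the helper sketches above (build_local_dictionary and bsbl_em_simple). The block length and the PCA dimensionality are illustrative assumptions, not values from the paper.

```python
# Sketch of Steps 1-4; block_len and the 80-dim PCA are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def recognize(train_feats, train_labels, train_azimuths, y, block_len=8):
    errors = {}
    for c in np.unique(train_labels):
        mask = train_labels == c
        A_c, _ = build_local_dictionary(train_feats[mask],    # Step 1
                                        train_azimuths[mask])
        n = (A_c.shape[1] // block_len) * block_len           # trim to full blocks
        alpha = bsbl_em_simple(A_c[:, :n], y, block_len)      # Step 2
        errors[c] = np.linalg.norm(y - A_c[:, :n] @ alpha)    # Step 3
    return min(errors, key=errors.get)                        # Step 4

# Typical usage: fit PCA on vectorized training chips, then classify.
# pca = PCA(n_components=80).fit(train_pixels)
# label = recognize(pca.transform(train_pixels), train_labels, train_azimuths,
#                   pca.transform(test_pixels.reshape(1, -1))[0])
```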

4. Experiment

4.1. Preparation

With its large volume of measured SAR images, the MSTAR dataset has long been used to examine target recognition algorithms. SAR images of the 10 targets shown in Figure 3 are available in the dataset, collected by an X-band radar with a resolution of 0.3 m (cross range) × 0.3 m (range). The samples of each target cover 0°–360° aspect angles in both the training and test sets. Accordingly, several experimental conditions, including SOC and EOCs, can be set up to test SAR ATR methods.

Several reference methods are chosen from the literature to be compared with the proposed method, including SVM [37], AdaBoost [39], SRC [40], and A-ConvNet [46]. These methods aim to improve performance by updating the classification scheme. For a fair comparison, SVM, AdaBoost, and SRC also operate on the PCA feature vectors, consistent with the proposed method; A-ConvNet is a CNN-based method trained on the original image pixels. All the reference methods are run by the authors on the same hardware platform as the proposed one.

4.2. Recognition Results

In the following experiments, the SOC is first set up for classification. Afterwards, three different EOCs are set up: configuration variance, depression angle variance, and noise corruption. In each case, the four reference methods are tested and compared with the proposed one.

4.2.1. Recognition under SOC

The conditions for the SOC experiment are set up as in Table 1, which includes the 10 classes of targets in Figure 3. Overall, the training and test samples are assumed to share high similarities. Specifically, the test samples of BMP2 and T72 include two configurations different from those in their training sets (denoted by serial number, SN). The classification results of the proposed method in this case are displayed as a confusion matrix in Figure 4. As shown, the correct recognition rates of the different classes, recorded on the diagonal, are all higher than 97%. As an overall evaluation, the average rate of correct recognition (denoted $P_{cc}$) reaches 98.76%. Table 2 compares the $P_{cc}$ of the proposed method and the reference ones. The result of A-ConvNet is only slightly lower than that of the proposed method, owing to the strong classification capability of deep learning models. In comparison with SRC, the recognition performance is greatly enhanced by the proposed method, which validates the effectiveness of BSBL as a classification scheme. With the highest $P_{cc}$, the proposed method is the most effective under SOC.


Table 1: Training and test samples for the SOC experiment.

Class    | Training set (17°)  | Test set (15°)
         | SN      | Samples   | SN      | Samples
BMP2     | 9563    | 233       | 9563    | 195
         |         |           | 9566    | 196
         |         |           | C21     | 196
BTR70    | —       | 233       | —       | 196
T72      | 132     | 232       | 132     | 195
         |         |           | 812     | 195
         |         |           | S7      | 191
T62      | —       | 299       | —       | 273
BRDM2    | —       | 298       | —       | 274
BTR60    | —       | 256       | —       | 195
ZSU23/4  | —       | 299       | —       | 274
D7       | —       | 299       | —       | 274
ZIL131   | —       | 299       | —       | 274
2S1      | —       | 299       | —       | 274


Table 2: Average recognition rates of the methods under SOC.

Method       | Proposed | SVM   | SRC   | AdaBoost | A-ConvNet
$P_{cc}$ (%) | 98.76    | 96.02 | 94.66 | 95.48    | 98.52

4.2.2. Recognition under EOCs

Different from SOC, EOCs are common in real applications because of variations in the target, background, sensor, etc. As reported in the literature, the MSTAR dataset can be used to set up several different EOCs with regard to target configuration, depression angle, and noise. In the following, the proposed method is tested under these three typical EOCs, respectively.

(1) Configuration Variance

Military vehicles usually have several variants with structural modifications. The training and test samples under configuration variance are set up as in Table 3, with four targets to be classified. Among them, BRDM2 and BTR70 are placed in the training set but have no test samples, in order to increase the classification difficulty. The test configurations of BMP2 and T72 are totally different from their counterparts in the training samples. Figure 5 illustrates four different configurations of T72; as observed, they share similar global appearances but have some local differences. Table 4 gives the assigned labels of all the test samples of BMP2 and T72. Each configuration of BMP2 and T72 is correctly recognized with an accuracy over 96%, and the average $P_{cc}$ reaches 97.18%. The $P_{cc}$ of the different methods under configuration variance are compared in Table 5, which validates the robustness of the proposed method over the reference methods. Specifically, in comparison with traditional SRC, the proposed method improves $P_{cc}$ by a large margin, which demonstrates the high effectiveness of BSBL.


Table 3: Training and test samples under configuration variance.

Class   | Training set (17°)  | Test set (15°, 17°)
        | SN      | Samples   | SN      | Samples
BMP2    | 9563    | 233       | 9566    | 428
        |         |           | C21     | 429
BTR70   | —       | 233       | —       | 0
T72     | 132     | 232       | 812     | 426
        |         |           | A04     | 573
        |         |           | A05     | 573
        |         |           | A07     | 573
        |         |           | A10     | 567
BRDM2   | —       | 298       | —       | 0


Table 4: Recognition results of the BMP2 and T72 configurations.

Class | SN   | BMP2 | BRDM2 | BTR70 | T72 | Recognition rate (%)
BMP2  | 9566 | 416  | 4     | 3     | 5   | 97.20
      | C21  | 413  | 8     | 2     | 6   | 96.27
T72   | 812  | 5    | 3     | 4     | 414 | 97.18
      | A04  | 6    | 8     | 8     | 551 | 96.16
      | A05  | 12   | 2     | 2     | 557 | 97.21
      | A07  | 8    | 2     | 10    | 553 | 97.21
      | A10  | 12   | 5     | 0     | 550 | 97.00
Average: 97.18


Table 5: Average recognition rates of the methods under configuration variance.

Method       | Proposed | SVM   | SRC   | AdaBoost | A-ConvNet
$P_{cc}$ (%) | 97.18    | 96.02 | 94.66 | 95.48    | 98.52

(2) Depression Angle Variance

When SAR images are measured at a depression angle notably different from that of the corresponding training samples, they show many differences even at the same azimuth. The training and test samples under large depression angle variance are set up as in Table 6, with three targets. The training set comprises SAR images of the three targets at a 17° depression angle, while the test set comprises two subsets at 30° and 45°, respectively. Figure 6 illustrates SAR images from the three depression angles, in which their differences can be intuitively observed.


Table 6: Training and test samples under depression angle variance.

Class    | Training set           | Test set
         | Depression | Samples   | Depression | Samples
2S1      | 17°        | 299       | 30°        | 288
         |            |           | 45°        | 303
BRDM2    | 17°        | 298       | 30°        | 287
         |            |           | 45°        | 303
ZSU23/4  | 17°        | 299       | 30°        | 288
         |            |           | 45°        | 303

Table 7 compares the $P_{cc}$ of the five methods at the two test depression angles. At 30°, all methods maintain a $P_{cc}$ higher than 94%. At a 45° depression angle, however, the $P_{cc}$ of each method degrades significantly, to below 73%. The highest $P_{cc}$ at both depression angles is achieved by the proposed method, showing its better robustness to large depression angle variance. The $P_{cc}$ of A-ConvNet decreases greatly at 45° because the training set can hardly reflect the situations occurring in the test samples, so the trained network loses its validity. Compared with traditional SRC, BSBL over the local dictionaries effectively improves the final performance.


Table 7: Average recognition rates of the methods under depression angle variance.

Method    | $P_{cc}$ (%) at 30° | $P_{cc}$ (%) at 45°
Proposed  | 97.32               | 72.25
SVM       | 95.42               | 63.24
SRC       | 95.01               | 66.24
AdaBoost  | 94.95               | 62.92
A-ConvNet | 95.67               | 65.82

(3) Noise Corruption

Noise is common in measured SAR images and hinders correct target recognition. In previous works, two types of noise have been used to simulate noisy SAR images for classification: additive Gaussian noise [63] and random noise [46]. Figure 7 shows exemplar SAR images with random noise, in which some of the original pixels are replaced with randomly high values according to the noise level. The performance of the different methods at each noise level is plotted in Figure 8. As shown, the proposed method achieves the highest $P_{cc}$ at each noise level, showing its superior robustness to noise corruption. As a compressive sensing algorithm, BSBL has better robustness to noise. Similarly, SRC generally achieves better performance than SVM, AdaBoost, and A-ConvNet under noise corruption. Compared with traditional SRC, BSBL contributes to the better performance of the proposed method.
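For reproducibility of this EOC, a sketch of the random-noise corruption just described is given below. The exact magnitude of the "randomly high values" is not specified in the paper, so the scale used here is an assumption.

```python
# Sketch of random-noise corruption (after [46]); the replacement magnitude
# (1-2x the image maximum) is an assumption, as the paper does not state it.
import numpy as np

def corrupt(image, noise_level, seed=0):
    """Replace a `noise_level` fraction of pixels with randomly high values."""
    rng = np.random.default_rng(seed)
    noisy = image.copy().astype(float)
    n_bad = int(noise_level * image.size)
    idx = rng.choice(image.size, size=n_bad, replace=False)
    noisy.flat[idx] = image.max() * (1.0 + rng.random(n_bad))
    return noisy
```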

5. Conclusion

In this paper, BSBL is applied to SAR target recognition over local dictionaries. For each training class, a reconstruction error for the test sample is produced based on the solution from BSBL. These reconstruction errors fully exploit the representation capabilities of the different classes and can be used to judge the target label. Owing to azimuthal sensitivity, the linear coefficients generated for the test sample over a local dictionary are block sparse, so BSBL is well suited to solving for them. On the MSTAR dataset, the proposed method achieves a $P_{cc}$ of 98.76% for 10 classes under SOC and 97.18% under configuration variance. The $P_{cc}$ at depression angles of 30° and 45° are 97.32% and 72.25%, respectively. The robustness under random noise corruption also surpasses that of the four reference methods. All these comparisons show the superior performance of the proposed method.

Data Availability

The MSTAR dataset used to support the findings of this study is available online at http://www.sdms.afrl.af.mil/datasets/mstar/.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

1. L. M. Novak, G. J. Owirka, W. S. Brower et al., "The automatic target recognition system in SAIP," Lincoln Laboratory Journal, vol. 10, no. 2, pp. 187–202, 1997.
2. K. El-Darymli, E. W. Gill, D. Power, and C. Moloney, "Automatic target recognition in synthetic aperture radar imagery: a state-of-the-art review," IEEE Access, vol. 4, pp. 6014–6058, 2016.
3. B. Ding, G. Wen, C. Ma, and X. Yang, "Target recognition in synthetic aperture radar images using binary morphological operations," Journal of Applied Remote Sensing, vol. 10, no. 4, Article ID 046006, 2016.
4. C. Shi, F. Miao, Z. Jin, and Y. Xia, "Target recognition of synthetic aperture radar images based on matching and similarity evaluation between binary regions," IEEE Access, vol. 7, pp. 154398–154413, 2019.
5. M. Amoon and G.-a. Rezai-rad, "Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features," IET Computer Vision, vol. 8, no. 2, pp. 77–85, 2014.
6. C. Clemente, L. Pallotta, D. Gaglione, A. De Maio, and J. J. Soraghan, "Automatic target recognition of military vehicles with Krawtchouk moments," IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 1, pp. 493–500, 2017.
7. G. C. Anagnostopoulos, "SVM-based target recognition from synthetic aperture radar images using target region outline descriptors," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 2, pp. e2934–e2939, 2009.
8. S. Papson and R. M. Narayanan, "Classification via the shadow region in SAR imagery," IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 2, pp. 969–980, 2012.
9. J. Tan, X. Fan, S. Wang et al., "Target recognition of SAR images by partially matching of target outlines," Journal of Electromagnetic Waves and Applications, vol. 33, no. 7, pp. 865–881, 2019.
10. X. Zhu, Z. Huang, and Z. Zhang, "Automatic target recognition of synthetic aperture radar images via Gaussian mixture modeling of target outlines," Optik, vol. 194, Article ID 162922, 2019.
11. M. Chang and X. You, "Target recognition in SAR images based on information-decoupled representation," Remote Sensing, vol. 10, no. 1, p. 138, 2018.
12. C. Shan, B. Huan, and M. Li, "Binary morphological filtering of dominant scattering area residues for SAR target recognition," Computational Intelligence and Neuroscience, vol. 2018, Article ID 9680465, pp. 1–15, 2018.
13. A. K. Mishra, "Validation of PCA and LDA for SAR ATR," in Proceedings of the TENCON 2008 IEEE Region 10 Conference, pp. 1–6, Hyderabad, India, November 2008.
14. A. K. Mishra and T. Motaung, "Application of linear and nonlinear PCA to SAR ATR," in Proceedings of the 25th International Conference Radioelektronika, pp. 319–354, Pardubice, Czech Republic, April 2015.
15. Z. Cui, J. Feng, Z. Cao, H. Ren, and J. Yang, "Target recognition in synthetic aperture radar images via non-negative matrix factorisation," IET Radar, Sonar & Navigation, vol. 9, no. 9, pp. 1376–1385, 2015.
16. J. Pei, Y. Huang, W. Huo, J. Wu, J. Yang, and H. Yang, "SAR imagery feature extraction using 2DPCA-based two-dimensional neighborhood virtual points discriminant embedding," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 6, pp. 2206–2214, 2016.
17. M. Yu, G. Dong, H. Fan, and G. Kuang, "SAR target recognition via local sparse representation of multi-manifold regularized low-rank approximation," Remote Sensing, vol. 10, no. 2, p. 211, 2018.
18. Y. Huang, J. Pei, J. Yang, B. Wang, and X. Liu, "Neighborhood geometric center scaling embedding for SAR ATR," IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 1, pp. 180–192, 2014.
19. X. Liu, Y. Huang, J. Pei, and J. Yang, "Sample discriminant analysis for SAR ATR," IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 12, pp. 2120–2124, 2014.
20. W. Xiong, L. Cao, and Z. Hao, "Combining wavelet invariant moments and relevance vector machine for SAR target recognition," in Proceedings of the IET International Radar Conference, pp. 1–4, Guilin, China, 2009.
21. G. Dong and G. Kuang, "Classification on the monogenic scale space: application to target recognition in SAR image," IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2527–2539, 2015.
22. G. Dong, G. Kuang, N. Wang, L. Zhao, and J. Lu, "SAR target recognition via joint sparse representation of monogenic signal," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 7, pp. 3316–3328, 2015.
23. Y. Zhou, Y. Chen, R. Gao, J. Feng, P. Zhao, and L. Wang, "SAR target recognition via joint sparse representation of monogenic components with 2D canonical correlation analysis," IEEE Access, vol. 7, 2019.
24. M. Chang, X. You, and Z. Cao, "Bidimensional empirical mode decomposition for SAR image feature extraction with application to target recognition," IEEE Access, vol. 7, pp. 135720–135731, 2019.
25. L. C. Potter and R. L. Moses, "Attributed scattering centers for SAR ATR," IEEE Transactions on Image Processing, vol. 6, no. 1, pp. 79–91, 1997.
26. H.-C. Chiang, R. L. Moses, and L. C. Potter, "Model-based classification of radar images," IEEE Transactions on Information Theory, vol. 46, no. 5, pp. 1842–1854, 2000.
27. B. Ding, G. Wen, J. Zhong, C. Ma, and X. Yang, "A robust similarity measure for attributed scattering center sets with application to SAR ATR," Neurocomputing, vol. 219, pp. 130–143, 2017.
28. B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, "Target recognition in synthetic aperture radar images via matching of attributed scattering centers," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 7, pp. 3334–3347, 2017.
29. X. Zhang, "Noise-robust target recognition of SAR images based on attribute scattering center matching," Remote Sensing Letters, vol. 10, no. 2, pp. 186–194, 2019.
30. J. Fan and A. Tomas, "Target reconstruction based on attributed scattering centers with application to robust SAR ATR," Remote Sensing, vol. 10, no. 4, p. 655, 2018.
31. J. Lv and Y. Liu, "Data augmentation based on attributed scattering centers to train robust CNN for SAR ATR," IEEE Access, vol. 7, pp. 25459–25473, 2019.
32. U. Srinivas and V. Monga, "Meta-classifiers for exploiting feature dependence in automatic target recognition," in Proceedings of the IEEE Radar Conference, pp. 147–151, Kansas City, MO, USA, June 2011.
33. L. Jin, J. Chen, and X. Peng, "Joint classification of complementary features based on multitask compressive sensing with application to synthetic aperture radar automatic target recognition," Journal of Electronic Imaging, vol. 27, no. 5, Article ID 053034, 2018.
34. S. Miao and X. Liu, "Joint sparse representation of complementary components in SAR images for robust target recognition," Journal of Electromagnetic Waves and Applications, vol. 33, no. 7, pp. 882–896, 2019.
35. S. Liu and J. Yang, "Target recognition in synthetic aperture radar images via joint multifeature decision fusion," Journal of Electronic Imaging, vol. 27, no. 1, Article ID 016012, 2018.
36. P. Huang and W. Qiu, "A robust decision fusion strategy for SAR target recognition," Remote Sensing Letters, vol. 9, no. 6, pp. 507–514, 2018.
37. Q. Zhao and J. C. Principe, "Support vector machines for SAR automatic target recognition," IEEE Transactions on Aerospace and Electronic Systems, vol. 37, no. 2, pp. 643–654, 2001.
38. H. Liu and S. Li, "Decision fusion of sparse representation and support vector machine for SAR image target recognition," Neurocomputing, vol. 113, pp. 97–104, 2013.
39. Y. Sun, Z. Liu, S. Todorovic, and J. Li, "Adaptive boosting for SAR automatic target recognition," IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 1, pp. 112–125, 2007.
40. J. J. Thiagarajan, K. N. Ramamurthy, P. Knee, A. Spanias, and V. Berisha, "Sparse representations for automatic target classification in SAR images," in Proceedings of the 4th International Symposium on Communications, Control and Signal Processing (ISCCSP), pp. 1–4, Limassol, Cyprus, March 2010.
41. H. Song, K. Ji, Y. Zhang, X. Xing, and H. Zou, "Sparse representation-based SAR image target classification on the 10-class MSTAR data set," Applied Sciences, vol. 6, no. 1, p. 26, 2016.
42. X. Zhang, Y. Wang, Z. Tan et al., "Two-stage multi-task representation learning for synthetic aperture radar (SAR) target images classification," Sensors, vol. 17, no. 11, p. 2506, 2017.
43. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
44. X. X. Zhu, D. Tuia, L. Mou et al., "Deep learning in remote sensing: a comprehensive review and list of resources," IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 4, pp. 8–36, 2017.
45. D. E. Morgan, "Deep convolutional neural networks for ATR from SAR imagery," in Proceedings of SPIE, pp. 1–13, Baltimore, MD, USA, 2015.
46. S. Chen, H. Wang, F. Xu, and Y.-Q. Jin, "Target classification using the deep convolutional networks for SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 8, pp. 4806–4817, 2016.
47. L. Wang, X. Bai, and F. Zhou, "SAR ATR of ground vehicles based on ESENet," Remote Sensing, vol. 11, no. 11, p. 1316, 2019.
48. R. Min, H. Lan, Z. Cao, and Z. Cui, "A gradually distilled CNN for SAR target recognition," IEEE Access, vol. 7, pp. 42190–42200, 2019.
49. J. Zhao, Z. Zhang, W. Yu, and T.-K. Truong, "A cascade coupled convolutional neural network guided visual attention method for ship detection from SAR images," IEEE Access, vol. 6, pp. 50693–50708, 2018.
50. P. Zhao, K. Liu, H. Zou, and X. Zhen, "Multi-stream convolutional neural network for SAR automatic target recognition," Remote Sensing, vol. 10, no. 9, p. 1473, 2018.
51. J. Ding, B. Chen, H. Liu, and M. Huang, "Convolutional neural network with data augmentation for SAR target recognition," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 3, pp. 364–368, 2016.
52. Y. Yan, "Convolutional neural networks based on augmented training samples for synthetic aperture radar target recognition," Journal of Electronic Imaging, vol. 27, no. 2, Article ID 023024, 2018.
53. D. Malmgren-Hansen, A. Kusk, J. Dall, A. A. Nielsen, R. Engholm, and H. Skriver, "Improving SAR automatic target recognition models with transfer learning from simulated data," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 9, pp. 1484–1488, 2017.
54. J. Lv, "Exploiting multi-level deep features via joint sparse representation with application to SAR target recognition," International Journal of Remote Sensing, vol. 41, no. 1, pp. 320–338, 2020.
55. H. Gao, S. Peng, and W. Zeng, "Recognition of targets in SAR images using joint classification of deep features fused by multi-canonical correlation analysis," Remote Sensing Letters, vol. 10, no. 9, pp. 883–892, 2019.
56. S. A. Wagner, "SAR ATR by a combination of convolutional neural network and support vector machines," IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 6, pp. 2861–2872, 2016.
57. O. Kechagias-Stamatis and N. Aouf, "Fusing deep learning and sparse coding for SAR ATR," IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 2, pp. 785–797, 2019.
58. Y. Xie, W. Dai, Z. Hu, Y. Liu, C. Li, and X. Pu, "A novel convolutional neural network architecture for SAR target recognition," Journal of Sensors, vol. 2019, Article ID 1246548, pp. 1–9, 2019.
59. M. Kang, K. Ji, X. Leng, X. Xing, and H. Zou, "Synthetic aperture radar target recognition with feature fusion based on a stacked autoencoder," Sensors, vol. 17, no. 1, p. 192, 2017.
60. Z. Zhang and W. Zhu, "Azimuthal constraint representation for synthetic aperture radar target recognition along with aspect estimation," Signal, Image and Video Processing, vol. 13, no. 8, pp. 1577–1584, 2019.
61. B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, "Target recognition in SAR images by exploiting the azimuth sensitivity," Remote Sensing Letters, vol. 8, no. 9, pp. 821–830, 2017.
62. Z. Zhang and B. D. Rao, "Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation," IEEE Transactions on Signal Processing, vol. 61, no. 8, pp. 2009–2015, 2013.
63. S. H. Doo, G. Smith, and C. Baker, "Target classification performance as a function of measurement uncertainty," in Proceedings of the IEEE 5th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), pp. 587–590, Singapore, September 2015.

Copyright © 2020 Chenyu Li and Guohua Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

