Research Article | Open Access

Xinying Miao, Yunlong Liu, "Target Recognition of SAR Images Based on Complex Bidimensional Empirical Mode Decomposition", Scientific Programming, vol. 2021, Article ID 6642316, 10 pages, 2021. https://doi.org/10.1155/2021/6642316

Target Recognition of SAR Images Based on Complex Bidimensional Empirical Mode Decomposition

Academic Editor: Pengwei Wang
Received: 01 Dec 2020
Accepted: 02 Jan 2021
Published: 12 Jan 2021

Abstract

A target recognition method for synthetic aperture radar (SAR) images based on complex bidimensional empirical mode decomposition (C-BEMD) is proposed. C-BEMD is used to decompose the original SAR image into multilevel complex bidimensional intrinsic mode functions (BIMFs), which reflect the two-dimensional time-frequency characteristics of the target. In the classification stage, the decomposed multilevel BIMFs are represented using the multitask sparse representation. Finally, the target category of the test sample is determined according to the reconstruction errors related to the different training classes. In the experiments, the standard operating condition (SOC) and extended operating conditions (EOCs) are designed based on the MSTAR dataset to test and verify the proposed method. The results confirm the effectiveness and robustness of the method.

1. Introduction

Synthetic aperture radar (SAR) image processing has potential value in both military and civil fields [1]. SAR target recognition technology determines the target category through the analysis of target characteristics in images, which can be performed in template-based [2, 3] and model-based ways [4–7]. Feature extraction is one of the key steps in SAR target recognition, which mainly realizes the extraction and representation of target characteristics. At this stage, commonly used SAR image features include geometric ones, transformation ones, and electromagnetic ones. In [8–17], the regions (including the target and shadow ones) and contours were used for SAR target recognition to describe the shape distributions. The transformation features can be summarized into two categories. One type is obtained by mathematical projection algorithms [18–22], typically principal component analysis (PCA) [18] and nonnegative matrix factorization (NMF) [20]. The other type uses image decomposition through signal processing algorithms, such as the monogenic signal [23] and bidimensional empirical mode decomposition (BEMD) [24]. In [25], a visual saliency model was employed for discriminative feature learning. The electromagnetic features describe the backscattering characteristics of the target reflected in the SAR imaging process, such as polarization [26, 27] and scattering centers [28–31]. The classifier is another key step of SAR target recognition: the classification mechanism is designed to classify the extracted features. A large number of classifiers have been used and verified in SAR target recognition, including support vector machines (SVM) [32, 33] and sparse representation-based classification (SRC) [34–36].
In recent years, with the maturity of deep learning theory and algorithms [37–39], a large number of SAR target recognition methods have been developed based on deep learning models, among which the most representative one is the convolutional neural network (CNN) [40–46].

The results of feature extraction, as the input of the classifier, largely determine the classification accuracy. Therefore, designing new SAR image feature extraction algorithms is of great significance for target recognition. This paper proposes a SAR target recognition method based on complex BEMD (C-BEMD) [47, 48]. C-BEMD is an extension of the traditional EMD [49] and BEMD [24, 50] to the complex domain, which can be directly used for the processing and analysis of complex images. In [24], the authors applied BEMD to SAR image decomposition and target recognition and verified its effectiveness. However, SAR images are filled with complex values carrying both amplitude and phase information, and the sole use of image intensities would lose the discrimination contained in the phase distribution. In this sense, C-BEMD can more effectively reflect the two-dimensional time-frequency characteristics of the target, thereby providing more sufficient information for the following classification. For the bidimensional intrinsic mode functions (BIMFs) obtained by C-BEMD, this paper adopts the multitask sparse representation for decision-making in the classification stage. The multitask sparse representation is a general extension of the traditional single-task one, which considers and makes use of the relationships between several tasks. For the BIMFs decomposed by C-BEMD, their inner correlations can be exploited by the multitask sparse representation, thereby improving the overall reconstruction accuracy. Finally, the target category of the test sample is determined according to the total reconstruction errors of all the BIMFs from the test sample achieved by the individual training classes. In the experiments, a variety of operating conditions were set up based on the MSTAR dataset to test the proposed method. The experimental results verify the effectiveness and robustness of this method.

2. Basics of C-BEMD

Huang et al. first developed EMD to adaptively analyze nonstationary signals [49]. Unlike traditional signal decomposition methods, e.g., wavelet analysis, EMD does not impose any prior assumptions on the data, such as linearity or stationarity. In past research, EMD has been numerically validated to be more capable of describing patterns in nonstationary and nonlinear signals. As a natural generalization of EMD to 2D space, BEMD is capable of describing an image using several BIMFs [24, 50]. The original image is decomposed into high- and low-frequency components with some residues; hence, the generated BIMFs can reflect both the global and the detailed information of the decomposed image. However, the traditional BEMD is designed for real signals and cannot process complex signals or images. Yeh extended BEMD to the complex domain to enable it to directly decompose complex matrices [47, 48]. According to [47, 48], the implementation of the C-BEMD algorithm can be summarized in the following steps.
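To make the sifting idea at the heart of EMD concrete, here is a minimal 1-D sketch in Python. This is an illustration only: the helper names are ours, and practical implementations add mirror-extended boundaries and proper stopping criteria.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting pass: subtract the mean of the cubic-spline upper
    and lower extrema envelopes from the signal."""
    t = np.arange(len(x))
    maxi = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    mini = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxi) < 4 or len(mini) < 4:
        return x, True          # too few extrema: treat x as a residue
    upper = CubicSpline(maxi, x[maxi])(t)
    lower = CubicSpline(mini, x[mini])(t)
    return x - (upper + lower) / 2.0, False

def emd(x, max_imfs=3, n_sift=10):
    """Peel off IMFs by repeated sifting (illustrative, no refinements)."""
    imfs, residue = [], x.astype(float).copy()
    for _ in range(max_imfs):
        h, done = residue, False
        for _ in range(n_sift):
            h, done = sift_once(h)
            if done:
                break
        if done:
            break
        imfs.append(h)
        residue = residue - h
    return imfs, residue

# a fast oscillation riding on a slow one
t = np.linspace(0.0, 1.0, 512)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t)
imfs, res = emd(x)
```

By construction, summing the extracted IMFs and the residue reproduces the input exactly, which is the defining property of the decomposition.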

Step 1. Construct a two-dimensional band-pass filter, built from an all-ones matrix and a zero matrix whose sizes are determined by the dimensions of the input image (the exact construction is given in [47]).

Step 2. Construct four analytic signals by applying the filters to the two-dimensional Fourier transform of the input image.

Step 3. Take the two-dimensional inverse Fourier transforms of the first two analytic signals and extract their real parts; take the inverse transforms of the remaining two and extract their imaginary parts.

Step 4. Apply BEMD to decompose the four resulting real images, respectively, obtaining four sets of BIMFs.

Step 5. Combine the four sets of BIMFs into complex BIMFs. The detailed deduction and implementation of C-BEMD can be found in [47, 48].
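The spectral-masking idea behind Steps 1–3 can be sketched as follows. Note that the quadrant mask used here is a simplified stand-in for the exact band-pass filters of [47], and `quadrant_components` is a hypothetical helper name.

```python
import numpy as np

def quadrant_components(img):
    """Split a complex image into four spectral-quadrant components by
    masking its 2-D Fourier transform (a simplified stand-in for the
    band-pass filters used by C-BEMD)."""
    F = np.fft.fft2(img)
    u = np.fft.fftfreq(img.shape[0])[:, None]   # vertical frequencies
    v = np.fft.fftfreq(img.shape[1])[None, :]   # horizontal frequencies
    masks = [(u >= 0) & (v >= 0), (u >= 0) & (v < 0),
             (u < 0) & (v >= 0), (u < 0) & (v < 0)]
    return [np.fft.ifft2(F * m) for m in masks]

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
parts = quadrant_components(img)
```

Because the four masks partition the frequency plane, the four components sum back to the original complex image; Steps 3–5 would then take the real and imaginary parts of such components and feed them to BEMD.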
In this paper, C-BEMD is applied to the decomposition of complex SAR images, and the two-dimensional time-frequency characteristics of the targets are described through multilevel BIMFs. Figure 1 shows the decomposition of a SAR image from the MSTAR dataset (Figure 1(a)), with the amplitude parts of the first three BIMFs shown in Figures 1(b)–1(d), respectively. It can be seen that the decomposition results effectively describe the characteristics associated with the target while complementing the original image with more detailed information. Therefore, this paper jointly uses the original image and the BIMFs decomposed by C-BEMD for the following classification.

3. Classification of Multilevel BIMFs for Target Recognition

3.1. Multitask Sparse Representation

The multitask sparse representation can be considered a united and compact form of several related sparse representation tasks [51, 52]. With the constraint of inner correlations, the multitask sparse representation can produce more precise and robust solutions than those from individual tasks. As reported in [53–57], the multitask sparse representation has been successfully applied to SAR target recognition to classify multiple views, features, resolutions, etc. This study employs it for the classification of the multilevel BIMFs generated by C-BEMD. Assume there are M BIMFs denoted as y^(1), ..., y^(M), which are from the same test sample y. They are represented based on the sparse representations as

\hat{A} = \arg\min_{A} \sum_{l=1}^{M} \| y^{(l)} - D^{(l)} \alpha^{(l)} \|_2^2, (6)

where D^(l) forms the dictionary of the lth BIMF; A = [\alpha^{(1)}, ..., \alpha^{(M)}] stores the coefficient vectors of all the BIMFs.

Equation (6) aims to minimize the total reconstruction error but neglects the correlations between the different tasks. As the decompositions are from the same image, the different levels of BIMFs are actually correlated. So, the core of the multitask sparse representation falls on the constraint on the coefficient matrix, and the optimization problem is changed to

\hat{A} = \arg\min_{A} \sum_{l=1}^{M} \| y^{(l)} - D^{(l)} \alpha^{(l)} \|_2^2 + \lambda \| A \|_{2,1}, (7)

where \| A \|_{2,1} calculates the mixed l_{2,1} norm of the coefficient matrix (the sum of the l_2 norms of its rows), which encourages the tasks to share a common sparsity pattern; \lambda is a nonnegative constant acting as the regularization parameter.

As validated, the coefficient vectors of the different components solved by equation (7) tend to share similar patterns originating from their inner correlations. From reports in related research [53–57], such modifications effectively improve the reconstruction precision, especially for pattern recognition problems. With the estimated coefficient matrix \hat{A}, the reconstruction errors of the test sample with respect to the individual training classes can be obtained for the determination of the target category as

r(i) = \sum_{l=1}^{M} \| y^{(l)} - D_i^{(l)} \hat{\alpha}_i^{(l)} \|_2^2, \quad \text{identity}(y) = \arg\min_i r(i), (8)

where D_i^{(l)} extracts the subdictionary of the lth BIMF in the ith class; \hat{\alpha}_i^{(l)} refers to the corresponding coefficients.
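As an illustration of how the joint problem can be solved, the following sketch uses proximal gradient descent with a row-wise l2,1 penalty. This is our assumed form of the mixed-norm constraint; the dedicated algorithms cited in [51, 52] are more sophisticated.

```python
import numpy as np

def row_shrink(A, tau):
    """Proximal operator of tau * sum of row-wise l2 norms: shrinks
    whole rows toward zero so the tasks share a common support."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    return A * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def multitask_src(ys, Ds, lam=0.1, n_iter=500):
    """Jointly solve min_A sum_l ||y_l - D_l a_l||^2 + lam * ||A||_{2,1}
    by proximal gradient descent (illustrative solver)."""
    n_atoms = Ds[0].shape[1]
    A = np.zeros((n_atoms, len(ys)))
    step = 1.0 / max(np.linalg.norm(D, 2) ** 2 for D in Ds)
    for _ in range(n_iter):
        G = np.stack([D.T @ (D @ A[:, l] - y)
                      for l, (D, y) in enumerate(zip(Ds, ys))], axis=1)
        A = row_shrink(A - step * G, step * lam)
    return A

# toy problem: two tasks whose sparse codes share the support {3, 7, 11}
rng = np.random.default_rng(1)
Ds = [rng.standard_normal((30, 50)) for _ in range(2)]
a_true = np.zeros((50, 2))
a_true[[3, 7, 11], :] = [[2.0, 1.5], [-1.0, 2.0], [1.5, -2.0]]
ys = [Ds[l] @ a_true[:, l] for l in range(2)]
A_hat = multitask_src(ys, Ds)
```

The row-wise shrinkage is what couples the tasks: a coefficient row is kept only if it is useful for reconstructing all the components simultaneously, which mirrors the shared-pattern behavior described above.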

3.2. Target Recognition

Figure 2 shows the basic flow of the proposed method for SAR target recognition. The training samples are first decomposed by C-BEMD to obtain the multilevel BIMFs, and a global dictionary is constructed for each of them accordingly. For the test sample, the same C-BEMD is used to decompose the corresponding levels of BIMFs. Then, the BIMFs of the test sample are jointly represented with the support of the constructed dictionaries. Finally, the target category of the test sample is determined according to the reconstruction errors from equation (8).

In the actual operation process, the BIMFs obtained by C-BEMD are complex with both amplitude and phase parts, so the two parts are extracted and used separately, together with the original SAR image. As shown in Figure 2, the K BIMFs for the dictionaries come from K/2 decompositions by C-BEMD, where the former K/2 represent the amplitudes and the latter K/2 the phases. Specifically, in this paper, the first three BIMFs are used together with the original SAR image, as shown in Figure 1. Both the global and the local information of SAR targets can be characterized by these components. Therefore, the proposed method can make full use of the two-dimensional time-frequency characteristics of complex SAR images to improve the final recognition performance.

4. Experiments

4.1. Preparation

The proposed method is tested on the MSTAR dataset. The dataset contains SAR images of the 10 types of targets shown in Figure 3, covering 0°–360° azimuth angles and typical depression angles such as 15°, 17°, 30°, and 45°. Due to the abundant data samples, the MSTAR dataset has long been the benchmark data source for the verification of SAR target recognition methods. Following existing research, this study relies on the MSTAR dataset to set up typical operating conditions for the experiments, including the standard operating condition (SOC) and extended operating conditions (EOCs) covering configuration differences, depression angle differences, and noise interference.

Table 1 shows the training and test samples for the 10-class classification task under SOC, which come from 17° and 15° depression angles, respectively. All types of targets are from the same configurations, with only a small difference in the depression angles. Therefore, the test and training samples tend to share high similarities, so the recognition problem is relatively simple. Table 2 sets the training and test samples under the condition of configuration differences, including 3 types of targets. Among them, the training and test samples of BMP2 and T72 come from completely different configurations. Table 3 shows the training and test samples from different depression angles. In this case, the training samples are from 17° depression angle but the test ones are from 30° and 45°, respectively. In addition, on the basis of the experimental setting in Table 1, noises are added to the test samples to generate test sets of different signal-to-noise ratios (SNR) [29]. Then, the proposed method can be evaluated under noise interference. Figure 4 shows some noisy SAR images at different SNRs, where the influences of noises can be observed on the target appearances.
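The noisy test sets can be produced by scaling additive white Gaussian noise to a target SNR. The sketch below is one common protocol; the paper's exact noise model follows its reference [29], so treat this additive version as an assumption.

```python
import numpy as np

def add_noise(img, snr_db, seed=None):
    """Add white Gaussian noise scaled so that the ratio of signal
    power to noise power equals the requested SNR in dB."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(np.abs(img) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(p_noise), img.shape)

img = np.random.default_rng(0).random((64, 64))
noisy = add_noise(img, snr_db=10, seed=1)
```

Measuring the empirical SNR of the corrupted image recovers the requested level up to sampling error, which is how such test sets are usually sanity-checked.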


Table 1: Training and test samples for the 10-class task under SOC.

         | Depr. | BMP2          | BTR70 | T72          | T62 | BDRM2 | BTR60 | ZSU23/4 | D7  | ZIL131 | 2S1
Training | 17°   | 232 (SN_9563) | 232   | 231 (SN_132) | 298 | 297   | 255   | 298     | 298 | 298    | 298
Test     | 15°   | 194 (SN_9563) | 195   | 195 (SN_132) | 272 | 273   | 194   | 273     | 273 | 273    | 273


Table 2: Training and test samples under configuration differences.

         | Depr.    | BMP2                         | BDRM2 | BTR70 | T72
Training | 17°      | 232 (SN_9563)                | 297   | 232   | 231 (SN_132)
Test     | 15°, 17° | 427 (SN_9566), 428 (SN_C21)  | 0     | 0     | 425 (SN_812), 572 (SN_A04), 572 (SN_A05), 572 (SN_A07), 566 (SN_A10)


Table 3: Training and test samples under depression angle differences.

         | Depr. | 2S1 | BDRM2 | ZSU23/4
Training | 17°   | 298 | 297   | 298
Test     | 30°   | 287 | 286   | 287
         | 45°   | 302 | 302   | 302

Six reference methods are selected from existing research to be compared with the proposed one under the same conditions. The first one comes from [24], which employed BEMD for SAR image feature extraction. The second one used a visual saliency model for feature extraction, denoted as the VSM method. The third and fourth methods are CNN-based, using residual networks (Res-Net) [42] and deep features [46], respectively. The last two are developed based on the multitask sparse representation to classify multiple features (extracted by PCA, kernel PCA, and NMF) [55] and multiresolution representations [56]; they are abbreviated as "multifeature" and "multiresolution," respectively. The following experiments are conducted sequentially under SOC and the three EOCs. All the methods are compared with quantitative results to reach some effective conclusions.

4.2. SOC

Relying on the experimental setup in Table 1, the proposed method is tested and verified under SOC. Figure 5 shows the classification confusion matrix of the 10 types of targets. The horizontal and vertical coordinates in the figure represent the true and the predicted labels of the test samples, respectively; hence, the diagonal elements are the classification accuracies of the different targets. This study defines the recognition rate as P_av = N_c / N_t, where N_c and N_t denote the numbers of correctly classified and total test samples, respectively. The average recognition rates on the 10 types of targets achieved by the various methods are shown in Table 4. It can be seen that all the methods achieve high recognition performance under SOC. In contrast, the proposed method is better than the six reference methods with an average recognition rate of 99.34%. Under SOC, the training set has high similarity with the test set and can effectively cover the various situations in it, which contributes to the good performance of the CNN-based methods, i.e., Res-Net and deep feature. In particular, compared with the BEMD method, this paper uses C-BEMD to effectively explore the time-frequency characteristics of the original complex SAR image and obtain more effective feature descriptions; therefore, the final performance of the proposed method is better than that of the BEMD method. The multifeature and multiresolution methods used the multitask sparse representation in the classification stage, same as the proposed one. The higher recognition rate of the proposed method shows that the BIMFs decomposed by C-BEMD have higher discriminability than the multiple features or resolutions.
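For reference, the recognition rate defined above is just the diagonal mass of a count-valued confusion matrix:

```python
import numpy as np

def recognition_rate(confusion):
    """Correctly classified samples over total: the trace of a
    count-valued confusion matrix divided by its grand sum."""
    confusion = np.asarray(confusion, dtype=float)
    return confusion.trace() / confusion.sum()

# hypothetical 3-class confusion matrix (counts, not the paper's data)
C = [[95, 3, 2],
     [4, 90, 6],
     [1, 2, 97]]
rate = recognition_rate(C)   # 282 correct out of 300
```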


Table 4: Average recognition rates of the methods under SOC.

Method                       | Proposed | BEMD  | VSM   | Res-Net | Deep feature | Multifeature | Multiresolution
Average recognition rate (%) | 99.34    | 99.02 | 99.06 | 99.16   | 99.21        | 99.12        | 99.14

4.3. Configuration Differences

Relying on the experimental setup in Table 2, the proposed method is tested and verified under configuration differences. Table 5 lists the classification results with respect to each configuration of BMP2 and T72, and the average recognition rate over all the configurations reaches 98.52%. The recognition performance of the various methods is shown in Table 6. Compared with the SOC case, the recognition rates of the different methods decrease to some extent because of the configuration differences. In particular, the Res-Net and deep feature methods show significant drops, as the training samples are inadequate to cover the situations in the test set. In contrast with the traditional BEMD method, the recognition rate of the proposed method shows some improvement, which proves that C-BEMD can more effectively extract the complex-domain features of SAR images, thereby improving the overall recognition performance. Also, the better performance than the multifeature and multiresolution methods validates the higher discriminability of the BIMFs decomposed by C-BEMD.


Table 5: Classification results of the proposed method under configuration differences.

Configuration | BMP2 | BRDM2 | BTR70 | T72 | Recognition rate (%)
BMP2 SN_9566  | 422  | 2     | 1     | 2   | 98.83
BMP2 SN_C21   | 425  | 1     | 1     | 1   | 99.30
T72 SN_812    | 3    | 1     | 2     | 419 | 98.59
T72 SN_A04    | 1    | 2     | 4     | 565 | 98.78
T72 SN_A05    | 3    | 5     | 3     | 561 | 98.08
T72 SN_A07    | 2    | 6     | 1     | 563 | 98.43
T72 SN_A10    | 3    | 2     | 6     | 555 | 98.06
Average       |      |       |       |     | 98.52


Table 6: Average recognition rates of the methods under configuration differences.

Method                       | Proposed | BEMD  | VSM   | Res-Net | Deep feature | Multifeature | Multiresolution
Average recognition rate (%) | 98.52    | 98.02 | 98.08 | 98.08   | 98.06        | 97.92        | 98.14

4.4. Depression Angle Differences

Relying on the experimental setup in Table 3, the proposed method is tested under the condition of depression angle differences. All the methods perform the classification at 30° and 45° depression angles, respectively, and the results are summarized in Figure 6. At the depression angle of 30°, the average recognition rates of the various methods stay above 93%, indicating that the image differences caused by the depression angle difference are relatively small at this point. However, at the 45° depression angle, the performance of all methods drops significantly, because the image differences caused by the depression angle become much more pronounced. The proposed method maintains the highest recognition rate in both cases, which proves its robustness to depression angle differences. Compared with the BEMD method, the performance of the proposed method improves considerably, which illustrates the effectiveness of C-BEMD for SAR image feature extraction. With significant differences between the training and test samples, the Res-Net and deep feature methods experience the largest degradations at the 45° depression angle among all the methods, because the trained networks can hardly discern test samples with low similarity to the training ones.

4.5. Noise Interference

Relying on the constructed noisy test sets at multiple SNRs, the proposed method is evaluated under noise interference. Figure 7 plots the curves of the average recognition rates achieved by the different methods against SNR. The proposed method achieves the highest recognition rate at each noise level, which verifies its robustness to noise interference. The C-BEMD used in this paper analyzes and extracts features from SAR images in the complex domain and finally obtains features that are robust against noise interference. During the decomposition, a certain amount of denoising is actually carried out, as can be observed in the implementation steps of C-BEMD. In the classification process, the discrimination of the different BIMFs is combined and fused through the multitask sparse representation. Therefore, the proposed method can maintain a high level of performance under noise interference. Among the six reference methods, the BEMD method outperforms the remaining ones, further validating the noise robustness of the decomposed BIMFs. The two CNN-based methods achieve the lowest recognition rates, especially at low SNRs, because networks trained on SAR images at high SNRs adapt poorly to test sets with heavy noise.

5. Conclusion

This paper applies C-BEMD to SAR image feature extraction and target recognition. C-BEMD is an extension of traditional BEMD to the complex domain and can be directly used to process complex matrices. In this paper, C-BEMD is used to extract features of complex SAR images as multilevel complex BIMFs, which can effectively reflect the time-frequency characteristics of SAR targets. In the classification stage, the multitask sparse representation is used to characterize the extracted BIMFs, and the target category of the test sample is determined according to the reconstruction errors. Based on the MSTAR dataset, the proposed method is tested and verified under SOC and typical EOCs including configuration differences, depression angle differences, and noise interference. The experimental results show that the proposed method achieves the highest average recognition rate, 99.34%, under SOC, and its robustness under the three EOCs is also higher than that of the reference methods.

Data Availability

The MSTAR dataset used to support the findings of this work is available online at http://www.sdms.afrl.af.mil/datasets/mstar/.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. K. El-Darymli, E. W. Gill, P. McGuire, D. Power, and C. Moloney, "Automatic target recognition in synthetic aperture radar imagery: a state-of-the-art review," IEEE Access, vol. 4, pp. 6014–6058, 2016.
  2. L. M. Novak, G. J. Owirka, W. S. Brower, and A. L. Weaver, "The automatic target-recognition system in SAIP," Lincoln Laboratory Journal, vol. 10, no. 2, pp. 187–202, 1997.
  3. L. M. Novak, G. J. Owirka, and W. S. Brower, "Performance of 10- and 20-target MSE classifiers," IEEE Transactions on Aerospace and Electronic Systems, vol. 36, no. 4, pp. 1279–1289, 2000.
  4. S. M. Verbout, W. W. Irving, and A. S. Hanes, "Improving a template-based classifier in a SAR automatic target recognition system by using 3-D target information," Lincoln Laboratory Journal, vol. 6, no. 1, pp. 53–76, 1993.
  5. J. R. Diemunsch and J. Wissinger, "Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: search technology for a robust ATR," in Proceedings of the 5th SPIE Algorithms for Synthetic Aperture Radar Imagery, pp. 481–492, Orlando, FL, USA, May 1998.
  6. T. D. Ross, J. J. Bradley, L. J. Hudson, and M. P. O'Connor, "SAR ATR: so what's the problem?—an MSTAR perspective," in Proceedings of the 6th SPIE Algorithms for Synthetic Aperture Radar Imagery, pp. 662–672, Orlando, FL, USA, August 1999.
  7. E. Keydel, S. Lee, and J. Moore, "MSTAR extended operating conditions: a tutorial," in Proceedings of the SPIE, pp. 228–242, San Diego, CA, USA, August 1996.
  8. M. Amoon and G. A. Rezai-Rad, "Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features," IET Computer Vision, vol. 8, no. 2, pp. 77–85, 2014.
  9. X. Zhang, Z. Liu, S. Liu, D. Li, Y. Jia, and P. Huang, "Sparse coding of 2D-slice Zernike moments for SAR ATR," International Journal of Remote Sensing, vol. 38, no. 2, pp. 412–431, 2017.
  10. C. Clemente, L. Pallotta, D. Gaglione, A. De Maio, and J. J. Soraghan, "Automatic target recognition of military vehicles with Krawtchouk moments," IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 1, pp. 493–500, 2017.
  11. B. Ding, G. Wen, C. Ma et al., "Target recognition in synthetic aperture radar images using binary morphological operations," Journal of Applied Remote Sensing, vol. 10, no. 4, Article ID 046006, 2016.
  12. S. Cui, F. Miao, Z. Jin et al., "Target recognition of synthetic aperture radar images based on matching and similarity evaluation between binary regions," IEEE Access, vol. 7, pp. 154398–154413, 2019.
  13. C. Shan, B. Huang, and M. Li, "Binary morphological filtering of dominant scattering area residues for SAR target recognition," Computational Intelligence and Neuroscience, vol. 2018, Article ID 9680465, 2018.
  14. S. Papson and R. M. Narayanan, "Classification via the shadow region in SAR imagery," IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 2, pp. 969–980, 2012.
  15. G. C. Anagnostopoulos, "SVM-based target recognition from synthetic aperture radar images using target region outline descriptors," Nonlinear Analysis, vol. 71, no. 2, pp. e2934–e2939, 2009.
  16. J. Tan, X. Fan, S. Wang et al., "Target recognition of SAR images by partially matching of target outlines," Journal of Electromagnetic Waves and Applications, vol. 33, no. 7, pp. 865–881, 2019.
  17. X. Zhu, Z. Huang, and Z. Zhang, "Automatic target recognition of synthetic aperture radar images via Gaussian mixture modeling of target outlines," Optik, vol. 194, Article ID 162922, 2019.
  18. A. K. Mishra, "Validation of PCA and LDA for SAR ATR," in Proceedings of the IEEE TENCON, pp. 1–6, Hyderabad, India, November 2008.
  19. A. K. Mishra and T. Motaung, "Application of linear and nonlinear PCA to SAR ATR," in Proceedings of the IEEE 25th International Conference Radioelektronika, pp. 349–354, Pardubice, Czech Republic, April 2015.
  20. Z. Cui, J. Feng, Z. Cao, H. Ren, and J. Yang, "Target recognition in synthetic aperture radar images via non-negative matrix factorisation," IET Radar, Sonar & Navigation, vol. 9, no. 9, pp. 1376–1385, 2015.
  21. Y. Huang, J. Pei, J. Yang, B. Wang, and X. Liu, "Neighborhood geometric center scaling embedding for SAR ATR," IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 1, pp. 180–192, 2014.
  22. M. Yu, G. Dong, H. Fan et al., "SAR target recognition via local sparse representation of multi-manifold regularized low-rank approximation," Remote Sensing, vol. 10, no. 2, p. 211, 2018.
  23. G. Dong, G. Kuang, N. Wang, L. Zhao, and J. Lu, "SAR target recognition via joint sparse representation of monogenic signal," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 7, pp. 3316–3328, 2015.
  24. M. Chang, X. You, and Z. Cao, "Bidimensional empirical mode decomposition for SAR image feature extraction with application to target recognition," IEEE Access, vol. 7, pp. 135720–135731, 2019.
  25. M. Amrani, F. Jiang, Y. Xu, S. Liu, and S. Zhang, "SAR-oriented visual saliency model and directed acyclic graph support vector metric based target classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 10, pp. 3794–3810, 2018.
  26. L. M. Novak, S. D. Halversen, G. Owirka, and M. Hiett, "Effects of polarization and resolution on SAR ATR," IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 1, pp. 102–116, 1997.
  27. W. Yang, J. Lu, and Z. Cao, "A new algorithm of target classification based on maximum and minimum polarizations," in Proceedings of the CIE International Conference on Radar, pp. 1–4, Shanghai, China, October 2006.
  28. L. C. Potter and R. L. Moses, "Attributed scattering centers for SAR ATR," IEEE Transactions on Image Processing, vol. 6, no. 1, pp. 79–91, 1997.
  29. B. Ding, G. Wen, J. Zhong, C. Ma, and X. Yang, "A robust similarity measure for attributed scattering center sets with application to SAR ATR," Neurocomputing, vol. 219, pp. 130–143, 2017.
  30. B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, "Target recognition in synthetic aperture radar images via matching of attributed scattering centers," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 7, pp. 3334–3347, 2017.
  31. X. Zhang, "Noise-robust target recognition of SAR images based on attribute scattering center matching," Remote Sensing Letters, vol. 10, no. 2, pp. 186–194, 2019.
  32. Q. Zhao and J. C. Principe, "Support vector machines for SAR automatic target recognition," IEEE Transactions on Aerospace and Electronic Systems, vol. 37, no. 2, pp. 643–654, 2001.
  33. H. Liu and S. Li, "Decision fusion of sparse representation and support vector machine for SAR image target recognition," Neurocomputing, vol. 113, pp. 97–104, 2013.
  34. J. J. Thiagarajan, K. N. Ramamurthy, P. Knee et al., "Sparse representations for automatic target classification in SAR images," in Proceedings of the 4th International Symposium on Communications, Control and Signal Processing, pp. 1–4, Limassol, Cyprus, March 2010.
  35. H. Song, K. Ji, Y. Zhang, X. Xing, and H. Zou, "Sparse representation-based SAR image target classification on the 10-class MSTAR data set," Applied Sciences, vol. 6, no. 1, p. 26, 2016.
  36. C. Ning, W. Liu, G. Zhang, and X. Wang, "Synthetic aperture radar target recognition using weighted multi-task kernel sparse representation," IEEE Access, vol. 7, pp. 181202–181212, 2019.
  37. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the NIPS, pp. 1096–1105, Lake Tahoe, NV, USA, December 2012.
  38. C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the CVPR, pp. 1–9, Boston, MA, USA, June 2015.
  39. X. X. Zhu, D. Tuia, L. Mou et al., "Deep learning in remote sensing: a comprehensive review and list of resources," IEEE Geoscience and Remote Sensing Magazine, vol. 5, no. 4, pp. 8–36, 2017.
  40. D. E. Morgan, "Deep convolutional neural networks for ATR from SAR imagery," in Proceedings of the SPIE, pp. 1–13, Szczecin, Poland, July 2015.
  41. S. Chen, H. Wang, F. Xu et al., "Target classification using the deep convolutional networks for SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 6, pp. 1685–1697, 2016.
  42. H. Furukawa, "Deep learning for target classification from SAR imagery: data augmentation and translation invariance," IEICE Technical Report, vol. 117, no. 182, pp. 13–17, 2017.
  43. J. Zhao, Z. Zhang, W. Yu, and T.-K. Truong, "A cascade coupled convolutional neural network guided visual attention method for ship detection from SAR images," IEEE Access, vol. 6, pp. 50693–50708, 2018.
  44. L. Wang, X. Bai, and F. Zhou, "SAR ATR of ground vehicles based on ESENet," Remote Sensing, vol. 11, no. 11, p. 1316, 2019.
  45. P. Zhao, K. Liu, H. Zou, and X. Zhen, "Multi-stream convolutional neural network for SAR automatic target recognition," Remote Sensing, vol. 10, no. 9, p. 1473, 2018.
  46. M. Amrani and F. Jiang, "Deep feature extraction and combination for synthetic aperture radar target classification," Journal of Applied Remote Sensing, vol. 11, no. 4, Article ID 042616, 2017.
  47. M.-H. Yeh, "The complex bidimensional empirical mode decomposition," Signal Processing, vol. 92, no. 2, pp. 523–541, 2012.
  48. M.-H. Yeh, "Multi-focus color image fusion based on bivariate and complex bidimensional empirical mode decompositions," in Proceedings of the IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC), pp. 358–363, Hong Kong, China, August 2012.
  49. N. E. Huang, Z. Shen, S. Long et al., "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," Proceedings of the Royal Society of London A, vol. 454, no. 1971, pp. 903–995, 1998.
  50. Y. Qin, L. Qiao, Q. Wang, X. Ren, and C. Zhu, "Bidimensional empirical mode decomposition method for image processing in sensing system," Computers & Electrical Engineering, vol. 68, pp. 215–224, 2018.
  51. J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part II: convex relaxation," Signal Processing, vol. 86, no. 3, pp. 589–602, 2006.
  52. S. Ji, D. Dunson, and L. Carin, "Multitask compressive sensing," IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 92–106, 2009.
  53. H. Zhang, N. M. Nasrabadi, Y. Zhang, and T. S. Huang, "Multi-view automatic target recognition using joint sparse representation," IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 3, pp. 2481–2497, 2012.
  54. B. Ding and G. Wen, "Exploiting multi-view SAR images for robust target recognition," Remote Sensing, vol. 9, no. 11, p. 1150, 2017.
  55. S. Liu and J. Yang, "Target recognition in synthetic aperture radar images via joint multifeature decision fusion," Journal of Applied Remote Sensing, vol. 12, no. 1, Article ID 016012, 2018.
  56. Z. Zhang, "Joint classification of multiresolution representations with discrimination analysis for SAR ATR," Journal of Electronic Imaging, vol. 27, no. 4, Article ID 043030, 2018.
  57. L. Zhu, "Selection of multi-level deep features via Spearman rank correlation for synthetic aperture radar target recognition using decision fusion," IEEE Access, vol. 8, pp. 133914–133927, 2020.

Copyright © 2021 Xinying Miao and Yunlong Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

