Combination of Joint Representation and Adaptive Weighting for Multiple Features with Application to SAR Target Recognition

Liqun Yu, Lu Wang, Yongxing Xu

Research Article | Open Access | Scientific Programming, vol. 2021, Article ID 9063419, 9 pages, 2021. https://doi.org/10.1155/2021/9063419
Special Issue: Machine Learning in Image and Video Processing


Academic Editor: Bai Yuan Ding
Received: 22 Apr 2021 | Revised: 07 May 2021 | Accepted: 18 May 2021 | Published: 25 May 2021

Abstract

For the synthetic aperture radar (SAR) target recognition problem, a method combining multifeature joint classification and adaptive weighting is proposed with innovations in fusion strategies. Zernike moments, nonnegative matrix factorization (NMF), and monogenic signal are employed as the feature extraction algorithms to describe the characteristics of original SAR images with three corresponding feature vectors. Based on the joint sparse representation model, the three types of features are jointly represented. For the reconstruction error vectors from different features, an adaptive weighting algorithm is used for decision fusion. That is, the weights are adaptively obtained under the framework of linear fusion to achieve a good fusion result. Finally, the target label is determined according to the fused error vector. Experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset under the standard operating condition (SOC) and four extended operating conditions (EOC), i.e., configuration variants, depression angle variances, noise interference, and partial occlusion. The results verify the effectiveness and robustness of the proposed method.

1. Introduction

Synthetic aperture radar (SAR) target recognition has been researched for decades, since the 1990s [1]. According to a comprehensive review of the current literature, existing SAR target recognition methods can be divided along several lines. From the aspect of target description, the methods can be categorized as template-based and model-based. In the former, the references for the test sample are SAR images collected under different conditions (e.g., azimuths, depression angles, backgrounds), called training samples [2-4]. In the latter, the target characteristics are generated by models, including CAD and scattering center models [5-10]. From the aspect of the decision engine, the methods are distinguished as feature-based and classifier-based. The former employs or designs specific features for SAR images so that their discrimination can be exploited. The latter adopts or develops suitable classifiers for SAR target recognition so that the overall performance can be improved. According to previous works, the features used in SAR target recognition cover geometric, transformation, and electromagnetic ones. The geometric shape features describe the target area and contour distributions [11-21], such as Zernike moments and outline descriptors. The transformation features can be further divided into two subcategories, projection and decomposition features. The former aims to find optimal projection directions through the learning of training samples, so the high dimension of the original images can be reduced efficiently. Typical algorithms for projection features include principal component analysis (PCA) [22], nonnegative matrix factorization (NMF) [23], etc. The latter decomposes the original image over a series of signal bases to obtain descriptors at different layers. Representative algorithms for decomposition features include wavelet decomposition [24], the monogenic signal [25, 26], bidimensional empirical mode decomposition (BEMD) [27], etc. The electromagnetic features focus on the radar backscattering characteristics of targets, e.g., the attributed scattering centers [28-32]. Classifiers are usually applied after feature extraction to make the final decisions. The classifiers in previous SAR target recognition methods were mainly inherited from the traditional pattern recognition field, such as K-nearest neighbors (KNN) [22], support vector machine (SVM) [33, 34], sparse representation-based classification (SRC) [34-36], and joint sparse representation [37-39]. In recent years, with the development of deep learning theory, relevant models represented by convolutional neural networks (CNN) [40-43] have also been continuously applied to SAR target recognition with high effectiveness.

Considering the properties of different types of features, multifeature SAR target recognition methods were developed to combine their strengths. These methods can generally be divided into parallel fusion, hierarchical fusion, and joint fusion. Parallel fusion classifies different features independently and then fuses their decisions [44, 45]. Hierarchical fusion classifies different features sequentially, and a reliable decision at an earlier stage makes the remaining stages unnecessary [43, 46, 47]. Joint fusion mainly makes use of multitask learning algorithms, which classify different features in the same framework, such as joint sparse representation [39]. Building on these works, this paper proposes a SAR target recognition method that combines joint representation of multiple features with adaptive weighting. Three types of features, i.e., Zernike moments, NMF, and the monogenic signal, are used to describe the target characteristics in SAR images; they reflect the target shape, pixel distribution, and time-frequency properties, respectively. In this sense, the three features have good complementarity. The joint sparse representation model [48, 49] is used to represent the three features, employing their inner correlations to improve the representation accuracy. In the traditional decision-making mechanism based on joint sparse representation, the reconstruction errors of different tasks are directly added, and the decision is made according to the minimum error. In fact, different tasks deserve different weights because they have different discrimination capabilities, so the idea of equal weights has certain shortcomings. As a remedy, this paper uses the adaptive weighting algorithm proposed in [50] to obtain suitable weights for different features. For the reconstruction error vectors of the different types of features, the adaptive weights are solved and used for linear fusion. Finally, the target label of the test sample is decided based on the fused error vector. In the experiments, tests and verifications are carried out on the moving and stationary target acquisition and recognition (MSTAR) dataset. The results of typical experimental setups show the effectiveness and robustness of the proposed method.

2. Extraction of Multiple Features

2.1. Zernike Moments

Moment features are useful for describing the shape and outline distribution of a region. The well-known Hu moments maintain good effectiveness for images with relatively low noise levels. However, for SAR images with strong noise, rotations, and translations, the Hu moments may lose their adaptability. The Zernike moments maintain high rotation invariance and noise robustness, which makes them more suitable for describing the regional features of SAR images [11-13].

With the image expressed as f(\rho, \theta) in polar coordinates, the Zernike moments of the input image are calculated as follows:

Z_{nm} = \frac{n+1}{\pi} \int_0^{2\pi} \int_0^1 f(\rho, \theta)\, V_{nm}^{*}(\rho, \theta)\, \rho \, d\rho \, d\theta,

where n - |m| is even and |m| \le n.

The Zernike polynomials V_{nm}(\rho, \theta) = R_{nm}(\rho)\, e^{jm\theta} are a set of orthogonal, complete, complex-valued functions on the unit circle x^2 + y^2 \le 1, complying with the following constraint:

\int_0^{2\pi} \int_0^1 V_{nm}^{*}(\rho, \theta)\, V_{pq}(\rho, \theta)\, \rho \, d\rho \, d\theta = \frac{\pi}{n+1}\, \delta_{np}\, \delta_{mq},

where \delta denotes the Kronecker delta and R_{nm}(\rho) is the radial polynomial.

Based on the Zernike moments, rotation invariants can be generated as follows:

\|Z_{nm}\| = |Z_{nm}|,

since rotating the image by an angle \alpha only changes the phase of the moment, Z'_{nm} = Z_{nm} e^{-jm\alpha}, leaving its magnitude unchanged.

Before calculating the Zernike moments of an image, it is necessary to place the center of the image at the origin of the coordinates and map the pixels to the inside of the unit circle. Based on the principle of Zernike moments, moments of any order can be obtained; in comparison, higher-order moments contain more detailed information about the objects in the image. With reference to [11], this paper selects the Zernike moments of the 6th to 11th orders to construct a feature vector, which describes the target area in the SAR image.
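To make the computation concrete, the following NumPy sketch evaluates a discrete Zernike moment on the unit circle. It is only an illustration of the definitions above: the helper names (radial_poly, zernike_moment), the discrete normalization, and the example (n, m) pairs are assumptions of this sketch, not the exact implementation of [11].

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Radial polynomial R_{nm}(rho); requires n - |m| even, |m| <= n."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Discrete Z_{nm} of a grayscale image mapped onto the unit circle."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Center the image at the origin and scale pixels into the unit circle.
    x = (2 * xx - w + 1) / (w - 1)
    y = (2 * yy - h + 1) / (h - 1)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    V_conj = radial_poly(rho, n, m) * np.exp(-1j * m * theta)  # V_{nm}^*
    # Discrete approximation of the integral over the unit circle.
    return (n + 1) * np.sum(img[mask] * V_conj[mask]) / mask.sum()

# Rotation-invariant feature vector from a set of (n, m) pairs (here a
# hypothetical selection at orders 6-11; the exact pairs follow [11]):
# feats = [abs(zernike_moment(img, n, m)) for (n, m) in [(6, 0), (7, 1)]]
```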

2.2. NMF

NMF provides a way to efficiently reduce data dimensionality. Different from traditional PCA, NMF introduces a nonnegativity constraint, and the resulting projection matrix better preserves the valid information, as validated in previous works [23].

For an input nonnegative matrix X \in \mathbb{R}^{m \times n}, it is decomposed by NMF as follows:

X \approx WH,

where W \in \mathbb{R}^{m \times r}, H \in \mathbb{R}^{r \times n}, and W, H \ge 0. The reconstruction error is employed to evaluate the decomposition precision, which is defined as the squared Euclidean distance as follows:

E(W, H) = \| X - WH \|_F^2.

The above objective function can be iteratively minimized with the multiplicative update rules

H_{ij} \leftarrow H_{ij} \frac{(W^{T} X)_{ij}}{(W^{T} W H)_{ij}}, \qquad W_{ij} \leftarrow W_{ij} \frac{(X H^{T})_{ij}}{(W H H^{T})_{ij}},

where r \ll \min(m, n) is the chosen rank.

With the solution of the matrix W, its transpose W^{T} is used as the projection matrix for feature extraction. With reference to [23], this paper employs NMF to obtain an 80-dimensional feature vector for an input SAR image.
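The update rules above translate directly into a few lines of NumPy. This is a minimal sketch of the stated objective; the small constant eps (an assumption of this sketch) only guards against division by zero.

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-9, seed=0):
    """Minimize ||X - WH||_F^2 by the multiplicative updates above.

    X : (m, n) nonnegative matrix, e.g., vectorized SAR training images
        stacked as columns; r : factorization rank, r << min(m, n).
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Both updates keep the factors nonnegative and do not increase E.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Feature extraction as described in Section 2.2: W^T acts as the
# projection matrix, giving an 80-dimensional feature per image.
# W, _ = nmf(X_train, r=80)
# feature = W.T @ x_test        # x_test: one vectorized SAR image
```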

2.3. Monogenic Signal

As a 2D extension of the traditional analytic signal, the monogenic signal has been successfully applied to feature extraction from SAR images [25, 26]. Denote the input image as f(z), where z = (x, y) represents the pixel location. The monogenic signal is calculated as follows:

f_M(z) = f(z) - (i, j) \cdot f_R(z),

where f_R(z) is the Riesz transform of f(z), and i and j are imaginary units along different directions.

Based on f_M(z), three monogenic features can be generated to describe the local amplitude, local phase, and local orientation:

A(z) = \sqrt{f^2(z) + f_i^2(z) + f_j^2(z)}, \qquad \varphi(z) = \arctan\frac{\sqrt{f_i^2(z) + f_j^2(z)}}{f(z)}, \qquad \theta(z) = \arctan\frac{f_j(z)}{f_i(z)},

where f_i(z) and f_j(z) are the i-imaginary and j-imaginary components of the monogenic signal, respectively.

As reported in previous works, the monogenic features reveal the time-frequency properties of the original SAR image, including its intensity distribution and structural and geometric information. With reference to [25], this paper reorganizes the three features into one vector, called the monogenic feature vector.
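A common way to obtain the Riesz components is through the frequency domain. The sketch below follows that route and derives the three local features from the formulas above; the kernel sign convention and the absence of the multiscale log-Gabor prefiltering used in [25] are simplifying assumptions.

```python
import numpy as np

def monogenic_features(img):
    """Local amplitude, phase, and orientation of the monogenic signal.

    The Riesz transform is applied in the frequency domain; no band-pass
    (log-Gabor) prefiltering is performed, unlike the multiscale setup
    of [25].
    """
    h, w = img.shape
    u = np.fft.fftfreq(w)[None, :]          # horizontal frequencies
    v = np.fft.fftfreq(h)[:, None]          # vertical frequencies
    mag = np.sqrt(u ** 2 + v ** 2)
    mag[0, 0] = 1.0                          # avoid division by zero at DC
    F = np.fft.fft2(img)
    f_i = np.real(np.fft.ifft2(F * (-1j * u / mag)))  # i-imaginary component
    f_j = np.real(np.fft.ifft2(F * (-1j * v / mag)))  # j-imaginary component
    amplitude = np.sqrt(img ** 2 + f_i ** 2 + f_j ** 2)
    phase = np.arctan2(np.sqrt(f_i ** 2 + f_j ** 2), img)
    orientation = np.arctan2(f_j, f_i)
    return amplitude, phase, orientation
```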

3. Joint Classification with Adaptive Weights

3.1. Joint Sparse Representation

The joint sparse representation is an extended version of traditional sparse representation, which handles several related problems simultaneously [48, 49]. As the inner correlations of the different sparse representations are exploited, the overall reconstruction precision can be improved. Multiple features extracted from the same SAR image are related and therefore suitable for joint sparse representation. In the following, the basic process of jointly representing multiple features is described. Assume there are M different features from the test sample y, denoted as y^{(1)}, y^{(2)}, \ldots, y^{(M)}. A general form of joint sparse representation is

\min_{\beta} \sum_{k=1}^{M} \left\| y^{(k)} - A^{(k)} \alpha^{(k)} \right\|_2^2, \tag{9}

subject to sparsity constraints on each \alpha^{(k)}, where A^{(k)} is the global dictionary corresponding to the kth feature and \beta = [\alpha^{(1)}, \alpha^{(2)}, \ldots, \alpha^{(M)}] is the matrix established by the coefficient vectors of the different features.

It can be seen that the objective function in equation (9) is equivalent to solving the sparse representation problems of the different features separately. In this sense, it can hardly make use of the inner correlations of the different features. As a remedy, the joint sparse representation model in previous works imposes the \ell_1/\ell_2 mixed norm on the coefficient matrix, yielding the new objective function

\min_{\beta} \sum_{k=1}^{M} \left\| y^{(k)} - A^{(k)} \alpha^{(k)} \right\|_2^2 + \lambda \left\| \beta \right\|_{1,2}, \tag{10}

where \lambda is the regularization factor and \|\beta\|_{1,2} is the sum of the \ell_2 norms of the rows of \beta.

During the solution of equation (10), the coefficient vectors of the different features tend to share a similar support pattern because of the \ell_1/\ell_2 norm constraint. Therefore, the inner correlations among different features can be employed. It has been reported and validated that simultaneous orthogonal matching pursuit (SOMP) [48] and multitask compressive sensing [49] are suitable for solving this problem. With the solution of the coefficient matrix \beta, the reconstruction errors of the different training classes can be calculated to further determine the target label as follows:

r(c) = \sum_{k=1}^{M} \left\| y^{(k)} - A_c^{(k)} \alpha_c^{(k)} \right\|_2^2, \qquad \text{identity}(y) = \arg\min_c r(c), \tag{11}

where A_c^{(k)} is the local dictionary of the kth feature with regard to the cth class and \alpha_c^{(k)} is the corresponding coefficient vector.
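As an illustration of how SOMP enforces a shared support across features, consider the following simplified sketch. The variable layout (lists Y and A of per-feature vectors and dictionaries, an atom-to-class label array) is an assumption of this sketch rather than the exact algorithm of [48].

```python
import numpy as np

def somp(Y, A, sparsity):
    """Simultaneous OMP over M feature channels with a shared support.

    Y : list of M feature vectors y^(k); A : list of M dictionaries A^(k)
    with L2-normalized columns, where column j of every A^(k) comes from
    the same training sample, so one atom index is shared by all channels.
    """
    M, N = len(Y), A[0].shape[1]
    residuals = [y.astype(float).copy() for y in Y]
    support, coeffs = [], [None] * M
    for _ in range(sparsity):
        # Joint atom selection: sum correlation magnitudes over channels.
        score = np.zeros(N)
        for k in range(M):
            score += np.abs(A[k].T @ residuals[k])
        score[support] = -np.inf             # do not reselect atoms
        support.append(int(np.argmax(score)))
        # Per-channel least-squares refit on the shared support.
        for k in range(M):
            sub = A[k][:, support]
            coeffs[k], *_ = np.linalg.lstsq(sub, Y[k], rcond=None)
            residuals[k] = Y[k] - sub @ coeffs[k]
    return support, coeffs

def class_errors(Y, A, atom_labels, support, coeffs, n_classes):
    """Per-feature, per-class reconstruction errors as in equation (11)."""
    errs = np.zeros((len(Y), n_classes))
    for c in range(n_classes):
        for k in range(len(Y)):
            keep = [i for i, s in enumerate(support) if atom_labels[s] == c]
            recon = A[k][:, [support[i] for i in keep]] @ coeffs[k][keep]
            errs[k, c] = np.linalg.norm(Y[k] - recon) ** 2
    return errs  # row k is the reconstruction error vector of feature k
```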

3.2. Decision Fusion with Adaptive Weights

Equation (11) gives the basic decision-making mechanism of the traditional joint sparse representation model applied to classification. In essence, this is linear weighted fusion with equal weights; that is, the contributions of different features to the final recognition are treated as identical. However, in practice, the effectiveness of different features for recognition often differs, so special consideration is required. Linear weighted fusion is an effective method for processing multisource information, and its core element is determining the weights of the different components in a principled way. To this end, this paper adopts the adaptive weight determination algorithm proposed in [50]. To simplify the description, the reconstruction error vectors of two types of features (denoted as e_1 and e_2) are taken as an example to describe the fusion process.

Step 1. Define the fused error vector as e_f = w_1 e_1 + w_2 e_2, in which w_1, w_2 \ge 0 and w_1 + w_2 = 1 are the weights to be determined.

Step 2. Normalize e_1 and e_2 using \bar{e}_1 = (e_1 - \min_1)/(\max_1 - \min_1) and \bar{e}_2 = (e_2 - \min_2)/(\max_2 - \min_2), in which \max_1 and \min_1 are the maximum and minimum of e_1, and \max_2 and \min_2 are the maximum and minimum of e_2.

Step 3. Reorganize \bar{e}_1 and \bar{e}_2 in an ascending manner to obtain the new sequences \hat{e}_1 and \hat{e}_2, and further define the discrimination measures d_1 = \hat{e}_1(2) - \hat{e}_1(1) and d_2 = \hat{e}_2(2) - \hat{e}_2(1).

Step 4. e_f = w_1 \bar{e}_1 + w_2 \bar{e}_2 with w_1 = d_1/(d_1 + d_2) and w_2 = d_2/(d_1 + d_2) is achieved as the fused reconstruction error; the exact weight derivation is given in [50]. The target label is decided as the class with the minimum entry of e_f.
The above algorithm analyzes the distribution of each reconstruction error vector while comparing their individual characteristics, so the resulting weights reflect the importance of the different components better than traditional experiential weights such as equal ones. For the fusion of the reconstruction errors of the three types of features in this paper, the same idea is adopted; the specific algorithm can be found in [50]. Figure 1 shows the basic process of the proposed method. Under the joint sparse representation model, the three types of features produce the reconstruction error vectors corresponding to each training class. The final reconstruction error vector is obtained using the adaptive weighted fusion algorithm, and the target label of the test sample is then determined according to the minimum error.
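A minimal sketch of the two-feature fusion in Steps 1 to 4 is given below. The gap between the two smallest normalized errors as the discrimination measure is one plausible reading of Step 3; the exact formulation should be taken from [50].

```python
import numpy as np

def adaptive_fusion(e1, e2, eps=1e-12):
    """Fuse two reconstruction-error vectors following Steps 1-4."""
    def minmax(e):                      # Step 2: min-max normalization
        return (e - e.min()) / (e.max() - e.min() + eps)
    n1, n2 = minmax(np.asarray(e1)), minmax(np.asarray(e2))
    s1, s2 = np.sort(n1), np.sort(n2)   # Step 3: ascending reorganization
    d1 = s1[1] - s1[0]                  # gap between two smallest errors
    d2 = s2[1] - s2[0]                  # (discrimination measure)
    w1, w2 = d1 / (d1 + d2 + eps), d2 / (d1 + d2 + eps)
    return w1 * n1 + w2 * n2            # Step 4: fused error vector e_f

# The target label is the index of the minimum fused error:
# label = int(np.argmin(adaptive_fusion(err_feat1, err_feat2)))
```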

4. Experiments and Analysis

4.1. Experimental Setup

The MSTAR dataset is used to test and analyze the proposed method. Figure 2 shows the 10 types of typical vehicle targets included in the dataset, e.g., tanks, armored vehicles, and trucks. For each target, the MSTAR dataset collects samples over a relatively complete azimuth range at several depression angles. Table 1 shows the basic training and test sets used in the subsequent experiments, which are from the two depression angles of 17° and 15°, respectively. Accordingly, the test and training sets have only a small depression angle difference, and their overall similarity is relatively high. Such a situation is generally considered a standard operating condition (SOC). On this basis, simulation algorithms, including noise addition and occlusion generation, are developed to obtain test samples under extended operating conditions (EOC). Furthermore, samples from different target configurations and depression angles can be employed to set up EOCs such as configuration variants and depression angle variances. Based on the above conditions, the performance of the proposed method can therefore be investigated and verified in a comprehensive way.


Table 1: Training and test sets under SOC.

Type      Training samples (17°)   Test samples (15°)
BMP2      233                      195
BTR70     233                      196
T72       232                      196
T62       299                      273
BRDM2     298                      274
BTR60     256                      195
ZSU23/4   299                      274
D7        299                      274
ZIL131    299                      274
2S1       299                      274

Reference methods selected from published works are used for comparison, including ones using single features and ones using multiple features. The former three use single features, i.e., Zernike moments [11], NMF [23], and the monogenic signal [25], consistent with the proposed method. The latter three use decision fusion strategies, including parallel fusion [45], hierarchical fusion [46], and joint classification [39], in which the three features being classified are the same as in the proposed method. Notably, the joint classification reference method only performs joint sparse representation and directly adds the reconstruction errors of the different features with no adaptive weighting.

4.2. SOC

With reference to the basic training and test sets in Table 1, the proposed method is tested under SOC. Figure 3 presents the recognition results of the proposed method in the form of a confusion matrix. According to the correspondence between the horizontal and vertical coordinates, the diagonal elements mark the correct recognition rates of the different categories. Define the average recognition rate Pav as the proportion of test samples correctly classified in the entire test set. The Pav of the proposed method is calculated to be 99.38%. Table 2 shows the Pavs of all the methods, obtained by the same process on the same platform. The effectiveness of the proposed method is intuitively validated by its highest Pav. Compared with the three methods using single features, the multifeature methods achieve obvious advantages, reflecting the complementarity between different features. Among the four multifeature methods, the idea of joint classification has some advantage over parallel fusion and hierarchical fusion, mainly because the inner correlations of the different features are exploited. In comparison with the joint classification method, the proposed one further enhances the overall recognition performance by introducing adaptive weights, verifying the effectiveness of the proposed strategy.


Table 2: Average recognition rates of the methods under SOC.

Method    Proposed  Zernike  NMF    Mono   Parallel fusion  Hierarchical fusion  Joint classification
Pav (%)   99.38     98.42    98.56  98.74  99.08            99.13                99.18
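For completeness, the Pav metric defined above amounts to the trace of a count-valued confusion matrix divided by the total number of test samples. A minimal sketch, assuming rows index the true class and columns the decided class:

```python
import numpy as np

def average_recognition_rate(conf):
    """Pav and per-class rates from a confusion matrix of counts
    (rows: true class, columns: decided class)."""
    conf = np.asarray(conf, dtype=float)
    per_class = np.diag(conf) / conf.sum(axis=1)  # diagonal rates
    pav = np.trace(conf) / conf.sum()             # overall correct fraction
    return pav, per_class
```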

4.3. EOC1-Configuration Variants

For different military applications, the same target may have different configurations. When the test sample and the training samples come from different configurations, the difficulty of the target recognition problem increases. Table 3 shows the training and test sets under configuration variants, where the test samples of BMP2 and T72 come from configurations different from those in the training set. Table 4 shows the recognition results of the proposed method for the different configurations. The recognition rates show that the similarity between the different test configurations and the reference configurations in the training set varies. Table 5 compares the Pavs of different methods under this condition. The performance advantage of the multifeature methods over the single-feature ones remains obvious. Within the joint classification framework, the proposed method makes full use of the classification advantages of different features for configuration differences by introducing adaptive weights, thereby improving the final recognition performance.


Table 3: Training and test sets under configuration variants.

Type    Training set (17°)            Test set (15°, 17°)
        Configuration  Samples        Configuration  Samples
BMP2    9563           233            9566           428
                                      C21            429
BRDM2   —              298            —              —
BTR70   —              233            —              —
T72     132            232            812            426
                                      A04            573
                                      A05            573
                                      A07            573
                                      A10            567


Table 4: Recognition results of the proposed method under configuration variants.

Type    Configuration   Decided class                   Pav (%)
                        BMP2   BRDM2   BTR70   T72
BMP2    9566            422    1       2       3        98.60
        C21             425    2       0       2        99.07
T72     812             2      0       3       421      98.83
        A04             2      3       2       566      98.78
        A05             2      2       6       563      98.25
        A07             2      1       3       567      98.95
        A10             3      1       4       559      98.59
Overall                                                 98.76


Table 5: Average recognition rates of the methods under configuration variants.

Method    Proposed  Zernike  NMF    Mono   Parallel fusion  Hierarchical fusion  Joint classification
Pav (%)   98.76     97.67    97.12  97.36  97.82            98.08                98.12

4.4. EOC2-Depression Angle Variances

The test sample to be classified may come from a depression angle different from that of the training samples. Considering the sensitivity of SAR images to view angles, it is difficult to correctly classify test samples from notably different depression angles. Table 6 sets up the training and test samples with different depression angles, in which the test sets include samples from 30° and 45° depression angles. Figure 4 shows the Pavs of different methods at the two depression angles. The large depression angle difference (45°) causes significant performance degradation for all the methods. Comparing the results at the two depression angles, the Pavs of the proposed method are the highest, verifying its robustness. Based on multifeature joint representation, the proposed method adaptively obtains the weights of different features, so their effectiveness under depression angle variances can be better utilized.


Table 6: Training and test sets under depression angle variances.

Type      Training samples (17°)   Test set
                                   Depression angle  Samples
2S1       299                      30°               288
                                   45°               303
BRDM2     298                      30°               287
                                   45°               303
ZSU23/4   299                      30°               288
                                   45°               303

4.5. EOC3-Noise Corruption

In practice, the captured test samples of noncooperative targets are contaminated by varying degrees of noise. As the difference in signal-to-noise ratio (SNR) between the test and training samples increases, their correlation decreases. Based on the basic training and test sets in Table 1, this paper uses noise simulation [31] to construct test sets at different SNRs from the original test samples, namely -10 dB, -5 dB, 0 dB, 5 dB, and 10 dB. The proposed and reference methods are tested at the different noise levels, and the statistical results are shown in Figure 5. Intuitively, noise corruption has a significant impact on recognition performance. In contrast, the proposed method maintains the highest Pav at each SNR, verifying its robustness. Similar to the results under SOC, the performance advantage of the multifeature methods over the single-feature ones remains obvious. Among the three single-feature methods, the ones using Zernike moments and monogenic features are more robust than the one using NMF features, which reflects the different sensitivities of the features to noise interference. By effectively fusing different features through joint classification and adaptive weighting, the proposed method integrates them to improve the reliability of the final decision.
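The specific noise model follows [31] and is not reproduced here. As an illustration of how a test set at a prescribed SNR can be simulated, the sketch below adds white Gaussian noise whose power is set by the target SNR; the additive Gaussian assumption is this sketch's own, not necessarily the model of [31].

```python
import numpy as np

def add_noise(img, snr_db, rng=None):
    """Corrupt an image with white Gaussian noise at a target SNR (dB).

    Assumption: SNR is the ratio of mean image power to noise power;
    [31] defines its own noise model, so this is only a stand-in.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise_power = np.mean(img ** 2) / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)

# Test sets at the five SNRs used in Figure 5:
# noisy_sets = {snr: [add_noise(x, snr) for x in test_images]
#               for snr in (-10, -5, 0, 5, 10)}
```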

4.6. EOC4-Partial Occlusion

Occlusion is also very common in practical applications: part of the ground target cannot be illuminated by the radar waves, so it produces no echoes. Using an idea similar to the noise simulation, this paper constructs test sets at different occlusion levels from the test samples in Table 1 according to the target occlusion model in [31]. The recognition results of the proposed and reference methods are obtained as shown in Figure 6. The comparison shows that the proposed method maintains the highest Pavs at the different occlusion levels, reflecting its robustness. The proposed method comprehensively exploits multiple types of features and uses adaptive weights to further exploit the advantages of the features that are more effective under occlusion. Therefore, the final recognition results are improved.
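The occlusion model of [31] is likewise not reproduced here. The sketch below only conveys the general idea of erasing part of the target region from one direction; the brightness threshold used to delimit the target and the directional removal are assumptions of this sketch.

```python
import numpy as np

def occlude(img, level, from_left=True, target_quantile=0.9):
    """Zero out a fraction `level` of the brightest pixels from one side.

    The brightest-decile proxy for the target region and the directional
    removal are assumptions of this sketch; [31] gives the actual model.
    """
    out = img.copy()
    ys, xs = np.nonzero(img >= np.quantile(img, target_quantile))
    order = np.argsort(xs if from_left else -xs)   # sweep across the target
    n_occ = int(level * len(order))
    out[ys[order[:n_occ]], xs[order[:n_occ]]] = 0.0
    return out
```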

5. Conclusion

This paper applies joint sparse representation and adaptive weighting to SAR target recognition. For the reconstruction error vectors of the three types of features resulting from the joint sparse representation, the corresponding weights are determined adaptively, reflecting the different contributions of the features to the final classification result. Based on the MSTAR dataset, experiments were carried out to test the recognition performance under a typical SOC and four representative EOCs. The proposed method achieves a Pav of 99.38% for the 10-class targets under SOC and superior performance over the reference methods under the different EOCs. The experimental results show the high effectiveness and robustness of the proposed method, which has clear advantages and potential in practical use.

Data Availability

The dataset used in this paper is publicly available.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was funded by the National Key Research and Development Program of China (No. 2017YFB1401800), the Philosophy and Social Sciences Research Planning Project of Heilongjiang Province (Nos. 20GLB119 and 19GLB327), and the Talents Plan of Harbin University of Science and Technology: Outstanding Youth Project (No. 2019-KYYWF-0216).

References

1. K. El-Darymli, E. W. Gill, P. McGuire, D. Power, and C. Moloney, "Automatic target recognition in synthetic aperture radar imagery: a state-of-the-art review," IEEE Access, vol. 4, pp. 6014–6058, 2016.
2. L. M. Novak, G. J. Owirka, W. S. Brower, and A. L. Weaver, "The automatic target-recognition system in SAIP," Lincoln Laboratory Journal, vol. 10, no. 2, pp. 187–202, 1997.
3. L. M. Novak, G. J. Owirka, and W. S. Brower, "Performance of 10- and 20-target MSE classifiers," IEEE Transactions on Aerospace and Electronic Systems, vol. 36, no. 4, pp. 1279–1289, 2000.
4. S. M. Verbout, W. W. Irving, and A. S. Hanes, "Improving a template-based classifier in a SAR automatic target recognition system by using 3-D target information," Lincoln Laboratory Journal, vol. 6, no. 1, pp. 53–76, 1993.
5. J. R. Diemunsch and J. Wissinger, "Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: search technology for a robust ATR," SPIE Algorithms for Synthetic Aperture Radar Imagery, vol. 3370, pp. 481–492, 1998.
6. T. D. Ross, J. J. Bradley, L. J. Hudson, and M. P. O'Connor, "So what's the problem? An MSTAR perspective," SPIE Algorithms for Synthetic Aperture Radar Imagery, vol. 3721, pp. 662–672, 1999.
7. E. Keydel, S. Lee, and J. Moore, "MSTAR extended operating conditions: a tutorial," SPIE, vol. 2757, pp. 228–242, 1996.
8. J. Zhou, Z. Shi, X. Cheng et al., "Automatic target recognition of SAR images based on global scattering center model," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 10, pp. 3713–3729, 2011.
9. B. Ding and G. Wen, "Target reconstruction based on 3-D scattering center model for robust SAR ATR," IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 7, pp. 3772–3785, 2018.
10. B. Ding and G. Wen, "A region matching approach based on 3-D scattering center model with application to SAR target recognition," IEEE Sensors Journal, vol. 18, no. 11, pp. 4623–4632, 2018.
11. M. Amoon and G. A. Rezai-Rad, "Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features," IET Computer Vision, vol. 8, no. 2, pp. 77–85, 2014.
12. S. Gishkori and B. Mulgrew, "Pseudo-Zernike moments based sparse representations for SAR image classification," IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 2, pp. 1037–1044, 2019.
13. X. Zhang, Z. Liu, S. Liu, D. Li, Y. Jia, and P. Huang, "Sparse coding of 2D-slice Zernike moments for SAR ATR," International Journal of Remote Sensing, vol. 38, no. 2, pp. 412–431, 2017.
14. P. Bolourchi, H. Demirel, and S. Uysal, "Target recognition in SAR images using radial Chebyshev moments," Signal, Image and Video Processing, vol. 11, no. 6, pp. 1–8, 2017.
15. C. Clemente, L. Pallotta, D. Gaglione, A. De Maio, and J. J. Soraghan, "Automatic target recognition of military vehicles with Krawtchouk moments," IEEE Transactions on Aerospace and Electronic Systems, vol. 53, no. 1, pp. 493–500, 2017.
16. B. Ding, G. Wen, C. Ma et al., "Target recognition in synthetic aperture radar images using binary morphological operations," Journal of Applied Remote Sensing, vol. 10, no. 4, Article ID 046006, 2016.
17. S. Cui, F. Miao, Z. Jin et al., "Target recognition of synthetic aperture radar images based on matching and similarity evaluation between binary regions," IEEE Access, vol. 7, pp. 154398–154413, 2019.
18. C. Shan, B. Huang, and M. Li, "Binary morphological filtering of dominant scattering area residues for SAR target recognition," Computational Intelligence and Neuroscience, vol. 2018, Article ID 9680465, 15 pages, 2018.
19. G. C. Anagnostopoulos, "SVM-based target recognition from synthetic aperture radar images using target region outline descriptors," Nonlinear Analysis, vol. 71, no. 2, pp. e2934–e2939, 2009.
20. J. Tan, X. Fan, S. Wang et al., "Target recognition of SAR images by partially matching of target outlines," Journal of Electromagnetic Waves and Applications, vol. 33, no. 7, pp. 865–881, 2019.
21. X. Zhu, Z. Huang, and Z. Zhang, "Automatic target recognition of synthetic aperture radar images via Gaussian mixture modeling of target outlines," Optik, vol. 194, Article ID 162922, 2019.
22. A. K. Mishra and T. Motaung, "Application of linear and nonlinear PCA to SAR ATR," in Proceedings of the IEEE 25th International Conference Radioelektronika (RADIOELEKTRONIKA), pp. 349–354, Pardubice, Czech Republic, April 2015.
23. Z. Cui, Z. Cao, J. Yang, J. Feng, and H. Ren, "Target recognition in synthetic aperture radar images via nonnegative matrix factorisation," IET Radar, Sonar & Navigation, vol. 9, no. 9, pp. 1376–1385, 2015.
24. W. Xiong, L. Cao, and Z. Hao, "Combining wavelet invariant moments and relevance vector machine for SAR target recognition," in Proceedings of the IET International Radar Conference, pp. 1–4, Xi'an, China, April 2009.
25. G. Dong and G. Kuang, "Classification on the monogenic scale space: application to target recognition in SAR image," IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2527–2539, 2015.
26. G. Dong, G. Kuang, N. Wang, L. Zhao, and J. Lu, "SAR target recognition via joint sparse representation of monogenic signal," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 7, pp. 3316–3328, 2015.
27. M. Chang, X. You, and Z. Cao, "Bidimensional empirical mode decomposition for SAR image feature extraction with application to target recognition," IEEE Access, vol. 7, pp. 135720–135731, 2019.
28. H.-C. Chiang, R. L. Moses, and L. C. Potter, "Model-based classification of radar images," IEEE Transactions on Information Theory, vol. 46, no. 5, pp. 1842–1854, 2000.
29. B. Ding, G. Wen, J. Zhong et al., "Robust method for the matching of attributed scattering centers with application to synthetic aperture radar automatic target recognition," Journal of Applied Remote Sensing, vol. 10, no. 1, Article ID 016010, 2016.
30. B. Ding, G. Wen, J. Zhong, C. Ma, and X. Yang, "A robust similarity measure for attributed scattering center sets with application to SAR ATR," Neurocomputing, vol. 219, pp. 130–143, 2017.
31. B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, "Target recognition in synthetic aperture radar images via matching of attributed scattering centers," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 7, pp. 3334–3347, 2017.
32. X. Zhang, "Noise-robust target recognition of SAR images based on attribute scattering center matching," Remote Sensing Letters, vol. 10, no. 2, pp. 186–194, 2019.
33. Q. Zhao and J. C. Principe, "Support vector machines for SAR automatic target recognition," IEEE Transactions on Aerospace and Electronic Systems, vol. 37, no. 2, pp. 643–654, 2001.
34. H. Liu and S. Li, "Decision fusion of sparse representation and support vector machine for SAR image target recognition," Neurocomputing, vol. 113, pp. 97–104, 2013.
35. J. J. Thiagarajan, K. N. Ramamurthy, P. Knee et al., "Sparse representations for automatic target classification in SAR images," in Proceedings of the 4th International Symposium on Communications, Control and Signal Processing, pp. 1–4, Monticello, IL, USA, October 2010.
36. H. Song, K. Ji, Y. Zhang, X. Xing, and H. Zou, "Sparse representation-based SAR image target classification on the 10-class MSTAR data set," Applied Sciences, vol. 6, no. 1, p. 26, 2016.
37. H. Zhang, N. M. Nasrabadi, Y. Zhang, and T. S. Huang, "Multi-view automatic target recognition using joint sparse representation," IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 3, pp. 2481–2497, 2012.
38. B. Ding and G. Wen, "Exploiting multi-view SAR images for robust target recognition," Remote Sensing, vol. 9, no. 11, p. 1150, 2017.
39. S. Liu and J. Yang, "Target recognition in synthetic aperture radar images via joint multifeature decision fusion," Journal of Applied Remote Sensing, vol. 12, no. 1, Article ID 016012, 2018.
40. D. E. Morgan, "Deep convolutional neural networks for ATR from SAR imagery," in Proceedings of SPIE, pp. 1–13, Bellingham, WA, USA, April 2015.
41. S. Chen, H. Wang, F. Xu et al., "Target classification using the deep convolutional networks for SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 6, pp. 1685–1697, 2016.
42. H. Furukawa, "Deep learning for target classification from SAR imagery: data augmentation and translation invariance," IEICE Technical Report, vol. 109, pp. 13–17, 2017.
43. C. Jiang and Y. Zhou, "Hierarchical fusion of convolutional neural networks and attributed scattering centers with application to robust SAR ATR," Remote Sensing, vol. 10, no. 6, p. 819, 2018.
44. U. Srinivas and V. Monga, "Meta-classifiers for exploiting feature dependence in automatic target recognition," in Proceedings of the IEEE Radar Conference, pp. 147–151, Florence, Italy, October 2011.
45. R. Huan and Y. Pan, "Decision fusion strategies for SAR image target recognition," IET Radar, Sonar & Navigation, vol. 5, no. 7, pp. 747–755, 2011.
46. Z. Cui, Z. Cao, J. Yang et al., "A hierarchical propelled fusion strategy for SAR automatic target recognition," EURASIP Journal on Wireless Communications and Networking, vol. 39, pp. 1–8, 2013.
47. B. Feng, W. Tang, and D. Feng, "Target recognition of SAR images via hierarchical fusion of complementary features," Optik, vol. 217, Article ID 164695, 2020.
48. J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part II: convex relaxation," Signal Processing, vol. 86, no. 3, pp. 589–602, 2006.
49. S. Ji, D. Dunson, and L. Carin, "Multitask compressive sensing," IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 92–106, 2009.
50. Y. Xu and Y. Lu, "Adaptive weighted fusion: a novel fusion approach for image classification," Neurocomputing, vol. 168, pp. 566–574, 2015.

Copyright © 2021 Liqun Yu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
