Abstract

Feature extraction is the key step in Inverse Synthetic Aperture Radar (ISAR) image recognition. However, limited by the cost and conditions of ISAR image acquisition, large-scale sample data are difficult to obtain, which makes it hard to learn deep target features with good discriminability using the currently popular deep learning methods. In this paper, a new low-dimensional, strongly robust, and fast ISAR image recognition method for space targets based on the fusion of local and global structural features is proposed. The method performs the trace transform along the longest axis of the ISAR image to generate the global trace feature of the space target ISAR image. By introducing a local structural feature, the Local Binary Pattern (LBP), the complementary fusion of global and local features is achieved, which compensates for the structural information missing from the trace feature and ensures the integrity of the ISAR image feature information. The representation of the trace and LBP features in a low-dimensional mapped feature space is then found with a manifold learning method. While maintaining the local neighborhood relationships of the original feature space, the effective fusion of the trace and LBP features is achieved. Consequently, in practical application, the target recognition accuracy is no longer affected by the choice of trace function, the number of LBP feature blocks, and other factors, giving the algorithm high robustness. To verify the effectiveness of the proposed algorithm, an ISAR image database containing 1325 samples of 5 types of space targets is used for experiments. The results show that the classification accuracy for the 5 types of space targets reaches more than 99% and that the recognition accuracy is no longer affected by the trace feature and LBP feature selection, demonstrating strong robustness. The proposed method provides a fast and effective high-precision model for space target feature extraction and offers a reference for efficient space object identification under small-sample conditions.

1. Introduction

ISAR is an all-day, all-weather, long-range, high-resolution two-dimensional imaging technique that plays an essential role in civil and military fields [1]. An ISAR image can accurately describe the structural characteristics and the scattering center distribution of a target. Compared with the traditional Radar Cross-Section (RCS), High-Resolution Range Profile (HRRP), and Micro-Doppler Signature (MDS), the ISAR image provides more abundant information about the target. Therefore, the use of ISAR images for space target recognition has always been a research hotspot in the field of space situational awareness [2-4].

In recent years, target recognition algorithms based on deep convolutional neural networks (CNNs) have attracted wide attention in the field of computer vision. A convolutional network can automatically extract image features, but its training requires a large number of annotated samples. Limited by imaging conditions, it is difficult to obtain large-scale ISAR image data of space targets, which makes it difficult to recognize space target ISAR images accurately with a deep convolutional neural network. Therefore, extracting highly discriminative features of space target ISAR images is the key to fast and accurate recognition under small-sample conditions. This also explains why ISAR image recognition methods based on feature extraction remain popular.

Affected by measurement conditions, imaging principles, and other factors, it is difficult to extract features from space target ISAR images [5]. The difficulty is mainly reflected in the following six aspects: (1) The ISAR image differs from a traditional optical image and is usually harder to interpret. (2) Due to factors such as speckle noise and interference fringes, the quality of the ISAR image degrades to varying degrees. (3) The ISAR image usually appears as a sparse or isolated scattering center distribution. (4) The ISAR image changes with the incident angle of the radar wave, so there are multiple possibilities for the spatial pointing and spatial distribution when the three-dimensional space target is projected onto the two-dimensional ISAR image plane. (5) For noncooperative space targets, the rotation speed of the target relative to the radar cannot be controlled, which makes the cross-range resolution of the ISAR image difficult to determine; thus, consistent scaling of the ISAR image cannot be guaranteed. (6) The ISAR image has high dimensionality. Direct extraction of high-dimensional features leads to low computational efficiency, while extraction of low-dimensional information must consider whether useful information is lost. All these characteristics bring difficulties to the classification of ISAR images.

To overcome the above difficulties, a large amount of research on radar image feature extraction has been carried out. For example, in literature [6-8], geometric features such as the elliptic Fourier descriptor, Zernike moments, and the outer contour are studied for target recognition. In literature [9, 10], the wavelet transform is adopted to extract low-dimensional features. In literature [11, 12], scattering center features are introduced to classify radar images. In literature [13, 14], the application of pattern structure features in radar image classification is explored. In literature [1, 15], the statistical characteristics of radar images are analyzed and used to classify radar targets. In literature [16, 17], ISAR image classification of aerial targets using polar coordinate mapping is proposed, and high classification accuracy is achieved. Recently, Lee et al. pointed out in literature [18] that the polar coordinate mapping method described in literature [17] is difficult to keep robust in practical application, because it assumes that the ISAR images used for training and testing all lie on the same fixed projection plane, which is inconsistent with the actual situation. In this regard, Lee et al. proposed to extract low-dimensional, highly discriminative features of ISAR images by the trace transform, effectively overcoming the impact of spatial distribution changes of ISAR images on classification accuracy. However, in practical application this method is still affected by many factors, such as the trace function, target type, ISAR imaging conditions, and noise type. These factors make the angular region selection and the angular interval division uncertain; as a result, the classification accuracy is difficult to keep robust.

In this study, we provide an effective solution that overcomes the defects of the above trace transform methods. First, when trace features are extracted, local area selection and small-angle division are no longer considered; instead, the trace transform is conducted directly along the longest axis of the ISAR image. The problems of unstable classification accuracy caused by local region selection and small-angle interval division are thus effectively solved. However, discarding the local areas causes the loss of some trace feature information. Considering that trace features are global structural features, in order to compensate for the lost information and ensure high classification accuracy, this paper further proposes to introduce local structural features and enhance the trace features through the complementary fusion of global and local features.

As one of the most effective methods of describing local structural features of images, the LBP feature has high discrimination ability in the field of recognition [19, 20]. Therefore, this paper introduces the LBP feature and fuses it with the trace feature. When extracting LBP features of space target ISAR images, to make the LBP features insensitive to attitude changes of the space target while retaining its spatial structure, the ISAR image first needs to be divided into blocks. However, too many blocks lead to high feature dimensions and low computational efficiency, while too few blocks let the target background and noise dominate the statistical characteristics of the LBP features, thus affecting the classification accuracy. To solve this problem, this paper further proposes to use the manifold learning method to fuse the trace features and LBP features. Without destroying the feature space structure, this method not only retains the effective information contained in the two features to the maximum extent but also effectively reduces the fusion feature dimension, so as to improve the accuracy of ISAR image classification while ensuring computational efficiency.

The main contributions of this paper are as follows: (1) The original trace features are improved. Extracting trace features along the longest axis of the ISAR image effectively overcomes the influence of local area selection and small-angle division on the trace features. (2) The complementary fusion of global and local features is realized. By introducing the local structural feature LBP, partial structural information lost by the trace feature is compensated, and the integrity of the ISAR image feature information is ensured. (3) The problem that the classification accuracy is affected by the trace feature type and the LBP rectangular region division in practical application is solved. A manifold learning method is proposed to fuse the trace and LBP features. Without destroying the feature space structure, this method retains the effective information of the two features to the maximum extent and reduces the feature dimension. Therefore, no matter which trace and LBP features are selected for classification, high classification accuracy can be achieved.

The rest of the paper is organized as follows: in Section 2, the steps of ISAR image acquisition and preprocessing are presented. In Section 3, the original trace feature extraction algorithm is improved, and the detailed extraction process of the new trace feature is given. In Section 4, the extraction process of LBP features is given, and the limitations of LBP features in practical application are analyzed. In Section 5, the fusion algorithm framework is given. In Section 6, the recognition results are provided, and the classification accuracy before and after feature fusion is compared and analyzed. In Section 7, some conclusions are drawn.

2. ISAR Image Acquisition and Preprocessing

To use the ISAR image for space target recognition, the first step is to establish the ISAR image database of the space target. In this paper, an ISAR imaging model based on a 3D mesh model of the space target, ISAR linear frequency modulation (LFM) signal model, and ISAR image extraction model are established successively, and the ISAR image of the space target is finally obtained through side lobe suppression and Lee filtering preliminary processing. Among them, the ISAR image extraction model used in this paper is the Doppler imaging algorithm. The imaging results are shown in Figure 1.

To further segment the target from the background, the paper uses the Otsu method [21] to adaptively determine the optimal threshold of the ISAR image. The algorithm uses the principle of maximum interclass variance to divide the ISAR image gray values into two classes $C_0$ and $C_1$ (background and satellite) and determines the optimal threshold $t^*$ by maximizing the interclass variance between $C_0$ and $C_1$:

$$t^* = \arg\max_{t}\left[\,\omega_0(t)\,\omega_1(t)\,\big(\mu_0(t) - \mu_1(t)\big)^2\,\right],$$

where $t$ ranges over the grayscale values of the ISAR image, and $\omega_i(t)$ and $\mu_i(t)$ denote the proportion and mean gray value of class $C_i$ under threshold $t$. Pixel values greater than the optimal threshold are taken as target pixels (preserving their original values), and those not greater than the optimal threshold are set to 0, thereby segmenting the satellite from the background.

Furthermore, to eliminate the change in the intensity of the target's reflected signal caused by the change in the distance between the radar and the target satellite, the paper normalizes the ISAR image data. Normalization is performed using the sum of the amplitudes of the ISAR image:

$$\tilde{I}(x, y) = \frac{I_s(x, y)}{\sum_{x}\sum_{y} I_s(x, y)},$$

where $I_s(x, y)$ represents the ISAR image after segmentation and $\tilde{I}(x, y)$ represents the normalized ISAR image. Figure 2 shows the result of segmentation and normalization of the target satellite ISAR image.
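For concreteness, the sketch below shows one way to implement this preprocessing step in Python with NumPy and scikit-image, assuming the ISAR image is already available as a 2D amplitude array (the function name and input are illustrative, not the paper's code):

```python
# A minimal preprocessing sketch: Otsu segmentation + amplitude-sum normalization.
import numpy as np
from skimage.filters import threshold_otsu

def segment_and_normalize(isar: np.ndarray) -> np.ndarray:
    """Otsu thresholding followed by amplitude-sum normalization."""
    t = threshold_otsu(isar)                    # threshold maximizing interclass variance
    segmented = np.where(isar > t, isar, 0.0)   # keep target pixels, zero the background
    return segmented / segmented.sum()          # normalize by the sum of amplitudes

# Example with random data standing in for a real ISAR image:
img = np.random.default_rng(0).random((128, 128))
out = segment_and_normalize(img)
```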

3. Trace Feature Extraction of ISAR Images

The trace transform, proposed by Maria Petrou et al., is a technique for extracting features that are insensitive to scaling, translation, and rotation from images. The trace transform evaluates a functional of the image along specific trace lines. Figure 3 shows the definition of the trace line.

Each trace line can be characterized by two parameters, the distance $\rho$ and the angle $\phi$. The distance parameter $\rho$ represents the distance from the center point of the image to the trace line, with value range $[0, \sqrt{M^2 + N^2}/2]$, where $M$ and $N$ are the length and width of the image, respectively; the angle parameter $\phi$ represents the angle between the normal of the trace line and the horizontal reference line, with value range $[0°, 360°)$.

The result of the trace transform depends on the selected trace function: different trace functions yield different mapping results. Table 1 lists eight commonly used trace transform functions. In the following research, the trace features extracted by these eight functions will be studied.

In Table 1, the T1 transform is the traditional Radon transform; $\mathrm{median}\{x_k; w_k\}$ denotes the weighted median of the sequence $x_k$ with weight sequence $w_k$: each element $x_k$ is repeated $w_k$ times, and the standard median of the expanded sequence is taken. For example, the weighted median of $(4, 5, 6, 7)$ with weights $(1, 2, 2, 1)$ equals the standard median of $(4, 5, 5, 6, 6, 7)$, and the calculation result is 5.5.
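The trace transform can be realized, for example, by rotating the image so that trace lines at angle $\phi$ become image rows and then reducing each row with the chosen functional; with the sum functional this reproduces the Radon transform (T1). A minimal sketch under these assumptions (helper names are ours, not the paper's):

```python
# A rotation-based trace transform sketch with two example functionals.
import numpy as np
from scipy.ndimage import rotate

def weighted_median(x, w):
    """Median of x with each element repeated according to its integer weight."""
    return float(np.median(np.repeat(x, w)))

def trace_transform(img, angles_deg, functional):
    """Rows of the rotated image serve as trace lines parameterized by (rho, phi)."""
    return np.stack([
        np.apply_along_axis(functional, 1, rotate(img, a, reshape=False))
        for a in angles_deg
    ])  # shape: (number of angles, number of rho samples)

img = np.random.default_rng(1).random((64, 64))               # stand-in ISAR image
radon_like = trace_transform(img, range(0, 180, 5), np.sum)   # T1: Radon transform
weighted_median(np.array([4, 5, 6, 7]), np.array([1, 2, 2, 1]))  # -> 5.5
```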

When extracting the trace features of an image, the traditional method usually applies a diametric functional and a circus functional to the trace transform result, thereby generating a low-dimensional feature [22, 23] that is invariant to rotation, translation, and scaling. However, this approach causes a severe loss of ISAR image information, so the classification accuracy is limited. In this regard, literature [18] adopts a new trace feature extraction method: it first finds the longest axis of the ISAR image, then selects an angular region near the longest axis, divides it into several equal small-angle intervals, and performs a trace transform on each angular interval to generate a trace matrix; finally, each column of the trace matrix is used as a feature vector for space target recognition. This method solves the problem of the severe loss of ISAR image information caused by the original trace feature extraction method.

However, in practical application this method is still affected by many factors, such as the trace function, target type, ISAR imaging conditions, and noise type. These factors make the angular region selection and the angular interval division uncertain; as a result, the classification accuracy is difficult to keep robust.

To solve this problem, the paper no longer considers the angle region selection and the angle interval division when extracting the trace feature but directly extracts the trace feature along the longest axis. At the same time, to compensate for the trace feature information lost in the above operation, the paper introduces the local structural feature LBP and fuses it with the new trace feature, thus achieving the complementary enhancement of the two features and ensuring the classification accuracy of the ISAR image.

First, Canny edge detection and the Hough transform [24] are used to estimate the longest axis of the ISAR image. Figure 4 shows a schematic of the longest axis of the ISAR image. Along the longest-axis direction, the trace functional $T$ is evaluated at each sampled distance $\rho$, so the trace feature of an ISAR image is a vector whose length equals the number of $\rho$ samples.
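A minimal sketch of this longest-axis estimate with scikit-image's Canny detector and straight-line Hough transform (the function name and sigma value are illustrative assumptions):

```python
# Estimate the dominant line orientation of a segmented ISAR image.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

def longest_axis_angle(img: np.ndarray) -> float:
    """Orientation (radians) of the dominant straight line in the image."""
    edges = canny(img, sigma=2.0)             # binary edge map
    h, thetas, dists = hough_line(edges)      # accumulator over (theta, rho)
    _, peak_thetas, _ = hough_line_peaks(h, thetas, dists, num_peaks=1)
    return float(peak_thetas[0])              # angle of the strongest line
```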

It is worth noting that when the position of the same ISAR image is shifted, the trace feature will change, as shown in Figures 5(a)-5(d).

Figure 5 shows the trace features corresponding to the ISAR image before and after translation. It can be seen that the trace features of the ISAR image before and after the translation are shifted in the $\rho$ direction and that the two features are almost the same except for the shift. Therefore, to eliminate the influence of ISAR image translation on space target classification, the paper further shifts the trace feature vector so that the first element of each feature vector is nonzero. Figure 6 shows the result of the shift alignment operation applied to Figure 5.
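The shift alignment itself reduces to a circular shift of the feature vector, for example (a sketch; the function name is ours):

```python
# Circularly shift a trace feature so its first element is nonzero.
import numpy as np

def shift_align(trace: np.ndarray) -> np.ndarray:
    nz = np.flatnonzero(trace)                    # indices of nonzero entries
    return np.roll(trace, -nz[0]) if nz.size else trace

shift_align(np.array([0.0, 0.0, 0.3, 0.9, 0.1]))  # -> [0.3, 0.9, 0.1, 0.0, 0.0]
```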

Compared with the traditional target alignment method using the target ISAR image, the shift alignment operation calculation is simple and easy, which is of great significance for improving the target classification efficiency. Figure 7 shows the entire flow of trace feature extraction.

4. LBP Feature Extraction of ISAR Images

Extracting the trace feature without considering the angular region selection and the angular interval division causes some feature information to be lost, which affects the classification accuracy. At the same time, the trace feature describes the structural information of the ISAR image only from a global perspective, and the limited image information it contains inevitably limits the classification accuracy. To solve these problems, the paper introduces the local structural feature LBP and fuses it with the trace features extracted in Section 3. Through the complementary enhancement between the two features, the integrity of the ISAR image feature information is guaranteed.

As one of the most effective methods of describing local structural features of images, the LBP operator has the advantages of invariance to monotonic grayscale variation, high computational efficiency, and strong feature discrimination ability. It is therefore widely used in industrial inspection [25, 26], medical image processing [27], remote sensing image analysis [28, 29], face detection [30], and other fields of image processing and computer vision.

The original LBP operator takes each pixel of the image as a center and thresholds the surrounding neighborhood with the pixel value of the center point; the resulting 8-bit binary number is used as the label of that pixel. The statistical histogram of all the labels of the image is the structural feature of the image. Figure 8 shows a schematic of the original LBP operator.

To compute structural features at different scales, the original LBP operator was extended with a circular neighborhood system. The circular neighborhood system is centered on the pixel to be labeled, with sampling points evenly spaced on a circle around that pixel, as shown in Figure 9. By changing the size of the circular neighborhood system, LBP features at different scales can be calculated.

In Figure 9, the symbols $R$ and $P$ represent the radius of the circular neighborhood system and the number of sampling points, respectively. When a sampling point does not fall at the center of a pixel, its value can be obtained by bilinear interpolation.

To reduce the feature dimension, the LBP using the circular neighborhood system is further extended to the "uniform pattern" LBP [31], in which the number of hops (transitions) between adjacent binary values in the LBP binary pattern is not greater than 2. For example, 00000000 (0 hops), 00100000 (2 hops), and 11000011 (2 hops) are all uniform patterns, whereas 11001100 (3 hops), 11001001 (4 hops), and 10101011 (6 hops) are nonuniform patterns. When computing the histogram of the "uniform pattern" LBP, each uniform pattern is assigned its own category and all the nonuniform patterns are merged into a single category, reducing the number of pattern types from the original $2^P$ to $P(P-1)+3$ (for example, from 256 to 59 when $P = 8$). Therefore, using the "uniform pattern" LBP can significantly reduce the feature dimension, save storage space, and improve computational efficiency.
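The uniformity test can be expressed in a few lines; the sketch below counts hops between adjacent bits, following the convention of the examples above (the standard definition in [31] additionally counts the circular wrap-around pair):

```python
def hops(pattern: str) -> int:
    """Number of 0/1 transitions between adjacent bits of an LBP pattern."""
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

assert hops("00100000") == 2   # uniform
assert hops("11001100") == 3   # nonuniform
assert hops("11001001") == 4   # nonuniform
```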

Next, feature extraction is performed using the "uniform pattern" LBP, denoted $LBP_{P,R}^{u2}$: the subscript $(P, R)$ specifies the circular neighborhood system, and the superscript $u2$ indicates the uniform pattern.

To make the proposed feature robust to attitude changes of the target satellite and to effectively preserve the spatial structure of the target satellite, the paper first divides the ISAR image into multiple local regions (as shown in Figure 10). Then, $LBP_{P,R}^{u2}$ is used to calculate the LBP histogram for each local region separately. Finally, the obtained histograms are concatenated to generate the LBP feature of the ISAR image. The size of the LBP feature is $B \times L$, where $B$ is the number of local regions and $L$ is the length of a single LBP histogram. Figure 11 shows the LBP feature extraction process of the ISAR image.
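A minimal sketch of the block-wise extraction with scikit-image, assuming a hypothetical 4 × 4 partition; the "nri_uniform" method is scikit-image's non-rotation-invariant uniform pattern, giving $P(P-1)+3 = 59$ histogram bins for $P = 8$:

```python
# Concatenated per-block uniform-pattern LBP histograms.
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_feature(img, blocks=(4, 4), P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="nri_uniform")
    n_bins = P * (P - 1) + 3                          # 59 bins for P = 8
    hists = []
    for rows in np.array_split(codes, blocks[0], axis=0):
        for block in np.array_split(rows, blocks[1], axis=1):
            h, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(h / block.size)              # normalized block histogram
    return np.concatenate(hists)                      # length B x L = 16 x 59 here
```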

The extracted LBP features contain three levels of information: (1) Each label of the LBP histogram contains information at the pixel level. (2) The summation of pixel information on a local area produces information at the regional level. (3) The regional histograms are connected to form the information of the target satellite ISAR image at the overall level.

However, the LBP features extracted by the above methods are affected by the block number of local regions. If too many blocks are selected, the LBP feature dimension will be too high and the calculation efficiency will be low. Otherwise, if the number of blocks is too small, the target background and noise will play a dominant role in the statistical characteristics of LBP features, thereby affecting the classification accuracy. To overcome this shortcoming, the paper then uses the manifold learning method to fuse the trace features and LBP features. While realizing the dimension reduction of features, it retains the valid information of the two features to the maximum extent and effectively solves the problem that the classification accuracy is affected by the selection of LBP feature blocks in practical application.

5. Multifeature Fusion of ISAR Images Based on Manifold Learning

From the analysis of Sections 3 and 4, it is known that the fusion of trace features and LBP features needs to solve two critical problems: (1) The fusion features should retain the ISAR image information contained in the trace and LBP features to the utmost, and the classification accuracy should not be affected by the trace feature selection, the number of LBP feature blocks, and other factors, giving strong robustness. (2) The dimension of the fusion features should be reduced, thereby saving storage space and improving computational efficiency.

Feature fusion methods fall into two categories: early fusion and late fusion. Early fusion fuses the features themselves; commonly used methods include feature splicing and weighted summation after feature alignment. Late fusion combines the classification results corresponding to different features. The feature fusion in this paper therefore belongs to early fusion. Considering that the trace features and the LBP features are both high-dimensional, directly fusing them with the abovementioned early fusion methods would lead to problems such as an excessive fusion feature dimension and complex classification computation.

Aimed at the problems existing in the fusion of high-dimensional features, literature [32] proposed reducing the dimension first and then performing the fusion, which can effectively remove the redundant information of each feature and achieve dimension reduction. However, trace features describe the structural information of the ISAR image from a global perspective, while LBP features focus on local description and contain more location and detail information, so the information contained in these two features is structurally complementary to some extent. If the two features are fused by the method in literature [32], the complementary structural information between them will be destroyed when the redundant information of each feature is removed.

In summary, the paper proposes to use the manifold learning method to directly fuse the trace features and the LBP features of the ISAR image, so that the information contained in the two features can be retained to the maximum extent while reducing the feature dimension. Figure 12 shows a schematic diagram of using the manifold learning method to fuse features.

The manifold learning method was first proposed by Professor Josh Tenenbaum of the Massachusetts Institute of Technology in the journal Science in 2000 and has become a research hotspot in the field of information science. A manifold is defined as follows: let $M$ be a Hausdorff space; if for any point $x \in M$ there is a neighborhood $U$ of $x$ in $M$ that is homeomorphic to an open set of the $d$-dimensional Euclidean space $\mathbb{R}^d$, then $M$ is a $d$-dimensional manifold. The manifold learning method can not only find the low-dimensional representation of the sample set in the mapped feature space but also preserve, in the low-dimensional feature space, the local neighborhood relationships of the sample set in the original feature space.

Commonly used manifold learning methods mainly include local linear embedding (LLE), local tangent space alignment (LTSA), maximum linear embedding (MLE), and isometric feature mapping (ISOMAP). Among them, the ISOMAP method is a noniterative global optimization algorithm derived from multidimensional scaling (MDS). It uses the geodesic distance on the manifold, instead of the Euclidean distance of the original space, as the data dissimilarity metric, so that the mapped data can reflect the true low-dimensional structure of the manifold.

The core idea of the ISOMAP method is to represent the points of the high-dimensional space by coordinates in a low-dimensional Euclidean space, removing feature redundancy and reducing the feature dimension. That is, a data set $Z = \{z_1, z_2, \dots, z_n\}$ in a low-dimensional space $\mathbb{R}^d$ is used to represent the original high-dimensional feature set.

The main steps of trace feature and LBP feature fusion using the ISOMAP method are as follows:

Step 1. Establishing a neighborhood graph. The high-dimensional feature data set $D = \{x_1, x_2, \dots, x_n\}$ is input, where each high-dimensional feature $x_i$ is formed by simple splicing of the trace feature vector and the LBP feature vector. Take the data set as a graph: whenever two vertices are adjacent to each other, there is an edge connecting them. Adjacency can be determined by the $k$-nearest neighbor (KNN) method or by thresholding pairwise distances, with points whose distance is below the threshold considered neighbors. All the edges form a set $E$, and the neighborhood graph $G = (D, E)$ is thus established.

Step 2. Calculating the geodesic distance. The Floyd algorithm or the Dijkstra algorithm is used to compute the shortest path between every pair of vertices in the neighborhood graph, and the shortest-path length between the corresponding nodes is taken as the approximate geodesic distance.

Step 3. Data embedding. The calculation result in Step 2 is used as the input of the MDS algorithm to calculate the low-dimensional fusion feature of the original high-dimensional feature. The algorithm framework for feature fusion based on the ISOMAP method is given in Algorithm 1.

Input: The high-dimensional feature data set $D = \{x_1, x_2, \dots, x_n\}$, where $x_i$ represents the $i$th vector in the data set $D$ and is spliced from the trace feature vector and the LBP feature vector; the neighbor parameter $k$; and the fusion feature dimension $d$
Process:
 1: For $i = 1, 2, \dots, n$ do
 2: Determine the $k$ nearest neighbor vertexes of $x_i$;
 3: Set the distance between $x_i$ and its $k$ nearest neighbor vertexes to the Euclidean distance, and set the distance to all other vertexes to infinity;
 4: End for
 5: Call the shortest path algorithm (Floyd algorithm or Dijkstra algorithm) to calculate the distance $d(x_i, x_j)$ between any two vertexes;
 6: Input the distance matrix $[d(x_i, x_j)]$ into the MDS algorithm;
 7: Return the output of the MDS algorithm.
Output: Low-dimensional fusion features $Z = \{z_1, z_2, \dots, z_n\} \subset \mathbb{R}^d$ of the high-dimensional feature data set $D$.
Algorithm 1. Isometric feature mapping (ISOMAP).
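scikit-learn's Isomap implements the same neighborhood-graph, geodesic-distance, and MDS pipeline as Algorithm 1, so the fusion step can be sketched as follows (the array shapes and the neighbor parameter are illustrative assumptions):

```python
# Splice the two features, then embed them with ISOMAP.
import numpy as np
from sklearn.manifold import Isomap

def fuse_features(trace_feats, lbp_feats, n_neighbors=10, n_components=25):
    x = np.hstack([trace_feats, lbp_feats])   # simple splicing (Step 1 input)
    return Isomap(n_neighbors=n_neighbors,
                  n_components=n_components).fit_transform(x)

rng = np.random.default_rng(0)                # stand-in feature matrices
z = fuse_features(rng.random((50, 64)), rng.random((50, 944)))  # shape (50, 25)
```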

Through the operations in Sections 3-5, we have realized the feature extraction and feature fusion of the space target ISAR image. Next, we use the SVM (support vector machine) classifier to perform the classification. The framework of the classification algorithm is shown in Figure 13.

As shown in Figure 13, the ISAR image recognition of space targets in practice is divided into two steps: (1) Training a model: firstly, the existing space target ISAR images are preprocessed, and the trace feature and LBP feature of each ISAR image are extracted. Then, the manifold learning method is used to fuse the two features. Finally, the SVM classifier is trained on the fusion features, and the classification model of the space target ISAR image is obtained. (2) Recognizing a target: following the processing of step (1), the fusion features of the ISAR image to be recognized are obtained. Then, the fusion features are input into the classification model pretrained in step (1), so as to obtain the space target category in the ISAR image to be recognized.
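A minimal end-to-end sketch of these two steps, assuming the hypothetical helpers sketched in the previous sections (`trace_feature`, `block_lbp_feature`) and hypothetical arrays `train_imgs`, `train_labels`, and `test_imgs`; note that scikit-learn's Isomap embeds the training set with fit_transform and maps new samples with transform:

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.svm import SVC

def extract(imgs):
    # hypothetical per-image pipeline from Sections 2-4
    return np.array([np.hstack([trace_feature(im), block_lbp_feature(im)])
                     for im in imgs])

iso = Isomap(n_neighbors=10, n_components=25)
clf = SVC(kernel="rbf")

clf.fit(iso.fit_transform(extract(train_imgs)), train_labels)  # (1) train the model
pred = clf.predict(iso.transform(extract(test_imgs)))          # (2) recognize targets
```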

6. Experimental Results and Analysis

6.1. Experimental Setting and Data Generation

Since there is no public data set of ISAR images at present, in order to verify the effectiveness of the proposed method, we generated ISAR images of space targets through simulation according to realistic scenes. Factors such as self-occlusion and the nonlinear scattering mechanism in the ISAR imaging process are considered in the simulation, so that the simulated data approximate measured data to a certain extent. Following Section 2, the ISAR image data of the target satellites are first generated. The 3DMAX software was used to build the 3D mesh models of 5 types of targets: SAT_1, SAT_2, SAT_3, SAT_4, and SAT_5, with 10000 facets per model. In the STK simulation environment, the target satellites fly in a circular orbit with an orbital height of 1100 km, an orbital inclination of 70°, and a right ascension of the ascending node (RAAN) of 115°, and the radar is placed at 29.5° north latitude and 119° east longitude. Then, linear frequency modulation (LFM) signals with fixed pulse repetition frequency, bandwidth, pulse width, and sampling frequency are used to irradiate the five types of space targets to generate the radar echoes. The echo data are processed by the Doppler imaging algorithm, followed by SVA side lobe suppression and Lee filtering, to finally obtain the ISAR images of the space targets. Figure 14 shows the 3D mesh models and ISAR imaging results of the five types of space targets.

Then, the ISAR images are segmented and normalized to separate the satellite from the background and to eliminate the intensity variation of the radar echo caused by the change in the distance between the radar and the target satellite. Figure 15 shows the segmentation and normalization results of the ISAR images of the five types of space targets.

In the experiment, five data sets are generated for the five types of target satellites. Each data set contains 265 ISAR images, and all images have the same size.

To display our data set better, we randomly selected twenty-four ISAR images of each kind of space target in the data set and displayed them in Figure 16.

6.2. Classification Experiment of Space Target ISAR Images

To evaluate the effectiveness of the proposed algorithm, simulation experiments are carried out next. The simulation consists of three parts. First, Section 6.2.1 uses the trace features of ISAR images to conduct classification experiments, so as to study the classification performance of the 8 trace features. Then, Section 6.2.2 studies the classification performance of the LBP features to determine the impact of different rectangular region division methods on the classification accuracy. Finally, Section 6.2.3 studies the classification performance of the low-dimensional fusion features to verify the correctness of the proposed algorithm.

6.2.1. ISAR Image Classification Experiment Based on the Trace Feature

Firstly, trace features of the space target ISAR images are extracted using the eight trace transform functions given in Table 1. Then, the eight kinds of extracted trace feature data are each classified with the SVM classifier; 90% of each type of sample is randomly selected for training and the remaining 10% for testing. The confusion matrices and recognition accuracies of the test data are shown in Figure 17.

In Figures 17(a)-17(h), the abscissa indicates the predicted label, the ordinate indicates the true label, and the values on the diagonal indicate the number of correctly classified samples.

To avoid the influence of the random selection of training and test data on the classification results, we conducted 5 times 10-fold cross validation. The ISAR image data set of space targets is first divided randomly into ten equal parts. Nine parts are used in turn as training data and the remaining one as test data, and the corresponding classification accuracy is obtained in each test. The average of the ten classification results is used as the estimate of the classification accuracy of the algorithm (10-fold). This operation is then repeated five times, and the mean of the five results is taken as the estimate of the robustness of the algorithm. The classification results are given in Table 2.
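This 5 times 10-fold protocol corresponds to scikit-learn's RepeatedStratifiedKFold; a sketch with placeholder data matching the size of the paper's database (1325 samples, 5 classes):

```python
# 5 times 10-fold cross validation of an SVM on fused features.
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)                    # placeholder features and labels
x, y = rng.random((1325, 25)), np.repeat(np.arange(5), 265)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), x, y, cv=cv)
print(scores.reshape(5, 10).mean(axis=1))         # mean accuracy of each 10-fold run
print(scores.mean())                              # overall 5x10-fold estimate
```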

Table 2 shows the classification accuracy of 8 trace features in the 5 times 10-fold cross validation. The second row of the table gives the dimensions of each feature, and the last row shows the average classification accuracy of the 5 times 10-fold cross validation. As can be seen from the classification results in Figure 17 and Table 2, there are two problems when only using the trace features to classify the space target ISAR images: (1) The classification accuracy is limited, and the maximum classification accuracy is only 61.7%. (2) The classification accuracy is affected by the selection of the trace transform function, and the classification accuracy corresponding to different trace features is quite different. Among them, the classification accuracy corresponding to the T1 transformation is relatively the highest, and the classification accuracy corresponding to the T5, T6, T2, T7, T4, T8, and T3 transformations is sequentially reduced. Therefore, it is necessary to study the classification performance of various trace features in the actual application process and select the trace feature with the highest classification accuracy to classify the space targets, thereby increasing the complexity of the work.

To overcome the above problems, the LBP features of the space target ISAR images are introduced.

6.2.2. ISAR Image Classification Experiment Based on the LBP Feature

To determine which LBP feature should be adopted for the next feature fusion, the classification performance of LBP features extracted by different rectangular region division methods is studied in this section. Following the LBP feature extraction process shown in Figure 11 in Section 4, ISAR images are firstly divided into different rectangular regions, and then, LBP features of ISAR images are extracted. On this basis, the SVM classifier is used to study the classification performance of LBP features extracted by different rectangular region division methods. 90% of each type of sample is randomly selected for training and the remaining 10% for testing. In the 5 times 10-fold cross validation, the dimensions of different LBP features and their corresponding average classification accuracy are shown in Figure 18, and the specific results are given in Table 3.

In Figure 18, the abscissa indicates the LBP features extracted by different rectangular region division methods, the left ordinate indicates the classification accuracy, and the right ordinate indicates the dimensions of the LBP features (on a logarithmic scale). Table 3 shows the classification accuracy of the 13 LBP features in the 5 times 10-fold cross validation; the second column of the table gives the dimensions of each feature, and the last column shows the average classification accuracy of the 5 times 10-fold cross validation.

It can be seen from Figure 18 and Table 3 that the classification accuracy is also low when only LBP features of space target ISAR images are used for classification, with the maximum classification accuracy of 83.8%. Meanwhile, the classification accuracy is affected by the rectangular region division method. The corresponding classification accuracy of different rectangular region division methods varies greatly, with the difference between the maximum classification accuracy and the minimum classification accuracy reaching 23.8%. As the number of the divided rectangular regions increases, the corresponding classification accuracy rate generally shows an upward trend and finally stabilizes at about 82%, but the feature dimension increases exponentially, which leads to the need for more storage space in classification and the increased computational complexity.

From the above analysis, it can be known that the low-dimensional LBP feature requires a small storage space, but its classification performance is reduced. High-dimensional LBP features have better classification performance, but they need a large storage space. As a result, in the actual application process, it is necessary to study which rectangular region division method should be selected to extract the LBP feature that satisfies the actual conditions, thereby increasing the complexity of the work.

To overcome this problem, the ISOMAP method is adopted to fuse the trace features with the LBP features, so as to obtain low-dimensional fusion features with stronger expressive force on the spatial structure.

6.2.3. ISAR Image Classification Experiment Based on the Low-Dimensional Fusion Feature

Using the ISOMAP-based feature fusion method proposed in Section 5, the eight trace features are fused with the 13 LBP features, respectively. The dimension of the fusion features is set to 25. At the same time, the manifold learning method is used to obtain the representations of the trace features and the LBP features alone in the 25-dimensional space. Classification experiments are carried out with the SVM classifier; 90% of each type of sample is randomly selected for training and the remaining 10% for testing. Table 4 shows the average classification accuracy of the 8 trace features after fusing them with the 13 LBP features in the 5 times 10-fold cross validation. Tables 5 and 6 show the average classification accuracy of the "trace feature + manifold learning" and "LBP feature + manifold learning" methods in the 5 times 10-fold cross validation.

Table 4 lists the average classification accuracy of the low-dimensional fusion features of the trace features and the LBP features in the 5 times 10-fold cross validation. The last row and the last column of the table, respectively, represent the average classification accuracy corresponding to each trace feature and the LBP feature.

The following conclusions can be drawn from Table 4: (1) The proposed algorithm can achieve high classification accuracy. The lowest classification accuracy is 97.4%, and the highest reaches 99.8%, which proves the effectiveness of the proposed algorithm. (2) The classification performance of each type of feature is dramatically improved. For the trace features, the lowest classification accuracy of the eight trace features reaches 97.9%, which is 44.9% higher than the 53.0% accuracy before feature fusion. At the same time, the highest classification accuracy reaches 99.2%, which is 37.5% higher than the 61.7% accuracy before feature fusion.

Besides, the maximum difference between the classification accuracy of different trace features is 1.3%, which makes it possible to select any trace feature in the actual application process, thereby effectively solving the trace feature selection problem in the actual application process and reducing the complexity of the work.

For the LBP feature, the lowest classification accuracy of 13 LBP features can reach 98.6%, which is 38.6% higher than the classification accuracy of 60.0% before feature fusion. At the same time, the highest classification accuracy rate can reach 99.2%, which is 15.4% higher than the 83.8% classification accuracy before feature fusion. In addition, the difference between the classification accuracy rates of different rectangular area division methods is 0.6%, which makes it possible to select any LBP feature in the actual application process, thereby effectively solving the problem of LBP feature selection (for example, selecting the LBP feature with the lowest dimension will hardly affect the final classification accuracy).

Tables 5 and 6 show the average classification accuracy of “trace feature+manifold learning” and “LBP feature+manifold learning” in 5 times 10-fold cross validation. By comparing Tables 4, 5, and 6, we can see that the classification accuracy of the algorithm proposed in the paper is better than the classification accuracy of the “trace feature+manifold learning” and “LBP feature+manifold learning” methods. Thus, the effectiveness of the method proposed in the paper was further verified.

In summary, the proposed algorithm has a high classification performance. At the same time, it solves the problem of which trace features and LBP features should be selected in the actual application process, which significantly reduces the complexity of the classification work and enhances the practical applicability of the algorithm.

To further analyze the impact of the fusion feature dimension on the classification accuracy, this paper takes the fusion features of the trace features and the 1 × 1 LBP feature as an example. The trace features comprise eight types, denoted the T1 trace feature to the T8 trace feature. Table 7 shows the average classification accuracy of the 8 fusion features at different dimensions in the 5 times 10-fold cross validation, and Figure 19 shows the trend of the classification accuracy as the fusion feature dimension changes.

In Table 7, T1_LBP to T8_LBP in the first row represent the fusion features of the T1 to T8 trace features with the 1 × 1 LBP feature, and the first column gives the dimensions of the fusion features. In Figure 19, the abscissa indicates the dimension of the fusion features, and the ordinate indicates the classification accuracy.

It can be seen from Figure 19 and Table 7 that as the dimension of the fusion features increases from 3 to 45, the classification accuracy of the eight fusion features increases and reaches its maximum at a dimension of 27. From dimension 27 onward, the classification accuracy gradually stabilizes at about 99.1%. Therefore, in actual application, a dimension of about 27 can be selected, ensuring the classification accuracy while keeping the feature dimension small, thereby reducing the storage space and computational complexity and improving computational efficiency.

Although the above classification results have proved the effectiveness of the proposed feature fusion algorithm, we still wish to show the actual spatial distribution of the fusion features. Therefore, the paper adopts visualization technology to display the fusion features in a 3-dimensional space, as shown in Figure 20, where the spatial distribution of the fused features can be seen visually.

As can be seen from Figure 20, (1) the better the classification performance of a feature, the more clearly separable it is in the 3-dimensional space, and (2) compared with the unfused features, the fusion feature generated by the proposed algorithm has better discriminability in the 3-D space, which indicates that the proposed algorithm can effectively improve the classification performance of the features. The effectiveness of the proposed algorithm is thus verified at the visual level.

7. Conclusion

In order to improve the classification performance of space target ISAR images under small-sample conditions, a new feature extraction method is proposed. The proposed method improves the extraction of trace features and effectively solves the problem that the classification accuracy is not robust due to the uncertainty of local region selection and angular interval division. By introducing the LBP feature, the spatial structure information of the trace feature is enhanced. Finally, the ISOMAP method is used to fuse the two structurally complementary features; while achieving feature dimensionality reduction, the integrity of the structural information of the space target ISAR image is retained to the utmost. The simulation results show the following: (1) The proposed method can achieve high classification accuracy for space target ISAR images. (2) When low-dimensional fusion features are used to classify space target ISAR images, the classification accuracy is no longer affected by factors such as the trace feature selection and the LBP rectangular region division, and the classification performance has strong robustness.

In conclusion, the method proposed in this paper can achieve fast and accurate identification of space targets, and the algorithm has strong robustness. It solves the problem that highly discriminative features are difficult to extract under small-sample conditions. It provides a new idea for ISAR image recognition of space targets, and the method can be applied to other classification tasks using ISAR images, such as ships, airplanes, vehicles, and missiles.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

The work in this paper has been supported by the National Natural Science Foundation of China (61304228).