International Journal of Aerospace Engineering


Research Article | Open Access

Volume 2020 |Article ID 3412582 | 21 pages | https://doi.org/10.1155/2020/3412582

A Fast Recognition Method for Space Targets in ISAR Images Based on Local and Global Structural Fusion Features with Lower Dimensions

Academic Editor: Hikmat Asadov
Received: 25 Jun 2019
Accepted: 26 Nov 2019
Published: 14 Feb 2020

Abstract

Feature extraction is the key step in Inverse Synthetic Aperture Radar (ISAR) image recognition. However, limited by the cost and conditions of ISAR image acquisition, it is difficult to obtain large-scale sample data, which in turn makes it difficult to learn highly discriminative deep features with the currently popular deep learning methods. In this paper, a new low-dimensional, strongly robust, and fast method for space target ISAR image recognition based on the fusion of local and global structural features is proposed. The method performs the trace transform along the longest axis of the ISAR image to generate the global trace feature of the space target. By introducing the Local Binary Pattern (LBP), a local structural feature, complementary fusion of global and local features is achieved, which compensates for the structural information missing from the trace feature and ensures the integrity of the ISAR image feature information. A representation of the trace and LBP features in a low-dimensional mapped feature space is then found with a manifold learning method. Because the local neighborhood relationships of the original feature space are preserved, the trace and LBP features are fused effectively, so that in practical applications the recognition accuracy is no longer affected by the choice of trace functional, the number of LBP blocks, or other factors, giving the algorithm high robustness. To verify the effectiveness of the proposed algorithm, an ISAR image database containing 1325 samples of 5 types of space targets is used for experiments. The results show that the classification accuracy for the 5 types of space targets exceeds 99% and that the recognition accuracy is no longer affected by the trace and LBP feature selection, demonstrating strong robustness.
The proposed method provides a fast, effective, and high-precision model for space target feature extraction and offers a reference for efficient space object identification under small-sample conditions.

1. Introduction

ISAR is an all-day, all-weather, long-range, high-resolution two-dimensional imaging technique that plays an essential role in civil and military fields [1]. The ISAR image can accurately describe the structural characteristics and the scattering center distribution of the target. Compared with the traditional Radar Cross-Section (RCS), High-Resolution Range Profile (HRRP), and Micro-Doppler Signature (MDS), the ISAR image provides more abundant information about the target. Therefore, the use of ISAR images for space target recognition has always been a research hotspot in the field of space situational awareness [2–4].

In recent years, target recognition algorithms based on deep convolutional neural networks (CNNs) have attracted wide attention in the field of computer vision. A convolutional network can automatically extract image features, but its training requires a large number of annotated samples. Limited by imaging conditions, it is difficult to obtain large-scale ISAR image data of space targets, which makes it difficult to recognize space target ISAR images accurately with a deep convolutional neural network. Therefore, extracting highly discriminative features of the space target ISAR image is the key to fast and accurate recognition under small-sample conditions. This also explains why ISAR image recognition methods based on feature extraction remain popular.

Affected by measurement conditions, imaging principles, and other factors, feature extraction from space target ISAR images is difficult [5]. The difficulty is mainly reflected in the following six aspects: (1) the ISAR image differs from a traditional optical image and is usually harder to interpret; (2) due to factors such as speckle noise and interference fringes, the quality of the ISAR image degrades to varying degrees; (3) the ISAR image usually appears as a sparse or isolated scattering center distribution; (4) the ISAR image changes with the incident angle of the radar wave, so the spatial orientation and spatial distribution of the three-dimensional target projected onto the two-dimensional ISAR image plane can take multiple forms; (5) for noncooperative space targets, the rotation speed of the target relative to the radar cannot be controlled, which makes the cross-range resolution of the ISAR image difficult to determine, so consistent scaling of the ISAR images cannot be guaranteed; (6) the ISAR image has high dimension, and directly extracting high-dimensional features leads to low computational efficiency, while extracting low-dimensional information risks losing useful information. All these characteristics make the classification of ISAR images difficult.

To overcome the above difficulties, a large amount of research on radar image feature extraction has been carried out. For example, literature [6–8] studies geometric features such as the elliptic Fourier descriptor, Zernike moments, and the outer contour for target recognition. Literature [9, 10] adopts the wavelet transform to extract low-dimensional features. Literature [11, 12] introduces scattering center features to classify radar images. Literature [13, 14] explores pattern structure features for radar image classification, and literature [1, 15] analyzes the statistical characteristics of radar images and uses them to classify radar targets. Literature [16, 17] proposes ISAR image classification of aerial targets using polar coordinate mapping and achieves high classification accuracy. Recently, Lee et al. pointed out in literature [18] that the polar coordinate mapping method of literature [17] is difficult to make robust in practice, because it assumes that the ISAR images used for training and testing all lie on the same fixed projection plane, which is inconsistent with the actual situation. Lee et al. therefore proposed extracting low-dimensional, highly discriminative features of ISAR images by the trace transform, effectively overcoming the impact of spatial distribution changes of ISAR images on classification accuracy. However, this method is still affected in practice by many factors, such as the trace functionals, target types, ISAR imaging conditions, and noise types. These factors make the angular region selection and the angular interval division uncertain; as a result, the classification accuracy is difficult to keep robust.

In this study, we provide an effective solution to overcome the defects of the above trace transformation methods. First of all, when trace feature extraction is carried out, local area selection and small-angle division are no longer considered, and trace transformation is conducted directly along the longest axis of the ISAR image. Therefore, the problems of unstable classification accuracy caused by local region selection and small-angle interval division are effectively solved. However, this operation will cause the loss of some trace feature information without considering local areas. Considering that trace features belong to global structural features, in order to make up for lost trace feature information and ensure high classification accuracy, this paper further proposes to introduce local structural features and enhance trace features by complementary fusion of global and local features.

As one of the most effective methods to describe local structural features of images, LBP features have high discrimination ability in the field of recognition [19, 20]. Therefore, this paper introduces the LBP feature and fuses it with the trace feature. When extracting LBP features of space target ISAR images, to make LBP features not affected by attitude changes of space target and retain the spatial structure relationship of the space target, the ISAR image needs to do block processing first. However, too many blocks will lead to high feature dimensions and low computational efficiency, and too few blocks will make the target background and noise dominant in the statistical characteristics of LBP features, thus affecting the classification accuracy. To solve this problem, this paper further proposes to use the manifold learning method to fuse trace features and LBP features. On the premise of not destroying the feature space structure, this method can not only retain the effective information contained in the two features to the maximum extent but also effectively reduce the fusion feature dimension, so as to improve the accuracy of ISAR image classification and ensure the computational efficiency.

The main contributions of this paper are as follows:
(1) The original trace features are improved. Extracting trace features along the longest axis of the ISAR image effectively overcomes the influence of local area selection and small-angle division on the trace features.
(2) Complementary fusion of global and local features is realized. By introducing the local structural feature LBP, part of the structural information lost by the trace feature is compensated, and the integrity of the ISAR image feature information is ensured.
(3) The problem that the classification accuracy is affected by the trace feature type and the LBP rectangular region division in practical applications is solved. A manifold learning method is proposed to fuse the trace and LBP features. Without destroying the feature space structure, this method retains the effective information of the two features to the maximum extent and reduces the feature dimension. Therefore, high classification accuracy is achieved no matter which trace and LBP features are selected.

The rest of the paper is organized as follows: in Section 2, the steps of ISAR image acquisition and preprocessing are presented. In Section 3, the original trace feature extraction algorithm is improved, and the detailed extraction process of the new trace feature is given. In Section 4, the extraction process of LBP features is given, and the limitations of LBP features in practical application are analyzed. In Section 5, the fusion algorithm framework is given. In Section 6, the recognition results are provided, and the classification accuracy before and after feature fusion is compared and analyzed. In Section 7, some conclusions are drawn.

2. ISAR Image Acquisition and Preprocessing

To use the ISAR image for space target recognition, the first step is to establish the ISAR image database of the space target. In this paper, an ISAR imaging model based on a 3D mesh model of the space target, ISAR linear frequency modulation (LFM) signal model, and ISAR image extraction model are established successively, and the ISAR image of the space target is finally obtained through side lobe suppression and Lee filtering preliminary processing. Among them, the ISAR image extraction model used in this paper is the Doppler imaging algorithm. The imaging results are shown in Figure 1.

To further segment the target from the background, the paper uses the Otsu method [21] to adaptively determine the optimal threshold of the ISAR image. The algorithm uses the principle of maximum interclass variance to divide the ISAR image gray values into two classes $C_0$ and $C_1$ (background and satellite) and determines the optimal threshold $t^*$ by maximizing the interclass variance between $C_0$ and $C_1$:

$$t^* = \arg\max_t \; \omega_0(t)\,\omega_1(t)\,\left[\mu_0(t) - \mu_1(t)\right]^2,$$

where $\omega_0(t)$ and $\omega_1(t)$ are the probabilities of the two classes of grayscale values separated by the candidate threshold $t$, and $\mu_0(t)$ and $\mu_1(t)$ are their mean gray levels. Pixels with values greater than the optimal threshold are taken as target pixels (preserving their original values), and those below the optimal threshold are set to 0, thereby segmenting the satellite from the background.
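The Otsu segmentation step can be sketched as follows. This is a minimal NumPy implementation of the maximum interclass variance search, not the paper's exact code; the function names are illustrative:

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Return the threshold that maximizes the between-class variance."""
    hist, bin_edges = np.histogram(img.ravel(), bins=n_bins)
    hist = hist.astype(float) / hist.sum()          # gray-level probabilities
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:k] * centers[:k]).sum() / w0   # class mean gray levels
        mu1 = (hist[k:] * centers[k:]).sum() / w1
        var_b = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var_b > best_var:
            best_var, best_t = var_b, centers[k]
    return best_t

def segment(img):
    """Keep pixels above the Otsu threshold, zero out the rest."""
    t = otsu_threshold(img)
    return np.where(img > t, img, 0.0)
```

On a bimodal image the returned threshold falls between the two modes, so the low-amplitude background is zeroed while target pixels keep their original values.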

Furthermore, to eliminate the change in intensity of the target's reflected signal caused by the change of the distance between the radar and the target satellite, the paper normalizes the ISAR image data. Normalization is performed by dividing by the sum of the amplitudes of the ISAR image:

$$I_N(x, y) = \frac{I_S(x, y)}{\sum_{x}\sum_{y} I_S(x, y)},$$

where $I_S$ represents the ISAR image after segmentation and $I_N$ represents the normalized ISAR image. Figure 2 shows the result of segmentation and normalization of the target satellite ISAR image.
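Assuming the normalization divides each pixel by the total amplitude of the segmented image, as described above, a minimal sketch is:

```python
import numpy as np

def normalize(img_seg):
    """Divide by the total amplitude so overall echo intensity cancels out."""
    s = img_seg.sum()
    return img_seg / s if s > 0 else img_seg
```

After this step the pixel values of every ISAR image sum to 1, regardless of the radar-target distance at acquisition time.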

3. Trace Feature Extraction of ISAR Images

The trace transform, proposed by Maria Petrou et al., is a technique for extracting features that are insensitive to scaling, translation, and rotation from images. The transform evaluates a functional of the image along specific trace lines. Figure 3 shows the definition of a trace line.

Each trace line can be characterized by two parameters: a distance $d$ and an angle $\theta$. The distance parameter $d$ is the distance from the center point of the image to the trace line, with value range $[0, \sqrt{M^2 + N^2}/2]$, where $M$ and $N$ are the length and width of the image, respectively. The angle parameter $\theta$ is the angle between the normal of the trace line and the horizontal reference line, with value range $[0°, 360°)$.

The result of the trace transform depends on the selected trace functional: different functionals give different mapping results. Table 1 lists eight commonly used trace transform functionals. In the following research, the trace features extracted by these eight functionals are studied.
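A discrete trace transform with a pluggable functional can be sketched as below. Nearest-neighbour line sampling and the grid of line positions are assumptions of this sketch; passing `np.sum` as the functional corresponds to the Radon-like T1 case:

```python
import numpy as np

def trace_transform(img, angles, functional):
    """For each angle, sample the image along parallel trace lines
    (nearest-neighbour sampling) and apply the trace functional."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    n_d = max(h, w)                       # number of trace lines per angle
    n_t = max(h, w)                       # samples along each line
    ds = np.linspace(-n_d / 2, n_d / 2, n_d)
    ts = np.linspace(-n_t / 2, n_t / 2, n_t)
    out = np.zeros((len(angles), n_d))
    for ai, ang in enumerate(angles):
        c, s = np.cos(ang), np.sin(ang)
        for di, d in enumerate(ds):
            # points on the trace line at signed distance d from the centre
            xs = cx + d * c - ts * s
            ys = cy + d * s + ts * c
            xi = np.rint(xs).astype(int)
            yi = np.rint(ys).astype(int)
            ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
            samples = img[yi[ok], xi[ok]] if ok.any() else np.zeros(1)
            out[ai, di] = functional(samples)
    return out
```

Swapping the `functional` argument (sum, max, median, and so on) reproduces the different rows of Table 1 within this simplified scheme.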


Trace transform | Functional used

T1, where , and
T2, where , and
T3, where , and
T4, where , and
T5, where , and
T6, where , and
T7, where , and
T8, where , and

In Table 1, the T1 transform represents the traditional Radon transform; $\mathrm{median}\{x_i, w_i\}$ denotes the weighted median of the sequence $\{x_i\}$ with weight sequence $\{w_i\}$, in which each element $x_i$ is counted $w_i$ times when the median is taken. For example, when all weights equal 1, the weighted median reduces to the standard median: finding the standard median of the sequence $1, 2, \dots, 10$ gives the result 5.5.
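The weighted median used by several of the functionals can be implemented as follows. This sketch assumes the common definition in which each element is counted according to its weight:

```python
import numpy as np

def weighted_median(x, w):
    """Median of the sequence in which each x[i] is repeated w[i] times."""
    order = np.argsort(x)
    x = np.asarray(x, float)[order]
    w = np.asarray(w, float)[order]
    cum = np.cumsum(w)
    half = w.sum() / 2
    # first index whose cumulative weight reaches half the total weight
    i = int(np.searchsorted(cum, half))
    if cum[i] == half and i + 1 < len(x):
        return (x[i] + x[i + 1]) / 2      # tie: average the straddling values
    return x[i]
```

With unit weights this reduces to the ordinary median, matching the 5.5 example above.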

When extracting trace features of an image, the traditional method usually applies a diametric functional and a circus functional to the trace transform result, generating a very low-dimensional feature that is invariant to rotation, translation, and scaling [22, 23]. However, this results in a severe loss of ISAR image information, so the classification accuracy is limited. In this regard, literature [18] adopts a new trace feature extraction method: it first finds the longest axis of the ISAR image, then selects an angular region near the longest axis, divides it into several equal small-angle intervals, and performs a trace transform on each angular interval to generate a trace matrix; finally, each column of the trace matrix is used as a feature vector for space target recognition. This method solves the severe loss of ISAR image information caused by the original trace feature extraction method.

However, as noted in Section 1, this method is still affected in practice by many factors, such as the trace functionals, target types, ISAR imaging conditions, and noise types, which make the angular region selection and the angular interval division uncertain; as a result, the classification accuracy is difficult to keep robust.

To solve this problem, the paper no longer considers the angle region selection and the angle interval division when extracting the trace feature but directly extracts the trace feature along the longest axis. At the same time, to compensate for the trace feature information lost in the above operation, the paper introduces the local structural feature LBP and fuses it with the new trace feature, thus achieving the complementary enhancement of the two features and ensuring the classification accuracy of the ISAR image.

First, Canny edge detection and the Hough transform [24] are used to estimate the longest axis of the ISAR image. Figure 4 shows a schematic of the longest axis of the ISAR image. The selected trace functional is then evaluated along the direction of the longest axis at each sampled line position, producing the trace feature vector of the ISAR image.
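As a lightweight stand-in for the Canny-plus-Hough estimate used in the paper, the dominant axis can also be approximated by the principal eigenvector of the bright-pixel coordinate covariance. This alternative is only an illustrative sketch, not the paper's method:

```python
import numpy as np

def longest_axis_angle(img, thresh=0.0):
    """Estimate the dominant (longest-axis) orientation of the target as the
    principal eigenvector of the coordinate covariance of bright pixels."""
    ys, xs = np.nonzero(img > thresh)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)               # centre the coordinates
    cov = pts.T @ pts / len(pts)
    vals, vecs = np.linalg.eigh(cov)
    v = vecs[:, np.argmax(vals)]          # eigenvector of largest eigenvalue
    return np.arctan2(v[1], v[0])
```

For an elongated scattering-centre distribution this returns (up to a sign) the orientation of the axis along which the target extends furthest.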

It is worth noting that when the same ISAR image is translated, the trace feature changes, as shown in Figures 5(a)–5(d).

Figure 5 shows the trace features corresponding to the ISAR image before and after translation. It can be seen that the trace features before and after the translation are shifted along the distance axis and are otherwise almost identical. Therefore, to eliminate the influence of ISAR image translation on space target classification, the paper further shifts each trace feature vector so that its first element is nonzero. Figure 6 shows the result of the shift alignment operation applied to Figure 5.
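The shift alignment can be sketched as a non-circular shift that moves the first nonzero entry of the trace vector to the front, under the assumption that leading zeros carry no target energy:

```python
import numpy as np

def shift_align(trace_vec):
    """Shift the trace vector so its first nonzero entry sits at position 0,
    cancelling translations of the ISAR image along the distance axis."""
    v = np.asarray(trace_vec, float)
    nz = np.flatnonzero(v)
    if nz.size == 0:
        return v.copy()                   # all-zero vector: nothing to align
    out = np.zeros_like(v)
    out[:v.size - nz[0]] = v[nz[0]:]      # drop leading zeros, pad the tail
    return out
```

Two trace vectors that differ only by a translation of the target map to the same aligned vector, which is what removes the translation sensitivity.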

Compared with the traditional target alignment method using the target ISAR image, the shift alignment operation calculation is simple and easy, which is of great significance for improving the target classification efficiency. Figure 7 shows the entire flow of trace feature extraction.

4. LBP Feature Extraction of ISAR Images

Extracting the trace feature without considering angle region selection and angle interval division causes some feature information to be lost, which affects the classification accuracy. At the same time, the trace feature describes the structure of the ISAR image only from a global perspective, and the limited image information it contains inevitably limits the classification accuracy. To solve these problems, the paper introduces the local structural feature LBP and fuses it with the trace features extracted in Section 3. Through the complementary enhancement of the two features, the integrity of the ISAR image feature information is guaranteed.

The LBP operator is one of the most effective methods for describing local structural features of images; it is invariant to monotonic grayscale variation, computationally efficient, and strongly discriminative. It is therefore widely used in industrial detection [25, 26], medical image processing [27], remote sensing image analysis [28, 29], face detection [30], and other fields of image processing and computer vision.

The original LBP operator takes each pixel of the image as the center and uses the pixel value of the center point to threshold the surrounding neighborhood. And the obtained 8-bit binary number is used as the label of the pixel. The statistical histogram of all the labels of the image is the structural feature of the image. Figure 8 shows the schematic of the original LBP operator.

To compute structural features at different scales, the original LBP operator was extended to a circular neighborhood system. The circular neighborhood is centered on the pixel to be labeled, with sampling points evenly spaced on a circle around that pixel, as shown in Figure 9. LBP features of different scales can then be computed by changing the size of the circular neighborhood.

In Figure 9, the notation $(P, R)$ is used, where $R$ represents the radius of the circular neighborhood and $P$ the number of sampling points. When a sampling point does not fall at the center of a pixel, its value is obtained by bilinear interpolation.

To reduce the feature dimension, the LBP operator with a circular neighborhood is further extended to the "uniform pattern" LBP [31]. A pattern is uniform if the number of bitwise 0/1 transitions (hops) in its circular binary code is not greater than 2. For example, 00000000 (0 hops), 00100000 (2 hops), and 11000011 (2 hops) are all uniform patterns, whereas 11001100 (4 hops), 11001001 (4 hops), and 10101000 (6 hops) are nonuniform patterns. When calculating the histogram of the uniform-pattern LBP, each uniform pattern is assigned its own bin and all nonuniform patterns share a single bin, so the number of pattern types is reduced from the original $2^P$ to $P(P-1) + 3$ (59 for $P = 8$). Therefore, using the uniform-pattern LBP significantly reduces the feature dimension, saves storage space, and improves computational efficiency.
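The hop count and the resulting bin count can be checked with a short script; this uses the standard circular transition count for a P-bit pattern:

```python
def hop_count(pattern, bits=8):
    """Circular number of bitwise 0/1 transitions in the pattern."""
    b = [(pattern >> i) & 1 for i in range(bits)]
    return sum(b[i] != b[(i + 1) % bits] for i in range(bits))

def is_uniform(pattern, bits=8):
    """A pattern is 'uniform' when it has at most 2 circular transitions."""
    return hop_count(pattern, bits) <= 2

# 2^P raw labels collapse to P*(P-1) + 2 uniform patterns plus one
# catch-all bin for everything else: 59 histogram bins when P = 8.
```

Enumerating all 256 eight-bit patterns confirms exactly 58 uniform patterns, hence 59 histogram bins once the shared nonuniform bin is added.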

Next, feature extraction is performed using the uniform-pattern LBP, denoted $\mathrm{LBP}_{P,R}^{u2}$, where the subscript $(P, R)$ specifies the circular neighborhood and the superscript $u2$ indicates the uniform pattern.

To make the proposed feature robust to attitude changes of the target satellite and to effectively preserve the spatial structure of the target, the paper first divides the ISAR image into multiple local regions (as shown in Figure 10). Then, $\mathrm{LBP}_{P,R}^{u2}$ is used to calculate the LBP histogram for each local region separately. Finally, the obtained histograms are concatenated to generate the LBP feature of the ISAR image. The size of the LBP feature is $m \times l$, where $m$ is the number of local regions and $l$ is the length of a single LBP histogram. Figure 11 shows the LBP feature extraction process of the ISAR image.
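A blockwise LBP feature along these lines can be sketched with the basic 3x3 operator (the original 8-neighbour version, used here for simplicity instead of the uniform-pattern circular operator; block counts and bin counts are illustrative):

```python
import numpy as np

def lbp_labels(img):
    """Original 3x3 LBP: threshold the 8 neighbours against the centre and
    pack the comparison bits into an 8-bit label per interior pixel."""
    h, w = img.shape
    labels = np.zeros((h - 2, w - 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        labels |= (nb >= c).astype(int) << bit
    return labels

def block_lbp_feature(img, blocks=(2, 2), n_bins=256):
    """Split the label map into blocks, histogram each, and concatenate."""
    labels = lbp_labels(img)
    bh = labels.shape[0] // blocks[0]
    bw = labels.shape[1] // blocks[1]
    feats = []
    for by in range(blocks[0]):
        for bx in range(blocks[1]):
            patch = labels[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
            feats.append(hist)
    return np.concatenate(feats)
```

The feature length is (number of blocks) x (histogram length), which is exactly the m x l size discussed above and shows why too many blocks inflates the dimension.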

The extracted LBP features contain three levels of information: (1) Each label of the LBP histogram contains information at the pixel level. (2) The summation of pixel information on a local area produces information at the regional level. (3) The regional histograms are connected to form the information of the target satellite ISAR image at the overall level.

However, the LBP features extracted by the above methods are affected by the block number of local regions. If too many blocks are selected, the LBP feature dimension will be too high and the calculation efficiency will be low. Otherwise, if the number of blocks is too small, the target background and noise will play a dominant role in the statistical characteristics of LBP features, thereby affecting the classification accuracy. To overcome this shortcoming, the paper then uses the manifold learning method to fuse the trace features and LBP features. While realizing the dimension reduction of features, it retains the valid information of the two features to the maximum extent and effectively solves the problem that the classification accuracy is affected by the selection of LBP feature blocks in practical application.

5. Multifeature Fusion of ISAR Images Based on Manifold Learning

From the analysis of Sections 3 and 4, it is known that the fusion of trace features and LBP features needs to solve two critical problems: (1) The fusion features can retain ISAR image information contained in trace features and LBP features to the utmost, and its classification accuracy is not affected by trace feature selection, LBP feature block number selection, and other factors, with strong robustness. (2) The dimension of the fusion features is reduced, thereby saving a storage space and improving computational efficiency.

The feature fusion method is mainly divided into two categories: early fusion and late fusion. The early fusion method refers to the fusion of features. The commonly used methods mainly include feature splicing and weighted summation after feature alignment. The late fusion method mainly refers to the fusion of classification accuracy corresponding to different features. Therefore, the feature fusion of this paper belongs to early fusion. Considering that the trace features and the LBP features are both high-dimensional features, if they are directly fused by the abovementioned early fusion method, problems such as excessive fusion feature dimension and complex classification calculation will occur.

To address the problems in fusing high-dimensional features, literature [32] proposed first reducing the dimension of each feature and then performing the fusion, which can effectively remove the redundant information of each feature and achieve dimension reduction. However, trace features describe the structural information of the ISAR image from the global perspective, whereas LBP features focus on local description and contain more location and detail information, so the information in the two features is structurally complementary to some extent. Fusing them by the method of literature [32] would destroy this complementary structural information while removing the redundant information of each feature.

In summary, the paper proposes to use the manifold learning method to directly fuse the trace features and the LBP features of the ISAR image, so that the information contained in the two features can be retained to the maximum extent while reducing the feature dimension. Figure 12 shows a schematic diagram of using the manifold learning method to fuse features.

The manifold learning method was first proposed by Professor Josh Tenenbaum of the Massachusetts Institute of Technology in the journal Science in 2000 and has since become a research hotspot in the field of information science. A manifold is defined as follows: let $M$ be a Hausdorff space; if for every point $x \in M$ there is a neighborhood of $x$ in $M$ homeomorphic to an open set in the $d$-dimensional Euclidean space $\mathbb{R}^d$, then $M$ is a $d$-dimensional manifold. Manifold learning not only finds a low-dimensional representation of the sample set in the mapped feature space but also preserves, in that low-dimensional space, the local neighborhood relationships of the sample set in the original feature space.

Commonly used manifold learning methods mainly include local linear embedding (LLE), local tangent space alignment (LTSA), maximum linear embedding (MLE), and isometric feature mapping (ISOMAP). Among them, the ISOMAP method is a noniterative global optimization algorithm, which is improved from the multidimensional scaling (MDS). It uses the geodesic distance on the manifold as the data difference metric instead of the Euclidean distance in the original Euclidean space so that the mapped data can reflect the actual low-dimensional structure of the manifold.

The core idea of the ISOMAP method is to represent points in the high-dimensional space by coordinates in a low-dimensional Euclidean space, removing the redundant information of the features and reducing the feature dimension. That is, a set of data points $Y = \{y_1, y_2, \dots, y_N\}$ in a low-dimensional space $\mathbb{R}^d$ is used to represent the original high-dimensional feature set $X = \{x_1, x_2, \dots, x_N\} \subset \mathbb{R}^D$, with $d \ll D$.

The main steps of trace feature and LBP feature fusion using the ISOMAP method are as follows:

Step 1. Establish the neighborhood graph. The high-dimensional feature data set $X = \{x_1, \dots, x_N\}$ is input, where each $x_i$ is formed by simple concatenation of the trace feature vector and the LBP feature vector. The data set is treated as the vertex set of a graph; whenever two vertices are adjacent, an edge connects them. Adjacency can be determined with the $k$-nearest neighbor (KNN) rule, or by thresholding the pairwise distances, in which case points closer than the threshold are considered neighbors. All such edges form the edge set $E$, completing the neighborhood graph.

Step 2. Calculate the geodesic distances. The Floyd algorithm or the Dijkstra algorithm is used to compute the shortest path between each pair of vertices in the data set, and the length of this shortest path is taken as the approximate geodesic distance between the corresponding nodes.

Step 3. Data embedding. The calculation result in Step 2 is used as the input of the MDS algorithm to calculate the low-dimensional fusion feature of the original high-dimensional feature. The algorithm framework for feature fusion based on the ISOMAP method is given in Algorithm 1.

Input: the high-dimensional feature data set $X = \{x_1, \dots, x_N\}$, where $x_i$ is the $i$th vector of $X$, spliced from the trace feature vector and the LBP feature vector; the neighbor parameter $k$; and the fusion feature dimension $d$
Process:
 1: For $i = 1, 2, \dots, N$ do
 2:  Determine the $k$ nearest neighbor vertexes of $x_i$;
 3:  Set the distance between $x_i$ and its $k$ nearest neighbors to the Euclidean distance, and the distance to all other vertexes to infinity;
 4: End for
 5: Call a shortest path algorithm (Floyd or Dijkstra) to calculate the distance between any two vertexes;
 6: Input the resulting distance matrix into the MDS algorithm;
 7: Return the output of the MDS algorithm.
Output: the low-dimensional fusion features $Y = \{y_1, \dots, y_N\} \subset \mathbb{R}^d$ of the high-dimensional data set $X$.
Algorithm 1. Isometric feature mapping (ISOMAP).
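The steps of Algorithm 1 can be condensed into a small NumPy-only sketch: a kNN graph, Floyd-Warshall shortest paths as approximate geodesic distances, and classical MDS on the resulting distance matrix. Parameter names are illustrative, and a production implementation would use a proper sparse-graph shortest-path routine:

```python
import numpy as np

def isomap(X, n_neighbors=5, n_components=2):
    """Minimal ISOMAP: kNN graph -> Floyd-Warshall geodesics -> classical MDS."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # keep only each point's k nearest neighbours, sever all other edges
    g = np.full((n, n), np.inf)
    for i in range(n):
        nn = np.argsort(d[i])[1:n_neighbors + 1]
        g[i, nn] = d[i, nn]
        g[nn, i] = d[i, nn]               # symmetrize the graph
    np.fill_diagonal(g, 0.0)
    # Floyd-Warshall shortest paths approximate the geodesic distances
    for k in range(n):
        g = np.minimum(g, g[:, k:k + 1] + g[k:k + 1, :])
    # classical MDS on the squared geodesic distance matrix
    j = np.eye(n) - np.ones((n, n)) / n   # double-centering matrix
    b = -0.5 * j @ (g ** 2) @ j
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

For points sampled from a straight line embedded in 3D, the 1-D embedding reproduces the geodesic (here also Euclidean) end-to-end distance, which is the property the fusion step relies on.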

Through the operations in Sections 3–5, we have realized feature extraction and feature fusion of the space target ISAR image. Next, we use an SVM (support vector machine) classifier for classification. The framework of the classification algorithm is shown in Figure 13.

As shown in Figure 13, ISAR image recognition of space targets in practice is divided into two steps: (1) Training a model: first, the existing space target ISAR images are preprocessed, and the trace and LBP features are extracted. Then, the manifold learning method is used to fuse the two features. Finally, the SVM classifier is trained with the fusion features, yielding the classification model of the space target ISAR image. (2) Recognizing a target: following the processing in step (1), the fusion features of the ISAR image to be recognized are obtained and input into the pretrained classification model, which outputs the space target category of the ISAR image.
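The two-step pipeline can be sketched as below; a nearest-centroid classifier stands in for the SVM so that the example is self-contained, and the feature matrices are assumed to be the fused low-dimensional features produced by Algorithm 1:

```python
import numpy as np

class NearestCentroid:
    """Illustrative stand-in for the SVM classifier used in the paper."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from each query to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :],
                           axis=-1)
        return self.classes_[d.argmin(axis=1)]

def train_and_recognize(train_feats, train_labels, query_feats):
    """Step (1): train on fused features; step (2): classify new images."""
    clf = NearestCentroid().fit(train_feats, train_labels)
    return clf.predict(query_feats)
```

In the paper's pipeline the same two calls would wrap an SVM; only the classifier object changes, not the train-then-recognize structure.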

6. Experimental Results and Analysis

6.1. Experimental Setting and Data Generation

Since there is currently no public ISAR image data set, in order to verify the effectiveness of the proposed method, we generated ISAR images of space targets through simulation according to a realistic scene. The simulation accounts for factors such as self-occlusion and the nonlinear scattering mechanism of the ISAR imaging process, so the simulated data approximate measured data to a certain extent. Following Section 2, the ISAR image data of the target satellites are generated first. The 3DMAX software was used to build 3D mesh models of 5 types of targets, SAT_1, SAT_2, SAT_3, SAT_4, and SAT_5, each with a total of 10000 facets. In the STK simulation environment, the target satellites fly in a circular orbit with an orbital height of 1100 km, an orbit inclination of 70°, and a right ascension of the ascending node (RAAN) of 115°, and the radar is placed at 29.5° north latitude and 119° east longitude. Linear frequency modulation (LFM) signals with the chosen pulse repetition frequency, bandwidth, pulse width, and sampling frequency are then used to irradiate the five types of space targets and generate the radar echoes. The echo data is processed by the Doppler imaging algorithm, followed by SVA side lobe suppression and Lee filtering, to finally obtain the ISAR images of the space targets. Figure 14 shows the 3D mesh models and ISAR imaging results of the five types of space targets.

Then, the ISAR images are segmented and normalized to separate the satellite from the background and to eliminate the intensity variation of the radar echo caused by the change of the distance between the radar and the target satellite. Figure 15 shows the segmentation and normalization results of the ISAR images of the five types of space targets.

In the experiment, five data sets are generated for the five types of target satellites. Each data set contains 265 ISAR images, and each image size is .

To illustrate the data set, twenty-four ISAR images of each type of space target were randomly selected and are displayed in Figure 16.

6.2. Classification Experiment of Space Target ISAR Images

To evaluate the effectiveness of the proposed algorithm, simulation experiments are carried out next. They consist of three parts. First, Section 6.2.1 conducts classification experiments with the trace features of ISAR images, so as to study the classification performance of the 8 trace features. Then, Section 6.2.2 studies the classification performance of the LBP features to determine the impact of different rectangular region division methods on classification accuracy. Finally, Section 6.2.3 studies the classification performance of the low-dimensional fusion features to verify the correctness of the proposed algorithm.

6.2.1. ISAR Image Classification Experiment Based on the Trace Feature

Firstly, trace features of the space target ISAR images are extracted using the eight trace transformation functions given in Table 1. Then, each of the eight extracted trace feature sets is classified separately by the SVM classifier. 90% of each type of sample is randomly selected for training and the remaining 10% for testing. The confusion matrices and recognition accuracies of the test data are shown in Figure 17.
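The trace transformation applies a functional along scan lines of the image at many orientations. The following is a hedged sketch of this idea, not the paper's implementation: the image is rotated and a trace functional is applied along each row; a plain line integral (sum) stands in for T1, while the paper's eight functionals T1–T8 differ, and the number of angles here is an arbitrary illustration.

```python
# Sketch of trace-style feature extraction: rotate the image and apply a
# trace functional along each scan line, concatenating the results over
# all orientations. The functional (np.sum) and angle count are assumed.
import numpy as np
from scipy.ndimage import rotate

def trace_feature(img, functional=np.sum, n_angles=4):
    feats = []
    for ang in np.linspace(0, 180, n_angles, endpoint=False):
        rot = rotate(img, ang, reshape=False, order=1)
        feats.append(functional(rot, axis=1))  # functional along each line
    return np.concatenate(feats)

img = np.ones((8, 8))
f = trace_feature(img)
print(f.shape)  # (n_angles * 8,) = (32,)
```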

In Figures 17(a)–17(h), the abscissa indicates the predicted class, the ordinate indicates the true class, and the values on the diagonal indicate the numbers of correctly classified samples.

To avoid the influence of the random split of training and test data on the classification results, we conducted 10-fold cross validation five times. First, the space target ISAR image data set is randomly divided into ten equal parts. Nine parts are used in turn as training data and the remaining one as test data, yielding a classification accuracy for each of the ten tests. The average of the ten results serves as the estimate of the algorithm's classification accuracy (10-fold). This procedure is then repeated five times, and the mean of the five estimates is taken as a measure of the algorithm's robustness. The classification results are given in Table 2.
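The 5-times 10-fold protocol above can be sketched with scikit-learn's repeated stratified splitter. The data here is a synthetic stand-in generated with `make_classification`; a real run would use the extracted trace or LBP features.

```python
# 5 repeats of 10-fold cross validation with an SVM, mirroring the
# evaluation protocol described above (synthetic stand-in data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=30, n_classes=5,
                           n_informative=10, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(SVC(), X, y, cv=cv)   # 50 fold accuracies

# Mean within each 10-fold run, then the 5 run-level estimates.
per_run = scores.reshape(5, 10).mean(axis=1)
print(len(scores), per_run.shape)
```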


Table 2: Classification accuracy of the eight trace features in the 5 times 10-fold cross validation.

            T1      T2      T3      T4      T5      T6      T7      T8
Dimension   364     364     364     364     364     364     364     364
1st         0.593   0.630   0.509   0.565   0.611   0.602   0.611   0.601
2nd         0.611   0.537   0.500   0.583   0.620   0.593   0.574   0.602
3rd         0.583   0.583   0.630   0.537   0.620   0.611   0.593   0.528
4th         0.639   0.611   0.491   0.528   0.611   0.630   0.565   0.537
5th         0.657   0.574   0.519   0.602   0.519   0.546   0.537   0.463
Average     0.617   0.587   0.530   0.563   0.596   0.596   0.576   0.546

Table 2 shows the classification accuracy of the 8 trace features in the 5 times 10-fold cross validation. The second row gives the dimension of each feature, and the last row shows the average classification accuracy over the five runs. As can be seen from Figure 17 and Table 2, two problems arise when only trace features are used to classify space target ISAR images: (1) The classification accuracy is limited; the maximum is only 61.7%. (2) The accuracy depends on the choice of trace transform function, and the accuracies of different trace features differ considerably. The T1 transformation yields the highest accuracy, followed in descending order by T5, T6, T2, T7, T4, T8, and T3. In practice, one would therefore have to study the classification performance of the various trace features and select the one with the highest accuracy, which adds complexity to the workflow.

To overcome the above problem, the LBP features of the space target ISAR images are introduced.

6.2.2. ISAR Image Classification Experiment Based on the LBP Feature

To determine which LBP feature should be adopted for the subsequent feature fusion, this section studies the classification performance of LBP features extracted with different rectangular region division methods. Following the LBP feature extraction process shown in Figure 11 of Section 4, the ISAR images are first divided into different rectangular regions, and the LBP features are then extracted. On this basis, the SVM classifier is used to evaluate the classification performance of each division method. 90% of each type of sample is randomly selected for training and the remaining 10% for testing. For the 5 times 10-fold cross validation, the dimensions of the different LBP features and their corresponding average classification accuracies are shown in Figure 18, with detailed results given in Table 3.
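The block-wise LBP descriptor can be sketched as follows. This is an illustrative implementation using scikit-image's uniform-pattern LBP (the 'nri_uniform' method with P=8, R=1 yields 59 codes, matching the per-block dimension in Table 3); the 2×2 grid and all parameters are assumptions for demonstration.

```python
# Sketch of block-wise LBP: split the image into a grid x grid layout and
# concatenate one 59-bin uniform-pattern histogram per cell.
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp(img, grid=2, P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="nri_uniform")
    h, w = codes.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = codes[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=59, range=(0, 59))
            feats.append(hist / max(hist.sum(), 1))  # normalized histogram
    return np.concatenate(feats)

img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
feat = block_lbp(img, grid=2)
print(feat.shape)  # 4 blocks x 59 bins = (236,)
```

Note that a 2×2 grid gives a 236-dimensional descriptor, which agrees with the second row of Table 3; larger grids grow the dimension as 59 times the number of cells.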


Table 3: Classification accuracy of the 13 LBP features in the 5 times 10-fold cross validation. (LBPn×n denotes the LBP feature extracted with an n × n rectangular region division.)

           Dimension   1st     2nd     3rd     4th     5th     Average
LBP1×1     59          0.594   0.622   0.641   0.576   0.567   0.600
LBP2×2     236         0.678   0.696   0.761   0.761   0.743   0.728
LBP3×3     531         0.678   0.632   0.724   0.622   0.696   0.670
LBP4×4     944         0.770   0.752   0.761   0.761   0.798   0.769
LBP5×5     1475        0.770   0.761   0.761   0.743   0.706   0.748
LBP6×6     2124        0.770   0.733   0.761   0.733   0.761   0.752
LBP7×7     2891        0.770   0.780   0.733   0.752   0.733   0.754
LBP8×8     3776        0.798   0.817   0.834   0.798   0.752   0.804
LBP9×9     4779        0.789   0.835   0.826   0.798   0.807   0.811
LBP10×10   5900        0.854   0.835   0.826   0.807   0.826   0.830
LBP11×11   7139        0.817   0.817   0.863   0.789   0.780   0.813
LBP12×12   8496        0.835   0.844   0.835   0.835   0.844   0.838
LBP13×13   9971        0.826   0.817   0.854   0.789   0.854   0.828

In Figure 18, the abscissa indicates the LBP features extracted by the different rectangular region division methods, the left ordinate indicates the classification accuracy, and the right ordinate indicates the dimension of the LBP features (on a logarithmic scale). Table 3 shows the classification accuracy of the 13 LBP features in the 5 times 10-fold cross validation; the second column gives the dimension of each feature, and the last column shows the average classification accuracy over the five runs.

It can be seen from Figure 18 and Table 3 that the classification accuracy is also low when only LBP features of space target ISAR images are used, with a maximum of 83.8%. Meanwhile, the accuracy is affected by the rectangular region division method: the accuracies of different division methods vary greatly, with a gap of 23.8 percentage points between the maximum and the minimum. As the number of divided rectangular regions increases, the classification accuracy generally trends upward and finally stabilizes at about 82%, but the feature dimension grows rapidly (quadratically in the grid size), demanding more storage space for classification and increasing computational complexity.

From the above analysis, low-dimensional LBP features require little storage space but classify poorly, while high-dimensional LBP features classify better but require large storage space. In practice, it is therefore necessary to determine which rectangular region division method yields an LBP feature that satisfies the actual constraints, which again adds complexity to the workflow.

To overcome this problem, the ISOMAP method is adopted to fuse the trace features with the LBP features, so as to obtain low-dimensional fusion features with stronger expressive force on the spatial structure.

6.2.3. ISAR Image Classification Experiment Based on the Low-Dimensional Fusion Feature

Using the ISOMAP-based feature fusion method proposed in Section 4, each of the eight trace features is fused with each of the 13 LBP features in Table 4. The dimension of the fusion features is set to 25. For comparison, the manifold learning method is also used to obtain 25-dimensional representations of the trace features and the LBP features alone. Classification experiments are carried out with the SVM classifier; 90% of each type of sample is randomly selected for training and the remaining 10% for testing. Table 4 shows the average classification accuracy of the 8 trace features after fusion with the 13 LBP features in the 5 times 10-fold cross validation. Tables 5 and 6 show the average classification accuracy of the "trace feature+manifold learning" and "LBP feature+manifold learning" methods in the 5 times 10-fold cross validation.
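The fusion step itself can be sketched as feature-level concatenation followed by an Isomap embedding into the 25-dimensional space used in these experiments. The vectors below are synthetic stand-ins for one trace feature (364-d) and the 1×1 LBP feature (59-d); only the shapes come from the paper.

```python
# Sketch of ISOMAP-based fusion: stack one trace vector and one LBP
# vector per image, then embed the stacked features into 25 dimensions.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
trace = rng.normal(size=(150, 364))   # stand-in 364-d trace features
lbp = rng.normal(size=(150, 59))      # stand-in 59-d (1x1) LBP features

stacked = np.hstack([trace, lbp])     # feature-level concatenation
fused = Isomap(n_neighbors=10, n_components=25).fit_transform(stacked)
print(fused.shape)  # (150, 25)
```

Because Isomap preserves local neighborhood relations of the stacked features, the 25-dimensional output retains the complementary structure of both descriptors while discarding redundant dimensions.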


Table 4: Average classification accuracy of the low-dimensional fusion features of the trace and LBP features in the 5 times 10-fold cross validation.

           T1      T2      T3      T4      T5      T6      T7      T8      Average
LBP1×1     0.989   0.989   0.987   0.985   0.989   0.994   0.993   0.974   0.988
LBP2×2     0.982   0.991   0.980   0.989   0.993   0.993   0.994   0.970   0.987
LBP3×3     0.993   0.994   0.991   0.992   0.989   0.992   0.993   0.978   0.990
LBP4×4     0.996   0.996   0.992   0.987   0.990   0.994   0.993   0.976   0.991
LBP5×5     0.995   0.991   0.992   0.988   0.992   0.989   0.989   0.974   0.989
LBP6×6     0.998   0.993   0.992   0.993   0.992   0.991   0.994   0.981   0.992
LBP7×7     0.991   0.997   0.994   0.990   0.989   0.992   0.988   0.978   0.990
LBP8×8     0.991   0.992   0.992   0.984   0.991   0.993   0.987   0.978   0.989
LBP9×9     0.993   0.991   0.990   0.985   0.994   0.994   0.988   0.982   0.990
LBP10×10   0.997   0.994   0.991   0.988   0.988   0.989   0.994   0.988   0.991
LBP11×11   0.994   0.994   0.991   0.994   0.992   0.996   0.989   0.983   0.992
LBP12×12   0.991   0.985   0.984   0.987   0.994   0.994   0.986   0.991   0.989
LBP13×13   0.990   0.991   0.980   0.987   0.981   0.984   0.996   0.977   0.986
Average    0.992   0.992   0.989   0.988   0.990   0.992   0.991   0.979   /


Table 5: Average classification accuracy of the "trace feature+manifold learning" method in the 5 times 10-fold cross validation.

           T1+ML   T2+ML   T3+ML   T4+ML   T5+ML   T6+ML   T7+ML   T8+ML
Accuracy   0.706   0.657   0.428   0.759   0.787   0.713   0.725   0.789

Note: ML stands for manifold learning.

Table 6: Average classification accuracy of the "LBP feature+manifold learning" method in the 5 times 10-fold cross validation.

              Accuracy
LBP1×1+ML     0.535
LBP2×2+ML     0.661
LBP3×3+ML     0.693
LBP4×4+ML     0.691
LBP5×5+ML     0.713
LBP6×6+ML     0.761
LBP7×7+ML     0.793
LBP8×8+ML     0.780
LBP9×9+ML     0.811
LBP10×10+ML   0.828
LBP11×11+ML   0.822
LBP12×12+ML   0.822
LBP13×13+ML   0.837

Table 4 lists the average classification accuracy of the low-dimensional fusion features of the trace and LBP features in the 5 times 10-fold cross validation. The last row and the last column of the table give the average classification accuracy corresponding to each trace feature and each LBP feature, respectively.

The following conclusions can be drawn from Table 4: (1) The proposed algorithm achieves high classification accuracy. The lowest classification accuracy is 97.4%, and the highest reaches 99.8%, which demonstrates the effectiveness of the proposed algorithm. (2) The classification performance of each type of feature is dramatically improved. For the trace features, the lowest classification accuracy of the eight trace features reaches 97.9%, which is 44.9 percentage points higher than the 53.0% accuracy before fusion. At the same time, the highest classification accuracy reaches 99.2%, which is 37.5 percentage points higher than the 61.7% accuracy before fusion.

Besides, the maximum difference between the classification accuracies of the different trace features is only 1.3 percentage points, so any trace feature can be selected in practice. This effectively solves the trace feature selection problem and reduces the complexity of the workflow.

For the LBP features, the lowest classification accuracy of the 13 LBP features reaches 98.6%, which is 38.6 percentage points higher than the 60.0% accuracy before fusion. At the same time, the highest classification accuracy reaches 99.2%, which is 15.4 percentage points higher than the 83.8% accuracy before fusion. In addition, the accuracies of the different rectangular region division methods differ by only 0.6 percentage points, so any LBP feature can be selected in practice. This effectively solves the LBP feature selection problem (for example, selecting the lowest-dimensional LBP feature hardly affects the final classification accuracy).

Tables 5 and 6 show the average classification accuracy of the "trace feature+manifold learning" and "LBP feature+manifold learning" methods in the 5 times 10-fold cross validation. Comparing Tables 4, 5, and 6 shows that the classification accuracy of the proposed algorithm exceeds that of both single-feature methods, which further verifies the effectiveness of the proposed method.

In summary, the proposed algorithm has a high classification performance. At the same time, it solves the problem of which trace features and LBP features should be selected in the actual application process, which significantly reduces the complexity of the classification work and enhances the practical applicability of the algorithm.

To further analyze the impact of the fusion feature dimension on classification accuracy, this paper takes the fusion features of the eight trace features (T1 to T8) with the 1 × 1 LBP feature as an example. Table 7 shows the average classification accuracy of the 8 fusion features at different dimensions in the 5 times 10-fold cross validation, and Figure 19 shows the variation of classification accuracy with the fusion feature dimension.


Table 7: Average classification accuracy of the eight fusion features at different dimensions in the 5 times 10-fold cross validation.

Dimension   T1_LBP1×1   T2_LBP1×1   T3_LBP1×1   T4_LBP1×1   T5_LBP1×1   T6_LBP1×1   T7_LBP1×1   T8_LBP1×1
3           0.887       0.953       0.956       0.922       0.935       0.916       0.864       0.801
6           0.951       0.968       0.963       0.955       0.963       0.976       0.971       0.948
9           0.972       0.970       0.956       0.981       0.982       0.983       0.980       0.960
12          0.975       0.982       0.949       0.983       0.982       0.982       0.979       0.965
15          0.970       0.985       0.967       0.989       0.989       0.988       0.987       0.967
18          0.987       0.986       0.968       0.994       0.986       0.986       0.991       0.966
21          0.991       0.988       0.982       0.991       0.986       0.986       0.990       0.974
24          0.993       0.988       0.990       0.991       0.987       0.988       0.994       0.972
27          0.994       0.994       0.988       0.990       0.992       0.991       0.993       0.971
30          0.994       0.989       0.990       0.987       0.988       0.995       0.995       0.972
33          0.993       0.988       0.991       0.992       0.990       0.991       0.995       0.986
36          0.992       0.993       0.990       0.989       0.988       0.994       0.996       0.978
39          0.996       0.989       0.990       0.986       0.991       0.989       0.993       0.980
42          0.994       0.991       0.990       0.989       0.991       0.989       0.994       0.980
45          0.993       0.991       0.990       0.990       0.992       0.990       0.993       0.982

In Table 7, T1_LBP1×1 to T8_LBP1×1 in the first row represent the fusion features of the T1 to T8 trace features with the 1 × 1 LBP feature, and the first column gives the dimension of the fusion features. In Figure 19, the abscissa indicates the dimension of the fusion features, and the ordinate indicates the classification accuracy.

It can be seen from Figure 19 and Table 7 that as the dimension of the fusion feature increases from 3 to 45, the classification accuracy of the eight fusion features rises and reaches its maximum around a dimension of 27. From dimension 27 onward, the accuracy stabilizes at about 99.1%. In practical applications, a dimension of about 27 can therefore be selected, ensuring classification accuracy with a small feature dimension, thereby reducing storage requirements and computational complexity and improving computational efficiency.
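A dimension sweep of this kind can be sketched as follows, on synthetic stand-in data rather than the paper's ISAR features: the stacked features are embedded with Isomap at several of Table 7's target dimensions and an SVM is scored by cross validation at each.

```python
# Sweep of the fusion-feature dimension, mirroring the experiment behind
# Table 7 on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=100, n_classes=5,
                           n_informative=20, random_state=0)

accs = {}
for d in (3, 9, 27):                   # a few of Table 7's dimensions
    Z = Isomap(n_neighbors=20, n_components=d).fit_transform(X)
    accs[d] = cross_val_score(SVC(), Z, y, cv=5).mean()
print(sorted(accs))  # [3, 9, 27]
```

On the real data, one would pick the smallest dimension at which the cross-validated accuracy plateaus, which is the reasoning behind choosing about 27 above.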

Although the above classification results demonstrate the effectiveness of the proposed feature fusion algorithm, we also wish to show the actual spatial distribution of the fusion features. Therefore, visualization is used to display the fusion features in a 3-dimensional space, as shown in Figure 20, where their spatial distribution can be inspected visually.

As can be seen from Figure 20, (1) the better a feature's classification performance, the better its separability in the 3-dimensional space, and (2) compared with the unfused features, the fusion feature generated by the proposed algorithm is better separated in 3-D space, indicating that the proposed algorithm effectively improves the classification performance of the features. The effectiveness of the proposed algorithm is thus also verified at a visual level.

7. Conclusion

In order to improve the classification performance of space target ISAR images under small sample data conditions, a new feature extraction method is proposed. The proposed method improves the extraction of trace features and effectively solves the problem that classification accuracy is not robust due to the uncertainty of local region selection and angular interval division. By introducing the LBP feature, the spatial structure information of the trace feature is enhanced. Finally, the ISOMAP method is used to fuse the two structurally complementary features; while achieving dimensionality reduction, the integrity of the structural information of the space target ISAR image is retained to the utmost. The simulation results show the following: (1) The proposed method achieves high classification accuracy on space target ISAR images. (2) When low-dimensional fusion features are used for classification, the accuracy is no longer affected by factors such as trace feature selection and LBP rectangular region division, so the classification performance is strongly robust.

In conclusion, the method proposed in this paper achieves fast, accurate, and robust identification of space targets. It solves the difficulty of extracting highly discriminative features under small-sample conditions, provides a new idea for ISAR image recognition of space targets, and can be applied to other classification tasks using ISAR images, such as ships, airplanes, vehicles, and missiles.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The work in this paper has been supported by the National Natural Science Foundation of China (61304228).


Copyright © 2020 Hong Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
