Abstract

The pose estimation of the aircraft in the airport plays an important role in preventing collisions and constructing a real-time scene of the airport. However, current airport target surveillance methods regard the aircraft as a point, neglecting the importance of pose estimation. Inspired by human pose estimation, this paper presents an aircraft pose estimation method based on a convolutional neural network that reconstructs the two-dimensional skeleton of an aircraft. Firstly, the key points of an aircraft and the matching relationship are defined to design a 2D skeleton of an aircraft. Secondly, a convolutional neural network is designed to predict all key points and components of the aircraft, stored in the confidence maps and the Correlation Fields, respectively. Thirdly, all key points are coarsely matched based on the matching relationship and then refined through the Correlation Fields. Finally, the 2D skeleton of an aircraft is reconstructed. To overcome the lack of a benchmark dataset, airport surveillance video and Autodesk 3ds Max are utilized to build two datasets. Experimental results show that the proposed method achieves better performance in terms of accuracy and efficiency compared with other related methods.

1. Introduction

Real-time surveillance of the airport surface is important for safeguarding the safety and efficiency of airport operations. Conventionally, Surface Movement Radar (SMR), Multilateration (MLAT) and Automatic Dependent Surveillance-Broadcast (ADS-B) are the main methods in airport surface surveillance. Since these methods regard the aircraft on the airport surface as points, the poses of the aircraft are ignored. However, the poses of the aircraft not only show the spaces occupied by the aircraft in airports but also indicate the purpose of the aircraft's movement, which plays an important role in preventing collisions. Therefore, not only the location but also the poses of the aircraft should be estimated. Video surveillance can provide fine-grained features of the aircraft, such as colors, shapes and sizes, which help to estimate the identities, positions, and poses of the aircraft. Some studies focus on the pose estimation of humans for surveillance, activity recognition, gaming, etc. However, little attention has been paid to estimating the poses of other kinds of objects, owing to the insufficient samples of these objects. Pose estimation relies heavily on key-point recognition based on video processing. On the airport surface, the large sizes of the aircraft, the main objects of interest, lead to difficulties in key-point recognition and connection. Besides, since the movements of objects on the airport surface are complex, the aircraft are often partially occluded in camera images, which further increases the difficulty of aircraft pose estimation.

The aircraft attitude estimation methods mainly depend on parameter measurement by various onboard sensors (such as gyroscopes and GPS). Boedecker [1] used multi-antenna Global Navigation Satellite System (GNSS) receivers to obtain a configuration that was optimal in economic, operational and accuracy terms. To use all available information, Rhudy [2] combined the control inputs with the sensor measurements through Kalman filtering. Han et al. [3] used a central difference Kalman filter (CDKF) based on the Stirling interpolation formulation to avoid the computational complexity and large linearization error of the extended Kalman filter (EKF). Because wind gusts strongly affect the attitude estimation accuracy of Small Unmanned Aircraft Systems (SUASs), Weibel et al. [4] utilized Global Positioning System (GPS) velocities in attitude and heading reference systems to correct accelerometer specific-force measurements; this method performed well with contemporary low-cost sensors in gusty conditions. The accuracy of attitude estimation degrades when an unmanned aerial vehicle (UAV) undergoes accelerative maneuvers. To solve this problem, No et al. [5] fused pseudo-attitude, magnetic-attitude and gyroscope measurements based on the Euler angle; this method maintained stable attitude accuracy even when the aircraft experienced sudden or continuous acceleration. In the event of gyroscopic failure, Kallapur et al. [6] utilized on-board accelerometers and a GPS receiver to update the errors in attitude propagation. However, sensor errors accumulate over time, and these methods cannot be used to estimate the poses of noncooperative targets.

At the same time, there are some aircraft pose estimation methods based on computer vision. In order to avoid the increase of sensor error caused by fast movement, Zhao et al. [7] found the corresponding points of the same aircraft in two images through the Speeded Up Robust Features (SURF) method. Tong et al. [8] presented a method of sequence screen-spot imaging based on a laser-aided cooperative target to amplify the motion attitude of the aircraft and improve the measurement accuracy. Carrio et al. [9] proposed a method based on thermal images to overcome the effect of atmospheric and illumination conditions on visible-light image sensors. Tehrani et al. [10] used panoramic images and optic flow to improve attitude estimation accuracy in cluttered environments. To avoid relying on control points, Zhang et al. [11] matched simulated images with real images. Zhao et al. [12] built a three-dimensional (3D) geometric model of the rigid object together with the camera parameters. Lu et al. [13] used object-space collinearity error to replace iterative optimization methods, which did not effectively account for the orthonormal structure of rotation matrices. Teng et al. [14] used line features of a straight-wing aircraft's structure and geometry constraints when feature-point matching was difficult and inaccurate in large-baseline and long-distance imaging. Luo et al. [15] used line clustering to improve the accuracy of the lines. Locally Linear Embedding (LLE) in [16] was used to preserve the intrinsic structure information. Ling et al. [17] used features of shapes and regions of the aircraft to reduce the complexity of 3D model matching. Wang et al. [18] presented a novel geometric structure feature to describe the objects' structure information. Fu and Sun [19] not only extracted the targets' contours but also computed their Pseudo-Zernike Moments (PZM); this method performed well in adapting to 3D targets with freely changing poses. However, these methods only use the shallow features of the aircraft, so they are not suitable for complex airport scenes. Although shallow features contain more location information, they lack the deep features that can improve the generalization ability of a method. Therefore, this paper uses a convolutional neural network (CNN) [20] to extract deep features of the aircraft for pose estimation.

The CNN is a deep learning model evolved from artificial neural networks that mimics the working mechanism of the human brain. Because of its capacity to automatically extract features, the CNN has been employed in multiple fields. Abdel-Hamid et al. [21] used a CNN to improve speech recognition performance and proposed a limited-weight-sharing scheme that could better model speech features; experimental results showed that the proposed method could reduce the error. Lin et al. [22] proposed an improved CNN to strengthen the evaluation of the distribution network; the CNN analysed the operation state and network structure characteristics of the power network and gave an optimal evaluation result. In order to enhance security and improve the detection of malicious intrusion behavior in a wireless network, Yang and Wang [23] designed an improved CNN that regarded low-level intrusion traffic data of the wireless network as features for detecting intrusion behavior. Yang et al. [24] combined hierarchical symbolic analysis and a CNN to diagnose the faults of rotating machinery; this CNN used features of the vibration signals to evaluate the health conditions of rotating machinery. Due to a lack of understanding of the underlying atrial structures, the treatment of atrial fibrillation (AF) was suboptimal; Xiong et al. [25] proposed a CNN named “AtrialNet” that could process each 3D late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) scan within 1 min. Given the strong adaptability of CNNs, using a CNN to estimate aircraft pose is feasible.

In this paper, a CNN-based pose estimation method for the aircraft on the airport surface is proposed. Inspired by human pose estimation, a 2D skeleton model is established to represent an aircraft. Human pose estimation methods generally fall into two main categories: top-down methods and bottom-up methods. Top-down methods [26] first detect all people in the image and then estimate each person's pose individually. Bottom-up methods [27] first detect all key points in the image and then use postprocessing to match these key points. A bottom-up method is applied in this paper, which contains key-point detection and matching. A CNN architecture is designed to detect the key points and components of the aircraft in the image. For the key-point matching, the matching relationship and the Correlation Fields (CFs) are proposed. The matching relationship is utilized for rough matching, which restricts which key points may be matched. The CFs are utilized for refined matching and contain the information of all detected components obtained through the CNN.

The main contributions of this work are summarized as follows. (1) A 2D skeleton is designed to represent an aircraft in images. (2) A method of aircraft pose estimation is proposed, which includes two steps, i.e., key-point detection and matching.

The rest of the paper is organized as follows. Section 2 shows the process of designing aircraft skeleton. Section 3 introduces the aircraft pose estimation process including the key-point detection and matching. Section 4 shows the experimental results and the network optimal process. Finally, Section 5 gives the conclusion.

2. Aircraft Skeleton Design

Cameras project 3D objects into 2D images, which leads to the loss of depth information. In order to design the aircraft 2D skeleton, the key points of an aircraft must be selected so that they provide as much shape information as possible. The correlations between these key points are also important for indicating the occupied space of an aircraft. In this study, two principles for defining the key points of an aircraft are proposed. (1) The selected key points should include the main end points, junction points and inflection points of an aircraft. (2) The selected key points should be easily recognized.

It is obvious that the connection between the fuselage and a wing is a surface contact rather than a point contact. Thus, the center point of the connecting surface is used to denote it. According to the first principle, an aircraft nose end point, two center points of the connecting surfaces between the fuselage and the two wings (hereinafter points 2 and 4), two wing end points, two center points of the connecting surfaces between the fuselage and the two horizontal empennages, two horizontal empennage end points, the center point of the connecting surface between the fuselage and the vertical empennage (hereinafter point 10), a vertical empennage end point and a tail end point are deemed the key points of an aircraft, as shown in Figure 1(a). The key points of an aircraft tail have low recognition accuracy when the aircraft in the image is small or the aircraft is facing the camera, e.g., the tail end point, the vertical empennage end point and point 10. According to the second principle, these points are discarded. The remaining key points of an aircraft are shown in Figure 1(b). In order to facilitate the description of the position of the aircraft, an aircraft center point is added (shown as the green point in Figure 1(c)), which is the midpoint of points 2 and 4. Thus, there are N = 10 types of key points.

The correlations between key points, called the matching relationship, are important for indicating the occupied space of an aircraft. The matching relationship is described by the connections of key points. There are some principles to define the matching relationship. (1) The connection of two key points lies on the airframe and keeps away from the airframe contour. (2) Each key point is connected at least once. According to these principles, the matching relationship divides an aircraft into M = 9 components, as shown in Figure 1(d). Each connection represents one type of component. The matching relationship indicates which two types of key points can match; it also indicates the direction pointing from one end of each component to the other. The matching relationship can be expressed as follows:

$$R(n_1, n_2) = \begin{cases} 1, & \text{if the } n_1\text{th and } n_2\text{th types of key points are connected} \\ 0, & \text{otherwise} \end{cases} \tag{1}$$

where $n_1$ and $n_2$ denote, respectively, the $n_1$th and $n_2$th types of key points, $n_1, n_2 \in \{1, \dots, N\}$. If $R(n_1, n_2) = 1$, $n_1$ has a matching relationship with $n_2$; at the same time, $n_1$ and $n_2$ are, respectively, the start and end of the component defined by these two types of key points. Otherwise, $R(n_1, n_2) = 0$.
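For illustration, the key-point types and the matching relationship can be encoded as a directed adjacency structure. The following is a minimal Python sketch; the names and index assignments are illustrative guesses, since the exact numbering is defined only in Figure 1.

```python
# Sketch of the N = 10 key-point types and the M = 9 directed connections
# (the matching relationship). Index assignments are illustrative only.
import numpy as np

KEY_POINTS = [
    "nose", "fuselage_wing_left", "fuselage_wing_right",   # nose, points 2 and 4
    "wing_left_tip", "wing_right_tip",
    "fuselage_emp_left", "fuselage_emp_right",
    "emp_left_tip", "emp_right_tip",
    "center",
]
N = len(KEY_POINTS)  # 10 types of key points

# Each pair (n1, n2) is one component type, directed from n1 to n2.
CONNECTIONS = [
    (0, 9), (9, 1), (9, 2), (1, 3), (2, 4),
    (9, 5), (9, 6), (5, 7), (6, 8),
]
M = len(CONNECTIONS)  # 9 components

# R(n1, n2) = 1 iff type-n1 key points may be matched to type-n2 key points.
R = np.zeros((N, N), dtype=int)
for n1, n2 in CONNECTIONS:
    R[n1, n2] = 1
```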

3. Aircraft Pose Estimation

3.1. Overview

In order to estimate aircraft pose, an aircraft pose estimation method based on the CNN is proposed, which mainly contains two steps: the key points and CFs detection and the key-point matching. In the first step, a CNN is established to detect all key points and components of the aircraft in the image. This CNN simultaneously generates N detection confidence maps for the key points (Figure 2(b)) and M detection CFs for the components (Figure 2(c)). In the second step, the matching relationship is applied to match all key points in the N confidence maps (Figure 2(d)), which is called rough matching. Then, the results of the rough matching are further refined through the CFs (Figure 2(e)), which is called refined matching. Finally, the 2D skeletons of all aircraft are obtained from the refined results (Figure 2(f)).

3.2. Step 1: The Key Points and CFs Detection
3.2.1. Network Architecture

According to the detection goals, the CNN shown in Figure 3 performs two detection tasks: detecting the key points and detecting the CFs. Thus, the proposed network includes two branches: Branch 1 predicts the key points (the dark green area in Figure 3) and Branch 2 generates the CFs (the yellow area in Figure 3).

The Visual Geometry Group Network (VGGNet) [28] is utilized as the backbone network. An input image is processed by Part 1 of the proposed network, which generates a set of feature maps F as the input of Part 2. The two branches of Part 2 (shown in the brown box in Figure 3) are for the prediction of the key points and the CFs, respectively. Both branches include three stages, and each stage consists of convolution layers. In the first stage of Branch 1, a set of key-point confidence maps is predicted. In each subsequent stage of Branch 1, the predictions of Branch 1 from the previous stage, along with the feature maps F, are used to produce refined predictions. Branch 2 is similar to Branch 1; the only difference is that Branch 1 predicts the key points, while Branch 2 predicts the CFs.
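To make the architecture concrete, the following is a minimal PyTorch sketch of a two-branch, three-stage network on top of a VGG-style backbone. The layer widths, channel counts, and class and function names are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of the two-branch, three-stage prediction network (Part 2).
import torch
import torch.nn as nn

def stage_block(in_ch, out_ch):
    """One stage: three 3x3 conv layers followed by a 1x1 prediction layer."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(128, out_ch, 1),
    )

class TwoBranchNet(nn.Module):
    def __init__(self, backbone, feat_ch=128, n_kp=10, n_comp=9, n_stages=3):
        super().__init__()
        self.backbone = backbone  # Part 1: VGG-style feature extractor producing F
        # Branch 1 predicts N confidence maps; Branch 2 predicts 2*M CF channels
        # (one 2D vector field per component type).
        self.branch1 = nn.ModuleList(
            [stage_block(feat_ch, n_kp)]
            + [stage_block(feat_ch + n_kp, n_kp) for _ in range(n_stages - 1)])
        self.branch2 = nn.ModuleList(
            [stage_block(feat_ch, 2 * n_comp)]
            + [stage_block(feat_ch + 2 * n_comp, 2 * n_comp)
               for _ in range(n_stages - 1)])

    def forward(self, x):
        feats = self.backbone(x)
        d, s = self.branch1[0](feats), self.branch2[0](feats)
        d_outs, s_outs = [d], [s]
        for b1, b2 in zip(self.branch1[1:], self.branch2[1:]):
            # Each later stage refines its own branch's previous prediction,
            # concatenated with the shared features F (the reduced-input strategy).
            d = b1(torch.cat([feats, d], dim=1))
            s = b2(torch.cat([feats, s], dim=1))
            d_outs.append(d); s_outs.append(s)
        return d_outs, s_outs  # per-stage outputs for intermediate supervision
```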

3.2.2. Training and Testing

(1) Training Phase. In this section, the training method of the proposed CNN is introduced. The CNN consists of two parts, for extracting features and for predicting the key points and CFs, respectively. In order to train the proposed CNN, the L2 (squared error) loss function is utilized. Suppose there are T = 3 stages in Part 2 of the proposed CNN. At each stage, the loss function is made up of two parts:

$$f_D^t = \sum_{n=1}^{N} \sum_{A} W(A) \cdot \left\| D_n^t(A) - D_n^*(A) \right\|_2^2, \qquad f_S^t = \sum_{m=1}^{M} \sum_{A} W(A) \cdot \left\| S_m^t(A) - S_m^*(A) \right\|_2^2 \tag{2}$$

where the loss $f_D^t$ is computed at the end of the tth stage of Branch 1 and the loss $f_S^t$ at the end of the tth stage of Branch 2, $D_n^t(A)$ denotes the value at location $A$ in the detection confidence maps for the nth type of key points, $S_m^t(A)$ denotes the value at location $A$ in the detection CFs for the mth type of components, and $D_n^*(A)$ and $S_m^*(A)$ denote the groundtruth. $W(A) = 0$ when the label is missing at location $A$ in the image; otherwise, $W(A) = 1$. The overall loss function is

$$f = \sum_{t=1}^{T} \left( f_D^t + f_S^t \right). \tag{3}$$
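Under this notation, a hedged sketch of the masked per-stage L2 loss and the overall loss (equations (2) and (3)); the tensor layout and function names are assumptions:

```python
import torch

def stage_loss(pred, gt, mask):
    """Masked L2 loss for one stage of one branch (one term of equation (2)).

    pred, gt: (B, C, H, W); mask: (B, 1, H, W), with W(A) = 0 where labels
    are missing and 1 elsewhere."""
    return ((pred - gt) ** 2 * mask).sum()

def total_loss(d_outs, s_outs, d_gt, s_gt, mask):
    """Overall loss (equation (3)): sum over both branches and all T stages."""
    f = 0.0
    for d, s in zip(d_outs, s_outs):
        f = f + stage_loss(d, d_gt, mask) + stage_loss(s, s_gt, mask)
    return f
```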

(2) Testing Phase. During testing, the proposed CNN simultaneously generates N detection confidence maps and M detection CFs. The N detection confidence maps include the type and location information of the detected key points. The M detection CFs include the direction and location of the detected components; this information is used in key-point matching.

For example, a set of detection confidence maps $D$ and a set of detection CFs $S$ are obtained after an input image is analysed by the network. The set $D = \{D_1, D_2, \dots, D_N\}$ has N detection confidence maps, one for each type of key point, where $D_n \in \mathbb{R}^{w \times h}$, $n \in \{1, \dots, N\}$. Each element in $D_n$ represents the confidence that a key point of the nth type occurs at that location (Figure 2(b)). The set $S = \{S_1, S_2, \dots, S_M\}$ has M detection CFs, one for each type of component, where $S_m \in \mathbb{R}^{w \times h \times 2}$, $m \in \{1, \dots, M\}$. Each element in $S_m$ encodes a 2D unit vector or a zero vector (Figure 2(c)). The 2D unit vector indicates that this element belongs to the mth type of components of the aircraft, and its direction points from one end of the component to the other. The zero vector indicates that this element belongs to the background. For the detection confidence maps, the peak values higher than a threshold in $D_n$ are chosen, and the locations of these peaks are the positions of the corresponding key points. Multiple peaks in one detection confidence map indicate that multiple key points of the same type exist in the image. Finally, a set of key-point detection candidates $P = \{p_n^j : n \in \{1, \dots, N\},\ j \in \{1, \dots, J_n\}\}$ is obtained, where $J_n$ represents the number of detected key points of the nth type and $p_n^j$ represents the jth candidate of the nth type. For the application of the detection CFs, see Section 3.3.
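As an illustration of the peak extraction described above, the following sketch finds local maxima above a threshold in one confidence map; the threshold value and function names are assumptions.

```python
# Sketch of extracting key-point candidates from one confidence map D_n:
# local maxima above a threshold (maximum_filter performs the suppression).
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(conf_map, threshold=0.1):
    """Return (row, col, score) for each local maximum above `threshold`."""
    local_max = maximum_filter(conf_map, size=3) == conf_map
    peaks = np.argwhere(local_max & (conf_map > threshold))
    return [(r, c, conf_map[r, c]) for r, c in peaks]

# candidates[n] then holds the J_n candidates p_n^j of the nth key-point type:
# candidates = [find_peaks(D[n]) for n in range(N)]
```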

3.2.3. The Groundtruth

As for the groundtruth confidence maps $D^*$, an individual confidence map $D_{n,k}^*$ for the nth type of key point of the kth aircraft is generated. Let $x_{n,k}$ denote the groundtruth position of the nth type of key point of the kth aircraft in the image. The value at location $A$ in $D_{n,k}^*$ is then defined as follows:

$$D_{n,k}^*(A) = \exp\left( -\frac{\left\| A - x_{n,k} \right\|_2^2}{\sigma^2} \right) \tag{4}$$

where $\sigma$ is the radius of the key points. The higher the value of $D_{n,k}^*(A)$ is, the closer the point $A$ is to the groundtruth position. The value at location $A$ in the groundtruth confidence map $D_n^*$ is the maximum of the values at the same location in all individual confidence maps:

$$D_n^*(A) = \max_k D_{n,k}^*(A). \tag{5}$$
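A hedged sketch of this groundtruth construction (equations (4) and (5)); the function name and argument layout are assumptions:

```python
import numpy as np

def keypoint_confidence_map(positions, h, w, sigma):
    """Groundtruth map D*_n: one Gaussian per aircraft (equation (4)),
    aggregated with a per-pixel maximum (equation (5)).

    positions: list of (x, y) groundtruth locations, one per aircraft."""
    ys, xs = np.mgrid[0:h, 0:w]
    gt = np.zeros((h, w))
    for px, py in positions:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / sigma ** 2)
        gt = np.maximum(gt, g)  # the max keeps nearby peaks distinct
    return gt
```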

For the groundtruth CFs $S^*$, an individual correlation vector field $S_{m,k}^*$ is generated for the mth type of component of the kth aircraft: $S_{m,k}^*(A) = v$ if point $A$ lies on this component, and $S_{m,k}^*(A) = 0$ otherwise.

Consider the right wing of one aircraft (Figure 4). Points $x_{n_1,k}$ and $x_{n_2,k}$, respectively, represent the groundtruth locations of the $n_1$th and $n_2$th types of key points of the kth aircraft in the image, with a matching relationship between each other. The vector $v$ (equation (6)) is a unit vector whose direction is from $x_{n_1,k}$ to $x_{n_2,k}$:

$$v = \frac{x_{n_2,k} - x_{n_1,k}}{\left\| x_{n_2,k} - x_{n_1,k} \right\|_2}. \tag{6}$$

If point $A$ satisfies equation (7), the point $A$ belongs to this component and $S_{m,k}^*(A) = v$; otherwise, $S_{m,k}^*(A) = 0$:

$$-\sigma_l \le v \cdot \left( A - x_{n_1,k} \right) \le l_{m,k} + \sigma_l \quad \text{and} \quad \left| v_\perp \cdot \left( A - x_{n_1,k} \right) \right| \le \sigma_w \tag{7}$$

where $l_{m,k}$ is the distance between points $x_{n_1,k}$ and $x_{n_2,k}$, $v_\perp$ is a vector perpendicular to $v$ with $\left\| v_\perp \right\|_2 = 1$, and $\sigma_l$ and $\sigma_w$ are the thresholds fixed during the training.

The groundtruth CF $S_m^*$ is the average of all $S_{m,k}^*$. The averaging, which handles the overlap of aircraft caused by the shooting angle, is

$$S_m^*(A) = \frac{1}{n_c(A)} \sum_k S_{m,k}^*(A) \tag{8}$$

where $n_c(A)$ is the number of nonzero vectors of all aircraft at location $A$.

Equation (7) can be applied to the other components of the aircraft.
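For illustration, the following sketch rasterizes equations (6) and (7) for a single component of a single aircraft; the parameters sigma_l and sigma_w mirror the thresholds $\sigma_l$ and $\sigma_w$, and all names are assumptions.

```python
import numpy as np

def component_cf(x1, x2, h, w, sigma_l, sigma_w):
    """Individual CF S*_{m,k} for one component: pixels satisfying
    equation (7) store the unit vector v of equation (6).

    x1, x2: start/end groundtruth key points as np.array([x, y])."""
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys], axis=-1).astype(float)  # (h, w, 2) pixel coords
    length = np.linalg.norm(x2 - x1)
    v = (x2 - x1) / length                           # equation (6)
    v_perp = np.array([-v[1], v[0]])
    rel = pts - x1
    along = rel @ v
    across = np.abs(rel @ v_perp)
    on_comp = (along >= -sigma_l) & (along <= length + sigma_l) & (across <= sigma_w)
    cf = np.zeros((h, w, 2))
    cf[on_comp] = v
    return cf

# The full groundtruth S*_m averages the nonzero vectors over all aircraft
# at every pixel (equation (8)) to handle overlapping airframes.
```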

3.3. Step 2: The Key-Point Matching

A set of key-point detection candidates $P$ is obtained through the CNN (e.g., the points in Figure 5(a)). However, matching them quickly and accurately is a problem. Matching any two key-point candidates ends up with $L(L-1)/2$ possible results (e.g., the black lines in Figure 5(b)), where $L$ denotes the number of all key-point candidates in $P$. If every possible result had to be checked for correctness, the computational cost would be significant. In order to decrease the runtime of postprocessing, the matching process is divided into two steps: rough matching and refined matching. The matching relationship is used in rough matching to decrease the number of possible results and thus enable fast refined matching. Based on the results of rough matching, the CFs are applied in refined matching to get the best key-point matching results.

3.3.1. Rough Matching Using the Matching Relationship

In the rough matching step, the matching relationship determines which two types of key points can be matched. For example, the aircraft nose end point could match the aircraft center point, but it could not match any horizontal empennage end point. Figure 5(c) shows the results of rough matching. Compared to Figure 5(b), the number of possible results is significantly reduced, achieving the goal of decreasing the runtime. After rough matching, all possible results are regarded as rough component candidates.

3.3.2. Refined Matching Using the CFs

In this section, the method of removing wrong components from the rough component candidates is introduced. Figure 6 shows the rough matching results of one detected aircraft center point (the green point) and three detected aircraft nose end points (the red points). It is obvious that the rough component candidate represented by the yellow arrow is correct and the two represented by blue arrows are wrong. The reason is that rough matching does not consider the boundary isolation of an aircraft. To address this, the CFs are proposed, which exploit the pixel continuity within the same aircraft and the boundary isolation between the aircraft and the airport facilities. In other words, if a rough matching candidate does not overlap completely with an aircraft in the picture (e.g., the blue arrows in Figure 6), it is wrong. The CFs include the information of all components of the aircraft in the image. Whether a component is correct is therefore determined by calculating the degree of overlap between the rough component candidate and the CFs. This process, called refined matching, includes two steps: scoring and removing.

(1) Scoring for Rough Component Candidates. In order to remove wrong component candidates, all rough component candidates are scored by judging how much they overlap with the CFs. For example, suppose two key-point candidate locations $p_{n_1}^{j_1}$ and $p_{n_2}^{j_2}$ give a rough component candidate for the mth type of component, and the detection CF $S_m$ is available. The correlation confidence $E$ for this rough component candidate is measured by computing the line integral along the line segment between the two candidates:

$$E = \int_{u=0}^{1} S_m\big(A(u)\big) \cdot \frac{p_{n_2}^{j_2} - p_{n_1}^{j_1}}{\left\| p_{n_2}^{j_2} - p_{n_1}^{j_1} \right\|_2} \, \mathrm{d}u \tag{9}$$

where $A(u) = (1-u)\, p_{n_1}^{j_1} + u\, p_{n_2}^{j_2}$ interpolates between the positions of $p_{n_1}^{j_1}$ and $p_{n_2}^{j_2}$ along the line segment, and $S_m(A(u))$ is the vector at location $A(u)$ in $S_m$. The higher the confidence, the more likely the rough component candidate is correct.
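In practice, the line integral in equation (9) can be approximated by sampling the CF at uniformly spaced locations along the segment, as in the following hedged sketch; the sample count and function names are assumptions.

```python
import numpy as np

def correlation_confidence(cf_m, p1, p2, n_samples=10):
    """Approximate equation (9) by uniform sampling along the segment.

    cf_m: (h, w, 2) detection CF for component type m; p1, p2: (x, y)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    direction = (p2 - p1) / (np.linalg.norm(p2 - p1) + 1e-8)
    e = 0.0
    for u in np.linspace(0.0, 1.0, n_samples):
        a = (1 - u) * p1 + u * p2                        # location A(u)
        x = np.clip(int(round(a[0])), 0, cf_m.shape[1] - 1)
        y = np.clip(int(round(a[1])), 0, cf_m.shape[0] - 1)
        e += cf_m[y, x] @ direction                      # S_m(A(u)) · v
    return e / n_samples
```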

(2) Removing Wrong Component Candidates. After scoring each rough component candidate, a maximum-weight bipartite graph matching method [29] is performed to remove all wrong component candidates. The key-point matching can be divided into multiple bipartite graph matching steps (Figure 5(d)). Specifically, consider only the matching of the $n_1$th and $n_2$th types of key points, with candidate sets $P_{n_1}$ and $P_{n_2}$. In this bipartite graph matching problem, the key-point detection candidates are the nodes of the graph and all rough component candidates are the edges. Additionally, the edges are weighted by equation (9). In order to guarantee that no two edges share the same node, a variable $z_{n_1 n_2}^{j_1 j_2} \in \{0, 1\}$ is defined, which indicates whether $p_{n_1}^{j_1}$ and $p_{n_2}^{j_2}$ have been matched and satisfies

$$\sum_{j_2} z_{n_1 n_2}^{j_1 j_2} \le 1 \quad \forall j_1, \qquad \sum_{j_1} z_{n_1 n_2}^{j_1 j_2} \le 1 \quad \forall j_2. \tag{10}$$

The goal is to find the maximum-weight matching

$$\max_{Z_m} E_m = \max_{Z_m} \sum_{j_1} \sum_{j_2} E_{j_1 j_2} \cdot z_{n_1 n_2}^{j_1 j_2} \tag{11}$$

where $Z_m$ is the subset of $Z$ for the mth type of components, $E_{j_1 j_2}$ is the weight between $p_{n_1}^{j_1}$ and $p_{n_2}^{j_2}$, and $E_m$ is the overall weight for the mth type of components. The Hungarian algorithm [30] is utilized to obtain the optimal matching. With these steps, the inaccurate component candidates are removed and the refined component candidates are obtained. If two refined component candidates share the same key point, they belong to the same aircraft. The airframe structure is reconstructed by linking the refined component candidates that share the same key points.
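A minimal sketch of this assignment step using SciPy's Hungarian-algorithm implementation, reusing the correlation_confidence function from the previous sketch; the score threshold is an assumed value.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_component(cands1, cands2, cf_m, min_score=0.05):
    """Maximum-weight bipartite matching for one component type
    (equations (10) and (11)); cands1/cands2 are lists of (x, y) candidates."""
    weights = np.array([[correlation_confidence(cf_m, p1, p2)
                         for p2 in cands2] for p1 in cands1])
    rows, cols = linear_sum_assignment(-weights)  # Hungarian: maximize weights
    # Keep only pairs whose correlation confidence clears the threshold.
    return [(j1, j2) for j1, j2 in zip(rows, cols) if weights[j1, j2] > min_score]
```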

4. Experiments

4.1. Experimental Setting
4.1.1. Dataset

Since there is no available dataset for the pose estimation of the aircraft, Autodesk 3ds Max is used to simulate the airport environment and obtain enough training and testing samples. To avoid manual annotation errors, the ten types of key points were marked on the 3D model of the aircraft and their positions in the rendered images were derived. All images make up a set called the 3dmax dataset.

The images extracted from the airport surveillance videos are labelled manually to build the video dataset, which is used to verify the reliability of the proposed method. The video dataset mainly includes clips of aircraft entering and leaving the terminal building or the apron.

4.1.2. Evaluation Metric

The Microsoft Common Objects in COntext (MS COCO) key-point evaluation method [31] is utilized to evaluate the proposed method. This evaluation method describes the Object Key-point Similarity (OKS), defined by equation (12), and uses the mean Average Precision (AP) over 10 OKS thresholds as the main competition metric:

$$\mathrm{OKS} = \frac{\sum_i \exp\left( -d_i^2 / 2 s^2 k_i^2 \right) \delta(v_i > 0)}{\sum_i \delta(v_i > 0)} \tag{12}$$

where $d_i$ is the Euclidean distance between the ith predicted key point and the groundtruth, $s$ is the object scale, $k_i$ is the per-key-point standard deviation, and $v_i$ indicates whether the key point is labelled.
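For reference, a hedged sketch of the OKS computation (equation (12)) for one predicted aircraft against one groundtruth aircraft, following the MS COCO convention; the argument names are assumptions.

```python
import numpy as np

def oks(pred, gt, visible, scale, k):
    """pred, gt: (N, 2) key-point arrays; visible: boolean mask of labelled
    key points; scale: object scale s; k: per-type constants k_i."""
    d2 = np.sum((pred - gt) ** 2, axis=1)
    sim = np.exp(-d2 / (2 * scale ** 2 * k ** 2))
    return sim[visible].sum() / max(visible.sum(), 1)
```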

4.2. Comparison to the State of the Art

In order to evaluate the performance of the proposed method, OpenPose [32] is utilized as the baseline for experimental comparison.

The overall performance is shown in Tables 1 and 2, where AP is the mean average precision (0.50:0.05:0.95), AP50 is the AP for OKS = 0.5, APM is the AP for medium-scale aircraft, and APL is the AP for large-scale aircraft. It can be seen from the experimental results on the 3dmax dataset that the AP of the proposed method is 1.5% higher than that of OpenPose. Meanwhile, the frames per second (FPS) of the proposed network is also higher. It can also be observed from Table 2 that the proposed method has a 3.3% higher AP on the video dataset, and for smaller targets (APM) it has a higher recognition rate. The proposed method outperforms OpenPose because the network used in OpenPose has been optimized in the proposed method. Some experimental results of the proposed method are shown in Figure 7.

4.3. Ablation Study

In this section, the method's performance is evaluated by progressively applying the network architecture and hyperparameter optimizations.

4.3.1. Network Architecture Optimization

The proposed network is inspired by OpenPose. In order to improve the efficiency of the proposed method while keeping similar accuracy, the network in OpenPose is optimized. Network optimization includes two parts, i.e., network simplification and network input strategy optimization. Network simplification means that a shrunk and simplified network compared with OpenPose is applied to predict the key points, and network input optimization means that the input features are reduced at each stage (Figure 8).

Firstly, experiments on three network optimization architectures are conducted, i.e., OpenPose_3, architecture variation_1, and architecture variation_2. Compared with OpenPose, OpenPose_3 reduces the number of stages from six to three. Architecture variation_1 adds extra convolution layers and shallow features on top of OpenPose_3, inspired by [33]. In architecture variation_2, one extra stage is added to use shallow features, in parallel with OpenPose_3. According to Table 3, OpenPose_3 has a similar AP to OpenPose but is faster. This illustrates that a shallower network is more suitable for aircraft pose estimation, since aircraft poses vary less than human poses and the 3dmax dataset is smaller than the COCO dataset. If a network is too deep and trained on a small dataset, a degradation problem occurs [34]: as the network depth increases, accuracy saturates and then degrades rapidly due to vanishing/exploding gradients. Architecture variation_1 and variation_2 have lower AP and are slower than OpenPose_3 when combined with shallow features. This is mainly because the shallow features of conv3_4 are too low-level to be used for prediction.

Secondly, the network input optimization is applied to OpenPose_3; this experiment is called architecture variation_3. According to Table 3, architecture variation_3 has a similar AP to OpenPose_3 but is faster. This is because input optimization reduces the number of convolution layer channels, i.e., the number of input features.

An interesting aspect of Table 3 is that architecture variation_3 (the proposed architecture) performs worse than OpenPose_3 for AP75 but better for AP50. To further explore this phenomenon, the average precision is plotted as a function of OKS values in Figure 9, which shows that architecture variation_3 lies roughly between OpenPose and OpenPose_3. Compared with OpenPose_3, OpenPose predicts more true positive key points at low OKS values (≤0.65). The reason is that the impact of the degradation problem is not obvious at low OKS values. However, at high OKS values (≥0.7), the impact of the degradation problem gradually appears, which leads to lower AP values than OpenPose_3. As a result, OpenPose has a similar overall AP to OpenPose_3. Compared to OpenPose_3, architecture variation_3 has a higher AP at low OKS values (≤0.65) even though its input features are reduced. The reason is that OpenPose_3 combines the feature maps F with the predictions of both Branch 1 and Branch 2 as the input of the next stage. However, Branch 1 and Branch 2 have different tasks, so fusing the features of both branches not only fails to help either task but can even have the opposite effect. Architecture variation_3 avoids this problem. However, architecture variation_3 has a lower AP at high OKS values (≥0.7) than OpenPose_3. This is because the additional features in OpenPose_3 help to improve the accuracy at high OKS values, while the benefit is not obvious at low OKS values.

4.3.2. Optimal Hyperparameters

Based on architecture variation_3, the results of the hyperparameter tuning experiments are presented in this section to study the effect of the radius of the key points $\sigma$. Through the analysis, it is found that this hyperparameter plays an important role in aircraft pose estimation. Thus, several experiments are carried out to measure the AP with different radii. Table 4 shows the results. The best results are obtained for an intermediate radius; smaller or larger values decrease performance. The reason is that a smaller value provides less feature information for predicting the key points, while a bigger value increases the chance of mistaken predictions around ground facilities, such as the air bridge.

Table 5 summarizes the whole ablation study. First, the number of stages is reduced from six to three; as a result, runtime decreases significantly while the AP increases by 0.1%. Secondly, the new input strategy is applied, which further reduces runtime while decreasing AP by only 0.1%. Finally, the AP improves by 1.5% through the hyperparameter optimization. Through network optimization, the proposed method achieves better performance in both accuracy and efficiency.

5. Conclusion

In this paper, a CNN-based aircraft pose estimation method is proposed. This method exploits aircraft key points to generate the predesigned aircraft 2D skeleton and includes two steps, i.e., key-point detection and matching. A CNN is designed to produce the confidence maps and the CFs, which provide the information of the key points and components existing in the image, respectively. The matching relationship and the CFs are proposed to match key points quickly and accurately. The aircraft 2D skeleton is reconstructed by linking the components that share the same key points. Two datasets are built to evaluate the proposed method. Several experiments are conducted to validate the effect of network optimization, including network architecture and hyperparameter optimization. Compared to OpenPose, the proposed method achieves higher accuracy.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors’ Contributions

DYF made the main contributions to the conception and algorithm’s design, as well as drafting the article. WL provided significant revising for important intellectual content and gave final approval of the current version to be submitted. SCH conceived the study, supervised the work, and helped to draft the manuscript. ZHZ, XYZ, and MLY provided the technical advices and checked the manuscript. All authors read and approved the final manuscript.

Acknowledgments

This work was supported by the National Key R&D Program of China (grant no. 2018YFC0809500) and the National Natural Science Foundation of China (grant no. U1933134).