Abstract

The exploration of aircraft wake vortex information enables us to obtain new knowledge about wake turbulence separation standards. Traditional manual methods cannot identify large volumes of wake vortex data with satisfactory accuracy. Fortunately, the LiDAR intensity data can be interpreted by combining LiDAR products with computer vision techniques. To overcome the limitations of traditional manual methods, this paper aims to develop an automatic method to identify a given set of wake vortices from various aircraft. The main contributions are outlined as follows. (1) From the wake vortex data of various aircraft measured by a Wind3D 6000 LiDAR, a grayscale dataset of wake flow is constructed to boost the deep learning model for identifying aircraft wake vortices. (2) Building on this, we propose a new method for the identification of aircraft wake vortices by modifying the VGG16 network, providing binary classification of the uncertain behavior patterns of wake vortices. To evaluate the proposed identification model, performance evaluation was conducted on our dataset, where the experimental results reached 0.984, 0.951, 0.959, and 0.955 in terms of accuracy, precision, recall, and F1-score, respectively.

1. Introduction

A wake vortex refers to the pair of vortices that forms around the aircraft wingtips and is a by-product of aircraft lift throughout the entire flight life cycle [1]. In the near-ground phase, the aircraft wake vortex poses a potential hazard to the trailing aircraft, which constitutes an inherent limiting factor on airport capacity [2]. Correspondingly, recognition of aircraft wake vortices is considered a key issue in aviation research, and there have been many attempts to address it. Currently, research on wake vortex detection for civil aviation passenger aircraft mainly relies on microwave, acoustic, radar, and other methods [3]. In particular, Doppler LiDAR is one of the most widely utilized tools for wind field detection [4]. In one relevant study, Bilbro et al. successfully used a pulsed CO2 coherent Doppler wind LiDAR for commercial turbulence detection [5]. The French Aerospace Center first reported a coherent Doppler wind LiDAR based on a 1.5 μm wavelength fiber laser [6-8]. Additionally, a group from the National Center for Atmospheric Research (USA) used an airborne continuous-wave coherent Doppler wind LiDAR at an altitude of 12 kilometers and detected the turbulence in front of the aircraft [9].

In fact, the wake vortex data measured by pulsed Doppler LiDAR is of relatively low quality because of sensor scanning and environmental parameters, and its interpretation requires expert intervention in a manual process. Traditionally, the throughput of manual methods cannot keep pace with the accumulated vortex data, and an inappropriate color configuration can lead to poor performance and information loss [10]. Fortunately, the intensity data of the wake vortex contains a significant amount of information about the behavior of the wake in ground effect. Therefore, the behavior of the wake vortex can be analyzed by models based on computer vision. Accordingly, in our previous works, we used k-nearest neighbor (KNN) [11], support vector machine (SVM) [12], and random forest (RF) [13] methods to identify wakes, respectively. However, these machine learning techniques do not achieve satisfactory recognition, because the wake shapes change sharply when the vortices are affected by the environment and ground effects.

In recent years, the development of deep learning has produced a series of breakthroughs in the field of computer vision. In particular, deep learning has emerged as a new method in LiDAR data research due to the versatility and robustness of convolutional neural networks (CNNs). By mapping the LiDAR echo data to grayscale images, we propose a method to identify a given set of wake vortices based on the VGG neural network (VGGNet) [14]. Compared with other learning models, VGGNet has the characteristics of simplicity and consistency. Experimental results demonstrate that the proposed scheme solves the task of wake vortex identification with only a small loss. For wake identification purposes, the main contributions of this study are the following:
(i) From the wake vortices of various aircraft measured by a Wind3D 6000 LiDAR, we constructed a grayscale dataset of wake flow to boost the VGGNet-based model, allowing transfer learning and retaining the correlation information of the high-dimensional data when it is applied to the task of wake vortex identification.
(ii) To achieve binary classification, we remove the original convolution layer and the last two fully connected layers of the original VGGNet and add a new convolution layer and an output layer. The deeper convolution layers and smaller convolution kernels embedded in the VGGNet structure make our VGG-based model capable of expressing a rich feature space and handling the uncertainty of behavior patterns of wake vortices.

The remainder of this paper is organized as follows. Section 2 details the data acquisition design of our wake vortex dataset. The proposed model and its procedure for wake vortex identification are given in Section 3. Section 4 presents the results and discussion. Limitations and possible future research are provided in Section 5. The conclusion of this paper is presented in Section 6.

2. Data Acquisition Design

2.1. LiDAR Working Principle

In principle, LiDAR detection begins with the emission of a laser beam (radiating at a specific wavelength) into the target airspace. The Brownian motion of aerosol particles in the laser beam, together with the thermal motion of atmospheric molecules, produces a Doppler broadening of the backscattered signal. Because the particles differ in motion speed, direction of motion, and scattering angle relative to the light source, the Doppler frequency shift of the return signal relative to the emitted laser light also differs. The Doppler frequency shift ($f_d$) of the backscattered signal, the laser wavelength ($\lambda$), and the radial wind velocity ($v_r$) are related as follows, so that $v_r$ can be measured:

$$f_d = \frac{2 v_r}{\lambda}.$$
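
As a quick worked example (not part of the paper's processing chain), the radial velocity corresponding to a measured Doppler shift follows directly from this relation; the 1.5 μm wavelength is the fiber-laser wavelength mentioned in the Introduction, and the example shift value is hypothetical.

```python
# Minimal sketch: convert a measured Doppler shift to radial wind velocity.
# Assumes the 1.5 um operating wavelength mentioned in the Introduction;
# the example shift value below is hypothetical.

WAVELENGTH_M = 1.5e-6  # laser wavelength lambda (m)

def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial wind velocity v_r = lambda * f_d / 2 for the backscattered signal."""
    return WAVELENGTH_M * doppler_shift_hz / 2.0

# Example: a 10 MHz Doppler shift corresponds to a 7.5 m/s radial velocity.
print(radial_velocity(10e6))  # 7.5
```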

Figure 1 depicts a visual elaboration of the LiDAR working principle, where the laser beam emitted by the LiDAR scans the cross-section of the aircraft while remaining perpendicular to the flight direction.

2.2. LiDAR Detection Simulation

In this section, we explain the LiDAR detection simulation routine by taking the A320 aircraft as an example; the aircraft parameters are listed in Table 1. We use the Hallock-Burnham (HB) model [15] to generate the aircraft wake vortex. The overall simulation studies the radial velocity field generated by the wake of the aircraft under LiDAR scanning.
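
For readers unfamiliar with the HB model, a minimal Python sketch of its tangential velocity profile is given below. The circulation and core radius values are illustrative placeholders, not the A320 parameters of Table 1.

```python
import numpy as np

def hb_tangential_velocity(r: np.ndarray, gamma: float, r_core: float) -> np.ndarray:
    """Hallock-Burnham tangential velocity profile:
    V(r) = Gamma / (2*pi*r) * r^2 / (r_core^2 + r^2),
    written here in the equivalent form Gamma * r / (2*pi*(r_core^2 + r^2))."""
    return gamma * r / (2.0 * np.pi * (r_core**2 + r**2))

# Illustrative values only (not the Table 1 parameters):
gamma = 400.0   # initial circulation (m^2/s), placeholder
r_core = 3.0    # vortex core radius (m), placeholder
r = np.linspace(0.5, 60.0, 120)           # radial distances from the vortex core (m)
v_theta = hb_tangential_velocity(r, gamma, r_core)
```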

Under the influence of the A320, the vortex induced in the wind field particles, together with the flow field and velocity intensity, is shown in Figure 2(a), whereas Figure 2(b) demonstrates the simulation results of scanning the corresponding velocity field from the origin point, combined with the principle of LiDAR detection. In Figure 2(b), the grayscale value corresponds to the speed shown in the gray bar: white represents the radial velocity of air particles moving away from the LiDAR, and black represents the radial velocity of air particles approaching the LiDAR.
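
A rough sketch of the kind of linear velocity-to-grayscale mapping used to render such images is shown below; the clipping range is an assumption for illustration rather than a parameter reported in this paper.

```python
import numpy as np

def velocity_to_grayscale(v_radial: np.ndarray, v_max: float = 10.0) -> np.ndarray:
    """Linearly map radial velocity to 8-bit grayscale.
    Positive velocity (moving away from the LiDAR) maps toward white,
    negative velocity (approaching) maps toward black; v_max is an assumed clip value."""
    v = np.clip(v_radial, -v_max, v_max)
    return np.uint8((v + v_max) / (2.0 * v_max) * 255.0)
```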

2.3. Field Detection

Following the simulations of wake vortex detection for various aircraft types in the previous section and combining the principle of LiDAR, we selected three suitable data collection points at Shuangliu Airport (Figure 3(b)), where the maximum detection radius exceeds 6 km. Owing to its excellent performance characteristics, including small size, light weight, and low power consumption, we used a Wind3D 6000 LiDAR (capable of detection at large distances) to collect the wake data. A detailed description of the Wind3D 6000 LiDAR is provided in Table 2(a). Considering factors such as airport terrain, weather conditions, and runway operation mode, we set the LiDAR parameters as highlighted in Table 2(b).

To preserve the features of the wake data efficiently, the wake data is visualized as a gray cloud image with a linear mapping. An example visualization of the wake evolution data during the flight of an A380 aircraft is presented in Figure 4. From the evolution diagram, it is clearly visible that, under the mutual induction of the two vortices and the environmental wind, the left and right vortices gradually grow in size, while the antisymmetric characteristic structure begins to weaken. When the wake vortex touches the ground and is bounced back into the air, the intuitive characteristic structure of the vortex becomes less obvious, yet the wake vortex remains strong and still poses a risk to aircraft operations at this stage. These changes in the wake vortex structure make it difficult to cope with the entire wake vortex life cycle using traditional machine learning methods. Deep learning-based methods are the state of the art in computer vision, and their learned weights may be more helpful in extracting the latent features of these unstructured wake vortices. The VGG16 model used in this paper can capture the dynamic morphological changes of the wake by reconstructing its structure and transferring a large number of features learned from the ImageNet source domain.

3. VGGNet for Wake Vortex Identification

3.1. CNNs

CNNs have had a great impact on the field of computer vision and various other applications [16-20] in recent years, and these advancements could pave the way for identifying aircraft wake data measured with LiDAR. In this work, a CNN is used to perform aircraft wake recognition. For radar data recognition, CNNs offer many unique advantages [21-23], and thus many CNN tools are widely used in LiDAR detection. At present, AlexNet, VGGNet, GoogLeNet, and the deeper ResNet are widely used in solving complicated intelligence tasks. Theoretically, the deeper the neural network, the better the detection and recognition effect. In particular, the VGG model [9] serves as a backbone network and performs well in target detection tasks with its simple network structure. Owing to these advantages, we have chosen the VGG16 network for aircraft wake recognition and modified its structure to adapt to the wake grayscale dataset.

3.2. Network Model Construction

The overall network structure is shown in Figure 5; the reconstructed VGG network largely retains the basic structure of the original VGG. There are 13 convolutional layers in the modular network structure, and the convolutional layers and pooling layers are stacked on one another, which gives the network a larger receptive field while reducing the number of network parameters. After each convolutional layer, a ReLU activation function is applied to introduce nonlinearity into the otherwise purely linear transformation. At the same time, four max-pooling layers are interspersed, which avoids the blurring effect of average pooling and improves the richness of the features. Among the last three fully connected layers, dropout layers are interspersed to randomly drop some neurons and avoid overfitting during training.

In this work, the model pretrained on ImageNet was used as the starting point, and the convolutional layer parameters of the original model were transplanted into our model by transfer learning [24]. Next, we remove the original convolution layer and the last two fully connected layers. We then add a new convolution layer and an output layer so that the model outputs a binary classification instead of the original network's multi-class classification. Since the grayscale image is single-channel while VGG expects three-channel color input, we change the first layer of VGG from a three-channel to a single-channel input so that the model accepts the grayscale images.
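
A minimal PyTorch sketch of this kind of adaptation is shown below. It is illustrative only: the exact convolution layer removed and added in the paper's model (Figure 5) is not reproduced here, and only the documented changes (single-channel input and a two-class output head) are shown.

```python
import torch.nn as nn
from torchvision import models

# Sketch of adapting a pretrained VGG16 along the lines described above (illustrative).
model = models.vgg16(pretrained=True)

# Replace the first convolution so the network accepts single-channel
# grayscale cloud images instead of three-channel RGB input.
model.features[0] = nn.Conv2d(1, 64, kernel_size=3, padding=1)

# Replace the last fully connected layer so the network outputs
# two classes (wake vortex present / absent) instead of 1000.
model.classifier[6] = nn.Linear(4096, 2)
```

Note that replacing the first convolution discards its pretrained weights for that layer; one common workaround is to average the pretrained RGB kernels into a single channel, but the paper does not state which strategy was used.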

Figure 6 shows the aircraft wake vortex identification process based on the VGG16 model. The process begins by obtaining the wind field radial velocity data through LiDAR detection. The collected data is then mapped into grayscale cloud images, and the cloud atlases are preprocessed to generate the training samples. To identify a wake vortex sample cloud image, it is fed into the trained model for the identification test, and the test results are output by the VGG16 network.

4. Results and Discussion

4.1. Experimental Platform

In our experiments, the deep convolutional neural network is built on the PyTorch framework and implemented in the Python programming language. The experimental workstation used in this work is a Dell T7810 with 16 GB of memory, a dual-CPU, 12-core configuration, and a 3.4 GHz main operating frequency.

4.2. Data Processing and Network Parameter Setting

This study considers flight take-offs at Shuangliu International Airport, where about 500 flights (including the A380, A320, and A330) depart every day. By converting the wind field data detected by the LiDAR into gray cloud images, the average wind speed of the background wind field is plotted in Figure 7.

In practice, the number of collected images is limited. To increase the diversity of the data while preventing overfitting in network training, the original images are randomly flipped up and down in the preprocessing stage to improve the generalization ability of the model [25]. In total, 3,530 samples were collected in our experiment. Random sampling without replacement was used to select 60% of the images as the training set, 20% as the validation set, and the remaining 20% as the test set. Among these samples, those containing a vortex are labeled positive (T), while those without a vortex are labeled negative (F). The specific data related to the image sets are shown in Table 3.
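
A minimal PyTorch-style sketch of this preprocessing and split is shown below; the dataset folder layout and the flip probability are assumptions for illustration, not details reported in the paper.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Sketch of the augmentation and 60/20/20 split described above.
# The folder name "wake_gray_dataset" and the flip probability (0.5) are assumptions.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),  # single-channel cloud images
    transforms.RandomVerticalFlip(p=0.5),         # random up-down flip for augmentation
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("wake_gray_dataset", transform=transform)

n_total = len(dataset)                  # 3,530 samples in the paper's experiment
n_train = int(0.6 * n_total)
n_val = int(0.2 * n_total)
n_test = n_total - n_train - n_val
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(0),  # reproducible split (assumption)
)
```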

We use the stochastic gradient descent (SGD) method [26] to train the neural network, where the SGD algorithm updates the network weights step by step. The update rule used in this approach can be written as

$$w_{t+1} = w_t - \eta \left( \frac{\partial L}{\partial w_t} + \lambda w_t \right),$$

where $\eta$ is the learning rate and $\lambda$ is the regularization coefficient.

The initial learning rate of SGD is set to 0.01, and the regularization coefficient is set to 0.005. The cross-entropy, which characterizes the distance between the predicted output probability ($\hat{y}$) and the expected output ($y$), is used as the loss function. For binary classification, the loss function can be mathematically represented as

$$L = -\left[ y \log \hat{y} + (1 - y) \log (1 - \hat{y}) \right].$$
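
In PyTorch terms, this training configuration corresponds roughly to the sketch below, reusing the `model` and `train_set` objects from the earlier sketches. The learning rate and regularization (weight decay) coefficient follow the stated values, while the momentum and batch size are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Sketch of the optimizer/loss setup described above (illustrative).
# lr and weight_decay follow the stated values; momentum and batch size are assumptions.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=0.005, momentum=0.9)
criterion = nn.CrossEntropyLoss()        # cross-entropy over the two output classes
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

for epoch in range(500):                 # 500 training epochs, as in Section 4.3
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```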

4.3. Experimental Results

We use the VGG16 network model and train it for 500 epochs in the PyTorch framework. Figure 8 illustrates the variation of the loss function value and the accuracy with the number of iterations during convolutional neural network training.

In Figure 8, when the epoch ranges from 0 to 400, the loss of the training set shows a downward trend, the accuracy of the training set shows an upward trend, and the accuracy of the validation set oscillates repeatedly, indicating that the model obtained at this stage is not yet stable or optimal. The output value of the network loss function gradually decreases as training progresses. When the number of training rounds reaches 450, the VGG16 network tends to stabilize and yields the optimal model.

To set up comparison experiments, the results of the KNN [11], SVM [12], RF [13], and VGG16-based models were considered. We first compare the performance of these models in terms of the confusion matrix (Figure 9). The analysis of this evaluation metric in Figure 9 shows that the VGG16-based model performs better than the other methods, giving consistently improved identification accuracy. An additional identification experiment was conducted to assess the introduced model. To evaluate the performance of our VGG16-based model when it is applied to the identification of aircraft wake vortices, the accuracy, recall, precision, and F1-score [27] are considered. Table 4 shows the performance indicators of the VGG16-based model on the test set, and the experimental results from this work are compared with the previously reported models.
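
For reference, these metrics can be computed from the test-set predictions with scikit-learn as in the minimal sketch below; this is an illustrative convention, not the authors' evaluation script.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# y_true: ground-truth labels (1 = vortex present, 0 = no vortex)
# y_pred: labels predicted by the trained model on the test set
def report_metrics(y_true, y_pred):
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall":    recall_score(y_true, y_pred),
        "f1":        f1_score(y_true, y_pred),
    }
```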

The accuracy of the VGG16 network on the gray cloud image test set is 98.40%, while the recall and precision are 95.90% and 95.10%, respectively. Moreover, the results provided in Table 4 clearly demonstrate the ability of the proposed network to distinguish the wake vortex in the gray cloud image. Significantly, our VGG16-based model achieves an F1-score of 95.50%, which is the new state-of-the-art result on this dataset. The results highlight the potential of the developed network for wake vortex identification in complex background wind fields. In other words, the method proposed in this work can be applied in practice to provide auxiliary decision-making information to airport control. Figure 10 shows the identification results for some aircraft wake vortex samples.

Thanks to the depth of the network's convolutional layers and transfer learning, the VGG16 network itself can extract various textures and high-level abstract features in the image. By changing the structure of the first layer of VGG16, the model adapts to the grayscale mapping of the aircraft wake. Figure 10 shows that the various shapes of the aircraft wake at different evolution stages can be well recognized.

5. Limitations and Future Work

5.1. Limitations

Although the VGG-based model presented in this paper has achieved good results in wake recognition, there are still some limitations that need to be discussed.
(i) Our dataset was collected at Chengdu Shuangliu International Airport from August 16, 2018, to October 10, 2018. In total, more than 270,000 detections obtained by the Wind3D 6000 LiDAR, with an average of 48 data records per scan cycle from field observations, were used for the task of wake vortex recognition. After data cleaning and preprocessing, about 169,440 complete data records were obtained to build our wake dataset. Although our wake dataset is not large compared with benchmark datasets such as ImageNet, the wake data are representative and highly consistent. In fact, acquiring wake vortex data in the field is labor-intensive and time-consuming, and difficult data acquisition and anomalous data generation often become the limiting step. We believe that larger datasets can better explain the recognition models at work and help in understanding the results. In further research, by setting up multiple Doppler LiDARs in the near-ground flight areas of different runways at Shenzhen Bao'an Airport, a large amount of accurate wind field data will be captured for wake vortex data collection.
(ii) Our recognition model is formed by modifying the VGG16 network. The VGG16 network has certain requirements on computing power and memory capacity, which may limit some application scenarios and user requirements. For special scenarios with stricter requirements, maintaining a balance between efficiency and computational cost is crucial for next-generation wake-recognition networks.

5.2. Future Work

Although the task of wake data recognition can be well solved using deep learning techniques, some possible directions for further research are the designs of wake spatiotemporal sequence prediction, wake feature parameter estimation, etc. It should be noted that the rapid and accurate estimation of the geometric position of the wake based on the wake warning can provide the research basis for these further studies. Therefore, this section discusses a preliminary application of deep learning interpretability technology on aircraft wake vortex core based on the gradient-weighted class activation mapping (Grad-CAM) [28].

5.2.1. Grad-CAM Implementation

Grad-CAM is a visual interpretation technique for understanding the results of convolutional neural networks. Grad-CAM calculates an activation heat map of the CNN for the input image through the gradient information. The magnitude on the activation heat map indicates the degree of influence of each part of the original image on the classification result. Furthermore, to evaluate whether the constructed model has acquired the key features, the given approach selects images from the test set that were correctly recognized by the three models and visualizes them through Grad-CAM. The interpretation of Grad-CAM implemented in this work is provided in Figure 11.

As shown in Figure 11, by calculating the gradient of each pixel in the feature map with respect to the classification score, the degree of influence of the feature map on the classification result can be characterized. The Grad-CAM algorithm obtains a set of weights by calculating the average gradient of each feature map with respect to the classification score. Once the weight of each feature map is calculated, the overall Grad-CAM activation map can be extracted by weighted summation. The weight is given as follows:

$$\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial y^c}{\partial A_{ij}^k}.$$

First, we compute the gradient of the score $y^c$ for the class $c$ with respect to $A_{ij}^k$, where $A_{ij}^k$ is the pixel value at coordinate $(i, j)$ in the $k$th feature map and $Z$ is the total number of pixels in each feature map. Next, we calculate the weight $\alpha_k^c$ of the $k$th feature map for the class $c$ through the backpropagated gradient. Finally, using the weighted sum method, Grad-CAM reveals the activation map of the corresponding class, with $A^k$ being the $k$th feature map output from the feature extraction part. Since only the pixels with a positive impact on the classification result are of interest, ReLU is applied to the weighted summation result using the following expression:

$$L_{\mathrm{Grad\text{-}CAM}}^c = \mathrm{ReLU}\left( \sum_k \alpha_k^c A^k \right).$$
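
A minimal PyTorch sketch of this procedure using forward and backward hooks is shown below, reusing the `model` object from the Section 3.2 sketch; the choice of hooked layer and the tensor shapes are illustrative assumptions rather than the exact implementation used in this paper.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, conv_layer):
    """Sketch of Grad-CAM: weight each feature map of `conv_layer` by the
    average gradient of the class score and ReLU the weighted sum."""
    activations, gradients = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    score = model(image.unsqueeze(0))[0, target_class]   # class score y^c
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    A = activations[0].squeeze(0).detach()               # feature maps A^k, shape (K, H, W)
    dYdA = gradients[0].squeeze(0)                       # gradients dy^c / dA^k
    alpha = dYdA.mean(dim=(1, 2))                        # weights alpha_k^c (global average)
    cam = F.relu((alpha[:, None, None] * A).sum(dim=0))  # ReLU of weighted sum
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1] for display

# Example (assumed layer index: last conv layer of torchvision's VGG16):
# cam = grad_cam(model, gray_image_tensor, target_class=1, conv_layer=model.features[28])
```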

As the magnitude of the Grad-CAM activation heat map indicates the degree of influence of the pixel (at the corresponding position in the original image) on the classification result, the most prominent position in the activation map indicates the position of the wake vortex feature.

5.2.2. Interpretability of Wake Recognition

To verify the proposed detection method, we selected three gray cloud images with wake vortices. When a vortex is detected, the blue part indicates the pixels that contribute the least to the correct classification; conversely, the darker colors represent a higher degree of contribution to the correct classification.

A visual demonstration of Grad-CAM is given in Figure 12, where Figure 12(b) is the original cloud image and Figure 12(a) is the Grad-CAM activation heat map. In this section, we mainly discuss the interpretability of wake recognition by the convolutional neural network. The trained VGG16 model uses wake features to recognize wake vortices: when recognizing the wake, most of the pixels that make the greatest contribution in the cloud image come from the vortex core. These results provide a novel idea for identifying vortex cores in the future.

6. Conclusion

In this work, we have introduced an efficient method to identify aircraft wake vortices using the VGG16 model. The experimental results reveal excellent model performance in identifying wake vortices in an atmospheric wind field. In summary, the proposed method can effectively improve the airport's control of aircraft wake intervals and determine safe wake intervals for the realization of intelligent air traffic management. Furthermore, we use Grad-CAM to discuss the interpretability of the wake recognition performed by the VGG16 convolutional neural network. The obtained results show that whenever VGG16 detects a wake, the vortex core contributes the most to the network's decision. Consequently, the information on the contribution of the vortex core in wake detection indicates a novel direction for our future research on vortex core identification. Additionally, the data samples collected in this article are relatively small and limited to a few locations at Shuangliu Airport. However, in practice, the humidity and temperature of different airports differ, which affects the evolution of the wake shape. Due to time constraints, these effects were not fully explored in the experiments. To enhance the versatility of the presented model and complete sufficient comparative experiments, data from different airports will be used to train the deep learning models in the future. Real-time wake identification technology will help establish a real-time wake interval system for air traffic management, thereby increasing airport capacity and enabling efficient use of airport airspace.

Data Availability

The data used to support the findings of this study have not been made available because the raw data required to reproduce these findings cannot be shared at this time as the data also form a part of an ongoing study.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. U1733203), the Program of China Sichuan Science and Technology (Grant No. 2021YFS0319), and the Special Project of Local Science and Technology Development Guided by the Central Government in 2020 (Grant No. 2020ZYD094).