Abstract

Whether for military or civilian use, the quadrotor UAV has long been a central research topic. Most current quadrotor drones are manually operated and use GPS signals for navigation, which not only limits the operating range of the drone but also consumes considerable manpower and material resources. This research studies a method for realizing autonomous flight and conflict avoidance of a quadrotor UAV under extreme flight conditions through track prediction, using a multisensor system and deep learning. A convolutional neural network is used to extract features from the image information collected by the UAV sensor system, and a recurrent neural network is used to extract the temporal features of the information collected by the UAV sensors. The results show that the track prediction method based on deep learning yields high flight accuracy for quadrotor UAVs: the yaw error of the spatial position is only 2.82%, and the maximum error of the temporal features is only 0.77%.

1. Introduction

The quadrotor is a common UAV configuration, and its stability is higher than that of other configurations owing to the balancing effect of its four rotors. It has been widely used in firefighting, forest monitoring, aerial photography, and other fields and has shown good performance [1]. As technology develops, the endurance of UAVs keeps improving, which makes over-the-horizon flight possible. However, the autonomous navigation capability of UAVs has become the main factor restricting their development toward long-range, long-endurance flight [2]. The sensors carried by a UAV can obtain a large amount of information, such as images, which provides abundant training data for vision-based deep learning autonomous navigation methods and thus promotes the development of UAVs toward autonomous navigation [3].

At this stage, UAV navigation methods are mainly divided into inertial navigation, satellite navigation, and vision-based navigation. With inertial navigation, the continuously accumulating error of the gyroscope causes the flight error and yaw trajectory of the UAV to become more and more pronounced, so it is not suitable for long-range flight [4]. Satellite navigation is the main method currently in use. Its error is relatively small, so UAV missions can be completed under precise human control [5]. UAVs that use GPS signals for navigation and control not only require highly skilled flight professionals but also place higher requirements on the flight area [6]. However, with the continuous development of unmanned technology, people hope that UAVs can replace humans in performing tasks in dangerous and complex environments [7], which limits the use of GPS navigation. With the rapid development of big data and image recognition technology, vision-based navigation provides a new approach for UAV navigation and a promising route for UAVs to achieve autonomous flight [8]. GPS navigation relies on external signals, which provide the drone with information such as position and control commands; autonomous navigation, by contrast, requires the drone to fly autonomously in real time without any signal indication.

With the continuous development of sensors and navigation technology, a large number of studies have been conducted on autonomous navigation methods for quadrotor UAVs, promoting the development of UAVs toward autonomous navigation and flight. Zhang et al. [9] designed a multinested autonomous landing algorithm for UAV landing on an autonomous surface vehicle (ASV) and conducted experimental research. They also derived a stability theory suitable for UAVs based on Lyapunov stability theory and experimented on the stability of UAV landings. Solend et al. [10] argue that infrared imaging technology is suitable for autonomous UAV navigation. They compared the performance of a UAV flight controller integrating global navigation satellite system (GNSS), inertial navigation, and other technologies with a real-time kinematic (RTK) controller and found that the surface position error has a strong impact on image collection. Duan et al. [11] designed an autonomous navigation method for drones based on the homing behavior of pigeons. They concluded that the autonomous navigation system of drones can reproduce a navigation behavior similar to pigeon homing with high navigation accuracy, making it a technology worth promoting. Al-Darraji et al. [12] argue that the accuracy of autonomous navigation drones depends on the accuracy of their sensors, for tasks such as image mapping and autonomous obstacle avoidance. At the same time, they suggest avoiding combinations of too many sensors, since these increase both the power consumption and the cost of the drone. Lakhal et al. [13] applied drones to container terminals that lack signals or have signal blind areas. They established a cooperative system (SOS) based on the flight environment and realized a cooperative communication mode between the intelligent autonomous vessel (IAV) and the drone under signal-free navigation conditions. Guo et al. [14] addressed the limitations of autonomous drone navigation with a deep reinforcement learning (DRL) method. The interaction data is divided into two parts: one part is processed by a long short-term memory neural network, and the other is used for the DRL clipping loss function. Their work shows that this method is superior to other reinforcement learning methods. Elmokadem and Savkin [15] proposed a hybrid navigation method and designed a global path planning method for UAVs based on a sliding mode control law. Huang et al. [16] established a hybrid landmark- and visual-navigation-system model (IMU/VNS) for UAV navigation and derived the corresponding navigation parameters and observability matrices, concluding that the proposed visual navigation mode has high accuracy. Gonzalez-Sieira et al. [17] proposed a method for estimating traversal time, obtaining collision probability, and analyzing uncertainty in autonomous drone navigation, and built a 3D simulation scene for algorithm verification [18].

From the above overview of UAV navigation methods, it can be seen that autonomous navigation is the future development trend. Sensor technology and accuracy have improved in recent years and can provide data support for deep learning algorithms in real time [19, 20]. At the same time, with the rapid development of deep learning and machine vision, autonomous navigation of drones has become possible. This paper studies the implementation of an autonomous UAV navigation mode based on deep learning. Basic flight parameters, such as yaw angle, attitude, flight speed, and position, are obtained from basic sensors such as GPS, inertial navigation, and cameras and used as the input of the deep learning algorithm to train the weights and biases of the neural network [21, 22]; the output of the neural network is then obtained from these weights, biases, and other parameters.

This article introduces the feasibility and accuracy of autonomous navigation of quadrotor drones in five sections. Section 1 introduces the current navigation methods, restrictions, and development trends of UAVs. Section 2 introduces the significance of UAV data acquisition and the realization of autonomous navigation. Section 3 introduces the algorithms and steps needed to realize the autonomous navigation of the quadrotor UAV. Section 4 studies the trend and error distribution of the actual and predicted flight paths in the autonomous navigation mode and analyzes the feasibility of sensor data prediction. Section 5 summarizes the research on UAV autonomous navigation.

2. The Significance of Deep Learning Methods for Autonomous Navigation of Quadrotor UAV

2.1. Autonomous Drone Navigation by Sensors and Deep Learning

Sensors can collect images and other information to obtain a large amount of data, and deep learning technology can map the nonlinear relationships among these features, which will promote the development of autonomous drone navigation [23]. In this study, cameras, height sensors, angle sensors, and pressure sensors are mainly used to collect UAV image information, flight height, flight angle, and environmental pressure, respectively. UAV navigation currently relies mainly on GPS for positioning, which serves applications such as power equipment monitoring, unmanned aerial photography, and bridge monitoring. However, GPS positioning is greatly affected by satellite signals and flight controllers [24], and signal interference in closed areas is also relatively large, which limits the use of UAVs [25]. Functions such as cabin monitoring, enclosed-area monitoring, and military missions require a capability for autonomous flight that GPS navigation technology often cannot provide [26]. In recent years, artificial intelligence has developed rapidly, and many high-performance image recognition and autonomous obstacle avoidance algorithms have been derived [27]. At the same time, sensor accuracy has improved rapidly, providing abundant data support for deep learning algorithms [28, 29]. A UAV can rely on its multiple onboard sensors to collect data such as temperature, flight altitude, navigation angle, and geographic environment; these data are then standardized and processed by deep learning algorithms, and finally a reasonable next-step flight path is planned for the drone, greatly reducing the restrictions imposed by human factors. At the same time, the UAV can realize autonomous obstacle avoidance according to the learned algorithm and select the optimal flight path. Deep learning has brought new opportunities to the realization of autonomous drones [30]. Deep learning technology can extract the image, altitude, angle, and other flight features collected by the UAV, providing the data support necessary for the UAV to realize autonomous navigation.

2.2. Method and Source of Data Acquisition

In order to study the vision-based autonomous navigation method built on deep learning, this paper selects a marine environment, where information is severely limited and the flight controller cannot intervene [31]. If the drone's autonomous navigation performance is good in this scenario, the method should also be suitable for other, more spacious environments [32]. At the same time, this paper selects the ship's indoor area, with its relatively poor signal and narrow space, for the research on autonomous drone navigation. Since the ship sailing area lacks GPS signals, this study selects the cabin of the ship as the research object, which can better illustrate the feasibility of the deep learning model for UAV autonomous navigation.

Compared with other similar UAVs, the quadrotor UAV has strong balance and can carry multiple sensors for data collection. Sensor technology and accuracy have developed rapidly thanks to advances in hardware and chip technology. At present, the most commonly equipped UAV sensors include the gyroscope, the accelerometer, and the GPS navigation system; in addition, a UAV can be equipped with radar and a camera. Through these sensors, it can obtain information such as temperature, altitude, yaw angle, speed, and position. It can also measure the shape of the ship through the camera and feed the collected information back to the operating terminal in real time for technicians to analyze and extract images. The central control computer carried by the drone contains the trained autonomous obstacle avoidance and image processing functions. Once a sensor has obtained and processed the relevant information, instructions can be transmitted to the relevant actuators to change the current flight state. The UAV is also equipped with hardware devices such as cameras, altitude sensors, and angle sensors to collect flight data. These multisource data can serve as data sources for the deep learning methods.

Figure 1 shows the workflow of autonomous navigation of the quadrotor UAV. In this paper, a ship room with strong signal shielding is selected as the data collection area and the indoor place for UAV autonomous navigation. The UAV collects indoor images, obstacles, and other information through its camera or radar, sends the data collected by the multisensor system to the deep learning network, computes the output through the optimal weights, biases, and activation functions, and finally transmits the next flight path to the drone's actuators in real time, completing the autonomous navigation task of the quadrotor UAV. The camera mounted on the UAV collects image information around the UAV's flight path, including obstacle color and shape.
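To make this workflow concrete, the following minimal sketch outlines one iteration of the sense-predict-act loop described above. The helper functions read_sensors and send_to_actuators, and the feature layout, are hypothetical placeholders rather than the exact interfaces of the system described here.

```python
import numpy as np

def navigation_step(model, read_sensors, send_to_actuators):
    """One iteration of the autonomous navigation loop (illustrative)."""
    # 1. Collect multisensor data (camera image, altitude, yaw angle).
    image, altitude, yaw = read_sensors()

    # 2. Assemble the network input from the multisensor readings.
    features = np.concatenate([image.ravel(), [altitude, yaw]])

    # 3. Forward pass through the trained deep network
    #    (optimal weights and biases, then activation).
    next_waypoint = model.predict(features[None, :], verbose=0)

    # 4. Transmit the predicted next flight path to the actuators.
    send_to_actuators(next_waypoint[0])
```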

3. The Introduction to the Autonomous Navigation Algorithm of Quadrotor UAV

3.1. Vision Acquisition Processing Algorithm

In this article, a marine ship, with its more complicated flight environment, is selected; the purpose is to realize autonomous navigation and flight of the drone around the ship to monitor and observe a wrecked vessel. This requires the drone to process its own sensor data and issue flight instructions based on those data. The research uses a convolutional neural network (CNN) to extract features from the data acquired by the sensors carried by the UAV. Figure 2 shows the CNN operation process. The UAV's hardware collects image, altitude, angle, and other information, and these data are used as the input of the CNN. After the CNN is trained, it can provide predicted flight information to the UAV, enabling it to navigate autonomously. Two different types of UAVs are included in Figure 2 to illustrate that the CNN training set contains flight data for multiple types of UAVs rather than being limited to one type, which improves the generalization ability of the CNN in UAV autonomous navigation tasks.

For the CNN hyperparameters, the number of filters was set to 128 in this study. The learning rate is set to 0.001, which speeds up the training process. The number of CNN layers is set to 4, allowing the network to mine more data features. Convolutional neural networks have unique advantages in feature recognition: they can perform feature extraction on multidimensional input data and map it to the desired type of output through the activation function. The convolutional neural network can also be regarded as a special case of the perceptron. The most basic perceptron operation is shown in Equation (1).

$$y = f\left(\sum_{i} w_i x_i + b\right). \tag{1}$$

In Equation (1), $w_i$ denotes the neural network weights, $b$ is the neural network bias, and $f(\cdot)$ is the activation function.

When the data obtained by the UAV sensors enters through the input layer, it first passes through the convolution operation of the convolutional layer, shown in Equation (2):

$$y_{i,j} = f\left(\sum_{m}\sum_{n} w_{m,n}\, x_{i+m,\, j+n} + b\right). \tag{2}$$

The convolution operation is determined by factors such as the number of filters, the filter size, and the step size, and it converts the image data collected by the UAV sensors into tensor form. The size of the resulting output feature map follows Equation (3):

$$O = \left\lfloor \frac{W - F + 2P}{S} \right\rfloor + 1, \tag{3}$$

where $W$ is the input width, $F$ the filter size, $P$ the padding, and $S$ the stride.

In order to improve the generalization ability of the model, this study uses the max pooling method for downsampling, as shown in Equation (4): the pooling function aggregates the feature values within each window, adds a bias, and produces the output through the activation function,

$$x_j^{l} = f\left(\beta_j^{l}\,\mathrm{down}\!\left(x_j^{l-1}\right) + b_j^{l}\right), \tag{4}$$

where $\mathrm{down}(\cdot)$ denotes the downsampling operation over each pooling window, $\beta_j^{l}$ is a multiplicative coefficient, and $b_j^{l}$ is the bias.

During backpropagation, the sensitivity of a convolutional layer is obtained from the pooling layer above it, as shown in Equation (5):

$$\delta_j^{l} = \beta_j^{l+1}\left(f'\!\left(u_j^{l}\right) \circ \mathrm{up}\!\left(\delta_j^{l+1}\right)\right), \tag{5}$$

where $f'(\cdot)$ is the derivative of the pooling-layer activation in Equation (4), $\mathrm{up}(\cdot)$ is the upsampling operation, $u_j^{l}$ is the preactivation of the convolution kernel output, and $\beta_j^{l+1}$ is the pooling weight parameter.

Many tensor and differential operations are involved in the convolution operation, all of which are realized by the automatic differentiation function of the TensorFlow platform. The differential calculation of the weights and biases is shown in Equation (6):

$$\frac{\partial E}{\partial b_j} = \sum_{u,v}\left(\delta_j^{l}\right)_{u,v}, \qquad \frac{\partial E}{\partial k_{ij}^{l}} = \sum_{u,v}\left(\delta_j^{l}\right)_{u,v}\left(p_i^{l-1}\right)_{u,v}, \tag{6}$$

where $k_{ij}^{l}$ is the convolution kernel and $p_i^{l-1}$ is the patch of the layer $l-1$ feature map that the kernel multiplies.
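As an illustration, the sketch below builds a CNN with the hyperparameters stated above (four convolutional layers, 128 filters, learning rate 0.001) and lets TensorFlow's automatic differentiation carry out the gradient computation of Equation (6). The 64x64x3 input shape, the dense output head, and the mean-squared-error loss are illustrative assumptions, not the exact architecture used in this study.

```python
import tensorflow as tf

# Illustrative CNN with the hyperparameters stated in the text:
# 4 convolutional layers, 128 filters each, learning rate 0.001.
model = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(64, 64, 3))]
    + [tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same")
       for _ in range(4)]
    + [tf.keras.layers.MaxPooling2D(2),
       tf.keras.layers.Flatten(),
       tf.keras.layers.Dense(3)]  # e.g. predicted yaw, altitude, speed
)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

@tf.function
def train_step(x, y):
    # tf.GradientTape performs the differential calculation of
    # Equation (6) for all kernels and biases automatically.
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x, training=True) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```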

3.2. The Time Characteristics of UAV Sensor Data

The flight state of the UAV has obvious temporal characteristics, especially its flight angle and flight height, and the historical flight state also affects the flight of the drone at the next moment. Therefore, this study uses a long short-term memory (LSTM) network to extract the temporal features of UAV flight. The trajectory of the UAV is closely related to the flight time, and its flight attitude is closely related to the historical information of the previous moment; if this time-related information can be extracted well, the accuracy of the drone's autonomous navigation will improve. LSTM networks have been widely used in speech recognition and have proven to perform well in temporal feature extraction. This paper selects the LSTM network to process the data from the sensors carried by the drone and then transmit predicted instructions to it. Figure 3 shows the temporal feature extraction process for the sensor data. This study arranges the output data of the CNN into time series form, which is input into the LSTM network layers. These time series determine the label values based on the sliding step size and the sliding window width. The information collected by the sensors is transmitted to the LSTM network of the UAV control system for temporal feature extraction to obtain the best weights and biases. The main LSTM parameters are the sliding step size and the sliding window length, both of which are set to 20. The number of LSTM layers is set to 5 in order to extract more temporal features.
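The following sketch shows one plausible way to realize this stage: slicing the CNN outputs into windows of width 20 with step 20 and stacking five LSTM layers, as stated above. The feature dimension, hidden size, and output head are assumptions made for illustration.

```python
import numpy as np
import tensorflow as tf

WINDOW, STEP = 20, 20   # window width and step size from the text
N_FEATURES = 8          # assumed dimension of the CNN output features

def make_windows(series, targets):
    """Slice a (T, N_FEATURES) series into (window, label) pairs
    using the sliding window width and step size given above."""
    xs, ys = [], []
    for start in range(0, len(series) - WINDOW, STEP):
        xs.append(series[start:start + WINDOW])
        ys.append(targets[start + WINDOW])  # label follows the window
    return np.asarray(xs), np.asarray(ys)

# Five stacked LSTM layers, as stated in the text.
lstm = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(WINDOW, N_FEATURES))]
    + [tf.keras.layers.LSTM(64, return_sequences=True) for _ in range(4)]
    + [tf.keras.layers.LSTM(64),
       tf.keras.layers.Dense(3)]  # e.g. yaw, altitude, position offset
)
lstm.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
```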

The LSTM network has obvious advantages in processing temporal information because it can selectively control the memory and updating of historical state information, owing mainly to its gate structures. The first gate the LSTM passes through is the forget gate, which selectively forgets low-weight information and allows high-weight historical information to be memorized. The calculation is shown in Equation (7):

$$f_t = \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right). \tag{7}$$

The input gate combines the historical state and the input state to perform its calculation, as shown in Equations (8) and (9):

$$i_t = \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right), \tag{8}$$

$$\tilde{C}_t = \tanh\left(W_C \cdot [h_{t-1}, x_t] + b_C\right). \tag{9}$$

After passing through the forget gate and the input gate, the current cell state is refreshed by the following formula, shown in Equation (10):

$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t. \tag{10}$$

The output gate is another important gate structure of the LSTM network. It selectively outputs the historical state and the state after the input operation, as shown in Equations (11) and (12):

$$o_t = \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right), \tag{11}$$

$$h_t = o_t \odot \tanh(C_t). \tag{12}$$
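To make Equations (7) through (12) concrete, the following NumPy sketch implements one time step of a standard LSTM cell. The parameter dictionaries W and b are assumed to hold one weight matrix and bias vector per gate; their shapes and initialization are left to the caller.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step implementing Equations (7)-(12)."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate, Eq. (7)
    i_t = sigmoid(W["i"] @ z + b["i"])       # input gate, Eq. (8)
    c_hat = np.tanh(W["c"] @ z + b["c"])     # candidate state, Eq. (9)
    c_t = f_t * c_prev + i_t * c_hat         # cell state update, Eq. (10)
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate, Eq. (11)
    h_t = o_t * np.tanh(c_t)                 # hidden output, Eq. (12)
    return h_t, c_t
```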

3.3. The Preprocessing and Standardization of Sensor Data

Due to differences in the types of sensors carried by the drone, the forms and formats of the data they acquire also differ; the data acquired by the camera differ from those of other sensor types. Therefore, the UAV sensor data must be processed before training and prediction. Synchronizing and merging the data acquired by different sensors enhances the correlation between them and makes it easier for the neural network to fit the relationship between the sensor data and the UAV instructions. For the image features, this study matrixes the image and scales the data to between 0 and 1. Information such as the height and angle of the UAV is made dimensionless. Once the data are preprocessed, the two selected network models can be trained to obtain weights and hyperparameters suitable for autonomous drone navigation. This research adopts a normalized data processing method, which maps the image, height, angle, and other data into the same interval so that the data conform to the same distribution.
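A minimal sketch of this preprocessing step is shown below, assuming 8-bit camera images and scalar-channel statistics estimated from the training set; the statistics and the flattened feature layout are assumptions for illustration.

```python
import numpy as np

def preprocess(image, altitude, angle, alt_stats, ang_stats):
    """Fuse heterogeneous sensor data into one normalized vector.
    alt_stats and ang_stats are (mean, std) pairs estimated from
    the training data."""
    # Image pixels are matrixed and rescaled into [0, 1].
    img = np.asarray(image, dtype=np.float32) / 255.0

    # Scalar channels are made dimensionless (zero mean, unit variance)
    # so that all inputs follow a comparable distribution.
    alt = (altitude - alt_stats[0]) / alt_stats[1]
    ang = (angle - ang_stats[0]) / ang_stats[1]

    return np.concatenate([img.ravel(), [alt, ang]])
```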

4. Analysis and Discussion on Performance of UAV Autonomous Navigation

An ocean area with strong signal shielding was selected as the simulation area for the autonomous navigation of the UAV, and five different types of data were selected as the input of the network model. In order to improve prediction accuracy, this paper first classifies the data with a clustering method. Figures 4 and 5 show the clustering and prediction errors of the angle sensor. The angle sensor measures the yaw angle of the UAV to correct the flight angle during autonomous navigation, which is a key parameter. The data measured by the sensor are scaled into the interval [-1, 1]. Figure 4 shows the distribution of the data sources after clustering. Generally speaking, the distribution is relatively uniform, with the largest category accounting for 23.8% and the smallest for 16.1%. The gap between them is relatively small, which benefits the training stage of the deep learning algorithm and helps to find the optimal weights and hyperparameters. Figure 5 shows the prediction error for the five types of data, where the error is computed between the predicted values and the sensor data. In general, the prediction error of the UAV sensor data is within the acceptable range: the maximum is about 4.8%, an acceptable error range for realizing autonomous UAV navigation, and the smallest error is only 0.76%.
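As a sketch of this pre-classification step, the snippet below scales synthetic angle-sensor samples into [-1, 1] and clusters them into five groups (one per data type) with k-means. The synthetic data, feature dimension, and the choice of scikit-learn's KMeans are our assumptions; the paper does not specify the clustering algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

# X: raw angle-sensor samples, shape (n_samples, n_features); synthetic here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))

# Scale readings into the interval [-1, 1], as described above.
X_scaled = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X)

# Pre-classify the data into five clusters (one per data type)
# before training the prediction network.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_scaled)
for k in range(5):
    print(f"cluster {k}: {np.mean(labels == k):.1%} of samples")
```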

It can also be seen from the straight line in Figure 5 that the prediction error distribution is relatively uniform. The black line represents the cumulative proportion of the drone-related factors, reflecting both the share of each factor and the trend of that share. In Figure 5, the left axis represents the prediction error and the right axis represents the cumulative error, which is the running sum of the individual errors accumulated from left to right. This uniformity arises because the types and quantities of data collected by the UAV sensors are relatively uniform, which benefits the realization of autonomous UAV navigation. Most of the errors fall within 4%, which shows that the neural networks selected in this paper extract the temporal and spatial characteristics of the sensors with high accuracy. Moreover, the image is very important for realizing the autonomous navigation of the UAV.

In order to demonstrate more intuitively the advantages and accuracy of the deep learning method in autonomous drone navigation, Figure 6 shows the mean values (upper portion of the graph) and minimum values (lower portion of the graph) of the altitude sensor readings and the predicted navigation altitudes. The mean values show the uniformity of the distribution of the UAV height predictions, while the minimum values show the degree of deviation of the predicted heights. Considering first the mean, the errors of the predicted values are distributed on both sides of the mean curve at a relatively small distance from it. This shows that the model and hyperparameters selected in this article are generally satisfactory for autonomous drone navigation: the maximum deviation is only within 3%, and the smallest deviation is even less than 1%, which is a reliable result. From the perspective of the extreme values, the errors of the sensor data are likewise distributed on both sides of the extreme-value curve at a relatively small distance, so the autonomous navigation mode again achieves good results: the maximum extreme-value deviation is only about 3%, and the smallest is within 1%. Figure 7 shows the flight altitude collected by the UAV's altitude sensor together with the predicted altitude. In general, the actual flight height of the UAV agrees well with the predicted flight height, and the gap between each pair of data is relatively small. Even where the altitude gradient changes sharply, the predicted flight altitude agrees well with the values collected by the altitude sensor.

Figure 8 shows the contour map of the error distribution of the image information captured by the camera carried by the UAV. This image information comes from the image values along the UAV's forward flight path, matrixed according to the coordinate information. In Figure 8, the blue area represents an error within 1%, the green area an error between 1% and 2%, and the yellow area an error greater than 2% but less than 3%. It can be seen that the prediction error in the middle area of the cabin is relatively small, possibly because the UAV flight is relatively stable there and the cabin structure allows accurate image acquisition. The prediction error at the edge of the ship is relatively large but still within an acceptable range. In general, the real data collected by the drone's sensors agree well with the predicted data: most of the prediction errors of the flight image information are within 1%, and only some errors fall between 1% and 2%.

5. Summary of Research on Autonomous Navigation of UAV

UAVs can replace humans in performing dangerous tasks and have broad development prospects in military and civil fields. Current UAVs mainly rely on inertial navigation or satellite-signal-based navigation modes, which limits autonomous UAV flight in areas where signals are shielded. With the continuous progress of sensor technology and the application of intelligent algorithms, it is now possible to realize autonomous flight and obstacle avoidance of UAVs.

This article combines the angle sensor, height sensor, and camera carried by the UAV to obtain more data for its flight. At the same time, combining the advantages of the convolutional neural network in image processing with those of the long short-term memory network in temporal feature processing, the autonomous navigation performance of the UAV is predicted and analyzed. In general, the deep learning method proposed in this study is suitable for the angle, height, and position prediction required for sensor-based autonomous UAV navigation. The predicted flight angle, height, and position details agree well with the values collected by the sensors and camera. The flight angle and altitude errors are controlled within 3%, a reliable error range for UAV autonomous navigation prediction. The deep-learning-based autonomous navigation method for quadrotor UAVs will help to broaden the application fields and scope of UAVs.

Data Availability

The data used in this article are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding this study.

Acknowledgments

This work was supported by the Special Scientific Research Fund of Civil Aviation Flight University of China: Research on key elements of safe operation of medium and large regional logistics UAV under new business state (No.: ZX2021-03), CAAC Security Capacity Building Project: Research on key technologies and rules of integrated operation of UAV and UAV in low altitude airspace, and General projects of Civil Aviation Flight University of China: Research on UAV visual navigation method based on deep learning.