Abstract

Air defense systems are designed to neutralize aerial threats efficiently and are a fundamental part of any country because they provide national security. This study presents the development of an autonomous air defense system (AADS) that automatically detects and targets aerial threats (e.g., drones) without any human intervention. The AADS is implemented using a radar, a camera, and a laser gun. The radar system continuously emits microwaves and detects moving objects around it; if it senses the Doppler frequency of a moving object, it triggers the camera system. On receiving the radar's signal, the camera uses a neural network algorithm to detect and classify the object and decide whether it is a threat. The laser gun locks onto its target if the live video feed classifies an object as a threat with more than 75% confidence. In the detection stage, an average loss of 0.184961 was achieved using YOLOv3 and 0.155 using the Faster-RCNN. This system eliminates human error in threat detection within a region and improves national safety.

1. Introduction

The autonomous air defense system (AADS) is an integrated defensive unit capable of detecting aerial threats while reducing the manpower needed to defend the country. Currently, the Bangladesh military uses tanks, armored vehicles, artillery, and rocket projectors for defense. These require extensive human resources and training, which makes them costly and inefficient. At present, military personnel face several problems due to the rigidity of the existing defense systems. Most importantly, tanks, artillery pieces, and similar machines each require around three to four people to operate [1], which is expensive in training and inefficient in use. Additionally, the people who operate these machines can be subject to many disturbances depending on the area, and aerial threats are harder to target since they are moving objects. As a result, the chance of human error while engaging targets may increase to a great extent. Finally, to cope with the latest electronic warfare, an autonomous air defense system is a must for national security.

We need this system to compete with global defense industries, as similar systems are already deployed in other countries. It can also be exported to other countries to earn foreign exchange, and it can be deployed on frigates, tanks, towers, or even aircraft to block incoming strikes.

According to Figure 1, the number of internal and internationalized conflicts has been increasing [2]. Battle-related deaths dipped around 2010 but rose sharply around 2016 [2]. An internal conflict is regarded as internationalized if one or more third-party governments are involved with combat personnel supporting the objective of either side [2]. The research also shows that countries keep increasing the number of troops sent to other countries to fight in such conflicts. This underlines the urgency for Bangladesh to find efficient ways to prevent these conflicts [2] and the importance of an automated system to handle the situation.

The study is organized as follows: Section 2 describes the existing works related to the AADS. Section 3 presents the methods, including the procedures and algorithms for training and detection. Section 4 describes the results and analysis, and Section 5 presents the conclusion.

2. Related Works

At the moment, all defense systems keep manual control over their operation. Even though many sensors and radars are now used for detection, there are few implementations of systems in which cameras locate targets and fire at them automatically through image processing.

In [3], thermal cameras and acoustic sensors are used to detect and track drones automatically. Sensor fusion makes the system more robust and avoids false detections, but no deep learning methods were used. The study by Unlu et al. [4] presents a YOLO convolutional neural network-based autonomous drone surveillance and tracking architecture, where thermal images are used to classify objects into four categories (drone, bird, plane, and background). In [5], synthetic radar data and real image data are used to track a moving target, and tracking performance is improved using data fusion and agile edge processing. In [6], a PTZ (pan, tilt, zoom) camera with optical and thermal sensors is used to detect boats, with YOLOv3 trained on the COCO dataset. In [7], YOLOv3 and Faster-RCNN are used to detect power transmission towers; the study indicates that Faster-RCNN has better detection performance, while YOLOv3 has better detection speed and can be used for real-time object detection [7]. In [8], a dual-camera setup is used to develop a multiple-target zooming system, combining a wide-view camera with an ultrafast mirror-drive pan-tilt camera [8].

In [9], the turrets of tanks can be controlled via eye movements and blinking: images of the eyes serve as control inputs, so four operators are no longer needed and the system is easier to control. In [10], camera surveillance is used to look for targets, and remote controls are used to engage the threats; the system relies heavily on kinematics. In [11], automatic detection of the target using a camera at the gun's point of aim is proposed, applying image processing techniques to detect distant targets in the pictures the gun points to. In [12], a very similar system is proposed for automatically detecting missiles using image processing and targeting aerial threats. The study by Anwar et al. [13] is the most recent work proposing an automatic targeting system for gun turrets using deep learning methods.

However, the last proposed system has no automatic detection of nearby objects; the camera must still be moved manually toward the target for detection. Our system instead uses a radar that detects any nearby moving object, after which the camera automatically points toward the target and verifies whether it is a drone or not.

The authors in [14–16] illustrate that the risks around secure data replicas in the distributed management of identity and authorization policies in smart city applications can be mitigated by blockchain technology. In [15], a novel algorithm using synergetic neural networks is proposed to ensure the robustness and security of digital image watermarking. In [17], an innovative secure infrastructure combining the Internet of Things (IoT) with cloud computing, operating over a wireless-mobile 6G network, is proposed for managing big data in smart buildings. The authors in [18, 19] focus on the security architecture of the IoT, where radio frequency identification (RFID) and wireless sensor networks (WSN) can be enabling factors in IoT development. In [20], a multifeature fusion paradigm for images is presented, which helps describe image patterns more clearly. The study in [21] shows that if digital fingerprint image quality degrades below a certain level, fingerprint recognition accuracy decreases; unlike fingerprint recognition accuracy, the object detection accuracy of YOLOv3 depends on how much of a particular image the object covers. In [22], a four-image encryption scheme is proposed as an image encryption technique that can protect users' privacy on online platforms such as cloud computing or social networks.

This study focuses on building an autonomous air defense system that combines deep learning methods with a low-cost camera and microwave sensors. The performance of the Faster-RCNN and YOLOv3 is compared, with real-time detection speed given the highest priority, and microwave radar sensors are used for the early detection of drones.

3. Methodology

The AADS is a system that provides safety from incoming aerial threats (e.g., drones) by locking onto targets. The device detects and classifies incoming aerial intruders through image processing, locks multiple targets at a time, and shoots nearby threats sequentially. The AADS covers 360 degrees, works at short range (20 meters), and can also serve as the safety system of a particular area or building.

3.1. Procedure

First, a radar system is built from the radar modules to give the system early warning and direction; this custom radar can detect aerial movement. After a detection, the camera and gun/laser are activated and turned in that direction. The device classifies and detects only drones and will not point at birds, humans, or other natural beings. It checks whether the classified object is a drone; once the drone is detected and classified, the target is locked. Finally, a laser gun points at/fires on the moving drone. The operation of the AADS is shown as a flowchart in Figure 2; a schematic sketch of this loop follows.
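As a rough illustration of the flowchart, the following minimal Python sketch outlines the control loop; the helper names (radar_detects_motion, classify_frame, aim_and_lock) are hypothetical placeholders, not the authors' implementation.

```python
import time

def radar_detects_motion():
    """Placeholder: poll the RCWL-0516/HB100 pair for a moving object."""
    ...

def classify_frame():
    """Placeholder: run YOLOv3 on a camera frame, return drone confidence."""
    ...

def aim_and_lock():
    """Placeholder: rotate the pan-tilt servos toward the detection and lock."""
    ...

THREAT_THRESHOLD = 0.75   # engage only above 75% confidence

while True:
    if radar_detects_motion():            # early-warning radar stage
        confidence = classify_frame()     # camera + YOLOv3 verification stage
        if confidence and confidence > THREAT_THRESHOLD:
            aim_and_lock()                # servo tracking + laser lock
    time.sleep(0.05)                      # idle briefly between radar polls
```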

We used an HB100 X-band microwave sensor and an RCWL-0516 microwave radar sensor module, as shown in Figure 3. The RCWL-0516 module only indicates whether a moving object has been detected, whereas we also needed the frequency of moving objects; the HB100 provides that Doppler frequency.

A Pi camera is used for drone detection and classification; it captures the frames used for image processing to identify whether an object is a drone or not. Two servo motors move the camera and laser, one rotating horizontally and the other vertically, so the coverage area of the AADS is 360 degrees (a minimal servo-control sketch follows).
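The sketch below drives the two servos from a Raspberry Pi, assuming SG90-class hobby servos and the standard RPi.GPIO PWM interface; the BCM pin numbers and pulse mapping are our assumptions, not taken from the paper.

```python
import time
import RPi.GPIO as GPIO

PAN_PIN, TILT_PIN = 17, 18        # assumed BCM pins for the two servos

GPIO.setmode(GPIO.BCM)
GPIO.setup(PAN_PIN, GPIO.OUT)
GPIO.setup(TILT_PIN, GPIO.OUT)

pan = GPIO.PWM(PAN_PIN, 50)       # hobby servos expect 50 Hz PWM
tilt = GPIO.PWM(TILT_PIN, 50)
pan.start(0)
tilt.start(0)

def set_angle(pwm, angle):
    """Map 0-180 degrees to a 2.5-12.5% duty cycle (1-2 ms pulse)."""
    pwm.ChangeDutyCycle(2.5 + angle / 18.0)
    time.sleep(0.3)               # give the servo time to reach the position
    pwm.ChangeDutyCycle(0)        # stop pulsing to reduce jitter

set_angle(pan, 90)    # center the head horizontally
set_angle(tilt, 45)   # raise the camera/laser head
GPIO.cleanup()
```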

3.2. YOLO (You Only Look Once) Algorithm

The AADS uses the YOLOv3 algorithm for drone detection and classification; its architecture is shown in Figure 4. For feature extraction, we used Darknet-53, which has 53 convolutional layers and is an improved version of the Darknet-19 used in YOLOv2. YOLOv3 uses 1 × 1 and 3 × 3 convolutional layers. It first resizes the image, then runs a convolutional neural network (CNN) on it, and finally thresholds the resulting detections by the confidence of the model [24]. YOLOv3 also uses batch normalization, which normalizes the inputs within the deep network [23], and stride-2 convolutions. In a filter size written as 3 × 3/2, the "/2" denotes a stride of 2, which halves the spatial size of the input; for example, a 256 × 256 input produces a 128 × 128 output.

For bounding box prediction, the network predicts 4 coordinates for each bounding box: $t_x$, $t_y$, $t_w$, and $t_h$. The cell containing the bounding box's center is offset from the top left corner of the image by $(c_x, c_y)$ [23], and the width and height of the bounding box prior, calculated by k-means, are $p_w$ and $p_h$. The actual coordinates of the predicted bounding box, $(b_x, b_y, b_w, b_h)$, can be determined using the formula:

$$b_x = \sigma(t_x) + c_x,\qquad b_y = \sigma(t_y) + c_y,\qquad b_w = p_w\,e^{t_w},\qquad b_h = p_h\,e^{t_h}. \tag{1}$$
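The decoding step can be written directly from equation (1); the following worked example simply mirrors the symbols above with illustrative values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Apply equation (1): raw predictions plus grid-cell offset (cx, cy)
    and k-means anchor prior (pw, ph) give the actual box coordinates."""
    bx = sigmoid(tx) + cx     # box center x, offset into the grid cell
    by = sigmoid(ty) + cy     # box center y
    bw = pw * math.exp(tw)    # width scales the anchor prior
    bh = ph * math.exp(th)    # height scales the anchor prior
    return bx, by, bw, bh

# Illustrative values only
print(decode_box(0.2, -0.1, 0.4, 0.3, cx=6, cy=4, pw=3.5, ph=2.0))
```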

3.3. Faster-RCNN

The Faster-RCNN [25] is a state-of-the-art object detection algorithm based on deep neural networks. In recent years, it has been widely used because of its efficiency, short testing time, and strong performance. For the AADS, the Faster-RCNN is also evaluated for fast real-time detection of drones. The Faster-RCNN improves on the Fast-RCNN [26], whose computational bottleneck was the region proposal step; it replaces that step with a learned region proposal network (RPN). The RPN is the first stage of the detector [27]: it finds regions likely to contain an object, also known as regions of interest (ROI), after which features are extracted using a convolutional neural network (CNN) backbone such as VGG or ResNet. The architecture used in this system for detecting drones is the Faster-RCNN ResNet-50 FPN, consisting of 50 layers [28, 29]; Figure 5 shows the architecture. The ROI pooling layer feeds the classification stage: it takes the regions of interest and the convolutional features as input and generates the bounding boxes of objects along with their class names.
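For reference, the same Faster-RCNN ResNet-50 FPN architecture is available as a reference model in torchvision (≥ 0.13); the sketch below is our illustration of inference with it, not the authors' training code, and the image file name is a placeholder.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Reference Faster-RCNN ResNet-50 FPN model with pretrained COCO weights
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = to_tensor(Image.open("drone.jpg").convert("RGB"))  # placeholder image
with torch.no_grad():
    pred = model([img])[0]          # dict of boxes, labels, scores

keep = pred["scores"] > 0.75        # same 75% threshold used for target lock
print(pred["boxes"][keep], pred["scores"][keep])
```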

3.4. Training and Detection

The drone dataset consists of 1359 images collected from a Kaggle dataset [30]. The images were labeled in the ".txt" YOLO format using the "labelImg" tool for the YOLOv3 algorithm. First, a pretrained COCO model covering the 80 built-in classes was fetched; then a custom object detector for drone detection was trained in Google Colab. This custom detector model was built for one class (drone), and 2000 training iterations were completed for the drone class (an inference sketch with the resulting weights follows).
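The trained Darknet weights can be run with OpenCV's dnn module; in the sketch below, the cfg/weights file names are placeholders for the artifacts produced by training, and the 0.75 threshold matches the 75% lock criterion.

```python
import cv2

# Placeholder file names for the trained custom detector
net = cv2.dnn.readNetFromDarknet("yolov3-drone.cfg", "yolov3-drone.weights")
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("test_drone.jpg")               # placeholder test image
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

h, w = frame.shape[:2]
for out in outputs:
    for det in out:                     # det = [x, y, w, h, objectness, classes...]
        score = det[5:].max() * det[4]  # class confidence * objectness
        if score > 0.75:                # the 75% lock threshold
            cx, cy = int(det[0] * w), int(det[1] * h)
            print(f"drone at ({cx}, {cy}) with confidence {score:.2f}")
```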

4. Results and Analysis

The AADS was tested on 20% of the total image data, completely separate from the training dataset. We achieved a low average loss of 0.184961 for our custom object (drone) detector using the YOLOv3 algorithm. Here, loss measures how bad the predictions are: a high loss causes prediction errors, while a loss of exactly zero indicates overfitting, so the loss should be close to zero without reaching it. Google Colab was used to train the model, and the loss curve was generated dynamically during training. The total loss chart for the YOLOv3 algorithm is shown in Figure 6: the loss started at 5 and, after 2000 iterations, dropped to approximately 0.1.

The Faster-RCNN algorithm was also applied to our dataset to compare its results with YOLOv3. Again, the AADS was tested on the held-out 20% of the image data and achieved a low average loss of 0.155 for the custom object (drone) detector; the loss started at 0.151 and ended at 0.155. The total loss chart is shown in Figure 7.

From Figures 6 and 7, we can see that the Faster-RCNN has better detection performance than YOLOv3, but YOLOv3 is faster on real-time moving objects; therefore, we chose the YOLOv3 model for the AADS over the Faster-RCNN. The AADS was able to detect and classify drones and lock the target successfully, as shown in Figure 8. The servo rotation along the X-axis and Y-axis was also tested: the AADS could move in any direction using the servos and track drones moving toward it. The custom drone detector model, tested on the test image data, classifies drones successfully in Figure 8.

The two radar modules, HB100 and RCWL-0516, detect motion using the Doppler radar principle [31, 32] but have different detection distances: the HB100 can sense objects at 2–16 meters and the RCWL-0516 at 5–7 meters [31, 32]. The two modules are used in parallel to obtain reliable data about a moving object: when an object passes within range, the RCWL-0516 sets a Boolean variable TRUE, and the HB100 measures the distance and velocity of the target. These values are generated by the Doppler equation

$$F_d = \frac{2 V F_t \cos\theta}{c}, \tag{2}$$

where $F_d$ is the Doppler frequency, $V$ is the velocity of the target, $F_t$ is the transmitting frequency, $c$ is the speed of light ($3 \times 10^8$ m/sec), and $\theta$ is the angle between the direction of the moving target and the axis of the module [33]. The transmit frequency $F_t$ of the HB100 is 10.525 GHz. We calculated the speed from the frequency received from the HB100 using equations (3) and (4):

$$V = \frac{F_d\, c}{2 F_t \cos\theta}, \tag{3}$$

$$V\ (\text{km/h}) \approx \frac{F_d}{19.49} \quad (F_t = 10.525\ \text{GHz},\ \theta \approx 0). \tag{4}$$

The radar system also includes a simple filter to work efficiently: it ignores the output if the object's velocity is between 20 and 30 mph, the average velocity of birds [34] (see the sketch below).
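A small sketch of the speed computation and bird filter, following equations (3) and (4); the example Doppler frequency is illustrative, and $\theta \approx 0$ (target moving along the module's axis) is assumed.

```python
C = 3e8            # speed of light, m/s
FT = 10.525e9      # HB100 transmit frequency, Hz

def speed_from_doppler(fd_hz, cos_theta=1.0):
    """Equation (3): V = Fd * c / (2 * Ft * cos(theta)), in m/s."""
    return fd_hz * C / (2 * FT * cos_theta)

def is_probable_bird(speed_ms):
    """Ignore returns in the 20-30 mph band, the average speed of birds."""
    mph = speed_ms * 2.23694
    return 20.0 <= mph <= 30.0

fd = 700.0                           # illustrative Doppler shift, Hz
v = speed_from_doppler(fd)           # about 10 m/s (~22 mph)
print(f"{v:.2f} m/s, bird? {is_probable_bird(v)}")
```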

The RCWL-0516 output only signals movement: for any object moving within its range (5–9 m), the module easily detects the motion. Figure 9 shows the output received from the RCWL-0516 microwave radar sensor module.

Similarly, from the HB100 X-band microwave sensor, we obtained the detected drone's output and the measured Doppler frequency; Figure 10 shows the output received from the HB100 radar module. Equations (3) and (4) are used to calculate the speed of the moving drones.

The purpose of the radar modules (RCWL-0516 and HB100) is to provide early warning of moving aerial threats. In the AADS, the radar receives signals from moving objects, and only then do the camera and laser gun start functioning; the camera then classifies the object as a drone or not. Because the camera and laser are activated only after a positive signal from the radar system, the AADS does not need to keep them on all the time. The radar modules we used are very cheap and draw far less power than the camera and laser, so the AADS can provide round-the-clock security at low power and cost; keeping the camera off while idle also extends the life span of the AADS (a sketch of this gating follows). Table 1 provides a comparison with other articles.
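As a sketch of this power gating, the RCWL-0516 digital output can wake the camera stage as below; the pin number and helper function are our assumptions, not taken from the paper.

```python
import time
import RPi.GPIO as GPIO

RADAR_PIN = 23                      # assumed BCM pin for the RCWL-0516 output

GPIO.setmode(GPIO.BCM)
GPIO.setup(RADAR_PIN, GPIO.IN)

def run_camera_stage():
    """Placeholder: power up the camera/laser and run YOLOv3 detection."""
    ...

try:
    while True:
        if GPIO.input(RADAR_PIN):   # RCWL-0516 drives its output HIGH on motion
            run_camera_stage()
        time.sleep(0.1)             # camera stays off while the radar is quiet
finally:
    GPIO.cleanup()
```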

The AADS was built with a pan-tilt kit. The laser and camera were mounted on top of the frame and connected to a Y-axis rotating servo (horizontal); the X-axis servo was mounted vertically, allowing the upper portion of the AADS to move right or left. The radar modules were connected to a Raspberry Pi. The assembled AADS is shown in Figure 9.

Some drones are built using electric motors and plastic. A thermal camera could have been used for detection in the AADS, as shown in Figures 11 and 12, but it would not be effective for detecting plastic drones. The growing capability of small drones is alarming: they are now integrated with cameras, infrared detectors, thermal detectors, and many smart sensors and are used for surveillance and bombing, making privacy difficult to maintain in the drone-filled age; competitors, thieves, or even just neighbors could be spying on every move using a remote-controlled flying camera. All such problems can be prevented using the AADS, which can detect the smallest drones within its range and serve as a strong defensive unit against surveillance drones. Drones coated with plastic and stealthy materials are difficult for traditional radio-wave radars to detect, especially when a drone is programmed to fly without radio uplinks and downlinks. Since we use a microwave Doppler radar, detecting a movable object is the first task, and the camera then determines whether it is a drone; the old-fashioned Doppler radar is, in fact, more effective against these stealth drones. Therefore, all kinds of mini drones, plastic drones, and drones coated with other materials (e.g., artificial leather) can be detected easily using this system.

5. Conclusion

The AADS is a modern technology that Bangladesh greatly needs because of several efficiency and training problems. Recently, the Bangladesh army tested its newly imported Swiss air defense system, the Oerlikon Twin Gun GDF009 [35]. That system resembles the AADS prototype in some respects, such as hardware design, but its electronic components are costly and it is not fully automated, since four people are needed to operate it. The cost of the AADS is only around 25,000 BDT, which is very low compared to existing systems such as the Oerlikon Twin Gun because of the availability of parts and cheap Doppler modules.

To overcome these problems and gain efficiency, the AADS prototype was developed to compete with global defense industries. Our AADS can be deployed to ensure national security and secure restricted areas against invasion. Economically, this system can save military expenses and be exported to earn substantial foreign currency. It is also quite versatile: it can be deployed on platforms such as frigates, tanks, and towers and can predict the position of moving objects such as drones and planes. Most importantly, the AADS prototype was developed with the latest technologies of this century. We used the YOLOv3 algorithm, one of the fastest detection algorithms, for detecting the target, and it proved more efficient than the Faster-RCNN for detecting real-time moving objects. The AADS is smarter than traditional air defense guns because of its autonomous operation, portability, and accuracy. We built this prototype version on a low budget and hope to develop the full AADS with more innovative features in the future.

Data Availability

The data used to support the findings of this study are freely available at https://www.kaggle.com/dasmehdixtr/drone-dataset-uav.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank Taif University, Taif, Saudi Arabia, for its support under Taif University Researchers Supporting Project number TURSP-2020/73.