Security and Communication Networks

Special Issue: Application-Aware Multimedia Security Techniques

Research Article | Open Access

Volume 2021 |Article ID 9984453 | https://doi.org/10.1155/2021/9984453

Fazle Rabby Khan, Md. Muhabullah, Roksana Islam, Mohammad Monirujjaman Khan, Mehedi Masud, Sultan Aljahdali, Avinash Kaur, Parminder Singh, "A Cost-Efficient Autonomous Air Defense System for National Security", Security and Communication Networks, vol. 2021, Article ID 9984453, 10 pages, 2021. https://doi.org/10.1155/2021/9984453

A Cost-Efficient Autonomous Air Defense System for National Security

Academic Editor: Jialiang Peng
Received 22 Mar 2021
Revised 16 May 2021
Accepted 18 Jun 2021
Published 25 Jun 2021

Abstract

Air defense systems are designed to counter threats efficiently and are a fundamental part of any country's national security. This study presents the development of an autonomous air defense system (AADS) that automatically detects aerial threats (e.g., drones) and targets them without any human intervention. The AADS is implemented using a radar, a camera, and a laser gun. The radar system continuously emits microwaves and detects moving objects around it; if it senses the frequency signature of a possible aerial threat, it triggers the camera system. On receiving the radar's signal, the camera uses a neural network algorithm to decide whether the object is a threat; neural networks are used for both detection and classification. The laser gun locks its target when the live video feed classifies an object as a threat with more than 75% confidence. In the detection stage, an average loss of 0.184961 was achieved using YOLOv3 and 0.155 using the Faster-RCNN. This system will ensure that no human errors are made while detecting threats in a region, improving national safety.

1. Introduction

The autonomous air defense system (AADS) is an integrated defensive unit capable of detecting aerial threats while reducing the manpower needed to defend the country. Currently, the Bangladesh military uses tanks, armored vehicles, artillery, and rocket projectors for defense. These require extensive human resources and training, making them costly and inefficient. At present, military personnel face several problems due to the existing defense system's rigidity. Most importantly, tanks, artillery, and similar machines require around three to four people to operate [1], which makes them expensive to train for and inefficient to run. Additionally, the operators of these machines can be affected by many disturbances depending on the area, and aerial threats are harder to target since they are moving objects. As a result, the chance of human error while engaging targets may increase to a great extent. Finally, to cope with modern electronic warfare, an autonomous air defense system is a must for national security.

We need this system to compete with global defense industries, as similar systems are already being deployed in other countries. It can also be exported to earn foreign exchange, and it can be deployed on frigates, tanks, towers, or even aircraft to block incoming strikes.

According to Figure 1, the number of internal and internationalized conflicts has been increasing [2]. Battle-related deaths dipped around 2010 but rose sharply around 2016 [2]. An internal conflict is regarded as internationalized if one or more third-party governments are involved with combat personnel supporting the objective of either side [2]. The research also shows that countries keep increasing the number of troops sent abroad to fight in conflicts. This underlines the urgency for Bangladesh to find efficient ways to prevent such conflicts [2] and the importance of an automated system to handle the situation.

The study is organized as follows: Section 2 reviews existing work related to the AADS; Section 3 presents the methods, including the procedures and algorithms for training and detection; Section 4 presents the results and analysis; and Section 5 concludes.

2. Related Works

At the moment, all defense systems keep manual control over their operation. Even though many sensors and radars are now used for detection, few systems use cameras to locate targets and fire at them automatically through image processing.

In [3], thermal cameras and acoustic sensors are used to detect and track drones automatically; sensor fusion makes the system more robust and avoids false detections, but no deep learning methods are used. The study by Unlu et al. [4] presents a YOLO convolutional neural network-based autonomous drone surveillance and tracking architecture in which thermal images are classified into four categories (drone, bird, plane, and background). In [5], synthetic radar data and real image data are used to track a moving target, with tracking performance improved by data fusion and agile edge processing. In [6], a PTZ (pan, tilt, zoom) camera with optical and thermal sensors is used to detect boats, applying YOLOv3 trained on the COCO dataset. In [7], YOLOv3 and Faster-RCNN are used to detect power transmission towers; that study indicates that the Faster-RCNN has better detection performance, while YOLOv3 has better detection speed and therefore suits real-time object detection [7]. In [9], a dual-camera setup comprising a wide-view camera and an ultrafast mirror-drive pan-tilt camera is used to build a multiple-target zooming system.

In [8], turrets on tanks are controlled via eye movements and blinking: images of the eyes serve as the controls, so four operators are no longer needed and the system is easier to control. In [10], camera surveillance is used to find targets, and remote controls are used to engage them; the system relies heavily on kinematics. In [11], automatic detection of the target using a camera at the gun's point is proposed, with image processing techniques used to detect distant targets in the pictures the gun points at. In [12], a very similar system for automatically detecting missiles by image processing and targeting aerial threats is proposed. The study by Anwar et al. [13] is the most recent work proposing an automatic targeting system for gun turrets using deep learning methods.

The most recently proposed system still lacks automatic detection of nearby objects: the camera must be moved toward the target manually before detection. Our system adds a radar that detects any nearby moving object, after which the camera automatically points toward the target and verifies whether it is a drone.

The authors in [14, 16] illustrate that blockchain technology can secure data replicas in the distributed management of identity and authorization policies for smart city applications. In [15], a novel algorithm using synergetic neural networks is proposed to ensure the robustness and security of digital image watermarking. In [17], an innovative secure infrastructure combining the Internet of Things (IoT) with cloud computing, operating over a wireless-mobile 6G network, is proposed for managing big data in smart buildings. The authors in [18, 19] focus on the security architecture of the IoT; radio frequency identification (RFID) and wireless sensor networks (WSN) can be enabling factors in IoT development. In [20], a multifeature fusion paradigm for images is presented that helps describe image patterns more clearly. The study in [21] shows that when digital fingerprint image quality degrades below a certain level, fingerprint recognition accuracy decreases; in contrast, the object detection accuracy of YOLOv3 depends on how much of the image area the object covers. In [22], a four-image encryption scheme is proposed as an image encryption technique that can protect users' privacy on online platforms such as cloud computing or social networks.

This study focuses on building an autonomous air defense system that combines deep learning methods with a cheap camera and microwave sensors. The performance of the Faster-RCNN and YOLOv3 is compared, with real-time detection speed given the highest priority, and microwave radar sensors are used for early detection of drones.

3. Methodology

The AADS provides safety from incoming aerial threats (e.g., drones) by locking onto targets. The device detects and classifies incoming aerial intruders by image processing, locks multiple targets at a time, and shoots nearby threats sequentially. The AADS covers a full 360 degrees. It works at short range (20 meters) and can also serve as a safety system for a particular area or building.

3.1. Procedure

First, a radar system built from the radar modules gives the system an early warning and the direction of the threat; this custom radar system can detect aerial movement. After detection, the camera and gun/laser are activated and turned in that direction. The device detects and classifies only drones and will not point at birds, humans, or other natural beings. The device checks whether the classified object is a drone; once a drone is detected and classified, the target is locked. Finally, a laser gun points at and fires on the moving drone. The operation of the AADS is shown as a flowchart in Figure 2.
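The decision flow in Figure 2 can be sketched as a single polling step; the names below (`radar_bearing`, `classify_frame`, the returned action strings) are hypothetical placeholders standing in for the actual hardware drivers and YOLOv3 inference, not the authors' code:

```python
THREAT_CONFIDENCE = 0.75  # engage only above 75% confidence, as in the abstract

def defense_step(radar_bearing, classify_frame):
    """One pass of the AADS decision flow (Figure 2).

    radar_bearing: direction of a detected moving object, or None if the
    radar is quiet. classify_frame: callable returning (label, confidence)
    for the frame captured after the turret points at radar_bearing.
    """
    if radar_bearing is None:
        return "idle"              # no radar hit: camera and laser stay off
    label, confidence = classify_frame(radar_bearing)
    if label == "drone" and confidence > THREAT_CONFIDENCE:
        return "lock_and_fire"     # lock the target and engage the laser
    return "stand_down"            # not a drone: ignore birds, humans, etc.
```

In a deployment this step would run in a loop, with the turret turned toward `radar_bearing` before the frame is captured.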

We used an HB100 X-band microwave sensor and an RCWL-0516 microwave radar sensor module, as shown in Figure 3. The RCWL-0516 module only reports whether an object has been detected, but we also needed a module that measures the frequency of moving objects; the HB100 provides this Doppler frequency.

A Raspberry Pi camera is used for drone detection and classification: it captures the pictures for image processing and identifies whether an object is a drone. Two servo motors move the camera and laser, one rotating horizontally and the other vertically, so the coverage of the AADS is a full 360 degrees.
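Centering a detection with the pan and tilt servos amounts to converting the detection's pixel offset from the frame center into angles. A minimal sketch, assuming a 640 × 480 frame and the nominal field of view of a Pi camera module (the exact values depend on the camera used):

```python
FRAME_W, FRAME_H = 640, 480          # assumed capture resolution
FOV_H_DEG, FOV_V_DEG = 62.2, 48.8    # nominal Pi camera v2 field of view

def servo_correction(cx, cy):
    """Degrees to move the pan (horizontal) and tilt (vertical) servos so
    that the bounding-box center (cx, cy) lands on the frame center."""
    pan = (cx - FRAME_W / 2) / FRAME_W * FOV_H_DEG
    tilt = (cy - FRAME_H / 2) / FRAME_H * FOV_V_DEG
    return pan, tilt
```

A detection already at the frame center yields a zero correction; one at the right edge yields a positive pan of half the horizontal field of view.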

3.2. YOLO (You Only Look Once) Algorithm

The AADS uses the YOLOv3 algorithm for drone detection and classification; its architecture is shown in Figure 4. For feature extraction, we use Darknet-53, which has 53 convolutional layers and is an improvement over YOLOv2's Darknet-19. YOLOv3 uses 1 × 1 and 3 × 3 convolutional layers. It first resizes the image, then runs a convolutional neural network (CNN) over it, and finally thresholds the resulting detections by the model's confidence [24]. YOLOv3 also uses batch normalization, which normalizes the inputs within the deep network [23], and stride-2 convolutions. In the filter notation 3 × 3/2, the "/2" denotes a stride of 2, which halves the spatial size of the input: for example, a 256 × 256 input produces a 128 × 128 output.
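The halving by a 3 × 3/2 layer follows from the standard convolution output-size formula; a quick check (assuming the usual padding of 1 for a 3 × 3 kernel):

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    """Spatial output size of a convolution:
    floor((size + 2*pad - kernel) / stride) + 1.
    A 3x3 kernel with stride 2 and padding 1 halves the feature map."""
    return (size + 2 * pad - kernel) // stride + 1
```

So `conv_out(256)` gives 128, and applying it again gives 64, matching the successive downsampling stages of Darknet-53.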

For bounding box prediction, the network predicts 4 coordinates for each bounding box: tx, ty, tw, and th. The cell containing the bounding box's center is offset from the top left corner of the image by (cx, cy) [23], and the bounding box prior obtained by k-means has width pw and height ph. The actual coordinates (bx, by, bw, bh) of the predicted bounding box are then determined by the formulas

bx = σ(tx) + cx
by = σ(ty) + cy
bw = pw · e^tw
bh = ph · e^th

where σ is the sigmoid function, which keeps the predicted center inside its grid cell.
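The decoding formulas above can be written directly as a small function:

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """YOLOv3 bounding-box decoding [23]: the sigmoid keeps the center
    inside the grid cell at offset (cx, cy); the k-means priors (pw, ph)
    are scaled by the exponential of the raw width/height outputs."""
    sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh
```

With all raw outputs at zero, the box sits at the center of its cell with exactly the prior's width and height.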

3.3. Faster-RCNN

The Faster-RCNN [25] is a state-of-the-art object detection algorithm based on deep neural networks. In recent years, it has been widely used because of its efficiency, short testing time, and good performance. For the AADS, the Faster-RCNN was also evaluated for fast, real-time detection of drones. The Faster-RCNN improves on the Fast-RCNN [26], whose computational bottleneck was region proposal, by replacing the slow handcrafted proposal step with a learned region proposal network (RPN). The RPN is the first stage of this two-stage R-CNN family of detectors [27]; it finds candidate object regions, also known as regions of interest (ROI). Features are extracted using a convolutional neural network (CNN) backbone such as VGG or ResNet. The architecture used in this system for detecting drones is the Faster-RCNN ResNet-50 FPN, whose backbone has 50 layers [28, 29]. Figure 5 shows the Faster-RCNN architecture. The ROI pooling layer feeds the classification stage: it takes the regions of interest and the convolutional features as input and produces a bounding box and a class name for each object.
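The ROI pooling step can be illustrated with a simplified plain-Python sketch: each region, whatever its size, is max-pooled into a fixed grid so the classification head always sees a same-sized input. Real implementations do this on CNN feature maps; this toy version works on a 2-D list and is for illustration only:

```python
def roi_pool(feature, x0, y0, x1, y1, bins=2):
    """Max-pool the region [x0:x1, y0:y1] of a 2-D feature map into a
    fixed bins x bins grid (a simplified ROI pooling sketch)."""
    h, w = y1 - y0, x1 - x0
    out = []
    for by in range(bins):
        row = []
        for bx in range(bins):
            ys = y0 + by * h // bins, y0 + (by + 1) * h // bins
            xs = x0 + bx * w // bins, x0 + (bx + 1) * w // bins
            row.append(max(feature[y][x]
                           for y in range(ys[0], max(ys[1], ys[0] + 1))
                           for x in range(xs[0], max(xs[1], xs[0] + 1))))
        out.append(row)
    return out
```

However large the ROI, the output is always `bins x bins`, which is what lets the fully connected classification layers accept arbitrary proposals.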

3.4. Training and Detection

The drone dataset consists of 1359 images from a Kaggle dataset [30]. These images were labeled in ".txt" format using the "LabelImg" tool for the YOLOv3 algorithm. First, a pretrained COCO model covering the 80 built-in classes was fetched. Then, a custom object detector for drones was trained in Google Colab. This custom detector was built for a single class (drone), and 2000 training iterations were completed for it.
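The 80/20 train/test split used below can be reproduced with a simple seeded shuffle; the file names and seed here are illustrative, not the authors' actual split:

```python
import random

def split_dataset(image_names, test_fraction=0.2, seed=42):
    """Shuffle the labeled images deterministically and hold out a
    fraction for testing, as done for the 1359-image drone dataset."""
    names = sorted(image_names)            # deterministic base order
    random.Random(seed).shuffle(names)
    n_test = int(len(names) * test_fraction)
    return names[n_test:], names[:n_test]  # (train, test)
```

For 1359 images this yields 1088 training and 271 test images, with no overlap between the two sets.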

4. Result and Analysis

The AADS was tested on 20% of the total image data, completely separate from the training dataset. We achieved a low average loss of 0.184961 for our custom object (drone) detector using the YOLOv3 algorithm. Here, loss measures how bad the model's predictions are: a high loss value causes prediction errors, while a loss of exactly zero usually indicates overfitting, so the loss should be close to, but not exactly, zero. Google Colab was used to train the model, and the loss curve was generated dynamically during training. The total loss chart for the YOLOv3 algorithm is shown in Figure 6: the loss starts at about 5 and drops to approximately 0.1 after 2000 iterations.

A Faster-RCNN model was also trained on our dataset to compare its results with YOLOv3. Again, the AADS was tested on the 20% held-out image data, achieving a low average loss of 0.155 for the custom drone detector. The initial loss was 0.151 and the final loss was 0.155; the total loss chart is shown in Figure 7.

Figures 6 and 7 show that the Faster-RCNN has better detection performance than YOLOv3, but YOLOv3 is faster on real-time moving objects. We therefore chose YOLOv3 over the Faster-RCNN for the AADS. The AADS was able to detect and classify drones and lock the target successfully, as shown in Figure 8. The rotation of the servos along the X-axis and Y-axis was also tested: the AADS could move in any direction using the servos and track drones moving toward it. The custom drone detector was tested on the test images and classifies drones successfully, as shown in Figure 8.

The two radar modules, the HB100 and the RCWL-0516, detect motion using the Doppler radar principle [31, 32], but with different detection distances: the HB100 can sense 2–16 meters and the RCWL-0516 can sense 5–7 meters [31, 32]. The two modules were used in parallel to obtain reliable data on moving objects: when an object passes within range, the RCWL-0516 sets a Boolean flag TRUE, and the HB100 measures the distance and velocity of the target. These values follow the Doppler equation

Fd = 2 · V · Ft · cos(θ) / c

where Fd is the Doppler frequency, V is the velocity of the target, Ft is the transmitting frequency, c is the speed of light (3 × 10^8 m/s), and θ is the angle between the moving target's direction and the axis of the module [33]. The transmit frequency Ft of the HB100 is 10.525 GHz, so we calculate the speed from the frequency received from the HB100 by rearranging the equation: V = Fd · c / (2 · Ft · cos(θ)).
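The rearranged Doppler relation translates directly into code:

```python
C = 3.0e8      # speed of light, m/s
FT = 10.525e9  # HB100 transmit frequency, Hz

def doppler_speed(fd_hz, cos_theta=1.0):
    """Target speed in m/s from the measured Doppler shift:
    Fd = 2 * V * Ft * cos(theta) / c  =>  V = Fd * c / (2 * Ft * cos(theta)).
    cos_theta = 1 assumes the target moves along the module's axis."""
    return fd_hz * C / (2 * FT * cos_theta)
```

At 10.525 GHz, a Doppler shift of about 31.4 Hz corresponds to roughly 1 mph (0.447 m/s), so even slow drones produce an easily measurable frequency.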

The radar system is smart enough to work efficiently: it ignores the output if the object's velocity is between 20 and 30 mph, the average velocity range of birds [34].
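This bird-speed filter is a simple gate on the measured speed; a sketch of the logic (the function name and the exact conversion constant are ours, not from the original system):

```python
MPH_PER_MPS = 2.23694          # m/s to mph conversion factor
BIRD_SPEED_MPH = (20.0, 30.0)  # average bird-speed band ignored by the radar [34]

def is_candidate_threat(speed_mps):
    """True when a detection should wake the camera: anything moving
    outside the typical bird-speed band is passed on for classification."""
    mph = speed_mps * MPH_PER_MPS
    return not (BIRD_SPEED_MPH[0] <= mph <= BIRD_SPEED_MPH[1])
```

An object cruising at about 25 mph is dismissed as a likely bird, while slower hovering drones and faster craft are forwarded to the camera.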

The RCWL-0516 output only indicates movement: any object moving within its range (5–7 m) is easily detected. Figure 9 shows the output received from the RCWL-0516 microwave radar sensor module.

Similarly, the HB100 X-band microwave sensor outputs the detected drone's measured Doppler frequency. Figure 10 shows the output received from the HB100 radar module. The Doppler relation above is then used to calculate the speed of the moving drones.

The purpose of these radar modules (RCWL-0516 and HB100) is to get an early warning of moving aerial threats. In the AADS, the radar receives signals from moving objects, and only then do the camera and laser gun start operating. The camera classifies the object, determining whether it is a drone. Because the camera and laser activate only after a positive signal from the radar system, the AADS does not need to keep them on all the time. The radar modules we used are very cheap and consume far less power than the camera and laser, so the AADS can provide round-the-clock security at low power and hence low cost. Since the camera remains off when idle, the lifespan of the AADS also increases. Table 1 provides a comparison with other articles.


Table 1: Comparison with related works.

No. | Name          | Sensors                             | Method                             | Deep learning | Images  | Radar
1   | This study    | Camera, HB100, RCWL-0516            | YOLOv3, Faster-RCNN                | Yes           | RGB     | Yes
2   | Reference [3] | Camera, acoustic                    | Sensor fusion                      | No            | Thermal | —
3   | Reference [4] | Camera                              | YOLO                               | Yes           | Thermal | No
4   | Reference [5] | Camera                              | Data fusion, agile edge processing | No            | RGB     | Synthetic radar data
5   | Reference [6] | Camera, optical and thermal sensors | YOLOv3                             | Yes           | Yes     | No
6   | Reference [7] | Camera                              | YOLOv3, Faster-RCNN                | Yes           | RGB     | No

The AADS was built on a pan-tilt kit. The laser and camera were mounted on top of the frame, connected to the servo that rotates about the Y-axis (horizontal movement); the X-axis servo was mounted vertically, allowing the upper portion of the AADS to turn right or left. The radar modules were connected to the Raspberry Pi. The AADS is shown in Figure 9.

Some drones are built from electric motors and plastic. A thermal camera could have been used for detection in the AADS, as shown in Figures 11 and 12, but it would not be effective for detecting plastic drones. The capability of small drones is increasing at an alarming rate: drones are now fitted with cameras, infrared detectors, thermal detectors, and many smart sensors, and are used for surveillance and bombing. It is difficult to maintain privacy in this drone-filled age, since competitors, thieves, or even neighbors could spy on one's every move using a remote-controlled flying camera. All such problems can be prevented by the AADS, which can detect even the smallest drones within its range and serve as a strong defensive unit against surveillance drones. Drones coated with plastic and stealthy materials are difficult for traditional radio wave radars to detect, and radio-based detection also fails when a drone is programmed to fly without radio uplinks and downlinks. Since the AADS uses microwave Doppler radar, its first task is simply to detect any moving object, after which the camera determines whether it is a drone; this old-fashioned Doppler approach is therefore more effective against such stealth drones. As a result, all kinds of mini drones, plastic drones, and drones coated with other materials (e.g., artificial leather) can be detected easily by this system.

5. Conclusion

The AADS is a modern technology that is much needed in Bangladesh because of the efficiency and training problems of existing systems. Recently, the Bangladesh Army tested its newly imported Swiss air defense system, the Oerlikon Twin Gun GDF009 [35]. That system is similar to the AADS prototype in some respects, such as hardware design, but its electronic components are costly and it is not fully automated, since four people are needed to operate it. The cost of the AADS is only around 25,000 BDT, very low compared with existing systems such as the Oerlikon Twin Gun, thanks to the availability of parts and cheap Doppler modules.

To address these problems and gain efficiency, the AADS prototype was developed to compete with global defense industries. The AADS can be deployed to ensure national security and secure restricted areas against invasion. Economically, this system can save military expenses and be exported to earn substantial foreign currency. It is also quite versatile, as it can be mounted on different platforms such as frigates, tanks, and towers, and it can predict the position of moving objects such as drones and planes. Most importantly, the AADS prototype was developed with the latest technologies of this century: we used the YOLOv3 algorithm, one of the fastest detection algorithms, for target detection, and it proved more efficient than the Faster-RCNN at detecting moving objects in real time. The AADS is smarter than traditional air defense guns because of its autonomy, portability, and accuracy. We built this prototype on a low budget and hope to develop the full AADS with more innovative features in the future.

Data Availability

The data used to support the findings of this study are freely available at https://www.kaggle.com/dasmehdixtr/drone-dataset-uav.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank Taif University, Taif, Saudi Arabia, for its support under Taif University Researchers Supporting Project number TURSP-2020/73.

References

  1. R. G. Vickers, “Gun turrets,” United States Patent Office, 1967. [Online]. Available: https://patentimages.storage.googleapis.com/b9/7c/f9/43f8b6588314d4/US3348451.pdf. View at: Google Scholar
  2. R. Strand and N. Urdal, “Trends in armed conflict, 1946–2018,” Conflict Trends 3-2019, PRIO, Oslo, Norway, 2019. [Online]. Available: https://www.prio.org/utility/DownloadFile.ashx?id=1858&type=publicationfile. View at: Google Scholar
  3. F. Svanstrom, C. Englund, and F. Alonso-Fernandez, “Real-time drone detection and tracking with visible, thermal and acoustic sensors,” in Proceedings of the International Conference on Pattern Recognition (ICPR), Milan, Italy, January 2021. View at: Publisher Site | Google Scholar
  4. E. Unlu, E. Zenou, N. Riviere, and P. E. Dupouy, “An autonomous drone surveillance and tracking architecture,” IPSJ Transactions on Computer Vision and Applications, vol. 11, pp. 1–13, 2019. View at: Publisher Site | Google Scholar
  5. P. Tsiantis, S. A. Purryag, and I. Kyriakides, “Target tracking using radar and image IoT nodes,” in Proceedings of the 16th IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS), pp. 418–422, Marina del Rey, CA, USA, May 2020. View at: Publisher Site | Google Scholar
  6. C. P. Simonsen, F. M. Thiesson, Ø. Holtskog, and R. Gade, “Detecting and locating boats using a PTZ camera with both optical and thermal sensors,” in Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP), pp. 395–403, Valletta, Malta, February 2020. View at: Publisher Site | Google Scholar
  7. H. Wang, G. Yang, E. Li, Y. Tian, M. Zhao, and Z. Liang, “High-voltage power transmission tower detection based on faster R-CNN and YOLO-V3,” in Proceedings of the 2019 Chinese Control Conference (CCC), pp. 8750–8755, Guangzhou, China, July 2019. View at: Google Scholar
  8. S. K. Sivanath, S. A. Muralikrishnan, P. Thothadri, and V. Raja, “Eyeball and blink-controlled firing system for military tank using LabVIEW,” in Proceedings of the 2012 4th International Conference on Intelligent Human Computer Interaction (IHCI), pp. 1–4, IEEE, Kharagpur, India, December 2012. View at: Google Scholar
  9. S. Hu, K. Shimasaki, M. Jiang, T. Takaki, and I. Ishii, “A dual-camera-based ultrafast tracking system for simultaneous multi-target zooming,” in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 521–526, Dali, China, December 2019. View at: Publisher Site | Google Scholar
  10. R. Bisewski and P. K. Atrey, “Toward a remote-controlled weapon equipped camera surveillance system,” in Proceedings of the Tools with Artificial Intelligence (ICTAI), 2011 23rd IEEE International Conference on, pp. 1087–1092, IEEE, Boca Raton, FL, USA, November 2011. View at: Google Scholar
  11. E. Iflachah, D. Purnomo, and I. A. Sulistijono, “Coil gun turret control using a camera,” EEPIS Final Project, 2011. View at: Google Scholar
  12. A. Garg, R. Raziur Rouf, K. N. Hafiz, M. Sharna, and N. Hasan, “Automated detection, locking and hitting a fast moving aerial object by image processing (suitable for guided missile),” IOSR Journal of Electronics and Communication Engineering, vol. 11, no. 4, pp. 60–68, 2016. View at: Publisher Site | Google Scholar
  13. M. K. Anwar, A. Risnumawan, A. Darmawan, M. N. Tamara, and D. S. Purnomo, “Deep multilayer network for automatic targeting system of gun turret,” in Proceedings of the 2017 International Electronics Symposium on Engineering Technology and Applications (IES-ETA), pp. 134–139, Surabaya, Indonesia, September 2017. View at: Google Scholar
  14. P. Singh, M. Masud, M. S. Hossain, and A. Kaur, “Blockchain and homomorphic encryption-based privacy-preserving data aggregation model in smart grid,” Computers & Electrical Engineering, vol. 93, 2021. View at: Publisher Site | Google Scholar
  15. D. Li, L. Deng, B. B. Gupta, H. Wang, and C. Chang, “A novel CNN based security guaranteed image watermarking generation scenario for smart city applications,” Information Sciences, vol. 479, pp. 432–447, 2018. View at: Publisher Site | Google Scholar
  16. P. Singh, M. Masud, M. Shamim Hossain et al., “Cross-domain secure data sharing using blockchain for industrial IoT,” Journal of Parallel and Distributed Computing, 2021, in press. View at: Publisher Site | Google Scholar
  17. C. L. Stergiou, K. E. Psannis, and B. B. Gupta, “IoT-based big data secure management in the fog over a 6G wireless network,” IEEE Internet of Things Journal, vol. 8, no. 7, pp. 5164–5171, 2021. View at: Publisher Site | Google Scholar
  18. M. Masud, G. S. Gaba, K. Choudhary, M. S. Hossain, M. F. Alhamid, and G. Muhammad, “Lightweight and anonymity-preserving user authentication scheme for IoT-based healthcare,” IEEE Internet of Things Journal, 2021, in press. View at: Publisher Site | Google Scholar
  19. M. Masud, G. S. Gaba, S. Alqahtani et al., “A lightweight and robust secure key establishment protocol for Internet of medical things in COVID-19 patients care,” IEEE Internet of Things Journal, vol. 99, 2020. View at: Publisher Site | Google Scholar
  20. H. Wang, Z. Li, L. Yang, B. B. Gupta, and C. Chang, “Visual saliency guided complex image retrieval,” Pattern Recognition Letters, vol. 130, pp. 64–72, 2020. View at: Publisher Site | Google Scholar
  21. M. A. Alsmirat, F. Al-Alem, M. Al-Ayyoub, Y. Jararweh, and B. Gupta, “Impact of digital fingerprint image quality on the fingerprint recognition accuracy,” Multimedia Tools and Applications, vol. 78, no. 3, pp. 3649–3688, 2019. View at: Publisher Site | Google Scholar
  22. S. Ibrahim, H. Alhumyani, M. Masud et al., “Framework for efficient medical image encryption using dynamic S-boxes and chaotic maps,” IEEE Access, vol. 8, Article ID 160433, 2020. View at: Publisher Site | Google Scholar
  23. J. Redmon and A. Farhadi, “YOLOv3: an incremental improvement,” in Proceedings of Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, June 2018. View at: Google Scholar
  24. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788, Las Vegas, NV, USA, June 2016. View at: Publisher Site | Google Scholar
  25. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017. View at: Publisher Site | Google Scholar
  26. R. Girshick, “Fast R-CNN,” in Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440–1448, Santiago, Chile, December 2015. View at: Google Scholar
  27. R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-based convolutional networks for accurate object detection and segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 142–158, 2016. View at: Publisher Site | Google Scholar
  28. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” in Proceedings of Neural Information Processing Systems, vol. 1, pp. 91–99, Montreal, Canada, December 2015. View at: Google Scholar
  29. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, Las Vegas, NV, USA, June 2016. View at: Google Scholar
  30. Kaggle, “Drone dataset (UAV).” [Online]. Available: https://www.kaggle.com/dasmehdixtr/drone-dataset-uav.
  31. Spark Fruit Electronics, “HB100 X 10.525GHz microwave sensor 2-16M Doppler radar human body induction switch module for Arduino,” 2020. [Online]. Available: https://sparkfruit.ph/product/mh-et-live-hb100-x-10-525ghz-microwave-sensor/. View at: Google Scholar
  32. T. K. Hareendran, “How to get started with a microwave radar motion sensor,” 2017. [Online]. Available: https://www.electroschematics.com/get-started-microwave-radar-motion-sensor/. View at: Google Scholar
  33. AgilSense, “HB100 microwave sensor application note,” 2020. [Online]. Available: https://www.limpkin.fr/public/HB100/HB100_Microwave_Sensor_Application_Note.pdf.
  34. E. Moore and C. Ryan, “How fast do birds fly?” [Online]. Available: https://www.jaysbirdbarn.com/fast-birds-fly/. View at: Google Scholar
  35. Newsroom, “Skyguard anti-aircraft gun to boost Bangladesh Army’s air defense capability,” Bangladesh Army, Dhaka, Bangladesh, 2019. [Online]. Available: https://bdnewsnet.com/bangladesh/bdmilitary/skyguard-anti-aircraft-gun-to-boost-bangladesh-armys-air-defense-capability. View at: Google Scholar

Copyright © 2021 Fazle Rabby Khan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
