Mathematical Problems in Engineering | 2021 | Research Article | Open Access
Special Issue: Computer Vision Methods in Precision Agriculture

Shaoxiong Zheng, Weixing Wang, Zeqian Liu, Zepeng Wu, "Forest Farm Fire Drone Monitoring System Based on Deep Learning and Unmanned Aerial Vehicle Imagery", Mathematical Problems in Engineering, vol. 2021, Article ID 3224164, 13 pages, 2021. https://doi.org/10.1155/2021/3224164

Forest Farm Fire Drone Monitoring System Based on Deep Learning and Unmanned Aerial Vehicle Imagery

Academic Editor: Aditya Rio Prabowo
Received: 20 Jul 2021
Revised: 13 Oct 2021
Accepted: 29 Oct 2021
Published: 25 Nov 2021

Abstract

Forest fires are among the main threats to forest sustainability, so an early forest fire prevention system is urgently needed. To address the problem of forest farm fire monitoring, this paper proposes a forest fire monitoring system based on drones and deep learning. The proposed system aims to overcome the shortcomings of traditional forest fire monitoring systems, such as blind spots, poor real-time performance, expensive operation, and large resource consumption. Image processing techniques are used to determine, in real time, whether a frame returned by a drone contains fire, and the resulting information is used to decide whether a rescue operation is needed. The proposed method is simple to operate, efficient, and inexpensive to run. The experimental results indicate that the relative accuracy of the proposed algorithm is 81.97%. In addition, the proposed technique provides a digital means of monitoring forest fires effectively in real time. Thus, it can help avoid fire-related disasters and can significantly reduce the labor and other costs of forest fire prevention and suppression.

1. Introduction

With the rapid development of society, new requirements for the ecological environment have emerged. Fire, one of the eight major natural disasters, spreads quickly, is difficult to control, and causes irreversible destruction [1]. Consequently, fire can severely damage the ecological environment and threaten the safety of property and life.

The existing forest fire monitoring methods include manual patrol, observation towers, and satellite remote sensing, each of which has certain advantages and disadvantages [2]. Manual patrol can flexibly select the patrol route, go deep into the forest area, and offers strong mobility, but it suffers from blind areas caused by terrain, low efficiency, and a narrow field of view. Tower-based video monitoring can observe large forests in real time with the help of telescopes and video monitoring equipment, but it leaves blind areas under the canopy in densely wooded areas and lacks mobility. Satellite remote sensing has a wide detection range and high positioning accuracy and can provide all-weather observation, but its cost is high, it can identify only large fires, and fires cannot be identified accurately under foggy conditions [3]. Most monitoring systems also rely on high-altitude satellites for auxiliary operations, making the system structure overly complex; maintaining such systems requires large investment and highly skilled technical personnel, so similar projects struggle to meet the actual needs of forest monitoring. As a product of the rapid development of science and technology, drones offer high flight speed, easy control, and strong real-time performance [4]. Therefore, drones have been widely applied to forest fire prevention and detection, fire behavior identification, and rescue monitoring.

2. Overall System Design

2.1. Overall System Structure

The proposed deep-learning-based forest fire monitoring system includes a drone and a remote monitoring system terminal. The system introduces the drone platform into the forest fire prevention system, providing early warnings through video-based fire detection technology [5]. Its workflow consists of multiple steps. First, a drone equipped with a high-definition camera performs flight operations along a preset patrol route so that the entire area under observation is covered without blind spots; the drone's position is determined in real time by the Global Positioning System (GPS). Second, the drone transmits the collected video and image information to the ground remote monitoring software in real time [6]. Third, the monitoring system uses the deep-learning-based forest fire algorithm to analyze the received data and judge whether a fire has broken out in the area under observation. When a fire occurs, the system triggers the alarm, and the user receives the dynamic forest fire information in real time through the interface on the monitoring host computer. This information is dispatched to the relevant personnel so that the corresponding fire prevention measures can be taken [7]. The flowchart of the proposed system is displayed in Figure 1.

2.2. System Hardware Design
2.2.1. Unmanned Aerial Vehicle Platform Design

The unmanned aerial vehicle (UAV) forest fire monitoring system includes a GPS module, an image acquisition and transmission module, a communication module, and a flight control module [8]. These UAV modules accomplish various tasks such as UAV flight control, autonomous landing, GPS-based positioning, and image acquisition and transmission [9].

In this work, the CUIM600 drone is used to implement the forest fire monitoring system. This drone adopts a modular design similar to that of the M100, which makes it convenient to use and easy to install. The CUIM600 is also equipped with an efficient power system that integrates dustproofing, autonomous cooling, and other functionalities. The drone can carry a payload of up to 6.0 kg, fly for 30 minutes with no load attached, and reach a maximum flight speed of 18 m/s in still air. The professional-grade A3 flight control system and the sine-drive technology of the intelligent ESCs also significantly improve flight reliability [10]. To suppress the impact of drone motion on the captured images, the proposed system uses a DJI Zenmuse Z3 gimbal camera. This three-axis stabilizing gimbal camera effectively compensates for jitter during image acquisition [11] when the drone moves forward or backward or when the flight altitude changes, thus ensuring good image quality. Moreover, the camera has 3.5x optical zoom and 2x digital zoom and supports 4K ultra-HD video recording at 30 fps.

2.2.2. Remote Terminal Monitoring System

The remote monitoring system is used for receiving, processing, and storing the data acquired by the drone. In addition, the ground center also provides the functionalities of deep-learning-based fire detection and alarm triggering [12]. The staff has the ability to observe the acquired forest images in real time at the ground monitoring terminal. When a fire incident occurs, the ground center provides real-time dynamic information. The ground center’s hardware includes a personal computer (PC) and a communication module for receiving images and other data, such as information on drone location [13].

2.2.3. Image Acquisition System

The signal obtained from the video source is transmitted to the image acquisition card through a video interface. The signal first undergoes A/D conversion and is then decoded by a digital decoder. The resulting signal is compressed into digital video and transmitted to the PC [14]. The frame grabber collects image frames continuously from the input video and transfers each to the PC before acquiring the next frame [15]. Real-time acquisition therefore depends strongly on the time required to process an image frame: if this time exceeds the interval between two adjacent frames, image data will be lost, a phenomenon known as frame loss. The video acquisition and compression operations of the image acquisition card are implemented together [16].
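The frame-loss condition just described can be made concrete. The sketch below is illustrative only; the frame rate and processing times are hypothetical figures, not measurements from the paper:

```python
def frames_lost(fps: float, processing_time_s: float, duration_s: float) -> int:
    """Frames dropped when each frame takes processing_time_s to handle."""
    interval = 1.0 / fps                 # time between two adjacent frames
    if processing_time_s <= interval:
        return 0                         # processing keeps up; nothing is lost
    total_frames = int(duration_s * fps)
    processed = int(duration_s / processing_time_s)
    return total_frames - processed      # frames arriving while the PC is busy

# Processing a frame in 20 ms keeps up with a 25 fps stream; 100 ms does not.
assert frames_lost(fps=25, processing_time_s=0.02, duration_s=10) == 0
assert frames_lost(fps=25, processing_time_s=0.10, duration_s=10) == 150
```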

2.3. System Software Design

The software of the proposed system includes the unmanned aerial vehicle control system, data processing and communication system, and remote user-computer management system, as shown in Figure 2.

2.3.1. UAV Control System

The drone control system is used to control the drone flight and flight information feedback from the drone, including information from the route planning module, GPS module, and flight control module [17].

2.3.2. Data Processing and Communication System

The data processing and communication system is used to transmit the data and process the received forest images. In addition, this system is also responsible for managing the acquired data, including the fault information, information on fire incidents and disasters, information on drone flight status, and user login information. The image acquisition and transmission system are mainly composed of the data acquisition module and data transmission module, which are responsible for transmitting the collected image data to the remote monitoring terminal and providing the fire-risk early warning. The flowchart of the image acquisition and transmission process is presented in Figure 3.

The communication in the proposed system enables data to be both sent and received. These data include the various information acquired by the different modules of the forest fire monitoring system, chiefly the data interaction between the drone and the remote monitoring system. The image data captured by the UAV are transmitted back to the ground monitoring terminal in real time, and the ground terminal controls the UAV flight according to the preset flight route [18]. The entire data interaction process between the different modules is presented in Figure 4.

The drone communications are performed through the serial port. Each time serial communication is performed, the listening thread is opened; after monitoring is completed, the listening thread is closed. If the listening thread is not closed, the next monitoring session cannot be executed successfully. The overall logic is as follows. InitPort initializes the serial port, opening it and setting the baud rate. After confirming that the serial port is open, the PacketConfig function initializes the data transmission format, frame header, frame tail, frame length, and storage byte positions. Once the listening thread has been opened, the data-ready state is set to TRUE; data can be read only after this confirmation. After the data are read, the data-ready state is set back to FALSE; otherwise, the thread stops working. When the serial port is no longer needed, it must be closed with the ClosePort function; otherwise, another serial port cannot be opened. For image transmission, serial communication offers good stability [19].
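The InitPort/PacketConfig/read/ClosePort cycle above can be sketched as a small state machine. This is a simulation only: the transport is an in-memory queue and the single-byte header and tail values are guesses, not the paper's C++ implementation:

```python
import queue

class SerialLink:
    """Sketch of the listen/read cycle (method names follow the paper's
    InitPort/PacketConfig/ClosePort; the transport is simulated in memory)."""
    def __init__(self):
        self.rx = queue.Queue()
        self.data_ready = False
        self.is_open = False

    def init_port(self, baud=115200):           # InitPort: open port, set baud rate
        self.baud, self.is_open = baud, True

    def packet_config(self, header=b'\xAA', tail=b'\x55'):
        self.header, self.tail = header, tail   # PacketConfig: frame header/tail (guessed bytes)

    def listen_once(self, raw: bytes):
        """One listening session; must not start before the last read cleared data_ready."""
        assert self.is_open and not self.data_ready
        if raw.startswith(self.header) and raw.endswith(self.tail):
            self.rx.put(raw[len(self.header):-len(self.tail)])
            self.data_ready = True              # TRUE: payload may now be read

    def read(self) -> bytes:
        assert self.data_ready                  # data is read only after confirmation
        payload = self.rx.get()
        self.data_ready = False                 # FALSE: the thread may listen again
        return payload

    def close_port(self):                       # ClosePort: required before reopening
        self.is_open = False

link = SerialLink()
link.init_port()
link.packet_config()
link.listen_once(b'\xAA' + b'gps:23.1,113.2' + b'\x55')
assert link.read() == b'gps:23.1,113.2'
link.close_port()
```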

2.3.3. Remote Upper-Computer Management System

The remote host-computer monitoring management system provides image transmission and processing as well as fire danger warning. The image transmission and processing function collects aerial images of the forest captured by a PTZ camera mounted on a drone. The image transmission system then transmits the video data in real time to the PC at the ground terminal of the forest fire monitoring system. This system is also responsible for detecting forest fires in the image data using a deep-learning-based algorithm. The video capturing and transmission function relies on a PTZ camera, a TC-4000 SD image capture card, and the image transmission system [20]. The image captured by the drone-mounted camera is transmitted to the mobile terminal of the remote drone controller through the image transmission system, then passed to the image acquisition card via an HDMI cable, and finally transferred via USB to the PC of the ground monitoring system. The image transmission and processing flows are presented in Figures 5 and 6, respectively.

The proposed forest fire monitoring system uses DJI’s Lightbridge2 image transmission system. The Lightbridge2 image transmission system supports a variety of interface outputs, including USB, mini-HDMI, and 3G-SDI. In addition, it supports up to 1080p/60 fps full-HD output.

The Lightbridge2 video transmission system uses dynamic wireless-link adaptation technology to compensate for the effects of distance and environmental electromagnetic radiation on picture quality [21]. It automatically selects the best channel and switches transmission channels when a channel is disrupted. It also adjusts the video bandwidth when necessary to keep the video smooth and effectively reduce picture defects and interruptions. Using the deep-learning-based algorithm, the image delay is reduced to 50 ms at a maximum transmission distance of 5 km. The Lightbridge2 system combines high-speed processors and deep-learning-based algorithms to make the wireless transmission of images more stable and reliable.

The modules of the remote upper-computer management system include the basic information module of the forest farm, the image processing and early warning module, and the manual data processing module. The basic information module introduces the list of state-owned forest farms in Guangdong province and the corresponding prefecture-level forestry bureau links. Using this interface, it is convenient for forest staff to find relevant forest farm information. According to the forestry bureau’s portal website links, it is possible to find the local forestry bureau that belongs to a particular forest farm, and it keeps the staff abreast of the local forestry bureau’s developments.

Similarly, it also has a map interface of various forest farms, which provides the geographical location of the forest site, such as city name and latitude and longitude. This is helpful in the deployment process of a drone for the forest fire danger monitoring system.

The image processing module detects fire incidents in the forest. In case of fire disaster detection, the system displays the geographic location and promptly alerts the forest farm staff. The signal is displayed in red for disaster warnings and green for normal conditions. The fire warning interface of the monitoring system is shown in Figure 7.

When it is necessary to manually process images, image processing is performed using the manual processing interface. In addition, there is a picture management interface that is used to store pictures of forest fire prevention and display the pictures from the picture library according to the user’s demand. It also provides the forest staff with an insight into historical image data. The picture management interface is presented in Figure 8.

In addition, in the log management interface, historical background data processing records are stored, and historical management operations are backed up.

3. Fire-Risk Monitoring Algorithm

3.1. Forest Fire Monitoring Algorithm

The forest fire monitoring algorithm is based on digital pattern recognition and digital image processing techniques, such as image segmentation, feature extraction, image classification, and recognition, and it accomplishes digital, automated, unmanned real-time monitoring and early warning.

The workflow of the forest fire monitoring algorithm is as follows.
(1) A UAV equipped with a high-definition camera captures images, which are transmitted to the PC of the ground monitoring system terminal via the image acquisition card.
(2) The ground monitoring system terminal receives the images transmitted by the drone and reads the video data.
(3) The acquired image may be corrupted by interference such as noise, which hinders forest fire monitoring and identification at later stages. Therefore, image preprocessing is performed to eliminate irrelevant information and restore useful image information.
(4) The flame segmentation method based on the combination of the FDI index and the R channel is used to extract the suspected flame area.
(5) The dynamic and static features of the suspected forest fire color area are extracted, including circularity, area change rate, gravity center height ratio, and LBP texture features.
(6) The extracted feature vector is fed to the trained classifier for classification and recognition to determine whether a fire has occurred.
(7) In the case of a fire, the alarm device notifies the relevant personnel to prepare for firefighting; otherwise, cyclic monitoring continues.

The flowchart of the forest fire monitoring algorithm is presented in Figure 9.
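The seven steps above can be sketched as a single monitoring-loop body. Every stage here is a stub standing in for the paper's components; the function names and the toy threshold are ours, not the authors':

```python
import numpy as np

def preprocess(frame):                    # step 3 stub: denoising would go here
    return frame.astype(float)

def segment_fdi_r(img, thr=200):          # step 4 stub: bright pixels in channel 0
    return img[..., 0] > thr              # R channel assumed at index 0

def extract_features(img, mask):          # step 5 stub: just the suspected-area size
    return np.array([mask.sum()])

def monitor_frame(frame, classifier):
    img = preprocess(frame)
    mask = segment_fdi_r(img)
    if not mask.any():
        return False                      # nothing suspicious; keep cycling (step 7)
    feats = extract_features(img, mask)
    return bool(classifier(feats))        # step 6: trained classifier decides

dark = np.zeros((4, 4, 3), np.uint8)
assert monitor_frame(dark, classifier=lambda f: f[0] > 2) is False
```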

In this case study, the forest fire areas are the main segmentation targets, and the detection color ranges correspond to the color of flames; the fire detection index (FDI) [22] is also used. The FDI enhances the color information of flames and weakens the color information of vegetation, and it is calculated by the following equation:

Forest fire images show obvious flame information in the R channel component and the FDI fire index [23]. A forest fire segmentation method based on the combination of the FDI fire index and the R channel, which can segment forest fire areas accurately and completely, is proposed. This method satisfies the following two formulas, where (i, j) represents the position of an image pixel.

If the FDI fire index at a pixel’s position is greater than its threshold, the pixel is marked as a suspicious flame pixel and its value is set to one. Similarly, if the R channel component at that position is greater than its threshold, the pixel is marked as a suspicious flame pixel and its value is set to one.

To sum up, if a pixel in an image can satisfy the two formulas at the same time, it will be judged to be a suspected forest fire pixel.
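The dual-threshold rule can be sketched as follows. Since the FDI formula from [22] and the threshold values are not reproduced in this copy, the index used here (2R − G − B) and both thresholds are stand-ins, not the paper's:

```python
import numpy as np

def suspected_fire_mask(rgb, t_fdi=150, t_r=180):
    """Pixels satisfying BOTH conditions (FDI > t_fdi AND R > t_r) are set to 1.
    The FDI here (2R - G - B) is a stand-in for the paper's formula from [22]."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    fdi = 2 * r - g - b                  # enhances flame color, suppresses green vegetation
    return ((fdi > t_fdi) & (r > t_r)).astype(np.uint8)

flame = np.array([[[230, 60, 20]]], np.uint8)   # reddish pixel -> suspected fire
leaf  = np.array([[[40, 160, 40]]],  np.uint8)  # green pixel   -> rejected
assert suspected_fire_mask(flame)[0, 0] == 1
assert suspected_fire_mask(leaf)[0, 0] == 0
```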

In this case study, to further extract the features of suspected forest fire areas, four features are used as a basis for forest fire recognition. These four features include roundness, area change rate, height ratio of the center of gravity, and LBP texture.

3.1.1. Roundness

The roundness is calculated by the following equation, where S is the area of the suspected forest fire and L is the perimeter of its outline.

The closer an object’s outline is to a circle, the smaller the roundness value, and vice versa. Therefore, roundness can be used to measure the complexity of an object’s outline.
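One common roundness definition consistent with this behaviour (smallest for a circle) is e = L²/(4πS); the paper's exact equation is not reproduced in this copy, so treat the formula below as an assumption:

```python
import math

def roundness(S: float, L: float) -> float:
    """Assumed form e = L^2 / (4*pi*S): equals 1 for a perfect circle and grows
    as the outline becomes more complex, matching the behaviour described above."""
    return L * L / (4 * math.pi * S)

r = 3.0
circle = roundness(S=math.pi * r * r, L=2 * math.pi * r)
square = roundness(S=1.0, L=4.0)
assert abs(circle - 1.0) < 1e-9
assert square > circle       # jagged or complex outlines score higher
```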

3.1.2. Area Change Rate

The area change rate of a flame is obtained as follows, where the first quantity denotes the suspected flame area of the current frame and the second is the area of the next frame.

When a substance burns, the flame changes continuously, so the difference between adjacent frames is obvious. Therefore, the area change rate feature can eliminate the influence of moving interferers whose color is similar to that of a flame.
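Since the equation's symbols were dropped in this copy, the sketch below assumes the usual form |S_next − S_now| / S_now rather than the authors' exact expression:

```python
def area_change_rate(area_now: int, area_next: int) -> float:
    """Assumed form |S_next - S_now| / S_now (illustrative reading only)."""
    return abs(area_next - area_now) / area_now

# A flickering flame region changes markedly between adjacent frames...
assert area_change_rate(1000, 1300) == 0.3
# ...while a rigid flame-coloured object (e.g. a red roof) barely changes.
assert area_change_rate(1000, 1002) < 0.01
```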

3.1.3. Height Ratio of Center of Gravity

When a fire occurs, the flame edge shakes constantly, the flame area changes continuously, and the flame is generally narrow at the top and wide at the bottom. Thus, the center of gravity of a flame lies on its lower side. Using this feature, interfering objects with an upper center of gravity and a regular shape can be distinguished. Assume that the height of the center of gravity is denoted by , and the total height of the object is denoted by ; then, the ratio of the center-of-gravity height to the total height can be expressed as follows [24, 25]:
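A sketch of this ratio on a binary mask, assuming the centre-of-gravity height is measured from the region's bottom (the paper's formula is not reproduced in this copy):

```python
import numpy as np

def gravity_height_ratio(mask: np.ndarray) -> float:
    """Centroid height above the region's bottom over the region's total height
    (illustrative reading; wide-bottomed flame shapes score below 0.5)."""
    rows = np.nonzero(mask)[0]
    top, bottom = rows.min(), rows.max()
    centroid = rows.mean()
    return (bottom - centroid) / (bottom - top)   # 0 = centroid at bottom, 1 = at top

# Flame-like mask: narrow at the top (low row index), wide at the bottom.
flame = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [1, 1, 1]])
assert gravity_height_ratio(flame) < 0.5   # centre of gravity on the lower side
```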

3.2. Principle of Forest Fire Monitoring Algorithm
3.2.1. Monitoring and Identification Algorithm

The algorithm for monitoring and identification of forest fire is shown in Algorithm 1.

Require:
  Initial: Xi (possible smoke area in the ith frame)
  Compute: CenY(·) (compute the centroid of fixed points on the y-axis)
  Assume: Cp = ∅ (for all p ∈ [0, 1, …, P − 1]), (a = 0, b = 0)
Output:
  SR (SR = 1 indicates that a smoke region is identified)
For i = 0 to (N − 1) do
  If i < P·n then
    p = ⌊i/n⌋; Cp = Cp ∪ Xi
  End If
  If i ≥ n then
    If |Xi| > |Cb| && CenY(Xi) < CenY(Cb) − 1 then
      a = a + 1
    Else
      a = 0
    End If
    If a = n then
      b = b + 1
      a = 0
    End If
  End If
  If b ≥ P then
    SR = 1
    Break
  Else
    SR = 0
  End If
End For
3.2.2. Proposed Monitoring Algorithm

The extremal regions of an image are defined as the connected regions within a binary thresholded image, which is obtained as follows, where x denotes the binary threshold.

For each extremal region, varying the binary threshold produces a sequence of nested extremal regions, that is, . The stability of each extremal region is then defined as follows:

An extremal region is maximally stable if Φ(·) attains a local minimum at the threshold.
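This stability test can be sketched numerically. The formula below is the standard MSER stability measure (relative area growth across ±Δ threshold steps), which the text appears to describe; Δ = 1 is our choice:

```python
def stability(sizes, i, delta=1):
    """Relative area growth of nested regions across +/-delta threshold steps."""
    return (sizes[i + delta] - sizes[i - delta]) / sizes[i]

# Region area while sweeping the threshold: a plateau marks a stable region.
sizes = [10, 40, 80, 85, 88, 200, 500]
phis = [stability(sizes, i) for i in range(1, len(sizes) - 1)]
best = min(range(len(phis)), key=phis.__getitem__) + 1
assert best == 3            # the plateau (80, 85, 88) is maximally stable
```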

The proposed MSER is shown in Algorithm 2.

(1) Load image Img of size P × Q × R
    Img = ∑a(i, j, k), where i: 1 to P, j: 1 to Q, k: 1 to R
(2) Convert the input image to grayscale: Img → Igc
    Igc = ∑a(i, j), where i: 1 to P, j: 1 to Q
(3) Image augmentation
    Image enlargement and restoration with a 3 × 3 window over the regions obtained in Step 2
(4) Pixel edge finding
(5) Text area finding
    Tr = Igc, S ⊂ Igc
(6) MSER detection: detect extremal regions
    S ⊂ Tr ⊂  for all x ∈ S, y ∈ ∂T, where ∂T is the outer region boundary
(7) Image edge finding
    Smooth the image using an image filter
    Find the gradient strength of the image
    Remove low-strength pixels
    Apply a threshold to discover the boundaries
    Eliminate weak boundaries and keep the substantial ones
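Steps 2 and 7 can be sketched with plain NumPy. The grayscale weights (BT.601) and the strength threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def to_gray(img):                              # step 2: grayscale conversion (BT.601 weights assumed)
    return img @ np.array([0.299, 0.587, 0.114])

def gradient_strength(gray):                   # step 7: gradient magnitude of the image
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def edges(gray, low=10.0):                     # remove low-strength pixels, keep substantial ones
    return gradient_strength(gray) > low

img = np.zeros((8, 8, 3), float)
img[:, 4:] = 255.0                             # vertical step edge between columns 3 and 4
e = edges(to_gray(img))
assert e[:, 3:5].all() and not e[:, 0:2].any()
```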
3.2.3. Measured Analytical Model for Monitoring Algorithm

Measured analytical model for monitoring algorithm is shown in Algorithm 3.

(1) Mapping of the captured fire image (CFI)
    The MSER region is defined on the CFI
    Imap: E ⊂ ℤ²
    O is a total order,
    (a) i.e., a reflexive, antisymmetric, and transitive binary relation ≤ over states
    (b) In the proposed model, O = (0, 1, …, 254) is considered, and the MSER region is defined
    e.g., real-valued images (O = P)
(2) An adjacency relation Adj ⊂ E × E
    The proposed model has four connected zones,
    i.e., bordering pixels r, s ⊂ E are adjacent, and the region is contiguous
(3) Region S is a contiguous subset of D, i.e., for all r,
    there is an order r, x1, x2, …, xn and
(4) Region boundary
    , i.e., the boundary of S is the set of pixels adjacent to at least one pixel of S but not belonging to S
(5) An extremal region is a region such that for every ,
    the upper-limit intensity of the region Img(r) > Img(s), the lower-limit intensity of the region
(6) MSER state
    S1, …, Si−1, Si, … is the order of nested MSER states, i.e., the extremal state is maximally stable where the stability measure
    has a local minimum at , which is a limit of the proposed method

4. Experimental Results and Analysis

The UAV system used the STM32 development board, with integrated embedded Flash memory and RAM for program and data storage. It adopted SDK secondary development to achieve custom control and function expansion. The remote monitoring system was configured with a six-core Intel Core (TM) i7-8700K CPU @ 3.7 GHz, 16 GB RAM, and the Windows 10 operating system. The algorithm was developed using the OpenCV library and C++.

4.1. System Hardware Function Test

The hardware function test of the forest fire and disaster monitoring system based on deep learning and drone technology was conducted by repeated debugging of each hardware function and long-term running test of the entire system. The main goal was to test whether the drone system worked normally and whether the entire system ran successfully for a long time.

4.2. System Software Function Test

The deep learning-based software tests of the proposed drone forest fire disaster monitoring system included reliability and real-time testing. Testing was performed using various functions, such as user login function, abnormal alarm function, historical abnormal traceability function, and equipment fault prompt function.

The method used for testing reliability and real-time performance of the forest fire disaster monitoring algorithm was to log multiple segments of a video with flames and interference videos, such as video of car lights or people and objects that had a high similarity index with flames. The algorithm performed detection and identification on these videos and analyzed whether the accuracy rate, false alarm rate, and execution time of statistical monitoring met the monitoring requirements. Similarly, the method for testing the user login function was to test the correct and incorrect usernames and passwords multiple times to ensure that the system could log in normally. To ensure that the alarm was triggered accordingly, it was tested whether a wrong fire detection would lead to alarm triggering. To ensure the historical data correctness, it was tested whether a user could obtain the impact of abnormal historical events and related information through the software. The method used for the equipment fault prompt function test was to modify normal operations of the system’s equipment deliberately and to check whether the equipment fault prompt occurred.

4.3. System Communication Function Test

The communication performance of the forest fire disaster monitoring system based on a drone was tested on the basis of data interaction between the devices at different distances. The communication between a UAV and a remote server, the communication between the UAV and a remote controller, and the image transmission were normal.

4.4. Algorithm Result Comparison

In this case study, different algorithms were used to process drone imagery, and the results were compared with the deep learning-based algorithm.

The interframe difference method, background subtraction, the ViBe algorithm, and manual statistics were used to process the forest images collected by the UAV, and their results, including the number of pixels in the fireworks area, the number of similar pixels, the number of misjudged pixels, the number of pixels in the judgment results, relative judgment accuracy, and judgment accuracy, were compared and analyzed.

The interframe difference method obtains the contour of a moving target by differencing two adjacent frames of a video sequence. It can be applied to multiple moving targets and to a moving camera. When an object moves abnormally in a monitored scene, there are obvious differences between frames. Subtracting two frames gives the absolute value of the brightness difference between them, which is then compared against a threshold; in this way, the motion characteristics of the video or image sequence are analyzed, and it is determined whether any object motion is present.
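A minimal sketch of the interframe difference test (the threshold value here is an illustrative choice):

```python
import numpy as np

def interframe_motion(prev, curr, thr=25):
    """Absolute brightness difference of two adjacent frames, thresholded to a motion mask."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thr

prev = np.zeros((6, 6), np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 200                       # a bright object appears between frames
mask = interframe_motion(prev, curr)
assert mask[2, 2] and not mask[0, 0]
assert mask.sum() == 4                     # only the changed 2x2 block is flagged
```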

The background subtraction method uses the gray difference between the corresponding pixel points in the current frame image and the background image to detect a moving target.

The basic idea of the ViBe algorithm is to store a sample set for each pixel, consisting of past values of that pixel and of its neighbors; each new pixel value is then compared with the sample set to determine whether it belongs to the background [26].
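The per-pixel ViBe decision can be sketched as follows; the radius and match count used here are the algorithm's commonly cited defaults, not values stated in the paper:

```python
import numpy as np

def vibe_is_background(samples, pixel, radius=20, min_matches=2):
    """ViBe's per-pixel test: the new value is background if at least min_matches
    stored samples lie within radius of it (typical default parameters)."""
    matches = np.sum(np.abs(np.asarray(samples, int) - int(pixel)) < radius)
    return matches >= min_matches

history = [100, 104, 98, 101, 99]        # past values of one pixel and its neighbours
assert vibe_is_background(history, 102)          # close to the sample set -> background
assert not vibe_is_background(history, 220)      # flame-bright value -> foreground
```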

4.4.1. Data Processing Speed

A video with a length of 4 min 19 s was considered, containing 29 frames per second and 7,511 frames in total. The size of each frame was 960 × 540. The image data came from the experiment at the Guangdong Sihui Jianggu forest farm in Zhaoqing. The time each algorithm needed to complete the relevant processing was measured, and the processing speed and delay rate were calculated, as presented in Table 1.


Processing method               Processing speed (frames/s)   Completion time     Delay rate
Original video                  29                            4 min 19 s          0
Deep-learning-based algorithm   5.83                          21 min 28 s         80.00%
Interframe difference method    7.12                          17 min 35 s         75.45%
Background subtraction          6.84                          18 min 18 s         76.41%
ViBe algorithm                  0.85                          2 h 27 min 17 s     97.10%
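The delay-rate column of Table 1 is consistent with 1 − (source duration / completion time), and the speed column with 7,511 frames / completion time; checking the interframe difference and background subtraction rows:

```python
# Source clip: 4 min 19 s = 259 s (7,511 frames at 29 fps).
src = 4 * 60 + 19
ifd = 17 * 60 + 35                   # interframe difference completion: 1,055 s
bgs = 18 * 60 + 18                   # background subtraction completion: 1,098 s
assert round((1 - src / ifd) * 100, 2) == 75.45    # Table 1 delay rate
assert round((1 - src / bgs) * 100, 2) == 76.41
assert round(7511 / ifd, 2) == 7.12                # Table 1 processing speed
```

The deep-learning and ViBe rows agree with the same formulas to within about 0.1 percentage point, which suggests the published figures were rounded.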

Directly processing every frame of a video lowers the processing speed. To achieve real-time data processing, the frame-capture step must reduce the number of frames during preprocessing. When the flight speed of the UAV is constant, the change in scene information between nearby frames is limited.

When the capture rate was set to five frames per second, the interframe difference and background subtraction methods could speed up processing. The processing speeds of these methods and of the deep-learning-based algorithm met the requirements for real-time data processing; under the same conditions, the deep-learning-based algorithm met both the speed and the accuracy requirements.

4.4.2. Data Processing Accuracy

As presented in Figure 10 and Table 2, the deep-learning-based algorithm was compared with the interframe difference method, the background subtraction method, and the ViBe algorithm. In Table 2, the manually marked pyrotechnic areas serve as the accuracy reference. The results indicate that the proposed algorithm had advantages over the other algorithms.


Metric                              Deep-learning-based   Interframe difference   Background subtraction   ViBe    Manual statistics
Pixel number of pyrotechnic areas   327                   183                     214                      216     406
Number of similar pixels            2,125                 117                     309                      437     0
Miscalculated pixel number          2,386                 6,185                   1,051                    3,419   0
Pixel number of the result          2,909                 6,583                   2,336                    4,424   406
Relative accuracy                   81%                   45%                     53%                      53%     1
Judging accuracy                    11%                   3%                      9%                       5%      1
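One reading consistent with Table 2's numbers: relative accuracy is a method's correctly detected fire pixels over the 406 manually marked pixels, and judging accuracy is those pixels over everything the method returned. This is our interpretation, checked numerically against the table:

```python
manual = 406
rows = {  # method: (fire pixels found, total pixels in the result)
    "deep learning": (327, 2909),
    "interframe":    (183, 6583),
    "background":    (214, 2336),
    "vibe":          (216, 4424),
}
rel = {k: round(100 * f / manual) for k, (f, _) in rows.items()}
jdg = {k: round(100 * f / r) for k, (f, r) in rows.items()}
assert rel["deep learning"] == 81 and jdg["deep learning"] == 11
assert rel["interframe"] == 45 and jdg["interframe"] == 3
```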

In this case study, the result of the deep-learning-based algorithm was closest to that of the manual statistics, which shows that it outperformed the other methods in recognition accuracy. When combined with the experimental results of removing suspected fire areas, the relative decision accuracy of the algorithm was further improved.

In addition, the comparison of the general recognition algorithms with the deep-learning-based algorithm indicated that the interframe difference method is not suitable for UAV video detection: it is easily affected by environmental and motion conditions, its recognition results are poor, and its recognition accuracy was almost zero. The comparison between background subtraction and the deep-learning-based algorithm showed that the processing results of background subtraction varied greatly between UAV motion and hovering. Compared with the other four methods, the proposed algorithm achieved higher relative judgment accuracy and better overall performance, so it can be considered suitable for forest fire-risk monitoring.

5. Conclusion

This paper proposes a forest fire-risk monitoring system that includes the hardware consisting of a UAV and image acquisition system and the corresponding software. Through the detailed test of the software and hardware of the proposed system, the normal operation of each module of the proposed system and smooth communication between the modules have been verified. In forest fire-risk recognition, after image preprocessing, region segmentation, and feature extraction, different classifiers are used to recognize fire-risk images. The comparison with the general algorithm shows that the proposed algorithm can recognize the forest fire risk with better accuracy while meeting the requirements for real-time data processing of scene recognition. Therefore, the proposed system is applicable to forest fire monitoring.

Data Availability

The data used to support the findings of this study are available within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the Key-Area Research and Development Program of Guangdong Province of China under Grant no. 2019B020214003 and the Guangzhou Science and Technology Plan Project Innovation Platform Construction and Sharing under Grant no. 201605030013. The authors appreciate the continued and enthusiastic support. The authors would like to thank the Guangdong Academy of Forestry Sciences for providing image acquisition support for the UAV and the Guangdong Sihui Jianggu forest farm in Zhaoqing for providing site support for the research (Guangdong Provincial Forestry Science and Technology Innovation Project under Grant no. 2020KJCX003).


Copyright © 2021 Shaoxiong Zheng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


More related articles

 PDF Download Citation Citation
 Download other formatsMore
 Order printed copiesOrder
Views55
Downloads43
Citations

Related articles

Article of the Year Award: Outstanding research contributions of 2020, as selected by our Chief Editors. Read the winning articles.