Research Article | Open Access
Mubarak A. I. Mahmoud, Honge Ren, "Forest Fire Detection Using a Rule-Based Image Processing Algorithm and Temporal Variation", Mathematical Problems in Engineering, vol. 2018, Article ID 7612487, 8 pages, 2018. https://doi.org/10.1155/2018/7612487
Forest Fire Detection Using a Rule-Based Image Processing Algorithm and Temporal Variation
Forest fires represent a real threat to human lives, ecological systems, and infrastructure. Many commercial fire detection sensor systems exist, but they are difficult to apply in large open spaces such as forests because of response delays, maintenance requirements, high cost, and other problems. In this paper a forest fire detection algorithm is proposed, consisting of the following stages. First, background subtraction is applied to detect regions containing movement. Second, the segmented moving regions are converted from RGB to YCbCr color space and five fire detection rules are applied to separate candidate fire pixels. Finally, temporal variation is employed to differentiate between fire and fire-colored objects. The proposed method is tested on a data set consisting of six videos collected from the Internet. The results show that the proposed method achieves a true detection rate of up to 96.63%. These results indicate that the proposed method is accurate and can be used in automatic forest fire-alarm systems.
Forest fire detection systems are gaining a lot of attention because of the continual threat that fire poses to both economic property and public safety. Hundreds of millions of hectares are destroyed by wildfires each year, and over 200,000 forest fires occur worldwide annually, destroying a total area of 3.5 to 4.5 million km². The increase in forest fires around the world has motivated the development of fire warning systems for the early detection of wildfires. Sensor technology has been widely used in fire detection, usually by sensing physical parameters such as changes in pressure, humidity, and temperature, as well as chemical parameters such as carbon dioxide, carbon monoxide, and nitrogen dioxide. However, it is hard to apply these systems in large open areas for a variety of reasons, such as high cost, the energy consumed by the sensors, and the need for the sensor to be close to the fire for accurate sensing, which can result in physical damage to the sensors. In addition, sensor-based methods have a high false-alarm rate and a long response time.
There are numerous motivating factors for the use of an image processing based method of fire detection. The first factor is the rapid development of digital camera technology and CCD or CMOS digital cameras, which has resulted in a rapid increase in image quality and decreased cost of the cameras. The second factor is that digital cameras can cover large areas with excellent results. Third, the response time of image processing models is better than that of existing sensor models. Finally, the overall cost of image processing systems is lower than existing systems.
1.1. Related Studies
Several fire detection algorithms have been proposed by various researchers. Thou-Ho et al. presented a fire detection algorithm that combines the saturation channel of the HSV color space with the RGB color space. This algorithm employs three rules: R ≥ G > B, R ≥ RT, and S ≥ (255 − R) · ST/RT. Determination of the two thresholds RT and ST is required; based on the authors' experiments, RT ranges from 115 to 135 and ST from 55 to 65. This method is computationally simple compared with other algorithms; however, it suffers from false-positive alarms in the case of moving fire-like objects. Dios et al. presented an optical system used to detect forest fires and measure fire properties such as flame height, fire front, fire base width, and flame inclination angle. This system performs well; nevertheless, it is very expensive because it relies on infrared cameras and other technologies such as GPS and telemetry sensors. Yinglian et al. proposed a forest fire disaster prevention algorithm based on image processing, which relies on the color properties of fire and smoke to identify fire. Yinglian's algorithm works well, but smoke spreads quickly and takes on many different colors depending on the burning material; thus, the false-alarm rate rises.
In this paper, a forest fire detection algorithm is proposed. The algorithm uses the YCbCr color space, since it effectively separates luminance from chrominance and is able to isolate high-temperature fire center pixels, which appear white. The final results show that the proposed system has a good detection rate and fewer false alarms, which are the main problems of most existing algorithms.
This paper is organized as follows: Section 2 describes the proposed fire detection method; Section 3 presents the results and computational complexity of the proposed algorithm; and Section 4 summarizes the work that has been carried out in this study and potential future direction.
2. Materials and Methods
This section presents the proposed forest fire detection algorithm, which consists of the following main stages: the first step receives the input video from the input device; the second step applies movement containing region detection based on background subtraction (MRDB); the third step converts the input image sequence from RGB to YCbCr color space; and the fourth step applies the fire detection rules and temporal variation. A fire alarm is activated if all detection conditions are satisfied. The stages of the proposed algorithm are described in detail below. Figure 1 shows the flowchart of the proposed algorithm.
2.1. Movement Containing Region Detection Based on Background Subtraction (MRDB)
Detecting moving regions is a key step in most video-based fire detection systems because fire boundaries fluctuate continuously. Background subtraction is therefore used to select candidate fire regions. A pixel located at (x, y) is assumed to be moving if it satisfies

|In(x, y) − Bn(x, y)| > thr,  (1)

where In(x, y) is the gray-level intensity of the pixel at location (x, y) in the current (nth) frame, Bn(x, y) is the background intensity at the same pixel location, and thr is a threshold value experimentally set to 3. The background is continuously updated using (2), in which Bn+1(x, y) and Bn(x, y) are the pixel intensities at location (x, y) in the current and previous backgrounds. Figure 2 shows an example of MRDB.
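The MRDB stage can be sketched in Python as follows (the paper's implementation was in MATLAB). Since the exact form of the background update in (2) is not reproduced here, a simple running-average update is assumed; the `alpha` blending factor is hypothetical, while the threshold of 3 follows the text.

```python
import numpy as np

def detect_moving_pixels(frame, background, thr=3):
    # A pixel at (x, y) is flagged as moving when |In(x, y) - Bn(x, y)| > thr.
    return np.abs(frame.astype(np.int32) - background.astype(np.int32)) > thr

def update_background(frame, background, alpha=0.9):
    # Assumed running-average form of the background update in Eq. (2):
    # B_{n+1} = alpha * B_n + (1 - alpha) * I_n  (alpha is a hypothetical value).
    return alpha * background + (1.0 - alpha) * frame
```

Only pixels flagged as moving are passed on to the color-based stages.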
2.2. Converting RGB Images to YCbCr
Because different kinds of moving objects, such as trees, animals, birds, and people, can remain after background subtraction, the images from that stage are converted to YCbCr to select candidate fire regions using (3). Figure 3 shows the original RGB image (a) and its YCbCr components. The mean values of the YCbCr channels are then calculated using (4), (5), and (6):

Ymean = (1/NM) Σx Σy Y(x, y),  (4)
Cbmean = (1/NM) Σx Σy Cb(x, y),  (5)
Crmean = (1/NM) Σx Σy Cr(x, y),  (6)

where Ymean, Cbmean, and Crmean are the mean values of the YCbCr channels; Y(x, y), Cb(x, y), and Cr(x, y) are the channel values for the pixel at location (x, y); and NM is the total number of pixels.
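The conversion and the channel means can be sketched in Python; the standard ITU-R BT.601 full-range transform is assumed for (3), since the paper's exact conversion matrix is not reproduced in this excerpt.

```python
import numpy as np

def rgb_to_ycbcr(img):
    # Assumed ITU-R BT.601 full-range RGB -> YCbCr conversion (Eq. (3)).
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def channel_means(y, cb, cr):
    # Eqs. (4)-(6): mean of each channel over all N*M pixels.
    return y.mean(), cb.mean(), cr.mean()
```

For a pure white RGB pixel (255, 255, 255), this conversion yields Y = 255 with Cb and Cr at the neutral value 128, matching the observation that the hottest fire core appears white.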
2.3. Fire Color Pixel Detection Rules
In any fire image, the red value of a fire pixel is larger than the green, and the green is larger than the blue, as illustrated in Figure 4: (a) is a fire image and (b) is the RGB channel histogram of the same image. This fact is expressed in RGB color space as R > G > B and can be translated into YCbCr using the following equations:

Y(x, y) > Cb(x, y),  (7)
Cr(x, y) > Cb(x, y).  (8)

In addition, for a fire pixel the Y component is greater than the mean Y of the image, the Cb component is smaller than the mean Cb, and the Cr component is greater than the mean Cr. This can be expressed as

F(x, y) = 1 if Y(x, y) > Ymean, Cb(x, y) < Cbmean, and Cr(x, y) > Crmean; 0 otherwise,  (9)

where F(x, y) can be any pixel in the image, and Ymean, Cbmean, and Crmean are the mean values for Y, Cb, and Cr, respectively.
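These rules can be combined in a short Python sketch; the inequality forms Y > Cb and Cr > Cb are assumed as the YCbCr translation of R > G > B, since the paper's exact equations are not reproduced in this excerpt.

```python
import numpy as np

def fire_color_mask(y, cb, cr):
    # Translated channel-ordering rules (assumed forms):
    # Y(x, y) > Cb(x, y) and Cr(x, y) > Cb(x, y).
    rule_channels = (y > cb) & (cr > cb)
    # Mean-based rules from the text: Y above its image mean,
    # Cb below its mean, Cr above its mean.
    rule_means = (y > y.mean()) & (cb < cb.mean()) & (cr > cr.mean())
    return rule_channels & rule_means
```

The returned boolean mask marks candidate fire pixels that satisfy all of the color rules simultaneously.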
As shown, the fire region is predominantly dark ("black") in the Cb component and bright ("white") in the Cr component. This observation can be expressed as

F(x, y) = 1 if |Cb(x, y) − Cr(x, y)| ≥ τ; 0 otherwise,  (10)

where τ is a constant specified using a receiver operating characteristic (ROC) analysis, by applying different values of τ. To measure the true detection rate and false detection rate, a data set consisting of 500 images (300 forest fire images and 200 nonfire images) collected from the Internet was used. Only (10) was applied, with different values of τ, to produce binary images of the candidate fire regions. τ = 70 was selected, resulting in a true detection rate of more than 90% and a false detection rate of less than 40%.
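This chrominance-gap rule might be implemented as follows; the absolute-difference form |Cb − Cr| ≥ τ is an assumed reading of (10), with τ = 70 taken from the text.

```python
import numpy as np

def chrominance_rule(cb, cr, tau=70):
    # Fire pixels show a wide gap between a dark Cb and a bright Cr;
    # the |Cb - Cr| >= tau form is an assumed reading of Eq. (10).
    return np.abs(cb.astype(np.int32) - cr.astype(np.int32)) >= tau
```

Casting to a signed integer type before subtracting avoids unsigned-integer wraparound when Cb < Cr.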
2.4. Temporal Variation
Using color models alone is not enough to identify fire correctly, because several objects share the same color as fire, such as red leaves, desert terrain, and other red moving objects. The main difference between actual fire and fire-colored objects is the nature of their motion. The shape and size of a flame change constantly because of the burning material and airflow, producing high temporal variation, whereas the motion of rigid bodies produces low temporal variation. It is therefore possible to differentiate fire pixels from merely fire-colored ones. To detect fire movement, the difference between successive frames is analyzed. For a video sequence consisting of n frames, the average temporal variation is defined as

ξ(x, y) = (1/(n − 1)) Σi=2..n |Ii(x, y) − Ii−1(x, y)|,  (11)

where ξ(x, y) is the average temporal variation and Ii(x, y) is the pixel intensity at location (x, y) in the ith frame. If ξ(x, y) > thr (an experimentally determined threshold), the moving pixel is classified as fire.
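The temporal-variation test can be sketched as follows. The averaging of successive-frame differences follows the description above, while the threshold value used here is a hypothetical placeholder, not the paper's tuned value.

```python
import numpy as np

def average_temporal_variation(frames):
    # Mean absolute difference between successive frames at each pixel:
    # high for flickering flames, low for rigid fire-colored objects.
    frames = np.asarray(frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0)).mean(axis=0)

def fire_motion_mask(frames, thr=10.0):
    # A candidate fire pixel must also vary strongly over time
    # (thr = 10.0 is an assumed threshold for illustration).
    return average_temporal_variation(frames) > thr
```

A static red object produces zero variation and is rejected, while a flickering region exceeds the threshold.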
3. Results and Performance Analysis
To measure the performance of the proposed model, 6 videos were collected from the Internet; 3 of them are available at http://www.ultimatechase.com/. Four of these videos contain actual fire, and two contain fire-colored objects. The algorithm was implemented in MATLAB (R2017a) and tested on an Intel Core i7 2.97 GHz PC with 8 GB of RAM. Figure 6 shows the variety of forest fire conditions in the test videos. A true-positive was counted if an image frame contained fire pixels and was determined by the proposed model to be fire. In contrast, a false-positive was counted if the image frame contained no fire but was determined to be fire. Table 1 shows the true-positives, false-positives, percentage of true-positives, and percentage of false-positives for the tested videos.
The results in Table 1 show that the proposed method achieved an average true-positive percentage of up to 96.63% on the tested forest fire videos, with a 9.23% false-positive rate. These results indicate the good performance of the proposed method in forest fire detection.
3.1. Performance Evaluation
To evaluate the proposed method, a comparison with the above-mentioned methods was carried out. All methods were tested on a data set consisting of 500 images (300 forest fire images and 200 nonfire images) collected from the Internet. The algorithms' performances were measured using the F-score metric.
The F-score is used in this study to evaluate the performance of the fire detection algorithms. For any detection method, there are four possible outcomes. If an image contains fire pixels and is determined by the method to be fire, it is a true-positive; if the same image is determined not to contain fire, it is a false-negative. If an image contains no fire and is detected as no fire, it is a true-negative; if it is detected as fire, it is a false-positive. The algorithms are evaluated using

Precision = TP / (TP + FP),
Recall = TP / (TP + FN),
F = 2 · Precision · Recall / (Precision + Recall),

where F is the F-score and TP, TN, FP, and FN are the counts of true-positives, true-negatives, false-positives, and false-negatives, respectively. A higher F-score means better overall performance. Table 2 and Figure 7 show the comparison results, using the following rates:
(i) TP-rate: TP divided by the total number of fire images.
(ii) TN-rate: TN divided by the total number of nonfire images.
(iii) FN-rate: FN divided by the total number of fire images.
(iv) FP-rate: FP divided by the total number of nonfire images.
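From the four counts, the F-score reduces to a few lines; this follows the standard precision/recall definition.

```python
def f_score(tp, fp, fn):
    # Precision: fraction of fire detections that are real fire.
    precision = tp / (tp + fp)
    # Recall: fraction of real fire images that are detected.
    recall = tp / (tp + fn)
    # F-score: harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```

Note that true-negatives do not enter the F-score, which is why the TN-rate is reported separately above.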
Figure 7 shows the F-scores of the three methods. The proposed method's F-score is higher than those of the existing methods described in [6, 7], indicating that the proposed method performs better than the existing methods.
4. Conclusions
This study proposes an effective forest fire detection method using image processing techniques, including movement containing region detection based on background subtraction and color segmentation. The algorithm uses the YCbCr color space, which separates luminance from chrominance well and yields a good detection rate. Five fire detection rules are applied to detect the fire. The performance of the proposed algorithm was tested on a data set consisting of 6 videos collected from the Internet, four of which were actual fire videos, while two were videos of fire-like objects. The TP-rate and TN-rate were calculated. The results show that the proposed algorithm achieves good detection rates, indicating that the proposed method is accurate and can be used in automatic forest fire-alarm systems.
For future work, the system could be improved by using a combination of rules of different color spaces; however, the challenge is selecting the right rules from different color spaces to build the method.
Data Availability
The data consists of 6 videos; 3 of them are available at http://www.ultimatechase.com/ under license, and the other 3 were collected from the Internet.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the Fundamental Research Funds for the Central Universities (2572017PZ10).
- B. Ko and S. Kwak, “Survey of computer vision-based natural disaster warning systems,” Optical Engineering, vol. 51, no. 7, Article ID 070901, 2012.
- J. R. Martinez-de Dios, B. C. Arrue, A. Ollero, L. Merino, and F. Gómez-Rodríguez, “Computer vision techniques for forest fire perception,” Image and Vision Computing, vol. 26, no. 4, pp. 550–562, 2008.
- Y. Meng, Y. Deng, and P. Shi, “Mapping Forest Wildfire Risk of the World,” in World Atlas of Natural Disaster Risk, P. Shi and R. Kasperson, Eds., pp. 261–275, Springer Berlin Heidelberg, Berlin, Germany, 2015.
- P. M. Hanamaraddi, “A Literature Study on Image Processing for Forest Fire Detection,” IJITR, vol. 4, pp. 2695–2700, 2016.
- P. Podržaj and H. Hashimoto, “Intelligent space as a framework for fire detection and evacuation,” Fire Technology, vol. 44, no. 1, pp. 65–76, 2008.
- C. Thou-Ho, W. Ping-Hsueh, and C. Yung-Chuen, “An early fire-detection method based on image processing,” in Proceedings of the 2004 International Conference on Image Processing, ICIP '04, vol. 3, pp. 1707–1710, 2004.
- Y. Wang and J. Ye, “Research on the algorithm of prevention forest fire disaster in the Poyang Lake Ecological Economic Zone,” Advanced Materials Research, pp. 5257–5260, 2012.
- A. D. Alzughaibi, H. A. Hakami, and Z. Chaczko, “Review of human motion detection based on background subtraction techniques,” International Journal of Computer Applications, vol. 122, 2015.
- C. E. Premal and S. S. Vinsley, “Image processing based forest fire detection using YCbCr colour model,” in Proceedings of the 2014 International Conference on Circuits, Power and Computing Technologies, ICCPCT 2014, pp. 1229–1237, 2014.
- V. Vipin, “Image processing based forest fire detection,” International Journal of Emerging Technology and Advanced Engineering, vol. 2, pp. 87–95, 2012.
- L.-H. Chen and W.-C. Huang, “Fire detection using spatial-temporal analysis,” in Proceedings of the World Congress on Engineering, pp. 3–5, 2013.
- T. Fawcett, “ROC graphs: Notes and practical considerations for researchers,” Machine Learning, vol. 31, pp. 1–38, 2004.
Copyright © 2018 Mubarak A. I. Mahmoud and Honge Ren. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.