Special Issue: Robotic Perception of the Sight and Touch to Interact with Environments
Research Article | Open Access
Jong-Hwan Kim, Seongsik Jo, Brian Y. Lattimer, "Feature Selection for Intelligent Firefighting Robot Classification of Fire, Smoke, and Thermal Reflections Using Thermal Infrared Images", Journal of Sensors, vol. 2016, Article ID 8410731, 13 pages, 2016. https://doi.org/10.1155/2016/8410731
Feature Selection for Intelligent Firefighting Robot Classification of Fire, Smoke, and Thermal Reflections Using Thermal Infrared Images
Locating a fire inside a structure that is not in the direct field of view of the robot has been researched for intelligent firefighting robots. By classifying fire, smoke, and their thermal reflections, firefighting robots can assess local conditions, decide on a proper heading, and autonomously navigate toward a fire. Long-wavelength infrared camera images were used to capture the scene due to the camera’s ability to image through zero-visibility smoke. This paper analyzes motion and statistical texture features acquired from thermal images to discover suitable features for accurate classification. A Bayesian classifier is implemented to probabilistically classify multiple classes, and a multiobjective genetic algorithm optimization is performed to find the combination of features with the lowest errors and the highest performance. The distributions of the feature combinations with 6.70% or less error were analyzed, and the best solution for the classification of fire and smoke was identified.
1. Introduction
Intelligent firefighting humanoid robots are actively being researched to reduce firefighter injuries and deaths as well as to increase their effectiveness in performing tasks [1–5]. One task is locating a fire inside a structure outside the robot’s field of view (FOV). Fire, smoke, and their thermal reflections can be clues to determine a heading that will ultimately lead the robot to the fire so that it can suppress it. However, research on accurately classifying these clues has been incomplete.
2. Previous Features
Conventional fire and/or smoke detection systems [6, 7] in Table 1 mainly used temperature, ionization, and ultraviolet light to indicate the presence of fire and/or smoke inside a structure, but they can have a long response time in large spaces and do not provide sufficient data on the location of the fire and/or smoke. More recently, vision systems using color [9–12], motion [13, 14], both [8, 15–17], and texture features [12, 18, 19] have been researched to characterize fire or smoke (Table 1). However, color features from an RGB camera are not applicable to firefighting robots because RGB cameras operate in the visible to short-wavelength infrared (IR) range (less than 1 micron) and are not usable in smoke-filled environments where visibility has sufficiently decreased [2, 14]. Motion (e.g., dynamic motion, shape change) can be another clue for detecting fire and smoke by characterizing flickering flames and smoke flow from a stationary vision system. However, a vision system onboard a robot moves with the dynamics of the robot itself, which introduces a large amount of noise and requires extensive computation for motion compensation. Texture features researched in [12, 18, 19] were used to identify fire or smoke. The spatial characteristics of textures can be useful for recognizing the patterns of fire and smoke by remote sensing and are less influenced by rotation and motion.
Long-wavelength infrared cameras, similar to the handheld thermal infrared cameras (TICs) typically used to aid firefighting tasks in smoke-filled environments [20–22] as well as fire-front and burned-area recognition in remote sensing, are used in this research. Because TICs absorb infrared radiation in the long-wavelength IR band (7–14 microns), they can image surfaces even in dense smoke and zero-visibility environments [2, 14]. In addition, a TIC can provide useful information in local or global darkness, for example, shadows or darkness caused by damaged lighting. Recently, thermal images from TICs have been studied for remote pattern and motion recognition. The cameras detect hot objects as well as thermal reflections off of surfaces. As a result, image processing on detected objects must be sufficiently robust to discern between desired objects and their thermal reflections.
This study ultimately enables the shipboard autonomous firefighting robot (SAFFiR), whose prototype is displayed in Figure 1, to autonomously navigate toward a fire outside its FOV in indoor fire environments. To do so, the robot needs to identify clues such as smoke and the thermal reflections of smoke and fire by itself in order to navigate correctly toward the fire. However, the recognition of these key features has not been fully studied. This paper analyzes the appropriate combination of features to accurately classify fire, smoke, their thermal reflections, and other hot objects using thermal infrared images. Large-scale fire tests were conducted to create actual fire environments covering wide ranges of temperature and smoke conditions. A long-wavelength IR camera was installed to produce 14-bit thermal images of the fire environment. These images were used to extract motion and statistical texture features in regions of interest (ROI). Bayesian classification was performed to probabilistically identify multiple classes in real-time. To identify the best combination of features for accurate classification, a multiobjective optimization was implemented using two objective functions: resubstitution and cross-validation errors.
3. Motion and Texture Features
In a pattern recognition system, the choice of features plays an important role in classification performance. Both motion and texture features were selected because they were crucial in previous studies of fire and/or smoke detection and are well suited to thermal image analysis, which is the major source of information the firefighting robot can acquire in fire environments. Optical flow, a popular motion measurement, was used for the motion features, while first- and second-order statistical texture features were applied for the texture measurement.
A FLIR A35 long-wavelength IR camera, which is capable of imaging through zero-visibility environments, was used to produce images. All images came from a 320 × 256 pixel focal plane array at a 60 Hz frame rate, producing 14-bit images with an intensity range of −16384 at −40°C to −1 at 550°C. Fifteen features from optical flow and the statistical texture features are evaluated to find the best feature combination. Optical flow shows temporal variations due to moving objects in the FOV or motion of the robot. The first- and second-order statistical texture features capture the spatial characteristics of objects in the scene.
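If a raw 14-bit count needs to be interpreted as a temperature, a linear interpolation between the stated endpoints is one plausible sketch. The function name and the assumption of a linear radiometric response are ours, not FLIR's published calibration:

```python
def intensity_to_celsius(i):
    """Map a raw count in [-16384, -1] to [-40 degC, 550 degC].

    Assumes a linear response between the two stated endpoints --
    an illustration only, not the camera's actual calibration curve.
    """
    lo_i, hi_i = -16384, -1      # intensity endpoints (14-bit signed counts)
    lo_t, hi_t = -40.0, 550.0    # corresponding temperature endpoints (degC)
    return lo_t + (i - lo_i) * (hi_t - lo_t) / (hi_i - lo_i)
```

Under this assumption the two endpoints map back exactly, and intermediate counts interpolate linearly between them.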
3.1. Motion Features by Optical Flow
Optical flow is a useful tool for recognizing the motion of an object in sequential images. Optical flow methods fall into local and global categories. Lucas-Kanade (LK) is a local method that is relatively robust but yields a less dense flow field, while Horn-Schunck (HS) is a global method with a dense flow field and high sensitivity to noise. Because the intensities in the thermal image change with the varying fire environment, the LK method, which is more robust than HS, was selected in this research to measure the motion features of objects. Two features, the optical flow vector number (OFVN) and the optical flow vector mean magnitude (OFVMM), were computed to quantitatively characterize the motions of fire, smoke, and their reflections. Figure 2 contains RGB and thermal images of dense smoke in a hallway and a wood crib fire in a room. Red arrows in the thermal images indicate the direction and magnitude of the optical flow vectors, with red boxes marking smoke, fire, and thermal reflections.
(a) RGB images
(b) Thermal images
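Once Lucas-Kanade flow vectors are available for a region of interest, the two motion features reduce to simple statistics over the vector magnitudes. A minimal sketch, where the function name and the magnitude threshold are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def motion_features(flow_vectors, min_magnitude=1.0):
    """Summarize LK flow vectors inside a region of interest.

    OFVN  = number of flow vectors whose magnitude exceeds a threshold
    OFVMM = mean magnitude of those vectors (0.0 if none qualify)

    `flow_vectors` is an (N, 2) array of (dx, dy) displacements; the
    default threshold is an illustrative choice, not from the paper.
    """
    v = np.asarray(flow_vectors, dtype=float)
    mag = np.hypot(v[:, 0], v[:, 1])      # per-vector magnitude
    moving = mag > min_magnitude          # keep only "moving" vectors
    ofvn = int(np.count_nonzero(moving))
    ofvmm = float(mag[moving].mean()) if ofvn else 0.0
    return ofvn, ofvmm
```

For example, two vectors of magnitude 5 and 10 plus a zero vector yield OFVN = 2 and OFVMM = 7.5.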
3.2. First- and Second-Order Statistical Texture Features
The first- and second-order statistical features were considered in this study for object classification. The first-order statistical features estimate individual properties of pixels, without characterizing any relationship between neighboring pixels, and can be computed using the intensity histogram of the candidate region of interest (ROI) in the image. Mean (MNI), variance (VAR), standard deviation (STD), skewness (SKE), and kurtosis (KUR) were calculated by

MNI = μ = (1/N) Σ_(i,j) I(i,j),  VAR = σ² = (1/N) Σ_(i,j) (I(i,j) − μ)²,  STD = σ,
SKE = (1/N) Σ_(i,j) ((I(i,j) − μ)/σ)³,  KUR = (1/N) Σ_(i,j) ((I(i,j) − μ)/σ)⁴,  (1)

where I(i,j) refers to the intensity of a pixel at (i,j) and N denotes the number of pixels (NOP) of the object in the image. The second-order statistical features represent spatial relationships between a pixel and its neighbors. The gray-level cooccurrence matrix (GLCM) is used to account for adjacent pixel relationships in four directions (horizontal, vertical, and the left and right diagonals) by quantizing the spatial cooccurrence of neighboring pixels. A total of seven second-order statistical features were used: dissimilarity (DIS), entropy (ENT), contrast (CON), inverse difference (INV), correlation (COR), uniformity (UNI), and inverse difference moment (IDM). To measure these features, a normalized cooccurrence matrix p is used, defined as

p(g₁, g₂) = C(g₁, g₂) / Σ_(g₁=1..G) Σ_(g₂=1..G) C(g₁, g₂),  (2)

where C(g₁, g₂) refers to the frequency of occurrence of the gray levels g₁ and g₂ at adjacent pixels within the four directions and G denotes the number of gray levels in the quantized image. The denominator of (2) normalizes the counts so that p estimates the cooccurrence probabilities. After building the normalized cooccurrence matrix p, the seven second-order statistical features were computed by

DIS = Σ |g₁ − g₂| p(g₁, g₂),  ENT = −Σ p(g₁, g₂) log p(g₁, g₂),  CON = Σ (g₁ − g₂)² p(g₁, g₂),
INV = Σ p(g₁, g₂)/(1 + |g₁ − g₂|),  COR = Σ (g₁ − μ₁)(g₂ − μ₂) p(g₁, g₂)/(σ₁σ₂),
UNI = Σ p(g₁, g₂)²,  IDM = Σ p(g₁, g₂)/(1 + (g₁ − g₂)²),  (3)

where the sums run over all gray-level pairs (g₁, g₂) and μ₁, μ₂, σ₁, σ₂ are the means and standard deviations of the marginal distributions of p.
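The first-order statistics and the normalized cooccurrence matrix described above can be sketched directly in NumPy. For brevity this illustration uses a single pixel offset rather than the paper's four directions, and the helper names are ours:

```python
import numpy as np

def first_order_features(roi):
    """MNI, VAR, STD, SKE, KUR over the nonzero (object) pixels of an ROI."""
    x = roi[roi > 0].astype(float)
    mni = x.mean()
    var = x.var()
    std = np.sqrt(var)
    ske = ((x - mni) ** 3).mean() / std ** 3
    kur = ((x - mni) ** 4).mean() / std ** 4
    return mni, var, std, ske, kur

def glcm(quantized, levels, offset=(0, 1)):
    """Normalized gray-level cooccurrence matrix for one pixel offset.

    The paper accumulates four directions; one horizontal offset is
    used here to keep the sketch short.
    """
    di, dj = offset
    p = np.zeros((levels, levels))
    h, w = quantized.shape
    for i in range(max(0, -di), min(h, h - di)):
        for j in range(max(0, -dj), min(w, w - dj)):
            p[quantized[i, j], quantized[i + di, j + dj]] += 1
    return p / p.sum()

def glcm_features(p):
    """A few of the second-order features: contrast, entropy, uniformity."""
    idx = np.arange(p.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    con = ((i - j) ** 2 * p).sum()               # contrast (CON)
    ent = -(p[p > 0] * np.log(p[p > 0])).sum()   # entropy (ENT)
    uni = (p ** 2).sum()                         # uniformity (UNI)
    return con, ent, uni
```

A symmetric ROI such as intensities {2, 4, 6} gives zero skewness, and a two-level image with mostly (0,0) and (0,1) neighbors yields a sparse cooccurrence matrix whose contrast equals the off-diagonal probability mass.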
4. Object Extraction and Bayesian Classification
One of the main characteristics of fire, smoke, and their thermal reflections in thermal images is that they are higher in intensity than the background. With intensity related to temperature in the thermal image, higher temperature objects appear brighter than the background. Hence, intensity is a primary factor for extracting objects from the background. Assuming that the thermal image histogram has a bimodal distribution for foreground (i.e., object) and background, the clustering-based autothresholding technique known as Otsu's method can calculate an optimum threshold that separates objects from background, creating a binary image with 0 for the background and 1 for the objects. The binary images were filtered to remove small regions and holes inside objects through morphological filtering techniques. After masking the original 14-bit image with the filtered binary image (element-wise multiplication), a final image was obtained that preserves the original 14-bit intensities inside objects and is zero in the background.
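A minimal sketch of this extraction step, implementing Otsu's between-class-variance criterion over the histogram and then masking the original image. Morphological filtering is omitted, and the function names are ours:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()          # histogram probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()        # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0    # class means
        m1 = (p[k:] * centers[k:]).sum() / w1
        var_b = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var_b > best_var:
            best_var, best_t = var_b, centers[k]
    return best_t

def extract_objects(img):
    """Binary object mask, then the masked image with the background zeroed."""
    mask = img > otsu_threshold(img)
    return mask, np.where(mask, img, 0)
```

On a strongly bimodal image the threshold lands between the two modes, so the mask isolates the hot pixels and the masked image keeps their original intensities.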
There are several classification methods commonly used in supervised machine learning: k-nearest neighbors (k-NN), decision trees (DT), neural networks (NN), support vector machines (SVM), and naïve Bayesian classification. For this study, these methods were compared on three points: the capability to classify multiple classes such as fire, smoke, and their thermal reflections; a low chance of overfitting, because fire environments present many situations that were never trained; and real-time implementation, because the firefighting robot must make decisions in real-time or it cannot perform its task. k-NN is insensitive to outliers but needs a large amount of memory and expensive computation. DT has a low computational burden, but for multiclass classification it may generate a complicated tree structure and cause overfitting [35, 36]. NN shows high performance when processing multidimensional and continuous features but is prone to overfitting. SVM provides fast computation and high accuracy, but it produces binary results and thus cannot directly handle multiclass classification. Naïve Bayesian classification is a probabilistic classification based on Bayes' theorem and is popular in pattern recognition applications. Although this method has lower accuracy than some other classifiers and assumes that the features are independent, it offers fast computation, robustness to untrained cases, and a low chance of overfitting. It also supports probabilistic decision making over multiple classes with computation fast enough for real-time implementation. In this study, Bayesian classification is used for the evaluation of each feature.
With several given features x = (x₁, ..., xₙ) (motion and texture features), we can calculate the probability that one class Cₖ (fire, smoke, thermal reflections, etc.) corresponds to the candidate by using the conditional probability P(Cₖ | x), also known as the posterior probability. By Bayes' theorem, it can be written with prior, likelihood, and evidence as

P(Cₖ | x) = P(Cₖ) p(x | Cₖ) / Σ_(j=1..K) P(C_j) p(x | C_j),  (4)

where P(Cₖ) is the prior probability, representing the probability that the candidate belongs to class Cₖ, calculated as the number of samples of class Cₖ divided by the total number of samples. p(x | Cₖ) is the likelihood function, and the denominator of (4) is the evidence, a normalizing constant formed by summing the product of the prior and likelihood over each class. By applying the conditional independence assumption, the likelihood function can be rewritten as

p(x | Cₖ) = Π_(i=1..n) p(xᵢ | Cₖ).  (5)

The conditional probability density function for each feature is modeled as a Gaussian,

p(xᵢ | Cₖ) = (1/(σᵢₖ√(2π))) exp(−(xᵢ − μᵢₖ)²/(2σᵢₖ²)),  (6)

where

μᵢₖ and σᵢₖ are the mean and standard deviation of feature i within class k.  (7)

As shown in Table 2, Gaussian parameters for the fifteen features with respect to smoke, smoke thermal reflection, fire, and fire thermal reflection were estimated using maximum likelihood estimation. Probability density distributions for the entire set of features are illustrated in Figure 3. With (5), the evidence and then the posterior probability of each class were calculated. By applying the maximum posterior decision rule in (8), the Bayesian classification predicts the class and probability of each candidate in the scene:

Ĉ = argmaxₖ P(Cₖ | x).  (8)
Figure 3 shows the probability density distribution of each class using the Gaussian parameters of Table 2. The Gaussian distributions in Figure 3 show how fire, fire-reflection, smoke, and smoke-reflection are distributed across the fifteen features. Some features separate the distributions of the four classes while others cause overlap. For example, MNI shows the best-separated case, although smoke and its reflection, and fire and its reflection, still overlap. SKE shows the worst case, in which all classes overlap, making it impossible to distinguish any of the four classes.
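The Bayesian classification described above amounts to a Gaussian naive Bayes posterior followed by an argmax. A compact sketch under the same conditional independence assumption, with array shapes and names of our choosing:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density, broadcast over arrays."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def naive_bayes_posteriors(x, priors, means, stds):
    """Posterior probability of each class for one feature vector x.

    `means` and `stds` are (n_classes, n_features) arrays of Gaussian
    parameters, as would be fit by maximum likelihood from the training
    data; `priors` holds the class prior probabilities.
    """
    x = np.asarray(x, dtype=float)
    # independence assumption: per-class likelihood is the product over features
    likelihood = np.prod(gaussian_pdf(x, means, stds), axis=1)
    joint = np.asarray(priors, dtype=float) * likelihood
    return joint / joint.sum()            # divide by the evidence
```

Prediction is then `int(np.argmax(posteriors))`, and the maximum posterior itself serves as the reported confidence for the candidate.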
5. Result and Discussion
The accuracy in classifying fire objects was analyzed using data from a series of large-scale tests in the facility with actual fires up to 75 kW. Fires included latex foam, wood cribs, and propane gas fires from a sand burner. These different types of fires produced a range of temperature and smoke conditions. Latex foam fires produced lower temperature conditions but dense, low-visibility smoke. Conversely, propane gas fires produced higher gas temperatures and light smoke. Wood crib fires resulted in smoke and gas temperatures between those of latex foam and propane gas fires; however, these fires produced sparks from the burning wood. Thermal images were collected by driving a wheeled mobile robot through the setup during a fire test. A total of 10,775 objects were collected from the experiments and categorized as smoke, smoke-reflection, fire, fire-reflection, or other hot objects to serve as clues leading the firefighting robot toward the fire source outside the FOV. In addition, as each object has sixteen corresponding data points (fifteen features and a class), the total number of data points used in this paper is 172,400. The number of objects in each class is shown in Table 3.
Two error criteria (resubstitution and k-fold cross-validation errors) were used to measure how accurately each feature performs in the classification. Resubstitution error takes the entire dataset and compares the actual classes with the classes predicted by the Bayesian classification to examine how well the two match. When this criterion alone is used to tune accuracy, the classification can overfit the training dataset. Cross-validation error is useful for detecting and preventing overfitting. Instead of using the entire dataset, cross-validation randomly splits the dataset into k partitions of approximately equal size and estimates a mean error by comparing each held-out partition against the result of training on the remaining partitions.
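The two error criteria can be sketched as follows, with `fit`/`predict` standing in for any classifier (a hypothetical interface of our choosing, not the paper's code):

```python
import numpy as np

def resubstitution_error(fit, predict, X, y):
    """Train and evaluate on the same dataset."""
    model = fit(X, y)
    return float(np.mean(predict(model, X) != y))

def kfold_error(fit, predict, X, y, k=10, seed=0):
    """Mean error over k folds: each partition is held out once and
    predicted by a model trained on the remaining k-1 partitions."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errs.append(np.mean(predict(model, X[test]) != y[test]))
    return float(np.mean(errs))
```

For a toy majority-class classifier on 15 zeros and 5 ones, both criteria return 0.25, since every held-out partition is predicted as the majority class.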
5.1. Single Feature Performance
The performance results for each single feature are shown in Table 4. The first-order statistical texture features MNI, VAR, and STD produced the lowest errors, while NOP, SKE, and KUR show the highest. These results show that MNI and VAR are beneficial for distinguishing fire, smoke, and thermal reflections while the motion features are not. NOP shows the highest error, and OFVMM, one of the motion features, shows the second-highest error compared with the other features. This is in part attributed to the dynamic motion of the robot. The second-order statistical texture features ENT and COR show 42–45% error, higher than the other second-order features.
5.2. Multiple Feature Combination Performance
The error results in Table 4 demonstrate that a single feature cannot accurately classify fire, smoke, and thermal reflections. Thus, combinations of multiple features were considered and analyzed to find the best combination. The total number of possible combinations with two or more features is

N = Σ_(r=2..n) C(n, r) = 2ⁿ − n − 1,  (9)

where n refers to the total number of features (i.e., n = 15) and r is the number of features in the combination. Over these combinations, the multiobjective genetic algorithm optimization in the global optimization toolbox of MATLAB was used to find the combination of features with the highest classification performance. The objective functions in the optimization, resubstitution and k-fold cross-validation errors, were used to measure how accurately different feature combinations perform in the classification.
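The size of this search space can be checked directly; with fifteen features there are 32,752 candidate combinations of two or more features:

```python
from math import comb

n = 15  # total number of features extracted from the thermal images
# subsets of size two or more: all 2**n subsets minus the empty set
# and the n single-feature subsets
total = sum(comb(n, r) for r in range(2, n + 1))
print(total)  # 32752, i.e. 2**15 - 15 - 1
```

This count motivates the genetic algorithm search: exhaustively evaluating every combination with two error criteria is feasible but costly, and grows exponentially with the number of candidate features.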
Figure 4 contains a plot of the error associated with the most promising feature combinations. The behavioral solution set is defined as feature combinations with less than 7% error for both objective functions while the general set refers to all other possible feature combinations. The behavioral solution set contains 0.0061% of all possible feature combinations.
The occurrence probability of each feature in the behavioral solution set is illustrated in Figure 5. In the behavioral solution set, the first-order statistical texture features MNI and SKE always appear while the OFVN, NOP, and OFVMM features never do. The first-order statistical texture features STD and VAR and the second-order statistical texture features COR, ENT, and DIS show higher occurrence than the other first- and second-order texture features, while KUR, IDM, UNI, INV, and CON show lower occurrence. Note that, due to the robot's dynamic motion, the motion features were not successful and were not even included in the top 10 feature combinations of the behavioral set.
The top features based on the occurrence probability in Figure 5 are COR, ENT, DIS, SKE, STD, VAR, and MNI. However, the combination of these seven features does not produce the best solution for classification. Table 5 contains the classification performance of the feature combinations in the behavioral solution set. To evaluate the performance of each feature combination, several performance measures were used: precision, sensitivity (recall), F-measure, and accuracy. Precision measures the fraction of instances predicted positive that are actually positive, while recall measures the fraction of actual positive instances that are correctly identified. F-measure is the harmonic mean of precision and recall, and accuracy is the proportion of correct results. These measures can be mathematically defined as

precision = TP/(TP + FP),  recall = TP/(TP + FN),
F-measure = 2 · precision · recall/(precision + recall),
accuracy = (TP + TN)/(TP + TN + FP + FN),  (10)

where TP is correctly classified positive cases, TN is correctly classified negative cases, FP is incorrectly classified negative cases, and FN is incorrectly classified positive cases. For the performance measurement, confusion matrices were created as described in the Appendix and applied in (10). In precision, the index number 1 combination shows the highest performance in the behavioral solution set while index number 7 shows the lowest. In sensitivity, index number 7 records the highest result while index number 4 records the lowest. In F-measure and accuracy, index number 2 shows the highest values while index number 4 shows the lowest. Based on the confusion matrices, most misclassification occurs among smoke, smoke-reflection, and other hot objects because, during small fires, the texture patterns of these classes were diminished and the intensity was too low to distinguish them. The best solution was determined to be the index number 2 combination of MNI, DIS, COR, SKE, and STD, which has the lowest resubstitution and cross-validation errors, 6.68% and 6.70%, respectively.
This combination includes all of the top features based on occurrence probability except ENT and VAR. The four performance results for each feature combination in the behavioral solution set are shown in Figure 6, where the highest results are marked with red circles and the lowest with green-dot circles. Sensitivity appears higher than precision for each feature combination because the FPs are larger than the FNs in the confusion matrix. In particular, index number 7 has the largest difference between FP and FN, resulting in the highest sensitivity and lowest precision. The sum of FP and FN for index number 4 is the highest in the behavioral solution set, resulting in the lowest accuracy, while index number 2 has the lowest sum of FP and FN, providing the highest accuracy.
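The four measures described above follow directly from confusion-matrix counts; a one-function sketch (the function name is ours):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall (sensitivity), F-measure, and accuracy from
    confusion-matrix counts for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_measure, accuracy
```

With TP = 8, FP = 2, FN = 2, TN = 8, all four measures equal 0.8; an excess of FPs over FNs lowers precision below recall, matching the behavior noted for index number 7.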
This study investigated a wide range of features from long-wavelength infrared camera images, analyzed the normal distributions of fifteen features with respect to the classes of smoke, fire, and their thermal reflections, and discovered the highest performing feature combination by examining single features and multiple feature combinations. As a result, the proposed feature combination of MNI, DIS, COR, SKE, and STD improves performance compared with the previous study, which used MNI, VAR, ENT, and IDM. As shown in Figure 7, the resubstitution and cross-validation errors are reduced by 2.86% and 2.68%, respectively, and the accuracy, F-measure, sensitivity, and precision are increased by 2.90%, 1.58%, 0.20%, and 2.85%, respectively.
Figure 8 shows the original visible and thermal images with the robot at three different locations in the experimental facility: the start point, the hallway entrance, and the room entrance. Each row is a series of images from the robot at the three locations. The first row contains visible images from the robot's view. As seen in the visible image at the start point, information about the hallway is limited due to shadowing of the light. The image at the hallway entrance shows a smoke layer in the upper portion of the hallway caused by a fire inside the room. The image at the room entrance displays a wood crib fire with sparks. Because of soot and the relative difference in brightness, the background appears darker, limiting information about the surroundings of the fire.
Thermal infrared images are displayed in the second row to show information that an RGB camera cannot provide in fire environments. Unlike the visible image at the start point, which is obscured by shadowing, the presence of smoke and its thermal reflections on the ventilation hood is clearly perceptible. The red boxes on the thermal images indicate objects extracted through the adaptive object extraction, along with optical flows and identification numbers. In spite of dense smoke and low visibility, the thermal images capture smoke and fire as well as background information that is otherwise not visible through visual imaging.
In the third row, class labels and posterior probabilities of each candidate are displayed at the center of each candidate ROI as the result of Bayesian classification. With enhanced image processing, these thermal images are more refined and clearer than those in the second row. Smoke, fire, and their thermal reflections are identified and marked with red or orange ellipses.
The appropriate combination of features was investigated to accurately classify fire, smoke, and their thermal reflections using thermal images. Gray-scale 14-bit images from a single infrared camera were used to extract motion and texture features after applying a clustering-based autothresholding technique. Bayesian classification was performed to probabilistically identify multiple classes in real-time. To find the best combination of features, a multiobjective genetic algorithm optimization was implemented using resubstitution and cross-validation errors as objective functions. Large-scale fire tests with different fire sources were conducted to create a range of temperature and smoke conditions for evaluating the feature combinations.
Fifteen motion and texture features were analyzed, and the probability density functions of the features were computed by maximum likelihood estimation. Combinations of multiple features were found to classify fire, smoke, and thermal reflections more accurately than any single feature. In the behavioral solution set, where feature combinations produce less than 7% resubstitution and cross-validation errors, COR, ENT, DIS, SKE, STD, VAR, and MNI had 80.0% or more occurrence while the other features had 40.0% or less. The feature combination of MNI, DIS, COR, SKE, and STD produced the highest classification performance, with resubstitution and cross-validation errors of 6.68% and 6.70% and precision, sensitivity, F-measure, and accuracy of 95.64%, 97.61%, 96.62%, and 93.45%, respectively.
In the near future, the classification of fire, smoke, and their thermal reflections will be evaluated with additional classifiers and features to increase performance. Convolutional neural networks, which have recently shown high performance, could be explored as classifiers; model-based image features such as the discrete wavelet transform will also be studied further.
Appendix
See Figure 9.
The authors declare that there is no conflict of interests regarding the publication of this manuscript.
This work was sponsored by the Office of Naval Research, Grant no. N00014-11-1-0074, scientific officer Dr. Thomas McKenna, in the USA; the Hwarang-dae Research Institute in Seoul; and the Agency for Defense Development in Daejeon, South Korea. The authors would like to thank Mr. Joseph Starr and Mr. Josh McNeil for assisting in performing the fire tests. The authors would also like to thank Rosana K. Lee for helping and supporting this research.
- J.-H. Kim and B. Y. Lattimer, “Real-time probabilistic classification of fire and smoke using thermal imagery for intelligent firefighting robot,” Fire Safety Journal, vol. 72, pp. 40–49, 2015.
- J. W. Starr and B. Y. Lattimer, “Evaluation of navigation sensors in fire smoke environments,” Fire Technology, vol. 50, no. 6, pp. 1459–1481, 2014.
- J.-H. Kim, B. Keller, and B. Y. Lattimer, “Sensor fusion based seek-and-find fire algorithm for intelligent firefighting robot,” in Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM '13), pp. 1482–1486, IEEE, Wollongong, Australia, July 2013.
- J. G. McNeil, J. Starr, and B. Y. Lattimer, “Autonomous fire suppression using multispectral sensors,” in Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics: Mechatronics for Human Wellbeing (AIM '13), pp. 1504–1509, Wollongong, Australia, July 2013.
- J.-H. Kim, J. W. Starr, and B. Y. Lattimer, “Firefighting robot stereo infrared vision and radar sensor fusion for imaging through smoke,” Fire Technology, vol. 51, no. 4, pp. 823–845, 2015.
- R. C. Luo and K. L. Su, “Autonomous fire-detection system using adaptive sensory fusion for intelligent security robot,” IEEE/ASME Transactions on Mechatronics, vol. 12, no. 3, pp. 274–281, 2007.
- M. A. Jackson and I. Robins, “Gas sensing for fire detection: measurements of CO, CO2, H2, O2, and smoke density in European standard fire tests,” Fire Safety Journal, vol. 22, no. 2, pp. 181–205, 1994.
- B. U. Töreyin, R. G. Cinbiş, Y. Dedeoğlu, and A. E. Çetin, “Fire detection in infrared video using wavelet analysis,” Optical Engineering, vol. 46, no. 6, Article ID 067204, 2007.
- B. U. Toreyin, Y. Dedeoglu, and A. E. Cetin, “Wavelet based real-time smoke detection in video,” in Proceedings of the 13th European Signal Processing Conference, pp. 4–8, Antalya, Turkey, September 2005.
- T. Celik, H. Demirel, H. Ozkaramanli, and M. Uyguroglu, “Fire detection using statistical color model in video sequences,” Journal of Visual Communication and Image Representation, vol. 18, no. 2, pp. 176–185, 2007.
- L. Merino, F. Caballero, J. R. Martínez-de-Dios, I. Maza, and A. Ollero, “An unmanned aircraft system for automatic forest fire monitoring and measurement,” Journal of Intelligent & Robotic Systems, vol. 65, no. 1, pp. 533–548, 2012.
- Y. Wang, T. W. Chua, R. Chang, and N. T. Pham, “Real-time smoke detection using texture and color features,” in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 1727–1730, Tsukuba, Japan, November 2012.
- G. Marbach, M. Loepfe, and T. Brupbacher, “An image processing technique for fire detection in video images,” Fire Safety Journal, vol. 41, no. 4, pp. 285–289, 2006.
- M. I. Chacon-Murguia and F. J. Perez-Vargas, “Thermal video analysis for fire detection using shape regularity and intensity saturation features,” in Pattern Recognition, J. F. Martínez-Trinidad, J. A. Carrasco-Ochoa, C. B.-Y. Brants, and E. R. Hancock, Eds., vol. 6718 of Lecture Notes in Computer Science, pp. 118–126, Springer, Berlin, Germany, 2011.
- W. Phillips III, M. Shah, and N. D. V. Lobo, “Flame recognition in video,” Pattern Recognition Letters, vol. 23, no. 1–3, pp. 319–327, 2002.
- D. Han and B. Lee, “Development of early tunnel fire detection algorithm using the image processing,” in Advances in Visual Computing, pp. 39–48, Springer, Berlin, Germany, 2006.
- Y. Chunyu, F. Jun, W. Jinjun, and Z. Yongming, “Video fire smoke detection using motion and color features,” Fire Technology, vol. 46, no. 3, pp. 651–663, 2010.
- F. Yuan, “Video-based smoke detection with histogram sequence of LBP and LBPV pyramids,” Fire Safety Journal, vol. 46, no. 3, pp. 132–139, 2011.
- F. Lafarge, X. Descombes, and J. Zerubia, “Textural kernel for SVM classification in remote sensing: application to forest fire detection and Urban area extraction,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), pp. 1096–1099, September 2005.
- F. Amon and A. Ducharme, “Image frequency analysis for testing of fire service thermal imaging cameras,” Fire Technology, vol. 45, no. 3, pp. 313–322, 2009.
- F. Amon, V. Benetis, J. Kim, and A. Hamins, “Development of a performance evaluation facility for fire fighting thermal imagers,” in Defense and Security, pp. 244–252, 2004.
- F. D. Maxwell, “A portable IR system for observing fire thru smoke,” Fire Technology, vol. 7, no. 4, pp. 321–331, 1971.
- A. Barducci, D. Guzzi, P. Marcoionni, and I. Pippi, “Infrared detection of active fires and burnt areas: theory and observations,” Infrared Physics & Technology, vol. 43, no. 3–5, pp. 119–125, 2002.
- C. Wang and S. Qin, “Adaptive detection method of infrared small target based on target-background separation via robust principal component analysis,” Infrared Physics & Technology, vol. 69, pp. 123–135, 2015.
- N. Aggarwal and R. K. Agrawal, “First and second order statistics features for classification of magnetic resonance brain images,” Journal of Signal and Information Processing, vol. 3, no. 2, pp. 146–153, 2012.
- B. Ko, K.-H. Cheong, and J.-Y. Nam, “Early fire detection algorithm based on irregular patterns of flames and hierarchical Bayesian Networks,” Fire Safety Journal, vol. 45, no. 4, pp. 262–270, 2010.
- H. Maruta, Y. Kato, A. Nakamura, and F. Kurokawa, “Smoke detection in open areas using its texture features and time series properties,” in Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE '09), pp. 1904–1908, IEEE, Seoul, South Korea, July 2009.
- C. M. Bautista, C. A. Dy, M. I. Mañalac, R. A. Orbe, and M. Cordel, “Convolutional neural network for vehicle detection in low resolution traffic videos,” in Proceedings of the IEEE Region 10 Symposium (TENSYMP ), pp. 277–281, IEEE, Bali, Indonesia, May 2016.
- H. Wang, Y. Cai, X. Chen, and L. Chen, “Night-time vehicle sensing in far infrared image with deep learning,” Journal of Sensors, vol. 2016, Article ID 3403451, 8 pages, 2016.
- C. Shen, Z. Bai, H. Cao et al., “Optical flow sensor/INS/magnetometer integrated navigation system for MAV in GPS-denied environment,” Journal of Sensors, vol. 2016, Article ID 6105803, 10 pages, 2016.
- A. Bruhn, J. Weickert, and C. Schnörr, “Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods,” International Journal of Computer Vision, vol. 61, no. 3, pp. 1–21, 2005.
- A. S. N. Huda and S. Taib, “Suitable features selection for monitoring thermal condition of electrical equipment using infrared thermography,” Infrared Physics and Technology, vol. 61, pp. 184–191, 2013.
- R. M. Haralick, K. Shanmugam, and I. H. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
- N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
- P. Harrington, Machine Learning in Action, Manning Publications, 2012.
- D. J. Hand, H. Mannila, and P. Smyth, Principles of Data Mining, MIT Press, 2001.
- D. Lin, X. Xu, and F. Pu, “Bayesian information criterion based feature filtering for the fusion of multiple features in high-spatial-resolution satellite scene classification,” Journal of Sensors, vol. 2015, Article ID 142612, 10 pages, 2015.
- F. Van Der Heijden, R. Duin, D. De Ridder, and D. M. Tax, Classification, Parameter Estimation and State Estimation: An Engineering Approach Using MATLAB, John Wiley & Sons, 2005.
- R. Kohavi, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI '95), Montreal, Canada, August 1995.
- K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, vol. 16, John Wiley & Sons, New York, NY, USA, 2001.
Copyright © 2016 Jong-Hwan Kim et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.