Mathematical Problems in Engineering
Volume 2015, Article ID 750708, 11 pages
http://dx.doi.org/10.1155/2015/750708
Research Article

A Thermal Infrared and Visible Images Fusion Based Approach for Multitarget Detection under Complex Environment

1College of IOT Engineering, Hohai University, Changzhou 213022, China
2College of Computer and Information, Hohai University, Nanjing 210098, China

Received 9 May 2014; Revised 21 August 2014; Accepted 29 August 2014

Academic Editor: Minrui Fei

Copyright © 2015 Xinnan Fan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Multitarget detection under complex environments is a challenging task, where the measured signal may be submerged by noise. D-S belief theory is an effective approach for dealing with multitarget detection. However, the general D-S belief theory has some limitations under complex environments. For example, the basic belief assignment (BBA) is difficult to establish, and subjective factors influence the update process of evidence. In this paper, a new multitarget detection approach based on thermal infrared and visible image fusion is proposed. To characterize the flawed heterogeneous images easily, a BBA based on the distance distribution function of heterogeneous characteristics is presented. Furthermore, to improve the discrimination and effectiveness of the multitarget detection, a concept of comprehensive credibility is introduced into the proposed approach and a new update rule of evidence is designed. Finally, experiments are carried out, and the results show the efficiency and effectiveness of the proposed approach in the multitarget detection task.

1. Introduction

Multitarget detection in complex environments has become a research hot spot [1]. A visible-light camera has high resolution and can provide spatial details of the scene, but low visibility makes visible images less clear under complex environments (for the visible-light camera, a complex environment mainly refers to changes in illumination and noise). A thermal infrared camera is a passive sensor that captures the infrared radiation emitted by all objects with a temperature above absolute zero. These sensors are often deployed in vision systems to eliminate the illumination problems of normal grayscale and RGB cameras [2]. However, they are sensitive to temperature changes and insensitive to the physical shape of targets (for the thermal infrared image, a complex environment mainly refers to changes in ambient temperature and thermal noise interference from the surroundings). Infrared and visible information is therefore often fused to overcome the disadvantages of both visible images and thermal infrared images [3, 4]. To make multitarget detection effective in complex environments, some new challenges have to be faced [5]. The first is that the measured data acquired under complex environments are flawed and abnormal. The second is that it is difficult to find a unified fusion approach that realizes information complementation for flawed data obtained from different sensors. The third is that it is difficult to obtain prior knowledge such as a historical database or expert knowledge of a certain field.

Much work has been done on multitarget detection. Conventional approaches include signal processing, data mining, Bayesian inference, and machine learning [6, 7]. However, the methods mentioned above require accurate and effective signal features extracted from the collected data. As we know, many factors in complex environments lead to uncertainty, such as insufficient lighting, saturation, smoke, and extreme heat. Furthermore, multitarget detection itself introduces uncertainty, such as fuzzy randomness and diversity, especially when different targets have similar attributes or features that are difficult to distinguish, including shape and temperature. The instability of the measured image signals makes the useful signals submerged in the background. The visible image may suffer from an uneven gray-level distribution, blurred detail, and poor contrast, while the thermal image may suffer from a low signal-to-noise ratio (SNR), halo effects, silhouettes, and fuzzy edges [8, 9]. Ignoring these imperfections and making unrealistic assumptions will lead to untrustworthy inferences.

To address the problems above, many improvements have been proposed. Among these approaches, D-S belief theory, one of the most dominant uncertainty-processing frameworks, has become a study hot spot for multitarget detection under complex environments in recent years [10–13]. D-S belief theory can build a relatively accurate model while considering various defects, and it has been widely used for its advantages in expressing and combining uncertainty [14, 15]. The evidence update mechanism of D-S belief theory, especially, presents a great deal of flexibility for decision making. However, there are two main challenges in using general D-S belief theory to deal with multitarget detection problems. Concretely, the difficulties are how to build the mass assignment function model and how to set up a reasonable and effective combination rule of evidence.

The first important issue is the evidence modeling problem, namely, how to build the mass assignment function. D-S evidence theory does not provide a general modeling method, and the existing methods are geared to the needs of specific applications. For example, Dezert et al. [16] modeled the uncertainties of the threshold value using evidence theory and presented a nonsupervised method for edge detection in color images based on belief functions and their combination. Panigrahi et al. [17] combined multiple evidence and belief updates for database intrusion detection. Bao et al. [18] presented a D-S belief theory based approach for structural damage detection. Poulain et al. [19] proposed a processing chain to create or update a building database using high-resolution optical and SAR images, where relevant features were extracted from the images and fused in the framework of D-S belief theory. D-S theory has achieved good results in the applications above. However, those approaches are designed for specific applications and cannot be used directly for multitarget detection under complex environments, where the evidence is deficient.

Another important issue in D-S evidence theory based methods is the evidence combination method. The evidence combination is sensitive to subjective factors in the process of multisource heterogeneous information fusion, which leads to a lack of reasonability and validity in the evidence fusion method. Most of the existing methods generally do not consider the order of the combination process, the logical importance, or the reliability of different evidence. For example, the classic Dempster Combination Rule (DCR) is used to solve the problem of evidence updating, which requires that the two FoDs (frames of discernment) being fused be identical. This constitutes another drawback of DCR-based methods. Sometimes counterintuitive conclusions are obtained by this approach [20]. Another classic combination rule is the Jeffrey-like Combination Rule (JCR) [21]. However, the JCR-based method is related only to the current evidence, and it is difficult to determine the updating coefficient of the condition-based evidence. Recently, some improvements have been made to the evidence combination rule. For example, Wickramarathne et al. [22] proposed a conditional core theorem algorithm, which simplifies the calculation of Fagin-Halpern conditionals and improves the conditional approach to fusing evidence. Bolar et al. [23] proposed a hierarchical evidential reasoning (HER) framework in which importance and reliability factors are introduced for discounting evidence. However, these methods are not suitable for multitarget detection under complex environments in real-world applications, mainly because they generally do not consider the order of the combination process, the logical importance, or the reliability of different types of evidence.

As introduced above, there are two technical difficulties in the D-S evidence theory based approach for multitarget detection under complex environments. The first is how to build a reasonable and effective distribution function of heterogeneous information based on fundamental belief; furthermore, how to map the heterogeneous information into the basic belief assignment (BBA) under the same framework also needs to be solved effectively. The second is how to build a reasonable and effective fusion method for the heterogeneous information distribution functions. To solve these two technical issues, a new multitarget detection algorithm based on multisource heterogeneous image information is proposed. In the proposed approach, a feature-distance-based BBA of heterogeneous images is presented first. For visible images, an improved Closed-Form Solution method applied after histogram equalization is used to segment and extract the targets. Then the distances of invariant moments between the extracted targets and the targets in the knowledge base are calculated and mapped to BBAs. For thermal infrared images, the temperature difference between the targets and their environment is mapped to a BBA. Furthermore, a new update rule of evidence is proposed, and the evidence fusion is processed both within homogeneous data and among heterogeneous data by selecting rules for different circumstances. Finally, experiments are carried out, and the experimental results verify the effectiveness of the proposed algorithm.

This paper is organized as follows. In Section 2, the proposed multitarget detection algorithm based on thermal infrared and visible heterogeneous images fusion is given. Section 3 presents the simulation experiments and some performances of the proposed approach are analyzed in detail. Finally, the conclusion is given in Section 4.

2. Multitarget Detection Algorithm Based on Thermal Infrared and Visible Heterogeneous Images Fusion

Multitarget detection based on heterogeneous image information under complex environments is a difficult task because the image information is full of uncertainty. In this paper, the feature distance of heterogeneous images is mapped into the mass function of D-S evidence theory, which can express the uncertainty well, and a new update rule of evidence combination is proposed to handle the uncertainty. The proposed algorithm is introduced in detail as follows.

2.1. Evidence Modeling
2.1.1. Basic Notions of Evidence Modeling

In D-S theory, the total set of interested targets with mutually exclusive and exhaustive propositions is referred to as the frame of discernment (FoD), which is denoted as Θ = {θ1, θ2, ..., θn}, where θi is the minimum identified level of information and n is the number of elements in the universal set. 2^Θ is used to denote the power set of Θ. In D-S theory, the support for a proposition A ⊆ Θ is provided via the BBA, which maps m: 2^Θ → [0, 1]. This mapping function satisfies m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1.

The set of propositions that possess nonzero mass forms the core F, and the triplet E = {Θ, F, m} is the corresponding body of evidence (BoE). For A ⊆ Θ in a BoE, the belief of A is Bel(A) = Σ_{B⊆A} m(B), and the plausibility of A is Pl(A) = Σ_{B∩A≠∅} m(B), where m(A) represents the support assigned to proposition A exactly, Bel(A) measures the sum of the support assigned to all nonempty subsets of A, and Pl(A) represents the extent to which one finds A plausible. In this paper, there are two different information sources, namely, the visible image and the thermal infrared image. Let Θ be the universal set representing all possible states under consideration. The corresponding BoE obtained from the CCD is E_v = {Θ, F_v, m_v}, where F_v is the core, which contains the visible-image subsets of Θ, and the mapping m_v: 2^Θ → [0, 1] is defined as the BBA of the visible images. In the same way, the corresponding BoE for the thermal infrared image is E_t = {Θ, F_t, m_t}, where F_t is the core, which contains the thermal-infrared-image subsets of Θ, and the mapping m_t: 2^Θ → [0, 1] is defined as the BBA of the thermal infrared images.
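The basic quantities above can be sketched in a few lines of Python. The BoE is represented here as a dictionary from focal sets (frozensets) to masses; this encoding is an illustrative choice, not taken from the paper:

```python
def belief(m, A):
    """Bel(A): total mass committed to nonempty subsets of A."""
    return sum(v for B, v in m.items() if B <= A)

def plausibility(m, A):
    """Pl(A): total mass of focal sets that intersect A."""
    return sum(v for B, v in m.items() if B & A)

# Toy BoE over a frame of three targets; some mass is left on the whole
# frame to express total ignorance.
m = {frozenset({"t1"}): 0.5,
     frozenset({"t2"}): 0.2,
     frozenset({"t1", "t2", "t3"}): 0.3}

A = frozenset({"t1"})
print(belief(m, A))        # 0.5 (only {t1} is a subset of A)
print(plausibility(m, A))  # 0.8 ({t1} and the full frame intersect A)
```

Note that Bel(A) ≤ Pl(A) always holds, and the gap between them quantifies the ignorance about A.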

2.1.2. Evidence Modeling for Thermal Infrared and Visible Heterogeneous Images

Evidence modeling is one of the key parts of D-S evidence theory based methods. The mapping from infrared and visible heterogeneous images to BBAs is the basic part of evidence modeling. Traditional methods perform this mapping by assigning a mass to complete ambiguity [24] or by mapping tool answers to mass assignments that feature a good separation between positive and negative examples [25]. Existing distance-based mappings are obtained mainly by methods of experience, neural networks, probability and statistics, and feature matching [26–28]. However, none of the existing methods mentioned above can be used directly for multitarget detection based on heterogeneous images under complex environments. For example, the computation of neural network methods is complicated, and the probability and statistics methods need to know the exact statistical distribution, which is difficult to obtain in complex environments, especially for two heterogeneous images.

In this paper, the distance between the measured data obtained under different angles and prior information is used to construct the model, which makes the model closer to the actual situation. The work flow of the modeling process in this paper is shown in Figure 1, which is presented in detail as follows.

Figure 1: The work flow of the modeling process in the proposed approach.

First, the information of the visible and thermal infrared images is obtained under different aspect angles, and histogram equalization is used to enhance the contrast of the images.
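Histogram equalization itself is standard; a minimal NumPy sketch (assuming an 8-bit grayscale, non-constant image) is:

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image (2-D uint8 array).
    Assumes the image is not constant (at least two gray levels present)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # CDF value of the darkest occupied bin
    # Map every gray level through the normalized CDF (standard formulation).
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / (img.size - cdf_min)), 0, 255)
    return lut.astype(np.uint8)[img]

# A low-contrast image confined to levels 100..109 spreads to the full range.
rng = np.random.default_rng(0)
low = rng.integers(100, 110, size=(64, 64)).astype(np.uint8)
eq = hist_equalize(low)
print(eq.min(), eq.max())  # 0 255
```

The lookup table is monotone, so the relative ordering of gray levels is preserved while the dynamic range is stretched.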

Next, an improved Closed-Form Solution is used to realize the feature extraction of the multiple targets. The Closed-Form Solution method in [29] effectively resolves the problem of multiobjective extraction under natural environments. However, the general Closed-Form Solution has some limitations under complex environments, such as the loss of detail and excessive segmentation. These problems can appear as discontinuities in the transparency values. To address these problems, an improved Closed-Form Solution is proposed that adds a smoothness constraint to the original cost function, and the new expression to extract the alpha matte α is as follows: α* = argmin_α α^T L α + λ (α − b_S)^T D_S (α − b_S) + E_s(α), where λ is a large number; D_S is a diagonal matrix whose diagonal elements are one for constrained pixels and zero for all other pixels; L is an N × N matting Laplacian matrix; and b_S is the vector containing the specified alpha values for the constrained pixels and zero for all other pixels. The added smoothness constraint E_s(α) calculates the square deviation of transparency values between each pixel and its adjacent pixels in the directions of rotation.

At last, the mapping from the image characteristics to the BBA is conducted. In this paper, the seven Hu invariant moments of the targets are used to characterize the image. Hu invariant moments satisfy the conditions of translation invariance, scaling invariance, and rotation invariance. Thus, for the same target in different perspective images obtained from the same transducer, the distance to the prior knowledge remains invariant.

For the visible images, the Hu invariant moments of the prior knowledge are denoted by Φ = (φ1, φ2, ..., φ7) [30], which are used as the identification feature of the prior target. The Hu invariant moments of the other targets under various angles are denoted by Φ' = (φ'1, φ'2, ..., φ'7), which are used as the identification feature of the other targets at various angles. By calculating the feature distance between the Hu invariant moments of various targets under different angles and the invariant moments of the corresponding targets in the repository, the credibility of the evidence obtained at a specific angle can be measured. The feature distance function for the visible images can be expressed as the Euclidean distance d_v = ||Φ − Φ'||.
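A sketch of this distance computation is given below. Because the seven Hu moments span many orders of magnitude, they are log-scaled before the distance is taken; this scaling is a common convention, assumed here, and the moment values themselves are hypothetical:

```python
import numpy as np

def log_scale(phi, eps=1e-30):
    """Log-scale Hu moments so the seven components have comparable
    magnitudes (an assumed convention, not taken from the paper)."""
    phi = np.asarray(phi, dtype=float)
    return -np.sign(phi) * np.log10(np.abs(phi) + eps)

def visible_feature_distance(phi_target, phi_prior):
    """Euclidean distance between log-scaled Hu moment vectors."""
    return float(np.linalg.norm(log_scale(phi_target) - log_scale(phi_prior)))

# Hypothetical Hu moment vectors: a close match and a distant one.
prior   = [2.1e-3, 4.0e-7, 1.1e-10, 3.2e-11, -7.5e-22, -1.9e-14, 2.4e-21]
similar = [2.0e-3, 4.2e-7, 1.0e-10, 3.0e-11, -7.0e-22, -1.8e-14, 2.5e-21]
other   = [8.9e-2, 3.1e-4, 5.6e-6,  9.9e-7,  2.3e-12,  4.4e-9,  -1.1e-12]
print(visible_feature_distance(similar, prior)
      < visible_feature_distance(other, prior))  # True
```

A smaller distance indicates a target that better matches the prior knowledge, and hence evidence that deserves higher credibility.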

For the thermal infrared images, the temperature corresponding to the ambient brightness is denoted by T_b, and the temperature corresponding to the target brightness is denoted by T_o. By calculating the brightness feature distance between various targets under different angles and their corresponding ambient brightness, the credibility of the evidence obtained at a specific angle can be measured. The feature distance function for the thermal infrared images can be expressed as d_t = |T_o − T_b|.

Because the mapping from the distance function to the BBA is nonlinear and an exponential function can reflect this nonlinear relationship well, the multitarget BBA in this paper is defined through an exponential decay of the feature distances, m_v ∝ exp(−k_v d_v) and m_t ∝ exp(−k_t d_t), where k_v and k_t are correction factors and the measurements are perturbed by uncorrelated white Gaussian noise terms n_v and n_t.
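An exponential distance-to-BBA mapping of this kind can be sketched as follows. The unit of unnormalized score reserved for the whole frame Θ (total ignorance) and the decay factor k are illustrative assumptions; the paper's correction factors and noise terms are not reproduced here:

```python
import math

def distances_to_bba(distances, k=1.0):
    """Map per-target feature distances {name: d} to a BBA.  Each singleton
    gets mass proportional to exp(-k * d); one extra unit of unnormalized
    score is reserved for the whole frame Theta (total ignorance).  Both the
    reserve and k are illustrative choices, not the paper's exact formula."""
    theta = frozenset(distances)
    scores = {frozenset({t}): math.exp(-k * d) for t, d in distances.items()}
    total = sum(scores.values()) + 1.0
    bba = {fs: s / total for fs, s in scores.items()}
    bba[theta] = 1.0 / total
    return bba

bba = distances_to_bba({"t1": 0.2, "t2": 1.5, "t3": 2.0})
print(round(sum(bba.values()), 10))  # 1.0
```

Targets closer to the prior knowledge (smaller distance) receive larger mass, and the residual mass on Θ keeps the assignment honest about what the sensor cannot resolve.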

2.2. Evidences Combination

The uncertainties in the visible and thermal images mean that some imperfect and misinterpreted data are used in target detection, which can lead to various mistakes, such as regarding an interfering object as a target, ignoring a target, or confusing the multitarget detection. To reduce the uncertainty of the characterization and improve the robustness of decision making, evidence from both the optical and infrared cameras over different views should be combined.

There are several rules for combining evidence, such as the Dempster Combination Rule (DCR) and the Conditional Update Rule (CUR). Because it is difficult to fuse conflicting BoEs with the DCR [31], the CUR is used in this paper, which enables one sensor to update its own evidence and exchange evidence with other sensors without having to expand its FoD artificially. The proposed CUR-based evidence combination method is introduced as follows.
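For reference, the classic DCR can be sketched as follows. The example combines two highly conflicting BoEs, which illustrates how the conflict mass K is renormalized away, the behavior that can yield counterintuitive conclusions as noted in [20]:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's combination rule for two BBAs over the same frame:
    m(A) = sum over B∩C=A of m1(B)*m2(C), renormalized by 1-K, where K is
    the total mass of conflicting (empty-intersection) focal-set pairs."""
    combined, conflict = {}, 0.0
    for (B, v1), (C, v2) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("totally conflicting BoEs: DCR is undefined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two highly conflicting BoEs (conflict K = 0.81): most of the surviving
# mass is forced onto the singletons after renormalization.
m1 = {frozenset({"a"}): 0.9, frozenset({"a", "b"}): 0.1}
m2 = {frozenset({"b"}): 0.9, frozenset({"a", "b"}): 0.1}
m12 = dempster_combine(m1, m2)
```

Here 81% of the joint mass is discarded as conflict, which is why conflict-heavy fusion with the DCR is unreliable and why a conditional update is preferred in this paper.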

2.2.1. The General Conditional Update Rule

In general, the update rule of the BBA m_1 given another BoE E_2 = {Θ, F_2, m_2} is as follows [32]: m_1(A) ← α m_1(A) + Σ_{B∈F_2} β_B m_1(A|B), for all A ⊆ Θ, where α ≥ 0, β_B ≥ 0 for all B ∈ F_2, and α + Σ_{B∈F_2} β_B = 1. The conditional belief can be calculated by the Fagin-Halpern conditioning rule [33]: Bel(A|B) = Bel(A∩B) / (Bel(A∩B) + Pl(A̅∩B)), where Pl(B) > 0. If Pl(A̅∩B) = 0, then Bel(A|B) = 1.

2.2.2. The Proposed Conditional Update Rule

In the general fusion process introduced above, the parameter values α and β are set artificially. This artificial assignment method has some limitations. The main reason is that it is difficult to find a unified metric between the two parameters with which to measure the credibility of the heterogeneous information. Furthermore, the artificial assignment method lacks rigorous reasoning. To improve the adaptability of the method, a concept of comprehensive reliability is proposed in this paper, where the credibility of evidence in the fusion process is related not only to its own credibility but also to the support of the other evidence. The comprehensive reliability used in this paper is formulated from the distances between characterizations and between bodies of evidence mentioned in the evidence modeling process (see Section 2.1). The credibility of an evidence is calculated from its feature distance. The degree of support of one evidence for another is defined from the distance between the two bodies of evidence: the smaller the distance between one evidence and another, the higher their mutual support. The relative degree of confidence of an evidence is then defined from these degrees of support.

Because both the confidence of the evidence itself and the relative degree of confidence are important, the comprehensive confidence in this paper is defined as a combination of the two.

Thus, a new evidence fusion algorithm based on the concept of comprehensive reliability is proposed to reduce the subjective factors of the CUR. In this paper, the parameter values α and β are determined from the comprehensive confidence of the two bodies of evidence.
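The idea of deriving the update weights from comprehensive confidence can be sketched as follows. Both the similarity measure (1 minus half the L1 distance between BBAs) and the mixing weight w are illustrative assumptions, since the paper's exact formulas are not reproduced here:

```python
def evidence_similarity(m1, m2):
    """Mutual support between two BBAs over the same frame, taken here as
    1 minus half the L1 distance over their focal sets (an illustrative
    choice, not the paper's exact evidence-distance formula)."""
    keys = set(m1) | set(m2)
    return 1.0 - 0.5 * sum(abs(m1.get(k, 0.0) - m2.get(k, 0.0)) for k in keys)

def update_weights(cred_self, cred_other, support, w=0.5):
    """Hypothetical comprehensive-confidence weights: mix each source's own
    credibility with the mutual support, then normalize so alpha + beta = 1."""
    comp_self = w * cred_self + (1.0 - w) * support
    comp_other = w * cred_other + (1.0 - w) * support
    alpha = comp_self / (comp_self + comp_other)
    return alpha, 1.0 - alpha

# Hypothetical visible / thermal BBAs that largely agree (similarity 0.9).
m_vis = {frozenset({"t1"}): 0.6, frozenset({"t1", "t2"}): 0.4}
m_ir  = {frozenset({"t1"}): 0.5, frozenset({"t1", "t2"}): 0.5}
sup = evidence_similarity(m_vis, m_ir)
alpha, beta = update_weights(0.8, 0.6, sup)
print(round(alpha + beta, 10))  # 1.0
```

The effect is that the more credible, better-supported source retains a larger share of its own belief during the update, instead of the weights being fixed by hand.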

In the fusion process of heterogeneous images, the types of evidence that have been updated individually are combined in a specified order. The update weights, which take into account the logical importance and reliability of the different types of evidence, are calculated from the characteristic distance and the evidence distance.

3. Experiment

To test the performance of the proposed approach, some experiments are carried out. In these experiments, five cups with similar shape characteristics are used as the targets. The cups contain water at different temperatures and are placed in a complex environment without sufficient light, where the temperature is changing. There are thus five possible target types, plus an additional class that denotes any other object. A CCD camera and a thermal infrared camera are rotated around the targets to obtain images from different incident angles. Five visible and five thermal infrared images of the five targets were taken at angles of 0, 30, 90, 270, and 300 degrees. Figure 2 shows the images from the different perspectives.

Figure 2: Thermal infrared and visible images under complex environments from five angles.
3.1. Establish the BBA for Heterogeneous Images

At first, the visible images under the complex environment are processed by histogram equalization (see Figure 3). From Figure 3, we can see that this processing removes a significant amount of image noise, but it is still difficult to identify the targets from the visible image alone.

Figure 3: The histogram equalization for visible image: (a) the original image under complex environment; (b) the image after histogram equalization.

Secondly, an improved Closed-Form Solution (see Section 2) is used for multitarget extraction under complex environments. Figure 4 shows the results of extraction of different perspectives.

Figure 4: The extraction result of improved Closed-Form Solution.

Thirdly, the seven Hu invariant moments of the visible images in the different perspectives are calculated. The BBA values of the visible images can then be obtained by (9) and (10), and they are listed in Table 1.

Table 1: The BBA values of the visible images and the thermal infrared images.

In the same way, the BBA values of the thermal infrared images are obtained by (10) and (11) (see Table 1). Because the thermal infrared camera cannot discern targets with the same temperature characteristics, the corresponding BBA values of such targets are equal.

3.2. Verify the Proposed Evidence Fusion Method

To reduce the uncertainty of the characterization, evidence from both the visible and thermal infrared images over different perspectives is combined. The measured data can be used to update evidence from each source individually over different perspectives or to combine the two different types of sources. The comparison of these two ways of evidence fusion based on the proposed method is shown in Figure 5. The results of evidence fusion based on the proposed method using the optical source or the thermal infrared camera individually are shown in Figures 5(a) and 5(b), respectively, where the characteristics with uncertain information of the visible image and of the thermal image are mapped into their respective BBAs. By rotating the sensors, the new evidence updates the existing belief about the multiple targets. From Figures 5(a) and 5(b), we can see that the mass assigned to the total uncertainty decreases over the iterations and the support towards the targets increases. The result of evidence fusion based on the proposed method with the combined information is shown in Figure 5(c): the decrease in the total uncertainty is larger than that of a single sensor, and the support towards the targets increases.

Figure 5: The comparison of evidence fusion between a source alone and combination of the two different types of sources: (a) evidence fusion by optical source individually; (b) evidence fusion by thermal infrared camera individually; (c) evidence fusion by both the visible and thermal infrared images from different perspectives.

To show the performance of the proposed fusion method (PUR), it is compared with the Dempster Combination Rule (DCR) [31], the Jeffrey-like Evidence Update Rule (JUR) [34], and the Linear Conditional Update Rule (LUR). The comparison of the BBA values obtained by the different update methods is shown in Figure 6.

Figure 6: BBA values by various update methods: (a) evidence updates of Dempster’s combination; (b) evidence updates of Jeffrey-like rules; (c) evidence updates of linearization condition; (d) evidence updates of proposed method.

In order to evaluate these algorithms more objectively, three indices are defined:
(1) Index_A: the reduction rate of the uncertainty after updating five times;
(2) Index_B: the scope of the change rate of the BBA after a disturbance;
(3) Index_C: the scope of the recovery rate of the update results after a disturbance, once the new evidence has been updated.

The three indices calculated in these experiments are shown in Table 2.

Table 2: Update performance evaluation.

The experimental results in Table 2 and Figure 6 show that the fusion results of the proposed method conform consistently to the evidence when the support degree of the evidence changes slightly. The support degree of the update results of the proposed approach is superior to that of the other three methods (see the corresponding BBA values of the five objectives in Table 2). When the support degree of the evidence changes dramatically, the proposed method is the most sensitive to the changes. Furthermore, after reupdating with new evidence, the results support the original status; that is, the proposed method can quickly recover the BBAs to their values before a dramatic change. This means that the proposed approach can overcome the influence of abnormal evidence on the update results and thus improve its accuracy.

From Figure 6, we can see that although the DCR can track and reflect the influence of changes in the evidence on the result, the uncertainty after updating increases. The update results of the JUR are related only to the new evidence rather than the original evidence, so its uncertainty leads to conclusions contrary to intuition. The LUR methods are relatively reasonable, but the evidence combination weight must be selected optimally from experience after several tests, which restricts their use in real applications. The proposed method is clearly superior to the other methods, especially when the BBA of the evidence changes significantly.

4. Conclusion

Multitarget detection under complex environments is investigated in this paper. To deal with this problem, a new multitarget detection approach based on thermal infrared and visible image fusion is proposed. In the proposed approach, a feature-distance-based BBA of heterogeneous images is presented and a new update rule of evidence is proposed. The proposed approach can improve the ability to distinguish the several objectives and the detection correctness. The results of the simulation experiments show that the proposed approach can significantly reduce the uncertainty of the objective detection and reflect abnormalities in the update process in a timely and correct manner. Furthermore, the proposed approach can reduce the influence of unreasonable evidence on the update results.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61203365 and 41301448), the Jiangsu Province Natural Science Foundation (BK2012149), the Fundamental Research Funds for the Central Universities (2011B04614), and the Science and Technology Commission of Shanghai Municipality (12595810200).

References

  1. J. Sun, H. Zhu, Z. Xu, and C. Han, “Poisson image fusion based on Markov random field fusion model,” Information Fusion, vol. 14, no. 3, pp. 241–254, 2013.
  2. R. Gade and T. B. Moeslund, “Thermal cameras and applications: a survey,” Machine Vision and Applications, vol. 25, no. 1, pp. 245–262, 2014.
  3. T. Elguebaly and N. Bouguila, “Finite asymmetric generalized Gaussian mixture models learning for infrared object detection,” Computer Vision and Image Understanding, vol. 117, no. 12, pp. 1659–1671, 2013.
  4. X. Li, Y. S. He, and X. Zhan, “The optimization research of geo-environmental monitoring image fusion based on framelet,” Advanced Materials Research, vol. 121-122, pp. 945–950, 2010.
  5. Z.-G. Liu, G. Mercier, J. Dezert, and Q. Pan, “Change detection in heterogeneous remote sensing images based on multidimensional evidential reasoning,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 1, pp. 168–172, 2014.
  6. R. Dutta, A. G. Cohn, and J. M. Muggleton, “3D mapping of buried underworld infrastructure using dynamic Bayesian network based multi-sensory image data fusion,” Journal of Applied Geophysics, vol. 92, pp. 8–19, 2013.
  7. J. Daniel and J.-P. Lauffenburger, “Fusing navigation and vision information with the transferable belief model: application to an intelligent speed limit assistant,” Information Fusion, vol. 18, no. 1, pp. 62–77, 2014.
  8. I. Ulusoy and H. Yuruk, “New method for the fusion of complementary information from infrared and visual images for object detection,” IET Image Processing, vol. 5, no. 1, pp. 36–48, 2011.
  9. S. Gao, Y. Cheng, and Y. Zhao, “Method of visual and infrared fusion for moving object detection,” Optics Letters, vol. 38, no. 11, pp. 1981–1983, 2013.
  10. F. Delmotte and P. Smets, “Target identification based on the transferable belief model interpretation of Dempster-Shafer model,” IEEE Transactions on Systems, Man, and Cybernetics A: Systems and Humans, vol. 34, no. 4, pp. 457–471, 2004.
  11. B. Ristic and P. Smets, “Target identification using belief functions and implication rules,” IEEE Transactions on Aerospace and Electronic Systems, vol. 41, no. 3, pp. 1097–1103, 2005.
  12. T. Denœux and P. Smets, “Classification using belief functions: relationship between case-based and model-based approaches,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 36, no. 6, pp. 1395–1405, 2006.
  13. M.-H. Masson and T. Denoeux, “Ensemble clustering in the belief functions framework,” International Journal of Approximate Reasoning, vol. 52, no. 1, pp. 92–109, 2011.
  14. M. Shoyaib, M. Abdullah-Al-Wadud, and O. Chae, “A skin detection approach based on the Dempster-Shafer theory of evidence,” International Journal of Approximate Reasoning, vol. 53, no. 4, pp. 636–659, 2012.
  15. J. Zhu, L. Wang, J. Gao, and R. Yang, “Spatial-temporal fusion for high accuracy depth maps using dynamic MRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 5, pp. 899–909, 2010.
  16. J. Dezert, Z.-G. Liu, and G. Mercier, “Edge detection in color images based on DSmT,” in Proceedings of the 14th International Conference on Information Fusion (FUSION '11), pp. 1–8, Chicago, Ill, USA, July 2011.
  17. S. Panigrahi, S. Sural, and A. K. Majumdar, “Two-stage database intrusion detection by combining multiple evidence and belief update,” Information Systems Frontiers, vol. 15, no. 1, pp. 35–53, 2013.
  18. Y. Bao, H. Li, Y. An, and J. Ou, “Dempster-Shafer evidence theory approach to structural damage detection,” Structural Health Monitoring, vol. 11, no. 1, pp. 13–26, 2012.
  19. V. Poulain, J. Inglada, M. Spigai, J.-Y. Tourneret, and P. Marthon, “High-resolution optical and SAR image fusion for building database updating,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 8, pp. 2900–2910, 2011.
  20. M. C. Florea, A.-L. Jousselme, É. Bossé, and D. Grenier, “Robust combination rules for evidence theory,” Information Fusion, vol. 10, no. 2, pp. 183–197, 2009.
  21. H. Ichihashi and H. Tanaka, “Jeffrey-like rules of conditioning for the Dempster-Shafer theory of evidence,” International Journal of Approximate Reasoning, vol. 3, no. 2, pp. 143–156, 1989.
  22. T. L. Wickramarathne, K. Premaratne, and M. N. Murthi, “Toward efficient computation of the Dempster-Shafer belief theoretic conditionals,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 712–724, 2013.
  23. A. Bolar, S. Tesfamariam, and R. Sadiq, “Condition assessment for bridges: a hierarchical evidential reasoning (HER) framework,” Structure and Infrastructure Engineering, vol. 9, no. 7, pp. 648–666, 2013.
  24. T. L. Wickramarathne, S. Negahdaripour, K. Premaratne, L. N. Brisson, and P. P. Beaujean, “A belief theoretic approach for characterization of underwater munitions,” in Proceedings of OCEANS 2010, pp. 1–6, Seattle, Wash, USA, September 2010.
  25. M. Fontani, T. Bianchi, A. De Rosa, A. Piva, and M. Barni, “A framework for decision fusion in image forensics based on Dempster-Shafer Theory of Evidence,” IEEE Transactions on Information Forensics and Security, vol. 8, no. 4, pp. 593–607, 2013. View at Publisher · View at Google Scholar · View at Scopus
  26. H. Li, G. Wen, Z. Yu, and T. Zhou, “Random subspace evidence classifier,” Neurocomputing, vol. 110, no. 13, pp. 62–69, 2013. View at Publisher · View at Google Scholar · View at Scopus
  27. M. Shoyaib, M. Abdullah-Al-Wadud, S. Z. Ishraque, and O. Chae, “Facial expression classification based on dempster-shafer theory of evidence,” in Belief Functions: Theory and Applications, pp. 213–220, Springer, New York, NY, USA, 2012. View at Google Scholar
  28. C. Liu, X. Ma, and Z. Cui, “Multi-source remote sensing image fusion classification based on DS evidence theory,” in MIPPR 2007: Remote Sensing and GIS Data Processing and Applications; and Innovative Multispectral Technology and Applications, vol. 6790 of Proceedings of SPIE, International Society for Optics and Photonics, 2007. View at Publisher · View at Google Scholar
  29. A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 228–242, 2008. View at Publisher · View at Google Scholar · View at Scopus
  30. M.-K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962. View at Google Scholar
  31. W. Liu, “Analyzing the degree of conflict among belief functions,” Artificial Intelligence, vol. 170, no. 11, pp. 909–924, 2006. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus
  32. K. Premaratne, M. N. Murthi, J. Zhang, M. Scheutz, and P. H. Bauer, “A Dempster-Shafer theoretic conditional approach to evidence updating for fusion of hard and soft data,” in Proceedings of the 12th International Conference on Information Fusion (FUSION '09), pp. 2122–2129, July 2009. View at Scopus
  33. E. C. Kulasekere, K. Premaratne, D. A. Dewasurendra, M. Shyu, and P. H. Bauer, “Conditioning and updating evidence,” International Journal of Approximate Reasoning, vol. 36, no. 1, pp. 75–108, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus
  34. S.-W. Du, J.-J. Lin, and Y.-M. Su, “Maneuvering target classification based on conditional evidence updating,” Journal of East China University of Science and Technology (Natural Science Edition), vol. 38, no. 4, pp. 511–515, 2012. View at Google Scholar