Mathematical Problems in Engineering

Volume 2016, Article ID 1571795, 18 pages

http://dx.doi.org/10.1155/2016/1571795

## Cloud Model-Based Method for Infrared Image Thresholding

^{1}School of Information Science and Technology, Lingnan Normal University, Zhanjiang 524048, China

^{2}College of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing 210023, China

Received 20 January 2016; Accepted 14 April 2016

Academic Editor: Moulay Akhloufi

Copyright © 2016 Tao Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Traditional statistical thresholding methods, which construct the optimal threshold criterion directly from the class variance, have a certain versatility but lack specificity for practical applications in some cases. To select the optimal threshold for infrared image thresholding, a simple and efficient method based on the cloud model is proposed. The method first generates the cloud models corresponding to the image background and object, respectively, defines a novel threshold dependence criterion related to the hyper-entropy of these cloud models, and then determines the optimal grayscale threshold by minimizing this criterion. Experiments on both image thresholding and target detection indicate that, compared with the selected methods, the proposed method is suitable for infrared image thresholding, since it produces good results and is reasonable and effective.

#### 1. Introduction

Image thresholding converts a gray level image into a binary image and is among the most popular and simple segmentation techniques. Many different methods have been proposed and developed over the years [1–4]. Comprehensive overviews and comparative studies of image thresholding can be found in the recent literature [5, 6].

Among these thresholding methods, the most common idea is to optimize some threshold-dependent function that encodes the information and properties of the image; this family is commonly known as statistical image thresholding. The Otsu method, a typical example, has been widely used [7] as one of the best threshold selection methods for general real-world images. Based on the Otsu method, many modified or otherwise statistical variations have been proposed, such as the minimum variance method (Hou for short) [8], the standard deviation-based method (Li) [9], and the median-based method (Xue) [10]. The method in [11] proposed a cloud model-based framework for range-constrained thresholding and improved four traditional methods, while the method in [12] converted the image histogram into a series of normal cloud models by cloud transformation, as an improvement of the Gaussian Mixture Model.

In general, the existing statistical methods have proven useful and successful in many applications [13]. However, none of them is generally applicable to all images, and different algorithms are usually not equally suitable for a given application. We believe that image thresholding is also an essential part of an infrared image tracking system: target detection is an important problem in infrared image sequences with various cluttered environments, and image thresholding can be used to separate candidate targets because of its simplicity and efficiency. Unfortunately, most statistical methods cannot provide satisfactory results for infrared image thresholding, since they do not consider the practical features of such images and pay insufficient attention to the specific application. In this sense, the automatic selection of an optimum threshold for infrared images is still a challenge.

Almost all infrared images exhibit a mixed non-Gaussian model, a narrow grayscale range, and low-contrast objects, and the classes of object and background have similar statistical properties. In addition, small targets to be detected exist in many cases [14]. These features of infrared images are our major concerns. To remedy the weak points of the previous statistical thresholding methods, we propose a cloud model-based approach for infrared image thresholding. Our intentions are twofold: (1) using cloud models to depict the classes of background and object in a more robust way; (2) presenting a new statistical threshold criterion related to cloud models to determine the optimal threshold. Different from the existing methods, especially our previous publications [11, 12], the proposed method uses the cloud model and only the cloud model, without relying on any existing methods; in other words, the cloud model is no longer an assistant tool for existing methods. The cloud model is a cognitive model between a qualitative concept and its quantitative instantiations [15–17] and has been used in image thresholding with uncertainty [11, 12, 18]. We have performed quantitative and qualitative validation of the proposed approach on several infrared images. Comparisons have been made with seven methods, including three traditional state-of-the-art algorithms [19–21] and four related methods [7–10]. The experimental results, for both image thresholding and target detection, demonstrate that our approach is efficient and effective.

The rest of the paper is organized as follows: Section 2 presents an overview of related works. Section 3 proposes a novel cloud model-based algorithm for infrared image thresholding and discusses the algorithm analysis, as well as its implementation and computational complexity. Section 4 shows the experimental results for both infrared image thresholding and an application to target detection. Section 5 provides some discussion of the proposal. Finally, the conclusion is drawn in Section 6.

#### 2. Related Works

For an image of $N$ pixels, each pixel is represented by its grayscale $g(x,y)\in\{0,1,\ldots,L-1\}$, where $L$ denotes the number of grayscale levels, which is 256 for an 8-bit grayscale image. Then the histogram can be written as $H=\{p_{i}\}$, $i=0,1,\ldots,L-1$, constructed by counting the frequencies of the grey levels. For convenience, we only consider a bi-level thresholding problem for infrared images and suppose that the dark background has lower grayscales while the bright object has higher grayscales. Given a threshold $t$, the segmented image would be divided into two classes $\{C_{0},C_{1}\}$, where $C_{0}$ consists of pixels with gray levels in $[0,t]$, and $C_{1}$ of pixels with gray levels in $[t+1,L-1]$.
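The histogram construction above can be sketched in a few lines (a NumPy sketch with our own function name, not code from the paper):

```python
import numpy as np

def gray_histogram(image, L=256):
    """Normalized histogram {p_i}, i = 0..L-1, of an integer grayscale image."""
    counts = np.bincount(np.asarray(image).ravel(), minlength=L)
    return counts / counts.sum()
```

Given a threshold `t`, the two class probabilities are then simply `p[:t+1].sum()` and `p[t+1:].sum()`.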

##### 2.1. The Otsu Method

The Otsu method is one of the simplest and most popular techniques for statistical image thresholding. Otsu's rule for selecting the optimal threshold can be written as

$$t^{*}=\arg\min_{t}\,\bigl[\omega_{0}(t)\,\sigma_{0}^{2}(t)+\omega_{1}(t)\,\sigma_{1}^{2}(t)\bigr],\tag{1}$$

where $\omega_{0}(t)$ and $\omega_{1}(t)$ are the cumulative probabilities of the two classes, that is, background pixels $C_{0}$ and object pixels $C_{1}$, and can be defined as

$$\omega_{0}(t)=\sum_{i=0}^{t}p_{i},\qquad \omega_{1}(t)=\sum_{i=t+1}^{L-1}p_{i};\tag{2}$$

$\sigma_{0}(t)$ and $\sigma_{1}(t)$ are the standard deviations of these classes:

$$\sigma_{k}^{2}(t)=\frac{1}{\omega_{k}(t)}\sum_{i\in C_{k}}\bigl[i-\mu_{k}(t)\bigr]^{2}p_{i},\quad k=0,1;\tag{3}$$

in addition, $\mu_{0}(t)$ and $\mu_{1}(t)$ are the means of these classes:

$$\mu_{0}(t)=\frac{1}{\omega_{0}(t)}\sum_{i=0}^{t}i\,p_{i},\qquad \mu_{1}(t)=\frac{1}{\omega_{1}(t)}\sum_{i=t+1}^{L-1}i\,p_{i}.\tag{4}$$
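As a sketch, Otsu's rule can be implemented directly on the histogram by maximizing the between-class variance, which is equivalent to minimizing the weighted within-class variance (a minimal NumPy version with our own function name, not the authors' code):

```python
import numpy as np

def otsu_threshold(hist):
    """Return t* maximizing the between-class variance for a histogram."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()                            # normalized histogram {p_i}
    i = np.arange(p.size)
    w0 = np.cumsum(p)                          # omega_0(t)
    w1 = 1.0 - w0                              # omega_1(t)
    m = np.cumsum(i * p)                       # cumulative first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = m / w0                           # class mean mu_0(t)
        mu1 = (m[-1] - m) / w1                 # class mean mu_1(t)
        sigma_b2 = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
    return int(np.argmax(np.nan_to_num(sigma_b2)))  # guard empty classes
```

For a well-separated bimodal histogram, the returned threshold lies between the two modes.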

##### 2.2. The Hou Method

Hou et al. [8] proved that the Otsu method tends to divide an image into object and background of similar sizes and presented an improved method for image thresholding. Hou's criterion obtains the optimal threshold by minimizing the sum of the class variances:

$$t^{*}=\arg\min_{t}\,\bigl[\sigma_{0}^{2}(t)+\sigma_{1}^{2}(t)\bigr].\tag{5}$$

The Hou method overcomes the effects of the class probability and the class variance by using the relative distance and the average distance, but some disadvantages still remain, such as sensitivity to noise or inhomogeneity.

##### 2.3. The Li Method

Li et al. [9] argued that both Otsu and Hou neglect the specific characteristics of practical images and give unsatisfactory segmentation results when applied to images with similar statistical distributions of object and background. In other words, for two Gaussian classes with equal variances but distinct sizes, or with equal sizes but distinct variances, the Otsu and Hou methods do not perform as well as for two classes with more nearly equal sizes and variances. Aiming at images with similar distributions in the background and object classes, especially infrared images, Li addressed this weakness of the Otsu and Hou methods and proposed a new criterion related to the minimal standard deviation, which can be written as

$$t^{*}=\arg\min_{t}\,\bigl[\sigma_{0}(t)+\sigma_{1}(t)\bigr].\tag{6}$$

##### 2.4. The Xue Method

The above methods in (1), (5), and (6) choose some form of class variance as the criterion for threshold determination, while Xue and Titterington [10] argued that, when the class distribution is skewed or heavy-tailed, or when there are outliers in the sample, the mean absolute deviation from the median is a more robust estimator of location and dispersion than the class variance. Based on this consideration, Xue presented a median-based extension of the Otsu method and reported improved robustness in the presence of skewed or heavy-tailed class-conditional distributions. Xue's rule can be stated as

$$t^{*}=\arg\min_{t}\,\bigl[\omega_{0}(t)\,D_{0}(t)+\omega_{1}(t)\,D_{1}(t)\bigr],\tag{7}$$

where $D_{0}(t)$ and $D_{1}(t)$ denote the mean absolute deviations from the medians $m_{0}(t)$ and $m_{1}(t)$ of the two classes and are defined as

$$D_{k}(t)=\frac{1}{\omega_{k}(t)}\sum_{i\in C_{k}}\bigl|i-m_{k}(t)\bigr|\,p_{i},\quad k=0,1.\tag{8}$$

Although Xue's extension achieves more robust performance than the original Otsu method, it seems not to take account of Hou's motivation originating from the Otsu method, and it is therefore bound to be unsatisfactory in some applications, including infrared images.

#### 3. The Cloud Model-Based Method

##### 3.1. Preliminaries

Cloud model, proposed by Li et al. [15, 16], is an innovation and development of the membership function in fuzzy theory and uses probability and mathematical statistics to analyze uncertainty [15, 22]. In theory, there are several forms of cloud model, which have been successfully used in various applications, including knowledge representation [15, 23], intelligent control [16, 24], intelligent computing [25–27], data mining [28], and image segmentation [12, 18]. In practice, however, the normal cloud model is the most commonly used, and the universality of the normal distribution and the bell-shaped membership function is the theoretical foundation for the universality of the normal cloud model [16].

Let $U$ be a universe set described by precise numbers and let $C$ be a qualitative concept related to $U$. Given a number $x\in U$ that randomly realizes the concept $C$ and satisfies $x\sim N(Ex,En'^{2})$, where $En'\sim N(En,He^{2})$, the certainty degree of $x$ on $C$ is as below:

$$\mu(x)=\exp\!\left[-\frac{(x-Ex)^{2}}{2\,En'^{2}}\right];\tag{9}$$

then the distribution of $x$ on $U$ is defined as a normal cloud, and $x$ is defined as a cloud drop.

The MATLAB function of the normal cloud generator is included in the supplementary files (available online at http://dx.doi.org/10.1155/2016/1571795) (see Appendix). The overall property of a concept can be represented by the three numerical characters of the normal cloud model: expected value $Ex$, entropy $En$, and hyper-entropy $He$. $Ex$ is the mathematical expectation of the cloud drops distributed in the universal set. $En$ is the uncertainty measurement of the qualitative concept, determined by both the randomness and the fuzziness of the concept. $He$ is the uncertainty measurement of the entropy, determined by the randomness and fuzziness of $En$ [16].
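The normal cloud generator described above (the supplementary files provide a MATLAB version) can be sketched in Python as follows. The two-stage sampling $En'\sim N(En,He^{2})$, $x\sim N(Ex,En'^{2})$ follows the definition; taking the absolute value of $En'$ is our own guard against negative samples:

```python
import numpy as np

def forward_cloud(Ex, En, He, n, rng=None):
    """Generate n cloud drops x and their certainty degrees mu for (Ex, En, He)."""
    rng = rng or np.random.default_rng()
    En_prime = rng.normal(En, He, n)           # En' ~ N(En, He^2)
    x = rng.normal(Ex, np.abs(En_prime))       # x   ~ N(Ex, En'^2)
    mu = np.exp(-((x - Ex) ** 2) / (2.0 * En_prime ** 2))
    return x, mu
```

With $He=0$ every drop has the same $En'$ and the cloud collapses to an ordinary normal distribution, which is why $He$ measures the deviation from Gaussianity.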

It is worth noting that the hyper-entropy of a cloud model is a deviation measure from a normal distribution, that is, a quantification of how far the distribution deviates from the Gaussian distribution. For comparison, Wang (Lixin Wang, written personal communication, May 2011) constructed a random variable whose central moments are as close as possible to those of the cloud model: the mean, the variance, and the third central moment of the constructed variable are equal to those of the cloud model. Therefore, the difference between the cloud model and the Gaussian distribution is quantified to some extent, and an accurate quantity for the deviation measure can be obtained from the point of view of statistical characteristics, especially the fourth central moment. Hence, the distribution of cloud drops can be regarded as a generalized normal distribution. The details on this property are included in the supplementary files (see Appendix).

Compared with interval type-2 fuzzy sets, which have been widely researched and used [29], cloud model is based on probability and mathematical statistics; its hyper-entropy lets us capture and handle higher-order uncertainty, and it is equivalent to the secondary grade of Gaussian type-2 fuzzy sets [18], which have been little studied but may be very useful [30].

##### 3.2. The Cloud Model-Based Criterion

Given a threshold $t$, the background pixels $C_{0}$ can be obtained from the original image. Let the cloud model for the background class be $(Ex_{0},En_{0},He_{0})$. Considering $C_{0}$ as the input, the three numerical characters would be generated by the backward cloud generator [15]. More specifically, the expected value $Ex_{0}$ is the grayscale mean of the background pixels, formalized as

$$Ex_{0}(t)=\frac{1}{|C_{0}|}\sum_{(x,y)\in C_{0}}g(x,y),\tag{10}$$

where the cardinality $|\cdot|$ of a set is the number of its members. Notice that (10) is clearly equivalent to (4).

Next, the entropy $En_{0}$ is directly related to the first-order absolute central moment from the mean, written as

$$En_{0}(t)=\sqrt{\frac{\pi}{2}}\cdot\frac{1}{|C_{0}|}\sum_{(x,y)\in C_{0}}\bigl|g(x,y)-Ex_{0}(t)\bigr|.\tag{11}$$

The derivation of (11) is included in the supplementary files (see Appendix).

The last parameter, hyper-entropy $He_{0}$, can be defined as

$$He_{0}(t)=\sqrt{S_{0}^{2}(t)-En_{0}^{2}(t)},\tag{12}$$

where $S_{0}^{2}(t)$ is the grayscale variance of the background pixels.
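Equations (10)-(12) together form the backward cloud estimation of one class. A sketch (our own naming; it mirrors the moment-based backward cloud generator without certainty degrees, clipping the difference in (12) at zero to stay real-valued):

```python
import numpy as np

def backward_cloud(pixels):
    """Estimate (Ex, En, He) from the grayscales of one class."""
    g = np.asarray(pixels, dtype=float)
    Ex = g.mean()                                        # Eq. (10): class mean
    En = np.sqrt(np.pi / 2.0) * np.mean(np.abs(g - Ex))  # Eq. (11): first absolute moment
    He = np.sqrt(max(g.var() - En ** 2, 0.0))            # Eq. (12): clipped at zero
    return Ex, En, He
```

Applying it separately to the background pixels $C_{0}$ and the object pixels $C_{1}$ yields the two class cloud models used by the criterion. For nearly Gaussian samples, $En$ approaches the standard deviation and $He$ stays close to zero.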

Similarly, the corresponding cloud model for the object class can also be calculated. We take the original image in Figure 1(a) as a typical example, whose ground-truth image and grayscale histogram are shown in Figures 1(b) and 1(c). We fix the optimal threshold according to the ground-truth image in Figure 1(b), and then the numerical characters of the cloud models for background and object are calculated, respectively. Figure 1(d) demonstrates the joint distribution of the cloud drops and their certainty degrees. The cloud model depicts the gray level distribution of the sample image, and it is an approximately normal distribution, or a generalized normal distribution, rather than a normal distribution. Furthermore, the grayscale distributions of background and object cannot be similar, since the shapes of the two cloud models in Figure 1(d) differ markedly, as does the ratio of their numerical characters. We will further analyze this difference in a later section.