Mathematical Problems in Engineering

Research Article | Open Access

Volume 2013 | Article ID 537268 | 13 pages | https://doi.org/10.1155/2013/537268

A New Feature Selection Method for Hyperspectral Image Classification Based on Simulated Annealing Genetic Algorithm and Choquet Fuzzy Integral

Academic Editor: Gianluca Ranzi
Received: 01 Jun 2013
Revised: 14 Sep 2013
Accepted: 15 Sep 2013
Published: 05 Nov 2013

Abstract

Hyperspectral remote sensing technology is a rapidly developing integrated technology that is widely used in numerous areas. The rich spectral information in hyperspectral images can aid in the classification and recognition of ground objects. However, the high dimensionality of hyperspectral images causes information redundancy; hence, the dimensionality of hyperspectral data must be reduced. This paper proposes a hybrid feature selection strategy based on the simulated annealing genetic algorithm (SAGA) and the Choquet fuzzy integral (CFI). The band selection method starts from subspace decomposition and combines the simulated annealing algorithm with the genetic algorithm, choosing different cross-over and mutation probabilities as well as mutation individuals. The selected bands are then further refined by CFI. Experimental results show that the proposed method can achieve higher classification accuracy than traditional methods.

1. Introduction

Hyperspectral remote sensors provide measurements of the Earth’s surface with very high spectral resolution, usually resulting in hundreds of narrow channels. Unlike multispectral sensors, this high spectral resolution renders hyperspectral remote sensors very powerful in applications requiring the identification of subtle differences in ground covers (e.g., material quantification and target detection). On the other hand, the large-dimensional data spaces generated by these sensors introduce challenging methodological problems. In the context of supervised classification, the most important methodological issue raised by these sensors is the so-called curse of dimensionality (also known as the Hughes effect), which occurs when the numbers of features and of available training samples are unbalanced [1].

Meanwhile, hyperspectral remote sensing images have nonlinear properties. These nonlinear properties originate from multiple scattering between photons and ground targets, within-pixel spectral mixing, and scene heterogeneity. In addition, given that the pixel size in most remote sensing systems is large enough to include different types of land cover, classification errors arise and produce unreliable classification results. In this case, traditional classifiers may fail completely.

In the remote sensing literature, numerous methods have been developed to solve the hyperspectral data classification problem. A successful approach to hyperspectral data classification is based on the support vector machine (SVM). SVM separates two classes by identifying the optimal separating hyperplane that maximizes the margin between the closest training samples and the separating hyperplane. Data samples located at the hyperplane border are referred to as support vectors and are used to create the decision surface. The properties of SVM for both full-dimensional and reduced-dimensional data have been investigated, and multiclass SVM strategies have been considered in [2]. Hyperspectral image classification using different kernel-based approaches has been analyzed and compared, and SVM has been found to be more useful than other kernel-based methods in [3]. SVM classification performance is compared with other well-known neural approaches in [4], showing that SVM provides simplicity, robustness, and increased classification accuracy compared with neural networks. In addition, improved SVM methods have also been successfully used in hyperspectral image classification. A contextual SVM using Hilbert space embedding showed significant improvement over other methods on several hyperspectral images in [5]. A semisupervised method for addressing a domain adaptation problem based on multiple-kernel SVMs in the classification of hyperspectral data was presented in [6]. Thus, SVM is very suitable for hyperspectral image classification. However, dimension reduction is not sufficiently considered in SVM.

Commonly used dimension reduction methods fall into two categories, namely, feature selection and feature extraction. The feature extraction approach maps a high-dimensional feature space to a low-dimensional space via a linear or nonlinear transformation; however, since every band of hyperspectral data has its own corresponding image, the original physical interpretation of the image cannot be retained under such a transformation. Thus, feature extraction approaches are unsuitable for the dimension reduction of hyperspectral images. Given that the spectral distance between adjacent bands in hyperspectral data is only about 10 nm and the correlation between them is extremely high [7], considerable redundancy is observed, which should be largely removed by feature selection (band selection) methods to improve classification efficiency and accuracy. A semisupervised feature-selection technique for hyperspectral image classification was developed in [8]. A method for unsupervised band selection by transforming hyperspectral data into complex networks was presented in [9]. Therefore, a new dimension reduction method is proposed here that combines the simulated annealing genetic algorithm (SAGA) with the Choquet fuzzy integral (CFI).

A new genetic algorithm (GA) based on a population and a temperature ladder, the so-called SAGA, was recently proposed to draw samples from a distribution defined on a space of finite binary sequences. A feature selection strategy for hyperspectral images based on GA and SVM was proposed in [10, 11]. A GA-based feature selection and a local Fisher’s discriminant analysis-based feature projection are performed for effective dimensionality reduction in [12]. The SAGA method, by contrast, works by simulating a parallel population of samples at different temperatures. The population is updated via selection, mutation, cross-over, and exchange operations that are highly similar to those of GA. SAGA has the learning capability of GA as well as the fast-mixing capability of parallel tempering (simulated tempering). In most cases, only the classification accuracy is used as the fitness function, and the internal relations between bands and classes are not taken into account. Considering this problem, a correction method based on CFI is proposed. The CFI does not assume the independence of one element from another; based on any fuzzy measure, it is employed to perform an overall evaluation of an input pattern [13]. Moreover, the fuzzy measure defined on an attribute is used as the relative degree of importance of that attribute, so the connection weights can be interpreted as fuzzy measure values or as degrees of importance of the respective input variables. The band selection method of this paper, based on SAGA and CFI (SAGA-CFI), can not only improve classification accuracy but also effectively reduce the uncertainty of the information, further improving accuracy.

Since hyperspectral imagery contains hundreds of bands, the direct search space for SAGA and CFI on the original band space becomes extremely large. An adaptive subspace decomposition (ASD) method for hyperspectral data dimensionality reduction was proposed in [14]. To avoid the impact of enormous data sets on traditional statistical classification techniques, the ASD scheme is used here. The differences between global and local statistical characteristics are thus fully considered, and the problem presented by a limited number of training samples is alleviated.

In this paper, we use SAGA and CFI in every subspace to choose suitable bands based on ASD, which differs from previous work [5, 6, 8–12] in three aspects. First, ASD, rather than mutual information, is employed to divide the bands into disjoint subspaces. Although mutual information may yield better performance than ASD, it is not chosen in this paper because mutual information is interconnected with the entropy of information and can be directly formulated in terms of entropy; it is better to keep ASD and CFI independent. Furthermore, based on GA, SAGA is used for band selection: it follows a schedule of temperatures and approaches the global minimum as the temperatures change gradually. Finally, CFI is employed for the first time to further optimize the band selection. Thus, we reduce the search space and computational complexity while avoiding the selection of an excessive number of adjacent bands.

The remainder of this paper is organized as follows. Section 2 introduces subspace decomposition. Section 3 presents the proposed SAGA. In Section 4, a brief description of three related elements and fuzzy measure followed by CFI is given. Section 5 provides the SVM classification adopted in this paper. Section 6 describes the proposed method. Experiments and analysis are demonstrated in Section 7. Finally, Section 8 concludes the paper.

2. Subspace Decomposition

The main characteristics of hyperspectral remote sensing data are a large number of imaging channels (approximately 220 bands) and a narrow band spectrum. The spectrum of hyperspectral data is highly concentrated, rendering the overall and local characteristics quite different. We may lose important local characteristics if we select bands from the total space. Overall, the bands are notably characterized by groups: we can divide all bands into several groups wherever a lower correlation exists between adjacent bands. Subspace decomposition not only reduces the dimensionality of the images but also significantly improves the efficiency of data processing. Division of data sources based on ASD and fusion classification based on consensus theory is proposed in [15], so the commonly used method continues to be ASD. According to the correlation matrix between bands of the hyperspectral image, the full data space is adaptively decomposed into numerous subspaces with different dimensionalities. In each subspace the bands have very strong correlation and the energy is more concentrated; hence, the full data dimensionality can be logically reduced.

Since different bands have different correlations, the subspaces do not all have the same dimensionality. The goal is therefore to match the features of each subspace with one or a few classes. For this purpose, the new method primarily depends on the correlation matrix between different bands. The element R_ij of the correlation matrix is defined as

R_ij = |E[(x_i − μ_i)(x_j − μ_j)]| / sqrt(E[(x_i − μ_i)^2] · E[(x_j − μ_j)^2]).

The value of each matrix element ranges between 0 and 1. The closer R_ij is to 1, the stronger the correlation between the two bands. Here μ_i and μ_j are the mean values of bands x_i and x_j, respectively, and E[·] denotes the mathematical expectation.
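As a purely illustrative sketch (not code from the original paper), the band-to-band correlation matrix can be computed directly from its definition; the data layout (one flat pixel list per band) and the function names are assumptions made for the example:

```python
import math

def band_correlation(b1, b2):
    # Correlation element R_ij: |covariance| normalized by the
    # standard deviations of the two bands.
    n = len(b1)
    m1, m2 = sum(b1) / n, sum(b2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(b1, b2))
    v1 = sum((a - m1) ** 2 for a in b1)
    v2 = sum((b - m2) ** 2 for b in b2)
    return abs(cov) / math.sqrt(v1 * v2)

def correlation_matrix(bands):
    # Symmetric matrix over all band pairs; diagonal entries equal 1.
    n = len(bands)
    return [[band_correlation(bands[i], bands[j]) for j in range(n)]
            for i in range(n)]
```

Because of the absolute value in the numerator, every entry lies in [0, 1], matching the range stated above.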

3. Simulated Annealing Genetic Algorithm

The traditional selection, cross-over, and mutation operators, as well as fitness-proportional selection in GA, allow a superior chromosome to maintain or strengthen its predominance in subsequent generations, so the chromosome to which the population converges may not be the overall optimum. SAGA combines the simulated annealing algorithm with GA; it can thus perform the temperature-control function of simulated annealing by controlling the selection probability [16]. To sample from a distribution defined on a space of finite binary sequences, we employ

f(x) ∝ exp{−H(x)/t},

where x = (x_1, …, x_d) is the d-dimensional binary vector with x_k ∈ {0, 1}, t is the scale parameter (a so-called temperature that can take any value of interest), and H(x) is the fitness function in terms of GA.

First, a sequence of distributions is constructed as

f_i(x) ∝ exp{−H(x)/t_i}, for i = 1, …, N.

The temperatures form a ladder with the order t_1 > t_2 > … > t_N; for convenience, we denote the ladder by t = (t_1, …, t_N). Note that we always set t_N = 1 so that f_N corresponds to the target distribution from which we obtain the sample. X = {x_1, …, x_N} denotes a population of samples, where x_i is a sample from f_i and is called a chromosome or an individual in terms of GA, and N represents the population size. In SAGA, the Boltzmann distribution of the population is expressed as

f(X) ∝ exp{−Σ_{i=1}^{N} H(x_i)/t_i}.

The population is updated by selection, cross-over, mutation, and exchange operators.

3.1. Selection

Chromosomes are chosen by roulette-wheel (Boltzmann) selection: the probability of having chromosome x_k chosen is

p(x_k) = exp{−H(x_k)/t_s} / Z(X), where Z(X) = Σ_{j=1}^{N} exp{−H(x_j)/t_s}

and t_s is a selection temperature.

3.2. Cross-Over

One chromosome pair, say x_i and x_j (i ≠ j), is selected from the current population X through the roulette wheel. Two offspring, x'_i and x'_j, are generated according to a specific cross-over operator. The new population X' is accepted with probability

min{1, (f(X')/f(X)) · (T(X | X')/T(X' | X))}

according to the Metropolis-Hastings rule, where T(X' | X) denotes the probability of proposing X' from the population X and T(X | X') denotes the probability of proposing X from the population X'.

3.3. Mutation

We define the mutation operator as an additional move of the Metropolis-Hastings rule. One chromosome, say x_i, is uniformly chosen from the current population X. A new chromosome is generated by the addition of a random binary vector e (componentwise, modulo 2), such that x'_i = x_i + e, where e is usually chosen to achieve a moderate acceptance probability for the mutation operation. The new population X' is accepted with probability

min{1, f(X')/f(X)} = min{1, exp{−(H(x'_i) − H(x_i))/t_i}}

according to the Metropolis-Hastings rule.

3.4. Exchange

A straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects, and parallel tempering can offer a powerful alternative to simulated annealing for combinatorial optimization problems [17]. Given the current population X and the attached temperature ladder t, we propose to obtain a new population X' by making an exchange between x_i and x_j (usually with |i − j| = 1) without changing t; that is, X' = {x_1, …, x_j, …, x_i, …, x_N}. The new population is then accepted with probability

min{1, exp{(H(x_i) − H(x_j)) · (1/t_i − 1/t_j)}}

according to the Metropolis-Hastings rule.
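The operators above can be sketched in a toy implementation. The sketch below is an assumption-laden illustration, not the authors' code: it keeps one chromosome per temperature and, for brevity, implements only single-bit mutation and neighbour exchange with Metropolis-Hastings acceptance, omitting selection and cross-over:

```python
import math
import random

def saga(fitness, dim, ladder, iters=200, seed=0):
    # fitness: energy H(x) to minimize; dim: chromosome length;
    # ladder: temperatures t_1 > ... > t_N (one chromosome each).
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(dim)] for _ in ladder]
    best = min(pop, key=fitness)[:]
    for _ in range(iters):
        # Mutation: flip one bit, Metropolis test at that chain's temperature.
        i = rng.randrange(len(pop))
        cand = pop[i][:]
        cand[rng.randrange(dim)] ^= 1
        dh = fitness(cand) - fitness(pop[i])
        if dh <= 0 or rng.random() < math.exp(-dh / ladder[i]):
            pop[i] = cand
        # Exchange: swap chromosomes attached to neighbouring temperatures.
        k = rng.randrange(len(pop) - 1)
        d = (fitness(pop[k]) - fitness(pop[k + 1])) * (1.0 / ladder[k] - 1.0 / ladder[k + 1])
        if d >= 0 or rng.random() < math.exp(d):
            pop[k], pop[k + 1] = pop[k + 1], pop[k]
        cur = min(pop, key=fitness)
        if fitness(cur) < fitness(best):
            best = cur[:]
    return best
```

On a toy objective (e.g., select exactly k bands out of dim), runs with the same seed are reproducible; a full implementation would add the roulette-wheel selection and cross-over moves described above.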

3.5. Fitness Function

In addition, another key element of SAGA is the design of the fitness function. We use only the classification accuracy obtained from the training feature subset as the fitness function. The purpose of the iterative repetition is to determine the optimal feature subset and maximize classification accuracy. The adopted classifier is SVM, which is described in Section 5.

4. Choquet Fuzzy Integral

Based on subspace decomposition, the CFI method is used to further refine the selected bands. The definitions of the fuzzy measure and the Choquet integral are given in [18, 19].

Definition 1 (fuzzy measure (see [18])). Let B denote the Borel field obtained from the domain X. A set function g defined on B is a fuzzy measure if it satisfies the following conditions: (1) g(∅) = 0 and g(X) = 1, where ∅ is the null set; (2) given two subsets A, B ∈ B, if A ⊆ B, then g(A) ≤ g(B); (3) if A_n ∈ B and the sequence {A_n} is monotone, then lim g(A_n) = g(lim A_n). Building on this definition of the fuzzy measure, Sugeno introduced the λ-measure.

Definition 2 (λ-measure (see [19])). For all sets A, B ∈ B with A ∩ B = ∅, there exists λ > −1 satisfying

g(A ∪ B) = g(A) + g(B) + λ·g(A)·g(B).

Obviously, when λ = 0, the λ-fuzzy measure is a probability measure.

Given a finite set X = {x_1, …, x_n}, the mapping g_i = g({x_i}) is called the fuzzy density function, that is, the single-point importance. If A ∩ {x_i} = ∅, then, according to (20), the following formula can be deduced:

g(A ∪ {x_i}) = g(A) + g_i + λ·g_i·g(A).

Because g(X) = 1, the value of λ can be obtained by solving

λ + 1 = Π_{i=1}^{n} (1 + λ·g_i).

It can be proved that, for fixed fuzzy densities g_i ∈ (0, 1), there exists one and only one λ > −1 satisfying this equation (λ = 0 exactly when Σ g_i = 1). So, if the fuzzy densities g_i (i = 1, …, n) are given, a unique λ-fuzzy measure is obtained.
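As a numerical aside (not from the original paper), the unique λ can be found as the nonzero root of λ + 1 = Π(1 + λ·g_i); the bisection sketch below is illustrative only, and the bracketing choices are assumptions:

```python
def solve_lambda(densities):
    # Unique lambda > -1 with prod(1 + lambda*g_i) = 1 + lambda;
    # lambda = 0 (an additive measure) when the densities sum to 1.
    s = sum(densities)
    if abs(s - 1.0) < 1e-12:
        return 0.0
    def f(lam):
        p = 1.0
        for g in densities:
            p *= 1.0 + lam * g
        return p - (1.0 + lam)
    if s < 1.0:                 # root is positive
        lo, hi = 1e-9, 1.0
        while f(hi) < 0:        # grow the bracket until f changes sign
            hi *= 2.0
    else:                       # root lies in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    for _ in range(200):        # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

When the densities sum to less than 1 the root is positive; when they sum to more than 1 it lies in (−1, 0), consistent with the theory above.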

With regard to information fusion theory, the fuzzy density g_i serves as the importance, or contribution, of the source x_i. The group of sources thus determines a unique λ-fuzzy measure in the process of data fusion. Based on the λ-fuzzy measure, Choquet proposed a fuzzy integral method.

Definition 3 (Choquet integral (see [19])). Given a function h: X → [0, 1], its Choquet integral with respect to the fuzzy measure g is defined as

(C)∫ h dg = Σ_{i=1}^{n} [h(x_(i)) − h(x_(i−1))] · g(A_(i)),

where the sources are reordered so that h(x_(1)) ≤ h(x_(2)) ≤ … ≤ h(x_(n)), h(x_(0)) = 0, and A_(i) = {x_(i), …, x_(n)}.

In this equation, the value h(x_i) can be interpreted as a credibility estimation of source x_i for a specific target. Note that the reordered function values are increasing, with h(x_(0)) = 0, and that the fuzzy measure g(A_(i)) expresses the importance, or contribution, of the corresponding information sources with respect to the ultimate decision-making or estimation.

According to (13), the CFI can be seen as a weighted sum of the h(x_(i)), where the weights depend on g(A_(i)), which in turn depends on the rank of the h(x_(i)); the CFI is therefore a nonlinear function of h. Clearly, when λ = 0, the λ-fuzzy measure is a probability measure and the CFI is a linear function of h. The CFI is used in data fusion if we regard h(x_i) as the judgment result of an information source and g_i as its degree of importance or contribution. The CFI is thus a nonlinear combination of the source results weighted by source importance.
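A minimal sketch of the Choquet computation just described (illustrative only, not the authors' code); the λ-measure of each subset is built one element at a time using the recursion g(A ∪ {x}) = g(A) + g_x + λ·g_x·g(A):

```python
def choquet_integral(h, densities, lam):
    # h[i]: belief of source i; densities[i]: fuzzy density g_i; lam: lambda.
    order = sorted(range(len(h)), key=lambda i: h[i])   # ascending h
    def g(subset):
        # lambda-measure built up one element at a time:
        # g(A + {x}) = g(A) + g_x + lam * g_x * g(A)
        val = 0.0
        for i in subset:
            val = val + densities[i] + lam * densities[i] * val
        return val
    total, prev = 0.0, 0.0
    for rank, i in enumerate(order):
        # g(A_(i)): measure of the sources whose h is >= h(x_(i))
        total += (h[i] - prev) * g(order[rank:])
        prev = h[i]
    return total
```

With λ = 0 and densities summing to 1 the measure is additive, so the Choquet integral reduces to the plain weighted sum Σ h_i·g_i, matching the linearity remark above.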

Before computing the fuzzy integral, we must compute the value of λ. From (12), we know that λ is a root of a high-order polynomial. When there are many sources, computing λ imposes a considerable computational burden, hindering online and real-time use of the algorithm.

5. SVM Classification Methods

Training data are required to train the SVM model; in general, however, these data cannot be separated without errors. The data points closest to the hyperplane are used to measure the margin, and SVM attempts to identify the hyperplane that maximizes the margin while minimizing a quantity proportional to the number of misclassification errors [20, 21]. SVM derives the optimal hyperplane as the solution of the following convex quadratic programming problem [22]:

min_{w, b, ξ} (1/2)·‖w‖^2 + C·Σ_{i=1}^{N} ξ_i, subject to y_i(w·φ(x_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0,

where {(x_i, y_i)}, i = 1, …, N, are the labeled training data with y_i ∈ {−1, +1}; φ(·) maps the data into the feature space, and (w, b) defines a linear classifier in that space; C is the regularization parameter defined by the user; and ξ_i is a positive slack variable that handles permitted errors.

The aforementioned optimization problem can be reformulated through a Lagrange function, where the Lagrange multipliers are found via dual optimization, leading to the following convex quadratic programming solution [23–25]:

max_α Σ_{i=1}^{N} α_i − (1/2)·Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j K(x_i, x_j), subject to Σ_{i=1}^{N} α_i y_i = 0, 0 ≤ α_i ≤ C,

where α = (α_1, …, α_N) is the vector of Lagrange multipliers and K(·, ·) is a kernel function, for example, the radial basis function [26]:

K(x_i, x_j) = exp(−γ·‖x_i − x_j‖^2).

Thus, the final result is a discrimination function conveniently expressed as a function of the data in the original (lower-dimensional) feature space [27]:

f(x) = sgn(Σ_{i=1}^{N} α_i y_i K(x_i, x) + b).

6. Proposed Method

6.1. Adaptive Subspace Decomposition

In the beginning, adaptive subspace decomposition is used to divide the bands into seven subspaces according to (1). All correlation values R_ij are computed, and a proper threshold is set. Consecutive bands whose correlation exceeds the threshold are placed in the same subspace. We can dynamically control the number of subspaces and the number of bands in each subspace by changing the threshold.
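A hedged sketch of this thresholded grouping (hypothetical function and data; in this simplified version only adjacent-band correlations are consulted):

```python
def decompose_subspaces(R, threshold):
    # Group consecutive bands: start a new subspace whenever the
    # correlation between a band and its predecessor drops below threshold.
    groups = [[0]]
    for b in range(1, len(R)):
        if R[b - 1][b] >= threshold:
            groups[-1].append(b)
        else:
            groups.append([b])
    return groups
```

Raising the threshold produces more, smaller subspaces; lowering it merges adjacent bands into fewer groups, which is the dynamic control described above.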

6.2. The Band Order Method in Subspace

SAGA is used to find the optimal bands in each subspace. Here we choose the common binary coding method as the genetic coding mode, and the number of SAGA iterations is set to 50. In general, a subspace has many bands, and all suitable bands should be chosen. If a subspace has only one band, that band must be chosen.

6.3. The Band Reorder Method in Subspace

After the bands are chosen by SAGA, they can be further optimized with the CFI method. CFI takes into account three factors: the entropy of information, the correlation coefficient, and the standard distance between the means.

6.3.1. Entropy of Information and Variance

According to Shannon's information theory, entropy measures information content in terms of uncertainty. The entropy of each hyperspectral component represents its information content: the higher the entropy, the richer the information content and the more meaningful the representation. The entropy, or total information [28], is defined as

H = −Σ_i p_i · log2(p_i),

where p_i is the probability of pixel value i.
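For illustration (not the paper's code), the entropy of one band can be estimated from its gray-level histogram; integer-valued pixels are an assumption of the sketch:

```python
import math
from collections import Counter

def band_entropy(pixels):
    # Shannon entropy (bits) of the gray-level histogram of one band.
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A band with a uniform histogram over 2^k levels has entropy k bits, while a constant band has entropy 0, matching the "richer information" interpretation above.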

Variance represents the deviation of the pixel gray-scale values from the mean. For a band, the mean value and variance are computed as [29]

μ = (1/(W·H)) · Σ_{x=1}^{W} Σ_{y=1}^{H} g(x, y),
σ^2 = (1/(W·H)) · Σ_{x=1}^{W} Σ_{y=1}^{H} (g(x, y) − μ)^2,

where W and H represent the width and height of the image and g(x, y) is the gray-scale value of pixel (x, y).

6.3.2. Correlation Coefficients

In statistics, the correlation coefficient denotes the accuracy of a least-squares fit to the original data. It is a normalized measure of the strength of the linear relationship between two variables. Correlation is employed in many types of applications, such as hyperspectral image processing, where it is used to measure and quantitatively compare the similarity between bands [30]. The two-dimensional normalized correlation function for image processing is

ρ_ij = Σ_x Σ_y (g_i(x, y) − μ_i)(g_j(x, y) − μ_j) / sqrt(Σ_x Σ_y (g_i(x, y) − μ_i)^2 · Σ_x Σ_y (g_j(x, y) − μ_j)^2),

where ρ_ij is a real number between −1 and 1 and i, j denote two adjacent bands.

6.3.3. Standard Distance between the Means

Object classes need to be analyzed to determine in which bands they are easy to distinguish [31], that is, to measure the statistical distance between object classes in each band. The standard distance between the means is defined as

D = |μ_1 − μ_2| / (σ_1 + σ_2),

where μ_1 and μ_2 are the spectral means of the corresponding regions of the two samples and σ_1 and σ_2 are the corresponding standard deviations. D reflects the separability of the two samples in each band.

The procedure of the band reorder method using CFI is then as follows.
(1) According to (18), the entropy of information for each band in each subspace is computed and recorded.
(2) According to (21), the correlation coefficients in each subspace are computed and recorded.
(3) According to (22), the standard distances between the means in each subspace are computed and recorded.
(4) The belief function h is constructed on the domain of these three factors. The relations between the index value of each factor and the band reordering are as follows: the larger the entropy, the richer the information; the smaller the correlation coefficient, the more independent the band; and the larger the standard distance between the means, the easier it is to distinguish the two samples. The belief functions of CFI are constructed accordingly, as in (23), and then reordered to give (24), whose terms are the minimum, median, and maximum values of the three, respectively.
(5) Another quantity that must be fixed is the fuzzy measure g. The belief values are arranged in ascending order, with the largest one of primary importance, and the fuzzy measure is assigned in each subspace according to (25).
(6) Finally, the fuzzy integral value (the index value) of each band is computed according to (26).
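The reordering steps above can be sketched as follows. Since the exact belief functions (23)–(26) are not reproduced here, the normalization (using 1 − ρ so that low correlation scores high) and the additive weights (the λ = 0 special case, in which the Choquet integral reduces to a weighted sum) are assumptions made for illustration:

```python
def reorder_bands(bands, weights=(0.4, 0.3, 0.3)):
    # bands: {band_id: (entropy, correlation, distance)}, all scaled to [0, 1].
    def score(e, rho, d):
        beliefs = (e, 1.0 - rho, d)   # high entropy, low correlation and
                                      # high separability are all desirable
        return sum(w * b for w, b in zip(weights, beliefs))
    scored = [(score(*feats), band_id) for band_id, feats in bands.items()]
    return [band_id for _, band_id in sorted(scored, reverse=True)]
```

Bands are returned in descending order of index value, after which a threshold on the score keeps only the most important bands, as in Section 7.4.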

6.4. Flowchart of the Proposed Method

The flowchart of this paper is illustrated in Figure 1.

7. Experiments and Analysis

7.1. Hyperspectral Images

Experiments were conducted on a hyperspectral data set acquired over the Northwest Indiana Indian Pine test site 3 (a 2 × 2 mile portion of Northwest Tippecanoe County, Indiana) on June 12, 1992. The data comprise 145 × 145 pixels and 220 bands. The false color image, composed of bands 89, 5, and 120, is shown in Figure 2.

7.2. Subspace Decomposition Experiment

The ASD scheme is used to obtain the correlation values between the bands. Table 1 gives part of the correlation matrix computed according to (1).


Bands   5       6       7       8       9       10      11      12      13      14      15

5       1.0000  0.9793  0.9823  0.9813  0.9809  0.9800  0.9785  0.9769  0.9750  0.9710  0.9667
6       0.9793  1.0000  0.9889  0.9892  0.9886  0.9881  0.9869  0.9857  0.9835  0.9798  0.9755
7       0.9823  0.9889  1.0000  0.9920  0.9930  0.9926  0.9918  0.9911  0.9892  0.9856  0.9819
8       0.9813  0.9892  0.9920  1.0000  0.9938  0.9941  0.9935  0.9929  0.9911  0.9875  0.9836
9       0.9809  0.9886  0.9930  0.9938  1.0000  0.9950  0.9954  0.9949  0.9934  0.9897  0.9861
10      0.9800  0.9881  0.9926  0.9941  0.9950  1.0000  0.9959  0.9959  0.9944  0.9907  0.9872
11      0.9785  0.9869  0.9918  0.9935  0.9954  0.9959  1.0000  0.9966  0.9959  0.9926  0.9894
12      0.9769  0.9857  0.9911  0.9929  0.9949  0.9959  0.9966  1.0000  0.9968  0.9943  0.9915
13      0.9750  0.9835  0.9892  0.9911  0.9934  0.9944  0.9959  0.9968  1.0000  0.9965  0.9949
14      0.9710  0.9798  0.9856  0.9875  0.9897  0.9907  0.9926  0.9943  0.9965  1.0000  0.9977
15      0.9667  0.9755  0.9819  0.9836  0.9861  0.9872  0.9894  0.9915  0.9949  0.9977  1.0000

As presented in Table 1, the autocorrelation coefficient of each band is equal to 1, and the correlation value is very high. In this paper, the ASD method is performed using the correlation criterion of a given threshold, which is 0.8. The full data space is decomposed into seven subspaces. The dimensions of each subspace are shown in Table 2.


Subspace     1      2      3    4      5      6      7

Dimensions   1–15   16–35  36   37–38  39–76  77–97  98–220
Bands        15     20     1    2      38     21     123

From the 220 spectral channels acquired by the AVIRIS sensor, 41 bands were discarded because they were affected by atmospheric problems. The discarded bands were as follows: 1–4, 78, 80–86, 103–110, 149–165, and 217–220. As a result, the new dimensions of each subspace are shown in Table 3.


Subspace     1      2      3    4      5      6    7    8      9       10       11

Dimensions   5–15   16–35  36   37–38  39–76  77   79   87–97  98–102  111–148  166–216
Bands        11     20     1    2      38     1    1    11     5       38       51

7.3. SAGA in Each Subspace

The hyperspectral image is categorized into seven classes according to the ground reference data. The ratio of training to test samples is 1 : 3 because SVM is suitable for small sample sets. SAGA is run in each subspace, and the fitness is computed and illustrated in Figure 3. We select the optimal bands in each subspace. SAGA is unnecessary in subspaces 3, 6, and 7 because each of these subspaces contains only one band. The kernel function used is a radial basis function, and the two SVM parameters (the penalty C and the kernel parameter γ) are selected by fivefold cross-validation over fixed search ranges during the training phase.

7.4. Index Value of CFI

The entropy, correlation coefficient, and standard distance between the means of each band are computed. The CFI index values are then obtained and sorted in descending order in each subspace. The larger the index value, the more important the band. A threshold on the index value is given; Table 4 shows the index values of the bands when the threshold is 0.940.


Subspace   Band no. (index value)

1          11 (0.9419)
2          17 (0.9732), 18 (0.9706), 16 (0.9604), 19 (0.9502)
3          36 (1)
4          37 (0.9868)
5          71 (0.9667), 69 (0.9658), 70 (0.9652), 73 (0.9642), 67 (0.9638), 72 (0.9636), 74 (0.9636)
6          77 (1)
7          79 (1)
8          88 (0.9813), 89 (0.9659)
9          101 (0.9608)
10         119 (0.9689), 120 (0.9689), 121 (0.9687), 122 (0.9685), 118 (0.9683), 123 (0.9681), 124 (0.9675), 125 (0.9671), 126 (0.9669), 127 (0.9663), 128 (0.9663), 129 (0.9660), 130 (0.9659)
11         184 (0.9659)

SAGA determines which band or bands are selected in each subspace, but it cannot indicate which bands have higher priority than others. The CFI index values then further refine the selected bands, yielding a more effective optimization.

7.5. Computational Time Complexity

One further issue needs to be considered: the construction and analysis steps of the proposed procedure could consume considerable time. We therefore compare the time complexity of the four methods GA, SAGA, CFI, and SAGA-CFI. The time complexity of SAGA-CFI is of the same order as that of the other three methods, which means that the processing cost of SAGA-CFI is no higher than theirs.

7.6. Classification Experiment

The hyperspectral image is also categorized into seven classes, while the ratio of training and test samples remains 1 : 3. The numbers of training samples and of test samples are shown in Table 5.


Classes                      Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7

Number of training samples   48       485      632      86       777      749      163
Number of test samples       150      1435     2192     235      2417     2469     516

In this work, we implement two other similar classification methods for hyperspectral images to compare with the proposed algorithm. One method is based on SAGA and SVM classification (SAGA-SVM); the other is based on CFI and SVM classification (CFI-SVM). Correspondingly, the method of this paper is based on SAGA, CFI, and SVM classification (SAGA-CFI-SVM). The error matrices of the three methods are presented in Tables 6, 7, and 8, while the total accuracy and Kappa value are shown in Table 9. The threshold is 0.940 in all of the above methods. Table 10 shows how the total accuracy and Kappa value change with the threshold.


Classified data   Reference data                                              Row total   UA
                  1       2       3       4       5       6       7

1                 137     0       35      3       51      44      0           270         0.5074
2                 0       1328    42      4       41      0       9           1424        0.9326
3                 1       0       2024    0       45      107     0           2177        0.9297
4                 4       22      38      222     0       0       13          299         0.7425
5                 5       28      0       0       2235    47      10          2325        0.9613
6                 0       36      53      3       0       2271    8           2371        0.9578
7                 3       21      0       3       45      0       476         548         0.8686

Column total      150     1435    2192    235     2417    2469    516         9414

PA                0.9133  0.9254  0.9234  0.9446  0.9247  0.9198  0.9225                  0.9249


Classified data   Reference data                                              Row total   UA
                  1       2       3       4       5       6       7

1                 134     0       69      5       66      70      0           344         0.3895
2                 0       1276    55      9       75      0       15          1430        0.8923
3                 5       0       1937    0       75      135     0           2152        0.9001
4                 1       41      62      210     0       0       14          328         0.6402
5                 6       39      0       0       2114    63      20          2242        0.9429
6                 0       35      69      6       0       2201    15          2326        0.9463
7                 4       44      0       5       87      0       452         592         0.7635

Column total      150     1435    2192    235     2417    2469    516         9414

PA                0.8933  0.8892  0.8837  0.8936  0.8746  0.8915  0.8760                  0.8841


Classified data   Reference data                                              Row total   UA
                  1       2       3       4       5       6       7

1                 136     0       32      1       53      49      0           271         0.5018
2                 0       1323    43      2       35      0       4           1407        0.9403
3                 5       0       2058    0       50      95      0           2208        0.9321
4                 3       23      27      227     0       0       13          293         0.7747
5                 2       24      0       0       2239    29      13          2307        0.9705
6                 0       35      32      1       0       2296    9           2373        0.9676
7                 4       30      0       4       40      0       477         555         0.8595

Column total      150     1435    2192    235     2417    2469    516         9414

PA                0.9067  0.9220  0.9389  0.9660  0.9264  0.9299  0.9244                  0.9301


Index           Total accuracy   Kappa value

SVM             86.72%           0.8315
GA-SVM          90.28%           0.8752
SAGA-SVM        92.49%           0.9030
CFI-SVM         88.41%           0.8539
SAGA-CFI-SVM    93.01%           0.9114


Index           Total accuracy             Kappa value

Threshold       0.940    0.965    0.970    0.940    0.965    0.970

CFI-SVM         88.41%   89.12%   68.75%   0.8539   0.8654   0.6173
SAGA-CFI-SVM    93.01%   93.91%   72.52%   0.9114   0.9223   0.6355

In the error matrix, the producer's accuracy (PA) is defined as PA_j = x_jj / x_+j, and the user's accuracy (UA) is defined as UA_i = x_ii / x_i+, where x_ii is the value on the major diagonal of the ith row of the error matrix, x_i+ is the total of the ith row, and x_+j is the total of the jth column.

To measure the agreement between the classification and the reference data, we compute the kappa coefficient as

κ = (N·Σ_i x_ii − Σ_i x_i+·x_+i) / (N^2 − Σ_i x_i+·x_+i),

where N is the total number of pixels.
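The accuracy measures above can be computed from any square error matrix; the sketch below is illustrative (the row = classified, column = reference orientation is the one used in Tables 6–8):

```python
def accuracy_measures(M):
    # M: square error matrix, rows = classified classes, cols = reference.
    n = len(M)
    row = [sum(M[i]) for i in range(n)]                       # x_i+
    col = [sum(M[i][j] for i in range(n)) for j in range(n)]  # x_+j
    N = sum(row)
    diag = sum(M[i][i] for i in range(n))
    pa = [M[j][j] / col[j] for j in range(n)]   # producer's accuracy
    ua = [M[i][i] / row[i] for i in range(n)]   # user's accuracy
    p_o = diag / N                              # overall accuracy
    p_e = sum(row[i] * col[i] for i in range(n)) / (N * N)
    kappa = (p_o - p_e) / (1 - p_e)             # chance-corrected agreement
    return pa, ua, p_o, kappa
```

For a 2 × 2 matrix [[40, 10], [20, 30]], for example, the overall accuracy is 0.7, chance agreement is 0.5, and kappa is therefore 0.4.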

The original reference image, the SAGA-SVM classification image, the CFI-SVM classification image, and the SAGA-CFI-SVM classification image are illustrated in Figure 4.

8. Conclusions

An innovative band selection algorithm called SAGA-CFI has been developed and combined with the SVM classification method to classify hyperspectral remote sensing images. On the basis of subspace decomposition, SAGA is used in each subspace to lower the computational complexity and select suitable bands, and the CFI method is adopted to further refine the selected bands in order to increase classification accuracy. SAGA-CFI-SVM has been implemented and achieves improved classification compared with conventional algorithms. Comparison results show that the proposed method is superior in terms of classification accuracy.

The classification of hyperspectral remote sensing images based on SAGA-CFI-SVM is far from complete and requires further research. One open problem is the further reduction of the computational complexity of SAGA and the acceleration of the search procedure. Another is the thorough improvement of the kernel function to obtain significantly higher classification accuracy. Finally, we need to study a classification method based on a selective ensemble of support vector machines, as it may further improve the accuracy.

Acknowledgments

This study is supported by the National Natural Science Foundation of China (no. 61271386) and funded by the CRSRI Open Research Program (no. CKWV2013215/KY) and the Industrialization Project of Universities in Jiangsu Province (no. JH10-9).

References

  1. F. Melgani and L. Bruzzone, “Classification of hyperspectral remote sensing images with support vector machines,” IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 8, pp. 1778–1790, 2004.
  2. I. T. Jolliffe, Principal Component Analysis, Springer Series in Statistics, Springer, New York, NY, USA, 1986.
  3. A. A. Green, M. Berman, P. Switzer, and M. D. Craig, “A transformation for ordering multispectral data in terms of image quality with implications for noise removal,” IEEE Transactions on Geoscience and Remote Sensing, vol. 26, no. 1, pp. 65–74, 1988.
  4. G. Camps-Valls, L. Gómez-Chova, J. Calpe-Maravilla et al., “Robust support vector method for hyperspectral data classification and knowledge discovery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 7, pp. 1530–1542, 2004.
  5. P. Gurram and H. Kwon, “Contextual SVM using Hilbert space embedding for hyperspectral classification,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 5, pp. 1031–1035, 2013.
  6. Z. Sun, C. Wang, H. Wang, and J. Li, “Learn multiple-kernel SVMs for domain adaptation in hyperspectral data,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 5, pp. 1224–1228, 2013.
  7. B. Tso and P. Mather, Classification Methods for Remotely Sensed Data, CRC Press, New York, NY, USA, 2001.
  8. C. Yang, S. Liu, L. Bruzzone, R. Guan, and P. Du, “A feature-metric-based affinity propagation technique for feature selection in hyperspectral image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 5, pp. 1152–1156, 2013.
  9. W. Xia, B. Wang, and L. Zhang, “Band selection for hyperspectral imagery: a new approach based on complex networks,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 5, pp. 1229–1233, 2013.
  10. S. Li, H. Wu, D. Wan, and J. Zhu, “An effective feature selection method for hyperspectral image classification based on genetic algorithm and support vector machine,” Knowledge-Based Systems, vol. 24, no. 1, pp. 40–48, 2011.
  11. M. Pal, “Hybrid genetic algorithm for feature selection with hyperspectral data,” Remote Sensing Letters, vol. 4, no. 7, pp. 619–628, 2013.
  12. M. Cui, S. Prasad, W. Li, and L. M. Bruce, “Locality preserving genetic algorithms for spatial-spectral hyperspectral image classification,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 3, pp. 1688–1697, 2013.
  13. M. Qinglin, “A gas outburst prediction algorithm based on Choquet fuzzy integral,” in Proceedings of the WRI Global Congress on Intelligent Systems (GCIS '09), pp. 3–7, May 2009.
  14. Y. Zhang, M. D. Desai, J. Zhang, and M. Jin, “Adaptive subspace decomposition for hyperspectral data dimensionality reduction,” in Proceedings of the International Conference on Image Processing (ICIP '99), pp. 326–329, October 1999.
  15. J. Zhang, Y. Zhang, B. Zou, and T. Zhou, “Fusion classification of hyperspectral image based on adaptive subspace decomposition,” in Proceedings of the International Conference on Image Processing (ICIP '00), pp. 472–475, September 2000.
  16. F. Liang and W. H. Wong, “Evolutionary Monte Carlo: applications to Cp model sampling and change point problem,” Statistica Sinica, vol. 10, no. 2, pp. 317–342, 2000.
  17. C. Wang, J. D. Hyman, A. Percus, and R. Caflisch, “Parallel tempering for the traveling salesman problem,” International Journal of Modern Physics C, vol. 20, no. 4, pp. 539–556, 2009.
  18. S. Auephanwiriyakul, J. M. Keller, and P. D. Gader, “Generalized Choquet fuzzy integral fusion,” Information Fusion, vol. 3, no. 1, pp. 69–85, 2002.
  19. N. Hao and G. J. Wang, “Double-null asymptotic additivity of generalized fuzzy valued Choquet integrals,” Sichuan Shifan Daxue Xuebao, vol. 30, no. 1, pp. 62–65, 2007.
  20. Y. Tarabalka, M. Fauvel, J. Chanussot, and J. A. Benediktsson, “SVM- and MRF-based method for accurate classification of hyperspectral images,” IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 4, pp. 736–740, 2010.
  21. H. B. Wang, Z. Chen, X. Wang, and Y. Ma, “Random finite sets based UPF-CPHD multi-object tracking,” Journal on Communications, vol. 33, no. 12, pp. 147–153, 2012.
  22. P. Du, K. Tan, and X. Xing, “Wavelet SVM in reproducing kernel Hilbert space for hyperspectral remote sensing image classification,” Optics Communications, vol. 283, no. 24, pp. 4978–4984, 2010.
  23. C. Huang, K. Song, S. Kim et al., “Use of a dark object concept and support vector machines to automate forest cover change analysis,” Remote Sensing of Environment, vol. 112, no. 3, pp. 970–985, 2008.
  24. G. P. Tan, X. Y. Ni, X. Q. Liu, C. Y. Qu, and L. Y. Tang, “Real-time multicast with network coding in mobile ad-hoc networks,” Intelligent Automation and Soft Computing, vol. 18, no. 7, pp. 783–794, 2012.
  25. D. Tuia, F. Ratle, F. Pacifici, M. F. Kanevski, and W. J. Emery, “Active learning methods for remote sensing image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 7, pp. 2218–2232, 2009.
  26. J. Ham, Y. Chen, M. M. Crawford, and J. Ghosh, “Investigation of the random forest framework for classification of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 3, pp. 492–501, 2005.
  27. B. Zhang, S. Li, X. Jia, L. Gao, and M. Peng, “Adaptive Markov random field approach for classification of hyperspectral imagery,” IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 5, pp. 973–977, 2011.
  28. V. Tsagaris, V. Anastassopoulos, and G. A. Lampropoulos, “Fusion of hyperspectral data using segmented PCT for color representation and classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 10, pp. 2365–2374, 2005.
  29. D. Ming, J. Luo, L. Li, and Z. Song, “Modified local variance based method for selecting the optimal spatial resolution of remote sensing image,” in Proceedings of the 18th International Conference on Geoinformatics (Geoinformatics '10), June 2010.
  30. Y. Zhu, P. K. Varshney, and H. Chen, “Evaluation of ICA based fusion of hyperspectral images for color display,” in Proceedings of the 10th International Conference on Information Fusion (FUSION '07), July 2007.
  31. Y. S. Zhao, “Methods on optimal bands selection in hyperspectral remote sensing data interpretation,” Journal of Graduate School, Academia Sinica, vol. 16, no. 2, pp. 153–161, 1999 (in Chinese).

Copyright © 2013 Hongmin Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
