
Eryang Chen, Ruichun Chang, Kaibo Shi, Ansheng Ye, Fang Miao, Jianghong Yuan, Ke Guo, Youhua Wei, Yiping Li, "Spectral-Spatial Hyperspectral Image Semisupervised Classification by Fusing Maximum Noise Fraction and Adaptive Random Multigraphs", Discrete Dynamics in Nature and Society, vol. 2021, Article ID 9998185, 11 pages, 2021. https://doi.org/10.1155/2021/9998185

Spectral-Spatial Hyperspectral Image Semisupervised Classification by Fusing Maximum Noise Fraction and Adaptive Random Multigraphs

Academic Editor: Zi-Peng Wang
Received: 12 Mar 2021
Accepted: 15 Jun 2021
Published: 24 Jun 2021

Abstract

Hyperspectral images (HSIs) contain large amounts of spectral and spatial information, which makes ground object classification possible. However, when traditional methods are used, achieving a satisfactory classification result is difficult because of the insufficient labeling of samples in the training set. In addition, parameter adjustment during HSI classification is time-consuming. This paper proposes a novel fusion method based on the maximum noise fraction (MNF) and adaptive random multigraphs for HSI classification. Considering the overall spectrum of the object and the correlation of adjacent bands, the MNF was utilized to reduce the spectral dimension. Next, a multiscale local binary pattern (LBP) analysis was performed on the MNF dimension-reduced data to extract spatial features at different scales. The obtained multiscale spatial features were then stacked with the MNF dimension-reduced spectral features to form multiscale spectral-spatial features (SSFs), which were fed into the random multigraph (RMG) for HSI classification. Optimal performance was obtained by decision fusion. For all three real datasets, our method achieved competitive results with only 10 training samples per class. More importantly, the classification parameters corresponding to different hyperspectral data can be automatically optimized using our method.

1. Introduction

Owing to advances in remote sensing technology, hyperspectral images (HSIs) contain increasingly rich spectral and spatial information (SSI), resulting in their extensive use in domains such as forest inventory [1], urban area monitoring [2], road extraction [3], geological surveys [4], precision agriculture [5], environmental protection [6], military applications [7], hydrocarbon detection [8], oil reservoir exploration [9], and lake sediment analysis [10]. HSI classification is a crucial research topic related to these applications. In contrast to Synthetic Aperture Radar (SAR) [11] or RGB images [12], the two main challenges associated with HSI classification are the high dimensionality of the dataset and the redundancy of spectral information.

Many approaches for HSI dimension reduction have been proposed. The most frequently used methods are principal component analysis (PCA) [13, 14], independent component analysis (ICA) [15], maximum noise fraction (MNF) [16], linear discriminant analysis [17], and deep learning approaches [18]. Uddin et al. [13] proposed an information-theoretic, segmentation-based folded PCA for HSI classification. Fu et al. [14] proposed a segmented PCA-based algorithm for HSI classification.

However, because the noise in hyperspectral data can easily mask subtle spectral features, careful noise removal is required to extract useful information. This is problematic for PCA, which seeks orthogonal projection vectors that maximize the variance of the projected samples and therefore does not distinguish signal from noise. Unlike PCA, MNF aims to maximize the signal-to-noise ratio (SNR) rather than the variance. MNF is therefore worth considering for HSI classification because it effectively removes noise during dimensionality reduction.

HSI classification can be significantly improved by employing an appropriate object-based [19], spectral-spatial fusion-based [20], decision fusion-based [21], or deep learning-based method [22]. However, two challenges are unavoidable for most researchers. On the one hand, traditional deep learning-based approaches rely heavily on a large amount of labeled data to achieve competitive results, whereas HSI annotation is expensive and time-consuming because it requires expert knowledge and skills. On the other hand, many methods require a large number of parameter settings during the experiment, which demands expert experience [23]. HSI classification has been widely studied, and various classification methods have been adopted, including support vector machines (SVMs) [24], random forests [25], neural networks [26], low-rank representation [27], sparse representation [28], deep learning methods [29, 30], and meta-learning methods [31]. Xu et al. [29] directly used random patches extracted from the HSI as convolution kernels, without any training, to improve classification efficiency. Yin et al. [30] proposed a CapsNet-based, data-driven HSI classification model. These results demonstrate that such methods can achieve ideal results when the training samples are sufficient.

To integrate the spatial information, researchers have proposed many spectral-spatial feature extraction (FE) techniques, such as the extinction profile (EP) [32] and local binary patterns (LBP) [33]. In contrast with the EP features, LBP features facilitate the mining of the HSI texture information, such as global contrast information and texture depth [34]. In this study, we adopted the LBP features as spatial features.

However, when the labeled samples in the training set are insufficient, the classification accuracy achieved by traditional methods decreases significantly [35] because of the so-called Hughes effect, or the curse of dimensionality [36]. Moreover, many methods require a series of manual parameter settings. For instance, to extract spatial features, researchers must select an appropriate neighborhood window size; this selection is time-consuming [37] and requires expertise [38]. Since 2010, ensemble learning methods for HSI classification have received significant attention because of their relatively low dependence on large numbers of training samples [39]. Many such methods have been developed, including support vector machine ensembles [40], boosting [41], segmentation-based methods [19], unsupervised methods [42], and semisupervised methods [43]. However, the graph-based semisupervised approach has rarely been considered in HSI classification [44, 45].

Motivated by the aforementioned discussions, we propose a new spectral-spatial HSI semisupervised classifier that combines the SSI, based on MNF and adaptive random multigraphs (SS-MNF-ARMG). The primary contributions of this study are as follows:
(1) A novel spectral-spatial HSI semisupervised classification framework is developed. Because of its adaptive properties, the optimal parameters can be determined without artificial auxiliaries.
(2) By introducing the MNF, the noise in the HSI can be removed more efficiently during dimensionality reduction. On the basis of the dimension-reduced HSI, the SSI is combined, which alleviates the curse of dimensionality.
(3) In contrast with several existing studies, SS-MNF-ARMG achieves competitive performance on three real HSI datasets while using only a small number of labeled samples, and this performance is further improved by introducing the RMG in a new fusion mode.

2. Relevant Work

2.1. MNF

Let $X$ be the HSI data, and let $S$ and $N$ be the signal and noise parts of $X$, respectively. The goal of MNF is to seek a linear transformation matrix $A$ that maximizes the SNR of the transformed data. Assuming that $S$ and $N$ are uncorrelated, $X$ can be represented as [43]

$$X = S + N.$$

The covariance matrix (CM) of $X$ can then be obtained by

$$\Sigma_X = \Sigma_S + \Sigma_N,$$

where $\Sigma_S$ and $\Sigma_N$ are the CMs of $S$ and $N$, respectively. The MNF transform can be expressed as

$$Y = A^{T} X, \quad A = [a_1, a_2, \ldots, a_d],$$

where $Y$ is the MNF result of $X$, $a_1, \ldots, a_d$ are the eigenvectors associated with the $d$ largest eigenvalues of $\Sigma_N^{-1}\Sigma_X$, and $d$ is the number of MNF principal components.

The SNR of each $y_i$ can then be described as

$$\mathrm{SNR}(y_i) = \frac{\mathrm{Var}(a_i^{T} S)}{\mathrm{Var}(a_i^{T} N)},$$

where $\mathrm{Var}(\cdot)$ computes the variance and $y_i = a_i^{T} X$ is the $i$th component in $Y$. We can then obtain $A$ by solving the following problem:

$$A = \arg\max_{A} \sum_{i=1}^{d} \frac{a_i^{T} \Sigma_S a_i}{a_i^{T} \Sigma_N a_i}.$$
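As an illustration of the transform described above, the following Python sketch (our own minimal example, not the authors' MATLAB implementation) estimates the noise covariance with the common shift-difference heuristic and obtains the MNF components from the corresponding generalized eigenvalue problem; the band count and the noise estimator are assumptions made for illustration only.

```python
# Minimal MNF sketch (illustrative, not the authors' code): the noise term is
# estimated with the shift-difference heuristic and the projection vectors come
# from the generalized eigenproblem  Sigma_X a = lambda * Sigma_N a.
import numpy as np
from scipy.linalg import eigh

def mnf(cube, n_components=20):
    """cube: HSI array of shape (rows, cols, bands); returns (rows, cols, n_components)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)

    # Shift-difference noise estimate: differences of horizontally adjacent pixels.
    diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands).astype(np.float64)
    sigma_n = np.cov(diff, rowvar=False) / 2.0      # noise covariance Sigma_N
    sigma_x = np.cov(X, rowvar=False)               # total covariance Sigma_X

    # Generalized eigenproblem; eigh returns eigenvalues in ascending order,
    # so the last n_components eigenvectors have the largest SNR.
    _, eigvecs = eigh(sigma_x, sigma_n)
    A = eigvecs[:, ::-1][:, :n_components]

    return (X @ A).reshape(rows, cols, n_components)
```

Here n_components plays the role of the number of retained MNF bands, which Section 4.2 varies between 3 and 35.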

2.2. LBP

For each pixel in the HSI, a P-bit binary code and a new matrix with the new value (binary-to-decimal value) are generated by thresholding adjacent pixel values. The LBP operator can be calculated by

$$\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^{p},$$

where $P$ is the number of neighbors represented on a circle with radius $R$, and $g_c$ and $g_p$, respectively, represent the gray level intensity values of the center pixel and the $p$th neighbor. The binary threshold function is described as

$$s(x) = \begin{cases} 1, & x \geq 0, \\ 0, & x < 0. \end{cases}$$

Taking the 10th band of the Indian Pines HSI as an example, the procedure of LBP is shown in Figure 1. As shown in Figure 1, for a given center pixel in a 3 × 3 window, binary labels ("0" or "1") are assigned to the adjacent pixels according to whether their gray values are greater than or equal to that of the center pixel. Starting at the top left corner, all binary labels are joined clockwise to produce an 8-bit binary number, called the LBP code. The LBP operator has significant rotation invariance and gray-scale invariance [46], and it can be effectively applied to HSI classification [47].
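The following sketch shows one way to turn LBP codes into per-pixel spatial feature vectors by histogramming the codes over a local window, which is how we read the multiscale step in Section 3; the uniform mapping, the window sizes, and the normalization are illustrative assumptions rather than the authors' exact settings.

```python
# LBP spatial features for one MNF band (illustrative sketch): compute LBP codes
# once, then describe every pixel by the normalized histogram of codes inside a
# w x w window centered on it; stacking several w values gives multiscale features.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_patch_features(band, P=8, R=1.0, w=21):
    """band: 2-D array (one MNF component); returns (rows, cols, P + 2) histograms."""
    codes = local_binary_pattern(band, P, R, method="uniform")  # codes in 0..P+1
    n_bins = P + 2
    rows, cols = band.shape
    pad = w // 2
    padded = np.pad(codes, pad, mode="edge")
    feats = np.zeros((rows, cols, n_bins))
    for i in range(rows):
        for j in range(cols):
            hist, _ = np.histogram(padded[i:i + w, j:j + w],
                                   bins=n_bins, range=(0, n_bins))
            feats[i, j] = hist / hist.sum()                     # normalized local histogram
    return feats

def multiscale_lbp(band, sizes=(11, 21, 31)):
    """Stack LBP histograms computed at several window sizes (multiscale features)."""
    return np.concatenate([lbp_patch_features(band, w=w) for w in sizes], axis=-1)
```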

2.3. RMG

Given that the HSI dataset $X = \{x_1, \ldots, x_n\}$ comprises labeled data $X_L$ and unlabeled data $X_U$, a weighted graph can be obtained. The vertices of the graph consist of $X_L$ and $X_U$. The weighted edges, which can be defined as a matrix $W$, represent the similarities between associated nodes. For a $c$-class classification problem, the task can be defined as a quadratic optimization problem [43]:

$$\min_{F} \; \mathrm{tr}(F^{T} L F) + \mathrm{tr}\big((F - Y)^{T} \Lambda (F - Y)\big),$$

where $\mathrm{tr}(\cdot)$ is the trace function and $F \in \mathbb{R}^{n \times c}$ is the prediction matrix to be learned. The $i$th row $y_i$ of the label matrix $Y$ is the indicator vector corresponding to $x_i$ if $x_i$ is labeled and is the 0 vector otherwise. In addition, we can obtain $F$ by the following equation:

$$F = (L + \Lambda)^{-1} \Lambda Y.$$

Each $x_i$ can then be classified to class $j$ if $F_{ij}$ is the largest element in the $i$th row of $F$, which can be described as $\hat{y}_i = \arg\max_{j \leq c} F_{ij}$. $\Lambda$ is a diagonal matrix, and its elements can be calculated as

$$\Lambda_{ii} = \begin{cases} \gamma_l, & x_i \in X_L, \\ \gamma_u, & x_i \in X_U, \end{cases}$$

where $\gamma_l$ and $\gamma_u$ are two parameters.

The popular choice for $L$ is the graph Laplacian [48], which is defined as

$$L = D - W,$$

where $W$ is the weight matrix of the graph, which can be formulated by a Gaussian kernel as

$$W_{ij} = \exp\!\left(-\frac{\|x_i - x_j\|^{2}}{2\sigma^{2}}\right),$$

where $\sigma$ is the kernel width parameter to be adjusted, and the diagonal matrix $D$ contains the row sums of $W$, i.e., $D_{ii} = \sum_{j} W_{ij}$.

However, it is difficult to discover the neighborhood structure inherent in the graph and to learn a proper compact representation automatically. To solve this problem, researchers have proposed the anchor graph algorithm (AGA) [49] and the multiview anchor graph algorithm [50]. The AGA extrapolates the Laplacian eigenvectors of the graph to eigenfunctions, allowing constant-time hashing of new data points. A hierarchical threshold learning method is then used to make each eigenfunction generate more than 1 bit, improving the search accuracy. The label prediction function in the AGA can be expressed as

$$f(x_i) = \sum_{k=1}^{m} Z_{ik}\, f(u_k),$$

where $Z_{ik}$ is the data-adaptive weight and each $u_k$ ($k = 1, \ldots, m$) is an anchor point. Through this formula, the solution space of the unknown labels is reduced from a larger space to a smaller one. The centers of a K-means clustering are selected as anchor points because these centers provide a powerful representation that covers the entire dataset. In this paper, we use Local Anchor Embedding (LAE) [49] to calculate the data-adaptive weights $Z$. Figure 2 shows the flowchart of the RMG.
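To make the label propagation step concrete, the sketch below implements the closed-form solution reconstructed above on a dense Gaussian-kernel graph. It is a didactic stand-in: the actual RMG builds many random anchor graphs precisely to avoid the quadratic cost of this dense formulation, and the parameter values here are illustrative.

```python
# Dense graph-based semisupervised classification sketch (didactic stand-in for
# the anchor-based RMG): build W with a Gaussian kernel, form L = D - W, and
# solve F = (L + Lambda)^(-1) Lambda Y, then take the row-wise argmax.
import numpy as np

def graph_ssl(X, y, n_classes, labeled_mask, sigma=1.0, gamma_l=100.0, gamma_u=1e-6):
    """X: (n, d) features; y: (n,) labels in {0, ..., n_classes-1}, valid where labeled_mask is True."""
    n = X.shape[0]

    # Gaussian-kernel weight matrix and graph Laplacian.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W

    # One-hot label matrix Y (zero rows for unlabeled samples).
    Y = np.zeros((n, n_classes))
    Y[labeled_mask, y[labeled_mask]] = 1.0

    # Diagonal weighting Lambda: large weight on labeled, small on unlabeled samples.
    Lam = np.diag(np.where(labeled_mask, gamma_l, gamma_u))

    F = np.linalg.solve(L + Lam, Lam @ Y)   # closed-form prediction matrix
    return F.argmax(axis=1)                 # predicted class of each sample
```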

3. Proposed Method

The SS-MNF-ARMG framework is shown in Figure 3. Because of its adaptive properties, the optimal parameters can be determined without artificial auxiliaries. The framework comprises three main modules:
(1) Preprocessing: by applying MNF to the HSI, the noise can be removed effectively during dimension reduction, which alleviates the curse of dimensionality.
(2) Feature extraction: through LBP, spatial vectors corresponding to different neighborhood sizes are obtained. Each of these spatial vectors is stacked with the spectral vector, yielding a series of spectral-spatial stacked feature vectors.
(3) Classification and decision fusion: a set of necessary parameters for the RMG is established, and the spectral-spatial feature information is fed into the RMG. A set of classification results with different accuracies is obtained, from which the optimal classification result is selected through decision fusion. In addition, by injecting randomness into the graphs of the RMG, overfitting due to the limited training samples can be avoided.

The proposed SS-MNF-ARMG algorithm is summarized in Algorithm 1.

Input: the original HSI set $X$; the training set; the test set; the number of spectral bands after dimension reduction $d$; the patch size for FE, $w \times w$, where $w$ is odd;
Output: the best classification result for all test samples;
(1) for each candidate number of bands $d$ do
(2) Obtain the dimension-reduced HSI by using MNF;
(3) Extract the spectral vector of each pixel from the dimension-reduced HSI;
(4) for each patch size $w$ do
(5) Calculate the spatial vector by using LBP;
(6) Obtain the spectral-spatial vector by stacking the spectral vector and the spatial vector;
(7) Obtain the best OA (overall accuracy) by voting, together with the corresponding confusion matrix;
(8) end for
(9) Obtain the best OA over all patch sizes by decision fusion, together with the corresponding confusion matrix;
(10) end for
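A compact Python reading of Algorithm 1 is sketched below. It reuses the mnf() and lbp_patch_features() helpers sketched in Section 2 and, since the authors' RMG code is not reproduced here, substitutes scikit-learn's LabelSpreading as the graph-based semisupervised classifier; the band count, patch sizes, and neighbor count are illustrative assumptions.

```python
# Sketch of the SS-MNF-ARMG pipeline in Algorithm 1 (our reading, not the authors'
# implementation). LabelSpreading stands in for the adaptive random multigraph.
import numpy as np
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import accuracy_score

def ss_mnf_pipeline(cube, labels, train_mask, test_mask,
                    n_bands=20, patch_sizes=(11, 21, 31)):
    """cube: (rows, cols, bands); labels: (rows, cols) integer class map."""
    rows, cols, _ = cube.shape
    reduced = mnf(cube, n_components=n_bands)              # MNF dimension reduction (step 2)
    spectral = reduced.reshape(-1, n_bands)                 # spectral vectors (step 3)

    best_oa, best_pred = -1.0, None
    for w in patch_sizes:                                   # loop over patch sizes (steps 4-8)
        spatial = np.concatenate(
            [lbp_patch_features(reduced[:, :, b], w=w) for b in range(n_bands)], axis=-1)
        features = np.hstack([spectral, spatial.reshape(rows * cols, -1)])  # stacking (step 6)

        y = np.full(rows * cols, -1)                        # -1 marks unlabeled pixels
        y[train_mask.ravel()] = labels.ravel()[train_mask.ravel()]
        clf = LabelSpreading(kernel="knn", n_neighbors=10).fit(features, y)
        pred = clf.transduction_

        oa = accuracy_score(labels.ravel()[test_mask.ravel()], pred[test_mask.ravel()])
        if oa > best_oa:                                    # keep the best result (steps 7 and 9)
            best_oa, best_pred = oa, pred
    return best_pred.reshape(rows, cols), best_oa
```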

4. Experimental Results and Analysis

4.1. Experimental Datasets

Three hyperspectral datasets were employed to evaluate the performance of the SS-MNF-ARMG.

Indian Pines: this scene was gathered by the AVIRIS sensor over the Indian Pines test site in northwestern Indiana and consists of 145 × 145 pixels and 224 spectral reflectance bands in the wavelength range 0.4–2.5 μm. The scene, whose ground truth includes 16 classes, covers roughly two-thirds agriculture and one-third forest or other natural perennial vegetation. The number of bands was reduced to 200 by removing the 24 water absorption bands.

Pavia University: this scene was acquired by the ROSIS sensor during a flight campaign over Pavia, northern Italy, and has 103 spectral bands. It is a 610 × 340 pixel image containing nine different ground object classes, with a geometric resolution of 1.3 m.

Salinas: this scene was collected by the 224-band AVIRIS sensor over the agricultural area of Salinas Valley, California, and is characterized by high spatial resolution (3.7 m pixels). After discarding 20 water absorption bands, the size of this data image was 512 × 217, with 204 bands. Salinas ground truth contains 16 classes, including vegetables, bare soils, and vineyard fields.
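For reference, the three scenes can be loaded from the .mat files distributed on the GIC page cited in the Data Availability section; the file and variable names below are the commonly used ones and should be treated as assumptions, since mirrors may differ.

```python
# Loading sketch for the benchmark scenes (file and variable names assumed from
# the commonly distributed GIC .mat files; adjust them to match your download).
from scipy.io import loadmat

def load_scene(data_path, gt_path, data_key, gt_key):
    cube = loadmat(data_path)[data_key].astype(float)   # (rows, cols, bands)
    gt = loadmat(gt_path)[gt_key]                        # (rows, cols); 0 = unlabeled background
    return cube, gt

# Example (Indian Pines with the water-absorption bands already removed):
# cube, gt = load_scene("Indian_pines_corrected.mat", "Indian_pines_gt.mat",
#                       "indian_pines_corrected", "indian_pines_gt")
```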

4.2. Analysis of Experimental Parameters

Several parameters can affect the classification accuracy, such as the number of spectral bands, the patch size, the number of sampling points, the number of graphs, and the number of features. Owing to the adaptive properties of the proposed SS-MNF-ARMG, the optimal values of the patch size, the number of sampling points, the number of graphs, and the number of features can be determined without artificial auxiliaries.

Based on existing results [44], the initial values of these parameters were set accordingly, with the number of features expressed in terms of the dimensionality of the spectral-spatial vector.

In this study, we varied the number of spectral bands from 3 to 35 to evaluate its impact on the three HSI datasets. We conducted the experiment five times, and the best experimental results are shown in Figure 4.

It can be observed that, on the Indian Pines and Salinas datasets, the overall classification accuracy (OA) shows a trend of steady growth; in other words, increasing the number of spectral bands has a positive effect on the classification performance. On the Pavia University dataset in particular, there is a sharp increase when the number of bands exceeds 15. In general, the OA of the proposed method is above 78% for all three HSI datasets. On the Salinas dataset especially, the OA is not less than 92% even with a small number of spectral bands, demonstrating the robustness of our method.
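The band-number sweep behind Figure 4 can be reproduced along the following lines, building on the ss_mnf_pipeline() sketch from Section 3 and the loading sketch from Section 4.1; redrawing 10 training pixels per class for each of the five runs is our assumption about how the repetitions were performed.

```python
# Sketch of the experiment behind Figure 4: for each number of retained MNF bands,
# redraw 10 training pixels per class, run the pipeline five times, keep the best OA.
import numpy as np

rng = np.random.default_rng(0)

def sample_masks(gt, n_per_class=10):
    """Randomly pick n_per_class training pixels from every labeled class in gt."""
    train = np.zeros_like(gt, dtype=bool)
    for c in np.unique(gt[gt > 0]):
        idx = np.argwhere(gt == c)
        picked = idx[rng.choice(len(idx), n_per_class, replace=False)]
        train[picked[:, 0], picked[:, 1]] = True
    return train, (gt > 0) & ~train      # training mask, testing mask

best_oa_per_d = {}
for d in range(3, 36):                   # number of spectral bands from 3 to 35
    oas = []
    for _ in range(5):                   # five runs per setting
        train_mask, test_mask = sample_masks(gt)
        oas.append(ss_mnf_pipeline(cube, gt, train_mask, test_mask, n_bands=d)[1])
    best_oa_per_d[d] = max(oas)
```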

4.3. Comparison and Analysis

The proposed SS-MNF-ARMG method was compared with several state-of-the-art spectral-spatial fusion methods: the Pixon-based classifier [19], PCA-SPCA-2D-SSA [14], R-VCANet [20], RN-FSC [31], iCapsNet [30], RPNet [29], and MBFSDA [21]. A comparison of these algorithms is given in Table 1.


Table 1: Comparison of the considered algorithms.

Group | Subgroup | Method | Advantages | Difficulties
Segmentation-based methods | Object-based classification | Pixon-based classifier | The noise pixels in the classification map are removed and a suitable land cover smoothing map is obtained | The spectral characteristics of abnormal pixels are often similar to the background, resulting in the loss of information when the abnormal pixels are removed
Feature fusion | Features stacking | PCA-SPCA-2D-SSA | Combined with appropriate spatial features, the classification efficiency of the algorithm is higher | Complex structure
 | Joint spectral-spatial FE | R-VCANet | The use of high correlation among SSI | Complex structure
 | Meta-learning-based classifiers | RN-FSC | Less sensitive to the number of training samples | Limited generalization ability for large-scale hyperspectral datasets
 | Deep learning-based classifiers | iCapsNet | Well-initialized shallow layers | Complex structure
 | | RPNet | FE and classification are carried out under a unified framework | Insufficient training samples will lead to overfitting
Decision fusion | Decision fusion | MBFSDA | Combination of supplemental information and several advanced classifiers | Selection of suitable feature extractor
 | | Our method | (i) Self-adaptability of parameters; (ii) leveraging tiny labeled samples | Complex structure

To quantitatively compare the classification performance of the methods shown in Table 1, we used the average classification accuracy (AA), the overall classification accuracy (OA), and the kappa coefficient (kappa); a small sketch of how these metrics can be computed from a confusion matrix is given after the list below. To demonstrate the performance of SS-MNF-ARMG with a limited number of training samples, we randomly selected 10 samples from each class as training samples. Tables 2–4 show the ground-truth classes and their respective training and testing numbers for the three HSI datasets. Tables 5–7 summarize the experimental results for the three HSI datasets, from which the following conclusions can be drawn:
(1) The results on the Indian Pines dataset show that almost all algorithms are effective. We can observe from Table 5 that the proposed SS-MNF-ARMG achieves a 3.78–28.6% advantage over the other methods in OA. In addition, for the classes that other methods do not recognize accurately, SS-MNF-ARMG obtains better results, such as objects 1#, 2#, 3#, 5#, 6#, and 7# in Table 5.
(2) The results on the Pavia University dataset demonstrate that our method has advantages over some state-of-the-art methods. As a decision fusion-based algorithm, the proposed SS-MNF-ARMG surpasses MBFSDA by 3.87% in OA. In addition, for the classes that other methods do not recognize accurately, SS-MNF-ARMG obtains better results, such as objects 2#, 3#, 6#, and 8# in Table 6.
(3) The results on the Salinas dataset show that all methods achieve similar AA values, but SS-MNF-ARMG has a competitive advantage in OA and kappa. From Table 7, we can observe that, compared with R-VCANet, SS-MNF-ARMG obtains a 0.1% improvement in OA, a 3.13% improvement in AA, and a 7.31% improvement in kappa. Furthermore, the proposed SS-MNF-ARMG surpasses iCapsNet by 11.01% in OA. In addition, for the classes that other methods do not recognize accurately, SS-MNF-ARMG obtains better results, such as objects 3#, 7#, 8#, 9#, 15#, and 16# in Table 7.
(4) In general, the decision fusion-based methods outperform the segmentation-based and feature fusion-based methods. First and foremost, the LBP features improve the performance of SS-MNF-ARMG, and the application of MNF reduces the HSI data dimension while controlling the noise in the HSI data. In addition, the randomness in SS-MNF-ARMG can be regarded as a regularization technique, which alleviates overfitting. Finally, benefitting from the semisupervised learning framework, the classification accuracy is relatively satisfactory even with only a few labeled samples.
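For completeness, the three metrics can be computed from a class confusion matrix as in the short sketch below (standard definitions, not code taken from the paper).

```python
# OA, AA, and kappa from a confusion matrix C (rows = true classes, columns =
# predicted classes); these are the standard definitions of the three metrics.
import numpy as np

def oa_aa_kappa(C):
    C = np.asarray(C, dtype=float)
    total = C.sum()
    oa = np.trace(C) / total                                  # overall accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))                  # mean per-class accuracy
    pe = np.sum(C.sum(axis=0) * C.sum(axis=1)) / total ** 2   # expected chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```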


Table 2: Training and testing samples for the Indian Pines dataset.

No. | Name of class | Training | Testing
1 | Corn-notill | 10 | 1418
2 | Corn-mintill | 10 | 820
3 | Grass-pasture | 10 | 473
4 | Grass-trees | 10 | 720
5 | Hay-windrowed | 10 | 468
6 | Soybean-notill | 10 | 962
7 | Soybean-mintill | 10 | 2445
8 | Soybean-clean | 10 | 583
9 | Woods | 10 | 1255
10 | Bldg-Grass-Tree-Drives | 10 | 376


Table 3: Training and testing samples for the Pavia University dataset.

No. | Name of class | Training | Testing
1 | Asphalt | 10 | 6621
2 | Meadows | 10 | 18639
3 | Gravel | 10 | 2089
4 | Trees | 10 | 3054
5 | Painted metal sheets | 10 | 1335
6 | Bare Soil | 10 | 5019
7 | Bitumen | 10 | 1320
8 | Self-Blocking Bricks | 10 | 3672
9 | Shadows | 10 | 937


Table 4: Training and testing samples for the Salinas dataset.

No. | Name of class | Training | Testing
1 | Brocoli_green_weeds_1 | 10 | 1999
2 | Brocoli_green_weeds_2 | 10 | 3716
3 | Fallow | 10 | 1966
4 | Fallow_rough_plow | 10 | 1384
5 | Fallow_smooth | 10 | 2668
6 | Stubble | 10 | 3949
7 | Celery | 10 | 3569
8 | Grapes_untrained | 10 | 11261
9 | Soil_vineyard_develop | 10 | 6193
10 | Corn_senesced_green_weeds | 10 | 3268
11 | Lettuce_romaine_4weeks | 10 | 1058
12 | Lettuce_romaine_5weeks | 10 | 1917
13 | Lettuce_romaine_6weeks | 10 | 906
14 | Lettuce_romaine_7weeks | 10 | 1060
15 | Vineyard_untrained | 10 | 7258
16 | Vineyard_vertical_trellis | 10 | 1797


Table 5: Classification accuracies (%) for the Indian Pines dataset.

No. | Pixon-based | PCA-SPCA-2D-SSA | R-VCANet | RN-FSC | iCapsNet | RPNet | MBFSDA | SS-MNF-ARMG
1 | 41.42 | 61.55 | 54.94 | 72.29 | 64.71 | 60.32 | 70.57 | 75.38
2 | 33.93 | 86.49 | 74.09 | 78.97 | 95.56 | 73.38 | 81.06 | 95.6
3 | 71.43 | 99.32 | 83.54 | 90.59 | 82.09 | 93.56 | 81.09 | 100.00
4 | 23.16 | 96.05 | 94.78 | 89.82 | 99.06 | 84.2 | 82.06 | 93.2
5 | 87.73 | 99.37 | 96.13 | 99.79 | 99.59 | 96.73 | 99.39 | 99.79
6 | 65.81 | 87.97 | 75.55 | 86.61 | 73.35 | 86.47 | 77.48 | 93.53
7 | 76.09 | 86.28 | 61.43 | 89.94 | 88.01 | 81.48 | 66.37 | 92.43
8 | 22.96 | 97.01 | 80.64 | 98.25 | 74.59 | 60.1 | 73.94 | 97.69
9 | 95.44 | 98.85 | 88.15 | 82.61 | 98.84 | 58.73 | 90.8 | 97.63
10 | 91.58 | 93.52 | 82.63 | 94.82 | 96.84 | 95.79 | 59.21 | 94.82
AA | 60.96 | 90.64 | 79.19 | 88.37 | 87.27 | 79.08 | 78.2 | 93.01
OA | 62.45 | 87.27 | 86.28 | 86.16 | 85.83 | 75.94 | 76.42 | 91.05
Kappa | 56.7 | 85.3 | 72.05 | 83.97 | 83.63 | 72.32 | 72.95 | 89.61

Bold values represent the best results among these methods.

Table 6: Classification accuracies (%) for the Pavia University dataset.

No. | Pixon-based | PCA-SPCA-2D-SSA | R-VCANet | RN-FSC | iCapsNet | RPNet | MBFSDA | SS-MNF-ARMG
1 | 45.20 | 94.83 | 79.09 | 87.28 | 65.84 | 83.52 | 86.68 | 70.83
2 | 82.00 | 81.91 | 71.76 | 84.33 | 84.55 | 54.19 | 89.22 | 96.78
3 | 90.23 | 85.47 | 85.45 | 90.42 | 29.91 | 70.70 | 87.28 | 96.24
4 | 74.05 | 87.99 | 94.28 | 78.09 | 82.70 | 91.64 | 79.34 | 66.42
5 | 85.13 | 97.55 | 99.98 | 99.56 | 100.00 | 100.00 | 99.33 | 96.51
6 | 18.55 | 71.55 | 58.44 | 63.25 | 40.22 | 65.30 | 89.52 | 100.00
7 | 92.48 | 98.77 | 94.52 | 52.09 | 21.40 | 92.41 | 88.12 | 95.49
8 | 84.17 | 79.90 | 73.70 | 84.81 | 57.19 | 74.93 | 68.88 | 95.11
9 | 97.57 | 96.30 | 99.82 | 95.94 | 0.00 | 93.24 | 82.89 | 77.51
AA | 74.38 | 88.25 | 84.12 | 81.94 | 66.63 | 80.66 | 85.70 | 88.32
OA | 69.63 | 84.50 | 83.23 | 81.75 | 53.54 | 68.81 | 86.45 | 90.32
Kappa | 57.72 | 79.86 | 67.97 | 75.84 | 56.07 | 61.78 | 82.31 | 87.26

Bold values represent the best results among these methods.

Table 7: Classification accuracies (%) for the Salinas dataset.

No. | Pixon-based | PCA-SPCA-2D-SSA | R-VCANet | RN-FSC | iCapsNet | RPNet | MBFSDA | SS-MNF-ARMG
1 | 99.90 | 96.30 | 96.95 | 99.26 | 93.41 | 98.01 | 96.95 | 97.11
2 | 99.38 | 88.94 | 99.06 | 93.21 | 67.16 | 99.19 | 97.65 | 98.98
3 | 96.20 | 92.09 | 97.31 | 97.87 | 86.49 | 96.91 | 99.60 | 100.00
4 | 91.61 | 90.23 | 99.46 | 85.50 | 90.36 | 99.21 | 100.00 | 86.21
5 | 86.89 | 88.94 | 97.35 | 84.81 | 90.39 | 98.36 | 96.30 | 88.67
6 | 88.79 | 96.08 | 99.87 | 99.35 | 99.02 | 99.60 | 99.14 | 89.47
7 | 97.32 | 91.36 | 95.26 | 96.65 | 95.07 | 96.42 | 93.55 | 97.65
8 | 87.20 | 96.44 | 64.01 | 66.24 | 82.12 | 76.01 | 78.33 | 96.66
9 | 99.84 | 85.97 | 97.79 | 97.34 | 98.63 | 96.58 | 97.81 | 100.00
10 | 46.34 | 86.38 | 81.38 | 82.66 | 93.13 | 84.66 | 94.23 | 87.22
11 | 52.90 | 94.17 | 93.17 | 73.96 | 60.09 | 96.25 | 97.38 | 88.30
12 | 30.10 | 80.49 | 96.48 | 96.84 | 94.78 | 97.87 | 99.27 | 97.18
13 | 89.74 | 96.17 | 98.70 | 85.39 | 49.51 | 93.56 | 98.69 | 86.42
14 | 84.21 | 99.30 | 90.35 | 90.39 | 76.76 | 97.76 | 75.70 | 95.79
15 | 94.79 | 86.68 | 56.92 | 65.85 | 49.76 | 66.26 | 80.38 | 95.36
16 | 92.09 | 98.17 | 90.79 | 86.89 | 96.89 | 97.57 | 99.89 | 100.00
AA | 83.58 | 91.43 | 90.93 | 87.64 | 77.26 | 93.39 | 94.05 | 94.06
OA | 87.15 | 91.73 | 93.56 | 93.12 | 82.72 | 88.16 | 90.96 | 93.73
Kappa | 85.63 | 90.90 | 85.72 | 85.44 | 74.72 | 86.83 | 89.95 | 93.03

Bold values represent the best results among these methods.

Our experiments were performed using MATLAB 2018b on a computer with an Intel Core(TM) i5-4300M 2.60 GHz CPU, 16 GB of memory, and a 64-bit Windows 7 system. For the three real HSI datasets, the runtime of our algorithm ranged from several minutes to several hours; notably, this was longer than the runtimes of the other methods reviewed in this study.

5. Conclusions and Further Research

In this study, we developed a novel decision fusion method for HSI classification. In the proposed SS-MNF-ARMG, MNF and multiscale LBP were integrated to extract local SSFs. On the one hand, MNF helped reduce the dimension of the HSI, remove the noise, and extract the spectral features from the HSI data. On the other hand, multiscale LBP was applied to the MNF dimension-reduced images to derive spatial features at different scales, which were further fused with the MNF spectral features to form the SSFs. Our experimental results demonstrate that, compared with several state-of-the-art spectral-spatial classification methods, SS-MNF-ARMG can achieve higher classification accuracy with limited training samples. The method is effective for distinguishing different land cover types. In addition, a set of optimal parameters for different hyperspectral data can be obtained automatically.

Although the SS-MNF-ARMG algorithm has provided promising results, its classification accuracy still varies across datasets, and further research could attempt to improve the generalization ability of our method. Due to human ecological destruction or natural disasters (e.g., earthquakes), the vegetation in some areas has changed significantly, and monitoring vegetation restoration in these areas is of substantial importance. Therefore, we plan to investigate the application of HSI classification to vegetation restoration monitoring.

Data Availability

The data used to support the findings of this study are available at GIC (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes).

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant nos. 61703060, 61802036, 61701048, and 61873305, the Sichuan Science and Technology Program under Grant no. 21YYJC0469, the Project funded by China Postdoctoral Science Foundation under Grant no. 2020M683274, the Fundamental Research Funds of the Central Universities, Southwest Minzu University (2019NQN07), the Opening Fund of Geomathematics Key Laboratory of Sichuan Province (scsxdz2020zd01 and csxdz201710), and the Key Laboratory of Pattern Recognition and Intelligent Information Processing of Sichuan (MSSB-2018-0 and MSSB-2020-9).

References

1. T. Matsuki, N. Yokoya, and A. Iwasaki, "Hyperspectral tree species classification of Japanese complex mixed forest with the aid of LiDAR data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 5, pp. 2177–2187, 2015.
2. P. Hardin and A. Hardin, "Hyperspectral remote sensing of urban areas," Geography Compass, vol. 7, no. 1, pp. 7–21, 2013.
3. Y. Xie, F. Miao, K. Zhou et al., "HsgNet: a road extraction network based on global perception of high-order spatial information," ISPRS International Journal of Geo-Information, vol. 8, no. 12, p. 571, 2019.
4. Z. Pan, J. Liu, L. Ma et al., "Research on hyperspectral identification of altered minerals in Yemaquan West Gold Field, Xinjiang," Sustainability, vol. 11, no. 2, p. 428, 2019.
5. X. Kang, X. Xiang, S. Li et al., "PCA-based edge-preserving features for hyperspectral image classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 12, pp. 7140–7151, 2017.
6. M. B. Stuart, A. J. S. McGonigle, and J. R. Willmott, "Hyperspectral imaging in environmental monitoring: a review of recent developments and technological advances in compact field deployable systems," Sensors, vol. 19, no. 14, p. 3071, 2019.
7. X. Briottet, Y. Boucher, A. Dimmeler et al., "Military applications of hyperspectral imagery," Targets and Backgrounds XII: Characterization and Representation, International Society for Optics and Photonics, vol. 6239, p. 62390B, 2006.
8. B. Hörig, F. Kühn, F. Oschütz et al., "HyMap hyperspectral remote sensing to detect hydrocarbons," International Journal of Remote Sensing, vol. 22, no. 8, pp. 1413–1422, 2001.
9. C. Butz, M. Grosjean, D. Fischer et al., "Hyperspectral imaging spectroscopy: a promising method for the biogeochemical analysis of lake sediments," Journal of Applied Remote Sensing, vol. 9, no. 1, Article ID 096031, 2015.
10. K. Jacq, Y. Perrette, B. Fanget et al., "Hyperspectral imaging for lake sediment cores analysis," in Proceedings of the IPA-IAL Stockholm 2018 Conference, pp. 735–743, Stockholm, Sweden, June 2018.
11. E. C. Koeniguer, F. Janez, and J. M. Nicolas, "Change detection in SAR time-series based on the coefficient of variation," 2019, https://arxiv.org/abs/1904.11335.
12. M. Zhang, J. Zhou, K. A. Sudduth et al., "Estimation of maize yield and effects of variable-rate nitrogen application using UAV-based RGB imagery," Biosystems Engineering, vol. 189, pp. 24–35, 2020.
13. M. P. Uddin, M. A. Mamun, M. I. Afjal et al., "Information-theoretic feature selection with segmentation-based folded principal component analysis (PCA) for hyperspectral image classification," International Journal of Remote Sensing, vol. 42, no. 1, pp. 286–321, 2021.
14. H. Fu, G. Sun, J. Ren et al., "Fusion of PCA and segmented-PCA domain multiscale 2-D-SSA for effective spectral-spatial feature extraction and data classification in hyperspectral imagery," IEEE Transactions on Geoscience and Remote Sensing, 2020.
15. C. Ruichun, W. Lu, and W. Maozhi, "Application of FastICA in mineral information extraction from hyperspectral remote sensing data," Remote Sensing for Land & Resources, vol. 25, no. 4, pp. 129–132, 2013.
16. A. A. Green, M. Berman, P. Switzer et al., "A transformation for ordering multispectral data in terms of image quality with implications for noise removal," IEEE Transactions on Geoscience and Remote Sensing, vol. 26, no. 1, pp. 65–74, 1988.
17. W. Li, S. Prasad, J. E. Fowler et al., "Locality-preserving dimensionality reduction and classification for hyperspectral image analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 4, pp. 1185–1198, 2011.
18. W. Zhao and S. Du, "Spectral-spatial feature extraction for hyperspectral image classification: a dimension reduction and deep learning approach," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 8, pp. 4544–4554, 2016.
19. A. Zehtabian and H. Ghassemian, "Automatic object-based hyperspectral image classification using complex diffusions and a new distance metric," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 7, pp. 4106–4114, 2016.
20. B. Pan, Z. Shi, and X. Xu, "R-VCANet: a new deep-learning-based hyperspectral image classification method," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 5, pp. 1975–1986, 2017.
21. M. Imani and H. Ghassemian, "Discriminant analysis in morphological feature space for high-dimensional image spatial-spectral classification," Journal of Applied Remote Sensing, vol. 12, no. 1, Article ID 016024, 2018.
22. A. Sellami, A. B. Abbes, V. Barra et al., "Fused 3-D spectral-spatial deep neural networks and spectral clustering for hyperspectral image classification," Pattern Recognition Letters, vol. 138, pp. 594–600, 2020.
23. M. Imani and H. Ghassemian, "An overview on spectral and spatial information fusion for hyperspectral image classification: current trends and challenges," Information Fusion, vol. 59, pp. 59–83, 2020.
24. X. Huang and L. Zhang, "An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 1, pp. 257–272, 2012.
25. K. Y. Peerbhay, O. Mutanga, and R. Ismail, "Random forests unsupervised classification: the detection and mapping of solanum mauritianum infestations in plantation forestry using hyperspectral data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, pp. 3107–3122, 2015.
26. K. Shi, J. Wang, Y. Tang et al., "Reliable asynchronous sampled-data filtering of T–S fuzzy uncertain delayed neural networks with stochastic switched topologies," Fuzzy Sets and Systems, vol. 381, pp. 1–25, 2020.
27. H. Han, G. Wang, M. Wang et al., "Hyperspectral unmixing via nonconvex sparse and low-rank constraint," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 5704–5718, 2020.
28. W. Dong, F. Fu, G. Shi et al., "Hyperspectral image super-resolution via non-negative structured sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 5, pp. 2337–2352, 2016.
29. Y. Xu, B. Du, F. Zhang et al., "Hyperspectral image classification via a random patches network," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 142, pp. 344–357, 2018.
30. J. Yin, S. Li, H. Zhu et al., "Hyperspectral image classification using CapsNet with well-initialized shallow layers," IEEE Geoscience and Remote Sensing Letters, vol. 16, no. 7, pp. 1095–1099, 2019.
31. K. Gao, B. Liu, X. Yu et al., "Deep relation network for hyperspectral image few-shot classification," Remote Sensing, vol. 12, no. 6, p. 923, 2020.
32. P. Ghamisi, R. Souza, J. A. Benediktsson et al., "Hyperspectral data classification using extended extinction profiles," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 11, pp. 1641–1645, 2016.
33. L. Fang, N. He, S. Li et al., "A new spatial-spectral feature extraction method for hyperspectral images using local covariance matrix representation," IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 6, pp. 3534–3546, 2018.
34. C. Ge, Q. Du, W. Li et al., "Hyperspectral and LiDAR data classification using kernel collaborative representation based residual fusion," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 6, pp. 1963–1973, 2019.
35. W. N. Khotimah, M. Bennamoun, F. Boussaid et al., "A high-performance spectral-spatial residual network for hyperspectral image classification with small training data," Remote Sensing, vol. 12, no. 19, p. 3137, 2020.
36. G. Hughes, "On the mean accuracy of statistical pattern recognizers," IEEE Transactions on Information Theory, vol. 14, no. 1, pp. 55–63, 1968.
37. R. Girshick, J. Donahue, T. Darrell et al., "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587, Columbus, OH, USA, June 2014.
38. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, https://arxiv.org/abs/1409.1556.
39. C. Qi, Z. Zhou, Y. Sun et al., "Feature selection and multiple kernel boosting framework based on PSO with mutation mechanism for hyperspectral classification," Neurocomputing, vol. 220, pp. 181–190, 2017.
40. X. Ceamanos, B. Waske, J. A. Benediktsson et al., "A classifier ensemble based on fusion of support vector machines for classifying hyperspectral data," International Journal of Image and Data Fusion, vol. 1, no. 4, pp. 293–307, 2010.
41. Y. Gu and H. Liu, "Sample-screening MKL method via boosting strategy for hyperspectral image classification," Neurocomputing, vol. 173, pp. 1630–1639, 2016.
42. J. Jiang, J. Ma, C. Chen et al., "SuperPCA: a superpixelwise PCA approach for unsupervised feature extraction of hyperspectral imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 8, pp. 4581–4593, 2018.
43. Q. Zhang, J. Sun, G. Zhong et al., "Random multi-graphs: a semi-supervised learning framework for classification of high dimensional data," Image and Vision Computing, vol. 60, pp. 30–37, 2017.
44. F. Gao, Q. Wang, J. Dong et al., "Spectral and spatial classification of hyperspectral images based on random multi-graphs," Remote Sensing, vol. 10, no. 8, p. 1271, 2018.
45. H. Chen, F. Miao, and X. Shen, "Hyperspectral remote sensing image classification with CNN based on quantum genetic-optimized sparse representation," IEEE Access, vol. 8, pp. 99900–99909, 2020.
46. F. Bianconi, R. Bello-Cerezo, and P. Napoletano, "Improved opponent color local binary patterns: an effective local image descriptor for color texture classification," Journal of Electronic Imaging, vol. 27, no. 1, Article ID 011002, 2017.
47. B. Tu, W. Kuang, G. Zhao et al., "Hyperspectral image classification by combining local binary pattern and joint sparse representation," International Journal of Remote Sensing, vol. 40, no. 24, pp. 9484–9500, 2019.
48. F. R. K. Chung, "Lectures on spectral graph theory," CBMS Lectures, Fresno, vol. 6, no. 92, pp. 17–21, 1996.
49. W. Liu, J. Wang, S. Kumar et al., "Hashing with graphs," in Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pp. 1–8, Bellevue, WA, USA, June–July 2011.
50. S. Kim and S. Choi, "Multi-view anchor graph hashing," in Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3123–3127, IEEE, Vancouver, Canada, May 2013.

Copyright © 2021 Eryang Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
