Special Issue: Algorithms for Multispectral and Hyperspectral Image Analysis

Research Article | Open Access | Volume 2012 | Article ID 818175

Peter Bajorski, "Non-Gaussian Linear Mixing Models for Hyperspectral Images", Journal of Electrical and Computer Engineering, vol. 2012, Article ID 818175, 8 pages, 2012. https://doi.org/10.1155/2012/818175

# Non-Gaussian Linear Mixing Models for Hyperspectral Images

Accepted: 10 Sep 2012
Published: 04 Oct 2012

#### Abstract

Modeling of hyperspectral data with non-Gaussian distributions has been gaining popularity in recent years. Such modeling mostly concentrates on attempts to describe the distribution, or its tails, of all image spectra. In this paper, we recognize that the presence of major materials in the image scene is likely to exhibit nonrandomness, and only the remaining variability due to noise, or other factors, would exhibit random behavior. Hence, we assume a linear mixing model with a structured background, and we investigate various distributional models for the error term in that model. We propose one model based on the multivariate t-distribution and another one based on independent components following an exponential power distribution. The former model does not perform well in the context of the two images investigated in this paper, one AVIRIS and one HyMap image. On the other hand, the latter model works reasonably well with the AVIRIS image and very well with the HyMap image. This paper provides the tools that researchers can use for verifying a given model to be used with a given image.

#### 1. Introduction

The following linear mixing model (with a structured background) is often used in the hyperspectral imaging literature [1–4]:

$$\mathbf{x} = \mathbf{M}\mathbf{a} + \mathbf{e}, \tag{1}$$

where $\mathbf{x}$ is a $p$-dimensional vector (e.g., of reflectance or radiance) of a pixel spectrum, $\mathbf{M}$ is a fixed $p \times k$ matrix of spectra of materials present in the image (as columns $\mathbf{m}_1, \ldots, \mathbf{m}_k$), and $\mathbf{a}$ is an unknown vector of material abundances. The error term $\mathbf{e}$ is often assumed to follow the multivariate normal (Gaussian) distribution, although other distributional assumptions can be made here.

In this paper, we address the question whether model (1) is realistic for hyperspectral images and what type of distribution should be used for the error term $\mathbf{e}$. In previous research [5], we provided a preliminary investigation of the marginal distributions of $\mathbf{e}$ modeled by the exponential power distribution. We expand this research here to model the multivariate structure of $\mathbf{e}$ and to propose some other models such as the multivariate t-distribution.

The multivariate t-distribution is a popular distribution for modeling hyperspectral data (see [6–8]). In the current literature, this distribution is mostly used for modeling the variability of background materials in a purely stochastic model without a deterministic part such as the one defined in model (1). Here we try to use that distribution for modeling the error term $\mathbf{e}$, while the major part of the image variability is explained by the deterministic part of the model.

In Section 2, we introduce our notation and show how the deterministic and the stochastic parts of the model are constructed based on the singular value decomposition (SVD). We also provide details on an AVIRIS hyperspectral image used for the numerical results presented in this paper. In Section 3, we explore potential models for the marginal distributions of the error term. In Section 4, we explore potential models for the joint multivariate distribution of the error term. The AVIRIS image is used for numerical examples in Sections 3 and 4. In Section 5, we provide an additional example using a subset of the HyMap Cooke City image. Conclusions are formulated in Section 6.

#### 2. Preliminary Considerations

In this paper, we are going to assume that the abundances of all materials sum up to 1, that is,

$$\sum_{j=1}^{k} a_j = 1, \tag{2}$$

where $a_j$ are the coordinates of the vector $\mathbf{a}$. In practice, this assumption may not necessarily be strictly fulfilled due to imperfections in the estimation of background signatures and in the linearity of the spectral mixing process. However, it is reasonable to assume that (2) is fulfilled at least approximately. We do not specifically address another reasonable assumption, that $a_j \ge 0$, but an appropriate choice of the spectral signatures in $\mathbf{M}$ should result in positive (or nearly positive) $a_j$.

We also assume that no a priori information is available about the spectral signatures in $\mathbf{M}$; that is, we are trying to “guess” all terms on the right-hand side of (1). Let us now assume that we have a hyperspectral image with $n$ pixels, so that model (1) takes the form

$$\mathbf{x}_i = \mathbf{M}\mathbf{a}_i + \mathbf{e}_i \tag{3}$$

for $i = 1, \ldots, n$. Denote the average of all pixel spectra by $\bar{\mathbf{x}}$. Let $\mathbf{M}^{*}$ be the matrix with the centered signatures $\mathbf{m}_j - \bar{\mathbf{x}}$ as columns. Because of property (2), model (3) is equivalent to the following model centered at $\bar{\mathbf{x}}$:

$$\mathbf{x}_i - \bar{\mathbf{x}} = \mathbf{M}^{*}\mathbf{a}_i + \mathbf{e}_i. \tag{4}$$

Let us denote by $\mathbf{X}$ the $n \times p$ matrix with the centered spectra $(\mathbf{x}_i - \bar{\mathbf{x}})^T$ as rows, and write its SVD as

$$\mathbf{X} = \mathbf{U}\mathbf{D}\mathbf{V}^T, \tag{5}$$

where $\mathbf{U}$ is an $n \times p$ matrix (with columns $\mathbf{u}_j$), $\mathbf{D}$ is a diagonal $p \times p$ matrix of singular values $d_1, \ldots, d_p$, and $\mathbf{V}$ is an orthogonal $p \times p$ matrix (with columns $\mathbf{v}_j$). We assume that $d_1 \ge d_2 \ge \cdots \ge d_p$. The SVD in (5) can also be written as

$$\mathbf{X} = \sum_{j=1}^{p} d_j \mathbf{u}_j \mathbf{v}_j^T. \tag{6}$$

In order to build a model of the form (4), we now want to identify $k$ out of the total of $p$ basis vectors such that $\sum_{j=1}^{k} d_j \mathbf{u}_j \mathbf{v}_j^T$ represents the deterministic part of the model (4). This deterministic part will be selected based on the idea that the major materials present in the image exhibit nonrandom behavior, while the remaining variability, due to noise, undetected small amounts of materials, or other imperfections in the model, should exhibit more random behavior. We want the remaining sum $\sum_{j=k+1}^{p} d_j \mathbf{u}_j \mathbf{v}_j^T$ to represent realizations of the error term $\mathbf{e}$. We will investigate that sum in the basis system defined by the vectors $\mathbf{v}_{k+1}, \ldots, \mathbf{v}_p$, that is, realizations of a random vector that we denote by $\boldsymbol{\varepsilon} = (\varepsilon_{k+1}, \ldots, \varepsilon_p)^T$. Note that each $n$-dimensional vector $d_j \mathbf{u}_j$ represents a sample from the distribution of $\varepsilon_j$. We want to study those samples so that an appropriate distributional assumption about the error terms can be made. We can define the standardized form $\varepsilon_j^{*} = \sqrt{n-1}\,\varepsilon_j / d_j$, with the sample of realizations given by $\sqrt{n-1}\,\mathbf{u}_j$. The vector $\boldsymbol{\varepsilon}^{*} = (\varepsilon_{k+1}^{*}, \ldots, \varepsilon_p^{*})^T$ represents the error term in the sphered, or whitened, coordinates. The marginal (uncorrelated) distributions of $\varepsilon_j^{*}$ will be studied in the next section, followed by modeling of the joint distribution in Section 4.
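The construction above can be sketched in a few lines of NumPy. This is an illustrative sketch on synthetic data, not the author's code; the array names (`X`, `eps_star`, etc.) and the toy dimensions are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image: n pixel spectra in p bands (rows of X_raw).
n, p, k = 500, 20, 3
X_raw = rng.normal(size=(n, p)) + rng.normal(size=(n, 1)) * rng.normal(size=(1, p))

# Center at the mean spectrum x_bar, as in model (4).
x_bar = X_raw.mean(axis=0)
X = X_raw - x_bar

# SVD: X = U D V^T, singular values in decreasing order.
U, d, Vt = np.linalg.svd(X, full_matrices=False)

# Deterministic part: first k rank-one terms; stochastic remainder: the rest.
X_det = U[:, :k] * d[:k] @ Vt[:k, :]
X_err = X - X_det

# Error term in the basis of v_{k+1}, ..., v_p: column j is a sample of eps_j.
eps = X_err @ Vt.T[:, k:]          # equals U[:, k:] * d[k:]

# Whitened (sphered) coordinates: eps*_j = sqrt(n-1) * eps_j / d_j,
# i.e. each column rescaled to unit sample variance.
eps_star = np.sqrt(n - 1) * eps / d[k:]

# Columns of eps_star are uncorrelated with unit sample variance.
print(np.allclose(eps_star.std(axis=0, ddof=1), 1.0))
```

Since the columns of `X` are mean-centered, each `u_j` sums to zero, so the rescaled columns have sample variance exactly 1.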

Numerical results in Sections 3 and 4 use a 100 by 100 pixel (so $n = 10{,}000$) AVIRIS urban image of Rochester, NY, USA, near the Lake Ontario shoreline (see Figure 1). The scene has a wide range of natural and man-made clutter, including a mixture of commercial/warehouse and residential neighborhoods, which adds a wide range of spectral diversity. Prior to processing, invalid bands due to atmospheric water absorption were removed, reducing the overall dimensionality to $p = 152$ spectral bands. This image was used earlier in [9, 10].

#### 3. Modeling Marginal Distributions

Here we investigate two families of distributions as models for the distributions of the $\varepsilon_j^{*}$'s. The first one is the exponential power distribution (also called the general error or generalized Gaussian distribution) with the location parameter $\mu$, the scale parameter $\sigma$, and the shape parameter $\tau$. Its density is defined by

$$f(x) = \frac{\tau}{2\sigma\Gamma(1/\tau)} \exp\left\{-\left(\frac{|x-\mu|}{\sigma}\right)^{\tau}\right\}, \tag{7}$$

where $\Gamma$ is the gamma function. This is a flexible family of distributions. Its special cases are the Gaussian distribution ($\tau = 2$) and the Laplace distribution ($\tau = 1$). We assume that $\mu = 0$ since the distribution of $\varepsilon_j^{*}$ is already centered. For each $j$, the remaining parameters, $\sigma$ and $\tau$, were estimated using the maximum likelihood principle. The fit of the data to the resulting exponential power distribution was then checked with the chi-square test based on the statistic

$$\chi^2 = \sum_{i=1}^{m} \frac{(O_i - n p_i)^2}{n p_i}, \tag{8}$$

where $O_i$ is the count of observations in the $i$th interval ($i = 1, \ldots, m$), and $p_i$ is the interval probability based on the tested distribution. The intervals were chosen so that $p_i = 1/m$. The tested distribution should be rejected when $\chi^2$ exceeds the upper $\alpha$th percentile of the chi-squared distribution with $m - 1 - s$ degrees of freedom, where $s$ is the number of estimated parameters. The value of $\alpha$ is chosen based on the Bonferroni principle, since a large number of inferences are performed here. Figure 2 shows the log (base 10) values of the chi-square statistic plotted versus the dimension number $j$. The horizontal line is at the threshold level of $10^{1.75} \approx 56.2$ (1.75 on the log scale). The points above the threshold represent the first 19 dimensions and then 22, 23, 25, 28, and 67. Since not all dimension numbers are consecutive, we are faced with the difficulty of choosing a suitable value for $k$. We propose two potential strategies.

(1) Select $k$ as the largest dimension number (here 28) that is not an “outlier” (such as dimension 67 here). With this approach, we would accept the imperfect modeling along the 67th dimension, and some of the dimensions (20, 21, 24, 26, and 27) would be represented in the deterministic part of the model, even though they could be modeled stochastically by the exponential power distribution. This approach is consistent with the principle of keeping the dimensions with the highest explained variability.

(2) Select all dimensions above the threshold for the deterministic part of the model (possibly including dimension 67) and use the remaining dimensions for the error term. From the point of view of the notation in Section 2, we would reorder the dimensions so that the “nonrandom” dimensions 22, 23, 25, 28, and 67 would be numbered from 20 to 24, and $k$ would be chosen as 24.
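The marginal fitting procedure can be sketched with SciPy, where the exponential power family is available as `gennorm` (its `beta` parameter plays the role of the shape $\tau$ here). The data below are simulated, and the interval count `m` and the Bonferroni level are illustrative assumptions, not the paper's exact values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated stand-in for one whitened error dimension eps*_j.
x = stats.gennorm.rvs(beta=1.5, size=10_000, random_state=rng)

# ML fit of the exponential power (generalized normal) distribution,
# with the location fixed at 0 since eps*_j is already centered.
tau_hat, _, sigma_hat = stats.gennorm.fit(x, floc=0)

# Chi-square goodness of fit with m equiprobable intervals (p_i = 1/m):
# interval edges are quantiles of the fitted distribution.
m = 30
inner_edges = stats.gennorm.ppf(np.arange(1, m) / m, tau_hat, loc=0, scale=sigma_hat)
observed = np.bincount(np.searchsorted(inner_edges, x), minlength=m)
expected = len(x) / m
chi2 = np.sum((observed - expected) ** 2 / expected)

# Reject when chi2 exceeds the upper alpha percentile of chi-square with
# m - 1 - s degrees of freedom (s = 2 parameters estimated here); here
# alpha is Bonferroni-adjusted over an assumed 152 tested dimensions.
threshold = stats.chi2.ppf(1 - 0.05 / 152, df=m - 1 - 2)
print(f"chi2 = {chi2:.1f}, threshold = {threshold:.1f}")
```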

Figure 3 shows the log (base 10) values of the chi-square statistic plotted versus the shape parameter $\tau$. For the values below the threshold, where the exponential power distribution model would be used, the $\tau$ values range from 1.37 to 1.91. Hence, the distributions have tails heavier than those of the Gaussian distribution, but not as heavy as those of the Laplace distribution.

The second potential family of distributions for modeling the distributions of the $\varepsilon_j^{*}$'s is the t-distribution with $\nu$ degrees of freedom, given by the density

$$f(x) = \frac{\Gamma((\nu+1)/2)}{\sqrt{\nu\pi}\,\Gamma(\nu/2)} \left(1 + \frac{x^2}{\nu}\right)^{-(\nu+1)/2}. \tag{9}$$

Since $\varepsilon_j^{*}$ is scaled to have variance 1, we also need to scale the t-distribution. Hence, we fit a distribution with the density $c f(cx)$, where $f$ is defined in (9) and $c = \sqrt{\nu/(\nu-2)}$. We assume that $\nu$ is larger than 2 but not necessarily an integer. The scaling depends on the unknown parameter $\nu$, so it needs to be taken into account in the maximum likelihood estimation of $\nu$. The fit of the data to the scaled t-distribution was again checked with the chi-square test. Figure 4 is analogous to Figure 2, but here the chi-square statistic measures the fit to the scaled t-distribution. The horizontal line is at the threshold level of $10^{1.77} \approx 58.9$ (1.77 on the log scale). The points above the threshold represent the first 10 dimensions and then 12, 14, 17, 19, and 23. Here again, we can use one of the two proposed strategies for identifying the dimensionality of the deterministic part of the model. We can see that the scaled t-distribution gives a significantly better fit than the exponential power distribution.
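Because the scale $c$ depends on $\nu$, the likelihood reduces to a one-parameter function of $\nu$ that can be maximized directly. A minimal sketch with simulated data follows; the optimizer bounds and sample size are arbitrary assumptions.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# Simulated unit-variance heavy-tailed sample standing in for eps*_j:
# a t-variate rescaled to variance 1.
nu_true = 6.0
x = stats.t.rvs(df=nu_true, size=10_000, random_state=rng) * np.sqrt((nu_true - 2) / nu_true)

# Negative log-likelihood of the variance-1 scaled t: density c*f(c*x)
# with c = sqrt(nu/(nu-2)), i.e. SciPy's t with scale sqrt((nu-2)/nu).
def nll(nu):
    scale = np.sqrt((nu - 2.0) / nu)
    return -np.sum(stats.t.logpdf(x, df=nu, loc=0, scale=scale))

# One-parameter ML estimate of nu, constrained to nu > 2.
res = optimize.minimize_scalar(nll, bounds=(2.01, 200.0), method="bounded")
nu_hat = res.x
print(f"nu_hat = {nu_hat:.2f}")
```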

Figure 5 shows the log (base 10) values of the chi-square statistic plotted versus the number of degrees of freedom $\nu$. (The parameter $\nu$ had a very large value for the first dimension ($j = 1$, with the highest chi-squared value), and it is not shown in the graph for clarity of presentation.) For the values below the threshold, where the scaled t-distribution model would be used, the $\nu$ values range from 4.3 to 43.3. Hence again, the distributions have tails heavier than those of the Gaussian distribution, but not heavier than those of the scaled t-distribution with 4 degrees of freedom.

#### 4. Modeling Joint Multivariate Distributions

The model fitting discussed in the previous section accounted only for the marginal distributions of the $\varepsilon_j^{*}$'s. A more challenging task is to ensure a fit of the joint multivariate distribution of the data to a suitable model. In the whitened version, the components of $\boldsymbol{\varepsilon}^{*}$ are uncorrelated, but they might be either stochastically independent or dependent, depending on the specific assumed model. Again, we will use two competing models. The first one is based on the assumption of independent components following an exponential power distribution. In order to verify this model, we investigate the joint distribution of $(\varepsilon_r^{*}, \ldots, \varepsilon_p^{*})$ for $r = 1, \ldots, p$. More specifically, we investigate the fit of the theoretical cumulative distribution function (CDF) $G_r(t) = P(\varepsilon_r^{*} \le t, \ldots, \varepsilon_p^{*} \le t)$ to its empirical equivalent. Note that $G_r$ can be regarded as a univariate simplification of the multivariate CDF of $(\varepsilon_r^{*}, \ldots, \varepsilon_p^{*})$ with all coordinates being equal. Based on the assumption of independence, $G_r(t) = \prod_{j=r}^{p} F(t)$, where $F$ is the CDF of the exponential power distribution defined in (7). The empirical equivalent of $G_r$ is the fraction of observations such that all coordinates are at most $t$, or equivalently $\max_{r \le j \le p} \varepsilon_j^{*} \le t$. The CDF depends on the parameters $\sigma$ and $\tau$, which were estimated using the maximum likelihood principle. The chi-square test based on the statistic (8) was then used to assess the fit of the observations of $\max_{r \le j \le p} \varepsilon_j^{*}$ to $G_r$. The resulting log (base 10) values of the chi-squared statistic are shown in Figure 6. As before, the horizontal line is at the threshold level of $10^{1.75} \approx 56.2$ (1.75 on the log scale). The points below the threshold represent the dimensions 135 and 136, and then 138 through 152 ($r = 135, 136, 138, \ldots, 152$). The value for $r = 137$ is only slightly above the threshold. We can say that the model of independent components with an exponential power distribution is quite successful in modeling the multivariate distribution of the “last” 18 dimensions, from 135 until the last one (152). However, the model is not very successful in modeling further dimensions, where we apparently observe significant dependencies among the $\varepsilon_j^{*}$'s. This is consistent with low-dimensional components of $\boldsymbol{\varepsilon}^{*}$ being independent or having a very weak dependence structure that does not show up as significant. That dependence structure gets stronger with higher dimensions.
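The maximum-based reduction of the joint CDF can be sketched as follows. Here the data are simulated from the independence model itself, and the common shape/scale parameters and interval count are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# n observations of q whitened coordinates, drawn here as independent
# exponential power variables (the model being verified).
n, q = 10_000, 15
tau, sigma = 1.6, 1.0                      # illustrative common parameters
X = stats.gennorm.rvs(tau, scale=sigma, size=(n, q), random_state=rng)

# Univariate reduction of the joint CDF: under independence,
# G(t) = P(all coordinates <= t) = F(t)^q, with F the marginal CDF (7).
# Its empirical counterpart uses the row-wise maximum.
T = X.max(axis=1)
G = stats.gennorm.cdf(T, tau, scale=sigma) ** q

# If the model holds, G(T) is uniform on (0, 1); check it with a
# chi-square test over m equiprobable intervals, as in (8).
m = 30
observed, _ = np.histogram(G, bins=np.linspace(0, 1, m + 1))
expected = n / m
chi2 = np.sum((observed - expected) ** 2 / expected)
print(f"chi2 = {chi2:.1f} (df = {m - 1})")
```

The probability integral transform turns the multivariate check into a single univariate uniformity test, which is exactly why the maximum is a convenient summary here.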

Figure 7 shows the log (base 10) values of the chi-square statistic plotted versus the shape parameter $\tau$. For the values below the threshold, where a model with independent exponential power distributions would be used, the $\tau$ values range from 1.48 to 1.77. Hence again, the distributions have tails heavier than those of the Gaussian distribution, but not as heavy as those of the Laplace distribution.

The second multivariate model that we want to discuss is based on the standard $q$-dimensional multivariate t-distribution with the density function

$$f(\mathbf{x}) = \frac{\Gamma((\nu+q)/2)}{(\nu\pi)^{q/2}\,\Gamma(\nu/2)} \left(1 + \frac{\mathbf{x}^T\mathbf{x}}{\nu}\right)^{-(\nu+q)/2}. \tag{10}$$

The variance-covariance matrix of this distribution is equal to $[\nu/(\nu-2)]\,\mathbf{I}_q$, where $\mathbf{I}_q$ is a $q$-dimensional identity matrix and $\nu > 2$. Since we deal here with sphered data modeled by $\boldsymbol{\varepsilon}^{*}$ (with zero mean and identity variance-covariance matrix), we cannot use the standard multivariate t-distribution directly. Hence, an appropriate candidate distribution for $\boldsymbol{\varepsilon}^{*}$ is a scaled multivariate t-distribution with the density function given by $c^{q} f(c\mathbf{x})$, where $f$ is defined in (10) and $c = \sqrt{\nu/(\nu-2)}$. We can write

$$c^{q} f(c\mathbf{x}) = \frac{c^{q}\,\Gamma((\nu+q)/2)}{(\nu\pi)^{q/2}\,\Gamma(\nu/2)} \left(1 + \frac{c^{2}\,\mathbf{x}^T\mathbf{x}}{\nu}\right)^{-(\nu+q)/2}. \tag{11}$$

This distribution is spherically contoured in the sphered coordinates and elliptically contoured in the original coordinates. All marginal distributions of the scaled multivariate t-distribution are the scaled t-distributions discussed in Section 3, which were already confirmed as reasonable models for the AVIRIS data. Here, we want to verify whether the multivariate t-distribution provides an adequate multivariate structure for those data.

If $\mathbf{t}$ follows the standard $q$-dimensional multivariate t-distribution, then $\mathbf{t}^T\mathbf{t}/q$ follows the F-distribution with $q$ and $\nu$ degrees of freedom (see [11]). Hence, in order to check the assumption of the multivariate t-distribution, we verify whether $(\boldsymbol{\varepsilon}^{*T}\boldsymbol{\varepsilon}^{*})/q$, with $q = p - r + 1$, follows the F-distribution scaled by $(\nu-2)/\nu$. As before, the degrees of freedom parameter $\nu$ is estimated based on the maximum likelihood principle, and the fit is assessed based on the chi-square statistic. Figure 8 shows the log (base 10) values of the chi-square statistic plotted versus $r$. The horizontal line is at the threshold level of $10^{1.77} \approx 58.9$ (1.77 on the log scale). We can see only one value below the threshold, suggesting that the scaled F-distribution is suitable only for $r = p = 152$, which is consistent with the marginal scaled t-distribution for that dimension (discussed in Section 3), since the vector is then one-dimensional. All remaining vectors $(\varepsilon_r^{*}, \ldots, \varepsilon_p^{*})$, $r < 152$, are apparently not modeled well by the scaled multivariate t-distribution. We note that the components of the multivariate t-distribution are not independent, and the dependency structure imposed by this distribution is apparently not consistent with the AVIRIS hyperspectral data investigated here.
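This check can be sketched numerically by sampling the standard multivariate t via its normal/chi-square mixture and applying the probability integral transform of the scaled F-distribution. A Kolmogorov–Smirnov test replaces the paper's chi-square test here for brevity, and the sizes are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Sample from the standard q-dimensional multivariate t with nu dof:
# t = z / sqrt(w / nu), z ~ N(0, I_q), w ~ chi-square(nu).
n, q, nu = 10_000, 10, 8.0
z = rng.standard_normal((n, q))
w = stats.chi2.rvs(nu, size=n, random_state=rng)
t = z / np.sqrt(w / nu)[:, None]

# Sphered version eps* = t / c with c = sqrt(nu/(nu-2)), so that the
# variance-covariance matrix becomes the identity.
c = np.sqrt(nu / (nu - 2.0))
eps_star = t / c

# ||eps*||^2 / q should follow the F(q, nu) distribution scaled by (nu-2)/nu.
stat = np.sum(eps_star**2, axis=1) / q
u = stats.f.cdf(stat * nu / (nu - 2.0), q, nu)   # PIT: uniform if model holds

# Quick uniformity diagnostic (the paper uses a chi-square test instead).
ks = stats.kstest(u, "uniform")
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```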

#### 5. Cooke City Image

In the two previous sections, we used an AVIRIS image as an example to demonstrate how to fit a linear mixing model with the error term $\mathbf{e}$ being potentially non-Gaussian. Specific numerical results might, of course, be different for other images. Hence, it is interesting to perform the same calculations on another image. The AVIRIS image has a wide range of spectral diversity due to the presence of various natural and man-made materials, so it would be interesting to try a more homogeneous dataset with less variety. One way to do this could be to classify an image into various types of cover material and then use the spectra from one class as our dataset. A disadvantage of such an approach is the possibility of removing the tails of the distributions, which might have a tendency to be assigned to a different class. Hence, we chose a relatively uniform subimage of forest in the HyMap Cooke City image (see [12]), as marked by the red rectangle shown in Figure 9. Four spectral bands were also removed due to some suspicious values, which reduced the spectral dimensionality of the dataset accordingly. The spatial dimensionality is 50 by 300 pixels, for a total of $n = 15{,}000$ pixels.

In order to investigate the marginal distributions, we now follow the ideas and notation of Section 3. The fit with the exponential power distribution is verified with Figure 10, which is analogous to Figure 2. Using Strategy 1 explained in Section 3, we would identify $k = 26$ as the dimensionality of the deterministic part of the model.

Figure 11 (analogous to Figure 3) shows the log (base 10) values of the chi-square statistic plotted versus the shape parameter $\tau$. The highest value of $\tau$ is almost perfectly equal to 2, suggesting the Gaussian distribution. Most of the $\tau$ values are around 1.8 and higher, suggesting slightly lighter tails than those for the AVIRIS image (compare with Figure 3).

Figure 12 (analogous to Figure 4) shows a slightly better fit of the scaled t-distribution than that of the exponential power distribution. However, the resulting dimensionality would be identified as the same as before ($k = 26$).

Figure 13 (analogous to Figure 5) shows the log (base 10) values of the chi-square statistic plotted versus the number of degrees of freedom $\nu$. The high values of $\nu$ again point to more Gaussian-like marginal distributions than those of the AVIRIS image.

So far, we have discussed only the marginal distributions of the $\varepsilon_j^{*}$'s as a precursor to checking the model fit. We now want to consider the more challenging task of ensuring a fit of the joint multivariate distribution of the data to a suitable model, as discussed in Section 4. Figure 14 (analogous to Figure 6) shows the log (base 10) values of the chi-square statistic (for testing the multivariate fit to independent exponential power distributions) plotted versus $r$. This figure is so strikingly different from the analogous Figure 6 that the author double-checked the code (in fact, the same code was used in both cases). Note that even for $r$ as small as 1 (which represents all dimensions from 1 until $p$), we obtain an acceptable fit with independent exponential power distributions. The only unacceptable fit occurs for a single value of $r$, which points to some minor issues with the model.

Regarding the final conclusion about the model fit, we need to keep in mind that this is just one test of multivariate fit, and it needs to be considered together with the results shown in Figure 12, which tell us that some of the marginal distributions do not have a satisfactory fit. Hence, we can accept the previous conclusion of $k = 26$ as the dimensionality of the deterministic part of the model, and the remaining dimensions are remarkably well modeled by independent exponential power distributions. This successful modeling might be largely due to this dataset being a fairly homogeneous set of spectra (a forest area in the Cooke City image).

Since almost all chi-square values in Figure 14 are not significant, it would not be interesting to show an analog of Figure 7. Hence, in Figure 15, we created a plot of the shape parameter values $\tau$ (from the multivariate fit) versus $r$. We can see that for large $r$, the fit is fairly close to the Gaussian distribution ($\tau$ close to 2). Then, for smaller $r$, the $\tau$'s get smaller, which represents much heavier tails of the distribution, almost up to the Laplace distribution ($\tau$ close to 1).

We have also checked the fit to the multivariate t-distribution, as shown in Figure 16, which is an analog of Figure 8. We can observe the only good fit at $r = p$ (which is really a univariate fit of the last dimension), which means that none of the multivariate t-distributions fits the data well.

#### 6. Conclusions

In this paper, we investigated various distributional models for the error term $\mathbf{e}$ in the linear mixing model with a structured background. The models were tested on two datasets. One was an AVIRIS image, and the other one was a subimage of a forest area in the HyMap Cooke City image. The first proposed model was based on independent components following an exponential power distribution. The model fitted reasonably well to both datasets in terms of modeling marginal distributions. For the AVIRIS image, the fit of the joint distribution worked quite well for a small number of components, but not for a larger number. For the forest area in the Cooke City image, the fit with the joint distribution of independent exponential power distributions worked very well. This successful modeling might be largely due to this dataset being a fairly homogeneous set of spectra.

The second model was based on the multivariate t-distribution. It performed well in terms of the resulting marginal distributions in both datasets, but the dependency structure imposed by this distribution was entirely inconsistent with both datasets. More research is needed to investigate the two models on other hyperspectral images. However, the multivariate t-distribution model does not look promising at this point, while the exponential power distribution model seems to have more potential.

#### References

1. D. G. Manolakis and G. Shaw, “Detection algorithms for hyperspectral imaging applications,” IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 29–43, 2002. View at: Publisher Site | Google Scholar
2. L. L. Scharf and B. Friedlander, “Matched subspace detectors,” IEEE Transactions on Signal Processing, vol. 42, no. 8, pp. 2146–2156, 1994. View at: Publisher Site | Google Scholar
3. S. Johnson, “The relationship between the matched-filter operator and the target signature space-orthogonal projection classifier,” IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 1, pp. 283–286, 2000. View at: Google Scholar
4. P. Bajorski, “Analytical comparison of the matched filter and orthogonal subspace projection detectors for hyperspectral images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 7, part 2, pp. 2394–2402, 2007. View at: Publisher Site | Google Scholar
5. P. Bajorski, “A family of distributions for the error term in linear mixing models for hyperspectral images,” in Imaging Spectrometry XIII, S. S. Shen and P. E. Lewis, Eds., vol. 7086 of Proceedings of SPIE, August 2008. View at: Publisher Site | Google Scholar
6. D. B. Marden and D. Manolakis, “Using elliptically contoured distributions to model hyperspectral imaging data and generate statistically similar synthetic data,” in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery X, S. S. Shen and P. E. Lewis, Eds., vol. 5425 of Proceedings of SPIE, pp. 558–572, April 2004. View at: Publisher Site | Google Scholar
7. D. Manolakis, M. Rossacci, J. Cipar, R. Lockwood, T. Cooley, and J. Jacobson, “Statistical characterization of natural hyperspectral backgrounds using t-elliptically contoured distributions,” in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XI, S. S. Shen and P. E. Lewis, Eds., vol. 5806 of Proceedings of SPIE, pp. 56–65, April 2005. View at: Publisher Site | Google Scholar
8. J. Theiler and C. Scovel, “Uncorrelated versus independent Elliptically-contoured distributions for anomalous change detection in hyperspectral imagery,” in Computational Imaging VII, C. A. Bouman, E. L. Miller, and I. Pollak, Eds., vol. 7246 of Proceedings of SPIE-IS & T Electronic Imaging, p. 72460T, January 2009. View at: Publisher Site | Google Scholar
9. P. Bajorski, “Generalized detection fusion for hyperspectral images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 4, pp. 1199–1205, 2012. View at: Publisher Site | Google Scholar
10. P. Bajorski, Statistics For Imaging, Optics, and Photonics, John Wiley & Sons, New York, NY, USA, 2011.
11. T. W. Anderson, An Introduction to Multivariate Statistical Analysis, John Wiley & Sons, New York, NY, USA, 3rd edition, 2003.
12. D. Snyder, J. Kerekes, I. Fairweather, R. Crabtree, J. Shive, and S. Hager, “Development of a web-based application to evaluate target finding algorithms,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), vol. 2, pp. 915–918, Boston, Mass, USA, July 2008. View at: Publisher Site | Google Scholar

Copyright © 2012 Peter Bajorski. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.