Research Article | Open Access
Chengzhi Deng, Yaning Zhang, Shengqian Wang, Shaoquan Zhang, Wei Tian, Zhaoming Wu, Saifeng Hu, "Approximate Sparse Regularized Hyperspectral Unmixing", Mathematical Problems in Engineering, vol. 2014, Article ID 947453, 11 pages, 2014. https://doi.org/10.1155/2014/947453
Approximate Sparse Regularized Hyperspectral Unmixing
Sparse regression based unmixing has recently been proposed to estimate the abundances of the materials present in the pixels of a hyperspectral image. In this paper, a novel sparse unmixing optimization model based on approximate sparsity, namely, approximate sparse unmixing (ASU), is first proposed to perform the unmixing task for hyperspectral remote sensing imagery. A variable splitting and augmented Lagrangian algorithm is then introduced to tackle the optimization problem. In ASU, approximate sparsity is used as a regularizer for sparse unmixing; it induces sparser solutions than the l1 regularizer and is much easier to solve than the l0 regularizer. Three simulated datasets and one real hyperspectral image were used to evaluate the performance of the proposed algorithm in comparison with the l1 regularizer. Experimental results demonstrate that the proposed algorithm is more effective and accurate for hyperspectral unmixing than the state-of-the-art l1 regularizer.
1. Introduction
With its wealth of spectral information, hyperspectral remote sensing has been widely used in target detection, material mapping, and environmental monitoring. However, owing to the low spatial resolution of the sensor and the natural diversity of surface features, each pixel of a hyperspectral image often contains more than one pure material, leading to spectral mixing. To deal with this problem and effectively identify the components of a mixed spectrum, hyperspectral unmixing, which aims at decomposing the measured spectrum of each mixed pixel into a collection of pure constituent spectra (endmembers) and the corresponding fractions (abundances), has been proposed. The linear spectral mixing model and the nonlinear spectral mixing model are the two basic models used to analyze the mixed pixel problem. Owing to its computational tractability and flexibility, the linear mixture model has been widely applied. In this paper, we focus on the linear spectral mixing model.
The linear spectral mixing model assumes that the spectral response of a pixel in any given spectral band is a linear combination of the responses of all the pure endmembers present in the pixel. Assuming that the hyperspectral sensor used in data acquisition has L spectral bands, the linear spectral mixing model can be formulated as

y = Ax + n,   (1)

where y (an L-vector) is the spectrum of a mixed pixel, A (an L x m matrix) is the spectral library containing m signatures, x (an m-vector) is the abundance vector corresponding to the library A, and n is the noise vector. Owing to physical constraints, the fractional abundances of all the endmembers in a mixed pixel cannot be negative, and their sum should be one. Under the linear spectral mixing model, a variety of unmixing approaches based on geometry [3, 4] and statistics [5, 6] have been proposed. However, these methods are unsupervised and may extract virtual endmembers with no physical meaning.
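As a concrete illustration, the linear mixing model in (1) can be simulated in a few lines of Python. The library, active endmembers, and noise level below are arbitrary illustrative values chosen for this sketch, not the settings used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

L, m = 50, 10            # number of spectral bands and library signatures (illustrative)
A = rng.random((L, m))   # hypothetical spectral library: one column per signature

# Abundance vector: nonnegative and summing to one, with only 3 active endmembers.
x = np.zeros(m)
x[[1, 4, 7]] = [0.5, 0.3, 0.2]

n = 0.01 * rng.standard_normal(L)  # additive noise vector
y = A @ x + n                      # observed spectrum of the mixed pixel
```

Note that x is already sparse here: most library members contribute nothing to the pixel, which is exactly the structure sparse unmixing exploits.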
In recent years, the sparse signal model has received much attention in the hyperspectral unmixing area [7–12]. Sparse unmixing, as a semisupervised unmixing method, is based on the assumption that each mixed pixel observed in the hyperspectral imagery can be modeled as a linear combination of a few spectra from a large spectral library that is known in advance. Because the size of the spectral library is often large, the number of endmembers m in the library is much greater than the number of spectral bands L, which implies that there are only a few nonzero entries in the abundance vector x. Thus, the sparse unmixing problem can be written as

min_x ||x||_0   subject to   ||Ax - y||_2 <= delta, x >= 0, 1^T x = 1,   (2)

where ||x||_0 denotes the number of nonzero components of x, delta >= 0 is the tolerated error derived from the noise or the modeling error, and 1^T x = 1 means that the sum of the fractional abundances is 1.
However, in terms of computational complexity, the l0 norm regularized problem (2) is a typical NP-hard problem and is difficult to solve. Candès and Tao [14, 15] proved that the l0 norm can be replaced by the l1 norm under a certain condition on the restricted isometry property (RIP). Therefore, in most cases, problem (2) is relaxed to the alternative l1 norm regularized convex problem

min_x ||x||_1   subject to   ||Ax - y||_2 <= delta, x >= 0, 1^T x = 1,   (3)

where ||x||_1 = sum_i |x_i| is the l1 norm of x. Nevertheless, while l1 regularization provides the best convex approximation to l0 regularization and is computationally efficient, it often introduces extra bias into the unmixing and cannot estimate the fractional abundances with the sparsest solutions.
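To see why problem (2) is combinatorial, consider a brute-force solver that enumerates candidate supports: its cost grows exponentially with the library size m. The sketch below is purely illustrative; in particular, the clip-and-renormalize step is a crude stand-in for a proper constrained least-squares solve on each support:

```python
import numpy as np
from itertools import combinations

def l0_unmix_bruteforce(A, y, k, delta):
    """Exhaustive search over supports of size <= k for problem (2):
    find the sparsest x >= 0 with sum(x) = 1 and ||Ax - y||_2 <= delta.
    The nested loop over supports is why (2) is NP-hard in general."""
    L_, m = A.shape
    for s in range(1, k + 1):                       # try sparsity levels 1..k
        for supp in combinations(range(m), s):      # all supports of size s
            As = A[:, list(supp)]
            xs = np.linalg.pinv(As) @ y             # least squares on the support
            xs = np.clip(xs, 0.0, None)             # crude nonnegativity fix
            t = xs.sum()
            if t > 0:
                xs = xs / t                         # crude sum-to-one fix
            if np.linalg.norm(As @ xs - y) <= delta:
                x = np.zeros(m)
                x[list(supp)] = xs
                return x
    return None
```

Even for this toy version, the number of supports examined is on the order of C(m, k), which explodes for realistic library sizes; this motivates the relaxations discussed above.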
In this paper, we model hyperspectral unmixing as an approximate sparsity regularized optimization problem and propose to solve it via a variable splitting and augmented Lagrangian algorithm. The experimental results demonstrate that the proposed ASU algorithm achieves better spectral unmixing accuracy. The rest of this paper is organized as follows. In Section 2, the hyperspectral unmixing methodology based on approximate sparsity is described in detail. Section 3 presents the experimental results. Conclusions are drawn in Section 4.
2. Proposed Hyperspectral Unmixing Methodology
2.1. Approximate Sparsity Model
The problems of using the l0 norm in hyperspectral unmixing (i.e., the need for a combinatorial search for its minimization and its high sensitivity to noise) are both due to the fact that the l0 norm of a vector is a discontinuous function of that vector. As in [16, 17], our idea is to approximate this discontinuous function by a continuous one, named the approximate sparsity function, which provides a smooth measure of the l0 norm and better accuracy than the l1 regularizer.
The approximate sparsity function is defined as

f_sigma(t) = exp(-t^2 / (2 sigma^2)).   (4)

The parameter sigma may be used to control the accuracy with which f_sigma approximates the Kronecker delta. In mathematical terms, we have

lim_{sigma -> 0} f_sigma(t) = 1 if t = 0, and 0 if t != 0.   (5)

Define the continuous multivariate approximate sparsity function as

F_sigma(x) = sum_{i=1}^{m} f_sigma(x_i).   (6)

It is clear from (5) that F_sigma(x) is an indicator of the number of zero entries in x for small values of sigma. Therefore, the l0 norm can be approximated by

||x||_0 ≈ m - F_sigma(x).   (7)

Note that the larger the value of sigma, the smoother m - F_sigma(x) and the worse its approximation to the l0 norm; the smaller the value of sigma, the closer the behavior of m - F_sigma(x) to the l0 norm.
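This limiting behavior is easy to check numerically. The snippet below evaluates the smooth surrogate m - F_sigma(x), assuming the Gaussian kernel of the smoothed-l0 approach in [16], and shows it approaching the true number of nonzeros as sigma shrinks:

```python
import numpy as np

def f_sigma(t, sigma):
    # Gaussian kernel: equals 1 at t = 0 and tends to the Kronecker delta as sigma -> 0
    return np.exp(-t**2 / (2.0 * sigma**2))

def approx_l0(x, sigma):
    # Smooth surrogate of ||x||_0, i.e., m - F_sigma(x)
    x = np.asarray(x, dtype=float)
    return x.size - f_sigma(x, sigma).sum()

x = np.array([0.0, 0.0, 0.7, 0.3])  # the exact l0 "norm" of x is 2
for sigma in (1.0, 0.1, 0.01):
    print(f"sigma={sigma}: approx ||x||_0 = {approx_l0(x, sigma):.4f}")
```

For sigma = 1.0 the surrogate badly underestimates the count (the kernel is far too smooth), while for sigma = 0.01 it is essentially exact, matching the tradeoff described in the text.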
Figure 1 shows the sparsity of the proposed approximate sparsity function for different values of sigma. From Figure 1, we can see that the smaller the value of sigma, the closer the behavior of the approximate sparsity function to the l0 norm, while the larger the value of sigma, the smoother the approximate sparsity function.
2.2. Unmixing Model Based on Approximate Sparsity
Based on the proposed approximate sparsity model and the sparse unmixing model in (2) or (3), the proposed approximate sparse unmixing (ASU) model is built up as follows. Joining the approximate sparsity term with the abundance sum-to-one and nonnegativity constraints, the hyperspectral unmixing optimization problem is rewritten as

min_x (1/2)||Ax - y||_2^2 + lambda (m - F_sigma(x))   subject to   x >= 0, 1^T x = 1.   (8)
The above regularized optimization problem can be converted into an unconstrained version by minimizing the respective Lagrangian function:

min_x (1/2)||Ax - y||_2^2 + lambda (m - F_sigma(x)) + i_{+}(x) + i_{1}(x),   (9)

where lambda is the regularization parameter, and i_{+}(x) and i_{1}(x) are the indicator functions of the abundance nonnegativity and sum-to-one constraints, respectively. For a constraint set S, the indicator function satisfies i_S(x) = 0 if x belongs to S; otherwise i_S(x) = +infinity.
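A minimal sketch of evaluating the unconstrained objective (9) follows, with the indicator terms realized by returning infinity outside the feasible set; the tolerance and test values are illustrative choices, not part of the original formulation:

```python
import numpy as np

def asu_objective(x, A, y, lam, sigma, tol=1e-8):
    """Value of the unconstrained ASU objective (9):
    0.5*||Ax - y||^2 + lam*(m - F_sigma(x)) + indicator terms."""
    x = np.asarray(x, dtype=float)
    if (x < -tol).any() or abs(x.sum() - 1.0) > tol:
        return np.inf                                        # i_+(x) and i_1(x)
    fit = 0.5 * np.linalg.norm(A @ x - y)**2                 # data-fidelity term
    sparsity = x.size - np.exp(-x**2 / (2*sigma**2)).sum()   # m - F_sigma(x)
    return fit + lam * sparsity
```

With a small sigma, the sparsity term essentially counts the nonzero abundances, so the objective trades reconstruction error against the number of active endmembers, exactly as (9) prescribes.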
2.3. Numerical Algorithm
Sparse unmixing is a typical underdetermined linear inverse problem, and it is very difficult to find a unique, stable, and optimal solution. Many numerical algorithms have been proposed in the literature to solve sparse unmixing problems. Greedy algorithms such as orthogonal matching pursuit (OMP) and convex relaxation methods such as basis pursuit (BP) are two alternative approaches for solving the l0 and l1 norm constrained sparse unmixing problems. Chen and Zhang adapted the iteratively reweighted least squares algorithm to solve an lp norm constrained sparse unmixing optimization problem. Sun et al. introduced a reweighted iterative algorithm to convert the nonconvex sparse problem into a sequence of weighted l1 problems and used the split Bregman iterative algorithm to solve them. Bioucas-Dias and Figueiredo proposed a sparse unmixing algorithm via variable splitting and augmented Lagrangian (SUnSAL), the core idea of which is to introduce a new variable per regularizer and then exploit the alternating direction method of multipliers (ADMM) to solve the resulting constrained optimization problems.
Owing to the good performance of SUnSAL, in this paper we adopt the variable splitting and augmented Lagrangian algorithm to solve (9). We first introduce an intermediate variable u and transform problem (9) into the equivalent problem

min_{x,u} (1/2)||Au - y||_2^2 + i_{+}(u) + lambda (m - F_sigma(x)) + i_{1}(x)   subject to   u = x.   (10)

The augmented Lagrangian of problem (10) is

L(x, u, d) = (1/2)||Au - y||_2^2 + i_{+}(u) + lambda (m - F_sigma(x)) + i_{1}(x) + (mu/2)||x - u - d||_2^2,   (11)

where mu is a positive constant and d denotes the (scaled) Lagrange multiplier associated with the constraint u = x. The basic idea of the augmented Lagrangian method is to seek a saddle point of L(x, u, d), which is also a solution of problem (9).
By using the augmented Lagrangian algorithm, we solve problem (9) by iteratively minimizing (11) with respect to (x, u) and then updating the multiplier d. The joint minimization is still hard to carry out efficiently in a direct way, since it involves a nonseparable quadratic term and nondifferentiable terms. To solve this problem, the ADMM algorithm is employed, which alternately minimizes over one variable while fixing the others. Using ADMM, the joint minimization can be split into two subproblems, with respect to x and u, respectively.
(2) u-Subproblem. With x and d fixed, we solve

min_{u >= 0} (1/2)||Au - y||_2^2 + (mu/2)||x - u - d||_2^2.

This is a quadratic function of u, so the steepest descent method is a natural choice:

u <- u - gamma g(u),   g(u) = A^T(Au - y) - mu(x - u - d),

where gamma is a constant step size and g(u) is the gradient of the quadratic objective. Considering the nonnegativity constraint, we then project u onto the first orthant, that is, u <- max(u, 0).
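Putting the pieces together, the splitting scheme can be sketched as below. This is a simplified illustration made under several assumptions of our own: the inner gradient iterations, the step-size rule, and the crude renormalization used for the sum-to-one constraint are choices for this sketch, not necessarily those of Algorithm 1:

```python
import numpy as np

def asu_admm(A, y, lam=1e-3, sigma=0.1, mu=1.0, iters=200, inner=5):
    """Illustrative ASU-style splitting: u carries the data term and the
    nonnegativity projection, x carries the smooth sparsity term, and d is
    the scaled multiplier for the constraint u = x."""
    L_, m = A.shape
    step = 1.0 / (np.linalg.norm(A, 2)**2 + mu)    # safe step for the quadratics
    x = np.full(m, 1.0 / m)
    u = x.copy()
    d = np.zeros(m)
    for _ in range(iters):
        # u-subproblem: projected steepest descent on the quadratic objective
        for _ in range(inner):
            g = A.T @ (A @ u - y) - mu * (x - u - d)
            u = np.maximum(u - step * g, 0.0)       # project on the first orthant
        # x-subproblem: smooth sparsity term plus quadratic coupling,
        # handled here by a few plain gradient steps (illustrative choice)
        for _ in range(inner):
            g_sp = lam * (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))
            g = g_sp + mu * (x - u - d)
            x = np.maximum(x - step * g, 0.0)
        x = x / max(x.sum(), 1e-12)                 # crude sum-to-one projection
        d = d + u - x                               # multiplier update
    return x
```

On a trivial well-posed problem (A = I, noiseless y) this sketch converges to the true simplex-constrained abundances; a production implementation would solve the u-subproblem more carefully and use a proper projection onto the simplex.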
So far, all of the issues in handling the subproblems have been addressed. According to the above description, the main implementation procedure of the proposed method is summarized in Algorithm 1.
3. Experimental Results
In this section, a series of experiments on simulated datasets and a real hyperspectral image is carried out to evaluate the ASU method. Consistent comparisons between the ASU algorithm and the representative sparse unmixing algorithm SUnSAL were performed. In all the experiments, the maximum number of iterations and the stopping tolerance were fixed to the same values for both algorithms, and all the considered algorithms take the abundance nonnegativity constraint into account. The signal-to-reconstruction error (SRE) was used as the quality metric to assess the accuracy of the unmixing results; it is defined as

SRE (dB) = 10 log_10 ( E[||x||_2^2] / E[||x - x_hat||_2^2] ),

where x denotes the true abundance, x_hat denotes the estimated abundance, and E[.] denotes the expectation. Generally speaking, the larger the SRE, the more accurate the unmixing algorithm.
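The SRE metric is straightforward to compute; the sketch below averages over a set of pixels, with the true and estimated abundances stored column-wise:

```python
import numpy as np

def sre_db(X_true, X_est):
    """Signal-to-reconstruction error in dB over a set of abundance vectors.
    X_true, X_est: arrays of shape (m, n_pixels), one abundance vector per column."""
    signal = np.mean(np.sum(X_true**2, axis=0))            # E[||x||^2]
    error = np.mean(np.sum((X_true - X_est)**2, axis=0))   # E[||x - x_hat||^2]
    return 10.0 * np.log10(signal / error)
```

For example, a uniform estimation error of 0.1 per entry on all-ones abundances yields a signal-to-error power ratio of 100, i.e., an SRE of 20 dB.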
3.1. Simulated Datasets
In this experiment, the spectral library A was generated by randomly selecting signatures from the USGS library splib06, released in September 2007. The mutual coherence of the library is close to one. Using library A, three simulated datasets of 500 pixels each were generated, containing different numbers of endmembers (denoted by SD1, SD2, and SD3). In each pixel, the fractional abundances were drawn from a Dirichlet distribution. Zero-mean i.i.d. Gaussian noise of three different variances was added to the simulated datasets, giving signal-to-noise ratios (SNRs) of 20, 30, and 40 dB, respectively.
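A simulated dataset of this kind can be generated as follows. The dimensions and endmember count below are placeholders (the paper's library and band counts are not reproduced here); the Dirichlet draw automatically enforces nonnegativity and the sum-to-one constraint:

```python
import numpy as np

rng = np.random.default_rng(1)
L_, m, npix, k = 100, 60, 500, 4   # bands, library size, pixels, endmembers (illustrative)
A = rng.random((L_, m))            # stand-in spectral library

# Choose k library members and draw Dirichlet abundances per pixel (x >= 0, sum = 1).
idx = rng.choice(m, size=k, replace=False)
X = np.zeros((m, npix))
X[idx, :] = rng.dirichlet(np.ones(k), size=npix).T

Y_clean = A @ X                    # noiseless mixed pixels, one per column

# Add zero-mean i.i.d. Gaussian noise at a target SNR (dB).
snr_db = 30.0
sig_pow = np.mean(Y_clean**2)
noise_pow = sig_pow / 10**(snr_db / 10.0)
Y = Y_clean + rng.normal(0.0, np.sqrt(noise_pow), Y_clean.shape)
```

Scaling the noise variance from the clean-signal power, as above, is one common way to hit a prescribed SNR exactly in expectation.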
Table 1 shows the SRE (dB) results obtained on the three simulated datasets contaminated with white noise at the different levels, along with the corresponding optimal parameters of the two methods. From Table 1, it can be seen that ASU attains the highest SRE (dB) in all cases; since a higher SRE indicates better unmixing performance, ASU achieves better unmixing accuracy than SUnSAL. For high SNR values especially, the improvements in SRE over SUnSAL are significant. As expected in sparse unmixing, the accuracy of both methods decreases as the number of endmembers increases; ASU nevertheless performs clearly better than SUnSAL, and its results remain acceptable.
To further illustrate the performance of the proposed ASU method, Figure 2 shows the fractional abundances estimated by the two unmixing methods on simulated dataset SD2 with an SNR of 30 dB, along with the ground-truth abundances. For better visualization, the abundances of the same 50 randomly selected pixels are shown. From Figure 2, it can be observed that the abundance maps estimated by ASU are more similar to the ground-truth maps than those estimated by SUnSAL. Note that the abundance maps estimated by SUnSAL contain many spurious abundance values that are not actually present; the ASU algorithm suppresses most of them. This is because the approximate sparsity regularizer used in ASU produces sparser fractional abundances than the l1 regularizer used in SUnSAL.
The unmixing performance compared with the SUnSAL algorithm is also presented in Figure 3. From Figure 3, it can be seen that the proposed ASU algorithm obtains better SRE results than SUnSAL for all of the considered parameter settings. Considering the abundance sum-to-one and nonnegativity constraints, the abundance coefficients lie in the range [0, 1], and the parameter values in this paper were chosen accordingly.
Figure 4 shows the sparsity of the abundances estimated by the considered unmixing algorithms on simulated dataset SD2 with an SNR of 40 dB. It demonstrates that the number of nonzero abundance values obtained with the approximate sparsity regularizer (ASU) is smaller than that obtained with the l1 regularizer (SUnSAL). This confirms that the approximate sparsity regularizer induces sparser solutions than the l1 regularizer.
The parameter sigma controls the sparsity of the approximate sparsity regularizer and plays an important role in the unmixing algorithm. Figure 5(a) plots the SRE values for various sigma values on simulated dataset SD2 with SNRs of 20, 30, and 40 dB. Figure 5(b) plots the SRE values for various sigma values on simulated datasets with 2, 4, and 6 endmembers at SNR = 40 dB. It can be seen that the SRE first increases with sigma and, after reaching a peak value, begins to decrease. The optimal sigma values, which correspond to the peak SRE values for the different SNRs or different numbers of endmembers, are marked as red stars in Figure 5. From Figure 5(a), we can see that, for the same number of endmembers, a smaller sigma is optimal for hyperspectral data with a higher SNR. It can also be seen from Figure 5(b) that, for hyperspectral data with the same SNR, the optimal sigma value increases with the number of endmembers.
In the ASU model presented in (9), the regularization parameter lambda plays an important role in controlling the tradeoff between fidelity to the data and sparsity of the unmixing results. To analyze the influence of lambda on the ASU method, we plotted the SRE values for various lambda values on simulated dataset SD2 with an SNR of 40 dB, as shown in Figure 6(a). Both lower and higher lambda values result in a smaller SRE, and higher values lead to a faster decrease in SRE than lower values. The relationship between the SRE and the combinations of the regularization parameter lambda and the approximate sparsity parameter sigma is presented in Figure 6(b).
3.2. Real Datasets
In the real-data experiments, our method was tested on the well-known Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Cuprite dataset. The portion used in the experiments corresponds to a subset of the scene, comprising 224 spectral bands between 400 and 2500 nm with a nominal spectral resolution of 10 nm. Prior to the analysis, bands 1-2, 105–115, 150–170, and 223-224 were removed due to water absorption and low SNR, leaving 188 spectral bands. In this experiment, the spectral library A is the USGS library containing 498 pure endmember signatures, with the corresponding bands removed; the algorithm parameters were set empirically. For illustrative purposes, the mineral map produced by the Tricorder 3.3 software product is shown in Figure 7. It should be noted that the Tricorder map was produced in 1995, whereas the publicly available AVIRIS Cuprite data were collected in 1997. Thus, we only adopt the mineral maps as a reference for a qualitative analysis of the performance of the different sparse unmixing algorithms.
Figure 8 shows a visual comparison between the fractional abundance maps estimated by SUnSAL and by the proposed ASU algorithm for three different minerals (alunite, buddingtonite, and chalcedony). Comparison with the reference maps (Figure 7) demonstrates that the fractional abundances estimated by ASU are generally more accurate than, or comparable to, those of SUnSAL in the regions assigned to the respective materials. For buddingtonite especially, the results of ASU exhibit better spatial consistency for the minerals of interest and fewer outliers in isolated regions of the image.
4. Conclusions
In this paper, we have proposed a new model that uses the approximate sparsity regularizer in place of the l1 regularizer in a sparse regression model for hyperspectral unmixing problems. The proposed approximate sparsity regularized model produces sparser and more accurate unmixing results than the traditional l1 regularized model. We have also presented an effective method for solving the approximate sparse regression based unmixing problem via a variable splitting and augmented Lagrangian algorithm. To evaluate the performance of the proposed method, several experiments on a series of simulated datasets and a real hyperspectral image were performed. The experimental results show that the proposed method achieves better performance than the traditional SUnSAL.
Although our experimental results are very encouraging, the proposed method analyzes the hyperspectral data without incorporating its spatial structure information. In the future, we will incorporate spatial-contextual information into approximate sparse regression based unmixing to improve its performance. In addition, since our unmixing method operates in a pixel-by-pixel fashion, the procedure could be accelerated using hardware architectures such as graphics processing units (GPUs) or field-programmable gate arrays (FPGAs).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of the paper.
Acknowledgments
The authors would like to thank the anonymous referees for their valuable and helpful comments. The work was supported by the National Natural Science Foundation of China under Grants 61162022 and 61362036, the Natural Science Foundation of Jiangxi China under Grant 20132BAB201021, the Jiangxi Science and Technology Research Development Project of China under Grant KJLD12098, and the Jiangxi Science and Technology Research Project of Education Department of China under Grants GJJ12632 and GJJ13762.
References
- P. Shippert, “Why use hyperspectral imagery?” Photogrammetric Engineering and Remote Sensing, vol. 70, no. 4, pp. 377–380, 2004.
- J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., “Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.
- T. Chan, C. Chi, Y. Huang, and W. Ma, “A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing,” IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.
- T. Chan, W. Ma, A. Ambikapathi, and C. Chi, “A simplex volume maximization framework for hyperspectral endmember extraction,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 11, pp. 4177–4193, 2011.
- F. Schmidt, A. Schmidt, E. Tréguier, M. Guiheneuf, S. Moussaoui, and N. Dobigeon, “Implementation strategies for hyperspectral unmixing using Bayesian source separation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 11, pp. 4003–4013, 2010.
- M. Arngren, M. N. Schmidt, and J. Larsen, “Unmixing of Hyperspectral images using bayesian non-negative matrix factorization with volume prior,” Journal of Signal Processing Systems, vol. 65, no. 3, pp. 479–496, 2011.
- M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Sparse unmixing of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 6, pp. 2014–2039, 2011.
- J. B. Greer, “Sparse demixing of hyperspectral images,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.
- Y. F. Zhong, R. Y. Feng, and L. P. Zhang, “Non-local sparse unmixing for hyperspectral remote sensing imagery,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 6, pp. 1889–1909, 2014.
- M. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Total variation spatial regularization for sparse hyperspectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.
- M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Collaborative sparse unmixing of hyperspectral data,” in Proceedings of the 32nd IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, July 2012.
- M. Iordache, A. J. Plaza, and J. M. Bioucas-Dias, “Recent developments in sparse hyperspectral unmixing,” in Proceedings of the 30th IEEE International Geoscience and Remote Sensing Symposium (IGARSS '10), pp. 1281–1284, July 2010.
- E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
- E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5424, 2006.
- H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “A fast approach for overcomplete sparse decomposition based on smoothed l0 norm,” IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 289–301, 2009.
- J. H. Wang, Z. Y. Huang, Y. Y. Zhou, and F. H. Wang, “Robust sparse recovery based on approximate l0 norm,” Acta Electronica Sinica, vol. 40, no. 6, pp. 1185–1189, 2012.
- A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM Review, vol. 51, no. 1, pp. 34–81, 2009.
- A. M. Bruckstein, M. Elad, and M. Zibulevsky, “On the uniqueness of nonnegative sparse solutions to underdetermined systems of equations,” IEEE Transactions on Information Theory, vol. 54, no. 11, pp. 4813–4820, 2008.
- S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
- F. Chen and Y. Zhang, “Sparse hyperspectral unmixing based on constrained ℓp-ℓ2 optimization,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 5, pp. 1142–1146, 2013.
- L. Sun, Z. B. Wu, L. Xiao, J. J. Liu, Z. H. Wei, and F. X. Dang, “A novel sparse regression method for hyperspectral unmixing,” International Journal of Remote Sensing, vol. 34, no. 20, pp. 6983–7001, 2013.
- J. M. Bioucas-Dias and M. A. T. Figueiredo, “Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing,” in Proceedings of the IEEE 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, June 2010.
- M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.
- J. Nascimento and J. Bioucas-Dias, “Vertex component analysis: a fast algorithm to unmix hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 4, pp. 898–910, 2005.
Copyright © 2014 Chengzhi Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.