Research Article  Open Access
Backtracking-Based Simultaneous Orthogonal Matching Pursuit for Sparse Unmixing of Hyperspectral Data
Abstract
Sparse unmixing is a promising semi-supervised approach that assumes the observed signatures of a hyperspectral image can be expressed as linear combinations of only a few spectral signatures (endmembers) from an available spectral library. The simultaneous orthogonal matching pursuit (SOMP) algorithm is a typical simultaneous greedy algorithm for sparse unmixing, which finds the optimal subset of signatures for the observed data from a spectral library. However, the number of endmembers selected by SOMP is still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers. This paper presents a variant of SOMP, termed backtracking-based SOMP (BSOMP), for sparse unmixing of hyperspectral data. As an extension of SOMP, BSOMP incorporates a backtracking technique to check the reliability of the previously chosen endmembers and then deletes the unreliable ones. Through this modification, BSOMP can select the true endmembers more accurately than SOMP. Experimental results on both simulated and real data demonstrate the effectiveness of the proposed algorithm.
1. Introduction
With the rapid development of space technology, hyperspectral remote sensing imagery has gained more and more attention in many application domains, such as environmental monitoring, target detection, material identification, mineral exploration, and military surveillance. However, due to the low spatial resolution of hyperspectral imaging sensors, each pixel in a hyperspectral image often contains a mixture of several different materials. To deal with the problem of spectral mixing, hyperspectral unmixing decomposes each pixel's spectrum to identify and quantify the fractional abundances of the pure spectral signatures, or endmembers, in each mixed pixel [1, 2]. Two basic models are used to analyze the mixed pixel problem: the linear mixture model and the nonlinear mixture model. The linear mixture model assumes that each mixed pixel can be expressed as a linear combination of endmembers weighted by their corresponding abundances [3]. It has been widely applied for spectral unmixing due to its computational tractability and flexibility. Under the linear mixture model, traditional linear spectral unmixing algorithms based on geometry [4–8], statistics [5, 9], and nonnegative matrix factorization [10–12] have been proposed. However, some of these methods [10–12] are unsupervised and can extract virtual endmembers with no physical meaning. In [5], the presence in the data of at least one pure pixel per endmember is assumed. If the pure pixel assumption is not fulfilled, because of inadequate spatial resolution or the microscopic mixture of distinct materials, the unmixing results will not be accurate, and unmixing becomes a rather challenging task.
Sparse unmixing, a semi-supervised method, has been proposed to overcome this challenge. It assumes that the observed image can be expressed as linear combinations of spectral signatures from a spectral library known in advance [3, 13]. However, because the number of spectral signatures in the library is much larger than the number of endmembers in the hyperspectral image, the sparse unmixing problem is combinatorial, and it is difficult to find a unique, stable, and optimal solution. Fortunately, sparse linear regression techniques [14, 15] can be used to solve it.
Several sparse regression techniques, such as greedy algorithms (GAs) [16–19] and convex relaxation methods [20–23], are usually adopted to solve the sparse unmixing problem. Convex relaxation methods, such as SUnSAL [20], SUnSAL-TV [21], and CLSUnSAL [22], use the alternating direction method of multipliers to efficiently solve the convexified sparse regression problem, decomposing a complex problem into several simpler ones. Convex relaxation methods can obtain the global sparse optimization solution and are more sophisticated than the GAs; however, they are also far more computationally complex. The GAs, such as OMP [16], SP [17], and CGP [18, 19], select one or more potential endmembers from the spectral library in each iteration, namely, those having the largest correlation with the current residual of the input signal. The GAs can obtain an approximate solution of the ℓ0-norm problem without smoothing the penalty function and have low computational complexity. However, the endmember selection criterion of the GAs is not optimal in the sense of minimizing the residual of the new approximation, which means that a nonexisting endmember, once selected into the support set, will never be deleted. The GAs therefore tend to be trapped in local optima and are likely to miss some of the actual endmembers. To address this problem, several representative simultaneous greedy algorithms (SGAs) have been presented, including simultaneous subspace pursuit (SSP) [24], subspace matching pursuit (SMP) [24], and simultaneous orthogonal matching pursuit (SOMP) [25]. These SGAs divide the whole hyperspectral image into several blocks and pick some potential endmembers from the spectral library in each block. The endmembers picked in the individual blocks are then united into the endmember set of the whole hyperspectral data. Finally, the abundances are estimated over the whole hyperspectral data with the obtained endmember set.
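As a reference point for the discussion below, one SOMP iteration can be sketched in a few lines (a minimal illustration, not the implementation evaluated in this paper; the toy library, band count, and variable names are made up):

```python
import numpy as np

def somp(Y, A, n_iter):
    """Toy SOMP: in each iteration, pick the library column with the
    largest total correlation to the current residual across all pixels,
    then refit on the selected support (orthogonal projection)."""
    residual, support = Y.copy(), []
    for _ in range(n_iter):
        corr = np.abs(A.T @ residual).sum(axis=1)  # total correlation per column
        corr[support] = -np.inf                    # never re-select a column
        support.append(int(np.argmax(corr)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
    return support

# Tiny noiseless example: 20 bands, 8 orthonormal library columns,
# 3 pixels mixed from columns 1 and 4 only.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((20, 8)))
X_true = np.zeros((8, 3))
X_true[1], X_true[4] = [0.3, 0.5, 0.7], [0.7, 0.5, 0.3]
Y = A @ X_true
support = somp(Y, A, n_iter=2)
print(sorted(support))  # prints [1, 4]
```

With an orthonormal toy library the two true columns are recovered exactly; with a realistic, highly coherent spectral library the greedy picks can go wrong, which is exactly the failure mode the SGAs and the proposed backtracking address.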
The SGAs have the same low computational complexity as the GAs and can find the actual endmembers far more accurately. The SGAs adopt a block-processing strategy to efficiently alleviate the local optimum problem, but the endmembers picked across all the blocks are not all actual endmembers, and the nonexisting endmembers affect the estimation of the abundances corresponding to the actual endmembers. To overcome this drawback of the SGAs, RSFoBa [26] combines a forward greedy step and a backward greedy step, which allows it to select the actual endmembers more accurately than the SGAs.
Inspired by the existing SGA methods, we propose a sparse unmixing algorithm termed backtracking-based simultaneous orthogonal matching pursuit (BSOMP) in this paper. Similar to SOMP and SMP, it uses a block-processing strategy: the whole hyperspectral image is divided into several blocks, and in each block some potential endmembers are selected from the spectral library and added to the estimated endmember set. Furthermore, BSOMP incorporates a backtracking process to check the reliability of the previously chosen endmembers and then deletes the unreliable endmembers from the estimated endmember set in each iteration. Through this modification, BSOMP can identify the true endmember set more accurately than the other considered SGA methods.
The remainder of the paper is organized as follows. Section 2 introduces the simultaneous sparse unmixing model. In Section 3, we present the proposed BSOMP algorithm and provide a theoretical analysis of it. Section 4 provides a quantitative comparison between BSOMP and previous sparse unmixing algorithms, using both simulated and real hyperspectral data. Finally, we conclude in Section 5.
2. Simultaneous Sparse Unmixing Model
The linear sparse unmixing model assumes that the observed spectrum of a mixed pixel is a linear combination of a few spectral signatures present in a known spectral library. Let y ∈ R^L denote the measured spectrum vector of a mixed pixel with L bands, and let A ∈ R^{L×m} denote a spectral library, where m is the number of signatures in the library. Then the linear sparse unmixing model can be expressed as follows [23]:

y = Ax + n, (1)

where x ∈ R^m denotes the fractional abundance vector with regard to the library A and n ∈ R^L is the noise. Considering physical constraints, the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC) are imposed on the linear sparse model as follows:

x_i ≥ 0, i = 1, ..., m,  and  Σ_{i=1}^{m} x_i = 1, (2)

where x_i is the ith element of x.
The sparse unmixing problem can then be expressed as follows:

min_x ||x||_0  subject to  ||y − Ax||_2 ≤ δ,  x ≥ 0, (3)

where ||x||_0 (called the ℓ0 norm) denotes the number of nonzero components of x and δ is the tolerated error due to noise and model error. It is worth mentioning that we do not explicitly add the ASC in (3), because hyperspectral libraries generally contain only nonnegative components, and the nonnegativity of the sources automatically imposes a generalized ASC [23].
The simultaneous sparse unmixing model assumes that several input signals can be expressed as different linear combinations of the same elementary signals. This means that all the pixels in the hyperspectral image are constrained to share the same subset of endmembers selected from the spectral library. SGA methods can then be used for sparse unmixing, and the sparse unmixing model in (1) becomes

Y = AX + N, (4)

where Y ∈ R^{L×n} denotes the hyperspectral data matrix with L bands and n mixed pixels, A ∈ R^{L×m} denotes the spectral library, X ∈ R^{m×n} denotes the fractional abundance matrix, each column of which contains the abundance fractions of one mixed pixel, and N is the noise matrix.
Under the simultaneous sparse unmixing model, the simultaneous sparse unmixing problem can be expressed as follows:

min_X ||X||_{row,0}  subject to  ||Y − AX||_F ≤ δ,  X ≥ 0, (5)

where ||·||_F denotes the Frobenius norm, ||X||_{row,0} is the number of nonzero rows in the matrix X [25], and δ is the tolerated error due to noise and model error.
It should be noted that the model in (5) is reasonable because there should be only a few rows with nonzero entries in the abundance matrix, given that the number of endmembers in the hyperspectral image is small compared with the dimensionality of the spectral library [22].
3. Backtracking-Based SOMP
In this section, we first present our new algorithm, BSOMP, for sparse unmixing of hyperspectral data. We then give a theoretical analysis and a convergence theorem for the proposed algorithm.
3.1. Statement of Algorithm
The whole process of using BSOMP for sparse unmixing of hyperspectral data is summarized in Algorithm 1. The algorithm includes three main parts: SOMP for endmember selection, backtracking, and abundance estimation. In the first part, we adopt a block-processing strategy for SOMP to efficiently select endmembers: the whole hyperspectral image is divided into several blocks, and in each block SOMP picks several potential endmembers from the spectral library and adds them to the estimated endmember set. In the second part, we use a backtracking strategy to remove endmembers wrongly chosen in the previous step from the estimated endmember set and thereby identify the true endmember set more accurately. The backtracking is halted when the maximum total correlation between an endmember in the spectral library and the residual drops below a threshold. Finally, the abundances are estimated using the obtained endmember set under the nonnegativity constraint.
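The three parts described above can be sketched as follows. This is a simplified illustration under assumed parameter names, not the authors' implementation: it unions the per-block SOMP picks, prunes the weakest endmember while pruning does not increase the whole-image residual, and finishes with a crude nonnegative refit via clipping rather than a full NNLS solver.

```python
import numpy as np

def somp_block(Y, A, k):
    """Forward step: SOMP on one block, returning k candidate indices."""
    residual, support = Y.copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual).sum(axis=1)
        corr[support] = -np.inf
        support.append(int(np.argmax(corr)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
    return support

def bsomp(Y, A, block_size, k, sigma, tol=1e-8):
    """Toy BSOMP: union per-block picks, then backtrack over the WHOLE
    image, dropping the weakest endmember while removal does not raise
    the residual, until the maximum total correlation between any
    library column and the residual falls below sigma."""
    support = set()
    for s in range(0, Y.shape[1], block_size):     # part 1: per-block SOMP
        support |= set(somp_block(Y[:, s:s + block_size], A, k))
    support = sorted(support)
    while len(support) > 1:                        # part 2: backtracking
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
        if np.abs(A.T @ residual).sum(axis=1).max() < sigma:
            break
        weakest = int(np.argmin(np.linalg.norm(X_s, axis=1)))
        trial = support[:weakest] + support[weakest + 1:]
        X_t, *_ = np.linalg.lstsq(A[:, trial], Y, rcond=None)
        if np.linalg.norm(Y - A[:, trial] @ X_t) > np.linalg.norm(residual) + tol:
            break                                  # removal hurts: stop pruning
        support = trial
    X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
    return support, np.clip(X_s, 0.0, None)        # part 3: crude NNLS stand-in

# Noiseless demo: 20 bands, 8 orthonormal columns, true endmembers 1 and 4.
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((20, 8)))
X_true = np.zeros((8, 6))
X_true[1] = np.linspace(0.2, 0.8, 6)
X_true[4] = 1.0 - X_true[1]
Y = A @ X_true
support, X_hat = bsomp(Y, A, block_size=3, k=3, sigma=0.0)
print(support)  # the spurious third pick of each block has been pruned
```

Because k = 3 exceeds the true endmember count, each block's forward pass admits a spurious column; the backtracking pass over the whole image then deletes it, which is the behavior the paper's second part is designed to guarantee.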

In the following paragraphs, we provide some definitions and then give two stopping conditions for the BSOMP algorithm.
Definition 1. If x = (x_1, x_2, ..., x_m) is a vector, where x_i is the ith element of x, the ℓp norm of x is defined as ||x||_p = (Σ_{i=1}^{m} |x_i|^p)^{1/p}.
Definition 2. The (p, q) operator norm of a matrix A is defined as ||A||_{p,q} = max_{x ≠ 0} ||Ax||_q / ||x||_p.
Several of the operator norms can be computed easily using the following lemma [25, 27].
Lemma 3. The operator norm ||A||_{1,1} is the maximum ℓ1 norm of any column of A.
The operator norm ||A||_{2,2} is the maximum singular value of A.
The operator norm ||A||_{∞,∞} is the maximum ℓ1 norm of any row of A.
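These three identities are easy to check numerically; NumPy's `np.linalg.norm` implements exactly these induced matrix norms (a quick sanity check on a made-up matrix, not part of the original paper):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# (1,1) operator norm: maximum l1 norm of any column
max_col = np.abs(A).sum(axis=0).max()          # column sums: 4 and 6
assert np.isclose(max_col, np.linalg.norm(A, 1))

# (2,2) operator norm: maximum singular value
assert np.isclose(np.linalg.svd(A, compute_uv=False)[0],
                  np.linalg.norm(A, 2))

# (inf,inf) operator norm: maximum l1 norm of any row
max_row = np.abs(A).sum(axis=1).max()          # row sums: 3 and 7
assert np.isclose(max_row, np.linalg.norm(A, np.inf))
print(max_col, max_row)  # prints 6.0 7.0
```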
We give a stopping condition for the first part of the BSOMP algorithm: ||R^t||_F ≤ δ, where ||R^t||_F is the Frobenius norm of the residual corresponding to the tth iteration and δ is a threshold.
For the second part of the BSOMP algorithm, we give a stopping condition for deciding when to halt the iteration: max_j ||a_j^T R^t||_1 ≤ σ, where R^t is the residual corresponding to the tth iteration, a_j^T denotes the conjugate transpose of the jth library column a_j, max_j ||a_j^T R^t||_1 is the maximum total correlation between an endmember in the spectral library and the residual R^t, and σ is a threshold.
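In code, the two stopping tests reduce to a Frobenius-norm check and a max-correlation check (a sketch; the exact normalization of the thresholds in the paper may differ, and the toy values below are made up):

```python
import numpy as np

def stop_forward(residual, delta):
    """First rule: halt when the Frobenius norm of the residual
    drops below the threshold delta."""
    return np.linalg.norm(residual, 'fro') < delta

def stop_backtracking(A, residual, sigma):
    """Second rule: halt when the maximum total correlation between
    any library column and the residual drops below sigma."""
    return np.abs(A.T @ residual).sum(axis=1).max() < sigma

A = np.eye(3)
residual = 0.01 * np.ones((3, 2))
print(stop_forward(residual, delta=0.1),           # ||R||_F ~ 0.0245 < 0.1
      stop_backtracking(A, residual, sigma=0.01))  # max corr = 0.02, not < 0.01
```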
3.2. Theoretical Analysis
In this section, we will introduce some definitions and then give a theoretical analysis for the BSOMP algorithm.
Definition 4. The coherence μ of a spectral library A equals the maximum absolute correlation between two distinct spectral signatures [28]: μ = max_{i ≠ j} |⟨a_i, a_j⟩|, where a_i is the ith column of A. If the coherence parameter is small, each pair of spectral signatures is nearly orthogonal.
Definition 5. The cumulative coherence μ1(k) [29, 30] is defined as μ1(k) = max_{|Λ| = k} max_{j ∉ Λ} Σ_{i ∈ Λ} |⟨a_j, a_i⟩|, where the index set Λ ⊂ {1, ..., m} ranges over subsets of the spectral signatures in the spectral library A. The cumulative coherence measures the maximum total correlation between a fixed spectral signature and k distinct spectral signatures. In particular, μ1(1) = μ.
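Both quantities can be computed directly from the Gram matrix of the normalized library (a small illustration on a made-up library, not part of the original paper):

```python
import numpy as np

def coherence(A):
    """Mutual coherence: the maximum absolute inner product between
    two distinct (normalized) library columns."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

def cumulative_coherence(A, k):
    """mu_1(k): for each column, sum its k largest correlations with
    the other columns; return the maximum over columns."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return np.sort(G, axis=0)[-k:, :].sum(axis=0).max()

# Three columns; the first and third point the same way.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0]])
print(coherence(A), cumulative_coherence(A, 1))  # prints 1.0 1.0
```

Note that mu_1(1) coincides with the coherence, matching the definition above.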
Definition 6. Given the hyperspectral image data matrix Y and the available spectral library A, let Λ_opt be an index set listing the at most k endmembers present in the hyperspectral image scene; A_opt is the optimal submatrix of the spectral library containing the columns indexed by Λ_opt; X_opt is the optimal abundance matrix corresponding to A_opt; Y_opt = A_opt X_opt is a best k-term approximation of Y. Let Λ_t be the index set of endmembers retained after t iterations, and assume Λ_opt ⊆ Λ_t; A_t is the submatrix of the spectral library containing the columns indexed by Λ_t; X_t is the abundance matrix corresponding to A_t; Y_t = A_t X_t is the reconstruction of Y by the BSOMP algorithm after t iterations (assuming that the BSOMP algorithm removes one nonexisting endmember from the index set Λ_t in each iteration); R_t = Y − Y_t is the residual error of Y after t iterations.
Theorem 7. Suppose that the best approximation Y_opt of the hyperspectral image data matrix Y over the index set Λ_opt satisfies the error bound ||Y − Y_opt||_F ≤ ε, that the first part of BSOMP has selected potential endmembers from the spectral library to constitute the initial index set Λ_0, and that Λ_opt ⊆ Λ_0. After iteration t of the second part of BSOMP, halt the second part of the algorithm if the stated inequality holds. If this inequality fails, then BSOMP must remove another nonexisting endmember in iteration t + 1.
The proof of Theorem 7 is given in the Appendix. If the second part of the BSOMP algorithm terminates at the end of iteration t, we may conclude that the BSOMP algorithm has removed the nonexisting endmembers from the index set and has retained the optimal endmembers.
4. Experimental Results
In this section, we use both simulated and real hyperspectral data to demonstrate the performance of the proposed algorithm. The BSOMP algorithm is compared with three SGAs (SOMP [25], SMP [24], and RSFoBa [26]) and three convex relaxation methods (SUnSAL [20], SUnSAL-TV [21], and CLSUnSAL [22]). All the considered algorithms take the abundance nonnegativity constraint into account. The TV regularizer used in SUnSAL-TV is the non-isotropic one. For the RSFoBa algorithm, the norm considered in our experiment is the one used in [26]. As mentioned above, when the spectral library is overcomplete, the GAs behave worse than the SGAs and the convex relaxation methods; thus, we do not include the GAs in our experiment.
In the simulated data experiments, we evaluated the performance of the sparse unmixing algorithms under different SNR levels of noise; the signal-to-reconstruction error (SRE) [21] is used to measure the quality of the reconstructed abundances: SRE = E[||x||_2^2] / E[||x − x̂||_2^2], reported in decibels as SRE (dB) = 10 log_10(SRE), where x denotes the true abundances of the spectral endmembers and x̂ denotes the reconstructed abundances.
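The SRE computation is a one-liner (assuming the standard dB form given above; variable names are illustrative):

```python
import numpy as np

def sre_db(X_true, X_hat):
    """Signal-to-reconstruction error in dB:
    10 * log10(||X||_F^2 / ||X - X_hat||_F^2)."""
    num = np.linalg.norm(X_true) ** 2
    den = np.linalg.norm(X_true - X_hat) ** 2
    return 10.0 * np.log10(num / den)

X_true = np.ones((5, 4))
X_hat = 0.9 * X_true          # uniform 10% underestimate
print(sre_db(X_true, X_hat))  # ~ 20 dB
```

Higher SRE means the estimated abundance matrix is closer to the truth; a 10x reduction in error energy adds 10 dB.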
4.1. Evaluation with Simulated Data
In the simulated experiments, the United States Geological Survey (USGS) digital spectral library (splib06a) [31] is used to build the spectral library A. The reflectance values of 498 spectral signatures are measured for 224 spectral bands distributed uniformly in the interval 0.4–2.5 μm. Fifteen spectral signatures were chosen from the spectral library to generate our simulated data. Figure 1 shows five spectral signatures used for all the simulated data experiments. The other ten spectral signatures that are not displayed in the figure are Monazite HS255.3B, Zoisite HS347.3B, Rhodochrosite HS67, Neodymium Oxide GDS34, Pigeonite HS199.3B, Meionite WS700.Hlsep, Spodumene HS210.3B, Labradorite HS17.3B, Grossular WS484, and Wollastonite HS348.3B.
In the Simulated Data 1 experiment, we generated seven data cubes with 224 bands per pixel, each containing a different number of endmembers: 3, 5, 7, 9, 11, 13, and 15. In each simulated pixel, the fractional abundances of the endmembers were randomly generated following a Dirichlet distribution [23]. Note that there was no pure pixel in Simulated Data 1, and the fractional abundances of the endmembers were kept below 0.8. After Simulated Data 1 was generated, Gaussian white noise or correlated noise was added at different levels of the signal-to-noise ratio (SNR): 20, 25, 30, 35, 40, 45, and 50 dB. The Gaussian white noise was generated using the awgn function in MATLAB. The correlated noise was obtained by low-pass filtering i.i.d. Gaussian noise.
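The generation procedure can be sketched as follows. This is a simplified version of the described setup, not the authors' exact generator: the Dirichlet draw enforces the ANC and ASC by construction, but the resampling step that caps abundances at 0.8 is omitted, and all names are illustrative.

```python
import numpy as np

def simulate_cube(A, endmember_idx, n_pixels, snr_db, seed=0):
    """Mix n_pixels pixels from the given library columns with
    Dirichlet abundances and add white Gaussian noise at snr_db."""
    rng = np.random.default_rng(seed)
    X = np.zeros((A.shape[1], n_pixels))
    X[endmember_idx, :] = rng.dirichlet(np.ones(len(endmember_idx)),
                                        size=n_pixels).T
    Y_clean = A @ X
    # scale the noise so that 10*log10(P_signal / P_noise) = snr_db
    noise_power = np.mean(Y_clean ** 2) / 10 ** (snr_db / 10)
    Y = Y_clean + rng.normal(0.0, np.sqrt(noise_power), Y_clean.shape)
    return Y, X

A = np.random.default_rng(1).random((224, 20))   # stand-in library
Y, X = simulate_cube(A, [2, 5, 9], n_pixels=100, snr_db=30)
print(Y.shape)  # abundances sum to 1 per pixel by construction
```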
Simulated Data 2 was generated using nine randomly selected spectral signatures from A, with 224 bands per pixel, and was provided by Dr. M. D. Iordache and Prof. J. M. Bioucas-Dias. The fractional abundances satisfy the ANC and the ASC and are piecewise smooth. The true abundances of three endmembers are illustrated in Figure 2. After Simulated Data 2 was generated, Gaussian white noise or correlated noise with seven levels of SNR (20, 25, 30, 35, 40, 45, and 50 dB) was added as well.
In the two simulated data experiments, all the SGA methods adopted the block-processing strategy to obtain better performance. The block sizes of the SGA methods were empirically set to 10, and the stopping threshold parameter of SOMP and SMP was set to 1 × 10^{−2}. The stopping threshold parameters δ and σ of BSOMP were set to 1 × 10^{−2} and 0.1. RSFoBa was tested using different values of its two parameters: 0 and 5 × 10^{−3} for the Simulated Data 1 experiment; 1 and 1 × 10^{−2} for the Simulated Data 2 experiment. SUnSAL-TV was tested using different values of its two regularization parameters: 1 × 10^{−2} and 1 × 10^{−2}, 1 × 10^{−2} and 1 × 10^{−2}, 1 × 10^{−2} and 5 × 10^{−3}, 5 × 10^{−3} and 1 × 10^{−3}, 1 × 10^{−3} and 5 × 10^{−4}, 5 × 10^{−4} and 1 × 10^{−4}, and 5 × 10^{−4} and 1 × 10^{−4} for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively. SUnSAL and CLSUnSAL were tested using different values of the regularization parameter: 1 × 10^{−2}, 5 × 10^{−3}, 1 × 10^{−3}, 5 × 10^{−4}, 5 × 10^{−4}, 5 × 10^{−4}, and 5 × 10^{−4} for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively.
Figure 2 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of the endmember number for the different tested methods. The SREs of all the algorithms decrease as the endmember number increases. BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. All the SGA methods perform comparably to SUnSAL-TV, and BSOMP always obtains the best estimation of the abundances when the number of endmembers is less than 15. Figure 3 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of SNR when the endmember number is eleven. The SREs of all the algorithms decrease as the SNR decreases. All the SGA methods perform better than SUnSAL-TV, SUnSAL, and CLSUnSAL, which indicates that the block-processing strategy adopted by the SGA methods effectively picks up all the actual endmembers. Among the SGA methods, BSOMP and RSFoBa behave better than SOMP and SMP, and BSOMP always obtains the best estimation of the abundances. Figures 4 and 5 show the SRE results obtained on Simulated Data 1 with correlated noise as a function of endmember number and SNR, respectively. As shown in Figures 4 and 5, BSOMP achieves an overall better performance than the other algorithms.
Figure 6 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP. It is worth mentioning that many endmembers in the estimated endmember set of SOMP are nonexisting and degrade the performance of SOMP. BSOMP, in contrast, incorporates a backtracking process to remove wrongly chosen endmembers from the estimated endmember set, which allows it to select the actual endmembers more accurately than SOMP. Differing from RSFoBa, BSOMP applies the backtracking process to the whole hyperspectral data to remove endmembers wrongly chosen during the earlier block processing, which allows it to identify the true endmember set more accurately than RSFoBa. This indicates that BSOMP behaves better than the other SGA methods.
Figures 7 and 8 show the SRE results obtained on Simulated Data 2 with white noise and correlated noise as a function of SNR, respectively. BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. This indicates that the block-processing strategy adopted by the SGA methods effectively picks up all the actual endmembers, and that the spatial-contextual coherence regularizer incorporated in SUnSAL-TV gives it better performance than SUnSAL and CLSUnSAL. All the SGA methods perform comparably to SUnSAL-TV, with BSOMP and RSFoBa always obtaining the better abundance estimates. RSFoBa incorporates the spatial-contextual coherence regularizer to select the actual endmembers more accurately than SOMP and SMP, while BSOMP achieves this through its backtracking process over the whole hyperspectral data.
For visual comparison, Figure 9 shows the true abundance maps and the abundance maps estimated by the different algorithms on Simulated Data 2 with 20 dB white noise. In Figure 9, the abundance maps obtained by SUnSAL-TV contain fewer noise points. However, the abundance maps estimated by SUnSAL-TV exhibit an over-smooth visual effect around some pixels; some of the details and edges are not well preserved. Compared with SUnSAL-TV, the abundances estimated by BSOMP contain more noise points but preserve more edge regions and are closer to the true abundances; moreover, it is much easier to tell the different materials apart. Compared with SOMP and SMP, the abundances estimated by BSOMP contain fewer noise points and preserve more edge regions, so they are more accurate and have a better visual effect.
Table 1 shows the processing times measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time-consuming than the other algorithms, and that BSOMP, which incorporates a backtracking technique, is more time-consuming than SOMP.

To make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, we compare several of their properties in Table 2.

SUnSAL is a convex relaxation method for sparse unmixing that solves the ℓ1-norm sparse regression problem. CLSUnSAL is a convex relaxation method that solves a joint-sparsity optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions, while the latter enforces joint sparsity among all the pixels. SUnSAL-TV extends SUnSAL by incorporating a spatial-contextual coherence regularizer. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer makes SUnSAL-TV more effective.
SOMP and SMP are forward SGAs that select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA that incorporates a backward step, spatial-contextual information, and joint sparsity into the greedy algorithm. BSOMP is a forward-backward SGA that incorporates a backward step and joint sparsity into the greedy algorithm. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number. BSOMP and RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which applies a forward step and a backward step within each block, BSOMP applies a forward step to pick potential endmembers in each block and then applies a backward step over the whole hyperspectral data to remove wrongly chosen endmembers from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size, where BSOMP performs better than the other SGAs. Figure 11 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP selects the actual endmembers more accurately than RSFoBa. This result indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.
4.2. Evaluation with Real Data
The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The scene used here is a subset of the full image, with 188 spectral bands (the water absorption bands and low-SNR bands having been removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder/), is shown in Figure 12. Note that the Tricorder map applies to the hyperspectral data collected in 1995, whereas the AVIRIS Cuprite data used here was collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.
For the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral image are used to evaluate the performance of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. As in [22], we count fractional abundances larger than 0.001 as nonzero to avoid counting negligible values. The reconstruction error is calculated between the original hyperspectral image and the one reconstructed from the selected endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error: RMSE = ((1/(Ln)) ||Y − Ŷ||_F^2)^{1/2}, where Y denotes the original hyperspectral image and Ŷ denotes the reconstructed one. A smaller RMSE reflects better unmixing performance.
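Both figures of merit are straightforward to compute (a sketch of the formulas as stated above; the toy matrices are made up):

```python
import numpy as np

def rmse(Y, Y_hat):
    """Root mean square reconstruction error over all bands and pixels."""
    return np.sqrt(np.mean((Y - Y_hat) ** 2))

def avg_sparsity(X, cutoff=1e-3):
    """Average number of abundances above the 0.001 cutoff per pixel,
    matching the counting rule described above."""
    return float((X > cutoff).sum(axis=0).mean())

X = np.array([[0.5, 0.0],
              [0.5, 1.0],
              [0.0, 0.0]])
print(rmse(np.zeros((2, 2)), np.ones((2, 2))), avg_sparsity(X))  # prints 1.0 1.5
```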
The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The two regularization parameters of SUnSAL-TV were set to 0.001 and 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter of SOMP and SMP was set to 0.01. The stopping threshold parameters δ and σ of BSOMP were set to 0.01 and 0.1. The two parameters of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction error obtained by the different algorithms. From Table 3, we can see that BSOMP, SOMP, SMP, and SUnSAL obtain sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves nearly as low a reconstruction error as SUnSAL-TV but obtains a much sparser solution. In general, BSOMP obtains the sparsest solutions while achieving a low reconstruction error, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by the proposed BSOMP and by SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV on the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted from the Tricorder 3.3 software are also presented in Figure 13. It can be observed that the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals than those of the other considered algorithms, with regard to the classification maps produced by the Tricorder 3.3 software. We also analyzed the sparsity of the abundances estimated by BSOMP: the average number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 per pixel. This means that the algorithm uses a small number of endmembers to explain the data. Overall, the qualitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.
5. Conclusions
In this paper, we have presented a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is to incorporate a backtracking step that checks the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic and real data demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.
It is worth discussing why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation step then makes the abundance vectors sparser still over this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa lies in the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the number of endmembers selected is still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.
Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to check the reliability of the previously chosen endmembers and delete the nonexisting ones, so they select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former applies its backward step over the whole hyperspectral data, while the latter applies it within each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP selects the actual endmembers more accurately than the other SGAs, which benefits the abundance estimation.
Appendix
Proof of Theorem 7
First, we introduce some definitions that will be used in the proof.
Definition A.1. Given the hyperspectral image data matrix and the available spectral library matrix, we consider the submatrix of the spectral library that contains the columns indexed in the current index set and the complementary submatrix that contains the remaining columns; the submatrix of the former that contains the remaining columns indexed in the optimal index set; and the submatrix of the abundance matrix that contains the rows listed in the corresponding index set.
Now we will prove the theorem on the performance of the algorithm.
Proof. Let  denote the orthogonal projector onto the range of :

where  denotes the conjugate transpose of . Since the columns of  are orthogonal to the columns of  (), we have

The maximum correlation between  (a submatrix of the spectral library ) and the residual  is smaller than the maximum correlation between the spectral library  and the residual , so we have

Invoke (A.2) and (A.3) and apply the triangle inequality to get

Then the matrix  can be expressed in the following manner:

The last equality holds because  is orthogonal to the range of . Applying the triangle inequality, the second term of (A.4) can be rewritten as follows:

We will apply the cumulative coherence function to deduce the estimates of (A.6); the deduction has five steps.
Each row of the matrix  lists the inner products between a spectral endmember and  () distinct endmembers; we have

Use the fact that  is submultiplicative and invoke (A.1) to get

Making use of the same deduction as in step , we have

Since the columns of  are normalized, the Gram matrix  lists the inner products between different endmembers, so the matrix  has a unit diagonal, and then we can get

where ; then we can expand the inverse in a Neumann series to get

Invoke the bounds from steps  to  into (A.6); we obtain

Simplify expression (A.12) to conclude that

In order to produce an upper bound of , we deduce the following formula by using the triangle inequality:

since  is orthogonal to the range of . Introduce (A.5) into (A.14); we get

The equality holds because  is orthogonal to the range of ; we can replace  by . Let  and rewrite ; we can expand its inverse in a Neumann series to obtain

Recall the definition of , and apply the triangle inequality to obtain

The right-hand side of the inequality can be deduced in a manner completely analogous to steps ; we obtain

Introduce (A.18) and (A.16) into (A.14); we have

Using the same deduction as (A.3), it follows that . So (A.19) can be rewritten as follows:

Introduce (A.20) into (A.13); we get

Introduce (A.21) into (A.4) to get

Finally, assume we have a bound , so (A.22) can be rewritten as follows:

This completes the argument.
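Two ingredients of the proof can be checked numerically: the orthogonal projector onto the range of a full-column-rank matrix, and the Neumann-series expansion of an inverse used for the Gram matrix. The following sketch verifies both on small random matrices (real-valued, so the conjugate transpose reduces to the ordinary transpose); the sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))   # full column rank almost surely

# Orthogonal projector onto the range of A: P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(P @ P, P)      # idempotent
assert np.allclose(P, P.T)        # self-adjoint
assert np.allclose(P @ A, A)      # fixes every vector in the range of A

# Neumann series: if the spectral norm of M is < 1, then
# (I - M)^{-1} = sum_k M^k, the expansion used twice in the proof.
M = rng.standard_normal((3, 3))
M /= 2 * np.linalg.norm(M, 2)     # force spectral norm 1/2
series = sum(np.linalg.matrix_power(M, k) for k in range(60))
assert np.allclose(series, np.linalg.inv(np.eye(3) - M))
```

The Neumann expansion is what lets the proof bound the norm of the inverse Gram matrix by a geometric series in the cumulative coherence.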
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors would like to thank the anonymous reviewers for their helpful comments, which improved this paper.
References
 N. Keshava and J. F. Mustard, “Spectral unmixing,” IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.
 Y. H. Hu, H. B. Lee, and F. L. Scarpace, “Optimal linear spectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.
 J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., “Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.
 M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, “Two lattice computing approaches for the unsupervised segmentation of hyperspectral images,” Neurocomputing, vol. 72, no. 10-12, pp. 2111–2120, 2009.
 J. Li and J. M. Bioucas-Dias, “Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.
 H. Pu, B. Wang, and L. Zhang, “Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing,” Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.
 G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, “Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts,” IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.
 T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, “A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing,” IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.
 M. Arngren, M. N. Schmidt, and J. Larsen, “Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images,” in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.
 V. P. Pauca, J. Piper, and R. J. Plemmons, “Nonnegative matrix factorization for spectral data analysis,” Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.
 N. Wang, B. Du, and L. Zhang, “An endmember dissimilarity constrained nonnegative matrix factorization method for hyperspectral unmixing,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.
 Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, “Sparse nonnegative matrix factorization on GPUs for hyperspectral unmixing,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.
 J. B. Greer, “Sparse demixing of hyperspectral images,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.
 J. A. Tropp and S. J. Wright, “Computational methods for sparse solution of linear inverse problems,” Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.
 M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.
 J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
 W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
 I. Marques and M. Graña, “Hybrid sparse linear and lattice method for hyperspectral image unmixing,” in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.
 I. Marques and M. Graña, “Greedy sparsification WM algorithm for endmember induction in hyperspectral images,” in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.
 M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.
 M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Total variation spatial regularization for sparse hyperspectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.
 M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Collaborative sparse unmixing of hyperspectral data,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.
 J. M. Bioucas-Dias and M. A. T. Figueiredo, “Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing,” in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.
 Z. Shi, W. Tang, Z. Duren, and Z. Jiang, “Subspace matching pursuit for sparse unmixing of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.
 J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: greedy pursuit,” Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.
 W. Tang, Z. Shi, and Y. Wu, “Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.
 J. Chen and X. Huo, “Theoretical results on sparse representations of multiple-measurement vectors,” IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.
 G. Davis, S. Mallat, and M. Avellaneda, “Greedy adaptive approximation,” Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.
 J. A. Tropp, “Greed is good: algorithmic results for sparse approximation,” IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.
 D. L. Donoho and M. Elad, “Maximal sparsity representation via l1 minimization,” Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.
 R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, U.S. Geological Survey, Denver, Colo, USA, 2007.
 J. Plaza, A. Plaza, P. Martínez, and R. Pérez, “H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms,” in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.
Copyright
Copyright © 2015 Fanqiang Kong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.