Support Recovery of Greedy Block Coordinate Descent Using the Near Orthogonality Property
In this paper, using the near orthogonality property, we analyze the performance of the greedy block coordinate descent (GBCD) algorithm when both the measurements and the measurement matrix are perturbed by errors. An improved sufficient condition is presented to guarantee that the support of the sparse matrix is recovered exactly, and a counterexample is provided to show that GBCD can fail; both results improve on existing ones. Experiments further show that GBCD is robust under these perturbations.
The greedy block coordinate descent (GBCD) algorithm was originally proposed for direction of arrival (DOA) estimation. In that work, DOA estimation is cast as a multiple measurement vectors (MMV) model, in which a common support shared by multiple unknown vectors is recovered from multiple measurements. The authors provided a sufficient condition, based on mutual coherence, guaranteeing that GBCD exactly recovers the nonzero support from noiseless measurements.
Recently, the totally perturbed model (1) has been discussed, in which the measurements are contaminated by additive noise and the measurement matrix by a system perturbation. The two perturbations are quantified by relative bounds, assumed nonzero, stated in terms of the largest spectral norm taken over all column submatrices of a given size. Throughout the paper, we are only interested in the case where both relative perturbation levels are far less than 1. In (1), the unknown is a group sparse matrix; that is, it has no more than a prescribed number of nonzero rows. It is assumed that all columns of the measurement matrix are normalized to unit norm. Both the measurements and the measurement matrix are perturbed in (1); this setting arises in source separation, radar, remote sensing, and countless other problems, and total perturbations have also been discussed in [7–9].
One of the most commonly used conditions is the restricted isometry property (RIP). A matrix satisfies the RIP of a given order if there exists a constant such that (3) holds for all correspondingly sparse vectors. The minimum of all constants satisfying (3) is called the restricted isometry constant (RIC).
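As a concrete illustration of this definition, the RIC of a small matrix can be computed by brute force: it is the largest deviation from 1 of the Gram-matrix eigenvalues, maximized over all submatrices with the given number of columns. The following Python sketch is illustrative only (names and sizes are our own, not the paper's); the computation is combinatorial and feasible only for tiny matrices.

```python
import numpy as np
from itertools import combinations

def ric(Phi, k):
    """Smallest delta with (1-delta)||x||^2 <= ||Phi x||^2 <= (1+delta)||x||^2
    for all k-sparse x: the max over k-column submatrices A of ||A^T A - I||_2."""
    n = Phi.shape[1]
    delta = 0.0
    for cols in combinations(range(n), k):
        A = Phi[:, cols]
        eigs = np.linalg.eigvalsh(A.T @ A)  # eigenvalues of the Gram matrix
        delta = max(delta, max(abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0)))
    return delta

# A matrix with orthonormal columns has RIC 0 at any order up to its column count.
np.random.seed(0)
Q, _ = np.linalg.qr(np.random.randn(6, 3))
print(ric(Q, 2))  # essentially 0 (up to floating-point error)
```

This also shows why the RIC is hard to certify in practice: the number of submatrices grows combinatorially with the order.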
Many papers [8, 10–14] discuss sufficient conditions for orthogonal matching pursuit (OMP), one of the most widely used greedy algorithms for sparse recovery. Using the near orthogonality property, the sufficient condition for OMP has been improved. The near orthogonality property refines the orthogonality characterization of the columns of the measurement matrix and plays a fundamental role in studying signal reconstruction performance in compressed sensing. In the noiseless case, the performance of GBCD has been analyzed using the near orthogonality property, improving earlier results.
In this paper, under the total perturbations, we use the near orthogonality property to improve the theoretical guarantee for the GBCD algorithm: we weaken the RIC-based sufficient condition stated in earlier work. We also present a counterexample showing that GBCD can fail; the example is sharper than the existing one. Finally, the robustness of GBCD under the total perturbations is shown by experiments.
We now fix the notation used in this paper: individual columns of a matrix, the transpose, the identity matrix, the vectorization operator (which stacks the columns of a matrix one underneath the other), the cardinality of a finite set, the support of the sparse matrix (the index set of its nonzero rows), the largest spectral norm taken over all column submatrices of a given size, the maximum norm of the rows of a matrix, the column and row submatrices indexed by a given set, and the standard unit vectors.
2. Problem Formulation
Assume the stated factorization. The objective function in (4) can then be written in vectorized form using the Kronecker product. Combining a quadratic approximation with the standard BCD algorithm, the solution to each subproblem is given by a soft-thresholding operator. GBCD updates only the block that yields the greatest descent distance. The GBCD algorithm is listed as Algorithm 1.
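Since Algorithm 1 itself is not reproduced here, the following Python sketch illustrates the greedy block selection and row-wise soft-thresholding just described. It assumes the model Y ≈ AX with unit-norm columns of A and a regularization weight `lam`; all names are illustrative and this is not the paper's exact listing.

```python
import numpy as np

def soft_threshold_row(v, lam):
    # Row-wise soft-thresholding: shrink the l2 norm of the row by lam,
    # setting the row to zero when its norm is at most lam.
    nrm = np.linalg.norm(v)
    if nrm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / nrm) * v

def gbcd(A, Y, lam, n_iter=100):
    """Sketch of greedy BCD for min 0.5*||Y - A X||_F^2 + lam * sum_i ||X_i||_2.
    Every row (block) candidate update is computed via soft-thresholding, but
    only the block with the greatest descent distance is actually updated."""
    n = A.shape[1]
    X = np.zeros((n, Y.shape[1]))
    R = Y - A @ X  # residual
    for _ in range(n_iter):
        best_i, best_dist, best_row = -1, 0.0, None
        for i in range(n):
            # Candidate update of block i (unit-norm columns give unit step size).
            g = A[:, i].T @ R + X[i]
            new_row = soft_threshold_row(g, lam)
            dist = np.linalg.norm(new_row - X[i])
            if dist > best_dist:
                best_i, best_dist, best_row = i, dist, new_row
        if best_i < 0:  # no block moves: converged
            break
        R = R + np.outer(A[:, best_i], X[best_i] - best_row)  # update residual
        X[best_i] = best_row
    return X
```

With a small regularization weight and a well-conditioned A, the rows of largest norm in the output concentrate on the true support, which is the support-recovery behavior analyzed in this paper.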
Suppose that the measurement matrix satisfies the RIP of the appropriate order with the corresponding RIC, and recall that the sparse matrix has no more than the allowed number of nonzero rows. Since the squared Frobenius norm is the sum of the squared column norms, the corresponding matrix bound follows from (3).
Lemma 1 (near orthogonality property). Let two sparse vectors be orthogonal, with support sizes whose sum does not exceed the RIP order. Suppose that the matrix satisfies the RIP of that order with the corresponding RIC. Then the stated angle bound holds, where the angle is the one between the images of the two vectors under the matrix.
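The phenomenon behind the lemma can be checked numerically: for a random Gaussian matrix and two sparse vectors with disjoint supports (hence orthogonal), the angle between their images stays close to 90 degrees. The sizes, scaling, and seed in this Python illustration are arbitrary choices of ours, not values from the paper.

```python
import numpy as np

# Numerical illustration (not Lemma 1 itself): images of disjoint-support
# sparse vectors under a random Gaussian matrix remain nearly orthogonal.
rng = np.random.default_rng(1)
m, n = 80, 200
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # typical RIP-style scaling

u = np.zeros(n)
u[[3, 17, 42]] = rng.standard_normal(3)    # 3-sparse
v = np.zeros(n)
v[[5, 90, 150]] = rng.standard_normal(3)   # disjoint support, so u is orthogonal to v

cos_angle = (Phi @ u) @ (Phi @ v) / (np.linalg.norm(Phi @ u) * np.linalg.norm(Phi @ v))
angle_deg = np.degrees(np.arccos(cos_angle))
print(angle_deg)  # close to 90 degrees
```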
Lemma 3. For two finite index sets as stated, with the corresponding submatrices and sparsity levels defined as above, if the matrix satisfies the RIP condition (3) with the stated order, then the bound below holds.
3. RIP Based Recovery Condition
In this section, we first present the upper bound of the noise matrix and then provide the recovery condition for GBCD.
Lemma 4. Suppose that the matrix satisfies the RIP of the stated order with the corresponding RIC. Then the bound below holds.
According to the selection steps of Algorithm 1, at each iteration GBCD obtains a correct index if the inequality below holds.
Theorem 5. Consider model (4). If the matrix satisfies the RIP of the stated order with the RIC bound (21), then GBCD can exactly recover the support set.
Proof. Consider the first iteration, where the initial value is zero. To guarantee that GBCD selects a correct index, combining the selection step of Algorithm 1 and (20), we should verify inequality (26). If the relevant quantity is zero, the right-hand side is 0 and inequality (26) holds trivially, so we only consider the nontrivial case. By a remark in the cited work, inequality (26) is true when (27) holds, so it is sufficient to verify (27). We first construct an upper bound via the selection step of Algorithm 1, where (32) follows from the norm property and (34) follows from the unit-norm columns together with Lemmas 3 and 4.
To prove (27), we only need to prove (35). We show (35) by contradiction: assume that (37) holds for all relevant indices. Then, using the triangle inequality, we obtain the chain of bounds, where (40) follows from (10) and the norm property.
After straightforward manipulations, we have the chain of inequalities, where (41) follows from (21) and (43) follows from the stated bound and (22).
Obviously, (44) contradicts (37); this guarantees (27).
Now assume that GBCD has picked correct indices from the support in the first several iterations. To prove that GBCD chooses a correct index at the next iteration as well, it suffices, analogously to the first step, to verify inequality (46). Combining the selection step of Algorithm 1 with (46) yields (48), so it is sufficient to prove that (48) holds. Using the stated identity, we only need to prove (52). We show that (52) is true by contradiction: assume that (53) holds for all relevant indices. Using the definition of the Frobenius norm, we obtain the intermediate bounds; combining them with (21) and (22), we arrive at (59), which follows from the stated inequality. This contradicts (53). Thus, (48) is true.
Remark 6. The weaker the RIC bound is, the fewer measurements are required, and improved RIC results can be used in many CS-based applications. The condition for GBCD provided in earlier work is obviously smaller, that is, more restrictive, than the bound in (21).
4. The Counterexample
Consider the measurements in (62). In this section, by constructing a matrix whose RIC slightly exceeds the bound in (21), we verify that GBCD can fail to recover the support of the sparse matrix from (62).
Let the sparse matrix be as in (63), where the stated parameter is far less than 1; this is reasonable.
The measurement matrix is constructed as in (64), where the blocks are given below.
The eigenvalues of the relevant Gram matrix are as stated. Thus, the RIC of the constructed matrix takes the corresponding value.
Recall that condition (27) is the recovery criterion for GBCD. One can obtain the first bound; on the other hand, we have the second.
This obviously contradicts (27). Thus, GBCD fails to recover the support.
Remark 7. In earlier work, a matrix with the stated RIC was presented, and GBCD was shown to fail when it is used as the measurement matrix. After a simple calculation comparing the two RICs, we see that our result improves this existing result.
5. Experimental Results
In this section, under the total perturbations, we test the performance of the GBCD algorithm for solving the DOA estimation problem.
Consider narrowband far-field point source signals impinging on a uniform linear array. The steering vectors that form the columns of the matrix are given above, and the number of snapshots is as stated.
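A standard way to build such a steering matrix is sketched below in Python. The half-wavelength element spacing, the angular grid, and the unit-norm normalization are our assumptions for illustration; the paper's exact steering vector is defined in its own equation.

```python
import numpy as np

def steering_matrix(n_elements, grid_deg, d_over_lambda=0.5):
    """Columns are ULA steering vectors a(theta) for each angle on the grid.
    Assumes element spacing d = d_over_lambda * wavelength."""
    theta = np.deg2rad(np.asarray(grid_deg, dtype=float))
    elems = np.arange(n_elements)[:, None]  # element indices 0..M-1
    A = np.exp(-2j * np.pi * d_over_lambda * elems * np.sin(theta)[None, :])
    return A / np.sqrt(n_elements)          # unit-norm columns, as the model assumes

A = steering_matrix(20, np.arange(-90, 91, 2))
print(A.shape)  # (20, 91)
```

The unit-norm normalization matches the assumption, made throughout the paper, that all columns of the measurement matrix have unit norm.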
Using the sparse optimization approach, the DOA estimation problem can be rewritten as model (1). The aim is then to find out which rows of the matrix are nonzero, that is, the support of the matrix.
Analogous to earlier simulations, we make the following assumptions:
(i) the number of array elements is as stated;
(ii) the number of snapshots is as stated;
(iii) the grid spacing spans the stated angular range;
(iv) five uncorrelated signals impinge from the five stated directions;
(v) both the signals and the noise are white and follow Gaussian distributions, with the stated powers for the nonzero entries of the signal matrix and for the noise entries;
(vi) the following SNR1 and SNR2 are used to measure the measurement noise and the system perturbation, respectively.
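One common way to realize a prescribed SNR in such simulations is to scale white Gaussian noise against the signal energy; this is a sketch of that idea, not the paper's exact SNR1/SNR2 formulas, which are defined in its own equations.

```python
import numpy as np

def add_noise(Y, snr_db, rng=None):
    """Add white Gaussian noise to Y, scaled so that
    10*log10(||Y||^2 / ||noise||^2) equals snr_db exactly."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(Y.shape)
    scale = np.linalg.norm(Y) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return Y + scale * noise

Y = np.ones((8, 4))
Yn = add_noise(Y, 20.0)  # measurements at 20 dB SNR
```

The same recipe can perturb the measurement matrix itself, which is how the total-perturbation setting of this paper can be simulated.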
Define the root mean square error (RMSE) over 500 Monte Carlo trials as the performance index, where the estimated directions at each trial are compared with the true ones.
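A minimal sketch of such an RMSE index, assuming the common normalization by the total number of estimates (the paper's exact formula appears in its RMSE equation):

```python
import numpy as np

def rmse(estimates, truth):
    """RMSE over Monte Carlo trials.
    estimates: (n_trials, K) estimated DOAs; truth: (K,) true DOAs."""
    estimates = np.asarray(estimates, dtype=float)
    truth = np.asarray(truth, dtype=float)
    err = estimates - truth[None, :]
    # mean over all trials and all sources, then square root
    return np.sqrt(np.mean(err ** 2))

print(rmse([[10.0, 20.0], [12.0, 18.0]], [10.0, 20.0]))  # sqrt(2), about 1.414
```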
Figure 1 fixes one perturbation and describes the performance of GBCD as SNR1 varies: RMSE decreases as SNR1 increases. Figure 2 fixes the other perturbation and shows that RMSE decreases as SNR2 increases. Thus, the performance of GBCD remains robust under the total perturbations.
In this paper, using the near orthogonality property, we have provided a recovery condition for GBCD under the total perturbations. A counterexample was presented to show that GBCD can fail, and experiments showed that GBCD is robust under the total perturbations.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (nos. 11526081, 11601134, 11671122, and U1404603), the Scientific Research Foundation for Ph.D. of Henan Normal University (no. qd14142), and the Key Scientific Research Project of Colleges and Universities in Henan Province (no. 17A110008).
T. Blumensath and M. Davies, "Compressed sensing and source separation," in Proceedings of the International Conference on Independent Component Analysis and Signal Separation (ICA '07), pp. 341–348, 2007.