Mathematical Problems in Engineering
Volume 2017, Article ID 4903791, 7 pages
https://doi.org/10.1155/2017/4903791
Research Article

Support Recovery of Greedy Block Coordinate Descent Using the Near Orthogonality Property

College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China

Correspondence should be addressed to Haifeng Li; lihaifengxx@126.com

Received 23 November 2016; Accepted 15 March 2017; Published 27 April 2017

Academic Editor: Bogdan Dumitrescu

Copyright © 2017 Haifeng Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, using the near orthogonality property, we analyze the performance of the greedy block coordinate descent (GBCD) algorithm when both the measurements and the measurement matrix are perturbed by errors. An improved sufficient condition is presented to guarantee that the support of the sparse matrix is recovered exactly; it improves the existing result. A counterexample is provided to show that GBCD can fail when this condition is relaxed. Experiments also indicate that GBCD is robust under these perturbations.

1. Introduction

The greedy block coordinate descent (GBCD) algorithm was presented in [1] for direction of arrival (DOA) estimation. In [1], DOA estimation is treated as a multiple measurement vectors (MMV) problem: a common support shared by multiple unknown vectors is recovered from multiple measurements. The authors provided a sufficient condition, based on mutual coherence, to guarantee that GBCD exactly recovers the nonzero support from noiseless measurements.

Recently, the work of [2] discussed the following totally perturbed model: the true measurements $Y = AX$ are observed as $\hat{Y} = Y + B$, and only a perturbed matrix $\hat{A} = A + E$ is available, where $B$ denotes the measurement noise and $E$ denotes the system perturbation. The perturbations $B$ and $E$ are quantified with the relative bounds $\|B\|_F/\|Y\|_F \le \varepsilon_B$ and $\|E\|_2^{(K)}/\|A\|_2^{(K)} \le \varepsilon_A^{(K)}$, where $\|Y\|_F$ and $\|A\|_2^{(K)}$ are nonzero. Here, $\|\cdot\|_2^{(K)}$ denotes the largest spectral norm taken over all $K$-column submatrices of the argument. Throughout the paper, we are only interested in the case where $\varepsilon_B$ and $\varepsilon_A^{(K)}$ are far less than 1. In (1), $X$ is a $K$-group sparse matrix; that is, it has no more than $K$ nonzero rows $x^i$, where $x^i$ is the $i$th row of $X$. It is assumed that all columns of $A$ are normalized to unit norm [3]. Both $A$ and $Y$ are totally perturbed in (1). This case can be found in source separation [4], radar [5], remote sensing [6], and countless other problems. In addition, total perturbations have also been discussed in [7–9].
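As a concrete illustration of this totally perturbed setting, the following sketch generates an MMV instance whose matrix and measurements are both corrupted at prescribed relative levels. The dimensions, the helper name, and the use of the full spectral norm in place of the largest K-column-submatrix norm are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_model(m=32, n=64, L=4, K=5, eps_A=0.05, eps_B=0.05):
    """Generate a totally perturbed MMV instance: Y = A X, observed as
    Y + B alongside the available matrix A + E (illustrative sketch)."""
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
    X = np.zeros((n, L))
    support = np.sort(rng.choice(n, K, replace=False))
    X[support] = rng.standard_normal((K, L))       # K-group sparse: K nonzero rows
    Y = A @ X
    E = rng.standard_normal((m, n))                # system perturbation, scaled so
    E *= eps_A * np.linalg.norm(A, 2) / np.linalg.norm(E, 2)          # ||E||/||A|| = eps_A
    B = rng.standard_normal((m, L))                # measurement noise, scaled so
    B *= eps_B * np.linalg.norm(Y, 'fro') / np.linalg.norm(B, 'fro')  # ||B||_F/||Y||_F = eps_B
    return A, X, Y, E, B, support
```

A recovery algorithm then sees only the pair (A + E, Y + B) and is asked for the row support of X.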

One of the most commonly used conditions is the restricted isometry property (RIP). A matrix $A$ satisfies the RIP of order $K$ if there exists a constant $\delta \in (0,1)$ such that $(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$ for all $K$-sparse vectors $x$. In particular, the minimum of all constants $\delta$ satisfying (3) is called the restricted isometry constant (RIC), denoted $\delta_K$.
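Since the RIC is a maximum over all K-column submatrices, it can be computed exactly for small matrices by enumeration. The helper below is a brute-force sketch (the function name and interface are ours, and the cost grows combinatorially in the number of columns, so it is only feasible for toy sizes).

```python
import numpy as np
from itertools import combinations

def ric(A, K):
    """Brute-force restricted isometry constant delta_K of A:
    the largest |eigenvalue(A_S^T A_S) - 1| over all K-column
    submatrices A_S (feasible only for small n)."""
    n = A.shape[1]
    d = 0.0
    for S in combinations(range(n), K):
        cols = A[:, list(S)]
        w = np.linalg.eigvalsh(cols.T @ cols)      # ascending eigenvalues
        d = max(d, abs(w[0] - 1.0), abs(w[-1] - 1.0))
    return d
```

For a matrix with orthonormal columns the RIC is 0, and the RIC is nondecreasing in the order K.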

There are many papers [8, 10–14] discussing sufficient conditions for orthogonal matching pursuit (OMP), one of the most widely used greedy algorithms for sparse recovery. In [3], using the near orthogonality property, the authors improved the sufficient condition for OMP. As noted in [3], the near orthogonality property further develops the orthogonality characterization of the columns of the measurement matrix and plays a fundamental role in the study of signal reconstruction performance in compressed sensing. In the noiseless case, the work of [15] analyzed the performance of GBCD using the near orthogonality property and improved the results of [2].

In this paper, under total perturbations, we use the near orthogonality property to improve the theoretical guarantee for the GBCD algorithm. We weaken the sufficient RIC condition for GBCD stated in [2] (see Theorem 5). We also present a counterexample to show that GBCD fails when the condition is relaxed; the example is superior to that in [2]. Finally, the robustness of GBCD under total perturbations is shown by experiments.

Now we introduce the notation used in this paper. $a_i$ denotes the $i$th column of a matrix $A$, and $A^T$ denotes the transpose of $A$. $I$ denotes an identity matrix. The symbol $\mathrm{vec}(\cdot)$ denotes the vectorization operator obtained by stacking the columns of a matrix one underneath the other. The cardinality of a finite set $S$ is denoted by $|S|$. The support of $X$, that is, the index set of its nonzero rows, is denoted by $\mathrm{supp}(X)$. $\|A\|_2^{(K)}$ denotes the largest spectral norm taken over all $K$-column submatrices of $A$, and $\|X\|_{\infty,2}$ denotes the maximum $\ell_2$-norm of the rows of $X$. We write $A_S$ for the column submatrix of $A$ whose indices are listed in the set $S$ and $X_S$ for the row submatrix of $X$ whose indices are listed in $S$. $e_i$ denotes the $i$th unit standard vector.

2. Problem Formulation

Analogous to [1], (1) can be rewritten as

The objective function in (4) can be written in vectorized form, where $\otimes$ denotes the Kronecker product. Combining the quadratic approximation of the objective with the standard BCD algorithm, the solution to the $i$th subproblem is given by a soft-thresholding operator. The authors of [1] update only the block that yields the greatest descent distance. GBCD is listed as Algorithm 1.

Algorithm 1: GBCD: greedy block coordinate descent algorithm [1].
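The following is a minimal sketch of a GBCD-style greedy loop, not the exact algorithm of [1]: it approximates the "greatest descent" block selection by the largest row norm of $A^T R$ and refits the selected rows by least squares in place of the soft-thresholding update. Under these simplifying assumptions it suffices to illustrate support recovery in the noiseless case.

```python
import numpy as np

def gbcd_support(A, Y, K):
    """Greedy block selection sketch (simplified GBCD-style loop):
    at each step, pick the block whose update would most reduce the
    residual, approximated here by the largest row norm of A^T R,
    then refit the selected rows by least squares."""
    S = []
    R = Y.copy()
    for _ in range(K):
        scores = np.linalg.norm(A.T @ R, axis=1)   # per-block descent proxy
        scores[S] = -np.inf                        # never reselect a block
        S.append(int(np.argmax(scores)))
        X_S, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
        R = Y - A[:, S] @ X_S                      # residual after refit
    return sorted(S)
```

On a generic well-conditioned noiseless instance this greedy loop recovers the row support exactly, which is the regime the guarantees in this paper address.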

Suppose that $A$ satisfies the RIP of order $K$ with RIC $\delta_K$. Recall that $X$ has no more than $K$ nonzero rows, so each column of $X$ is $K$-sparse. Applying (3) columnwise, we obtain $(1-\delta_K)\|X\|_F^2 \le \|AX\|_F^2 \le (1+\delta_K)\|X\|_F^2$.

Combining the corresponding lemma in [3] with (6), we obtain the following results.

Lemma 1 (near orthogonality property, see [3]). Let $u$ and $v$ be two orthogonal sparse vectors with supports $S_1$ and $S_2$ fulfilling $|S_1 \cup S_2| \le K$. Suppose that $A$ satisfies the RIP of order $K$ with RIC $\delta_K$. Then $|\cos\theta_{Au,Av}| \le \delta_K$, where $\theta_{Au,Av}$ denotes the angle between $Au$ and $Av$.
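Lemma 1 can be checked numerically. The sketch below draws two orthogonal vectors supported on a common index set, computes the restricted isometry constant of the corresponding column submatrix (a lower bound on $\delta_K$, which is sufficient here because the argument lives entirely on that support), and verifies the cosine bound; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K = 30, 50, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Two orthogonal vectors sharing a support of size <= K.
S = rng.choice(n, K, replace=False)
u = np.zeros(n)
v = np.zeros(n)
u[S] = rng.standard_normal(K)
w = rng.standard_normal(K)
w -= (w @ u[S]) / (u[S] @ u[S]) * u[S]       # Gram-Schmidt: make v orthogonal to u
v[S] = w
assert abs(u @ v) < 1e-10

# Isometry constant of the submatrix on the combined support.
eig = np.linalg.eigvalsh(A[:, S].T @ A[:, S])
delta = max(abs(eig[0] - 1.0), abs(eig[-1] - 1.0))

Au, Av = A @ u, A @ v
cos_angle = abs(Au @ Av) / (np.linalg.norm(Au) * np.linalg.norm(Av))
print(cos_angle <= delta)                    # near orthogonality: |cos theta| <= delta
```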

Lemma 2 (see [3]). Under the same assumptions as in Lemma 1, we have $|\langle Au, Av\rangle| \le \delta_K \|u\|_2 \|v\|_2$.

Lemma 3. For finite sets and , let and . Here, , and . If satisfies the RIP condition (3) with , then we have

Proof. Note that the Frobenius norm of a matrix is induced by the Frobenius inner product. where (15) and (17) follow from Lemma 2 and the Cauchy–Schwarz inequality, respectively.

3. RIP Based Recovery Condition

In this section, we first present an upper bound on the noise matrix and then provide the recovery condition for GBCD.

Lemma 4 (see [2]). Suppose that satisfies the th order RIC . Then we have

According to steps () and () of Algorithm 1, at the th iteration, GBCD can obtain a correct index if

Theorem 5. Consider model (4). Let . If the matrix satisfies the RIP of order with , where , then GBCD can exactly recover the support set .

Proof. Consider . The initial value is . To guarantee that GBCD selects a correct index, combining step () of Algorithm 1 and (20), we should verify the following inequality: If , the right-hand side is 0, and inequality (26) holds. Thus, we only consider . Using the remark in [2], inequality (26) is true when Now it suffices to verify (27). Let us construct an upper bound for . By step () of Algorithm 1, we have where (32) follows from the properties of the norm and (34) follows from the fact that each column of is of unit norm, together with Lemmas 3 and 4.
To prove (27), we only need to prove (35). We show by contradiction that (35) is true. For all , assume that (37) holds. Then, applying the triangle inequality, we obtain where (40) follows from (10) and the properties of the norm.
After straightforward manipulations, we have where (41) follows from (21) and (43) follows from and (22).
Obviously, (44) contradicts (37), so this fact guarantees (27).
Assume that GBCD always picks indices from the support for ( is an integer). Consider . In order to prove that GBCD can choose a correct index, analogous to [2], inequality (46) should be verified. Combining step () of Algorithm 1 with (46) yields It is sufficient to prove that (48) holds. Note that ; we have Now, we only need to prove We then show that (52) is true by contradiction. For all , assume that Using the definition of the Frobenius norm, we have Combining , (21), and (22), we have where (59) follows from This contradicts (53). Thus, (48) is true.

Remark 6. The weaker the RIC bound, the smaller the required number of measurements, and improved RIC results can be used in many CS-based applications [16]. In [2], the authors proved that the condition is sufficient for GBCD; obviously, their bound is smaller, that is, more restrictive, than the bound in (21).

4. The Counterexample

Consider the measurements in (62). In this section, giving a matrix whose RIC is a slight relaxation of the bound in (21), we verify that GBCD can fail to recover the support of the sparse matrix from (62).

Letwhere , and (the value of is far less than 1; this is reasonable).

The matrix is constructed aswhere

Set

The eigenvalues of are

Thus, the RIC of is .

Recall that condition (27) is the criterion of recovery for GBCD. Note that . One can obtain On the other hand, we have

It can be derived that where (71) and (72) follow from (65) and (66).

This obviously contradicts (27). Thus, GBCD fails to recover the support .
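Although the specific counterexample above cannot be reproduced here without its matrix entries, the failure mechanism is easy to demonstrate with the classical construction below: a third column correlated with the sum of the two true columns attracts the first greedy selection. This toy matrix is ours, not the one constructed in this section.

```python
import numpy as np

# Columns 0 and 1 carry the signal; column 2 is aligned with their sum.
A = np.array([[1.0, 0.0, 1.0 / np.sqrt(2)],
              [0.0, 1.0, 1.0 / np.sqrt(2)]])
Y = A[:, :2] @ np.ones((2, 1))                 # true support {0, 1}
scores = np.linalg.norm(A.T @ Y, axis=1)       # greedy selection scores: 1, 1, sqrt(2)
print(int(np.argmax(scores)))                  # prints 2: greedy picks the wrong column first
```

Once the wrong column is selected, the support estimate can never equal the true support, regardless of later iterations.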

Remark 7. In [2], the authors presented a matrix whose RIC is and showed that the GBCD algorithm fails when it is used as the measurement matrix. After a simple calculation, we can get Thus, our result improves the existing one.

5. Experimental Results

In this section, we test the performance of the GBCD algorithm for solving the DOA estimation problem under total perturbations.

Consider narrowband far-field point source signals impinging on an -element uniform linear array. The steering vector forming each column of the matrix is given by where . The number of snapshots is .
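A common way to build such a steering matrix is sketched below; the half-wavelength spacing, the sign convention in the exponent, and the 1/sqrt(M) normalization (so that columns have unit norm, as assumed in Section 1) are conventional choices that may differ from the paper's exact definition.

```python
import numpy as np

def ula_steering(M, thetas_deg, d_over_lambda=0.5):
    """Steering matrix of an M-element uniform linear array:
    column k-th entry exp(-j 2*pi*k*(d/lambda)*sin(theta)), normalized
    by 1/sqrt(M) so every column has unit norm (assumed convention)."""
    k = np.arange(M)[:, None]                       # element index, shape (M, 1)
    th = np.deg2rad(np.asarray(thetas_deg))[None, :]  # angles, shape (1, G)
    return np.exp(-2j * np.pi * k * d_over_lambda * np.sin(th)) / np.sqrt(M)
```

Sampling a dense grid of angles yields the overcomplete dictionary whose nonzero rows encode the DOAs.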

Using the sparse optimization approach in [1], the DOA estimation problem can be rewritten as model (1). The aim is then to find out which rows of the matrix are nonzero, that is, the support of the matrix .

Analogous to the simulation in [1], we make the following assumptions:
(i) The number of array elements is .
(ii) The number of snapshots is .
(iii) The grid spacing is from to . Then .
(iv) Five () uncorrelated signals impinge from , , , , and .
(v) Both the signals and the noise are white and follow Gaussian distributions. The power of the nonzero entries of is , and the power of each entry of is .
(vi) The following SNR1 and SNR2 are used to measure the noises and , respectively:

Define the root mean square error (RMSE) over 500 Monte Carlo trials as the performance index, where is the estimate of at the th trial.
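Under the assumption that the RMSE averages the squared DOA errors over all trials and sources (the displayed formula is not recoverable here), it can be computed as:

```python
import numpy as np

def rmse(estimates, theta_true):
    """RMSE over Monte Carlo trials: square root of the mean squared
    DOA error across all trials and sources (assumed definition)."""
    estimates = np.asarray(estimates, dtype=float)   # shape (trials, K)
    err = estimates - np.asarray(theta_true, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))
```

For example, two trials estimating true angles (2, 3) as (1, 2) and (3, 4) give errors of magnitude 1 everywhere, hence an RMSE of 1.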

Figure 1, fixing the matrix , describes the performance of GBCD; the results show that RMSE decreases as SNR1 increases. Figure 2, fixing the matrix , shows that RMSE likewise decreases as SNR2 increases. Thus, GBCD is still robust under the total perturbations.

Figure 1: For SNR2 = 10. The RMSE of GBCD versus input SNR1.
Figure 2: For SNR1 = 2. RMSE of GBCD versus input SNR2.

6. Conclusion

In this paper, using the near orthogonality property, we have provided a recovery condition for GBCD under total perturbations. A counterexample is presented to show that GBCD can fail when this condition is relaxed. Experiments indicate that GBCD is robust under total perturbations.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by National Natural Science Foundation of China (nos. 11526081, 11601134, 11671122, and U1404603), the Scientific Research Foundation for Ph.D. of Henan Normal University (no. qd14142), and the Key Scientific Research Project of Colleges and Universities in Henan Province (no. 17A110008).

References

  1. X. Wei, Y. Yuan, and Q. Ling, “DOA estimation using a greedy block coordinate descent algorithm,” IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6382–6394, 2012.
  2. H. Li, Y. Fu, R. Hu, and R. Rong, “Perturbation analysis of greedy block coordinate descent under RIP,” IEEE Signal Processing Letters, vol. 21, no. 5, pp. 518–522, 2014.
  3. L.-H. Chang and J.-Y. Wu, “An improved RIP-based performance guarantee for sparse signal recovery via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 60, no. 9, pp. 5702–5715, 2014.
  4. T. Blumensath and M. Davies, “Compressed sensing and source separation,” in Proceedings of the International Conference on Independent Component Analysis and Signal Separation (ICA '07), pp. 341–348, 2007.
  5. M. A. Herman and T. Strohmer, “High-resolution radar via compressed sensing,” IEEE Transactions on Signal Processing, vol. 57, no. 6, pp. 2275–2284, 2009.
  6. A. C. Fannjiang, T. Strohmer, and P. Yan, “Compressed remote sensing of sparse objects,” SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 595–618, 2010.
  7. M. A. Herman and T. Strohmer, “General deviants: an analysis of perturbations in compressed sensing,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 342–349, 2010.
  8. J. Ding, L. Chen, and Y. Gu, “Perturbation analysis of orthogonal matching pursuit,” IEEE Transactions on Signal Processing, vol. 61, no. 2, pp. 398–410, 2013.
  9. B. Li, Y. Shen, Z. Wu, and J. Li, “Sufficient conditions for generalized orthogonal matching pursuit in noisy case,” Signal Processing, vol. 108, pp. 111–123, 2015.
  10. T. Zhang, “Sparse recovery with orthogonal matching pursuit under RIP,” IEEE Transactions on Information Theory, vol. 57, no. 9, pp. 6215–6221, 2011.
  11. J. Wen, X. Zhu, and D. Li, “Improved bounds on restricted isometry constant for orthogonal matching pursuit,” Electronics Letters, vol. 49, no. 23, pp. 1487–1489, 2013.
  12. R. Wu, W. Huang, and D.-R. Chen, “The exact support recovery of sparse signals with noise via orthogonal matching pursuit,” IEEE Signal Processing Letters, vol. 20, no. 4, pp. 403–406, 2013.
  13. Q. Mo and Y. Shen, “A remark on the restricted isometry property in orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 58, no. 6, pp. 3654–3656, 2012.
  14. J. Wang and B. Shim, “On the recovery limit of sparse signals using orthogonal matching pursuit,” IEEE Transactions on Signal Processing, vol. 60, no. 9, pp. 4973–4976, 2012.
  15. H. Li, Y. Ma, W. Liu, and Y. Fu, “Improved analysis of greedy block coordinate descent under RIP,” Electronics Letters, vol. 51, no. 6, pp. 488–490, 2015.
  16. C. B. Song, S. T. Xia, and X. J. Liu, “Improved analysis for subspace pursuit algorithm in terms of restricted isometry constant,” IEEE Signal Processing Letters, vol. 21, no. 11, pp. 1365–1369, 2014.