
Discrete Dynamics in Nature and Society

Volume 2013 (2013), Article ID 905027, 6 pages

http://dx.doi.org/10.1155/2013/905027

## A Note on Block-Sparse Signal Recovery with Coherent Tight Frames

^{1}School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China
^{2}School of Mathematics and Statistics, Southwest University, Chongqing 400715, China

Received 23 May 2013; Accepted 17 November 2013

Academic Editor: Juan J. Nieto

Copyright © 2013 Yao Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This note discusses the recovery of signals from undersampled data in the situation that such signals are nearly block sparse in terms of an overcomplete and coherent tight frame $D$. By introducing the notion of the block $D$-restricted isometry property (block $D$-RIP), we establish several sufficient conditions for the proposed mixed $\ell_2/\ell_1$-analysis method to guarantee stable recovery of nearly block-sparse signals in terms of $D$. One of the main results of this note shows that if the measurement matrix satisfies the block $D$-RIP with a sufficiently small constant, then signals which are nearly block sparse in terms of $D$ can be stably recovered via mixed $\ell_2/\ell_1$-analysis in the presence of noise.

#### 1. Introduction

Compressed sensing (CS) [1, 2] has attracted great interest in a number of fields, including information processing, electrical engineering, and statistics. In principle, CS theory states that it is possible to recover an unknown signal from considerably fewer measurements, provided the signal has a sparse or nearly sparse representation in an orthonormal basis. However, a large number of applications in signal and image processing involve signals that are sparse not in an orthonormal basis but in an overcomplete and coherent tight frame; see, for example, [3, 4] and the references therein. Examples include reflected radar and sonar signals (Gabor frames) and images with curves (curvelet frames). In such contexts, the signal $f \in \mathbb{R}^n$ can be expressed as $f = Dx$, where $D$ is an $n \times d$ matrix ($d \ge n$) whose columns form a tight frame and $x \in \mathbb{R}^d$ is sparse or nearly sparse. One then acquires $f$ via the observed linear measurements $y = Af + z$, where $A$ is a known $m \times n$ measurement matrix ($m \ll n$), $y \in \mathbb{R}^m$ is the vector of available measurements, and $z \in \mathbb{R}^m$ is a vector of measurement error.
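To make this measurement model concrete, the following sketch (a hypothetical numerical setup; the frame construction, dimensions, and sparsity level are our own choices, not from the note) builds a Parseval tight frame $D$, a sparse coefficient vector $x$, the signal $f = Dx$, and noisy undersampled measurements $y = Af + z$:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 32, 64, 20          # signal dim, frame size (d >= n), measurements (m < n)

# A Parseval tight frame: concatenate two orthonormal bases and rescale,
# so that D @ D.T = I_n (here: the identity and a random orthogonal matrix).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
D = np.hstack([np.eye(n), Q]) / np.sqrt(2)
assert np.allclose(D @ D.T, np.eye(n))   # tight-frame (Parseval) property

# A sparse coefficient vector x and the signal f = D x.
x = np.zeros(d)
x[[3, 17, 40]] = rng.standard_normal(3)
f = D @ x

# Undersampled noisy measurements y = A f + z.
A = rng.standard_normal((m, n)) / np.sqrt(m)
z = 1e-3 * rng.standard_normal(m)
y = A @ f + z
```

Concatenating two orthonormal bases and rescaling by $1/\sqrt{2}$ is one of the simplest ways to obtain an overcomplete Parseval frame; the redundant frames used in practice (Gabor, curvelet) are far more structured but satisfy the same tight-frame identity.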

There are two common ways to recover $f$ based on $y$ and $A$. One natural way is first to solve an $\ell_1$-minimization problem, $\min_x \|x\|_1$ subject to $y - ADx \in \mathcal{B}$, to find the sparse transform coefficients $\hat{x}$; here $\mathcal{B}$ is a bounded set determined by the noise structure. The signal is then reconstructed by a synthesis operation; that is, $\hat{f} = D\hat{x}$. This is the so-called $\ell_1$-synthesis or synthesis-based method [5, 6]. Since the columns of $AD$ are correlated when $D$ is highly coherent, $AD$ may no longer satisfy the standard restricted isometry property (RIP) or the mutual incoherence property (MIP), which are commonly used in the standard CS framework. Therefore, it is not easy to study the theoretical performance of the $\ell_1$-synthesis method.
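As an illustration of the synthesis route, the noiseless case ($\mathcal{B} = \{0\}$) can be solved as a linear program by splitting $x = u - v$ with $u, v \ge 0$. The sketch below is an illustrative assumption on our part (the dimensions, data, and the helper name `l1_synthesis` are ours) and uses a generic LP solver rather than any method from the note:

```python
import numpy as np
from scipy.optimize import linprog

def l1_synthesis(A, D, y):
    """Noiseless l1-synthesis: min ||x||_1  s.t.  A D x = y,
    posed as a linear program via the split x = u - v, u, v >= 0."""
    B = A @ D
    m, dd = B.shape
    c = np.ones(2 * dd)                        # minimize sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([B, -B])                  # enforce B (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * dd))
    u, v = res.x[:dd], res.x[dd:]
    x_hat = u - v
    return D @ x_hat, x_hat                    # synthesized signal and coefficients

# Tiny example: the true sparse x is feasible for the LP, so the optimum
# can have l1 norm no larger than ||x_true||_1.
rng = np.random.default_rng(1)
n, d, m = 16, 32, 12
D = np.hstack([np.eye(n), np.linalg.qr(rng.standard_normal((n, n)))[0]]) / np.sqrt(2)
x_true = np.zeros(d)
x_true[[2, 9]] = [1.5, -2.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ (D @ x_true)
f_hat, x_hat = l1_synthesis(A, D, y)
assert np.sum(np.abs(x_hat)) <= np.sum(np.abs(x_true)) + 1e-6
```

The LP split doubles the number of variables but keeps the problem convex and solvable by off-the-shelf solvers; it says nothing, of course, about the recovery guarantees that are the subject of this note.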

An alternative to $\ell_1$-synthesis is the $\ell_1$-analysis method, which finds the estimator $\hat{f}$ directly by solving the $\ell_1$-minimization problem $\min_f \|D^*f\|_1$ subject to $y - Af \in \mathcal{B}$. It has been shown that there is a remarkable difference between the two methods despite their apparent similarity [5]. To investigate the theoretical performance of $\ell_1$-analysis, Candès et al. [7] introduced the definition of the $D$-RIP: a measurement matrix $A$ is said to satisfy the restricted isometry property adapted to $D$ (abbreviated $D$-RIP) with constant $\delta_k$ if
$$(1 - \delta_k)\|Dv\|_2^2 \le \|ADv\|_2^2 \le (1 + \delta_k)\|Dv\|_2^2$$
holds for every vector $v$ that is $k$-sparse. Note that it is computationally difficult to verify the $D$-RIP for a given deterministic matrix. But, as discussed in [7], matrices which satisfy the standard RIP requirements also satisfy the $D$-RIP requirements. It has been shown that $\ell_1$-analysis can recover a signal that is (nearly) sparse in terms of $D$ with small or zero error under various conditions on the $D$-RIP constants, such as those used in [7], that used in [8], and so forth.

There are a growing number of practical scenarios in which the transform coefficient vector $x$ is not only sparse but its nonzero entries also appear in some fixed blocks. Such block sparsity (in terms of $D$) naturally arises in MR imaging [9], color imaging [10], and source localization [11]. Mathematically, such a vector $x \in \mathbb{R}^d$ can be modeled over a block index set $\mathcal{I} = \{d_1, \dots, d_M\}$, where $x[i]$ denotes the $i$th subblock of $x$ and $d_i$ is the block size of the $i$th subblock. In these terms, a vector $x$ is called block $k$-sparse over the index set $\mathcal{I}$ if $x[i]$ is nonzero for at most $k$ indices $i$. Our objective in this note is to recover an unknown signal $f$ that is nearly block $k$-sparse in terms of $D$ from the collection of linear measurements $y = Af + z$. We suggest the use of the following mixed $\ell_2/\ell_1$-analysis method to recover such signals:
$$\hat{f} = \arg\min_{f} \sum_{i=1}^{M} \|(D^*f)[i]\|_2 \quad \text{subject to} \quad y - Af \in \mathcal{B}. \tag{5}$$
In this note, we will consider two types of bounded noise sets $\mathcal{B}$: an $\ell_2$-norm constraint and a Dantzig-selector-type constraint. To study the performance of this method, we will introduce the block $D$-RIP of a measurement matrix, which is an extension of the block RIP introduced in [12].
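The block partition and the mixed $\ell_2/\ell_1$ objective are easy to state in code. The helpers below (the function names and the example partition are our own, for illustration) compute the sum of block $\ell_2$ norms and test block $k$-sparsity over a given index set:

```python
import numpy as np

def block_partition(x, block_sizes):
    """Split x into consecutive blocks x[1], ..., x[M] of the given sizes."""
    idx = np.cumsum(block_sizes)[:-1]
    return np.split(np.asarray(x, dtype=float), idx)

def mixed_l2l1_norm(x, block_sizes):
    """The mixed l2/l1 norm: the sum of the l2 norms of the blocks."""
    return sum(np.linalg.norm(b) for b in block_partition(x, block_sizes))

def is_block_k_sparse(x, block_sizes, k):
    """True if at most k blocks of x are nonzero."""
    return sum(np.linalg.norm(b) > 0 for b in block_partition(x, block_sizes)) <= k

x = np.array([0.0, 0.0, 3.0, 4.0, 0.0, 0.0, 1.0])
sizes = [2, 2, 2, 1]                     # blocks: [0,0], [3,4], [0,0], [1]
print(mixed_l2l1_norm(x, sizes))         # 5.0 + 1.0 = 6.0
print(is_block_k_sparse(x, sizes, 2))    # True: only two blocks are nonzero
```

Note that the same vector has $\ell_1$ norm $8$ and four nonzero entries; the block structure is what makes it $2$-sparse in the mixed sense.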

*Definition 1. *Let $D$ be an $n \times d$ matrix whose columns form a tight frame for $\mathbb{R}^n$. An $m \times n$ measurement matrix $A$ is said to satisfy the block $D$-RIP over $\mathcal{I}$ with constant $\delta_k$ if
$$(1 - \delta_k)\|Dv\|_2^2 \le \|ADv\|_2^2 \le (1 + \delta_k)\|Dv\|_2^2$$
holds for every vector $v$ that is block $k$-sparse over $\mathcal{I}$.

Note that we only require the $D$-RIP for block-sparse signals, which form a certain subset of the $k_0$-sparse signals, where $k_0$ is the sum of the $k$ largest values of $d_i$. Thus, the block $D$-RIP constant is typically smaller than the $D$-RIP constant. With a slight abuse of notation, we still denote by $\delta_k$ the block $D$-RIP constant without further specification. It is also computationally difficult to verify the block $D$-RIP for a given deterministic matrix. But, using arguments similar to those in [12], one can easily verify that random matrices with Gaussian or sub-Gaussian entries satisfy the block $D$-RIP with overwhelming probability. With this notion, we will establish some sufficient conditions for mixed $\ell_2/\ell_1$-analysis to guarantee stable recovery of nearly block $k$-sparse signals in terms of $D$. The main contribution of this note is to show that if the measurement matrix $A$ has a sufficiently small block $D$-RIP constant, then the signal $f$ can be stably recovered via mixed $\ell_2/\ell_1$-analysis in the noisy case. Note that a block $k$-sparse signal is also a standard $k_0$-sparse signal. Thus, from [8], $f$ can be stably recovered via $\ell_1$-analysis if $A$ has a correspondingly small $D$-RIP constant. Since the block $D$-RIP constant is typically smaller than the $D$-RIP constant, our condition is not as stringent as that obtained for $\ell_1$-analysis. This reveals the advantage of mixed $\ell_2/\ell_1$-analysis over $\ell_1$-analysis in the block-sparse (in terms of $D$) setting.
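Although the block $D$-RIP constant cannot be computed exactly, one can probe it by Monte Carlo: sample random block-sparse coefficient vectors $v$ and record how far $\|ADv\|_2^2/\|Dv\|_2^2$ deviates from $1$. The maximum observed deviation is a lower bound on the true constant. A sketch under assumed dimensions and block structure (all numbers are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 32, 64, 24
block_sizes = [4] * 16               # 16 blocks of size 4 partitioning R^64
k = 2                                # block sparsity level

# Parseval tight frame and a Gaussian measurement matrix.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
D = np.hstack([np.eye(n), Q]) / np.sqrt(2)
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Sample random block-k-sparse v and record ||A D v||^2 / ||D v||^2.
starts = np.cumsum([0] + block_sizes[:-1])
ratios = []
for _ in range(500):
    v = np.zeros(d)
    for b in rng.choice(len(block_sizes), size=k, replace=False):
        v[starts[b]:starts[b] + block_sizes[b]] = rng.standard_normal(block_sizes[b])
    Dv = D @ v
    ratios.append(np.linalg.norm(A @ Dv) ** 2 / np.linalg.norm(Dv) ** 2)

# The largest deviation |ratio - 1| over the samples lower-bounds the
# block D-RIP constant of A for this frame and block structure.
delta_est = max(abs(r - 1) for r in ratios)
print(delta_est)
```

Such sampling only ever certifies that the constant is at least `delta_est`; certifying an upper bound is the computationally hard direction.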

In the next section, we give two key lemmas. In Section 3, some new block $D$-RIP conditions for stable recovery of nearly block $k$-sparse signals (in terms of $D$) via mixed $\ell_2/\ell_1$-analysis are presented. We conclude this note with some discussion of the block $D$-RIP bounds in Section 4.

#### 2. Preliminaries

We begin by introducing the definition of the block $D$-restricted orthogonality constant, which is a natural extension of the $D$-restricted orthogonality constant ($D$-ROC) introduced in [8].

*Definition 2. *The block $D$-restricted orthogonality constant of order $(k_1, k_2)$, denoted $\theta_{k_1,k_2}$, is the smallest positive number that satisfies
$$|\langle ADv_1, ADv_2 \rangle| \le \theta_{k_1,k_2}\, \|Dv_1\|_2 \|Dv_2\|_2$$
for all $v_1$ and $v_2$ with disjoint supports such that $v_1$ and $v_2$ are block $k_1$-sparse and block $k_2$-sparse over the block index set $\mathcal{I}$, respectively.

For convenience, in the remainder of this note we write $\theta_{k_1,k_2}$ for the block $D$-ROC whenever no confusion is caused. Similar to the $D$-RIP and the $D$-ROC, the following monotone properties are easily observed: $\delta_k \le \delta_{k'}$ whenever $k \le k'$, and $\theta_{k_1,k_2} \le \theta_{k_1',k_2'}$ whenever $k_1 \le k_1'$ and $k_2 \le k_2'$.

From the parallelogram identity and the definition of the block $D$-RIP, it is easy to see that
$$|\langle ADv_1, ADv_2 \rangle| \le \delta_{k_1+k_2}\, \|Dv_1\|_2 \|Dv_2\|_2$$
for all $v_1$ and $v_2$ with disjoint supports such that $v_1$ and $v_2$ are block $k_1$-sparse and block $k_2$-sparse, respectively; in particular, $\theta_{k_1,k_2} \le \delta_{k_1+k_2}$.
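For intuition, here is the standard parallelogram computation in the special case of an orthonormal $D$ (where the block $D$-RIP reduces to the ordinary block RIP); the frame case follows the same pattern with the appropriate norms:

```latex
% Take unit-norm v_1, v_2 with disjoint block supports, v_1 block k_1-sparse
% and v_2 block k_2-sparse; then v_1 +/- v_2 is block (k_1 + k_2)-sparse and
% \|v_1 \pm v_2\|_2^2 = 2. The parallelogram identity gives
\langle A v_1, A v_2 \rangle
   = \frac{\|A(v_1 + v_2)\|_2^2 - \|A(v_1 - v_2)\|_2^2}{4},
% and applying the block RIP to both terms on the right yields
|\langle A v_1, A v_2 \rangle|
   \le \frac{2\left(1 + \delta_{k_1+k_2}\right) - 2\left(1 - \delta_{k_1+k_2}\right)}{4}
   = \delta_{k_1+k_2}.
```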

In order to derive the main results of this note, we now give two useful inequalities relating $\delta_k$ and $\theta_{k_1,k_2}$.

Lemma 3. *For all nonnegative integers $k_1$ and $k_2$, one has the following inequality.*

*Proof. *Since $D$ is a tight frame, we have $f = DD^{*}f$ for any vector $f$. Then, using (9) and the definition of $\theta_{k_1,k_2}$, we directly obtain the result.
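The tight-frame (Parseval) identities invoked at the start of this proof are:

```latex
% D is an n x d Parseval tight frame for R^n, so the frame operator is the identity:
D D^{*} = I_n, \qquad f = D D^{*} f, \qquad
\|D^{*} f\|_2^2 = \langle D D^{*} f,\, f \rangle = \|f\|_2^2
\quad \text{for all } f \in \mathbb{R}^n .
```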

Lemma 4. *For any $\alpha \ge 1$ and positive integers $k_1, k_2$ such that $\alpha k_2$ is an integer, one has $\theta_{k_1, \alpha k_2} \le \sqrt{\alpha}\, \theta_{k_1, k_2}$.*

*Proof. *The proof is similar to the proofs of Lemma 1 in [13] and Lemma 2.6 in [8], with some modifications. Let $v_1$ and $v_2$ be block $k_1$-sparse and block $\alpha k_2$-sparse, respectively. Without loss of generality, assume that the indices of the nonzero blocks of $v_2$ are $\{1, \dots, \alpha k_2\}$. Decompose $v_2$ into vectors each of which keeps $k_2$ of the nonzero blocks of $v_2$ and replaces the other blocks by zero. Applying the definition of $\theta_{k_1,k_2}$ to each piece and then the Cauchy–Schwarz inequality yields the result.

*Remark 5. *Note that Lemma 4 is in essence the *square-root lifting inequality* first introduced by Cai et al. in [13]. It is obvious that, when the block size $d_i = 1$ for all $i$, Lemma 4 degenerates to Lemma 1 in [13].

#### 3. Recovery via Mixed $\ell_2/\ell_1$-Analysis

In this section, we establish several block $D$-RIP conditions for stable recovery of nearly block-sparse signals (in terms of $D$) via the mixed $\ell_2/\ell_1$-analysis approach (5). In the sequel, we denote by $(D^{*}f)_{[k]}$ the vector consisting of the $k$ largest blocks over $\mathcal{I}$ of $D^{*}f$ in $\ell_2$ norm, obtained by keeping those $k$ blocks and setting the remaining blocks to zero (formally, via an indicator function on the block norms).
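The truncation operator used here, which keeps the $k$ largest blocks in $\ell_2$ norm and zeroes the rest, can be sketched as follows (the function name and the example data are our own):

```python
import numpy as np

def largest_k_blocks(x, block_sizes, k):
    """Keep the k blocks of x with largest l2 norm and zero out the rest:
    the best block-k-sparse approximation of x over this partition."""
    idx = np.cumsum(block_sizes)[:-1]
    blocks = np.split(np.asarray(x, dtype=float), idx)
    norms = np.array([np.linalg.norm(b) for b in blocks])
    keep = set(np.argsort(norms)[-k:])        # indices of the k largest blocks
    out = [b if i in keep else np.zeros_like(b) for i, b in enumerate(blocks)]
    return np.concatenate(out)

x = np.array([1.0, 1.0, 5.0, 0.0, 0.0, 2.0, 0.1])
print(largest_k_blocks(x, [2, 2, 2, 1], 2))   # keeps the blocks [5,0] and [0,2]
```

The difference $x - $ `largest_k_blocks(x, ..., k)` is exactly the "tail" whose mixed $\ell_2/\ell_1$ norm appears in stable-recovery error bounds of the kind stated below.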

Theorem 6. *Let $k_1$ and $k_2$ be positive integers satisfying the stated compatibility condition. If the matrix $A$ satisfies the corresponding block $D$-RIP bound, then the solution $\hat{f}$ to (5) obeys* (i) *the error bound (13) for $\ell_2$-bounded noise and* (ii) *the error bound (14) for Dantzig-selector-type bounded noise.*

*Proof. *The proof follows the ideas of [8, 14]. Let $\hat{f} = f + h$ be a solution of (5), where $f$ is the original signal. Write $D^{*}h$ in blocks and rearrange the block indices so that the block $\ell_2$ norms are nonincreasing. Let $T_0$ be the block index set of the blocks of $D^{*}h$ with largest $\ell_2$ norm. We represent $D$ as a concatenation of column-blocks of the corresponding sizes:
where $D_T$ denotes $D$ restricted to the column-blocks indexed by $T$. Partition the remaining block indices into the following sets:

Since $\hat{f}$ is a minimizer of (5), we have
This implies

Combining the preceding observations, it follows from (18) that
which leads to
Using the inequality between the $\ell_1$ and $\ell_2$ norms (see Proposition 2.1 in [14]), it is easy to see that the corresponding bound holds for each index set above. Summing these terms yields
This, along with (20), gives
Combining these, we can get
Similar to the consequence of the $D$-RIP (see [8]), we have
where, for the second inequality, we have used (24).

Then, by the feasibility of $\hat{f}$ (see also [8]), we have
Thus
It then follows from (25) and (27) that
By (20), it is not hard to see that
Consequently, we have
Plugging (28) into the preceding inequality and performing a direct calculation, we get
which yields (13).

For the Dantzig-selector-type noise, by the feasibility of $\hat{f}$ (see also [8]), we have
Thus, we can get
Combining (25) with (33), a simple calculation gives
Note that we also have (30). Plugging (34) into (30) yields
which leads to (14).

*Remark 7. *Different choices of $k_1$ and $k_2$ result in different sufficient conditions in Theorem 6; some of them are listed in Table 1. Since the stated condition implies a simpler one, we also obtain a sufficient recovery condition for mixed $\ell_2/\ell_1$-analysis (5) that involves a single block $D$-RIP constant.

*Remark 8. *Though we have only considered bounded noise in Theorem 6, the conclusions can be applied directly to the Gaussian noise case.

Theorem 9. *If the matrix $A$ satisfies the block $D$-RIP with a sufficiently small constant, then the solution $\hat{f}$ to (5) obeys* (i) *the error bound (36) for $\ell_2$-bounded noise and* (ii) *the error bound (37) for Dantzig-selector-type bounded noise.*

*Proof. *In Theorem 6, take particular values of $k_1$ and $k_2$. By applying Lemmas 3 and 4 and using proof techniques similar to those in [14], we immediately obtain the results.

*Remark 10. *When the block size $d_i = 1$ for all $i$, (36) and (37) reduce to the corresponding bounds of Theorem 3.1 in [8].

*Remark 11. *When no noise is present and $D^{*}f$ is exactly block $k$-sparse (in terms of $D$), the mixed $\ell_2/\ell_1$-analysis (5) recovers $f$ exactly.

#### 4. Discussion

This note has presented conditions on the measurement matrix for the mixed $\ell_2/\ell_1$-analysis method to stably recover signals which are nearly block sparse in terms of a coherent tight frame $D$. To the best of our knowledge, this is the first theoretical characterization of the proposed mixed $\ell_2/\ell_1$-analysis method in the compressed sensing framework. In a recent paper [15], Cai and Zhang established a sharp RIP condition for standard sparse recovery via $\ell_1$ minimization: in the noiseless case, the RIP condition $\delta_k < 1/3$ is sharp for exact recovery of $k$-sparse signals via $\ell_1$ minimization. One may then ask whether the corresponding block $D$-RIP condition is also sharp for mixed $\ell_2/\ell_1$-analysis. Moreover, Davies and Gribonval [16] showed that it is impossible to recover certain standard $k$-sparse signals via $\ell_1$ minimization under the RIP condition $\delta_{2k} \ge 1/\sqrt{2}$. It is interesting to investigate whether $1/\sqrt{2}$ is also an upper bound on the block $D$-RIP constant for mixed $\ell_2/\ell_1$-analysis.

#### Acknowledgments

This work was supported in part by the National 973 Project of China under Grant no. 2013CB329404, the Natural Science Foundation of China under Grant nos. 61273020 and 61075054, and the Fundamental Research Funds for the Central Universities under Grant no. XDJK2010B005.

#### References

1. D. L. Donoho, “Compressed sensing,” *IEEE Transactions on Information Theory*, vol. 52, no. 4, pp. 1289–1306, 2006.
2. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” *IEEE Transactions on Information Theory*, vol. 52, no. 2, pp. 489–509, 2006.
3. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” *SIAM Review*, vol. 43, no. 1, pp. 129–159, 2001.
4. A. M. Bruckstein, D. L. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” *SIAM Review*, vol. 51, no. 1, pp. 34–81, 2009.
5. M. Elad, P. Milanfar, and R. Rubinstein, “Analysis versus synthesis in signal priors,” *Inverse Problems*, vol. 23, no. 3, pp. 947–968, 2007.
6. H. Rauhut, K. Schnass, and P. Vandergheynst, “Compressed sensing and redundant dictionaries,” *IEEE Transactions on Information Theory*, vol. 54, no. 5, pp. 2210–2219, 2008.
7. E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall, “Compressed sensing with coherent and redundant dictionaries,” *Applied and Computational Harmonic Analysis*, vol. 31, no. 1, pp. 59–73, 2011.
8. J. Lin, S. Li, and Y. Shen, “New bounds for restricted isometry constants with coherent tight frames,” *IEEE Transactions on Signal Processing*, vol. 61, no. 3, pp. 611–621, 2013.
9. A. Majumdar and R. K. Ward, “Accelerating multi-echo T2 weighted MR imaging: analysis prior group-sparse optimization,” *Journal of Magnetic Resonance*, vol. 210, no. 1, pp. 90–97, 2011.
10. A. Majumdar and R. K. Ward, “Compressive color imaging with group-sparsity on analysis prior,” in *Proceedings of the 17th IEEE International Conference on Image Processing (ICIP ’10)*, pp. 1337–1340, September 2010.
11. D. Model and M. Zibulevsky, “Signal reconstruction in sensor arrays using sparse representations,” *Signal Processing*, vol. 86, no. 3, pp. 624–638, 2006.
12. Y. C. Eldar and M. Mishali, “Robust recovery of signals from a structured union of subspaces,” *IEEE Transactions on Information Theory*, vol. 55, no. 11, pp. 5302–5316, 2009.
13. T. T. Cai, L. Wang, and G. Xu, “Shifting inequality and recovery of sparse signals,” *IEEE Transactions on Signal Processing*, vol. 58, no. 3, part 1, pp. 1300–1308, 2010.
14. T. T. Cai, L. Wang, and G. Xu, “New bounds for restricted isometry constants,” *IEEE Transactions on Information Theory*, vol. 56, no. 9, pp. 4388–4394, 2010.
15. T. T. Cai and A. Zhang, “Sharp RIP bound for sparse signal and low-rank matrix recovery,” *Applied and Computational Harmonic Analysis*, vol. 35, no. 1, pp. 74–93, 2013.
16. M. E. Davies and R. Gribonval, “Restricted isometry constants where $\ell^p$ sparse recovery can fail for $0 < p \le 1$,” *IEEE Transactions on Information Theory*, vol. 55, no. 5, pp. 2203–2214, 2009.