Discrete Dynamics in Nature and Society
Volume 2017, Article ID 4372080, 8 pages
https://doi.org/10.1155/2017/4372080
Research Article

Improved RIP Conditions for Compressed Sensing with Coherent Tight Frames

1School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China
2Shenyang Institute of Automation, Chinese Academy of Science, Shenyang 10016, China
3School of Mathematics and Statistics, Southwest University, Chongqing 400715, China

Correspondence should be addressed to Jianjun Wang; wjj@swu.edu.cn

Received 1 February 2017; Accepted 12 April 2017; Published 15 May 2017

Academic Editor: Daniele Fournier-Prunaret

Copyright © 2017 Yao Wang and Jianjun Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper establishes new sufficient conditions on the restricted isometry property (RIP) for compressed sensing with coherent tight frames. One of our main results shows that a new RIP (adapted to $D$) condition guarantees the stable recovery of all signals that are nearly $s$-sparse in terms of a coherent tight frame $D$ via the $\ell_1$-analysis method, and that this condition improves the existing ones in the literature.

1. Introduction

Compressed sensing (CS) has received much recent attention in many fields, for example, information science, electrical engineering, and statistics [1–10]. A key problem in CS is to recover a nearly sparse signal from considerably fewer linear measurements. Typically, one considers the model
$$y = Ax + z, \tag{1}$$
where $y \in \mathbb{R}^m$ is a vector of observed measurements, $x \in \mathbb{R}^n$ is an unknown signal to be estimated, $A \in \mathbb{R}^{m \times n}$ (with $m < n$) is a given measurement matrix, and $z \in \mathbb{R}^m$ is a vector of measurement errors. Since $x$ is sparse or nearly sparse in terms of an orthogonal basis, one straightforward approach is to find the sparsest solution of (1) by $\ell_0$ minimization. However, it is well known that solving an $\ell_0$ minimization problem directly is NP-hard in general and is thus computationally infeasible even for moderately sized problems [11].
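To make the measurement model concrete, here is a minimal NumPy sketch that generates an $s$-sparse signal and noisy compressed measurements $y = Ax + z$; the dimensions, sparsity level, and noise scale are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative dimensions (not from the paper): an n-dimensional signal,
# m < n random Gaussian measurements, s nonzero entries.
n, m, s = 256, 80, 8
rng = np.random.default_rng(0)

# s-sparse signal x in R^n
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Measurement matrix A (column-normalized Gaussian) and noisy measurements y = A x + z
A = rng.standard_normal((m, n)) / np.sqrt(m)
z = 0.01 * rng.standard_normal(m)
y = A @ x + z
```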

To efficiently estimate $x$ in the high-dimensional setting, the most popular strategy is to replace the $\ell_0$ norm with its closest convex surrogate, the $\ell_1$ norm, which leads to the following $\ell_1$ norm minimization:
$$\hat{x} = \operatorname*{arg\,min}_{\tilde{x}} \|\tilde{x}\|_1 \quad \text{subject to} \quad y - A\tilde{x} \in \mathcal{B}, \tag{2}$$
where $\|\tilde{x}\|_1 = \sum_i |\tilde{x}_i|$ and $\mathcal{B}$ is a bounded set determined by the noise structure. Clearly, (2) is a convex optimization problem and can therefore be solved efficiently in polynomial time. As a result, the $\ell_1$ minimization method (2) has been widely used in compressed sensing and related problems.
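As an illustration (not part of the paper), the constrained $\ell_1$ minimization (2) with an $\ell_2$ noise constraint can be handed to a generic convex solver; the sketch below uses CVXPY, with the noise bound `eps` as an assumed input.

```python
import cvxpy as cp

def l1_min(A, y, eps):
    """Basis pursuit denoising form of (2): min ||x||_1 s.t. ||y - A x||_2 <= eps."""
    n = A.shape[1]
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                      [cp.norm(y - A @ x, 2) <= eps])
    prob.solve()
    return x.value
```

CVXPY is used here only as a convenient generic interface; any solver that handles second-order cone constraints would serve equally well.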

A number of practical applications in signal and image processing lead to problems in which signals are sparse not in terms of an orthogonal basis but in terms of an overcomplete and coherent tight frame (see [12–16] and the references therein). In such contexts, the signal can be expressed as $x = Df$, where $D$ is the tight frame and $f$ is a sparse or nearly sparse coefficient vector. One natural way to recover $x$ is to first solve the $\ell_1$ minimization problem (2) with the decoding matrix $AD$ in place of $A$ to find the sparse transform coefficients $\hat{f}$ and then reconstruct the signal by a synthesis operation, that is, $\hat{x} = D\hat{f}$. This is the so-called $\ell_1$-synthesis or synthesis-based method. Since the columns of $AD$ are correlated when $D$ is highly coherent, $AD$ may no longer satisfy the standard restricted isometry property (RIP [1]) or the mutual incoherence property (MIP [17]), which are commonly used in the standard CS framework. Thus, it is difficult to characterize the theoretical performance of the $\ell_1$-synthesis method within the CS framework.
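A minimal sketch of the synthesis-based route just described, assuming the tight frame is stored as an $n \times d$ NumPy array `D`: solve (2) with the composite matrix $AD$ for the coefficients and then synthesize $\hat{x} = D\hat{f}$. The function name and interface are illustrative, not from the paper.

```python
import cvxpy as cp

def l1_synthesis(A, D, y, eps):
    """l1-synthesis sketch: recover coefficients f using AD, then set x = D f."""
    d = D.shape[1]
    f = cp.Variable(d)
    prob = cp.Problem(cp.Minimize(cp.norm1(f)),
                      [cp.norm(y - (A @ D) @ f, 2) <= eps])
    prob.solve()
    return D @ f.value
```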

An alternative to the $\ell_1$-synthesis method is the $\ell_1$-analysis method, which finds the estimator directly by solving the following minimization problem:
$$\hat{x} = \operatorname*{arg\,min}_{\tilde{x}} \|D^{*}\tilde{x}\|_1 \quad \text{subject to} \quad y - A\tilde{x} \in \mathcal{B}. \tag{3}$$
It has been shown in [14] that there is a remarkable difference between the two methods despite their apparent similarity. To investigate the theoretical performance of the $\ell_1$-analysis method, Candès et al. [13] introduced the definition of the $D$-RIP: a measurement matrix $A$ is said to satisfy the restricted isometry property adapted to $D$ (abbreviated $D$-RIP) with constant $\delta_s$ if
$$(1-\delta_s)\|Dv\|_2^2 \le \|ADv\|_2^2 \le (1+\delta_s)\|Dv\|_2^2 \tag{4}$$
holds for every vector $v$ that is $s$-sparse. Note that this is a natural generalization of the RIP introduced in [1]. As with the standard RIP, it is computationally difficult to verify the $D$-RIP for a given deterministic matrix; but, as discussed in [13], matrices that satisfy the standard RIP requirements also satisfy the $D$-RIP requirements. Many previous works have derived sufficient conditions on the $D$-RIP constant for stable recovery of nearly sparse (in terms of $D$) signals via $\ell_1$-analysis. Candès et al. first presented such conditions in [13], and further conditions were obtained in [18] and [19]. More recently, S. Li and J. Lin [19] extended the notion of the restricted orthogonality constant (ROC) used in standard CS to the setting of CS with coherent tight frames: the $D$-restricted orthogonality constant ($D$-ROC) of order $(s_1, s_2)$, denoted $\theta_{s_1, s_2}$, is defined to be the smallest positive number for which the corresponding restricted orthogonality inequality holds for every pair of vectors that are $s_1$-sparse and $s_2$-sparse, respectively. With this new notion, they carried several sufficient conditions from standard CS over to the setting of CS with coherent tight frames. Moreover, they also obtained the first sufficient condition of its kind that guarantees the stable recovery of nearly $s$-sparse (in terms of $D$) signals via $\ell_1$-analysis. In a recent paper [20], this condition was further improved.
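For comparison, a corresponding sketch of the $\ell_1$-analysis method (3) under an $\ell_2$ noise constraint; for a real frame, `D.T @ x` plays the role of $D^{*}x$. Again, the interface is an illustrative assumption rather than code from the paper.

```python
import cvxpy as cp

def l1_analysis(A, D, y, eps):
    """l1-analysis sketch: minimize ||D^T x||_1 subject to ||y - A x||_2 <= eps."""
    n = A.shape[1]
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(D.T @ x)),
                      [cp.norm(y - A @ x, 2) <= eps])
    prob.solve()
    return x.value
```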

Along the lines of [8, 19], we establish in this paper more relaxed RIP conditions for the stable recovery of nearly sparse (in terms of $D$) signals from incomplete and contaminated data. Specifically, the main contribution of this paper is to show that, under a weaker $D$-RIP condition, any signal that is nearly sparse in terms of $D$ can be recovered stably from its noisy measurements by solving the $\ell_1$-analysis problem (3). To establish these new conditions, in Section 2 we introduce a key technical tool, which is an extension of a lemma in [8], and we also state two lemmas that appeared in [8]. In Section 3, we establish our new results using proof ideas developed for standard CS in [8]. Furthermore, we show that our condition is mostly weaker than the best known condition.

2. Preliminaries

In this section, we first state two useful lemmas that appeared in [8], which reveal the relationship between the $D$-RIC and the $D$-ROC and the relationship between $D$-ROCs of different orders, respectively.

Lemma 1. For positive integers , , we have

Lemma 2. For any and positive integers , such that is an integer, we have

In the following, we will introduce and prove a key technical tool, which will be very useful for proving our main results.

Lemma 3. Let and and . Suppose , and is a -sparse vector. If and , then we have

Proof. We shall prove it by mathematical induction. Suppose the size of support of is , that is, . For , by the definition of , we haveThus, (8) holds for .
For the case , we first assume that (8) holds for . The following discussion uses the same argument as in [8]; for completeness, we include a sketch. Now, for , we write as , where , and are "indicator vectors" with different supports. A vector is called an "indicator vector" if it has only one nonzero entry whose value is either 1 or $-1$. Since , the set is not empty. Now we choose the largest element , which means Define It is not hard to check that , . Similar to the proof of the corresponding lemma in [8], we also have From the definition of , we obtain that is -sparse. Finally, using the induction assumption, we get which yields the conclusion of Lemma 3.

Remark 4. When $D$ is an identity matrix and the vectors involved have disjoint supports, Lemma 3 gives essentially the same result as the corresponding lemma in [8].
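As a concrete illustration of the indicator-vector decomposition used in the proof of Lemma 3, the following sketch writes a real vector as a combination of signed indicator vectors with coefficients ordered by decreasing magnitude; it is purely illustrative and not part of the argument.

```python
import numpy as np

def indicator_decomposition(v):
    """Write v = sum_i c_i * u_i, with c_i >= 0 sorted in decreasing order and
    each u_i a signed indicator vector (one nonzero entry equal to +1 or -1)."""
    order = np.argsort(-np.abs(v))            # indices by decreasing magnitude
    coeffs, indicators = [], []
    for j in order:
        if v[j] == 0:
            break
        u = np.zeros_like(v, dtype=float)
        u[j] = np.sign(v[j])                  # signed indicator vector
        coeffs.append(abs(v[j]))
        indicators.append(u)
    return coeffs, indicators
```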

3. Improved RIP Conditions

We now consider the stable recovery of nearly sparse (in terms of $D$) signals via the $\ell_1$-analysis method (3). We present some new RIP conditions under two bounded noise settings: one with $\ell_2$-bounded noise ($\|z\|_2$ bounded) and one with Dantzig-selector-type noise ($\|D^{*}A^{*}z\|_\infty$ bounded). Throughout the paper, $w_{\max(s)}$ denotes the vector obtained from $w$ by setting all but its $s$ largest (in absolute value) entries to zero, and $w_{-\max(s)} = w - w_{\max(s)}$. The following theorem presents our main result.

Theorem 5. Let $D$ be a given tight frame, and . If the measurement matrix $A$ satisfies the $D$-RIP condition with for some positive integers and with , then the solution $\hat{x}$ to (3) obeys
(i) (for $\ell_2$ bounded noise)
(ii) (for Dantzig-selector-type bounded noise)

Remark 6. When $x$ is exactly $s$-sparse in terms of $D$ and no noise is present, the solution to (3) is equal to $x$; that is to say, the recovery is exact.

Remark 7. As reported in [8], when $D$ is the identity matrix, the bound in Theorem 5 is sharp in the sense that, for any $\epsilon > 0$, the correspondingly weakened $D$-RIP condition does not guarantee such exact recovery. It is still open whether this bound is also sharp when $D$ is not the identity matrix; we leave this question to interested readers.
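Before turning to the proof, note that the tail quantity appearing in the error bounds, namely the part of $D^{*}x$ outside its $s$ largest entries, is easy to compute in experiments; the helper below is an illustrative sketch, not part of the analysis.

```python
import numpy as np

def tail_after_best_s_term(w, s):
    """Return ||w - w_max(s)||_1: the l1 norm of w with its s largest-magnitude
    entries removed (the 'nearly sparse' residual used in stable-recovery bounds)."""
    idx = np.argsort(np.abs(w))[::-1]      # indices sorted by decreasing magnitude
    return np.abs(w[idx[s:]]).sum()        # sum of all but the s largest entries
```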

Proof. Let $h = \hat{x} - x$, where $x$ is the original signal and $\hat{x}$ is the solution to (3). As noted in [15, 18], unlike the proof in [8] for standard compressed sensing, we need to develop bounds involving $D^{*}h$ instead of $h$. We write , where and are indicator vectors (as introduced in the proof of Lemma 3) with different supports. In the following, we use some proof ideas from [8].
Since $\hat{x}$ is a minimizer of (3), we can easily obtain the following inequality (see [5, 6, 19]): Thus, Note that It then follows with (20) that Applying Lemma 3 with , and yields Hence, Plugging the equality into the above inequality, we obtain On the other hand, for the $\ell_2$ bounded noise, we first have where we have used the following inequality: Therefore, It is known that ; thus, we now bound . By a lemma in [7], we have if , and with . Then, setting in (30) and combining it with (18), we further get where the last inequality uses (29). From Lemma 2, it is not hard to get Therefore, which gives the conclusion.
For the Dantzig-selector-type bounded noise, we first have Hence, The rest of the proof is essentially the same; we only need to replace (27) with (35).

Note that, for a specific choice of the parameters in Theorem 5, we naturally obtain the following result.

Corollary 8. If the measurement matrix $A$ satisfies , then the solution of (3) satisfies (16) and (17). In particular, if the original signal is exactly $s$-sparse in terms of $D$, the recovery is exact in the noiseless case.

Remark 9. It is obvious that the obtained condition is weaker than the conditions used in [15].
Since most of the sufficient recovery conditions in the literature are based on the $D$-RIC alone, it would be interesting to compare those conditions with ours. To this end, we present the following lemma, which bounds the $D$-ROC in terms of the $D$-RIC.

Lemma 10. Let $D$ be a given tight frame; then, the $D$-ROC and the $D$-RIC of the measurement matrix $A$ satisfy

Proof. The proof is similar to that of the corresponding lemma in [8]; for simplicity, we only sketch it here. For two -sparse (in terms of ) signals , , we can write and as where , , is the support of , is the support of , and is the vector whose th entry equals 1 and all of whose other entries equal zero.
Case 1 ( is even). Without loss of generality, suppose and are normalized such that . Divide and into two subsets such that , , and are disjoint and for . Denote Then, by the definition of the $D$-RIP (4), we have Similarly, we have Then, from (39) and (40), we have Thus, .
Case 2 ( is odd). Without loss of generality, suppose , and might be for . We can also assume that and are normalized such that and . Then, we have Similarly, we have Then, from (42) and (43), we have which implies
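The even/odd case analysis above is a polarization-type argument; for reference, the standard real polarization identity on which such estimates are typically built reads as follows (a generic identity, not a reproduction of the omitted steps):
$$\langle ADu,\, ADv\rangle \;=\; \tfrac{1}{4}\left(\|AD(u+v)\|_2^2 \;-\; \|AD(u-v)\|_2^2\right).$$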

The following results can be obtained directly from Theorem 5 and Lemma 10.

Corollary 11. For some integer , if , then we have

Remark 12. As stated in [20], the condition obtained there is the best known condition for stable recovery of nearly sparse (in terms of $D$) signals via $\ell_1$-analysis. Based on Corollary 11, our presented condition is mostly weaker than it. Basically, there are several benefits to weakening the $D$-RIP condition. For example, using a standard covering argument as in [21], it is easy to show that, for any positive integer and , the $D$-RIC of a Gaussian or Bernoulli random measurement matrix satisfies

Note that is implied by , which is further implied by the conditions and . Hence, using (46) and following the discussion in Section IV of [8], the number of measurements should satisfy to ensure that the condition holds with probability at least . Similarly, holds with probability at least if the number of measurements satisfies , with guaranteeing with probability at least , and with guaranteeing with probability at least . Therefore, for large and , the size requirement to ensure our condition is less than 71.2% (115.4/162) of the corresponding size requirement to ensure the previously best known condition. This clearly demonstrates the advantage of our presented condition over the best known condition.
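To make the measurement-count discussion concrete, the sketch below draws a Gaussian or Bernoulli measurement matrix whose number of rows scales as $s\log(n/s)$, in line with the covering argument of [21]; the constant `C` is an arbitrary illustrative choice, since the precise constants of the discussion above are not reproduced here.

```python
import numpy as np

def random_measurement_matrix(n, s, C=10.0, kind="gaussian", seed=0):
    """Draw an m x n Gaussian or Bernoulli matrix with m ~ C * s * log(n/s).

    The scaling m proportional to s*log(n/s) follows the standard covering
    argument [21]; the constant C is an illustrative placeholder, not a value
    from the paper.
    """
    rng = np.random.default_rng(seed)
    m = int(np.ceil(C * s * np.log(n / s)))
    if kind == "gaussian":
        A = rng.standard_normal((m, n)) / np.sqrt(m)
    else:  # symmetric Bernoulli (+1/-1) entries
        A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
    return A
```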

4. Conclusion

Under the framework of CS with coherent tight frames, we have presented in this paper some improved RIP conditions for the stable recovery of nearly sparse (in terms of $D$) signals via the $\ell_1$-analysis method, which are weaker than the existing ones. Although only the convex optimization method is considered here, it would also be interesting to relax the RIP conditions for nonconvex optimization methods. It is known that the standard $\ell_q$ ($0<q<1$) minimization method can recover conventional nearly sparse signals stably under weaker RIP conditions than the standard $\ell_1$ minimization method [22–24]. As such, one may make an effort to weaken the RIP condition for the nonconvex $\ell_q$-analysis method, thus facilitating the further use of nonconvex analysis-based methods in more practical CS scenarios.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by Natural Science Foundation of China under Grant nos. 11501440, 61673015, and 62173020.

References

  1. E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
  2. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
  3. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  4. E. J. Candès and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
  5. T. T. Cai, L. Wang, and G. Xu, "New bounds for restricted isometry constants," IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4388–4394, 2010.
  6. T. T. Cai, G. Xu, and J. Zhang, "On recovery of sparse signals via $\ell_1$ minimization," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3388–3397, 2009.
  7. T. T. Cai and A. Zhang, "Sharp RIP bound for sparse signal and low-rank matrix recovery," Applied and Computational Harmonic Analysis, vol. 35, no. 1, pp. 74–93, 2013.
  8. T. T. Cai and A. Zhang, "Compressed sensing and affine rank minimization under restricted isometry," IEEE Transactions on Signal Processing, vol. 61, no. 13, pp. 3279–3290, 2013.
  9. M. Grasmair, "Non-convex sparse regularisation," Journal of Mathematical Analysis and Applications, vol. 365, no. 1, pp. 19–28, 2010.
  10. M. Lustig, J. M. Santos, J. H. Lee, D. L. Donoho, and J. M. Pauly, "Application of compressed sensing for rapid MR imaging," in Proceedings of the SPARS, Rennes, France, 2005.
  11. B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
  12. H. Rauhut, K. Schnass, and P. Vandergheynst, "Compressed sensing and redundant dictionaries," IEEE Transactions on Information Theory, vol. 54, no. 5, pp. 2210–2219, 2008.
  13. E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall, "Compressed sensing with coherent and redundant dictionaries," Applied and Computational Harmonic Analysis, vol. 31, no. 1, pp. 59–73, 2011.
  14. M. Elad, P. Milanfar, and R. Rubinstein, "Analysis versus synthesis in signal priors," Inverse Problems, vol. 23, no. 3, pp. 947–968, 2007.
  15. J. Lin, S. Li, and Y. Shen, "New bounds for restricted isometry constants with coherent tight frames," IEEE Transactions on Signal Processing, vol. 61, no. 3, pp. 611–621, 2013.
  16. J. Lin and S. Li, "Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO," Applied and Computational Harmonic Analysis, vol. 37, no. 1, pp. 126–139, 2014.
  17. D. L. Donoho, M. Elad, and V. N. Temlyakov, "Stable recovery of sparse overcomplete representations in the presence of noise," IEEE Transactions on Information Theory, vol. 52, no. 1, pp. 6–18, 2006.
  18. Y. Liu, T. Mi, and S. Li, "Compressed sensing with general frames via optimal-dual-based $\ell_1$-analysis," IEEE Transactions on Information Theory, vol. 58, no. 7, pp. 4201–4214, 2012.
  19. S. Li and J. Lin, "Compressed sensing with coherent tight frames via $\ell_q$-minimization for 0 < q ≤ 1," Inverse Problems and Imaging, vol. 8, no. 3, pp. 761–777, 2014.
  20. R. Zhang and S. Li, "Optimal D-RIP bounds in compressed sensing," Acta Mathematica Sinica (English Series), vol. 31, no. 5, pp. 755–766, 2015.
  21. R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253–263, 2008.
  22. R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707–710, 2007.
  23. Q. Sun, "Recovery of sparsest signals via $\ell_q$-minimization," Applied and Computational Harmonic Analysis, vol. 32, no. 3, pp. 329–341, 2012.
  24. C.-B. Song and S.-T. Xia, "Sparse signal recovery by $\ell_q$ minimization under restricted isometry property," IEEE Signal Processing Letters, vol. 21, no. 9, pp. 1154–1158, 2014.