Improved RIP Conditions for Compressed Sensing with Coherent Tight Frames
This paper establishes new sufficient conditions on the restricted isometry property (RIP) for compressed sensing with coherent tight frames. One of our main results shows that an RIP (adapted to $D$) condition guarantees the stable recovery of all signals that are nearly $s$-sparse in terms of a coherent tight frame $D$ via the $\ell_1$-analysis method, which improves the existing conditions in the literature.
Compressed sensing (CS) has received much recent attention in many fields, for example, information science, electrical engineering, and statistics [1–10]. A key problem in CS is to recover a nearly sparse signal from considerably fewer linear measurements. Typically, one considers the model
$$y = Af + z,$$
where $y \in \mathbb{R}^m$ is a vector of observed measurements, $f \in \mathbb{R}^n$ is the unknown signal to be estimated, $A \in \mathbb{R}^{m \times n}$ ($m \ll n$) is a given measurement matrix, and $z \in \mathbb{R}^m$ is a vector of measurement errors. Given that $f$ is sparse or nearly sparse in terms of an orthogonal basis, one straightforward approach is to find the sparsest solution of (1) by $\ell_0$ minimization. However, it is well known that solving an $\ell_0$ minimization problem directly is NP-hard in general and thus computationally infeasible even for problems of moderate size.
To efficiently estimate $f$ in the high-dimensional setting, the most popular strategy is to replace the $\ell_0$ norm with its closest convex surrogate, the $\ell_1$ norm, which leads to the following $\ell_1$ norm minimization:
$$\hat f = \arg\min_f \|f\|_1 \quad \text{subject to} \quad y - Af \in \mathcal{B},$$
where $\mathcal{B}$ is a bounded set determined by the noise structure. It is obvious that (2) is a convex optimization problem and thus can be solved efficiently in polynomial time. Therefore, the $\ell_1$ minimization method (2) has been widely used in compressed sensing and related problems.
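To make the $\ell_1$ strategy concrete, the following self-contained sketch (our illustration, not from the paper) solves the noiseless basis pursuit problem $\min \|x\|_1$ subject to $Ax = y$ by the standard recasting as a linear program with $x = u - v$, $u, v \ge 0$; the dimensions, sparsity level, and random seed are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 80, 5          # measurements, ambient dimension, sparsity

# Ground-truth k-sparse signal and a Gaussian measurement matrix.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true               # noiseless measurements

# Basis pursuit as an LP: minimize sum(u + v) s.t. A(u - v) = y, u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With these dimensions the Gaussian matrix satisfies the recovery conditions with overwhelming probability, so the LP returns the true sparse vector up to solver tolerance.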
Many practical applications in signal and image processing point to problems where signals are sparse not in terms of an orthogonal basis but in terms of an overcomplete and coherent tight frame $D$ (see [12–16] and the references therein). In such contexts, the signal can be expressed as $f = Dx$, where $x$ is a sparse or nearly sparse coefficient vector. One natural way to recover $f$ is to first solve the $\ell_1$ minimization problem (2) with the decoding matrix $AD$ instead of $A$ to find the sparse transform coefficients $\hat x$ and then reconstruct the signal by a synthesis operation, that is, $\hat f = D\hat x$. This is the so-called $\ell_1$-synthesis or synthesis-based method. Since the columns of $D$ are correlated when $D$ is highly coherent, $AD$ may no longer satisfy the standard restricted isometry property (RIP) or the mutual incoherence property (MIP), which are commonly used in the standard CS framework. Thus, it is difficult to characterize the theoretical performance of the $\ell_1$-synthesis method under the CS framework.
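As a concrete example of an overcomplete tight frame (again our own illustration), the concatenation of two orthonormal bases scaled by $1/\sqrt{2}$ — here the identity and an orthonormal DCT basis — gives an $n \times 2n$ matrix $D$ with $DD^\ast = I$, and a frame-sparse signal is synthesized as $f = Dx$.

```python
import numpy as np
from scipy.fft import dct

n = 16
# Orthonormal DCT-II basis: C is orthogonal, so C @ C.T = I.
C = dct(np.eye(n), norm="ortho", axis=0)

# Two-orthobasis tight frame: D is n x 2n and satisfies D @ D.T = I.
D = np.hstack([np.eye(n), C]) / np.sqrt(2)
print(np.allclose(D @ D.T, np.eye(n)))   # True

# Synthesis: a signal that is 2-sparse in terms of the frame.
x = np.zeros(2 * n)
x[[3, n + 7]] = [1.0, -2.0]
f = D @ x
```

Note that $D$ is highly coherent: the identity and DCT columns are far from mutually orthogonal, which is precisely the regime where the synthesis matrix $AD$ may fail the standard RIP.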
An alternative to the $\ell_1$-synthesis method is the $\ell_1$-analysis method, which finds the estimator directly by solving the following minimization problem:
$$\hat f = \arg\min_f \|D^\ast f\|_1 \quad \text{subject to} \quad y - Af \in \mathcal{B}.$$
It has been shown that there is a remarkable difference between the two methods despite their apparent similarity. To investigate the theoretical performance of the $\ell_1$-analysis method, Candès et al. introduced the definition of the $D$-RIP: a measurement matrix $A$ is said to satisfy the restricted isometry property adapted to $D$ (abbreviated $D$-RIP) with constant $\delta_s$ if
$$(1 - \delta_s)\|Dv\|_2^2 \le \|ADv\|_2^2 \le (1 + \delta_s)\|Dv\|_2^2$$
holds for every vector $v$ that is $s$-sparse. Note that this is a natural generalization of the standard RIP. Like the standard RIP, the $D$-RIP is computationally difficult to verify for a given deterministic matrix; however, matrices satisfying the standard RIP requirements also satisfy the $D$-RIP requirements. Many previous works have derived sufficient conditions on the $D$-RIP constants for stable recovery of nearly sparse (in terms of $D$) signals via $\ell_1$-analysis: Candès et al. first presented conditions of this type, and sharper conditions were used in subsequent works. In the recent literature, J. Lin and S. Li extended the notion of the restricted orthogonality constant (ROC) used in standard CS to the setting of CS with coherent tight frames. The $D$-restricted orthogonality constant ($D$-ROC) of order $(s, t)$, denoted $\theta_{s,t}$, is defined to be the smallest positive number satisfying
$$\bigl|\langle ADu, ADv\rangle - \langle Du, Dv\rangle\bigr| \le \theta_{s,t}\,\|Du\|_2\,\|Dv\|_2$$
for all $u$ and $v$ with disjoint supports such that $u$ and $v$ are $s$-sparse and $t$-sparse, respectively. With this new notion, they extended several sufficient conditions from standard CS to the setting of CS with coherent tight frames and, moreover, obtained the first sufficient condition stated jointly in terms of the $D$-RIC and $D$-ROC that guarantees the stable recovery of nearly $s$-sparse (in terms of $D$) signals via $\ell_1$-analysis. This condition was subsequently improved in a recent paper.
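Although computing the $D$-RIP constant exactly is intractable, its definition can be probed empirically: sampling random $s$-sparse coefficient vectors $v$ and recording the worst observed deviation of $\|ADv\|_2^2 / \|Dv\|_2^2$ from $1$ gives a Monte Carlo lower bound on $\delta_s$. The sketch below (our illustration; the tight frame is built from orthonormal rows of a random orthogonal matrix) shows this for a Gaussian measurement matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d, s = 60, 32, 64, 4

# Gaussian measurement matrix with variance 1/m, so E||Au||^2 = ||u||^2.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Random Parseval tight frame: the first n rows of a random orthogonal
# matrix form an n x d matrix D with D @ D.T = I_n.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
D = Q[:n, :]

# Monte Carlo lower bound on the D-RIP constant of order s: track the
# worst-case deviation of ||A D v||^2 / ||D v||^2 from 1 over random
# s-sparse coefficient vectors v.
delta_lb = 0.0
for _ in range(2000):
    v = np.zeros(d)
    idx = rng.choice(d, size=s, replace=False)
    v[idx] = rng.standard_normal(s)
    Dv = D @ v
    r = np.linalg.norm(A @ Dv) ** 2 / np.linalg.norm(Dv) ** 2
    delta_lb = max(delta_lb, abs(r - 1.0))

print(f"empirical lower bound on the D-RIC of order {s}: {delta_lb:.3f}")
```

This only bounds $\delta_s$ from below (the true constant is a supremum over all $s$-sparse $v$), which is consistent with the remark that verifying the $D$-RIP for a deterministic matrix is computationally difficult.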
Along the lines of [8, 19], we establish in this paper more relaxed RIP conditions for stable recovery of nearly sparse (in terms of $D$) signals from incomplete and contaminated data. Specifically, the main contribution of this paper is to show that, under the $D$-RIP condition of Theorem 5, any signal that is nearly sparse in terms of $D$ can be recovered stably from its noisy measurements by solving the $\ell_1$-analysis problem (3). To establish these new conditions, in Section 2 we introduce a key technical tool, which extends a lemma from the standard CS literature, and also state two lemmas that have appeared previously. In Section 3, we establish our new results by adapting proof ideas from standard CS. Furthermore, we show that our condition is mostly weaker than the best known condition based on the $D$-RIC alone.
In this section, we first state two useful lemmas from the literature, which reveal the relationship between the $D$-RIC and the $D$-ROC and the relationship between $D$-ROCs of different orders, respectively.
Lemma 1. For positive integers $s_1$, $s_2$, we have $\theta_{s_1, s_2} \le \delta_{s_1 + s_2} \le \theta_{s_1, s_2} + \max(\delta_{s_1}, \delta_{s_2})$.
Lemma 2. For any $\alpha \ge 1$ and positive integers $s_1$, $s_2$ such that $\alpha s_2$ is an integer, we have $\theta_{s_1, \alpha s_2} \le \sqrt{\alpha}\,\theta_{s_1, s_2}$.
In the following, we will introduce and prove a key technical tool, which will be very useful for proving our main results.
Lemma 3. Let $s$ and $t$ be positive integers and let $\alpha > 0$. Suppose $u, v \in \mathbb{R}^d$ have disjoint supports, $u$ is $s$-sparse, $\|v\|_\infty \le \alpha$, and $\|v\|_1 \le t\alpha$. Then we have
$$\bigl|\langle ADu, ADv\rangle - \langle Du, Dv\rangle\bigr| \le \alpha\sqrt{t}\,\theta_{s,t}\,\|Du\|_2.$$
Proof. We prove the claim by mathematical induction on the size of the support of $v$, say $|\mathrm{supp}(v)| = m$. For $m \le t$, $v$ is $t$-sparse, so by the definition of $\theta_{s,t}$ we have
$$\bigl|\langle ADu, ADv\rangle - \langle Du, Dv\rangle\bigr| \le \theta_{s,t}\,\|Du\|_2\,\|Dv\|_2 \le \alpha\sqrt{t}\,\theta_{s,t}\,\|Du\|_2,$$
where the last step uses $\|Dv\|_2 \le \|v\|_2 \le \sqrt{\|v\|_1\,\|v\|_\infty} \le \alpha\sqrt{t}$. Thus, (8) holds for $m \le t$.
For the case $m > t$, we first assume that (8) holds for support size $m - 1$. The following discussion uses a by now standard argument, but for completeness we include a sketch. We write $v = \sum_{i=1}^{m} c_i w_i$, where $c_i > 0$ and the $w_i$ are "indicator vectors" with different supports. A vector is called an "indicator vector" if it has only one nonzero entry and that entry equals either $1$ or $-1$. Since $\|v\|_1 \le t\alpha$ and $m > t$, the set $\{i : c_i < \alpha\}$ is not empty, and we may choose its largest element. Using this choice, $v$ can be expressed as a convex combination of vectors, each supported on at most $m - 1$ indices and satisfying the same $\ell_\infty$ and $\ell_1$ bounds as $v$. Finally, applying the induction assumption to each term of the convex combination and using the triangle inequality, we arrive at the conclusion of Lemma 3.
3. Improved RIP Conditions
We now consider the stable recovery of nearly sparse (in terms of $D$) signals via the $\ell_1$-analysis method (3). We will present some new RIP conditions under two bounded noise settings:
$$\|z\|_2 \le \varepsilon \quad \text{and} \quad \|D^\ast A^\ast z\|_\infty \le \varepsilon.$$
Throughout the paper, $v_{\max(s)}$ denotes the vector $v$ with all but the largest $s$ entries in absolute value set to zero, and $v_{-\max(s)} = v - v_{\max(s)}$. The following theorem presents our main result.
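The truncation notation above is easy to make concrete; in this small numpy sketch, `v_max` is a hypothetical helper name of our own, not one used in the paper.

```python
import numpy as np

def v_max(v, s):
    """Keep the s largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]   # indices of the s largest |v_i|
    out[keep] = v[keep]
    return out

v = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(v_max(v, 2))       # [ 0. -3.  0.  2.  0.]
print(v - v_max(v, 2))   # the tail v_{-max(2)}
```

The two pieces always sum back to $v$, which is the decomposition the error bounds are stated in terms of.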
Theorem 5. Let $D$ be a given tight frame and let $s$ be a positive integer. If the measurement matrix $A$ satisfies the $D$-RIP condition (14) for some positive integers $s$ and $t$, then the solution $\hat f$ to (3) obeys (i) the error bound (16) in the $\ell_2$ bounded noise setting and (ii) the error bound (17) in the Dantzig-type bounded noise setting.
Remark 6. When $f$ is exactly $s$-sparse in terms of $D$ and no noise is present, the solution $\hat f$ to (3) equals $f$; that is to say, the recovery is exact.
Remark 7. As reported in the literature, when $D$ is the identity matrix, the bound in (14) is sharp in the sense that, for any $\varepsilon > 0$, the correspondingly relaxed $D$-RIP condition does not guarantee such exact recovery. It is still open whether this bound is also sharp when $D$ is not the identity matrix; we leave this question to interested readers.
Proof. Let $h = \hat f - f$, where $f$ is the original signal and $\hat f$ is the solution to (3). As noted in [15, 18], in contrast with the proof for standard compressed sensing, we need to develop bounds on $D^\ast h$ instead of $h$. We write $D^\ast h$ in terms of indicator vectors (as introduced in the proof of Lemma 3) with different supports. In the following, we use some proof ideas from the standard CS literature.
By the fact that $\hat f$ is a minimizer of (3), we can easily obtain the following inequality (see [5, 6, 19]):

Thus,

Note that

It then follows with (20) that

Applying Lemma 3 with suitable parameters yields

Hence,

Plugging the equality

into the above inequality, we obtain

On the other hand, for $\ell_2$ bounded noise, we first have

where we have used the following inequality:

Therefore,

It remains to bound the remaining term. By a lemma in the literature, we have

provided the stated conditions on the parameters hold. Then, setting the parameters in (30) and combining it with (18), we further get

where the last inequality uses (29). From Lemma 2, it is not hard to get

Therefore,

which completes the proof of this case.
For the Dantzig-type bounded noise, we first have an analogous estimate, and hence the corresponding bound follows. The rest of the proof is essentially the same; we only need to replace (27) with (35).
In particular, choosing the parameters appropriately in Theorem 5, we naturally obtain the following result.
Corollary 8. If the measurement matrix $A$ satisfies the resulting $D$-RIP condition, then the solution $\hat f$ of (3) satisfies (16) and (17). In particular, if the original signal $f$ is exactly $s$-sparse in terms of $D$, the recovery is exact in the noiseless case.
Remark 9. It is obvious that the obtained condition is weaker than the conditions used in earlier works.
Since most of the sufficient recovery conditions in the literature are based on the $D$-RIC alone, it would be interesting to compare these conditions with ours. To this end, we present the following lemma, which bounds the $D$-ROC in terms of the $D$-RIC.
Lemma 10. Let $D$ be a given tight frame; then the $D$-ROC and the $D$-RIC of the measurement matrix $A$ satisfy the following bound:
Proof. The proof is similar to that of a corresponding lemma in the standard CS setting. For simplicity, we only present a sketch. For two sparse vectors $u$ and $v$ with disjoint supports, we can write $u$ and $v$ as $u = \sum_{i \in T_1} a_i e_i$ and $v = \sum_{j \in T_2} b_j e_j$, where $T_1$ is the support of $u$, $T_2$ is the support of $v$, and $e_i$ is the vector whose $i$th entry equals $1$ and all the other entries equal zero.
Case 1 (the even case). Without loss of generality, suppose $u$ and $v$ are normalized such that $\|Du\|_2 = \|Dv\|_2 = 1$. Divide $T_1$ and $T_2$ into two subsets such that the resulting index sets are disjoint and of equal size, and denote the corresponding decompositions accordingly. Then, by the definition of the $D$-RIP (4), we obtain a first estimate; similarly, we obtain the matching estimate in the other direction. From (39) and (40), the claimed bound follows in this case.
Case 2 (the odd case). Without loss of generality, suppose the index sets are split as evenly as possible, where one part may be larger by one. Again we may assume $u$ and $v$ are normalized such that $\|Du\|_2 = \|Dv\|_2 = 1$. We then obtain estimates analogous to those in Case 1. From (42) and (43), we deduce the bound, which implies the conclusion of Lemma 10.
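For reference, the core estimate in arguments of this type follows a standard polarization route; a sketch, assuming the $D$-ROC is defined through $|\langle ADu, ADv\rangle - \langle Du, Dv\rangle|$ for disjointly supported $u$ ($s$-sparse) and $v$ ($t$-sparse) with $\|Du\|_2 = \|Dv\|_2 = 1$, is as follows.

```latex
% Polarization identity plus the D-RIP of order s + t (since u + v and
% u - v are (s + t)-sparse), followed by the parallelogram law:
\begin{align*}
\bigl|\langle ADu, ADv\rangle - \langle Du, Dv\rangle\bigr|
  &= \tfrac{1}{4}\Bigl|\bigl(\|AD(u+v)\|_2^2 - \|D(u+v)\|_2^2\bigr)
     - \bigl(\|AD(u-v)\|_2^2 - \|D(u-v)\|_2^2\bigr)\Bigr| \\
  &\le \tfrac{1}{4}\,\delta_{s+t}\bigl(\|D(u+v)\|_2^2 + \|D(u-v)\|_2^2\bigr)
   = \tfrac{1}{2}\,\delta_{s+t}\bigl(\|Du\|_2^2 + \|Dv\|_2^2\bigr)
   = \delta_{s+t}.
\end{align*}
```

Taking the supremum over admissible $u$ and $v$ gives the basic relation $\theta_{s,t} \le \delta_{s+t}$; the case analysis in the proof of Lemma 10 refines estimates of this kind by splitting the supports.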
Corollary 11. If for some integer $t$ the $D$-RIC of $A$ satisfies the stated bound, then the $D$-RIP condition of Theorem 5 holds.
Remark 12. As stated in the literature, the best known $D$-RIC-based condition guarantees stable recovery of nearly sparse (in terms of $D$) signals via $\ell_1$-analysis. Based on Corollary 11, our presented condition is mostly weaker than that condition. Basically, there are several benefits to weakening the $D$-RIP condition. For example, using a standard covering argument, it is easy to show that, for any positive integer $s$ and any sufficiently small tolerance, the $D$-RIC of a Gaussian or Bernoulli random measurement matrix satisfies (46).
Note that our condition is implied by a bound on the $D$-RIC, which is in turn implied by $D$-RIC conditions of higher order. Hence, using (46) and following the discussion of Section IV in the corresponding reference, the number of measurements should satisfy a bound of the order $s\log(n/s)$ to ensure that the condition holds with high probability. Similarly, each of the intermediate conditions holds with the stated probability provided the number of measurements satisfies the corresponding bound. Therefore, for large $s$ and $n$, the measurement-size requirement to ensure our condition is less than 71.2% (115.4/162) of the corresponding requirement for the best known $D$-RIC-based condition. This clearly demonstrates the advantage of our presented condition over the best known condition.
Under the framework of CS with coherent tight frames, we have presented some improved RIP conditions for stable recovery of nearly sparse (in terms of $D$) signals via the $\ell_1$-analysis method, which are weaker than the existing ones. Although only the convex optimization method is considered here, it would also be interesting to relax the RIP conditions for nonconvex optimization methods. It is known that the standard nonconvex $\ell_p$ ($0 < p < 1$) minimization method can stably recover conventional nearly sparse signals under weaker RIP conditions than the standard $\ell_1$ minimization method [22–24]. As such, one may try to weaken the RIP conditions for the nonconvex $\ell_p$-analysis method, thus facilitating the further use of nonconvex analysis-based methods in more practical CS scenarios.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
This work was supported by the National Natural Science Foundation of China under Grant nos. 11501440, 61673015, and 62173020.
E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
M. Lustig, J. M. Santos, J. H. Lee, D. L. Donoho, and J. M. Pauly, "Application of compressed sensing for rapid MR imaging," in Proceedings of SPARS, Rennes, France, 2005.
M. Elad, P. Milanfar, and R. Rubinstein, "Analysis versus synthesis in signal priors," Inverse Problems, vol. 23, no. 3, pp. 947–968, 2007.
J. Lin and S. Li, "Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO," Applied and Computational Harmonic Analysis, vol. 37, no. 1, pp. 126–139, 2014.
R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253–263, 2008.