Abstract

This paper first presents a new completely perturbed compressed sensing (CS) model, which, based on the standard CS model, simultaneously incorporates a general nonzero perturbation $E$ into the sensing matrix and a noise $n$ into the signal; we call it the noise-folding completely perturbed CS model. Our construction whitens the newly proposed model and explores its restricted isometry property (RIP) and coherence under certain conditions. Finally, we use OMP to give a numerical simulation which shows that our model is feasible, although the recovered signal is not exact compared with the original signal because of the measurement noise $e$, the signal noise $n$, and the perturbation $E$ involved.

1. Introduction

The compressed sensing (CS) model, proposed by Candès et al. [1] and Donoho [2], has become a hot topic and has attracted many researchers over the past years because it can recover a sparse signal from far fewer measurements than classical sampling requires. It has been widely applied in many areas such as radar systems [3], signal processing [4], and image processing [5]. These applications depend on the ability of the CS model to recover the original signal with related algorithms, including convex relaxation [6, 7], greedy pursuit [7], and Bayesian algorithms [8, 9], which are used to estimate the best approximation of the original signal.

The classic and basic CS model, in an unperturbed scenario, can be formulated as
$$y = \Phi x. \tag{1}$$
Here, $y \in \mathbb{R}^{m}$ is the measurement vector (observation value) and $\Phi \in \mathbb{R}^{m \times n}$ is a full-rank measurement matrix with $m \ll n$. A signal $x \in \mathbb{R}^{n}$ is called $K$-sparse if no more than $K$ of its entries are nonzero.
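To fix ideas, here is a minimal Python sketch of model (1). The dimensions $m = 64$, $n = 256$, $K = 8$ and the Gaussian construction of $\Phi$ are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K = 64, 256, 8           # assumed dimensions: m measurements, n ambient, K-sparse

# Gaussian measurement matrix (columns have unit norm in expectation)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# K-sparse signal: K nonzero entries at random positions
x = np.zeros(n)
support = rng.choice(n, K, replace=False)
x[support] = rng.standard_normal(K)

y = Phi @ x                    # unperturbed CS model (1): y = Phi x
print(y.shape, np.count_nonzero(x))
```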

To date, the basic model has a mature theory and there are many different algorithms [6, 7], such as basis pursuit (BP) [1, 10], orthogonal matching pursuit (OMP) [11–15], Compressive Sampling Matching Pursuit (CoSaMP) [16], and Bayesian algorithms [8, 9], which can recover the signal value exactly and have been utilized in many areas [17–20].

But in practice, the measurement vector $y$ in (1) is often contaminated by a noise or an error. More concretely, a noise term $e$, called an additive noise, is incorporated into $y$ to yield a partially perturbed model [21–23]:
$$y = \Phi x + e, \tag{2}$$
where the noise or error $e$ ($e \neq 0$) is uncorrelated with the signal $x$. There are two methods to model noise in [24]; here, the noise $e$ is randomly sampled from a Gaussian distribution. This model has been used in many areas [21–23] and has naturally acquired a mature theory in recent years. For example, a number of accurate algorithms for (2) have emerged, for example, BP [1, 21], OMP [21], CoSaMP [16], and Bayesian algorithms [8, 9].

In 2010, Herman and Strohmer [25] first incorporated a random nontrivial perturbation $E$ into the matrix $\Phi$ in (2) to generate a generally perturbed model [25–27] as follows:
$$y = (\Phi + E)x + e, \tag{3}$$
where $E$ is called a general perturbation or a multiplicative noise. They studied the influence of $E$ on the signal $x$ and indicated that considering this CS model is a must [25–27]. Intuitively, the multiplicative noise $Ex$ is harder to analyze than the additive noise $e$ because it is correlated with the signal $x$.

As for (3), there are two different scenarios from different points of view [25–27]. First, from the user's point of view, the sensing process can be formulated as follows:
$$y = \Phi x + e, \tag{4}$$
where only the perturbed matrix $\hat{\Phi} = \Phi + E$ is known to the user. Its recovery process can be expressed by
$$\min_{z} \|z\|_{1} \quad \text{subject to} \quad \|y - \hat{\Phi} z\|_{2} \leq \varepsilon'. \tag{5}$$

Thus, the useful measurement matrix is the perturbed matrix $\hat{\Phi}$, not the original measurement matrix $\Phi$. Recovery of the signal for this system was studied with BP in [16, 25] and OMP in [26, 27].

The second scenario is from the designer's perspective [25–27]. The sensing process is written as $y = \hat{\Phi}x + e = (\Phi + E)x + e$, and the recovery process is written as
$$\min_{z} \|z\|_{1} \quad \text{subject to} \quad \|y - \Phi z\|_{2} \leq \varepsilon'.$$
Here the useful sensing matrix is $\Phi$, not $\hat{\Phi}$, and the observation value is $y$. To the best of our knowledge, no work has focused on recovering the signal in the context of a general perturbation except for [25–27].
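The distinction between the two viewpoints is easy to state in code. The following sketch, with assumed illustrative dimensions and noise levels, generates data for model (3) and records which matrix is available to the recovery algorithm in each scenario.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, K, eps = 64, 256, 8, 0.05
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
E = eps * rng.standard_normal((m, n)) / np.sqrt(m)   # multiplicative perturbation
x = np.zeros(n); x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
e = 0.01 * rng.standard_normal(m)                    # additive measurement noise

# User's view: nature senses with the true Phi, but only the perturbed
# matrix Phi + E is available to the recovery algorithm.
y_user = Phi @ x + e
A_recovery_user = Phi + E

# Designer's view: the hardware actually applies Phi + E, while the
# recovery algorithm uses the nominal design Phi.
y_designer = (Phi + E) @ x + e
A_recovery_designer = Phi
```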

In some practical scenarios, the signal itself is often contaminated by noise, a case that arises, for example, in sub-Nyquist converters. Though introducing noise into the signal is significant, few papers studied such signal noise except for [24], which first added an unknown random noise $n$ to the signal $x$ of (2) to produce the noise folding CS model [24]:
$$y = \Phi(x + n) + e. \tag{6}$$
They analyzed the RIP and coherence of the equivalent system after whitening and showed that the differences in the RIP and coherence between the original and whitened matrices are small [24]. Based on [24–27], we propose a new CS model and study its related properties in Section 3.
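The name "noise folding" reflects the fact that, when $\Phi$ has approximately unit-norm columns, the signal noise is folded into the measurement domain with its variance amplified by roughly $n/m$. The following sketch checks this empirically under the assumption that $\Phi$ has i.i.d. $\mathcal{N}(0, 1/m)$ entries; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 64, 256
sigma_n, sigma_e = 0.1, 0.1
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # unit-norm columns in expectation

# Effective noise in the folded model (6): y = Phi x + (Phi n + e)
trials = 2000
w = np.array([Phi @ (sigma_n * rng.standard_normal(n))
              + sigma_e * rng.standard_normal(m) for _ in range(trials)])

print("empirical per-entry noise variance:   ", w.var())
print("predicted sigma_e^2 + (n/m) sigma_n^2:", sigma_e**2 + (n / m) * sigma_n**2)
```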

2. Preliminaries

In this paper, we restrict our attention to the RIP and coherence of our new CS model. By convention, the sensing matrix $\Phi$ and the perturbation $E$ are assumed to have independent and identically distributed (i.i.d.) Gaussian entries, since such matrices satisfy the RIP and have small coherence, and so forth [7, 24], with high probability.

Definition 1 (see [7]). A sensing matrix $\Phi$ satisfies the restricted isometry property (RIP) of order $K$ if there exists $\delta_{K} \in (0,1)$ such that
$$(1 - \delta_{K})\|x\|_{2}^{2} \leq \|\Phi x\|_{2}^{2} \leq (1 + \delta_{K})\|x\|_{2}^{2}$$
for any $K$-sparse vector $x$ with $\|x\|_{0} \leq K$, where the smallest such nonnegative number $\delta_{K}$ is called the restricted isometry constant (RIC).
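Computing the exact RIC $\delta_{K}$ requires examining all $\binom{n}{K}$ column supports and is intractable, but a Monte Carlo scan over random supports gives a lower estimate. A minimal sketch, with assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, K, trials = 64, 256, 8, 500
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Monte Carlo lower bound on delta_K: for x supported on T, the ratio
# ||Phi x||^2 / ||x||^2 lies between the extreme squared singular
# values of the submatrix Phi_T.
delta = 0.0
for _ in range(trials):
    T = rng.choice(n, K, replace=False)
    s = np.linalg.svd(Phi[:, T], compute_uv=False)
    delta = max(delta, abs(s[0]**2 - 1), abs(s[-1]**2 - 1))

print("estimated delta_K >=", round(delta, 3))
```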

Definition 1′ (see [24]). For (1) and (2), there is another equivalent statement of the RIP of $\Phi$, denoted by RIP$(K, \lambda_{\min}, \lambda_{\max})$, in some special cases. For any index set $T$ of size $K$, let $\Phi_{T}$ denote the submatrix of $\Phi$ consisting of the column vectors indexed by $T$; the matrix $\Phi$ possesses the RIP with constants $\lambda_{\min}, \lambda_{\max}$ if
$$\lambda_{\min}\|u\|_{2}^{2} \leq \|\Phi_{T} u\|_{2}^{2} \leq \lambda_{\max}\|u\|_{2}^{2}, \quad u \in \mathbb{R}^{K},$$
for any index set $T$ of size $K$, where $K$ is a positive integer.

For (6), there exists another form of the RIP for the whitened matrix, given by Lemma 2 [24], since the matrix is whitened.

Lemma 2 (see [24]). For the noise folding model (6), the RIP for the whitened matrix $B$ can be formulated as
$$\frac{\lambda_{\min}}{1 + \gamma\,\lambda_{\max}(\Phi\Phi^{T})}\,\|u\|_{2}^{2} \leq \|B_{T} u\|_{2}^{2} \leq \frac{\lambda_{\max}}{1 + \gamma\,\lambda_{\min}(\Phi\Phi^{T})}\,\|u\|_{2}^{2},$$
where $\gamma = \sigma_{n}^{2}/\sigma_{e}^{2}$ and the matrix $B$ is obtained after whitening the sensing matrix $\Phi$.

The perturbation $E$ and sensing matrix $\Phi$ in (3) can be quantified as in [25–27] by the relative bounds
$$\frac{\|E\|_{2}}{\|\Phi\|_{2}} \leq \varepsilon, \qquad \frac{\|E\|_{2}^{(K)}}{\|\Phi\|_{2}^{(K)}} \leq \varepsilon^{(K)},$$
where $\|M\|_{2}$ denotes the spectral norm of a matrix $M$, $\|M\|_{2}^{(K)}$ denotes the largest spectral norm taken over all $K$-column submatrices of $M$, and $\sigma_{\max}^{(K)}(M)$ [25] denotes the largest nonzero singular value taken over all $K$-column submatrices of $M$. It is appropriate to assume $\varepsilon \ll 1$, $\varepsilon^{(K)} \ll 1$, and $E \neq 0$.

Lemma 3 (RIP for $\hat{\Phi}$ [25]). For $\hat{\Phi} = \Phi + E$, given the RIC $\delta_{K}$ associated with the matrix $\Phi$ in (3) and the relative perturbation $\varepsilon^{(K)}$, fix the constant
$$\hat{\delta}_{K} \triangleq (1 + \delta_{K})\left(1 + \varepsilon^{(K)}\right)^{2} - 1.$$
Assume that the RIC for the matrix $\hat{\Phi}$ is the smallest nonnegative number $\hat{\delta}_{K}$ such that the RIP for $\hat{\Phi}$ can be written as
$$(1 - \hat{\delta}_{K})\|x\|_{2}^{2} \leq \|\hat{\Phi} x\|_{2}^{2} \leq (1 + \hat{\delta}_{K})\|x\|_{2}^{2}$$
for any $K$-sparse vector $x$.

From Lemma 3 applied to (6), there is an equivalent statement of the RIP for $\hat{\Phi}$ in some special cases, given by Lemma 3′ [24].

Lemma 3′ (see [24]). For any index set $T$ of size $K$, let $\hat{\Phi}_{T}$ denote the submatrix of $\hat{\Phi}$ consisting of the column vectors indexed by $T$. A matrix $\hat{\Phi}$ possesses the RIP with constants $\hat{\lambda}_{\min}, \hat{\lambda}_{\max}$ if
$$\hat{\lambda}_{\min}\|u\|_{2}^{2} \leq \|\hat{\Phi}_{T} u\|_{2}^{2} \leq \hat{\lambda}_{\max}\|u\|_{2}^{2}, \quad u \in \mathbb{R}^{K},$$
for any index set $T$ of size $K$, where $K$ is a positive integer.

Definition 4 (see [7]). The coherence, $\mu(\Phi)$, of a matrix $\Phi$ is the largest absolute normalized inner product between any two distinct columns $\varphi_{i}, \varphi_{j}$ ($i \neq j$) of the matrix $\Phi$:
$$\mu(\Phi) = \max_{1 \leq i \neq j \leq n} \frac{|\langle \varphi_{i}, \varphi_{j} \rangle|}{\|\varphi_{i}\|_{2}\,\|\varphi_{j}\|_{2}}.$$
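Unlike the RIC, the coherence is cheap to compute exactly from the Gram matrix of the normalized columns. A small self-contained sketch, with assumed dimensions:

```python
import numpy as np

def coherence(Phi):
    """Largest absolute normalized inner product between distinct columns."""
    G = Phi / np.linalg.norm(Phi, axis=0)      # normalize columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)                # exclude the case i == j
    return gram.max()

rng = np.random.default_rng(4)
Phi = rng.standard_normal((64, 256)) / 8.0
print("mu(Phi) =", round(coherence(Phi), 3))
```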

3. Constructions

3.1. A New Completely Perturbed CS Model

As mentioned above, in (2), (3), and (6), only one noise in (2) or two noises in (3) and (6) affect the CS model. In practice, a noise $e$, a noise $n$, and a perturbation $E$ may affect the CS model simultaneously, although no paper has studied this. In terms of this idea, [24] together with [25–27] motivates us to introduce a noise $n$ into the generally perturbed model (3), or equivalently to incorporate a nontrivial perturbation $E$ into (6), which for the first time yields the so-called noise-folding completely perturbed CS model. We formulate the CS model as
$$y = (\Phi + E)(x + n) + e, \tag{15}$$
where $e$ is a random noise vector with covariance $\sigma_{e}^{2} I_{m}$ and $n$ is a random premeasurement noise vector whose covariance $\sigma_{n}^{2} I_{n}$ is independent of $e$. Here $e$ and $n$ are regarded as additive noises, and $E$ is a random perturbation matrix; more details on the perturbation can be seen in [25]. We call the CS model (15) the noise-folding completely perturbed CS model. Analogous to (3) in [25–27], (15) can also be considered in two different situations. From the user's point of view, an incorrect sensing matrix is obtained via the unknown measurement model
$$y = \Phi(x + n) + e, \tag{16}$$
and the recovery algorithm can be written with the known matrix $\hat{\Phi} = \Phi + E$ as
$$\min_{z} \|z\|_{1} \quad \text{subject to} \quad \|y - \hat{\Phi} z\|_{2} \leq \varepsilon'. \tag{17}$$

The only difference between (16) and (4) is the noise $n$ in (16). From the designer's point of view, the sensing process can be formulated as $y = (\Phi + E)(x + n) + e$ and its recovery process as
$$\min_{z} \|z\|_{1} \quad \text{subject to} \quad \|y - \Phi z\|_{2} \leq \varepsilon'.$$

Similarly, compared with (3), the noise $n$ belongs to the signal side. In this paper, we simply study the properties of (15), namely its RIP and coherence after whitening; see the sketch after this paragraph for a concrete instance. Obviously, (15) can be extended to a general multiperturbation CS model:
$$y = \left(\Phi + \sum_{i=1}^{p} E_{i}\right)(x + n) + e, \tag{18}$$
where each $E_{i}$ is a perturbation. System (18) can be viewed as a generalization of our proposed CS model (15), which implies that the general conclusions for (18) can be obtained from the special conclusions for (15); the concrete results can be seen in the next section. Simultaneously, other general CS systems, for example with several signal or measurement noises, can be conjectured naturally. Although their properties seem plentiful, we do not yet know how to exploit and analyze them, so we leave them as open problems. Here we mainly study the relative RIP and coherence of (15) and (18); in the next section, we give general results.
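For concreteness, the following sketch generates data from the proposed model (15); the dimensions, noise levels $\sigma_{n}, \sigma_{e}$, and perturbation level $\varepsilon$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, K = 64, 256, 8
sigma_n, sigma_e, eps = 0.05, 0.05, 0.05

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
E = eps * rng.standard_normal((m, n)) / np.sqrt(m)     # perturbation of the matrix
x = np.zeros(n); x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
nn = sigma_n * rng.standard_normal(n)                  # signal (premeasurement) noise
e = sigma_e * rng.standard_normal(m)                   # measurement noise

# Noise folding in completely perturbed CS, model (15):
y = (Phi + E) @ (x + nn) + e
```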

3.2. Problem Formulation

For (15), our goal is to analyze the effect of the premeasurement noise $n$ and the perturbation $E$ on its RIP and coherence.

Throughout this paper, assume that $e$ is a random noise vector with covariance $\sigma_{e}^{2} I_{m}$ and that $n$ is a random noise vector with covariance $\sigma_{n}^{2} I_{n}$ independent of $e$. Under these assumptions, (15) will be proved to be equivalent to $y' = Bx + w'$, where $B$ is a matrix whose RIP and coherence constants are very close to those of $\hat{\Phi} = \Phi + E$, $w'$ is whitened noise with covariance $\sigma^{2} I_{m}$, and $I_{m}$ is the identity matrix.

3.3. Equivalent Formulation

To set up our conclusion, (15) can be expressed as
$$y = \hat{\Phi} x + w, \qquad \hat{\Phi} = \Phi + E, \quad w = \hat{\Phi} n + e. \tag{20}$$

By the hypothesis on the noises, the covariance of the effective noise vector $w$ is $C = \sigma_{n}^{2}\hat{\Phi}\hat{\Phi}^{T} + \sigma_{e}^{2} I_{m}$. Obviously, the noise $w$ is not white, and the recovery analysis becomes complicated. For $w$ to remain white, $\hat{\Phi}\hat{\Phi}^{T}$ must be proportional to the identity matrix. For example, suppose that the rows of $\hat{\Phi}$ are taken from an orthogonal basis, so that $\hat{\Phi}\hat{\Phi}^{T} = (n/m) I_{m}$. Then the noise covariance of $w$ is $\left((n/m)\sigma_{n}^{2} + \sigma_{e}^{2}\right) I_{m}$. In this special case, (15) (or (20)) is equivalent to (2). Compared with the noise covariance $\sigma_{e}^{2} I_{m}$ of $e$, the noise covariance of $w$ has increased by $(n/m)\sigma_{n}^{2} I_{m}$. If $\sigma_{e} = 0$, the noise covariance is increased by the factor $n/m$, which is called noise folding [24].
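The whitening step can be checked numerically. The sketch below forms the effective-noise covariance $C = \sigma_{n}^{2}\hat{\Phi}\hat{\Phi}^{T} + \sigma_{e}^{2}I_{m}$, computes $C^{-1/2}$ by eigendecomposition, and verifies that the whitened noise has approximately identity covariance; all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 64, 256
sigma_n, sigma_e, eps = 0.05, 0.05, 0.05
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Phi_hat = Phi + eps * rng.standard_normal((m, n)) / np.sqrt(m)

# Covariance of the effective noise w = Phi_hat n + e
C = sigma_n**2 * (Phi_hat @ Phi_hat.T) + sigma_e**2 * np.eye(m)

# C^{-1/2} via eigendecomposition (C is symmetric positive definite)
vals, vecs = np.linalg.eigh(C)
C_inv_sqrt = vecs @ np.diag(vals**-0.5) @ vecs.T

# Sanity check: the whitened noise has (approximately) identity covariance
w = Phi_hat @ (sigma_n * rng.standard_normal((n, 5000))) \
    + sigma_e * rng.standard_normal((m, 5000))
w_white = C_inv_sqrt @ w
print(np.allclose(np.cov(w_white), np.eye(m), atol=0.2))
```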

3.4. RIP and Coherence of Our CS Model

We will show that the conclusion holds in general. In other words, even if $\hat{\Phi}\hat{\Phi}^{T}$ is not proportional to the identity matrix $I_{m}$, (15) and (20) are still roughly equivalent to a whitened system. Now we describe this in detail.

Note that if $E$ is a random matrix, then $\hat{\Phi} = \Phi + E$ is a random matrix. To study the RIP and coherence of (20), we must whiten the noise by multiplying the system by $\sigma C^{-1/2}$, which gives the equivalent system
$$\sigma C^{-1/2} y = B x + w', \qquad B = \sigma C^{-1/2}\hat{\Phi}, \quad w' = \sigma C^{-1/2} w.$$
The noise vector $w'$ is whitened, with covariance matrix exactly $\sigma^{2} I_{m}$. But the biggest difference lies in the measurement matrix, which changes from the original matrix $\hat{\Phi}$ to $B$ after whitening. The size of this change is measured with three important indexes: the RIP constant, the coherence, and the operator-norm distance. Our theory mainly depends on approximating $\hat{\Phi}$ by $B$, even when $E$ is an arbitrary matrix. Let $\delta = \|B - \hat{\Phi}\|$ measure the accuracy of the approximation, where $\|\cdot\|$ denotes the standard operator norm. For convenience of derivation, assume that $\delta$ is very small; we then show that the RIP and coherence constants of $B$ are very close to those of $\hat{\Phi}$. By convention, the entries of $\Phi$ are i.i.d. mean-zero Gaussian random variables; thus, it is easy to justify that $\delta$ is typically small.
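The approximation accuracy $\delta = \|B - \hat{\Phi}\|$ can be measured directly. In the sketch below, the normalization $\sigma^{2} = \sigma_{e}^{2} + (n/m)\sigma_{n}^{2}$ is our assumption for the scaling that makes $B$ comparable to $\hat{\Phi}$; the paper's exact normalization may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 64, 256
sigma_n, sigma_e, eps = 0.05, 0.05, 0.05
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Phi_hat = Phi + eps * rng.standard_normal((m, n)) / np.sqrt(m)

C = sigma_n**2 * (Phi_hat @ Phi_hat.T) + sigma_e**2 * np.eye(m)
vals, vecs = np.linalg.eigh(C)
C_inv_sqrt = vecs @ np.diag(vals**-0.5) @ vecs.T

# Normalization sigma chosen so that B approximates Phi_hat; here we use
# the folded noise level sigma^2 = sigma_e^2 + (n/m) sigma_n^2 (an assumption).
sigma = np.sqrt(sigma_e**2 + (n / m) * sigma_n**2)
B = sigma * (C_inv_sqrt @ Phi_hat)

delta = np.linalg.norm(B - Phi_hat, 2)       # operator (spectral) norm
print("||B - Phi_hat||_2 =", round(delta, 4))
```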

A corresponding distance for the unperturbed model (6) was introduced in [24], where the fact that it is very small was proved under restrictions only on the matrix $\Phi$. It is natural to ask whether the difference between $B$ and $\hat{\Phi}$ is also very small. Theorem 5 confirms this conjecture and further inspires us to ask whether the differences between the RIP constants of $B$ and $\hat{\Phi}$, and between their coherences, are also very small. The related theorems below give positive answers.

Theorem 5 shows the relation between $B$ and $\hat{\Phi}$ under the stated smallness conditions.

Theorem 5. Assume that $\Phi$ is the sensing matrix and $E$ is an unknown random perturbation matrix with $\hat{\Phi} = \Phi + E$ and $\|E\|_{2}/\|\Phi\|_{2} \leq \varepsilon$, and let $\sigma_{\max}$ be the largest nonzero singular value of $\hat{\Phi}$; then the operator-norm distance $\|B - \hat{\Phi}\|$ obeys the bound (27), which is small whenever $\varepsilon$, $\sigma_{n}$, and $\sigma_{e}$ are suitably restricted.

Proof. The detailed proof is postponed to the Appendix.

Remark 6. For (27), by the assumptions on $\varepsilon$ and $\sigma_{\max}$, the right-hand side is controlled by a positive number that tends to zero; thus $\|B - \hat{\Phi}\| \to 0$, that is, $B \approx \hat{\Phi}$, under the stated conditions. Theorem 5 shows the relation between $B$ and $\hat{\Phi}$, which implies that $B \approx \hat{\Phi}$ under some special conditions. Therefore, we can set $B \approx \hat{\Phi}$ as in [24].

Theorem 7 shows the RIP of $B$ in the case $B \approx \hat{\Phi}$, though a weaker condition is sufficient for the proof of the RIP for $B$.

Theorem 7. Assume that $\Phi$ is the sensing matrix and $E$ is an unknown random matrix with $\hat{\Phi} = \Phi + E$. Let $\|E\|_{2}/\|\Phi\|_{2} \leq \varepsilon$ and $\delta = \|B - \hat{\Phi}\|$, suppose that $\hat{\Phi}$ satisfies the RIP of order $K$ with constant $\hat{\delta}_{K}$, and let $\sigma_{\max}$ be the largest singular value of the matrix $\hat{\Phi}$; then $B$ satisfies the RIP of order $K$ with different constants, in the three forms corresponding to Cases 1′, 2′, and 3′ in the Appendix.

Proof. The detailed proof is postponed to the Appendix.

Remark 8. In Theorems 5 and 7, the simplified perturbation condition of [25], in which $E$ is a simple version of the general perturbation, can be used in place of ours, so that another set of results can be obtained. Due to space limitations, these are omitted here; their proofs are simple enough that researchers can reproduce them and obtain analogous results.

The multiperturbation CS system (18) can be viewed as a generalization of the newly proposed CS system (15), so the general conclusions for (18) follow from those for (15). Theorems 9 and 11 give the results.

Theorem 9. Assume that $\Phi$ is the sensing matrix and each $E_{i}$ is an unknown random matrix with $\|E_{i}\|_{2}/\|\Phi\|_{2} \leq \varepsilon_{i}$. Let $\hat{\Phi} = \Phi + \sum_{i=1}^{p} E_{i}$, let $\sigma_{\max}$ be the largest singular value of the matrix $\hat{\Phi}$, and suppose that $\hat{\Phi}$ satisfies the stated smallness conditions, where $p$ is a positive integer; then the relation between $B$ and $\hat{\Phi}$ can be formulated as in (32).

Proof. The detailed proof is postponed to the Appendix.

Remark 10. For (32), since the $\varepsilon_{i}$ are small, the exponents involved are positive integers, and the leading factor is a constant, the bound tends to zero as the $\varepsilon_{i} \to 0$, which implies $B \approx \hat{\Phi}$.

Theorem 11. Assume that $\Phi$ is the sensing matrix and each $E_{i}$ is an unknown random matrix with $\|E_{i}\|_{2}/\|\Phi\|_{2} \leq \varepsilon_{i}$. Let $\hat{\Phi} = \Phi + \sum_{i=1}^{p} E_{i}$ and $\delta = \|B - \hat{\Phi}\|$, and suppose that $\hat{\Phi}$ satisfies the RIP of order $K$, $\sigma_{\max}$ is the largest singular value of the matrix $\hat{\Phi}$, and $p, K$ are positive integers; then $B$ satisfies the RIP of order $K$ with different constants, in the three forms given in the Appendix.

Proof. The detailed proof is postponed to the Appendix.

Remark 12. Though a weaker condition than $B \approx \hat{\Phi}$ is sufficient for the proof, the RIC for $B$ remains positive under the stated restriction.

Next, we compare the coherence of $B$ after whitening to that of $\hat{\Phi}$. Here $b_{i}$, $1 \leq i \leq n$, is used to denote the $i$th column vector of the matrix $B$. Similar to the coherence of $\hat{\Phi}$, the coherence of $B$ is first given in Definition 13.

Definition 13. Assume that $\Phi$ is a random matrix and $E$ is an unknown random matrix in CS, with $B = \sigma C^{-1/2}(\Phi + E)$; then the coherence of $B$, denoted by $\mu(B)$, can be formulated as
$$\mu(B) = \max_{1 \leq i \neq j \leq n} \frac{|\langle b_{i}, b_{j} \rangle|}{\|b_{i}\|_{2}\,\|b_{j}\|_{2}}.$$
In fact, $\mu(B)$ is the largest absolute normalized inner product between any two distinct columns $b_{i}, b_{j}$ of $B$.

As mentioned above, $B \approx \hat{\Phi}$ in some special contexts, with the constants of Theorem 7. We can take advantage of this to prove Theorem 14. For lack of space, we only take one representative case as an example; the proofs of the remaining cases are similar, and we leave them to readers. As for the corresponding results for the general CS model (18), we omit them too due to space constraints; the proof of the general coherence statement is similar as well. Theorem 14 demonstrates the relation between the coherence of $B$ and that of $\hat{\Phi}$.

Theorem 14. Assume that $B \approx \hat{\Phi}$ in the sense of Theorem 5, with $\delta = \|B - \hat{\Phi}\|$ small; then $\mu(B)$ is close to $\mu(\hat{\Phi})$, with the explicit bound obtained in the Appendix. Here $b_{i}$ denotes the $i$th column vector of the whitened matrix $B$ and $\hat{\varphi}_{i}$ denotes the $i$th column vector of $\hat{\Phi}$; that is, $\hat{\Phi} = (\hat{\varphi}_{1}, \ldots, \hat{\varphi}_{n})$.

Proof. The detailed proof is postponed to the Appendix.
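Theorem 14's claim, that whitening changes the coherence only slightly, can be probed numerically. The following sketch compares $\mu(\hat{\Phi})$ and $\mu(B)$ for one random instance; the normalization of $B$ and all parameter values are illustrative assumptions.

```python
import numpy as np

def coherence(M):
    G = M / np.linalg.norm(M, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(8)
m, n = 64, 256
sigma_n, sigma_e, eps = 0.05, 0.05, 0.05
Phi_hat = (rng.standard_normal((m, n))
           + eps * rng.standard_normal((m, n))) / np.sqrt(m)

C = sigma_n**2 * (Phi_hat @ Phi_hat.T) + sigma_e**2 * np.eye(m)
vals, vecs = np.linalg.eigh(C)
sigma = np.sqrt(sigma_e**2 + (n / m) * sigma_n**2)   # assumed normalization
B = sigma * (vecs @ np.diag(vals**-0.5) @ vecs.T @ Phi_hat)

print("mu(Phi_hat) =", round(coherence(Phi_hat), 3))
print("mu(B)       =", round(coherence(B), 3))
```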

In [25], a simple version of the random perturbation matrix is also considered, such as $E$ proportional to $\Phi$ with a small factor. The relation between $\mu(B)$ and $\mu(\hat{\Phi})$ in this case is given by Theorem 15 below.

Theorem 15. Let $E$ be a simple perturbation as above, with $\hat{\Phi} = \Phi + E$ and $B = \sigma C^{-1/2}\hat{\Phi}$. The relation between the coherences of $B$ and $\hat{\Phi}$ proceeds as in Theorem 14, with the corresponding constants.

Proof. The proof of Theorem 15 is similar to that of Theorem 14; here we omit it.

4. Numerical Experiment

Figure 1: Recovery results. The vertical coordinate denotes the amplitude of the recovered (sinusoidal) signal; the horizontal coordinate denotes time in seconds. The black line denotes the original signal and the red line denotes the recovered signal.

Here we use OMP to give three numerical simulation results, which demonstrate that our newly proposed completely perturbed CS model is feasible. Comparing the OMP recoveries across the three figures, recovery from the measurement-noise model $y = \Phi x + e$ is almost exact, because only the noise $e$ enters the basic CS model. There are noticeable differences between the recovered signal and the original signal in both the noise folding model (6) and our model (15), because the noises $n$ and $e$, and in (15) also the perturbation $E$, enter those models. Comparing the deviation between the recovered and original signals in Figure 1(b) with that in Figure 1(c), the deviation in Figure 1(c) is a bit bigger, because the perturbation $E$ is involved in Figure 1(c) ($E \neq 0$) and not in Figure 1(b) ($E = 0$); this shows that the different noises $e$, $n$, and $E$ have different impacts on signal recovery.

Compared with the error between the recovered signal and the original signal in Figure 1(a), the errors in Figures 1(b) and 1(c) differ little from each other. Namely, the differences between the recovered and original signals for model (6) are almost the same as those for model (15), which indicates that our proposed CS model is feasible.

Compared with the deviation between the recovered and original signals in Figure 1(a), however, the deviations in Figures 1(b) and 1(c) are quite different. This shows that OMP is not the best algorithm for recovery from (6) and (15), although OMP recovers the original signal almost exactly from $y = \Phi x + e$. Thus, it is important to search for a more powerful algorithm, or several algorithms, to recover the original sparse signal exactly from (6) and (15). We leave these problems to interested researchers, because this paper cannot focus on searching for optimal recovery algorithms for the CS models (6) and (15).
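For reference, the experiment described above can be reproduced along the following lines. The OMP routine below is a textbook implementation, not the authors' code, and it recovers with the nominal matrix $\Phi$ (the designer's view); all dimensions and noise levels are assumed.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: greedily select K columns of A."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(9)
m, n, K = 64, 256, 8
sigma_n, sigma_e, eps = 0.05, 0.05, 0.05
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
E = eps * rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n); x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)

cases = {
    "(a) y = Phi x + e":        Phi @ x + sigma_e * rng.standard_normal(m),
    "(b) y = Phi (x+n) + e":    Phi @ (x + sigma_n * rng.standard_normal(n))
                                + sigma_e * rng.standard_normal(m),
    "(c) y = (Phi+E)(x+n) + e": (Phi + E) @ (x + sigma_n * rng.standard_normal(n))
                                + sigma_e * rng.standard_normal(m),
}
for label, y in cases.items():
    err = np.linalg.norm(omp(Phi, y, K) - x) / np.linalg.norm(x)
    print(label, "relative error:", round(err, 3))
```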

5. Conclusion

We first propose a new CS system (15) by introducing a multiplicative noise $E$, a signal noise $n$, and an additive noise $e$ into the unperturbed CS model (1). We derive the RIP and coherence of the whitened version of (15). In effect, this paper shows that our proposed completely perturbed CS model (15) is equivalent to the classic CS model (2): the only difference is the changed measurement matrix, obtained by incorporating a nontrivial perturbation matrix $E$ into the measurement matrix $\Phi$ and a nontrivial noise $n$ into the signal $x$. This induces a noise variance increased by a folding factor, so that tighter upper and lower bounds of the RIP are produced. As for the coherence of the deformed measurement matrix $B$ in CS model (15), the constant is essentially invariant when $\delta = \|B - \hat{\Phi}\|$ is small. Finally, we use OMP to give three figures recovering the signal from the CS models (2), (6), and (15), respectively. Figures 1(b) and 1(c) in our experiment show deviations between the recovered and original signals that are much bigger than those in Figure 1(a), which indicates that our proposed CS model is feasible but that OMP is not fit for recovering the signal from (6) and (15). Thus, we can try to search for one or more optimal algorithms to recover the signal exactly from these two CS models, although OMP recovers the original signal from (2) almost exactly for now.

6. Future Work

Thanks to the features of our proposed CS model (15), much work remains to be done. The deviations between the recovered and original signals in Figures 1(a), 1(b), and 1(c) indicate that the CS model proposed in this paper is feasible, although the differences in Figures 1(b) and 1(c) under OMP are much bigger than those in Figure 1(a). Thus, an obvious problem is to search for one or more optimal algorithms suitable for (6) and (15) to recover the signal exactly.

The related analysis of (6) in [24] further motivates us to view the perturbation $E$ as a perturbed sensing matrix forming its own CS model. Thus, (15) may be decomposed into two similar subsystems, one driven by $\Phi$ and one by $E$, or into three basic parts corresponding to $\Phi$, $E$, and the noises. If possible, what can we do to reduce or eliminate the influence of the error subsystem driven by $E$? Can we recover the signal from that error subsystem, and if so, how? In addition, we may also consider impulse noise and replace $e$ by an impulse noise term; if so, our model may be generalized and good results may be obtained in the impulse noise setting. Here we cannot study such an impulse noise model and leave it as an open problem, too.

These open problems are worth considering and await study in future work. This paper only carries out some elementary research on our proposed CS model, and we hope that the ideas and simple study in this paper will be helpful for studying its wide application in the future. We hope that higher-level compressed sensing models will be put forward and that more and more people will explore this area in the future.

Appendix

Proof of Theorem 5. On the one hand, we have the upper estimate (A.1); this equation holds because of the definition of $B$ and the properties of the spectral norm.
On the other hand, we have the lower estimate (A.2); the last equality holds because of the assumptions on the noise levels $\sigma_{n}$ and $\sigma_{e}$. Combining (A.1) with (A.2) yields (27).

Proof of Theorem 7. The three different inequalities come from different proof routes, but in essence they are the same. Here we only prove the first inequality in detail; the proofs of the second and third are similar. For convenience, we denote the routes by Cases 1, 2, and 3, related to Cases 1′, 2′, and 3′, respectively. The proofs depend on the fact that $B$ is close to $\hat{\Phi}$, by the definition of $\delta$. Write (15) as $y = \hat{\Phi}x + w$ with $\hat{\Phi} = \Phi + E$.
Case 1. Consider the expansion (A.5); the last equality holds by the assumptions on $\varepsilon$ and $\sigma_{\max}$.
For (A.5), since the leading factor is positive, the series converges when the stated smallness condition holds; therefore (A.5) is bounded.
That is, the desired estimate holds under this condition.
Case 2. Consider (A.7); this yields the second estimate.
Case 3. Consider (A.8); the last equality holds by the assumption on $\delta$.
From (A.8), since the relevant quantities are positive, the series converges under the stated condition; thus the third estimate follows.
As mentioned above, $B \approx \hat{\Phi}$ holds under some conditions. Using Cases 1, 2, and 3, we can obtain three different results, denoted by Cases 1′, 2′, and 3′, respectively.
Here $B$ can be expressed as a convergent series. Case 1′. Note that the series converges owing to (A.5), where $\|\cdot\|$ is the operator norm. Taking this norm on both sides of the equality and using the triangle inequality gives an upper bound. Let $T$ be an index set of size $K$; restricting to the columns indexed by $T$ preserves the bound. Removing the absolute value gives the two-sided RIP estimate, and setting the constants accordingly yields the first result. Case 2′. The series converges owing to (A.7). Taking the spectral norm on both sides and using the triangle inequality, the remaining proof is the same as that of Case 1′ with the bound of Case 2 in place of that of Case 1. Case 3′. The series converges owing to (A.8). Taking the spectral norm on both sides and using the triangle inequality, the remaining proof is the same as that of Case 1′ with the bound of Case 3 in place of that of Case 1.

Proof of Theorem 9. On the one hand, we have the upper estimate (A.23); the last equality holds by the assumptions on the $\varepsilon_{i}$.
On the other hand, we have the lower estimate (A.24); the last equality holds for the same reason. Combining (A.23) with (A.24) gives (32).

Proof of Theorem 11. There are three different results for the whitening, owing to the different proof routes. The proof depends on the fact that $B$ is close to $\hat{\Phi}$, by the definition of $\delta$. Write (18) as $y = \hat{\Phi}x + w$ with $\hat{\Phi} = \Phi + \sum_{i=1}^{p} E_{i}$.
Case 1. Consider (A.25); the last equality holds by the assumptions on the $\varepsilon_{i}$ and $\sigma_{\max}$. From (A.25), since the relevant constants are positive, the series converges under the stated condition; therefore (A.25) is bounded and the first estimate holds.
Case 2. Consider (A.27); from (A.27) we get the second estimate.
Case 3. Consider (A.28); the last equality holds by the same assumptions. From (A.28), since the relevant constants are positive, the series converges under the stated condition; thus the third estimate follows. As mentioned above, $B \approx \hat{\Phi}$ under some conditions. Using the above three cases (Cases 1, 2, and 3), we obtain three different results, denoted by Cases 1′, 2′, and 3′, respectively.
Here $B$ can be expressed as the series (A.30). Case 1′. Note that (A.30) converges owing to Case 1. Taking the spectral norm on both sides of the equality and using the triangle inequality gives an upper bound. Let $T$ be an index set of size $K$; restricting to the columns indexed by $T$ preserves the bound, and removing the absolute value gives the two-sided RIP estimate, which yields the first result. Case 2′. Note that (A.30) converges owing to Case 2. Taking the spectral norm on both sides and using the triangle inequality, the remaining proof is similar to that of Case 1′, with the bound of Case 2 in place of that of Case 1. Case 3′. Note that (A.30) converges owing to Case 3. Taking the spectral norm on both sides and using the triangle inequality, the remaining proof is similar to that of Case 1′, with the bound of Case 3 in place of that of Case 1.

Proof of Theorem 14. To prove the theorem, we must find an upper bound for the numerator $|\langle b_{i}, b_{j}\rangle|$ of $\mu(B)$ and a lower bound for the denominator $\|b_{i}\|_{2}\|b_{j}\|_{2}$. For the numerator, by assumption, we obtain the upper estimate (A.42). Next, we estimate the lower bound under the restrictions on $\delta$ and $\varepsilon$. Similar to the proof of Theorem 7, $B$ can be expressed as a power series whose coefficients are those of the Taylor expansion of $C^{-1/2}$. Taking norms on both sides of the equality gives the lower estimate (A.45). Combining (A.42) with (A.45) gives the result of Theorem 14.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (NSFC) A3 Foresight Program (no. 61411146001).