Abstract

Some basic statistical properties of the compressed measurements are investigated. It is well known that these statistical properties are the foundation for analyzing the performance of signal detection and the applications of compressed sensing in communication signal processing. Firstly, we discuss the statistical properties of the compressed signal, the compressed noise, and their corresponding energies. Then, the statistical characteristics of the SNR of the compressed measurements, including the mean and the variance, are calculated. Finally, the probability density function and the cumulative distribution function of the SNR are derived for the cases of the Gamma distribution and the Gaussian distribution. Numerical simulation results demonstrate the correctness of the theoretical analysis.

1. Introduction

Compressed sensing was proposed to recover the original signal from compressed measurements [1]. Its key technologies consist of sparse representation, the design of the measurement matrix, and the recovery algorithm. Sparse representation of the signal is the premise and basis of compressed sensing, the restricted isometry property (RIP) of the measurement matrix mathematically guarantees a unique solution of signal reconstruction [2], and the recovery algorithm finds this unique solution through different methods. Therefore, to some extent, compressed sensing can be considered a sampling technology analogous to Shannon's sampling theorem. However, the signal is sampled according to its sparsity rather than the bandwidth required by Shannon's sampling theorem. In other words, compressed sensing extracts only the useful information rather than the signal itself. Based on these advantages, compressed sensing is viewed as a promising technology for many research fields, such as remote sensing, image processing, and wireless communication [3].

Currently, wideband signals must be employed in wireless communication to satisfy the demand for high-rate data transmission, which brings many challenges for signal sampling devices, such as high sampling rates and high cost. To cope with these difficulties, compressed sensing has been employed in many signal processing tasks of wireless communication, for example, channel estimation and spectrum sensing [4].

In the beginning, the communication signal was processed in accordance with the standard procedure of compressed sensing [2, 3]; that is, the signal is sampled by means of the measurement matrix to obtain the compressed measurements. Then, the original signal is reconstructed by the recovery algorithm. Finally, the recovered signal is further manipulated to accomplish different signal processing tasks. It has been proved that the recovery algorithm has high computational complexity [2]. Nevertheless, some signal processing tasks, especially inference problems, concentrate only on the decision results and the related parameters rather than the signal itself, such as signal detection, signal classification, and signal parameter identification. The recovered signal is thus unnecessary for the inference problem. That is to say, reconstruction-based signal processing methods cannot fully exploit the merits of compressed sensing. Consequently, the nonreconstruction framework of signal processing was presented [5, 6]. Under the nonreconstruction framework, the compressed measurements are directly employed to deal with the inference problem without resorting to a full-scale signal reconstruction.

It is widely recognized that the communication signal is inevitably corrupted by noise. Consequently, the performance of these signal processing tasks is closely related to the SNR of the compressed measurements, which involves the energy of the compressed noise and the compressed signal. In [7], the noise folding in compressed sensing is considered, and the relation between the SNR of the compressed measurements and the compression ratio $N/M$ was derived, where $N$ is the dimension of the signal and $M$ is the number of the compressed measurements. Then, the impact of the noise folding on wideband signal acquisition is discussed in [8], and the relation between the SNR of the compressed measurements and the SNR of the recovered signal plus noise was studied. However, the statistical properties of the energy and the SNR of the compressed measurements are not further investigated there. It is well known that the statistical properties are important for analyzing the performance of signal detection and signal parameter identification. Therefore, it is vital to derive the statistical properties of the energy and the SNR of the compressed measurements for the performance analysis of signal processing.

Because a random measurement matrix is frequently utilized, the energy of the compressed signal and the energy of the compressed noise are random variables. Furthermore, the resulting SNR of the compressed measurements is also a random variable. Consequently, we first discuss the mean, the variance, the probability density function (PDF), and the cumulative distribution function (CDF) of the energy of the compressed signal and the energy of the compressed noise. After that, we derive the statistical properties of the SNR of the compressed measurements, including the mean, the variance, the PDF, and the CDF. These results provide a foundation for the performance analysis of signal processing tasks.

2. Statistical Properties of Compressed Measurements and Their Quadratic Sum

For compressed sensing, the compressed measurements can be expressed as
$$\mathbf{y}=\mathbf{\Phi}\left(\mathbf{s}+\mathbf{n}\right)=\mathbf{\Phi}\mathbf{s}+\mathbf{\Phi}\mathbf{n},\tag{1}$$
where $\mathbf{y}\in\mathbb{R}^{M}$, $\mathbf{\Phi}\in\mathbb{R}^{M\times N}$, $\mathbf{s}\in\mathbb{R}^{N}$, and $\mathbf{n}\in\mathbb{R}^{N}$ denote the compressed measurements, the random measurement matrix, the signal vector with sparsity $K$, and the noise vector, respectively. It is assumed that the signal and the noise are filtered by a band-pass filter before compressed sensing; therefore, the noise folding problem does not need to be considered. Mathematically, the condition $K<M\ll N$ should be satisfied to recover the signal with high probability. Specifically, the entries of $\mathbf{y}$ can be calculated as
$$y_{i}=\sum_{j=1}^{N}\phi_{ij}\left(s_{j}+n_{j}\right),\quad i=1,2,\ldots,M.\tag{2}$$

Without loss of generality, we assume that the entries $\phi_{ij}$ of the random measurement matrix are i.i.d. Gaussian random variables with mean zero and variance $\sigma_{\Phi}^{2}$, the entries of the noise vector are i.i.d. Gaussian random variables with mean zero and variance $\sigma_{n}^{2}$, and the $K$ nonzero entries of the signal are i.i.d. random variables with mean zero and variance $\sigma_{s}^{2}$. The entries of the random measurement matrix, the noise vector, and the signal vector are also statistically independent of one another. Because their means are zero, they are also uncorrelated and orthogonal.
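
For concreteness, the measurement model and the above assumptions can be sketched in a few lines of Python; the dimensions and variances below are illustrative choices, not values used in the paper.

```python
import numpy as np

# Minimal sketch of the model y = Phi @ (s + n) under the stated assumptions.
# N, M, K and the three variances are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                                   # signal dimension, measurements, sparsity
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.1     # assumed variances

Phi = rng.normal(0.0, np.sqrt(sigma_phi2), size=(M, N))    # i.i.d. Gaussian measurement matrix
s = np.zeros(N)                                            # K-sparse signal
s[rng.choice(N, size=K, replace=False)] = rng.normal(0.0, np.sqrt(sigma_s2), size=K)
n = rng.normal(0.0, np.sqrt(sigma_n2), size=N)             # dense Gaussian noise

y = Phi @ (s + n)                                          # compressed measurements, cf. (1)
```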

Let $u_{i}=\left(\mathbf{\Phi}\mathbf{s}\right)_{i}$ and $w_{i}=\left(\mathbf{\Phi}\mathbf{n}\right)_{i}$ denote the $i$th entry of the compressed signal and the compressed noise, respectively. We first analyze the statistical properties of $u_{i}$. According to the central limit theorem, $u_{i}$ can be considered as a Gaussian random variable, whose mean is
$$E\left[u_{i}\right]=E\left[\sum_{j=1}^{N}\phi_{ij}s_{j}\right]=\sum_{j=1}^{N}E\left[\phi_{ij}\right]E\left[s_{j}\right]=0.\tag{3}$$

Because the means of $\phi_{ij}$ and $s_{j}$ are zero and only $K$ entries of $\mathbf{s}$ are nonzero, the variance of $u_{i}$ is calculated as
$$\mathrm{Var}\left(u_{i}\right)=E\left[u_{i}^{2}\right]=\sum_{j=1}^{N}E\left[\phi_{ij}^{2}\right]E\left[s_{j}^{2}\right]=K\sigma_{\Phi}^{2}\sigma_{s}^{2}.\tag{4}$$

After that, we compute the mean of $w_{i}$:
$$E\left[w_{i}\right]=E\left[\sum_{j=1}^{N}\phi_{ij}n_{j}\right]=\sum_{j=1}^{N}E\left[\phi_{ij}\right]E\left[n_{j}\right]=0.\tag{5}$$

Similarly, the variance of $w_{i}$ can also be calculated as
$$\mathrm{Var}\left(w_{i}\right)=\sum_{j=1}^{N}E\left[\phi_{ij}^{2}\right]E\left[n_{j}^{2}\right]=N\sigma_{\Phi}^{2}\sigma_{n}^{2}.\tag{6}$$

Therefore, the entries of the compressed measurements, $y_{i}=u_{i}+w_{i}$, are Gaussian random variables with mean zero and variance $K\sigma_{\Phi}^{2}\sigma_{s}^{2}+N\sigma_{\Phi}^{2}\sigma_{n}^{2}$.
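
A quick Monte Carlo check of (3)-(6) can be written as follows; variable names and parameter values are assumptions for illustration only.

```python
import numpy as np

# Sketch: verify that one entry of the compressed signal/noise has mean ~0 and
# variances ~K*sigma_phi2*sigma_s2 and ~N*sigma_phi2*sigma_n2, cf. (3)-(6).
rng = np.random.default_rng(1)
N, M, K = 256, 64, 8                                   # illustrative values
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.1
T = 50000                                              # Monte Carlo trials

u, w = np.empty(T), np.empty(T)
for t in range(T):
    phi_row = rng.normal(0.0, np.sqrt(sigma_phi2), size=N)     # one row of Phi
    s = np.zeros(N)
    s[rng.choice(N, size=K, replace=False)] = rng.normal(0.0, np.sqrt(sigma_s2), size=K)
    n = rng.normal(0.0, np.sqrt(sigma_n2), size=N)
    u[t], w[t] = phi_row @ s, phi_row @ n

print(u.mean(), u.var(), K * sigma_phi2 * sigma_s2)    # mean ~0, variance ~ (4)
print(w.mean(), w.var(), N * sigma_phi2 * sigma_n2)    # mean ~0, variance ~ (6)
```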

The quadratic sum of the compressed signal is denoted as
$$E_{s}=\left\|\mathbf{\Phi}\mathbf{s}\right\|_{2}^{2}=\sum_{i=1}^{M}u_{i}^{2}.\tag{7}$$

The quadratic sum of the compressed signal follows the central Gamma distribution with $M$ degrees of freedom. The corresponding mean and variance are calculated as
$$E\left[E_{s}\right]=MK\sigma_{\Phi}^{2}\sigma_{s}^{2},\tag{8}$$
$$\mathrm{Var}\left(E_{s}\right)=2M\left(K\sigma_{\Phi}^{2}\sigma_{s}^{2}\right)^{2}.\tag{9}$$

Its scale and shape parameters are $2K\sigma_{\Phi}^{2}\sigma_{s}^{2}$ and $M/2$, respectively. Hence, $E_{s}$ can be represented as $E_{s}\sim\Gamma\left(M/2,\,2K\sigma_{\Phi}^{2}\sigma_{s}^{2}\right)$.

Next, we analyze the energy of the compressed noise, which is written as
$$E_{n}=\left\|\mathbf{\Phi}\mathbf{n}\right\|_{2}^{2}=\sum_{i=1}^{M}w_{i}^{2}.\tag{10}$$

By analogy, the quadratic sum of the compressed noise also follows the central Gamma distribution with $M$ degrees of freedom. The corresponding mean and variance are calculated as
$$E\left[E_{n}\right]=MN\sigma_{\Phi}^{2}\sigma_{n}^{2},\tag{11}$$
$$\mathrm{Var}\left(E_{n}\right)=2M\left(N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)^{2}.\tag{12}$$

Similarly, its scale and shape parameters are $2N\sigma_{\Phi}^{2}\sigma_{n}^{2}$ and $M/2$, respectively. Thus, $E_{n}$ is represented as $E_{n}\sim\Gamma\left(M/2,\,2N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)$.
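
The Gamma characterization of both energies can be checked empirically. In the sketch below (illustrative parameters, SciPy assumed available), the randomness comes from the measurement matrix only: a binary $\pm\sqrt{\sigma_{s}^{2}}$-valued sparse signal and a noise realization rescaled to energy $N\sigma_{n}^{2}$ are held fixed, so the scale parameters given above apply exactly in this conditional setting.

```python
import numpy as np
from scipy import stats

# Sketch: compare Monte Carlo samples of E_s = ||Phi s||^2 and E_n = ||Phi n||^2
# with Gamma(M/2, 2*K*sigma_phi2*sigma_s2) and Gamma(M/2, 2*N*sigma_phi2*sigma_n2).
rng = np.random.default_rng(2)
N, M, K = 256, 64, 8                                   # illustrative values
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.1

s = np.zeros(N)                                        # fixed binary K-sparse signal
s[rng.choice(N, size=K, replace=False)] = np.sqrt(sigma_s2) * rng.choice([-1.0, 1.0], size=K)
n = rng.normal(0.0, np.sqrt(sigma_n2), size=N)
n *= np.sqrt(N * sigma_n2) / np.linalg.norm(n)         # enforce ||n||^2 = N*sigma_n2

T = 10000
Es, En = np.empty(T), np.empty(T)
for t in range(T):
    Phi = rng.normal(0.0, np.sqrt(sigma_phi2), size=(M, N))
    Es[t], En[t] = np.sum((Phi @ s) ** 2), np.sum((Phi @ n) ** 2)

gamma_s = stats.gamma(a=M / 2, scale=2 * K * sigma_phi2 * sigma_s2)
gamma_n = stats.gamma(a=M / 2, scale=2 * N * sigma_phi2 * sigma_n2)
print(Es.mean(), gamma_s.mean(), Es.var(), gamma_s.var())          # cf. (8), (9)
print(En.mean(), gamma_n.mean(), En.var(), gamma_n.var())          # cf. (11), (12)
print(stats.kstest(Es, gamma_s.cdf).pvalue, stats.kstest(En, gamma_n.cdf).pvalue)
```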

3. The Statistical Properties of SNR

The SNR of the compressed measurements is defined as
$$\gamma=\frac{E_{s}}{E_{n}}=\frac{\left\|\mathbf{\Phi}\mathbf{s}\right\|_{2}^{2}}{\left\|\mathbf{\Phi}\mathbf{n}\right\|_{2}^{2}}.\tag{13}$$

We now analyze the relation of $E_{s}$ and $E_{n}$. Firstly, we calculate the mean of the product of the compressed signal and the compressed noise:
$$E\left[u_{i}w_{i}\right]=E\left[\left(\sum_{j=1}^{N}\phi_{ij}s_{j}\right)\left(\sum_{k=1}^{N}\phi_{ik}n_{k}\right)\right].\tag{14}$$

It can be seen that $N^{2}$ items are obtained when (14) is expanded, and each item contains an entry of the noise and an entry of the random measurement matrix, both with mean zero. Because these entries are uncorrelated and independent, the resulting mean is $E\left[u_{i}w_{i}\right]=0$, which means the orthogonality of $u_{i}$ and $w_{i}$. Considering (3) and (5), we can observe that $u_{i}$ and $w_{i}$ are uncorrelated. Because uncorrelatedness and independence are equivalent for Gaussian random variables, $u_{i}$ and $w_{i}$ are independent of each other. In a straightforward way, we can conclude that $E_{s}$ and $E_{n}$ are also independent. Based on these results, we calculate the mean of SNR:
$$E\left[\gamma\right]=E\left[E_{s}\cdot\frac{1}{E_{n}}\right]=E\left[E_{s}\right]E\left[\frac{1}{E_{n}}\right].\tag{15}$$

Let $Z=E_{n}/\left(N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)$; we have $E_{n}=N\sigma_{\Phi}^{2}\sigma_{n}^{2}Z$. Combining with (10) and (11), we can obtain that $Z$ follows the Chi-square distribution with $M$ degrees of freedom. Hence, the mean of the reciprocal of $Z$ can be calculated as
$$E\left[\frac{1}{Z}\right]=\frac{1}{M-2},\quad M>2.\tag{16}$$

According to the relation between $E_{n}$ and $Z$, we have
$$E\left[\frac{1}{E_{n}}\right]=\frac{1}{N\sigma_{\Phi}^{2}\sigma_{n}^{2}}E\left[\frac{1}{Z}\right]=\frac{1}{\left(M-2\right)N\sigma_{\Phi}^{2}\sigma_{n}^{2}}.\tag{17}$$

Substituting (8) and (17) into (15) yields
$$E\left[\gamma\right]=\frac{MK\sigma_{\Phi}^{2}\sigma_{s}^{2}}{\left(M-2\right)N\sigma_{\Phi}^{2}\sigma_{n}^{2}}=\frac{MK\sigma_{s}^{2}}{\left(M-2\right)N\sigma_{n}^{2}}.\tag{18}$$
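
As a numerical illustration with assumed values (not taken from the paper) $M=64$, $K=8$, $N=256$, $\sigma_{s}^{2}=1$, and $\sigma_{n}^{2}=0.1$, (18) gives
$$E\left[\gamma\right]=\frac{64\cdot 8\cdot 1}{62\cdot 256\cdot 0.1}\approx 0.32,$$
slightly larger than the asymptotic value $K\sigma_{s}^{2}/\left(N\sigma_{n}^{2}\right)=0.3125$ obtained when $M$ is large.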

After that, we analyze the variance of SNR, which is viewed as the variance of the product of the two independent random variables $E_{s}$ and $1/E_{n}$. By virtue of the property of the variance of such a product, we have
$$\mathrm{Var}\left(\gamma\right)=\mathrm{Var}\left(E_{s}\right)\mathrm{Var}\left(\frac{1}{E_{n}}\right)+\mathrm{Var}\left(E_{s}\right)\left(E\left[\frac{1}{E_{n}}\right]\right)^{2}+\mathrm{Var}\left(\frac{1}{E_{n}}\right)\left(E\left[E_{s}\right]\right)^{2}.\tag{19}$$

Now, we calculate the variance of $1/E_{n}$. According to the definition of the variance, we can obtain
$$\mathrm{Var}\left(\frac{1}{E_{n}}\right)=E\left[\frac{1}{E_{n}^{2}}\right]-\left(E\left[\frac{1}{E_{n}}\right]\right)^{2}=\frac{1}{\left(N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)^{2}}\left[E\left[\frac{1}{Z^{2}}\right]-\left(E\left[\frac{1}{Z}\right]\right)^{2}\right].\tag{20}$$

The item $E\left[1/Z^{2}\right]$ can be calculated as
$$E\left[\frac{1}{Z^{2}}\right]=\frac{1}{\left(M-2\right)\left(M-4\right)},\quad M>4.\tag{21}$$

By substituting (21) and (16) into (20), we can compute the variance of $1/E_{n}$:
$$\mathrm{Var}\left(\frac{1}{E_{n}}\right)=\frac{2}{\left(M-2\right)^{2}\left(M-4\right)\left(N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)^{2}}.\tag{22}$$

In terms of (8), (9), (17), and (22), (19) can be rewritten as
$$\mathrm{Var}\left(\gamma\right)=\frac{4M\left(M-1\right)}{\left(M-2\right)^{2}\left(M-4\right)}\cdot\frac{K^{2}\sigma_{s}^{4}}{N^{2}\sigma_{n}^{4}}.\tag{23}$$
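
For readability, the combination step can be spelled out with the shorthand $a=K\sigma_{\Phi}^{2}\sigma_{s}^{2}$ and $b=N\sigma_{\Phi}^{2}\sigma_{n}^{2}$ (introduced here only for this derivation): substituting (8), (9), (17), and (22) into (19) gives
$$\mathrm{Var}\left(\gamma\right)=\frac{4Ma^{2}}{\left(M-2\right)^{2}\left(M-4\right)b^{2}}+\frac{2Ma^{2}}{\left(M-2\right)^{2}b^{2}}+\frac{2M^{2}a^{2}}{\left(M-2\right)^{2}\left(M-4\right)b^{2}}=\frac{2M\left[2+\left(M-4\right)+M\right]}{\left(M-2\right)^{2}\left(M-4\right)}\cdot\frac{a^{2}}{b^{2}},$$
which reduces to (23) because $2M\left[2+\left(M-4\right)+M\right]=4M\left(M-1\right)$ and $a/b=K\sigma_{s}^{2}/\left(N\sigma_{n}^{2}\right)$.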

Most importantly, the probability density function (PDF) of $\gamma$ should be discussed. Because of the independence of $E_{s}$ and $E_{n}$, and $E_{s}\sim\Gamma\left(M/2,\,2K\sigma_{\Phi}^{2}\sigma_{s}^{2}\right)$, $E_{n}\sim\Gamma\left(M/2,\,2N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)$, the PDF of the ratio of the two Gamma random variables can be expressed as
$$f_{\gamma}\left(\gamma\right)=\frac{\Gamma\left(M\right)}{\Gamma\left(M/2\right)^{2}}\cdot\frac{\beta^{M/2}\gamma^{M/2-1}}{\left(1+\beta\gamma\right)^{M}},\quad\gamma>0,\ \beta=\frac{N\sigma_{n}^{2}}{K\sigma_{s}^{2}}.\tag{24}$$

The corresponding cumulative distribution function (CDF) can be expressed as
$$F_{\gamma}\left(\gamma\right)=I_{x}\left(\frac{M}{2},\frac{M}{2}\right),\quad x=\frac{\beta\gamma}{1+\beta\gamma},\tag{25}$$
where $I_{x}\left(p,q\right)$ is the regularized incomplete Beta function; that is, $I_{x}\left(p,q\right)=B\left(x;p,q\right)/B\left(p,q\right)$, $B\left(x;p,q\right)=\int_{0}^{x}t^{p-1}\left(1-t\right)^{q-1}dt$, and $B\left(p,q\right)=\Gamma\left(p\right)\Gamma\left(q\right)/\Gamma\left(p+q\right)$. Accordingly, (25) can be rewritten as
$$F_{\gamma}\left(\gamma\right)=\frac{2x^{M/2}}{M\,B\left(M/2,M/2\right)}\,{}_{2}F_{1}\left(\frac{M}{2},1-\frac{M}{2};\frac{M}{2}+1;x\right),\tag{26}$$
where ${}_{2}F_{1}\left(\cdot,\cdot;\cdot;\cdot\right)$ is the Gauss hypergeometric function.
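
With the parameterization above, (24)-(25) coincide with a scaled F-distribution, $\gamma=\left(K\sigma_{s}^{2}/N\sigma_{n}^{2}\right)F$ with $F\sim F_{M,M}$, so they can be evaluated with standard numerical libraries. The following sketch (illustrative parameters, SciPy assumed available, binary sparse signal as in the simulations of Section 4) compares this CDF with empirical SNR samples.

```python
import numpy as np
from scipy import stats

# Sketch: compare the Gamma-ratio CDF of the SNR, expressed as a scaled F_{M,M}
# distribution, with Monte Carlo samples of ||Phi s||^2 / ||Phi n||^2.
# All parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(3)
N, M, K = 256, 64, 8
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.1
beta = (N * sigma_n2) / (K * sigma_s2)                 # scale factor in (24)-(25)

T = 10000
snr = np.empty(T)
for t in range(T):
    Phi = rng.normal(0.0, np.sqrt(sigma_phi2), size=(M, N))
    s = np.zeros(N)
    s[rng.choice(N, size=K, replace=False)] = np.sqrt(sigma_s2) * rng.choice([-1.0, 1.0], size=K)
    n = rng.normal(0.0, np.sqrt(sigma_n2), size=N)
    n *= np.sqrt(N * sigma_n2) / np.linalg.norm(n)     # fix the noise energy to N*sigma_n2
    snr[t] = np.sum((Phi @ s) ** 2) / np.sum((Phi @ n) ** 2)

grid = np.quantile(snr, [0.1, 0.25, 0.5, 0.75, 0.9])
theory_cdf = stats.f(dfn=M, dfd=M).cdf(beta * grid)    # CDF (25) at the grid points
empirical_cdf = np.array([np.mean(snr <= g) for g in grid])
print(np.c_[grid, empirical_cdf, theory_cdf])
```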

We can observe that these expressions of the CDF and PDF are very complicated, so it is difficult to exploit them directly in practical scenarios. Generally, the number of the compressed measurements $M$ is relatively large; thus the quadratic sums $E_{s}$ and $E_{n}$ can also be viewed as Gaussian random variables in terms of the central limit theorem [9]. Because of the independence of $E_{s}$ and $E_{n}$, the probability density function of $\gamma$ can be expressed as [10, 11]
$$f_{\gamma}\left(\gamma\right)=\frac{b\left(\gamma\right)d\left(\gamma\right)}{a^{3}\left(\gamma\right)}\cdot\frac{1}{\sqrt{2\pi}\,\sigma_{E_{s}}\sigma_{E_{n}}}\left[2\Phi\left(\frac{b\left(\gamma\right)}{a\left(\gamma\right)}\right)-1\right]+\frac{1}{a^{2}\left(\gamma\right)\pi\sigma_{E_{s}}\sigma_{E_{n}}}e^{-c/2},\tag{27}$$
where $a\left(\gamma\right)=\sqrt{\gamma^{2}/\sigma_{E_{s}}^{2}+1/\sigma_{E_{n}}^{2}}$, $b\left(\gamma\right)=\mu_{E_{s}}\gamma/\sigma_{E_{s}}^{2}+\mu_{E_{n}}/\sigma_{E_{n}}^{2}$, $c=\mu_{E_{s}}^{2}/\sigma_{E_{s}}^{2}+\mu_{E_{n}}^{2}/\sigma_{E_{n}}^{2}$, and $d\left(\gamma\right)=\exp\left[\left(b^{2}\left(\gamma\right)-c\,a^{2}\left(\gamma\right)\right)/\left(2a^{2}\left(\gamma\right)\right)\right]$. Here $\mu_{E_{s}}$, $\sigma_{E_{s}}^{2}$, $\mu_{E_{n}}$, and $\sigma_{E_{n}}^{2}$ denote the means and variances of $E_{s}$ and $E_{n}$ given in (8), (9), (11), and (12).

The corresponding CDF is expressed as
$$F_{\gamma}\left(\gamma\right)=\Phi\left(\frac{\mu_{E_{n}}\gamma-\mu_{E_{s}}}{\sqrt{\sigma_{E_{s}}^{2}+\gamma^{2}\sigma_{E_{n}}^{2}}}\right),\tag{28}$$
where $\Phi\left(\cdot\right)$ is the CDF of the standard Gaussian random variable.
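
The Gaussian-approximation CDF (28) is straightforward to evaluate. A minimal sketch is given below, reusing the means and variances from (8), (9), (11), and (12); the parameter values are again illustrative, and the last printed column gives the Gamma-based CDF (25) for comparison.

```python
import numpy as np
from scipy import stats

# Sketch: evaluate the Gaussian-approximation CDF of the SNR in (28) and compare
# it with the Gamma-ratio CDF in (25). Illustrative parameters only.
N, M, K = 256, 64, 8
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.1

mu_s = M * K * sigma_phi2 * sigma_s2                   # (8)
var_s = 2 * M * (K * sigma_phi2 * sigma_s2) ** 2       # (9)
mu_n = M * N * sigma_phi2 * sigma_n2                   # (11)
var_n = 2 * M * (N * sigma_phi2 * sigma_n2) ** 2       # (12)
beta = (N * sigma_n2) / (K * sigma_s2)

def snr_cdf_gaussian(g):
    """CDF approximation of gamma = E_s / E_n when both energies are treated as Gaussian."""
    return stats.norm.cdf((mu_n * g - mu_s) / np.sqrt(var_s + g ** 2 * var_n))

for g in (0.2, 0.3, 0.4, 0.5):
    print(g, snr_cdf_gaussian(g), stats.f(dfn=M, dfd=M).cdf(beta * g))
```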

It can be seen that the resulting CDF and PDF are related to the sparsity $K$, the dimension $N$ of the signal, the variance of the signal, the variance of the noise, and the variance of the measurement matrix. Furthermore, the CDF and PDF under the Gaussian assumption are simpler than those of the Gamma distribution.

It should be explained that we do not place any restrictions on the power range of the noise and the signal in the preceding analysis, and the analysis results apply to any SNR. Hence, it is reasonable not to consider the dynamic range.

4. Simulation Results

To verify the theoretical analysis, some simulations are performed. Firstly, the PDF of the energy of the compressed signal for different $M$ is shown in Figure 1. Random variables following the binomial distribution are used as the signal, which is a common information model for practical communication systems. The mean and the variance of this distribution are zero and $\sigma_{s}^{2}$, respectively.

The signal dimension $N$ and the sparsity $K$ are kept fixed in the simulations. To remove the effect of the measurement matrix on the energy of the compressed signal and the compressed noise, the variance of each column of the measurement matrix is set to one; that is, $\sigma_{\Phi}^{2}=1/M$. Combining with the mean and the variance of the binomial distribution, (8) and (9) can be simplified as
$$E\left[E_{s}\right]=K\sigma_{s}^{2},\qquad\mathrm{Var}\left(E_{s}\right)=\frac{2K^{2}\sigma_{s}^{4}}{M}.\tag{29}$$

It can be demonstrated that the mean is fixed when $M$ changes. In other words, the mean is independent of $M$. However, the variance is inversely proportional to $M$. Hence, the simulation result is consistent with the theoretical analysis in (29).

For the noise, (11) and (12) can be rewritten as
$$E\left[E_{n}\right]=N\sigma_{n}^{2},\qquad\mathrm{Var}\left(E_{n}\right)=\frac{2N^{2}\sigma_{n}^{4}}{M}.\tag{30}$$

Figure 2 shows the impact of the number of the compressed measurements on the statistical properties. It can be observed that the variation tendencies of the mean and the variance are the same as those of the compressed signal. It should be pointed out that, although the statistical properties experience some changes, the energy of the noise approximates the energy of the compressed noise because of the RIP of the measurement matrix; the same holds for the signal and the compressed signal.

Finally, the SNR of the compressed measurements is examined for different numbers of compressed measurements in Figure 3. It can be observed that the mean of the SNR approximates $K\sigma_{s}^{2}/\left(N\sigma_{n}^{2}\right)$ for different $M$ because $M/\left(M-2\right)\approx1$, and the variance of the SNR decreases with increasing $M$. Consequently, we conclude that these simulated results coincide with (18) and (23).
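
The trends reported in Figures 1-3 can be reproduced qualitatively with a short script. The sketch below (assumed parameters; binary sparse signal as described above; noise rescaled to energy $N\sigma_{n}^{2}$) sweeps $M$ and prints the empirical means and variances of $E_{s}$, $E_{n}$, and the SNR against (29), (30), (18), and (23).

```python
import numpy as np

# Sketch: sweep M and compare empirical statistics of E_s, E_n, and the SNR with
# (29), (30), (18), and (23). N, K, sigma_s2, sigma_n2 are illustrative values.
rng = np.random.default_rng(4)
N, K, sigma_s2, sigma_n2 = 256, 8, 1.0, 0.1
T = 5000

for M in (32, 64, 128):
    sigma_phi2 = 1.0 / M                               # normalized measurement matrix columns
    Es, En = np.empty(T), np.empty(T)
    for t in range(T):
        Phi = rng.normal(0.0, np.sqrt(sigma_phi2), size=(M, N))
        s = np.zeros(N)
        s[rng.choice(N, size=K, replace=False)] = np.sqrt(sigma_s2) * rng.choice([-1.0, 1.0], size=K)
        n = rng.normal(0.0, np.sqrt(sigma_n2), size=N)
        n *= np.sqrt(N * sigma_n2) / np.linalg.norm(n) # fix the noise energy to N*sigma_n2
        Es[t], En[t] = np.sum((Phi @ s) ** 2), np.sum((Phi @ n) ** 2)
    snr = Es / En
    mean_snr = M * K * sigma_s2 / ((M - 2) * N * sigma_n2)                                # (18)
    var_snr = 4 * M * (M - 1) / ((M - 2) ** 2 * (M - 4)) * (K * sigma_s2 / (N * sigma_n2)) ** 2  # (23)
    print(M, Es.mean(), K * sigma_s2, Es.var(), 2 * K ** 2 * sigma_s2 ** 2 / M)           # cf. (29)
    print(M, En.mean(), N * sigma_n2, En.var(), 2 * N ** 2 * sigma_n2 ** 2 / M)           # cf. (30)
    print(M, snr.mean(), mean_snr, snr.var(), var_snr)                                    # cf. (18), (23)
```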

5. Conclusion

In the framework of compressed sensing, some statistical properties of the compressed signal and the compressed noise were calculated and analyzed, mainly including the mean, the variance, the probability density function, and the cumulative distribution function. It has been illustrated that these statistical properties change when the signal and the noise are processed by compressed sensing. If the entries of the measurement matrix are normalized, the means of the energies of the compressed signal and the compressed noise remain unchanged, but their variances vary inversely with the number of the compressed measurements $M$. Based on these results, the mean and the variance of the SNR were obtained. Then, using the independence of the energy of the compressed signal and the energy of the compressed noise, we derived closed-form expressions of the probability density function and the cumulative distribution function for the cases of the Gamma and Gaussian distributions.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this article.

Acknowledgments

This work is supported by National Natural Science Foundation of China (NSFC) (61301101, 61671176).