Abstract

Compressed sensing can recover sparse signals from far fewer samples than the traditional Nyquist sampling theorem requires. Block sparse signals (BSS), whose nonzero coefficients occur in clusters, arise naturally in many practical scenarios, and exploiting this sparse structure can improve recovery performance. In this paper, we consider recovering arbitrary BSS within a sparse Bayesian learning framework by inducing a correlated Laplacian scale mixture (LSM) prior, which models the dependence between adjacent elements of the block sparse signal; a block sparse Bayesian learning algorithm is then derived via variational Bayesian inference. Moreover, we present a fast version of the proposed recovery algorithm, which avoids matrix inversion and offers robust recovery performance in the low-SNR case. Experimental results with simulated data and ISAR imaging show that the proposed algorithms can efficiently reconstruct BSS and have good antinoise ability in noisy environments.

1. Introduction

Compressed sensing (CS) [1] provides a new sampling and reconstruction paradigm, which can recover sparse signals from linear measurements

$\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{n}, \quad (1)$

where $\boldsymbol{\Phi} \in \mathbb{R}^{M \times N}$ ($M < N$) is the measurement matrix, $\mathbf{y} \in \mathbb{R}^{M}$ is the measurement vector, $\mathbf{x} \in \mathbb{R}^{N}$ is the sparse signal, and $\mathbf{n}$ is the additive noise. Many recovery algorithms have been presented to reconstruct sparse signals, including orthogonal matching pursuit (OMP) [2] and sparse Bayesian learning (SBL) [3].
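As a concrete illustration of the measurement model, the following sketch generates a random Gaussian measurement matrix and a synthetic sparse signal; all dimensions and the noise level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: N-dimensional sparse signal, M < N measurements.
N, M, K = 100, 40, 5                              # signal length, measurements, sparsity

Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random Gaussian measurement matrix
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)    # random support of size K
x[support] = rng.standard_normal(K)               # K-sparse signal
n = 0.01 * rng.standard_normal(M)                 # additive Gaussian noise
y = Phi @ x + n                                   # linear measurements y = Phi x + n
```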

In some signal processing applications, such as ISAR imaging [4] and gene expression analysis [5], many sparse signals exhibit block structure; i.e., their nonzero elements are clustered. Exploiting such structural priors can largely improve reconstruction performance. Therefore, many algorithms have been proposed to improve the reconstruction of block sparse signals (BSS). For instance, Block-OMP [6] and Block-StOMP [7] are OMP-based approaches. Meanwhile, several block recovery algorithms based on the Bayesian compressed sensing framework have been presented, including block sparse Bayesian learning (BSBL) [8], Cluss-MCMC [9], model-based Bayesian CS via local beta process (MBCS-LBP) [10], and pattern-coupled sparse Bayesian learning (PC-SBL) [11]. Among these, the Bayesian algorithms have parameter learning ability and can recover arbitrary signals with unknown sparse structures by flexibly imposing different sparse prior models.

In [12], Zhang et al. proposed an expectation-maximization-based variational Bayesian (EM-VB) inference method that uses the Laplacian scale mixture (LSM) model as a sparse prior; i.e., the sparse signal is assumed to obey a Laplacian prior because the Laplacian distribution represents sparsity well. Based on this model, for BSS with unknown block information, this paper proposes a block Bayesian recovery algorithm by inducing a correlated LSM prior model, which exploits the dependence between neighboring elements of the BSS. Furthermore, to improve computational efficiency, a fast version without matrix inversion is presented, which is suitable for noisy environments, especially in the low-SNR case. Experimental results on simulated data and ISAR imaging show that the proposed algorithms reconstruct BSS well and are robust to noise in noisy environments.

The remainder of this paper is organized as follows. In Section 2, a correlated LSM prior model for BSS is given. Then, the proposed block Bayesian recovery algorithm and the fast version are derived in Section 3. Simulation experiments are presented in Section 4. Finally, we conclude this paper in Section 5.

2. Signal Model

In the framework of sparse Bayesian learning, for the measurement model shown in (1), the noise $\mathbf{n}$ is generally assumed to obey a Gaussian prior distribution $p(\mathbf{n}) = \mathcal{N}(\mathbf{n} \mid \mathbf{0}, \alpha^{-1}\mathbf{I})$, and a Gamma distribution is placed on the noise precision hyperparameter $\alpha$:

$p(\alpha \mid c, d) = \frac{d^{c}}{\Gamma(c)}\,\alpha^{c-1} e^{-d\alpha}, \quad (2)$

where $\Gamma(\cdot)$ is the Gamma function. The sparse signal is usually assumed to obey a sparse prior distribution. In the LSM layered prior model [12], the sparse signal is supposed to follow a Laplacian prior distribution

$p(x_i \mid \lambda_i) = \frac{1}{2\lambda_i}\exp\!\left(-\frac{|x_i|}{\lambda_i}\right), \quad (3)$

where $\lambda_i$ is the scale parameter of the Laplacian distribution for each element $x_i$ of the signal. Since the Inverse-Gamma (IG) distribution is conjugate to the Laplacian distribution, the LSM model assumes that the scale parameter obeys the IG distribution as follows:

$p(\lambda_i \mid a, b) = \frac{b^{a}}{\Gamma(a)}\,\lambda_i^{-a-1}\exp\!\left(-\frac{b}{\lambda_i}\right). \quad (4)$
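The two-layer LSM prior can be simulated by first drawing each scale from the Inverse-Gamma distribution and then drawing the signal element from the corresponding Laplacian; the hyperparameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

a, b, N = 1.0, 0.1, 10000       # illustrative IG hyperparameters and sample count

# lambda_i ~ IG(a, b) is equivalent to 1/lambda_i ~ Gamma(shape=a, rate=b)
lam = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=N)

# x_i | lambda_i ~ Laplace(0, lambda_i): the resulting marginal over x_i is
# heavy-tailed and sparsity-promoting
x = rng.laplace(loc=0.0, scale=lam)
```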

In the above model, each hyperparameter $\lambda_i$ controls the corresponding signal element individually, and the signal elements are considered independent. However, since the nonzero elements of BSS appear in clusters, a more appropriate model is required for BSS. For BSS whose structural prior information is unknown, the PC-SBL algorithm [11] assumes that the hyperparameters of adjacent elements jointly influence the sparsity of each element. Inspired by PC-SBL, we assume that the block sparse signal obeys the following correlated Laplacian prior distribution, in which a coupling parameter indicates the degree of correlation between adjacent elements of the signal. It can be seen from (5) that the element $x_i$ is affected not only by its own hyperparameter $\lambda_i$ but also by the neighboring ones $\lambda_{i-1}$ and $\lambda_{i+1}$. For the elements at both ends, $x_1$ and $x_N$, the neighboring hyperparameter contributions are set to zero. The model (5) exploits the clustered structure of the block sparse signal, and the scale parameters still obey the IG distribution shown in (4).
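Under one plausible reading of the correlated prior, modeled on the pattern coupling in PC-SBL, each element's effective Laplacian rate combines its own hyperparameter with its neighbors' through the coupling parameter; the additive-rate form below is an assumption for illustration, not the paper's exact equation (5).

```python
import numpy as np

def coupled_rates(lam, beta):
    """Hypothetical effective Laplacian rate for each element:
    rate_i = 1/lam_i + beta/lam_{i-1} + beta/lam_{i+1},
    with the boundary convention that contributions from outside the
    signal (i = 0 and i = N+1) are zero, as in the text."""
    inv = 1.0 / lam
    left = np.concatenate(([0.0], inv[:-1]))    # neighbour i-1 (zero at left end)
    right = np.concatenate((inv[1:], [0.0]))    # neighbour i+1 (zero at right end)
    return inv + beta * (left + right)
```

With `beta = 0` the coupling vanishes and each element is governed by its own scale alone, recovering the independent LSM model.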

3. Block Bayesian Recovery Algorithms

In Bayesian inference, given the observation $\mathbf{y}$, we need to derive the posterior probability density of all unknown parameters $\boldsymbol{\theta} = \{\mathbf{x}, \boldsymbol{\lambda}, \alpha\}$ (the signal, the scale parameters, and the noise precision). Variational Bayesian inference is a widely used method for approximating the posterior, which assumes that the variables are mutually independent. Let $q(\boldsymbol{\theta})$ denote the approximate posterior, and then

$q(\boldsymbol{\theta}) = q(\mathbf{x})\,q(\boldsymbol{\lambda})\,q(\alpha). \quad (6)$

For each of these latent variables, the approximate posterior distribution can be computed in an alternating manner as follows:

$\ln q(\theta_k) = \left\langle \ln p(\mathbf{y}, \boldsymbol{\theta}) \right\rangle + \text{const}, \quad (7)$

where $\langle \cdot \rangle$ denotes the expectation with respect to the distributions $q(\theta_j)$, $j \neq k$. According to (7), the proposed reconstruction algorithm is derived by alternately learning the update rules of these latent variables.

Firstly, the approximate posterior distribution of the signal is

$\ln q(\mathbf{x}) = \left\langle \ln p(\mathbf{y} \mid \mathbf{x}, \alpha) + \ln p(\mathbf{x} \mid \boldsymbol{\lambda}) \right\rangle + \text{const},$

where $p(\mathbf{y} \mid \mathbf{x}, \alpha)$ is a Gaussian distribution and $p(\mathbf{x} \mid \boldsymbol{\lambda})$ is the correlated Laplacian distribution shown in (5). Since these two distributions are not conjugate, a direct solution is difficult. Similar to [12], an auxiliary objective function is introduced.

The maximum a posteriori (MAP) estimate of the signal can be obtained by maximizing this objective function. Taking its derivative, where $\operatorname{diag}(\cdot)$ denotes a diagonal matrix with the elements in the bracket, and setting the derivative equal to zero gives the approximate MAP estimate:

So the posterior distribution can be approximated using a second-order Taylor expansion around the MAP estimate. After a simplification similar to [12], $q(\mathbf{x})$ can be approximated by a Gaussian distribution with a corresponding mean and covariance matrix. Because the above derivation of the posterior distribution involves several approximations, the sparsity of the signal may be underestimated. Therefore, a parameter is introduced into the computation of the covariance matrix. Thus, $q(\mathbf{x})$ is approximated by a Gaussian distribution whose mean and covariance matrix, respectively, are

Secondly, the approximate posterior distribution of the scale parameters is

$\ln q(\boldsymbol{\lambda}) = \left\langle \ln p(\mathbf{x} \mid \boldsymbol{\lambda}) + \ln p(\boldsymbol{\lambda}) \right\rangle + \text{const}.$

From (4) and (5), we have

So $q(\boldsymbol{\lambda})$ can be approximated as

Therefore, $q(\lambda_i)$ obeys an Inverse-Gamma distribution with the following parameters:

We can also obtain the posterior expectations required in the updates, which can be computed according to [13] in terms of $\operatorname{erf}(\cdot)$, the error function.

Thirdly, the approximate posterior distribution of the noise parameter is

$\ln q(\alpha) = \left\langle \ln p(\mathbf{y} \mid \mathbf{x}, \alpha) + \ln p(\alpha) \right\rangle + \text{const}.$

So $q(\alpha)$ obeys a Gamma distribution whose parameters involve $\operatorname{tr}(\cdot)$, the trace of a matrix, and can be obtained from the posterior moments of the signal.

Therefore, the whole process of the proposed algorithm is summarized in Algorithm 1, where the stopping threshold is a preset tolerable error. The proposed algorithm can be regarded as an extension of the EM-VB algorithm to the recovery of BSS, and it is termed the Block EM-VB algorithm. Besides the block parameter, it has one additional parameter in the covariance computation. When the block parameter equals zero, the Block EM-VB algorithm reduces to the EM-VB. The parameter introduced in (15) and the block parameter appearing in (22) have a great influence on the recovery performance of the Block EM-VB. The former should be set appropriately to avoid underestimating the support set when the sparsity level is high, while a larger value can suppress nonzero signal elements and thus provides a certain antinoise capability in noisy environments. The choice of the block parameter is similar: a larger value enhances the influence between adjacent elements and can suppress nonzero values, making the recovered signal sparser in noisy environments.

Input: the measurement matrix and the measurement vector.
Initialize: the signal estimate and the hyperparameters.
While the stopping criterion is not met and the maximum number of iterations is not reached do
Update:
(1) Compute the mean and the covariance matrix by (15) and (11).
(2) Compute the scale parameters according to (22) and (23).
(3) Compute the noise parameter via (27).
Output: the recovered signal.
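A minimal Python sketch of the alternating structure of Algorithm 1 follows. Since the paper's closed-form update equations are not reproduced in the text, the three steps use standard sparse-Bayesian stand-ins (a regularized Gaussian posterior step, a neighbor-coupled hyperparameter step, and a noise-precision step); the names `beta_block`, `eta`, and all update formulas are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def block_em_vb(Phi, y, beta_block=0.5, eta=1.0, max_iter=200, tol=1e-4):
    # Skeleton of the alternating variational updates; only the control flow
    # mirrors Algorithm 1, the update rules are generic stand-ins.
    M, N = Phi.shape
    lam = np.ones(N)                 # Laplacian scale hyperparameters
    noise_prec = 1.0                 # noise precision
    mu = np.zeros(N)
    for _ in range(max_iter):
        mu_old = mu.copy()
        # (1) Gaussian posterior of the signal (ridge-style stand-in)
        Sigma = np.linalg.inv(noise_prec * Phi.T @ Phi + eta * np.diag(1.0 / lam))
        mu = noise_prec * Sigma @ Phi.T @ y
        # (2) hyperparameter update with neighbour coupling (stand-in)
        e2 = mu ** 2 + np.diag(Sigma)
        left = np.concatenate(([0.0], e2[:-1]))    # zero contribution at ends
        right = np.concatenate((e2[1:], [0.0]))
        lam = e2 + beta_block * (left + right) + 1e-10
        # (3) noise-precision update (stand-in, capped for numerical safety)
        resid = y - Phi @ mu
        denom = resid @ resid + np.trace(Phi @ Sigma @ Phi.T)
        noise_prec = min(M / denom, 1e10)
        if np.linalg.norm(mu - mu_old) <= tol * max(np.linalg.norm(mu_old), 1.0):
            break
    return mu
```

With `beta_block = 0` the coupling disappears, mirroring the reduction of the Block EM-VB to the EM-VB noted above.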

Remark 1. From (23), it can be seen that the computation involves an integral operation. In practice, to reduce the complexity, it can be calculated by utilizing standard approximation equations for the error function. From the process of the Block EM-VB algorithm, it can be seen that its complexity is almost the same as that of the PC-SBL.
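The paper's specific approximation equations are elided in the text, but a classical polynomial approximation of the error function (Abramowitz and Stegun, formula 7.1.26, maximum absolute error about 1.5e-7) illustrates how such an integral can be evaluated cheaply without special-function libraries:

```python
import math

def erf_approx(x):
    # Abramowitz & Stegun 7.1.26: erf(x) ~ 1 - (a1 t + ... + a5 t^5) e^{-x^2},
    # t = 1 / (1 + p x), valid for x >= 0; odd symmetry handles x < 0.
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
             - 0.284496736) * t + 0.254829592) * t
    return sign * (1.0 - poly * math.exp(-x * x))
```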

The proposed Block EM-VB algorithm involves the matrix inversion shown in (15), which dominates its computational complexity. It is therefore desirable to develop a fast version of the proposed algorithm. To avoid computing the matrix inverse, we can use the following factorized approximate posterior distribution:

$q(\mathbf{x}) = \prod_{i=1}^{N} q(x_i).$

That is to say, it assumes posterior independence among the coefficient elements of the signal. Similarly, by using (7), we can alternately learn the update rules of these latent variables.

Firstly, the approximate posterior distribution $q(x_i)$ is computed, where $\boldsymbol{\phi}_i$ denotes the $i$-th column of $\boldsymbol{\Phi}$. As in the derivation above, an objective function is introduced and differentiated with respect to $x_i$.

Setting this derivative equal to zero yields the following approximate MAP estimate:

Similarly, $q(x_i)$ is approximated by a Gaussian distribution, in which a parameter is also introduced to avoid underestimating the sparsity of the signal. Thus, the posterior mean and variance of each element can be computed in a sequential manner.

Secondly, the approximate posterior distribution of the scale parameters still obeys the Inverse-Gamma distribution shown in (20), and the computation is the same as (22).

Thirdly, the approximate posterior distribution of the noise parameter still obeys a Gamma distribution, and its parameters can be obtained accordingly.

Compared with the Block EM-VB algorithm, the above algorithm has lower computational complexity since no matrix inversion is required in each iteration. It is called the Fast Block EM-VB, and its process is summarized in Algorithm 2. In noisy environments, the correlations between adjacent signal elements may be weakened, especially in the low-SNR case, so it is appropriate to assume posterior independence among the coefficient elements, which implies that the Fast Block EM-VB is suitable for recovering BSS in the low-SNR case.

Input: the measurement matrix and the measurement vector.
Initialize: the signal estimate and the hyperparameters.
While the stopping criterion is not met and the maximum number of iterations is not reached do
Update:
(1) Compute the mean and the variance of each element sequentially by (35).
(2) Compute the scale parameters according to (22) and (23).
(3) Compute the noise parameter via (27) and (36).
Output: the recovered signal.
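The sequential, inversion-free character of Algorithm 2 can be sketched as one coordinate-wise sweep in which each coefficient is updated while the others are held fixed. The sweep below combines a per-column least-squares step with a soft-threshold stand-in for the Laplacian MAP shrinkage; the exact update rules (35) and (36) are elided in the text, so all formulas and parameter names here are illustrative assumptions.

```python
import numpy as np

def sequential_sweep(Phi, y, mu, lam_eff, noise_prec, eta=1.0):
    # One coordinate-wise sweep: no matrix inversion, O(MN) per sweep.
    M, N = Phi.shape
    residual = y - Phi @ mu
    sigma2 = np.empty(N)
    for i in range(N):
        phi_i = Phi[:, i]
        g = phi_i @ phi_i                    # per-column Gram term
        r_i = residual + phi_i * mu[i]       # residual with coordinate i removed
        sigma2[i] = eta / (noise_prec * g)   # stand-in variance update
        mu_ls = (phi_i @ r_i) / g            # least-squares estimate of x_i
        # soft threshold as a stand-in for the Laplacian MAP shrinkage
        thr = 1.0 / (lam_eff[i] * noise_prec * g)
        mu_i = np.sign(mu_ls) * max(abs(mu_ls) - thr, 0.0)
        residual = r_i - phi_i * mu_i        # keep the residual consistent
        mu[i] = mu_i
    return mu, sigma2
```

Because the residual is maintained incrementally, a full sweep touches each column of the measurement matrix once, which is the source of the speedup over the matrix inversion in (15).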

4. Simulation Experiments

In this section, some simulation experiments are carried out to demonstrate the performances of the proposed Block EM-VB algorithm and its fast version. A comparison with other algorithms such as EM-VB [12] and PC-SBL [11] is also given.

4.1. Performance Analysis via Simulated Data

In the following simulations, an arbitrary block Gaussian sparse signal is randomly generated with its nonzero entries randomly distributed in blocks, and the measurement matrix is a random Gaussian matrix. The number of Monte Carlo simulations is 200. The common parameters of the EM-VB, the Block EM-VB, and its fast version are fixed across experiments. The remaining parameters of the Block EM-VB and its fast version will be set adaptively according to the noiseless and noisy cases because the recovery performance is sensitive to their choice. The parameters of the PC-SBL are set the same as those in [11].
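The block-sparse test signals described above can be generated, for instance, as follows; the construction (roughly equal-sized blocks at random non-overlapping positions with Gaussian amplitudes) is an illustrative stand-in for the paper's unspecified generator.

```python
import numpy as np

def block_sparse_signal(N, K, n_blocks, rng):
    """Generate a length-N signal with K nonzeros clustered in n_blocks
    non-overlapping blocks of (nearly) equal size (illustrative sketch)."""
    sizes = np.full(n_blocks, K // n_blocks)
    sizes[: K % n_blocks] += 1                      # distribute the remainder
    x = np.zeros(N)
    # sorted distinct starts; shifting by the accumulated block sizes
    # guarantees the blocks never overlap
    starts = np.sort(rng.choice(N - K, size=n_blocks, replace=False))
    offset = 0
    for s, sz in zip(starts, sizes):
        x[s + offset : s + offset + sz] = rng.standard_normal(sz)
        offset += sz
    return x
```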

First, we discuss the influence of the two parameters on the performance of the proposed Block EM-VB and make a comparison with the EM-VB, which is a special case of the Block EM-VB with the block parameter set to zero. To demonstrate the effect of these two parameters, the performances of the Block EM-VB under several parameter settings are also given. The support recovery rate is used to evaluate the performance of the Block EM-VB algorithm with different parameters. The recovered support of the sparse signal is defined as the index set of its significant entries, and the support recovery rate measures the overlap between the estimated support and the true support, where $|\cdot|$ denotes the number of elements in a set. The greater the overlap between the estimated support and the true support, the closer the recovery rate is to 1. Figure 1 plots the support recovery rates of the different algorithms versus the sparsity level for two different numbers of measurements. It can be seen that the parameters of the Block EM-VB algorithm have an important influence on the recovery performance. An appropriate parameter setting can avoid underestimation of the support of the sparse signal, and the block parameter is helpful for recovering block sparse signals. Thus, the proposed Block EM-VB algorithm with appropriate parameters outperforms the EM-VB.
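One plausible implementation of the support recovery rate is the fraction of the true support captured by the estimated support; the nonzero-detection threshold and the exact normalization below are assumptions, since the defining equation is elided in the text.

```python
import numpy as np

def support_recovery_rate(x_hat, x_true, tol=1e-3):
    # |S_hat ∩ S_true| / |S_true|: closer to 1 means greater overlap
    s_hat = set(np.flatnonzero(np.abs(x_hat) > tol))
    s_true = set(np.flatnonzero(np.abs(x_true) > tol))
    if not s_true:
        return 1.0
    return len(s_hat & s_true) / len(s_true)
```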

Then, we compare the Block EM-VB and its fast version with the PC-SBL. In the noiseless case, the success rate is used to evaluate the performances of the different algorithms: a trial is regarded as successful when the reconstruction error falls below a preset threshold, and the success rate is defined as the percentage of successful trials among the total number of independent trials. In the noisy case, the reconstruction performance of each algorithm is evaluated by the normalized mean square error (NMSE).
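The NMSE used in the noisy experiments can be computed as follows, assuming the common normalization by the true signal's energy (the paper's exact definition is elided in the text):

```python
import numpy as np

def nmse(x_hat, x_true):
    # NMSE = ||x_hat - x_true||^2 / ||x_true||^2
    x_hat = np.asarray(x_hat, dtype=float)
    x_true = np.asarray(x_true, dtype=float)
    return float(np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2))
```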

In the noiseless case, Figure 2 plots the success rate of each recovery algorithm versus the sparsity level for two different numbers of measurements. The success rate of each algorithm versus the number of measurements at a fixed sparsity level is then shown in Figure 3. From these results, it is observed that the Block EM-VB is superior to the PC-SBL when the number of measurements is smaller or the sparsity level is lower. The Fast Block EM-VB is inferior to these two algorithms due to its independence assumption.

Then, we consider the noisy case, where Gaussian noise is added to the measurements at a specified signal-to-noise ratio (SNR). It should be noted that the parameter setting in the proposed algorithms needs to balance removing noise against preserving the nonzero elements of the signal: a larger value tends to suppress noise at the risk of losing nonzero values of the signal. Compared with the Block EM-VB, the fast version needs a smaller value to ensure the recovery of the nonzero elements of the sparse signal because of its inherent denoising ability. The reconstruction results of the different algorithms are given in Figure 4. It can be seen that the Block EM-VB algorithm and its fast version have better reconstruction performance than the PC-SBL in the low-SNR environment.

The NMSE of each algorithm versus the sparsity level is shown in Figure 5 for two different numbers of measurements. Figure 6 plots the NMSE of each recovery algorithm versus the number of measurements for two different sparsity levels. It can be found that the Block EM-VB algorithm is superior to the PC-SBL when the number of measurements is larger or the sparsity level is lower, while the latter is better when the number of measurements is smaller and the sparsity level is higher. It is also observed that the Fast Block EM-VB is better than the Block EM-VB when the number of measurements is smaller or the sparsity level is higher. In addition, the Fast Block EM-VB outperforms the PC-SBL when the sparsity level is lower or the number of measurements is larger.

The performances of the algorithms under different SNRs are shown in Figure 7. For the Block EM-VB, the parameters are set to vary with the SNR, using different values for the low-, medium-, and high-SNR ranges, while the parameters of the Fast Block EM-VB algorithm are kept fixed. From Figure 7, it can be seen that the Block EM-VB algorithm and its fast version have good reconstruction performance in the low-SNR case compared with the PC-SBL. In particular, the Fast Block EM-VB has robust recovery performance in terms of noise immunity.

Finally, the average runtimes of the algorithms versus the length of the signal, averaged over 5 independent trials, are given in Figure 8. The results validate that the Block EM-VB has almost the same computational complexity as the PC-SBL, while the Fast Block EM-VB has the highest computational efficiency among the compared recovery algorithms, giving it a potential advantage in practical applications.

4.2. Application in ISAR Imaging

Inverse synthetic aperture radar (ISAR) imaging can be naturally cast as a sparse signal recovery problem due to the sparsity of the target scene [14]. In this experiment, the “Yak-42” dataset is used, in which the number of range cells is 256 and the number of pulses is 256; 128 pulses are randomly sampled to simulate sparse-aperture data. Here, we use the MATLAB code provided in [14], where the PC-SBL adopts a pruning operation. For a fair comparison, the proposed Block EM-VB and its fast version also use a similar pruning operation, and their parameters are set accordingly. Image entropy is usually used to measure image quality in ISAR imaging; a smaller image entropy indicates better reconstruction performance. The image entropy is defined as

$E = -\sum_{i}\sum_{j} \frac{|I(i,j)|^{2}}{S} \ln \frac{|I(i,j)|^{2}}{S},$

where $S = \sum_{i}\sum_{j} |I(i,j)|^{2}$ is the energy of the radar image $I$.
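The entropy measure described above can be computed directly from the complex-valued image; zero-energy pixels are skipped to keep the logarithm finite.

```python
import numpy as np

def image_entropy(I):
    # E = -sum_ij p_ij ln p_ij with p_ij = |I_ij|^2 / S, S the image energy;
    # a more concentrated (better-focused) image yields a smaller entropy
    p = np.abs(np.asarray(I)) ** 2
    p = p / p.sum()
    p = p[p > 0]                      # skip zero-energy pixels
    return float(-(p * np.log(p)).sum())
```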

Here, the image is reconstructed range cell by range cell. Figures 9(a), 9(c), and 9(e) give the reconstruction results of the algorithms in the noiseless case. In the noisy case, the data are corrupted by additive Gaussian noise, and Figures 9(b), 9(d), and 9(f) demonstrate the corresponding results. The entropy values of the algorithms are shown in Table 1. From these reconstruction results, it can be seen that the images obtained by the Block EM-VB and its fast version have better quality in the noisy case, which implies that the Block EM-VB and its fast version have strong noise immunity. Table 2 gives the corresponding runtimes of the algorithms, demonstrating that the Fast Block EM-VB has the highest computational efficiency and is suitable for real-time processing.

5. Conclusions

Considering the clustered structure of the nonzero elements of block sparse signals, this paper proposes the Block EM-VB algorithm for signal recovery, which is based on a correlated LSM prior model. Furthermore, a fast version of the Block EM-VB is presented, which recovers block sparse signals with lower computational complexity because no matrix inversion is required in each iteration. Experimental results with simulated data and ISAR imaging demonstrate that the Block EM-VB and its fast version have good BSS reconstruction performance and noise tolerance, especially in low-SNR scenarios, which implies that the proposed algorithms can be applied in various signal processing fields.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

There is no conflict of interest regarding the publication of this paper.

Acknowledgments

The work is supported by the National Natural Science Foundation of China under Grants 61771046, 61931015, and 61731023 and the Beijing Natural Science Foundation (L191004). The authors are grateful for their support of this research.