Abstract

We study the reconstruction of time-varying sparse signals in a wireless sensor network under severe bandwidth and energy constraints. A novel particle filter algorithm is proposed to deal with coarsely quantized innovations. To recover the sparsity pattern of the particle filter estimate, we impose the sparsity constraint on the filtered estimate by means of two methods. Simulation results demonstrate that the proposed algorithms provide performance comparable to that of full-information (i.e., unquantized) filtering schemes even in the case where only 1 bit is transmitted to the fusion center.

1. Introduction

In recent years, wireless sensor networks (WSNs) have been widely applied in many areas. A WSN is composed of a large number of battery-powered sensors connected via wireless communication. Reconstruction of time-varying signals is a key technology for WSNs and plays an important role in many of their applications (see, e.g., [1–3] and the references therein). The lifetime of a WSN depends on the lifespan of its sensors, which are battery-powered. To prolong the lifespan of sensors, they are often allowed to transmit only partial (e.g., quantized/encoded) information to a fusion center. Therefore, quantization of sensor measurements has been widely taken into account in practical applications [4–6]. Moreover, it may be infeasible to quantize and transmit sensor measurements directly: for unstable systems the states become unbounded, so a large number of quantization bits, and hence higher bandwidth and a higher-rate quantizer, would be required at the sensors. However, as demonstrated in [1–3], filtering schemes relying on quantized innovations can provide performance comparable to that of full-information (i.e., unquantized/uncoded) filtering schemes.

On the other hand, owing to the sparseness of signals exhibited in many applications, recently developed compressed sensing techniques have been extensively applied in WSNs [7–9]. They enable reconstruction of a sparse signal from far fewer measurements. Therefore, the communication demands between sensors and the fusion center can be lessened by exploiting sparseness of signals in WSNs, so as to save both bandwidth and energy [9, 10]. Reconstruction of time-varying sparse signals in WSNs has recently been studied in [11] by using the group lasso and fused lasso techniques; this is a batch algorithm which relies on quadratic programming to recover the unknown signal. A computationally efficient recursive lasso algorithm (R-lasso) was introduced in [12] for estimating the sparse signal recursively at each point in time. In [13], the SPARLS algorithm relies on the expectation-maximization technique to estimate the tap-weight vector from a stream of noisy observations. Recently, many researchers have attempted to solve the problem in the classic framework of signal estimation, such as Kalman filtering (KF) and its variants [14–16]. The KF-based approaches can be divided into two classes: hybrid and self-reliant. In the former, peripheral optimization schemes are employed to estimate the support set of a sparse signal, and then a reduced-order Kalman filter is used to reconstruct the signal [14]. In the latter, the sparsity constraint is enforced via the so-called pseudo-measurement (PM) [15]. In [15], two stages of Kalman filtering are employed: one for tracking the temporal changes and the other for enforcing the sparsity constraint at each stage. In [16], an unscented Kalman filter was proposed for the pseudo-measurement update stage. To the best of the authors' knowledge, there is limited work on recursive compressed sensing techniques that consider quantization as a means of further reducing the required bandwidth and power resources. However, increased attention has recently been paid to developing algorithms for reconstructing sparse signals from quantized observations [17–21]. In [17], a convex relaxation approach for reconstructing sparse signals from quantized observations was proposed by using an $\ell_1$-norm regularization term and two convex cost functions. In [18], Qiu and Dogandzic proposed an unrelaxed probabilistic model with an $\ell_0$-norm constrained signal space and derived an expectation-maximization algorithm. In addition, [19–21] have investigated the reconstruction of sparse sources from 1-bit quantized measurements in the extreme case.

In this paper, we study the reconstruction of time-varying sparse signals in WSNs using quantized measurements. Our contributions are as follows:
(1) We propose an improved particle filter algorithm which extends the fundamental results of [2] to a multiple-observation case by employing the information filter form. The algorithm in [2] is derived under the assumption that the fusion center has access to only one measurement source at each time step. The extension to a multiple-measurement scenario is straightforward but, in general, may lead to a computationally involved scheme. In contrast, the proposed algorithm can be implemented in a computationally efficient sequential processing form and avoids any matrix inversion. Meanwhile, the proposed algorithm has an advantage in numerical stability under inaccurate initialization.
(2) We propose a new method to impose the sparsity constraint on the estimate produced by the particle filter algorithm. Compared to the iterative method in [15], the resulting method is noniterative and easy to implement. In particular, the system has an underlying state-space structure, where the state vector is sparse. In each time interval, the fusion center transmits the predicted signal estimate and its corresponding error covariance to a selected subset of sensors. The selected sensors compute quantized innovations and transmit them to the fusion center. The fusion center reconstructs the sparse state by employing the proposed particle filter algorithm and the sparse cubature point filter method.

This paper is organized as follows. Section 2 gives a brief overview of basic problems in compressed sensing and introduces the sparse signal recovery method using Kalman filtering with an embedded pseudo-measurement. In Section 3, we describe the system model. Section 4 develops a particle filter with quantized innovations. To recover the sparsity pattern of the particle filter state estimate, Section 5 develops a sparse cubature point filter method with lower complexity than the iterative PM update method. The complete adaptive recursive reconstruction algorithm for sparse signals with quantized innovations and an analysis of its complexity are presented in Section 6. Section 7 contains simulation results, and conclusions are drawn in Section 8.

Notation. $\mathcal{N}(\mu, \Sigma)$ denotes an $n$-dimensional Gaussian random variable with mean $\mu$ and covariance $\Sigma$; the $n$-dimensional Gaussian probability density with mean $\mu$ and covariance $\Sigma$ is denoted by $\mathcal{N}(x; \mu, \Sigma)$. $\mathcal{N}(\mu, \Sigma)$ also denotes the Gaussian probability distribution with mean $\mu$ and covariance $\Sigma$, and $\mathcal{N}_{B}(\mu, \Sigma)$ denotes this distribution truncated to the set $B$, where $B$ belongs to the Borel $\sigma$-field. $y_{0:t}$ denotes the collection of random variables $\{y_0, \ldots, y_t\}$. Boldfaced uppercase and lowercase symbols represent matrices and vectors, respectively. For a vector $x$, $x(i)$ denotes its $i$th component and $P_x$ denotes the error covariance $\mathbb{E}[(x - \hat{x})(x - \hat{x})^{T}]$. For a matrix $M$, $M(i, j)$ denotes the $(i, j)$th entry of $M$.

2. Sparse Signal Reconstruction Using Kalman Filter

2.1. Sparse Signal Recovery

Compressive sensing is a framework for signal sensing and compression that enables representation of a sensed signal with fewer samples than those required by classical sampling theory. Consider a sparse random discrete-time process $x_t$ in $\mathbb{R}^{n}$, where $x_t$ is called $s$-sparse if $\|x_t\|_0 \le s$. Assume $x_t$ evolves according to the following dynamical equations:
$$x_{t+1} = A x_t + w_t, \qquad (1)$$
$$y_t = H x_t + v_t, \qquad (2)$$
where $A$ is the state transition matrix and $H$ is the measurement matrix. Moreover, $w_t$ and $v_t$ denote zero-mean white Gaussian sequences with covariances $Q$ and $R$, respectively. $y_t$ is an $m$-dimensional linear measurement of $x_t$. When $m < n$, reconstruction of $x_t$ from this underdetermined system is an ill-posed problem.
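To make the model concrete, the following minimal Python sketch simulates a sparse state evolving under (1) and observed through (2). The dimensions, noise levels, and the random-walk choice of the transition matrix are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 50, 15, 5                              # state dim, measurement dim, sparsity (assumed)

A = np.eye(n)                                    # state transition matrix (random-walk choice)
H = rng.standard_normal((m, n)) / np.sqrt(m)     # measurement matrix
Q = 1e-3 * np.eye(n)                             # process noise covariance
R = 1e-2 * np.eye(m)                             # measurement noise covariance

support = rng.choice(n, size=s, replace=False)   # unknown support of the sparse state
x = np.zeros(n)
x[support] = rng.standard_normal(s)

def step(x):
    """One step of (1)-(2): propagate the sparse state and observe it."""
    w = rng.multivariate_normal(np.zeros(n), Q)
    mask = np.zeros(n, dtype=bool)
    mask[support] = True
    w[~mask] = 0.0                               # process noise drives only the support
    x_next = A @ x + w
    v = rng.multivariate_normal(np.zeros(m), R)
    y = H @ x_next + v
    return x_next, y

x, y = step(x)
```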

However, [22, 23] have shown that $x_t$ can be accurately reconstructed by solving the following optimization problem:
$$\min_{x} \|x\|_0 \quad \text{subject to} \quad y_t = H x. \qquad (3)$$

Unfortunately, the above optimization problem is NP-hard and cannot be solved efficiently. Fortunately, as shown in [23], if the measurement matrix $H$ obeys the so-called restricted isometry property (RIP), the solution of (3) can be obtained with overwhelming probability by solving the following convex optimization:
$$\min_{x} \|x\|_1 \quad \text{subject to} \quad y_t = H x. \qquad (4)$$

This is a fundamental result in compressed sensing (CS). Moreover, for reconstructing an $s$-sparse signal $x$, on the order of $C\, s \log(n/s)$ linear measurements are needed, where $C$ is a fixed constant.
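As an illustration of problem (4), the short sketch below solves its Lagrangian ($\ell_1$-regularized least-squares) form with the iterative soft-thresholding algorithm (ISTA). This is only one standard solver, not the method of [22, 23]; the regularization weight and iteration count are assumptions.

```python
import numpy as np

def ista(y, H, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||y - Hx||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of the quadratic term's gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)              # gradient of the data-fit term
        z = x - grad / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold (prox of l1)
    return x
```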

2.2. Pseudo-Measurement Embedded Kalman Filtering

For the system given in (1) and (2), the estimate of $x_t$ provided by Kalman filtering is equivalent to the solution of the following unconstrained minimization problem:
$$\hat{x}_{t|t} = \arg\min_{x}\, \mathbb{E}\big[\|x_t - x\|_2^2 \mid y_{0:t}\big], \qquad (5)$$
where $\hat{x}_{t|t}$ is the conditional expectation of $x_t$ given the measurements $y_{0:t}$.

As shown in [15], the stochastic analogue of (4) is
$$\min_{\hat{x}_{t|t}} \|\hat{x}_{t|t}\|_1 \quad \text{subject to} \quad \mathbb{E}\big[\|x_t - \hat{x}_{t|t}\|_2^2 \mid y_{0:t}\big] \le \epsilon', \qquad (6)$$
and its dual problem is
$$\min_{\hat{x}_{t|t}} \mathbb{E}\big[\|x_t - \hat{x}_{t|t}\|_2^2 \mid y_{0:t}\big] \quad \text{subject to} \quad \|\hat{x}_{t|t}\|_1 \le \epsilon. \qquad (7)$$
In [15], the authors incorporate the inequality constraint into the filtering process using a fictitious pseudo-measurement equation
$$0 = \tilde{H}_t x_t - \epsilon_t, \qquad (8)$$
where $\tilde{H}_t x_t = \|x_t\|_1$ and $\epsilon_t \sim \mathcal{N}(0, R_\epsilon)$ serves as the fictitious measurement noise. The constrained optimization problem (7) can then be solved in the framework of Kalman filtering, and the specific method has been summarized as the CS-embedded KF (CSKF) algorithm with $\ell_1$-norm constraint in [15]. It is apparent from (8) that the measurement matrix $\tilde{H}_t$ is state-dependent and can be approximated by $\tilde{H}_t = \operatorname{sign}(\hat{x}_{t|t}^{T})$, where $\operatorname{sign}(\cdot)$ is the sign function. The pseudo-measurement equation was interpreted in a Bayesian filtering framework, and a semi-Gaussian prior distribution was discussed in [15]. Furthermore, $R_\epsilon$ is a tuning parameter which regulates the tightness of the $\ell_1$-norm constraint on the state estimate $\hat{x}_{t|t}$.
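The following hedged sketch shows one possible implementation of a single pseudo-measurement update of the form (8) inside a linear-Gaussian filter, following the description above. The noise variance R_eps, the constraint level eps, and the function name pm_update are illustrative choices rather than the exact CSKF routine of [15].

```python
import numpy as np

def pm_update(x_hat, P, eps=0.0, R_eps=0.5):
    """One pseudo-measurement update enforcing the l1 constraint (8):
    treats 0 = sign(x)^T x - eps + noise as a scalar measurement."""
    H_pm = np.sign(x_hat).reshape(1, -1)          # state-dependent PM "matrix"
    S = H_pm @ P @ H_pm.T + R_eps                 # scalar innovation variance
    K = (P @ H_pm.T) / S                          # Kalman gain (n x 1)
    innov = eps - H_pm @ x_hat                    # fictitious measurement residual
    x_new = x_hat + (K * innov).ravel()
    P_new = P - K @ H_pm @ P
    return x_new, P_new
```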

3. System Model and Problem Statement

Consider a WSN configured in a star topology (see Figure 1 for an example topology). In the star topology, communication is established between the sensors and a single central controller, called the fusion center (FC). The FC is mains powered, while the sensors are battery-powered, and battery replacement or recharging at relatively short intervals is impractical. Data is exchanged only between the FC and a sensor. In our application, sensors observe linear combinations of sparse time-varying signals and send the observations to the fusion center for signal reconstruction. Here, our attention is focused on Gaussian state-space models; that is, for sensor $i$, the signal $x_t$ and the observation $y_{i,t}$ satisfy the following discrete-time linear system:
$$x_{t+1} = A x_t + w_t, \qquad y_{i,t} = h_i x_t + v_{i,t}, \qquad (9)$$
where $h_i$ is the local observation matrix and $x_t$ denotes the time-varying state vector, which is sparse in some transform domain; that is, $x_t = \Psi z_t$, where the majority of the components of $z_t$ are zero and $\Psi$ is an appropriate basis. Without loss of generality, we assume that $x_t$ itself is sparse and has at most $s$ nonzero components whose locations are unknown ($s \ll n$). The fusion center gathers the observations of all sensors in the $m$-dimensional global real-valued vector $y_t$ and holds the global observation matrix $H = [h_1^{T}, \ldots, h_m^{T}]^{T}$, which satisfies the restricted isometry property (RIP) imposed in compressed sensing. The global observation model can then be described by (2). All of the sensors are unaware of the sparsity. Moreover, $w_t$ and $v_{i,t}$ are uncorrelated Gaussian white noises with zero mean and covariances $Q$ and $R_i$, respectively.

The goal of the WSN is to form an estimate of the sparse signal $x_t$ at the fusion center. Due to the energy and bandwidth constraints in WSNs, the observed analog measurements need to be quantized/coded before being sent to the fusion center. Moreover, a quantized innovation scheme can also be used. At time $t$, the $i$th sensor observes a measurement $y_{i,t}$ and computes the innovation $\tilde{y}_{i,t} = y_{i,t} - h_i \hat{x}_{t|t-1}$, where the predicted observation $h_i \hat{x}_{t|t-1}$, together with the variance of the innovation, is received from the fusion center. Then, the innovation is quantized to $q_{i,t}$ and sent to the fusion center. As the fusion center has enough energy and enough transmission bandwidth, the data transmitted by the fusion center do not need to be quantized. The decision of which sensor is active at time $t$, and consequently which observation innovation gets transmitted, depends on the underlying scheduling algorithm. The quantized transmission also implies that $q_{i,t}$ can be viewed as a nonlinear function of the sensor's analog observation. The aforementioned procedure is illustrated in Figure 2.
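A minimal sketch of the sensor-side computation described above: the sensor forms the innovation against the prediction received from the fusion center and returns its 1-bit quantization. The function name and the tie-breaking convention for a zero innovation are assumptions.

```python
def sensor_quantize(y_i, y_pred_i):
    """Sensor side: compute the innovation against the prediction received from
    the fusion center and return its 1-bit quantization (sign of innovation)."""
    innovation = y_i - y_pred_i
    return 1 if innovation >= 0 else -1
```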

4. A Particle Filter Algorithm with Coarsely Quantized Observations

Most of the earlier works on estimation using quantized measurements concentrated on numerical integration methods to approximate the optimal state estimate and assumed that the conditional density is approximately Gaussian. However, this assumption does not hold for coarsely quantized measurements, as demonstrated in the following.

Firstly, suppose $x$ and $y$ are jointly Gaussian with means $\mu_x$, $\mu_y$ and covariances $\Sigma_{xx}$, $\Sigma_{yy}$, $\Sigma_{xy}$; then it is well known that the probability density of $x$ conditioned on $y$ is Gaussian with the following parameters:
$$p(x \mid y) = \mathcal{N}\big(x;\; \mu_x + K (y - \mu_y),\; \Sigma_{xx} - K \Sigma_{yy} K^{T}\big), \qquad (10)$$
where $K = \Sigma_{xy} \Sigma_{yy}^{-1}$. When $x_t$ and $y_t$ follow the linear dynamical equations defined in (9), it is well known that the covariance can be propagated by the Riccati recursion of the KF. Let $q_{0:t}$ denote the quantized measurements obtained by quantizing $y_{0:t}$; that is, $q_{0:t}$ is a measurable function of $y_{0:t}$. It will be shown that the probability density of $x$ conditioned on the quantized measurements has a characterization similar to (10). The result is stated in the following lemma.

Lemma 1 (akin to Lemma 3.1 in [2]). The state conditioned on the quantized measurements can be given by a sum of two independent random variables as follows:
$$x \mid q_{0:t} \;\overset{d}{=}\; \mu_x + K (\tilde{y} - \mu_y) + e, \qquad e \sim \mathcal{N}\big(0,\; \Sigma_{xx} - K \Sigma_{yy} K^{T}\big), \qquad (11)$$
where $\tilde{y}$ is distributed as $y$ truncated to the region defined by the quantized measurements and is independent of $e$.

Proof. See Appendix.

It should be noted that the difference between (10) and (11) is the replacement of $y$ by the random variable $\tilde{y}$. Apparently, $\tilde{y}$ is a multivariate Gaussian random variable truncated to lie in the region defined by $q_{0:t}$. So, the covariance of $x$ conditioned on $q_{0:t}$ can be expressed as
$$\operatorname{cov}(x \mid q_{0:t}) = \Sigma_{xx} - K \Sigma_{yy} K^{T} + K \operatorname{cov}(\tilde{y})\, K^{T}.$$

In a high-rate quantization regime, it is apparent that $\tilde{y}$ converges to $y$ and the conditional density is approximately Gaussian. From Lemma 1, however, we note that under coarse quantization the conditional density of the state is not Gaussian. For nonlinear and non-Gaussian signal reconstruction problems, a promising approach is particle filtering [24]. Particle filtering is based on sequential Monte Carlo methods and optimal recursive Bayesian filtering. It uses a set of particles with associated weights to approximate the posterior distribution. As a baseline, the general form of the standard (bootstrap) particle filter is outlined below.

Algorithm 0 (standard particle filtering (SPF)).
(1) Initialization. Initialize the particles $x_{0|-1}^{(i)} \sim p(x_0)$, $i = 1, \ldots, N$.
(2) At time $t$, using the measurement $y_t$, the importance weights are calculated as $w_t^{(i)} = p\big(y_t \mid x_{t|t-1}^{(i)}\big)$.
(3) Measurement update is given by
$$p(x_t \mid y_{0:t}) \approx \sum_{i=1}^{N} \tilde{w}_t^{(i)}\, \delta\big(x_t - x_{t|t-1}^{(i)}\big),$$
where $\tilde{w}_t^{(i)}$ are the normalized weights; that is, $\tilde{w}_t^{(i)} = w_t^{(i)} / \sum_{j=1}^{N} w_t^{(j)}$.
(4) Resample particles with replacement as follows. Generate i.i.d. random variables such that $\Pr\big(x_{t|t}^{(i)} = x_{t|t-1}^{(j)}\big) = \tilde{w}_t^{(j)}$.
(5) For $i = 1, \ldots, N$, predict new particles according to $x_{t+1|t}^{(i)} \sim p\big(x_{t+1} \mid x_{t|t}^{(i)}\big)$.
(6) Set $t \leftarrow t + 1$ and iterate from Step (2).
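For reference, a compact Python sketch of one cycle of Algorithm 0 for the linear-Gaussian model (1)-(2) is given below. The vectorized weighting, the multinomial resampling call, and the log-likelihood normalization are implementation choices, not part of the original algorithm statement.

```python
import numpy as np

rng = np.random.default_rng(1)

def spf_step(particles, weights, y, A, H, Q, R):
    """One cycle of Algorithm 0: weight by the likelihood p(y | x), resample, propagate."""
    N, n = particles.shape
    # Steps (2)-(3): importance weights from the Gaussian measurement likelihood
    resid = y[None, :] - particles @ H.T                       # (N, m) residuals
    Rinv = np.linalg.inv(R)
    loglik = -0.5 * np.einsum('ij,jk,ik->i', resid, Rinv, resid)
    w = weights * np.exp(loglik - loglik.max())                # stabilized weights
    w /= w.sum()
    # Step (4): resample with replacement
    idx = rng.choice(N, size=N, p=w)
    particles = particles[idx]
    # Step (5): time update / prediction
    noise = rng.multivariate_normal(np.zeros(n), Q, size=N)
    particles = particles @ A.T + noise
    return particles, np.full(N, 1.0 / N)
```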

Assume that the channel between the sensor and the fusion center is severely rate-limited and that the sign-of-innovation scheme is employed (i.e., $q_{i,t} = \operatorname{sign}(y_{i,t} - h_i \hat{x}_{t|t-1})$). Obviously, the importance weights are given by $w_t^{(i)} = \Pr\big(q_t \mid x_{t|t-1}^{(i)}\big)$. We note that $x_t$ conditioned on $y_{0:t}$ is Gaussian. Therefore, it will be sufficient to propagate particles that are distributed as $p(y_{0:t} \mid q_{0:t})$, where
$$p(x_t \mid q_{0:t}) = \int p(x_t \mid y_{0:t})\, p(y_{0:t} \mid q_{0:t})\, dy_{0:t}.$$

In addition, note that the quantizer output $q_t$ at time $t$ is calculated by quantizing a scalar-valued function of $y_t$, namely the innovation. So, on receipt of $q_t$ and by using the previously received quantized values $q_{0:t-1}$, some Borel measurable set containing $y_{0:t}$, denoted $B_t$, can be inferred at the fusion center.

In order to develop a particle filter to propagate $p(y_{0:t} \mid q_{0:t})$, we need the measurement update of the probability density, from $p(y_{0:t} \mid q_{0:t-1})$ to $p(y_{0:t} \mid q_{0:t})$, and the time update of the probability density, from $p(y_{0:t} \mid q_{0:t})$ to $p(y_{0:t+1} \mid q_{0:t})$, which are described by Lemmas 2 and 3, respectively.

Lemma 2. The likelihood ratio between the conditional laws of $y_{0:t} \mid q_{0:t}$ and $y_{0:t} \mid q_{0:t-1}$ is given by
$$\frac{p(y_{0:t} \mid q_{0:t})}{p(y_{0:t} \mid q_{0:t-1})} \propto \mathbf{1}_{B_t}(y_{0:t}),$$
where $\mathbf{1}_{B_t}(\cdot)$ is the indicator of the set $B_t$ inferred from $q_{0:t}$.
So, if $\{y_{0:t}^{(i)}\}_{i=1}^{N}$ is a set of particles distributed according to the law $p(y_{0:t} \mid q_{0:t-1})$, then, from Lemma 2, a new set of particles can be generated. For each particle $y_{0:t}^{(i)}$, associate a weight proportional to the above likelihood ratio, generate i.i.d. random variables that select particle $j$ with probability equal to its normalized weight, and set the resampled particles accordingly. This is the standard resampling technique from Steps (3) and (4) of Algorithm 0 [25]. It should be noted that this is equivalent to a measurement update, since we update the conditional law upon receiving the new measurement $q_t$.

Lemma 3. The random variable $y_{0:t}$ conditioned on $q_{0:t}$ is a truncated Gaussian, and the density of $y_{t+1}$ conditioned on $y_{0:t}$ can be expressed as $p(y_{t+1} \mid y_{0:t}) = \mathcal{N}\big(y_{t+1};\, H \hat{x}_{t+1|t}(y_{0:t}),\, H P_{t+1|t} H^{T} + R\big)$.
This result should be rather obvious. Here, one can observe that $\hat{x}_{t+1|t}(y_{0:t})$ is the MMSE prediction of the state given $y_{0:t}$. Since $x_t$ and $y_t$ have the state-space structure (9), a Kalman filter can be employed to propagate it recursively as follows:
$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + P_{t|t-1} H^{T}\big(H P_{t|t-1} H^{T} + R\big)^{-1}\big(y_t - H \hat{x}_{t|t-1}\big), \qquad P_{t|t} = P_{t|t-1} - P_{t|t-1} H^{T}\big(H P_{t|t-1} H^{T} + R\big)^{-1} H P_{t|t-1}. \qquad (19)$$
However, the information filter (IF), which utilizes the information state and the inverse of the covariance rather than the state and covariance, is an algebraically equivalent form of the Kalman filter. Compared with the KF, the information filter is computationally simpler and can be easily initialized with inaccurate a priori knowledge [26]. Moreover, another great advantage of the information filter is its ability to deal with multisensor data fusion [27]. The information from different sensors can be easily fused by simply adding the information contributions to the information matrix and information state.
Hence, we substitute the information form for (19) as follows:
$$Y_{t|t} = Y_{t|t-1} + I_t, \qquad \hat{z}_{t|t} = \hat{z}_{t|t-1} + i_t, \qquad (20)$$
where $Y_{t|t} = P_{t|t}^{-1}$ and $\hat{z}_{t|t} = P_{t|t}^{-1} \hat{x}_{t|t}$ are the information matrix and information state, respectively. In addition, the covariance matrix and state can be recovered by using MATLAB's left-divide operator; that is, $P_{t|t} = Y_{t|t} \backslash I$ and $\hat{x}_{t|t} = Y_{t|t} \backslash \hat{z}_{t|t}$, where $I$ denotes an identity matrix. The information state contribution and its associated information matrix are
$$i_t = H^{T} R^{-1} y_t, \qquad I_t = H^{T} R^{-1} H.$$
Together with (20), Lemma 3 completely describes the transition from $p(y_{0:t} \mid q_{0:t})$ to $p(y_{0:t+1} \mid q_{0:t})$. Following Step (5) of Algorithm 0, suppose $\{y_{0:t}^{(i)}\}_{i=1}^{N}$ is a set of particles distributed as $p(y_{0:t} \mid q_{0:t})$; then a new set of particles $\{y_{0:t+1}^{(i)}\}_{i=1}^{N}$, which are distributed as $p(y_{0:t+1} \mid q_{0:t})$, can be obtained as follows. For each $i$, generate $y_{t+1}^{(i)}$ according to the law
$$y_{t+1}^{(i)} \sim \mathcal{N}\big(H \hat{x}_{t+1|t}(y_{0:t}^{(i)}),\; H P_{t+1|t} H^{T} + R\big).$$
Also, set $y_{0:t+1}^{(i)} = \big(y_{0:t}^{(i)}, y_{t+1}^{(i)}\big)$.
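The information-form bookkeeping of (20) for a single particle-driven Kalman filter can be sketched as follows. Treating each sensor's noise variance R_i as a scalar and recovering the state with a linear solve (the analogue of MATLAB's left-divide) are assumptions made for brevity.

```python
import numpy as np

def if_measurement_update(Y, z, h_i, R_i, y_part):
    """Information-form update for one particle-driven Kalman filter: add the
    information contributions of sensor i's observation particle.
    Y: information matrix (inverse covariance), z: information state."""
    h_i = h_i.reshape(1, -1)                      # row vector
    Y_new = Y + (h_i.T @ h_i) / R_i               # information matrix contribution
    z_new = z + h_i.ravel() * (y_part / R_i)      # information state contribution
    return Y_new, z_new

def recover_state(Y, z):
    """Recover the state estimate and covariance from the information pair."""
    P = np.linalg.solve(Y, np.eye(Y.shape[0]))
    x_hat = np.linalg.solve(Y, z)
    return x_hat, P
```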

From the above, the particle filter using coarsely quantized innovations (QPF) has been derived for an individual sensor. The extension to the multisensor scenario will be described in Section 6.

5. Sparse Signal Recovery

To ensure that the proposed quantized particle filtering scheme recovers the sparsity pattern of the signal, a sparsity constraint should be imposed on the fused estimate $\hat{x}_{t|t}$. Here, the sparsity constraint can be enforced either by reiterating the pseudo-measurement update [15] or via the proposed sparse cubature point filter method.

5.1. Iterative Pseudo-Measurement Update Method

As stated in Section 2, the sparsity constraint can be imposed at each time point by bounding the $\ell_1$-norm of the estimate of the state vector, $\|\hat{x}_{t|t}\|_1 \le \epsilon$. This constraint is readily expressed as a fictitious measurement $0 = \|x_t\|_1 - \epsilon + \epsilon_t$, where $\epsilon_t$ can be interpreted as a measurement noise [15, 28]. Now we construct an auxiliary state-space model of the form
$$x^{\tau+1} = x^{\tau}, \qquad (23)$$
$$0 = \operatorname{sign}\big(x^{\tau}\big)^{T} x^{\tau} - \epsilon + \epsilon_{\tau}, \qquad \tau = 1, \ldots, N_{\tau}, \qquad (24)$$
where $x^{1} = \hat{x}_{t|t}$ and $\hat{x}_{t|t}(i)$, $i = 1, \ldots, n$, denotes the $i$th component of the least-mean-square estimate of $x_t$ (obtained via the Kalman filter). Finally, we reassign the filtered estimate and covariance to the outputs of the last iteration, where the time horizon $N_{\tau}$ of the auxiliary state-space model (23) is chosen such that the change of the estimate between consecutive iterations is below some predetermined threshold. This iterative procedure is sketched below.
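A minimal Python version of the iterative procedure, reusing the pm_update routine sketched in Section 2.2; the stopping threshold, iteration cap, and noise variance are illustrative assumptions.

```python
import numpy as np

def iterative_pm(x_hat, P, eps=0.0, R_eps=0.5, tol=1e-3, max_iter=50):
    """Iterative PM refinement: repeat the pseudo-measurement update of (23)-(24)
    until the estimate changes by less than a threshold (or a fixed horizon)."""
    for _ in range(max_iter):
        x_prev = x_hat.copy()
        x_hat, P = pm_update(x_hat, P, eps, R_eps)   # PM update sketched in Section 2.2
        if np.linalg.norm(x_hat - x_prev) < tol:
            break
    return x_hat, P
```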

5.2. Sparse Cubature Point Filter Method

We propose a novel refinement method based on the cubature Kalman filter (CKF). Compared to the iterative method described above, the resulting method is noniterative and easy to implement.

It is well known that the unscented Kalman filter (UKF) is broadly used to handle generalized nonlinear process and measurement models. It relies on the so-called unscented transformation to compute the posterior statistics of a random variable $y$ that is related to $x$ by a nonlinear transformation $y = g(x)$. It approximates the mean and the covariance of $y$ by a weighted sum of projected sigma points. However, the suggested tuning parameter for the UKF is $\kappa = 3 - n$. For a higher-order system, the number of states is far more than three, so the tuning parameter becomes negative and may halt the operation. Recently, a more accurate filter, named the cubature Kalman filter, has been proposed for nonlinear state estimation; it is based on the third-degree spherical-radial cubature rule [29]. According to the cubature rule, the sample points are chosen as follows:
$$\chi_i = \hat{x} + \sqrt{n}\, S e_i, \qquad \chi_{i+n} = \hat{x} - \sqrt{n}\, S e_i, \qquad i = 1, \ldots, n, \qquad (25)$$
where $e_i$ is the $i$th unit vector and $S$ denotes the square-root factor of the covariance $P$; that is, $P = S S^{T}$. Now, the mean and covariance of $y$ can be computed by
$$\hat{y} = \sum_{i=1}^{2n} W_i\, g(\chi_i), \qquad (26)$$
$$P_{y} = \sum_{i=1}^{2n} W_i\, \big(g(\chi_i) - \hat{y}\big)\big(g(\chi_i) - \hat{y}\big)^{T}, \qquad (27)$$
where $W_i = 1/(2n)$, $i = 1, \ldots, 2n$.
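The cubature-point construction and the moment computation of (25)-(27) can be sketched as follows; the helper names are illustrative.

```python
import numpy as np

def cubature_points(x_hat, P):
    """Generate the 2n cubature points of the third-degree spherical-radial rule:
    x_hat +/- sqrt(n) * (columns of the square-root factor of P)."""
    n = x_hat.size
    S = np.linalg.cholesky(P)                     # square-root factor, P = S S^T
    offsets = np.sqrt(n) * np.hstack([S, -S])     # (n, 2n)
    return x_hat[:, None] + offsets               # each column is a cubature point

def cubature_mean_cov(points):
    """Sample mean and covariance of propagated cubature points (equal weights 1/(2n))."""
    mean = points.mean(axis=1)
    d = points - mean[:, None]
    cov = d @ d.T / points.shape[1]
    return mean, cov
```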

From the iterative PM method, it should be noted that (24) can be seen as a nonlinear evolution process during which the state gradually becomes sparse. Motivated by this, we employ the CKF to implement the nonlinear refinement process. Let $P_{t|t}$ and $\chi_{i,t}$ be the updated covariance and the $i$th cubature point at time $t$, respectively (i.e., after the measurement update). A set of sparse cubature points at time $t$ is thus given by (29) for $i = 1, \ldots, 2n$, where $\delta$ is a tunable constant vector. Once the set is obtained, its sample mean and covariance can be computed by (26) and (27) directly. For readability, we defer the proof of (29) to the Appendix.
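Since the explicit form of (29) is not reproduced above, the following sketch only illustrates the overall idea: a componentwise soft-threshold is used as one plausible sparsifying map applied to each cubature point. This is an assumption for illustration, not the paper's exact refinement formula; $\delta$ plays the role of the tunable vector mentioned in the text, and the CKF helpers sketched just above are reused.

```python
import numpy as np

def sparse_cubature_refine(x_hat, P, delta):
    """Sketch of the sparse cubature point refinement: push each cubature point
    toward sparsity (here via componentwise soft-thresholding, an assumed stand-in
    for (29)), then recompute the mean and covariance with the cubature rule (27)."""
    pts = cubature_points(x_hat, P)                            # helpers from the CKF sketch
    sparse_pts = np.sign(pts) * np.maximum(np.abs(pts) - delta[:, None], 0.0)
    return cubature_mean_cov(sparse_pts)
```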

6. The Algorithm

We now summarize the complete algorithm as follows (illustrated in Figure 3):
(1) Initialization: at $t = 0$, generate the initial particles; then compute the corresponding initial information matrices and information states.
(2) The fusion center transmits the $(i,i)$ entry of the innovation error covariance matrix (see (30)) and the predicted observation (see (31)) to the $i$th sensor:
$$S_t = H P_{t|t-1} H^{T} + R, \qquad (30)$$
$$\hat{y}_{i,t|t-1} = h_i \hat{x}_{t|t-1}. \qquad (31)$$
(3) The $i$th sensor computes the quantized innovation (see (32)) and transmits it to the fusion center:
$$q_{i,t} = \operatorname{sign}\big(y_{i,t} - \hat{y}_{i,t|t-1}\big). \qquad (32)$$
(4) On receipt of the quantized innovation (32), the Borel measurable set containing the innovation can be inferred. Then, a set of observation particles (see (33)) and corresponding weights (see (34)) are generated by the fusion center.
(5) Run measurement updates in the information form (see (35)) using the observation particles generated in Step (4):
$$Y_{t|t}^{(j)} = Y_{t|t-1}^{(j)} + h_i^{T} R_i^{-1} h_i, \qquad \hat{z}_{t|t}^{(j)} = \hat{z}_{t|t-1}^{(j)} + h_i^{T} R_i^{-1} y_{i,t}^{(j)}. \qquad (35)$$
(6) Resample the particles by using the normalized weights.
(7) Compute the fused filtered estimate $\hat{x}_{t|t}$:
$$\hat{x}_{t|t} = \sum_{j=1}^{N} \tilde{w}_t^{(j)}\, \hat{x}_{t|t}^{(j)},$$
where $\hat{x}_{t|t}^{(j)} = Y_{t|t}^{(j)} \backslash \hat{z}_{t|t}^{(j)}$.
(8) Impose the sparsity constraint on the fused estimate by either (a) or (b):
(a) the iterative PM update method;
(b) the sparse cubature point filter method.
(9) Determine the time updates $\hat{x}_{t+1|t}^{(j)}$, $P_{t+1|t}^{(j)}$, $Y_{t+1|t}^{(j)}$, $\hat{z}_{t+1|t}^{(j)}$, and $\hat{y}_{t+1|t}^{(j)}$ for the next time interval:
$$\hat{x}_{t+1|t}^{(j)} = A \hat{x}_{t|t}^{(j)}, \qquad P_{t+1|t}^{(j)} = A P_{t|t}^{(j)} A^{T} + Q, \qquad Y_{t+1|t}^{(j)} = \big(P_{t+1|t}^{(j)}\big)^{-1}, \qquad \hat{z}_{t+1|t}^{(j)} = Y_{t+1|t}^{(j)} \hat{x}_{t+1|t}^{(j)}, \qquad \hat{y}_{t+1|t}^{(j)} = H \hat{x}_{t+1|t}^{(j)}.$$
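A condensed sketch of Steps (4)-(7) at the fusion center is given below. It reuses the if_measurement_update and recover_state helpers from Section 4, omits the weight update and resampling of Steps (4) and (6) for brevity, and draws each observation particle from the half-line selected by the 1-bit innovation (a valid sampler for the sign-of-innovation truncation). All function and variable names are illustrative.

```python
import numpy as np

def fusion_center_step(particles_Y, particles_z, weights, H, R_diag, q, y_pred, S_diag, rng):
    """Sketch of Steps (4)-(7): per-particle truncated-Gaussian observation draws,
    information-form updates for every scheduled sensor, and weighted fusion."""
    N = len(particles_Y)
    x_fused = 0.0
    for j in range(N):
        for i, q_i in enumerate(q):
            # observation particle consistent with sign q_i (half-normal around the prediction)
            draw = abs(rng.normal(0.0, np.sqrt(S_diag[i])))
            y_part = y_pred[i] + q_i * draw
            particles_Y[j], particles_z[j] = if_measurement_update(
                particles_Y[j], particles_z[j], H[i], R_diag[i], y_part)
        x_j, _ = recover_state(particles_Y[j], particles_z[j])
        x_fused = x_fused + weights[j] * x_j
    return x_fused
```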

Remarks. The notation used here is only for describing the algorithm, and equivalent symbols can be interchanged. In addition, it should be noted that the proposed algorithm amounts to a bank of Kalman filters running in parallel, driven by the observation particles.

6.1. Computational Complexity

The complexity of sampling Step (4) in the general algorithm is . Measurement update Step (5) is of the order ; resampling Step (7) has a complexity . Step (9) has complexity either or . The complexity of time update Step (10) is .

7. Simulation Results

In this section, the performance of the proposed algorithms is demonstrated through a numerical experiment in which sparse signals are reconstructed from a series of coarsely quantized observations. Without loss of generality, we attempt to reconstruct a 10-sparse signal sequence and assume that the support set of the sequence is constant. The sparse signal consists of 10 elements that behave as a random walk process; the driving white noise covariance of the elements in the support is fixed, and both the indices and the values of the nonzero components are unknown. The measurement matrix consists of entries sampled from a zero-mean Gaussian distribution; this type of matrix has been shown to satisfy the RIP with overwhelming probability for sufficiently sparse signals. The observation noise covariance and the other simulation parameters are held fixed across all algorithms. Two scenarios are considered in the numerical experiment: the first with constant support and the second with changing support. In the first scenario, we assume severely limited bandwidth resources and transmit 1-bit quantized innovations (i.e., the sign of innovation). We compare the performance of the proposed algorithms with the scheme considered in [15], which assumes that the fusion center has the full innovation (unquantized/uncoded). For convenience, we refer to the scheme in [15] as CSKF; the proposed QPF with the iterative PM update method and with the sparse cubature point filter method are referred to as Algorithms 1 and 2, respectively.
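For reproducibility of the general setup (not the paper's exact parameter values, which are assumed here), the following sketch generates a 10-sparse random-walk signal with fixed support and Gaussian linear measurements of the type described above.

```python
import numpy as np

rng = np.random.default_rng(42)
n, m, s = 256, 72, 10                        # assumed dimensions (< 30% measurements)

support = rng.choice(n, size=s, replace=False)     # fixed, unknown support
sigma_w, sigma_v = 0.1, 0.05                       # assumed process/observation noise stds
H = rng.standard_normal((m, n)) / np.sqrt(m)       # Gaussian matrix, satisfies RIP w.h.p.

T = 100
x = np.zeros((T, n))
y = np.zeros((T, m))
x[0, support] = rng.standard_normal(s)
for t in range(1, T):
    x[t] = x[t - 1]
    x[t, support] += sigma_w * rng.standard_normal(s)   # random walk on the support
    y[t] = H @ x[t] + sigma_v * rng.standard_normal(m)

# sign-of-innovation measurements would then be formed sensor-by-sensor as in Section 3
```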

Figure 4 shows how the various algorithms track the nonzero components of the signal. The CSKF algorithm performs best since it uses full innovations. Algorithm 1 performs almost as well as the CSKF algorithm. The plain QPF clearly performs poorly, while Algorithm 2 gradually approaches the performance of Algorithm 1.

Figure 5 compares the instantaneous values of the estimates at a fixed time index. All three algorithms correctly identify the nonzero components.

Finally, the error performance of the algorithms is shown in Figure 6. The normalized RMSE, $\|\hat{x}_t - x_t\|_2 / \|x_t\|_2$, is employed to evaluate the performance. As can be seen, Algorithm 1 performs better than Algorithm 2 and very close to the CSKF in the early time steps. However, the reconstruction accuracies of all algorithms nearly coincide after an initial transient. It is noted that this performance is achieved with far fewer measurements than unknowns (<30%). In this example, the complexity of Algorithm 2 is of the same order as that of the QPF, while the complexity of Algorithm 1 is higher because of the iterative PM updates; it may therefore be preferable to employ Algorithm 2.

In the second scenario, we verify the effectiveness of the proposed algorithm for a sparse signal with a slowly changing support set. Several simulation parameters are changed relative to the first scenario; the others remain the same. We assume that there are only a few possible nonzero components and that the actual number of nonzero elements may change over time. In particular, one component is nonzero for the entire measurement interval, one becomes zero partway through, one is nonzero from some point onwards, and one is nonzero only during an intermediate interval. All the other components remain zero throughout the considered time interval. Figure 7 shows the estimator performance compared with the actual time variations of the nonzero components. As can be seen, the algorithms track the slowly changing support well.

In addition, we study the relationship between the number of quantization bits and the reconstruction accuracy. Figure 8 shows the normalized RMSE versus the number of quantization bits. The performance improves, but only slightly, as the number of quantization bits increases. However, more quantization bits bring greater communication, computation, and storage overheads, resulting in more energy consumption at the sensors. For this reason, the 1-bit quantization scheme has been employed in our algorithms and is enough to guarantee the accuracy of reconstruction.

Moreover, it is noted that the information filter is employed to propagate the particles in our algorithms. Compared with the KF, apart from its ability to handle multisensor fusion, the IF also has an advantage in numerical stability. In Figure 9, we take Algorithm 1 as an example and compare two cases: Algorithm 1 with the KF and Algorithm 1 with the IF. As can be seen, Algorithm 1-IF gives good performance, whereas Algorithm 1-KF diverges.

8. Conclusions

Algorithms for reconstructing time-varying sparse signals under communication constraints have been proposed in this paper. For severely bandwidth-constrained (1-bit) scenarios, a particle filter algorithm based on coarsely quantized innovations is proposed. To recover the sparsity pattern, the algorithm enforces the sparsity constraint on the fused estimate by either the iterative PM update method or the sparse cubature point filter method. Compared with the iterative PM update method, the sparse cubature point filter method is preferable due to its comparable performance and lower complexity. A numerical example demonstrated that the proposed algorithm is effective with far fewer measurements than the size of the state vector. This is very promising for energy-constrained WSNs, whose lifetime can thereby be prolonged. Nevertheless, the algorithm presented in this paper is only suitable for time-varying sparse signals with an invariant or slowly changing support set; more general methods should be combined with a support set estimator, which will be discussed further in our future work.

Appendix

Proof. In order to prove Lemma 1, we show that the moment generating function (MGF) of $x$ conditioned on $q_{0:t}$ can be written as the product of two MGFs corresponding to the two independent random variables in (11). Recall that the MGF of an $n$-dimensional r.v. $x$ is expressed as $M_x(s) = \mathbb{E}\big[e^{s^{T} x}\big]$. Conditioning first on $y$ and using the fact that $x$ given $y$ is Gaussian with the parameters in (10), the MGF of $x$ conditioned on $q_{0:t}$ factors, as in (A.2), into the MGF of the truncated Gaussian term and the MGF of the independent Gaussian term. The result then follows directly from (A.2).

Proof. Consider the pseudo-measurement equation (8). As $x_t$ is unknown, the relation $\|x_t\|_1 = \operatorname{sign}(x_t)^{T} x_t$ can be used to obtain the approximate pseudo-measurement (A.4) with an observation noise term, where the approximation holds almost surely. Since the mean of this noise cannot be obtained easily, we approximate its second moment as in (A.5); here, we use the fact that the estimation error and the fictitious measurement noise are statistically independent. Substituting the error covariance $P_{t|t}$ obtained by the KF into (A.5), we obtain the desired result.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants nos. 60872123 and 61101014) and by the fund for Higher-Level Talent in Guangdong Province, China (Grant no. N9101070).