Research Article  Open Access
Sahar Movaghati, Masoud Ardakani, "Distributed Binary Quantization of a Noisy Source in Wireless Sensor Networks", Journal of Sensors, vol. 2014, Article ID 368643, 11 pages, 2014. https://doi.org/10.1155/2014/368643
Distributed Binary Quantization of a Noisy Source in Wireless Sensor Networks
Abstract
In distributed (decentralized) estimation in wireless sensor networks, an unknown parameter must be estimated from noisy measurements collected at different sensors. Due to limited communication resources, these measurements are typically quantized before being sent to a fusion center, where an estimate of the unknown parameter is calculated. In the most stringent condition, each measurement is converted to a single bit. In this study, we propose a distributed quantization scheme which is based on single-bit quantized data from each sensor and achieves high estimation accuracy at the fusion center. We do this by designing local binary quantizers which define a multithreshold quantization rule for each sensor. These local binary quantizers are initially designed so that together they mimic the functionality of a multilevel quantizer. Later, their design is improved to include some error-correcting capability, which further improves the estimation accuracy obtained from the sensors’ binary data. The distributed quantization formed by such local binary quantizers, along with the proper estimator proposed in this work, achieves better performance than existing distributed binary quantization methods, especially when fewer sensors with low measurement noise are available.
1. Background and Introduction
The distributed quantization and estimation problem involves estimating an unknown parameter from several noisy measurements [1]. This problem appears in many distributed sensing systems in a wide range of applications, such as medical imaging, environmental monitoring, and target tracking. In many of these cases, a central processing unit or fusion center (FC) has to determine an unknown signal from noisy observations received from distributed sensors. In wireless sensor networks (WSNs), sensors encapsulate their measurement readings inside communication packets, according to standard protocols such as IEEE 802.15.4 and ZigBee, and transmit these packets to the FC. These protocols take care of the lower layers’ services (the term “layer” refers to the OSI model), such as routing and dealing with errors caused by channel noise. In this study, we assume that the WSN communication protocols provide error-free communication services. We focus on the measurement data inside the packets and on the benefit of optimal quantizer design at the application layer for improving estimation performance at the FC.
Normally, due to system constraints, such as shortage of energy and bandwidth resources, distributed observations need to be compressed before being sent to the FC [2]. Quantizing the observations reduces the transmission load in the network; however, it implies some information loss, reducing the estimation accuracy. Therefore, quantizer and estimator design is an important problem in distributed estimation, which has attracted considerable attention in the recent literature [3–15].
Suppose noisy measurements of a random parameter θ have the form x_k = θ + n_k, k = 1, …, K, where the n_k are independent zero-mean measurement noises. If all the measurements were available at the estimator without any distortion, an unbiased estimate with error variance as low as (Σ_{k=1}^{K} 1/σ_k²)⁻¹ could be achieved, where σ_k² is the variance of the measurement noise for the k-th observation [3]. Because of communication constraints in a WSN, the measurements are quantized before transmission using local quantization rules; that is, b_k = Q_k(x_k). The goal is then to design these local quantization rules, as well as the procedure to be used for estimating θ from the quantized data.
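As a quick sanity check of this unquantized benchmark, the following sketch (assumed notation x_k = θ + n_k; the noise levels and parameter value are illustrative, not from the paper) verifies numerically that the inverse-variance-weighted average of the raw measurements attains error variance (Σ_k 1/σ_k²)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.4                        # true parameter (illustrative)
sig = np.array([0.2, 0.3, 0.5])    # per-sensor noise std (assumed values)

est = []
for _ in range(20000):
    x = theta + rng.normal(0.0, sig)       # x_k = theta + n_k
    w = 1.0 / sig**2                       # inverse-variance weights
    est.append(np.sum(w * x) / np.sum(w))  # weighted (BLUE) average
var_emp = np.var(est)
var_theory = 1.0 / np.sum(1.0 / sig**2)    # (sum_k 1/sigma_k^2)^(-1)
```

The empirical variance of the weighted average matches the theoretical floor to within Monte Carlo error, which is the accuracy that quantized schemes give up.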
Some existing distributed estimation algorithms apply a uniform quantizer to quantize each analog measurement into a few bits [3–8, 16]. In all these algorithms, the number of quantization bits for each analog measurement is decided based on the signal-to-noise ratio (SNR). For example, in [3], the number of quantization bits for the k-th measurement grows as its noise standard deviation σ_k shrinks. In applications such as WSNs, this means the sensors with higher SNR, that is, smaller σ_k, have to send more bits; hence, they consume more power for data transmission. Consequently, the better sensors become exhausted and die more quickly, which in turn reduces the long-term performance of the estimation task.
Another technique, proposed in [9], is based on adding a deterministic or random control input to the observation data prior to quantization, so that the quantized value of the k-th observation is b_k = Q(x_k + w_k), where w_k is the deterministic or random control signal. To find the optimal control input, a metric based on the Cramér-Rao lower bound (CRLB), or the Fisher information, is optimized [9, 17, 18].
Under severely stringent bandwidth or power conditions, it is preferable to quantize each measurement into only one bit. Distributed estimation based on local binary quantization has been studied in [4, 10–14, 19]. In [4], the local binary quantization is performed by comparing the analog measurement value to a fixed threshold in the middle of the analog data range. The estimation performance of a set of local binary quantizers is studied in [13] for the asymptotic condition, that is, as the number of sensors tends to infinity.
Local binary quantizers with a single fixed threshold are not very efficient, especially when the number of measurements is small and the measurement noise is low. To improve the performance, [10, 11] suggest adaptive thresholds, which are sequentially adjusted according to the previously generated bits: the threshold used for quantizing the k-th measurement is adjusted based on the previous k − 1 bits. In [11], to find the optimum threshold for the k-th measurement, a maximum likelihood (ML) estimate must be derived each time from the previously generated bits and the previous thresholds.
Estimation based on locally binary-quantized observations is further studied in [12], which places several different thresholds on the analog range to ensure that there is always a threshold close to the true parameter θ. The observations are divided into groups, and each group is quantized against one of these thresholds. Through the CRLB, the authors find the set of optimal threshold values and the fraction of observations assigned to each threshold.
In the above distributed estimation methods based on binary data, each bit is generated using a single threshold. A multithreshold quantization scheme was suggested in [14]. In the distributed estimation method of [14], each measurement is used to estimate one of the bits in the binary representation of the unknown parameter: the available measurements are divided into groups, with the first group estimating the first bit of the unknown parameter, the second group the second bit, and so on. Each bit in the binary representation of the unknown parameter is thus estimated several times; its value is determined by averaging over those binary estimates, and the parameter is finally estimated by combining the resulting bit values.
In this work, we propose a new method for distributed binary quantization to improve the estimation performance at the FC. To do so, we design local quantizers that compress each local measurement to a single bit and propose a centralized estimator to infer the unknown parameter from those bits. Our goals are to (i) formulate the distributed quantization as a set of local binary quantizers (localQs), (ii) jointly design these localQs to find the optimal set that maximizes the estimation accuracy, and (iii) devise a centralized algorithm for the FC that combines the binary-quantized data to form an accurate estimate of the unknown parameter.
Our distributed quantization method benefits from multithreshold local quantizers to achieve high estimation accuracy. Compared to the binary quantization algorithms in [4, 11, 12, 14], our method achieves a better MSE performance at the FC, especially when a limited number of sensors with low measurement noise are available, while imposing minimal computation and transmission load on the sensors.
The rest of this paper is organized as follows. In Section 2, the detailed setup of the problem and the required definitions and assumptions are provided. In Section 3, the design of a distributed quantizer based on different binary numeral systems is introduced. Section 4 proposes optimal localQs to improve the estimation performance. In Section 5, the appropriate decoder/estimator to be used in the FC is formulated. Finally, in Section 6, the simulation results are shown for performance evaluation.
2. Problem Setup
Suppose θ is a random scalar distributed according to the pdf p(θ). A number of noisy measurements of θ are observed as x_k = θ + n_k, where the n_k, k = 1, …, K, are i.i.d. additive noise. In this work, we assume that p(θ) is a uniform distribution over a known interval and that the measurement noise is Gaussian with zero mean and variance σ². It is straightforward to modify the proposed distributed estimation method to work with other signal and noise pdfs; an example for a Gaussian θ is discussed in Section 4.3.
Considering the most stringent scenario, only single-bit data transmission is allowed for sending each measurement to the FC. Therefore, each measurement x_k is quantized to a bit b_k according to a local binary quantization rule, b_k = Q_k(x_k).
Note that data transmission over nonideal channels can involve errors due to channel noise, which can be dealt with separately using error detection and error correction techniques implemented in the WSN communication protocols. In this study, we do not consider any noise or error added by the channel; that is, the communication channel is assumed to be error-free. Therefore, all errors discussed here are due to the measurement or quantization noise, not the channel noise. Thus, the goal is to design a set of local binary quantization rules to be used for quantizing the observations and also to design an estimation algorithm that combines the quantized binary data to form an accurate estimate θ̂ of θ at the FC (see Figure 1).
Assume that the parameter range is “partitioned” into a number of “divisions.” Compressing the real-valued measurement x_k to a bit can be performed by introducing a localQ, which is a function mapping each division to a binary value, 0 or 1; the quantization of each measurement is then described by applying its localQ to the division in which the measurement falls. Each localQ is an ordered sequence of binary values, that is, a binary vector whose length equals the number of divisions. Wherever the binary value alternates between two successive divisions, the threshold between those divisions is regarded as an edge of the localQ; see Figure 2. Therefore, each localQ is associated with a set of edges, and these edges define a new partitioning of the range into “cells”; see Figure 2. Similar to multiresolution quantization [20, 21], different quantizers have different cell sizes (resolutions); however, in our method all local quantizers have binary output.
It must be mentioned that, due to the additive Gaussian noise, the analog measurements can fall outside the parameter range. However, since the desired parameter to be estimated at the FC lies within that range, the localQs are defined over it. Therefore, if a measurement falls below or above the parameter range, it is mapped to the lower or upper endpoint of the range, respectively.
3. Distributed Quantization
In this section we describe how a set of localQs is designed. For now, assume that we have N noiseless observations of θ; in other words, each observation equals θ and must be quantized into one bit, b_k, k = 1, …, N. Since the combinations of N bits can specify at most 2^N values, one can generate b_1, …, b_N so that together they identify the division of θ among 2^N distinct divisions of the parameter range. To achieve that, the N localQs must be appropriately designed.
Consider the analog range to be partitioned into 2^N equal-length divisions. (Considering equal-length divisions is intuitive when we have a uniform source; for nonuniform sources, a nonequal partitioning must be considered; see Section 4.3.) To identify to which of the 2^N divisions θ belongs, N bits are provided to the FC using N localQs. These localQs can be designed so that, for each division of the range, the N bits together make the N-digit binary label/word assigned to that division. Figure 3 illustrates localQs that together assign N-digit binary words to the divisions. This assignment is conducted using the “natural” binary numeral system: reading vertically from left to right, the first division is assigned the all-zeros word, the second division the next natural binary word, and so on. If a different binary numeral system, referred to as the “labeling” scheme in this work, is used for labeling the divisions, a different set of localQs results. This is why the localQs in Figure 3 carry the superscript “Nr,” which stands for the natural labeling. If, instead of the natural labeling, Gray labeling is used, a different set of localQs is produced; see Figure 4. Clearly, other labeling schemes can be considered.
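The two labelings can be sketched as follows. For an assumed N = 3 (an illustrative value, not taken from the figures), the code builds each localQ as the column of i-th bits across the 2^N division labels, for both the natural and the binary-reflected Gray numeral systems.

```python
N = 3                      # number of bits / localQs (assumed)
M = 2 ** N                 # number of divisions

def natural_label(d, N):
    # N-digit natural binary word for division index d, MSB first
    return [(d >> (N - 1 - i)) & 1 for i in range(N)]

def gray_label(d, N):
    g = d ^ (d >> 1)       # standard binary-reflected Gray code
    return [(g >> (N - 1 - i)) & 1 for i in range(N)]

# localQ i is the column of i-th bits across all divisions
natural_localQs = [[natural_label(d, N)[i] for d in range(M)] for i in range(N)]
gray_localQs = [[gray_label(d, N)[i] for d in range(M)] for i in range(N)]
```

A useful property of the Gray labeling, relevant to why it tolerates measurement noise well, is that adjacent divisions differ in exactly one bit, so a small perturbation of the measurement flips at most one transmitted bit.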
Similarly, for the noisy scenario, we can quantize each local noisy measurement x_k, k = 1, …, N, using one of the localQs. Depending on where x_k falls, the k-th localQ decides the k-th bit. Since there are N measurements, the k-th measurement provides the k-th digit of the N-digit binary word, for k = 1, …, N. At the FC, the N bits are used to remake the N-digit binary word and thereby construct the estimate θ̂.
The above-described method can be viewed as a distributed quantizer, since a uniform 2^N-level quantization is implemented using N separate binary quantizers. In a noiseless scenario, the performance of this distributed quantizer is the same as that of the centralized scalar quantizer, meaning that its MSE is Δ²/12, where Δ is the division length. Moreover, when all observations are equal to θ, using the natural localQs, the Gray localQs, or any other set of localQs based on a different labeling results in the same MSE. However, in the presence of measurement noise, any of the N bits of the distributed quantizer might be quantized wrongly, leading to a larger estimation error at the FC. In this case, the overall MSE at the FC is affected very differently by different localQs and also depends on the position of the unknown θ; thus, a closed-form expression for the MSE does not have a simple or insightful structure. In Section 5.3, we derive the CRLB to compare against the simulation results of our method.
One way to reduce the final MSE is to use better labeling schemes, which result in different localQs. In Section 4.1, the optimal localQs for achieving the best performance are discussed. For a uniform θ, using the Gray labeling to design the localQs achieves the optimal estimation performance. However, this is not the case for other distributions of θ. For a Gaussian θ, the optimal localQs are discussed in Section 4.3.
It is possible to further enhance the estimation when more than N observations are available. At the same noise level, when K > N measurements are used, θ can be estimated more accurately while keeping the same 2^N-level quantizer as the basic quantizer. In such cases, the extra measurements can be used to repeat some of the binary digits and reduce the bit error rate of those bits, that is, an approach similar to [14]. However, instead of simple bit repetition, one can employ the extra measurements more effectively to obtain the optimal estimation performance. This is further discussed in Section 4.2.
In the following sections, we introduce an algebraic approach to further explain the distributed quantization. Through it, we find the optimal localQs yielding the best estimation performance when K ≥ N. In particular, for K > N, an analogy between error-correcting codes and the distributed quantization is used.
4. Optimal Distributed Quantization
Assume that there are K independent noisy measurements of the unknown θ. To achieve the best estimation, one needs to obtain the optimal set of localQs to be assigned to the analog measurements. For a fixed N, if K = N, the solution to the above problem yields the optimal labeling scheme. If K > N, the result is an error-correcting distributed quantization. These two cases are studied separately in the following sections.
4.1. Best Labeling Scheme, K = N
In Section 3, we discussed two sets of localQs, resulting from applying two different labeling systems, namely, natural and Gray. In this section we discuss other sets of localQs. It is worth mentioning again that if no measurement noise is present, all of these sets of localQs have the same performance. However, in the presence of measurement noise, their performances differ in terms of the estimation MSE.
To locate a variable among 2^N distinct divisions of the parameter range, N bits are needed. To generate such bits, an eligible set of N localQs can be chosen from a variety of choices. As mentioned in Section 2, each localQ is basically a binary vector of length 2^N. Choosing N “eligible localQs” that can identify the 2^N different divisions is analogous to choosing N linearly independent binary vectors of size 2^N (note that the algebraic calculations are done in GF(2)). The eligible localQs are called the “basis localQs,” for brevity.
The set of natural localQs in Section 3 gives one example set of basis localQs, namely, the “natural basis localQs” of Figure 3. Taking the natural basis localQs as the reference vectors and linearly combining them to generate a new set of N linearly independent vectors, we obtain a different set of basis localQs. Such a linear combination of the reference localQs into a new set of linearly independent localQs can be represented by an N × N binary matrix of rank N. For example, the matrix in (3) indicates one particular combination of the natural basis localQs, where combining means modulo-2 summation of the binary vectors representing the localQs. The new set of basis localQs obtained by the example in (3) is the set of Gray localQs (Figure 4).
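This combination step can be sketched as follows, for an assumed N = 3. The combining matrix below is the standard natural-to-Gray conversion over GF(2); it is our illustrative choice and not necessarily the paper's equation (3).

```python
import numpy as np

N = 3
M = 2 ** N
# natural basis localQs as rows of an N x M binary matrix
nat = np.array([[(d >> (N - 1 - i)) & 1 for d in range(M)] for i in range(N)])

# T[i, j] = 1 means new localQ i includes natural localQ j (mod-2 sum);
# this particular T maps natural labels to Gray labels (assumed example)
T = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
new = T.dot(nat) % 2       # each row: XOR of the selected natural localQs
```

Because T is full rank over GF(2), the new rows are again linearly independent and can serve as a basis localQ set.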
In this work, we focus on the natural basis localQs as the original localQ set and on their linear combinations for producing new localQ sets. By searching over all such matrices, one can find the best basis localQs, that is, those with the lowest MSE, for each SNR level. As mentioned above, the matrix must be full rank. Moreover, a permutation of the rows of the matrix results in the same set of localQs. Hence, the size of the search space is the number of full-rank N × N binary matrices counted up to row permutations, that is, ∏_{i=0}^{N−1}(2^N − 2^i)/N!.
4.2. Error-Correcting LocalQs, K > N
As mentioned earlier, with K > N measurements available, the estimation accuracy can be improved by repeating some of the localQs. However, a more effective approach is proposed here: choosing new localQs for the redundant measurements. Taking the N basis localQs of Section 3 as a starting point, K localQs are obtained so that together they generate K-bit words identifying the 2^N divisions. Obviously, using K-bit words instead of N-bit words (as in Section 3) is a redundant way of labeling a 2^N-division quantization. However, this redundancy enables some error-correction capability in estimating the division of θ. This property is analogous to the error-correcting property in channel coding, where redundant bits are added to N-bit messages to make K-bit codewords [22]. Here, this happens by adding extra rows to the generator matrix.
Among the K localQs used to quantize the measurements, there must be at least N linearly independent localQs. Hence, the K localQs are produced by linear combinations of a set of N basis localQs according to a K × N matrix of rank N. For example, consider a matrix used to produce the K localQs from the natural basis localQs whose first part is a square matrix similar to (3); then the four Gray basis localQs are among the localQs it produces. Figure 5 shows the extra localQs.
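The K > N construction can be sketched as follows, with an assumed K = 5, N = 3 generator whose top part is the identity (an illustrative matrix, not the paper's example). The minimum Hamming distance among the valid K-bit codewords shows the redundancy contributed by the extra rows.

```python
import numpy as np

N, K = 3, 5
M = 2 ** N
nat = np.array([[(d >> (N - 1 - i)) & 1 for d in range(M)] for i in range(N)])

# K x N binary matrix of rank N (assumed example): identity on top,
# two redundant parity rows below
G = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])

localQs = G.dot(nat) % 2          # K localQs, one per sensor
codewords = localQs.T             # row d = K-bit valid codeword of division d
# minimum pairwise Hamming distance of the valid codewords
dmin = min(int(np.sum(codewords[a] != codewords[b]))
           for a in range(M) for b in range(a + 1, M))
```

With N-bit labels the minimum distance is 1, so any single quantization error changes one valid codeword into another; here the redundant rows increase the distance, letting the decoder recognize (and, for larger distances, correct) unlikely bit patterns.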
Different matrices can be used to produce different sets of localQs, each having a different estimation performance. A repetition strategy, in which some of the basis localQs are repeated for quantizing the extra measurements, is a special case of this approach. However, by searching among different matrices, one can find a set of localQs that performs better than a mere repetition scheme. The optimal set of localQs for given values of K and N depends on the SNR and can be obtained by exhaustive search; the size of the search space is given in (7).
In Section 6, some optimal results are shown and discussed. It is worth mentioning that the search for the optimal localQs is done at the FC and happens only once. After the optimal set of localQs is obtained, one localQ is assigned to each sensor. As long as the measurement noise variance does not change significantly, the localQs do not need to be updated.
It is worth pointing out that there is a difference between the functionality of error-correcting codes when used against channel noise and when used, as here, in the distributed estimation problem. In channel coding, the source of bit errors is the additive channel noise, which contaminates the coded bits after they are sent over the channel. In contrast, in the case of distributed quantization, bit errors happen during the quantization process, when the bits are being generated. The main consequence is that, unlike in channel coding, the a priori bit error probability (BEP) is not the same for all bits. Even in a homogeneous scenario with i.i.d. noise for all observations, that is, the same σ² for every sensor, the probability of error for each bit depends on its localQ and on how small the cell lengths are in that localQ. Moreover, the BEPs also depend on the value of θ. Because of these fundamental differences, known optimal channel codes are no longer optimal for distributed estimation, and a separate study is required for the decoder/estimator of the proposed error-correcting quantization method.
4.3. Gaussian Source
Up to this point, we have assumed that the unknown parameter is uniformly distributed; accordingly, in Section 3, we started our method by assuming a uniform partitioning of the range into equal divisions. If the parameter is not uniformly distributed, the uniform partitioning of the range is not optimal. However, the same methodology described in Sections 4.1 and 4.2 can still be used to design the optimal localQs, but with a different partitioning of the range of θ. As an example, we discuss the Gaussian source. The results of applying our distributed quantization method to a Gaussian parameter are presented in Section 6.
For a Gaussian source, we assume that the range of θ is partitioned according to the Lloyd-Max algorithm [23, 24] for centralized quantization, which places the division edges nonuniformly, with finer divisions where the prior density is higher. To derive localQs for a Gaussian source, we take the same steps as in Sections 4.1 and 4.2, except that the edges of every localQ are a subset of the Lloyd-Max edges.
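A minimal grid-based Lloyd-Max sketch for a zero-mean, unit-variance Gaussian source is given below. This is an assumed implementation (the paper only cites [23, 24] for the algorithm itself); it alternates the nearest-neighbor condition (edges midway between reproduction points) and the centroid condition (each point at the conditional mean of its cell).

```python
import numpy as np

def lloyd_max(levels, iters=200):
    # dense grid approximation of the N(0, 1) density
    x = np.linspace(-5.0, 5.0, 20001)
    p = np.exp(-x**2 / 2.0)                  # unnormalized pdf weights
    c = np.linspace(-2.0, 2.0, levels)       # initial reproduction points
    for _ in range(iters):
        edges = (c[:-1] + c[1:]) / 2.0       # nearest-neighbor partition
        idx = np.searchsorted(edges, x)      # cell index of each grid point
        # centroid condition: conditional mean of each cell under the pdf
        c = np.array([np.average(x[idx == k], weights=p[idx == k])
                      for k in range(levels)])
    edges = (c[:-1] + c[1:]) / 2.0
    return edges, c
```

For four levels, this converges to the well-known 2-bit Lloyd-Max quantizer for a standard Gaussian, with a decision edge at 0 and symmetric edges and levels around it.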
4.4. Suboptimal Search Strategy
Due to the nature of the optimization problem, an exhaustive search must be performed to find the optimal localQs. In many WSN applications, K and N are small, and the search is feasible. For values of K and N where the search space is too large, suboptimal strategies can be used to reduce the complexity. Some of these strategies are discussed here.
Strategy 1. When N-bit precision is realized with K sensors, there must always be N linearly independent localQs among the K localQs. Based on the simulation results, it can be observed that, for a uniform θ and Gaussian noise, the same N independent localQs appear for every SNR value; namely, the localQs derived from Gray labeling are always among the optimal localQs. Therefore, for a uniform θ and Gaussian noise, to design the localQs for any SNR, one can fix N of them to be the Gray localQs. This reduces the size of the search space to the second term of the product in (7).
Strategy 2. Another simplifying strategy can be applied for large values of K. In our simulations, it was observed that as K grows while N is kept constant, some localQs are repeated among the optimal localQs. Therefore, even for very large K, one can limit the search to a smaller number of localQs, find the optimal localQs in that reduced space, and repeat the same localQs for the rest of the observations.
Both of the above strategies can be combined to reduce the search space. For example, for a uniformly distributed unknown parameter with Gaussian measurement noise, a suboptimal search proceeds as follows: for K = N, the Gray localQs are used directly; for moderately larger K, a search is performed over the reduced space with the Gray localQs fixed; and for still larger K, the localQs found for the smaller case are repeated for the remaining observations. A third strategy, which further reduces the complexity, is presented in Section 5.3.
5. Decoding
At the FC, a decoding method must be used to estimate the unknown parameter from the received binary-quantized measurements. Here, we explain two types of decoders, the discrete and the continuous decoder, which estimate θ based on the discrete and continuous a posteriori distributions, respectively. For the sake of better understanding the overall behavior of the distributed estimation method, we first explain the discrete decoder. In both cases, the receiver only needs to do the computations once, at the beginning of the algorithm, to build a lookup table, which is then used during the estimation procedure.
5.1. Discrete A Posteriori
As a discrete decoder, the estimator locates θ in one of the discrete divisions using the received bits. In this section we propose an ML estimator that decodes the received bits to estimate the division of θ among the 2^N divisions.
As discussed in Section 4, for each value of θ, the set of localQs together generates a K-bit “codeword.” Assuming no measurement noise, there are 2^N different valid codewords, in one-to-one correspondence with the 2^N divisions. In the presence of measurement noise, each of the K bits might be wrong, resulting in a “received word” (note that the term “received” does not imply a communication channel or channel error) that may contain some bit errors. The discrete decoder’s function is to find the most likely valid codeword given the received K-bit word, which in turn estimates the division of θ. This estimator can be reduced to a lookup table saved at the receiver.
To build the lookup table for the discrete decoder, the likelihoods, or equivalently the a posteriori probabilities, of all the valid codewords conditioned on the received word are calculated. The codeword with the maximum likelihood is chosen as the decoder’s decision for each received word, and the center of mass [23] of the corresponding division is taken as the estimated value. This decoder is therefore referred to as the discrete ML estimator. The a posteriori likelihood of each codeword can be written as in (8), where the a priori probability of a codeword is the integral of the prior pdf between the left and right edges of the corresponding quantization division. For a uniform θ, this a priori probability is the same for every codeword.
Since each valid codeword is associated with one of the divisions, each term of the product in (8) can be described as in (9). The denominator in the second line of (9) is, by definition, the a priori probability of the codeword, which for a uniform θ is equal for every codeword. Remembering that each bit can be either 0 or 1, (9) can be written as (10), in terms of the probability that the i-th bit is 1 when the parameter takes a given value; the formula for calculating this probability is given in Appendix B. Using (10) and (8), one can find the a posteriori probabilities of the codewords.
At the beginning of the algorithm, a decoding table is formed by calculating the a posteriori probabilities for all possible values of the received word and all valid codewords. This calculation is done once at the FC and does not need to be repeated. The decoding table has one entry for every possible received word, associating it with the codeword that has the highest likelihood. For every estimation instance, based on the received word, the decoder chooses the corresponding codeword from the table as the most likely codeword and announces the center of mass of the associated division as the estimate. It is easy to see that the lookup table depends on the localQ set as well as on the noise variance.
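The table construction can be sketched as follows, for assumed illustrative parameters (N = K = 2 natural localQs on the range [−1, 1], noise standard deviation 0.3, divisions represented by their centers). With K = N every received word is itself a valid codeword, so the table is a sanity check; with K > N the same construction maps invalid received words to the most likely codeword.

```python
import itertools
import math
import numpy as np

N = 2
M = 2 ** N                          # divisions of the assumed range [-1, 1]
sigma = 0.3                         # measurement-noise std (illustrative)
edges = np.linspace(-1.0, 1.0, M + 1)
centers = (edges[:-1] + edges[1:]) / 2.0
# natural localQs: localQ[i][d] is bit i of division d's label
localQ = [[(d >> (N - 1 - i)) & 1 for d in range(M)] for i in range(N)]
# end divisions absorb measurements clipped to the range (Section 2)
ext = np.concatenate(([-np.inf], edges[1:-1], [np.inf]))

def phi(z):                         # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_bit_one(i, theta):
    # P(localQ_i outputs 1 | theta): Gaussian mass over that localQ's 1-cells
    return sum(phi((ext[d + 1] - theta) / sigma) - phi((ext[d] - theta) / sigma)
               for d in range(M) if localQ[i][d])

table = {}
for r in itertools.product([0, 1], repeat=N):
    def likelihood(d):
        th = centers[d]             # division represented by its center
        l = 1.0
        for i in range(N):
            p1 = p_bit_one(i, th)
            l *= p1 if r[i] == 1 else 1.0 - p1
        return l
    table[r] = max(range(M), key=likelihood)   # most likely division index
```

Here the table correctly inverts the labeling: each 2-bit word is decoded to the division whose natural label it is.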
The discrete ML decoder is based on finding the a posteriori probabilities of the valid codewords, which are in one-to-one correspondence with the divisions of θ; therefore, it can only locate θ with limited precision. To better estimate the analog parameter θ, a continuous decoder can be used instead, resulting in better estimation performance.
5.2. Continuous A Posteriori
The continuous decoder is designed based on the continuous a posteriori pdf of . Using the a posteriori pdf, a MAP or MMSE estimator can be designed. Again, similar to the discrete decoder, the receiver only needs to do the computations once at the beginning of the algorithm to build a lookup table, which is used during the estimation procedure.
The continuous a posteriori distribution of θ given the received bits can be written as in (11). The factorization in (11) is valid since the measurement noises, as well as the quantization procedures, are independent across measurements. Once this a posteriori distribution is derived, one can take its maximizer or its mean as the MAP or MMSE estimate of θ, respectively.
At the FC, the following calculations are done once to build a decoding table. For each possible received word, the MAP or MMSE estimate is calculated to build an entry in the lookup table, mapping the received word to an estimate. Every time a K-bit word is received at the decoder, this lookup table is used to find the estimate.
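A hedged sketch of computing one MMSE table entry, for an assumed uniform prior on [−1, 1] and the same illustrative setup as above (N = 2 natural localQs, noise std 0.3): the posterior of θ given the received word is evaluated on a grid and its mean is the stored estimate.

```python
import math
import numpy as np

N = 2
M = 2 ** N
sigma = 0.3
edges = np.linspace(-1.0, 1.0, M + 1)
localQ = [[(d >> (N - 1 - i)) & 1 for d in range(M)] for i in range(N)]
ext = np.concatenate(([-np.inf], edges[1:-1], [np.inf]))  # clipped end cells

def phi(z):                         # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_bit_one(i, theta):
    # P(bit i = 1 | theta) under Gaussian measurement noise
    return sum(phi((ext[d + 1] - theta) / sigma) - phi((ext[d] - theta) / sigma)
               for d in range(M) if localQ[i][d])

def mmse_estimate(r):
    # posterior on a grid: uniform prior times the per-bit likelihoods
    grid = np.linspace(-1.0, 1.0, 2001)
    post = np.ones_like(grid)
    for i in range(N):
        p1 = np.array([p_bit_one(i, t) for t in grid])
        post *= p1 if r[i] == 1 else 1.0 - p1
    return float(np.sum(grid * post) / np.sum(post))  # posterior mean
```

Unlike the discrete decoder, the returned value is not restricted to division centers, which is where the MSE gain of the continuous decoder comes from.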
It is easy to verify that the MSE performance of the continuous decoder is better than that of the discrete one, while its computational complexity is only slightly higher. The performance of the different decoders is studied through simulations in Section 6 and compared to the CRLB for the optimum unbiased estimator.
5.3. Performance Bounds
To evaluate the performance of our estimator, we have calculated the CRLB for a given set of localQs. In our distributed estimation method, for the case of a Gaussian or uniform θ and Gaussian noise, the CRLB indicates the best achievable performance if the estimator is unbiased. The unbiasedness of our MMSE estimator is proven in Appendix A. The CRLB of the estimation method is given by (12), in which the prior-dependent term takes different forms for a uniform θ and for a zero-mean Gaussian θ. See Appendix B for the derivation of (12).
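The key ingredient of such a bound, the Fisher information that the K quantized bits carry about a fixed θ, can be evaluated numerically as sketched below, using I(θ) = Σ_i p_i′(θ)² / (p_i(θ)(1 − p_i(θ))), where p_i(θ) = P(b_i = 1 | θ). The localQs and parameters are illustrative assumptions; the exact form of (12) is in the paper's appendices.

```python
import math
import numpy as np

N = 2
M = 2 ** N
sigma = 0.3
edges = np.linspace(-1.0, 1.0, M + 1)
localQ = [[(d >> (N - 1 - i)) & 1 for d in range(M)] for i in range(N)]
ext = np.concatenate(([-np.inf], edges[1:-1], [np.inf]))

def phi(z):                         # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_bit_one(i, theta):
    # P(bit i = 1 | theta) under Gaussian measurement noise
    return sum(phi((ext[d + 1] - theta) / sigma) - phi((ext[d] - theta) / sigma)
               for d in range(M) if localQ[i][d])

def fisher(theta, h=1e-5):
    # Fisher information of independent Bernoulli bits, derivative by
    # central finite differences
    total = 0.0
    for i in range(N):
        p = p_bit_one(i, theta)
        dp = (p_bit_one(i, theta + h) - p_bit_one(i, theta - h)) / (2 * h)
        total += dp**2 / (p * (1.0 - p))
    return total
```

By the data-processing property of Fisher information, the value never exceeds the analog benchmark N/σ², which quantifies how much the single-bit constraint costs at each θ.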
Strategy 3. Based on the CRLB, a strategy can be devised to reduce the computational complexity of the search method described in Section 4. As explained there, the optimal localQs are found by minimizing the MSE: for every candidate set of localQs, the MSE of estimation is found through simulation, and finally the set with the lowest MSE is selected. To reduce the complexity, we suggest using an MSE bound, namely, the CRLB, instead of the actual MSE. For every candidate set of localQs, the CRLB is calculated based on (12), and the set with the lowest CRLB value is selected. Using this strategy along with the two strategies in Section 4.4, the complexity of finding the localQs is greatly reduced. The MSE results for the suboptimal localQs are compared with those of the optimal localQs in Section 6.
6. Numerical Results
In this section, different simulation results are discussed in order to study the performance of the proposed distributed estimation method. Both scenarios of Section 4.1, where K = N, and of Section 4.2, where K > N, are considered. The results are shown for two distributions of the unknown parameter, uniform and Gaussian. In the uniform case, θ is uniformly distributed over its given interval; in the Gaussian case, θ is zero-mean Gaussian. Comparisons with other methods, such as [4, 11, 12, 14], are discussed. The performance is evaluated in terms of the MSE. In all simulations, the measurement noise is i.i.d. zero-mean Gaussian with variance σ². Note that this SNR relates only to the measurement noise, while the channel is assumed to be error-free.
Figure 6 shows the performance of the distributed estimation method for a uniform source. Since the parameter is uniform, its variance is fixed by its interval, and the average SNR is the ratio of this variance to the measurement-noise variance. At each SNR, the optimal set of localQs was found by an exhaustive search over the candidate quantizer matrices. For the scenario with extra measurements, the results of the suboptimal search method based on both Strategy 1 and Strategy 3 are also shown in the figure. Note that, in all cases, the continuous MMSE estimator is used in the decoder. The results show that, when the number of measurements equals the bit precision, the optimal localQ set for every SNR is the set of Gray basis localQs. With extra measurements, a different set of optimal localQs is found for each SNR, further reducing the estimation error. As an example, the optimal localQs at one SNR point are presented in Table 1. From the table, we can see that three of the localQs are the Gray basis localQs, and the rest are linear combinations of those.
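The way Gray basis localQs turn one multilevel quantizer into several multi-threshold binary rules can be illustrated concretely. This is a minimal sketch assuming a uniform 8-cell quantizer on [-1, 1]; the edge placement and function names are ours:

```python
import numpy as np

# 8-level quantizer over [-1, 1] with uniform cell edges (assumed for illustration)
edges = np.linspace(-1, 1, 9)[1:-1]          # 7 interior cell edges
levels = np.arange(8)
gray = levels ^ (levels >> 1)                # Gray labels of the 8 cells

def local_quantizer_bit(x, bit):
    """Binary localQ: outputs the given Gray-label bit of the cell containing x."""
    cell = np.searchsorted(edges, x)         # index of the quantization cell
    return (gray[cell] >> bit) & 1

# Each bit defines a multi-threshold binary rule; list the edges where it flips
for bit in range(3):
    pattern = (gray >> bit) & 1
    flips = [edges[i] for i in range(7) if pattern[i] != pattern[i + 1]]
    print(f"bit {bit}: cell pattern {pattern.tolist()}, thresholds {flips}")
```

Running the loop shows the characteristic structure: the least significant Gray bit flips at four thresholds, the middle bit at two, and the most significant bit at a single threshold, so together the three binary localQs reproduce the 8-level quantizer.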

For the case with extra measurements, the performance of our algorithm is compared with the distributed estimation method proposed by Luo [14]. In Luo's algorithm, the measurements are partitioned into groups: one group is quantized to the first bit of a natural binary representation, a second group is used to estimate the second bit, and the remaining measurements are used for the third bit. As can be seen in Figure 6, the optimal sets of localQs for different SNRs outperform the method of [14].
The three decoding methods based on the discrete and continuous a posteriori functions of Section 5 are compared in Figure 7. The first is the ML estimator based on the discrete a posteriori likelihood of the codewords discussed in Section 5.1. The other two are the MAP and MMSE estimators based on the continuous a posteriori density of the parameter in Section 5.2. The CRLB for the uniform case is also shown for comparison.
For a zero-mean Gaussian parameter, the corresponding results are shown in Figure 8. The average SNR is again the ratio of the prior variance to the measurement-noise variance. Please note that, as explained in Section 4.3, the division edges of the centralized quantizer are set according to the Lloyd quantizer design [23]. As an example, the optimal localQs at one SNR point are presented in Table 2. Note that these localQs are linearly independent but, unlike the case of a uniform parameter, they are not the Gray basis localQs.

Figure 9 compares the performance of our proposed method with other binary quantization methods. The circle points in Figure 9 show the MSE for the suboptimal localQs found using the three suboptimal search strategies. For the method proposed by Ribeiro and Giannakis [12], the CRLB of the optimal solution is indicated. Two adaptive quantization methods proposed by Fang and Li [11], AQ-VS and AQ-ML, are also shown in the figure. For AQ-ML, an ML estimate must be computed before each measurement is quantized. It can be seen from Figure 9 that, beyond a certain SNR, our method has lower MSE than [11, 12], while its complexity per estimation job (as long as the SNR does not change much) is lower than that of those algorithms.
7. Conclusion
In this study we proposed an optimum distributed quantization method to estimate an unknown parameter from sensors' noisy measurements, each compressed to one bit. The method is based on designing multi-threshold localQs that generate a single bit from each analog measurement, together with an appropriate decoder used at the fusion center to estimate the unknown. The results show that, when the number of measurements equals the desired bit precision, the optimal localQs for a uniform parameter are those based on the Gray labeling, for all SNRs. When extra measurements are available, additional localQs are found that provide an error-correcting capability relative to the minimal-measurement case, further improving the estimation accuracy. The results were compared with other distributed estimation methods with binary quantization and show better performance when a limited number of sensors with low measurement noise is available.
Appendices
A. Unbiasedness of the Estimator
The bias of an estimator is defined as b = E[θ̂ − θ], where θ is the unknown random parameter and θ̂ is its estimate. In our method, θ is estimated from the binary data b using an MMSE estimator at the FC; that is, θ̂ = E[θ | b]. Therefore, by the law of total expectation, E[θ̂] = E[E[θ | b]] = E[θ], so E[θ̂ − θ] = 0. The last equality establishes the unbiasedness of the method when an MMSE estimator is used at the FC.
B. CRLB
The CRLB for estimating an unknown random variable from noisy measurements is the inverse of the Fisher information, defined as the expectation, taken jointly over the parameter and the measurements, of the squared derivative of the logarithm of their joint density. It can be shown that this definition is equivalent to the negative expected second derivative of the log joint density [25]. Because the joint density factors into the likelihood of the measurements and the prior density of the parameter, the Fisher information splits into two terms: the first related to the likelihood of the measurements and the second depending only on the prior distribution of the parameter.
Now, suppose that N binary measurements of the parameter are available. The likelihood function is then the product, over the sensors, of the probabilities of the observed bits [16], where p_i(θ) denotes the probability that the ith bit equals 1 when the parameter value is θ, and 1 − p_i(θ) the probability that it equals 0. If the ith localQ is defined by its set of cell edges, with the first cell mapped to 0, then, for Gaussian measurement noise, p_i(θ) is a sum of differences of Gaussian tail probabilities taken over the cells mapped to 1 (see Figure 10). The log-likelihood is then a sum of per-bit terms, and its first and second derivatives follow directly. The measurement-related term of the Fisher information is obtained by taking the expectation of the negative second derivative; for any binary localQ the cross terms cancel, and the expression reduces to the sum, over the sensors, of (p_i'(θ))² / (p_i(θ)(1 − p_i(θ))), averaged over the parameter. For a uniformly distributed parameter, the prior term of the Fisher information is zero, and the CRLB is simply the inverse of this measurement term. For a zero-mean Gaussian parameter with variance σ_θ², the prior term is easily calculated to be 1/σ_θ², and the CRLB is the inverse of the sum of the two terms.
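The key quantities of this derivation can be written explicitly. The following is a sketch in our own notation (b_i for the ith bit, a_k for the cell edges of a localQ, Q(·) for the Gaussian tail probability), consistent with the steps above but not necessarily the authors' exact equations:

```latex
L(\theta) = \prod_{i=1}^{N} p_i(\theta)^{\,b_i}\bigl(1 - p_i(\theta)\bigr)^{1-b_i},
\qquad
p_i(\theta) = \sum_{[a_k,\,a_{k+1}) \,\mapsto\, 1}
\left[ Q\!\left(\frac{a_k - \theta}{\sigma}\right)
     - Q\!\left(\frac{a_{k+1} - \theta}{\sigma}\right) \right],
```

```latex
-\,\mathbb{E}_{\mathbf{b}\mid\theta}\!\left[
\frac{\partial^{2} \ln L(\theta)}{\partial \theta^{2}}\right]
= \sum_{i=1}^{N}
\frac{\bigl(p_i'(\theta)\bigr)^{2}}{p_i(\theta)\bigl(1 - p_i(\theta)\bigr)}.
```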
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
 M. Çetin, L. Chen, J. W. Fisher III et al., “Distributed fusion in sensor networks,” IEEE Signal Processing Magazine, vol. 23, no. 4, pp. 42–55, 2006.
 M. Gastpar, M. Vetterli, and P. L. Dragotti, “Sensing reality and communicating bits: a dangerous liaison,” IEEE Signal Processing Magazine, vol. 23, no. 4, pp. 70–83, 2006.
 J. Li and G. AlRegib, “Distributed estimation in energy-constrained wireless sensor networks,” IEEE Transactions on Signal Processing, vol. 57, no. 10, pp. 3746–3758, 2009.
 J.-J. Xiao and Z.-Q. Luo, “Decentralized estimation in an inhomogeneous sensing environment,” IEEE Transactions on Information Theory, vol. 51, no. 10, pp. 3564–3575, 2005.
 J.-J. Xiao, A. Ribeiro, Z.-Q. Luo, and G. B. Giannakis, “Distributed compression-estimation using wireless sensor networks,” IEEE Signal Processing Magazine, vol. 23, no. 4, pp. 27–41, 2006.
 Z.-Q. Luo and J.-J. Xiao, “Universal decentralized estimation in a bandwidth constrained sensor network,” in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, vol. 4, pp. 829–832, March 2005.
 J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. J. Goldsmith, “Power scheduling of universal decentralized estimation in sensor networks,” IEEE Transactions on Signal Processing, vol. 54, no. 2, pp. 413–422, 2006.
 A. Krasnopeev, J.-J. Xiao, and Z.-Q. Luo, “Minimum energy decentralized estimation in a wireless sensor network with correlated sensor noises,” EURASIP Journal on Wireless Communications and Networking, vol. 2005, no. 4, pp. 473–482, 2005.
 H. C. Papadopoulos, G. W. Wornell, and A. Oppenheim, “Sequential signal encoding from noisy measurements using quantizers with dynamic bias control,” IEEE Transactions on Information Theory, vol. 47, no. 3, pp. 978–1002, 2001.
 H. Li and J. Fang, “Distributed adaptive quantization and estimation for wireless sensor networks,” IEEE Signal Processing Letters, vol. 14, no. 10, pp. 669–672, 2007.
 J. Fang and H. Li, “Distributed adaptive quantization for wireless sensor networks: from delta modulation to maximum likelihood,” IEEE Transactions on Signal Processing, vol. 56, no. 10, part 2, pp. 5246–5257, 2008.
 A. Ribeiro and G. B. Giannakis, “Bandwidth-constrained distributed estimation for wireless sensor networks, Part I: Gaussian case,” IEEE Transactions on Signal Processing, vol. 54, no. 3, pp. 1131–1143, 2006.
 R. Gray, S. Boyd, and T. Lookabaugh, “Low rate distributed quantization of noisy observations,” in Proceedings of the Allerton Conference on Communication, Control, and Computing, pp. 354–358, 1985.
 Z.-Q. Luo, “Universal decentralized estimation in a bandwidth constrained sensor network,” IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 2210–2219, 2005.
 I. D. Schizas, G. B. Giannakis, and N. Jindal, “Distortion-rate bounds for distributed estimation using wireless sensor networks,” EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 748605, 2008.
 S. Movaghati and M. Ardakani, “Particle-based message passing algorithm for inference problems in wireless sensor networks,” IEEE Sensors Journal, vol. 11, no. 3, pp. 745–754, 2011.
 F. Chapeau-Blondeau and D. Rousseau, “Noise-enhanced performance for an optimal Bayesian estimator,” IEEE Transactions on Signal Processing, vol. 52, no. 5, pp. 1327–1334, 2004.
 G. Balkan and S. Gezici, “CRLB based optimal noise enhanced parameter estimation using quantized observations,” IEEE Signal Processing Letters, vol. 17, no. 5, pp. 477–480, 2010.
 S. Movaghati and M. Ardakani, “Energy-efficient quantization for parameter estimation in inhomogeneous WSNs,” in Proceedings of the Annual IEEE Global Telecommunications Conference (GLOBECOM '11), pp. 1–5, Houston, Tex, USA, December 2011.
 J. Chen, S. Dumitrescu, Y. Zhang, and J. Wang, “Robust multiresolution coding,” IEEE Transactions on Communications, vol. 58, no. 11, pp. 3186–3195, 2010.
 S. Dumitrescu, “Fast encoder optimization for multiresolution scalar quantizer design,” IEEE Transactions on Information Theory, vol. 57, no. 3, pp. 1520–1529, 2011.
 S. Lin and D. J. Costello, Error Control Coding, Prentice-Hall, New York, NY, USA, 2nd edition, 2004.
 S. P. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
 J. Max, “Quantizing for minimum distortion,” IRE Transactions on Information Theory, vol. 6, no. 1, pp. 7–12, 1960.
 H. L. Van Trees, Detection, Estimation, and Modulation Theory, John Wiley & Sons, New York, NY, USA, 2001.
Copyright
Copyright © 2014 Sahar Movaghati and Masoud Ardakani. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.