Abstract

We study distributed detection and fusion in sensor networks whose sensors have a bathtub-shaped failure (BSF) rate and may or may not send data to the Fusion Center (FC). The reliability of semiconductor devices is usually represented by the failure rate curve (called the "bathtub curve"), which can be divided into the three following regions: the initial failure period, the random failure period, and the wear-out failure period. Since a failed sensor may still operate, but poorly, it is unreasonable to trust the data from such a sensor. Based on this observation, we assign new characteristics to failed sensors. Because of power, communication, and bandwidth constraints, each sensor quantizes its local observation into one bit of information, which is sent to the FC for overall fusion. Under this sensor failure model, the Extended Log-likelihood Ratio Test (ELRT) rule is derived. Finally, the ROC curve for this model is presented. The simulation results show that the ELRT rule improves the robustness of the system compared with the traditional fusion rule, which does not consider sensor failures.

1. Introduction

Distributed detection and decision fusion using multiple sensors have attracted significant attention because of their wide applications, such as security, traffic, battlefield surveillance, and environmental monitoring. Consider a wireless sensor network consisting of distributed sensors and a Fusion Center (FC): each sensor makes a binary hypothesis decision, and these decisions are sent to the FC to make a global decision on whether the target is present [18]. At each local sensor, a local likelihood ratio test is conducted, and the local decision is then sent to the FC, which performs a global log-likelihood ratio test, the optimal test statistic [4, 5]. Decision fusion methods such as the Chair-Varshney Test (CVT) rule, the Counting Rule test, the Generalized Likelihood Ratio Test, and the Bayesian view have been compared in [6]. Considering the imperfect communication channel between the sensors and the FC, the authors in [7] extended the classical parallel fusion structure by incorporating the fading-channel layer that is omnipresent in sensor networks and derived the likelihood-ratio-based fusion rule for fixed local decision devices. In [1], the authors derived the optimal power allocation between training and data at each sensor over orthogonal channels subject to path loss, Rayleigh fading, and Gaussian noise. In [9], the authors proposed a new adaptive decentralized soft-decision combining rule for multiple-sensor distributed detection systems with data fusion that does not require knowledge of the false alarm and detection probabilities of the distributed sensors. Based on multiple decisions from each individual sensor, and assuming that the probability distributions are not known, a fusion rule that is insensitive to instabilities of the sensors' probability distributions was derived in [10]. In [11], the authors studied the efficient design of a sensor network, using a graph model as the formal tool to describe the interaction among the nodes, the distributed estimation problem, and the network topology. Decentralized detection through distributed sensors that perform level-triggered sampling and communicate with an FC via noisy channels was studied in [12]. The problem of decentralized detection in wireless sensor networks in the presence of one or more classes of misbehaving nodes was considered in [13]. In [14], the authors considered the problem of sensor resource management for multitarget tracking in decentralized tracking systems. In [15], the decentralized estimation of unknown random vectors using wireless sensors and an FC was studied. In [16], the authors proposed a unified framework for sensor management in multimodal sensor networks, inspired by the trading behavior of economic agents in commercial markets. A computationally efficient fusion rule that injects a deliberate random disturbance into the local sensor decisions before fusion was designed in [8].

However, failed sensors, which may or may not send their untrusted data to the FC, are not taken into account in these works. Most sensor fusion methods rely on the strong assumption that the sensors operate according to predetermined characteristics at all times. But when a sensor fails or wears out, the detection system exhibits behavior different from what is expected. In [17], it has been shown that the Bayes Risk Error is a Bregman divergence between the actual and estimated prior probabilities. In [18], the authors studied a different approach and constructed a sensor-failure-robust fusion rule by including a sensor failure model and minimizing the expected Bayesian risk.

In this paper, we propose a new approach that extends the traditional log-likelihood ratio test by including a sensor failure model. We assume that failed sensors that still send data to the FC follow new characteristics; failed sensors that no longer send data to the FC are of no further concern. To study sensor failures, reliability analysis is introduced. For many mechanical and electronic components, the failure rate function (or hazard rate function), which determines the number of failures occurring per unit time, has a bathtub shape. Bathtub-shaped failure rate life distributions have been derived in [19, 20]. We can then calculate the probability of sensor failure from the initial number of sensors, the number of sensors received by the FC, and the number of failed sensors obtained through the failure rate. Our method makes the log-likelihood ratio test robust by extending the traditional log-likelihood ratio test to include the BSF model.

The remainder of this paper is organized as follows. The sensor model is presented in Section 2. In Section 3, we introduce reliability analysis and propose the concept of sensor failure. In Section 4, the ELRT rule, which accounts for sensor failures, is derived, and numerical results based on Monte Carlo simulation are given. Finally, we conclude in Section 5.

2. Sensor Model

We consider $N$ sensor nodes which are deployed in a Region Of Interest (ROI) with area $A$. Noises at local sensors are independent and identically distributed (i.i.d.) and follow a Gaussian distribution with zero mean and variance $\sigma^2$, which can be written as
$$ n_i \sim \mathcal{N}(0, \sigma^2), \quad i = 1, \ldots, N. \quad (1) $$

We assume that sensors make their local decisions independently, without collaborating with other sensors. Each sensor makes a binary local decision by choosing from
$$ H_1: \; x_i = s_i + n_i, \qquad H_0: \; x_i = n_i, \quad (2) $$
where $H_1$ is the hypothesis of target presence and $H_0$ is the hypothesis of target absence; $x_i$ is the signal received by sensor $i$, $s_i$ is the signal from the target as seen at sensor $i$, and $n_i$ denotes the noise observed by sensor $i$. The signal strength decays as the sensor moves away from the target and follows the isotropic attenuation power model defined in [3]:
$$ s_i^2 = P_0 \left( \frac{d_0}{d_i} \right)^{n}, \quad (3) $$
where $P_0$ is the original signal power from the target at a reference distance $d_0$ and $d_i$ represents the Euclidean distance between the target and sensor $i$. The signal attenuation exponent $n$ ranges from 2 to 3.

Assume that local sensor $i$ uses the threshold $\tau$ to make the local binary decision $I_i$, in which $I_i = 1$ represents that the target is present and $I_i = 0$ represents that the target is absent. All decisions are then sent to the FC, denoted by $\mathbf{I} = (I_1, \ldots, I_N)$, to make a final decision $u_0$, where $u_0 = 1$ represents that the target is present and $u_0 = 0$ represents that the target is absent. According to the Neyman-Pearson lemma [6], the local sensor-level false alarm rate and probability of detection are given by
$$ P_{fa} = Q\!\left(\frac{\tau}{\sigma}\right), \qquad P_{d,i} = Q\!\left(\frac{\tau - s_i}{\sigma}\right), \quad (4) $$
where $Q(\cdot)$ is the complementary distribution function of the standard Gaussian.
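To make the mapping from geometry to the local rates in (4) concrete, the following Python sketch computes $P_{fa}$ and $P_{d,i}$ for a set of sensor positions. It assumes the power-law attenuation model of (3); the function name, the clamping at $d_0$, and all default parameter values are our own illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def local_rates(target_xy, sensor_xy, P0=1.0, d0=1.0, n_decay=2.0,
                sigma=1.0, tau=1.0):
    """Per-sensor false-alarm rate and detection probability, per (4).

    n_decay plays the role of the attenuation exponent n in (3).
    P0, d0, n_decay, sigma, and tau are illustrative values.
    """
    # Euclidean distances d_i between the target and each sensor.
    d = np.linalg.norm(sensor_xy - target_xy, axis=1)
    # Signal amplitudes s_i from (3); distances are clamped at d0 so the
    # model is not evaluated inside the reference distance.
    s = np.sqrt(P0) * (d0 / np.maximum(d, d0)) ** (n_decay / 2)
    P_fa = norm.sf(tau / sigma)        # Q(tau / sigma), common to all sensors
    P_d = norm.sf((tau - s) / sigma)   # Q((tau - s_i) / sigma)
    return P_fa, P_d
```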

3. Analysis of Reliability

3.1. Failure of Sensors

Definition 1. A sensor failure means that the sensor's lifetime has reached, or is reaching, its end. Failed sensors are divided into two classes: Little Bad (LB) sensors, which still send their untrusted data to the FC, and Fully Bad (FB) sensors, which no longer work.
We define $F_i$ as the event that sensor $i$ fails. Obviously, the event of sensor failure is a Bernoulli event, with $\rho = P(F_i)$ denoting the probability of the event of failure of sensor $i$.

A sensor deployment example is shown in Figure 1. In this paper, however, for simplicity we assume that the sensors are located uniformly in the ROI. The circles represent the good sensors, the squares the FB sensors, the triangles the LB sensors, and the star the target. Our attention, of course, is not on the state of the FB sensors; the state of the LB sensors is our interest. According to Definition 1, the data sent by the LB sensors is untrusted; that is, their characteristics $P_{fa}$ and $P_{d,i}$ have changed. We assume that the LB sensors' characteristics corresponding to $P_{fa}$ and $P_{d,i}$ are denoted by $\tilde{P}_{fa}$ and $\tilde{P}_{d}$, respectively.

3.2. Analysis of Reliability

Reliability life data analysis refers to the study and modeling of observed product lifetimes. Life data can be the lifetimes of products in the marketplace, such as the time a product operated successfully or the time it operated before failing. These lifetimes can be measured in hours, miles, cycles-to-failure, stress cycles, or any other metric with which the life or exposure of a product can be measured. All such data of product lifetimes can be encompassed in the term life data or, more specifically, product life data. In reliability, one often characterizes a lifetime distribution through three functions [21]: the reliability function, the failure rate function, and the mean residual life.

Furthermore, for many mechanical and electronic components, the failure rate function has a bathtub shape. As shown in Figure 2, the failure rate has three stages: the initial failure period, which is considered to occur when a latent defect is formed; the random failure period, which occurs once devices having latent defects have already failed and been removed; and the wear-out failure period, which occurs due to the aging of devices from wear and fatigue. In the random failure period, the remaining high-quality devices operate stably. The failures that occur during this period can usually be attributed to randomly occurring excessive stress, such as power surges and software errors.

In [19], the authors studied a simple model obtained by adding two Weibull survival functions. The failure rate function is given by
$$ h(t) = \alpha\beta(\alpha t)^{\beta - 1} + \gamma\delta(\gamma t)^{\delta - 1}, \quad (5) $$
where $\alpha$, $\beta$, $\gamma$, and $\delta$ are related to the life data of the products. Moreover, Bebbington et al. [20] argue that it is worth adding a constant, say $\lambda$, to the failure rate function. This yields the reduced additive Weibull failure rate function
$$ h(t) = \alpha\beta(\alpha t)^{\beta - 1} + \frac{\alpha}{\beta}(\alpha t)^{1/\beta - 1} + \lambda. \quad (6) $$
In (6), we let $\gamma = \alpha$ and $\delta = 1/\beta$ to simplify (5).
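As a quick illustration of (6), the following sketch evaluates the reduced additive Weibull failure rate and the corresponding reliability function $R(t) = \exp\{-(\alpha t)^{\beta} - (\alpha t)^{1/\beta} - \lambda t\}$, obtained by integrating (6) in closed form. The parameter values are hypothetical; the paper does not state the ones used in its simulations.

```python
import numpy as np

# Hypothetical reduced-additive-Weibull parameters (not from the paper).
ALPHA, BETA, LAM = 0.1, 3.0, 0.05

def failure_rate(t, alpha=ALPHA, beta=BETA, lam=LAM):
    """Bathtub-shaped hazard of (6); valid for t > 0 (it diverges at t = 0)."""
    return (alpha * beta * (alpha * t) ** (beta - 1)
            + (alpha / beta) * (alpha * t) ** (1 / beta - 1) + lam)

def reliability(t, alpha=ALPHA, beta=BETA, lam=LAM):
    """R(t) = exp(-cumulative hazard), integrating (6) in closed form."""
    return np.exp(-((alpha * t) ** beta + (alpha * t) ** (1 / beta) + lam * t))
```

With $\beta > 1$, the first term of (6) is increasing, the second is decreasing, and $\lambda$ sets the floor of the random failure period, which together produce the bathtub shape.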

3.3. Probability of Random Variable

Let $N$ be the initial number of sensors and let $K(t)$ be the number of sensors received by the FC at time $t$. Let $N_f(t)$ be the number of failed sensors, including all the FB and LB sensors, at time $t$.

From (6), the reliability function is $R(t) = \exp\{-(\alpha t)^{\beta} - (\alpha t)^{1/\beta} - \lambda t\}$, so we know
$$ N_f(t) = \left\lceil N \big(1 - R(t)\big) \right\rceil, \quad (7) $$
where $\lceil x \rceil$ denotes the smallest integer which is not less than $x$.

The number of good sensors at time $t$ can be easily calculated as follows:
$$ N_g(t) = N - N_f(t). \quad (8) $$

If $K(t) > N_g(t)$, the received sensors include LB sensors, whose number is
$$ N_{LB}(t) = K(t) - N_g(t), \quad (9) $$
where $N_{LB}(t)$ denotes the number of LB sensors received by the FC at time $t$.

Among the $K(t)$ received sensors, each sensor is equally likely to have failed. So, the probability of $F_i$ can be calculated as follows:
$$ \rho = P(F_i) = \frac{K(t) - N_g(t)}{K(t)} \, \mathbb{1}_{\mathbb{Z}^{+}}\!\big(K(t) - N_g(t)\big), \quad (10) $$
where $\mathbb{1}_{\mathbb{Z}^{+}}(\cdot)$ is the indicator function and $\mathbb{Z}^{+}$ is the positive integer set: $\mathbb{1}_{\mathbb{Z}^{+}}(x) = 1$ if $x \in \mathbb{Z}^{+}$; otherwise, $\mathbb{1}_{\mathbb{Z}^{+}}(x) = 0$.
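A minimal sketch of the bookkeeping in (7)-(10), reusing `reliability()` from the previous sketch; the guard against a non-positive $K(t) - N_g(t)$ implements the indicator in (10).

```python
import math

def failure_probability(N, K_t, t):
    """Probability rho of (10) that a sensor received by the FC is LB.

    N: initial number of sensors; K_t: number of sensors received by
    the FC at time t. Relies on reliability() from the sketch above.
    """
    N_f = math.ceil(N * (1.0 - reliability(t)))   # failed sensors, (7)
    N_g = N - N_f                                 # good sensors, (8)
    n_lb = K_t - N_g                              # LB sensors among received, (9)
    return n_lb / K_t if n_lb > 0 else 0.0        # indicator in (10)
```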

4. Fusion Rule

4.1. CVT Rule

Based on the local sensors' decision set, the traditional log-likelihood ratio test rule is given by [4]
$$ \Lambda_{CVT} = \sum_{i=1}^{K(t)} \left[ I_i \log \frac{P_{d,i}}{P_{fa}} + (1 - I_i) \log \frac{1 - P_{d,i}}{1 - P_{fa}} \right] \; \underset{H_0}{\overset{H_1}{\gtrless}} \; \tau_0, \quad (11) $$
where $\tau_0$ is the decision threshold at the FC.
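For reference, a direct Python transcription of (11); `I` holds the received one-bit decisions, and the comparison against the threshold $\tau_0$ is left to the caller.

```python
import numpy as np

def cvt_statistic(I, P_fa, P_d):
    """Chair-Varshney log-likelihood ratio of (11).

    I: array of 0/1 local decisions; P_d: per-sensor detection
    probabilities P_{d,i}; P_fa: common false-alarm rate.
    """
    I = np.asarray(I, dtype=float)
    return np.sum(I * np.log(P_d / P_fa)
                  + (1 - I) * np.log((1 - P_d) / (1 - P_fa)))
```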

4.2. ELRT Rule

At local sensor $i$, the likelihood function under the hypothesis $H_1$ can be expressed as
$$ p(I_i \mid H_1) = p(F_i \mid H_1)\, p(I_i \mid H_1, F_i) + p(\bar{F}_i \mid H_1)\, p(I_i \mid H_1, \bar{F}_i), \quad (12) $$
where $\bar{F}_i$ denotes the complement of the failure event $F_i$.

With the local decisions being mutually independent, the likelihood function at the FC under the hypothesis $H_1$ is
$$ p(\mathbf{I} \mid H_1) = \prod_{i=1}^{K(t)} p(I_i \mid H_1). \quad (13) $$

From (12) and (13), we obtain
$$ \begin{aligned} \log p(\mathbf{I} \mid H_1) &= \sum_{i=1}^{K(t)} \log\big[ p(F_i \mid H_1)\, p(I_i \mid H_1, F_i) + p(\bar{F}_i \mid H_1)\, p(I_i \mid H_1, \bar{F}_i) \big] \\ &= \sum_{i=1}^{K(t)} \log\big[ \rho\, p(I_i \mid H_1, F_i) + (1 - \rho)\, p(I_i \mid H_1, \bar{F}_i) \big] \\ &= \sum_{i \in S_1} \log\big[ \rho \tilde{P}_{d} + (1 - \rho) P_{d,i} \big] + \sum_{i \in S_0} \log\big[ \rho (1 - \tilde{P}_{d}) + (1 - \rho)(1 - P_{d,i}) \big], \end{aligned} \quad (14) $$
where the second equality of (14) comes from the fact that $F_i$ and $H_1$ are uncorrelated, so that $p(F_i \mid H_1) = P(F_i) = \rho$. The third equality of (14) is the application of (4). $S_1$ (or $S_0$) denotes the set of sensors with $I_i = 1$ (or $I_i = 0$).

Similarly, under the hypothesis $H_0$, we obtain
$$ p(I_i \mid H_0) = \rho\, p(I_i \mid H_0, F_i) + (1 - \rho)\, p(I_i \mid H_0, \bar{F}_i), \quad (15) $$
$$ \log p(\mathbf{I} \mid H_0) = \sum_{i \in S_1} \log\big[ \rho \tilde{P}_{fa} + (1 - \rho) P_{fa} \big] + \sum_{i \in S_0} \log\big[ \rho (1 - \tilde{P}_{fa}) + (1 - \rho)(1 - P_{fa}) \big]. \quad (16) $$

Finally, combining (14) and (16) into (11), the ELRT rule is therefore
$$ \Lambda_{ELRT} = \sum_{i \in S_1} \log \frac{\rho \tilde{P}_{d} + (1 - \rho) P_{d,i}}{\rho \tilde{P}_{fa} + (1 - \rho) P_{fa}} + \sum_{i \in S_0} \log \frac{\rho (1 - \tilde{P}_{d}) + (1 - \rho)(1 - P_{d,i})}{\rho (1 - \tilde{P}_{fa}) + (1 - \rho)(1 - P_{fa})} \; \underset{H_0}{\overset{H_1}{\gtrless}} \; \tau_0. \quad (17) $$

The advantage of such a formulation is that it allows us to consider all states of the sensors. Such a fusion rule obeys our intuition in the extreme cases. When $\rho = 0$ in (10), no sensors fail, and the ELRT rule reduces to the traditional fusion rule defined in (11). As $\rho$ increases, the ELRT rule increasingly outperforms the traditional rule, which does not consider sensor failures.
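A transcription of (17) in the same style as the CVT sketch; the LB characteristics $\tilde{P}_{fa}$ and $\tilde{P}_{d}$ enter as arguments (illustrative inputs here, since the paper designs them separately), and setting `rho = 0` recovers `cvt_statistic()` exactly, matching the degenerate case discussed above.

```python
import numpy as np

def elrt_statistic(I, P_fa, P_d, rho, P_fa_lb, P_d_lb):
    """ELRT log-likelihood ratio of (17).

    rho: failure probability from (10); P_fa_lb, P_d_lb: assumed LB
    characteristics (tilde P_fa, tilde P_d).
    """
    I = np.asarray(I, dtype=float)
    p1_h1 = rho * P_d_lb + (1 - rho) * P_d      # P(I_i = 1 | H1), from (14)
    p1_h0 = rho * P_fa_lb + (1 - rho) * P_fa    # P(I_i = 1 | H0), from (16)
    return np.sum(I * np.log(p1_h1 / p1_h0)
                  + (1 - I) * np.log((1 - p1_h1) / (1 - p1_h0)))
```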

4.3. Location of the Target

From (3) and (4), we know that $P_{d,i}$ is a function of the target location if $P_0$ is given. So, the CVT rule and the ELRT rule both require knowledge of the coordinates of the target, denoted by $(x_t, y_t)$. In [22], the authors proposed to use the Generalized Likelihood Ratio Test (GLRT) statistic, which uses the Maximum Likelihood Estimate (MLE) of $(x_t, y_t)$ assuming that $H_1$ is true. The authors showed that the GLRT fusion rule's performance was close to that of the case in which the target's location is fully known to the FC. In [6], a uniform prior for the target location is chosen when nothing else is available. In this paper, however, the coordinates of the target are given, to simplify the process, because our major interest is the failure problem.

4.4. Numerical Results

Figure 3 shows the ROC curves for the two different test statistics: the CVT rule and the ELRT rule. The curves are obtained by Monte Carlo simulation. For the LB sensors, the characteristics $P_{fa}$ and $P_{d,i}$ change to $\tilde{P}_{fa}$ and $\tilde{P}_{d}$, which are chosen to reflect the LB sensors' poor performance. Only the worst case, in which the failed sensors are all LB sensors and make wrong decisions under either hypothesis, is taken into consideration. Note that the performance of the CVT rule without failed sensors serves as an upper bound for the other fusion methods, but the performance of the CVT rule with failed sensors decreases rapidly, and it is outperformed by the ELRT rule. So, the ELRT rule improves the robustness of the system.
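To indicate how such ROC curves can be generated, here is a compact Monte Carlo sketch that ties the previous functions together. All numeric settings (ROI size, time, LB behavior, trial count) are placeholders, since the paper's exact simulation parameters are not recoverable from the text; sweeping a global threshold over the returned statistics traces one ROC curve per rule.

```python
import numpy as np

def simulate_statistics(n_mc=5000, N=100, t=4.0, sigma=1.0, tau=1.0,
                        P_fa_lb=0.4, P_d_lb=0.4, seed=0):
    """Monte Carlo sketch in the spirit of Figure 3 (illustrative settings).

    Returns CVT and ELRT statistics under H0 and H1; an ROC point is the
    (false-alarm, detection) pair obtained for each global threshold.
    """
    rng = np.random.default_rng(seed)
    side = 20.0                                   # assumed ROI side length
    sensors = rng.uniform(0.0, side, size=(N, 2))
    target = np.array([side / 2, side / 2])       # known target location
    P_fa, P_d = local_rates(target, sensors, sigma=sigma, tau=tau)
    rho = failure_probability(N, K_t=N, t=t)      # all N sensors report
    n_lb = round(rho * N)

    stats = {"cvt": {0: [], 1: []}, "elrt": {0: [], 1: []}}
    for h in (0, 1):
        p_one = P_d if h == 1 else np.full(N, P_fa)   # P(I_i = 1 | H_h)
        for _ in range(n_mc):
            I = (rng.uniform(size=N) < p_one).astype(float)
            lb = rng.choice(N, size=n_lb, replace=False)
            I[lb] = 1 - h               # worst case: LB sensors decide wrongly
            stats["cvt"][h].append(cvt_statistic(I, P_fa, P_d))
            stats["elrt"][h].append(
                elrt_statistic(I, P_fa, P_d, rho, P_fa_lb, P_d_lb))
    return stats
```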

In order to reflect the influence of failed sensors, we perform more simulations. In Figure 4, the ROC curves corresponding to the CVT at different times are plotted. At each time, we assume that all the failed sensors send either 1 or 0 to the FC; that is, when a sensor fails, it sends 1 (or 0) to the FC all the time, regardless of the true environment. The failed sensors are randomly selected from all the sensors. In Figure 4, we see that the performance of the system decreases over time; at the later times, the ROC curves decrease sharply. Another observation is that at early times the ROC curves are close to each other. The reason for these phenomena is that the number of failed sensors changes over time: at early times, a small number of failed sensors have a negative but minor effect on the performance of the system, while at later times, a large number of failed sensors have a major negative effect.

To show how the number of failed sensors changes over time, a bar chart of the number of failed sensors is plotted in Figure 5. The major parameters are set as before. One observation is that the number of failed sensors begins to increase slowly at early times and then increases rapidly. Another observation is that the cutoff point between the random failure period and the wear-out period lies within the range of 5 to 6. The curve of the number of failed sensors over time thus also reflects the features of the bathtub-shaped failure rate.

In Figure 6, the ROC curves corresponding to the ELRT over time are plotted. In this simulation, we again assume that each sensor sends 1 (or 0) to the FC all the time once it fails. The same time instants are chosen to conduct the experiment, and the failed sensors are randomly selected from all the sensors. The figure shows that the ROC curves decrease over time, with a sharp decline at later times because of the large number of failed sensors. These features of the ELRT are similar to those of the CVT: the performance of the system is affected by the failed sensors in both cases.

To compare the ROC curves of the ELRT and the CVT, their comparisons at each time are plotted in Figure 7. All the basic parameters are set as before. In this simulation, we again assume that each sensor sends 1 (or 0) to the FC all the time once it fails, and the failed sensors are randomly selected from all the sensors. From Figure 7(a) to Figure 7(i), all the ROC curves of the ELRT lie above those of the CVT; that is, the CVT is outperformed by the ELRT rule in the presence of failed sensors. At the latest times, the performance of the system declines sharply because of the large number of failed sensors.

In summary, the performances of the ELRT and the CVT are both affected by bathtub-shaped failure. Because the ELRT takes the failed sensors into account, its performance is better than that of the CVT, which does not, whenever the failed sensors follow the bathtub-shaped failure model.

5. Conclusion

We have derived the ELRT rule based on the BSF model for distributed target detection in sensor networks. It is assumed that each sensor receives a signal that attenuates as a function of the distance between the target and the local sensor. The BSF model, which builds on reliability analysis, divides the sensors' lifetimes into three stages: the initial failure period, the random failure period, and the wear-out failure period. We have shown that the ELRT fusion rule outperforms the CVT rule, which does not consider failed sensors, and thus improves the robustness of the system. In the future, we will investigate the design of the LB characteristics $\tilde{P}_{fa}$ and $\tilde{P}_{d}$ for sensors that fail, as well as the estimation of the target's location.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by National Natural Science Foundation of China (Grant no. 61001086) and Fundamental Research Funds for the Central Universities (Grant no. ZYGX2011X004).