International Journal of Distributed Sensor Networks
Volume 2012 (2012), Article ID 482191, 13 pages
Subjective Logic-Based Anomaly Detection Framework in Wireless Sensor Networks
1Key Laboratory of Data Engineering and Knowledge Engineering, MOE, Beijing 100872, China
2School of Information, Renmin University of China, Beijing 100872, China
3Institute of Electronic Technology, Information Engineering University, Zhengzhou 450004, China
Received 15 June 2011; Revised 25 September 2011; Accepted 28 September 2011
Academic Editor: Yuhang Yang
Copyright © 2012 Jinhui Yuan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In existing anomaly detection approaches, a sensor node often turns to its neighbors to further determine whether its data are normal when the node itself cannot decide. However, previous works restrict neighbors' opinions to just "normal" or "anomalous" and do not consider the neighbors' uncertainty about the node's data. In this paper, we propose the SLAD (subjective logic-based anomaly detection) framework. It redefines the notion of opinion, derived from subjective logic theory, to take this uncertainty into account. Furthermore, it fuses the opinions of the neighbors to obtain a quantitative anomaly score for the data. Simulation results show that the SLAD framework improves the performance of anomaly detection compared with previous works.
Recently, wireless sensor networks (WSNs) have been widely used in military surveillance, traffic monitoring, habitat monitoring, object tracking, and so forth [1, 2]. Such networks deploy large numbers of sensor nodes with sensing, data processing, and wireless communication capabilities in the monitored area. Sensor nodes are resource constrained and susceptible to interference from the environment, so their sensing data are often unreliable. Potential sources of anomalous data in WSNs are classified into three categories: faults (errors), events, and malicious attacks [3, 4]. When sensor nodes fail, their sensing data are faulty data. As the amount of faulty data grows, it strongly affects user queries; such data should therefore be eliminated or corrected. When some event happens, the sensing data of the nodes in the affected area are informational data, which differ from normal data; they should be reported to the user for further decision. The third potential source of anomalous data is attacks, which are beyond the scope of this paper. Anomaly detection is considered as a solution to detect faulty data and informational data.
In existing anomaly detection approaches, sensor nodes turn to their neighbors to further determine whether the data is normal when the node itself cannot decide. In this process, existing solutions, including voting algorithms [6, 7] and aggregation frameworks [8–10] which detect anomalies while aggregating data, restrict neighbors' opinions to just normal or anomalous. However, no neighbor can always say that the data of the node are absolutely normal or anomalous; previous works thus neglect something we call uncertainty. Taking into account the degree to which neighbors consider the data normal or anomalous describes the neighbors' view more realistically, and consequently the performance of anomaly detection can be improved.
In this paper, we propose the SLAD (subjective logic-based anomaly detection) framework, which takes uncertainty into account to improve the performance of anomaly detection. It includes three phases: preprocessing, self-monitoring, and cooperant detecting. Preprocessing runs on the sink and self-monitoring executes on each node. After these two phases, each sensor node sends its suspicious data to its neighbors for further determination. The third phase is the key of our framework.
The central element of SLAD is ESLB (extended subjective logic-based algorithm), the key of the third phase mentioned above. Before plunging into the details of ESLB, we first propose SLB (subjective logic-based algorithm), which describes the basic form of our approach. In SLB, each neighbor gives a quantitative opinion on the suspicious data based on subjective logic theory. After fusing the opinions of all the neighbors, SLB obtains a quantitative anomaly score, which quantifies the degree to which the suspicious data is considered an anomaly. We extend SLB to ESLB in order to avoid the impact of those neighbors whose own data are suspicious, effectively distinguish faulty data from informational data, and take the historical spatial correlations of the node and its neighbors into account.
The main contributions of this paper are as follows.
(i) We propose the SLAD framework, which takes into account the neighbors' uncertainty about the node's data. It redefines the notion of opinion, derived from subjective logic theory, and describes the neighbors' view of the node's data more realistically.
(ii) We present the SLB and ESLB algorithms. SLB fuses all the neighbors' opinions on the node's data to obtain a quantitative anomaly score. We extend SLB to ESLB to improve the performance further.
(iii) We construct experiments to verify the detection performance of the proposed framework. Simulation results show that the SLAD framework is effective and gains considerable improvement in anomaly detection performance compared with previous approaches.
The rest of the paper is organized as follows. Section 2 summarizes the related work. Section 3 presents preliminary concepts. Our SLAD framework is introduced in Section 4. Section 5 gives the SLB algorithm and its extension ESLB. Section 6 discusses some problems not covered in the preceding sections. Section 7 describes the experimental setup and evaluates the performance of the framework on a real data set. Finally, Section 8 concludes the paper.
2. Related Work
Much effort has been devoted in recent years to detecting anomalies in wireless sensor networks. We briefly survey the recent research relevant to our work as follows.
The first category involves the voting algorithm and its improved variants. Authors in  propose the majority voting algorithm. If some node suspects that its sensing data may be anomalous, it sends the data to all its one-hop neighbors. Each neighbor compares the data with its own sensing data. If the difference is less than a threshold, the neighbor casts a positive vote; otherwise it casts a negative vote. The node collects all its neighbors' votes and reaches a determination: if positive votes outnumber negative votes, the data is considered normal; otherwise it is anomalous. Based on the majority voting algorithm, [6, 7] propose a weighted voting algorithm, which gives greater weights to neighbors closer to the node. Authors in  discuss how to detect faulty (erroneous) data in WSNs. They use the extended Jaccard coefficient to compute the similarity degree between sensor nodes and set different levels for the nodes to build a correlation network. They present an efficient two-phase voting algorithm called TrustVoting to determine whether the data is faulty. However, the algorithms mentioned above restrict neighbors' opinions to just normal or anomalous. Taking into account the degree to which neighbors consider the data normal or anomalous can improve detection performance.
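As a concrete illustration, the majority voting scheme described above can be sketched as follows; the function name and the scalar-reading representation are our own, since the cited papers' notation is elided in this copy.

```python
def majority_vote(suspect, neighbor_readings, threshold):
    """Each neighbor casts a positive vote if its own reading is within
    `threshold` of the suspect reading, and a negative vote otherwise;
    the majority of votes decides the outcome."""
    positive = sum(1 for r in neighbor_readings if abs(r - suspect) <= threshold)
    negative = len(neighbor_readings) - positive
    return "normal" if positive > negative else "anomalous"
```

A weighted variant would replace each unit vote with a weight, for example one inversely proportional to the Euclidean distance between the two nodes.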
The second category detects anomalies during in-network data aggregation. Authors in  propose a robust aggregation framework, which performs similarity tests among sensor nodes to classify particular nodes as anomalous. It returns the aggregate results excluding anomalies, which are also maintained and sent to the users. Furthermore, authors in  define the minimum support MinSupp, the minimum count of sensor nodes needed to establish that the data of a node is normal or anomalous. For a node holding anomalous data, if at least MinSupp nodes have data similar to it, an event is determined to have happened; otherwise the data is faulty. On this basis,  presents an in-network anomaly detection framework based on position-sensitive hash functions. It achieves load balance in the network and, using comparison-pruning methods, assures both detection performance and energy efficiency. Authors in  introduce the PAO framework to reliably and efficiently detect anomalies in WSNs; it can operate over multiple window types and in exact or approximate mode, suiting a variety of application requirements. However, the outputs of the similarity tests in all these frameworks are also only yes or no, depending on a predefined threshold; similar to the voting algorithms, they provide no quantitative determination.
The third category regards the sensing data of the nodes as time-series data to some extent. Authors in [13, 14] construct autoregressive (AR) models for sensor nodes. Every sensor node sends the coefficients of its model to the sink after establishing the AR model, and the sink estimates approximate values for the sensor nodes in the following rounds without receiving real data from them, which greatly reduces the number of messages sent in the network. Once the data are no longer predictable from the AR models, this may be because the models no longer fit the data or because anomalous data have arisen. In the former case, the AR models need to be reconstructed and the above process repeated; otherwise, the anomalous data are identified to be eliminated or corrected. Authors in  use two thresholds to distinguish these cases. However, this approach relies only on the predefined thresholds and does not employ the spatial correlations among sensor nodes. Taking spatial correlations into account would make full use of neighbors' opinions and achieve better anomaly detection performance.
From the related work above, we can conclude that providing quantitative opinions after self-monitoring on each node is very important for anomaly detection in WSNs. In subjective logic theory, subjects express subjective beliefs about the truth of objects with a degree of uncertainty and indicate subjective belief ownership whenever required [15, 16]. Subjective logic provides a quantitative evaluation of the trust degree of an object. From this perspective, judgment among adjacent nodes in WSNs is similar to trust evaluation. We therefore bring subjective logic theory into anomaly detection in WSNs, using it to offer quantitative neighbors' opinions about the suspicious data of a node.
Besides, authors in [17–19] use machine learning techniques to detect anomalies in WSNs, which differ from our solution. Because machine learning techniques are resource intensive and difficult to implement on sensor nodes, the early studies, for example , run their algorithms on a gateway (or sink). Authors in  identify anomalies in critical gas monitoring in an underground coal mine using an offline echo state network. Subsequent research tries to make it possible to run the algorithms on sensor nodes. Authors in  compare and classify the input signals in accordance with prototypes learned online at node level, and then send the classification results to a fusion center for further processing. Based on , the authors in  propose a general anomaly detection framework which unifies fault and event detection; it runs on sensor nodes, distinguishes faults from events, and improves detection performance. The focus of [18, 19] is how to select appropriate machine learning techniques and decrease their complexity so that the algorithms are suitable to run on nodes. This differs from our solution, whose difficulty lies in how to provide quantitative neighbors' opinions to improve detection performance.
3. Preliminaries
Suppose that a sensor network is modeled as an undirected connected graph , where is the set of all sensor nodes (including sensor nodes and one sink , denoted as ) and is the set of edges. An anomaly is defined as a measurement that significantly deviates from the normal pattern of the sensing data. Generally, the anomalies mentioned in this paper include faults (errors) and events, and the anomalous data include faulty (erroneous) data and informational data, respectively.
Since the data of sensor nodes can be regarded as time-series data [13, 14], we construct an AR model on each node. Suppose that the data of node at time can be denoted as , where is the data of at time , is the corresponding coefficient of , and is the random error, normally distributed with mean 0 and variance . After that, given and , we can get and . Among them, is the linear, minimum-variance unbiased estimate of , and is the unbiased estimate of , where . Finally, given the confidence level 1 − α, the confidence interval of the estimated value is
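To make the least-squares estimation concrete, the sketch below fits the simplest case, an AR(1) model without intercept; the paper itself uses a higher-order model (AR(3) in Section 7), but the estimation principle is the same, and the function name is our own.

```python
def fit_ar1(series):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + eps_t,
    together with an unbiased estimate of the noise variance."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    phi = num / den
    residuals = [series[t] - phi * series[t - 1] for t in range(1, len(series))]
    # unbiased variance estimate: divide by (n - p) with p = 1 parameter
    sigma2 = sum(r * r for r in residuals) / (len(residuals) - 1)
    return phi, sigma2
```

On a noiseless AR(1) sequence the estimate recovers the true coefficient exactly, which is a quick sanity check for the fitting code.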
We make the following assumptions about our framework.
(1) The wireless sensor network is static, and the topology does not change during the network lifetime.
(2) All sensor nodes are homogeneous with the same energy and capabilities, and there is only one sink, which has unlimited energy.
(3) Sensor nodes are deployed densely; that is, if some event happens in the network, adjacent sensor nodes (one-hop neighbors) can monitor it at the same time. This can be extended to non-dense deployments, as discussed in Section 6.
4. SLAD Framework
The SLAD framework consists of three phases: preprocessing, self-monitoring, and cooperant detecting. Preprocessing is executed on the sink, self-monitoring runs on each node, and cooperant detecting is a semidistributed algorithm that runs on both the sink and the sensor nodes.
In the first phase, all sensor nodes collect rounds of data and transmit them to the sink. The sink constructs the autoregressive models and uses the least-squares method to estimate the coefficients . As for , it is estimated using the first rounds of data. Using the latest rounds of data and the coefficients , we get the estimated values for the nodes. After that, using the last − rounds of data, we get the confidence interval under the given confidence level 1 − α.
For each node , if its data at time is within the range of its confidence interval , it is considered normal; otherwise it is anomalous. However, if this computation ran on each node at every round, the computational complexity would be high enough to consume too much energy. Consequently, a simple approximation is adopted. Using , each node predicts the latest − rounds of data and compares them with the real data to get the average of the confidence intervals over those − rounds, which is taken as the approximate confidence interval at the given confidence level. This greatly reduces the computational complexity on each node. The sink sends each node a message including the coefficients of its AR model and its respective approximate confidence interval.
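A hypothetical sketch of this approximation is given below. Since the per-round interval formula is elided in this copy, we derive one fixed half-width from the spread of the prediction errors over the last m rounds (using an AR(1) coefficient `phi` for simplicity); each node can then reuse this half-width instead of recomputing a confidence interval every round.

```python
import statistics

def approx_interval_half_width(history, phi, m, z=1.96):
    """Predict the last m rounds with an AR(1) model and use the sample
    standard deviation of the prediction errors, scaled by the normal
    quantile z, as a single reusable interval half-width."""
    errors = [history[t] - phi * history[t - 1]
              for t in range(len(history) - m, len(history))]
    return z * statistics.stdev(errors)
```

A node then flags its reading at round t as suspicious when it falls outside `prediction ± half_width`.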
In the second phase, each node uses the coefficients of its AR model and the most recent rounds of data to predict the current round of data. If the difference between the predicted data and the real data is less than the threshold , SLAD considers the data normal. Otherwise, the data is regarded as suspicious and needs to be determined further among adjacent neighbors. Note that if the data is considered normal, the node does not compute the confidence interval. However, when the data is considered suspicious, the node computes it at level 1 − α and then sends a message to all its one-hop neighbors, which includes and .
In the third phase, the sensor node whose data is suspicious sends its data to all its neighbors, and each neighbor produces an opinion about the suspicious data. SLAD fuses all the neighbors' opinions and gets the expectation of the consensus opinion, from which we obtain the anomaly score of the suspicious data. If the anomaly score exceeds the threshold, the suspicious data is anomalous; otherwise the data is normal. Additionally, to avoid the impact of opinions from neighbors whose own sensing data are suspicious, SLAD removes those opinions from the consensus. To take the historical spatial correlations of the node and its neighbors into account, SLAD computes the neighbors' opinions in a different way. Because faulty data and informational data require different treatment, SLAD proceeds as follows: the suspicious data, if anomalous, is first marked as faulty. When the faulty data of the sensor nodes at this round have all been sent to the sink, the sink distinguishes faulty data from informational data by employing the spatial correlations of adjacent nodes. The third phase is the fundamental step of the SLAD framework and will be discussed in detail in Section 5.
5. Subjective Logic-Based Algorithms
In WSNs, no neighbor can always say that the data of a node are absolutely normal or anomalous; previous works thus neglect something we call uncertainty. Subjective logic theory, on the other hand, is well suited to modeling situations involving uncertainty. This motivates us to bring subjective logic theory into anomaly detection to improve detection performance.
Before detailing the subjective logic-based algorithms, it is necessary to address three problems: the expressiveness of neighbors' opinions, the value assignment of neighbors' opinions, and the consensus of neighbors' opinions. With solutions to these problems, we propose SLB and its extension ESLB.
5.1. Expressiveness of Neighbors’ Opinions
Definition 1. Given sensor network , the opinion of the neighbor about the sensing data of node is defined as follows:
where is the degree of belief that neighbor considers the data of node to be normal. is the degree of disbelief that considers the data of node to be anomalous. is the degree of uncertainty that regards the data of node as normal or anomalous. is the base rate of that regards the data of node as normal or anomalous (i.e., a priori probability).
Definition 1 defines neighbor ’s opinion about node ’s data; its components are combined to express the opinion thoroughly. The next problem is how to determine the opinion of neighbor about the data of node .
5.2. Value Assignment of Neighbors’ Opinions
In this section, we discuss how to determine the neighbor’s opinion . We compute the similarity degree and difference degree of nodes and , denoted and , respectively. It is worth mentioning that the sum of and may be more than one with the above method. In that case, we should scale the sum down to no more than one, as required by subjective logic theory. equals one minus the sum of and .
In assigning these values, we take advantage of the observation that the data of the nodes change smoothly most of the time and change nonsmoothly only occasionally, because the sampling rates of the nodes in WSNs are high. We have already taken the data trends into account while constructing the AR models. So we use only the data at the current round to determine the neighbors' opinions while the data are changing smoothly; only when the data are changing nonsmoothly do we use several rounds of data to get the neighbors' opinions. The data trends of the nodes can be obtained from historical data.
The detailed opinion of neighbor about the data of node is determined as follows.
(1) If the data are changing smoothly, where and are the data of nodes and , respectively, at the current round. If , the sum is scaled down to no more than one. is the prior probability of ’s opinion about ’s data, that is, the expectation of the prior opinion. Initially it is set to 0.5; that is, considers the data of equally likely to be normal or anomalous.
(2) If the data are changing nonsmoothly, where , supposing the current round is , and are the vector data of nodes and from round 1 to round , and are the th elements of and , and is the length of the vector data ( and ). If , the sum is scaled down to no more than one. is the same as above.
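The exact similarity and difference formulas are elided in this copy of the paper, so the sketch below shows one assignment consistent with subjective logic's standard evidence-to-opinion mapping; the threshold-based evidence counting over the vector data is our assumption, not the paper's formula.

```python
def opinion_from_readings(readings_i, readings_j, threshold, base_rate=0.5):
    """Count rounds on which the two nodes agree (within `threshold`) as
    positive evidence r and the rest as negative evidence s, then apply
    subjective logic's mapping b = r/(r+s+2), d = s/(r+s+2), u = 2/(r+s+2),
    so that b + d + u = 1 always holds."""
    r = sum(1 for x, y in zip(readings_i, readings_j) if abs(x - y) <= threshold)
    s = len(readings_i) - r
    k = r + s + 2
    return {"b": r / k, "d": s / k, "u": 2 / k, "a": base_rate}
```

Note how the uncertainty mass u shrinks as more rounds of evidence are accumulated, which is the behavior the framework relies on.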
5.3. Consensus of Neighbors’ Opinions
According to Lemma 2, the opinions of neighbors and about node ’s data can be fused to get the consensus, which is a new opinion about the proposition that node ’s data is anomalous.
Lemma 2. Given that and are the opinions of neighbors and about the data of node , the consensus of the two neighbors' opinions about the proposition that node ’s data is anomalous can be computed as follows. Let .
Proof. From , we know that the posteriori probability density functions (ppdf) of binary events can be expressed as
where , , , .
Here , and represent positive evidence, negative evidence, and relative atomicity (base rate), respectively. The probability expectation value is .
Let and be two ppdfs held by the neighbor nodes and , respectively, regarding the truth of the suspicious sensing data of node . The combined ppdf is defined such that :
Let be a neighbor node’s opinion about the suspicious sensing data, and let be the same neighbor node’s probability estimate regarding the same data. For , that is, , and , it is easy to get , , where .
The following proves that the equations of the lemma are correct. Because we want the consensus about the proposition that node ’s data is anomalous, we obtain the equations by exchanging and in (9), respectively:
Substituting (17) into (12), we get
Substituting (17) into (13), we obtain
If , let ; then equation (7) follows similarly to the above.
For brevity, we denote as , where is a new operator similar to the consensus operator of subjective logic. The expectation of the consensus of the neighbors' opinions about the data of node gives the neighbors' overall judgment about the data of . Given the consensus opinion , the expectation of the opinion is .
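Since the lemma's equations are elided in this copy, the sketch below shows the standard subjective-logic consensus operator that the proof follows, together with the expectation and a recursive fusion over many neighbors; the base-rate combination here is a simple average, which may differ from the paper's exact rule.

```python
from functools import reduce

def consensus(o1, o2):
    """Fuse two opinions (b, d, u, a); requires at least one of the two
    uncertainties to be non-zero so the normaliser k is positive."""
    b1, d1, u1, a1 = o1
    b2, d2, u2, a2 = o2
    k = u1 + u2 - u1 * u2
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k,
            (a1 + a2) / 2)  # assumed base-rate rule

def expectation(opinion):
    """Probability expectation E = b + a * u of an opinion."""
    b, _, u, a = opinion
    return b + a * u

def fuse_all(opinions):
    """Recursively fuse many neighbours' opinions, as in Theorem 4."""
    return reduce(consensus, opinions)
```

The fused opinion still satisfies b + d + u = 1, and its uncertainty is smaller than either input's, reflecting that two witnesses together are more certain than either alone.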
Example 3. Suppose that the opinions of neighbors and about the data of node are and at some round, respectively, then the consensus of the opinions is , and the expectation is .
Each node may have many neighbors in a WSN, so we need to fuse the opinions of all neighbors into one consensus opinion. Suppose that node has neighbors whose opinions about the data of are . To get the neighbors' overall judgment about ’s data, we fuse all the neighbors' opinions, denoted , that is, . The consensus process is applied recursively using Theorem 4.
Theorem 4. Given neighbors of node whose opinions about the data of are , the consensus of their opinions about the proposition that node ’s data is anomalous is , and it can be computed as follows:
Proof. We prove the theorem by mathematical induction. (1) If , , which shows that (21) is true. (2) Suppose that (21) is true for ; that is,
we need to prove that (21) is true while ; that is,
It is equivalent to (i)
For , we can get the following: (ii)
Equation (26) = (28); that is, .
It is easy to know that the others (, and ) can be proved as above. So (21) is true while .
The above procedure illustrates that (21) is true while is no less than 2 and no more than . That is, the theorem is proved to be true as follows:
Given neighbors of node whose opinions about the data of are , the consensus of all the neighbors' opinions can be obtained through the computation of Theorem 4; the expectation of the consensus is , where . The anomaly score of node ’s data is defined according to the expectation .
Definition 5. Suppose that the consensus of all the neighbors’ opinions about node ’s data is and the expectation of the consensus is , then the anomaly score of node is defined as follows:
One special case remains. When node has only one neighbor, Lemma 2 cannot be applied directly. To handle this, we suppose an imaginary neighbor holding the opinion , and this neighbor takes part in the consensus with the real neighbor. Thus, we can still get the consensus according to Lemma 2.
In the following sections, we present two algorithms to further determine whether suspicious data are normal or anomalous. The notations used to describe the algorithms are shown in Table 1.
5.4. SLB Algorithm
Based on the above discussion, the subjective logic-based algorithm (SLB) proceeds as follows; it is executed among the node and its neighbors. Suppose node has neighbors . Depending on whether the suspicious data of node is changing smoothly or nonsmoothly, each neighbor node gives its opinion about the data of node (Lines 1–10). Using Theorem 4, we compute the consensus opinion of all the neighbors of node (Line 11). The expectation of the consensus opinion is obtained through the equation (Line 12), and the anomaly score is then obtained through Definition 5 (Line 13). If the anomaly score is less than the predefined threshold , the suspicious data of node is considered normal; otherwise it is considered anomalous (Lines 14–18) (Algorithm 1).
5.5. ESLB Algorithm
The SLB algorithm fuses the opinions of all the neighbors about the data of the node to decide whether the data is normal or anomalous. However, it has the following disadvantages. (1) In the judgment process among the node and its neighbors, the opinions of neighbors whose own data are suspicious are also included, which degrades detection performance; the effect becomes more severe as the proportion of anomalous data increases. (2) It does not distinguish faulty data from informational data. (3) The base rate of all the neighbors' opinions is set to 0.5, which is not reasonable, as it does not take the historical information of the node and its neighbors into account.
To overcome the disadvantages of SLB, we extend SLB to ESLB. For the first point, ESLB removes the opinions of those neighbors whose data are suspicious. For the second point, ESLB employs the correlations among anomalous data: if those data are spatially correlated, they are informational data; otherwise they are faulty data. For the third point, we define the base rate as follows, taking historical information into account.
Suppose that and are the latest rounds of historical data of node and neighbor from the preprocessing phase; the historical opinion of neighbor about node ’s data is . We set the base rate of the historical opinion to 0.5; that is, . Then we have the following definition.
Definition 6. Given the historical opinion of neighbor about node ’s data is , base rate of current opinion of about ’s data is defined as follows:
Theorem 7. Suppose that historical opinion of neighbor about node ’s data is , then base rate of current opinion of about ’s data is .
Proof. From the definition of the expectation, we know that
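Since the theorem's formula is elided in this copy, the sketch below spells out the rule the proof establishes: the base rate of the current opinion is the probability expectation of the historical opinion, with the historical base rate fixed at 0.5 as stated above.

```python
def base_rate_from_history(hist_opinion):
    """Theorem 7's rule: the base rate of the current opinion equals the
    expectation of the historical opinion, E = b_h + a_h * u_h, where the
    historical base rate a_h is fixed at 0.5."""
    b_h, _, u_h, a_h = hist_opinion
    return b_h + a_h * u_h
```

For example, a historical opinion with belief 0.6 and uncertainty 0.2 yields a current base rate of 0.6 + 0.5 × 0.2 = 0.7, biasing the current judgment toward the historically observed agreement.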
We extend SLB to the ESLB algorithm as follows. If the data is suspicious, node turns to its neighbor set for further determination (Lines 1–3). If the data of some neighbors are suspicious, they do not provide opinions about the suspicious data of node ; we exclude these neighbors from the candidate neighbor set and obtain the neighbor set that provides opinions about the data of node (Lines 4–8). Each node in computes the historical opinion of neighbor about ’s data using and , with set to 0.5 (Line 11). We compute the current opinion according to the SLB algorithm, except that is computed through Theorem 7 (Line 12). We then determine whether is normal by calling the SLB algorithm (Line 15). If is not normal, the node sends a message to the sink, which includes node , the current round , the data , and a flag (Line 21). The sink receives all the messages at round and further analyzes the neighbors holding faulty data at this round. If and are faulty at the same time and are spatially correlated, the data are informational; otherwise they are faulty (Lines 24–32) (Algorithm 2).
6. Discussion
Some problems remain to be explained further. First, authors in [8, 9] point out that voting algorithms cannot deal with the situation in which events are detected by sensor nodes that are not adjacent. However, our framework can handle this situation after minor revision. For example, suppose that nodes and are not within the radio range of each other and detect the same event at some time. Suppose that the impact range of events is and the radio range is . Our framework can still detect the event by computing the spatial correlation among -hop neighbors. Since this computation is executed on the sink, it does not increase energy consumption.
Second, to reduce energy consumption, we use the idea proposed in  to construct and maintain the AR models. (1) It avoids unnecessary data transmission: while the data of the nodes are normal, no data is transmitted in the network; instead, the sink estimates the data according to the AR models. (2) It reduces the computational complexity of constructing and maintaining the AR models, since the main computation is executed on the sink rather than on the sensor nodes. Please refer to  for more detail.
Third, although the thresholds, such as and , are vital to SLAD, we do not pay much attention to them here; we focus on how to quantize more realistically the opinions of the neighbors about a particular sensor node. In this paper, we set the thresholds from historical experience; better methods for setting them could improve SLAD further.
7. Simulation Results
7.1. Experimental Setup
We implement our simulation experiments on the OMNET++ platform . The topology and the sensing data come from the Intel Berkeley Research Lab data set : 54 sensors are deployed in the lab, and the locations of the sensor nodes are known in advance. In the experiments of Section 7.2, the radio range is set to 150; Section 7.3 shows the impact of different radio ranges on detection performance. All experiments assume that the radio links are reliable and do not fail. The sensing data have four attributes, but only temperature is used in our experiments. We use 1000 rounds of data as experimental data and the initial 100 rounds to construct the models.
When using models to predict the sensing data in WSNs, the AR(3) model gives good estimates at a low maintenance cost [13, 14], so we use AR(3) as the model constructed on the nodes. If is set to 3, the AR models can be expressed as . In the beginning, we use the first 100 () rounds as training data, among which the first 90 () rounds are used to estimate the coefficients of the AR model and the last 10 (−) rounds are used to determine the threshold .
If the sensing data are changing nonsmoothly, we use the vector data to compute the neighbors' opinions. Computing the base rate of the neighbors toward the node (historical information) also requires the vector data. So we need to select an appropriate length for the vector data. If is set too small, it cannot express the data trends; if too large, exchanging sensing data consumes too much energy. Figure 1 shows the detection rate of the SLAD framework for different lengths of vector data. When is no more than 5, the detection rate increases noticeably with ; once reaches 5, the detection rate varies little as increases further. Consequently, we set the length of the vector data () to 5.
We randomly change some of the normal data into faulty data and define the faulty rate as the proportion of faulty data in the whole data set. In the experiments, we compare the performance of the different algorithms at various faulty rates; the results are the mean of 20 executions.
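A minimal sketch of this fault-injection step, assuming additive offset faults (the offset value, seed handling, and function name are our own choices, not specified in the paper):

```python
import random

def inject_faults(data, faulty_rate, fault_offset=10.0, seed=None):
    """Replace a random fraction of readings with faulty values.
    Returns the corrupted series and the set of faulty indices."""
    rng = random.Random(seed)
    n_faults = int(len(data) * faulty_rate)
    idx = set(rng.sample(range(len(data)), n_faults))
    corrupted = [x + fault_offset if i in idx else x
                 for i, x in enumerate(data)]
    return corrupted, idx
```

Keeping the set of injected indices makes it straightforward to score any detector against the ground truth afterwards.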
7.2. Comparison of Detection Performance
In order to compare the anomaly detection performance of the different algorithms, we define the detection rate, false detection rate, and undetection rate. In these definitions, the whole experimental data set is denoted as D, the real faulty data set as F, and the faulty data set identified by the anomaly detection algorithms as F'.
Definition 8 (detection rate). It is defined as the faulty data which are determined as faulty, in proportion to the real faulty data: |F ∩ F'| / |F|.
Definition 9 (false detection rate). It is defined as those normal data which are determined as faulty, in proportion to the real faulty data: |(D − F) ∩ F'| / |F|.
Definition 10 (undetection rate). It is defined as those faulty data which are determined as normal, in proportion to the real faulty data: |F − F'| / |F|.
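The three metrics can be computed directly from sets. This sketch follows the verbal definitions above, with the set names D, F, and F' as our assumed labels for the whole data, the real faulty data, and the identified faulty data:

```python
def detection_metrics(D, F, F_hat):
    """Compute (detection rate, false detection rate, undetection rate).
    D: indices of all data; F: real faulty indices; F_hat: indices the
    detector flagged as faulty. All three are Python sets."""
    normal = D - F
    detection_rate = len(F & F_hat) / len(F)
    # Per the verbal definition, normalized by the real faulty set.
    false_detection_rate = len(normal & F_hat) / len(F)
    undetection_rate = len(F - F_hat) / len(F)
    return detection_rate, false_detection_rate, undetection_rate
```

For example, with 10 readings of which {0, 1, 2, 3} are truly faulty and a detector flagging {0, 1, 4}, the rates are 0.5, 0.25, and 0.5.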
In this section, we compare the performance of the following algorithms. (1) MV (majority voting algorithm). (2) DWV (distance-weighted voting algorithm): it uses the Euclidean distance between sensor nodes as the weight; the farther a neighbor, the smaller its weight. Please refer to Section 2 for the details of the MV and DWV algorithms. (3) VWV (value-weighted voting algorithm): unlike DWV, it uses the distance between the data of the node and the data of its neighbors, that is, the difference of the values, as the weight. It considers that neighbors whose data are closer to that of the node should have greater weights. (4) ASLB (autoregressive model and SLB): it combines autoregressive models with the subjective logic-based algorithm (SLB algorithm). (5) SLAD (subjective logic-based anomaly detection framework): it integrates the autoregressive model and the extended subjective logic-based algorithm (ESLB algorithm).
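To make the contrast between DWV and VWV concrete, here is a hedged sketch: the function names, the 1/(1 + d) weighting form, and the +1/−1 vote encoding are our own assumptions, not the cited algorithms' exact formulas.

```python
import math

def dwv_vote(node_pos, neighbors):
    """Distance-weighted vote on whether the node's reading is faulty.
    neighbors: list of (position, value, vote) with vote +1 (faulty) or
    -1 (normal). Weight shrinks as Euclidean distance grows."""
    score = total = 0.0
    for pos, _val, vote in neighbors:
        w = 1.0 / (1.0 + math.dist(node_pos, pos))
        score += w * vote
        total += w
    return score / total  # > 0 means a weighted majority says faulty

def vwv_vote(node_val, neighbors):
    """Value-weighted vote: weight shrinks as the difference between the
    neighbor's reading and the node's reading grows."""
    score = total = 0.0
    for _pos, val, vote in neighbors:
        w = 1.0 / (1.0 + abs(val - node_val))
        score += w * vote
        total += w
    return score / total
```

The two schemes can disagree: a physically close neighbor with a very different reading dominates DWV, while a neighbor with a similar reading dominates VWV regardless of distance.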
Figure 2 shows the detection rates of the five algorithms at different faulty rates. It indicates that the detection rates of all the algorithms are greater than 0.8 when the faulty rate is low. The performances of ASLB and SLAD are better than those of MV, DWV, and VWV, whose detection rates decrease sharply as the faulty rate increases. ASLB keeps a high detection rate when the faulty rate is less than 0.4, drops sharply once the faulty rate reaches 0.4, and continues to fall as the faulty rate increases. The detection rate of SLAD, however, stays above 0.9 even as the faulty rate increases, the best performance of all the algorithms.
Figure 3 presents the false detection rates of these algorithms at different faulty rates. The false detection rate of all the algorithms increases as the faulty rate becomes larger. The false detection rates of MV, DWV, and VWV stay within some specific range as the faulty rate increases, while that of ASLB increases suddenly once the faulty rate reaches 0.4. SLAD holds its false detection rate within a limit of no more than 0.1, much less than the others.
We then study the impact of different faulty rates on the undetection rates of these algorithms. The undetection rates of MV, DWV, and VWV decrease as the faulty rate increases. The undetection rate of ASLB increases abruptly when the faulty rate reaches 0.4 and keeps rising as the faulty rate increases. SLAD preserves a very low undetection rate that does not exceed 0.1 even when the faulty rate is high; although it increases with the faulty rate, it remains much less than that of the other algorithms.
From the above figures, we note that ASLB suddenly changes its detection performance trend when the faulty rate is 0.4. The reason is as follows. When the faulty rate is below 0.4, the number of neighbors whose sensing data are correct is, on average, larger than the number whose data are anomalous, so the detection performance does not decline much. Once the faulty rate reaches 0.4 or more, however, the number of neighbors whose data are faulty is no less than the number whose data are normal. It then becomes hard for ASLB to decide whether suspicious data are normal, which degrades the detection performance significantly.
We also draw the following conclusions from Figures 2, 3, and 4. The overall performance of SLAD is much better than that of the other algorithms, and the performance of ASLB is better than that of MV, DWV, and VWV when the faulty rate is low. The cause is the use of subjective logic: ASLB and SLAD fuse the quantitative opinions of the neighbors, which avoids the problems the other algorithms face. Because MV, DWV, VWV, and ASLB use the opinions of all the neighbors, the number of neighbors holding faulty data rises along with the faulty rate, which hurts the detection rate, false detection rate, and undetection rate. SLAD, however, removes the opinions of neighbors whose own data are suspicious before they provide their opinions, and it takes the historical spatial correlations between the nodes and their neighbors into account. Therefore, SLAD performs significantly better than the other algorithms, especially when the faulty rate is high.
The above experiments involve only faulty data, not informational data. In the monitoring area, events arise randomly, and the anomalous data of the sensor nodes detecting an event are spatially correlated (i.e., informational data). The rate of informational data, defined as the number of informational data in proportion to the whole data, is set to 0.2. The experiment shows that the detection rate of the SLAD framework for informational data reaches more than 0.9, while the other algorithms reach only about 0.7. The reason is that the SLAD framework uses subjective logic to fuse the quantitative opinions of the neighbors, which markedly improves the detection performance.
7.3. Impact of Radio Range on Detection Performance
In this section, we analyze the impact of the radio range on the detection rate, false detection rate, and undetection rate at different faulty rates. The number of neighbors affects the detection performance of the algorithm, and different radio ranges lead to different numbers of neighbors. We therefore discuss the detection performance of the SLAD framework under different radio ranges.
We conduct experiments to compare the detection performance of the SLAD framework under different radio ranges, setting the faulty rate to 0.3, 0.4, and 0.5. Figures 5, 6, and 7 show the detection rate, false detection rate, and undetection rate of SLAD, respectively. These figures indicate that the detection rate decreases, and the false detection rate and undetection rate increase, as the faulty rate grows. They also show that the detection rate increases, and the false detection rate and undetection rate decrease, as the radio range becomes larger, because a larger radio range provides more neighbors to give opinions.
In this paper, we present the SLAD framework, which considers the uncertainty of the neighbors toward the data of a node. It includes three phases: preprocessing, self-monitoring, and cooperative detecting. In the first phase, the sink constructs an AR model for each node. In the second phase, each node uses its AR model to check whether its sensing data are suspicious. The third phase, the key of our framework, presents two novel algorithms, SLB and ESLB. In SLB, each neighbor gives a quantitative opinion on the suspicious data based on subjective logic theory. After fusing the opinions of all the neighbors, SLB obtains the expectation of the consensus opinion as the anomaly score, which quantifies the degree to which the suspicious data should be considered anomalous. We extend SLB to ESLB in order to avoid the impact of neighbors whose own data are suspicious, to distinguish faulty data from informational data effectively, and to take the historical spatial correlations of the node and its neighbors into account. Simulation results show that the SLAD framework improves the performance of anomaly detection effectively compared with previous works.
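A sketch of the opinion-fusion step in the spirit of SLB, using Jøsang's standard consensus operator [15] over opinions (b, d, u) and the probability expectation E = b + a·u. The function names, the dogmatic-opinion fallback, and the default base rate a = 0.5 are our own assumptions:

```python
from functools import reduce

def fuse(op1, op2):
    """Josang's consensus operator for two opinions (b, d, u), b + d + u = 1."""
    b1, d1, u1 = op1
    b2, d2, u2 = op2
    k = u1 + u2 - u1 * u2
    if k == 0:  # both opinions dogmatic (u = 0); fall back to averaging
        return ((b1 + b2) / 2, (d1 + d2) / 2, 0.0)
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

def anomaly_score(opinions, base_rate=0.5):
    """Fuse the neighbors' opinions about 'the reading is faulty' and return
    the probability expectation E = b + a*u of the consensus opinion."""
    b, d, u = reduce(fuse, opinions)
    return b + base_rate * u
```

Note how fusing two agreeing opinions raises belief and lowers uncertainty, so the consensus score is sharper than either individual opinion.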
However, there is still room to improve SLAD. We believe that the opinion of a neighbor that holds a higher historical spatial correlation with the node should be paid more attention. As an example, suppose nodes A and B are neighbors of node C, where A and C are located in the same room while B is outside it. Generally, the historical spatial correlation between A and C is higher than that between B and C, so the opinion of node A about node C should be given more weight. Unfortunately, subjective logic, which is the foundation of SLAD, treats all opinions equally and cannot express this. As preparatory work, we proposed an operator for subjective logic that is capable of forming a consensus over several neighbors' opinions with their weights in a fair way. With the support of the new operator, we can map the historical spatial correlation to the weight of an opinion to improve SLAD. In theory, we believe this will improve the anomaly detection performance of SLAD; it is our future work.
This work is supported by the National Science Foundation (61070056, 61033010), the National 863 High-tech Plan (2008AA01Z120), Program for New Century Excellent Talents in University, and the Research Funds of the Renmin University of China (10XNI018).
- J. M. Kahn, R. H. Katz, and K. S. J. Pister, “Next century challenges: mobile networking for “smart dust”,” in Proceedings of the 5th Annual ACM/IEEE International Conference on Mobile Computing and Networking, pp. 271–278, Seattle, Wash, USA, August 1999.
- D. Culler, D. Estrin, and M. Srivastava, “Overview of sensor networks,” Computer, vol. 37, pp. 41–49, 2004.
- V. Chandola, A. Banerjee, and V. Kumar, “Anomaly detection: a survey,” ACM Computing Surveys, vol. 41, no. 3, pp. 1–58, 2009.
- Y. Zhang, N. Meratnia, and P. Havinga, “Outlier detection techniques for wireless sensor networks: a survey,” IEEE Communications Surveys & Tutorials, vol. 12, no. 2, pp. 159–170, 2010.
- S. Jeffery, G. Alonso, M. J. Franklin, W. Hong, and J. Widom, “Declarative support for sensor data cleaning,” in Proceedings of the 4th International Conference on Pervasive Computing, pp. 83–100, Dublin, Ireland, May 2006.
- B. Krishnamachari and S. Iyengar, “Distributed Bayesian algorithms for fault-tolerant event region detection in wireless sensor networks,” IEEE Transactions on Computers, vol. 53, no. 3, pp. 241–250, 2004.
- M. Krasniewski, P. Varadharajan, B. Rabeler, S. Bagchi, and Y. C. Hu, “TIBFIT: trust index based fault tolerance for arbitrary data faults in sensor networks,” in Proceedings of the International Conference on Dependable Systems and Networks, pp. 672–681, Yokohama, Japan, July 2005.
- Y. Kotidis, A. Deligiannakis, and V. Stoumpos, “Robust management of outliers in sensor network aggregate queries,” in Proceedings of 6th International ACM Workshop on Data Engineering for Wireless and Mobile Access, pp. 17–24, Beijing, China, June 2007.
- A. Deligiannakis, Y. Kotidis, V. Vassalos, V. Stoumpos, and A. Delis, “Another outlier bites the dust: computing meaningful aggregates in sensor networks,” in Proceedings of the 25th IEEE International Conference on Data Engineering (ICDE '09), pp. 988–999, Shanghai, China, April 2009.
- N. Giatrakos, Y. Kotidis, A. Deligiannakis, V. Vassalos, and Y. Theodoridis, “TACO: tunable approximate computation of outliers in wireless sensor networks,” in Proceedings of the International Conference on Management of Data (SIGMOD '10), pp. 279–290, Indianapolis, Ind, USA, June 2010.
- X. Y. Xiao, W. C. Peng, C. C. Hung, and W. C. Lee, “Using sensor ranks for in-network detection of faulty readings in wireless sensor networks,” in Proceedings of the 6th International ACM Workshop on Data Engineering for Wireless and Mobile Access, pp. 1–8, Beijing, China, June 2007.
- N. Giatrakos, Y. Kotidis, and A. Deligiannakis, “PAO: power-efficient attribution of outliers in wireless sensor networks,” in Proceedings of the 7th International Workshop on Data Management for Sensor Networks, pp. 33–38, Singapore, September 2010.
- D. Tulone and S. Madden, “PAQ: time series forecasting for approximate query answering in sensor networks,” in Proceedings of the European Conference on Wireless Sensor Networks, pp. 21–37, Zurich, Switzerland, February 2006.
- D. Tulone, “A resource-efficient time estimation for wireless sensor networks,” in Proceedings of the Joint Workshop on Foundations of Mobile Computing (DIALM-POMC '04), pp. 52–59, Philadelphia, Pa, USA, October 2004.
- A. Jøsang, “A logic for uncertain probabilities,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 9, no. 3, pp. 279–311, 2001.
- A. Jøsang, “Fission of opinions in subjective logic,” in Proceedings of the 12th International Conference on Information Fusion, pp. 1911–1918, Seattle, Wash, USA, July 2009.
- O. Obst, X. R. Wang, and M. Prokopenko, “Using echo state networks for anomaly detection in underground coal mines,” in Proceedings of the ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN '08), pp. 219–229, St. Louis, Mo, USA, April 2008.
- M. Wälchli, “Efficient signal processing and anomaly detection in wireless sensor networks,” in Proceedings of the EvoWorkshops on Applications of Evolutionary Computing: EvoCOMNET, EvoENVIRONMENT, EvoFIN, EvoGAMES, EvoHOT, EvoIASP, EvoINTERACTION, EvOmUSART, EvoNUM, EvoSTOC, EvoTRANSLOG, pp. 81–86, Tübingen, Germany, April 2009.
- M. Chang, A. Terzis, and P. Bonnet, “Mote-based online anomaly detection using echo state networks,” in Proceedings of the 5th IEEE International Conference on Distributed Computing in Sensor Systems, pp. 72–86, Marina Del Rey, Calif, USA, June 2009.
- A. Varga, “The OMNET++ discrete event simulation system,” in Proceedings of the European Simulation Multiconference, pp. 319–324, Prague, Czech Republic, June 2001.
- Intel Berkeley Research Lab, http://berkeley.intel-research.net/labdata/.
- H. Zhou, W. Shi, Z. Liang, and B. Liang, “Using new fusion operations to improve trust expressiveness of subjective logic,” Wuhan University Journal of Natural Sciences, vol. 16, no. 5, pp. 376–382, 2011.