International Journal of Distributed Sensor Networks
Volume 2012 (2012), Article ID 197592, 10 pages
http://dx.doi.org/10.1155/2012/197592
Research Article

Location Privacy and Energy Preservation in Sensor Allocation Systems

Hosam Rowaihy

Computer Engineering Department, King Fahd University of Petroleum and Minerals, Box 1392, Dhahran 31261, Saudi Arabia

Received 29 January 2012; Revised 15 April 2012; Accepted 24 May 2012

Academic Editor: Yingshu Li

Copyright © 2012 Hosam Rowaihy. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose sensor-to-task assignment algorithms that take location privacy into account. Our solutions enable the network to choose the “best” assignment of the available sensors to the tasks to maximize the utility of the network while preserving sensors’ location privacy. In contrast to previous work, which considered location privacy while sensors are in operation, we focus on preserving the sensors’ locations during the assignment process. We also propose an energy-based algorithm that uses the remaining energy of a sensor as a factor to scale the distance between a sensor and a task, so that sensors with more energy appear closer than sensors with less energy left. This not only provides location privacy but also consumes the sensing resources more evenly, which extends the lifetime of the network. From our simulations, we found that our algorithms can successfully hide sensors’ locations while providing performance close to that of algorithms that use the exact locations. In addition, our energy-based algorithm, while achieving results close to the exact location algorithm, was found to increase the network lifetime by as much as 40%.

1. Introduction

Wireless sensor networks consist of a large number of small sensor devices that have the capability to take various measurements of their environment. These measurements can include seismic, acoustic, magnetic, IR, and video information. These devices are highly resource constrained. They are equipped with a small processor and wireless communication antenna and are battery powered. To be used, sensors are scattered around a sensing field to collect information about their surroundings. For example, sensors can be used in a battlefield to gather information about enemy troops, detect events such as explosions, and track and localize targets. Upon deployment in a field, they form a wireless ad hoc network and communicate with each other and with data processing centers.

A sensor network is typically expected to perform multiple tasks. By a task we mean any job that requires some amount of sensing resources to be accomplished such as video monitoring a field, tracking a target or localizing an event. Tasks can be divided into multiple subtasks. For example, monitoring a large field can be divided into monitoring multiple smaller areas. Each task can be modeled with a profit that represents its importance. The utility (or amount/quality of information) that a sensor can provide to a task depends on several factors. These include the type of the sensor, its sensing range, its geographic location relative to the task, and its current operational status, such as its remaining energy. Because of the limited number of sensors deployed in a field and the potentially large number of tasks, competition will arise. In such cases, it might not be possible to satisfy the requirements of all tasks using available sensors.

In most cases, the utility that a sensor can provide to a task will depend on the distance between them; the closer the sensor, the higher the utility it can provide. Hence, in most solutions the sensor’s exact location needs to be determined to calculate its utility. That is, the sensor has to acquire its location, for example, by using GPS, and then reveal that exact location to the network control. This process, however, might not always be feasible due to security reasons, especially when the sensor network is shared between multiple entities. In such a case, we would like to allow these entities to use the sensor network with no or limited coordination, which is an important asset for operations. However, some entities might want to keep the locations of their sensing resources private due to their sensitivity. Consider, for example, a sensor network that is owned by the military and is used to monitor borders. The military might want to share its sensing resources with researchers and scientists who would like to study the environment in a particular area, but at the same time it would like to keep the exact locations of its resources private. In this case, the exact locations of sensors should not be disclosed. This requirement prohibits the use of current solutions, which depend on such information.

Another major concern for sensor networks is energy preservation. We would like to extend the lifetime of the network as much as possible while trying to satisfy the requirements of all tasks. To do this, we need to take into account the residual energy in each sensor that can be used by a task before we make the assignment decision.

In this paper, we will develop distributed algorithms for finding sensor-to-task assignments taking into account location privacy and energy preservation issues. We focus on detection tasks, which require a number of sensors that are close to the location to be monitored. Our algorithms enable the network to choose the “best” assignment of the available sensors to the tasks to maximize the utility of the network, that is, the total profit achieved from all tasks, while preserving sensors’ location privacy. We also propose an algorithm that takes into account the remaining energy in sensors before making any assignment decision. The goal here is to preserve the energy of sensors that have a small amount of energy left and prefer the ones that have more energy. Energy in this case is used as a factor to randomize the actual location of the sensor.

From our simulations, we found that our algorithms can successfully hide sensors’ locations while providing performance close to that of the algorithms that use the exact locations. In addition, our energy-based algorithm, while achieving results close to the exact location algorithm, was found to increase the network lifetime by as much as 40%.

The rest of this paper is organized as follows. Section 2 discusses related work in the areas of sensor assignment and location privacy in sensor networks. Section 3 provides the needed background. We present the different algorithms in Section 4. In Section 5, we analyze the performance of the proposed algorithms. Finally, Section 6 concludes this paper and discusses some future research directions.

2. Related Work

The general problem of choosing sensors to achieve an objective has received sizable attention lately. Several selection objectives have been considered. In [1, 2], for example, the goal is to cover the region using few sensors in order to conserve energy. The techniques used range from dividing the sensors in the network into a number of sets and rotating them, activating one at a time [1], to using Voronoi diagram properties [3] to ensure that the area of a sensor's diagram is as close to its sensing area as possible [2]. Another technique used is delayed sensor activation [4], in which sensors are initially inactive and are activated according to their coverage contribution. In that way, the schemes greedily turn on the sensors with highest probability of successfully sensing the areas of interest. Our problem is similar if we consider that covering a certain area is a task. A survey of sensor selection and assignment problems, including simple theoretical models of the problem, can be found in [5].

In [6, 7], the authors have considered several sensor-task assignment problems. The problems considered in [7] differ from previous work in that multiple tasks with different priorities are considered. These tasks contend for the same set of sensors, which calls for resolution mechanisms.

Our work extends [8], which studied a closely related problem. In that paper, the authors considered both event detection and target localization problems and proposed solutions that use exact and fuzzy locations. Their main goal was to reduce the computational overhead by minimizing the number of choices a task leader has to consider, which is achieved by grouping sensors into different classes based on their location. In our work, we turn the focus to location privacy and energy preservation. We modify the algorithms that were proposed for event detection and compare our results to theirs.

The issue of location privacy was studied recently by several researchers [9–15]. All of these works, however, focus on hiding the location of a source sensor during its operation and not during the assignment process. That is, they assume that the sensor is already in operation to perform a certain task. Our work focuses on preserving the sensors’ locations during the assignment process, that is, before the task starts.

The problem of energy preservation in sensor networks has been studied in many previous works in the literature. Both [16, 17] provide thorough surveys on this topic.

3. Background

In this section, we discuss the necessary background information. We start by introducing our system model. Next, we formally define the problem that we are trying to solve.

3.1. System Model

In our model, the sensor network consists of a set of static sensors that are directional in nature, for example, imaging sensors and directional acoustic arrays. Hence, we assume that a sensor can be assigned to at most one task at a time. The sensors are predeployed in the field, and it is assumed that they know their locations through the use of GPS devices or other localization mechanisms.

Although our proposed algorithms would work for any task in which the utility a sensor provides to the task depends on the distance between them, we focus on event detection tasks to facilitate the discussion. In our model, a task is specified by a geographic location, for example, detecting events occurring at location (𝑥,𝑦). A larger-scale task, such as field coverage or perimeter monitoring, can be divided into a set of subtasks, each having its own location. To mimic real deployments, tasks dynamically arrive and depart over time. When a task arrives in the system, sensing resources are allocated to it. Because tasks can vary in importance, we allow a sensor to be reassigned from a task with lower profit (which is used to represent importance) to a task with a higher profit.

3.2. Problem Definition

Our problem definition is similar to the one proposed in [8], which we explain next. The goal is to assign sensors to multiple ongoing tasks such that the sum of achieved profits from all tasks is maximized. The achieved profit of a task is equal to the task's original profit multiplied by the utility it receives from the assigned sensors. We base the calculation of the utility received by a task on the model in [18]. The authors of [18] presented a complicated model of sensor assignment, with an objective function that depends on the probability of detecting certain types of events, conditioned on the events occurring and the number of sensors assigned to detect the event in a given location.

In our model, given are collections of sensors and tasks. Each task is to monitor and detect events in a certain location. The utility of a sensor to a task is equal to the probability that it will, by itself, successfully detect an event if it occurs. Let $S_i \to T_j$ indicate that sensor $S_i$ is assigned to task $T_j$. The objective function is then to maximize the sum of cumulative detection probabilities for tasks (weighted by task profits $p_j$), given the probability $e_{ij}$ that a single sensor $S_i$ detects an event for $T_j$:
$$\sum_j p_j \left( 1 - \prod_{S_i \to T_j} \left( 1 - e_{ij} \right) \right). \quad (1)$$

This problem is called the Cumulative Detection Probability maximization problem (MaxCDP). The utilities in this case are monotonic; that is, each additional sensor potentially raises the detection probability further, but in a nonlinear fashion. This problem has been shown to be NP-hard [8].
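
To make objective (1) concrete, the following minimal Python sketch (variable names are illustrative and not taken from the paper) computes the profit-weighted cumulative detection probability of a given sensor-to-task assignment:

```python
from collections import defaultdict

def max_cdp_objective(profits, detection_probs, assignment):
    """Objective (1): sum_j p_j * (1 - prod_{S_i -> T_j} (1 - e_ij)).

    profits: dict task_id -> p_j
    detection_probs: dict (sensor_id, task_id) -> e_ij
    assignment: dict sensor_id -> task_id (each sensor serves at most one task)
    """
    miss_prob = defaultdict(lambda: 1.0)  # product of (1 - e_ij) over assigned sensors
    for sensor, task in assignment.items():
        miss_prob[task] *= 1.0 - detection_probs[(sensor, task)]
    return sum(p * (1.0 - miss_prob[task]) for task, p in profits.items())

# Example: two sensors assigned to task 1, none to task 2.
profits = {1: 10.0, 2: 40.0}
e = {("a", 1): 0.9, ("b", 1): 0.5}
print(max_cdp_objective(profits, e, {"a": 1, "b": 1}))  # 10 * (1 - 0.1 * 0.5) = 9.5
```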

4. Assignment Algorithms

In this section, we introduce our algorithms for assigning sensors to tasks. We start by discussing the basic operation of the proposed algorithms. Then we discuss the operation of a generic assignment algorithm on which all the other algorithms are based. After that, we describe the different variations of the assignment algorithm, namely, the exact location, discretized location, random location, and energy-based location algorithms.

4.1. Basic Operation

The algorithms we propose below are similar in terms of operation to the algorithms we proposed in previous works [6–8]. They are distributed in nature and do not require any central node to make all the assignment decisions. This allows the leveraging of real-time status information about sensors—for example, which are operational and which are currently assigned to other tasks. Distributed algorithms are also scalable and more efficient in terms of communication cost compared to centralized approaches. As stated above, we assume a dynamic system, in which the tasks constituting the problem instances described above arrive and depart over time.

For each task, a leader is chosen, which is a sensor close to the task’s location. The leader can be found using geographic-based routing techniques such as [19, 20]. Since location privacy is a concern, however, schemes such as [21], which provides geographic routing without location information, can be used. The task leaders are informed about their tasks’ locations and profits by the central command and control. Task leaders, then, run a local algorithm to match nearby sensors to the requirements of the task. Since the utility a sensor can provide to a task is limited by a finite sensing range 𝑅𝑠, only nearby sensors are considered. The leader advertises its task information, namely, task’s location and its profit, to the nearby sensors (e.g., sensors within two hops).

When nearby sensors hear this advertisement message, they propose to the leader with the utility they provide to the task at hand. To preserve location privacy, calculation of the utility may not use the exact location information. A sensor that is already assigned to a task may be reassigned to another task only if doing so will increase the total profit of the network. This is determined as follows. Assume that sensor $S_i$ is currently assigned to task $T_k$ and there is a new incoming task $T_j$ that is within the sensing range of $S_i$. If the utility that $S_i$ provides to $T_j$ weighted by $T_j$’s profit $p_j$ is greater than that of the current task $T_k$, then $S_i$ should be reassigned. More formally, if $e_{ij} p_j > e_{ik} p_k$, then $S_i$ is reassigned. To reduce both the interruption of ongoing tasks and the communication overhead, no cascading preemption is allowed. That is, if task $T_j$ preempts a sensor from task $T_k$, then $T_k$ will try to satisfy its demand only with available sensors rather than by preempting a third task.
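
The preemption test reduces to a one-line comparison; a hypothetical helper (not part of the paper's protocol messages) might look like this:

```python
def should_reassign(e_new, p_new, e_cur, p_cur):
    """Reassign sensor S_i from its current task T_k to an incoming task T_j
    only if e_ij * p_j > e_ik * p_k; cascading preemption is not triggered here."""
    return e_new * p_new > e_cur * p_cur

# A sensor offering utility 0.6 to a task of profit 20 leaves a task where it
# offers utility 0.9 but the profit is only 10, since 12 > 9.
print(should_reassign(e_new=0.6, p_new=20.0, e_cur=0.9, p_cur=10.0))  # True
```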

When a task ends, the leader sends out a message to advertise that the task has ended and all its assigned sensors are released. Because the system is dynamic, tasks that are not satisfied after the first assignment process will try to obtain more sensors once they learn there may be more available. This information can be obtained either from the base station or by overhearing the message announcing the end of a task.

4.2. Generic Assignment Algorithm

We now discuss how the leader chooses the sensors to be assigned to the task from the applicable nearby sensors. The generic assignment algorithm (originally proposed in [8]) runs in rounds, which allows sensors to be assigned to their best match. When a task arrives in the network, the task leader advertises the presence of the task and its profit to nearby sensors. The advertisement message is propagated to ensure that all tasks within twice the sensing range ($2R_s$) receive it. Since these tasks compete for the same sensors as the arriving task, their leaders need to participate in the process.

In order to conserve energy, the number of sensors that can be assigned to a task is limited to $N$, which is an application parameter. A higher value of $N$ may yield a higher cumulative detection probability for an individual task; however, it also increases the contention for sensors between tasks. Algorithm 1 summarizes the steps followed. Note that all the competing leaders go through the steps shown for the task leader.

Algorithm 1: Sensor assignment [8].

In the first round, each task leader that is participating in the process informs the nearby sensors of the details of its task (location and profit). A sensor, which may hear advertisement messages from one or more tasks, proposes to its current best match. This is the task for which it provides the highest detection probability weighted by the task’s profit; more formally, $S_i$ proposes to the task $T_j$ that maximizes $e_{ij} p_j$. From the set of proposing sensors, each task leader selects the sensor with the maximum detection probability and updates its current cumulative detection probability (CDP).

The sensor $S_i$ determines its utility to task $T_j$ using the distance between them. As in [8], we use the following to calculate the detection probability when sensor $S_i$ is assigned to task $T_j$:
$$e_{ij} = \exp\left( \frac{\log P_{\mathrm{FA}}}{1 + \mathrm{SNR}_1 D_{ij}^{-2}} \right), \quad (2)$$
where $D_{ij}$ is the distance between the sensor and the task location, $P_{\mathrm{FA}}$ is the false alarm probability (a user-chosen parameter), and $\mathrm{SNR}_1$ is the normalized signal-to-noise ratio at a distance of one meter from the signal source. This expression results from analyzing a fluctuating source model embedded in Additive White Gaussian Noise (AWGN) when the square-law detector is employed [22]. In this model, the detection probability decreases as the distance between the sensor and the signal source increases. Figure 2 depicts the function’s behavior with the parameters used in the simulations.
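
As a minimal sketch of equation (2) as reconstructed above (the received SNR is modeled as $\mathrm{SNR}_1 D_{ij}^{-2}$, the 60 dB value from Section 5.1 is taken as $10^6$ in linear scale, and distances below the 1 m normalization point are clamped), the following hypothetical Python helper also applies the $R_s$ cutoff used in the simulations:

```python
import math

def detection_probability(d_ij, p_fa=0.001, snr1=1.0e6, r_s=40.0):
    """Single-sensor detection probability per the reconstructed equation (2):
    e_ij = exp(log(P_FA) / (1 + SNR_1 * D_ij^-2)).
    Beyond the effective sensing range R_s, e_ij is approximated as zero (Section 5.1)."""
    if d_ij > r_s:
        return 0.0
    snr = snr1 / max(d_ij, 1.0) ** 2   # SNR_1 is normalized at 1 m (assumption)
    return math.exp(math.log(p_fa) / (1.0 + snr))

print(detection_probability(5.0))   # close to the task: near-certain detection
print(detection_probability(45.0))  # outside R_s: treated as 0
```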

Obviously, if the exact distance is used to calculate the detection probability, the task leader can perform the reverse operation and determine the whereabouts of the proposing sensors. Hence, as we will see in the next subsections, sensors do not necessarily use the exact distance to calculate the utility. We note, however, that even if the exact distance from the sensor to the target is known, the task leader cannot pinpoint the sensor's position, since the sensor can be anywhere on a circle of that radius around the task's location.

In the next round, each leader sends out an update on the status of its task’s current CDP, after taking into account the currently assigned sensors. Sensors that were not selected in the first round recalculate $e_{ij}$, the amount by which they can increase the current CDP of the different remaining tasks (shown in the next-to-last step of Algorithm 1). Each unassigned sensor then proposes to its best fit. This process continues for $R$ rounds, until all tasks have $N$ assigned sensors or there are no more sensors available. $R$ is an application parameter and should be set to at least $N$ to give tasks a chance to acquire enough sensors.
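
The round structure described above and in Algorithm 1 can be summarized by the following simplified, centralized Python sketch; it abstracts away leader election, message exchange, and preemption of ongoing tasks, and all names are illustrative rather than taken from the paper:

```python
def assign_sensors(profits, e, n_max, rounds):
    """Greedy, round-based sensor-to-task assignment (simplified sketch).

    profits: dict task -> p_j
    e: dict (sensor, task) -> e_ij (absent or zero if out of sensing range)
    n_max: maximum sensors per task (N); rounds: number of rounds (R)
    """
    assigned = {}                               # sensor -> task
    miss = {t: 1.0 for t in profits}            # running product of (1 - e_ij) per task
    count = {t: 0 for t in profits}             # sensors assigned per task
    sensors = {s for (s, _) in e}

    for _ in range(rounds):
        # Proposal phase: each unassigned sensor proposes to the task with the
        # highest profit-weighted marginal CDP gain it can currently offer.
        proposals = {}                          # task -> (gain, sensor, e_ij) of best proposer
        for s in sensors - set(assigned):
            best = None
            for t, p in profits.items():
                eij = e.get((s, t), 0.0)
                if eij <= 0.0 or count[t] >= n_max:
                    continue
                gain = p * miss[t] * eij        # increase in weighted CDP if s joins t
                if best is None or gain > best[0]:
                    best = (gain, t, eij)
            if best is None:
                continue
            gain, t, eij = best
            if t not in proposals or gain > proposals[t][0]:
                proposals[t] = (gain, s, eij)

        if not proposals:
            break
        # Selection phase: each task leader accepts its single best proposer.
        for t, (gain, s, eij) in proposals.items():
            assigned[s] = t
            miss[t] *= 1.0 - eij
            count[t] += 1
    return assigned

# Tiny example: two tasks, three sensors, at most N = 2 sensors per task, R = 2 rounds.
profits = {"T1": 10.0, "T2": 50.0}
e = {("a", "T1"): 0.9, ("a", "T2"): 0.4, ("b", "T2"): 0.8, ("c", "T1"): 0.3}
print(assign_sensors(profits, e, n_max=2, rounds=2))  # sensor 'b' serves T2; 'a' and 'c' serve T1
```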

During the assignment process, sensors can continue the detection for the tasks to which they were initially assigned even while proposing to other tasks. Change only happens if a sensor chooses a task that is different from the one to which it is already assigned. In that case, it will change direction towards the new task’s location and start detecting events.

4.3. Algorithm Variations

Here we describe the different variations of this algorithm. They differ only in the way sensors calculate their utility value ($e_{ij}$); more precisely, they differ in the way they calculate the distance between the sensor and the task’s location. Since this distance may not be equal to the exact distance, depending on the algorithm used, we denote it $\tilde{D}_{ij}$. We start by discussing two algorithms that were proposed in [8], and then we introduce our new variations, namely, the random location algorithm and the energy-based location algorithm.

4.3.1. Exact Location

This is the same algorithm that was proposed in [8] and will be used to benchmark the results of the other algorithms. In this variation, the distance used to calculate the utility is equal to the exact Euclidean distance between the sensor and the task's location. Hence, in this case,
$$\tilde{D}_{ij} = D_{ij}. \quad (3)$$

4.3.2. Discretized Location

This variation of the algorithm was also proposed in [8] and will be used to compare the performance of the other algorithms (in [8], this algorithm was called Event Detection with Fuzzy Location). Instead of using the exact distance between a sensor and a task to determine the utility, the area around the task is discretized into classes based on the distance. All sensors within the same class are considered equivalent from the task leader's perspective; that is, they provide the same utility. Figure 1 shows an example of this discretization, in which the area around the task's location is divided into an inner disk and an outer ring. The distance between the center and the edge of the outer ring represents the sensing range $R_s$, and this area includes all the sensors that are applicable to the task.

Figure 1: Example of discretized location.
Figure 2: Detection probability model.

A higher degree of location privacy is achieved by having fewer classes and hence a larger area for each class. This means that there will be a larger number of sensors in each class and the task leader will not be able to distinguish them from each other. On the other hand, this leads to lower performance since the chosen sensors may be far from the task’s location. We define the distance accuracy degree ($d_{\mathrm{acc}}$) as a measure of how accurate the reported utility is. (In [8], this measure was called Distance Granularity (DG).) It is inversely related to the achieved location privacy. In this variation of the algorithm, $d_{\mathrm{acc}}$ determines the number of classes: if $d_{\mathrm{acc}} = 0$, then all sensors within the sensing range are in the same class and are considered equivalent; that is, they will report the same utility to the task leader. $d_{\mathrm{acc}} = 0$ provides almost no guarantee on the solution quality. When $d_{\mathrm{acc}}$ is increased to 1, the distance from the target to the edge of the circle is divided to create two rings of equal area, which partitions the sensors into two classes. Figure 1 shows a discretized distance based on $d_{\mathrm{acc}} = 1$. A sensor of class 1 will provide a higher detection probability than a sensor of class 2. $d_{\mathrm{acc}} = 2$ divides the circle into three equal-area rings, and so on.

Sensors in this case calculate the utility based on the expected distance to the task's location for the class to which they belong. More formally, the distance used by the sensor to calculate the utility, $\tilde{D}_{ij}$, is the expected radial distance of a point distributed uniformly over the sensor's ring:
$$\tilde{D}_{ij} = \frac{2}{3} \cdot \frac{D_{i+1}^{3} - D_{i}^{3}}{D_{i+1}^{2} - D_{i}^{2}}, \quad (4)$$
where $D_{i+1}$ and $D_i$ are the radii of the outer and inner circles of the ring in which sensor $S_i$ lies, respectively.
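
A short sketch of how a sensor could map its true distance to the class-based expected distance of equation (4) as reconstructed above (equal-area rings, $d_{\mathrm{acc}}+1$ classes; the function names are illustrative):

```python
import math

def ring_radii(d_acc, r_s):
    """Boundaries of d_acc + 1 equal-area rings within sensing range r_s."""
    k = d_acc + 1
    return [r_s * math.sqrt(i / k) for i in range(k + 1)]

def discretized_distance(true_dist, d_acc, r_s):
    """Expected radial distance of a uniformly placed sensor in its ring,
    per the reconstructed equation (4). Returns None if out of sensing range."""
    if true_dist > r_s:
        return None
    edges = ring_radii(d_acc, r_s)
    for d_in, d_out in zip(edges, edges[1:]):
        if true_dist <= d_out:
            return (2.0 / 3.0) * (d_out**3 - d_in**3) / (d_out**2 - d_in**2)
    return None

# d_acc = 1: inner disk and outer ring of equal area (as in Figure 1).
print(discretized_distance(10.0, d_acc=1, r_s=40.0))  # expected radius of the inner disk
print(discretized_distance(35.0, d_acc=1, r_s=40.0))  # expected radius of the outer ring
```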

The task leader chooses the sensors that provide the highest utility values. If there are multiple sensors with the same utility (which happens when they are in the same class), then the leader chooses randomly among them.

4.3.3. Random Location

In this variation, each sensor adds some random noise to the distance from the task's location before calculating the utility. The maximum noise added depends on the distance accuracy degree ($d_{\mathrm{acc}}$): the higher the $d_{\mathrm{acc}}$, the lower the noise that is added to the distance. More formally, $\tilde{D}_{ij}$ for a given $d_{\mathrm{acc}}$ is calculated as follows:
$$\tilde{D}_{ij}(d_{\mathrm{acc}}) = D_{ij} + r(d_{\mathrm{acc}}), \quad (5)$$
where $r(d_{\mathrm{acc}})$ is the added random noise, defined as
$$r(d_{\mathrm{acc}}) = \mathrm{rand}() \times \frac{R_s}{d_{\mathrm{acc}} + 1}; \quad (6)$$
$\mathrm{rand}()$ is a random function that returns a uniformly distributed value in $[0,1]$. This random value is scaled by the sensing range $R_s$ divided by $d_{\mathrm{acc}} + 1$. When $d_{\mathrm{acc}} = 0$, the random noise added to the distance ranges between 0 and $R_s$. As $d_{\mathrm{acc}}$ increases, the maximum value of the added random noise decreases. For example, when $d_{\mathrm{acc}} = 1$, the random noise ranges between 0 and $R_s/2$, and when $d_{\mathrm{acc}}$ becomes relatively high the random noise becomes almost 0 and essentially the exact distance is used.
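
A minimal sketch of the randomized distance of equations (5) and (6) (the function name and seed are illustrative):

```python
import random

def randomized_distance(true_dist, d_acc, r_s):
    """Equations (5)-(6): add uniform noise in [0, R_s / (d_acc + 1)] to the true distance.
    A larger d_acc means less noise, hence more accuracy but less privacy."""
    noise = random.random() * r_s / (d_acc + 1)
    return true_dist + noise

random.seed(0)
print(randomized_distance(10.0, d_acc=0, r_s=40.0))  # noise anywhere in [0, 40)
print(randomized_distance(10.0, d_acc=3, r_s=40.0))  # noise anywhere in [0, 10)
```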

The task leader selects the sensors that provide the highest utility values based on Algorithm 1.

4.3.4. Energy-Based Location

In this variation, the actual distance from a sensor to the task is scaled by the remaining energy so that sensors appear farther away as their energy gets depleted. This aims at providing location privacy and preserving energy, as task leaders will choose sensors that appear closer, which will usually have a higher amount of residual energy. In this variation, we do not control $d_{\mathrm{acc}}$, as the level of location privacy provided depends on individual sensors and their remaining energy levels. More formally, $\tilde{D}_{ij}$ is calculated as follows:
$$\tilde{D}_{ij} = \frac{1}{f} \times D_{ij}, \quad (7)$$
where $f$ is the fraction of remaining energy. When $f = 1$, which means that the battery is full, the sensor uses the exact distance. As $f$ gets smaller, $\tilde{D}_{ij}$ gets larger and the sensor appears to be farther from the task. We cap the reported distance at $R_s$ so that an applicable sensor always appears to be within the sensing range of the task's location.
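
A minimal sketch of the energy-scaled distance of equation (7), including the cap at $R_s$ (the handling of a fully depleted battery is an assumption, since equation (7) is undefined for $f = 0$):

```python
def energy_scaled_distance(true_dist, energy_fraction, r_s):
    """Equation (7): report D_ij / f, capped at the sensing range R_s.
    A full battery (f = 1) reports the exact distance; low energy pushes the sensor
    toward the edge of the sensing range, making it less likely to be chosen."""
    if energy_fraction <= 0.0:
        return r_s                      # fully depleted: appears at the range boundary (assumption)
    return min(true_dist / energy_fraction, r_s)

print(energy_scaled_distance(10.0, 1.0, 40.0))   # 10.0 (exact)
print(energy_scaled_distance(10.0, 0.5, 40.0))   # 20.0
print(energy_scaled_distance(10.0, 0.2, 40.0))   # 40.0 (capped at R_s)
```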

The task leader selects the sensors with the highest utility based on Algorithm 1, that is, the ones that appear to be closest. If there are multiple sensors with the same utility, the leader picks one randomly among them. This can happen when the energy of the applicable sensors has been depleted to the level at which they all appear to be $R_s$ away from the task's location.

5. Performance Evaluation

In this section, we discuss the results of the experiments used to evaluate our algorithms. We implemented a simulator in Java and tested our algorithms on randomly generated problem instances. We compare the results achieved by the different algorithms with the results achieved by what we call an optimal solution, in which, for each currently active task separately, we find the optimal achievable profit assuming there are no other tasks in the network (i.e., no competition) and ignoring any energy constraints. The sum of these values provides an upper bound. This is an unrealistic solution used only to benchmark the performance of the other solutions.

5.1. Simulation Setup

The detection probability with sensor $S_i$ assigned to task $T_j$ is determined using (2) above. $\mathrm{SNR}_1$ was set to 60 dB and $P_{\mathrm{FA}}$ was set to 0.001. Figure 2 shows the detection probability of a sensor for a task based on the assumed model and these parameters. For computational and analytic convenience, we simply approximate $e_{ij}$ as zero when $D_{ij}$ exceeds the effective sensing range of the sensor, $R_s$, which is set to 40 m.

Our goal is to maximize the achieved profits from all available tasks, that is, $\max \sum_j p_j u_j$, where $u_j$ is the utility received by task $T_j$ and $p_j$ is its profit. The utility achieved by a task is the cumulative detection probability (CDP) as in (1), which is naturally in $[0,1]$.

We deploy 400 nodes in uniformly random locations in a 250 m × 250 m field. The communication range of sensors is set to 40 m. Tasks are created in uniformly random locations in the field. Task profits are exponentially distributed with an average of 10 and are capped at 100, so they lie in (0, 100]. This mimics realistic scenarios in which many tasks have low profit and few have high profits. We assume that these profits are awarded per unit of time for which a task is active. The maximum possible profit in a time step is the sum of the profits of all active tasks at that time step. We compare our achieved profits with this maximum value.

Task lifetimes are also exponentially distributed to mimic realistic scenarios; in actual operations it is expected that many tasks will have short durations and few will have long durations. In our simulation we set the average lifetime to 1 hour and cap the maximum lifetime at 6 hours. Tasks arrive according to a Poisson process. To test the robustness of our algorithms, we experimented with two loads, namely, average arrival rates of 4 tasks/hour and 8 tasks/hour.
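
For concreteness, a hypothetical workload generator matching the setup described above might look as follows (names and structure are illustrative; this is not the authors' Java simulator):

```python
import random

def generate_workload(duration_h, rate_per_h, field=250.0, seed=0):
    """Poisson task arrivals, exponential profits (mean 10, cap 100),
    exponential lifetimes (mean 1 h, cap 6 h), uniform task locations."""
    rng = random.Random(seed)
    tasks, t = [], 0.0
    while True:
        t += rng.expovariate(rate_per_h)                      # Poisson arrival process
        if t > duration_h:
            break
        tasks.append({
            "arrival_h": t,
            "lifetime_h": min(rng.expovariate(1.0), 6.0),     # mean 1 hour, capped at 6 hours
            "profit": min(rng.expovariate(1.0 / 10.0), 100.0),  # mean 10, capped at 100
            "location": (rng.uniform(0, field), rng.uniform(0, field)),
        })
    return tasks

# 400 sensor nodes placed uniformly at random in the 250 m x 250 m field.
sensor_locations = [(random.uniform(0, 250.0), random.uniform(0, 250.0)) for _ in range(400)]
print(len(generate_workload(duration_h=168, rate_per_h=4)))  # roughly 168 * 4 tasks
```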

Each sensor starts with a battery that will last for 6 hours of continuous sensing. To simplify our simulation, we assume that this battery is used solely for sensing purposes and is different than the sensor's main battery which it uses for communication and maintaining its own operation.

In our experiments, we show the average performance of the network for a period of 168 hours (1 week); we start collecting measurements after running the algorithms for 10 hours to allow it to reach steady state. The results are averaged over 500 runs.

5.2. Simulation Results

In Figure 3 we can see the performance of the algorithms in terms of the percentage of achieved profits over the lifetime of the network as $d_{\mathrm{acc}}$ is increased from 0 to 7 for two load settings, namely, arrival rates of 4 tasks/hour and 8 tasks/hour. Figure 3(a) shows the results when the arrival rate is 4 tasks/hour. We can see that the best performance, by far, is achieved by the energy-based algorithm, followed by the exact algorithm. This is because the energy-based algorithm uses resources more uniformly and allows more sensors to stay alive to serve future tasks. This is not the case with the exact algorithm, which uses the best available sensors for each task without taking into account their remaining energy, causing them to die early and miss future tasks. Both the exact and energy-based location algorithms are independent of $d_{\mathrm{acc}}$ and hence appear as horizontal lines.

Figure 3: Total profit results after operating for 168 hours (1 week).

Below the exact location algorithm, we find the random location algorithm. It performs slightly worse than the exact location algorithm for small $d_{\mathrm{acc}}$ and improves as $d_{\mathrm{acc}}$ increases. In all cases, however, it stays very close to the exact location algorithm because the random noise is added to all sensors with the same magnitude. A task leader in most cases will choose sensors that are close to the task's location, and in many cases it ends up choosing the same set that is chosen by the exact location algorithm. It is clear from the results that the discretized location algorithm performs the worst and is the most sensitive to changes in $d_{\mathrm{acc}}$. Its achieved profits increase rapidly as $d_{\mathrm{acc}}$ increases, but once $d_{\mathrm{acc}}$ reaches 4 there is little further increase. This suggests that the benefit gained from the increased accuracy may not justify the loss in privacy. By the time $d_{\mathrm{acc}}$ reaches 7, the discretized location algorithm's performance is within less than 1% of the exact location algorithm.

Although the discretized location algorithm does not perform as well as the random location algorithm, it has another benefit, which is highlighted in [8]. By dividing the sensors into classes based on their location, the task leader no longer needs to consider sensors individually for assignment but rather considers the classes from which they come. This coarsens the problem instance to be solved and hence reduces the computational overhead.

Figure 3(b) shows the same results but with increased network load. The arrival rate in this experiment is set to 8 tasks/hour. We see similar behavior of all algorithms. The only difference is that the achieved profits are now lower compared to the previous experiment. This is expected since with increased load sensors consume their energy at a faster rate, which leads to them dying earlier in the network lifetime and failing to contribute to later tasks. This will become clear when we consider the next set of results.

Figures 4(a) and 4(b) show the profits achieved by the different algorithms over the lifetime of the network under low network load (arrival rate = 4 tasks/hour) and high network load (arrival rate = 8 tasks/hour), respectively. Here, we set $d_{\mathrm{acc}} = 4$ for both the discretized and random location algorithms. These figures describe the behavior of the algorithms over time, in contrast to Figure 3, which only showed a condensed version of the results, that is, the sum of the profits over time with no regard to how and when the profits were achieved.

Figure 4: Profit over time for the different algorithm variations. Simulation time: 168 hours (1 week). Distance accuracy degree $d_{\mathrm{acc}} = 4$.

In Figure 4(a), we can see that all algorithms start with high profit values that decrease over time as sensors start to die. The exact algorithm starts as high as the optimal and then starts decreasing. The random and discretized algorithms fall below the exact algorithm and likewise decrease over time. What is interesting is the behavior of the energy-based algorithm, which starts close to the exact location algorithm but then decreases at a much lower rate than the others. This is because it utilizes the resources more evenly, which keeps them alive for a longer time and makes them available to serve more tasks in the future. If we consider the network to be alive as long as it can achieve 50% of the profits, then the energy-based location algorithm increases the lifetime by almost 40% compared to the other algorithms.

Figure 4(b) shows similar results when the load is increased on the network by doubling the task arrival rate to 8 tasks/hour. The difference here is that we see the achieved profits going down at a much faster rate. This is due to the higher competition between tasks and higher consumption of resources, which leads to sensors dying at a faster rate leaving fewer of them to serve future tasks.

The third set of results, in Figure 5, shows the percentage of alive sensors over the lifetime of the network under low and high task loads (see Figures 5(a) and 5(b), resp.). Here, we see behavior similar to the results shown in Figure 4, since the number of alive sensors directly affects the ability to satisfy tasks' requirements. As the number of alive sensors goes down, the network has less capability to achieve high profits. It is clear that the energy-based location algorithm can keep more sensors alive for longer since it utilizes the sensors more uniformly. Hence, in Figure 5(a) we can see that by the end of one week of operation the energy-based location algorithm was successful in keeping more than 80% of sensors alive, which allowed it to achieve more than 60% of the possible profits (Figure 4(a)). The other algorithms were able to keep less than 10% of the sensors alive and achieved only about 10% of the profits. When the load is increased to 8 tasks/hour (Figure 5(b)), we can see that by the end of the one-week operation time no algorithm was able to keep any of the sensors alive due to the increased demands of the tasks. It is clear, however, that the energy-based location algorithm was able to keep sensors alive for a longer time and hence was able to achieve some profits for a longer period compared to the other algorithms (see Figure 4(b)).

Figure 5: Percentage of alive nodes over time for the different algorithm variations. Simulation time: 168 hours (1 week). Distance accuracy degree $d_{\mathrm{acc}} = 4$.

5.3. Analysis of Location Privacy

Let us now define a privacy factor ($PF$) for different $d_{\mathrm{acc}}$ values as follows:
$$PF(d_{\mathrm{acc}}) = \frac{1}{d_{\mathrm{acc}} + 1}. \quad (8)$$

This factor is used for both the discretized and random location algorithms. When $d_{\mathrm{acc}} = 0$, $PF$ becomes 1, which corresponds to the highest degree of privacy. It then decreases as $d_{\mathrm{acc}}$ increases. This is intuitive, since as the distance accuracy increases we expect the privacy level to decrease. Figure 6 shows a plot of $PF$ as $d_{\mathrm{acc}}$ is increased from 0 to 7. We use this factor to define two notions of privacy: (1) sensor anonymity, similar to [8], and (2) sensor location privacy.

Figure 6: Location privacy.

For anonymity we use the notion of $k$-anonymity [23]. An algorithm provides $k$-anonymity if the proposal from one sensor cannot be distinguished from those of at least $k - 1$ other sensors. Let us assume that $N_s$ is the number of nodes within sensing range of a task and that the sensors are uniformly randomly distributed. The discretized location algorithm divides the sensors within the sensing range into equal-area rings. If we multiply $PF$ by $N_s$, we get the average number of sensors lying in a ring. Hence, for the discretized location algorithm, $k = PF \times N_s$. If $PF = 1$, then a proposing sensor could be any of the $N_s$ sensors, which provides the highest anonymity, that is, $N_s$-anonymity. On the other hand, if $PF$ approaches $1/N_s$, then $k$ approaches 1, which leads to almost no anonymity.

The same analysis applies to the random algorithm. When $d_{\mathrm{acc}} = 0$, a sensor adds a uniformly random value between 0 and $R_s$ to its distance. This makes it appear anywhere within the sensing range and hence makes all the sensors within the sensing range indistinguishable from each other; that is, it provides $N_s$-anonymity. When $d_{\mathrm{acc}} = 1$, sensors add a random value that ranges between 0 and $R_s/2$. For a particular sensor, this means that it can be located anywhere in a ring of width $R_s/2$ around the distance it has reported. This makes all sensors in that ring indistinguishable from each other; that is, it provides $N_s/2$-anonymity, and so on. This is an interesting result since, as can be seen from the results above, the random algorithm provides the same anonymity levels as the discretized location algorithm but provides much better performance.
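
To make the anonymity estimate concrete, here is a small numeric illustration of $k = PF(d_{\mathrm{acc}}) \times N_s$ for the discretized and random location algorithms; the value $N_s = 20$ is purely hypothetical:

```python
def privacy_factor(d_acc):
    """Privacy factor PF from equation (8)."""
    return 1.0 / (d_acc + 1)

def expected_anonymity(d_acc, n_s):
    """Expected anonymity set size k = PF(d_acc) * N_s, where N_s is the number
    of sensors within sensing range of the task."""
    return privacy_factor(d_acc) * n_s

for d_acc in (0, 1, 4, 7):
    print(d_acc, expected_anonymity(d_acc, n_s=20))  # 20.0, 10.0, 4.0, 2.5
```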

For the energy-based algorithm, the reported distance depends on the remaining energy and can range from the exact distance to $R_s$. Without knowledge of the amount of remaining energy, the task leader will not be able to guess the location and can only assume that the sensor is within the sensing range. This means that the algorithm provides $N_s$-anonymity. However, if the task leader assumes that energy is consumed uniformly by sensors, then it can make a better guess by taking into account both the initial energy of the sensor and the elapsed network lifetime. This, however, remains a guess, as there are other factors that may affect it, such as the number of tasks that previously occurred in that region and their durations.

Note that the anonymity level is affected by the network density. The more sensors deployed, the higher the value of $N_s$ and hence the better the anonymity.

A sensor's location privacy is defined as the size of the area in which the sensor may lie. If $R_s$ is the sensing range of a sensor, then for the discretized algorithm the area of the ring for a given $d_{\mathrm{acc}}$ is $PF \times \pi R_s^2$. For the random algorithm, $d_{\mathrm{acc}}$ affects the added random noise, which determines the size of the area in which the sensor may lie. Clearly, the larger the area, the better the location privacy, since a sensor can be located anywhere within that area. As mentioned earlier, the energy-based algorithm does not depend on $d_{\mathrm{acc}}$, and hence determining the location of sensors becomes harder.

Note that location privacy is affected by the sensing range. All else being equal, the larger the sensing range, the larger the ring area (or the added random noise) will be, which leads to better location privacy.

6. Concluding Remarks

In this paper, we proposed distributed algorithms for finding sensor-to-task assignments while taking into account location privacy and energy preservation issues. We focused on detection tasks, which require a number of sensors that are close to the location to be monitored. We extended the work in [8] by turning the focus to location privacy rather than savings in computational overhead, which was that work's main focus.

Our algorithms were designed to maximize the utility of the network, that is, the total profit achieved from all tasks, while preserving sensors’ location privacy. We proposed the random location algorithm, which adds random noise to the sensors' reported distances in order to hide their exact locations. The magnitude of the random noise depends on the location accuracy that we would like to achieve. We also proposed the energy-based location algorithm, which uses the remaining energy as a factor to randomize the actual location of the sensor. This allows the network to use the sensing resources more uniformly by preserving the energy of sensors that have a small amount of energy left and preferring the ones that have more energy.

The performance evaluation showed that our algorithms can successfully hide sensors' locations while providing performance close to that of the algorithms that use the exact locations. Our random algorithm was able to achieve results very close to those achieved by the exact algorithm while maintaining location privacy similar to that achieved by the discretized algorithm. We also found that the energy-based algorithm, while achieving results close to the exact location algorithm, was able to increase the network lifetime by as much as 40%.

It is important to note that in this paper we only considered directional sensors that can serve a single task at a time. Our algorithms may change if omnidirectional sensors are used. We also note that an adversary who would like to locate sensors may send repeated task requests from different locations to determine which sensors are located in the area in which they intersect. These two issues require additional work and can be an avenue for future research in this area.

Acknowledgment

This research was supported by King Fahd University of Petroleum and Minerals through the Deanship of Scientific Research under project JF100016.

References

  1. M. Perillo and W. Heinzelman, “Optimal sensor management under energy and reliability constraints,” in Proceedings of the IEEE Conference on Wireless Communications and Networking, March 2003.
  2. K. P. Shih, Y. D. Chen, C. W. Chiang, and B. J. Liu, “A distributed active sensor selection scheme for wireless sensor networks,” in Proceedings of the 11th IEEE Symposium on Computers and Communications (ISCC '06), pp. 923–928, June 2006.
  3. F. Aurenhammer, “Voronoi diagrams—a survey of a fundamental geometric data structure,” ACM Computing Surveys, 1991.
  4. J. Lu, L. Bao, and T. Suda, “Coverage-aware sensor engagement in dense sensor networks,” in Proceedings of the International Conference on Embedded and Ubiquitous Computing (EUC '05), December 2005.
  5. H. Rowaihy, S. Eswaran, M. Johnson et al., “A survey of sensor selection schemes in wireless sensor networks,” in Proceedings of the SPIE Defense and Security Symposium, April 2007.
  6. M. P. Johnson, H. Rowaihy, D. Pizzocaro et al., “Sensor-mission assignment in constrained environments,” IEEE Transactions on Parallel and Distributed Systems, vol. 21, no. 11, pp. 1692–1705, 2010.
  7. H. Rowaihy, M. P. Johnson, O. Liu, A. Bar-Noy, T. Brown, and T. La Porta, “Sensor-mission assignment in wireless sensor networks,” ACM Transactions on Sensor Networks, vol. 6, no. 4, 2010.
  8. H. Rowaihy, M. Johnson, D. Pizzocaro et al., “Detection and localization sensor assignment with exact and fuzzy locations,” in Proceedings of the International Conference on Distributed Computing in Sensor Systems (DCOSS '09), 2009.
  9. J. Deng, R. Han, and S. Mishra, “Countermeasures against traffic analysis attacks in wireless sensor networks,” in Proceedings of the 1st International Conference on Security and Privacy for Emerging Areas in Communications Networks (SecureComm '05), pp. 113–126, September 2005.
  10. J. Deng, R. Han, and S. Mishra, “Decorrelating wireless sensor network traffic to inhibit traffic analysis attacks,” Pervasive and Mobile Computing, vol. 2, no. 2, pp. 159–186, 2006.
  11. Y. Jian, S. Chen, Z. Zhang, and L. Zhang, “Protecting receiver-location privacy in wireless sensor networks,” in Proceedings of the 26th IEEE International Conference on Computer Communications (IEEE INFOCOM '07), pp. 1955–1963, May 2007.
  12. K. Mehta, D. Liu, and M. Wright, “Location privacy in sensor networks against a global eavesdropper,” in Proceedings of the 15th IEEE International Conference on Network Protocols (ICNP '07), pp. 314–323, October 2007.
  13. M. Shao, Y. Yang, S. Zhu, and G. Cao, “Towards statistically strong source anonymity for sensor networks,” in Proceedings of the 27th IEEE Communications Society Conference on Computer Communications (INFOCOM '08), pp. 466–474, April 2008.
  14. Y. Xi, L. Schwiebert, and W. S. Shi, “Preserving source location privacy in monitoring-based wireless sensor networks,” in Proceedings of the 20th International Parallel and Distributed Processing Symposium (IPDPS '06), April 2006.
  15. Y. Yang, M. Shao, S. Zhu, B. Urgaonkar, and G. Cao, “Towards event source unobservability with minimum network traffic in sensor networks,” in Proceedings of the 1st ACM Conference on Wireless Network Security (WiSec '08), pp. 77–88, April 2008.
  16. G. Anastasi, M. Conti, M. Di Francesco, and A. Passarella, “Energy conservation in wireless sensor networks: a survey,” Ad Hoc Networks, vol. 7, no. 3, pp. 537–568, 2009.
  17. G. Wang, G. Cao, T. La Porta, and W. Zhang, “Sensor relocation in mobile sensor networks,” in Proceedings of the IEEE INFOCOM, pp. 2302–2312, March 2005.
  18. S. Tutton, “Optimizing the allocation of sensor assets for the unit of action,” Tech. Rep., Naval Postgraduate School, 2006.
  19. P. Bose, P. Morin, I. Stojmenović, and J. Urrutia, “Routing with guaranteed delivery in ad hoc wireless networks,” Wireless Networks, vol. 7, no. 6, pp. 609–616, 2001.
  20. B. Karp and H. T. Kung, “GPSR: greedy perimeter stateless routing for wireless networks,” in Proceedings of the 6th Annual International Conference on Mobile Computing and Networking (MOBICOM '00), pp. 243–254, August 2000.
  21. A. Rao, S. Ratnasamy, C. Papadimitriou, S. Shenker, and I. Stoica, “Geographic routing without location information,” in Proceedings of the 9th Annual International Conference on Mobile Computing and Networking (MobiCom '03), pp. 96–108, September 2003.
  22. S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, 1999.
  23. L. Sweeney, “k-anonymity: a model for protecting privacy,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 5, pp. 557–570, 2002.