Journal of Computer Networks and Communications

Volume 2011 (2011), Article ID 195685, 19 pages

http://dx.doi.org/10.1155/2011/195685

## Alert: An Adaptive Low-Latency Event-Driven MAC Protocol for Wireless Sensor Networks

^{1}Department of Electrical Engineering and Computer Science, Wichita State University, Wichita, KS 67260, USA
^{2}Research and Technology Center, Robert Bosch Corporation, Palo Alto, CA 94304, USA

Received 18 March 2011; Accepted 29 August 2011

Academic Editor: Eduardo Da Silva

Copyright © 2011 Vinod Namboodiri and Abtin Keshavarzian. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Collection of rare but delay-critical messages from a group of sensor nodes is a key process in many wireless sensor network applications. This is particularly important for security-related applications like intrusion detection and fire alarm systems. An event sensed by multiple sensor nodes in the network can trigger many messages to be sent simultaneously. We present Alert, a MAC protocol for collecting event-triggered urgent messages from a group of sensor nodes with minimum latency and without requiring any cooperation or prescheduling among the senders or between senders and receiver during protocol execution. Alert is designed to handle multiple simultaneous messages from different nodes efficiently and reliably, minimizing the overall delay to collect all messages along with the delay to get the first message. Moreover, the ability of the network to handle a large number of simultaneous messages does not come at the cost of excessive delays when only a few messages need to be handled. We analyze Alert and evaluate its feasibility and performance with an implementation on commodity hardware. We further compare Alert with existing approaches through simulations and show the performance improvement possible through Alert.

#### 1. Introduction

With the transition of many automated tasks from a wired to a wireless domain, wireless sensor networks (WSNs) are increasingly being subjected to new application domains. Applications of critical nature have been the forte of wired networks due to their reliability. The ever increasing reliability of WSNs coupled with cost-effectiveness has led to their gradual adoption for such critical applications as well. The nature of such applications, however, requires new MAC protocols for WSNs that meet the requirements and inspire sufficient confidence about their usage.

The requirements for applications of critical nature can be fundamentally different from those of the applications for which current MAC protocols are designed. For example, energy is a valuable resource in sensor devices, and most existing MAC protocols are optimized to conserve energy, trading off latency, throughput, and other similar performance metrics in the process. These same protocols are typically not suitable when the application demands better performance at the expense of some additional energy. If latency is to be minimized, with energy consumption only a secondary issue, protocols need to be redesigned from that application perspective.

In this paper, we consider applications that require all wireless sensors to convey urgent messages to a centralized base station (i.e., a single hop away) with minimum delay from the time they are generated. These messages are triggered by events detected by sensor nodes and their task is to inform the base station for possible action. Such messages are triggered very rarely, and the aim is to focus on minimizing latency when they are triggered, even if some additional energy is expended during those times. (We describe in Section 3.1 why energy is not a concern in this application scenario and can be ignored.) Intrusion detection and fire alarm applications are some examples which require such a solution. Even though the messages are typically correlated, the collection of all triggered messages as opposed to one of them provides valuable information which can be used for detection of false positives or postevent analysis. (When sensors cover a large area, only a subset of these nodes will detect events and trigger messages to be sent to the base station. We require that all messages generated due to event detection by this subset be reported.) For example, the European Standard EN 54-25 for fire alarm systems specifies the duration within which the first alarm should be reported and by when all alarms must be received at the base station [1]. The challenges in designing WSN MAC protocols for such applications are the handling of a number of simultaneous messages at the same time without knowledge of how many, and planning for possible interference. Additionally, it is also important to ensure implementation feasibility taking into account the additional constraints imposed on WSNs like time synchronization and limited computation and storage capabilities.

We present the Alert MAC protocol that is designed to minimize latency when collecting simultaneous urgent messages. Alert minimizes contention among nodes by using a combination of time and frequency multiplexing. Multiple frequency channels are used within time slots and contention is minimized by controlling the selection probability of each channel by the nodes. Note that in spite of the use of multiple channels, we assume the presence of *only one transceiver* in all nodes including the receiver. The important features of Alert are the following: (a) minimizes delay of collecting first message as well as all messages; (b) noncarrier sense protocol; it thus eliminates hidden terminal collision problems; (c) dynamic shifting of frequency channels to provide robustness against interference; (d) adaptive characteristic enables operation without knowledge of number of contending nodes.

We make the following additional contributions in this paper. (i) Theoretical justifications for the choice of different design parameters of Alert. Our analytical results are of a fundamental nature and should prove useful for the design of other MAC protocols as well. (ii) Detailed performance analysis of Alert by comparing it with other existing protocols through simulations. These evaluations examine the protocols from an implementation perspective and take into account low level details like degree of time synchronization available. This further provides great insight into the design criteria for event-driven MAC protocols that focus on minimizing latency. (iii) Demonstration of feasibility and validation of analytical results through an implementation on commodity hardware. This validation step inspires the necessary confidence to trust Alert with applications of critical nature.

The rest of this paper is organized as follows. Section 2 presents the design space of MAC protocols in Wireless Sensor Networks. Section 3 presents our Alert protocol and describes some of its features in more detail. Section 4 presents some theoretical considerations in the selection of design parameters for Alert and our analytical results. Section 5 describes an adaptive algorithm used with Alert to handle cases where the number of contending nodes is unknown. In Section 6, we demonstrate the feasibility of implementing Alert and validate our analytical results. We further compare Alert with two other existing MAC protocols to point out its advantages. Concluding remarks are made in Section 7.

#### 2. Related Work

The related MAC protocols for wireless sensor networks can be mainly classified into contention-free, contention-based, and energy-saving protocols. Contention-free protocols are mainly the ones based on time-division multiple access (TDMA), where slots are assigned to each node by the base station and each node sends its message (if it has one) only during its assigned slot (e.g., GUARD [2]). Such TDMA-based protocols perform very poorly when the number of contending nodes is unknown or keeps varying. For the specific application targeted in this work, only a subset (of unknown size) of nodes may have events to report. This makes assigning slots to all nodes undesirable, as it leads to a significant increase in delay to receive the first message. For example, consider the case of only one node having a message to report. The average delay incurred to collect this message would be half the TDMA cycle, with the worst-case delay being the whole cycle. TDMA schemes are useful in cases where most of the nodes have events to report, a rare case for our scenario. Other contention-free approaches, for example, frequency-division multiple access (FDMA) [3], face similar limitations when an unknown or varying number of nodes makes scheduling difficult. The work by Chintalapudi and Venkatraman [4] designs a MAC protocol for low-latency application scenarios similar to those considered in this paper, but with some important differences. They assume multiple base stations while our solution requires only one base station. Also in their work, many of the concepts are based on a TDMA schedule, which has the same limitations as pointed out above. Finally, their model assumes that each message generated has its own deadline. In our scenario, we are indifferent to the order in which messages are received as long as constraints are met on the latency to receive the first message as well as all the messages.

Contention-based protocols can be divided into carrier sense multiple access- (CSMA-) based and non-CSMA-based protocols. The IEEE 802.11 and 802.15.4 protocols are examples of CSMA protocols, with the latter designed specifically for applications catered to by wireless sensor networks [5, 6]. They use a variable-sized contention window whose size is adjusted at each node based on the success of the node in sending its message, with each node picking a slot in this window using a uniform probability distribution. These protocols do a good job of handling scenarios with a small number of nodes but do not handle a large number of simultaneous messages well. For a detailed performance evaluation of contention window-based schemes and a description of the deficiencies of the IEEE 802.11 protocol, refer to [7]. The Sift protocol was designed to overcome these deficiencies for WSN applications which need to handle such a large number of event-driven, spatially correlated messages [8, 9]. Sift is also CSMA based but uses a fixed-size contention window. Nodes pick slots from a geometric probability distribution such that only a few nodes contend for the first few slots, which lets Sift handle a large number of messages easily. A variation based on replacing the uniform-distribution contention window of IEEE 802.11 with a *p*-persistent backoff protocol was presented in [10]. Protocols based on Aloha, on the other hand, do not sense the channel before transmission and rely on each node picking a slot to transmit on randomly, with the probability of transmission depending on the number of messages contending [11, 12]. When this number is not known, these protocols do not adapt well. In general, CSMA-based protocols outperform Aloha-based protocols when the propagation time between nodes is small enough to make carrier sense useful.
When the relative effectiveness of a CSMA-based and a TDMA-based scheme is unknown, a hybrid MAC protocol like Z-MAC can be used to adapt between these protocol types based on prevailing conditions [13].

Our protocol, Alert, is similar to Sift in that it uses a similar nonuniform distribution to control contention among nodes, but is non-CSMA based. Alert separates message transmissions across different frequency channels with this distribution while Sift does it over different time slots. This allows Alert to be free of hidden terminal issues while Sift is susceptible. (Aside from the performance perspective, operations performed at the sender side in Alert are comparatively much simpler than Sift or any other CSMA-based protocol. So fewer resources (less memory and less computational power) are required for implementation of Alert at individual nodes. This makes Alert a more cost-effective solution.) Alert extends this distribution to handle different interference levels as well. While Sift is also optimized for unknown number of messages, the adaptiveness of Alert allows it to perform well even when the number of contending messages is small.

The topic of energy-saving MAC protocols has been well researched for wireless sensor networks in the last few years, primarily due to the limited energy supply in these devices [14–16]. A more general framework and survey of MAC protocols for wireless sensor networks can be found in [17]. Our work focuses on applications for which latency is the primary concern and the goal is to let all urgent messages reach a receiver node as soon as possible. Energy is a concern as well, but the rare occurrence of messages in these applications allows the MAC protocol to focus solely on the latency aspects. With such applications, energy efficiency should be built into more common tasks carried out by each node like time synchronization. Some researchers have focused on reducing latency in the collection of messages at a sink node from source nodes through duty cycling approaches (e.g., [18, 19]). These types of low-latency protocols solve a fundamentally different problem, which is to sleep as often as possible while ensuring that messages reach the sink as fast as possible (with latencies on the order of seconds instead of milliseconds). Our work focuses on event-driven message generation where the receiver is always awaiting messages. The receiver is assumed to be wall powered or periodically rechargeable, and hence, its energy consumption is not an issue. The transmitters try to send messages when they have one, and the only latency that needs to be reduced is the one created due to contention between nodes trying to send messages at the same time.

In the preliminary version of this work in [20], we had presented some of the contributions above in terms of the Alert protocol. In this paper, there is additional emphasis on our theoretical results. We provide a full mathematical proof of the optimal channel probabilities to use with Alert. Further, theoretical results on the success probability of sending a message in an Alert slot as the number of nodes tends to infinity are given. Additionally, the optimal probability distributions for different optimization objectives are compared to each other.

#### 3. Protocol Description

We present details of our Alert protocol in this section along with a description of the associated design parameters. Theoretical considerations for selecting values for these design parameters will be presented in the following section.

##### 3.1. Alert Protocol Concepts

The protocol actions can be divided into those at a *sender*, a node that has a message to send, and a *receiver*, a node whose task is to collect messages from senders. The time is slotted into what we refer to as an Alert slot or simply a time slot. (We assume all the nodes in the network are time synchronized with each other. We use periodic broadcast beacons from our base station in our implementation in Section 6.) Each Alert slot can be used to exchange one data packet and its acknowledgment between a sender-receiver pair.

In each time slot, multiple frequency channels can be used by the senders or receiver. We denote the number of frequency channels in each slot by $m$. These channels have different priorities. (The topic of how priorities are calculated and assigned forms the basis for most of the analysis later in the paper.) The receiver samples them one by one based on their priority level and tries to receive a packet from one of the senders.

Each sender selects a frequency channel randomly and independently of all other senders, but the channels are *not selected with equal probability*. Less chance is given to select a higher-priority channel, that is, the selection probability decreases as we move toward higher priority channels. An example is shown in Figure 1 with $m$ channels and channel selection distribution $\mathbf{p} = (p_1, p_2, \ldots, p_m)$. This nonuniform distribution is prespecified and known to all senders. It is designed to reduce the chance of collision among the senders. We discuss its effect and how to find the optimum distribution in detail in subsequent sections.

Once a sender selects a frequency channel randomly based on the prespecified channel selection probabilities, it switches to the selected frequency and sends a long preamble before sending its data packet. (Note that the diagrams in Figures 1 and 2 do not represent the actual timing scales within a time slot. To present the idea, the sampling and preamble durations are shown much longer than their actual value compared to the packet and ack exchange duration.) After the data packet, the node expects an acknowledgment packet (ack) from the receiver. If the ack packet is received correctly, the sender stops, otherwise, it tries to send its message again in the next time slot.

At the beginning of each time slot, the receiver samples the signal level on each of the frequency channels, starting with the highest priority channel. If a high signal level (high RSSI; RSSI stands for received signal strength indicator) is sensed by the receiver, it stays on the same frequency (locking to that channel) and stops sampling any more channels. Then, the receiver waits to receive a packet. If a packet is received correctly, it sends an acknowledgment packet back in response; otherwise, after some fixed timeout period, the receiver stops and continues to the next Alert slot. If the sensed high signal on a channel is due to the simultaneous transmission of preambles by more than one sender, then it is very likely that the received packets are corrupted. (Note that a packet may still be received correctly due to the capture effect.) If the high signal is due to interference or noise, a packet never arrives from any node, and the receiver simply repeats the procedure in the following slot.

Note that a transmitter is not aware if a receiver has successfully “locked” onto its selected frequency and will thus always transmit the packet even if the receiver is waiting on some other channel. For the applications under consideration, a sender has a message to send very infrequently. For example, a typical system may encounter a situation that requires sending fire alarms only once or twice a year. The rarity of alarms allows for greater effort to be put into reducing latency even at the cost of some extra energy. The receiver, or centralized base station, is typically wall powered (AC-outlet) and its energy consumption is not an issue as mentioned in Section 2. This allows the focus on reducing contention among nodes reporting messages without worrying about receiver wakeup schedules.

While the number of channels, $m$, remains the same, the frequency of the channels (and their priority) can change across time slots. This is illustrated in Figure 2. The frequency table shown in this example follows a simple expression over 16 physical channels numbered from 0 to 15, where $c_i(t)$ represents the $i$th-priority frequency channel in the $t$th Alert slot.

We assume that the frequency table is prespecified, and all the nodes in the network know this pattern. Varying frequency channels after each time slot increases the reliability and robustness against channel fading and interference.
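The paper's exact frequency-table expression is not reproduced in this excerpt. As an illustrative sketch only, a hypothetical hopping rule with the properties described (a fixed number of physical channels, priorities shifting every slot, known to all nodes) could look like the following; `channel` and `NUM_CHANNELS` are assumed names, not the paper's actual formula:

```python
# Hypothetical frequency-hopping table: 16 physical channels, with the
# priority-to-channel mapping shifting cyclically each Alert slot.
# This is an assumed example, not the expression from the paper.
NUM_CHANNELS = 16

def channel(i: int, t: int) -> int:
    """Physical channel used for priority level i (0-based) in Alert slot t."""
    return (i + t) % NUM_CHANNELS

# Within any one slot, distinct priority levels map to distinct physical
# channels, and each priority level cycles through all 16 channels over time,
# which provides the robustness against fading and interference described.
```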

A summary of the protocol design space with all our assumptions is provided in Table 1 for convenience.

##### 3.2. Collision Avoidance

Alert is designed to avoid collision among senders such that in most time slots (with high probability), one message is received correctly. Therefore, the protocol can collect messages from all senders in as few time slots as possible. If there is only one sender, there will be no collision and the frequency channel selected by the sender does not matter as the receiver will find and lock to the frequency picked by the sender. If two nodes are contending to send their messages, the node that selects the higher priority channel will be successful since the receiver will hear its preamble/tone first and stay in the channel awaiting its packet. If both select the same channel, a collision occurs. The channel selection probabilities are designed to reduce the chance of collision. As the number of senders increases, it becomes more probable that one of the senders selects a higher priority channel, and if only one node selects the higher priority channel, the message from that sender will be successfully received. The illustration in Figure 3 shows three example cases where there are different number of senders.
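The slot-outcome rule above can be checked with a small Monte Carlo simulation. The sketch below (Python, with assumed example parameter values) draws each sender's channel from a nonuniform distribution, lets the receiver lock onto the highest-priority busy channel, and counts the slot as successful only when exactly one sender sits on that channel and every channel sampled so far is interference free:

```python
import random

def simulate_slot(n, p, q, rng):
    """One Alert slot: n senders pick channels per distribution p
    (index 0 = highest priority); each channel is interference free with
    probability q. Returns True iff the slot delivers a message."""
    m = len(p)
    counts = [0] * m
    for _ in range(n):
        counts[rng.choices(range(m), weights=p)[0]] += 1
    for i in range(m):
        if rng.random() > q:       # interference: receiver locks here, no packet
            return False
        if counts[i] > 0:          # first busy channel: receiver stays on it
            return counts[i] == 1  # success iff exactly one sender chose it
    return False                   # all channels idle (cannot happen if n >= 1)

rng = random.Random(7)
p = [0.1, 0.2, 0.3, 0.4]           # example nonuniform channel distribution
hits = sum(simulate_slot(3, p, 0.95, rng) for _ in range(50000))
print(hits / 50000)                # empirical slot success probability
```

The empirical rate closely tracks the analytical success probability derived in Section 4.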

##### 3.3. Design Parameters

Based on the protocol description, it should be clear that the two key design parameters are the *number of frequency channels* to use in the protocol and the *probability distribution* over the channels that minimizes the overall time to read all messages. A larger number of channels should decrease contention among the nodes that have messages to send. But this increases the size of a time slot and results in fewer time slots within a given period of time, exposing a tradeoff. The channel probability distribution controls the contention among the nodes. When the number of nodes with messages is large, assigning small probabilities to the higher priority channels ensures lower contention for those channels. This increases the chance that only one node chooses that channel. On the other hand, when the number of simultaneous messages is small, that is, the load is low, assigning small probabilities to the higher priority channels could lead to underutilization of these channels and higher utilization of the lower priority channels, resulting in collisions and an increase in the overall latency. In the following section, we construct a theoretical framework to select these design parameters to optimize the performance of our protocol.

For final deployment, each node should be preloaded with information about the number of channels that are to be used in a slot and the probability distribution from which to select them. These should be designed taking into account the application under consideration which might give some information about the expected number of messages and interference levels it should be able to handle.

#### 4. Analysis and Design

To select the right design parameters for the Alert protocol to minimize latency, we need to consider the probability that a message is sent successfully in a single slot and extend that to quantify the number of slots that will be required to read all outstanding messages. We begin our analysis by obtaining an expression for the success probability of a single slot.

Let $\mathbf{p} = (p_1, p_2, \ldots, p_m)$ represent a row vector of channel probabilities, with $p_i$ corresponding to channel $i$. In this section, we assume that the same channel probabilities will be used by each node in all slots; that is, $\mathbf{p}$ is not updated during the protocol execution. This allows for a simpler implementation. Also, the results of this section can be built upon by more sophisticated approaches (we show one such approach in the next section).

##### 4.1. Probability of Success in a Single Slot

We assume $n$ nodes are contending to send their message across in the slot (each node is assumed to have only one message to send). They select channels based on the probability distribution $\mathbf{p}$. Let random variables $X_i$ (for $i = 1, \ldots, m$) denote the number of nodes deciding to transmit on channel $i$. The variables $X_1, \ldots, X_m$ will have the joint multinomial distribution

$$\Pr(X_1 = k_1, \ldots, X_m = k_m) = \frac{n!}{k_1! \cdots k_m!}\, p_1^{k_1} \cdots p_m^{k_m}$$

for nonnegative integers $k_1 + \cdots + k_m = n$.

In order to model interference, we use an indicator random variable $I_i$ to specify whether channel $i$ is free of interference if and when the receiver samples the channel. We assume the $I_i$'s are independent and have the Bernoulli distribution, that is,

$$\Pr(I_i = 1) = q, \qquad \Pr(I_i = 0) = 1 - q. \tag{3}$$

Parameter $q$ represents the probability that a channel sees no interference. The value $q = 1$ corresponds to the ideal case of no interference on any channel. The assumption of independent interference on different channels is supported by the periodic switching of channels by all nodes in the Alert protocol. (Our analysis can be easily extended to consider a correlated model for interference on all channels.)

Assume that $i$ is a non-idle channel, that is, a channel selected by at least one of the nodes, and that node $u$ is transmitting on this channel. If any other node had transmitted (or interference had caused a high signal) on a channel before $i$ (in order of priority), the receiver would never have been waiting on channel $i$ to receive $u$'s packet. If another node also selects channel $i$, the messages from these two nodes will collide at the receiver. Ignoring the possibility of the capture effect, the collision results in a slot failure with no message succeeding in the whole slot, since the receiver will remain on channel $i$ for the whole slot. Thus, $u$ succeeds on channel $i$ if it is the only node transmitting on that channel (with no interference as well). So we have the following rule: *if there is only one node transmitting in the first non-idle channel (in order of priority), an Alert time slot will be successful.*

So a time slot is successful if, for some value of $i$ ($i$ between 1 and $m$), exactly one node selects the $i$th channel (event $X_i = 1$), no node selects any of the first $i-1$ higher-priority channels (event $X_1 = \cdots = X_{i-1} = 0$), and there is no interference on any of the first $i$ channels, that is, $I_1 = \cdots = I_i = 1$. So the probability of successful message delivery in a slot is

$$P_s = \sum_{i=1}^{m} \Pr(I_1 = \cdots = I_i = 1)\,\Pr(X_i = 1,\ X_1 = \cdots = X_{i-1} = 0),$$

where we assume that interference is independent of the activities of the nodes. Combining this with (3) and the multinomial distribution of the $X_i$, we find the probability of successful message delivery in a slot, for a known number of senders $n$ and channel probability distribution $\mathbf{p}$, as

$$P_s(n, \mathbf{p}) = \sum_{i=1}^{m} q^i\, n\, p_i \left(1 - \sum_{j=1}^{i} p_j\right)^{n-1}. \tag{7}$$
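The slot success probability is straightforward to evaluate numerically. A minimal sketch, using the notation as reconstructed here ($m$ channels, channel-clean probability $q$, and distribution $\mathbf{p}$ with index 0 as the highest-priority channel):

```python
def slot_success_prob(n, p, q=1.0):
    """P_s(n, p): probability that an Alert slot with n senders delivers
    exactly one message; q is the per-channel probability of no
    interference. p[0] is the highest-priority channel's probability."""
    ps, cum = 0.0, 0.0
    for i, pi in enumerate(p, start=1):
        cum += pi                                 # s_i = p_1 + ... + p_i
        ps += (q ** i) * n * pi * (1.0 - cum) ** (n - 1)
    return ps

# Example: two senders, two equiprobable channels, no interference.
# The slot succeeds exactly when the senders pick different channels.
print(slot_success_prob(2, [0.5, 0.5]))  # 0.5
```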

##### 4.2. Number of Time Slots

We use the random variable $T_n$ to denote the number of time slots required to collect $n$ messages. Next, we find the distribution of $T_n$ for a given probability distribution $\mathbf{p}$.

If we start with $n$ nodes, then in the first time slot, with probability $P_s(n)$, one message is successfully received and $n-1$ nodes remain to go in the next time slots, or, with probability $1 - P_s(n)$, no node is successful and we still have $n$ messages left. Hence, we can write the following recursive expression:

$$\Pr(T_n = t) = P_s(n)\,\Pr(T_{n-1} = t-1) + \bigl(1 - P_s(n)\bigr)\Pr(T_n = t-1), \tag{8}$$

where $\Pr(T_n = t)$ is defined as the probability that it takes $t$ time slots to collect all messages from $n$ nodes. We can solve (8) numerically using the initial conditions $\Pr(T_0 = 0) = 1$, $\Pr(T_0 = t) = 0$ for all $t \ge 1$, and $\Pr(T_n = 0) = 0$ for $n \ge 1$.
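The recursion lends itself to a direct dynamic-programming tabulation. A sketch in Python, with the slot success probability computed as in the single-slot analysis and assumed example parameter values (the function names are ours):

```python
def slot_success(k, p, q=1.0):
    """P_s(k, p): slot success probability with k contending senders."""
    cum, out = 0.0, 0.0
    for i, pi in enumerate(p, start=1):
        cum += pi
        out += (q ** i) * k * pi * (1.0 - cum) ** (k - 1)
    return out

def slots_pmf(n, p, q=1.0, t_max=2000):
    """Tabulate Pr(T_n = t) for t = 0..t_max via the recursion
    Pr(T_k = t) = Ps(k) Pr(T_{k-1} = t-1) + (1 - Ps(k)) Pr(T_k = t-1)."""
    pmf = [[0.0] * (t_max + 1) for _ in range(n + 1)]
    pmf[0][0] = 1.0                       # zero messages: done in zero slots
    for k in range(1, n + 1):
        s = slot_success(k, p, q)
        for t in range(1, t_max + 1):
            pmf[k][t] = s * pmf[k - 1][t - 1] + (1.0 - s) * pmf[k][t - 1]
    return pmf[n]

dist = slots_pmf(3, [0.1, 0.2, 0.3, 0.4], q=0.95)  # pmf of T_3
```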

We define the moment generating function (MGF) of $T_n$ as $\Phi_n(z) = E[z^{T_n}]$; then from (8), we get

$$\Phi_n(z) = \frac{P_s(n)\, z}{1 - \bigl(1 - P_s(n)\bigr) z}\,\Phi_{n-1}(z) = \prod_{k=1}^{n} \frac{P_s(k)\, z}{1 - \bigl(1 - P_s(k)\bigr) z}.$$

This shows that the random variable $T_n$ is the sum of $n$ independent Geometrically distributed random variables with parameters $P_s(k)$, that is,

$$T_n = \sum_{k=1}^{n} G_k, \qquad G_k \sim \text{Geom}\bigl(P_s(k)\bigr), \tag{11}$$

where the Geometric distribution $\text{Geom}(p)$ gives the number of Bernoulli trials needed to get the first success and is defined as $\Pr(G = t) = (1-p)^{t-1}\, p$ for $t = 1, 2, \ldots$

So from (11), we have

$$E[T_n] = \sum_{k=1}^{n} \frac{1}{P_s(k)}.$$

There is a simple intuitive explanation behind the distribution in (11): when we start with $n$ nodes, the chance of success in each slot is $P_s(n)$, so it takes $\text{Geom}(P_s(n))$ trials/slots for the first message to go through. With the first message received, there remain $n-1$ senders/messages, so the second message requires an extra $\text{Geom}(P_s(n-1))$ slots. So, in general, the $i$th message requires $\text{Geom}(P_s(n-i+1))$ slots.
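The geometric-sum view gives the expected collection time in closed form, which a direct slot-by-slot simulation can sanity-check. A sketch with assumed example parameters:

```python
import random

def slot_success(k, p, q=1.0):
    """P_s(k, p) as in the single-slot analysis."""
    cum, out = 0.0, 0.0
    for i, pi in enumerate(p, start=1):
        cum += pi
        out += (q ** i) * k * pi * (1.0 - cum) ** (k - 1)
    return out

def expected_slots(n, p, q=1.0):
    """E[T_n] = sum_{k=1}^{n} 1 / P_s(k): mean slots to collect n messages."""
    return sum(1.0 / slot_success(k, p, q) for k in range(1, n + 1))

def simulate_collection(n, p, q, rng):
    """Draw T_n by flipping a success coin with parameter P_s(k) while
    k messages remain (the geometric-sum view of T_n)."""
    slots, k = 0, n
    while k > 0:
        slots += 1
        if rng.random() < slot_success(k, p, q):
            k -= 1
    return slots

rng = random.Random(1)
p = [0.1, 0.2, 0.3, 0.4]
mc = sum(simulate_collection(3, p, 0.95, rng) for _ in range(20000)) / 20000
print(round(expected_slots(3, p, 0.95), 3), round(mc, 3))
```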

##### 4.3. Optimum Probability Distribution

The optimum distribution depends on the available information about the number of nodes and the interference level, and on the performance metric that is optimized. In this subsection, we present different methods to find the distribution $\mathbf{p}$ and the rationale behind each case.

###### 4.3.1. Maximizing $P_s(N)$ for a Given $N$

If we know (or estimate) the number of senders to be $N$, we can select $\mathbf{p}$ such that the probability of successful message delivery in one time slot, $P_s(N, \mathbf{p})$, is maximized for the given value of $N$. By maximizing $P_s(N)$, we guarantee that the average delay of receiving *the first message* is minimized when $N$ nodes are contending to send. The number of time slots needed to successfully receive the first message is a random variable with geometric distribution and mean $1/P_s(N)$. The optimum distribution in this case can be found using the following recursive expression:

$$q\, r_{i+1}^{N-1} = r_i^{N-2}\bigl(N r_i - (N-1)\, r_{i-1}\bigr), \quad i = 1, \ldots, m-1,$$

where $r_i = 1 - \sum_{j=1}^{i} p_j$ with $r_0 = 1$, and for the boundary, we have $r_m = 0$; the optimal probabilities then follow as $p_i = r_{i-1} - r_i$.
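Under the stationarity recursion as reconstructed here, $q\, r_{i+1}^{N-1} = r_i^{N-2}(N r_i - (N-1) r_{i-1})$ with $r_0 = 1$ and $r_m = 0$, the optimal distribution can be computed by a shooting search on $r_1$. The sketch below (assumed function names, assuming $N \ge 2$ and a unique feasible root) bisects on $r_1$ until the boundary condition is met:

```python
def optimal_distribution(N, m, q=1.0, iters=200):
    """Shooting method for the reconstructed recursion
    q * r_{i+1}^(N-1) = r_i^(N-2) * (N r_i - (N-1) r_{i-1}),
    bisecting on r_1 until r_m = 0. Returns p_i = r_{i-1} - r_i."""
    def run(r1):
        r = [1.0, r1]                     # r_0 = 1, trial r_1
        for _ in range(m - 1):
            rhs = r[-1] ** (N - 2) * (N * r[-1] - (N - 1) * r[-2]) / q
            if rhs <= 0.0:
                return r, -1.0            # sequence crossed zero: r_1 too small
            r.append(rhs ** (1.0 / (N - 1)))
        return r, r[-1]                   # r[-1] is r_m

    lo, hi = 0.0, 1.0                     # r_m grows with r_1, so bisect on it
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        _, rm = run(mid)
        if rm > 0.0:
            hi = mid
        else:
            lo = mid
    r, _ = run(hi)
    r[-1] = 0.0                           # enforce the boundary r_m = 0 exactly
    return [r[i] - r[i + 1] for i in range(m)]

p_opt = optimal_distribution(4, 4)        # e.g., N = 4 senders, m = 4 channels
```

Consistent with the discussion in the text, the resulting probabilities increase toward the lower-priority channels.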

This case is very similar to the results in Sift [8], and our result is a generalization of their solution with the addition of the effect of interference through parameter $q$. The proof is given by the following lemma.

Lemma 1. *Let $s_i = \sum_{j=1}^{i} p_j$ and $r_i = 1 - s_i$, with $r_0 = 1$ and $r_m = 0$. Then, given a probability distribution $\mathbf{p}$ that maximizes $P_s(N, \mathbf{p})$, we have, for $i = 1, \ldots, m-1$,*

$$q\, r_{i+1}^{N-1} = r_i^{N-2}\bigl(N r_i - (N-1)\, r_{i-1}\bigr). \tag{16}$$

*Proof.* We have the success probability

$$P_s(N, \mathbf{p}) = N \sum_{i=1}^{m} q^i\, p_i\, r_i^{N-1}.$$

Now, since $\partial r_i / \partial p_j = -1$ for $i \ge j$ and $0$ otherwise,

$$\frac{1}{N}\,\frac{\partial P_s}{\partial p_j} = q^j r_j^{N-1} - (N-1) \sum_{i=j}^{m} q^i p_i\, r_i^{N-2}.$$

For $j = m$, the right-hand side vanishes at the optimum because $r_m = 0$ (for $N > 2$), so the common value of the derivatives, the Lagrange multiplier of the constraint $\sum_i p_i = 1$, is zero, and every partial derivative must vanish. Equating the left-hand side to 0, we have

$$q^j r_j^{N-1} = (N-1) \sum_{i=j}^{m} q^i p_i\, r_i^{N-2}. \tag{19}$$

Splitting the right-hand side of (19) into the term for $i = j$ and the remaining sum gives

$$q^j r_j^{N-1} = (N-1)\, q^j p_j\, r_j^{N-2} + (N-1) \sum_{i=j+1}^{m} q^i p_i\, r_i^{N-2}.$$

Using (19) with $j$ replaced by $j+1$, the remaining sum equals $q^{j+1} r_{j+1}^{N-1}$, so

$$q^j r_j^{N-1} = (N-1)\, q^j p_j\, r_j^{N-2} + q^{j+1} r_{j+1}^{N-1}.$$

Dividing through by $q^j$ and rearranging, we get

$$q\, r_{j+1}^{N-1} = r_j^{N-2}\bigl(r_j - (N-1)\, p_j\bigr),$$

and substituting $p_j = r_{j-1} - r_j$ yields

$$q\, r_{j+1}^{N-1} = r_j^{N-2}\bigl(N r_j - (N-1)\, r_{j-1}\bigr),$$

which is (16). Applying this argument for every $j$ from 1 to $m-1$ completes the proof.

Note that for $N = 1$, the optimal probabilities are $p_1 = 1$ and $p_i = 0$ for all $i > 1$, since the success probability on later channels is smaller: they require all previous channels to be interference free as well. With these probabilities, the probability of success for $N = 1$ becomes $P_s(1) = q$.

###### 4.3.2. Minimizing $E[T_N]$ for a Given $N$

Here, we find the probability distribution $\mathbf{p}$ such that the expected number of time slots needed to collect from $N$ senders, $E[T_N]$, is minimized. In this way, we guarantee that *all messages* are collected in as few time slots as possible. The optimization problem can be solved by using numerical methods and gradient descent techniques [21]. The gradient of $E[T_N]$ is

$$\frac{\partial E[T_N]}{\partial p_j} = -\sum_{k=1}^{N} \frac{1}{P_s(k)^2}\,\frac{\partial P_s(k)}{\partial p_j},$$

and we have

$$\frac{\partial P_s(k)}{\partial p_j} = k\left[q^j r_j^{k-1} - (k-1)\sum_{i=j}^{m} q^i p_i\, r_i^{k-2}\right]$$

for $k \ge 2$ and $j = 1, \ldots, m$. For $k = 1$, we have $\partial P_s(1)/\partial p_j = q^j$.
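A simple numerical treatment of this minimization can be sketched as follows. The code below uses a finite-difference gradient over the probability simplex and accepts only improving steps; it is an illustrative sketch with assumed names and parameters, not the exact routine from [21]:

```python
def slot_success(k, p, q=1.0):
    """P_s(k, p) as in the single-slot analysis."""
    cum, out = 0.0, 0.0
    for i, pi in enumerate(p, start=1):
        cum += pi
        out += (q ** i) * k * pi * (1.0 - cum) ** (k - 1)
    return out

def expected_slots(N, p, q=1.0):
    """E[T_N] = sum_{k=1}^{N} 1 / P_s(k)."""
    return sum(1.0 / slot_success(k, p, q) for k in range(1, N + 1))

def minimize_expected_slots(N, m, q=1.0, steps=300, lr=0.05, eps=1e-6):
    """Greedy projected descent on E[T_N], starting from uniform."""
    p = [1.0 / m] * m
    best = expected_slots(N, p, q)
    for _ in range(steps):
        grad = []
        for j in range(m):
            bumped = list(p)
            bumped[j] += eps
            total = sum(bumped)
            bumped = [x / total for x in bumped]   # stay on the simplex
            grad.append((expected_slots(N, bumped, q) - best) / eps)
        trial = [max(1e-9, x - lr * g) for x, g in zip(p, grad)]
        total = sum(trial)
        trial = [x / total for x in trial]
        val = expected_slots(N, trial, q)
        if val < best:                             # accept only improvements
            p, best = trial, val
    return p, best

p_star, e_star = minimize_expected_slots(3, 4, q=0.95)
```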

###### 4.3.3. Maximizing $P_s$ for a Given Range of $n$

If we do not know the number of nodes in the system, we can select the distribution such that the probability of successful message delivery in a time slot is high across a given range $n_{\min} \le n \le n_{\max}$. Essentially, we try to find the solution which maximizes the minimum value of $P_s(n, \mathbf{p})$ in the given range, that is,

$$\mathbf{p}^* = \arg\max_{\mathbf{p}} \ \min_{n_{\min} \le n \le n_{\max}} P_s(n, \mathbf{p}).$$

We used the subgradient method [21] to solve this optimization problem numerically.

##### 4.4. Comparison of Optimal Distributions

We now present numerical examples and compare the different methods introduced in the previous subsection for selecting the probability distribution $\mathbf{p}$.

Figure 4 shows the success probability of an Alert slot, $P_s$, as a function of the number of senders, $n$, for different values of $N$, with $m$ channels and interference parameter $q$. In Figure 4(a) (case 1), the probability distribution is calculated to maximize $P_s(N)$. In Figure 4(b) (case 2), the probability distribution is found to minimize $E[T_N]$, the expected number of time slots required to receive from $N$ nodes. Figure 4(c) (case 3) corresponds to the case where $\mathbf{p}$ is found such that the minimum of $P_s(n)$ over a given range of $n$ is maximized.

Note that there are two variables representing the number of nodes or senders: $N$, denoting the estimated number of senders based on which the distribution $\mathbf{p}$ is calculated, and $n$, the actual number of senders (the horizontal axis).

For the first case (Figure 4(a)), the probability of success is maximized if the estimate is correct, that is, for $n = N$, but for smaller values of $n$, the probability of success decreases. So for this case, the delay of getting the first message is smallest. The second case, on the other hand, puts the emphasis on collecting all messages as fast as possible. The third case provides a flatter $P_s$ across all values of $n$.

Figure 5 shows the average number of time slots required to receive the first message or all the messages in the network for the three cases. The results are calculated for $N$ messages, with $m$ channels, and $q = 0.95$ (interference probability of 5%). For the correct number of senders, $n = N$, we see that case 2 (minimizing $E[T_N]$) achieves the smallest number of time slots (on average) to collect all messages; case 3 follows closely, then we have case 1. This is expected, as the optimization objective in case 2 is the delay to collect from all nodes. Case 1 (maximizing $P_s(N)$) gives the best delay for collecting the first message at $n = N$, but for smaller values of $n$, the delay of the first message and the overall delay to collect all messages are worse than in the other two cases. Case 3 (maximizing the minimum of $P_s$ over a range) gives an overall good performance between the two other cases and tries to keep both the delay of collecting all messages and the delay of the first message low for the whole range of $n$.

###### 4.4.1. Asymptotic Limit of the Success Probability for Large Numbers of Senders

It is interesting to see how the probability of successful message delivery in a time slot scales as the number of senders $n$ grows. If we fix the probability distribution $p = (p_1, \ldots, p_K)$ over the $K$ channels, it is easy to see that the success probability tends to zero as $n \to \infty$. However, if we let the distribution change with $n$, we can get nonzero limits for the probability of success. These limits represent asymptotic bounds on the performance of Alert. A distribution which gives a nonzero limit should have the form $p_k = c_k/n$ for $k = 1, \ldots, K$ and constants $c_k > 0$, which gives

$$P_\infty = \sum_{k=1}^{K} c_k\, e^{-\sum_{j=1}^{k} c_j}.$$

We claim that the values of $c_k$ which maximize $P_\infty$ can be found from the recurrence $c_k = 1 - e^{-c_{k+1}}$ for $k = K-1, \ldots, 1$, with $c_K = 1$. These values give the following asymptotic (upper) bound on the probability of successful message delivery in a time slot:

$$P_\infty^{*} = e^{-c_1}.$$

The proof of this result can be obtained as follows.

From (7), the probability of success with $n$ senders is

$$P_s(n) = \sum_{k=1}^{K} n\, p_k \left(1 - \sum_{j=1}^{k} p_j\right)^{n-1}.$$

If the distribution $p$ is fixed and $p_1 > 0$, then the limit of $P_s(n)$ as $n \to \infty$ is zero. This follows from the fact that $n\,x^{n-1} \to 0$ as $n \to \infty$ for any $0 \le x < 1$. Note that with $p_1 > 0$, it is guaranteed that $1 - \sum_{j=1}^{k} p_j < 1$ for all $k$.

In order to get a nonzero limit, we need to let the vector $p$ be a function of $n$ as well. The form $p_k = c_k/n$ gives a simple expression for $P_s(n)$ which has a nonzero limit, since $n\,p_k (1 - \sum_{j \le k} c_j/n)^{n-1} \to c_k\, e^{-\sum_{j \le k} c_j}$, with the following limit as $n \to \infty$:

$$P_\infty = \sum_{k=1}^{K} c_k\, e^{-\sum_{j=1}^{k} c_j}.$$

The values of $c_k$ which maximize $P_\infty$ should satisfy $\partial P_\infty / \partial c_k = 0$; carrying out the differentiation gives the following conditions for the optimum values of $c_k$:

$$(1 - c_k)\, e^{-\sum_{j=1}^{k} c_j} = \sum_{i=k+1}^{K} c_i\, e^{-\sum_{j=1}^{i} c_j}, \qquad k = 1, \ldots, K. \qquad (45)$$

For $k = K$, the sum on the right-hand side is empty, and the previous condition simplifies to $c_K = 1$. For $k < K$, we expand the sum, take out the first term (the term for $i = k+1$), and factor $e^{-\sum_{j=1}^{k+1} c_j}$ out of the remaining terms in the sum:

$$(1 - c_k)\, e^{-\sum_{j=1}^{k} c_j} = e^{-\sum_{j=1}^{k+1} c_j} \left( c_{k+1} + e^{\sum_{j=1}^{k+1} c_j} \sum_{i=k+2}^{K} c_i\, e^{-\sum_{j=1}^{i} c_j} \right).$$

Note that, by condition (45) applied at $k+1$, the expression in parentheses equals $c_{k+1} + (1 - c_{k+1}) = 1$. So, we obtain the following recursive expression to solve for $c_k$ in terms of $c_{k+1}$ (with initial value $c_K = 1$):

$$c_k = 1 - e^{-c_{k+1}}, \qquad k = K-1, \ldots, 1.$$

The value of $P_\infty$ for the optimum $c_k$ can be found from (42) and (45) as follows: applying condition (45) at $k = 1$ gives $\sum_{i=2}^{K} c_i\, e^{-\sum_{j \le i} c_j} = (1 - c_1)\, e^{-c_1}$, so that

$$P_\infty^{*} = c_1\, e^{-c_1} + (1 - c_1)\, e^{-c_1} = e^{-c_1}.$$

We get close to this asymptotic bound even for relatively small numbers of senders; with a moderate number of channels, the maximum achievable success probability is already close to the bound for modest values of $n$.
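The recursion and the resulting bound are easy to evaluate numerically. The sketch below iterates the recurrence downward from $c_K = 1$ and returns the limiting success probability $e^{-c_1}$; for a single channel it reduces to the classic slotted-Aloha limit of $1/e$. Variable and function names are ours.

```python
import math

def asymptotic_success_bound(K):
    """Limiting slot success probability for K prioritized channels as the
    number of senders grows, via c_k = 1 - exp(-c_{k+1}) with c_K = 1.
    The bound itself is exp(-c_1)."""
    c = 1.0                      # c_K = 1
    for _ in range(K - 1):       # iterate the recurrence down to c_1
        c = 1.0 - math.exp(-c)
    return math.exp(-c)
```

With one channel this gives about 0.368; adding channels pushes the bound toward 1, with diminishing returns.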

##### 4.5. Optimum Number of Channels

When multiple nodes contend to send messages in a slot, a larger number of channels *decreases the contention* among them by increasing the likelihood that they select different channels. Thus, one might think that we should use as many channels per slot as we can. However, there are practical considerations that present a tradeoff. For each channel used, the receiver has to sample it and switch to the next channel; thus, each channel adds a sensing-plus-switching delay to the size of a time slot. On the one hand, as we increase the number of channels, the length of each time slot increases; on the other hand, with more channels, we get better success probabilities and can therefore collect all messages in fewer time slots. We select the number of channels to optimize this tradeoff and minimize the overall delay.

To find the optimum number of channels, we need some timing constants from the radio. In general, the length of a time slot is the number of channels times a per-channel sampling time, plus a constant term: the per-channel term is the time the receiver needs to sample the presence of a tone/preamble on one channel and switch to the next, and the constant term covers the packet, the ack message, and other fixed overheads in one time slot (see Figure 11 for more details). Since the number of channels is a positive integer bounded by a small number (the total number of available channels), we elected to calculate the optimum probabilities for each candidate number of channels and pick the optimum as the one which minimizes the expected overall delay to collect from all nodes. (Considering each candidate value has additional benefits, as explained in the following section.)

Figure 6 shows the normalized delay as a function of the number of channels for several numbers of senders. For the timing constants, we used representative values based on our measurements with the implementation on Bosch CC2420 node boards (see Section 6). The optimum number of channels which minimizes the delay is shown on each graph by a filled marker. The optimum value depends on the number of senders and the timing constants. However, it is not very sensitive to them: close to the optimum, the curves are flat, so we can increase or decrease the number of channels by one or two and expect almost the same performance.
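The tradeoff can be illustrated with a toy delay model. The timing constants, the uniform placeholder distribution, and the rough "slots ≈ messages / per-slot success probability" approximation below are all illustrative assumptions; the paper instead uses optimized probabilities and the exact expected delay.

```python
def delay_for_channels(n, K, t_sense=0.5e-3, t_const=5e-3):
    """Approximate expected time (seconds) to collect n messages using K
    channels: slot length grows linearly in K, while more channels raise
    the per-slot success probability."""
    p = [1.0 / K] * K            # placeholder: uniform channel probabilities
    ps, cum = 0.0, 0.0
    for pk in p:                 # prioritized-channel success probability
        ps += n * pk * (1.0 - cum - pk) ** (n - 1)
        cum += pk
    if ps == 0.0:
        return float("inf")      # e.g., K = 1 with several senders always collides
    slots = n / ps               # crude: treat success probability as constant
    return slots * (K * t_sense + t_const)

# pick the candidate channel count with the smallest modeled delay
best_K = min(range(1, 9), key=lambda K: delay_for_channels(10, K))
```

Under this toy model, a handful of channels beats one or two by a wide margin, and the benefit of each extra channel shrinks as the per-slot sensing overhead accumulates.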

So far, we have found the optimal channel probabilities to be used by nodes for known values of the number of senders and the interference probability. This can be useful if the application is such that every node can fairly accurately estimate the number of nodes that will respond to any event it detects, within an environment with known, unvarying interference levels.

##### 4.6. Impact of Interference on Probability of Successful Message Delivery in a Slot

Given the number of channels, the estimated number of senders, and the interference probability, the optimal channel probabilities can be calculated based on (14). In this subsection, we numerically evaluate the impact that various interference levels and sender counts have on the probability of successful message delivery in a slot. These results are important because, in subsequent sections, we will assume we do not know the level of interference during protocol execution. Using (14), we study the sensitivity of the optimum channel probabilities to the interference level. The aim was to see whether an estimate of the interference level would be sufficient to compute close-to-optimal probabilities. We calculated the optimal probabilities for three values of the interference probability, including 0.1, for various numbers of senders with the number of channels kept fixed. The results are plotted in Figure 7. (For an interference probability of 0.1 and large numbers of senders, some values are undefined, and hence those data points are missing.)

It can be seen that the probability of successful message delivery in a slot does not change much across the interference values on which the channel probabilities were calculated. This is because the computed channel probabilities themselves do not vary much across different interference levels. This does not mean that interference has no effect on the probability of success in a slot; as the actual interference level varies, the slot success probability decreases, as shown in the plots. We saw similar results for other numbers of channels as well. Thus, we can conclude that a reasonable estimate of the interference level is good enough for calculating optimal probabilities.

#### 5. Adaptive Algorithm for Alert

The previous section presented different theoretical approaches to minimize the number of time slots needed to collect all messages. These approaches used a single set of channel probabilities throughout the execution of Alert. Though this results in a simpler implementation, we can do better by adapting the probabilities as the protocol executes. We present one such approach in this section, where the probabilities are updated every time unit. We thus seek a sequence of channel probability vectors, one per time unit, or equivalently a matrix whose rows are the per-unit distributions and whose number of rows is the number of time units required for reading all messages. A time unit can be either a single time slot or a whole frame of time slots in the context of Alert. (The actual implementation of Alert is based on the concept of frames. A specific amount of time per frame is allocated for reading messages, with the rest used for other operations like time synchronization and maintenance, among others. This time can be divided into any number of slots, depending on the size of a slot.) In this section, we specifically look at the case where no knowledge of the number of messages is assumed and present an adaptive algorithm (Algorithm 1). We believe this to be the most important case for the protocol to handle. For simplicity, we assume that all initial messages arrive simultaneously. This is justified when messages are rare but correlated, typically occurring when some event is observed by a subset of the nodes, which then generate their messages at the same time. Generation of alarms by surveillance systems is one such example.

We begin by describing how we calculate the channel probabilities for our proposed adaptive algorithm. Then, we give the details of our algorithm, along with our strategy to calculate the number of channels to be used per slot, considering that we do not know the number of nodes sending messages.

##### 5.1. Calculation of Channel Probabilities

The channel probability distribution controls the contention among nodes with messages to send in a time unit. The distribution used for each time unit must control contention such that the probability of successful message delivery in that time unit is maximized. As the protocol progresses, maintaining our assumption of simultaneous message arrivals, the number of remaining messages decreases as messages are successfully received by the base station. Thus, for each slot after the first, a new distribution must be calculated taking into account the current number of remaining messages. For a large initial number of messages, the time to read all messages can be quite long, and recalculating the distribution before every slot can prove infeasible. Moreover, we would prefer that the distributions used in each slot be precalculated and stored in memory. Thus, due to the computational and memory limitations of wireless sensor devices, we adopt a *per-frame* recalculation strategy.

Consider the number of messages that go through in a frame when a given number of messages contend at the beginning of the frame. For each frame, we seek the channel probabilities that maximize the expected number of messages read in that frame. This can be solved through numerical methods using a recursive algorithm, or through Markov chains in conjunction with (14). The distribution is updated at the beginning of each frame. The number of remaining messages is updated at the end of each frame by subtracting the expected number of messages read under the distribution used in the frame. Thus, the set of distributions takes the form of a matrix with one column per channel and one row per frame expected to be required to read all messages; each row is the distribution to use in the corresponding frame. The steps to calculate this matrix are given in Algorithm 2.

This procedure calculates the number of frames required to read all messages down to some small threshold and then repeats the last distribution in the final frame. The threshold is used to ensure that an underestimate of the number of remaining messages (compared to the actual value) does not result in channel probabilities that make it almost impossible for further messages to go through. (For example, channel probabilities chosen when the estimated number of messages is 2 but the actual number is 10 will not allow any of the 10 messages to go through and may delay reading any message for the next 100–500 attempts. On the other hand, if the estimated number is 10 and the actual number is 2, the additional delay is much smaller, on the order of 1–10 extra attempts. So, by setting the threshold to some small value, say 10, we ensure that the estimated number of remaining messages still allows further messages to go through with high probability.)
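A minimal sketch of the per-frame schedule, in the spirit of Algorithm 2, follows. The uniform placeholder distribution, the expected-reads model, and all names are our assumptions; the paper instead solves for the optimal distribution per frame via (14).

```python
def frame_schedule(n_est, K, slots_per_frame, n_min=10, max_frames=1000):
    """Build one channel distribution per frame, decrementing the estimate
    of remaining messages by the expected number read per frame, stopping
    at the small floor n_min (the last row is reused for the final frame)."""
    schedule, n = [], float(n_est)
    while n > n_min and len(schedule) < max_frames:
        p = [1.0 / K] * K        # placeholder; optimally this depends on n
        ps, cum = 0.0, 0.0
        for pk in p:             # per-slot success probability with n contenders
            ps += n * pk * (1.0 - cum - pk) ** (n - 1)
            cum += pk
        schedule.append(p)
        # expected messages read this frame, never dropping below the floor
        n -= min(ps * slots_per_frame, n - n_min)
    return schedule
```

Each row of the returned matrix is the distribution for one frame, mirroring the per-frame recalculation strategy described above.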

##### 5.2. Adaptive Algorithm Details

When the actual number of messages to be sent is unknown, we desire that the protocol itself estimate it and calculate the corresponding design parameters for this estimate. We employ an adaptive approach where we vary a node’s estimate of the number of senders until it succeeds in sending its message. The algorithm starts by selecting the number of channels to use per slot based on some guess of the initial number of nodes (or messages, with our assumption of one message per node), which need not be accurate due to the insensitivity of the optimal number of channels to the number of nodes. Further details of this calculation are explained in Section 5.3. The algorithm sets its estimate of the number of messages to a small value as the starting point of its *upward* adaptation to larger values. Upward adaptation is chosen because moving to a different estimate only requires waiting a short time before realizing that the current estimate may be incorrect. The interference level is a deployment-environment-specific parameter that is measurable and, hence, can be estimated; as mentioned in Section 4.6, a reasonable estimate is enough. In the next step, the algorithm finds the optimal channel probabilities and the expected maximum number of frames within which the message will be read for this estimate. If, after that many frames of protocol execution, the node’s message has still not succeeded, it increases the estimate by a constant additive factor and recalculates the channel probabilities and frame count to use in the subsequent frames. The algorithm continues until the message is finally read, after which the node resets to the initial state and, on the next event, reexecutes the algorithm. The algorithm is designed such that all the channel probabilities and frame counts can be precalculated and used from memory: the same initial estimate and constant increment ensure that we only need to store values for the estimates that can actually occur, which form a fixed sequence up to some maximum limit determined by the largest supported estimate or the memory capacity available.
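The adaptation loop can be sketched as follows. The per-frame success model, the fixed frame budget, and all names are toy stand-ins: the real algorithm derives the frame budget and channel probabilities from the analysis of Section 4.

```python
import random

def adapt_until_read(actual_n, n0=10, step=10, seed=0):
    """Upward adaptation of the sender-count estimate: wait out the frame
    budget for the current estimate; if the message is still unread,
    bump the estimate additively and try again."""
    rng = random.Random(seed)
    n_est, frames_used = n0, 0
    while True:
        frames_budget = 3                    # stand-in for the computed frame count
        # toy model: per-frame success rises as the estimate nears the truth
        p_frame = min(1.0, n_est / (2.0 * actual_n))
        for _ in range(frames_budget):
            frames_used += 1
            if rng.random() < p_frame:
                return n_est, frames_used    # message read; node resets
        n_est += step                        # budget exhausted: estimate too low
```

Because the estimate only takes values in a fixed arithmetic sequence, the corresponding probabilities and frame budgets can all be precomputed and stored, as the text notes.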

##### 5.3. Number of Channels Per Slot

When the Alert protocol is deployed, we desire that only a single value for the number of channels per slot be used: changing the number of channels per slot dynamically, as the number of messages remaining to be read changes, is difficult to implement in practice. Next, we describe our method to derive the number of channels, given in Algorithm 3. It begins with some initial guess of the number of senders, not necessarily close to the actual number, but not a small value. The optimal number of channels to use is calculated based on this guess. Because we do not recalculate it, we need to ensure that the initial guess gives us a good number of channels for the rest of the protocol execution, as the adaptive estimate of the number of senders changes. From our earlier analysis (mentioned in Section 4.5), the optimal number of channels was found to be quite insensitive to the number of senders.

The calculation of the number of channels uses the same method explained in Section 4.5, but with a small difference. The value that is optimal for reading all messages with minimum delay may not be good for getting the first message through with minimum delay, for which a larger number of channels might be better. So, we introduce a design factor by which we consider possibly larger numbers of channels, as long as they do not incur an expected time penalty greater than the design factor times the expected time to read all messages with the delay-optimal number of channels. The design factor allows us to control the optimization criterion: a value of zero selects the number of channels that strictly optimizes the time to read all messages, while larger values increasingly optimize the time to send the first message by considering larger numbers of channels.
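The selection rule can be made concrete as below, assuming we already have (from any delay model) the expected all-message delay for each candidate channel count; the dictionary input and the "largest eligible count" tie-break are our reading of the description.

```python
def pick_channel_count(delays, beta):
    """delays: mapping from candidate channel count K to expected time to
    read all messages. Allow any K whose delay is within a factor
    (1 + beta) of the best, then prefer the largest such K, which tends
    to improve first-message delay."""
    best = min(delays.values())
    eligible = [K for K, d in delays.items() if d <= (1.0 + beta) * best]
    return max(eligible)
```

With delays `{2: 10.0, 4: 10.5, 8: 13.0}`, a design factor of 0 picks 2 channels, 0.1 picks 4, and 0.5 picks 8.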

#### 6. Protocol Evaluation

In this section, we evaluate the feasibility and performance of Alert through an implementation on commodity hardware and also simulations. We begin with our implementations focusing on the feasibility of Alert.

##### 6.1. Feasibility of Alert

We implemented the Alert protocol on Bosch CC2420-based wireless nodes. The Bosch boards use the Chipcon/TI CC2420 radio, an IEEE 802.15.4-compliant transceiver operating in the 2.4 GHz band with direct sequence spread spectrum (DSSS) O-QPSK modulation and a 250 kbps data rate. An external power amplifier (maximum transmit power 10 mW) is used to improve the communication range.

In the first experimental setup, we deployed 15 senders in an office in Palo Alto, California, as shown in Figure 8. The receiver (base station) was in communication range of all nodes and kept them in sync by sending periodic beacon messages. Every second, all the nodes sent a message simultaneously using the Alert protocol with a fixed channel probability distribution. The receiver measured the number of time slots to receive the first message and the number of time slots to collect all 15 messages. Each time slot is 8 ms long.

Figure 10 shows the measured distribution of the number of time slots (for both the first message and all messages). We see that the Alert protocol performs *better* in the real deployment than the analysis predicts. The calculations show that, on average, we need 24.82 time slots to collect from all 15 nodes, but our experiment gives an average of 17.60 time slots. This improvement in performance is mainly due to the *capture effect*: when two nodes send simultaneously, our analysis assumes a failure, but in many cases the receiver can correctly decode one of the packets while treating the signal from the other sender as noise. Since the CC2420 radio employs spread spectrum techniques, it can tolerate higher levels of interference, which increases the chance of capture. Note that Alert, by reducing the number of contending nodes on higher-priority channels, increases the likelihood of the capture effect.

To validate our analysis, we reduced the chance of the capture effect by repeating the experiment with a different setup. In the second setup, all 15 nodes were placed close to each other and close to the receiver on a table. The network configuration is shown in Figure 9. Since all nodes are close to one another, the received power at the receiver from all senders is high and nearly equal, which reduces the chance of capture. The results are shown in Figure 10. We see that the measured distribution with the second setup matches very closely what the analysis predicts.

##### 6.2. Simulation Setup

Using simulations enables us to evaluate the performance of Alert with a large number of nodes with messages to send, that is, for scalability, and also to compare against other protocols. We compare Alert with two other contention-based MAC protocols—Sift [8] and Slotted Aloha (S-Aloha) [11]. Sift was chosen because it is a CSMA-based protocol (unlike Alert) and was previously shown to do better than variable-contention-window protocols like 802.11 for the target application scenario (refer to Section 2 for more details). S-Aloha is a simple protocol that allows each time slot to be very small, possibly providing advantages in reducing the time required to read messages. We believe comparisons of Alert with these two protocols cover a wide design space of MAC protocols for the target application. We discussed the infeasibility of other possible MAC protocols (e.g., TDMA) in Section 2.

For the evaluations, we wrote a simulator in MATLAB. The important abstraction was the concept of time across the different protocols from an implementation perspective. Interference was modeled as described in Section 4. All protocols send messages to the receiver (or base station) in fixed time slots. (The 802.15.4 protocol also has a fixed slot structure, with both a contention access period and a contention-free period within each frame [6].) The size of a time slot is different for each protocol (but of *fixed size*), based on how it is used. The timing of all three protocols as used in our simulations is shown in Figure 11. The relevant timing constants are the guard time plus the RX/TX switching time, the maximum clock skew, the channel sensing time, the channel switching time, and the total time to exchange a packet and an ack. The values used for these constants were based on the implementation of Alert and measurements on the CC2420 radio.

In the S-Aloha protocol, each node tries to send its message in a slot with probability equal to the reciprocal of the current estimate of the number of messages, until it finally succeeds in doing so. We let the S-Aloha protocol use the same methodology of adapting this estimate as Alert and chose an initial value and additive increment that gave the best results. For S-Aloha, a slot duration consists of the guard time plus RX/TX switching time, a single adjustment for clock skew, and the time to exchange a packet and an ack.

Sift uses a fixed contention window (CW) size and relies on a geometric probability distribution with which nodes pick a backoff slot for transmission. Once a node counts down to its chosen backoff slot and is the only one that has chosen that slot, it completes the packet and ack exchange with the receiver, and all nodes move on to the beginning of the next protocol time slot. For simplicity, we do not implement RTS/CTS with Sift and do not consider the hidden terminal effect in our evaluations. (We will mention how that consideration would affect the comparison between the protocols when we present our results.) The Sift slot duration consists of the guard time, the backoff slots (each of length equal to the sum of the adjustment for clock skew and the channel sensing time), and the time to exchange a packet and an ack. Since a node might capture the channel in some backoff slot within the CW and begin packet transmission at that time, some time may be left over after the ack is sent back until the next slot begins. This limitation is due to implementation considerations, for which a fixed slot duration is highly desirable and often the most practical.
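For reference, a sketch of Sift-style backoff selection, where nodes favor later slots of the contention window so that the earliest occupied slot likely has a single occupant. The truncated increasing geometric form and the parameter value are our assumptions based on the Sift papers [8, 9].

```python
import random

def sift_backoff_slot(cw=32, alpha=0.82, rng=random):
    """Pick a backoff slot in [1, cw] with probability increasing
    geometrically toward later slots (weight alpha^(cw - r) for slot r)."""
    weights = [alpha ** (cw - r) for r in range(1, cw + 1)]
    return rng.choices(range(1, cw + 1), weights=weights)[0]
```

The steep weighting means the last few slots of the window absorb most of the probability mass, which is what keeps the earliest chosen slot unique with high probability.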

For Alert, a slot duration consists of the guard time, an adjustment for possible clock skew both before and after the receiver’s sampling period, one copy per channel of the time to sample a channel plus the channel-switching time, and the time to exchange a packet and an ack. Because each transmitter sends a continuous tone the whole time the receiver is sampling channels, we do not need to adjust for clock skew except before and after the sampling period, when the transmitter is not sending the tone. Alert does not have any spare time left over in a slot, unlike Sift, because the packet and ack exchange takes place at a specified time in the slot regardless of which channel is used; the receiver simply waits on that channel at that time to receive a packet and return an ack. In our simulations, the initial estimate of the number of messages was taken as 10 and increased additively. The number of channels was calculated for an initial guess of the number of senders and the estimated interference level, and the design factor was set to 0.1, which provided a good balance between minimizing the delay of collecting the first message and collecting all messages.

Two levels of time synchronization were considered: *tight* and *loose*. Note that these are relative terms used to convey the clock-skew compensation required within slots. Tight synchronization allows smaller compensation times to be used with all protocols but can prove a heavy burden on the higher-level protocol responsible for it: it requires all nodes to participate in the synchronization protocol more frequently and consumes significant energy, even when the nodes have no messages to send. Thus, for applications targeting rare events, tight synchronization may not be feasible, and a “looser” form of synchronization may be more desirable. (Note that, for tight synchronization, the values used give roughly the specified backoff slot duration of 0.32 ms in the IEEE 802.15.4 standard [6].)

To get a sense of the effectiveness of the adaptive version of Alert, we also show a plot of the expected number of slots required to read all messages when the exact number of contending messages is known throughout the protocol execution. This is calculated theoretically using (13); the scheme is termed ExOptAlert. Each data point shown is the average of 150 runs, and 95% confidence intervals are shown for plots of the time required to read all messages. The maximum number of nodes was set at 100, which is a reasonably large number for a one-hop centralized network.

##### 6.3. Comparisons Through Simulations

Figure 12 shows the comparison between all four schemes for the average time required to send the first message. (Confidence intervals are not shown in this plot to allow a close-up view of the schemes other than S-Aloha.) It can be seen that Alert manages to send the first message out far earlier than Sift and S-Aloha and is quite close to its optimal expected performance, ExOptAlert. The chosen channel probabilities of Alert allow the first message to go through in the initial few slots. The same happens for Sift, but the many backoff slots in its contention window mean it takes more time to send the first message even when it succeeds in the first time slot. A smaller contention window for Sift could be useful here, but that could negatively impact the success probability of a single slot and, hence, the delay to send all messages. S-Aloha has the most delay, since the random slots picked by nodes may not be the initial slots or, if they are, may not be successful due to collisions. The small slot time does not seem to have helped S-Aloha in this case.

Figure 13 shows the comparisons for all schemes to read all messages at an interference level of 5%. For the tight synchronization case, we see that Alert does slightly better than Sift. Note that this result does not account for additional procedures, like RTS-CTS, which Sift might need to employ to handle hidden-terminal collisions; Alert, being a non-carrier-sense protocol, does not suffer from such issues. When loose time synchronization is used, the difference between Alert and Sift increases; in fact, S-Aloha now does better than Sift, due to the much smaller slot structure it uses. When a higher level of interference is considered, Sift does better than Alert for the tight synchronization case, because Alert has a higher chance of being affected due to its use of multiple frequency channels (see Figure 14). Such high levels of interference are, however, very unlikely. In practice, Alert periodically switches the frequency channels it uses so that an interference source on some channel does not affect performance for long.

#### 7. Conclusions

We presented Alert, a MAC protocol to collect rare, event-driven messages from multiple wireless sensor nodes with low latency. The protocol uses a novel time slot structure in which nodes are separated by prioritized frequency channels, allowing one node to succeed per slot with high probability. We provided extensive theoretical justification for selecting the values of the design parameters involved and designed an adaptive algorithm for Alert to adjust parameter values based on the level of contention in the network. The feasibility and effectiveness of the protocol were demonstrated through an implementation on commodity hardware as well as extensive simulation-based comparisons with other protocols.

#### Disclosure

A preliminary version of this paper appeared in the Proceedings of the ACM/IEEE International Conference on Information Processing in Sensor Networks (ACM/IEEE IPSN), April 2008.

#### References

1. “Fire detection and fire alarm systems. Part 25: components using radio links and system requirements,” Tech. Rep. EN54-25, European Committee for Standardization, 2005.
2. I. Chlamtac and A. Farago, “Making transmission schedules immune to topology changes in multi-hop packet radio networks,” *IEEE/ACM Transactions on Networking*, vol. 2, no. 1, pp. 23–29, 1994.
3. A. Goldsmith, *Wireless Communications*, Cambridge University Press, 2005.
4. K. K. Chintalapudi and L. Venkatraman, “On the design of MAC protocols for low-latency hard real-time discrete control applications over 802.15.4 hardware,” in *Proceedings of the International Conference on Information Processing in Sensor Networks (IPSN '08)*, pp. 356–367, April 2008.
5. IEEE 802.11, “Wireless LAN medium access control (MAC) and physical layer (PHY) specifications,” IEEE Standard 802.11, 1999.
6. IEEE 802.15.4, “Wireless medium access control (MAC) and physical layer (PHY) specifications for low-rate wireless personal area networks (WPANs),” IEEE Standard 802.15.4, 2003.
7. A. Woo and D. Culler, “A transmission control scheme for media access in sensor networks,” in *Proceedings of the 7th International ACM Conference on Mobile Computing and Networking (MOBICOM '01)*, Rome, Italy, 2001.
8. K. Jamieson, H. Balakrishnan, and Y. C. Tay, “Sift: a MAC protocol for event-driven wireless sensor networks,” in *Proceedings of the European Workshop on Wireless Sensor Networks (EWSN '06)*, 2006.
9. Y. C. Tay, K. Jamieson, and H. Balakrishnan, “Collision-minimizing CSMA and its applications to wireless sensor networks,” *IEEE Journal on Selected Areas in Communications*, vol. 22, no. 6, pp. 1048–1057, 2004.
10. F. Calì, M. Conti, and E. Gregori, “Dynamic tuning of the IEEE 802.11 protocol to achieve a theoretical throughput limit,” *IEEE/ACM Transactions on Networking*, vol. 8, no. 6, pp. 785–799, 2000.
11. D. Bertsekas and R. Gallager, *Data Networks*, Prentice Hall, 2nd edition, 1992.
12. N. Chirdchoo, W. S. Soh, and K. C. Chua, “Aloha-based MAC protocols with collision avoidance for underwater acoustic networks,” in *Proceedings of the 26th IEEE International Conference on Computer Communications (INFOCOM '07)*, pp. 2271–2275, May 2007.
13. I. Rhee, A. Warrier, M. Aia, J. Min, and M. L. Sichitiu, “Z-MAC: a hybrid MAC for wireless sensor networks,” *IEEE/ACM Transactions on Networking*, vol. 16, no. 3, pp. 511–524, 2008.
14. W. Ye, J. Heidemann, and D. Estrin, “Medium access control with coordinated adaptive sleeping for wireless sensor networks,” *IEEE/ACM Transactions on Networking*, vol. 12, no. 3, pp. 493–506, 2004.
15. J. Polastre, J. Hill, and D. Culler, “Versatile low power media access for wireless sensor networks,” in *Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems (SenSys '04)*, pp. 95–107, November 2004.
16. T. van Dam and K. Langendoen, “An adaptive energy-efficient MAC protocol for wireless sensor networks,” in *Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (SenSys '03)*, pp. 171–180, Los Angeles, California, USA, November 2003.
17. I. Demirkol, C. Ersoy, and F. Alagöz, “MAC protocols for wireless sensor networks: a survey,” *IEEE Communications Magazine*, vol. 44, no. 4, pp. 115–121, 2006.
18. G. Lu, B. Krishnamachari, and C. S. Raghavendra, “An adaptive energy-efficient and low-latency MAC for data gathering in wireless sensor networks,” in *Proceedings of the 18th International Parallel and Distributed Processing Symposium (IPDPS '04)*, pp. 3091–3098, April 2004.
19. M. Strasser, A. Meier, K. Langendoen, and P. Blum, “Dwarf: delay-aware robust forwarding for energy-constrained wireless sensor networks,” in *Proceedings of the 3rd International Conference on Distributed Computing in Sensor Systems (DCOSS '07)*, Santa Fe, NM, USA, 2007.
20. V. Namboodiri and A. Keshavarzian, “Alert: an adaptive low-latency event-driven MAC protocol for wireless sensor networks,” in *Proceedings of the 7th International Conference on Information Processing in Sensor Networks (IPSN '08)*, pp. 159–170, April 2008.
21. S. Boyd and L. Vandenberghe, *Convex Optimization*, Cambridge University Press, New York, NY, USA, 2004.