Abstract

Minimum latency scheduling has arisen as one of the most crucial problems for broadcasting in duty-cycled Wireless Sensor Networks (WSNs). Typical solutions for broadcast scheduling iteratively search for nodes able to transmit a message simultaneously. Other nodes are prevented from transmitting to ensure that no collision occurs in the network. Such collision prevention results in extra delays for a broadcast and may increase the overall latency if the delays occur along critical paths of the network. To facilitate broadcast latency minimization, we propose a novel approach, critical-path aware scheduling (CAS), which schedules transmissions with a preference for nodes on critical paths of a duty-cycled WSN. This paper presents two schemes employing CAS which produce collision-free and collision-tolerant broadcast schedules, respectively. The collision-free CAS scheme guarantees an approximation ratio of in terms of latency, where denotes the maximum node degree in a network. By allowing collisions at noncritical nodes, the collision-tolerant CAS scheme reduces broadcast latency by up to 10.2 percent compared with the collision-free scheme while requiring additional transmissions for the noncritical nodes experiencing collisions. Simulation results show that the broadcast latencies of the two proposed schemes are significantly shorter than those of existing methods.

1. Introduction

Wireless Sensor Networks (WSNs) have become one of the most important technologies for the 21st century [1, 2] and have attracted a lot of attention from industrial and research perspectives [3–11]. A wide range of services in WSNs rely strongly on broadcasting, such as information dissemination, route discovery, and code update [12]. Implementing an effective network-wide broadcast scheduling is critical to improve the performance of WSNs. Like other wireless communications, broadcasting can suffer from collisions when a node hears more than one message simultaneously from transmissions of its neighbors. If a collision occurs, the node is not able to receive any of these messages. The problem of Minimum Latency Broadcast Scheduling (MLBS) aims to schedule a broadcast such that a message is disseminated from a source node to all other nodes in a network within a minimum period of time. The problem has been proven to be NP-hard and widely studied in conventional WSNs, where all nodes are active all the time [13–15].

Recently, duty-cycling protocols have allowed nodes in a network to turn off their radios and go into a sleep state when they are idle, to conserve energy and extend the network lifetime [16–18]. In a low-duty-cycled WSN, a sensor node sleeps most of the time, periodically waking up within a short time slot for possible data communication, either transmitting or receiving a message. Such periodic sleeping leads to a notable increase of communication latency between sensor nodes while reducing their energy consumption. In particular, every sensor node has to wait until its receivers wake up before transmitting a message and may need to transmit the message more than once if its receivers wake up at different time slots. The MLBS problem in duty-cycled WSNs gets more complex and has been proven to be NP-hard [19].

Throughout the literature, the MLBS problem in duty-cycled WSNs (MLBSDC) has been richly explored [19–30]. The existing solutions typically aim to find a subset of nodes that can simultaneously transmit a message in a time slot so that the number of collision-free receptions in the time slot is maximized. However, delaying a transmission to prevent a collision at a node may increase the overall broadcast latency if the node is on a critical path of the network. To minimize broadcast latency, we propose a novel approach, critical-path aware scheduling (CAS), which offers an opportunity to reduce the latency by providing a preference for transmitting a message to nodes along critical paths of a duty-cycled WSN.

This paper presents two scheduling schemes employing CAS, Collision-Free-CAS (CF-CAS) and Collision-Tolerant-CAS (CT-CAS). Both schemes assign a numerical value to each node of a network to identify the critical paths during their scheduling phases. The CF-CAS scheme does not allow any collision in a schedule, minimizing the number of redundant transmissions in a broadcast. The CT-CAS scheme further reduces broadcast latency by allowing collisions at noncritical nodes to speed up the broadcast process for critical nodes; to complete a broadcast schedule, it covers nodes experiencing collisions with additional transmissions. The increased number of transmissions in CT-CAS reveals a trade-off between the redundancy and the latency of a broadcast. Simulation results demonstrate the advantage of CAS in minimizing broadcast latency in duty-cycled WSNs: the broadcast latencies of both schemes are shorter than those of existing methods.

The remainder of this paper is organized as follows. Section 2 discusses related works. Section 3 includes a network model, problem formulation, and related terminology. In Section 4, we present the proposed schemes. Sections 5 and 6 show their performance analysis and evaluations, respectively. Finally, we conclude the paper and discuss our future work in Section 7.

2. Related Work

Research on the minimum-latency broadcast problem has grown over the past few decades. One of the earliest works, Gandhi et al. [13], proves the NP-hardness of the MLBS problem in the Unit Disk Graph (UDG) model, where all nodes have the same transmission range. Their proposed algorithm gives an approximation ratio in terms of latency of at least . Huang et al. [14] exploited the fact that colors are sufficient to color all the nodes in any independent set of a UDG and thus the distance between nodes with the same color is more than two hops. As a result, they improved the approximation ratio to . The algorithm in [15] recently reduced the ratio to by allowing a node to transmit more than once to reduce broadcast latency.

However, the above-mentioned algorithms fail to capture the intermittently connected characteristic of duty-cycled networks. Several issues for broadcasting in duty-cycled networks have been explored, such as minimum latency [19–26, 28–30], minimum number of transmissions [31–34], and energy saving [35, 36]. The MLBSDC problem arises as one of the most crucial problems for broadcasting in duty-cycled networks. Many research efforts have been made in the literature to tackle the problem under different network settings, such as single channel [19–22, 26, 29], multichannel [24, 28], and unreliable links [23, 25, 30]. Table 1 summarizes the characteristics of the broadcasting algorithms in duty-cycled WSNs.

This paper revisits the MLBSDC problem in a time-synchronized network, in which all nodes use the same channel and the communication links between them are reliable. The problem has been proven NP-hard [19], and only a handful of related works address it so far. Duan et al. [20] developed the Vector-Iteration Algorithm (VIA), which transforms broadcast scheduling into matrix multiplication. This work introduces several matrices and vectors to characterize the essentials of broadcast scheduling in duty-cycled WSNs, but VIA has a high computational complexity due to the matrix multiplications. In order to reduce broadcast latency, the scheme greedily searches for a set of forwarding nodes at each time slot such that as many nodes as possible can receive the broadcast message in that time slot. VIA produces a collision-free broadcast schedule with an approximation ratio of , where is the maximum node degree in a network.

The One-To-All Broadcast (OTAB) scheme [21] utilizes a correlation function between network topology information and the sleep schedule of each node to find the minimum latency from a source node to every node in a network. All nodes in the network are divided into layers according to the minimum latency. The scheme applies a D2-coloring method to the Maximal Independent Set (MIS) of each active slot and schedules transmissions layer by layer based on the assigned colors. Also, OTAB requires all forwarding nodes in a one-hop propagation to finish their transmissions before the nodes in the next hop start forwarding. As a result, it increases the delay to ensure collision-free transmissions from those one-hop neighbors and hence leads to a high broadcast latency. The scheme has an approximation ratio of , where denotes the number of time slots in a working period.

Latency-Aware Broadcast Scheduling (LABS) [22] divides all nodes in a network into different layers according to their minimum latencies from the source node. It further reduces broadcast latency by employing an independent scheduling between consecutive layers of the network. The scheme schedules transmissions at several time slots in a single working period to maximize the number of receptions in the working period. It also explores geometric properties of the MIS to reduce the number of transmissions. A D2-coloring method is applied to prevent interference between transmissions for nodes within one layer. The broadcast schedule provided by LABS reduces up to percent of latency while keeping the same approximation ratio as OTAB. The total number of transmissions of LABS is at most times as large as the minimum number of transmissions.

Recently, Le et al. [29] proposed the Degree-Based Collision Tolerant Scheduling (DCTS) scheme. The scheme first categorizes internal and leaf nodes of a degree-based broadcast tree as primary and secondary nodes of a network, respectively. In order to minimize broadcast latency, it then speeds up the broadcast process for the primary nodes by selectively allowing collisions at secondary nodes of the network. DCTS accommodates such collisions with additional transmissions, thus ensuring the completion of a broadcast schedule. The scheme guarantees an approximation ratio of . Simulation results show that the scheme reduces the broadcast latency by at least percent compared with LABS, while slightly increasing the number of transmissions due to the additional transmissions.

On minimizing broadcast latency, we have discovered that scheduling a broadcast with a preference of nodes in critical paths of a duty-cycled WSN is beneficial to reduce latency. The intuition behind this is that broadcast latency may increase if a delay occurs along critical paths of the network. In [37], such critical paths are determined on an arbitrary Shortest Path Tree (SPT) constructed by a one-to-all shortest path algorithm, such as Dijkstra’s algorithm [38]. Based on the critical-path awareness, we have sketched a collision-free schedule [37] and a collision-tolerant schedule [39] and presented preliminary simulation results to show the advantage of critical-path aware scheduling in reducing broadcast latency.

In this paper, we determine the critical paths on an SPT constructed with a preference of high-degree nodes to reduce the number of transmissions in a broadcast. In addition to a collision-free schedule, we develop a collision-tolerant schedule, similar to the one in [29], by referring to nodes in such critical paths as primary nodes. Moreover, both collision-free and collision-tolerant approaches adaptively select forwarding nodes in a broadcast, instead of selecting parents to forward a message to their children in a preconstructed broadcast tree as in [22, 29]. Such flexible selections also contribute to the reduction of broadcast latency in a critical-path aware schedule. Theoretical analysis and simulation results in this paper demonstrate a significant improvement of the proposed scheduling on broadcast latency.

3. Preliminary

3.1. Network Model and Assumptions

We consider a WSN of uniformly deployed sensor nodes in a square field as in [19–22, 29]. Each sensor node in the network is assigned a unique identifier. Two sensor nodes form a bidirectional communication link and become neighbors whenever they are within the transmission range of each other. The network topology is modeled as a graph, in which each vertex and each edge correspond to a node and a communication link between two nodes in the network, respectively. Such a graph is called a communication graph and is assumed to be connected.

In duty-cycled environments, sensor nodes alternate between active and sleep states to conserve their energy. Time is divided into unit time slots. These discrete time slots are grouped into working periods of fixed length . It is assumed that each time slot is long enough to accommodate the transmission of a message. All sensor nodes are time synchronized at the slot level using local time synchronization techniques, such as Reference-Broadcast Synchronization [40], Tiny-Sync [41], and the Generalized ML-Like Estimator [42]. A sensor node randomly selects one time slot in each working period as its active slot . It periodically wakes up at that slot in every working period and stays in the active state for the duration of the time slot. Figure 1 illustrates an example of the periodic sleeping schedule.

A sensor node can forward a broadcast message only after it receives the message. A sensor node can wake up at any time slot to transmit a message but can receive a message only at its active slot. Due to the properties of a wireless environment, whenever a node transmits a message, all its active neighbor nodes hear the message. If a node hears more than one message at a time slot, it cannot receive any of these messages due to a collision. We assume that every transmission occupies a unit time slot and that no message suffers from bit errors. Therefore, a sensor node successfully receives a message only if exactly one of its neighbors transmits the message at the node's active slot of a working period.
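A minimal sketch of this reception rule is given below (the function and variable names are illustrative assumptions, not the paper's notation; `neighbors` maps each node to the set of its neighbors):

```python
def receives(v, t, active_slot, period, neighbors, transmitters):
    """Reception rule sketch: node v gets the message at slot t only when t
    falls on v's active slot within the working period and exactly one of
    its neighbors transmits at t (two or more simultaneous senders collide)."""
    if t % period != active_slot[v]:        # v is asleep at slot t
        return False
    senders = [u for u in neighbors[v] if u in transmitters]
    return len(senders) == 1
```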

3.2. Problem Statement

In a one-to-all broadcast, a message is disseminated from a source node to all other nodes in a network. The broadcast completes when every node in the network receives the message. To be specific, let denote the communication graph of a duty-cycled WSN, where is the set of vertices, and is the set of edges. Let denote a predefined source node of the network. The source node starts a broadcast at time slot .

A broadcast schedule assigns transmitting time slots to nodes in the network such that the broadcast can complete. As a node can receive a message only at its active time slot, one of its neighbors must be assigned a transmitting time slot equal to that active slot plus a nonnegative integer multiple of the working period length. The broadcast latency is determined by the maximum assigned transmitting time slot. The objective of the MLBSDC problem is to find a broadcast schedule with minimum broadcast latency. The MLBSDC problem is NP-hard [19].
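Writing A(v) for the active slot of node v, |T| for the working-period length, and τ(u → v) for the slot at which neighbor u transmits to v (notation assumed here for illustration), the constraint and objective can be sketched as:

```latex
% a receiver v can only be served at slots congruent to its active slot:
\tau(u \to v) = A(v) + k\,|T|, \qquad k \in \mathbb{Z}_{\ge 0};
% the broadcast latency is the largest assigned transmitting slot,
L = \max_{\text{scheduled } (u,v)} \tau(u \to v),
% and the MLBSDC problem asks for a feasible schedule minimizing L.
```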

Figure 2 shows a simple duty-cycled WSN and a schedule for broadcasting a message from to its two-hop neighbors in the network. In the schedule, nodes , , and receive a message from one transmission of . After the transmission, nodes and simultaneously transmit the message at time slot to their corresponding neighbors and , respectively. In contrast, and cannot transmit the message at the same time to prevent a collision at their common neighbor node . As a result, a transmission from is delayed to the next working period. The two-hop broadcast completes after time slots.

3.3. Related Terminology

For each pair of neighboring nodes in a communication graph , we define two numerical values, called costs, corresponding to two asymmetrical directions of the edge between the nodes. Each cost value is the number of time slots for which a transmission on a direction of the edge is delayed due to the sleeping period of its receiver. For simplicity, the source node is assumed to have a message in advance, and its active slot is defined as . The cost of an edge with a direction from to can be determined as follows:
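One formulation consistent with this verbal definition, writing A(v) for the active slot of node v and |T| for the working-period length (notation assumed here, with the source's active slot taken as 0), is the following sketch:

```latex
c(u \to v) \;=\; \bigl( (A(v) - A(u) - 1) \bmod |T| \bigr) + 1,
% i.e., the number of slots a message available at u's active slot must
% wait until the next active slot of v (a full period |T| when the two
% active slots coincide).
```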

The minimum of the accumulated costs over the paths from source node to node is referred to as the level of , denoted as . The level of corresponds to the minimum possible latency for to receive a broadcast message generated by ; i.e., the smaller the level of a node is, the sooner the node may receive a broadcast message. The maximum level of nodes in a network is a lower bound on the broadcast latency [19]. As the source node has generated the message in advance, its level is . Levels of other nodes in the network can be obtained by constructing an SPT rooted at . The SPT construction utilizes the cost values as edge weights. An example of SPT construction and level distribution for the network in Figure 2 is shown in Figure 3. It is worth noting that all nodes with the same level share the same active slot, which is determined by the level.

4. Proposed Schemes

4.1. Overall Idea

Existing broadcast scheduling schemes such as OTAB [21], LABS [22], and VIA [20] are typically motivated to find a set of forwarders whose simultaneous transmissions result in as many collision-free receptions as possible. These schemes schedule a broadcast with a preference for transmissions to high-degree nodes. The intuition behind this is that the higher the degree of a node, the more neighbors it can cover after receiving a message. They use the neighborhood information of nodes to determine whether a particular node needs to transmit a message. Different transmitting time slots are assigned to nodes having a common neighbor to prevent any collision.

Figure 4(a) illustrates a degree-based broadcast scheduling for the duty-cycled WSN with , shown in Figure 2. In the schedule, the source node broadcasts a message to its neighbors , , and at time slot . A transmission to is preferred to be scheduled as the node has the highest degree among the remaining ones. There are four neighboring nodes of ready to forward the broadcast message. In order to prevent a collision at the common neighbor , only is allowed to forward the message at the time slot . The other ones must delay their transmissions to the next working period. As transmission of is delayed in the example, its neighbor node receives the message at time slot . In the same manner, a transmission from is delayed to prevent an interference with the transmission of at time slot . As a result, the receiving time of is postponed to time slot . Then, can forward the message further to its neighbors, and the broadcast completes at time slot .

On an SPT rooted at the source node of a duty-cycled WSN, we define a critical path as the longest path from the root to the leaves of the tree. For instance, the path is a critical path of the network in Figure 2. As node belongs to the critical path, a delay of transmission from to increases the overall broadcast latency. Similarly, if the transmission from to is scheduled earlier, it can result in a reduction of the overall broadcast latency. The problem is more severe in highly dense networks because there may be more nodes which cannot be scheduled simultaneously due to collisions. By preferring transmissions to nodes along the critical path, the critical-path aware scheduling selects to transmit a message to and at time slot , as shown in Figure 4(b). Consequently, the remaining nodes in the critical path of can receive the message earlier than in the degree-based schedule. In the same manner, is selected to forward the message to and at time slot . Even if a transmission from to is delayed to time slot , it does not affect the overall broadcast latency. The broadcast completes after time slot with the critical-path aware scheduling.

4.2. Criticality Awareness

The critical-path aware scheduling (CAS) approach utilizes node criticality information in its scheduling process to minimize broadcast latency. To this end, all critical paths of a network need to be identified. Note that a critical path is the shortest path from to a node with maximum level, identified by an SPT rooted at the source node of the network. The proposed SPT construction algorithm connects nodes to the tree in a nondecreasing order of their levels. The algorithm prefers to select high-degree nodes as internal nodes of the tree to reduce the number of transmissions. The notation utilized in this section is summarized in Table 2.

Given a network with the level of every node and a predefined source node , Algorithm 1 constructs a degree-based SPT rooted at the source node by connecting nodes to the tree level by level. Let denote the set of nodes which have been connected to the tree, initially . For adding nodes at each level to , the algorithm considers nodes in with levels smaller than as parent candidates. Among the candidates, the node adjacent to the largest number of nodes in with the level is selected as a parent node. Such a node must exist because of the network connectivity. All neighbors of in with the level become children of and are added to . The process continues until contains all nodes in the network. Obviously, the tree is a shortest path tree rooted at the source node . Figure 5(a) illustrates a degree-based SPT construction for the network in Figure 2.

Input: communication graph, source node, and level of every node
Output: Broadcast tree rooted at the source node
connect the source node to the tree
while the tree does not contain all nodes do
    // consider the nodes at the smallest level not yet connected
    while some node at that level remains unconnected do
        among connected nodes with a smaller level, select the one adjacent to the largest number of unconnected nodes at that level
        connect all unconnected neighbors at that level of the selected node to the tree as its children
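A compact executable sketch of this construction is given below (our own Python rendering of the description above; `adj` is an adjacency map of sets, `levels` holds the precomputed levels of Section 3.3, and all names are assumptions rather than the paper's notation):

```python
def build_degree_based_spt(adj, levels, source):
    """Connect nodes level by level, preferring the parent that covers the
    most unconnected nodes of the current level (a degree-based SPT)."""
    parent = {source: None}
    connected = {source}
    for l in sorted({levels[v] for v in adj if v != source}):
        pending = {v for v in adj if levels[v] == l and v not in connected}
        while pending:
            # parent candidates: connected nodes with a strictly smaller level
            candidates = [u for u in connected if levels[u] < l]
            p = max(candidates, key=lambda u: len(adj[u] & pending))
            for v in adj[p] & pending:      # its level-l neighbors become children
                parent[v] = p
            connected |= adj[p] & pending
            pending -= adj[p]
    return parent
```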

To facilitate node criticality awareness, we define the latency-ahead value of each node , denoted by , based on the constructed tree. The latency-ahead value of a node represents the minimum latency for transmitting a message from the node to the farthest leaf node in the subtree rooted at the node. The value is the accumulated cost from the node to a leaf node with the maximum level in its subtree. In other words, a node with a higher latency-ahead value requires a longer time to cover all nodes in its subtree. The higher the latency-ahead value of a node, the more critical the node is. An example of latency-ahead calculation is shown in Figure 5(b).
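Latency-ahead values can then be obtained by a single bottom-up pass over this tree; a sketch under the same assumed names, with `cost(u, v)` being the edge cost of Section 3.3 supplied as a function:

```python
def latency_ahead(parent, levels, cost):
    """la[v] = accumulated cost from v down to the deepest leaf of v's
    subtree (0 for leaves); deeper levels are processed first so every
    child is finished before its parent."""
    children = {}
    for v, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(v)
    la = {}
    for v in sorted(parent, key=lambda v: levels[v], reverse=True):
        la[v] = max((cost(v, c) + la[c] for c in children.get(v, [])), default=0)
    return la
```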

4.3. Critical-Path Aware Scheduling

In this section, we present two broadcast scheduling schemes employing critical-path aware scheduling (CAS) to reduce broadcast latency. The proposed schemes, named Collision-Free-CAS (CF-CAS) and Collision-Tolerant-CAS (CT-CAS), follow a collision-free and a collision-tolerant strategy, respectively. A node is in a covered state if one of its neighbors has been scheduled to transmit a message to the node; otherwise, it is in an uncovered state. A covered node is ready to be scheduled to cover its uncovered neighbors. The schemes schedule a broadcast with a preference for transmissions to uncovered nodes with high latency-ahead values. In each time slot, the schemes minimize the number of transmissions by selecting high-degree covered nodes as forwarders.

4.3.1. CF-CAS

Let denote the set of covered nodes in a network. Initially, contains only the source node. The scheduling scheme starts at the first time slot and works iteratively for each time slot as in Algorithm 2. At the current time slot , the algorithm searches for the most critical uncovered node, that is, the uncovered node active at the current time slot with the highest latency-ahead value. All covered neighbors of are considered as its forwarder candidates. Among the candidates, the algorithm selects node as the forwarder if it covers the largest number of uncovered nodes active at time slot , to reduce the number of transmissions. Let denote the set of uncovered nodes active at the current time slot.

Input: communication graph, source node, active slot and latency-ahead value of every node
Output: Transmitting schedule for every node
mark the source node as covered    // source node has a message in advance
t ← the first time slot            // current time slot
while some node remains uncovered do
    collect the forwarder candidates            // set of covered nodes
    collect the uncovered nodes active at slot t    // set of uncovered nodes
    while some uncovered node active at slot t remains and a forwarder candidate exists do
        pick the uncovered node with the highest latency-ahead value
        select as its forwarder the covered neighbor covering the most uncovered nodes active at slot t, and schedule it to transmit at t
        foreach listener (uncovered neighbor of the forwarder active at slot t) do
            mark the listener as covered and remove its covered neighbors from the forwarder candidates
    t ← t + 1

A transmission from the selected forwarder covers all of its uncovered neighbors active at time slot ; such neighbors, called listeners, can receive a message from at time slot . The listeners become covered nodes and are excluded from . All covered neighbors of the listeners are excluded from to ensure that they will not be selected as forwarders in time slot anymore. By doing so, the algorithm prevents any collision at the listeners. CF-CAS continues searching for a node with the highest latency-ahead value in until all nodes active at the current time slot are covered or it cannot find any forwarder for the node. The scheduling algorithm then updates the node statuses and moves to the next time slot, i.e., , repeating until it covers all nodes in the network. Figure 6(a) shows an example of the proposed algorithm on the network shown in Figure 2.
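A simplified Python sketch of one CF-CAS time-slot iteration follows (our own rendering of the description above; `adj` is an adjacency map of sets, `la` holds latency-ahead values, `covered` and `schedule` are mutated in place, and all names are illustrative assumptions):

```python
def cf_cas_slot(t, adj, active, period, covered, la, schedule):
    """Schedule collision-free forwarders for slot t, preferring the most
    critical (highest latency-ahead) uncovered nodes awake at t."""
    U = {v for v in adj if v not in covered and active[v] == t % period}
    F = set(covered)                        # forwarder candidates for this slot
    while U:
        u = max(U, key=lambda v: la[v])     # most critical uncovered node
        candidates = adj[u] & F
        if not candidates:
            break                           # no forwarder left for the critical node
        f = max(candidates, key=lambda c: len(adj[c] & U))
        listeners = adj[f] & U
        schedule.setdefault(f, []).append(t)
        for w in listeners:
            covered.add(w)
            F -= adj[w]                     # covered neighbors of a listener stay silent
        U -= listeners
```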

4.3.2. CT-CAS

Observably, a transmission delay caused by collision prevention may increase the broadcast latency if the delay occurs along a critical path of the network. With node criticality awareness, covering nodes in a nonincreasing order of their latency-ahead values can be beneficial for reducing broadcast latency. This motivates us to allow collisions at low-criticality nodes to speed up the broadcast process for high-criticality ones. By doing so, the scheduling accelerates transmissions to nodes with high latency-ahead values. It also increases the number of simultaneous transmissions in each time slot to reduce broadcast latency. The CT-CAS algorithm accommodates the collisions possibly occurring at the low-criticality nodes by retransmissions to ensure the completion of a broadcast.

Initially, the algorithm offers a preference to the most critical uncovered node, that is, the one with the highest latency-ahead value in time slot . The forwarder is selected among covered neighbors of in the same way as in the CF-CAS algorithm. All uncovered neighbors of the selected forwarder are referred to as listeners because they can hear a message at time slot . The listeners are excluded from as they become covered nodes. The difference is that a listener may be changed back to an uncovered state later if it hears more than one message at the current time slot. Let denote the number of messages that node can hear in a time slot. The number is increased by one for each listener, as in Algorithm 3.

Input: communication graph, source node, active slot and latency-ahead value of every node
Output: Transmitting schedule for every node
mark the source node as covered    // source node has a message in advance
t ← the first time slot            // current time slot
while some node remains uncovered do
    collect the forwarder candidates            // set of covered nodes
    collect the uncovered nodes active at slot t    // set of uncovered nodes
    empty the set of listeners                  // set of listeners
    while some uncovered node active at slot t remains and a forwarder candidate exists do
        pick the uncovered node with the highest latency-ahead value
        select as its forwarder the covered neighbor covering the most uncovered nodes active at slot t, and schedule it to transmit at t
        foreach uncovered neighbor of the forwarder active at slot t do
            add it to the set of listeners and increase its number of heard messages by one
        remove from the forwarder candidates the covered neighbors of listeners that heard exactly one message and are not less critical than the next uncovered node
    mark listeners hearing exactly one message as covered; revert the other listeners to uncovered and reset the heard counters
    t ← t + 1

The CT-CAS algorithm continues searching for the next critical node with the highest latency-ahead value among the remaining nodes in . To prevent collisions at listeners whose latency-ahead values are not smaller than and whose numbers of heard messages are one, all covered neighbors of such listeners are excluded from . The forwarder for node is selected in the same manner among the remaining candidates in . All uncovered neighbors of active at time slot also become listeners. Collisions are allowed at listeners that are common neighbors of and .

The scheduling algorithm iterates the above procedure until all nodes active at the current time slot are covered or it cannot find any proper forwarder. Only listeners hearing one message in the current time slot become covered nodes. Other listeners are changed back to an uncovered state as they cannot receive any message. The CT-CAS algorithm then resets the number of heard messages for every uncovered node and iteratively moves to the next time slot, , until all nodes in the network are in a covered state. Figure 6(b) shows an example of the proposed algorithm on the network shown in Figure 2.
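The collision-tolerant counterpart can be sketched in the same style; the essential difference is the per-slot heard-message counter and the reversion of colliding listeners (again, the names are illustrative assumptions, not the paper's notation):

```python
def ct_cas_slot(t, adj, active, period, covered, la, schedule):
    """Schedule forwarders for slot t, tolerating collisions at less critical
    listeners; only listeners hearing exactly one message stay covered."""
    U = {v for v in adj if v not in covered and active[v] == t % period}
    F = set(covered)
    heard = {}                               # messages heard by each listener at slot t
    while U:
        u = max(U, key=lambda v: la[v])
        candidates = adj[u] & F
        if not candidates:
            break
        f = max(candidates, key=lambda c: len(adj[c] & U))
        schedule.setdefault(f, []).append(t)
        for w in adj[f] & U:                 # listeners of f
            heard[w] = heard.get(w, 0) + 1
        U -= adj[f]
        if U:                                # protect listeners at least as critical
            nxt = max(la[v] for v in U)      # as the next candidate and still collision-free
            for w, n in heard.items():
                if n == 1 and la[w] >= nxt:
                    F -= adj[w]
    covered |= {w for w, n in heard.items() if n == 1}   # colliding listeners revert
```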

5. Performance Analysis

Recall that CAS combined with the collision-free and the collision-tolerant strategies is referred to as the CF-CAS scheme and the CT-CAS scheme, respectively. In this section, we estimate the approximation ratios between the broadcast latencies given by the proposed schemes and the optimal latency for the MLBSDC problem. We first show that the approximation ratio of the CF-CAS scheme does not exceed , where denotes the maximum degree of the communication graph.

The proposed scheme schedules all transmissions based on the latency-ahead value of each node with a preference of node degree. Let denote the broadcast latency in a CF-CAS schedule. Let denote the receiver node set of the scheduled transmission from node at time slot , . Note that such receivers are active at the same time slot of .

Lemma 1. In the CF-CAS scheme, if a transmission from node is delayed to prevent a collision at node , then , .

Proof. We prove this lemma by contradiction. Assume that a transmission of node is delayed until time slot , , to prevent interference with the previously scheduled transmission of node at node . In the CF-CAS scheme, as is scheduled prior to and it has the common uncovered neighbor , must have a higher degree than . According to Algorithm 2, must be a receiver of , which contradicts the assumption.

Let denote the latency of an optimal broadcast schedule for the MLBSDC problem. In the optimal schedule, let be the set of all receivers at time slot , . It is worth noting that every node in is uncovered and active at time slot . The following is our key lemma.

Lemma 2. CF-CAS algorithm takes at most time slots to cover all nodes in , .

Proof. We prove this lemma by induction on . As all nodes in level can receive a message from a transmission of at time slot , the claim of this lemma is true for the receiver set . Suppose that the claim is true for any receiver set from to , where . We now prove it for the set . Let be a sender at time slot that covers some nodes in . The transmission of node is delayed till time slot to prevent interference with other scheduled transmissions. We denote by the set of such interfering transmissions. According to the CF-CAS algorithm, we have

According to Lemma 1, a transmission from node to nodes in can only be delayed to prevent collision at some uncovered nodes in . Since the transmission may be delayed until all of such uncovered nodes are covered, we have , where accounts for the node . The reason is that has been covered before and has been excluded from the uncovered node set according to the CF-CAS algorithm. From (2), we have

Noting that in (3) and , we have

As should receive a message before transmitting a message to , , where . By the induction hypothesis, we have

So the maximum transmitting time slot of node for nodes in can be estimated from (4) and (5) as follows:

Thus, the claim of the lemma holds for the receiver set , and the proof is complete.

Theorem 3. The approximation ratio of the CF-CAS scheme is at most .

Proof. From the definition of the receiver set in the optimal schedule, we have

According to Lemma 2, the maximum time slot to cover all nodes in is . Thus, we have .

In the following, we estimate the time complexities of the CAS-based schemes.

Theorem 4. The time complexity of the CF-CAS scheme is .

Proof. To facilitate the latency-ahead value calculation, Algorithm 1 takes time to construct the shortest path tree rooted at the source node. Then, it takes another time to assign the latency-ahead value for every node in the network. In total, the time complexity of Algorithm 1 is bounded by .
The CF-CAS algorithm takes at most time to collect the covered node set and the uncovered node set in each time slot , . It takes at most time to pick the uncovered node with the highest latency-ahead value, and then time to select a parent for the node from the set with a preference for node degree. Recall that uncovered neighbors of the selected parent active at time slot become listeners at the time slot. The parent iterates at most times to guarantee that all listeners can be covered by preventing their covered neighbors from transmitting a message at time slot . Thus, the algorithm takes at most time for each uncovered node. To cover all nodes, the time complexity of Algorithm 2 is bounded by . We combine all the running times and conclude that the time complexity of the scheme is .

Theorem 5. The time complexity of the CT-CAS scheme is .

Proof. Similar to the proof of Theorem 4, the CT-CAS scheme takes time to assign a latency-ahead value to every node in a network. The only difference between the two CAS-based schemes is that the CT-CAS algorithm does not require time to guarantee that all listeners in a time slot will be covered. Instead, it takes time to collect all listeners and then takes at most time to prevent collisions at nodes with high latency-ahead values. Thus, the algorithm requires time for each uncovered node, as and . To cover all nodes, the time complexity of Algorithm 3 is bounded by . Combining this with the time complexity of Algorithm 1, we conclude that the time complexity of the CT-CAS scheme is .

6. Performance Evaluation

6.1. Simulation Environment

In this section, the performance of the proposed schemes is evaluated using a simulator written in C#. The simulation configurations are similar to the ones in [22]. Each network is generated by uniformly deploying all sensor nodes in a square area of . All nodes are assumed to have the same transmission range. They obtain their active time slots randomly. To enable an energy consumption evaluation, we adopt the energy model of the Mica2 platform [43]. A node consumes and in an active state and a sleeping state, respectively. It costs for transmitting and for receiving a message. A message has a fixed size of bytes as in TinyOS. The length of each time slot is milliseconds. We neglect the energy consumed during the network initialization phase.

Broadcast latency, the total number of transmissions, and the total energy consumption of the proposed schemes are compared with those of VIA [20], LABS [22], and DCTS [29]. We did not include ELAC [19] and OTAB [21] in the comparison because their performance in terms of both broadcast latency and number of transmissions is consistently dominated by LABS [22]. The effect of network parameters, including network density, transmission range, and duty-cycle, on the performance of these schemes is studied. We vary one of the parameters while fixing the others in each simulation. Each value plotted on the curves is obtained from the results of randomly generated networks with a randomly chosen source node. Table 3 summarizes the simulation parameters.

6.2. Simulation Results
6.2.1. The Impact of the Number of Nodes

Figure 7 presents the performance of the schemes in low-density networks and high-density ones where the number of nodes is varied from to and from to with increments of and , respectively. The transmission range and the length of the working period are fixed to m and , respectively. In low-density scenarios, the broadcast latencies of all schemes decrease when the network size increases because a node has more choices to forward a message, as shown in Figure 7(a). When the network density becomes high, there are more levels in a network. As level-based schemes, LABS and DCTS require more time to complete a broadcast process. The broadcast latencies of VIA and the two proposed schemes decrease slightly since a node needs to wait longer to prevent interference with its neighbors. In all cases, the proposed schemes consistently outperform the others in terms of broadcast latency. By accelerating transmissions for nodes with high criticality, the broadcast latency of CF-CAS is percent shorter than that of VIA. CT-CAS further reduces the latency by up to percent compared with CF-CAS by allowing collisions at less critical nodes.

Figure 7(b) shows that the total number of transmissions of all schemes grows. The reason is that each forwarding node has more neighbors when the network density increases, which leads to an increase in the number of transmissions needed to cover the neighbors. Consequently, the total energy consumption of all the schemes increases, as shown in Figure 7(c). In order to reduce broadcast latency, CF-CAS and CT-CAS require up to percent and percent more transmissions than VIA, respectively. DCTS and CT-CAS produce more transmissions than LABS and CF-CAS because of additional transmissions for nodes experiencing collisions. Thanks to the shorter broadcast schedules, all nodes in a network can stop working earlier in the proposed schemes, resulting in an improvement of up to percent in total energy consumption compared to VIA.

The experimental results reveal a notable extra delay of LABS because it unnecessarily suspends collision-free transmissions to nonindependent nodes in a layer until all independent nodes in the layer have received a message. This extra latency consistently results in larger broadcast latencies than those of the other schemes throughout all our experiments. To reflect the influence of CAS on reducing broadcast latency, the rest of the comparisons include only VIA and DCTS, which produce the shortest broadcast latencies among existing schemes deploying collision-free and collision-tolerant approaches, respectively.

6.2.2. The Impact of the Transmission Range

The effect of varying transmission ranges on the performance of the schemes is studied with nodes. The length of the working period is fixed to . As the coverage area of a node gets larger when its transmission range increases, a broadcast message can reach a more distant node in a single transmission. In other words, message propagation may be faster with a larger transmission range. This leads to a reduction of the broadcast latencies of all the schemes, as shown in Figure 8(a). CF-CAS achieves from to percent latency reduction over VIA, while CT-CAS reduces up to percent more.

The number of neighbors of a node also increases with its transmission range. As the node can cover more neighbors with a single transmission, fewer transmissions are potentially required to cover all nodes in a network. The total number of transmissions and the total energy consumption decrease, as shown in Figures 8(b) and 8(c), respectively. Due to additional transmissions to nodes experiencing collisions, CT-CAS requires percent more transmissions than CF-CAS. The two schemes produce at most and percent more transmissions than VIA, respectively. However, owing to their shorter broadcast schedules, CF-CAS and CT-CAS consistently consume and percent less total energy than VIA, respectively.

6.2.3. The Impact of the Duty-Cycle

Figure 9 presents the impact of the duty-cycle on the performance of the schemes. While varying the length of the working period, we fix the number of nodes at and the transmission range at m. As nodes sleep longer when the length of the working period increases, a forwarding node spends more time waiting for its receivers to wake up. Figure 9(a) shows that the broadcast latencies of all schemes increase. Thanks to node criticality awareness, CF-CAS improves the broadcast latency by percent compared to VIA. Instead of deferring a transmission to prevent a collision at every node, CT-CAS allows collisions at some nodes to reduce the broadcast latency by percent compared to CF-CAS.

In a longer working period, a forwarding node may require more transmissions to cover all of its neighbors with different active slots. Thus, the total number of transmissions grows as the working period length increases, as shown in Figure 9(b). However, the total energy consumption of VIA, CF-CAS, and CT-CAS decreases because their broadcast schedules are short enough and more nodes are sleeping in each working period with a larger , as shown in Figure 9(c). Although the proposed schemes produce percent more transmissions than VIA, CF-CAS and CT-CAS require and percent less energy, respectively, by completing a broadcast faster.

6.2.4. The Trade-Off between Number of Transmissions and Broadcast Latency

While the network diameter, i.e., the shortest distance between the two most distant nodes, increases by adding more nodes or reducing the transmission range, the broadcast latency improvements of the CF-CAS and CT-CAS schemes over VIA also increase, as shown in Figures 7(a) and 8(a). This reveals that the CAS approach reduces broadcast latency more in networks with larger diameters. By contrast, the proposed schemes become worse in terms of the number of transmissions in such networks, as shown in Figures 7(b) and 8(b). The reason is that transmissions from high-degree forwarding nodes in the VIA scheme can cover more nodes than transmissions from nodes on critical paths in the proposed schemes.

To present the trade-off between the number of transmissions and the latency of a broadcast, we refer to nodes with latency-ahead values higher than times the maximum latency-ahead value as critical nodes in the network. The other nodes are referred to as noncritical ones. We slightly modify the CT-CAS algorithm, hereafter referred to as CT-CAS(), by allowing collisions only at noncritical nodes. After covering a critical node in a time slot, its covered neighbors will not be selected as forwarders in the time slot, i.e., they are excluded from the covered node set, to prevent a collision at the critical node. It is worth noting that the CT-CAS algorithm is the case of CT-CAS() when . The bigger the parameter is, the fewer critical nodes the network has, and thus the more collisions may possibly occur at noncritical nodes.
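The criticality test behind CT-CAS() reduces to a one-line predicate; in the sketch below, `alpha` stands for the threshold parameter and the names are illustrative assumptions:

```python
def is_critical(v, la, alpha):
    """A node is treated as critical when its latency-ahead value exceeds
    alpha times the maximum latency-ahead value in the network."""
    return la[v] > alpha * max(la.values())
```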

From the simulation results shown in Figure 10, we observe that increasing results in a performance improvement in terms of broadcast latency. With a high , transmissions to nodes with high latency-ahead values receive more preference in a broadcast schedule. Such nodes can receive a message sooner and forward it further. Hence, the overall broadcast latency is reduced. Obviously, more transmissions are required to accommodate collisions possibly occurring at the noncritical nodes. With , the latency of CT-CAS is the closest to the lower bound of the broadcast latency, which is the maximum latency-ahead value in a network. The results show that the broadcast latency of CT-CAS is at least percent lower than that of CF-CAS. CT-CAS requires at most percent more transmissions than CF-CAS due to retransmissions.

7. Conclusions

This paper presents critical-path aware scheduling (CAS) for the Minimum Latency Broadcast Scheduling problem in duty-cycled wireless sensor networks. The scheduling reduces broadcast latency by preferring transmissions to nodes along the critical paths of a network. We employ the scheduling in two latency-efficient broadcast schemes: Collision-Free-CAS (CF-CAS) and Collision-Tolerant-CAS (CT-CAS). By allowing collisions at low-criticality nodes, CT-CAS accelerates the broadcast process for high-criticality ones; it produces a shorter broadcast schedule with more transmissions than CF-CAS. Both proposed schemes improve broadcast latency by up to percent while increasing the number of transmissions by at most percent compared with the existing schemes. The shorter broadcast schedules reduce the total energy consumption by up to percent.

For further study, finding a solution for latency-efficient multicasting in duty-cycled WSNs is an appealing problem. Designing broadcast schemes that balance the energy consumption of every node in order to maximize the network lifetime is another challenging problem. Last but not least, we intend to study more realistic, uneven node deployments to prove the applicability of our proposed methods.

Data Availability

The data used to support the findings of this study are included within the article.

Disclosure

This paper is an extended version of our paper published in International Conference on Parallel and Distributed Processing Techniques and Applications 2015, Duc-Tai Le, Thang Le Duc, Vyacheslav V. Zalyubovskiy, Dongsoo S. Kim, and Hyunseung Choo: On Minimizing Broadcast Latency in Duty-cycled Wireless Sensor Networks (Proceeding of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2015), 2015, pp. 485-490).

Conflicts of Interest

The authors declare no conflicts of interest.

Authors’ Contributions

This paper has been primarily conducted by Duc-Tai Le under the supervision of Hyunseung Choo. Giyeol Im, Thang Le Duc, Vyacheslav V. Zalyubovskiy, and Dongsoo S. Kim contributed to the descriptions and discussions on the proposed schemes and the simulation results presented in the paper.

Acknowledgments

This research was supported in part by Korean government, under IITP (B0101-15-1366), G-ITRC (IITP-2016-R6812-16-0001), PRCP (NRF-2010-0020210), and BSRP (NRF-2016R1D1A1B03934660), respectively.