Abstract

Reducing energy consumption, increasing network throughput, and reducing delay are pivotal issues for wake-up radio- (WuR-) enabled wireless sensor networks (WSNs). In this paper, a relay selection joint consecutive packet routing (RS-CPR) scheme is proposed to reduce channel competition conflicts and energy consumption, increase network throughput, and thereby reduce the end-to-end delay of data transmission in WuR-enabled WSNs. The main innovations of the RS-CPR scheme are as follows: (1) Relay selection: when selecting a relay node for routing, the sender chooses the node with the highest evaluation weight from its forwarding node set (FNS). The weight of a node is a weighted combination of its distance to sink, the number of packets in its queue, and its residual energy. (2) Consecutive packet transmission: once a node accesses the channel successfully, it sends its packets consecutively and releases the channel only after all of them have been sent. Nodes that fail the competition sleep during the winner's consecutive transmission to reduce collisions and energy consumption. (3) Every node sets two thresholds: a packet queue length threshold and a packet maximum waiting time threshold. When the corresponding value at a node exceeds its threshold, the node begins to contend for the channel. Besides, to make full use of energy and reduce delay, nodes far from sink use small thresholds while nodes close to sink use large ones. In this way, nodes in the RS-CPR scheme select as relays those nodes with much residual energy, many queued packets, and a short distance to sink. As a result, a node with no packets to transmit is very unlikely to become a relay, while a node with many queued data packets is very likely to become one. Under this strategy, only a few nodes along a route need to contend for the channel to send packets, thereby reducing channel contention conflicts. Since a relay node holds many data packets, it can send many packets consecutively after a single successful competition, which also reduces the overhead of channel competition and improves network throughput. In summary, the RS-CPR scheme combines relay selection with a consecutive packet routing strategy, which greatly improves network performance. As shown by our theoretical analysis and experimental results, compared with the receiver-initiated consecutive packet transmission WuR (RI-CPT-WuR) scheme and the RI-WuR protocol, the RS-CPR scheme reduces end-to-end delay by 45.92% and 65.99%, respectively, and reduces channel collisions by 51.92% and 76.41%. Besides, it reduces energy consumption by 61.24% and 70.40%. At the same time, the RS-CPR scheme improves network throughput by 47.37% and 75.02%.

1. Introduction

With the development of microprocessor technology, the computing and storage capability of sensing devices is becoming more and more powerful, while their volume keeps shrinking [13]. Sensing devices can therefore be widely used in various data-sensing applications to monitor the surrounding environment, collect data, and send the data to a processing center, where decisions on the sensed events are made, which greatly promotes the development of the Internet of Things (IoT). Currently, the number of devices connected to the IoT has exceeded the number of humans [1]. The wireless sensor network is an important part of the IoT [4–7]. Coupling the IoT with wireless sensor networks (WSNs) enables a variety of smart applications such as smart cities, smart grids, and smart automation [8].

Energy is one of the most important research topics in wireless sensor networks [9–13]. Sensor nodes are powered by batteries with limited capacity and, in many scenarios such as battlefields and other dangerous areas, the batteries cannot be replaced once the nodes are deployed, so saving energy is a central issue in WSN research [14–16]. Researchers have proposed many ways to reduce the energy consumption of nodes [17, 18]. Since the communication module is the most energy-consuming part of a sensor node, with communication operations accounting for about 70% of total energy consumption according to statistics [9, 11], it is necessary to reduce the energy consumed by nodes and by communication operations [9, 15]. Various methods have been proposed for this purpose [17, 18], and the communication protocol is one of the most important parts [19]. A good communication protocol requires small communication overhead, a high effective communication rate [6, 12], low energy consumption [13], low delay [14–16, 20–22], a low data collision rate, and high network throughput [23].

In order to save energy, most WSNs work in duty cycle mode [14, 16, 18, 23], in which nodes are not always active but alternate periodically between active and sleep states. Since the energy consumption of a node in the active state is 2-3 orders of magnitude higher than in the sleep state [16], periodic active/sleep rotation lets a node sleep for a certain proportion of the time and thereby saves energy. However, this method deteriorates data communication performance, increasing transmission delay [12–16] and decreasing throughput. The main reason is that when a sender has data to send, the receiver may be asleep, so the sender has to wait until the receiver becomes active [18, 23]. Conversely, in duty cycle mode, nodes often have no data to transmit or relay when they become active, so they consume energy in vain; and when many nodes are active at the same time, channel competition becomes more severe. The root of these problems is the randomness of data operations: nodes do not know in advance when data operations will occur, so they can only wake up periodically to check whether there is data to send. This mode of operation bounds the delay of data operations while avoiding keeping nodes always active, so it is a trade-off between network performance and energy consumption. If a node could instead be woken up promptly whenever a data operation occurs and sleep the rest of the time, it would no longer need periodic active/sleep rotation but would be active only when needed, and its energy consumption could be greatly reduced.

Wake-up radio (WuR) is such a technology [8]. WuR allows nodes to operate at a power consumption level roughly 1000 times lower than that of a traditional radio [24]. A WuR-enabled WSN node contains two radios: a main radio (MR) and a wake-up radio. The MR is primarily responsible for data exchange, while the WuR is used to wake up the MR. The WuR is always on to receive wake-up calls (WuCs) from other nodes [25], whereas the MR is off or asleep most of the time and is switched on only when needed. Note that the active and sleep periods apply only to MRs [8].

Applying wake-up radio (WuR) technology to WSNs can improve their performance [8]: energy is saved, conflicts are reduced, and network throughput is improved, because nodes turn active only when needed. However, WuR-enabled WSNs still face challenges in optimizing network performance. The WuR mainly wakes up the MR, so after the MRs wake up, the communication between them is similar to that without WuR. If multiple nodes need to communicate at the same time, their MRs still have to compete with each other, and the competing nodes remain active. If a collision occurs, every node is carried into the next round of competition and continues to consume energy. Even after the winner of a round accesses the channel, the failed nodes still need to monitor the channel in the next slot to determine when they can compete again after the winner finishes its transmission. In this case, channel conflicts make both node energy consumption and network delay large. Therefore, Guntupalli et al. [8] proposed a receiver-initiated consecutive packet transmission WuR (RI-CPT-WuR) MAC protocol to improve network performance. Its main idea is that when a node accesses the channel successfully, it sends packets consecutively and stops only when its packet queue is empty. All nodes that failed the competition turn to the sleep state while the winner transmits and turn active to contend for the channel again only when the winner finishes. In other words, the RI-CPT-WuR MAC protocol allows multiple packets to be transmitted after a single competition. One advantage of the RI-CPT-WuR MAC protocol is that it reduces channel conflicts. In the RI-WuR protocol, channel competition is performed packet by packet, so nodes must contend for the channel for every packet transmission, and a collision forces them to compete repeatedly. In the RI-CPT-WuR protocol, however, a node sends all packets in its queue once it accesses the channel, so it no longer takes part in subsequent competitions because its queue is empty. The number of nodes participating in channel competition is therefore reduced, and so is the probability of collision. Another advantage of RI-CPT-WuR is lower energy consumption: on the one hand, conflicts are reduced; on the other hand, the nodes that fail the competition sleep while the winner transmits its packets.

Although RI-CPT-WuR can improve network performance, there is still room for improvement. (1) The key to the RI-CPT-WuR MAC protocol is that a node can send consecutive packets only if there are multiple packets in its queue. However, in wireless sensor networks with high node density and low data generation rates, most nodes hold no packet, or at most one packet, in their queue. In this case, the RI-CPT-WuR MAC protocol degenerates into the original RI-WuR MAC protocol: after a successful channel competition, a node can still send only one data packet, which loses the advantage of RI-CPT-WuR entirely. (2) The RI-CPT-WuR MAC protocol mainly targets a one-hop network in which the sender and the receiver are fixed. In an actual network, however, the receiver is not determined before the sender transmits a data packet; the sender chooses, from the nodes within its transmission range, the relay node that best optimizes network performance. The criteria for evaluating relay selection are the residual energy of the node, the distance from the node to sink, and the number of packets already queued at the node, and these factors need to be considered together to optimize routing performance [26]. The RI-CPT-WuR MAC protocol does not consider them. Therefore, in this paper, a relay selection joint consecutive packet routing (RS-CPR) scheme is proposed to reduce channel competition conflicts and energy consumption, thereby improving network throughput and reducing the end-to-end delay of data transmission for WuR-enabled WSNs. The main innovations of this paper are as follows:

(1) We construct an optimal relay-node-selection evaluation function and propose a relay selection method. The proposed method overcomes the problem that the RI-CPT-WuR protocol degenerates to the RI-WuR MAC protocol when, as in previous strategies, data packets are randomly and uniformly distributed among nodes. The relay selection method steers data packets toward nodes that already hold more packets, so that consecutive packet transmission can be exploited. Specifically, when a node selects a relay to transmit its data packets, it chooses the node with the highest evaluation weight from its forwarding node set (FNS). The weight of a node is a weighted combination of its distance to sink, the number of packets in its queue, and its residual energy. In this way, nodes in the RS-CPR scheme select relays with much residual energy, many queued packets, and a short distance to sink. As a result, a node with no packets to transmit is very unlikely to become a relay, while a node with many queued data packets is very likely to become one. Under this strategy, only a few nodes along a route hold data packets and need to contend for the channel, which reduces channel contention conflicts.

(2) A mechanism that allows packets to be further aggregated during routing is proposed by using two thresholds: a packet queue length threshold and a packet maximum waiting time threshold. In the RS-CPR scheme, a node begins to contend for the channel and send packets when its packet queue length exceeds the queue length threshold or when the maximum waiting time of its packets exceeds the waiting time threshold. The role of the queue length threshold is to defer transmission until the packets in a node have accumulated to a certain amount, so that the node can send multiple data packets after a single successful channel competition, thereby improving network efficiency. The role of the waiting time threshold is to bound delay: in a network with sparse traffic, it may take a long time for the queue to reach the length threshold, which could push the data transmission delay beyond the application requirement. The strategy therefore stipulates that if the waiting time of a packet in the queue reaches the waiting time threshold, the node contends for the channel regardless of its queue length, ensuring that the delay stays within the allowable range. By combining the relay selection method with these thresholds, the RS-CPR scheme aggregates packets at the nodes while keeping delay small. In fact, once a node whose queue has reached the length threshold sends its packets, every node on the routing path that receives them also satisfies the condition for contending for the channel, and its delay is the same as under other RI-WuR MAC protocols. When transmission is triggered because the maximum waiting time has been reached, the node still selects the relay with the largest number of packets, so the number of packets transmitted after each successful competition increases, again improving network efficiency. In addition, we set different thresholds for nodes in different regions according to the energy-consumption imbalance of wireless sensor networks, in order to speed up packet delivery. Specifically, because nodes in the near-sink region of a WSN consume a large amount of energy, whereas nodes in the far-sink region consume little and have substantial energy left, the RS-CPR scheme sets small thresholds in the far-sink region to speed up data transmission; although this may increase energy consumption there, the surplus energy means it does not affect the network lifetime. In the near-sink region, larger thresholds are used to reduce energy consumption and maintain a long network lifetime. Thus, network performance is optimized overall.

(3) The RS-CPR scheme further extends the advantage of the consecutive packet scheme. In the consecutive packet scheme, a node transmits packets once it accesses the channel and does not release the channel until all of its packets have been sent, while the nodes that fail the competition turn to the sleep state during the winner's consecutive transmission to reduce competition conflicts and energy consumption. Since the relay node holds many data packets, it can send many packets consecutively after a single successful competition, thereby reducing packet-by-packet contention and increasing network throughput. The RS-CPR scheme thus combines relay selection with the consecutive packet routing scheme, which greatly improves network performance.

(4) According to our theoretical analysis and experimental results, compared with the receiver-initiated consecutive packet transmission WuR (RI-CPT-WuR) scheme and the RI-WuR protocol, the RS-CPR scheme reduces end-to-end delay by 45.92% and 65.99% and reduces channel collisions by 51.92% and 76.41%, respectively. Besides, it reduces energy consumption by 61.24% and 70.40%. At the same time, the RS-CPR scheme improves network throughput by 47.37% and 75.02%.

The rest of the paper is organized as follows: In Section 2, related work is reviewed. In Section 3, the network model and problem statement are presented. Then, the RS-CPR scheme is introduced in Section 4. Performance analysis of the RS-CPR scheme is presented in Section 5. Finally, Section 6 provides conclusions and future work.

2. Related Work

Due to the development of microprocessor technology, the computing and storage capacity of sensor nodes has been greatly improved, which sets them completely apart from their earlier, weak processing capability. Computation that could not be handled in the past can now be processed, since the capacity of sensor devices is tens or hundreds of times higher than before. Meanwhile, the sensing devices on sensor nodes are becoming smaller and smaller and more and more numerous [27, 28], so more and more sensor nodes are able to sense sound, images, vibration, and azimuth at the same time. In addition, improvements in external communication and interconnection conditions allow more and more applications to be developed for WSNs, and multiple applications can be deployed on the same WSN at the same time, which greatly saves deployment time and cost [29]. In this way, many applications can use existing WSNs to complete deployment in a very short time, be put into use quickly, and obtain abundant data, thus greatly promoting the development of the Internet of Things (IoT) [9, 30, 31], so that the IoT coupled with wireless sensor networks (WSNs) forms the so-called edge network [29, 30, 34, 35]. The emergence of this trend has led to significant changes in network structure: from the cloud computing mode [36], dominated by computing and storage centers, to the edge computing mode [29, 35] and the fog computing mode [1, 2, 31, 35, 37, 38]. According to Ref. [1], the number of devices connected to the IoT has exceeded the number of human beings, so a lot of computing, storage, and data perception is completed at the edge of the network [39]. Currently, the development of artificial intelligence technology [40, 41], privacy protection [42, 43], and trust and security technology [31, 35, 39] is making networks evolve even more rapidly, which brings unprecedented opportunities to the IoT and greatly promotes network development [44].

The wireless sensor network is the most important component of the IoT [35]. It consists of many sensor nodes, each mainly composed of a microprocessor, a battery, a communication device, and memory [9, 11]. Nodes are deployed in areas that need to be monitored and build the network in a self-organizing way [14, 18]. What matters most in WSNs is that the many nodes can communicate with each other through appropriate communication protocols [14–16]. According to research results, the communication operations of sensor nodes are the most energy-consuming operations in the system, accounting for about 70% of total system energy consumption [14, 15]. Most sensor nodes are powered by batteries and, in many cases, nodes are deployed in places that humans can hardly reach or in dangerous scenarios such as battlefields, high-temperature areas, and high-voltage areas, so replacing batteries is unrealistic [14, 15, 18, 22]. Therefore, how to effectively save energy and prolong the network lifetime has become an important issue in communication protocol design [45].

The MAC protocol is the basic communication protocol in wireless sensor networks and plays an important role in network performance. In wired networks, the main criteria for MAC protocol design are protocol scalability, end-to-end delay, channel utilization, throughput, and conflict avoidance; because devices in wired networks are generally always powered, their energy consumption is rarely a concern. However, since the energy of wireless sensor network nodes is limited [14–17], the most important factor in designing their MAC protocols is energy efficiency [3, 6]. In order to save energy, most MAC protocols in WSNs are duty cycle- (DC-) based; in this paper, the corresponding networks are called DC WSNs [14, 16, 18, 23]. In such MAC protocols, since the energy consumption of a node in the active state is about 1000 times [14, 16] higher than in the sleep state, nodes use periodic active/sleep rotation. Ideally, since sensor nodes have no data to transmit most of the time, they should stay asleep as much as possible. However, data operations are random events that nodes cannot anticipate, so periodic active/sleep rotation keeps the delay caused by data operations bounded [6, 12] while still letting nodes sleep for part of the time to save energy [14–16].

The corresponding MAC protocols in WSNs can be divided into two categories: contention-free MAC protocols and contention-based MAC protocols. In contention-free MAC protocols, such as Time Division Multiple Access (TDMA) protocols, time is divided into equal units called slots, and the slots for data operations are allocated beforehand. Nodes turn active in their preallocated slots to send or receive data. In contention-free MAC protocols such as TDMA, node energy is greatly saved because each node is active only when it actually needs to perform a data operation. The disadvantage of contention-free MAC protocols is that they are not suitable for dynamic WSNs; they are better suited to networks with regular data generation, because the timing of each node's data operations can then be planned in advance, and an optimized algorithm can allocate the corresponding slots beforehand, which is effective and time-saving. However, because the active slots of each node are planned in advance, a TDMA-like MAC protocol has to reschedule the slots of the whole network whenever active slots need to be added or removed, so it is poorly suited to networks whose topology or generated traffic changes dynamically.

The other category is the contention-based MAC protocol, in which nodes must compete for the channel before data transmission: the winner transmits its data, while the losers wait for the next competition slot to contend for the channel again. Contention-based MAC protocols can be further divided into two categories according to whether time is synchronized between nodes: synchronous MAC protocols and asynchronous MAC protocols. In synchronous WSNs, the nodes' clocks are synchronized; in other words, the nodes become active and go to sleep at the same time. The sensor-MAC (S-MAC) protocol [46] and the T-MAC protocol [47] are synchronous MAC protocols.

The advantage of this kind of MAC protocol is that, thanks to time synchronization and the synchronized active/sleep schedule, the sender can find its receiver awake whenever data need to be transmitted. However, this kind of strategy requires the system to maintain time synchronization between nodes. Maintaining synchronization of all nodes in the network is a system-wide operation that consumes extra time and energy, so the system overhead is large. In addition, in synchronous WSNs, because all nodes are active in the same slots, the probability of collision is higher than in asynchronous MAC protocols. In asynchronous MAC protocols, nodes run their periodic active/sleep cycles independently. The Berkeley Media Access Control (B-MAC) protocol [48] and the X-MAC protocol [49] belong to this kind of asynchronous MAC protocol. The advantage of this kind of protocol is that no time synchronization is needed between nodes, which saves that part of the system overhead. The disadvantage is that it may increase data transmission delay: because nodes are not synchronized, the receiver may be asleep when the sender has data to send, so the sender has to wait for the receiver to wake up before transmitting.

Contention-based MAC protocols can also be divided into sender-initiated (SI) and receiver-initiated (RI) protocols [50], according to who initiates the data transmission request. In most WSNs, the MAC protocol is sender-initiated. A typical sender-initiated protocol works as follows: when the sender has data to send, it first initiates a data transmission request (such as a long preamble); if the receiver is active, a link can be established to transmit the data. Since the receiver may be asleep when the sender issues the request, the sender must keep the request signal on for a long time, usually longer than a sleep period, to ensure that the receiver will hear it. A receiver-initiated (RI) MAC protocol generally works as follows [50]: when the sender has data packets to send, it turns active and starts listening to the channel; when the receiver wakes up, it sends a beacon, and if a sender needs to transmit at that moment, it returns a response so that a link can be established. Besides these two categories, there is a third category, the sink-initiated MAC protocol, which is driven by the application of the WSN. In WSNs, nodes monitor and perceive data, and the sensed data are generally sent to sink, which processes them to make decisions. Sink therefore knows best when data are needed and at what frequency, so in the sink-initiated MAC protocol, sink initiates the data request and the sensor nodes send their data to sink.

In addition to the two types of MAC protocols mentioned above, contention-free and contention-based, researchers have proposed a variety of hybrid MAC protocols. For example, the TDMA protocol may be used in the first stage of data collection and a contention-based MAC protocol in the second stage. Some protocols combine the sender-initiated (SI) MAC with the receiver-initiated (RI) MAC protocol [50], and there are also variants within the same protocol.

In addition, according to whether the nodes can harvest energy from the surrounding environment, WSNs can be divided into energy-harvesting WSN (EH-WSN) MAC protocols [51] and non-energy-harvesting WSN (NEH-WSN) MAC protocols [6, 14, 17]. The main types of energy harvested from the environment are solar, wind, tidal, and vibration energy [51]. The MAC protocols discussed above generally belong to the NEH-WSN category [3, 6]. In EH-WSNs, since nodes can absorb energy from the environment through energy-harvesting modules, MAC protocol design differs greatly from that of NEH-WSNs: the goal is no longer to save energy but to make full use of the harvested energy. As long as a node does not run out of energy, the energy harvested from the environment should be exploited as much as possible, so the resulting MAC protocols are quite different. These MAC protocols are discussed in detail below.

In the discussion so far, each node has only one main radio (MR), so if it needs to communicate with other nodes, it must be active at the predefined slot, as in the TDMA protocol [52]. In contention-based MAC protocols [46, 47], since there is no communication schedule agreed in advance, nodes use the duty cycle mode, periodically becoming active to check whether they need to take part in data transmission [18, 23]. If a node could be woken up even in the sleep state, it would not need periodic active/sleep rotation but could sleep all the time and be awakened by an additional radio only when required, which saves even more energy. Wake-up radio (WuR) is such a technology [8, 51]. A node then carries two types of radios: a main radio (MR) for data operations and a wake-up radio (WuR) for wake-up operations between nodes [8, 51]. Since the power consumption of the WuR is roughly 1000 times lower than that of the MR, keeping the WuR always on costs little energy, while the energy saved by letting the MR sleep far exceeds what the WuR consumes. Moreover, the data transmission performance of WuR-enabled WSNs is much better than that of WSNs with only one MR [15, 54]. This paper mainly studies WuR-enabled WSNs.

2.1. Contention-Free MAC Protocol

The Time Division Multiple Access (TDMA) protocol is a representative contention-free MAC protocol [52]. In the TDMA protocol, time is divided into equal slots, and a fixed number of slots form a cycle. A scheduling algorithm determines, for every slot of every node, whether the node is active or asleep, and for active slots whether the node sends or receives. Each node in the network then runs periodically according to the schedule produced by the TDMA scheduling algorithm. Since nodes are active only when they really need to perform data operations and sleep in all other slots, this strategy has high energy efficiency, and its data transmission speed is generally high. The main optimization objectives in designing a TDMA strategy are (a) to minimize the number of active slots, which saves energy; (b) to complete data collection in the shortest time; and (c) to consume energy evenly across nodes [52].

The TDMA protocol has a wide range of applications in wireless sensor networks and is particularly suitable for networks that generate packets periodically. In such a network, each node generates a packet in every cycle, and these packets have to be routed to sink through multiple hops [53]. The goal of TDMA scheduling is to arrange appropriate active slots for each node so that all packets reach sink in the shortest time while the nodes consume the least energy. The simplest case is TDMA scheduling for linear wireless sensor networks, in which all sensor nodes are deployed along a single line. For example, when oil pipelines, electric cables, optical cables, or borders are monitored, sensor nodes tend to form a linear network along the monitored object, with sink located in the middle or at the end of the line [54]. Long et al. [54] gave a TDMA scheduling strategy for such networks and extended the one-hop TDMA strategy in linear networks to an arbitrary k-hop TDMA algorithm.

The most widely used and most important type of wireless sensor network is the data aggregation network [21, 23, 52]. In such networks, data packets that meet can be merged into one packet [52]; this data collection pattern is known by the special term convergecast [52]. It uses a two-stage strategy [52]: a data reception stage, in which a node only receives packets from its child nodes, and a data transmission stage, in which the node fuses the received packets into one packet and sends it to its parent node. Once a node has sent its packet, it has completed all operations of this round of data collection and no longer receives data. That is, each node receives multiple packets but sends only one. In such TDMA scheduling, the WSN is abstracted into a tree with sink as the root. Data are collected by the leaf nodes and transmitted layer by layer toward sink until sink has received all packets in the network.
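As a rough illustration of this two-stage convergecast pattern, a node can be modeled as receiving one fused packet from every child before emitting a single fused packet of its own; the tree, the readings, and the summation fusion rule below are invented for the example and are not taken from [52].

# Illustrative sketch of two-stage convergecast on a tree (assumed model).
from typing import Dict, List

def convergecast(node: str, children: Dict[str, List[str]], readings: Dict[str, int]) -> int:
    """Stage 1: receive one fused packet from every child.
    Stage 2: fuse them with the local reading and forward a single packet upward."""
    received = [convergecast(child, children, readings) for child in children.get(node, [])]
    return readings[node] + sum(received)   # example fusion rule: summation

# Tiny example tree rooted at the sink.
children = {"sink": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
readings = {"sink": 0, "a": 1, "b": 2, "c": 3, "d": 4}
print(convergecast("sink", children, readings))  # a single aggregated value reaches sink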

The research above abstracts the WSN into a tree; some studies instead adopt a hierarchical network structure, that is, TDMA scheduling over cluster-based WSNs [52]. In cluster-based WSNs, the network forms multiple clusters according to a clustering algorithm. Each cluster has one node acting as the cluster head, and the remaining nodes are member nodes. The cluster head allocates data-sending slots to its member nodes, and the member nodes send data to the cluster head in their allocated slots. This process, called intracluster data collection, can run in parallel across the network. Once intracluster collection is completed, only the cluster heads still hold data packets, and the tree-based method described above can then be used: only the cluster heads need to transmit, forwarding data toward sink layer by layer from the outer layers inward. Li et al. [52] also proposed a parallel data collection network and TDMA scheduling strategy with a large cluster radius in the near-sink region and a small cluster radius in the far-sink region. In their scheduling strategy, each node needs only one state transition to complete data collection, whereas in previous strategies a node needs two state transitions.

As discussed earlier, the main disadvantage of contention-free MAC protocols is that they cannot adapt to dynamic data collection. For example, when the set of nodes generating packets in a cluster changes often and the number of such nodes also changes frequently, the TDMA strategies discussed above can still be used, but they perform poorly: they assume that all nodes generate packets, so a node that generates no packet still keeps its slot, which lengthens data collection, especially when only a few packets are generated in the network. BMA (Bit-Map-Assisted) is a protocol that can adapt dynamically to the actual data generation [55]. In the BMA protocol [55], the network is still divided into clusters, and the number of slots required for data collection in each cluster is adjusted dynamically according to the actual needs of the nodes. The BMA protocol operates as follows: a member node that needs slots applies to the cluster head; after all such member nodes have applied, the cluster head knows how many slots are required in the cluster, allocates a slot to each requesting node, and announces the allocation to the member nodes, which then send their data in the allocated slots. Through this mechanism, the BMA protocol can adjust the required slots on demand and thus adapt to dynamic data generation. However, the intracluster slot negotiation it introduces increases both data collection time and energy consumption [55].
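The on-demand allocation at the heart of BMA can be sketched as follows; the request/assignment structures here are a deliberate simplification of the bit-map exchange described in [55], not its actual message format.

# Sketch of BMA-style on-demand slot allocation within one cluster (simplified).
def allocate_slots(member_requests: dict) -> dict:
    """member_requests maps a member node id to True if it has data this round.
    The cluster head assigns consecutive slots only to the members that asked."""
    schedule = {}
    next_slot = 0
    for member, has_data in member_requests.items():
        if has_data:
            schedule[member] = next_slot
            next_slot += 1
    return schedule

# Only two of four members have data, so the data phase needs only two slots.
print(allocate_slots({"m1": True, "m2": False, "m3": True, "m4": False}))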

2.2. Contention-Based MAC Protocol

The contention-based MAC protocol can be divided into two categories: one is the synchronous-based MAC protocol and the other is the asynchronous-based MAC protocol. The synchronous MAC protocol is discussed below.

2.2.1. Synchronous-Based MAC Protocol

Sensor-MAC (S-MAC) is a synchronous MAC protocol that uses the duty cycle mode [46]. The timeline of the S-MAC protocol is shown in Figure 1. A duty cycle is divided into two phases: a listen phase and a sleep phase. In the listen phase, nodes are active and mainly perform two tasks: clock synchronization, and data operations if there is data to send or receive [46]; otherwise they listen to the channel. After waking up, a node first listens to the channel and then maintains synchronization with the other nodes through a series of clock synchronization operations. After synchronization, if the node has data to send or receive, it performs the data operations; otherwise, it stays idle until the listen phase expires and then goes to sleep.

As can be seen from the above analysis, the cycle of S-MAC is fixed, and nodes have a fixed active time in every cycle even when there is no data operation, so energy consumption is higher than necessary. To address this, the T-MAC (Timeout MAC) protocol improves on the S-MAC protocol [47]. Its main features are as follows: nodes still use the periodic active/sleep mode and still perform the synchronization operation first after waking up; however, unlike in S-MAC, after synchronization a node listens to the channel for a period of TA (Time Active). If no data operation is detected within the TA interval, the node goes to sleep early; otherwise, it performs the data operations. A comparison of the awake timing of the S-MAC and T-MAC protocols is given in Figure 2. In T-MAC, nodes stay awake for a shorter time than in S-MAC, which saves energy [46, 47], but data transmission delay can increase because of the so-called early sleep phenomenon: if a data transmission requirement arises after the TA interval has expired, the operation has to wait until the nodes wake up in the next cycle [47].
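The early-sleep rule of T-MAC amounts to a simple timeout. Below is a minimal sketch under an assumed discrete-slot event model (the slot granularity and the event list are illustrative assumptions, not part of the T-MAC specification).

# Minimal sketch of T-MAC's TA timeout (assumed discrete-slot model).
def listen_phase(events_by_slot: list, ta: int) -> int:
    """Return the number of slots the node stays awake in one listen phase.
    The node sleeps as soon as TA consecutive slots pass without channel activity."""
    idle = 0
    for awake_slots, busy in enumerate(events_by_slot, start=1):
        idle = 0 if busy else idle + 1
        if idle >= ta:
            return awake_slots          # early sleep: no activity for TA slots
    return len(events_by_slot)

print(listen_phase([True, False, False, False, True], ta=3))  # node sleeps after slot 4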

2.2.2. Asynchronous-Based MAC Protocol

In the synchronous MAC protocols discussed above, nodes perform clock synchronization after becoming active, so the system needs extra overhead that affects the network lifetime, and the synchronization itself takes processing time, which may increase data transmission delay. Therefore, researchers have proposed asynchronous MAC protocols. B-MAC (Berkeley Media Access Control) is an asynchronous MAC protocol for wireless sensor networks [48], and its timeline is shown in Figure 3. In the B-MAC protocol, each node has an independent active/sleep schedule. When the sender has data to send, it immediately turns active and transmits a long preamble whose length must exceed the sleep period so that the preamble can be heard by the receiver. The receiver uses Low Power Listening (LPL) to sample the channel when it periodically wakes up; if it detects the preamble signal from the sender, it responds to establish the communication connection.

The characteristics of asynchronous MAC protocols are therefore that there is no synchronization and each node alternates between active and sleep according to its own clock cycle [48]. However, because of the lack of synchronization, a sender must transmit a signal such as a preamble when establishing a connection and keep it going long enough for unsynchronized nodes to hear it, and a receiver must listen to the channel after waking up to learn whether any node wants to send data. Both the preamble and the listening consume energy.

In order to reduce the energy the sender spends on keeping the long preamble, the X-MAC protocol improves on B-MAC [49]. In X-MAC, the sender does not transmit the preamble continuously; instead, it sends a short preamble and then listens, alternating between short preambles and listening. Since part of the preamble is replaced by listening in LPL mode, energy consumption is reduced, but the network delay is increased [49].
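The alternation of short preambles and listening can be sketched as a simple loop; the way the loop terminates when the receiver answers during a listen gap, and the probabilistic wake-up model, are assumptions of this sketch rather than details taken from [49].

# Sketch of an X-MAC-style strobed preamble (simplified, randomized timing model).
import random

def strobed_preamble(max_strobes: int, p_receiver_awake: float) -> int:
    """Alternate short preamble / brief listen until the receiver answers.
    Returns how many short preambles were sent before a response was heard."""
    for strobe in range(1, max_strobes + 1):
        # send a short preamble, then listen briefly for a response
        if random.random() < p_receiver_awake:
            return strobe               # receiver answered during the listen gap
    return max_strobes                  # strobe budget exhausted

random.seed(1)
print(strobed_preamble(max_strobes=20, p_receiver_awake=0.2))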

2.2.3. Receiver-Initiated- (RI-) Based MAC Protocol

The protocols discussed above let the sender initiate the communication connection; in fact, letting the receiver initiate the connection is also widely used. The receiver-initiated- (RI-) based MAC protocol is a kind of MAC protocol in which the receiver initiates the communication connection [8, 50]. The timeline of the RI-MAC protocol is shown in Figure 4 [8, 50]. The nodes likewise use the duty cycle mode. When the sender has data to transmit, it immediately becomes active and listens to the channel; when the receiver wakes up, it first sends a beacon, and if a sender is listening to the channel, a connection can be established (see Figure 4).

In the RI-MAC protocol, if the receiver sends a beacon and receives no response from any sender, it concludes that no node in the network needs to send data and goes back to sleep to save energy. Compared with SI-MAC, RI-MAC [15, 52] is advantageous when many nodes have data to send: in SI-MAC, a sender must transmit a preamble whenever it needs to send data, which occupies the channel, whereas in RI-MAC the sender only listens to the channel while waiting. The receiver that wakes up and sends the beacon occupies the channel only briefly, so the probability of collision is low.

2.3. Hybrid MAC Protocol and Optimization

The most typical hybrid MAC protocol mixes a contention-free MAC protocol with a contention-based one. Real wireless sensor networks are not as ideal as the networks described above, and various practical conditions force the MAC protocols designed for the ideal case to be adapted. For example, the communication reliability of wireless networks is far worse than that of wired networks, with packet loss often as high as 10% or more, so packet loss has to be considered when designing the MAC protocol. The design of the MAC protocol is therefore complicated by many factors, such as retransmission, energy consumption, and delay. The Broadcasting Combined with Multi-NACK/ACK (BCMN/A) protocol is such a hybrid MAC protocol [56]. The BCMN/A protocol uses the following two methods to speed up data collection and reduce energy consumption. The first is a TDMA-based intracluster data collection mechanism [56]. Unlike the earlier intracluster mechanisms, the BCMN/A protocol targets lossy WSNs, in which member nodes may lose packets when sending them to the cluster head. To handle this, the cluster head runs multiple rounds of data collection: before each round, every member node is allocated a slot and sends one data packet in its slot; since some packets are lost, after a round the cluster head reallocates slots only to the member nodes whose packets were not received and collects again. Intracluster data collection ends only when all data have been collected or the collected packets meet the application requirements. Multi-ACK is used in intracluster data collection to reduce the energy consumption of data collection [56].

There are many studies on optimizing MAC protocols. Because most wireless sensor networks use the duty cycle mode, many of them focus on optimizing the duty cycle. Generally speaking, lengthening the active time of nodes within a cycle reduces data transmission delay but costs more energy; shortening it saves energy but may increase delay. Byun et al. [57] therefore proposed a method for adaptively adjusting the duty cycle: when the delay of data packets arriving at sink exceeds the application's threshold, the active time of the nodes is lengthened; conversely, when the delay is far below the threshold, the active time is shortened, ultimately reducing energy consumption while still meeting the application's delay requirement. In some other studies of RI-MAC, the receiver does not send just one beacon but actively sends multiple beacons at intervals, which effectively reduces the early sleep phenomenon and the delay.
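The adaptive rule of [57] amounts to a simple threshold controller. In the sketch below, the step size, the bounds, and the "far below the threshold" margin are placeholders chosen for illustration, not values taken from [57].

# Sketch of delay-driven duty-cycle adaptation in the spirit of [57].
def adapt_active_time(active_slots: int, measured_delay: float, delay_bound: float,
                      step: int = 1, low: int = 1, high: int = 20) -> int:
    if measured_delay > delay_bound:
        return min(active_slots + step, high)    # too slow: listen longer next cycle
    if measured_delay < 0.5 * delay_bound:
        return max(active_slots - step, low)     # ample slack: shorten to save energy
    return active_slots                          # within range: keep the current cycle

print(adapt_active_time(active_slots=5, measured_delay=120.0, delay_bound=100.0))  # 6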

2.4. Wake-Up Radio- (WuR-) Enabled WSNs

Wake-up radio- (WuR-) enabled WSNs improve communication performance by adding a wake-up radio (WuR) hardware device to each node [8, 51]. Earlier WSNs use the duty cycle mode to save node energy [14–16], which is essentially a trade-off between energy consumption and data transmission delay. If sensor nodes had unlimited energy, they could stay active all the time and no WuR would be needed. To reduce energy consumption, however, nodes adopt the duty cycle mode, which increases data transmission delay and decreases network throughput. Communication performance deteriorates because a sleeping node has no way of knowing that a data operation is waiting for it; the operation can only be performed after the node wakes up. If nodes could be woken up at any time, even in the sleep state, they could wake up on demand instead of waking up periodically just in case there is data to transmit. This clearly saves energy while achieving performance close to that of always-active nodes, so such networks perform well; the cost is the additional wake-up radio (WuR) hardware on each node [8, 51].

Even in WuR-enabled WSNs, network performance can be further improved [8, 51]. When there are multiple senders, all of them can be woken up by WuR to send data; they then have to contend for the channel, and only the winner can transmit. The losers must wait until the current sender has finished before competing for the channel again, which means they keep listening to the channel and contend again only after the winner ends its transmission, and they can go to sleep only after they themselves access the channel and send all their packets. If a node has multiple packets, one packet is sent per competition and the whole process is repeated for every packet, so the energy consumption of such a MAC protocol is relatively large. Guntupalli et al. [8] proposed the receiver-initiated consecutive packet transmission WuR (RI-CPT-WuR) medium access control (MAC) protocol [57] to improve this situation: when the winner accesses the channel successfully, it does not send just one data packet but continuously transmits all the packets in its queue, and it broadcasts the time required for this transmission after winning, so that the other competitors need not stay awake during this time and can sleep. This reduces the number of competitions, the number of wake-ups, and the waiting time of competing nodes, which helps to reduce energy consumption and improve network performance.
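The core of the consecutive-transmission idea can be sketched in a few lines. In the sketch, contention is resolved by a random draw purely for illustration, and the node names and queue sizes are made up; it is not a reproduction of the protocol in [8].

# Sketch of consecutive packet transmission: the winner of one contention sends its
# whole queue and announces the burst length so the losers can sleep in the meantime.
import random

def transmit_round(queues: dict) -> None:
    contenders = [n for n, q in queues.items() if q > 0]
    while contenders:
        winner = random.choice(contenders)           # one successful contention
        burst = queues[winner]                       # announce the burst length ...
        print(f"{winner} sends {burst} packet(s); losers sleep for {burst} slot(s)")
        queues[winner] = 0                           # ... and empty the queue
        contenders = [n for n, q in queues.items() if q > 0]

random.seed(0)
transmit_round({"relay1": 3, "relay2": 2, "relay3": 0})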

However, the prerequisite for the RI-CPT-WuR protocol to pay off is that a node must hold multiple packets [8]; if a node holds few packets, RI-CPT-WuR behaves the same as RI-WuR. The RS-CPR scheme proposed in this paper therefore systematically creates conditions under which a node either accumulates multiple packets or holds none, so that nodes with multiple packets can fully exploit the RI-CPT-WuR protocol while nodes without packets sleep, which reduces the number of competitors and the number of contention conflicts. Achieving this in self-organizing wireless sensor networks is not easy, so this paper carefully designs the RS-CPR scheme to improve on the previous protocols and the network performance.

3. The System Model and Problem Statement

3.1. The Network Model

We assume that the network contains a number of homogeneous sensor nodes and one sink node, and that each node is equipped with a WuR device. For ease of description, we abstract the system model into a planar tree network with sink as the root node. With the exception of the leaf nodes, every node in the network is within the transmission radius of its child nodes and can receive data from them. A parent node may have multiple child nodes, and a child node may have multiple parent nodes. A diagram of the simplified network is shown in Figure 5.

The energy consumption of each node consists of three parts: (1) the energy consumed for receiving packets, (2) the energy consumed for sending packets, and (3) the energy consumed while idling. The total energy consumption of a node is the sum of these three parts.

Furthermore, the receiving energy equals the number of packets received multiplied by the energy required to receive one packet, the sending energy equals the number of packets sent multiplied by the energy required to send one packet, and the idle energy equals the idle time multiplied by the energy consumed per unit of idle time; the total energy consumption of the node is again the sum of these terms.
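Written out with explicit symbols (introduced here only for concreteness; any notation would do), the energy model for a node $v_i$ reads:

$E(v_i) = E_{rx}(v_i) + E_{tx}(v_i) + E_{idle}(v_i)$

$E(v_i) = n_{rx}\,e_{rx} + n_{tx}\,e_{tx} + t_{idle}\,e_{idle}$

where $n_{rx}$ and $n_{tx}$ are the numbers of packets received and sent by $v_i$, $e_{rx}$ and $e_{tx}$ are the per-packet receiving and sending energies, $t_{idle}$ is the idle time, and $e_{idle}$ is the energy consumed per unit of idle time.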

3.2. Definitions

Definition 1. (the waiting time of data packets in a node). The waiting time of a data packet in a node is defined as the number of slots that elapse from the moment the node receives the packet from its child node until the node contends for the channel to send it. As shown in Figure 6, the packet arrives at the node in one slot, and in a later slot the node begins to compete for the channel to send it; the number of slots between these two moments is the waiting time of the packet in the node.

Definition 2. (the packet queue length threshold). All incoming packets are stored in the node's queue. If the number of packets in the node is equal to or greater than the queue length threshold, the node begins to contend for the channel; otherwise, a large delay may result. As shown in Figure 7, a node receives data packets transmitted by its child nodes, and once the number of packets it holds reaches the queue length threshold, it starts to contend for the channel.

Definition 3. (the packet maximum waiting time threshold). A packet placed in the node's queue waits there to be sent; if the wait is too long, the delay becomes excessive. To avoid this, when the waiting time of a packet is greater than or equal to the packet maximum waiting time threshold, the node may participate in the channel competition, as shown in Figure 8.
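Putting Definitions 2 and 3 together, a node's decision to contend for the channel can be sketched as the check below. The variable names are illustrative rather than the paper's symbols; per Section 4, far-from-sink nodes would be configured with smaller thresholds than near-sink nodes.

# Sketch of the contention trigger from Definitions 2 and 3 (illustrative names).
def should_contend(queue_length: int, oldest_packet_wait_slots: int,
                   queue_threshold: int, wait_threshold: int) -> bool:
    """A node contends for the channel once its queue is long enough
    or its oldest packet has waited too long."""
    return (queue_length >= queue_threshold or
            oldest_packet_wait_slots >= wait_threshold)

# A near-sink node (large thresholds) vs. a far-sink node (small thresholds).
print(should_contend(2, 1, queue_threshold=4, wait_threshold=6))  # False
print(should_contend(2, 1, queue_threshold=1, wait_threshold=3))  # True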

3.3. The Problem Statements

(1) Minimization of network delay: in this paper, the delay of the network is defined as the number of slots required to transmit packets from the leaf nodes to sink. Our goal is to reduce this network delay.

(2) Maximization of network lifetime: in this paper, the network lifetime is defined as the time from the start of the network until the death of the first node. The energy consumption of a single node consists of the energy consumed for sending packets, the energy consumed for receiving packets, and the energy consumed in idle time, and every node starts with the same initial energy, so maximizing the network lifetime means maximizing the time until the first node exhausts this initial energy.

(3) Reduction in the number of conflicts: this refers to the number of collisions that occur during data transmission. Collisions increase network delay and reduce packet reliability, so the purpose of this article is to minimize their number.

In summary, the goal of RS-CPR is to minimize the delay, maximize the network lifetime, maximize the effective energy utilization, and reduce the number of conflicts, which can be summarized as follows.
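Writing $D$ for the network delay, $T$ for the network lifetime, $U$ for the effective energy utilization, and $C$ for the number of collisions (symbols introduced here only for compactness), the design goals read:

$\min D, \qquad \max T = \max\bigl(\min_i T_i\bigr), \qquad \max U, \qquad \min C$

where $T_i$ is the time at which node $v_i$ would exhaust its initial energy; the minimum over $i$ reflects the definition of lifetime as the time until the first node dies.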

4. The Design of RS-CPR Scheme

4.1. Research Motivation

We propose a relay selection joint consecutive packet routing (RS-CPR) scheme to reduce delay for wake-up radio-enabled wireless sensor networks (WSNs). The research motivation of RS-CPR is as follows.

When the traditional RI-WuR MAC protocol is used in wireless sensor networks, all sending nodes contend for the channel in every slot. This leads to two problems: on the one hand, the failed nodes waste a lot of energy on excessive listening; on the other hand, frequent competition reduces the stability of network transmission. To address this, the authors of [8] proposed the RI-CPT-WuR MAC protocol, which reduces energy consumption and improves network performance through consecutive packet transmission. However, because the initial competition is unavoidable, the RI-CPT-WuR protocol does not improve performance significantly when there are few data packets.

As shown in Figure 9, the network has three layers of nodes: the four nodes in the top layer can communicate with sink directly, and the remaining nodes must reach sink through relay nodes. Each node can send data packets only to its parent nodes. Initially, all nodes have equal energy.

The data packets of the nodes are generated at random; we assume that each node generates a packet with a certain probability, and when this probability is 0.5, half of the nodes generate packets. Suppose that five nodes generate data packets at a given time. Under the traditional routing strategy (hereinafter referred to as TRS), each child node randomly selects one of its parents to forward its packet, so the generated packets end up spread across four of the relay nodes that can communicate with sink directly. According to the RI-WuR protocol, these four nodes then need to contend for the channel to send the data packets to sink. The transmission process in this case is shown in Figure 10.

As long as a node has data packets in its queue, it competes in every slot. Suppose the relay holding two packets wins the first slot and sends its first packet to sink. In slot 2, a second relay wins the competition and sends its data packet. In slot 3, a third relay competes successfully and sends its packet. In slot 4, however, a collision occurs between the two nodes that still hold packets. In slot 5, the relay that has not yet transmitted wins and sends its packet, and in slot 6 the first relay sends its second packet. In total, six slots are needed to transmit all the packets, one collision occurs, and the four relay nodes are active for two, three, five, and six slots, respectively, giving 16 active slots in total.

Guntupalli et al. [8] proposed the RI-CPT-WuR protocol, illustrated in Figure 11. Its main idea is that once a node accesses the channel successfully, it transmits all of its data packets continuously in the subsequent slots, while the nodes that failed the competition stop competing during the winner's transmission and go to sleep in the slots that follow the winner's first packet. This reduces channel competition conflicts and energy consumption and also speeds up data transmission and reduces delay.

With this protocol, the relay holding two packets wins the first slot and transmits its two data packets consecutively. After the winner's first data packet is transmitted, all the remaining nodes sleep in the next slot. A new round of competition starts in slot 3, whose winner sends its packet; in slot 4, another relay competes successfully and sends its packet; in slot 5, the last relay wins and sends its packet. All packets have now been sent: data transmission takes five slots, and no collision occurs. The four relay nodes have 2, 3, 4, and 2 active slots, respectively, for a total of 11 active slots. Compared with the traditional strategy, this strategy causes fewer collisions, needs one fewer slot for data transmission, and reduces the total number of active slots by five, thereby reducing energy consumption.

However, this strategy still has room for improvement.

Suppose instead that  sends a packet to ,  sends a packet to , and  sends a packet to , which means that  has two packets and  has three data packets. The process of data transmission is shown in Figure 12.

Since  and  have no packets to send, they are always in the sleep state. In slot 1,  accesses the channel successfully, and its three packets are sent continuously. After the first packet is sent,  sleeps in the subsequent two slots. In slot 4,  wakes up and sends its two packets. Sending all the packets requires a total of five slots, and no collision occurs.  and  are always in the sleep state, so their number of active slots is 0. The numbers of active slots of  and  are 3 and 3, respectively. Compared with the strategy proposed by Guntupalli et al. [8], this strategy has fewer active slots and is more energy-efficient.

The performance of the three strategies is shown in Table 2, and the number of active slots of each node under the different strategies is shown in Table 3, where "RI, TRS" refers to the traditional routing scheme with the RI-WuR MAC protocol, "RI-CPT, TRS" refers to the traditional routing scheme with the RI-CPT-WuR MAC protocol, "RI-CPT, RS-CPR" refers to the RS-CPR scheme with the RI-CPT-WuR MAC protocol,  refers to the number of slots required for data transmission,  refers to the number of collisions during the transmission, and  refers to the total number of active slots over all nodes.

The forwarding node set of node  is the set of nodes within the transmission radius of  that are closer to sink than . For example, in Figure 13, the forwarding node set of  is {}, that of  is {}, and those of  and  are both {}. According to the new strategy, within the forwarding node set, the nodes that hold many data packets are selected to receive data, while the nodes that hold no data packets do not undertake forwarding work, so the number of nodes that contend for the channel is reduced. Moreover, once a node accesses the channel, it can send multiple data packets continuously, and the nodes that do not compete for the channel can sleep, thus reducing energy consumption and delay.

In this paper, the forwarding node set of a node is abstracted as the set of its parent nodes.

4.2. Algorithm Design

To solve the problems of excessive delay and high energy consumption, data packets should be gathered on a small number of nodes during transmission. Based on the distance to sink, the residual energy, and the number of data packets, we propose the following three selection criteria:
(1) A short distance to sink
(2) More data packets in the packet queue
(3) More residual energy in the node

To consider these three factors comprehensively, we define a comprehensive evaluation index , which is calculated as follows, where  represents the distance between the node and sink,  represents the number of packets in the node's queue,  represents the residual energy of the node, and  indicates the maximum packet queue length of the node. Finally, among all the candidate nodes, the node with the largest  is selected as the relay node to transmit the data packets.
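The exact weighting formula is not reproduced here; the following Python sketch only illustrates one plausible normalized form in which a shorter distance to sink, a longer packet queue, and more residual energy all increase the weight. All names and values are illustrative assumptions, not the paper's formula.

# Illustrative sketch of the relay-weight computation (assumed form).
def evaluation_weight(dist_to_sink, queue_len, residual_energy,
                      max_dist, max_queue_len, initial_energy):
    distance_term = 1.0 - dist_to_sink / max_dist    # closer to sink -> larger
    queue_term = queue_len / max_queue_len           # more queued packets -> larger
    energy_term = residual_energy / initial_energy   # more residual energy -> larger
    return distance_term + queue_term + energy_term

# Example with hypothetical values: 40 m from sink, 3 queued packets, 80% energy left.
print(round(evaluation_weight(40, 3, 0.8, max_dist=100, max_queue_len=5, initial_energy=1.0), 2))
# 2.0  (0.6 + 0.6 + 0.8)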

As shown in Figure 13, the packet queue length, the distance to sink, and the residual energy of nodes , , and are shown. has only one forwarding node , so sends its packet to . In the forwarding node set of , has the highest weight, so sends the data packets to . For and , the node with the highest weight in the forwarding node set is also , so they send packets to . The relay node selection algorithm is shown in Algorithm 1.

(1) For each node in the network
(2)  For each forwarding node of the current node
(3)   add the forwarding node to the forwarding node set of the current node
(4)   calculate the comprehensive evaluation index of the forwarding node
(5)   /∗ collect all the forwarding nodes of the current node and
(6)    calculate their index values ∗/
(7)  End For
(8) End For
(9) For each node that has data packets to send
(10)  sort all nodes in the forwarding node set of the node by their index values
(11)  pick the node with the largest index value as the relay node
(12)  send the data packets and update the packet queue length of the relay node
(13)  update the residual energy of the relay node
(14)  update the residual energy of the sending node
(15) End For

The explanatory remarks about Algorithm 1 are as follows. Lines 1–8: for each node in the network, place its forwarding nodes in its forwarding node set and calculate their comprehensive evaluation index values. Lines 9–15: for each node, the forwarding node with the highest index value is selected to receive the data, the packet queue length and residual energy of the relay node are updated, and then the energy of the sending node is updated as well.
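As a concrete illustration, the Python sketch below implements the same steps under simple assumptions: a Node record holding the attributes used above, the illustrative weight function from the earlier sketch, and hypothetical per-packet sending/receiving energy costs E_TX and E_RX. None of these names come from the paper.

from dataclasses import dataclass, field
from typing import List

E_TX = 1.0   # hypothetical energy cost of sending one packet
E_RX = 0.5   # hypothetical energy cost of receiving one packet

@dataclass
class Node:
    name: str
    dist_to_sink: float
    queue_len: int
    energy: float
    fns: List["Node"] = field(default_factory=list)   # forwarding node set

def weight(n: "Node", max_dist=100.0, max_queue=5, init_energy=100.0) -> float:
    # same illustrative weighting as the earlier sketch
    return (1.0 - n.dist_to_sink / max_dist) + n.queue_len / max_queue \
        + n.energy / init_energy

def select_relays(nodes: List["Node"]) -> None:
    # Lines 1-8: weigh every forwarding node; lines 9-15: pick the
    # highest-weight relay for each sender and update queues and energy.
    for sender in nodes:
        if sender.queue_len == 0 or not sender.fns:
            continue
        relay = max(sender.fns, key=weight)      # highest-weight forwarding node
        sent = sender.queue_len
        relay.queue_len += sent                  # relay takes over the packets
        relay.energy -= sent * E_RX              # relay pays the receive cost
        sender.energy -= sent * E_TX             # sender pays the transmit cost
        sender.queue_len = 0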

However, if there are too many data packets in the queue or the packets wait too long, excessive delay occurs and network performance suffers. Therefore, we use the packet queue length threshold  and the packet maximum waiting time threshold . When the number of data packets in a node is greater than or equal to , or the waiting time of its data packets is greater than or equal to , the node contends for the channel to try to send data packets to its parent node. If the node accesses the channel successfully, it sends all the packets in its queue and then goes to sleep. Otherwise, it goes to sleep and does not become active again until either threshold is reached or a child node sends data to it.
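The trigger condition can be summarized by the small sketch below, where queue_len, oldest_wait, L_q, and T_w are illustrative names for the queue length, the waiting time (in slots) of the oldest packet, and the two thresholds.

def should_contend(queue_len: int, oldest_wait: int, L_q: int, T_w: int) -> bool:
    # Contend for the channel once either threshold is reached.
    return queue_len >= L_q or oldest_wait >= T_w

# Example: with L_q = 5 and T_w = 3 (the values used in Section 4.3), a node
# holding 2 packets whose oldest packet has waited 3 slots starts contending
# even though its queue is still short.
print(should_contend(2, 3, L_q=5, T_w=3))   # True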

4.3. Illustration of RS-CPR Scheme

As shown in Figure 14, we assume that the packet queue length threshold  is 5 and the packet maximum waiting time threshold  is 3 slots. In the initial state, each node in the fifth layer has one data packet. However, since the packet queue length of every node is below the threshold , no node sends packets in slots 1–3.

In slot 4, for  and ,  is the node with the highest weight in their forwarding node sets, so  and  contend for the channel to send packets to . No collision occurs, and  wins the competition, so  has one packet in slot 4.  sends a packet to ; at this time, the number of data packets owned by  is 1. For  and ,  is the node with the highest weight in their forwarding node sets, so  and  compete for the channel to send packets to . No collision occurs, and  wins the competition, so the number of data packets owned by  is 1.  sends a data packet to , so the number of data packets owned by  in slot 4 is 1. For , , and ,  is the node with the highest weight in their forwarding node sets, so , , and  contend for the channel to send data to , but a collision occurs, so  does not receive data in slot 4.  sends a packet to , so  has 1 packet at this time.

In slot 5,  accesses the channel successfully, so it sends a data packet to ; the number of data packets owned by  is 2.  sends data to , and the number of data packets owned by  in slot 5 is 2. , , and  compete for the channel again, and  successfully sends a data packet to  in slot 5. Therefore, at this time,  has one data packet.

In slot 6,  and  compete for the channel again, and  succeeds, so  sends a packet to . The other nodes that have packets have not reached the thresholds, so they continue to wait.

In slot 7, the waiting time of the packets in , , , , , and  reaches the waiting time threshold, so their data packets need to be sent to their parent nodes. For  and ,  is the node with the highest weight in their forwarding node sets, so they contend for the channel to send data to . In slot 7,  competes successfully and sends its first packet to .  sends its first data packet to ; at this time, the number of data packets owned by  is 1. For ,  is the node with the highest weight in its forwarding node set, so  sends a data packet to , and at this time  has one packet.  sends a packet to , so  has 3 packets in slot 7.  sends a packet to , so  has 1 packet.

In slot 8, sends its second packet to , so has 2 packets. sends its second packet to , at which point has 2 packets. For , is the node with the highest weight in its forwarding node set, so sends its first data packet to in slot 8, and has 2 data packets.

In slot 9,  sends a packet to , so  has 3 packets at this time.  sends its second packet to , so  has 3 packets in this slot.

In slot 10, the waiting time of the packets in , , and  reaches , so they have to send packets. For , , and ,  is the node with the highest weight in their forwarding node sets, so , , and  contend for the channel to send data to . However, a collision occurs, so no data transmission takes place.  sends its third data packet to , at which time  has 4 data packets.

In slot 11, , , and  contend for the channel again.  accesses the channel successfully and sends its packet to , so  has 1 packet. In this slot,  sends its first packet to .

In slot 12,  and  compete for the channel again;  competes successfully and sends its first data packet to , at which time  has 2 data packets.  sends its second packet to , at which point  has 2 packets. In slot 13,  sends its second data packet to , at which point  has 3 data packets.  sends its third packet to , at which point  has 3 packets.

In slot 14, sends its third packet to , then has 4 packets. sends its fourth packet to , at which point has 4 packets.

In slot 15,  sends its first packet to , so  has 5 packets.  sends its first packet to sink. In slot 16,  sends its second data packet to , at which point  has 6 data packets.  sends its second packet to sink. In slots 17 and 18,  sends its third and fourth packets to sink, respectively. In slots 19 to 24,  sends its remaining packets to sink.

In this example, it takes 24 slots to send all the data packets from the leaf nodes to sink, there are 22 active nodes, and 2 collisions occur. The data transmission process is shown in Table 4, where  refers to the node receiving the th packet from node  in this slot,  refers to the node sending its th packet to node  in this slot, and  refers to the number of data packets currently owned by the node. The first column of Table 4 is the current slot; for example, S1 refers to slot 1. The energy consumption of each node is shown in Table 5.

4.4. Illustration of RS-CPR-TSFLN

Based on the RS-CPR scheme, we propose an optimized strategy named RS-CPR-TSFLN, where TSFLN means that the Thresholds are Small in the Far-sink area and Large in the Near-sink area. Since the nodes far from sink need more hops to deliver their packets to sink, excessive delay occurs during data transmission. Therefore, we adjust  and  of the nodes far from sink to reduce the delay of data transmission.

In the example shown in Section 4.3, we calculated that the average energy consumption of each node in the fifth layer is only 801.1, that of each node in the fourth layer is 1174.65, and that of each node in the third layer is 1604.95. The average energy consumption of each node in the second layer is 2301.75, as shown in Figure 18. It can be seen that the nodes farther from sink consume less energy, so when the first node dies, the nodes far from sink still have energy left.

Theoretically, we assume that there are n layers of nodes in the network, that on average  nodes compete for the channel to transmit data to the same parent node in the i-th layer (), that each node generates T data packets on average, and that the average number of packets each node owns is . For a leaf node in the n-th layer, the packets it owns are only the T packets it generates itself. The nodes in the (n-1)-th layer, in addition to the data packets they generate themselves, also forward the data packets sent from the n-th layer nodes, so the average number of packets owned by each node is ; continuing in this way, the number of packets owned by a node in the i-th layer () is . It can be seen that the nodes which are farther away from sink hold fewer packets, so their energy consumption is lower, while the nodes which are closer to sink have more packets to forward, so more energy is consumed. Therefore, we set  and  of the nodes far from sink to smaller values, which reduces the unnecessary waiting time of the packets and puts the remaining energy to use for data transmission.
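As a minimal illustration of this layer-by-layer growth, assume that every parent node has m children on average and that each node generates T packets; m, T, and D_i below are assumed notation, not necessarily the symbols used in the paper. The average number of packets D_i owned by a node in the i-th layer then satisfies

D_n = T, \qquad D_i = T + m\,D_{i+1} \quad (1 \le i < n),

which gives the closed form

D_i = T \sum_{k=0}^{n-i} m^{k} = T\,\frac{m^{\,n-i+1} - 1}{m - 1} \quad (m > 1),

so the packet load, and hence the energy consumption, grows geometrically from the leaf layer toward sink.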

For the example described in Section 4.3, we can adjust  and  of the nodes in the fifth and fourth layers to 1 and 1, respectively, those of the nodes in the third layer to 2 and 2, and those of the nodes in the second layer to 3 and 3. After this adjustment, the data transmission from the nodes in the fifth layer to sink requires a total of 17 slots. Besides, the transmission delay of some data packets is greatly reduced after optimization; for example, a packet from  needs 22 slots to reach sink before optimization but only 8 slots after optimization. The state of each node in every slot after optimization is shown in Table 6.
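For reference, the layer-dependent thresholds used in this example can be written as a small lookup table. The function name, the tuple order (queue length threshold first, waiting time threshold second), and the default values taken from Section 4.3 are illustrative only.

# Layer-dependent thresholds for the RS-CPR-TSFLN example above:
# (packet queue length threshold, maximum waiting time threshold).
TSFLN_THRESHOLDS = {
    5: (1, 1),   # far-sink layers: small thresholds, forward almost immediately
    4: (1, 1),
    3: (2, 2),
    2: (3, 3),   # near-sink layer: larger thresholds, aggregate more packets
}

def thresholds_for_layer(layer: int):
    return TSFLN_THRESHOLDS.get(layer, (5, 3))   # defaults from Section 4.3

print(thresholds_for_layer(4))   # (1, 1)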

5. Performance Analysis and Optimization

In this section, we analyze the performance of these strategies. Using the example described in Section 4.3, we illustrate the superiority of the RS-CPR strategy by listing the performance of typical nodes under different strategies. Figures 19–22, respectively, show the number of times the nodes contend for the channel during data transmission, the number of packets successfully transmitted per competition, the number of times the nodes sleep, and the number of active slots of the nodes. As can be seen from these figures, the RS-CPR strategy greatly reduces the number of times a node competes for the channel and its number of active slots, thereby reducing delay and energy consumption. In Sections 5.1 and 5.2, we theoretically derive the formulas for network delay and energy consumption and give experimental results under the different strategies. In Section 5.3, we analyze the number of collisions and the throughput under the different strategies, further demonstrating the superiority of the RS-CPR strategy.

5.1. The Analysis of Delay
5.1.1. Formula Derivation

Theorem 1. We define  as the probability that a successful data transmission occurs in a slot when there are  nodes competing for the channel. It is assumed that there are n layers of nodes in the network and that there are on average  nodes in the i-th layer (), among which every  nodes contend for the channel to transmit data to the same parent node; each node itself generates  packets on average, and the average number of packets each node owns is . Thus, the delay of the n-th layer is
The delay of the i-th layer () is

Proof. We first analyze the nodes in the n-th layer and the (n-1)-th layer. The nodes in the n-th layer are leaf nodes and do not need to receive data packets, so the packets they own are only those they generate themselves. Every  nodes need to compete for the channel to send data to the same parent, so the number of slots required for all nodes in the n-th layer to send all their data packets to the nodes in the (n-1)-th layer is . Then,
It should be noted that the nodes in the (n-1)-th layer receive data packets from the n-th layer in addition to the data packets they generate themselves, so the average number of data packets owned by a node in the (n-1)-th layer is . If the number of data packets received by a node in the (n-1)-th layer is greater than or equal to the packet queue length threshold , or the waiting time of the packets is greater than or equal to , then the nodes in the (n-1)-th layer send their packets to the nodes in the (n-2)-th layer immediately. At this time, the number of slots required for the nodes in the (n-1)-th layer to send all their packets to the (n-2)-th layer is , and then
If the number of received data packets is less than  and the waiting time of the data packets is less than , there is extra waiting time. At this time,
In summary,
Extending to the whole network, we assume that the delay of the i-th layer is . Then
where
The delay of the whole network is
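To make the layer-by-layer structure of this argument concrete, the sketch below computes a rough delay estimate under simplified, purely illustrative assumptions: every group of m contenders needs about 1/p_s contention slots per winner, each winner then empties its whole queue, and groups and layers transmit one after another. The names m, p_s, T, and N are assumptions of this sketch, and the expression is not the paper's exact formula.

def packets_per_node(T: float, m: int, n_layers: int):
    # D[i] = T + m * D[i + 1], with layer 1 nearest sink and layer n the leaves.
    D = [0.0] * (n_layers + 1)
    D[n_layers] = T
    for i in range(n_layers - 1, 0, -1):
        D[i] = T + m * D[i + 1]
    return D

def estimated_delay(N, m: int, T: float, p_s: float) -> float:
    # N[i] = average number of nodes in layer i (1-indexed; N[0] is unused).
    n_layers = len(N) - 1
    D = packets_per_node(T, m, n_layers)
    total = 0.0
    for i in range(n_layers, 0, -1):          # from the farthest layer inward
        groups = N[i] / m                     # groups sharing the same parent
        contention = m * (1.0 / p_s)          # slots spent electing m winners
        data = m * D[i]                       # consecutive packet slots
        total += groups * (contention + data)
    return total

# Example: 3 layers with 8, 4, and 2 nodes (layer 3 is the leaf layer).
print(round(estimated_delay([0, 2, 4, 8], m=2, T=1, p_s=0.8), 1))   # 51.5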

5.1.2. The Effect of Variables on Delay

Many environment variables have an impact on network delay. In this paper, the main parameters are the number of packets generated by each node , the number of nodes , the packet maximum waiting time threshold , and the packet queue length threshold .

The effect of the average number of packets generated by each node, , on the delay is shown in Figure 23. Longitudinally, the relationship between  and the delay is approximately linear and increasing, because the more data packets there are, the more slots are needed to complete the transmission. Horizontally, among the four strategies, RI-WuR MAC with the traditional routing scheme (TRS) has the highest delay, while RI-CPT-WuR MAC with the RS-CPR-TSFLN scheme has the lowest delay. This is because, under the traditional routing scheme, the child nodes randomly select a parent node as the relay node, which increases the number of active nodes and the probability of collision, so more slots are needed for data transmission. According to the characteristics of the RI-WuR MAC protocol, as long as there are data packets in the queue that need to be sent, the node competes for the channel in every slot and sends only one data packet when it accesses the channel successfully. Therefore, as the number of data packets increases, the probability of collision and the delay both increase.

In summary, RI-CPT-WuR remedies the shortcomings of the RI-WuR protocol. After a successful competition, the node sends all of its data packets at one time; after the transmission, it goes to sleep and no longer participates in the competition. Compared with the RI-WuR protocol, the number of nodes participating in the competition in each slot is reduced and the probability of collision decreases, so the delay of RI-CPT-WuR with the traditional routing scheme is smaller than that of RI-WuR. When the RS-CPR strategy is used, the data packets gather on a small number of nodes, so fewer nodes participate in the competition than under the traditional routing scheme, which further reduces the probability of collision; therefore, the delay of RI-CPT-WuR with RS-CPR is lower than that with the traditional routing scheme. After the RS-CPR strategy is optimized,  and  of the nodes farther from sink are adjusted, which removes unnecessary waiting time of the packets in the distant nodes, so the delay of RI-CPT-WuR with RS-CPR-TSFLN is the lowest. The delay of the four strategies as  changes is shown in Figure 23.

The effect of the average number of nodes per layer, , on the delay is shown in Figure 24. Longitudinally, the relationship between  and the delay is approximately linear and increasing. This is because the more nodes each layer has, the more nodes compete for the channel and the greater the probability of collision, so the number of slots required for data transmission is also larger. Horizontally, similar to the above, among the four strategies, RI-WuR MAC with the traditional routing scheme (TRS) has the highest delay, while RI-CPT-WuR MAC with the optimized RS-CPR scheme has the lowest delay; the reasons are the same as above.

The effect of  on the delay is shown in Figure 25. As  increases, the delay of the three strategies increases linearly; the network delay is highest when using RI-WuR MAC with the traditional routing scheme and lowest when using RI-CPT-WuR MAC with the RS-CPR strategy. This is because the packets in a node must wait until  is reached before the node contends for the channel, so as  increases, the waiting time grows and the delay increases.

The effect of the packet queue length threshold  on the delay is shown in Figure 26. As  increases, the delay of the three strategies increases at first and then converges to a large value. This is because, once  increases beyond a certain point, the nodes start to contend for the channel as soon as the waiting time of their data packets reaches , no matter how large  is. Beyond that point, a further increase of  has no effect on the delay, so the main influencing factors are , , and .

5.2. The Analysis of Energy Consumption
5.2.1. Formula Derivation

Theorem 2. Assume that there are n layers of nodes in the network and that there are  nodes on average in the i-th layer (), where every  nodes compete for the channel to send data to the same parent. Each node generates  packets by itself on average. If the average number of packets each node owns is , then the total network energy consumption is

Proof. We first analyze the nodes in the n-th and (n-1)-th layers. In the n-th layer, every  nodes need to contend for the channel to send data to the same parent node, and there are  groups in total. For each group of child nodes, in the first competition the winner sends all of its own packets at one time, so the energy consumed by transmitting packets is , and the energy consumed by the failing nodes is , so the total energy consumed in the first round of competition is . In the second round of competition, the winner of the first round no longer participates. The energy consumed by the new winner to send its data is , and the energy consumed by the failing nodes is . When this group of nodes has sent all its packets, the total energy consumption is . Therefore, the total energy consumed by the nodes in the n-th layer is . The nodes in the i-th (1 < i < n) layer additionally consume energy  to collect packets from their child nodes. Similar to the nodes in the n-th layer, the energy required for the i-th (1 < i < n) layer to send all its packets is . Therefore, the total energy consumed by the nodes in the i-th (1 < i < n) layer is
In addition, the energy consumed by sink to receive the packets sent from the nodes in the 2nd layer is .
Therefore, the energy consumption of the whole network is
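As an illustration of this accounting, the sketch below tallies the energy of one layer under assumed per-packet costs e_tx and e_rx and a per-slot overhearing cost e_cca for contenders that lose a round. All names and values are illustrative, not the paper's exact expressions.

def group_send_energy(m: int, packets_per_node: float,
                      e_tx: float, e_cca: float) -> float:
    # Round k: one winner transmits its whole queue; the contenders that
    # have not yet won spend one overhearing slot each.
    energy = 0.0
    for remaining in range(m, 0, -1):
        energy += packets_per_node * e_tx        # winner sends all its packets
        energy += (remaining - 1) * e_cca        # losers overhear one slot
    return energy

def layer_energy(n_nodes: float, m: int, packets_per_node: float,
                 received_packets: float, e_tx: float, e_rx: float,
                 e_cca: float) -> float:
    groups = n_nodes / m                         # groups sharing the same parent
    sending = groups * group_send_energy(m, packets_per_node, e_tx, e_cca)
    receiving = received_packets * e_rx          # collecting from child nodes
    return sending + receiving

# Example: a leaf layer of 8 nodes (nothing to receive), m = 2, one packet each.
print(round(layer_energy(8, 2, 1.0, 0.0, e_tx=1.0, e_rx=0.5, e_cca=0.1), 2))   # 8.4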

5.2.2. The Effect of Variables on Energy Consumption

The factors affecting energy consumption are mainly the average number of packets each node has and the average number of nodes each layer has .

The effect of  on the energy consumption is shown in Figure 27. Longitudinally, the larger  is, the more slots are needed to transmit the data packets and the more energy is needed to send and receive them. As can be seen from Figure 27, the relationship between  and the energy consumption is approximately linear and increasing. Horizontally, the energy consumption is highest when using RI-WuR MAC with the traditional routing scheme (TRS) and lowest when using RI-CPT-WuR MAC with the optimized RS-CPR scheme. As mentioned earlier, in RI-WuR the nodes that have packets to send must compete in every slot, so they constantly listen to the channel. By contrast, RI-CPT-WuR substantially reduces energy consumption by confining overhearing to just one slot [15], so its energy consumption is lower than that of RI-WuR. In addition, according to the characteristics of the RS-CPR scheme, the data packets gather on a small number of nodes, so the number of active nodes is much smaller than under the traditional routing scheme, thereby reducing energy consumption. After the RS-CPR strategy is optimized, the improvement in energy consumption is not very obvious; this is because the optimization mainly shortens the waiting time of the nodes far from sink, which are in the sleep state during that time anyway. The change in energy consumption with  under the four strategies is shown in Figure 27, and the change in network lifetime with  under the four strategies is shown in Figure 28.

The effect of the average number of nodes per layer, , on the energy consumption is shown in Figure 29. From a vertical perspective, the relationship between  and the energy consumption is also approximately linear and increasing. This is because the more nodes each layer has, the more nodes participate in the competition and the more energy is consumed in listening to the channel. In terms of horizontal comparison, RI-WuR MAC with the traditional routing scheme (TRS) has the highest energy consumption among the four strategies, and the energy consumption is lowest when using RI-CPT-WuR MAC with the RS-CPR-TSFLN scheme; the reasons are the same as above. The change in network lifetime with  for the four strategies is shown in Figure 30.

5.3. Collisions and Throughput

The factors affecting the number of conflicts are mainly the number of packets each node has and the average number of nodes each layer has .

The effect of  on the number of collisions is shown in Figure 31. Longitudinally, the number of collisions in RI-WuR with the traditional routing scheme is affected most by  and increases linearly as  grows. This is because nodes with data packets compete in every slot and send only one data packet at a time, so the more packets there are, the greater the probability of collision. One of the characteristics of the RI-CPT-WuR protocol is that the winner sends all of its data packets in the subsequent slots after a successful competition; therefore, in the RI-CPT-WuR protocol, a change in  does not affect the number of collisions. Moreover, there are fewer active nodes under the RS-CPR strategy than under the traditional routing scheme, so the probability of collision is smaller. The number of collisions as  changes is shown in Figure 31.

The effect of the average number of nodes per layer, , on the number of collisions is shown in Figure 32. Vertically, the number of collisions increases with . This is because the larger  is, the more nodes are involved in the competition and the greater the probability of collision. Horizontally, the number of collisions in RI-WuR with the traditional routing scheme is the highest, while that in RI-CPT-WuR with the RS-CPR scheme is the lowest. The traditional routing strategy makes the child nodes randomly pick a parent node to send packets to, so the number of active nodes is greater than that in RS-CPR and the probability of collision increases.

6. Conclusion

In this paper, a relay selection joint consecutive packet routing (RS-CPR) scheme is proposed to reduce channel competition conflicts and the energy consumption of nodes, thereby improving the network throughput and reducing the end-to-end delay for WuR-enabled WSNs. The main idea of the RS-CPR strategy is a comprehensive evaluation index that combines the number of data packets, the distance to sink, and the residual energy. A node selects the node with the highest weight in its forwarding node set as the relay node so that the data packets gather on a small number of nodes, which reduces delay and energy consumption. We also propose an optimization of the RS-CPR strategy, which adjusts the two thresholds of the nodes far from sink to small values, thereby reducing the unnecessary waiting time of packets at the distant nodes. The RS-CPR scheme combines relay selection with consecutive packet routing, which greatly improves the performance of the network. Our theoretical analysis and experimental results show that, compared with the receiver-initiated consecutive packet transmission WuR (RI-CPT-WuR) scheme and the RI-WuR protocol, the RS-CPR scheme reduces the end-to-end delay by 45.92% and 65.99%, reduces the number of collisions by 51.92% and 76.41%, and reduces the energy consumption by 61.24% and 70.40%, respectively. At the same time, the network throughput increases by 47.37% and 75.02%.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (61572528, 61772554, and 61572526).