Abstract

In wireless sensor networks, an improved throughput capacity region can be achieved by exploiting multiple channels. However, such an approach inevitably raises the coupled channel assignment and scheduling problem. This paper puts forward a low-complexity distributed channel assignment and scheduling policy, named LDCS, for multichannel wireless sensor networks with single-hop traffic flows, together with its multihop multipath extension. In the proposed algorithms, random access and backoff time techniques are introduced to keep the complexity low and independent of the number of links and channels. Theoretical analysis and simulation experiments show that the proposed algorithms are throughput guaranteed and that, in some network scenarios, the achieved capacity region is larger than that of other comparable distributed algorithms.

1. Introduction

In wireless networks, datagrams are transmitted from source to destination through a routing mechanism in the network layer combined with scheduling in the MAC layer. When the paths of all users are fixed, scheduling policies play a key role in using the limited bandwidth effectively to achieve better throughput performance. A substantial body of work addresses improving the throughput capacity region through the design of efficient scheduling algorithms. It has been demonstrated that the maximum capacity region can be guaranteed by throughput-optimal scheduling algorithms such as max-weight scheduling [1] for single-path traffic and back-pressure scheduling [2] for multipath cases. However, such policies require centralized management, which severely limits their applicability: a management node or base station must collect flow information and exercise global control, making them difficult to implement in many network scenarios such as wireless sensor networks (WSNs) and mobile ad hoc networks (MANETs).

To enhance adaptability, a series of distributed maximal scheduling (MS) algorithms that are more practical have been proposed under various interference models [3, 4]. Here, "distributed" means that each node decides whether to forward data based on its own state and on information obtained from its neighborhood, without centralized control. MS has been proved to be throughput guaranteed; that is, it achieves at least a certain fraction of the maximum throughput capacity region. Even so, such maximal-matching-type algorithms must compute a maximal matching in every time slot, which requires a number of iterations growing logarithmically with the number of links in the system [5] and consequently incurs nontrivial implementation complexity. Given this drawback, designing low-complexity distributed scheduling policies merits further research. The authors in [6] presented the distributed Q-SCHED algorithm based on random access and backoff time techniques [7], whose throughput performance can be arbitrarily close to that of MS. The complexity of Q-SCHED is low and independent of the network size, which makes it attractive for large-scale network systems.

The achievements mentioned above all concern single-channel settings. In fact, multichannel technology can significantly improve the throughput performance of WSNs. The IEEE 802.11 family already provides the spectrum resources for multichannel operation: 802.11a offers 12 nonoverlapping channels in the 5 GHz band, and 802.11b/g offers 3 channels in the 2.4 GHz band. In multichannel environments, channel assignment and scheduling are coupled and jointly affect the throughput performance [8–10], which poses a technical challenge for designing distributed algorithms that solve the coupled channel assignment and scheduling problem. In-depth study shows that many single-channel scheduling algorithms cannot be extended straightforwardly to multichannel networks, because such extensions may lead to poor throughput performance. Owing to channel diversity, designing a distributed channel assignment and scheduling strategy for multichannel networks is considerably more complicated than for single-channel scenarios. A number of results on this problem exist, but open issues remain, most notably implementation complexity: using multiple channels inevitably increases the implementation complexity of distributed algorithms.

In view of the facts discussed above, the goal and main contribution of this paper is to devise low-complexity distributed algorithms for multichannel wireless networks. We present a channel assignment and scheduling algorithm, named LDCS, for networks with single-interface nodes. To keep the complexity low, LDCS extends the idea of the above-mentioned Q-SCHED to the multichannel setting. We prove theoretically that LDCS is throughput guaranteed and that its achieved capacity region can be larger than that of other comparable distributed algorithms in a variety of scenarios. We then extend the policy to multihop flows with multipath routing. Simulation experiments verify the theoretical analysis.

2. Related Work

In multichannel scenarios, it is truly difficult to guarantee throughput performance (characterized by the capacity region) in a distributed manner. The objective is to keep the system stable, with finite queue lengths, for every arrival-rate vector inside a given capacity region. Because of channel diversity, the scheduling algorithm must be combined with data flow allocation and channel assignment. Toward this direction, a provably efficient algorithm (referred to therein as SP) and its multipath extension MP were developed in [9] for multichannel multi-interface (MC-MI) networks. SP and MP were proved to be throughput guaranteed, which implies that they can ensure a certain fraction of the maximum capacity region. In SP and MP, each link maintains multiple per-channel queues as well as one common link queue. Arriving packets first enter the common link queue and are then assigned to the channels by a data flow allocation mechanism, as illustrated in Figure 1. Based on the per-channel queues, maximal scheduling is then carried out to determine the set of forwarding queues. This relay-and-forward process degrades system performance. Moreover, to achieve a reliable throughput capacity region under SP or MP, each link collects queue length and channel rate information from its neighborhood before allocating data flow, which inevitably incurs extra overhead. In this paper, we adopt a more concise way to configure data traffic effectively.

Besides SP and MP, to solve the coupled resource allocation problem systematically in MC-MI networks, Cheng et al. proposed another method named tuple-based MS [11]. A novel model was put forward in [11] under which the original nodes, interfaces, and channels are transformed into multiple node-radio-channel tuples. A simple example is shown in Figure 2, where an original link between two nodes is mapped to a tuple link between the corresponding node-radio-channel tuples. This framework enables the single-channel MS algorithm to be extended straightforwardly to MC-MI wireless networks with guaranteed throughput performance. Compared with SP, tuple-based MS has been shown to ensure a larger capacity region with lower average backlog. However, SP, MP, and tuple-based MS all have to run a maximal matching process in every time slot, which incurs significant complexity: obtaining a maximal schedule requires a number of iterations that grows logarithmically with the numbers of links and channels in the network [12]. To reduce this complexity, we employ random access and backoff time techniques.

In recent years, a series of results on channel allocation or scheduling in multichannel networks have emerged from other perspectives. The authors of [13] developed the DES-Chan framework for distributed channel assignment; however, that framework does not address scheduling or analyze throughput performance. Other efforts target OFDM-based multichannel downlink relay networks [14–16], but in OFDM-based systems each channel can be occupied by only one link per time slot because of opportunism. By comparison, our results apply to more general scenarios in which different links can operate on the same channel provided that they do not interfere with each other. In addition, some appealing results focus on centralized schemes with low delay or complexity [17, 18]; in contrast, this paper considers only online distributed operation, which is more suitable for distributed WSNs.

The rest of the paper is organized as follows. System model and notations are introduced in Section 3. In Section 4, we propose the LDCS algorithm and prove that it can guarantee a certain fraction of the maximum throughput capacity region with low complexity. We next extend our results to multihop multipath cases in Section 5. Simulation results are given in Section 6. Finally, we conclude in Section 7.

3. System Model and Notations

We consider a single-interface multichannel wireless network in which each node is equipped with one interface that can switch among channels dynamically when necessary. The network comprises a set of links and a set of available nonoverlapping channels; for any set S, |S| denotes its cardinality. With single-hop data flows, each link represents a transmission path for a source-receiver pair, and packets leave the system once they reach the destination. Time is slotted and synchronized across all links in the network. In each time slot, the channel assignment and scheduling policy jointly decides the set of scheduled links and the channels they occupy. Relevant definitions and terminology are introduced next.

Under the collision model, each link l is associated with an interference set consisting of the links that interfere with l when operating on the same channel. If l and another link in its interference set operate on the same channel simultaneously, neither of them transmits successfully. The interference relationship is assumed to be symmetric: a link k interferes with l if and only if l interferes with k. The link interference degree is defined as the maximum number of links within the interference set of any link that can be scheduled simultaneously on a given channel. Owing to the single-interface configuration, adjacent links cannot transmit simultaneously even if they are scheduled on different channels. Therefore, the interference set of l can be split into two mutually exclusive subsets: the interface interference set, composed of the links adjacent to l (including l itself), and the channel interference set, containing the remaining interfering links. Accordingly, the interface interference degree is the maximum number of links within the interface interference set of any link that can be scheduled at the same time, and the channel interference degree is the maximum number of links within the channel interference set of any link that can be scheduled simultaneously on arbitrary channels.
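
To make these definitions concrete, the following minimal sketch (our own toy construction; the labels I_l, E_l, and O_l and the example interference relation are not from the paper) splits the interference set of a link into its interface and channel interference subsets:

```python
from typing import Dict, Set, Tuple

Link = Tuple[str, str]  # (transmitter, receiver) node identifiers

def split_interference_set(l: Link,
                           interference: Dict[Link, Set[Link]]) -> Tuple[Set[Link], Set[Link]]:
    """Return (E_l, O_l): links sharing a node with l (incl. l) vs. purely channel-interfering links."""
    i_l = interference[l] | {l}
    e_l = {k for k in i_l if set(k) & set(l)}   # adjacent links cannot transmit together (one interface)
    o_l = i_l - e_l                             # remaining links conflict only via channel reuse
    return e_l, o_l

# Toy example: a path a-b-c-d plus a distant link (e, f) that interferes
# with (b, c) only when both operate on the same channel.
interference = {
    ("a", "b"): {("b", "c")},
    ("b", "c"): {("a", "b"), ("c", "d"), ("e", "f")},
    ("c", "d"): {("b", "c")},
    ("e", "f"): {("b", "c")},
}
print(split_interference_set(("b", "c"), interference))
```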

The transmission rate (or capacity) of link l on channel c is denoted by r_{l,c}, and all rates are assumed to be finite. Owing to channel diversity, the rate of a link generally differs from channel to channel. Denote by A_l(t) the number of arrivals at link l in time slot t; the arrival process of link l is then the sequence of these arrivals over time. We make the simple assumption that the arrival process is i.i.d. across time with a finite mean; our method can be extended to more general arrival processes, but for simplicity this paper focuses on the i.i.d. case. To store data packets, each link l maintains one channel queue (l, c) for every channel c. From the description above, the set of channel queues that interfere with a given queue (l, c) is determined directly by the interface and channel interference sets of link l.

Denote by Q_{l,c}(t) the length of queue (l, c) at time slot t. Arriving packets are allocated immediately to the channel queues by a data flow allocation algorithm; let A_{l,c}(t) denote the number of datagrams assigned to queue (l, c) in slot t, whose expectation is used in the analysis below. The queue length then evolves as

Q_{l,c}(t + 1) = max{Q_{l,c}(t) + A_{l,c}(t) − D_{l,c}(t), 0},

where D_{l,c}(t) denotes the number of datagrams transmitted from queue (l, c) in slot t under the channel assignment and scheduling algorithm: D_{l,c}(t) = r_{l,c} if queue (l, c) is scheduled at t and D_{l,c}(t) = 0 otherwise. For convenience, we use the term "queue" for "channel queue" in what follows. We say that the network system is stable if, for every queue and every positive tolerance, there exists a constant such that the probability that the queue length exceeds this constant is ultimately smaller than the tolerance [19]. System stability implies that the network can carry the current input load. In particular, the system is stable with all queue lengths remaining finite if the Markov chain formed by the queue lengths is positive recurrent.
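
A minimal sketch of this per-queue update, using our own dictionary layout (q, a, and d play the roles of the queue lengths, allocated arrivals, and per-channel service amounts):

```python
def update_queues(q, a, d, scheduled):
    """One-slot evolution: a channel queue drains only if its (link, channel) pair is scheduled."""
    for l in q:
        for c in q[l]:
            served = d[l][c] if (l, c) in scheduled else 0
            q[l][c] = max(q[l][c] + a[l][c] - served, 0)
    return q

q = {"l1": {"ch1": 3, "ch2": 0}}          # current backlogs
a = {"l1": {"ch1": 1, "ch2": 2}}          # arrivals allocated this slot
d = {"l1": {"ch1": 2, "ch2": 2}}          # channel rates (packets/slot) if scheduled
print(update_queues(q, a, d, scheduled={("l1", "ch1")}))   # {'l1': {'ch1': 2, 'ch2': 2}}
```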

The capacity region achieved by a particular channel assignment and scheduling algorithm is defined as the set of input load vectors under which that algorithm stabilizes the network system, where the input load is the vector of mean arrival rates. As is well known, the maximum capacity region can be obtained by throughput-optimal scheduling [17] in a centralized way. Under a distributed algorithm, if the queue lengths remain finite for every input load within a constant fraction of the maximum capacity region, then that fraction is called the efficiency ratio of the algorithm. The key variables used in this paper are summarized in Table 1.

4. LDCS and Performance Analysis

In LDCS, packets are assigned to channel queues immediately upon arrival by our data flow allocation mechanism, without an extra relay-forwarding common queue, which reduces transmission delay [17]. For channel assignment and scheduling, LDCS employs random access and backoff time techniques (the idea of the above-mentioned Q-SCHED) instead of executing a maximal matching process. The LDCS algorithm consists of the two parts given below.

4.1. Data Flow Allocation

For data flow allocation, a rate-proportional policy is designed: in expectation, the arrivals of a link are split among its channel queues in proportion to the corresponding channel rates, and this can be realized with a simple probabilistic approach. It is easy to see that the resulting per-queue arrival processes are also i.i.d. across time slots. The allocation is performed locally and is illustrated in Figure 3.
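
One probabilistic realization of the rate-proportional rule is sketched below (an assumption on our part, since the paper's exact formula is not reproduced here: each arriving packet of a link joins channel queue c with probability proportional to that channel's rate):

```python
import random

def allocate_arrivals(num_arrivals, rates):
    """Split one link's arrivals across its channel queues in proportion to the channel rates."""
    channels = list(rates)
    total = float(sum(rates.values()))
    alloc = {c: 0 for c in channels}
    for _ in range(num_arrivals):
        c = random.choices(channels, weights=[rates[ch] / total for ch in channels])[0]
        alloc[c] += 1
    return alloc

# e.g. four channels with rates 1, 1, 2, and 2 packets/time slot, as in the examples of this paper
print(allocate_arrivals(6, {"ch1": 1, "ch2": 1, "ch3": 2, "ch4": 2}))
```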

In LDCS, a link requires no information from other links during the data flow allocation phase, which effectively reduces overhead compared with the existing SP policy [9].

4.2. Channel Assignment and Scheduling

After data flow allocation, LDCS uses its channel assignment and scheduling mechanism to decide whether each queue should be scheduled. Following the idea of Q-SCHED [6], the random access and backoff time techniques are employed. Specifically, each time slot is divided into two subslots, a scheduling slot and a transmission slot, each of fixed length; the scheduling slot is further divided into minislots. This time slot division is shown in Figure 4. Under the backoff time technique, each queue randomly selects a backoff minislot according to the probability distribution in (5), whose parameter is computed from the queue backlog as in (6). A queue that draws the idle value specified in (5) stays silent for the current time slot. Rather than waiting for the entire scheduling slot, a queue proceeds to transmit in the transmission slot as soon as its selected backoff expires, provided that no queue in its interference set has picked a smaller backoff time. If two or more interfering queues select the same minislot, all of them collide and none of them transmits successfully.
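
The contention pattern can be sketched as follows (illustration only: the true backoff distribution is the one in (5) and (6); here a simple backlog-weighted draw over M = 32 minislots, our own parameter, stands in for it, and `interference` maps each queue to its interfering queues):

```python
import random

def schedule(queues, interference, M=32):
    """Pick a backoff minislot per nonempty queue; a queue wins if all interferers chose later slots."""
    backoff = {}
    max_q = max(queues.values()) or 1
    for q, backlog in queues.items():
        if backlog == 0:
            continue                       # empty queues stay idle this slot
        bias = backlog / max_q             # longer queues are pushed toward earlier minislots
        backoff[q] = 1 + int((1 - bias) * (M - 1) * random.random())
    winners = set()
    for q, b in backoff.items():
        rivals = [backoff[k] for k in interference[q] if k in backoff]
        if all(b < rb for rb in rivals):   # equal minislots collide, so strict inequality is required
            winners.add(q)
    return winners

queues = {"q1": 5, "q2": 2, "q3": 0}
interference = {"q1": {"q2"}, "q2": {"q1"}, "q3": set()}
print(schedule(queues, interference))
```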

4.3. Stability Region

For the purpose of analyzing stability, we introduce two lemmas and construct a Lyapunov function based on the channel-queue lengths.

Lemma 1. At each time slot t, there exists a constant such that any queue whose backlog exceeds the stated threshold is scheduled by the LDCS policy with at least a fixed probability determined by the constants in (5) and (6). The proof of Lemma 1 is omitted here; it can be derived easily from (5) and (6) along the lines of the proof of Lemma 1 in [6].

Lemma 2. Under LDCS, if the input load of every queue satisfies the stated condition, then there exist an integer T and a constant such that the corresponding bound holds for every queue and every time slot. We present the proof in the appendix. Based on the two lemmas above, one can derive the following theorem, which gives the condition for system stability.

Theorem 1. Under LDCS, if the input rates of all queues satisfy the condition of Lemma 2 with some positive margin, then the irreducible aperiodic discrete-state Markov chain formed by the queue lengths is positive recurrent. One can refer to [6] for the details of the proof of Theorem 1. Hence, we obtain the capacity region guaranteed by LDCS through the following proposition.

Proposition 1. The efficiency ratio of LDCS is given by (13), where one of the parameters therein is a positive number that may take an arbitrary value.

Proof. Based on the definitions in Section 3, the capacity region achieved by LDCS is the stated fraction of the maximum region if, for every input load that the centralized optimal algorithm can stabilize, LDCS keeps the system stable when that load is scaled by the efficiency ratio. Under our scenarios, a necessary condition for system stability is that, for each link, there exist per-channel scheduling fractions such that (15)-(17) hold [10]; inequality (15) comes from the rate constraint, while (16) and (17) represent the channel and interface restrictions, respectively. Setting a suitable parameter and replacing the efficiency ratio with the right-hand side of (13), we obtain from (4) and inequalities (15)–(17) that the condition of Theorem 1 is satisfied for all queues. Hence, Proposition 1 follows from Theorem 1.

4.4. Performance Analysis

From (13), one can infer that, as its tunable parameter is selected close to 0 and the number of minislots grows large, the efficiency ratio of the LDCS policy approaches its limiting value arbitrarily closely. The parameter can indeed be chosen close to 0, and because the scheduling slot has a fixed length, the number of minislots tends to infinity if the length of each minislot is made small enough through improved hardware. Therefore, it can be argued that the developed LDCS approximately achieves its limiting throughput capacity region.

Next, we compare the throughput performance of LDCS with that of other existing distributed algorithms. According to [11], the efficiency ratio of tuple-based MS is expressed in terms of the interference degree of the tuple-based equivalent model, and the efficiency ratio of the SP algorithm is given in [9]. One can conclude that, when the number of minislots is sufficiently large, the capacity region of LDCS is larger than that of tuple-based MS or SP whenever the interference degrees of the network scenario satisfy the corresponding condition. As an illustration, consider the simple topology in Figure 5 with single-hop flows under the 2-hop interference model: there are 14 nodes and 4 available channels whose rates equal 1, 1, 2, and 2 packets/time slot, respectively, for all links. It is not hard to verify that the efficiency ratio of LDCS equals 1/4 in this scenario and exceeds the ratios of tuple-based MS and SP. Therefore, in practice, LDCS obtains better throughput performance than SP and tuple-based MS as long as the network scenario satisfies the stated condition.

On the other hand, the complexity of LDCS is low and independent of the number of links and channels. In LDCS, each node computes its backoff time according to equations (5) and (6), which requires a number of iterations determined only by certain network parameters [12]; in most practical WSN scenarios, the complexity of LDCS is bounded because these parameters are bounded [8]. In contrast, recall that the complexity of the existing maximal-matching-based algorithms (SP and tuple-based MS) [9, 11] increases logarithmically with the numbers of links and channels. From the perspective of computation time, LDCS requires only one scheduling slot (a fixed number of minislots) to compute a schedule, irrespective of both the network size and the number of channels. As hardware performance improves, the length of a minislot can be made arbitrarily small, so that the length of the scheduling slot converges to a small positive number even as the number of minislots grows without bound. By contrast, for the maximal-matching-based algorithms, the computation time of each schedule is bound to increase with the numbers of links and channels. Therefore, in large-scale dense multichannel networks, the maximal-matching-based algorithms inevitably need more time than LDCS to complete a schedule because of their inherent maximal matching mechanism [6, 7]. At present, limited hardware performance prevents setting a minislot to an arbitrarily small value, and hence this paper does not quantitatively compare the computation time of LDCS with that of the maximal-matching-based methods.

It is worth mentioning that LDCS only ensures weak stability [19] of the system, so the delay experienced by packets may be unbounded, and that it is designed for multichannel wireless networks with single-interface nodes.

5. Multipath Extension

The discussion above focused on single-hop flows. For multihop flows with multiple paths, a routing selection problem must also be solved to ensure effectiveness and fairness, and the arrival process and the maximum capacity region are redefined as follows. Suppose the system comprises a set of users, each user having several alternative paths. Let the number of packets offered by a user in a time slot be i.i.d. across time slots with a finite mean, and define a routing indicator that equals 1 if a given path of a user passes through a given link and 0 otherwise. The arrivals at a link in a time slot are then obtained by summing, over all users and paths, the packets that each user assigns to each of its paths traversing that link, where the assignment is governed by the fraction of the user's packets routed onto that path in that slot. The extended maximum throughput capacity region for multihop multipath scenarios is redefined as the set of user arrival-rate vectors for which there exist, for every user and path, long-term average path fractions satisfying condition (20), where these fractions can be interpreted as the long-run share of traffic assigned to each path and the region on the right-hand side is the link-based capacity region defined in Section 3. We now combine the MAC-layer LDCS policy with a routing selection mechanism so that the resulting joint routing, channel assignment, and scheduling algorithm, called M-LDCS, guarantees a certain fraction of the extended maximum capacity region. The same data flow allocation strategy as in LDCS is used, and the joint algorithm proceeds in the following two steps.

Step 1. At each time slot, each source first computes its path fraction vector by solving the optimization problem (21), which involves a positive constant weighting parameter. The transmission path is in effect selected by (21): when a data packet is generated by a user, it is assigned to a path with probability equal to the corresponding fraction.
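
A minimal sketch of this per-packet path choice (our own function and names; the fractions themselves are assumed to have been produced by (21)):

```python
import random

def pick_path(fractions):
    """fractions: {path_id: fraction}, nonnegative and summing to 1; returns the chosen path."""
    paths = list(fractions)
    return random.choices(paths, weights=[fractions[j] for j in paths])[0]

# e.g. three alternative paths per user, as in the simulations of Section 6
print(pick_path({"p1": 0.5, "p2": 0.3, "p3": 0.2}))
```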

Step 2. The same channel assignment and scheduling mechanism as in Section 4 is employed to determine the set of transmitting queues, and each queue is then updated in the same way as in the single-hop case.
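
For completeness, the following sketch (our own notation: x is the mean packet rate of each user, P the path fractions from Step 1, and H the links traversed by each path) shows how the routing decision induces the per-link loads constrained by the extended capacity region in (20):

```python
def induced_link_load(x, P, H):
    """Average per-link load implied by routing each user's traffic over its paths."""
    load = {}
    for s in x:
        for j, frac in P[s].items():
            for link in H[s][j]:
                load[link] = load.get(link, 0.0) + x[s] * frac
    return load

print(induced_link_load({"u1": 2.0},
                        {"u1": {"p1": 0.7, "p2": 0.3}},
                        {"u1": {"p1": ["l1", "l2"], "p2": ["l3"]}}))
```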

Note that the first term in (21) is included to avoid oscillation in the routing selection process [20]. The following proposition gives the efficiency ratio of M-LDCS.

Proposition 2. For any sufficiently small positive tolerance, there exists a choice of the algorithm parameters such that the joint routing and LDCS algorithm ensures the capacity region given in (23).

The proof is presented in the appendix.

6. Simulation Results

In this section, we evaluate the performance of the proposed algorithms through simulation experiments in ns-2 (version 2.31). The topology, shown in Figure 6, contains 36 nodes (circles), 60 links (dashed lines), and 4 available channels with rates equal to 1, 1, 2, and 2 packets/time slot, respectively. Each node is configured with a single interface that can switch freely among channels, and the 2-hop interference model is used in the simulations. To identify the throughput capacity region, we gradually increase the input rates (offered loads) and record the resulting average backlog; the unit of the input rate is packets/time slot. We first consider the single-hop scenario and compare the proposed LDCS with the existing SP and tuple-based MS policies. As illustrated in Figure 6, there are 16 traffic flows (arrows) in our environment, and the input rates of all flows are assumed identical. The average backlog is defined as the mean total backlog in the system divided by the number of traffic flows and is used to reflect the throughput performance of the evaluated algorithms. In the simulations, extra control overhead is omitted so that the throughput performance differences among the algorithms can be shown clearly.
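
The evaluation loop can be summarized by the sketch below (a schematic of our own, not the ns-2 scripts used in the experiments); the toy single-queue step function is included only to make the sketch runnable and shows the characteristic growth of backlog as the load approaches capacity (0.5 in the toy):

```python
import random

def sweep_offered_load(step, loads, num_flows, T=10000):
    """Run the slotted system for T slots at each load and report the mean backlog per flow."""
    results = {}
    for load in loads:
        state, total_backlog = None, 0
        for _ in range(T):
            state, backlog = step(state, load)
            total_backlog += backlog
        results[load] = total_backlog / (T * num_flows)
    return results

def toy_step(state, load):
    """One queue: Bernoulli(load) arrivals, Bernoulli(0.5) service, so capacity is 0.5."""
    q = 0 if state is None else state
    q = max(q + (random.random() < load) - (random.random() < 0.5), 0)
    return q, q

print(sweep_offered_load(toy_step, loads=[0.30, 0.45, 0.49], num_flows=1))
```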

Figure 7 shows the performance comparison of the three schemes. As the offered load increases, the queues, and hence the network, become increasingly congested, and the average backlog grows sharply toward infinity once the load approaches a certain value. This inflection point can be regarded as the boundary of the throughput capacity region achieved by the corresponding algorithm. Figure 7 also shows the superiority of the proposed LDCS in terms of throughput performance under our scenario, as expected. In fact, as analyzed in Section 4.4, LDCS guarantees a larger capacity region than SP and tuple-based MS provided that the interference-degree condition given there holds, and the topology used in the experiment is one of the scenarios that satisfy this condition.

We next investigate the performance of the multihop multipath extension (ME) of LDCS using the same topology. This time, the 16 single-hop flows are replaced by four source-destination pairs picked at random across multiple hops, and each user has three alternative paths. The source node selects a forwarding path for every data packet according to (21). We set the corresponding parameter to 10^3 so that the tolerance in Proposition 2 is small enough, and choose the weighting constant so that the path fraction vectors computed by (21) are not overly sensitive to queue length updates. In this simulation, the baselines are MP and the ME of tuple-based MS. In practice, the multipath extension amounts to a cross-layer control method: to guarantee throughput, the link layer and the network layer exchange information to solve a joint routing, channel assignment, and scheduling problem. Figure 8 displays the comparison of the three solutions. As expected, the ME of LDCS performs much better than MP, achieving a larger capacity region, which confirms the theoretical analysis. The capacity regions attained by the ME of LDCS and the ME of tuple-based MS are fairly close; moreover, for the same input rates, the ME of tuple-based MS achieves the lowest average backlog. The reason is that the ME of tuple-based MS additionally accounts for transmission delay and solves the cross-layer problem with a convex-optimization-based path selection technique. MP, by contrast, uses a two-stage queueing structure that first stores datagrams in a common queue and then relays them to the channel queues, which aggravates congestion and degrades performance.

It is worth noting that the numbers of interfaces and channels inevitably affect the system capacity region. This paper focuses solely on single-interface scenarios; at present, most deployed communication and sensor systems employ single-interface nodes.

7. Conclusions

This paper presents distributed algorithms for multichannel single-interface wireless sensor networks covering both single-hop and multihop multipath scenarios, named, respectively, LDCS and the ME of LDCS. It is theoretically demonstrated that the proposed algorithms achieve guaranteed throughput capacity regions comparable with those of other distributed maximal-matching-based algorithms (such as SP, MP, and tuple-based MS) at low implementation complexity.

LDCS utilizes a rate-proportional mechanism to allocate datagrams locally and applies a probability-based random access technique to complete channel assignment and scheduling. This design avoids executing a maximal-matching-finding process and attains a low complexity that is independent of the numbers of links and channels. It is further proved that LDCS achieves better throughput performance than existing policies in some specific situations. LDCS is then extended to multihop multipath scenarios, yielding a cross-layer solution. Simulation experiments confirm our analysis. In future research, we are interested in improving LDCS so that it maintains the network in a strongly stable state with bounded average delay for both single-interface and multi-interface cases.

Appendix

A.1. Proof of Lemma 2

Because the arrivals and departures within one time slot are both upper bounded, there must exist a positive number that bounds the change of any queue length over a single slot. We distinguish two cases. If the queue length satisfies the first (small-backlog) condition, it is easy to see that Lemma 2 holds. For the other case, in which the queue length remains above this level throughout the window, we have the following.

Employing Lemma 1 with a suitable choice of its parameter, one concludes that there exists a constant such that, whenever the queue is sufficiently long, the scheduling-probability bound of Lemma 1 holds in every slot; clearly, this constant is chosen as a function of the policy parameters. Define an indicator variable that equals 1 if the queue is scheduled in a given time slot and 0 otherwise. Then, from (A.5), the scheduling process can be described by the probability model that follows.

Consider a random variable formed from these indicators and normalized by the maximum number of queues that can be scheduled simultaneously within the interference set of any queue in the network. For any admissible exponent, based on (A.6) and the convexity of the exponential function [21], it is not hard to obtain the bound that follows.

Define the partial sum of these random variables over the window; we now aim to upper bound its tail probability. A Chernoff bound would be immediate if the variables were independent across time, but by the definition above and the Markov nature of the queue-length process they are not. To upper bound the tail, we therefore introduce new auxiliary random variables that are independent across time, take values 0 or 1, and take the value 1 with the probability established above.

Obviously, the sum of these independent variables can be upper bounded through the Chernoff bound [22] as follows.
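
In generic form (placeholder variables of our own; the paper's specific quantities are not reproduced), the bound invoked here reads:

```latex
% Chernoff bound for independent 0/1 random variables Y_1,\dots,Y_n and any \theta > 0:
\Pr\!\Big(\sum_{i=1}^{n} Y_i \ge a\Big)
  \;\le\; e^{-\theta a}\,\mathbb{E}\Big[e^{\theta \sum_{i=1}^{n} Y_i}\Big]
  \;=\; e^{-\theta a}\prod_{i=1}^{n}\mathbb{E}\big[e^{\theta Y_i}\big]
```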

Furthermore, based on (A.8), one can show by induction [23] that the corresponding bound also holds for the original, dependent variables. Hence, the bound in (A.11) follows.

Thus, from the Markov inequality and (A.11), we get the following.

For a suitable choice of the exponent, it is not hard to find a constant such that the following holds.

For the arrival process, define the following quantity.

Considering the condition that the offered load satisfies in Lemma 2, one can easily infer, for some suitable constant, the following.

Hence, based on (A.13) and (A.15), we have

Thus, by setting the relevant parameter appropriately, we obtain the bound claimed in Lemma 2.

This ends the proof of Lemma 2.

A.2. Proof of Proposition 2

Referring to the proof of Proposition 1 and to the definition of the extended maximum capacity region for multihop flows with multiple paths in (20), the efficiency ratio in (23) implies that, for every user and path, there exist path fractions satisfying the following.

For the multihop multipath case, a Lyapunov function of the channel-queue lengths is used.

From (4) and (19), we thus obtain a drift expression whose remaining terms are of higher order. Further, on the basis of Lemma 1, it is easy to see that the scheduling-probability bound holds for all queues, where the constant involved has already been defined in Section 4. In addition, by the binomial expansion [6], it is not difficult to show that, for any given tolerance, there exists a threshold beyond which the required approximation holds. Hence, one obtains (A.18).

Utilizing (A.18) and the routing selection mechanism in (21), it is not hard to obtain the following.

Therefore, the Lyapunov drift is negative and is given by the following expression.

The stability of the system then follows [24].

Data Availability

The data used to support the findings of this study are available from the corresponding authors on reasonable request.

Conflicts of Interest

The authors declare that Xinyan Xu and Fan Zhang are joint first authors. The authors declare that the Department of Computer and Software Engineering, Shandong College of Electronic Technology, and the School of Information Engineering, Shandong Management University, are joint first institutions and jointly hold the copyright of this paper. The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was jointly supported by the National Natural Science Foundation of China under Grant 61501283, the Shandong Provincial Natural Science Foundation of China under Grant ZR2017BF013, and the University of Jinan Science and Technology Program under Grant XKY1813.