Research Article  Open Access
Cross-Layer QoS Scheme Based on Priority Differentiation in Multi-Hop Wireless Networks
Abstract
To support the different QoS requirements of diverse types of services, a cross-layer QoS scheme providing differentiated QoS guarantees is designed. The scheme sets service priorities according to each service's data arrival rate and required end-to-end delay, endowing different services with distinct scheduling priorities. To better support QoS requirements while maintaining fairness, the scheme introduces delay and throughput weight coefficients, and methods for calculating these coefficients are proposed. By decomposing an optimization problem whose objective is the weighted network utility using the Lyapunov optimization technique, the scheme can simultaneously support the different QoS requirements of various services. The throughput utility optimality of the scheme is also proved. To reduce computational complexity, a distributed medium access control scheme is proposed. A power control algorithm for the cross-layer scheme is also designed; it transforms power control into the solution of a multivariate equation. Simulation results obtained with Matlab show that, compared with existing works, the algorithm presented in this paper can simultaneously satisfy the delay demands of different services while maintaining high throughput.
1. Introduction
With various multimedia applications that have diverse QoS requirements appearing in multi-hop wireless networks [1], satisfying the different QoS demands of diverse services has become a hot research topic. Considering that wireless channels are shared among nodes and that QoS requirements differ across services, assigning transmission priorities to applications according to their QoS demands is an effective way to utilize wireless resources and provide QoS guarantees. Several algorithms follow this line of thinking. The EDCA of the IEEE 802.11e standard [2] defines four access categories, corresponding to voice, video, best effort, and background, to endow different applications with distinct priorities in medium access control. Reference [3] adopts the 802.11e Access Control (AC) queue structure; control packets used for routing are prioritized according to the type of traffic associated with them, ensuring that high-priority packets are not penalized by control packets. In [4], the delay of data transmission is chosen as the QoS metric, and packets in the queues of flows with a higher QoS level are delivered with higher priority. To support efficient video transmission, the scheduling algorithm in [5] assigns priority depending on the type of video frame; this video-based scheduling algorithm is combined with the 802.11e protocol. Designed for video transport, the policy of [6] computes counter values based on delay estimates and packet importance; under this policy, packets with the lowest counter values gain the transmission opportunity. However, since no corresponding routing and flow control scheme is combined with these priority-based scheduling algorithms, congestion of high-priority data packets may occur.
Service-differentiated routing algorithms [7, 8], which select routes with different approaches depending on the type of traffic so that higher-priority packets are transmitted on higher-quality links, have also been proposed. However, these algorithms may cause an unbalanced distribution of packets in the network.
Different from the layered QoS schemes above, the Backpressure policy [9] applies the Lyapunov optimization technique to jointly route and schedule. The policy can also be combined with flow control [10] to ensure that the admitted rate injected into the network layer lies within the network capacity region, as well as with the MAC [11], TCP [12], and application layers [13]. Owing to its throughput-optimal behavior across different network structures, the Backpressure cross-layer control scheme is a promising way to provide QoS guarantees. There are still few studies of Backpressure policies designed to support the different QoS requirements of different traffic types [14, 15]. In [14], services are divided into classes according to their QoS demands, and QoS requirements are supported by solving an optimization problem whose objective is the weighted utility of the classes subject to each class's QoS constraints. However, under high traffic loads, the fairness of this policy declines. Reference [15] proposes a Backpressure cross-layer algorithm which ensures that the delays of flows are proportional to service priorities while keeping optimal throughput utility. However, [15] does not consider the situation in which services have different data arrival rates.
In this paper, we consider both the arrival data rate and QoS demands when setting priorities, and we study the effect of service priorities on QoS performance. We propose a cross-layer QoS scheme which can provide QoS guarantees for different types of services simultaneously. The key contributions of this paper can be summarized as follows. (i) The paper proposes a Lyapunov-optimization-based cross-layer scheme which satisfies the different QoS requirements of various applications through priority differentiation; a method for calculating service priorities is also designed. (ii) The paper introduces throughput and delay weight coefficients that are updated according to QoS performance, to better meet QoS demands and maintain fairness. (iii) To reduce computational complexity, a distributed medium access control scheme is proposed, and a power control algorithm that keeps the data transmission rates of all wireless links equal is designed; this algorithm treats power control as the solution of a multivariate equation. (iv) The utility optimality of the scheme is demonstrated with rigorous theoretical analysis; the policy is shown to achieve a time-average throughput utility that can be made arbitrarily close to the optimal value.
The rest of the paper is structured as follows. Section 2 introduces the system model and problem formulation. Section 3 designs the algorithm using Lyapunov optimization. Performance analyses of the proposed algorithm are presented in Section 4. Simulation results are given in Section 5, and conclusions are provided in Section 6.
2. Model and Problem Formulation
2.1. Network Model
Consider a multi-hop wireless network consisting of several nodes. Let the network be modeled by a directed connectivity graph , where is the set of nodes and represents a unidirectional wireless link between node and node j. M denotes the set of unicast sessions between source-destination pairs in the network. K denotes the set of services in each session. is the set of source nodes of service in session m. is the set of destination nodes of service in session . Packets generated at the source nodes traverse multiple wireless hops before arriving at the destination nodes. The system is assumed to run in a time-slotted fashion. There are two channels in the network, a common control channel and a data channel, which use different communication frequencies. Each node can broadcast control packets consisting of channel access negotiation information, queue lengths, and node weight values on the common control channel, and can gain control information by monitoring that channel. The data channel is used for data communication. In this model, scheduling is subject to the following constraints [16]: is used to indicate whether link is used to transmit packets in time slot t: if , and if . denotes the transmit power from node to node in time slot . Constraint (1) means that each node can either transmit or receive on the data channel in a given slot, but not both. The SINR (Signal to Interference plus Noise Ratio) of link at node in time slot is calculated as follows: Node is the sending node, and node is the destination node of packets from node . Node denotes a neighbor node of node . When node sends packets, node is the destination node of packets from node z. denotes the transmission loss from node to node y. is the receiver noise at node . The achievable capacity of link in time slot is calculated as follows: represents the bandwidth of the data channel. Two constraints must be satisfied for successful data transmission on link .
The first constraint can be expressed as follows. This constraint states that the SINR of link at node must be above the predefined SINR threshold .
However, if a new link is built, the transmission power from node to may cause additional interference at the receiving node of an existing link , decreasing the SINR of link at node . To ensure that the new transmission does not impair existing transmissions and that the SINR of each existing link remains above the predefined threshold , the second constraint is expressed as follows: denotes the additional interference at node caused by the data transmission from node to in time slot . According to constraints (4) and (5), the maximum and minimum transmit powers from node to node in time slot can be written as
2.2. Virtual Queue at the Transport Layer
denotes the arrival rate of service in session injected into the transport layer from the application layer at the source node. is the maximum arrival rate of session m. is the admitted rate of session injected into the network layer. is an auxiliary variable called the virtual input rate. There is a virtual queue for every service in session at the service's source node. The virtual queue at the transport layer of the source node is denoted by and is updated as follows: If each virtual queue is guaranteed to be stable, then according to the necessary and sufficient condition for queue stability [17], it is apparent that , where the time-average value of a time-varying variable is denoted by . Therefore, the lower bound of can be derived from , which is calculable.
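As a concrete sketch of the virtual-queue update (7), the code below uses the standard Lyapunov auxiliary-variable construction of [17]; the names `Y` (virtual backlog), `r_t` (admitted rate, which serves the queue), and `gamma_t` (virtual input rate, which feeds it) are illustrative, since the original symbols are not reproduced in this version.

```python
def update_virtual_queue(Y, r_t, gamma_t):
    """One slot of the transport-layer virtual queue (a sketch of (7)):
    the queue is drained by the admitted rate r(t) and fed by the
    virtual input rate gamma(t)."""
    return max(Y - r_t, 0.0) + gamma_t
```

Stability of this queue implies that the time average of `gamma_t` is at most the time average of `r_t`, which is exactly the relation the text uses to lower-bound the admitted rate.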
2.3. Data Queue at the Network Layer
The data backlog queue for service in session at the network layer of node is denoted by . In each slot , the queue is updated as where represents the set of nodes with , and represents the set of nodes with . is the amount of data of service in session to be forwarded from node to in time slot . is an indicator function that equals 1 if and 0 otherwise. In addition, must not exceed the transmission capacity of link in time slot .
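The data-queue dynamics (8) can be sketched as follows; the argument names are illustrative stand-ins for the elided symbols (outgoing service rates, incoming rates, and the admitted rate at a source node).

```python
def update_data_queue(Q, out_rates, in_rates, admitted, is_source):
    """One slot of the network-layer data queue (a sketch of (8)):
    the backlog is drained by the outgoing link rates, then increased by
    incoming link rates and, at a source node, by the admitted rate."""
    remaining = max(Q - sum(out_rates), 0.0)
    return remaining + sum(in_rates) + (admitted if is_source else 0.0)
```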
2.4. Design of Priorities of Services
represents the priority of service , which denotes the importance of service in scheduling. is calculated as follows: is the average data arrival rate of service . represents the maximum allowable end-to-end delay bound of service k. denotes the basic average data arrival rate. is the basic allowable end-to-end delay. and can be calculated as
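The exact priority formula is not preserved in this version of the text, but the description implies that priority grows with the arrival rate and shrinks with the allowable delay, normalized by the basic values. The sketch below is one hypothetical form consistent with that description, not the paper's actual equation.

```python
def service_priority(avg_rate, delay_bound, base_rate, base_delay):
    """Hypothetical priority: proportional to the normalized arrival rate
    and inversely proportional to the normalized delay bound, so services
    with heavier traffic and tighter delay demands rank higher."""
    return (avg_rate / base_rate) * (base_delay / delay_bound)
```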
2.5. Design of Throughput and Delay Weight Coefficients
represents the delay weight coefficient of service . In every interval, the destination nodes of each service calculate the average end-to-end delay of their corresponding service and the delay weight coefficient. Taking service k as an example, in the destination nodes of service k, is calculated as Here, is the average end-to-end delay of service in interval . Similarly, the throughput weight coefficient of service k, , is calculated as follows: represents the average throughput of service in interval . denotes the required average throughput of service k.
By introducing the throughput and delay weight coefficients into the optimization objective, the QoS performances of services are taken into account in the optimization. According to the calculation methods above, if the QoS performance of a service, namely its average end-to-end delay and average throughput in the interval, fails to reach the threshold values, the delay and throughput weight coefficients increase sharply. The transmission probability of this service's packets then increases, which helps support its QoS requirements better.
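Since the coefficient equations are not preserved here, the sketch below only illustrates the stated behavior: a weight that stays near 1 while the QoS target is met and grows sharply once it is violated. The exponential form and the steepness factor `5.0` are assumptions for illustration, not the paper's formulas.

```python
import math

def delay_weight(avg_delay, delay_bound):
    """Illustrative delay weight: 1 while the measured delay meets its
    bound, growing exponentially as the violation deepens."""
    return math.exp(max(avg_delay / delay_bound - 1.0, 0.0) * 5.0)

def throughput_weight(avg_thr, required_thr):
    """Illustrative throughput weight: 1 while throughput meets the
    requirement, growing exponentially as the shortfall deepens."""
    return math.exp(max(1.0 - avg_thr / required_thr, 0.0) * 5.0)
```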
2.6. Throughput Utility Optimization Problem
Similar to the design of the utility function in [18], let the utility function of service in session m, , be a concave, differentiable, and nondecreasing function with . The throughput utility maximization problem P1 can be defined as follows: Similar to the definition in Section 2.2, is the time-average value of the time-varying variable , and is calculated according to . Here, , , , and . is the capacity region of the network. The constraint guarantees the stability of the network.
3. Dynamic Algorithm via Lyapunov Optimization
The Lyapunov optimization technique is applied to solve P1. and are used in the dynamic algorithm. Let be the network state vector in time slot . Define the Lyapunov function as The conditional Lyapunov drift in time slot is To maximize a lower bound for , the drift-plus-penalty function can be defined as where is a user-defined utility weight. The following inequality can be derived: here, , , and can be evaluated as follows: is a constant satisfying Assume in this paper that the transmit capacity of each link is held constant by the power control algorithm. According to , , and , the constant must exist.
The CADSP (Cross-Layer Algorithm with Differentiated Service Prioritization) scheme is based on the drift-plus-penalty framework [17]. The main design principle of the algorithm is to minimize the right-hand side of (17). The scheme consists of three parts: a joint flow control, routing, and scheduling scheme; a medium access control scheme; and a power control algorithm.
3.1. Joint Flow Control, Routing, and Scheduling Scheme
This scheme includes four parts as follows.
Source Rate Control. For sessions and at source node , the admitted rate is chosen to solve Problem (20) is a linear optimization problem: if , is set to ; otherwise it is set to zero.
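Because (20) is linear in the admitted rate, its solution is the bang-bang rule the text describes. The sketch below assumes the usual drift-plus-penalty form, in which data are admitted at the maximum rate whenever the virtual queue exceeds the source's data backlog; the comparison `Y > Q_src` is an assumed reconstruction of the elided condition.

```python
def admitted_rate(Y, Q_src, R_max):
    """Bang-bang source rate control (a sketch of solving (20)):
    admit at the maximum rate when the transport-layer virtual queue
    outweighs the network-layer backlog, otherwise admit nothing."""
    return R_max if Y > Q_src else 0.0
```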
Virtual Input Rate Control. For sessions and at source node , the virtual input rate is chosen to solve If is strictly concave and twice differentiable, (21) is a concave maximization problem with a linear constraint, and can be chosen by where is the inverse of , the first-order derivative of . Since the utility function is strictly concave and twice differentiable, is monotonic, and therefore exists. If is a linear function, suppose ; then can be calculated as
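For the common choice U(x) = log(1 + x) (the utility later used in the simulations), the inverse-derivative rule (22) has a closed form. The sketch below assumes the objective V·w·U(γ) − Y·γ with `V` the utility weight and `w` the service's weight coefficient; these names and the exact objective are assumptions, since the symbols are elided here.

```python
def virtual_input_rate(Y, V, w, R_max):
    """Closed form of (22) for U(x) = log(1 + x): U'(x) = 1/(1+x), so
    gamma = (U')^{-1}(Y / (V*w)) = V*w/Y - 1, clipped to [0, R_max]."""
    if Y <= 0:
        return R_max  # empty virtual queue: admit the maximum
    gamma = (V * w) / Y - 1.0
    return min(max(gamma, 0.0), R_max)
```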
Joint Routing and Scheduling. At the node , routing and scheduling decisions for each service in session can be made by solving the following: denotes the capacity of link in time slot t, and is calculated according to (3). The first constraint of (24) indicates that the amount of data to be forwarded from one node to another node in a time slot should not be greater than the capacity of the link between these two nodes in time slot . The second constraint of (24) is built according to constraint (1) given in Section 2.1. The third constraint of (24) is built according to constraint (6) given in Section 2.1.
First, the best service and the best session whose data should be transmitted on link are chosen as The weight value of link is calculated as So the joint routing and scheduling problem reduces to the following problem: Transmission rates are chosen based on (27), which is hard to solve as it requires global knowledge and a centralized algorithm. We define as the set of transmit powers on each link and as the set of links which can be used for data transmission simultaneously when using as the set of transmit powers. is defined as the set of . In each slot, the that maximizes is chosen as the set of scheduled links and transmit powers.
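The per-link selection in (25)–(26) follows the usual backpressure pattern: pick the (session, service) pair with the largest backlog differential across the link, and weight the link by that differential times its capacity. The sketch below omits the priority and weight coefficients that the full scheme folds into the weight, so it is a simplified illustration rather than the exact rule.

```python
def link_weight(Q_i, Q_j, capacity):
    """Backpressure link weighting (a simplified sketch of (25)-(26)).
    Q_i, Q_j: dicts {(session, service): backlog} at sender i and receiver j.
    Returns the best (session, service) pair and the link weight."""
    best_pair = max(Q_i, key=lambda sk: Q_i[sk] - Q_j.get(sk, 0.0))
    diff = max(Q_i[best_pair] - Q_j.get(best_pair, 0.0), 0.0)
    return best_pair, diff * capacity
```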
Update of Queues. and are updated using (7) and (8) in each time slot.
3.2. Distributed Medium Access Control Scheme
Solving (27) is an NP-hard problem whose computational complexity is , where denotes the number of nodes in the network. Obviously, the computational complexity increases sharply as grows. To reduce it, a distributed medium access control scheme for routing and link scheduling is proposed in this section. The design principle of this distributed scheme is that nodes with higher weight values obtain higher probabilities of accessing the medium and transmitting data. When the runtime is long enough, the distributed medium access control scheme plays the same role as the GMS (Greedy Maximal Scheduling) algorithm, a centralized algorithm whose capacity region reaches 1/2 of the capacity region of MWM (Maximum Weight Matching) [19], which is the basis of the centralized cross-layer routing and scheduling scheme proposed in Section 3.1.
The medium access control scheme is implemented in a time-slotted fashion on the common control channel. The way nodes contend for the control channel is similar to the two-way RTS/CTS handshake of IEEE 802.11. The medium access control logic is illustrated in Figure 1. The details of the scheme are as follows. (i) A central control node implements the power control algorithm and records the state information of existing links, including transmit powers, node positions, and noise at the receiving nodes. (ii) At the beginning of each slot, each node trying to send data chooses a random waiting time . The value of is calculated by the central control node and is related to the number of nodes in the network. (iii) After waiting for , each node sends an IU packet on the control channel that includes its weight value, the chosen next-hop node, its current position, and its noise. For the sending node , the receiving node of node is , and the weight value of node is . Each node also monitors the IU packets from other nodes to learn their weight values. (iv) Every backlogged node calculates its contention window and backoff counter [20] as follows: is randomly chosen from the range . (v) After from the beginning of the slot, each backlogged node continues monitoring the control channel. If a node senses an idle control channel for a period of , it can send an RTS packet which includes . The RTS packet also includes information about the intended receiving node. (vi) After receiving the RTS packet from node i, the central control node checks whether the intended receiving node of node is already transmitting. The control node also runs the power control algorithm to decide whether the new link may be established.
If the new link is allowed to be established, the control node responds, after a period of SIFS, with a CTS packet that includes the new transmit powers and transmission durations of all sending nodes; otherwise, the control node responds with an NCTS packet indicating that the new link is not allowed to be established. (vii) The sending nodes update their transmit powers after receiving the CTS packet. The intended receiving node of node prepares for data reception and responds with an ACK packet after successful reception. Ignoring the weight value of node i, idle nodes update their contention windows and backoff counters after receiving the CTS packet. (viii) Node and the other idle nodes begin monitoring the control channel for further negotiation after receiving an NCTS packet. (ix) Each node is allowed to send at most three RTS packets per time slot.
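The contention-window formula from [20] is elided above; the sketch below only illustrates the stated design principle that a larger weight yields a smaller contention window and hence statistically earlier access. The inverse-weight mapping and the default window of 64 are assumptions for illustration.

```python
import random

def backoff_counter(my_weight, max_weight, cw_max=64):
    """Illustrative weight-based backoff: the contention window shrinks
    as the node's weight approaches the largest weight heard on the
    control channel, so high-weight nodes tend to transmit first."""
    cw = max(int(cw_max * (1.0 - my_weight / max_weight)), 1)
    return random.randrange(cw)
```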
3.3. Power Control Algorithm
The power control algorithm is implemented in the central control node of the network. Its design objective is to ensure that the SINR at every receiving node equals . Assume that there are already links in the network. The links from sending nodes to receiving nodes are represented by . When node tries to transmit data packets to node , it sends an RTS packet on the control channel. After receiving the RTS packet from by monitoring the common control channel, the central control node computes transmit powers for all sending nodes that satisfy the following equalities: where . If can be obtained, the new link can be established. Equation (29) can be transformed into the following multivariate equations: where If , the link is allowed to be established. Here , and represents the maximum transmit power that node can support. The central control node then broadcasts the new transmit powers of the sending nodes on the common control channel.
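Setting every link's SINR equal to the threshold θ turns (29) into a square linear system: for each link l, G_ll·p_l − θ·Σ_{j≠l} G_jl·p_j = θ·N_l. The sketch below assumes known channel gains `G[j][l]` (from the sender of link j to the receiver of link l) and receiver noises, and rejects the solution if any power is negative or exceeds the maximum, matching the feasibility check described above.

```python
import numpy as np

def solve_powers(G, noise, theta, p_max):
    """Solve the linear system that makes every link's SINR exactly theta
    (a sketch of (29)-(30)). G[j][l]: gain from sender of link j to
    receiver of link l; noise[l]: receiver noise of link l.
    Returns the power vector, or None if the link set is infeasible."""
    n = len(noise)
    A = np.zeros((n, n))
    b = theta * np.asarray(noise, dtype=float)
    for l in range(n):
        for j in range(n):
            # Diagonal: desired-signal gain; off-diagonal: -theta * interference gain.
            A[l, j] = G[j][l] if j == l else -theta * G[j][l]
    p = np.linalg.solve(A, b)
    if np.any(p < 0) or np.any(p > p_max):
        return None  # new link would require an infeasible power
    return p
```

A two-link example: with symmetric cross-gains of 0.1, both links settle on the same power, and substituting it back yields an SINR of exactly θ at each receiver.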
4. Performance Analysis
Theorem 1 (algorithm performance). Define and the optimization problem P2 as Define as the optimal value of and as the solution of P2. Under the central CADSP scheme proposed in Section 3.1, the following inequality holds:
Proof. According to Lemma 4 in [18], and similar to Theorem 3 in [16], the following inequality holds when : According to the equalities , which can be obtained, and , (34) can be transformed into the following inequality: As the CADSP scheme guarantees that and the function is nondecreasing, the following inequality is obtained: Inequality (36) means that the overall throughput utility achieved by the algorithm in this paper is within a constant gap of the optimum.
5. Simulation
5.1. Simulation Setup
The simulated network consists of 20 nodes randomly distributed in a square area of 900 m^{2}. There are two unicast sessions with randomly chosen sources and destinations, and each session includes three services. Data are injected at the source nodes following Poisson arrivals. The simulation lasts 10,000 time slots, and all initial queue sizes are set to 0. The throughput utility function is . Table 1 summarizes the simulation parameters.

In this paper, the performance of CADSP is compared with the Backpressure scheme [21] and the PDAPMF scheme [6]. Backpressure is a classical joint routing and scheduling algorithm that provides throughput utility optimality. PDAPMF is a service-differentiated scheduling policy; in this simulation, it is combined with the AODV routing algorithm.
5.2. Simulation of Services with Different Delay Requirements
In this section, the average data arrival rates of all the services are the same. The maximum allowable endtoend delay bounds of services are set as s, s, and s. The required average throughputs of services are set as = 0.5 × 10^{5} bits/s.
We compare the three solutions by varying the average data arrival rate and plot the average throughput of service 1 in Figure 2, which shows that CADSP outperforms Backpressure and PDAPMF. When the average data arrival rate is lower than 5 × 10^{5} bits/s, the three schemes obtain similar throughput. With higher arrival rates, however, CADSP and PDAPMF perform much better than Backpressure, since service 1 under CADSP and PDAPMF is assigned the highest priority of the three services and thus gets more transmission opportunities than under Backpressure, where all services share the same priority. As a Backpressure-based algorithm with throughput optimality, CADSP also outperforms PDAPMF in throughput.
Figure 3 shows the average end-to-end delay of service 1 for the three solutions. When the average data arrival rate is lower than 5.5 × 10^{5} bits/s, PDAPMF performs best, but above 6.5 × 10^{5} bits/s the average end-to-end delay of service 1 under PDAPMF exceeds its maximum allowable bound. The average end-to-end delay under CADSP always stays below the bound, since the delay weight coefficient provides a better delay guarantee. When the average data arrival rate ranges from 0.5 × 10^{5} bits/s to 4 × 10^{5} bits/s, the average end-to-end delays of CADSP and Backpressure decrease; the reason is that the end-to-end delay is high when the traffic load is too low to form the "queue length pressure" from source nodes to destination nodes.
The average throughputs of service 2 under the three solutions are compared in Figure 4, which shows that CADSP outperforms Backpressure and PDAPMF. When the average data arrival rate is lower than 4 × 10^{5} bits/s, the three schemes obtain similar throughput; with higher arrival rates, CADSP performs better than Backpressure and PDAPMF. Together with Figure 2, this shows that even when the traffic load is high, CADSP still maintains good average throughput for service 2.
Figure 5 shows the average end-to-end delay of service 2 for the three solutions. Since the priority of service 2 in CADSP is lower than that of service 1, its delay performance deteriorates. However, the average end-to-end delay of service 2 is still kept below its maximum allowable bound through the delay weight coefficient. We can also see that, under low traffic load, PDAPMF achieves the best average end-to-end delay.
In Figure 6, the average throughput of service 3 under the three solutions is compared. The figure shows that Backpressure, which assigns the same priority to every service, outperforms CADSP and PDAPMF. Since service 3 in CADSP and PDAPMF is scheduled with the lowest priority, its average throughput cannot increase with the average data arrival rate when the traffic load is high. However, the average throughput of service 3 under CADSP always stays above its required average throughput thanks to the throughput weight coefficient.
The average end-to-end delay of service 3 for the three solutions is shown in Figure 7. Though CADSP performs worse than Backpressure in most conditions, its average end-to-end delay remains below the maximum allowable bound of service 3.
From the simulation results above, we can see that CADSP can support QoS requirements of all services.
We plot the total throughput of the three solutions in Figure 8, which shows that Backpressure outperforms CADSP and PDAPMF. The reason is that Backpressure provides throughput optimality, while the throughput optimality of CADSP is compromised by introducing the throughput and delay weight coefficients into the optimization objective.
5.3. Simulation of Services with Different Average Data Arrival Rates
The average data arrival rate of service 1 is four times that of service 3. The average data arrival rate of service 2 is two times that of service 3. The maximum allowable endtoend delay bounds of services are set as s. The required average throughputs of services are set as = 0.5 × 10^{5} bits/s.
In Figure 9 the average throughputs of the three services using CADSP are compared. From the figure we can see that the ratio among the average throughputs of the three services is close to the ratio among the average arrival data rates of the three services.
The average end-to-end delay performances of the three services under CADSP are compared in Figure 10. The average end-to-end delays of service 1 and service 2 stay below the maximum allowable bound. When the average data arrival rate is lower than 2 × 10^{5} bits/s, the average end-to-end delay of service 3 exceeds its bound; the reason is that the arrival rate of service 3 is too low to form the "queue length pressure" needed to push its packets from source to destination, which increases its average end-to-end delay.
6. Conclusions
This paper proposed a cross-layer QoS scheme which provides different QoS guarantees for diverse types of services. By setting service priorities based on data arrival rates and end-to-end delay demands, services with stricter QoS demands gain better QoS performance. The delay and throughput weight coefficients in the optimization objective help maintain fairness and enable the scheme to better support QoS requirements, while the throughput utility optimality of the scheme is retained. A distributed medium access control scheme and a power control algorithm are designed to reduce the computational complexity of the scheme. Compared with existing works, the policy presented in this paper can simultaneously support the delay requirements of different services while maintaining higher throughput.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of the manuscript.
References
[1] S. Ehsan and B. Hamdaoui, "A survey on energy-efficient routing techniques with QoS assurances for wireless multimedia sensor networks," IEEE Communications Surveys & Tutorials, vol. 14, no. 2, pp. 265–278, 2012.
[2] IEEE Std 802.11-2007, IEEE standard for information technology—telecommunications and information exchange between systems—local and metropolitan area networks—specific requirements—part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications (Revision of IEEE Std 802.11-1999), 2007.
[3] Y. Qin, L. Li, X. Zhong, Y. Yang, and C. L. Gwee, "A cross-layer QoS design with energy and traffic balance aware for different types of traffic in MANETs," Wireless Personal Communications, vol. 85, no. 3, pp. 1429–1449, 2015.
[4] C.-Y. Liu, B. Fu, and H.-J. Huang, "Delay minimization and priority scheduling in wireless mesh networks," Wireless Networks, vol. 20, no. 7, pp. 1955–1965, 2014.
[5] G. Adam, C. Bouras, A. Gkamas, V. Kapoulas, G. Kioumourtzis, and N. Tavoularis, "Cross-layer mechanism for efficient video transmission over mobile ad hoc networks," in Proceedings of the 3rd International Workshop on Cross Layer Design (IWCLD 2011), pp. 1–5, December 2011.
[6] H. Wang and G. Liu, "Priority and delay aware packet management framework for real-time video transport over 802.11e WLANs," Multimedia Tools and Applications, vol. 69, no. 3, pp. 621–641, 2014.
[7] A. R. Lari and B. Akbari, "Network-adaptive multipath video delivery over wireless multimedia sensor networks based on packet and path priority scheduling," in Proceedings of the 5th International Conference on Broadband Wireless Computing, Communication and Applications (BWCCA '10), pp. 351–356, Fukuoka, Japan, November 2010.
[8] D. Djenouri and I. Balasingham, "Traffic-differentiation-based modular QoS localized routing for wireless sensor networks," IEEE Transactions on Mobile Computing, vol. 10, no. 6, pp. 797–809, 2011.
[9] L. Tassiulas and A. Ephremides, "Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks," IEEE Transactions on Automatic Control, vol. 37, no. 12, pp. 1936–1948, 1992.
[10] M. J. Neely, E. Modiano, and C.-P. Li, "Fairness and optimal stochastic control for heterogeneous networks," IEEE/ACM Transactions on Networking, vol. 16, no. 2, pp. 396–409, 2008.
[11] L. Jiang and J. Walrand, "A distributed CSMA algorithm for throughput and utility maximization in wireless networks," IEEE/ACM Transactions on Networking, vol. 18, no. 3, pp. 960–972, 2010.
[12] H. Seferoglu and E. Modiano, "TCP-aware backpressure routing and scheduling," in Proceedings of the IEEE Information Theory and Applications Workshop (ITA '14), pp. 1–9, San Diego, CA, USA, February 2014.
[13] E. Anifantis, E. Stai, V. Karyotis, and S. Papavassiliou, "Exploiting social features for improving cognitive radio infrastructures and social services via combined MRF and back pressure cross-layer resource allocation," Computational Social Networks, vol. 1, article 4, 2014.
[14] G. A. Shah, V. C. Gungor, and O. B. Akan, "A cross-layer QoS-aware communication framework in cognitive radio sensor networks for smart grid applications," IEEE Transactions on Industrial Informatics, vol. 9, no. 3, pp. 1477–1485, 2013.
[15] A. Zhou, M. Liu, Z. Li, and E. Dutkiewicz, "Cross-layer design for proportional delay differentiation and network utility maximization in multihop wireless networks," IEEE Transactions on Wireless Communications, vol. 11, no. 4, pp. 1446–1455, 2012.
[16] S. Fan and H. Zhao, "Cross-layer control with worst case delay guarantees in multihop wireless networks," Journal of Electrical and Computer Engineering, vol. 2016, Article ID 5762851, 10 pages, 2016.
[17] M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems, Morgan & Claypool Publishers, 2010.
[18] M. J. Neely, "Opportunistic scheduling with worst case delay guarantees in single and multi-hop networks," in Proceedings of IEEE INFOCOM, pp. 1728–1736, Shanghai, China, April 2011.
[19] X. Lin and N. B. Shroff, "The impact of imperfect scheduling on cross-layer congestion control in wireless networks," IEEE/ACM Transactions on Networking, vol. 14, no. 2, pp. 302–315, 2006.
[20] L. Ding, T. Melodia, S. N. Batalama, J. D. Matyjas, and M. J. Medley, "Cross-layer routing and dynamic spectrum allocation in cognitive radio ad hoc networks," IEEE Transactions on Vehicular Technology, vol. 59, no. 4, pp. 1969–1979, 2010.
[21] L. Georgiadis, M. J. Neely, and L. Tassiulas, "Resource allocation and cross-layer control in wireless networks," Foundations and Trends in Networking, vol. 1, no. 1, pp. 1–144, 2006.
Copyright
Copyright © 2017 Mingjiu Wang and Shu Fan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.