Abstract

This paper introduces a traffic load and interference based bandwidth allocation (TLIBA) scheme for wireless mesh networks (WMNs) that improves delay and throughput performance through proper utilization of the assigned bandwidth. Bandwidth is allocated jointly based on traffic load and interference, and a suitable path is selected based upon the least routing metric (RM) value. Simulation results are presented to demonstrate the effectiveness of the proposed approach, indicating higher bandwidth utilization and throughput as compared with the existing fair end-to-end bandwidth allocation (FEBA) scheme.

1. Introduction

Wireless mesh networking is an emerging hot topic and is still in its infancy. Key features of WMNs are that they are dynamically self-organized, self-configured, self-healing, scalable, reliable, and easy to deploy, and they can establish an ad hoc network automatically and maintain connectivity. WMNs are the subject of active work in industrial standards groups such as IEEE 802.11, IEEE 802.15, and IEEE 802.16 [1]. A few applications of WMNs are broadband Internet access, indoor WLANs, and mobile user access and connectivity. WMNs are specifically constructed by Firetide for providing connectivity [2].

In IEEE 802.16, backhaul connectivity of the mesh network is provided by the mesh base station (BS), which also controls one or more subscriber stations. When a centralized scheduling scheme is used, collecting bandwidth requests from subscriber stations and managing resource allocation are the responsibilities of the mesh BS [3]. There are two types of scheduling in WMNs, namely, centralized scheduling and distributed scheduling. The IEEE 802.16 standard provides a centralized scheduling mechanism that supports contention-free and resource-guaranteed transmission services in mesh mode. Research is ongoing towards designing efficient ways to realize centralized or distributed schedules that maximize channel utilization. The designs are divided into two phases: routing and scheduling. First, a routing tree topology is constructed from a given mesh topology. Second, channel resources are allocated to the edges of the routing tree by a scheduling algorithm [4].

The channel resource is bandwidth, which is allocated on the basis of fundamental performance parameters; generally delay, throughput, fairness, or interference is considered for bandwidth allocation. Wireless networks have experienced significant growth to meet the increasing bandwidth demands of network users and to support emerging bandwidth-intensive applications such as videoconferencing and video on demand (VoD). In IEEE 802.16 mesh networks the bandwidth negotiation is implicit and is based on the assumption that only the one-hop neighbors of a receiver can interfere with its ongoing data reception, which is also referred to as the "protocol model." In 802.16 mesh networks, in order to satisfy QoS requirements when routing packets, it is very important to reserve sufficient bandwidth for the transmissions of the individual links on a particular route, because in wireless mesh networks the end-to-end throughput of a traffic flow depends on the path length; that is, the higher the number of hops, the lower the throughput becomes.

The organization of this paper is as follows. Section 2 presents related work. Section 3 presents the details of the proposed work and the algorithm. Section 4 presents the simulation results obtained using the NS-2 simulator. Section 5 presents conclusions and outlines directions for future work.

2. Related Work

Cicconetti et al. [5] have proposed a fair end-to-end bandwidth allocation (FEBA) algorithm in order to provide maximum throughput to end-to-end traffic flows. FEBA is implemented at the medium access control (MAC) layer of single-radio, multichannel IEEE 802.16 mesh nodes operating in the distributed coordinated scheduling mode. The advantage of this approach is that it negotiates bandwidth among neighbors to assign each end-to-end traffic flow a fair share proportional to a specified weight. In this way traffic flows are served in a differentiated manner, with higher priority traffic flows being allocated more bandwidth on average than lower priority traffic flows.

Peng and Cao [6] have presented a dynamic programming based resource allocation and scheduling algorithm to address the problem of resource allocation with the goal of providing fair access to channels in IEEE 802.16 mesh networks. They defined a node's unsatisfactory index and a throughput function. Then, a multiobjective programming formulation was proposed for optimizing network performance.

Zhang et al. [7] have proposed a novel QoS guarantee mechanism which includes a protocol process and a minislot allocation algorithm. It uses the existing service classes of the original standard. Protocol processes were defined to manage dynamic service flows, and the minislot allocation algorithm was used to support data scheduling of various services. The WiMAX MAC layer was redesigned to support service classification in mesh mode. Using extended distributed scheduling messages, the delivery of dynamic service management messages in WiMAX mesh networks was implemented.

Mogre et al. [8] have proposed CORE, which addresses the problem of jointly optimizing routing, scheduling, and bandwidth savings via network coding. Prior solutions are either not applicable in the 802.16 MeSH mode or computationally too costly to be of practical use in a WMN under realistic scenarios. CORE's heuristics are able to compute solutions to this problem within an operator-definable maximum computational cost, thereby enabling the computation and near real-time deployment of the computed solutions. The advantage of this approach is that CORE is able to increase the number of admitted flows considerably with minimal computational cost. CORE also increases the number of network coding sessions which can be established in the WMN.

De Rango et al. [9] have proposed the GCAD-CAC (greedy choice with bandwidth availability aware defragmentation) algorithm, which is able to guarantee that the delay constraints of data flows belonging to three different traffic classes are respected. With this approach it is possible to achieve good results and to try to accept all new requests, but when a higher priority request is received, a lower priority admitted request is preempted. This preemption can leave small gaps that are not sufficient for admitting a new connection; these gaps can be collected by the GCAD algorithm by activating a bandwidth-availability-based defragmentation process.

Yang et al. [10] have proposed zone-based bandwidth allocation for mobile users in the IEEE 802.16j multihop relay network (IEEE 802.16-MR). The main focus of the work was the adaptive selection of a zone size suited to user mobility. The zone of a mobile user includes the current relay station and its neighboring relay stations within the zone size in hop count. Bandwidth allocation was done for the mobile user roaming within the zone, and the calculation of the required bandwidth was also presented.

Shakeri and Khazaei [11] have presented a novel scheduling scheme for WMNs: a multiple-gateway fair scheduling scheme. The scheme consists of a distributed requirement table and a requirement propagation algorithm for scheduling at the gateways. The requirement propagation algorithm allows each gateway to distribute the requirements and the routing table for scheduling into the network.

Delay aware load balancing routing (DLBR) [12] was proposed for WMNs by introducing a combined RM and was compared with the existing load balancing metric (LBM) [13]. The presented simulation results showed a considerable reduction in delay and overhead, thereby increasing the overall packet delivery. However, that work does not consider bandwidth allocation, and to the best of our knowledge there is, to date, a lack of systematic study of distributed bandwidth reservation strategies for mesh networks [1, 2, 14, 15].

3. Methodology

Our proposed metric differs from prior methods in several ways: an efficient route is first established with the least delay and load, and this is then taken into account in bandwidth allocation in the WMN. We consider the traffic load of the interfering neighbors as the measure of traffic interference. Initially, a combined routing metric (RM) is defined for efficient route selection using the traffic interference metric (TIM) and the end-to-end service delay metric (EDM) [12, 16]. The suitable path is selected based upon the least routing metric value. Next, bandwidth allocation is performed for the selected path using fair end-to-end bandwidth allocation (FEBA) [5]. The basic idea of FEBA is that each node assigns bandwidth requests and grants in a round-robin manner, where the amount of allocated bandwidth in bytes is proportional to the number of traffic flows weighted by their priorities. In the FEBA approach, each active queue, both requesting and granting, is assigned a weight value which is used by the bandwidth request/grant procedure, so that the amount of service is proportional to the number of traffic flows under service. We consider the traffic interference metric (TIM) [12, 17] along with the traffic load in the request/grant procedure, making it possible to allocate bandwidth efficiently.

3.1. Calculation of Traffic Interference Metric (TIM)

We consider the traffic load of the interfering neighbors as the measure of traffic interference. Both interflow interference and intraflow interference are considered. When neighboring nodes transmit on the same channel, they compete with each other for channel bandwidth. The degree of interference is not measured by the number of interfering nodes; instead, the load generated by the interfering nodes is taken into account. This metric considers the traffic of the interfering nodes to capture interflow interference.

Let $I_{ab}(c)$ be the set of interfering neighbors of node $a$ and node $b$ over channel $c$, and let $\mathrm{ETT}_{ab}(c)$ be the expected transmission time of the link, which captures the difference in transmission rate and loss ratio of links.

Then the TIM metric is defined as follows:
$$\mathrm{TIM}_{ab}(c) = \mathrm{ETT}_{ab}(c)\,\bigl(1 + \bar{L}_{I}\bigr),$$
where $\bar{L}_{I}$ is the average load of $I_{ab}(c)$, given by
$$\bar{L}_{I} = \frac{1}{\lvert I_{ab}(c)\rvert}\sum_{k \in I_{ab}(c)} L_{k},$$
and $L_{k}$ is the load of the interfering neighbor $k$.

When there are no interfering neighbors, the TIM metric selects the path with high transmission rate and low loss ratio. In the presence of interfering neighbors, the TIM metric selects the path with minimum traffic load and minimum interference [17].
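
To make the computation concrete, the following is a minimal Python sketch of the TIM calculation, assuming the multiplicative form reconstructed above; the function names and the numerical load values are illustrative only.

```python
def expected_transmission_time(frame_bits, bit_rate_bps, loss_ratio):
    """ETT-style link term: captures transmission rate and loss ratio."""
    return (frame_bits / bit_rate_bps) / (1.0 - loss_ratio)

def tim(link_ett, interfering_loads):
    """Traffic interference metric for one link.

    interfering_loads: traffic loads of the interfering neighbours on the
    same channel. With no interferers TIM reduces to the link term, so the
    path with high rate and low loss is preferred; otherwise the average
    interfering load inflates the metric."""
    if not interfering_loads:
        return link_ett
    avg_load = sum(interfering_loads) / len(interfering_loads)
    return link_ett * (1.0 + avg_load)

# Example: 1500-byte frames on a 12 Mb/s link with 10% loss and two
# interfering neighbours carrying normalized loads 0.4 and 0.2 (hypothetical).
link_term = expected_transmission_time(1500 * 8, 12e6, 0.10)
print(tim(link_term, [0.4, 0.2]))
```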

3.2. Calculation of End-to-End Service Delay Metric (EDM)

The expected end-to-end service delay metric (EDM) is used [16] to allow any shortest-path-based routing protocol to select the route with the lowest end-to-end latency.

The EDM is defined as a "network-load-aware and radio-aware service delay," which is the end-to-end latency spent in transmitting a packet from source to destination. To estimate the EDM value, the expected link transmission time required to successfully transmit a packet on each link is computed. This value is then multiplied by the mean number of backlogged packets in the output queue at each relay node.

It is assumed that each node is serviced with a first-in-first-out (FIFO) interface queue. The per-hop service delay $D_l^i$ is given by the expected time spent in transmitting all packets waiting for transmission through link $l$ at node $i$.

$D_l^i$ accounts for the expected service delay at node $i$, namely the queueing delay, the contention delay, and the transmission time of the link $l$ between node $i$ and any neighbor node $j$ in its transmission range.

With a given $D_l^i$, the EDM of a path $p$ with $k$ hops between source and destination is estimated as follows:
$$\mathrm{EDM}(p) = \sum_{i=1}^{k} D_{l_i}^{\,i}.$$

3.2.1. Estimation of $D_l^i$

Let there be $n$ neighbor nodes in the transmission range of node $i$. Let $N_l^i$ be the mean number of packets waiting at node $i$ to be successfully transmitted through link $l$; $D_l^i$ is then estimated as follows:
$$D_l^i = N_l^i \cdot \mathrm{ELTT}_l + \bar{T}_{\mathrm{CA}}^{\,i},$$
where $\mathrm{ELTT}_l$ is the expected link transmission time of link $l$ at node $i$ and $\bar{T}_{\mathrm{CA}}^{\,i}$ is the mean contention delay at node $i$. As a result, route selection using the EDM finds the path with the lowest end-to-end service delay in terms of the current network load. In addition, a routing protocol using this metric can simultaneously perform traffic load balancing.

3.2.2. Estimation of $\mathrm{ELTT}_l$

$\mathrm{ELTT}_l$ is defined as the link transmission time spent in sending a packet over link $l$ at node $i$. This measure is approximated and designed for ease of implementation and interoperability.

The $\mathrm{ELTT}_l$ of each link is calculated as
$$\mathrm{ELTT}_l = O + \bar{T}_{\mathrm{CA}} + \frac{S}{r_l\,\bigl(1 - e_l(S)\bigr)},$$
where $O$ is the control overhead, $\bar{T}_{\mathrm{CA}}$ is the mean contention delay, and the input parameters $r_l$ and $e_l(S)$ are the bit rate in Mb/s and the frame error rate of link $l$ for frame size $S$, respectively.
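
As a worked illustration, the sketch below follows the per-link and per-hop expressions reconstructed above (control overhead, mean contention delay, bit rate, and frame error rate as inputs) and sums the per-hop delays along a path; all numerical values are hypothetical.

```python
def eltt(overhead_s, contention_s, frame_bits, bit_rate_bps, fer):
    """Expected link transmission time: overhead plus contention plus the
    frame airtime inflated by the frame error rate."""
    return overhead_s + contention_s + frame_bits / (bit_rate_bps * (1.0 - fer))

def per_hop_delay(backlog_pkts, link_eltt, contention_s):
    """Expected time to serve all packets backlogged on the link, plus the
    mean contention delay at the node."""
    return backlog_pkts * link_eltt + contention_s

def edm(hops):
    """End-to-end service delay of a path: sum of per-hop service delays.
    hops: one (backlog, eltt, contention) tuple per relay node."""
    return sum(per_hop_delay(n, t, c) for n, t, c in hops)

# Two-hop path: 3 and 5 backlogged 1500-byte packets on 11 Mb/s links with
# a 2% frame error rate (all values hypothetical).
link = eltt(overhead_s=2e-4, contention_s=5e-4, frame_bits=1500 * 8,
            bit_rate_bps=11e6, fer=0.02)
print(edm([(3, link, 5e-4), (5, link, 5e-4)]))
```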

3.3. Route Metric

A combined route metric (RM) [12] is proposed which includes both the TIM and EDM metrics for efficient route selection:
$$\mathrm{RM} = \alpha \cdot \mathrm{TIM} + \beta \cdot \mathrm{EDM}.$$

Here $\alpha$ and $\beta$ are the normalizing factors for TIM and EDM, whose values range from 0 to 1. The normalizing factors $\alpha$ and $\beta$ are chosen based on the weightage of interference or delay. Initially both are assigned equal values of 0.5. When the interference is higher, the value of $\alpha$ is adaptively increased and $\beta$ is decreased; similarly, when the delay is higher, $\alpha$ can be decreased and $\beta$ increased. A path with the least value of RM is then selected by exchanging RREQ and RREP packets. To allocate bandwidth for this selected path, a bandwidth allocation technique is given in the next section.
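
A small sketch of the combined metric and the least-RM path choice follows, assuming the weighted-sum form given above; the candidate paths and their per-path TIM and EDM values are hypothetical.

```python
def route_metric(tim_value, edm_value, alpha=0.5, beta=0.5):
    """Combined routing metric RM = alpha*TIM + beta*EDM. alpha and beta
    are the normalizing factors (0.5 each initially); alpha is raised when
    interference dominates and beta when delay dominates."""
    return alpha * tim_value + beta * edm_value

def select_path(candidates, alpha=0.5, beta=0.5):
    """Return the candidate path with the least RM value.
    candidates: dict mapping a path id to its (TIM, EDM) pair."""
    return min(candidates,
               key=lambda p: route_metric(*candidates[p], alpha, beta))

# Hypothetical paths discovered through the RREQ/RREP exchange.
paths = {"S-A-D": (0.8, 0.020), "S-B-D": (0.5, 0.035)}
print(select_path(paths))                        # balanced weighting
print(select_path(paths, alpha=0.7, beta=0.3))   # interference-heavy case
```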

3.4. Bandwidth Allocation Technique

Our bandwidth allocation is based on the fair end-to-end bandwidth allocation (FEBA) approach, which supports differentiated services for traffic flows. Consider a node $x$ maintaining two virtual queues towards any of its neighbor nodes $y$: the requesting queue and the granting queue. The occupancy of the requesting queue is the total amount of backlogged bytes at node $x$ directed to its neighbor $y$; conversely, the occupancy of the granting queue is the total amount of data enqueued at node $y$ and directed to node $x$. The mechanism works by allocating requests and grants dynamically based on the current status of the traffic load and the physical transmission rates. In the IEEE 802.16 bandwidth allocation process, requests and grants are expressed in units of slots.

In Figure 1, node X sends a bandwidth request to node Y. Based on the current status of the traffic load and the physical transmission rates, node Y responds to node X and grants it suitable bandwidth slots. In the same way, node Z requests bandwidth from node X, and node X grants it suitable bandwidth slots.
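
The request/grant exchange of Figure 1 can be sketched as follows; the class, the per-frame slot budget, and the simple "grant up to what is free" policy are illustrative assumptions, not the full FEBA negotiation.

```python
class MeshNode:
    """A node keeps two virtual queues per neighbor: a requesting queue
    (backlogged bytes it wants to send to that neighbor) and a granting
    queue (bytes enqueued at the neighbor and directed to this node)."""

    def __init__(self, name, free_slots=100, bytes_per_slot=96):
        self.name = name
        self.requesting = {}          # neighbor name -> backlogged bytes
        self.granting = {}            # neighbor name -> bytes awaiting a grant
        self.free_slots = free_slots  # minislots still available this frame
        self.bytes_per_slot = bytes_per_slot

    def request(self, neighbor, backlog_bytes):
        """Issue a bandwidth request; demands are expressed in slots."""
        self.requesting[neighbor.name] = backlog_bytes
        slots_needed = -(-backlog_bytes // neighbor.bytes_per_slot)  # ceiling
        return neighbor.grant(self.name, slots_needed)

    def grant(self, requester, slots_needed):
        """Grant slots up to what is currently available (simplified)."""
        self.granting[requester] = slots_needed * self.bytes_per_slot
        granted = min(slots_needed, self.free_slots)
        self.free_slots -= granted
        return granted

# As in Figure 1: X requests bandwidth from Y, then Z requests from X.
x, y, z = MeshNode("X"), MeshNode("Y"), MeshNode("Z")
print(x.request(y, 4800))  # slots granted by Y to X
print(z.request(x, 2400))  # slots granted by X to Z
```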

3.5. Estimation of the Bandwidth

Here both requesting and granting queues are assigned a weight which is used by the bandwidth request/grant procedure. The weight $\phi_q$ of any queue $q$ is calculated so that the amount of service is proportional to the number of traffic flows under service, weighted based on their priorities:
$$\phi_q = \sum_{f \in F} w_f \cdot \mathbb{1}_q(f),$$
where $F$ is the set of all active traffic flows served by this node and $f$ is an active flow with priority $w_f$.

$\mathbb{1}_q(f)$ is an indicator function which equals 1 if $f$ is under service at queue $q$, and 0 otherwise. To provide suitable bandwidth according to the path conditions and the variations in the traffic flow, during this bandwidth allocation for any queue we also consider the traffic interference metric (TIM), modifying the weight as
$$\phi_q' = \mathrm{TIM} \cdot \sum_{f \in F} w_f \cdot \mathbb{1}_q(f),$$
where TIM (the traffic interference metric) is estimated as in [12].
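
Following the weight expressions reconstructed above, the sketch below computes the per-queue weight from the priorities of the flows under service, scales it by TIM, and splits the available slots proportionally; the flow priorities, queue names, and TIM values are hypothetical.

```python
def queue_weight(flow_priorities, served_at_queue):
    """Base weight of a queue: sum of the priorities of the active flows
    currently under service at that queue (the indicator function)."""
    return sum(w for f, w in flow_priorities.items() if f in served_at_queue)

def tliba_weight(flow_priorities, served_at_queue, tim_value):
    """TLIBA weight: the base weight scaled by the traffic interference
    metric of the corresponding link."""
    return tim_value * queue_weight(flow_priorities, served_at_queue)

def allocate_slots(total_slots, weights):
    """Split the available slots among the queues in proportion to weight."""
    total = sum(weights.values())
    return {q: round(total_slots * w / total) for q, w in weights.items()}

# A node serving three prioritized flows over two outgoing queues.
flows = {"voip": 3.0, "vod1": 1.0, "vod2": 1.0}
weights = {
    "queue_to_B": tliba_weight(flows, {"voip", "vod1"}, tim_value=0.8),
    "queue_to_C": tliba_weight(flows, {"vod2"}, tim_value=1.5),
}
print(allocate_slots(100, weights))
```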

There are some advantages to considering the TIM while allocating the bandwidth: FEBA can tackle the spatial bias problem by keeping separate queues at every node for each traversing traffic flow, and it can also provide differentiated services according to the traffic flow. By considering the variations in the traffic flow, FEBA can adjust to short-term changes in the network.

Algorithm 1.
(1) Start
(2) Estimation of TIM
(3) If load > threshold
(4) If load ≤ threshold
(5) Estimation of the EDM
(6)
(7) If load > threshold, EDM is MAX; else EDM is MIN
(8) Estimation of FEBA
(9) Allocation of the bandwidth is directly proportional to TIM
(10) End

In the above algorithm, the TIM is first found by considering the set of interfering neighbor nodes [12] between the source and the destination nodes and the difference in transmission rate and loss ratio of links. Then the end-to-end delay metric EDM is calculated using the time required for the transmission. The EDM is directly proportional to the traffic load; that is, if the load is more than the considered threshold value, then the EDM also increases. Bandwidth is then allocated to the path with the lowest delay. During this bandwidth allocation we consider the TIM; depending on the TIM, the bandwidth is allocated for the particular transmission.
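
The following is a minimal control-flow sketch of Algorithm 1 under the interpretation given above (load compared against a threshold, least-RM path selected, grant scaled by TIM); the input format and the numerical values are hypothetical.

```python
def tliba(paths, load_threshold, total_slots):
    """paths: path id -> dict with 'tim', 'load', and per-hop delays.
    Returns the selected path and the slots granted to it."""
    scored = {}
    for pid, p in paths.items():
        tim = p["tim"]                         # step (2): estimate TIM
        edm = sum(p["hop_delays"])             # step (5): estimate EDM
        if p["load"] > load_threshold:         # step (7): heavy load raises EDM
            edm *= 1.0 + p["load"]
        scored[pid] = 0.5 * tim + 0.5 * edm    # combined RM, equal weighting
    best = min(scored, key=scored.get)         # least-RM path wins
    # step (9): the grant is taken proportional to the path's TIM value.
    granted = round(total_slots * min(1.0, paths[best]["tim"]))
    return best, granted

print(tliba({"p1": {"tim": 0.6, "load": 0.3, "hop_delays": [0.010, 0.020]},
             "p2": {"tim": 0.9, "load": 0.7, "hop_delays": [0.010, 0.010]}},
            load_threshold=0.5, total_slots=100))
```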

4. Simulation Results

4.1. Performance Metrics

We compare our traffic load and interference based bandwidth allocation (TLIBA) technique with the FEBA [5] technique. We evaluate the performance mainly according to the following metrics, by varying the number of traffic flows and the traffic rate.

Average end-to-end delay: the end-to-end delay averaged over all surviving data packets from the sources to the destinations.

Received bandwidth: the bandwidth measured at each receiver, expressed in Mb/s.

Fairness: the average number of received packets at each receiver.
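
For reproducibility, the three metrics can be computed from per-packet delivery records as sketched below; the record format is an assumption for illustration and does not correspond to any particular NS-2 trace layout.

```python
def average_end_to_end_delay(records):
    """Mean (receive time - send time) over all delivered data packets."""
    delays = [r["recv_t"] - r["send_t"] for r in records]
    return sum(delays) / len(delays)

def received_bandwidth_mbps(records, duration_s):
    """Received bandwidth in Mb/s over the measurement interval."""
    total_bits = sum(r["size_bytes"] * 8 for r in records)
    return total_bits / duration_s / 1e6

def fairness(records, receivers):
    """Average number of received packets per receiver."""
    return len(records) / len(receivers)

# Hypothetical trace of three delivered packets over a 1 s interval.
trace = [{"send_t": 0.10, "recv_t": 0.14, "size_bytes": 1500, "dst": "n1"},
         {"send_t": 0.20, "recv_t": 0.26, "size_bytes": 1500, "dst": "n1"},
         {"send_t": 0.30, "recv_t": 0.33, "size_bytes": 1500, "dst": "n2"}]
print(average_end_to_end_delay(trace),
      received_bandwidth_mbps(trace, 1.0),
      fairness(trace, {"n1", "n2"}))
```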

4.2. Simulation Model and Parameters

We use NS-2 [18] to simulate our proposed protocol, together with the IEEE 802.16 simulator patch [19] for NS-2 version 2.33, to simulate a WiMAX mesh network. The simulator supports multiple channels and radios and different types of topologies such as chain, ring, multiring, grid, binary tree, star, hexagon, and triangular. The supported traffic types are CBR, VoIP, video on demand (VoD), and FTP. In our simulation, mobile nodes are arranged in a ring topology within a 500 m × 500 m region. We keep the number of nodes at 25. All nodes have the same transmission range of 250 m. A total of 4 traffic flows (one VoIP and three VoD) are used.
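
As a rough geometric illustration of this scenario (25 nodes, a 500 m × 500 m area, 250 m transmission range), the snippet below places the nodes on a ring and counts which pairs are within radio range; the ring radius and margin are assumed values, and this is not an NS-2 script.

```python
import math

NUM_NODES, AREA, TX_RANGE = 25, 500.0, 250.0

# Place the nodes evenly on a ring centred in the 500 m x 500 m region.
cx = cy = AREA / 2
radius = AREA / 2 - 50          # keep the ring inside the area (assumed margin)
nodes = [(cx + radius * math.cos(2 * math.pi * i / NUM_NODES),
          cy + radius * math.sin(2 * math.pi * i / NUM_NODES))
         for i in range(NUM_NODES)]

def in_range(a, b, rng=TX_RANGE):
    """Two nodes can hear each other if their distance is within range."""
    return math.dist(a, b) <= rng

# Count bidirectional links to gauge the connectivity of the topology.
links = sum(in_range(nodes[i], nodes[j])
            for i in range(NUM_NODES) for j in range(i + 1, NUM_NODES))
print(f"{NUM_NODES} nodes, {links} bidirectional links within {TX_RANGE} m")
```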

Our simulation settings and parameters are summarized in Table 1.

4.2.1. Performance Based on Traffic Flows

Initially we vary the number of traffic flows as 1, 2, 3, 4, and 5, with a packet size of 1500 bytes.

Figure 2 shows that as the number of traffic flows increases, the average end-to-end delay also increases; the delay performance of TLIBA improves significantly compared to FEBA. Figure 3 indicates that as the traffic flows increase, the received bandwidth decreases, but the proposed scheme achieves more bandwidth. Figure 4 gives the fairness of both techniques when the number of traffic flows is increased; it shows that the fairness of TLIBA is higher than that of FEBA.

4.2.2. Performance Based on Traffic Rate

In our second experiment we vary the traffic rate from 500 to 1500 Kb with 5 flows.

Figure 5 shows that when the traffic rate increases, the average end-to-end delay also increases, with the proposed TLIBA scheme showing an improvement over FEBA. Figure 6 indicates that when the traffic rate increases, the received bandwidth also increases; we can observe that TLIBA achieves more bandwidth than FEBA. Figure 7 gives the fairness of both techniques when the traffic rate is increased; it shows that the fairness of TLIBA is higher than that of FEBA.

5. Conclusion and Future Work

This paper presents a traffic load and interference based bandwidth allocation (TLIBA) technique for IEEE 802.16 mesh networks. First, we calculate the traffic interference metric (TIM), which considers the traffic load of the interfering neighbors. The end-to-end service delay metric (EDM) is calculated using the expected time spent in transmitting all packets waiting for transmission through a link. Using these metrics, a combined routing metric is defined for efficient route selection, and the best path is selected based upon the least routing metric value. For the selected path, the bandwidth is allocated using the fair end-to-end bandwidth allocation (FEBA) approach; this allocation considers the load and the traffic of the link. Frequent path changes can thus be avoided, increasing transmission efficiency. The main advantage of the proposed technique is that bandwidth can be allocated in wireless mesh networks according to the traffic load and interference of the network, which makes it easier to achieve maximum throughput during transmission. The simulation results show a substantial improvement in bandwidth utilization and throughput when compared with the existing bandwidth allocation technique. As future work, the performance evaluation can be further enhanced by considering different scenarios.