Journal of Electrical and Computer Engineering
Volume 2012 (2012), Article ID 374961, 6 pages
http://dx.doi.org/10.1155/2012/374961
Research Article

Analytical Model of a Weighted Round Robin Service System

Department of Telecommunications and Multimedia, Faculty of Electrical Engineering, University of Žilina, 010 26 Žilina, Slovakia

Received 19 September 2011; Revised 13 January 2012; Accepted 27 January 2012

Academic Editor: Yaohui Jin

Copyright © 2012 Vladimir Hottmar and Bohumil Adamec. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a mathematical description of the Weighted Round Robin (WRR) service strategy, on which the prospective QoS tools that manage congestion in converged packet networks are based. On the basis of the presented mathematical model, the operational parameters of these tools (maximum queue length, distribution of the available transfer capacity) can be configured according to the required packet delays. The implementation of the analytical model is demonstrated on a real network segment using an advanced network data traffic emulator.

1. Introduction

In converged IP networks it is necessary to support, apart from conventional data traffic, also video and voice communication. Because the IP packet architecture was not originally intended for real-time transfers, interactive communication requires solving the problem of guaranteeing the quality of multimedia services, that is, Quality of Service (QoS). Suitably implemented QoS tools ought to guarantee a certain quality level for time-sensitive interactive applications at the expense of applications that do not necessarily require real-time transfers. Without properly implemented QoS tools, the applications that do not require real-time processing might use up the available transfer capacity and thereby make the transfer of time-sensitive interactive traffic impossible. The main problem of an IP packet network is thus the appropriate support of multimedia communication taking place in real time and the related management of potentially occurring congestion. The appropriate implementation of QoS tools mainly requires a correct setting of their operational parameters. To provide high QoS in today's high-speed converged networks, the WRR mechanism assigns different priorities to different queues. WRR also ensures a fair selection interval among all active queues with minimal delay and jitter [1]. In this scheduling algorithm, a weighting coefficient for each queue determines how many bytes of data the system delivers from the queue before it moves on to the next queue. The WRR mechanism cycles through the queues: for each queue, packets are sent until the number of bytes transmitted exceeds the bandwidth determined by the queue's weighting coefficient, or until the queue is empty [2, 3]. Then the WRR mechanism moves to the next queue. If a queue is empty, WRR sends packets from the next queue that has packets ready to send.
This mechanism guarantees a minimum bandwidth to each queue and allows the minimum to be exceeded if one or more of the port's other queues are idle. However, when all the queues are loaded, each is limited to its maximum bandwidth according to its assigned weight: no queue achieves more than a predetermined portion of the transfer capacity when the transmission line is under stress [4]. WRR offers several significant benefits. The scheduling mechanism can be implemented in hardware, so it can be applied to high-speed interfaces both in the core and at the edges of the network. The WRR mechanism also ensures that all service classes have access to at least some configured amount of network capacity, avoiding bandwidth starvation. WRR queuing provides coarse control over the percentage of output port bandwidth allocated to each service class. Classification of traffic by service class provides more equitable management and more stability for network applications than the use of priorities or preferences. WRR queuing is based on the belief that resource reduction is a better mechanism to control congestion than resource denial [1, 2]. Further information about WRR models and a detailed description of the state of the art can be found in [4].
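The byte-counting service described above can be sketched as follows. This is a minimal illustration of the cycling rule, not the authors' implementation; the class name, quanta, and packet sizes are assumptions chosen for demonstration.

```python
from collections import deque

class WRRScheduler:
    """Byte-based Weighted Round Robin over M queues (illustrative sketch)."""

    def __init__(self, weights):
        # weights[i] = byte quantum that may be sent from queue i per cycle
        self.weights = weights
        self.queues = [deque() for _ in weights]

    def enqueue(self, i, packet_size):
        self.queues[i].append(packet_size)

    def next_cycle(self):
        """Serve one full WRR cycle; return a list of (queue, size) sent."""
        sent = []
        for i, quantum in enumerate(self.weights):
            budget = quantum
            q = self.queues[i]
            # send until the byte budget is exceeded or the queue is empty;
            # a packet in flight may overshoot the budget once, as in the text
            while q and budget > 0:
                size = q.popleft()
                budget -= size
                sent.append((i, size))
        return sent
```

With quanta of 300 and 100 bytes and 100-byte packets, one cycle serves three packets from the first queue and one from the second, reflecting the configured 3:1 bandwidth split.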

2. Analytical Modeling of Weighted Round Robin Service Strategy

In Figure 1 we can see the model of a Weighted Round Robin network node with M queues. The offered traffic with intensity λ consists of the data flows of the individual queues with intensities λ_1, λ_2, …, λ_M, of which a certain part λ_1S, λ_2S, …, λ_MS is processed by the node and a certain part λ_1L, λ_2L, …, λ_ML is discarded by the node. In the i-th queue at most N_imax packets may be stored. The average time needed to service a packet of the i-th queue is E(T_i). The service intensity of the i-th queue is μ_i. The overall transfer capacity BR of the output interface is divided among the individual queues according to the configuration [4].

Figure 1: Model of a Weighted Round Robin network node.

The proposed analytical model describes the specific conditions when the WRR service system is congested. This service mode is analyzed in order to provide a certain parameter reserve for the normal, noncongested mode: in the case of congestion, all the queues are served for their full predefined portions of the time period, so the obtained results include a certain useful reserve for the normal, noncongested mode.

Processing the traffic by the Weighted Round Robin service strategy means that the service of the individual queues is realized only during certain, in general different, time intervals Δ_i. The probability that the Weighted Round Robin network node is in the state of servicing the k-th queue is given by the relation [4]:
\[ P(i=k) = \frac{\Delta_k}{\Delta_1 + \Delta_2 + \cdots + \Delta_M} = \frac{\Delta_k}{\Delta} = \frac{BR_k}{BR}. \tag{1} \]
When W_nk is the average waiting time of a packet belonging to the k-th queue that entered the network node during the service of the n-th queue, then the average waiting time of a packet belonging to the k-th queue is in general given by
\[ W_k = W_{1k}\,P(i=1) + \cdots + W_{Mk}\,P(i=M). \tag{2} \]
In the case of two queues, the average waiting time of packets in the first queue is
\[ W_1 = W_{11}\,\frac{BR_1}{BR} + W_{21}\,\frac{BR_2}{BR}. \tag{3} \]
Figure 2 demonstrates the situation when a packet belonging to the first queue arrives during the service of the first queue. At the moment of arrival of a packet belonging to queue i=1, on average N_1 packets belonging to queue i=1 precede it. The average number of packets transferred from the moment of arrival of the packet in question to the end of interval Δ_1 of period Δ_k is N_1^(k). Up to the moment of servicing the considered packet, the entire interval Δ_1 takes place n times in total and the entire interval Δ_2 takes place (n+1) times altogether. The average waiting time of the packet in question is then given by the relation
\[ W_{11} = N_1^{(k)}\,E(T_1) + \left(N_1 - N_1^{(k)}\right) E(T_1) + (n+1)\,\Delta_2. \tag{4} \]
The average waiting time thus comprises three parts: the residual service time of the packet in service at the arrival, the service time of the class-1 packets in the queue, and the total intervals assigned to class-2 packets. As shown in Figure 2, the residual service time is included in the variable N_1^(k). In general, this variable takes noninteger values; its fractional part represents the residual service time of the packet in service at the moment of arrival.

Figure 2: Time conditions at the arrival of an i=1 packet during servicing of the i=1 queue.

Since the considered packet may arrive at any moment of interval Δ_1 of period Δ_k with equal probability, on average it arrives at the moment Δ_1/2, and during period Δ_k it spends the time Δ_1/2 in the queue. During this time, on average N_1^(k) packets are processed between the arrival of the given packet and the end of interval Δ_1 of period Δ_k. It holds that
\[ N_1^{(k)} = \frac{\Delta_1}{2\,E(T_1)}. \tag{5} \]

After the passage of period Δ_k, on average N_1 − N_1^(k) packets of category i=1 remain queued before the considered packet. A certain part of these packets is processed during n whole intervals Δ_1 of periods Δ_{k+1} to Δ_{k+n}, and the residual part is processed during interval Δ_1 of period Δ_{k+n+1} together with the considered packet. Between the end of period Δ_k and the transfer of the considered packet, interval Δ_1 takes place n + ε times in total. The variable ε is the mean value of the proportion of interval Δ_1 of period Δ_{k+n+1} that is necessary for processing the residual part of the packets. It can be expressed as
\[ n + \varepsilon = \frac{N_1 - N_1^{(k)}}{\Delta_1 / E(T_1)} = \frac{N_1\,E(T_1)}{\Delta_1} - \frac{1}{2}. \tag{6} \]
During one interval Δ_1, up to Δ_1/E(T_1) packets of category i=1 are serviced. Thus, during interval Δ_1 of period Δ_{k+n+1}, up to Δ_1/E(T_1) − 1 packets may be serviced before the considered packet, and on average (1/2)[Δ_1/E(T_1) − 1] packets are serviced. Since E(T_1)/Δ_1 is the proportion of interval Δ_1 needed to transfer one packet, the mean value of the proportion of interval Δ_1 of period Δ_{k+n+1} needed to transfer the residual part of the packets obeys
\[ \varepsilon = \frac{E(T_1)}{2\,\Delta_1}\left(\frac{\Delta_1}{E(T_1)} - 1\right) = \frac{1}{2}\left(1 - \frac{E(T_1)}{\Delta_1}\right). \tag{7} \]
It then follows that the number of whole intervals Δ_1 taking place between the arrival of the considered packet and the start of its servicing is
\[ n = \frac{E(T_1)}{\Delta_1}\left(N_1 + \frac{1}{2}\right) - 1. \tag{8} \]
After substituting (5) and (8) into (4), the result is
\[ W_{11} = N_1\,\frac{\Delta_1 + \Delta_2}{\Delta_1}\,E(T_1) + \frac{\Delta_2\,E(T_1)}{2\,\Delta_1}. \tag{9} \]
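The substitution of (5) and (8) into (4) can be checked numerically. The following sketch (with arbitrary illustrative parameter values, not taken from the paper) compares the stepwise form (4) with the closed form (9):

```python
# Numeric check that substituting (5) and (8) into (4) reproduces (9).
def w11_stepwise(N1, d1, d2, ET1):
    n1k = d1 / (2 * ET1)                 # relation (5)
    n = (ET1 / d1) * (N1 + 0.5) - 1      # relation (8)
    # relation (4): residual wait + remaining class-1 service + class-2 intervals
    return n1k * ET1 + (N1 - n1k) * ET1 + (n + 1) * d2

def w11_closed(N1, d1, d2, ET1):
    # relation (9)
    return N1 * (d1 + d2) / d1 * ET1 + d2 * ET1 / (2 * d1)

# arbitrary (N1, Delta_1, Delta_2, E(T_1)) combinations
for N1, d1, d2, ET1 in [(4.0, 0.3, 0.7, 0.05), (10.0, 0.5, 0.5, 0.02)]:
    assert abs(w11_stepwise(N1, d1, d2, ET1) - w11_closed(N1, d1, d2, ET1)) < 1e-12
```

The two forms agree exactly, since (9) is an algebraic rearrangement of (4) once the average values (5) and (8) are inserted.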

The situation when a packet belonging to the first queue arrives during the servicing of the second queue can be analyzed in the same way. At the arrival moment of an i=1 packet, on average N_1 other packets are in the i=1 queue before it. The average number of packets processed between the arrival of the considered packet and the expiration of period Δ_k is N_2^(k). Until the moment of servicing the considered packet, the entire interval Δ_1 takes place n times and the entire interval Δ_2 takes place n times as well. It follows that the average waiting time of the considered packet is given by the relation
\[ W_{21} = N_2^{(k)}\,E(T_2) + N_1\,E(T_1) + n\,\Delta_2. \tag{10} \]
Using the same method as before, we obtain
\[ W_{21} = N_1\,\frac{\Delta_1 + \Delta_2}{\Delta_1}\,E(T_1) + \frac{\Delta_2\,E(T_1)}{2\,\Delta_1} = W_{11}. \tag{11} \]
By substituting (9) and (11) into expression (3), we obtain
\[ W_1 = N_1\,\frac{\Delta_1 + \Delta_2}{\Delta_1}\,E(T_1) + \frac{\Delta_2\,E(T_1)}{2\,\Delta_1}. \tag{12} \]
The relation between the average number of packets in the queue N_1, the average waiting time of packets W_1, and the processed intensity of the entry flow λ_1S is expressed by Little's formula:
\[ N_1 = \lambda_{1S}\,W_1. \tag{13} \]

After substituting Little's formula (13) into relation (12), we obtain
\[ W_1 = \frac{\Delta_2\,E(T_1)/\left(2\,\Delta_1\right)}{1 - \lambda_{1S}\,E(T_1)\left(1 + \Delta_2/\Delta_1\right)}. \tag{14} \]
After substituting the relation between the average time needed to transfer a packet and the service intensity, E(T_1) = BR_1/(BR·μ_1), into the denominator of (14), introducing the load coefficient of the processed traffic ρ_1S = λ_1S/μ_1, and finally substituting the relation between the average packet size and the average time needed to transfer it, E(T_1) = E(X_1)/BR, into the numerator of (14), the result is
\[ W_1 = \frac{BR_2\,E(X_1)}{2\,BR_1\left(1 - \rho_{1S}\right) BR}. \tag{15} \]
From expression (15) it is obvious that the average waiting time W_1 of packets belonging to the first queue grows linearly with their average size E(X_1): the bigger the packets being sent, the longer their average waiting time W_1. Furthermore, the average waiting time W_1 depends on the ratio of the guaranteed proportions BR_1 and BR_2 of the overall capacity BR: the bigger the proportion BR_1, the smaller the average waiting time W_1.
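Relation (15) can be evaluated directly. The sketch below uses the parameters of Figure 3 (ρ_1S = 0.93, BR = 256 kbps) together with the average voice packet size E(X_1) = 1712 bit reported in Section 3; the 30% share is an assumed example value, and yields an average delay of roughly 111 ms:

```python
# Average class-1 waiting time from the closed form (15).
def w1_avg(ex1_bits, br1_bps, br_bps, rho1s):
    """Average waiting time (s) of class-1 packets, relation (15)."""
    br2 = br_bps - br1_bps                      # remaining share BR_2 = BR - BR_1
    return br2 * ex1_bits / (2.0 * br1_bps * (1.0 - rho1s) * br_bps)

br = 256_000.0                                  # BR = 256 kbps, as in Figure 3
w = w1_avg(ex1_bits=1712.0, br1_bps=0.30 * br, br_bps=br, rho1s=0.93)
```

The trends described above are visible directly in the formula: W_1 is proportional to E(X_1) and decreases as the guaranteed share BR_1 grows.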

Figure 3 demonstrates the graphic interpretation of relation (15) for ρ_1S = 0.93 and BR = 256 kbps. From this figure it is obvious that the average waiting time W_1 grows most rapidly in the direction of growing average packet size E(X_1) combined with the direction of decreasing portion BR_1. The smaller the portion BR_1 is, the steeper the increase of the average waiting time W_1 with increasing average packet size E(X_1); the bigger the average packet size E(X_1) is, the steeper the increase of the average waiting time W_1 with decreasing portion BR_1.

Figure 3: Average waiting time of i=1 packets for ρ_1S = 0.93 and BR = 256 kbps.
Figure 4: Time conditions at the arrival of an i=1 packet waiting for the period W_1max.

The relation for the average waiting time can be generalized using mathematical induction. For the average waiting time W_i of packets of the i-th queue, it holds that
\[ W_i = \frac{\left(BR - BR_i\right) E(X_i)}{2\,BR_i \left(1 - \rho_{iS}\right) BR}. \tag{16} \]

The maximum queue length N_1max can be expressed using the required value of the maximum delay of i=1 packets [5].

The maximum waiting time W_1max is reached by an i=1 packet when it enters the system just at the moment when the service of its queue has finished and exactly N_1max − 1 packets are located before it, so that the considered packet takes the last position in the queue. For the maximum waiting time W_1max of a packet belonging to the first queue, according to Figure 4 it holds that
\[ W_{1\max} = (n+1)\,\Delta_2 + \left(N_{1\max} - 1\right) E(T_1). \tag{17} \]
Expressing n analogously to (8) and solving for the maximum queue length N_1max, we get
\[ N_{1\max} = \frac{W_{1\max} + \dfrac{E(X_1)}{BR}\left(1 + \dfrac{BR_2}{2\,BR_1}\right)}{\dfrac{E(X_1)}{BR}\left(1 + \dfrac{BR_2}{BR_1}\right)}. \tag{18} \]

The relation for the maximum queue length can be generalized using mathematical induction. For the maximum queue length of packets of the i-th queue, it holds that
\[ N_{i\max} = \frac{W_{i\max} + \dfrac{E(X_i)}{BR}\left(1 + \dfrac{BR - BR_i}{2\,BR_i}\right)}{\dfrac{E(X_i)}{BR}\left(1 + \dfrac{BR - BR_i}{BR_i}\right)}. \tag{19} \]
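As a consistency check, relation (19) can be evaluated with the configuration values reported later in Section 3 (BR = 256 kbps, shares of approximately 30% and 45%, peak delays of 700 ms and 8700 ms); the results agree with the queue lengths of about 32 and 128 packets quoted there. No new data is introduced here.

```python
# Maximum queue length for the i-th class from relation (19).
def n_max(w_max_s, ex_bits, br_i_bps, br_bps):
    """Maximum queue length (packets), relation (19)."""
    t = ex_bits / br_bps                      # E(X_i)/BR
    ratio = (br_bps - br_i_bps) / br_i_bps    # (BR - BR_i)/BR_i
    return (w_max_s + t * (1.0 + ratio / 2.0)) / (t * (1.0 + ratio))

br = 256_000.0
n1 = n_max(0.700, 1712.0, 0.30 * br, br)   # voice class: about 32 packets
n2 = n_max(8.700, 7808.0, 0.45 * br, br)   # video class: about 129 packets
```

The video value comes out near 129 before rounding, consistent with the approximately 128 packets stated in Section 3.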

3. Implementation of the Mathematical Model and Experimental Results

The derived mathematical model of a WRR service system was used to configure a real network architecture emulating a large company (campus) LAN. The created emulation model consists of two segments: LAN 1 (client-side segment) and LAN 2 (server-side segment). These network parts represent distant individual segments of a large company network. The interconnection of the individual network segments is created by a WAN link emulated by a PPP multilink. The data traffic is emulated by a multifunctional hardware network emulator. Monitoring of the data traffic processing is realized by computers and by switches that perform port mirroring.

Undesirable congestion occurs especially during the transfer of data traffic from a high-speed LAN link to a low-speed WAN link [5]. In order to accomplish reliable voice and video communication, it is necessary to configure in the appropriate network devices suitable QoS tools that guarantee the required transfer quality of multimedia communication [6].

The generated data traffic operations are specified in the emulator event list. Each line in this list represents one required transaction. The emulated users generate load according to the event list by progressively carrying out all the events defined in the list. For the needs of the emulation, four individual event lists were created: according to the first list SIP VoIP traffic is generated, according to the second list RTSP video traffic, according to the third list SMTP electronic mail traffic, and according to the last list FTP file transfer traffic.

The simplest and in most cases default mechanism of traffic processing is the FIFO strategy; this elementary packet processing technique was used first. For the purpose of a detailed statistical analysis of the transported data traffic, an additional software application was created that calculates particular statistical parameters, such as the empirical probability distribution of the delays of transported packets. Figure 5 shows the FIFO/CBWFQ results for voice and video data traffic. The average delay of VoIP RTP packets, W_1 = 652 ms, is unsatisfactory; similarly, the average delay of video RTP packets, W_2 = 579 ms, is unsatisfactory. Since the processing of interactive packets by the FIFO strategy is unsatisfactory from the point of view of the quality parameters, it is necessary to implement a more sophisticated mechanism such as CBWFQ. Individual operation classes must be created for interactive voice and video packets: interactive voice RTP packets belong to the operation class named voice-traffic, and video RTP packets belong to the operation class named video-traffic. For the individual operation classes, it is necessary to configure the operational parameters, namely the minimum guaranteed bandwidth and the maximum queue length. These parameters are set according to the derived mathematical model.

Figure 5: Empirical delay probability distribution for the FIFO/CBWFQ strategy.

Based on the statistically determined data traffic parameters (average packet sizes E(X_1) = 1712 bit and E(X_2) = 7808 bit; load coefficients of the processed traffic ρ_1S = 0.93 and ρ_2S = 0.88), the required average delay of voice RTP packets is set to 100 ms with a peak of 700 ms, and the required average delay of video RTP packets to 150 ms with a peak of 8700 ms.

For the guaranteed proportion of the total available capacity BR = 256 kbps, relation (16) gives (BR_1/BR)·100% ≈ 30% for the first operation class and (BR_2/BR)·100% ≈ 45% for the second class.
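Rearranging (16) for the bandwidth share gives a direct way to compute these proportions from the target delays and the measured traffic parameters. The sketch below reproduces values close to the quoted 30% and 45%; the exact figures come out near 32% and 46% before rounding:

```python
# Bandwidth share BR_i/BR needed to meet a target average delay,
# obtained by algebraically inverting relation (16).
def required_share(w_target_s, ex_bits, br_bps, rho_is):
    """Smallest BR_i/BR for which relation (16) gives W_i <= w_target_s."""
    # From (16): (BR - BR_i)/BR_i = 2 * W_i * (1 - rho_iS) * BR / E(X_i)
    r = 2.0 * w_target_s * (1.0 - rho_is) * br_bps / ex_bits
    return 1.0 / (1.0 + r)

br = 256_000.0
share1 = required_share(0.100, 1712.0, br, 0.93)   # voice class, ~0.32
share2 = required_share(0.150, 7808.0, br, 0.88)   # video class, ~0.46
```

Any share at or above these values keeps the average delay of the class within its requirement according to (16).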

For the queue length, relation (19) gives N_1max ≈ 32 packets for the first operation class and N_2max ≈ 128 packets for the second class.

All other packets are processed without any QoS guarantees as best-effort traffic; therefore, setting operational parameters for this class makes no sense [7, 8].

Figure 5 also shows the results for the FIFO/CBWFQ strategy. Compared with the FIFO strategy, the average delay of VoIP RTP packets decreased from 652 ms to 112 ms, and the average delay of video RTP packets decreased from 579 ms to 153 ms. The average delays obtained by emulation are approximately in agreement with the required values; the deviations are mostly caused by the finite number of analyzed packets. The considerably lower peak delay W_2max obtained by emulation, compared with the proposed required value, results from the fact that the second queue was never completely filled during the emulation. From Figure 5 it is further apparent that approximately 73% of the voice RTP packets have an acceptable delay in the range from 0 ms to 157 ms, 25% of these packets have a borderline delay between 157 ms and 392 ms, and only 2% have an unsatisfactory delay of 392 ms or more. Similarly, approximately 73% of the video RTP packets have a delay between 0 ms and 134 ms, 20% between 134 ms and 401 ms, and only 7% a delay of 401 ms or more.

References

  1. H. M. Chaskar and U. Madhow, “Fair scheduling with tunable latency: a Round Robin approach,” in Proceedings of the IEEE Global Telecommunication Conference (GLOBECOM '99), pp. 1328–1333, December 1999.
  2. A. Orda and A. Sprintson, “Precomputation schemes for QoS routing,” IEEE/ACM Transactions on Networking, vol. 11, no. 4, pp. 578–591, 2003.
  3. A. Orda, “Routing with end-to-end QoS guarantees in broadband networks,” IEEE/ACM Transactions on Networking, vol. 7, no. 3, pp. 365–374, 1999.
  4. B. Adamec, Design of Optimization Model for Data Communication Management in Campus LAN, Ph.D. dissertation, Department of Telecommunications and Multimedia, Faculty of Electrical Engineering, University of Žilina, Žilina, Slovakia, 2010.
  5. X. Liu, K. Ravindran, and D. Loguinov, “A queueing-theoretic foundation of available bandwidth estimation: single-hop analysis,” IEEE/ACM Transactions on Networking, vol. 15, no. 4, pp. 918–931, 2007.
  6. V. Paxson, “End-to-end Internet packet dynamics,” IEEE/ACM Transactions on Networking, vol. 7, no. 3, pp. 277–292, 1999.
  7. E. Muhammad and S. Stidham, Sample-Path Analysis of Queuing Systems, Kluwer Academic, Boston, Mass, USA, 1999.
  8. C. W. Chan, Performance Analysis of Telecommunications and Local Area Networks, Kluwer Academic, Norwell, Mass, USA, 2000.