Abstract
Fluid queues offer a natural framework for analysing waiting times in a relay node of an ad hoc network. Because of the resource sharing policy applied, the input and output of these queues are coupled. More specifically, when there are users who wish to transmit data through a specific node, each of them obtains a share of the service capacity to feed traffic into the queue of the node, whereas the remaining fraction is used to serve the queue; here is a free design parameter. Assume now that jobs arrive at the relay node according to a Poisson process, and that they bring along exponentially distributed amounts of data. The case has been addressed before; the present paper focuses on the intrinsically harder case , that is, policies that give more weight to serving the queue. Four performance metrics are considered: (i) the stationary workload of the queue, (ii) the queueing delay, that is, the delay of a "packet" (a fluid particle) that arrives at an arbitrary point in time, (iii) the flow transfer delay, and (iv) the sojourn time, that is, the flow transfer time increased by the time it takes before the last fluid particle of the flow is served. We explicitly compute the Laplace transforms of these random variables.
1. Introduction
Ad hoc networks are self-configuring networks of mobile routers, connected by wireless links. They enable infrastructure-free communication: no fixed equipment is needed, but instead each client acts as a hub. When information needs to be transmitted across the network, it is sent from the sender to the receiver by relaying the packets along the intermediate hubs. An excellent survey on ad hoc networks, with special emphasis on quality-of-service aspects, is [1].
On a somewhat more abstract level the nodes in ad hoc networks can be regarded as queues: information packets arrive and are relayed, and when the arrival rate (temporarily) exceeds the departure rate, the buffer content of the queue builds up. These queues, however, have the interesting modelling feature that the available transmission capacity at any specific node is used both to (i) "pull" information packets from the "predecessor hubs" into the queue and (ii) "push" information packets from the queue towards "successor hubs" (and eventually the destination client).
Now, consider the situation that at some point in time there are stations that transmit traffic via the same relay node. If the standard resource sharing policy is used, then the relay node is assigned a share of the available medium capacity, which is the same fraction as is allocated to each of the "sending nodes." In other words, as soon as , the node's input rate exceeds its output rate, and hence the excess traffic accumulates in the node's buffer; only when the queue drains. This explains why relay nodes are prone to becoming bottlenecks.
The above observations have motivated the development of alternative resource sharing policies that assign more "weight" to serving the relay node; see for instance [2] and references therein. With the so-called enhanced distributed channel access (EDCA) protocol, it is possible to set a parameter such that each of the sending nodes obtains a fraction of the capacity, while the remaining is allocated to serving the queue. Clearly, when this has a benign effect on the buffer content of the queue, compared to the standard resource sharing policy described above: the queue drains for all (rather than just for ). The price paid is that the traffic remains longer at the sending nodes.
There is one important modelling feature, applying to the case , that needs to be mentioned here. When the queue is empty and the number of sending nodes is below , it does not make sense to assign each of the sending nodes just a share : it would imply that the (available) service rate is strictly larger than the input rate, and that a fraction is left unused. For that reason, the EDCA protocol (IEEE 802.11E) was augmented with an "idle mode": if the queue is idle and , then half of the capacity is allocated to serving the queue of the relay node, whereas the other half is shared equally among the sending nodes (so that the input and output rates are equal, the queue remains empty, and all available capacity is used). Notice that when this special rate allocation (during periods in which the buffer is empty) is not required, as it cannot be that both the buffer is empty and there are jobs in the system.
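To make the allocation concrete, the following Python sketch implements the rate-splitting rule just described. All identifiers are ours (the paper's symbols are not reproduced in this version): alpha denotes the weight of the relay node's queue, n the number of sending nodes, and capacity the total medium capacity; the threshold n < alpha for entering the idle mode is our reading of the text.

```python
def rates(n, queue_empty, alpha, capacity=1.0):
    """Per-flow input rate and queue service rate under the weighted
    resource sharing policy (hypothetical notation)."""
    if n == 0:
        return 0.0, capacity              # no senders: full rate drains the queue
    if queue_empty and n < alpha:
        # "idle mode": half the capacity serves the queue, the other half
        # is shared by the n senders, so input and output rates match and
        # the queue stays empty while no capacity is wasted.
        return capacity / (2 * n), capacity / 2
    share = capacity / (n + alpha)        # weighted processor sharing
    return share, alpha * share
```

In the idle mode the aggregate input rate n * capacity/(2n) indeed equals the service rate capacity/2, which is exactly the balance condition stated above.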
The case , corresponding to the standard resource sharing policy, was proposed and analysed in detail in [3, 4]. Seen from a queueing-theoretic perspective, this is a model βwith coupled input and output,β in that the capacity is shared between the input and output process. It was assumed that flows (or jobs) arrive at the relay node according to a Poisson process and initiate a data transfer. For the special case of exponentially distributed flow sizes, [4] explicitly gave the Laplace transforms (and tail asymptotics) of several performance measures.
Importantly, the case (and in fact also ) has nice features that are lost for . As a consequence, the analysis of [4] for does not carry over to this setting. The main differences are the following.
(i) In the first place, as we explained, the case does not require the "idle mode" rate allocation introduced above (during periods in which the buffer is empty). As a consequence, the number of flows present evolves independently of the buffer content process (and hence the distribution of the number of flows present can be computed independently of the distribution of the buffer content of the queue). This nice property is lost in the case ; one could say that there is then some sort of "feedback" from the buffer content to the flows, in the sense that the buffer content has impact on the flow behaviour, and hence the distributions of the number of sources present and the buffer content cannot be determined separately.
(ii) In the second place, suppose we wish to analyse the flow transfer delay, defined as the time between the arrival of the flow and the epoch that its last fluid particle has been transmitted into the queue. We know that for the queue cannot become empty during this flow transfer. Therefore, all the "state information" that we have to keep track of is the number of flows present when entering (and not the buffer content); for the queue can become empty, and therefore we have to take into account the buffer content, as seen upon arrival, as well.
(iii) In the case , the buffer content decreases only during periods in which there are no flows in the system, and these intervals are exponentially distributed; it turned out in [4] that this property permits a direct translation in terms of a related classical M/G/1 queueing model. For , the periods of net output are not exponentially distributed, and this nice feature is therefore lost.
The above explains why the analysis for the case considered here is considerably more involved.
For various values of , the model described above has been extensively validated in [2] by ad hoc network simulations that include all the details of the widely used IEEE 802.11 MAC protocol. As indicated above, the alternative resource sharing policies can be enforced in real systems by deploying the recently standardised IEEE 802.11E protocol; [2] also indicates how to map the parameter settings of IEEE 802.11E onto our model parameters.
The goal of the present paper is to extend the results of [4] to the case considered here. As in [4], four performance metrics are considered: (i) the stationary workload of the queue, (ii) the queueing delay, that is, the delay of a "packet" (a fluid particle) that arrives at the queue at an arbitrary point in time, (iii) the flow transfer delay, that is, the time elapsed between arrival of a flow and the epoch that all its traffic has been put into the queue, and (iv) the sojourn time, that is, the flow transfer time increased by the time it takes before the last fluid particle of the flow is served.
We introduce the model (including a graphical illustration) and notation in Section 2—it is noted that in our model an admission control policy is in force. We also present some preliminaries as well as the stability condition. In Section 3 the steady-state workload (i.e., buffer content) distribution of the queue is characterised. This is done by relying on techniques from [5–7]. Unfortunately, we cannot exploit the resemblance to a related M/G/1 queueing model, as was possible in [4]. We find the distribution function of the steady-state workload, in terms of the solution of an eigensystem and a set of linear equations. In Section 4, we study the queueing delay of a tagged fluid particle that arrived at time 0. A full characterisation of its Laplace transform can be given; the computations turn out to be relatively straightforward, based on the observation that during the queueing delay the queue cannot enter the "idle mode."
Above we already indicated that the analysis of the flow transfer delay is much more involved than for , mainly due to the fact that the buffer can become empty during the flow transfer, so that the allocation enters the "idle mode." The derivation of the Laplace transform requires the solution of various complex systems of equations. The results can be found in Section 5. In order to prove that the number of equations matches the number of unknowns, we need to show that a certain eigensystem has sufficiently many eigenvalues in the right half-plane; this we show by using an elegant and powerful lemma of Sonneveld [8].
Section 6 concentrates on the so-called sojourn time, which is defined as the flow transfer time increased by the time it takes before the last fluid particle of the flow has left the queue; in other words, the sojourn time is the time it takes for the flow to go through the relay node. Relying on the results of Section 5, the sojourn time can be decomposed into a number of known components. Section 7 concludes and identifies a number of topics for future research.
2. Model and Preliminaries
In this section, we first give a detailed description of our model and introduce notation. Then, we derive the stability condition.
2.1. Model
The following model was verbally motivated in the Introduction. Consider a queueing system at which flows arrive according to a Poisson process; each flow transmits traffic into a queue (which is served in a FIFO manner) and leaves when ready. When there are flows active and the queue is nonempty, each flow transmits traffic into the queue at rate , while a rate is used to serve the queue; as a consequence, the queue drains when the number of flows present is below . For ease, we assume that is noninteger; we return to this issue in Section 7. When there are flows active and the queue is empty, all flows transmit at rate , while the queue is served at rate , so that the queue remains empty.
Suppose that we impose the admission control policy that the system accommodates maximally flows simultaneously; in this way, each active flow is guaranteed at least a transmission rate , and the queue at least . We assume that (as otherwise the queue remains empty).
The above dynamics define a queueing process, for any given initial buffer level and initial number of flows present; we denote the buffer content at time by . We let denote the number of flows present (i.e., feeding traffic into the queue) at time . A pictorial illustration is given in Figure 1.
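Because flow sizes and interarrival times are exponential, the pair (number of flows, buffer content) is Markov, and the dynamics above can be mimicked by a small discrete-event simulation. The sketch below is ours and purely illustrative; the parameter names (alpha for the queue's weight, lam for the arrival rate, mu for the reciprocal mean flow size, nmax for the admission limit) are assumptions, not the paper's notation.

```python
import random

def simulate(alpha, lam, mu, nmax, capacity=1.0, horizon=20000.0, seed=1):
    """Crude event-driven sketch of the coupled fluid queue (Model I).
    Returns the time-average buffer content and the fraction of time
    the buffer is empty."""
    rng = random.Random(seed)
    t = buf = area = empty = 0.0
    n = 0
    while t < horizon:
        if n == 0:
            per_flow, service = 0.0, capacity
        elif buf == 0.0 and n < alpha:
            # idle mode: input and output rates matched at capacity/2
            per_flow, service = capacity / (2 * n), capacity / 2
        else:
            per_flow = capacity / (n + alpha)
            service = alpha * per_flow
        drift = n * per_flow - service       # net rate of change of buffer
        if buf == 0.0 and drift < 0.0:
            drift = 0.0                      # an empty queue stays empty
        arr = lam if n < nmax else 0.0       # admission control
        dep = n * mu * per_flow              # exponential sizes => Markovian
        dt = rng.expovariate(arr + dep)
        if drift < 0.0 and buf + drift * dt < 0.0:
            dt = buf / -drift                # advance only until buffer empties
            area += buf * dt + 0.5 * drift * dt * dt
            t += dt
            buf = 0.0                        # re-enter the loop in idle mode
            continue
        area += buf * dt + 0.5 * drift * dt * dt
        if buf == 0.0 and drift == 0.0:
            empty += dt
        t += dt
        buf += drift * dt
        if rng.random() * (arr + dep) < arr:
            n += 1
        else:
            n -= 1
    return area / t, empty / t
```

Resampling the exponential clocks when the buffer hits zero mid-interval is legitimate by memorylessness, since no arrival or departure has occurred at that instant.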
We introduce the following notation.
(i) : the "busy mode." It is not hard to see, under the assumption of exponentially distributed flow sizes (with mean ) and interarrival times (with mean ), that, during periods in which , the process behaves as a Markov chain on , with generator matrix , where ; the subscript "b" stands for "busy." We define .
When , the aggregate traffic rate generated by the flows is , while the queue's output rate is , such that the net rate of change of the queue is . Define , , and .
Busy periods are periods in which is positive all the time. With , it is evident that the number of active flows at the beginning of a busy period equals . The number of active flows at the end of a busy period is in , with
(ii) : the "idle mode." Let idle periods be periods in which all the time. An idle period ends as soon as . During the idle period, necessarily . One could say that behaves as a Markov chain on until jumps from to (i.e., the start of a new busy period). The corresponding rate matrix (which is not a bona fide generator matrix) is ; the subscript "i" stands for "idle."
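The two matrices can be written down mechanically for the rates we assumed above. The sketch below is our reconstruction (identifiers alpha, lam, mu, nmax, capacity are not the paper's symbols): in the busy mode each of the n flows transmits at capacity/(n+alpha), so completes at rate mu*capacity/(n+alpha); in the idle mode each flow transmits at capacity/(2n), and the arrival that pushes the chain above the idle set makes the last row of the idle-mode matrix sum to a negative number.

```python
import math
import numpy as np

def busy_generator(alpha, lam, mu, nmax, capacity=1.0):
    """Birth-death generator of the number of flows while the buffer is busy."""
    Q = np.zeros((nmax + 1, nmax + 1))
    for n in range(nmax + 1):
        if n < nmax:
            Q[n, n + 1] = lam                           # admission-controlled arrival
        if n > 0:
            Q[n, n - 1] = n * mu * capacity / (n + alpha)  # total completion rate
        Q[n, n] = -Q[n].sum()
    return Q

def idle_rate_matrix(alpha, lam, mu, capacity=1.0):
    """Rate matrix on the idle-mode states n = 0..floor(alpha); the arrival
    that pushes n above alpha leaves this set, so the matrix is not a
    bona fide generator (its last row does not sum to zero)."""
    k = math.floor(alpha)
    Qi = np.zeros((k + 1, k + 1))
    for n in range(k + 1):
        if n < k:
            Qi[n, n + 1] = lam
        if n > 0:
            # per-flow rate capacity/(2n), so total completion rate mu*capacity/2
            Qi[n, n - 1] = mu * capacity / 2
        Qi[n, n] = -(lam + (mu * capacity / 2 if n > 0 else 0.0))
    return Qi
```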
2.2. Stability Condition
To make sure that the steady-state workload is finite a.s., the mean drift should be negative when is large. Since behaves essentially like a stationary Markov process with generator when is large, it follows that can only escape to when , denoting by the invariant measure of . Hence, we should require that . Elementary Markov chain analysis yields that, with , , where, for , can be regarded as a "generalised binomial coefficient": . The stability condition becomes
It is instructive to show how this condition simplifies in the situation of . Due to (recognise the probability density function of the negative binomial distribution) , we have to verify whether . Using identity (2.6) again, writing , and observing that , it is readily verified that the stability requirement reduces to . In other words, for the system to be stable, it is required that (irrespective of the value of ). This result makes sense, as essentially all traffic has to be "served" twice: first it has to be transmitted from the sources to the queue, and then it has to be served by the queue.
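The stability check is easy to carry out numerically: compute the invariant measure of the (birth-death) busy-mode chain via detailed balance and verify that the mean drift is negative. This is a sketch under our assumed parametrisation (lam, mu, alpha, capacity are not the paper's symbols).

```python
def mean_drift(alpha, lam, mu, nmax, capacity=1.0):
    """Mean drift of the buffer under the busy-mode dynamics; the system
    is stable when this is negative. Uses the detailed-balance form of
    the birth-death invariant measure (our reconstruction)."""
    pi = [1.0]
    for n in range(1, nmax + 1):
        death = n * mu * capacity / (n + alpha)   # rate n -> n-1
        pi.append(pi[-1] * lam / death)           # birth rate is lam
    total = sum(pi)
    # state-n drift: n flows feed at capacity/(n+alpha), queue served at
    # alpha*capacity/(n+alpha), net rate (n-alpha)*capacity/(n+alpha)
    return sum(p / total * (n - alpha) * capacity / (n + alpha)
               for n, p in enumerate(pi))
```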
3. Buffer Content Distribution
In this section, we study the steady-state workload of the queue introduced in the previous section (jointly with the steady-state number of sources present). We do this by relating the workload of our model (to which we refer as Model I) to the workload in a slightly different system (Model II): a model in which the generator and the traffic rate matrices , , and apply also when the buffer is empty (so Model II has no "idle mode").
The procedure of relating a feedback system (Model I, in which the sources react to the buffer content) to an (easier) nonfeedback system (Model II, in which the flows behave independently of the buffer content) resembles that of [7, Section 2].
The distribution of the steady-state workload is characterised in terms of the solution of a certain eigensystem (and a number of additional linear equations). It also enables us to compute the corresponding Laplace transform, which we use several times in the next sections.
3.1. Preliminary Results
In this subsection, we consider the model without feedback, that is, Model II: the generator matrix and the traffic rate matrices , , and apply not only when the buffer content is positive, but also when the buffer is empty. We assume that the stability criterion derived above (which reduces to when ) applies. Denote by the buffer content of this system at time (where is its stationary version), and by the number of flows present at time (where is its stationary version). Define also , where . Model II has been studied extensively in the literature; we now recall a number of basic properties, which turn out to be useful when analysing Model I; see Section 3.2.
Buffer Content Distribution
It is well known from the literature how the can be determined; they obey the system of linear differential equations: . Owing to the special birth-death structure, we can use explicit results obtained by van Doorn et al. [9]. A central role in the analysis is played by the eigensystem , with eigenvectors up to and corresponding eigenvalues . Notice that , for all , so is invertible. Then, [9, Theorem 1] says that all eigenvalues are real and simple. Moreover, observe that the number of states of in which drains (or remains empty) is ; in the other the buffer level increases. Provided that the stability condition is satisfied, [9, Theorem 1] entails that there are negative eigenvalues, one eigenvalue that equals 0, and positive eigenvalues. Put the eigenvalues in increasing order; let refer to the th component of . Then the above, in conjunction with the fact that lies between 0 and 1 for all , implies that in the representation , the terms up to are 0. As and , it follows that the requirement implies . We obtain . Now, only the (for up to ) need to be determined. These follow from the fact that for . These are equations in the same number of unknowns, and can be determined explicitly in terms of , as described in [9, Section 4].
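Numerically, the eigensystem of the stationary fluid equation is a standard generalized eigenproblem. The sketch below (our notation: Q is the generator, R the diagonal matrix of net drift rates; since the weight is noninteger, no state has zero drift and R is invertible) reduces it to an ordinary left eigenproblem for Q R^{-1}, and the test illustrates the eigenvalue count quoted from [9, Theorem 1]: with three up-states and two down-states, three negative eigenvalues, one zero, and one positive.

```python
import numpy as np

def fluid_spectrum(Q, R):
    """Eigenvalues and left eigenvectors of the stationary fluid equation
    F'(x) R = F(x) Q (notation assumed). With R invertible, z Q = lam z R
    is equivalent to a left eigenproblem for Q R^{-1}. Returns eigenvalues
    sorted increasingly and the matching eigenvectors as rows."""
    M = Q @ np.linalg.inv(R)
    lam, V = np.linalg.eig(M.T)       # right eigvecs of M.T = left eigvecs of M
    order = np.argsort(lam.real)
    # the birth-death structure guarantees real, simple eigenvalues
    return lam[order].real, V[:, order].T.real
```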
Busy and Idle Periods
Elwalid and Mitra [10] give explicit expressions for a number of quantities that are related to the busy and idle periods of the queue. A busy period is, as before, defined as a period in which the buffer content is positive, whereas an idle period is a period in which the buffer is empty. It is easily seen that at the beginning of a busy period the number of flows present is equal to ; at the end of the busy period the number of flows present is at most . Denote by the distribution of the number of flows present at the end of the busy period. Let the matrices , , , be the submatrices that are obtained by partitioning into down-states (i.e., states such that ) and up-states (); similarly, is partitioned into and . Then, it is not hard to prove that ; see [10, Equation (5.9)]; denotes the inner product of two vectors. The mean idle period is given by . Finally, the mean busy period can be calculated: according to "renewal reward", , so that .
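The renewal-reward step can be made fully explicit: busy and idle periods alternate, so the long-run fraction of time the buffer is empty equals the mean idle period divided by the sum of the mean busy and idle periods. A one-line sketch (the function name and arguments are ours):

```python
def mean_busy_period(p_empty, mean_idle):
    """Invert the renewal-reward identity
    P(idle) = E[I] / (E[B] + E[I]) for the mean busy period E[B]."""
    return mean_idle * (1.0 - p_empty) / p_empty
```

For instance, if the buffer is empty a quarter of the time and idle periods last 2 on average, busy periods last 6 on average.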
3.2. Analysis of Buffer Content Distribution
Now, we turn back to Model I, as described in Section 2.1. Our goal is to show that the steady-state buffer content of Model I (in which there are different queueing dynamics when the buffer is empty or nonempty) is intimately related to the steady-state buffer content of Model II (in which there is no distinction between an empty and nonempty buffer).
Let us start by making a number of observations. First, observe that in both Model I and Model II a busy period starts with flow present. Also, the distribution of the length of the busy period is the same for both models, as is the distribution of the number of flows present at the end of the busy period. In other words, the difference between the models lies just in the duration of the idle periods. In Section 3.1, we already found the mean idle period of Model II. Let us therefore consider the mean idle period of Model I.
As in [7, Lemma 2.3], we have that the mean idle period of Model I equals . The expected amount of time during this idle time in which there are flows present, say , is , that is, the th entry of . This follows from the fact that the mean time spent in during the idle time, given that at the beginning of the idle time flows were present, satisfies the linear system, for , , with , and . We now have collected all the required elements to determine the distribution of the steady-state buffer content . Analogously to [7, Theorem 2.4] we obtain the following result.
Theorem 3.1. For all ,
Proof. This is proven as follows. We first condition on being positive or zero: . By applying the renewal-reward theorem, the latter probability can be rewritten as ; notice that these probabilities equal 0 for . Also, from "renewal reward", . Hence, we are left with determining the first probability in the right-hand side of (3.13). We first rewrite it as . Now, recall that the distribution of , conditional on , is the same as the distribution of , conditional on . Combining this with (3.15), we obtain . This proves the claim.
Upon combining the above theorem with representation (3.5), we find the following useful result.
Corollary 3.2. For all , Theorem 3.1, in conjunction with (3.5), defines numbers and (with and ) such that . Here, if and only if ; is given by . The probability of flows in the system is given by (where ). The Laplace transform of reads, for ,
4. Queueing Delay Distribution
It is clear that it is a nontrivial step to translate the steady-state workload distribution into the queueing delay distribution. Importantly, to study the delay of a fluid particle arriving at time, say, 0, the arrivals and departures of flows after 0 have impact. In Sections 4.1 and 4.2, we analyse the so-called virtual queueing delay, that is, the delay experienced by a fluid particle arriving at a random point in time (i.e., a "time average"); this is done through a direct approach in Section 4.1 and through so-called "double transforms" in Section 4.2. Section 4.3 characterises the queueing delay of an arbitrary fluid particle (i.e., a "traffic average").
4.1. Virtual Queueing Delay
Let denote the delay experienced by a fluid particle arriving at the queue in steady state, say for ease at time 0; this type of delay is sometimes referred to as the virtual queueing delay. Let denote the amount of output capacity available in the interval . If the fluid particle arrives at an empty queue, then the virtual delay is clearly zero; if the fluid particle arrives at a nonempty queue, then the queue is drained according to the rates until the particle has been served (in fact even until the queue is empty). Define, for , the random variable as the time until units of service have become available: ; notice that is increasing in . Then, analogously to [4, Section 4.1], with some abuse of notation, . Hence, to further compute this expression, we need to evaluate . Here, we can use [4, Proposition 4.1]: , where denotes an -dimensional vector of 1s. As the same proposition entails that the eigenvalues are simple and negative (hence real numbers), it allows us to write, for constants with , . We thus obtain the following result.
Theorem 4.1. For , where the are as in (4.4). The , for , are the eigenvalues of (which are negative). An expression for , with , is available from Corollary 3.2.
4.2. A Second Approach: Double Transforms
We now proceed with demonstrating a second approach, which relies on the concept of "double transforms." We feel that this is instructive, as this approach is used extensively in the remainder of the paper (when analysing the flow transfer delay and the sojourn time).
Let us first condition on the buffer content () that the fluid particle sees (say that it arrives at time 0), and the number of flows that are then present (). Define, for given and , the transform of the queueing delay: . Then, we also introduce the transform of with respect to the workload : ; we say that is a "double transform." Below, we show how to use these double transforms to derive .
Our first goal is to characterise the , for fixed and . We do this by expressing in terms of (with ) as follows. Condition on the time until the service rate changes; this time has an exponential distribution with mean . Hence, . A straightforward change of variable () then yields that . For given and , these are linear equations in the same number of unknowns. It is easy to see that the corresponding linear system is diagonally dominant, and hence there is a unique solution. This enables us to find the .
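The uniqueness argument above rests on strict diagonal dominance. A generic check of that property (the paper's actual system matrix is not reproduced here, so this is only an illustration of the criterion):

```python
def strictly_diagonally_dominant(A):
    """True if |a_ii| exceeds the sum of |a_ij|, j != i, in every row.
    Such a matrix is nonsingular (Levy-Desplanques theorem), so the
    corresponding linear system has a unique solution."""
    for i, row in enumerate(A):
        off_diag = sum(abs(a) for j, a in enumerate(row) if j != i)
        if abs(row[i]) <= off_diag:
            return False
    return True
```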
Our second goal is to show how these yield an expression for . At an arbitrary point in time, the distribution function of the workload (jointly with the number of flows present) is , as given by (3.18). But, as the corresponding density is a weighted sum of exponentials, knowledge of the gives an expression for the Laplace transform of the virtual delay: ; recall that for .
Theorem 4.2. For ,
4.3. "Packet Average" Queueing Delay
The previous subsections presented expressions for the Laplace transform of the queueing delay "at an arbitrary point in time" (a "time average"). Clearly, there is a bias between the delay "at an arbitrary point in time" and the delay "seen by an arbitrary fluid particle." The correction to be made is analogous to [4, Section 4.2] and rather straightforward: ; compare [11, Proposition 7.2].
5. Flow Transfer Delay Distribution
In this section, we focus on the time it takes for an arbitrary arriving flow to transmit its traffic. We define the flow transfer delay as the time between the flow's arrival and the epoch that its last fluid particle has been transmitted into the queue. Realise that the flow transfer time depends on the buffer content and the number of flows that the tagged flow sees upon arrival. Due to the PASTA property, these coincide with the corresponding time averages. Recall that the case , as addressed in [4], is simpler, as there the buffer content seen upon arrival does not play a role.
Let us first condition on the buffer content () and the number of flows (). Define, for , , the Laplace transform of the flow transfer time (conditional on and ) and its transform with respect to : . For later reference, we also introduce, for and , . Notice that, for , is positive.
In Section 5.1, we find the ; this is in terms of auxiliary transforms that are determined in Section 5.2. We conclude this section by presenting the transform of the flow transfer delay; see Section 5.3.
5.1. A System of Equations for the Double Transform
We now deduce a system of equations for the . We do so by distinguishing between "down-states" ( with ) and "up-states" ( with ). The idea is that, for up-states, the buffer content cannot become 0 during the time until the first event (a new arrival or departure), while for down-states this is possible. As a consequence, these cases have to be dealt with differently.
Up-State
First, assume that is an "up-state": . It is elementary to see that, conditioning on the first event taking place after units of time, . This is the sum of three integrals. The third equals . Consider the first, and perform the change of variable : . Similarly, . We arrive at . Later, it will turn out to be useful to also consider the representation .
Down-State
Now, assume that is a down-state: . In this case, we must distinguish between the cases that the process remains in state shorter, respectively longer, than (which is a positive number); in the former case, the buffer does not become empty before the first event, whereas in the latter case it does. In more detail, we have . With , this simplifies to . As indicated, our goal is to generate a system of equations for the (with and fixed); we therefore wish to express in terms of the . This can be done as follows.

(i) First, for , , whereas for , .

(ii) Now, consider the vector . Define, for , , and 0 else. The corresponding matrix is called , that is, ; for we have that is diagonally dominant and hence invertible. Also, , with . Then, (5.11) implies that . In other words, once we know , we can compute .

(iii) Let . Now, (5.12) entails that, for , , where .

Inserting this into (5.10), we have found the following relation for down-states : ; notice the similarity with the equation for the up-states (5.8).
5.2. Determining the Auxiliary Transforms
From (5.8) and (5.17), it follows that, for known functions and , . Here, the matrix is given through . In other words, if the transforms , for and , were known, then, for fixed and , the values of the could be found directly by solving a system of linear equations. The rest of this subsection is devoted to explaining how to identify the , for given . We first prove a useful lemma.
Lemma 5.1. Consider, for fixed , the such that . There are such values such that .
Proof. First, rewrite , with . Observe that is a generator matrix. Notice also that solutions of are the eigenvalues of .

(i) We first focus on properties of the eigenvalues of . Recall that it follows from Theorem 2, part 3 of [8] that has as many eigenvalues in the right half-plane as the number of up-states in , that is, ; there is also one eigenvalue of equal to 0 (note that is singular), and the remaining are in the left half-plane. Geršgorin's circle theorem states that all eigenvalues are in at least one of the disks ; the th disk is a circle in the complex plane around of radius (which therefore goes through 0). Notice that implies that the number of disks in the right (left) half-plane equals the number of up-states (down-states, resp.). These observations are illustrated in the left panel of Figure 2.

(ii) Now, consider the eigenvalues of for small . Observe that these solve the equation . As seen in the proof of [8, Theorem 2], has the same sign as the mean drift, that is, negative. Likewise, the derivative of with respect to is positive (use that all diagonal entries of are positive). Hence, replacing by moves the zero eigenvalue to the left. Geršgorin's theorem implies that all eigenvalues of are in at least one of the disks ; compared to the situation of , this means that the disks corresponding to the up-states (that were in the right half-plane) move to the right (with the same radius); likewise, the disks corresponding to the down-states move to the left. This implies that all eigenvalues in the left (right) half-plane remain in the left (right) half-plane, because of the continuity of the solutions of in the coefficients of the characteristic polynomial. Conclude that for small , there are then in the right half-plane, and the remaining in the left half-plane. This is illustrated in the right panel of Figure 2.

(iii) Observe that the same arguments imply that this classification remains valid when increasing further. The special case proves our claim.
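The Geršgorin containment invoked twice in this proof can be checked numerically. The sketch below only illustrates the classical theorem (every eigenvalue lies in at least one disk centred at a diagonal entry with radius the off-diagonal absolute row sum); it does not reproduce the paper's specific matrices.

```python
import numpy as np

def gershgorin_disks(M):
    """Centers (diagonal entries) and radii (off-diagonal absolute row
    sums) of the Gershgorin disks of a square matrix."""
    M = np.asarray(M, dtype=complex)
    centers = np.diag(M)
    radii = np.abs(M).sum(axis=1) - np.abs(centers)
    return centers, radii

def eigenvalues_covered(M):
    """Verify that every eigenvalue lies in at least one Gershgorin disk."""
    centers, radii = gershgorin_disks(M)
    eigs = np.linalg.eigvals(np.asarray(M, dtype=complex))
    return all(any(abs(ev - c) <= r + 1e-9 for c, r in zip(centers, radii))
               for ev in eigs)
```

For a generator matrix the radius of each disk equals the modulus of the diagonal entry, so every disk passes through 0, which is exactly the geometric fact used in step (i) of the proof.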
Now, we are able to characterise the transforms as follows.
STEP 1. Determine linear equations for the entries of .
(a) First, focus on (5.8); these relate to . We introduce the notation . Then, for and , we obtain by inserting : . Notice that , so that for both sides reduce to 0; hence, these equations are meaningless.

(b) Now, consider (5.17); these relate to . Plugging in for , we obtain .
STEP 2. Reduce the dimension of the vector . The sets of (5.24) and (5.25) enable us to express the for and , but , in terms of . We have thus identified functions and such that, for , . In other words, if the functions were known, we would have found the .
STEP 3. Apply Lemma 5.1. By virtue of Cramer's rule, we obtain from (5.26) that , where is defined as but with the th column replaced by a vector of which the th entry is . For any in the right half-plane, this should have a finite norm. Now fix , and use Lemma 5.1. Denote the zeroes of the denominator by . Conclude that each zero of the denominator should correspond to a zero of the numerator. This yields equations that determine .
The above results are summarised in the following theorem.
Theorem 5.2. For , , with and defined by (5.26). For any there are values of in the right half-plane such that , say . The vector follows, for fixed , from letting in the numerator of (5.27) and equating this to , for .
Remark 5.3. It is easily verified that, in passing, we have also found a procedure to compute the , for ; compare (5.11), (5.12), and (5.15).
5.3. Flow Transfer Delay
It is clear that, due to PASTA, the number of customers present at (i.e., just after) the arrival of an (accepted) flow has distribution (). For determining the flow transfer delay, however, it is also necessary to know the amount of work found in the buffer. The joint distribution of the number of flows and the buffer content is given by ; observe that indeed . Hence, . Mimicking the derivation of , we obtain the following result.
Theorem 5.4. For , Expressions for and are available from the previous subsection.
6. Sojourn Time Distribution
In this section, we study the sojourn time of flows in the system, which is defined as the flow transfer time increased by the time it takes to serve the last particle of the flow. These components are not independent. Due to PASTA, the joint distribution of the workload and the number of flows that a new (accepted) flow sees upon arrival is given by , as defined through (5.30). To derive the Laplace transform of , we first need to describe the workload increment (which can be positive or negative) during the flow transfer time ; see Section 6.1. Then, in Section 6.2, we put the components together and derive the transform of .
6.1. Joint Transform of Flow Transfer Time and Workload Increment
In the sequel it will turn out that, in order to characterise the distribution of the sojourn time, we do not just need the distribution of , but rather its joint distribution with the workload and the number of flows present at the end of the transfer (not counting the flow that just left). To this end we introduce the counterparts of and : , with . Similarly to before, we also define .
can be derived essentially in the same fashion as was found in Sections 5.1 and 5.2. We sketch this procedure. The counterpart of (5.7) is, for , , whereas (5.10) generalises to, for , . In the last equation, as before, the can be expressed in terms of the . Then it remains to find the transforms ; these can be determined as in Section 5.2.
6.2. Sojourn Time
The sojourn time can be decomposed into (i) the flow transfer delay and (ii) the time it takes to serve the traffic that is in the buffer at the end of the flow transfer delay (i.e., the time it takes to serve the last particle of the tagged flow). This allows us to write, as in [4, Section 6.3], with the usual abuse of notation, . Consider the cases and separately. The contribution due to amounts to . Similarly, we find that the contribution due to is .
Theorem 6.1. For , is given by the sum of Expressions (6.5) and (6.6).
7. Discussion and Concluding Remarks
In this paper, we have considered a relay node in an ad hoc network, fed by a Poisson stream of exponentially distributed jobs. We have characterised its performance in terms of (the Laplace transforms of) the buffer content, the queueing delay, the flow transfer delay, and the sojourn time.
Integer Weights
In our analysis, we throughout assumed that the weight was noninteger. If , the analysis is slightly more involved. We now indicate how the analysis should be adapted. In the first place, one of the coupled differential equations in (3.2) has left-hand side 0 (because ); if we enumerate the equations , then the th equation reads , where is the -entry of . Then the th differential equation (where ) becomes . Interestingly, for , and , and hence the (with , but ) correspond to an -dimensional generator matrix. In self-evident notation, we have arrived at , with all entries of not equal to 0. In this way, the steady-state buffer content distribution of Model II can be determined: we first find the for , and then we use (7.1) to derive . The buffer content distribution follows in the same fashion as in Section 3.2. It is of crucial importance to choose a definition of busy and idle periods (i.e., one has to choose whether periods with an empty buffer and flows in the system belong to the busy or to the idle periods), and then to consistently use this definition. It is readily verified that for the other performance measures no specific problems arise.
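The steady-state computation for the finite generator matrix mentioned above can be sketched as follows; the 3-state birth-death generator below is a hypothetical example, standing in for the matrix obtained from the adapted system.

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for an irreducible generator matrix Q
    (rows sum to zero, off-diagonal entries nonnegative)."""
    n = Q.shape[0]
    # Replace one (redundant) balance equation by the normalisation condition
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Hypothetical 3-state birth-death generator
Q = np.array([[-1.0,  1.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -2.0]])
pi = stationary_distribution(Q)
assert np.allclose(pi @ Q, 0, atol=1e-10) and abs(pi.sum() - 1) < 1e-10
```

One balance equation is always redundant (the rows of a generator sum to zero), which is why it can safely be replaced by the normalisation condition.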
Limiting Cases
We now consider a number of interesting limiting choices for the weight . For ease, we lift the assumption of performing admission control; in other words, we take . Let be the steady-state number of flows in the system (i.e., transmitting traffic into the queue), for a given weight ; is defined analogously.
As argued in [3], the total amount of traffic in the system has the same dynamics as an M/M/1 queue in which all job sizes are inflated by a factor 2 (as they have to be processed twice). This total amount of traffic is to be understood as , denoting the traffic at the sources, and the traffic at the queue; the factor 2 is due to the fact that traffic at the sources still needs to be processed twice, as opposed to traffic at the queue. Importantly, the evolution of the total queue is independent of ; realise that the total queue is work-conserving. It follows from the Pollaczek-Khinchine formula that the mean amount of work (measured in processing time) in the system is , independently of the choice of . It can be seen that pathwise the amount of traffic at the sources increases in (as the decrease), so that the amount of traffic in the queue decreases in .
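The Pollaczek-Khinchine argument above can be made concrete in a few lines; the rates below are assumed example parameters, and job sizes are inflated by the factor 2 as described.

```python
def mean_work_mm1(lam, mean_job):
    """Mean workload (in processing time) in an M/M/1 queue with Poisson
    arrival rate lam and exponential job sizes with the given mean, via the
    Pollaczek-Khinchine formula E[V] = lam * E[S^2] / (2 * (1 - rho))."""
    rho = lam * mean_job
    assert rho < 1, "stability requires rho < 1"
    e_s2 = 2 * mean_job ** 2          # second moment of an exponential
    return lam * e_s2 / (2 * (1 - rho))

# Jobs of mean size 1/mu have to be processed twice, hence effective mean 2/mu
lam, mu = 0.2, 1.0
v = mean_work_mm1(lam, 2 / mu)
# For M/M/1 the mean workload also equals rho * E[S] / (1 - rho); cross-check:
rho = lam * 2 / mu
assert abs(v - rho * (2 / mu) / (1 - rho)) < 1e-9
```

As the text notes, this total is independent of the weight; the weight only shifts work between the sources and the queue.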
(i) In case , the queue has maximum weight, and never builds up. Always half of the capacity is dedicated to the flows, and the other half to the queue. It can be verified that the queue becomes a normal processor-sharing queue. The flow transfer delays and the sojourn times coincide. Elementary computations reveal that the steady-state distribution of the number of flows in the system is geometrically distributed. The probability of flows in the system is ; the mean number of flows is
(ii) A second extreme case is . Then, the queue is only served when the flows have transmitted all their traffic into the queue; when the flows have something to transmit, the queue grows at a rate . In this case, the probability of flows in the system is ; the mean number of flows is .
Evidently, the mean amount of work in the queue does depend on . First, observe that the mean amount of traffic (measured in processing time) that the flows still need to inject into the queue is , where the factor 2 reflects the fact that this traffic still needs to be processed twice. We conclude that the mean amount of work (measured in processing time) at the queue is . These arguments can be used to quantify the tradeoff between the flow transfer delay (which increases in ) and the queueing delay of the last particle (which decreases in ). The formula for was already given in [3]: , which is indeed between (7.5) for .
Numerical Aspects
The numerical techniques to be used in the approach presented in this paper (namely, solving eigensystems, solving linear systems, and numerical inversion of Laplace transforms) are well established, and efficient and reliable computer code is available for them. The distributions of and can be found without performing any Laplace inversion; as a result, their evaluation boils down to solving a linear system of differential equations, comparable to those highlighted in, for example, [5, 6]. Determining the distributions of and does require Laplace inversion. It is noted that recently substantial progress has been made with respect to this type of inversion technique. Besides the "classical" reference [12], we wish to draw attention to significant recent progress by den Iseger, as reported in [13]; the latter reference also provides a fairly complete literature overview.
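As an illustration of the kind of inversion routine involved, a minimal sketch of the classical Gaver-Stehfest scheme is given below (this is not the den Iseger method of [13], merely one well-known real-valued alternative); the test transform is a standard textbook pair, unrelated to the model.

```python
from math import factorial, log, exp

def stehfest_coefficients(N):
    """Gaver-Stehfest weights for an even number of terms N."""
    c = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        c.append((-1) ** (k + N // 2) * s)
    return c

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s), evaluated at real s only."""
    ln2_t = log(2) / t
    c = stehfest_coefficients(N)
    return ln2_t * sum(c[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Sanity check on a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = invert_laplace(lambda s: 1 / (s + 1), t=1.0)
assert abs(approx - exp(-1)) < 1e-4
```

The scheme only needs the transform on the positive real axis, which makes it convenient for transforms that, as here, are obtained as solutions of linear systems; its accuracy is, however, limited by severe cancellation for large N.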
When focusing on obtaining numerical output, there are several alternatives to the approach described in this paper. A first alternative is to rely on tail asymptotics, as done in [4] for , in the spirit of , where the constant follows from the solution of the corresponding eigensystem; likewise, one could consider the logarithmic tail asymptotics of , and . As this just provides us with the decay rate (i.e., it says that for an unknown function such that ), one could use importance-sampling-based simulations to improve on this, where the twisted distribution can be computed as in [14]. For sojourn times, such importance sampling schemes can be set up as in [15].
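The flavour of the simulation alternative can be illustrated on the plain M/M/1 workload, for which the exact exponential tail P(W &gt; x) = rho e^{-(mu-lam)x} is available for comparison; the parameters below are assumed example values, and the recursion is the standard Lindley recursion, not the model of this paper.

```python
import random
from math import exp

def mm1_workload_tail(lam, mu, x, n=200_000, seed=7):
    """Estimate P(W > x) for the stationary M/M/1 workload by iterating the
    Lindley recursion W_{k+1} = max(W_k + S_k - A_k, 0); by PASTA the
    arrival-epoch distribution equals the time-stationary one."""
    rng = random.Random(seed)
    w, hits = 0.0, 0
    for _ in range(n):
        w = max(w + rng.expovariate(mu) - rng.expovariate(lam), 0.0)
        hits += (w > x)
    return hits / n

lam, mu, x = 0.5, 1.0, 2.0
est = mm1_workload_tail(lam, mu, x)
exact = (lam / mu) * exp(-(mu - lam) * x)  # = rho * e^{-(mu - lam) x}
print(est, exact)
```

Plain simulation like this becomes inefficient far in the tail, which is exactly where the importance-sampling twist computed as in [14] pays off.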
Subjects for Future Research
We mention the following directions for further research.
(i) Multiple bottlenecks. In some situations, the scenario of a single bottleneck link may be an oversimplification of reality, and in such cases one could study multiple bottleneck links that share capacity. The complicating factor is that then the dynamics of the flows feeding into one queue will be affected by the workload process in other queues; the queues cannot be analysed separately. This gives the model the flavour of coupled-processors systems as studied in, for example, [16].
(ii) Other flow-size distributions. Another challenging extension is to consider nonexponential flow sizes; in particular, the impact of heavy-tailed jobs is interesting to study. Suppose, for instance, that the flow sizes have a regularly varying distribution of index ; then it is an open question whether the sojourn times are regularly varying of index , as is the case in the M/G/1 FIFO queue [17, 18], or regularly varying of index , as is the case in the M/G/1 processor-sharing (PS) queue [19] (or perhaps regularly varying of yet another index).
(iii) Other queueing disciplines. In this study, as well as in [3, 4], the queue was supposed to operate under the FIFO discipline. This introduces some "unfairness", in that even small jobs can incur significant delay. Put differently, the sojourn time of a job of size , say , is such that . If the scheduling discipline in the queue were PS (rather than FIFO), then this limit would be 0; in this sense, PS could be regarded as a remedy for unfairness.
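The contrast can be made explicit with two standard facts about a plain M/G/1 queue with load ρ and stationary workload W (stated here as an illustration of the unfairness argument, under the usual stability assumption ρ &lt; 1):

```latex
% FIFO: a job of size x must wait for the entire workload present at
% arrival, so its sojourn time does not vanish as x tends to 0:
\lim_{x \downarrow 0} T_{\mathrm{FIFO}}(x) \,\stackrel{d}{=}\, W > 0.
% PS: in the M/G/1 processor-sharing queue the expected sojourn time of a
% job of size x is proportional to its size, and hence vanishes:
\mathbb{E}\bigl[T_{\mathrm{PS}}(x)\bigr] = \frac{x}{1-\rho}
\;\longrightarrow\; 0 \qquad (x \downarrow 0).
```

Thus, under PS, arbitrarily small jobs incur arbitrarily small expected delay, whereas under FIFO they still face the full backlog.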
(iv) Weight selection. Now that we are able to evaluate the performance of the relay node for a given weight , one may wonder what value of should be chosen. As argued above, there is a tradeoff between the flow transfer delay and the queueing delay of the last particle; imposing some cost structure, an optimal value for can be selected. In a network setting, each node chooses its own weight. A high weight may be beneficial for the node itself, but harmful for other nodes. In view of this, it may make sense to charge nodes for their weight. Pricing schemes could provide incentives for users to act as transit nodes on multihop paths; cf. [20, 21].
Acknowledgments
The authors would like to thank Hans van den Berg and Frank Roijers (TNO ICT) for useful remarks.