
Journal of Applied Mathematics

Volume 2012 (2012), Article ID 831909, 35 pages

http://dx.doi.org/10.1155/2012/831909

## Stability and Probability 1 Convergence for Queueing Networks via Lyapunov Optimization

Michael J. Neely

Electrical Engineering Department, University of Southern California, 3740 McClintock Avenue, Room 500, Los Angeles, CA 90089-2565, USA

Received 15 August 2011; Revised 17 March 2012; Accepted 11 April 2012

Academic Editor: P. G. L. Leach

Copyright © 2012 Michael J. Neely. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Lyapunov drift is a powerful tool for optimizing stochastic queueing networks subject to stability. However, the most convenient drift conditions often provide results in terms of a time average expectation, rather than a pure time average. This paper provides an extended drift-plus-penalty result that ensures stability together with desired time averages that hold with probability 1. The analysis uses the law of large numbers for martingale differences. This is applied to quadratic and subquadratic Lyapunov methods for minimizing the time average of a network penalty function subject to stability and to additional time average constraints. Similar to known results for time average expectations, this paper shows that pure time average penalties can be pushed arbitrarily close to optimality, with a corresponding tradeoff in average queue size. Further, in the special case of quadratic Lyapunov functions, the basic drift condition is shown to imply all major forms of queue stability.

#### 1. Introduction

This paper considers Lyapunov methods for analysis and control of queueing networks. Landmark work in [1, 2] uses the drift of a quadratic Lyapunov function to design general max-weight algorithms for network stability. Work in [3, 4] extends this result using a *drift-plus-penalty* algorithm that leads to joint stability and penalty optimization. The results in [1, 2] are developed for systems that evolve on ergodic Markov chains with countably infinite state spaces. The results in [3, 4] apply to more general systems but are stated in terms of time average expectations, rather than pure time averages. Time average expectations can often be translated into pure time averages when the system has a Markov structure and some additional assumptions are satisfied, such as when the system regularly returns to a *renewal state*. However, these additional assumptions often fail and can be difficult to verify for particular systems of interest. This paper seeks to provide simple conditions that ensure desirable queue sample paths and desirable time average penalties with probability 1.

First, a basic drift condition for a general class of Lyapunov functions is shown to imply two weak forms of queue stability, called *rate stability* and *mean rate stability*. Next, this paper focuses on quadratic and subquadratic Lyapunov functions. Using the law of large numbers for martingale differences, a simple drift-plus-penalty condition is shown to ensure queue stability together with desirable time averages of an associated network penalty function. In the special case of quadratic Lyapunov functions, it is shown that the basic drift condition implies all major forms of queue stability, including rate and mean rate stability, as well as four stronger versions. These results require only mild bounded fourth moment conditions on queue difference processes and bounded second-moment conditions on penalty processes and do not require a Markov structure or renewal assumptions. Two simple examples are given in Appendix A to show what can go wrong if these bounded moment conditions are violated.

Finally, these results are used to design and analyze a class of algorithms for minimizing the time average of a network penalty subject to queue stability constraints and additional time average constraints. It is shown that the time average penalty can be pushed arbitrarily close to optimality, with a corresponding tradeoff in queue size. In the special case of quadratic functions, the tradeoff results of [3, 4] are recovered in a stronger form. They are shown to hold for time averages with probability 1 (rather than time average expectations). The results for quadratic functions in this paper assume the problem is feasible and that the constraints can be achieved with "$\epsilon$-slackness" (similar to a Slater-type condition of static optimization theory [5]). However, this paper also obtains results for subquadratics. Algorithms based on subquadratics are shown to provide desired results whenever the problem is feasible, without requiring the additional slackness assumption. This analysis is useful for control of queueing networks, and for other stochastic problems that seek to optimize time averages subject to time average constraints.

##### 1.1. On Relationships between Time Average Expectations and Time Averages

It is known by Fatou's Lemma that if a random process is deterministically lower bounded (such as being nonnegative) and has time averages that converge to a constant with probability 1, then this constant must be less than or equal to the time average expectation (see, e.g., [6]). This can be used to translate the existing bounds in [4], which hold for time average expectations, into sample path bounds that hold with probability 1. However, this translation is only justified in the special case when the processes are suitably lower-bounded and have convergent time averages. Time average convergence can often be shown when the system is defined on a Markov chain with suitable renewal properties [7, 8]. Further, queue stability is often defined in terms of *positive Harris recurrence* [9, 10], although systems with positive Harris recurrence do not always have finite average queue size. For example, consider a standard queue with service times that have finite means but infinite second moments, and with arrival rate $\lambda$ strictly less than the service rate. The system spends a positive fraction of time in the 0 state but has infinite time average queue size.
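This phenomenon is easy to reproduce numerically. The sketch below (an illustration of my own, not from the paper) runs the Lindley waiting-time recursion with Pareto service times that have finite mean but infinite variance; the queue empties regularly (consistent with positive recurrence), yet sample-average waits are erratic and large.

```python
import random

random.seed(1)

def pareto(alpha=1.5, xm=1.0):
    """Pareto service time: finite mean (= 3 for alpha=1.5, xm=1), infinite variance."""
    u = random.random()
    return xm / (u ** (1.0 / alpha))

# Lindley recursion with deterministic interarrival time 4 > E[S] = 3, so the
# queue is stable in the recurrence sense and empties infinitely often.
waits = []
w = 0.0
for _ in range(20000):
    s = pareto()
    w = max(w + s - 4.0, 0.0)
    waits.append(w)
```

With the parameters above the server utilization is 0.75, so a positive fraction of customers find an empty system even though heavy-tailed services make the time average wait effectively unbounded.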

Networks with flow control often have a structure that yields *deterministically bounded* queues [11, 12]. While this is perhaps the strongest form of stability, it requires special structure. Primal-dual algorithms are considered for scheduling in wireless systems with “infinite backlog” in [13, 14] and shown to converge to a utility-optimal operating point although this work does not consider queueing or time average constraints. Related primal-dual algorithms are treated in [15, 16] for systems with queues, where [15] shows utility optimality for a fluid version of the system, and [16] proves that the actual network exhibits rate stability with near-optimal utility. Related analysis is considered in [17] for systems with countably infinite-state ergodic Markov chains, and utility-optimal scheduling is treated for deterministic systems in [18]. The approaches in [13–18] do not obtain the desired tradeoffs between network utility and queue size.

##### 1.2. Prior Work on Quadratics and Other Lyapunov Functions

Much prior work on queue stability uses quadratic Lyapunov functions, including [1, 2, 19–21] for ergodic systems defined on countably infinite Markov chains, and [22] for more general systems but where stability is defined in terms of a time average expectation. Stability with exponential Lyapunov functions is treated in [23, 24], and stability with more general Lyapunov functions is in [25]. The works [26–29] develop algorithms based on subquadratic Lyapunov functions and suggest that these often have better delay in comparison to algorithms derived from quadratic Lyapunov functions. This motivates recent work in [30] that has combined subquadratic Lyapunov functions with joint stability and penalty minimization, using time average expectations similar to [4].

#### 2. Lyapunov Functions and Drift

Let $Q(t) = (Q_1(t), \ldots, Q_N(t))$ be a real-valued random vector that evolves over time slots $t \in \{0, 1, 2, \ldots\}$. The distribution of $Q(0)$ is given, and the values of $Q(t)$ for $t \in \{1, 2, 3, \ldots\}$ are determined by events on a probability space. Specifically, assume that outcomes on the probability space have the structure $(Q(0); \eta_0, \eta_1, \eta_2, \ldots)$, where $Q(0)$ is the initial vector state, and $\eta_t$ for $t \in \{0, 1, 2, \ldots\}$ represents an event that occurs on slot $t$. For each slot $t$, the value of $Q(t+1)$ is assumed to be determined by the history $(Q(0); \eta_0, \ldots, \eta_t)$. Let $H(t) = (Q(0); \eta_0, \ldots, \eta_{t-1})$ represent this history up to (but not including) slot $t$. Formally, $\{H(t)\}_{t \ge 0}$ is a filtration of the infinite horizon probability space onto the finite set of slots $\{0, \ldots, t-1\}$. For any slot $t$, a given $H(t)$ includes information that specifies the value of $Q(t)$.

An example of such a process is the *queue backlog process* in a network of $N$ queues that evolve according to some control law via the queueing equation
$$Q_i(t+1) = \max[Q_i(t) - b_i(t), 0] + a_i(t), \quad i \in \{1, \ldots, N\}, \tag{2.1}$$
where $a_i(t)$ and $b_i(t)$ are arrival and service processes for queue $i$, which take values determined by $(Q(0); \eta_0, \ldots, \eta_t)$. For this reason, throughout we call $Q(t)$ the *queue vector*. However, our results are not restricted to processes of the type (2.1) and hold for more general processes, including processes that can have negative components.
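For concreteness, here is a minimal sketch of the update (2.1) with hypothetical arrival and service values chosen purely for illustration:

```python
def step(q, a, b):
    """One slot of update (2.1): serve up to b units, then add a new arrivals."""
    return max(q - b, 0.0) + a

# Evolve a two-queue vector over two slots with fixed (arrival, service) pairs.
Q = [0.0, 5.0]
slots = [((2.0, 1.0), (3.0, 4.0)),   # slot 0: arrivals (2, 1), services (3, 4)
         ((0.0, 2.0), (1.0, 1.0))]   # slot 1: arrivals (0, 2), services (1, 1)
for a, b in slots:
    Q = [step(q, ai, bi) for q, ai, bi in zip(Q, a, b)]
```

Note the order of operations: service is applied first (and cannot push the backlog below zero), then new arrivals are added.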

##### 2.1. Lyapunov Functions

Let $L(Q)$ be a nonnegative function of the current queue vector $Q(t)$. We call this a *Lyapunov function*. Define the drift $\Delta(t) = L(Q(t+1)) - L(Q(t))$. Algorithms for queue stability are often developed by defining a convenient Lyapunov function, greedily minimizing a bound on $E[\Delta(t) \mid H(t)]$ every slot, and showing that the resulting expectation has desirable properties [1, 2, 19–24]. An example Lyapunov function that is often used is
$$L(Q) = \sum_{i=1}^{N} w_i |Q_i|^{1+\beta}, \tag{2.2}$$
where $w_1, \ldots, w_N$ are given positive constants, with $\beta > 0$. This type of function represents a scalar measure of the vector queue size. Algorithms that greedily minimize a bound on $E[\Delta(t) \mid H(t)]$ every slot often satisfy a *drift condition* of the form $E[\Delta(t) \mid H(t)] \le B$ for all $t$ and all $H(t)$, for some finite constant $B > 0$. Such drift conditions are examined in Section 3 in the context of queue stability.
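To make the drift condition concrete, the following sketch (a toy single-queue example of my own, not from the paper) computes the exact conditional drift of the quadratic choice $L(Q) = Q^2$ for a queue with Bernoulli(0.4) arrivals and at most one departure per slot, and verifies the drift is uniformly bounded:

```python
# Exact conditional drift of L(Q) = Q^2 for a toy queue: Bernoulli(0.4) arrivals,
# unit service per slot (hypothetical parameters chosen for illustration).
lam = 0.4

def expected_drift(Q):
    """E[L(Q(t+1)) - L(Q(t)) | Q(t) = Q], averaging over the random arrival."""
    drift = 0.0
    for a, prob in ((0, 1.0 - lam), (1, lam)):
        Q_next = max(Q - 1, 0) + a        # update of type (2.1) with b = 1
        drift += prob * (Q_next**2 - Q**2)
    return drift

# The drift condition holds with B = 1: E[drift | Q] <= 1 for every Q >= 0
# (indeed the drift is strictly negative once Q >= 1, since the arrival rate
# 0.4 is below the unit service rate).
B = max(expected_drift(Q) for Q in range(200))
```

Here the bound is found by direct enumeration rather than analysis, which is only possible because the toy example has a one-dimensional, exactly computable drift.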

Now let $p(t)$ be an additional *penalty process* for the system, representing some penalty (such as power expenditure) incurred on slot $t$. In many cases we desire to jointly stabilize all queues while minimizing the time average of $p(t)$. The *drift-plus-penalty* theory in [4] approaches this by taking control actions that greedily minimize a bound on $E[\Delta(t) + V p(t) \mid H(t)]$, where $V \ge 0$ is a nonnegative weight that affects a performance tradeoff. Such policies often ensure a *drift-plus-penalty condition* of the following form:
$$E[\Delta(t) + V p(t) \mid H(t)] \le B + V p^* - \epsilon f(Q(t)), \tag{2.3}$$
for some constants $B \ge 0$, $\epsilon \ge 0$, $V \ge 0$, and some nonnegative function $f(Q)$. The value $p^*$ represents a target value for the time average of $p(t)$. The function $f(Q)$ represents a measure of the total queue size. Expressions of the type (2.3) are examined in Sections 4, 5, and 6, and a broad class of networks that lead to expressions of this type are considered in Section 7.

##### 2.2. Performance for Time Average Expectations

Results for time average expected queue size and time average expected penalty can be immediately derived from the drift-plus-penalty condition (2.3) together with minimal assumptions. Specifically, assume that (2.3) holds for all $t$ and all $H(t)$, with $B \ge 0$, $V > 0$, $\epsilon > 0$. Assume that $E[L(Q(0))] < \infty$, and that there is a (possibly negative) constant $p_{\min}$ such that $p(t) \ge p_{\min}$ for all $t$. Taking expectations of (2.3) for a given slot $t$ gives the following:
$$E[L(Q(t+1))] - E[L(Q(t))] + V E[p(t)] \le B + V p^* - \epsilon E[f(Q(t))]. \tag{2.4}$$
Now fix any integer $t > 0$. Summing (2.4) over $\tau \in \{0, 1, \ldots, t-1\}$ (the left-hand side telescopes) and dividing by $t$ yields
$$\frac{E[L(Q(t))] - E[L(Q(0))]}{t} + \frac{V}{t}\sum_{\tau=0}^{t-1} E[p(\tau)] \le B + V p^* - \frac{\epsilon}{t}\sum_{\tau=0}^{t-1} E[f(Q(\tau))]. \tag{2.5}$$
Rearranging the above and using $L(Q(t)) \ge 0$ leads to the following two inequalities for all $t > 0$:
$$\frac{1}{t}\sum_{\tau=0}^{t-1} E[p(\tau)] \le p^* + \frac{B}{V} + \frac{E[L(Q(0))]}{Vt}, \qquad \frac{1}{t}\sum_{\tau=0}^{t-1} E[f(Q(\tau))] \le \frac{B + V(p^* - p_{\min})}{\epsilon} + \frac{E[L(Q(0))]}{\epsilon t}, \tag{2.6}$$
where the first inequality holds whenever $V > 0$, and the second holds whenever $\epsilon > 0$. Taking a $\limsup$ of the above gives
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} E[p(\tau)] \le p^* + \frac{B}{V}, \tag{2.7}$$
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} E[f(Q(\tau))] \le \frac{B + V(p^* - p_{\min})}{\epsilon}. \tag{2.8}$$
This shows that performance can be parameterized by the constant $V$, giving time average expected penalty within $B/V$ of the target $p^*$, with an $O(V)$ tradeoff in the time average expectation of $f(Q(t))$ (representing a time average queue size metric). This method is used in [3, 4] for analysis and design of stochastic network optimization algorithms. Related manipulations of drift expressions in [31–34] lead to statements concerning time average expectations in other contexts.

The above arithmetic did not require Markov chain assumptions, renewal assumptions, or assumptions on the higher-order moments of the processes. We desire to obtain performance bounds similar to (2.7) and (2.8), but with the above limiting time average expectations replaced, with probability 1, by limiting time averages over a sample path. This cannot be done without imposing more structural assumptions on the problem (see Appendix A for simple counterexamples). The goal is to find simple and easily verifiable additional assumptions that preserve the elegance of the method while strengthening the results.

#### 3. Rate Stability and Mean Rate Stability

This section considers drift conditions of the type $E[\Delta(t) \mid H(t)] \le B$. Let $Q(t)$ be a stochastic vector defined over slots $t \in \{0, 1, 2, \ldots\}$. Assume that there are constants $C > 0$, $\theta > 0$ such that
$$E\big[|Q_i(t+1) - Q_i(t)|^{1+\theta}\big] \le C \quad \text{for all } i \text{ and all } t. \tag{3.1}$$
For example, if $\theta = 1$, then the condition (3.1) states that the second moment of queue changes is bounded for all time. This holds, for example, whenever the queue evolves according to (2.1) with arrival and service processes $a_i(t)$, $b_i(t)$ that have bounded second moments for all slots. The use of the variable $\theta$ allows more general situations when arrivals and/or service variables can have infinite second moments.

Let $L(Q)$ be a nonnegative function that has the following structural property. There exist constants $\alpha > 0$, $\sigma > 0$, such that
$$L(Q) \ge \alpha \|Q\|^{1+\sigma} \quad \text{for all vectors } Q. \tag{3.2}$$
Condition (3.2) implies that $L(Q)$ grows superlinearly in $\|Q\|$, at least as fast as $\alpha\|Q\|^{1+\sigma}$ for some $\sigma > 0$. This is satisfied for a large class of functions used in practice, such as the example functions in (2.2) with $\beta > 0$. Define $\Delta(t) = L(Q(t+1)) - L(Q(t))$.

Proposition 3.1. *Suppose that $L(Q)$ is nonnegative and satisfies (3.2), and that there is a constant $B \ge 0$ such that $E[\Delta(t) \mid H(t)] \le B$ for all $t$ and all $H(t)$. Suppose that $E[L(Q(0))] < \infty$. Then we have the following.*

*(a) For all queues $i$, we have
$$\lim_{t\to\infty} \frac{E[|Q_i(t)|]}{t} = 0. \tag{3.3}$$*

*(b) If additionally assumption (3.1) holds for some constants $C > 0$, $\theta > 0$, then for all queues $i$ we have
$$\lim_{t\to\infty} \frac{|Q_i(t)|}{t} = 0 \quad \text{w.p.1}, \tag{3.4}$$*

*where "w.p.1" stands for "with probability 1."*

The condition (3.3) is called *mean rate stability*, and the condition (3.4) is called *rate stability*. Both are weak forms of stability. Parts (a) and (b) of the proposition are proven separately.

*Proof (Proposition 3.1 Part (a)).* By definition of $\Delta(t)$, the statement $E[\Delta(t) \mid H(t)] \le B$ is equivalent to
$$E[L(Q(t+1)) \mid H(t)] \le L(Q(t)) + B. \tag{3.5}$$
The above holds for all slots $t$. Taking expectations and summing over $\tau \in \{0, 1, \ldots, t-1\}$ for some integer $t > 0$ gives
$$E[L(Q(t))] \le E[L(Q(0))] + Bt. \tag{3.6}$$
Now select any queue $i$. From (3.2), we have
$$\alpha |Q_i(t)|^{1+\sigma} \le L(Q(t)). \tag{3.7}$$
Taking expectations of the above and using (3.6) gives:
$$\alpha E\big[|Q_i(t)|^{1+\sigma}\big] \le E[L(Q(0))] + Bt. \tag{3.8}$$
Rearranging the above gives
$$E\big[|Q_i(t)|^{1+\sigma}\big] \le \frac{E[L(Q(0))] + Bt}{\alpha}. \tag{3.9}$$
The above holds for all $t > 0$. Thus, there is a constant $D > 0$ such that
$$E\big[|Q_i(t)|^{1+\sigma}\big] \le Dt \quad \text{for all } t > 0. \tag{3.10}$$
By Jensen's inequality we have $E[|Q_i(t)|]^{1+\sigma} \le E[|Q_i(t)|^{1+\sigma}]$. Using this in (3.10) gives
$$E[|Q_i(t)|] \le (Dt)^{1/(1+\sigma)}. \tag{3.11}$$
Therefore,
$$\frac{E[|Q_i(t)|]}{t} \le \frac{D^{1/(1+\sigma)}}{t^{\sigma/(1+\sigma)}}. \tag{3.12}$$
Taking a limit as $t \to \infty$ proves part (a).

The proof of part (b) follows by noting that the condition (3.10), together with condition (3.1), satisfies the requirements of the following lemma concerning queues with higher-order moments that grow at most linearly.

Lemma 3.2. *Let $X(t)$ be a real-valued stochastic process that evolves over slots $t \in \{0, 1, 2, \ldots\}$, and suppose there are constants $\theta > 0$, $\nu > 0$, $C \ge 0$, $D \ge 0$ such that for all $t$:
$$E\big[|X(t)|^{1+\theta}\big] \le D + Ct, \qquad E\big[|X(t+1) - X(t)|^{1+\nu}\big] \le C.$$
Then
$$\lim_{t\to\infty} \frac{X(t)}{t} = 0 \quad \text{w.p.1}.$$*

*Proof. *See Appendix B.

#### 4. Convergence of Time Averages

This section considers drift conditions of the type (2.3).

##### 4.1. The Law of Large Numbers for Martingale Differences

Let $X(t)$ be a real-valued stochastic process defined over slots $t \in \{1, 2, 3, \ldots\}$. For integers $t > 0$, let $H(t)$ be a filtration that includes the information $\{X(1), \ldots, X(t-1)\}$. The following theorem is from [35] and concerns processes that satisfy $E[X(t) \mid H(t)] = 0$ for all $t$. Such processes are called *martingale differences*.

Theorem 4.1 (from [35]). *Suppose that $X(t)$ is a real-valued random process over $t \in \{1, 2, 3, \ldots\}$ such that $E[X(t) \mid H(t)] = 0$ for all $t$ and all $H(t)$. Further suppose that the second moments $E[X(t)^2]$ are finite for all $t$ and satisfy
$$\sum_{t=1}^{\infty} \frac{E[X(t)^2]}{t^2} < \infty. \tag{4.1}$$
Then
$$\lim_{t\to\infty} \frac{1}{t}\sum_{\tau=1}^{t} X(\tau) = 0 \quad \text{w.p.1}. \tag{4.2}$$*
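A quick numerical illustration of the theorem (my own sketch, not from the paper): the increments below are conditionally zero-mean given the past, but their magnitudes depend on the history, so they form a martingale difference sequence without being i.i.d.; their running average still vanishes.

```python
import math
import random

random.seed(0)

# Martingale-difference sketch: the step size depends on the history (via the
# current partial sum), but the conditional mean given the past is always zero.
T = 200_000
partial_sum = 0.0
for t in range(1, T + 1):
    scale = 1.0 + abs(math.sin(partial_sum))   # history-dependent, bounded in [1, 2]
    x = scale * random.choice((-1.0, 1.0))     # E[X(t) | history] = 0
    partial_sum += x

avg = partial_sum / T
```

Since the increments are bounded by 2, the second-moment condition (4.1) holds trivially, and the running average concentrates near zero (its standard deviation after $T$ slots is at most $2/\sqrt{T} \approx 0.0045$ here).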

Corollary 4.2. *Let $X(t)$ be a real-valued random process over slots $t \in \{1, 2, 3, \ldots\}$, and suppose there is a finite constant $\beta$ such that $E[X(t) \mid H(t)] \le \beta$ for all $t$ and all $H(t)$. Further suppose the second moments $E[X(t)^2]$ are finite for all $t$ and satisfy
$$\sum_{t=1}^{\infty} \frac{E[X(t)^2]}{t^2} < \infty. \tag{4.3}$$
Then
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=1}^{t} X(\tau) \le \beta \quad \text{w.p.1}. \tag{4.4}$$*

*Proof.* Define $Y(t) = X(t) - E[X(t) \mid H(t)]$. Then $E[Y(t) \mid H(t)] = 0$. Further, using (4.3), it is not difficult to verify that $\sum_{t=1}^{\infty} E[Y(t)^2]/t^2$ is finite (note that $Y(t)^2 \le 2X(t)^2 + 2E[X(t) \mid H(t)]^2$, and the expectation of the latter term can be bounded by $2E[X(t)^2]$ via Jensen's inequality). Thus, the conditions required for Theorem 4.1 hold, so we conclude that
$$\lim_{t\to\infty} \frac{1}{t}\sum_{\tau=1}^{t} Y(\tau) = 0 \quad \text{w.p.1}. \tag{4.5}$$
However, because $E[X(t) \mid H(t)] \le \beta$ and $Y(t) = X(t) - E[X(t) \mid H(t)]$, we know that $X(t) \le Y(t) + \beta$ for all $t$. Thus, for all $t \ge 1$, we have
$$\frac{1}{t}\sum_{\tau=1}^{t} X(\tau) \le \frac{1}{t}\sum_{\tau=1}^{t} Y(\tau) + \beta. \tag{4.6}$$
Taking a $\limsup$ of the above and using (4.5) proves the result.

##### 4.2. A Drift-Plus-Penalty Result for General Lyapunov Functions

Now consider the stochastic system with queue vector $Q(t)$ and penalty process $p(t)$, as described in Section 2.1. The history $H(t)$ includes the information $(Q(0); \eta_0, \ldots, \eta_{t-1})$. Assume that there is a finite (possibly negative) constant $p_{\min}$ such that for all $t$ and all $H(t)$:
$$E[p(t) \mid H(t)] \ge p_{\min}. \tag{4.7}$$
Further assume that $E[p(t)^2]$ is finite for all $t$, and
$$\sum_{t=1}^{\infty} \frac{E[p(t)^2]}{t^2} < \infty. \tag{4.8}$$
Let $L(Q)$ be any nonnegative function of $Q$, and define $\Delta(t) = L(Q(t+1)) - L(Q(t))$. Let $f(Q)$ be another nonnegative function of $Q$.

Proposition 4.3. *Suppose $L(Q(0))$ is finite with probability 1, that $E[L(Q(t))]$ are finite for all $t$, that (4.7) and (4.8) hold, and that
$$\lim_{t\to\infty} \frac{L(Q(t))}{t} = 0 \quad \text{w.p.1}. \tag{4.9}$$
Further suppose there are constants $V > 0$, $B \ge 0$, $\epsilon \ge 0$, $p^*$ such that the following drift-plus-penalty condition holds for all $t$ and all possible $H(t)$:
$$E[\Delta(t) + V p(t) \mid H(t)] \le B + V p^* - \epsilon f(Q(t)). \tag{4.10}$$
Then
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p(\tau) \le p^* + \frac{B}{V} \quad \text{w.p.1}, \tag{4.11}$$
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} f(Q(\tau)) \le \frac{B + V(p^* - p_{\min})}{\epsilon} \quad \text{w.p.1}, \tag{4.12}$$
where (4.11) holds when $V > 0$, and (4.12) holds when $\epsilon > 0$.*

*Proof. *Define , and note by algebra that
It follows that
Further, by (4.10), we have
Thus, by Corollary 4.2:
Because , we have
where the final inequality is because . Taking a of the above and using (4.16) gives
which implies (4.11).

The condition (4.10) together with also implies that
Defining and again applying Corollary 4.2 similarly shows that
which implies (4.12).

The result of Proposition 4.3 is *almost* the result we desire, with the exception that the condition (4.9) may be difficult to verify. Note that (4.9) often trivially holds in special cases when all queues are deterministically bounded. Hence, in the flow control algorithms [11, 12] designed from drift-plus-penalty theory, Proposition 4.3 proves that time average penalties satisfy (4.11) with probability 1. The next section shows that the condition (4.9) follows from the drift assumption when $L(Q)$ is a *subquadratic* Lyapunov function. Section 6 treats quadratic Lyapunov functions.

#### 5. Subquadratic Lyapunov Functions

Suppose that the queue processes $Q_i(t)$ are nonnegative for all $i$ and all $t$, and consider a Lyapunov function of the following form:
$$L(Q) = \frac{1}{1+\beta}\sum_{i=1}^{N} w_i Q_i^{1+\beta}, \tag{5.1}$$
where $\beta$ and $w_1, \ldots, w_N$ are positive constants, with $0 < \beta < 1$. The scaling by $1/(1+\beta)$ is only for convenience later.

##### 5.1. The Drift Structure

For each $i$, define the queue difference process $\delta_i(t) = Q_i(t+1) - Q_i(t)$. Assume throughout that there is a finite constant $C > 0$ such that for all $i$, all $t$, and all $H(t)$:
$$E\big[\delta_i(t)^2 \mid H(t)\big] \le C, \qquad E\big[|\delta_i(t)|^{2(1+\beta)} \mid H(t)\big] \le C. \tag{5.2}$$
Because $2(1+\beta) < 4$, the above conditions hold, for example, whenever $E[\delta_i(t)^4 \mid H(t)] \le C'$ for some finite constant $C'$. In particular, this holds if queues have the structure (2.1) and arrival and service processes $a_i(t)$, $b_i(t)$ have bounded fourth moments, regardless of the history $H(t)$.

Appendix D shows that (5.2) implies there is a real number $D > 0$ such that
$$E\big[\Delta(t)^2 \mid H(t)\big] \le D\Big(1 + \sum_{i=1}^{N} Q_i(t)^{2\beta}\Big). \tag{5.3}$$
Further, Appendix D shows that there is a real number $B > 0$ such that
$$E[\Delta(t) \mid H(t)] \le B + \sum_{i=1}^{N} w_i Q_i(t)^{\beta}\, E[\delta_i(t) \mid H(t)]. \tag{5.4}$$
Again let $p(t)$ be a penalty process. Algorithms that seek to minimize a bound on $E[\Delta(t) + V p(t) \mid H(t)]$ every slot often lead to drift conditions of the following form (see Section 7):
$$E[\Delta(t) + V p(t) \mid H(t)] \le B + V p^* - \epsilon \sum_{i=1}^{N} w_i Q_i(t)^{\beta}, \tag{5.5}$$
for some $\epsilon \ge 0$. This has the general form (4.10) with $f(Q) = \sum_{i=1}^{N} w_i Q_i^{\beta}$.

##### 5.2. Drift-Plus-Penalty for Subquadratics

Proposition 5.1. *Let $L(Q)$ be a subquadratic Lyapunov function of the form (5.1), with $0 < \beta < 1$. Assume that $E[L(Q(0))] < \infty$, (5.2) holds for the queue difference processes $\delta_i(t)$, and (4.7)-(4.8) hold for the penalty process $p(t)$. Suppose that the drift-plus-penalty condition (5.5) holds for some constants $V > 0$, $B \ge 0$, $\epsilon \ge 0$, $p^*$. Then for all $i$:
$$\lim_{t\to\infty} \frac{Q_i(t)}{t} = 0 \quad \text{w.p.1}. \tag{5.6}$$
Further, we have
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p(\tau) \le p^* + \frac{B}{V} \quad \text{w.p.1}, \tag{5.7}$$
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \sum_{i=1}^{N} w_i Q_i(\tau)^{\beta} \le \frac{B + V(p^* - p_{\min})}{\epsilon} \quad \text{w.p.1}, \tag{5.8}$$
where (5.7) holds when $V > 0$ and (5.8) holds when $\epsilon > 0$.*

*Proof. *The drift-plus-penalty expression (5.5) implies that
where we define . Taking expectations of the above yields . The conditions of the theorem ensure that all conditions hold that are required for Proposition 3.1, and so Proposition 3.1 implies that all queues are rate stable, so that (5.6) holds.

Define . If we can show that
then our system satisfies all the requirements for Proposition 4.3, and so (5.7)-(5.8) hold. Thus, it suffices to prove (5.10). To this end, we have
for some real number (where we have used the fact (B.6) in Appendix B). This together with (5.3) gives
By (5.2), to prove that the above is finite, it suffices to prove that for each we have:

Recall from (5.9) that for all , and so
Summing the above over for some integer gives the following:
Thus, we have
Thus, there is a real number such that for each and all
Because , we have . Thus, by Jensen's inequality for the concave function over , we have
Thus, we have
where the final inequality holds because the summation terms are , and . This proves (5.13), completing the proof.

Note that the final line of the proof requires $\sum_{t \ge 1} t^{\beta - 2}$ to be summable, which is true for $\beta < 1$ but not true for $\beta = 1$. This is one reason why quadratic Lyapunov functions must be considered separately. Another distinction is that the above proposition holds even for $\epsilon = 0$, whereas we require $\epsilon > 0$ for the quadratic analysis in the next section.

#### 6. Quadratic Lyapunov Functions

Here, we allow the queue processes $Q_i(t)$ to possibly take negative values. Consider quadratic Lyapunov functions:
$$L(Q) = \frac{1}{2}\sum_{i=1}^{N} w_i Q_i^2, \tag{6.1}$$
where $w_i$ are any positive weights. This is the same form as (5.1) in the previous section, assuming that $\beta = 1$. For $\beta = 1$, the assumptions (5.2) become
$$E\big[\delta_i(t)^2 \mid H(t)\big] \le C, \qquad E\big[\delta_i(t)^4 \mid H(t)\big] \le C. \tag{6.2}$$

Define $f(Q) = \sum_{i=1}^{N} w_i |Q_i|$. These quadratic Lyapunov functions typically lead to drift-plus-penalty expressions of the following form (see [36]):
$$E[\Delta(t) + V p(t) \mid H(t)] \le B + V p^* - \epsilon \sum_{i=1}^{N} w_i |Q_i(t)|, \tag{6.3}$$
which is the same as (4.10) with this choice of $f(Q)$. Again define $\Delta(t) = L(Q(t+1)) - L(Q(t))$.

Proposition 6.1. *Suppose that $L(Q)$ has the quadratic form (6.1), and $E[Q_i(0)^4] < \infty$ for all $i$. Assume that the bounded moment conditions (6.2), (4.7), and (4.8) hold. If the drift-plus-penalty expression (6.3) holds for some constants $B \ge 0$, $\epsilon > 0$, $V > 0$, $p^*$, for all $t$ and all $H(t)$, then
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p(\tau) \le p^* + \frac{B}{V} \quad \text{w.p.1}, \tag{6.4}$$
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \sum_{i=1}^{N} w_i |Q_i(\tau)| \le \frac{B + V(p^* - p_{\min})}{\epsilon} \quad \text{w.p.1}, \tag{6.5}$$
where (6.4) holds when $V > 0$.*

The proposition is proven with the assistance of the following lemma.

Lemma 6.2. *Suppose that the same assumptions of Proposition 6.1 hold, and that the following additional assumption holds. For all , we have
**
Then (6.4) and (6.5) hold.*

*Proof (see Lemma 6.2). *Using , by Proposition 4.3, it suffices to show that
To this end, we have
Therefore, by (B.6) in Appendix B, there is a real number such that
However, because , we have
Thus, we have
Because and , we also have .

To prove Proposition 6.1, it remains only to show that the requirement used in Lemma 6.2 follows from the other assumptions in the lemma and hence can be removed. Note that the drift-plus-penalty expression (6.3) together with implies that for all and all Then the requirement follows by the next proposition.

Proposition 6.3. *Suppose that has the quadratic form (6.1), and that for all . Suppose that (6.2) holds, and that there are constants , , and for such that for all and all **
Then:**(a) For all we have
**(b) For all we have
**
where is an indicator function that is if , and else. *

Inequalities (6.15)–(6.20) represent six major forms of queue stability. Thus, all of these forms of stability are satisfied whenever the bounded second and fourth moment conditions (6.2) hold, and when the drift condition (6.13) holds for a quadratic Lyapunov function.

*Proof. *Part (a) is proven in Appendix C. To prove part (b), note that second moments are bounded because (6.2) holds. Further, the quadratic Lyapunov function (6.1) satisfies the condition (3.2), and the expression (6.13) implies that
Thus, the rate stability and mean rate stability results (6.15)-(6.16) follow by Proposition 3.1. The result (6.17) follows by (2.8) (with ). The result (6.18) follows by part (a) together with Lemma 6.2 (for ). The results (6.19) and (6.20) follow immediately from (6.17) and (6.18), respectively, by noting that for all we have .

#### 7. Stochastic Network Optimization

Here we use Propositions 5.1 and 6.1 to analyze an important class of stochastic queueing networks. This is the same scenario treated in [4, 36]. However, while the work in [4] obtains bounds on the time average expectations, here we obtain bounds on the pure time averages. Further, the works [4, 36] treat only quadratic Lyapunov functions. Here we treat both quadratics and subquadratics.

Consider a network of $N$ queues. The queue vector $Q(t) = (Q_1(t), \ldots, Q_N(t))$ evolves in slotted time with update equation as follows:
$$Q_i(t+1) = \max[Q_i(t) - b_i(t), 0] + a_i(t), \quad i \in \{1, \ldots, N\}, \tag{7.1}$$
where $a_i(t)$ and $b_i(t)$ are arrival and service variables, respectively, for queue $i$. These are determined on slot $t$ by general functions $\hat{a}_i(\cdot)$, $\hat{b}_i(\cdot)$ of a *network state* $\omega(t)$ and a *control action* $\alpha(t)$ as follows:
$$a_i(t) = \hat{a}_i(\alpha(t), \omega(t)), \qquad b_i(t) = \hat{b}_i(\alpha(t), \omega(t)), \tag{7.2}$$
where the control action $\alpha(t)$ is made every slot $t$ with knowledge of the current $\omega(t)$ and is chosen within some abstract set $A_{\omega(t)}$. The value $\omega(t)$ can represent random arrival and channel state information on slot $t$, and $\alpha(t)$ can represent a resource allocation decision. For simplicity, assume that the process $\omega(t)$ is independent and identically distributed (i.i.d.) over slots.

The control action additionally incurs a vector of *penalties* $(p_0(t), p_1(t), \ldots, p_M(t))$, again given by general functions of $\alpha(t)$ and $\omega(t)$ as follows:
$$p_l(t) = \hat{p}_l(\alpha(t), \omega(t)), \quad l \in \{0, 1, \ldots, M\}. \tag{7.3}$$
For $t > 0$, define $\bar{a}_i(t)$, $\bar{b}_i(t)$, $\bar{p}_l(t)$, $\bar{Q}_i(t)$ as time averages over the first $t$ slots:
$$\bar{a}_i(t) = \frac{1}{t}\sum_{\tau=0}^{t-1} a_i(\tau), \qquad \bar{b}_i(t) = \frac{1}{t}\sum_{\tau=0}^{t-1} b_i(\tau), \tag{7.4}$$
$$\bar{p}_l(t) = \frac{1}{t}\sum_{\tau=0}^{t-1} p_l(\tau), \qquad \bar{Q}_i(t) = \frac{1}{t}\sum_{\tau=0}^{t-1} Q_i(\tau). \tag{7.5}$$

Thus, every slot $t$, the controller observes $\omega(t)$ and chooses $\alpha(t) \in A_{\omega(t)}$, with the goal of solving the following stochastic network optimization problem:
$$\text{Minimize:} \quad \limsup_{t\to\infty} \bar{p}_0(t), \tag{7.6}$$
$$\text{Subject to:} \quad \limsup_{t\to\infty} \bar{p}_l(t) \le 0 \quad \forall l \in \{1, \ldots, M\}, \tag{7.7}$$
$$\text{all queues } Q_i(t) \text{ are rate stable}, \tag{7.8}$$
$$\alpha(t) \in A_{\omega(t)} \quad \forall t. \tag{7.9}$$
Assume that the problem is feasible (so that the constraints can be met), and define $p_0^{opt}$ as the infimum value of (7.6) over all feasible algorithms.

Typical penalties can represent *power expenditures* [11]. For example, suppose that $p_l(t) = g_l(t) - P_l^{av}$, where $g_l(t)$ is the power incurred in component $l$ of the network on slot $t$, and $P_l^{av}$ is a required time average power expenditure. Then ensuring $\limsup_{t\to\infty} \bar{p}_l(t) \le 0$ ensures that $\limsup_{t\to\infty} \bar{g}_l(t) \le P_l^{av}$, so that the desired time average power constraint is met.

To ensure that the time average penalty constraints are met, for each $l \in \{1, \ldots, M\}$, we define a *virtual queue* $Z_l(t)$ as follows:
$$Z_l(t+1) = \max[Z_l(t) + p_l(t), 0]. \tag{7.10}$$
Clearly $Z_l(\tau+1) \ge Z_l(\tau) + p_l(\tau)$ for any integer $\tau \ge 0$. Summing this over $\tau \in \{0, 1, \ldots, t-1\}$ yields the following (for any integer $t > 0$):
$$Z_l(t) \ge Z_l(0) + \sum_{\tau=0}^{t-1} p_l(\tau). \tag{7.11}$$
Dividing by $t$ and rearranging terms yields
$$\frac{1}{t}\sum_{\tau=0}^{t-1} p_l(\tau) \le \frac{Z_l(t) - Z_l(0)}{t}. \tag{7.12}$$
It follows that if $Z_l(t)$ is rate stable for all $l$, so that $\lim_{t\to\infty} Z_l(t)/t = 0$ with probability 1, then the constraint (7.7) is satisfied with probability 1.
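The virtual queue mechanism can be sketched as follows (an illustrative simulation with an assumed penalty distribution, not from the paper); the per-slot inequality bounding the running penalty average by the scaled virtual queue is checked on every slot:

```python
import random

random.seed(2)

# Virtual queue for one constraint: Z(t+1) = max(Z(t) + p(t), 0), with Z(0) = 0.
# Then (1/t) * sum of p is bounded by Z(t)/t on every slot.
Z = 0.0
total_p = 0.0
bounds_ok = True
T = 10_000
for t in range(1, T + 1):
    p = random.uniform(-1.0, 0.9)   # penalty with negative mean: constraint feasible
    total_p += p
    Z = max(Z + p, 0.0)
    bounds_ok = bounds_ok and (total_p / t <= Z / t + 1e-12)

final_rate = Z / T   # small when Z is rate stable
```

Because the penalty has negative mean, the virtual queue has negative drift and stays small, so $Z(T)/T$ is tiny and the time average constraint is (numerically) satisfied.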

For simplicity, assume that all queues are initially empty, so that $Q_i(0) = 0$ for all $i$ and $Z_l(0) = 0$ for all $l$. Now define $\Theta(t) = (Q_1(t), \ldots, Q_N(t), Z_1(t), \ldots, Z_M(t))$ as the combined queue vector, and define the Lyapunov function as follows:
$$L(t) = \frac{1}{1+\beta}\left[\sum_{i=1}^{N} Q_i(t)^{1+\beta} + \sum_{l=1}^{M} Z_l(t)^{1+\beta}\right], \tag{7.13}$$
where $0 < \beta \le 1$. The case $\beta = 1$ is a quadratic Lyapunov function, and the case $0 < \beta < 1$ is a subquadratic. The drift-plus-penalty algorithm thus seeks to minimize a bound on $E[\Delta(t) + V p_0(t) \mid H(t)]$.

##### 7.1. Computing the Drift-Plus-Penalty Inequality

Assume that the functions $\hat{a}_i$, $\hat{b}_i$, and $\hat{p}_l$ satisfy the following for all possible $\alpha \in A_{\omega}$ and all possible $\omega$:
$$\hat{a}_i(\alpha, \omega) \ge 0, \qquad \hat{b}_i(\alpha, \omega) \ge 0, \tag{7.14}$$
$$p_l^{\min} \le \hat{p}_l(\alpha, \omega) \le p_l^{\max}, \tag{7.15}$$
where $p_l^{\min}$ and $p_l^{\max}$ are deterministic bounds on $\hat{p}_l(\alpha, \omega)$ for all $l$. Also assume that there is a finite constant $C_1$ such that for all (possibly randomized) choices of $\alpha(t) \in A_{\omega(t)}$ in reaction to the i.i.d. $\omega(t)$, we have
$$E\big[\hat{a}_i(\alpha(t), \omega(t))^4\big] \le C_1, \qquad E\big[\hat{b}_i(\alpha(t), \omega(t))^4\big] \le C_1, \tag{7.16}$$
where the expectations are taken with respect to the distribution of the i.i.d. $\omega(t)$ process, and the possibly randomized decisions $\alpha(t)$.

Using the results of (5.4) for subquadratics and results in [4] for quadratics, it can be shown that the drift-plus-penalty expression satisfies the following bound:
$$E[\Delta(t) + V p_0(t) \mid H(t)] \le B + V E[\hat{p}_0(\alpha(t), \omega(t)) \mid H(t)] + \sum_{i=1}^{N} Q_i(t)^{\beta}\, E\big[\hat{a}_i(\alpha(t), \omega(t)) - \hat{b}_i(\alpha(t), \omega(t)) \mid H(t)\big] + \sum_{l=1}^{M} Z_l(t)^{\beta}\, E\big[\hat{p}_l(\alpha(t), \omega(t)) \mid H(t)\big], \tag{7.17}$$
for some finite constant $B > 0$.

##### 7.2. The Dynamic Algorithm

It is easy to show that, given $H(t)$, the right-hand side of (7.17) is minimized on slot $t$ by the policy that observes the current queue values $Q_i(t)$, $Z_l(t)$ and the current $\omega(t)$ and chooses $\alpha(t) \in A_{\omega(t)}$ to minimize the following expression:
$$V \hat{p}_0(\alpha(t), \omega(t)) + \sum_{i=1}^{N} Q_i(t)^{\beta}\big[\hat{a}_i(\alpha(t), \omega(t)) - \hat{b}_i(\alpha(t), \omega(t))\big] + \sum_{l=1}^{M} Z_l(t)^{\beta}\, \hat{p}_l(\alpha(t), \omega(t)). \tag{7.18}$$
This is done at the beginning of each slot $t$, and at the end of the slot the actual and virtual queues $Q_i(t)$, $Z_l(t)$ are updated according to (7.1) and (7.10). This policy does not require knowledge of the probability distribution for $\omega(t)$. One difficulty is that it may not be possible to achieve the infimum of the above expression over the set $A_{\omega(t)}$ because we are using general (possibly noncontinuous) functions $\hat{a}_i(\cdot)$, $\hat{b}_i(\cdot)$, $\hat{p}_l(\cdot)$, and a general (possibly noncompact) set $A_{\omega(t)}$. Thus, we simply assume that there is a finite constant $C \ge 0$ such that our algorithm chooses $\alpha(t) \in A_{\omega(t)}$ to come within the additive constant $C$ of the infimum of (7.18) on every slot $t$, so that the chosen action satisfies
$$V \hat{p}_0(\alpha(t), \omega(t)) + \sum_{i=1}^{N} Q_i(t)^{\beta}\big[\hat{a}_i(\alpha(t), \omega(t)) - \hat{b}_i(\alpha(t), \omega(t))\big] + \sum_{l=1}^{M} Z_l(t)^{\beta}\, \hat{p}_l(\alpha(t), \omega(t)) \le C + \inf_{\alpha \in A_{\omega(t)}}\left[V \hat{p}_0(\alpha, \omega(t)) + \sum_{i=1}^{N} Q_i(t)^{\beta}\big[\hat{a}_i(\alpha, \omega(t)) - \hat{b}_i(\alpha, \omega(t))\big] + \sum_{l=1}^{M} Z_l(t)^{\beta}\, \hat{p}_l(\alpha, \omega(t))\right]. \tag{7.19}$$
Such a choice of $\alpha(t)$ is called a *$C$-additive approximation*. The case $C = 0$ corresponds to achieving the exact infimum every slot.
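A minimal sketch of the resulting algorithm on a toy instance (all parameters here are hypothetical, not from the paper): one queue, a two-state i.i.d. channel $\omega(t)$, and a power penalty. Each slot the rule picks the action minimizing $V \cdot \text{power} - Q(t) \cdot \text{service}$, which is the single-queue, $\beta = 1$ specialization of the expression above with no additional constraint queues.

```python
import random

random.seed(3)

V = 10.0               # performance weight (assumed toy parameter)
T = 50_000
Q = 0.0
q_max = 0.0
power_sum = 0.0

for t in range(T):
    good = random.random() < 0.5                 # channel state omega(t)
    cost = 1.0 if good else 3.0                  # power to serve one packet now
    a = 1.0 if random.random() < 0.3 else 0.0    # Bernoulli(0.3) arrivals
    # Drift-plus-penalty rule: transmit iff V*cost - Q*1 <= 0, i.e. Q >= V*cost.
    transmit = Q >= V * cost
    b = 1.0 if transmit else 0.0
    power_sum += cost if transmit else 0.0
    Q = max(Q - b, 0.0) + a                      # update of type (7.1)
    q_max = max(q_max, Q)

avg_power = power_sum / T
```

The thresholds ($Q \ge 10$ on good slots, $Q \ge 30$ on bad slots) keep the backlog deterministically below 30 here, and the policy learns to spend power almost exclusively on cheap (good-channel) slots, so the average power stays near the 0.3 arrival rate times the unit good-state cost.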

##### 7.3. $\omega$-Only Policies and the Slackness Condition

Define an *$\omega$-only policy* to be one that chooses $\alpha(t) \in A_{\omega(t)}$ every slot according to a stationary and randomized decision based only on the observed $\omega(t)$ (in particular, being independent of queue backlogs). In [36] it is shown that if the problem (7.6)–(7.9) is feasible, then for all $\delta > 0$ there must be an $\omega$-only algorithm $\alpha^*(t)$ that satisfies the following:
$$E[\hat{p}_0(\alpha^*(t), \omega(t))] \le p_0^{opt} + \delta, \qquad E[\hat{p}_l(\alpha^*(t), \omega(t))] \le \delta \quad \forall l, \qquad E[\hat{a}_i(\alpha^*(t), \omega(t))] \le E[\hat{b}_i(\alpha^*(t), \omega(t))] + \delta \quad \forall i. \tag{7.20}$$

We assume that the problem is feasible. It is often useful to assume that the following stronger *slackness assumption* is satisfied. There exists an $\epsilon > 0$ and a particular $\omega$-only policy $\alpha'(t)$ (typically different from the policy that yields (7.20)) that yields the following:
$$E[\hat{p}_l(\alpha'(t), \omega(t))] \le -\epsilon \quad \forall l \in \{1, \ldots, M\}, \qquad E[\hat{a}_i(\alpha'(t), \omega(t))] \le E[\hat{b}_i(\alpha'(t), \omega(t))] - \epsilon \quad \forall i. \tag{7.21}$$
This assumption is similar to a Slater-type assumption used in convex optimization theory [5].

##### 7.4. Performance Bounds for Subquadratics

Assume $0 < \beta < 1$. Because our policy comes within $C$ of minimizing the right-hand side of (7.17) every slot (given the observed $H(t)$), we have for all $t$ and all possible $H(t)$:
$$E[\Delta(t) + V p_0(t) \mid H(t)] \le B + C + V E[\hat{p}_0(\alpha^*(t), \omega(t)) \mid H(t)] + \sum_{i=1}^{N} Q_i(t)^{\beta}\, E\big[\hat{a}_i(\alpha^*(t), \omega(t)) - \hat{b}_i(\alpha^*(t), \omega(t)) \mid H(t)\big] + \sum_{l=1}^{M} Z_l(t)^{\beta}\, E\big[\hat{p}_l(\alpha^*(t), \omega(t)) \mid H(t)\big], \tag{7.22}$$
where $\alpha^*(t)$ is any other decision that can be implemented on slot $t$. Fix any $\delta > 0$. Using the $\omega$-only policy designed to achieve (7.20) and noting that this policy makes decisions independent of $H(t)$ yields
$$E[\Delta(t) + V p_0(t) \mid H(t)] \le B + C + V(p_0^{opt} + \delta) + \delta\sum_{i=1}^{N} Q_i(t)^{\beta} + \delta\sum_{l=1}^{M} Z_l(t)^{\beta}. \tag{7.23}$$
The above holds for all $\delta > 0$. Taking a limit as $\delta \to 0$ yields
$$E[\Delta(t) + V p_0(t) \mid H(t)] \le B + C + V p_0^{opt}, \tag{7.24}$$
where for simplicity we write $p_0(t) = \hat{p}_0(\alpha(t), \omega(t))$ on the left-hand side. Inequality (7.24) is in the exact form of the drift-plus-penalty condition (5.5) (with $\epsilon = 0$). Recall that the penalty $p_0(t)$ is deterministically lower bounded by the finite (possibly negative) value $p_0^{\min}$. Further, the moment bounds (7.16) can easily be shown to imply that the boundedness assumptions (5.2), (4.7), and (4.8) hold. Thus, we can apply Proposition 5.1 to conclude that all queues are rate stable. In particular, $\lim_{t\to\infty} Z_l(t)/t = 0$ with probability 1 for all $l$, so that the constraints (7.7) are satisfied as follows:
$$\limsup_{t\to\infty} \bar{p}_l(t) \le 0 \quad \text{w.p.1} \quad \forall l \in \{1, \ldots, M\}. \tag{7.25}$$
Further, again by Proposition 5.1, we have
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p_0(\tau) \le p_0^{opt} + \frac{B + C}{V} \quad \text{w.p.1}. \tag{7.26}$$
Thus, all constraints of the problem are met, and the time average of the penalty $p_0(t)$ is within $(B + C)/V$ of optimality.

The above did not require the slackness assumption. However, we can understand the tradeoff with $V$ by assuming that the slackness assumption holds, so that a policy $\alpha'(t)$ exists that satisfies (7.21) for some $\epsilon > 0$. Using (7.21) in the right-hand side of (7.22) gives
$$E[\Delta(t) + V p_0(t) \mid H(t)] \le B + C + V p_0^{\max} - \epsilon\sum_{i=1}^{N} Q_i(t)^{\beta} - \epsilon\sum_{l=1}^{M} Z_l(t)^{\beta}, \tag{7.27}$$
from which we obtain by Proposition 5.1 that
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\left[\sum_{i=1}^{N} Q_i(\tau)^{\beta} + \sum_{l=1}^{M} Z_l(\tau)^{\beta}\right] \le \frac{B + C + V(p_0^{\max} - p_0^{\min})}{\epsilon} \quad \text{w.p.1}. \tag{7.28}$$

##### 7.5. Performance Bounds for Quadratics

Similarly, we can obtain performance bounds for the quadratic case $\beta = 1$. Without the slackness condition, we can obtain an expression identical to (7.24). This ensures that all queues are rate stable (by Proposition 3.1), so that all desired constraints are satisfied. However, we cannot use Proposition 6.1 because the drift expression has $\epsilon = 0$. Thus, we require the slackness assumption to proceed further. Under this assumption, we derive (similar to (7.27)):
$$E[\Delta(t) + V p_0(t) \mid H(t)] \le B + C + V p_0^{\max} - \epsilon\sum_{i=1}^{N} |Q_i(t)| - \epsilon\sum_{l=1}^{M} |Z_l(t)|, \tag{7.29}$$
from which we obtain by Proposition 6.1
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\left[\sum_{i=1}^{N} |Q_i(\tau)| + \sum_{l=1}^{M} |Z_l(\tau)|\right] \le \frac{B + C + V(p_0^{\max} - p_0^{\min})}{\epsilon} \quad \text{w.p.1}. \tag{7.30}$$
We also know from (7.29) and Proposition 6.3 that the Lyapunov function satisfies $\lim_{t\to\infty} L(t)/t = 0$ w.p.1, so that we conclude from (7.24) that
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p_0(\tau) \le p_0^{opt} + \frac{B + C}{V} \quad \text{w.p.1}. \tag{7.31}$$

#### 8. Conclusions

This work derives sample path stability and time average penalty results for stochastic networks. It uses the law of large numbers for martingale differences to strengthen existing performance guarantees from time average expectations to pure time averages, with probability 1. This requires only mild bounded second and fourth moment assumptions, and these assumptions were shown to be necessary via two simple counterexamples in Appendix A. The analysis considers both quadratic and subquadratic Lyapunov functions and uses both types of functions to develop and analyze algorithms for minimizing a time average penalty subject to network stability constraints and additional time average constraints. These results are useful for control of queueing networks, and for other stochastic problems of optimizing time averages subject to time average constraints.

#### Appendices

#### A. Counterexamples

This appendix provides two examples of systems that satisfy the drift-plus-penalty condition (2.3), so that the desired time average expectations (2.7)-(2.8) are satisfied. However, the corresponding sample path time averages do not hold, because in each case one of the bounded moment assumptions (6.2), (4.7), (4.8) is violated.

##### A.1. The Penalty Process

Define a process that is independent over slots and satisfies the following: Formally define for all , and use , , so that for all and all . It is clear that . However, because the probability decays slowly, is equal to for infinitely many slots (with probability 1), and for any such time we have Thus, with probability 1 we have This differs from the time average expectation because the second moment condition (4.8) is violated.

##### A.2. The Queue Process

Let , , for all . Define a queue process with and with dynamics as follows: where is independent over slots and satisfies Define and define . Clearly , and so a basic calculation shows that for all and all we have Using , it follows by (2.8) that However, with probability 1, we have that for an infinite number of slots . Let be such a slot, and assume that . Because can decrease by at most every slot, it follows that for all we have where we have used the fact that . Then we have Thus Since (with probability 1) this holds for infinitely many , we have with probability 1 This differs from the time average expectation because the fourth moment condition (6.2) is violated.

#### B. Proof of Lemma 3.2

Here, we prove Lemma 3.2. Let $X(t)$ be a real-valued random process over slots $t \in \{0, 1, 2, \ldots\}$. Let $\{T_k\}_{k=1}^{\infty}$ be a subsequence of positive integers that increase to infinity. By the Borel-Cantelli lemma, to show that $\lim_{k\to\infty} X(T_k)/T_k = 0$ with probability 1, it suffices to show that for any $\delta > 0$ we have
$$\sum_{k=1}^{\infty} \Pr\big[|X(T_k)|/T_k > \delta\big] < \infty. \tag{B.1}$$
Suppose there are constants $C \ge 0$, $D \ge 0$ such that
$$E\big[|X(t)|^{1+\theta}\big] \le D + Ct \quad \text{for all } t. \tag{B.2}$$
We first treat the case $\theta > 1$.

Lemma B.1. *If (B.2) holds for $\theta > 1$, then $\lim_{t\to\infty} X(t)/t = 0$ with probability 1.*

*Proof (Fix $\delta > 0$).* From the Markov inequality, we have for all $t > 0$:
$$\Pr\big[|X(t)|/t > \delta\big] \le \frac{E\big[|X(t)|^{1+\theta}\big]}{(\delta t)^{1+\theta}} \le \frac{D + Ct}{\delta^{1+\theta}\, t^{1+\theta}}. \tag{B.3}$$
Since $\theta > 1$, the sum of the above terms over $t$ is finite, proving the result.

Now suppose that $0 < \theta \le 1$, and define $\gamma = 2/\theta$ (so that $\gamma \ge 2$). For $k \in \{1, 2, 3, \ldots\}$, define the increasing subsequence of times $T_k = \lceil k^{\gamma} \rceil$.

Lemma B.2. *If (B.2) holds for $0 < \theta \le 1$, then $\lim_{k\to\infty} X(T_k)/T_k = 0$ with probability 1.*

*Proof (Lemma B.2, Fix $\delta > 0$).* From (B.3), we have for all $k$:
$$\Pr\big[|X(T_k)|/T_k > \delta\big] \le \frac{D + C T_k}{\delta^{1+\theta}\, T_k^{1+\theta}}.$$
Using the definition of $T_k$ gives $T_k^{\theta} \ge k^2$, and so
$$\Pr\big[|X(T_k)|/T_k > \delta\big] \le \frac{D}{\delta^{1+\theta}\, k^{2(1+\theta)/\theta}} + \frac{C}{\delta^{1+\theta}\, k^{2}}.$$
Thus, the sum of the above terms over $k$ is finite, proving the result.

To complete the proof that $\lim_{t\to\infty} X(t)/t = 0$ with probability 1 for the case $0 < \theta \le 1$, we require the following simple lemma, which is also useful in other parts of this paper.

Lemma B.3. *Let $Z_1, \ldots, Z_M$ be a collection of random variables, let $c_1, \ldots, c_M$ be real numbers, and assume that $p \ge 1$. Then we have
$$E\left[\Big|\sum_{m=1}^{M} c_m Z_m\Big|^p\right] \le \Big(\sum_{m=1}^{M} |c_m|\Big)^{p-1} \sum_{m=1}^{M} |c_m|\, E\big[|Z_m|^p\big]. \tag{B.6}$$*

*Proof.* We have
$$\Big|\sum_{m=1}^{M} c_m Z_m\Big|^p = \Big(\sum_{m=1}^{M} |c_m|\Big)^p \Big|\sum_{m=1}^{M} \frac{|c_m|}{\sum_{j} |c_j|}\,\mathrm{sgn}(c_m)\, Z_m\Big|^p \le \Big(\sum_{m=1}^{M} |c_m|\Big)^p \sum_{m=1}^{M} \frac{|c_m|}{\sum_{j} |c_j|}\, |Z_m|^p,$$
where the final inequality holds by Jensen's inequality for the convex function $|x|^p$. Taking expectations of both sides proves the lemma.

Now define . Recall that for . Define as the set of slots