Journal of Applied Mathematics
Volume 2012, Article ID 831909, 35 pages
Research Article

Stability and Probability 1 Convergence for Queueing Networks via Lyapunov Optimization

Electrical Engineering Department, University of Southern California, 3740 McClintock Avenue, Room 500, Los Angeles, CA 90089-2565, USA

Received 15 August 2011; Revised 17 March 2012; Accepted 11 April 2012

Academic Editor: P. G. L. Leach

Copyright © 2012 Michael J. Neely. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Lyapunov drift is a powerful tool for optimizing stochastic queueing networks subject to stability. However, the most convenient drift conditions often provide results in terms of a time average expectation, rather than a pure time average. This paper provides an extended drift-plus-penalty result that ensures stability with desired time averages with probability 1. The analysis uses the law of large numbers for martingale differences. This is applied to quadratic and subquadratic Lyapunov methods for minimizing the time average of a network penalty function subject to stability and to additional time average constraints. Similar to known results for time average expectations, this paper shows that pure time average penalties can be pushed arbitrarily close to optimality, with a corresponding tradeoff in average queue size. Further, in the special case of quadratic Lyapunov functions, the basic drift condition is shown to imply all major forms of queue stability.

1. Introduction

This paper considers Lyapunov methods for analysis and control of queueing networks. Landmark work in [1, 2] uses the drift of a quadratic Lyapunov function to design general max-weight algorithms for network stability. Work in [3, 4] extends this result using a drift-plus-penalty algorithm that leads to joint stability and penalty optimization. The results in [1, 2] are developed for systems that evolve on ergodic Markov chains with a countably infinite state space. The results in [3, 4] apply to more general systems but are stated in terms of time average expectations, rather than pure time averages. Time average expectations can often be translated into pure time averages when the system has a Markov structure and some additional assumptions are satisfied, such as when the system regularly returns to a renewal state. However, these additional assumptions often fail and can be difficult to verify for particular systems of interest. This paper seeks to provide simple conditions that ensure desirable queue sample paths and desirable time average penalties with probability 1.

First, a basic drift condition for a general class of Lyapunov functions is shown to imply two weak forms of queue stability, called rate stability and mean rate stability. Next, this paper focuses on quadratic and subquadratic Lyapunov functions. Using the law of large numbers for martingale differences, a simple drift-plus-penalty condition is shown to ensure queue stability together with desirable time averages of an associated network penalty function. In the special case of quadratic Lyapunov functions, it is shown that the basic drift condition implies all major forms of queue stability, including rate and mean rate stability, as well as four stronger versions. These results require only mild bounded fourth moment conditions on queue difference processes and bounded second-moment conditions on penalty processes and do not require a Markov structure or renewal assumptions. Two simple examples are given in Appendix A to show what can go wrong if these bounded moment conditions are violated.

Finally, these results are used to design and analyze a class of algorithms for minimizing the time average of a network penalty subject to queue stability constraints and additional time average constraints. It is shown that the time average penalty can be pushed arbitrarily close to optimality, with a corresponding tradeoff in queue size. In the special case of quadratic functions, the tradeoff results of [3, 4] are recovered in a stronger form. They are shown to hold for time averages with probability 1 (rather than time average expectations). The results for quadratic functions in this paper assume the problem is feasible and that the constraints can be achieved with “𝜖-slackness” (similar to a Slater-type condition of static optimization theory [5]). However, this paper also obtains results for subquadratics. Algorithms based on subquadratics are shown to provide desired results whenever the problem is feasible, without requiring the additional slackness assumption. This analysis is useful for control of queueing networks, and for other stochastic problems that seek to optimize time averages subject to time average constraints.

1.1. On Relationships between Time Average Expectations and Time Averages

It is known by Fatou's Lemma that if a random process is deterministically lower bounded (such as being nonnegative) and has time averages that converge to a constant with probability 1, then this constant must be less than or equal to the liminf time average expectation (see, e.g., [6]). This can be used to translate the existing bounds in [4], which hold for time average expectations, into sample path bounds that hold with probability 1. However, this translation is only justified in the special case when the processes are suitably lower-bounded and have convergent time averages. Time average convergence can often be shown when the system is defined on a Markov chain with suitable renewal properties [7, 8]. Further, queue stability is often defined in terms of positive Harris recurrence [9, 10] although systems with positive Harris recurrence do not always have finite average queue size. For example, consider a standard 𝑀/𝐺/1 queue with service times that have finite means 𝔼[𝑋] but infinite second moments, and with arrival rate 𝜆<1/𝔼[𝑋]. The system spends a positive fraction of time in the 0 state but has infinite time average queue size.

Networks with flow control often have a structure that yields deterministically bounded queues [11, 12]. While this is perhaps the strongest form of stability, it requires special structure. Primal-dual algorithms are considered for scheduling in wireless systems with “infinite backlog” in [13, 14] and shown to converge to a utility-optimal operating point although this work does not consider queueing or time average constraints. Related primal-dual algorithms are treated in [15, 16] for systems with queues, where [15] shows utility optimality for a fluid version of the system, and [16] proves that the actual network exhibits rate stability with near-optimal utility. Related analysis is considered in [17] for systems with countably infinite-state ergodic Markov chains, and utility-optimal scheduling is treated for deterministic systems in [18]. The approaches in [13–18] do not obtain the desired tradeoffs between network utility and queue size.

1.2. Prior Work on Quadratics and Other Lyapunov Functions

Much prior work on queue stability uses quadratic Lyapunov functions, including [1, 2, 19–21] for ergodic systems defined on countably infinite Markov chains, and [22] for more general systems but where stability is defined in terms of a time average expectation. Stability with exponential Lyapunov functions is treated in [23, 24], and stability with more general Lyapunov functions is in [25]. The works [26–29] develop algorithms based on subquadratic Lyapunov functions and suggest that these often have better delay in comparison to algorithms derived from quadratic Lyapunov functions. This motivates recent work in [30] that has combined subquadratic Lyapunov functions with joint stability and penalty minimization, using time average expectations similar to [4].

2. Lyapunov Functions and Drift

Let 𝐐(𝑡)=(𝑄1(𝑡),…,𝑄𝐾(𝑡)) be a real-valued random vector that evolves over time slots 𝑡∈{0,1,2,…}. The distribution of 𝐐(0) is given, and the values of 𝐐(𝑡) for 𝑡∈{1,2,3,…} are determined by events on a probability space. Specifically, assume that outcomes on the probability space have the structure {𝜂(−1),𝜂(0),𝜂(1),𝜂(2),…}, where 𝜂(−1)=𝐐(0) is the initial vector state, and 𝜂(𝑡) for 𝑡∈{0,1,2,…} represents an event that occurs on slot 𝑡. For each slot 𝑡∈{0,1,2,…}, the value of 𝐐(𝑡) is assumed to be determined by the history {𝐐(0),𝜂(0),𝜂(1),…,𝜂(𝑡−1)}. Let ℋ(𝑡) represent this history up to (but not including) slot 𝑡. Formally, ℋ(𝑡) is a filtration of the infinite horizon probability space onto the finite set of slots {−1,0,1,…,𝑡−1}. For any slot 𝑡∈{0,1,2,…}, a given ℋ(𝑡) includes information that specifies the value of 𝐐(𝑡).

An example of such a process 𝐐(𝑡) is the queue backlog process in a network of 𝐾 queues that evolve according to some control law via the queueing equation
\[ Q_k(t+1) = \max\left[Q_k(t) + a_k(t) - b_k(t),\ 0\right], \tag{2.1} \]
where 𝑎𝑘(𝑡) and 𝑏𝑘(𝑡) are arrival and service processes for queue 𝑘, which take values determined by 𝜂(𝑡). For this reason, throughout we call 𝐐(𝑡) the queue vector. However, our results are not restricted to processes of the type (2.1) and hold for more general processes, including processes 𝐐(𝑡) that can have negative components.
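As a concrete illustration, the one-slot update (2.1) can be sketched in a few lines of code. The arrival and service sequences below are hypothetical, chosen only to show the floor at zero (service beyond the current backlog is wasted):

```python
# Sketch of the queue update (2.1): Q_k(t+1) = max(Q_k(t) + a_k(t) - b_k(t), 0).
# The arrival/service values below are illustrative, not from the paper.

def queue_update(q, a, b):
    """One-slot update for a single queue with arrivals a and service b."""
    return max(q + a - b, 0.0)

def simulate(q0, arrivals, services):
    """Run the update over a sequence of slots; returns the backlog trajectory."""
    q = q0
    path = [q]
    for a, b in zip(arrivals, services):
        q = queue_update(q, a, b)
        path.append(q)
    return path

path = simulate(0.0, arrivals=[3, 0, 2, 5], services=[1, 4, 1, 2])
# path == [0.0, 2.0, 0.0, 1.0, 4.0]: in slot 1, only 2 of the 4 service units
# are used, and the backlog is floored at 0 rather than going negative.
```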

2.1. Lyapunov Functions

Let 𝐿(𝐐(𝑡)) be a nonnegative function of the current queue vector 𝐐(𝑡). We call this a Lyapunov function. Define Δ(𝑡) = 𝐿(𝐐(𝑡+1)) − 𝐿(𝐐(𝑡)). Algorithms for queue stability are often developed by defining a convenient Lyapunov function, greedily minimizing a bound on Δ(𝑡) every slot, and showing that the resulting expectation 𝔼[Δ(𝑡)] has desirable properties [1, 2, 19–24]. An example Lyapunov function that is often used is
\[ L(\mathbf{Q}) = \frac{1}{\beta}\sum_{k=1}^{K} w_k |Q_k|^{\beta}, \tag{2.2} \]
where 𝛽 and 𝑤𝑘 are given positive constants, with 𝛽>1. This type of function represents a scalar measure of the vector queue size. Algorithms that greedily minimize a bound on Δ(𝑡) every slot often satisfy a drift condition of the form 𝔼[Δ(𝑡)] ≤ 𝐵 for all 𝑡, for some finite constant 𝐵. Such drift conditions are examined in Section 3 in the context of queue stability.
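A minimal sketch of the example Lyapunov function (2.2) and the corresponding one-slot drift Δ(𝑡); the weights and queue values used are illustrative assumptions, not taken from any particular system:

```python
# Lyapunov function (2.2): L(Q) = (1/beta) * sum_k w_k * |Q_k|^beta,
# and the one-slot drift Delta(t) = L(Q(t+1)) - L(Q(t)).
# All numeric values below are hypothetical.

def lyapunov(q, w, beta):
    """Scalar measure of the vector queue size, per (2.2)."""
    return sum(wk * abs(qk) ** beta for wk, qk in zip(w, q)) / beta

def drift(q_now, q_next, w, beta):
    """One-slot Lyapunov drift Delta(t)."""
    return lyapunov(q_next, w, beta) - lyapunov(q_now, w, beta)

w = [1.0, 1.0]
L = lyapunov([3.0, 4.0], w, beta=2.0)           # (9 + 16)/2 = 12.5
d = drift([3.0, 4.0], [2.0, 4.0], w, beta=2.0)  # (4 + 16)/2 - 12.5 = -2.5
```

A negative drift, as in the second call, means the scalar queue measure decreased over the slot.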

Now let 𝑝(𝑡) be an additional penalty process for the system, representing some penalty (such as power expenditure) incurred on slot 𝑡. In many cases we desire to jointly stabilize all queues while minimizing the time average of 𝑝(𝑡). The drift-plus-penalty theory in [4] approaches this by taking control actions that greedily minimize a bound on Δ(𝑡)+𝑉𝑝(𝑡), where 𝑉 is a nonnegative weight that affects a performance tradeoff. Such policies often ensure a drift-plus-penalty condition of the following form:
\[ \mathbb{E}\left[\Delta(t) + Vp(t) \mid \mathcal{H}(t)\right] \le B + Vp^{*} - \epsilon f(\mathbf{Q}(t)) \tag{2.3} \]
for some constants 𝑝*, 𝜖 ≥ 0, and some nonnegative function 𝑓(𝐐(𝑡)). The value 𝑝* represents a target value for the time average of 𝑝(𝑡). The function 𝑓(𝐐(𝑡)) represents a measure of the total queue size. Expressions of the type (2.3) are examined in Sections 4, 5, and 6, and a broad class of networks that lead to expressions of this type are considered in Section 7.

2.2. Performance for Time Average Expectations

Results for time average expected queue size and time average expected penalty can be immediately derived from the drift-plus-penalty condition (2.3) together with minimal assumptions. Specifically, assume that (2.3) holds for all 𝑡 and all ℋ(𝑡), with 𝑉 ≥ 0, 𝜖 ≥ 0, 𝑓(𝐐) ≥ 0. Assume that 𝔼[𝐿(𝐐(0))] < ∞, and that there is a (possibly negative) constant 𝑝min such that 𝔼[𝑝(𝑡)] ≥ 𝑝min for all 𝑡. Taking expectations of (2.3) for a given slot 𝜏 gives the following:
\[ \mathbb{E}[\Delta(\tau)] + V\,\mathbb{E}[p(\tau)] \le B + Vp^{*} - \epsilon\,\mathbb{E}[f(\mathbf{Q}(\tau))]. \tag{2.4} \]
Now fix any integer 𝑡>0. Using Δ(𝜏) = 𝐿(𝐐(𝜏+1)) − 𝐿(𝐐(𝜏)), summing over 𝜏∈{0,…,𝑡−1}, and dividing by 𝑡 yields
\[ \frac{\mathbb{E}[L(\mathbf{Q}(t))] - \mathbb{E}[L(\mathbf{Q}(0))]}{t} + \frac{V}{t}\sum_{\tau=0}^{t-1}\mathbb{E}[p(\tau)] \le B + Vp^{*} - \frac{\epsilon}{t}\sum_{\tau=0}^{t-1}\mathbb{E}[f(\mathbf{Q}(\tau))]. \tag{2.5} \]
Rearranging the above and using 𝔼[𝐿(𝐐(𝑡))] ≥ 0 leads to the following two inequalities for all 𝑡>0:
\[ \frac{1}{t}\sum_{\tau=0}^{t-1}\mathbb{E}[p(\tau)] \le p^{*} + \frac{B}{V} + \frac{\mathbb{E}[L(\mathbf{Q}(0))]}{Vt}, \qquad \frac{1}{t}\sum_{\tau=0}^{t-1}\mathbb{E}[f(\mathbf{Q}(\tau))] \le \frac{B + V(p^{*} - p_{\min})}{\epsilon} + \frac{\mathbb{E}[L(\mathbf{Q}(0))]}{\epsilon t}, \tag{2.6} \]
where the first inequality holds whenever 𝑉>0, and the second holds whenever 𝜖>0. Taking a limsup of the above gives
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\mathbb{E}[p(\tau)] \le p^{*} + \frac{B}{V}, \tag{2.7} \]
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\mathbb{E}[f(\mathbf{Q}(\tau))] \le \frac{B + V(p^{*} - p_{\min})}{\epsilon}. \tag{2.8} \]
This shows that performance can be parameterized by the constant 𝑉, giving time average expected penalty within 𝑂(1/𝑉) of the target 𝑝*, with an 𝑂(𝑉) tradeoff in the time average expectation of 𝑓(𝐐(𝑡)) (representing a time average queue size metric). This method is used in [3, 4] for analysis and design of stochastic network optimization algorithms. Related manipulations of drift expressions in [31–34] lead to statements concerning time average expectations in other contexts.
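The [𝑂(1/𝑉), 𝑂(𝑉)] tradeoff in (2.7)-(2.8) can be made concrete by evaluating the two right-hand-side bounds for a few values of 𝑉. The constants 𝐵, 𝑝*, 𝑝min, 𝜖 below are hypothetical placeholders:

```python
# Numeric illustration of the O(1/V) vs O(V) tradeoff in (2.7)-(2.8),
# using hypothetical constants B, p*, p_min, epsilon (not from the paper).

def penalty_bound(B, p_star, V):
    """Right-hand side of (2.7): p* + B/V, valid for V > 0."""
    return p_star + B / V

def queue_bound(B, p_star, p_min, eps, V):
    """Right-hand side of (2.8): (B + V*(p* - p_min)) / eps, valid for eps > 0."""
    return (B + V * (p_star - p_min)) / eps

B, p_star, p_min, eps = 10.0, 1.0, 0.0, 0.5
for V in (1.0, 10.0, 100.0):
    pb = penalty_bound(B, p_star, V)            # shrinks toward p* as V grows
    qb = queue_bound(B, p_star, p_min, eps, V)  # grows linearly in V
```

Raising 𝑉 drives the penalty bound toward 𝑝* while inflating the queue-size bound linearly, which is exactly the tradeoff described in the text.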

The above arithmetic did not require Markov chain assumptions, renewal assumptions, or assumptions on the higher-order moments of the processes. We desire to obtain performance bounds similar to (2.7) and (2.8), but with the above limiting time average expectations replaced, with probability 1, by limiting time averages over a sample path. This cannot be done without imposing more structural assumptions on the problem (see Appendix A for simple counterexamples). The goal is to find simple and easily verifiable additional assumptions that preserve the elegance of the method while strengthening the results.

3. Rate Stability and Mean Rate Stability

This section considers drift conditions of the type 𝔼[Δ(𝑡)] ≤ 𝐵. Let 𝐐(𝑡) = (𝑄1(𝑡),…,𝑄𝐾(𝑡)) be a stochastic vector defined over slots 𝑡∈{0,1,2,…}. Assume that there are constants 𝐷>0, 𝜃>0 such that
\[ \mathbb{E}\left[|Q_k(t+1) - Q_k(t)|^{1+\theta}\right] \le D \quad \forall t \in \{0,1,2,\ldots\},\ \forall k \in \{1,\ldots,K\}. \tag{3.1} \]
For example, if 𝜃=1, then the condition (3.1) states that the second moment of queue changes is bounded for all time. This holds, for example, whenever the queue evolves according to (2.1) with arrival and service processes 𝑎𝑘(𝑡), 𝑏𝑘(𝑡) that have bounded second moments for all slots. The use of the variable 𝜃>0 allows more general situations when arrivals and/or service variables can have infinite second moments.

Let 𝐿(𝐐) be a nonnegative function that has the following structural property: there exist constants 𝑏𝑘>0, 𝑐𝑘>0, 𝑑𝑘 ≥ 0 such that for all vectors 𝐐=(𝑄1,…,𝑄𝐾),
\[ L(\mathbf{Q}) \ge c_k |Q_k|^{1+b_k} - d_k \quad \forall k \in \{1,\ldots,K\}. \tag{3.2} \]
Condition (3.2) implies that 𝐿(𝐐) grows superlinearly in |𝑄𝑘|, at least as fast as Ω(|𝑄𝑘|^(1+𝑏𝑘)) for some 𝑏𝑘>0. This is satisfied for a large class of functions used in practice, such as the example functions in (2.2) with 𝛽>1. Define Δ(𝑡) = 𝐿(𝐐(𝑡+1)) − 𝐿(𝐐(𝑡)).

Proposition 3.1. Suppose that 𝐿(𝐐) is nonnegative and satisfies (3.2), and that there is a constant 𝐵 ≥ 0 such that 𝔼[Δ(𝑡)] ≤ 𝐵 for all 𝑡∈{0,1,2,…}. Suppose that 𝔼[𝐿(𝐐(0))] < ∞. Then we have the following.
(a) For all queues 𝑘∈{1,…,𝐾}, we have
\[ \lim_{t\to\infty} \frac{\mathbb{E}[|Q_k(t)|]}{t} = 0. \tag{3.3} \]
(b) If additionally assumption (3.1) holds for some constants 𝐷>0, 𝜃>0, then for all queues 𝑘∈{1,…,𝐾} we have
\[ \lim_{t\to\infty} \frac{Q_k(t)}{t} = 0 \quad (w.p.1), \tag{3.4} \]
where “w.p.1” stands for “with probability 1.”

The condition (3.3) is called mean rate stability, and the condition (3.4) is called rate stability. Both are weak forms of stability. Parts (a) and (b) of the proposition are proven separately.

Proof (Proposition 3.1 Part (a)). By definition of Δ(𝜏), the statement 𝔼[Δ(𝜏)] ≤ 𝐵 is equivalent to
\[ \mathbb{E}[L(\mathbf{Q}(\tau+1))] - \mathbb{E}[L(\mathbf{Q}(\tau))] \le B. \tag{3.5} \]
The above holds for all slots 𝜏. Summing over 𝜏∈{0,1,…,𝑡−1} for some integer 𝑡>0 gives
\[ \mathbb{E}[L(\mathbf{Q}(t))] - \mathbb{E}[L(\mathbf{Q}(0))] \le Bt. \tag{3.6} \]
Now select any 𝑘∈{1,…,𝐾}. From (3.2), we have
\[ c_k |Q_k(t)|^{1+b_k} - d_k \le L(\mathbf{Q}(t)). \tag{3.7} \]
Taking expectations of the above and using (3.6) gives
\[ c_k\,\mathbb{E}\left[|Q_k(t)|^{1+b_k}\right] - d_k \le \mathbb{E}[L(\mathbf{Q}(0))] + Bt. \tag{3.8} \]
Rearranging the above gives
\[ \mathbb{E}\left[|Q_k(t)|^{1+b_k}\right] \le \frac{1}{c_k}\left[d_k + \mathbb{E}[L(\mathbf{Q}(0))] + Bt\right]. \tag{3.9} \]
The above holds for all 𝑡∈{1,2,3,…}. Thus, there is a constant 𝐶>0 such that
\[ \mathbb{E}\left[|Q_k(t)|^{1+b_k}\right] \le Ct \quad \forall t \in \{1,2,3,\ldots\}. \tag{3.10} \]
By Jensen's inequality we have 𝔼[|𝑄𝑘(𝑡)|]^(1+𝑏𝑘) ≤ 𝔼[|𝑄𝑘(𝑡)|^(1+𝑏𝑘)]. Using this in (3.10) gives
\[ \mathbb{E}\left[|Q_k(t)|\right]^{1+b_k} \le Ct. \tag{3.11} \]
Therefore,
\[ \frac{\mathbb{E}[|Q_k(t)|]}{t} \le \frac{(Ct)^{1/(1+b_k)}}{t}. \tag{3.12} \]
Taking a limit as 𝑡→∞ proves part (a).
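The Jensen step used above, 𝔼[|𝑄|]^(1+𝑏) ≤ 𝔼[|𝑄|^(1+𝑏)], can be checked numerically on arbitrary sample data; the samples below are made up for illustration:

```python
# Numeric check of the Jensen step: for b > 0, x^(1+b) is convex on x >= 0,
# so E[|Q|]^(1+b) <= E[|Q|^(1+b)]. Sample values are arbitrary.

def mean(xs):
    return sum(xs) / len(xs)

samples = [0.0, 1.0, 2.0, 5.0]   # hypothetical |Q_k(t)| samples
b = 0.5
lhs = mean(samples) ** (1 + b)               # E[|Q|]^(1+b)
rhs = mean([x ** (1 + b) for x in samples])  # E[|Q|^(1+b)]
# Convexity guarantees lhs <= rhs for any nonnegative samples and b > 0.
```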

The proof of part (b) follows by noting that the condition (3.10), together with condition (3.1), satisfies the requirements of the following lemma concerning queues with higher-order moments that grow at most linearly.

Lemma 3.2. Let 𝑄(𝑡) be a real-valued stochastic process that evolves over slots 𝑡∈{0,1,2,…}, and suppose there are constants 𝑏>0, 𝜃>0, 𝐶>0, 𝐷>0 such that
\[ \mathbb{E}\left[|Q(t)|^{1+b}\right] \le Ct \quad \forall t \in \{1,2,\ldots\}, \qquad \mathbb{E}\left[|Q(t+1) - Q(t)|^{1+\theta}\right] \le D \quad \forall t \in \{1,2,\ldots\}. \tag{3.13} \]
Then
\[ \lim_{t\to\infty} \frac{Q(t)}{t} = 0 \quad (w.p.1). \tag{3.14} \]

Proof. See Appendix B.

4. Convergence of Time Averages

This section considers drift conditions of the type (2.3).

4.1. The Law of Large Numbers for Martingale Differences

Let 𝑋(𝑡) be a real-valued stochastic process defined over slots 𝑡∈{0,1,2,…}. For integers 𝑡>0, let ℋ(𝑡) be a filtration that includes the information {𝑋(0),…,𝑋(𝑡−1)}. The following theorem is from [35] and concerns processes that satisfy 𝔼[𝑋(𝑡)∣ℋ(𝑡)] = 0. Such processes are called martingale differences.

Theorem 4.1 (from [35]). Suppose that 𝑋(𝑡) is a real-valued random process over 𝑡∈{0,1,2,…} such that 𝔼[𝑋(𝑡)∣ℋ(𝑡)] = 0 for all 𝑡∈{1,2,…} and all ℋ(𝑡). Further suppose that the second moments 𝔼[𝑋(𝑡)²] are finite for all 𝑡 and satisfy
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[X(t)^2\right]}{t^2} < \infty. \tag{4.1} \]
Then
\[ \lim_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} X(\tau) = 0 \quad (w.p.1). \tag{4.2} \]

Corollary 4.2. Let 𝑋(𝑡) be a real-valued random process over slots 𝑡∈{0,1,2,…}, and suppose there is a finite constant 𝐵 such that 𝔼[𝑋(𝑡)∣ℋ(𝑡)] ≤ 𝐵 for all 𝑡∈{1,2,…} and all ℋ(𝑡). Further suppose the second moments 𝔼[𝑋(𝑡)²] are finite for all 𝑡 and satisfy
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[X(t)^2\right]}{t^2} < \infty. \tag{4.3} \]
Then
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} X(\tau) \le B \quad (w.p.1). \tag{4.4} \]

Proof. Define X̃(𝑡) ≜ 𝑋(𝑡) − 𝔼[𝑋(𝑡)∣ℋ(𝑡)]. Then 𝔼[X̃(𝑡)∣ℋ(𝑡)] = 0. Further, using (4.3), it is not difficult to verify that ∑_{𝑡=1}^∞ 𝔼[X̃(𝑡)²]/𝑡² is finite (note that 𝔼[X̃(𝑡)²] ≤ 2𝔼[𝑋(𝑡)²] + 2𝔼[𝔼[𝑋(𝑡)∣ℋ(𝑡)]²], and the latter term can be bounded by 2𝔼[𝑋(𝑡)²] via Jensen's inequality). Thus, the conditions required for Theorem 4.1 hold, so we conclude that
\[ \lim_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \tilde{X}(\tau) = 0 \quad (\text{w.p.1}). \tag{4.5} \]
However, because 𝑋(𝑡) = X̃(𝑡) + 𝔼[𝑋(𝑡)∣ℋ(𝑡)] and 𝔼[𝑋(𝑡)∣ℋ(𝑡)] ≤ 𝐵, we know that 𝑋(𝑡) ≤ X̃(𝑡) + 𝐵 for all 𝑡. Thus, for all 𝑡>0, we have
\[ \frac{1}{t}\sum_{\tau=0}^{t-1} X(\tau) \le \frac{1}{t}\sum_{\tau=0}^{t-1} \tilde{X}(\tau) + B. \tag{4.6} \]
Taking a limsup of the above and using (4.5) proves the result.
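A small simulation illustrates the conclusion of Corollary 4.2. The process below is hypothetical: 𝑋(𝑡) = 𝐵 plus zero-mean bounded noise, so its conditional mean equals 𝐵 exactly and its second moments are uniformly bounded; the sample-path time average then settles near (and asymptotically no higher than) 𝐵:

```python
# Simulation sketch for Corollary 4.2: a process with conditional mean
# E[X(t) | H(t)] = B (here B plus i.i.d. zero-mean uniform noise) has a
# sample-path time average that converges to B, hence limsup <= B w.p.1.
# The process is an assumption made for illustration, not from the paper.
import random

random.seed(0)   # fixed seed so the run is reproducible
B = 3.0
T = 20000
total = 0.0
for t in range(T):
    total += B + random.uniform(-1.0, 1.0)  # martingale-difference noise around B
avg = total / T
# avg is within a small tolerance of B for this seed and horizon
```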

4.2. A Drift-Plus-Penalty Result for General Lyapunov Functions

Now consider the stochastic system with queue vector 𝐐(𝑡)=(𝑄1(𝑡),…,𝑄𝐾(𝑡)) and penalty process 𝑝(𝑡), as described in Section 2.1. The history ℋ(𝑡) includes the information {𝐐(0),…,𝐐(𝑡),𝑝(0),…,𝑝(𝑡−1)}. Assume that there is a finite (possibly negative) constant 𝑝min such that for all 𝑡 and all ℋ(𝑡),
\[ \mathbb{E}\left[p(t) \mid \mathcal{H}(t)\right] \ge p_{\min}. \tag{4.7} \]
Further assume that 𝔼[𝑝(𝑡)²] is finite for all 𝑡, and
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[p(t)^2\right]}{t^2} < \infty. \tag{4.8} \]
Let 𝐿(𝐐(𝑡)) be any nonnegative function of 𝐐(𝑡), and define Δ(𝑡) = 𝐿(𝐐(𝑡+1)) − 𝐿(𝐐(𝑡)). Let 𝑓(𝐐(𝑡)) be another nonnegative function of 𝐐(𝑡).

Proposition 4.3. Suppose 𝐿(𝐐(0)) is finite with probability 1, that 𝔼[Δ(𝑡)²], 𝔼[𝑓(𝐐(𝑡))²], 𝔼[𝑝(𝑡)²] are finite for all 𝑡, that (4.7) and (4.8) hold, and that
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[\Delta(t)^2\right] + \mathbb{E}\left[f(\mathbf{Q}(t))^2\right]}{t^2} < \infty. \tag{4.9} \]
Further suppose there are constants 𝑝*, 𝑉 ≥ 0, 𝜖 ≥ 0, 𝐵 ≥ 0 such that the following drift-plus-penalty condition holds for all 𝑡∈{0,1,2,…} and all possible ℋ(𝑡):
\[ \mathbb{E}\left[\Delta(t) + Vp(t) \mid \mathcal{H}(t)\right] \le B + Vp^{*} - \epsilon f(\mathbf{Q}(t)). \tag{4.10} \]
Then
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p(\tau) \le p^{*} + \frac{B}{V} \quad (w.p.1), \tag{4.11} \]
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} f(\mathbf{Q}(\tau)) \le \frac{B + V(p^{*} - p_{\min})}{\epsilon} \quad (w.p.1), \tag{4.12} \]
where (4.11) holds when 𝑉>0, and (4.12) holds when 𝜖>0.

Proof. Define 𝑋(𝑡) ≜ Δ(𝑡) + 𝑉𝑝(𝑡) + 𝜖𝑓(𝐐(𝑡)), and note by algebra that
\[ X(t)^2 \le 4\Delta(t)^2 + 4V^2 p(t)^2 + 4\epsilon^2 f(\mathbf{Q}(t))^2. \tag{4.13} \]
It follows that
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[X(t)^2\right]}{t^2} < \infty. \tag{4.14} \]
Further, by (4.10), we have
\[ \mathbb{E}\left[X(t) \mid \mathcal{H}(t)\right] \le B + Vp^{*}. \tag{4.15} \]
Thus, by Corollary 4.2,
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} X(\tau) \le B + Vp^{*} \quad (w.p.1). \tag{4.16} \]
Because 𝑋(𝑡) ≥ Δ(𝑡) + 𝑉𝑝(𝑡), we have
\[ \frac{1}{t}\sum_{\tau=0}^{t-1} X(\tau) \ge \frac{1}{t}\sum_{\tau=0}^{t-1}\left[\Delta(\tau) + Vp(\tau)\right] = \frac{L(\mathbf{Q}(t)) - L(\mathbf{Q}(0))}{t} + \frac{V}{t}\sum_{\tau=0}^{t-1} p(\tau) \ge -\frac{L(\mathbf{Q}(0))}{t} + \frac{V}{t}\sum_{\tau=0}^{t-1} p(\tau), \tag{4.17} \]
where the final inequality holds because 𝐿(𝐐(𝑡)) ≥ 0. Taking a limsup of the above and using (4.16) gives
\[ \limsup_{t\to\infty} \frac{V}{t}\sum_{\tau=0}^{t-1} p(\tau) \le B + Vp^{*}, \tag{4.18} \]
which implies (4.11).
The condition (4.10) together with 𝔼[𝑝(𝑡)∣ℋ(𝑡)] ≥ 𝑝min also implies that
\[ \mathbb{E}\left[\Delta(t) + \epsilon f(\mathbf{Q}(t)) \mid \mathcal{H}(t)\right] \le B + V(p^{*} - p_{\min}). \tag{4.19} \]
Defining 𝑋(𝑡) ≜ Δ(𝑡) + 𝜖𝑓(𝐐(𝑡)) and again applying Corollary 4.2 similarly shows that
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \epsilon f(\mathbf{Q}(\tau)) \le B + V(p^{*} - p_{\min}) \quad (w.p.1), \tag{4.20} \]
which implies (4.12).

The result of Proposition 4.3 is almost the result we desire, with the exception that the condition (4.9) may be difficult to verify. Note that (4.9) often trivially holds in special cases when all queues are deterministically bounded. Hence, in the flow control algorithms [11, 12] designed from drift-plus-penalty theory, Proposition 4.3 proves that time average penalties satisfy (4.11) with probability 1. The next section shows that this condition follows from the drift assumption when 𝐿(𝐐) is a subquadratic Lyapunov function. Section 6 treats quadratic Lyapunov functions.

5. Subquadratic Lyapunov Functions

Suppose that the queue processes 𝑄𝑘(𝑡) are nonnegative for all 𝑘∈{1,…,𝐾} and all 𝑡, and consider a Lyapunov function 𝐿(𝐐) of the following form:
\[ L(\mathbf{Q}) = \frac{1}{\beta}\sum_{k=1}^{K} w_k Q_k^{\beta}, \tag{5.1} \]
where 𝑤𝑘 and 𝛽 are positive constants, with 1<𝛽<2. The scaling by 1/𝛽 is only for convenience later.

5.1. The Drift Structure

For each 𝑘∈{1,…,𝐾}, define 𝛿𝑘(𝑡) ≜ 𝑄𝑘(𝑡+1) − 𝑄𝑘(𝑡). Assume throughout that there is a finite constant 𝐷>0 such that for all 𝑡∈{0,1,2,…}, all ℋ(𝑡), and all 𝑘∈{1,…,𝐾}:
\[ \mathbb{E}\left[\delta_k(t)^2 \mid \mathcal{H}(t)\right] \le D, \qquad \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[|\delta_k(t)|^{2\beta}\right]}{t^2} < \infty. \tag{5.2} \]
Because 1<𝛽<2, the above conditions hold, for example, whenever 𝔼[|𝛿𝑘(𝑡)|^(2𝛽)∣ℋ(𝑡)] ≤ 𝐶 for some finite constant 𝐶. In particular, this holds if queues have the structure (2.1) and arrival and service processes 𝑎𝑘(𝑡), 𝑏𝑘(𝑡) have bounded (2𝛽)th moments, regardless of the history ℋ(𝑡).

Appendix D shows that (5.2) implies there is a real number 𝐶>0 such that
\[ \mathbb{E}\left[\Delta(t)^2\right] \le C + C\sum_{k=1}^{K} \mathbb{E}\left[|\delta_k(t)|^{2\beta} + Q_k(t)^{2\beta-2}\right]. \tag{5.3} \]
Further, Appendix D shows that there is a real number 𝐵>0 such that
\[ \mathbb{E}\left[\Delta(t) \mid \mathcal{H}(t)\right] \le B + \sum_{k=1}^{K} w_k Q_k(t)^{\beta-1}\,\mathbb{E}\left[\delta_k(t) \mid \mathcal{H}(t)\right]. \tag{5.4} \]
Again let 𝑝(𝑡) be a penalty process. Algorithms that seek to minimize a bound on Δ(𝑡)+𝑉𝑝(𝑡) every slot 𝑡 often lead to drift conditions of the following form (see Section 7):
\[ \mathbb{E}\left[\Delta(t) + Vp(t) \mid \mathcal{H}(t)\right] \le B + Vp^{*} - \epsilon\sum_{k=1}^{K} w_k Q_k(t)^{\beta-1} \tag{5.5} \]
for some 𝜖 ≥ 0. This has the general form (4.10) with 𝑓(𝐐(𝑡)) = ∑_{𝑘=1}^{𝐾} 𝑤𝑘 𝑄𝑘(𝑡)^(𝛽−1).

5.2. Drift-Plus-Penalty for Subquadratics

Proposition 5.1. Let 𝐿(𝐐) be a subquadratic Lyapunov function of the form (5.1), with 1<𝛽<2. Assume that 𝔼[𝐿(𝐐(0))] < ∞, (5.2) holds for the queue difference processes 𝛿𝑘(𝑡), and (4.7)-(4.8) hold for the penalty process 𝑝(𝑡). Suppose that the drift-plus-penalty condition (5.5) holds for some constants 𝑝*, 𝑉 ≥ 0, 𝐵 ≥ 0, 𝜖 ≥ 0. Then for all 𝑘∈{1,…,𝐾},
\[ \lim_{t\to\infty} \frac{Q_k(t)}{t} = 0 \quad (w.p.1). \tag{5.6} \]
Further, we have
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p(\tau) \le p^{*} + \frac{B}{V} \quad (w.p.1), \tag{5.7} \]
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{k=1}^{K} w_k Q_k(\tau)^{\beta-1} \le \frac{B + V(p^{*} - p_{\min})}{\epsilon} \quad (w.p.1), \tag{5.8} \]
where (5.7) holds when 𝑉>0 and (5.8) holds when 𝜖>0.

Proof. The drift-plus-penalty expression (5.5) implies that
\[ \mathbb{E}\left[\Delta(t) \mid \mathcal{H}(t)\right] \le F, \tag{5.9} \]
where we define 𝐹 ≜ 𝐵 + 𝑉(𝑝* − 𝑝min). Taking expectations of the above yields 𝔼[Δ(𝑡)] ≤ 𝐹. The conditions of the proposition ensure that all conditions hold that are required for Proposition 3.1, and so Proposition 3.1 implies that all queues are rate stable, so that (5.6) holds.
Define 𝑓(𝐐(𝑡)) ≜ ∑_{𝑘=1}^{𝐾} 𝑤𝑘 𝑄𝑘(𝑡)^(𝛽−1). If we can show that
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[\Delta(t)^2\right] + \mathbb{E}\left[f(\mathbf{Q}(t))^2\right]}{t^2} < \infty, \tag{5.10} \]
then our system satisfies all the requirements for Proposition 4.3, and so (5.7)-(5.8) hold. Thus, it suffices to prove (5.10). To this end, we have
\[ \mathbb{E}\left[f(\mathbf{Q}(t))^2\right] = \mathbb{E}\left[\left(\sum_{k=1}^{K} w_k Q_k(t)^{\beta-1}\right)^2\right] \le G\sum_{k=1}^{K} \mathbb{E}\left[Q_k(t)^{2\beta-2}\right] \tag{5.11} \]
for some real number 𝐺>0 (where we have used the fact (B.6) in Appendix B). This together with (5.3) gives
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[\Delta(t)^2\right] + \mathbb{E}\left[f(\mathbf{Q}(t))^2\right]}{t^2} \le \sum_{t=1}^{\infty} \frac{C + (C+G)\sum_{k=1}^{K} \mathbb{E}\left[|\delta_k(t)|^{2\beta} + Q_k(t)^{2\beta-2}\right]}{t^2}. \tag{5.12} \]
By (5.2), to prove that the above is finite, it suffices to prove that for each 𝑘∈{1,…,𝐾} we have
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[Q_k(t)^{2\beta-2}\right]}{t^2} < \infty. \tag{5.13} \]
Recall from (5.9) that 𝔼[Δ(𝜏)] ≤ 𝐹 for all 𝜏, and so
\[ \mathbb{E}[L(\mathbf{Q}(\tau+1))] - \mathbb{E}[L(\mathbf{Q}(\tau))] \le F. \tag{5.14} \]
Summing the above over 𝜏∈{0,1,…,𝑡−1} for some integer 𝑡>0 gives the following:
\[ \mathbb{E}[L(\mathbf{Q}(t))] - \mathbb{E}[L(\mathbf{Q}(0))] \le Ft. \tag{5.15} \]
Thus, we have
\[ \sum_{k=1}^{K} \frac{w_k}{\beta}\,\mathbb{E}\left[Q_k(t)^{\beta}\right] \le \mathbb{E}[L(\mathbf{Q}(0))] + Ft. \tag{5.16} \]
Thus, there is a real number 𝑏>0 such that for each 𝑘∈{1,…,𝐾} and all 𝑡∈{1,2,3,…},
\[ \mathbb{E}\left[Q_k(t)^{\beta}\right] \le bt. \tag{5.17} \]
Because 1<𝛽<2, we have 0<(2𝛽−2)/𝛽<1. Thus, by Jensen's inequality for the concave function 𝑥^((2𝛽−2)/𝛽) over 𝑥 ≥ 0, we have
\[ \mathbb{E}\left[Q_k(t)^{2\beta-2}\right] \le \left(\mathbb{E}\left[Q_k(t)^{\beta}\right]\right)^{(2\beta-2)/\beta} \le (bt)^{2-2/\beta}. \tag{5.18} \]
Thus, we have
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[Q_k(t)^{2\beta-2}\right]}{t^2} \le \sum_{t=1}^{\infty} \frac{(bt)^{2-2/\beta}}{t^2} < \infty, \tag{5.19} \]
where the final inequality holds because the summation terms are 𝑂(1/𝑡^(2/𝛽)), and 2/𝛽>1. This proves (5.13), completing the proof.

Note that the final line of the proof requires 𝑂(1/𝑡2/𝛽) to be summable, which is true for 1<𝛽<2 but not true for 𝛽=2. This is one reason why quadratic Lyapunov functions must be considered separately. Another distinction is that the above proposition holds even for 𝜖=0, whereas we require 𝜖>0 for the quadratic analysis in the next section.
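The summability distinction can also be seen numerically: partial sums of 1/𝑡^(2/𝛽) stabilize for 𝛽 = 1.5 (exponent 4/3 > 1) but keep growing like log 𝑡 for 𝛽 = 2 (the harmonic series). This is only a numeric illustration of the convergence claim, with arbitrarily chosen truncation points:

```python
# Compare partial sums of sum_t 1/t^(2/beta): convergent when 2/beta > 1
# (i.e., beta < 2), divergent when beta = 2 (harmonic series).

def partial_sum(exponent, n):
    """Partial sum of 1/t^exponent for t = 1..n."""
    return sum(1.0 / t ** exponent for t in range(1, n + 1))

s_sub_1k = partial_sum(2 / 1.5, 1000)     # beta = 1.5: near its finite limit
s_sub_10k = partial_sum(2 / 1.5, 10000)   # barely grows further
s_quad_1k = partial_sum(2 / 2.0, 1000)    # beta = 2: harmonic partial sum
s_quad_10k = partial_sum(2 / 2.0, 10000)  # grows by roughly ln(10) more
```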

6. Quadratic Lyapunov Functions

Here, we allow 𝑄𝑘(𝑡) to possibly take negative values. Consider quadratic Lyapunov functions:
\[ L(\mathbf{Q}) = \frac{1}{2}\sum_{k=1}^{K} w_k Q_k^2, \tag{6.1} \]
where 𝑤𝑘 are any positive weights. This is the same form as in the previous section, assuming that 𝛽=2. For 𝛽=2, the assumptions (5.2) become
\[ \mathbb{E}\left[\delta_k(t)^2 \mid \mathcal{H}(t)\right] \le D, \qquad \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[\delta_k(t)^4\right]}{t^2} < \infty. \tag{6.2} \]

Define Δ(𝑡) = 𝐿(𝐐(𝑡+1)) − 𝐿(𝐐(𝑡)). These quadratic Lyapunov functions typically lead to drift-plus-penalty expressions of the following form (see [36]):
\[ \mathbb{E}\left[\Delta(t) + Vp(t) \mid \mathcal{H}(t)\right] \le B + Vp^{*} - \epsilon\sum_{k=1}^{K} w_k |Q_k(t)|, \tag{6.3} \]
which is the same as (4.10) with 𝑓(𝐐) = ∑_{𝑘=1}^{𝐾} 𝑤𝑘|𝑄𝑘|. Again define 𝛿𝑘(𝑡) ≜ 𝑄𝑘(𝑡+1) − 𝑄𝑘(𝑡).

Proposition 6.1. Suppose that 𝐿(𝐐) has the quadratic form (6.1), and 𝔼[|𝑄𝑘(0)|³] < ∞ for all 𝑘∈{1,…,𝐾}. Assume that the bounded moment conditions (6.2), (4.7), and (4.8) hold. If the drift-plus-penalty expression (6.3) holds for some constants 𝑝*, 𝐵 ≥ 0, 𝑉 ≥ 0, 𝜖>0, 𝑤𝑘>0 for 𝑘∈{1,…,𝐾}, then
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p(\tau) \le p^{*} + \frac{B}{V} \quad (w.p.1), \tag{6.4} \]
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{k=1}^{K} w_k |Q_k(\tau)| \le \frac{B + V(p^{*} - p_{\min})}{\epsilon} \quad (w.p.1), \tag{6.5} \]
where (6.4) holds when 𝑉>0.

The proposition is proven with the assistance of the following lemma.

Lemma 6.2. Suppose that the same assumptions of Proposition 6.1 hold, and that the following additional assumption holds: for all 𝑘∈{1,…,𝐾}, we have
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[Q_k(t)^2\right]}{t^2} < \infty. \tag{6.6} \]
Then (6.4) and (6.5) hold.

Proof (Lemma 6.2). Using 𝑓(𝐐) = ∑_{𝑘=1}^{𝐾} 𝑤𝑘|𝑄𝑘|, by Proposition 4.3, it suffices to show that
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[\Delta(t)^2\right]}{t^2} < \infty. \tag{6.7} \]
To this end, we have
\[ \Delta(t) = \frac{1}{2}\sum_{k=1}^{K} w_k\left[\left(Q_k(t) + \delta_k(t)\right)^2 - Q_k(t)^2\right] = \frac{1}{2}\sum_{k=1}^{K} w_k\left[2\delta_k(t)Q_k(t) + \delta_k(t)^2\right]. \tag{6.8} \]
Therefore, by (B.6) in Appendix B, there is a real number 𝐶>0 such that
\[ \mathbb{E}\left[\Delta(t)^2\right] \le C\sum_{k=1}^{K} \mathbb{E}\left[\delta_k(t)^2 Q_k(t)^2 + \delta_k(t)^4\right]. \tag{6.9} \]
However, because 𝔼[𝛿𝑘(𝑡)²∣ℋ(𝑡)] ≤ 𝐷, we have
\[ \mathbb{E}\left[\delta_k(t)^2 Q_k(t)^2\right] = \mathbb{E}\left[Q_k(t)^2\,\mathbb{E}\left[\delta_k(t)^2 \mid \mathcal{H}(t)\right]\right] \le D\,\mathbb{E}\left[Q_k(t)^2\right]. \tag{6.10} \]
Thus, we have
\[ \mathbb{E}\left[\Delta(t)^2\right] \le CD\sum_{k=1}^{K} \mathbb{E}\left[Q_k(t)^2\right] + C\sum_{k=1}^{K} \mathbb{E}\left[\delta_k(t)^4\right]. \tag{6.11} \]
Because ∑_{𝑡=1}^∞ 𝔼[𝑄𝑘(𝑡)²]/𝑡² < ∞ and ∑_{𝑡=1}^∞ 𝔼[𝛿𝑘(𝑡)⁴]/𝑡² < ∞, we also have ∑_{𝑡=1}^∞ 𝔼[Δ(𝑡)²]/𝑡² < ∞.

To prove Proposition 6.1, it remains only to show that the requirement ∑_{𝑡=1}^∞ 𝔼[𝑄𝑘(𝑡)²]/𝑡² < ∞ used in Lemma 6.2 follows from the other assumptions in the lemma and hence can be removed. Note that the drift-plus-penalty expression (6.3) together with 𝔼[𝑝(𝑡)∣ℋ(𝑡)] ≥ 𝑝min implies that for all 𝑡 and all ℋ(𝑡),
\[ \mathbb{E}\left[\Delta(t) \mid \mathcal{H}(t)\right] \le B + V(p^{*} - p_{\min}) - \epsilon\sum_{k=1}^{K} w_k |Q_k(t)|. \tag{6.12} \]
Then the requirement ∑_{𝑡=1}^∞ 𝔼[𝑄𝑘(𝑡)²]/𝑡² < ∞ follows by the next proposition.

Proposition 6.3. Suppose that 𝐿(𝐐) has the quadratic form (6.1), and that 𝔼[|𝑄𝑘(0)|³] < ∞ for all 𝑘∈{1,…,𝐾}. Suppose that (6.2) holds, and that there are constants 𝐹 ≥ 0, 𝜖>0, and 𝑤𝑘>0 for 𝑘∈{1,…,𝐾} such that for all 𝑡 and all ℋ(𝑡),
\[ \mathbb{E}\left[\Delta(t) \mid \mathcal{H}(t)\right] \le F - \epsilon\sum_{k=1}^{K} w_k |Q_k(t)|. \tag{6.13} \]
Then:
(a) For all 𝑘∈{1,…,𝐾} we have
\[ \sum_{t=1}^{\infty} \frac{\mathbb{E}\left[Q_k(t)^2\right]}{t^2} < \infty. \tag{6.14} \]
(b) For all 𝑘∈{1,…,𝐾} we have
\[ \lim_{t\to\infty} \frac{\mathbb{E}[|Q_k(t)|]}{t} = 0, \tag{6.15} \]
\[ \lim_{t\to\infty} \frac{Q_k(t)}{t} = 0 \quad (w.p.1), \tag{6.16} \]
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{k=1}^{K} w_k\,\mathbb{E}[|Q_k(\tau)|] \le \frac{F}{\epsilon}, \tag{6.17} \]
\[ \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{k=1}^{K} w_k |Q_k(\tau)| \le \frac{F}{\epsilon} \quad (w.p.1), \tag{6.18} \]
\[ \lim_{M\to\infty} \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \Pr\left[|Q_k(\tau)| > M\right] = 0, \tag{6.19} \]
\[ \lim_{M\to\infty} \limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} 1\left\{|Q_k(\tau)| > M\right\} = 0 \quad (w.p.1), \tag{6.20} \]
where 1{|𝑄𝑘(𝜏)|>𝑀} is an indicator function that is 1 if |𝑄𝑘(𝜏)|>𝑀, and 0 else.

Inequalities (6.15)–(6.20) represent six major forms of queue stability. Thus, all of these forms of stability are satisfied whenever the bounded second and fourth moment conditions (6.2) hold, and when the drift condition (6.13) holds for a quadratic Lyapunov function.

Proof. Part (a) is proven in Appendix C. To prove part (b), note that second moments are bounded because (6.2) holds. Further, the quadratic Lyapunov function (6.1) satisfies the condition (3.2), and the expression (6.13) implies that
\[ \mathbb{E}[\Delta(t)] \le F. \tag{6.21} \]
Thus, the rate stability and mean rate stability results (6.15)-(6.16) follow by Proposition 3.1. The result (6.17) follows by (2.8) (with 𝑉=0). The result (6.18) follows by part (a) together with Lemma 6.2 (for 𝑉=0). The results (6.19) and (6.20) follow immediately from (6.17) and (6.18), respectively, by noting that for all 𝑀>0 we have |𝑄𝑘(𝜏)| ≥ 𝑀·1{|𝑄𝑘(𝜏)|>𝑀}.

7. Stochastic Network Optimization

Here we use Propositions 5.1 and 6.1 to analyze an important class of stochastic queueing networks. This is the same scenario treated in [4, 36]. However, while the work in [4] obtains bounds on the time average expectations, here we obtain bounds on the pure time averages. Further, the works [4, 36] treat only quadratic Lyapunov functions. Here we treat both quadratics and subquadratics.

Consider a network of 𝐾 queues. The queue vector 𝐐(𝑡)=(𝑄1(𝑡),…,𝑄𝐾(𝑡)) evolves in slotted time 𝑡∈{0,1,2,…} with update equation as follows:
\[ Q_k(t+1) = \max\left[Q_k(t) - b_k(t) + a_k(t),\ 0\right] \quad \forall k \in \{1,\ldots,K\}, \tag{7.1} \]
where 𝑎𝑘(𝑡) and 𝑏𝑘(𝑡) are arrival and service variables, respectively, for queue 𝑘. These are determined on slot 𝑡 by general functions â𝑘(𝛼(𝑡),𝜔(𝑡)), b̂𝑘(𝛼(𝑡),𝜔(𝑡)) of a network state 𝜔(𝑡) and a control action 𝛼(𝑡) as follows:
\[ a_k(t) = \hat{a}_k(\alpha(t),\omega(t)), \qquad b_k(t) = \hat{b}_k(\alpha(t),\omega(t)), \tag{7.2} \]
where the control action 𝛼(𝑡) is made every slot 𝑡 with knowledge of the current 𝜔(𝑡) and is chosen within some abstract set 𝒜_{𝜔(𝑡)}. The 𝜔(𝑡) value can represent random arrival and channel state information on slot 𝑡, and 𝛼(𝑡) can represent a resource allocation decision. For simplicity, assume that the 𝜔(𝑡) process is independent and identically distributed (i.i.d.) over slots.

The control action additionally incurs a vector of penalties 𝐲(𝑡)=(𝑦0(𝑡),𝑦1(𝑡),…,𝑦𝑀(𝑡)), again given by general functions of 𝛼(𝑡) and 𝜔(𝑡) as follows:
\[ y_m(t) = \hat{y}_m(\alpha(t),\omega(t)). \tag{7.3} \]
For 𝑡>0, define ā𝑘(𝑡), b̄𝑘(𝑡), ȳ𝑚(𝑡), Q̄𝑘(𝑡) as time averages over the first 𝑡 slots:
\[ \bar{a}_k(t) \triangleq \frac{1}{t}\sum_{\tau=0}^{t-1} a_k(\tau), \qquad \bar{b}_k(t) \triangleq \frac{1}{t}\sum_{\tau=0}^{t-1} b_k(\tau), \tag{7.4} \]
\[ \bar{y}_m(t) \triangleq \frac{1}{t}\sum_{\tau=0}^{t-1} y_m(\tau), \qquad \bar{Q}_k(t) \triangleq \frac{1}{t}\sum_{\tau=0}^{t-1} Q_k(\tau). \tag{7.5} \]

Thus, every slot 𝑡, the controller observes 𝜔(𝑡) and chooses 𝛼(𝑡)∈𝒜_{𝜔(𝑡)}, with the goal of solving the following stochastic network optimization problem:
\[ \text{Minimize:} \quad \limsup_{t\to\infty}\ \bar{y}_0(t) \tag{7.6} \]
\[ \text{Subject to:} \quad (1)\ \limsup_{t\to\infty}\ \bar{y}_m(t) \le 0 \quad \forall m \in \{1,\ldots,M\}, \tag{7.7} \]
\[ \phantom{\text{Subject to:}} \quad (2)\ Q_k(t)\ \text{is rate stable} \quad \forall k \in \{1,\ldots,K\}, \tag{7.8} \]
\[ \phantom{\text{Subject to:}} \quad (3)\ \alpha(t) \in \mathcal{A}_{\omega(t)} \quad \forall t \in \{0,1,2,\ldots\}. \tag{7.9} \]
Assume that the problem is feasible (so that the constraints can be met), and define 𝑦₀^opt as the infimum value of (7.6) over all feasible algorithms.

Typical penalties can represent power expenditures [11]. For example, suppose that 𝑦𝑚(𝑡) ≜ 𝑝𝑚(𝑡) − 𝑝𝑚^av, where 𝑝𝑚(𝑡) is the power incurred in component 𝑚 of the network on slot 𝑡, and 𝑝𝑚^av is a required time average power expenditure. Then ensuring limsup_{𝑡→∞} ȳ𝑚(𝑡) ≤ 0 ensures that limsup_{𝑡→∞} p̄𝑚(𝑡) ≤ 𝑝𝑚^av, so that the desired time average power constraint is met.

To ensure that the time average penalty constraints are met, for each 𝑚∈{1,…,𝑀}, we define a virtual queue 𝑍𝑚(𝑡) as follows:
\[ Z_m(t+1) = \max\left[Z_m(t) + y_m(t),\ 0\right]. \tag{7.10} \]
Clearly 𝑍𝑚(𝜏+1) ≥ 𝑍𝑚(𝜏) + 𝑦𝑚(𝜏) for any integer 𝜏 ≥ 0. Summing this over 𝜏∈{0,1,…,𝑡−1} yields the following (for any integer 𝑡>0):
\[ Z_m(t) - Z_m(0) \ge \sum_{\tau=0}^{t-1} y_m(\tau). \tag{7.11} \]
Dividing by 𝑡 and rearranging terms yields
\[ \frac{1}{t}\sum_{\tau=0}^{t-1} y_m(\tau) \le \frac{Z_m(t)}{t} - \frac{Z_m(0)}{t}. \tag{7.12} \]
It follows that if 𝑍𝑚(𝑡) is rate stable for all 𝑚, so that 𝑍𝑚(𝑡)/𝑡→0 with probability 1, then the constraint (7.7) is satisfied with probability 1.
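The virtual queue update (7.10) is easy to sketch in code. The 𝑦𝑚 sequence below is hypothetical (e.g., per-slot power minus a budget); the key structural facts are that the queue never goes negative and that, by (7.12), a small final backlog certifies a small running average of 𝑦𝑚:

```python
# Virtual queue (7.10): Z_m(t+1) = max(Z_m(t) + y_m(t), 0).
# If Z_m(t)/t -> 0, then by (7.12) the running average of y_m is
# asymptotically <= 0. The y values below are illustrative only.

def virtual_queue_path(y_values, z0=0.0):
    """Trajectory of the virtual queue driven by the penalty sequence."""
    z = z0
    path = [z]
    for y in y_values:
        z = max(z + y, 0.0)
        path.append(z)
    return path

# e.g. y_m(t) = p_m(t) - p_av: positive when the slot's power exceeds budget
ys = [0.5, -1.0, 0.75, -0.5]
zs = virtual_queue_path(ys)
# zs == [0.0, 0.5, 0.0, 0.75, 0.25]; the backlog is floored at 0
```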

For simplicity, assume that all queues are initially empty. Now define 𝚯(𝑡) ≜ [𝐐(𝑡),𝐙(𝑡)] as the combined queue vector, and define the Lyapunov function as follows:
\[ L(\boldsymbol{\Theta}(t)) \triangleq \frac{1}{\beta}\left[\sum_{k=1}^{K} Q_k(t)^{\beta} + \sum_{m=1}^{M} Z_m(t)^{\beta}\right], \tag{7.13} \]
where 1<𝛽 ≤ 2. The case 𝛽=2 is a quadratic Lyapunov function, and the case 1<𝛽<2 is a subquadratic. The drift-plus-penalty algorithm thus seeks to minimize a bound on
\[ \mathbb{E}\left[\Delta(t) + V\hat{y}_0(\alpha(t),\omega(t)) \mid \mathcal{H}(t)\right]. \tag{7.14} \]

7.1. Computing the Drift-Plus-Penalty Inequality

Assume that the functions â𝑘(·), b̂𝑘(·), and ŷ0(·) satisfy the following for all possible 𝜔(𝑡) and all possible 𝛼(𝑡)∈𝒜_{𝜔(𝑡)}:
\[ 0 \le \hat{a}_k(\alpha(t),\omega(t)), \qquad 0 \le \hat{b}_k(\alpha(t),\omega(t)), \qquad y_0^{\min} \le \hat{y}_0(\alpha(t),\omega(t)) \le y_0^{\max}, \tag{7.15} \]
where 𝑦0^min and 𝑦0^max are deterministic bounds on 𝑦0(𝑡) for all 𝑡. Also assume that there is a finite constant 𝐷>0 such that for all (possibly randomized) choices of 𝛼(𝑡) in reaction to the i.i.d. 𝜔(𝑡), we have
\[ \mathbb{E}\left[\hat{a}_k(\alpha(t),\omega(t))^4\right] \le D \quad \forall k \in \{1,\ldots,K\}, \qquad \mathbb{E}\left[\hat{b}_k(\alpha(t),\omega(t))^4\right] \le D \quad \forall k \in \{1,\ldots,K\}, \qquad \mathbb{E}\left[\hat{y}_m(\alpha(t),\omega(t))^4\right] \le D \quad \forall m \in \{1,\ldots,M\}, \tag{7.16} \]
where the expectations are taken with respect to the distribution of the i.i.d. 𝜔(𝑡) process, and the possibly randomized decisions 𝛼(𝑡)∈𝒜_{𝜔(𝑡)}.

Using the result (5.4) for subquadratics and results in [4] for quadratics, it can be shown that the drift-plus-penalty expression satisfies the following bound:
\[ \mathbb{E}\left[\Delta(t) + V\hat{y}_0(\alpha(t),\omega(t)) \mid \mathcal{H}(t)\right] \le B + V\,\mathbb{E}\left[\hat{y}_0(\alpha(t),\omega(t)) \mid \mathcal{H}(t)\right] + \sum_{k=1}^{K} Q_k(t)^{\beta-1}\,\mathbb{E}\left[\hat{a}_k(\alpha(t),\omega(t)) - \hat{b}_k(\alpha(t),\omega(t)) \mid \mathcal{H}(t)\right] + \sum_{m=1}^{M} Z_m(t)^{\beta-1}\,\mathbb{E}\left[\hat{y}_m(\alpha(t),\omega(t)) \mid \mathcal{H}(t)\right] \tag{7.17} \]
for some finite constant 𝐵>0.

7.2. The Dynamic Algorithm

It is easy to show that, given $H(t)$, the right-hand side of (7.17) is minimized on slot $t$ by the policy that observes the current queue values $\mathbf{Q}(t)$, $\mathbf{Z}(t)$ and the current $\omega(t)$ and chooses $\alpha(t) \in \mathcal{A}_{\omega(t)}$ to minimize the following expression:
$$V\hat{y}_0(\alpha(t),\omega(t)) + \sum_{k=1}^{K} Q_k(t)^{\beta-1}\left[\hat{a}_k(\alpha(t),\omega(t)) - \hat{b}_k(\alpha(t),\omega(t))\right] + \sum_{m=1}^{M} Z_m(t)^{\beta-1}\hat{y}_m(\alpha(t),\omega(t)). \tag{7.18}$$
This minimization is done at the beginning of each slot $t$, and at the end of the slot the actual and virtual queues $Q_k(t)$, $Z_m(t)$ are updated according to (7.1) and (7.10). This policy does not require knowledge of the probability distribution of $\omega(t)$. One difficulty is that it may not be possible to achieve the infimum of the above expression over the set $\mathcal{A}_{\omega(t)}$, because we allow general (possibly discontinuous) functions $\hat{a}_k(\alpha(t),\omega(t))$, $\hat{b}_k(\alpha(t),\omega(t))$, $\hat{y}_m(\alpha(t),\omega(t))$ and a general (possibly noncompact) set $\mathcal{A}_{\omega(t)}$. Thus, we simply assume that there is a finite constant $C \ge 0$ such that our algorithm chooses $\alpha(t) \in \mathcal{A}_{\omega(t)}$ to come within an additive constant $C$ of the infimum on every slot $t$, so that
$$V\hat{y}_0(\alpha(t),\omega(t)) + \sum_{k=1}^{K} Q_k(t)^{\beta-1}\left[\hat{a}_k(\alpha(t),\omega(t)) - \hat{b}_k(\alpha(t),\omega(t))\right] + \sum_{m=1}^{M} Z_m(t)^{\beta-1}\hat{y}_m(\alpha(t),\omega(t)) \le C + \inf_{\alpha \in \mathcal{A}_{\omega(t)}}\left[V\hat{y}_0(\alpha,\omega(t)) + \sum_{k=1}^{K} Q_k(t)^{\beta-1}\left[\hat{a}_k(\alpha,\omega(t)) - \hat{b}_k(\alpha,\omega(t))\right] + \sum_{m=1}^{M} Z_m(t)^{\beta-1}\hat{y}_m(\alpha,\omega(t))\right]. \tag{7.19}$$
Such a choice of $\alpha(t)$ is called a $C$-additive approximation. The case $C=0$ corresponds to achieving the exact infimum on every slot.
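When the action set is finite, the infimum in (7.19) is attained, so the policy reduces to an exhaustive per-slot minimization of (7.18) (a $0$-additive approximation). The sketch below assumes the quadratic case $\beta = 2$; the function names `y0hat`, `ahat`, `bhat`, `ymhat` and the one-queue example are hypothetical stand-ins for $\hat{y}_0$, $\hat{a}_k$, $\hat{b}_k$, $\hat{y}_m$, not part of the paper:

```python
def dpp_decision(actions, omega, Q, Z, V, y0hat, ahat, bhat, ymhat, beta=2.0):
    """Return the action in `actions` minimizing expression (7.18)
    for the observed omega and current queue backlogs Q, Z."""
    def score(alpha):
        s = V * y0hat(alpha, omega)
        s += sum(Q[k] ** (beta - 1) * (ahat(k, alpha, omega) - bhat(k, alpha, omega))
                 for k in range(len(Q)))
        s += sum(Z[m] ** (beta - 1) * ymhat(m, alpha, omega) for m in range(len(Z)))
        return s
    return min(actions, key=score)

# Example: one queue (K=1), no virtual queues (M=0); transmitting (alpha=1)
# serves one packet at power price omega, idling (alpha=0) costs nothing.
y0 = lambda alpha, omega: alpha * omega      # penalty: power spent
a  = lambda k, alpha, omega: 0.0             # arrivals are action-independent
b  = lambda k, alpha, omega: float(alpha)    # service: one packet if transmitting
ym = lambda m, alpha, omega: 0.0

# Price 3 is worth paying when backlog is 5 (score 3 - 5 < 0), so transmit:
choice_cheap = dpp_decision([0, 1], omega=3.0, Q=[5.0], Z=[], V=1.0,
                            y0hat=y0, ahat=a, bhat=b, ymhat=ym)
```

The decision rule only compares the penalty term against queue-weighted service terms, which is why no statistics of $\omega(t)$ are needed.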

7.3. 𝜔-Only Policies and the Slackness Condition

Define an $\omega$-only policy to be one that chooses $\alpha(t) \in \mathcal{A}_{\omega(t)}$ every slot $t$ according to a stationary and randomized decision based only on the observed $\omega(t)$ (in particular, independent of $H(t)$). In [36] it is shown that if the problem (7.6)–(7.9) is feasible, then for all $\delta > 0$ there must be an $\omega$-only policy $\alpha^*(t)$ that satisfies the following:
$$\mathbb{E}\left[\hat{y}_0(\alpha^*(t),\omega(t))\right] \le y_0^{opt} + \delta,$$
$$\mathbb{E}\left[\hat{a}_k(\alpha^*(t),\omega(t)) - \hat{b}_k(\alpha^*(t),\omega(t))\right] \le \delta \quad \forall k \in \{1,\ldots,K\},$$
$$\mathbb{E}\left[\hat{y}_m(\alpha^*(t),\omega(t))\right] \le \delta \quad \forall m \in \{1,\ldots,M\}. \tag{7.20}$$

We assume that the problem is feasible. It is often useful to assume that the following stronger slackness assumption is satisfied: there exist an $\epsilon > 0$ and a particular $\omega$-only policy $\alpha'(t)$ (typically different from the policy that yields (7.20)) such that
$$\mathbb{E}\left[\hat{a}_k(\alpha'(t),\omega(t)) - \hat{b}_k(\alpha'(t),\omega(t))\right] \le -\epsilon \quad \forall k \in \{1,\ldots,K\},$$
$$\mathbb{E}\left[\hat{y}_m(\alpha'(t),\omega(t))\right] \le -\epsilon \quad \forall m \in \{1,\ldots,M\}. \tag{7.21}$$
This assumption is similar to a Slater-type assumption used in convex optimization theory [5].

7.4. Performance Bounds for Subquadratics

Assume $1 < \beta < 2$. Because our policy $\alpha(t)$ comes within $C \ge 0$ of minimizing the right-hand side of (7.17) every slot $t$ (given the observed $H(t)$), we have for all $t$ and all possible $H(t)$:
$$\mathbb{E}\left[\Delta(t)+V\hat{y}_0(\alpha(t),\omega(t)) \mid H(t)\right] \le B + C + V\,\mathbb{E}\left[\hat{y}_0(\alpha^*(t),\omega(t)) \mid H(t)\right] + \sum_{k=1}^{K} Q_k(t)^{\beta-1}\,\mathbb{E}\left[\hat{a}_k(\alpha^*(t),\omega(t)) - \hat{b}_k(\alpha^*(t),\omega(t)) \mid H(t)\right] + \sum_{m=1}^{M} Z_m(t)^{\beta-1}\,\mathbb{E}\left[\hat{y}_m(\alpha^*(t),\omega(t)) \mid H(t)\right], \tag{7.22}$$
where $\alpha^*(t)$ is any other decision that can be implemented on slot $t$. Fix any $\delta > 0$. Using the policy $\alpha^*(t)$ designed to achieve (7.20), and noting that this policy makes decisions independent of $H(t)$, yields
$$\mathbb{E}\left[\Delta(t)+V\hat{y}_0(\alpha(t),\omega(t)) \mid H(t)\right] \le B + C + V\left(y_0^{opt}+\delta\right) + \delta\sum_{k=1}^{K} Q_k(t)^{\beta-1} + \delta\sum_{m=1}^{M} Z_m(t)^{\beta-1}. \tag{7.23}$$
The above holds for all $\delta > 0$. Taking a limit as $\delta \to 0$ yields
$$\mathbb{E}\left[\Delta(t)+V y_0(t) \mid H(t)\right] \le B + C + V y_0^{opt}, \tag{7.24}$$
where for simplicity we have substituted $y_0(t) = \hat{y}_0(\alpha(t),\omega(t))$ on the left-hand side. Inequality (7.24) is in the exact form of the drift-plus-penalty condition (5.5) (with $\epsilon = 0$). Recall that the penalty $y_0(t)$ is deterministically lower bounded by some finite (possibly negative) value $y_0^{\min}$. Further, the moment bounds (7.16) can easily be shown to imply that the boundedness assumptions (5.2), (4.7), and (4.8) hold. Thus, we can apply Proposition 5.1 to conclude that all queues are rate stable. In particular, $Z_m(t)/t \to 0$ with probability 1 for all $m$, so that the constraints (7.7) are satisfied:
$$\limsup_{t\to\infty} \overline{y}_m(t) \le 0 \quad \forall m \in \{1,\ldots,M\} \quad \text{(w.p.1)}. \tag{7.25}$$
Further, again by Proposition 5.1, we have
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} y_0(\tau) \le y_0^{opt} + \frac{B+C}{V} \quad \text{(w.p.1)}. \tag{7.26}$$
Thus, all constraints of the problem are met, and the time average of the penalty $y_0(t)$ is within $O(1/V)$ of optimality.

The above did not require the slackness assumption. However, we can understand the tradeoff with $V$ by assuming that the slackness assumption holds, so that a policy $\alpha'(t)$ exists that satisfies (7.21) for some $\epsilon > 0$. Using (7.21) in the right-hand side of (7.22) gives
$$\mathbb{E}\left[\Delta(t) \mid H(t)\right] \le B + C + V\left(y_0^{\max} - y_0^{\min}\right) - \epsilon\sum_{k=1}^{K} Q_k(t)^{\beta-1} - \epsilon\sum_{m=1}^{M} Z_m(t)^{\beta-1}, \tag{7.27}$$
from which we obtain by Proposition 5.1:
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\left[\sum_{k=1}^{K} Q_k(\tau)^{\beta-1} + \sum_{m=1}^{M} Z_m(\tau)^{\beta-1}\right] \le \frac{B + C + V\left(y_0^{\max} - y_0^{\min}\right)}{\epsilon} \quad \text{(w.p.1)}. \tag{7.28}$$

7.5. Performance Bounds for Quadratics

Similarly, we can obtain performance bounds for the quadratic case $\beta = 2$. Without the slackness condition, we can obtain an expression identical to (7.24). This ensures that all queues are rate stable (by Proposition 3.1), so that all desired constraints are satisfied. However, we cannot use Proposition 6.1 because the drift expression has $\epsilon = 0$. Thus, we require the slackness assumption to proceed further. Under this assumption, we derive (similar to (7.27)):
$$\mathbb{E}\left[\Delta(t) \mid H(t)\right] \le B + C + V\left(y_0^{\max} - y_0^{\min}\right) - \epsilon\sum_{k=1}^{K} Q_k(t) - \epsilon\sum_{m=1}^{M} Z_m(t), \tag{7.29}$$
from which we obtain by Proposition 6.1:
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1}\left[\sum_{k=1}^{K} Q_k(\tau) + \sum_{m=1}^{M} Z_m(\tau)\right] \le \frac{B + C + V\left(y_0^{\max} - y_0^{\min}\right)}{\epsilon} \quad \text{(w.p.1)}. \tag{7.30}$$
We also know from (7.29) and Proposition 6.3 that $\sum_{t=1}^{\infty} \mathbb{E}\left[Q_k(t)^2\right]/t^2 < \infty$, so that we conclude from (7.24) that
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} y_0(\tau) \le y_0^{opt} + \frac{B+C}{V} \quad \text{(w.p.1)}. \tag{7.31}$$
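The $[O(1/V), O(V)]$ tradeoff of (7.30)-(7.31) can be observed in a toy single-queue power-minimization simulation: with $\beta = 2$, minimizing (7.18) over the actions $\{0,1\}$ reduces to transmitting exactly when $Q(t) \ge V\,\omega(t)$. The arrival rate, the price distribution, and the cost structure below are illustrative assumptions, not from the paper:

```python
import random

def simulate(V, T=100_000, seed=0):
    """Run the threshold form of the drift-plus-penalty rule on a single
    queue and return (time average power cost, time average backlog)."""
    rng = random.Random(seed)
    Q, total_cost, total_q = 0.0, 0.0, 0.0
    for _ in range(T):
        a = 1.0 if rng.random() < 0.3 else 0.0   # Bernoulli(0.3) arrivals
        c = rng.choice([1.0, 2.0, 3.0])          # i.i.d. power price omega(t)
        # Minimize V*c*b - Q*b over b in {0,1}: transmit iff Q >= V*c.
        b = 1.0 if Q >= V * c else 0.0
        total_cost += c * b
        total_q += Q
        Q = max(Q + a - b, 0.0)                  # queue update, one packet served
    return total_cost / T, total_q / T

cost_lo, q_lo = simulate(V=1)
cost_hi, q_hi = simulate(V=100)
# Larger V: time average cost closer to optimal, at the price of larger backlog.
assert cost_hi < cost_lo and q_hi > q_lo
```

With large $V$ the policy waits for cheap slots ($c = 1$, which occur often enough to support the arrival rate), so the time average cost approaches its optimum while the backlog grows roughly linearly in $V$, matching (7.30)-(7.31).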

8. Conclusions

This work derives sample path stability and time average penalty results for stochastic networks. It uses the law of large numbers for martingale differences to strengthen existing performance guarantees from time average expectations to pure time averages, with probability 1. This requires only mild bounded second and fourth moment assumptions, which are shown to be necessary via two simple counterexamples in Appendix A. The analysis considers both quadratic and subquadratic Lyapunov functions and uses both types of functions to develop and analyze algorithms for minimizing a time average penalty subject to network stability constraints and additional time average constraints. These results are useful for control of queueing networks and for other stochastic problems of optimizing time averages subject to time average constraints.


A. Counterexamples

This appendix provides two examples of systems that satisfy the drift-plus-penalty condition (2.3), so that the desired time average expectations (2.7)-(2.8) are satisfied. However, the corresponding sample path time averages do not hold because one of the bounded moment assumptions (6.2), (4.7), and (4.8) is violated.

A.1. The Penalty Process 𝑝(𝑡)

Define a process $p(t)$ that is independent over slots $t \in \{0,1,2,\ldots\}$ and satisfies the following:
$$p(t) = \begin{cases} 0 & \text{with probability } 1 - \dfrac{1}{2(t+1)}, \\[1ex] t+1 & \text{with probability } \dfrac{1}{2(t+1)}. \end{cases} \tag{A.1}$$
Formally define $\mathbf{Q}(t) = \mathbf{0}$ for all $t$, and use $L(\mathbf{Q}) = (1/2)\sum_{k=1}^{K} Q_k^2$ and $V = 1$, so that $\mathbb{E}[\Delta(t) + V p(t) \mid H(t)] = \mathbb{E}[p(t) \mid H(t)] = 1/2$ for all $t$ and all $H(t)$. It is clear that $\lim_{t\to\infty} (1/t)\sum_{\tau=0}^{t-1} \mathbb{E}[p(\tau)] = 1/2$. However, because the probability $1/(2(t+1))$ decays slowly, $p(t)$ is equal to $t+1$ for infinitely many slots $t \in \{0,1,2,\ldots\}$ (with probability 1), and for any such time $t$ we have
$$\frac{1}{t+1}\sum_{\tau=0}^{t} p(\tau) \ge \frac{p(t)}{t+1} = \frac{t+1}{t+1} = 1. \tag{A.2}$$
Thus, with probability 1 we have
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} p(\tau) \ge 1. \tag{A.3}$$
This differs from the time average expectation because the second moment condition (4.8) is violated.
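The process (A.1) can be simulated directly. The sketch below checks that $\mathbb{E}[p(t)] = 1/2$ for every $t$ by direct computation, and that every "spike" slot (where $p(t) = t+1$) pushes the running sample average to at least 1, exactly as in (A.2); the horizon and seed are arbitrary choices:

```python
import random

rng = random.Random(1)
T = 20_000
partial_sum, spike_averages = 0.0, []
for t in range(T):
    # Draw p(t) per (A.1): value t+1 with probability 1/(2(t+1)), else 0.
    p = (t + 1.0) if rng.random() < 1.0 / (2 * (t + 1)) else 0.0
    partial_sum += p
    if p > 0:   # spike slot: running average over 0..t is at least 1
        spike_averages.append(partial_sum / (t + 1))

# E[p(t)] = (t+1) * 1/(2(t+1)) = 1/2 exactly, for every t:
for t in range(10):
    assert abs((t + 1) * (1.0 / (2 * (t + 1))) - 0.5) < 1e-12

# Every spike pushes the running average to at least 1, as in (A.2):
assert all(avg >= 1.0 for avg in spike_averages)
```

Because the spike probabilities $1/(2(t+1))$ sum to infinity and the slots are independent, spikes recur forever with probability 1, so the sample average returns to 1 infinitely often even though its expectation stays at $1/2$.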

A.2. The Queue Process 𝑄(𝑡)

Let $K = 1$, $V = 0$, and $p(t) = 0$ for all $t$. Define a queue process $Q(t)$ with $Q(0) = 0$ and with dynamics as follows:
$$Q(t+1) = \max\left[Q(t) + a(t) - 1,\, 0\right], \tag{A.4}$$
where $a(t)$ is independent over slots $t \in \{0,1,2,\ldots\}$ and satisfies
$$a(t) = \begin{cases} 0 & \text{with probability } 1 - \dfrac{1}{100(t+1)}, \\[1ex] 10\sqrt{t+1} & \text{with probability } \dfrac{1}{100(t+1)}. \end{cases} \tag{A.5}$$
Define $L(Q(t)) = (1/2)Q(t)^2$ and define $\Delta(t) = L(Q(t+1)) - L(Q(t))$. Clearly $Q(t+1)^2 \le (Q(t) + a(t) - 1)^2$, and so a basic calculation shows that for all $t \in \{0,1,2,\ldots\}$ and all $H(t)$ we have
$$\mathbb{E}\left[\Delta(t) \mid H(t)\right] \le \frac{1}{2}\mathbb{E}\left[1 + a(t)^2 \mid H(t)\right] - Q(t)\,\mathbb{E}\left[1 - a(t) \mid H(t)\right] = 1 - Q(t)\left(1 - \frac{1}{10\sqrt{t+1}}\right) \le 1 - \frac{9}{10}Q(t). \tag{A.6}$$
Using $\epsilon = 9/10$, it follows by (2.8) that
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} \mathbb{E}[Q(\tau)] \le \frac{10}{9}. \tag{A.7}$$
However, with probability 1, we have $a(t) = 10\sqrt{t+1}$ for an infinite number of slots $t \in \{0,1,2,\ldots\}$. Let $t$ be such a slot, and assume that $t \ge 1$. Because $Q(t)$ can decrease by at most 1 every slot, it follows that for all $\tau \in \{t+1, \ldots, t + \lceil\sqrt{t+1}\rceil\}$ we have
$$Q(\tau) \ge 10\sqrt{t+1} - \sqrt{t+1} - 1 \ge 8\sqrt{t+1}, \tag{A.8}$$
where we have used the fact that $\sqrt{t+1} + 1 \le 2\sqrt{t+1}$. Then we have
$$\sum_{\tau=0}^{t+\lceil\sqrt{t+1}\rceil} Q(\tau) \ge \sum_{\tau=t+1}^{t+\lceil\sqrt{t+1}\rceil} 8\sqrt{t+1} \ge 8(t+1). \tag{A.9}$$
Thus,
$$\frac{1}{t + \lceil\sqrt{t+1}\rceil + 1}\sum_{\tau=0}^{t+\lceil\sqrt{t+1}\rceil} Q(\tau) \ge \frac{8(t+1)}{t + \lceil\sqrt{t+1}\rceil + 1}. \tag{A.10}$$
Since (with probability 1) this holds for infinitely many $t \in \{1,2,3,\ldots\}$, we have with probability 1:
$$\limsup_{t\to\infty} \frac{1}{t}\sum_{\tau=0}^{t-1} Q(\tau) \ge 8. \tag{A.11}$$
This differs from the time average expectation because the fourth moment condition (6.2) is violated.
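The role of the moment conditions in this counterexample can be checked by direct computation: the arrival process (A.5) has second moment identically equal to 1, yet its fourth moment grows as $100(t+1)$, which is why the fourth moment condition (6.2) fails. A minimal sketch (the function name `moments` is illustrative):

```python
def moments(t):
    """Return (E[a(t)^2], E[a(t)^4]) for the two-point distribution (A.5)."""
    q = 1.0 / (100 * (t + 1))        # Pr[a(t) = 10*sqrt(t+1)]
    val = 10.0 * (t + 1) ** 0.5
    m2 = q * val ** 2                # = 100(t+1)/(100(t+1)) = 1 for all t
    m4 = q * val ** 4                # = 100(t+1), unbounded as t grows
    return m2, m4

for t in (0, 9, 99, 999):
    m2, m4 = moments(t)
    assert abs(m2 - 1.0) < 1e-9                  # second moment stays bounded
    assert abs(m4 - 100.0 * (t + 1)) < 1e-6      # fourth moment diverges
```

So the process satisfies the second moment assumptions needed for the time average expectation bound (A.7), while the divergent fourth moment leaves room for the sample-path behavior (A.11).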

B. Proof of Lemma 3.2

Here, we prove Lemma 3.2. Let $Q(t)$ be a real-valued random process over slots $t \in \{0,1,2,\ldots\}$, and let $\{t_k\}_{k=1}^{\infty}$ be a subsequence of positive integers that increases to infinity. By the Borel-Cantelli lemma, to show that $\lim_{k\to\infty} Q(t_k)/t_k = 0$ with probability 1, it suffices to show that for any $\epsilon > 0$ we have
$$\sum_{k=1}^{\infty} \Pr\left[\frac{|Q(t_k)|}{t_k} > \epsilon\right] < \infty. \tag{B.1}$$
Suppose there are constants $C > 0$ and $b > 0$ such that
$$\mathbb{E}\left[|Q(t)|^{1+b}\right] \le Ct \quad \forall t \in \{1,2,3,\ldots\}. \tag{B.2}$$
We first treat the case $b > 1$.

Lemma B.1. If (B.2) holds for some $b > 1$, then $\lim_{t\to\infty} Q(t)/t = 0$ with probability 1.

Proof. Fix $\epsilon > 0$. From the Markov inequality, we have for all $t \in \{1,2,3,\ldots\}$:
$$\Pr\left[\frac{|Q(t)|}{t} > \epsilon\right] = \Pr\left[|Q(t)|^{1+b} > (\epsilon t)^{1+b}\right] \le \frac{\mathbb{E}\left[|Q(t)|^{1+b}\right]}{(\epsilon t)^{1+b}} \le \frac{Ct}{(\epsilon t)^{1+b}} = \frac{C}{\epsilon^{1+b} t^{b}}. \tag{B.3}$$
Since $b > 1$, the sum of these terms over $t \in \{1,2,3,\ldots\}$ is finite, proving the result.

Now suppose that $0 < b \le 1$, and define $\alpha = 2/b$ (so that $\alpha \ge 2$). For $k \in \{1,2,3,\ldots\}$, define the increasing subsequence of times $t_k = \lceil k^{\alpha} \rceil$.

Lemma B.2. If (B.2) holds for some $0 < b \le 1$, then $\lim_{k\to\infty} Q(t_k)/t_k = 0$ with probability 1.

Proof of Lemma B.2. Fix $\epsilon > 0$. From (B.3), we have for $t = t_k$:
$$\Pr\left[\frac{|Q(t_k)|}{t_k} > \epsilon\right] \le \frac{C}{\epsilon^{1+b} t_k^{b}}. \tag{B.4}$$
Using the definition of $t_k$ gives $t_k^{b} \ge k^{b\alpha} = k^{2}$, and so
$$\Pr\left[\frac{|Q(t_k)|}{t_k} > \epsilon\right] \le \frac{C}{\epsilon^{1+b} k^{2}}. \tag{B.5}$$
Thus, the sum of these terms over $k \in \{1,2,3,\ldots\}$ is finite, proving the result.

To complete the proof that 𝑄(𝑡)/𝑡0 with probability 1 for the case 0<𝑏1, we require the following simple lemma, which is also useful in other parts of this paper.

Lemma B.3. Let $\{X_1,\ldots,X_M\}$ be a collection of random variables, let $\gamma, a_0, a_1, \ldots, a_M$ be real numbers, and assume that $\gamma \ge 1$. Then we have $\mathbb{E}\Big[\Big|\sum^{M}$