Strong Truncation Approximation in Tandem Queues with Blocking
Strong Truncation Approximation in Tandem Queues with Blocking. Markov models are frequently used for performance modeling. However, most models do not have closed-form solutions, and numerical solution is often infeasible because models of practical interest have large or even infinite state spaces. Consequently, state-space truncation is often required for the computation of such models. In this paper, we use the strong stability approach to establish analytic error bounds for the truncation of a tandem queue with blocking. Numerical examples are carried out to illustrate the quality of the obtained error bounds.
Queueing networks consisting of several service stations are more suitable for representing the structure of many systems with a large number of resources than models consisting of a single service station. In particular, queueing networks are used for the performance and reliability evaluation of computer, communication, and manufacturing systems.
The determination of the steady-state probabilities of all possible states of the network can be regarded as the central problem of queueing theory; the mean values of all other important performance measures of the network can be calculated from them. Several efficient algorithms for the exact solution of queueing networks have been introduced. However, the memory requirements and computation time of these algorithms grow exponentially with the number of job classes in the system. For computationally difficult problems of networks with a large number of job classes, we resort to approximation methods.
Many approximation methods for non-product-form networks are discussed in the literature (see  and the references therein). In particular, a well-known technique for limiting model size is state truncation [4, 5]. Indeed, approximating a countable-state Markov chain by finite-state Markov chains is an interesting and often challenging topic that has attracted many researchers' attention. Computationally, to solve for the stationary distribution, when it exists, of a countable-state Markov chain, the transition probability matrix must first be truncated in some way into a finite matrix. We then compute the stationary distribution of this finite-state Markov chain as an approximation to that of the countable-state one. We expect that, as the truncation size increases to infinity, the solution for the finite Markov chain converges to that of the countable-state Markov chain. While for many applications this convergence can be justified by the physical meanings of the finite and countable-state Markov chains, it is not always easy to justify the claim formally.
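As a concrete illustration of this truncation procedure (our own sketch, not taken from any of the cited works; the function name and parameters are ours), consider the uniformized chain of an M/M/1 queue, truncated at a finite level by rejecting arrivals there:

```python
import numpy as np

def mm1_truncated_stationary(lam, mu, n):
    """Stationary law of the uniformized M/M/1 chain truncated at level n.

    The truncation rejects arrivals in state n, which keeps every row of
    the transition matrix stochastic.
    """
    u = lam + mu                               # uniformization constant
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            P[i, i + 1] = lam / u              # arrival
        if i > 0:
            P[i, i - 1] = mu / u               # departure
        P[i, i] = 1.0 - P[i].sum()             # self-loop keeps the row stochastic
    # Solve pi P = pi with sum(pi) = 1 as an (overdetermined) linear system.
    A = np.vstack([P.T - np.eye(n + 1), np.ones(n + 1)])
    b = np.zeros(n + 2)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# As the truncation level grows, the solution converges to the geometric
# stationary law pi_j = (1 - rho) * rho**j of the untruncated M/M/1 queue.
rho = 0.5
pi = mm1_truncated_stationary(lam=1.0, mu=2.0, n=40)
exact = (1 - rho) * rho ** np.arange(41)
print(np.abs(pi - exact).sum())  # gap shrinks as n grows
```

Here the convergence can be checked against the known geometric solution; for the non-product-form networks considered below, no such closed form is available, which is precisely why quantitative error bounds are needed.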
The study of approximating the stationary probabilities of an infinite Markov chain by finite Markov chains was initiated by Seneta  in 1967. Many up-to-date results were obtained by him and several collaborators; most of these results are included in a paper by Gibson and Seneta . Other references may be found therein and in another paper  published in the same year by the same authors. Other researchers, including Wolf , used approaches different from those of Seneta et al. For instance, Heyman  provided a probabilistic treatment of the problem. Later, Grassmann and Heyman  justified the convergence for infinite-state Markov chains with repeating rows. All the above results concern approximating stationary distributions. Regarding more general issues of approximating a countable-state Markov chain, see the book by Freedman .
A different though related line of research is that of perturbed Markov chains. General results on perturbation bounds for Markov chains are summarized by Heidergott and Hordijk . One group of results concerns the sensitivity of the stationary distribution of a finite, homogeneous Markov chain (see Heidergott et al. [14, 15]), and the bounds are derived using methods of matrix analysis; see the review of Cho and Meyer  and recent papers of Kirkland [17, 18] and Neumann and Xu . Another group includes perturbation bounds for finite-time and invariant distributions of Markov chains with general state space; see Anisimov , Rachev , Aïssani and Kartashov , Kartashov , and Mitrophanov . In these works, the bounds for general Markov chains are expressed in terms of ergodicity coefficients of the iterated transition kernel, which are difficult to compute for infinite state spaces. These results were obtained using operator-theoretic and probabilistic methods. Some of these methods yield quantitative estimates in addition to the qualitative affirmation of continuity.
In this paper we are interested in computing error bounds on the stationary queue length distributions of queueing networks obtained by finite truncation of some buffers, provided stability holds. It is natural to approximate the stationary distribution of a queueing network by truncating some buffers, and we may expect such a truncation to approximate the original model well as the truncation level (or size) becomes large. We therefore extend the applicability of the strong stability approach [23, 25] to the truncation problem for a tandem queue with blocking. As is well known, this network is a multidimensional, non-product-form queueing network (see, for example, Van Dijk ). Our interest is thus in the conditions guaranteeing that the steady-state joint queue length distribution of this tandem queue system is well approximated by its finite-buffer truncation. Such conditions allow us to obtain better quantitative estimates of the stationary characteristics of the tandem queue with blocking and infinite buffers.
The paper is organized as follows. Section 2 contains the necessary definitions and notation. In Section 3, we present the queueing network model under consideration and give new perturbation bounds for the corresponding truncation problem. A numerical example is presented in Section 4. Finally, we point out directions for further research.
2. Strong Stability Approach
The main tool for our analysis is the weighted supremum norm, also called the $v$-norm and denoted by $\|\cdot\|_v$, where $v$ is some vector with elements $v(j)$ bounded from below by a positive constant for all $j$.
Let $\mathcal{B}(\mathbb{N})$ denote the Borel field of the natural numbers $\mathbb{N}$ equipped with the discrete topology, and consider the measurable space $(\mathbb{N}, \mathcal{B}(\mathbb{N}))$.
Let $\mathfrak{M} = \{\mu_j\}_{j \ge 0}$ be the space of finite measures on $\mathcal{B}(\mathbb{N})$ and let $\mathfrak{N} = \{f(j)\}_{j \ge 0}$ be the space of bounded measurable functions on $\mathbb{N}$. We associate with each transition operator $P$ the linear mappings
$$(\mu P)_k = \sum_{j \ge 0} \mu_j P(j,k), \qquad (Pf)(j) = \sum_{k \ge 0} f(k) P(j,k).$$
Introduce the class of norms of the form
$$\|\mu\|_v = \sum_{j \ge 0} v(j)\, |\mu_j|,$$
where $v$ is an arbitrary measurable function (not necessarily finite) bounded from below by a positive constant. This norm induces in the space $\mathfrak{N}$ the norm
$$\|f\|_v = \sup_{j \ge 0} \frac{|f(j)|}{v(j)}.$$
Let us consider $\mathfrak{B}$, the space of bounded linear operators on the space $\{\mu \in \mathfrak{M} : \|\mu\|_v < \infty\}$, with norm
$$\|P\|_v = \sup_{j \ge 0} \frac{1}{v(j)} \sum_{k \ge 0} v(k)\, |P(j,k)|.$$
Let $\pi$ and $\nu$ be two invariant measures and suppose that these measures have finite $v$-norm. Then, for every measurable function $f$ with $\|f\|_v < \infty$,
$$|\pi f - \nu f| \le \|f\|_v \, \|\pi - \nu\|_v.$$
For our analysis, we will assume that $v$ is of the particular form $v(j) = \beta^j$, for $\beta > 1$ and $j \in \mathbb{N}$, which implies $\|\mu\|_v = \sum_{j \ge 0} \beta^j |\mu_j|$. Hence, the bound above becomes
$$|\pi f - \nu f| \le \|f\|_v \sum_{j \ge 0} \beta^j\, |\pi_j - \nu_j|.$$
We say that the Markov chain with transition kernel $P$ verifying $\|P\|_v < \infty$ and invariant measure $\pi$ is strongly $v$-stable if every stochastic transition kernel $Q$ in some neighborhood $\{Q : \|Q - P\|_v \le \epsilon\}$ admits a unique invariant measure $\nu$ such that $\|\nu - \pi\|_v$ tends to zero as $\|Q - P\|_v$ tends to zero, uniformly in this neighborhood. The key criterion for strong stability of a Markov chain is the existence of a deficient version of $P$, defined in the following.
Thereby, the Markov chain with transition kernel $P$ and invariant measure $\pi$ is strongly $v$-stable with respect to the norm $\|\cdot\|_v$ if and only if there exist a measure $\alpha$ and a nonnegative measurable function $h$ on $\mathbb{N}$ such that the following conditions hold:
(a) $\pi h > 0$, $\alpha \mathbf{1} = 1$, $\alpha h > 0$;
(b) the kernel $T = P - h \circ \alpha$ is nonnegative;
(c) the $v$-norm of the kernel $T$ is strictly less than one, that is, $\|T\|_v < 1$;
(d) $\|P\|_v < \infty$;
where $h \circ \alpha$ denotes the convolution between the measure $\alpha$ and the function $h$, that is, $(h \circ \alpha)(j,k) = h(j)\alpha_k$, and $\mathbf{1}$ is the vector having all components equal to $1$.
It has been shown in  that a Markov chain with transition kernel $P$ is strongly stable with respect to $\|\cdot\|_v$ if and only if a residual kernel for $P$ with respect to $v$ exists. Although the strong stability approach originates from the stability theory of Markov chains, the techniques developed for it allow us to establish numerical algorithms for bounding $\|\pi - \nu\|_v$. Such a bound is established in the following theorem.
Theorem 2.1 (see ). Let $P$ be strongly stable. If the deviation $\|Q - P\|_v$ is sufficiently small, then the following bound holds, where $\Pi$ is the stationary projector of the kernel $P$ and $I$ is the identity matrix.
Note that the term $\|I - \Pi\|_v$ in the bound provided in Theorem 2.1 can be bounded by $1 + \|\mathbf{1}\|_v \, \|\pi\|_v$, since $\Pi = \mathbf{1} \circ \pi$. In this case, we can also bound $\|\nu - \pi\|_v$ by the corresponding explicit expression.
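For finite matrices, the weighted-norm quantities used throughout this section are straightforward to compute. The following sketch (ours, assuming the geometric weight $v(j) = \beta^j$ introduced above; the function names are not from the paper) may clarify the definitions:

```python
import numpy as np

def v_norm_measure(mu, beta):
    """||mu||_v = sum_j v(j) * |mu_j| with the geometric weight v(j) = beta**j."""
    return np.sum(beta ** np.arange(len(mu)) * np.abs(mu))

def v_norm_kernel(P, beta):
    """||P||_v = sup_j (1 / v(j)) * sum_k v(k) * |P(j, k)|."""
    v = beta ** np.arange(P.shape[0])
    return np.max((np.abs(P) @ v) / v)

# With beta = 1 the kernel norm of any stochastic matrix is its maximal
# row sum, i.e. 1; larger beta penalizes mass pushed toward high states.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.6, 0.4]])
print(v_norm_kernel(P, beta=1.0))
print(v_norm_kernel(P, beta=2.0))
```

The same two routines suffice to evaluate every ingredient of the bound in Theorem 2.1 once the kernels are represented as finite matrices.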
3. Analysis of the Model
3.1. Model Description and Assumptions
Consider two stations in series: a tandem queue of M/M/1/$\infty$ and M/M/1/N type. There is one server at each station, and customers arrive at station 1 according to a Poisson process with a state-dependent rate when customers are present at station 1. (1) Customer service times at station $i$ are exponentially distributed with rate $\mu_i$, $i = 1, 2$. (2) The interarrival and service times are independent of one another. The size of the buffer at station 1 is infinite, whereas the buffer size at station 2 is $N$. When the second station is saturated, service at the first station is stopped. Queueing is assumed to be first-come, first-served.
The steady-state joint queue size distribution of this tandem queue system does not admit a closed product-form expression . Numerical studies and approximation procedures have therefore been investigated widely (see, for example, Boxma and Konheim , Hillier and Boling , and Latouche and Neuts ). Van Dijk  analysed the same tandem queue system and obtained an explicit error bound for bias terms of reward structures. In this paper, as in , we consider the truncation of the buffer at station 1 to obtain another analytic error bound by using the strong stability approach . Therefore, we assume that for some constant :
Let denote the numbers of customers at stations 1 and 2, respectively, and consider the discrete time Markov chain with one-step transition probabilities given by :
In order to apply the strong stability approach, we consider the same truncation considered by Van Dijk . Therefore, for a finite truncation level, we have the following truncation:
Equation (3.3) means that the queue size at station 1 is truncated at the chosen level by rejecting arrivals whenever that level is reached. We remark also that both transition matrices are stochastic.
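To make the two kernels concrete, the following sketch reconstructs them from the model description by uniformization. It is our own illustration, not the paper's code: the function and parameter names are assumptions, and a finite enumeration up to a large first-coordinate level stands in for the infinite-buffer chain.

```python
import numpy as np

def tandem_kernel(lam, mu1, mu2, N, M):
    """Uniformized one-step kernel of the tandem queue with blocking.

    State (i, j): i customers at station 1 (enumerated up to level M),
    j <= N customers at station 2.  Arrivals (rate lam) are rejected at
    i = M, so a large M emulates the infinite-buffer chain while a small
    M gives the truncated chain of equation (3.3).
    """
    states = [(i, j) for i in range(M + 1) for j in range(N + 1)]
    idx = {s: k for k, s in enumerate(states)}
    Lam = lam + mu1 + mu2                      # uniformization constant
    P = np.zeros((len(states), len(states)))
    for (i, j), k in idx.items():
        if i < M:                              # arrival at station 1
            P[k, idx[(i + 1, j)]] += lam / Lam
        if i > 0 and j < N:                    # station-1 completion (blocked if j = N)
            P[k, idx[(i - 1, j + 1)]] += mu1 / Lam
        if j > 0:                              # station-2 completion
            P[k, idx[(i, j - 1)]] += mu2 / Lam
        P[k, k] = 1.0 - P[k].sum()             # uniformization self-loop
    return P, states
```

By construction every row sums to one, in line with the remark that both transition matrices are stochastic.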
3.2. Strong Stability Bounds
For our bounds, we require bounds on the basic input entities. In order to establish those bounds, we have to specify the test function. Specifically, for and , we will choose as our norm-defining mapping.
For ease of reference, we introduce the following condition: This condition corresponds to the traffic intensity condition of the infinite system.
Essential for our numerical bound on the deviation between the stationary distributions is a bound on the deviation of the truncated transition kernel from the original one. This bound is provided in the following lemma.
Let denote a deficient Markov kernel (residual matrix) for the transition matrix that avoids jumps to state ; more specifically, for , let
Proof. We have
For : if :
From (3.21) and (3.22) we have For : if : If : If :
From (3.24), (3.25), and (3.26) we have For : if : If : If :
From (3.28), (3.29), and (3.30) we have
In order to obtain , we must impose that .
For : From (3.23), (3.27), (3.31), and (3.32) we have
For all such that: and , we will obtain , then, under the same condition, we finally obtain and it follows that the -norm of is equal to , which proves the claim.
In the following lemma we will identify the range for and that leads to finite -norm of .
For that, we choose the measurable function and the probability measure
Lemma 3.3. Provided that holds, the -norm of is bounded by where was defined in (3.18).
Proof. We have By definition, Hence,
Theorem 3.4. Provided that holds, then, for all such that and , the discrete time Markov chain describing the tandem queue with blocking and finite buffers model is -strongly stable for the test function .
By Theorem 3.4, the general bound provided in Theorem 2.1 can be applied to the kernels and for our tandem queue with blocking. Specifically, we insert the individual bounds provided in Lemmas 3.1, 3.2, and 3.3, which yields the following result.
Theorem 3.5. Let and be the steady-state joint queue size distributions of the discrete time Markov chains of the tandem queue with blocking and finite buffers and of the tandem queue with blocking and infinite buffers, respectively. Provided that holds, then, for all and , and under the condition we have the following estimate: where and , , and were already defined by formulas (3.5), (3.18), and (3.37), respectively.
Corollary 3.6. Under the conditions put forward in Theorem 3.5, it holds for any such that that
Note that the bound (3.49) in Corollary 3.6 has the norm parameters as free parameters. This gives the opportunity to minimize the right-hand side of inequality (3.48) in Theorem 3.5 with respect to them. For a given truncation level, this leads to the following optimization problem:
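Since the right-hand side of the estimate is an explicit function of its free parameters, the optimization can be carried out by a simple grid search. The sketch below is ours; `bound` is a hypothetical stand-in for the closed-form expression of Theorem 3.5, here modelled by a toy surrogate that blows up near the lower end of the parameter range and grows geometrically at the upper end.

```python
import numpy as np

def minimize_bound(bound, betas):
    """Grid search for the parameter value minimizing a closed-form bound."""
    values = [bound(b) for b in betas]
    k = int(np.argmin(values))
    return betas[k], values[k]

# Toy surrogate: a 1/(beta - 1) blow-up near 1 (the bound degenerates as
# the weight flattens) against a geometrically growing deviation term.
best_beta, best_val = minimize_bound(
    lambda b: 1.0 / (b - 1.0) + b ** 10 * 1e-3,
    np.linspace(1.01, 3.0, 200),
)
print(best_beta, best_val)
```

In practice one would substitute the actual bound of Theorem 3.5, evaluated from the quantities of Lemmas 3.1, 3.2, and 3.3, for the surrogate.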
4. Numerical Example
In this section we apply the bound put forward in Theorem 3.5. For this, we implement an algorithm (whose principal idea is the same as in ) on concrete cases. Indeed, we use a computer program to determine the error made on the stationary distribution due to the approximation, when the approximation is possible, as well as the norm in which the error is obtained. It is important, in this work, to give an idea of the performance of this approach. For this, we computed the real value of the error by enlarging the state space and deriving the stationary distribution. To that end, we elaborated a program in the Matlab environment according to the following steps. (1) Compute the stationary distribution $\pi$ of the tandem queue with blocking and infinite buffers (the original model) using the definition $\pi P = \pi$, where $\pi$ is a probability measure. (2) Compute the stationary distribution $\nu$ of the tandem queue with blocking and finite buffers (the truncated model) using the definition $\nu \bar{P} = \nu$, where $\nu$ is a probability measure. (3) Calculate $\|\pi - \nu\|_v$.
In order to compare the two errors (the real one and the one obtained by the strong stability approach), we calculated the real values of the error in the same norm in which we calculated the approximation.
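These steps can be sketched for any pair of finite stochastic matrices. The sketch below is ours (the paper's program is in Matlab); a simple truncated birth-death chain stands in for the tandem model, and the names `stationary`, `weighted_error`, `p`, `q`, and `beta` are our own.

```python
import numpy as np

def stationary(P):
    """Solve pi P = pi together with pi 1 = 1 for a finite stochastic matrix."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def weighted_error(pi, nu, beta):
    """||pi - nu||_v with v(j) = beta**j; nu is zero-padded to the support of pi."""
    nu = np.pad(nu, (0, len(pi) - len(nu)))
    return np.sum(beta ** np.arange(len(pi)) * np.abs(pi - nu))

def birth_death(p, q, n):
    """Uniformized birth-death kernel truncated at level n (arrivals rejected there)."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            P[i, i + 1] = p
        if i > 0:
            P[i, i - 1] = q
        P[i, i] = 1.0 - P[i].sum()
    return P

# Step 1: stationary law of the 'original' (large) model;
# Step 2: stationary law of the truncated model;
# Step 3: their weighted deviation.
pi = stationary(birth_death(0.3, 0.5, 60))
nu = stationary(birth_death(0.3, 0.5, 10))
print(weighted_error(pi, nu, beta=1.2))
```

Note that the weight parameter must satisfy the analogue of the condition in Lemma 3.2 (here, a geometric tail decaying faster than the weight grows) for the weighted error of the original model to be finite.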
For the first numerical example we set , , , and , and for the second one we set , , , and . As a first step in applying our bound, in both examples we compute the values of that minimize . Then we can compute the bound put forward in Theorem 3.5 for various values of . The numerical results are presented in Tables 1 and 2.
4.1. Strong Stability Algorithm
Step 1. Define the inputs: (i) the service rate of the first station; (ii) the service rate of the second station; (iii) the arrival mean rate; (iv) the norm-defining function; (v) the buffer size at the second station; (vi) the truncation level; (vii) the step.
Step 4. For each value of determine the constant: ; , go to Step 5.
Step 6. Determine and , where
Step 7. End.
We compared our expected approximation error against the numerical results and observed that the real error on the stationary distribution is significantly smaller than the strong stability bound. Furthermore, both errors decrease as the truncation level increases, and the strong stability bound is remarkably sensitive to variations in the truncation level, mirroring the behaviour of the real error. This means that the numerical error is essentially the error incurred when switching from the tandem queue with blocking and infinite buffers to the one with finite buffers. A graphical comparison is given in Figures 1 and 2.
5. Further Research
An alternative method for computing bounds on perturbations of Markov chains is the series expansion approach to Markov chains (SEMC). The general approach of SEMC was introduced in . SEMC for discrete time finite Markov chains is discussed in , and SEMC for continuous time Markov chains is developed in . The key feature of SEMC is that a bound on the precision of the approximation can be given. Unfortunately, SEMC requires (numerical) computation of the deviation matrix, which limits the approach in essence to Markov chains with finite state space. Perturbation analysis via the strong stability approach overcomes this drawback; however, in contrast to SEMC, no measure of the quality of the approximation can be given.
H. C. Tijms, Stochastic Modelling and Analysis: A Computational Approach, John Wiley & Sons, New York, NY, USA, 1986.
G. Bolch, S. Greiner, H. de Meer, and K. S. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, John Wiley & Sons, New York, NY, USA, 2006.
W. K. Grassmann and D. P. Heyman, “Computation of steady-state probabilities for infinite-state Markov chains with repeating rows,” ORSA Journal on Computing, vol. 5, pp. 292–303, 1993.
D. Freedman, Approximating Countable Markov Chains, Springer, New York, NY, USA, 2nd edition, 1983.
D. Aïssani and N. V. Kartashov, “Ergodicity and stability of Markov chains with respect to operator topology in the space of transition kernels,” Doklady Akademii Nauk Ukrainskoi SSR A, vol. 11, pp. 3–5, 1983.
N. V. Kartashov, Strong Stable Markov Chains, VSP, Utrecht, The Netherlands, 1996.
A. Hordijk and N. van Dijk, “Networks of queues with blocking,” in Performance '81, pp. 51–65, North-Holland, Amsterdam, The Netherlands, 1981.
F. S. Hillier and R. W. Boling, “Finite queues in series with exponential or Erlang service times—a numerical approach,” Operations Research, vol. 15, pp. 286–303, 1967.