Research Article  Open Access
Strong Truncation Approximation in Tandem Queues with Blocking
Abstract
Markov models are frequently used for performance modeling. However, most models do not have closed-form solutions, and numerical solution is often not feasible due to the large or even infinite state space of models of practical interest. State-space truncation is therefore often required to compute such models. In this paper, we use the strong stability approach to establish analytic error bounds for the truncation of a tandem queue with blocking. Numerical examples are carried out to illustrate the quality of the obtained error bounds.
1. Introduction
Queueing networks consisting of several service stations are more suitable for representing the structure of many systems with a large number of resources than models consisting of a single service station. In particular, queueing networks are used for the performance and reliability evaluation of computer, communication, and manufacturing systems [1].
The determination of the steady-state probabilities of all possible states of the network can be regarded as the central problem of queueing theory; the mean values of all other important performance measures of the network can be calculated from them. Several efficient algorithms for the exact solution of queueing networks have been introduced. However, the memory requirements and computation time of these algorithms grow exponentially with the number of job classes in the system. For computationally difficult problems involving networks with a large number of job classes, we resort to approximation methods [2].
Many approximation methods for non-product-form networks are discussed in the literature (see [3] and references therein). In particular, a well-known technique for limiting model size is state-space truncation [4, 5]. Indeed, approximating a countable-state Markov chain by finite-state Markov chains is an interesting and often challenging topic, which has attracted many researchers' attention. Computationally, when we solve for the stationary distribution, when it exists, of a countable-state Markov chain, the transition probability matrix of the chain has to be truncated in some way into a finite matrix as a first step. We then compute the stationary distribution of this finite-state Markov chain as an approximation to that of the countable-state one. We expect that, as the truncation size increases to infinity, the solution for the finite Markov chain converges to that of the countable-state Markov chain. While for many application problems this convergence can be justified by the physical meanings of the finite and countable-state Markov chains, it is not always easy to justify the claim formally.
The study of approximating the stationary probabilities of an infinite Markov chain by finite Markov chains was initiated by Seneta [6] in 1967. Many further results were obtained by him and several collaborators; most of them are included in a paper by Gibson and Seneta [7]. Other references may be found therein and/or in another paper [8] published in the same year by the same authors. Other researchers, including Wolf [9], used approaches different from those of Seneta et al. For instance, Heyman [10] provided a probabilistic treatment of the problem. Later, Grassmann and Heyman [11] justified the convergence for infinite-state Markov chains with repeating rows. All the above results concern the approximation of stationary distributions. Regarding more general issues of approximating a countable-state Markov chain, see the book by Freedman [12].
A different though related line of research is that of perturbed Markov chains. General results on perturbation bounds for Markov chains are summarized by Heidergott and Hordijk [13]. One group of results concerns the sensitivity of the stationary distribution of a finite, homogeneous Markov chain (see Heidergott et al. [14, 15]), where the bounds are derived using methods of matrix analysis; see the review of Cho and Meyer [16] and recent papers of Kirkland [17, 18] and Neumann and Xu [19]. Another group includes perturbation bounds for finite-time and invariant distributions of Markov chains with general state space; see Anisimov [20], Rachev [21], Aïssani and Kartashov [22], Kartashov [23], and Mitrophanov [24]. In these works, the bounds for general Markov chains are expressed in terms of ergodicity coefficients of the iterated transition kernel, which are difficult to compute for infinite state spaces. These results were obtained using operator-theoretic and probabilistic methods. Some of these methods allow us to obtain quantitative estimates in addition to the qualitative assertion of continuity.
In this paper we are interested in computing error bounds for the stationary queue length distributions of queueing networks under finite truncation of some buffers, provided stability holds. It is natural to approximate the stationary distribution of a queueing network by truncating some buffers, and we may expect such a truncation to approximate the original model well as the truncation level (or size) becomes large. We therefore extend the applicability of the strong stability approach [23, 25] to the truncation problem for a tandem queue with blocking. As is well known, this network is a multidimensional, non-product-form queueing network (see, for example, Van Dijk [4]). Our interest is thus in identifying conditions which guarantee that the steady-state joint queue length distribution of this tandem queue system is well approximated by that of its finite buffer truncation. Such conditions allow us to obtain better quantitative estimates of the stationary characteristics of the tandem queue with blocking and infinite buffers.
The paper is organized as follows. Section 2 contains the necessary definitions and notation. In Section 3, we present the queueing network model under consideration and derive new perturbation bounds corresponding to the truncation problem. A numerical example is presented in Section 4. Finally, we point out directions for further research.
2. Strong Stability Approach
The main tool for our analysis is the weighted supremum norm, also called the $v$-norm, denoted by $\|\cdot\|_v$, where $v$ is some vector with elements $v(j) > 0$ for all $j$.
Let us note that $\mathcal{N}$, the Borel $\sigma$-field of the natural numbers $\mathbb{N}$, coincides with the power set of $\mathbb{N}$, since $\mathbb{N}$ is equipped with the discrete topology, and consider the measurable space $(\mathbb{N}, \mathcal{N})$.
Let $\mathcal{M} = \{\mu_j\}$ be the space of finite measures on $\mathcal{N}$ and let $\mathcal{F} = \{f(j)\}$ be the space of bounded measurable functions on $\mathbb{N}$. We associate with each transition operator $P$ the linear mappings:
$$(\mu P)_k = \sum_{j \ge 0} \mu_j P_{jk}, \qquad (P f)(k) = \sum_{j \ge 0} f(j) P_{kj}. \tag{2.1}$$
Introduce in $\mathcal{M}$ the class of norms of the form:
$$\|\mu\|_v = \sum_{j \ge 0} v(j)\,|\mu_j|, \tag{2.2}$$
where $v$ is an arbitrary measurable function (not necessarily finite) bounded from below by a positive constant. This norm induces in the space $\mathcal{F}$ the norm
$$\|f\|_v = \sup_{k \ge 0} \frac{|f(k)|}{v(k)}. \tag{2.3}$$
Let us consider $\mathcal{L}$, the space of bounded linear operators on the space $\{\mu \in \mathcal{M} : \|\mu\|_v < \infty\}$, with norm
$$\|P\|_v = \sup_{k \ge 0} \frac{1}{v(k)} \sum_{j \ge 0} v(j)\,|P_{kj}|. \tag{2.4}$$
Let $\pi$ and $\nu$ be two invariant measures and suppose that these measures have finite $v$-norm. Then
$$|\pi f - \nu f| \le \|\pi - \nu\|_v \, \|f\|_v \quad \text{for all } f \in \mathcal{F}. \tag{2.5}$$
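Numerically, on a finite truncation of the state space, the three norms above reduce to weighted sums and maxima. The following Python sketch illustrates them with an assumed geometric weight $v(j) = \beta^j$ and a made-up 4-state chain (neither is the tandem model of Section 3):

```python
import numpy as np

# Sketch of the v-norm computations on the finite state space {0,...,3}.
# The weight v(j) = beta**j and the kernel P are illustrative choices only.

def norm_measure(mu, v):
    # ||mu||_v = sum_j v(j) |mu_j|   (norm (2.2))
    return float(np.sum(v * np.abs(mu)))

def norm_function(f, v):
    # ||f||_v = sup_k |f(k)| / v(k)   (norm (2.3))
    return float(np.max(np.abs(f) / v))

def norm_kernel(P, v):
    # ||P||_v = sup_k (1/v(k)) sum_j v(j) |P[k, j]|   (norm (2.4))
    return float(np.max((np.abs(P) @ v) / v))

beta = 1.5
v = beta ** np.arange(4)
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
print(norm_kernel(P, v))   # 1.25 for this example (attained in row 0)
```

Note that $\|P\|_v$ may exceed one for a stochastic matrix, since the weights $v(j)$ grow with $j$; this is exactly why the residual kernel of the next definition, rather than $P$ itself, must be contracted in this norm.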
For our analysis, we will assume that $f$ is of a particular form, namely that $|f(j)| \le v(j)$ for all $j$, which implies $\|f\|_v \le 1$. Hence, the bound (2.5) becomes
$$|\pi f - \nu f| \le \|\pi - \nu\|_v. \tag{2.6}$$
We say that the Markov chain with transition kernel $P$ verifying $\|P\|_v < \infty$ and invariant measure $\pi$ is strongly stable if every stochastic transition kernel $Q$ in some neighborhood $\{Q : \|Q - P\|_v \le \epsilon\}$ admits a unique invariant measure $\nu$ such that $\|\nu - \pi\|_v$ tends to zero as $\|Q - P\|_v$ tends to zero, uniformly in this neighborhood. The key criterion for strong stability of a Markov chain is the existence of a deficient version of $P$, defined in the following.
Thereby, the Markov chain with the transition kernel $P$ and invariant measure $\pi$ is strongly stable with respect to the norm $\|\cdot\|_v$ if and only if there exist a measure $\sigma$ and a nonnegative measurable function $h$ on $\mathbb{N}$ such that the following conditions hold: (a) $\pi h > 0$, $\sigma \mathbf{1} = 1$, $\sigma h > 0$, (b) the kernel $T = P - h \circ \sigma$ is nonnegative, (c) the norm of the kernel $T$ is strictly less than one, that is, $\|T\|_v < 1$, (d) $\sigma v < \infty$, where $h \circ \sigma$ denotes the convolution between the measure $\sigma$ and the function $h$, that is, $(h \circ \sigma)(i,j) = h(i)\sigma_j$, and $\mathbf{1}$ is the vector having all the components equal to $1$.
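For a finite chain, conditions (a)-(d) can be checked mechanically. The sketch below uses a choice that is common in this literature and is an assumption here: $h$ is taken as the first column of the kernel and the measure as the Dirac mass at state 0, so the residual is the kernel with its first column zeroed. The 5-state birth-death chain and the weight are illustrative only:

```python
import numpy as np

# Illustrative 5-state birth-death chain: up 0.3, down 0.6,
# remainder of each row placed on the diagonal.
P = np.zeros((5, 5))
for i in range(5):
    if i < 4: P[i, i + 1] = 0.3
    if i > 0: P[i, i - 1] = 0.6
    P[i, i] = 1.0 - P[i].sum()

beta = 1.5
v = beta ** np.arange(5)            # geometric weight v(j) = beta**j

h = P[:, 0].copy()                  # assumed choice: h(i) = P(i, 0)
sigma = np.eye(5)[0]                # Dirac measure at state 0
T = P - np.outer(h, sigma)          # residual kernel: column 0 zeroed

rho = float(np.max((T @ v) / v))    # ||T||_v

assert (T >= 0.0).all()             # condition (b): residual nonnegative
assert sigma @ np.ones(5) == 1.0    # sigma is a probability measure
assert sigma @ h > 0.0              # part of condition (a)
assert np.isfinite(sigma @ v)       # condition (d)
print(rho)                          # about 0.95 here, so (c) holds
```

With this choice, condition (a) also requires $\pi h > 0$, which holds here because the chain is irreducible and $h$ is positive at states 0 and 1.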
It has been shown in [22] that a Markov chain with the transition kernel $P$ is strongly stable with respect to $\|\cdot\|_v$ if and only if a residual $T$ for $P$ with respect to $v$ exists. Although the strong stability approach originates from the stability theory of Markov chains, the techniques developed for it allow us to establish numerical algorithms for bounding $\|\nu - \pi\|_v$. A bound on $\|\nu - \pi\|_v$ is established in the following theorem.
Theorem 2.1 (see [26]). Let $P$ be strongly stable, with residual kernel $T$ and $\rho = \|T\|_v < 1$, and write $\Delta = Q - P$. If
$$\|\Delta\|_v \, \big\|(I - T)^{-1}(I - \Pi)\big\|_v < 1,$$
then the following bound holds:
$$\|\nu - \pi\|_v \le \frac{\|\Delta\|_v \, \|\pi\|_v \, \big\|(I - T)^{-1}(I - \Pi)\big\|_v}{1 - \|\Delta\|_v \, \big\|(I - T)^{-1}(I - \Pi)\big\|_v},$$
where $\Pi = \mathbf{1} \circ \pi$ is the stationary projector of $P$ and $I$ is the identity matrix.
Note that the term $\|(I - T)^{-1}(I - \Pi)\|_v$ in the bound provided in Theorem 2.1 can be bounded by
$$\big\|(I - T)^{-1}(I - \Pi)\big\|_v \le \frac{1 + \|\mathbf{1}\|_v \, \|\pi\|_v}{1 - \rho}.$$
In this case, we can also bound $\|\pi\|_v$ by
$$\|\pi\|_v \le \frac{(\pi h)(\sigma v)}{1 - \rho},$$
where $h$ and $\sigma$ are the function and measure from the strong stability criterion.
3. Analysis of the Model
3.1. Model Description and Assumptions
Consider two stations in series: a tandem queue consisting of an M/M/1 queue with infinite buffer followed by an M/M/1/N queue. There is one server at each station, and customers arrive at station 1 in accordance with a Poisson process with a state-dependent rate $\lambda_{n_1}$ when $n_1$ customers are present at station 1. (1) Customer service times at station $i$ are exponentially distributed with rate $\mu_i$, $i = 1, 2$. (2) The interarrival and service times are independent of one another. The size of the buffer at station 1 is infinite, whereas the buffer size at station 2 is $N$. When the second station is saturated, service at the first station is stopped. The queueing discipline is first-come, first-served.
The steady-state joint queue size distribution of this tandem queue system does not exhibit a closed product form expression [27]. Numerical studies and approximation procedures have therefore been investigated widely (see, for example, Boxma and Konheim [28], Hillier and Boling [29], and Latouche and Neuts [30]). Van Dijk [4] has analysed the same tandem queue system and obtained an explicit error bound for bias terms of reward structures. In this paper, as in [4], we consider the truncation of the size of the buffer at station 1 to obtain another analytic error bound by using the strong stability approach [23]. Therefore, we assume that, for some constant $\lambda$,
$$\lambda_{n_1} \le \lambda \quad \text{for all } n_1, \qquad \lambda + \mu_1 + \mu_2 \le 1.$$
Let $(n_1, n_2)$ denote the numbers of customers at stations 1 and 2, respectively, and consider the discrete-time Markov chain with one-step transition probabilities given by [4]:
$$P\big((n_1, n_2), (n_1', n_2')\big) =
\begin{cases}
\lambda_{n_1}, & (n_1', n_2') = (n_1 + 1, n_2),\\
\mu_1, & (n_1', n_2') = (n_1 - 1, n_2 + 1),\ n_1 > 0,\ n_2 < N,\\
\mu_2, & (n_1', n_2') = (n_1, n_2 - 1),\ n_2 > 0,\\
1 - \lambda_{n_1} - \mu_1 1\{n_1 > 0,\, n_2 < N\} - \mu_2 1\{n_2 > 0\}, & (n_1', n_2') = (n_1, n_2).
\end{cases}$$
In order to apply the strong stability approach, we consider the same truncation considered by Van Dijk [4]. Therefore, for a finite integer $M$, we have the following truncation:
$$\widetilde{P}\big((n_1, n_2), (n_1', n_2')\big) =
\begin{cases}
0, & n_1 = M,\ (n_1', n_2') = (n_1 + 1, n_2),\\
P\big((n_1, n_2), (n_1, n_2)\big) + \lambda_{M}, & n_1 = M,\ (n_1', n_2') = (n_1, n_2),\\
P\big((n_1, n_2), (n_1', n_2')\big), & \text{otherwise}.
\end{cases} \tag{3.3}$$
Equation (3.3) means that the queue size at station 1 is truncated at level $M$ by rejecting arrivals whenever $n_1 = M$. We remark also that the two transition matrices $P$ and $\widetilde{P}$ are stochastic.
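The kernel and its truncation can be assembled directly from this description. The following Python sketch builds the transition matrix on a finite grid of states $(n_1, n_2)$, with arrivals rejected once the first queue reaches the truncation level; a constant arrival rate is assumed for simplicity (the model allows state-dependent rates), and the rates are chosen with $\lambda + \mu_1 + \mu_2 \le 1$ so every row is a probability distribution:

```python
import numpy as np

# Transition matrix of the tandem queue with blocking: state (n1, n2),
# station-2 buffer size N, arrivals rejected once n1 >= M.  Taking
# M = grid reproduces the untruncated kernel on the finite grid.
def tandem_kernel(lam, mu1, mu2, grid, N, M):
    states = [(a, b) for a in range(grid + 1) for b in range(N + 1)]
    ix = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for (a, b), i in ix.items():
        if a < min(M, grid):                 # arrival to station 1
            P[i, ix[(a + 1, b)]] = lam
        if a > 0 and b < N:                  # station-1 service (not blocked)
            P[i, ix[(a - 1, b + 1)]] = mu1
        if b > 0:                            # departure from station 2
            P[i, ix[(a, b - 1)]] = mu2
        P[i, i] = 1.0 - P[i].sum()           # remaining self-loop mass
    return states, P

# Illustrative rates (an assumption, not the paper's example values).
states, P = tandem_kernel(lam=0.2, mu1=0.4, mu2=0.3, grid=5, N=2, M=3)
print(P.sum(axis=1))   # every row sums to 1: the kernel is stochastic
```

The blocking mechanism appears in the second branch: a station-1 service completion is only possible while $n_2 < N$, and the suppressed arrival mass at $n_1 = M$ is folded into the diagonal, exactly as in (3.3).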
3.2. Strong Stability Bounds
For our bounds, we require bounds on the basic input entities, such as $\|\Delta\|_v = \|\widetilde{P} - P\|_v$ and $\rho$. In order to establish those bounds, we have to specify $v$. Specifically, for $\alpha > 1$ and $\beta > 1$, we will choose $v(n_1, n_2) = \alpha^{n_1} \beta^{n_2}$ as our norm-defining mapping.
For ease of reference, we introduce the following condition: This condition corresponds to the traffic intensity condition of the infinite system.
Essential for our numerical bound on the deviation between the stationary distributions of the truncated and original chains is a bound on the deviation of the truncated transition kernel $\widetilde{P}$ from the original kernel $P$. This bound is provided in the following lemma.
Lemma 3.1. If condition is satisfied, then
Proof. By definition, we have
where
For :
For :
If :
If :
If :
From (3.10), (3.11), and (3.12) we have
For :
if , then we have
From (3.8), (3.13), and (3.15) we have
Let $T$ denote a deficient Markov kernel (residual matrix) for the transition matrix $\widetilde{P}$ of the truncated model that avoids jumps to the state $(0, 0)$; more specifically, for all states $(n_1, n_2)$ and $(n_1', n_2')$, let
$$T\big((n_1, n_2), (n_1', n_2')\big) = \widetilde{P}\big((n_1, n_2), (n_1', n_2')\big) \, 1\big\{(n_1', n_2') \neq (0, 0)\big\}.$$
Lemma 3.2. Provided that condition holds, it holds that where
Proof. We have
For : if :
If :
From (3.21) and (3.22) we have
For : if :
If :
If :
From (3.24), (3.25), and (3.26) we have
For : if :
If :
If :
From (3.28), (3.29), and (3.30) we have
In order to obtain , we must impose that .
For :
From (3.23), (3.27), (3.31), and (3.32) we have
For all such that: and , we will obtain , then, under the same condition, we finally obtain
and it follows that the norm of is equal to , which proves the claim.
In the following lemma we will identify the range of $\alpha$ and $\beta$ that leads to a finite norm of the stationary distribution. For that, we choose the measurable function $h(n_1, n_2) = \widetilde{P}\big((n_1, n_2), (0, 0)\big)$ and the probability measure $\sigma = \delta_{(0,0)}$, the Dirac measure concentrated at $(0, 0)$.
Lemma 3.3. Provided that holds, the norm of is bounded by where was defined in (3.18).
Proof. We have By definition, Hence,
Let
Theorem 3.4. Provided that the traffic intensity condition holds, then for all $\alpha$ and $\beta$ in the ranges identified above, the discrete-time Markov chain describing the tandem queue with blocking and finite buffers is strongly stable for the test function $v$.
Proof. We have , , and .
Hence, the kernel is nonnegative.
We verify that . We have
or, according to (2.5),
According to (2.4) and (2.1), we have
where
Then, .
By Theorem 3.4, the general bound provided by Theorem 2.1 can be applied to the kernels $P$ and $\widetilde{P}$ for our tandem queue with blocking. Specifically, we will insert the individual bounds provided in Lemmas 3.1, 3.2, and 3.3, which yields the following result.
Theorem 3.5. Let $\pi$ and $\widetilde{\pi}$ be the steady-state joint queue size distributions of the discrete-time Markov chains of the tandem queue with blocking and finite buffers and of the tandem queue with blocking and infinite buffers, respectively. Provided that the traffic intensity condition holds, then for all $\alpha$ and $\beta$, and under the additional condition below, we have the following estimate, where the constants involved were already defined in formulas (3.5), (3.18), and (3.37), respectively.
Proof. Note that the conditions of Theorem 3.5 already imply those of Lemmas 3.2 and 3.3. Hence, Lemmas 3.2 and 3.3 apply.
Following the line of thought put forward in Section 2 (see (2.6)), we will translate the norm bound in Theorem 3.5 into bounds for individual performance measures.
Corollary 3.6. Under the conditions put forward in Theorem 3.5, it holds for any $f$ with $\|f\|_v \le 1$ that $|\pi f - \widetilde{\pi} f|$ is bounded by the right-hand side of the estimate in Theorem 3.5.
Note that the bound (3.49) in Corollary 3.6 has $\alpha$ and $\beta$ as free parameters. This gives the opportunity to minimize the right-hand side of the inequality (3.48) in Theorem 3.5 with respect to these parameters. For a given truncation level, this leads to the following optimization problem:
4. Numerical Example
In this section we apply the bound put forward in Theorem 3.5. For this, we implement an algorithm (whose principal idea is the same as in [25]) on concrete cases. Specifically, we use a computer program to determine the error made on the stationary distribution due to the approximation, when the approximation is possible, as well as the norm in which the error is measured. It is important, in this work, to give an idea of the performance of this approach. For this, we computed the real value of the error by enlarging the state space and deriving the stationary distribution. To that end, we developed a program in the Matlab environment according to the following steps. (1) Compute the stationary distribution $\pi$ of the tandem queue with blocking and infinite buffers (the original model) from its invariance equations ($\pi P = \pi$, $\pi \mathbf{1} = 1$), where $\pi$ is a probability measure. (2) Compute the stationary distribution $\widetilde{\pi}$ of the tandem queue with blocking and finite buffers (the truncated model) from its invariance equations ($\widetilde{\pi} \widetilde{P} = \widetilde{\pi}$, $\widetilde{\pi} \mathbf{1} = 1$), where $\widetilde{\pi}$ is a probability measure. (3) Calculate $\|\pi - \widetilde{\pi}\|_v$.
In order to compare the two errors (the real one and the one obtained by the strong stability approach), we calculated the real values of the error in the same norm used for the approximation.
For the first numerical example we set one group of values for the parameters $\lambda$, $\mu_1$, $\mu_2$, and $N$, and for the second example another group. As a first step in applying our bound, in both examples we compute the values of the parameters that minimize the bound. Then we compute the bound put forward in Theorem 3.5 for various values of the truncation level. The numerical results are presented in Tables 1 and 2.


4.1. Strong Stability Algorithm
Step 1. Define the inputs: (i) the service rate of the first station, $\mu_1$; (ii) the service rate of the second station, $\mu_2$; (iii) the mean arrival rate, $\lambda$; (iv) the function $v$; (v) the buffer size of the second station, $N$; (vi) the truncation level, $M$; (vii) the step size.
Step 2. Verify the traffic intensity condition: if it holds, go to Step 3; else (*the system is unstable*) go to Step 6.
Step 3. Determine , and put ; if , go to Step 4; else, go to Step 6.
Step 4. For each value of determine the constant: ; , go to Step 5.
Step 5. Calculate and , if go to Step 5;
else and go to Step 4.
Step 6. Determine and , where
Step 7. End.
We compared our expected approximation error (the strong stability bound) against the numerical results (the real error) and observed that the real error on the stationary distribution is significantly smaller than the one given by the strong stability approach. Furthermore, the error decreases as the truncation level increases. We also note the remarkable sensitivity of the strong stability bound to variations of the truncation level, relative to the real error. The numerical error thus indicates the actual error incurred when switching from the tandem queue with blocking and infinite buffers to the one with finite buffers. A graphical comparison is given in Figures 1 and 2.
5. Further Research
An alternative method for computing bounds on perturbations of Markov chains is the series expansion approach to Markov chains (SEMC). The general approach of SEMC was introduced in [13]. SEMC for discrete-time finite Markov chains is discussed in [15], and SEMC for continuous-time Markov chains is developed in [14]. A key feature of SEMC is that a bound on the precision of the approximation can be given. Unfortunately, SEMC requires (numerical) computation of the deviation matrix, which limits the approach in essence to Markov chains with finite state space. Perturbation analysis via the strong stability approach overcomes this drawback; however, in contrast to SEMC, no such measure of the quality of the approximation can be given.
References
[1] H. C. Tijms, Stochastic Modelling and Analysis: A Computational Approach, John Wiley & Sons, New York, NY, USA, 1986.
[2] G. Bolch, S. Greiner, H. de Meer, and K. S. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, John Wiley & Sons, New York, NY, USA, 2006.
[3] T. Phung-Duc, “An explicit solution for a tandem queue with retrials and losses,” Operational Research, vol. 12, no. 2, pp. 189–207, 2012.
[4] N. M. van Dijk, “Truncation of Markov chains with applications to queueing,” Operations Research, vol. 39, no. 6, pp. 1018–1026, 1991.
[5] N. M. van Dijk, “Error bounds for state space truncation of finite Jackson networks,” European Journal of Operational Research, vol. 186, no. 1, pp. 164–181, 2008.
[6] E. Seneta, “Finite approximations to infinite non-negative matrices,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 63, pp. 983–992, 1967.
[7] D. Gibson and E. Seneta, “Augmented truncations of infinite stochastic matrices,” Journal of Applied Probability, vol. 24, no. 3, pp. 600–608, 1987.
[8] D. Gibson and E. Seneta, “Monotone infinite stochastic matrices and their augmented truncations,” Stochastic Processes and their Applications, vol. 24, no. 2, pp. 287–292, 1987.
[9] D. Wolf, “Approximation of the invariant probability measure of an infinite stochastic matrix,” Advances in Applied Probability, vol. 12, no. 3, pp. 710–726, 1980.
[10] D. P. Heyman, “Approximating the stationary distribution of an infinite stochastic matrix,” Journal of Applied Probability, vol. 28, no. 1, pp. 96–103, 1991.
[11] W. K. Grassmann and D. P. Heyman, “Computation of steady-state probabilities for infinite-state Markov chains with repeating rows,” ORSA Journal on Computing, vol. 5, pp. 292–303, 1993.
[12] D. Freedman, Approximating Countable Markov Chains, Springer, New York, NY, USA, 2nd edition, 1983.
[13] B. Heidergott and A. Hordijk, “Taylor series expansions for stationary Markov chains,” Advances in Applied Probability, vol. 35, no. 4, pp. 1046–1070, 2003.
[14] B. Heidergott, A. Hordijk, and N. Leder, “Series expansions for continuous-time Markov processes,” Operations Research, vol. 58, no. 3, pp. 756–767, 2010.
[15] B. Heidergott, A. Hordijk, and M. van Uitert, “Series expansions for finite-state Markov chains,” Probability in the Engineering and Informational Sciences, vol. 21, no. 3, pp. 381–400, 2007.
[16] G. E. Cho and C. D. Meyer, “Comparison of perturbation bounds for the stationary distribution of a Markov chain,” Linear Algebra and its Applications, vol. 335, pp. 137–150, 2001.
[17] S. Kirkland, “On a question concerning condition numbers for Markov chains,” SIAM Journal on Matrix Analysis and Applications, vol. 23, no. 4, pp. 1109–1119, 2002.
[18] S. Kirkland, “Digraph-based conditioning for Markov chains,” Linear Algebra and its Applications, vol. 385, pp. 81–93, 2004.
[19] M. Neumann and J. Xu, “Improved bounds for a condition number for Markov chains,” Linear Algebra and its Applications, vol. 386, pp. 225–241, 2004.
[20] V. V. Anisimov, “Estimates for deviations of transient characteristics of inhomogeneous Markov processes,” Ukrainian Mathematical Journal, vol. 40, no. 6, pp. 588–592, 1988.
[21] S. T. Rachev, “The problem of stability in queueing theory,” Queueing Systems, vol. 4, no. 4, pp. 287–317, 1989.
[22] D. Aïssani and N. V. Kartashov, “Ergodicity and stability of Markov chains with respect to operator topology in the space of transition kernels,” Doklady Akademii Nauk Ukrainskoi SSR, Series A, vol. 11, pp. 3–5, 1983.
[23] N. V. Kartashov, Strong Stable Markov Chains, VSP, Utrecht, The Netherlands, 1996.
[24] A. Yu. Mitrophanov, “Sensitivity and convergence of uniformly ergodic Markov chains,” Journal of Applied Probability, vol. 42, no. 4, pp. 1003–1014, 2005.
[25] K. Abbas and D. Aïssani, “Structural perturbation analysis of a single server queue with breakdowns,” Stochastic Models, vol. 26, no. 1, pp. 78–97, 2010.
[26] N. V. Kartashov, “Strongly stable Markov chains,” Journal of Soviet Mathematics, vol. 34, no. 2, pp. 1493–1498, 1986.
[27] A. Hordijk and N. van Dijk, “Networks of queues with blocking,” in Performance '81, pp. 51–65, North-Holland, Amsterdam, The Netherlands, 1981.
[28] O. J. Boxma and A. G. Konheim, “Approximate analysis of exponential queueing systems with blocking,” Acta Informatica, vol. 15, no. 1, pp. 19–66, 1981.
[29] F. S. Hillier and R. W. Boling, “Finite queues in series with exponential or Erlang service times—a numerical approach,” Operations Research, vol. 15, pp. 286–303, 1967.
[30] G. Latouche and M. F. Neuts, “Efficient algorithmic solutions to exponential tandem queues with blocking,” SIAM Journal on Algebraic and Discrete Methods, vol. 1, no. 1, pp. 93–106, 1980.
Copyright
Copyright © 2012 Karima AdelAissanou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.