Abstract

This paper deals with the computation of invariant measures and stationary expectations for discrete-time Markov chains governed by a block-structured one-step transition probability matrix. The method generalizes Neuts’ matrix-geometric approach to vector-state Markov chains in several respects. It also reveals a strong relationship between Markov chains and matrix continued fractions, which provides valuable tools for mastering the growing complexity of real-world applications such as large-scale grid systems and multidimensional level-dependent Markov models. The results obtained are extended to continuous-time Markov chains.

1. Introduction

This paper is concerned with the computation of invariant measures of discrete-time Markov chains having a block-partitioned one-step transition matrix of the form where all elements are nonnegative matrices of suitable dimension, that is, for . The states of the Markov chain are denoted by , and , and are ordered in lexicographic order, that is, , …, , , …, , , …. Following Neuts [1], the set of states will be called the level. We refer to the first component of (the level) as , and by , we denote the second component of . Throughout this paper, it is assumed that and that the Markov chain is irreducible and recurrent. Note that irreducibility implies for all . From the general theory of Markov chains, we then know that the system has a unique solution (up to a constant factor) which is strictly positive (see Karlin and Taylor ([2], chapter 11)). For the solution vector , we use the notation and Every constant positive multiple of is called an invariant measure of . The class of irreducible recurrent chains can be further subdivided into positive and null-recurrent chains according to whether or , with being a vector with all its components equal to one.

Vector-state Markov chains of the form (1) generalize the theory of matrix-geometric processes which was pioneered by M. Neuts [1]. As he points out in his monograph, the matrix-geometric approach is applicable to a wide class of stochastic models. The characteristic feature of these models is that the blocks within the transition probability matrix (or the generator matrix in the case of continuous-time models) depend only on their relative position to the main diagonal, that is, only depends on . Otherwise, Neuts’ method fails.

A general strategy for solving systems of the form (1) is the following: (i) truncate the system at level , that is, consider the northwest corner truncation of with blocks; (ii) derive a stochastic matrix by appropriately augmenting ; (iii) solve .

Then might converge to the desired solution under some appropriate conditions (a natural condition is some normalization, e.g., ).
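As a minimal illustration of steps (i)–(iii), the following sketch applies the truncate–augment–solve strategy to a scalar random walk on the nonnegative integers whose invariant distribution is known to be geometric. The parameters p, q and the truncation level N are hypothetical choices, not quantities from the paper.

```python
import numpy as np

# Truncate-augment-solve sketch for a scalar random walk on {0, 1, 2, ...}
# with up-probability p and down-probability q = 1 - p (p < q, so the chain
# is positive recurrent and its invariant distribution is geometric).
p, q = 0.3, 0.7
N = 200  # truncation level (hypothetical)

# (i) northwest corner truncation of P
P = np.zeros((N + 1, N + 1))
P[0, 0] = q
for i in range(N):
    P[i, i + 1] = p
    P[i + 1, i] = q

# (ii) augment the last row so the truncated matrix is stochastic again
P[N, N] += p

# (iii) solve x P = x together with the normalization sum(x) = 1
A = np.vstack([(P - np.eye(N + 1)).T, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# exact invariant distribution of the untruncated chain: geometric, ratio p/q
exact = (1 - p / q) * (p / q) ** np.arange(N + 1)
print(np.max(np.abs(x - exact)))  # tiny: truncation error is negligible here
```

For this reversible birth–death example, the truncated and augmented chain has the truncated geometric as its invariant distribution, so the approximation error decays geometrically in N.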

In case of (1) with and only depending on , level-independent quasi-birth-death processes arise. For and arbitrary , the quasi-birth-death process is level-dependent. Several publications have dealt with the problem of numerically computing the invariant measures for this class of stochastic processes: In [3] and in [4], algorithms based on taboo probabilities were developed. In [5], a method relying on matrix continued fractions was discussed. In principle, these three algorithms differ in how they compensate for the truncation of the infinite state space, which is inevitable in algorithmic procedures, that is, in how they augment . The major part of the algorithmic procedure is similar in all algorithms, and as pointed out in [6], for large truncation levels, it can be used without any augmentation. Since is then no longer stochastic, the last step of the above general method has to be interpreted appropriately: shall solve up to the first scalar equation. This simplified matrix-analytic algorithm becomes even more powerful when only stationary expectations have to be computed since then the memory-efficient variant suggested in [7] can be applied.

Recently, Masuyama published a technical report [8] in which Markov chains (with a focus on continuous-time Markov chains) with a general block structure are discussed. Under somewhat restrictive conditions, it is shown that the above method (truncation, augmentation, finding a solution of the truncated and augmented system) converges to the desired invariant measure if the truncation level tends to . Since Markov chains with general block structure are discussed, transition matrices of the form (1) or its continuous analogue ((20) below) can be interpreted as special cases. However, the focus in [8] is the discussion of augmentation procedures.

We will treat the problem of finding an invariant measure numerically from a different perspective: After the truncation, we perform no augmentation but find in a similar manner as in the method for quasi-birth-death processes described in [7]. In some situations, not using an augmentation might require a higher truncation level than an appropriate augmentation would (although this is not clear). However, we discuss not only a basic algorithm for computing invariant measures but also the direct computation of stationary characteristics, for which we introduce a memory-efficient implementation. Owing to this memory efficiency, our method can be applied with a very high truncation level.

In essence, we combine the ideas of Neuts’ original matrix-geometric approach with considerations for level-dependent quasi-birth-death processes (although the resulting transition structures are not as general as in [8]) and with memory-efficient algorithms as developed in [7] for quasi-birth-death processes.

The rest of the paper is organized as follows: We begin with a derivation of our results and methods based on taboo probabilities in Section 2 and extend the results to continuous-time Markov chains in Section 3. The resulting algorithm for computing invariant measures is presented in Section 4, where we also derive a memory-efficient implementation for computing stationary expectations. Section 5 is dedicated to an alternative approach based on generalized continued fractions or matrix continued fractions, respectively. Although these considerations do not result in new computational methods, they give us better insight into the algebraic and numerical aspects of the approach. Finally, in Section 6, we demonstrate the applicability of our methods by briefly discussing a queueing model with batch service and server breakdowns.

2. A Derivation Based on Taboo Probabilities

According to [2, 9, 10], an invariant measure of an irreducible recurrent Markov chain with state space and transition probability matrix is given by and for , where is some fixed state and denotes the probability of event . The summands on the right-hand side are taboo probabilities. Irreducibility of guarantees that the series converges for all . can be interpreted as the expected number of visits to state before the Markov chain returns to its initial state . Note that is the unique invariant measure of with .
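For a finite chain, this taboo-probability representation can be evaluated in closed form: the expected number of visits to a state j before returning to the reference state is obtained from the matrix with the reference row and column removed. The following sketch uses a hypothetical 3-state transition matrix P, not a matrix from the paper.

```python
import numpy as np

# Invariant measure via taboo probabilities: fix reference state 0, set
# x_0 = 1, and let x_j be the expected number of visits to j before the
# chain returns to 0. For a finite chain these expectations are
#   v = P[0, rest] (I - P[rest, rest])^{-1}
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])   # hypothetical irreducible chain

Q = P[1:, 1:]                     # taboo state 0: restrict to states 1, 2
a = P[0, 1:]                      # first step away from state 0
v = a @ np.linalg.inv(np.eye(2) - Q)
x = np.concatenate(([1.0], v))    # invariant measure with x_0 = 1

print(np.allclose(x @ P, x))      # → True: x is invariant
```

Because the finite irreducible chain is recurrent, the return probability to state 0 is one, so x satisfies the full invariance equation, including the equation for the reference state.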

In the present case, we choose and obtain for . By , we denote the first hitting time of of some level .

Following [9, 10], we have where solves

denotes the northwest corner truncation of at level , is an identity matrix of appropriate size, and is the vector with unity in the first position and zeroes elsewhere. The solution is unique, and with , we have and for with . Obviously, increases monotonically in , and by definition, we have .

Furthermore, we set to be the first positive hitting time of some level (note that due to irreducibility and recurrence, almost surely), and we define the matrices

Again, the entries of can be interpreted as expectations. In this case, conditioned on the initial state , the entry at position is the expected number of visits in before returning to some level . Similarly, the entries of are the expected numbers of visits in before returning to some level or hitting some level . Obviously, for . Again, componentwise convergence of is a direct consequence of the monotone convergence theorem. We are now ready to find the representation of the invariant measures which will be exploited numerically later on.

Theorem 1. With the structure (1) ofand the notations introduced above, we haveIn (12) and (13), the inverse matrices exist. In (11), the solution of the system of linear equations is unique up to constant multiples, and additionally,holds up to the first scalar equation.

Proof. The interpretations of are very similar to the level-independent case (see [1]) and to quasi-birth-death processes (see [4, 5]). Therefore, we do not state all proofs in detail. (i)For , we have This proves (9) for . For , we have an additional summand for , but on the other hand, for , the sum regarding starts with . It turns out that (9) still holds. (10) is proven analogously.(ii)Using the structure (1) of and (9) repetitively, it becomes clear that satisfies equation (11) which can be rewritten as , where By a calculation similar to that for proving (9), we see that the entry of at position is given by the probability that the Markov chain returns to level before hitting a level and that this return is to . In particular, the entries of represent the probabilities that the Markov chain eventually returns to level and that this return occurs in . Irreducibility and recurrence of imply stochasticity and irreducibility (and due to being finite, recurrence) of . Hence, is an invariant measure for which is uniquely defined up to a constant factor.(iii)Alternatively, (11) can be shown by arguments similar to those leading to (9). These arguments also confirm that satisfies the corresponding equation up to the first scalar equation(iv)Additionally, define as the matrix of the expectations of the number of visits in before hitting some level . Then considerations similar to those yielding (9) lead to , and hence, for and for .The fact that converges monotonically increasing to entails that the are minimal nonnegative solutions to the equations which corresponds to similar results in [1]. However, we will not make use of this fact in Algorithms 1 and 2 which we suggest for computing invariant measures and stationary expectations.

3. Continuous-Time Markov Chains

For continuous-time Markov chains with block-structured generator matrix invariant measures satisfy . For the construction of a continuous-time Markov chain from its infinitesimal generator as well as for precise definitions of properties such as regularity, recurrence, and ergodicity, the reader is referred to [11, 12]. Assuming regularity, irreducibility, and recurrence, the solution to is unique up to constant multiples and strictly positive. Furthermore, these assumptions allow constructing the embedded jump chain which is a discrete-time Markov chain with transition probability matrix with structure (1) and where is the diagonal matrix with entries at position . Irreducibility of implies for all , and hence, is indeed invertible for all . Furthermore, irreducibility and recurrence of imply irreducibility and recurrence of and vice versa (note that this is not true for ergodicity). Finally, if is invariant for , an invariant measure for can be constructed by setting .
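The construction of the embedded jump chain and the transfer of invariant measures between the continuous- and discrete-time settings can be sketched for a small generator. The matrix Q below is a hypothetical 3-state generator chosen only for illustration.

```python
import numpy as np

# Embedded jump chain of a CTMC: with D = diag(-q_ii),
#   P = I + D^{-1} Q,  i.e.  P[i, j] = q_ij / (-q_ii) for i != j, P[i, i] = 0.
# If x is invariant for P, then pi = x D^{-1} (up to a factor) solves pi Q = 0.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])   # hypothetical generator

D = np.diag(-np.diag(Q))
P = np.eye(3) + np.linalg.inv(D) @ Q       # jump-chain transition matrix

# invariant measure x of P: left eigenvector for eigenvalue 1
w, V = np.linalg.eig(P.T)
x = np.real(V[:, np.argmin(np.abs(w - 1))])
x = x / x.sum()                            # normalize (fixes the sign, too)

pi = x @ np.linalg.inv(D)                  # transfer back to the CTMC
print(np.allclose(pi @ Q, np.zeros(3), atol=1e-10))  # → True
```

The identity x (P - I) = (x D^{-1}) Q makes the equivalence of the two invariance equations explicit.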

Renaming the matrices from Section 2 to , we have for the invariant measure of the embedded jump chain. Hence, we obtain with . From (11), we obtain

Taking into account that satisfies (12), we arrive at

Since the th diagonal entry of can be interpreted as the mean sojourn time in , the entries of are the expected time intervals spent in state before returning to some level , conditioned on starting in . Therefore, the entries of represent the proportion of the expected times spent in before returning to some level , and the time spent in the initial state .

Summarizing, we get

Theorem 2. With the above assumptions on, we havewhere the solution of the system of linear equations in (26) is unique up to constant multiples. The ’s satisfy and can be uniquely constructed by for , where for and satisfies up to the first scalar equation.

4. Algorithms

Next, we demonstrate how to use the results numerically. We focus on continuous-time Markov chains since the methods for discrete-time Markov chains follow an analogous scheme.

Basically, we have to use (28) for computing approximations to , then we determine an approximation for by solving (26) up to the first scalar equation, and finally, we use (24) for computing approximations to . In order to save operations, we suggest the following algorithm for computing an approximation to the invariant distribution.

Choose large
Initialize for
for do
   Compute
   Free memory space for
   Update for
   Initialize if
end for
Set and determine all other entries of by solving up to the first scalar equation.
For , compute
return y
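The recursions in Algorithm 1 involve the paper's level-dependent matrices, whose defining formulas are model-specific. As a hedged illustration of the same principle, the following sketch treats only the level-independent quasi-birth-death special case via Neuts' classical fixed-point iteration for the rate matrix R; the blocks A0, A1, A2 are hypothetical.

```python
import numpy as np

# Level-independent QBD sketch (Neuts): blocks A0 (one level up), A1 (same
# level), A2 (one level down), with A0 + A1 + A2 stochastic. R is the
# minimal nonnegative solution of R = A0 + R A1 + R^2 A2, and the invariant
# measure is matrix-geometric: x_{n+1} = x_n R.
A0 = np.array([[0.2, 0.1], [0.0, 0.3]])   # hypothetical blocks;
A1 = np.array([[0.2, 0.1], [0.2, 0.1]])   # each row of A0 + A1 + A2
A2 = np.array([[0.3, 0.1], [0.2, 0.2]])   # sums to 1 (stochastic)

R = np.zeros_like(A0)
for _ in range(2000):                     # fixed-point iteration from R = 0
    R = A0 + R @ A1 + R @ R @ A2          # converges monotonically upward

residual = np.max(np.abs(R - (A0 + R @ A1 + R @ R @ A2)))
print(residual)                           # close to machine precision
```

Starting from the zero matrix, the iterates increase monotonically to the minimal nonnegative solution, mirroring the monotone convergence used in the proof of Theorem 1.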

In case of positive recurrence, an invariant distribution can be obtained afterwards by renormalizing . We give some remarks: (i) Algorithm 3 in [8] is quite similar. While our algorithm relies on the nonstochastic matrix or the nonconservative matrix , respectively, in [8], an augmentation is performed first in order to obtain a stochastic or conservative matrix. Note that Theorem 1 guarantees that for , we have convergence of the approximations or to the desired invariant measure without any augmentation. The only benefit of augmentations is that the truncation error might be decreased a bit for small values of the truncation level. Instead of considering an augmentation, we suggest choosing very large. If such a choice results in computational problems, we suggest using the memory-efficient algorithm for computing stationary expectations (see below). Furthermore, note that the results in [8] were derived under different conditions: While is allowed for all (lower-block Hessenberg matrices), some restrictive conditions (Assumptions 2.1 and 2.2 in [8]) are assumed. (ii) A major problem is the choice of the truncation level . There are some approaches concerning a priori estimates of the truncation error (e.g., [13]), but the problem is still not completely solved. Instead of using such error estimates, we might alternatively, as in [8], increase the truncation level iteratively, that is, start with some small and increase by in each step until for some fixed norm and some fixed small . Note that for each new value of , we have to restart Algorithm 1. Furthermore, note that a small difference does not imply that is small, where is the true invariant measure. (iii) In many applications, the state space is finite with a transition structure corresponding to the northwest corner of (1) or (20), respectively. 
Then Algorithm 1 is exact up to numerical deviations. (iv) Instead of computing the inverse matrix first and then performing the matrix multiplication for determining , it is recommended to solve the system directly. (v) The matrices can be generated when they are needed for computation. Hence, there is no need to store these matrices. (vi) We have to store about matrices at the same time. If the dimensions of the matrices are constant, say , the memory requirement is approximately real numbers. (vii) Concerning the computational effort, we have to solve approximately linear systems of equations of the form , where the coefficients and and the solution are matrices of dimension (for constant ). In addition, we have to perform matrix multiplications. The computational effort is approximately with some constants and (the exponent and the constants depend on the method of choice; we have for “conventional matrix multiplication” and for Strassen’s algorithm, see [14]). (viii) For , Algorithm 1 more or less coincides with the methods proposed in [3–5]. One might expect that also in general, a discrete-time Markov chain with transition probability matrix (1) and bandwidth or a continuous-time Markov chain with generator matrix (20) and bandwidth , respectively, may be reduced to the block-tridiagonal case by appropriately rearranging the states, i.e., by defining a new level as the union of the original levels and a new level as the union of the original levels , …. But the increased block size heavily impacts the performance of the algorithm. If we apply the same truncation procedure, the truncation level for the modified matrix structure is , and for , all matrices involved have dimension . This results in a computational effort of approximately . Since , the computational effort for the transformed system becomes significantly larger than the effort for the original system. Similarly, the memory requirement of is replaced by . 
For these reasons, one should discard this supposedly simple “trick” (compare ([15], pp. 73ff)).

In applications, often only stationary expectations have to be considered: Consider the case of positive recurrence, and denote by the invariant distribution. Then the ergodic theorem for discrete-time Markov chains guarantees that for any function for which converges absolutely. Again, by , we refer to a vector of appropriate dimension consisting of s. Analogously, for continuous-time Markov chains, we have if converges absolutely. Now, let be a vector-valued function with for all . Then for continuous-time Markov chains, we have for . Hence, computing allows us to determine the stationary characteristics, which coincide with the long-run costs or rewards measured by .
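For a finite continuous-time chain, such a stationary expectation can be computed directly by solving the invariance equation and forming the inner product with the cost function. The generator Q and the function f below are hypothetical.

```python
import numpy as np

# Stationary expectation for a finite CTMC: solve pi Q = 0 with sum(pi) = 1,
# then the long-run average of f is the inner product pi @ f.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  3.0, -3.0]])   # hypothetical birth-death generator
f = np.array([0.0, 1.0, 2.0])        # e.g., f(state) = number in system

# stack pi Q = 0 with the normalization row 1^T pi = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi @ f)   # long-run average number in system
```

For this small birth–death example, detailed balance gives pi proportional to (1, 2, 4/3), so the expectation equals 14/13, which the code reproduces.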

When computing , we can apply a Horner-type scheme which allows us not to store the matrices and therefore saves a considerable amount of memory. This idea is very similar to that in [7]. With we write where

Obviously, we have and

The matrices are computed by a backward scheme, and (35) is a backward scheme, too. Since we are only interested in computing , there is no need to store the matrices or . We suggest

Choose large
Initialize for and
for do
   Update
   Update
   Free memory space for
   Update for
   Initialize if
end for
Set and determine all other entries of by solving up to the first scalar equation
Update and renormalize such that
return Z
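The Horner-type idea behind Algorithm 2 can be illustrated in the simplest (scalar, level-independent) setting: the sum of level contributions is evaluated backward, so only a running value has to be kept in memory, never the individual level measures. The rate r, the truncation level N, and the cost function f below are hypothetical stand-ins, not the paper's matrices.

```python
import numpy as np

# Horner-type evaluation of a stationary expectation: for a scalar
# matrix-geometric measure x_n = x0 * r^n, the sum
#   sum_n x_n f(n) = x0 * (f(0) + r * (f(1) + r * (f(2) + ...)))
# is evaluated backward; only the running value S is stored.
r = 0.6                              # hypothetical scalar "rate matrix"
x0 = 1.0 - r                         # normalized geometric measure
N = 500                              # truncation level
f = np.arange(N + 1, dtype=float)    # f(n) = n, e.g., queue length

S = f[N]
for n in range(N - 1, -1, -1):       # backward (Horner) scheme
    S = f[n] + r * S
result = x0 * S

print(result)   # mean of the geometric(1 - r) distribution on {0, 1, ...}
```

With r = 0.6 the exact mean is r/(1 - r) = 1.5; the truncation error at N = 500 is far below machine precision, so the backward scheme reproduces it exactly while storing a single scalar.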

Again, we give some comments: (i) The problem of finding an appropriate truncation level to keep the error in prescribed limits is still to be solved. In [16], a generalization of the idea from [13] is discussed, and and are compared. Unfortunately, the method is quite cumbersome to apply, especially because it would be more desirable to compare with . Again, an alternative consists in iteratively increasing until becomes small. Note that again, this method is somewhat problematic since the value of this difference tells us nothing about the true error . (ii) For finite state spaces with the transition structure under consideration, Algorithm 2 is exact up to numerical deviations. (iii) Of course, it is still recommended to solve the system directly, and still, the matrices should be generated when they are needed. (iv) In total, we have to store the matrices and . Since we need matrices at the same time, the total memory requirement is if the matrix dimensions are constant. In particular, the memory requirement does not depend directly on the truncation level . (v) As pointed out above, in ([8], section 3.2), lower-block Hessenberg matrices (block -type matrices) are considered, that is, for , but is allowed for all . All our theoretical considerations carry over (we only have to replace by or by at some points). Unfortunately, when applying Algorithm 2 to such block matrices, we still have to store matrices, and hence, it is no longer memory-efficient. Since the memory-efficient implementation makes the theoretical considerations applicable to a wide range of stochastic models, we have restricted ourselves to considering band-block-structured matrices. (vi) The considerations concerning the computational effort are similar to those for Algorithm 1. In particular, we do not recommend reducing the structure to the case . (vii) Note that often, “natural” state spaces are multidimensional. 
However, if there is a partition into finite levels such that transitions with positive probability/transition rate increase the level at most by and decrease it at most by , an appropriate renumbering of the states results in a structure given by (1) or (20). We refer to the end of Section 6 for an example and to [7, 13, 17] for the corresponding discussion concerning quasi-birth-death processes (that is, ).

5. Relationship to Matrix Continued Fractions

This section is devoted to the interdependence between Markov chains and matrix continued fractions. The results obtained are important not only from a purely mathematical but also from a numerical point of view. Matrix continued fractions are generalizations of ordinary continued fractions and generalized continued fractions (or -fractions, see [18, 19]). We will show that with each transition matrix of the form (1), one can associate a matrix continued fraction (or a matrix-driven Jacobi-Perron algorithm) which converges if is irreducible and recurrent. In addition, it is proven that its approximants and its limits coincide with the coefficients of the recurrence relations (10) and (9), respectively.

A milestone in pure mathematics was the arithmetic characterization of quadratic irrationalities discovered by J. L. Lagrange (see [20]). Lagrange’s theorem states that the expansion of a real number as a simple continued fraction is periodic if and only if it is the real root of a quadratic equation in one variable with rational coefficients. Since then, mathematicians like C. G. J. Jacobi [21], O. Perron [22], P. Bachmann [23], L. Bernstein [24], and F. Schweiger [25] have tried to extend Lagrange’s approach to higher-degree irrationalities. C. G. J. Jacobi and O. Perron developed a tool for approximating real algebraic irrationalities of degree . In honor of Jacobi and Perron, this algorithm has been named the Jacobi-Perron algorithm. Although the Jacobi-Perron algorithm does not lead to the desired results, it has been successfully applied to many other unsolved problems in approximation theory and numerical analysis. In particular, in connection with modern computer technology, the Jacobi-Perron algorithm turns out to be an efficient tool for identifying and calculating subdominant solutions of higher-order linear vector recursions.

In the sequel, we will show that the schemes (10) and (13) are equivalent to a specific matrix continued fraction or a matrix-driven Jacobi-Perron algorithm associated with the transition probability matrix (1). For this purpose, we assume that for all and that is invertible for all . Define where denotes the unit matrix. Then (36) and (2) yield the th-order homogeneous linear vector recursion subject to the boundary condition

In this perspective, the problem at hand is to find the “right” initial values, say sufficiently large, such that for all . In order to find a solution to this problem based on generalized continued fractions, we generalize and combine the results from [5, 26].

Remember that where solves the truncated system (6). (36) and (6) lead to the finite difference scheme coupled to the boundary conditions: where is a vector with unity in the first position and zeroes elsewhere. For simplification, we transform the system (40) into a first-order scheme of the form by putting

Now, let be fixed. By repeated substitutions, we obtain from (43) with

For the remainder, it will be convenient to partition as where the matrices are of dimension . In view of the boundary condition (42), we infer from (45) that

For computing the -matrices , the following result reduces the computational effort.

Theorem 3. Let, be defined as in (36) and let, for convenience,Then where

Proof. From (46), it is clear that the matrices can be recursively computed by with initial condition Rewriting (52) in expanded form we get By repeated substitutions, it is readily seen that the matrices satisfy the recurrence relation (50) at least for . To proceed in this manner we assume, for the moment, that the recurrence relation (50) may be continued for , without affecting the initial condition (53) and the scheme (54). Since the matrices do not appear in formula (46), they can be chosen arbitrarily. By putting and we get which explains (49) and (51). Since , (55) becomes compatible with (52) and (53), which proves our result for .

To make (48) congruent with (9) and (10), respectively, we first prove that for almost all under a mild condition on .

Consider the recurrence relation (40) for . Assembling the unknowns into an vector and setting the boundaries , (40) permits a presentation of the form where

Since we assumed for all , is a regular matrix. For its inverse, we choose the partition where is a square matrix of order . From (48), we know that

Solving (57) successively with replaced by , , …, , we get

The matrix is called the algebraic complement of the matrix . A theorem which is due to Jacobi (see ([27], pp. 123-124, formula (4.45))) states that iff . Since we assumed , is weakly diagonally dominant. To guarantee that is regular, it is therefore sufficient to assume that is irreducible (see ([28], p. 531, Theorem 3)).

From (48), we conclude that implying where

From (63) and (41), we obtain the decoupled recursion and the boundary condition which enable us to compute for . In view of the representations (49), (50), and (51), we make use of the notation

(67) can be interpreted as the th convergent of the generalized matrix continued fraction (GCF) or the matrix-driven Jacobi-Perron algorithm associated with the limiting process compare with [22, 29, 30]. If the limits (68) do exist, we obtain as a consequence of (65), (39), and (38).

For the calculation of and , we recommend the following algorithms: (a) Forward algorithm (FA): this is nothing other than a straightforward application of the relations (64), (49), (50), and (51). (b) Backward algorithm (BA): since a matrix-driven Jacobi-Perron algorithm may be interpreted as a special case of a matrix continued fraction, its th convergent , , can be calculated from a linear fractional transformation (see ([29], Theorem 2.2)). Restricted to the matrix-driven Jacobi-Perron algorithm, this algorithm runs as follows. For sufficiently large, define Then

From a numerical point of view, BA requires less computational effort than FA. The only disadvantage of BA is that for each convergent, the whole algorithm has to be repeated, which can be time-consuming if the algorithm converges slowly. Since we are not interested in a particular but in the family of matrices , we consider BA the preferable algorithm in all situations.
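The backward algorithm can be illustrated with an ordinary scalar continued fraction in place of the matrix one: the tail is initialized at the truncation level and the recursion runs downward, and, as noted above, the whole recursion must be repeated for each convergent. The periodic expansion of the square root of 2 serves as the example here.

```python
import math

# Backward algorithm (BA) for a continued fraction convergent, scalar case:
#   sqrt(2) = 1 + 1/(2 + 1/(2 + 1/(2 + ...)))
# The tail is initialized at level n and the recursion runs downward.
def sqrt2_convergent(n):
    tail = 2.0                 # initialize the tail at the truncation level
    for _ in range(n):         # backward recursion: tail <- 2 + 1/tail
        tail = 2.0 + 1.0 / tail
    return 1.0 + 1.0 / tail    # the n-th convergent

approx = sqrt2_convergent(40)
print(abs(approx - math.sqrt(2)))  # error far below 1e-12
```

Each call recomputes the entire backward recursion, which is exactly the disadvantage of BA mentioned above, but the convergence is geometric and very fast in this well-conditioned example.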

It is easy to see that the recurrence relations (71)–(73) are equivalent to (12) and (13) with . Hence, the derivation based on continued fractions leads to the same results as the derivation based on taboo probabilities.

Observe that the derivation based on taboo probabilities still works if for some , whereas the derivation based on matrix continued fractions fails. Nevertheless, the derivation based on matrix continued fractions enables us to detect that (65) is a decoupled version of the difference equation (40), which is important with respect to numerical applications. In [31], it was shown that for and , the difference equation (37) is subject to numerical instability because the invariant measure is asymptotically dominated by other solutions of the difference equation (see [32]). In addition, it was shown that the continued fraction algorithms (65) and (71) result in a decoupled recursion for the computation of in which the dominant solutions of (37) do not appear. However, we cannot ensure that in general, the decoupled recursion (65) is free from all sources of numerical instability. To take this issue forward, we need tailored Pringsheim- and Pincherle-type convergence criteria for matrix continued fractions; compare with [29].

Remark 4. The existence of the limits (68) has been shown in Theorem 1 by probabilistic arguments. To prove (68) in a purely analytic manner, further research is required. The reason is the following: Obviously, the convergence of the matrix-driven Jacobi-Perron algorithm is implied by the diagonal dominance of the transition probability matrix . In general, there exists a great variety of convergence criteria for ordinary continued fractions and their generalizations which appeal to the diagonal dominance of their coefficients (the so-called Pringsheim criteria). But in the case of the matrix-driven Jacobi-Perron algorithm, these results are only available in terms of norm-based criteria, which are too restrictive to imply convergence in the present case. It may therefore be worthwhile to study the relationship between Markov chains and approximation theory in more detail in order to discover alternative techniques for detecting convergence of the matrix-driven Jacobi-Perron algorithm. Notice that the approximants of the invariant measure of (1) are entirely determined by the finite difference schemes (65)–(67). In this respect, the analytic proof of the existence of the limits (68) is mainly of theoretical interest.

6. Numerical Example

Consider a queueing system consisting of a single server which is subject to breakdowns. The up time of the system is exponentially distributed with parameter . The down time is assumed to be Erlang distributed with shape parameter and rate parameter for some (implying that the mean down time is ). Customers arrive according to a Poisson stream with rate . By , we denote the probability that an arriving customer joins the queue when customers are present and the server is active. Analogously, shall denote the probability that an arriving customer joins the queue when customers are present and the server is down. Notice that it could make sense to assume , that is, customers will not join the queue when the system is interrupted. When less than customers are present, no service is provided. Otherwise, customers are served at a time. The service time of each batch is assumed to be exponentially distributed with parameter . In addition, we assume that the arrival process, the service process, and the breakdown mechanism are stochastically independent. The model under consideration is an extension of the usual queue, which is described in more detail in [33].

Remember that an Erlang distributed random variable may be interpreted as a phase-type distributed random variable [1] with distribution function for , where

Erlang distributions are frequently used when modelling random variables with a small variance or even approximating deterministic behaviour. This is due to the fact that the variance of the Erlang distribution with parameters and is , and this is indeed the smallest variance which can occur for a phase-type distribution with order and expectation (see [34] for more details).
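Assuming the parametrization indicated in the text (shape parameter k and rate kη, so that the mean down time stays fixed at 1/η for every k), the moments and the coefficient of variation can be checked as follows; the helper name and the sample values are illustrative.

```python
import math

# Erlang distribution with shape k and rate k*eta: mean 1/eta (independent
# of k), variance 1/(k*eta^2), so the coefficient of variation is 1/sqrt(k).
# Increasing k at fixed mean approximates deterministic down times.
def erlang_moments(k, eta):
    rate = k * eta
    mean = k / rate               # = 1/eta
    var = k / rate ** 2           # = 1/(k * eta**2)
    cv = math.sqrt(var) / mean    # = 1/sqrt(k)
    return mean, var, cv

for k in (1, 4, 16):
    print(k, erlang_moments(k, 0.5))  # mean stays 2, cv shrinks as 1/sqrt(k)
```

This makes the claim about small variance explicit: at fixed mean, the coefficient of variation decreases like the reciprocal square root of the number of phases.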

Due to the lack-of-memory property of the exponential distribution and the independence assumptions, the system under consideration is characterized by a two-dimensional continuous-time Markov chain with where (i) is the number of customers in the system at time (ii) indicates that the server is down and the down time is in phase at time (iii) indicates that the server is up at time .

Hence, the underlying state space is , and we obtain a generator matrix of structure (20) with

A sufficient condition for to be regular and ergodic is

This statement can be deduced from Tweedie’s criterion ([35], Theorem 2.3). The condition is to find a nonnegative sequence and an integer satisfying for and

, and some . A simple check shows that satisfies this set of inequalities as long as (76) holds.

Next, we focus on the calculation of the following performance values: (i)The expected stationary number of customers in the system is given by where (ii)The corresponding variance is calculated according to where (iii)The stationary probability that no service is provided despite customers waiting is given by where (iv)The stationary probability of the system being down is

The latter probability can be computed exactly as . Therefore, the numerical computation of this probability merely serves as a check of the implementation.
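Such an exact value is available because the up/down mechanism is an alternating renewal process independent of the queue, so by the renewal-reward theorem the long-run fraction of down time equals the mean down time divided by the mean cycle length. A small simulation sketch (the parameter names xi, eta, and k are ours) illustrates this, and also that the limit does not depend on the Erlang shape:

```python
import random

def fraction_down(xi, eta, k, cycles, seed=0):
    """Long-run fraction of down time for alternating Exp(xi) up periods
    and Erlang(k, k*eta) down periods (our parameter names)."""
    rng = random.Random(seed)
    up = down = 0.0
    for _ in range(cycles):
        up += rng.expovariate(xi)
        down += sum(rng.expovariate(k * eta) for _ in range(k))
    return down / (up + down)

# Renewal-reward limit: E[down] / (E[up] + E[down])
# = (1/eta) / (1/xi + 1/eta), independent of the shape k.
xi, eta, k = 0.2, 0.5, 3
print(fraction_down(xi, eta, k, 200_000), (1 / eta) / (1 / xi + 1 / eta))
```

Agreement between this value and the stationary probability obtained numerically is a useful sanity check for the implementation.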

For calculating all these values, we apply Algorithm 2 with and where for and for . After applying the algorithm, the variance can be deduced from the first two entries of the resulting vector. Similarly, the conditional probability can be obtained from the third and the fourth entry.

In Tables 1, 2, and 3, we present some numerical results for , , , or , , and for , and various values of and .

In Table 4, we fix and and vary the coefficient of variation of the down time (i.e., ) from to .

The usual queue with service interruptions has been investigated in great detail by Gaver [36] and Federgruen and Green [37]. Their explicit formulas tell us that the mean number of customers in the system and the variance of the number of customers in the system are strongly affected by the arrival rate (i.e., ), the ratio of the mean repair time and the mean service time (i.e., ), and the availability of the system (i.e., ). Obviously, the same is true for the state dependent bulk service queue with unreliable server (see Tables 1 and 2).

The conditional probability may be interpreted as a certain measure for the non-value-added time of the server. The data in Table 3 suggest that, for , converges to . This can be deduced from the law of total probability, which states

Since for , we conclude from (85) that as . So it is not surprising that and are the main influencing factors, whereas the arrival rate has little effect on , which may be explained as follows: due to its state dependence, the arrival stream in our example is massively dampened when the number of customers grows. In other words, in spite of an excessive arrival rate (i.e., ), the system always reaches a comparatively moderate and balanced state.

Finally, Table 4 reveals that the coefficient of variation of the down time has a relatively small influence on the main characteristics of the system.

It should be mentioned that our model allows the specification of more complex interarrival, service, and up and down times by fitting phase-type distributions to empirical data or by approximating general distributions by phase-type distributions. Since it is only our intention to demonstrate the practical suitability of our algorithm, we do not deepen the discussion of the numerical results.

Obviously, many other queueing situations can be taken into account within our model. In [17], a tandem queue was analysed in which (i) customers arrive at the first station according to a Markovian arrival process, (ii) customers served at the first station are transferred to the second station, (iii) customers served there leave the system, (iv) there are and servers at the two stations, respectively, and (v) at both stations, the service time is phase-type distributed.

As pointed out in [17], the state space of the underlying queueing process consists of the number of waiting customers (at the first station and at the second station), the number of servers serving in the various phases of the phase-type distribution (at the first station and at the second station), and the environmental state for the Markovian arrival process. But by appropriately renumbering the states, it is seen that the process may be transformed into a quasi-birth-death process having a generator matrix of the form (20) with .

Next, consider a batch service at both stations with a maximum batch size at the second station. Then departures at the second station decrease the total number of customers in the system by . Hence, a structure of type (20) with nonconstant block dimensions comes into play. Since the state space remains finite, Algorithms 1 and 2 are exact up to numerical errors. Similarly, many other queueing models may be solved in this manner.

7. Conclusion and Further Research

In this paper, we have suggested methods for computing invariant measures and stationary expectations for Markov chains with a band-block-structured transition probability matrix or generator matrix, respectively. We have presented a probabilistic derivation as well as a derivation based on generalized matrix-valued continued fractions. Although the latter derivation only works under some restrictive assumptions, we have gained some additional numerical and algebraic insights. From a practical point of view, we want to emphasize the benefits of Algorithm 2. This method generalizes the established algorithm for computing stationary expectations for quasi-birth-death processes [7], and in situations where we are only interested in stationary expectations (which should be the case in most practical applications), it is extremely efficient with respect to memory requirements.

As pointed out in the remarks concerning Algorithms 1 and 2, all considerations but the memory-efficient computation of stationary expectations can easily be extended to lower-block Hessenberg matrices. Unfortunately, for general lower-block Hessenberg matrices (and a large truncation level ), the memory requirement becomes critical. Hence, further research could concern the memory-efficient use of the lower-block Hessenberg transition structure (skip-free to the right). Other structures of interest in practical applications are upper-block Hessenberg matrices or general block-band matrices with upper bandwidth . Another interesting point concerns the choice of the truncation level and the derivation of precise error bounds.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

We acknowledge support by the Open Access Publishing Fund of Clausthal University of Technology.