Research Article | Open Access
Computational Procedures for a Class of GI/D/k Systems in Discrete Time
A class of discrete-time GI/D/k systems is considered in which the interarrival times have finite support and customers are served in first-in first-out (FIFO) order. The system is formulated as a single-server queue with new general independent interarrival times and constant service duration by assuming cyclic assignment of customers to the identical servers. The queue length is then set up as a quasi-birth-death (QBD) type Markov chain. It is shown that this transformed single-server system has special structures which make the computation of the matrix R simple and efficient, significantly reducing the number of multiplications in each iteration. As a result, the computation time remains very low. Moreover, these structural properties make the computation of the queue-length distribution of the transformed system efficient. Exploiting the same special structures is also shown to simplify the computation of the waiting-time distribution.
In most communication systems studied in discrete time, the interarrival times of packets follow a general independent distribution. Hence, the arrival of packets at an ATM switch can be represented by a general independent distribution. Moreover, in an ATM switch multiple packets (53-byte cells) are transmitted simultaneously through identical transmission lines. An ATM switch can therefore be considered a multiserver system with deterministic service time, since the packets are all of the same size. Hence, the performance of an ATM switch can be analyzed by studying a GI/D/k system with finite support on the interarrival times.
To the best of our knowledge, GI/D/k systems in discrete time have not previously been analyzed in the literature with a focus on computational efficiency. In this paper, the matrix-geometric method is used to analyze a class of discrete-time GI/D/k systems with that focus. The class of GI/D/k systems considered here has finite support for the interarrival times, and service is in first-in first-out (FIFO) order. The idea used here to analyze the waiting-time distribution of this multiserver system is due to Crommelin. The same idea has been used in analyzing the waiting-time distribution of multiserver systems with constant service duration in both continuous and discrete time, such as [2–20]. The idea is that assigning customers to servers in cyclic order does not change the waiting-time distribution in a multiserver system where customers are served in FIFO order by identical servers with the same constant service time. In that case, only an arbitrary single server needs to be studied to compute the waiting-time distribution. Crommelin analyzed the waiting-time distribution of an M/D/k system by converting it to a single-server system with deterministic service. Crommelin's work was followed by several studies of deterministic-service multiserver systems in both continuous and discrete time. Iversen decomposed an M/D/k queue with FIFO into single-server deterministic-service queues with FIFO for analyzing the waiting-time distribution. Franx developed an expression for the waiting-time distribution of the M/D/k queue by a fully probabilistic analysis requiring neither generating functions nor Laplace transforms. Wittevrongel and Bruneel used a transform-based method to analyze a multiserver system with constant service time and a correlated arrival process. Nishimura studied a MAP/D/k system in continuous time using a spectral technique. Takine analyzed a MAP/D/k system and computed the queue-length distribution by first characterizing the sojourn-time distribution.
Kim and Chaudhry also computed the queue-length distribution of a discrete-time GI/D/k queue after computing the waiting-time distribution using the distributional Little's law and a transform-based method. The exact elapsed time since the last arrival is approximated by its time average in that work. Chaudhry et al. analyzed the waiting-time distribution of a MAP/D/k system in discrete time. Later, Alfa gave a simpler and more efficient computational scheme for the discrete-time MAP/D/k system. Alfa also carried out an algorithmic analysis of a discrete-time BMAP/D/k system by exploiting its structural properties. Chaudhry et al. and Alfa [2, 3] used the matrix-geometric method for their analyses. There is also work on GI/D/k queues, such as Wuyts and Bruneel and Gao et al. [10–14], using transform-based methods. Other algorithms exist for multiserver queues with constant service times, such as those for GI/D/1 and GI/D/c queues by Chaudhry, a deterministic-service queue by Chaudhry and Kim, further deterministic-service queues by Chaudhry et al. and Franx, Ph/D/k queues by van Hoorn, and another deterministic-service queue by Xerocostas and Demertzes.
In this paper, the cyclic assignment of customers to servers is used to model a class of discrete-time GI/D/k systems as a single-server system under the assumption of first-in first-out (FIFO) service order. The single-server formulation sets up the queue-length process as a quasi-birth-death (QBD) chain with several structural properties, and the GI/D/k system is then analyzed efficiently by exploiting them. This paper makes three contributions, namely reductions in the computational complexity of (i) the matrix R, (ii) the queue-length distribution of the transformed system, and (iii) the waiting-time distribution. The first contribution is that the time complexity of computing the matrix R is reduced by significantly decreasing the number of multiplications in each iteration, while requiring the same number of iterations as the natural iteration; the complexity depends on the support of the interarrival-time distribution, the number of servers, and the service duration. The second contribution is that the queue-length distribution of the transformed system is computed efficiently by exploiting the special structures of the system. The third and most important contribution is that the computation of the waiting-time distribution is simplified. Chaudhry et al. and Alfa [2, 3] computed the waiting-time distribution using the technique for phase-type service-time distributions. It is shown in this paper, however, that computing the waiting-time distribution for a deterministic service-time distribution in discrete time does not require the complicated steps of the phase-type approach.
The rest of the paper is organized as follows. Section 2 introduces the GI/D/k system and explains the modeling of this multiserver system as a single-server system. Section 3 discusses the special structures of the matrices of our model and how those structures are exploited for efficient computation of the matrix R. The computation of the queue-length and waiting-time distributions is explained in Sections 4 and 5, respectively. Numerical examples are provided in Section 6 to show the suitability of our proposed method for computing the matrix R. Section 7 concludes the paper.
2. The GI/D/k System
The class of discrete-time GI/D/k systems considered here has generally distributed interarrival times with finite support. The customers are served in FIFO order by identical servers with constant service time.
The interarrival times of this multiserver system are general, independent, and identically distributed (iid) with a distribution of finite support.
We let the mean arrival rate of customers to the system be the reciprocal of the mean interarrival time. The interarrival times can be represented as a Markov chain in which each state represents the elapsed time since the last arrival, following the approach used by Alfa in the analysis of discrete-time GI/G/1 systems. General independent interarrival times can be represented as an absorbing Markov chain whose transition matrix consists of a square substochastic block, an absorption column vector, a row of zeros, and the absorbing state. Interarrival times with infinite support can be captured by letting the support grow without bound. However, interarrival-time data collected from practical systems has finite support, and therefore general independent interarrival times with finite support are considered in this paper. The probabilities of the interarrival times can be collected in a probability vector, and this general arrival process can be described by a phase-type distribution in elapsed-time representation, whose dimension equals the support of the interarrival times; the parameters of this representation are built directly from the interarrival-time probabilities.
The general independent arrival process can be represented by two square matrices whose order equals the support of the interarrival times: one records transitions of the elapsed-time phase without an arrival, and the other records transitions with an arrival. Both can be written in terms of the phase-type representation above. The first column of the with-arrival matrix is nonzero, since an arrival restarts the elapsed-time phase at one. These two matrices are analogous to the zero-arrival and one-arrival matrices of MAP-distributed interarrival times, but they are special cases because they capture only general, independently distributed interarrival times.
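To make the elapsed-time construction concrete, the following sketch (in Python rather than the paper's C, with hypothetical helper names and an illustrative pmf) builds the no-arrival and with-arrival matrices from a finite-support interarrival pmf and recovers the arrival rate as the reciprocal of the mean interarrival time:

```python
# Sketch, assuming the elapsed-time phase-type construction described
# above; helper names are hypothetical, not the paper's.  Phase i means
# "i - 1 slots have elapsed since the last arrival".

def build_arrival_matrices(a):
    """a[j-1] = P(interarrival time = j), j = 1..n, with sum(a) == 1.
    Returns (D0, D1): phase transitions without and with an arrival."""
    n = len(a)
    survivor = [sum(a[i:]) for i in range(n)]       # P(A >= i + 1)
    D0 = [[0.0] * n for _ in range(n)]
    D1 = [[0.0] * n for _ in range(n)]
    for i in range(n):
        h = a[i] / survivor[i]                      # arrival hazard in phase i + 1
        D1[i][0] = h                                # arrival: phase restarts at 1
        if i + 1 < n:
            D0[i][i + 1] = 1.0 - h                  # no arrival: age by one slot
    return D0, D1

def invariant(D, iters=5000):
    """Invariant row vector of a stochastic matrix, by power iteration."""
    n = len(D)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * D[i][j] for i in range(n)) for j in range(n)]
    return pi

a = [0.2, 0.3, 0.5]                                 # finite support, n = 3
D0, D1 = build_arrival_matrices(a)
D = [[D0[i][j] + D1[i][j] for j in range(3)] for i in range(3)]
pi = invariant(D)                                   # invariant phase vector
lam = sum(pi[i] * sum(D1[i]) for i in range(3))     # arrival rate
# lam equals 1 / E[A] = 1 / 2.3 (to numerical precision)
```

Note that the construction reproduces the structure described in the text: the first column and the last row of the no-arrival matrix come out zero, and only the first column of the with-arrival matrix is nonzero.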
A discrete-time Markov chain can be used to represent this general arrival process, where one component counts the number of arrivals up to and including the current time and the other records the phase of the next arrival.
The sum of the two arrival matrices is stochastic. If its invariant probability vector over arrival phases is computed, the arrival rate is obtained by postmultiplying that vector by the with-arrival matrix and summing the result.
There are identical servers in a GI/D/k system, and the service times are of constant duration. The service time can be represented as a phase-type distribution in elapsed-time form whose dimension equals the service duration: the phase advances deterministically by one in each slot, so the transient block is a shifted identity matrix and absorption occurs from the last phase.
The condition that ensures stability of the GI/D/k system is that the offered load, that is, the product of the arrival rate and the service duration divided by the number of servers, is strictly less than one. We assume throughout that the system is stable.
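As a small numerical illustration of this condition (a sketch with illustrative numbers chosen to match the scale of the examples in Section 6, not taken from the paper's tables):

```python
# Sketch: stability check for the GI/D/k queue, assuming the standard
# offered-load condition lam * d / k < 1.  The values k = 3 and d = 8
# match the scale of the numerical examples in Section 6; the pmf is
# illustrative.
a = [0.1, 0.2, 0.3, 0.4]        # P(A = j), j = 1..4 (support 4)
mean_A = sum(j * p for j, p in enumerate(a, start=1))   # E[A] = 3.0 slots
lam = 1.0 / mean_A              # customer arrival rate (1/3 per slot)
k, d = 3, 8                     # servers and constant service duration
rho = lam * d / k               # offered load per server: 8/9 < 1, stable
```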
2.1. The Model of Arrivals to an Arbitrary Server
Let us assume that customers are served in first-in first-out (FIFO) order in this GI/D/k system. It can also be assumed that customers are assigned to the servers in a cyclic manner, in particular during idle periods. This assignment of customers to servers does not affect the waiting times experienced by the customers, since the servers are identical. Following the approach of Alfa [2, 3], we consider an arbitrary server: if the first customer is assigned to the first server, then each server is subsequently assigned every kth arriving customer. In this case only one arbitrary server needs to be studied. This arbitrary server sees customers arriving according to another general independent process, which is the k-fold convolution of the original arrival process to the multiserver system. This new process can likewise be defined by two square matrices, for transitions without and with an arrival, each of which can be viewed as consisting of square blocks of the order of the original arrival matrices. The first column and the last row of the without-arrival matrix are zero, because the first column and last row of the original without-arrival matrix are zero. Only the last block of rows of the with-arrival matrix is nonzero, and in each of these rows only the first element is nonzero.
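The k-fold convolution that defines the supercustomer interarrival distribution can be sketched as follows (illustrative Python with hypothetical helper names, not the paper's code):

```python
# Sketch: the supercustomer interarrival pmf seen by one server is the
# k-fold convolution of the original interarrival pmf.  A pmf with
# support 1..n convolves to one with support k..k*n.

def convolve(p, q):
    """Convolve two pmfs given as lists indexed from time 1."""
    r = [0.0] * (len(p) + len(q))
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j + 1] += pi * qj   # times (i+1) + (j+1) land at index i+j+1
    return r

def k_fold(a, k):
    """pmf of the sum of k iid copies of the pmf a."""
    out = a
    for _ in range(k - 1):
        out = convolve(out, a)
    return out

a = [0.5, 0.5]          # P(A = 1) = P(A = 2) = 0.5
g = k_fold(a, 2)        # two customers per supercustomer
# g == [0.0, 0.25, 0.5, 0.25]: support {2, 3, 4} with probs 1/4, 1/2, 1/4
```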
The matrices of this convolved process behave like the original pair, except that the with-arrival matrix now represents the arrival of supercustomers, where a supercustomer is the last customer to complete a group of k customers, that is, the arrival of k regular customers at the arbitrary server, while the original with-arrival matrix represents arrivals of regular customers to the GI/D/k system. This new arrival process at an arbitrary server can be represented by a discrete-time Markov chain in which one component counts the number of supercustomers that have arrived so far and the other records the phase of the next arrival.
The sum of the two new matrices is stochastic. Its invariant probability vector over arrival phases can be partitioned into blocks, and the supercustomer arrival rate is obtained by postmultiplying this vector by the with-arrival matrix and summing the result.
2.2. The New Model: The /D/1 System
A model can be developed for a single-server queue with the convolved arrival process and deterministic service time, following the techniques of Alfa. This single-server queue is called the /D/1 system, and the resulting model is of QBD type, for which standard methods can be used. At each time instant, the state records the number of supercustomers in the system, the arrival phase of the next supercustomer, and the elapsed service time of the supercustomer in service. The transition matrix for this /D/1 system is
where the blocks are built from the arrival matrices of the convolved process and the elapsed-time representation of the service time defined above.
Let the stationary distribution of the chain be partitioned into level vectors, the vector at level i giving the probabilities of i supercustomers in the /D/1 system together with each phase combination. To compute this stationary distribution, a matrix R is computed first, followed by the boundary vectors; the remaining level vectors are then obtained from the matrix-geometric result, each level vector being the preceding one postmultiplied by R. The matrix R is the minimal nonnegative solution of the usual quadratic matrix equation, and its entries have the familiar interpretation: an entry of R is the expected number of visits to a state one level up, in a given phase, starting from a given phase, before the first return to the starting level.
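The matrix-geometric step can be illustrated with toy numbers (a sketch; R and the level-1 vector below are made up, not taken from the paper): each higher level vector is the previous one postmultiplied by R, and the total tail mass has a closed geometric-series form.

```python
# Toy sketch of the matrix-geometric step x_{i+1} = x_i R for i >= 1,
# writing x_i for the level vectors.  The tail levels form a
# matrix-geometric sequence whose total mass is x_1 (I - R)^{-1}.

def vecmat(x, M):
    return [sum(x[i] * M[i][j] for i in range(len(x))) for j in range(len(M[0]))]

R = [[0.3, 0.1],
     [0.2, 0.2]]                # toy rate matrix, spectral radius < 1
x1 = [0.05, 0.03]               # toy level-1 vector

# Sum the levels x_1, x_2, ... by repeated postmultiplication by R.
tail_sum = [0.0, 0.0]
x = x1[:]
for _ in range(200):            # geometric decay makes 200 terms ample
    tail_sum = [tail_sum[j] + x[j] for j in range(2)]
    x = vecmat(x, R)

# Closed form x_1 (I - R)^{-1}, using the explicit 2x2 inverse.
a, b, c, d = 1 - R[0][0], -R[0][1], -R[1][0], 1 - R[1][1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]
closed = vecmat(x1, inv)
# tail_sum and closed agree to floating-point accuracy
```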
3. Structures of the Matrices
This section explains the structures of the block matrices of the transition matrix and how those structures are exploited for efficient computation of the matrix R. The same structures are also fully exploited in computing the stationary queue-length distribution in Section 4.
The two matrices of the convolved arrival process are square, of the order of the convolved state space. The first column and the last row of the without-arrival matrix are zero. In the with-arrival matrix only the last block of rows is nonzero, and within those rows only the first column is nonzero; the with-arrival matrix can therefore be written in factored form.
The matrix is of order which can be represented as where
The first factor is a square matrix with a limited number of nonzero columns, each containing exactly one nonzero element in a fixed position. The second factor is also a square matrix of the same order; only its first column is nonzero, again with its nonzero elements in fixed positions.
The matrices , , and are of size . The matrix can be represented as
where and are both matrices of order , and . Hence,
One factor has a limited number of nonzero columns, each containing exactly one nonzero element in a fixed position; in the other factor only the first column is nonzero, again with its nonzero elements in fixed positions.
The matrix can be represented as where
Here, and are both matrices of order . and . There are nonzero columns, nonzero rows, and nonzero elements in . Moreover, each nonzero row or each nonzero column of contains only one nonzero element. On the other hand, there are nonzero columns in from column to each having nonzero elements.
The matrix can be represented as . has only nonzero rows which are in its last rows and nonzero columns from column to column .
3.1. Computation of the Matrix R
The matrix R can be computed using any of the three linearly convergent schemes from Neuts, known as the natural, traditional, and U-based iterations, respectively. In each scheme the iterates start from zero. The auxiliary matrix appearing in the U-based iteration is a square matrix of the same order as R; it is the minimal nonnegative solution of its own fixed-point equation, and its entries have a taboo-probability interpretation involving eventual visits to a level under the taboo of the level below. Any of these methods terminates once successive iterates differ by less than a very small positive constant. In each step, the natural iteration requires two matrix multiplications, whereas the traditional iteration requires three matrix multiplications and one matrix inversion. The U-based iteration requires two matrix multiplications and one matrix inversion per step. The traditional iteration requires fewer iterations than the natural one, and the U-based iteration requires fewer iterations than the other two. Alfa and Xue developed an efficient inversion technique for the U-based iteration in analyzing the queue length of a discrete-time GI/G/1 system.
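A minimal sketch of the natural iteration follows. It is a generic dense version, shown only to fix ideas; the paper's contribution is precisely that the block structure of its matrices lets most of these multiplications be skipped. The fixed-point equation used here is the standard discrete-QBD form from Neuts, R = A0 + R A1 + R² A2.

```python
# Sketch of the linearly convergent natural iteration for R, the minimal
# nonnegative solution of R = A0 + R*A1 + R^2*A2, starting from R = 0.
# Dense and unstructured; illustrative only.

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def natural_iteration(A0, A1, A2, tol=1e-12, max_iter=100000):
    n = len(A0)
    R = [[0.0] * n for _ in range(n)]
    for _ in range(max_iter):
        R2 = matmul(R, R)                         # two multiplications per step
        R_next = matadd(A0, matadd(matmul(R, A1), matmul(R2, A2)))
        diff = max(abs(R_next[i][j] - R[i][j]) for i in range(n) for j in range(n))
        R = R_next
        if diff < tol:                            # successive-iterate test
            break
    return R

# 1x1 sanity check: R = 0.2 + 0.5 R + 0.3 R^2 has minimal root R = 2/3.
R = natural_iteration([[0.2]], [[0.5]], [[0.3]])
# R[0][0] is approximately 2/3
```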
There are several efficient methods for computing the matrix R of a QBD, such as the logarithmic reduction algorithm of Latouche and Ramaswami, the cyclic reduction technique of Bini and Meini, and the invariant subspace approach of Akar and Sohraby. These quadratically convergent algorithms compute a matrix G, the minimal nonnegative solution of its own quadratic matrix equation, whose entries give the probability that, starting from a given phase one level up, the Markov chain eventually visits the level below and does so in a given phase. The matrices R, G, and U are related, and these relations can be found in the literature. Like R, the matrix G can be computed using three different iterations: natural, traditional, and U-based. Generally a zero matrix is used as the initial value. Latouche and Ramaswami showed that if a certain number of natural-iteration steps are needed to converge to within a very small positive constant, then logarithmic reduction requires no more than a logarithmic number of iterations in that count. They also developed a logarithmic reduction technique for the matrix R. However, the quadratically convergent algorithms [26–28] are not considered here for computing the matrix R, as they involve matrix inversions which cannot exploit the special structures of the block matrices.
The complexity of computing the matrix R can be reduced by using its structure. A leading block of rows of R is zero, and among the remaining rows a regular pattern of rows is also zero; the same rows are zero in every iterate. The matrix R can therefore be represented as follows, with only one block of rows to be computed, whose zero rows occur in fixed positions.
The products appearing in the traditional and U-based schemes involve many block-matrix multiplications, and the required inversions generate additional blocks. The inversion needs to be computed only once for the traditional iteration, whereas for the U-based iteration it must be recomputed in every step. Each iteration of the natural method requires substantially fewer multiplications than each step of the traditional and U-based methods. Therefore, we use the natural iteration in developing our algorithm for computing the matrix R.
The matrix of (3.7) can be computed using the natural iteration with the following equation:
where the iterate starts from zero. The terminating condition for this iteration is the same as for the natural, traditional, and U-based methods, but only the nonzero rows of the iterate need to be checked.
The numbers of multiplications required for the individual block products, and hence for one full iteration, can be counted from the structures above. By exploiting the special structures of the matrices, the computational cost of each iteration is decreased substantially. Moreover, the memory requirement of our method is lower than that of the natural, traditional, and U-based methods and of the quadratically convergent algorithms.
4. Stationary Distribution of P for the /D/1 System
Two methods for computing the stationary distribution of the Markov chain represented by the matrix P are briefly presented here. Section 4.1 then explains how the second method can be made efficient by utilizing the special structures of the matrices.
In the first method, an invariant probability vector is computed for a boundary stochastic matrix, and the boundary level vectors are expressed in terms of it after normalization.
In the second method, the invariant probability vector of a different stochastic matrix is computed, followed by a normalizing positive real constant; the boundary level vectors are then recovered from these quantities.
4.1. Exploiting Structures for Stationary Distribution
The stationary distribution can be computed by first computing the invariant probability vector, and this computation can be made efficient by exploiting the special structures of the matrices involved; these structures are inherited from the special structures of the arrival and service matrices. This subsection explains how the computations can be carried out efficiently.
The required inverse can be obtained by block-matrix inversion, which reduces the number of multiplications.
can be represented as
where the factors are defined by the recursions above. Utilizing the special structure reduces the number of multiplications required for this computation.
The matrices and can be represented as
where the blocks are defined recursively, with dimensions matching the arrival and service representations. Multiplying by these structured factors requires far fewer multiplications than it would without the special structures.
The matrix of can be represented in the following form:
where is a matrix of order . Here,
We can recursively define , for , for , , for , and .
Let us define another invariant probability vector of the stochastic matrix where is a row vector having scalar elements. Here, is computed from , and for . Next a positive real number is computed as where and . and are both of size . The column vectors and can be expressed as and , respectively. Both and are of size . and for . There are only nonzero elements in and nonzero elements in for . On the other hand, for and .
can be computed as using multiplications instead of multiplications using the special structure of . Let us partition and as and , respectively, for efficient computation of . Here, and for are of size and , respectively. is computed as and these vector-matrix multiplications can be made efficient by considering only nonzero rows of .
can be computed as for () using multiplications instead of multiplications as has only nonzero rows.
5. Waiting-Time Distribution of the /D/1 System
The waiting-time distribution of an arbitrary customer in the /D/1 system is the same as that of a customer in the GI/D/k system. Let the waiting-time probability mass function give the probability that a customer, that is, a particular member of a supercustomer group, has to wait a given number of time units. It is computed after obtaining the stationary probability vectors giving the probability that an arriving supercustomer sees a given number of supercustomers ahead of him.
Two approaches for computing the waiting-time distribution are presented in Sections 5.1 and 5.2, respectively. The first uses the approach for phase-type service-time distributions, as in the analyses of Alfa and Chaudhry et al. In the second, we develop a computationally simpler approach: a simplified version of Method 1 that streamlines the computation of the arriving-supercustomer probability vectors and of the probability mass function of the waiting time.
5.1. Method 1
The expressions for the stationary probabilities that an entering supercustomer finds different numbers of supercustomers ahead of him in the system can be written as
In this method, the probability that the waiting time equals a given number of time units is computed as
where a square matrix represents the probability distribution of the units of work in the system at the arrival of a supercustomer who finds a given number of supercustomers ahead of him. For example, an element of this matrix represents the probability that the service of the arbitrary supercustomer, who arrives to find the system with a given remaining workload, begins in a given phase, given the phase in which the supercustomer in service was found at the arrival instant. The matrix is computed in the following way:
5.2. Method 2
This method simplifies the computation of the boundary probability vectors. Moreover, it does not need to compute the workload matrix of Method 1, which simplifies the computation of the probability mass function of the waiting-time distribution.
The special structures of the arrival and boundary matrices can be exploited to reduce the number of operations needed to compute the boundary vectors. Only the first columns of the relevant matrices are nonzero; therefore only the first element of the empty-system vector is nonzero. This is quite intuitive: since the arriving supercustomer finds nobody ahead of him, he starts service instantly in the first phase, and the arrival process for the next supercustomer starts at phase 1. The first element is computed as
A band of columns of one matrix is nonzero, while only the first column of the other is nonzero; therefore only the leading elements of the corresponding vector are nonzero. This is again intuitive: the arriving supercustomer finds a number of supercustomers ahead of him, the arrival process of the next supercustomer starts at phase 1, and the ongoing service is at the start of its phase. The nonzero elements of the vector can be computed using
The special structures of and for result in
If a supercustomer sees a given number of supercustomers ahead of him, then the time he must wait before getting service is determined by that number and the elapsed service time.
First we consider the case of . In this case, , , and . For , (5.3) can be written as
For we can assume and . In this case, .
For , we have from (5.7).
For we can assume and . In this case, . Therefore, we can express for ,
Now, we consider the case . Let us define as the th column of an identity matrix of order . It is found that for and
which can be proved by induction. If the matrix is nonzero, then its nonzero elements lie in the first column; this is due to the elapsed-time representation of the service time, in which service starts at phase 1.
Now, . Therefore, we can express for ,
Alfa [2, 3] pointed out this property but did not provide an explicit expression, and his method computes the first column of the matrix recursively, which our method does not need to do. Using (5.10) can significantly reduce the complexity of computing the waiting-time distribution for any discrete-time multiserver queueing system with deterministic service time, such as the BMAP/D/k system of Alfa and the MAP/D/k systems of Alfa and Chaudhry et al. Equation (5.10) also reduces memory usage, since the matrix no longer needs to be computed and stored for different parameter values.
Now, if an arriving customer has to wait a given number of time units, then a corresponding number of supercustomers are ahead of him and the customer in service is in a corresponding phase of his service time. Therefore, the probability of a customer waiting a given number of time units can be expressed by substituting the values from (5.8) and (5.10) into (5.2).
6. Numerical Examples
This section compares our proposed method of computing the matrix R with the natural, traditional, and U-based methods and with the logarithmic reduction technique for the matrices R and G. Three different matrices are computed by each of these methods, and four values of the convergence constant are used. Three identical servers (i.e., k = 3) are considered, the duration of service is eight time units, and the support of the interarrival times is assumed to be four. Three different probability vectors are used for the interarrival times in the numerical examples. A program was written in the C programming language to compute the matrices by the different methods. The program was executed under SunOS 5.10 on an i386 processor operating at 2660 MHz with an i387-compatible floating-point processor. The number of iterations and the time in seconds are recorded in Tables 1–6 for each method; "LR" denotes the logarithmic reduction technique in those tables.