#### Abstract

We present a novel technique to solve multiserver retrial systems with impatience. Unfortunately, these systems do not admit an exact analytic solution, so it is necessary to resort to approximate techniques. Our technique does not rely on the numerical solution of the steady-state Kolmogorov equations of the Continuous Time Markov Chain, as is common for this kind of system, but instead considers the system in its Markov Decision Process setting. This technique, known as value extrapolation, truncates the infinite state space and uses a polynomial extrapolation method to approximate the relative values of the states outside the truncated state space. A numerical evaluation is carried out to assess this technique and to compare its performance with previous approaches. The results show that value extrapolation greatly outperforms the approaches that have previously appeared in the literature, not only in terms of accuracy but also in terms of computational cost.

#### 1. Introduction

A common assumption when evaluating the performance of communication systems is that users who do not obtain immediate service leave the system without retrying. However, due to the increasing number of customers and growing network complexity, customer behavior in general, and the retrial phenomenon in particular, may have a nonnegligible impact on system performance. For example, in mobile cellular networks the importance of the retrial phenomenon has been stressed in [1–3]. An extensive bibliography on retrial queues can be found in [4]. The modeling of repeated attempts has been the subject of numerous investigations, because these systems have a nonhomogeneous and infinite state space. It is known that the classical theory [5] is developed for random walks on the semistrip $\{0, 1, \dots, C\} \times \mathbb{Z}^{+}$ ($C$ being the number of servers) with infinitesimal transitions subject to conditions of space homogeneity.

When the space-homogeneity condition does not hold, as in the case of retrial queues, the problem of calculating the equilibrium distribution has not been solved beyond approximate techniques when the number of servers is higher than two [6]. In particular, Marsan et al. [7] propose a well-known approximate technique for its analysis. In [8], a generalization of the approximate technique in [7] was proposed, showing a substantial improvement in accuracy at the expense of a marginal increase in computational cost. Those approximations are based on the reduction of an infinite state space to a finite one by aggregating states. Other solutions maintain the infinite state space but homogenize it beyond a given level in order to solve the system. These latter models are known as generalized truncated models [6] and usually provide much better accuracy than the finite methodologies [9]. In this category we find the models proposed by Falin [10], by Neuts and Rao [11], and by Artalejo and Pozo [6]. All these approaches rely on the numerical solution of the steady-state Kolmogorov equations of the Continuous Time Markov Chain (CTMC) that describes the system under consideration.

Very recently, however, an alternative approach for evaluating infinite state space Markov processes has been introduced by Leino et al. [12–14]. The new technique, named value extrapolation, does not rely on solving the global balance equations. Instead, it considers the system in its MDP (Markov Decision Process) setting and obtains the expected revenue rate from the Howard equations written for a truncated state space. Instead of a simple truncation, the relative values of the states just outside the truncated state space are estimated using a polynomial extrapolation based on the states inside, obtaining a closed system. Therefore, we can compute any performance parameter as long as we are able to express it as the expected value of a random variable that is a function of the system state.

So far, the value extrapolation technique has been applied to multiclass single server queues, showing very promising results. It must be noted that a key aspect of the application of value extrapolation lies in the choice of the extrapolating function for the relative state values. Indeed, in [14] the authors show that by selecting an appropriate polynomial function the technique yields exact results for the moments of the queue length in a multiclass Discriminatory Processor-Sharing (DPS) system. Unfortunately, the appropriate functional form of the extrapolation depends on the system and also on the revenue function, that is, on the performance parameter we are interested in. Hence, there is no universally good choice for the extrapolating function. In this paper we address the application of the value extrapolation technique to an important class of queuing systems, namely retrial queues, which are essentially different from the type of queues to which this technique has been applied so far. A potential drawback of value extrapolation compared to conventional state space truncation methods is that, since the stationary state probabilities are not obtained, if one wants to compute several performance parameters, the technique has to be applied once for each of them. We apply well-known linear algebra algorithms to compute several performance parameters simultaneously, and through a series of numerical examples we show that, at least for the type of system under study, the relative impact in terms of computational cost is marginal.

So far, the application of the value extrapolation technique has only addressed problems in which the relative state values are expected to follow a polynomial tendency. In this paper we develop the value extrapolation technique to solve a multiserver retrial system, also addressing the drawback of computing only a single performance parameter each time the technique is used.

In the first part of the paper, we develop the analytical side of the technique, defining the Howard equations associated with the model and the revenue functions. In the second part, we compare our technique with other previously proposed techniques in terms of accuracy and computational cost. Results show that the proposed technique clearly outperforms the other studied techniques in terms of computational cost, and the improvement is even greater in terms of accuracy.

The rest of the paper is structured as follows. Section 2 describes the system under study, while Section 3 introduces the solving technique used. In Section 4, the numerical analysis is carried out, evaluating the value extrapolation technique and comparing it with other previous solving techniques proposed in the literature. Final remarks and a summary of results are provided in Section 5.

#### 2. System Model

The system under study is a generic retrial system including user impatience; that is, users leave the system with a certain probability after an unsuccessful retrial. A diagram of the system is shown in Figure 1. Users arrive following a Poisson process with rate $\lambda$ to a system with $C$ servers and request an exponentially distributed service time with rate $\mu$. Without loss of generality, we consider that each user occupies one server. When a new request finds all servers occupied, it joins the retrial orbit with probability 1. After an exponentially distributed time with rate $\mu_r$, the user retries. The reattempt is successful if it finds a free server. Otherwise, the user leaves the system with probability $1 - P_r$ or returns to the retrial orbit with probability $P_r$, starting the retrial procedure again. Note that we consider an infinite capacity for the retrial orbit.

The model considered can be represented as a two-dimensional CTMC, $\{(K(t), M(t)) : t \ge 0\}$, where $K(t)$ is the number of sessions being served and $M(t)$ is the number of users in the retrial orbit at time $t$. The state space of the process is defined by
$$\mathcal{S} = \{(k, m) : 0 \le k \le C,\ m \ge 0\}.$$

Figure 2 shows the transition diagram of the system, which exhibits two important properties in the dimension corresponding to the number of users in the retrial orbit: on the one hand, its infinite cardinality and, on the other hand, its space heterogeneity, produced by the fact that the total retrial rate $m\mu_r$ depends on the number of customers in the retrial orbit.
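To make the transition structure concrete, the following sketch enumerates the outgoing transitions from a generic state $(k, m)$. It is an illustrative helper, not code from the paper; the parameter names (`lam`, `C`, `mu`, `mu_r`, and `Pr` for the probability of returning to the orbit) follow the notation used above, and the default values are purely assumptions for the example.

```python
def transition_rates(k, m, lam=1.0, C=4, mu=1.0, mu_r=0.5, Pr=0.8):
    """Outgoing transition rates from state (k, m) of the retrial CTMC.

    k: busy servers (0..C); m: users in the retrial orbit.
    Pr is the probability of returning to the orbit after an
    unsuccessful retrial (illustrative default values).
    """
    rates = {}
    if k < C:
        rates[(k + 1, m)] = lam                  # arrival finds a free server
        if m > 0:
            rates[(k + 1, m - 1)] = m * mu_r     # successful retrial
    else:
        rates[(k, m + 1)] = lam                  # blocked arrival joins the orbit
        if m > 0:
            # unsuccessful retrial: the user abandons with probability 1 - Pr
            rates[(k, m - 1)] = m * mu_r * (1 - Pr)
    if k > 0:
        rates[(k - 1, m)] = k * mu               # service completion
    return rates
```

Note how the retrial-orbit dimension is space heterogeneous: both the successful-retrial rate and the abandonment rate grow linearly with the orbit occupancy `m`.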

#### 3. Solving Technique

In this section we develop the value extrapolation technique for the model presented in Section 2. Additionally, we present some particularities that should be taken into account when using this technique.

##### 3.1. MDP Settings

As mentioned above, the problem of interest has no closed-form solution when $C > 2$ [6], so approximation techniques are mandatory. To the best of our knowledge, all the approximate techniques that have appeared in the literature compute the steady state probabilities $\pi_s$ from the balance equations in order to obtain the desired performance parameters, that is, they solve the linear system of equations $\boldsymbol{\pi}\mathbf{Q} = \mathbf{0}$ along with the normalization condition $\sum_{s \in \mathcal{S}} \pi_s = 1$, where the element $q_{ss'}$ of the infinitesimal generator $\mathbf{Q}$ represents the transition rate from state $s$ to state $s'$.
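For contrast with what follows, the classical approach can be sketched in a few lines: on a finite (truncated) generator, one balance equation is redundant and can be replaced by the normalization condition. This is a generic sketch, not the specialized algorithm of [16]:

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for an irreducible finite
    generator Q, replacing one (redundant) balance equation with the
    normalization condition."""
    S = Q.shape[0]
    A = Q.T.copy()
    A[0, :] = 1.0          # normalization replaces the first equation
    b = np.zeros(S)
    b[0] = 1.0
    return np.linalg.solve(A, b)
```

For instance, for the two-state generator `[[-1, 1], [2, -2]]` this returns the distribution `(2/3, 1/3)`.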

However, value extrapolation is not based on the probability of being in a certain state, but on a different metric called the relative state values. Relative state values arise when we consider the system in the setting of an MDP. Formally, an MDP can be defined as a tuple $(\mathcal{S}, \mathcal{A}, \mathbf{Q}, \mathbf{r})$, where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of actions, $\mathbf{Q}$ is a state transition function, and $\mathbf{r}$ is a revenue function. The state of the system can be controlled by choosing actions from $\mathcal{A}$, influencing in this way the state transitions. The transition function specifies the transition rate between a pair of states when a certain action is taken at the original state. The first requirement of the value extrapolation technique is the definition of a revenue function of the system state, $r_s$. Once the revenue of every state is defined, in the steady state a mean revenue rate of the entire process can be introduced as $\bar r = \sum_{s \in \mathcal{S}} \pi_s r_s$. In the value extrapolation technique, the revenue function has to be defined so that the resulting average revenue coincides with the desired performance metric.

Once we have defined the MDP framework as well as the revenue function, we are in a position to define the relative state values. After performing an action in state $s$ the system collects a revenue for that action ($r_s$), and, as the number of transitions increases, the average revenue collected converges to $\bar r$. The relative state value $v_s$ is equal to the difference between the expected total revenue incurred when the system starts at state $s$ and the total revenue incurred in a system for which the revenue rate at all states is $\bar r$:
$$v_s = \lim_{t \to \infty} \left( E\left[ R_s(t) \right] - \bar r\, t \right),$$
where $R_s(t)$ denotes the revenue collected in $(0, t]$ when the system starts at state $s$.

The Howard equations relate revenues, relative state values, and transition rates:
$$r_s - \bar r + \sum_{s' \neq s} q_{ss'} \left( v_{s'} - v_s \right) = 0, \qquad \forall s \in \mathcal{S}.$$

The Howard equations represent the *policy evaluation* phase of the well-known *policy iteration* algorithm, the most widespread dynamic programming technique, proposed in [15]. The Howard equations that correspond to the system under study are, for $k < C$,
$$r_{k,m} - \bar r + \lambda \left( v_{k+1,m} - v_{k,m} \right) + k\mu \left( v_{k-1,m} - v_{k,m} \right) + m\mu_r \left( v_{k+1,m-1} - v_{k,m} \right) = 0,$$
and, for $k = C$,
$$r_{C,m} - \bar r + \lambda \left( v_{C,m+1} - v_{C,m} \right) + C\mu \left( v_{C-1,m} - v_{C,m} \right) + m\mu_r (1 - P_r) \left( v_{C,m-1} - v_{C,m} \right) = 0,$$
with the convention that terms involving out-of-range states vanish.

As we can observe, the number of states is infinite because $m$ can take any value in $\mathbb{Z}^{+}$; thus we need to truncate the state space to $\mathcal{S}_{\mathrm{tr}}$. In our case, the truncated state space is defined by
$$\mathcal{S}_{\mathrm{tr}} = \{(k, m) : 0 \le k \le C,\ 0 \le m \le M\}.$$

In general, $M$ is known as the truncation level. As we choose a higher value of $M$, we can expect a higher accuracy, as the truncated system is more similar to the original one, but the computation cost will be higher too. Therefore, the objective is to achieve a given accuracy with the minimum value of $M$.

There will be as many Howard equations as states, $(C+1)(M+1)$. The unknowns are the $(C+1)(M+1)$ relative state values plus the expected revenue rate $\bar r$, that is, $(C+1)(M+1) + 1$ unknowns. However, as only differences of relative values appear in the Howard equations, we can arbitrarily set the relative value of one state to zero, for example, $v_{0,0} = 0$, so we obtain a solvable linear system with the same number of equations as unknowns.
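As a sanity check of the construction just described, the following minimal sketch (a toy example, not the paper's model) solves the Howard equations of an M/M/1/2 queue with revenue equal to the number of users in the system. One relative value is fixed to zero and its column is reused for the mean revenue rate (`rbar` in the code); for the chosen rates the exact stationary mean is $4/7$.

```python
import numpy as np

# Toy example: M/M/1/2 queue, revenue r_s = number of users in state s.
lam, mu = 1.0, 2.0
Q = np.array([[-lam,  lam,         0.0],
              [  mu, -(lam + mu),  lam],
              [ 0.0,  mu,         -mu]])
r = np.array([0.0, 1.0, 2.0])

# Howard equations: r_s - rbar + sum_{s'} q(s,s') * (v_s' - v_s) = 0.
# Fix v_0 = 0 and reuse its column for the unknown rbar.
A = Q.copy()
A[:, 0] = -1.0
x = np.linalg.solve(A, -r)
rbar = x[0]        # mean number of users; exact value is 4/7
```

The same column-replacement trick carries over unchanged to the $(C+1)(M+1)$-state retrial model.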

##### 3.2. Polynomial Fitting

The traditional truncation sets the relative values of the states outside $\mathcal{S}_{\mathrm{tr}}$ to zero, but value extrapolation performs a more efficient truncation. Basically, value extrapolation replaces the relative state values outside $\mathcal{S}_{\mathrm{tr}}$ that appear in the Howard equations by an extrapolation of some relative state values inside $\mathcal{S}_{\mathrm{tr}}$. As we truncate the retrial orbit dimension beyond a value $M$, the value extrapolation technique uses the values of some states in $\mathcal{S}_{\mathrm{tr}}$ to approximate $v_{C,M+1}$, which is expected to improve the accuracy significantly, as it is better than ignoring this relative state value. Note that if the extrapolation yielded the exact value for the states outside $\mathcal{S}_{\mathrm{tr}}$, the results obtained by solving the truncated model would be exact. Also note that including value extrapolation neither increases the computational cost nor increases the number of Howard equations, which remains equal to $(C+1)(M+1)$.

Summarizing, the objective of value extrapolation is to find an extrapolation function $f$ that fits some points in $\mathcal{S}_{\mathrm{tr}}$ so that it also approximates points outside $\mathcal{S}_{\mathrm{tr}}$. It is important to choose a fitting function that keeps the Howard equations a closed system of linear equations. The most common fitting functions that fulfill this condition are polynomials. We can use all the states in $\mathcal{S}_{\mathrm{tr}}$ in the fitting procedure (global fitting) or, as is most commonly done, only a subset $\mathcal{F}$ of them (local fitting).

For the sake of simplicity, in the following description we will assume there exists a mapping $g$ from the two-dimensional set of states into a single-dimensional set, for example, the real numbers, $g : \mathcal{S} \to \mathbb{R}$. Hence, below we deal with states as if they were the real values $x = g(s)$. The specific mapping used for the model under study is specified later on.

The choice of $\mathcal{F}$ will highly depend on the states whose relative state values we want to extrapolate. Note also that the function $f$ and the set $\mathcal{F}$ need to be chosen so that the parameters of $f$ have unambiguous values; that is, in the case of choosing a polynomial as the fitting function, the number of distinct points in $\mathcal{F}$ has to be equal to or greater than the number of coefficients of the polynomial. In general, the procedure to compute the coefficients of the fitting polynomial consists in minimizing the mean squared error
$$E = \sum_{x_i \in \mathcal{F}} \left( f(x_i) - v_{x_i} \right)^2.$$

Then the optimal values for the coefficients $a_k$ can be computed by solving the equations
$$\frac{\partial E}{\partial a_k} = 0, \qquad k = 0, \dots, N-1.$$

In our case, we are using as many points as parameters of the fitting polynomial, so the fitting procedure is an ordinary polynomial interpolation and $E = 0$; that is, all the considered points lie on the curve of the polynomial. In this case, the problem can be formulated as follows. Given a set of $N$ points $(x_i, v_{x_i})$, where no two $x_i$ are identical, we can determine an $(N-1)$-th degree polynomial so that $f(x_i) = v_{x_i}$ for $i = 1, \dots, N$, where
$$f(x) = a_{N-1} x^{N-1} + \dots + a_1 x + a_0.$$

The interpolating polynomial satisfies the linear equations $f(x_i) = v_{x_i}$, $i = 1, \dots, N$, which in matrix form are
$$\begin{pmatrix} 1 & x_1 & \cdots & x_1^{N-1} \\ 1 & x_2 & \cdots & x_2^{N-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_N & \cdots & x_N^{N-1} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{N-1} \end{pmatrix} = \begin{pmatrix} v_{x_1} \\ v_{x_2} \\ \vdots \\ v_{x_N} \end{pmatrix}.$$

The matrix of coefficients of this system ($\mathbf{V}$) is a Vandermonde matrix, whose determinant is nonvanishing, and therefore $\mathbf{V}$ is invertible. Thus, there always exists a unique solution to the considered linear system of equations or, equivalently, there exists a unique polynomial that goes through all the points. However, Vandermonde matrices are often badly conditioned, especially if some of the $x_i$ are very close, so the procedure to compute the fitting polynomial is also badly conditioned. It is important to note that the uniqueness of the fitting polynomial does not mean that it cannot be written in a basis different from the standard basis. More concretely, in this work we have used the Lagrange basis.
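The ill conditioning is easy to observe numerically. For instance, for ten consecutive integer nodes placed where they would lie in our setting (an assumed, purely illustrative truncation level around 30), the condition number of the Vandermonde matrix is already astronomically large:

```python
import numpy as np

# Vandermonde matrix for 10 consecutive integer nodes, e.g. orbit sizes
# M-9, ..., M with an illustrative M = 29 (an assumption for the example).
xs = np.arange(20.0, 30.0)
V = np.vander(xs, increasing=True)   # columns 1, x, x**2, ..., x**9
cond = np.linalg.cond(V)             # far beyond 1e8: solving for the
                                     # monomial coefficients is unreliable
```

This is why working in the Lagrange basis, which avoids solving the Vandermonde system altogether, is preferable here.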

For the considered interpolation problem, the polynomial in its Lagrange setting is a linear combination of Lagrange basis polynomials:
$$f(x) = \sum_{j=1}^{N} v_{x_j}\, \ell_j(x), \qquad \ell_j(x) = \prod_{\substack{1 \le i \le N \\ i \neq j}} \frac{x - x_i}{x_j - x_i}.$$
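In code, the Lagrange form gives the extrapolated value directly as a weighted sum of the known values. The helper below (an illustrative implementation, not from the paper) computes the weights, that is, the Lagrange basis polynomials evaluated at the extrapolation point:

```python
def lagrange_weights(xs, x_star):
    """Weights w_j = l_j(x_star) of the Lagrange basis polynomials built
    on the distinct nodes xs, so that p(x_star) = sum_j w_j * f(xs[j])."""
    w = []
    for j, xj in enumerate(xs):
        wj = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                wj *= (x_star - xi) / (xj - xi)
        w.append(wj)
    return w
```

For example, with nodes $28, 29$ and target $30$ the weights are $(-1, 2)$, which is exactly the linear extrapolation rule discussed below.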

For the truncated problem of interest, and as shown in Figure 3, we will have a Howard equation in which $v_{C,M+1}$ appears, that is, the relative value of a state that does not belong to $\mathcal{S}_{\mathrm{tr}}$. Therefore, we must approximate this value by using some relative state values of states belonging to $\mathcal{S}_{\mathrm{tr}}$. It is important to emphasize that for the extrapolation of $v_{C,M+1}$ we only use states from the last row of the model shown in Figure 3, that is, states of the form $(C, m)$ with varying $m$. With this choice, we define the mapping as $g(C, m) = m$. Moreover, we use an $(N-1)$-th degree polynomial that interpolates the points $\mathcal{F} = \{(m, v_{C,m}) : m = M - N + 1, \dots, M\}$ and then evaluate it at the point $M + 1$.

This way, the general form of the extrapolated value when using an $(N-1)$-th degree polynomial is
$$v_{C,M+1} \approx f(M+1) = \sum_{j=1}^{N} v_{C,M+1-j}\, \ell_j(M+1),$$
where $\ell_j$ is the Lagrange basis polynomial associated with the node $M+1-j$.

For example, in the case of linear extrapolation ($N = 2$), we use $v_{C,M}$ and $v_{C,M-1}$, having
$$v_{C,M+1} \approx 2 v_{C,M} - v_{C,M-1}.$$

Following a similar procedure, we can obtain the corresponding relationships for quadratic ($N = 3$) and cubic ($N = 4$) extrapolation:
$$v_{C,M+1} \approx 3 v_{C,M} - 3 v_{C,M-1} + v_{C,M-2}, \qquad v_{C,M+1} \approx 4 v_{C,M} - 6 v_{C,M-1} + 4 v_{C,M-2} - v_{C,M-3}.$$

In general, for $(N-1)$-th degree polynomials and using the Lagrange basis to reduce the complexity of the procedure, a simple closed-form expression for the extrapolated value can be obtained:
$$v_{C,M+1} \approx \sum_{j=1}^{N} (-1)^{j+1} \binom{N}{j} v_{C,M+1-j},$$
where $N$ is the number of points taken for the Lagrange polynomials.
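Because the nodes are equally spaced, the Lagrange weights collapse to signed binomial coefficients, which can be checked mechanically (a small sketch, with `N` the number of points used):

```python
from math import comb

def extrapolation_coefficients(N):
    """Coefficient of v(C, M+1-j), j = 1..N, when the polynomial through
    the N points m = M-N+1, ..., M is evaluated at m = M+1."""
    return [(-1) ** (j + 1) * comb(N, j) for j in range(1, N + 1)]
```

For example, `extrapolation_coefficients(2)` returns `[2, -1]`, reproducing the linear rule above; note also that the coefficients always sum to 1, so a constant sequence of relative values is extrapolated exactly.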

##### 3.3. Revenue Function

As performance parameters are not computed from the steady state probabilities as usual, it is important to explain more carefully how they are computed. By definition, $r_s$ is the revenue rate obtained when the system is in state $s$. Therefore, we must define the revenue so that the computed $\bar r$ is the performance parameter we are looking for, setting the inputs of the Howard equations accordingly. Table 1 gives several examples of how $r_s$ can be set in order to obtain certain performance parameters, such as the blocking probability, the mean number of users in the retrial orbit, the nonservice probability (probability of a user leaving the system due to impatience without obtaining service), the probability of being in a certain state, and the probability of having a given number of busy servers.

As an example, we focus on the blocking probability, defining the revenue function to be 1 in those states in which an attempt is blocked, that is, $r_{C,m} = 1$ for all $m$, and 0 in the rest of the states, $r_{k,m} = 0$ for all $k < C$ and all $m$.
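In code, each performance parameter corresponds to one revenue vector over the state space. The two examples below are illustrative helpers (with the number of servers `C` passed explicitly), encoding the blocking probability and the mean orbit size:

```python
def revenue_blocking(k, m, C):
    """1 in states where an arrival is blocked (all C servers busy),
    so the mean revenue rate equals the blocking probability."""
    return 1.0 if k == C else 0.0

def revenue_orbit_size(k, m, C):
    """Revenue equal to the orbit occupancy, so the mean revenue rate
    equals the mean number of users in the retrial orbit."""
    return float(m)
```

Any other parameter expressible as the expectation of a function of the state (for example, the probability of a particular state) is encoded the same way.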

##### 3.4. Effect of the Value Extrapolation on the Howard Equations

In our problem, and as mentioned above, we only have to replace $v_{C,M+1}$ by its approximate value in the Howard equation that corresponds to the state $(C, M)$. As an example, if we use linear extrapolation ($N = 2$), that equation becomes
$$r_{C,M} - \bar r + \lambda \left( v_{C,M} - v_{C,M-1} \right) + C\mu \left( v_{C-1,M} - v_{C,M} \right) + M\mu_r (1 - P_r) \left( v_{C,M-1} - v_{C,M} \right) = 0.$$

As $v_{C,M+1}$ no longer appears in the Howard equations, the linear system of equations consists of $(C+1)(M+1)$ equations with the same number of unknowns. For convenience, the system can be expressed in matrix form as $\mathbf{A}\mathbf{x} = \mathbf{b}$, where $\mathbf{x}$ is the vector of unknowns ($\bar r$ and the relative state values) and $\mathbf{b}$ contains the negative revenue rates of the different states, $\mathbf{b} = -\left( r_{s_1}, r_{s_2}, \dots, r_{s_{(C+1)(M+1)}} \right)^{T}$.

Matrix $\mathbf{A}$ is the matrix of coefficients; it can be constructed from the infinitesimal generator of the truncated CTMC by making all the elements in the column associated with the state whose relative value is fixed to zero equal to $-1$, since that column is reused for the unknown $\bar r$.

Except for the equation affected by the extrapolation, matrix $\mathbf{A}$ inherits the block-tridiagonal structure of the infinitesimal generator, with blocks of size $(C+1) \times (C+1)$ that gather the states with the same number of users in the retrial orbit: the diagonal blocks contain the transitions that do not modify $m$, while the upper and lower diagonal blocks contain the transitions that increase and decrease $m$, respectively. The extrapolation only modifies the row that corresponds to state $(C, M)$: when linear ($N = 2$) or quadratic ($N = 3$) extrapolation is used, this row incorporates the coefficients of the extrapolation polynomial multiplied by $\lambda$; in general, if the extrapolation is done with $N$ points, the coefficient of $v_{C,M+1-j}$ is increased by $(-1)^{j+1} \binom{N}{j} \lambda$ for $j = 1, \dots, N$.

Note that the size of matrix $\mathbf{A}$ does not depend on the order of the polynomial used to perform the extrapolation; only the row (equation) associated with state $(C, M)$ depends on the polynomial adjustment. This characteristic has the advantage that there is no difference in the computation cost when using a higher extrapolation order.

The main drawback of the value extrapolation technique is that it computes only one performance parameter each time the system is solved. However, we can overcome this drawback in the following way. In general, the solution of the system can be obtained using the inverse of $\mathbf{A}$ as $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$. Note that choosing a different performance parameter only affects the values in $\mathbf{b}$. Therefore, computing a second performance parameter only increases the computational expense by the cost of the product $\mathbf{A}^{-1}\mathbf{b}$, as the rest of the process (specifically, the computation of the inverse of $\mathbf{A}$) is carried out only once. Similarly, we can compute several performance parameters with a marginal increase in the computation cost using LU factorization, as the first part of the procedure (the factorization, which represents the most computationally expensive part) is done only once per matrix. This characteristic of the value extrapolation technique can be observed in Figure 4, where we show that the computation time (results have been obtained using Matlab running on an Intel Pentium IV 3 GHz) increases only marginally when we compute additional performance parameters.
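Putting the pieces together, the following self-contained sketch (an illustrative implementation under the notation assumed above, not the authors' code) builds the truncated Howard system with an $N$-point extrapolation, factorizes it once, and reuses the factorization for several revenue vectors. The parameter values are assumptions for the example; as a consistency check, the probabilities of having $k$ busy servers obtained from indicator revenues must add up to one.

```python
import numpy as np
from math import comb
from scipy.linalg import lu_factor, lu_solve

def build_howard_matrix(lam, mu, mu_r, Pr, C, M, N=2):
    """Coefficient matrix of the truncated Howard equations with an
    N-point polynomial extrapolation of v(C, M+1). State (k, m) gets
    index k*(M+1)+m; the column of v(0,0)=0 is reused for rbar."""
    S = (C + 1) * (M + 1)
    idx = lambda k, m: k * (M + 1) + m
    A = np.zeros((S, S))

    def add(row, k, m, rate, kk, mm):
        # contribution rate * (v(kk,mm) - v(k,m)) to the equation of (k,m)
        A[row, idx(k, m)] -= rate
        if mm > M:  # v(C, M+1): replace by its N-point extrapolation
            for j in range(1, N + 1):
                A[row, idx(C, M + 1 - j)] += rate * (-1) ** (j + 1) * comb(N, j)
        else:
            A[row, idx(kk, mm)] += rate

    for k in range(C + 1):
        for m in range(M + 1):
            row = idx(k, m)
            if k < C:
                add(row, k, m, lam, k + 1, m)                    # arrival served
                if m > 0:
                    add(row, k, m, m * mu_r, k + 1, m - 1)       # retrial succeeds
            else:
                add(row, k, m, lam, C, m + 1)                    # arrival joins orbit
                if m > 0:
                    add(row, k, m, m * mu_r * (1 - Pr), C, m - 1)  # abandonment
            if k > 0:
                add(row, k, m, k * mu, k - 1, m)                 # service completion
    A[:, 0] = -1.0        # column of v(0,0) becomes the rbar column
    return A, idx

# Illustrative scenario (parameter values are assumptions, not the paper's).
lam, mu, mu_r, Pr, C, M = 3.0, 1.0, 0.5, 0.8, 4, 30
A, idx = build_howard_matrix(lam, mu, mu_r, Pr, C, M, N=2)
fac = lu_factor(A)                       # factorize once ...
S = A.shape[0]
r_block = np.zeros(S)
r_block[[idx(C, m) for m in range(M + 1)]] = 1.0
p_block = lu_solve(fac, -r_block)[0]     # ... reuse per revenue vector
r_orbit = np.array([m for k in range(C + 1) for m in range(M + 1)], dtype=float)
mean_orbit = lu_solve(fac, -r_orbit)[0]
```

The factorization is the expensive step; each additional performance parameter costs only one triangular back-substitution, mirroring the observation made above about Figure 4.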

#### 4. Results

In order to evaluate and compare the proposed technique, we have studied its performance in several scenarios. We have studied different system loads by modifying the arrival rate $\lambda$ while keeping the number of resource units $C$ and the service rate $\mu$ fixed. The retrial phenomenon has been configured with fixed values of the retrial rate $\mu_r$ and the return probability $P_r$. Although only one configuration of the retrial orbit has been chosen, fairly different working points are covered, as the system load is widely varied.

For obtaining the results, we have used the relative error of different performance parameters, defined for a generic performance parameter $P$ with exact value $P^{*}$ as $|P - P^{*}| / P^{*}$. In order to obtain an accurate enough estimate to be used as $P^{*}$, we ran all techniques with increasing and sufficiently high values of $M$ until the computed value had stabilized up to the 14th decimal digit. As expected, all techniques converged to the same values of the performance parameters under study.

##### 4.1. Value Extrapolation Evaluation

Table 2 shows the minimum value of $M$ needed to obtain a relative error below a given threshold for different performance parameters and loads (columns) and for different orders of the extrapolation polynomials (rows). Note that VE$n$ denotes the use of an extrapolation polynomial of order $n$. The numbers in bold indicate the lowest truncation level among all the polynomials studied. Finally, the last row of Table 2 shows the exact value of the studied performance parameter for each scenario.

From Table 2 we conclude that there is no clearly best polynomial order. In general, neither the lowest- nor the highest-order polynomials are recommendable, so we recommend using the intermediate cases. Furthermore, the fact that using VE$n$ forces us to use a truncation level large enough to contain all the interpolation points (see Section 3.2) must be considered in the choice of the polynomial. For these reasons we conclude that, for the problem and scenario of interest and for the relative accuracy we want to achieve, VE8 represents a good tradeoff between accuracy and the value of $M$ needed. Therefore, hereafter we will use the polynomial of order 8 (VE8) and we will simply denote it as VE.

##### 4.2. Comparison with Other Techniques

In this section we compare the performance of value extrapolation with other techniques based on the traditional approach of solving for the steady state probabilities using the balance equations and then computing the performance parameters of interest. Although other approaches exist, we have chosen the technique proposed in [8], referred to hereafter as FM, and the one proposed by Neuts and Rao in [11], referred to as NR. Note that we have not compared the results with the technique proposed by Artalejo and Pozo [6], as this technique does not include the impatience phenomenon and is therefore not directly applicable. A similar reasoning applies to the technique proposed by Falin [10].

In Table 3 we show the minimum values of $M$ needed to obtain a relative error below a given threshold for different performance parameters and for the aforementioned techniques. Results show that value extrapolation clearly outperforms the classical techniques, as it needs a much lower value of $M$ to achieve a certain accuracy in all the scenarios and for all the parameters studied. Similarly, in Figures 5, 6, and 7, we plot the relative error of the blocking probability, the mean number of users in the retrial orbit, and the nonservice probability, respectively, for the different techniques deployed. Results show that, for the same value of $M$, VE obtains lower relative errors than NR and FM. The difference in the relative errors is around 4 to 5 orders of magnitude, which represents a very clear improvement.

##### 4.3. Computation Cost

Although it is shown that VE clearly outperforms the NR and FM techniques, it is interesting to study their associated computation costs. In Figures 8, 9, and 10, we plot the time needed to achieve a certain relative error for the blocking probability, the mean number of users in the retrial orbit, and the nonservice probability using the different techniques under study. Note also that, although the results have been obtained using VE8, choosing a different order for the extrapolation polynomial would not change the computation cost, as the linear system of equations to be solved remains of the same size. However, results should be interpreted carefully, because computation costs depend strongly on the algorithm used to solve the resulting system of equations. More concretely, for solving the systems obtained with the FM and NR techniques, we have made use of the efficient algorithm described in [16], which takes advantage of the block-tridiagonal structure of the infinitesimal generator. Unfortunately, the linear system of equations obtained in VE no longer has such a block-tridiagonal structure, and therefore we must use a more general and less efficient algorithm; more concretely, we have used LU factorization. Even so, Figures 8–10 show that VE achieves a given accuracy faster than the other techniques under study.

#### 5. Conclusions

Multiserver retrial systems do not have an exact solution when the number of servers is higher than two, as their state space presents space heterogeneity along an infinite dimension. For that reason, approximate techniques must be developed to solve these systems. To the best of our knowledge, all the techniques proposed in the literature to solve these systems are based on computing their steady state probabilities. In this paper we propose an alternative technique based on a different metric: the relative state values and the Howard equations that relate them, instead of the balance equations. With this technique, truncation of the state space can be done more efficiently, as the state values outside the truncated state space are extrapolated from known state values inside it. In order to preserve the linearity of the resulting system of equations, we have only used polynomials as extrapolation functions.

First, we have studied the use of different orders for the extrapolation polynomials. Then, we have compared the new technique with two well-known approaches from the literature [8, 11] in terms of accuracy and computational cost. Results show that the proposed technique clearly improves on the previous approaches in terms of computational cost and, especially, in terms of accuracy, so its use is highly recommended.

#### Acknowledgments

This work has been supported by the Spanish government under Projects TIN2010-21378-C02-02 and TIN2008-06739-C04-02/TSI and by Comunidad de Madrid through Project S-2009/TIC-1468.