Abstract

This paper focuses on the stochastic comparison of Markov chains to derive qualitative approximations for an M/G/1 retrial queue with Bernoulli feedback. The main objective is to use stochastic ordering techniques to establish various monotonicity results with respect to arrival rates, service time distributions, and retrial parameters.

1. Introduction

Retrial queueing systems are characterized by the feature that arriving customers (or calls) who find the server busy join the orbit and repeat their requests in random order and at random time intervals. Retrial queues are widely and successfully used as mathematical models of several computer systems and telecommunication networks. For excellent and recent bibliographies on retrial queues, the reader is referred to [13].

Most queueing systems with repeated attempts assume that each customer in the retrial group seeks service independently of the others after a random time exponentially distributed with rate θ, so that the probability of a repeated attempt during the interval (t, t + dt), given that j customers were in orbit at time t, is jθ dt + o(dt). This discipline for access to the server from the retrial group is called the classical retrial policy [4, 5].
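As a quick numerical illustration (not part of the original paper; parameter values are arbitrary), the following Python sketch checks that the first of j independent exponential retrial clocks of rate θ rings after an exponential time of rate jθ, which is exactly the aggregate rate underlying the classical retrial policy.

```python
import numpy as np

# Quick numerical check (ours): with j customers in orbit, each retrying after
# an independent Exp(theta) time, the time until the first repeated attempt is
# the minimum of j exponentials, i.e. Exp(j * theta) -- the aggregate rate of
# the classical retrial policy.
rng = np.random.default_rng(0)
theta, j, n_samples = 0.5, 4, 200_000

first_retrial = rng.exponential(1 / theta, size=(n_samples, j)).min(axis=1)
print("empirical mean :", first_retrial.mean())    # close to 1 / (j * theta) = 0.5
print("theoretical    :", 1 / (j * theta))
```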

Several papers on retrial queues have analyzed systems without customer feedback. A more practical retrial queue with Bernoulli feedback of the customers occurs in many real-world situations: for instance, in communication networks where data transmissions must be error free within some specified probability, feedback schemes are used to request retransmission of packets that are lost or received in corrupted form.

Because of the complexity of retrial queueing models, analytic results are generally difficult to obtain. In contrast, there is a great number of numerical and approximation methods which are of practical importance. One important approach is monotonicity, which allows one to establish stochastic bounds that are helpful in understanding complicated models by means of simpler ones, for which an evaluation can be made using the stochastic comparison method based on the general theory of stochastic orderings [6].

Stochastic orders represent an important tool for many problems in probability and statistics. They lead to powerful approximation methods and bounds in situations where realistic stochastic models are too complex for rigorous treatment. They are also helpful in situations where fundamental model distributions are only partially known. Further details and applications of these stochastic orders may be found in [6–8].

There exists a flourishing literature on stochastic comparison methods and monotonicity of queues. Oukid and Aissani [9] obtain a lower bound and a new upper bound for the mean busy period of a queue with server breakdowns and FIFO discipline. Boualem et al. [10] investigate some monotonicity properties of a retrial queue with constant retrial policy, in which the server operates under a general exhaustive service and multiple vacation policy, relative to the strong stochastic ordering and the convex ordering. These results imply, in particular, simple insensitive bounds for the stationary queue length distribution. More recently, Taleb and Aissani [11] investigate some monotonicity properties of an unreliable retrial queue relative to the strong stochastic ordering and the increasing convex ordering.

In this work, we use the tools of qualitative analysis to investigate various monotonicity properties of an M/G/1 retrial queue with classical retrial policy and Bernoulli feedback, relative to the strong stochastic ordering, the increasing convex ordering, and the Laplace ordering. Instead of studying a performance measure in a quantitative fashion, this approach attempts to reveal the relationship between the performance measures and the parameters of the system.

The rest of the paper is organized as follows. In Section 2, we describe the mathematical model in detail and derive the generating function of the stationary distribution. In Section 3, we present some useful lemmas that will be used in what follows. Section 4 focuses on the stochastic monotonicity of the transition operator of the embedded Markov chain and gives comparability conditions for two transition operators. Stochastic bounds for the stationary number of customers in the system are discussed in Section 5. In Section 6, we obtain approximations for the conditional distribution of the stationary queue given that the server is idle.

2. Description and Analysis of the Queueing System

We consider a single server retrial queue with Bernoulli feedback at which customers arrive from outside the system according to a Poisson process with rate λ. An arriving customer receives immediate service if the server is idle; otherwise he leaves the service area temporarily to join the retrial group (orbit). Any orbiting customer produces a Poisson stream of repeated calls with intensity θ until the time at which he finds the server idle and starts his service. The service times follow a general probability law with distribution function B(x) having finite mean β₁ and Laplace–Stieltjes transform β̃(s). After the customer is completely served, he decides either to join the retrial group again for another service, with probability p (0 ≤ p < 1), or to leave the system forever, with probability 1 − p.
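To make the model concrete, here is a short discrete-event simulation sketch (ours, not from the paper; all names and parameter values are illustrative) of the system just described. It can be used to sanity-check the stationary quantities discussed later.

```python
import random

# Discrete-event sketch (ours) of the single-server retrial queue with
# classical retrial policy and Bernoulli feedback described above.
def simulate_retrial_feedback_queue(lam, theta, p, service_sampler,
                                    n_arrivals=200_000, seed=1):
    """Return the time-average number of customers in orbit."""
    random.seed(seed)
    t = 0.0
    next_arrival = random.expovariate(lam)
    service_end = float('inf')          # inf  <=>  server idle
    orbit = 0
    arrivals_seen = 0
    area, last_t = 0.0, 0.0

    while arrivals_seen < n_arrivals:
        # Under the classical retrial policy, retrials matter only while the
        # server is idle; the first of `orbit` Exp(theta) clocks rings at rate
        # orbit * theta (memorylessness lets us resample it every iteration).
        idle = service_end == float('inf')
        next_retrial = (t + random.expovariate(orbit * theta)
                        if idle and orbit > 0 else float('inf'))

        t = min(next_arrival, service_end, next_retrial)
        area += orbit * (t - last_t)
        last_t = t

        if t == next_arrival:                      # primary arrival
            arrivals_seen += 1
            if idle:
                service_end = t + service_sampler()
            else:
                orbit += 1
            next_arrival = t + random.expovariate(lam)
        elif t == service_end:                     # service completion
            service_end = float('inf')
            if random.random() < p:                # Bernoulli feedback
                orbit += 1
        else:                                      # successful retrial
            orbit -= 1
            service_end = t + service_sampler()

    return area / last_t

# Exponential service with mean 0.5: lam * beta_1 + p = 0.65 < 1 (stable).
print(simulate_retrial_feedback_queue(lam=0.5, theta=1.0, p=0.4,
                                      service_sampler=lambda: random.expovariate(2.0)))
```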

Finally, we assume that the input flow of primary arrivals, the intervals between repeated attempts, and the service times are mutually independent.

The state of the system at time t can be described by the Markov process {(C(t), N(t), ξ(t)); t ≥ 0}, where C(t) is the indicator function of the server state: C(t) is equal to 0 or 1 depending on whether the server is free or busy at time t, and N(t) is the number of customers occupying the orbit. If C(t) = 1, then ξ(t) corresponds to the elapsed service time of the customer being served at time t.

Note that the stationary distribution of the system state (the stationary joint distribution of the server state and the number of customers in the orbit) was found in [12], using the supplementary variables method. In this section, we are interested in the embedded Markov chain. To this end, we describe the structure of the latter, determine its ergodicity condition, and obtain its stationary distribution.

2.1. Embedded Markov Chain

Let η_n be the time of the nth departure and q_n the number of customers in the orbit just after the time η_n; then q_n = N(η_n + 0), n ≥ 1. We have the following fundamental recursive equation:

q_{n+1} = q_n − δ_{n+1} + v_{n+1} + u_{n+1},

where (i) v_{n+1} is the number of primary customers arriving at the system during the service time which ends at η_{n+1}. It does not depend on events which have occurred before the beginning of the (n + 1)st service. Its distribution is given by k_j = P(v_{n+1} = j) = ∫_0^∞ e^{−λx} ((λx)^j / j!) dB(x), j ≥ 0, with generating function k(z) = Σ_{j≥0} k_j z^j = β̃(λ − λz) (a numerical check of this distribution is sketched after item (iii) below),

(ii) the Bernoulli random variable δ_{n+1} is equal to 1 or 0 depending on whether the customer who leaves the service area at time η_{n+1} proceeds from the orbit or from outside the system. Its conditional distribution is given by P(δ_{n+1} = 1 | q_n = i) = iθ/(λ + iθ) and P(δ_{n+1} = 0 | q_n = i) = λ/(λ + iθ),

(iii) the random variable u_{n+1} is 0 or 1 depending on whether the served customer leaves the system or goes back to the orbit. We have also that P(u_{n+1} = 0) = 1 − p and P(u_{n+1} = 1) = p, and u_{n+1} is independent of the history of the system up to time η_{n+1}.
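As a small Monte Carlo check (ours), for exponential service the mixed-Poisson integral defining k_j reduces to a geometric law, which is easy to verify numerically.

```python
import numpy as np

# Small numerical check (ours): the number of primary arrivals during one
# service time is mixed-Poisson,
#   k_j = ∫_0^∞ e^{-lam*x} (lam*x)^j / j!  dB(x).
# For exponential service with rate mu this integral is geometric:
#   k_j = (mu / (lam + mu)) * (lam / (lam + mu))**j.
rng = np.random.default_rng(0)
lam, mu, n = 0.5, 2.0, 500_000

services = rng.exponential(1 / mu, size=n)
arrivals = rng.poisson(lam * services)            # plays the role of v_{n+1}

for j in range(4):
    empirical = np.mean(arrivals == j)
    exact = (mu / (lam + mu)) * (lam / (lam + mu)) ** j
    print(f"k_{j}: empirical {empirical:.4f}  exact {exact:.4f}")
```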

The sequence {q_n, n ≥ 1} forms an embedded Markov chain with transition probability matrix P = (p_ij), where p_ij = P(q_{n+1} = j | q_n = i), defined by

p_ij = (λ/(λ + iθ)) ((1 − p) k_{j−i} + p k_{j−i−1}) + (iθ/(λ + iθ)) ((1 − p) k_{j−i+1} + p k_{j−i}),  (2.4)

with the convention k_m = 0 for m < 0. Note that p_ij > 0 only for j ≥ i − 1.

Theorem 2.1. The embedded Markov chain {q_n, n ≥ 1} is ergodic if and only if λβ₁ + p < 1.

Proof. It is not difficult to see that {q_n, n ≥ 1} is irreducible and aperiodic. To find a sufficient condition, we use Foster's criterion, which consists in showing the existence of a nonnegative function f(j), j ∈ ℕ, and ε > 0 such that the mean drift x_j = E[f(q_{n+1}) − f(q_n) | q_n = j] is finite for all j and x_j ≤ −ε for all j except perhaps a finite number. In our case, we consider the function f(j) = j for all j. Then the mean drift is given by x_j = λβ₁ + p − jθ/(λ + jθ). Let x = lim_{j→∞} x_j. Then x = λβ₁ + p − 1. Therefore, the sufficient condition is λβ₁ + p < 1.
To prove that the previous condition is also a necessary condition for ergodicity of our embedded Markov chain, we apply Kaplan's condition: x_j < ∞ for all j ≥ 0, and there is a j₀ such that x_j ≥ 0 for j ≥ j₀. In our case, this condition is verified because p_ij = 0 for j < i − 1 and i > 0 (see (2.4)).
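The ergodicity condition can also be observed empirically by iterating the fundamental recursion directly; the sketch below (ours, assuming exponential service) uses λβ₁ + p = 0.65 < 1 and returns a finite average orbit size, in line with Theorem 2.1.

```python
import random

# Monte Carlo sketch (ours; assumption: exponential service with rate mu, so
# beta_1 = 1/mu).  It iterates the embedded-chain recursion
#   q_{n+1} = q_n - delta_{n+1} + v_{n+1} + u_{n+1}
# and illustrates Theorem 2.1: the orbit stays stable when lam*beta_1 + p < 1.
def embedded_chain(lam, theta, mu, p, n_steps=100_000, seed=2):
    random.seed(seed)
    q, total = 0, 0
    for _ in range(n_steps):
        # delta = 1 iff the customer entering service comes from the orbit
        delta = 1 if q > 0 and random.random() < q * theta / (lam + q * theta) else 0
        service = random.expovariate(mu)
        v = 0                                # primary arrivals during service
        t = random.expovariate(lam)
        while t < service:
            v += 1
            t += random.expovariate(lam)
        u = 1 if random.random() < p else 0  # Bernoulli feedback
        q = q - delta + v + u
        total += q
    return total / n_steps

# lam*beta_1 + p = 0.5*0.5 + 0.4 = 0.65 < 1  ->  finite mean orbit size
print("mean orbit size:", embedded_chain(lam=0.5, theta=1.0, mu=2.0, p=0.4))
```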

2.2. Generating Function of the Stationary Distribution

Now, under the condition λβ₁ + p < 1, we find the stationary distribution π_j = lim_{n→∞} P(q_n = j), j ≥ 0. Using (2.4), one can obtain the Kolmogorov equations for this distribution. Because of the presence of convolutions, these equations can be transformed, with the help of the generating function π(z) = Σ_{j≥0} π_j z^j, into equations (2.7) and (2.8); combining (2.7) and (2.8), we obtain (2.9). We now consider the auxiliary function appearing in (2.9).

It is easy to show that this auxiliary function is decreasing on the interval [0, 1], that z = 1 is its only zero there, and that, consequently, the function is positive for 0 ≤ z < 1.

Besides, the function has a finite limit as z → 1; that is, it can be defined at the point z = 1 by continuity.

This means that for 0 ≤ z ≤ 1 we can rewrite (2.9) as the first-order differential equation (2.12). Solving (2.12) and using (2.8), we obtain the generating function up to a multiplicative constant, which is then determined from the normalization condition.

Finally, we get formula (2.15) for the generating function of the steady-state queue size distribution at departure epochs, which is known in the literature as the stochastic decomposition property: the right-hand side of expression (2.15) can be decomposed into two factors. The first factor is the generating function of the number of customers in the M/G/1 queueing system with Bernoulli feedback (see [13]); the remaining one is the generating function of the number of customers in the M/G/1 retrial queue with feedback given that the server is idle [12]. One can see that formula (2.15) is cumbersome (it includes integrals of Laplace transforms and solutions of functional equations). This is why, in the rest of the paper, we use the general theory of stochastic orderings to investigate the monotonicity properties of the system relative to the strong stochastic ordering, the increasing convex ordering, and the Laplace ordering.

3. Preliminaries

3.1. Stochastic Orders and Ageing Notions

First, let us recall some stochastic orders and ageing notions which are most pertinent to the main results to be developed in the subsequent section.

Definition 3.1. For two random variables X and Y with densities f and g and cumulative distribution functions F and G, respectively, let F̄ = 1 − F and Ḡ = 1 − G be the corresponding survival functions. Provided the integrals in the statements below are well defined, X is said to be smaller than Y in:
(a) the stochastic ordering (denoted by X ≤_st Y) if and only if F̄(t) ≤ Ḡ(t) for all t;
(b) the increasing convex ordering (denoted by X ≤_icx Y) if and only if ∫_t^∞ F̄(u) du ≤ ∫_t^∞ Ḡ(u) du for all t;
(c) the Laplace ordering (denoted by X ≤_L Y) if and only if E[exp(−sX)] ≥ E[exp(−sY)] for all s ≥ 0.
If the random variables of interest are of discrete type and a = (a_n)_{n≥0}, b = (b_n)_{n≥0} are the corresponding distributions, then the definitions above can be given in the following forms:
(a) a ≤_st b if and only if Σ_{m≥n} a_m ≤ Σ_{m≥n} b_m for all n;
(b) a ≤_icx b if and only if Σ_{m≥n} Σ_{k≥m} a_k ≤ Σ_{m≥n} Σ_{k≥m} b_k for all n;
(c) a ≤_L b if and only if Σ_{n≥0} a_n z^n ≥ Σ_{n≥0} b_n z^n for all 0 ≤ z ≤ 1.
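The discrete-type criteria above amount to finite comparisons of (double) tail sums and generating functions; the following Python sketch (ours, with a toy example) implements them directly.

```python
import numpy as np

# Illustrative check (ours) of the discrete-type criteria in Definition 3.1
# for two finitely supported distributions a and b.
def st_leq(a, b):
    """a <=_st b : every tail sum of a is <= the corresponding tail sum of b."""
    ta, tb = np.cumsum(a[::-1])[::-1], np.cumsum(b[::-1])[::-1]
    return np.all(ta <= tb + 1e-12)

def icx_leq(a, b):
    """a <=_icx b : every *double* tail sum of a is <= that of b."""
    ta, tb = np.cumsum(a[::-1])[::-1], np.cumsum(b[::-1])[::-1]
    da, db = np.cumsum(ta[::-1])[::-1], np.cumsum(tb[::-1])[::-1]
    return np.all(da <= db + 1e-12)

def laplace_leq(a, b, grid=np.linspace(0.0, 1.0, 201)):
    """a <=_L b : generating function of a >= that of b on [0, 1]."""
    n = np.arange(len(a))
    ga = np.array([np.sum(a * z**n) for z in grid])
    gb = np.array([np.sum(b * z**n) for z in grid])
    return np.all(ga >= gb - 1e-12)

# a puts more mass on small values than b, so all three orders hold.
a = np.array([0.5, 0.3, 0.2, 0.0])
b = np.array([0.2, 0.3, 0.3, 0.2])
print(st_leq(a, b), icx_leq(a, b), laplace_leq(a, b))
```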

For a comprehensive discussion on these stochastic orders see [6–8].

Definition 3.2. Let X be a positive random variable with distribution function F and finite mean m, and let F₀ denote the exponential distribution function with the same mean as F:
(a) F is HNBUE (harmonically new better than used in expectation) if and only if F ≤_icx F₀ (equivalently, ∫_t^∞ F̄(u) du ≤ m exp(−t/m) for all t ≥ 0);
(b) F is HNWUE (harmonically new worse than used in expectation) if and only if F ≥_icx F₀;
(c) F is of class L if and only if F ≥_L F₀ (equivalently, E[exp(−sX)] ≤ (1 + ms)^{−1} for all s ≥ 0).
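For instance, an Erlang-2 distribution is HNBUE; the short check below (ours; shape and rate are arbitrary) verifies the integrated-tail form of condition (a) numerically.

```python
import numpy as np

# Numerical check (ours): an Erlang-2 service time is HNBUE.  With shape 2 and
# rate 4 the mean is m = 0.5 and the integrated tail has the closed form
#   ∫_t^∞ (1 - B(u)) du = exp(-rate*t) * (t + 2/rate),
# which must stay below m * exp(-t/m) for all t >= 0.
rate = 4.0
m = 2.0 / rate
t = np.linspace(0.0, 10.0, 1001)

int_tail = np.exp(-rate * t) * (t + 2.0 / rate)
print(np.all(int_tail <= m * np.exp(-t / m) + 1e-12))   # True: HNBUE
```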

3.2. Some Useful Lemmas

Consider two M/G/1 retrial queues with classical retrial policy and Bernoulli feedback with parameters (λ^(1), θ^(1), B^(1), p^(1)) and (λ^(2), θ^(2), B^(2), p^(2)), respectively. Let {k_n^(i)} be the distribution of the number of primary calls which arrive during the service time of a call in the ith system, i = 1, 2.

The following lemma turns out to be a useful tool for showing the monotonicity properties of the embedded Markov chain.

Lemma 3.3. If λ^(1) ≤ λ^(2) and B^(1) ≤_s B^(2), then {k_n^(1)} ≤_s {k_n^(2)}, where ≤_s is either ≤_st or ≤_icx.

Proof. To prove that {k_n^(1)} ≤_s {k_n^(2)}, we have to establish the usual numerical inequalities between the corresponding tail sums given in Definition 3.1. The rest of the proof is known in the more general setting of a random summation.
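A quick numerical illustration of Lemma 3.3 for the ≤_st-ordering (ours, assuming exponential services, for which {k_n} is geometric):

```python
import numpy as np

# Sketch check of Lemma 3.3 (ours; assumption: exponential services, so the
# distribution of the number of arrivals during a service time is geometric).
def tail(a):
    return np.cumsum(a[::-1])[::-1]

def k_geometric(lam, mu, n_max=200):
    j = np.arange(n_max)
    return (mu / (lam + mu)) * (lam / (lam + mu)) ** j

k1 = k_geometric(lam=0.4, mu=2.5)   # lam^(1) <= lam^(2), B^(1) <=_st B^(2)
k2 = k_geometric(lam=0.5, mu=2.0)
print(np.all(tail(k1) <= tail(k2) + 1e-12))   # True: {k_n^(1)} <=_st {k_n^(2)}
```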

The next lemma is key to proving the main result in Section 6.

Lemma 3.4. If λ^(1) ≤ λ^(2) and B^(1) ≤_L B^(2), then {k_n^(1)} ≤_L {k_n^(2)}.

Proof. We have k^(i)(z) = Σ_{n≥0} k_n^(i) z^n = β̃^(i)(λ^(i) − λ^(i) z), i = 1, 2, where {k_n^(i)}, i = 1, 2, are the corresponding distributions of the number of new arrivals during a service time.
To prove that {k_n^(1)} ≤_L {k_n^(2)}, we have to establish that k^(1)(z) ≥ k^(2)(z) for all 0 ≤ z ≤ 1 (3.2). The inequality B^(1) ≤_L B^(2) means that β̃^(1)(s) ≥ β̃^(2)(s) for all s ≥ 0.
In particular, for s = λ^(2) − λ^(2)z we have β̃^(1)(λ^(2) − λ^(2)z) ≥ β̃^(2)(λ^(2) − λ^(2)z) (3.3). Since any Laplace transform is a decreasing function, λ^(1) ≤ λ^(2) implies that β̃^(1)(λ^(1) − λ^(1)z) ≥ β̃^(1)(λ^(2) − λ^(2)z) (3.4). By transitivity, (3.3) and (3.4) give (3.2).

4. Stochastic Monotonicity of Transition Operator

Let T be the transition operator of the embedded Markov chain, which associates with every distribution α = (α_n)_{n≥0} the distribution Tα = ((Tα)_m)_{m≥0} such that (Tα)_m = Σ_{n≥0} α_n p_{nm}.

Corollary 4.1 (see [6]). The operator T is monotone with respect to ≤_st if and only if s_{n−1,m} ≤ s_{n,m} for all n ≥ 1 and all m, and T is monotone with respect to ≤_icx if and only if 2 S_{n,m} ≤ S_{n−1,m} + S_{n+1,m} for all n ≥ 1 and all m. Here, s_{n,m} = Σ_{k≥m} p_{n,k} and S_{n,m} = Σ_{k≥m} s_{n,k}.

Theorem 4.2. The transition operator T of the embedded Markov chain is monotone with respect to the orders ≤_st and ≤_icx.

Proof. Computing the tail sums s_{n,m} and the double tail sums S_{n,m} from (2.4), one verifies that s_{n−1,m} ≤ s_{n,m} and 2 S_{n,m} ≤ S_{n−1,m} + S_{n+1,m} for all n and m. Based on Corollary 4.1, we obtain the stated result.
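The two conditions of Corollary 4.1 can also be checked numerically on a truncated version of the matrix (2.4); the sketch below (ours, assuming exponential service and an arbitrary truncation level) does exactly that.

```python
import numpy as np

# Numerical check (ours) of the two conditions of Corollary 4.1, using a
# truncated version of the transition matrix (2.4) with exponential service
# (so k_j is geometric).
lam, theta, mu, p = 0.5, 1.0, 2.0, 0.4
N = 80                                           # truncation of the orbit size

k = (mu / (lam + mu)) * (lam / (lam + mu)) ** np.arange(N + 2)

def kk(j):                                       # k_j, with k_j = 0 for j < 0
    return k[j] if j >= 0 else 0.0

P = np.zeros((N, N))
for i in range(N):
    a = lam / (lam + i * theta)                  # next served customer is primary
    for j in range(N):
        P[i, j] = (a * ((1 - p) * kk(j - i) + p * kk(j - i - 1))
                   + (1 - a) * ((1 - p) * kk(j - i + 1) + p * kk(j - i)))

s = np.cumsum(P[:, ::-1], axis=1)[:, ::-1]       # s_{n,m} = sum_{k>=m} p_{n,k}
S = np.cumsum(s[:, ::-1], axis=1)[:, ::-1]       # S_{n,m} = sum_{k>=m} s_{n,k}

# check well inside the truncation to avoid boundary artefacts
st_monotone  = np.all(s[:40, :] <= s[1:41, :] + 1e-9)
icx_monotone = np.all(2 * S[1:40, :] <= S[:39, :] + S[2:41, :] + 1e-9)
print(st_monotone, icx_monotone)
```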

In Theorem 4.3, we give comparability conditions for two transition operators. Consider two M/G/1 retrial queues with classical retrial policy and Bernoulli feedback with parameters (λ^(1), θ^(1), B^(1), p^(1)) and (λ^(2), θ^(2), B^(2), p^(2)), respectively. Let T^(1), T^(2) be the transition operators of the corresponding embedded Markov chains.

Theorem 4.3. If λ^(1) ≤ λ^(2), θ^(1) ≥ θ^(2), p^(1) ≤ p^(2), and B^(1) ≤_s B^(2), where ≤_s is either ≤_st or ≤_icx, then T^(1) ≤_s T^(2); that is, for any distribution α, one has T^(1)α ≤_s T^(2)α.

Proof. From Stoyan [6], it suffices to establish the inequalities (4.3) and (4.4) between the corresponding tail sums of the transition probabilities p^(1)_{ij} and p^(2)_{ij}. To prove inequality (4.3), fix i and compare the tail sums term by term. By hypothesis, λ^(1) ≤ λ^(2) and θ^(1) ≥ θ^(2); since the function λ ↦ λ/(λ + iθ) is increasing and the function θ ↦ λ/(λ + iθ) is decreasing, we have λ^(1)/(λ^(1) + iθ^(1)) ≤ λ^(2)/(λ^(2) + iθ^(2)), and hence iθ^(1)/(λ^(1) + iθ^(1)) ≥ iθ^(2)/(λ^(2) + iθ^(2)). Besides, p^(1) ≤ p^(2) implies that 1 − p^(1) ≥ 1 − p^(2). Using inequalities (4.8)–(4.10) and Lemma 3.3 (for the ≤_st-ordering), we get (4.3). Following the same technique and using Lemma 3.3 (for the ≤_icx-ordering), we establish inequality (4.4).

5. Stochastic Bounds for the Stationary Distribution

Consider two M/G/1 retrial queues with classical retrial policy and Bernoulli feedback with parameters (λ^(1), θ^(1), B^(1), p^(1)) and (λ^(2), θ^(2), B^(2), p^(2)), respectively, and let π^(1), π^(2) be the corresponding stationary distributions of the number of customers in the system.

Theorem 5.1. If λ^(1) ≤ λ^(2), θ^(1) ≥ θ^(2), p^(1) ≤ p^(2), and B^(1) ≤_s B^(2), then π^(1) ≤_s π^(2), where ≤_s is one of the symbols ≤_st or ≤_icx.

Proof. Using Theorems 4.2 and 4.3, which state, respectively, that T^(1) and T^(2) are monotone with respect to the orders ≤_st and ≤_icx and that T^(1) ≤_s T^(2), we have by induction (T^(1))^n α ≤_s (T^(2))^n α for any distribution α and every n ≥ 1. Taking the limit as n → ∞, we obtain the stated result.
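Theorem 5.1 can be illustrated numerically by comparing the stationary distributions of two truncated embedded chains whose parameters are ordered as in the hypothesis (sketch is ours; exponential services are assumed and the transition probabilities (2.4) are used).

```python
import numpy as np

# Numerical illustration of Theorem 5.1 (ours): two truncated embedded chains
# with exponential services, whose parameters are ordered as in the hypothesis.
def transition_matrix(lam, theta, mu, p, N=120):
    k = (mu / (lam + mu)) * (lam / (lam + mu)) ** np.arange(N + 2)

    def kk(j):
        return k[j] if j >= 0 else 0.0

    P = np.zeros((N, N))
    for i in range(N):
        a = lam / (lam + i * theta)
        for j in range(N):
            P[i, j] = (a * ((1 - p) * kk(j - i) + p * kk(j - i - 1))
                       + (1 - a) * ((1 - p) * kk(j - i + 1) + p * kk(j - i)))
    return P / P.sum(axis=1, keepdims=True)      # renormalize the truncation

def stationary(P, iters=5_000):
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

# System 1 is "smaller" in every parameter: lam, p smaller; theta, mu larger.
pi1 = stationary(transition_matrix(lam=0.4, theta=1.2, mu=2.5, p=0.3))
pi2 = stationary(transition_matrix(lam=0.5, theta=1.0, mu=2.0, p=0.4))

def tails(v):
    return np.cumsum(v[::-1])[::-1]

print(np.all(tails(pi1) <= tails(pi2) + 1e-9))   # True: pi^(1) <=_st pi^(2)
```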

Based on Theorem 5.1, we can establish insensitive stochastic bounds for the generating function (2.15) of the stationary distribution of the embedded Markov chain.

Theorem 5.2. For any M/G/1 retrial queue with classical retrial policy and Bernoulli feedback, the stationary distribution π is greater, relative to the increasing convex ordering, than the stationary distribution of the auxiliary M/D/1 retrial queue with classical retrial policy and Bernoulli feedback that has the same arrival rate λ, retrial rate θ, and feedback probability p, and deterministic service time equal to β₁; the generating function of the latter distribution is obtained from (2.15) by replacing β̃(s) with exp(−sβ₁).

Proof. Consider an auxiliary M/D/1 retrial queue with classical retrial policy and Bernoulli feedback having the same arrival rate λ, retrial rate θ, mean service time β₁, and feedback probability p as those of the M/G/1 retrial queue with classical retrial policy and Bernoulli feedback. Its service times follow a deterministic law with distribution function equal to 0 for x < β₁ and to 1 for x ≥ β₁. From Stoyan [6], it is known that this deterministic distribution is smaller, relative to ≤_icx, than any service time distribution with the same mean. Therefore, the required result follows from Theorem 5.1.
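The key step of the proof, namely that the deterministic distribution is ≤_icx-smaller than any distribution with the same mean, can be checked in a few lines against the exponential case (ours):

```python
import numpy as np

# Check (ours): the integrated tail of a deterministic service time of length m
# never exceeds that of an exponential service time with the same mean m,
# illustrating that the deterministic law is <=_icx a same-mean law.
m = 0.5
t = np.linspace(0.0, 5.0, 501)
det_tail = np.maximum(m - t, 0.0)          # ∫_t^∞ 1{u < m} du = (m - t)^+
exp_tail = m * np.exp(-t / m)              # ∫_t^∞ e^{-u/m} du
print(np.all(det_tail <= exp_tail + 1e-12))   # True
```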

Theorem 5.3. If, in the M/G/1 retrial queue with classical retrial policy and Bernoulli feedback, the service time distribution B is HNBUE (or HNWUE), then π ≤_icx π̃ (or π ≥_icx π̃), where π̃ is the stationary distribution of the number of customers in the M/M/1 retrial queue with classical retrial policy and Bernoulli feedback with the same parameters as those of the M/G/1 retrial queue.

Proof. Consider an auxiliary M/M/1 retrial queue with classical retrial policy and Bernoulli feedback with the same arrival rate λ, feedback probability p, retrial rate θ, and mean service time β₁ as in the M/G/1 retrial queue with classical retrial policy and Bernoulli feedback, but with exponentially distributed service times. If B is HNBUE, then B ≤_icx B₀ (if B is HNWUE, then B ≥_icx B₀), where B₀ denotes the exponential distribution with mean β₁. Therefore, by using Theorem 5.1, we deduce the statement of the theorem.

6. Stochastic Approximations for the Conditional Distribution

We consider the conditional distribution Φ of the stationary queue given that the server is idle. This distribution also appears, as the second factor, in the stochastic decomposition law (2.15) for the stationary queue length; we denote its generating function by Φ(z).

Theorem 6.1. Suppose we have two M/G/1 retrial queues with classical retrial policy and Bernoulli feedback with parameters (λ^(1), θ^(1), B^(1), p^(1)) and (λ^(2), θ^(2), B^(2), p^(2)), respectively. If λ^(1) ≤ λ^(2), θ^(1) ≥ θ^(2), p^(1) ≤ p^(2), and B^(1) ≤_L B^(2), then Φ^(1) ≤_L Φ^(2).

Proof. By Lemma 3.4, we have {k_n^(1)} ≤_L {k_n^(2)}.
Moreover, one has λ^(1)/θ^(1) ≤ λ^(2)/θ^(2).
Together with λ^(1) ≤ λ^(2) and p^(1) ≤ p^(2), these inequalities imply that Φ^(1)(z) ≥ Φ^(2)(z) for all 0 ≤ z ≤ 1, which, by the discrete form of the Laplace ordering in Definition 3.1, means the stochastic inequality Φ^(1) ≤_L Φ^(2).

Theorem 6.2. For any M/G/1 retrial queue with classical retrial policy and Bernoulli feedback, the distribution Φ is less, relative to the Laplace ordering, than the corresponding conditional distribution in the M/D/1 retrial queue with classical retrial policy and Bernoulli feedback with the same parameters and deterministic service time β₁, and, if the service time distribution B is of class L, then the distribution Φ is greater, relative to the ordering ≤_L, than the corresponding distribution in the M/M/1 retrial queue with classical retrial policy and Bernoulli feedback.

Proof. Consider auxiliary M/D/1 and M/M/1 retrial queues with classical retrial policy and Bernoulli feedback with the same arrival rate λ, feedback probability p, retrial rate θ, and mean service time β₁ as those of the original queue.
Since any service time distribution is always less, relative to the ordering ≤_L, than the deterministic distribution with the same mean value (by Jensen's inequality), Theorem 6.1 yields the first statement.
If B is of class L, then B is greater, relative to the ordering ≤_L, than the exponential distribution with the same mean; based on Theorem 6.1, this guarantees the second inequality.
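Both comparisons used in this proof can be checked numerically for a concrete class-L service time; the sketch below (ours, for an Erlang-2 distribution with an arbitrary rate) verifies the two Laplace-transform inequalities.

```python
import numpy as np

# Tiny check (ours) of the two comparisons used in Theorem 6.2, for an
# Erlang-2 service time X with mean m (which belongs to class L):
#   (i)  E[e^{-sX}] >= e^{-s m}       (X <=_L deterministic with the same mean)
#   (ii) E[e^{-sX}] <= 1/(1 + m s)    (X >=_L exponential with the same mean)
shape, rate = 2, 4.0
m = shape / rate
s = np.linspace(0.0, 20.0, 401)

lst = (rate / (rate + s)) ** shape                  # Laplace transform of Erlang(2, rate)
print(np.all(lst >= np.exp(-s * m) - 1e-12),        # (i)
      np.all(lst <= 1.0 / (1.0 + m * s) + 1e-12))   # (ii)
```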

7. Conclusion and Further Research

In this paper, we prove the monotonicity of the transition operator of the embedded Markov chain relative to the strong stochastic ordering and the increasing convex ordering. We obtain comparability conditions for the distribution of the number of customers in the system. Inequalities are derived for the conditional distribution of the stationary queue given that the server is idle. The obtained results bring to the fore insensitive bounds for both the stationary distribution and the conditional distribution of the stationary queue of the considered model.

Monotonicity results are of importance in robustness analysis: if there is uncertainty about the input of the model, then our ordering results provide information on what kind of deviation from the nominal model to expect. Moreover, in gradient estimation one has to control the growth of the cycle length as a function of a change in the model. More precisely, the results established in this paper allow one to bound the measure-valued derivative of the stationary distribution, where the derivative can be translated into unbiased (higher-order) derivative estimators with respect to some parameter (e.g., the arrival rate λ or the retrial rate θ). Such bounds can be used to derive information on the speed of convergence of the gradient estimator. Finally, under some conditions (when the order holds in the strong sense), those results imply a fast convergence of the gradient estimator of the stationary distribution [14–16].