Mathematical Problems in Engineering
Volume 2014, Article ID 767651, 16 pages
http://dx.doi.org/10.1155/2014/767651
Research Article

Product-Form Solutions for Integrated Services Packet Networks and Cloud Computing Systems

Department of Mathematics and State Key Laboratory of Novel Software Technology, Nanjing University, Nanjing 210093, China

Received 25 February 2014; Revised 26 August 2014; Accepted 27 August 2014; Published 13 October 2014

Academic Editor: Joao B. R. Do Val

Copyright © 2014 Wanyang Dai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We iteratively derive product-form solutions of the stationary distributions for a type of preemptive priority multiclass queueing network with multiserver stations. Queueing systems of this type can be used to model the stochastic dynamics of large-scale backbone networks with multiprocessor shared-memory switches or of local (edge) cloud computing centers with parallel-server pools. The queueing networks are Markovian, with exponential interarrival and service time distributions. The iterative solutions obtained can be used for performance analysis or as comparison criteria for approximation and simulation studies. Numerical comparisons with an existing Brownian approximating model (BAM) for general interarrival and service times are provided to show the effectiveness of our newly designed algorithm and of our previously derived BAM. Furthermore, based on the iterative solutions, we analyze network stability for some cases of these queueing systems, which provides insight for more general study.

1. Introduction

At present, integrated services packet networks (ISPN) are widely used to transport a wide range of information such as voice, video, and data. It is foreseeable that this integrated services pattern will be one of the major techniques in future cloud computing based communication systems and the Internet (see, e.g., Mullender and Wood [1]). Introductions to the concept and architecture of cloud computing can be found in, for example, Mell and Grance [2] and Rhoton and Haukioja [3]. A possible cloud computing based telecommunication network architecture (i.e., a large-scale network infrastructure as a service) is shown in Figure 1, where an end-user may require service from a single local cloud computing center or from multiple local and remote cloud computing centers. These centers communicate with each other through a core switching network system (note that the switching system itself can also be viewed and handled independently as a cloud computing system with multiple service pools).

Figure 1: An integrated services cloud computing based network.

Owing to drastic improvements in the transmission speed and reliability of optical fiber, the speed and efficiency of the core switching network system have become the bottleneck in realizing high-speed transport. In an ISPN, information is partitioned into packets according to the employed protocol, such as the Internet protocol (IP). For the purpose of transmission, each packet consists of the user's data payload to be transmitted, a header containing control information (e.g., source and destination addresses, packet type, priority), and a checksum used for error control. High-speed ISPNs require fast packet switches to move packets along their respective virtual paths. The switches are computers with processing and storage capabilities. The main objective of a switch is to route an incoming packet arriving along a particular input link to a specified output link. More precisely, once the incoming packet is entirely received, the switch examines its header and selects the next outgoing link. In other words, packets are transmitted in a store-and-forward manner.

Various pieces of information can be classified into a fixed number of different types with separate requirements of quality of service (e.g., different end-to-end delays and packet loss ratios). Real time traffic packets with stringent delay constraint (e.g., interactive audio/video) are endowed service priority. In the meanwhile, it is imperative to size buffers for nonreal time traffic packets (e.g., data), which can tolerate higher delay but demand much smaller packet loss ratios. Hence, efficient switching and buffer management schemes are needed for switches.

Currently, three basic techniques (see, e.g., Tobagi [4] and Rao and Tripathi [5]) are designed to realize the switching function: space-division, shared-medium, and shared-memory. In our switches, the shared-memory technique is employed, which is comprised of a single dual-ported memory shared by all input and output lines. Packets arriving along input lines are multiplexed into a single stream that is fed to the common memory for storage. Inside the memory, packets are organized into different output buffers, one for each output line. In each output buffer, memory can be further divided into priority output queues, one for each type of traffic. In the meantime, an output stream of packets is formed by retrieving packets from the output queues sequentially, one per queue. Among different traffic types, packets are retrieved according to their priorities. Inside each type, packets are retrieved under the first-in first-out (FIFO) service discipline. Then, the output stream is demultiplexed and packets are transmitted on the output lines. There are some other ways to deal with the output queues such as processor sharing among output lines (see, e.g., Tobagi [4] and Rao and Tripathi [5]). We will address these issues elsewhere (see, e.g., Dai [6]).

In addition to switching, queueing is another main functionality of packet switches. The introduction of queueing function is owing to multiple packets arriving simultaneously from separate input lines and owing to the randomness of packet arrivals from both outside routing and internal routing of the network. There are three possibilities for queueing in a switch: input queueing, output queueing, and shared-memory queueing (see, e.g., Schwarz [7] and Rao and Tripathi [5]). Shared-memory-queueing mechanism is employed in our switches since it has numerous advantages with respect to switching and queueing over other schemes. For example, both switching and queueing functions can be implemented together by controlling the memory read and write properly. Furthermore, modifying the memory read and write control circuit makes the shared-memory switch sufficiently flexible to perform functions such as priority control and other operations.

Nevertheless, no matter what technology is employed in implementing the switch, it places certain limitations on the size of the switch and line speeds. Currently, two possible ways can be used to build large switches to match the transmission speed of optical fiber. The first one is to adopt parallel processors (see, e.g., Figure 2). The second one is to interconnect many switches (known as switch modules) in a multistage configuration to build large switches (see, e.g., Figure 3). The remaining issue is how to reasonably allocate resources of these switches and efficiently evaluate the system performance.

Figure 2: A two-type and four-class network with two multiprocessor switches. Type 1 includes class 1 and class 2. Type 2 consists of class 3 and class 4. Type 1 owns the service priority.
Figure 3: A two-stage switch with four dual-ported shared-memory switching modules.

The statistical characteristics of packet interarrival times and packet sizes have a major impact on switch hardware and software designs because of their effect on network performance. How to identify packet traffic patterns more effectively is a very active and involved research field (see, e.g., Nikolaidis and Akyildiz [8] and Dai [6, 9, 10]). These interarrival times and packet sizes are commonly assumed to be independent and identically distributed (i.i.d.). The doubly stochastic renewal process introduced in Dai [6] is a recent generalization of the input traffic and service processes for a wireless network in a random environment. The effectiveness of these characterizations is supported by recent discoveries in Cao et al. [11] and Dai [9, 10].

In all circumstances, it is imperative to find product-form solutions for those queueing networks under suitable conditions, in order to conduct performance analysis or to provide comparison criteria showing the effectiveness of approximation and/or simulation studies (see, e.g., Dai et al. [6, 12–15]). Furthermore, note that the stochastic dynamics of both the backbone networks with multiprocessor shared-memory switches and the local (edge) cloud computing centers with parallel-server pools in Figure 1 can be described by multiclass queueing networks with parallel servers at each station. Hence, in this paper, we aim to derive product-form solutions iteratively for one particular type of related queueing network, namely, a type of preemptive priority multiclass queueing network in which the interarrival and service times are exponentially distributed. In addition, we provide numerical comparisons with an existing Brownian approximating model (BAM) for general interarrival and service times to show the effectiveness of our newly designed algorithm and our previously derived BAM.

Next, we provide some review of the existing literature associated with the current study. Comparisons between the existing studies and our current discussion are also presented.

Under a general Whittle network framework, product-form solutions are presented in Serfozo [16] for some multiclass networks, including those with sector-dependent (e.g., Example 3.3) and class-station-dependent service rates (e.g., the BCMP networks in Section 3.3, introduced by Baskett et al. [17]). Without considering the interaction among different stations, the distinguishing feature of these networks is that a job's service rate at a station may consist of two intensities: one, referred to as the station intensity, is a function of the total queue length at the station; the other, referred to as the class intensity, is a function of the queue length in the same class as the job being served. Some specific single-class queueing systems among the BCMP networks in Section 3.3 of Serfozo [16] are revisited in Harrison [18, 19] by developing a method based on the Reversed Compound Agent Theorem (RCAT). The corresponding product-form and non-product-form solutions are derived. Nevertheless, as claimed in Harrison [18, 19], heavier notation is involved once a multiclass BCMP network is considered.

Although our networks take the form of networks with multiple types of units as introduced in Serfozo [16], they are beyond those with sector-dependent and class-station-dependent service rates described above. More precisely, for a station in our networks, the station intensity is a function not only of the total queue length but also of combinations of the queue lengths of the various classes. The class intensity depends not only on the queue length of the class itself and/or the total queue length but also on the numbers of jobs in the other classes at the station. Therefore, how to find a suitable function and how to prove our service rates to be -balanced as defined in Chapter 3 of Serfozo [16] is not obvious. Moreover, how to apply the RCAT based method developed in Harrison [18, 19] to our multiclass network cases is also not trivial. Thus, in this paper, we solve the Kolmogorov (balance) equations to obtain the product-form solutions iteratively, which is more amenable to engineering practice and computation. Furthermore, by this method, we can also analyze network stability for some cases of these systems, which provides insight for more general study.

The remainder of this paper is organized as follows. The open priority multiclass queueing network associated with high-speed ISPN is described in Section 2. Our main results including product-form solutions and performance comparisons are presented in Section 3. Numerical comparisons are given in Section 4. The proofs of our main theorems are provided in Section 5. The conclusion of the paper is presented in Section 6.

2. The Queueing Network Model

Note that the stochastic dynamics of both the backbone core networks with multiprocessor shared-memory switches and the local (edge) cloud computing centers with parallel-server pools in Figure 1 can be described by multiclass queueing networks with parallel servers at each station. Therefore, we consider a queueing network that has multiserver stations. Each station indexed by owns servers and has an infinite-capacity waiting buffer. In the network, there are job types. Each type consists of job classes that are distributed at different stations. Therefore, the network is populated by job classes which are labeled by . Upon the arrival of a job of a type from outside the network, it may receive service for only part of the classes and may visit a particular class more than once (but at most finitely many times). Then, it leaves the network (i.e., the network is open). At any given time during its lifetime in the network, the job belongs to one of the classes, and it changes class each time a service is completed. All jobs within a class are served at a unique station, and more than one class may be served at a station (hence the name multiclass queueing network). The ordered sequence of classes that a job visits in the network is called a route. Interrouting among different job types is not allowed anywhere in the network.

We use to denote the set of classes belonging to station . Let denote the station to which class belongs. We implicitly set when and appear together. Associated with each class , there are two i.i.d. sequences of random variables (r.v.s), and , an i.i.d. sequence of -dimensional random vectors, , and two real numbers, and . We suppose that these sequences are mutually independent. The initial r.v.s and have means and , respectively. For each , denotes the interarrival time between the th and the th externally arriving jobs at class . Furthermore, denotes the service time of the th class job. In addition, denotes the routing vector of the th class job. We allow for some classes . It then follows that and are the external arrival rate and service rate for class , respectively. We assume that the routing vector takes values in , where is the -dimensional vector of all 0's. For , is the -dimensional vector whose th component is and whose other components are . When , the th job departing class becomes a class job. Let be the probability that a job departing class becomes a class job (of the same type). Thus, the corresponding matrix is the routing matrix of the network. Furthermore, the matrix is finite; that is, is invertible with , since the network is open. The symbol on a vector or a matrix denotes the transpose, and denotes the identity matrix.

We use for to denote the overall arrival rate of class , including both external arrivals and internal transitions. Then, we have the following traffic equation: or in its vector form: (all vectors in this paper are to be interpreted as column vectors unless explicitly stated otherwise). Note that the unique solution of (3) is given by . For each , if there is a long-run average rate of flow into the class which equals the long-run average rate out of that class, this rate will equal . Furthermore, we define the traffic intensity for station as follows: where with is also referred to as the nominal fraction of time that station is nonidle.
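The traffic equation and traffic intensities above are straightforward to compute numerically. The following sketch solves the traffic equation and checks the per-station intensities for a hypothetical four-class, two-station network; all rates, the routing matrix, and the station layout are illustrative assumptions, not data from the paper.

```python
# Sketch (illustrative data, not from the paper): solving the traffic
# equation lambda = alpha + P' lambda for a hypothetical 4-class network.
import numpy as np

alpha = np.array([1.0, 0.0, 0.8, 0.0])        # external arrival rates per class
P = np.array([[0.0, 1.0, 0.0, 0.0],           # routing matrix: P[k, l] = prob. class k -> class l
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
m = np.array([0.3, 0.4, 0.35, 0.5])           # mean service times per class
station_of = np.array([0, 1, 0, 1])           # class -> station map (hypothetical)
servers = np.array([2, 2])                    # number of servers per station

# Unique solution of the traffic equation: lambda = (I - P')^{-1} alpha.
lam = np.linalg.solve(np.eye(4) - P.T, alpha)

# Traffic intensity per station: (sum of lambda_k * m_k over its classes) / servers.
rho = np.array([(lam[station_of == j] * m[station_of == j]).sum() / servers[j]
                for j in range(2)])
print("effective arrival rates:", lam)
print("traffic intensities:", rho, "stable:", bool((rho < 1).all()))
```

Here each type is a simple two-class chain, so the effective rates equal the external rates routed through the chain; in general the matrix inverse handles revisits as well.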

The order in which jobs are served at each station is dictated by a service discipline. In the current research, we restrict our attention to static buffer priority (SBP) service disciplines, under which the classes at each station are assigned a fixed rank (with no ties). In our queueing network, each type of job is assigned the same priority rank at every station where it may receive service. When a server within a station switches from one job to another, the new job is the leading (i.e., longest waiting) job of the highest-rank nonempty class at the server's station. Within each class, jobs are assumed to be served on a first-in first-out (FIFO) basis. We suppose that the discipline employed is nonidling; that is, a server is never idle when there are jobs waiting to be served at its station. We also assume that the discipline is preemptive-resume; that is, when a job arrives at a station with all servers busy and the job has a higher rank than at least one of the jobs currently being served, the service of one of the lower-rank jobs is interrupted; when a server becomes available to the interrupted job, its service resumes from where it left off. For convenience, and without loss of generality, we use consecutive numbers to index the classes that have the same priority rank at stations to . In other words, the highest priority classes for stations 1 to are indexed by to , the second highest priority classes are indexed by to , and the lowest priority classes are indexed by to . An example of such a two-station network is given in Figure 2. In this network, type 1 traffic may require class 1 and class 2 services, type 2 traffic may require class 3 and class 4 services, and the classes in type 1 have the higher priority at their corresponding stations.

Finally, we define the cumulative arrival, cumulative service, and cumulative routing processes by the following sums: where the th component of is the cumulative number of jobs to class for the first th jobs leaving class with and . Then, we define Note that denotes the total number of external arrivals to class in time interval . represents the total number of class jobs which have finished service in . is the total arrivals to class in including both external arrivals and internal transitions.

3. Steady-State Queue Length Distributions

We use to denote the number of class jobs at station (with and ) at time . It is called the queue length process for class jobs. For convenience, let and be the -dimensional queue length process and -dimensional state vector, respectively. They are given by for and and nonnegative integers . Then, we use to denote the probability of the system being at state at time ; that is, Furthermore, let denote the corresponding steady-state probability of the system being at state , provided the network admits a stationary distribution. In addition, let denote the probability of state for class with and .

Under the usual convention, let denote the smaller one of any two real numbers and . Let denote the larger one of and ; that is, and . Then, for each , we have the following notation: Finally, for , define Furthermore, for , define .

Theorem 1 (steady-state distribution). Assume that all of the service times and external interarrival times are exponentially distributed with rates as before and the traffic intensity for all . Furthermore, suppose that for , , and . Then, for each , the steady-state distribution is given by the following product form: More precisely, for , and, for , the initial distribution is determined by

The following proposition relates condition (12) to the primitive interarrival and service rates for some cases of these systems, which provides insight for more general study.

Proposition 2 (network stability condition). Under the exponential assumptions as stated in Theorem 1, if , for each , the stability condition (12) holds for the following networks.
Net I. Multiclass networks with single-server stations, that is, the number of servers is one for all stations while the number of job types can be arbitrary.
Net II. The number of servers can be arbitrary for all stations while the number of job types equals two ().

We conjecture that for implies condition (12) for our general network. Nevertheless, owing to the complex computation involved, the corresponding analytical demonstration is not a trivial task.

Example 3. Consider a network with three job types () and at least one station having three servers ( for ) while the other stations have at most three servers ( for ). For a station with three servers, condition (12) can be explicitly expressed as follows: for . Under , it is easy to see that the above inequality holds if . Numerical tests in Table 1 have also been conducted and show that the inequality holds even for , but the corresponding analytic demonstration could be nontrivial. A detailed illustration of this example is provided at the end of the paper.

Table 1: Numerical tests for network stability condition (12).

Remark 4. The justifications of Theorem 1 and Proposition 2 are postponed to Section 5. Instead, we will first use these results as comparison criteria to illustrate the effectiveness of the diffusion approximation models developed in Dai [13] and answer the question on when these approximation models can be employed.

4. Numerical Comparisons

First of all, we note that some of the results presented in this section were briefly reported in the short conference version (Dai [20]) of this paper. More precisely, we consider a network with single-server stations under a preemptive priority service discipline. By employing the exact solutions developed in the previous sections, we conduct performance comparisons between these product-form solutions and the approximating ones of Brownian network models.

Brownian network models, which are also known as semimartingale reflecting Brownian motions (SRBM), have been widely employed as approximating models for multiclass queueing networks with general interarrival and service time distributions when the traffic intensity defined in (4) is suitably large or close to one (see, e.g., Dai [13]). The effectiveness of the Brownian network models has been justified in numerical cases by comparing their solutions with either exact solutions or simulation results (see, e.g., Dai [13], Chen and Yao [21], and Dai et al. [22]). Thus, it is meaningful to illustrate the correctness of our newly derived formula by comparing it with the Brownian network models.

For the network with exponential interarrival and service time distributions, by using Theorem 1, we obtain the steady-state mean queue length for each class as and the expected total time (sojourn time) a job has to spend in the system as

For the network with general interarrival and service time distributions, owing to the nature of our network routing structure, the higher priority classes are independent of the lower ones. Then, it follows from the studies in Dai [13], Harrison [23], and Chen and Yao [21] that the steady-state mean sojourn time and mean queue length for each class can be calculated iteratively in priority order as follows: where is the steady-state mean total workload for all classes and . More precisely, it is given by where and are the variances of the interarrival and service time sequences for each class .

In our numerical comparisons, we consider an exponential network with . For this case, the corresponding data are listed in Table 2. In the table, and , and for . From the table, we can see that the SRBMs are more reasonable approximations of their physical queueing counterparts when the traffic intensities for the lowest priority jobs are relatively large.

Table 2: Performance comparisons for a priority multiclass network with three job types.

5. Proofs of Theorem 1 and Proposition 2

5.1. Proof of Theorem 1

For convenience, we introduce some additional notation. Let , , and be defined by which denote, respectively, the cumulative external arrivals to class , the cumulative number of jobs that have finished service at class , and the total arrivals to class in . Then, Theorem 1 can be justified by induction through the following steps.

Step 1. We consider the steady-state distribution for the highest rank classes with each index . In this case, the type index is used in Theorem 1. Owing to the preemptive-resume service discipline and the class routing structure, these classes form a Jackson network. Then, by the theorem in Jackson [24], we have the following product form: where denotes the steady-state probability of state for class at station . More precisely, it is given by with determined by the equation ; that is,

Step 2. We derive the steady-state distribution for the highest rank and the second highest rank classes with . Owing to the preemptive-resume service discipline and the job routing structure, we have where the second equality follows from the preemptive assumption and is the conditional probability at time for classes with at state given classes with at state ; that is, In order to obtain the steady-state distribution for , we consider each state at time for the second highest rank class jobs. There are several ways in which the system can reach this state; they can be summarized in the following formula: where is as defined in (10), and is infinitesimal in ; that is, as . Equation (28) accounts for the following disjoint events.

Event . The -dimensional queue length process remains unchanged at state at times and ; that is, This event consists of the following two parts.

Part One. Suppose for all ; that is, no external jobs arrive at classes with indices in during , and no jobs finish their services, either because the jobs being served at time require service times longer than or because their services are blocked or interrupted by higher rank class jobs. Then, the probability of Part One can be expressed in the following product form: where is defined to be The event in the third equality of (30) is defined as follows: To explain the fourth equality of (30), we introduce more notation. Let denote the probability that for , and let be the probability that for ; that is, These probabilities can be explicitly expressed in terms of the external arrival rates, service rates, and network states; for example, Thus, by the independence assumptions on the external arrival and service processes among different stations and classes, for each we have For the case , a similar argument shows that the result in the above equation also holds.

Part Two. Suppose for at least one ; that is, the number (at least one) of jobs that finished service in class during equals the number of jobs that arrived at the class. It is easy to see that the corresponding probability is given by where is defined in (31).

Then, it follows from (30) and (36) that the probability for Event is given by

Event . There is a such that , and for all , . Similar to the discussion in Event , we can obtain the probability for Event as follows:

Event . There is a such that , and for all , . Then, we can get the probability for Event as follows:

Event . There exist such that , , , and and for all , . Then, we can get the probability for Event as follows:

Now we return to (28). After moving the term from the right-hand side to the left, dividing by , and letting tend to zero, we obtain the following differential equations: Next, we show that the distribution given in Theorem 1 is a steady-state solution of the equations in (41). It is enough to demonstrate that the derivatives in the above equations all vanish upon setting , that is, to prove that By the definition of in Theorem 1, that is, for each , we can rewrite (42) as follows: From the given distribution in Theorem 1, we have Then, by (44) and (45), we know that it suffices to show Owing to the routing structure and the traffic equation (3), we have the following observations: Then, substituting (47) into (46), we obtain the required equality.

Next, we derive the initial distribution corresponding to state . By the network stability condition (12) and for each , it follows from that the following initial distribution is well defined: Hence, we complete the proof of Theorem 1 for the priority types .
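The balance-equation argument of this step can also be checked numerically on a small instance. The sketch below (an illustration, not part of the proof) solves the truncated Kolmogorov equations for one single-server station with two preemptive priority classes and confirms that the high-priority marginal matches the geometric M/M/1 law used in Step 1; all rates and the truncation level are assumptions.

```python
# Numerical sanity check of the balance-equation approach: build the truncated
# CTMC generator for one single-server station with two preemptive priority
# classes, solve pi Q = 0, and compare the class-1 marginal with the M/M/1
# geometric distribution (class 1 ignores class 2 under preemption).
import numpy as np

lam1, lam2, mu1, mu2, N = 0.3, 0.2, 1.0, 1.0, 25
idx = lambda n1, n2: n1 * (N + 1) + n2
S = (N + 1) ** 2
Q = np.zeros((S, S))
for n1 in range(N + 1):
    for n2 in range(N + 1):
        i = idx(n1, n2)
        if n1 < N: Q[i, idx(n1 + 1, n2)] += lam1     # class-1 arrival
        if n2 < N: Q[i, idx(n1, n2 + 1)] += lam2     # class-2 arrival
        if n1 > 0: Q[i, idx(n1 - 1, n2)] += mu1      # class 1 holds the server
        elif n2 > 0: Q[i, idx(n1, n2 - 1)] += mu2    # class 2 served only if n1 = 0
        Q[i, i] = -Q[i].sum()
# Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation.
A = Q.T.copy(); A[-1, :] = 1.0
b = np.zeros(S); b[-1] = 1.0
pi = np.linalg.solve(A, b)
marg1 = pi.reshape(N + 1, N + 1).sum(axis=1)         # marginal of n1
rho1 = lam1 / mu1
geom = (1 - rho1) * rho1 ** np.arange(N + 1)
print(np.abs(marg1 - geom).max())                    # tiny up to truncation error
```

Because the class-1 dynamics are autonomous under preemption, the agreement is exact up to truncation, which is the single-station analogue of the Step 1 product form.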

Step 3. To complete the induction, in this step we first suppose that the result described in Theorem 1 is true for all classes with priority rank ; then, we show that it is true for all classes with priority rank . By an argument similar to that used to obtain (41), we have the following differential equations: Next, we show that the distribution given in Theorem 1 corresponding to the lowest priority type is a steady-state solution of the equations in (49). It suffices to demonstrate that the derivatives in the above equations all vanish upon setting , that is, to prove that By the definition of in Theorem 1, we have Then, we can rewrite (50) as follows: From the given distribution in Theorem 1, we have