Abstract

The class of counting processes constitutes a significant part of applied probability. The classic counting processes include Poisson processes, nonhomogeneous Poisson processes, and renewal processes. More sophisticated counting processes, including Markov renewal processes, Markov modulated Poisson processes, age-dependent counting processes, and the like, have been developed for accommodating a wider range of applications. These counting processes seem to be quite different on the surface, forcing one to understand each of them separately. The purpose of this paper is to develop a unified multivariate counting process, enabling one to express all of the above examples using its components, and to introduce new counting processes. The dynamic behavior of the unified multivariate counting process is analyzed, and its asymptotic behavior as π‘‘β†’βˆž is established. As an application, a manufacturing system with certain maintenance policies is considered, where the optimal maintenance policy for minimizing the total cost is obtained numerically.

1. Introduction

A stochastic process $\{N(t) : t \geq 0\}$ is called a counting process when $N(t)$ is nonnegative, integer-valued, right-continuous, and monotone nondecreasing with $N(0) = 0$. The classic counting processes of importance include the Poisson process, the nonhomogeneous Poisson process (NHPP), and the renewal process. More sophisticated counting processes have been developed in order to accommodate a wider range of applications. A Markov renewal process, for example, extends an ordinary renewal process in that the interarrival time between two successive arrivals has a probability distribution depending on the state transition of the underlying Markov chain; see, for example, Pyke [1, 2]. In Masuda and Sumita [3], the number of entries of a semi-Markov process into a subset of the state space is analyzed, while a Markov-modulated Poisson process (MMPP) is introduced by Neuts [4], where jumps of the MMPP occur according to a Poisson process with intensity $\lambda_i$ whenever the underlying Markov chain is in state $i$.

The MMPP has subsequently been generalized in several different directions. Lucantoni et al. [5] develop a Markovian arrival process (MAP), where a Markov chain defined on $\mathcal{J} = G \cup B$ with $G \cap B = \phi$, $G \neq \phi$, and $B \neq \phi$ is instantaneously replaced at a state $i \in G$ with probability $\tilde{p}_{ji}$ as soon as it enters a state $j \in B$, and the counting process describes the number of such replacements that have occurred in $[0,t]$. If we set $G = \{0_G, 1_G, \ldots, J_G\}$ and $B = \{0_B, 1_B, \ldots, J_B\}$, and an absorbing state $j_B \in B$ can be reached only from its counterpart $j_G \in G$ with instantaneous replacement back to $j_G$ itself, the resulting MAP becomes an MMPP. Another direction for generalizing the MMPP is to replace the underlying Markov chain by a semi-Markov process, yielding what we call a semi-Markov-modulated Poisson process (SMMPP). To the best of the authors' knowledge, the SMMPP was first addressed by Dshalalow [6], where a systematic approach for dealing with modulated random measures is proposed in a more abstract way. Such modulated random measures include the MMPP and the SMMPP as special cases. An application of the SMMPP to queueing theory is discussed in Dshalalow and Russell [7]. More recently, in a series of papers by Agarwal et al. [8] and Dshalalow [9, 10], the original approach has been further extended and refined. The SMMPP is studied in detail independently by Γ–zekici and Soyer [11].

In the SMMPP, the counting process under consideration is modulated according to state transitions of the underlying semi-Markov process. A further generalization may be possible by considering a counting process whose arrival intensity depends not only on the current state of the semi-Markov process but also on the current age of the process. This line of research originates with Sumita and Shanthikumar [12], where an age-dependent counting process generated from a renewal process is studied. Here, items arrive according to an NHPP which is interrupted and reset at random epochs governed by a renewal process.

All of the counting processes discussed above seem to be quite different on the surface, forcing one to understand each of them separately. The purpose of this paper is to develop a unified multivariate counting process which contains all of the above examples as special cases. In this regard, we consider a system where items arrive according to an NHPP. This arrival stream is interrupted from time to time, where the interruptions are governed by a finite semi-Markov process $J(t)$ on $\mathcal{J} = \{0, 1, 2, \ldots, J\}$. Whenever a state transition of the semi-Markov process occurs from $i$ to $j$, the intensity function of the NHPP is switched from $\lambda_i(x)$ to $\lambda_j(x)$ with its initial value reset to $\lambda_j(0)$. In other words, the arrivals of items are generated by the NHPP with $\lambda_i(x)$ when the semi-Markov process is in state $i$, with $x$ denoting the time since the last entry into state $i$. Of particular interest in the analysis of such systems are the multivariate counting processes $M(t) = [M_i(t)]_{i \in \mathcal{J}}$ and $N(t) = [N_{ij}(t)]_{i,j \in \mathcal{J}}$, where $M_i(t)$ counts the cumulative number of items that have arrived in $[0,t]$ while the semi-Markov process is in state $i$, and $N_{ij}(t)$ represents the cumulative number of state transitions of the semi-Markov process from $i$ to $j$ in $[0,t]$. The joint multivariate counting process $[M(t), N(t)]$ enables one to unify many existing counting processes in that they can be derived in terms of the components of $[M(t), N(t)]$. For this reason, hereafter, we call $[M(t), N(t)]$ the unified multivariate counting process. The dynamic behavior of $[M(t), N(t)]$ is captured through analysis of the underlying Laplace transform generating functions, yielding asymptotic expansions of the means and the variances of the partial sums of its components as $t \to \infty$.

Applications of the unified multivariate counting process can be found, for example, in modern communication networks. One may consider a high-speed communication link for transmitting video signals between two locations. Video sequences are transmitted as streams of binary data that vary over time in traffic intensity according to the level of movement, the frequency of scene changes, and the level of transmission quality. Consequently, efficient transmission of video traffic can be achieved through variable bit rate coding. In this coding scheme, data packets are not generated at a constant rate from the original sequence, but rather at varying rates. By doing so, one achieves less fluctuation in transmission quality level and, at the same time, transmission capacity can be freed up whenever possible. As in Maglaris et al. [13], such a mechanism may be implemented by using multimode encoders where each mode reflects a certain level of data compression, and the change between modes is governed by the underlying video sequence according to buffer occupancy levels. A system of this sort can be described in the above framework with $M(t) = \sum_{i \in \mathcal{J}} M_i(t)$ representing the number of packet arrivals at the origination site and $N_{ij}(t)$ describing the number of the encoder changes in $[0,t]$. The state of the underlying semi-Markov process at time $t$ then corresponds to the current mode of the encoder. Other types of applications include system reliability models where the semi-Markov process describes the status of the system under consideration while the interruptions correspond to system failures and replacements. A cost function associated with such a system may then be constructed from the unified multivariate counting process $[M(t), N(t)]$. In this paper, a manufacturing system with certain maintenance policies is considered, where the unified multivariate counting process enables one to determine numerically the optimal maintenance policy for minimizing the total cost.

The structure of this paper is as follows. In Section 2, key transform results of various existing counting processes are summarized. Detailed description of the unified multivariate counting process is provided in Section 3 and its dynamic behavior is analyzed in Section 4 by examining the probabilistic flow of the underlying stochastic processes and deriving transform results involving Laplace transform generating functions. Section 5 is devoted to derivation of the existing counting processes of Section 2 in terms of the components of the unified multivariate counting process. Asymptotic analysis is provided in Section 6, yielding asymptotic expansions of the means and the variances of the partial sums of its components. An application is discussed in Section 7, where a manufacturing system with certain maintenance policies is considered and the optimal maintenance policy for minimizing the total cost is obtained numerically. Some concluding remarks are given in Section 8.

Throughout the paper, matrices and vectors are indicated by double underlines ($\underline{\underline{A}}$, $\underline{\underline{b}}$, etc.) and single underlines ($\underline{X}$, $\underline{y}$, etc.), respectively. The vector with all components equal to 1 is denoted by $\underline{1}$, and the identity matrix is written as $I$. A submatrix of $a$ is defined by $a_{GB} = [a_{ij}]_{i \in G,\, j \in B}$.

2. Various Counting Processes of Interest

In this section, we summarize key transform results of various counting processes of interest, which can be expressed in terms of the components of the unified multivariate counting process proposed in this paper as we will see. We begin the discussion with one of the most classical arrival processes, the Poisson process.

2.1. Poisson Process

A Poisson process of intensity $\lambda$ is characterized by a sequence of independent and identically distributed (i.i.d.) exponential random variables $(X_j)_{j=1}^{\infty}$ with common probability density function (p.d.f.) $f_X(x) = \lambda e^{-\lambda x}$. Let $S_n = \sum_{j=1}^{n} X_j$. Then the associated Poisson process $\{N(t) : t \geq 0\}$ is defined as the counting process satisfying
$$N(t) = n \Longleftrightarrow S_n \leq t < S_{n+1}. \tag{2.1}$$
If a system has an exponential lifetime of mean $\lambda^{-1}$ and is renewed instantaneously upon failure, $X_j$ represents the lifetime of the $j$th renewal cycle. The Poisson process $\{N(t) : t \geq 0\}$ then counts the number of failures that have occurred by time $t$.
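As an illustration (ours, not part of the original text), the constructive definition (2.1) can be simulated directly; the sketch below builds $N(t)$ from i.i.d. exponential interarrival times and checks that the sample mean approaches $\mathrm{E}[N(t)] = \lambda t$:

```python
import numpy as np

def poisson_count(lam, t, rng):
    """N(t) built from i.i.d. exponential interarrival times, as in (2.1)."""
    n, s = 0, rng.exponential(1.0 / lam)   # S_1 = X_1
    while s <= t:
        n += 1                             # S_n <= t < S_{n+1}  =>  N(t) = n
        s += rng.exponential(1.0 / lam)
    return n

rng = np.random.default_rng(1)
lam, t = 2.0, 5.0
samples = [poisson_count(lam, t, rng) for _ in range(100_000)]
print(np.mean(samples), lam * t)           # sample mean vs E[N(t)] = lam * t
```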

Let $p_n(t) = \mathrm{P}[N(t) = n \mid N(0) = 0]$ and define the probability generating function (p.g.f.) $\pi(v,t)$ by
$$\pi(v,t) = \mathrm{E}\left[v^{N(t)}\right] = \sum_{n=0}^{\infty} p_n(t)\, v^n. \tag{2.2}$$
It can be seen (see, e.g., Karlin and Taylor [14]) that
$$\frac{d}{dt}\, p_n(t) = -\lambda p_n(t) + \lambda p_{n-1}(t), \tag{2.3}$$
where $p_n(t) = 0$ for $n < 0$. Multiplying both sides of (2.3) by $v^n$ and summing over $n$ from $0$ to $\infty$, one finds that
$$\frac{\partial}{\partial t}\, \pi(v,t) = -\lambda (1-v)\, \pi(v,t). \tag{2.4}$$
Since $p_n(0) = \delta_{\{n=0\}}$, where $\delta_{\{P\}} = 1$ if statement $P$ is true and $\delta_{\{P\}} = 0$ otherwise, one has $\pi(v,0) = 1$. Equation (2.4) can then be solved as
$$\pi(v,t) = e^{-\lambda t (1-v)}; \qquad p_n(t) = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}. \tag{2.5}$$
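As a numerical sanity check (ours, not part of the original text), the forward equations (2.3) can be integrated for $n = 0, \ldots, n_{\max}$ and compared with the closed form (2.5); the truncation at $n_{\max}$ is harmless as long as $\mathrm{P}[N(t) > n_{\max}]$ is negligible:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import poisson

lam, t_end, n_max = 1.5, 4.0, 40           # truncate at n_max >> lam * t_end

def forward(t, p):
    """Right-hand side of (2.3): dp_n/dt = -lam p_n + lam p_{n-1}."""
    dp = -lam * p
    dp[1:] += lam * p[:-1]
    return dp

p0 = np.zeros(n_max + 1)
p0[0] = 1.0                                # p_n(0) = delta_{n=0}
sol = solve_ivp(forward, (0.0, t_end), p0, rtol=1e-9, atol=1e-12)
exact = poisson.pmf(np.arange(n_max + 1), lam * t_end)   # closed form (2.5)
print(np.abs(sol.y[:, -1] - exact).max())  # maximum discrepancy, near zero
```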

2.2. Nonhomogeneous Poisson Process (NHPP)

An NHPP $\{M(t) : t \geq 0\}$ differs from a Poisson process in that the failure intensity of the system is given as a function of time $t$. Accordingly, (2.3) is rewritten as
$$\frac{d}{dt}\, p_m(t) = -\lambda(t)\, p_m(t) + \lambda(t)\, p_{m-1}(t). \tag{2.6}$$
By taking the generating function of (2.6), one finds that
$$\frac{\partial}{\partial t}\, \pi(u,t) = -\lambda(t)(1-u)\, \pi(u,t). \tag{2.7}$$
With $L(t) = \int_0^t \lambda(y)\, dy$, this equation can be solved as
$$\pi(u,t) = e^{-L(t)(1-u)}; \qquad p_m(t) = e^{-L(t)}\, \frac{L(t)^m}{m!}. \tag{2.8}$$
The reader is referred to Ross [15] for further discussions of NHPPs.
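A minimal simulation sketch (ours; the intensity function is an arbitrary bounded example) uses the standard thinning construction: candidate epochs are generated at a dominating rate $\lambda_{\max} \geq \lambda(x)$ and accepted at epoch $x$ with probability $\lambda(x)/\lambda_{\max}$:

```python
import numpy as np

def nhpp_arrivals(lam_fn, lam_max, t, rng):
    """Arrival epochs of an NHPP on [0, t] by thinning: candidates at rate
    lam_max, each accepted at epoch x with probability lam_fn(x)/lam_max."""
    arrivals, s = [], 0.0
    while True:
        s += rng.exponential(1.0 / lam_max)
        if s > t:
            return arrivals
        if rng.uniform() <= lam_fn(s) / lam_max:
            arrivals.append(s)

rng = np.random.default_rng(0)
lam_fn = lambda x: 1.0 + 0.5 * np.sin(x)   # bounded by lam_max = 1.5
counts = [len(nhpp_arrivals(lam_fn, 1.5, 10.0, rng)) for _ in range(20_000)]
L10 = 10.0 + 0.5 * (1.0 - np.cos(10.0))    # L(t) = int_0^t lam_fn(y) dy
print(np.mean(counts), L10)                # E[M(t)] = L(t), per (2.8)
```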

2.3. Markov-Modulated Poisson Process (MMPP)

Let $\{J(t) : t \geq 0\}$ be a Markov chain in continuous time on $\mathcal{J} = \{0, \ldots, J\}$ governed by a transition rate matrix $\nu = [\nu_{ij}]$. Let $\lambda^{\top} = [\lambda_0, \ldots, \lambda_J]$ and define the associated diagonal matrix $\lambda_D = [\delta_{\{i=j\}} \lambda_i]$. An MMPP $\{M(t) : t \geq 0\}$ characterized by $(\nu, \lambda_D)$ is a pure jump process where jumps of $M(t)$ occur according to a Poisson process with intensity $\lambda_i$ whenever the Markov chain $J(t)$ is in state $i$.

Let πœˆπ‘–=βˆ‘π‘—βˆˆπ’₯πœˆπ‘–π‘— and define 𝜈𝐷=[𝛿{𝑖=𝑗}πœˆπ‘–]. The infinitesimal generator 𝑄 associated with the Markov-chain 𝐽(𝑑) is then given by 𝑄=βˆ’πœˆπ·+𝜈.(2.9) For 𝑖,π‘—βˆˆπ’₯, let 𝑝𝑝(π‘˜,𝑑)=𝑖𝑗;𝑝(π‘˜,𝑑)𝑖𝑗[],(π‘˜,𝑑)=P𝑀(𝑑)=k,𝐽(𝑑)=π‘—βˆ£π½(0)=𝑖,𝑀(0)=0(2.10) and define the associated matrix generating function πœ‹(𝑒,𝑑) by πœ‹(𝑒,𝑑)=βˆžξ“π‘˜=00π‘₯0200𝑑𝑝(π‘˜,𝑑)π‘’π‘˜.(2.11) It can be seen that πœ•π‘πœ•π‘‘π‘–π‘—ξ€·πœ†(π‘˜,𝑑)=βˆ’π‘—+πœˆπ‘—ξ€Έπ‘π‘–π‘—ξ“(π‘˜,𝑑)+π‘Ÿβˆˆπ’₯0π‘₯0200π‘‘π‘π‘–π‘Ÿ(π‘˜,𝑑)πœˆπ‘Ÿπ‘—+πœ†π‘—π‘π‘–π‘—(π‘˜βˆ’1,𝑑).(2.12) In matrix notation, this can be rewritten as πœ•π‘πœ•π‘‘(π‘˜,𝑑)=βˆ’π‘ξ‚†πœ†(π‘˜,𝑑)𝐷+πœˆπ·βˆ’πœˆξ‚‡+𝑝(π‘˜βˆ’1,𝑑)πœ†π·.(2.13) By taking the generating function of (2.13) together with (2.9), one sees that πœ•πœ‹πœ•π‘‘(𝑒,𝑑)=πœ‹ξ‚»π‘„(𝑒,𝑑)βˆ’(1βˆ’π‘’)πœ†π·ξ‚Ό.(2.14) Since 𝑀(0)=0, one has πœ‹(𝑒,0)=𝐼, where 𝐼=[𝛿{𝑖=𝑗}] is the identity matrix, so that the above differential equation can be solved as πœ‹(𝑒,𝑑)=π‘’ξ‚»π‘„βˆ’(1βˆ’π‘’)πœ†π·ξ‚Όπ‘‘=βˆžξ“π‘˜=0π‘‘π‘˜ξ‚»π‘„π‘˜!βˆ’(1βˆ’π‘’)πœ†π·ξ‚Όπ‘˜,(2.15) where 𝐴0def=𝐼 for any square matrix 𝐴. It should be noted that πœ‹(1,𝑑)=𝑒𝑄𝑑, which is the transition probability matrix of 𝐽(𝑑) as it should be. By taking the Laplace transform of both sides of (2.15), ξπœ‹βˆ«(𝑒,𝑠)=∞0π‘’βˆ’π‘ π‘‘πœ‹(𝑒,𝑑) is given by ξπœ‹ξ‚»(𝑒,𝑠)=π‘ πΌβˆ’π‘„+(1βˆ’π‘’)πœ†π·ξ‚Όβˆ’1.(2.16)

In general, the interarrival times generated by an MMPP are neither independent nor identically distributed. In multimedia computer and communication networks, data packets are mingled together with voice and image packets generated from analogue sources. Since arrival patterns of such packets differ from each other, MMPPs have provided a useful means to model arrival processes of packets in multimedia computer and communication networks; see, for example, Heffes and Lucantoni [16] and Sriram and Whitt [17]. Burman and Smith [18] and Knessl et al. [19] studied a single-server queueing system with an MMPP arrival process and general i.i.d. service times. Neuts et al. [20] established characterization theorems for an MMPP to be a renewal process in terms of lumpability of the underlying Markov chain $J(t)$. The reader is referred to Neuts [4] for further discussions of MMPPs.

An MMPP can be extended by replacing the underlying continuous-time Markov chain by a semi-Markov process as discussed in Section 1. This process is denoted by SMMPP. To the best of the authors' knowledge, the SMMPP was first addressed by Dshalalow [6], where a systematic approach for dealing with modulated random measures is proposed in a more abstract way. Such modulated random measures include the MMPP and the SMMPP as special cases. An application of the SMMPP to queueing theory is addressed in Dshalalow and Russell [7]. More recently, the original approach has been further extended and refined in a series of papers by Agarwal et al. [8] and Dshalalow [9, 10]. The SMMPP is studied in detail independently by Γ–zekici and Soyer [11], including transient characterizations and ergodic analysis. Both the MMPP and the SMMPP will be shown to be expressible in terms of the components of the unified multivariate counting process proposed in this paper.

2.4. Renewal Process

Renewal processes can be considered as a generalization of Poisson processes in that the sequence of i.i.d. exponential random variables is replaced by a sequence of arbitrary i.i.d. nonnegative random variables with common distribution function $A(x)$. The resulting counting process $\{N(t) : t \geq 0\}$ is still characterized by (2.1). Let $p_n(t) = \mathrm{P}[N(t) = n \mid N(0) = 0]$ as before. One then sees that
$$p_n(t) = A^{(n)}(t) - A^{(n+1)}(t), \tag{2.17}$$
where $A^{(n)}(t)$ denotes the $n$-fold convolution of $A(x)$ with itself, that is, $A^{(n+1)}(t) = \int_0^t A^{(n)}(t-x)\, dA(x)$ and $A^{(0)}(t) = U(t)$, the step function defined by $U(t) = 1$ for $t \geq 0$ and $U(t) = 0$ otherwise.

Let πœ‹π‘›βˆ«(𝑠)=∞0π‘’βˆ’π‘ π‘‘π‘π‘›(𝑑)𝑑𝑑 and βˆ«π›Ό(𝑠)=∞0π‘’βˆ’π‘ π‘‘π‘‘π΄(𝑑). By taking the Laplace transform of both sides of (2.17), it follows that πœ‹π‘›(𝑠)=1βˆ’π›Ό(𝑠)𝑠𝛼(𝑠)𝑛.(2.18) By taking the generating function of the above equation with βˆ‘πœ‹(𝑣,𝑠)=βˆžπ‘›=0πœ‹π‘›(𝑠)𝑣𝑛, one has πœ‹(𝑣,𝑠)=1βˆ’π›Ό(𝑠)𝑠1.1βˆ’π‘£π›Ό(𝑠)(2.19) The reader is referred to Cox [21], or Karlin and Taylor [14] for further discussions of renewal processes.

2.5. Markov Renewal Process (MRP)

An MRP is an extension of an ordinary renewal process in that, in the interval $[0,t)$, the former describes the recurrence statistics for intermingling classes of epochs of an underlying semi-Markov process, whereas the latter counts the number of recurrences for a single recurrent class of epochs. More specifically, let $\{J(t) : t \geq 0\}$ be a semi-Markov process on $\mathcal{J} = \{0, \ldots, J\}$ governed by a matrix p.d.f. $a(x)$, where $a(x) \geq 0$ and $\int_0^{\infty} a(x)\, dx = A_0$ is a stochastic matrix which is assumed to be ergodic. Let $\varepsilon(\ell)$ be a recurrent class consisting of the entries of the semi-Markov process to state $\ell$ for $\ell \in \mathcal{J}$, and define $\tilde{N}_{\ell r}(t)$ to be a counting process describing the number of recurrences of $\varepsilon(r)$ given that there was an epoch of $\varepsilon(\ell)$ at time $t = 0$. Then $\tilde{N}_{\ell}(t) = [\tilde{N}_{\ell 0}(t), \ldots, \tilde{N}_{\ell J}(t)]$ is called an MRP.

The study of MRPs can be traced back to the early 1960s, represented by the two original papers by Pyke [1, 2], followed by Keilson [22, 23], Keilson and Wishart [24, 25], Γ‡inlar [26, 27], and McLean and Neuts [28]. Since then, the area has attracted many researchers, and a survey paper by Γ‡inlar [29] in 1975 already included more than 70 leading references. The study has been largely focused on the matrix renewal function $H(t) = [H_{\ell r}(t)]$ with $H_{\ell r}(t) = \mathrm{E}[\tilde{N}_{\ell r}(t)]$, the associated matrix renewal density, and the limit theorems. For example, one has the following result concerning the Laplace transform of $H(t)$, due to Keilson [23]:
$$\mathcal{L}\left\{H(t)\right\}(s) = \frac{1}{s}\, \alpha(s)\left[I - \alpha(s)\right]^{-1}, \tag{2.20}$$
where $\alpha(s)$ is the Laplace transform of $a(t)$. The unified multivariate counting process of this paper contains an MRP as a special case and provides more information based on dynamic analysis of the underlying probabilistic flows.
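Once $\alpha(s)$ is available, (2.20) is a single matrix inversion. The sketch below (ours) uses the Markov special case in which state $i$ has an exponential holding time with rate $\mu_i$ and embedded transition matrix $P$, so that $\alpha_{ij}(s) = p_{ij}\,\mu_i/(s+\mu_i)$:

```python
import numpy as np

# Markov special case of the semi-Markov kernel:
# a_ij(x) = p_ij * mu_i * exp(-mu_i x), so alpha_ij(s) = p_ij * mu_i/(s+mu_i).
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])
mu = np.array([1.0, 3.0])

def alpha(s):
    return P * (mu / (s + mu))[:, None]

def H_hat(s):
    """Laplace transform of the matrix renewal function H(t), per (2.20)."""
    a = alpha(s)
    return a @ np.linalg.inv(np.eye(len(mu)) - a) / s

print(H_hat(0.5))
```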

2.6. Number of Entries of a Semi-Markov Process into a Subset of the State Space (NESMPS)

Another type of counting process associated with a semi-Markov process on $\mathcal{J} = \{0, \ldots, J\}$ governed by a matrix p.d.f. $a(x)$ is studied in Masuda and Sumita [3], where the state space $\mathcal{J}$ is decomposed into a set of good states $G\ (\neq \phi)$ and a set of bad states $B\ (\neq \phi)$ satisfying $\mathcal{J} = G \cup B$ and $G \cap B = \phi$. The counting process $N_{GB}(t)$ is then defined to describe the number of entries of $J(t)$ into $B$ by time $t$.

While $N_{GB}(t)$ is a special case of an MRP, the detailed analysis provided in [3] yields much more information. More specifically, let $X(t)$ be the age process associated with $J(t)$, that is,
$$X(t) = t - \sup\left\{\tau : J(t)\big|_{\tau^-}^{\tau^+} \neq 0,\ 0 < \tau \leq t\right\}, \tag{2.21}$$
where $f(x)\big|_{x^-}^{x^+} = f(x^+) - f(x^-)$, and define
$$F_n(x,t) = \left[F_{n:ij}(x,t)\right], \tag{2.22}$$
where
$$F_{n:ij}(x,t) = \mathrm{P}\left[X(t) \leq x,\ N_{GB}(t) = n,\ J(t) = j \mid X(0) = N_{GB}(0) = 0,\ J(0) = i\right]. \tag{2.23}$$
One then has
$$f_n(x,t) = \frac{\partial}{\partial x}\, F_n(x,t). \tag{2.24}$$
The associated matrix Laplace transform generating function can then be defined as
$$\varphi(v,w,s) = \sum_{n=0}^{\infty} v^n \int_0^{\infty}\!\!\int_0^{\infty} e^{-wx - st}\, f_n(x,t)\, dx\, dt. \tag{2.25}$$
It has been shown in Masuda and Sumita [3] that
$$\varphi(v,w,s) = \frac{1}{w+s} \cdot \gamma_0(s)\left\{I - v\,\beta_0(s)\right\}^{-1}\left\{I - \alpha_D(w+s)\right\}. \tag{2.26}$$
Here, with $\alpha(s) = \int_0^{\infty} e^{-st}\, a(t)\, dt$ and $\alpha_{CD}(s) = [\alpha_{ij}(s)]_{i \in C,\, j \in D}$ for $C, D \subset \mathcal{J}$, the following notation is employed:

𝛼𝛼(𝑠)=𝐺𝐺(𝑠)𝛼𝐺𝐡𝛼(𝑠)𝐡𝐺(𝑠)𝛼𝐡𝐡;𝛼(𝑠)𝐷𝛿(𝑠)={𝑖=𝑗}𝛼𝑖(𝑠)withπ›Όπ‘–βˆ‘(𝑠)=π‘—βˆˆπ’₯π›Όπ‘–π‘—πœ’(𝑠);(2.27)𝐺𝐼(𝑠)=πΊπΊβˆ’π›ΌπΊπΊξ‚‡(𝑠)βˆ’1;πœ’π΅ξ‚†πΌ(𝑠)=π΅π΅βˆ’π›Όπ΅π΅ξ‚‡(𝑠)βˆ’1𝛽;(2.28)0(𝑠)=𝐡𝐡0π΅πΊπ›ΌπΊπ΅πœ’π΅(𝑠)π›ΌπΊπ΅πœ’π΅(𝑠)π›Όπ΅πΊπœ’πΊξƒ­π›Ύ(𝑠);(2.29)0(𝛼𝑠)=π΅π΅πœ’π΅(𝑠)πœ’π΅(𝑠)π›Όπ΅πΊπœ’πΊ0(𝑠)πΊπ΅π›ΌπΊπΊπœ’πΊξƒ­(𝑠)+𝐼.(2.30) As we will see, the unified multivariate counting process proposed in this paper enables one to deal with multidimensional generalization of NESMPSs as a special case.

2.7. Markovian Arrival Process (MAP)

As with Poisson processes, a renewal process requires the interarrival times to form a sequence of i.i.d. nonnegative random variables. As we have seen, the class of MMPPs enables one to avoid this requirement by introducing different Poisson arrival rates depending on the state of the underlying Markov chain. An alternative way to avoid this i.i.d. requirement is to adopt the class of MAPs, originally introduced by Lucantoni et al. [5]. We discuss here a slightly generalized version of MAPs in that the set of absorbing states is not necessarily a singleton set.

Let {π½βˆ—(𝑑)βˆΆπ‘‘β‰₯0} be an absorbing Markov-chain on π’₯βˆ—=𝐺βˆͺ𝐡 with πΊβ‰ πœ™,π΅β‰ πœ™ and 𝐺∩𝐡=πœ™, where all states in 𝐡 are absorbing. Without loss of generality, we assume that 𝐺={0,…,π‘š}, and 𝐡={π‘š+1,…,π‘š+𝐾}. For notational convenience, the following transition rate matrices are introduced.

πœˆβˆ—πΊπΊ=ξ€Ίπœˆπ‘–π‘—ξ€»π‘–,π‘—βˆˆπΊ;πœˆβˆ—πΊπ΅=ξ€Ίπœˆπ‘–π‘Ÿξ€»π‘–βˆˆπΊ,π‘Ÿβˆˆπ΅.(2.31) The entire transition rate matrix πœˆβˆ— governing π½βˆ—(𝑑) is then given by πœˆβˆ—=ξƒ¬πœˆβˆ—πΊπΊπœˆβˆ—πΊπ΅00ξƒ­.(2.32)

A replacement Markov chain $\{J(t) : t \geq 0\}$ on $G$ is now generated from $\{J^*(t) : t \geq 0\}$. Starting from a state in $G$, the process $J(t)$ coincides with $J^*(t)$ within $G$. As soon as $J^*(t)$ reaches a state $r \in B$, it is instantaneously replaced at a state $j \in G$ with probability $\tilde{p}_{rj}$ and the process continues. Let
$$C = \nu^*_{GG}; \qquad D = \nu^*_{GB}\, \tilde{p}_{BG}, \tag{2.33}$$
where $\tilde{p}_{BG} = [\tilde{p}_{rj}]_{r \in B,\, j \in G}$. Then the transition rate matrix $\nu$ and the infinitesimal generator $Q$ of $J(t)$ are given by
$$\nu = C + D; \qquad Q = -\nu_D + \nu, \tag{2.34}$$
where
$$\nu_D = C_D + D_D, \tag{2.35}$$
with
$$C_D = \left[\delta_{\{i=j\}} c_i\right],\quad c_i = \sum_{j \in G} c_{ij}; \qquad D_D = \left[\delta_{\{i=j\}} d_i\right],\quad d_i = \sum_{j \in G} d_{ij}. \tag{2.36}$$
Let $p(t)$ be the transition probability matrix of $J(t)$ with its Laplace transform defined by $\pi(s) = \int_0^{\infty} e^{-st}\, p(t)\, dt$. From the Kolmogorov forward equation $(d/dt)\, p(t) = p(t)\, Q$ with $p(0) = I$, one has
$$\pi(s) = \left\{sI - Q\right\}^{-1}. \tag{2.37}$$

Let $\{N_{\mathrm{MAP}}(t) : t \geq 0\}$ be the counting process keeping the record of the number of replacements in $[0,t)$, and define
$$f_{ij}(k,t) = \mathrm{P}\left[N_{\mathrm{MAP}}(t) = k,\ J(t) = j \mid J(0) = i\right], \quad i,j \in G. \tag{2.38}$$
By analyzing the probabilistic flow at state $j$ at time $t + \Delta$, it can be seen that
$$f_{ij}(k,t+\Delta) = f_{ij}(k,t)\left\{1 - \sum_{\ell \in G}\left(c_{j\ell} + d_{j\ell}\right)\Delta\right\} + \sum_{\ell \in G} f_{i\ell}(k,t)\, c_{\ell j}\, \Delta + \sum_{\ell \in G} f_{i\ell}(k-1,t)\, d_{\ell j}\, \Delta + o(\Delta). \tag{2.39}$$
It then follows that
$$\frac{\partial}{\partial t}\, f_{ij}(k,t) = -f_{ij}(k,t)\sum_{\ell \in G}\left(c_{j\ell} + d_{j\ell}\right) + \sum_{\ell \in G} f_{i\ell}(k,t)\, c_{\ell j} + \sum_{\ell \in G} f_{i\ell}(k-1,t)\, d_{\ell j}. \tag{2.40}$$
In matrix notation, the above equation can be rewritten as
$$\frac{\partial}{\partial t}\, f(k,t) = -f(k,t)\, \nu_D + f(k,t)\, C + f(k-1,t)\, D. \tag{2.41}$$
We now introduce the following matrix Laplace transform generating function:
$$\varphi(v,s) = \left[\varphi_{ij}(v,s)\right]; \qquad \varphi_{ij}(v,s) = \int_0^{\infty} e^{-st} \sum_{k=0}^{\infty} f_{ij}(k,t)\, v^k\, dt. \tag{2.42}$$
Taking the Laplace transform with respect to $t$ and the generating function with respect to $k$ of both sides of (2.41), one has
$$\varphi(v,s)\left\{sI + \nu_D - C - vD\right\} = I, \tag{2.43}$$
which can be solved as
$$\varphi(v,s) = \left\{sI - Q + (1-v)D\right\}^{-1}. \tag{2.44}$$
We note from (2.37) and (2.44) that $\varphi(1,s) = \pi(s)$, as it should be.
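Since (2.44) is an explicit matrix inverse, it can be evaluated directly. The sketch below (ours, with arbitrary two-state rate matrices $C$ and $D$) computes $\varphi(v,s)$ and checks the consistency condition $\varphi(1,s) = \pi(s)$:

```python
import numpy as np

# Arbitrary two-state example: C = within-G rates, D = replacement rates.
C = np.array([[0.0, 1.0],
              [0.5, 0.0]])
D = np.array([[2.0, 0.0],
              [0.0, 1.0]])
nu_D = np.diag((C + D).sum(axis=1))
Q = -nu_D + C + D                          # generator of J(t), per (2.34)
I = np.eye(2)

def phi(v, s):
    """phi(v, s) = {s I - Q + (1 - v) D}^{-1}, per (2.44)."""
    return np.linalg.inv(s * I - Q + (1.0 - v) * D)

# Consistency check: phi(1, s) = (s I - Q)^{-1} = pi(s), per (2.37).
print(np.allclose(phi(1.0, 0.7), np.linalg.inv(0.7 * I - Q)))
```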

It may be worth noting that an MMPP is a special case of an MAP, which can be seen in the following manner. Let an MAP be defined on $G \cup B$, where $G = \{0_G, \ldots, J_G\}$ and $B = \{0_B, \ldots, J_B\}$. Transitions within $G$ are governed by $\nu$. An entry into $j_B \in B$ is possible only from $j_G \in G$. When this occurs, the Markov process is immediately replaced at the entering state $j_G$. The counting process for the number of such replacements then has the Laplace transform generating function $\varphi(v,s)$ given in (2.44) with $D$ replaced by $\lambda_D$, which coincides with $\hat{\pi}(u,s)$ of (2.16) for MMPPs, proving the claim.

2.8. Age-Dependent Counting Process Generated from a Renewal Process (ACPGRP)

An age-dependent counting process generated from a renewal process has been introduced and studied by Sumita and Shanthikumar [12], where items arrive according to an NHPP which is interrupted and reset at random epochs governed by a renewal process. More specifically, let $\{N(t) : t \geq 0\}$ be a renewal process associated with a sequence of i.i.d. nonnegative random variables $(X_j)_{j=1}^{\infty}$ with common p.d.f. $a(x)$. The age process $X(t)$ is then defined by
$$X(t) = t - \sup\left\{\tau : N(t)\big|_{\tau^-}^{\tau^+} = 1,\ 0 < \tau \leq t\right\}. \tag{2.45}$$
In other words, $X(t)$ is the elapsed time since the last renewal. We next consider an NHPP $Z(x)$ governed by an intensity function $\lambda(x)$. If we define
$$L(x) = \int_0^x \lambda(y)\, dy, \tag{2.46}$$
one has
$$g(x,k) = \mathrm{P}\left[Z(x) = k\right] = e^{-L(x)}\, \frac{L(x)^k}{k!}, \quad k = 0,1,2,\ldots. \tag{2.47}$$
Of interest, then, is a counting process $\{M(t) : t \geq 0\}$ characterized by
$$\mathrm{P}\left[M(t+\Delta) - M(t) = 1 \mid M(t) = m,\ N(t) = n,\ X(t) = x\right] = \lambda(x)\Delta + o(\Delta), \quad (m,n,x) \in S,\ \Delta > 0. \tag{2.48}$$
Here $S = \mathbb{Z}_+ \times \mathbb{Z}_+ \times \mathbb{R}_+$, where $\mathbb{Z}_+$ is the set of nonnegative integers and $\mathbb{R}_+$ is the set of nonnegative real numbers.

Since the counting process $M(t)$ is governed by the intensity function depending on the age process $X(t)$ of the renewal process $N(t)$, it is necessary to analyze the trivariate process $[M(t), N(t), X(t)]$. Let the multivariate transform of $[M(t), N(t), X(t)]$ be defined by
$$\varphi(u,v,w,s) = \int_0^{\infty} e^{-st}\, \mathrm{E}\left[u^{M(t)}\, v^{N(t)}\, e^{-wX(t)}\right] dt. \tag{2.49}$$
It has been shown in Sumita and Shanthikumar [12] that
$$\varphi(u,v,w,s) = \frac{\beta^*(u,w+s)}{1 - v\,\beta(u,s)}, \tag{2.50}$$
where we define, for $m \geq 0$ with $\bar{A}(t) = \int_t^{\infty} a(x)\, dx$,
$$b_m(t) = a(t)\, g(t,m); \qquad b^*_m(t) = \bar{A}(t)\, g(t,m),$$
$$\beta_m(s) = \int_0^{\infty} e^{-st}\, b_m(t)\, dt; \qquad \beta^*_m(s) = \int_0^{\infty} e^{-st}\, b^*_m(t)\, dt,$$
$$\beta(u,s) = \sum_{m=0}^{\infty} \beta_m(s)\, u^m; \qquad \beta^*(u,s) = \sum_{m=0}^{\infty} \beta^*_m(s)\, u^m. \tag{2.51}$$
The Laplace transform generating function of $M(t)$, defined by
$$\pi(u,s) = \int_0^{\infty} e^{-st}\, \mathrm{E}\left[u^{M(t)}\right] dt, \tag{2.52}$$
is then given by $\pi(u,s) = \varphi(u,1,0,s)$, so that one has from (2.50)
$$\pi(u,s) = \frac{\beta^*(u,s)}{1 - \beta(u,s)}. \tag{2.53}$$

This class of counting processes, denoted by ACPGRP, may be extended by replacing the underlying renewal process with an MMPP or an SMMPP. We call the former a Markov-modulated age-dependent nonhomogeneous Poisson process (MMANHPP) and the latter a semi-Markov-modulated age-dependent nonhomogeneous Poisson process (SMMANHPP). The two extended classes are new and become special cases of the unified counting process proposed in this paper, as we will see.

All of the counting processes discussed in this section are summarized in Figure 1.

3. Unified Multivariate Counting Process [M(t), N(t)]

In this section, we propose a stochastic process representing the unified multivariate counting process discussed in Section 1, which contains all of the counting processes given in Section 2 as special cases. More specifically, we consider a system where items arrive according to an NHPP. This arrival stream is governed by a finite semi-Markov process on $\mathcal{J} = \{0, \ldots, J\}$ in that the intensity function of the NHPP depends on the current state of the semi-Markov process. That is, when the semi-Markov process is in state $i$ with the current dwell time of $x$, items arrive according to a Poisson process with intensity $\lambda_i(x)$. If the semi-Markov process switches its state from $i$ to $j$, the intensity function $\lambda_i(x)$ is interrupted, the intensity function at state $j$ is reset to $\lambda_j(0)$, and the arrival process resumes. Of particular interest are the multivariate counting processes $M(t)^{\top} = [M_0(t), \ldots, M_J(t)]$ and $N(t) = [N_{ij}(t)]$, with $M_i(t)$ and $N_{ij}(t)$ counting the number of items that have arrived in state $i$ by time $t$ and the number of transitions of the semi-Markov process from state $i$ to state $j$ by time $t$, respectively. The two counting processes $M(t)$ and $N(t)$ enable one to introduce a variety of interesting performance indicators, as we will see.

Formally, let $\{J(t) : t \geq 0\}$ be a semi-Markov process on $\mathcal{J} = \{0, \ldots, J\}$ governed by a matrix cumulative distribution function (c.d.f.)
$$A(x) = \left[A_{ij}(x)\right], \tag{3.1}$$
which is assumed to be absolutely continuous with the matrix probability density function (p.d.f.)
$$a(x) = \left[a_{ij}(x)\right] = \frac{d}{dx}\, A(x). \tag{3.2}$$
It should be noted that, if we define $A_i(x)$ and $\bar{A}_i(x)$ by
$$A_i(x) = \sum_{j \in \mathcal{J}} A_{ij}(x); \qquad \bar{A}_i(x) = 1 - A_i(x), \tag{3.3}$$
then $A_i(x)$ is an ordinary c.d.f. and $\bar{A}_i(x)$ is the corresponding survival function. The hazard rate functions associated with the semi-Markov process are then defined as
$$\eta_{ij}(x) = \frac{a_{ij}(x)}{\bar{A}_i(x)}, \quad i,j \in \mathcal{J}. \tag{3.4}$$

For notational convenience, the transition epochs of the semi-Markov process are denoted by $\tau_n$, $n \geq 0$, with $\tau_0 = 0$. The age process $X(t)$ associated with the semi-Markov process is then defined as
$$X(t) = t - \max\left\{\tau_n : 0 \leq \tau_n \leq t\right\}. \tag{3.5}$$
At time $t$ with $J(t) = i$ and $X(t) = x$, the intensity function of the NHPP is given by $\lambda_i(x)$. For the cumulative arrival intensity function $L_i(x)$ in state $i$, one has
$$L_i(x) = \int_0^x \lambda_i(y)\, dy. \tag{3.6}$$
The probability of observing $k$ arrivals in state $i$ within the current age of $x$ can then be obtained as
$$g_i(x,k) = e^{-L_i(x)}\, \frac{L_i(x)^k}{k!}, \quad k = 0,1,2,\ldots,\ i \in \mathcal{J}. \tag{3.7}$$

Of interest are the multivariate counting processes
$$M(t)^{\top} = \left[M_0(t), \ldots, M_J(t)\right], \tag{3.8}$$
where $M_i(t)$ represents the total number of items that have arrived by time $t$ while the semi-Markov process stayed in state $i$, and
$$N(t) = \left[N_{ij}(t)\right], \tag{3.9}$$
with $N_{ij}(t)$ denoting the number of transitions of the semi-Markov process from state $i$ to state $j$ by time $t$. Clearly $N_{\bullet i}(t) \overset{\mathrm{def}}{=} \sum_{\ell \in \mathcal{J}} N_{\ell i}(t)$ denotes the number of entries into state $i$ by time $t$. The initial state is not counted in $N_{\bullet i}(t)$ for any $i \in \mathcal{J}$; in other words, if $J(0) = i$, then $N_{\bullet i}(t)$ remains $0$ until the first return of the semi-Markov process to state $i$. In the next section, we analyze the dynamic behavior of $[M(t), N(t)]$, yielding the relevant Laplace transform generating functions. In the subsequent section, all of the counting processes discussed in Section 2 will be expressed in terms of $M(t)$ and $N(t)$, thereby providing a unified approach for studying various counting processes. The associated asymptotic behavior as $t \to \infty$ will also be discussed.
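Before turning to the transform analysis, a direct simulation of $[M(t), N(t)]$ may clarify the construction. The sketch below (ours) makes the simplifying assumption that the semi-Markov kernel factorizes, that is, the dwell time in state $i$ is drawn independently of the next state, which is then chosen from an embedded transition matrix $P$; a general kernel $a_{ij}(x)$ would couple the two draws. Arrivals within a dwell are generated by thinning, so each $\lambda_i(x)$ must be bounded by the parameter lam_max:

```python
import numpy as np

def unified_path(P, sample_dwell, lam_fns, lam_max, t, j0, rng):
    """One path of [M(t), N(t)] under a factorized semi-Markov kernel:
    dwell time sampled per state, next state drawn from row P[j]."""
    J = len(P)
    M = np.zeros(J, dtype=int)             # M_i(t): arrivals while in state i
    N = np.zeros((J, J), dtype=int)        # N_ij(t): i -> j transitions
    j, elapsed = j0, 0.0
    while elapsed < t:
        dwell = sample_dwell[j](rng)
        horizon = min(dwell, t - elapsed)
        x = rng.exponential(1.0 / lam_max) # thinning: lam_fns[j] <= lam_max
        while x <= horizon:
            if rng.uniform() <= lam_fns[j](x) / lam_max:
                M[j] += 1
            x += rng.exponential(1.0 / lam_max)
        elapsed += dwell
        if elapsed < t:
            k = rng.choice(J, p=P[j])
            N[j, k] += 1
            j = k
    return M, N

rng = np.random.default_rng(5)
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
sample_dwell = [lambda r: r.exponential(1.0), lambda r: r.gamma(2.0, 0.5)]
lam_fns = [lambda x: 2.0 * np.exp(-x), lambda x: 0.5]
M, N = unified_path(P, sample_dwell, lam_fns, 2.0, 50.0, 0, rng)
print(M, N)
```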

4. Dynamic Analysis of [M(t), N(t)]

The purpose of this section is to examine the dynamic behavior of the multivariate stochastic process [𝑀(𝑑),𝑁(𝑑)] introduced in Section 3 by observing its probabilistic flow in the state space. Figure 2 depicts a typical sample path of the multivariate process.

Since $[M(t), N(t)]$ is not Markov, we employ the method of supplementary variables. More specifically, the multivariate stochastic process $[M(t), N(t), X(t), J(t)]$ is considered. This multivariate stochastic process is Markov and has the state space $S = \mathbb{Z}_+^{J+1} \times \mathbb{Z}_+^{(J+1)\times(J+1)} \times \mathbb{R}_+ \times \mathcal{J}$, where $\mathbb{Z}_+^{J+1}$ and $\mathbb{Z}_+^{(J+1)\times(J+1)}$ are the sets of $(J+1)$-dimensional nonnegative integer vectors and $(J+1)\times(J+1)$ nonnegative integer matrices, respectively, $\mathbb{R}_+$ is the set of nonnegative real numbers, and $\mathcal{J} = \{0, \ldots, J\}$. Let $F_{ij}(m,n,x,t)$ be the joint probability distribution of $[M(t), N(t), X(t), J(t)]$ defined by
$$F_{ij}(m,n,x,t) = \mathrm{P}\left[M(t) = m,\ N(t) = n,\ X(t) \leq x,\ J(t) = j \mid M(0) = 0,\ N(0) = 0,\ J(0) = i\right]. \tag{4.1}$$

In order to assure the differentiability of $F_{ij}(m,n,x,t)$ with respect to $x$, we assume that $X(0)$ has an absolutely continuous initial distribution function $D(x)$ with p.d.f. $d(x) = (d/dx)\, D(x)$. (If $X(0) = 0$ with probability 1, we consider a sequence of initial distribution functions $\{D_k(x)\}_{k=1}^{\infty}$ satisfying $D_k(x) \to U(x)$ as $k \to \infty$, where $U(x) = 1$ for $x \geq 0$ and $U(x) = 0$ otherwise. The desired results can be obtained through this limiting process.) One can then define
$$f_{ij}(m,n,x,t) = \frac{\partial}{\partial x}\, F_{ij}(m,n,x,t). \tag{4.2}$$
By interpreting the probabilistic flow of the multivariate process $[M(t), N(t), X(t), J(t)]$ in its state space, one can establish the following equations:
$$f_{ij}(m,n,x,t) = \delta_{\{i=j\}}\, \delta_{\{m = m_i 1_i\}}\, \delta_{\{n=0\}}\, d(x-t)\, \frac{\bar{A}_i(x)}{\bar{A}_i(x-t)}\, g_i(t,m_i) + \left(1 - \delta_{\{n=0\}}\right) \sum_{k=0}^{m_j} f_{ij}\left(m - k 1_j,\, n,\, 0+,\, t-x\right) \bar{A}_j(x)\, g_j(x,k); \tag{4.3}$$
$$f_{ij}(m,n,0+,t) = \left(1 - \delta_{\{n=0\}}\right) \sum_{\ell \in \mathcal{J}} \int_0^{\infty} f_{i\ell}\left(m,\, n - 1_{\ell j},\, x,\, t\right) \eta_{\ell j}(x)\, dx; \tag{4.4}$$
$$f_{ij}(m,n,x,0) = \delta_{\{i=j\}}\, \delta_{\{m=0\}}\, \delta_{\{n=0\}}\, d(x), \tag{4.5}$$
where $1_i$ is the column vector whose $i$th element is equal to 1 with all other elements being 0, $1_{ij} = 1_i 1_j^{\top}$, and $f_{ij}(m,n,0+,t) = 0$ for $n \leq 0$.

The first term on the right-hand side of (4.3) represents the case that $J(t)$ has not left the initial state $J(0) = i$ by time $t$ ($\delta_{\{i=j\}} = 1$ and $\delta_{\{n=0\}} = 1$) and $m_i$ items have arrived during time $t$ ($\delta_{\{m = m_i 1_i\}} = 1$), provided that $J(0) = i$ and $X(0) = x - t$. The second term corresponds to the case that $J(t)$ made at least one transition from $J(0) = i$ by time $t$ ($\delta_{\{n=0\}} = 0$): the multivariate process $[M(t), N(t), X(t), J(t)]$ just entered the state $[m - k 1_j, n, 0+, j]$ at time $t - x$, $J(t)$ remained in state $j$ until time $t$ with $X(t) = x$, and $k$ items arrived during the current age $x$. For the multivariate process to be at $[m, n, 0+, j]$ at time $t$, it must have been at $[m, n - 1_{\ell j}, x, \ell]$ followed by a transition from $\ell$ to $j$ at time $t$, which increases $N_{\ell j}(t)$ by one, explaining (4.4). Equation (4.5) merely describes the initial condition that $[M(0), N(0), X(0), J(0)] = [0, 0, x, i]$.

In what follows, the dynamic behavior of the multivariate process $[M(t), N(t), X(t), J(t)]$ will be captured by establishing the associated Laplace transform generating functions based on (4.3), (4.4), and (4.5). For notational convenience, the following matrix functions are employed:
$$b_k(t) = \left[b_{k:ij}(t)\right]; \qquad b_{k:ij}(t) = a_{ij}(t)\, g_i(t,k), \tag{4.6}$$
$$b^*_{k:D}(t) = \left[\delta_{\{i=j\}}\, b^*_{k:i}(t)\right] = \begin{bmatrix} b^*_{k:0}(t) & & \\ & \ddots & \\ & & b^*_{k:J}(t) \end{bmatrix}; \qquad b^*_{k:i}(t) = \bar{A}_i(t)\, g_i(t,k), \tag{4.7}$$
$$r^*_k(t) = \left[r^*_{k:ij}(t)\right]; \qquad r^*_{k:ij}(t) = g_i(t,k) \int_0^{\infty} d(x-t)\, \frac{a_{ij}(x)}{\bar{A}_i(x-t)}\, dx, \tag{4.8}$$
$$\beta_k(s) = \left[\beta_{k:ij}(s)\right]; \qquad \beta_{k:ij}(s) = \int_0^{\infty} e^{-st}\, b_{k:ij}(t)\, dt, \tag{4.9}$$
$$\beta(u,s) = \left[\beta_{ij}(u_i,s)\right]; \qquad \beta_{ij}(u_i,s) = \sum_{k=0}^{\infty} \beta_{k:ij}(s)\, u_i^k, \tag{4.10}$$
$$\beta^*_{k:D}(s) = \begin{bmatrix} \beta^*_{k:0}(s) & & \\ & \ddots & \\ & & \beta^*_{k:J}(s) \end{bmatrix}; \qquad \beta^*_{k:i}(s) = \int_0^{\infty} e^{-st}\, b^*_{k:i}(t)\, dt, \tag{4.11}$$
$$\beta^*_D(u,s) = \begin{bmatrix} \beta^*_0(u_0,s) & & \\ & \ddots & \\ & & \beta^*_J(u_J,s) \end{bmatrix}; \qquad \beta^*_i(u_i,s) = \sum_{k=0}^{\infty} \beta^*_{k:i}(s)\, u_i^k, \tag{4.12}$$
$$\rho^*_k(s) = \left[\rho^*_{k:ij}(s)\right]; \qquad \rho^*_{k:ij}(s) = \int_0^{\infty} e^{-st}\, r^*_{k:ij}(t)\, dt, \tag{4.13}$$
$$\rho^*(u,s) = \left[\rho^*_{ij}(u_i,s)\right]; \qquad \rho^*_{ij}(u_i,s) = \sum_{k=0}^{\infty} \rho^*_{k:ij}(s)\, u_i^k, \tag{4.14}$$
where $u^m v^n = \prod_{i \in \mathcal{J}} u_i^{m_i} \prod_{(i,j) \in \mathcal{J}\times\mathcal{J} \setminus \{(0,0)\}} v_{ij}^{n_{ij}}$. With $\xi_{ij}(m,n,0+,s)$ denoting the Laplace transform of $f_{ij}(m,n,0+,t)$ with respect to $t$, $\hat{\xi}(u,v,0+,s) = [\hat{\xi}_{ij}(u,v,0+,s)]$ the associated matrix generating function in $(m,n)$, and $\hat{\varphi}(u,v,w,s) = [\hat{\varphi}_{ij}(u,v,w,s)]$ the double Laplace transform generating function of $f_{ij}(m,n,x,t)$, we are now in a position to prove the main theorem of this section.

Theorem 4.1. Let $X(0) = 0$. Then
$$\hat{\xi}(u,v,0+,s) = \tilde{\beta}(u,v,s)\left\{I - \tilde{\beta}(u,v,s)\right\}^{-1}; \tag{4.19}$$
$$\hat{\varphi}(u,v,w,s) = \left\{I - \tilde{\beta}(u,v,s)\right\}^{-1} \beta^*_D(u,w+s), \tag{4.20}$$
where $\tilde{\beta}(u,v,s) = \left[v_{ij} \cdot \beta_{ij}(u_i,s)\right]$.

Proof. First, we assume that $X(0)$ has a p.d.f. $d(x)$. Substituting (4.3) and (4.5) into (4.4), one sees that
$$\begin{aligned} f_{ij}(m,n,0+,t) = {}& \left(1 - \delta_{\{n=0\}}\right) \times \Biggl\{ \sum_{\ell \in \mathcal{J}} \int_0^{\infty} \delta_{\{i=\ell\}}\, \delta_{\{m = m_i 1_i\}}\, \delta_{\{n = 1_{\ell j}\}}\, d(x-t)\, \frac{\bar{A}_i(x)}{\bar{A}_i(x-t)}\, g_i(t,m_i)\, \eta_{\ell j}(x)\, dx \\ & + \sum_{\ell \in \mathcal{J}} \left(1 - \delta_{\{n = 1_{\ell j}\}}\right) \int_0^{\infty} \sum_{k=0}^{m_\ell} f_{i\ell}\left(m - k 1_\ell,\, n - 1_{\ell j},\, 0+,\, t-x\right) \bar{A}_\ell(x)\, g_\ell(x,k)\, \eta_{\ell j}(x)\, dx \Biggr\} \\ = {}& \left(1 - \delta_{\{n=0\}}\right) \Biggl\{ \delta_{\{m = m_i 1_i\}}\, \delta_{\{n = 1_{ij}\}}\, g_i(t,m_i) \int_0^{\infty} d(x-t)\, \frac{a_{ij}(x)}{\bar{A}_i(x-t)}\, dx \\ & + \sum_{\ell \in \mathcal{J}} \left(1 - \delta_{\{n = 1_{\ell j}\}}\right) \sum_{k=0}^{m_\ell} \int_0^{\infty} f_{i\ell}\left(m - k 1_\ell,\, n - 1_{\ell j},\, 0+,\, t-x\right) a_{\ell j}(x)\, g_\ell(x,k)\, dx \Biggr\}. \end{aligned} \tag{4.21}$$
Consequently, one sees from (4.6) and (4.8) that
$$f_{ij}(m,n,0+,t) = \left(1 - \delta_{\{n=0\}}\right) \left\{ \delta_{\{m = m_i 1_i\}}\, \delta_{\{n = 1_{ij}\}}\, r^*_{m_i:ij}(t) + \sum_{\ell \in \mathcal{J}} \left(1 - \delta_{\{n = 1_{\ell j}\}}\right) \sum_{k=0}^{m_\ell} \int_0^{\infty} f_{i\ell}\left(m - k 1_\ell,\, n - 1_{\ell j},\, 0+,\, t-x\right) b_{k:\ell j}(x)\, dx \right\}, \tag{4.22}$$
where $a_{ij}(x) = \bar{A}_i(x)\, \eta_{ij}(x)$ is employed from (3.4). By taking the Laplace transform of both sides of (4.22) with respect to $t$, it follows that
$$\xi_{ij}(m,n,0+,s) = \left(1 - \delta_{\{n=0\}}\right) \left\{ \delta_{\{m = m_i 1_i\}}\, \delta_{\{n = 1_{ij}\}}\, \rho^*_{m_i:ij}(s) + \sum_{\ell \in \mathcal{J}} \left(1 - \delta_{\{n = 1_{\ell j}\}}\right) \sum_{k=0}^{m_\ell} \xi_{i\ell}\left(m - k 1_\ell,\, n - 1_{\ell j},\, 0+,\, s\right) \beta_{k:\ell j}(s) \right\}. \tag{4.23}$$
By taking the multivariate generating function of (4.23) with respect to $m$ and $n$, it can be seen that
$$\begin{aligned} \hat{\xi}_{ij}(u,v,0+,s) &= \sum_{m \in \mathbb{Z}_+^{J+1}} \sum_{n \in \mathbb{Z}_+^{(J+1)\times(J+1)} \setminus \{0\}} \xi_{ij}(m,n,0+,s)\, u^m v^n \\ &= \sum_{m \in \mathbb{Z}_+^{J+1}} \sum_{n \in \mathbb{Z}_+^{(J+1)\times(J+1)} \setminus \{0\}} \Biggl\{ \delta_{\{m = m_i 1_i\}}\, \delta_{\{n = 1_{ij}\}}\, \rho^*_{m_i:ij}(s)\, u^m v^n + \sum_{\ell \in \mathcal{J}} \left(1 - \delta_{\{n = 1_{\ell j}\}}\right) \sum_{k=0}^{m_\ell} \xi_{i\ell}\left(m - k 1_\ell,\, n - 1_{\ell j},\, 0+,\, s\right) \beta_{k:\ell j}(s)\, u^m v^n \Biggr\}. \end{aligned} \tag{4.24}$$
It then follows from (4.10), (4.14), (4.16), and the discrete convolution property that
$$\begin{aligned} \hat{\xi}_{ij}(u,v,0+,s) &= \sum_{m_i=0}^{\infty} u_i^{m_i}\, v_{ij}\, \rho^*_{m_i:ij}(s) + \sum_{\ell \in \mathcal{J}} \Biggl( v_{\ell j} \sum_{m \in \mathbb{Z}_+^{J+1}} \sum_{k=0}^{m_\ell} \sum_{n \in \mathbb{Z}_+^{(J+1)\times(J+1)} \setminus \{0\}} \xi_{i\ell}\left(m - k 1_\ell,\, n,\, 0+,\, s\right) \beta_{k:\ell j}(s)\, u^m v^n \Biggr) \\ &= v_{ij}\, \rho^*_{ij}(u_i,s) + \sum_{\ell \in \mathcal{J}} v_{\ell j}\, \hat{\xi}_{i\ell}(u,v,0+,s)\, \beta_{\ell j}(u_\ell,s). \end{aligned} \tag{4.25}$$
The last expression can be rewritten in matrix form, and one has
$$\hat{\xi}(u,v,0+,s) = \tilde{\rho}^*(u,v,s) + \hat{\xi}(u,v,0+,s)\, \tilde{\beta}(u,v,s), \tag{4.26}$$
which can be solved for $\hat{\xi}(u,v,0+,s)$ as
$$\hat{\xi}(u,v,0+,s) = \tilde{\rho}^*(u,v,s)\left\{I - \tilde{\beta}(u,v,s)\right\}^{-1}, \tag{4.27}$$
where $\tilde{\rho}^*(u,v,s) = \left[v_{ij} \cdot \rho^*_{ij}(u_i,s)\right]$.

Next, we introduce the following double Laplace transform:
$$\varepsilon_{k:i}(w,s) = \int_0^{\infty}\!\!\int_0^{\infty} e^{-wx}\, e^{-st}\, d(x-t)\, \frac{\bar{A}_i(x)}{\bar{A}_i(x-t)}\, g_i(t,k)\, dt\, dx, \tag{4.28}$$
and the associated diagonal matrix
$$\varepsilon_D(u,w,s) = \begin{bmatrix} \varepsilon_0(u_0,w,s) & & \\ & \ddots & \\ & & \varepsilon_J(u_J,w,s) \end{bmatrix}; \qquad \varepsilon_i(u_i,w,s) = \sum_{k=0}^{\infty} \varepsilon_{k:i}(w,s)\, u_i^k. \tag{4.29}$$
By taking the double Laplace transform of (4.3), one sees from (4.7) and (4.28) that
$$\varphi_{ij}(m,n,w,s) = \delta_{\{i=j\}}\, \delta_{\{m = m_i 1_i\}}\, \delta_{\{n=0\}}\, \varepsilon_{m_i:i}(w,s) + \left(1 - \delta_{\{n=0\}}\right) \sum_{k=0}^{m_j} \xi_{ij}\left(m - k 1_j,\, n,\, 0+,\, s\right) \beta^*_{k:j}(w+s). \tag{4.30}$$
By taking the double generating function, this then leads to
$$\begin{aligned} \hat{\varphi}_{ij}(u,v,w,s) &= \sum_{m \in \mathbb{Z}_+^{J+1}} \sum_{n \in \mathbb{Z}_+^{(J+1)\times(J+1)}} \varphi_{ij}(m,n,w,s)\, u^m v^n \\ &= \delta_{\{i=j\}} \sum_{m_i=0}^{\infty} u_i^{m_i}\, \varepsilon_{m_i:i}(w,s) + \sum_{m \in \mathbb{Z}_+^{J+1}} \sum_{n \in \mathbb{Z}_+^{(J+1)\times(J+1)} \setminus \{0\}} \Biggl( \sum_{k=0}^{m_j} \xi_{ij}\left(m - k 1_j,\, n,\, 0+,\, s\right) \beta^*_{k:j}(w+s)\, u^m v^n \Biggr) \\ &= \delta_{\{i=j\}}\, \varepsilon_i(u_i,w,s) + \hat{\xi}_{ij}(u,v,0+,s)\, \beta^*_j(u_j,w+s). \end{aligned} \tag{4.31}$$
By rewriting the last expression in matrix form, it can be seen that
$$\hat{\varphi}(u,v,w,s) = \varepsilon_D(u,w,s) + \hat{\xi}(u,v,0+,s)\, \beta^*_D(u,w+s). \tag{4.32}$$

We now consider the limiting process $D(x) \to U(x)$. For the p.d.f., this limiting process becomes $d(x) \to \delta(x)$, where $\delta(x)$ is the Dirac delta function serving as the unit element for convolution, that is, $f(x) = \int f(x-\tau)\, \delta(\tau)\, d\tau$ for any integrable function $f$. Accordingly, it can be seen from (4.8) that $r^*_{k:ij}(t) \to b_{k:ij}(t)$. This in turn implies from (4.28) that $\varepsilon_{k:i}(w,s) \to \beta^*_{k:i}(w+s)$. Consequently, it follows in matrix form that $\tilde{\rho}^*(u,v,s) \to \tilde{\beta}(u,v,s)$ and $\varepsilon_D(u,w,s) \to \beta^*_D(u,w+s)$. From (4.27), it can be readily seen that $\hat{\xi}(u,v,0+,s) \to \tilde{\beta}(u,v,s)\left\{I - \tilde{\beta}(u,v,s)\right\}^{-1}$, proving (4.19). One also sees from (4.32) that
$$\hat{\varphi}(u,v,w,s) \longrightarrow \beta^*_D(u,w+s) + \hat{\xi}(u,v,0+,s)\, \beta^*_D(u,w+s) = \left[I + \tilde{\beta}(u,v,s)\left\{I - \tilde{\beta}(u,v,s)\right\}^{-1}\right] \beta^*_D(u,w+s) = \left[\left\{I - \tilde{\beta}(u,v,s)\right\} + \tilde{\beta}(u,v,s)\right]\left\{I - \tilde{\beta}(u,v,s)\right\}^{-1} \beta^*_D(u,w+s) = \left\{I - \tilde{\beta}(u,v,s)\right\}^{-1} \beta^*_D(u,w+s), \tag{4.33}$$
which proves (4.20), completing the proof.
In the next section, it will be shown that all the transform results obtained in Section 2 can be derived from Theorem 4.1.

5. Derivation of the Special Cases from the Unified Counting Process

We are now in a position to demonstrate the fact that the proposed multivariate counting process introduced in Section 3 and analysed in Section 4 indeed unifies the existing counting processes discussed in Section 2. We will do so by deriving the transform results of Section 2 from Theorem 4.1.

5.1. Derivation of Poisson Process

Let $N(t)$ be a Poisson process with intensity $\lambda$ as discussed in Section 2.1. From (2.5), one sees that $\hat{\pi}(v,s) = \int_0^{\infty} e^{-st}\, \mathrm{E}\left[v^{N(t)}\right] dt = \int_0^{\infty} e^{-st}\, \pi(v,t)\, dt$ is given by
$$\hat{\pi}(v,s) = \frac{1}{s + \lambda(1-v)}. \tag{5.1}$$

For the unified counting process, we consider a single-state Markov chain in continuous time where only the number of self-transitions in $(0,t]$ is counted. More specifically, let $\mathcal{J} = \{0\}$, $N(t) = [N_{00}(t)]$, $u_0 = 1$, $v_{00} = v$, $w = 0$, $\lambda_0(t) = 0$, and $a_{00}(x) = \lambda e^{-\lambda x}$. We note from (3.7) that $\lambda_0(t) = 0$ implies $g_0(x,k) = \delta_{\{k=0\}}$, so that $b_{k:00}(t) = \delta_{\{k=0\}}\, a_{00}(t)$ from (4.6). This then implies $\beta_{00}(1,s) = \lambda/(s+\lambda)$. Similarly, since $\bar{A}_0(x) = e^{-\lambda x}$, one has $\beta^*_0(1,s) = 1/(s+\lambda)$. It then follows from Theorem 4.1 that
$$\hat{\varphi}(1,v,0,s) = \frac{\beta^*_0(1,s)}{1 - v\,\beta_{00}(1,s)} = \frac{1}{s + \lambda(1-v)},$$
which coincides with (5.1).