Abstract

The class of counting processes constitutes a significant part of applied probability. The classic counting processes include Poisson processes, nonhomogeneous Poisson processes, and renewal processes. More sophisticated counting processes, including Markov renewal processes, Markov-modulated Poisson processes, age-dependent counting processes, and the like, have been developed for accommodating a wider range of applications. These counting processes seem to be quite different on the surface, forcing one to understand each of them separately. The purpose of this paper is to develop a unified multivariate counting process, enabling one to express all of the above examples using its components, and to introduce new counting processes. The dynamic behavior of the unified multivariate counting process is analyzed, and its asymptotic behavior as 𝑡 → ∞ is established. As an application, a manufacturing system with certain maintenance policies is considered, where the optimal maintenance policy for minimizing the total cost is obtained numerically.

1. Introduction

A stochastic process {𝑁(𝑡) : 𝑡 ≥ 0} is called a counting process when 𝑁(𝑡) is nonnegative, right continuous, and monotone nondecreasing with 𝑁(0) = 0. The classic counting processes of importance include a Poisson process, a nonhomogeneous Poisson process (NHPP), and a renewal process. More sophisticated counting processes have been developed in order to accommodate a wider range of applications. A Markov renewal process, for example, extends an ordinary renewal process in that the interarrival time between two successive arrivals has a probability distribution depending on the state transition of the underlying Markov chain; see, for example, Pyke [1, 2]. In Masuda and Sumita [3], the number of entries of a semi-Markov process into a subset of the state space is analyzed, while a Markov-modulated Poisson process (MMPP) is introduced by Neuts [4], where jumps of the MMPP occur according to a Poisson process with intensity λ_𝑖 whenever the underlying Markov-chain is in state 𝑖.

The MMPP is generalized subsequently in several different directions. Lucantoni et al. [5] develop a Markovian arrival process (MAP), where a Markov-chain defined on 𝒥 = 𝐺 ∪ 𝐵 with 𝐺 ∩ 𝐵 = ∅, 𝐺 ≠ ∅, and 𝐵 ≠ ∅ is instantaneously replaced at a state 𝑖 ∈ 𝐺 as soon as it enters a state 𝑗 ∈ 𝐵 with probability p̃_{𝑗𝑖}, and the counting process describes the number of such replacements that have occurred in [0, 𝑡]. If we set 𝐺 = {0_𝐺, 1_𝐺, …, 𝐽_𝐺} and 𝐵 = {0_𝐵, 1_𝐵, …, 𝐽_𝐵}, and an absorbing state 𝑗_𝐵 ∈ 𝐵 can be reached only from its counterpart 𝑗_𝐺 ∈ 𝐺 with instantaneous replacement back to 𝑗_𝐺 itself, the resulting MAP becomes an MMPP. Another direction for generalizing the MMPP is to replace the underlying Markov-chain by a semi-Markov process, which we call a semi-Markov-modulated Poisson process (SMMPP). To the best knowledge of the authors, the SMMPP is first addressed by Dshalalow [6], where a systematic approach for dealing with modulated random measures is proposed in a more abstract way. Such modulated random measures include the MMPP and the SMMPP as special cases. An application of the SMMPP to queueing theory is discussed in Dshalalow and Russell [7]. More recently, in a series of papers by Agarwal et al. [8] and Dshalalow [9, 10], the original approach above has been further extended and refined. The SMMPP is studied in detail independently by Özekici and Soyer [11].

In the SMMPP, the counting process under consideration is modulated according to state transitions of the underlying semi-Markov process. A further generalization may be possible by considering a counting process whose arrival intensity depends not only on the current state of the semi-Markov process but also on the current age of the process. This line of research originates with Sumita and Shanthikumar [12], where an age-dependent counting process generated from a renewal process is studied. Here, items arrive according to an NHPP which is interrupted and reset at random epochs governed by a renewal process.

All of the counting processes discussed above seem to be quite different on the surface, forcing one to understand each of them separately. The purpose of this paper is to develop a unified multivariate counting process which contains all of the above examples as special cases. In this regard, we consider a system where items arrive according to an NHPP. This arrival stream is interrupted from time to time, where the interruptions are governed by a finite semi-Markov process 𝐽(𝑡) on 𝒥 = {0, 1, 2, …, 𝐽}. Whenever a state transition of the semi-Markov process occurs from 𝑖 to 𝑗, the intensity function of the NHPP is switched from λ_𝑖(𝑥) to λ_𝑗(𝑥) with its initial value reset to λ_𝑗(0). In other words, the arrivals of items are generated by the NHPP with λ_𝑖(𝑥) when the semi-Markov process is in state 𝑖, with 𝑥 denoting the time since the last entry into state 𝑖. Of particular interest in the analysis of such systems are the multivariate counting processes 𝑀(𝑡) = [𝑀_𝑖(𝑡)]_{𝑖∈𝒥} and 𝑁(𝑡) = [𝑁_{𝑖𝑗}(𝑡)]_{𝑖,𝑗∈𝒥}, where 𝑀_𝑖(𝑡) counts the cumulative number of items that have arrived in [0, 𝑡] while the semi-Markov process is in state 𝑖, and 𝑁_{𝑖𝑗}(𝑡) represents the cumulative number of state transitions of the semi-Markov process from 𝑖 to 𝑗 in [0, 𝑡]. The joint multivariate counting process [𝑀(𝑡), 𝑁(𝑡)] enables one to unify many existing counting processes in that they can be derived in terms of the components of [𝑀(𝑡), 𝑁(𝑡)]. For this reason, hereafter, we call [𝑀(𝑡), 𝑁(𝑡)] the unified multivariate counting process. The dynamic behavior of [𝑀(𝑡), 𝑁(𝑡)] is captured through analysis of the underlying Laplace transform generating functions, yielding asymptotic expansions of the means and the variances of the partial sums of its components as 𝑡 → ∞.

Applications of the unified multivariate counting process can be found, for example, in modern communication networks. One may consider a high-speed communication link for transmitting video signals between two locations. Video sequences are transmitted as streams of binary data that vary over time in traffic intensity according to the level of movement, the frequency of scene changes, and the level of transmission quality. Consequently, efficient transmission of video traffic can be achieved through variable bit rate coding. In this coding scheme, data packets are not generated at a constant rate from the original sequence, but rather at varying rates. By doing so, one achieves less fluctuation in transmission quality level and, at the same time, transmission capacity can be freed up whenever possible. As in Maglaris et al. [13], such a mechanism may be implemented by using multimode encoders where each mode reflects a certain level of data compression, and the change between modes is governed by the underlying video sequence according to buffer occupancy levels. A system of this sort can be described in the above framework with 𝑀(𝑡) = Σ_{𝑖∈𝒥} 𝑀_𝑖(𝑡) representing the number of packet arrivals at the origination site and 𝑁_{𝑖𝑗}(𝑡) describing the number of encoder changes in [0, 𝑡]. The state of the underlying semi-Markov process at time 𝑡 then corresponds to the current mode of the encoder. Other types of applications include system reliability models where the semi-Markov process describes the status of the system under consideration while the interruptions correspond to system failures and replacements. A cost function associated with such a system may then be constructed from the unified multivariate counting process [𝑀(𝑡), 𝑁(𝑡)]. In this paper, a manufacturing system with certain maintenance policies is considered, where the unified multivariate counting process enables one to determine numerically the optimal maintenance policy for minimizing the total cost.

The structure of this paper is as follows. In Section 2, key transform results of various existing counting processes are summarized. Detailed description of the unified multivariate counting process is provided in Section 3 and its dynamic behavior is analyzed in Section 4 by examining the probabilistic flow of the underlying stochastic processes and deriving transform results involving Laplace transform generating functions. Section 5 is devoted to derivation of the existing counting processes of Section 2 in terms of the components of the unified multivariate counting process. Asymptotic analysis is provided in Section 6, yielding asymptotic expansions of the means and the variances of the partial sums of its components. An application is discussed in Section 7, where a manufacturing system with certain maintenance policies is considered and the optimal maintenance policy for minimizing the total cost is obtained numerically. Some concluding remarks are given in Section 8.

Throughout the paper, matrices and vectors are indicated by double underlines (A̲̲, b̲̲, etc.) and single underlines (X̲, y̲, etc.), respectively. The vector with all components equal to 1 is denoted by 1̲ and the identity matrix is written as I̲̲. A submatrix of a̲̲ is defined by a_{𝐺𝐵} = [a_{𝑖𝑗}]_{𝑖∈𝐺, 𝑗∈𝐵}.

2. Various Counting Processes of Interest

In this section, we summarize key transform results of various counting processes of interest, which can be expressed in terms of the components of the unified multivariate counting process proposed in this paper as we will see. We begin the discussion with one of the most classical arrival processes, the Poisson process.

2.1. Poisson Process

A Poisson process of intensity λ is characterized by a sequence of independently and identically distributed (i.i.d.) exponential random variables (𝑋_𝑗)_{𝑗=1}^∞ with common probability density function (p.d.f.) 𝑓_𝑋(𝑥) = λe^{−λ𝑥}. Let 𝑆_𝑛 = Σ_{𝑗=1}^𝑛 𝑋_𝑗. Then the associated Poisson process {𝑁(𝑡) : 𝑡 ≥ 0} is defined as a counting process satisfying

\[
N(t) = n \quad \text{for } S_n \le t < S_{n+1}. \tag{2.1}
\]

If a system has an exponential lifetime of mean λ^{−1} and is renewed instantaneously upon failure, 𝑋_𝑗 represents the lifetime of the 𝑗th renewal cycle. The Poisson process {𝑁(𝑡) : 𝑡 ≥ 0} then counts the number of failures that have occurred by time 𝑡.

Let 𝑝_𝑛(𝑡) = P[𝑁(𝑡) = 𝑛 | 𝑁(0) = 0] and define the probability generating function (p.g.f.) π(𝑣, 𝑡) by

\[
\pi(v,t) = \mathrm{E}\left[v^{N(t)}\right] = \sum_{n=0}^{\infty} p_n(t)\,v^n. \tag{2.2}
\]

It can be seen, see, for example, Karlin and Taylor [14], that

\[
\frac{d}{dt}p_n(t) = -\lambda\,p_n(t) + \lambda\,p_{n-1}(t), \tag{2.3}
\]

where 𝑝_𝑛(𝑡) = 0 for 𝑛 < 0. Multiplying both sides of (2.3) by 𝑣^𝑛 and summing from 0 to ∞, one then finds that

\[
\frac{\partial}{\partial t}\pi(v,t) = -\lambda(1-v)\,\pi(v,t). \tag{2.4}
\]

Since 𝑝_𝑛(0) = δ_{\{𝑛=0\}}, where δ_{\{𝑃\}} = 1 if statement 𝑃 is true and δ_{\{𝑃\}} = 0 otherwise, one has π(𝑣, 0) = 1. Equation (2.4) can then be solved as

\[
\pi(v,t) = e^{-\lambda t(1-v)}; \qquad p_n(t) = e^{-\lambda t}\,\frac{(\lambda t)^n}{n!}. \tag{2.5}
\]
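As a quick numerical sanity check of (2.5), one may simulate the i.i.d. exponential construction directly. The following minimal sketch (ours, not part of the original paper; the rate λ, the time point, and the trial count are illustrative) compares the empirical distribution of 𝑁(𝑡) with the Poisson probabilities:

```python
import math
import random

lam, t, trials = 2.0, 1.5, 100_000

def poisson_count(lam, t):
    """Number of arrivals in [0, t] of a Poisson process of rate lam."""
    s, n = random.expovariate(lam), 0
    while s <= t:          # accumulate exponential interarrival times
        n += 1
        s += random.expovariate(lam)
    return n

counts = [poisson_count(lam, t) for _ in range(trials)]
for n in range(5):
    empirical = sum(c == n for c in counts) / trials
    exact = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
    print(f"p_{n}({t}) ~ {empirical:.4f}  (exact {exact:.4f})")
```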

2.2. Nonhomogeneous Poisson Process (NHPP)

An NHPP {𝑀(𝑡) : 𝑡 ≥ 0} differs from a Poisson process in that the failure intensity of the system is given as a function λ(𝑡) of time 𝑡. Accordingly, (2.3) should be rewritten as

\[
\frac{d}{dt}p_m(t) = -\lambda(t)\,p_m(t) + \lambda(t)\,p_{m-1}(t). \tag{2.6}
\]

By taking the generating function of (2.6), one finds that

\[
\frac{\partial}{\partial t}\pi(u,t) = -\lambda(t)(1-u)\,\pi(u,t). \tag{2.7}
\]

With 𝐿(𝑡) = ∫_0^𝑡 λ(𝑦) 𝑑𝑦, this equation can be solved as

\[
\pi(u,t) = e^{-L(t)(1-u)}; \qquad p_m(t) = e^{-L(t)}\,\frac{L(t)^m}{m!}. \tag{2.8}
\]

The reader is referred to Ross [15] for further discussions of NHPPs.
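For an NHPP with a bounded intensity, sample paths can be generated by Lewis-Shedler thinning of a homogeneous Poisson process. A minimal sketch follows (ours, not from the paper; the intensity λ(𝑥) = 0.5 + 0.3𝑥 and the horizon are illustrative, and we exploit the fact that this λ is increasing so that λ(𝑡_end) bounds it on the whole interval):

```python
import random

def sample_nhpp(lam, lam_max, t_end):
    """Arrival epochs on [0, t_end] of an NHPP with lam(t) <= lam_max."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(lam_max)          # candidate epoch at rate lam_max
        if t > t_end:
            return arrivals
        if random.random() <= lam(t) / lam_max:   # keep with probability lam(t)/lam_max
            arrivals.append(t)

lam = lambda x: 0.5 + 0.3 * x                     # illustrative increasing intensity
t_end, runs = 4.0, 20_000
mean_count = sum(len(sample_nhpp(lam, lam(t_end), t_end)) for _ in range(runs)) / runs
print(mean_count, 0.5 * t_end + 0.15 * t_end ** 2)  # E[M(t)] vs L(t) = 0.5t + 0.15t^2
```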

2.3. Markov-Modulated Poisson Process (MMPP)

Let {𝐽(𝑡) : 𝑡 ≥ 0} be a Markov-chain in continuous time on 𝒥 = {0, …, 𝐽} governed by a transition rate matrix ν = [ν_{𝑖𝑗}]. Let λ = [λ_0, …, λ_𝐽] and define the associated diagonal matrix λ_𝐷 = [δ_{\{𝑖=𝑗\}}λ_𝑖]. An MMPP {𝑀(𝑡) : 𝑡 ≥ 0} characterized by (ν, λ_𝐷) is a pure jump process where jumps of 𝑀(𝑡) occur according to a Poisson process with intensity λ_𝑖 whenever the Markov-chain 𝐽(𝑡) is in state 𝑖.

Let ν_𝑖 = Σ_{𝑗∈𝒥} ν_{𝑖𝑗} and define ν_𝐷 = [δ_{\{𝑖=𝑗\}}ν_𝑖]. The infinitesimal generator 𝑄 associated with the Markov-chain 𝐽(𝑡) is then given by

\[
Q = -\nu_D + \nu. \tag{2.9}
\]

For 𝑖, 𝑗 ∈ 𝒥, let

\[
p(k,t) = \left[p_{ij}(k,t)\right]; \qquad p_{ij}(k,t) = \mathrm{P}\left[M(t)=k,\ J(t)=j \mid J(0)=i,\ M(0)=0\right], \tag{2.10}
\]

and define the associated matrix generating function π(𝑢, 𝑡) by

\[
\pi(u,t) = \sum_{k=0}^{\infty} p(k,t)\,u^k. \tag{2.11}
\]

It can be seen that

\[
\frac{\partial}{\partial t}p_{ij}(k,t) = -\left(\lambda_j+\nu_j\right)p_{ij}(k,t) + \sum_{r\in\mathcal{J}} p_{ir}(k,t)\,\nu_{rj} + \lambda_j\,p_{ij}(k-1,t). \tag{2.12}
\]

In matrix notation, this can be rewritten as

\[
\frac{\partial}{\partial t}p(k,t) = -p(k,t)\left(\lambda_D+\nu_D\right) + p(k,t)\,\nu + p(k-1,t)\,\lambda_D. \tag{2.13}
\]

By taking the generating function of (2.13) together with (2.9), one sees that

\[
\frac{\partial}{\partial t}\pi(u,t) = \pi(u,t)\left\{Q-(1-u)\lambda_D\right\}. \tag{2.14}
\]

Since 𝑀(0) = 0, one has π(𝑢, 0) = 𝐼, where 𝐼 = [δ_{\{𝑖=𝑗\}}] is the identity matrix, so that the above differential equation can be solved as

\[
\pi(u,t) = e^{\left\{Q-(1-u)\lambda_D\right\}t} = \sum_{k=0}^{\infty}\frac{t^k}{k!}\left\{Q-(1-u)\lambda_D\right\}^k, \tag{2.15}
\]

where 𝐴^0 := 𝐼 for any square matrix 𝐴. It should be noted that π(1, 𝑡) = e^{𝑄𝑡}, which is the transition probability matrix of 𝐽(𝑡), as it should be. By taking the Laplace transform of both sides of (2.15), π̂(𝑢, 𝑠) = ∫_0^∞ e^{−𝑠𝑡} π(𝑢, 𝑡) 𝑑𝑡 is given by

\[
\hat{\pi}(u,s) = \left[sI - Q + (1-u)\lambda_D\right]^{-1}. \tag{2.16}
\]
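The matrix transform (2.16) is straightforward to evaluate numerically. The sketch below (ours; the two-state rates and intensities are illustrative) computes π̂(𝑢, 𝑠) and verifies the boundary case 𝑢 = 1 against [𝑠𝐼 − 𝑄]^{−1}:

```python
import numpy as np

nu = np.array([[0.0, 1.0],
               [2.0, 0.0]])              # off-diagonal transition rates nu_ij
Q = nu - np.diag(nu.sum(axis=1))         # infinitesimal generator, cf. (2.9)
lam_D = np.diag([3.0, 0.5])              # Poisson intensities per state

def pi_hat(u, s):
    """Matrix Laplace transform generating function of (2.16)."""
    return np.linalg.inv(s * np.eye(2) - Q + (1 - u) * lam_D)

# At u = 1 this reduces to the transform of the transition matrix of J(t):
print(np.allclose(pi_hat(1.0, 0.7), np.linalg.inv(0.7 * np.eye(2) - Q)))  # True
```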

In general, the interarrival times generated by an MMPP are neither independent nor identically distributed. In multimedia computer and communication networks, data packets are mingled together with voice and image packets generated from analogue sources. Since arrival patterns of such packets differ from each other, MMPPs have provided a useful means to model arrival processes of packets in multimedia computer and communication networks; see, for example, Heffes and Lucantoni [16] and Sriram and Whitt [17]. Burman and Smith [18] and Knessl et al. [19] studied a single-server queueing system with an MMPP arrival process and general i.i.d. service times. Neuts et al. [20] established characterization theorems for an MMPP to be a renewal process in terms of lumpability of the underlying Markov-chain 𝐽(𝑡). The reader is referred to Neuts [4] for further discussions of MMPPs.

An MMPP can be extended by replacing the underlying Markov-chain in continuous time by a semi-Markov process as discussed in Section 1; the resulting process is denoted by SMMPP. As noted there, the SMMPP is first addressed by Dshalalow [6] through a systematic and more abstract treatment of modulated random measures containing the MMPP and the SMMPP as special cases, an application of the SMMPP to queueing theory is given in Dshalalow and Russell [7], and the original approach has been further extended and refined in a series of papers by Agarwal et al. [8] and Dshalalow [9, 10]. The SMMPP is studied in detail independently by Özekici and Soyer [11], including transient characterizations and ergodic analysis. Both the MMPP and the SMMPP will be proven to be expressible in terms of the components of the unified multivariate counting process proposed in this paper.

2.4. Renewal Process

Renewal processes can be considered as a generalization of Poisson processes in that the sequence of i.i.d. exponential random variables is replaced by a sequence of arbitrary i.i.d. nonnegative random variables with common distribution function 𝐴(𝑥). The resulting counting process {𝑁(𝑡) : 𝑡 ≥ 0} is still characterized by (2.1). Let 𝑝_𝑛(𝑡) = P[𝑁(𝑡) = 𝑛 | 𝑁(0) = 0] as before. One then sees that

\[
p_n(t) = A^{(n)}(t) - A^{(n+1)}(t), \tag{2.17}
\]

where 𝐴^{(𝑛)}(𝑡) denotes the 𝑛-fold convolution of 𝐴(𝑥) with itself, that is, 𝐴^{(𝑛+1)}(𝑡) = ∫_0^𝑡 𝐴^{(𝑛)}(𝑡 − 𝑥) 𝑑𝐴(𝑥) with 𝐴^{(0)}(𝑡) = 𝑈(𝑡), the step function defined as 𝑈(𝑡) = 1 for 𝑡 ≥ 0 and 𝑈(𝑡) = 0 otherwise.

Let π_𝑛(𝑠) = ∫_0^∞ e^{−𝑠𝑡} 𝑝_𝑛(𝑡) 𝑑𝑡 and α(𝑠) = ∫_0^∞ e^{−𝑠𝑡} 𝑑𝐴(𝑡). By taking the Laplace transform of both sides of (2.17), it follows that

\[
\pi_n(s) = \frac{1-\alpha(s)}{s}\,\alpha(s)^n. \tag{2.18}
\]

By taking the generating function of the above equation with π(𝑣, 𝑠) = Σ_{𝑛=0}^∞ π_𝑛(𝑠) 𝑣^𝑛, one has

\[
\pi(v,s) = \frac{1-\alpha(s)}{s}\cdot\frac{1}{1-v\,\alpha(s)}. \tag{2.19}
\]

The reader is referred to Cox [21] or Karlin and Taylor [14] for further discussions of renewal processes.
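Equation (2.17) can be evaluated directly on a time grid by repeated numerical convolution. A minimal sketch follows (ours; the gamma lifetime distribution is an illustrative choice, and the grid step controls accuracy):

```python
import numpy as np
from scipy import stats

dt, t_max, n_max = 0.01, 10.0, 6
grid = np.arange(0.0, t_max, dt)
a = stats.gamma(2.0, scale=0.5).pdf(grid)            # interarrival p.d.f. a(x), mean 1

A_n = np.ones_like(grid)                             # A^(0)(t) = U(t) on t >= 0
for n in range(n_max):
    A_next = np.convolve(A_n, a)[:len(grid)] * dt    # A^(n+1)(t) = (A^(n) * a)(t)
    p_n = A_n - A_next                               # p_n(t) of (2.17)
    print(n, round(p_n[int(5.0 / dt)], 4))           # p_n at t = 5.0
    A_n = A_next
```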

2.5. Markov Renewal Process (MRP)

An MRP is an extension of an ordinary renewal process in that, in the interval [0, 𝑡), the former describes the recurrence statistics for intermingling classes of epochs of an underlying semi-Markov process, whereas the latter counts the number of recurrences for a single recurrent class of epochs. More specifically, let {𝐽(𝑡) : 𝑡 ≥ 0} be a semi-Markov process on 𝒥 = {0, …, 𝐽} governed by a matrix p.d.f. a(𝑥), where a(𝑥) ≥ 0 and ∫_0^∞ a(𝑥) 𝑑𝑥 = 𝐴_0 is a stochastic matrix which is assumed to be ergodic. Let ε(ℓ) be the recurrent class consisting of the entries of the semi-Markov process into state ℓ for ℓ ∈ 𝒥, and define 𝑁_{ℓ𝑟}(𝑡) to be a counting process describing the number of recurrences of ε(𝑟) given that there was an epoch of ε(ℓ) at time 𝑡 = 0. Then 𝑁_ℓ(𝑡) = [𝑁_{ℓ0}(𝑡), …, 𝑁_{ℓ𝐽}(𝑡)] is called an MRP.

The study of MRPs can be traced back to the early 1960s, represented by the two original papers by Pyke [1, 2], followed by Keilson [22, 23], Keilson and Wishart [24, 25], Çinlar [26, 27], and McLean and Neuts [28]. Since then, the area has attracted many researchers, and a survey paper by Çinlar [29] in 1975 already included more than 70 leading references. The study has been largely focused on the matrix renewal function 𝐻(𝑡) = [𝐻_{ℓ𝑟}(𝑡)] with 𝐻_{ℓ𝑟}(𝑡) = E[𝑁_{ℓ𝑟}(𝑡)], the associated matrix renewal density, and the limit theorems. For example, one has the following result concerning the Laplace transform of 𝐻(𝑡) by Keilson [23]:

\[
\mathcal{L}\left\{H(t)\right\}(s) = \frac{1}{s}\left[I-\alpha(s)\right]^{-1}\alpha(s), \tag{2.20}
\]

where α(𝑠) is the Laplace transform of a(𝑡). The unified multivariate counting process of this paper contains an MRP as a special case and provides more information based on dynamic analysis of the underlying probabilistic flows.

2.6. Number of Entries of a Semi-Markov Process into a Subset of the State Space (NESMPS)

Another type of counting process associated with a semi-Markov process on 𝒥 = {0, …, 𝐽} governed by a matrix p.d.f. a(𝑥) is studied in Masuda and Sumita [3], where the state space 𝒥 is decomposed into a set of good states 𝐺 (≠ ∅) and a set of bad states 𝐵 (≠ ∅) satisfying 𝒥 = 𝐺 ∪ 𝐵 and 𝐺 ∩ 𝐵 = ∅. The counting process 𝑁_{𝐺𝐵}(𝑡) is then defined to describe the number of entries of 𝐽(𝑡) into 𝐵 by time 𝑡.

While 𝑁_{𝐺𝐵}(𝑡) is a special case of an MRP, the detailed analysis provided in [3] yields much more information. More specifically, let 𝑋(𝑡) be the age process associated with 𝐽(𝑡), that is,

\[
X(t) = t - \sup\left\{\tau : J(\tau)\big|_{\tau^-}^{\tau^+} \neq 0,\ 0 < \tau \le t\right\}, \tag{2.21}
\]

where 𝑓(𝑥)|_{𝑥^−}^{𝑥^+} = 𝑓(𝑥^+) − 𝑓(𝑥^−), and define

\[
F_n(x,t) = \left[F_{n\,ij}(x,t)\right], \tag{2.22}
\]

where

\[
F_{n\,ij}(x,t) = \mathrm{P}\left[X(t)\le x,\ N_{GB}(t)=n,\ J(t)=j \mid X(0)=N_{GB}(0)=0,\ J(0)=i\right]. \tag{2.23}
\]

One then has

\[
f_n(x,t) = \frac{\partial}{\partial x}F_n(x,t). \tag{2.24}
\]

The associated matrix Laplace transform generating function can then be defined as

\[
\hat{\varphi}(v,w,s) = \sum_{n=0}^{\infty} v^n \int_0^{\infty}\!\!\int_0^{\infty} e^{-wx}\,e^{-st}\,f_n(x,t)\,dx\,dt. \tag{2.25}
\]

It has been shown in Masuda and Sumita [3] that

\[
\hat{\varphi}(v,w,s) = \frac{1}{w+s}\,\gamma_0(s)\left[I-v\,\beta_0(s)\right]^{-1}\left[I-\alpha_D(w+s)\right]. \tag{2.26}
\]

Here, with α(𝑠) = ∫_0^∞ e^{−𝑠𝑡} a(𝑡) 𝑑𝑡 and α_{𝐶𝐷}(𝑠) = [α_{𝑖𝑗}(𝑠)]_{𝑖∈𝐶, 𝑗∈𝐷} for 𝐶, 𝐷 ⊂ 𝒥, the following notation is employed:

\[
\alpha(s) = \begin{bmatrix} \alpha_{GG}(s) & \alpha_{GB}(s) \\ \alpha_{BG}(s) & \alpha_{BB}(s) \end{bmatrix}; \qquad \alpha_D(s) = \left[\delta_{\{i=j\}}\,\alpha_i(s)\right] \ \text{with } \alpha_i(s)=\sum_{j\in\mathcal{J}}\alpha_{ij}(s); \tag{2.27}
\]

\[
\chi_G(s) = \left[I_{GG}-\alpha_{GG}(s)\right]^{-1}; \qquad \chi_B(s) = \left[I_{BB}-\alpha_{BB}(s)\right]^{-1}; \tag{2.28}
\]

\[
\beta_0(s) = \begin{bmatrix} 0_{BB} & 0_{BG} \\ \alpha_{GB}(s)\,\chi_B(s) & \alpha_{GB}(s)\,\chi_B(s)\,\alpha_{BG}(s)\,\chi_G(s) \end{bmatrix}; \tag{2.29}
\]

\[
\gamma_0(s) = \begin{bmatrix} \alpha_{BB}(s)\,\chi_B(s) & \chi_B(s)\,\alpha_{BG}(s)\,\chi_G(s) \\ 0_{GB} & \alpha_{GG}(s)\,\chi_G(s) \end{bmatrix} + I. \tag{2.30}
\]

As we will see, the unified multivariate counting process proposed in this paper enables one to deal with a multidimensional generalization of NESMPSs as a special case.

2.7. Markovian Arrival Process (MAP)

As with Poisson processes, a renewal process requires the interarrival times to form a sequence of i.i.d. nonnegative random variables. As we have seen, the class of MMPPs enables one to avoid this requirement by introducing different Poisson arrival rates depending on the state of the underlying Markov-chain. An alternative way to avoid the i.i.d. requirement is to adopt the class of MAPs, originally introduced by Lucantoni et al. [5]. We discuss here a slightly generalized version of MAPs in that the set of absorbing states is not necessarily a singleton set.

Let {𝐽(𝑡) : 𝑡 ≥ 0} be an absorbing Markov-chain on 𝒥 = 𝐺 ∪ 𝐵 with 𝐺 ≠ ∅, 𝐵 ≠ ∅, and 𝐺 ∩ 𝐵 = ∅, where all states in 𝐵 are absorbing. Without loss of generality, we assume that 𝐺 = {0, …, 𝑚} and 𝐵 = {𝑚+1, …, 𝑚+𝐾}. For notational convenience, the following transition rate matrices are introduced.

\[
\nu_{GG} = \left[\nu_{ij}\right]_{i,j\in G}; \qquad \nu_{GB} = \left[\nu_{ir}\right]_{i\in G,\ r\in B}. \tag{2.31}
\]

The entire transition rate matrix ν governing 𝐽(𝑡) is then given by

\[
\nu = \begin{bmatrix} \nu_{GG} & \nu_{GB} \\ 0 & 0 \end{bmatrix}. \tag{2.32}
\]

A replacement Markov-chain {𝐽̃(𝑡) : 𝑡 ≥ 0} on 𝐺 is now generated from {𝐽(𝑡) : 𝑡 ≥ 0}. Starting from a state in 𝐺, the process 𝐽̃(𝑡) coincides with 𝐽(𝑡) within 𝐺. As soon as 𝐽(𝑡) reaches a state 𝑟 ∈ 𝐵, it is instantaneously replaced at a state 𝑗 ∈ 𝐺 with probability p̃_{𝑟𝑗} and the process continues. Let

\[
C = \nu_{GG}, \qquad D = \nu_{GB}\,\tilde{p}_{BG}, \tag{2.33}
\]

where p̃_{𝐵𝐺} = [p̃_{𝑟𝑗}]_{𝑟∈𝐵, 𝑗∈𝐺}. Then the transition rate matrix ν̃ and the infinitesimal generator 𝑄̃ of 𝐽̃(𝑡) are given as

\[
\tilde{\nu} = C + D; \qquad \tilde{Q} = -\tilde{\nu}_D + \tilde{\nu}, \tag{2.34}
\]

where

\[
\tilde{\nu}_D = C_D + D_D, \tag{2.35}
\]

with

\[
C_D = \left[\delta_{\{i=j\}}c_i\right],\ c_i = \sum_{j\in G}c_{ij}; \qquad D_D = \left[\delta_{\{i=j\}}d_i\right],\ d_i = \sum_{j\in G}d_{ij}. \tag{2.36}
\]

Let p̃(𝑡) be the transition probability matrix of 𝐽̃(𝑡) with its Laplace transform defined by π̃(𝑠) = ∫_0^∞ e^{−𝑠𝑡} p̃(𝑡) 𝑑𝑡. From the Kolmogorov forward equation (𝑑/𝑑𝑡)p̃(𝑡) = p̃(𝑡)𝑄̃ with p̃(0) = 𝐼, one has

\[
\tilde{\pi}(s) = \left[sI-\tilde{Q}\right]^{-1}. \tag{2.37}
\]

Let {𝑁_MAP(𝑡) : 𝑡 ≥ 0} be the counting process keeping the record of the number of replacements in [0, 𝑡) and define

\[
f_{ij}(k,t) = \mathrm{P}\left[N_{\mathrm{MAP}}(t)=k,\ \tilde{J}(t)=j \mid \tilde{J}(0)=i\right], \quad i,j\in G. \tag{2.38}
\]

By analyzing the probabilistic flow into state 𝑗 at time 𝑡 + Δ, it can be seen that

\[
f_{ij}(k,t+\Delta) = f_{ij}(k,t)\left\{1-\left(c_j+d_j\right)\Delta\right\} + \sum_{\ell\in G} f_{i\ell}(k,t)\,c_{\ell j}\Delta + \sum_{\ell\in G} f_{i\ell}(k-1,t)\,d_{\ell j}\Delta + o(\Delta). \tag{2.39}
\]

It then follows that

\[
\frac{\partial}{\partial t}f_{ij}(k,t) = -f_{ij}(k,t)\left(c_j+d_j\right) + \sum_{\ell\in G} f_{i\ell}(k,t)\,c_{\ell j} + \sum_{\ell\in G} f_{i\ell}(k-1,t)\,d_{\ell j}. \tag{2.40}
\]

In matrix notation, the above equation can be rewritten as

\[
\frac{\partial}{\partial t}f(k,t) = -f(k,t)\,\tilde{\nu}_D + f(k,t)\,C + f(k-1,t)\,D. \tag{2.41}
\]

We now introduce the following matrix Laplace transform generating function:

\[
\hat{\varphi}(v,s) = \left[\hat{\varphi}_{ij}(v,s)\right]; \qquad \hat{\varphi}_{ij}(v,s) = \int_0^{\infty} e^{-st}\sum_{k=0}^{\infty} f_{ij}(k,t)\,v^k\,dt. \tag{2.42}
\]

Taking the Laplace transform with respect to 𝑡 and the generating function with respect to 𝑘 of both sides of (2.41), one has

\[
\hat{\varphi}(v,s)\left\{sI+\tilde{\nu}_D-C-vD\right\} = I, \tag{2.43}
\]

which can be solved as

\[
\hat{\varphi}(v,s) = \left[sI-\tilde{Q}+(1-v)D\right]^{-1}. \tag{2.44}
\]

We note from (2.37) and (2.44) that φ̂(1, 𝑠) = π̃(𝑠), as it should be.

It may be worth noting that an MMPP is a special case of an MAP, which can be seen in the following manner. Let an MAP be defined on 𝐺 ∪ 𝐵 where 𝐺 = {0_𝐺, …, 𝐽_𝐺} and 𝐵 = {0_𝐵, …, 𝐽_𝐵}. Transitions within 𝐺 are governed by ν. An entry into 𝑗_𝐵 ∈ 𝐵 is possible only from 𝑗_𝐺 ∈ 𝐺. When this occurs, the Markov process is immediately replaced at the entering state 𝑗_𝐺. The counting process for the number of such replacements then has the Laplace transform generating function φ̂(𝑣, 𝑠) given in (2.44) with 𝐷 replaced by λ_𝐷, which coincides with π̂(𝑢, 𝑠) of (2.16) for MMPPs, proving the claim.
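Numerically, (2.44) is again just a matrix inversion. A minimal sketch (ours; the blocks C and D are illustrative, chosen so that Q̃ = C + D has zero row sums):

```python
import numpy as np

C = np.array([[-3.0, 1.0],
              [0.5, -2.0]])     # within-G rates; diagonal chosen so C + D has zero row sums
D = np.array([[1.5, 0.5],
              [1.0, 0.5]])      # replacement rates nu_GB p_BG, cf. (2.33)
Q = C + D                       # infinitesimal generator of the replacement chain

def phi_hat(v, s):
    """Matrix Laplace transform generating function of (2.44)."""
    return np.linalg.inv(s * np.eye(2) - Q + (1 - v) * D)

# v = 1 recovers the transition-matrix transform inv(sI - Q), cf. (2.37):
print(np.allclose(phi_hat(1.0, 0.4), np.linalg.inv(0.4 * np.eye(2) - Q)))  # True
```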

2.8. Age-Dependent Counting Process Generated from a Renewal Process (ACPGRP)

An age-dependent counting process generated from a renewal process has been introduced and studied by Sumita and Shanthikumar [12], where items arrive according to an NHPP which is interrupted and reset at random epochs governed by a renewal process. More specifically, let {𝑁(𝑡) : 𝑡 ≥ 0} be a renewal process associated with a sequence of i.i.d. nonnegative random variables (𝑋_𝑗)_{𝑗=1}^∞ with common p.d.f. a(𝑥). The age process 𝑋(𝑡) is then defined by

\[
X(t) = t - \sup\left\{\tau : N(\tau^+)-N(\tau^-)=1,\ 0<\tau\le t\right\}. \tag{2.45}
\]

In other words, 𝑋(𝑡) is the elapsed time since the last renewal. We next consider an NHPP 𝑍(𝑥) governed by an intensity function λ(𝑥). If we define

\[
L(x) = \int_0^x \lambda(y)\,dy, \tag{2.46}
\]

one has

\[
g(x,k) = \mathrm{P}\left[Z(x)=k\right] = e^{-L(x)}\,\frac{L(x)^k}{k!}, \quad k=0,1,2,\ldots. \tag{2.47}
\]

Of interest, then, is a counting process {𝑀(𝑡) : 𝑡 ≥ 0} characterized by

\[
\mathrm{P}\left[M(t+\Delta)-M(t)=1 \mid M(t)=m,\ N(t)=n,\ X(t)=x\right] = \lambda(x)\Delta + o(\Delta), \quad (m,n,x)\in S,\ \Delta>0. \tag{2.48}
\]

Here 𝑆 = ℤ_+ × ℤ_+ × ℝ_+, where ℤ_+ is the set of nonnegative integers and ℝ_+ is the set of nonnegative real numbers.

Since the counting process 𝑀(𝑡) is governed by the intensity function depending on the age process 𝑋(𝑡) of the renewal process 𝑁(𝑡), it is necessary to analyze the trivariate process [𝑀(𝑡), 𝑁(𝑡), 𝑋(𝑡)]. Let the multivariate transform of [𝑀(𝑡), 𝑁(𝑡), 𝑋(𝑡)] be defined by

\[
\hat{\varphi}(u,v,w,s) = \int_0^{\infty} e^{-st}\,\mathrm{E}\left[u^{M(t)}\,v^{N(t)}\,e^{-wX(t)}\right]dt. \tag{2.49}
\]

It has been shown in Sumita and Shanthikumar [12] that

\[
\hat{\varphi}(u,v,w,s) = \frac{\bar{\beta}(u,w+s)}{1-v\,\beta(u,s)}, \tag{2.50}
\]

where we define, for 𝑚 ≥ 0 with Ā(𝑡) = ∫_𝑡^∞ a(𝑥) 𝑑𝑥,

\[
\begin{aligned}
b_m(t) &= a(t)\,g(t,m); & \bar{b}_m(t) &= \bar{A}(t)\,g(t,m),\\
\beta_m(s) &= \int_0^{\infty} e^{-st}\,b_m(t)\,dt; & \bar{\beta}_m(s) &= \int_0^{\infty} e^{-st}\,\bar{b}_m(t)\,dt,\\
\beta(u,s) &= \sum_{m=0}^{\infty}\beta_m(s)\,u^m; & \bar{\beta}(u,s) &= \sum_{m=0}^{\infty}\bar{\beta}_m(s)\,u^m.
\end{aligned} \tag{2.51}
\]

The Laplace transform generating function of 𝑀(𝑡) defined by

\[
\hat{\pi}(u,s) = \int_0^{\infty} e^{-st}\,\mathrm{E}\left[u^{M(t)}\right]dt \tag{2.52}
\]

is then given by π̂(𝑢, 𝑠) = φ̂(𝑢, 1, 0, 𝑠), so that one has from (2.50)

\[
\hat{\pi}(u,s) = \frac{\bar{\beta}(u,s)}{1-\beta(u,s)}. \tag{2.53}
\]
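Since β(𝑢, 𝑠) and β̄(𝑢, 𝑠) in (2.51) are one-dimensional integrals, π̂(𝑢, 𝑠) of (2.53) can be evaluated by numerical quadrature. A minimal sketch (ours; gamma renewal lifetimes and λ(𝑥) = 0.5 + 𝑥 are illustrative choices), including the built-in consistency check π̂(1, 𝑠) = 1/𝑠:

```python
import numpy as np
from scipy import integrate, stats

a = stats.gamma(2.0, scale=1.0).pdf          # renewal p.d.f. a(t)
A_bar = stats.gamma(2.0, scale=1.0).sf       # survival function bar-A(t)
L = lambda t: 0.5 * t + 0.5 * t ** 2         # L(t) for lambda(x) = 0.5 + x

def beta(u, s, f):
    # beta(u,s) = int_0^inf e^{-st} e^{-L(t)(1-u)} f(t) dt, cf. (2.51)
    g = lambda t: np.exp(-s * t - L(t) * (1 - u)) * f(t)
    return integrate.quad(g, 0, np.inf)[0]

def pi_hat(u, s):
    return beta(u, s, A_bar) / (1 - beta(u, s, a))   # (2.53)

# At u = 1, pi_hat(1,s) must equal 1/s (total probability):
print(pi_hat(1.0, 0.8), 1 / 0.8)
```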

This class of counting processes denoted by ACPGRP may be extended where the underlying renewal process is replaced by an MMPP or an SMMPP. We define the former as a class of age-dependent counting processes governed by an MMPP, denoted by Markov-modulated age-dependent nonhomogeneous Poisson process (MMANHPP), and the latter as a class of age-dependent counting processes governed by an SMMPP, denoted by semi-Markov-modulated age-dependent nonhomogeneous Poisson process (SMMANHPP). The two extended classes are new and become special cases of the unified counting process proposed in this paper as we will see.

All of the counting processes discussed in this section are summarized in Figure 1.

3. Unified Multivariate Counting Process [𝐌(𝐭),𝐍(𝐭)]

In this section, we propose a stochastic process representing the unified multivariate counting process discussed in Section 1, which contains all of the counting processes given in Section 2 as special cases. More specifically, we consider a system where items arrive according to an NHPP. This arrival stream is governed by a finite semi-Markov process on 𝒥 = {0, …, 𝐽} in that the intensity function of the NHPP depends on the current state of the semi-Markov process. That is, when the semi-Markov process is in state 𝑖 with current dwell time 𝑥, items arrive according to a Poisson process with intensity λ_𝑖(𝑥). If the semi-Markov process switches its state from 𝑖 to 𝑗, the intensity function λ_𝑖(𝑥) is interrupted, the intensity function at state 𝑗 is reset to λ_𝑗(0), and the arrival process resumes. Of particular interest would be the multivariate counting processes 𝑀(𝑡) = [𝑀_0(𝑡), …, 𝑀_𝐽(𝑡)] and 𝑁(𝑡) = [𝑁_{𝑖𝑗}(𝑡)], with 𝑀_𝑖(𝑡) and 𝑁_{𝑖𝑗}(𝑡) counting the number of items that have arrived in state 𝑖 by time 𝑡 and the number of transitions of the semi-Markov process from state 𝑖 to state 𝑗 by time 𝑡, respectively. The two counting processes 𝑀(𝑡) and 𝑁(𝑡) enable one to introduce a variety of interesting performance indicators, as we will see.

Formally, let {𝐽(𝑡) : 𝑡 ≥ 0} be a semi-Markov process on 𝒥 = {0, …, 𝐽} governed by a matrix cumulative distribution function (c.d.f.)

\[
A(x) = \left[A_{ij}(x)\right], \tag{3.1}
\]

which is assumed to be absolutely continuous with the matrix probability density function (p.d.f.)

\[
a(x) = \left[a_{ij}(x)\right] = \frac{d}{dx}A(x). \tag{3.2}
\]

It should be noted that, if we define 𝐴_𝑖(𝑥) and Ā_𝑖(𝑥) by

\[
A_i(x) = \sum_{j\in\mathcal{J}} A_{ij}(x); \qquad \bar{A}_i(x) = 1 - A_i(x), \tag{3.3}
\]

then 𝐴_𝑖(𝑥) is an ordinary c.d.f. and Ā_𝑖(𝑥) is the corresponding survival function. The hazard rate functions associated with the semi-Markov process are then defined as

\[
\eta_{ij}(x) = \frac{a_{ij}(x)}{\bar{A}_i(x)}, \quad i,j\in\mathcal{J}. \tag{3.4}
\]

For notational convenience, the transition epochs of the semi-Markov process are denoted by τ_𝑛, 𝑛 ≥ 0, with τ_0 = 0. The age process 𝑋(𝑡) associated with the semi-Markov process is then defined as

\[
X(t) = t - \max\left\{\tau_n : n \ge 0,\ \tau_n \le t\right\}. \tag{3.5}
\]

At time 𝑡 with 𝐽(𝑡) = 𝑖 and 𝑋(𝑡) = 𝑥, the intensity function of the NHPP is given by λ_𝑖(𝑥). For the cumulative arrival intensity function 𝐿_𝑖(𝑥) in state 𝑖, one has

\[
L_i(x) = \int_0^x \lambda_i(y)\,dy. \tag{3.6}
\]

The probability of observing 𝑘 arrivals in state 𝑖 within the current age 𝑥 can then be obtained as

\[
g_i(x,k) = e^{-L_i(x)}\,\frac{L_i(x)^k}{k!}, \quad k=0,1,2,\ldots,\ i\in\mathcal{J}. \tag{3.7}
\]

Of interest are the multivariate counting processes

\[
M(t) = \left[M_0(t),\ldots,M_J(t)\right], \tag{3.8}
\]

where 𝑀_𝑖(𝑡) represents the total number of items that have arrived by time 𝑡 while the semi-Markov process stayed in state 𝑖, and

\[
N(t) = \left[N_{ij}(t)\right], \tag{3.9}
\]

with 𝑁_{𝑖𝑗}(𝑡) denoting the number of transitions of the semi-Markov process from state 𝑖 to state 𝑗 by time 𝑡. It is obvious that 𝑁_{·𝑖}(𝑡) := Σ_{ℓ∈𝒥} 𝑁_{ℓ𝑖}(𝑡) denotes the number of entries into state 𝑖 by time 𝑡. The initial state is not included in 𝑁_{·𝑖}(𝑡) for any 𝑖 ∈ 𝒥; in other words, if 𝐽(0) = 𝑖, then 𝑁_{·𝑖}(𝑡) remains 0 until the first return of the semi-Markov process to state 𝑖. In the next section, we analyze the dynamic behavior of [𝑀(𝑡), 𝑁(𝑡)], yielding the relevant Laplace transform generating functions. In the subsequent section, all of the counting processes discussed in Section 2 are expressed in terms of 𝑀(𝑡) and 𝑁(𝑡), thereby providing a unified approach for studying various counting processes. The associated asymptotic behavior as 𝑡 → ∞ is also discussed.
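Before turning to transforms, it may help to see the process [𝑀(𝑡), 𝑁(𝑡)] concretely. The following simulation sketch (ours, not from the paper; the two-state kernel, the dwell-time samplers, and the intensities are illustrative stand-ins) generates one path by alternating semi-Markov sojourns with Poisson counts of mean 𝐿_𝑖(𝑥), in line with (3.7):

```python
import math
import random

P = [[0.0, 1.0],
     [1.0, 0.0]]                                  # embedded transition probabilities
dwell = [lambda: random.expovariate(1.0),         # state-0 dwell: exponential, mean 1
         lambda: random.weibullvariate(1.0, 2.0)] # state-1 dwell: Weibull, shape 2
L = [lambda x: 0.5 * x,                           # L_0(x) for lambda_0(x) = 0.5
     lambda x: x + 0.5 * x ** 2]                  # L_1(x) for lambda_1(x) = 1 + x

def poisson(mu):
    """Poisson variate with mean mu (inverse-transform sampling)."""
    n, p = 0, math.exp(-mu)
    u, c = random.random(), p
    while u > c:
        n += 1
        p *= mu / n
        c += p
    return n

def simulate(t_end, i0=0):
    t, i = 0.0, i0
    M = [0, 0]                                    # M_i(t): arrivals while in state i
    N = [[0, 0], [0, 0]]                          # N_ij(t): i -> j transition counts
    while t < t_end:
        x = min(dwell[i](), t_end - t)            # sojourn, truncated at the horizon
        M[i] += poisson(L[i](x))                  # arrivals during this sojourn
        t += x
        if t < t_end:
            j = random.choices([0, 1], weights=P[i])[0]
            N[i][j] += 1
            i = j
    return M, N

print(simulate(50.0))
```

Averaging many such paths reproduces, for large horizons, the linear growth rates derived in Section 6.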

4. Dynamic Analysis of [𝐌(𝐭),𝐍(𝐭)]

The purpose of this section is to examine the dynamic behavior of the multivariate stochastic process [𝑀(𝑡),𝑁(𝑡)] introduced in Section 3 by observing its probabilistic flow in the state space. Figure 2 depicts a typical sample path of the multivariate process.

Since [𝑀(𝑡), 𝑁(𝑡)] is not Markov, we employ the method of supplementary variables. More specifically, the multivariate stochastic process [𝑀(𝑡), 𝑁(𝑡), 𝑋(𝑡), 𝐽(𝑡)] is considered. This multivariate stochastic process is Markov and has the state space 𝑆 = ℤ_+^{𝐽+1} × ℤ_+^{(𝐽+1)×(𝐽+1)} × ℝ_+ × 𝒥, where ℤ_+^{𝐽+1} and ℤ_+^{(𝐽+1)×(𝐽+1)} are the sets of (𝐽+1)-dimensional nonnegative integer vectors and (𝐽+1)×(𝐽+1) nonnegative integer matrices respectively, ℝ_+ is the set of nonnegative real numbers, and 𝒥 = {0, …, 𝐽}. Let 𝐹_{𝑖𝑗}(m, n, 𝑥, 𝑡) be the joint probability distribution of [𝑀(𝑡), 𝑁(𝑡), 𝑋(𝑡), 𝐽(𝑡)] defined by

\[
F_{ij}(m,n,x,t) = \mathrm{P}\left[M(t)=m,\ N(t)=n,\ X(t)\le x,\ J(t)=j \mid M(0)=0,\ N(0)=0,\ J(0)=i\right]. \tag{4.1}
\]

In order to assure the differentiability of 𝐹_{𝑖𝑗}(m, n, 𝑥, 𝑡) with respect to 𝑥, we assume that 𝑋(0) has an absolutely continuous initial distribution function 𝐷(𝑥) with p.d.f. 𝑑(𝑥) = (𝑑/𝑑𝑥)𝐷(𝑥). (If 𝑋(0) = 0 with probability 1, we consider a sequence of initial distribution functions {𝐷_𝑘(𝑥)}_{𝑘=1}^∞ satisfying 𝐷_𝑘(𝑥) → 𝑈(𝑥) as 𝑘 → ∞, where 𝑈(𝑥) = 1 for 𝑥 ≥ 0 and 𝑈(𝑥) = 0 otherwise. The desired results can be obtained through this limiting process.) One can then define

\[
f_{ij}(m,n,x,t) = \frac{\partial}{\partial x}F_{ij}(m,n,x,t). \tag{4.2}
\]

By interpreting the probabilistic flow of the multivariate process [𝑀(𝑡), 𝑁(𝑡), 𝑋(𝑡), 𝐽(𝑡)] in its state space, one can establish the following equations:

\[
f_{ij}(m,n,x,t) = \delta_{\{i=j\}}\,\delta_{\{m=m_i 1_i\}}\,\delta_{\{n=0\}}\,d(x-t)\,\frac{\bar{A}_i(x)}{\bar{A}_i(x-t)}\,g_i(t,m_i) + \left(1-\delta_{\{n=0\}}\right)\sum_{k=0}^{m_j} f_{ij}\left(m-k1_j,\,n,\,0^+,\,t-x\right)\bar{A}_j(x)\,g_j(x,k); \tag{4.3}
\]

\[
f_{ij}(m,n,0^+,t) = \left(1-\delta_{\{n=0\}}\right)\sum_{\ell\in\mathcal{J}}\int_0^{\infty} f_{i\ell}\left(m,\,n-1_{\ell j},\,x,\,t\right)\eta_{\ell j}(x)\,dx; \tag{4.4}
\]

\[
f_{ij}(m,n,x,0) = \delta_{\{i=j\}}\,\delta_{\{m=0\}}\,\delta_{\{n=0\}}\,d(x), \tag{4.5}
\]

where 1_𝑖 is the column vector whose 𝑖th element is equal to 1 with all other elements being 0, 1_{𝑖𝑗} = 1_𝑖 1_𝑗^⊤, and 𝑓_{𝑖𝑗}(m, n, 0^+, 𝑡) = 0 for n = 0.

The first term on the right-hand side of (4.3) represents the case that 𝐽(𝑡) has not left the initial state 𝐽(0) = 𝑖 by time 𝑡 [δ_{\{𝑖=𝑗\}} = 1 and δ_{\{n=0\}} = 1] and 𝑚_𝑖 items have arrived by time 𝑡 [δ_{\{m=𝑚_𝑖1_𝑖\}} = 1], provided that 𝐽(0) = 𝑖 and 𝑋(0) = 𝑥 − 𝑡. The second term corresponds to the case that 𝐽(𝑡) has made at least one transition from 𝐽(0) = 𝑖 by time 𝑡 [δ_{\{n=0\}} = 0], the multivariate process [𝑀(𝑡), 𝑁(𝑡), 𝑋(𝑡), 𝐽(𝑡)] just entered the state [m − 𝑘1_𝑗, n, 0^+, 𝑗] at time 𝑡 − 𝑥, 𝐽(𝑡) remained in state 𝑗 until time 𝑡 with 𝑋(𝑡) = 𝑥, and 𝑘 items have arrived during the current age 𝑥. For the multivariate process to be at [m, n, 0^+, 𝑗] at time 𝑡, it must be at [m, n − 1_{ℓ𝑗}, 𝑥, ℓ] just beforehand, followed by a transition from ℓ to 𝑗 at time 𝑡, which increases 𝑁_{ℓ𝑗}(𝑡) by one, explaining (4.4). Equation (4.5) merely describes the initial condition that [𝑀(0), 𝑁(0), 𝑋(0), 𝐽(0)] = [0, 0, 𝑥, 𝑖].

In what follows, the dynamic behavior of the multivariate process [𝑀(𝑡), 𝑁(𝑡), 𝑋(𝑡), 𝐽(𝑡)] is captured by establishing the associated Laplace transform generating functions based on (4.3), (4.4), and (4.5). For notational convenience, the following matrix functions are employed:

\[
b_k(t) = \left[b_{kij}(t)\right]; \qquad b_{kij}(t) = a_{ij}(t)\,g_i(t,k), \tag{4.6}
\]
\[
\bar{b}_{kD}(t) = \left[\delta_{\{i=j\}}\bar{b}_{ki}(t)\right] = \mathrm{diag}\left\{\bar{b}_{k0}(t),\ldots,\bar{b}_{kJ}(t)\right\}; \qquad \bar{b}_{ki}(t) = \bar{A}_i(t)\,g_i(t,k), \tag{4.7}
\]
\[
r_k(t) = \left[r_{kij}(t)\right]; \qquad r_{kij}(t) = g_i(t,k)\int_0^{\infty} d(x-t)\,\frac{a_{ij}(x)}{\bar{A}_i(x-t)}\,dx, \tag{4.8}
\]
\[
\beta_k(s) = \left[\beta_{kij}(s)\right]; \qquad \beta_{kij}(s) = \int_0^{\infty} e^{-st}\,b_{kij}(t)\,dt, \tag{4.9}
\]
\[
\beta(u,s) = \left[\beta_{ij}(u_i,s)\right]; \qquad \beta_{ij}(u_i,s) = \sum_{k=0}^{\infty}\beta_{kij}(s)\,u_i^k, \tag{4.10}
\]
\[
\bar{\beta}_{kD}(s) = \mathrm{diag}\left\{\bar{\beta}_{k0}(s),\ldots,\bar{\beta}_{kJ}(s)\right\}; \qquad \bar{\beta}_{ki}(s) = \int_0^{\infty} e^{-st}\,\bar{b}_{ki}(t)\,dt, \tag{4.11}
\]
\[
\bar{\beta}_D(u,s) = \mathrm{diag}\left\{\bar{\beta}_0(u_0,s),\ldots,\bar{\beta}_J(u_J,s)\right\}; \qquad \bar{\beta}_i(u_i,s) = \sum_{k=0}^{\infty}\bar{\beta}_{ki}(s)\,u_i^k, \tag{4.12}
\]
\[
\rho_k(s) = \left[\rho_{kij}(s)\right]; \qquad \rho_{kij}(s) = \int_0^{\infty} e^{-st}\,r_{kij}(t)\,dt, \tag{4.13}
\]
\[
\rho(u,s) = \left[\rho_{ij}(u_i,s)\right]; \qquad \rho_{ij}(u_i,s) = \sum_{k=0}^{\infty}\rho_{kij}(s)\,u_i^k, \tag{4.14}
\]

where, for vectors u = [𝑢_𝑖] and matrices v = [𝑣_{𝑖𝑗}], we write u^m v^n = Π_{𝑖∈𝒥} 𝑢_𝑖^{𝑚_𝑖} Π_{(𝑖,𝑗)∈𝒥×𝒥} 𝑣_{𝑖𝑗}^{𝑛_{𝑖𝑗}}. We further define the Laplace transforms ξ_{𝑖𝑗}(m, n, 0^+, 𝑠) = ∫_0^∞ e^{−𝑠𝑡} 𝑓_{𝑖𝑗}(m, n, 0^+, 𝑡) 𝑑𝑡 and the double transforms φ_{𝑖𝑗}(m, n, 𝑤, 𝑠) = ∫_0^∞ e^{−𝑠𝑡} ∫_0^∞ e^{−𝑤𝑥} 𝑓_{𝑖𝑗}(m, n, 𝑥, 𝑡) 𝑑𝑥 𝑑𝑡, together with the matrix generating functions ξ̂(u, v, 0^+, 𝑠) = [ξ̂_{𝑖𝑗}(u, v, 0^+, 𝑠)] and φ̂(u, v, 𝑤, 𝑠) = [φ̂_{𝑖𝑗}(u, v, 𝑤, 𝑠)] obtained by summing against u^m v^n. We are now in a position to prove the main theorem of this section.

Theorem 4.1. Let 𝑋(0) = 0. Then

\[
\hat{\xi}(u,v,0^+,s) = \tilde{\beta}(u,v,s)\left[I-\tilde{\beta}(u,v,s)\right]^{-1}; \tag{4.19}
\]
\[
\hat{\varphi}(u,v,w,s) = \left[I-\tilde{\beta}(u,v,s)\right]^{-1}\bar{\beta}_D(u,w+s), \tag{4.20}
\]

where β̃(u, v, 𝑠) = [𝑣_{𝑖𝑗} β_{𝑖𝑗}(𝑢_𝑖, 𝑠)].

Proof. First, we assume that 𝑋(0) has a p.d.f. 𝑑(𝑥). Substituting (4.3) and (4.5) into (4.4), one sees that

\[
\begin{aligned}
f_{ij}(m,n,0^+,t) &= \left(1-\delta_{\{n=0\}}\right)\sum_{\ell\in\mathcal{J}}\int_0^{\infty}\delta_{\{i=\ell\}}\,\delta_{\{m=m_i1_i\}}\,\delta_{\{n=1_{\ell j}\}}\,d(x-t)\,\frac{\bar{A}_i(x)}{\bar{A}_i(x-t)}\,g_i(t,m_i)\,\eta_{\ell j}(x)\,dx\\
&\quad + \sum_{\ell\in\mathcal{J}}\left(1-\delta_{\{n=1_{\ell j}\}}\right)\int_0^{\infty}\sum_{k=0}^{m_\ell} f_{i\ell}\left(m-k1_\ell,\,n-1_{\ell j},\,0^+,\,t-x\right)\bar{A}_\ell(x)\,g_\ell(x,k)\,\eta_{\ell j}(x)\,dx\\
&= \left(1-\delta_{\{n=0\}}\right)\delta_{\{m=m_i1_i\}}\,\delta_{\{n=1_{ij}\}}\,g_i(t,m_i)\int_0^{\infty} d(x-t)\,\frac{a_{ij}(x)}{\bar{A}_i(x-t)}\,dx\\
&\quad + \sum_{\ell\in\mathcal{J}}\left(1-\delta_{\{n=1_{\ell j}\}}\right)\sum_{k=0}^{m_\ell}\int_0^{\infty} f_{i\ell}\left(m-k1_\ell,\,n-1_{\ell j},\,0^+,\,t-x\right)a_{\ell j}(x)\,g_\ell(x,k)\,dx.
\end{aligned} \tag{4.21}
\]

Consequently, one sees from (4.6) and (4.8) that

\[
f_{ij}(m,n,0^+,t) = \left(1-\delta_{\{n=0\}}\right)\left\{\delta_{\{m=m_i1_i\}}\,\delta_{\{n=1_{ij}\}}\,r_{m_i\,ij}(t) + \sum_{\ell\in\mathcal{J}}\left(1-\delta_{\{n=1_{\ell j}\}}\right)\sum_{k=0}^{m_\ell}\int_0^{\infty} f_{i\ell}\left(m-k1_\ell,\,n-1_{\ell j},\,0^+,\,t-x\right)b_{k\ell j}(x)\,dx\right\}, \tag{4.22}
\]

where a_{ℓ𝑗}(𝑥) = Ā_ℓ(𝑥) η_{ℓ𝑗}(𝑥) is employed from (3.4). By taking the Laplace transform of both sides of (4.22) with respect to 𝑡, it follows that

\[
\xi_{ij}(m,n,0^+,s) = \left(1-\delta_{\{n=0\}}\right)\left\{\delta_{\{m=m_i1_i\}}\,\delta_{\{n=1_{ij}\}}\,\rho_{m_i\,ij}(s) + \sum_{\ell\in\mathcal{J}}\left(1-\delta_{\{n=1_{\ell j}\}}\right)\sum_{k=0}^{m_\ell}\xi_{i\ell}\left(m-k1_\ell,\,n-1_{\ell j},\,0^+,\,s\right)\beta_{k\ell j}(s)\right\}. \tag{4.23}
\]

By taking the multivariate generating function of (4.23) with respect to m and n, it can be seen that

\[
\hat{\xi}_{ij}(u,v,0^+,s) = \sum_{m\in\mathbb{Z}_+^{J+1}}\ \sum_{n\in\mathbb{Z}_+^{(J+1)\times(J+1)}\setminus\{0\}}\xi_{ij}(m,n,0^+,s)\,u^m v^n = \sum_{m}\sum_{n\neq 0}\delta_{\{m=m_i1_i\}}\,\delta_{\{n=1_{ij}\}}\,\rho_{m_i\,ij}(s)\,u^m v^n + \sum_{\ell\in\mathcal{J}} v_{\ell j}\sum_{m}\sum_{k=0}^{m_\ell}\sum_{n\neq 0}\xi_{i\ell}\left(m-k1_\ell,\,n,\,0^+,\,s\right)\beta_{k\ell j}(s)\,u^m v^n. \tag{4.24}
\]

It then follows from (4.10), (4.14), and the discrete convolution property that

\[
\hat{\xi}_{ij}(u,v,0^+,s) = v_{ij}\sum_{m_i=0}^{\infty}u_i^{m_i}\rho_{m_i\,ij}(s) + \sum_{\ell\in\mathcal{J}} v_{\ell j}\,\hat{\xi}_{i\ell}(u,v,0^+,s)\,\beta_{\ell j}(u_\ell,s) = v_{ij}\,\rho_{ij}(u_i,s) + \sum_{\ell\in\mathcal{J}} v_{\ell j}\,\hat{\xi}_{i\ell}(u,v,0^+,s)\,\beta_{\ell j}(u_\ell,s). \tag{4.25}
\]

The last expression can be rewritten in matrix form, and one has

\[
\hat{\xi}(u,v,0^+,s) = \tilde{\rho}(u,v,s) + \hat{\xi}(u,v,0^+,s)\,\tilde{\beta}(u,v,s), \tag{4.26}
\]

which can be solved for ξ̂(u, v, 0^+, 𝑠) as

\[
\hat{\xi}(u,v,0^+,s) = \tilde{\rho}(u,v,s)\left[I-\tilde{\beta}(u,v,s)\right]^{-1}, \tag{4.27}
\]

where ρ̃(u, v, 𝑠) = [𝑣_{𝑖𝑗} ρ_{𝑖𝑗}(𝑢_𝑖, 𝑠)].

Next, we introduce the following double Laplace transform:

\[
\varepsilon_{ki}(w,s) = \int_0^{\infty}\!\!\int_0^{\infty} e^{-wx}\,e^{-st}\,d(x-t)\,\frac{\bar{A}_i(x)}{\bar{A}_i(x-t)}\,g_i(t,k)\,dt\,dx, \tag{4.28}
\]

and the associated diagonal matrix

\[
\varepsilon_D(u,w,s) = \mathrm{diag}\left\{\varepsilon_0(u_0,w,s),\ldots,\varepsilon_J(u_J,w,s)\right\}; \qquad \varepsilon_i(u_i,w,s) = \sum_{k=0}^{\infty}\varepsilon_{ki}(w,s)\,u_i^k. \tag{4.29}
\]

By taking the double Laplace transform of (4.3), one sees from (4.7) and (4.28) that

\[
\varphi_{ij}(m,n,w,s) = \delta_{\{i=j\}}\,\delta_{\{m=m_i1_i\}}\,\delta_{\{n=0\}}\,\varepsilon_{m_i\,i}(w,s) + \left(1-\delta_{\{n=0\}}\right)\sum_{k=0}^{m_j}\xi_{ij}\left(m-k1_j,\,n,\,0^+,\,s\right)\bar{\beta}_{kj}(w+s). \tag{4.30}
\]

By taking the double generating function, this then leads to

\[
\hat{\varphi}_{ij}(u,v,w,s) = \sum_{m}\sum_{n}\varphi_{ij}(m,n,w,s)\,u^m v^n = \delta_{\{i=j\}}\,\varepsilon_i(u_i,w,s) + \hat{\xi}_{ij}(u,v,0^+,s)\,\bar{\beta}_j(u_j,w+s). \tag{4.31}
\]

By rewriting the last expression in matrix form, it can be seen that

\[
\hat{\varphi}(u,v,w,s) = \varepsilon_D(u,w,s) + \hat{\xi}(u,v,0^+,s)\,\bar{\beta}_D(u,w+s). \tag{4.32}
\]

We now consider the limiting process 𝐷(𝑥) → 𝑈(𝑥). For the p.d.f., this limiting process becomes 𝑑(𝑥) → δ(𝑥), where δ(𝑥) is the Dirac delta function defined as the unit element for convolution, that is, 𝑓(𝑥) = ∫ 𝑓(𝑥−τ)δ(τ) 𝑑τ for any integrable function 𝑓. Accordingly, it can be seen from (4.8) that 𝑟_{𝑘𝑖𝑗}(𝑡) → 𝑏_{𝑘𝑖𝑗}(𝑡). This in turn implies from (4.28) that ε_{𝑘𝑖}(𝑤, 𝑠) → β̄_{𝑘𝑖}(𝑤+𝑠). Consequently, it follows in matrix form that ρ̃(u, v, 𝑠) → β̃(u, v, 𝑠) and ε_𝐷(u, 𝑤, 𝑠) → β̄_𝐷(u, 𝑤+𝑠). From (4.27), it can then be readily seen that ξ̂(u, v, 0^+, 𝑠) → β̃(u, v, 𝑠){𝐼 − β̃(u, v, 𝑠)}^{−1}, proving (4.19). One also sees from (4.32) that

\[
\hat{\varphi}(u,v,w,s) \longrightarrow \bar{\beta}_D(u,w+s) + \hat{\xi}(u,v,0^+,s)\,\bar{\beta}_D(u,w+s) = \left\{I+\tilde{\beta}(u,v,s)\left[I-\tilde{\beta}(u,v,s)\right]^{-1}\right\}\bar{\beta}_D(u,w+s) = \left[I-\tilde{\beta}(u,v,s)\right]^{-1}\bar{\beta}_D(u,w+s), \tag{4.33}
\]

which proves (4.20), completing the proof.
In the next section, it will be shown that all the transform results obtained in Section 2 can be derived from Theorem 4.1.

5. Derivation of the Special Cases from the Unified Counting Process

We are now in a position to demonstrate that the proposed multivariate counting process introduced in Section 3 and analyzed in Section 4 indeed unifies the existing counting processes discussed in Section 2. We will do so by deriving the transform results of Section 2 from Theorem 4.1.

5.1. Derivation of Poisson Process

Let 𝑁(𝑡) be a Poisson process with intensity λ as discussed in Section 2.1. From (2.5), one sees that π(𝑣, 𝑠) = ∫_0^∞ e^{−𝑠𝑡} E[𝑣^{𝑁(𝑡)}] 𝑑𝑡 = ∫_0^∞ e^{−𝑠𝑡} π(𝑣, 𝑡) 𝑑𝑡 is given by

\[
\pi(v,s) = \frac{1}{s+\lambda(1-v)}. \tag{5.1}
\]

For the unified counting process, we consider a single-state Markov-chain in continuous time where only the number of self transitions in (0, 𝑡] is counted. More specifically, let 𝒥 = {0}, 𝑁(𝑡) = [𝑁_{00}(𝑡)], 𝑢_0 = 1, 𝑣_{00} = 𝑣, 𝑤 = 0, λ_0(𝑡) = 0, and a_{00}(𝑥) = λe^{−λ𝑥}. We note from (3.7) that λ_0(𝑡) = 0 implies 𝑔_0(𝑥, 𝑘) = δ_{\{𝑘=0\}}, so that 𝑏_{𝑘00}(𝑡) = δ_{\{𝑘=0\}} a_{00}(𝑡) from (4.6). This then implies β_{00}(1, 𝑠) = λ/(𝑠+λ). Similarly, since Ā_0(𝑥) = e^{−λ𝑥}, one has β̄_0(1, 𝑠) = 1/(𝑠+λ). It then follows from Theorem 4.1 that

\[
\hat{\varphi}_{00}(1,v,0^+,s) = \frac{\bar{\beta}_0(1,s)}{1-v\,\beta_{00}(1,s)} = \frac{1}{s+\lambda(1-v)}. \tag{5.2}
\]

Hence, from (5.1) and (5.2), one concludes that π(𝑣, 𝑠) = φ̂_{00}(1, 𝑣, 0^+, 𝑠).

5.2. Derivation of NHPP

Let 𝑀(𝑡) be an NHPP of Section 2.2 characterized by a time-dependent intensity function λ(𝑡). It can be seen from (2.8) that π̂(𝑢, 𝑠) = ∫_0^∞ e^{−𝑠𝑡} E[𝑢^{𝑀(𝑡)}] 𝑑𝑡 = ∫_0^∞ e^{−𝑠𝑡} π(𝑢, 𝑡) 𝑑𝑡 is given by

\[
\hat{\pi}(u,s) = \int_0^{\infty} e^{-st}\,e^{-L(t)(1-u)}\,dt. \tag{5.3}
\]

In order to see that 𝑀(𝑡) can be viewed as a special case of the proposed multivariate counting process, we first consider a single-state semi-Markov process where the dwell time in the state is deterministic, given by 𝑇. The marginal counting process 𝑀(𝑡) = [𝑀_0(𝑡)] then converges in distribution to the NHPP 𝑀(𝑡) as 𝑇 → ∞, as we show next.

Let 𝒥 = {0}, 𝑢_0 = 𝑢, 𝑣_{00} = 1, 𝑤 = 0, and λ_0(𝑥) = λ(𝑥). It then follows that a_{00}(𝑥) = δ(𝑥−𝑇) and therefore 𝑏_{𝑘00}(𝑡) = δ(𝑡−𝑇) 𝑔_0(𝑡, 𝑘) from (4.6). This in turn implies that

\[
\beta_{00}(u,s) = \int_0^{\infty} e^{-st}\,\delta(t-T)\,e^{-L_0(t)(1-u)}\,dt = e^{-\left\{sT+L_0(T)(1-u)\right\}}. \tag{5.4}
\]

Let 𝑈(𝑡) be the step function defined by 𝑈(𝑡) = 0 if 𝑡 < 0 and 𝑈(𝑡) = 1 otherwise. Since Ā_0(𝑡) = 1 − 𝑈(𝑡−𝑇) = δ_{\{0≤𝑡<𝑇\}}, one sees that

\[
\bar{\beta}_0(u,s) = \int_0^{\infty} e^{-st}\,\delta_{\{0\le t<T\}}\,e^{-L_0(t)(1-u)}\,dt = \int_0^T e^{-st}\,e^{-L_0(t)(1-u)}\,dt. \tag{5.5}
\]

Theorem 4.1 together with (5.4) and (5.5) then leads to

\[
\hat{\varphi}_{00}(u,1,0^+,s) = \frac{\bar{\beta}_0(u,s)}{1-\beta_{00}(u,s)} = \frac{\int_0^T e^{-st}\,e^{-L_0(t)(1-u)}\,dt}{1-e^{-\left\{sT+L_0(T)(1-u)\right\}}}. \tag{5.6}
\]

Now it can be readily seen that φ̂_{00}(𝑢, 1, 0^+, 𝑠) in (5.6) converges to π̂(𝑢, 𝑠) in (5.3) as 𝑇 → ∞, proving the claim.

5.3. Derivation of MMPP

Let 𝑀(𝑡) be an MMPP of Section 2.3 characterized by (ν, λ_𝐷). We show that the Laplace transform generating function π̂(𝑢, 𝑠) = ∫_0^∞ e^{−𝑠𝑡} E[𝑢^{𝑀(𝑡)}] 𝑑𝑡 given in (2.16) can be derived as a special case of Theorem 4.1.

For the unified multivariate counting process, let 𝒥 = {0, …, 𝐽}, 𝑀(𝑡) = Σ_{𝑖∈𝒥} 𝑀_𝑖(𝑡), u = 𝑢1, v = 1, 𝑤 = 0, λ_𝑖(𝑡) = λ_𝑖, and a(𝑥) = [e^{−ν_𝑖𝑥} ν_{𝑖𝑗}] = e^{−ν_𝐷𝑥} ν. From (3.3) one sees that Ā_𝐷(𝑥) = [δ_{\{𝑖=𝑗\}}\{1 − Σ_{𝑗∈𝒥} 𝐴_{𝑖𝑗}(𝑥)\}] = [δ_{\{𝑖=𝑗\}} e^{−ν_𝑖𝑥}] = e^{−ν_𝐷𝑥}. Therefore, one sees from (4.6), (4.9), and (4.10) that

\[
\beta(u\underline{1},s) = \sum_{k=0}^{\infty}\left\{\int_0^{\infty} e^{-st}\,g_D(t,k)\,a(t)\,dt\right\}u^k = \left[sI+\nu_D+(1-u)\lambda_D\right]^{-1}\nu, \tag{5.7}
\]

and similarly one has, from (4.7), (4.11), and (4.12),

\[
\bar{\beta}_D(u\underline{1},s) = \sum_{k=0}^{\infty}\left\{\int_0^{\infty} e^{-st}\,\bar{A}_D(t)\,g_D(t,k)\,dt\right\}u^k = \left[sI+\nu_D+(1-u)\lambda_D\right]^{-1}. \tag{5.8}
\]

It then follows from Theorem 4.1, (5.7), and (5.8) that

\[
\hat{\varphi}(u\underline{1},\underline{\underline{1}},0^+,s) = \left[I-\beta(u\underline{1},s)\right]^{-1}\bar{\beta}_D(u\underline{1},s) = \left[sI+\nu_D+(1-u)\lambda_D-\nu\right]^{-1} = \left[sI-Q+(1-u)\lambda_D\right]^{-1}, \tag{5.9}
\]

which coincides with (2.16), as expected.

5.4. Derivation of Renewal Process

In order to demonstrate that a renewal process is a special case of the unified multivariate counting process, we follow the line of argument for the case of Poisson processes. Namely, let 𝒥 = {0}, 𝑁(𝑡) = 𝑁_{00}(𝑡), 𝑢_0 = 1, 𝑣_{00} = 𝑣, 𝑤 = 0, a_{00}(𝑥) = a(𝑥), and Ā_0(𝑥) = 1 − ∫_0^𝑥 a(𝑦) 𝑑𝑦. From Theorem 4.1, one has

\[
\hat{\varphi}_{00}(1,v,0^+,s) = \frac{\bar{\beta}_0(1,s)}{1-v\,\beta_{00}(1,s)} = \frac{\int_0^{\infty} e^{-st}\,\bar{A}(t)\,dt}{1-v\int_0^{\infty} e^{-st}\,a(t)\,dt} = \frac{1}{1-v\,\alpha(s)}\cdot\frac{1-\alpha(s)}{s}, \tag{5.10}
\]

which agrees with (2.19).

5.5. Derivation of MRP

Let 𝑁_ℓ(𝑡) = [𝑁_{ℓ0}(𝑡), …, 𝑁_{ℓ𝐽}(𝑡)] be an MRP as discussed in Section 2.5. We recall that 𝑁_{ℓ𝑟}(𝑡) describes the number of entries of the semi-Markov process into state 𝑟 in (0, 𝑡] given that 𝐽(0) = ℓ. For the unified multivariate counting process, 𝑁_{𝑖𝑗}(𝑡) counts the number of transitions from state 𝑖 to state 𝑗 in (0, 𝑡]. Hence, one has 𝑁_{ℓ𝑟}(𝑡) = Σ_{𝑖∈𝒥} 𝑁_{𝑖𝑟}(𝑡) provided that 𝐽(0) = ℓ. Accordingly, we set v = [𝑣_{𝑖𝑟}] with 𝑣_{𝑖𝑟} = ṽ_𝑟 for all 𝑖 ∈ 𝒥. With 𝑤 = 0^+, u = 1, and λ_ℓ(𝑡) = 0 for all ℓ ∈ 𝒥, one has β_{ℓ𝑟}(1, 𝑠) = α_{ℓ𝑟}(𝑠) and β̄_ℓ(1, 𝑠) = {1 − α_ℓ(𝑠)}/𝑠 from (4.6), (4.7), and (4.9) through (4.12), where α_ℓ(𝑠) = Σ_{𝑟∈𝒥} α_{ℓ𝑟}(𝑠). Let ṽ_𝐷 = [δ_{\{ℓ=𝑟\}} ṽ_𝑟]. It then follows from Theorem 4.1 that

\[
\hat{\varphi}(\underline{1},v,0^+,s) = \left[I-\alpha(s)\,\tilde{v}_D\right]^{-1}\left[\delta_{\{\ell=r\}}\,\frac{1-\alpha_\ell(s)}{s}\right]. \tag{5.11}
\]

It should be noted that, with ṽ = [ṽ_0, …, ṽ_𝐽] and ṽ^{𝑁_ℓ(𝑡)} = Π_{𝑟∈𝒥} ṽ_𝑟^{𝑁_{ℓ𝑟}(𝑡)}, the (ℓ, 𝑟) element of φ̂(1, v, 0^+, 𝑠) in (5.11) can be written as

\[
\hat{\varphi}_{\ell r}(\underline{1},v,0^+,s) = \int_0^{\infty} e^{-st}\,\mathrm{E}\left[\tilde{v}^{N_\ell(t)};\ J(t)=r \mid J(0)=\ell\right]dt. \tag{5.12}
\]

We now focus on 𝑁_{ℓ𝑟}(𝑡) for ℓ = 0, 1, …, 𝐽. In doing so, let ṽ_{𝐷𝑗} := diag{1, …, 1, ṽ_𝑗, 1, …, 1}, with ṽ_𝑗 in the 𝑗th position, and define

\[
\psi(\tilde{v}_r,s) \overset{\mathrm{def}}{=} \left[\int_0^{\infty} e^{-st}\,\mathrm{E}\left[\tilde{v}_r^{\,N_{0r}(t)} \mid J(0)=0\right]dt,\ \ldots,\ \int_0^{\infty} e^{-st}\,\mathrm{E}\left[\tilde{v}_r^{\,N_{Jr}(t)} \mid J(0)=J\right]dt\right]^{\top}. \tag{5.13}
\]

It then follows from (5.11) through (5.13) that

\[
\psi(\tilde{v}_r,s) = \left[I-\alpha(s)\,\tilde{v}_{Dr}\right]^{-1}\left[\delta_{\{\ell=r\}}\,\frac{1-\alpha_\ell(s)}{s}\right]\underline{1}, \tag{5.14}
\]

that is, one has

\[
\psi(\tilde{v}_r,s) = \frac{1}{s}\left[I-\alpha(s)\,\tilde{v}_{Dr}\right]^{-1}\left\{\underline{1}-\alpha(s)\,\underline{1}\right\}. \tag{5.15}
\]

We recall that 𝐻_{ℓ𝑟}(𝑡) = E[𝑁_{ℓ𝑟}(𝑡)], which can be obtained by differentiating ψ(ṽ_𝑟, 𝑠) with respect to ṽ_𝑟 at ṽ_𝑟 = 1. More formally, one has

\[
\frac{\partial}{\partial\tilde{v}_r}\psi(\tilde{v}_r,s)\Big|_{\tilde{v}_r=1} = \left[\mathcal{L}\left\{\mathrm{E}\left[N_{0r}(t)\mid J(0)=0\right]\right\},\ \ldots,\ \mathcal{L}\left\{\mathrm{E}\left[N_{Jr}(t)\mid J(0)=J\right]\right\}\right]^{\top}, \tag{5.16}
\]

which is the 𝑟th column of ℒ{𝐻(𝑡)} given in (2.20). By noting that

\[
\frac{d}{dx}\left[I-f(x)\right]^{-1} = \left[I-f(x)\right]^{-1}\,\frac{df(x)}{dx}\,\left[I-f(x)\right]^{-1}, \tag{5.17}
\]

one sees from (5.15) that

\[
\frac{\partial}{\partial\tilde{v}_j}\psi(\tilde{v}_j,s)\Big|_{\tilde{v}_j=1} = \frac{1}{s}\left[I-\alpha(s)\right]^{-1}\begin{bmatrix}0&\cdots&\alpha_{0j}(s)&\cdots&0\\ \vdots& &\vdots& &\vdots\\ 0&\cdots&\alpha_{Jj}(s)&\cdots&0\end{bmatrix}\left[I-\alpha(s)\right]^{-1}\left\{\underline{1}-\alpha(s)\,\underline{1}\right\}. \tag{5.18}
\]

Since [𝐼 − α(𝑠)]^{−1} = Σ_{𝑘=0}^∞ α(𝑠)^𝑘 and α(𝑠)^0 = 𝐼, it can be seen that

\[
\left[I-\alpha(s)\right]^{-1}\left\{\underline{1}-\alpha(s)\,\underline{1}\right\} = \sum_{k=0}^{\infty}\alpha(s)^k\left\{\underline{1}-\alpha(s)\,\underline{1}\right\} = \underline{1}. \tag{5.19}
\]

Substituting (5.19) into (5.18), one then concludes that

\[
\frac{\partial}{\partial\tilde{v}_j}\psi(\tilde{v}_j,s)\Big|_{\tilde{v}_j=1} = \frac{1}{s}\left[I-\alpha(s)\right]^{-1}\left[\alpha_{0j}(s),\ldots,\alpha_{Jj}(s)\right]^{\top}. \tag{5.20}
\]

This in turn implies that

\[
\left[\frac{\partial}{\partial\tilde{v}_0}\psi(\tilde{v}_0,s)\Big|_{\tilde{v}_0=1},\ \ldots,\ \frac{\partial}{\partial\tilde{v}_J}\psi(\tilde{v}_J,s)\Big|_{\tilde{v}_J=1}\right] = \frac{1}{s}\left[I-\alpha(s)\right]^{-1}\alpha(s), \tag{5.21}
\]

which agrees with ℒ{𝐻(𝑡)} of (2.20), completing the derivation.

5.6. Derivation of NESMPS

As in Section 2.6, let the state space 𝒥 of 𝐽(𝑡) be decomposed into a set of good states 𝐺 (≠ ∅) and a set of bad states 𝐵 (≠ ∅) satisfying 𝒥 = 𝐺 ∪ 𝐵 and 𝐺 ∩ 𝐵 = ∅. The counting process 𝑁_{𝐺𝐵}(𝑡) of Section 2.6 describes the number of entries of 𝐽(𝑡) into 𝐵 by time 𝑡. The Laplace transform generating function of the joint probability of 𝑁_{𝐺𝐵}(𝑡), the age process 𝑋(𝑡), and 𝐽(𝑡) is given in (2.26).

In the context of the unified multivariate counting process [𝑀(𝑡), 𝑁(𝑡)] discussed in Section 3, one expects to have

\[
N_{GB}(t) = \sum_{i\in G}\sum_{j\in B} N_{ij}(t). \tag{5.22}
\]

In order to prove (5.22) formally, we set

\[
u = \underline{1}; \qquad v = \begin{bmatrix} 1_{BB} & 1_{BG} \\ v\,1_{GB} & 1_{GG} \end{bmatrix}. \tag{5.23}
\]

From (3.7), (4.6), (4.9), and (4.10), one has β_{𝑖𝑗}(1, 𝑠) = α_{𝑖𝑗}(𝑠), so that

\[
\tilde{\beta}(\underline{1},v,s) = \begin{bmatrix} \alpha_{BB}(s) & \alpha_{BG}(s) \\ v\,\alpha_{GB}(s) & \alpha_{GG}(s) \end{bmatrix}, \tag{5.24}
\]

where β̃(u, v, 𝑠) is as given in Theorem 4.1. Similarly, it can be seen from (3.7), (4.7), (4.11), and (4.12) that β̄_𝑖(1, 𝑤+𝑠) = {1 − α_𝑖(𝑤+𝑠)}/(𝑤+𝑠) and hence

\[
\bar{\beta}_D(\underline{1},w+s) = \frac{1}{w+s}\left[I-\alpha_D(w+s)\right]. \tag{5.25}
\]

Substituting (5.24) and (5.25) into (4.20), it then follows that

\[
\hat{\varphi}(\underline{1},v,w,s) = \frac{1}{w+s}\left[I-\tilde{\beta}(\underline{1},v,s)\right]^{-1}\left[I-\alpha_D(w+s)\right]. \tag{5.26}
\]

By comparing (2.26) with (5.26), (5.22) holds true if and only if

\[
\gamma_0(s)\left[I-v\,\beta_0(s)\right]^{-1} = \left[I-\tilde{\beta}(\underline{1},v,s)\right]^{-1}. \tag{5.27}
\]

In what follows, we prove that (5.27) indeed holds true. From (5.24), one sees that

\[
I-\tilde{\beta}(\underline{1},v,s) = \begin{bmatrix} \chi_B^{-1}(s) & -\alpha_{BG}(s) \\ -v\,\alpha_{GB}(s) & \chi_G^{-1}(s) \end{bmatrix}, \tag{5.28}
\]

where χ_𝐺(𝑠) and χ_𝐵(𝑠) are as given in (2.28). We define the inverse matrix of (5.28) by

\[
\left[I-\tilde{\beta}(\underline{1},v,s)\right]^{-1} = \begin{bmatrix} C_{BB}(v,s) & C_{BG}(v,s) \\ C_{GB}(v,s) & C_{GG}(v,s) \end{bmatrix}. \tag{5.29}
\]

Since the multiplication of the two matrices in (5.28) and (5.29) yields the identity matrix, it follows that

\[
\begin{aligned}
\chi_B^{-1}(s)\,C_{BB}(v,s) - \alpha_{BG}(s)\,C_{GB}(v,s) &= I_{BB},\\
\chi_B^{-1}(s)\,C_{BG}(v,s) - \alpha_{BG}(s)\,C_{GG}(v,s) &= 0_{BG},\\
-v\,\alpha_{GB}(s)\,C_{BB}(v,s) + \chi_G^{-1}(s)\,C_{GB}(v,s) &= 0_{GB},\\
-v\,\alpha_{GB}(s)\,C_{BG}(v,s) + \chi_G^{-1}(s)\,C_{GG}(v,s) &= I_{GG}.
\end{aligned} \tag{5.30}
\]

Solving the above equations for the blocks of C(𝑣, 𝑠), one has

\[
\begin{aligned}
C_{BB}(v,s) &= \chi_B(s) + v\,\chi_B(s)\,\alpha_{BG}(s)\,\chi_G(s)\left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]^{-1}\alpha_{GB}(s)\,\chi_B(s), && (5.31)\\
C_{BG}(v,s) &= \chi_B(s)\,\alpha_{BG}(s)\,\chi_G(s)\left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]^{-1}, && (5.32)\\
C_{GB}(v,s) &= v\,\chi_G(s)\left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]^{-1}\alpha_{GB}(s)\,\chi_B(s), && (5.33)\\
C_{GG}(v,s) &= \chi_G(s)\left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]^{-1}. && (5.34)
\end{aligned}
\]

We next turn our attention to the left-hand side of (5.27). From (2.29), one sees that

\[
I-v\,\beta_0(s) = \begin{bmatrix} I_{BB} & 0_{BG} \\ -v\,\alpha_{GB}(s)\chi_B(s) & I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s) \end{bmatrix}. \tag{5.35}
\]

As before, we define the inverse matrix of (5.35) as

\[
\left[I-v\,\beta_0(s)\right]^{-1} = \begin{bmatrix} D_{BB}(v,s) & D_{BG}(v,s) \\ D_{GB}(v,s) & D_{GG}(v,s) \end{bmatrix}. \tag{5.36}
\]

Multiplying the two matrices in (5.35) and (5.36) then yields

\[
\begin{aligned}
D_{BB}(v,s) &= I_{BB}, \qquad D_{BG}(v,s) = 0_{BG},\\
-v\,\alpha_{GB}(s)\chi_B(s)\,D_{BB}(v,s) + \left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]D_{GB}(v,s) &= 0_{GB},\\
-v\,\alpha_{GB}(s)\chi_B(s)\,D_{BG}(v,s) + \left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]D_{GG}(v,s) &= I_{GG},
\end{aligned} \tag{5.37}
\]

which in turn can be solved for the blocks of D(𝑣, 𝑠) as

\[
\begin{aligned}
D_{BB}(v,s) &= I_{BB}, && (5.38)\\
D_{BG}(v,s) &= 0_{BG}, && (5.39)\\
D_{GB}(v,s) &= v\left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]^{-1}\alpha_{GB}(s)\,\chi_B(s), && (5.40)\\
D_{GG}(v,s) &= \left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]^{-1}. && (5.41)
\end{aligned}
\]

Let the left-hand side matrix of (5.27) be described as

\[
\gamma_0(s)\left[I-v\,\beta_0(s)\right]^{-1} = \begin{bmatrix} Z_{BB}(v,s) & Z_{BG}(v,s) \\ Z_{GB}(v,s) & Z_{GG}(v,s) \end{bmatrix}. \tag{5.42}
\]

From (2.30) and (5.36) through (5.41), one sees that

\[
Z_{BB}(v,s) = I_{BB} + \alpha_{BB}(s)\chi_B(s) + v\,\chi_B(s)\alpha_{BG}(s)\chi_G(s)\left[I_{GG}-v\,\alpha_{GB}(s)\chi_B(s)\alpha_{BG}(s)\chi_G(s)\right]^{-1}\alpha_{GB}(s)\chi_B(s). \tag{5.43}
\]

From (2.28), one easily sees that χ_𝐵(𝑠) = 𝐼_{𝐵𝐵} + α_{𝐵𝐵}(𝑠)χ_𝐵(𝑠), and hence 𝑍_{𝐵𝐵}(𝑣, 𝑠) = 𝐶_{𝐵𝐵}(𝑣, 𝑠) from (5.31). The fact that 𝑍_{𝐵𝐺}(𝑣, 𝑠) = 𝐶_{𝐵𝐺}(𝑣, 𝑠) is straightforward from (5.32). With χ_𝐺(𝑠) = 𝐼_{𝐺𝐺} + α_{𝐺𝐺}(𝑠)χ_𝐺(𝑠), one has 𝑍_{𝐺𝐵}(𝑣, 𝑠) = 𝐶_{𝐺𝐵}(𝑣, 𝑠) from (5.33) and 𝑍_{𝐺𝐺}(𝑣, 𝑠) = 𝐶_{𝐺𝐺}(𝑣, 𝑠) from (5.34), completing the proof of (5.27).

5.7. Derivation of MAP

We recall that an MAP is constructed from an absorbing Markov-chain 𝐽(𝑡) in continuous time on 𝒥 = 𝐺 ∪ 𝐵, with 𝐵 being a set of absorbing states, governed by ν defined in (2.32). A replacement Markov-chain 𝐽̃(𝑡) is then generated from 𝐽(𝑡), where 𝐽̃(𝑡) coincides with 𝐽(𝑡) within 𝐺 starting from a state in 𝐺. As soon as 𝐽(𝑡) reaches a state 𝑟 ∈ 𝐵, it is instantaneously replaced at a state 𝑗 ∈ 𝐺 with probability p̃_{𝑟𝑗} and the process continues.

In order to deduce an MAP from the unified multivariate counting process [𝑀(𝑡), 𝑁(𝑡)] as a special case, we start from (4.20) in Theorem 4.1, where the underlying semi-Markov process is now the replacement Markov-chain 𝐽̃(𝑡) discussed above. This replacement Markov-chain is defined on 𝐺 and governed by ν̃ = 𝐶 + 𝐷 with 𝐶 = [𝑐_{𝑖𝑗}] = [ν_{𝑖𝑗}]_{𝑖,𝑗∈𝐺} and 𝐷 = [𝑑_{𝑖𝑗}] = [Σ_{𝑟∈𝐵} ν_{𝑖𝑟} p̃_{𝑟𝑗}]_{𝑖,𝑗∈𝐺} as in (2.33). We note that β_{𝑖𝑗}(1, 𝑠) = α_{𝑖𝑗}(𝑠) from (3.7), (4.6), (4.9), and (4.10), and β̄_𝑖(1, 𝑠) = {1 − α_𝑖(𝑠)}/𝑠 from (3.7), (4.7), (4.11), and (4.12). Hence, it follows that

\[
\hat{\varphi}(\underline{1},v,0^+,s) = \left[I-\left[v_{ij}\,\alpha_{ij}(s)\right]\right]^{-1}\left[\delta_{\{i=j\}}\,\frac{1-\alpha_i(s)}{s}\right]. \tag{5.44}
\]

Let ν̃_𝐷 = 𝐶_𝐷 + 𝐷_𝐷 as in (2.35). Since 𝐽̃(𝑡) is a Markov-chain, the dwell time in state 𝑖 is independent of the next state and is exponentially distributed with parameter ν̃_𝑖 = 𝑐_𝑖 + 𝑑_𝑖. Consequently, one has

\[
\alpha_i(s) = \frac{\tilde{\nu}_i}{s+\tilde{\nu}_i}; \qquad \alpha_{ij}(s) = \frac{c_{ij}+d_{ij}}{\tilde{\nu}_i}\,\alpha_i(s) = \frac{c_{ij}+d_{ij}}{s+\tilde{\nu}_i}. \tag{5.45}
\]

Substituting (5.45) into (5.44) and noting [δ_{\{𝑖=𝑗\}}{(1−α_𝑖(𝑠))/𝑠}]^{−1} = 𝑠𝐼 + ν̃_𝐷, it follows that

\[
\hat{\varphi}(\underline{1},v,0^+,s) = \left[sI-\tilde{Q}+C+D-\left[v_{ij}\left(c_{ij}+d_{ij}\right)\right]\right]^{-1}, \tag{5.46}
\]

where 𝑄̃ in (2.34) is employed.

As it stands, the Laplace transform generating function of (5.46) describes the matrix counting process 𝑁(𝑡) = [𝑁_{𝑖𝑗}(𝑡)], where 𝑣_{𝑖𝑗} corresponds to 𝑁_{𝑖𝑗}(𝑡). For 𝑁_MAP(𝑡) of Section 2.7, it is only necessary to count the number of replacements in (0, 𝑡]. Given that state 𝑗 ∈ 𝐺 is visited from the current state 𝑖 ∈ 𝐺, the move is direct within 𝐺 with probability 𝑐_{𝑖𝑗}/(𝑐_{𝑖𝑗}+𝑑_{𝑖𝑗}), and it involves a replacement with probability 𝑑_{𝑖𝑗}/(𝑐_{𝑖𝑗}+𝑑_{𝑖𝑗}). Accordingly, we set

\[
v_{ij} = \frac{c_{ij}}{c_{ij}+d_{ij}} + \frac{d_{ij}}{c_{ij}+d_{ij}}\,v. \tag{5.47}
\]

Substitution of (5.47) into (5.46) then leads to

\[
\hat{\varphi}(\underline{1},v,0^+,s) = \left[sI-\tilde{Q}+(1-v)D\right]^{-1}, \tag{5.48}
\]

which coincides with φ̂(𝑣, 𝑠) of (2.44), as expected.

5.8. Derivation of ACPGRP

The age-dependent counting process of Sumita and Shanthikumar [12] is extended in this paper in that the underlying renewal process is replaced by a semi-Markov process with state-dependent nonhomogeneous hazard functions. Accordingly, the original model can be treated as a special case of the unified multivariate counting process proposed in this paper by setting 𝒥 = {0}, 𝑀(𝑡) = 𝑀_0(𝑡), 𝑁(𝑡) = 𝑁_{00}(𝑡), 𝑋(𝑡) = 𝑋_0(𝑡), 𝑢_0 = 𝑢, and 𝑣_{00} = 𝑣. With this specification, one sees from Theorem 4.1 that

\[
\hat{\varphi}_{00}(u,v,w,s) = \frac{\bar{\beta}(u,w+s)}{1-v\,\beta(u,s)}. \tag{5.49}
\]

It then follows that

\[
\hat{\pi}(u,s) = \hat{\varphi}_{00}(u,1,0,s) = \frac{\bar{\beta}(u,s)}{1-\beta(u,s)}, \tag{5.50}
\]

which coincides with (2.53).

6. Asymptotic Analysis

Let 𝒜, ℬ, and 𝒩 be arbitrary subsets of the state space 𝒥 of the underlying semi-Markov process, and define

\[
M_{\mathcal{A}}(t) = \sum_{i\in\mathcal{A}} M_i(t); \qquad N_{\mathcal{B}\mathcal{N}}(t) = \sum_{i\in\mathcal{B}}\sum_{j\in\mathcal{N}} N_{ij}(t), \tag{6.1}
\]

where 𝑀_𝒜(𝑡) describes the total number of items that have arrived in [0, 𝑡] according to the nonhomogeneous Poisson processes within 𝒜 and 𝑁_{ℬ𝒩}(𝑡) denotes the number of transitions from any state in ℬ to any state in 𝒩 that have occurred in [0, 𝑡]. An appropriate choice of 𝒜, ℬ, and 𝒩 would then enable one to analyze processes of interest in a variety of applications. In the variable bit rate coding scheme for video transmission explained in Section 1, for example, one may be interested in the packet arrival stream for a specified set of encoder modes, represented by 𝑀_𝒜(𝑡). In a reliability model, the underlying semi-Markov process may describe the state of a production machine. Minimal repair would take place whenever the system state is in 𝒜 at cost 𝑐, while a complete overhaul would be forced at cost 𝑑 if the machine state enters 𝒩 ⊂ 𝒥. The total maintenance cost 𝑆(𝑡) is then given by 𝑆(𝑡) = 𝑐𝑀_𝒜(𝑡) + 𝑑𝑁_{𝒥𝒩}(𝑡). A simplified version of this cost structure has been analyzed by Sumita and Shanthikumar [12], where the underlying semi-Markov process is merely a renewal process with 𝒥 = 𝒜 = 𝒩 = {1}. The purpose of this section is to study a more general cost structure specified by

\[
S(t) = c\,M_{\mathcal{A}}(t) + d\,N_{\mathcal{B}\mathcal{N}}(t), \tag{6.2}
\]

with focus on the Laplace transform generating functions and the related asymptotic behavior of the moments of 𝑀_𝒜(𝑡), 𝑁_{ℬ𝒩}(𝑡), and 𝑆(𝑡) based on Theorem 4.1.

For notational simplicity, we introduce the following vectors and matrices. Let 𝒜, ℬ, 𝒩 ⊂ 𝒥 with their complements defined respectively by 𝒜^𝐶 = 𝒥∖𝒜, ℬ^𝐶 = 𝒥∖ℬ, and 𝒩^𝐶 = 𝒥∖𝒩. The cardinality of a set 𝐴 is denoted by |𝐴|. Define

\[
\underline{1} = [1,\ldots,1]^{\top}\in\mathbb{R}^{J+1}; \qquad \underline{\underline{1}} = \left[1\right]\in\mathbb{R}^{(J+1)\times(J+1)}, \tag{6.3}
\]

\[
u(\mathcal{A}) = \left[u_i\right]\in\mathbb{R}^{J+1},\ u_i = \begin{cases} u, & i\in\mathcal{A},\\ 1, & \text{else}; \end{cases} \qquad v(v,\mathcal{B},\mathcal{N}) = \left[v_{ij}\right]\in\mathbb{R}^{(J+1)\times(J+1)},\ v_{ij} = \begin{cases} v, & i\in\mathcal{B},\ j\in\mathcal{N},\\ 1, & \text{else}. \end{cases} \tag{6.4}
\]

Submatrices of 𝐴 ∈ ℝ^{(𝐽+1)×(𝐽+1)} are denoted by

\[
A_{\mathcal{A}\bullet} = \left[A_{ij}\right]_{i\in\mathcal{A},\,j\in\mathcal{J}}\in\mathbb{R}^{|\mathcal{A}|\times(J+1)}; \quad A_{\bullet\mathcal{A}} = \left[A_{ij}\right]_{i\in\mathcal{J},\,j\in\mathcal{A}}\in\mathbb{R}^{(J+1)\times|\mathcal{A}|}; \quad A_{\mathcal{B}\mathcal{N}} = \left[A_{ij}\right]_{i\in\mathcal{B},\,j\in\mathcal{N}}\in\mathbb{R}^{|\mathcal{B}|\times|\mathcal{N}|}, \tag{6.5}
\]

so that one has

\[
A = \begin{bmatrix} A_{\mathcal{B}\mathcal{N}} & A_{\mathcal{B}\mathcal{N}^C} \\ A_{\mathcal{B}^C\mathcal{N}} & A_{\mathcal{B}^C\mathcal{N}^C} \end{bmatrix}, \tag{6.6}
\]

with the understanding that the states are arranged appropriately.

Let 𝐴_𝑘 = ∫_0^∞ 𝑥^𝑘 a(𝑥) 𝑑𝑥, 𝑘 = 0, 1, 2, …. Throughout the paper, we assume that 𝐴_𝑘 < ∞ for 0 ≤ 𝑘 ≤ 2. In particular, one has 𝐴_0 = 𝐴(∞), which is stochastic. The Taylor expansion of the Laplace transform α(𝑠) is then given by

\[
\alpha(s) = A_0 - s\,A_1 + \frac{s^2}{2}A_2 + o\left(s^2\right). \tag{6.7}
\]

Let e be the normalized left eigenvector of 𝐴_0 associated with eigenvalue 1, so that e𝐴_0 = e and e1 = 1.

We recall from Theorem 4.1 that

\[
\hat{\varphi}(u,v,w,s) = \left[I-\tilde{\beta}(u,v,s)\right]^{-1}\bar{\beta}_D(u,w+s), \tag{6.8}
\]

where

\[
\tilde{\beta}(u,v,s) = \left[v_{ij}\,\beta_{ij}(u_i,s)\right]. \tag{6.9}
\]

From (3.6), (3.7), (4.6), (4.9), and (4.10), one sees that

\[
\beta_{ij}(u,s) = \int_0^{\infty} e^{-st}\,e^{-L_i(t)(1-u)}\,a_{ij}(t)\,dt. \tag{6.10}
\]

Similarly, it follows from (3.6), (3.7), (4.7), (4.11), and (4.12) that

\[
\bar{\beta}_i(u,s) = \int_0^{\infty} e^{-st}\,e^{-L_i(t)(1-u)}\,\bar{A}_i(t)\,dt. \tag{6.11}
\]

The 𝑟th-order partial derivatives of β_{𝑖𝑗}(𝑢, 𝑠) and β̄_𝑖(𝑢, 𝑠) with respect to 𝑢 at 𝑢 = 1 are then given by

\[
\zeta_{rij}(s) \overset{\mathrm{def}}{=} \frac{\partial^r}{\partial u^r}\beta_{ij}(u,s)\Big|_{u=1} = \int_0^{\infty} e^{-st}\,L_i^r(t)\,a_{ij}(t)\,dt; \qquad \bar{\zeta}_{ri}(s) \overset{\mathrm{def}}{=} \frac{\partial^r}{\partial u^r}\bar{\beta}_i(u,s)\Big|_{u=1} = \int_0^{\infty} e^{-st}\,L_i^r(t)\,\bar{A}_i(t)\,dt, \quad r=1,2. \tag{6.12}
\]

In matrix form, (6.12) can be written as

\[
\zeta_r(s) = \int_0^{\infty} e^{-st}\,L_D^r(t)\,a(t)\,dt; \qquad \bar{\zeta}_{rD}(s) = \int_0^{\infty} e^{-st}\,L_D^r(t)\,\bar{A}_D(t)\,dt, \quad r=1,2. \tag{6.13}
\]

Let Φ_{𝑟𝑘} = ∫_0^∞ 𝑡^𝑘 𝐿_𝐷^𝑟(𝑡) a(𝑡) 𝑑𝑡 and Φ̄_{𝑟𝐷𝑘} = ∫_0^∞ 𝑡^𝑘 𝐿_𝐷^𝑟(𝑡) Ā_𝐷(𝑡) 𝑑𝑡, 𝑟 = 1, 2. The Taylor expansions of ζ_𝑟(𝑠) and ζ̄_{𝑟𝐷}(𝑠) are then given by

\[
\zeta_r(s) = \Phi_{r0} - s\,\Phi_{r1} + \frac{s^2}{2}\Phi_{r2} + o\left(s^2\right); \qquad \bar{\zeta}_{rD}(s) = \bar{\Phi}_{rD0} - s\,\bar{\Phi}_{rD1} + \frac{s^2}{2}\bar{\Phi}_{rD2} + o\left(s^2\right), \quad r=1,2. \tag{6.14}
\]

In order to prove the main results of this section, the following theorem of Keilson [23] plays a key role.

Theorem 6.1 (Keilson [23]). As 𝑠 → 0^+, one has

\[
\left[I-\alpha(s)\right]^{-1} = \frac{1}{s}H_{-1} + H_0 + o(1), \tag{6.15}
\]

where

\[
H_{-1} = \frac{1}{m}\,\underline{1}\,e, \quad m = e\,A_1\,\underline{1}; \qquad Z_0 = \left[I-A_0+\underline{1}\,e\right]^{-1}; \qquad H_0 = \left(I-H_{-1}A_1\right)Z_0\left(I-A_1H_{-1}\right) + \frac{1}{2}H_{-1}A_2H_{-1}. \tag{6.16}
\]
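Theorem 6.1 can be checked numerically for any kernel whose moments are available in closed form. The sketch below (ours, not from the paper; a two-state kernel with exponential dwell times is an illustrative choice, so that 𝐴_𝑘 = diag(𝑘!/θ_𝑖^𝑘)𝐴_0) builds 𝐻_{−1} and 𝐻_0 and compares (6.15) against a direct inversion at a small 𝑠:

```python
import numpy as np

A0 = np.array([[0.2, 0.8],
               [0.6, 0.4]])                 # embedded (stochastic) matrix
theta = np.array([1.0, 2.0])                # exponential dwell rates per state
A1 = (1 / theta)[:, None] * A0              # A_k = diag(k!/theta_i^k) A_0
A2 = (2 / theta ** 2)[:, None] * A0

w, V = np.linalg.eig(A0.T)                  # left eigenvector: e A0 = e, e 1 = 1
e = np.real(V[:, np.argmin(np.abs(w - 1))]); e /= e.sum()
one = np.ones(2)
m = e @ A1 @ one
H_m1 = np.outer(one, e) / m                                  # H_{-1} = (1/m) 1 e
Z0 = np.linalg.inv(np.eye(2) - A0 + np.outer(one, e))
H0 = ((np.eye(2) - H_m1 @ A1) @ Z0 @ (np.eye(2) - A1 @ H_m1)
      + 0.5 * H_m1 @ A2 @ H_m1)

s = 1e-4
alpha = (theta / (s + theta))[:, None] * A0                  # alpha(s) for this kernel
lhs = np.linalg.inv(np.eye(2) - alpha)
print(np.max(np.abs(lhs - (H_m1 / s + H0))))                 # residual is o(1), i.e., small
```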

We are now in a position to prove the first key theorem of this section.

Theorem 6.2. Let p(0) be the initial probability vector of the underlying semi-Markov process. As 𝑡 → ∞, one has

\[
\mathrm{E}\left[M_{\mathcal{A}}(t)\right] = p(0)\left\{t\,P_{-1}+P_0\right\}\underline{1} + o(1); \qquad \mathrm{E}\left[N_{\mathcal{B}\mathcal{N}}(t)\right] = p(0)\left\{t\,Q_{-1}+Q_0\right\}\underline{1} + o(1), \tag{6.17}
\]

where

\[
\begin{aligned}
P_{-1} &= H_{-1\,\bullet\mathcal{A}}\,\Phi_{10\,\mathcal{A}\bullet}, & P_0 &= H_{0\,\bullet\mathcal{A}}\,\Phi_{10\,\mathcal{A}\bullet} - H_{-1\,\bullet\mathcal{A}}\,\Phi_{11\,\mathcal{A}\bullet} + H_{-1\,\bullet\mathcal{A}}\,\bar{\Phi}_{1D0\,\mathcal{A}\bullet},\\
Q_{-1} &= H_{-1\,\bullet\mathcal{B}}\left[A_{0\,\mathcal{B}\mathcal{N}},\ 0_{\mathcal{B}\mathcal{N}^C}\right], & Q_0 &= H_{0\,\bullet\mathcal{B}}\left[A_{0\,\mathcal{B}\mathcal{N}},\ 0_{\mathcal{B}\mathcal{N}^C}\right] - H_{-1\,\bullet\mathcal{B}}\left[A_{1\,\mathcal{B}\mathcal{N}},\ 0_{\mathcal{B}\mathcal{N}^C}\right].
\end{aligned} \tag{6.18}
\]

Proof. We first note from (6.8) together with (6.4) that

\[
\int_0^{\infty} e^{-st}\,\mathrm{E}\left[u^{M_{\mathcal{A}}(t)}\right]dt = p(0)\,\hat{\varphi}\left(u(\mathcal{A}),\underline{\underline{1}},0,s\right)\underline{1}; \qquad \int_0^{\infty} e^{-st}\,\mathrm{E}\left[v^{N_{\mathcal{B}\mathcal{N}}(t)}\right]dt = p(0)\,\hat{\varphi}\left(\underline{1},v(v,\mathcal{B},\mathcal{N}),0,s\right)\underline{1}. \tag{6.19}
\]

By taking the partial derivatives of (6.19) with respect to 𝑢 at 𝑢 = 1 and 𝑣 at 𝑣 = 1 respectively, one has

\[
\mathcal{L}\left\{\mathrm{E}\left[M_{\mathcal{A}}(t)\right]\right\} = p(0)\left[I-\alpha(s)\right]^{-1}\left\{\frac{1}{s}\begin{bmatrix}\zeta_{1\,\mathcal{A}\bullet}(s)\\ 0_{\mathcal{A}^C\bullet}\end{bmatrix} + \begin{bmatrix}\bar{\zeta}_{1D\,\mathcal{A}\bullet}(s)\\ 0_{\mathcal{A}^C\bullet}\end{bmatrix}\right\}\underline{1}; \qquad \mathcal{L}\left\{\mathrm{E}\left[N_{\mathcal{B}\mathcal{N}}(t)\right]\right\} = \frac{1}{s}\,p(0)\left[I-\alpha(s)\right]^{-1}\begin{bmatrix}\alpha_{\mathcal{B}\mathcal{N}}(s) & 0_{\mathcal{B}\mathcal{N}^C}\\ 0_{\mathcal{B}^C\mathcal{N}} & 0_{\mathcal{B}^C\mathcal{N}^C}\end{bmatrix}\underline{1}. \tag{6.20}
\]

Theorem 6.1 of Keilson [23] combined with (6.7) and (6.13) then yields the Laplace transform expansions of (6.20), and the theorem follows by inverting those expansions.

The next theorem can be shown in a similar manner by differentiating (6.19) twice with respect to 𝑢 at 𝑢 = 1 and 𝑣 at 𝑣 = 1 respectively, and the proof is omitted.

Theorem 6.3. As 𝑡 → ∞, one has

\[
\mathrm{E}\left[M_{\mathcal{A}}(t)\left\{M_{\mathcal{A}}(t)-1\right\}\right] = p(0)\left\{t^2 P_{-1}^2 + t\left(2P_{-1}\hat{P}_0 + 2P_0P_{-1} + P_2\right)\right\}\underline{1} + o(t); \qquad \mathrm{E}\left[N_{\mathcal{B}\mathcal{N}}(t)\left\{N_{\mathcal{B}\mathcal{N}}(t)-1\right\}\right] = p(0)\left\{t^2 Q_{-1}^2 + 2t\left(Q_{-1}Q_0 + Q_0Q_{-1}\right)\right\}\underline{1} + o(t), \tag{6.21}
\]

where P̂_0 = 𝐻_{0•𝒜}Φ_{10 𝒜•} − 𝐻_{−1•𝒜}Φ_{11 𝒜•}, 𝑃_2 = 𝐻_{−1•𝒜}Φ_{20 𝒜•}, and the other matrices are as defined in Theorem 6.2.

Theorems 6.2 and 6.3 then lead to the following theorem, providing the asymptotic expansions of Var[𝑀_𝒜(𝑡)] and Var[𝑁_{ℬ𝒩}(𝑡)].

Theorem 6.4. As 𝑡 → ∞, one has

\[
\mathrm{Var}\left[M_{\mathcal{A}}(t)\right] = t\,p(0)\,U_0\,\underline{1} + o(t); \qquad \mathrm{Var}\left[N_{\mathcal{B}\mathcal{N}}(t)\right] = t\,p(0)\,V_0\,\underline{1} + o(t), \tag{6.22}
\]

where

\[
U_0 = 2P_{-1}\hat{P}_0 + P_{-1} - P_{-1}\underline{1}\,p(0)\,P_0 + P_0P_{-1} + P_2; \qquad V_0 = 2Q_{-1}Q_0 + Q_{-1} - Q_{-1}\underline{1}\,p(0)\,Q_0 + Q_0Q_{-1}. \tag{6.23}
\]

Proof. It can be readily seen that

\[
\mathrm{Var}\left[X\right] = \mathrm{E}\left[X^2\right] - \mathrm{E}^2\left[X\right] = \mathrm{E}\left[X(X-1)\right] + \mathrm{E}\left[X\right] - \mathrm{E}^2\left[X\right]. \tag{6.24}
\]

Substituting the results of Theorems 6.2 and 6.3 into this equation, one sees that

\[
\mathrm{Var}\left[M_{\mathcal{A}}(t)\right] = p(0)\left\{t^2 U_1 + t\,U_0\right\}\underline{1} + o(t); \qquad \mathrm{Var}\left[N_{\mathcal{B}\mathcal{N}}(t)\right] = p(0)\left\{t^2 V_1 + t\,V_0\right\}\underline{1} + o(t), \tag{6.25}
\]

where

\[
\begin{aligned}
U_1 &= P_{-1}^2 - P_{-1}\underline{1}\,p(0)\,P_{-1}, & U_0 &= 2P_{-1}\hat{P}_0 + P_{-1} - P_{-1}\underline{1}\,p(0)\,P_0 + 2P_0P_{-1} - P_0\underline{1}\,p(0)\,P_{-1} + P_2;\\
V_1 &= Q_{-1}^2 - Q_{-1}\underline{1}\,p(0)\,Q_{-1}, & V_0 &= 2Q_{-1}Q_0 + Q_{-1} - Q_{-1}\underline{1}\,p(0)\,Q_0 + 2Q_0Q_{-1} - Q_0\underline{1}\,p(0)\,Q_{-1}.
\end{aligned} \tag{6.26}
\]

From Theorem 6.1, one has 𝐻_{−1} = (1/𝑚)1e, which is of rank one with identical rows. As can be seen from Theorem 6.2, both 𝑃_{−1} and 𝑄_{−1} therefore satisfy

\[
\underline{1}\,p(0)\,P_{-1} = P_{-1}; \qquad \underline{1}\,p(0)\,Q_{-1} = Q_{-1}, \tag{6.27}
\]

so that 𝑈_1 = 𝑉_1 = 0 and the theorem follows.

The asymptotic behavior of E[𝑆(𝑡)] can be easily found from (6.2) and Theorem 6.2. The asymptotic expansion of Var[𝑆(𝑡)], however, requires a little precaution because it involves the joint expectation of 𝑀_𝒜(𝑡) and 𝑁_{ℬ𝒩}(𝑡). More specifically, one has

\[
\mathrm{Var}\left[S(t)\right] = \mathrm{E}\left[S(t)^2\right] - \mathrm{E}\left[S(t)\right]^2 = c^2\,\mathrm{E}\left[M_{\mathcal{A}}^2(t)\right] + 2cd\,\mathrm{E}\left[M_{\mathcal{A}}(t)N_{\mathcal{B}\mathcal{N}}(t)\right] + d^2\,\mathrm{E}\left[N_{\mathcal{B}\mathcal{N}}^2(t)\right] - c^2\,\mathrm{E}\left[M_{\mathcal{A}}(t)\right]^2 - 2cd\,\mathrm{E}\left[M_{\mathcal{A}}(t)\right]\mathrm{E}\left[N_{\mathcal{B}\mathcal{N}}(t)\right] - d^2\,\mathrm{E}\left[N_{\mathcal{B}\mathcal{N}}(t)\right]^2, \tag{6.28}
\]

so that

\[
\mathrm{Var}\left[S(t)\right] = c^2\,\mathrm{Var}\left[M_{\mathcal{A}}(t)\right] + d^2\,\mathrm{Var}\left[N_{\mathcal{B}\mathcal{N}}(t)\right] + 2cd\,\mathrm{E}\left[M_{\mathcal{A}}(t)N_{\mathcal{B}\mathcal{N}}(t)\right] - 2cd\,\mathrm{E}\left[M_{\mathcal{A}}(t)\right]\mathrm{E}\left[N_{\mathcal{B}\mathcal{N}}(t)\right]. \tag{6.29}
\]

In order to evaluate E[𝑀_𝒜(𝑡)𝑁_{ℬ𝒩}(𝑡)], we note from (6.8) that

\[
\mathcal{L}\left\{\mathrm{E}\left[M_{\mathcal{A}}(t)N_{\mathcal{B}\mathcal{N}}(t)\right]\right\} = p(0)\,\frac{\partial^2}{\partial u\,\partial v}\hat{\varphi}\left(u(\mathcal{A}),\,v(v,\mathcal{B},\mathcal{N}),\,0^+,s\right)\Big|_{u=v=1}\underline{1}. \tag{6.30}
\]

The asymptotic expansion of E[𝑀_𝒜(𝑡)𝑁_{ℬ𝒩}(𝑡)] can then be obtained as for the previous theorems.

Theorem 6.5. As 𝑡 → ∞, one has

\[
\mathrm{E}\left[M_{\mathcal{A}}(t)N_{\mathcal{B}\mathcal{N}}(t)\right] = p(0)\left\{\frac{t^2}{2}\,T_{-1} + t\,T_0\right\}\underline{1} + o(t), \tag{6.31}
\]

where

\[
T_{-1} = P_{-1}Q_{-1} + Q_{-1}P_{-1}; \qquad T_0 = P_0Q_{-1} + P_{-1}Q_0 + R + Q_0P_{-1} + Q_{-1}P_0; \qquad R = H_{-1\,\bullet(\mathcal{A}\cap\mathcal{B})}\left[\Phi_{10\,(\mathcal{A}\cap\mathcal{B})\mathcal{N}},\ 0_{(\mathcal{A}\cap\mathcal{B})\mathcal{N}^C}\right]. \tag{6.32}
\]

Now the key theorem of this section follows from (6.29) and Theorems 6.2, 6.4, and 6.5.

Theorem 6.6. As 𝑡 → ∞, one has

\[
\mathrm{E}\left[S(t)\right] = p(0)\left\{t\left(c\,P_{-1}+d\,Q_{-1}\right) + \left(c\,P_0+d\,Q_0\right)\right\}\underline{1} + o(1); \tag{6.33}
\]
\[
\mathrm{Var}\left[S(t)\right] = t\,p(0)\,W_0\,\underline{1} + o(t), \tag{6.34}
\]

where 𝑊_0 = 𝑐²𝑈_0 + 𝑑²𝑉_0 + 2𝑐𝑑𝑇_0 − 2𝑐𝑑{𝑃_{−1}1p(0)𝑄_0 + 𝑃_01p(0)𝑄_{−1}}.

Proof. Equation (6.33) follows trivially from Theorem 6.2. For (6.34), we note from Theorems 6.2, 6.4, and 6.5 together with (6.29) that

\[
\mathrm{Var}\left[S(t)\right] = p(0)\left\{t^2 W_1 + t\,W_0\right\}\underline{1} + o(t), \tag{6.35}
\]

where

\[
W_1 = cd\,T_{-1} - 2cd\,P_{-1}\underline{1}\,p(0)\,Q_{-1}; \qquad W_0 = c^2 U_0 + d^2 V_0 + 2cd\,T_0 - 2cd\left\{P_{-1}\underline{1}\,p(0)\,Q_0 + P_0\underline{1}\,p(0)\,Q_{-1}\right\}. \tag{6.36}
\]

From (6.27), the coefficient of the 𝑡² term on the right-hand side of (6.35) can be rewritten as

\[
cd\,p(0)\,T_{-1}\underline{1} - 2cd\,p(0)\,P_{-1}\underline{1}\,p(0)\,Q_{-1}\underline{1} = cd\left\{p(0)\,P_{-1}Q_{-1}\underline{1} + p(0)\,Q_{-1}P_{-1}\underline{1}\right\} - 2cd\,p(0)\,P_{-1}\underline{1}\,p(0)\,Q_{-1}\underline{1} = cd\left\{p(0)\,P_{-1}\underline{1}\,p(0)\,Q_{-1}\underline{1} + p(0)\,Q_{-1}\underline{1}\,p(0)\,P_{-1}\underline{1}\right\} - 2cd\,p(0)\,P_{-1}\underline{1}\,p(0)\,Q_{-1}\underline{1} = 0, \tag{6.37}
\]

completing the proof.

7. Dynamic Analysis of a Manufacturing System for Determining Optimal Maintenance Policy

As an application of the unified multivariate counting process, in this section, we consider a manufacturing system with a certain maintenance policy, where the system starts anew at time 𝑡 = 0 and tends to generate product defects more often as time goes by. When the system reaches a certain state, the manufacturing system is overhauled completely and the system returns to the fresh state. More specifically, let 𝐽(𝑡) be a semi-Markov process on 𝒥 = {0, 1, 2, …, 𝐽} governed by 𝐴(𝑥), describing the system state at time 𝑡, where state 0 is the fresh state and state 𝐽 is the maintenance state. When the system is in state 𝑗, 0 ≤ 𝑗 ≤ 𝐽−1, product defects are generated according to an NHPP with intensity λ_𝑗(𝑥). It is assumed that the system deteriorates monotonically and accordingly λ_𝑗(𝑥) increases as a function of both 𝑥 and 𝑗. When the system reaches state 𝐽, the manufacturing operation is stopped and the system is overhauled completely. The maintenance time increases stochastically as a function of 𝐽. In other words, the further the maintenance is delayed, the longer the maintenance time would tend to be. Upon finishing the overhaul, the system is brought back to the fresh state 0. Of interest, then, is to determine the optimal maintenance policy concerning how to set 𝐽.

In order to determine the optimal maintenance policy, it is necessary to define the objective function precisely. Let ψ_𝑑 be the cost associated with each defect and let ψ_𝑚 be the cost of each maintenance operation. If we define two counting processes 𝑀(𝑡) and 𝑁(𝑡) as the total number of defects generated by time 𝑡 and the number of maintenance operations that have occurred by time 𝑡 respectively, the total cost in [0, 𝑇] can be described as

\[
C_J(T) = \psi_d\,\mathrm{E}\left[M(T)\right] + \psi_m\,\mathrm{E}\left[N(T)\right]. \tag{7.1}
\]

Let ℕ be the set of natural numbers. The optimal maintenance policy 𝐽^* is then determined by

\[
C_{J^*}(T) = \min_{J\in\mathbb{N}} C_J(T). \tag{7.2}
\]
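The cost (7.1) can also be estimated by direct simulation, which provides a useful cross-check on the asymptotic formulas of Theorem 6.6. The following sketch (ours, not the paper's model: a toy system that steps deterministically through states 0, …, 𝐽−1 with unit-mean exponential dwell times, defect intensity 0.1(𝑗+1) in state 𝑗, and an overhaul time of mean 0.5𝐽; all parameter values are illustrative) estimates 𝐶_𝐽(𝑇) for each 𝐽:

```python
import math
import random

def poisson(mu):
    """Poisson variate with mean mu (inverse-transform sampling)."""
    n, p = 0, math.exp(-mu)
    u, c = random.random(), p
    while u > c:
        n += 1
        p *= mu / n
        c += p
    return n

def cost(J, T, psi_d=1.0, psi_m=50.0, trials=500):
    """Monte Carlo estimate of C_J(T) = psi_d E[M(T)] + psi_m E[N(T)]."""
    total = 0.0
    for _ in range(trials):
        t, j, defects, overhauls = 0.0, 0, 0, 0
        while t < T:
            if j < J:                                    # operating state j
                x = min(random.expovariate(1.0), T - t)  # dwell time, mean 1
                defects += poisson(0.1 * (j + 1) * x)    # state-dependent defect rate
                t += x
                if t < T:
                    j += 1                               # deterioration step
            else:                                        # maintenance state J
                t += min(random.expovariate(1.0 / (0.5 * J)), T - t)
                if t < T:
                    overhauls += 1
                    j = 0                                # back to the fresh state
        total += psi_d * defects + psi_m * overhauls
    return total / trials

for J in range(1, 10):
    print(J, round(cost(J, 1000.0), 1))
```

Scanning the printed values over 𝐽 mirrors the minimization in (7.2).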

In what follows, we present a numerical example by letting 𝒥 = {0, 1, …, 𝐽} for 1 ≤ 𝐽 ≤ 9. For the underlying semi-Markov process, we define the matrix Laplace transform α(𝑠) having IFR (Increasing Failure Rate) and DFR (Decreasing Failure Rate) dwell time distributions as described below, where the underlying parameters are set in such a way that the means of the IFR dwell times are equal to those of the DFR dwell times. By introducing matrices θ, θ̂, and the mixing probabilities 𝑝_𝑖, for which the details are given in Table 1 along with other parameter values, we define

\[
\alpha_{\mathrm{IFR}}(s) = \left[\frac{\theta_{ii}}{s+\theta_{ii}}\cdot\frac{\theta_{ij}}{s+\sum_{j=0}^{J}\theta_{ij}}\right]; \qquad \alpha_{\mathrm{DFR}}(s) = \left[p_i\,\frac{\theta_{ij}}{s+\sum_{j=0}^{J}\theta_{ij}} + \left(1-p_i\right)\frac{\hat{\theta}_{ij}}{s+\sum_{j=0}^{J}\hat{\theta}_{ij}}\right], \tag{7.3}
\]

where

\[
p_i = \frac{\dfrac{\theta_{ii}+\sum_{j=0}^{J}\theta_{ij}}{\theta_{ii}\sum_{j=0}^{J}\theta_{ij}} - \dfrac{1}{\sum_{j=0}^{J}\hat{\theta}_{ij}}}{\dfrac{1}{\sum_{j=0}^{J}\theta_{ij}} - \dfrac{1}{\sum_{j=0}^{J}\hat{\theta}_{ij}}}, \qquad \sum_{j=0}^{J}\theta_{ij} \neq \sum_{j=0}^{J}\hat{\theta}_{ij}. \tag{7.4}
\]
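For concreteness, here is a minimal sketch (ours; the row sums below are illustrative and must be chosen so that the IFR mean lies between the two exponential means, ensuring 0 ≤ 𝑝_𝑖 ≤ 1) of the mean-matching computation in (7.4):

```python
theta_ii, theta_row, theta_hat_row = 2.0, 1.0, 0.4   # theta_ii, sum_j theta_ij, sum_j hat-theta_ij

mean_ifr = 1 / theta_ii + 1 / theta_row              # mean of the hypoexponential (IFR) dwell
p_i = (mean_ifr - 1 / theta_hat_row) / (1 / theta_row - 1 / theta_hat_row)   # (7.4)
mean_dfr = p_i / theta_row + (1 - p_i) / theta_hat_row                       # mixture (DFR) mean
print(p_i, mean_ifr, mean_dfr)                       # p_i in [0,1]; the two means coincide
```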

Based on Theorem 6.2, the asymptotic behaviors of the means of 𝑀(𝑡) and 𝑁(𝑡) per unit time under the maintenance policies 𝐽 = 1, …, 9 are depicted in Figures 3 and 4. One can see that the means of both 𝑀(𝑡) and 𝑁(𝑡) per unit time converge to positive values as time 𝑡 increases. In order to determine the optimal maintenance policy, the corresponding total cost 𝐶_𝐽(𝑇) can be computed for 𝐽 ∈ {1, 2, …, 9} based on Theorem 6.6. Numerical results are shown in Table 2 and depicted in Figure 5. For the IFR case, the optimal maintenance policy is at 𝐽^* = 6, while 𝐽^* = 5 for the DFR case, where the running period 𝑇 is taken to be 𝑇 = 1000 hours.

8. Concluding Remarks

In this paper, a unified multivariate counting process [𝑀(𝑡), 𝑁(𝑡)] is proposed, built from nonhomogeneous Poisson processes overlaid on a finite semi-Markov process. Here the vector process 𝑀(𝑡) counts the cumulative numbers of such nonhomogeneous Poisson arrivals in each state, and the matrix process 𝑁(𝑡) counts the cumulative numbers of state transitions of the semi-Markov process in [0, 𝑡]. This unified multivariate counting process contains many existing counting processes as special cases. The dynamic analysis of the unified multivariate counting process is given, demonstrating that the existing counting processes can be treated as special cases of the unified multivariate counting process. The asymptotic behaviors of the means and the variances of the unified multivariate counting process are analyzed. As an application, a manufacturing system with certain maintenance policies is considered. The unified multivariate counting process enables one to determine the optimal maintenance policy minimizing the total cost. Numerical examples are given with IFR and DFR dwell times of the underlying semi-Markov process. As for the future agenda, the impact of such distributional properties on the optimal maintenance policy would be pursued theoretically. Other possible theoretical extensions include: (1) analysis of the reward process associated with the unified multivariate counting process; and (2) exploration of further applications in the areas of modern communication networks and credit risk analysis, such as CDOs (collateralized debt obligations), for financial engineering.

Acknowledgment

The authors wish to thank two anonymous referees and the associate editor for providing constructive comments and valuable references, which contributed to improving the original version of the paper. This research is supported by MEXT Grant-in-Aid for Scientific Research (C) 17510114.