#### Abstract

We show how it is possible to construct efficient duration dependent semi-Markov reliability models by considering recurrence time processes. We define generalized reliability indexes and we show how it is possible to compute them. Finally, we describe a possible application in the study of credit rating dynamics by considering the credit rating migration as a reliability problem.

#### 1. Introduction

Homogeneous semi-Markov processes (HSMPs) were defined by [1, 2]. A detailed analysis of HSMP was given in [3, 4]. Engineers have used these processes to analyze mechanical, transportation, and informative systems. One of the most important engineering applications of HSMP is in reliability; see [3, 5, 6]. Generalized semi-Markov processes have been proposed and insensitivity phenomenon displayed by stochastic models from the areas of reliability and telephone engineering has been investigated; see [7–10].

In its basic form, a reliability problem consists of analysing the performance of a system that moves randomly in time among a set of possible states. The state set is partitioned into two subsets: the first is formed by the states in which the system is working, the second by the states in which the system is down.

Supposing that the next state depends only on the last one (the future depends only on the present), the problem can be confronted by means of Markov processes. In the discrete time Markov chain environment the distribution functions (d.f.) of the waiting times between transitions are geometric, while in continuous time they are negative exponential. Usually, however, transitions happen after random durations that in general are not described by memoryless distribution functions (exponential or geometric). This is the reason why the HSMP fits reliability problems better than the Markov one: it offers the possibility of using any distribution function.
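To make this duration effect concrete, the following sketch (our own illustration; the function names are hypothetical) compares the residual waiting time under a memoryless exponential d.f. with that under a Weibull d.f., for which the exit probability does depend on the time already spent in the state:

```python
import math

def exp_survival(t, rate=1.0):
    """Survival function of a negative exponential waiting time."""
    return math.exp(-rate * t)

def weibull_survival(t, shape=2.0, scale=1.0):
    """Survival function of a Weibull waiting time (shape != 1: not memoryless)."""
    return math.exp(-((t / scale) ** shape))

def residual_survival(surv, s, t):
    """P(X > s + t | X > s): probability of waiting t more after s already spent."""
    return surv(s + t) / surv(s)
```

For the exponential, `residual_survival(exp_survival, 2, 1)` equals `exp_survival(1)`: the time already spent is irrelevant. For the Weibull it is far smaller: having already waited makes a further unit of sojourn much less likely. This is exactly the duration dependence that motivates semi-Markov models.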

Semi-Markov processes have been proposed for reliability and performability evaluation of systems; see [6, 11, 12]. In [13], how to apply homogeneous and nonhomogeneous semi-Markov processes in reliability problems is shown.

In this paper, in order to study duration dependence, we attach the backward and forward recurrence time processes to the HSMP and consider them simultaneously at the beginning and at the end of the considered time interval.

These processes have been analyzed by several authors; see [4, 5, 14–18]. They allow us to have complete information as regards the waiting time scenarios. In fact

(i) initial backward times take into account the time at which the system entered the state, even if the arrival time is before the beginning of the studied time horizon;
(ii) initial forward times consider the time at which the first transition after the beginning of the studied horizon will happen;
(iii) final backward times take into account the time at which the last transition before the end of the considered time interval is done;
(iv) final forward times allow us to consider the time at which the system will exit from the state occupied at the final time.

In this way, the use of the initial and final backward and forward processes gives us the possibility of constructing all the waiting time scenarios that could happen in the neighbourhood of the initial and final observed times. Since the HSMP uses non-memoryless d.f., different values assumed by the recurrence processes change the transition probabilities of the HSMP. Consequently it is possible to define generalized reliability measures which depend on the values assumed by the HSMP and the recurrence time processes at starting and ending times.

The usefulness of the results is illustrated in the applicative section on credit rating dynamics, which is one of the most important problems in the financial literature. Fundamentally, it consists of computing the default probability of a firm going into debt. The literature on this topic is very wide; the reader can refer to [19–22].

In order to evaluate credit risk, big international agencies like Fitch, Moody’s, and Standard & Poor’s assign ratings to firms which agree to be evaluated.

Each firm receives a “rating” representing an evaluation of the “reliability” of its capacity to reimburse debt. Clearly, the lower the rating, the higher the interest rate the evaluated firm should pay.

The rating level changes over time, and one way to model it is by means of Markov processes, as in [23]. In this environment Markov models are called “migration models.” The poor fit of Markov processes in the credit risk environment was outlined in [24–26].

In our opinion, the credit rating problem can be included in the more general problem of the reliability of a stochastic system as already highlighted in [27–30]. Indeed, rating agencies, through the assessment of a rating, estimate the reliability of the firm issuing a bond.

In this paper we generalize the results of [27, 28], making another step towards establishing a link between credit rating and reliability models. Moreover, the duration dependence of the rating evolution can be fully captured by means of recurrence times.

Section 2 will present a short description of HSMP. In Section 3 the backward and forward recurrence time processes are presented and their general distributions are determined. Section 4 is devoted to the initial and final backward semi-Markov reliability models. Section 5 presents the credit risk model with complete information as regards the waiting times. In the last section some concluding remarks are given.

#### 2. Homogeneous Semi-Markov Processes

We follow the notation given in [4]. On a complete probability space we define two random variables:

(i) with state space , representing the state at the *n*th transition;
(ii) with state space , representing the time of the *n*th transition.

The process is supposed to be a homogeneous Markov renewal process (HMRP) of kernel so that

We know that

so is the transition matrix of the embedded Markov chain . Furthermore, it is necessary to introduce the probability that the process will leave state *i* in a time *t*:

It is possible to define the distribution function of the waiting time in each state *i*, given that the successively occupied state *j* is known:
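Since the displayed formulas are elided in this version of the text, the following LaTeX sketch restates the quantities above in the standard notation of [4] (the symbols $J_n$, $T_n$ for the embedded chain and the jump times are our assumption):

```latex
Q_{ij}(t) = P\left[ J_{n+1}=j,\ T_{n+1}-T_n \le t \mid J_n = i \right],
\qquad
p_{ij} = \lim_{t \to \infty} Q_{ij}(t),
\\[4pt]
H_i(t) = P\left[ T_{n+1}-T_n \le t \mid J_n = i \right] = \sum_{j} Q_{ij}(t),
\qquad
G_{ij}(t) = P\left[ T_{n+1}-T_n \le t \mid J_n = i,\ J_{n+1}=j \right]
          = \frac{Q_{ij}(t)}{p_{ij}} \quad (p_{ij} > 0).
```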

The main difference between a continuous time Markov process and an HSMP lies in the distribution functions . In a Markov environment this function has to be a negative exponential with parameter ; in the semi-Markov case, on the other hand, the distribution functions can be of any type. This means that, in order to consider duration effects, we can use the functions . The HSMP represents, for each waiting time, the state occupied by the system, that is,

The HSMP transition probabilities are defined in the following way:

They are obtained solving the following evolution equations (see among others [4]):

where represents the Kronecker symbol.
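Since the displayed formula is elided here, the evolution equations can be sketched in the standard notation of [4] (a reconstruction under that assumption, with $Z$ the HSMP and $\delta_{ij}$ the Kronecker symbol):

```latex
\phi_{ij}(t) = P\left[ Z(t) = j \mid Z(0) = i \right],
\qquad
\phi_{ij}(t) = \delta_{ij}\,\bigl(1 - H_i(t)\bigr)
  + \sum_{k} \int_0^{t} \dot{Q}_{ik}(s)\, \phi_{kj}(t-s)\, ds .
```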

The first part of formula (2.7), , gives the probability that the system has no transitions up to time *t*, given that it entered state *i* at time 0.

In the second part, represents the probability that the system makes its next transition from state *i* to state . After the transition, the system will reach state *j* following one of all possible trajectories that go from state to state *j* in the remaining time .
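The evolution equations can be solved numerically on a discrete time grid. The sketch below is our own discretization (the function name and the array layout of the kernel are assumptions, not the authors' algorithm); it approximates the integral term by a convolution of kernel increments:

```python
import numpy as np

def solve_hsmp(Q, T):
    """Solve a discretized version of the HSMP evolution equations.

    Q : array of shape (T+1, n, n); Q[t, i, j] is the cumulative kernel
        Q_ij(t), nondecreasing in t.
    Returns phi of shape (T+1, n, n), with phi[t, i, j] approximating
    P(Z(t) = j | entrance in state i at time 0).
    """
    n = Q.shape[1]
    H = Q.sum(axis=2)                    # unconditional waiting time d.f. H_i(t)
    q = np.diff(Q, axis=0, prepend=np.zeros((1, n, n)))  # kernel increments
    phi = np.zeros((T + 1, n, n))
    for t in range(T + 1):
        # either no transition up to t ...
        phi[t] = np.eye(n) * (1.0 - H[t])[:, None]
        # ... or first jump at time s <= t, then any trajectory in t - s
        for s in range(1, t + 1):
            phi[t] += q[s] @ phi[t - s]
    return phi
```

When the kernel is proper, each row of `phi[t]` sums to one, which is a convenient sanity check of the discretization.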

#### 3. Backward and Forward Recurrence Time Processes

In this section we present some generalizations of the HSMP transition probabilities (2.7) using the recurrence time processes.

Recurrence time processes have been investigated by many authors. For example, in [14, 15] the backward process at starting time was used to determine the asymptotic distribution of an ergodic HSMP. In [5] the backward processes were considered both at starting and arriving times in the transition probabilities but, to the authors’ knowledge, a complete study such as that given in the next subsection has never been presented.

Given the HMRP , we define the following stochastic processes of recurrence times:

The process is called the *backward recurrence time *(or age)* process* and *F* the *forward* (or residual time) *recurrence time process* (see [4]).
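With $N(t)$ the number of transitions up to time $t$ (notation assumed, since the displayed definitions are elided here), the recurrence time processes can be written as:

```latex
N(t) = \sup\{\, n : T_n \le t \,\}, \qquad
B(t) = t - T_{N(t)}, \qquad
F(t) = T_{N(t)+1} - t .
```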

The recurrence time processes complement the semi-Markov process into a Markov process with respect to , and for this reason they are often called auxiliary processes. Then, for any bounded -measurable function , it results that

Figure 1 presents an HSMP trajectory on which we report the recurrence time processes. At time *s* the HSMP is in state ; it entered this state with its last transition at time , so the initial backward value is . The process makes its next transition at time , so the initial forward value is . At time the HSMP is in state ; it entered this state with its last transition at time , so the final backward value is . The process makes its next transition at time , so the final forward value is .

Our objective, in this section, is to define and compute transition probabilities that are constrained at initial time and at final time by the recurrence time processes. To be more precise, given the information, we want to compute the probability of having .

In order to clarify the presentation, we first show some particular cases.

(a) Let be the transition probability with initial backward value . It denotes the probability of being in state after *t* periods given that at present the process is in state and it entered into this state with the last transition periods before.

Using the relation

it can be proved that

An explanation of (3.4) can be provided. The term represents the probability of remaining in state for times, given that the process has already stayed in state for times. This probability contributes to only if .

The second term expresses the probability of a trajectory in which the process, having already remained in state for times, enters state after periods; then the transition to state from state in the remaining time has to be carried out. This holds for all states and times.

Note that if , we recover the ordinary HSMP transition probabilities (2.7).

(b) Let be the transition probability with fixed initial forward value . It denotes the probability of being in state after a time given that, at time zero, the process entered state and makes its next transition exactly at time , into whatever state .

The term is the Radon-Nikodym derivative of with respect to . It expresses . In this way, in (3.5) the entrance into all possible states, with the next transition at time , is considered; then, from that state, the process has to occupy state at time .
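Under the notation assumed above (the original display is elided), relation (3.5) can be sketched as follows: the Radon-Nikodym derivative $dQ_{ik}/dH_i$ evaluated at the forward time $u$ randomizes the state entered at the first jump, after which the ordinary transition probabilities apply:

```latex
\phi_{ij}(u; t) \;=\; \sum_{k} \frac{dQ_{ik}}{dH_i}(u)\; \phi_{kj}(t-u),
\qquad t \ge u .
```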

(c) Let be the transition probability with initial certain waiting value . It denotes the probability of being in state at time given that, at time zero, the process entered state and makes its next transition after time , into any state; then we are sure that, up to time , the process is still in state .

The first term gives us the probability of remaining in state up to time , given that the process stays in that state at least up to time . The second term considers the possibility of evolving, with the next transition into any state at any time , given no movement up to time . After the transition at time , the system will be in state at time , following one of the possible trajectories.
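A hedged LaTeX reconstruction of this case, consistent with the verbal description (the symbol $l$ for the certain waiting value is our assumption):

```latex
\phi_{ij}(l; t) \;=\; \delta_{ij}\,\frac{1 - H_i(t)}{1 - H_i(l)}
  \;+\; \sum_{k} \int_{l}^{t} \frac{\dot{Q}_{ik}(\vartheta)}{1 - H_i(l)}\;
        \phi_{kj}(t-\vartheta)\, d\vartheta .
```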

(d) Let be the transition probability with final backward value .

It denotes the probability of being in state at time given that the process entered into state with its last transition within the interval given that it entered at time zero in state

The term gives us the probability of remaining in state from time zero up to time ; this probability contributes to only if and . In fact, the system will be in state at time without any transition in the time interval only if ; the backward value at time then has, necessarily, to be equal to the length of the interval , which implies . The second term considers the possibility of evolving with the next transition into any state at any time ; then we have to consider all possible trajectories that bring the system into state in the remaining time , with its last transition into state at a time belonging to the interval . In this way, the final backward will be less than or equal to , as required in relation (3.7).

(e) Let be the transition probability with final forward value .

It denotes the probability of being in state at time and of making the next transition in the time interval given that the process entered at time zero into state

The first term gives us the probability of exiting for the first time from state in the time interval ; this probability contributes to only if . The second term considers the possibility of evolving with the next transition into any state at any time ; then we have to consider all possible trajectories that bring the system into state in the remaining time , with a final forward of .

##### 3.1. The General Distributions of the Auxiliary Processes

There are many cases in which combinations of the previous equations can be considered. In this subsection, we present the equations for the two general cases.

(f) Let be the transition probability with initial and final backward and forward values , , , . It denotes the probability of being in state at time , of entering that state in the time interval , and of making the next transition in , given that at present the process is in state , it entered this state with the last transition periods before, and it remained there until time , when a transition took place.

It results that

where

Equation (3.10) is composed of two parts. The first expresses the probability of exiting from state for the first time in the time interval ; this contributes to when , and then the backward at time must be equal to periods, that is, . The second term considers the entrance into any state at any time ; then the system has to be in state at time with final backward and forward values of at most and .

Consequently, relation (3.9) states that, to compute , it is enough to consider the probability , using a random starting distribution represented by the Radon-Nikodym derivative .

(g) Let be the transition probability with initial backward and starting certain waiting forward and final backward and forward values , , , . It denotes the probability of being in state at time and of entering that state in the time interval and of making the next transition in given that at present the process is in state and it entered into this state with the last transition periods before and it remained in this state at least until time . It results that

The term gives us the probability of exiting from state for the first time in the interval , given that at present the process is in state , it entered this state with the last transition periods before, and up to time (i.e., for periods) no new transition was carried out by the process. This probability is part of if ; consequently, the backward process at time has to be equal to , that is, .

The other term considers the entrance into state at time , given that at present the process is in state , it entered this state with the last transition periods before, and there was no movement from that state up to time ; then all possible trajectories that bring the system into state at time , with final backward and forward values of, respectively, and , given the entrance into state at time , have to be considered.

#### 4. Continuous Time Homogeneous Waiting Times Complete Knowledge Semi-Markov Reliability Model

There are many semi-Markov models in reliability theory; see, for example, [5, 6, 11–13, 31]. The nonhomogeneous case was presented in [13]. More recently, in [28] a nonhomogeneous backward semi-Markov reliability model was presented.

In this section, we will generalize these reliability models taking into account initial and final backward and forward processes all together in a homogeneous environment.

Let us consider a reliability system that, at every time , can be in one of the states of . The stochastic process of the successive states of the system is denoted by .

The state set is partitioned into sets and , so that

The subset contains all “good” states in which the system is working, and the subset all “bad” states in which the system is not working well or has failed.

The classic indicators used in reliability theory are the following ones.

(i) *The pointwise availability function A* giving the probability that the system is working at time regardless of what has happened on :

(ii) *The reliability function R* giving the probability that the system was always working from time 0 to time :

(iii) *The maintainability function M* giving the probability that the system will leave the set within the time being in at time 0:
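Since the displayed formulas are elided, here is a LaTeX sketch of the three classic indicators in the usual notation ($Z$ the HSMP, $U$ and $D$ the up and down subsets — symbols assumed):

```latex
A_i(t) = P\left[ Z(t) \in U \mid Z(0) = i \right], \\[4pt]
R_i(t) = P\left[ Z(u) \in U \ \ \forall\, u \in (0, t] \mid Z(0) = i \right], \\[4pt]
M_i(t) = P\left[ \exists\, u \le t : Z(u) \in U \mid Z(0) = i \in D \right].
```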

Considering the generalizations presented in the previous section we give the following new definitions.

() *The pointwise homogeneous waiting times complete knowledge availability function with fixed initial forward * giving the probability that the system is working at time , given the entrance into the state with initial backward time and that the process moves from this state exactly at time . Furthermore, we require the system to enter the *working* state at time and to remain in this state until the time of the next transition :

() *The pointwise homogeneous waiting times complete knowledge availability function with starting certain waiting forward time * giving the probability that the system is working at time , conditioned on the entrance into the state with initial backward time and on the process not moving from this state up to time . Furthermore, we require the system to enter the *working* state at time and to remain in this state until the time of the next transition :

() *The homogeneous waiting times complete knowledge reliability function with fixed initial forward * giving the probability that the system was always working for a time , given that the entrance into the state occurred time periods before 0 and the next transition is at time . Furthermore, it is assumed that the system made the th transition at time and the th at time :

() *The homogeneous waiting times complete knowledge reliability function with starting certain waiting forward time * giving the probability that the system was always working for a time , given that the entrance into the state occurred time periods before 0 and the next transition is after time . Furthermore, it is assumed that the system made the th transition at time and the th at time :

() *The homogeneous waiting times complete knowledge maintainability function * giving the probability that the system will leave the set , going into an up state at least once, within the time , given that at present the process is in state , it entered this state with the last transition periods before, and at time a new transition occurs:

() *The homogeneous waiting times complete knowledge maintainability function with starting certain waiting forward time * giving the probability that the system will leave the set , going into an up state, within the time , given that at present the process is in state , it entered this state with the last transition periods before, and after time a new transition occurs:

The probabilities (4.5), (4.6), (4.7), (4.8), (4.9), and (4.10) can be computed as follows.

(i) *The pointwise availability functions* (4.5) and (4.6) are, respectively,

(ii) *The reliability functions* (4.7) and (4.8) are, respectively,

where and are the solutions to (3.9) and (3.11) with all the states in made absorbing.

To compute these probabilities all the states of the subset are changed into absorbing states through the following transformation of the semi-Markov kernel:

(iii) *The maintainability functions* (4.9) and (4.10) are, respectively,

where and are, respectively, the solutions to (3.9) and (3.11) with all the states in made absorbing.

In this case, all the states of the subset are changed into absorbing states through the transformation of the semi-Markov kernel:

The reliability indexes defined here are able to assess different probabilities depending on the backward and forward process values. This makes it possible to obtain, for example, complete knowledge of the variability of the survival probabilities of the system depending on the waiting time scenario.
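Putting the pieces together, the sketch below (our own discretization; the function names are hypothetical) computes the three classic indicators from a kernel, using the absorbing kernel transformations just described: down states absorbed for reliability, up states absorbed for maintainability:

```python
import numpy as np

def reliability_measures(Q, T, up):
    """Availability A, reliability R, and maintainability M for a
    discretized semi-Markov kernel Q of shape (T+1, n, n).
    up : boolean mask of the 'good' states U; the rest form D."""
    n = Q.shape[1]

    def solve(Qk):
        # discretized HSMP evolution equations
        H = Qk.sum(axis=2)
        q = np.diff(Qk, axis=0, prepend=np.zeros((1, n, n)))
        phi = np.zeros((T + 1, n, n))
        for t in range(T + 1):
            phi[t] = np.eye(n) * (1.0 - H[t])[:, None]
            for s in range(1, t + 1):
                phi[t] += q[s] @ phi[t - s]
        return phi

    def absorb(mask):
        # zeroing the kernel rows of `mask` states makes them absorbing:
        # their waiting time d.f. H stays at 0, so they are never left
        Qa = Q.copy()
        Qa[:, mask, :] = 0.0
        return Qa

    A = solve(Q)[:, :, up].sum(axis=2)            # P(Z(t) in U)
    R = solve(absorb(~up))[:, :, up].sum(axis=2)  # down states absorbed
    M = solve(absorb(up))[:, :, up].sum(axis=2)   # up states absorbed
    return A, R, M
```

On a toy two-state system that alternates deterministically between one up and one down state, availability from the up state oscillates, reliability drops to zero after the first transition, and maintainability from the down state jumps to one.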

#### 5. The Homogeneous Waiting Time with Complete Knowledge for Semi-Markov Reliability Credit Risk Models

The rating migration problem can be situated in the reliability environment. The rating process, carried out by the rating agency, states the degree of reliability of a firm’s bond.

The problem of the poor fit of the Markov process in a credit risk environment was outlined in [24–26]. The duration effect of rating transitions is one of the main problems demonstrating the inadequacy of Markov models: the probability of changing rating depends on the time a firm has remained in the same rating class (see [32]). This problem can be solved satisfactorily by means of HSMP; see [27, 28, 33]. In fact, as already explained, in HSMP the transition probabilities are a function of the waiting time spent in a state of the system.

The knowledge of the waiting times around the beginning and the end of the considered interval is of fundamental relevance in credit rating migration modelling. Indeed, the solutions of the evolution equations (3.9) and (3.11) consider the duration time inside the starting and the arriving states.

In the next two subsections, we will present two semi-Markov reliability credit risk models.

To construct a semi-Markov model, it is necessary to construct the embedded Markov chain (2.2) and to find the d.f. of the waiting times (2.4). The embedded Markov chain constructed from real data of the Standard & Poor’s rating agency was given in [34] and is reported in the next subsection. This matrix is aperiodic and irreducible and has two down states, D and NR. In the subsequent subsection the case in which the default state is supposed to be absorbing is studied and the No Rating state is not considered. Under these hypotheses, the embedded Markov chain is monounireducible; see [35].

##### 5.1. The Irreducible Case with Two Down States

For example, the rating agency Standard & Poor’s considers 8 different classes of rating (from AAA down to the default state D) plus the No Rating state, so we have the following set of 9 states:

The ratings express the creditworthiness of the rated firm. Creditworthiness is highest for the rating AAA, assigned to firms that are extremely reliable with regard to financial obligations, and decreases towards the rating D, which expresses the occurrence of payment default on some financial obligation. A table showing the financial meaning of the Standard & Poor’s rating categories is reported in [21]. For example, the rating B is assigned to firms vulnerable to changes in economic conditions that currently show the ability to meet their financial obligations.

The first 7 states are working states (good states) and the last two are the nonworking states. The two subsets are the following:
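As a minimal sketch, the partition can be encoded as follows (the state list reconstructs the S&P classes named in the text; the mask names are our own):

```python
import numpy as np

# S&P state space: 7 working rating classes, plus the two down states D and NR
states = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "D", "NR"]
up = np.array([s not in ("D", "NR") for s in states])   # good states U
down = ~up                                              # bad states D
```

The boolean masks can then be used directly to select the up/down columns of the transition probabilities when computing availability, reliability, and maintainability.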

By solving the different evolution equations we obtain the following results.

() represents the probabilities of being in state after a time , starting in state with an initial backward time , an initial forward time , with .

() represents the probabilities of being in state after a time , starting in state with an initial backward time , a starting certain forward time , with .

Both results take into account the different probabilities of changing state as a function of all the possible entrance and exit times in the starting and arriving states. In the first case, the exit occurs exactly at time *u*; in the second case, after a time .

() represents the probability of the system having, at time , an up rating given that it entered the state with an initial backward time and exited from at time (forward initial time).

() represents the probability of the system having, at time , an up rating given that it entered the state with an initial backward time and exited from after time (certain waiting forward time).

() represents the probability of being in state after a time , starting in state with an initial backward time , an initial forward time , with , given that the two down states are considered absorbing.

() represents the probability of being in state after a time , starting in state with an initial backward time , a starting certain forward time , with , given that the two down states are considered absorbing.

() represents the probability that the system was always up in the time interval given that it entered the state with an initial backward time and it exits from at time (forward initial time) considering the two down states as absorbing.

() represents the probability that the system was always up in the time interval with an initial backward time and it exited from after time (certain waiting forward time) considering the two down states as absorbing.

() represents the probabilities of being in state after a time , starting in state with an initial backward time , an initial forward time , with , given that all the up states are considered absorbing.

() represents the probabilities of being in state after a time , starting in state with an initial backward time , a starting certain forward time , with , given that all the up states are considered absorbing.

() represents the probability that the system at time has an up rating given that it entered into the state with an initial backward time and it exited from at time (forward initial time) given that all the up states are considered absorbing.

() represents the probability that the system at time has an up rating given that it entered the state with an initial backward time and exited from after time (certain waiting forward time) given that all the up states are considered absorbing.

The maintainability function has a precise financial meaning: it assesses the probability of a firm leaving state within time through reorganization. In fact, if the firm is reorganized, the rating agency will assign a new rating which evaluates the new financial situation. From the No Rating state, the re-entry of the firm into the bond market implies a new rating evaluation.

In Table 1 the embedded Markov chain obtained considering all the transitions of the historical database of S&P is given. This matrix was presented in [34].

##### 5.2. The Default as Absorbing Case

In many credit risk migration models, the No Rating state is ignored and Default is considered an absorbing state. Under these hypotheses, the embedded Markov chain of the semi-Markov process has only two classes of states: a transient class and an absorbing class. The absorbing class consists of only one state, and all the elements of the main diagonal of the matrix are greater than zero. Such matrices (and the corresponding processes) are called monounireducible; see [35].

The set of states becomes

and the partition in up and down states is

The embedded Markov chain is reported in Table 2.

In this case, reliability and availability coincide. Indeed, the only down state is absorbing, so if the system is available at time it means that it never entered the default state and thus remained in an up state for the whole observation time, which is the definition of reliability. Furthermore, maintainability does not make sense because it is not possible to exit from the default state.
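This coincidence can be checked numerically: when the only down state is absorbing, zeroing the kernel rows of the down states (the reliability transformation) leaves the kernel unchanged, so availability and reliability agree at every time. A self-contained toy sketch (the kernel, discretization, and names are our own, not the S&P data):

```python
import numpy as np

def solve(Q, T):
    """Discretized HSMP transition probabilities."""
    n = Q.shape[1]
    H = Q.sum(axis=2)
    q = np.diff(Q, axis=0, prepend=np.zeros((1, n, n)))
    phi = np.zeros((T + 1, n, n))
    for t in range(T + 1):
        phi[t] = np.eye(n) * (1.0 - H[t])[:, None]
        for s in range(1, t + 1):
            phi[t] += q[s] @ phi[t - s]
    return phi

T = 5
# two up rating states (0, 1) and an absorbing default state 2
p = np.array([[0.0, 0.8, 0.2],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0]])        # default row: the state is never left
tt = np.arange(T + 1)
Q = p[None, :, :] * (1 - 0.5 ** tt)[:, None, None]   # geometric sojourns
up = np.array([True, True, False])

A = solve(Q, T)[:, :, up].sum(axis=2)            # availability
Qr = Q.copy(); Qr[:, ~up, :] = 0.0               # reliability transformation
R = solve(Qr, T)[:, :, up].sum(axis=2)           # equals A here
```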

The following results can be obtained.

() represents the probabilities of being in state after a time , starting in state with an initial backward time , an initial forward time , with .

() represents the probabilities of being in state after a time , starting in state with an initial backward time , a starting certain forward time , with .

Both results take into account the different probabilities of changing state during the permanence of the system in the same state, considering all the possible entrance and exit times in the starting and arriving states. In the first case, the exit occurs exactly at time ; in the second case, after a time .

() represents the probability that the system at time has an up rating given that it entered the state with an initial backward time *v* and it exits from *i* at time *u* (forward initial time).

() represents the probability that the system at time has an up rating given that it entered the state with an initial backward time and it exits from after time (certain waiting forward time).

*Remark 5.1.* We wish to mention that the Markov matrices that are given yearly in the Standard & Poor’s publications always have greater elements on the main diagonal compared to the matrices presented in this paper. The reason for this is that, in a semi-Markov environment, transitions occur only when there is a real check on the state. In a credit risk environment this means that a transition is counted if and only if the rating agency assigns a new rating, given that the firm already has a rating. In the S&P transition Markov chain, if in a year there was no rating evaluation of a firm, it is supposed that the firm is still in the same state. The rating agency, in the construction of the transition matrix, then counts a “virtual” transition (a transition from a state into the same state). This implies that the number of virtual transitions is very high and the Markov chain, almost everywhere, becomes diagonally dominant. In the embedded Markov chain of the SMP, virtual transitions are possible, but they happen only when the rating agency gives a new rating which is equal to the previous one.

We think another reason for the superior performance of the semi-Markov environment as compared to that of the Markov one is the fact that the former only considers the transitions of the rating process which actually occurred.

#### 6. Conclusions

This paper introduces, for the first time to the authors’ knowledge, initial and final backward and forward processes in a continuous time homogeneous semi-Markov environment at the same time.

By means of this new approach a generalization of the transition probabilities of an HSMP is given and we show how it is possible to consider the time spent by the system in the starting state and in the final state. The waiting time inside the starting state is managed by means of *initial* backward and forward times. The time spent in the last state of the considered horizon is studied by means of the *final* backward and forward times.

The obtained results are used to derive generalized reliability measures and we show how it is possible to compute them.

An application to credit risk problems, considered as a particular aspect of the more general context of system reliability, is illustrated. In this way, the paper may also serve the purpose of inviting stochastic modelling engineers into a new field. However, the model could also be useful for solving other reliability problems.

In the last part of the paper, the Markov chains embedded in the homogeneous semi-Markov processes obtained from the historical Standard & Poor’s database are presented. The difference between the obtained transition matrices and the ones provided by the Standard & Poor’s agency is outlined. The authors also explain why the matrices obtained are more reliable compared to those of Standard & Poor’s.

Future work includes the construction of

(i) the discrete time version of this model,
(ii) the related algorithm and computer program,
(iii) the nonhomogeneous model,
(iv) the related algorithm and computer program.

Furthermore, we hope to apply the models to the mechanical reliability context in the near future.