Abstract

We show how it is possible to construct efficient duration-dependent semi-Markov reliability models by considering recurrence time processes. We define generalized reliability indexes and we show how it is possible to compute them. Finally, we describe a possible application to the study of credit rating dynamics by considering credit rating migration as a reliability problem.

1. Introduction

Homogeneous semi-Markov processes (HSMPs) were defined by [1, 2]. A detailed analysis of HSMP was given in [3, 4]. Engineers have used these processes to analyze mechanical, transportation, and information systems. One of the most important engineering applications of HSMP is in reliability; see [3, 5, 6]. Generalized semi-Markov processes have been proposed, and the insensitivity phenomenon displayed by stochastic models from the areas of reliability and telephone engineering has been investigated; see [7–10].

In its basic form, a reliability problem consists of analysing the performance of a system that moves randomly in time between several possible states. The state set is partitioned into two subsets: the first is formed by the states in which the system is working, and the second by the states in which the system is down.

Supposing that the next state depends only on the current one (the future depends only on the present), the problem can be addressed by means of Markov processes. In a discrete-time Markov chain the distribution functions (d.f.) of the waiting times between transitions are geometric, while in continuous time they are negative exponential. In practice, however, the random durations after which transitions happen are generally not described by memoryless distribution functions (exponential or geometric). This is the reason why the HSMP fits reliability problems better than the Markov model: it offers the possibility of using any distribution function.

Semi-Markov processes have been proposed for reliability and performability evaluation of systems; see [6, 11, 12]. In [13], how to apply homogeneous and nonhomogeneous semi-Markov processes in reliability problems is shown.

In this paper, in order to study the duration dependence we attach the backward and forward recurrence time processes to the HSMP and we consider them simultaneously at the beginning and at the end of the considered time interval.

These processes have been analyzed by several authors; see [4, 5, 14–18]. They allow us to have complete information as regards the waiting time scenarios. In fact:

(i) initial backward times take into account the time at which the system entered the state, even if the arrival time is before the beginning of the studied time horizon;
(ii) initial forward times consider the time at which the first transition after the beginning of the studied time horizon will happen;
(iii) final backward times take into account the time at which the last transition before the end of the considered time interval was made;
(iv) final forward times allow us to consider the time at which the system will exit from the state occupied at the final time.

In this way, the use of the initial and final backward and forward processes gives us the possibility of constructing all the waiting time scenarios that could happen in the neighbourhood of the initial and final observed times. Since the HSMP uses non-memoryless d.f., different values assumed by the recurrence processes change the transition probabilities of the HSMP. Consequently, it is possible to define generalized reliability measures which depend on the values assumed by the HSMP and by the recurrence time processes at the starting and ending times.

The usefulness of the results is illustrated in the applicative section on credit risk rating dynamics, which is one of the most important problems in the financial literature. Fundamentally, it consists of computing the default probability of a firm going into debt. The literature on this topic is very wide, but the reader can refer to [19–22].

In order to evaluate credit risk, large international organisations like Fitch, Moody’s, and Standard & Poor’s give different ranks to firms which agree to be evaluated.

Each firm receives a “rating” representing an evaluation of the “reliability” of its capacity to reimburse debt. Clearly, the lower the rating, the higher the interest rate the evaluated firm should pay.

The rating level changes over time, and one way to model it is by means of Markov processes, as in [23]. In this environment Markov models are called “migration models.” The poor fit of Markov processes in the credit risk environment was outlined in [24–26].

In our opinion, the credit rating problem can be included in the more general problem of the reliability of a stochastic system, as already highlighted in [27–30]. Indeed, rating agencies, through the assessment of a rating, estimate the reliability of the firm issuing a bond.

In this paper we generalize the results of [27, 28], making a further step towards establishing a link between credit rating and reliability models. Moreover, the duration dependence of the rating evolution can be fully captured by means of recurrence times.

Section 2 will present a short description of HSMP. In Section 3 the backward and forward recurrence time processes are presented and their general distributions are determined. Section 4 is devoted to the initial and final backward semi-Markov reliability models. Section 5 presents the credit risk model with complete information as regards the waiting times. In the last section some concluding remarks are given.

2. Homogeneous Semi-Markov Processes

We follow the notation given in [4]. On a complete probability space $(\Omega,\mathcal{F},P)$ we define two random variables:

(i) $J_n$, $n\in\mathbb{N}$, with state space $I=\{1,\dots,m\}$, representing the state at the $n$th transition;
(ii) $T_n$, $n\in\mathbb{N}$, with state space $\mathbb{R}^+$, representing the time of the $n$th transition.

The process $(J_n,T_n)$ is supposed to be a homogeneous Markov renewal process (HMRP) of kernel $\mathbf{Q}=[Q_{ij}(t)]$ so that

$Q_{ij}(t)=P[J_{n+1}=j,\ T_{n+1}-T_n\le t\mid J_n=i]. \quad (2.1)$
We know that

$p_{ij}=P[J_n=j\mid J_{n-1}=i]=\lim_{t\to\infty}Q_{ij}(t),\quad i,j\in I,\ t\in\mathbb{R}^+, \quad (2.2)$
so $\mathbf{P}=[p_{ij}]$ is the transition matrix of the embedded Markov chain $\{J_n\}$. Furthermore, it is necessary to introduce the probability that the process will leave state $i$ within a time $t$:

$H_i(t)=P[T_{n+1}-T_n\le t\mid J_n=i]=\sum_{j\in I}Q_{ij}(t). \quad (2.3)$
It is possible to define the distribution function of the waiting time in each state $i$, given that the subsequently occupied state $j$ is known:

$G_{ij}(t)=P[T_{n+1}-T_n\le t\mid J_n=i,\ J_{n+1}=j]=\begin{cases}Q_{ij}(t)/p_{ij}, & \text{if } p_{ij}\neq 0,\\ 1, & \text{if } p_{ij}=0.\end{cases} \quad (2.4)$
The main difference between a continuous time Markov process and an HSMP lies in the distribution functions $G_{ij}(t)$. In a Markov environment this function has to be a negative exponential function of parameter $\lambda_{ij}$; in the semi-Markov case the distribution functions $G_{ij}(t)$ can be of any type. This means that, in order to consider duration effects, we can work with the functions $G_{ij}(t)$. The HSMP $Z=(Z(t),\ t\in\mathbb{R}^+)$ represents, at each time, the state occupied by the system, that is,

$Z(t)=J_{N(t)},\quad\text{where } N(t)=\max\{n:T_n\le t\}. \quad (2.5)$
The HSMP transition probabilities are defined in the following way:

$\phi_{ij}(t)=P[Z(t)=j\mid Z(0)=i,\ T_{N(0)}=0]. \quad (2.6)$
They are obtained by solving the following evolution equations (see, among others, [4]):

$\phi_{ij}(t)=\delta_{ij}\bigl(1-H_i(t)\bigr)+\sum_{\beta\in I}\int_0^t\dot{Q}_{i\beta}(\vartheta)\,\phi_{\beta j}(t-\vartheta)\,d\vartheta, \quad (2.7)$
where $\delta_{ij}$ represents the Kronecker symbol.

The first part of formula (2.7), $\delta_{ij}(1-H_i(t))$, gives the probability that the system has no transitions up to time $t$, given that it entered state $i$ at time 0.

In the second part, $\sum_{\beta\in I}\int_0^t\dot{Q}_{i\beta}(\vartheta)\,\phi_{\beta j}(t-\vartheta)\,d\vartheta$, the term $\dot{Q}_{i\beta}(\vartheta)\,d\vartheta$ represents the probability that the system makes its next transition in $[\vartheta,\vartheta+d\vartheta)$, moving from state $i$ to state $\beta$. After the transition, the system reaches state $j$ following one of all the possible trajectories that go from state $\beta$ to state $j$ in a time $t-\vartheta$.
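As an illustration only, the following minimal sketch solves (2.7) numerically on a uniform time grid. The function name, the grid discretization, and the input format of the kernel (a cumulative kernel sampled on the grid) are assumptions of this sketch, not prescriptions of the paper.

```python
import numpy as np

def hsmp_transition_probabilities(Q):
    """Minimal sketch: solve the evolution equation (2.7) on a uniform grid.

    Q : ndarray of shape (T, m, m); Q[k, i, j] is an estimate of the
        cumulative kernel Q_ij at the k-th grid point (assumed input format).
    Returns phi with phi[k, i, j] ~ P[Z(t_k) = j | Z(0) = i, entered i at 0].
    """
    T, m, _ = Q.shape
    H = Q.sum(axis=2)                                     # H_i(t) = sum_j Q_ij(t)
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))  # kernel increments
    phi = np.zeros((T, m, m))
    for k in range(T):
        phi[k] = np.diag(1.0 - H[k])                      # delta_ij (1 - H_i(t))
        for s in range(1, k + 1):                         # discrete convolution
            phi[k] += dQ[s] @ phi[k - s]                  # dQ_{i,beta} * phi_{beta,j}(t - tau)
    return phi
```

The convolution uses the simplest quadrature (kernel increments); any finer integration rule could replace it without changing the structure.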

3. Backward and Forward Recurrence Time Processes

In this section we present some generalizations of the HSMP transition probabilities (2.7) using the recurrence time processes.

Recurrence time processes have been investigated by many authors. For example, in [14, 15] the backward process at starting time was used to determine the asymptotic distribution of an ergodic HSMP. In [5] the backward processes were considered both at starting and arriving times in the transition probabilities but, to the authors’ knowledge, a complete study such as that given in the next subsection has never been presented.

Given the HMRP $(J_n,T_n)$, we define the following stochastic processes of recurrence times:

$B(t)=t-T_{N(t)},\qquad F(t)=T_{N(t)+1}-t. \quad (3.1)$
The process $B(t)$ is called the backward recurrence time (or age) process and $F(t)$ the forward recurrence time (or residual time) process (see [4]).

The recurrence time processes complete the semi-Markov process to a Markov process with respect to the filtration $\mathcal{F}_t=\sigma\{Z(\tau),F(\tau),\ \tau\in[0,t]\}$, $t\ge 0$, and for this reason they are often called auxiliary processes. Then, for any bounded $I\times\mathbb{R}^+$-measurable function $f(x,t)$ and any $s\le t$, it results that

$E\bigl[f(Z(t),F(t))\mid\mathcal{F}_s\bigr]=E\bigl[f(Z(t),F(t))\mid Z(s),F(s)\bigr]. \quad (3.2)$
For (3.2) see [16].

Figure 1 presents an HSMP trajectory on which the recurrence time processes are reported. At time $s$ the HSMP is in state $Z(s)=i$; it entered this state with its last transition at time $T_n=s-v$, so the initial backward value is $B(s)=v$. The process makes its next transition at time $T_{n+1}=s+u$, so the initial forward value is $F(s)=u$. At time $t+s$ the HSMP is in state $Z(t+s)=j$; it entered this state with its last transition at time $T_{N(t+s)}=t+s-v'$, so the final backward value is $B(t+s)=v'$. The process makes its next transition at time $T_{N(t+s)+1}=t+s+u'$, so the final forward value is $F(t+s)=u'$.

Our objective, in this section, is to define and compute transition probabilities that are constrained at the initial time $s$ and at the final time $t+s$ by the recurrence time processes. To be more precise, given the information $(Z(s)=i,\ B(s)=v,\ F(s)=u)$, we want to compute the probability of having $(Z(t+s)=j,\ B(t+s)=v',\ F(t+s)=u')$.

In order to clarify the presentation, we first show some particular cases.

(a) Let ${}^{b}\phi_{ij}(v;t)=P[Z(t)=j\mid Z(0)=i,\ B(0)=v]$ be the transition probability with initial backward value $v$. It denotes the probability of being in state $j$ after $t$ periods, given that at present the process is in state $i$ and it entered this state with its last transition $v$ periods before.

Using the relation

$\{Z(t)=i,\ B(t)=v\}=\{J_{N(t)}=i,\ T_{N(t)}=t-v,\ T_{N(t)+1}>t\}, \quad (3.3)$
it can be proved that

${}^{b}\phi_{ij}(v;t)=\delta_{ij}\,\frac{1-H_i(t+v)}{1-H_i(v)}+\sum_{k\in I}\int_0^t\frac{\dot{Q}_{ik}(\tau+v)}{1-H_i(v)}\,\phi_{kj}(t-\tau)\,d\tau. \quad (3.4)$
An explanation of (3.4) can be provided. The term $(1-H_i(t+v))/(1-H_i(v))$ represents the probability of remaining in state $i$ for $t+v$ periods, given that the process has already stayed in state $i$ for $v$ periods. This probability contributes to ${}^{b}\phi_{ij}(v;t)$ only if $i=j$.

The second term, $\sum_{k\in I}\int_0^t\bigl(\dot{Q}_{ik}(\tau+v)/(1-H_i(v))\bigr)\,\phi_{kj}(t-\tau)\,d\tau$, expresses the probability of a trajectory providing for the entrance into state $k$ after $\tau$ periods, given that the process has remained in state $i$ for $v$ periods; then the transition to state $j$ from state $k$ in the remaining time $t-\tau$ has to be carried out. This holds for all states $k\in I$ and times $\tau\in[0,t]$.

Note that if $v=0$, ${}^{b}\phi_{ij}(0;t)=\phi_{ij}(t)$, recovering the ordinary HSMP transition probabilities (2.7).
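Under the same grid assumptions as the sketch for (2.7), and with the ordinary probabilities `phi` computed there, (3.4) can be approximated as follows; the initial backward value is given as a grid index, which is again an assumption of the sketch.

```python
import numpy as np

def backward_transition_probabilities(Q, phi, v):
    """Sketch of (3.4): b_phi_ij(v; t) with initial backward value v.

    Q   : cumulative kernel on the grid, shape (T, m, m) (assumed format).
    phi : ordinary HSMP probabilities from the sketch for (2.7).
    v   : initial backward value expressed as a grid index.
    """
    T, m, _ = Q.shape
    H = Q.sum(axis=2)
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    surv = 1.0 - H[v]                                   # 1 - H_i(v), assumed > 0
    b_phi = np.zeros((T - v, m, m))
    for k in range(T - v):
        b_phi[k] = np.diag((1.0 - H[k + v]) / surv)     # stay in i up to t + v, given v
        for s in range(1, k + 1):                       # tau = s-th grid point
            b_phi[k] += (dQ[s + v] / surv[:, None]) @ phi[k - s]
    return b_phi
```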

(b) Let ${}^{f}\phi_{ij}(u;t)=P[Z(t)=j\mid Z(0)=i,\ F(0)=u]$ be the transition probability with fixed initial forward value $u$. It denotes the probability of being in state $j$ after a time $t$ given that, at time zero, the process entered state $i$ and it makes its next transition exactly at time $u$, into whatever state:

${}^{f}\phi_{ij}(u;t)=\sum_{k\in I}\frac{dQ_{ik}(u)}{dH_i(u)}\,\phi_{kj}(t-u). \quad (3.5)$
The term $dQ_{ik}(u)/dH_i(u)$ is the Radon-Nikodym derivative of $Q_{ik}(\cdot)$ with respect to $H_i(\cdot)$. It expresses $dQ_{ik}(u)/dH_i(u)=P[J_{n+1}=k\mid J_n=i,\ T_{n+1}-T_n=u]$. In this way, in (3.5), the entrance into all possible states $k$ with the next transition at time $u$ is considered; from that state, the process then has to occupy state $j$ at time $t$.
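A minimal sketch of (3.5) under the same grid assumptions; the Radon-Nikodym derivative is approximated by the ratio of kernel increments at the given grid index (an approximation introduced here, not stated in the paper).

```python
import numpy as np

def forward_fixed_transition_probabilities(Q, phi, u):
    """Sketch of (3.5): f_phi_ij(u; t) for t >= u, with the next transition
    exactly at grid index u."""
    T, m, _ = Q.shape
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    dH = dQ.sum(axis=2)                       # increments of H_i
    W = dQ[u] / dH[u][:, None]                # W[i, k] ~ P[J_{n+1}=k | J_n=i, X=u]; dH_i(u) > 0 assumed
    f_phi = np.zeros((T, m, m))
    f_phi[u:] = W @ phi[:T - u]               # sum_k W_ik * phi_kj(t - u)
    return f_phi                              # rows before index u are not meaningful and left at zero
```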

(c) Let ${}^{F}\phi_{ij}(u;t)=P[Z(t)=j\mid Z(0)=i,\ F(0)>u]$ be the transition probability with initial certain waiting value $u$. It denotes the probability of being in state $j$ at time $t$ given that, at time zero, the process entered state $i$ and it makes its next transition after time $u$, into any state; we are then sure that, up to time $u$, the process is still in state $i$:

${}^{F}\phi_{ij}(u;t)=\delta_{ij}\,\frac{1-H_i(t)}{1-H_i(u)}+\sum_{k\in I}\int_0^{t-u}\frac{\dot{Q}_{ik}(\tau+u)}{1-H_i(u)}\,\phi_{kj}(t-u-\tau)\,d\tau. \quad (3.6)$
The first term, $\delta_{ij}(1-H_i(t))/(1-H_i(u))$, gives us the probability of remaining in state $i$ up to time $t$, given that the process stays in that state at least up to time $u$. The second term considers the possibility of making the next transition into any state at any time in $(u,t]$, given no movement up to time $u$. After that transition, the system will be in state $j$ at time $t$ following one of the possible trajectories.
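A minimal sketch of (3.6) under the same grid assumptions; `u` is again a grid index and the survival probability $1-H_i(u)$ is assumed positive.

```python
import numpy as np

def forward_certain_transition_probabilities(Q, phi, u):
    """Sketch of (3.6): F_phi_ij(u; t) for t >= u, given no transition out of
    the initial state up to grid index u."""
    T, m, _ = Q.shape
    H = Q.sum(axis=2)
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    surv = 1.0 - H[u]                              # 1 - H_i(u), assumed > 0
    F_phi = np.zeros((T, m, m))
    for k in range(u, T):                          # k is the grid index of the target time t
        F_phi[k] = np.diag((1.0 - H[k]) / surv)
        for s in range(1, k - u + 1):              # entrance time u + s in (u, t]
            F_phi[k] += (dQ[s + u] / surv[:, None]) @ phi[k - u - s]
    return F_phi
```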

(d) Let $\phi^{B}_{ij}(\,\cdot\,;v',t)=P[Z(t)=j,\ B(t)\le t-v'\mid Z(0)=i]$ be the transition probability with final backward value $v'$.

It denotes the probability of being in state $j$ at time $t$, with the process having entered state $j$ with its last transition within the interval $[v',t]$, given that it entered state $i$ at time zero:

$\phi^{B}_{ij}(\,\cdot\,;v',t)=\delta_{ij}\,\bigl(1-H_i(t)\bigr)\,\mathbf{1}_{\{v'=0\}}+\sum_{k\in I}\int_0^t\dot{Q}_{ik}(\tau)\,\phi^{B}_{kj}(\,\cdot\,;v'-\tau,t-\tau)\,d\tau. \quad (3.7)$
The term $(1-H_i(t))$ gives us the probability of remaining in state $i$ from time zero up to time $t$; this probability contributes to $\phi^{B}_{ij}(\,\cdot\,;v',t)$ only if $i=j$ and $v'=0$. In fact, the system will be in state $j$ at time $t$ without any transition in the time interval $[0,t]$ only if $i=j$; the backward value at time $t$ then has, necessarily, to be equal to the length of the interval, that is, $B(t)=t$, and the constraint $B(t)\le t-v'$ implies $v'=0$. The second term considers the possibility of making the next transition into any state at any time $\tau\in[0,t]$; then we have to consider all the possible trajectories that bring the system into state $j$ in the remaining time $t-\tau$, with its last transition into state $j$ at a time belonging to the interval $[v'-\tau,t-\tau]$. In this way, the final backward will be less than or equal to $(t-\tau)-(v'-\tau)=t-v'$, as required in relation (3.7).
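A minimal sketch of (3.7) under the same grid assumptions. It relies on the observation that the recursion shifts $v'$ and $t$ by the same $\tau$, so the probability depends on them only through $a=t-v'$, the largest admissible final backward; this reparametrization is introduced here for convenience and is not stated in the paper.

```python
import numpy as np

def final_backward_probabilities(Q, a):
    """Sketch of (3.7): psiB[k, i, j] ~ P[Z(t_k) = j, B(t_k) <= a | Z(0) = i]
    for a fixed grid index a = t - v' (the largest admissible final backward)."""
    T, m, _ = Q.shape
    H = Q.sum(axis=2)
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    psiB = np.zeros((T, m, m))
    for k in range(T):
        if k <= a:                                 # indicator of (3.7): the no-transition
            psiB[k] = np.diag(1.0 - H[k])          # path needs B(t) = t <= a, i.e. v' <= 0
        for s in range(1, k + 1):
            psiB[k] += dQ[s] @ psiB[k - s]
    return psiB                                    # phi^B(.; v', t) = psiB[t] with a = t - v'
```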

(e) Let $\phi^{F}_{ij}(\,\cdot\,;t,u')=P[Z(t)=j,\ F(t)\le u'-t\mid Z(0)=i]$ be the transition probability with final forward value $u'$.

It denotes the probability of being in state $j$ at time $t$ and of making the next transition in the time interval $(t,u']$, given that the process entered state $i$ at time zero:

$\phi^{F}_{ij}(\,\cdot\,;t,u')=\delta_{ij}\,\bigl(H_i(u')-H_i(t)\bigr)+\sum_{k\in I}\int_0^t\dot{Q}_{ik}(\tau)\,\phi^{F}_{kj}(\,\cdot\,;t-\tau,u'-\tau)\,d\tau. \quad (3.8)$
The first term, $H_i(u')-H_i(t)$, gives us the probability of exiting for the first time from state $i$ in the time interval $(t,u']$; this probability contributes to $\phi^{F}_{ij}(\,\cdot\,;t,u')$ only if $i=j$. The second term considers the possibility of making the next transition into any state at any time $\tau\in[0,t]$; then we have to consider all the possible trajectories that bring the system into state $j$ in the remaining time $t-\tau$, with a final forward value of $u'-\tau$.
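A minimal sketch of (3.8), analogous to the previous one: here the recursion keeps $b=u'-t$ constant, so the probability is computed as a function of $t$ for a fixed grid index $b$ (again a reparametrization introduced only for the sketch).

```python
import numpy as np

def final_forward_probabilities(Q, b):
    """Sketch of (3.8): psiF[k, i, j] ~ P[Z(t_k) = j, F(t_k) <= b | Z(0) = i]
    for a fixed grid index b = u' - t (the largest admissible final forward)."""
    T, m, _ = Q.shape
    H = Q.sum(axis=2)
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    psiF = np.zeros((T - b, m, m))
    for k in range(T - b):
        psiF[k] = np.diag(H[k + b] - H[k])         # first exit from i in (t, t + b]
        for s in range(1, k + 1):
            psiF[k] += dQ[s] @ psiF[k - s]
    return psiF                                    # phi^F(.; t, u') = psiF[t] with b = u' - t
```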

3.1. The General Distributions of the Auxiliary Processes

There are a lot of cases in which combinations of the previous equations can be considered. In this subsection, we explain the equations for the two general cases.

(f) Let ${}^{bf}\phi^{BF}_{ij}(v,u;v',t,u')=P[Z(t)=j,\ B(t)\le t-v',\ F(t)\le u'-t\mid Z(0)=i,\ B(0)=v,\ F(0)=u]$ be the transition probability with initial and final backward and forward values $v$, $u$, $v'$, $u'$. It denotes the probability of being in state $j$ at time $t$, of having entered that state in the time interval $[v',t]$, and of making the next transition in $(t,u']$, given that at present the process is in state $i$, which it entered with its last transition $v$ periods before and in which it remains until time $u$, when a transition takes place.

It results that

${}^{bf}\phi^{BF}_{ij}(v,u;v',t,u')=\sum_{k\in I}\frac{dQ_{ik}(v+u)}{dH_i(v+u)}\,\phi^{BF}_{kj}(\,\cdot\,;v'-u,t-u,u'-u), \quad (3.9)$
where

$\phi^{BF}_{ij}(\,\cdot\,;v',t,u')=P[Z(t)=j,\ B(t)\le t-v',\ F(t)\le u'-t\mid Z(0)=i]=\delta_{ij}\,\bigl(H_i(u')-H_i(t)\bigr)\,\mathbf{1}_{\{v'=0\}}+\sum_{k\in I}\int_0^t\dot{Q}_{ik}(\tau)\,\phi^{BF}_{kj}(\,\cdot\,;v'-\tau,t-\tau,u'-\tau)\,d\tau. \quad (3.10)$
Equation (3.10) is composed of two parts: the first expresses the probability of exiting for the first time from state $i$ in the time interval $(t,u']$; this contributes to $\phi^{BF}_{ij}(\,\cdot\,;v',t,u')$ only when $i=j$, and then the backward value at time $t$ must be equal to $t$ periods, that is, the constraint $B(t)\le t-v'$ imposes $v'=0$. The second term considers the entrance into any state $k$ at any time $\tau\in[0,t]$, after which the system has to be in state $j$ at time $t$ with final backward and forward values of at most $t-v'$ and $u'-t$.

Consequently, relation (3.9) states that to compute ${}^{bf}\phi^{BF}_{ij}(v,u;v',t,u')$ it is enough to consider the probability $\phi^{BF}_{kj}(\,\cdot\,;v'-u,t-u,u'-u)$ using a random starting distribution represented by the Radon-Nikodym derivative $dQ_{ik}(v+u)/dH_i(v+u)$.
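A minimal sketch of (3.10) and (3.9) under the same grid assumptions. Since the recursion in (3.10) shifts $v'$, $t$, and $u'$ by the same $\tau$, the probability depends on the final constraints only through $a=t-v'$ and $b=u'-t$; the reparametrization, the function names, and the integer grid indices are all assumptions of the sketch.

```python
import numpy as np

def psi_BF(Q, a, b):
    """Sketch of (3.10): psi[k, i, j] ~ P[Z(t_k)=j, B(t_k) <= a, F(t_k) <= b | Z(0)=i]
    for fixed grid indices a = t - v' and b = u' - t."""
    T, m, _ = Q.shape
    H = Q.sum(axis=2)
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    psi = np.zeros((T - b, m, m))
    for k in range(T - b):
        if k <= a:                                  # no-transition path: B(t) = t <= a
            psi[k] = np.diag(H[k + b] - H[k])       # ...and first exit in (t, t + b]
        for s in range(1, k + 1):
            psi[k] += dQ[s] @ psi[k - s]
    return psi

def bf_phi_BF(Q, v, u, vp, t, up):
    """Sketch of (3.9): initial backward v, fixed initial forward u, and final
    constraints B(t) <= t - vp, F(t) <= up - t (all grid indices, u <= t <= up)."""
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1,) + Q.shape[1:]))
    dH = dQ.sum(axis=2)
    W = dQ[v + u] / dH[v + u][:, None]              # dQ_ik(v+u) / dH_i(v+u), dH_i > 0 assumed
    psi = psi_BF(Q, a=t - vp, b=up - t)
    return W @ psi[t - u]                           # sum over the first reached state k
```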

(g) Let ${}^{bF}\phi^{BF}_{ij}(v,u;v',t,u')=P[Z(t)=j,\ B(t)\le t-v',\ F(t)\le u'-t\mid Z(0)=i,\ B(0)=v,\ F(0)>u]$ be the transition probability with initial backward, starting certain waiting forward, and final backward and forward values $v$, $u$, $v'$, $u'$. It denotes the probability of being in state $j$ at time $t$, of having entered that state in the time interval $[v',t]$, and of making the next transition in $(t,u']$, given that at present the process is in state $i$, which it entered with its last transition $v$ periods before and in which it remains at least until time $u$. It results that

${}^{bF}\phi^{BF}_{ij}(v,u;v',t,u')=\delta_{ij}\,\frac{H_i(u'+v)-H_i(t+v)}{1-H_i(u+v)}\,\mathbf{1}_{\{v'=-v\}}+\sum_{k\in I}\int_0^{t-u}\frac{\dot{Q}_{ik}(\tau+u+v)}{1-H_i(u+v)}\,\phi^{BF}_{kj}(\,\cdot\,;v'-u-\tau,t-u-\tau,u'-u-\tau)\,d\tau. \quad (3.11)$
The term $(H_i(u'+v)-H_i(t+v))/(1-H_i(u+v))$ gives us the probability of exiting from state $i$ for the first time in the interval $(t,u']$, given that at present the process is in state $i$, which it entered with its last transition $v$ periods before, and that up to time $u$ (i.e., for $u+v$ periods) no new transitions were carried out by the process. This probability is part of ${}^{bF}\phi^{BF}_{ij}(v,u;v',t,u')$ only if $i=j$; in this case the backward value at time $t$ is $B(t)=t+v$, so the constraint $B(t)\le t-v'$ is met when $v'=-v$, the time at which the process actually entered the state occupied at time $t$.

The other term considers the entrance into state $k$ at a time in $(u,t]$, given that at present the process is in state $i$, which it entered with its last transition $v$ periods before and from which it did not move up to time $u$; then all the possible trajectories that bring the system into state $j$ at time $t$, with a final backward and forward of, respectively, at most $t-v'$ and $u'-t$, given the entrance into state $k$, have to be considered.
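A minimal sketch of (3.11), reusing `psi_BF` from the sketch for (3.10); all arguments are grid indices, and the indicator of (3.11) is read as "the final backward constraint allows the no-transition path", which on the grid becomes `vp <= -v`. This reading, like the rest of the code, is an assumption of the sketch.

```python
import numpy as np

def bF_phi_BF(Q, v, u, vp, t, up):
    """Sketch of (3.11): initial backward v, certain waiting forward u (no
    transition up to u), and final constraints B(t) <= t - vp, F(t) <= up - t.
    Assumes u <= t <= up and up + v < T so all indices stay on the grid."""
    T, m, _ = Q.shape
    H = Q.sum(axis=2)
    dQ = np.diff(Q, axis=0, prepend=np.zeros((1, m, m)))
    surv = 1.0 - H[u + v]                           # 1 - H_i(u + v), assumed > 0
    out = np.zeros((m, m))
    if vp <= -v:                                    # indicator term of (3.11)
        out = np.diag((H[up + v] - H[t + v]) / surv)
    psi = psi_BF(Q, a=t - vp, b=up - t)             # sketch given for (3.10)
    for s in range(1, t - u + 1):                   # entrance at absolute time u + s in (u, t]
        out += (dQ[s + u + v] / surv[:, None]) @ psi[t - u - s]
    return out
```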

4. Continuous Time Homogeneous Waiting Times Complete Knowledge Semi-Markov Reliability Model

There are many semi-Markov models in reliability theory; see, for example, [5, 6, 11–13, 31]. The nonhomogeneous case was presented in [13]. More recently, in [28] a nonhomogeneous backward semi-Markov reliability model was presented.

In this section, we will generalize these reliability models taking into account initial and final backward and forward processes all together in a homogeneous environment.

Let us consider a reliability system that can be, at every time $t$, in one of the states of $I=\{1,\dots,m\}$. The stochastic process of the successive states of the system is denoted by $Z=\{Z(t),\ t\ge0\}$.

The state set is partitioned into sets 𝑈 and 𝐷, so that

$I=U\cup D,\qquad \emptyset=U\cap D,\qquad U\neq\emptyset,\qquad U\neq I. \quad (4.1)$
The subset $U$ contains all the “good” states, in which the system is working, and the subset $D$ all the “bad” states, in which the system is not working well or has failed.

The classic indicators used in reliability theory are the following ones.

(i) The pointwise availability function A giving the probability that the system is working at time 𝑡 regardless of what has happened on (0,𝑡]:

$A(t)=P[Z(t)\in U]. \quad (4.2)$

(ii) The reliability function R giving the probability that the system was always working from time 0 to time 𝑡:

$R(t)=P[Z(u)\in U\ \ \forall u\in(0,t]]. \quad (4.3)$

(iii) The maintainability function M giving the probability that the system will leave the set $D$ within the time $t$, given that it is in $D$ at time 0:

$M(t)=1-P[Z(u)\in D,\ \forall u\in(0,t]]. \quad (4.4)$
Considering the generalizations presented in the previous section, we give the following new definitions.

(i) The pointwise homogeneous waiting times complete knowledge availability function with fixed initial forward, ${}^{bf}A^{BF}_i$, giving the probability that the system is working at time $t$, given that it entered state $Z(0)=i$ with initial backward time $v$ and that the process moves from this state exactly at time $u$. Furthermore, we require the system to have entered the working state $Z(t)$ at a time $T_{N(t)}\ge v'$ and to remain in this state until the time of the next transition, $T_{N(t)+1}\le u'$:

${}^{bf}A^{BF}_i(v,u;v',t,u')=P[Z(t)\in U,\ B(t)\le t-v',\ F(t)\le u'-t\mid Z(0)=i,\ B(0)=v,\ F(0)=u]. \quad (4.5)$

(i) The pointwise homogeneous waiting times complete knowledge availability function with starting certain waiting forward time, ${}^{bF}A^{BF}_i$, giving the probability that the system is working at time $t$, conditioned on the entrance into state $Z(0)=i$ with initial backward time $v$ and on the process not moving from this state up to time $u$. Furthermore, we require the system to have entered the working state $Z(t)$ at a time $t\ge T_{N(t)}\ge v'$ and to remain in this state until the time of the next transition, $t<T_{N(t)+1}\le u'$:

${}^{bF}A^{BF}_i(v,u;v',t,u')=P[Z(t)\in U,\ B(t)\le t-v',\ F(t)\le u'-t\mid Z(0)=i,\ B(0)=v,\ F(0)>u]. \quad (4.6)$

(ii) The homogeneous waiting times complete knowledge reliability function with fixed initial forward, ${}^{bf}R^{BF}_i$, giving the probability that the system was always working up to time $t$, given that the entrance into state $Z(0)=i$ occurred $v$ time periods before 0 and that the next transition takes place exactly at time $u$. Furthermore, it is assumed that the system made the $N(t)$th transition at a time $t\ge T_{N(t)}\ge v'$ and the $(N(t)+1)$th at a time $t<T_{N(t)+1}\le u'$:

${}^{bf}R^{BF}_i(v,u;v',t,u')=P[Z(\vartheta)\in U\ \forall\vartheta\in(0,t],\ B(t)\le t-v',\ F(t)\le u'-t\mid Z(0)=i,\ B(0)=v,\ F(0)=u]. \quad (4.7)$

(ii) The homogeneous waiting times complete knowledge reliability function with starting certain waiting forward time, ${}^{bF}R^{BF}_i$, giving the probability that the system was always working up to time $t$, given that the entrance into state $Z(0)=i$ occurred $v$ time periods before 0 and that the next transition takes place after time $u$. Furthermore, it is assumed that the system made the $N(t)$th transition at a time $t\ge T_{N(t)}\ge v'$ and the $(N(t)+1)$th at a time $t<T_{N(t)+1}\le u'$:

${}^{bF}R^{BF}_i(v,u;v',t,u')=P[Z(\vartheta)\in U\ \forall\vartheta\in(0,t],\ B(t)\le t-v',\ F(t)\le u'-t\mid Z(0)=i,\ B(0)=v,\ F(0)>u]. \quad (4.8)$

(iii) The homogeneous waiting times complete knowledge maintainability function ${}^{bf}M^{BF}_i$, giving the probability that the system leaves the set $D$, going into an up state at least once within the time $t$, with $t\ge T_{N(t)}\ge v'$ and $t<T_{N(t)+1}\le u'$, given that at present the process is in state $i\in D$, which it entered with its last transition $v$ periods before, and that at time $u$ a new transition occurs:

${}^{bf}M^{BF}_i(v,u;v',t,u')=P[T_{N(t)}\ge v',\ T_{N(t)+1}\le u',\ \exists\,\vartheta\in(0,t]:Z(\vartheta)\in U\mid Z(0)=i\in D,\ B(0)=v,\ F(0)=u]. \quad (4.9)$

(iii) The homogeneous waiting times complete knowledge maintainability function with starting certain waiting forward time, ${}^{bF}M^{BF}_i$, giving the probability that the system leaves the set $D$, going into an up state within the time $t$, with $t\ge T_{N(t)}\ge v'$ and $t<T_{N(t)+1}\le u'$, given that at present the process is in state $i\in D$, which it entered with its last transition $v$ periods before, and that after time $u$ a new transition occurs:

${}^{bF}M^{BF}_i(v,u;v',t,u')=P[T_{N(t)}\ge v',\ T_{N(t)+1}\le u',\ \exists\,\vartheta\in(0,t]:Z(\vartheta)\in U\mid Z(0)=i\in D,\ B(0)=v,\ F(0)>u]. \quad (4.10)$
The probabilities (4.5), (4.6), (4.7), (4.8), (4.9), and (4.10) can be computed as follows.

(i) The pointwise availability functions (4.5) and (4.6) are, respectively,

${}^{bf}A^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bf}\phi^{BF}_{ij}(v,u;v',t,u'), \quad (4.11)$
${}^{bF}A^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bF}\phi^{BF}_{ij}(v,u;v',t,u'). \quad (4.12)$

(ii) The reliability functions (4.7) and (4.8) are, respectively,

${}^{bf}R^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bf}\phi^{BF}_{R,ij}(v,u;v',t,u'), \quad (4.13)$
${}^{bF}R^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bF}\phi^{BF}_{R,ij}(v,u;v',t,u'), \quad (4.14)$
where ${}^{bf}\phi^{BF}_{R,ij}(v,u;v',t,u')$ and ${}^{bF}\phi^{BF}_{R,ij}(v,u;v',t,u')$ are the solutions of (3.9) and (3.11) with all the states in $D$ made absorbing.

To compute these probabilities, all the states of the subset $D$ are changed into absorbing states through the following transformation of the semi-Markov kernel:

$p_{ij}=\delta_{ij}\quad\text{if } i\in D. \quad (4.15)$

(iii) The maintainability functions (4.9) and (4.10) are, respectively,

${}^{bf}M^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bf}\phi^{BF}_{M,ij}(v,u;v',t,u'), \quad (4.16)$
${}^{bF}M^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bF}\phi^{BF}_{M,ij}(v,u;v',t,u'), \quad (4.17)$
where ${}^{bf}\phi^{BF}_{M,ij}(v,u;v',t,u')$ and ${}^{bF}\phi^{BF}_{M,ij}(v,u;v',t,u')$ are, respectively, the solutions of (3.9) and (3.11) with all the states in $U$ made absorbing.

In this case, all the states of the subset $U$ are changed into absorbing states through the transformation of the semi-Markov kernel:

$p_{ij}=\delta_{ij}\quad\text{if } i\in U. \quad (4.18)$
The reliability indices defined here can assess different probabilities depending on the values of the backward and forward processes. This makes it possible to obtain, for example, complete knowledge of the variability of the survival probabilities of the system depending on the waiting time scenario.
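As an illustration only, the following sketch shows how the classic indicators per starting state, together with the kernel transformations (4.15) and (4.18), could be coded on the grid of the previous sketches. Instead of setting $p_{ij}=\delta_{ij}$ on the absorbing set, the sketch zeroes the corresponding kernel rows, so that no further transition can leave those states; for the sums below the two devices are equivalent, but the choice is ours, not the paper's. The function `hsmp_transition_probabilities` is the sketch given for (2.7).

```python
import numpy as np

def make_absorbing(Q, absorbing_states):
    """Return a copy of the kernel in which the given states can never be left:
    their kernel rows are zeroed, so H_i = 0 and the process stays there forever."""
    Qa = Q.copy()
    Qa[:, list(absorbing_states), :] = 0.0
    return Qa

def reliability_measures(Q, U, D):
    """Availability, reliability, and maintainability per starting state on the
    grid; the generalized indicators (4.11)-(4.17) are obtained in the same way,
    replacing phi by the backward/forward sketches of Section 3."""
    phi   = hsmp_transition_probabilities(Q)                     # full kernel
    phi_R = hsmp_transition_probabilities(make_absorbing(Q, D))  # down states absorbing
    phi_M = hsmp_transition_probabilities(make_absorbing(Q, U))  # up states absorbing
    A = phi[:, :, list(U)].sum(axis=2)     # A[k, i] ~ P[Z(t_k) in U | Z(0) = i]
    R = phi_R[:, :, list(U)].sum(axis=2)   # system never left U up to t_k
    M = phi_M[:, :, list(U)].sum(axis=2)   # system left D at least once by t_k
    return A, R, M
```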

5. The Homogeneous Waiting Time with Complete Knowledge for Semi-Markov Reliability Credit Risk Models

The rating migration problem can be situated in the reliability environment. The rating process, done by the rating agency, states the degree of reliability of a firm’s bond.

The problem of the poor fit of the Markov process in a credit risk environment was outlined in [24–26]. The duration effect of rating transitions is one of the main problems which demonstrate the inadequacy of Markov models: the probability of changing rating depends on the time that a firm has remained within the same rating class (see [32]). This problem can be solved satisfactorily by means of HSMP; see [27, 28, 33]. In fact, as already explained, in an HSMP the transition probabilities are a function of the waiting time spent in a state of the system.

The knowledge of the waiting times around the beginning and the end of the considered interval is of fundamental relevance in credit rating migration modelling. Indeed, the solutions of the evolution equations (3.9) and (3.11) consider the duration time inside the starting and the arriving states.

In the next two subsections, we will present two semi-Markov reliability credit risk models.

To construct a semi-Markov model, it is necessary to construct the embedded Markov chain (2.2) and to find the d.f. of the waiting times (2.4). The embedded Markov chain constructed from real data of the Standard & Poor’s rating agency was given in [34] and is reported in the next subsection. This matrix is aperiodic and irreducible and has two down states, D and NR. In the subsequent subsection, the case in which the default state is supposed to be absorbing is studied and the No Rating state is not considered. Under these hypotheses, the embedded Markov chain is monounireducible; see [35].

5.1. The Irreducible Case with Two Down States

For example, the rating agency Standard & Poor’s considers 9 different classes of rating plus the No Rating state, so we have the following set of states:

$I_1=\{\text{AAA},\text{AA},\text{A},\text{BBB},\text{BB},\text{B},\text{CCC},\text{D},\text{NR}\}. \quad (5.1)$
The ratings express the creditworthiness of the rated firm. Creditworthiness is highest for the rating AAA, assigned to firms that are extremely reliable with regard to financial obligations, and decreases towards the rating D, which expresses the occurrence of a payment default on some financial obligation. A table showing the financial meaning of the Standard & Poor’s rating categories is reported in [21]. For example, the rating B is assigned to firms that are vulnerable to changes in economic conditions but currently show the ability to meet their financial obligations.

The first 7 states are working states (good states) and the last two are the nonworking states. The two subsets are the following:

$U=\{\text{AAA},\text{AA},\text{A},\text{BBB},\text{BB},\text{B},\text{CCC}\},\qquad D_1=\{\text{D},\text{NR}\}. \quad (5.2)$
By solving the different evolution equations we obtain the following results.

(1.1) ${}^{bf}\phi^{BF}_{ij}(v,u;v',t,u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and an initial forward time $u$, with $v'\le T_{N(t)}\le t<T_{N(t)+1}\le u'$.

(1.1) ${}^{bF}\phi^{BF}_{ij}(v,u;v',t,u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and a starting certain forward time $u$, with $v'\le T_{N(t)}\le t<T_{N(t)+1}\le u'$.

Both results take into account the different probabilities of changing state as a function of all the possible entrance and exit times in the starting and arriving states. In the first case, the exit occurs exactly at time $u$; in the second case, after a time $u$.

(1.2) ${}^{bf}A^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bf}\phi^{BF}_{ij}(v,u;v',t,u')$ represents the probability that the system has, at time $t$, an up rating, given that it entered state $i$ with an initial backward time $v$ and exited from $i$ at time $u$ (initial forward time).

(1.2) ${}^{bF}A^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bF}\phi^{BF}_{ij}(v,u;v',t,u')$ represents the probability that the system has, at time $t$, an up rating, given that it entered state $i$ with an initial backward time $v$ and exited from $i$ after time $u$ (certain waiting forward time).

(2.1) ${}^{bf}\phi^{BF}_{R,ij}(v,u;v',t,u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and an initial forward time $u$, with $v'\le T_{N(t)}\le t<T_{N(t)+1}\le u'$, given that the two down states are considered absorbing.

(2.1) ${}^{bF}\phi^{BF}_{R,ij}(v,u;v',t,u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i\in U$ with an initial backward time $v$ and a starting certain forward time $u$, with $v'\le T_{N(t)}\le t<T_{N(t)+1}\le u'$, given that the two down states are considered absorbing.

(2.2) ${}^{bf}R^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bf}\phi^{BF}_{R,ij}(v,u;v',t,u')$ represents the probability that the system was always up in the time interval $(0,t]$, given that it entered state $i\in U$ with an initial backward time $v$ and exited from $i$ at time $u$ (initial forward time), considering the two down states as absorbing.

(2.2) ${}^{bF}R^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bF}\phi^{BF}_{R,ij}(v,u;v',t,u')$ represents the probability that the system was always up in the time interval $(0,t]$, given that it entered state $i\in U$ with an initial backward time $v$ and exited from $i$ after time $u$ (certain waiting forward time), considering the two down states as absorbing.

(3.1) ${}^{bf}\phi^{BF}_{M,ij}(v,u;v',t,u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i\in D$ with an initial backward time $v$ and an initial forward time $u$, with $v'\le T_{N(t)}\le t<T_{N(t)+1}\le u'$, given that all the up states are considered absorbing.

(3.1) ${}^{bF}\phi^{BF}_{M,ij}(v,u;v',t,u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i\in D$ with an initial backward time $v$ and a starting certain forward time $u$, with $v'\le T_{N(t)}\le t<T_{N(t)+1}\le u'$, given that all the up states are considered absorbing.

(3.2) ${}^{bf}M^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bf}\phi^{BF}_{M,ij}(v,u;v',t,u')$ represents the probability that the system at time $t$ has an up rating, given that it entered state $i\in D$ with an initial backward time $v$ and exited from $i$ at time $u$ (initial forward time), with all the up states considered absorbing.

(3.2) ${}^{bF}M^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bF}\phi^{BF}_{M,ij}(v,u;v',t,u')$ represents the probability that the system at time $t$ has an up rating, given that it entered state $i\in D$ with an initial backward time $v$ and exited from $i$ after time $u$ (certain waiting forward time), with all the up states considered absorbing.

The maintainability function $M$ has a precise financial meaning: it assesses the probability that a firm leaves the set $D$ within time $t$ through a reorganization. In fact, if the firm is reorganized, the rating agency will give a new rating which evaluates the new financial situation. In the No Rating state, the re-entrance of the firm into the bond market implies a new rating evaluation.

In Table 1 the embedded Markov chain obtained by considering all the transitions in the historical S&P database is given. This matrix was presented in [34].
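As an illustration only, the following hypothetical sketch shows how the state space (5.1), the partition (5.2), and the reliability sketch of Section 4 could be wired together. The kernel `Q_hat`, estimated from the S&P rating histories, is assumed to be available and is not reproduced here.

```python
# Hypothetical wiring of the credit rating model of Section 5.1.
states = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "D", "NR"]
U  = [states.index(s) for s in ("AAA", "AA", "A", "BBB", "BB", "B", "CCC")]
D1 = [states.index(s) for s in ("D", "NR")]

# A, R, M = reliability_measures(Q_hat, U, D1)    # sketch from Section 4; Q_hat not shown
# 1.0 - R[:, states.index("BBB")]                 # prob. a BBB-rated firm has hit D or NR by t
```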

5.2. The Default as Absorbing Case

In many credit risk migration models, the No Rating state is ignored and the default state is considered absorbing. Under these hypotheses, the embedded Markov chain of the semi-Markov process has only two classes of states: the first is a transient class and the second is an absorbing class. The absorbing class consists of only one state, and all the elements of the main diagonal of the matrix are greater than zero. This kind of matrix (and the corresponding process) is called monounireducible; see [35].

The set of states becomes

$I_2=\{\text{AAA},\text{AA},\text{A},\text{BBB},\text{BB},\text{B},\text{CCC},\text{D}\}, \quad (5.3)$
and the partition into up and down states is

$U=\{\text{AAA},\text{AA},\text{A},\text{BBB},\text{BB},\text{B},\text{CCC}\},\qquad D_2=\{\text{D}\}. \quad (5.4)$
The embedded Markov chain is reported in Table 2.

In this case, reliability and availability coincide. Indeed, the only down state is absorbing, and if the system is available at time $t$ it means that it never entered the default state, so it remained in an up state for the whole observation time, which is the definition of reliability. Furthermore, maintainability does not make sense because it is not possible to exit from the default state.

The following results can be obtained.

(1.1) ${}^{bf}\phi^{BF}_{ij}(v,u;v',t,u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and an initial forward time $u$, with $v'\le T_{N(t)}\le t<T_{N(t)+1}\le u'$.

(1.1) ${}^{bF}\phi^{BF}_{ij}(v,u;v',t,u')$ represents the probability of being in state $j$ after a time $t$, starting in state $i$ with an initial backward time $v$ and a starting certain forward time $u$, with $v'\le T_{N(t)}\le t<T_{N(t)+1}\le u'$.

Both results take into account the different probabilities of changing state during the permanence of the system in the same state, considering all the possible entrance and exit times in the starting and arriving states. In the first case, the exit occurs exactly at time $u$; in the second case, after a time $u$.

(1.2) ${}^{bf}A^{BF}_i(v,u;v',t,u')={}^{bf}R^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bf}\phi^{BF}_{ij}(v,u;v',t,u')$ represents the probability that the system at time $t$ has an up rating, given that it entered state $i\in U$ with an initial backward time $v$ and exited from $i$ at time $u$ (initial forward time).

(1.2) ${}^{bF}A^{BF}_i(v,u;v',t,u')={}^{bF}R^{BF}_i(v,u;v',t,u')=\sum_{j\in U}{}^{bF}\phi^{BF}_{ij}(v,u;v',t,u')$ represents the probability that the system at time $t$ has an up rating, given that it entered state $i\in U$ with an initial backward time $v$ and exited from $i$ after time $u$ (certain waiting forward time).

Remark 5.1. We wish to mention that the Markov matrices given yearly in the Standard & Poor’s publications always have larger elements on the main diagonal than the matrices presented in this paper. The reason is that, in a semi-Markov environment, transitions occur only when there is a real check on the state. In a credit risk environment this means that a transition is recorded if and only if the rating agency assigns a new rating, given that the firm already has a rating. In the S&P Markov transition chain, if in a given year there was no rating evaluation of a firm, it is supposed that the firm is still in the same state. The rating agency, in the construction of the transition matrix, then records a “virtual” transition (a transition from a state into the same state). This implies that the number of virtual transitions is very high and the Markov chain becomes, almost everywhere, diagonally dominant. In the embedded Markov chain of the SMP, virtual transitions are possible, but they happen only when the rating agency gives a new rating which is equal to the previous one.
We think another reason for the superior performance of the semi-Markov environment as compared to that of the Markov one is the fact that the former only considers the transitions of the rating process which actually occurred.

6. Conclusions

This paper introduces, for the first time to the authors’ knowledge, the simultaneous use of initial and final backward and forward recurrence processes in a continuous time homogeneous semi-Markov environment.

By means of this new approach a generalization of the transition probabilities of an HSMP is given and we show how it is possible to consider the time spent by the system in the starting state and in the final state. The waiting time inside the starting state is managed by means of initial backward and forward times. The time spent in the last state of the considered horizon is studied by means of the final backward and forward times.

The obtained results are used to derive generalized reliability measures and we show how it is possible to compute them.

An application to credit risk problems, which is considered as a particular aspect of the more general context of the reliability of a system, is illustrated. In this way, the paper may also serve the purpose of inviting stochastic modelling engineers into a new field. However, the model could also be useful for solving other reliability problems.

In the last part of the paper, the Markov chains embedded in the homogeneous semi-Markov processes, obtained from the historical Standard & Poor’s database, are presented. The difference between the obtained transition matrices and the ones provided by the Standard & Poor’s agency is outlined. The authors also explain why the matrices obtained here are more reliable than those of Standard & Poor’s.

Future work includes the construction of

(i) the discrete time version of this model;
(ii) the related algorithm and computer program;
(iii) the nonhomogeneous model;
(iv) the related algorithm and computer program.

Furthermore, we hope to apply the models to the mechanical reliability context in the near future.