#### Abstract

A parameter estimation problem for a backup system under condition-based maintenance is considered. The backup system is modeled by a hidden three-state continuous-time Markov process. Data are obtained through condition monitoring at discrete time points. Maximum likelihood estimates of the model parameters are obtained using the EM algorithm. We establish conditions under which any sequence derived by the EM algorithm has at most one limit point in the parameter space.

#### 1. Introduction

Suppose a backup system is represented by a continuous-time homogeneous Markov chain with state space {0, 1, 2}. States 0, 1, and 2 are the healthy state, the unhealthy state, and the failure state, respectively. Assume that the system is in the healthy state at time 0 and that the transition rate matrix is given by (1), where the transition rates are unknown. Here a known upper bound on the parameters is prescribed. Suppose the system is observed at equally spaced time points, where the spacing is a fixed inspection interval. If the system is found failed at an inspection, it is replaced by a new system. Let two processes record the replacements and the observations, and let them together form the record of the system. Here the replacement indicator equals 1 if the system fails during the corresponding inspection interval, and 0 otherwise. As a path of the observation process is a right-continuous step function, it jumps when a replacement occurs. Moreover, we fix its initial value for convenience. The replacement process represents the replacements of the system, and the observation process represents the observable information about the system collected through condition monitoring.
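As an illustration of the observation scheme, the following sketch simulates the inspection record of such a system. It assumes a specific rate structure (healthy to unhealthy at rate `lam1`, unhealthy to failure at rate `lam2`, no direct jump from healthy to failure); the rate names, the inspection interval `delta`, and the function itself are hypothetical, since the rate matrix (1) is not reproduced here.

```python
import random

def simulate_record(lam1, lam2, delta, n_inspections, seed=0):
    """Simulate the states seen at inspections k*delta, k = 1, 2, ...
    Assumed model: 0 (healthy) -> 1 (unhealthy) at rate lam1,
    1 -> 2 (failed) at rate lam2; a system found failed is replaced
    by a new healthy one, and the clock restarts."""
    rng = random.Random(seed)
    obs = []
    t01 = rng.expovariate(lam1)            # time of the 0 -> 1 jump
    t12 = t01 + rng.expovariate(lam2)      # time of the 1 -> 2 jump
    t = 0.0                                # time since last replacement
    for _ in range(n_inspections):
        t += delta
        if t >= t12:                       # failure found: replace system
            obs.append(2)
            t = 0.0
            t01 = rng.expovariate(lam1)
            t12 = t01 + rng.expovariate(lam2)
        elif t >= t01:
            obs.append(1)                  # observed unhealthy
        else:
            obs.append(0)                  # observed healthy
    return obs

print(simulate_record(0.2, 0.5, 1.0, 20, seed=1))
```

Between inspections the exact jump times stay hidden, which is what makes the incomplete-data likelihood awkward.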

The maximum likelihood estimates (MLEs) of the parameters of such models have been studied in [1, 2]. As the observed stochastic processes are not Markov processes and the sample path of the underlying chain is not observable, the likelihood function of the incomplete data is complicated. Hence, it is difficult to obtain the MLE directly, where and . Here the bound is a prearranged constant. Both [1] and [2] suggest the EM algorithm (see, e.g., [3, 4]). Let the initial values of the unknown parameters be given. The EM algorithm works as follows.

The E step. For the current parameter value, compute the following pseudo-likelihood function: Here the complete data set of the process is used. The form of the complete data set may differ for different purposes; for example, the forms used in [1] and [2] are different.

The M step. Choose the maximizer of the pseudo-likelihood function. The E and M steps are then repeated. According to the theory of EM algorithms (Theorem 1 in [3]), the likelihood is nondecreasing for any given initial value. It is clear that an MLE, when it exists, is one of the fixed-points of the resulting operator.
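The alternation of the two steps can be written as a generic loop; the `e_step` and `m_step` callables stand in for the model-specific formulas (2) and (3), which are not reproduced here, so this is only a structural sketch.

```python
def em_iterate(theta0, e_step, m_step, n_iter=100, tol=1e-8):
    """Generic EM loop. e_step builds the pseudo-likelihood object for
    the current parameter; m_step returns its maximizer. Iteration
    stops when successive parameter values are closer than tol."""
    theta = theta0
    for _ in range(n_iter):
        q = e_step(theta)          # E step
        theta_new = m_step(q)      # M step
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta

# Toy usage: an update that contracts toward 2, mimicking a convergent
# EM sequence (purely illustrative, not the paper's operator).
print(em_iterate(0.0, lambda t: t, lambda q: 0.5 * q + 1.0))
```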

In this paper, we consider the uniqueness of the MLE. As the likelihood function of the incomplete data is complicated, we do not follow the classical method, by which the uniqueness of an MLE is demonstrated by establishing the global concavity of the log-likelihood function (see, e.g., [5–7]). Instead, we investigate conditions under which the EM operator is a contraction. These conditions ensure that the MLE is unique if it exists. Moreover, they imply that the different sequences derived by the EM algorithm have at most one limit point in the parameter space. For the complete data set presented in the next section, we have the following main theorem of this paper.

Theorem 1. *There is at most one fixed-point of the operator in , provided that and the record has at least one replacement.*

#### 2. Complete Data Set

To establish the expression of the operators, we present our construction of the path of the chain and of the complete data set of the process. Suppose that the random variables are independent and exponentially distributed. As every state of the chain has an exponentially distributed sojourn time, we may construct a path of the chain through the approach introduced in Theorem 5.4 of [8]. The path restricted to the first inspection interval has the following form: and for . If for , we can construct the following path restricted to the next interval. Consider If , that is, the system fails during the interval, a new system replaces it at the following inspection time, and the path on the subsequent interval is obtained by resetting the time origin in (5). In this paper, the complete data set consists of the quantities used in this construction.
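Because the complete data record the exact sojourn times in each state, the complete-data likelihood factorizes into independent exponential samples, and its maximizer has the familiar closed form, count divided by total sojourn time. The sketch below assumes that structure; the names `lam1`, `lam2` and the cycle decomposition are illustrative, not the paper's notation.

```python
import random

def complete_data_mle(lam1, lam2, n_cycles, seed=1):
    """Simulate complete data for n_cycles replacement cycles (sojourn
    in state 0, then in state 1, both exponential) and return the
    complete-data MLEs: count / total sojourn time for each rate."""
    rng = random.Random(seed)
    s0 = [rng.expovariate(lam1) for _ in range(n_cycles)]
    s1 = [rng.expovariate(lam2) for _ in range(n_cycles)]
    return n_cycles / sum(s0), n_cycles / sum(s1)

print(complete_data_mle(0.2, 0.5, 100_000))
```

With the complete data in hand, the estimates recover the true rates; the difficulty in the paper is precisely that the sojourn times are not observed.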

Our choice of the complete data set ensures a simple form of the operator . The log-likelihood function has the following form: The Markov property of the process implies that where .

In Table 1, we denote the number of occurrences of each value of the triple . For example, the entry is the number of triples taking the corresponding value. It follows from (6) that

Here the following forms of for follow from (7) and Table 1. Consider

For any given , it is obvious that the function is concave for . Hence there is a unique vector satisfying Moreover, it follows from the definition (3) that if , . Therefore, every fixed-point of in is a fixed-point of and vice versa. In this paper, we will prove Theorem 1 by studying the number of fixed-points of in . Here we present the form of derived from (14). Consider

#### 3. Two Lemmas

Lemma 2. *Consider the following:*

* Proof. *Writing and
we have . Hence,

As on , we have that and . Therefore,
Moreover, we have
As on the region , it follows from the definition that there is such that
Similarly, there is such that
The result follows from (19), (20), and (22).

Lemma 3. *Let a function be such that and for . Then for any .*

* Proof. *By a routine analysis, we may obtain that and for ,
Moreover, we may obtain that for ,
where . We have
where . A routine analysis may confirm that , where . Then it follows from that . Moreover, as , we have that for any , there is such that
Then it follows from and (25) that . Hence, it follows from (24) that for any . Therefore, for any
The result follows from the above formula and the fact that is an even function.

#### 4. Proof of the Main Theorem

We show that the operator is a contraction by investigating its Jacobian matrix .

By a routine analysis, it follows from (12) that Hence, it follows from Lemma 3 that Similarly, we have As , it follows from (28) and Lemma 3 that

It follows from (11) that , and . Therefore, From the definition (11), we have that . Then, it follows from Lemma 3 that

It follows from (9) and (10) that , , and hence for ,

Write , where for , As are continuous on the convex set , it follows from Theorem 5.19 in [9] that for , there exist such that Therefore, we have that , where . Hence, there is at most one fixed-point of in when . As every fixed-point of in is a fixed-point of , we will prove the theorem by showing that .
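The contraction bound on the Jacobian can be checked numerically for any concrete one-dimensional update map by estimating the supremum of its derivative over the parameter interval. The map used below is an illustrative stand-in, since the paper's operator depends on the observed record.

```python
import math

def sup_derivative(m_map, low, high, n_grid=1000, h=1e-6):
    """Approximate sup |M'(theta)| over [low, high] by central finite
    differences on a grid; a value below 1 indicates a contraction,
    hence at most one fixed point on the interval."""
    best = 0.0
    for i in range(n_grid):
        t = low + (high - low) * (i + 0.5) / n_grid
        d = abs((m_map(t + h) - m_map(t - h)) / (2.0 * h))
        best = max(best, d)
    return best

q = sup_derivative(lambda t: 0.5 * math.cos(t), 0.0, 3.0)
print(q)   # well below 1 for this toy map
```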

It follows from (29) to (35), (15), and Lemma 2 that Then it follows from that and .

By assumption, the record has at least one replacement; that is, there is such that . As a new system replaces the old failed system at time , we have that . The proof of the theorem is now completed in two cases.

*Case 1. *The case . We have that . It follows from (37) and (38) that

*Case 2. *The case . According to the condition-based maintenance policy, a failed system is replaced at an inspection. Hence follows from . As , , and , we have that . Assume that the last replacement occurs at if any replacement occurs before , and write when there is no replacement before . Now we study the sequence , which consists of the digits 0 and 1, corresponding to the healthy and unhealthy states of a system without replacement. It is obvious that there is such that and . Then follows from , and . It follows from (37) and that
Moreover, it follows from (38) and that

#### 5. Example and Discussion

We apply the EM algorithm to a simulated dataset. Based on this example, we show the efficiency and accuracy of the EM algorithm, and we also point out some limitations and shortcomings of Theorem 1. In this example, we generate consecutive inspections of a simulated backup system defined by (1). The true parameters are and , which are adopted from [2].

We first describe the process of iterations defined by the EM algorithm (2) and (3). For a given pair of initial values in a given parameter space , we may derive from (15). If , it is the unique solution to (3), and we have that . If we are fortunate and can repeat the operation again and again, then we obtain a sequence .

It follows from the expression (8) of that is continuous with respect to both and . Similar to the discussion of Theorem 1 in [4], we can prove that if the limit of the sequence exists and is also in , then the limit is a fixed-point of the operator . Theorem 1 shows that the limit is unique for all such sequences.

In the first experiment, we run the EM algorithm for different initial values. In this experiment, we set and the parameter space . We run the algorithm for pairs of initial values chosen randomly from to . For each pair of initial values, we run the algorithm for iterations. Figure 1 shows the final estimates of the parameters for these initial values. We can see that the algorithm converges to the same result for a wide range of initial values. As Theorem 1 states that the number of fixed points is at most one, we can conclude that there is a unique fixed point, and hence it is the MLE of the model parameters on the parameter space .
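The first experiment can be mimicked with a toy update map: start the iteration from many random initial values and check that every run ends at the same point. The map `update` below is an illustrative contraction, not the operator derived from the data.

```python
import math
import random

rng = random.Random(42)
update = lambda t: 1.0 + 0.3 * math.sin(t)   # contraction: |update'| <= 0.3

finals = []
for _ in range(50):
    t = rng.uniform(0.0, 3.0)                # random initial value
    for _ in range(200):                     # fixed iteration budget
        t = update(t)
    finals.append(t)

spread = max(finals) - min(finals)
print(spread)   # essentially zero: every start reaches the same point
```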

Sometimes, the above procedure must stop without outputting the estimated parameters. In general, for a given , if the value derived from (15) is not an element of , then the solution to (3) lies on the boundary of . As we do not have an explicit expression for this case, the procedure is aborted.
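The stopping rule can be sketched as a wrapper that aborts the iteration as soon as an iterate leaves the parameter space. The half-open interval `(low, high]` and the toy update map are assumptions for illustration.

```python
def em_with_abort(m_map, theta0, space, n_iter=1000):
    """Iterate theta -> m_map(theta); return (None, k) if the k-th
    iterate leaves the half-open interval space = (low, high],
    otherwise (final value, n_iter)."""
    low, high = space
    theta = theta0
    for k in range(n_iter):
        theta = m_map(theta)
        if not (low < theta <= high):
            return None, k                 # aborted: left the space
    return theta, n_iter

f = lambda t: 0.5 * t + 1.0                # toy update, fixed point at 2
print(em_with_abort(f, 0.5, (0.0, 3.0)))   # converges inside the space
print(em_with_abort(f, 0.5, (0.0, 1.4)))   # aborts: iterates escape
```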

In the second experiment, using the same dataset as in the first experiment, we run the EM algorithm on another parameter space with . We run the algorithm for pairs of initial values chosen randomly from to . As predicted, the procedure is aborted for every pair of initial values. For these initial values, Figure 2 shows the maximal number of iterations completed before the procedure is aborted.

As noted above, the MLE in , when it exists, is one of the fixed-points of the operator . However, there may be other fixed-points of , such as stationary points of the likelihood. Theorem 1 provides a sufficient condition under which such additional fixed-points do not exist; our first experiment and Figure 1 illustrate this fact. However, in some cases there is no fixed-point of on the parameter space . Such a phenomenon occurs when a wrong parameter space is chosen, as in the second experiment.

#### 6. Conclusion

A parameter estimation problem for a backup system has been considered. We established an EM algorithm, which can be used to determine iteratively the maximum likelihood estimates given observations of the system at discrete time points. It has been shown that, for any initial values, the sequence derived by the EM algorithm converges to a unique point whenever its limit belongs to the specified parameter space.

#### Acknowledgment

This research was supported by the National Natural Science Foundation of China Grant Nos. 50977073 and 70971109.