International Journal of Quality, Statistics, and Reliability, Volume 2011 (2011), Article ID 719534, 6 pages. doi:10.1155/2011/719534
Research Article

## Estimation of Failure Probability and Its Applications in Lifetime Data Analysis

Ming Han

Department of Mathematics and Physics, Fujian University of Technology, Fuzhou, Fujian 350108, China

Received 28 June 2010; Accepted 13 January 2011

Copyright © 2011 Ming Han. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Since Lindley and Smith introduced the idea of the hierarchical prior distribution, a number of results have been obtained on hierarchical Bayesian methods for lifetime data. But all those results involve complicated integrals. Though computing methods such as Markov chain Monte Carlo (MCMC) are available, the required integration remains inconvenient in practical problems. This paper introduces a new method, named the E-Bayesian estimation method, to estimate failure probability. In the case of one hyperparameter, the definition of the E-Bayesian estimation of the failure probability is provided; moreover, the formulas of the E-Bayesian estimation and the hierarchical Bayesian estimation, and a property of the E-Bayesian estimation of the failure probability, are also provided. Finally, calculation on a practical problem shows that the proposed method is feasible and easy to perform.

#### 1. Introduction

In the area of industrial product reliability, engineers often deal with truncated data from life testing, where the data may have small sample size or be censored, and the products of interest have high reliability. In the literature, Lindley and Smith [1] first introduced the idea of the hierarchical prior distribution. Han [2] developed some methods to construct hierarchical prior distributions. Recently, hierarchical Bayesian methods have been applied to data analysis [3]. However, the complicated integrals required by hierarchical Bayesian methods are hard to compute in practical problems, even though computing methods such as Markov chain Monte Carlo (MCMC) are available [4, 5].

Han [6] introduced a new method—the E-Bayesian estimation method—to estimate failure probability in the case of two hyperparameters, proposed the definition of the E-Bayesian estimation of the failure probability, and provided formulas for the E-Bayesian estimation under three different prior distributions of the hyperparameters. But that work did not provide formulas for the hierarchical Bayesian estimation of the failure probability, nor did it discuss the relations between the two estimations. In this paper, we introduce the definition of the E-Bayesian estimation of the failure probability, provide formulas for both the E-Bayesian estimation and the hierarchical Bayesian estimation, and discuss the relations between the two estimations in the case of only one hyperparameter. We will see that the E-Bayesian estimation method is really simple.

Conduct the type I censored life test $m$ times; denote the censoring times as $t_1, t_2, \dots, t_m$, the corresponding sample sizes as $n_1, n_2, \dots, n_m$, and the corresponding numbers of failed samples observed in the testing process as $r_1, r_2, \dots, r_m$.

In the situation where no information about the life distribution of the tested products is available, Mao and Luo [7] introduced a so-called curve-fitting distribution method to estimate the failure probability $p_i = P(T \le t_i)$ at time $t_i$, for $i = 1, 2, \dots, m$, where $T$ is the life of a product.

This paper introduces a new method, called the E-Bayesian estimation method, to estimate failure probability. The definition and formulas of the E-Bayesian estimation of the failure probability are described in Sections 2 and 3, respectively. In Section 4, formulas for the hierarchical Bayesian estimation of the failure probability are proposed. In Section 5, a property of the E-Bayesian estimation is discussed. In Section 6, an application example is given. Section 7 concludes.

#### 2. Definition of E-Bayesian Estimation of $p$

Suppose that there are $r$ failures among $n$ samples; then $r$ can be viewed as a random variable with the binomial distribution $B(n, p)$, where $p$ is the failure probability. We take the conjugate prior of $p$, with density function
$$\pi(p \mid a, b) = \frac{p^{a-1}(1-p)^{b-1}}{B(a,b)},\qquad 0 < p < 1, \tag{1}$$
where $B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt$ is the beta function, and the hyperparameters satisfy $a > 0$ and $b > 0$.

The derivative of $\pi(p \mid a, b)$ with respect to $p$ is
$$\frac{d\pi(p \mid a, b)}{dp} = \frac{p^{a-2}(1-p)^{b-2}}{B(a,b)}\,\bigl[(a-1)(1-p) - (b-1)p\bigr]. \tag{2}$$

According to Han [2], $a$ and $b$ should be chosen to guarantee that $\pi(p \mid a, b)$ is a decreasing function of $p$. Thus $0 < a \le 1$ and $b > 1$.

Given $0 < a \le 1$, the larger the value of $b$, the thinner the tail of the density function. Berger [8] showed that a thinner-tailed prior distribution often reduces the robustness of the Bayesian estimate. Consequently, the hyperparameter $b$ should be chosen under the restriction $1 < b < c$, where $c$ is a given upper bound. How to determine the constant $c$ is described later in an example.

Note that $\pi(p \mid 1, 1) = 1$ is the density of the uniform distribution on $(0, 1)$, so we take $a = 1$. When $a = 1$ and $b > 1$, $\pi(p \mid 1, b) = b(1-p)^{b-1}$ is also a decreasing function of $p$.

In this paper we only consider the case when $a = 1$. Then the density function (1) becomes
$$\pi(p \mid b) = b(1-p)^{b-1},\qquad 0 < p < 1,\ 1 < b < c. \tag{3}$$

The definition of E-Bayesian estimation was originally addressed by Han [6] in the case of two hyperparameters. In the case of one hyperparameter, the E-Bayesian estimation of failure probability is defined as follows.

Definition 1. With $\hat p_B(b)$ being continuous,
$$\hat p_{EB} = \int_D \hat p_B(b)\,\pi(b)\,db$$
is called the expected Bayesian estimation of $p$ (briefly, E-Bayesian estimation), which is assumed to be finite, where $D$ is the domain of $b$, $\hat p_B(b)$ is the Bayesian estimation of $p$ with hyperparameter $b$, and $\pi(b)$ is the density function of $b$ over $D$.

Definition 1 indicates that the E-Bayesian estimation of $p$,
$$\hat p_{EB} = \int_D \hat p_B(b)\,\pi(b)\,db = E\bigl[\hat p_B(b)\bigr],$$
is the expectation of the Bayesian estimation of $p$ over the hyperparameter $b$.

#### 3. E-Bayesian Estimation of $p_i$

Theorem 2. For the type I censored testing data $(n_i, r_i)$, where $i = 1, 2, \dots, m$, let $n_i$ denote the sample size and $r_i$ the number of failures observed by the censoring time $t_i$. If the prior density function of $p_i$ is given by (3), then we have the following.
(i) With the quadratic loss function, the Bayesian estimation of $p_i$ is
$$\hat p_{Bi}(b) = \frac{r_i + 1}{n_i + b + 1}.$$
(ii) If the prior distribution of $b$ is uniform on $(1, c)$, then the E-Bayesian estimation of $p_i$ is
$$\hat p_{EBi} = \frac{r_i + 1}{c - 1}\,\ln\!\left(\frac{n_i + c + 1}{n_i + 2}\right).$$

Proof. (i) For the type I censored testing data $(n_i, r_i)$, according to Han [6], the likelihood function of the samples is
$$L(r_i \mid p_i) = \binom{n_i}{r_i}\,p_i^{r_i}(1-p_i)^{n_i - r_i},$$
where $0 < p_i < 1$ and $i = 1, 2, \dots, m$.
Combined with the prior density function of $p_i$ given by (3), Bayes' theorem leads to the posterior density function of $p_i$,
$$h(p_i \mid r_i) = \frac{p_i^{r_i}(1-p_i)^{n_i - r_i + b - 1}}{B(r_i + 1,\ n_i - r_i + b)}.$$
Thus, with the quadratic loss function, the Bayesian estimation of $p_i$ is
$$\hat p_{Bi}(b) = \int_0^1 p_i\,h(p_i \mid r_i)\,dp_i = \frac{B(r_i + 2,\ n_i - r_i + b)}{B(r_i + 1,\ n_i - r_i + b)} = \frac{r_i + 1}{n_i + b + 1}.$$
(ii) If the prior distribution of $b$ is uniform on $(1, c)$, then, by Definition 1, the E-Bayesian estimation of $p_i$ is
$$\hat p_{EBi} = \int_1^c \hat p_{Bi}(b)\,\frac{1}{c-1}\,db = \frac{r_i + 1}{c - 1}\int_1^c \frac{db}{n_i + b + 1} = \frac{r_i + 1}{c - 1}\,\ln\!\left(\frac{n_i + c + 1}{n_i + 2}\right).$$
This concludes the proof of Theorem 2.
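The two formulas of Theorem 2 can be sketched numerically. The following is a minimal sketch, assuming the prior (3) with $b$ uniform on $(1, c)$; the function names are illustrative, not from the paper.

```python
import math

def bayes_estimate(r, n, b):
    """Bayes estimate of the failure probability under quadratic loss,
    with prior density pi(p | b) = b * (1 - p)**(b - 1)  (Theorem 2(i))."""
    return (r + 1) / (n + b + 1)

def e_bayes_estimate(r, n, c):
    """E-Bayesian estimate: the expectation of bayes_estimate(r, n, b)
    over b ~ Uniform(1, c)  (Theorem 2(ii), closed form)."""
    return (r + 1) / (c - 1) * math.log((n + c + 1) / (n + 2))

# Sanity check: averaging the Bayes estimate over a fine grid of b values
# should reproduce the closed-form E-Bayesian estimate.
grid = [1 + (4 - 1) * (k + 0.5) / 10_000 for k in range(10_000)]
numeric = sum(bayes_estimate(2, 100, b) for b in grid) / len(grid)
```

Here `numeric` agrees with `e_bayes_estimate(2, 100, 4)` to within the quadrature error, which illustrates Definition 1 directly: the E-Bayesian estimate is just the mean of the Bayes estimate over the hyperparameter.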

#### 4. Hierarchical Bayesian Estimation

If the prior density function of $p_i$ is given by (3), how can the value of the hyperparameter $b$ be determined? Lindley and Smith [1] proposed the idea of the hierarchical prior distribution: when a prior distribution contains hyperparameters, a prior distribution may in turn be placed on those hyperparameters.

If the prior of $p_i$, $\pi(p_i \mid b)$, is given by (3), and the prior distribution of $b$ is uniform on $(1, c)$ with density function $\pi(b) = \frac{1}{c-1}$, $1 < b < c$, then the hierarchical prior density function of $p_i$ is
$$\pi(p_i) = \int_1^c \pi(p_i \mid b)\,\pi(b)\,db = \frac{1}{c-1}\int_1^c b\,(1-p_i)^{b-1}\,db,\qquad 0 < p_i < 1. \tag{12}$$

Theorem 3. For the type I censored testing data $(n_i, r_i)$, where $i = 1, 2, \dots, m$, let $n_i$ denote the sample size and $r_i$ the number of failures observed by the censoring time $t_i$. If the hierarchical prior density function of $p_i$ is given by (12), then, with the quadratic loss function, the hierarchical Bayesian estimation of $p_i$ is
$$\hat p_{HBi} = \frac{\int_1^c b\,B(r_i + 2,\ n_i - r_i + b)\,db}{\int_1^c b\,B(r_i + 1,\ n_i - r_i + b)\,db}.$$

Proof. According to the course of the proof of Theorem 2, the likelihood function of the samples is
$$L(r_i \mid p_i) = \binom{n_i}{r_i}\,p_i^{r_i}(1-p_i)^{n_i - r_i},$$
where $0 < p_i < 1$ and $i = 1, 2, \dots, m$.
From the hierarchical prior density function of $p_i$ given by (12), Bayes' theorem leads to the hierarchical posterior density function of $p_i$,
$$h(p_i \mid r_i) = \frac{L(r_i \mid p_i)\,\pi(p_i)}{\int_0^1 L(r_i \mid p_i)\,\pi(p_i)\,dp_i} = \frac{\int_1^c b\,p_i^{r_i}(1-p_i)^{n_i - r_i + b - 1}\,db}{\int_1^c b\,B(r_i + 1,\ n_i - r_i + b)\,db}.$$
With the quadratic loss function, the hierarchical Bayesian estimation of $p_i$ is
$$\hat p_{HBi} = \int_0^1 p_i\,h(p_i \mid r_i)\,dp_i = \frac{\int_1^c b\,B(r_i + 2,\ n_i - r_i + b)\,db}{\int_1^c b\,B(r_i + 1,\ n_i - r_i + b)\,db}.$$
Thus, the proof is completed.
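The ratio of integrals above has no closed form, but it is straightforward to evaluate numerically. The following is a minimal sketch, assuming the standard identity $\log B(x, y) = \log\Gamma(x) + \log\Gamma(y) - \log\Gamma(x+y)$ and a midpoint rule on $(1, c)$; the function names are illustrative.

```python
import math

def log_beta(x, y):
    """log B(x, y) via the log-Gamma function (numerically stable)."""
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def hier_bayes_estimate(r, n, c, steps=2000):
    """Hierarchical Bayes estimate of the failure probability: the ratio of
    int_1^c b*B(r+2, n-r+b) db to int_1^c b*B(r+1, n-r+b) db,
    evaluated with a midpoint rule on (1, c)."""
    h = (c - 1) / steps
    num = den = 0.0
    for k in range(steps):
        b = 1 + (k + 0.5) * h
        num += b * math.exp(log_beta(r + 2, n - r + b))
        den += b * math.exp(log_beta(r + 1, n - r + b))
    return num / den  # the common step factor h cancels in the ratio
```

By the mean value argument used in Section 5, the result always lies strictly between $(r+1)/(n+c+1)$ and $(r+1)/(n+2)$, which provides a convenient check on the quadrature.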

#### 5. Property of E-Bayesian Estimation of $p_i$

Now we discuss the relations between $\hat p_{EBi}$ and $\hat p_{HBi}$ obtained in Theorems 2 and 3.

Theorem 4. In Theorems 2 and 3, $\hat p_{EBi}$ and $\hat p_{HBi}$ satisfy $\lim_{n_i \to \infty}\bigl(\hat p_{HBi} - \hat p_{EBi}\bigr) = 0$.

Proof. According to the course of the proof of Theorem 2, we have that
$$\hat p_{EBi} = \frac{1}{c-1}\int_1^c \frac{r_i + 1}{n_i + b + 1}\,db. \tag{17}$$
When $1 \le b \le c$, $\frac{r_i+1}{n_i+b+1}$ is continuous in $b$; by the mean value theorem for definite integrals, there is at least one number $b_1 \in (1, c)$ such that
$$\int_1^c \frac{r_i + 1}{n_i + b + 1}\,db = \frac{(r_i + 1)(c - 1)}{n_i + b_1 + 1}. \tag{18}$$
According to (17) and (18), we have that
$$\hat p_{EBi} = \frac{r_i + 1}{n_i + b_1 + 1}. \tag{19}$$
According to the relation between the Beta function and the Gamma function, we have that
$$B(r_i + 2,\ n_i - r_i + b) = \frac{r_i + 1}{n_i + b + 1}\,B(r_i + 1,\ n_i - r_i + b), \tag{20}$$
where the Gamma function is $\Gamma(x) = \int_0^\infty t^{x-1}e^{-t}\,dt$.
When $1 \le b \le c$, we have that $b\,B(r_i + 1,\ n_i - r_i + b) > 0$, and $\frac{r_i+1}{n_i+b+1}$ is continuous; by the generalized mean value theorem for definite integrals, there is at least one number $b_2 \in (1, c)$ such that
$$\int_1^c b\,B(r_i + 2,\ n_i - r_i + b)\,db = \frac{r_i + 1}{n_i + b_2 + 1}\int_1^c b\,B(r_i + 1,\ n_i - r_i + b)\,db. \tag{21}$$
According to Theorem 3, we have that
$$\hat p_{HBi} = \frac{\int_1^c b\,B(r_i + 2,\ n_i - r_i + b)\,db}{\int_1^c b\,B(r_i + 1,\ n_i - r_i + b)\,db} = \frac{r_i + 1}{n_i + b_2 + 1}. \tag{22}$$
According to (19) and (22), we have that
$$\lim_{n_i \to \infty}\bigl(\hat p_{HBi} - \hat p_{EBi}\bigr) = \lim_{n_i \to \infty}\left(\frac{r_i + 1}{n_i + b_2 + 1} - \frac{r_i + 1}{n_i + b_1 + 1}\right) = 0.$$
Thus, the proof is completed.

Theorem 4 shows that $\hat p_{EBi}$ and $\hat p_{HBi}$ are asymptotically equivalent to each other as $n_i$ tends to infinity. In application, $\hat p_{EBi}$ and $\hat p_{HBi}$ are close to each other when $n_i$ is sufficiently large.
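This asymptotic equivalence can be illustrated numerically. The sketch below (stdlib only, illustrative names, failure fraction held at 5% as an arbitrary assumption) computes both estimates for growing sample sizes and records the gap between them.

```python
import math

def e_bayes(r, n, c):
    # Closed-form E-Bayesian estimate (Theorem 2(ii))
    return (r + 1) / (c - 1) * math.log((n + c + 1) / (n + 2))

def hier_bayes(r, n, c, steps=4000):
    # Hierarchical Bayes estimate (Theorem 3), midpoint rule on (1, c)
    log_beta = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    h = (c - 1) / steps
    num = den = 0.0
    for k in range(steps):
        b = 1 + (k + 0.5) * h
        num += b * math.exp(log_beta(r + 2, n - r + b))
        den += b * math.exp(log_beta(r + 1, n - r + b))
    return num / den

# The gap between the two estimates shrinks as the sample size grows
# (failure fraction fixed at r/n = 5%, with c = 4).
gaps = [abs(hier_bayes(n // 20, n, 4) - e_bayes(n // 20, n, 4))
        for n in (20, 200, 2000)]
```

Since both estimates equal $(r_i+1)/(n_i+b_\ast+1)$ for some $b_\ast \in (1, c)$, the gap is at most of order $(r_i+1)/n_i^2$, and the computed `gaps` decrease accordingly.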

#### 6. Application Example

Han [6] provided testing data from a type I censored life test for a type of engine, which are listed in Table 1 (time unit: hour).

Table 1: Test data of the engine.

By Theorems 2 and 3 and Table 1, we can obtain $\hat p_{EBi}$ and $\hat p_{HBi}$ for each censoring time $t_i$ and several values of $c$. Some numerical results are listed in Table 2.

Table 2: Results of $\hat p_{EBi}$ and $\hat p_{HBi}$.

From Table 2, we find that for each value of $c$, $\hat p_{EBi}$ and $\hat p_{HBi}$ are very close to each other and satisfy Theorem 4; for different values of $c$, $\hat p_{EBi}$ and $\hat p_{HBi}$ are all robust (when $c$ is large, there exists some difference). In application, the author suggests selecting the value of $c$ at the middle point of the interval $[2, 6]$, that is, $c = 4$.

When $c = 4$, some numerical results are listed in Table 3 and Figure 1.

Table 3: Results of $\hat p_{EBi}$ and $\hat p_{HBi}$.
Figure 1: Results of $\hat p_{EBi}$ and $\hat p_{HBi}$.


From Table 3 and Figure 1, we find that $\hat p_{EBi}$ and $\hat p_{HBi}$ are very close to each other, which is consistent with Theorem 4.

From Table 3, we find that the results of $\hat p_{EBi}$ and $\hat p_{HBi}$ are very close to the corresponding results of Han [6].

According to Han [6], we may assume that the lifetime of these products follows the Weibull distribution with distribution function
$$F(t) = 1 - \exp\!\left\{-\left(\frac{t}{\eta}\right)^{\beta}\right\},\qquad t > 0, \tag{23}$$
where $\beta$ is the shape parameter and $\eta$ is the scale parameter.

According to Mao and Luo [7], linearizing (23) as $\ln[-\ln(1 - F(t))] = \beta\ln t - \beta\ln\eta$, the least squares estimates of $\beta$ and $\eta$ are, respectively,
$$\hat\beta = \frac{\sum_{i=1}^{m}(x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^{m}(x_i - \bar x)^2},\qquad \hat\eta = \exp\!\left(\bar x - \frac{\bar y}{\hat\beta}\right), \tag{24}$$
where $x_i = \ln t_i$, $y_i = \ln[-\ln(1 - \hat p_i)]$, $\bar x = \frac{1}{m}\sum_{i=1}^{m} x_i$, $\bar y = \frac{1}{m}\sum_{i=1}^{m} y_i$, $\hat p_i$ is the estimate of $p_i$, and $i = 1, 2, \dots, m$.

According to (24), we can obtain the estimate of the reliability at moment $t$,
$$\hat R(t) = \exp\!\left\{-\left(\frac{t}{\hat\eta}\right)^{\hat\beta}\right\}, \tag{25}$$
where $\hat\beta$ and $\hat\eta$ are given by (24).
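The curve-fitting step can be sketched as follows. This is a minimal sketch assuming the linearization $\ln[-\ln(1-p)] = \beta\ln t - \beta\ln\eta$ of the Weibull distribution function; the function and variable names are illustrative, not from the paper.

```python
import math

def weibull_ls_fit(times, p_hats):
    """Least-squares estimates of the Weibull shape (beta) and scale (eta)
    on the linearized scale y = ln(-ln(1 - p)), x = ln t."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - p)) for p in p_hats]
    k = len(xs)
    x_bar = sum(xs) / k
    y_bar = sum(ys) / k
    beta = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
    eta = math.exp(x_bar - y_bar / beta)
    return beta, eta

def reliability(t, beta, eta):
    """Estimated reliability R(t) = exp(-(t / eta)**beta)."""
    return math.exp(-((t / eta) ** beta))
```

Feeding in the E-Bayesian (or hierarchical Bayesian) failure probability estimates at the censoring times yields the fitted parameters, and hence $\hat R(t)$ at any moment $t$. When the input probabilities come exactly from a Weibull distribution, the fit recovers the true parameters, since the relation is exactly linear on the transformed scale.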

From (24) and Table 3, we can obtain $\hat\beta$ and $\hat\eta$. Some numerical results of $\hat\beta$ and $\hat\eta$ are listed in Table 4.

Table 4: Results of $\hat\beta$ and $\hat\eta$.

From Table 4, we find that the results of $\hat\beta$ and $\hat\eta$ are very close to the corresponding results of Han [6].

From (25) and Table 4, we can obtain the estimate of the reliability, with some numerical results of $\hat R_{EB}(t)$ and $\hat R_{HB}(t)$ listed in Table 5 and Figure 2.

Table 5: Results of $\hat R_{EB}(t)$ and $\hat R_{HB}(t)$.
Figure 2: Results of $\hat R_{EB}(t)$ and $\hat R_{HB}(t)$.

Note that $\hat R_{EB}(t)$ is the estimate of the reliability at moment $t$ computed from $\hat p_{EBi}$, and $\hat R_{HB}(t)$ is the estimate of the reliability at moment $t$ computed from $\hat p_{HBi}$.


From Table 5, we find that the results of $\hat R_{EB}(t)$ and $\hat R_{HB}(t)$ are very close to each other and to those of Han [6].

#### 7. Conclusions

This paper introduces a new method, called E-Bayesian estimation, to estimate failure probability. The author would like to put forward the following two questions for any new parameter estimation method: (1) how is the new method related to the existing ones? (2) In which respects is the new method superior to the old ones?

For the E-Bayesian estimation method, Theorem 4 gives a good answer to question (1), and, in addition, the application example shows that $\hat p_{EBi}$ and $\hat p_{HBi}$ satisfy Theorem 4.

As to question (2), from Theorems 2 and 3, we find that the expression for the E-Bayesian estimation is much simpler, whereas the expression for the hierarchical Bayesian estimation involves the Beta function and complicated integral expressions, which are often not easy to evaluate.

Reviewing the application example, we find that the E-Bayesian estimation method is both efficient and easy to operate.

#### Acknowledgments

The author wishes to thank Professor Xizhi Wu, who checked the paper and gave the author very helpful suggestions. This work was supported in part by the Fujian Province Natural Science Foundation (2009J01001), China.

#### References

1. D. V. Lindley and A. F. M. Smith, “Bayes estimates for the linear model,” Journal of the Royal Statistical Society, Series B, vol. 34, pp. 1–41, 1972.
2. M. Han, “The structure of hierarchical prior distribution and its applications,” Chinese Operations Research and Management Science, vol. 6, no. 3, pp. 31–40, 1997.
3. T. Ando and A. Zellner, “Hierarchical Bayesian analysis of the seemingly unrelated regression and simultaneous equations models using a combination of direct Monte Carlo and importance sampling techniques,” Bayesian Analysis, vol. 5, no. 1, pp. 65–96, 2010.
4. S. P. Brooks, “Markov chain Monte Carlo method and its application,” Journal of the Royal Statistical Society D, vol. 47, no. 1, pp. 69–100, 1998.
5. C. Andrieu and J. Thoms, “A tutorial on adaptive MCMC,” Statistics and Computing, vol. 18, no. 4, pp. 343–373, 2008.
6. M. Han, “E-Bayesian estimation of failure probability and its application,” Mathematical and Computer Modelling, vol. 45, no. 9-10, pp. 1272–1279, 2007.
7. S. S. Mao and C. B. Luo, “Reliability analysis of zero-failure data,” Chinese Mathematical Statistics and Applied Probability, vol. 4, no. 4, pp. 489–506, 1989.
8. J. O. Berger, Statistical Decision Theory and Bayesian Analysis, Springer, New York, NY, USA, 1985.