Abstract

We develop a Bayesian estimation procedure for the flexible Weibull distribution under the Type-II censoring scheme, assuming Jeffreys' scale-invariant (noninformative) and gamma (informative) priors for the model parameters. Interval estimation for the model parameters has been performed through normal approximation, bootstrap, and highest posterior density (HPD) procedures. Further, we derive the predictive posteriors and the corresponding predictive survival functions for future observations based on Type-II censored data from the flexible Weibull distribution. Since the predictive posteriors are not available in closed form, we propose to use Markov chain Monte Carlo (MCMC) methods to approximate the posteriors of interest. The performance of the Bayes estimators has also been compared with that of the classical estimators of the model parameters through a Monte Carlo simulation study. A real data set representing the times between failures of secondary reactor pumps has been analysed for illustration purposes.

1. Introduction

In reliability/survival analysis, life test experiments are generally performed to assess the life expectancy of manufactured products or items/units before the products are released to the market. In practice, however, the experimenters are not able to observe the failure times of all the units placed on a life test, due to time and cost constraints or other unforeseen reasons. Data obtained from such experiments are called censored samples. Keeping time and cost constraints in mind, many types of censoring schemes have been discussed in the statistical literature, such as Type-I, Type-II, and progressive censoring schemes. In this paper, the Type-II censoring scheme is considered. In a Type-II censoring scheme, the life test is terminated as soon as a prespecified number, say r, of units have failed. Therefore, out of n units put on test, only the first r failures will be observed. The data obtained from such a life test will be referred to as a Type-II censored sample.

Prediction of the lifetimes of future items based on censored data is an interesting and valuable topic for researchers, engineers, and reliability practitioners. In predictive inference, we can infer about the lifetimes of future items using the observed data. The future prediction problem can be classified into two types: (1) the one-sample prediction problem and (2) the two-sample prediction problem. In the one-sample prediction problem, the variable to be predicted comes from the same sequence of variables as the observed data and is dependent on the informative sample. In the second type, the variable to be predicted comes from another independent future sample. Reference [1] developed the Bayesian approach to the prediction of future observations using the concept of the Bayesian predictive posterior distribution. Many authors have focussed on the problem of Bayesian prediction of future observations based on various types of censored data from different lifetime models (see [2–9] and the references cited therein).

The flexible Weibull distribution is a two-parameter generalization of the Weibull model proposed by [10]. The probability density function (PDF) of the flexible Weibull distribution is given by

$$f(x;\alpha,\beta)=\left(\alpha+\frac{\beta}{x^{2}}\right)e^{\alpha x-\beta/x}\exp\left(-e^{\alpha x-\beta/x}\right),\quad x>0,\ \alpha,\beta>0. \tag{1}$$

The corresponding cumulative distribution function (CDF) is given by

$$F(x;\alpha,\beta)=1-\exp\left(-e^{\alpha x-\beta/x}\right),\quad x>0. \tag{2}$$
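
For computational purposes, a minimal R sketch of the density, distribution, quantile, and random-generation functions implied by (1) and (2) is given below; the function names dfwd, pfwd, qfwd, and rfwd are our own and not part of [10].

```r
## Flexible Weibull distribution of [10]: density, CDF, quantile,
## and random generation by inversion (function names are our own)
dfwd <- function(x, alpha, beta) {
  z <- alpha * x - beta / x
  (alpha + beta / x^2) * exp(z - exp(z))
}
pfwd <- function(x, alpha, beta) {
  1 - exp(-exp(alpha * x - beta / x))
}
qfwd <- function(p, alpha, beta) {
  ## solve alpha*x - beta/x = log(-log(1 - p)); take the positive root
  k <- log(-log(1 - p))
  (k + sqrt(k^2 + 4 * alpha * beta)) / (2 * alpha)
}
rfwd <- function(n, alpha, beta) qfwd(runif(n), alpha, beta)
```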

They have shown that this distribution is able to model various ageing classes of lifetime distributions, including IFR, IFRA, and MBT (modified bathtub). They have also assessed the goodness of fit of this distribution to the data on times between failures of secondary reactor pumps, in comparison with various extensions of the Weibull model, and found that it gives a better fit. Therefore, this distribution can be considered an alternative to the various well-known generalizations of the Weibull distribution. Some statistical properties and the classical estimation procedure for the flexible Weibull distribution have been discussed by [10]. It is to be mentioned here that this distribution has not been considered under a Bayesian setup in the earlier literature.

It is well known that the squared error loss function is the most widely used loss function in Bayesian analysis. This loss is well justified in the classical paradigm on the grounds of the minimum variance unbiased estimation procedure, and in most cases Bayesian estimation procedures have been developed under this loss function. For Bayesian estimation, we also need to assume a prior distribution for the model parameters involved in the analysis. In this paper, the Bayesian analysis has been performed under the squared error loss function (SELF) assuming both Jeffreys' scale-invariant and gamma priors.

A major difficulty in the implementation of the Bayesian procedure is obtaining the posterior distribution. The process often requires integrations that are very difficult to evaluate, especially when dealing with complex and high-dimensional models. In such situations, Markov chain Monte Carlo (MCMC) methods, namely, Gibbs sampling [11] and the Metropolis-Hastings (MH) algorithm [12, 13], are very useful for simulating deviates from the posterior density and produce good approximate results.

The rest of the paper is organized as follows. In Section 2, we discuss the point estimation procedure for the parameters of the considered model under the classical setup. The confidence/bootstrap intervals are constructed in Section 3. In Section 4, we develop the Bayesian estimation procedure under the assumption that the model parameters have gamma prior density functions. We also derive the one-sample and two-sample predictive densities and the corresponding survival functions of the future observables in Sections 5 and 6, respectively. The predictive bounds of future observations under one-sample and two-sample prediction are also discussed in the respective sections. For comparing the performance of the classical and Bayesian estimation procedures, a Monte Carlo simulation study is presented in Section 7. To check the applicability of the proposed methodologies, a real data set is analysed in Section 8. Finally, conclusions are given in Section 9.

2. Classical Estimation

Let n units with IID lifetimes following (1) be placed on a life test, and let $x_{(1)} < x_{(2)} < \cdots < x_{(r)}$ be the ordered sample of the first r failures obtained under the Type-II censoring scheme. Then, the likelihood function for such a censored sample can be defined as

$$L(\alpha,\beta\mid\underline{x})=\frac{n!}{(n-r)!}\prod_{i=1}^{r}f\left(x_{(i)}\right)\left[1-F\left(x_{(r)}\right)\right]^{n-r}. \tag{3}$$

Substituting (1) and (2) in (3), we have

The log-likelihood function is given by

The MLEs $\hat{\alpha}$ and $\hat{\beta}$ of $\alpha$ and $\beta$ can be obtained as the simultaneous solution of the following two nonlinear equations:

It can be seen that the above equations cannot be solved explicitly, and one needs an iterative method to solve them. Here, we propose the use of the fixed-point iteration method, which can be routinely applied as follows:

Equation (6) can be rewritten as

Similarly, from (7), we have

where

The following steps are followed to obtain the solution of the normal equations (6) and (7).

Step 1. Start with initial values, say $\alpha_0$ and $\beta_0$.

Step 2. By using $\alpha_0$ and $\beta_0$, obtain an updated value $\alpha_1$ from (8).

Step 3. Then, obtain an updated value $\beta_1$ from (9).

Step 4. If $|\alpha_1 - \alpha_0| < \epsilon$ and $|\beta_1 - \beta_0| < \epsilon$, where $\epsilon$ is some preassigned tolerance limit, then $(\alpha_1, \beta_1)$ will be the desired solution of (8) and (9).

Step 5. If the tolerance criterion in Step 4 is not met, then set $\alpha_0 = \alpha_1$ and $\beta_0 = \beta_1$ and repeat Steps 2–5 until the tolerance limit is achieved.
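
The rearranged update equations (8) and (9) are specific to the derivation above; as a hedged alternative, the following R sketch obtains the MLEs by maximising the Type-II censored log-likelihood from (3) numerically with optim, which can serve as a cross-check on the fixed-point scheme. The censored sample xcens (the first r ordered failures out of n units) and the starting values c(0.5, 0.5) are illustrative assumptions.

```r
## negative log-likelihood for a Type-II censored sample from (1)-(3)
## xcens: first r ordered failure times; n: number of units on test
negloglik <- function(par, xcens, n) {
  alpha <- par[1]; beta <- par[2]
  if (alpha <= 0 || beta <= 0) return(Inf)
  r <- length(xcens)
  z <- alpha * xcens - beta / xcens
  ll <- sum(log(alpha + beta / xcens^2) + z - exp(z)) -  # observed failures
        (n - r) * exp(z[r])                              # censored units: (n-r)*log S(x_(r))
  -ll
}
## illustrative starting values; xcens and n are assumed to be available
fit <- optim(c(0.5, 0.5), negloglik, xcens = xcens, n = n, hessian = TRUE)
mle <- fit$par   # (alpha.hat, beta.hat)
```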

3. Confidence Intervals

3.1. Asymptotic Confidence Intervals

The exact distribution of the MLEs cannot be obtained explicitly. Therefore, the asymptotic properties of MLEs can be used to construct confidence intervals for the parameters. Under some regularity conditions, the MLEs $(\hat{\alpha}, \hat{\beta})$ are approximately bivariate normal with mean $(\alpha, \beta)$ and variance-covariance matrix $I^{-1}(\hat{\alpha}, \hat{\beta})$, where $I(\hat{\alpha}, \hat{\beta})$ is the observed Fisher information matrix and is defined as

where

The diagonal elements of $I^{-1}(\hat{\alpha}, \hat{\beta})$ provide the asymptotic variances of the parameters $\alpha$ and $\beta$, respectively. A two-sided $100(1-\gamma)\%$ normal approximation confidence interval for $\alpha$ can be obtained as

Similarly, a two-sided $100(1-\gamma)\%$ normal approximation confidence interval for $\beta$ can be obtained as

where $z_{\gamma/2}$ is the upper $\gamma/2$ percentile point of the standard normal variate (SNV).
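
Assuming the optim fit sketched above (with hessian = TRUE), the observed information matrix can be approximated by the Hessian of the negative log-likelihood at the MLE, and the normal approximation intervals follow from its inverse; the 95% level is an assumption.

```r
vcov.hat <- solve(fit$hessian)       # inverse of observed information matrix
se <- sqrt(diag(vcov.hat))           # asymptotic standard errors
z975 <- qnorm(0.975)                 # SNV percentile for a 95% interval
ci.alpha <- mle[1] + c(-1, 1) * z975 * se[1]
ci.beta  <- mle[2] + c(-1, 1) * z975 * se[2]
```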

3.2. Bootstrap Confidence Intervals

In this subsection, we discuss another method for obtaining confidence intervals, proposed by [14]. They developed a computer-based technique that can be routinely applied without heavy theoretical considerations. The bootstrap method is very useful when the normality assumption is invalid. The computational algorithm for the bootstrap procedure is given as follows:

Step 1. Generate a sample of size $n$ from (1) by using the inversion method and compute the MLEs $\hat{\alpha}$ and $\hat{\beta}$. The estimated distribution function is then given by $\hat{F}(x) = 1 - \exp\left(-e^{\hat{\alpha}x - \hat{\beta}/x}\right)$.

Step 2. Generate a bootstrap sample of size $n$ from $\hat{F}$ and obtain the bootstrap estimates $(\hat{\alpha}^{*}, \hat{\beta}^{*})$ of $(\alpha, \beta)$ using this bootstrap sample.

Step 3. Repeat Step 2 $B$ times to obtain the sequences of bootstrap estimates $\hat{\alpha}^{*}_{1}, \ldots, \hat{\alpha}^{*}_{B}$ and $\hat{\beta}^{*}_{1}, \ldots, \hat{\beta}^{*}_{B}$.

Step 4. Let $\hat{\theta}^{*}_{(1)} \le \hat{\theta}^{*}_{(2)} \le \cdots \le \hat{\theta}^{*}_{(B)}$ be the ordered values of the bootstrap estimates of a parameter $\theta$, and let $\hat{G}$ denote their empirical distribution function (EDF). The two-sided $100(1-\gamma)\%$ boot-p confidence interval for $\theta$ is then given by $\left(\hat{G}^{-1}(\gamma/2),\ \hat{G}^{-1}(1-\gamma/2)\right)$. By using the above definition with $\theta = \alpha$ and $\theta = \beta$, the two-sided boot-p confidence intervals for $\alpha$ and $\beta$ are obtained, respectively.

Step 5. The bootstrap measure of symmetry can be defined as

Using the above formula, we can also easily obtain the measure of symmetry (shape) for $\alpha$ and $\beta$. For standard normal approximation confidence intervals, the shape always equals one.
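
A hedged R sketch of the boot-p steps above is given below, reusing rfwd, negloglik, and the MLE fit from the earlier sketches; the number of replications B = 1000, the 95% level, and the form of the shape measure (written here as the ratio of the upper to the lower distance of the percentile limits from the MLE, which equals one for symmetric intervals) are assumptions on our part.

```r
B <- 1000
r <- length(xcens)
boot.est <- matrix(NA, B, 2)
for (b in 1:B) {
  ## parametric bootstrap: resample n lifetimes at the MLE, re-censor at r failures
  xb <- sort(rfwd(n, mle[1], mle[2]))[1:r]
  boot.est[b, ] <- optim(mle, negloglik, xcens = xb, n = n)$par
}
## boot-p (percentile) confidence intervals
ci.alpha.boot <- quantile(boot.est[, 1], c(0.025, 0.975))
ci.beta.boot  <- quantile(boot.est[, 2], c(0.025, 0.975))
## a shape (symmetry) measure: equals one for symmetric intervals
shape.alpha <- (ci.alpha.boot[2] - mle[1]) / (mle[1] - ci.alpha.boot[1])
shape.beta  <- (ci.beta.boot[2]  - mle[2]) / (mle[2] - ci.beta.boot[1])
```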

4. Bayes Estimation

In the Bayesian scenario, we need to assume a prior distribution for the unknown model parameters in order to take the parameter uncertainty into account. The gamma prior densities for $\alpha$ and $\beta$ are given as

Further, it is assumed that the parameters $\alpha$ and $\beta$ are independent. Therefore, the joint prior of $\alpha$ and $\beta$ is given by

where $a_1$, $b_1$, $a_2$, and $b_2$ are the hyperparameters. Then, the joint posterior PDF of $\alpha$ and $\beta$ can be readily defined as

If $\phi(\alpha, \beta)$ is a function of $\alpha$ and $\beta$, then the Bayes estimate of $\phi(\alpha, \beta)$ under SELF is given by

The above expression cannot be obtained in a nice closed form. The evaluation of the posterior means of the parameters is complicated, as each is the ratio of two intractable integrals. In such situations, Markov chain Monte Carlo (MCMC) methods, namely, Gibbs sampling techniques, can be effectively used. For implementing the Gibbs algorithm, the full conditional posterior densities of $\alpha$ and $\beta$ are given by

The simulation algorithm consists of the following steps.

Step 1. Start with the initial values $(\alpha_0, \beta_0)$ of $(\alpha, \beta)$ and set $i = 1$.

Step 2. Using the current values $(\alpha_{i-1}, \beta_{i-1})$, generate candidate points $\alpha^{*}$ and $\beta^{*}$ from the proposal densities $q(\alpha^{*} \mid \alpha_{i-1})$ and $q(\beta^{*} \mid \beta_{i-1})$, where $q(\cdot \mid \cdot)$ is the probability of returning a candidate value given the previous value.

Step 3. Generate a uniform variate $u$ on the range (0, 1); that is, $u \sim U(0, 1)$.

Step 4. Calculate the ratio of the full conditional posterior density of $\alpha$ at the candidate point and at the previous point:

Step 5. If $u$ is less than or equal to this ratio, accept the candidate point, that is, set $\alpha_i = \alpha^{*}$; otherwise, set $\alpha_i = \alpha_{i-1}$.

Step 6. Similarly, as in Step 4, calculate the corresponding ratio for $\beta$:

Step 7. If $u$ is less than or equal to this ratio, accept the candidate point, that is, set $\beta_i = \beta^{*}$; otherwise, set $\beta_i = \beta_{i-1}$.

Step 8. Repeat Steps 2–7 for $i = 1, 2, \ldots, M$ and obtain the MCMC sample $(\alpha_i, \beta_i)$, $i = 1, 2, \ldots, M$.

Note that if the candidate point is independent of the previous point, that is, if $q(\alpha^{*} \mid \alpha_{i-1}) = q(\alpha^{*})$, then the M-H algorithm is called the independence M-H sampler. The acceptance function becomes
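
A minimal R sketch of an MH-within-Gibbs sampler for the posterior above is given below, assuming gamma priors in the shape-rate parametrisation; the hyperparameter values, the random-walk proposal standard deviation 0.1, the chain length, and the burn-in of 1000 iterations are illustrative assumptions, and xcens, n, and mle are reused from the earlier sketches.

```r
## gamma prior hyperparameters (illustrative values only; see Section 7)
a1 <- 4; b1 <- 2; a2 <- 4; b2 <- 2

## log posterior (up to an additive constant)
log.post <- function(alpha, beta, xcens, n) {
  if (alpha <= 0 || beta <= 0) return(-Inf)
  r <- length(xcens)
  z <- alpha * xcens - beta / xcens
  loglik <- sum(log(alpha + beta / xcens^2) + z - exp(z)) - (n - r) * exp(z[r])
  logprior <- (a1 - 1) * log(alpha) - b1 * alpha +
              (a2 - 1) * log(beta)  - b2 * beta
  loglik + logprior
}

M <- 11000                           # chain length; first 1000 used as burn-in
draws <- matrix(NA, M, 2, dimnames = list(NULL, c("alpha", "beta")))
cur <- mle                           # start the chain at the MLE
for (i in 1:M) {
  for (j in 1:2) {
    prop <- cur
    prop[j] <- rnorm(1, cur[j], 0.1) # random-walk normal proposal (sd assumed)
    logr <- log.post(prop[1], prop[2], xcens, n) -
            log.post(cur[1],  cur[2],  xcens, n)
    if (log(runif(1)) < logr) cur <- prop   # accept with probability min(1, ratio)
  }
  draws[i, ] <- cur
}
post <- draws[-(1:1000), ]           # discard burn-in
bayes.est <- colMeans(post)          # Bayes estimates under SELF
```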

The Bayes estimates of the parameters under SELF can be obtained as the means of the samples generated from the posterior densities by using the algorithm discussed above. The formulae are given by

where $M_0$ is the burn-in period of the Markov chain. The HPD credible intervals for $\alpha$ and $\beta$ can be constructed by using the algorithm given in [15]. Let $\alpha_{(1)} \le \alpha_{(2)} \le \cdots$ and $\beta_{(1)} \le \beta_{(2)} \le \cdots$ be the corresponding ordered MCMC samples of $\alpha$ and $\beta$. Then, construct all the $100(1-\gamma)\%$ credible intervals of $\alpha$ and $\beta$ as

Here, $[\cdot]$ denotes the largest integer less than or equal to its argument. Then, the HPD credible interval is the interval with the shortest length.
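
A sketch of this construction applied to the MCMC output post above: among all credible intervals formed from consecutive ordered draws, the shortest one is reported (the 95% level is an assumption).

```r
hpd <- function(theta, gamma = 0.05) {
  theta <- sort(theta)
  M <- length(theta)
  k <- floor((1 - gamma) * M)        # number of draws spanned by each interval
  lower <- theta[1:(M - k)]
  upper <- theta[(1 + k):M]
  j <- which.min(upper - lower)      # index of the shortest interval
  c(lower[j], upper[j])
}
hpd(post[, "alpha"]); hpd(post[, "beta"])
```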

5. One-Sample Prediction

In a Type-II censoring scheme, only a few items (say $r$) out of all the units/items (say $n$) under study are observed, due to time and cost constraints. In practice, the experimenter may be interested in knowing the lifetimes of the $(n-r)$ removed surviving units on the basis of the $r$ informative units. In such a situation, the one-sample prediction technique may be helpful in giving an idea about the expected life of the removed units. Let $x_{(1)} < x_{(2)} < \cdots < x_{(r)}$ be the observed censored sample and let $y_{(1)} < y_{(2)} < \cdots < y_{(n-r)}$ be the unobserved future ordered sample from the same population, representing the failure lifetimes of the remaining surviving units. From [16], the conditional PDF of the $j$th future failure $y_{(j)}$ given the observed data can be obtained as

Putting (1) and (2) in (27), we get

Then, the predictive posterior density of the future observables under the Type-II censoring scheme is given by

Equation (29) cannot be evaluated analytically. Therefore, to obtain a consistent estimator for it, the MCMC sample obtained through the Gibbs algorithm is used. The consistent estimate is obtained as

To obtain the estimates of the future sample, we use the M-H algorithm to draw samples from (29). Similarly, from (24), we can estimate the future observations under SELF as the means of the simulated samples drawn from (29). The survival function of the future sample can simply be defined as

We can also obtain two-sided prediction intervals for the future observations by solving the following two equations:

The prediction intervals can be obtained by using any suitable iterative procedure, as the above equations cannot be solved directly.
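
Rather than sampling from (29) directly with the M-H algorithm, a hedged simulation sketch is to draw, for each retained posterior draw, the $n-r$ unobserved lifetimes from the fitted distribution truncated at the largest observed failure $x_{(r)}$ (by inversion, using the pfwd and qfwd functions above) and to read off predictive means and equal-tailed bounds from the resulting sample. Here post, xcens, and n are reused from the earlier sketches, and the 95% level and equal-tailed bounds (in place of solving the equations above) are assumptions.

```r
## one-sample prediction: simulate the n - r remaining lifetimes,
## conditional on survival beyond the largest observed failure x_(r)
r  <- length(xcens)
xr <- max(xcens)
pred <- t(apply(post, 1, function(p) {
  v <- runif(n - r, min = pfwd(xr, p[1], p[2]), max = 1)  # truncated inversion
  sort(qfwd(v, p[1], p[2]))                               # future order statistics
}))
pred.mean <- colMeans(pred)                               # SELF estimates of x_(r+j)
pred.int  <- apply(pred, 2, quantile, c(0.025, 0.975))    # equal-tailed 95% bounds
```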

6. Two-Sample Prediction

In some situations, only the lifetime model is given and no information about the future sample is available from the observed one; the future sample is then assumed to be independent of the informative sample. This leads to the two-sample prediction problem; that is, the experimenter is interested in the $s$th failure time in a future sample of size $N$ following the same lifetime distribution. From [16], the PDF of the $s$th order statistic is given by

Putting (1) and (2) in (33), we get

The predictive posterior density of the future observables under the Type-II censoring scheme is given by

Equation (35) cannot be evaluated analytically. Therefore, to obtain a consistent estimator for it, the MCMC sample obtained through the Gibbs algorithm is used. The consistent estimate is given by

To obtain the estimates of the future sample, the M-H algorithm is again used to draw samples from (35). Similarly, from (24), we can estimate the future observations under SELF as the means of the simulated samples drawn from (35). The survival function of the future sample can simply be defined as

We can also obtain two-sided prediction intervals for the future observations by solving the following two nonlinear equations:

The prediction intervals can be obtained by using any suitable iterative procedure, as the above equations cannot be solved directly.
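
A similarly hedged sketch for two-sample prediction: for each retained posterior draw, a complete future sample of size $N$ is simulated with rfwd and its $s$th order statistic is recorded; N = 23 and s = 1 are illustrative choices only.

```r
## two-sample prediction of the s-th order statistic in a future sample of size N
N <- 23; s <- 1                                        # illustrative values
y.s <- apply(post, 1, function(p) sort(rfwd(N, p[1], p[2]))[s])
c(mean = mean(y.s), quantile(y.s, c(0.025, 0.975)))    # point estimate and bounds
```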

7. Simulation Study

This section presents simulation results to compare the performance of the classical and Bayesian estimation procedures under different Type-II censoring schemes and parameter combinations. The comparison between the MLEs and the Bayes estimators of the model parameters is made in terms of their mean square errors (MSEs). We have also compared the average lengths of the asymptotic confidence intervals, bootstrap intervals, and HPD credible intervals. For this purpose, we generate samples of three sizes, small, medium (30), and large (50), from (1) for fixed values of $\alpha$ and $\beta$. We have considered different Type-II censoring schemes for each sample size so that the sample contains 100%, 80%, and 60% of the available information.

The choice of the hyperparameters is the main issue of concern in the Bayesian analysis. Reference [17] argues that when prior information is not available in a compact form, it is better to perform the Bayesian analysis under the assumption of a noninformative prior. If we take the hyperparameters equal to zero, then the posterior reduces to the one obtained under Jeffreys' scale-invariant prior. For the choice of hyperparameters under subjectivism, we have taken the prior means equal to the true values of the parameters with varying variances. The prior variance indicates the confidence in our prior guess. A large prior variance shows less confidence in the prior guess, and the resulting prior distribution is relatively flat. On the other hand, a small prior variance indicates greater confidence in the prior guess. In this study, we have taken the prior variance equal to 1 (small) and 8 (large), and we call these the Gamma-1 and Gamma-2 priors, respectively.
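
For concreteness, if the gamma prior for a parameter is taken in the shape-rate form $\mathrm{Gamma}(a, b)$ (an assumption on our part, since the prior is described above only through its hyperparameters), then matching the prior mean to a true value $\mu$ and the prior variance to $\sigma^{2}$ gives

$$\frac{a}{b}=\mu, \qquad \frac{a}{b^{2}}=\sigma^{2} \quad\Longrightarrow\quad b=\frac{\mu}{\sigma^{2}}, \qquad a=\frac{\mu^{2}}{\sigma^{2}}.$$

For example, an illustrative prior mean $\mu = 2$ with $\sigma^{2} = 1$ (Gamma-1) gives $a = 4$ and $b = 2$, while $\sigma^{2} = 8$ (Gamma-2) gives $a = 0.5$ and $b = 0.25$.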

For obtaining the Bayes estimates, we generate posterior deviates for the parameters $\alpha$ and $\beta$ using the algorithm discussed in Section 4. The first thousand MCMC iterations (burn-in period) have been discarded from the generated sequences. We have also checked the convergence of the sequences of $\alpha$ and $\beta$ to their stationary distributions by using different starting values. It was observed that all the Markov chains reached the stationary condition very quickly.

For the unknown model parameters, we have computed the MLEs and the Bayes estimates under informative and noninformative priors, along with their asymptotic confidence/bootstrap/HPD intervals. We repeat the process 1000 times, and the average estimates, the corresponding mean square errors (MSEs) of the estimators, and the average confidence/bootstrap/HPD intervals are recorded. The simulation results are summarized in Tables 1, 2, 3, 4, 5, 6, 7, and 8. All the necessary computational algorithms are coded in the R environment [18], and the codes are available with the authors.

On the basis of the results summarised in Tables 1–8, the following conclusions can be drawn:
(i) The MSE of all the estimators decreases as the sample size increases (i.e., as $n$ and $r$ increase) for fixed values of $\alpha$ and $\beta$.
(ii) The MSE of all the estimators increases with increasing values of the parameters for any fixed values of $n$ and $r$.
(iii) The MSE of the maximum likelihood and Bayes estimators of $\alpha$ increases with increasing $\beta$ for given values of $n$, $r$, and $\alpha$.
(iv) The MSE of the maximum likelihood and Bayes estimators of $\beta$ increases with increasing $\alpha$ for given values of $n$, $r$, and $\beta$.
(v) The Bayes estimators have smaller risks than the classical estimators for estimating the parameters in all the considered cases. Moreover, the Bayes estimates obtained under the Gamma-1 prior are more efficient than those obtained under Jeffreys' and Gamma-2 priors. This indicates that the Bayesian procedure with accurate prior information provides more precise estimates.
(vi) The width of the HPD credible intervals is smaller than the width of the asymptotic confidence/bootstrap intervals.
(vii) In all the cases, the bootstrap procedure provides a larger width of the confidence intervals for the parameters, and the wide range of the confidence intervals helps to cover the asymmetry.
(viii) The shape is greater than one in all the considered cases, which indicates that the distribution of the maximum likelihood estimators is positively skewed and becomes more skewed with decreasing sample size.

8. Real Data Analysis

In this section, we analyse the data set of times between failures of secondary reactor pumps. This data set was originally discussed in [19]. The chance of failure of a secondary reactor pump increases in the early stage of the experiment and decreases thereafter. It has been shown by [10] that the flexible Weibull distribution is a well-fitted model for this data set. The times between failures of 23 secondary reactor pumps are as follows: 2.160, 0.150, 4.082, 0.746, 0.358, 0.199, 0.402, 0.101, 0.605, 0.954, 1.359, 0.273, 0.491, 3.465, 0.070, 6.560, 1.060, 0.062, 4.992, 0.614, 5.320, 0.347, and 1.921.

For analysing this data set under the Type-II censoring scheme, we generate two artificial Type-II censored samples from the real data by considering two different values of $r$. In real applications, we have nothing in hand other than a few observations following some distribution function. Let us therefore sketch the likelihood profile with respect to the parameters. The log-likelihood function is plotted over the whole parameter space in Figure 1. The contour plots of the likelihood function are also given in Figure 2. The maximum likelihood estimates, Bayes estimates, and the corresponding confidence/bootstrap/HPD intervals are presented in Table 9 for the different values of $r$. The simulation runs and the histograms of the simulated values of $\alpha$ and $\beta$ are plotted in Figure 3.
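
As an illustration, the data can be entered and a Type-II censored subsample formed in R as follows, reusing the negloglik function sketched in Section 2; the censoring number r = 18 is an illustrative choice on our part and not necessarily one of the values used for Table 9.

```r
## times between failures of 23 secondary reactor pumps [19]
pumps <- c(2.160, 0.150, 4.082, 0.746, 0.358, 0.199, 0.402, 0.101,
           0.605, 0.954, 1.359, 0.273, 0.491, 3.465, 0.070, 6.560,
           1.060, 0.062, 4.992, 0.614, 5.320, 0.347, 1.921)
n <- length(pumps)                 # 23
r <- 18                            # illustrative censoring number
xcens <- sort(pumps)[1:r]          # Type-II censored sample: first r failures
fit <- optim(c(0.5, 0.5), negloglik, xcens = xcens, n = n, hessian = TRUE)
```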

A summary of the one-sample predictive densities of the future observations is presented in Table 10 for different values of $j$. The two-sample predictive density functions for different values of $s$ and $N$ are summarised in Table 11. From Tables 10 and 11, it can be observed that the standard error (SE) of the future observables increases as $j$ and $s$ increase for one-sample and two-sample prediction, respectively. The two-sample predictive density of the first future ordered observation is plotted in Figure 4. The density of the $j$th future observation in the case of one-sample prediction is plotted in Figure 5.

The expressions (31) and (37) do not seem possible to compute analytically. Therefore, we propose to use a Monte Carlo technique to evaluate these two expressions. To compute the integral part

appearing in the expressions, we follow the steps given below.

Step 1. The approximate value of the integral can be obtained as

Similarly,

denotes the expectation of the function with respect to the joint posterior PDF of $\alpha$ and $\beta$.

Step 2. Simulate $\alpha$ and $\beta$ from (19) by using the algorithm discussed in Section 4.

Step 3. Generate a sample of the required size from the uniform density on the corresponding range.

Step 4. Similarly, generate a second sample of the required size from the uniform density on its corresponding range.

Step 5. Using the samples generated above, calculate the approximate values of the integrals as
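
Since the exact expressions behind (31) and (37) are not reproduced here, the following hedged sketch instead estimates the two predictive survival curves directly from the simulated predictive samples pred and y.s of the earlier sketches; the time grid is an arbitrary choice.

```r
## Monte Carlo estimates of the predictive survival functions
t.grid <- seq(0, 10, by = 0.1)
S.one <- sapply(t.grid, function(t) mean(pred[, 1] > t))  # first future failure, one-sample
S.two <- sapply(t.grid, function(t) mean(y.s > t))        # s-th order statistic, two-sample
plot(t.grid, S.one, type = "l", xlab = "t", ylab = "Predictive survival")
lines(t.grid, S.two, lty = 2)
```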

By using the above-discussed algorithm, the survival functions (31) and (37) of the future samples are plotted in Figures 6 and 7, respectively.

9. Conclusion

In this paper, we have considered the problem of estimation and prediction for the flexible Weibull distribution in the presence of Type-II censored samples. We have found that the Bayesian procedure provides more precise estimates of the unknown parameters of the flexible Weibull model, with smaller mean square errors. The width of the HPD intervals is smaller than that of the asymptotic and bootstrap confidence intervals. Prediction has been applied in medicine, engineering, business, and other areas, and the Bayesian approach using MCMC methods can be effectively used to solve prediction problems. The methodology developed in this paper will be very useful to researchers, engineers, and statisticians in situations where such life tests are needed and especially where the flexible Weibull distribution is used.

Acknowledgment

The authors would like to thank the editor and the reviewers for their valuable suggestions to improve this manuscript. The third author (Vikas Kumar Sharma) thanks the University Grants Commission (UGC), New Delhi, for financial assistance.