Abstract

We obtain the maximum likelihood and Bayes estimators of the parameters of the generalized inverted exponential distribution under the progressive type-II censoring scheme with binomial removals. The Bayesian estimation procedure is discussed under the squared error and general entropy loss functions, with gamma prior distributions on the model parameters. The performances of the maximum likelihood and Bayes estimators are compared in terms of their risks through a simulation study. Further, we derive an expression for the expected experiment time needed to obtain a progressively censored sample with binomial removals consisting of a specified number of observations from the generalized inverted exponential distribution. An illustrative example based on a real data set is also given.

1. Introduction

The one-parameter exponential distribution is the simplest and most widely discussed distribution in the context of life testing. It plays an important role in the development of the theory: any new theory can be easily illustrated with the exponential distribution owing to its mathematical tractability; see Barlow and Proschan [1] and Leemis [2]. However, its applicability is restricted by its constant hazard rate, since hardly any item/system has a time-independent hazard rate. Therefore, a number of generalizations of the exponential distribution have been proposed in the literature for situations where the exponential distribution does not suit the real problem. For example, the gamma distribution (a sum of independent exponential variates) and the Weibull distribution (a power transformation of the exponential variate) are its most popular generalizations. Most of these generalizations possess constant, nonincreasing, nondecreasing, or bathtub hazard rates.

In practical problems, however, there may be situations where the data show an inverted bathtub hazard rate (initially increasing and then decreasing, i.e., unimodal). For example, in a study of breast cancer data it was observed that mortality increases initially, reaches a peak after some time, and then declines slowly; that is, the associated hazard rate is inverted bathtub shaped or, particularly, unimodal. For such data, another extension of the exponential distribution has been proposed in the statistical literature, known as the one-parameter inverse exponential or inverted exponential distribution (IED), which possesses an inverted bathtub hazard rate. Many authors have proposed the use of the IED in survival analysis; see Lin et al. [3] and Singh et al. [4]. Abouammoh and Alshingiti [5] proposed a two-parameter generalization of the IED, called the generalized inverted exponential distribution (GIED), and showed that the GIED fits a real data set better than the IED on the basis of the likelihood ratio test and the Kolmogorov-Smirnov (K-S) statistic. They also discussed the maximum likelihood and least squares methods for estimating the unknown parameters of the GIED. Krishna and Kumar [6] studied reliability estimation based on a progressively type-II censored sample in the classical setup.

In life testing experiments, situations do arise in which the units under study are lost or removed from the experiment while they are still alive; that is, we generally obtain censored data from life testing experiments. The loss of units may occur due to time constraints, giving type-I censored data; in a type-I censoring scheme, the number of observations is random, as the experiment is terminated at a prespecified time. Sometimes the experiment has to be terminated after a prefixed number of observations, and the data thus obtained are referred to as type-II censored data. Besides these, there are many uncontrolled causes resulting in the loss of intermediate observations; see Balakrishnan [7]. One such censoring procedure, named the progressive type-II censoring scheme, can be described notationally as follows. Let the lifetimes of $n$ identical units/items be studied. At the first failure $X_{1}$, called the first stage, $R_{1}$ units are removed from the remaining $n-1$ surviving units. At the second failure $X_{2}$, called the second stage, $R_{2}$ units from the remaining $n-2-R_{1}$ units are removed, and so on, till the $m$th failure is observed; that is, at the $m$th stage all the remaining $R_{m}=n-m-\sum_{j=1}^{m-1}R_{j}$ units are removed. It may be mentioned here that the number of units dropped out from the test at the $i$th stage should be less than $n-m-\sum_{j=1}^{i-1}R_{j}$ in order to ensure the availability of $m$ observations. In many practical situations, the $R_{i}$'s may be random and cannot be predetermined, for example, in clinical trials. Considering the $R_{i}$'s to be random, Yuen and Tse [8] discussed a progressive censoring scheme with binomial removals. They assumed that the number of random removals at each stage is random and follows a binomial distribution with removal probability $p$. It may be noted here that, in clinical trials, the assumption that the $R_{i}$'s are bounded in this way may look unrealistic; but in life testing experiments it should not pose any problem, as it is used only to decide the values of the $R_{i}$'s. Thus, $R_{1}$ (at the first stage) is considered to follow the binomial distribution with parameters $n-m$ and $p$, that is, $R_{1}\sim\operatorname{binomial}(n-m,\,p)$, and in the same way $R_{2}$ (at the second stage) follows $\operatorname{binomial}(n-m-r_{1},\,p)$. In general, the number of units removed at the $i$th stage follows the binomial distribution with parameters $n-m-\sum_{j=1}^{i-1}r_{j}$ and $p$, for $i=2,3,\ldots,m-1$.

For further details on progressive censoring and its developments, readers may refer to Balakrishnan [7]. The estimation of the parameters of several lifetime distributions based on progressively censored samples has been studied by many authors; see Childs and Balakrishnan [9], Balakrishnan and Kannan [10], Mousa and Jaheen [11], and Ng et al. [12]. Progressive type-II censoring with binomial removals has been considered by Tse et al. [13] for the Weibull distribution and by Wu and Chang [14] for the exponential distribution. Under progressive type-II censoring with random removals, Wu and Chang [15], Yuen and Tse [8], and Singh et al. [16] developed the estimation problem for the Pareto, Weibull, and exponentiated Pareto distributions, respectively.

The objective of this paper is to obtain the MLEs and Bayes estimators of the unknown parameters of the GIED under symmetric and asymmetric loss functions and to compare the performances of the competing estimators. Further, we investigate the expected experiment time on the basis of a numerical study. The rest of the paper is organized as follows. Section 2 provides a brief discussion of the progressive type-II censoring scheme with binomial removals. In the next section, we obtain the MLEs and Bayes estimators of the model parameters. The expression for the expected experiment time for progressively type-II censored data with binomial removals is derived in Section 4. The algorithm for simulating progressively type-II censored data with binomial removals is described in Section 5. The comparison of the MLEs and Bayes estimators is given in Section 6. In Section 7, the methodology is illustrated through a real data set. Finally, conclusions are provided in the last section.

2. The Model

The cumulative distribution function (cdf) of the GIED is
$$F(x;\alpha,\lambda)=1-\left(1-e^{-\lambda/x}\right)^{\alpha},\quad x>0,\ \alpha>0,\ \lambda>0,\qquad(1)$$
and the probability density function (pdf) is given by
$$f(x;\alpha,\lambda)=\frac{\alpha\lambda}{x^{2}}\,e^{-\lambda/x}\left(1-e^{-\lambda/x}\right)^{\alpha-1},\quad x>0.\qquad(2)$$
The survival function of $X$ is
$$S(x)=\left(1-e^{-\lambda/x}\right)^{\alpha}.\qquad(3)$$
Let $(X_{1},R_{1}),(X_{2},R_{2}),\ldots,(X_{m},R_{m})$ be the progressively type-II censored sample, where the $R_{i}$'s are random removals. For fixed values of the $R_{i}$'s, say $R_{i}=r_{i}$, the conditional likelihood function can be written as
$$L(\alpha,\lambda\mid R=r)=c\prod_{i=1}^{m}f(x_{i})\left[S(x_{i})\right]^{r_{i}},\qquad(4)$$
where $c=n(n-1-r_{1})(n-2-r_{1}-r_{2})\cdots\bigl(n-m+1-\sum_{j=1}^{m-1}r_{j}\bigr)$ and $0\le r_{i}\le n-m-\sum_{j=1}^{i-1}r_{j}$ for $i=1,2,\ldots,m-1$, with $r_{m}=n-m-\sum_{j=1}^{m-1}r_{j}$. Substituting (2) and (3) into (4), we get
$$L(\alpha,\lambda\mid R=r)=c\,\alpha^{m}\lambda^{m}\prod_{i=1}^{m}\frac{e^{-\lambda/x_{i}}}{x_{i}^{2}}\left(1-e^{-\lambda/x_{i}}\right)^{\alpha(r_{i}+1)-1}.\qquad(5)$$

As mentioned earlier, the individual units removed from the test at the $i$th stage are independent of each other, and the probability of removal is the same for all. Therefore, the number of units removed at the $i$th stage follows a binomial distribution; that is,
$$P(R_{1}=r_{1})=\binom{n-m}{r_{1}}p^{r_{1}}(1-p)^{n-m-r_{1}},\qquad(6)$$
and, for $i=2,3,\ldots,m-1$,
$$P\bigl(R_{i}=r_{i}\mid R_{i-1}=r_{i-1},\ldots,R_{1}=r_{1}\bigr)=\binom{n-m-\sum_{j=1}^{i-1}r_{j}}{r_{i}}p^{r_{i}}(1-p)^{\,n-m-\sum_{j=1}^{i}r_{j}}.\qquad(7)$$
Now, we further assume that the $R_{i}$'s are independent of the $X_{i}$'s for all $i$. Then the full likelihood function takes the following form:
$$L(\alpha,\lambda,p)=L(\alpha,\lambda\mid R=r)\,P(R=r;p),\qquad(8)$$
where
$$P(R=r;p)=P(R_{1}=r_{1})\prod_{i=2}^{m-1}P\bigl(R_{i}=r_{i}\mid R_{i-1}=r_{i-1},\ldots,R_{1}=r_{1}\bigr).\qquad(9)$$
Substituting (6) and (7) into (9), we get
$$P(R=r;p)=\frac{(n-m)!}{\bigl(n-m-\sum_{j=1}^{m-1}r_{j}\bigr)!\,\prod_{i=1}^{m-1}r_{i}!}\;p^{\sum_{i=1}^{m-1}r_{i}}\,(1-p)^{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i)r_{i}}.\qquad(10)$$
Now, using (5), (8), and (10), we can write the full likelihood function in the following form:
$$L(\alpha,\lambda,p)=L_{1}(\alpha,\lambda)\,L_{2}(p)\,L_{3},\qquad(11)$$
where
$$L_{1}(\alpha,\lambda)=\alpha^{m}\lambda^{m}\prod_{i=1}^{m}\frac{e^{-\lambda/x_{i}}}{x_{i}^{2}}\left(1-e^{-\lambda/x_{i}}\right)^{\alpha(r_{i}+1)-1},\qquad(12)$$
$$L_{2}(p)=p^{\sum_{i=1}^{m-1}r_{i}}\,(1-p)^{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i)r_{i}},\qquad(13)$$
and $L_{3}$ is a constant not involving $\alpha$, $\lambda$, or $p$.
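For later use, the following Python helpers evaluate the GIED log-density, log-survival function, and the log of $L_{1}$ in (12). This is a minimal sketch; the function names and interfaces are ours, not part of the original development.

```python
import numpy as np

def gied_logpdf(x, alpha, lam):
    # log f(x) from (2): log(alpha*lam) - 2 log x - lam/x + (alpha-1) log(1 - e^{-lam/x})
    return (np.log(alpha * lam) - 2.0 * np.log(x) - lam / x
            + (alpha - 1.0) * np.log1p(-np.exp(-lam / x)))

def gied_logsf(x, alpha, lam):
    # log S(x) from (3): alpha * log(1 - e^{-lam/x})
    return alpha * np.log1p(-np.exp(-lam / x))

def log_L1(x, r, alpha, lam):
    # log of L1 in (12): sum_i [ log f(x_i) + r_i * log S(x_i) ]
    x, r = np.asarray(x, float), np.asarray(r, float)
    return np.sum(gied_logpdf(x, alpha, lam) + r * gied_logsf(x, alpha, lam))
```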

3. Classical and Bayesian Estimation of Parameters

3.1. Maximum Likelihood Estimation

In this section, we obtain the MLEs of the parameters $\alpha$, $\lambda$, and $p$ based on progressively type-II censored data with binomial removals. We observe from (11), (12), and (13) that the likelihood function is the product of three terms, namely, $L_{1}$, $L_{2}$, and $L_{3}$. Of these, $L_{3}$ does not depend on the parameters $\alpha$, $\lambda$, and $p$; $L_{1}$ does not involve $p$ and is a function of $\alpha$ and $\lambda$ only, whereas $L_{2}$ involves $p$ only. Therefore, the MLEs of $\alpha$ and $\lambda$ can be derived by maximizing $L_{1}$. Similarly, the MLE of $p$ can be obtained by maximizing $L_{2}$.

Taking the log of both sides of (12), we have
$$\ln L_{1}=m\ln\alpha+m\ln\lambda-2\sum_{i=1}^{m}\ln x_{i}-\lambda\sum_{i=1}^{m}\frac{1}{x_{i}}+\sum_{i=1}^{m}\bigl[\alpha(r_{i}+1)-1\bigr]\ln\left(1-e^{-\lambda/x_{i}}\right).\qquad(14)$$
Thus, the MLEs of $\alpha$ and $\lambda$ can be obtained by simultaneously solving the following nonlinear normal equations:
$$\frac{\partial\ln L_{1}}{\partial\alpha}=\frac{m}{\alpha}+\sum_{i=1}^{m}(r_{i}+1)\ln\left(1-e^{-\lambda/x_{i}}\right)=0,\qquad(15)$$
$$\frac{\partial\ln L_{1}}{\partial\lambda}=\frac{m}{\lambda}-\sum_{i=1}^{m}\frac{1}{x_{i}}+\sum_{i=1}^{m}\frac{\bigl[\alpha(r_{i}+1)-1\bigr]e^{-\lambda/x_{i}}}{x_{i}\left(1-e^{-\lambda/x_{i}}\right)}=0.\qquad(16)$$
From (15), we obtain the MLE of $\alpha$ as a function of $\lambda$, say $\hat{\alpha}(\lambda)$, where
$$\hat{\alpha}(\lambda)=\frac{-m}{\sum_{i=1}^{m}(r_{i}+1)\ln\left(1-e^{-\lambda/x_{i}}\right)}.\qquad(17)$$
Putting $\hat{\alpha}(\lambda)$ in (14), we obtain the profile log-likelihood
$$g(\lambda)=\ln L_{1}\bigl(\hat{\alpha}(\lambda),\lambda\bigr).\qquad(18)$$
Therefore, the MLE of $\lambda$ can be obtained by maximizing (18) with respect to $\lambda$. Once $\hat{\lambda}$ is obtained, $\hat{\alpha}$ can be obtained from (17) as $\hat{\alpha}=\hat{\alpha}(\hat{\lambda})$. This reduces the two-dimensional problem to a one-dimensional one, which is relatively easier to solve by the fixed point iteration method. For details about the fixed point iteration method, readers may refer to Rao [17].

The log of $L_{2}$ takes the following form:
$$\ln L_{2}=\left(\sum_{i=1}^{m-1}r_{i}\right)\ln p+\left[(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i)r_{i}\right]\ln(1-p).\qquad(19)$$
The first order derivative of $\ln L_{2}$ with respect to $p$ is
$$\frac{d\ln L_{2}}{dp}=\frac{\sum_{i=1}^{m-1}r_{i}}{p}-\frac{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i)r_{i}}{1-p}.\qquad(20)$$
Setting $d\ln L_{2}/dp=0$ and solving, we get the MLE of $p$ as
$$\hat{p}=\frac{\sum_{i=1}^{m-1}r_{i}}{(m-1)(n-m)-\sum_{i=1}^{m-1}(m-i-1)r_{i}}.\qquad(21)$$
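To make the procedure concrete, the following sketch profiles out $\alpha$ via (17), maximizes the profile log-likelihood (18) numerically over $\lambda$, and evaluates the closed-form estimator (21) for $p$. It builds on the `log_L1` helper sketched after Section 2; the search bounds and interfaces are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def alpha_hat(lam, x, r):
    # Equation (17): MLE of alpha as a function of lambda.
    x, r = np.asarray(x, float), np.asarray(r, float)
    return -len(x) / np.sum((r + 1.0) * np.log1p(-np.exp(-lam / x)))

def mle_alpha_lambda(x, r, lam_bounds=(1e-6, 1e3)):
    # Maximize the profile log-likelihood (18) over lambda, then back out alpha.
    neg_profile = lambda lam: -log_L1(x, r, alpha_hat(lam, x, r), lam)
    res = minimize_scalar(neg_profile, bounds=lam_bounds, method="bounded")
    return alpha_hat(res.x, x, r), res.x     # (alpha_hat, lambda_hat)

def mle_p(r, n, m):
    # Equation (21): closed-form MLE of the removal probability p.
    r = np.asarray(r[: m - 1], float)
    i = np.arange(1, m)                      # i = 1, ..., m-1
    return r.sum() / ((m - 1) * (n - m) - np.sum((m - i - 1) * r))
```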

3.2. Bayes Estimators

In order to obtain the Bayes estimators of the parameters $\alpha$ and $\lambda$ based on progressively type-II censored data with binomial removals, we must treat the parameters $\alpha$ and $\lambda$ as random variables. Following Nassar and Eissa [18] and Kim et al. [19], we assume that they are independently distributed. The random variables $\alpha$ and $\lambda$ are assigned gamma prior distributions with respective prior pdfs
$$\pi_{1}(\alpha)=\frac{b_{1}^{a_{1}}}{\Gamma(a_{1})}\,\alpha^{a_{1}-1}e^{-b_{1}\alpha},\qquad \pi_{2}(\lambda)=\frac{b_{2}^{a_{2}}}{\Gamma(a_{2})}\,\lambda^{a_{2}-1}e^{-b_{2}\lambda},\quad\alpha,\lambda>0.\qquad(22)$$
It may be noted that the gamma priors $G(a_{1},b_{1})$ and $G(a_{2},b_{2})$ are flexible enough to cover a wide variety of the experimenter's prior beliefs. Based on the assumptions stated above, the joint prior pdf of $\alpha$ and $\lambda$ is
$$\pi(\alpha,\lambda)=\pi_{1}(\alpha)\,\pi_{2}(\lambda).\qquad(23)$$
Combining the priors given by (22) with the likelihood given by (8), we can easily obtain the joint posterior pdf of $(\alpha,\lambda)$ as
$$\pi^{*}(\alpha,\lambda\mid x)=k\,L_{1}(\alpha,\lambda)\,\pi_{1}(\alpha)\,\pi_{2}(\lambda),\qquad(24)$$
where the normalizing constant is
$$k^{-1}=\int_{0}^{\infty}\int_{0}^{\infty}L_{1}(\alpha,\lambda)\,\pi_{1}(\alpha)\,\pi_{2}(\lambda)\,d\alpha\,d\lambda;\qquad(25)$$
the factors $L_{2}(p)$ and $L_{3}$ of (11) cancel, as they do not involve $\alpha$ and $\lambda$.

Hence, the respective marginal posterior pdfs of $\alpha$ and $\lambda$ are given by
$$\pi_{1}^{*}(\alpha\mid x)=\int_{0}^{\infty}\pi^{*}(\alpha,\lambda\mid x)\,d\lambda,\qquad \pi_{2}^{*}(\lambda\mid x)=\int_{0}^{\infty}\pi^{*}(\alpha,\lambda\mid x)\,d\alpha.\qquad(26)$$
Usually the Bayes estimators are obtained under the squared error loss function (SELF)
$$L_{S}\bigl(\theta,\hat{\theta}\bigr)=\bigl(\hat{\theta}-\theta\bigr)^{2},\qquad(27)$$
where $\hat{\theta}$ is an estimate of the parameter $\theta$, and the Bayes estimator of $\theta$ comes out to be $\hat{\theta}_{S}=E_{\pi}(\theta)$, where $E_{\pi}$ denotes the posterior expectation. However, this loss function is symmetric and can be justified only if overestimation and underestimation of equal magnitude are equally serious, which may not be true in practical situations. A number of asymmetric loss functions are available in the statistical literature. Let us consider the general entropy loss function (GELF) proposed by Calabria and Pulcini [20], defined as follows:
$$L_{E}\bigl(\theta,\hat{\theta}\bigr)\propto\left(\frac{\hat{\theta}}{\theta}\right)^{\delta}-\delta\ln\left(\frac{\hat{\theta}}{\theta}\right)-1.\qquad(28)$$
The constant $\delta$, involved in (28), is its shape parameter and reflects the departure from symmetry. When $\delta>0$, overestimation (i.e., positive error) causes more serious consequences than underestimation (i.e., negative error), and conversely for $\delta<0$. The Bayes estimator of $\theta$ under GELF is given by
$$\hat{\theta}_{E}=\bigl[E_{\pi}\bigl(\theta^{-\delta}\bigr)\bigr]^{-1/\delta},\qquad(29)$$
provided that the posterior expectation exists. It may be noted here that, for $\delta=-1$, the Bayes estimator under the loss (28) coincides with the Bayes estimator under SELF. Expressions for the Bayes estimators $\hat{\alpha}_{E}$ and $\hat{\lambda}_{E}$ of $\alpha$ and $\lambda$, respectively, under GELF can be given as
$$\hat{\alpha}_{E}=\bigl[E_{\pi}\bigl(\alpha^{-\delta}\bigr)\bigr]^{-1/\delta},\qquad(30)$$
$$\hat{\lambda}_{E}=\bigl[E_{\pi}\bigl(\lambda^{-\delta}\bigr)\bigr]^{-1/\delta}.\qquad(31)$$
Substituting the marginal posterior pdfs from (26) into (30) and (31), respectively, and then simplifying, we get the Bayes estimators $\hat{\alpha}_{E}$ and $\hat{\lambda}_{E}$ of $\alpha$ and $\lambda$ as follows:
$$\hat{\alpha}_{E}=\left[\int_{0}^{\infty}\alpha^{-\delta}\,\pi_{1}^{*}(\alpha\mid x)\,d\alpha\right]^{-1/\delta},\qquad(32)$$
$$\hat{\lambda}_{E}=\left[\int_{0}^{\infty}\lambda^{-\delta}\,\pi_{2}^{*}(\lambda\mid x)\,d\lambda\right]^{-1/\delta}.\qquad(33)$$
It may be noted here that the integrals involved in the expressions for $\hat{\alpha}_{E}$ and $\hat{\lambda}_{E}$ cannot be obtained analytically, and one needs numerical techniques for their computation. We have therefore proposed the use of Markov chain Monte Carlo (MCMC) methods, considering the Metropolis-Hastings algorithm to generate samples from the posterior distributions; these samples are then used to compute the Bayes estimates. The Gibbs sampler simulates from the full conditional posterior distributions, while the Metropolis-Hastings algorithm generates samples from an arbitrary proposal distribution. The full conditional posterior distributions of the parameters $\alpha$ and $\lambda$ can be written as
$$\pi^{*}(\alpha\mid\lambda,x)\propto\alpha^{m+a_{1}-1}\,e^{-b_{1}\alpha}\prod_{i=1}^{m}\left(1-e^{-\lambda/x_{i}}\right)^{\alpha(r_{i}+1)},\qquad(34)$$
$$\pi^{*}(\lambda\mid\alpha,x)\propto\lambda^{m+a_{2}-1}\,e^{-\lambda\left(b_{2}+\sum_{i=1}^{m}1/x_{i}\right)}\prod_{i=1}^{m}\left(1-e^{-\lambda/x_{i}}\right)^{\alpha(r_{i}+1)-1},\qquad(35)$$
respectively. For the Bayes estimators, the following MCMC procedure is followed (a minimal implementation sketch is given after this list).
(I) Set the initial guesses of $\alpha$ and $\lambda$, say $\alpha_{0}$ and $\lambda_{0}$.
(II) Set $j=1$.
(III) Generate $\alpha_{j}$ from $\pi^{*}(\alpha\mid\lambda_{j-1},x)$ and $\lambda_{j}$ from $\pi^{*}(\lambda\mid\alpha_{j},x)$.
(IV) Repeat steps (II)-(III) $N$ times.
(V) Obtain the Bayes estimates of $\alpha$ and $\lambda$ under GELF as
$$\hat{\alpha}_{E}=\left[\frac{1}{N-N_{0}}\sum_{j=N_{0}+1}^{N}\alpha_{j}^{-\delta}\right]^{-1/\delta},\qquad \hat{\lambda}_{E}=\left[\frac{1}{N-N_{0}}\sum_{j=N_{0}+1}^{N}\lambda_{j}^{-\delta}\right]^{-1/\delta},$$
where $N_{0}$ is the burn-in period of the Markov chain. Substituting $\delta=-1$ in step (V), we get the Bayes estimates of $\alpha$ and $\lambda$ under SELF.
(VI) To compute the HPD intervals of $\alpha$ and $\lambda$, order the retained MCMC samples of $\alpha$ and $\lambda$ as $\alpha_{(1)}\le\alpha_{(2)}\le\cdots\le\alpha_{(M)}$ and $\lambda_{(1)}\le\lambda_{(2)}\le\cdots\le\lambda_{(M)}$, where $M=N-N_{0}$. Then construct all the $100(1-\psi)\%$ credible intervals of $\alpha$ and $\lambda$, say $\bigl(\alpha_{(j)},\alpha_{(j+[M(1-\psi)])}\bigr)$ and $\bigl(\lambda_{(j)},\lambda_{(j+[M(1-\psi)])}\bigr)$ for $j=1,\ldots,M-[M(1-\psi)]$. Here, $[x]$ denotes the largest integer less than or equal to $x$. The HPD intervals of $\alpha$ and $\lambda$ are then the respective intervals with the shortest length.
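The following Python sketch implements the Metropolis-within-Gibbs sampler described above, assuming the full conditionals (34)-(35) as reconstructed here. The log-scale random-walk proposals, step size, chain length, and function names are our own illustrative choices rather than a prescription from the paper.

```python
import numpy as np

def log_cond_alpha(alpha, lam, x, r, a1, b1):
    # Log of the full conditional (34), up to an additive constant.
    if alpha <= 0:
        return -np.inf
    t = np.log1p(-np.exp(-lam / x))          # log(1 - e^{-lam/x_i})
    return (len(x) + a1 - 1) * np.log(alpha) - b1 * alpha \
        + alpha * np.sum((r + 1) * t)

def log_cond_lambda(lam, alpha, x, r, a2, b2):
    # Log of the full conditional (35), up to an additive constant.
    if lam <= 0:
        return -np.inf
    t = np.log1p(-np.exp(-lam / x))
    return (len(x) + a2 - 1) * np.log(lam) - lam * (b2 + np.sum(1.0 / x)) \
        + np.sum((alpha * (r + 1) - 1) * t)

def mh_gibbs(x, r, a1, b1, a2, b2, n_iter=11000, burn=1000, step=0.2, seed=1):
    rng = np.random.default_rng(seed)
    x, r = np.asarray(x, float), np.asarray(r, float)
    alpha, lam = 1.0, 1.0                    # step (I): initial guesses
    draws = np.empty((n_iter, 2))
    for j in range(n_iter):                  # steps (II)-(IV)
        # Update alpha | lambda with a random walk on the log scale;
        # log(prop) - log(alpha) is the Hastings (Jacobian) correction.
        prop = alpha * np.exp(step * rng.standard_normal())
        if np.log(rng.uniform()) < (log_cond_alpha(prop, lam, x, r, a1, b1)
                                    - log_cond_alpha(alpha, lam, x, r, a1, b1)
                                    + np.log(prop) - np.log(alpha)):
            alpha = prop
        # Update lambda | alpha in the same way.
        prop = lam * np.exp(step * rng.standard_normal())
        if np.log(rng.uniform()) < (log_cond_lambda(prop, alpha, x, r, a2, b2)
                                    - log_cond_lambda(lam, alpha, x, r, a2, b2)
                                    + np.log(prop) - np.log(lam)):
            lam = prop
        draws[j] = alpha, lam
    return draws[burn:]                      # step (V): drop the burn-in N0
```

Given the retained draws, the GELF estimate of $\alpha$ is `np.mean(draws[:, 0] ** (-delta)) ** (-1 / delta)`; setting `delta = -1` reproduces the posterior mean, that is, the SELF estimate, and the shortest of the order-statistic credible intervals in step (VI) gives the HPD interval.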

4. Expected Experiment Times

In practical situations, an experimenter may be interested to know whether the test can be completed within a specified time. This information is important for choosing an appropriate sampling plan, because the time required to complete a test is directly related to cost. Under progressive censoring with a fixed number of removals, the duration of the experiment is the time of the $m$th failure, $X_{m}$. Following Balakrishnan and Aggarwala [21], the expected value of $X_{m}$, given the removal pattern $R=r$, is
$$E\left(X_{m}\mid R=r\right)=\left(\prod_{i=1}^{m}\gamma_{i}\right)\sum_{i=1}^{m}a_{i}\int_{0}^{\infty}x\,f(x)\left[S(x)\right]^{\gamma_{i}-1}dx,\qquad(36)$$
where
$$\gamma_{i}=n-i+1-\sum_{j=1}^{i-1}r_{j},\qquad a_{i}=\prod_{\substack{j=1\\ j\ne i}}^{m}\frac{1}{\gamma_{j}-\gamma_{i}},\qquad(37)$$
and $r_{i}$ is the number of live units removed from the experiment at the $i$th failure. Using the pdf and cdf of the GIED and substituting $u=e^{-\lambda/x}$, the integral in (36) reduces to
$$\int_{0}^{\infty}x\,f(x)\left[S(x)\right]^{\gamma_{i}-1}dx=\alpha\lambda\int_{0}^{1}\frac{(1-u)^{\alpha\gamma_{i}-1}}{-\ln u}\,du,\qquad(38)$$
which is finite provided $\alpha\gamma_{i}>1$. Putting this value in (36), the expected test time is given by
$$E\left(X_{m}\mid R=r\right)=\alpha\lambda\left(\prod_{i=1}^{m}\gamma_{i}\right)\sum_{i=1}^{m}a_{i}\int_{0}^{1}\frac{(1-u)^{\alpha\gamma_{i}-1}}{-\ln u}\,du.\qquad(39)$$
The expected test time for progressively type-II censored data with binomial removals is evaluated by taking the expectation on both sides of (36) with respect to the removal vector $R$. That is,
$$E\left[X_{m}\right]=E_{R}\left[E\left(X_{m}\mid R=r\right)\right]=\sum_{r}E\left(X_{m}\mid R=r\right)P(R=r;p),\qquad(40)$$
where $P(R=r;p)$ is given in (10). The expected time of complete sampling with $n$ test units is obtained by taking $m=n$ and $r_{i}=0$ for all $i$ in (39), which gives
$$E\left[X_{n:n}\right]=n\int_{0}^{\infty}x\,f(x)\left[F(x)\right]^{n-1}dx,\qquad(41)$$
and the expected time of type-II censoring is defined by the expected value of the $m$th failure time; that is,
$$E\left[X_{m:n}\right]=\frac{n!}{(m-1)!\,(n-m)!}\int_{0}^{\infty}x\,f(x)\left[F(x)\right]^{m-1}\left[1-F(x)\right]^{n-m}dx.\qquad(42)$$
The ratio of the expected experiment times (REET) between progressive type-II censoring with binomial removals (PT-II CBR) and complete sampling is defined as
$$\mathrm{REET}=\frac{E\left[X_{m}\right]}{E\left[X_{n:n}\right]}.\qquad(43)$$
REET gives important information for deciding whether the experiment time can be shortened significantly by placing a much larger number $n$ of units on test and stopping the experiment once the $m$th failure is observed. Since an analytical comparison of the expected test times under PT-II CBR and complete sampling is very difficult, we numerically calculated, for various values of $n$, $m$, and $p$, the expected experiment times under PT-II CBR and complete sampling given in (40) and (41), respectively. The considered choices of $n$ and $m$ are listed in Table 1, and the values of the removal probability considered include $p=0.1$, $0.3$, $0.5$, and $0.7$. The results are summarized in Table 1. It is noted from the results that, for a fixed value of the effective sample size $m$, the value of REET decreases as $n$ increases, while, for fixed $n$, REET and the expected termination times under PT-II CBR and complete sampling increase as $m$ increases. Moreover, Table 1 shows that, for a given effective sample size, the expected test time is strongly influenced by the value of the removal probability $p$: when $p$ is large, units are removed at the earlier stages of the life test, so the observed lifetimes lie closer to the tail of the failure time distribution and the expected test time of PT-II CBR is close to that of the complete sample. Thus, $p$ is an important factor for the expected test time. Figure 1 presents the ratio of the expected test time under PT-II CBR to that under complete sampling versus $m$ for fixed $n$ and different values of the removal probability $p$. We observe from Figure 1 that, for larger values of $p$, the ratio approaches one quickly; hence, smaller values of $p$ are more significant for the reduction of the expected test time, and the expected termination time for binomial removals with a fixed value of $p$ is taken for the further calculations.
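Since the integrals in (39)-(41) are analytically intractable for the GIED, the expected times can also be approximated by Monte Carlo simulation. The following minimal sketch, our own illustration with arbitrary names and replication count, estimates the complete-sample expected test time $E[X_{n:n}]$ of (41) by inverse-cdf sampling; the PT-II CBR counterpart (40) can be estimated in the same way using the sample generator sketched in Section 5.

```python
import numpy as np

def gied_rvs(alpha, lam, size, rng):
    # Inverse of the cdf (1): X = -lam / log(1 - (1 - U)^(1/alpha))
    u = rng.uniform(size=size)
    return -lam / np.log(1.0 - (1.0 - u) ** (1.0 / alpha))

def expected_time_complete(alpha, lam, n, reps=20000, seed=0):
    # Monte Carlo estimate of E[X_{n:n}] in (41).
    # Note: the GIED mean (hence this expectation) is finite only for alpha > 1.
    rng = np.random.default_rng(seed)
    return gied_rvs(alpha, lam, (reps, n), rng).max(axis=1).mean()
```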

5. Algorithm for Sample Simulation under PT-II CBR

We need to simulate PT-II CBR samples from the specified GIED and propose the use of the following algorithm (a sketch implementation is given after the list).
(I) Specify the value of $n$.
(II) Specify the value of $m$.
(III) Specify the values of the parameters $\alpha$, $\lambda$, and $p$.
(IV) Generate a random number $r_{i}$ from $\operatorname{binomial}\bigl(n-m-\sum_{j=1}^{i-1}r_{j},\,p\bigr)$, for $i=1,2,\ldots,m-1$.
(V) Set $r_{m}$ according to the following relation: $r_{m}=n-m-\sum_{j=1}^{m-1}r_{j}$.
(VI) Generate $m$ independent $U(0,1)$ random variables $W_{1},W_{2},\ldots,W_{m}$.
(VII) For the given values of the progressive type-II censoring scheme $(r_{1},r_{2},\ldots,r_{m})$, set $V_{i}=W_{i}^{1/(i+r_{m}+r_{m-1}+\cdots+r_{m-i+1})}$ for $i=1,2,\ldots,m$.
(VIII) Set $U_{i}=1-V_{m}V_{m-1}\cdots V_{m-i+1}$ for $i=1,2,\ldots,m$; then $U_{1},U_{2},\ldots,U_{m}$ is a PT-II CBR sample of size $m$ from the $U(0,1)$ distribution.
(IX) Finally, for the given values of the parameters $\alpha$ and $\lambda$, set $X_{i}=F^{-1}(U_{i})=-\lambda/\ln\bigl(1-(1-U_{i})^{1/\alpha}\bigr)$ for $i=1,2,\ldots,m$. Then $(X_{1},X_{2},\ldots,X_{m})$ is the required PT-II CBR sample of size $m$ from the GIED.
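The following Python sketch implements steps (IV)-(IX) above; the function name and interface are ours.

```python
import numpy as np

def pt2_cbr_sample(alpha, lam, n, m, p, seed=None):
    # Generate one PT-II CBR sample of size m from GIED(alpha, lam).
    rng = np.random.default_rng(seed)
    # Steps (IV)-(V): binomial removals; the last removal takes the remainder.
    r = np.zeros(m, dtype=int)
    left = n - m
    for i in range(m - 1):
        r[i] = rng.binomial(left, p)
        left -= r[i]
    r[m - 1] = left
    # Steps (VI)-(VIII): progressive type-II uniform sample.
    w = rng.uniform(size=m)
    v = w ** (1.0 / (np.arange(1, m + 1) + np.cumsum(r[::-1])))
    u = 1.0 - np.cumprod(v[::-1])            # U_1 < U_2 < ... < U_m
    # Step (IX): invert the GIED cdf (1).
    x = -lam / np.log(1.0 - (1.0 - u) ** (1.0 / alpha))
    return x, r
```

For example, `x, r = pt2_cbr_sample(2.0, 1.5, n=30, m=20, p=0.3)` returns the $m$ ordered failure times together with the realized removal pattern.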

6. Simulation Studies

Let $\hat{\alpha}_{M}$ and $\hat{\lambda}_{M}$ denote the MLEs of the parameters $\alpha$ and $\lambda$, respectively, while $(\hat{\alpha}_{S},\hat{\lambda}_{S})$ and $(\hat{\alpha}_{E},\hat{\lambda}_{E})$ are the corresponding Bayes estimators under SELF and GELF, respectively. We compare the estimators obtained under GELF with the corresponding Bayes estimators under SELF and the MLEs. The comparisons are based on the simulated risks (average loss over the sample space) under GELF. Here, CI and HPD denote the confidence interval and the highest posterior density interval of $\alpha$ and $\lambda$, and we report the average lengths of the CIs and HPD intervals. It may be mentioned that the exact expressions for the risks cannot be obtained, as the estimators do not take nice closed forms. Therefore, the risks of the estimators are estimated on the basis of a Monte Carlo simulation study of 5000 samples. It may be noted that the risks of the estimators depend on the values of $n$, $m$, $p$, $\alpha$, $\lambda$, and $\delta$. The hyperparameters are chosen by treating the prior mean and prior variance of $\alpha$ and $\lambda$ as two independent pieces of prior information, where the prior means are taken equal to the true values of the parameters and the prior variances express different degrees of confidence in terms of smaller, moderate, and larger variances. On the basis of this information, the hyperparameters of the gamma priors can be evaluated from the relations $a=(\text{prior mean})^{2}/\text{prior variance}$ and $b=\text{prior mean}/\text{prior variance}$, respectively.
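For illustration (the numbers here are ours, not taken from the paper's tables): if the prior mean of $\alpha$ is taken as $2$ with prior variance $4$, the hyperparameters follow by moment matching as
$$a_{1}=\frac{(\text{mean})^{2}}{\text{variance}}=\frac{2^{2}}{4}=1,\qquad b_{1}=\frac{\text{mean}}{\text{variance}}=\frac{2}{4}=0.5,$$
since a $G(a_{1},b_{1})$ prior has mean $a_{1}/b_{1}$ and variance $a_{1}/b_{1}^{2}$. A larger prior variance at the same mean thus corresponds to a flatter, less informative prior.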

In order to study the effect of variation in the values of $n$, $m$, $p$, the prior means and prior variances of $\alpha$ and $\lambda$, and the GELF loss parameter $\delta$, we obtained the simulated risks over a grid of these quantities. Generating PT-II CBR samples as described in Section 5, the simulated risks under SELF and GELF were obtained for the selected values of $n$, $m$, $p$, $\alpha$, $\lambda$, and $\delta$. The results are summarized in Tables 2-4. It is noted from Table 2 that, for almost all the considered values of $\delta$, the risks of $\hat{\alpha}_{E}$ and $\hat{\lambda}_{E}$ are the smallest among the considered competing estimators under GELF. To study the effect of variation in the values of the other parameters on the risks of the estimators of $\alpha$ and $\lambda$, we arbitrarily fixed a positive value of $\delta$ (the case where overestimation is more serious than underestimation) and a negative value for the reverse situation. Tables 3 and 4 present the risks of the estimators of $\alpha$ and $\lambda$ under both losses in the under- and overestimation situations, with the prior means equal to the true values of the parameters and with smaller, moderate, and larger prior variances determining the hyperparameters. When the effective sample size $m$ increases, the risks of all the estimators of $\alpha$ and $\lambda$ under both losses decrease, and the simulated risks of $\hat{\alpha}_{E}$ and $\hat{\lambda}_{E}$ are smaller than those of $(\hat{\alpha}_{M},\hat{\lambda}_{M})$ and $(\hat{\alpha}_{S},\hat{\lambda}_{S})$ in all the considered cases, including those where underestimation is considered more serious than overestimation and vice versa. The HPD and CI intervals were obtained under different prior variances along with variation of the effective sample size $m$. From Table 5, it is observed that the average lengths of the CI and HPD intervals decrease as the effective sample size increases and that the average length of the HPD interval is always less than that of the CI; this is also shown in Figure 2.

7. Real Data Analysis

Here, we consider the real data set presented in Lawless [22], which represents the number of revolutions to failure for each of 23 ball bearings in a life test; this data set was originally discussed by Lieblein and Zelen [23]. For this data set, Abouammoh and Alshingiti [5] indicated that the GIED provides a satisfactory fit. For the purpose of illustrating the methods discussed in this paper, PT-II CBR samples were generated from the real data set under three censoring schemes, which are given in Table 6. Table 7 shows the MLEs and Bayes estimators of $\alpha$ and $\lambda$ under SELF and GELF, together with the CI/HPD intervals, based on the complete real data set. For a long run of the MCMC algorithm developed in Section 3, we take a noninformative prior, with the hyperparameters of the gamma priors chosen so that the priors are diffuse. Hence, on the basis of Table 6 and using the noninformative prior under different degrees of censoring, the MLEs, the Bayes estimators, and the CI/HPD intervals of $\alpha$ and $\lambda$ under SELF and GELF are presented in Tables 8 and 9, respectively. Finally, the study of Tables 8 and 9 shows that the MLEs, the Bayes estimators, and the lengths of the CI/HPD intervals of $\alpha$ and $\lambda$ decrease as the degree of censoring decreases.

8. Conclusion

(1) The maximum likelihood and Bayes methods have been used for estimating the parameters of the GIED under GELF and SELF based on PT-II CBR, and their performance has been assessed through a simulation study. The Bayes estimators were obtained by the MCMC method. These methods were also applied to a real data set on the number of revolutions to failure for each of 23 ball bearings in a life test.

(2) Under consideration of different prior beliefs, it has been noticed from the tables that the estimated risks of the estimators decrease as the effective sample size increases and that the Bayes estimators have the smallest estimated risks compared with the corresponding MLEs. Hence, the proposed estimators under GELF perform better than the MLEs and the Bayes estimators under SELF for different degrees of censoring, whether underestimation is more serious than overestimation or vice versa. The CI/HPD intervals were also obtained, and the Bayes estimates were found to be superior to the corresponding MLEs.

(3) We have obtained and compared the expected test times under PT-II CBR and complete sampling. The numerical results in Table 1 indicate that the expected test time depends strongly on the value of the removal probability. When the probability of removal is large, a reduction in the expected test time can be achieved only by increasing the total number of test units $n$.