Abstract

This paper explores optimal censoring schemes for models with U-shaped hazard rates (USHRs) using Bayesian methods. The Topp-Leone (TL) distribution has been considered as a special case. We have used conventional and fuzzy priors for the estimation. Further, symmetric and asymmetric loss functions have been considered for the estimation. Since the Bayes estimators (BEs) for the parameters of the TL distribution cannot be derived in closed form, we have used the quadrature method (QuM), Lindley's approximation (LinA), Tierney and Kadane's approximation (TKA), and the Gibbs sampler (GiS) for the approximate estimation of the parameters. We have also considered different techniques to compare various progressive censoring schemes on the basis of their information contents and have reported the optimal censoring schemes under the Bayesian framework. The performance of the different BEs has been compared on the basis of a simulation study. A real-life example has been considered for illustration.

1. Introduction

The TL distribution introduced by Topp and Leone [1] is a useful lifetime distribution. However, the said contribution lacked the application side of the proposed model. Later, Nadarajah and Kotz [2] reported that the TL distribution has a U-shaped hazard rate (USHR). The lifetimes of human populations can be efficiently modeled by lifetime distributions with a USHR, as can the lifetimes of various manufactured products. According to Ghitany et al. [3], the TL distribution is one of the few lifetime distributions with a USHR having only two parameters, which provides convenience in modeling and estimation. The regularity conditions are fulfilled by the TL distribution. In addition, the TL distribution has an explicit form for its distribution function; hence, it can easily be applied to censored lifetime data, in contrast to the lognormal and gamma distributions. Other important features of the TL model can be seen in the contributions of Al-Zahrani and Alshomrani [4], Genc [5], Feroze and Aslam [6], Genc [7], Bayoud [8], MirMostafaee et al. [9], Feroze and Aslam [10], Reyad and Othman [11], and Rezaei et al. [12].

Progressive censoring (PC) has become very prominent in reliability studies. It is useful in life-testing experiments because of its capability to withdraw surviving items from the experiment at the desire of the researcher, which gives it additional edges over traditional type II censoring. Balakrishnan and Aggarwala [13] discussed PC and its applications in detail. More details regarding the developments, applications, and further potential issues to be studied in PC have been provided by Balakrishnan [14]. The analysis of PC samples from various lifetime models has been discussed by Kundu and Joarder [15], Lin et al. [16], Abd-Elmougod et al. [17], Bayoud [18], and the references cited therein.

According to Pan and Klir [19], the conventional Bayesian prior distributions can be obtained as a special case of fuzzy priors. These priors allow researchers to avoid the use of conjugate priors [20]. The performance of Bayesian methods can further be improved by the use of fuzzy priors. There have been some important contributions using the concept of fuzzy priors in Bayesian inference [21], including Wu [22], Salinas et al. [23], Singh et al. [24], and Pak [25].

Though many papers have appeared considering the Bayesian and classical analysis of the TL distribution during the last ten years, very few of these contributions have considered the analysis of PC samples from the TL distribution. Recently, Abd-Elmougod et al. [17] estimated the coefficient of variation of the TL distribution using Bayesian and classical methods based on adaptive PC samples, and the applicability of the estimates has been discussed under numerical examples. Similarly, Bayoud [18] used PC samples to obtain Bayesian and classical estimates for the shape parameter of the TL distribution, where approximate maximum likelihood estimation (MLE), the LinA, and the importance sampling method have been used for the estimation. Our contribution is different from Abd-Elmougod et al. [17] and Bayoud [18] in the sense that the said contributions have considered the analysis for the shape parameter of the TL distribution while taking the scale parameter to be fixed. Assuming the scale parameter to be fixed results in a less flexible model, so the scope of these contributions has been limited by this restriction on the flexibility of the TL distribution. We have addressed the problem of estimating the parameters and selecting the optimal PC plans under the Bayesian framework when both parameters of the TL distribution are unknown and have explored more options for the estimation of its parameters. The choice of Bayesian estimation has been made due to the fact that Bayesian methods often have a clear advantage over classical methods even in the case of little prior information; for example, see Kundu and Joarder [15]. Unfortunately, a conjugate joint prior distribution does not exist for the parameters of the TL distribution; hence, we have assumed a conjugate gamma prior for the scale parameter and a log-concave prior pdf for the shape parameter of the distribution. It is worth mentioning here that in the case of scale-shape parameter distributions, the consideration of a conjugate gamma prior for the scale parameter and a log-concave prior pdf for the shape parameter is frequent in the literature; for example, see Pradhan and Kundu [26], Kundu and Raqab [27], and Lin et al. [16]. There are two main reasons for the choice of a log-concave density for the shape parameter: (i) the mathematical tractability of the corresponding posterior distribution and (ii) the fact that several well-known densities with known shape parameters, for example, the normal, log-normal, and gamma densities, are log-concave. Fuzzy priors have also been assumed for the posterior estimation, and a comparison of the fuzzy and conventional priors has been reported. The squared error loss function (SELF) and the precautionary loss function (PLF) have been assumed for the posterior estimation. It should be noticed that the SELF is a symmetric loss function while the PLF is an asymmetric one. We have used both symmetric and asymmetric loss functions because the marginal posterior distributions are not in compact form; hence, their shapes are unknown and can be symmetric or asymmetric. A detailed discussion regarding the said loss functions can be seen in the works of Feroze and Aslam [10] and Feroze [28]. Further, closed forms for the BEs were not available; therefore, we have considered four approximation techniques, namely, the QuM, LinA, TKA, and GiS, for the numerical estimation.

2. The Model and Likelihood Function

This section contains the description of the TL model and the likelihood function under PC samples for the TL distribution.

The TL distribution has the following probability density function (pdf):
\[
f(x; \beta, \lambda) = \frac{2\beta}{\lambda}\left(1-\frac{x}{\lambda}\right)\left[\frac{x}{\lambda}\left(2-\frac{x}{\lambda}\right)\right]^{\beta-1}, \qquad 0 < x < \lambda,
\]
where β > 0 and λ > 0 are the shape and scale parameters of the TL distribution.

Similarly, the cumulative distribution function (CDF) of the distribution is
\[
F(x; \beta, \lambda) = \left[\frac{x}{\lambda}\left(2-\frac{x}{\lambda}\right)\right]^{\beta}, \qquad 0 < x < \lambda.
\]
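For later numerical work, the pdf, CDF, and quantile function can be coded directly. The following is a minimal Python sketch assuming the parameterization above; the function names are ours, not from the original paper.

```python
import numpy as np

def tl_pdf(x, beta, lam):
    """pdf of the two-parameter Topp-Leone distribution, 0 < x < lam."""
    u = np.asarray(x) / lam
    return (2.0 * beta / lam) * (1.0 - u) * (u * (2.0 - u)) ** (beta - 1.0)

def tl_cdf(x, beta, lam):
    """CDF: F(x) = [(x/lam)(2 - x/lam)]**beta."""
    u = np.asarray(x) / lam
    return (u * (2.0 - u)) ** beta

def tl_quantile(p, beta, lam):
    """Inverse CDF: x_p = lam * (1 - sqrt(1 - p**(1/beta)))."""
    p = np.asarray(p, dtype=float)
    return lam * (1.0 - np.sqrt(1.0 - p ** (1.0 / beta)))
```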

The likelihood function for the PC samples, using the concept of Balakrishnan and Aggarwala [13], is
\[
L(\beta, \lambda \mid \mathbf{x}) \propto \prod_{i=1}^{m} f(x_{(i)}; \beta, \lambda)\left[1-F(x_{(i)}; \beta, \lambda)\right]^{R_i},
\]
where x_(1) < x_(2) < ⋯ < x_(m) are the observed failure times and R_i is the number of surviving items removed at the ith failure.

Substituting the pdf and the CDF of the TL distribution into the likelihood function, we have
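As a concrete illustration, the logarithm of this progressively censored likelihood is easy to evaluate numerically. Below is a hedged Python sketch under the parameterization of this section; tl_pc_loglik is our own helper name.

```python
import numpy as np

def tl_pc_loglik(params, x, R):
    """Log-likelihood of (beta, lam) for a progressively type-II censored
    TL sample: sum of log f(x_i) plus R_i * log[1 - F(x_i)].

    x : ordered observed failure times (length m)
    R : removals R_1, ..., R_m applied at each failure time
    """
    beta, lam = params
    x = np.asarray(x, float)
    R = np.asarray(R, float)
    if beta <= 0.0 or lam <= x.max():
        return -np.inf                      # outside the parameter space
    u = x / lam
    G = u * (2.0 - u)                       # so that F(x) = G**beta
    logf = np.log(2.0 * beta / lam) + np.log(1.0 - u) + (beta - 1.0) * np.log(G)
    logS = np.log1p(-G ** beta)             # log survival = log(1 - F)
    return float(np.sum(logf) + np.sum(R * logS))
```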

3. Bayesian Estimation

Here, we have considered a conjugate gamma prior for the scale parameter λ as

Further, we have assumed that the log-concave prior density for the shape parameter β is another gamma prior, having the following form:

Combining the likelihood function with the above priors, the joint posterior distribution is

Equation (7) can be written as

We have also used fuzzy priors for the posterior estimation. These priors have been used following the idea of Pan and Klir [19]. The classical Bayesian priors can be derived as a special case of the fuzzy priors.

3.1. Loss Functions

In this subsection, a symmetric (SELF) and an asymmetric (PLF) loss function have been assumed for the estimation. The expression for the SELF is L(θ, θ̂) = (θ̂ − θ)², where θ is a parameter and θ̂ is the BE of the parameter θ; the resulting BE is the posterior mean of θ. Similarly, the PLF can be defined as L(θ, θ̂) = (θ̂ − θ)²/θ̂, having the BE of θ as the square root of the posterior second moment. It is clear that, using the SELF and PLF, the BEs for the parameters β and λ cannot be obtained analytically. Hence, we have proposed some approximation methods in the coming sections in order to evaluate the said BEs numerically.
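For reference, minimizing the posterior expected loss under each function gives the following standard estimators and posterior risks (a short sketch; E(· | x) denotes posterior expectation):
\[
\hat{\theta}_{\mathrm{SELF}} = E(\theta \mid \mathbf{x}), \qquad
\rho_{\mathrm{SELF}} = E(\theta^{2} \mid \mathbf{x}) - \left[E(\theta \mid \mathbf{x})\right]^{2},
\]
\[
\hat{\theta}_{\mathrm{PLF}} = \sqrt{E(\theta^{2} \mid \mathbf{x})}, \qquad
\rho_{\mathrm{PLF}} = 2\left(\sqrt{E(\theta^{2} \mid \mathbf{x})} - E(\theta \mid \mathbf{x})\right).
\]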

3.2. Quadrature Methods (QuM)

It should be noted that the BEs under the SELF and PLF, based on the above posterior distribution, are not available in closed form. As we have two parameters to be estimated, the BEs under the SELF and PLF involve double integrals. These integrals can be handled by employing the QuM. In the Bayesian QuM, the posterior density is evaluated over a set of points covering the effective range of integration, and the required integrals are approximated by the corresponding weighted sums over these points. We have developed a program in the software Mathematica to obtain the BEs and the associated posterior risks for the parameters β and λ using the SELF and PLF under the informative priors. Some references on solving such integrals using iterative procedures are the works of Ali and Pan [29], Ali and Pan [31], and Ali et al. [30].
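The idea can be sketched in a few lines of Python: evaluate the unnormalised log posterior on a rectangular grid, normalise, and form the posterior moments needed by the SELF and PLF. This is an illustrative sketch (not the authors' Mathematica program); log_post and the grids are assumed inputs, and the grid must be fine and wide enough to capture essentially all of the posterior mass.

```python
import numpy as np

def bayes_estimates_grid(log_post, beta_grid, lam_grid):
    """Posterior summaries by two-dimensional quadrature on an
    equispaced rectangular grid; log_post(beta, lam) is the
    unnormalised log posterior."""
    B, L = np.meshgrid(beta_grid, lam_grid, indexing="ij")
    lp = np.vectorize(log_post)(B, L)
    w = np.exp(lp - lp.max())               # rescale to avoid underflow
    w /= w.sum()                            # normalise (equal spacing cancels)
    m1 = (w * B).sum(), (w * L).sum()       # posterior means  -> SELF BEs
    m2 = (w * B**2).sum(), (w * L**2).sum() # posterior second moments
    self_risk = tuple(s - m * m for s, m in zip(m2, m1))
    plf_be = tuple(np.sqrt(s) for s in m2)  # PLF BEs
    plf_risk = tuple(2 * (np.sqrt(s) - m) for s, m in zip(m2, m1))
    return {"SELF": (m1, self_risk), "PLF": (plf_be, plf_risk)}
```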

3.3. Lindley’s Approximation (LinA)

The QuM can have issues in some situations; for example, it cannot be employed effectively for a function having singularities. In such situations, other approximation methods, such as the LinA, can be used. This approximation can be used to obtain the BEs without performing complex numerical integrations. Hence, in situations demanding only the BEs, the LinA can be used effectively. Bayoud [18] used the LinA for the estimation of the shape parameter of the TL distribution. We have considered the more general and flexible case by performing the LinA for both parameters of the TL distribution using PC data.

Lindley [32] proposed an approximation for the numerical evaluation of the ratio of integrals defining the posterior expectation of u(β, λ), where u is any function of β and/or λ, ℓ is the log-likelihood function, and ρ is the logarithm of the joint prior for the parameters β and λ. The basic idea of the approximation is to expand the functions appearing in this ratio into a Taylor series about the MLEs of the parameters. The approximation can produce reasonably good results if the concerned posterior distribution is unimodal, or at least dominated by a single mode, and the sample is sufficiently large.

In the case of two unknown parameters (β, λ), the LinA of the posterior expectation takes the usual two-parameter form, where β̂ and λ̂ are the MLEs of the parameters β and λ, respectively, and σ_ij is the (i, j)th element of the inverse of the matrix of negative second-order derivatives of the log-likelihood, all evaluated at the MLEs of the parameters.

From the likelihood function of Section 2, the log-likelihood function is of the form

The MLEs for β and λ can be obtained by solving the following normal equations:

Since the MLEs for the parameters β and λ cannot be obtained in closed form from these equations, iterative methods have been used to compute them numerically.
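As an illustration of such an iterative solution, the following Python sketch maximizes the progressively censored log-likelihood directly (reusing the hypothetical tl_pc_loglik helper sketched in Section 2) instead of solving the normal equations; scipy's Nelder-Mead search is one convenient choice.

```python
import numpy as np
from scipy.optimize import minimize

def tl_pc_mle(x, R, beta0=1.0, lam0=None):
    """Numerical MLEs of (beta, lam) under progressive censoring.
    lam must exceed the largest observed failure time, so the search
    is started just above it."""
    x = np.asarray(x, float)
    if lam0 is None:
        lam0 = 1.05 * x.max()
    res = minimize(lambda p: -tl_pc_loglik(p, x, R),
                   x0=[beta0, lam0], method="Nelder-Mead")
    return res.x                            # (beta_hat, lam_hat)
```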

The second-order derivatives of the log-likelihood function have been obtained by differentiating the above equations once more with respect to β and λ.

Now, (15)–(17) have been evaluated at the MLEs of β and λ.

As the third-order derivatives with respect to β and λ contain long expressions, they have not been presented in the paper.

Based on the second-order derivatives, the required matrix and its inverse have been constructed and evaluated at the MLEs.

Based on the above quantities, the BEs for β and λ under the SELF are

Again, the BEs for β and λ under the PLF are

3.4. Tierney and Kadane’s Approximation (TKA)

Estimation using the LinA often gets tedious, especially when dealing with posteriors having several parameters, because the LinA needs the evaluation of third-order derivatives of the log-likelihood function. This problem can be addressed by employing another conveniently computable approximation called the TKA. An additional benefit of the TKA is its smaller error compared with the LinA. Hence, we have also considered the TKA to obtain the BEs using PC samples. Consider δ(β, λ) = [ℓ(β, λ) + ρ(β, λ)]/n, where ρ(β, λ) is the logarithm of the joint informative prior for the parameters and ℓ(β, λ) is the logarithm of the likelihood function given in Section 2.

Further, consider δ*(β, λ) = δ(β, λ) + log u(β, λ)/n, where u(β, λ) is the function of the parameter(s) β and/or λ whose BE is required. Then, according to Tierney and Kadane [35], the posterior expectation of u(β, λ) can be presented in the following form.

The approximation is
\[
E\left[u(\beta, \lambda) \mid \mathbf{x}\right] \approx \sqrt{\frac{\det \Sigma^{*}}{\det \Sigma}}\, \exp\left\{ n\left[\delta^{*}(\hat{\beta}^{*}, \hat{\lambda}^{*}) - \delta(\hat{\beta}, \hat{\lambda})\right]\right\},
\]
where (β̂*, λ̂*) and (β̂, λ̂) maximize δ* and δ, respectively, and Σ* and Σ are the negatives of the inverse Hessians of δ* and δ evaluated at (β̂*, λ̂*) and (β̂, λ̂), respectively.

Here, δ takes its explicit form from the log-likelihood and the log-prior, up to an additive constant independent of the parameters β and λ.

Now, (β̂, λ̂) and (β̂*, λ̂*) are estimated by maximizing δ and δ*, respectively, using iterative methods.

The determinant of the negative of the inverse Hessian of δ evaluated at (β̂, λ̂) is obtained from the corresponding second-order derivatives of δ.

The second-order derivatives of δ* contain lengthy expressions; therefore, they have not been presented here. Once the required second-order derivatives have been calculated, they can easily be used to compute Σ and Σ*; hence, the BEs can be obtained from the above TKA expression.
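A compact numerical rendering of the whole TKA recipe is sketched below in Python, with the Hessians obtained by central finite differences; it assumes a strictly positive function u, and it reuses the hypothetical tl_pc_loglik helper from Section 2 together with a user-supplied log_prior. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def num_hessian(f, p, h=1e-5):
    """Central-difference Hessian of a scalar function f at point p."""
    p = np.asarray(p, float)
    k = len(p)
    H = np.empty((k, k))
    I = np.eye(k)
    for i in range(k):
        for j in range(k):
            H[i, j] = (f(p + h*I[i] + h*I[j]) - f(p + h*I[i] - h*I[j])
                       - f(p - h*I[i] + h*I[j]) + f(p - h*I[i] - h*I[j])) / (4*h*h)
    return H

def tka_expectation(u, log_prior, x, R, p0):
    """Tierney-Kadane approximation of E[u(beta, lam) | data], u > 0."""
    n = len(x) + int(np.sum(R))                      # total items on test
    delta = lambda p: (tl_pc_loglik(p, x, R) + log_prior(p)) / n
    delta_star = lambda p: delta(p) + np.log(u(p)) / n
    mode = minimize(lambda p: -delta(p), p0, method="Nelder-Mead").x
    mode_s = minimize(lambda p: -delta_star(p), p0, method="Nelder-Mead").x
    Sigma = np.linalg.inv(-num_hessian(delta, mode))
    Sigma_s = np.linalg.inv(-num_hessian(delta_star, mode_s))
    return (np.sqrt(np.linalg.det(Sigma_s) / np.linalg.det(Sigma))
            * np.exp(n * (delta_star(mode_s) - delta(mode))))
```

For instance, the SELF BE of β would be tka_expectation(lambda p: p[0], ...), and its PLF BE the square root of the same call with u(p) = p[0]**2.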

3.5. Gibbs Sampler

Consider the joint posterior distribution derived above. Let the full conditional densities of β given λ and of λ given β be tractable, and suppose we aim to obtain the posterior expectations of functions of β and λ. To implement the Gibbs sampler, we start by choosing some initial values for the parameters, denoted by β(0) and λ(0), and then we draw the samples from the two conditional distributions in the following sequence.

As the values at the jth step depend on the values at the (j − 1)th step, the sequence generated above is a Markov chain. In order to implement a GiS for the posterior distribution, we need to extract the conditional distribution of each unknown parameter from the joint posterior distribution.

From the joint posterior distribution, the conditional distribution of the parameter β given λ is

Similarly, the conditional distribution of the parameter λ given β is of the form

Using these conditional distributions, the GiS can be employed considering the methodology proposed by Pandey and Bandyopadhyay [37] using the WinBUGS software. The generated samples for the parameters β and λ can be utilized for the estimation of the said parameters under the SELF and PLF. The BE and posterior risk for a parameter under the SELF can be obtained as the mean and the variance of the generated sample, respectively. Similarly, the BE for a parameter under the PLF can be computed as the square root of the mean of the squared sample values, with the posterior risk being twice the difference between this BE and the sample mean.
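Since the above conditionals are not standard densities when both parameters are unknown, a practical variant replaces the exact Gibbs draws with one-dimensional Metropolis steps (Metropolis-within-Gibbs). The following Python sketch, which again reuses the hypothetical tl_pc_loglik helper, works on the log scale of both parameters; it is one possible implementation, not the WinBUGS program used in the paper.

```python
import numpy as np

def mwg_sampler(x, R, log_prior, n_iter=20000, burn=5000, step=0.15, seed=1):
    """Metropolis-within-Gibbs for (beta, lam), sampled on the log scale.
    log_prior(p) is the log of the joint prior at p = (beta, lam)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    log_post = lambda p: tl_pc_loglik(p, x, R) + log_prior(p)
    cur = np.array([0.0, np.log(1.1 * x.max())])    # (log beta, log lam)
    cur_lp = log_post(np.exp(cur)) + cur.sum()      # + log-Jacobian
    draws = []
    for it in range(n_iter):
        for j in (0, 1):                            # update one coordinate
            prop = cur.copy()
            prop[j] += step * rng.standard_normal()
            prop_lp = log_post(np.exp(prop)) + prop.sum()
            if np.log(rng.uniform()) < prop_lp - cur_lp:
                cur, cur_lp = prop, prop_lp
        if it >= burn:
            draws.append(np.exp(cur))
    return np.asarray(draws)                        # rows: (beta, lam)
```

From the returned draws, the SELF estimates are draws.mean(axis=0) and the PLF estimates are np.sqrt((draws ** 2).mean(axis=0)), with the posterior risks following the formulas given above.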

4. Simulation Study

This section contains a simulation study using different sample sizes (n) and effective sample sizes (m). A common parametric space has been used for the estimation so that the different BEs can be compared.

The PC samples from the TL model have been drawn by using the method proposed by Balakrishnan and Aggarwala [13] (a sketch of this generation algorithm is given after the list of schemes below). The choice of hyperparameters has been made by using the prior means approach. The SELF and PLF have been assumed for the BEs. Since closed-form expressions were not available for the BEs, we have used the QuM, LinA, TKA, and GiS for the numerical computation of the estimators. All the results have been reported under 10,000 replications. The following censoring schemes (CS) have been used for the estimation.

CS1:, , ,

CS2:, , ,

CS3:, , ,

CS4:, , ,

CS5:, , ,

CS6:, , ,

CS7:, , ,

CS8:, , ,

CS9:, , ,
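The Balakrishnan-Aggarwala [13] algorithm transforms independent uniforms into a progressively type-II censored sample and then inverts the target CDF. A Python sketch (using the hypothetical tl_quantile helper from Section 2) is given below.

```python
import numpy as np

def gen_pc_sample(beta, lam, R, rng=None):
    """Progressively type-II censored TL sample of size m = len(R),
    generated by the Balakrishnan-Aggarwala uniform transformation."""
    rng = np.random.default_rng() if rng is None else rng
    R = np.asarray(R, float)
    m = len(R)
    W = rng.uniform(size=m)
    # exponents: i plus the removals attached to the last i failures
    a = np.arange(1, m + 1) + np.cumsum(R[::-1])
    V = W ** (1.0 / a)
    U = 1.0 - np.cumprod(V[::-1])        # PC uniform order statistics
    return tl_quantile(U, beta, lam)     # invert the TL CDF
```

For example, with n = 30 and m = 10, the scheme removing all surviving items at the first failure corresponds to R = [20, 0, 0, 0, 0, 0, 0, 0, 0, 0].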

The results from the simulation study have been reported in Tables 1–6: those for the first parametric combination have been presented in Tables 1 and 2, those for the second in Tables 3 and 4, and those for the third in Tables 5 and 6. On the other hand, the results under the SELF have been presented in Tables 1, 3, and 5, and those under the PLF have been reported in Tables 2, 4, and 6. The comparison among the different BEs has been made on the basis of the amounts of the posterior risks (PRs) associated with these estimates. Larger values of n and m have a positive impact on the performance of the BEs. Interestingly, whenever the true parametric values are less than one, the SELF performs better than the PLF; conversely, whenever the true parametric values are equal to or greater than one (the results for parametric values greater than one have not been presented here due to space restrictions), the performance of the PLF is better than that of the SELF. Similarly, the TKA has a slight advantage over the QuM, LinA, and GiS, as the amounts of the PRs associated with the estimates under the TKA are the least among all the approximation methods used in the study. The TKA also provides better convergence in the majority of the cases.

Comparing the censoring schemes 1-9 given in Tables 1–6, it is clear that CS3, CS5, and CS8 have the least amounts of PRs as compared to their counterparts for the same n and m. This is according to expectation, because the expected test times for the censoring schemes CS3, CS5, and CS8 are greater than those of their counterparts. Hence, the data obtained under CS3, CS5, and CS8 are likely to provide more information about the said parameters as compared to the other censoring schemes. In addition, CS3, CS5, and CS8 are in accordance with the shape of the hazard rate of the TL distribution, as the hazard rate of the TL distribution is U-shaped, which proceeds with more failures at the start and at the end of the experiment. The censoring schemes 1, 4, and 6 are the close competitors of CS3, CS5, and CS8, respectively, and interestingly, these CSs incorporate more failures at the end of the experiments. It is interesting to note that in the cases of CS2 and CS7, the amounts of PRs are the largest; this may be due to the fact that for these CSs, we have assumed more failures in the middle of the experiments, which is generally not suited to the TL distribution. For a given n, the increase in m results in decreased amounts of the corresponding posterior risks.

We have not reported the comparison of the MLE with the BEs, as it is a well-known fact that the results for the MLE and the BEs under noninformative priors are often similar, and in such cases, the MLE can be preferred over the BEs as the BEs are computationally more expensive. However, in the case of informative priors, the BEs provide better results than the MLE. Similar findings on the comparison of the BEs under informative/noninformative priors and the MLE have been reported by Kundu and Joarder [15] and Kundu [38].

5. Optimum Censoring Scheme

In real-life situations, it is vital to select an optimum censoring scheme among different schemes. Here, the different schemes mean, for a predetermined sample size (n) and a fixed effective sample size (m), the various choices of the removals R = (R_1, …, R_m) such that n = m + ∑_{i=1}^{m} R_i. Suppose there are two different censoring schemes denoted by R^{(1)} and R^{(2)}, respectively; then, R^{(1)} will be considered better than R^{(2)} if it provides more information regarding the parameters of the concerned model as compared to R^{(2)}. In the coming sections, we have reported two criteria to determine the optimum censoring scheme between two competing censoring schemes based on their information contents. These criteria have also been used by Kundu [38] and are based on the estimation of the pth, 0 < p < 1, quantile. The pth quantile of the TL distribution is x_p = λ(1 − (1 − p^{1/β})^{1/2}).
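The quantile expression follows directly by inverting the CDF of Section 2 (a short derivation sketch under that parameterization):
\[
\left[\frac{x_p}{\lambda}\left(2-\frac{x_p}{\lambda}\right)\right]^{\beta} = p
\;\Longrightarrow\;
\left(1-\frac{x_p}{\lambda}\right)^{2} = 1-p^{1/\beta}
\;\Longrightarrow\;
x_p = \lambda\left(1-\sqrt{1-p^{1/\beta}}\right).
\]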

The first criterion compares the information content of a censoring scheme R through the posterior variance of the quantile x_p, where V_R(p) and V_f(p) denote the posterior variance of x_p for a censored and a complete sample, respectively. It is clear that the criterion depends on the quantile point (p) and not on a particular sample; hence, according to this criterion, the censoring scheme R^{(1)} is better than R^{(2)} if it loses less information about x_p relative to the complete sample. The drawback of this criterion is that it is a function of the quantile point p.

The second criterion removes this dependence by averaging over p, where V_R(p) and V_f(p) have the same definitions as above. Here, w(p) is a nonnegative weight function defined on [0, 1]; it has to be predetermined based on the nature of the study. For example, if more concentration is required at the middle, then a larger weight should be given around p = 0.5; conversely, if the tail probabilities are more vital, then more weight can be attached to the larger values of p. Again, R^{(1)} will be better than R^{(2)} if it loses less information in this weighted sense.
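Because the exact displays of the two criteria are not reproduced here, the following Python sketch assumes a variance-ratio form consistent with the description above (complete-sample posterior variance over censored-sample posterior variance, so that values nearer one indicate less information loss); the posterior variances are estimated from simulation draws such as those returned by the mwg_sampler sketch, rather than by the LinA route used in the paper.

```python
import numpy as np

def quantile_var(draws, p):
    """Posterior variance of the p-th TL quantile from posterior draws
    (rows of draws are sampled (beta, lam) pairs)."""
    q = tl_quantile(p, draws[:, 0], draws[:, 1])
    return q.var()

def criterion_one(draws_cens, draws_full, p):
    """Assumed form: V_f(p) / V_R(p); nearer 1 means less information loss."""
    return quantile_var(draws_full, p) / quantile_var(draws_cens, p)

def criterion_two(draws_cens, draws_full, w=lambda p: 1.0, grid=None):
    """Weighted average of criterion one over p in (0, 1)."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    vals = np.array([w(p) * criterion_one(draws_cens, draws_full, p)
                     for p in grid])
    wts = np.array([w(p) for p in grid])
    return vals.sum() / wts.sum()           # Riemann-sum approximation
```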

Now, these criteria cannot be computed directly; therefore, we have used the LinA, combined with Monte Carlo simulation, to approximate them. The details of the approximation have been presented in Appendix A.

We have presented the optimum censoring schemes considering four different objective functions for selected combinations of n and m in Table 7. Selected quantile points p have been used for the calculation of the first criterion, while the values of the second criterion have been obtained under chosen weight functions. We have also reported the relative expected test time, RETT = ETT_cen/ETT_com, where ETT_cen and ETT_com are the expected test times under the censored sample and under the complete sample, respectively. Independent priors have been assumed for both of the parameters.

In the majority of the cases, the censoring schemes having all the removals at the time of the first failure provide the maximum information for the larger choices of p. This may be due to the reason that, for a larger choice of p, we are interested in the tail behavior of the TL distribution; the larger choice of p also suits the shape of the hazard rate of the TL distribution. It is also quite apparent that, for the smaller choices of p, the censoring schemes having failures away from the tails of the experiment provide more information than their counterparts. The RETT values also clarify that the censoring schemes having all or more failures at the start of the experiment provide the maximum information about the parameters under study. Hence, a larger choice of p may be recommended for the choice of the optimum censoring scheme when the failure times come from the TL distribution.

In addition, the optimum censoring schemes are not very sensitive; a small departure from the optimal censoring scheme does not change the efficiency to a large extent. For example, if we depart from the optimal censoring scheme to a neighboring scheme, the relative efficiency becomes 0.9645. Similar patterns can be observed for the other cases.

6. Real-Life Example

In this section, the data regarding the failure times (in mileage) of eighteen military carriers reported by Grubbs [39] have been used to illustrate the applicability of the proposed estimators. Bayoud [18] confirmed that these data follow the TL distribution by employing the Kolmogorov-Smirnov test. The data are as follows: 162, 200, 271, 302, 393, 508, 539, 629, 706, 777, 884, 1101, 1182, 1463, 1603, 1984, 2355, and 2880. For convenience, we have divided each value in the data by 10000. We have considered the following censoring schemes for the estimation.

CS1:, , ,

CS2:, , , ,

CS3:, , ,

CS4:, , ,

CS5:, , ,

CS6:, , , ,

CS7:, , ,

CS8:, , ,

The MLEs for β and λ have been computed using iterative methods. Using an initial guess of 0.50, we obtained the MLEs of the parameters β and λ as 0.6835 and 2.9761, respectively. From the results given in Tables 8–11, it can be assessed that the increase in the effective sample size results in decreased amounts of PRs. In the case of the estimation of the parameter β, the BEs under the SELF are better than those under the PLF, while for the estimation of the parameter λ, the PLF outperforms the SELF. Keeping in mind that the estimated values of the parameter β are always less than one and those of the parameter λ are always greater than one, these results can be perceived as a replication of the findings from the simulated results. The results using the TKA are again associated with the least amounts of the PRs, which also confirms the findings of the simulation study. In addition, the results based on the fuzzy priors are slightly better than those under the conventional priors.

On comparing the different censoring schemes, it has been found that the censoring schemes with all or more removals at the time of the first failure give the least amounts of posterior risks. In addition, as observed in the simulation study, the censoring schemes with all the removals at the time of the last failure have smaller posterior risks than those with more removals in the middle of the experiments. In short, most of the findings from the simulation study have also been replicated by the analysis of the real-life dataset.

7. Conclusion

This paper is aimed at discussing the BEs for the parameters of the TL distribution under PC samples. It has been observed that the BEs cannot be obtained in explicit form; therefore, we have proposed the QuM, LinA, TKA, and GiS for the approximate computation of the BEs and PRs. From the results, it has been assessed that the TKA has a slight advantage over the other approximation methods, as the amounts of the PRs using the TKA are the least among the estimates under all the approximation methods. The BEs under the fuzzy priors were slightly better than those under the conventional priors.

Different criteria for comparing censoring schemes, based on the information they contain, have been proposed. Based on these criteria, we have reported the optimal CSs with respect to different values of n and m. A future extension of this study could be the development of an algorithm for selecting the optimal censoring scheme for the TL distribution from among all possible censoring schemes, whose number is often large in practical situations.

Appendix

A. Approximation of the Optimality Criteria

For each criterion, we have V_R(p) = E(x_p² | data) − [E(x_p | data)]², and we have to approximate E(x_p | data) and E(x_p² | data) separately. For the estimation of E(x_p | data), we have considered u = x_p in the LinA with the following details.

The remaining quantities in the LinA will remain the same.

Similarly, in order to estimate E(x_p² | data), put u = x_p² in the LinA and consider all the other quantities the same.

In addition, to approximate the second criterion, we have to approximate the corresponding weighted integrals of E(x_p | data) and E(x_p² | data) over p.

These can be computed by considering the integrated quantities in place of x_p in all the above expressions. For different weight functions w(p), the integration can be carried out numerically; for example, considering the uniform weight function w(p) = 1, we have

Data Availability

The data used in the paper is available in the paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.