Research Article | Open Access
Rashad M. EL-Sagheer, Mustafa M. Hasaballah, "Inference of Process Capability Index for 3-Burr-XII Distribution Based on Progressive Type-II Censoring", International Journal of Mathematics and Mathematical Sciences, vol. 2020, Article ID 2412857, 13 pages, 2020. https://doi.org/10.1155/2020/2412857
Inference of Process Capability Index for 3-Burr-XII Distribution Based on Progressive Type-II Censoring
In this paper, we discuss the estimation of the process capability index C_py for the 3-Burr-XII distribution based on progressive type-II censoring. The maximum likelihood and Bayes methods are used to estimate the index C_py. The Fisher information matrix is used to construct approximate confidence intervals, and bootstrap confidence intervals (CIs) of the estimators are also obtained. The Bayesian estimates of the index are obtained by the Markov chain Monte Carlo (MCMC) method, and credible intervals are constructed from the MCMC samples. Two real datasets are analyzed using the proposed index.
Statisticians and quality control engineers in manufacturing industries often employ statistical process techniques to measure the capability of a manufacturing process and to quantify process behavior, so as to identify discrepancies between actual process performance and the desired specifications. These techniques include the process capability indices (PCIs), which compare the output of a process to the customer's specifications. The objective of a PCI is to provide a numerical indicator of whether or not a production process is able to produce products within the specification limits. These specifications are determined through the lower specification limit LSL, the upper specification limit USL, and the target value T. The most commonly used PCIs, Cp, Cpk, Cpm, and Cpmk, are based on the assumption that a given process may be described by a normal probability model with process mean μ and process standard deviation σ; see Juran, Kane, Chan et al., and Pearn et al. However, the assumption of normality is largely a simplifying one and is often invalid in different manufacturing and service processes; for more details, see Gunter. In fact, several PCIs have been studied in the literature under different conditions, valid for both normal and nonnormal output characteristics of processes; for more information, see Clements, Rodriguez, Polansky, Yeh and Bhattacharya, and Perakis and Xekalaki. In the recent past, Maiti et al. established a generalized PCI which is directly or indirectly connected to most of the PCIs described in the literature.
Furthermore, it covers normal and nonnormal as well as continuous and discrete random variables and is defined as C_py = p/p0 = [F(USL) − F(LSL)] / [F(UDL) − F(LDL)], where F is the CDF of the quality characteristic X, USL is the upper specification limit, LSL is the lower specification limit, LDL is the lower desirable limit, UDL is the upper desirable limit, p = F(USL) − F(LSL) is the process yield, and p0 = F(UDL) − F(LDL) is the desirable yield. If the process distribution is normal with mean μ and standard deviation σ, the generalized PCI can be written in terms of the standard normal CDF Φ as C_py = [Φ((USL − μ)/σ) − Φ((LSL − μ)/σ)]/p0. Huiming et al. proposed a Bayesian approach for estimating and testing a PCI from subsamples collected over time from an in-control process. Miao et al. discussed a Bayesian approach under the SE loss function for computing PCIs. Wu and Lin suggested one-sided lower Bayesian estimation of the index. Recently, Kargar et al. studied the Bayesian approach with a normal prior, based on subsamples, to assess process capability via a capability index. Maiti and Saha obtained the Bayesian estimate of the index under the SE loss function for normal, exponential, and Poisson process distributions. Mahmoud et al. studied inference on the lifetime performance index for the Lomax distribution based on progressive type-II censored data. Ali and Riaz discussed generalized PCIs from the Bayesian viewpoint under symmetric and asymmetric loss functions for simple and mixture generalized lifetime models. Saha et al. studied classical and Bayesian inference of the index for a generalized Lindley distributed quality characteristic. The rest of this paper is organized as follows. In Section 2, we develop the index C_py for the 3-Burr-XII distribution (TPBXIID). In Section 3, the maximum likelihood estimators (MLEs) of the unknown parameters of the TPBXIID, as well as of C_py, are studied, and approximate confidence intervals (ACIs) based on the MLEs are constructed. Bootstrap confidence intervals are discussed in Section 4.
In Section 5, MCMC techniques are used to obtain the Bayes estimates and to construct credible intervals (CRIs) of the index C_py under the squared error (SE) loss function for the TPBXIID. Two real datasets are analyzed for illustrative purposes in Section 6. In Section 7, a Monte Carlo simulation is performed to compare the efficiency of the proposed classical and Bayes estimators of the index in terms of their MSEs. Finally, Section 8 contains the conclusions.
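Before proceeding, note that computing C_py for any fitted model only requires evaluating its CDF at the specification limits. A minimal Python sketch of the definition above (the function names and the default desirable yield p0 = 0.9973, the usual 3-sigma normal yield, are our assumptions):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF expressed through the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def c_py(cdf, lsl, usl, p0=0.9973):
    """Generalized PCI of Maiti et al.: process yield p = F(USL) - F(LSL)
    divided by the desirable yield p0."""
    return (cdf(usl) - cdf(lsl)) / p0

# A centered normal process whose limits sit 3 sigma from the mean
# has yield ~0.9973, so C_py is ~1 (a "just capable" process).
cpy = c_py(lambda x: normal_cdf(x, 10.0, 0.5), lsl=8.5, usl=11.5)
```

Any CDF, empirical or parametric, can be passed in, which is what makes the index distribution-free.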
2. The Index for 3-Burr-XII Distribution
Burr introduced the Burr XII distribution, which is popularly used in reliability analysis as a more flexible alternative to the Weibull distribution; see Wingo [21, 22] and Zimmer et al. Its three-parameter Burr XII (TPBXIID) form is a generalisation of the log-logistic distribution; see Shao. The TPBXIID has the following CDF: F(x; α, β, λ) = 1 − [1 + (x/λ)^β]^(−α), x > 0, α, β, λ > 0.
Here, α and β are the shape parameters and λ is a scale parameter. It is important to note that when β = 1, the TPBXIID reduces to the Lomax distribution; when β > 1, the density function is upside-down bathtub shaped with mode at λ[(β − 1)/(αβ + 1)]^(1/β), and it is L-shaped when β ≤ 1.
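The CDF and the corresponding density can be coded directly. This short Python sketch assumes the parameterization F(x) = 1 − [1 + (x/λ)^β]^(−α), consistent with the Lomax reduction at β = 1 (the symbol names are ours):

```python
import math

def tpbxii_cdf(x, alpha, beta, lam):
    """CDF of the three-parameter Burr XII: F(x) = 1 - [1 + (x/lam)^beta]^(-alpha)."""
    if x <= 0:
        return 0.0
    return 1.0 - (1.0 + (x / lam) ** beta) ** (-alpha)

def tpbxii_pdf(x, alpha, beta, lam):
    """Density obtained by differentiating the CDF above."""
    if x <= 0:
        return 0.0
    z = (x / lam) ** beta
    return alpha * beta * z / (x * (1.0 + z) ** (alpha + 1.0))

# With beta = 1 the CDF collapses to the Lomax form 1 - (1 + x/lam)^(-alpha).
```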
3. ML Inference
Let x_(1) < x_(2) < ⋯ < x_(m) be a progressive type-II censored sample of size m from a sample of size n drawn from the TPBXIID, with censoring scheme R = (R_1, …, R_m). To obtain the maximum likelihood estimators of the unknown shape and scale parameters, the likelihood function is written as L(α, β, λ | x) = C ∏_{i=1}^{m} f(x_(i))[1 − F(x_(i))]^{R_i}, (5) where C = n(n − R_1 − 1)(n − R_1 − R_2 − 2) ⋯ (n − R_1 − ⋯ − R_{m−1} − m + 1). The log-likelihood function for the 3-Burr-XII distribution is, up to an additive constant, ℓ(α, β, λ) = m log(αβ) − mβ log λ + (β − 1) ∑_{i=1}^{m} log x_(i) − ∑_{i=1}^{m} [α(1 + R_i) + 1] log[1 + (x_(i)/λ)^β]. (6)
Taking the first derivatives of equation (6) with respect to α, β, and λ and setting each of them equal to zero, we obtain, with z_i = (x_(i)/λ)^β, ∂ℓ/∂β = m/β + ∑_{i=1}^{m} log(x_(i)/λ) − ∑_{i=1}^{m} [α(1 + R_i) + 1] z_i log(x_(i)/λ)/(1 + z_i) = 0, (7) and ∂ℓ/∂λ = −mβ/λ + (β/λ) ∑_{i=1}^{m} [α(1 + R_i) + 1] z_i/(1 + z_i) = 0. (8)
From (6), we obtain the MLE of α as α̂ = m / ∑_{i=1}^{m} (1 + R_i) log[1 + (x_(i)/λ̂)^β̂].
Since equations (7) and (8) cannot be expressed in closed form, the Newton–Raphson iteration method is used to compute the estimates; for more information, see EL-Sagheer. In addition, by the invariance property, after replacing α, β, and λ by their MLEs α̂, β̂, and λ̂, we can obtain the estimator of C_py as Ĉ_py = [F̂(USL) − F̂(LSL)]/p0, where F̂ is the TPBXIID CDF evaluated at (α̂, β̂, λ̂).
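The log-likelihood ℓ = Σ_i [log f(x_i) + R_i log(1 − F(x_i))] (plus a constant) can be evaluated directly, and the MLE of α has the closed form given above once β and λ are fixed. A Python sketch under the same assumed parameterization; in practice the function would be handed to a Newton–Raphson or quasi-Newton routine:

```python
import math

def log_likelihood(theta, xs, rs):
    """Progressive type-II log-likelihood for the TPBXIID, constant term dropped:
    sum_i [ log f(x_i) + R_i * log(1 - F(x_i)) ]."""
    alpha, beta, lam = theta
    ll = 0.0
    for x, r in zip(xs, rs):
        z = (x / lam) ** beta                      # z_i = (x_i / lambda)^beta
        log_pdf = (math.log(alpha * beta) + (beta - 1.0) * math.log(x)
                   - beta * math.log(lam) - (alpha + 1.0) * math.log1p(z))
        log_sf = -alpha * math.log1p(z)            # log survival = -alpha*log(1+z)
        ll += log_pdf + r * log_sf
    return ll

def alpha_hat(beta, lam, xs, rs):
    """Closed-form MLE of alpha for fixed beta and lambda (profile likelihood)."""
    s = sum((1 + r) * math.log1p((x / lam) ** beta) for x, r in zip(xs, rs))
    return len(xs) / s
```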
3.1. Approximate Confidence Interval
The asymptotic variance-covariance matrix of the MLEs of the parameters α, β, and λ is given by the inverse of the Fisher information matrix, whose elements are defined as I_ij = −E[∂²ℓ/∂θ_i ∂θ_j], i, j = 1, 2, 3, with (θ_1, θ_2, θ_3) = (α, β, λ).
However, exact mathematical expressions for the above expectations are very hard to obtain. Hence, the asymptotic variance-covariance matrix is approximated by the inverse of the observed information matrix, whose entries are the negative second partial derivatives of the log-likelihood evaluated at the MLEs (α̂, β̂, λ̂).
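In practice, the observed information can be computed with central second differences and then inverted. A generic numerical sketch (the step size h is an arbitrary choice of ours):

```python
def observed_information(neg_log_lik, theta_hat, h=1e-4):
    """Observed Fisher information: numerical Hessian of the negative
    log-likelihood at the MLE, via central second differences."""
    k = len(theta_hat)
    info = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            tpp = list(theta_hat); tpp[i] += h; tpp[j] += h
            tpm = list(theta_hat); tpm[i] += h; tpm[j] -= h
            tmp = list(theta_hat); tmp[i] -= h; tmp[j] += h
            tmm = list(theta_hat); tmm[i] -= h; tmm[j] -= h
            info[i][j] = (neg_log_lik(tpp) - neg_log_lik(tpm)
                          - neg_log_lik(tmp) + neg_log_lik(tmm)) / (4.0 * h * h)
    return info
```

Inverting this 3x3 matrix gives the approximate variances var(α̂), var(β̂), and var(λ̂) used below.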
Then, 100(1 − γ)% CIs for the parameters α, β, and λ are, respectively, given by α̂ ± z_{γ/2}√var(α̂), β̂ ± z_{γ/2}√var(β̂), and λ̂ ± z_{γ/2}√var(λ̂), where z_{γ/2} is the percentile of the standard normal distribution with right-tail probability γ/2. Furthermore, to construct the asymptotic confidence interval for C_py, which is a function of the parameters α, β, and λ, we need its variance. To approximate the variance of Ĉ_py, we use the delta method referred to in Green to compute ACIs for C_py. Based on this method, the variance of Ĉ_py can be approximated by var(Ĉ_py) ≈ ∇Ĉ′ I⁻¹(α̂, β̂, λ̂) ∇Ĉ, where ∇Ĉ is the gradient of C_py with respect to α, β, and λ evaluated at the MLEs. Thus, the ACI for C_py is given by Ĉ_py ± z_{γ/2}√var(Ĉ_py).
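The delta-method interval can be sketched generically: numerically differentiate the target function with respect to the parameters and sandwich the covariance matrix (function and variable names here are illustrative):

```python
import math
from statistics import NormalDist

def delta_method_ci(g, theta_hat, cov, gamma=0.05, h=1e-5):
    """ACI for a scalar function g(theta): Var(g) ~ grad' * Cov * grad,
    with a central-difference gradient, then g_hat +/- z_{gamma/2} * se."""
    k = len(theta_hat)
    grad = []
    for j in range(k):
        up = list(theta_hat); up[j] += h
        dn = list(theta_hat); dn[j] -= h
        grad.append((g(up) - g(dn)) / (2.0 * h))
    var = sum(grad[i] * cov[i][j] * grad[j] for i in range(k) for j in range(k))
    z = NormalDist().inv_cdf(1.0 - gamma / 2.0)   # upper gamma/2 normal quantile
    ghat, se = g(list(theta_hat)), math.sqrt(var)
    return ghat - z * se, ghat + z * se
```

Here `g` would be C_py as a function of (α, β, λ) and `cov` the inverted observed information.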
4. Bootstrap Confidence Intervals
In this section, we propose two confidence intervals based on bootstrapping. The two bootstrap methods commonly used in practice are as follows:(1)The percentile bootstrap (Boot-p) proposed by Efron(2)The bootstrap-t method (Boot-t) proposed by Hall
4.1. Boot-p Method
(1)Based on the original progressive type-II censored sample x = (x_(1), …, x_(m)), compute the MLEs of the parameters from equations (7)–(8) and (10).(2)Use these MLEs to generate a bootstrap sample x* with the same values of m and R_i, i = 1, …, m, using the algorithm presented in Balakrishnan and Sandhu.(3)Obtain a bootstrap sample by resampling with replacement.(4)As in Step 1, based on x*, compute the bootstrap estimate of C_py, say Ĉ*_py.(5)Repeat Steps 3 and 4 NBoot times to obtain Ĉ*_py(1), …, Ĉ*_py(NBoot).(6)Arrange these values in ascending order to obtain the ordered bootstrap sample Ĉ*_py[1], …, Ĉ*_py[NBoot].(7)Let G(z) = P(Ĉ*_py ≤ z) be the CDF of Ĉ*_py, and define Ĉ_py,Boot-p(z) = G^(−1)(z) for given z. The approximate 100(1 − γ)% Boot-p CI of C_py is given by (Ĉ_py,Boot-p(γ/2), Ĉ_py,Boot-p(1 − γ/2)).
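Steps 5–7 reduce to taking empirical quantiles of the replicated estimates. A Python sketch of the percentile step, with a toy bootstrap of a sample mean standing in for Ĉ_py (the data and replication count are made up):

```python
import math
import random

def boot_p_ci(estimates, gamma=0.05):
    """Percentile-bootstrap CI: order the bootstrap replicates and read off
    the gamma/2 and 1 - gamma/2 empirical quantiles."""
    s = sorted(estimates)
    n = len(s)
    lo = s[max(0, math.floor(gamma / 2 * n))]
    hi = s[min(n - 1, math.ceil((1 - gamma / 2) * n) - 1)]
    return lo, hi

# Toy usage: bootstrap the mean of a small data set (resampling with replacement).
random.seed(1)
data = [2.1, 2.4, 1.9, 2.2, 2.0, 2.6, 2.3]
reps = [sum(random.choices(data, k=len(data))) / len(data) for _ in range(2000)]
lo, hi = boot_p_ci(reps)
```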
4.2. Boot-t Method
(1)Perform Steps (1) to (4) of the Boot-p method.(2)Compute the statistic T* = (Ĉ*_py − Ĉ_py)/√var(Ĉ*_py), where var(Ĉ*_py) is obtained by using the Fisher information matrix.(3)Repeat Steps 1 and 2 NBoot times to obtain T*(1), …, T*(NBoot).(4)Arrange these values in ascending order to obtain the ordered sequence T*[1], …, T*[NBoot].(5)Let H(z) = P(T* ≤ z) be the CDF of T*. For a given γ, define Ĉ_py,Boot-t(z) = Ĉ_py + √var(Ĉ_py) H^(−1)(z). Then, the approximate Boot-t CI of C_py is given by (Ĉ_py,Boot-t(γ/2), Ĉ_py,Boot-t(1 − γ/2)).
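The Boot-t construction inverts the empirical distribution of the studentized replicates around the original estimate. A sketch of the final step (the studentized values would come from Step 2; here they are supplied directly):

```python
import math

def boot_t_ci(theta_hat, se_hat, t_stats, gamma=0.05):
    """Bootstrap-t CI: with T* = (theta* - theta_hat)/se(theta*), the interval is
    (theta_hat - t*_{1-gamma/2}*se_hat, theta_hat - t*_{gamma/2}*se_hat)."""
    ts = sorted(t_stats)
    n = len(ts)
    t_hi = ts[min(n - 1, math.ceil((1 - gamma / 2) * n) - 1)]   # upper-tail quantile
    t_lo = ts[max(0, math.floor(gamma / 2 * n))]                # lower-tail quantile
    return theta_hat - t_hi * se_hat, theta_hat - t_lo * se_hat
```

Unlike Boot-p, this interval adapts to the sampling variability of each replicate through its standard error.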
5. Bayes Estimation
In this section, we present the posterior densities of the parameters α, β, and λ based on progressive type-II censored data and then obtain the corresponding Bayes estimates of these parameters. In order to obtain the joint posterior density of α, β, and λ, we suppose that α, β, and λ are independently distributed with gamma(a_1, b_1), gamma(a_2, b_2), and gamma(a_3, b_3) priors, respectively. Consequently, the prior density functions of α, β, and λ become π_1(α) ∝ α^(a_1−1) e^(−b_1 α), π_2(β) ∝ β^(a_2−1) e^(−b_2 β), and π_3(λ) ∝ λ^(a_3−1) e^(−b_3 λ), where all the hyperparameters a_j and b_j, j = 1, 2, 3, are chosen to reflect prior knowledge about α, β, and λ. The joint prior distribution of α, β, and λ is π(α, β, λ) ∝ π_1(α)π_2(β)π_3(λ). (17)
The posterior distribution of the parameters α, β, and λ, up to proportionality, can be obtained by combining the likelihood function (5) with the joint prior (17) via Bayes' theorem, and it can be written as π*(α, β, λ | x) ∝ L(α, β, λ | x) π(α, β, λ). (18)
From equation (18), it may be observed that explicit forms of the marginal posterior distributions of the individual parameters are difficult to obtain. For this reason, we use the MCMC approximation method to produce samples from the joint posterior density in (18) and use these samples to calculate the Bayes estimates of α, β, and λ and of any function of them, such as C_py, as well as to construct the associated credible intervals. We consider the Gibbs-within-Metropolis sampler to implement the MCMC technique, which requires the complete set of conditional posterior distributions. Many papers have dealt with the MCMC technique, such as Chen and Shao and EL-Sagheer. It can be shown that, with z_i = (x_(i)/λ)^β, the conditional posterior density functions of β, λ, and α can be written, up to proportionality, as follows: π(β | α, λ, x) ∝ β^(m+a_2−1) e^(−b_2 β) ∏_{i=1}^{m} (x_(i)/λ)^β (1 + z_i)^(−[α(1+R_i)+1]), (19) π(λ | α, β, x) ∝ λ^(a_3−1−mβ) e^(−b_3 λ) ∏_{i=1}^{m} (1 + z_i)^(−[α(1+R_i)+1]), (20) and π(α | β, λ, x) ∝ α^(m+a_1−1) exp{−α[b_1 + ∑_{i=1}^{m} (1 + R_i) log(1 + z_i)]}. (21)
In this representation, the full conditional posterior of α given in (21) is a gamma density with shape parameter m + a_1 and rate parameter b_1 + ∑_{i=1}^{m} (1 + R_i) log[1 + (x_(i)/λ)^β], so samples of α can be easily generated using any gamma-generating routine. In addition, since the conditional posteriors of β and λ in (19) and (20), respectively, do not have standard forms, Gibbs sampling is not a straightforward choice, and it is appropriate to use the Metropolis–Hastings sampler to implement the MCMC technique; see Metropolis et al. Because of the forms of these conditional distributions, the following hybrid algorithm combines Gibbs sampling steps to update α with Metropolis–Hastings steps to update β and λ.
5.1. Metropolis-Hastings Algorithm
(1)Start with initial guesses of α, β, and λ, say α^(0), β^(0), and λ^(0), respectively, and let M be the burn-in period.(2)Set i = 1.(3)Generate α^(i) from Gamma(m + a_1, b_1 + ∑_{j=1}^{m} (1 + R_j) log[1 + (x_(j)/λ^(i−1))^(β^(i−1))]).(4)Using Metropolis–Hastings, generate proposals β* and λ* from the normal proposal distributions N(β^(i−1), var(β̂)) and N(λ^(i−1), var(λ̂)), where var(β̂) and var(λ̂) are obtained from the variance-covariance matrix.(i)Calculate the acceptance probabilities ρ_β = min{1, π(β* | α^(i), λ^(i−1), x)/π(β^(i−1) | α^(i), λ^(i−1), x)} and ρ_λ = min{1, π(λ* | α^(i), β^(i), x)/π(λ^(i−1) | α^(i), β^(i), x)}.(ii)Generate u_1 and u_2 from a uniform (0, 1) distribution.(iii)If u_1 ≤ ρ_β, accept the proposal and set β^(i) = β*; else set β^(i) = β^(i−1).(iv)If u_2 ≤ ρ_λ, accept the proposal and set λ^(i) = λ*; else set λ^(i) = λ^(i−1).(5)Calculate C_py^(i) at (α^(i), β^(i), λ^(i)) from the expression for C_py.(6)Set i = i + 1.(7)Repeat Steps 3–6 N times to obtain α^(i), β^(i), λ^(i), and C_py^(i), i = 1, …, N. In order to guarantee convergence and to remove the effect of the initial values, the first M simulated variates are discarded. The retained samples α^(i), β^(i), λ^(i), and C_py^(i), i = M + 1, …, N, for sufficiently large N, form an approximate posterior sample which can be used to develop the Bayesian inference. The approximate Bayes estimate of C_py under the SE loss function is given by C̃_py = (1/(N − M)) ∑_{i=M+1}^{N} C_py^(i).(8)To calculate the CRIs of C_py, order C_py^(M+1), …, C_py^(N) as C_py[1], …, C_py[N−M]. Then, the 100(1 − γ)% CRI of C_py becomes (C_py[(N−M)γ/2], C_py[(N−M)(1−γ/2)]).
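The Metropolis–Hastings update in Step 4 is a standard random-walk accept/reject. The sketch below isolates that building block for a single parameter with a generic log-posterior (the target, proposal scale, and iteration counts are arbitrary choices of ours):

```python
import math
import random

def metropolis_hastings(log_post, x0, prop_sd, n_iter, burn_in):
    """Random-walk Metropolis-Hastings with a normal proposal: the accept/reject
    building block used within Gibbs to update beta and lambda."""
    x, lp, chain = x0, log_post(x0), []
    for i in range(n_iter):
        cand = random.gauss(x, prop_sd)            # symmetric normal proposal
        lp_cand = log_post(cand)
        # Accept with probability min(1, target(cand)/target(x)).
        if random.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        if i >= burn_in:                           # discard burn-in draws
            chain.append(x)
    return chain

# Sanity check on a known target: N(3, 1); the posterior mean should be near 3.
random.seed(7)
chain = metropolis_hastings(lambda t: -0.5 * (t - 3.0) ** 2, 0.0, 1.0, 12000, 2000)
posterior_mean = sum(chain) / len(chain)
```

In the full algorithm, `log_post` would be the log of (19) or (20) with the other parameters held at their current values.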
6. Applications to Real Life Data
In this section, we present two examples to illustrate the computations of the methods proposed in this article using two different real datasets.
Dataset I. We chose this real dataset from Leiva et al. and added 2 to the data. The quality characteristic in this dataset is ball size (in millimeters), and the process has been monitored with a USL and an LSL for this quality characteristic given in mil (converted to mm). The data are given as follows:
Dataset II. This dataset consists of the first failure times (in months) of 20 electric carts used in a large manufacturing facility for internal transport and distribution. Here, the hypothetical LSL and the hypothetical USL have been set, and the details are given in Zimmer et al. The data are as follows:
We used the Kolmogorov–Smirnov (K-S) test to check whether the data follow the TPBXIID. The calculated K-S statistics for dataset I and dataset II are smaller than their corresponding critical values at the 5% significance level, so it can be concluded that the TPBXIID fits these data very well. We have also plotted the empirical and fitted CDFs for dataset I and dataset II in Figures 1 and 2, respectively; these also indicate that the TPBXIID can be a good fitting model for these data. From dataset I, presented by Leiva et al., we generate a progressive type-II censored sample of size m taken from the full sample of size n under a chosen censoring scheme R. The progressive type-II censored sample generated from the real dataset I is given as follows.
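The K-S fit check computes D = sup_x |F_n(x) − F(x)| and compares it with the 5% critical value. A self-contained Python sketch of the statistic (the data here are synthetic):

```python
def ks_statistic(data, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup_x |F_n(x) - F(x)|,
    evaluated at the order statistics where the empirical CDF jumps."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, (i + 1) / n - fx, fx - i / n)   # just after / just before the jump
    return d

# Evenly spread data against the Uniform(0,1) CDF give a small D.
sample = [0.05 * k for k in range(1, 20)]           # 0.05, 0.10, ..., 0.95
d_stat = ks_statistic(sample, lambda x: min(max(x, 0.0), 1.0))
```

For the datasets above, `cdf` would be the fitted TPBXIID CDF at the MLEs.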
Similarly, we generate a progressive type-II censored sample of size m taken from the full sample of size n under a chosen censoring scheme R based on dataset II from Zimmer and Hubele. The progressive type-II censored sample produced from the real dataset II is obtained as follows:
The descriptive statistics for the considered datasets are reported in Table 1. For these datasets, based on progressive type-II censoring, we computed the point estimates of C_py using the ML and bootstrap methods; the results are shown in Table 2. We also determined the 95% CIs based on the MLEs and the 95% bootstrap (Boot-p and Boot-t) CIs of C_py; the results are displayed in Table 3. Next, we calculate the Bayes estimates of C_py under the SE loss function. Since we do not have prior information about the unknown parameters, we assume noninformative gamma priors for α, β, and λ; this prior corresponds to a specific noninformative choice of the hyperparameters a_j and b_j, j = 1, 2, 3. We perform the MCMC algorithm described in Section 5 to generate a sequence of 10,000 random vectors iteratively with different starting points for the parameters α, β, and λ and discard the first 1000 values as burn-in. The resulting Bayes estimates are reported in Table 2, and the 95% CRIs are shown in Table 3. The MCMC results for the posterior mean, median, mode, standard deviation (SD), and skewness (Sk) of C_py are shown in Table 4.
In this section, a Monte Carlo simulation study is implemented to compare the performances of the classical estimation methods and the Bayesian estimation approach, for prior-0 and prior-I under the SE loss function, for the index C_py of the TPBXIID. The simulation was carried out for different values of n and m and different choices of the parameters (α, β, λ). Two different priors are used for the Bayesian computation in order to compare the Bayes estimates: (a) a noninformative gamma prior (prior-0), with the corresponding noninformative hyperparameter values, and (b) an informative gamma prior (prior-I), for which we arbitrarily selected the hyperparameter values a_j and b_j for the different parameter sets. We applied the MCMC method using 10,000 MCMC samples, discarding the first 1000 values as burn-in, under the SE loss function. We compare the performances of the MLEs and Bayes estimates in terms of the MSE, which is calculated as MSE(Ĉ_py) = (1/K) ∑_{k=1}^{K} (Ĉ_py^(k) − C_py)², where K is the number of simulation replications and Ĉ_py^(k) is the estimate obtained in the kth replication.
We have used two different sampling schemes, Scheme I and Scheme II, each specifying the censoring numbers R_i, i = 1, …, m. Point (classical as well as Bayesian) estimates of C_py for the TPBXIID are displayed in Tables 5 and 6. Also, the 95% CIs based on the MLEs and the 95% bootstrap (Boot-p and Boot-t) CIs of C_py were determined; the results are summarized in Tables 7 and 8. Finally, the results of the 95% CRIs are given in Tables 9 and 10.
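The progressive type-II samples used throughout this study (for both the simulation schemes and the bootstrap) can be generated with the Balakrishnan–Sandhu algorithm cited in Section 4: draw uniforms W_i, set V_i = W_i^(1/(i + R_m + ⋯ + R_{m−i+1})), U_i = 1 − V_m V_{m−1} ⋯ V_{m−i+1}, and invert the target CDF. A Python sketch (the TPBXIID quantile assumes our earlier parameterization, and the example scheme is made up):

```python
import random

def progressive_type2_uniform(scheme):
    """Balakrishnan-Sandhu algorithm: progressive type-II censored order
    statistics U_(1) < ... < U_(m) from Uniform(0,1) for scheme (R_1,...,R_m)."""
    m = len(scheme)
    w = [random.random() for _ in range(m)]
    # V_i = W_i^(1 / (i + R_m + R_{m-1} + ... + R_{m-i+1})), i = 1..m
    v = [w[i] ** (1.0 / (i + 1 + sum(scheme[m - i - 1:]))) for i in range(m)]
    u, prod = [], 1.0
    for i in range(m):
        prod *= v[m - 1 - i]          # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
        u.append(1.0 - prod)
    return u

def tpbxii_quantile(u, alpha, beta, lam):
    """Inverse of the assumed TPBXIID CDF F(x) = 1 - [1 + (x/lam)^beta]^(-alpha)."""
    return lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0) ** (1.0 / beta)

random.seed(2)
scheme = [1, 0, 2, 0, 1]              # m = 5 observed, n = m + sum(R_i) = 9
sample = [tpbxii_quantile(u, 2.0, 1.5, 1.0) for u in progressive_type2_uniform(scheme)]
```

Mapping the uniform order statistics through the quantile function yields a censored sample from any target distribution, which is exactly how the bootstrap resamples in Section 4 are produced.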