Abstract

The industrial revolution has led to the manufacturing of multicomponent products; to guarantee product reliability and consumer satisfaction, producers have to study the lifetimes of their products. This has led to the use of bivariate and multivariate lifetime distributions in reliability engineering. Among the most popular and widely applicable is the Marshall–Olkin family of distributions. In this paper, a new bivariate lifetime distribution, the bivariate inverted Kumaraswamy (BIK) distribution, is derived and its properties are studied. Estimation using both maximum likelihood and Bayesian approaches is carried out. Using different selection criteria, it is found that BIK provides the best performance compared with other bivariate distributions such as the bivariate exponential and bivariate inverse Weibull distributions. As a generalization, the multivariate inverted Kumaraswamy (MIK) distribution is derived. Few studies have been conducted on multivariate Marshall–Olkin lifetime distributions and, to the best of our knowledge, none of them handles the estimation process. In this paper, we develop an algorithm that shows how to estimate the unknown parameters of MIK using both maximum likelihood and Bayesian approaches. This algorithm could be applied to estimate other Marshall–Olkin multivariate lifetime distributions.

1. Introduction

Global competition, combined with the use of advanced manufacturing technologies, results in products with two or more components. This has led to the use of bivariate and multivariate distributions in reliability engineering. Different families of distributions have been constructed. One of the most commonly used is the Marshall–Olkin (MO) family. It is widely used due to its flexibility in covering different failure situations (i.e., the lifetime of the first component may be smaller than, greater than, or equal to that of the other components).

In the literature, several lifetime distributions have been derived as members of the bivariate Marshall–Olkin family. Marshall and Olkin [1] presented a bivariate exponential distribution with exponential marginals and the loss-of-memory property. Using the same strategy, Kundu and Dey [2], Barreto-Souza and Lemonte [3], Muhammed [4], and Alqallaf and Kundu [5] introduced the bivariate Weibull, bivariate Kumaraswamy, bivariate generalized Burr, and bivariate inverse generalized exponential distributions, respectively. Using the maximum instead of the minimum in the MO scheme, Kundu and Gupta [6, 7] introduced the bivariate generalized exponential and bivariate proportional reversed hazard distributions, respectively. Moreover, Sarhan et al. [8] presented the bivariate generalized linear failure rate distribution. Recently, Muhammed [9] introduced the bivariate inverse Weibull (BIW) distribution.

Sometimes, the use of bivariate distributions may not be sufficient, and there is a need for multivariate distributions. For example, for air fighter jets, the natural lifetime since manufacture and the flying time since entering service are recorded. Studying the reliability of air fighter jets using only two variables may not be good enough; one should also take into consideration the lifetimes of the engine, the wing, and the fuselage. This leads to the use of multivariate distributions. For more details, see Li and Li [10]. Not much work has been done in the multivariate case. Sarhan et al. [8] and Kundu and Gupta [11] derived the multivariate generalized linear failure rate and multivariate inverse Weibull distributions, respectively. To the best of our knowledge, there is no work dealing with estimating the unknown parameters of the multivariate Marshall–Olkin family.

Several authors have tackled the estimation problem for bivariate MO distributions. For example, Kundu and Gupta [6], Muhammed [9], Aly et al. [12], Eliwa and El-Morshedy [13], El-Morshedy et al. [14], and Sarhan [4, 5, 15] estimated the unknown parameters using the maximum likelihood approach for different bivariate lifetime distributions. On the other hand, Hanagal and Ahmadi [16], Kundu and Gupta [17], and Lin et al. [11, 13–15, 18] applied the Bayesian approach to estimate certain bivariate lifetime distributions.

The univariate inverted Kumaraswamy (IK) distribution has several applications in different fields (see Abd Al-Fattah et al. [19] and Abdul Hammed et al. [20]), for example, in medical research, life testing problems, and stress-strength analysis. Also, in reliability and biological studies, the IK distribution may be used to model failure rates. Due to its expected wide applicability, we are interested in deriving the bivariate inverted Kumaraswamy (BIK) distribution. BIK could be applied in different fields such as sports, engineering, and medicine, as will be illustrated using three different real datasets. We expect better performance of BIK than other bivariate distributions. To the best of our knowledge, this distribution has not been derived before, nor have its mathematical properties been studied.

The main purpose of this paper is to introduce BIK as a new Marshall–Olkin bivariate distribution that can be applied efficiently in several fields. As a generalization, the multivariate inverted Kumaraswamy (MIK) distribution is derived. Since, to the best of our knowledge, there is no work dealing with estimating the unknown parameters of the multivariate Marshall–Olkin family, estimation of the MIK parameters is carried out here using both maximum likelihood and Bayesian approaches. This work could be applied to all Marshall–Olkin multivariate distributions.

The paper is organized as follows. In Section 2, the bivariate inverted Kumaraswamy distribution is derived, and its cumulative distribution function and probability density function are presented. Also, the marginal and conditional distributions of the proposed model are obtained. Moreover, the product moments and the moment generating function are derived. In Section 3, the maximum likelihood estimators of the model parameters, the asymptotic Fisher information matrix, and the Bayesian estimators are obtained. The multivariate inverted Kumaraswamy distribution and its properties are presented in Section 4. The maximum likelihood and Bayesian estimators of the parameters in the multivariate case are obtained in Section 5. Numerical analyses using both simulations and real datasets are presented in Section 6. Finally, the paper is concluded in Section 7.

2. Bivariate Inverted Kumaraswamy Distribution

In this section, we will derive the bivariate inverted Kumaraswamy distribution as a new member in the MO family. Its properties such as the marginal and conditional distributions, joint moment generating function, and product moments are studied.

2.1. Derivation of the Bivariate Inverted Kumaraswamy Distribution

The probability density function (pdf) and the cumulative distribution function (cdf) of the univariate inverted Kumaraswamy distribution (IK), respectively, are as follows (for more details, see [19]):

Assume that , are three random variables, such that follows IK (). Define and . Hence, the bivariate vector (X1, X2) follows BIK with shape parameters . The joint cdf of () has the following form:where . The joint cdf can also be written as follows:where is as illustrated in equation (2).
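For concreteness, the construction can be sketched as follows; this assumes the standard IK cdf from [19], and the parameterization is our reading of the elided formulas, so the exponents should be checked against equations (1) and (2):

```latex
% Univariate IK(\alpha, \beta) cdf (from [19]), x > 0:
F_{\mathrm{IK}}(x; \alpha, \beta) = \bigl(1 - (1 + x)^{-\alpha}\bigr)^{\beta}.

% With independent U_i \sim \mathrm{IK}(\alpha, \beta_i), i = 1, 2, 3, define
% X_1 = \max(U_1, U_3), \qquad X_2 = \max(U_2, U_3).

% Then, for z = \min(x_1, x_2), the joint cdf factorizes as
F_{X_1, X_2}(x_1, x_2)
  = F_{\mathrm{IK}}(x_1; \alpha, \beta_1)\,
    F_{\mathrm{IK}}(x_2; \alpha, \beta_2)\,
    F_{\mathrm{IK}}(z; \alpha, \beta_3).
```

The factorization holds because, for powers of a common base cdf, componentwise maxima multiply the exponents, which is the Marshall–Olkin maximum scheme of [6, 7].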

Proposition 1. The joint pdf of has the following form:wherein which is as illustrated in equation (1).

Proof. See Appendix A.1.
The joint pdf of can be expressed as a mixture of an absolutely continuous part and a singular part as follows.

Proposition 2. If follows BIK (), thenwherewhere and are the singular and absolute parts, respectively.

Proof. See Appendix A.1.

Corollary 1. The joint pdf of can be written as follows:where

The absolutely continuous part of BIK can be unimodal depending on the values of , and α. is unimodal if (under the case ) or (under the case ). The respective modes are

2.2. Properties of Bivariate Inverted Kumaraswamy Distribution

In this section, we illustrate different properties of the BIK distribution. We provide the marginal and conditional distributions, the joint moment generating function, and the product moments.

Proposition 3 (Marginal and conditional distributions). If follows BIK (), then(a) follows () and follows ().(b)max {} follows ().(c)

Proof. See Appendix A.1.

Proposition 4 (Moment generating function). If follows BIK (), then the joint moment generating function of and is given by wherewhere , , p and q are nonnegative integers, and B (.,.) is beta function.

Proof. See Appendix A.1.

Proposition 5 (Product moments). If follows BIK (), then the product moments of and are given by where

Proof. See Appendix A.1.

3. Estimation of Bivariate Inverted Kumaraswamy Distribution

In this section, we estimate the unknown parameters using both maximum likelihood and Bayesian approaches.

3.1. Using Maximum Likelihood Approach

In this section, we derive the maximum likelihood (ML) estimators of the unknown parameters of BIK distribution. Suppose {(), …, ()} is a random sample of size n from BIK (); then, the ML estimators of the unknown parameters are obtained as follows.

The log likelihood function of the sample of size n is given by wherewhere denotes the cardinality of the set , for .

Thus,where .

The first derivatives of the log likelihood with respect to the unknown parameters and also the observed Fisher information matrix are obtained in Appendix A.2. The ML estimates of are numerically obtained in Section 6.

3.2. Using Bayesian Approach

Let () follow BIK (), where is the vector of unknown parameters. The posterior pdf of parameters can be obtained as follows:where is the prior distribution.

Pena and Gupta [21] considered Bayesian estimation of the parameters for Marshall–Olkin bivariate exponential distribution (BVE), in series and parallel systems. They obtained posterior mode using gamma Dirichlet distribution as prior distribution. Angali et al. [22] considered Bayesian estimation for BVE using gamma prior.

Similar to [22], we considered a gamma prior distribution with the following pdf:where and are the hyperparameters. The posterior pdf has the following form:

It is observed that Bayesian estimators under square error loss function cannot be obtained in explicit forms. Therefore, we obtain the posterior mean using MCMC technique which is illustrated in Section 6.

4. Multivariate Inverted Kumaraswamy Distribution

In the literature, not much work has dealt with multivariate MO distributions. Sarhan et al. [8] derived the multivariate generalized linear failure rate distribution. Kundu and Gupta [11] derived the multivariate inverse Weibull distribution. Here, we introduce a new multivariate MO distribution, the multivariate inverted Kumaraswamy (MIK) distribution. It is a generalization of the BIK distribution considered in Section 2. We expect that MIK will be of great importance in several applied fields. In the following two sections, the derivation of MIK is explained and some of its properties are studied.

4.1. Derivation of MIK

In this section, we will derive the cdf and pdf of MIK.

Let be independent random variables such that for . Define , . Then, is an m-variate IK with parameters (), and it will be denoted by MIK . We have the following results regarding MIK distribution.

Proposition 6. If , then the joint cdf of for iswhere and .

Proof. The proof follows by generalizing the method illustrated in Section 2.
Similar to the bivariate case, the MIK distribution can be written as, for ,where and and denote the absolutely continuous and singular parts, respectively. The corresponding pdf can also be written asThe absolutely continuous part of can be obtained fromwhere belongs to the set on which is absolutely continuous, if and only if are all different. For each where are different, there exists a permutation such that .
Define . Then, for , , where can be obtained as follows:whereSincethenand for all , .
Now, let such that . can be written aswhere is a pdf with respect to dimensional Lebesgue measure on the hyperplane . The exact meaning of is as follows.
For any Borel measurable set ,where is the projection of set onto dimensional hyperplane .
Now, we provide and . Note that if , then has the following form:For a given , we define from the dimensional hyperplane to ℜ as follows:if for and zero otherwise. Similar to , we can obtain as follows:

4.2. Properties of MIK

In this section, we obtain the marginal and conditional distributions of MIK.

Proposition 7. If then(a)(b)The conditional distribution of given iswhere , .(c)For .(d)If , then .

Proof. (a, c) By taking the limit of the joint cdf. (b, d) These follow directly from the definition of the multivariate inverted Kumaraswamy distribution.

5. Estimation of the Multivariate Inverted Kumaraswamy Distribution

Although estimating the unknown parameters of a multivariate MO distribution is very important, it has not been addressed in the literature. Therefore, in this section, we consider the estimation of the MIK parameters. The proposed techniques could be applied to any multivariate MO distribution. Here, we apply both the maximum likelihood and Bayesian approaches.

5.1. Using Maximum Likelihood Approach

In this section, for simplicity, we consider the case of three random variables , and . Applying Proposition 6, we have the following cdf:

The cdf can be rewritten aswhere

The pdf can be obtained by taking the derivative, except for where we should take into consideration that the sum of all probabilities equals one.

Now, suppose is a random sample of size from ; the problem is to find the ML estimators of the unknown parameters. Consider the following notation:where denotes the cardinality of the set , for .

The log likelihood function of the sample of size is given bywhere .

It is seen that the ML estimates cannot be obtained in explicit form; hence, numerical methods are needed to obtain them.

5.2. Using Bayesian Approach

Let be three random variables from MIK (), where is the vector of unknown parameters. The posterior pdf can be obtained as follows:where is the prior distribution.

We considered a gamma prior distribution with the following pdf:where and are the hyperparameters. The posterior pdf has the following form:

As in the bivariate case, Bayesian estimation will be obtained numerically using MCMC which is illustrated in the next section.

6. Numerical Analysis

In this section, both simulation and MCMC techniques are carried out to investigate the performance of the derived BIK and MIK distributions. Estimation is performed using both the ML and Bayesian approaches. Three real datasets are analyzed for the BIK distribution and one for the MIK distribution.

6.1. For BIK Distribution

In this section, we perform a simulation study to get the estimates of the unknown parameters of BIK distribution. Also, three real datasets are analyzed.

6.1.1. A Simulation Study

Here, we present an algorithm to generate from the BIK distribution (Algorithm 1). To perform a simulation study, we first need to select initial values for the parameters. Here, the following eight different populations are considered:(i)Case 1: (ii)Case 2: (iii)Case 3: (iv)Case 4: (v)Case 5: (vi)Case 6: (vii)Case 7: (viii)Case 8:

The following algorithm is to generate from BIK distribution.
Step 1: generate , and from uniform (0, 1).
Step 2: compute , and .
Step 3: define , and
Step 4: obtain .
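The four steps can be sketched in code; this is a minimal sketch assuming the standard IK cdf from [19], so the quantile transform and the max construction are our reading of the elided formulas, and the function names are illustrative:

```python
import numpy as np

def ik_quantile(v, alpha, beta):
    """Inverse of the assumed IK cdf F(x) = (1 - (1 + x)^(-alpha))^beta, x > 0."""
    return (1.0 - v ** (1.0 / beta)) ** (-1.0 / alpha) - 1.0

def rbik(n, beta1, beta2, beta3, alpha, rng=None):
    """Generate n pairs from BIK(beta1, beta2, beta3, alpha) via the MO-max scheme."""
    rng = np.random.default_rng(rng)
    # Step 1: three independent uniform(0, 1) samples
    v1, v2, v3 = rng.uniform(size=(3, n))
    # Step 2: transform to IK variates U_i ~ IK(alpha, beta_i)
    u1 = ik_quantile(v1, alpha, beta1)
    u2 = ik_quantile(v2, alpha, beta2)
    u3 = ik_quantile(v3, alpha, beta3)
    # Steps 3-4: componentwise maxima with the shared component U_3
    return np.maximum(u1, u3), np.maximum(u2, u3)

x1, x2 = rbik(1000, 2.0, 2.0, 1.0, 3.0, rng=42)
```

The shared component `u3` is what produces the singular part of the distribution: whenever it dominates both of the other variates, the pair lands exactly on the diagonal.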

The parameters are selected to cover different shapes of the distribution. It can be seen from Figure 1 that(i)for cases 1 and 2, the surface plot of the absolutely continuous part of the joint probability density function is decreasing;(ii)for cases 3 to 8, the surface plot of the absolutely continuous part of the joint probability density function increases until it reaches the mode and then decreases.

(1) Maximum Likelihood Approach. The maximum likelihood estimates of the model parameters are obtained by maximizing the log likelihood function given by equation (15). Monte Carlo simulation is performed in R with 1000 replications, three different sample sizes , and , and eight different populations.

Absolute bias (ABias), mean square error (MSE), confidence width (CW), and coverage probability (CP) are obtained and presented in Table 1. The numerical steps and the corresponding equations are explained in detail in Appendix B.1.

(2) Bayesian Approach. Using the Bayesian approach under the square error loss function, the Bayesian estimator is the posterior mean. However, it is hard to obtain the posterior mean analytically, as we have four parameters to estimate. One can use the Markov chain Monte Carlo (MCMC) simulation method to obtain it numerically.

The MCMC method uses simulation techniques to obtain a Markov sequence with a known limiting distribution. In the Bayesian approach, the limiting distribution is the posterior pdf, as it contains all the needed information about the parameters .

Here, the MCMC method can be used to set up a Markov chain of parameters with distribution . The mean of the sequence can be considered as the posterior mean.
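To illustrate the idea on a simpler case, the following is a minimal random-walk Metropolis sketch for a single IK shape parameter with a gamma prior (as in Section 3.2), holding the other shape parameter fixed; the IK pdf used is the standard one from [19], and all function names are illustrative, not the paper's implementation:

```python
import numpy as np

def ik_logpdf(x, alpha, beta):
    """Log of the assumed IK pdf f(x) = a*b*(1+x)^-(a+1) * (1-(1+x)^-a)^(b-1)."""
    t = (1.0 + x) ** (-alpha)
    return (np.log(alpha) + np.log(beta)
            - (alpha + 1.0) * np.log1p(x) + (beta - 1.0) * np.log1p(-t))

def log_posterior(beta, x, alpha, a, b):
    """Gamma(a, b) prior (shape a, rate b) on beta; alpha held fixed."""
    if beta <= 0:
        return -np.inf
    return ik_logpdf(x, alpha, beta).sum() + (a - 1.0) * np.log(beta) - b * beta

def posterior_mean_mcmc(x, alpha, a=1.0, b=1.0, n_iter=5000, step=0.3, seed=0):
    """Random-walk Metropolis on log(beta); the posterior mean is the
    average of the chain after discarding the first half as burn-in."""
    rng = np.random.default_rng(seed)
    beta, chain = 1.0, []
    for _ in range(n_iter):
        prop = beta * np.exp(step * rng.normal())
        # multiplicative proposal: include the log-scale Jacobian log(prop/beta)
        log_acc = (log_posterior(prop, x, alpha, a, b)
                   - log_posterior(beta, x, alpha, a, b)
                   + np.log(prop) - np.log(beta))
        if np.log(rng.uniform()) < log_acc:
            beta = prop
        chain.append(beta)
    return float(np.mean(chain[n_iter // 2:]))
```

For the four-parameter BIK posterior, the same scheme would update each parameter in turn with the full BIK log likelihood in place of `ik_logpdf`.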

To perform MCMC, we used both R and WinBugs packages. Gamma prior is used with the same three sample sizes and eight populations used in ML approach. The R package with 1000 replications is used, and for each replication, WinBugs is used with 1000 replications to generate the sequence of Markov chain.

We used the Geweke test to examine the convergence of the generated Markov chain sequence. The Geweke statistic converges to a standard normal distribution for large sample sizes; hence, large absolute values of are taken as evidence against convergence. Only the converged sequences are used in the analysis. For more details about the Geweke test, see [16].
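As a sketch, the diagnostic can be computed as follows (assuming the usual window choices of the first 10% and last 50% of the chain; naive variance estimates are used here, whereas the original test estimates the variances from spectral densities):

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    """Geweke z-score comparing the means of the first and last segments
    of a chain (sketch with naive variance estimates)."""
    chain = np.asarray(chain, dtype=float)
    n = chain.size
    a = chain[: int(first * n)]
    b = chain[int((1.0 - last) * n):]
    se2 = a.var(ddof=1) / a.size + b.var(ddof=1) / b.size
    return float((a.mean() - b.mean()) / np.sqrt(se2))
```

Chains with |z| > 1.96 are rejected at the 5% level, matching the screening rule used in the simulations.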

ABias, MSE, confidence length (CL), and CP are obtained and presented in Table 1. The numerical steps and the corresponding equations are explained in detail in Appendix B.1.

From Table 1, it can be seen that under different combinations of the parameters and for different sample sizes, ABias and MSE are relatively small. This indicates that both Bayesian and ML approaches work efficiently in estimating the parameters of BIK.

Comparing the ML and Bayesian estimates, it is found that the Bayesian estimates have mean square errors (MSEs) less than or equal to those of the ML estimates. This is clear from Figure 2.

Also, it can be seen that as the sample size increases, the ABias, MSE, CW, and CL decrease for both the ML and Bayesian approaches, as seen from Figure 3. Moreover, for most cases, the CP is around 0.95.

6.1.2. Real Datasets

Here, we analyze three real datasets to show the applicability of BIK in several fields like sports, engineering, and medicine.

(1) Football Data. The dataset has been obtained from Meintanis [23], who used the bivariate MO exponential distribution (BE) to analyze it. The data concern football (soccer) matches in which at least one goal was scored by the home team and at least one goal was scored directly from a penalty kick, a foul kick, or any other direct kick (collectively called kick goals) by either team. The variables are the time in minutes of the first kick goal scored by either team () and the time of the first goal of any type scored by the home team ().

The bivariate dataset has the following structure:  < ,  = , and  > . Since  has positive probability, we need a distribution with a singular part to analyze this dataset. Here, we analyze the data using the BIK distribution defined by equation (4). All the data points have been divided by 100; this does not make any difference in the statistical inference.

First, before analyzing the data, we fit the inverted Kumaraswamy distribution to , and max (). To guess initial values for the parameters of the BIK model, the MLEs of the shape parameters of the respective inverted Kumaraswamy distributions for , and max () are obtained. To check the model's fit, we first need to introduce goodness-of-fit tests.

Goodness-of-fit (GOF) tests are hypothesis tests regarding the distribution of some random variable in a population. The objective of applying GOF tests is to measure how well the data agree with a given distribution as its population. For example, if we want to examine if the random variable follows distribution , then the null hypothesis is

One approach for applying GOF tests is based on the empirical distribution function (EDF) which is defined as follows:

This approach is based on defining a statistic to measure the discrepancy between and . The most commonly used statistics are the modified Cramér–von Mises statistic and the Anderson–Darling statistic, which have the following formulas:where is the value of the order statistic in the sample.

Large values of the test statistics (or small corresponding p values) lead to rejection of the null hypothesis. For more details about GOF tests, see D'Agostino and Stephens [24].
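The two statistics can be sketched as follows (the unmodified textbook forms; [24] also gives small-sample modifications, and the IK cdf parameterization in the example is assumed from [19]):

```python
import numpy as np

def cramer_von_mises(x, cdf):
    """W^2 = 1/(12n) + sum_{i=1}^n (F(x_(i)) - (2i - 1)/(2n))^2."""
    u = np.sort(cdf(np.asarray(x, dtype=float)))
    n = u.size
    i = np.arange(1, n + 1)
    return 1.0 / (12.0 * n) + float(np.sum((u - (2.0 * i - 1.0) / (2.0 * n)) ** 2))

def anderson_darling(x, cdf):
    """A^2 = -n - (1/n) sum_{i=1}^n (2i - 1)[ln F(x_(i)) + ln(1 - F(x_(n+1-i)))]."""
    u = np.sort(cdf(np.asarray(x, dtype=float)))
    n = u.size
    i = np.arange(1, n + 1)
    return float(-n - np.mean((2.0 * i - 1.0) * (np.log(u) + np.log(1.0 - u[::-1]))))

# Example hypothesized cdf: the assumed IK form from [19]
ik_cdf = lambda x, alpha=3.0, beta=2.0: (1.0 - (1.0 + x) ** (-alpha)) ** beta
```

Both functions take the data and the hypothesized cdf, so they can be applied directly to the marginals and the maximum with fitted IK parameters.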

Here, we apply GOF tests in order to see whether the fit based on univariate inverted Kumaraswamy distributions is reasonable for this dataset. We computed the modified Cramér–von Mises statistic and the Anderson–Darling statistic . The values of these statistics and the corresponding p values (in brackets) for , , and max () are given in Table 2.

Based on the values of these statistics and the corresponding p values, the inverted Kumaraswamy distribution cannot be rejected for modeling the marginals and the maximum. In Table 3, the ML estimates and the posterior means using gamma priors are given for the parameters of BIK. Also, the credible interval length and confidence width are given.

Now, we compare the performance of the BIK, bivariate exponential (BE), and bivariate inverse Weibull (BIW) distributions in fitting this dataset. To select between models, several information criteria (IC) have been proposed; the main idea behind an IC is to strike a balance between goodness of fit and model complexity as follows:where is the penalty term and is the number of parameters in the model.

The ICs most commonly used in model selection are the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the consistent Akaike information criterion (CAIC), and the Hannan–Quinn information criterion (HQIC). Each IC has a different penalty term, illustrated in the first row of Table 4.

By analyzing equation (45), we can see that the first term tends to decrease when the model provides a good fit, while the second term increases with the number of parameters in the model. The model with the lowest IC value is the best (for more details about ICs, see Vrieze [25]).
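The four criteria can be sketched with their standard penalty terms (assumed here; the exact penalties used are those in the first row of Table 4):

```python
import math

def information_criteria(loglik, k, n):
    """AIC, BIC, CAIC, and HQIC from the maximized log likelihood,
    number of parameters k, and sample size n (standard penalty forms)."""
    return {
        "AIC": -2.0 * loglik + 2.0 * k,
        "BIC": -2.0 * loglik + k * math.log(n),
        "CAIC": -2.0 * loglik + k * (math.log(n) + 1.0),
        "HQIC": -2.0 * loglik + 2.0 * k * math.log(math.log(n)),
    }
```

Evaluating this for each candidate model and selecting the smallest value of each criterion reproduces the comparison reported in Table 4.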

The log likelihood value, AIC, BIC, CAIC, and HQIC are reported in Table 4. All of the criteria suggest that BIK provides the best fit compared with the BE and BIW models. This demonstrates the importance of BIK.

(2) Motor Data. The dataset has been obtained from [26]. The data concern failure times in days for a parallel system containing two motors. The variables are the time to failure of the first motor and the time to failure of the second motor (). All data points have been divided by 1000. We applied GOF tests to the IK, IW, and E distributions. From Table 5, based on the values of , and the corresponding p values, only the IK distribution can be used for modeling the marginals and the maximum. Hence, only BIK can be used for modeling this dataset.

ML estimates and the posterior mean using gamma priors are obtained for the parameters of BIK. Also, credible interval length and confidence width are illustrated in Table 6.

(3) Diabetic Retinopathy Data. The dataset has been obtained from [27]. The data were used by the National Eye Institute to study the effect of laser treatment on blindness in patients with diabetic retinopathy. At the beginning of the clinical trial, one eye of each patient was randomly selected for laser treatment. The variables are the time to failure of the treated eye () and the time to failure of the untreated eye (). All data points have been divided by 1000. We applied GOF tests to the IK, IW, and E distributions. From Table 7, it can be seen that only the IK distribution can be used for modeling the marginals and the maximum. Hence, only BIK can be used for modeling this dataset.

In Table 8, ML estimates and the posterior mean using gamma priors are obtained for the parameters of BIK. Also, credible interval length and confidence width are illustrated.

From these three datasets, we can conclude that the derived BIK distribution will be of great importance.

6.2. For MIK Distribution

In this section, we present numerical results of estimation using a simulation study and a real dataset.

6.2.1. A Simulation Study

Here, we present an algorithm to generate from the MIK distribution (Algorithm 2). We also present the simulation results for both the ML and Bayesian approaches.

The following algorithm is to generate from MIK distribution.
Step 1: generate , and from uniform (0, 1).
Step 2: compute , , and .
Step 3: define , , and
Step 4: obtain , and .
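The steps above can be sketched as follows; this assumes a construction with a single shared IK component and componentwise maxima (our reading of the elided definitions, following the scheme of [11]), so the details should be checked against Proposition 6:

```python
import numpy as np

def ik_quantile(v, alpha, beta):
    """Inverse of the assumed IK cdf F(x) = (1 - (1 + x)^(-alpha))^beta."""
    return (1.0 - v ** (1.0 / beta)) ** (-1.0 / alpha) - 1.0

def rmik(n, betas, beta0, alpha, seed=None):
    """n draws from an m-variate MIK: a shared component U0 ~ IK(alpha, beta0)
    and X_i = max(U_i, U0) with U_i ~ IK(alpha, beta_i)."""
    rng = np.random.default_rng(seed)
    u0 = ik_quantile(rng.uniform(size=n), alpha, beta0)
    cols = [np.maximum(ik_quantile(rng.uniform(size=n), alpha, b), u0)
            for b in betas]
    return np.column_stack(cols)

sample = rmik(500, betas=(2.0, 1.5, 3.0), beta0=1.0, alpha=3.0, seed=1)
```

Whenever the shared component dominates all the others, an entire row is tied, which is the source of the singular part in the multivariate case.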

(1) Maximum Likelihood Approach. To obtain the maximum likelihood estimates, a Monte Carlo simulation is performed in R with 1000 replications and three different sample sizes , and . Different initial values of the parameters are chosen arbitrarily, varying between small and large values to cover different cases of the distribution. The parameter is assumed known for simplicity. The following twelve populations are considered.(i)(ii)(iii)(iv)(v)(vi)(vii)(viii)(ix)(x)(xi)(xii)

To check the behavior of the estimates, ABias, MSE, CW, and CP are computed in Table 9. The algorithm is explained in detail in Appendix B.2.

(2) Bayesian Approach. Similar to the bivariate case, MCMC simulation is used to obtain the posterior mean numerically. Absolute bias (ABias), mean square error (MSE), confidence length (CL), and coverage probability (CP) are obtained and presented in Table 9. The algorithm is explained in detail in Appendix B.2.

From Table 9, it can be seen that for different combinations of the parameters and for different sample sizes, ABias and MSE are relatively small. This indicates that both Bayesian and ML approaches work efficiently in estimating the parameters of MIK.

Comparing the ML and Bayesian estimates, it is found that the Bayesian estimates have mean square errors (MSEs) less than or equal to those of the ML estimates, as seen from Figure 4. Also, for the majority of cases, the Bayesian estimates have smaller ABias than the ML ones.

Also, it can be seen that as the sample size () increases, the ABias, MSE, CW, and CL decrease for both the ML and Bayesian approaches, as seen from Figure 5. Moreover, for most cases, the CP is around 0.95.

6.2.2. A Real Dataset

Here, we analyze a real dataset to show the applicability of MIK. The dataset has been obtained from Bland and Altman [28]. It represents a set of systolic blood pressure measurements for 85 patients made by a semiautomatic blood pressure monitor; three readings were made for each patient. The variables are as follows:: the first systolic blood pressure measurement;: the second systolic blood pressure measurement;: the third systolic blood pressure measurement.

All data points have been divided by 1000; this does not make any difference in the statistical inference. We applied GOF tests in order to check whether the fit based on the IK, IW, and E distributions is reasonable in this case. We computed the modified Cramér–von Mises statistic and the Anderson–Darling statistic . The values of these statistics and the corresponding p values (in brackets) for , , , and max are given in Table 10.

Based on the values of these statistics and the corresponding p values, only the IK distribution can be used for modeling the marginals and the maximum. Hence, only MIK can be used for modeling these data. In Table 11, the ML estimates and the posterior means using gamma priors are given for the parameters of MIK ( is considered known for simplicity). Also, the credible interval length and confidence width are given.

7. Conclusion

In this paper, the bivariate inverted Kumaraswamy (BIK) distribution is derived as a new member of the bivariate Marshall–Olkin family, and its properties are studied. Estimation is performed using both the maximum likelihood (ML) and Bayesian approaches. To demonstrate the applicability of the BIK distribution, three real datasets from different fields, namely, sports, engineering, and medicine, have been analyzed, and it is observed that the BIK model provides the best fit. Due to the wide applicability and strong performance of the BIK model, a generalization to the multivariate inverted Kumaraswamy (MIK) distribution is performed. To the best of our knowledge, estimation for the multivariate Marshall–Olkin family has not been studied before. Here, estimation of MIK is performed using both the ML and Bayesian approaches, and a convenient algorithm is proposed for both; this algorithm could be applied to any multivariate Marshall–Olkin distribution. Finally, a real dataset has been analyzed to illustrate the applicability of the MIK distribution, and it is observed that the MIK model provides a good fit, while the multivariate inverse Weibull and multivariate exponential distributions fail to fit this dataset.

Appendix

A

A.1. Proofs of Propositions 1 to 5
A.1.1. Proposition 1

The first two cases X1 < X2 and X1 > X2 are easily obtained by taking . Now, to get , we use the fact that

But and .

Hence, we have

Therefore,

A.1.2. Proposition 2

Let A be the following event:

Then, and .

Therefore, .

For z = min (), and can be obtained by subtraction.

It can be seen that is the singular part as its second partial derivative is zero when . Thus, is the absolutely continuous part as its mixed partial derivative is a density function.

A.1.3. Proposition 3

(a)where .(b)(c)Conditional distribution of given is given bythat is,

A.1.4. Proposition 4

Starting withsubstituting for by the corresponding formula, and then using change of variable technique and the following facts, the formula is derived.

A.1.5. Proposition 5

Starting withsubstituting for by corresponding formula, and then applying change of variable technique and the following facts, the formula is derived.

A.2. Maximum Likelihood Estimators for BIK

The MLEs of are obtained by solving the above nonlinear system of equations. The solutions are numerically obtained in Section 6.

The asymptotic variance covariance matrix is given bywherewhere

Using the asymptotic distribution of the MLEs, the confidence intervals can be obtained aswhere , is the estimated variance, and is the upper percentile of the standard normal table.
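As a sketch, the interval for a single parameter can be computed as follows (the variance is the corresponding diagonal element of the estimated inverse Fisher information; `wald_ci` is a hypothetical helper name):

```python
from statistics import NormalDist

def wald_ci(theta_hat, var_hat, level=0.95):
    """Asymptotic-normality interval: theta_hat +/- z * sqrt(var_hat),
    where z is the standard normal upper (1 - level)/2 percentile."""
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    half = z * var_hat ** 0.5
    return theta_hat - half, theta_hat + half
```

For a 95% interval this uses z ≈ 1.96, matching the standard normal table lookup described above.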

B

B.1. Algorithms for BIK
B.1.1. Monte Carlo Simulation Algorithm for BIK

The following algorithm is used to perform Monte Carlo simulation study using R package and get ML estimates for BIK distribution.Step 1: independent samples from BIK distribution are generated as follows:(a)Generate , and from uniform (0, 1).(b)Compute , and (c)Define , and (d)Obtain Step 2: the maximum likelihood estimates (MLEs) and the corresponding variance (Var) are obtained and stored.Step 3: lower (L) and upper (U) bounds for confidence interval (CI) are calculated (i.e., ) and stored.Step 4: the three previous steps are repeated 1000 times.Step 5: for converged datasets (this is obtained from R using optim function), the average of MLE, variance, and CI is obtained as follows:Step 6: absolute bias (ABias), mean square error (MSE), and confidence width (CW) are obtained as follows:Step 7: the coverage probability (CP) is obtained as follows:
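Steps 5 to 7 can be sketched as follows; `simulation_metrics` is a hypothetical helper, and ABias is computed here as the absolute difference between the averaged estimate and the true value (the exact formulas are those in steps 6 and 7, which are elided above):

```python
import numpy as np

def simulation_metrics(estimates, lowers, uppers, true_value):
    """ABias, MSE, average confidence width (CW), and coverage
    probability (CP) over the converged Monte Carlo replications."""
    est = np.asarray(estimates, dtype=float)
    lo = np.asarray(lowers, dtype=float)
    up = np.asarray(uppers, dtype=float)
    return {
        "ABias": abs(float(est.mean()) - true_value),
        "MSE": float(np.mean((est - true_value) ** 2)),
        "CW": float(np.mean(up - lo)),
        "CP": float(np.mean((lo <= true_value) & (true_value <= up))),
    }
```

Feeding in the stored MLEs and interval bounds from the 1000 replications yields one row of the summary tables.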

B.1.2. MCMC Simulation Algorithm for BIK

The following algorithm is used to perform Markov Chain Monte Carlo (MCMC) simulation study using R and WinBugs packages and get Bayesian estimates for BIK distribution.Step 1: using R package: n independent samples from BIK distribution are generated using the same procedure in Appendix B.1.1.Step 2: the generated dataset is sent to WinBugs package and gamma priors are defined.Step 3: WinBugs is used with 1000 replications to generate the sequence of Markov chain.Step 4: WinBugs provides posterior mean (PM), standard deviation (SD), and credible interval (CI).Step 5: results are sent back to R package and stored.Step 6: the five previous steps are repeated 1000 times.Step 7: for converged datasets (i.e., Geweke test results <1.96, it is obtained using coda function in R package), the average of posterior mean (PM), variance (Var), and credible interval (CI) is obtained as follows:Step 8: absolute bias (ABias), mean square error (MSE), and credible length (CL) are obtained as follows:MSE and CL are the same as in Appendix B.1.1.Step 9: the coverage probability (CP) is the same as step 7 in Appendix B.1.1.

B.2. Algorithms for MIK
B.2.1. Monte Carlo Simulation Algorithm for MIK

The following algorithm is used to perform Monte Carlo simulation study using R package and get ML estimates for MIK distribution.Step 1: independent samples from MIK distribution are generated as follows:(a)Generate , and from uniform (0, 1).(b)Compute , , and (c)Define , ,and (d)Obtain , and Steps 2 till 7 are the same as in Appendix B.1.1.

B.2.2. MCMC Simulation Algorithm for MIK

The following algorithm is used to perform Markov Chain Monte Carlo (MCMC) simulation study using R and WinBugs packages and get Bayesian estimates for MIK distribution.Step 1: using R package: n independent samples from MIK distribution are generated using the same procedure in Appendix B.2.1.Steps 2 till 9 are the same as in Appendix B.1.2.

Data Availability

The datasets used in the example applications are available in [23, 26–28].

Conflicts of Interest

The authors declare that they have no conflicts of interest.