Abstract

The mixture of two Burr Type III distributions (MTBIIID) is investigated. First, the identifiability property of the MTBIIID is proved. Then, two different methods of estimation are used. Next, the estimates of the five unknown parameters and the reliability function of the MTBIIID under Type II censoring are obtained. To study the performance of the estimation techniques in the paper, a Monte Carlo simulation is presented. In addition, the numerical illustration requires solving nonlinear equations; therefore, the International Mathematical and Statistical Library (IMSL) software is used to obtain the solutions numerically. Finally, a real data set is used to illustrate the methods proposed here.

1. Introduction

Finite mixtures of distributions have been used as models throughout the history of statistics. For more details and examples about finite mixtures of distributions, see Everitt and Hand [1], Titterington et al. [2], McLachlan and Basford [3], Lindsay [4], McLachlan and Krishnan [5], and McLachlan and Peel [6]. Identifiability is an important property of mixture models since it guarantees a unique representation for a class of mixtures. Identifiability of mixtures has been discussed by several authors; among others, see Ahmad [7], Sultan et al. [8], and Sultan and Al-Moisheer [9].

The probability density function (pdf) of the MTBIIID is given by
$$f(x) = p\,f_1(x) + (1-p)\,f_2(x), \qquad x > 0,\; 0 < p < 1, \tag{1}$$
where $p$ is the mixing proportion and $f_j(x)$, the density function of the $j$th component ($j = 1, 2$), is given by
$$f_j(x) = c_j k_j\, x^{-(c_j+1)}\bigl(1 + x^{-c_j}\bigr)^{-(k_j+1)}, \qquad c_j, k_j > 0. \tag{2}$$

The cumulative distribution function (cdf) and reliability function of the MTBIIID, respectively, are given by
$$F(x) = p\,F_1(x) + (1-p)\,F_2(x), \tag{3}$$
$$R(x) = 1 - F(x) = p\,R_1(x) + (1-p)\,R_2(x), \tag{4}$$
where $F_j(x)$ and $R_j(x)$, the cdf and reliability function of the $j$th component, are given by
$$F_j(x) = \bigl(1 + x^{-c_j}\bigr)^{-k_j}, \tag{5}$$
$$R_j(x) = 1 - \bigl(1 + x^{-c_j}\bigr)^{-k_j}, \qquad j = 1, 2. \tag{6}$$
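As a quick numerical illustration of (1)-(6), the following minimal Python sketch evaluates the mixture pdf, cdf, and reliability function; the parameter names p, c1, k1, c2, k2 match the notation in (1)-(6) above, and the numerical values are purely illustrative.

import numpy as np

def burr3_pdf(x, c, k):
    # component pdf (2): f_j(x) = c k x^{-(c+1)} (1 + x^{-c})^{-(k+1)}
    return c * k * x ** (-(c + 1)) * (1.0 + x ** (-c)) ** (-(k + 1))

def burr3_cdf(x, c, k):
    # component cdf (5): F_j(x) = (1 + x^{-c})^{-k}
    return (1.0 + x ** (-c)) ** (-k)

def mtbiii_pdf(x, p, c1, k1, c2, k2):
    # mixture pdf (1): f(x) = p f_1(x) + (1 - p) f_2(x)
    return p * burr3_pdf(x, c1, k1) + (1 - p) * burr3_pdf(x, c2, k2)

def mtbiii_rel(x, p, c1, k1, c2, k2):
    # mixture reliability (4): R(x) = 1 - F(x) = p R_1(x) + (1 - p) R_2(x)
    return 1.0 - (p * burr3_cdf(x, c1, k1) + (1 - p) * burr3_cdf(x, c2, k2))

x = np.linspace(0.1, 5.0, 50)
print(mtbiii_pdf(x, p=0.4, c1=2.0, k1=1.0, c2=3.0, k2=2.0)[:3])
print(mtbiii_rel(1.5, p=0.4, c1=2.0, k1=1.0, c2=3.0, k2=2.0))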

Recently, Al-Moisheer and Sultan [10] have proposed a new mixture of two Burr Type III distributions (MTBIIID) with a common shape parameter. They proved that the MTBIIID with a common shape parameter is identifiable. They also derived the nonlinear discriminant function of the MTBIIID and calculated the total probabilities of misclassification as well as the percentage bias. Al-Moisheer [11] has investigated the problem of updating discriminant functions estimated from the MTBIIID.

The method of maximum likelihood estimation for finite mixture models has been studied, for example, by Sultan et al. [8] and Sultan and Al-Moisheer [9]. The basic idea of Lindley’s [12] approximation for estimating the parameters of a mixture model under Type II censoring has been investigated by many authors; among others, see Ahmad et al. [13] for a mixture of two Weibull distributions and Sultan and Al-Moisheer [14–16] for a mixture of two inverse Weibull distributions. A computationally simpler approximation is given by Tierney and Kadane [17], but it requires unimodality of the posterior distribution; such a condition cannot be guaranteed in the case of the MTBIIID, so that approach is not considered in this paper.

The main purpose of this paper is to prove, in Section 2, that the MTBIIID with unknown shape parameters is identifiable. The maximum likelihood estimators of the parameters and reliability function under Type II censoring are obtained in Section 3. In Section 4, Lindley’s procedure is applied to approximate the Bayes estimates of the unknown parameters and the reliability function based on Type II censoring. In Section 5, a simulation study is carried out to illustrate the estimation techniques considered in Sections 3 and 4; in addition, the usefulness of the proposed model is illustrated by fitting it to data from a new area of application, namely, carbon monoxide levels. Finally, in Section 6, some concluding remarks are drawn.

2. Identifiability

Chandra [18] has proved the following.

Let $\phi$ be a transform associated with each $F \in \mathcal{F}$ having the domain of definition $D_{\phi}$, with the linear map $M: F \to \phi$. If there exists a total ordering $\preceq$ of $\mathcal{F}$ such that (i) $F_1 \preceq F_2$ ($F_1, F_2 \in \mathcal{F}$) implies $D_{\phi_1} \subseteq D_{\phi_2}$, and (ii) for each $F_1 \in \mathcal{F}$, there exists some $t_1$ in $T_1$, the closure of $\{t : \phi_1(t) \ne 0\}$, such that $\lim_{t \to t_1} \phi_2(t)/\phi_1(t) = 0$ for $F_1 \prec F_2$, then the class of all finite mixing distributions is identifiable relative to $\mathcal{F}$.

By using Chandra’s approach, we prove the following proposition.

Proposition 1. The class of all finite mixing distributions relative to the Burr III distribution (BIIID) is identifiable.

Proof. Let $X_j$ be a random variable having the pdf and cdf of the BIIID given in (2) and (5), respectively. Then the $r$th moment of the $j$th BIII component is given by
$$E\bigl[X_j^{\,r}\bigr] = k_j\,B\!\left(k_j + \frac{r}{c_j},\; 1 - \frac{r}{c_j}\right), \qquad r < c_j, \tag{7}$$
where $B(\cdot,\cdot)$ is the standard beta function.
From (5), the transform associated with each component and its domain of definition follow as in (8). Ordering the members of the family according to their shape parameters and taking the limit indicated in (8), we obtain (9); see Abramowitz and Stegun [19].
On the other hand, for the remaining case of the ordering, we obtain (10). From (9) and (10), the conditions of Chandra's result are satisfied, and hence the identifiability is proved.
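For completeness, a short derivation of the moment expression in (7) under the component pdf (2) as written above (the symbols $c_j$, $k_j$ follow the notation of the reconstruction in Section 1): substituting $u = (1 + x^{-c_j})^{-1}$, so that $x = \bigl(u/(1-u)\bigr)^{1/c_j}$ and $du = c_j x^{-(c_j+1)}(1 + x^{-c_j})^{-2}\,dx$, gives

$$E\bigl[X_j^{\,r}\bigr] = \int_0^{\infty} x^{r}\, c_j k_j\, x^{-(c_j+1)}\bigl(1 + x^{-c_j}\bigr)^{-(k_j+1)}\,dx = k_j \int_0^{1} u^{\,k_j + r/c_j - 1}(1-u)^{-r/c_j}\,du = k_j\,B\!\Bigl(k_j + \frac{r}{c_j},\; 1 - \frac{r}{c_j}\Bigr),$$

which converges provided $r < c_j$, in agreement with (7).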

3. Maximum Likelihood Estimation

Suppose that $x_{(1)} \le x_{(2)} \le \cdots \le x_{(r)}$ is a censored sample under Type II right censoring of size $r$ obtained from a life test on $n$ items whose lifetimes have a MTBIIID with density (1). The likelihood function takes the form
$$L(\Theta) = \frac{n!}{(n-r)!}\,\prod_{i=1}^{r}\Bigl[p\,f_1(x_{(i)}) + (1-p)\,f_2(x_{(i)})\Bigr]\,\Bigl[p\,R_1(x_{(r)}) + (1-p)\,R_2(x_{(r)})\Bigr]^{\,n-r}, \tag{12}$$
where $f_j(x)$ and $R_j(x)$, $j = 1, 2$, are given, respectively, by (2) and (6). The log-likelihood function in this case is then given by
$$\ln L(\Theta) = \ln\frac{n!}{(n-r)!} + \sum_{i=1}^{r} \ln f(x_{(i)}) + (n-r)\,\ln R(x_{(r)}). \tag{13}$$
Equation (13) can be reduced to the complete sample case by setting $r = n$. Differentiating (13) with respect to the 5-dimensional vector of parameters $\Theta = (p, c_1, k_1, c_2, k_2)$ and setting the derivatives equal to zero, we obtain the system of nonlinear likelihood equations
$$\frac{\partial \ln L(\Theta)}{\partial \Theta} = \mathbf{0}, \tag{14}$$
in which the auxiliary functions defined in (15) appear for $i = 1, \ldots, r$ and $j = 1, 2$; the functions $f$, $f_j$, $F$, $F_j$, $R$, and $R_j$ are given in (1)–(6), respectively.

The solution of the nonlinear equations (14) gives the MLEs of the vector of five parameters. The routine DNEQNF from the IMSL library is used to solve this system of equations. The MLE of the reliability function is given by (4) after replacing the parameters by their MLEs. Table 1 displays the MLEs of the unknown parameters and the corresponding reliability function. Note that the MLEs of the unknown parameters of the MTBIIID with a common shape parameter based on the complete sample case were obtained by Al-Moisheer and Sultan [10].
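As a rough, non-authoritative illustration of how the MLEs can be computed outside the IMSL environment, the following Python sketch maximizes the Type II censored log-likelihood (13) directly with scipy.optimize; the parameter ordering (p, c1, k1, c2, k2), the starting values, and the data array are assumptions made for the example only.

import numpy as np
from scipy.optimize import minimize

def burr3_pdf(x, c, k):
    # component pdf (2)
    return c * k * x ** (-(c + 1)) * (1.0 + x ** (-c)) ** (-(k + 1))

def burr3_rel(x, c, k):
    # component reliability (6)
    return 1.0 - (1.0 + x ** (-c)) ** (-k)

def neg_loglik(theta, x_ordered, n):
    # minus the Type II censored log-likelihood (13): the first r order
    # statistics are observed and the remaining n - r items are censored
    p, c1, k1, c2, k2 = theta
    r = len(x_ordered)
    f = p * burr3_pdf(x_ordered, c1, k1) + (1 - p) * burr3_pdf(x_ordered, c2, k2)
    R = p * burr3_rel(x_ordered[-1], c1, k1) + (1 - p) * burr3_rel(x_ordered[-1], c2, k2)
    return -(np.sum(np.log(f)) + (n - r) * np.log(R))

# hypothetical censored sample: first r = 8 ordered lifetimes out of n = 12 items
x_ordered = np.sort(np.array([0.4, 0.7, 0.9, 1.2, 1.5, 2.1, 2.8, 3.3]))
n = 12
fit = minimize(neg_loglik, x0=[0.5, 2.0, 1.0, 3.0, 2.0], args=(x_ordered, n),
               method="L-BFGS-B", bounds=[(0.01, 0.99)] + [(0.05, None)] * 4)
print("approximate MLEs of (p, c1, k1, c2, k2):", fit.x)

Maximizing the log-likelihood directly is numerically equivalent to solving the score equations (14); in practice, several starting values should be tried because mixture likelihoods can have multiple local maxima.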

4. Lindley’s Approximation for the Bayes Estimation

Let us assume that the parameters $p$, $c_1$, $k_1$, $c_2$, and $k_2$ are independent random variables; then the following joint noninformative improper prior density of the vector $\Theta = (p, c_1, k_1, c_2, k_2)$ is proposed:
$$\pi(\Theta) \propto 1, \qquad 0 < p < 1,\; c_j, k_j > 0,\; j = 1, 2. \tag{16}$$
The choice of this prior is appropriate when no information is available about the parameters of interest. Moreover, for this prior, the posterior distribution is proportional to the likelihood; hence, the Bayes estimates are numerically close to the ML estimates.

Multiplying (16) by (12), the joint posterior density of the vector $\Theta$ given the data becomes
$$\pi^{*}(\Theta \mid \underline{x}) \propto L(\Theta)\,\pi(\Theta). \tag{17}$$
It is well known that, under the squared error loss function, the Bayes estimator of a function of the parameters is the posterior mean of that function and is given by a ratio of integrals. Neither integral can be expressed in a simple closed form. Therefore, Lindley’s [12] approximation is used to obtain the Bayes estimator of $u(\Theta)$, where $\Theta$ is the vector of parameters and $u(\Theta)$ is an arbitrary function of $\Theta$; Lindley’s procedure is presented in the following subsection.
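For reference, the quantity being approximated can be written out explicitly (standard Bayesian notation; the symbols $u(\Theta)$ and $\underline{x}$ are notational choices for this note): under squared error loss the Bayes estimator is the posterior mean

$$\hat u_{B} = E\bigl[u(\Theta) \mid \underline{x}\bigr] = \frac{\int u(\Theta)\,L(\Theta)\,\pi(\Theta)\,d\Theta}{\int L(\Theta)\,\pi(\Theta)\,d\Theta},$$

and neither the numerator nor the denominator has a closed form for the MTBIIID, which is what motivates Lindley’s expansion below.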

4.1. Lindley’s Procedure

Consider the following integral:
$$I(\underline{x}) = \frac{\int u(\Theta)\,e^{Q(\Theta)}\,d\Theta}{\int e^{Q(\Theta)}\,d\Theta}, \tag{18}$$
where $\Theta$ is a vector of parameters, $u(\Theta)$ is an arbitrary function of $\Theta$, and $Q(\Theta)$ is the logarithm of the posterior function for $n$ observations forming a random sample $\underline{x} = (x_1, \ldots, x_n)$ from a density $f(x; \Theta)$ with a prior density function $\pi(\Theta)$. The basic idea of Lindley’s [12] approximate form for the Bayes estimator is to expand the functions involved in (18) about the posterior mode. He obtained an asymptotic expansion of the integral (18) in which terms of order $n^{-2}$ and smaller are neglected. Under a quadratic error loss function, he showed that the approximate Bayes estimate of the function $u(\Theta)$ is of the form given in (19), with the quantities appearing there defined in (20). All functions on the right-hand side of (19) are to be evaluated at the posterior mode. This approach will be used in this paper to develop the Bayes estimation of the vector of parameters of the MTBIIID with pdf and reliability function given by (1) and (4), respectively.
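To fix ideas, the general multiparameter form of Lindley’s approximation, as it is commonly stated in the literature, is recalled below; the notation ($\rho_j$, $L_{ijk}$, $\sigma_{ij}$) is a standard choice that may differ slightly from the symbols used in (19)-(20):

$$E\bigl[u(\Theta)\mid \underline{x}\bigr] \approx u + \frac{1}{2}\sum_{i}\sum_{j}\bigl(u_{ij} + 2\,u_i\,\rho_j\bigr)\sigma_{ij} + \frac{1}{2}\sum_{i}\sum_{j}\sum_{k}\sum_{l} L_{ijk}\,\sigma_{ij}\,\sigma_{kl}\,u_l,$$

where $u_i = \partial u/\partial\theta_i$, $u_{ij} = \partial^2 u/\partial\theta_i\,\partial\theta_j$, $L_{ijk}$ denotes the third partial derivatives of the log-likelihood, $\rho_j = \partial \ln \pi(\Theta)/\partial\theta_j$ for the prior $\pi$, $\sigma_{ij}$ is the $(i,j)$th element of the inverse of the matrix $(-L_{ij})$, and all terms are evaluated at the posterior mode (which, for the flat prior used here, coincides with the MLE).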

For the five-parameter case, Lindley’s [12] approximation form reduces to the expression in (21), in which, for $i, j, k, l = 1, \ldots, 5$, the quantities defined in (22) appear together with the terms $u_i$, $u_{ij}$, $L_{ijk}$, $\rho_j$, and $\sigma_{ij}$ defined in (20).

All terms on the right-hand side of (21) are to be evaluated at the posterior mode.

In our case, the logarithm of the posterior density function (17) is given by (23). The mode of the posterior density can be obtained by solving the five nonlinear equations in (24); with the prior (16), these coincide with the likelihood equations (14), and the functions appearing in them are as given by (15).

Now, to apply Lindley’s [12] form (21), we first obtain the elements $L_{ij}$, $i, j = 1, \ldots, 5$, of the matrix defined in (20). These elements are calculated by using (A.1) in Appendix A. Next, the elements $\sigma_{ij}$, $i, j = 1, \ldots, 5$, of the inverse matrix are calculated numerically by inverting the matrix $(-L_{ij})$ using the routine DLINRG from the IMSL library. Finally, the elements $L_{ijk}$, $i, j, k = 1, \ldots, 5$, defined in (20) are calculated by using (B.1) in Appendix B.
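As a numerical companion to these steps, and continuing the illustrative Python sketch from Section 3 (its neg_loglik, x_ordered, and n are reused here; with the flat prior (16) the log-posterior equals the log-likelihood up to a constant), the matrix of second derivatives can also be approximated by finite differences and inverted with numpy, playing the role of (A.1) and DLINRG in the paper.

import numpy as np
from scipy.optimize import minimize

# posterior mode: with the prior (16) proportional to 1, this is the MLE of Section 3
mode = minimize(neg_loglik, x0=[0.5, 2.0, 1.0, 3.0, 2.0], args=(x_ordered, n),
                method="L-BFGS-B", bounds=[(0.01, 0.99)] + [(0.05, None)] * 4).x

def neg_logpost_hessian(theta, eps=1e-4):
    # central-difference approximation to the 5x5 matrix (-L_ij);
    # a numerical stand-in for the analytic expressions of Appendix A
    d = len(theta)
    H = np.zeros((d, d))
    E = np.eye(d) * eps
    for i in range(d):
        for j in range(d):
            H[i, j] = (neg_loglik(theta + E[i] + E[j], x_ordered, n)
                       - neg_loglik(theta + E[i] - E[j], x_ordered, n)
                       - neg_loglik(theta - E[i] + E[j], x_ordered, n)
                       + neg_loglik(theta - E[i] - E[j], x_ordered, n)) / (4.0 * eps ** 2)
    return H

sigma = np.linalg.inv(neg_logpost_hessian(mode))   # elements sigma_ij, as computed by DLINRG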

4.2. Lindley’s Approximation for the Bayes Estimates of the Vector of Parameters

The approximate Bayes estimators of the components of the vector of five parameters of the MTBIIID can now be obtained by equating $u(\Theta)$ in (21) to one of the five parameters in turn, so that $u_i = 1$ for the corresponding index and $u_j = 0$ for $j \ne i$, with $u_{ij} = 0$ for all $i, j$. Then the Bayes estimates of $p$, $c_1$, $k_1$, $c_2$, and $k_2$ are given, respectively, by (25), where the quantities for $i = 1, \ldots, 5$ are given in (22) and $\sigma_{ij}$, $i, j = 1, \ldots, 5$, are the elements of the inverse of the matrix $(-L_{ij})$. All functions in (25) are to be evaluated at the posterior mode, which is computed using the appropriate equation in (24).

4.3. Lindley’s Approximation for the Bayes Estimates of the Reliability Function

The approximate Bayes estimate of the reliability function of the MTBIIID at some value $t$ can be obtained by using (21) with $u(\Theta) = R(t)$, where the required derivatives of $R(t)$ and the functions involved are as given by (15) and (A.2), respectively, with $x$ being replaced by $t$.

Hence, the Bayes estimate of $R(t)$ is then given by the resulting expression, in which the quantities defined in (22) appear and $\sigma_{ij}$ are the elements of the inverse of the matrix $(-L_{ij})$ evaluated at the posterior mode.

5. Illustrative Examples

In this section the following examples are considered to illustrate the two estimation methods used in this paper.

5.1. Example 1 (Real Data)

In order to show the usefulness of the proposed mixture model, it is applied to data collected in Jeddah city measuring the carbon monoxide level (mg/m³) at different locations during the period 1–31 January 2010. The Kolmogorov-Smirnov (K-S) test distance between the empirical distribution of the data and the fitted MTBIIID indicates a good fit at the chosen level of significance. Assuming the improper prior distribution in (16), the Bayes estimates and the corresponding MLEs of the parameters of the MTBIIID based on the complete sample are calculated in Table 2.

The results show that the Bayes estimates and the MLEs are close.
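A minimal sketch of how such a K-S check can be reproduced in Python is given below; the data values and the fitted parameters shown are placeholders, not the Jeddah measurements or the estimates of Table 2.

import numpy as np
from scipy.stats import kstest

def mtbiii_cdf(x, p, c1, k1, c2, k2):
    # mixture cdf (3) built from the component cdf (5)
    return p * (1.0 + x ** (-c1)) ** (-k1) + (1 - p) * (1.0 + x ** (-c2)) ** (-k2)

# hypothetical CO readings (mg/m^3) and placeholder fitted parameters
data = np.array([1.9, 2.3, 2.8, 3.1, 3.4, 3.9, 4.2, 4.8, 5.5, 6.1])
fitted = dict(p=0.45, c1=3.0, k1=2.0, c2=5.0, k2=1.5)
D, pvalue = kstest(data, lambda x: mtbiii_cdf(x, **fitted))
print(f"K-S distance = {D:.3f}, p-value = {pvalue:.3f}")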

5.2. Example 2 (Simulated Data)

In this subsection, we illustrate and compare the MLEs and the Bayes estimates by using a Monte Carlo simulation carried out according to the following steps:
(1) We select the parameter values used in the simulation as displayed in Table 1.
(2) Random samples of different sizes are generated from the mixture population in (1), under each choice of the parameters, according to the following algorithm (a minimal sketch of this generation step is given later in this subsection):
(a) Generate two uniform variates $u_1$ and $u_2$ using the routine DRNUN from the Fortran numerical library (IMSL).
(b) If $u_1 \le p$, then use $u_2$ to generate a random variate from the MTBIIID as $x = (u_2^{-1/k_1} - 1)^{-1/c_1}$, the inverse of the first component cdf in (5).
(c) If $u_1 > p$, then use $u_2$ to generate a random variate from the MTBIIID as $x = (u_2^{-1/k_2} - 1)^{-1/c_2}$, the inverse of the second component cdf in (5).
(3) The MLEs of the vector of parameters are obtained by solving the system of nonlinear likelihood equations (14) using the subroutine DNEQNF from the IMSL library. The MLE of the reliability function is then computed, at some value $t$, from (4) after replacing the parameters by their MLEs.
(4) For the different choices of the sample size $n$ and censoring size $r$ in Table 1, the prior distribution is taken to be proportional to 1, because for this choice of vague prior the posterior distribution is proportional to the likelihood function, as mentioned in Section 4. Hence, the Bayesian and non-Bayesian results are close.
(5) The squared deviations $(\theta - \hat\theta)^2$ are computed for the different sample and censoring sizes, where $\theta$ stands for a parameter and $\hat\theta$ is its estimate (MLE or Bayes).
(6) Steps (2)–(5) are repeated 100 times. The averages (Ave.) and estimated risks (ER) are computed over the 100 repetitions by averaging the estimates and the squared deviations, respectively. The computational results are displayed in Table 1.
In Table 1, two different methods of estimation are used to obtain the estimates of the unknown parameters and the reliability function, namely, the maximum likelihood method and Lindley’s approximation for the Bayes estimates. Estimates of the parameters and the reliability function are obtained in two cases: complete samples and Type II right-censored samples. The averages and estimated risks are computed for different sample sizes $n$ and censoring sizes $r$, covering small, moderate, and large samples.
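A minimal Python sketch of the generation step (2) above, using the inverse of the component cdf (5) (illustrative parameter values; DRNUN is replaced by numpy's uniform generator):

import numpy as np

def burr3_ppf(u, c, k):
    # invert the component cdf (5): (1 + x^{-c})^{-k} = u  =>  x = (u^{-1/k} - 1)^{-1/c}
    return (u ** (-1.0 / k) - 1.0) ** (-1.0 / c)

def sample_mtbiii(size, p, c1, k1, c2, k2, seed=2024):
    rng = np.random.default_rng(seed)
    u1 = rng.uniform(size=size)    # decides which component is sampled
    u2 = rng.uniform(size=size)    # inverted through that component's cdf
    return np.where(u1 <= p, burr3_ppf(u2, c1, k1), burr3_ppf(u2, c2, k2))

x = sample_mtbiii(50, p=0.4, c1=2.0, k1=1.0, c2=3.0, k2=2.0)
print(x[:5])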

We see that the estimated risks of the Bayes estimates are smaller than the corresponding estimated risks of the MLEs for certain parameters and for some sample sizes $n$ and censoring sizes $r$. To summarize, when comparing the two methods, we note that neither method performs better than the other in all cases. From Table 1, for the estimates of the mixing proportion and the reliability function, the ML method gives smaller estimated risks than the Bayes method as the sample size increases; for the remaining parameters, and for all values of the sample size $n$ and censoring size $r$, the Bayes method performs better.

6. Concluding Remarks

In this paper, the finite MTBIIID is considered. The identifiability property of the MTBIIID is proved; this property is important since it guarantees a unique representation of the mixture model. Two different methods of estimation are discussed, namely, the maximum likelihood method and Lindley’s approximation for the Bayes estimates. The MLEs of the parameters and reliability function of the MTBIIID are obtained and compared with the corresponding Bayes estimates. Numerical illustrations are presented to assess the efficiency of the obtained estimates. In addition, a new area of application, carbon monoxide levels, is used to illustrate the usefulness of the proposed MTBIIID. Furthermore, all of the results obtained can be specialized to the complete sample case by taking $r = n$. Finally, Lindley’s approximation form is shown to be a suitable alternative for the numerical evaluation of the integrals over the 5-dimensional parameter space.

Appendix

A. Elements $L_{ij}$

From (23) and the definitions in (21), the elements are derived as follows: where, for and , with , , , , , , , , , , , ,   , , , and being as given in (1)–(6) and (15), respectively.

B. Elements $L_{ijk}$

From (23) and the definitions in (21), the elements , , for , are derived as follows: wherewith , , , , , , , , , , , ,   , , , and being as given in (1)–(6) and (15), respectively.

Competing Interests

The author declares that there is no conflict of interests regarding the publication of this paper.