Abstract

Fisher information matrices for the exponentiated gamma (EG) distribution are obtained for complete, Type I, and Type II censored observations, and the asymptotic variances of the different estimators are derived. We also consider several estimators of the EG parameters and compare their performance through Monte Carlo simulations.

1. Introduction

Gupta et al. [1] introduced the exponentiated gamma (EG) distribution. This model is flexible enough to accommodate both monotonic and nonmonotonic failure rates. The EG distribution has the distribution function (c.d.f.) F(x) = [1 − (1 + λx)e^(−λx)]^θ, x > 0. Therefore, the EG distribution has the density function f(x) = θλ²x e^(−λx) [1 − (1 + λx)e^(−λx)]^(θ−1), x > 0, the survival function S(x) = 1 − F(x), and the hazard function h(x) = f(x)/S(x). Here θ > 0 and λ > 0 are the shape and scale parameters, respectively. The two-parameter EG distribution will be denoted by EG(θ, λ). For details, see Bakoban [2], Coronel-Brizio et al. [3], and Shawky and Bakoban [4-10].
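With G(x; λ) = 1 − (1 + λx)e^(−λx) denoting the gamma(2, λ) c.d.f., the distribution, density, survival, and hazard functions can be coded directly. A minimal Python sketch (the function names are ours):

```python
import math

def eg_cdf(x, theta, lam):
    """EG(theta, lam) distribution function F(x) = G(x; lam)**theta."""
    if x <= 0:
        return 0.0
    g = 1.0 - (1.0 + lam * x) * math.exp(-lam * x)  # gamma(2, lam) c.d.f.
    return g ** theta

def eg_pdf(x, theta, lam):
    """Density f(x) = theta * lam**2 * x * exp(-lam*x) * G(x)**(theta - 1)."""
    if x <= 0:
        return 0.0
    g = 1.0 - (1.0 + lam * x) * math.exp(-lam * x)
    return theta * lam ** 2 * x * math.exp(-lam * x) * g ** (theta - 1.0)

def eg_survival(x, theta, lam):
    """Survival function S(x) = 1 - F(x)."""
    return 1.0 - eg_cdf(x, theta, lam)

def eg_hazard(x, theta, lam):
    """Hazard function h(x) = f(x) / S(x)."""
    return eg_pdf(x, theta, lam) / eg_survival(x, theta, lam)
```

For θ = 1 these reduce to the gamma distribution with shape 2 and scale parameter λ.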

Computation of the Fisher information for any particular distribution is quite important; see, for example, Zheng [11]. The Fisher information matrix can be used to compute the asymptotic variances of different functions of the estimators, for example, the maximum likelihood estimators (MLEs). The problem is particularly important when the data are censored. We compute the Fisher information matrices of the EG distribution for complete and censored samples. We then study the properties of the MLEs of the EG distribution under complete and censored samples in detail. Also, we consider different estimators of the EG distribution and study how the estimators of the unknown parameter(s) behave for different sample sizes and different parameter values. We mainly compare the MLEs, the estimators based on percentiles (PCEs), the least-squares estimators (LSEs), the weighted least-squares estimators (WLSEs), the method of moments estimators (MMEs), and the estimators based on linear combinations of order statistics (LMEs) by means of extensive simulations. It is worth mentioning that many authors have been interested in estimating the parameters of such distributions. For example, Wingo [12] derived the MLEs of the Burr XII distribution parameters under Type II censoring. Raqab [13] compared estimators based on order statistics under complete and censored samples with the MLE based on a complete sample for the Burr X distribution. Gupta and Kundu [14] presented the properties of the MLEs for the generalized exponential (GE) distribution and discussed other estimation methods for the GE distribution in [15]. Kundu and Raqab [16] treated the generalized Rayleigh distribution similarly. Hossain and Zimmer [17] compared several methods for estimating the Weibull parameters with complete and censored samples. Surles and Padgett [18] considered the MLEs for complete and censored samples from the Burr X distribution and discussed the asymptotic properties of these estimators.

The rest of the paper is organized as follows. In Section 2, we obtain the Fisher information matrices of EG distribution. In Section 3, we derive MLEs of EG distribution and study its properties. In Sections 4 to 7, we describe other methods of estimations. Simulation results and discussions are provided in Section 8.

2. Fisher Information Matrix

2.1. Fisher Information Matrix for Complete Sample

Let X be a continuous random variable with cumulative distribution function (c.d.f.) F(x; θ, λ) and probability density function (p.d.f.) f(x; θ, λ). For simplicity, we consider only the two parameters θ and λ, although the results hold for any finite-dimensional parameter vector. Under the standard regularity conditions (Gupta and Kundu [19]), the Fisher information matrix for the parameter vector (θ, λ) based on a single observation is given in (2.1) in terms of the expected values of the first and second derivatives of the log-likelihood function. We now derive the Fisher information matrix of EG(θ, λ) under a complete sample. Differentiating the log-density (2.3) with respect to θ and λ, respectively, and then taking second derivatives, the elements of the Fisher information matrix for a single observation from EG(θ, λ) take the forms (2.6), (2.7), and (2.8), with the auxiliary quantities defined in (2.9). Moreover, the Fisher information matrix for a complete sample of size n from EG(θ, λ) is simply n times the single-observation matrix.
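The single-observation matrix I(θ, λ) = E[−∂² ln f / ∂γ∂γᵀ] can also be evaluated numerically, which gives an independent check on the closed-form entries (2.6)-(2.8). A sketch (the differentiation step h and the integration grid are our choices):

```python
import math

def eg_logpdf(x, theta, lam):
    """Log-density of EG(theta, lam)."""
    g = 1.0 - (1.0 + lam * x) * math.exp(-lam * x)
    return (math.log(theta) + 2.0 * math.log(lam) + math.log(x)
            - lam * x + (theta - 1.0) * math.log(g))

def eg_pdf(x, theta, lam):
    return math.exp(eg_logpdf(x, theta, lam))

def fisher_complete(theta, lam, h=1e-4, upper=50.0, steps=50000):
    """Single-observation Fisher matrix: central second differences of
    log f in (theta, lam), integrated against the density."""
    def hess(x):
        f = lambda t, l: eg_logpdf(x, t, l)
        m = f(theta, lam)
        dtt = (f(theta + h, lam) - 2.0 * m + f(theta - h, lam)) / h ** 2
        dll = (f(theta, lam + h) - 2.0 * m + f(theta, lam - h)) / h ** 2
        dtl = (f(theta + h, lam + h) - f(theta + h, lam - h)
               - f(theta - h, lam + h) + f(theta - h, lam - h)) / (4.0 * h ** 2)
        return dtt, dtl, dll
    dx = upper / steps
    I = [0.0, 0.0, 0.0]
    for k in range(1, steps):
        x = k * dx
        w = eg_pdf(x, theta, lam) * dx
        dtt, dtl, dll = hess(x)
        I[0] -= dtt * w   # (theta, theta) entry
        I[1] -= dtl * w   # off-diagonal entry
        I[2] -= dll * w   # (lam, lam) entry
    return [[I[0], I[1]], [I[1], I[2]]]
```

Since the log-density depends on θ only through ln θ + (θ − 1) ln G, the (θ, θ) entry must equal 1/θ², which is a quick sanity check on the numerics.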

2.2. Fisher Information Matrix under Type II Censoring

Let X_1, X_2, ..., X_n be a random sample of size n from EG(θ, λ). In life-time analysis, n items are put on test, and the test continues until the rth smallest outcome is observed, r ≤ n. Thus, we observe the r smallest order statistics X_(1) ≤ X_(2) ≤ ... ≤ X_(r), which are called the Type II censored data.

Write r = [np], where [np] denotes the integer part of np, so that r/n → p as n → ∞. Denote the Fisher information matrix contained in the first r order statistics by I_p(θ, λ) (see Zheng [11]); its entries in the case of the EG(θ, λ) distribution are defined below.

The estimates based on X_(1), ..., X_(r) are, under suitable conditions, asymptotically normal, with asymptotic covariance matrix equal to the inverse of I_p(θ, λ).

Assuming the regularity conditions hold, the Fisher information I_p(γ) about any finite-dimensional parameter vector γ contained in the first p-fraction of the order statistics can be expressed as in (2.11), where ξ_p is the pth percentile of F, the superscript T denotes the transpose, and h is the hazard function.

If there is no censoring (p = 1), then (2.11) reduces to the usual Fisher information in a single variable, (2.1).

In the following, we use (2.11) to obtain the Fisher information matrix under Type II censoring for the EG(θ, λ) distribution. For 0 < p < 1, denote the Fisher information matrix by (2.12), whose entries can be obtained from (2.11).

It can be shown that the log-density takes the form (2.13). Differentiating (2.13) with respect to θ and λ, respectively, we obtain (2.14). Thus, it is easy to see, for 0 < p < 1, that the elements of I_p(θ, λ) for EG(θ, λ) are given by (2.15), (2.16), and (2.17). It follows, by (2.15), that the percentage of Fisher information about θ lost due to censoring is independent of θ and is a decreasing function of p.
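If (2.11) is the hazard-based representation of the censored-sample information, that is, the integral of (∂ ln h/∂γ)(∂ ln h/∂γ)ᵀ f(x) over (0, ξ_p) as in Zheng [11], then its θ-entry can be checked numerically. A sketch for λ = 1 (the step sizes, grids, and names are our choices, and the hazard-integral reading of (2.11) is our assumption):

```python
import math

def eg_pdf(x, theta):
    """EG(theta, 1) density."""
    g = 1.0 - (1.0 + x) * math.exp(-x)
    return theta * x * math.exp(-x) * g ** (theta - 1.0)

def eg_cdf(x, theta):
    return (1.0 - (1.0 + x) * math.exp(-x)) ** theta

def log_hazard(x, theta):
    return math.log(eg_pdf(x, theta)) - math.log(1.0 - eg_cdf(x, theta))

def percentile(p, theta, lo=1e-9, hi=200.0):
    """xi_p = F^{-1}(p) by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if eg_cdf(mid, theta) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def info_theta(p, theta, h=1e-5, steps=20000):
    """theta-entry of the hazard-based information integral over (0, xi_p):
    integral of (d ln h / d theta)^2 * f(x) dx."""
    xi = percentile(p, theta)
    dx = xi / steps
    total = 0.0
    for k in range(1, steps):
        x = k * dx
        d = (log_hazard(x, theta + h) - log_hazard(x, theta - h)) / (2.0 * h)
        total += d * d * eg_pdf(x, theta) * dx
    return total
```

For θ = 2 the complete-sample value is 1/θ² = 0.25, and info_theta(p, 2.0) increases toward it as p → 1, consistent with censoring only losing information.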

2.3. Fisher Information Matrix under Type I Censoring

If the observation of X is right censored at a fixed time point T, that is, one observes min(X, T), then the Fisher information for the parameter vector based on a censored observation is given by (2.18).

The Fisher information matrix of EG(θ, λ) under Type I censoring can be derived in the same manner as under Type II censoring.

3. Maximum Likelihood Estimators

3.1. Maximum Likelihood Estimators for Complete Sample

In this section, the maximum likelihood estimators (MLEs) of the EG(θ, λ) parameters are considered. First, we consider the case when both θ and λ are unknown. Let X_1, ..., X_n be a random sample of size n from EG(θ, λ); then the log-likelihood function is given by (3.1), and the normal equations are (3.2) and (3.3). It follows from (3.2) that, for fixed λ, the MLE of θ is a function of λ given in (3.4). Substituting this function into (3.1), we obtain the profile log-likelihood of λ, (3.5). Therefore, the MLE of λ can be obtained by maximizing (3.5) with respect to λ, as in (3.6). Once the MLE of λ is obtained, the MLE of θ follows from (3.4).

Now, we state the asymptotic normality results needed to obtain the asymptotic variances of the different parameters: √n times the estimation error of the pair of MLEs converges in distribution to a bivariate normal with mean zero and covariance matrix equal to the inverse of the information matrix (2.1), whose elements are given by (2.6), (2.7), and (2.8).

Now, consider the MLE of θ when the scale parameter λ is known. Without loss of generality, we can take λ = 1. If λ is known, the MLE of θ has a closed form: n divided by −Σ_i ln[1 − (1 + X_i)e^(−X_i)]. It follows, by the asymptotic properties of the MLE, that √n times its estimation error is asymptotically normal with variance equal to the reciprocal of the single-observation information about θ defined in (2.6).

Now, note that if the X_i's are independently and identically distributed EG(θ, 1), then −θ ln[1 − (1 + X_i)e^(−X_i)] follows a standard exponential distribution, so the sum −Σ_i ln[1 − (1 + X_i)e^(−X_i)] follows a gamma distribution with shape n and scale 1/θ. Therefore, using (3.8), an unbiased estimate of θ can be obtained from (3.11). Let us now consider the MLE of λ when the shape parameter θ is known. For known θ, the MLE of λ can be obtained by numerically solving (3.13). It follows, by the asymptotic properties of the MLE, that √n times its estimation error is asymptotically normal with variance equal to the reciprocal of the single-observation information about λ defined in (2.8).
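Because T = −Σ_i ln G(X_i) has a gamma(n, 1/θ) distribution when λ = 1, E[1/T] = θ/(n − 1), so the MLE n/T is biased upward while (n − 1)/T is exactly unbiased, an estimator in the spirit of (3.11). A quick Monte Carlo check (the sampler and names are ours):

```python
import math, random

def log_g(x):
    return math.log(1.0 - (1.0 + x) * math.exp(-x))

def sample_eg(theta, n, rng):
    """Inverse-c.d.f. sampling from EG(theta, 1) by bisection on G."""
    out = []
    for _ in range(n):
        target = rng.random() ** (1.0 / theta)
        lo, hi = 0.0, 200.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if 1.0 - (1.0 + mid) * math.exp(-mid) < target:
                lo = mid
            else:
                hi = mid
        out.append(0.5 * (lo + hi))
    return out

def estimates(theta, n, reps, seed=0):
    """Average the MLE n/T and the corrected (n-1)/T over replications,
    where T = -sum ln G(x_i)."""
    rng = random.Random(seed)
    mle_sum = ube_sum = 0.0
    for _ in range(reps):
        t = -sum(log_g(x) for x in sample_eg(theta, n, rng))
        mle_sum += n / t
        ube_sum += (n - 1) / t
    return mle_sum / reps, ube_sum / reps
```

With θ = 2 and n = 10, the average MLE settles near nθ/(n − 1) ≈ 2.22 while the corrected estimator settles near 2.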

3.2. Maximum Likelihood Estimators under Censored Samples

Let X_(1) ≤ ... ≤ X_(r) be the Type II censored data; then the likelihood function is given by Lawless [20] as (3.14). Next, let the lifetimes be independent and each observation be censored at a fixed time; the resulting observations are the Type I censored data, and the likelihood function is given by Lawless [20] as (3.15).

We now turn to the computationally more complicated case of censored data. Type I and Type II censoring are treated simultaneously, since they lead to the same form of likelihood function above. We deal with the MLE under Type II censoring from EG(θ, λ); the treatment is the same for Type I censoring.

In life testing, under Type II censoring from EG(θ, λ), the log-likelihood function is (3.16), and the normal equations become (3.18) and (3.19). The MLEs of θ and λ can be obtained by solving numerically the two nonlinear equations (3.18) and (3.19).

The MLEs of θ and λ based on Type II censored data are strongly consistent and asymptotically normal (see Zheng [11]); the asymptotic covariance matrix is the inverse of the Fisher information matrix (2.12), whose elements are given by (2.15), (2.16), and (2.17).

Now, consider the MLE of θ based on Type II censored data when the scale parameter λ is known; without loss of generality, we take λ = 1. The MLE of θ can then be obtained by solving numerically the nonlinear equation (3.18). It follows, by the asymptotic properties of the MLE, that its asymptotic variance is the reciprocal of the information quantity defined in (2.15).

Let us consider the MLE of λ when the shape parameter θ is known. For known θ, the MLE of λ can be obtained by solving numerically the nonlinear equation (3.19). It follows, by the asymptotic properties of the MLE, that its asymptotic variance is the reciprocal of the information quantity defined in (2.16).

4. Estimators Based on Percentiles

If the data come from a distribution function that has a closed form, then it is quite natural to estimate the unknown parameters by fitting a straight line to the theoretical points obtained from the distribution function and the sample percentile points. Murthy et al. [21] discussed this method for the Weibull distribution, while Gupta and Kundu [22] studied it for the generalized exponential distribution.

First, let us consider the case when both parameters are unknown. Since F(x) = [1 − (1 + λx)e^(−λx)]^θ, we have ln F(x) = θ ln[1 − (1 + λx)e^(−λx)]. Let X_(1) ≤ ... ≤ X_(n) be the order statistics of a sample from EG(θ, λ). If p_i denotes some estimate of F(x_(i)), then the estimates of θ and λ can be obtained by minimizing the sum of squares Σ_i (ln p_i − θ ln[1 − (1 + λx_(i))e^(−λx_(i))])² with respect to θ and λ. We call these estimators percentile estimators (PCEs); they can be obtained by solving numerically the two nonlinear equations (4.4) and (4.5). Several choices of p_i can be used here (see Murthy et al. [21]). In this section, we mainly consider p_i = i/(n + 1), which is the expected value of F(X_(i)).

Now, let us consider the case when one parameter is known. If the shape parameter θ is known, then the PCE of λ can be obtained from (4.5).

Now let us consider the case when the scale parameter λ is known; without loss of generality, we can assume λ = 1. If we denote G(x) = 1 − (1 + x)e^(−x), then ln F(x) = θ ln G(x). Therefore, the PCE of θ can be obtained by minimizing Σ_i (ln p_i − θ ln G(x_(i)))² with respect to θ, and hence it equals Σ_i ln p_i ln G(x_(i)) / Σ_i (ln G(x_(i)))². Interestingly, the PCE of θ is in closed form, like the MLE of θ when λ is known.

5. Least-Squares and Weighted Least-Squares Estimators

The least-squares and weighted least-squares estimators were originally proposed by Swain et al. [23] to estimate the parameters of Beta distributions. The method can be described as follows. Suppose X_1, ..., X_n is a random sample of size n from a distribution function F(·), and X_(1) ≤ ... ≤ X_(n) denotes the ordered sample. It is well known that E[F(X_(j))] = j/(n + 1) and Var[F(X_(j))] = j(n − j + 1)/((n + 1)²(n + 2)). Using these expectations and variances, two variants of the least-squares method can be used.

Method 1. The least-squares estimators of the unknown parameters can be obtained by minimizing Σ_j (F(X_(j)) − j/(n + 1))² with respect to the unknown parameters. Therefore, in the case of the EG distribution, the least-squares estimators of θ and λ can be obtained by minimizing Σ_j ([1 − (1 + λx_(j))e^(−λx_(j))]^θ − j/(n + 1))² with respect to θ and λ.
They can be obtained by solving the following nonlinear equations:

Method 2. The weighted least-squares estimators of the unknown parameters can be obtained by minimizing Σ_j w_j (F(X_(j)) − j/(n + 1))², with weights w_j = 1/Var[F(X_(j))] = (n + 1)²(n + 2)/(j(n − j + 1)), with respect to the unknown parameters. Therefore, in the case of the EG distribution, the weighted least-squares estimators of θ and λ can be obtained by minimizing the corresponding weighted sum of squares with respect to θ and λ.
They can be found by solving the following nonlinear equations:

6. Method of Moment Estimators

In this section, we provide the method of moments estimators (MMEs) of the parameters of the EG distribution. If X follows EG(θ, λ), then its first two moments are given by (6.1) and (6.2), where the auxiliary quantities are defined in (2.9).

It is well known that the principle of the method of moments is to equate the sample moments with the corresponding population moments.

From (6.1) and (6.2), we obtain the coefficient of variation (C.V.) as (6.3). The C.V. is independent of the scale parameter λ. Therefore, equating the sample C.V. with the population C.V., we obtain (6.4), with the sample mean and sample standard deviation on one side. We need to solve (6.4) to obtain the MME of θ. Once θ is estimated, we can use (6.1) to obtain the MME of λ.

If the scale parameter λ is known (without loss of generality, we assume λ = 1), then the MME of θ can be obtained by solving the nonlinear equation (6.5), that is, by equating the population mean (6.1) with λ = 1 to the sample mean.

Now consider the case when the shape parameter θ is known; the MME of λ is then given directly by (6.7). Note that (6.5) follows easily from (6.1). Although the MME of λ is not an unbiased estimator of λ, its reciprocal is an unbiased estimator of 1/λ, since the mean of EG(θ, λ) is proportional to 1/λ.

7. L-Moment Estimator

In this section, we propose a method of estimating the unknown parameters of the EG distribution based on linear combinations of order statistics (see [24, 25]). The estimators obtained by this method are popularly known as L-moment estimators (LMEs). It has been observed (see Gupta and Kundu [15]) that the LMEs have certain advantages over the conventional moment estimators.

The standard method to compute the L-moment estimators is to equate the sample L-moments with the population L-moments.

First, we discuss obtaining the LMEs when both parameters of the EG distribution are unknown. If X_(1) ≤ ... ≤ X_(n) denotes the ordered sample, then, using the same notation as in [15, 25], the first and second sample L-moments are l_1 = (1/n) Σ_i x_(i) and l_2 = (2/(n(n − 1))) Σ_i (i − 1)x_(i) − l_1. Similarly, the first two population L-moments (see David and Nagaraja [24]) are E[X] and 2E[X F(X)] − E[X], respectively.

Then, for EG(θ, λ), we obtain the population L-moments (7.3) and (7.4), where the auxiliary quantity is defined by (2.9). Therefore, the LMEs can be obtained by solving the two equations (7.5) and (7.6). First, we obtain the LME of θ as the solution of the nonlinear equation (7.7). Once the LME of θ is obtained, the LME of λ can be obtained from (7.5) as (7.8). Note that if θ or λ is known, then the LME of the other parameter is the same as the corresponding MME obtained in Section 6.

8. Numerical Experiments and Discussions

In this section, we present the results of some numerical experiments comparing the performance of the different estimators proposed in the previous sections. We use Monte Carlo simulations to compare the estimators, mainly with respect to their biases and mean squared errors (MSEs), for different sample sizes and different parameter values. Since λ is the scale parameter and all the estimators are scale invariant, we take λ = 1 in all our computations and consider several sample sizes n and several values of the shape parameter θ. We compute the bias estimates and MSEs over 1000 replications for the different cases.

First of all, we consider the estimation of θ when λ is known. In this case, the MLE, the unbiased estimator (UBE), and the PCE of θ can be obtained from (3.4), (3.11), and (4.8), respectively. The least-squares and weighted least-squares estimators of θ can be obtained by solving numerically the nonlinear equations (5.4) and (5.8), respectively. The MME (and likewise the LME) of θ can be obtained by solving numerically the nonlinear equation (6.5). The results are reported in Table 1.
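The simulation scheme can be sketched for the closed-form MLE of θ with λ = 1 (1000 replications as in the paper; the seed and helper names are ours):

```python
import math, random

def log_g(x):
    return math.log(1.0 - (1.0 + x) * math.exp(-x))

def sample_eg(theta, n, rng):
    """Inverse-c.d.f. sampling from EG(theta, 1) by bisection on G."""
    out = []
    for _ in range(n):
        target = rng.random() ** (1.0 / theta)
        lo, hi = 0.0, 200.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if 1.0 - (1.0 + mid) * math.exp(-mid) < target:
                lo = mid
            else:
                hi = mid
        out.append(0.5 * (lo + hi))
    return out

def bias_mse(theta, n, reps=1000, seed=0):
    """Bias and MSE of the known-lam MLE of theta over `reps` replications."""
    rng = random.Random(seed)
    errs = []
    for _ in range(reps):
        t_stat = -sum(log_g(x) for x in sample_eg(theta, n, rng))
        errs.append(n / t_stat - theta)   # MLE error on this replication
    bias = sum(errs) / reps
    mse = sum(e * e for e in errs) / reps
    return bias, mse
```

The MSE falls roughly like 1/n, reproducing the qualitative pattern reported in Table 1.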

It is observed from Table 1 that, for each method, the MSEs decrease as the sample size increases. This indicates that all the methods produce asymptotically unbiased and consistent estimators of the shape parameter θ for known λ. Moreover, the MLE overestimates θ in all cases considered, whereas the PCE underestimates it, as does the UBE for some values of θ. All the estimators have small bias except the LSE and WLSE, which are the worst in this respect. The estimates from all methods are consistent except at some small values of θ, for which the density is reversed-J shaped.

In terms of computational complexity, the UBE, MLE, and PCE are the easiest to compute: they do not involve solving nonlinear equations, whereas the LSE, WLSE, and MME each require solving a nonlinear equation by some iterative process. Comparing all the methods, we conclude that for known scale parameter, the UBE should be used for estimating θ.

The negative sign appearing in the first entry of some cells in the tables is due to the way the bias is calculated (see Abouammoh and Alshingiti [26]).

Now consider the estimation of λ when θ is known. In this case, the MLE, PCE, LSE, and WLSE of λ can be obtained by solving the nonlinear equations (3.13), (4.5), (5.5), and (5.9), respectively, whereas the MME (the LME is exactly the same) of λ can be obtained directly from (6.7). The results are reported in Table 2.

In this case, it is observed that, for a fixed sample size, the MSEs of all methods decrease as the value of θ increases. Comparing the computational complexity of the different estimators, when the shape parameter is known the PCE and MME can be computed directly, while iterative techniques are needed to compute the MLE, LSE, and WLSE. We applied the Newton-Raphson method, using Mathematica 6, to solve the required nonlinear equations. Comparing all the methods, we conclude that all the estimates are consistent except the WLSE and LSE for some values of θ.

Also, for most of the estimators, the MSEs decrease as the values of θ decrease. We recommend the PCE for estimating λ at the smaller sample sizes considered, and the LSE or WLSE at some of the larger ones. All the estimates are consistent and unbiased at the largest sample size for all values of θ.

Finally, consider the estimation of θ and λ when both are unknown. The MLE of λ can be obtained by solving the nonlinear equation (3.6); once it is obtained, the MLE of θ follows from (3.4). The PCEs of θ and λ can be obtained by solving the nonlinear equations (4.4) and (4.5). Similarly, the LSEs of θ and λ can be obtained by solving the nonlinear equations (5.4) and (5.5), and the WLSEs by solving (5.8) and (5.9). The MME or LME of θ can be obtained by solving the nonlinear equation (6.4) or (7.7), and then the MME or LME of λ can be obtained from (6.7) or (7.8). The results for θ and λ are presented in Tables 3 and 4, respectively.

It is observed from Tables 3 and 4 that, for each method, the MSEs decrease as the sample size increases. This indicates that all the methods produce asymptotically unbiased and consistent estimators of θ and λ when both are unknown.

Comparing the performance of all the estimators, it is observed that, as far as minimum bias is concerned, the MLE performs best. Considering the MSEs, the MLE and PCE perform better than the rest in most of the cases considered. The performance of the LSEs and WLSEs for θ is the worst, whether bias or MSE is considered. Moreover, it is observed from Table 4 that, for the PCE method, the MSEs of the estimate of λ depend on θ: as θ increases, these MSEs decrease. Most of the estimators are consistent, and the PCE underestimates in most of the cases considered.

Considering computational complexity, the MLEs, MMEs, and LMEs involve one-dimensional optimization, whereas the PCEs, LSEs, and WLSEs involve two-dimensional optimization. Considering all the points above, we recommend using the MLEs for estimating θ and λ when both are unknown.

Acknowledgment

The authors thank the Editor and the Referees for their helpful remarks that improved the original manuscript.