#### Abstract

We deal with the problem of estimating the parameters of the generalized Lindley distribution. Besides the classical estimators, inverse moment and modified inverse moment estimators are proposed and their properties are investigated. A condition for the existence and uniqueness of the inverse moment and modified inverse moment estimates of the parameters is established. Monte Carlo simulations are conducted to compare the estimators’ performances. Two methods for constructing joint confidence regions for the two parameters are also proposed and their performances are discussed. A real example is presented to illustrate the proposed methods.

#### 1. Introduction

Lindley [1] originally introduced the Lindley distribution to illustrate a difference between fiducial distribution and posterior distribution. This distribution is becoming increasingly popular for modeling lifetime data and is widely applicable in survival and reliability analysis, as it has closed forms for the survival and hazard functions and good flexibility of fit. Its density function is given by
$$f(x;\theta)=\frac{\theta^{2}}{1+\theta}(1+x)e^{-\theta x},\quad x>0,\ \theta>0.$$
We denote this by writing $X\sim L(\theta)$. The Lindley distribution is a mixture of an exponential distribution with scale $\theta$ and a gamma distribution with shape $2$ and scale $\theta$, where the mixing proportion is $\theta/(1+\theta)$.

Ghitany et al. [2] provided a comprehensive treatment of the statistical properties of the Lindley distribution. Mazucheli and Achcar [3] used the Lindley distribution as a good alternative for analyzing lifetime data within the competing risks approach, compared with the standard exponential or even the Weibull distribution commonly used in this area. Krishna and Kumar [4] considered reliability estimation in the Lindley distribution with progressively type II right censored samples. Al-Mutairi et al. [5] dealt with the estimation of the stress-strength parameter when the variables are independent Lindley random variables with different shape parameters.

Some researchers have proposed and studied new classes of distributions based on the Lindley distribution; see, for example, Sankaran [6], Ghitany et al. [7], Bakouch et al. [8], Shanker et al. [9], and Ghitany et al. [10]. In this paper, we focus on the generalized Lindley distribution (GLD) introduced by Nadarajah et al. [11]. It has the attractive feature of allowing for monotonically decreasing, monotonically increasing, and bathtub shaped hazard rate functions while not allowing for constant hazard rate functions. It has better hazard rate properties than the gamma, lognormal, and Weibull distributions.

The cumulative distribution function and the probability density function are, respectively, given by
$$F(x;\alpha,\theta)=\left[1-\frac{1+\theta+\theta x}{1+\theta}e^{-\theta x}\right]^{\alpha},$$
$$f(x;\alpha,\theta)=\frac{\alpha\theta^{2}}{1+\theta}(1+x)e^{-\theta x}\left[1-\frac{1+\theta+\theta x}{1+\theta}e^{-\theta x}\right]^{\alpha-1},\quad x>0,$$
where $\alpha>0$ and $\theta>0$ are two parameters. We denote this distribution as $GLD(\alpha,\theta)$. When $\alpha=1$, the generalized Lindley distribution reduces to the one parameter Lindley distribution.
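For concreteness, the cdf and pdf above are easy to evaluate numerically. The following Python sketch (function names are ours, not from any standard library) implements them under the parameterization of this section:

```python
import numpy as np

# Sketch of the GLD(alpha, theta) cdf and pdf given above; names are ours.
def gld_cdf(x, alpha, theta):
    """F(x) = [1 - (1 + theta + theta*x)/(1 + theta) * exp(-theta*x)]^alpha."""
    x = np.asarray(x, dtype=float)
    base = 1.0 - (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)
    return base ** alpha

def gld_pdf(x, alpha, theta):
    """f(x) = alpha*theta^2/(1+theta) * (1+x) * exp(-theta*x) * base^(alpha-1)."""
    x = np.asarray(x, dtype=float)
    base = 1.0 - (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)
    return (alpha * theta ** 2 / (1.0 + theta) * (1.0 + x)
            * np.exp(-theta * x) * base ** (alpha - 1.0))
```

Setting $\alpha=1$ recovers the one parameter Lindley density, which provides a quick sanity check on the implementation.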

Singh et al. [12] developed Bayesian estimation for the generalized Lindley distribution under squared error and general entropy loss functions in the case of a complete sample of observations. Singh et al. [13] considered the generalized Lindley distribution under a progressive type II censoring scheme which allows the removal of live units from a life-test, with a beta-binomial probability law, during the execution of the experiment.

Nadarajah et al. [11] considered the classical maximum-likelihood estimation of the parameters of the generalized Lindley distribution. The results showed that the bias is not satisfactory, especially for small or even moderate sample sizes. As for the moment estimates, two nonlinear equations need to be solved simultaneously, and the existence and uniqueness of the roots are neither clear nor guaranteed.

In this paper, we consider the problem of estimating the two parameters of the generalized Lindley distribution. We propose inverse moment and modified inverse moment estimators and study their properties. The conditions of the existence and uniqueness of the estimators are established. Monte Carlo simulations are used to compare the performances of the estimators. We also investigate the methods for constructing joint confidence regions for the two parameters and study their performances.

The rest of this paper is organized as follows. In Section 2, we briefly review the classical maximum-likelihood estimation of the parameters of the generalized Lindley distribution. In Section 3, the moment estimator is discussed. In Section 4, we propose two new methods of estimating the parameters and study their properties. Joint confidence regions for the two parameters are proposed in Section 5. Section 6 conducts simulations to assess the methods. Finally, in Section 7, a real example is presented to illustrate the proposed methods.

#### 2. Maximum-Likelihood Estimation

In this section, we briefly review the MLEs of the parameters of the GLD distribution. Let $X_{1},\dots,X_{n}$ be a random sample from $GLD(\alpha,\theta)$ with pdf and cdf as in (3) and (2), respectively. The log-likelihood function is given by
$$\ell(\alpha,\theta)=n\log\alpha+2n\log\theta-n\log(1+\theta)+\sum_{i=1}^{n}\log(1+x_{i})-\theta\sum_{i=1}^{n}x_{i}+(\alpha-1)\sum_{i=1}^{n}\log G(x_{i};\theta),$$
where $G(x;\theta)=1-\frac{1+\theta+\theta x}{1+\theta}e^{-\theta x}$. The score equations are thus as follows:
$$\frac{\partial\ell}{\partial\alpha}=\frac{n}{\alpha}+\sum_{i=1}^{n}\log G(x_{i};\theta)=0,$$
$$\frac{\partial\ell}{\partial\theta}=\frac{2n}{\theta}-\frac{n}{1+\theta}-\sum_{i=1}^{n}x_{i}+(\alpha-1)\sum_{i=1}^{n}\frac{\theta x_{i}\left(2+\theta+x_{i}+\theta x_{i}\right)e^{-\theta x_{i}}}{(1+\theta)^{2}\,G(x_{i};\theta)}=0.$$

From (6) we obtain the MLE of $\alpha$ as a function of $\theta$:
$$\hat{\alpha}(\theta)=\frac{-n}{\sum_{i=1}^{n}\log G(x_{i};\theta)}.$$
The MLE of $\theta$ is the root of the following equation:
$$\frac{2n}{\theta}-\frac{n}{1+\theta}-\sum_{i=1}^{n}x_{i}+\left(\hat{\alpha}(\theta)-1\right)\sum_{i=1}^{n}\frac{\theta x_{i}\left(2+\theta+x_{i}+\theta x_{i}\right)e^{-\theta x_{i}}}{(1+\theta)^{2}\,G(x_{i};\theta)}=0.$$

Such a nonlinear equation does not have a closed-form solution. We can apply a numerical method such as the Newton-Raphson method to compute $\hat{\theta}$.
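As an illustration, the computation can be organized by profiling out $\alpha$ via (7) and maximizing the resulting profile log-likelihood in $\theta$. The sketch below is our own and uses bounded Brent minimization from scipy rather than Newton-Raphson; the search bounds are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hedged sketch of the MLE: alpha_hat(theta) = -n / sum(log G_i(theta)) from (7),
# then maximize the profile log-likelihood over theta numerically.
def gld_mle(x, theta_bounds=(1e-3, 50.0)):
    x = np.asarray(x, dtype=float)
    n = len(x)

    def neg_profile_loglik(theta):
        G = 1.0 - (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)
        s = np.sum(np.log(G))          # always negative, so alpha_hat > 0
        alpha = -n / s
        ll = (n * np.log(alpha) + 2 * n * np.log(theta) - n * np.log(1 + theta)
              + np.sum(np.log(1 + x)) - theta * np.sum(x) + (alpha - 1) * s)
        return -ll

    res = minimize_scalar(neg_profile_loglik, bounds=theta_bounds, method="bounded")
    theta_hat = float(res.x)
    G = (1.0 - (1.0 + theta_hat + theta_hat * x) / (1.0 + theta_hat)
         * np.exp(-theta_hat * x))
    return -n / np.sum(np.log(G)), theta_hat
```

The one-dimensional bounded search avoids the sensitivity of Newton-Raphson to starting values at the cost of requiring a bracketing interval.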

#### 3. Moment Estimation of Parameters

In this section, we discuss the moment estimation (MOM) of the parameters of the GLD distribution. Let $X_{1},\dots,X_{n}$ be a random sample from $GLD(\alpha,\theta)$ with pdf and cdf as in (3) and (2), respectively, and let $x_{1},\dots,x_{n}$ be the observed values. Let $m_{k}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{k}$, $k=1,2$, be the sample moments. For the population moments, we need the following lemma (Nadarajah et al. [11]).

Lemma 1. *Let $X\sim GLD(\alpha,\theta)$. Then the moments $E(X^{k})$, $k=1,2,\dots$, are given by the explicit series expressions of Nadarajah et al. [11].*

Let $X$ denote a $GLD(\alpha,\theta)$ random variable. It follows from Lemma 1 that the first two population moments $E(X)$ and $E(X^{2})$ are explicit functions of $\alpha$ and $\theta$. By equating the population moments with the sample moments, we obtain the equations $E(X)=m_{1}$ and $E(X^{2})=m_{2}$. The method of moments estimators are the roots of these two equations. As with the MLEs, such nonlinear equations do not have closed-form solutions. We can apply a numerical method such as the Newton-Raphson method to determine the roots.
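A hedged numerical shortcut (ours, sidestepping the series expressions of Lemma 1): compute the first two population moments by numerical integration of the pdf and solve the two moment equations, working on the log scale so that both parameters stay positive.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def gld_pdf(t, alpha, theta):
    base = 1.0 - (1.0 + theta + theta * t) / (1.0 + theta) * np.exp(-theta * t)
    return (alpha * theta ** 2 / (1.0 + theta) * (1.0 + t)
            * np.exp(-theta * t) * base ** (alpha - 1.0))

def gld_mom(x, start=(0.0, 0.0)):
    """Method of moments: match E(X) and E(X^2) to the sample moments."""
    x = np.asarray(x, dtype=float)
    m1, m2 = x.mean(), (x ** 2).mean()

    def eqs(p):
        alpha, theta = np.exp(p)       # log-scale parameters stay positive
        mu1 = quad(lambda t: t * gld_pdf(t, alpha, theta), 0, np.inf)[0]
        mu2 = quad(lambda t: t ** 2 * gld_pdf(t, alpha, theta), 0, np.inf)[0]
        return [mu1 - m1, mu2 - m2]

    return np.exp(fsolve(eqs, start))
```

The closed-form series moments of [11] are faster and more accurate; the quadrature version is only meant to make the two-equation structure concrete.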

#### 4. Inverse Moment Estimation of Parameters

Unlike the regular method of moments, the idea of the inverse moment estimation (IME) is as follows: for a given random sample $X_{1},\dots,X_{n}$ from a distribution with unknown parameters, first transform the original sample to a quasisample $Y_{1},\dots,Y_{n}$, where each $Y_{i}$ contains the unknown parameters but its distribution does not depend on them; that is, $Y_{i}$ is a pivotal variable. The population moments of the quasisample do not depend on the unknown parameters while the sample moments do. Let the population moments of the quasisample equal the sample moments and solve for the unknown parameters.

Let $X_{1},\dots,X_{n}$ form a sample from $GLD(\alpha,\theta)$ with pdf given in (3). It is known that $F(X_{i};\alpha,\theta)$, $i=1,\dots,n$, follow the uniform distribution $U(0,1)$, and thus $-\log F(X_{i};\alpha,\theta)$, $i=1,\dots,n$, follow the standard exponential distribution $Exp(1)$. By the method of inverse moment estimation, we let
$$\frac{1}{n}\sum_{i=1}^{n}\left(-\log F(X_{i};\alpha,\theta)\right)=1;$$
that is,
$$-\alpha\sum_{i=1}^{n}\log G(X_{i};\theta)=n,\qquad G(x;\theta)=1-\frac{1+\theta+\theta x}{1+\theta}e^{-\theta x}.$$
Thus, the IME of $\alpha$ is obtained as a function of $\theta$,
$$\hat{\alpha}(\theta)=\frac{-n}{\sum_{i=1}^{n}\log G(X_{i};\theta)},$$
which is identical to the MLE of $\alpha$. In the following, we determine the IME of $\theta$.
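The identity above is easy to check numerically; the sketch below (names ours) computes the IME of $\alpha$ for a given $\theta$ directly from the transformed sample.

```python
import numpy as np

# IME of alpha given theta: -log F(X_i) = -alpha * log G(X_i; theta) are
# standard exponential, so equating their sample mean to 1 gives
# alpha_hat(theta) = -n / sum(log G(x_i; theta)).
def ime_alpha(x, theta):
    x = np.asarray(x, dtype=float)
    G = 1.0 - (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)
    return -len(x) / np.sum(np.log(G))
```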

Lemma 2. *Let $X_{(1)}\le X_{(2)}\le\cdots\le X_{(n)}$ be the order statistics from the standard exponential distribution. Then, the random variables $W_{1},\dots,W_{n}$, where $W_{i}=(n-i+1)\left(X_{(i)}-X_{(i-1)}\right)$ with $X_{(0)}=0$, are independent and follow standard exponential distributions.*

*Proof. *The proof can be found in Arnold et al. [14].

Lemma 3. *Let $T_{1},\dots,T_{n}$ be iid standard exponential variables, $S_{i}=\sum_{j=1}^{i}T_{j}$, $i=1,\dots,n$, and $V_{i}=\left(S_{i}/S_{i+1}\right)^{i}$, $i=1,\dots,n-1$; then*(1)*$V_{1},\dots,V_{n-1},S_{n}$ are independent;*(2)*$V_{1},\dots,V_{n-1}$ follow the uniform distribution $U(0,1)$;*(3)*$2S_{n}$ follows $\chi^{2}(2n)$.*

*Proof. *The proof can be found in Wang [15].

For the sample $X_{1},\dots,X_{n}$ from $GLD(\alpha,\theta)$, considering the order statistics $X_{(1)}\le\cdots\le X_{(n)}$, we have that $-\alpha\log G(X_{(n)};\theta)\le\cdots\le-\alpha\log G(X_{(1)};\theta)$, with $G(x;\theta)=1-\frac{1+\theta+\theta x}{1+\theta}e^{-\theta x}$, are the $n$ order statistics from the standard exponential distribution.

Let $Y_{(i)}=-\alpha\log G(X_{(n-i+1)};\theta)$, $i=1,\dots,n$, where $Y_{(0)}=0$. Thus, $Y_{(1)}\le\cdots\le Y_{(n)}$ are the $n$ order statistics from the standard exponential distribution. By Lemma 2, the normalized spacings $W_{i}=(n-i+1)\left(Y_{(i)}-Y_{(i-1)}\right)$, $i=1,\dots,n$, form a sample from the standard exponential distribution.

Let $S_{i}=\sum_{j=1}^{i}W_{j}$, $i=1,\dots,n$, $V_{i}=\left(S_{i}/S_{i+1}\right)^{i}$, $i=1,\dots,n-1$, and $U(\theta)=-2\sum_{i=1}^{n-1}\log V_{i}$; by Lemma 3, we have $U(\theta)\sim\chi^{2}(2n-2)$, where the factor $\alpha$ cancels in the ratios $S_{i}/S_{i+1}$, so $U$ depends on the data and $\theta$ only. Note that the mean of the $\chi^{2}(2n-2)$ distribution is $2n-2$. Thus, we obtain an inverse moment equation for $\theta$ as follows:
$$U(\theta)=2n-2.$$
Solving this equation, we obtain the inverse moment estimate $\hat{\theta}_{IME}$. Plugging $\hat{\theta}_{IME}$ into (15), we obtain the inverse moment estimate $\hat{\alpha}_{IME}$. In addition, considering that the mode of the $\chi^{2}(2n-2)$ distribution is $2n-4$, we can obtain a modified equation for $\theta$:
$$U(\theta)=2n-4.$$
Solving this equation, we obtain the modified inverse moment estimate $\hat{\theta}_{MIME}$. Plugging $\hat{\theta}_{MIME}$ into (15), we obtain the modified inverse moment estimate $\hat{\alpha}_{MIME}$. Unlike the moment estimation, here we only need to solve one nonlinear equation instead of two.
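Under our reading of the construction above (normalized spacings of the transformed order statistics, with $\alpha$ cancelling in the ratios $S_i/S_{i+1}$), the statistic $U(\theta)$ and the IME/MIME equations can be sketched as follows; the function names and bracketing interval are our own assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def U_stat(theta, x):
    """U(theta) = -2 * sum_i log V_i built from the spacings of the transformed
    order statistics; alpha cancels in the ratios, so U depends on theta only."""
    x = np.asarray(x, dtype=float)
    c = (1.0 + theta + theta * x) / (1.0 + theta) * np.exp(-theta * x)
    z = np.sort(-np.log1p(-c))                 # ordered -log G, log1p for stability
    n = len(z)
    w = (n - np.arange(n)) * np.diff(np.concatenate(([0.0], z)))  # Lemma 2 spacings
    s = np.cumsum(w)
    i = np.arange(1, n)
    return 2.0 * np.sum(i * np.log(s[1:] / s[:-1]))  # = -2 sum log (S_i/S_{i+1})^i

def ime_theta(x, lo=1e-3, hi=50.0):
    n = len(x)
    return brentq(lambda th: U_stat(th, x) - (2 * n - 2), lo, hi)  # mean equation

def mime_theta(x, lo=1e-3, hi=50.0):
    n = len(x)
    return brentq(lambda th: U_stat(th, x) - (2 * n - 4), lo, hi)  # mode equation
```

By Theorem 6, $U(\theta)$ is monotone in $\theta$, so a simple bracketing root-finder suffices for both equations.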

In the following, we prove the existence and uniqueness of the roots of (20) and (21).

Lemma 4. *The following limits hold:*(1)*One has*(2)*One has*(3)*One has*

Lemma 5. *For , is a decreasing function of .*

Theorem 6. *Let $W_{i}$, $i=1,\dots,n$, form a sample from the standard exponential distribution; then, for the values of $t$ in question, the equation $U(\theta)=t$ has a unique positive solution.*

*Proof. *By Lemma 4, we obtain Thus, . On the other hand, Thus, . Therefore, for , equation has one positive solution. For the uniqueness of the solution, we consider the derivative of with respect to . Note that, for , where By Cauchy’s mean-value theorem, for , , there exist and such that Note that , by Lemma 5, , , thus is a strictly increasing function of , and equation has a unique positive solution.

#### 5. Joint Confidence Regions for and

Let $X_{1},\dots,X_{n}$ form a sample from $GLD(\alpha,\theta)$, and let $X_{(1)}\le\cdots\le X_{(n)}$ be the order statistics. Let $Y_{(i)}=-\alpha\log G(X_{(n-i+1)};\theta)$, $i=1,\dots,n$, with $G(x;\theta)=1-\frac{1+\theta+\theta x}{1+\theta}e^{-\theta x}$. Thus, $Y_{(1)}\le\cdots\le Y_{(n)}$ are the $n$ order statistics from the standard exponential distribution. By Lemma 2, the normalized spacings $W_{i}$, $i=1,\dots,n$, form a sample from the standard exponential distribution. Let $S_{i}=\sum_{j=1}^{i}W_{j}$, $V_{i}=\left(S_{i}/S_{i+1}\right)^{i}$, $i=1,\dots,n-1$, and $U=-2\sum_{i=1}^{n-1}\log V_{i}$. Hence $U\sim\chi^{2}(2n-2)$ and $2S_{n}\sim\chi^{2}(2n)$. It is obvious that $U$ and $S_{n}$ are independent. Define
$$F=\frac{U/(2n-2)}{2S_{n}/(2n)},\qquad Q=U+2S_{n}.$$
We obtain that $F$ and $Q$ are independent using the known bank-post office story in statistics.

Let $F_{\gamma}(m,k)$ denote the percentile of the $F$ distribution with left-tail probability $\gamma$ and $(m,k)$ degrees of freedom. Let $\chi^{2}_{\gamma}(k)$ denote the percentile of the $\chi^{2}$ distribution with left-tail probability $\gamma$ and $k$ degrees of freedom.
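The percentiles just defined are available in scipy; the snippet below is a sketch with placeholder arguments (the nominal level $\gamma=0.05$ and $n=20$ are illustrative, and the degrees of freedom follow the pivots of this section).

```python
from scipy.stats import f, chi2

gamma, n = 0.05, 20                               # placeholders
f_lo = f.ppf(gamma / 2, 2 * n - 2, 2 * n)         # F percentile, dfs (2n-2, 2n)
f_hi = f.ppf(1 - gamma / 2, 2 * n - 2, 2 * n)
c_lo = chi2.ppf(gamma / 2, 2 * n)                 # chi-square percentile, df 2n
c_hi = chi2.ppf(1 - gamma / 2, 2 * n)
```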

By using the pivotal variables $F$ and $Q$, a joint confidence region for the two parameters $\alpha$ and $\theta$ can be constructed as follows.

Theorem 7 (method 1). *Let $X_{1},\dots,X_{n}$ form a sample from $GLD(\alpha,\theta)$; then, based on the pivotal variables $F$ and $Q$, a joint confidence region for the two parameters is determined by the following inequalities, where $\theta_{1}$ and $\theta_{2}$ are the roots in $\theta$ of the equations obtained by setting the pivotal quantity equal to its lower and upper percentiles, respectively.*

*Proof. *The pivotal quantity involved is a function of $\theta$ only and does not depend on $\alpha$. From Theorem 6 and the limits in Lemma 4, for any $t>0$, the corresponding equation has a unique positive root in $\theta$.

On the other hand, by Lemma 3, we have that $U$ and $2S_{n}$ are also independent. By using the pivotal variables $U$ and $2S_{n}$, a joint confidence region for the two parameters $\alpha$ and $\theta$ can be constructed as follows.

Theorem 8 (method 2). *Let $X_{1},\dots,X_{n}$ form a sample from $GLD(\alpha,\theta)$; then, based on the pivotal variables $U$ and $2S_{n}$, a joint confidence region for the two parameters is determined by the following inequalities, where $\theta_{1}$ and $\theta_{2}$ are the roots in $\theta$ of the equations obtained by setting $U(\theta)$ equal to its lower and upper percentiles, respectively.*

*Proof. *$U$ is a function of $\theta$ only and does not depend on $\alpha$. From Theorem 6, for any $t>0$, the equation $U(\theta)=t$ has a unique positive root in $\theta$.

#### 6. Simulation Study

##### 6.1. Comparison of the Four Estimation Methods

In this section, we conduct simulations to compare the performances of the MIMEs, IMEs, MLEs, and MOMs mainly with respect to their biases and mean squared errors (MSEs), for various sample sizes and for various true parametric values.

Random data from the $GLD(\alpha,\theta)$ distribution can be generated as follows:
$$X=-1-\frac{1}{\theta}-\frac{1}{\theta}W_{-1}\left((1+\theta)\left(U^{1/\alpha}-1\right)e^{-(1+\theta)}\right),$$
where $U$ follows the uniform distribution over $(0,1)$ and $W_{-1}$, giving the solution for $w$ in $we^{w}=z$ with $w\le-1$, is the negative branch of the Lambert $W$ function; see Jodrá [16].
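A sketch of this generator in Python, using `scipy.special.lambertw` with branch $k=-1$; the formula follows our reading of the inverse cdf and should be checked against Jodrá [16].

```python
import numpy as np
from scipy.special import lambertw

def gld_rvs(alpha, theta, size, seed=None):
    """Draw GLD(alpha, theta) variates by inverting the cdf: with V = U^(1/alpha),
    X = -1 - 1/theta - W_{-1}((1+theta)(V-1)e^{-(1+theta)}) / theta."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=size) ** (1.0 / alpha)
    w = lambertw((1.0 + theta) * (v - 1.0) * np.exp(-(1.0 + theta)), k=-1)
    return np.real(-1.0 - 1.0 / theta - w / theta)
```

The argument of $W_{-1}$ lies in $(-1/e,0)$, so the branch is well defined and the returned variates are positive.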

We obtain $\hat{\theta}_{MLE}$ by solving (8) and $\hat{\alpha}_{MLE}$ by (7). $\hat{\alpha}_{MOM}$ and $\hat{\theta}_{MOM}$ can be obtained by solving (11) and (12) simultaneously. $\hat{\theta}_{IME}$ and $\hat{\theta}_{MIME}$ can be obtained by solving (20) and (21), respectively. $\hat{\alpha}_{IME}$ and $\hat{\alpha}_{MIME}$ can then be obtained from (15).

We consider several sample sizes and several true parameter values in our computations. For each combination of sample size and parameter value, we generate a sample of size $n$ from $GLD(\alpha,\theta)$ and estimate the parameters $\alpha$ and $\theta$ by the MLE, MOM, IME, and MIME methods. The average values of $\hat{\alpha}$ and $\hat{\theta}$ as well as the corresponding MSEs over 1000 replications are computed and reported.

Table 1 reports the average values of $\hat{\alpha}$; the corresponding MSEs are reported within parentheses. Figures 1(a), 1(b), 1(c), and 1(d) show the relative biases and the MSEs of the four estimators of $\alpha$ for two of the sample-size settings, and Figures 1(e) and 1(f) show them for a third. The other cases are similar.

[Figure 1: panels (a), (c), and (e) show relative biases of $\hat{\alpha}$; panels (b), (d), and (f) show relative MSEs.]

Table 2 reports the average values of $\hat{\theta}$; the corresponding MSEs are reported within parentheses. Figures 2(a), 2(b), 2(c), and 2(d) show the relative biases and the MSEs of the four estimators of $\theta$ for two of the sample-size settings, and Figures 2(e) and 2(f) show them for a third. The other cases are similar.

[Figure 2: panels (a), (c), and (e) show relative biases of $\hat{\theta}$; panels (b), (d), and (f) show relative MSEs.]

From Tables 1 and 2, it is observed that, for all four methods, the average relative biases and the average relative MSEs decrease as the sample size increases, as expected; the asymptotic unbiasedness of all the estimators is thus verified. The average MSEs of $\hat{\alpha}$ and $\hat{\theta}$ depend on the true parameter values: for the four methods, the average relative MSEs of one parameter estimate decrease as the true value goes up while those of the other increase, so that one parameter is estimated more accurately at smaller true values and the other at larger true values. MOM, MLE, and IME overestimate both parameters $\alpha$ and $\theta$; MIME overestimates only one of them.

As far as the biases and MSEs are concerned, it is clear that MIME works best in all the cases considered for estimating the two parameters. It is followed in performance by IME, MLE, and MOM, especially for small sample sizes. The four methods are comparable for larger sample sizes.

Considering all these points, MIME is recommended for estimating both parameters of the $GLD(\alpha,\theta)$ distribution. MOM is not recommended.

##### 6.2. Comparison of the Two Joint Confidence Regions

In Section 5, two methods to construct the confidence regions of the two parameters and are proposed. In this section, we conduct simulations to compare the two methods.

First, we assess the precision of the two interval estimators for the parameter $\theta$. We consider several sample sizes and true parameter values in our computations. For each combination of sample size and parameter value, we generate a sample of size $n$ from $GLD(\alpha,\theta)$ and estimate the parameter by the two proposed methods, (32) and (35).

The mean widths as well as the coverage rates over 1000 replications are computed and reported. Here the coverage rate is defined as the proportion of the 1000 confidence intervals that contain the true value. The results are reported in Table 3.
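The coverage-rate and mean-width entries are produced by a loop of the following shape. The sketch below substitutes a textbook chi-square pivot for an exponential sample (not the GLD pivots of Section 5) purely to illustrate the bookkeeping; all names and values are ours.

```python
import numpy as np
from scipy.stats import chi2

def coverage_and_width(theta=1.5, n=20, reps=2000, level=0.95, seed=0):
    """Coverage rate and mean width of the interval from the pivot
    2 * theta * sum(X) ~ chi2(2n) for an exponential(theta) sample."""
    rng = np.random.default_rng(seed)
    lo_q = chi2.ppf((1 - level) / 2, 2 * n)
    hi_q = chi2.ppf(1 - (1 - level) / 2, 2 * n)
    hits, widths = 0, []
    for _ in range(reps):
        s = rng.exponential(scale=1.0 / theta, size=n).sum()
        lo, hi = lo_q / (2 * s), hi_q / (2 * s)
        hits += (lo <= theta <= hi)
        widths.append(hi - lo)
    return hits / reps, float(np.mean(widths))
```

Replacing the exponential pivot with $U(\theta)$ and $2S_n$ reproduces the Table 3 computation for the GLD.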

It is observed that the mean widths of the intervals decrease as sample sizes increase, as expected. The mean widths of the intervals also decrease as the parameter increases. The coverage rates of the two methods are close to the nominal level 0.95. Considering the mean widths, the interval estimate of $\theta$ obtained by method 2 performs better than that obtained by method 1. Method 2 for constructing the interval estimate of $\theta$ is recommended.

Next, we consider the two joint confidence regions and their empirical coverage rates and expected areas. The results of the methods for constructing joint confidence regions for $(\alpha,\theta)$ with confidence level 0.95 are reported in Table 4.

We find that the mean areas of the joint regions decrease as sample sizes increase, as expected, and increase as the parameter increases. The coverage rates of the two methods are close to the nominal level 0.95. Considering the mean areas, the joint region of $(\alpha,\theta)$ obtained by method 2 performs better than that obtained by method 1. Method 2 is recommended.

#### 7. Real Illustrative Example

In this section, we consider a real lifetime data set (Gross and Clark [17]) giving the relief times of twenty patients receiving an analgesic. The data set has been previously analyzed by Bain and Engelhardt [18], Kumar and Dharmaja [19], Nadarajah et al. [11], and others. The relief times (in hours) are as follows: 1.1, 1.4, 1.3, 1.7, 1.9, 1.8, 1.6, 2.2, 1.7, 2.7, 4.1, 1.8, 1.5, 1.2, 1.4, 3.0, 1.7, 2.3, 1.6, 2.0.

Nadarajah et al. [11] fitted the data with the generalized Lindley distribution and showed that it can be a better model than those based on the gamma, lognormal, and Weibull distributions, reporting the MLEs of the parameters, the corresponding log-likelihood value, and the Kolmogorov-Smirnov distance together with its $p$ value. The MOMs of the parameters were also obtained there.
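The Kolmogorov-Smirnov check can be recomputed with scipy; the sketch below takes a data set and a fitted pair $(\alpha,\theta)$ (whatever estimates one has obtained, not the paper's reported values) and returns the KS statistic and its $p$ value.

```python
import numpy as np
from scipy.stats import kstest

def gld_ks(x, alpha, theta):
    """Kolmogorov-Smirnov test of the data against a fitted GLD(alpha, theta) cdf."""
    def cdf(t):
        t = np.asarray(t, dtype=float)
        return (1.0 - (1.0 + theta + theta * t) / (1.0 + theta)
                * np.exp(-theta * t)) ** alpha
    return kstest(x, cdf)
```

For example, `gld_ks(data, alpha_hat, theta_hat).statistic` gives the KS distance reported in goodness-of-fit tables.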

Using the methods proposed in Section 4, we obtain the corresponding point estimates. In addition, based on method 1, the 95% joint confidence region for the parameters is given by the following inequalities:

Based on method 2, the 95% joint confidence region for the parameters is given by the following inequalities. Figures 3(a) and 3(b) show the 95% joint confidence regions of $(\alpha,\theta)$.

**(a) Method 1**

**(b) Method 2**

Considering the areas of the two regions, method 2 is suggested.

#### 8. Conclusion

In this paper, we study the problem of estimating the two parameters of the generalized Lindley distribution introduced by Nadarajah et al. [11]. We propose the inverse moment estimator and modified inverse moment estimator and study their statistical properties. The existence and uniqueness of inverse moment and modified inverse moment estimates of the parameters are proved. Monte Carlo simulations are used to compare their performances. We also investigate the methods for constructing joint confidence regions for the two parameters and study their performances.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

Wenhao Gui’s work was partially supported by the program for the Fundamental Research Funds for the Central Universities (nos. 2014RC042 and 2015JBM109).