Research Article | Open Access

Mohammed Obeidat, Amjad Al-Nasser, Amer I. Al-Omari, "Estimation of Generalized Gompertz Distribution Parameters under Ranked-Set Sampling", Journal of Probability and Statistics, vol. 2020, Article ID 7362657, 14 pages, 2020. https://doi.org/10.1155/2020/7362657

Estimation of Generalized Gompertz Distribution Parameters under Ranked-Set Sampling

Academic Editor: Ramón M. Rodríguez-Dagnino
Received: 02 Apr 2020
Revised: 12 Jul 2020
Accepted: 29 Jul 2020
Published: 07 Sep 2020

Abstract

This paper studies estimation of the parameters of the generalized Gompertz distribution based on a ranked-set sample (RSS). Maximum likelihood (ML) and Bayesian approaches are considered. Approximate confidence intervals for the unknown parameters are constructed using both the normal approximation to the asymptotic distribution of the ML estimators and bootstrapping methods. Bayes estimates and credible intervals of the unknown parameters are obtained using differential evolution Markov chain Monte Carlo and Lindley’s methods. The proposed methods are compared via Monte Carlo simulation studies and an example employing real data. The performance of both the ML and Bayes estimates improves under RSS compared with simple random sampling (SRS) regardless of the sample size. Bayes estimates outperform the ML estimates for small samples, while the ML estimates perform better for moderate and large samples.

1. Introduction

The Gompertz distribution was introduced by Gompertz [1] to describe human mortality and to establish actuarial tables. It has also been found useful in the medical sciences because it gives a good fit to data coming from clinical trials on ordered subjects [2]. The Gompertz distribution has been extensively studied in the literature (see, for example, El-Din et al. [3] and the references therein). This paper focuses on the three-parameter generalized Gompertz distribution, which was proposed by El-Gohary et al. [4].

A random variable X is said to have a generalized Gompertz (GG) distribution with parameter vector \Theta = (\lambda, c, \theta), denoted as X \sim GG(\lambda, c, \theta), if its probability density function and its distribution function are given by

f(x; \Theta) = \theta \lambda e^{cx}\, e^{-(\lambda/c)(e^{cx} - 1)} \left[ 1 - e^{-(\lambda/c)(e^{cx} - 1)} \right]^{\theta - 1}, \quad x \geq 0, \qquad (1)

F(x; \Theta) = \left[ 1 - e^{-(\lambda/c)(e^{cx} - 1)} \right]^{\theta}, \quad x \geq 0, \qquad (2)

where \lambda, \theta > 0 and c \geq 0.

The GG distribution covers the generalized exponential distribution when c goes to zero and the one-parameter exponential distribution when, in addition, \theta = 1. It also covers the Gompertz distribution when \theta = 1. The GG distribution takes different shapes of the failure rate curve, namely, increasing, constant, decreasing, or bathtub, depending on the value of \theta (the shape parameter). The GG distribution is considered a strong candidate distribution for the analysis of reliability data [4] and survival data [5].
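As a computational companion to (1) and (2), the following is a minimal Python sketch of the GG density, distribution function, and an inverse-CDF sampler obtained by solving F(x) = u for x; the function names are illustrative, not from the paper, and c > 0 is assumed (the Gompertz-type case).

```python
# A minimal sketch of the GG(lambda, c, theta) pdf (1), cdf (2), and an
# inverse-CDF sampler; assumes c > 0.
import numpy as np

def gg_pdf(x, lam, c, theta):
    """Density (1), valid for x >= 0."""
    w = (lam / c) * (np.exp(c * x) - 1.0)   # Gompertz cumulative-hazard term
    u = 1.0 - np.exp(-w)                    # inner cdf term
    return theta * lam * np.exp(c * x) * np.exp(-w) * u ** (theta - 1.0)

def gg_cdf(x, lam, c, theta):
    """Distribution function (2)."""
    return (1.0 - np.exp(-(lam / c) * (np.exp(c * x) - 1.0))) ** theta

def gg_rvs(n, lam, c, theta, rng=None):
    """Draw n variates by inverting (2):
    x = log(1 - (c/lam) * log(1 - u**(1/theta))) / c, with u ~ Uniform(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    return np.log1p(-(c / lam) * np.log1p(-u ** (1.0 / theta))) / c
```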

Demir and Saracoglu [6] studied maximum likelihood estimation of the GG distribution parameters under progressively type II censored data. Ahmed [7] studied maximum likelihood and Bayesian estimation of the lifetime parameters of the GG distribution under progressively type II censored data. Based on the GG distribution, Borges [5] developed a regression model for survival data and proposed an expectation-maximization algorithm to estimate the regression parameters. Abu-Zinadah and Al-Oufi [8] studied the estimation of the GG parameters under complete samples using ML, least-squares, weighted least-squares, and percentile estimation methods. Estimation of the GG parameters under type II censored samples was studied by Abu-Zinadah and Al-Oufi [9]. The use of the GG distribution for lifetime data in the presence of a cure fraction, censored data, and covariates was studied by Martinez [10].

In contrast to the above works, this article discusses ML and Bayesian parameter estimation of the GG distribution under the ranked-set sampling (RSS) scheme. RSS is a sampling scheme proposed by McIntyre [11] to improve the estimation of the population mean. RSS is very useful in situations where precise measurement of sample units is difficult, due to high cost or time consumption, but a set of sample units can be accurately ranked at negligible cost or time. For situations where RSS techniques have been found applicable, see Barnett and Moore [12], Wolfe [13], and Frey and Zhang [14].

The RSS scheme aims to collect observations from a population that are more representative of it than other probability sampling techniques, such as simple random sampling (SRS), based on the same number of collected observations. To implement RSS of n = mr observations from a population, follow these steps:
(i) Select m SRSs of size m each, where m is chosen to be small.
(ii) Rank the units in each sample from smallest to largest. Ranking is done without actually measuring the units with respect to the variable of interest.
(iii) Actual measurement is taken only on the i-th smallest unit in the i-th sample, i = 1, \dots, m.
(iv) Repeat the previous steps r times (cycles) such that n = mr.

The resulting sample is denoted as X_{(i)j}, the i-th ranked unit in a set of size m in the j-th cycle, where i = 1, \dots, m and j = 1, \dots, r. Based on the above steps, the joint pdf of an RSS is given by the following equation (see Arnold et al. [15]):

f_{RSS}(\mathbf{x}) = \prod_{j=1}^{r} \prod_{i=1}^{m} c_i\, f(x_{(i)j}) \left[ F(x_{(i)j}) \right]^{i-1} \left[ 1 - F(x_{(i)j}) \right]^{m-i}, \qquad (3)

where c_i = \frac{m!}{(i-1)!\,(m-i)!} and f and F are the pdf and cdf of a random variable X.
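Under the assumption of perfect ranking, the RSS draw in steps (i)–(iv) can be sketched as follows; gg_rvs is the sampler sketched above, and rss_sample is an illustrative name.

```python
# A minimal sketch of drawing a ranked-set sample of size n = m*r from the
# GG distribution, following steps (i)-(iv) with perfect ranking.
import numpy as np

def rss_sample(m, r, lam, c, theta, rng=None):
    """Return an (r, m) array; entry [j, i] is X_(i+1)j, the (i+1)-th order
    statistic of the (i+1)-th set of size m in cycle j."""
    rng = np.random.default_rng() if rng is None else rng
    sample = np.empty((r, m))
    for j in range(r):                  # r cycles
        for i in range(m):              # m sets per cycle, each of size m
            s = np.sort(gg_rvs(m, lam, c, theta, rng))
            sample[j, i] = s[i]         # measure only the (i+1)-th ranked unit
    return sample
```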

The remainder of this paper is organized as follows. Section 2 discusses ML estimation and confidence intervals of the model parameters under both RSS and SRS. Bayesian estimators along with credible intervals of the parameters are discussed in Section 3. Section 4 compares the ML and Bayesian methods through a Monte Carlo simulation study. A real data example is presented in Section 5 to illustrate the proposed methods. Section 6 concludes the paper.

2. Maximum Likelihood Estimation

This section discusses parameter estimation of the GG distribution using the ML method under RSS. Moreover, interval estimation of the parameters is discussed based on the observed Fisher information matrix and the normal approximation to the asymptotic distribution of the MLEs. Bootstrap confidence intervals are also considered as an alternative to the normal approximation approach. For comparative purposes, ML and interval estimation under SRS are also investigated.

2.1. MLE under RSS

Let X_{(1)1}, \dots, X_{(m)1}, \dots, X_{(1)r}, \dots, X_{(m)r} be an RSS from GG(\lambda, c, \theta), with pdf given in (1). Let the vector

\mathbf{x} = \left( x_{(1)1}, \dots, x_{(m)1}, \dots, x_{(1)r}, \dots, x_{(m)r} \right)

be the vector of realizations. Substituting (1) and (2) in (3), the likelihood and the log-likelihood functions of an RSS from the GG distribution are

L(\Theta; \mathbf{x}) = \prod_{j=1}^{r} \prod_{i=1}^{m} c_i\, \theta\lambda\, e^{c x_{ij}}\, e^{-(\lambda/c)(e^{c x_{ij}} - 1)}\, u_{ij}^{\,i\theta - 1} \left( 1 - u_{ij}^{\theta} \right)^{m-i}, \qquad (4)

\ell(\Theta; \mathbf{x}) = \text{constant} + n \log(\theta\lambda) + c \sum_{j=1}^{r}\sum_{i=1}^{m} x_{ij} - \frac{\lambda}{c} \sum_{j=1}^{r}\sum_{i=1}^{m} \left( e^{c x_{ij}} - 1 \right) + \sum_{j=1}^{r}\sum_{i=1}^{m} (i\theta - 1) \log u_{ij} + \sum_{j=1}^{r}\sum_{i=1}^{m} (m - i) \log\left( 1 - u_{ij}^{\theta} \right), \qquad (5)

respectively, where n = mr, u_{ij} = 1 - e^{-(\lambda/c)(e^{c x_{ij}} - 1)}, and x_{ij} denotes the realization of X_{(i)j}. The first partial derivatives of \ell are as follows:

\frac{\partial \ell}{\partial \theta} = \frac{n}{\theta} + \sum_{j=1}^{r}\sum_{i=1}^{m} \left[ i \log u_{ij} - (m - i)\, \frac{u_{ij}^{\theta} \log u_{ij}}{1 - u_{ij}^{\theta}} \right], \qquad (6)

\frac{\partial \ell}{\partial \lambda} = \frac{n}{\lambda} - \frac{1}{c} \sum_{j=1}^{r}\sum_{i=1}^{m} \left( e^{c x_{ij}} - 1 \right) + \sum_{j=1}^{r}\sum_{i=1}^{m} \left[ \frac{i\theta - 1}{u_{ij}} - \frac{(m - i)\,\theta\, u_{ij}^{\theta - 1}}{1 - u_{ij}^{\theta}} \right] \frac{\partial u_{ij}}{\partial \lambda}, \qquad (7)

\frac{\partial \ell}{\partial c} = \sum_{j=1}^{r}\sum_{i=1}^{m} x_{ij} + \sum_{j=1}^{r}\sum_{i=1}^{m} \left[ \frac{\lambda}{c^2} \left( e^{c x_{ij}} - 1 \right) - \frac{\lambda}{c}\, x_{ij}\, e^{c x_{ij}} \right] + \sum_{j=1}^{r}\sum_{i=1}^{m} \left[ \frac{i\theta - 1}{u_{ij}} - \frac{(m - i)\,\theta\, u_{ij}^{\theta - 1}}{1 - u_{ij}^{\theta}} \right] \frac{\partial u_{ij}}{\partial c}, \qquad (8)

where \partial u_{ij}/\partial \lambda = \frac{1}{c}\left( e^{c x_{ij}} - 1 \right)(1 - u_{ij}) and \partial u_{ij}/\partial c = (1 - u_{ij}) \left[ \frac{\lambda}{c}\, x_{ij}\, e^{c x_{ij}} - \frac{\lambda}{c^2}\left( e^{c x_{ij}} - 1 \right) \right].

The MLEs for the parameters \lambda, c, and \theta are obtained by maximizing the likelihood in (4) or, equivalently, the log-likelihood in (5). This can be accomplished by setting the partial derivatives in (6)–(8) equal to zero and solving the resulting equations simultaneously. These equations do not have closed-form solutions; therefore, the Newton–Raphson method is used to obtain the estimates. The algorithm comprises the following steps (a sketch in code follows the list):
Step 1: start with an initial guess \Theta^{(0)} as a starting point of the iterations.
Step 2: at iteration t, do the following:
  - Evaluate the gradient \nabla \ell(\Theta^{(t)}) at \Theta^{(t)}.
  - Evaluate the observed Fisher information matrix, denoted by I(\Theta^{(t)}), at \Theta^{(t)}. The Fisher information matrix will be defined later in this section.
  - Update the parameter vector by \Theta^{(t+1)} = \Theta^{(t)} + I^{-1}(\Theta^{(t)})\, \nabla \ell(\Theta^{(t)}), where I^{-1} stands for the inverse of the Fisher information.
Step 3: repeat Step 2 until the absolute difference between \Theta^{(t+1)} and \Theta^{(t)} is less than a threshold value, usually taken to be 10^{-6}.
Step 4: the MLEs of (\lambda, c, \theta) are the parameter values at the last iteration; denote them by \hat{\Theta} = (\hat{\lambda}, \hat{c}, \hat{\theta}).
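The following is a minimal sketch of this Newton–Raphson iteration, assuming only a callable log-likelihood; it uses central finite differences in place of the analytic derivatives (6)–(8), which would normally be preferred. All names are illustrative.

```python
# A sketch of the Newton-Raphson iteration in Steps 1-4, with central
# finite differences standing in for the analytic gradient and Hessian.
import numpy as np

def num_grad(f, x, h=1e-5):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x); e[k] = h
        g[k] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def num_hess(f, x, h=1e-4):
    """Central-difference Hessian of f at x (symmetrized)."""
    d = x.size
    H = np.zeros((d, d))
    for k in range(d):
        e = np.zeros_like(x); e[k] = h
        H[:, k] = (num_grad(f, x + e, h) - num_grad(f, x - e, h)) / (2 * h)
    return 0.5 * (H + H.T)

def newton_raphson_mle(loglik, theta0, tol=1e-6, max_iter=200):
    """Maximize loglik; return the MLE and the observed information there."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        grad = num_grad(loglik, theta)
        info = -num_hess(loglik, theta)      # observed Fisher information
        theta_new = theta + np.linalg.solve(info, grad)  # I^{-1} * gradient
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new, -num_hess(loglik, theta_new)
        theta = theta_new
    return theta, -num_hess(loglik, theta)
```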

To obtain confidence intervals for the parameters, the asymptotic properties of the MLE are used. The MLE is asymptotically normal with mean equal to the true parameter value and variance–covariance matrix equal to the inverse of the observed Fisher information matrix (see Lawless [16]). The observed Fisher information matrix is defined as the matrix of second partial derivatives of the negative log-likelihood with respect to the model parameters, that is,

I(\Theta) = -\begin{pmatrix} \partial^2\ell/\partial\lambda^2 & \partial^2\ell/\partial\lambda\,\partial c & \partial^2\ell/\partial\lambda\,\partial\theta \\ \partial^2\ell/\partial c\,\partial\lambda & \partial^2\ell/\partial c^2 & \partial^2\ell/\partial c\,\partial\theta \\ \partial^2\ell/\partial\theta\,\partial\lambda & \partial^2\ell/\partial\theta\,\partial c & \partial^2\ell/\partial\theta^2 \end{pmatrix}.

Therefore, (1-\alpha)100\% confidence intervals for the model parameters are \hat{\lambda} \pm z_{\alpha/2}\sqrt{I^{11}}, \hat{c} \pm z_{\alpha/2}\sqrt{I^{22}}, and \hat{\theta} \pm z_{\alpha/2}\sqrt{I^{33}}, where I^{11}, I^{22}, and I^{33} are the diagonal elements of I^{-1}(\hat{\Theta}), and z_{\alpha/2} is the upper \alpha/2 quantile of the standard normal distribution.
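Continuing the sketch above, the Wald-type intervals can be computed from the inverse of the observed information returned by newton_raphson_mle; wald_ci is an illustrative name.

```python
# Wald-type (1 - alpha)100% intervals from the inverse observed Fisher
# information; theta_hat and info as returned by newton_raphson_mle above.
import numpy as np
from scipy.stats import norm

def wald_ci(theta_hat, info, alpha=0.05):
    se = np.sqrt(np.diag(np.linalg.inv(info)))  # sqrt of diagonal of I^{-1}
    z = norm.ppf(1 - alpha / 2)                 # upper alpha/2 normal quantile
    return np.column_stack((theta_hat - z * se, theta_hat + z * se))
```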

The second partial derivatives of \ell that form the elements of I(\Theta) are obtained by differentiating (6)–(8) once more with respect to each of \lambda, c, and \theta.

2.2. MLE under SRS

Let X_1, \dots, X_n be an SRS from GG(\lambda, c, \theta), with pdf given in (1), and let \mathbf{x} = (x_1, \dots, x_n) be the vector of realizations. The likelihood and the log-likelihood functions are given by

L(\Theta; \mathbf{x}) = \prod_{i=1}^{n} \theta\lambda\, e^{c x_i}\, e^{-(\lambda/c)(e^{c x_i} - 1)}\, u_i^{\,\theta - 1},

\ell(\Theta; \mathbf{x}) = n \log(\theta\lambda) + c \sum_{i=1}^{n} x_i - \frac{\lambda}{c} \sum_{i=1}^{n} \left( e^{c x_i} - 1 \right) + (\theta - 1) \sum_{i=1}^{n} \log u_i,

respectively, where u_i = 1 - e^{-(\lambda/c)(e^{c x_i} - 1)}. The first partial derivatives of \ell are as follows:

\frac{\partial \ell}{\partial \theta} = \frac{n}{\theta} + \sum_{i=1}^{n} \log u_i, \qquad (16)

\frac{\partial \ell}{\partial \lambda} = \frac{n}{\lambda} - \frac{1}{c} \sum_{i=1}^{n} \left( e^{c x_i} - 1 \right) + (\theta - 1) \sum_{i=1}^{n} \frac{1}{u_i}\, \frac{\partial u_i}{\partial \lambda}, \qquad (17)

\frac{\partial \ell}{\partial c} = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} \left[ \frac{\lambda}{c^2} \left( e^{c x_i} - 1 \right) - \frac{\lambda}{c}\, x_i\, e^{c x_i} \right] + (\theta - 1) \sum_{i=1}^{n} \frac{1}{u_i}\, \frac{\partial u_i}{\partial c}, \qquad (18)

where \partial u_i/\partial \lambda and \partial u_i/\partial c are as given in Section 2.1 with x_{ij} replaced by x_i.

The MLEs of the model parameters based on SRS are obtained by setting the partial derivatives in (16)–(18) equal to zero. Clearly, this system of equations does not have a closed-form solution; therefore, the Newton–Raphson method described in Section 2.1 is used.

The second partial derivatives of \ell that form the elements of I(\Theta) under SRS are obtained by differentiating (16)–(18) once more with respect to each parameter.

2.3. Bootstrap Confidence Interval

Constructing confidence intervals for the model parameters using the normal approximation may not work well when the sample size is small. Resampling methods are alternatives that may provide more accurate approximate confidence intervals. One popular resampling method is bootstrapping. This section discusses the percentile bootstrap (Boot-p) confidence interval proposed by Efron [17]. The Boot-p interval can be described as follows (a sketch in code follows the list):
(i) Select a random sample (whether RSS or SRS) from the population and obtain the MLE \hat{\Theta} of the model parameters as discussed in Section 2.
(ii) Based on the specified sampling scheme (RSS or SRS), generate a bootstrap random sample from the GG distribution with parameters \hat{\Theta}.
(iii) Obtain the MLE of the model parameters based on the bootstrap sample and denote this bootstrap estimate by \hat{\Theta}^{*}.
(iv) Repeat the second and third steps B times to obtain \hat{\Theta}^{*}_{1}, \dots, \hat{\Theta}^{*}_{B}.
(v) Arrange the above estimates in ascending order to obtain the ordered estimates \hat{\Theta}^{*}_{(1)}, \dots, \hat{\Theta}^{*}_{(B)}.
(vi) A (1-\alpha)100\% confidence interval is then obtained from the (\alpha/2) and (1 - \alpha/2) empirical percentiles of the bootstrap estimates obtained in the previous step.
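A minimal sketch of the Boot-p procedure, reusing the Newton–Raphson sketch above; make_sample and loglik_factory are hypothetical callables that draw a parametric resample under the chosen scheme (RSS or SRS) and build the corresponding log-likelihood for a given data set.

```python
# A sketch of the Boot-p interval: resample from GG(theta_hat) under the
# same scheme, re-estimate, and take empirical percentiles.
import numpy as np

def boot_p_ci(theta_hat, make_sample, loglik_factory, B=1000, alpha=0.05):
    boot = np.empty((B, theta_hat.size))
    for b in range(B):
        data = make_sample(theta_hat)        # step (ii): parametric resample
        boot[b], _ = newton_raphson_mle(     # step (iii): bootstrap MLE
            loglik_factory(data), theta_hat)
    lo, hi = np.percentile(                  # step (vi): percentile interval
        boot, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return np.column_stack((lo, hi))
```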

3. Bayesian Estimation

In this section, Bayes estimates and Bayesian credible intervals of the parameters are obtained using Markov chain Monte Carlo (MCMC) methods under both RSS and SRS.

Two important components of a Bayesian analysis are the choice of the prior distribution of the parameters and the loss function. The prior distribution reflects the knowledge or information about the parameters of interest available before collecting the data. If there is no such knowledge, then a weakly informative prior can be used. The loss function measures the loss incurred when estimating a parameter by an estimator and is used as a criterion for good estimators.

Independent gamma priors are assumed for the parameters, that is,

\pi(\Theta) = \pi_1(\lambda)\, \pi_2(c)\, \pi_3(\theta) \propto \lambda^{a_1 - 1} e^{-b_1 \lambda}\; c^{a_2 - 1} e^{-b_2 c}\; \theta^{a_3 - 1} e^{-b_3 \theta}, \quad \lambda, c, \theta > 0. \qquad (21)

For the gamma priors to be weakly informative, the hyperparameters a_i and b_i, i = 1, 2, 3, are assumed to equal a small value such as 0.001. Bayesian inference is then obtained based on the posterior distribution, the distribution of the parameters given the data \mathbf{x}, that is,

\pi(\Theta \mid \mathbf{x}) = \frac{L(\Theta; \mathbf{x})\, \pi(\Theta)}{\int L(\Theta; \mathbf{x})\, \pi(\Theta)\, d\Theta},

where L(\Theta; \mathbf{x}) is the likelihood function.
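For concreteness, the following is a sketch of the resulting log-posterior (up to the normalizing constant) with these weakly informative gamma priors; loglik stands for either of the log-likelihoods of Section 2, and the function name is illustrative.

```python
# Sketch of the log-posterior, up to an additive constant, with independent
# Gamma(0.001, 0.001) priors on each of (lambda, c, theta).
import numpy as np

def log_posterior(params, loglik, a=0.001, b=0.001):
    lam, c, theta = params
    if lam <= 0 or c <= 0 or theta <= 0:
        return -np.inf                      # respect the parameter space
    log_prior = sum((a - 1) * np.log(p) - b * p for p in (lam, c, theta))
    return loglik(params) + log_prior
```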

In this work, we use the most widely used loss function in Bayesian inference, the squared error loss (SEL) function, which is given by

L(\hat{\theta}, \theta) = (\hat{\theta} - \theta)^2.

Bayes’ estimator of the parameter \theta based on the SEL is the posterior mean, that is,

\hat{\theta}_{B} = E(\theta \mid \mathbf{x}).

3.1. Bayesian Estimation under RSS

The joint posterior distribution of the parameters \lambda, c, and \theta under RSS can be obtained by combining the likelihood in (4) and the prior in (21) via Bayes’ theorem. Up to a normalizing constant, it can be written as

\pi(\Theta \mid \mathbf{x}) \propto L_{RSS}(\Theta; \mathbf{x})\, \lambda^{a_1 - 1} e^{-b_1 \lambda}\, c^{a_2 - 1} e^{-b_2 c}\, \theta^{a_3 - 1} e^{-b_3 \theta}. \qquad (25)

Using this posterior, one can obtain Bayes’ estimator of any function g(\Theta) of the parameters by finding the posterior mean, that is,

E[g(\Theta) \mid \mathbf{x}] = \frac{\int g(\Theta)\, L(\Theta; \mathbf{x})\, \pi(\Theta)\, d\Theta}{\int L(\Theta; \mathbf{x})\, \pi(\Theta)\, d\Theta}. \qquad (26)

Clearly, the posterior distribution involves intractable integrals because the likelihood function under RSS is complicated. Therefore, a Markov chain Monte Carlo (MCMC) method is proposed to obtain Bayes’ estimates of the parameters. MCMC methods generate samples from the joint posterior density function and use them to compute the Bayes estimates of the parameters of interest. To implement the MCMC methodology, we consider the Metropolis–Hastings (M-H) sampler, summarized in the following steps:
Step 1: start with an initial guess \Theta^{(0)} as a starting point of the M-H sampler.
Step 2: choose a proposal kernel that is easy to sample from and that has the main characteristics of the posterior. Denote this proposal by q(\cdot \mid \cdot).
Step 3: for t = 1, 2, \dots, do the following:
  - Sample a proposed realization \Theta^{*} from q(\cdot \mid \Theta^{(t-1)}).
  - Calculate the acceptance ratio r = \min\left\{ 1, \frac{\pi(\Theta^{*} \mid \mathbf{x})\, q(\Theta^{(t-1)} \mid \Theta^{*})}{\pi(\Theta^{(t-1)} \mid \mathbf{x})\, q(\Theta^{*} \mid \Theta^{(t-1)})} \right\}.
  - Set \Theta^{(t)} = \Theta^{*} with probability r, and otherwise, set \Theta^{(t)} = \Theta^{(t-1)}.
Step 4: repeat Step 3 for a large number of iterations, say T, until convergence is assured.

In our simulations, an independent normal kernel was used as the proposal distribution. The mean of this proposal is taken to equal the previously sampled value, and its standard deviation equals the square root of the inverse of the observed Fisher information scaled by a factor of 2.4/\sqrt{d}, where d is the dimension of the parameter space (see Gelman et al. [18]). A sketch of this sampler follows.
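This is a minimal sketch of the sampler just described, assuming log_post is the log-posterior sketched earlier and sd holds the Fisher-information standard errors; rw_mh is an illustrative name.

```python
# A sketch of the random-walk M-H sampler of Steps 1-4 with a normal
# proposal centered at the current state and scaled by 2.4/sqrt(d).
import numpy as np

def rw_mh(log_post, theta0, sd, n_iter=10000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    scale = 2.4 / np.sqrt(d)                 # Gelman et al. scaling factor
    chain = np.empty((n_iter, d))
    lp = log_post(theta)
    for t in range(n_iter):
        prop = theta + scale * sd * rng.standard_normal(d)  # normal kernel
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept w.p. min(1, ratio)
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain
```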

The abovementioned M-H algorithm is called the random-walk M-H (RW-M-H), owing to the randomness with which it proposes a new realization from the posterior. One of the main drawbacks of the RW-M-H algorithm in complex posteriors is slow convergence due to dependency between the parameters. In our simulations, we noticed that the RW-M-H algorithm did not attain the true nominal coverage of the Bayesian credible intervals. Therefore, the differential evolution M-H (DE-M-H) developed by Braak [19] is used to improve the performance of the M-H algorithm. Braak [19] stated that the main advantage of the DE-M-H is its ability to handle issues such as nonconvergence, collinear parameters, and multimodal densities. The DE-M-H comprises running multiple chains, say N, which are initialized from overdispersed states. The main feature is that the proposed value in each chain uses information from two randomly selected chains among the remaining chains; this allows the chains to learn from each other throughout the process. The DE-M-H can be implemented as follows. Set t = 1.
Step 1: the N chains are initialized, \Theta^{(0)}_{i}, i = 1, \dots, N. Braak [19] stated that the algorithm works well for N between 2d and 3d.
Step 2: for i = 1, \dots, N:
  - The proposal for the i-th chain, \Theta^{*}_{i}, is \Theta^{*}_{i} = \Theta^{(t-1)}_{i} + \gamma \left( \Theta^{(t-1)}_{r_1} - \Theta^{(t-1)}_{r_2} \right) + \epsilon.
  - The proposed value is accepted with probability \min\{1, r\}, with r = \frac{\pi(\Theta^{*}_{i} \mid \mathbf{x})}{\pi(\Theta^{(t-1)}_{i} \mid \mathbf{x})}.
Step 3: repeat Step 2 for a large number of iterations, say T.

For the DE-M-H algorithm, define the following: \Theta^{(t-1)}_{i} is the previous state of the i-th chain; \Theta^{(t-1)}_{r_1} and \Theta^{(t-1)}_{r_2} are the previous states of two chains selected randomly, without replacement, from the remaining chains excluding the i-th chain, which ensures that the chains learn from each other; \epsilon is drawn from a normal (0, s^2) distribution, where s is chosen to be small (in our simulations, s is taken to equal the standard deviations obtained from the observed Fisher information); and \gamma is a scaling factor used to provide an acceptable acceptance probability, with default choice \gamma = 2.38/\sqrt{2d} (see Gelman et al. [18]). A sketch of the DE-M-H sampler follows.
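A minimal sketch of DE-M-H under the choices just stated (jitter scale s from the observed information, \gamma = 2.38/\sqrt{2d}); de_mh and its arguments are illustrative names.

```python
# A sketch of DE-M-H as in Steps 1-3: N parallel chains; each proposal moves
# chain i by gamma times the difference of two other randomly chosen chains,
# plus a small jitter epsilon ~ N(0, s^2).
import numpy as np

def de_mh(log_post, inits, s, n_iter=10000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    chains = np.asarray(inits, dtype=float)   # shape (N, d), overdispersed starts
    N, d = chains.shape
    gamma = 2.38 / np.sqrt(2 * d)             # default scaling factor
    lp = np.array([log_post(c) for c in chains])
    out = np.empty((n_iter, N, d))
    for t in range(n_iter):
        for i in range(N):
            r1, r2 = rng.choice([k for k in range(N) if k != i], 2, replace=False)
            prop = (chains[i] + gamma * (chains[r1] - chains[r2])
                    + s * rng.standard_normal(d))   # DE proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp[i]:
                chains[i], lp[i] = prop, lp_prop
        out[t] = chains
    return out
```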

The resulting posterior samples can be used to find the posterior estimates and credible intervals as follows. The Bayes estimate of \theta under the SEL is

\hat{\theta}_{B} = \frac{1}{T - B} \sum_{t=B+1}^{T} \theta^{(t)},

where B is the number of burn-in iterations and \theta^{(t)} is the t-th posterior draw of \theta.

In addition to the point estimator \hat{\theta}_{B} of \theta, a Bayesian credible interval can be obtained from the posterior samples. One popular credible interval is the highest posterior density (HPD) credible interval. The HPD interval can be constructed from the empirical cumulative distribution function (cdf) of the posterior samples as the shortest interval for which the difference in the empirical cdf values of the endpoints is the desired nominal probability.
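Equivalently, the HPD interval is the shortest interval containing a (1 - \alpha) fraction of the sorted draws, which the following sketch computes directly; hpd_interval is an illustrative name.

```python
# A sketch of the HPD interval: the shortest interval containing a
# (1 - alpha) fraction of the sorted posterior draws for one parameter.
import numpy as np

def hpd_interval(samples, alpha=0.05):
    x = np.sort(np.asarray(samples))
    n = x.size
    k = int(np.ceil((1 - alpha) * n))      # draws inside the interval
    widths = x[k - 1:] - x[:n - k + 1]     # widths of all candidate intervals
    j = np.argmin(widths)                  # index of the shortest one
    return x[j], x[j + k - 1]
```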

3.2. Bayesian Estimation under SRS

The Bayesian approach under SRS is similar to the one under RSS. The only change is that the SRS likelihood is used in place of the RSS likelihood in the formula of the posterior distribution in (25). The DE-M-H method was again used to obtain posterior samples and make Bayesian inference.

3.3. Lindley’s Approximation for Bayesian Estimates

Another way of obtaining Bayesian estimates is by approximating the ratio of integrals in (26). Many methods have been proposed in the literature to approximate such ratios of integrals. One popular method was proposed by Lindley [20]. Lindley’s procedure is outlined as follows.

The ratio of integrals is written in the following form:

E[g(\Theta) \mid \mathbf{x}] = \frac{\int g(\Theta)\, e^{Q(\Theta)}\, d\Theta}{\int e^{Q(\Theta)}\, d\Theta},

where Q(\Theta) = \ell(\Theta) + \rho(\Theta) is the log posterior distribution and \rho(\Theta) is the log prior. By expanding Q(\Theta) in a Taylor series about the posterior mode \hat{\Theta}, Lindley obtained the Bayes estimator of g(\Theta) to be

E[g(\Theta) \mid \mathbf{x}] \approx g + \frac{1}{2} \sum_{i} \sum_{j} \left( g_{ij} + 2 g_i \rho_j \right) \sigma_{ij} + \frac{1}{2} \sum_{i} \sum_{j} \sum_{k} \sum_{l} L_{ijk}\, \sigma_{ij}\, \sigma_{kl}\, g_l. \qquad (31)

All functions are evaluated at the posterior mode \hat{\Theta}. The summations in (31) are over all subscripts from 1 to d, the dimension of the parameter vector \Theta. The subscripts denote partial derivatives of the functions with respect to the corresponding components of \Theta, i.e., g_i = \partial g/\partial \Theta_i and g_{ij} = \partial^2 g/\partial \Theta_i\, \partial \Theta_j, L_{ijk} is the third partial derivative of the log-likelihood, \rho_j = \partial \rho/\partial \Theta_j, and \sigma_{ij} are the elements of the negative of the inverse of the Hessian matrix of Q(\Theta).

In our case, assuming g(\Theta) = \lambda and with \Theta = (\Theta_1, \Theta_2, \Theta_3) = (\lambda, c, \theta), for example, equation (31) can be written as

E[\lambda \mid \mathbf{x}] \approx \hat{\lambda} + \sum_{j=1}^{3} \rho_j\, \sigma_{1j} + \frac{1}{2} \sum_{i=1}^{3} \sum_{j=1}^{3} \sum_{k=1}^{3} L_{ijk}\, \sigma_{ij}\, \sigma_{k1},

since g_1 = 1 and all other first and second derivatives of g vanish.
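A numeric sketch of this special case, evaluated at the ML estimate with \sigma taken from the inverse observed information (a common simplification of the posterior-mode expansion) and with the third derivatives L_{ijk} obtained by differencing the Hessian; num_hess is the finite-difference helper sketched in Section 2, and log_prior_grad is a hypothetical callable returning the gradient of \rho.

```python
# A numeric sketch of Lindley's approximation (31) for g(Theta) = lambda.
import numpy as np

def lindley_lambda(loglik, log_prior_grad, theta_hat, h=1e-4):
    d = theta_hat.size
    sigma = np.linalg.inv(-num_hess(loglik, theta_hat))  # sigma_{ij} matrix
    L3 = np.zeros((d, d, d))                             # L_{ijk} by differencing
    for k in range(d):
        e = np.zeros(d); e[k] = h
        L3[:, :, k] = (num_hess(loglik, theta_hat + e)
                       - num_hess(loglik, theta_hat - e)) / (2 * h)
    rho = log_prior_grad(theta_hat)                      # gradient of log prior
    g1 = np.zeros(d); g1[0] = 1.0                        # g(Theta) = lambda
    term1 = float(rho @ (sigma @ g1))                    # sum_j rho_j sigma_{1j}
    term2 = 0.5 * np.einsum('ijk,ij,kl,l->', L3, sigma, sigma, g1)
    return theta_hat[0] + term1 + term2
```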

4. Simulation Study

In order to assess the performance of the proposed estimation methods (ML and Bayesian) under the SRS and RSS schemes, a Monte Carlo simulation study of 5000 samples is conducted. Comparisons are made based on the bias, mean square error (MSE), coverage probability (CP), and half length (HL) of the confidence intervals. Different combinations of the parameter values are considered so as to cover different shapes of the probability density function of the GG distribution (see Figure 1). Since the conclusions were similar for almost all combinations of the parameter values, the results for two of these combinations are presented. For each set of parameter values, four different sample sizes are studied: one small, one moderate, and two large.

Bayesian estimates are obtained using the DE-M-H method with N chains, each of length 10000. The first half of each chain is treated as burn-in. The results of the simulation study are summarized in Tables 1–8. The following are observed:
(i) Bayesian estimation using the RW-M-H algorithm produces a coverage probability of the credible interval lower than the nominal rate even for large sample sizes (see Table 8). This is because the RW-M-H method did not explore the full domain of the posterior distributions of the parameters. The trace plots of the MCMC chains for a randomly selected sample show that the DE-M-H chain covers a wider range of parameter values than the one obtained using RW-M-H (see Figure 2). Therefore, DE-M-H is used in our simulations to obtain Bayesian estimates and credible intervals.
(ii) Bayesian estimation using DE-M-H has better coverage probability, and it still has a lower MSE than the RW-M-H.
(iii) For small sample sizes, the Bayesian approach produces a smaller MSE than the ML method. This holds under both RSS and SRS.
(iv) For moderate sample sizes, the ML method produces a smaller MSE than the Bayesian method.
(v) In terms of the coverage probability (CP) and half length (HL) of the confidence intervals, the Bayesian approach produces better results (more accurate CP and smaller HL) than the normal approximation of the ML approach for small and moderate sample sizes.
(vi) Confidence intervals constructed using bootstrapping provide more accurate coverage probability with shorter half length than confidence intervals obtained using the normal approximation for small and moderate sample sizes. Both methods perform almost the same as the sample size increases.
(vii) Bootstrap confidence intervals were very comparable to Bayesian credible intervals in terms of half length and coverage probability.
(viii) The DE-M-H method provided a smaller MSE than Lindley’s method for small samples. Both methods performed almost the same for larger sample sizes.
(ix) The performance of both the Bayesian and ML methods improves when using RSS compared with SRS, even for a small set size m. The performance improves as m increases.
(x) As the sample size gets large, the Bayesian and ML methods tend to perform almost the same under RSS and SRS.


[Table: bias and MSE of the MLE and Bayes estimates of the three parameters under SRS and under RSS with set sizes m = 2 and m = 5, for the four sample sizes, first parameter combination.]

[Table: bias and MSE of the MLE and Bayes estimates of the three parameters under SRS and under RSS with set sizes m = 2 and m = 5, for the four sample sizes, second parameter combination.]