Abstract
This paper addresses the problem of estimating stress-strength reliability for the Gompertz lifetime distribution. First, the maximum likelihood estimate (MLE) and exact and asymptotic confidence intervals for the stress-strength reliability are obtained. Then, Bayes estimators under informative and noninformative prior distributions are obtained by using the Lindley approximation, Monte Carlo integration, and MCMC. Bayesian credible intervals are constructed under these prior distributions. Simulation studies are used to illustrate these inference methods. Finally, a real dataset is analyzed to show the implementation of the proposed methodologies.
1. Introduction
The stress-strength reliability is an assessment of the reliability of a component based on its strength and its stress. The idea of stress-strength reliability was introduced by Birnbaum [1] and developed further two years later by Birnbaum and McCarty [2].
The estimation of stress-strength reliability has recently attracted considerable attention; we mention a few recent studies. Qixuan and Wenhao [3] worked on the Bayesian and classical estimation of stress-strength reliability for inverse Weibull lifetime models. Abravesh et al. [4] obtained classical and Bayesian estimation of stress-strength reliability in type II censored Pareto distributions. Akgül et al. [5] presented inferences on stress-strength reliability based on ranked set sampling data in the case of the Lindley distribution. Byrnes et al. [6] performed Bayesian inference on the stress-strength reliability for the Burr type XII distribution based on progressively first-failure-censored samples. Zhang et al. [7] studied the reliability of the generalized Rayleigh distribution under progressive type II censoring.
The Gompertz distribution, first proposed by Gompertz [8], is one of the most widely used distributions in the fields of survival analysis, lifetime data, mortality tables, computer science, biology, sociology, and marketing [9–12]. Some recent studies on the Gompertz distribution include the following: [13] presented a new and practical generalization of the Gompertz distribution. [14] developed acceptance sampling plans for lot sentencing in which the quality characteristic of the products follows the Topp-Leone Gompertz distribution. An application of the Gamma-Gompertz distribution was proposed by [15]. [16] estimated the parameters of a new generalization of the Gompertz distribution and investigated the features and applications of this new model.
The study of reliability under the Gompertz distribution is an important and interesting problem. Saraçoğlu and Kaya [17] studied the MLE and confidence intervals of system reliability for the Gompertz distribution in stress-strength models. Kumar and Vaish [18] presented a study of strength reliability for Gompertz distributed stress. Jha et al. [19] obtained reliability estimation of a multicomponent stress-strength model for the unit Gompertz distribution under progressive type II censoring. Asadi et al. [20] studied inference on an adaptive progressive hybrid censored accelerated life test for the Gompertz distribution.
A brief explanation of type II censoring is given. Let $X_1, \ldots, X_n$ and $Y_1, \ldots, Y_m$ be independent random samples from the random variables $X$ and $Y$, respectively. Suppose the order statistics of these samples are $X_{(1)} \le \cdots \le X_{(n)}$ and $Y_{(1)} \le \cdots \le Y_{(m)}$. The $X_{(i)}$'s and $Y_{(j)}$'s are collected until $k$ failures and $l$ failures occur, respectively (where $k \le n$ and $l \le m$).
The rest of the article is organized as follows. In Section 2, we introduce the Gompertz distribution. In Section 3, we obtain the MLE of the stress-strength reliability $R$. In Section 4, we construct the exact and asymptotic confidence intervals for $R$. In Section 5, we calculate the Bayes estimator of $R$ by considering the conjugate informative and Jeffreys noninformative prior distributions. In Section 6, we provide Bayesian credible intervals, including equi-tailed and HPD intervals, under the conjugate informative and Jeffreys noninformative prior distributions. In Section 7, the performance of these inference methods is compared using simulation studies. Finally, Section 8 presents a real data analysis to demonstrate the application of these methods.
2. Gompertz Distribution
Let $X \sim \mathrm{Gompertz}(\gamma, \lambda_1)$ and $Y \sim \mathrm{Gompertz}(\gamma, \lambda_2)$ be two independent random variables with common shape parameter $\gamma > 0$. The probability density function (PDF) and cumulative distribution function (CDF) of $X$ are given by $$f_X(x) = \lambda_1 e^{\gamma x} \exp\Bigl\{-\frac{\lambda_1}{\gamma}\bigl(e^{\gamma x} - 1\bigr)\Bigr\}, \qquad F_X(x) = 1 - \exp\Bigl\{-\frac{\lambda_1}{\gamma}\bigl(e^{\gamma x} - 1\bigr)\Bigr\}, \qquad x > 0,$$ and those of $Y$ are obtained by replacing $\lambda_1$ with $\lambda_2$.
The reliability is calculated as follows: $$R = P(Y < X) = \int_0^{\infty} F_Y(x) f_X(x)\, dx = \frac{\lambda_2}{\lambda_1 + \lambda_2}.$$
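As a quick numerical check of this closed form, the sketch below simulates the two Gompertz variables by inverse-CDF sampling and compares the empirical $P(Y < X)$ with $\lambda_2/(\lambda_1 + \lambda_2)$. It assumes the parameterization above, with common shape $\gamma$; the function names are illustrative, not from the paper.

```python
import numpy as np

def gompertz_rvs(gamma, lam, size, rng):
    # Inverse-CDF sampling for F(x) = 1 - exp(-(lam/gamma)*(e^(gamma*x) - 1))
    u = rng.uniform(size=size)
    return np.log1p(-(gamma / lam) * np.log1p(-u)) / gamma

def reliability(lam1, lam2):
    # Closed form R = P(Y < X) when X and Y share the shape parameter gamma
    return lam2 / (lam1 + lam2)

rng = np.random.default_rng(0)
gamma, lam1, lam2 = 0.5, 1.0, 2.0
x = gompertz_rvs(gamma, lam1, 200_000, rng)   # strength
y = gompertz_rvs(gamma, lam2, 200_000, rng)   # stress
print(reliability(lam1, lam2))   # 2/3
print(np.mean(y < x))            # empirical estimate, close to 2/3
```

The shared shape $\gamma$ cancels out of $R$, which is why the closed form involves only the two rate parameters.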
3. MLE of $R$
Let $x_{(1)} \le \cdots \le x_{(k)}$ be a type II censored sample of size $k$ (out of $n$) from $\mathrm{Gompertz}(\gamma, \lambda_1)$ and $y_{(1)} \le \cdots \le y_{(l)}$ be a type II censored sample of size $l$ (out of $m$) from $\mathrm{Gompertz}(\gamma, \lambda_2)$. Suppose these two samples are independent. The likelihood function is given by $$L(\gamma, \lambda_1, \lambda_2) \propto \prod_{i=1}^{k} f_X\bigl(x_{(i)}\bigr)\,\bigl[1 - F_X\bigl(x_{(k)}\bigr)\bigr]^{\,n-k} \prod_{j=1}^{l} f_Y\bigl(y_{(j)}\bigr)\,\bigl[1 - F_Y\bigl(y_{(l)}\bigr)\bigr]^{\,m-l},$$ where $f_X$, $F_X$, $f_Y$, and $F_Y$ were given in Section 2.
Then, the log-likelihood function is $$\ell = \mathrm{const} + k \log \lambda_1 + \gamma \sum_{i=1}^{k} x_{(i)} - \lambda_1 T_1 + l \log \lambda_2 + \gamma \sum_{j=1}^{l} y_{(j)} - \lambda_2 T_2,$$ where $T_1 = \sum_{i=1}^{k} \bigl(e^{\gamma x_{(i)}} - 1\bigr)/\gamma + (n-k)\bigl(e^{\gamma x_{(k)}} - 1\bigr)/\gamma$ and $T_2$ is defined analogously for the $y$ sample.
To obtain the MLEs of the parameters, we differentiate the log-likelihood function with respect to each parameter and set the derivatives equal to zero:
From Equations (7) and (8), we get
Now, by substituting (10) and (11) into (9), the MLE $\hat{\gamma}$ of the parameter $\gamma$ is obtained. Then, to get $\hat{\lambda}_1$ and $\hat{\lambda}_2$, we substitute $\hat{\gamma}$ into Equations (10) and (11). Therefore, the MLE of $R$ is $$\hat{R} = \frac{\hat{\lambda}_2}{\hat{\lambda}_1 + \hat{\lambda}_2}.$$
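The estimating equations above require a numerical root search for $\hat{\gamma}$. As an equivalent sketch, the type II censored log-likelihood can be maximized directly with a generic optimizer; this stands in for the paper's own scheme and assumes the parameterization of Section 2 (helper names are hypothetical).

```python
import numpy as np
from scipy.optimize import minimize

def gompertz_rvs(gamma, lam, size, rng):
    # Inverse-CDF sampling for F(x) = 1 - exp(-(lam/gamma)*(e^(gamma*x) - 1))
    u = rng.uniform(size=size)
    return np.log1p(-(gamma / lam) * np.log1p(-u)) / gamma

def neg_loglik(theta, x_obs, n, y_obs, m):
    # Negative log-likelihood of two independent type II censored Gompertz
    # samples sharing the shape gamma; x_obs and y_obs must be sorted.
    gamma, lam1, lam2 = theta
    if min(gamma, lam1, lam2) <= 0:
        return 1e10                    # penalty keeps the search feasible
    def part(t, n_total, lam):
        h = (lam / gamma) * np.expm1(gamma * t)        # cumulative hazard
        ll = np.sum(np.log(lam) + gamma * t - h)       # observed failures
        ll -= (n_total - len(t)) * (lam / gamma) * np.expm1(gamma * t[-1])
        return ll
    return -(part(x_obs, n, lam1) + part(y_obs, m, lam2))

def mle_R(x_obs, n, y_obs, m):
    res = minimize(neg_loglik, x0=np.array([0.5, 1.0, 1.0]),
                   args=(x_obs, n, y_obs, m), method="Nelder-Mead",
                   options={"maxiter": 5000, "fatol": 1e-9, "xatol": 1e-9})
    _, lam1, lam2 = res.x
    return lam2 / (lam1 + lam2)

rng = np.random.default_rng(7)
gamma, lam1, lam2, n = 0.5, 1.0, 2.0, 300       # true R = 2/3
x = np.sort(gompertz_rvs(gamma, lam1, n, rng))[:240]   # keep k = 240 failures
y = np.sort(gompertz_rvs(gamma, lam2, n, rng))[:240]
print(mle_R(x, n, y, n))   # close to the true value 2/3
```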
4. Confidence Interval of $R$
In this section, the exact and asymptotic confidence intervals for $R$ are calculated.
4.1. Exact Confidence Interval
Let $x_{(1)} \le \cdots \le x_{(k)}$ be a type II censored sample from $\mathrm{Gompertz}(\gamma, \lambda_1)$. Consider $Z_i = \lambda_1 \bigl(e^{\gamma x_{(i)}} - 1\bigr)/\gamma$, so that $Z_1 \le \cdots \le Z_k$ is a type II censored (dependent) sample from the standard exponential distribution (SED). Now, apply the following transformation to the normalized spacings:
It can be concluded that the transformed spacings are i.i.d. from the SED. Therefore, $$2\lambda_1 S_1 \sim \chi^2_{2k}, \qquad S_1 = \sum_{i=1}^{k} \frac{e^{\gamma x_{(i)}} - 1}{\gamma} + (n-k)\,\frac{e^{\gamma x_{(k)}} - 1}{\gamma}.$$
Similarly, suppose $y_{(1)} \le \cdots \le y_{(l)}$ is a type II censored sample from $\mathrm{Gompertz}(\gamma, \lambda_2)$. Define $W_j = \lambda_2 \bigl(e^{\gamma y_{(j)}} - 1\bigr)/\gamma$, where $W_1 \le \cdots \le W_l$ is a type II censored (dependent) sample from the SED. Now, apply the following transformation:
It results that $$2\lambda_2 S_2 \sim \chi^2_{2l}, \qquad S_2 = \sum_{j=1}^{l} \frac{e^{\gamma y_{(j)}} - 1}{\gamma} + (m-l)\,\frac{e^{\gamma y_{(l)}} - 1}{\gamma}.$$
Based on the independence of $S_1$ and $S_2$, it can be written that $$F = \frac{2\lambda_1 S_1/(2k)}{2\lambda_2 S_2/(2l)} = \frac{l\,\lambda_1 S_1}{k\,\lambda_2 S_2} \sim F(2k, 2l).$$
Then, a $100(1-\alpha)\%$ confidence interval for $R$ (with $\gamma$ known) is $$\left( \frac{1}{1 + F_{1-\alpha/2}(2k, 2l)\, k S_2/(l S_1)},\; \frac{1}{1 + F_{\alpha/2}(2k, 2l)\, k S_2/(l S_1)} \right),$$ where $F_q(2k, 2l)$ denotes the $q$th quantile of the $F(2k, 2l)$ distribution.
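Under the transformation used above, with $\gamma$ known, the pivot reduces to an F statistic built from the two total-time-on-test quantities. A sketch of the resulting exact interval (illustrative names; data generation assumes the parameterization of Section 2):

```python
import numpy as np
from scipy.stats import f as f_dist

def gompertz_rvs(gamma, lam, size, rng):
    u = rng.uniform(size=size)
    return np.log1p(-(gamma / lam) * np.log1p(-u)) / gamma

def ttt(sample_obs, n_total, gamma):
    # Total time on test of the exponentialized order statistics
    # T_(i) = (e^(gamma*x_(i)) - 1)/gamma under type II censoring
    t = np.sort(np.expm1(gamma * np.asarray(sample_obs)) / gamma)
    k = len(t)
    return t.sum() + (n_total - k) * t[-1], k

def exact_ci_R(x_obs, n, y_obs, m, gamma, alpha=0.05):
    # Pivot (l*lam1*S1)/(k*lam2*S2) ~ F(2k, 2l), valid for known gamma;
    # invert for lam1/lam2, then R = 1/(1 + lam1/lam2).
    S1, k = ttt(x_obs, n, gamma)
    S2, l = ttt(y_obs, m, gamma)
    scale = (k * S2) / (l * S1)
    lo_ratio = scale * f_dist.ppf(alpha / 2, 2 * k, 2 * l)
    hi_ratio = scale * f_dist.ppf(1 - alpha / 2, 2 * k, 2 * l)
    # R is decreasing in lam1/lam2, so the bounds flip
    return 1 / (1 + hi_ratio), 1 / (1 + lo_ratio)

rng = np.random.default_rng(3)
gamma, n, m = 0.5, 100, 100
x = np.sort(gompertz_rvs(gamma, 1.0, n, rng))[:80]
y = np.sort(gompertz_rvs(gamma, 2.0, m, rng))[:80]
lo, hi = exact_ci_R(x, n, y, m, gamma)
print(lo, hi)   # an interval around the true R = 2/3
```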
4.2. Asymptotic Confidence Interval
In this subsection, the asymptotic confidence interval for $R$ is calculated using the Wald statistic, which compares $\hat{R} - R$ with an estimate of the standard error of $\hat{R}$. Based on the Wald statistic, we have the following, where
Theorem 1. Let $n \to \infty$ and $m \to \infty$. Then the following asymptotic normality result holds, where
Proof. Given that the maximum likelihood estimator is asymptotically normal [21], when $n \to \infty$ and $m \to \infty$, then where Define . According to a Taylor expansion around the true parameter values, we have where the remainder term satisfies the following relation: Based on (22) and (24), when $n \to \infty$ and $m \to \infty$, then where Therefore, Equation (26) holds because, by the consistency of the MLE, the remainder term converges to zero in probability.
Corollary 2. A $100(1-\alpha)\%$ asymptotic confidence interval for $R$ is given by the following, where
Proof. By the asymptotic property of the MLE, the normalizing ratio tends to 1 in probability. On the other hand, Theorem 1 gives the asymptotic normality. Therefore, by Slutsky's theorem, the result follows.
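A compact sketch of a Wald-type interval: treating $\gamma$ as known, the transformed data are exponential and the delta method gives $\mathrm{Var}(\hat{R}) \approx \hat{R}^2 (1-\hat{R})^2 (1/k + 1/l)$. This simplification is an assumption of the sketch, not the paper's full observed-information computation.

```python
import numpy as np
from scipy.stats import norm

def gompertz_rvs(gamma, lam, size, rng):
    u = rng.uniform(size=size)
    return np.log1p(-(gamma / lam) * np.log1p(-u)) / gamma

def wald_ci_R(x_obs, n, y_obs, m, gamma, alpha=0.05):
    # Rate MLEs after the exponentializing transformation (gamma known),
    # then a delta-method standard error for R = lam2/(lam1 + lam2).
    def rate(sample, n_total):
        t = np.sort(np.expm1(gamma * np.asarray(sample)) / gamma)
        kk = len(t)
        return kk / (t.sum() + (n_total - kk) * t[-1]), kk
    lam1, k = rate(x_obs, n)
    lam2, l = rate(y_obs, m)
    r = lam2 / (lam1 + lam2)
    se = r * (1 - r) * np.sqrt(1 / k + 1 / l)
    z = norm.ppf(1 - alpha / 2)
    return max(r - z * se, 0.0), min(r + z * se, 1.0), r

rng = np.random.default_rng(11)
gamma, n, m = 0.5, 100, 100
x = np.sort(gompertz_rvs(gamma, 1.0, n, rng))[:80]
y = np.sort(gompertz_rvs(gamma, 2.0, m, rng))[:80]
lo, hi, r = wald_ci_R(x, n, y, m, gamma)
print(lo, r, hi)
```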
5. Bayesian Estimation of $R$
Bayes [22] and Laplace [23] observed that the uncertainty about the parameters of a model, represented by $\theta$, can be modeled through a probability distribution $\pi(\theta)$, called the prior distribution. With this approach, inference is based on the distribution of $\theta$ conditional on the data. This conditional distribution is called the posterior distribution. In this section, Bayesian estimation is obtained by using the conjugate informative and Jeffreys noninformative prior distributions.
5.1. Conjugate Informative Prior
Let $\lambda_1 \sim \mathrm{Gamma}(a_1, b_1)$ and $\lambda_2 \sim \mathrm{Gamma}(a_2, b_2)$, where $\lambda_1$ and $\lambda_2$ are independent. The PDFs of these priors are as follows, where $a_1, b_1$ and $a_2, b_2$ are the hyperparameters. Then, the joint prior probability distribution is
The posterior probability distribution is calculated as follows, where the likelihood factors were given in (4) and (5), respectively. For the marginal distribution of the data, we can write the following. A choice of hyperparameter values was proposed in [24], and Robert [25] suggested an empirical Bayesian approach to determining the values of the hyperparameters. According to the approach presented by Robert, to get $a_1$ and $b_1$, we maximize the following function:
We have
$a_1$ and $b_1$ are obtained by solving the following equations, where $\psi(\cdot)$ denotes the digamma function. By solving Equation (41), $b_1$ is
By substituting (42) into (40), we get
Similarly, this method can be repeated to calculate $a_2$ and $b_2$. So, $b_2$ is as follows:
Also, $a_2$ is obtained by solving the following equation:
Abravesh et al. [4] showed that Equations (43) and (45) have no solution and fixed the value of one hyperparameter in each prior to overcome this problem. The remaining hyperparameters are then obtained from the equations above.
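With one hyperparameter held fixed, the remaining shape hyperparameter solves a digamma equation of the kind referred to above. A sketch (assuming a Gamma(a, b) prior on the exponentialized rate with the scale b fixed; the exact equation in the paper may differ):

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def solve_a(k, S, b):
    # Profile empirical-Bayes equation for the Gamma shape a, with scale b
    # fixed: psi(a + k) - psi(a) = log((b + S)/b), where k is the number of
    # observed failures and S the total time on test of the transformed data.
    rhs = np.log((b + S) / b)
    g = lambda a: digamma(a + k) - digamma(a) - rhs
    # psi(a + k) - psi(a) decreases from +inf to 0 as a grows, so a root
    # exists for any rhs > 0; bracket it generously.
    return brentq(g, 1e-6, 1e6)

a_hat = solve_a(5, 10.0, 2.0)
print(a_hat)
```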
5.2. Posterior Distribution of $R$
To calculate the posterior distribution of $R$, we use the following transformations:
Transformations (46) and (47) express $(\lambda_1, \lambda_2)$ in terms of the new variables. The joint posterior distribution of the transformed variables can be calculated by the following formula:
In the above formula, the determinant is called the Jacobian and is calculated as follows:
The marginal posterior distribution of $R$ is calculated from the joint distribution in (48) as
5.3. Jeffreys Noninformative Prior
In this subsection, by using the Jeffreys noninformative prior [26], the Bayesian estimation of $R$ is obtained. The Jeffreys prior is as follows, where
Considering the Jeffreys prior, the marginal posterior distribution is given by
Similar to the process in Subsection 5.2, the marginal posterior distribution of $R$ can be obtained:
5.4. Lindley Approximation
Lindley [27] proposed a method for approximating a ratio of integrals. The Lindley approximation of the posterior expectation can be calculated using the following formula, where $\hat{\theta}$ is the MLE of $\theta$ and
Here, $L$ is the log-likelihood function and $\rho$ is the logarithm of the prior distribution.
5.4.1. Informative Prior
Based on the prior distribution (35) and the corresponding log-prior, we obtain
So,
The inverse of the Hessian matrix is given by
So, we obtain
Finally, the Lindley approximation of the Bayes estimator of $R$ is
5.4.2. Noninformative Prior
Under the Jeffreys prior, we have
Therefore, the Bayes estimator of $R$ using the Lindley approximation is
5.5. Monte Carlo Integration
The Monte Carlo integration method was introduced by Metropolis and Ulam [28] and von Neumann [29]. Let $\theta^{(1)}, \ldots, \theta^{(N)}$ be a random sample from the posterior density. In this case, according to the strong law of large numbers, for large $N$, an approximation of the posterior expectation is given by
This method is simple and does not involve complicated calculations; its only difficulty is generating a random sample from the posterior density.
Now, the Bayes estimator of $R$ is obtained using this method. Let $\bigl(\lambda_1^{(i)}, \lambda_2^{(i)}\bigr)$, $i = 1, \ldots, N$, be a random sample from the posterior; then, for large $N$,
5.5.1. Informative Prior
To calculate the Bayes estimator of $R$ under the conjugate prior (35), we sample from the posterior distribution obtained under this prior. So,
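If $\gamma$ is treated as known, the gamma priors are conjugate for the rates and the posterior draws needed by the Monte Carlo estimator can be generated directly. A sketch under that assumption (the shape/rate hyperparameters $a_i$, $b_i$ and sufficient statistics $k$, $S_1$, $l$, $S_2$ follow the notation of Section 4):

```python
import numpy as np

def bayes_R_mc(k, S1, l, S2, a1=1.0, b1=1.0, a2=1.0, b2=1.0,
               N=100_000, seed=0):
    # With gamma known, the posteriors are
    #   lam1 | data ~ Gamma(a1 + k, b1 + S1)  (shape, rate)
    #   lam2 | data ~ Gamma(a2 + l, b2 + S2)
    # and the Bayes estimate of R is the posterior mean of lam2/(lam1+lam2).
    rng = np.random.default_rng(seed)
    lam1 = rng.gamma(a1 + k, 1 / (b1 + S1), N)   # numpy uses shape/scale
    lam2 = rng.gamma(a2 + l, 1 / (b2 + S2), N)
    return float(np.mean(lam2 / (lam1 + lam2)))

est = bayes_R_mc(20, 25.0, 20, 25.0)   # symmetric data, so R is near 0.5
print(est)
```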
5.5.2. Noninformative Prior
Under the Jeffreys prior, we sample from the corresponding posterior distribution. Then,
5.6. MCMC
To solve the stated problem of the Monte Carlo integration method, a more general method is used to generate approximate random variables from the posterior distribution, called the Markov chain Monte Carlo (MCMC) method [30]. The Metropolis-Hastings algorithm is used to create Markov chains with a given stationary distribution. The algorithm was first developed by Metropolis et al. [30] in statistical mechanics. A few years later, Hastings generalized the algorithm in a more statistical setting [31]. Using the MCMC method, the following integral is approximated:
Let $\theta^{(1)}, \ldots, \theta^{(N)}$ be an ergodic MCMC sample from the posterior distribution; then we have
5.6.1. Informative Prior
Considering the conjugate prior distribution (35) for $\lambda_1$ and $\lambda_2$, we obtain the posterior distribution (50). By generating an ergodic sample from it using the Metropolis-Hastings algorithm, the Bayesian estimator is as follows:
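A minimal random-walk Metropolis-Hastings sketch. For checkability it targets the tractable conjugate posterior with $\gamma$ known (so the chain can be validated against direct sampling); the paper's chains target the posterior in (50), which is generally not of this simple form.

```python
import numpy as np

def mh_sample_R(k, S1, l, S2, a=1.0, b=1.0, N=20_000, burn=5_000, seed=1):
    # Random-walk MH on (log lam1, log lam2) targeting
    # Gamma(a + k, b + S1) x Gamma(a + l, b + S2); returns draws of
    # R = lam2/(lam1 + lam2) after burn-in.
    rng = np.random.default_rng(seed)

    def logpost(u1, u2):
        # log target in the log-parameterization (Jacobian e^u included:
        # a Gamma(alpha, beta) density in u = log(lam) is prop. to
        # exp(alpha*u - beta*e^u))
        return ((a + k) * u1 - (b + S1) * np.exp(u1)
                + (a + l) * u2 - (b + S2) * np.exp(u2))

    u = np.array([0.0, 0.0])
    out = []
    for i in range(N):
        prop = u + rng.normal(0.0, 0.3, size=2)   # symmetric proposal
        if np.log(rng.uniform()) < logpost(*prop) - logpost(*u):
            u = prop                              # accept
        if i >= burn:
            lam1, lam2 = np.exp(u)
            out.append(lam2 / (lam1 + lam2))
    return np.array(out)

r_chain = mh_sample_R(50, 50.0, 50, 50.0)   # symmetric case: R centered at 0.5
print(r_chain.mean())
```

The posterior mean of the chain approximates the MCMC Bayes estimator of $R$; credible intervals in Section 6 reuse the same draws.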
5.6.2. Noninformative Prior
Similarly, using the Jeffreys prior and the posterior distribution (54) for $\lambda_1$ and $\lambda_2$, we generate an ergodic sample using the Metropolis-Hastings algorithm; the Bayesian estimator is given by
6. Bayesian Credible Interval
In fact, the Bayesian view offers interval statements that are arguably more interpretable than their classical counterparts. We start this section with two definitions.
Definition 3. A set $C$ is called a $100(1-\alpha)\%$ credible region for $\theta$ whenever $P(\theta \in C \mid \text{data}) \ge 1 - \alpha$, where the probability is computed under the posterior distribution of $\theta$ given the data.
Definition 4. The credible region $C$ is called a highest posterior density (HPD) region whenever it can be written as $C = \{\theta : \pi(\theta \mid \text{data}) \ge c\}$, where $c$ is the largest constant such that $P(\theta \in C \mid \text{data}) \ge 1 - \alpha$.
Although the HPD interval is optimal (shortest) among credible intervals of a given level, in some cases it is not easy to calculate directly, and approximate methods must be used to obtain it [32]. It is usually easier to calculate approximate equi-tailed intervals than the HPD interval [33]. Chen and Shao [34] proposed an algorithm to construct an approximate HPD interval. We obtain both equi-tailed and HPD credible intervals.
6.1. An Equi-Tailed Bayesian Credible Interval
In this subsection, equi-tailed credible intervals are calculated under the conjugate and Jeffreys prior distributions.
6.1.1. Informative Prior
We use the prior distribution (35). The posterior distributions of $\lambda_1$ and $\lambda_2$ are as follows:
Therefore, one can conclude
Given that the posterior distributions of $\lambda_1$ and $\lambda_2$ are independent, we get
Thus, an equi-tailed Bayesian credible interval for $R$ under the conjugate prior is
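Under conjugate gamma posteriors with $\gamma$ known, the ratio $\lambda_1/\lambda_2$ has a scaled F posterior distribution, so the equi-tailed interval has a closed form. A sketch under that assumption (hyperparameters and sufficient statistics as in the Monte Carlo sketch above):

```python
import numpy as np
from scipy.stats import f as f_dist

def equitailed_ci_R(k, S1, l, S2, a1=1.0, b1=1.0, a2=1.0, b2=1.0,
                    alpha=0.05):
    # With lam1 | data ~ Gamma(a1 + k, b1 + S1) and
    # lam2 | data ~ Gamma(a2 + l, b2 + S2), the ratio satisfies
    #   (lam1/lam2) / scale ~ F(2(a1 + k), 2(a2 + l)),
    # where scale = ((a1 + k)/(b1 + S1)) / ((a2 + l)/(b2 + S2)),
    # and R = 1/(1 + lam1/lam2).
    scale = ((a1 + k) / (b1 + S1)) / ((a2 + l) / (b2 + S2))
    d1, d2 = 2 * (a1 + k), 2 * (a2 + l)
    lo_ratio = scale * f_dist.ppf(alpha / 2, d1, d2)
    hi_ratio = scale * f_dist.ppf(1 - alpha / 2, d1, d2)
    return 1 / (1 + hi_ratio), 1 / (1 + lo_ratio)

lo, hi = equitailed_ci_R(20, 25.0, 20, 25.0)   # symmetric case
print(lo, hi)
```

In the fully symmetric case the interval is symmetric about 0.5, which provides a quick sanity check of the implementation.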
6.1.2. Noninformative Prior
Similarly, considering the Jeffreys prior, the posterior distributions of $\lambda_1$ and $\lambda_2$ are obtained. So, we have
Hence,
An equi-tailed Bayesian credible interval for $R$ under the Jeffreys prior is
6.2. HPD Interval
As mentioned earlier, it is difficult to obtain the HPD interval directly. Therefore, in this subsection, the ChenShao algorithm [34] is used to calculate the approximate HPD interval for . This algorithm is expressed as follows.

In Step 3, the constructed intervals are credible intervals for $R$. To obtain the HPD interval under the conjugate and Jeffreys priors, it is enough to substitute the posterior distributions (50) and (54), respectively, into Step 1 of Algorithm 1.
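The Chen-Shao construction can be summarized in a few lines: sort the posterior draws, slide a window containing about $(1-\alpha)N$ of them, and keep the shortest such interval. A sketch that works on any posterior sample of $R$ (the standard normal sample below is used only to check the routine against the known $\pm 1.96$ interval):

```python
import numpy as np

def hpd_interval(samples, alpha=0.05):
    # Chen-Shao approximate HPD: among the candidate intervals
    # (s[j], s[j + m]) with m = floor((1 - alpha) * N) built from the
    # sorted sample, return the shortest one.
    s = np.sort(np.asarray(samples))
    N = len(s)
    m = int(np.floor((1 - alpha) * N))
    widths = s[m:] - s[:N - m]      # all candidate interval widths
    j = int(np.argmin(widths))
    return s[j], s[j + m]

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)
lo, hi = hpd_interval(z)
print(lo, hi)   # roughly (-1.96, 1.96) for a standard normal sample
```

For a symmetric unimodal posterior the HPD and equi-tailed intervals coincide; the gain from HPD appears when the posterior of $R$ is skewed, e.g. when $R$ is near 0 or 1.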
7. Simulation Study
In this section, we use simulation to compare the estimators and confidence intervals of R. For different sample sizes, different type II censoring schemes, and different parameter values, with 1000 replications, the bias and mean squared error (MSE) of the estimators are calculated. For the conjugate informative prior distribution, suitable hyperparameter values are considered. Tables 1–3 show the biases and MSEs of the point estimators for the different parameter settings, respectively. Using Algorithm 2, type II censored samples are generated from two independent Gompertz distributions.
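A sketch of the simulation loop: type II censored Gompertz samples are generated by inverse-CDF sampling, and the bias and MSE of a plug-in estimator of R are accumulated. For brevity $\gamma$ is treated as known here, whereas the paper estimates it; all names are illustrative.

```python
import numpy as np

def gen_type2_censored(gamma, lam, n, k, rng):
    # Draw n Gompertz(gamma, lam) variates by inverting
    # F(x) = 1 - exp(-(lam/gamma)(e^(gamma*x) - 1)); keep first k order stats.
    u = rng.uniform(size=n)
    x = np.log1p(-(gamma / lam) * np.log1p(-u)) / gamma
    return np.sort(x)[:k]

def rate_hat(t_obs, n, gamma):
    # Rate MLE after the exponentializing transformation T = (e^(g*X)-1)/g
    t = np.expm1(gamma * t_obs) / gamma
    return len(t) / (t.sum() + (n - len(t)) * t[-1])

rng = np.random.default_rng(42)
gamma, lam1, lam2, n, k = 0.5, 1.0, 2.0, 30, 25
true_R = lam2 / (lam1 + lam2)
est = np.array([
    1 / (1 + rate_hat(gen_type2_censored(gamma, lam1, n, k, rng), n, gamma)
         / rate_hat(gen_type2_censored(gamma, lam2, n, k, rng), n, gamma))
    for _ in range(1000)
])
bias = float(est.mean() - true_R)
mse = float(np.mean((est - true_R) ** 2))
print(round(bias, 4), round(mse, 4))
```

Repeating this loop over a grid of $(n, k)$ values and parameter settings reproduces the structure of the bias/MSE tables.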
