Abstract

This paper proposes different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). Specifically, the maximum likelihood estimator of the scale parameter is introduced. We then derive the Bayes estimators of the scale parameter under quasi, gamma, and uniform prior distributions and under the square error, entropy, and precautionary loss functions. Finally, the proposed estimators are compared through extensive simulation studies in terms of their mean square errors and risk functions.

1. Introduction

It is well known that the Weibull distribution is one of the most popular distributions in lifetime data analysis. The main reason is that a wide variety of shapes can be produced by varying its parameters. Therefore, during the past decades, extensive work has been done on this distribution from both the frequentist and Bayesian points of view; see, for example, the excellent reviews by Johnson et al. [1] and Kundu [2]. However, the Weibull distribution has two parameters, and in many practical applications one or both of them may be unknown. To estimate them, we may use common approaches (see, e.g., Nordman and Meeker [3]). Moreover, the Weibull probability density function (PDF) can be decreasing or unimodal, depending on the value of its shape parameter. Owing to this flexibility, the inverse Weibull distribution (IWD) has been extensively employed in situations where a monotone data set is available (REF). Furthermore, if empirical studies indicate that the Weibull PDF might be unimodal, then the IWD may be an appropriate model (Kundu [2]).

If a positive random variable $Y$ has the Weibull distribution with PDF
$$f_Y(y; \alpha, \beta) = \alpha \beta\, y^{\beta-1} e^{-\alpha y^{\beta}}, \qquad y > 0,\ \alpha, \beta > 0, \tag{1}$$
then the random variable $X = 1/Y$ has the IWD with PDF of the following form:
$$f_X(x; \alpha, \beta) = \alpha \beta\, x^{-(\beta+1)} e^{-\alpha x^{-\beta}}, \qquad x > 0, \tag{2}$$
where $\alpha$ is called the scale parameter and $\beta$ is called the shape parameter of this family. It also follows from (2) that the cumulative distribution function of $X$ is
$$F_X(x; \alpha, \beta) = e^{-\alpha x^{-\beta}}, \qquad x > 0. \tag{3}$$
The IWD plays an important role in many applications, including the dynamic components of diesel engines and several data sets such as the times to breakdown of an insulating fluid subject to the action of a constant tension (see Drapella [4], Jiang et al. [5], and Nelson [6] for more practical applications). For instance, Calabria and Pulcini [7] provide an interpretation of the IWD in the context of the load-strength relationship for a component. Maswadah [8] has fitted the IWD to the flood data reported in Dumonceaux and Antle [9] (for more details see, e.g., Murthy et al. [10]).
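For concreteness, the distribution in (2) and (3) can be evaluated and sampled with a few lines of code. The following minimal Python sketch is illustrative only (the function names and the parameter values in the example are not part of the original study); sampling inverts the CDF in (3).

```python
import numpy as np

def iwd_pdf(x, alpha, beta):
    """PDF in (2): alpha * beta * x^-(beta+1) * exp(-alpha * x^-beta)."""
    x = np.asarray(x, dtype=float)
    return alpha * beta * x ** (-(beta + 1)) * np.exp(-alpha * x ** (-beta))

def iwd_cdf(x, alpha, beta):
    """CDF in (3): exp(-alpha * x^-beta)."""
    return np.exp(-alpha * np.asarray(x, dtype=float) ** (-beta))

def iwd_rvs(alpha, beta, size, rng=None):
    """Sample by inverting (3): if U ~ Uniform(0,1), then X = (alpha / (-log U))^(1/beta)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    return (alpha / (-np.log(u))) ** (1.0 / beta)

# Example: a sample of size 30 with alpha = 2 and beta = 1.5 (illustrative values only)
x = iwd_rvs(alpha=2.0, beta=1.5, size=30, rng=np.random.default_rng(1))
```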

The aim of this paper is to propose different methods of estimating the scale parameter of the inverse Weibull distribution (IWD). In the next section, we obtain the maximum likelihood estimator of the scale parameter when the shape parameter is known. We also describe the procedures for obtaining the Bayes estimators of the scale parameter under the quasi, gamma, and uniform priors and under the square error, entropy, and precautionary loss functions. In Section 3, we compare the maximum likelihood estimator and the Bayes estimators obtained in Section 2 on the basis of their risk functions. The last section of the paper includes a discussion.

2. Estimation of the Scale Parameter

When the shape parameter $\beta$ of the IWD is known, the maximum likelihood estimator of the scale parameter $\alpha$ can be obtained directly. Suppose that $x_1, x_2, \ldots, x_n$ is a random sample of size $n$ drawn from the density function defined in (2); then the likelihood function of $\alpha$ for a fixed value of $\beta$ is given by
$$L(\alpha \mid \mathbf{x}) = \alpha^{n} \beta^{n} \Big( \prod_{i=1}^{n} x_i^{-(\beta+1)} \Big) \exp\Big( -\alpha \sum_{i=1}^{n} x_i^{-\beta} \Big). \tag{4}$$
Taking the natural logarithm of (4) gives
$$\ln L(\alpha \mid \mathbf{x}) = n \ln \alpha + n \ln \beta - (\beta+1) \sum_{i=1}^{n} \ln x_i - \alpha \sum_{i=1}^{n} x_i^{-\beta}, \tag{5}$$
and setting the derivative of (5) with respect to $\alpha$ equal to zero yields the maximum likelihood estimator
$$\hat{\alpha}_{\mathrm{ML}} = \frac{n}{T}, \qquad T = \sum_{i=1}^{n} x_i^{-\beta}. \tag{6}$$
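The estimator in (6) is a one-line computation once $T$ is available. A minimal Python sketch, assuming a sample `x` (for instance the one generated above) and a known shape `beta` (the function name is illustrative):

```python
import numpy as np

def iwd_scale_mle(x, beta):
    """Maximum likelihood estimator of the scale alpha in (6): n / sum(x_i^-beta), beta known."""
    x = np.asarray(x, dtype=float)
    t = np.sum(x ** (-beta))  # T = sum_i x_i^-beta
    return len(x) / t

# alpha_ml = iwd_scale_mle(x, beta=1.5)
```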

2.1. The Bayes Estimator

We now derive the Bayes estimators of the scale parameter of the IWD when the shape parameter is known. We consider three different prior distributions and three different loss functions.

(a) The Quasi Prior. When little information about the distribution parameter is available, one may use the quasi prior density
$$g_1(\alpha) \propto \frac{1}{\alpha^{d}}, \qquad \alpha > 0,\ d \geq 0. \tag{7}$$
The quasi prior leads to a diffuse prior for $d = 0$ and to a noninformative prior for $d = 1$.

(b) The Gamma Prior. It is assumed that the scale parameter $\alpha$ has a gamma prior distribution with shape and rate hyperparameters $a$ and $b$, respectively, so that it has the PDF
$$g_2(\alpha) = \frac{b^{a}}{\Gamma(a)} \alpha^{a-1} e^{-b\alpha}, \qquad \alpha > 0,\ a, b > 0. \tag{8}$$
Note that the gamma prior is one of the most widely used priors and that it is a conjugate prior family for $\alpha$.

(c) The Uniform Prior. It is assumed that the scale parameter $\alpha$ has a uniform distribution over the finite range $(0, c)$, that is,
$$g_3(\alpha) = \frac{1}{c}, \qquad 0 < \alpha < c. \tag{9}$$
Bayes estimators are optimal decisions obtained under a specific prior distribution and loss function. Suppose that $\hat{\alpha}$ is an estimate of $\alpha$.

(i) The Square Error Loss Function. A commonly used loss function is the square error loss function (SLF), $L_1(\hat{\alpha}, \alpha) = (\hat{\alpha} - \alpha)^2$, a symmetric loss function that assigns equal losses to overestimation and underestimation. The SLF is often used because it does not require extensive numerical computation. However, several authors have recognized the inappropriateness of using an SLF in several applications (Calabria and Pulcini [11], Basu and Ebrahimi [12], Berger [13], and Norström [14]). For instance, Basu and Ebrahimi [12] derive Bayes estimators of the mean lifetime and the reliability function in the exponential life testing model; the loss functions that they use are asymmetric, reflecting that, in most situations of interest, overestimation is more harmful than underestimation. For this reason, we also consider the following asymmetric loss functions.

(ii) The Entropy Loss Function. In many practical situations, it appears to be more realistic to express the loss in terms of the ratio $\hat{\alpha}/\alpha$. In this case, Calabria and Pulcini [7] point out that a useful asymmetric loss function is the entropy loss function (ELF)
$$L_2(\hat{\alpha}, \alpha) \propto \Big(\frac{\hat{\alpha}}{\alpha}\Big)^{p} - p \ln\Big(\frac{\hat{\alpha}}{\alpha}\Big) - 1, \qquad p > 0, \tag{10}$$
whose minimum occurs at $\hat{\alpha} = \alpha$. This loss function has also been used by Dey et al. [15] and Dey and Liu [16] in its original form with $p = 1$. Thus, $L_2$ can be written as
$$L_2(\hat{\alpha}, \alpha) = \frac{\hat{\alpha}}{\alpha} - \ln\Big(\frac{\hat{\alpha}}{\alpha}\Big) - 1. \tag{11}$$

(iii) The Precautionary Loss Function. Norström [14] introduced an alternative asymmetric loss function and also presented a general class of precautionary loss functions as a special case. These loss functions approach infinity near the origin to prevent underestimation, thus giving conservative estimators, especially when low failure rates are being estimated; they are very useful when underestimation may lead to serious consequences. A very useful and simple asymmetric precautionary loss function (PLF) is
$$L_3(\hat{\alpha}, \alpha) = \frac{(\hat{\alpha} - \alpha)^2}{\hat{\alpha}}. \tag{12}$$
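For later reference, the three loss functions can be coded as small helpers. This is a sketch, with the entropy loss taken in its $p = 1$ form (11):

```python
import numpy as np

def squared_error_loss(a_hat, a):
    """Symmetric SLF: (a_hat - a)^2."""
    return (a_hat - a) ** 2

def entropy_loss(a_hat, a):
    """ELF with p = 1: a_hat/a - log(a_hat/a) - 1; minimum 0 at a_hat = a."""
    r = a_hat / a
    return r - np.log(r) - 1.0

def precautionary_loss(a_hat, a):
    """PLF: (a_hat - a)^2 / a_hat; grows without bound as a_hat -> 0, penalizing underestimation."""
    return (a_hat - a) ** 2 / a_hat
```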

2.2. The Bayes Estimator under the Quasi Prior $g_1(\alpha)$

Now, we obtain the Bayes estimators of the parameter $\alpha$ for the quasi prior density under the square error, entropy, and precautionary loss functions. Combining (4) and (7), the posterior PDF of $\alpha$ is
$$\pi_1(\alpha \mid \mathbf{x}) = \frac{T^{\,n-d+1}}{\Gamma(n-d+1)}\, \alpha^{n-d} e^{-T\alpha}, \qquad \alpha > 0, \tag{13}$$
which is a gamma density with parameters $(n-d+1, T)$.

The Bayes estimator under the square error loss function is the posterior mean,
$$\hat{\alpha}_{QS} = \frac{n-d+1}{T}, \tag{14}$$
the Bayes estimator under the entropy loss function is
$$\hat{\alpha}_{QE} = \big[E(\alpha^{-1} \mid \mathbf{x})\big]^{-1} = \frac{n-d}{T}, \tag{15}$$
and the Bayes estimator under the precautionary loss function is
$$\hat{\alpha}_{QP} = \big[E(\alpha^{2} \mid \mathbf{x})\big]^{1/2} = \frac{\sqrt{(n-d+1)(n-d+2)}}{T}. \tag{16}$$
It is clear that the maximum likelihood estimator is a special case of the Bayes estimator under the square error loss function obtained with $d = 1$. Therefore, the risk functions of $\hat{\alpha}_{\mathrm{ML}}$ and $\hat{\alpha}_{QS}$ are the same when $d = 1$.
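Since all three estimators in (14)-(16) are simple functions of $T$, they can be computed together. The following Python sketch assumes a sample `x`, a known `beta`, and a chosen value of `d` (the function name is illustrative):

```python
import numpy as np

def quasi_prior_bayes(x, beta, d):
    """Bayes estimators of alpha under the quasi prior g1(alpha) proportional to alpha^-d.

    The posterior is Gamma(shape = n - d + 1, rate = T) with T = sum(x_i^-beta).
    Returns the estimators (14), (15), and (16).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.sum(x ** (-beta))
    est_slf = (n - d + 1) / t                          # posterior mean
    est_elf = (n - d) / t                              # 1 / E(1/alpha | data)
    est_plf = np.sqrt((n - d + 1) * (n - d + 2)) / t   # sqrt(E(alpha^2 | data))
    return est_slf, est_elf, est_plf
```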

2.2.1. The Risk Functions

Since $X_i^{-\beta}$ has an exponential distribution with rate $\alpha$, the statistic $T = \sum_{i=1}^{n} X_i^{-\beta}$ has a gamma distribution with shape $n$ and rate $\alpha$. Each of the estimators $\hat{\alpha}_{\mathrm{ML}}$, $\hat{\alpha}_{QS}$, $\hat{\alpha}_{QE}$, and $\hat{\alpha}_{QP}$ is of the form $c/T$, with $c = n$, $n-d+1$, $n-d$, and $\sqrt{(n-d+1)(n-d+2)}$, respectively. The risk function of an estimator $c/T$ relative to the SLF is
$$R_S\Big(\frac{c}{T}\Big) = E\Big(\frac{c}{T} - \alpha\Big)^{2} = \alpha^{2}\Big[\frac{c^{2}}{(n-1)(n-2)} - \frac{2c}{n-1} + 1\Big], \qquad n > 2, \tag{17}$$
its risk relative to the entropy loss function is
$$R_E\Big(\frac{c}{T}\Big) = \frac{c}{n-1} - \ln c + \psi(n) - 1, \tag{18}$$
where $\psi(\cdot)$ denotes the digamma function, and its risk relative to the precautionary loss function is
$$R_P\Big(\frac{c}{T}\Big) = \alpha\Big[\frac{c}{n-1} + \frac{n}{c} - 2\Big]. \tag{19}$$
The risk functions of $\hat{\alpha}_{\mathrm{ML}}$, $\hat{\alpha}_{QS}$, $\hat{\alpha}_{QE}$, and $\hat{\alpha}_{QP}$ under each loss function are obtained by substituting the corresponding value of $c$.
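These expressions can be checked by simulation: because every estimator considered here has the form $c/T$ with $T \sim \mathrm{Gamma}(n, \alpha)$, the short Monte Carlo sketch below (illustrative, not part of the original study) approximates its risk under the three loss functions.

```python
import numpy as np

def mc_risk_c_over_t(c, n, alpha, n_rep=200_000, rng=None):
    """Monte Carlo risks of the estimator c/T, where T ~ Gamma(shape=n, rate=alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    t = rng.gamma(shape=n, scale=1.0 / alpha, size=n_rep)  # rate alpha  ->  scale 1/alpha
    a_hat = c / t
    r_slf = np.mean((a_hat - alpha) ** 2)
    r_elf = np.mean(a_hat / alpha - np.log(a_hat / alpha) - 1.0)
    r_plf = np.mean((a_hat - alpha) ** 2 / a_hat)
    return r_slf, r_elf, r_plf

# e.g. risk of the SLF Bayes estimator (c = n - d + 1) for n = 20, d = 1, alpha = 2:
# print(mc_risk_c_over_t(c=20, n=20, alpha=2.0))
```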

2.3. The Bayes Estimator under the Gamma Prior $g_2(\alpha)$

The gamma density (8) is the natural conjugate prior for the parameter $\alpha$ in the IWD. Using (4) and (8), the posterior distribution of $\alpha$ is
$$\pi_2(\alpha \mid \mathbf{x}) = \frac{(T+b)^{\,n+a}}{\Gamma(n+a)}\, \alpha^{n+a-1} e^{-(T+b)\alpha}, \qquad \alpha > 0, \tag{20}$$
which is again a gamma density, with parameters $(n+a, T+b)$. Thus, the Bayes estimator of $\alpha$ under the square error loss function is
$$\hat{\alpha}_{GS} = \frac{n+a}{T+b}, \tag{21}$$
the Bayes estimator of $\alpha$ under the entropy loss function is
$$\hat{\alpha}_{GE} = \frac{n+a-1}{T+b}, \tag{22}$$
and the Bayes estimator under the precautionary loss function is
$$\hat{\alpha}_{GP} = \frac{\sqrt{(n+a)(n+a+1)}}{T+b}. \tag{23}$$
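A sketch of the estimators (21)-(23) in Python, assuming the same sample `x` and known `beta`, with `a` and `b` denoting the prior hyperparameters in (8) (the function name is illustrative):

```python
import numpy as np

def gamma_prior_bayes(x, beta, a, b):
    """Bayes estimators of alpha under the Gamma(a, b) prior.

    The posterior is Gamma(shape = n + a, rate = T + b) with T = sum(x_i^-beta).
    Returns the estimators (21), (22), and (23).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.sum(x ** (-beta))
    est_slf = (n + a) / (t + b)
    est_elf = (n + a - 1) / (t + b)
    est_plf = np.sqrt((n + a) * (n + a + 1)) / (t + b)
    return est_slf, est_elf, est_plf
```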

2.3.1. The Risk Functions

The risk functions of the estimators $\hat{\alpha}_{GS}$, $\hat{\alpha}_{GE}$, and $\hat{\alpha}_{GP}$, relative to the SLF, the entropy loss function, and the precautionary loss function, are denoted by $R_S(\cdot)$, $R_E(\cdot)$, and $R_P(\cdot)$, respectively. Since each of these estimators has the form $c/(T+b)$ with $T \sim \mathrm{Gamma}(n, \alpha)$, the risks are expectations of the corresponding losses with respect to the sampling distribution of $T$; they involve moments of $(T+b)^{-1}$ that do not reduce to simple closed forms, and they depend jointly on $\alpha$ and the hyperparameters $(a, b)$. They are therefore evaluated numerically in Section 3.2.

2.4. The Bayes Estimator under the Uniform Prior $g_3(\alpha)$

Under the uniform prior $g_3(\alpha)$, using (4) and (9), the posterior distribution of $\alpha$ is
$$\pi_3(\alpha \mid \mathbf{x}) = \frac{\alpha^{n} e^{-T\alpha}}{K}, \qquad 0 < \alpha < c, \tag{24}$$
where the normalizing constant is
$$K = \int_0^{c} \alpha^{n} e^{-T\alpha}\, d\alpha, \tag{25}$$
that is, a Gamma$(n+1, T)$ density truncated to $(0, c)$. The Bayes estimator of $\alpha$ under the square error loss function is the posterior mean
$$\hat{\alpha}_{US} = E(\alpha \mid \mathbf{x}) = \frac{1}{K}\int_0^{c} \alpha^{n+1} e^{-T\alpha}\, d\alpha, \tag{26}$$
the Bayes estimator under the entropy loss function is
$$\hat{\alpha}_{UE} = \big[E(\alpha^{-1} \mid \mathbf{x})\big]^{-1} = K \Big[\int_0^{c} \alpha^{n-1} e^{-T\alpha}\, d\alpha\Big]^{-1}, \tag{27}$$
and the Bayes estimator under the precautionary loss function is
$$\hat{\alpha}_{UP} = \big[E(\alpha^{2} \mid \mathbf{x})\big]^{1/2} = \Big[\frac{1}{K}\int_0^{c} \alpha^{n+2} e^{-T\alpha}\, d\alpha\Big]^{1/2}. \tag{28}$$
In this case, there is no closed-form expression for these estimators or for their risk functions. Therefore, we employ a sampling-based technique for constructing the Bayes estimators and approximating the risk functions, as presented in the next section.
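Although the estimators are evaluated by sampling in Section 3.3, the truncated-gamma form of (24) also allows a direct numerical evaluation through the regularized incomplete gamma function. The following Python sketch takes that alternative route; the helper names are illustrative, and this is not the sampling procedure used in the paper.

```python
import numpy as np
from scipy.special import gammainc, gammaln

def truncated_gamma_moment(m, shape, rate, c):
    """E[alpha^m] for a Gamma(shape, rate) distribution truncated to (0, c); needs shape + m > 0."""
    log_ratio = gammaln(shape + m) - gammaln(shape) - m * np.log(rate)
    return np.exp(log_ratio) * gammainc(shape + m, rate * c) / gammainc(shape, rate * c)

def uniform_prior_bayes(x, beta, c):
    """Bayes estimators (26)-(28) of alpha under the Uniform(0, c) prior.

    The posterior is proportional to alpha^n * exp(-T*alpha) on (0, c),
    i.e. a Gamma(n + 1, T) density truncated to (0, c).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.sum(x ** (-beta))
    m1 = truncated_gamma_moment(1, n + 1, t, c)       # posterior mean  -> SLF estimate (26)
    m_neg1 = truncated_gamma_moment(-1, n + 1, t, c)  # E(1/alpha)      -> ELF estimate (27)
    m2 = truncated_gamma_moment(2, n + 1, t, c)       # E(alpha^2)      -> PLF estimate (28)
    return m1, 1.0 / m_neg1, np.sqrt(m2)
```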

3. Comparisons

This section presents the comparison of the various estimators obtained in Section 2 on the basis of their risks. In the previous section, the risk functions of the estimators were computed under the SLF, ELF, and PLF.

3.1. The Case of Quasi Prior

The Bayes estimators are seen to depend upon the parameters of the prior distributions. In Figure 1, we have plotted the ratio of the risk function of each Bayes estimator to that of the maximum likelihood estimator, that is, $R_S(\hat{\alpha}_{QS})/R_S(\hat{\alpha}_{\mathrm{ML}})$, $R_S(\hat{\alpha}_{QE})/R_S(\hat{\alpha}_{\mathrm{ML}})$, and $R_S(\hat{\alpha}_{QP})/R_S(\hat{\alpha}_{\mathrm{ML}})$, under the square error loss function, as given in (17), for selected values of $d$ and $n$.

In Figure 2, we have plotted the corresponding ratios under the precautionary loss function, $R_P(\hat{\alpha}_{QS})/R_P(\hat{\alpha}_{\mathrm{ML}})$, $R_P(\hat{\alpha}_{QE})/R_P(\hat{\alpha}_{\mathrm{ML}})$, and $R_P(\hat{\alpha}_{QP})/R_P(\hat{\alpha}_{\mathrm{ML}})$, as given in (19), for the same values of $d$ and $n$.

It is important to mention here that the scales on the vertical axes of the graphs are not the same and vary from figure to figure. From Figures 1 and 2, we see that none of the estimators uniformly dominates any other. We therefore recommend that the estimator be chosen according to the value of $d$ when the quasi density is used as the prior distribution; this choice in turn depends on the situation at hand.

3.2. The Case of Gamma Prior

The risk functions under the gamma prior depend on the population parameter $\alpha$ in a way that is not separable. Therefore, a comparison can only be made by numerical techniques. Random samples of different sizes are generated, and the estimators obtained in Section 2 are compared through the following steps.

Algorithm 1. Consider the following.
Step 1. For given values of the hyperparameters $(a, b)$ and the shape parameter $\beta$, generate $\alpha$ from the prior (8).
Step 2. Using the value of $\alpha$ from Step 1 and the true value of $\beta$, select the sample size $n$ (several values up to $n = 40$ are considered) and generate a sample of size $n$ from (2), so that the likelihood is (4).
Step 3. Compute the MLE and the different Bayes estimators of $\alpha$ from the sample obtained in Step 2.
Step 4. Repeat Steps 1 to 3 1000 times and compute the mean square error (MSE) of each estimator, as sketched in the code below.
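A compact Python sketch of Algorithm 1; the hyperparameter values, the shape `beta`, and the function name are illustrative choices, and the squared errors are averaged over the replications exactly as in Step 4.

```python
import numpy as np

def simulate_gamma_prior_mse(a, b, beta, n, n_rep=1000, seed=0):
    """Monte Carlo MSEs of the MLE and the gamma-prior Bayes estimators (Algorithm 1 sketch)."""
    rng = np.random.default_rng(seed)
    errs = {"ML": [], "SLF": [], "ELF": [], "PLF": []}
    for _ in range(n_rep):
        alpha = rng.gamma(shape=a, scale=1.0 / b)       # Step 1: draw alpha from the prior (8)
        u = rng.uniform(size=n)
        x = (alpha / (-np.log(u))) ** (1.0 / beta)      # Step 2: IWD sample via the inverse of (3)
        t = np.sum(x ** (-beta))
        est = {"ML": n / t,                             # Step 3: MLE (6) and Bayes estimators (21)-(23)
               "SLF": (n + a) / (t + b),
               "ELF": (n + a - 1) / (t + b),
               "PLF": np.sqrt((n + a) * (n + a + 1)) / (t + b)}
        for name, value in est.items():
            errs[name].append((value - alpha) ** 2)
    return {name: float(np.mean(v)) for name, v in errs.items()}  # Step 4: MSE of each estimator

# e.g. simulate_gamma_prior_mse(a=2, b=1, beta=1.5, n=20)
```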
Table 1 given herein shows the mean square error (MSE) of the different estimators based on 1000 runs of Monte Carlo simulation.

From Table 1, we see that the estimators are consistent in MSE in all considered cases. As expected, the Bayes estimators perform better than the maximum likelihood estimator. Moreover, the Bayes estimators under the precautionary loss function perform better than all the other estimators.

3.3. The Case of Uniform Prior

Since the risk functions of the estimators cannot be obtained in closed form, we propose to generate samples from the posterior distribution and use them to construct the Bayes estimators and to approximate their mean square errors.

Now, we provide an algorithm to draw samples from the posterior distribution (24). Since $\pi_3(\alpha \mid \mathbf{x}) \propto \alpha^{n} e^{-T\alpha}$ on $(0, c)$, with the normalizing constant $K$ defined in (25), it is possible to use the acceptance-rejection method to generate samples from $\pi_3$ by means of gamma generation; Algorithm 2 below is used to generate samples from the posterior density of $\alpha$.

Algorithm 2. Consider the following.
Step 1. Generate $\alpha^{*}$ from the Gamma$(n+1, T)$ distribution.
Step 2. If $\alpha^{*} \leq c$, accept $\alpha^{*}$; otherwise, go back to Step 1.
Step 3. Repeat Steps 1 and 2 to obtain posterior draws $\alpha_1, \ldots, \alpha_M$.
Step 4. Obtain the Bayes estimate of $\alpha$ under the square error loss function as the posterior mean, that is, $\hat{\alpha}_{US} \approx \frac{1}{M}\sum_{j=1}^{M} \alpha_j$.
Step 5. Obtain the Bayes estimator under the precautionary loss function as $\hat{\alpha}_{UP} \approx \big(\frac{1}{M}\sum_{j=1}^{M} \alpha_j^{2}\big)^{1/2}$.
Step 6. Obtain the mean square error of each estimator.
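A Python sketch of Algorithm 2; the acceptance step is implemented by discarding Gamma$(n+1, T)$ draws that exceed $c$, which targets the same truncated posterior (the function name and the number of draws are illustrative, and the loop assumes $c$ is not far below the bulk of the posterior mass).

```python
import numpy as np

def uniform_prior_bayes_mc(x, beta, c, n_draws=10_000, rng=None):
    """Monte Carlo Bayes estimates of alpha under the Uniform(0, c) prior (Algorithm 2 sketch).

    The posterior is a Gamma(n + 1, T) density truncated to (0, c); draws from the
    untruncated gamma that fall above c are rejected (Steps 1-2), and the accepted
    draws approximate the posterior expectations (Steps 4-5).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.sum(x ** (-beta))
    draws = []
    while len(draws) < n_draws:
        cand = rng.gamma(shape=n + 1, scale=1.0 / t, size=n_draws)
        draws.extend(cand[cand <= c])              # accept only draws inside (0, c)
    draws = np.asarray(draws[:n_draws])
    est_slf = draws.mean()                         # posterior mean (square error loss)
    est_plf = np.sqrt(np.mean(draws ** 2))         # sqrt of posterior second moment (PLF)
    return est_slf, est_plf
```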

In order to compare the proposed Bayes estimators under the different loss functions, we perform a Monte Carlo simulation study with 1000 replications using several sample sizes up to $n = 80$. The IWD samples were generated from (2) for all combinations of the selected values of $\alpha$ and $\beta$ (the largest value of $\beta$ being 4). For the uniform prior, we have considered a finite range $(0, c)$, and the hyperparameters have been chosen in such a way that the prior mean equals the expected value of the corresponding parameter. The averages and mean square errors (MSE, in parentheses) of the estimators of $\alpha$ are presented in Table 2.

It is clear from Table 2 that the proposed Bayes estimators perform very well and are consistent in MSE in all considered cases. Also, the Bayes estimators under the square error loss function perform better than the Bayes estimators under the precautionary loss function, that is, they have smaller MSEs.

4. Conclusion

In this paper, we have proposed classical and Bayesian approaches to estimating the scale parameter of the inverse Weibull distribution when the shape parameter is known [12]. Bayes estimators are often obtained using both symmetric and asymmetric loss functions ([11, 12]). In view of this, we have obtained and compared the Bayes estimators corresponding to the different loss functions. To compare the considered estimators, extensive simulation studies have been performed. The results show that, in the case of the quasi prior, none of the estimators uniformly dominates any other. We therefore recommend that the estimator be chosen according to the value of $d$ when the quasi density is used as the prior distribution, a choice that in turn depends on the situation at hand. It also appears clear from this study that Bayes estimation under the gamma prior is superior to the MLE. Moreover, in the case of the gamma prior, the Bayes estimators under the precautionary loss function have the smallest MSE compared with the Bayes estimators under the square error loss function, the Bayes estimators under the entropy loss function, and the MLEs. Furthermore, in the case of the uniform prior, the Bayes estimators under the square error loss function perform better than the Bayes estimators under the precautionary loss function.