#### Abstract

The Weibull distribution has been observed as one of the most useful distributions for modelling and analysing lifetime data in engineering, biology, and other fields. Considerable work has been done in the literature to determine the best method for estimating its parameters. Recently, much attention has been given to the Bayesian approach to parameter estimation, which is in contention with other estimation methods. In this paper, we examine the performance of the maximum likelihood estimator and of Bayesian estimators using an extension of Jeffreys' prior information with three loss functions, namely the linear exponential loss, the general entropy loss, and the squared error loss function, for estimating the two-parameter Weibull failure time distribution. These methods are compared by mean squared error through a simulation study with varying sample sizes. The results show that the Bayesian estimator using the extension of Jeffreys' prior under the linear exponential loss function in most cases gives the smallest mean squared error and absolute bias for both the scale parameter *α* and the shape parameter *β* for the given values of the extension of Jeffreys' prior.

#### 1. Introduction

The Weibull distribution is widely used in reliability and life data analysis due to its versatility. Depending on the values of the parameters, the Weibull distribution can be used to model a variety of life behaviours. An important aspect of the Weibull distribution is how the values of the shape parameter, *β*, and the scale parameter, *α*, affect the characteristic life of the distribution, the shape/slope of the distribution curve, the reliability function, and the failure rate. It has been found that this distribution is satisfactory in describing the life expectancy of components that involve fatigue and for assessing the reliability of bulbs, ball bearings, and machine parts according to [1].

The primary advantage of Weibull analysis, according to [2], is its ability to provide accurate failure analysis and failure forecasts with extremely small samples. With Weibull, solutions are possible at the earliest indications of a problem without having to pursue further testing. Small samples also allow cost-effective component testing.

Maximum likelihood estimation has been the most widely used method for estimating the parameters of the Weibull distribution. Recently, the Bayesian estimation approach has received great attention from researchers, among them [3], who considered a Bayesian survival estimator for the Weibull distribution with censored data, while [4] studied Bayesian estimation for the extreme value distribution using progressively censored data and asymmetric loss. A Bayes estimator for the exponential distribution with an extension of Jeffreys' prior information was considered by [5]. Others, including [6–8], carried out comparative studies on the estimation of Weibull parameters using complete and censored samples, and [9] determined the Bayes estimate of the extreme-value reliability function.

The objective of this paper is to compare the traditional maximum likelihood estimation of the scale and shape parameters of the Weibull distribution with its Bayesian counterpart using an extension of Jeffreys' prior information, obtained through Lindley's approximation procedure, with three loss functions.

The rest of the paper is arranged as follows: Section 2 contains the derivation of the parameter estimates under maximum likelihood estimation, and Section 3 the Bayesian estimation. Section 4 covers the asymmetric loss functions and is divided into two subsections, namely the linear exponential (LINEX) loss function and the general entropy loss function. The symmetric loss function, also known as the squared error loss function, is in Section 5, followed by the simulation study in Section 6. Section 7 presents the results and discussion, and Section 8 the conclusion.

#### 2. Maximum Likelihood Estimation

Let $x_1, x_2, \ldots, x_n$ be a random sample of size *n* with a probability density function (pdf) of a two-parameter Weibull distribution given as

$$f(x;\alpha,\beta)=\frac{\beta}{\alpha}\,x^{\beta-1}\exp\!\left(-\frac{x^{\beta}}{\alpha}\right),\qquad x>0,\ \alpha>0,\ \beta>0.\tag{2.1}$$

The cumulative distribution function (CDF) is

$$F(x)=1-\exp\!\left(-\frac{x^{\beta}}{\alpha}\right).\tag{2.2}$$

The likelihood function of the pdf is

$$L(\underline{x}\mid\alpha,\beta)=\prod_{i=1}^{n}\frac{\beta}{\alpha}\,x_i^{\beta-1}\exp\!\left(-\frac{x_i^{\beta}}{\alpha}\right).\tag{2.3}$$

The log-likelihood function is

$$\ell=n\ln\beta-n\ln\alpha+(\beta-1)\sum_{i=1}^{n}\ln x_i-\frac{1}{\alpha}\sum_{i=1}^{n}x_i^{\beta}.\tag{2.4}$$

Differentiating (2.4) with respect to *α* and *β* and equating to zero, we have

$$\frac{\partial\ell}{\partial\alpha}=-\frac{n}{\alpha}+\frac{1}{\alpha^{2}}\sum_{i=1}^{n}x_i^{\beta}=0,\tag{2.5}$$

$$\frac{\partial\ell}{\partial\beta}=\frac{n}{\beta}+\sum_{i=1}^{n}\ln x_i-\frac{1}{\alpha}\sum_{i=1}^{n}x_i^{\beta}\ln x_i=0.\tag{2.6}$$

From (2.5),

$$\hat{\alpha}=\frac{1}{n}\sum_{i=1}^{n}x_i^{\hat{\beta}}.\tag{2.7}$$

When $\hat{\beta}$ is obtained, $\hat{\alpha}$ can then be determined. We propose to solve for $\hat{\beta}$ by using the Newton-Raphson method as given below. Substituting (2.7) into (2.6) gives

$$h(\beta)=\frac{n}{\beta}+\sum_{i=1}^{n}\ln x_i-\frac{n\sum_{i=1}^{n}x_i^{\beta}\ln x_i}{\sum_{i=1}^{n}x_i^{\beta}}=0,\tag{2.8}$$

and taking the first derivative of $h$, we have

$$h'(\beta)=-\frac{n}{\beta^{2}}-n\,\frac{\left(\sum_{i=1}^{n}x_i^{\beta}(\ln x_i)^{2}\right)\left(\sum_{i=1}^{n}x_i^{\beta}\right)-\left(\sum_{i=1}^{n}x_i^{\beta}\ln x_i\right)^{2}}{\left(\sum_{i=1}^{n}x_i^{\beta}\right)^{2}}.\tag{2.9}$$

Therefore, $\hat{\beta}$ is obtained from the equation below by carefully choosing an initial value $\hat{\beta}_{0}$ and iterating the process until it converges:

$$\hat{\beta}_{k+1}=\hat{\beta}_{k}-\frac{h(\hat{\beta}_{k})}{h'(\hat{\beta}_{k})}.\tag{2.10}$$
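The Newton-Raphson scheme above is straightforward to implement. The following is a minimal sketch (function name, starting value, and convergence tolerance are our own choices, not from the paper) that iterates on the shape parameter and then recovers the scale estimate in closed form as the mean of $x_i^{\hat{\beta}}$:

```python
import numpy as np

def weibull_mle(x, beta0=1.0, tol=1e-8, max_iter=100):
    """Newton-Raphson MLE for the Weibull pdf
    f(x) = (beta/alpha) * x**(beta-1) * exp(-x**beta / alpha).
    Iterates the profile score equation for beta, then sets
    alpha = mean(x**beta)."""
    x = np.asarray(x, dtype=float)
    n, logx = x.size, np.log(x)
    beta = beta0
    for _ in range(max_iter):
        xb = x ** beta
        s0, s1, s2 = xb.sum(), (xb * logx).sum(), (xb * logx ** 2).sum()
        h = n / beta + logx.sum() - n * s1 / s0            # profile score h(beta)
        hp = -n / beta ** 2 - n * (s2 * s0 - s1 ** 2) / s0 ** 2  # h'(beta)
        step = h / hp
        beta -= step
        if beta <= 0:
            beta = tol          # guard against an overshooting step
        if abs(step) < tol:
            break
    return (x ** beta).mean(), beta   # (alpha_hat, beta_hat)
```

Under this parameterization $x^{\beta}/\alpha$ is standard exponential, so test data can be generated as `alpha**(1/beta) * rng.weibull(beta, n)` with NumPy's standard Weibull sampler.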

#### 3. Bayesian Estimation

The Bayesian estimation approach has received a lot of attention in recent times for analysing failure time data and has mostly been proposed as an alternative to the traditional methods. The Bayesian approach makes use of one's prior knowledge about the parameters as well as the available data. When prior knowledge about the parameters is not available, it is possible to make use of a noninformative prior in the Bayesian analysis. Since we have no knowledge of the parameters, we seek to use the extension of Jeffreys' prior information, where Jeffreys' prior is the square root of the determinant of the Fisher information,

$$v(\alpha)\propto\sqrt{I(\alpha)},\qquad I(\alpha)=-nE\!\left[\frac{\partial^{2}\ln f(x)}{\partial\alpha^{2}}\right]=\frac{n}{\alpha^{2}}.$$

According to [5], the extension of Jeffreys' prior is obtained by taking $v(\alpha)\propto[I(\alpha)]^{c}$, $c\in\mathbb{R}^{+}$, giving that

$$v(\alpha)\propto\left[\frac{1}{\alpha^{2}}\right]^{c}.$$

Thus,

$$v(\alpha)=\frac{k}{\alpha^{2c}},\qquad k\ \text{a constant}.$$

Given a sample $\underline{x}=(x_1,\ldots,x_n)$ from the pdf (2.1), the likelihood function is $L(\underline{x}\mid\alpha,\beta)$ as in (2.3). With Bayes' theorem, the joint posterior distribution of the parameters *α* and *β* is

$$\pi(\alpha,\beta\mid\underline{x})=\frac{1}{K}\,L(\underline{x}\mid\alpha,\beta)\,v(\alpha),$$

where $K=\int_{0}^{\infty}\int_{0}^{\infty}L(\underline{x}\mid\alpha,\beta)\,v(\alpha)\,d\alpha\,d\beta$ is the normalizing constant that makes $\pi$ a proper pdf.
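For numerical work it can help to evaluate the unnormalized log posterior directly. The sketch below (a helper of our own, with `c` standing for the Jeffreys-extension parameter) combines the Weibull log-likelihood with the prior $v(\alpha)\propto\alpha^{-2c}$:

```python
import numpy as np

def log_posterior(alpha, beta, x, c=0.4):
    """Unnormalized log posterior for the Weibull model
    f(x) = (beta/alpha) * x**(beta-1) * exp(-x**beta / alpha)
    with the extended Jeffreys prior v(alpha) proportional to alpha**(-2c)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    loglik = (n * np.log(beta) - n * np.log(alpha)
              + (beta - 1) * np.log(x).sum() - (x ** beta).sum() / alpha)
    logprior = -2 * c * np.log(alpha)     # log of alpha**(-2c)
    return loglik + logprior
```

Larger values of `c` push the prior mass toward smaller *α*; setting `c = 0.5` recovers the ordinary Jeffreys prior $1/\alpha$.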

#### 4. Asymmetric Loss Function

##### 4.1. Linear Exponential Loss Function (LINEX)

The LINEX loss function is under the assumption that the minimal loss occurs at $\hat{\theta}=\theta$ and is expressed as

$$L(\Delta)=\exp(a\Delta)-a\Delta-1,\qquad \Delta=\hat{\theta}-\theta,\tag{4.1}$$

with $a\neq 0$, where $\hat{\theta}$ is an estimate of $\theta$. The sign and magnitude of the shape parameter *a* represent the direction and degree of asymmetry, respectively. Overestimation is more serious than underestimation if *a* > 0 and the reverse holds if *a* < 0, but when *a* is close to zero, the LINEX loss function is approximately the squared error loss function. The posterior expectation of the LINEX loss function, according to [10], is

$$E_{\theta}\!\left[L(\hat{\theta}-\theta)\right]=\exp(a\hat{\theta})\,E_{\theta}\!\left[\exp(-a\theta)\right]-a\left(\hat{\theta}-E_{\theta}[\theta]\right)-1.\tag{4.2}$$

The Bayes estimator of $\theta$, represented by $\hat{\theta}_{BL}$ under the LINEX loss function, is the value of $\hat{\theta}$ which minimizes (4.2) and is given as

$$\hat{\theta}_{BL}=-\frac{1}{a}\ln\!\left\{E_{\theta}\!\left[\exp(-a\theta)\right]\right\},\tag{4.3}$$

provided that $E_{\theta}[\exp(-a\theta)]$ exists and is finite. The Bayes estimator of a function $u(\alpha,\beta)$ of the parameters is given as

$$\hat{u}_{BL}=-\frac{1}{a}\ln\!\left[\frac{\int_{0}^{\infty}\!\int_{0}^{\infty}e^{-a\,u(\alpha,\beta)}\,L(\underline{x}\mid\alpha,\beta)\,v(\alpha)\,d\alpha\,d\beta}{\int_{0}^{\infty}\!\int_{0}^{\infty}L(\underline{x}\mid\alpha,\beta)\,v(\alpha)\,d\alpha\,d\beta}\right].\tag{4.4}$$

From (4.4), it can be observed that it contains a ratio of integrals which cannot be solved analytically, and for that we employ Lindley's approximation procedure to estimate the parameters. Lindley considered an approximation for the ratio of integrals

$$\frac{\int u(\alpha,\beta)\,e^{\ell(\alpha,\beta)+\rho(\alpha,\beta)}\,d(\alpha,\beta)}{\int e^{\ell(\alpha,\beta)+\rho(\alpha,\beta)}\,d(\alpha,\beta)}\tag{4.5}$$

for evaluating the posterior expectation of an arbitrary function $u(\alpha,\beta)$, where $\rho=\ln v(\alpha)$ is the log prior. According to [11], Lindley's expansion can be approximated asymptotically by

$$E\!\left[u\mid\underline{x}\right]\approx u+\frac{1}{2}\sum_{i}\sum_{j}\left(u_{ij}+2u_{i}\rho_{j}\right)\sigma_{ij}+\frac{1}{2}\sum_{i}\sum_{j}\sum_{k}\sum_{l}\ell_{ijk}\,\sigma_{ij}\,\sigma_{kl}\,u_{l},\tag{4.6}$$

with all terms evaluated at the maximum likelihood estimates $(\hat{\alpha},\hat{\beta})$, where the indices run over $\{\alpha,\beta\}$, $\ell$ is the log-likelihood function in (2.4), subscripts denote partial derivatives, and $\sigma_{ij}$ are the elements of the inverse of the matrix $[-\ell_{ij}]$.
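Given draws from the posterior (however obtained), the LINEX Bayes estimator $-\frac{1}{a}\ln E_{\theta}[e^{-a\theta}]$ reduces to a log-mean-exp of the draws. The sketch below is a sampling-based alternative to the Lindley route taken in the paper (function name is ours):

```python
import numpy as np

def linex_estimate(theta_draws, a):
    """Bayes estimate under LINEX loss from posterior draws:
    theta_hat = -(1/a) * log E[exp(-a * theta)]."""
    theta = np.asarray(theta_draws, dtype=float)
    # log-mean-exp for numerical stability when a * theta is large
    m = (-a * theta).max()
    log_mean = m + np.log(np.exp(-a * theta - m).mean())
    return -log_mean / a
```

By Jensen's inequality, for *a* > 0 the estimate sits below the posterior mean, matching the direction of the asymmetry (overestimation penalized more heavily).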

##### 4.2. General Entropy Loss Function

Another useful asymmetric loss function is the general entropy (GE) loss, which is a generalization of the entropy loss and is given as

$$L(\hat{\theta},\theta)\propto\left(\frac{\hat{\theta}}{\theta}\right)^{k}-k\ln\!\left(\frac{\hat{\theta}}{\theta}\right)-1.\tag{4.7}$$

The Bayes estimator of $\theta$ under the general entropy loss is

$$\hat{\theta}_{BG}=\left[E_{\theta}\!\left(\theta^{-k}\right)\right]^{-1/k},\tag{4.8}$$

provided $E_{\theta}(\theta^{-k})$ exists and is finite. The Bayes estimator for this loss function is again a ratio of integrals, so a similar Lindley approach is used as for the LINEX loss, applying the approximation procedure stated in (4.6) with $u(\alpha,\beta)=\alpha^{-k}$ or $\beta^{-k}$; the first and second derivatives of $u$ for *α* and *β*, respectively, are

$$u_{\alpha}=-k\alpha^{-k-1},\quad u_{\alpha\alpha}=k(k+1)\alpha^{-k-2},\qquad u_{\beta}=-k\beta^{-k-1},\quad u_{\beta\beta}=k(k+1)\beta^{-k-2}.$$
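The general entropy estimate has the same sampling-based form as the LINEX case. The sketch below (our own helper) also illustrates the known special case that *k* = −1 recovers the posterior mean, i.e. the squared error estimate:

```python
import numpy as np

def general_entropy_estimate(theta_draws, k):
    """Bayes estimate under general entropy loss from posterior draws:
    theta_hat = (E[theta**(-k)])**(-1/k)."""
    theta = np.asarray(theta_draws, dtype=float)
    return np.mean(theta ** (-k)) ** (-1.0 / k)
```

For *k* > 0 the estimate is pulled below the posterior mean (overestimation penalized more), while *k* < 0 weights underestimation more heavily.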

#### 5. Symmetric Loss Function

The squared error loss denotes the punishment in using $\hat{\theta}$ to estimate $\theta$ and is given by $L(\hat{\theta},\theta)=(\hat{\theta}-\theta)^{2}$. This loss function is symmetric in nature; that is, it gives equal weight to overestimation and underestimation. In real life, we encounter many situations where overestimation may be more serious than underestimation, or vice versa. The Bayes estimator of a function $u(\alpha,\beta)$ of the unknown parameters under the squared error loss function (SELF) is the posterior mean,

$$\hat{u}_{BS}=E\!\left[u(\alpha,\beta)\mid\underline{x}\right]=\frac{\int_{0}^{\infty}\!\int_{0}^{\infty}u(\alpha,\beta)\,L(\underline{x}\mid\alpha,\beta)\,v(\alpha)\,d\alpha\,d\beta}{\int_{0}^{\infty}\!\int_{0}^{\infty}L(\underline{x}\mid\alpha,\beta)\,v(\alpha)\,d\alpha\,d\beta}.$$

Applying the same Lindley approach here as in (4.6), with $u(\alpha,\beta)=\alpha$ or $\beta$, the first derivatives are $u_{\alpha}=1$ (respectively $u_{\beta}=1$) and all second derivatives vanish.

#### 6. Simulation Study

In our simulation study, we chose sample sizes of 50 and 100 together with a smaller sample to represent small, medium, and large datasets. The scale and shape parameters are estimated for the Weibull distribution with maximum likelihood and with the Bayesian method using the extension of Jeffreys' prior. The parameter values chosen were *α* = 0.5 and 1.5 and *β* = 0.8 and 1.2. The values of the Jeffreys extension were *c* = 0.4 and 1.4, and the values of the loss parameter were *a* = −1.6 and 1.6. These were iterated 5000 times, and the scale and shape parameters for each method were calculated. The results are presented below for the estimated parameters and their corresponding mean squared error values. The mean squared error is given as

$$\mathrm{MSE}=\frac{1}{R}\sum_{i=1}^{R}\left(\hat{\theta}_{i}-\theta\right)^{2},$$

where $R$ is the number of replications and $\hat{\theta}_{i}$ the estimate obtained in the $i$th replication.
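The design above can be reproduced in miniature. The sketch below runs a reduced Monte Carlo study for the maximum likelihood estimates only (far fewer replications than the paper's 5000; the Bayesian estimators would be added analogously), reporting the MSE and mean absolute bias of both parameters:

```python
import numpy as np

def mse_study(alpha=0.5, beta=1.2, n=50, reps=500, seed=1):
    """Monte Carlo MSE and mean absolute bias of the Weibull MLEs
    under f(x) = (beta/alpha) * x**(beta-1) * exp(-x**beta / alpha).
    MSE = (1/R) * sum over R replications of (estimate - true)**2."""
    rng = np.random.default_rng(seed)
    a_hat, b_hat = np.empty(reps), np.empty(reps)
    for r in range(reps):
        # x**beta / alpha is standard exponential under this pdf, so:
        x = alpha ** (1 / beta) * rng.weibull(beta, size=n)
        logx = np.log(x)
        b = 1.0
        for _ in range(100):                  # Newton-Raphson for the shape
            xb = x ** b
            s0, s1, s2 = xb.sum(), (xb * logx).sum(), (xb * logx ** 2).sum()
            step = (n / b + logx.sum() - n * s1 / s0) / (
                -n / b ** 2 - n * (s2 * s0 - s1 ** 2) / s0 ** 2)
            b -= step
            if b <= 0:
                b = 1e-3                      # guard against overshoot
            if abs(step) < 1e-9:
                break
        a_hat[r], b_hat[r] = (x ** b).mean(), b
    mse = ((a_hat - alpha) ** 2).mean(), ((b_hat - beta) ** 2).mean()
    bias = np.abs(a_hat - alpha).mean(), np.abs(b_hat - beta).mean()
    return mse, bias
```

Both MSE and absolute bias shrink as `n` grows, which is the pattern the tables in Section 7 report for all estimators.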

#### 7. Results and Discussion

We present the estimated values of the scale parameter *α* for both maximum likelihood estimation and Bayesian estimation using the extension of Jeffreys' prior information with the three loss functions in Table 1. It is observed that the Bayes estimators under the LINEX and general entropy loss functions tend to underestimate the scale parameter, while MLE and the Bayes estimator under the squared error loss function underestimate it only slightly. In terms of mean squared error and absolute bias, as given in Tables 3 and 5, Bayes estimation with the linear exponential loss function provides the smallest values in most cases, especially when the loss parameter is less than zero, that is, *a* = −1.6, whether the extension of Jeffreys' prior is *c* = 0.4 or 1.4. As the sample size increases, both maximum likelihood estimation and Bayes estimation under all loss functions show a corresponding decrease in MSE and absolute bias values.

For the shape parameter *β*, it is clear from Table 2 that MLE and Bayes estimation under the squared error loss function tend to overestimate it, though not in every case, whereas Bayes estimation under the linear exponential and general entropy loss functions overestimates the parameter in all cases. From Tables 4 and 6, Bayesian estimation under the LINEX loss gives a smaller mean squared error and minimum absolute bias compared to the others, but this happens when the loss parameter is *a* = 1.6, implying overestimation since the loss parameter is greater than zero. It is observed again from Table 4 that, as the sample size increases, the mean squared error values under the general entropy loss function decrease to smaller values than any of the others, though it must be stated that the others also have their MSE values decreasing with increasing sample size.

Similarly, it has been observed from Table 6 that the estimator giving the minimum absolute bias over all the other estimators in the majority of cases is the Bayes estimator under the LINEX loss function, followed by the Bayes estimator under the general entropy loss function. The absolute bias values of all the estimators decrease correspondingly as the sample size increases.

#### 8. Conclusion

In this paper, we have addressed the problem of Bayesian estimation for the Weibull distribution under asymmetric and symmetric loss functions, alongside maximum likelihood estimation. Bayes estimators were obtained using Lindley's approximation, while the MLEs were obtained using the Newton-Raphson method.

A simulation study was conducted to examine and compare the performance of the estimates for different sample sizes with different values for the extension of Jeffreys’ prior and the loss functions.

From the results, we observe that in most cases the Bayesian estimator under the linear exponential (LINEX) loss function has the smallest mean squared error values and minimum bias for both the scale parameter *α* and the shape parameter *β* when *a* = −1.6 and *a* = 1.6, respectively, and for both values of the extension of Jeffreys' prior information. As the sample size increases, the mean squared error and the absolute bias of the maximum likelihood estimator and of the Bayes estimators under all the loss functions decrease correspondingly.