Journal of Probability and Statistics
Volume 2011, Article ID 457472, 17 pages
http://dx.doi.org/10.1155/2011/457472
Research Article

Bayesian Inference on the Shape Parameter and Future Observation of Exponentiated Family of Distributions

1Department of Statistics, St. Anthony's College, Shillong 793 001, India
2Department of Statistics, Visva-Bharati University, Santiniketan 731 235, India

Received 17 May 2011; Accepted 5 September 2011

Academic Editor: Mohammad Fraiwan Al-Saleh

Copyright © 2011 Sanku Dey and Sudhansu S. Maiti. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The Bayes estimators of the shape parameter of the exponentiated family of distributions are derived under an extension of Jeffreys' noninformative prior as well as under a conjugate prior, for different scale-invariant loss functions, namely, the weighted quadratic loss function, the squared-log error loss function, and the general entropy loss function. The risk functions of these estimators are studied. We also consider the highest posterior density (HPD) intervals for the parameter and the equal-tail and HPD prediction intervals for a future observation. Finally, we analyze one data set for illustration.

1. Introduction

Let $X$ be a random variable whose cumulative distribution function (cdf) and probability density function (pdf) are given by
\[ G(x;\alpha,\theta)=F^{\alpha}(x;\theta), \tag{1.1} \]
\[ g(x;\alpha,\theta)=\alpha F^{\alpha-1}(x;\theta)f(x;\theta), \tag{1.2} \]
respectively. Here $F(\cdot\,;\theta)$ is the continuous baseline distribution function with corresponding probability density function $f(x;\theta)$, where $\theta$ may be vector valued and $\alpha$ is a positive shape parameter. Then $X$ is said to belong to the exponentiated family of distributions (abbreviated as EFD), also called the proportional reversed hazard family.
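As a concrete numeric sketch of (1.1) and (1.2), the snippet below takes a standard exponential baseline $F(x)=1-e^{-x}$ (a hypothetical choice for illustration) and checks that the pdf is the derivative of the cdf:

```python
import math

# Hypothetical illustration: exponentiated family with an exponential
# baseline F(x) = 1 - exp(-x), so G(x; alpha) = F(x)^alpha as in (1.1).
def G(x, alpha):
    """cdf of the exponentiated family, G(x; alpha) = F(x)^alpha."""
    F = 1.0 - math.exp(-x)
    return F ** alpha

def g(x, alpha):
    """pdf of (1.2), g(x; alpha) = alpha * F(x)^(alpha-1) * f(x)."""
    F = 1.0 - math.exp(-x)
    f = math.exp(-x)
    return alpha * F ** (alpha - 1) * f

# Check the cdf/pdf relation numerically at one point:
x, alpha, h = 1.3, 2.0, 1e-6
deriv = (G(x + h, alpha) - G(x - h, alpha)) / (2 * h)
assert abs(deriv - g(x, alpha)) < 1e-5
```

For $\alpha=1$ the family reduces to the baseline distribution itself, which is the sanity check used below.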

If the baseline distribution is exponential, the family is known in the literature as the generalized exponential (GE) distribution. In recent years, an impressive array of papers has been devoted to studying the behavioral patterns of the parameters of the generalized exponential distribution in both the classical and Bayesian frameworks; a very good summary of this work can be found in Gupta and Kundu [1–4], Raqab [5], Raqab and Ahsanullah [6], Zheng [7], Raqab and Madi [8], Alamm et al. [9], Singh et al. [10], Dey [11], and the references cited there for some recent developments on the GE distribution. If the baseline distribution is Weibull, the family is called the exponentiated Weibull distribution; Mudholkar and Srivastava [12], Nassar and Eissa [13], and Singh et al. [14] have studied this distribution.

In this paper, we assume that $F(x;\theta)=F(x)$ is known but the shape parameter $\alpha$ is unknown. The cdf and pdf then become
\[ G(x;\alpha)=F^{\alpha}(x), \tag{1.3} \]
\[ g(x;\alpha)=\alpha F^{\alpha-1}(x)f(x), \tag{1.4} \]
respectively. If $F(x)$ is symmetric, then $G(x;\alpha)$ is a skewed distribution for $\alpha\neq 1$; hence $\alpha$ can be regarded as a skewness parameter. Gupta and Gupta [15] have shown that positively skewed data can be analyzed very well with a normal baseline distribution. Moreover, $\alpha$ is the parameter of the proportional reversed hazard model in lifetime data analysis. Because of these various important roles, we are interested in finding the Bayes estimators of $\alpha$ and studying their performance under different loss functions and priors. Figure 1 shows the shape of (i) the exponentiated exponential distribution with $F(x)=1-e^{-x}$, (ii) the exponentiated Rayleigh distribution with $F(x)=1-e^{-x^{2}}$, and (iii) the exponentiated lognormal distribution with $F(x)=\Phi(\ln x)$, for $\alpha=0.5$ and $\alpha=2$.

Figure 1: Exponentiated distributions for 𝛼=0.5 and 𝛼=2.

The paper is organized as follows. Section 2 gives a brief description of the prior distributions and loss functions. The Bayes estimators and the associated risk functions are provided in Sections 3 and 4, respectively. Section 5 presents the highest posterior density (HPD) interval for $\alpha$. Section 6 is devoted to the predictive distributions and the equal-tail Bayesian prediction interval for a future observation. Section 7 deals with the Bayes predictive estimator and the HPD prediction interval for a future observation. Section 8 presents an application to a real-life data set. The paper ends with concluding remarks in Section 9.

2. Prior and Loss Functions

Bayesian inference requires an appropriate choice of prior(s) for the parameter(s). From the Bayesian viewpoint, there is no clear-cut way to conclude that one prior is better than another; very often, priors are chosen according to one's subjective knowledge and beliefs. If one has adequate information about the parameter(s), it is better to choose informative prior(s); otherwise, it is preferable to use noninformative prior(s). In this paper we consider both types of priors: the extended Jeffreys' prior and the natural conjugate prior.

The extended Jeffreys' prior proposed by Al-Kutubi [16] is given by
\[ \pi_{1}(\alpha)\propto\frac{1}{\alpha^{2c_{1}}},\quad \alpha>0,\ c_{1}>0. \tag{2.1} \]
The conjugate prior in this case is the gamma prior, with probability density function
\[ \pi_{2}(\alpha)=\frac{b^{a}}{\Gamma(a)}\,\alpha^{a-1}e^{-b\alpha},\quad \alpha>0,\ a,b>0. \tag{2.2} \]
With the above priors, we use three different loss functions for the model (1.1).

(1) The first loss function considered is the weighted quadratic loss function,
\[ L_{1}(\alpha,\delta)=\left(\frac{\alpha-\delta}{\alpha}\right)^{2}, \tag{2.3} \]
where $\delta$ is a decision rule to estimate $\alpha$. Here $\delta$ is chosen so that
\[ \int_{0}^{\infty}\left(\frac{\alpha-\delta}{\alpha}\right)^{2}\pi(\alpha\mid x)\,d\alpha \tag{2.4} \]
is minimized. Equivalently, we minimize
\[ \int_{0}^{\infty}(\alpha-\delta)^{2}q(\alpha\mid x)\,d\alpha,\quad\text{with } q(\alpha\mid x)=\frac{\alpha^{-2}\pi(\alpha\mid x)}{\int_{0}^{\infty}\alpha^{-2}\pi(\alpha\mid x)\,d\alpha}. \tag{2.5} \]
Hence
\[ \hat{\alpha}_{\mathrm{bq}}=\delta=E_{q}(\alpha\mid x). \tag{2.6} \]

(2) The second is the squared-log error loss function proposed by Brown [17], defined as
\[ L_{2}(\alpha,\delta)=(\ln\delta-\ln\alpha)^{2}=\left(\ln\frac{\delta}{\alpha}\right)^{2}. \tag{2.7} \]
This loss function is balanced, with $L_{2}(\alpha,\delta)\to\infty$ as $\delta\to 0$ or $\infty$. A balanced loss function takes both the error of estimation and goodness of fit into account, whereas an unbalanced loss function considers only the error of estimation. This loss function is convex for $\delta/\alpha\le e$ and concave otherwise, but its risk function has a unique minimum with respect to $\delta$. The Bayes estimator of $\alpha$ under the squared-log error loss function is
\[ \hat{\alpha}_{\mathrm{bsl}}=\exp\{E(\ln\alpha\mid x)\}, \tag{2.8} \]
where $E(\cdot\mid x)$ denotes the posterior expectation.

(3) The third is a particular type of asymmetric loss function, the general entropy loss function proposed by Calabria and Pulcini [18] (Podder and Roy [19] call it the modified linear exponential, or MLINEX, loss function), given by
\[ L_{3}(\alpha,\delta)=w\left[\left(\frac{\delta}{\alpha}\right)^{\gamma}-\gamma\ln\frac{\delta}{\alpha}-1\right],\quad \gamma\neq 0,\ w>0. \tag{2.9} \]
If we use $\delta-\alpha$ in place of $\ln(\delta/\alpha)=\ln\delta-\ln\alpha$, we get the linear exponential (LINEX) loss function $w[e^{\gamma(\delta-\alpha)}-\gamma(\delta-\alpha)-1]$. Without loss of generality, we assume $w=1$. If $\gamma=1$, it is the entropy loss function. Under the general entropy loss function, the Bayes estimator of $\alpha$ is
\[ \hat{\alpha}_{\mathrm{bge}}=\left[E\left(\alpha^{-\gamma}\mid x\right)\right]^{-1/\gamma}. \tag{2.10} \]
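The three losses above can be sketched directly in code. The snippet below is a minimal illustration (with $w=1$): all three losses vanish when $\delta=\alpha$ and depend on $\delta$ only through the ratio $\delta/\alpha$, which is the scale invariance the paper relies on:

```python
import math

# A sketch of the three scale-invariant losses of Section 2, with w = 1.
def weighted_quadratic(alpha, delta):
    # (2.3)
    return ((alpha - delta) / alpha) ** 2

def squared_log(alpha, delta):
    # (2.7)
    return (math.log(delta) - math.log(alpha)) ** 2

def general_entropy(alpha, delta, gamma):
    # (2.9) with w = 1
    r = delta / alpha
    return r ** gamma - gamma * math.log(r) - 1.0

# All three vanish at delta = alpha and are invariant to rescaling
# (alpha, delta) -> (c*alpha, c*delta):
a = 2.5
assert weighted_quadratic(a, a) == 0.0
assert squared_log(a, a) == 0.0
assert abs(general_entropy(a, a, 0.5)) < 1e-12
assert abs(weighted_quadratic(1.0, 1.2) - weighted_quadratic(3.0, 3.6)) < 1e-12
```

This scale invariance is why the risk functions derived in Section 4 turn out to be free of $\alpha$.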

3. Estimation of Parameter

Let us consider a random sample $x=(x_{1},x_{2},\dots,x_{n})$ of size $n$ from the exponentiated family of distributions. The likelihood function of $\alpha$ for the given sample observations is
\[ L(\alpha\mid x)=\alpha^{n}\prod_{i=1}^{n}F^{\alpha-1}(x_{i})f(x_{i})=\alpha^{n}e^{\alpha\sum_{i=1}^{n}\ln F(x_{i})}\prod_{i=1}^{n}\frac{f(x_{i})}{F(x_{i})}=\alpha^{n}e^{-\alpha T}\prod_{i=1}^{n}\frac{f(x_{i})}{F(x_{i})}, \tag{3.1} \]
with $T=-\sum_{i=1}^{n}\ln F(x_{i})$. Here, the maximum likelihood estimator (MLE) of $\alpha$ is $\hat{\alpha}_{\mathrm{mle}}=n/T$.
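A quick simulation sketch of the MLE $\hat{\alpha}_{\mathrm{mle}}=n/T$, under the assumption of an exponential baseline $F(x)=1-e^{-x}$ and a hypothetical true value $\alpha=2$ (inverse-cdf sampling uses the fact that $F(X)^{\alpha}\sim U(0,1)$):

```python
import math
import random

# Sketch: compute T = -sum(log F(x_i)) and the MLE n/T for an
# exponential baseline F(x) = 1 - exp(-x); alpha_true is hypothetical.
random.seed(1)
alpha_true, n = 2.0, 5000

# Inverse-cdf sampling: if U ~ Uniform(0,1), then F^{-1}(U^{1/alpha})
# has cdf F(x)^alpha, i.e. it follows the exponentiated distribution.
sample = [-math.log(1.0 - random.random() ** (1.0 / alpha_true))
          for _ in range(n)]

T = -sum(math.log(1.0 - math.exp(-x)) for x in sample)
alpha_mle = n / T
assert abs(alpha_mle - alpha_true) < 0.15
```

Since $-\ln F(X)\sim\mathrm{Exp}(\alpha)$, $T$ is $G(n,\alpha)$, which is the distributional fact used in Section 4 to compute risks.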

3.1. Estimation under the Assumption of Extended Jeffreys’ Prior

Combining the prior distribution (2.1) and the likelihood function, the posterior density of $\alpha$ is derived as
\[ \pi_{1}(\alpha\mid x)=\frac{T^{\,n-2c_{1}+1}}{\Gamma(n-2c_{1}+1)}\,e^{-\alpha T}\alpha^{\,n-2c_{1}},\quad \alpha>0, \tag{3.2} \]
which is a gamma distribution $G(n-2c_{1}+1,\,T)$.

For different derivations in this and subsequent sections, we use the expressions $\Gamma(p)=\int_{0}^{\infty}x^{p-1}e^{-x}\,dx$, $\Gamma'(p)=\int_{0}^{\infty}\ln x\;x^{p-1}e^{-x}\,dx$, $\Gamma''(p)=\int_{0}^{\infty}(\ln x)^{2}x^{p-1}e^{-x}\,dx$, the digamma function $\psi(p)=d\ln\Gamma(p)/dp=\Gamma'(p)/\Gamma(p)$, and the trigamma function $\psi'(p)=d^{2}\ln\Gamma(p)/dp^{2}=\left(\Gamma(p)\Gamma''(p)-[\Gamma'(p)]^{2}\right)/\Gamma^{2}(p)$.

Using the extended Jeffreys' prior of the form (2.1), the Bayes estimators of $\alpha$ under the weighted quadratic, squared-log error, and general entropy loss functions are derived as
\[ \hat{\alpha}^{E}_{\mathrm{bq}}=\frac{n-2c_{1}-1}{T},\qquad \hat{\alpha}^{E}_{\mathrm{bsl}}=\frac{e^{\psi(n-2c_{1}+1)}}{T},\qquad \hat{\alpha}^{E}_{\mathrm{bge}}=\left[\frac{\Gamma(n-2c_{1}+1)}{\Gamma(n-2c_{1}+1-\gamma)}\right]^{1/\gamma}\frac{1}{T}=\frac{k}{T}, \tag{3.3} \]
respectively, with $k=[\Gamma(n-2c_{1}+1)/\Gamma(n-2c_{1}+1-\gamma)]^{1/\gamma}$.
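A numeric sketch of (3.3), with hypothetical values $n=23$, $T=4.0$. Since the Python standard library has no digamma function, $\psi$ is approximated here by a central difference of `math.lgamma` (an implementation convenience, not part of the paper):

```python
import math

# Sketch of the Bayes estimators (3.3) under the extended Jeffreys' prior.
def psi(p, h=1e-5):
    # digamma via central difference of log-gamma (approximation)
    return (math.lgamma(p + h) - math.lgamma(p - h)) / (2 * h)

def bayes_extended_jeffreys(n, T, c1, gamma):
    bq = (n - 2 * c1 - 1) / T
    bsl = math.exp(psi(n - 2 * c1 + 1)) / T
    # k = [Gamma(n-2c1+1)/Gamma(n-2c1+1-gamma)]^(1/gamma), in log space
    k = math.exp((math.lgamma(n - 2 * c1 + 1)
                  - math.lgamma(n - 2 * c1 + 1 - gamma)) / gamma)
    bge = k / T
    return bq, bsl, bge

# With Jeffreys' prior (c1 = 1/2) and hypothetical n = 23, T = 4.0:
bq, bsl, bge = bayes_extended_jeffreys(23, 4.0, 0.5, 1.0)
assert bq == (23 - 2) / 4.0              # (n - 2c1 - 1)/T
assert abs(bge - (23 - 1) / 4.0) < 1e-9  # gamma = 1 gives k = n - 2c1
```

For $\gamma=1$ the ratio of gamma functions collapses to $n-2c_{1}$, which is a convenient hand check.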

Remark 3.1. We get Jeffreys' noninformative prior for $c_{1}=1/2$ and Hartigan's noninformative prior for $c_{1}=3/2$.

3.2. Estimation under the Assumption of Conjugate Prior

Combining the prior distribution (2.2) and the likelihood function, the posterior density of $\alpha$ is derived as
\[ \pi_{2}(\alpha\mid x)=\frac{T_{b}^{\,n+a}}{\Gamma(n+a)}\,e^{-\alpha T_{b}}\alpha^{\,n+a-1},\quad \alpha>0, \tag{3.4} \]
which is a gamma distribution $G(n+a,\,T_{b})$ with $T_{b}=T+b$.

Using a conjugate prior of the form (2.2), the Bayes estimators under the weighted quadratic, squared-log error, and general entropy loss functions are derived as
\[ \hat{\alpha}^{c}_{\mathrm{bq}}=\frac{n+a-2}{T_{b}}, \tag{3.5} \]
\[ \hat{\alpha}^{c}_{\mathrm{bsl}}=\frac{e^{\psi(n+a)}}{T_{b}}, \tag{3.6} \]
\[ \hat{\alpha}^{c}_{\mathrm{bge}}=\left[\frac{\Gamma(n+a)}{\Gamma(n+a-\gamma)}\right]^{1/\gamma}\frac{1}{T_{b}}, \tag{3.7} \]
respectively. It is to be noted that the Bayes estimators in (3.5), (3.6), and (3.7) depend on $a$ and $b$, the parameters of the prior distribution of $\alpha$. These parameters could be estimated by means of an empirical Bayes procedure (see Lindley [20] and Awad and Gharraf [21]). Given the random sample $x=(x_{1},x_{2},\dots,x_{n})$, the likelihood function of $\alpha$ is a gamma density with parameters $(n+1,\,T)$. Hence it is proposed to estimate the prior parameters $a$ and $b$ from the sample by $n+1$ and $T$, respectively. Therefore, (3.5), (3.6), and (3.7) become
\[ \hat{\alpha}^{c}_{\mathrm{bq}}=\frac{2n-1}{2T}, \tag{3.8} \]
\[ \hat{\alpha}^{c}_{\mathrm{bsl}}=\frac{e^{\psi(2n+1)}}{2T}, \tag{3.9} \]
\[ \hat{\alpha}^{c}_{\mathrm{bge}}=\left[\frac{\Gamma(2n+1)}{\Gamma(2n+1-\gamma)}\right]^{1/\gamma}\frac{1}{2T}=\frac{K_{1}}{2T},\quad\text{where } K_{1}=\left[\frac{\Gamma(2n+1)}{\Gamma(2n+1-\gamma)}\right]^{1/\gamma}, \tag{3.10} \]
respectively.
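The empirical Bayes forms (3.8)–(3.10) can be sketched the same way, again with a finite-difference digamma and hypothetical values $n=23$, $T=4.0$:

```python
import math

# Sketch of the empirical-Bayes estimators (3.8)-(3.10), where the
# prior parameters are replaced by a = n + 1 and b = T.
def psi(p, h=1e-5):
    # digamma via central difference of log-gamma (approximation)
    return (math.lgamma(p + h) - math.lgamma(p - h)) / (2 * h)

def bayes_conjugate_empirical(n, T, gamma):
    bq = (2 * n - 1) / (2 * T)
    bsl = math.exp(psi(2 * n + 1)) / (2 * T)
    # K1 = [Gamma(2n+1)/Gamma(2n+1-gamma)]^(1/gamma), in log space
    k1 = math.exp((math.lgamma(2 * n + 1)
                   - math.lgamma(2 * n + 1 - gamma)) / gamma)
    bge = k1 / (2 * T)
    return bq, bsl, bge

bq, bsl, bge = bayes_conjugate_empirical(23, 4.0, 1.0)
assert bq == 45 / 8.0                 # (2n - 1)/(2T)
assert abs(bge - 46 / 8.0) < 1e-9     # gamma = 1 gives K1 = 2n
```

As in the previous sketch, $\gamma=1$ collapses the gamma-function ratio, here to $K_{1}=2n$.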

4. Risks of the Bayes Estimators

Since $X$ follows the exponentiated family of distributions with parameter $\alpha$, $T=-\sum_{i=1}^{n}\ln F(x_{i})$ is distributed as $G(n,\alpha)$, with probability density function
\[ f_{T}(t)=\frac{\alpha^{n}}{\Gamma(n)}\,e^{-\alpha t}t^{\,n-1},\quad t>0. \tag{4.1} \]
Therefore,
\[ E\left(T^{-\gamma}\right)=\int_{0}^{\infty}t^{-\gamma}f_{T}(t)\,dt=\frac{\alpha^{n}}{\Gamma(n)}\int_{0}^{\infty}e^{-\alpha t}t^{\,n-\gamma-1}\,dt=\frac{\Gamma(n-\gamma)}{\Gamma(n)}\,\alpha^{\gamma}. \tag{4.2} \]
The risk function of $\hat{\alpha}^{E}_{\mathrm{bq}}$ is
\[ R\left(\hat{\alpha}^{E}_{\mathrm{bq}}\right)=E\left[L_{1}\left(\alpha,\hat{\alpha}^{E}_{\mathrm{bq}}\right)\right]=\frac{1}{\alpha^{2}}\left[\alpha^{2}-2\alpha\left(n-2c_{1}-1\right)E\left(\frac{1}{T}\right)+\left(n-2c_{1}-1\right)^{2}E\left(\frac{1}{T^{2}}\right)\right]=1-\frac{2\left(n-2c_{1}-1\right)}{n-1}+\frac{\left(n-2c_{1}-1\right)^{2}}{(n-1)(n-2)}, \tag{4.3} \]
using $E(1/T)=\alpha/(n-1)$ and $E(1/T^{2})=\alpha^{2}/[(n-1)(n-2)]$ from (4.2). Similarly, the risk functions of $\hat{\alpha}^{E}_{\mathrm{bsl}}$ and $\hat{\alpha}^{E}_{\mathrm{bge}}$ under the squared-log error and general entropy loss functions are
\[ R\left(\hat{\alpha}^{E}_{\mathrm{bsl}}\right)=\psi'(n)+\left[\psi\left(n-2c_{1}+1\right)-\psi(n)\right]^{2}, \tag{4.4} \]
\[ R\left(\hat{\alpha}^{E}_{\mathrm{bge}}\right)=k^{\gamma}\,\frac{\Gamma(n-\gamma)}{\Gamma(n)}+\gamma\left[\psi(n)-\ln k\right]-1, \tag{4.5} \]
respectively. The risk functions of $\hat{\alpha}^{c}_{\mathrm{bq}}$, $\hat{\alpha}^{c}_{\mathrm{bsl}}$, and $\hat{\alpha}^{c}_{\mathrm{bge}}$, assuming the conjugate prior, are
\[ R\left(\hat{\alpha}^{c}_{\mathrm{bq}}\right)=1-\frac{2n-1}{n-1}+\frac{(2n-1)^{2}}{4(n-1)(n-2)}, \tag{4.6} \]
\[ R\left(\hat{\alpha}^{c}_{\mathrm{bsl}}\right)=\psi'(n)+\left[\psi(2n+1)-\psi(n)-\ln 2\right]^{2}, \tag{4.7} \]
\[ R\left(\hat{\alpha}^{c}_{\mathrm{bge}}\right)=\left(\frac{K_{1}}{2}\right)^{\gamma}\frac{\Gamma(n-\gamma)}{\Gamma(n)}+\gamma\left[\psi(n)-\ln K_{1}+\ln 2\right]-1, \tag{4.8} \]
respectively.

The risk functions of $\hat{\alpha}_{\mathrm{mle}}$ under the weighted quadratic, squared-log error, and general entropy loss functions are
\[ R_{q}\left(\hat{\alpha}_{\mathrm{mle}}\right)=1-\frac{2n}{n-1}+\frac{n^{2}}{(n-1)(n-2)}, \tag{4.9} \]
\[ R_{sl}\left(\hat{\alpha}_{\mathrm{mle}}\right)=\psi'(n)+\left[\psi(n)-\ln n\right]^{2}, \tag{4.10} \]
\[ R_{ge}\left(\hat{\alpha}_{\mathrm{mle}}\right)=n^{\gamma}\,\frac{\Gamma(n-\gamma)}{\Gamma(n)}+\gamma\left[\psi(n)-\ln n\right]-1, \tag{4.11} \]
respectively.
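Because the losses are scale invariant, these risks do not depend on $\alpha$. The sketch below checks the closed form (4.9) against a Monte Carlo average over $T\sim G(n,\alpha)$, with hypothetical $n=30$ and $\alpha=1.5$:

```python
import math
import random

# Sketch: the closed-form risk (4.9) of the MLE under weighted
# quadratic loss, checked by Monte Carlo over T ~ G(n, alpha).
def risk_q_mle(n):
    return 1 - 2 * n / (n - 1) + n * n / ((n - 1) * (n - 2))

random.seed(7)
n, alpha = 30, 1.5
reps = 100000
mc = 0.0
for _ in range(reps):
    # gammavariate(shape, scale): scale = 1/alpha gives a rate-alpha gamma
    T = random.gammavariate(n, 1.0 / alpha)
    mc += ((alpha - n / T) / alpha) ** 2
mc /= reps
assert abs(mc - risk_q_mle(n)) < 0.002
```

Changing `alpha` leaves the Monte Carlo average (up to noise) unchanged, which mirrors the fact that (4.9)–(4.11) are free of $\alpha$.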

The estimators developed in Section 3 are studied here on the basis of their risks under three different loss functions, namely, (a) weighted quadratic loss, (b) squared-log error loss, (c) general entropy loss for $\gamma=0.5$, (d) general entropy loss for $\gamma=1$, and (e) general entropy loss for $\gamma=1.5$. The risk functions of the proposed estimators are shown in Figures 2–5. The thick lines in each figure show the risks of the Bayes estimators under the extended Jeffreys' prior and the conjugate prior, and the dotted lines show the risks of the MLE under the different loss functions. In Figure 2, the risk functions are plotted for all the loss functions under the extended Jeffreys' prior for different values of $c_{1}$ and for $n=30$. It is observed that the risks increase as $c_{1}$ increases. Risks under general entropy loss for $\gamma<1$ are ordinarily less than those under weighted quadratic and squared-log error losses. For small values of $c_{1}$, the risks of the Bayes estimators are lower than those of the maximum likelihood estimator for each loss function considered. The Bayes estimators perform better for some values of $c_{1}$ and the loss functions under consideration; for example, in the figure, the risk of $\hat{\alpha}^{E}_{\mathrm{bq}}$ is smaller for $c_{1}<1.5$ (approximately), whereas that of $\hat{\alpha}^{E}_{\mathrm{bsl}}$ is smaller for $c_{1}<0.8$ (approximately). Figures 3 and 4 show the risks for different values of $n$ for $c_{1}=0.5$ and $c_{1}=2$, respectively. We find that the risks decrease as $n$ increases for all values of $c_{1}$.

Figure 2: Risk function of estimators under the extended Jeffreys’ prior for different values of 𝑐1 with 𝑛=30.
Figure 3: Risk function of estimators under the extended Jeffreys’ prior for different values of 𝑛 with 𝑐1=0.5.
Figure 4: Risk function of estimators under the extended Jeffreys’ prior for different 𝑛 with 𝑐1=2.
Figure 5: Risk function of estimators under conjugate prior for different 𝑛.

When we consider the conjugate prior, we see that the risks under the squared-log error and weighted quadratic losses are smaller than under the general entropy loss, and only for these two losses are the risks of the Bayes estimators smaller than that of the MLE for small $n$ (Figure 5). The risks under the conjugate prior are generally higher than those under the extended Jeffreys' prior.

5. Highest Posterior Density Intervals for 𝛼

In this section our objective is to provide a highest posterior density (HPD) interval for the unknown parameter $\alpha$ of the model (1.2). The HPD interval is one of the most useful tools for summarizing posterior uncertainty: it includes the more probable values of the parameter and excludes the less probable ones. Since the posterior density (3.2) is unimodal, the $100(1-\eta)\%$ HPD interval $[H_{E1},H_{E2}]$ for $\alpha$ must satisfy
\[ \int_{H_{E1}}^{H_{E2}}\pi_{1}(\alpha\mid x)\,d\alpha=1-\eta, \tag{5.1} \]
that is,
\[ I\left(n-2c_{1}+1,\,H_{E2}T\right)-I\left(n-2c_{1}+1,\,H_{E1}T\right)=1-\eta, \tag{5.2} \]
where $I(p,z)=\int_{0}^{z}u^{p-1}e^{-u}\,du/\Gamma(p)$ denotes the regularized incomplete gamma function, and
\[ \pi_{1}\left(H_{E1}\mid x\right)=\pi_{1}\left(H_{E2}\mid x\right), \tag{5.3} \]
that is,
\[ \left(\frac{H_{E1}}{H_{E2}}\right)^{n-2c_{1}}=e^{T\left(H_{E1}-H_{E2}\right)}, \tag{5.4} \]
simultaneously. The HPD interval $[H_{E1},H_{E2}]$ is the simultaneous solution of (5.2) and (5.4).
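These two conditions have no closed-form solution, but since the posterior is a gamma density they are easy to solve numerically. The sketch below (hypothetical values $n=23$, $T=4.0$, $c_{1}=0.5$, $\eta=0.05$; pure standard library, so the incomplete gamma integral is done by Simpson's rule) bisects on the density level whose equal-density cut points enclose mass $1-\eta$:

```python
import math

# A numeric sketch of the HPD interval defined by (5.2) and (5.4).
# The posterior (3.2) is Gamma(m, T) with m = n - 2*c1 + 1.
def hpd_gamma(m, T, eta):
    def pdf(a):
        # gamma(m, T) density, evaluated in log space for stability
        return math.exp(m * math.log(T) - math.lgamma(m)
                        + (m - 1) * math.log(a) - T * a)

    mode = (m - 1) / T

    def cut(level, lo, hi):
        # bisection for pdf(a) = level on a monotone stretch [lo, hi]
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if (pdf(mid) - level) * (pdf(lo) - level) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    def coverage(level):
        a = cut(level, 1e-12, mode)          # left cut point
        b = cut(level, mode, 20 * mode + 10)  # right cut point
        N = 1000                              # Simpson's rule for the mass
        h = (b - a) / N
        s = pdf(a) + pdf(b)
        for i in range(1, N):
            s += pdf(a + i * h) * (4 if i % 2 else 2)
        return a, b, s * h / 3

    lo_lvl, hi_lvl = 0.0, pdf(mode)
    for _ in range(60):
        lvl = 0.5 * (lo_lvl + hi_lvl)
        a, b, p = coverage(lvl)
        if p > 1 - eta:
            lo_lvl = lvl   # interval too wide: raise the density level
        else:
            hi_lvl = lvl
    return a, b, p

n, T, c1, eta = 23, 4.0, 0.5, 0.05
a, b, p = hpd_gamma(n - 2 * c1 + 1, T, eta)
assert abs(p - (1 - eta)) < 1e-3
assert a < (n - 2 * c1) / T < b   # the interval brackets the posterior mode
```

The equal-density condition at the two endpoints is exactly (5.4); the coverage condition is (5.2).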

Similarly, the posterior density (3.4) is unimodal, and the $100(1-\eta)\%$ HPD interval $[H_{c1},H_{c2}]$ for $\alpha$ must satisfy
\[ \int_{H_{c1}}^{H_{c2}}\pi_{2}(\alpha\mid x)\,d\alpha=1-\eta, \tag{5.5} \]
that is,
\[ I\left(n+a,\,H_{c2}T_{b}\right)-I\left(n+a,\,H_{c1}T_{b}\right)=1-\eta, \tag{5.6} \]
where $I(p,z)$ is the regularized incomplete gamma function, and
\[ \pi_{2}\left(H_{c1}\mid x\right)=\pi_{2}\left(H_{c2}\mid x\right), \tag{5.7} \]
that is,
\[ \left(\frac{H_{c1}}{H_{c2}}\right)^{n+a-1}=e^{T_{b}\left(H_{c1}-H_{c2}\right)}, \tag{5.8} \]
simultaneously. Therefore, the HPD interval $[H_{c1},H_{c2}]$ is the simultaneous solution of (5.6) and (5.8). If $a$ and $b$ are not known, then, substituting the empirical Bayes estimates of $a$ and $b$, the equations become
\[ I\left(2n+1,\,2H_{c2}T\right)-I\left(2n+1,\,2H_{c1}T\right)=1-\eta, \tag{5.9} \]
\[ \left(\frac{H_{c1}}{H_{c2}}\right)^{2n}=e^{2T\left(H_{c1}-H_{c2}\right)}, \tag{5.10} \]
respectively.

6. Predictive Distribution

In this section our objective is to obtain the posterior predictive density of a future observation based on the current observations, to construct an equal-tail Bayesian prediction interval for the future observation, and to compare this interval with the frequentist predictive interval. The posterior predictive distribution of $y=x_{n+1}$ given $x=(x_{1},x_{2},\dots,x_{n})$ under (3.2) is defined by
\[ \xi_{E}(y\mid x)=\int_{0}^{\infty}\pi_{1}(\alpha\mid x)\,g(y;\alpha)\,d\alpha=\frac{n-2c_{1}+1}{T}\left[1-\frac{\ln F(y)}{T}\right]^{-(n-2c_{1}+2)}\frac{f(y)}{F(y)}. \tag{6.1} \]
A $100(1-\eta)\%$ equal-tail prediction interval $[y_{E1},y_{E2}]$ is the solution of
\[ \int_{0}^{y_{E1}}\xi_{E}(y\mid x)\,dy=\int_{y_{E2}}^{\infty}\xi_{E}(y\mid x)\,dy=\frac{\eta}{2}. \tag{6.2} \]
Using (6.1), we get (after simplification)
\[ y_{E1}=F^{-1}\left(e^{T\left\{1-(\eta/2)^{-1/(n-2c_{1}+1)}\right\}}\right),\qquad y_{E2}=F^{-1}\left(e^{T\left\{1-(1-\eta/2)^{-1/(n-2c_{1}+1)}\right\}}\right). \tag{6.3} \]
The posterior predictive distribution of $y=x_{n+1}$ given $x=(x_{1},x_{2},\dots,x_{n})$ under (3.4) is defined by
\[ \xi_{c}(y\mid x)=\int_{0}^{\infty}\pi_{2}(\alpha\mid x)\,g(y;\alpha)\,d\alpha=\frac{n+a}{T_{b}}\left[1-\frac{\ln F(y)}{T_{b}}\right]^{-(n+a+1)}\frac{f(y)}{F(y)}. \tag{6.4} \]
A $100(1-\eta)\%$ equal-tail prediction interval $[y_{c1},y_{c2}]$ is the solution of
\[ \int_{0}^{y_{c1}}\xi_{c}(y\mid x)\,dy=\int_{y_{c2}}^{\infty}\xi_{c}(y\mid x)\,dy=\frac{\eta}{2}. \tag{6.5} \]
Using (6.4), we get (after simplification)
\[ y_{c1}=F^{-1}\left(e^{T_{b}\left\{1-(\eta/2)^{-1/(n+a)}\right\}}\right),\qquad y_{c2}=F^{-1}\left(e^{T_{b}\left\{1-(1-\eta/2)^{-1/(n+a)}\right\}}\right). \tag{6.6} \]
If $a$ and $b$ are not known, then, substituting the empirical Bayes estimates of $a$ and $b$, the prediction limits become
\[ y_{c1}=F^{-1}\left(e^{2T\left\{1-(\eta/2)^{-1/(2n+1)}\right\}}\right),\qquad y_{c2}=F^{-1}\left(e^{2T\left\{1-(1-\eta/2)^{-1/(2n+1)}\right\}}\right). \tag{6.7} \]
For deriving the classical intervals, we note that $Z=-\ln F(Y)/T$ is distributed as a beta variate of the second kind with parameters $1$ and $n$, with pdf
\[ h(z)=\frac{1}{B(1,n)}\,(1+z)^{-(n+1)},\quad z>0. \tag{6.8} \]
Solving for $(z_{1},z_{2})$ in
\[ \int_{0}^{z_{1}}h(z)\,dz=\int_{z_{2}}^{\infty}h(z)\,dz=\frac{\eta}{2}, \tag{6.9} \]
and using (6.8), the corresponding prediction limits for $y$ are (after simplification)
\[ y_{1}=F^{-1}\left(e^{T\left\{1-(\eta/2)^{-1/n}\right\}}\right),\qquad y_{2}=F^{-1}\left(e^{T\left\{1-(1-\eta/2)^{-1/n}\right\}}\right). \tag{6.10} \]
It is to be noted that taking $c_{1}=0.5$ in (6.3) yields the classical $100(1-\eta)\%$ equal-tail prediction interval.
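Integrating (6.1) gives the predictive cdf $P(Y\le y\mid x)=[1-\ln F(y)/T]^{-(n-2c_{1}+1)}$, from which the limits (6.3) follow; the sketch below verifies this relation for an exponential baseline $F(x)=1-e^{-x}$ and hypothetical values $n=23$, $T=4.0$, $c_{1}=0.5$, $\eta=0.05$:

```python
import math

# Sketch of the equal-tail limits (6.3) for an exponential baseline.
def Finv(u):
    # inverse of F(x) = 1 - exp(-x)
    return -math.log(1.0 - u)

def equal_tail(n, T, c1, eta):
    m = n - 2 * c1 + 1
    y1 = Finv(math.exp(T * (1.0 - (eta / 2) ** (-1.0 / m))))
    y2 = Finv(math.exp(T * (1.0 - (1.0 - eta / 2) ** (-1.0 / m))))
    return y1, y2

def predictive_cdf(y, n, T, c1):
    # P(Y <= y | x) = [1 - ln F(y)/T]^{-(n - 2c1 + 1)}, from (6.1)
    m = n - 2 * c1 + 1
    F = 1.0 - math.exp(-y)
    return (1.0 - math.log(F) / T) ** (-m)

n, T, c1, eta = 23, 4.0, 0.5, 0.05
y1, y2 = equal_tail(n, T, c1, eta)
assert y1 < y2
assert abs(predictive_cdf(y1, n, T, c1) - eta / 2) < 1e-9
assert abs(predictive_cdf(y2, n, T, c1) - (1 - eta / 2)) < 1e-9
```

Setting $c_{1}=0.5$ reproduces the classical limits (6.10), since the exponent $-1/(n-2c_{1}+1)$ then equals $-1/n$.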

7. Bayes Predictive Estimator and HPD Prediction Interval for a Future Observation

In this section, we introduce the Bayes predictive estimators of a future observation for the different priors under the above-mentioned loss functions, and later we obtain HPD prediction intervals for the future observation. The Bayes predictive estimators of $y$ under the weighted quadratic loss function, assuming the extended Jeffreys' prior and the conjugate prior, are
\[ \hat{y}_{1E}=\frac{\int_{0}^{\infty}y^{-1}\,\xi_{E}(y\mid x)\,dy}{\int_{0}^{\infty}y^{-2}\,\xi_{E}(y\mid x)\,dy}, \tag{7.1} \]
\[ \hat{y}_{1c}=\frac{\int_{0}^{\infty}y^{-1}\,\xi_{c}(y\mid x)\,dy}{\int_{0}^{\infty}y^{-2}\,\xi_{c}(y\mid x)\,dy}, \tag{7.2} \]
respectively. The Bayes predictive estimators of $y$ under the squared-log error loss function, assuming the extended Jeffreys' prior and the conjugate prior, are
\[ \hat{y}_{2E}=\exp\{E(\ln Y\mid x)\},\quad\text{with } E(\ln Y\mid x)=\int_{0}^{\infty}\ln y\;\xi_{E}(y\mid x)\,dy, \tag{7.3} \]
\[ \hat{y}_{2c}=\exp\{E(\ln Y\mid x)\},\quad\text{with } E(\ln Y\mid x)=\int_{0}^{\infty}\ln y\;\xi_{c}(y\mid x)\,dy, \tag{7.4} \]
respectively. The Bayes predictive estimators of $y$ under the general entropy loss function, assuming the extended Jeffreys' prior and the conjugate prior, are
\[ \hat{y}_{3E}=\left(J_{Eg}\right)^{-1/\gamma},\quad\text{with } J_{Eg}=\int_{0}^{\infty}y^{-\gamma}\,\xi_{E}(y\mid x)\,dy, \tag{7.5} \]
\[ \hat{y}_{3c}=\left(J_{cg}\right)^{-1/\gamma},\quad\text{with } J_{cg}=\int_{0}^{\infty}y^{-\gamma}\,\xi_{c}(y\mid x)\,dy, \tag{7.6} \]
respectively. Closed-form expressions for (7.1)–(7.6) appear intractable, and the calculations must be carried out numerically.
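As an illustration of such a numerical calculation, the sketch below evaluates (7.3), $\exp\{E(\ln Y\mid x)\}$, by simple midpoint-rule quadrature for an exponential baseline $F(x)=1-e^{-x}$ and hypothetical values $n=23$, $T=4.0$, $c_{1}=0.5$ (the truncation point and grid size are implementation choices, not part of the paper):

```python
import math

# Sketch of (7.3): the predictive estimate under squared-log loss,
# exp(E[ln Y | x]), computed by midpoint-rule quadrature against the
# predictive density (6.1).
def xi_E(y, n, T, c1):
    m = n - 2 * c1 + 1
    F = 1.0 - math.exp(-y)
    return (m / T) * (1.0 - math.log(F) / T) ** (-(m + 1)) * math.exp(-y) / F

def predictive_mean_log(n, T, c1, upper=60.0, N=100000):
    # xi_E carries essentially all its mass on (0, upper) here
    h = upper / N
    total, mean_log = 0.0, 0.0
    for i in range(N):
        y = (i + 0.5) * h
        w = xi_E(y, n, T, c1) * h
        total += w
        mean_log += math.log(y) * w
    return math.exp(mean_log / total), total

est, mass = predictive_mean_log(23, 4.0, 0.5)
assert abs(mass - 1.0) < 1e-3   # the predictive density integrates to one
assert est > 0
```

The same quadrature loop, with `math.log(y)` replaced by `y ** -1`, `y ** -2`, or `y ** -gamma`, evaluates the integrals appearing in (7.1), (7.2), (7.5), and (7.6).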

For the unimodal predictive density (6.1), the HPD prediction interval $[h_{E1},h_{E2}]$ with probability $1-\eta$ for $y$ is the simultaneous solution of
\[ P\left(h_{E1}<Y<h_{E2}\right)=1-\eta, \tag{7.7} \]
that is,
\[ \left[1-\frac{\ln F(h_{E2})}{T}\right]^{-(n-2c_{1}+1)}-\left[1-\frac{\ln F(h_{E1})}{T}\right]^{-(n-2c_{1}+1)}=1-\eta, \tag{7.8} \]
and
\[ \xi_{E}\left(h_{E1}\mid x\right)=\xi_{E}\left(h_{E2}\mid x\right), \tag{7.9} \]
that is,
\[ \left[\frac{1-\ln F(h_{E2})/T}{1-\ln F(h_{E1})/T}\right]^{n-2c_{1}+2}=\frac{f(h_{E2})}{F(h_{E2})}\cdot\frac{F(h_{E1})}{f(h_{E1})}. \tag{7.10} \]
Similarly, for the unimodal predictive density (6.4), the HPD prediction interval $[h_{c1},h_{c2}]$ with probability $1-\eta$ for $y$ is the simultaneous solution of
\[ P\left(h_{c1}<Y<h_{c2}\right)=1-\eta, \tag{7.11} \]
that is,
\[ \left[1-\frac{\ln F(h_{c2})}{T_{b}}\right]^{-(n+a)}-\left[1-\frac{\ln F(h_{c1})}{T_{b}}\right]^{-(n+a)}=1-\eta, \tag{7.12} \]
and
\[ \xi_{c}\left(h_{c1}\mid x\right)=\xi_{c}\left(h_{c2}\mid x\right), \tag{7.13} \]
that is,
\[ \left[\frac{1-\ln F(h_{c2})/T_{b}}{1-\ln F(h_{c1})/T_{b}}\right]^{n+a+1}=\frac{f(h_{c2})}{F(h_{c2})}\cdot\frac{F(h_{c1})}{f(h_{c1})}. \tag{7.14} \]
If $a$ and $b$ are not known, then, substituting the empirical Bayes estimates of $a$ and $b$, the HPD prediction limits for the future observation solve
\[ \left[1-\frac{\ln F(h_{c2})}{2T}\right]^{-(2n+1)}-\left[1-\frac{\ln F(h_{c1})}{2T}\right]^{-(2n+1)}=1-\eta,\qquad \left[\frac{1-\ln F(h_{c2})/2T}{1-\ln F(h_{c1})/2T}\right]^{2n+2}=\frac{f(h_{c2})}{F(h_{c2})}\cdot\frac{F(h_{c1})}{f(h_{c1})}, \tag{7.15} \]
simultaneously.

8. Data Analysis

Consider the following data, which arose in tests of the endurance of deep-groove ball bearings (Lawless [22, page 228]). The data are the numbers of million revolutions before failure for each of the 23 ball bearings in the life test:
17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40. (8.1)
To study the goodness of fit of the exponentiated exponential model, Gupta and Kundu [2] computed the $\chi^{2}$ statistic as 0.783, with a corresponding $P$ value of 0.376. The estimate of $\alpha$, the shape parameter, is 5.2589, and that of $\theta$, the rate of the exponential distribution, is 0.0314. Here our aim is to obtain the Bayes estimates of $\alpha$ for this data set under the three loss functions and the two priors, assuming that the baseline distribution is exponential with $\theta=0.0314$. At the same time, we are interested in studying the HPD intervals for the parameter $\alpha$. Further, our intention is to predict a future observation based on the given set of observations and to obtain the HPD prediction intervals for the future observation. Figure 6 shows the estimated predictive distribution.
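The basic quantities of the analysis can be reproduced directly. The sketch below computes $T$, the MLE $n/T$, and the weighted-quadratic Bayes estimate under Jeffreys' prior ($c_{1}=1/2$) for this data set, with the baseline fixed at $F(x)=1-e^{-\theta x}$, $\theta=0.0314$; because $\theta$ is a rounded value from the joint fit, the profile MLE obtained here is close to, but not identical to, the reported 5.2589:

```python
import math

# Sketch: T, the MLE n/T, and the Jeffreys'-prior (c1 = 1/2) estimate
# (3.3) for the ball-bearing data, baseline F(x) = 1 - exp(-theta*x).
data = [17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96,
        54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64,
        105.12, 105.84, 127.92, 128.04, 173.40]
theta, n = 0.0314, len(data)

T = -sum(math.log(1.0 - math.exp(-theta * x)) for x in data)
alpha_mle = n / T
alpha_bq = (n - 2 * 0.5 - 1) / T   # (n - 2c1 - 1)/T with c1 = 0.5

assert n == 23
assert 4.8 < alpha_mle < 5.3
assert alpha_bq < alpha_mle        # the Bayes estimate shrinks the MLE
```

The shrinkage of $\hat{\alpha}^{E}_{\mathrm{bq}}$ relative to the MLE mirrors the pattern reported in Table 1(a).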

Figure 6: Estimate of the predictive distribution for the given data set for extended Jeffreys’ prior and conjugate prior.

Tables 1–4 summarize the results of the data analysis. Tables 1(a) and 1(b) present the Bayes estimates of $\alpha$ and the corresponding risks under the extended Jeffreys' prior and the conjugate prior, respectively. It is evident from Table 1(a) that the Bayes estimates under general entropy loss ($\gamma=0.5$ and $\gamma=1$) are better than all the other estimates. It is also evident from Table 1(a) that the estimates decrease, and the corresponding risks increase, as $c_{1}$ increases. In the case of the conjugate prior (Table 1(b)), the estimates under the weighted quadratic and squared-log error loss functions appear to be better. In Table 2, the HPD intervals under the conjugate prior appear to be slightly better than those under the extended Jeffreys' prior with respect to minimum length. Table 3 presents the estimates of a future observation based on the data set; it is observed from the last column of the table that the general entropy loss function at $\gamma=1.5$ gives a quite reasonable estimate. Table 4 shows the Bayesian predictive and HPD predictive intervals for a future observation from the data set. The first row in each cell gives the Bayesian predictive interval, and the second row gives the HPD interval for the future observation. It is apparent that the HPD intervals for the future observation under the conjugate prior are reasonably good.

Table 1
Table 2: HPD intervals of the shape parameter for the data set.
Table 3: Estimated future observation for the data set under three different loss functions.
Table 4: Bayesian predictive and HPD predictive intervals for future observation of the data set.

9. Concluding Remark

In this paper, we have derived the Bayes estimators of the shape parameter of the exponentiated family of distributions under the extended Jeffreys' prior as well as the conjugate prior, using three different loss functions. Although the extended Jeffreys' prior gives the opportunity to cover a wide spectrum of priors, at times the conjugate prior gives better Bayes estimates and HPD intervals for the parameter and for future observations.

Acknowledgment

The authors would like to thank the referee for a very careful reading of the manuscript and for a number of helpful suggestions that improved an earlier version of the manuscript.

References

1. R. D. Gupta and D. Kundu, “Generalized exponential distributions,” Australian and New Zealand Journal of Statistics, vol. 41, no. 2, pp. 173–188, 1999.
2. R. D. Gupta and D. Kundu, “Exponentiated exponential family: an alternative to gamma and Weibull distributions,” Biometrical Journal, vol. 43, no. 1, pp. 117–130, 2001.
3. R. D. Gupta and D. Kundu, “Generalized exponential distribution: different method of estimations,” Journal of Statistical Computation and Simulation, vol. 69, no. 4, pp. 315–337, 2001.
4. R. D. Gupta and D. Kundu, “Discriminating between Weibull and generalized exponential distributions,” Computational Statistics & Data Analysis, vol. 43, no. 2, pp. 179–196, 2003.
5. M. Z. Raqab, “Inferences for generalized exponential distribution based on record statistics,” Journal of Statistical Planning and Inference, vol. 104, no. 2, pp. 339–350, 2002.
6. M. Z. Raqab and M. Ahsanullah, “Estimation of the location and scale parameters of generalized exponential distribution based on order statistics,” Journal of Statistical Computation and Simulation, vol. 69, no. 2, pp. 109–123, 2001.
7. G. Zheng, “On the Fisher information matrix in type II censored data from the exponentiated exponential family,” Biometrical Journal, vol. 44, no. 3, pp. 353–357, 2002.
8. M. Z. Raqab and M. T. Madi, “Bayesian inference for the generalized exponential distribution,” Journal of Statistical Computation and Simulation, vol. 75, no. 10, pp. 841–852, 2005.
9. A. A. Alamm, M. Z. Raqab, and M. T. Madi, “Bayesian prediction intervals for future order statistics from the generalized exponential distribution,” Journal of the Iranian Statistical Society, vol. 6, no. 1, pp. 17–30, 2007.
10. R. Singh, S. K. Singh, U. Singh, and G. P. Singh, “Bayes estimator of generalized-exponential parameters under Linex loss function using Lindley's approximation,” Data Science Journal, vol. 7, pp. 65–75, 2008.
11. S. Dey, “Bayesian estimation of the shape parameter of the generalised exponential distribution under different loss functions,” Pakistan Journal of Statistics and Operations Research, vol. 6, no. 2, pp. 163–174, 2010.
12. G. S. Mudholkar and D. K. Srivastava, “Exponentiated Weibull family for analyzing bathtub failure-rate data,” IEEE Transactions on Reliability, vol. 42, no. 2, pp. 299–302, 1993.
13. M. M. Nassar and F. H. Eissa, “Bayesian estimation for the exponentiated Weibull model,” Communications in Statistics. Theory and Methods, vol. 33, no. 10, pp. 2343–2362, 2004.
14. U. Singh, P. K. Gupta, and S. K. Upadhyay, “Estimation of parameters for exponentiated-Weibull family under type-II censoring scheme,” Computational Statistics and Data Analysis, vol. 15, pp. 2065–2077, 2005.
15. R. D. Gupta and R. C. Gupta, “Analyzing skewed data by power normal model,” TEST, vol. 17, no. 1, pp. 197–210, 2008.
16. H. S. Al-Kutubi, On comparison estimation procedures for parameter and survival function exponential distribution using simulation, Ph.D. thesis, Baghdad University, Baghdad, Iraq, 2005.
17. L. Brown, “Inadmissibility of the usual estimators of scale parameters in problems with unknown location and scale parameters,” Annals of Mathematical Statistics, vol. 39, pp. 29–48, 1968.
18. R. Calabria and G. Pulcini, “Point estimation under asymmetric loss functions for left-truncated exponential samples,” Communications in Statistics. Theory and Methods, vol. 25, no. 3, pp. 585–600, 1996.
19. C. K. Podder and M. K. Roy, “Bayesian estimation of the parameter of Maxwell distribution under MLINEX loss function,” Journal of Statistical Studies, vol. 23, pp. 11–16, 2003.
20. D. V. Lindley, Introduction to Probability and Statistics from a Bayesian View Point, Cambridge University Press, Cambridge, UK, 1969.
21. A. M. Awad and M. K. Gharraf, “Estimation of P[Y&lt;X] in the Burr case: a comparative study,” Communications in Statistics—Simulation and Computation, vol. 15, no. 2, pp. 389–403, 1986.
22. J. F. Lawless, Statistical Models and Methods for Lifetime Data, John Wiley and Sons, New York, NY, USA, 1982.