Abstract

The Bayes estimators of the shape parameter of the exponentiated family of distributions have been derived under an extension of Jeffreys' noninformative prior as well as a conjugate prior, for three scale-invariant loss functions: the weighted quadratic loss function, the squared-log error loss function, and the general entropy loss function. The risk functions of these estimators have been studied. We have also considered the highest posterior density (HPD) intervals for the parameter and the equal-tail and HPD prediction intervals for a future observation. Finally, we analyze one data set for illustration.

1. Introduction

Let $X$ be a random variable whose cumulative distribution function (cdf) and probability density function (pdf) are given by
\[ G(x;\alpha,\theta) = F^{\alpha}(x;\theta), \quad (1.1) \]
\[ g(x;\alpha,\theta) = \alpha F^{\alpha-1}(x;\theta) f(x;\theta), \quad (1.2) \]
respectively. Here $F(\cdot;\theta)$ is the continuous baseline distribution function with corresponding probability density function $f(x;\theta)$, $\theta$ may be vector valued, and $\alpha$ is a positive shape parameter. Then $X$ is said to belong to the exponentiated family of distributions (abbreviated as EFD), also called the proportional reversed hazard family.

If the baseline distribution is exponential, the resulting model is known in the literature as the generalized exponential (GE) distribution. In recent years, an impressive array of papers has been devoted to studying the behavior of the parameters of the generalized exponential distribution in both the classical and Bayesian frameworks; a very good summary of this work can be found in Gupta and Kundu [1–4], Raqab [5], Raqab and Ahsanullah [6], Zheng [7], Raqab and Madi [8], Alamm et al. [9], Singh et al. [10], Dey [11], and the references cited therein for some recent developments on the GE distribution. If the baseline distribution is Weibull, the model is called the exponentiated Weibull distribution; Mudholkar and Srivastava [12], Nassar and Eissa [13], and Singh et al. [14] have studied this distribution.

In this paper, we assume that $F(x;\theta) = F(x)$ is known but the shape parameter $\alpha$ is unknown. The cdf and pdf then become
\[ G(x;\alpha) = F^{\alpha}(x), \quad (1.3) \]
\[ g(x;\alpha) = \alpha F^{\alpha-1}(x) f(x), \quad (1.4) \]
respectively. If $F(x)$ is symmetric, then $G(x;\alpha)$ is a skewed distribution for $\alpha \neq 1$; hence $\alpha$ can be regarded as a skewness parameter. Gupta and Gupta [15] have shown that positively skewed data can be analyzed very well with a normal baseline distribution. Moreover, $\alpha$ is the parameter of the proportional reversed hazard model in lifetime data analysis. Because of these various roles, we are interested in finding the Bayes estimators of $\alpha$ and studying their performance under different loss functions and priors. Figure 1 shows the shapes of (i) the exponentiated exponential distribution with $F(x) = 1 - e^{-x}$, (ii) the exponentiated Rayleigh distribution with $F(x) = 1 - e^{-x^{2}}$, and (iii) the exponentiated lognormal distribution with $F(x) = \Phi(\ln x)$, each for $\alpha = 0.5$ and $\alpha = 2$.
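As a concrete numerical illustration of (1.3) and (1.4), the following Python sketch (ours, for illustration only; the paper contains no code) evaluates the exponentiated cdf and pdf for an exponential baseline $F(x) = 1 - e^{-x}$ and reports where the density peaks for $\alpha = 0.5$ and $\alpha = 2$:

```python
# A minimal sketch (not from the paper) of the exponentiated family with an
# exponential baseline, evaluating G(x; alpha) = F(x)^alpha and
# g(x; alpha) = alpha * F(x)^(alpha - 1) * f(x) on a grid.
import numpy as np

def efd_cdf(x, alpha, F):
    """G(x; alpha) = F(x)^alpha, cf. (1.3)."""
    return F(x) ** alpha

def efd_pdf(x, alpha, F, f):
    """g(x; alpha) = alpha * F(x)^(alpha - 1) * f(x), cf. (1.4)."""
    return alpha * F(x) ** (alpha - 1.0) * f(x)

F = lambda x: 1.0 - np.exp(-x)   # exponential baseline cdf
f = lambda x: np.exp(-x)         # exponential baseline pdf

x = np.linspace(0.01, 5.0, 500)
for alpha in (0.5, 2.0):
    g = efd_pdf(x, alpha, F, f)
    print(f"alpha = {alpha}: density peaks near x = {x[np.argmax(g)]:.3f}")
```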

The paper is organized as follows. Section 2 gives a brief description of the prior distributions and loss functions. The Bayes estimators and their associated risk functions are provided in Sections 3 and 4, respectively. Section 5 presents the highest posterior density (HPD) interval for $\alpha$. Section 6 is devoted to the predictive distributions and the equal-tail Bayesian prediction interval for a future observation. Section 7 deals with the Bayes predictive estimator and the HPD prediction interval for a future observation. Section 8 presents an application to a real-life data set. The paper ends with a concluding remark in Section 9.

2. Prior and Loss Functions

Bayesian inference requires an appropriate choice of prior(s) for the parameter(s). From the Bayesian viewpoint, there is no clear-cut way to conclude that one prior is better than another; very often priors are chosen according to one's subjective knowledge and beliefs. If one has adequate information about the parameter(s), it is better to choose informative prior(s); otherwise, noninformative prior(s) are preferable. In this paper we consider both types: the extended Jeffreys' prior and the natural conjugate prior.

The extended Jeffreys' prior proposed by Al-Kutubi [16] is given as
\[ \pi_{1}(\alpha) \propto \frac{1}{\alpha^{2c_{1}}}, \quad \alpha > 0,\ c_{1} > 0. \quad (2.1) \]
The conjugate prior in this case is the gamma prior, with probability density function
\[ \pi_{2}(\alpha) = \frac{b^{a}}{\Gamma(a)}\, \alpha^{a-1} e^{-b\alpha}, \quad \alpha, a, b > 0. \quad (2.2) \]
With the above priors, we use three different loss functions for the model (1.1); a numerical check of the resulting estimators is sketched after this list.

(1) The first is the weighted quadratic loss function
\[ L_{1}(\alpha,\delta) = \left( \frac{\alpha - \delta}{\alpha} \right)^{2}, \quad (2.3) \]
where $\delta$ is a decision rule for estimating $\alpha$. Here $\delta$ is chosen so that
\[ \int_{0}^{\infty} \left( \frac{\alpha - \delta}{\alpha} \right)^{2} \pi(\alpha \mid x)\, d\alpha \quad (2.4) \]
is minimized. This is equivalent to minimizing
\[ \int_{0}^{\infty} (\alpha - \delta)^{2}\, q(\alpha \mid x)\, d\alpha, \quad \text{with } q(\alpha \mid x) = \frac{(1/\alpha^{2})\, \pi(\alpha \mid x)}{\int_{0}^{\infty} (1/\alpha^{2})\, \pi(\alpha \mid x)\, d\alpha}. \quad (2.5) \]
Hence
\[ \hat{\alpha}_{\mathrm{bq}} = \delta = E_{q}(\alpha \mid x). \quad (2.6) \]

(2) The second is the squared-log error loss function proposed by Brown [17], defined as
\[ L_{2}(\alpha,\delta) = (\ln \delta - \ln \alpha)^{2} = \left( \ln \frac{\delta}{\alpha} \right)^{2}. \quad (2.7) \]
This loss function is balanced, with $L_{2}(\alpha,\delta) \to \infty$ as $\delta \to 0$ or $\infty$. A balanced loss function takes both the error of estimation and the goodness of fit into account, whereas an unbalanced loss function considers only the error of estimation. This loss is convex for $\delta/\alpha \le e$ and concave otherwise, but its risk function has a unique minimum with respect to $\delta$. The Bayes estimator of $\alpha$ under the squared-log error loss function is
\[ \hat{\alpha}_{\mathrm{bsl}} = \exp\left[ E(\ln \alpha \mid x) \right], \quad (2.8) \]
where $E(\cdot)$ denotes the posterior expectation.

(3) The third is a particular type of asymmetric loss function, the general entropy loss function proposed by Calabria and Pulcini [18] (Podder and Roy [19] call it the modified linear exponential (MLINEX) loss function), given by
\[ L_{3}(\alpha,\delta) = w\left[ \left( \frac{\delta}{\alpha} \right)^{\gamma} - \gamma \ln \frac{\delta}{\alpha} - 1 \right], \quad \gamma \neq 0,\ w > 0. \quad (2.9) \]
If we replace $\ln(\delta/\alpha)$, that is, $\ln \delta - \ln \alpha$, by $\delta - \alpha$, we get the linear exponential (LINEX) loss function $w[e^{\gamma(\delta-\alpha)} - \gamma(\delta-\alpha) - 1]$. Without loss of generality, we take $w = 1$. If $\gamma = 1$, this is the entropy loss function. Under the general entropy loss function, the Bayes estimator of $\alpha$ is
\[ \hat{\alpha}_{\mathrm{bge}} = \left[ E(\alpha^{-\gamma} \mid x) \right]^{-1/\gamma}. \quad (2.10) \]
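Each of (2.6), (2.8), and (2.10) is a posterior expectation, so the three estimators can be approximated from posterior draws before any closed forms are derived. The following Monte Carlo sketch is ours (the function names and the toy gamma posterior are illustrative assumptions); it uses the fact that (2.6) reduces to $E(\alpha^{-1} \mid x)/E(\alpha^{-2} \mid x)$:

```python
# A hedged Monte Carlo sketch (not from the paper): given draws from any
# posterior pi(alpha | x), approximate the three Bayes estimators of Section 2.
import numpy as np

def bayes_estimates(alpha_draws, gamma=0.5):
    a = np.asarray(alpha_draws)
    # weighted quadratic loss (2.6): delta = E(1/alpha) / E(1/alpha^2)
    bq = np.mean(1.0 / a) / np.mean(1.0 / a**2)
    # squared-log error loss (2.8): delta = exp(E[ln alpha])
    bsl = np.exp(np.mean(np.log(a)))
    # general entropy loss (2.10): delta = (E[alpha^(-gamma)])^(-1/gamma)
    bge = np.mean(a ** (-gamma)) ** (-1.0 / gamma)
    return bq, bsl, bge

rng = np.random.default_rng(1)
draws = rng.gamma(shape=20.0, scale=0.25, size=100_000)  # toy posterior
print(bayes_estimates(draws, gamma=0.5))
```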

3. Estimation of Parameter

Let us consider a random sample $x = (x_{1}, x_{2}, \ldots, x_{n})$ of size $n$ from the exponentiated family of distributions. The likelihood function of $\alpha$ for the given sample is
\[ L(\alpha \mid x) = \alpha^{n} \prod_{i=1}^{n} F^{\alpha-1}(x_{i})\, f(x_{i}) = \alpha^{n}\, e^{\alpha \sum_{i=1}^{n} \ln F(x_{i})} \prod_{i=1}^{n} \frac{f(x_{i})}{F(x_{i})}. \quad (3.1) \]
Hence the maximum likelihood estimator (MLE) of $\alpha$ is $\hat{\alpha}_{\mathrm{mle}} = n/T$, with $T = -\sum_{i=1}^{n} \ln F(x_{i})$.
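A quick sanity check of $\hat{\alpha}_{\mathrm{mle}} = n/T$ can be run on simulated data. The sketch below is ours (the sample size, seed, and true $\alpha$ are arbitrary); it draws from $G = F^{\alpha}$ by inverting the cdf, $x = F^{-1}(u^{1/\alpha})$:

```python
# A small sketch of the MLE alpha_hat = n / T with T = -sum(log F(x_i)),
# illustrated for an exponential baseline on simulated data.
import numpy as np

def alpha_mle(x, F):
    T = -np.sum(np.log(F(np.asarray(x))))
    return len(x) / T, T

rng = np.random.default_rng(7)
alpha_true = 2.0
u = rng.uniform(size=200)
x = -np.log(1.0 - u ** (1.0 / alpha_true))  # inverse-cdf draw from G = F^alpha
F = lambda t: 1.0 - np.exp(-t)
print(alpha_mle(x, F))                      # first entry should be close to 2.0
```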

3.1. Estimation under the Assumption of Extended Jeffreys’ Prior

Combining the prior distribution in (2.1) with the likelihood function, the posterior density of $\alpha$ is derived as
\[ \pi_{1}(\alpha \mid x) = \frac{T^{n-2c_{1}+1}}{\Gamma(n-2c_{1}+1)}\, e^{-\alpha T} \alpha^{n-2c_{1}}, \quad \alpha > 0, \quad (3.2) \]
which is a gamma distribution $G(n-2c_{1}+1,\, T)$.

For the derivations in this and subsequent sections, we use the expressions $\Gamma(p) = \int_{0}^{\infty} x^{p-1} e^{-x}\, dx$, $\Gamma'(p) = \int_{0}^{\infty} \ln x \cdot x^{p-1} e^{-x}\, dx$, $\Gamma''(p) = \int_{0}^{\infty} (\ln x)^{2}\, x^{p-1} e^{-x}\, dx$, the digamma function $\psi(p) = d \ln \Gamma(p)/dp = \Gamma'(p)/\Gamma(p)$, and the trigamma function $\psi'(p) = d^{2} \ln \Gamma(p)/dp^{2} = \left( \Gamma''(p)\, \Gamma(p) - [\Gamma'(p)]^{2} \right)/\Gamma^{2}(p)$.

Using the extended Jeffreys' prior (2.1), the Bayes estimators of $\alpha$ under the weighted quadratic, squared-log error, and general entropy loss functions are derived as
\[ \hat{\alpha}^{E}_{\mathrm{bq}} = \frac{n - 2c_{1} - 1}{T}, \qquad \hat{\alpha}^{E}_{\mathrm{bsl}} = \frac{e^{\psi(n-2c_{1}+1)}}{T}, \qquad \hat{\alpha}^{E}_{\mathrm{bge}} = \left[ \frac{\Gamma(n-2c_{1}+1)}{\Gamma(n-2c_{1}+1-\gamma)} \right]^{1/\gamma} \cdot \frac{1}{T} = \frac{k}{T}, \quad (3.3) \]
respectively, with $k = \left[ \Gamma(n-2c_{1}+1)/\Gamma(n-2c_{1}+1-\gamma) \right]^{1/\gamma}$.

Remark 3.1. We get Jeffreys' noninformative prior for $c_{1} = 1/2$ and Hartigan's noninformative prior for $c_{1} = 3/2$.
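The closed forms in (3.3) are straightforward to evaluate. The following sketch (ours; the inputs $n$, $T$, $c_{1}$, $\gamma$ are placeholders) uses scipy's digamma and log-gamma functions, the latter to avoid overflow in the gamma ratio:

```python
# A sketch of the closed-form estimators (3.3) under the extended Jeffreys'
# prior; the posterior is Gamma(nu, T) with nu = n - 2*c1 + 1.
import numpy as np
from scipy.special import digamma, gammaln

def bayes_extended_jeffreys(n, T, c1, gamma):
    nu = n - 2.0 * c1 + 1.0
    bq = (nu - 2.0) / T                  # = (n - 2*c1 - 1) / T
    bsl = np.exp(digamma(nu)) / T
    k = np.exp((gammaln(nu) - gammaln(nu - gamma)) / gamma)
    bge = k / T
    return bq, bsl, bge

print(bayes_extended_jeffreys(n=30, T=6.0, c1=0.5, gamma=0.5))
```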

3.2. Estimation under the Assumption of Conjugate Prior

Combining the prior distribution in (2.2) with the likelihood function, the posterior density of $\alpha$ is derived as
\[ \pi_{2}(\alpha \mid x) = \frac{T_{b}^{\,n+a}}{\Gamma(n+a)}\, e^{-\alpha T_{b}} \alpha^{n+a-1}, \quad \alpha > 0, \quad (3.4) \]
which is a gamma distribution $G(n+a,\, T_{b})$ with $T_{b} = T + b$.

Using the conjugate prior (2.2), the Bayes estimators under the weighted quadratic, squared-log error, and general entropy loss functions are derived as
\[ \hat{\alpha}^{c}_{\mathrm{bq}} = \frac{n + a - 2}{T_{b}}, \quad (3.5) \]
\[ \hat{\alpha}^{c}_{\mathrm{bsl}} = \frac{e^{\psi(n+a)}}{T_{b}}, \quad (3.6) \]
\[ \hat{\alpha}^{c}_{\mathrm{bge}} = \left[ \frac{\Gamma(n+a)}{\Gamma(n+a-\gamma)} \right]^{1/\gamma} \cdot \frac{1}{T_{b}}, \quad (3.7) \]
respectively. Note that the Bayes estimators in (3.5), (3.6), and (3.7) depend on $a$ and $b$, the parameters of the prior distribution of $\alpha$. These parameters can be estimated by means of an empirical Bayes procedure (see Lindley [20] and Awad and Gharraf [21]). Given the random sample $x = (x_{1}, x_{2}, \ldots, x_{n})$, the likelihood function of $\alpha$ is a gamma density with parameters $(n+1, T)$, so it is proposed to estimate the prior parameters $a$ and $b$ from the sample by $n+1$ and $T$, respectively. Therefore, (3.5), (3.6), and (3.7) become
\[ \hat{\alpha}^{c}_{\mathrm{bq}} = \frac{2n - 1}{2T}, \quad (3.8) \]
\[ \hat{\alpha}^{c}_{\mathrm{bsl}} = \frac{e^{\psi(2n+1)}}{2T}, \quad (3.9) \]
\[ \hat{\alpha}^{c}_{\mathrm{bge}} = \left[ \frac{\Gamma(2n+1)}{\Gamma(2n+1-\gamma)} \right]^{1/\gamma} \cdot \frac{1}{2T} = \frac{K_{1}}{2T}, \quad \text{where } K_{1} = \left[ \frac{\Gamma(2n+1)}{\Gamma(2n+1-\gamma)} \right]^{1/\gamma}, \quad (3.10) \]
respectively.
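The empirical Bayes versions (3.8)–(3.10) follow the same pattern, with the posterior becoming $G(2n+1,\, 2T)$. A companion sketch to the one in Section 3.1 (ours; placeholder inputs):

```python
# A sketch of the empirical Bayes estimators (3.8)-(3.10), where the prior
# parameters are replaced by a = n + 1 and b = T.
import numpy as np
from scipy.special import digamma, gammaln

def bayes_empirical_conjugate(n, T, gamma):
    bq = (2.0 * n - 1.0) / (2.0 * T)
    bsl = np.exp(digamma(2.0 * n + 1.0)) / (2.0 * T)
    K1 = np.exp((gammaln(2.0 * n + 1.0) - gammaln(2.0 * n + 1.0 - gamma)) / gamma)
    bge = K1 / (2.0 * T)
    return bq, bsl, bge

print(bayes_empirical_conjugate(n=30, T=6.0, gamma=0.5))
```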

4. Risks of the Bayes Estimators

Since $X$ follows the exponentiated family of distributions with parameter $\alpha$, $T = -\sum_{i=1}^{n} \ln F(x_{i})$ is distributed as $G(n, \alpha)$, with probability density function
\[ h_{T}(t) = \frac{\alpha^{n}}{\Gamma(n)}\, e^{-\alpha t} t^{n-1}, \quad t > 0. \quad (4.1) \]
Therefore,
\[ E_{h}(T^{-\gamma}) = \int_{0}^{\infty} t^{-\gamma} h_{T}(t)\, dt = \frac{\alpha^{n}}{\Gamma(n)} \int_{0}^{\infty} e^{-\alpha t} t^{n-\gamma-1}\, dt = \frac{\Gamma(n-\gamma)}{\Gamma(n)}\, \alpha^{\gamma}. \quad (4.2) \]
The risk function of $\hat{\alpha}^{E}_{\mathrm{bq}}$ is
\[ R\!\left( \hat{\alpha}^{E}_{\mathrm{bq}} \right) = E_{h}\!\left[ L_{1}\!\left( \alpha, \hat{\alpha}^{E}_{\mathrm{bq}} \right) \right] = \frac{1}{\alpha^{2}} \left[ \alpha^{2} - 2\alpha (n-2c_{1}-1)\, E_{h}\!\left( \frac{1}{T} \right) + (n-2c_{1}-1)^{2}\, E_{h}\!\left( \frac{1}{T^{2}} \right) \right] \]
\[ = \frac{1}{\alpha^{2}} \left[ \alpha^{2} - 2\alpha (n-2c_{1}-1) \frac{\alpha}{n-1} + (n-2c_{1}-1)^{2} \frac{\alpha^{2}}{(n-1)(n-2)} \right] = 1 - \frac{2(n-2c_{1}-1)}{n-1} + \frac{(n-2c_{1}-1)^{2}}{(n-1)(n-2)}. \quad (4.3) \]
Similarly, the risk functions of $\hat{\alpha}^{E}_{\mathrm{bsl}}$ and $\hat{\alpha}^{E}_{\mathrm{bge}}$ under the squared-log error and general entropy loss functions are
\[ R\!\left( \hat{\alpha}^{E}_{\mathrm{bsl}} \right) = \psi'(n) + \left[ \psi(n-2c_{1}+1) - \psi(n) \right]^{2}, \quad (4.4) \]
\[ R\!\left( \hat{\alpha}^{E}_{\mathrm{bge}} \right) = k^{\gamma}\, \frac{\Gamma(n-\gamma)}{\Gamma(n)} + \gamma \left[ \psi(n) - \ln k \right] - 1, \quad (4.5) \]
respectively. The risk functions of $\hat{\alpha}^{c}_{\mathrm{bq}}$, $\hat{\alpha}^{c}_{\mathrm{bsl}}$, and $\hat{\alpha}^{c}_{\mathrm{bge}}$, assuming the conjugate prior, are
\[ R\!\left( \hat{\alpha}^{c}_{\mathrm{bq}} \right) = 1 - \frac{2n-1}{n-1} + \frac{(2n-1)^{2}}{4(n-1)(n-2)}, \quad (4.6) \]
\[ R\!\left( \hat{\alpha}^{c}_{\mathrm{bsl}} \right) = \psi'(n) + \left[ \psi(2n+1) - \psi(n) - \ln 2 \right]^{2}, \quad (4.7) \]
\[ R\!\left( \hat{\alpha}^{c}_{\mathrm{bge}} \right) = \left( \frac{K_{1}}{2} \right)^{\gamma} \frac{\Gamma(n-\gamma)}{\Gamma(n)} + \gamma \left[ \psi(n) - \ln K_{1} + \ln 2 \right] - 1, \quad (4.8) \]
respectively.

The risk functions of $\hat{\alpha}_{\mathrm{mle}}$ under the weighted quadratic, squared-log error, and general entropy loss functions are
\[ R_{q}\!\left( \hat{\alpha}_{\mathrm{mle}} \right) = 1 - \frac{2n}{n-1} + \frac{n^{2}}{(n-1)(n-2)}, \quad (4.9) \]
\[ R_{sl}\!\left( \hat{\alpha}_{\mathrm{mle}} \right) = \psi'(n) + \left[ \psi(n) - \ln n \right]^{2}, \quad (4.10) \]
\[ R_{ge}\!\left( \hat{\alpha}_{\mathrm{mle}} \right) = n^{\gamma}\, \frac{\Gamma(n-\gamma)}{\Gamma(n)} + \gamma \left[ \psi(n) - \ln n \right] - 1, \quad (4.11) \]
respectively.
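Since all of these risks are free of $\alpha$, they can be tabulated directly. The following sketch (ours; placeholder inputs) evaluates (4.3)–(4.5) and (4.9)–(4.11) so the Bayes estimators and the MLE can be compared numerically:

```python
# A sketch evaluating the risk expressions of Section 4; polygamma(1, n) is
# the trigamma function psi'(n), and gammaln keeps the gamma ratios stable.
import numpy as np
from scipy.special import digamma, polygamma, gammaln

def risks_extended_jeffreys(n, c1, gamma):
    m = n - 2.0 * c1 - 1.0
    r_bq = 1.0 - 2.0 * m / (n - 1.0) + m**2 / ((n - 1.0) * (n - 2.0))     # (4.3)
    r_bsl = polygamma(1, n) + (digamma(n - 2.0 * c1 + 1.0) - digamma(n))**2  # (4.4)
    k = np.exp((gammaln(n - 2 * c1 + 1) - gammaln(n - 2 * c1 + 1 - gamma)) / gamma)
    r_bge = (k**gamma * np.exp(gammaln(n - gamma) - gammaln(n))
             + gamma * (digamma(n) - np.log(k)) - 1.0)                     # (4.5)
    return r_bq, r_bsl, r_bge

def risks_mle(n, gamma):
    r_q = 1.0 - 2.0 * n / (n - 1.0) + n**2 / ((n - 1.0) * (n - 2.0))      # (4.9)
    r_sl = polygamma(1, n) + (digamma(n) - np.log(n))**2                  # (4.10)
    r_ge = (n**gamma * np.exp(gammaln(n - gamma) - gammaln(n))
            + gamma * (digamma(n) - np.log(n)) - 1.0)                     # (4.11)
    return r_q, r_sl, r_ge

print(risks_extended_jeffreys(n=30, c1=0.5, gamma=0.5))
print(risks_mle(n=30, gamma=0.5))
```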

The estimators developed in Section 3 are studied here on the basis of their risks under five settings: (a) weighted quadratic loss, (b) squared-log error loss, (c) general entropy loss for $\gamma = 0.5$, (d) general entropy loss for $\gamma = 1$, and (e) general entropy loss for $\gamma = 1.5$. Risk functions of the proposed estimators are shown in Figures 2–5. The thick lines in each figure show the risks of the Bayes estimators under the extended Jeffreys' prior and the conjugate prior, and the dotted lines show the risks of the MLE under the corresponding loss functions. In Figure 2, the risk functions are plotted for all the loss functions under the extended Jeffreys' prior for different values of $c_{1}$ and for $n = 30$. The risks increase as $c_{1}$ increases. Risks under the general entropy loss with $\gamma < 1$ are ordinarily smaller than those under the weighted quadratic and squared-log error losses. For small values of $c_{1}$, the risks of the Bayes estimators are lower than those of the MLE for each loss function considered. The Bayes estimators perform better over certain ranges of $c_{1}$, depending on the loss function: for example, the risk of $\hat{\alpha}^{E}_{\mathrm{bq}}$ is smaller for $c_{1} < 1.5$ (approximately), whereas that of $\hat{\alpha}^{E}_{\mathrm{bsl}}$ is smaller for $c_{1} < 0.8$ (approximately). Figures 3 and 4 show the risks for different values of $n$ with $c_{1} = 0.5$ and $c_{1} = 2$, respectively. We find that the risks decrease as $n$ increases, for all values of $c_{1}$.

Under the conjugate prior, the risks are smaller for the squared-log error and weighted quadratic losses than for the general entropy loss, and only for these two losses are the risks of the Bayes estimators smaller than that of the MLE for small $n$ (Figure 5). The risks under the conjugate prior are generally higher than those under the extended Jeffreys' prior.

5. Highest Posterior Density Intervals for 𝛼

In this section our objective is to provide a highest posterior density (HPD) interval for the unknown parameter $\alpha$ of the model (1.2). The HPD interval is one of the most useful tools for measuring posterior uncertainty: it includes the more probable values of the parameter and excludes the less probable ones. Since the posterior density (3.2) is unimodal, the $100(1-\eta)\%$ HPD interval $[H_{E1}, H_{E2}]$ for $\alpha$ must satisfy
\[ \int_{H_{E1}}^{H_{E2}} \pi_{1}(\alpha \mid x)\, d\alpha = 1 - \eta, \quad (5.1) \]
that is,
\[ \Gamma(n-2c_{1}+1,\, H_{E2}T) - \Gamma(n-2c_{1}+1,\, H_{E1}T) = 1 - \eta, \quad (5.2) \]
where $\Gamma(p, t) = \frac{1}{\Gamma(p)} \int_{0}^{t} x^{p-1} e^{-x}\, dx$ denotes the incomplete gamma function ratio, and
\[ \pi_{1}(H_{E1} \mid x) = \pi_{1}(H_{E2} \mid x), \quad (5.3) \]
that is,
\[ \left[ \frac{H_{E1}}{H_{E2}} \right]^{n-2c_{1}} = e^{T(H_{E1} - H_{E2})}, \quad (5.4) \]
simultaneously. The HPD interval $[H_{E1}, H_{E2}]$ is the simultaneous solution of (5.2) and (5.4).

Similarly, the posterior density (3.4) is unimodal, and the $100(1-\eta)\%$ HPD interval $[H_{c1}, H_{c2}]$ for $\alpha$ must satisfy
\[ \int_{H_{c1}}^{H_{c2}} \pi_{2}(\alpha \mid x)\, d\alpha = 1 - \eta, \quad (5.5) \]
that is,
\[ \Gamma(n+a,\, H_{c2}T_{b}) - \Gamma(n+a,\, H_{c1}T_{b}) = 1 - \eta, \quad (5.6) \]
and
\[ \pi_{2}(H_{c1} \mid x) = \pi_{2}(H_{c2} \mid x), \quad (5.7) \]
that is,
\[ \left[ \frac{H_{c1}}{H_{c2}} \right]^{n+a-1} = e^{T_{b}(H_{c1} - H_{c2})}, \quad (5.8) \]
simultaneously. Therefore, the HPD interval $[H_{c1}, H_{c2}]$ is the simultaneous solution of (5.6) and (5.8). If $a$ and $b$ are not known, then substituting their empirical Bayes estimates, we get the equations
\[ \Gamma(2n+1,\, 2H_{c2}T) - \Gamma(2n+1,\, 2H_{c1}T) = 1 - \eta, \quad (5.9) \]
\[ \left[ \frac{H_{c1}}{H_{c2}} \right]^{2n} = e^{2T(H_{c1} - H_{c2})}, \quad (5.10) \]
respectively.
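The pairs of equations above have no closed-form solution, but a standard root finder handles them. The following sketch (ours; the starting values and the inputs $n$, $T$, $c_{1}$ are illustrative) solves (5.2) and (5.4) with scipy, starting from the equal-tail interval:

```python
# A numerical sketch for the HPD equations (5.2) and (5.4); the posterior is
# Gamma(nu, T) with nu = n - 2*c1 + 1, and gammainc is the regularized
# (lower) incomplete gamma function, matching the Gamma(p, t) ratio of (5.2).
import numpy as np
from scipy.special import gammainc
from scipy.optimize import fsolve
from scipy.stats import gamma as gamma_dist

def hpd_alpha(n, T, c1, eta=0.05):
    nu = n - 2.0 * c1 + 1.0
    def equations(h):
        h1, h2 = h
        cover = gammainc(nu, h2 * T) - gammainc(nu, h1 * T) - (1.0 - eta)  # (5.2)
        # equal posterior density at the endpoints, i.e. (5.4) on the log scale
        dens = (nu - 1.0) * (np.log(h1) - np.log(h2)) - T * (h1 - h2)
        return [cover, dens]
    h0 = gamma_dist.ppf([eta / 2, 1 - eta / 2], a=nu, scale=1.0 / T)  # start
    return fsolve(equations, h0)

print(hpd_alpha(n=23, T=5.0, c1=0.5))
```

The conjugate-prior system (5.6) and (5.8), or its empirical Bayes form (5.9) and (5.10), can be solved by the same routine with $\nu = n + a$ and $T$ replaced by $T_{b}$ (respectively $2n+1$ and $2T$).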

6. Predictive Distribution

In this section our objective is to obtain the posterior predictive density of a future observation based on the current observations, to derive the equal-tail Bayesian prediction interval for the future observation, and to compare this interval with the frequentist prediction interval. The posterior predictive distribution of $y = x_{n+1}$ given $x = (x_{1}, x_{2}, \ldots, x_{n})$ under (3.2) is defined by
\[ \xi_{E}(y \mid x) = \int_{0}^{\infty} \pi_{1}(\alpha \mid x)\, g(y;\alpha)\, d\alpha = \frac{n-2c_{1}+1}{T} \cdot \frac{1}{\left[ 1 - \ln F(y)/T \right]^{n-2c_{1}+2}} \cdot \frac{f(y)}{F(y)}. \quad (6.1) \]
A $100(1-\eta)\%$ equal-tail prediction interval $[y_{E1}, y_{E2}]$ is the solution of
\[ \int_{0}^{y_{E1}} \xi_{E}(y \mid x)\, dy = \int_{y_{E2}}^{\infty} \xi_{E}(y \mid x)\, dy = \frac{\eta}{2}. \quad (6.2) \]
Using (6.1), we get (after simplification)
\[ y_{E1} = F^{-1}\!\left[ e^{T\{1 - (\eta/2)^{-1/(n-2c_{1}+1)}\}} \right], \qquad y_{E2} = F^{-1}\!\left[ e^{T\{1 - (1-\eta/2)^{-1/(n-2c_{1}+1)}\}} \right]. \quad (6.3) \]
The posterior predictive distribution of $y = x_{n+1}$ given $x$ under (3.4) is defined by
\[ \xi_{c}(y \mid x) = \int_{0}^{\infty} \pi_{2}(\alpha \mid x)\, g(y;\alpha)\, d\alpha = \frac{n+a}{T_{b}} \cdot \frac{1}{\left[ 1 - \ln F(y)/T_{b} \right]^{n+a+1}} \cdot \frac{f(y)}{F(y)}. \quad (6.4) \]
A $100(1-\eta)\%$ equal-tail prediction interval $[y_{c1}, y_{c2}]$ is the solution of
\[ \int_{0}^{y_{c1}} \xi_{c}(y \mid x)\, dy = \int_{y_{c2}}^{\infty} \xi_{c}(y \mid x)\, dy = \frac{\eta}{2}. \quad (6.5) \]
Using (6.4), we get (after simplification)
\[ y_{c1} = F^{-1}\!\left[ e^{T_{b}\{1 - (\eta/2)^{-1/(n+a)}\}} \right], \qquad y_{c2} = F^{-1}\!\left[ e^{T_{b}\{1 - (1-\eta/2)^{-1/(n+a)}\}} \right]. \quad (6.6) \]
If $a$ and $b$ are not known, then substituting their empirical Bayes estimates, we get the prediction limits
\[ y_{c1} = F^{-1}\!\left[ e^{2T\{1 - (\eta/2)^{-1/(2n+1)}\}} \right], \qquad y_{c2} = F^{-1}\!\left[ e^{2T\{1 - (1-\eta/2)^{-1/(2n+1)}\}} \right]. \quad (6.7) \]
For deriving the classical intervals, we note that $Z = -\ln F(y)/T$ is distributed as a beta variate of the second kind with parameters 1 and $n$. The pdf of $Z$ has the form
\[ h(z) = \frac{1}{B(1,n)} \cdot \frac{1}{(1+z)^{n+1}}, \quad z > 0. \quad (6.8) \]
Solving for $(z_{1}, z_{2})$ in
\[ \int_{0}^{z_{1}} h(z)\, dz = \int_{z_{2}}^{\infty} h(z)\, dz = \frac{\eta}{2}, \quad (6.9) \]
and using (6.8) together with the fact that $z$ is decreasing in $y$ (so the lower limit in $z$ corresponds to the upper limit in $y$), we get (after simplification) the classical prediction limits
\[ y_{1} = F^{-1}\!\left[ e^{T\{1 - (\eta/2)^{-1/n}\}} \right], \qquad y_{2} = F^{-1}\!\left[ e^{T\{1 - (1-\eta/2)^{-1/n}\}} \right]. \quad (6.10) \]
It is to be noted that taking $c_{1} = 0.5$ in (6.3) yields the classical $100(1-\eta)\%$ equal-tail prediction interval.
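Because (6.3) is fully explicit once $F^{-1}$ is available, the equal-tail limits are a one-liner for any given baseline. A sketch for the exponential baseline, where $F^{-1}(p) = -\ln(1-p)$ (our code; the inputs are placeholders):

```python
# A sketch of the equal-tail prediction limits (6.3) for the exponential
# baseline F(x) = 1 - exp(-x); c1 = 0.5 recovers the classical interval (6.10).
import numpy as np

def equal_tail_pred(n, T, c1, eta=0.05):
    nu = n - 2.0 * c1 + 1.0
    Finv = lambda p: -np.log1p(-p)   # inverse of F(x) = 1 - e^{-x}
    lo = Finv(np.exp(T * (1.0 - (eta / 2.0) ** (-1.0 / nu))))
    hi = Finv(np.exp(T * (1.0 - (1.0 - eta / 2.0) ** (-1.0 / nu))))
    return lo, hi

print(equal_tail_pred(n=23, T=5.0, c1=0.5))
```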

7. Bayes Predictive Estimator and HPD Prediction Interval for a Future Observation

In this section, we introduce the Bayes predictive estimators of a future observation for the different priors under the above-mentioned loss functions, and later we obtain the HPD prediction intervals for the future observation. The Bayes predictive estimators of $y$ under the weighted quadratic loss function, assuming the extended Jeffreys' prior and the conjugate prior, are
\[ \hat{y}^{*E}_{1} = \frac{\int_{0}^{\infty} y \cdot (1/y^{2})\, \xi_{E}(y \mid x)\, dy}{\int_{0}^{\infty} (1/y^{2})\, \xi_{E}(y \mid x)\, dy}, \quad (7.1) \]
\[ \hat{y}^{*c}_{1} = \frac{\int_{0}^{\infty} y \cdot (1/y^{2})\, \xi_{c}(y \mid x)\, dy}{\int_{0}^{\infty} (1/y^{2})\, \xi_{c}(y \mid x)\, dy}, \quad (7.2) \]
respectively. The Bayes predictive estimators of $y$ under the squared-log error loss function, assuming the extended Jeffreys' prior and the conjugate prior, are
\[ \hat{y}^{*E}_{2} = \exp\left[ E(\ln Y \mid x) \right], \quad \text{with } E(\ln Y \mid x) = \int_{0}^{\infty} \ln y \cdot \xi_{E}(y \mid x)\, dy, \quad (7.3) \]
\[ \hat{y}^{*c}_{2} = \exp\left[ E(\ln Y \mid x) \right], \quad \text{with } E(\ln Y \mid x) = \int_{0}^{\infty} \ln y \cdot \xi_{c}(y \mid x)\, dy, \quad (7.4) \]
respectively. The Bayes predictive estimators of $y$ under the general entropy loss function, assuming the extended Jeffreys' prior and the conjugate prior, are
\[ \hat{y}^{*E}_{3} = \left[ J_{Eg} \right]^{-1/\gamma}, \quad \text{with } J_{Eg} = \int_{0}^{\infty} y^{-\gamma}\, \xi_{E}(y \mid x)\, dy, \quad (7.5) \]
\[ \hat{y}^{*c}_{3} = \left[ J_{cg} \right]^{-1/\gamma}, \quad \text{with } J_{cg} = \int_{0}^{\infty} y^{-\gamma}\, \xi_{c}(y \mid x)\, dy, \quad (7.6) \]
respectively. Closed-form expressions for (7.1)–(7.6) appear intractable, so these quantities must be computed numerically.
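As an illustration of the numerical work involved, the following quadrature sketch (ours; the inputs are placeholders) evaluates the squared-log predictive estimator (7.3) under the extended Jeffreys' predictive density (6.1) for the exponential baseline. The integrals in (7.1) and (7.5) can be treated the same way whenever they converge for the chosen baseline:

```python
# A quadrature sketch for the squared-log predictive estimator (7.3).
import numpy as np
from scipy.integrate import quad

def xi_E(y, n, T, c1):
    """Predictive density (6.1) for the baseline F(y) = 1 - exp(-y)."""
    nu = n - 2.0 * c1 + 1.0
    F = -np.expm1(-y)   # 1 - e^{-y}, computed stably for small y
    return (nu / T) * (1.0 - np.log(F) / T) ** (-(nu + 1.0)) * np.exp(-y) / F

def pred_bsl(n, T, c1):
    # small lower cutoff: the integrand has an integrable spike at y -> 0,
    # and the predictive mass below 1e-9 is negligible for moderate n
    mean_log = quad(lambda y: np.log(y) * xi_E(y, n, T, c1),
                    1e-9, np.inf, limit=200)[0]
    return np.exp(mean_log)

print(pred_bsl(n=23, T=5.0, c1=0.5))
```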

For the unimodal predictive density (6.1), the HPD prediction interval $[h_{E1}, h_{E2}]$ with probability $1-\eta$ for $y$ is the simultaneous solution of
\[ P(h_{E1} < Y < h_{E2}) = 1 - \eta, \quad (7.7) \]
that is,
\[ \frac{1}{\left[ 1 - \ln F(h_{E2})/T \right]^{n-2c_{1}+1}} - \frac{1}{\left[ 1 - \ln F(h_{E1})/T \right]^{n-2c_{1}+1}} = 1 - \eta, \quad (7.8) \]
and
\[ \xi_{E}(h_{E1} \mid x) = \xi_{E}(h_{E2} \mid x), \quad (7.9) \]
that is,
\[ \left[ \frac{1 - \ln F(h_{E2})/T}{1 - \ln F(h_{E1})/T} \right]^{n-2c_{1}+2} = \frac{f(h_{E2})}{F(h_{E2})} \cdot \frac{F(h_{E1})}{f(h_{E1})}. \quad (7.10) \]
Similarly, for the unimodal predictive density (6.4), the HPD prediction interval $[h_{c1}, h_{c2}]$ with probability $1-\eta$ for $y$ is the simultaneous solution of
\[ P(h_{c1} < Y < h_{c2}) = 1 - \eta, \quad (7.11) \]
that is,
\[ \frac{1}{\left[ 1 - \ln F(h_{c2})/T_{b} \right]^{n+a}} - \frac{1}{\left[ 1 - \ln F(h_{c1})/T_{b} \right]^{n+a}} = 1 - \eta, \quad (7.12) \]
and
\[ \xi_{c}(h_{c1} \mid x) = \xi_{c}(h_{c2} \mid x), \quad (7.13) \]
that is,
\[ \left[ \frac{1 - \ln F(h_{c2})/T_{b}}{1 - \ln F(h_{c1})/T_{b}} \right]^{n+a+1} = \frac{f(h_{c2})}{F(h_{c2})} \cdot \frac{F(h_{c1})}{f(h_{c1})}. \quad (7.14) \]
If $a$ and $b$ are not known, then substituting their empirical Bayes estimates, the HPD prediction limits for the future observation solve
\[ \frac{1}{\left[ 1 - \ln F(h_{c2})/2T \right]^{2n+1}} - \frac{1}{\left[ 1 - \ln F(h_{c1})/2T \right]^{2n+1}} = 1 - \eta, \qquad \left[ \frac{1 - \ln F(h_{c2})/2T}{1 - \ln F(h_{c1})/2T} \right]^{2n+2} = \frac{f(h_{c2})}{F(h_{c2})} \cdot \frac{F(h_{c1})}{f(h_{c1})}. \quad (7.15) \]
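The system (7.8) and (7.10) is again amenable to a root finder. The sketch below (ours; the starting values and inputs are illustrative assumptions) solves it for the exponential baseline, working with log-endpoints so the iterates stay positive:

```python
# A root-finding sketch for the HPD prediction equations (7.8) and (7.10),
# for the exponential baseline F(y) = 1 - exp(-y).
import numpy as np
from scipy.optimize import fsolve

def hpd_predictive(n, T, c1, eta=0.05):
    nu = n - 2.0 * c1 + 1.0
    w = lambda y: 1.0 - np.log(-np.expm1(-y)) / T   # 1 - ln F(y)/T
    log_rh = lambda y: -y - np.log(-np.expm1(-y))   # log of f(y)/F(y)
    def equations(p):
        h1, h2 = np.exp(p)                          # log-parametrization keeps h > 0
        cover = w(h2) ** (-nu) - w(h1) ** (-nu) - (1.0 - eta)        # eq. (7.8)
        dens = (nu + 1.0) * (np.log(w(h2)) - np.log(w(h1))) - (
            log_rh(h2) - log_rh(h1))                                 # eq. (7.10)
        return [cover, dens]
    return np.exp(fsolve(equations, np.log([0.5, 5.0])))

print(hpd_predictive(n=23, T=5.0, c1=0.5))
```

The conjugate and empirical Bayes systems (7.12)–(7.15) follow by replacing $\nu$ and $T$ with $n+a$ and $T_{b}$, or $2n+1$ and $2T$, respectively.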

8. Data Analysis

Consider the following data, which arose in tests of the endurance of deep groove ball bearings (Lawless [22, page 228]). The data are the numbers of million revolutions before failure for each of the 23 ball bearings in the life test:
17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40. (8.1)
To study the goodness of fit of the exponentiated exponential model, Gupta and Kundu [2] computed the $\chi^{2}$ statistic as 0.783, with a corresponding $P$ value of 0.376. The estimate of the shape parameter $\alpha$ is 5.2589, and that of $\theta$, the rate of the exponential distribution, is 0.0314. Our aim here is to obtain the Bayes estimates of $\alpha$ for this data set under the three loss functions and the two priors, assuming that the baseline distribution is exponential with $\theta = 0.0314$. At the same time, we are interested in the HPD intervals for the parameter $\alpha$. Further, we wish to predict a future observation based on the given set of observations, together with the HPD prediction intervals for the future observation. Figure 6 shows the estimated predictive distribution.
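For readers who wish to reproduce the point estimates, the following sketch (ours; not part of the original analysis) computes $T$ for the ball-bearing data under the assumed baseline $F(x) = 1 - e^{-0.0314 x}$ and applies (3.3) with $c_{1} = 0.5$ and $\gamma = 0.5$; the MLE should come out close to the value 5.2589 reported above:

```python
# A sketch reproducing the shape-parameter estimates for the ball-bearing data
# with the exponential baseline fixed at theta = 0.0314.
import numpy as np
from scipy.special import digamma, gammaln

data = np.array([17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84,
                 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12,
                 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40])

theta = 0.0314
T = -np.sum(np.log1p(-np.exp(-theta * data)))   # T = -sum log F(x_i)
n = len(data)
print("MLE :", n / T)                           # should be near 5.2589

c1, gam = 0.5, 0.5
nu = n - 2 * c1 + 1                             # extended Jeffreys' posterior shape
print("bq  :", (nu - 2) / T)                    # (3.3), weighted quadratic
print("bsl :", np.exp(digamma(nu)) / T)         # (3.3), squared-log error
print("bge :", np.exp((gammaln(nu) - gammaln(nu - gam)) / gam) / T)  # (3.3)
```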

Tables 1–4 summarize the results of the data analysis. Tables 1(a) and 1(b) present the Bayes estimates of $\alpha$ and the corresponding risks under the extended Jeffreys' prior and the conjugate prior, respectively. It is evident from Table 1(a) that the Bayes estimates under general entropy loss ($\gamma = 0.5$ and $\gamma = 1$) are better than all the other estimates, and that the estimates decrease and the corresponding risks increase as $c_{1}$ increases. In the case of the conjugate prior (Table 1(b)), the estimates under the weighted quadratic and squared-log error loss functions appear better. In Table 2, the HPD intervals under the conjugate prior are slightly better than those under the extended Jeffreys' prior with respect to minimum length. Table 3 presents the estimates of the future observation based on the data set; it is observed from the last column that the general entropy loss function with $\gamma = 1.5$ gives a quite reasonable estimate. Table 4 shows the Bayesian prediction and HPD prediction intervals for the future observation: the first row in each cell gives the Bayesian prediction interval, and the second row gives the HPD interval. It is apparent that the HPD intervals for the future observation under the conjugate prior are reasonably good.

9. Concluding Remark

In this paper, we have derived the Bayes estimators of the shape parameter of the exponentiated family of distributions under the extended Jeffreys' prior as well as the conjugate prior, using three different loss functions. Although the extended Jeffreys' prior offers the flexibility of covering a wide spectrum of priors, the conjugate prior at times gives better Bayes estimates and HPD intervals for the parameter and for future observations.

Acknowledgment

The authors would like to thank the referee for a very careful reading of the manuscript and making a number of nice suggestions which improved the earlier version of the manuscript.