Abstract

A sequence of independent lifetimes $X_1, \dots, X_m, X_{m+1}, \dots, X_n$ was observed from a Maxwell distribution with reliability $r_1(t)$ at time $t$, but it was later found that the system changed at some point of time $m$, which is reflected in the sequence after $X_m$ by a change in reliability to $r_2(t)$. The Bayes estimators of $m$, $\theta_1$, and $\theta_2$ are derived under different asymmetric loss functions. The effects of correct and wrong prior information on the Bayes estimates are studied.

1. Introduction

The Maxwell distribution plays an important role in physics and other allied sciences. This paper introduces a discrete analogue of the Maxwell distribution, called the discrete Maxwell (dMax) distribution. This distribution is suggested as a suitable reliability model for fitting a range of discrete lifetime data.

In reliability theory many continuous lifetime models have been suggested and studied. However, it is sometimes impossible or inconvenient to measure the life length of a device on a continuous scale. In practice, we come across situations where the lifetime of a device is considered to be a discrete random variable. For example, in an on/off switching device, the lifetime of the switch is a discrete random variable. Also, the number of voltage fluctuations which an electrical or electronic item can withstand before its failure is a discrete random variable.

If the lifetimes of individuals in some population are grouped, or when the lifetime refers to an integral number of cycles of some sort, it may be desirable to treat it as a discrete random variable. When a discrete model is used with lifetime data, it is usually a multinomial distribution, which arises because effectively continuous data have been grouped. Some situations may call for another discrete distribution, usually over the nonnegative integers. Such situations are best treated individually, but generally one tries to adopt one of the standard discrete distributions.

In the last two decades, standard discrete distributions like the geometric and negative binomial have been employed to model lifetime data. However, there is a need for more plausible discrete lifetime distributions to fit various types of lifetime data. For this purpose, popular continuous lifetime distributions can be helpful in the following manner.

The Maxwell distribution describes the speeds of molecules in thermal equilibrium under conditions defined in statistical mechanics. For example, this distribution explains many fundamental gas properties in the kinetic theory of gases, such as the distribution of energies and momenta.

Tyagi and Bhattacharya [1] considered the Maxwell distribution as a lifetime model for the first time. They obtained the minimum variance unbiased estimator (MVUE) and the Bayes estimator of the parameter and of the reliability function of this distribution.

Chaturvedi and Rani [2] studied a generalized Maxwell distribution by introducing one more parameter and obtained classical and Bayesian estimation procedures for it. A lifetime model is specified to represent the distribution of lifetimes, and statistical inferences are made on the basis of this model. Physical systems manufacturing the items are often subject to random fluctuations. It may happen that at some point of time instability is observed in the sequence of lifetimes and reliabilities. The problem is to determine when and where this change started occurring; this is called the change point inference problem. Bayesian ideas play an important role in the study of such change point problems and have often been proposed as a valid alternative to classical estimation procedures. The monograph of Broemeling and Tsurumi [3] on structural changes, Jani and Pandya [4], Ebrahimi and Ghosh [5], the survey by Zacks [6], Pandya and Jani [7], Pandya and Bhatt [8], and Pandya and Jadav [9, 10] are useful references. In this paper we propose a discrete Maxwell model to represent the distribution of lifetimes with change point $m$ and obtain Bayes estimators of $m$, $\theta_1$, and $\theta_2$.

2. Proposed Change Point Model

In this section we propose a change point model based on the discrete Maxwell distribution and derive the Bayes estimates for the model. Let $X_1, X_2, \dots, X_n$ ($n \geq 3$) be a sequence of random lifetimes. The first $m$ of them come from the discrete Maxwell distribution dMax($\theta_1$), so the probability mass function is given by
$$p(x_i) = \frac{4}{\sqrt{\pi}}\,\frac{1}{\theta_1}\,Q(x_i, 2, \theta_1), \quad x = 0, 1, \dots;\ \theta_1 > 0;\ i = 1, 2, \dots, m, \tag{1}$$
with reliability $r_1(t)$ at time $t$,
$$r_1(t) = \frac{4}{\sqrt{\pi}}\,\theta_1^{-3/2}\,J(t, 2, \theta_1), \quad t = 0, 1, \dots;\ \theta_1 > 0. \tag{2}$$

The later $n - m$ observations come from the discrete Maxwell distribution dMax($\theta_2$), so the probability mass function is given by
$$p(x_i) = \frac{4}{\sqrt{\pi}}\,\frac{1}{\theta_2}\,Q(x_i, 2, \theta_2), \quad x = 0, 1, \dots;\ \theta_2 > 0;\ i = m+1, \dots, n, \tag{3}$$
with reliability $r_2(t)$,
$$r_2(t) = \frac{4}{\sqrt{\pi}}\,\theta_2^{-3/2}\,J(t, 2, \theta_2), \quad t = 0, 1, \dots;\ \theta_2 > 0, \tag{4}$$
where
$$Q(x, k, \theta_1) = \int_x^{x+1} u^k e^{-u^2/\theta_1}\, \mathrm{d}u = J(x, k, \theta_1) - J(x+1, k, \theta_1), \tag{5a}$$
$$J(x, k, \theta_1) = \int_x^{\infty} u^k e^{-u^2/\theta_1}\, \mathrm{d}u, \tag{5b}$$
$$Q(x, k, \theta_2) = \int_x^{x+1} u^k e^{-u^2/\theta_2}\, \mathrm{d}u = J(x, k, \theta_2) - J(x+1, k, \theta_2), \tag{5c}$$
$$J(x, k, \theta_2) = \int_x^{\infty} u^k e^{-u^2/\theta_2}\, \mathrm{d}u. \tag{5d}$$
The likelihood function given the sample information $X = (X_1, X_2, \dots, X_m, X_{m+1}, \dots, X_n)$ is
$$L(\theta_1, \theta_2, m \mid X) = \left(\frac{4}{\sqrt{\pi}}\right)^{n} \theta_1^{-m}\, G_1(m, \theta_1)\; \theta_2^{-(n-m)}\, G_2(m, \theta_2), \tag{6}$$
where
$$G_1(m, \theta_1) = \prod_{i=1}^{m} Q(x_i, 2, \theta_1), \qquad G_2(m, \theta_2) = \prod_{i=m+1}^{n} Q(x_i, 2, \theta_2), \tag{7}$$
and $Q(x_i, 2, \theta_1)$ and $Q(x_i, 2, \theta_2)$ are as in (5a) and (5c).
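For concreteness, the building blocks (5a)-(6) can be sketched in Python. Integration by parts gives $J(x, 2, \theta)$ in closed form in terms of the complementary error function; the function names `J`, `Q`, and `likelihood` are ours, not the paper's, and the snippet is only an illustrative sketch of the model:

```python
import math

def J(x, theta):
    """J(x, 2, theta) = integral over [x, inf) of u^2 exp(-u^2/theta) du,
    closed form of (5b) via integration by parts."""
    return (theta * x / 2.0) * math.exp(-x * x / theta) \
        + (math.sqrt(math.pi) * theta ** 1.5 / 4.0) * math.erfc(x / math.sqrt(theta))

def Q(x, theta):
    """Q(x, 2, theta) = J(x, 2, theta) - J(x+1, 2, theta), eq. (5a)."""
    return J(x, theta) - J(x + 1, theta)

def likelihood(x, m, theta1, theta2):
    """L(theta1, theta2, m | x) as in (6): the first m observations come
    from dMax(theta1), the remaining n - m from dMax(theta2)."""
    n = len(x)
    G1 = math.prod(Q(xi, theta1) for xi in x[:m])   # eq. (7)
    G2 = math.prod(Q(xi, theta2) for xi in x[m:])   # eq. (7)
    return (4.0 / math.sqrt(math.pi)) ** n * theta1 ** (-m) * G1 \
        * theta2 ** (-(n - m)) * G2
```

Since (5a) telescopes, $\sum_{x \ge 0} Q(x, 2, \theta) = J(0, 2, \theta) = \sqrt{\pi}\,\theta^{3/2}/4$, which provides a quick numerical check of the implementation.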

3. Bayes Estimation

3.1. The Conjugate Analysis Using Inverted Gamma Prior Distribution

The ML method, as well as other classical approaches, is based only on the empirical information provided by the data. However, when some technical knowledge on the parameters of the distribution is available, a Bayes procedure seems an attractive inferential method. The Bayes procedure is based on a posterior density, say $g(\theta_1, \theta_2, m \mid X)$, which is proportional to the product of the likelihood function $L(\theta_1, \theta_2, m \mid X)$ with a joint prior density, say $g(\theta_1, \theta_2, m)$, representing the uncertainty on the parameter values.

We also suppose that some information on $\theta_1$ and $\theta_2$ is available and that this technical knowledge can be given in terms of prior means $\mu_1$, $\mu_2$ and prior variances $\sigma_1^2$, $\sigma_2^2$, respectively. Suppose that the marginal prior densities of $\theta_1$ and $\theta_2$ are inverted Gamma with respective means $\mu_1$ and $\mu_2$:
$$g(\theta_1) = \frac{a_1^{b_1}}{\Gamma(b_1)}\, \theta_1^{-(b_1+1)} e^{-a_1/\theta_1}, \qquad g(\theta_2) = \frac{a_2^{b_2}}{\Gamma(b_2)}\, \theta_2^{-(b_2+1)} e^{-a_2/\theta_2}, \quad a_i, b_i > 0,\ \theta_i > 0,\ i = 1, 2, \tag{8}$$
where the parameters $a_i, b_i$, $i = 1, 2$, are obtained by solving
$$b_i = 2 + \frac{\mu_i^2}{\sigma_i^2}, \qquad a_i = \mu_i (b_i - 1), \quad i = 1, 2. \tag{9}$$
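The moment matching in (9) is immediate to implement: an inverted Gamma with shape $b$ and scale $a$ has mean $a/(b-1)$ and variance $\mu^2/(b-2)$, and solving for $a, b$ gives (9). A minimal sketch (the function name is ours):

```python
def inverted_gamma_hyperparameters(mu, variance):
    """Solve (9): b = 2 + mu^2 / sigma^2 and a = mu * (b - 1), so that the
    inverted Gamma prior has mean a/(b-1) = mu and
    variance a^2 / ((b-1)^2 (b-2)) = mu^2 / (b-2) = sigma^2."""
    b = 2.0 + mu ** 2 / variance
    a = mu * (b - 1.0)
    return a, b
```

As a check, the values $\mu_1 = 1.0$ and $\sigma_1^2 = 1.0$ used later in Section 5 yield $a_1 = 2.0$ and $b_1 = 3.0$.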

Following Calabria and Pulcini [13], we assume the prior information to be correct if the true value of $\theta_1$ ($\theta_2$) is close to the prior mean $\mu_1$ ($\mu_2$) and to be wrong if $\theta_1$ ($\theta_2$) is far from $\mu_1$ ($\mu_2$).

As in the study by Broemeling and Tsurumi [3], we suppose the marginal prior distribution of $m$ to be discrete uniform over the set $\{1, 2, \dots, n-1\}$:
$$g(m) = \frac{1}{n-1}. \tag{10}$$
The joint prior density of $\theta_1, \theta_2, m$ is
$$g_1(\theta_1, \theta_2, m) = \frac{1}{n-1}\, \frac{a_1^{b_1}}{\Gamma(b_1)}\, \frac{a_2^{b_2}}{\Gamma(b_2)}\, \theta_1^{-(b_1+1)} e^{-a_1/\theta_1}\, \theta_2^{-(b_2+1)} e^{-a_2/\theta_2} = k\, \theta_1^{-(b_1+1)} e^{-a_1/\theta_1}\, \theta_2^{-(b_2+1)} e^{-a_2/\theta_2}, \tag{11}$$
where
$$k = \frac{1}{n-1}\, \frac{a_1^{b_1}}{\Gamma(b_1)}\, \frac{a_2^{b_2}}{\Gamma(b_2)}. \tag{12}$$
The joint posterior density of $\theta_1, \theta_2, m$ is
$$g_1(\theta_1, \theta_2, m \mid x) = \frac{L(\theta_1, \theta_2, m \mid x)\, g_1(\theta_1, \theta_2, m)}{h_1(x)} = k_2\, \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1(m, \theta_1)\; \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2(m, \theta_2)\, \frac{1}{h_1(x)}, \tag{13}$$
$$k_2 = k \left(\frac{4}{\sqrt{\pi}}\right)^{n}, \tag{14}$$
where $G_1(m, \theta_1)$ and $G_2(m, \theta_2)$ are as in (7).

Here $h_1(x)$ is the marginal density of the data $X$:
$$h_1(x) = \sum_{m=1}^{n-1} \int_0^{\infty}\!\!\int_0^{\infty} L(\theta_1, \theta_2, m \mid x)\, g_1(\theta_1, \theta_2, m)\, \mathrm{d}\theta_1\, \mathrm{d}\theta_2 = k_2 \sum_{m=1}^{n-1} I_1(m)\, I_2(m), \tag{15}$$
where
$$I_1(m) = \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1, \tag{16}$$
$$I_2(m) = \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2, \tag{17}$$
and $G_1(m, \theta_1)$ and $G_2(m, \theta_2)$ are as in (7).

The marginal posterior densities of $\theta_1$ and of $\theta_2$ are obtained by integrating the joint posterior density (13) with respect to $\theta_2$ and with respect to $\theta_1$, respectively, and summing over $m$:
$$g_1(\theta_1 \mid x) = k_2 \sum_{m=1}^{n-1} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1(m, \theta_1) \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2\; \frac{1}{h_1(x)}, \tag{18}$$
$$g_1(\theta_2 \mid x) = k_2 \sum_{m=1}^{n-1} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2(m, \theta_2) \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1\; \frac{1}{h_1(x)}, \tag{19}$$
where $G_1(m, \theta_1)$ and $G_2(m, \theta_2)$ are as in (7), and $h_1(x)$ is the same as in (15).

The marginal posterior density of the change point $m$, say $g_1(m \mid x)$, is obtained by integrating the joint posterior density (13) with respect to $\theta_1$ and $\theta_2$:
$$g_1(m \mid x) = \int_0^{\infty}\!\!\int_0^{\infty} g_1(\theta_1, \theta_2, m \mid x)\, \mathrm{d}\theta_1\, \mathrm{d}\theta_2 = \frac{I_1(m)\, I_2(m)}{\sum_{m=1}^{n-1} I_1(m)\, I_2(m)}. \tag{20}$$
The Bayes estimator of a generic parameter (or function thereof) $\alpha$, based on the squared error loss (SEL) function
$$L_1(\alpha, d) = (\alpha - d)^2, \tag{21}$$
where $d$ is a decision rule to estimate $\alpha$, is the posterior mean.

The Bayes estimator of $m$ under SEL and the inverted Gamma prior is
$$m_1^{*} = \frac{\sum_{m=1}^{n-1} m\, I_1(m)\, I_2(m)}{\sum_{m=1}^{n-1} I_1(m)\, I_2(m)}, \tag{22}$$
where $I_1(m)$ and $I_2(m)$ are the same as in (16) and (17).
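The estimator (22) is straightforward to evaluate numerically. The sketch below approximates $I_1(m)$ and $I_2(m)$ by a trapezoidal rule on a truncated grid (the paper's integrals run over $(0, \infty)$; we cut at $\theta = 10$). The toy data, the grid, the illustrative hyperparameters, and all function names are ours, chosen only to illustrate (16)-(22):

```python
import math

def J(x, th):  # closed form of J(x, 2, theta) in (5b)
    return (th * x / 2.0) * math.exp(-x * x / th) \
        + (math.sqrt(math.pi) * th ** 1.5 / 4.0) * math.erfc(x / math.sqrt(th))

def Q(x, th):  # Q(x, 2, theta) = J(x) - J(x+1), eq. (5a)
    return J(x, th) - J(x + 1, th)

def I_integral(data, power, a, grid):
    # Trapezoidal approximation of (16)/(17):
    # integral of theta^-(power) * exp(-a/theta) * prod_i Q(x_i, 2, theta)
    v = [th ** (-power) * math.exp(-a / th)
         * math.prod(Q(xi, th) for xi in data) for th in grid]
    h = grid[1] - grid[0]
    return h * (sum(v) - 0.5 * (v[0] + v[-1]))

def change_point_posterior(x, a1, b1, a2, b2, grid):
    n = len(x)
    w = [I_integral(x[:m], m + b1 + 1, a1, grid)          # I1(m), eq. (16)
         * I_integral(x[m:], n - m + b2 + 1, a2, grid)    # I2(m), eq. (17)
         for m in range(1, n)]
    s = sum(w)
    return [wi / s for wi in w]                           # g1(m | x), eq. (20)

# toy data with an apparent change from larger to smaller lifetimes
x = [1, 1, 2, 0, 1, 1, 2, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
grid = [0.02 * k for k in range(1, 501)]                  # theta in (0, 10]
post = change_point_posterior(x, 2.0, 3.0, 0.625, 2.25, grid)  # illustrative a_i, b_i
m_star = sum(m * p for m, p in zip(range(1, len(x)), post))    # eq. (22)
```

In practice a finer grid, or an adaptive quadrature routine, would be used for the improper integrals; the normalizing constants $k_2$ and $h_1(x)$ cancel in (20), so they never need to be computed.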

Other Bayes estimators of $\alpha$, based on the loss functions
$$L_2(\alpha, d) = |\alpha - d|, \qquad L_3(\alpha, d) = \begin{cases} 0, & \text{if } |\alpha - d| < \epsilon,\ \epsilon > 0, \\ 1, & \text{otherwise}, \end{cases} \tag{23}$$
are the posterior median and the posterior mode, respectively.

3.2. Posterior Distribution Functions Using Noninformative Prior

A noninformative prior is a prior that adds no information to that contained in the empirical data. Thus, Bayes inference based upon a noninformative prior is generally of theoretical interest only, since, from an engineering viewpoint, the Bayes approach is attractive precisely because it allows incorporating expert opinion or technical knowledge in the estimation procedure. Let the joint noninformative prior density of $\theta_1, \theta_2$, and $m$ be given by
$$g_2(\theta_1, \theta_2, m) = \frac{1}{(n-1)\,\theta_1 \theta_2}. \tag{24}$$
The joint posterior density using the noninformative prior $g_2(\theta_1, \theta_2, m)$, say $g_2(\theta_1, \theta_2, m \mid x)$, is
$$g_2(\theta_1, \theta_2, m \mid x) = k_2\, \theta_1^{-(m+1)} e^{-1/\theta_1} G_1(m, \theta_1)\; \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2(m, \theta_2)\, \frac{1}{h_2(x)}, \tag{25}$$
where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are the same as in (7) and (14), and $h_2(x)$ is the marginal density of $x$ under the noninformative prior, obtained as
$$h_2(x) = \sum_{m=1}^{n-1} \int_0^{\infty}\!\!\int_0^{\infty} L(\theta_1, \theta_2, m \mid x)\, g_2(\theta_1, \theta_2, m)\, \mathrm{d}\theta_1\, \mathrm{d}\theta_2 = k_2 \sum_{m=1}^{n-1} I_3(m)\, I_4(m), \tag{26}$$
where
$$I_3(m) = \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1, \qquad I_4(m) = \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2. \tag{27}$$
The marginal posterior densities of $\theta_1$ and $\theta_2$ under the noninformative prior (25) are obtained as
$$g_2(\theta_1 \mid x) = k_2 \sum_{m=1}^{n-1} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1(m, \theta_1) \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2\; \frac{1}{h_2(x)}, \tag{28}$$
$$g_2(\theta_2 \mid x) = k_2 \sum_{m=1}^{n-1} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2(m, \theta_2) \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1\; \frac{1}{h_2(x)}, \tag{29}$$
where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are the same as in (7) and (14), and $h_2(x)$ is the same as in (26).

The marginal posterior density of the change point $m$, say $g_2(m \mid x)$, is obtained by integrating the joint posterior density (25) with respect to $\theta_1$ and $\theta_2$:
$$g_2(m \mid x) = \int_0^{\infty}\!\!\int_0^{\infty} g_2(\theta_1, \theta_2, m \mid x)\, \mathrm{d}\theta_1\, \mathrm{d}\theta_2 = \frac{I_3(m)\, I_4(m)}{\sum_{m=1}^{n-1} I_3(m)\, I_4(m)}, \tag{30}$$
where $I_3(m)$ and $I_4(m)$ are the same as in (27).

The Bayes estimator of $m$ under SEL and the noninformative prior is
$$m_2^{*} = \frac{\sum_{m=1}^{n-1} m\, I_3(m)\, I_4(m)}{\sum_{m=1}^{n-1} I_3(m)\, I_4(m)}. \tag{31}$$
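Under the noninformative prior the same numerical recipe applies, with $I_3$ and $I_4$ from (27) in place of $I_1$ and $I_2$. Again the truncated grid and toy data below are our own illustration of (27), (30), and (31):

```python
import math

def J(x, th):  # closed form of J(x, 2, theta) in (5b)
    return (th * x / 2.0) * math.exp(-x * x / th) \
        + (math.sqrt(math.pi) * th ** 1.5 / 4.0) * math.erfc(x / math.sqrt(th))

def Q(x, th):  # eq. (5a)
    return J(x, th) - J(x + 1, th)

def I34(data, power, grid):
    # trapezoidal version of (27):
    # integral of theta^-(power) * exp(-1/theta) * prod_i Q(x_i, 2, theta)
    v = [th ** (-power) * math.exp(-1.0 / th)
         * math.prod(Q(xi, th) for xi in data) for th in grid]
    h = grid[1] - grid[0]
    return h * (sum(v) - 0.5 * (v[0] + v[-1]))

x = [1, 2, 1, 1, 2, 1, 0, 0, 1, 0, 0, 0]        # toy sample
n = len(x)
grid = [0.02 * k for k in range(1, 501)]        # theta in (0, 10]
w = [I34(x[:m], m + 1, grid) * I34(x[m:], n - m + 1, grid)  # I3(m) * I4(m)
     for m in range(1, n)]
m_star2 = sum(m * wm for m, wm in zip(range(1, n), w)) / sum(w)  # eq. (31)
```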

4. Bayes Estimates of Change Point and Other Parameters under Asymmetric Loss Functions

The loss function $L(\alpha, d)$ provides a measure of the financial consequences arising from a wrong decision rule $d$ used to estimate an unknown quantity $\alpha$. The choice of the appropriate loss function depends on financial considerations only and is independent of the estimation procedure used. The use of a symmetric loss function is generally inappropriate since, for example, an overestimation of the reliability function is usually much more serious than an underestimation.

In this section, we derive Bayes estimators of the change point $m$ under different asymmetric loss functions using both prior considerations explained in Sections 3.1 and 3.2. A useful asymmetric loss, known as the Linex loss function, was introduced by Varian [12]. Under the assumption that the minimal loss occurs at $d = \alpha$, the Linex loss function can be expressed as
$$L_4(\alpha, d) = \exp\{q_1 (d - \alpha)\} - q_1 (d - \alpha) - 1, \quad q_1 \neq 0. \tag{32}$$
The sign of the shape parameter $q_1$ reflects the direction of the asymmetry, $q_1 > 0$ if overestimation is more serious than underestimation and vice versa, and the magnitude of $q_1$ reflects the degree of asymmetry.

The posterior expectation of the Linex loss function is
$$E_{\alpha}\{L_4(\alpha, d)\} = \exp(q_1 d)\, E_{\alpha}\{\exp(-q_1 \alpha)\} - q_1 (d - E\{\alpha\}) - 1, \tag{33}$$
where $E_{\alpha}\{f(\alpha)\}$ denotes the expectation of $f(\alpha)$ with respect to the posterior density $g(\alpha \mid x)$. The Bayes estimate $\alpha_L^{*}$ is the value of $d$ that minimizes $E_{\alpha}\{L_4(\alpha, d)\}$:
$$\alpha_L^{*} = -\frac{1}{q_1} \ln\bigl[ E_{\alpha}\{\exp(-q_1 \alpha)\} \bigr], \tag{34}$$
provided that $E_{\alpha}\{\exp(-q_1 \alpha)\}$ exists and is finite.
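For a discrete posterior, (34) is a one-liner, and Jensen's inequality gives a useful sanity check: the Linex estimate lies below the posterior mean when $q_1 > 0$ and above it when $q_1 < 0$, approaching the mean as $q_1 \to 0$. The toy posterior below is our own illustration, not one of the paper's posteriors:

```python
import math

def linex_estimate(support, weights, q1):
    """Bayes estimate (34) under Linex loss, -(1/q1) * ln E[exp(-q1*alpha)],
    for a discrete posterior given by (support, weights)."""
    s = sum(weights)
    e = sum(w * math.exp(-q1 * a) for a, w in zip(support, weights)) / s
    return -math.log(e) / q1

support = list(range(1, 20))                                 # candidate m values
weights = [math.exp(-0.5 * (m - 10) ** 2) for m in support]  # toy posterior
mean = sum(a * w for a, w in zip(support, weights)) / sum(weights)
```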

Minimizing the expected loss $E_m[L_4(m, d)]$ and using the posterior distributions (20) and (30), we get the Bayes estimators of $m$ under the Linex loss function, respectively, as
$$m_L^{*} = -\frac{1}{q_1} \ln\!\left[ \frac{\sum_{m=1}^{n-1} e^{-m q_1} I_1(m)\, I_2(m)}{\sum_{m=1}^{n-1} I_1(m)\, I_2(m)} \right], \qquad m_L^{**} = -\frac{1}{q_1} \ln\!\left[ \frac{\sum_{m=1}^{n-1} e^{-m q_1} I_3(m)\, I_4(m)}{\sum_{m=1}^{n-1} I_3(m)\, I_4(m)} \right], \tag{35}$$
where $I_1(m), I_2(m)$ and $I_3(m), I_4(m)$ are the same as in (16), (17) and (27).

Minimizing the expected loss $E_{\theta_1}[L_4(\theta_1, d)]$ and using the posterior distributions (18) and (28), we get the Bayes estimators of $\theta_1$ under the Linex loss function, respectively, as
$$\theta_{1L}^{*} = -\frac{1}{q_1} \ln\!\left[ k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2 \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} e^{-q_1 \theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1\, \frac{1}{h_1(x)} \right],$$
$$\theta_{1L}^{**} = -\frac{1}{q_1} \ln\!\left[ k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} e^{-q_1 \theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1 \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2\, \frac{1}{h_2(x)} \right], \tag{36}$$
where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are the same as in (7) and (14), and $h_1(x)$ and $h_2(x)$ are the same as in (15) and (26).

Minimizing the expected loss $E_{\theta_2}[L_4(\theta_2, d)]$ and using the posterior distributions (19) and (29), we get the Bayes estimators of $\theta_2$ under the Linex loss function, respectively, as
$$\theta_{2L}^{*} = -\frac{1}{q_1} \ln\!\left[ k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1 \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} e^{-q_1 \theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2\, \frac{1}{h_1(x)} \right],$$
$$\theta_{2L}^{**} = -\frac{1}{q_1} \ln\!\left[ k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} e^{-q_1 \theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2 \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1\, \frac{1}{h_2(x)} \right], \tag{37}$$
where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are the same as in (7) and (14), and $h_1(x)$ and $h_2(x)$ are the same as in (15) and (26).

Another loss function, called the General Entropy (GE) loss function, proposed by Calabria and Pulcini [11], is given by
$$L_5(\alpha, d) = \left(\frac{d}{\alpha}\right)^{q_3} - q_3 \ln\!\left(\frac{d}{\alpha}\right) - 1. \tag{38}$$
The Bayes estimate $\alpha_E^{*}$ is the value of $d$ that minimizes $E_{\alpha}[L_5(\alpha, d)]$:
$$\alpha_E^{*} = \bigl[ E_{\alpha}(\alpha^{-q_3}) \bigr]^{-1/q_3}, \tag{39}$$
provided that $E_{\alpha}(\alpha^{-q_3})$ exists and is finite.

Minimizing the expectation $E_m[L_5(m, d)]$ and using the posterior distributions (20) and (30), we get the Bayes estimators $m_E^{*}, m_E^{**}$ of $m$, respectively, as
$$m_E^{*} = \bigl[ E_m(m^{-q_3}) \bigr]^{-1/q_3} = \left[ \frac{\sum_{m=1}^{n-1} m^{-q_3} I_1(m)\, I_2(m)}{\sum_{m=1}^{n-1} I_1(m)\, I_2(m)} \right]^{-1/q_3}, \qquad m_E^{**} = \left[ \frac{\sum_{m=1}^{n-1} m^{-q_3} I_3(m)\, I_4(m)}{\sum_{m=1}^{n-1} I_3(m)\, I_4(m)} \right]^{-1/q_3}, \tag{40}$$
where $I_1(m), I_2(m)$ and $I_3(m), I_4(m)$ are the same as in (16), (17) and (27).
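A quick check of (39)-(40) for a discrete posterior: at $q_3 = -1$ the GE estimate reduces exactly to the posterior mean, while for $q_3 > 0$ the power-mean inequality pulls it below the mean. The toy posterior is our own illustration:

```python
def entropy_estimate(support, weights, q3):
    """Bayes estimate (39) under General Entropy loss,
    [E(alpha^{-q3})]^{-1/q3}, for a discrete posterior."""
    s = sum(weights)
    e = sum(w * a ** (-q3) for a, w in zip(support, weights)) / s
    return e ** (-1.0 / q3)

support = list(range(1, 20))                                # candidate m values
weights = [1.0 / (1.0 + (m - 10) ** 2) for m in support]    # toy posterior
mean = sum(a * w for a, w in zip(support, weights)) / sum(weights)
```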

Minimizing the expected loss $E_{\theta_1}[L_5(\theta_1, d)]$ and using the posterior distributions (18) and (28), we get the Bayes estimates of $\theta_1$ under the General Entropy loss function and the informative and noninformative priors, respectively, as
$$\theta_{1E}^{*} = \left[ k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2 \int_0^{\infty} \theta_1^{-(m+b_1+q_3+1)} e^{-a_1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1\, \frac{1}{h_1(x)} \right]^{-1/q_3},$$
$$\theta_{1E}^{**} = \left[ k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_1^{-(m+q_3+1)} e^{-1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1 \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2\, \frac{1}{h_2(x)} \right]^{-1/q_3}, \tag{41}$$
where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are the same as in (7) and (14), and $h_1(x)$ and $h_2(x)$ are the same as in (15) and (26).

Minimizing the expected loss $E_{\theta_2}[L_5(\theta_2, d)]$ and using the posterior distributions (19) and (29), we get the Bayes estimates of $\theta_2$ under the General Entropy loss function as
$$\theta_{2E}^{*} = \left[ k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1 \int_0^{\infty} \theta_2^{-(n-m+b_2+q_3+1)} e^{-a_2/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2\, \frac{1}{h_1(x)} \right]^{-1/q_3}, \tag{42}$$
$$\theta_{2E}^{**} = \left[ k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_2^{-(n-m+q_3+1)} e^{-1/\theta_2} G_2(m, \theta_2)\, \mathrm{d}\theta_2 \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1(m, \theta_1)\, \mathrm{d}\theta_1\, \frac{1}{h_2(x)} \right]^{-1/q_3}, \tag{43}$$
where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are the same as in (7) and (14), and $h_1(x)$ and $h_2(x)$ are the same as in (15) and (26).

Remark 1. Putting $q_3 = -1$ in (40), (41), and (42), we get the Bayes estimators of $m$, $\theta_1$, and $\theta_2$ as the posterior means, that is, the estimators under the squared error loss.

Note that, for $q_3 = -1$, the Bayes estimate under the GE loss function coincides with that under the squared error loss function.

5. Numerical Study

We have generated 30 random observations from the dMax distribution involving the change point discussed in Section 2: the first 15 observations from dMax with $\theta_1 = 1.0$ (at $t = 5.0$, $r_1(t) = 0.0460$), and the next 15 observations from dMax with $\theta_2 = 0.5$ ($r_2(t) = 0.0011$). Here $\theta_1$ and $\theta_2$ themselves were random observations from inverted Gamma prior distributions with prior means $\mu_1 = 1.0$, $\mu_2 = 0.5$ and variances $\sigma_1 = 1.0$ and $\sigma_2 = 1.0$, respectively, resulting in $a_1 = 2.0$, $b_1 = 3.0$, and $a_2 = 0.5$, $b_2 = 0.75$. These observations are given in the first row of Table 1.
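The generation step above can be sketched by inverse-CDF sampling. Because of the telescoping in (5a), $Q(x, 2, \theta)/J(0, 2, \theta)$ sums to one over $x = 0, 1, \dots$, so it can serve directly as the sampling probability. The sampler below, its seed, and the resulting values are our own illustration, not the paper's Table 1 data:

```python
import math, random

def J(x, th):  # closed form of J(x, 2, theta) in (5b)
    return (th * x / 2.0) * math.exp(-x * x / th) \
        + (math.sqrt(math.pi) * th ** 1.5 / 4.0) * math.erfc(x / math.sqrt(th))

def dmax_sample(theta, rng, max_x=500):
    # inverse-CDF draw from P(X = x) = Q(x, 2, theta) / J(0, 2, theta)
    u = rng.random() * J(0, theta)
    cum, x = 0.0, 0
    while x < max_x:
        cum += J(x, theta) - J(x + 1, theta)   # Q(x, 2, theta), eq. (5a)
        if u <= cum:
            return x
        x += 1
    return max_x

rng = random.Random(7)
# 15 observations before the change (theta1 = 1.0), 15 after (theta2 = 0.5)
sample = [dmax_sample(1.0, rng) for _ in range(15)] \
       + [dmax_sample(0.5, rng) for _ in range(15)]
```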

We also compute the Bayes estimators $m_L^{*}, m_E^{*}$ of $m$, and $\theta_{1E}^{*}, \theta_{1L}^{*}$ and $\theta_{2E}^{*}, \theta_{2L}^{*}$ of $\theta_1$ and $\theta_2$, using the results given in Section 4, for the data given in Table 1 and for different values of the shape parameters $q_1$ and $q_3$. The results are shown in Tables 3 and 4.

We have generated 6 random samples from the discrete Maxwell distribution involving the change point discussed in Section 2, with $n = 30, 50, 50$, $m = 15, 25, 35$, $\theta_1 = 1.0, 5.0$, and $\theta_2 = 0.5, 2.0$. As explained in Section 3.1, $\theta_1$ and $\theta_2$ themselves were random observations from inverted Gamma prior distributions with prior means $\mu_1$ and $\mu_2$, respectively. These observations are given in Table 1. We have calculated the posterior means of $m$, $\theta_1$, and $\theta_2$ under both priors for all samples, and the results are shown in Table 2.

Table 3 shows that for small values of $|q_1|$ ($q_1 = 0.09, 0.2, 0.1$) the Linex loss function is almost symmetric and nearly quadratic, and the values of the Bayes estimate under such a loss are not far from the posterior mean. Table 3 also shows that, for $q_1 = 1.5, 1.2$, the Bayes estimates are smaller than the actual value $m = 15$.

For $q_1 = q_3 = -1, -2$, the Bayes estimates are considerably larger than the actual value $m = 15$. It can be seen from Tables 3 and 4 that a negative sign of the shape parameter of the loss functions treats underestimation as more serious than overestimation. Thus, the problem of underestimation can be addressed by taking negative values of the shape parameters of the Linex and General Entropy loss functions.

Table 4 shows that, for small values of $|q_3|$ ($q_3 = 0.09, 0.2, 0.1$), the values of the Bayes estimate under the General Entropy loss function are not far from the posterior mean. Table 4 also shows that, for $q_3 = 1.5, 1.2$, the Bayes estimates are smaller than the actual value $m = 15$.

It can be seen from Tables 3 and 4 that a positive sign of the shape parameter of the loss functions treats overestimation as more serious than underestimation. Thus, the problem of overestimation can be addressed by taking large positive values of the shape parameters of the Linex and General Entropy loss functions.

5.1. Sensitivity of Bayes Estimates

In this section, we study the sensitivity of the Bayes estimator obtained in Section 3 with respect to changes in the priors of the parameters. The means $\mu_1$ and $\mu_2$ of the inverted Gamma priors on $\theta_1$ and $\theta_2$ have been used as prior information in computing the parameters $a_1$, $a_2$, $b_1$, and $b_2$ of the priors. We have computed the posterior mean of $m$ for the data given in Table 1, considering different sets of values of ($\mu_1$, $\mu_2$). Following Calabria and Pulcini [13], we assume the prior information to be correct if the true value of $\theta_1$ ($\theta_2$) is close to the prior mean $\mu_1$ ($\mu_2$) and to be wrong if it is far from $\mu_1$ ($\mu_2$). We observed that the posterior mean of $m$ appears to be robust with respect to a correct choice of the prior density of $\theta_1$ ($\theta_2$) combined with a wrong choice of the prior density of $\theta_2$ ($\theta_1$).

This can be seen from Tables 5, 6, and 7.

Table 5 shows that, when the prior mean $\mu_1 = 1$ equals the actual value of $\theta_1$ while $\mu_2 = 0.3$ or $0.7$ (far from the true value $\theta_2 = 0.5$), that is, a correct choice of prior for $\theta_1$ and a wrong choice of prior for $\theta_2$, the value of the posterior-mean Bayes estimator remains the same, namely 15, and so gives a correct estimate of $m$. Thus, the posterior mean is not sensitive to a wrong choice of the prior density of $\theta_2$ ($\theta_1$) as long as the prior of $\theta_1$ ($\theta_2$) is chosen correctly.

6. Simulation Study

In Section 5, we obtained Bayes estimates of $m$ on the basis of the generated data given in Table 1 for given values of the parameters. To justify the results, we have generated 10,000 different random samples with $m = 10$, $n = 30, 50$, $\theta_1 = 1$, $\theta_2 = 0.5$ and obtained the frequency distributions of the posterior mean and of $m_L^{*}$, $m_E^{*}$ of $m$ under the correct prior consideration. The results are shown in Table 8. The value of the shape parameter of the General Entropy and Linex losses used in the simulation study for the change point is taken as 0.1. We have also simulated several dMax samples with $m = 15, 25, 35$; $n = 30, 50$; $\theta_1 = 0.15, 0.11, 0.10$; and $\theta_2 = 0.55, 0.45, 0.35$. For each $m$, $n$, $\theta_1$, and $\theta_2$, 1000 pseudorandom samples have been simulated, and Bayes estimators of the change point $m$ using $q_1 = q_3 = 0.9$ have been computed for the same values of $a_1$, $a_2$ and for different prior means $\mu_1$ and $\mu_2$. We observed that, for each combination of prior means $\mu_1$ and $\mu_2$, the posterior mean of $m$ appears to be robust with respect to a correct choice of the prior density of $\theta_1$ ($\theta_2$) and a wrong choice of the prior density of $\theta_2$ ($\theta_1$).

The value of the Bayes estimator of the change point $m$, based on the Linex and General Entropy losses with $q_1 = q_3 = 0.9$, is 10.

Table 8 leads to the conclusion that the posterior mean performs better than $m_L^{*}$ and $m_E^{*}$ as an estimator of the change point: 78% of the posterior-mean values are close to the actual value of the change point under the correct choice of prior, against 65% of the $m_L^{*}$ values and 66% of the $m_E^{*}$ values under the same correct prior considerations.

Acknowledgment

The authors would like to thank the editor and the referee for their valuable suggestions which improved the earlier version of the paper.