International Journal of Quality, Statistics, and Reliability
Volume 2011 (2011), Article ID 395034, 8 pages
http://dx.doi.org/10.1155/2011/395034
Research Article

Bayes Estimation of Change Point in Discrete Maxwell Distribution

Department of Statistics, Bhavnagar University, Bhavnagar 364002, India

Received 28 December 2010; Revised 13 May 2011; Accepted 13 May 2011

Academic Editor: Ajit K. Verma

Copyright © 2011 Mayuri Pandya and Hardik Pandya. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A sequence of independent lifetimes $X_1, \dots, X_m, X_{m+1}, \dots, X_n$ was observed from a Maxwell distribution with reliability $r_1(t)$ at time $t$, but it was later found that the system had changed at some point of time $m$, reflected in the sequence after $X_m$ by a change in reliability to $r_2(t)$. The Bayes estimators of $m$, $\theta_1$, and $\theta_2$ are derived under different asymmetric loss functions. The effects of correct and wrong prior information on the Bayes estimates are studied.

1. Introduction

The Maxwell distribution plays an important role in physics and other allied sciences. This paper introduces a discrete analogue of the Maxwell distribution, called the discrete Maxwell (dMax) distribution, and suggests it as a reliability model for fitting a range of discrete lifetime data.

In reliability theory, many continuous lifetime models have been suggested and studied. However, it is sometimes impossible or inconvenient to measure the life length of a device on a continuous scale. In practice, we come across situations where the lifetime of a device is considered to be a discrete random variable. For example, in an on/off switching device, the lifetime of the switch is a discrete random variable. Likewise, the number of voltage fluctuations which an electrical or electronic item can withstand before failure is a discrete random variable.

If the lifetimes of individuals in some population are grouped, or when lifetime refers to an integral number of cycles of some sort, it may be desirable to treat lifetime as a discrete random variable. When a discrete model is used with lifetime data, it is usually a multinomial distribution, which arises because effectively continuous data have been grouped. Some situations may demand another discrete distribution, usually over the nonnegative integers. Such situations are best treated individually, but generally one tries to adopt one of the standard discrete distributions.

In the last two decades, standard discrete distributions such as the geometric and negative binomial have been employed to model lifetime data. However, there is a need for more plausible discrete lifetime distributions to fit various types of lifetime data. For this purpose, popular continuous lifetime distributions can be helpful in the following manner.

The Maxwell distribution describes the speed of molecules in thermal equilibrium under the conditions defined in statistical mechanics. For example, this distribution explains many fundamental gas properties in the kinetic theory of gases, such as the distribution of energies and momenta.

Tyagi and Bhattacharya [1] considered the Maxwell distribution as a lifetime model for the first time. They obtained the minimum variance unbiased estimator (MVUE) and the Bayes estimator of the parameter and of the reliability function of this distribution.

Chaturvedi and Rani [2] studied a generalized Maxwell distribution by introducing one more parameter and obtained classical and Bayesian estimation procedures for it. A lifetime model is specified to represent the distribution of lifetimes, and statistical inferences are made on the basis of this model. Physical systems manufacturing the items are often subject to random fluctuations, and it may happen that at some point of time instability in the sequence of lifetimes and reliability is observed. The problem is to determine when and where this change started occurring; this is called the change point inference problem. Bayesian ideas can play an important role in the study of such change point problems and have often been proposed as a valid alternative to classical estimation procedures. The monograph of Broemeling and Tsurumi [3] on structural change, Jani and Pandya [4], Ebrahimi and Ghosh [5], a survey by Zacks [6], Pandya and Jani [7], Pandya and Bhatt [8], and Pandya and Jadav [9, 10] are useful references. In this paper we propose a discrete Maxwell model to represent the distribution of lifetimes with change point $m$ and obtain the Bayes estimators of $m$, $\theta_1$, and $\theta_2$.

2. Proposed Change Point Model

In this section we propose a change point model based on the discrete Maxwell distribution and derive the Bayes estimates for the model. Let $X_1, X_2, \dots, X_n$ ($n \ge 3$) be a sequence of random lifetimes. The first $m$ of them come from the discrete Maxwell distribution, dMax($\theta_1$), with probability mass function

$$p(x_i) = \frac{4}{\sqrt{\pi}}\,\theta_1^{-3/2}\,Q\!\left(x_i, 2, \theta_1\right), \qquad x_i = 0, 1, \dots;\ \theta_1 > 0;\ i = 1, 2, \dots, m, \tag{1}$$

and reliability

$$r_1(t) = \frac{4}{\sqrt{\pi}}\,\theta_1^{-3/2}\,J\!\left(t, 2, \theta_1\right), \qquad t = 0, 1, \dots;\ \theta_1 > 0. \tag{2}$$

The later $n - m$ observations come from the discrete Maxwell distribution dMax($\theta_2$), with probability mass function

$$p(x_i) = \frac{4}{\sqrt{\pi}}\,\theta_2^{-3/2}\,Q\!\left(x_i, 2, \theta_2\right), \qquad x_i = 0, 1, \dots;\ \theta_2 > 0;\ i = m+1, \dots, n, \tag{3}$$

and reliability

$$r_2(t) = \frac{4}{\sqrt{\pi}}\,\theta_2^{-3/2}\,J\!\left(t, 2, \theta_2\right), \qquad t = 0, 1, \dots;\ \theta_2 > 0, \tag{4}$$

where, for $j = 1, 2$,

$$Q\!\left(x, k, \theta_j\right) = \int_x^{x+1} u^k e^{-u^2/\theta_j}\,\mathrm{d}u = J\!\left(x, k, \theta_j\right) - J\!\left(x+1, k, \theta_j\right), \tag{5a, 5c}$$

$$J\!\left(x, k, \theta_j\right) = \int_x^{\infty} u^k e^{-u^2/\theta_j}\,\mathrm{d}u. \tag{5b, 5d}$$

The likelihood function, given the sample information $X = (X_1, X_2, \dots, X_m, X_{m+1}, \dots, X_n)$, is

$$L\!\left(\theta_1, \theta_2, m \mid X\right) = \left(\frac{4}{\sqrt{\pi}}\right)^{n} \theta_1^{-m}\, G_1\!\left(m, \theta_1\right)\, \theta_2^{-(n-m)}\, G_2\!\left(m, \theta_2\right), \tag{6}$$

where

$$G_1\!\left(m, \theta_1\right) = \prod_{i=1}^{m} Q\!\left(x_i, 2, \theta_1\right), \qquad G_2\!\left(m, \theta_2\right) = \prod_{i=m+1}^{n} Q\!\left(x_i, 2, \theta_2\right), \tag{7}$$

and $Q(x_i, 2, \theta_1)$ and $Q(x_i, 2, \theta_2)$ are as defined in (5a) and (5c).
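For $k = 2$, the integrals $J$ and $Q$ of (5a)–(5d) can be evaluated in closed form via the complementary error function, since $\int_x^{\infty} u^2 e^{-u^2/\theta}\,\mathrm{d}u = \frac{\theta x}{2} e^{-x^2/\theta} + \frac{\sqrt{\pi}\,\theta^{3/2}}{4}\,\mathrm{erfc}(x/\sqrt{\theta})$. The sketch below is our own illustration (the function names are not the paper's); it uses the $\frac{4}{\sqrt{\pi}}\theta^{-3/2}$ normalization of (2), under which the pmf sums to one:

```python
import math

def J(x, theta):
    # J(x, 2, theta) = integral of u^2 exp(-u^2/theta) over [x, inf), eq. (5b), in closed form
    return (theta * x / 2.0) * math.exp(-x * x / theta) \
        + (math.sqrt(math.pi) * theta ** 1.5 / 4.0) * math.erfc(x / math.sqrt(theta))

def Q(x, theta):
    # Q(x, 2, theta) = J(x, 2, theta) - J(x + 1, 2, theta), eq. (5a)
    return J(x, theta) - J(x + 1, theta)

def dmax_pmf(x, theta):
    # p(x) = (4/sqrt(pi)) theta^(-3/2) Q(x, 2, theta), eq. (1)
    return 4.0 / math.sqrt(math.pi) * theta ** -1.5 * Q(x, theta)

def dmax_reliability(t, theta):
    # r(t) = (4/sqrt(pi)) theta^(-3/2) J(t, 2, theta), eq. (2)
    return 4.0 / math.sqrt(math.pi) * theta ** -1.5 * J(t, theta)

print(sum(dmax_pmf(x, 1.0) for x in range(50)))  # ≈ 1
print(dmax_reliability(0, 1.0))                  # ≈ 1 (the support starts at 0)
```

The closed form avoids numerical integration of (5a)–(5d) inside every evaluation of the likelihood (6).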

3. Bayes Estimation

3.1. The Conjugate Analysis Using Inverted Gamma Prior Distribution

The ML method, like other classical approaches, is based only on the empirical information provided by the data. However, when some technical knowledge about the parameters of the distribution is available, a Bayes procedure is an attractive inferential method. The Bayes procedure is based on a posterior density, say $g(\theta_1, \theta_2, m \mid X)$, which is proportional to the product of the likelihood function $L(\theta_1, \theta_2, m \mid X)$ and a joint prior density, say $g(\theta_1, \theta_2, m)$, representing the uncertainty about the parameter values.

We also suppose that some information on $\theta_1$ and $\theta_2$ is available and that this technical knowledge can be given in terms of prior means $\mu_1$, $\mu_2$ and variances $\sigma_1^2$, $\sigma_2^2$, respectively. Suppose that the marginal prior density of $\theta_1$ and of $\theta_2$ is inverted Gamma with respective means $\mu_1$ and $\mu_2$:

$$g\!\left(\theta_i\right) = \frac{a_i^{b_i}}{\Gamma(b_i)}\,\theta_i^{-(b_i+1)} e^{-a_i/\theta_i}, \qquad a_i, b_i > 0;\ \theta_i > 0;\ i = 1, 2, \tag{8}$$

where the parameters $a_i$, $b_i$ are obtained by solving

$$b_i = 2 + \frac{\mu_i^2}{\sigma_i^2}, \qquad a_i = \mu_i\!\left(b_i - 1\right), \qquad i = 1, 2. \tag{9}$$
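Equation (9) follows from inverting the inverted Gamma moments $\mu = a/(b-1)$ and $\sigma^2 = \mu^2/(b-2)$. A small round-trip sketch (our own check, not from the paper):

```python
def inverted_gamma_hyperparams(mu, var):
    # invert mean = a/(b - 1) and variance = mu^2/(b - 2) for (a, b), eq. (9)
    b = 2.0 + mu ** 2 / var
    a = mu * (b - 1.0)
    return a, b

a1, b1 = inverted_gamma_hyperparams(1.0, 1.0)
print(a1, b1)  # 2.0 3.0, the values used in the numerical study of Section 5

# round trip: recover the prior mean and variance from (a, b)
mean = a1 / (b1 - 1.0)
variance = a1 ** 2 / ((b1 - 1.0) ** 2 * (b1 - 2.0))
print(mean, variance)  # 1.0 1.0
```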

Following Calabria and Pulcini [13], we assume the prior information to be correct if the true value of $\theta_1$ ($\theta_2$) is close to the prior mean $\mu_1$ ($\mu_2$) and to be wrong if $\theta_1$ ($\theta_2$) is far from $\mu_1$ ($\mu_2$).

As in the study by Broemeling and Tsurumi [3], we suppose the marginal prior distribution of $m$ to be discrete uniform over the set $\{1, 2, \dots, n-1\}$:

$$g(m) = \frac{1}{n-1}. \tag{10}$$

The joint prior density of $\theta_1$, $\theta_2$, $m$ is

$$g_1\!\left(\theta_1, \theta_2, m\right) = \frac{1}{n-1}\,\frac{a_1^{b_1}}{\Gamma(b_1)}\,\frac{a_2^{b_2}}{\Gamma(b_2)}\,\theta_1^{-(b_1+1)} e^{-a_1/\theta_1}\,\theta_2^{-(b_2+1)} e^{-a_2/\theta_2} = k\,\theta_1^{-(b_1+1)} e^{-a_1/\theta_1}\,\theta_2^{-(b_2+1)} e^{-a_2/\theta_2}, \tag{11}$$

where

$$k = \frac{1}{n-1}\,\frac{a_1^{b_1}}{\Gamma(b_1)}\,\frac{a_2^{b_2}}{\Gamma(b_2)}. \tag{12}$$

The joint posterior density of $\theta_1$, $\theta_2$, $m$ is

$$g_1\!\left(\theta_1, \theta_2, m \mid x\right) = \frac{L\!\left(\theta_1, \theta_2, m \mid x\right) g_1\!\left(\theta_1, \theta_2, m\right)}{h_1\!\left(x\right)} = k_2\,\theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1\!\left(m, \theta_1\right)\,\theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2\!\left(m, \theta_2\right) h_1^{-1}\!\left(x\right), \tag{13}$$

with

$$k_2 = k \left(\frac{4}{\sqrt{\pi}}\right)^{n}, \tag{14}$$

where $G_1(m, \theta_1)$ and $G_2(m, \theta_2)$ are as defined in (7).

Here $h_1(x)$ is the marginal density of $X$:

$$h_1\!\left(x\right) = \sum_{m=1}^{n-1} \int_0^{\infty}\!\!\int_0^{\infty} L\!\left(\theta_1, \theta_2, m \mid x\right) g_1\!\left(\theta_1, \theta_2, m\right) \mathrm{d}\theta_1\, \mathrm{d}\theta_2 = k_2 \sum_{m=1}^{n-1} I_1(m)\, I_2(m), \tag{15}$$

where

$$I_1(m) = \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1}\, G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1, \tag{16}$$

$$I_2(m) = \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2}\, G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2, \tag{17}$$

and $G_1(m, \theta_1)$ and $G_2(m, \theta_2)$ are as defined in (7).

The marginal posterior densities of $\theta_1$ and of $\theta_2$ are obtained by integrating the joint posterior density (13) with respect to $\theta_2$ and with respect to $\theta_1$, respectively, and summing over $m$:

$$g_1\!\left(\theta_1 \mid x\right) = k_2 \sum_{m=1}^{n-1} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1\!\left(m, \theta_1\right) \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2\; h_1^{-1}\!\left(x\right), \tag{18}$$

$$g_1\!\left(\theta_2 \mid x\right) = k_2 \sum_{m=1}^{n-1} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2\!\left(m, \theta_2\right) \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1\; h_1^{-1}\!\left(x\right), \tag{19}$$

where $G_1(m, \theta_1)$ and $G_2(m, \theta_2)$ are as defined in (7), and $h_1(x)$ is as in (15).

The marginal posterior density of the change point $m$, say $g_1(m \mid x)$, is obtained by integrating the joint posterior density (13) with respect to $\theta_1$ and $\theta_2$:

$$g_1\!\left(m \mid x\right) = \int_0^{\infty}\!\!\int_0^{\infty} g_1\!\left(\theta_1, \theta_2, m \mid x\right) \mathrm{d}\theta_1\, \mathrm{d}\theta_2 = \frac{I_1(m)\, I_2(m)}{\sum_{m=1}^{n-1} I_1(m)\, I_2(m)}. \tag{20}$$

The Bayes estimator of a generic parameter (or function thereof) $\alpha$ under the squared error loss (SEL) function

$$L_1(\alpha, d) = (\alpha - d)^2, \tag{21}$$

where $d$ is a decision rule for estimating $\alpha$, is the posterior mean.

The Bayes estimator of $m$ under SEL and the inverted Gamma prior is

$$m_1^{*} = \frac{\sum_{m=1}^{n-1} m\, I_1(m)\, I_2(m)}{\sum_{m=1}^{n-1} I_1(m)\, I_2(m)}, \tag{22}$$

where $I_1(m)$ and $I_2(m)$ are as in (16) and (17).
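The ratio (22) lends itself to direct numerical evaluation: $I_1(m)$ and $I_2(m)$ in (16) and (17) are one-dimensional integrals that can be approximated by quadrature on a truncated grid. The sketch below is our own illustration on a small synthetic data set (the data, grid limits, and function names are assumptions, not the paper's):

```python
import math

def J(x, theta):
    # closed form of the integral J(x, 2, theta) of eq. (5b)
    return (theta * x / 2.0) * math.exp(-x * x / theta) \
        + (math.sqrt(math.pi) * theta ** 1.5 / 4.0) * math.erfc(x / math.sqrt(theta))

def Q(x, theta):
    return J(x, theta) - J(x + 1, theta)

def I_integral(data, power, a, lo=1e-3, hi=80.0, steps=8000):
    # midpoint-rule approximation of the integrals of eqs. (16)-(17):
    # integral over theta of theta^(-power) exp(-a/theta) * product_i Q(x_i, 2, theta)
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        th = lo + (k + 0.5) * h
        g = 1.0
        for x in data:
            g *= Q(x, th)
        total += th ** -power * math.exp(-a / th) * g * h
    return total

# hypothetical sample with an apparent drop in lifetimes after the 5th observation
xs = [2, 3, 2, 4, 3, 0, 1, 0, 1, 0]
n, a1, b1, a2, b2 = len(xs), 2.0, 3.0, 0.5, 0.75

weights = []
for m in range(1, n):
    i1 = I_integral(xs[:m], m + b1 + 1.0, a1)      # I_1(m), eq. (16)
    i2 = I_integral(xs[m:], n - m + b2 + 1.0, a2)  # I_2(m), eq. (17)
    weights.append(i1 * i2)

posterior = [w / sum(weights) for w in weights]            # g_1(m | x), eq. (20)
m_sel = sum(m * p for m, p in zip(range(1, n), posterior))  # m_1*, eq. (22)
print(posterior)
print(m_sel)
```

For longer samples the integrands underflow, so a log-space accumulation would be preferable; the small example keeps the arithmetic simple.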

Other Bayes estimators of $\alpha$, based on the loss functions

$$L_2(\alpha, d) = \left|\alpha - d\right|, \qquad L_3(\alpha, d) = \begin{cases} 0, & \text{if } |\alpha - d| < \epsilon,\ \epsilon > 0, \\ 1, & \text{otherwise}, \end{cases} \tag{23}$$

are the posterior median and the posterior mode, respectively.

3.2. Posterior Distribution Functions Using Noninformative Prior

A noninformative prior is a prior that adds no information to that contained in the empirical data. Thus, a Bayes inference based upon a noninformative prior generally has theoretical interest only, since, from an engineering viewpoint, the Bayes approach is attractive precisely because it allows expert opinion or technical knowledge to be incorporated in the estimation procedure. Let the joint noninformative prior density of $\theta_1$, $\theta_2$, and $m$ be given by

$$g_2\!\left(\theta_1, \theta_2, m\right) = \frac{1}{(n-1)\,\theta_1 \theta_2}. \tag{24}$$

The joint posterior density using the noninformative prior $g_2(\theta_1, \theta_2, m)$, say $g_2(\theta_1, \theta_2, m \mid x)$, is

$$g_2\!\left(\theta_1, \theta_2, m \mid x\right) = k_2\,\theta_1^{-(m+1)} e^{-1/\theta_1} G_1\!\left(m, \theta_1\right)\,\theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2\!\left(m, \theta_2\right) h_2^{-1}\!\left(x\right), \tag{25}$$

where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are as in (7) and (14), and $h_2(x)$ is the marginal density of $x$ under the noninformative prior:

$$h_2\!\left(x\right) = \sum_{m=1}^{n-1} \int_0^{\infty}\!\!\int_0^{\infty} L\!\left(\theta_1, \theta_2, m \mid x\right) g_2\!\left(\theta_1, \theta_2, m\right) \mathrm{d}\theta_1\, \mathrm{d}\theta_2 = k_2 \sum_{m=1}^{n-1} I_3(m)\, I_4(m), \tag{26}$$

where

$$I_3(m) = \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1, \qquad I_4(m) = \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2. \tag{27}$$

The marginal posterior densities of $\theta_1$ and $\theta_2$ under the noninformative prior (25) are obtained as

$$g_2\!\left(\theta_1 \mid x\right) = k_2 \sum_{m=1}^{n-1} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1\!\left(m, \theta_1\right) \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2\; h_2^{-1}\!\left(x\right), \tag{28}$$

$$g_2\!\left(\theta_2 \mid x\right) = k_2 \sum_{m=1}^{n-1} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2\!\left(m, \theta_2\right) \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1\; h_2^{-1}\!\left(x\right), \tag{29}$$

where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are as in (7) and (14), and $h_2(x)$ is as in (26).

The marginal posterior density of the change point $m$, say $g_2(m \mid x)$, is obtained by integrating the joint posterior density (25) with respect to $\theta_1$ and $\theta_2$:

$$g_2\!\left(m \mid x\right) = \int_0^{\infty}\!\!\int_0^{\infty} g_2\!\left(\theta_1, \theta_2, m \mid x\right) \mathrm{d}\theta_1\, \mathrm{d}\theta_2 = \frac{I_3(m)\, I_4(m)}{\sum_{m=1}^{n-1} I_3(m)\, I_4(m)}, \tag{30}$$

where $I_3(m)$ and $I_4(m)$ are as in (27).

The Bayes estimator of $m$ under SEL and the noninformative prior is

$$m_2^{*} = \frac{\sum_{m=1}^{n-1} m\, I_3(m)\, I_4(m)}{\sum_{m=1}^{n-1} I_3(m)\, I_4(m)}. \tag{31}$$

4. Bayes Estimates of Change Point and Other Parameters under Asymmetric Loss Functions

The loss function $L(\alpha, d)$ provides a measure of the financial consequences arising from a wrong decision rule $d$ for estimating an unknown quantity $\alpha$. The choice of the appropriate loss function depends on financial considerations only and is independent of the estimation procedure used. A symmetric loss function is generally inappropriate here since, for example, an overestimation of the reliability function is usually much more serious than an underestimation.

In this section, we derive Bayes estimators of the change point $m$ under different asymmetric loss functions, using both prior specifications explained in Sections 3.1 and 3.2. A useful asymmetric loss, known as the Linex loss function, was introduced by Varian [12]. Under the assumption that the minimal loss occurs at $d = \alpha$, the Linex loss function can be expressed as

$$L_4(\alpha, d) = \exp\!\left[q_1 (d - \alpha)\right] - q_1 (d - \alpha) - 1, \qquad q_1 \neq 0. \tag{32}$$

The sign of the shape parameter $q_1$ reflects the direction of the asymmetry ($q_1 > 0$ if overestimation is more serious than underestimation, and vice versa), and the magnitude of $q_1$ reflects the degree of asymmetry.

The posterior expectation of the Linex loss function is

$$E_{\alpha}\!\left\{L_4(\alpha, d)\right\} = \exp\!\left(q_1 d\right) E_{\alpha}\!\left\{\exp\!\left(-q_1 \alpha\right)\right\} - q_1\!\left(d - E_{\alpha}\{\alpha\}\right) - 1, \tag{33}$$

where $E_{\alpha}\{f(\alpha)\}$ denotes the expectation of $f(\alpha)$ with respect to the posterior density $g(\alpha \mid x)$. The Bayes estimate $\alpha_L^{*}$ is the value of $d$ that minimizes $E_{\alpha}\{L_4(\alpha, d)\}$:

$$\alpha_L^{*} = -\frac{1}{q_1} \ln\!\left[E_{\alpha}\!\left\{\exp\!\left(-q_1 \alpha\right)\right\}\right], \tag{34}$$

provided that $E_{\alpha}\{\exp(-q_1 \alpha)\}$ exists and is finite.

Minimizing the expected loss $E_m\!\left[L_4(m, d)\right]$ and using the posterior distributions (20) and (30), we get the Bayes estimators of $m$ under the Linex loss, respectively, as

$$m_L^{*} = -\frac{1}{q_1} \ln\!\left[\frac{\sum_{m=1}^{n-1} e^{-m q_1}\, I_1(m)\, I_2(m)}{\sum_{m=1}^{n-1} I_1(m)\, I_2(m)}\right], \qquad m_L^{**} = -\frac{1}{q_1} \ln\!\left[\frac{\sum_{m=1}^{n-1} e^{-m q_1}\, I_3(m)\, I_4(m)}{\sum_{m=1}^{n-1} I_3(m)\, I_4(m)}\right], \tag{35}$$

where $I_1(m)$, $I_2(m)$ and $I_3(m)$, $I_4(m)$ are as in (16), (17), and (27).

Minimizing the expected loss $E_{\theta_1}\!\left[L_4(\theta_1, d)\right]$ and using the posterior distributions (18) and (28), we get the Bayes estimators of $\theta_1$ under the Linex loss, respectively, as

$$\theta_{1L}^{*} = -\frac{1}{q_1} \ln\!\left[k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2 \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1 - \theta_1 q_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1\; h_1^{-1}\!\left(x\right)\right],$$

$$\theta_{1L}^{**} = -\frac{1}{q_1} \ln\!\left[k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1 - \theta_1 q_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1 \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2\; h_2^{-1}\!\left(x\right)\right], \tag{36}$$

where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are as in (7) and (14), and $h_1(x)$ and $h_2(x)$ are as in (15) and (26).

Minimizing the expected loss $E_{\theta_2}\!\left[L_4(\theta_2, d)\right]$ and using the posterior distributions (19) and (29), we get the Bayes estimators of $\theta_2$ under the Linex loss, respectively, as

$$\theta_{2L}^{*} = -\frac{1}{q_1} \ln\!\left[k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1 \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2 - \theta_2 q_1} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2\; h_1^{-1}\!\left(x\right)\right],$$

$$\theta_{2L}^{**} = -\frac{1}{q_1} \ln\!\left[k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2 - \theta_2 q_1} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2 \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1\; h_2^{-1}\!\left(x\right)\right], \tag{37}$$

where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are as in (7) and (14), and $h_1(x)$ and $h_2(x)$ are as in (15) and (26).

Another loss function, called the General Entropy (GE) loss function, proposed by Calabria and Pulcini [11], is given by

$$L_5(\alpha, d) = \left(\frac{d}{\alpha}\right)^{q_3} - q_3 \ln\!\left(\frac{d}{\alpha}\right) - 1. \tag{38}$$

The Bayes estimate $\alpha_E^{*}$ is the value of $d$ that minimizes $E_{\alpha}\!\left[L_5(\alpha, d)\right]$:

$$\alpha_E^{*} = \left[E_{\alpha}\!\left(\alpha^{-q_3}\right)\right]^{-1/q_3}, \tag{39}$$

provided that $E_{\alpha}(\alpha^{-q_3})$ exists and is finite.

Minimizing the expectation $E_m\!\left[L_5(m, d)\right]$ and using the posterior distributions (20) and (30), we get the Bayes estimators $m_E^{*}$, $m_E^{**}$ of $m$, respectively, as

$$m_E^{*} = \left[E_m\!\left(m^{-q_3}\right)\right]^{-1/q_3} = \left[\frac{\sum_{m=1}^{n-1} m^{-q_3}\, I_1(m)\, I_2(m)}{\sum_{m=1}^{n-1} I_1(m)\, I_2(m)}\right]^{-1/q_3}, \qquad m_E^{**} = \left[\frac{\sum_{m=1}^{n-1} m^{-q_3}\, I_3(m)\, I_4(m)}{\sum_{m=1}^{n-1} I_3(m)\, I_4(m)}\right]^{-1/q_3}, \tag{40}$$

where $I_1(m)$, $I_2(m)$ and $I_3(m)$, $I_4(m)$ are as in (16), (17), and (27).
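Once the normalized posterior weights $g(m \mid x)$ are in hand, the change-point estimators (22), (35), and (40) are simple transforms of weighted sums. A sketch with illustrative (made-up) weights:

```python
import math

def sel_estimate(ms, w):
    # posterior mean, cf. eq. (22)
    return sum(m * p for m, p in zip(ms, w))

def linex_estimate(ms, w, q1):
    # m_L* = -(1/q1) ln sum_m e^(-m q1) g(m|x), cf. eq. (35)
    return -math.log(sum(math.exp(-m * q1) * p for m, p in zip(ms, w))) / q1

def entropy_estimate(ms, w, q3):
    # m_E* = [sum_m m^(-q3) g(m|x)]^(-1/q3), cf. eq. (40)
    return sum(m ** -q3 * p for m, p in zip(ms, w)) ** (-1.0 / q3)

ms = list(range(1, 10))
w = [0.01, 0.02, 0.05, 0.15, 0.50, 0.15, 0.05, 0.05, 0.02]  # illustrative weights, sum to 1
print(sel_estimate(ms, w))          # posterior mean
print(linex_estimate(ms, w, 0.1))   # below the mean for q1 > 0
print(entropy_estimate(ms, w, -1))  # q3 = -1 recovers the posterior mean (Remark 1)
```

By Jensen's inequality the Linex estimate sits below the posterior mean for $q_1 > 0$ and above it for $q_1 < 0$, which matches the asymmetry discussion above.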

Minimizing the expected loss $E_{\theta_1}\!\left[L_5(\theta_1, d)\right]$ and using the posterior distributions (18) and (28), we get the Bayes estimates of $\theta_1$ under the General Entropy loss with the informative and the noninformative prior, respectively, as

$$\theta_{1E}^{*} = \left[k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_2^{-(n-m+b_2+1)} e^{-a_2/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2 \int_0^{\infty} \theta_1^{-(m+b_1+q_3+1)} e^{-a_1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1\; h_1^{-1}\!\left(x\right)\right]^{-1/q_3},$$

$$\theta_{1E}^{**} = \left[k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_1^{-(m+q_3+1)} e^{-1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1 \int_0^{\infty} \theta_2^{-(n-m+1)} e^{-1/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2\; h_2^{-1}\!\left(x\right)\right]^{-1/q_3}, \tag{41}$$

where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are as in (7) and (14), and $h_1(x)$ and $h_2(x)$ are as in (15) and (26).

Minimizing the expected loss $E_{\theta_2}\!\left[L_5(\theta_2, d)\right]$ and using the posterior distributions (19) and (29), we get the Bayes estimates of $\theta_2$ under the General Entropy loss as

$$\theta_{2E}^{*} = \left[k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_1^{-(m+b_1+1)} e^{-a_1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1 \int_0^{\infty} \theta_2^{-(n-m+b_2+q_3+1)} e^{-a_2/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2\; h_1^{-1}\!\left(x\right)\right]^{-1/q_3}, \tag{42}$$

$$\theta_{2E}^{**} = \left[k_2 \sum_{m=1}^{n-1} \int_0^{\infty} \theta_2^{-(n-m+q_3+1)} e^{-1/\theta_2} G_2\!\left(m, \theta_2\right) \mathrm{d}\theta_2 \int_0^{\infty} \theta_1^{-(m+1)} e^{-1/\theta_1} G_1\!\left(m, \theta_1\right) \mathrm{d}\theta_1\; h_2^{-1}\!\left(x\right)\right]^{-1/q_3}, \tag{43}$$

where $G_1(m, \theta_1)$, $G_2(m, \theta_2)$, and $k_2$ are as in (7) and (14), and $h_1(x)$ and $h_2(x)$ are as in (15) and (26).
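As a sanity check on GE estimates of the form (39): if the data factor $G_2(m, \theta_2)$ is ignored, the $\theta_2$ kernel in (42) is a plain inverted Gamma density, for which $E[\theta^{-q}] = \Gamma(b+q)/\big(\Gamma(b)\, a^{q}\big)$ in closed form. The sketch below (our own check, not from the paper) compares that closed form against a direct quadrature of the same expectation:

```python
import math

def ge_estimate_closed(a, b, q3):
    # [E theta^(-q3)]^(-1/q3) for theta ~ inverted Gamma(a, b),
    # using E[theta^(-q)] = Gamma(b + q) / (Gamma(b) * a^q)
    return (math.gamma(b + q3) / (math.gamma(b) * a ** q3)) ** (-1.0 / q3)

def ge_estimate_numeric(a, b, q3, hi=500.0, steps=500_000):
    # midpoint quadrature of E[theta^(-q3)] against the inverted Gamma density
    c = a ** b / math.gamma(b)
    h = hi / steps
    acc = 0.0
    for k in range(steps):
        th = (k + 0.5) * h
        acc += th ** -q3 * c * th ** -(b + 1.0) * math.exp(-a / th) * h
    return acc ** (-1.0 / q3)

print(ge_estimate_closed(2.0, 3.0, 0.9))
print(ge_estimate_numeric(2.0, 3.0, 0.9))  # should agree to several decimals
```

For $q_3 = -1$ the closed form collapses to the inverted Gamma mean $a/(b-1)$, in line with Remark 1 below.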

Remark 1. Putting $q_3 = -1$ in (40), (41), (42), and (43), we get the Bayes estimators of $m$, $\theta_1$, and $\theta_2$ as posterior means, that is, the estimates under the squared error loss.

Note that, for $q_3 = -1$, the Bayes estimate under the GE loss function coincides with that under the squared error loss function.

5. Numerical Study

We generated 30 random observations from the dMax distribution with a change point, as discussed in Section 2. The first 15 observations come from dMax with $\theta_1 = 1.0$, for which the reliability at $t = 5.0$ is $R_1(t) = 0.0460$, and the next 15 observations come from dMax with $\theta_2 = 0.5$, for which $R_2(t) = 0.0011$. Here $\theta_1$ and $\theta_2$ were themselves random observations from inverted Gamma prior distributions with prior means $\mu_1 = 1.0$, $\mu_2 = 0.5$ and variances $\sigma_1 = 1.0$ and $\sigma_2 = 1.0$, respectively, resulting in $a_1 = 2.0$, $b_1 = 3.0$ and $a_2 = 0.5$, $b_2 = 0.75$. These observations are given in the first row of Table 1.

tab1
Table 1: Generated samples from dMax distribution.
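Such samples can be drawn by exploiting the fact that the dMax pmf (1) is the probability that a continuous Maxwell variate falls in $[x, x+1)$, so a dMax observation is the integer part of a continuous Maxwell draw; since $X^2/\theta \sim \mathrm{Gamma}(3/2, 1)$ for a Maxwell variate $X$, a sketch is straightforward (our own construction; the paper does not state its sampling algorithm):

```python
import math
import random

def dmax_sample(theta, size, rng=random):
    # X ~ continuous Maxwell(theta) iff X^2/theta ~ Gamma(3/2, 1);
    # the dMax variate is the integer part of X
    return [int(math.sqrt(theta * rng.gammavariate(1.5, 1.0))) for _ in range(size)]

random.seed(42)
# a change point at m = 15: first segment from dMax(1.0), second from dMax(0.5)
xs = dmax_sample(1.0, 15) + dmax_sample(0.5, 15)
print(xs)
```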

We also compute the Bayes estimators $m_L^{*}$, $m_E^{*}$ of $m$, and $\theta_{1E}^{*}$, $\theta_{1L}^{*}$ and $\theta_{2E}^{*}$, $\theta_{2L}^{*}$ of $\theta_1$ and $\theta_2$, using the results given in Section 4 for the data given in Table 1 and for different values of the shape parameters $q_1$ and $q_3$. The results are shown in Tables 3 and 4.

We also generated 6 random samples from the discrete Maxwell distribution with a change point, as discussed in Section 2, with $n = 30, 50, 50$; $m = 15, 25, 35$; $\theta_1 = 1.0, 5.0$; and $\theta_2 = 0.5$ and $2.0$. As explained in Section 3.1, $\theta_1$ and $\theta_2$ were themselves random observations from inverted Gamma prior distributions with prior means $\mu_1$ and $\mu_2$, respectively. These observations are given in Table 1. We calculated the posterior means of $m$, $\theta_1$, and $\theta_2$ under both priors for all samples; the results are shown in Table 2.

tab2
Table 2: Bayes estimates of $m$, $\theta_1$, and $\theta_2$ under SEL.
tab3
Table 3: The Bayes estimates using the Linex loss function ($n = 30$, $m = 15$, $\theta_1 = 1.0$, $\theta_2 = 0.5$).
tab4
Table 4: The Bayes estimates using the General Entropy loss function ($n = 30$, $m = 15$, $\theta_1 = 1.0$, $\theta_2 = 0.5$).

Table 3 shows that for small values of $|q_1|$ ($q_1 = 0.09, 0.2, 0.1$) the Linex loss function is almost symmetric and nearly quadratic, and the Bayes estimates under such a loss are not far from the posterior mean. Table 3 also shows that, for $q_1 = 1.5, 1.2$, the Bayes estimates are smaller than the actual value $m = 15$.

For $q_1 = q_3 = -1, -2$, the Bayes estimates are considerably larger than the actual value $m = 15$. It can be seen from Tables 3 and 4 that a negative sign of the shape parameter of the loss functions makes underestimation more serious than overestimation. Thus, the problem of underestimation can be addressed by taking negative values of the shape parameters of the Linex and General Entropy loss functions.

Table 4 shows that, for small values of $|q_3|$ ($q_3 = 0.09, 0.2, 0.1$), the Bayes estimates under the General Entropy loss are not far from the posterior mean. Table 4 also shows that, for $q_3 = 1.5, 1.2$, the Bayes estimates are smaller than the actual value $m = 15$.

It can also be seen from Tables 3 and 4 that a positive sign of the shape parameter of the loss functions makes overestimation more serious than underestimation. Thus, the problem of overestimation can be addressed by taking positive and large values of the shape parameters of the Linex and General Entropy loss functions.

5.1. Sensitivity of Bayes Estimates

In this section, we study the sensitivity of the Bayes estimators obtained in Section 3 with respect to changes in the priors of the parameters. The means $\mu_1$ and $\mu_2$ of the inverted Gamma priors on $\theta_1$ and $\theta_2$ have been used as prior information in computing the parameters $a_1$, $a_2$, $b_1$, and $b_2$ of the priors. We have computed the posterior mean $m^{*}$ for the data given in Table 1, considering different sets of values of ($\mu_1$, $\mu_2$). Following Calabria and Pulcini [13], we again regard the prior information as correct if the true value of $\theta_1$ ($\theta_2$) is close to the prior mean $\mu_1$ ($\mu_2$) and as wrong if $\theta_1$ ($\theta_2$) is far from $\mu_1$ ($\mu_2$). We observed that the posterior mean $m^{*}$ appears to be robust with respect to a correct choice of the prior density of $\theta_1$ ($\theta_2$) combined with a wrong choice of the prior density of $\theta_2$ ($\theta_1$).

This can be seen from Tables 5, 6, and 7.

tab5
Table 5: Bayes estimate of $m$ for Sample 1.
tab6
Table 6: Bayes estimate of $m$ for Sample 5.
tab7
Table 7: Bayes estimate of $m$ for Sample 6.

Table 5 shows that when the prior mean $\mu_1 = 1$ equals the actual value of $\theta_1$ while $\mu_2 = 0.3$ or $0.7$ (far from the true value $\theta_2 = 0.5$), that is, under a correct choice of the prior of $\theta_1$ and a wrong choice of the prior of $\theta_2$, the posterior mean remains the same, namely 15, and thus estimates $m$ correctly. Hence, the posterior mean is not sensitive to a wrong choice of the prior density of $\theta_2$ ($\theta_1$) as long as the prior of $\theta_1$ ($\theta_2$) is chosen correctly.

6. Simulation Study

In Section 5, we obtained Bayes estimates of $m$ on the basis of the generated data given in Table 1 for given values of the parameters. To justify the results, we generated 10,000 different random samples with $m = 10$; $n = 30, 50$; $\theta_1 = 1$; $\theta_2 = 0.5$ and obtained the frequency distributions of the posterior mean and of $m_L^{*}$, $m_E^{*}$ of $m$ under the correct prior specification. The results are shown in Table 8. The shape parameters of the General Entropy loss and the Linex loss used in the simulation study for the change point were taken as 0.1. We also simulated several dMax samples with $m = 15, 25, 35$; $n = 30, 50$; $\theta_1 = 0.15, 0.11, 0.10$; and $\theta_2 = 0.55, 0.45, 0.35$. For each combination of $m$, $n$, $\theta_1$, and $\theta_2$, 1000 pseudorandom samples were simulated, and the Bayes estimators of the change point $m$ with $q_1 = q_3 = 0.9$ were computed for the same values of $a_1$, $a_2$ and for different prior means $\mu_1$ and $\mu_2$. We observed that the posterior mean $m^{*}$ appears to be robust with respect to a correct choice of the prior density of $\theta_1$ ($\theta_2$) and a wrong choice of the prior density of $\theta_2$ ($\theta_1$), for each combination of prior means $\mu_1$ and $\mu_2$.

tab8
Table 8: Frequency distributions of the Bayes estimates of the change point.

The value of the Bayes estimator of the change point $m$ based on the Linex and General Entropy losses with $q_1 = q_3 = 0.9$ is 10.

Table 8 leads to the conclusion that the posterior mean performs better than $m_L^{*}$ and $m_E^{*}$ as an estimator of the change point: 78% of the posterior-mean values are close to the actual value of the change point under the correct choice of prior, compared with 65% of the $m_L^{*}$ values and 66% of the $m_E^{*}$ values.

Acknowledgment

The authors would like to thank the editor and the referee for their valuable suggestions which improved the earlier version of the paper.

References

  1. R. K. Tyagi and S. K. Bhattacharya, "Bayes estimation of the Maxwell's velocity distribution function," Statistica, vol. 29, no. 4, pp. 563–567, 1989.
  2. A. Chaturvedi and U. Rani, "Classical and Bayesian reliability estimation of the generalized Maxwell failure distribution," Journal of Statistical Research, vol. 32, pp. 113–120, 1998.
  3. L. D. Broemeling and H. Tsurumi, Econometrics and Structural Change, Marcel Dekker, New York, NY, USA, 1987.
  4. P. N. Jani and M. Pandya, "Bayes estimation of shift point in left truncated exponential sequence," Communications in Statistics, vol. 28, no. 11, pp. 2623–2639, 1999.
  5. N. Ebrahimi and S. K. Ghosh, "Bayesian and frequentist methods in change-point problems," in Handbook of Statistics, N. Balakrishnan and C. R. Rao, Eds., vol. 20, pp. 777–787, 2001.
  6. S. Zacks, "Survey of classical and Bayesian approaches to the change point problem: fixed sample and sequential procedures for testing and estimation," in Recent Advances in Statistics: Papers in Honor of Herman Chernoff, pp. 245–269, Academic Press, New York, NY, USA, 1983.
  7. M. Pandya and P. N. Jani, "Bayesian estimation of change point in inverse Weibull sequence," Communications in Statistics, vol. 35, no. 12, pp. 2223–2237, 2006.
  8. M. Pandya and S. Bhatt, "Bayesian estimation of shift point in Weibull distribution," Journal of the Indian Statistical Association, vol. 45, no. 1, pp. 67–80, 2007.
  9. M. Pandya and P. Jadav, "Bayesian estimation of change point in inverse Weibull distribution," IAPQR Transactions, vol. 33, no. 1, pp. 1–23, 2008.
  10. M. Pandya and P. Jadav, "Bayesian estimation of change point in mixture of left truncated exponential and degenerate distribution," Communications in Statistics, vol. 39, no. 15, pp. 2742–2742, 2010.
  11. R. Calabria and G. Pulcini, "Bayes credibility intervals for the left-truncated exponential distribution," Microelectronics Reliability, vol. 34, no. 12, pp. 1897–1907, 1994.
  12. H. R. Varian, "A Bayesian approach to real estate assessment," in Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage, S. E. Fienberg and A. Zellner, Eds., pp. 195–208, North Holland, Amsterdam, The Netherlands, 1975.
  13. R. Calabria and G. Pulcini, "Point estimation under asymmetric loss functions for left-truncated exponential samples," Communications in Statistics, vol. 25, no. 3, pp. 585–600, 1996.