Abstract

Consider the regression model Y_i = β1 X_i + ε_i, where the ε_i are i.i.d. N(0, σ²) random errors with variance σ² > 0. Suppose it is later found that the system changed at some unknown time point m, reflected in the sequence after X_m by a change of slope from β1 to a new regression parameter β2. The problem is to infer when and where this change began; this is called the change point inference problem. Estimators of m, β1, and β2 are derived under asymmetric loss functions, namely the Linex and General Entropy loss functions. The effects of correct and wrong prior information on the Bayes estimates are also studied.

1. Introduction

Regression analysis is an important statistical technique for analyzing data in the social, medical, and engineering sciences. Quite often, in practice, the regression coefficients are assumed constant. In many real-life problems, however, theoretical or empirical considerations suggest models in which one or more parameters change occasionally. The main parameter of interest in such regression analyses is the shift point parameter, which indexes when or where the unknown change occurred.

A variety of problems, such as switching straight lines [1], shifts of level or slope in linear time series models [2], detection of ovulation time in women [3], and many others, have been studied during the last two decades. Holbert [4], while reviewing Bayesian developments in structural change from 1968 onward, gives a variety of interesting examples from economics and biology. The monograph by Broemeling and Tsurumi [5] provides a complete study of structural change in the linear model from the Bayesian viewpoint.

Bayesian inference of the shift point parameter assumes availability of a prior distribution for the changing model parameters. Bansal and Chakravarty [6] proposed studying the effect of an ESD prior for the changed slope parameter of the two-phase linear regression (TPLR) model on the Bayes estimates of the shift point, and also on the posterior odds ratio (POR) used to detect a change in the simple regression model.

In this paper, we study a TPLR model. In Section 2, we present the TPLR change point model. In Sections 3.1 and 3.2, we obtain the posterior densities of m with σ² unknown, and of β1, β2, and m with σ² known, respectively. We derive Bayes estimators of β1, β2, and m under symmetric loss functions in Section 4 and under asymmetric loss functions in Section 5. Section 6 presents a numerical study illustrating these techniques: we generate observations from the proposed model and compute the Bayes estimates of m and of the other parameters. In Section 7, we study the sensitivity of the Bayes estimators of m when the prior specifications deviate from the true values, and Section 8 reports a simulation study. Section 9 concludes the paper.

2. Two-Phase Linear Regression Model

The TPLR model is one of many models that exhibit structural change. Holbert [4] used a Bayesian approach, based on the TPLR model, to reexamine the McGee and Kotz [7] data on stock market sales volume and reached the same conclusion that the abolition of split-ups did hurt the regional exchanges.

The TPLR model is defined as

    y_t = α1 + β1 x_t + ε_t,  t = 1, 2, …, m,
    y_t = α2 + β2 x_t + ε_t,  t = m + 1, …, n,    (1)

where the ε_t are i.i.d. N(0, σ²) random errors with variance σ² > 0, x_t is a nonstochastic explanatory variable, and the regression parameters are (α1, β1) and (α2, β2). The shift point m is such that if m = n there is no shift, while for m = 1, 2, …, n − 1 exactly one shift has occurred.
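As an illustration, the data-generating process in (1) can be sketched in code (a minimal simulation; the function name and the choice x_t = t for the nonstochastic regressor are ours, not the paper's):

```python
import numpy as np

def simulate_tplr(alpha1, beta1, alpha2, beta2, m, n, sigma2, rng=None):
    """Draw one sample path from the TPLR model (1):
    y_t = alpha1 + beta1*x_t + eps_t for t <= m,
    y_t = alpha2 + beta2*x_t + eps_t for t > m, with eps_t ~ N(0, sigma2)."""
    rng = np.random.default_rng(rng)
    x = np.arange(1, n + 1, dtype=float)      # illustrative nonstochastic regressor
    eps = rng.normal(0.0, np.sqrt(sigma2), n)
    y = np.where(np.arange(1, n + 1) <= m,
                 alpha1 + beta1 * x,
                 alpha2 + beta2 * x) + eps
    return x, y

# One sample of size n = 15 with a slope change from 3 to 3.5 after t = 4
x, y = simulate_tplr(0.0, 3.0, 0.0, 3.5, m=4, n=15, sigma2=1.0, rng=0)
```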

3. Bayes Estimation

The ML method, as well as other classical approaches, is based only on the empirical information provided by the data. However, when some technical knowledge about the parameters of the distribution is available, a Bayes procedure is an attractive inferential method. The Bayes procedure is based on the posterior density, say g(β1, β2, σ², m | z), which is proportional to the product of the likelihood function L(β1, β2, σ², m | z) and the joint prior density, say g(β1, β2, σ², m), representing the uncertainty about the parameter values:

    g(β1, β2, σ², m | z) = L(β1, β2, σ², m | z) g(β1, β2, σ², m) / [ Σ_{m=1}^{n−1} ∫_{β1} ∫_{β2} ∫_{σ²} L(β1, β2, σ², m | z) g(β1, β2, σ², m) dβ1 dβ2 dσ² ].    (2)

The likelihood function of β1, β2, σ², and m, given the sample information z_t = (x_t, y_t), t = 1, 2, …, m, m + 1, …, n, is

    L(β1, β2, σ², m | z) = (2π)^{−n/2} σ^{−n} exp{ −(1/2) β1² S_m1/σ² + β1 S_m3/σ² } × exp{ −(1/2) (S_n1 − S_m1) β2²/σ² + β2 S_m4/σ² } × e^{−A/2σ²},    (3)

where

    S_k1 = Σ_{i=1}^{k} x_i²,  S_k2 = Σ_{i=1}^{k} x_i y_i,
    A = Σ_{i=1}^{n} y_i² + nα² − 2α Σ_{i=1}^{n} y_i,
    S_m3 = S_m2 − 2α S_m1,  S_m4 = (S_n2 − S_m2) − 2α (S_n1 − S_m1).    (4)
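In code, the statistics in (4) can be accumulated in one pass (a sketch; the sign conventions for S_m3 and S_m4, partly lost in reproduction, are our reconstruction and should be checked against the completed squares in (15)-(16)):

```python
import numpy as np

def suff_stats(x, y, m, alpha=0.0):
    """Sufficient statistics of (4) for a given shift point m (1-based):
    S_k1 = sum_{i<=k} x_i^2, S_k2 = sum_{i<=k} x_i*y_i,
    A = sum y_i^2 + n*alpha^2 - 2*alpha*sum y_i, plus S_m3 and S_m4."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    S_k1 = np.cumsum(x ** 2)          # S_11, ..., S_n1
    S_k2 = np.cumsum(x * y)           # S_12, ..., S_n2
    A = np.sum(y ** 2) + n * alpha ** 2 - 2 * alpha * np.sum(y)
    S_m1, S_m2 = S_k1[m - 1], S_k2[m - 1]
    S_n1, S_n2 = S_k1[-1], S_k2[-1]
    S_m3 = S_m2 - 2 * alpha * S_m1                        # reconstructed sign
    S_m4 = (S_n2 - S_m2) - 2 * alpha * (S_n1 - S_m1)      # reconstructed sign
    return dict(S_m1=S_m1, S_m2=S_m2, S_n1=S_n1, S_n2=S_n2,
                S_m3=S_m3, S_m4=S_m4, A=A, n=n)
```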

3.1. Using Gamma Prior on 1/𝜎2 and Conditional Informative Priors on 𝛽1, 𝛽2 with 𝜎2 Unknown

We consider the TPLR model (1) with unknown σ². As in Broemeling and Tsurumi [5], we suppose that the shift point m is a priori uniformly distributed over the set {1, 2, …, n − 1} and is independent of β1 and β2. We also suppose that some information on β1 and β2 is available that can be expressed in terms of conditional prior probability densities on β1 and β2.

We take conditional prior densities on β1 and β2 given σ², each N(0, σ²):

    g1(β1 | σ²) = (1/√(2πσ²)) exp{ −β1²/(2σ²) },  −∞ < β1 < ∞,
    g1(β2 | σ²) = (1/√(2πσ²)) exp{ −β2²/(2σ²) },  −∞ < β2 < ∞.    (5)

We also suppose that some information on 1/σ² is available and that this technical knowledge can be given in terms of a prior mean μ and a coefficient of variation ν. We take the marginal prior distribution of 1/σ² to be a gamma (c, d) distribution with mean μ:

    g1(1/σ²) = (c^d/Γ(d)) (1/σ²)^{d−1} e^{−c/σ²},  σ² > 0, c, d > 0,    (6)

where Γ(d) is the gamma function explained in (8).

An integral representation of Γ(z) is as follows, for Re z > 0, Re x > 0 (Gradshteyn and Ryzhik [8, page 934]):

    Γ(z) = x^z ∫_0^∞ e^{−xt} t^{z−1} dt.    (7)

The gamma function (Euler's integral of the second kind) Γ(z), for Re z > 0 (Gradshteyn and Ryzhik [8, page 933]), is defined as

    Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt.    (8)

If the prior information is given in terms of the prior mean μ and the coefficient of variation ν, then the parameters c and d can be obtained by solving

    d = 1/ν²,  c = d/μ.    (9)
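Solving for the hyperparameters is immediate in code. The relations d = 1/ν² and c = d/μ follow from the Gamma(c, d) prior having mean d/c and coefficient of variation 1/√d; the sketch below (with ν written as cv) reproduces the values c = d ≈ 0.5 used with μ = 1 and ν = 1.4 in Section 6:

```python
def gamma_hyperparams(mu, cv):
    """Solve (9) for the Gamma(c, d) prior on 1/sigma^2, given its prior
    mean mu and coefficient of variation cv: d = 1/cv^2, c = d/mu."""
    d = 1.0 / cv ** 2
    c = d / mu
    return c, d

c, d = gamma_hyperparams(mu=1.0, cv=1.4)   # approximately c = d = 0.51
```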

Hence, the joint prior pdf of β1, β2, σ², and m, say g1(β1, β2, σ², m), is

    g1(β1, β2, σ², m) = k1 e^{−(β1² + β2²)/2σ²} (1/σ²)^d e^{−c/σ²},    (10)

where

    k1 = c^d / [2π Γ(d) (n − 1)].    (11)

The joint posterior density of β1, β2, σ², and m, say g1(β1, β2, σ², m | z), is

    g1(β1, β2, σ², m | z) = L(β1, β2, σ², m | z) g1(β1, β2, σ², m) / h1(z)
        = (k2/h1(z)) exp{ −(1/2)(β1²/σ²)(S_m1 + 1) + β1 S_m3/σ² } × exp{ −(1/2)(β2²/σ²)(S_n1 − S_m1 + 1) + β2 S_m4/σ² } × e^{−(A/2 + c)/σ²} (1/σ²)^{(n/2)+d},    (12)

where

    k2 = k1 / (2π)^{n/2},    (13)

h1(z) is the marginal density of z given in (14), and S_m3, S_m4, S_m1, S_m2, and A are as given in (4).

Here h1(z) is the marginal density of z, given by

    h1(z) = Σ_{m=1}^{n−1} ∫∫∫ L(β1, β2, σ², m | z) g1(β1, β2, σ², m) dβ1 dβ2 dσ² = k2 Σ_{m=1}^{n−1} T1(m),    (14)

where k2 is as given in (13) and

    T1(m) = Γ((n/2) + d − 1) / { [ (A/2) + c − μ_m1 − μ_m2 ]^{(n/2)+d−1} √[(S_m1 + 1)(S_n1 − S_m1 + 1)] },    (15)

with

    μ_m1 = S_m3² / [2(S_m1 + 1)],  μ_m2 = S_m4² / [2(S_n1 − S_m1 + 1)];    (16)

S_m3, S_m4, S_m1, S_m2, S_n1, and A are as given in (4).

Γ((n/2) + d − 1) is the gamma function as explained in (8).

The marginal posterior density of the change point m, say g1(m | z), is

    g1(m | z) = T1(m) / Σ_{m=1}^{n−1} T1(m),    (17)

where T1(m) is as given in (15).
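A numerically stable sketch of the posterior (17) can be written via the logarithm of T1(m) from (15)-(16). This is our reconstruction: the sign conventions for S_m3 and S_m4 with a nonzero common intercept α are assumptions, and the function name is ours:

```python
import math
import numpy as np

def posterior_m_unknown_sigma(x, y, c, d, alpha=0.0):
    """Marginal posterior g1(m|z) of (17), proportional to T1(m) in (15),
    computed in log space to avoid overflow in the power and gamma terms."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    S1, S2 = np.cumsum(x ** 2), np.cumsum(x * y)
    A = np.sum(y ** 2) + n * alpha ** 2 - 2 * alpha * np.sum(y)
    logT = np.empty(n - 1)
    for m in range(1, n):
        Sm1, Sm2 = S1[m - 1], S2[m - 1]
        Sm3 = Sm2 - 2 * alpha * Sm1
        Sm4 = (S2[-1] - Sm2) - 2 * alpha * (S1[-1] - Sm1)
        mu1 = Sm3 ** 2 / (2 * (Sm1 + 1))
        mu2 = Sm4 ** 2 / (2 * (S1[-1] - Sm1 + 1))
        rem = A / 2 + c - mu1 - mu2            # residual term, must stay positive
        logT[m - 1] = (math.lgamma(n / 2 + d - 1)
                       - (n / 2 + d - 1) * math.log(rem)
                       - 0.5 * math.log((Sm1 + 1) * (S1[-1] - Sm1 + 1)))
    w = np.exp(logT - logT.max())
    return w / w.sum()                         # g1(m|z) for m = 1, ..., n-1
```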

3.2. Using Conditional Informative Priors on 𝛽1,𝛽2 with 𝜎2 Known

We consider the TPLR model (1) with known σ². We assume the same prior specifications for β1, β2, and m as explained in Section 3.1.

Hence, the joint prior pdf of β1, β2, and m, say g2(β1, β2, m), is

    g2(β1, β2, m) = k3 e^{−(β1² + β2²)/2σ²},    (18)

where

    k3 = 1 / [2πσ²(n − 1)].    (19)

The joint posterior density of β1, β2, and m, say g2(β1, β2, m | z), is

    g2(β1, β2, m | z) = (k4/h2(z)) exp{ −(1/2) β1²(S_m1 + 1)/σ² + β1 S_m3/σ² } × exp{ −(1/2) β2²(S_n1 − S_m1 + 1)/σ² + β2 S_m4/σ² },    (20)

where

    k4 = k3 (2π)^{−n/2} σ^{−n} e^{−A/2σ²},    (21)

h2(z) is the marginal density of z given in (22), and S_m3, S_m4, S_m1, S_m2, and A are as given in (4).

Here h2(z) is the marginal density of z, given by

    h2(z) = k4 Σ_{m=1}^{n−1} [ ∫ exp{ −(1/2) β1²(S_m1 + 1)/σ² + β1 S_m3/σ² } dβ1 × ∫ exp{ −(1/2) β2²(S_n1 − S_m1 + 1)/σ² + β2 S_m4/σ² } dβ2 ].    (22)

The integrals are

    G_1m = ∫ exp{ −(1/2) β2²(S_n1 − S_m1 + 1)/σ² + β2 S_m4/σ² } dβ2 = √(2π) e^{S_m4²/[2σ²(S_n1 − S_m1 + 1)]} / √[(S_n1 − S_m1 + 1)/σ²],    (23)

    G_2m = ∫ exp{ −(1/2) β1²(S_m1 + 1)/σ² + β1 S_m3/σ² } dβ1 = √(2π) e^{S_m3²/[2σ²(S_m1 + 1)]} / √[(S_m1 + 1)/σ²].    (24)

Using the results (23) and (24) in (22), it reduces to

    h2(z) = k4 Σ_{m=1}^{n−1} T2(m),  T2(m) = G_1m G_2m,    (25)

where k4 is as given in (21), and G_1m and G_2m are as given in (23) and (24).

S_m3, S_m4, S_m1, and S_m2 are as given in (4).

The marginal posterior densities of the change point m, of β1, and of β2 are

    g2(m | z) = T2(m) / Σ_{m=1}^{n−1} T2(m),    (26)

    g2(β1 | z) = (k4/h2(z)) Σ_{m=1}^{n−1} e^{ −(1/2) β1²(S_m1 + 1)/σ² + β1 S_m3/σ² } G_1m,    (27)

    g2(β2 | z) = (k4/h2(z)) Σ_{m=1}^{n−1} e^{ −(1/2) β2²(S_n1 − S_m1 + 1)/σ² + β2 S_m4/σ² } G_2m,    (28)

where k4, G_1m, G_2m, and h2(z) are as given in (21), (23), (24), and (25), respectively.
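The known-σ² posterior (26) admits the same kind of log-space sketch, using the exponents of G_1m and G_2m from (23)-(24) directly (again with our reconstructed sign conventions and an assumed function name):

```python
import numpy as np

def posterior_m_known_sigma(x, y, sigma2, alpha=0.0):
    """Marginal posterior g2(m|z) of (26), proportional to
    T2(m) = G_1m * G_2m from (23)-(25), computed in log space."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    S1, S2 = np.cumsum(x ** 2), np.cumsum(x * y)
    logT = np.empty(n - 1)
    for m in range(1, n):
        Sm1, Sm2 = S1[m - 1], S2[m - 1]
        Sm3 = Sm2 - 2 * alpha * Sm1
        Sm4 = (S2[-1] - Sm2) - 2 * alpha * (S1[-1] - Sm1)
        a1, a2 = Sm1 + 1, S1[-1] - Sm1 + 1
        logT[m - 1] = (Sm3 ** 2 / (2 * sigma2 * a1)       # exponent of G_2m
                       + Sm4 ** 2 / (2 * sigma2 * a2)     # exponent of G_1m
                       - 0.5 * np.log(a1 * a2))
    w = np.exp(logT - logT.max())
    return w / w.sum()                                    # g2(m|z), m = 1, ..., n-1
```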

S_m3, S_m4, S_m1, and S_m2 are as given in (4).

4. Bayes Estimates under Symmetric Loss Function

The Bayes estimator of a generic parameter (or function thereof) α based on the squared error loss (SEL) function

    L1(α, d) = (α − d)²,    (29)

where d is a decision rule to estimate α, is the posterior mean. The SEL function for an integer parameter is accordingly

    L1(m, v) = (m − v)²,  m, v = 0, 1, 2, ….    (30)

Hence, the Bayes estimate of an integer-valued parameter under the SEL function L1(m, v) is no longer the posterior mean itself and can be obtained by numerically minimizing the corresponding posterior loss. Generally, such a Bayes estimate equals the integer nearest to the posterior mean, so we take the integer nearest to the posterior mean as the Bayes estimate.

The Bayes estimators of m under SEL are

    m* = Σ_{m=1}^{n−1} m T1(m) / Σ_{m=1}^{n−1} T1(m),    (31)
    m** = Σ_{m=1}^{n−1} m T2(m) / Σ_{m=1}^{n−1} T2(m),    (32)

where T1(m) and T2(m) are as given in (15) and (25).

The Bayes estimators of α based on the loss functions

    L2(α, d) = |α − d|,
    L3(α, d) = 0 if |α − d| < ε, ε > 0, and 1 otherwise,    (33)

are the posterior median and the posterior mode, respectively.

5. Asymmetric Loss Function

The loss function L(α, d) provides a measure of the financial consequences arising from a wrong decision rule d used to estimate an unknown quantity (a generic parameter or function thereof) α. The choice of the appropriate loss function depends on financial considerations only and is independent of the estimation procedure used. A symmetric loss function is often inappropriate, since, for example, an overestimate of a reliability function is usually much more serious than an underestimate.

A useful asymmetric loss, known as the Linex loss function, was introduced by Varian [9]. Under the assumption that the minimal loss occurs at d = α, the Linex loss function can be expressed as

    L4(α, d) = exp{ q1(d − α) } − q1(d − α) − 1,  q1 ≠ 0.    (34)

The sign of the shape parameter 𝑞1 reflects the direction of the asymmetry, 𝑞1>0 if overestimation is more serious than underestimation, and vice versa, and the magnitude of 𝑞1 reflects the degree of asymmetry.

The posterior expectation of the Linex loss function is

    E_α[L4(α, d)] = exp(q1 d) E_α[exp(−q1 α)] − q1(d − E_α[α]) − 1,    (35)

where E_α[f(α)] denotes the expectation of f(α) with respect to the posterior density g(α | z). The Bayes estimate α_L is the value of d that minimizes E_α[L4(α, d)]:

    α_L = −(1/q1) ln E_α[exp(−q1 α)],    (36)

provided that E_α[exp(−q1 α)] exists and is finite.
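For a discrete posterior such as g(m | z), the estimate (36) is a log-sum-exp over the support. A small generic sketch (function name ours), computed stably by shifting the maximum exponent:

```python
import numpy as np

def linex_estimate(values, probs, q1):
    """Bayes estimate (36) under the Linex loss (34) for a discrete
    posterior: -(1/q1) * log E[exp(-q1 * alpha)]."""
    values = np.asarray(values, float)
    logp = np.log(np.asarray(probs, float))
    a = -q1 * values + logp              # log of each term of E[exp(-q1*alpha)]
    amax = a.max()                       # shift for numerical stability
    return -(np.log(np.exp(a - amax).sum()) + amax) / q1
```

As q1 → 0 the Linex loss becomes nearly quadratic and the estimate approaches the posterior mean, matching the behavior reported for small |q| in Section 6.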

5.1. Assuming 𝜎2 Unknown

Minimizing the posterior expectation E_m[L4(m, d)] of the Linex loss function, where E_m[L4(m, d)] denotes the expectation of L4(m, d) with respect to the posterior density g1(m | z) given in (17), we get the Bayes estimate of m under the Linex loss as the integer nearest to (37), say m_L:

    m_L = −(1/q1) ln E_m[exp(−q1 m)] = −(1/q1) ln [ Σ_{m=1}^{n−1} e^{−q1 m} T1(m) / Σ_{m=1}^{n−1} T1(m) ].    (37)

Another loss function, called the general entropy (GE) loss function, proposed by Calabria and Pulcini [10], is given by

    L5(α, d) = (d/α)^{q3} − q3 ln(d/α) − 1.    (38)

The Bayes estimate α_E is the value of d that minimizes E_α[L5(α, d)]:

    α_E = [ E_α(α^{−q3}) ]^{−1/q3},    (39)

provided that E_α(α^{−q3}) exists and is finite.

Combining the general entropy loss with the posterior density (17), we get the Bayes estimate of m as the integer nearest to (40), say m_E:

    m_E = [ E_m(m^{−q3}) ]^{−1/q3} = [ Σ_{m=1}^{n−1} m^{−q3} T1(m) / Σ_{m=1}^{n−1} T1(m) ]^{−1/q3},    (40)

where T1(m) is as given in (15).
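The discrete form of (40) is a one-liner; the sketch below (function name ours) also exhibits the reduction q3 = −1 → posterior mean noted later in Remark 1:

```python
import numpy as np

def entropy_estimate(values, probs, q3):
    """Bayes estimate (39)/(40) under the general entropy loss (38):
    [E(alpha^{-q3})]^{-1/q3} for a discrete posterior on positive values."""
    values = np.asarray(values, float)
    probs = np.asarray(probs, float)
    return (probs @ values ** (-q3)) ** (-1.0 / q3)
```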

5.2. Assuming 𝜎2 Known

Combining the Linex loss with the posterior density (26), we get the Bayes estimate of m as the integer nearest to (41), say m_L:

    m_L = −(1/q1) ln E_m[exp(−q1 m)] = −(1/q1) ln [ Σ_{m=1}^{n−1} e^{−q1 m} T2(m) / Σ_{m=1}^{n−1} T2(m) ].    (41)

Combining the Linex loss with the posterior distributions (27) and (28), respectively, we get the Bayes estimators of β1 and β2 under the Linex loss function as

    β1_L = −(1/q1) ln E_{β1}[e^{−q1 β1}] = −(1/q1) ln [ (k4/h2(z)) Σ_{m=1}^{n−1} √(2π) σ e^{(S_m3 − q1 σ²)²/[2σ²(S_m1 + 1)]} G_1m / √(S_m1 + 1) ],    (42)

    β2_L = −(1/q1) ln [ (k4/h2(z)) Σ_{m=1}^{n−1} ( ∫ exp{ −(1/2) β2²(S_n1 − S_m1 + 1)/σ² + β2(S_m4/σ² − q1) } dβ2 ) G_2m ],    (43)

where k4, h2(z), G_1m, and G_2m are as given in (21), (22), (23), and (24), respectively, and S_m3, S_m4, S_m1, S_m2 are as given in (4).

Minimizing the expectation E[L5(m, d)] with respect to the posterior density g2(m | z), we get the Bayes estimate of m as the integer nearest to (44), say m_E:

    m_E = [ E_m(m^{−q3}) ]^{−1/q3} = [ Σ_{m=1}^{n−1} m^{−q3} T2(m) / Σ_{m=1}^{n−1} T2(m) ]^{−1/q3},    (44)

where T2(m) is as given in (25).

Note 1. The confluent hypergeometric function of the first kind 1F1(a, b; x) [11] is a degenerate form of the hypergeometric function 2F1(a, b, c; x) which arises as a solution to the confluent hypergeometric differential equation. It is also known as Kummer's function of the first kind, denoted 1F1, and is defined as

    1F1(a, b; x) = Σ_{m=0}^{∞} (a)_m x^m / [(b)_m m!],    (45)

with Pochhammer coefficients (a)_m = Γ(a + m)/Γ(a) for m ≥ 1 and (a)_0 = 1 [12, page 755]; the series converges for every finite x. It also has the integral representation

    1F1(a, b; x) = [1/B(a, b − a)] ∫_0^1 e^{xu} u^{a−1} (1 − u)^{b−a−1} du,    (46)

where Γ and B denote the usual gamma and beta functions, respectively.
When a and b are both integers, some special results are obtained. If a < 0 and either b > 0 or b < a, the series yields a polynomial with a finite number of terms. If b is a nonpositive integer, the function is undefined.
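The series (45) can be summed directly using the ratio of consecutive terms, (a + m)x / [(b + m)(m + 1)]; a minimal sketch, validated against the identity 1F1(1, 1; x) = e^x:

```python
import math

def hyp1f1(a, b, x, terms=200):
    """Kummer's confluent hypergeometric function 1F1(a, b; x) of (45),
    summed term by term via the Pochhammer recurrence."""
    term, total = 1.0, 1.0            # m = 0 term of the series
    for m in range(terms):
        term *= (a + m) * x / ((b + m) * (m + 1))   # term ratio recurrence
        total += term
    return total

# Sanity check: 1F1(1, 1; x) = e^x
assert abs(hyp1f1(1.0, 1.0, 1.0) - math.e) < 1e-9
```

For production use, a library routine such as `scipy.special.hyp1f1` would normally be preferred over this truncated series.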

Note 2. pFq[{a1, …, ap}, {b1, …, bq}, z] is called a generalized hypergeometric series and is defined as (Gradshteyn and Ryzhik [8, page 1045])

    pFq({a1, …, ap}; {b1, …, bq}; z) = Σ_{m=0}^{∞} (a1)_m ⋯ (ap)_m z^m / [ (b1)_m ⋯ (bq)_m m! ].    (47)

In many special cases the hypergeometric pFq reduces to other functions. For p = q + 1, pFq[a list, b list, z] has a branch cut discontinuity in the complex z plane running from 1 to ∞. The regularized hypergeometric pFq is finite for all finite values of its argument as long as p ≤ q.

Note 3. B(x, y) is the beta function (Euler's integral of the first kind), defined as (Gradshteyn and Ryzhik [8, pages 948, 950])

    B(x, y) = ∫_0^1 t^{x−1} (1 − t)^{y−1} dt,  B(x, y) = Γ(x)Γ(y)/Γ(x + y).    (48)

The gamma function is as explained in (8).
Minimizing the expected loss E[L5(β_i, d)] and using the posterior distributions (27) and (28), we get the Bayes estimates of β_i, i = 1, 2, under the general entropy loss function as

    β_iE = [ E(β_i^{−q3}) ]^{−1/q3},  i = 1, 2,

so that

    β1_E = [ (k4/h2(z)) Σ_{m=1}^{n−1} J_1m G_1m ]^{−1/q3},  J_1m = ∫ β1^{−q3} exp{ −(1/2) β1²(S_m1 + 1)/σ² + β1 S_m3/σ² } dβ1,    (49)

    β2_E = [ (k4/h2(z)) Σ_{m=1}^{n−1} J_2m G_2m ]^{−1/q3},  J_2m = ∫ β2^{−q3} exp{ −(1/2) β2²(S_n1 − S_m1 + 1)/σ² + β2 S_m4/σ² } dβ2.    (51)

The integrals J_1m and J_2m can be evaluated in closed form as combinations of the gamma functions Γ((1 − q3)/2) and Γ(1 − q3/2), the Kummer function 1F1 of Note 1, and the generalized hypergeometric function pFq of Note 2, with arguments S_m3²/[2σ²(S_m1 + 1)] and S_m4²/[2σ²(S_n1 − S_m1 + 1)], respectively; the corresponding coefficients k_5m, G_3m, G_4m (for J_1m) and k_6m, G_5m, G_6m (for J_2m) depend only on S_m3, S_m4, S_m1, S_n1, σ², and q3. Here k4 is as given in (21), G_1m and G_2m are as given in (23) and (24), the gamma function is as explained in (8), and S_m3, S_m4, S_m1, and S_m2 are as given in (4).

Remark 1. Putting q3 = −1 in (40) and (44), we get the Bayes estimators of m under the squared error loss, the posterior means given in (31) and (32). Note that for q3 = −1 the GE loss function reduces to the squared error loss function.

6. Numerical Study

6.1. Illustration

Let us consider the two-phase regression model

    y_t = 3 x_t + ε_t,  t = 1, 2, 3, 4,
    y_t = 3.5 x_t + ε_t,  t = 5, 6, …, 15,    (53)

where the ε_t are i.i.d. N(0, 1) random errors. We take the first 15 values of x_t and ε_t from Table 4.1 of Zellner [13] to generate 15 sample values (x_t, y_t), t = 1, 2, …, 15. The generated sample values are given in Table 1. The values of β1, β2, and σ² were themselves random observations: β1 and β2 were drawn from the standard normal distribution, and the precision 1/σ² from a gamma distribution with μ = 1 and coefficient of variation 1.4, giving c = 0.5 and d = 0.5.

We have calculated the posterior mean, posterior median, and posterior mode of m. The results are shown in Table 2.

We also compute the Bayes estimators m_E of m using (40) for unknown σ² and (44) for known σ², and m_L using (37) for unknown σ² and (41) for known σ², for the data given in Table 1. The results are shown in Table 3.

Table 3 shows that for small values of |q| (q = 0.9, 0.5, 0.2, 0.1) the Linex loss function is almost symmetric and nearly quadratic, and the Bayes estimates under such a loss are not far from the posterior mean. Table 3 also shows that for q1 = q3 = 1.5, 1.2, the Bayes estimates are less than the actual value m = 4.

It can be seen from Table 3 that a positive shape parameter of these loss functions reflects that overestimation is more serious than underestimation. Thus, the problem of overestimation can be addressed by taking the shape parameter of the Linex and general entropy loss functions positive and large.

For q1 = q3 = −1, −2, the Bayes estimates are considerably larger than the actual value m = 4. It can be seen from Table 3 that a negative shape parameter reflects that underestimation is more serious than overestimation. Thus, the problem of underestimation can be addressed by taking the shape parameters of the Linex and general entropy loss functions negative.

We get the Bayes estimators β1_L, β2_L, β1_E, and β2_E of β1 and β2 using (42), (43), (49), and (51), respectively, for the data given in Table 1 and for different values of the shape parameters q1 and q3. The results are shown in Table 4.

Tables 3 and 4 show that as the values of the shape parameters of the Linex and general entropy loss functions increase, the values of the Bayes estimates decrease.

7. Sensitivity of Bayes Estimates

In this section, we study the sensitivity of the Bayes estimators obtained in Sections 4 and 5 with respect to changes in the priors of the parameters. The mean μ of the gamma prior on 1/σ² has been used as prior information in computing the parameters c, d of the prior. We have computed the posterior mean m* using (31) and m** using (32) for the data given in Table 1, considering different sets of values of μ. Following Calabria and Pulcini [10], we regard the prior information as correct if the true value of σ² is close to the prior mean μ, and as wrong if σ² is far from μ. We observed that the posterior mode of m appears to be robust with respect to both a correct and a wrong choice of the prior density of σ². This can be seen from Table 5.

Table 5 shows that when the prior mean μ = 1 equals the actual value of σ², that is, with a correct choice of prior for σ², the posterior mode is 4; it gives the correct estimate of the change point. When μ = 0.5 or 1.5 (far from the true value σ² = 1), that is, with a wrong choice of prior for σ², the posterior mode remains 4, but the posterior mean and posterior median do not. Thus, the posterior mode is not sensitive to a wrong choice of the prior density of σ², while the posterior mean and posterior median are.

8. Simulation Study

In Sections 4 and 5 we obtained Bayes estimates of m on the basis of the generated data given in Table 1 for given values of the parameters. To justify the results, we generated 10,000 different random samples with m = 4, n = 15, β1 = 3.2, 3.3, 3.4, and β2 = 3.5, 3.6, 3.7, and obtained the frequency distributions of the posterior mean and median of m, m_L, and m_E under the correct prior specification. The results are shown in Tables 2 and 3. The value of the shape parameter of the general entropy loss and the Linex loss used in the simulation study for the shift point is 0.1.

We have also simulated several standard normal samples. For each β1, β2, m, and n, 1000 pseudorandom samples from the two-phase linear regression model discussed in Section 2 were simulated, and the Bayes estimators of the change point m were computed using q3 = 0.9 and different prior means μ.
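The repeated-sampling check can be sketched as below. This is our own minimal version: the slopes ±0.5, σ² = 1, the intercept-free model, and the use of the posterior mode with known σ² as the estimator are illustrative stand-ins for the paper's exact settings:

```python
import numpy as np

def shift_point_hit_rate(n_rep, m_true=4, n=15, beta1=0.5, beta2=-0.5,
                         sigma2=1.0, seed=0):
    """Simulate TPLR samples, estimate m by the posterior mode with
    sigma^2 known, and report the proportion of replications that
    recover the true shift point."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1, dtype=float)
    hits = 0
    for _ in range(n_rep):
        y = np.where(x <= m_true, beta1 * x, beta2 * x) \
            + rng.normal(0.0, np.sqrt(sigma2), n)
        S1, S2 = np.cumsum(x ** 2), np.cumsum(x * y)
        # log T2(m) up to a constant, as in (23)-(25) with alpha = 0
        logT = [S2[m - 1] ** 2 / (2 * sigma2 * (S1[m - 1] + 1))
                + (S2[-1] - S2[m - 1]) ** 2 / (2 * sigma2 * (S1[-1] - S1[m - 1] + 1))
                - 0.5 * np.log((S1[m - 1] + 1) * (S1[-1] - S1[m - 1] + 1))
                for m in range(1, n)]
        hits += int(np.argmax(logT) + 1 == m_true)
    return hits / n_rep
```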

Table 6 leads to the conclusion that m_L, m_E, the posterior mode, and the posterior median perform better than the posterior mean of the change point explained in Sections 4 and 5. With a correct choice of prior, 46% of the posterior mean values are close to the actual value of the change point, 62% of the posterior median values are close, and 70% of the posterior mode values are close. 65% of the m_L values and 66% of the m_E values are close to the actual value of m.

9. Conclusions

As this study discusses the Bayes estimation of the shift point, an integer parameter, the posterior mean is less appealing; the posterior median and posterior mode appear to be better estimators, as they are always integers. Our numerical study showed that the posterior mode of m is robust with respect to both correct and wrong prior specifications on σ², whereas the posterior mean and posterior median are sensitive when the prior specifications on 1/σ² deviate from the true values. Here we discussed a regression model with one change point; in practice there may be two or more change points. One can apply these models to econometric data, such as poverty and irrigation data.

Acknowledgments

The authors would like to thank the editor and the referee for their valuable suggestions which improved the earlier version of the paper.