Abstract

Consider the regression model Y_i = β₁X_i + ε_i, where the ε_i are i.i.d. N(0, σ²) random errors with variance σ² > 0. Suppose a change occurred in the system at some unknown time point m, reflected in the sequence after X_m by a change of the slope parameter from β₁ to β₂. The problem is to infer when and where this change occurred; this is called the change point inference problem. Estimators of m, β₁, and β₂ are derived under asymmetric loss functions, namely the Linex and general entropy loss functions. The effects of correct and wrong prior information on the Bayes estimates are also studied.

1. Introduction

Regression analysis is an important statistical technique for analyzing data in the social, medical, and engineering sciences. Quite often, in practice, the regression coefficients are assumed constant. In many real-life problems, however, theoretical or empirical considerations suggest models in which one or more parameters change occasionally. The main parameter of interest in such regression analyses is the shift point parameter, which indexes when or where the unknown change occurred.

A variety of problems, such as switching straight lines [1], shifts of level or slope in linear time series models [2], detection of ovulation time in women [3], and many others, have been studied during the last two decades. Holbert [4], while reviewing Bayesian developments in structural change from 1968 onward, gives a variety of interesting examples from economics and biology. The monograph by Broemeling and Tsurumi [5] provides a complete study of structural change in the linear model from the Bayesian viewpoint.

Bayesian inference for the shift point parameter assumes availability of the prior distribution of the changing model parameters. Bansal and Chakravarty [6] proposed studying the effect of an ESD prior for the changed slope parameter of the two-phase linear regression (TPLR) model on the Bayes estimates of the shift point, and also on the posterior odds ratio (POR) used to detect a change in the simple regression model.

In this paper, we study a TPLR model. Section 2 presents the TPLR change point model. In Sections 3.1 and 3.2, we obtain the posterior densities of m with σ² unknown, and of β₁, β₂, and m with σ² known, respectively. We derive Bayes estimators of β₁, β₂, and m under symmetric loss functions in Section 4 and under asymmetric loss functions in Section 5. Section 6 presents a numerical study that illustrates the technique on generated observations: we generate observations from the proposed model and compute the Bayes estimates of m and of the other parameters. In Section 7, we study the sensitivity of the Bayes estimators of m when the prior specifications deviate from the true values. Section 8 reports a simulation study, and Section 9 concludes the paper.

2. Two-Phase Linear Regression Model

The TPLR model is one of the many models that exhibit structural change. Holbert [4] used a Bayesian approach based on the TPLR model to reexamine the McGee and Kotz [7] stock market sales volume data and reached the same conclusion that the abolition of split-ups did hurt the regional exchanges.

The TPLR model is defined as
\[
y_t=\begin{cases}\alpha_1+\beta_1 x_t+\varepsilon_t, & t=1,2,\dots,m,\\[2pt] \alpha_2+\beta_2 x_t+\varepsilon_t, & t=m+1,\dots,n,\end{cases}\tag{1}
\]
where the ε_t are i.i.d. N(0, σ²) random errors with variance σ² > 0, x_t is a nonstochastic explanatory variable, and the regression parameters satisfy (α₁, β₁) ≠ (α₂, β₂). The shift point m is such that if m = n there is no shift, while for m = 1, 2, …, n − 1 exactly one shift has occurred.
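As a quick illustration, model (1) can be simulated directly. The regressor choice x_t = t/n, the seed, and the parameter values below are illustrative assumptions, not taken from the paper's data:

```python
import random

def simulate_tplr(n, m, a1, b1, a2, b2, sigma, seed=0):
    """Simulate from the two-phase linear regression model (1):
    y_t = a1 + b1*x_t + e_t for t <= m, y_t = a2 + b2*x_t + e_t after,
    with e_t i.i.d. N(0, sigma^2).  x_t = t/n is an illustrative choice."""
    rng = random.Random(seed)
    xs, ys = [], []
    for t in range(1, n + 1):
        x = t / n                              # nonstochastic explanatory variable
        a, b = (a1, b1) if t <= m else (a2, b2)
        xs.append(x)
        ys.append(a + b * x + rng.gauss(0.0, sigma))
    return xs, ys

# One shift at m = 4 in a sample of n = 15, echoing the setup of Section 6.
xs, ys = simulate_tplr(n=15, m=4, a1=0.0, b1=3.0, a2=0.0, b2=3.5, sigma=1.0)
```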

3. Bayes Estimation

The ML method, as well as other classical approaches, is based only on the empirical information provided by the data. When technical knowledge about the parameters of the distribution is available, however, a Bayes procedure is an attractive inferential method. The Bayes procedure is based on the posterior density, say g(β₁, β₂, σ⁻², m ∣ z), which is proportional to the product of the likelihood function L(β₁, β₂, σ⁻², m ∣ z) and a joint prior density, say g(β₁, β₂, σ⁻², m), representing the uncertainty about the parameter values:
\[
g\!\left(\beta_1,\beta_2,\sigma^{-2},m\mid z\right)=\frac{L\!\left(\beta_1,\beta_2,\sigma^{-2},m\mid z\right)g\!\left(\beta_1,\beta_2,\sigma^{-2},m\right)}{\sum_{m=1}^{n-1}\int_{\beta_1}\int_{\beta_2}\int_{\sigma^{-2}}L\!\left(\beta_1,\beta_2,\sigma^{-2},m\mid z\right)g\!\left(\beta_1,\beta_2,\sigma^{-2},m\right)d\beta_1\,d\beta_2\,d\sigma^{-2}}.\tag{2}
\]

The likelihood function of β₁, β₂, σ⁻², and m, given the sample information Z_t = (x_t, y_t), t = 1, 2, …, m, m + 1, …, n, is
\[
L\!\left(\beta_1,\beta_2,\sigma^{-2},m\mid z\right)=\frac{1}{(2\pi)^{n/2}}\exp\!\left[-\frac{1}{2}\frac{\beta_1^2 S_{m1}}{\sigma^2}+\frac{\beta_1 S_{m3}}{\sigma^2}\right]\exp\!\left[-\frac{1}{2}\frac{\left(S_{n1}-S_{m1}\right)\beta_2^2}{\sigma^2}+\frac{\beta_2 S_{m4}}{\sigma^2}\right]e^{-A/2\sigma^2}\,\sigma^{-n},\tag{3}
\]
where
\[
S_{k1}=\sum_{i=1}^{k}x_i^2,\qquad S_{k2}=\sum_{i=1}^{k}x_i y_i,\qquad A=\sum_{i=1}^{n}y_i^2+n\alpha^2-2\alpha\sum_{i=1}^{n}y_i,\qquad S_{m3}=S_{m2}+2\alpha S_{m1},\qquad S_{m4}=S_{n2}-S_{m2}+2\alpha\!\left(S_{n1}-S_{m1}\right).\tag{4}
\]

3.1. Using a Gamma Prior on 1/σ² and Conditional Informative Priors on β₁, β₂ with σ⁻² Unknown

We consider the TPLR model (1) with σ⁻² unknown. As in Broemeling and Tsurumi [5], we suppose that the shift point m is a priori uniformly distributed over the set {1, 2, …, n − 1} and is independent of β₁ and β₂. We also suppose that some information on β₁ and β₂ is available and can be expressed in terms of conditional prior probability densities on β₁ and β₂.

We take the conditional prior densities of β₁ and β₂ given σ² to be N(0, σ²):
\[
g_1\!\left(\beta_1\mid\sigma^2\right)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\beta_1^2}{2\sigma^2}\right),\;-\infty<\beta_1<\infty,\qquad
g_1\!\left(\beta_2\mid\sigma^2\right)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{\beta_2^2}{2\sigma^2}\right),\;-\infty<\beta_2<\infty.\tag{5}
\]
We also suppose that some information on 1/σ² is available, and that this technical knowledge can be expressed in terms of a prior mean μ and coefficient of variation ∅. We take the marginal prior distribution of 1/σ² to be a gamma(c, d) distribution with mean μ:
\[
g_1\!\left(\frac{1}{\sigma^2}\right)=\frac{c^{d}}{\Gamma(d)}\left(\frac{1}{\sigma^2}\right)^{d-1}e^{-c/\sigma^2},\qquad \sigma^2>0,\;c,d>0,\tag{6}
\]
where Γ(d) is the gamma function as defined in (8).

The integral representation of Γ(z) is as follows [Re z > 0, Re x > 0] (Gradshteyn and Ryzhik [8, page 934]):
\[
\Gamma(z)=x^{z}\int_{0}^{\infty}e^{-xt}\,t^{z-1}\,dt.\tag{7}
\]

The gamma function (Euler’s integral of the second kind) Γ(z), [Re z > 0] (Gradshteyn and Ryzhik [8, page 933]), is defined as
\[
\Gamma(z)=\int_{0}^{\infty}e^{-t}\,t^{z-1}\,dt.\tag{8}
\]

If the prior information is given in terms of the prior mean μ and coefficient of variation ∅, then the parameters c and d can be obtained by solving
\[
d=\frac{1}{\varnothing^{2}},\qquad c=\frac{1}{\mu\,\varnothing^{2}}.\tag{9}
\]
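Equation (9) is a one-liner in code. For the values used later in Section 6 (μ = 1, ∅ = 1.4), it gives c = d ≈ 0.51, which the paper rounds to 0.5:

```python
def gamma_hyperparameters(mu, phi):
    """Solve (9): given prior mean mu and coefficient of variation phi
    of 1/sigma^2, return the gamma(c, d) hyperparameters (rate c, shape d)."""
    d = 1.0 / phi ** 2
    c = 1.0 / (mu * phi ** 2)
    return c, d

c, d = gamma_hyperparameters(mu=1.0, phi=1.4)   # c = d = 1/1.96, roughly 0.51
```

Note that d/c = μ recovers the prior mean, and 1/√d = ∅ the coefficient of variation, as required of a gamma(c, d) prior on 1/σ².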

Hence, the joint prior pdf of β₁, β₂, σ⁻², and m, say g₁(β₁, β₂, σ⁻², m), is
\[
g_1\!\left(\beta_1,\beta_2,\sigma^{-2},m\right)=k_1\,e^{-\left(\beta_1^2+\beta_2^2\right)/2\sigma^2}\left(\frac{1}{\sigma^2}\right)^{d}e^{-c/\sigma^2},\tag{10}
\]
where
\[
k_1=\frac{c^{d}}{2\pi\,\Gamma(d)\,(n-1)}.\tag{11}
\]

The joint posterior density of β₁, β₂, σ⁻², and m, say g₁(β₁, β₂, σ⁻², m ∣ z), is
\[
\begin{aligned}
g_1\!\left(\beta_1,\beta_2,\sigma^{-2},m\mid z\right)&=\frac{L\!\left(\beta_1,\beta_2,\sigma^{-2},m\mid z\right)g_1\!\left(\beta_1,\beta_2,\sigma^{-2},m\right)}{h_1(z)}\\
&=\frac{k_2}{h_1(z)}\exp\!\left[-\frac{1}{2}\frac{\beta_1^2\left(S_{m1}+1\right)}{\sigma^2}+\frac{\beta_1 S_{m3}}{\sigma^2}\right]\exp\!\left[-\frac{1}{2}\frac{\beta_2^2\left(S_{n1}-S_{m1}+1\right)}{\sigma^2}+\frac{\beta_2 S_{m4}}{\sigma^2}\right]\\
&\quad\times e^{-\left((A/2)+c\right)\left(1/\sigma^2\right)}\left(\frac{1}{\sigma^2}\right)^{d}\left(\sigma^2\right)^{-n/2},
\end{aligned}\tag{12}
\]
where
\[
k_2=\frac{k_1}{(2\pi)^{n/2}},\tag{13}
\]
and S_{m3}, S_{m4}, S_{m1}, S_{m2}, and A are as given in (4).

Here h₁(z) is the marginal density of z, given by
\[
h_1(z)=\sum_{m=1}^{n-1}\int_{0}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}L\!\left(\beta_1,\beta_2,\sigma^{-2},m\mid z\right)g_1\!\left(\beta_1,\beta_2,\sigma^{-2},m\right)d\beta_1\,d\beta_2\,d\sigma^{-2}=k_2\sum_{m=1}^{n-1}T_1(m),\tag{14}
\]
where k₂ is as given in (13),
\[
T_1(m)=\frac{\Gamma\!\left((n/2)+d-1\right)}{\left[(A/2)+c-h_{m1}-h_{m2}\right]^{(n/2)+d-1}\sqrt{S_{m1}+1}\,\sqrt{S_{n1}-S_{m1}+1}},\tag{15}
\]
with
\[
h_{m1}=\frac{S_{m3}^{2}}{2\left(S_{m1}+1\right)},\qquad h_{m2}=\frac{S_{m4}^{2}}{2\left(S_{n1}-S_{m1}+1\right)},\tag{16}
\]
and S_{m3}, S_{m4}, S_{m1}, S_{m2}, S_{n1}, and A are as given in (4).

Γ((n/2) + d − 1) is the gamma function as explained in (8).

The marginal posterior density of the change point m, say g₁(m ∣ z), is
\[
g_1(m\mid z)=\frac{T_1(m)}{\sum_{m=1}^{n-1}T_1(m)},\tag{17}
\]
where T₁(m) is as given in (15).
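As a sketch of how (15)-(17) can be evaluated in practice, the following assumes the no-intercept form of the model from the abstract (α = 0, so that S_{m3} = S_{m2} and S_{m4} = S_{n2} − S_{m2}); the hyperparameters c, d and the noise-free toy data are illustrative, not the paper's Table 1:

```python
import math

def posterior_m_unknown_var(xs, ys, c, d):
    """Marginal posterior (17) of the change point m when sigma^2 is unknown,
    built from T1(m) in (15) with h_m1, h_m2 from (16).
    Assumes alpha = 0, so S_m3 = S_m2 and S_m4 = S_n2 - S_m2."""
    n = len(xs)
    Sn1 = sum(x * x for x in xs)
    Sn2 = sum(x * y for x, y in zip(xs, ys))
    A = sum(y * y for y in ys)                 # A reduces to sum(y^2) when alpha = 0
    T = []
    for m in range(1, n):                      # m = 1, ..., n-1
        Sm1 = sum(x * x for x in xs[:m])
        Sm2 = sum(x * y for x, y in zip(xs[:m], ys[:m]))
        Sm3, Sm4 = Sm2, Sn2 - Sm2
        hm1 = Sm3 ** 2 / (2.0 * (Sm1 + 1.0))
        hm2 = Sm4 ** 2 / (2.0 * (Sn1 - Sm1 + 1.0))
        base = A / 2.0 + c - hm1 - hm2
        T.append(math.gamma(n / 2.0 + d - 1.0)
                 / (base ** (n / 2.0 + d - 1.0)
                    * math.sqrt(Sm1 + 1.0) * math.sqrt(Sn1 - Sm1 + 1.0)))
    total = sum(T)
    return [t / total for t in T]              # g1(m | z), m = 1, ..., n-1

# Noise-free toy data: slope 3 up to t = 2, slope 3.5 afterwards.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [3.0, 6.0, 10.5, 14.0, 17.5, 21.0]
post = posterior_m_unknown_var(xs, ys, c=0.5, d=0.5)
```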

3.2. Using Conditional Informative Priors on β₁, β₂ with σ⁻² Known

We consider the TPLR model (1) with σ² known. We assume the same prior considerations for β₁, β₂, and m as explained in Section 3.1.

Hence, the joint prior pdf of β₁, β₂, and m, say g₂(β₁, β₂, m), is
\[
g_2\!\left(\beta_1,\beta_2,m\right)=k_3\,e^{-\left(\beta_1^2+\beta_2^2\right)/2\sigma^2},\tag{18}
\]
where
\[
k_3=\frac{1}{2\pi(n-1)\sigma^2}.\tag{19}
\]
The joint posterior density of β₁, β₂, and m, say g₂(β₁, β₂, m ∣ z), is
\[
g_2\!\left(\beta_1,\beta_2,m\mid z\right)=\frac{k_4}{h_2(z)}\exp\!\left[-\frac{1}{2}\frac{\beta_1^2\left(S_{m1}+1\right)}{\sigma^2}+\frac{\beta_1 S_{m3}}{\sigma^2}\right]\exp\!\left[-\frac{1}{2}\frac{\beta_2^2\left(S_{n1}-S_{m1}+1\right)}{\sigma^2}+\frac{\beta_2 S_{m4}}{\sigma^2}\right],\tag{20}
\]
where
\[
k_4=\frac{k_3}{(2\pi)^{n/2}}\,e^{-A/2\sigma^2}\,\sigma^{-n},\tag{21}
\]
and S_{m3}, S_{m4}, S_{m1}, S_{m2}, and A are as given in (4).

Here h₂(z) is the marginal density of z, given by
\[
h_2(z)=k_4\sum_{m=1}^{n-1}\int_{-\infty}^{\infty}\exp\!\left[-\frac{1}{2}\frac{\beta_1^2\left(S_{m1}+1\right)}{\sigma^2}+\frac{\beta_1 S_{m3}}{\sigma^2}\right]d\beta_1\int_{-\infty}^{\infty}\exp\!\left[-\frac{1}{2}\frac{\beta_2^2\left(S_{n1}-S_{m1}+1\right)}{\sigma^2}+\frac{\beta_2 S_{m4}}{\sigma^2}\right]d\beta_2.\tag{22}
\]

The integrals evaluate to
\[
G_{1m}=\int_{-\infty}^{\infty}\exp\!\left[-\frac{1}{2}\frac{\beta_2^2\left(S_{n1}-S_{m1}+1\right)}{\sigma^2}+\frac{\beta_2 S_{m4}}{\sigma^2}\right]d\beta_2=\frac{\sqrt{2\pi}\,e^{S_{m4}^{2}/2\sigma^{2}\left(S_{n1}-S_{m1}+1\right)}}{\sqrt{S_{n1}-S_{m1}+1}\,/\sigma},\tag{23}
\]
\[
G_{2m}=\int_{-\infty}^{\infty}\exp\!\left[-\frac{1}{2}\frac{\beta_1^2\left(S_{m1}+1\right)}{\sigma^2}+\frac{\beta_1 S_{m3}}{\sigma^2}\right]d\beta_1=\frac{\sqrt{2\pi}\,e^{S_{m3}^{2}/2\sigma^{2}\left(S_{m1}+1\right)}}{\sqrt{S_{m1}+1}\,/\sigma}.\tag{24}
\]

Using (23) and (24) in (22), it reduces to
\[
h_2(z)=k_4\sum_{m=1}^{n-1}T_2(m),\qquad T_2(m)=G_{1m}\,G_{2m},\tag{25}
\]
where k₄ is as given in (21), and G_{1m} and G_{2m} are as given in (23) and (24).
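Each of G_{1m} and G_{2m} is an instance of the Gaussian integral ∫ exp(−a t²/2 + b t) dt = √(2π/a) e^{b²/2a}. A quick numerical check of that identity, with arbitrary illustrative values of a and b standing in for (S_{m1} + 1)/σ² and S_{m3}/σ²:

```python
import math

def gauss_linear_integral(a, b, lo=-30.0, hi=30.0, steps=200000):
    """Midpoint-rule quadrature of exp(-a*t^2/2 + b*t) over [lo, hi]."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        t = lo + (i + 0.5) * h
        total += math.exp(-a * t * t / 2.0 + b * t)
    return total * h

a, b = 2.5, 1.2            # illustrative stand-ins for (S_m1+1)/sigma^2 and S_m3/sigma^2
closed_form = math.sqrt(2.0 * math.pi / a) * math.exp(b * b / (2.0 * a))
numeric = gauss_linear_integral(a, b)
```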

S_{m3}, S_{m4}, S_{m1}, and S_{m2} are as given in (4).

The marginal posterior densities of the change point m, of β₁, and of β₂ are
\[
g_2(m\mid z)=\frac{T_2(m)}{\sum_{m=1}^{n-1}T_2(m)},\tag{26}
\]
\[
g_2\!\left(\beta_1\mid z\right)=\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}\exp\!\left[-\frac{1}{2}\frac{\beta_1^2\left(S_{m1}+1\right)}{\sigma^2}+\frac{\beta_1 S_{m3}}{\sigma^2}\right]G_{1m},\tag{27}
\]
\[
g_2\!\left(\beta_2\mid z\right)=\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}\exp\!\left[-\frac{1}{2}\frac{\beta_2^2\left(S_{n1}-S_{m1}+1\right)}{\sigma^2}+\frac{\beta_2 S_{m4}}{\sigma^2}\right]G_{2m},\tag{28}
\]
where k₄, G_{1m}, G_{2m}, and h₂(z) are as given in (21), (23), (24), and (25), respectively.

S_{m3}, S_{m4}, S_{m1}, and S_{m2} are as given in (4).

4. Bayes Estimates under Symmetric Loss Function

The Bayes estimator of a generic parameter (or function thereof) α based on the squared error loss (SEL) function
\[
L_1(\alpha,d)=(\alpha-d)^2,\tag{29}
\]
where d is a decision rule to estimate α, is the posterior mean. The SEL function for an integer parameter is accordingly
\[
L_1(m,v)=(m-v)^2,\qquad m,v=0,1,2,\dots.\tag{30}
\]

Hence, the Bayes estimate of an integer-valued parameter under the SEL function L₁(m, v) is no longer the posterior mean; it can be obtained by numerically minimizing the corresponding posterior loss. In general, such a Bayes estimate equals the integer nearest to the posterior mean, so we take the integer nearest to the posterior mean as the Bayes estimate.

The Bayes estimators of m under SEL are
\[
m^{*}=\frac{\sum_{m=1}^{n-1}m\,T_1(m)}{\sum_{m=1}^{n-1}T_1(m)},\tag{31}
\]
\[
m^{**}=\frac{\sum_{m=1}^{n-1}m\,T_2(m)}{\sum_{m=1}^{n-1}T_2(m)},\tag{32}
\]
where T₁(m) and T₂(m) are as given in (15) and (25).
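Given the weights T₁(m) or T₂(m), (31)/(32) are simple weighted averages followed by rounding to the nearest integer. A sketch with purely illustrative weights:

```python
def sel_change_point_estimate(T):
    """Posterior mean (31)/(32) of m from unnormalized weights T(m), m = 1..n-1,
    rounded to the nearest integer as the Bayes estimate under squared error loss."""
    total = sum(T)
    mean = sum(m * t for m, t in enumerate(T, start=1)) / total
    return mean, round(mean)

# Illustrative weights over m = 1, ..., 5 (not the paper's T1 or T2 values).
mean, m_star = sel_change_point_estimate([1.0, 2.0, 6.0, 2.0, 1.0])
```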

The Bayes estimators of α based on the loss functions
\[
L_2(\alpha,d)=\left|\alpha-d\right|,\qquad
L_3(\alpha,d)=\begin{cases}0,&\text{if }\left|\alpha-d\right|<\epsilon,\;\epsilon>0,\\ 1,&\text{otherwise},\end{cases}\tag{33}
\]
are the posterior median and the posterior mode, respectively.

5. Asymmetric Loss Function

The loss function L(α, d) provides a measure of the financial consequences arising from a wrong decision rule d for estimating an unknown quantity (a generic parameter or function thereof) α. The choice of the appropriate loss function depends on financial considerations only and is independent of the estimation procedure used. A symmetric loss function is often inappropriate: for example, overestimating a reliability function is usually much more serious than underestimating it.

A useful asymmetric loss, known as the Linex loss function, was introduced by Varian [9]. Under the assumption that the minimal loss occurs at α, the Linex loss function can be expressed as
\[
L_4(\alpha,d)=\exp\!\left[q_1(d-\alpha)\right]-q_1(d-\alpha)-1,\qquad q_1\neq 0.\tag{34}
\]

The sign of the shape parameter q₁ reflects the direction of the asymmetry (q₁ > 0 when overestimation is more serious than underestimation, and vice versa), and the magnitude of q₁ reflects the degree of asymmetry.

The posterior expectation of the Linex loss function is
\[
E_\alpha\!\left\{L_4(\alpha,d)\right\}=\exp\!\left(q_1 d\right)E_\alpha\!\left\{\exp\!\left(-q_1\alpha\right)\right\}-q_1\left(d-E_\alpha\{\alpha\}\right)-1,\tag{35}
\]
where E_α{f(α)} denotes the expectation of f(α) with respect to the posterior density g(α ∣ z). The Bayes estimate α*_L is the value of d that minimizes E_α{L₄(α, d)}:
\[
\alpha^{*}_{L}=-\frac{1}{q_1}\ln\!\left[E_\alpha\!\left\{\exp\!\left(-q_1\alpha\right)\right\}\right],\tag{36}
\]
provided that E_α{exp(−q₁α)} exists and is finite.
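The closed form (36) can be checked against direct minimization of (35) for a discrete posterior; the support, probabilities, and q₁ below are made up for illustration:

```python
import math

def linex_bayes_estimate(support, probs, q1):
    """Closed form (36): alpha_L = -(1/q1) * ln E[exp(-q1 * alpha)]."""
    return -math.log(sum(p * math.exp(-q1 * a) for a, p in zip(support, probs))) / q1

def expected_linex(support, probs, d, q1):
    """Posterior expectation (35) of the Linex loss (34) at decision d."""
    return sum(p * (math.exp(q1 * (d - a)) - q1 * (d - a) - 1.0)
               for a, p in zip(support, probs))

support, probs, q1 = [3, 4, 5], [0.2, 0.5, 0.3], 1.0
d_star = linex_bayes_estimate(support, probs, q1)
# Brute-force minimization of (35) over a fine grid should land at the same point.
grid = [3.0 + i * 0.001 for i in range(2001)]
d_grid = min(grid, key=lambda d: expected_linex(support, probs, d, q1))
```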

5.1. Assuming σ⁻² Unknown

Minimizing the posterior expectation E_m[L₄(m, d)] of the Linex loss, where E_m[L₄(m, d)] denotes the expectation of L₄(m, d) with respect to the posterior density g₁(m ∣ z) given in (17), we obtain the Bayes estimate of m as the integer nearest to (37), say m*_L:
\[
m^{*}_{L}=-\frac{1}{q_1}\ln\!\left[E_m\!\left\{\exp\!\left(-q_1 m\right)\right\}\right]=-\frac{1}{q_1}\ln\!\left[\frac{\sum_{m=1}^{n-1}e^{-q_1 m}\,T_1(m)}{\sum_{m=1}^{n-1}T_1(m)}\right].\tag{37}
\]

Another loss function, called the general entropy (GE) loss function, proposed by Calabria and Pulcini [10], is given by
\[
L_5(\alpha,d)=\left(\frac{d}{\alpha}\right)^{q_3}-q_3\ln\!\left(\frac{d}{\alpha}\right)-1.\tag{38}
\]
The Bayes estimate α*_E is the value of d that minimizes E_α[L₅(α, d)]:
\[
\alpha^{*}_{E}=\left[E_\alpha\!\left(\alpha^{-q_3}\right)\right]^{-1/q_3},\tag{39}
\]
provided that E_α(α^{−q₃}) exists and is finite.

Combining the general entropy loss with the posterior density (17), we obtain the Bayes estimate of m as the integer nearest to (40), say m*_E:
\[
m^{*}_{E}=\left[E_m\!\left(m^{-q_3}\right)\right]^{-1/q_3}=\left[\frac{\sum_{m=1}^{n-1}m^{-q_3}\,T_1(m)}{\sum_{m=1}^{n-1}T_1(m)}\right]^{-1/q_3},\tag{40}
\]
where T₁(m) is as given in (15).
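A sketch of (40) (and of (44) below, which has the same shape with T₂ in place of T₁) for illustrative unnormalized weights T(m); setting q₃ = −1 reproduces the posterior mean, consistent with Remark 1:

```python
def ge_change_point_estimate(T, q3):
    """General entropy Bayes estimate (40)/(44): [E(m^{-q3})]^{-1/q3},
    computed from unnormalized weights T(m) over m = 1..n-1."""
    total = sum(T)
    e = sum((m ** (-q3)) * t for m, t in enumerate(T, start=1)) / total
    return e ** (-1.0 / q3)

T = [1.0, 2.0, 6.0, 2.0, 1.0]                  # illustrative weights, posterior mean 3.0
m_ge = ge_change_point_estimate(T, q3=0.9)     # q3 > 0: guards against overestimation
m_sel = ge_change_point_estimate(T, q3=-1.0)   # q3 = -1: recovers the posterior mean
```

For q₃ > 0 the estimate is a power mean of order −q₃ and therefore falls below the arithmetic posterior mean, matching the behavior reported in Table 3.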

5.2. Assuming σ⁻² Known

Combining the Linex loss with the posterior density (26), we obtain the Bayes estimate of m as the integer nearest to (41), say m**_L:
\[
m^{**}_{L}=-\frac{1}{q_1}\ln\!\left[E_m\!\left\{\exp\!\left(-q_1 m\right)\right\}\right]=-\frac{1}{q_1}\ln\!\left[\frac{\sum_{m=1}^{n-1}e^{-q_1 m}\,T_2(m)}{\sum_{m=1}^{n-1}T_2(m)}\right].\tag{41}
\]

Combining the Linex loss with the posterior distributions (27) and (28), respectively, we obtain the Bayes estimators of β₁ and β₂ under the Linex loss function as
\[
\beta^{**}_{1L}=-\frac{1}{q_1}\ln\!\left[E_{\beta_1}\!\left(e^{-q_1\beta_1}\right)\right]=-\frac{1}{q_1}\ln\!\left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}\frac{\sqrt{2\pi}\,e^{\left(S_{m3}-q_1\sigma^2\right)^2/2\sigma^2\left(S_{m1}+1\right)}}{\sqrt{S_{m1}+1}\,/\sigma}\,G_{1m}\right],\tag{42}
\]
\[
\beta^{**}_{2L}=-\frac{1}{q_1}\ln\!\left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}\left(\int_{-\infty}^{\infty}\exp\!\left[-\frac{1}{2}\frac{\beta_2^2\left(S_{n1}-S_{m1}+1\right)}{\sigma^2}+\frac{\beta_2 S_{m4}}{\sigma^2}-q_1\beta_2\right]d\beta_2\right)G_{2m}\right],\tag{43}
\]
where k₄, h₂(z), G_{1m}, and G_{2m} are as given in (21), (22), (23), and (24), respectively, and S_{m3}, S_{m4}, S_{m1}, S_{m2} are as given in (4).

Minimizing the expectation E[L₅(m, d)] with respect to the posterior density g₂(m ∣ z), we obtain the Bayes estimate of m as the integer nearest to (44), say m**_E:
\[
m^{**}_{E}=\left[E_m\!\left(m^{-q_3}\right)\right]^{-1/q_3}=\left[\frac{\sum_{m=1}^{n-1}m^{-q_3}\,T_2(m)}{\sum_{m=1}^{n-1}T_2(m)}\right]^{-1/q_3},\tag{44}
\]
where T₂(m) is as given in (25).

Note 1. The confluent hypergeometric function of the first kind ₁F₁(a, b; x) [11] is a degenerate form of the hypergeometric function ₂F₁(a, b, c; x) that arises as a solution of the confluent hypergeometric differential equation. It is also known as Kummer's function of the first kind and is defined by the series
\[
{}_1F_1(a,b;x)=\sum_{m=0}^{\infty}\frac{(a,m)\,x^{m}}{(b,m)\,m!},\tag{45}
\]
which converges for all finite x, with Pochhammer coefficients (a, m) = Γ(a + m)/Γ(a) for m ≥ 1 and (a, 0) = 1 [12, page 755]. It also has the integral representation
\[
{}_1F_1(a,b;x)=\int_{0}^{1}\frac{e^{xu}\,u^{a-1}(1-u)^{b-a-1}}{B(a,b-a)}\,du,\tag{46}
\]
where Γ and B denote the usual gamma and beta functions, respectively.
When a and b are both integers, some special results are obtained. If a < 0 and either b > 0 or b < a, the series yields a polynomial with a finite number of terms. If b is an integer with b ≤ 0, the function is undefined.

Note 2. ₚF_q[{a₁, …, a_p}, {b₁, …, b_q}; z] is called a generalized hypergeometric series and is defined as (Gradshteyn and Ryzhik [8, page 1045])
\[
{}_pF_q\!\left[\left\{a_1,\dots,a_p\right\},\left\{b_1,\dots,b_q\right\};z\right]=\sum_{m=0}^{\infty}\frac{\left(a_1\right)_m\cdots\left(a_p\right)_m}{\left(b_1\right)_m\cdots\left(b_q\right)_m}\,\frac{z^{m}}{m!}.\tag{47}
\]
In many special cases the hypergeometric ₚF_q reduces to other functions. For p = q + 1, ₚF_q[a list, b list, z] has a branch cut discontinuity in the complex z plane running from 1 to ∞. The regularized hypergeometric ₚF_q is finite for all finite values of its argument as long as p ≤ q.

Note 3. B(x, y) is the beta function (Euler's integral of the first kind), defined as (Gradshteyn and Ryzhik [8, pages 948, 950])
\[
B(x,y)=\int_{0}^{1}t^{x-1}(1-t)^{y-1}\,dt,\qquad B(x,y)=\frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}.\tag{48}
\]
The gamma function is as explained in (8).
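The second expression in (48) can be cross-checked with the standard library's gamma function; for example, B(2, 3) = Γ(2)Γ(3)/Γ(5) = 2/24 = 1/12:

```python
import math

def beta_fn(x, y):
    """Euler's beta function via the gamma-function identity in (48)."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

b23 = beta_fn(2.0, 3.0)
```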
Minimizing the expected loss E[L₅(β_i, d)] and using the posterior distributions (27) and (28), we obtain the Bayes estimates of β_i, i = 1, 2, under the general entropy loss function as
\[
\beta^{**}_{iE}=\left\{E\!\left(\beta_i^{-q_3}\right)\right\}^{-1/q_3},\quad i=1,2,\qquad
\beta^{**}_{1E}=\left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}J_{1m}\,G_{1m}\right]^{-1/q_3},
\]
\[
\begin{aligned}
J_{1m}&=\int_{-\infty}^{\infty}\beta_1^{-q_3}\,e^{-\left(1/2\sigma^2\right)\beta_1^2\left(S_{m1}+1\right)+\beta_1 S_{m3}/\sigma^2}\,d\beta_1\\
&=\left(\frac{S_{m1}+1}{\sigma^2}\right)^{-1+q_3}\left[k_{5m}\,\Gamma\!\left(\frac{1-q_3}{2}\right)\,{}_1F_1\!\left(\frac{q_3}{2},\frac{1}{2};-\frac{S_{m3}^2/\sigma^2}{2\left(S_{m1}+1\right)}\right)\right.\\
&\qquad+G_{3m}\,\Gamma\!\left(1-\frac{q_3}{2}\right)\,{}_1F_1\!\left(\frac{1+q_3}{2},\frac{3}{2};-\frac{S_{m3}^2/\sigma^2}{2\left(S_{m1}+1\right)}\right)\\
&\qquad\left.+G_{4m}\,{}_pF_q\!\left(\left\{\frac{1}{2},1\right\},\left\{1-\frac{q_3}{2},\frac{3}{2}-\frac{q_3}{2}\right\};-\frac{S_{m3}^2/\sigma^2}{2\left(S_{m1}+1\right)}\right)\right],
\end{aligned}\tag{49}
\]
where
\[
\begin{aligned}
k_{5m}&=(-1)^{-q_3}\,2^{(-1-q_3)/2}\left\{-\frac{S_{m1}+1}{2S_{m3}^2}\right\}^{-q_3}\left\{-\frac{S_{m3}^2}{2\left(S_{m1}+1\right)}\right\}^{-q_3}\exp\!\left[\frac{S_{m3}^2/\sigma^2}{2\left(S_{m1}+1\right)}\right]\\
&\quad\times\left(\frac{S_{m1}+1}{\sigma^2}\right)^{q_3/2-1+q_3}\left(\frac{S_{m1}+1}{\sigma^2}\right)(-1)^{q_3}\left\{-\frac{S_{m1}+1}{S_{m3}}\right\}^{q_3}\left\{-\frac{S_{m3}}{S_{m1}+1}\right\}^{q_3},\\
G_{3m}&=\sqrt{2}\,\frac{S_{m3}}{\sigma^2}\,(-1)^{q_3}\left\{-\frac{S_{m1}+1}{S_{m3}}\right\}^{q_3}\left\{-\frac{S_{m3}}{S_{m1}+1}\right\}^{q_3-1},\\
G_{4m}&=2^{(1+q_3)/2}\left[\left\{-\frac{S_{m1}+1}{2S_{m3}^2}\right\}^{q_3}\frac{S_{m3}}{\sigma^2}\,(-1)^{q_3}\left\{-\frac{S_{m3}}{S_{m1}+1}\right\}^{q_3}\left\{-\frac{S_{m3}}{S_{m1}+1}\right\}^{q_3}\right].
\end{aligned}\tag{50}
\]
Here ₁F₁((1 + q₃)/2, 3/2; −(S_{m3}²/σ²)/(2(S_{m1} + 1))) and ₚF_q({1/2, 1}, {1 − q₃/2, 3/2 − q₃/2}; −(S_{m3}²/σ²)/(2(S_{m1} + 1))) are the hypergeometric functions explained in Notes 1 and 2, respectively; k_{5m}, G_{3m}, and G_{4m} are as given in (50), and G_{1m} is as given in (23); k₄ is as given in (21); Γ((1 − q₃)/2) and Γ(1 − q₃/2) are gamma functions as explained in (8).
S_{m3}, S_{m4}, S_{m1}, and S_{m2} are as given in (4). Similarly,
\[
\beta^{**}_{2E}=\left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}J_{2m}\,G_{2m}\right]^{-1/q_3},
\]
\[
\begin{aligned}
J_{2m}&=\int_{-\infty}^{\infty}\beta_2^{-q_3}\,e^{-\left(1/2\sigma^2\right)\beta_2^2\left(S_{n1}-S_{m1}+1\right)+\beta_2 S_{m4}/\sigma^2}\,d\beta_2\\
&=\left(\frac{S_{n1}-S_{m1}+1}{\sigma^2}\right)^{-1+q_3}\left[k_{6m}\,\Gamma\!\left(\frac{1-q_3}{2}\right)\,{}_1F_1\!\left(\frac{q_3}{2},\frac{1}{2};-\frac{S_{m4}^2/\sigma^2}{2\left(S_{n1}-S_{m1}+1\right)}\right)\right.\\
&\qquad+G_{5m}\,\Gamma\!\left(1-\frac{q_3}{2}\right)\,{}_1F_1\!\left(\frac{1+q_3}{2},\frac{3}{2};-\frac{S_{m4}^2/\sigma^2}{2\left(S_{n1}-S_{m1}+1\right)}\right)\\
&\qquad\left.+G_{6m}\,{}_pF_q\!\left(\left\{\frac{1}{2},1\right\},\left\{1-\frac{q_3}{2},\frac{3}{2}-\frac{q_3}{2}\right\};-\frac{S_{m4}^2/\sigma^2}{2\left(S_{n1}-S_{m1}+1\right)}\right)\right],
\end{aligned}\tag{51}
\]
where
\[
\begin{aligned}
k_{6m}&=(-1)^{-q_3}\,2^{(-1-q_3)/2}\left\{-\frac{S_{n1}-S_{m1}+1}{2S_{m4}^2}\right\}^{-q_3}\left\{-\frac{S_{m4}^2}{2\left(S_{n1}-S_{m1}+1\right)}\right\}^{-q_3}\exp\!\left[\frac{S_{m4}^2/\sigma^2}{2\left(S_{n1}-S_{m1}+1\right)}\right]\\
&\quad\times\left(\frac{S_{n1}-S_{m1}+1}{\sigma^2}\right)^{q_3/2-1+q_3}\left(\frac{S_{n1}-S_{m1}+1}{\sigma^2}\right)(-1)^{q_3}\left\{-\frac{S_{n1}-S_{m1}+1}{S_{m4}}\right\}^{q_3}\left\{-\frac{S_{m4}}{S_{n1}-S_{m1}+1}\right\}^{q_3},\\
G_{5m}&=\sqrt{2}\,\frac{S_{m4}}{\sigma^2}\,(-1)^{q_3}\left\{-\frac{S_{n1}-S_{m1}+1}{S_{m4}}\right\}^{q_3}\left\{-\frac{S_{m4}}{S_{n1}-S_{m1}+1}\right\}^{q_3-1},\\
G_{6m}&=2^{(1+q_3)/2}\left[\left\{-\frac{S_{n1}-S_{m1}+1}{2S_{m4}^2}\right\}^{q_3}\frac{S_{m4}}{\sigma^2}\,(-1)^{q_3}\left\{-\frac{S_{m4}}{S_{n1}-S_{m1}+1}\right\}^{q_3}\left\{-\frac{S_{m4}}{S_{n1}-S_{m1}+1}\right\}^{q_3}\right].
\end{aligned}\tag{52}
\]
Here ₁F₁((1 + q₃)/2, 3/2; −(S_{m4}²/σ²)/(2(S_{n1} − S_{m1} + 1))) and ₚF_q({1/2, 1}, {1 − q₃/2, 3/2 − q₃/2}; −(S_{m4}²/σ²)/(2(S_{n1} − S_{m1} + 1))) are the hypergeometric functions explained in Notes 1 and 2, respectively; k_{6m}, G_{5m}, and G_{6m} are as given in (52), and G_{2m} is as given in (24); k₄ is as given in (21).
Γ((1 − q₃)/2) and Γ(1 − q₃/2) are gamma functions as explained in (8). S_{m3}, S_{m4}, S_{m1}, and S_{m2} are as given in (4).

Remark 1. Putting q₃ = −1 in (40) and (44), we recover the Bayes estimators of m under squared error loss, that is, the posterior means given in (31) and (32): for q₃ = −1 the GE loss function reduces to the squared error loss function.

6. Numerical Study

6.1. Illustration

Let us consider the two-phase regression model
\[
y_t=\begin{cases}3x_t+\varepsilon_t, & t=1,2,3,4,\\[2pt] 3.5x_t+\varepsilon_t, & t=5,6,\dots,15,\end{cases}\tag{53}
\]
where the ε_t are i.i.d. N(0, 1) random errors. We take the first 15 values of x_t and ε_t from Table 4.1 of Zellner [13] to generate the 15 sample values (x_t, y_t), t = 1, 2, …, 15, which are given in Table 1. The values of β₁, β₂, and σ² were themselves random observations: β₁ and β₂ were drawn from a standard normal distribution, and the precision 1/σ² from a gamma distribution with μ = 1 and coefficient of variation ∅ = 1.4, resulting in c = 0.5 and d = 0.5.

We have calculated the posterior mean, posterior median, and posterior mode of m. The results are shown in Table 2.

We also compute the Bayes estimators m_E of m using (40) for unknown σ² and (44) for known σ², and m_L using (37) for unknown σ² and (41) for known σ², for the data given in Table 1. The results are shown in Table 3.

Table 3 shows that for small values of |q| (q = 0.9, 0.5, 0.2, 0.1), the Linex loss function is almost symmetric and nearly quadratic, and the values of the Bayes estimate under such a loss are not far from the posterior mean. Table 3 also shows that for q₁ = q₃ = 1.5, 1.2, the Bayes estimates are less than the actual value m = 4.

It can be seen from Table 3 that a positive sign of the shape parameter of the loss functions reflects that overestimation is more serious than underestimation. Thus, the problem of overestimation can be addressed by taking the shape parameter of the Linex and general entropy loss functions positive and large.

For ğ‘ž1=ğ‘ž3=−1, −2, Bayes estimates are quite large than actual value 𝑚=4. It can be seen from Table 3 that the negative sign of shape parameter of loss functions reflects underestimation is more serious than overestimation. Thus, problem of underestimation can be solved by taking the value of shape parameters of Linex and General Entropy loss functions negative.

We obtain the Bayes estimators β**₁L, β**₂L, β**₁E, and β**₂E of β₁ and β₂ using (42), (43), (49), and (51), respectively, for the data given in Table 1 and for different values of the shape parameters q₁ and q₃. The results are shown in Table 4.

Tables 3 and 4 show that as the values of the shape parameters of the Linex and general entropy loss functions increase, the values of the Bayes estimates decrease.

7. Sensitivity of Bayes Estimates

In this section, we study the sensitivity of the Bayes estimators obtained in Sections 4 and 5 with respect to changes in the priors on the parameters. The mean μ of the gamma prior on σ⁻² has been used as prior information in computing the parameters c and d of the prior. We have computed the posterior mean m* using (31) and m** using (32) for the data given in Table 1, considering different values of μ. Following Calabria and Pulcini [10], we regard the prior information as correct if the true value of σ⁻² is close to the prior mean μ, and as wrong if σ⁻² is far from μ. We observed that the posterior mode of m appears to be robust with respect to both a correct and a wrong choice of the prior density of σ⁻². This can be seen from Table 5.

Table 5 shows that when the prior mean μ = 1 equals the actual value of σ⁻², that is, a correct choice of the prior of σ⁻², the Bayes estimate based on the posterior mode is 4, correctly identifying the change point. When μ = 0.5 or 1.5 (far from the true value σ⁻² = 1), that is, a wrong choice of the prior of σ⁻², the posterior mode remains 4, but the posterior mean and posterior median do not remain the same. Thus, the posterior mode is not sensitive to a wrong choice of the prior density of σ⁻², while the posterior mean and posterior median are.

8. Simulation Study

In Sections 4 and 5 we obtained Bayes estimates of m on the basis of the generated data given in Table 1 for given values of the parameters. To justify the results, we generated 10,000 different random samples with m = 4, n = 15, β₁ = 3.2, 3.3, 3.4, and β₂ = 3.5, 3.6, 3.7, and obtained the frequency distributions of the posterior mean and median of m, m*_L, and m*_E under the correct prior specification. The results are shown in Tables 2 and 3. The shape parameter of both the general entropy and Linex losses used in the simulation study for the shift point was taken as 0.1.

We have also simulated several standard normal samples. For each β₁, β₂, m, and n, 1,000 pseudorandom samples from the two-phase linear regression model discussed in Section 2 were simulated, and the Bayes estimators of the change point m were computed using q₃ = 0.9 and different prior means μ.

Table 6 leads to the conclusion that m*_L, m*_E, the posterior mode, and the posterior median perform better than the posterior mean of the change point discussed in Sections 4 and 5. With a correct choice of prior, 46% of the posterior mean values are close to the actual value of the change point, 62% of the posterior median values are close to it, and 70% of the posterior mode values are close to it; 65% of the m*_L values and 66% of the m*_E values are close to the actual value of m.

9. Conclusions

In this study we discussed Bayes estimators of the shift point, an integer parameter, for which the posterior mean is less appealing; the posterior median and posterior mode appear to be better estimators, as they are always integers. Our numerical study showed that the posterior mode of m is robust with respect to both correct and wrong choices of the prior specifications on σ⁻², while the posterior mean and posterior median are sensitive when the prior specifications on 1/σ² deviate from the true values. Here we discussed a regression model with one change point; in practice, models with two or more change points may arise. One can apply these models to econometric data such as poverty and irrigation.

Acknowledgments

The authors would like to thank the editor and the referee for their valuable suggestions which improved the earlier version of the paper.