Abstract

Let the regression model be Y_i = β1 X_i + ε_i, where the ε_i are i.i.d. N(0, σ²) random errors with variance σ² > 0. Suppose it is later found that the system changed at some time point m, reflected in the sequence after X_m by a change of the slope parameter from β1 to β2. The problem of interest is when and where this change started occurring; this is called the change-point inference problem. The estimators of m, β1, and β2 are derived under asymmetric loss functions, namely the Linex and General Entropy loss functions. The effects of correct and wrong prior information on the Bayes estimates are also studied.

1. Introduction

Regression analysis is an important statistical technique for analyzing data in the social, medical, and engineering sciences. Quite often, in practice, the regression coefficients are assumed constant. In many real-life problems, however, theoretical or empirical considerations suggest models in which one or more of the parameters change occasionally. The main parameter of interest in such regression analyses is the shift point parameter, which indexes when or where the unknown change occurred.

A variety of problems, such as switching straight lines [1], shifts of level or slope in linear time series models [2], detection of ovulation time in women [3], and many others, have been studied during the last two decades. Holbert [4], while reviewing Bayesian developments in structural change from 1968 onward, gives a variety of interesting examples from economics and biology. The monograph by Broemeling and Tsurumi [5] provides a complete study of structural change in the linear model from the Bayesian viewpoint.

Bayesian inference for the shift point parameter assumes that a prior distribution of the changing model parameters is available. Bansal and Chakravarty [6] proposed studying the effect of an ESD prior for the changed slope parameter of the two-phase linear regression (TPLR) model on the Bayes estimates of the shift point, and also on the posterior odds ratio (POR) used to detect a change in the simple regression model.

In this paper, we study a TPLR model. Section 2 presents the TPLR change-point model. In Sections 3.1 and 3.2 we obtain the posterior densities of m with σ² unknown, and of β1, β2, and m with σ² known, respectively. We derive Bayes estimators of β1, β2, and m under symmetric loss functions in Section 4 and under asymmetric loss functions in Section 5. In Section 6, we present a numerical study that illustrates these techniques on generated observations: we generate observations from the proposed model and compute the Bayes estimates of m and of the other parameters. In Section 7, we study the sensitivity of the Bayes estimators of m when the prior specifications deviate from the true values. Section 8 reports a simulation study, and Section 9 concludes the paper.

2. Two-Phase Linear Regression Model

The TPLR model is one of the many models, which exhibits structural change. Holbert [4] used a Bayesian approach, based on TPLR model, to reexamine the McGee and Kotz [7] data for stock market sales volume and reached the same conclusion that the abolition of splitups did hurt the regional exchanges.

The TPLR model is defined as
\[
y_t =
\begin{cases}
\alpha_1 + \beta_1 x_t + \varepsilon_t, & t = 1, 2, \ldots, m,\\
\alpha_2 + \beta_2 x_t + \varepsilon_t, & t = m+1, \ldots, n,
\end{cases}
\tag{1}
\]
where the ε_t are i.i.d. N(0, σ²) random errors with variance σ² > 0, x_t is a nonstochastic explanatory variable, and the regression parameters satisfy (α1, β1) ≠ (α2, β2). The shift point m is such that if m = n there is no shift, while for m = 1, 2, …, n−1 exactly one shift has occurred.
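As a concrete illustration of the data-generating process in (1), the model can be simulated as follows (a minimal pure-Python sketch; the design x_t = t, the parameter values, and the function name are our own illustrative choices, not those of the paper):

```python
import random

def simulate_tplr(n, m, a1, b1, a2, b2, sigma, seed=0):
    """Draw one sample path from the TPLR model (1): the intercept and
    slope switch from (a1, b1) to (a2, b2) after the shift point m."""
    rng = random.Random(seed)
    xs, ys = [], []
    for t in range(1, n + 1):
        x = float(t)                    # nonstochastic explanatory variable
        eps = rng.gauss(0.0, sigma)     # i.i.d. N(0, sigma^2) error
        if t <= m:
            y = a1 + b1 * x + eps       # first regime, t = 1, ..., m
        else:
            y = a2 + b2 * x + eps       # second regime, t = m+1, ..., n
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate_tplr(n=15, m=4, a1=0.0, b1=3.0, a2=0.0, b2=3.5, sigma=1.0)
```

With m = n the second branch is never taken, matching the no-shift case described above.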

3. Bayes Estimation

The ML method, like other classical approaches, is based only on the empirical information provided by the data. When some technical knowledge about the parameters of the distribution is available, however, a Bayes procedure is an attractive inferential alternative. The Bayes procedure is based on a posterior density, say g(β1, β2, σ⁻², m | z), which is proportional to the product of the likelihood function L(β1, β2, σ⁻², m | z) and a joint prior density, say g(β1, β2, σ⁻², m), representing the uncertainty about the parameter values:
\[
g(\beta_1,\beta_2,\sigma^{-2},m \mid z)
= \frac{L(\beta_1,\beta_2,\sigma^{-2},m \mid z)\, g(\beta_1,\beta_2,\sigma^{-2},m)}
       {\sum_{m=1}^{n-1} \int_{\beta_1}\!\int_{\beta_2}\!\int_{\sigma^{-2}}
        L(\beta_1,\beta_2,\sigma^{-2},m \mid z)\, g(\beta_1,\beta_2,\sigma^{-2},m)\,
        d\beta_1\, d\beta_2\, d\sigma^{-2}}.
\tag{2}
\]

The likelihood function of β1, β2, σ⁻², and m, given the sample information Z_t = (x_t, y_t), t = 1, 2, …, m, m+1, …, n, and writing α for the common intercept, is
\[
L(\beta_1,\beta_2,\sigma^{-2},m \mid z)
= \frac{\sigma^{-n}}{(2\pi)^{n/2}}
  \exp\!\left[-\frac{1}{2}\,\frac{\beta_1^2 S_{m1}}{\sigma^2} + \beta_1\frac{S_{m3}}{\sigma^2}\right]
  \exp\!\left[-\frac{1}{2}\,\frac{\beta_2^2 (S_{n1}-S_{m1})}{\sigma^2} + \beta_2\frac{S_{m4}}{\sigma^2}\right]
  e^{-A/2\sigma^2},
\tag{3}
\]
where
\[
S_{k1} = \sum_{i=1}^{k} x_i^2, \qquad
S_{k2} = \sum_{i=1}^{k} x_i y_i, \qquad
A = \sum_{i=1}^{n} y_i^2 + n\alpha^2 - 2\alpha\sum_{i=1}^{n} y_i,
\]
\[
S_{m3} = S_{m2} - \alpha S_{m1}, \qquad
S_{m4} = \left(S_{n2} - S_{m2}\right) - \alpha\left(S_{n1} - S_{m1}\right).
\tag{4}
\]

3.1. Using Gamma Prior on 1/σ² and Conditional Informative Priors on β1, β2 with σ⁻² Unknown

We consider the TPLR model (1) with unknown σ⁻². As in Broemeling and Tsurumi [5], we suppose that the shift point m is a priori uniformly distributed over the set {1, 2, …, n−1} and is independent of β1 and β2. We also suppose that some information on β1 and β2 is available and can be expressed in terms of conditional prior probability densities on β1 and β2.

We take the conditional prior densities of β1 and β2 given σ² to be N(0, σ²):
\[
g_1(\beta_1 \mid \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}
\exp\!\left[-\frac{1}{2}\,\frac{\beta_1^2}{\sigma^2}\right], \quad -\infty < \beta_1 < \infty,
\qquad
g_1(\beta_2 \mid \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}
\exp\!\left[-\frac{1}{2}\,\frac{\beta_2^2}{\sigma^2}\right], \quad -\infty < \beta_2 < \infty.
\tag{5}
\]
We also suppose that some information on 1/σ² is available and that this technical knowledge can be expressed in terms of a prior mean μ and coefficient of variation ∅. We take the marginal prior distribution of 1/σ² to be a gamma(c, d) distribution with mean μ:
\[
g_1\!\left(\frac{1}{\sigma^2}\right)
= \frac{c^d}{\Gamma(d)}\left(\frac{1}{\sigma^2}\right)^{d-1} e^{-c/\sigma^2},
\qquad \sigma^2 > 0,\; c, d > 0,
\tag{6}
\]
where Γ(d) is the gamma function as explained in (8).

The integral representation of Γ(z) is as below, for Re z > 0 and Re x > 0 (Gradshteyn and Ryzhik [8, page 934]):
\[
\Gamma(z) = x^z \int_0^{\infty} e^{-xt}\, t^{z-1}\, dt.
\tag{7}
\]

The gamma function (Euler's integral of the second kind) Γ(z), for Re z > 0 (Gradshteyn and Ryzhik [8, page 933]), is defined as
\[
\Gamma(z) = \int_0^{\infty} e^{-t}\, t^{z-1}\, dt.
\tag{8}
\]

If the prior information is given in terms of the prior mean μ and coefficient of variation ∅, then the parameters c and d can be obtained by solving
\[
d = \frac{1}{\varnothing^2}, \qquad c = \frac{1}{\mu\,\varnothing^2}.
\tag{9}
\]
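For instance, the moment matching in (9) is a two-line computation (the function name is ours; gamma(c, d) here means the density proportional to τ^(d−1) e^(−cτ), as in (6), so the prior mean is d/c and the coefficient of variation is 1/√d):

```python
def gamma_hyperparameters(mu, cv):
    """Solve (9): match a gamma(c, d) prior on 1/sigma^2, with density
    proportional to tau**(d - 1) * exp(-c * tau) as in (6), to a given
    prior mean mu = d/c and coefficient of variation cv = 1/sqrt(d)."""
    d = 1.0 / cv ** 2
    c = 1.0 / (mu * cv ** 2)
    return c, d

c, d = gamma_hyperparameters(mu=1.0, cv=1.4)
```

For μ = 1 and ∅ = 1.4 this gives c = d ≈ 0.51, consistent, up to rounding, with the values c = 0.5, d = 0.5 quoted in Section 6.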

Hence, the joint prior pdf of β1, β2, σ⁻², and m, say g1(β1, β2, σ⁻², m), is
\[
g_1(\beta_1,\beta_2,\sigma^{-2},m)
= k_1\, e^{-(\beta_1^2+\beta_2^2)/2\sigma^2}\left(\frac{1}{\sigma^2}\right)^{d} e^{-c/\sigma^2},
\tag{10}
\]
where
\[
k_1 = \frac{c^d}{2\pi\,\Gamma(d)\,(n-1)}.
\tag{11}
\]

The joint posterior density of β1, β2, σ⁻², and m, say g1(β1, β2, σ⁻², m | z), is
\[
g_1(\beta_1,\beta_2,\sigma^{-2},m \mid z)
= \frac{L(\beta_1,\beta_2,\sigma^{-2},m \mid z)\, g_1(\beta_1,\beta_2,\sigma^{-2},m)}{h_1(z)}
= \frac{k_2}{h_1(z)}\,
\exp\!\left[-\frac{1}{2}\,\beta_1^2\,\frac{S_{m1}+1}{\sigma^2} + \beta_1\frac{S_{m3}}{\sigma^2}\right]
\exp\!\left[-\frac{1}{2}\,\beta_2^2\,\frac{S_{n1}-S_{m1}+1}{\sigma^2} + \beta_2\frac{S_{m4}}{\sigma^2}\right]
e^{-((A/2)+c)(1/\sigma^2)}\left(\frac{1}{\sigma^2}\right)^{d}\left(\sigma^2\right)^{-n/2},
\tag{12}
\]
where
\[
k_2 = \frac{k_1}{(2\pi)^{n/2}},
\tag{13}
\]
and S_{m1}, S_{m2}, S_{m3}, S_{m4}, and A are as given in (4).

h1(z) is the marginal density of z, given by
\[
h_1(z) = \sum_{m=1}^{n-1}\int_0^{\infty}\!\!\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
L(\beta_1,\beta_2,\sigma^{-2},m \mid z)\, g_1(\beta_1,\beta_2,\sigma^{-2},m)\,
d\beta_1\, d\beta_2\, d\sigma^{-2}
= k_2\sum_{m=1}^{n-1} T_1(m),
\tag{14}
\]
where k2 is as given in (13),
\[
T_1(m) = \frac{\Gamma\!\left((n/2)+d-1\right)}
{\left[(A/2)+c-h_{m1}-h_{m2}\right]^{(n/2)+d-1}\,\sqrt{(S_{m1}+1)(S_{n1}-S_{m1}+1)}},
\tag{15}
\]
with
\[
h_{m1} = \frac{S_{m3}^2}{2(S_{m1}+1)}, \qquad
h_{m2} = \frac{S_{m4}^2}{2(S_{n1}-S_{m1}+1)},
\tag{16}
\]
and S_{m1}, S_{m2}, S_{m3}, S_{m4}, S_{n1}, and A as given in (4).

Ξ“((𝑛/2)+π‘‘βˆ’1) is the gamma function as explained in (8).

Marginal posterior density of change point π‘š, say 𝑔(π‘šβˆ£π‘§) is 𝑔1𝑇(π‘šβˆ£π‘§)=1(π‘š)βˆ‘π‘›βˆ’1π‘š=1𝑇1(π‘š),(17)𝑇1(π‘š) is as given in (15).

3.2. Using Conditional Informative Priors on β1, β2 with σ⁻² Known

We consider the TPLR model (1) with known σ². We assume the same prior considerations for β1, β2, and m as explained in Section 3.1.

Hence, the joint prior pdf of β1, β2, and m, say g2(β1, β2, m), is
\[
g_2(\beta_1,\beta_2,m) = k_3\, e^{-(\beta_1^2+\beta_2^2)/2\sigma^2},
\tag{18}
\]
where
\[
k_3 = \frac{1}{2\pi\,(n-1)\,\sigma^2}.
\tag{19}
\]
The joint posterior density of β1, β2, and m, say g2(β1, β2, m | z), is
\[
g_2(\beta_1,\beta_2,m \mid z)
= \frac{k_4}{h_2(z)}\,
\exp\!\left[-\frac{1}{2}\,\beta_1^2\,\frac{S_{m1}+1}{\sigma^2} + \beta_1\frac{S_{m3}}{\sigma^2}\right]
\exp\!\left[-\frac{1}{2}\,\beta_2^2\,\frac{S_{n1}-S_{m1}+1}{\sigma^2} + \beta_2\frac{S_{m4}}{\sigma^2}\right],
\tag{20}
\]
where
\[
k_4 = \frac{k_3}{(2\pi)^{n/2}}\, e^{-A/2\sigma^2}\,\sigma^{-n},
\tag{21}
\]
and S_{m1}, S_{m2}, S_{m3}, S_{m4}, and A are as given in (4).

Here h2(z) is the marginal density of z, given by
\[
h_2(z) = k_4 \sum_{m=1}^{n-1}
\int_{-\infty}^{\infty} \exp\!\left[-\frac{1}{2}\,\beta_1^2\,\frac{S_{m1}+1}{\sigma^2} + \beta_1\frac{S_{m3}}{\sigma^2}\right] d\beta_1
\times
\int_{-\infty}^{\infty} \exp\!\left[-\frac{1}{2}\,\beta_2^2\,\frac{S_{n1}-S_{m1}+1}{\sigma^2} + \beta_2\frac{S_{m4}}{\sigma^2}\right] d\beta_2.
\tag{22}
\]

The integrals are
\[
G_{1m} = \int_{-\infty}^{\infty}
\exp\!\left[-\frac{1}{2}\,\beta_2^2\,\frac{S_{n1}-S_{m1}+1}{\sigma^2} + \beta_2\frac{S_{m4}}{\sigma^2}\right] d\beta_2
= \frac{\sqrt{2\pi}\,\sigma\; e^{\,S_{m4}^2/2\sigma^2(S_{n1}-S_{m1}+1)}}{\sqrt{S_{n1}-S_{m1}+1}},
\tag{23}
\]
\[
G_{2m} = \int_{-\infty}^{\infty}
\exp\!\left[-\frac{1}{2}\,\beta_1^2\,\frac{S_{m1}+1}{\sigma^2} + \beta_1\frac{S_{m3}}{\sigma^2}\right] d\beta_1
= \frac{\sqrt{2\pi}\,\sigma\; e^{\,S_{m3}^2/2\sigma^2(S_{m1}+1)}}{\sqrt{S_{m1}+1}}.
\tag{24}
\]

Using the results (23) and (24) in (22), it reduces to
\[
h_2(z) = k_4 \sum_{m=1}^{n-1} T_2(m), \qquad T_2(m) = G_{1m}\, G_{2m},
\tag{25}
\]
where k4 is as given in (21), G_{1m} and G_{2m} are as given in (23) and (24), and S_{m1}, S_{m2}, S_{m3}, and S_{m4} are as given in (4).

Marginal posterior density of change point π‘š, Ξ²1, and Ξ²2 is 𝑔2𝑇(π‘šβˆ£π‘§)=2(π‘š)βˆ‘π‘›βˆ’1π‘š=1𝑇2,(π‘š)(26)𝑔2𝛽1ξ€Έ=π‘˜βˆ£π‘§4β„Ž2(𝑧)π‘›βˆ’1ξ“π‘š=1π‘’βˆ’1/2𝛽21ξ‚΅π‘†π‘š1+1𝜎2ξ‚Ά+𝛽1ξ‚΅π‘†π‘š3𝜎2𝐺1π‘š,(27)𝑔2𝛽2ξ€Έ=π‘˜βˆ£π‘§4β„Ž2(𝑧)π‘›βˆ’1ξ“π‘š=1π‘’βˆ’1/2𝛽22𝑆𝑛1βˆ’π‘†π‘š1+1𝜎2ξ‚Ά+𝛽2π‘†π‘š4𝜎2𝐺2π‘š,(28)π‘˜4 and 𝐺1π‘š,𝐺2π‘š,β„Ž2(𝑧)are as given in (21), (23), (24), and (25), respectively.

π‘†π‘š3,π‘†π‘š4,π‘†π‘š1, π‘†π‘š2 are as given in (4).

4. Bayes Estimates under Symmetric Loss Function

The Bayes estimator of a generic parameter (or function thereof) α based on the squared error loss (SEL) function
\[
L_1(\alpha,d) = (\alpha-d)^2,
\tag{29}
\]
where d is a decision rule to estimate α, is the posterior mean. The SEL function relative to an integer parameter is
\[
L_1(m,v) = (m-v)^2, \qquad m, v = 0, 1, 2, \ldots.
\tag{30}
\]

Hence, the Bayes estimate of an integer-valued parameter under the SEL function L1(m, v) is no longer the posterior mean itself and can be obtained by numerically minimizing the corresponding posterior loss. Generally, such a Bayes estimate equals the integer nearest to the posterior mean, so we take the integer nearest to the posterior mean as the Bayes estimate.

The Bayes estimator of π‘š under SEL isπ‘šβˆ—=βˆ‘π‘›βˆ’1π‘š=1π‘šπ‘‡1(π‘š)βˆ‘π‘›βˆ’1π‘š=1𝑇1π‘š(π‘š),(31)βˆ—βˆ—=βˆ‘π‘›βˆ’1π‘š=1π‘šπ‘‡2(π‘š)βˆ‘π‘›βˆ’1π‘š=1𝑇2(π‘š),(32) where 𝑇1(π‘š) and 𝑇2(π‘š) are as given in (15) and (25).

Other Bayes estimators of α, based on the loss functions
\[
L_2(\alpha,d) = |\alpha - d|, \qquad
L_3(\alpha,d) =
\begin{cases}
0, & \text{if } |\alpha - d| < \epsilon,\ \epsilon > 0,\\
1, & \text{otherwise},
\end{cases}
\tag{33}
\]
are the posterior median and the posterior mode, respectively.
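On any posterior pmf of m, the three symmetric-loss estimates above (nearest integer to the posterior mean, posterior median, posterior mode) can be read off directly; a small sketch on a toy pmf (helper name and values ours):

```python
def posterior_summaries(pmf):
    """Posterior mean (rounded to the nearest integer, the SEL estimate),
    posterior median, and posterior mode of m for a pmf on m = 1..n-1."""
    mean = sum((m + 1) * p for m, p in enumerate(pmf))
    mode = max(range(len(pmf)), key=pmf.__getitem__) + 1
    cum, median = 0.0, len(pmf)
    for m, p in enumerate(pmf):
        cum += p
        if cum >= 0.5:          # smallest m with cumulative mass >= 1/2
            median = m + 1
            break
    return round(mean), median, mode

pmf = [0.05, 0.10, 0.20, 0.40, 0.15, 0.10]   # toy posterior on m = 1..6
print(posterior_summaries(pmf))   # -> (4, 4, 4)
```

Unlike the raw posterior mean (here 3.8), the median and mode are integers by construction, which is why they are natural estimators for a shift point.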

5. Asymmetric Loss Function

The loss function L(α, d) provides a measure of the financial consequences arising from a wrong decision rule d used to estimate an unknown quantity (a generic parameter or function thereof) α. The choice of the appropriate loss function depends on financial considerations only and is independent of the estimation procedure used. A symmetric loss function is often inappropriate since, for example, overestimation of a reliability function is usually much more serious than underestimation.

A useful asymmetric loss, known as the Linex loss function, was introduced by Varian [9]. Under the assumption that the minimal loss occurs at d = α, the Linex loss function can be expressed as
\[
L_4(\alpha,d) = \exp\!\left[q_1(d-\alpha)\right] - q_1(d-\alpha) - 1, \qquad q_1 \neq 0.
\tag{34}
\]

The sign of the shape parameter q1 reflects the direction of the asymmetry: q1 > 0 when overestimation is more serious than underestimation, and vice versa. The magnitude of q1 reflects the degree of asymmetry.

The posterior expectation of the Linex loss function is
\[
E_\alpha\{L_4(\alpha,d)\}
= \exp(q_1 d)\, E_\alpha\{\exp(-q_1\alpha)\} - q_1\left(d - E_\alpha\{\alpha\}\right) - 1,
\tag{35}
\]
where E_α{f(α)} denotes the expectation of f(α) with respect to the posterior density g(α | z). The Bayes estimate α*_L is the value of d that minimizes E_α{L_4(α, d)}:
\[
\alpha^{*}_{L} = -\frac{1}{q_1}\ln\!\left[E_\alpha\{\exp(-q_1\alpha)\}\right],
\tag{36}
\]
provided that E_α{exp(−q1 α)} exists and is finite.
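That (36) actually minimizes the posterior expected loss (35) can be checked numerically on a toy discrete posterior (all names and values here are arbitrary illustrative choices):

```python
import math

def linex_bayes_estimate(support, pmf, q1):
    """Linex Bayes estimate (36): -(1/q1) * ln E[exp(-q1 * alpha)]."""
    e = sum(p * math.exp(-q1 * a) for a, p in zip(support, pmf))
    return -math.log(e) / q1

def expected_linex_loss(support, pmf, q1, dec):
    """Posterior expectation (35) of the Linex loss (34) at decision dec."""
    return sum(p * (math.exp(q1 * (dec - a)) - q1 * (dec - a) - 1.0)
               for a, p in zip(support, pmf))

support = [1, 2, 3, 4, 5]                 # toy posterior for a parameter alpha
pmf = [0.1, 0.2, 0.4, 0.2, 0.1]
q1 = 1.5                                  # overestimation penalized
d_star = linex_bayes_estimate(support, pmf, q1)
grid = [1.0 + i / 1000.0 for i in range(4001)]
best = min(grid, key=lambda dec: expected_linex_loss(support, pmf, q1, dec))
print(abs(best - d_star) < 0.01)   # -> True
```

With q1 > 0 the estimate lands below the posterior mean (3.0), exactly the guard against overestimation described above.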

5.1. Assuming σ⁻² Unknown

Minimizing the posterior expectation E_m[L4(m, d)] of the Linex loss, where the expectation is taken with respect to the posterior density g1(m | z) given in (17), we get the Bayes estimate m*_L of m under the Linex loss as the integer nearest to
\[
m^{*}_{L} = -\frac{1}{q_1}\ln\!\left[E_m\{\exp(-q_1 m)\}\right]
= -\frac{1}{q_1}\ln\!\left[\frac{\sum_{m=1}^{n-1} e^{-q_1 m}\, T_1(m)}{\sum_{m=1}^{n-1} T_1(m)}\right].
\tag{37}
\]

Another loss function, called the general entropy (GE) loss function, proposed by Calabria and Pulcini [10], is given by
\[
L_5(\alpha,d) = \left(\frac{d}{\alpha}\right)^{q_3} - q_3\ln\!\left(\frac{d}{\alpha}\right) - 1.
\tag{38}
\]
The Bayes estimate α*_E is the value of d that minimizes E_α[L5(α, d)]:
\[
\alpha^{*}_{E} = \left[E_\alpha\!\left(\alpha^{-q_3}\right)\right]^{-1/q_3},
\tag{39}
\]
provided that E_α(α^{−q3}) exists and is finite.

Combining the General Entropy loss with the posterior density (17), we get the Bayes estimate m*_E of m as the integer nearest to
\[
m^{*}_{E} = \left[E_m\!\left(m^{-q_3}\right)\right]^{-1/q_3}
= \left[\frac{\sum_{m=1}^{n-1} m^{-q_3}\, T_1(m)}{\sum_{m=1}^{n-1} T_1(m)}\right]^{-1/q_3},
\tag{40}
\]
where T1(m) is as given in (15).
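Given the posterior weights, the estimates (37) and (40) are one-liners; a sketch on a toy posterior (the pmf and the function names are illustrative, not taken from Table 1):

```python
import math

def shift_point_linex(pmf, q1):
    """m*_L from (37): nearest integer to -(1/q1) * ln E[exp(-q1 * m)]."""
    e = sum(p * math.exp(-q1 * (m + 1)) for m, p in enumerate(pmf))
    return round(-math.log(e) / q1)

def shift_point_entropy(pmf, q3):
    """m*_E from (40): nearest integer to [E(m^(-q3))]^(-1/q3)."""
    e = sum(p * (m + 1) ** (-q3) for m, p in enumerate(pmf))
    return round(e ** (-1.0 / q3))

pmf = [0.05, 0.10, 0.20, 0.40, 0.15, 0.10]    # toy g1(m|z) on m = 1..6
print(shift_point_linex(pmf, q1=0.5), shift_point_entropy(pmf, q3=0.5))   # -> 3 3
```

The posterior mean of this pmf is 3.8, so the SEL estimate is 4; with q1 = q3 = 0.5 both asymmetric estimates drop to 3, illustrating how positive shape parameters guard against overestimation.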

5.2. Assuming σ⁻² Known

Combining the Linex loss with the posterior density (26), we get the Bayes estimate m**_L of m as the integer nearest to
\[
m^{**}_{L} = -\frac{1}{q_1}\ln\!\left[E_m\{\exp(-q_1 m)\}\right]
= -\frac{1}{q_1}\ln\!\left[\frac{\sum_{m=1}^{n-1} e^{-q_1 m}\, T_2(m)}{\sum_{m=1}^{n-1} T_2(m)}\right],
\tag{41}
\]
where T2(m) is as given in (25).

Combining the Linex loss with the posterior distributions (27) and (28), respectively, we get the Bayes estimators of β1 and β2 under the Linex loss as
\[
\beta^{**}_{1L} = -\frac{1}{q_1}\ln\!\left[E\!\left(e^{-q_1\beta_1}\right)\right]
= -\frac{1}{q_1}\ln\!\left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}
\frac{\sqrt{2\pi}\,\sigma\; e^{\,(S_{m3}-q_1\sigma^2)^2/2\sigma^2(S_{m1}+1)}}{\sqrt{S_{m1}+1}}\; G_{1m}\right],
\tag{42}
\]
\[
\beta^{**}_{2L}
= -\frac{1}{q_1}\ln\!\left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}
\left(\int_{-\infty}^{\infty}\exp\!\left[-\frac{1}{2}\,\beta_2^2\,\frac{S_{n1}-S_{m1}+1}{\sigma^2}
+ \beta_2\!\left(\frac{S_{m4}}{\sigma^2}-q_1\right)\right] d\beta_2\right) G_{2m}\right]
= -\frac{1}{q_1}\ln\!\left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1}
\frac{\sqrt{2\pi}\,\sigma\; e^{\,(S_{m4}-q_1\sigma^2)^2/2\sigma^2(S_{n1}-S_{m1}+1)}}{\sqrt{S_{n1}-S_{m1}+1}}\; G_{2m}\right],
\tag{43}
\]
where k4, h2(z), G_{1m}, and G_{2m} are as given in (21), (22), (23), and (24), respectively, and S_{m1}, S_{m2}, S_{m3}, S_{m4} are as given in (4).
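The closed forms in (42) and (43) follow from completing the square in a Gaussian integral; the underlying identity can be checked by brute-force quadrature (the values standing in for S_{m1}, S_{m3}, σ², and q1 below are arbitrary):

```python
import math

# Completed-square identity behind (42) and (43):
#   int exp(-q1*b) * exp(-(S+1) b^2 / (2 s2) + b S3 / s2) db
#     = sqrt(2 pi s2 / (S+1)) * exp((S3 - q1 s2)^2 / (2 s2 (S+1)))
S, S3, s2, q1 = 30.0, 95.0, 1.0, 1.5    # stand-ins for Sm1, Sm3, sigma^2, q1

closed = math.sqrt(2.0 * math.pi * s2 / (S + 1.0)) \
         * math.exp((S3 - q1 * s2) ** 2 / (2.0 * s2 * (S + 1.0)))

def f(b):
    return math.exp(-q1 * b - (S + 1.0) * b * b / (2.0 * s2) + b * S3 / s2)

lo, hi, n_steps = -10.0, 16.0, 200000   # wide grid, trapezoidal rule
h = (hi - lo) / n_steps
quad = h * (0.5 * f(lo) + 0.5 * f(hi)
            + sum(f(lo + i * h) for i in range(1, n_steps)))

print(abs(quad / closed - 1.0) < 1e-6)   # -> True
```

The same check applies to (43) after replacing S_{m1} and S_{m3} by S_{n1} − S_{m1} and S_{m4}.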

Minimizing the expected loss E[L5(m, d)] with respect to the posterior density g2(m | z), we get the Bayes estimate m**_E of m under the General Entropy loss as the integer nearest to
\[
m^{**}_{E} = \left[E\!\left(m^{-q_3}\right)\right]^{-1/q_3}
= \left[\frac{\sum_{m=1}^{n-1} m^{-q_3}\, T_2(m)}{\sum_{m=1}^{n-1} T_2(m)}\right]^{-1/q_3},
\tag{44}
\]
where T2(m) is as given in (25).

Note 1. The confluent hypergeometric function of the first kind 1F1(a, b; x) [11] is a degenerate form of the hypergeometric function 2F1(a, b, c; x) which arises as a solution of the confluent hypergeometric differential equation. It is also known as Kummer's function of the first kind, denoted 1F1 and defined by
\[
{}_1F_1(a,b;x) = \sum_{m=0}^{\infty} \frac{(a)_m\, x^m}{(b)_m\, m!},
\tag{45}
\]
with Pochhammer coefficients (a)_m = Γ(a+m)/Γ(a) for m ≥ 1 and (a)_0 = 1 [12, page 755]. It also has the integral representation
\[
{}_1F_1(a,b;x) = \int_0^1 \frac{e^{xu}\, u^{a-1}\,(1-u)^{b-a-1}}{B(a,\,b-a)}\, du,
\tag{46}
\]
the symbols Γ and B denoting the usual gamma and beta functions, respectively.
When a and b are both integers, some special results are obtained. If a < 0 and either b > 0 or b < a, the series yields a polynomial with a finite number of terms. If b is an integer with b ≤ 0, the function is undefined.

Note 2. pFq[{a1, …, ap}, {b1, …, bq}, z] is called a generalized hypergeometric series and is defined as (Gradshteyn and Ryzhik [8, page 1045])
\[
{}_pF_q\!\left[\{a_1,\ldots,a_p\},\{b_1,\ldots,b_q\},z\right]
= \sum_{m=0}^{\infty} \frac{(a_1)_m \cdots (a_p)_m}{(b_1)_m \cdots (b_q)_m}\,\frac{z^m}{m!}.
\tag{47}
\]
In many special cases pFq reduces automatically to other functions. For p = q + 1, pFq[a list, b list, z] has a branch cut discontinuity in the complex z plane running from 1 to ∞. The regularized pFq is finite for all finite values of its argument so long as p ≤ q.

Note 3. B(x, y) is the beta function (Euler's integral of the first kind), defined as (Gradshteyn and Ryzhik [8, pages 948, 950])
\[
B(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1}\, dt = \frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}.
\tag{48}
\]
The gamma function is as explained in (8).
Minimizing the expected loss E[L5(β_i, d)] and using the posterior distributions (27) and (28), we get the Bayes estimates of β_i, i = 1, 2, under the General Entropy loss as β**_{iE} = {E(β_i^{−q3})}^{−1/q3}, that is,
\[
\beta^{**}_{1E} = \left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1} J_{1m}\,G_{1m}\right]^{-1/q_3},
\qquad
J_{1m} = \int_{-\infty}^{\infty} \beta_1^{-q_3}\,
\exp\!\left[-\frac{1}{2\sigma^2}\,\beta_1^2\,(S_{m1}+1) + \beta_1\frac{S_{m3}}{\sigma^2}\right] d\beta_1,
\tag{49}
\]
\[
\beta^{**}_{2E} = \left[\frac{k_4}{h_2(z)}\sum_{m=1}^{n-1} J_{2m}\,G_{2m}\right]^{-1/q_3},
\qquad
J_{2m} = \int_{-\infty}^{\infty} \beta_2^{-q_3}\,
\exp\!\left[-\frac{1}{2\sigma^2}\,\beta_2^2\,(S_{n1}-S_{m1}+1) + \beta_2\frac{S_{m4}}{\sigma^2}\right] d\beta_2.
\tag{51}
\]
The integrals J_{1m} and J_{2m} can be evaluated in closed form as linear combinations of the gamma functions Γ((1−q3)/2) and Γ(1−q3/2) of (8) and the hypergeometric functions 1F1 and pFq of Notes 1 and 2, with argument −S²_{m3}/2σ²(S_{m1}+1) for J_{1m} and −S²_{m4}/2σ²(S_{n1}−S_{m1}+1) for J_{2m}; the corresponding coefficients (denoted k_{5m}, G_{3m}, G_{4m} for J_{1m} and k_{6m}, G_{5m}, G_{6m} for J_{2m}) are algebraic functions of S_{m1}, S_{n1}, S_{m3}, S_{m4}, σ², and q3 only. Here k4, h2(z), G_{1m}, and G_{2m} are as given in (21), (22), (23), and (24), respectively, and S_{m1}, S_{m2}, S_{m3}, S_{m4} are as given in (4).

Remark 1. Putting q3 = −1 in (40) and (44), we get the Bayes estimators of m under the squared error loss, the posterior means given in (31) and (32). Note that for q3 = −1 the GE loss yields the same Bayes estimator as the squared error loss function.
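Remark 1 is easy to verify numerically: with q3 = −1 the GE estimate [E(m^{−q3})]^{−1/q3} is exactly the posterior mean (toy pmf and helper name are ours):

```python
def entropy_estimate(pmf, q3):
    """Unrounded GE Bayes estimate [E(m^(-q3))]^(-1/q3) from (40)/(44)."""
    e = sum(p * (m + 1) ** (-q3) for m, p in enumerate(pmf))
    return e ** (-1.0 / q3)

pmf = [0.05, 0.10, 0.20, 0.40, 0.15, 0.10]      # toy posterior on m = 1..6
mean = sum((m + 1) * p for m, p in enumerate(pmf))
print(abs(entropy_estimate(pmf, -1.0) - mean) < 1e-12)   # -> True
```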

6. Numerical Study

6.1. Illustration

Let us consider the two-phase regression model
\[
y_t = 3 x_t + \varepsilon_t, \quad t = 1, 2, 3, 4,
\qquad
y_t = 3.5 x_t + \varepsilon_t, \quad t = 5, 6, \ldots, 15,
\tag{53}
\]
where the ε_t are i.i.d. N(0, 1) random errors. We take the first 15 values of x_t and ε_t from Table 4.1 of Zellner [13] to generate the 15 sample values (x_t, y_t), t = 1, 2, …, 15, given in Table 1. The values of β1, β2, and σ² were themselves random observations: β1 and β2 were drawn from the standard normal distribution, and the precision 1/σ² from a gamma distribution with μ = 1 and coefficient of variation ∅ = 1.4, resulting in c = 0.5 and d = 0.5.

We have calculated the posterior mean, posterior median, and posterior mode of m. The results are shown in Table 2.

We also compute the Bayes estimators m*_E of m using (40) for unknown σ² and (44) for known σ², and m*_L using (37) for unknown σ² and (41) for known σ², for the data given in Table 1. The results are shown in Table 3.

Table 3 shows that for small values of |q| (q = 0.9, 0.5, 0.2, 0.1) the Linex loss function is almost symmetric and nearly quadratic, and the values of the Bayes estimates under such a loss are not far from the posterior mean. Table 3 also shows that for q1 = q3 = 1.5, 1.2 the Bayes estimates are less than the actual value m = 4.

It can be seen from Table 3 that a positive sign of the shape parameter of the loss functions reflects that overestimation is more serious than underestimation. Thus, the problem of overestimation can be addressed by taking a positive and large value of the shape parameter of the Linex and General Entropy loss functions.

For π‘ž1=π‘ž3=βˆ’1, βˆ’2, Bayes estimates are quite large than actual value π‘š=4. It can be seen from Table 3 that the negative sign of shape parameter of loss functions reflects underestimation is more serious than overestimation. Thus, problem of underestimation can be solved by taking the value of shape parameters of Linex and General Entropy loss functions negative.

We obtain the Bayes estimators β**_{1L}, β**_{2L}, β**_{1E}, and β**_{2E} of β1 and β2 using (42), (43), (49), and (51), respectively, for the data given in Table 1 and for different values of the shape parameters q1 and q3. The results are shown in Table 4.

Tables 3 and 4 show that as the values of the shape parameters of the Linex and General Entropy loss functions increase, the values of the Bayes estimates decrease.

7. Sensitivity of Bayes Estimates

In this section, we study the sensitivity of the Bayes estimators obtained in Sections 4 and 5 with respect to changes in the priors of the parameters. The mean μ of the gamma prior on σ⁻² has been used as prior information in computing the parameters c and d of the prior. We computed the posterior mean m* using (31) and m** using (32) for the data given in Table 1, considering different sets of values of μ. Following Calabria and Pulcini [10], we regard the prior information as correct if the true value of σ⁻² is close to the prior mean μ, and as wrong if σ⁻² is far from μ. We observed that the posterior mode of m appears to be robust with respect to both a correct and a wrong choice of the prior density of σ⁻². This can be seen from Table 5.

Table 5 shows that when the prior mean μ = 1 equals the actual value of σ⁻², that is, for a correct choice of the prior of σ⁻², the value of the posterior-mode Bayes estimator is 4, the correct change point. When μ = 0.5 or 1.5 (far from the true value σ⁻² = 1), that is, for a wrong choice of the prior of σ⁻², the posterior-mode estimate remains 4, but the posterior mean and posterior median do not remain the same. Thus, the posterior mode is not sensitive to a wrong choice of the prior density of σ⁻², while the posterior mean and posterior median are.

8. Simulation Study

In Sections 4 and 5 we obtained Bayes estimates of m on the basis of the generated data given in Table 1 for given values of the parameters. To justify the results, we generated 10,000 different random samples with m = 4, n = 15, β1 = 3.2, 3.3, 3.4, and β2 = 3.5, 3.6, 3.7, and obtained the frequency distributions of the posterior mean and posterior median of m, of m*_L, and of m*_E under the correct prior. The results are shown in Tables 2 and 3. The value of the shape parameter of the General Entropy and Linex losses used in the simulation study for the shift point is 0.1.

We also simulated several standard normal samples. For each β1, β2, m, and n, 1000 pseudorandom samples from the two-phase linear regression model discussed in Section 2 were simulated, and the Bayes estimators of the change point m were computed using q3 = 0.9 and different prior means μ.

Table 6 leads to the conclusion that m*_L, m*_E, the posterior mode, and the posterior median perform better than the posterior mean of the change point explained in Sections 4 and 5. With the correct choice of prior, 46% of the posterior mean values are close to the actual value of the change point, 62% of the posterior median values are close to it, and 70% of the posterior mode values are close to it; 65% of the m*_L values and 66% of the m*_E values are close to the actual values of m.

9. Conclusions

In this study we discussed Bayes estimation of the shift point, an integer parameter, for which the posterior mean is less appealing; the posterior median and posterior mode appear to be better estimators, as they are always integers. Our numerical study showed that the posterior mode of m is robust with respect to both correct and wrong choices of the prior specifications on σ⁻², whereas the posterior mean and posterior median are sensitive when the prior specifications on 1/σ² deviate from the true values. Here we discussed a regression model with one change point; in practice there may be models with two or more change points. One can apply these models to econometric data such as poverty and irrigation data.

Acknowledgments

The authors would like to thank the editor and the referee for their valuable suggestions which improved the earlier version of the paper.