Abstract

Time-varying GARCH-M models are commonly used in econometrics and financial economics. Yet the recursive nature of the conditional variance makes exact likelihood analysis of these models computationally infeasible. This paper outlines the issues and suggests to employ a Markov chain Monte Carlo algorithm which allows the calculation of a classical estimator via the simulated EM algorithm or a simulated Bayesian solution in only 𝑂(𝑇) computational operations, where 𝑇 is the sample size. Furthermore, the theoretical dynamic properties of a time-varying GQARCH(1,1)-M are derived. We discuss them and apply the suggested Bayesian estimation to three major stock markets.

1. Introduction

Time series data, emerging from diverse fields appear to possess time-varying second conditional moments. Furthermore, theoretical results seem to postulate quite often, specific relationships between the second and the first conditional moment. For instance, in the stock market context, the first conditional moment of stock market’s excess returns, given some information set, is a possibly time-varying, linear function of volatility (see, e.g., Merton [1], Glosten et al. [2]). These have led to modifications and extensions of the initial ARCH model of Engle [3] and its generalization by Bollerslev [4], giving rise to a plethora of dynamic heteroscedasticity models. These models have been employed extensively to capture the time variation in the conditional variance of economic series, in general, and of financial time series, in particular (see Bollerslev et al. [5] for a survey).

Although the vast majority of the research in conditional heteroscedasticity is being processed aiming at the stylized facts of financial stock returns and of economic time series in general, Arvanitis and Demos [6] have shown that a family of time-varying GARCH-M models can in fact be consistent with the sample characteristics of time series describing the temporal evolution of velocity changes of turbulent fluid and gas molecules. Despite the fact that the latter statistical characteristics match in a considerable degree their financial analogues (e.g., leptokurtosis, volatility clustering, and quasi long-range dependence in the squares are common), there are also significant differences in the behavior of the before mentioned physical systems as opposed to financial markets (examples are the anticorrelation effect and asymmetry of velocity changes in contrast to zero autocorrelation and the leverage effect of financial returns) (see Barndorff-Nielsen and Shephard [7] as well as Mantegna and Stanley [8, 9]). It was shown that the above-mentioned family of models can even create anticorrelation in the means as far as an AR(1) time-varying parameter is introduced.

It is clear that from an econometric viewpoint it is important to study how to efficiently estimate models with partially unobserved GARCH processes. In this context, our main contribution is to show how to employ the method proposed in Fiorentini et al. [10] to achieve MCMC likelihood-based estimation of a time-varying GARCH-M model by means of feasible 𝑂(𝑇) algorithms, where 𝑇 is the sample size. The crucial idea is to transform the GARCH model in a first-order Markov’s model. However, in our model, the error term enters the in-mean equation multiplicatively and not additively as it does in the latent factor models of Fiorentini et al. [10]. Thus, we show that their method applies to more complicated models, as well.

We prefer to employ a GQARCH specification for the conditional variance (Engle [3] and Sentana [11]) since it encompasses all the existing restricted quadratic variance functions (e.g., augmented ARCH model), its properties are very similar to those of GARCH models (e.g., stationarity conditions) but avoids some of their criticisms (e.g., very easy to generalize to multivariate models). Moreover, many theories in finance involve an explicit tradeoff between the risk and the expected returns. For that matter, we use an in-mean model which is ideally suited to handling such questions in a time series context where the conditional variance may be time varying. However, a number of studies question the existence of a positive mean/variance ratio directly challenging the mean/variance paradigm. In Glosten et al. [2] when they explicitly include the nominal risk free rate in the conditioning information set, they obtain a negative ARCH-M parameter. For the above, we allow the conditional variance to affect the mean with a possibly time varying coefficient which we assume for simplicity that it follows an AR(1) process. Thus, our model is a time-varying GQARCH-M-AR(1) model.

As we shall see in Section 2.1, this model is able to capture the, so-called, stylized facts of excess stock returns. These are (i) the sample mean is positive and much smaller than the standard deviation, that is, high coefficient of variation, (ii) the autocorrelation of excess returns is insignificant with a possible exception of the 1st one, (iii) the distribution of returns is nonnormal mainly due to excess kurtosis and may be asymmetry (negative), (iv) there is strong volatility clustering, that is, significant positive autocorrelation of squared returns even for high lags, and (v) the so-called leverage effect; that is, negative errors increase future volatility more than positive ones of the same size.

The structure of the paper is as follows. In Section 2, we present the model and derive the theoretical properties the GQARCH(1,1)-M-AR(1) model. Next, we review Bayesian and classical likelihood approaches to inference for the time-varying GQARCH-M model. We show that the key task (in both cases) is to be able to produce consistent simulators and that the estimation problem arises from the existence of two unobserved processes, causing exact likelihood-based estimations computationally infeasible. Hence, we demonstrate that the method proposed by Fiorentini et al. [10] is needed to achieve a first-order Markov’s transformation of the model and thus, reducing the computations from 𝑂(𝑇2) to 𝑂(𝑇). A comparison of the efficient (𝑂(𝑇) calculations) and the inefficient (𝑂(𝑇2) ones) simulator is also given. An illustrative empirical application on weekly returns from three major stock markets is presented in Section 4, and we conclude in Section 5.

2. GQARCH(1,1)-M-AR(1) Model

The definition of our model is as follows.

Definition 2.1. The time-varying parameter GQARCH(1,1)-M-AR(1) model: π‘Ÿπ‘‘=π›Ώπ‘‘β„Žπ‘‘+πœ€π‘‘,πœ€π‘‘=π‘§π‘‘β„Žπ‘‘1/2,(2.1) where 𝛿𝑑=(1βˆ’πœ‘)𝛿+πœ‘π›Ώπ‘‘βˆ’1+πœ‘π‘’π‘’π‘‘,β„Ž(2.2)π‘‘ξ€·πœ€=πœ”+π›Όπ‘‘βˆ’1ξ€Έβˆ’π›Ύ2+π›½β„Žπ‘‘βˆ’1,(2.3)π‘§π‘‘βˆ½i.i.d.𝑁(0,1), π‘’π‘‘βˆ½i.i.d.𝑁(0,1), 𝑒𝑑, 𝑧𝑑 are independent for allπ‘‘ξ…žπ‘ , and where {π‘Ÿπ‘‘}𝑇𝑑=1 are the observed excess returns, 𝑇 is the sample size, {𝛿𝑑}𝑇𝑑=1 is an unobserved AR(1) process independent (with 𝛿0=𝛿) of {πœ€π‘‘}𝑇𝑑=1, and {β„Žπ‘‘}𝑇𝑑=1 is the conditional variance (with β„Ž0 equal to the unconditional variance and πœ€0=0) which is supposed to follow a GQARCH(1,1). It is obvious that 𝛿𝑑 is the market price of risk (see, e.g., Merton [1] Glosten at al. [2]). Let us call β„±π‘‘βˆ’1 the sequence of natural filtrations generated by the past values of {πœ€π‘‘} and {π‘Ÿπ‘‘}.

Modelling the theoretical properties of this model has been a quite important issue. Specifically, it would be interesting to investigate whether this model can accommodate the main stylized facts of the financial markets. On other hand, the estimation of the model requires its transformation into a first-order Markov’s model to implement the method of Fiorentini et al. [10]. Let us start with the theoretical properties.

2.1. Theoretical Properties

Let us consider first the moments of the conditional variance β„Žπ‘‘, needed for the moments of π‘Ÿπ‘‘. The proof of the following lemma is based on raising β„Žπ‘‘ to the appropriate power, in (2.3), and taking into account that 𝐸(𝑧4𝑑)=3, 𝐸(𝑧6𝑑)=15 and 𝐸(𝑧8𝑑)=105.

Lemma 2.2. If 105𝛼4+60𝛼3𝛽+18𝛼2𝛽2+4𝛼𝛽3+𝛽4<1, the first four moments of the conditional variance of (2.3) are given by πΈξ€·β„Žπ‘‘ξ€Έ=πœ”+𝛼𝛾2,πΈξ€·β„Ž1βˆ’(𝛼+𝛽)2𝑑=ξ€·πœ”+𝛼𝛾2ξ€Έξ€·πœ”+𝛼𝛾2ξ€Έ(1+𝛼+𝛽)+4𝛼2𝛾2ξ€·1βˆ’3𝛼2βˆ’2π›Όπ›½βˆ’π›½2ξ€Έ(,πΈξ€·β„Ž1βˆ’π›Όβˆ’π›½)3𝑑=ξ€·πœ”+𝛼𝛾2ξ€Έ3ξ€·+3πœ”+𝛼𝛾2ξ€Έξ€Ίξ€·πœ”+𝛼𝛾2ξ€Έ(𝛼+𝛽)+4𝛼2𝛾2ξ€»πΈξ€·β„Žπ‘‘ξ€Έ1βˆ’π›½3βˆ’15𝛼3βˆ’9𝛼2π›½βˆ’3𝛼𝛽2+3ξ€Ίξ€·πœ”+𝛼𝛾2ξ€Έξ€·3𝛼2+𝛽2ξ€Έ+2𝛽𝛼+4𝛼2𝛾2ξ€»πΈξ€·β„Ž(3𝛼+𝛽)2𝑑1βˆ’π›½3βˆ’15𝛼3βˆ’9𝛼2π›½βˆ’3𝛼𝛽2,πΈξ€·β„Ž4𝑑=ξ€·πœ”+𝛼𝛾2ξ€Έ2ξ€·πœ”+𝛼𝛾2ξ€Έ2+4ξ€Ίξ€·πœ”+𝛼𝛾2ξ€Έ(𝛼+𝛽)+6𝛼2𝛾2ξ€»πΈξ€·β„Žπ‘‘ξ€Έ1βˆ’105𝛼4βˆ’60𝛼3π›½βˆ’18𝛼2𝛽2βˆ’4𝛼𝛽3βˆ’π›½4ξ€·+6πœ”+𝛼𝛾2ξ€Έ2ξ€·3𝛼2+𝛽2ξ€Έ+2𝛽𝛼+8ξ€Ίξ€·πœ”+𝛼𝛾2ξ€Έ(3𝛼+𝛽)+𝛼2𝛾2𝛼2𝛾21βˆ’105𝛼4βˆ’60𝛼3π›½βˆ’18𝛼2𝛽2βˆ’4𝛼𝛽3βˆ’π›½4πΈξ€·β„Ž2𝑑+4πœ”+𝛼𝛾2ξ€Έξ€·15𝛼3+9𝛼2𝛽+3𝛼𝛽2+𝛽3ξ€Έ+6𝛼2𝛾2ξ€·15𝛼2+6𝛼𝛽+𝛽2ξ€Έ1βˆ’105𝛼4βˆ’60𝛼3π›½βˆ’18𝛼2𝛽2βˆ’4𝛼𝛽3βˆ’π›½4πΈξ€·β„Ž3𝑑.(2.4)

Consequently, the moments of π‘Ÿπ‘‘ are given in the following theorem taken from Arvanitis and Demos [6].

Theorem 2.3. The first two moments of the model in (2.1), (2.2), and (2.3) are given by πΈξ€·π‘Ÿπ‘‘ξ€Έξ€·β„Ž=π›ΏπΈπ‘‘ξ€Έξ€·π‘Ÿ,𝐸2𝑑=𝛿2+πœ‘2𝑒1βˆ’πœ‘2ξ‚ΆπΈξ€·β„Ž2π‘‘ξ€Έξ€·β„Ž+𝐸𝑑,(2.5) whereas the skewness and kurtosis coefficients are ξ€·π‘ŸSk𝑑=π‘†ξ€·π‘Ÿπ‘‘ξ€ΈVar1.5ξ€·π‘Ÿπ‘‘ξ€Έξ€·π‘Ÿ,kurt𝑑=πœ…Var2ξ€·π‘Ÿπ‘‘ξ€Έ,(2.6) where π‘†ξ€·π‘Ÿπ‘‘ξ€Έξ‚΅π›Ώ=𝛿2πœ‘+32𝑒1βˆ’πœ‘2ξ‚ΆπΈξ€·β„Ž3π‘‘ξ€Έξ€·β„Ž+3𝛿𝐸2ξ€Έ+2𝛿3𝐸3ξ€·β„Žπ‘‘ξ€Έπ›Ώβˆ’3𝛿2+πœ‘2𝑒1βˆ’πœ‘2ξ‚ΆπΈξ€·β„Ž2π‘‘ξ€Έξ€·β„Ž+πΈπ‘‘ξ€Έξ‚ΉπΈξ€·β„Žπ‘‘ξ€Έ,ξƒ©π›Ώπœ…=4+6𝛿2πœ‘2𝑒1βˆ’πœ‘2πœ‘+34𝑒1βˆ’πœ‘2ξ€Έ2ξƒͺπΈξ€·β„Ž4𝑑+3𝛿2ξ€Ί2βˆ’π›Ώ2πΈξ€·β„Žπ‘‘πΈξ€Έξ€»3ξ€·β„Žπ‘‘ξ€Έξ‚΅π›Ώ+62+πœ‘2𝑒1βˆ’πœ‘2ξ‚ΆπΈξ€·β„Ž3π‘‘ξ€Έβˆ’4𝛿2𝛿2+3πœ‘2𝑒1βˆ’πœ‘2ξ‚ΆπΈξ€·β„Ž3π‘‘ξ€ΈπΈξ€·β„Žπ‘‘ξ€Έ+ξ‚»6𝛿2𝛿2+πœ‘2𝑒1βˆ’πœ‘2ξ‚Άξ‚Ήξ‚ΌπΈξ€·β„ŽπΈ(β„Ž)βˆ’2𝐸(β„Ž)+32𝑑,(2.7) and 𝐸(β„Žπ‘‘), 𝐸(β„Ž2𝑑), 𝐸(β„Ž3𝑑),and 𝐸(β„Ž4𝑑) are given in Lemma 2.2.

In terms of stationarity, the process {π‘Ÿπ‘‘} is 4th-order stationary if and only if ||πœ‘||<1,105𝛼4+60𝛼3𝛽+18𝛼2𝛽2+4𝛼𝛽3+𝛽4<1.(2.8) These conditions are the same as in Arvanitis and Demos [6], indicating that the presence of the asymmetry parameter, 𝛾, does not affect the stationarity conditions (see also Bollerslev [4] and Sentana [11]). Furthermore, the 4th-order stationarity is needed as we would like to measure the autocorrelation of the squared π‘Ÿπ‘‘β€™s (volatility clustering), as well as the correlation of π‘Ÿ2𝑑 and π‘Ÿπ‘‘βˆ’1 (leverage effect). The dynamic moments of the conditional variance and those between the conditional variance β„Žπ‘‘ and the error πœ€π‘‘ are given in the following two lemmas (for a proof see Appendix B).

Lemma 2.4. Under the assumption of Lemma 2.2, one has that ξ€·β„ŽCov𝑑,β„Žπ‘‘βˆ’π‘˜ξ€Έ=(𝛼+𝛽)π‘˜π‘‰ξ€·β„Žπ‘‘ξ€Έ,ξ€·β„ŽCov2𝑑,β„Žπ‘‘βˆ’π‘˜ξ€Έ=π΄π‘˜ξ€ΊπΈξ€·β„Ž3π‘‘ξ€Έξ€·β„Žβˆ’πΈ2π‘‘ξ€ΈπΈξ€·β„Žπ‘‘+ξ€Έξ€»(𝛼+𝛽)π‘˜βˆ’π΄π‘˜ξ€·β„Ž(𝛼+𝛽)βˆ’π΄π΅π‘‰π‘‘ξ€Έ,ξ€·β„ŽCov𝑑,β„Ž2π‘‘βˆ’π‘˜ξ€Έ=(𝛼+𝛽)π‘˜ξ€ΊπΈξ€·β„Ž3π‘‘βˆ’1ξ€Έξ€·β„Žβˆ’πΈ2π‘‘βˆ’1ξ€ΈπΈξ€·β„Žπ‘‘βˆ’1,ξ€·β„Žξ€Έξ€»Cov2𝑑,β„Ž2π‘‘βˆ’π‘˜ξ€Έ=π΄π‘˜π‘‰ξ€·β„Ž2𝑑(+𝐡𝛼+𝛽)π‘˜βˆ’π΄π‘˜ξ€ΊπΈξ€·β„Ž(𝛼+𝛽)βˆ’π΄3π‘‘ξ€Έξ€·β„Žβˆ’πΈ2π‘‘ξ€ΈπΈξ€·β„Žπ‘‘,ξ€Έξ€»(2.9) where 𝐴=3𝛼2+𝛽2+2𝛼𝛽 and 𝐡=2[2𝛼2𝛾2+(πœ”+𝛼𝛾2)(𝛼+𝛽)].

Lemma 2.5. ξ€·β„ŽCov𝑑,πœ€π‘‘βˆ’π‘˜ξ€Έξ€·β„Ž=πΈπ‘‘πœ€π‘‘βˆ’π‘˜ξ€Έ=βˆ’2𝛼𝛾(𝛼+𝛽)π‘˜βˆ’1πΈξ€·β„Žπ‘‘ξ€Έ,ξ€·β„ŽCovπ‘‘β„Žπ‘‘βˆ’π‘˜,πœ€π‘‘βˆ’π‘˜ξ€Έξ€·β„Ž=πΈπ‘‘β„Žπ‘‘βˆ’π‘˜πœ€π‘‘βˆ’π‘˜ξ€Έ=βˆ’2𝛼𝛾(𝛼+𝛽)π‘˜βˆ’1πΈξ€·β„Ž2π‘‘βˆ’1ξ€Έ,ξ€·β„ŽCov𝑑,πœ€2π‘‘βˆ’π‘˜ξ€Έ=(𝛼+𝛽)π‘˜π‘‰ξ€·β„Žπ‘‘ξ€Έ+2𝛼(𝛼+𝛽)π‘˜βˆ’1πΈξ€·β„Ž2𝑑,ξ€·β„ŽCov2𝑑,πœ€π‘‘βˆ’π‘˜ξ€Έξ€·β„Ž=𝐸2π‘‘πœ€π‘‘βˆ’π‘˜ξ€Έ=βˆ’4𝛼𝛾(3𝛼+𝛽)π΄π‘˜βˆ’1πΈξ€·β„Ž2π‘‘ξ€Έξ‚Έξ€·βˆ’4π›Όπ›Ύπœ”+𝛼𝛾2ξ€Έ(𝛼+𝛽)π‘˜βˆ’π΄π‘˜(𝛼+𝛽)βˆ’π΄+2𝛼2𝛾2(𝛼+𝛽)π‘˜βˆ’1βˆ’π΄π‘˜βˆ’1ξ‚ΉπΈξ€·β„Ž(𝛼+𝛽)βˆ’π΄π‘‘ξ€Έ,ξ€·β„ŽCov2𝑑,πœ€2π‘‘βˆ’π‘˜ξ€Έ=π΄π‘˜ξ€ΊπΈξ€·β„Ž3π‘‘ξ€Έξ€·β„Žβˆ’πΈ2π‘‘ξ€ΈπΈξ€·β„Žπ‘‘ξ€Έξ€»+𝐡(𝛼+𝛽)π‘˜βˆ’π΄π‘˜π‘‰ξ€·β„Ž(𝛼+𝛽)βˆ’π΄π‘‘ξ€Έ+4𝛼(3𝛼+𝛽)π΄π‘˜βˆ’1πΈξ€·β„Ž3𝑑+ξ‚Έ2𝛼𝐡(𝛼+𝛽)π‘˜βˆ’1βˆ’π΄π‘˜βˆ’1ξ€·(𝛼+𝛽)βˆ’π΄+42𝛼2𝛾2+ξ€·πœ”+𝛼𝛾2π΄ξ€Έξ€Έπ‘˜βˆ’1ξ‚ΉπΈξ€·β„Ž2𝑑,ξ€·β„ŽCov2π‘‘β„Žπ‘‘βˆ’π‘˜,πœ€π‘‘βˆ’π‘˜ξ€Έξ€·β„Ž=𝐸2π‘‘β„Žπ‘‘βˆ’π‘˜πœ€π‘‘βˆ’π‘˜ξ€Έ=βˆ’4π›Όπ›Ύπ΄π‘˜βˆ’1ξ€Ίξ€·β„Ž(3𝛼+𝛽)𝐸3𝑑+ξ€·πœ”+𝛼𝛾2ξ€ΈπΈξ€·β„Ž2𝑑(ξ€Έξ€»βˆ’2𝛼𝛾𝐡𝛼+𝛽)π‘˜βˆ’1βˆ’π΄π‘˜βˆ’1πΈξ€·β„Ž(𝛼+𝛽)βˆ’π΄2𝑑,(2.10) where 𝐴 and 𝐡 are given in Lemma 2.4.

Furthermore, from Arvanitis and Demos [6] we know that the following results hold.

Theorem 2.6. The autocovariance of returns for the model in (2.1)–(2.3) is given by π›Ύπ‘˜ξ€·π‘Ÿ=Cov𝑑,π‘Ÿπ‘‘βˆ’π‘˜ξ€Έ=𝛿2ξ€·β„ŽCov𝑑,β„Žπ‘‘βˆ’π‘˜ξ€Έξ€·β„Ž+π›ΏπΈπ‘‘πœ€π‘‘βˆ’π‘˜ξ€Έ+πœ‘π‘˜πœ‘2𝑒1βˆ’πœ‘2πΈξ€·β„Žπ‘‘β„Žπ‘‘βˆ’π‘˜ξ€Έ,(2.11) and the covariance of squares’ levels and the autocovariance of squares are ξ€·π‘ŸCov2𝑑,π‘Ÿπ‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ=𝐸2π‘‘π›Ώπ‘‘βˆ’π‘˜ξ€Έξ€·β„ŽCov2𝑑,β„Žπ‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ+Cov2𝑑,π›Ώπ‘‘βˆ’π‘˜ξ€ΈπΈξ€·β„Ž2π‘‘ξ€ΈπΈξ€·β„Žπ‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ+πΈπ‘‘βˆ’π‘˜ξ€Έξ€·β„ŽCov𝑑,β„Žπ‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ+𝐸2π‘‘ξ€ΈπΈξ€·β„Ž2π‘‘πœ€π‘‘βˆ’π‘˜ξ€Έξ€·β„Ž+πΈπ‘‘πœ€π‘‘βˆ’π‘˜ξ€Έ,ξ€·π‘ŸCov2𝑑,π‘Ÿ2π‘‘βˆ’π‘˜ξ€Έ=𝐸2𝛿2π‘‘ξ€Έξ€·β„ŽCov2𝑑,β„Ž2π‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ+Cov2𝑑,𝛿2π‘‘βˆ’π‘˜ξ€ΈπΈξ€·β„Ž2π‘‘β„Ž2π‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ+𝐸2π‘‘ξ€Έξ€·β„ŽCov2𝑑,πœ€2π‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ+2𝐸2π‘‘π›Ώπ‘‘βˆ’π‘˜ξ€ΈπΈξ€·β„Ž2π‘‘β„Žπ‘‘βˆ’π‘˜πœ€π‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ+𝐸2π‘‘ξ€Έξ€·β„ŽCov𝑑,β„Ž2π‘‘βˆ’π‘˜ξ€Έξ€·β„Ž+Cov𝑑,πœ€2π‘‘βˆ’π‘˜ξ€Έξ€·β„Ž+2π›ΏπΈπ‘‘β„Žπ‘‘βˆ’π‘˜πœ€π‘‘βˆ’π‘˜ξ€Έ,(2.12) where all needed covariances and expectations of the right-hand sizes are given in Lemmas 2.4 and 2.5, 𝛿Cov2𝑑,π›Ώπ‘‘βˆ’π‘˜ξ€Έξ€·π›Ώ=Cov𝑑,𝛿2π‘‘βˆ’π‘˜ξ€Έ=2πœ‘π‘˜π›Ώπœ‘2𝑒1βˆ’πœ‘2,𝛿Cov2𝑑,𝛿2π‘‘βˆ’π‘˜ξ€Έ=4πœ‘π‘˜π›Ώ2πœ‘2𝑒1βˆ’πœ‘2+2πœ‘2π‘˜πœ‘4𝑒1βˆ’πœ‘2ξ€Έ2.(2.13)

From the above theorems and lemmas it is obvious that our model can accommodate all stylized facts. For example, negative asymmetry is possible; volatility clustering and the leverage effect (negative Cov(β„Žπ‘‘,πœ€π‘‘βˆ’π‘˜)) can be accommodated, and so forth. Furthermore, the model can accommodate negative autocorrelations, π›Ύπ‘˜, something that is not possible for the GARCH-M model (see Fiorentini and Sentana [12]). Finally, another interesting issue is the diffusion limit of the time-varying GQARCH-M process. As already presented by Arvanitis [13], the weak convergence of the Time-varying GQARCH(1,1)-M coincides with the general conclusions presented elsewhere in the literature. These are that weak limits of the endogenous volatility models are exogenous (stochastic volatility) continuous-time processes. Moreover, Arvanitis [13] suggests that there is a distributional relation between the GQARCH model and the continuous-time Ornstein-Uhlenbeck models with respect to appropriate nonnegative Levy’s processes.

Let us turn our attention to the estimation of our model. We will show that estimating our model is a hard task and the use of well-known methods such as the EM-algorithm cannot handle the problem due to the huge computational load that such methods require.

3. Likelihood-Inference: EM and Bayesian Approaches

The purpose of this section is the estimation of the time-varying GQARCH(1,1)-M model. Since our model involves two unobserved components (one from the time-varying in-mean parameter and one from the error term), the estimation method required is an EM and more specifically a simulated EM (SEM), as the expectation terms at the 𝐸 step cannot be computed. The main modern way of carrying out likelihood inference in such situations is via a Markov chain Monte Carlo (MCMC) algorithm (see Chib [14] for an extensive review). This simulation procedure can be used either to carry out Bayesian inference or to classically estimate the parameters by means of a simulated EM algorithm.

The idea behind the MCMC methods is that in order to sample a given probability distribution, which is referred to as the target distribution, a suitable Markov chain is constructed (using a Metropolis-Hasting (M-H) algorithm or a Gibbs sampling method) with the property that its limiting, invariant distribution is the target distribution. In most problems, the target distribution is absolutely continuous, and as a result the theory of MCMC methods is based on that of the Markov chains on continuous state spaces [15]. This means that by simulating the Markov chain a large number of times and recording its values a sample of (correlated) draws from the target distribution can be obtained. It should be noted that the Markov chain samplers are invariant by construction, and, therefore, the existence of the invariant distribution does not have to be checked in any particular application of MCMC method.

The Metropolis-Hasting algorithm (M-H) is a general MCMC method to produce sample variates from a given multivariate distribution. It is based on a candidate generating density that is used to supply a proposal value that is accepted with probability given as the ratio of the target density times the ratio of the proposal density. There are a number of choices of the proposal density (e.g., random walk M-H chain, independence M-H chain, tailored M-H chain) and the components may be revised either in one block or in several blocks. Another MCMC method, which is special case of the multiple block M-H method with acceptance rate always equal to one, is called the Gibbs sampling method and was brought into statistical prominence by Gelfand and Smith [16]. In this algorithm, the parameters are grouped into blocks, and each block is sampled according to the full conditional distribution denoted as πœ‹(πœ™π‘‘/πœ™/𝑑). By Bayes' theorem, we have πœ‹(πœ™π‘‘/πœ™/𝑑)βˆπœ‹(πœ™π‘‘πœ™/𝑑), the joint distribution of all blocks, and so full conditional distributions are usually quite simply derived. One cycle of the Gibbs sampling algorithm is completed by simulating {πœ™π‘‘}𝑝𝑑=1, where 𝑝 is the number of blocks, from the full conditional distributions, recursively updating the conditioning variables as one moves through each distribution. Under some general conditions, it is verified that the Markov chain generated by the M-H or the Gibbs sampling algorithm converges to the target density as the number of iterations becomes large.

Within the Bayesian framework, MCMC methods have proved very popular, and the posterior distribution of the parameters is the target density (see [17]). Another application of the MCMC is the analysis of hidden Markov’s models where the approach relies on augmenting the parameter space to include the unobserved states and simulate the target distribution via the conditional distributions (this procedure is called data augmentation and was pioneered by Tanner and Wong [18]). Kim et al. [19] discuss an MCMC algorithm of the stochastic volatility (SV) model which is an example of a state space model in which the state variable β„Žπ‘‘ (log-volatility) appears non-linearly in the observation equation. The idea is to approximate the model by a conditionally Gaussian state space model with the introduction of multinomial random variables that follow a seven-point discrete distribution.

The analysis of a time-varying GQARCH-M model becomes substantially complicated since the log-likelihood of the observed variables can no longer be written in closed form. In this paper, we focus on both the Bayesian and the classical estimation of the model. Unfortunately, the non-Markovian nature of the GARCH process implies that each time we simulate one error we implicitly change all future conditional variances. As pointed out by Shephard [20], a regrettable consequence of this path dependence in volatility is that standard MCMC algorithms will evolve in 𝑂(𝑇2) computational load (see [21]). Since this cost has to be borne for each parameter value, such procedures are generally infeasible for large financial datasets that we see in practice.

3.1. Estimation Problem: Simulated EM Algorithm

As mentioned already, the estimation problem arises because of the fact that we have two unobserved processes. More specifically, we cannot write down the likelihood function in closed form since we do not observe both πœ€π‘‘ and 𝛿𝑑. On the other hand, the conditional log-likelihood function of our model assuming that 𝛿𝑑 were observed would be the following: ℓ𝐫,πœΉβˆ£πœ™,β„±0ξ€Έξ€·=lnπ‘π«βˆ£πœΉ,πœ™,β„±0ξ€Έξ€·+lnπ‘πœΉβˆ£πœ™,β„±0ξ€Έ1=βˆ’π‘‡ln2πœ‹βˆ’2𝑇𝑑=1lnβ„Žπ‘‘βˆ’12𝑇𝑑=1ξ€·πœ€π‘‘ξ€Έ2β„Žπ‘‘βˆ’π‘‡2ξ€·πœ‘ln2π‘’ξ€Έβˆ’12𝑇𝑑=1ξ€·π›Ώπ‘‘βˆ’π›Ώ(1βˆ’πœ‘)βˆ’πœ‘π›Ώπ‘‘βˆ’1ξ€Έ2πœ‘2𝑒,(3.1) where 𝐫=(π‘Ÿ1,…,π‘Ÿπ‘‡)ξ…ž,𝜹=(𝛿1,…,𝛿𝑇)ξ…ž, and 𝐑=(β„Ž1,…,β„Žπ‘‡)ξ…ž.

However, the 𝛿𝑑’s are unobserved, and, thus, to classically estimate the model, we have to rely on an EM algorithm [22] to obtain estimates as close to the optimum as desired. At each iteration, the EM algorithm obtains πœ™(𝑛+1), where πœ™ is the parameter vector, by maximizing the expectation of the log-likelihood conditional on the data and the current parameter values, that is, 𝐸(β„“(β‹…)∣𝐫,πœ™(𝑛),β„±0) with respect to πœ™ keeping πœ™(𝑛) fixed.

The 𝐸 step, thus, requires the expectation of the complete log-likelihood. For our model, this is given by: 𝐸ℓ(β‹…)∣𝐫,πœ™(𝑛),β„±0𝑇=βˆ’π‘‡ln2πœ‹βˆ’2ξ€·πœ‘ln2π‘’ξ€Έβˆ’12𝑇𝑑=1𝐸lnβ„Žπ‘‘βˆ£π«,πœ™(𝑛),β„±0ξ€Έβˆ’12𝑇𝑑=1πΈξƒ©ξ€·πœ€π‘‘ξ€Έ2β„Žπ‘‘βˆ£π«,πœ™(𝑛),β„±0ξƒͺβˆ’12𝑇𝑑=1πΈξƒ©ξ€·π›Ώπ‘‘βˆ’π›Ώ(1βˆ’πœ‘)βˆ’πœ‘π›Ώπ‘‘βˆ’1ξ€Έ2πœ‘2π‘’βˆ£π«,πœ™(𝑛),β„±0ξƒͺ.(3.2)

It is obvious that we cannot compute such quantities. For that matter, we may rely on a simulated EM where the expectation terms are replaced by averages over simulations, and so we will have an SEM or a simulated score. The SEM log-likelihood is: 𝑇SEMβ„“=βˆ’π‘‡ln2πœ‹βˆ’2ξ€·πœ‘ln2π‘’ξ€Έβˆ’121𝑀𝑀𝑇𝑖=1𝑑=1lnβ„Žπ‘‘(𝑖)βˆ’121𝑀𝑀𝑇𝑖=1𝑑=1ξ‚€πœ€π‘‘(𝑖)2β„Žπ‘‘(𝑖)βˆ’π‘‡2(1βˆ’πœ‘)2𝛿2πœ‘2π‘’βˆ’121πœ‘2𝑒1𝑀𝑀𝑇𝑖=1𝑑=1𝛿𝑑(𝑖)2+(1βˆ’πœ‘)π›Ώπœ‘2𝑒1𝑀𝑀𝑇𝑖=1𝑑=1𝛿𝑑(𝑖)+πœ‘πœ‘2𝑒1𝑀𝑀𝑇𝑖=1𝑑=1𝛿𝑑(𝑖)𝛿(𝑖)π‘‘βˆ’1+(1βˆ’πœ‘)πœ‘π›Ώπœ‘2𝑒1𝑀𝑀𝑇𝑖=1𝑑=1𝛿(𝑖)π‘‘βˆ’1βˆ’πœ‘22πœ‘2𝑒1𝑀𝑀𝑇𝑖=1𝑑=1𝛿(𝑖)π‘‘βˆ’12.(3.3) Consequently, we need to obtain the following quantities: βˆ‘(1/𝑀)𝑀𝑖=1βˆ‘π‘‡π‘‘=1lnβ„Žπ‘‘(𝑖), βˆ‘(1/𝑀)𝑀𝑖=1βˆ‘π‘‡π‘‘=1((πœ€π‘‘(𝑖))2/β„Žπ‘‘(𝑖)), βˆ‘(1/𝑀)𝑀𝑖=1βˆ‘π‘‡π‘‘=1𝛿𝑑(𝑖), βˆ‘(1/𝑀)𝑀𝑖=1βˆ‘π‘‡π‘‘=1𝛿(𝑖)π‘‘βˆ’1,β€‰β€‰βˆ‘(1/𝑀)𝑀𝑖=1βˆ‘π‘‡π‘‘=1𝛿𝑑(𝑖)𝛿(𝑖)π‘‘βˆ’1 and βˆ‘(1/𝑀)𝑀𝑖=1βˆ‘π‘‡π‘‘=1(𝛿𝑑(𝑖))2, βˆ‘(1/𝑀)𝑀𝑖=1βˆ‘π‘‡π‘‘=1𝛿2(𝑖)π‘‘βˆ’1, where 𝑀 is the number of simulations.

Thus, to classically estimate our model by using an SEM algorithm, the basic problem is to sample from π‘βˆ£πœ™,𝐫,β„±0 where πœ™ is the vector of the unknown parameters and also sample from πœΉβˆ£πœ™,𝐫,β„±0.

In terms of identification, the model is not, up to second moment, identified (see Corollary 1 in Sentana and Fiorentini [23]). The reason is that we can transfer unconditional variance from the error, πœ€π‘‘, to the price of risk, 𝛿𝑑, and vice versa. One possible solution is to fix πœ” such that 𝐸(β„Žπ‘‘) is 1 or to set πœ‘π‘’ to a specific value. In fact in an earlier version of the paper, we fixed πœ‘π‘’ to be 1 (see Anyfantaki and Demos [24]). Nevertheless, from a Bayesian viewpoint, the lack of identification is not too much of a problem, as the parameters are identified through their proper priors (see Poirier [25]).

Next, we will exploit the Bayesian estimation of the model, and, since we need to resort to simulations, we will show that the key task is again to simulate from πœΉβˆ£πœ™,𝐫,β„±0.

3.2. Simulation-Based Bayesian Inference

In our problem, the key issue is that the likelihood function of the sample 𝑝(π«βˆ£πœ™,β„±0) is intractable which precludes the direct analysis of the posterior density 𝑝(πœ™βˆ£π«,β„±0). This problem may be overcome by focusing instead on the posterior density of the model using Bayes’ rule:𝑝(πœ™,𝜹∣𝐫)βˆπ‘(πœ™,𝜹)𝑝(π«βˆ£πœ™,𝜹)βˆπ‘(πœΉβˆ£πœ™)𝑝(πœ™)𝑝(π«βˆ£πœ™,𝜹),(3.4) whereξ€·πœ™=𝛿,πœ‘,πœ‘π‘’ξ€Έ,𝛼,𝛽,𝛾,πœ”ξ…ž.(3.5)

Now,𝑝(πœΉβˆ£πœ™)=𝑇𝑑=1𝑝𝛿𝑑𝛿,πœ‘,πœ‘2𝑒=𝑇𝑑=11√2πœ‹πœ‘2π‘’ξƒ©βˆ’ξ€·π›Ώexpπ‘‘βˆ’π›Ώ(1βˆ’πœ‘)βˆ’πœ‘π›Ώπ‘‘βˆ’1ξ€Έ22πœ‘2𝑒ξƒͺ.(3.6)

On the other hand,𝑝(π«βˆ£πœ™,𝜹)=𝑇𝑑=1π‘ξƒ©π‘Ÿπ‘‘ξ€½π‘Ÿπ‘‘βˆ’1ξ€Ύξƒͺ=,𝜹,πœ™π‘‡ξ‘π‘‘=11√2πœ‹β„Žπ‘‘ξƒ©βˆ’πœ€exp2𝑑2β„Žπ‘‘ξƒͺ(3.7) is the full-information likelihood. Once we have the posterior density, we get the parameters' marginal posterior density by integrating the posterior density. MCMC is one way of numerical integration.

The Hammersley-Clifford theorem (see Clifford [26]) says that a joint distribution can be characterized by its complete conditional distribution. Hence, given initial values {𝛿𝑑}(0),πœ™(0), we draw {𝛿𝑑}(1) from 𝑝({𝛿𝑑}(1)∣𝐫,πœ™(0)) and then πœ™(1) from 𝑝(πœ™(1)∣{𝛿𝑑}(1),𝐫). Iterating these steps, we finally get ({𝛿𝑑}(𝑖),πœ™(𝑖))𝑀𝑖=1, and under mild conditions it is shown that the distribution of the sequence converges to the joint posterior distribution 𝑝(πœ™,𝜹∣𝐫).

The above simulation procedure may be carried out by first dividing the parameters into two blocks:πœ™1=𝛿,πœ‘,πœ‘2𝑒,πœ™2=(𝛼,𝛽,𝛾,πœ”),(3.8) Then the algorithm is described as follows.(1)Initialize πœ™.(2)Draw from 𝑝(π›Ώπ‘‘βˆ£π›Ώβ‰ π‘‘,𝐫,πœ™).(3)Draw from 𝑝(πœ™βˆ£πœΉ,𝐫) in the following blocks:(i)draw from 𝑝(πœ™1∣𝜹,𝐫) using the Gibbs sampling. This is updated in one block;(ii)draw from 𝑝(πœ™2∣𝐫) by M-H. This is updated in a second block.(4)Go to  (2).

We review the implementation of each step.

3.2.1. Gibbs Sampling

The task of simulating from an AR model has been already discussed. Here, we will follow the approach of Chib [27], but we do not have any MA terms which makes inference simpler.

Suppose that the prior distribution of (𝛿,πœ‘2𝑒,πœ‘) is given by:𝑝𝛿,πœ‘2𝑒,πœ‘=π‘π›Ώβˆ£πœ‘2π‘’ξ€Έπ‘ξ€·πœ‘2𝑒𝑝(πœ‘),(3.9) which means that 𝛿,πœ‘2𝑒 is a priory independent of πœ‘.

Also the following holds for the prior distributions of the parameter subvector πœ™1: π‘ξ€·π›Ώβˆ£πœ‘2π‘’ξ€Έξ‚€π›ΏβˆΌπ‘π‘π‘Ÿ,πœ‘2π‘’πœŽ2π›Ώπ‘π‘Ÿξ‚,π‘ξ€·πœ‘2π‘’ξ€Έξ‚΅π‘£βˆΌIG02,𝑑02ξ‚Ά,π‘ξ€·πœ‘(πœ‘)βˆΌπ‘0,𝜎2πœ‘0ξ€ΈπΌπœ‘,(3.10) where πΌπœ‘ ensures that πœ‘ lies outside the unit circle, IG is the inverted gamma distribution, and the hyperparameters 𝑣0,𝑑0,π›Ώπ‘π‘Ÿ,𝜎2π›Ώπ‘π‘Ÿ,πœ‘0,𝜎2πœ‘0 have to be defined.

Now, the joint posterior is proportional to 𝑝𝛿,πœ‘,πœ‘2π‘’ξ€Έβˆβˆ£π«,πœΉπ‘‡ξ‘π‘‘=11√2πœ‹πœ‘2π‘’ξƒ―βˆ’ξ€·π›Ώexpπ‘‘βˆ’(1βˆ’πœ‘)π›Ώβˆ’πœ‘π›Ώπ‘‘βˆ’1ξ€Έ22πœ‘2π‘’ξƒ°ξ‚€π›ΏΓ—π‘π‘π‘Ÿ,πœ‘2π‘’πœŽ2π›Ώπ‘π‘Ÿξ‚ξ‚΅π‘£Γ—IG02,𝑑02ξ‚Άξ€·πœ‘Γ—π‘0,𝜎2πœ‘0ξ€ΈπΌπœ‘.(3.11) From a Bayesian viewpoint, the right-hand side of the above equation is equal to the β€œaugmented” prior, that is, the prior augmented by the latent 𝜹 (We would like to thank the associate editor for bringing this to our attention.) We proceed to the generation of these parameters.

Generation of 𝛿
First we see how to generate 𝛿. Following again Chib [27], we may write π›Ώβˆ—π‘‘=π›Ώπ‘‘βˆ’πœ‘π›Ώπ‘‘βˆ’1,π›Ώβˆ—π‘‘βˆ£β„±π‘‘βˆ’1ξ€·βˆΌπ‘(1βˆ’πœ‘)𝛿,πœ‘2𝑒,(3.12) or, otherwise, π›Ώβˆ—π‘‘=(1βˆ’πœ‘)𝛿+𝑣𝑑,π‘£π‘‘ξ€·βˆΌπ‘0,πœ‘2𝑒.(3.13) Under the above and using Chib's (1993) notation, we have that the proposal distribution is the following Gaussian distribution (see Chib [27] for a proof).Proposition 3.1. The proposal distribution of 𝛿 is π›Ώβˆ£πœΉ,πœ™,πœ‘2π‘’ξ‚€ΜƒβˆΌπ‘π›Ώ,πœ‘2π‘’ξ‚πœŽ2𝛿,(3.14) where ̃𝛿=ξ‚πœŽ2π›Ώξƒ©π›Ώπ‘π‘ŸπœŽ2π›Ώπ‘π‘Ÿ+(1βˆ’πœ‘)𝑇𝑑=1π›Ώβˆ—π‘‘ξƒͺ,ξ‚πœŽ2𝛿=1𝜎2π›Ώπ‘π‘Ÿ+(1βˆ’πœ‘)2ξƒͺβˆ’1.(3.15)
Hence, the generation of 𝛿 is completed, and we may turn on the generation of the other parameters.

Generation of πœ‘2𝑒
For the generation of πœ‘2𝑒 and using [27] notation, we have the following.
Proposition 3.2. The proposal distribution of πœ‘2𝑒 is πœ‘2π‘’ξ‚΅βˆ£πœΉ,πœ™,π›ΏβˆΌIGπ‘‡βˆ’π‘£02,𝑑0+𝑄+𝑑2ξ‚Ά,(3.16) where 𝑄=π›Ώβˆ’π›Ώpπ‘Ÿξ€Έ2πœŽπ›Ώβˆ’2π‘π‘Ÿ,𝑑=𝑇𝑑=2ξ€Ίπ›Ώβˆ—π‘‘ξ€»βˆ’π›Ώ(1βˆ’πœ‘)2.(3.17)
Finally, we turn on the generation of πœ‘.

Generation of πœ‘
For the generation of πœ‘, we follow again Chib [27] and write 𝛿𝑑=(1βˆ’πœ‘)π›Ώβˆ’πœ‘π›Ώπ‘‘βˆ’1+𝑣𝑑,π‘£π‘‘ξ€·βˆΌπ‘0,πœ‘2𝑒.(3.18) We may now state the following proposition (see Chib [27] for a proof).
Proposition 3.3. The proposal distribution of πœ‘ is πœ‘2∣𝜹,𝛿,πœ‘2π‘’ξ‚€βˆΌπ‘ξ‚πœ‘,ξ‚πœŽ2πœ‘ξ‚,(3.19) where ξ‚πœ‘=ξ‚πœŽ2πœ‘ξƒ©πœŽπœ‘βˆ’20πœ‘0+πœ‘π‘’π‘‡βˆ’2𝑑=1ξ€·π›Ώπ‘‘βˆ’1π›Ώβˆ’π›Ώξ€Έξ€·π‘‘ξ€Έξƒͺ,βˆ’π›Ώξ‚πœŽπœ‘βˆ’2=πœŽπœ‘βˆ’20+πœ‘π‘’π‘‡βˆ’2𝑑=1ξ€·π›Ώπ‘‘βˆ’1ξ€Έβˆ’π›Ώ2.(3.20)
The Gibbs sampling scheme has been completed, and the next step of the algorithm requires the generation of the conditional variance parameters via an M-H algorithm which is now presented.

3.2.2. Metropolis-Hasting

Step (3)-(ii) is the task of simulating from the posterior of the parameters of a GQARCH-M process. This has been already addressed by Kim et al. [19], Bauwens and Lubrano [28], Nakatsuma [29], Ardia [30] and others.

First, we need to decide on the priors. For the parameters 𝛼,𝛽,𝛾,πœ”, we use normal densities as priors:π‘ξ€·πœ‡(𝜢)βˆΌπ‘π›Ό,Ξ£πœΆξ€ΈπΌπœΆ,ξ‚€πœ‡π‘(𝛽)βˆΌπ‘π›½,𝜎2𝛽𝐼𝛽(3.21) and similarly ξ€·πœ‡π‘(𝛾)βˆΌπ‘π›Ύ,𝜎2𝛾,(3.22) where 𝜢=(πœ”,𝛼)ξ…ž,𝐼𝜢,𝐼𝛽 are the indicators ensuring the constraints 𝜢>0,𝛼+𝛽<1 and 𝛽>0,𝛼+𝛽<1, respectively. πœ‡.,𝜎2. are the hyperparameters.

We form the joint prior by assuming prior independence between 𝜢,𝛽,𝛾, and the joint posterior is then obtained by combining the joint prior and likelihood function by Bayes' rule:𝑝(𝜢,𝛽,π›Ύβˆ£π«)βˆπ‘‡ξ‘π‘‘=11√2πœ‹β„Žπ‘‘ξƒ©βˆ’πœ€exp2𝑑2β„Žπ‘‘ξƒͺξ€·πœ‡Γ—π‘π›Ό,Ξ£πœΆξ€ΈπΌπœΆξ‚€πœ‡Γ—π‘π›½,𝜎2π›½ξ‚πΌπ›½ξ€·πœ‡Γ—π‘π›Ύ,𝜎2𝛾.(3.23)

For the M-H algorithm, we use the following approximated GARCH model as in Nakatsuma [29] which is derived by the well-known property of GARCH models [4]: πœ€2π‘‘ξ€·πœ€=πœ”+π›Όπ‘‘βˆ’1ξ€Έβˆ’π›Ύ2+π›½πœ€2π‘‘βˆ’1+π‘€π‘‘βˆ’π›½π‘€π‘‘βˆ’1,(3.24) where 𝑀𝑑=πœ€2π‘‘βˆ’β„Žπ‘‘ with π‘€π‘‘βˆΌπ‘(0,2β„Ž2𝑑).

Then the corresponding approximated likelihood is written as π‘ξ€·πœ€2∣𝜹,𝐫,πœ™2ξ€Έ=𝑇𝑑=112β„Žπ‘‘βˆšπœ‹βŽ‘βŽ’βŽ’βŽ’βŽ£βˆ’ξ‚€πœ€exp2π‘‘ξ€·πœ€βˆ’πœ”βˆ’π›Όπ‘‘βˆ’1ξ€Έβˆ’π›Ύ2βˆ’π›½πœ€2π‘‘βˆ’1+π›½π‘€π‘‘βˆ’124β„Ž2π‘‘βŽ€βŽ₯βŽ₯βŽ₯⎦,(3.25) and the generation of 𝜢,𝛽,𝛾 is based on the above likelihood where we update {β„Žπ‘‘} each time after the corresponding parameters are updated. The generation of the four variance parameters is given.

Generation of 𝛼
For the generation of 𝜢, we first note that 𝑀𝑑 in (3.32), below, can be written as a linear function of 𝜢: 𝑀𝑑=πœ€2π‘‘βˆ’πœπ‘‘πœΆ,(3.26) where πœπ‘‘=[Μƒπœ„π‘‘,Μ‚πœ€2𝑑] with πœ€2𝑑=Μƒπœ€2π‘‘βˆ’π›½Μƒπœ€2π‘‘βˆ’1,Μƒπœ€2𝑑=πœ€2𝑑+π›½Μƒπœ€2π‘‘βˆ’1,Μ‚πœ€2𝑑=ξ€·πœ€π‘‘βˆ’1ξ€Έβˆ’π›Ύ2+π›½Μ‚πœ€2π‘‘βˆ’1,Μƒπœ„π‘‘=1+π›½Μƒπœ„π‘‘βˆ’1.(3.27)
Now, let the two following vectors beπ‘ŒπœΆ=ξ‚ƒπœ€21,…,πœ€2π‘‡ξ‚„ξ…ž,π‘‹πœΆ=ξ€Ίπœξ…ž1,…,πœξ…žπ‘‡ξ€»ξ…ž.(3.28)
Then the likelihood function of the approximated model is rewritten as π‘ξ€·πœ€2∣𝐫,𝜹,πœ™2ξ€Έ=𝑇𝑑=112πœ‹2β„Ž2π‘‘ξ€ΈβŽ‘βŽ’βŽ’βŽ’βŽ£βˆ’ξ‚€expπœ€2π‘‘βˆ’πœπ‘‘πœΆξ‚22ξ€·2β„Ž2π‘‘ξ€ΈβŽ€βŽ₯βŽ₯βŽ₯⎦.(3.29)
Using this we have the following proposal distribution of 𝜢 (see Nakatsuma [29] or Ardia [30] for a proof).
Proposition 3.4. The proposal distribution of 𝜢 is πœΆβˆ£π‘Œ,𝑋,Ξ£,πœ™2βˆ’πœΆξ‚€βˆΌπ‘ξπœ‡πœΆ,ξΞ£πœΆξ‚πΌπœΆ,(3.30) where ξπœ‡πœΆ=Σ𝜢(π‘‹ξ…žπœΆΞ›βˆ’1π‘ŒπœΆ+Ξ£πœΆβˆ’1πœ‡πœΆ), Σ𝜢=(π‘‹ξ…žπœΆΞ›βˆ’1π‘‹πœΆ+Ξ£πœΆβˆ’1)βˆ’1, and Ξ›=diag(2β„Ž21,…,2β„Ž2𝑇). 𝐼𝜢 imposes the restriction that 𝜢>0 and 𝛼+𝛽<1.
Hence a candidate ξ‚πœΆ is sampled from this proposal density and accepted with probability: ξƒ―π‘ξ€·ξ‚ξ€Έπ‘žξ€·πœΆmin𝜢,𝛽,π›Ύβˆ£πœΉ,π«βˆ—βˆ£ξ‚ξ€ΈπœΆ,𝛽,𝛾,𝜹,𝐫𝑝(πœΆβˆ—ξ€·ξ‚,𝛽,π›Ύβˆ£πœΉ,𝐫)π‘žπœΆβˆ£πœΆβˆ—ξ€Έξƒ°,𝛽,𝛾,𝜹,𝐫,1,(3.31) where πœΆβˆ— is the previous draw.

Similar procedure is used for the generation of 𝛽 and 𝛾.

Generation of 𝛽
Following Nakatsuma [29], we linearize 𝑀𝑑 by the first-order Taylor expansion 𝑀𝑑(𝛽)β‰ˆπ‘€π‘‘ξ€·π›½βˆ—ξ€Έ+πœ‰π‘‘ξ€·π›½βˆ—ξ€Έξ€·π›½βˆ’π›½βˆ—ξ€Έ,(3.32) where πœ‰π‘‘ is the first-order derivative of 𝑀𝑑(𝛽) evaluated at π›½βˆ— the previous draw of the M-H sampler.
Define as π‘Ÿπ‘‘=π‘€π‘‘ξ€·π›½βˆ—ξ€Έ+π‘”π‘‘ξ€·π›½βˆ—ξ€Έπ›½βˆ—,(3.33) where 𝑔𝑑(π›½βˆ—)=βˆ’πœ‰π‘‘(π›½βˆ—) which is computed by the recursion: 𝑔𝑑=πœ€2π‘‘βˆ’1βˆ’π‘€π‘‘βˆ’1+π›½βˆ—π‘”π‘‘βˆ’1,(3.34)πœ‰π‘‘=0 for 𝑑≀0 [30]. Then, 𝑀𝑑(𝛽)β‰ˆπ‘Ÿπ‘‘βˆ’π‘”π‘‘(𝛽)𝛽.(3.35)
Let the following two vectors be π‘Œπ›½=ξ€Ίπ‘Ÿ1,…,π‘Ÿπ‘‡ξ€»ξ…ž,𝑋𝛽=𝑔1,…,π‘”π‘‡ξ€»ξ…ž.(3.36)
Then, the likelihood function of the approximated model is rewritten as π‘ξ€·πœ€2∣𝜹,𝐫,πœ™2ξ€Έ=𝑇𝑑=112πœ‹2β„Ž2π‘‘ξ€Έξƒ¬βˆ’ξ€½π‘€expπ‘‘ξ€·π›½βˆ—ξ€Έ+πœ‰π‘‘ξ€·π›½βˆ—ξ€Έξ€·π›½βˆ’π›½βˆ—ξ€Έξ€Ύ22ξ€·2β„Ž2𝑑.(3.37)
We have the following proposal distribution for 𝛽 (for a proof see Nakatsuma [29] or Ardia [30]).

Proposition 3.5. The proposal distribution for 𝛽 is π›½βˆ£π‘Œ,𝑋,𝜎2𝛽,πœ™2βˆ’π›½ξ‚€βˆΌπ‘ξπœ‡π›½,𝜎2𝛽𝐼𝛽,(3.38) where ξπœ‡π›½=𝜎2𝛽(π‘‹ξ…žπ›½Ξ›βˆ’1π‘Œπ›½+(πœ‡π›½/𝜎2𝛽)), 𝜎2𝛽=(π‘‹ξ…žπ›½Ξ›βˆ’1𝑋𝛽+(1/𝜎2𝛽))βˆ’1, and Ξ›=diag(2β„Ž41,…,2β„Ž4𝑇). 𝐼𝛽 imposes the restriction that 𝛽>0 and 𝛼+𝛽<1.
Hence, a candidate ̃𝛽 is sampled from this proposal density and accepted with probability: ξƒ―π‘ξ€·Μƒξ€Έπ‘žξ€·π›½min𝛽,𝛼,π›Ύβˆ£πœΉ,π«βˆ—βˆ£Μƒξ€Έπ›½,𝛼,𝛾,𝜹,π«π‘ξ€·π›½βˆ—ξ€Έπ‘žξ€·Μƒ,𝛼,π›Ύβˆ£πœΉ,π«π›½βˆ£π›½βˆ—ξ€Έξƒ°,𝛼,𝛾,𝜹,𝐫,1(3.39)

Finally, we explain the generation of 𝛾.

Generation of 𝛾
As with 𝛽, we linearize 𝑀𝑑 by a first-order Taylor, expansion at a point π›Ύβˆ— the previous draw in the M-H sampler. In this case, π‘Ÿπ‘‘=π‘€π‘‘ξ€·π›Ύβˆ—ξ€Έβˆ’π‘”π‘‘ξ€·π›Ύβˆ—ξ€Έπ›Ύβˆ—,(3.40) where 𝑔𝑑(π›Ύβˆ—)=βˆ’πœ‰π‘‘(π›Ύβˆ—) which is computed by the recursion: π‘”π‘‘ξ€·πœ€=βˆ’2π›Όπ‘‘βˆ’1βˆ’π›Ύβˆ—ξ€Έ+π›½π‘”π‘‘βˆ’1,(3.41) and 𝑔𝑑=0 for 𝑑≀0.
Then 𝑀𝑑(𝛾)β‰ˆπ‘Ÿπ‘‘βˆ’π‘”π‘‘π›Ύ.(3.42)
Let again π‘Œπ›Ύ=ξ€Ίπ‘Ÿ1,…,π‘Ÿπ‘‡ξ€»ξ…ž,𝑋𝛾=𝑔1,…,π‘”π‘‡ξ€»ξ…ž(3.43) and the likelihood function of the approximated model is rewritten as π‘ξ€·πœ€2∣𝜹,𝐫,πœ™2ξ€Έ=𝑇𝑑=112πœ‹2β„Ž2π‘‘ξ€Έξƒ¬βˆ’ξ€½π‘€expπ‘‘ξ€·π›Ύβˆ—ξ€Έβˆ’π‘”π‘‘ξ€·π›Ύβˆ—ξ€Έπ›Ύβˆ—ξ€Ύ22ξ€·2β„Ž2𝑑.(3.44)
Thus, we have the following proposal distribution for 𝛾 (for proof see Nakatsuma [29] and Ardia [30]).

Proposition 3.6. The proposal distribution of 𝛾 is π›Ύβˆ£π‘Œ,𝑋,𝜎2𝛾,πœ™2βˆ’π›Ύξ‚€βˆΌπ‘ξπœ‡π›Ύ,𝜎2𝛾,(3.45) where ξπœ‡π›Ύ=𝜎2𝛾(π‘‹ξ…žπ›ΎΞ›βˆ’1π‘Œπ›Ύ+(πœ‡π›Ύ/𝜎2𝛾)), 𝜎2𝛾=(π‘‹ξ…žπ›ΎΞ›βˆ’1𝑋𝛾+(1/𝜎2𝛾))βˆ’1, and Ξ›=diag(2β„Ž41,…,2β„Ž4𝑇). A candidate ̃𝛾 is sampled from this proposal density and accepted with probability: 𝛾min𝑝(̃𝛾,𝛼,π›½βˆ£πœΉ,𝐫)π‘žβˆ—ξ€Έβˆ£Μƒπ›Ύ,𝛼,𝛽,𝜹,𝐫𝑝(π›Ύβˆ—,𝛼,π›½βˆ£πœΉ,𝐫)π‘ž(Μƒπ›Ύβˆ£π›Ύβˆ—ξƒ°,𝛼,𝛽,𝜹,𝐫),1.(3.46)

The algorithm described above is a special case of a MCMC algorithm, which converges as it iterates, to draws from the required density 𝑝(πœ™,𝜹∣𝐫). Posterior moments and marginal densities can be estimated (simulation consistently) by averaging the relevant function of interest over the sample variates. The posterior mean of πœ™ is simply estimated by the sample mean of the simulated πœ™ values. These estimated values can be made arbitrarily accurate by increasing the simulation sample size. However, it should be remembered that sample variates from an MCMC algorithm are a high dimensional (correlated) sample from the target density, and sometimes the serial correlation can be quite high for badly behaved algorithms.

All that remains, therefore, is step (2). Thus, from the above, it is seen that the main task is again as with the classical estimation of the model, to simulate from πœΉβˆ£πœ™,𝐫,β„±0.

3.2.3. MCMC Simulation of πœ€βˆ£πœ™,𝐫,β„±0

For a given set of parameter values and initial conditions, it is generally simpler to simulate {πœ€π‘‘} for 𝑑=1,…,𝑇 and then compute {𝛿𝑑}𝑇𝑑=1 than to simulate {𝛿𝑑}𝑇𝑑=1 directly. For that matter, we concentrate on simulators of πœ€π‘‘ given 𝐫 and πœ™. We set the mean and the variance of πœ€0 equal to their unconditional values, and, given that β„Žπ‘‘ is a sufficient statistic for β„±π‘‘βˆ’1 and the unconditional variance is a deterministic function of πœ™, β„±0 can be eliminated from the information set without any information loss.

Now sampling from 𝑝(πœ€βˆ£π«,πœ™)βˆπ‘(π«βˆ£πœ€,πœ™)𝑝(πœ€βˆ£πœ™) is feasible by using an M-H algorithm where we update each time only one πœ€π‘‘ leaving all the other unchanged [20]. In particular, let us write the nth iteration of a Markov chain as πœ€π‘›π‘‘. Then we generate a potential new value of the Markov chain πœ€new𝑑 by proposing from some candidate density 𝑔(πœ€π‘‘βˆ£πœ€π‘›β§΅π‘‘,𝐫,πœ™) where πœ€π‘›β§΅π‘‘={πœ€1𝑛+1,…,πœ€π‘›+1π‘‘βˆ’1,πœ€π‘›π‘‘+1,…,πœ€π‘›π‘‡} which we accept with probabilityβŽ‘βŽ’βŽ’βŽ£π‘ξ‚€πœ€min1,newπ‘‘βˆ£πœ€π‘›β§΅π‘‘ξ‚π‘”ξ‚€πœ€,𝐫,πœ™newπ‘‘βˆ£πœ€π‘›β§΅π‘‘ξ‚,𝐫,πœ™π‘ξ‚€πœ€π‘›π‘‘βˆ£πœ€π‘›β§΅π‘‘ξ‚π‘”ξ‚€πœ€,𝐫,πœ™π‘›π‘‘βˆ£πœ€π‘›β§΅π‘‘ξ‚βŽ€βŽ₯βŽ₯⎦,𝐫,πœ™.(3.47)

If it is accepted then, we set πœ€π‘‘π‘›+1=πœ€new𝑑 and otherwise we keep πœ€π‘‘π‘›+1=πœ€π‘›π‘‘. Although the proposal is much better since it is only in a single dimension, each time we consider modifying a single error we have to compute: π‘ξ‚€πœ€newπ‘‘βˆ£πœ€π‘›β§΅π‘‘ξ‚,𝐫,πœ™π‘ξ‚€πœ€π‘›π‘‘βˆ£πœ€π‘›β§΅π‘‘ξ‚=π‘ξ€·π‘Ÿ,𝐫,πœ™π‘‘βˆ£πœ€new𝑑,β„Žnew𝑑,π‘‘ξ€Έπ‘ξ€·πœ€,πœ™newπ‘‘βˆ£β„Žnew𝑑,π‘‘ξ€Έπ‘ξ€·π‘Ÿ,πœ™π‘‘βˆ£β„Žπ‘‘π‘›,𝑑,πœ™π‘ξ€·π‘Ÿπ‘‘βˆ£β„Žnew𝑑,π‘‘ξ€Έπ‘ξ€·π‘Ÿ,πœ™π‘‘βˆ£πœ€π‘›π‘‘,β„Žπ‘‘π‘›,π‘‘ξ€Έπ‘ξ€·πœ€,πœ™newπ‘‘βˆ£β„Žπ‘‘π‘›,π‘‘ξ€Έβˆ—,πœ™π‘‡ξ‘π‘ =𝑑+1π‘ξ€·π‘Ÿπ‘ βˆ£πœ€π‘Ÿπ‘ ,β„Žnew𝑠,π‘‘ξ€Έπ‘ξ€·πœ€,πœ™π‘Ÿπ‘ βˆ£β„Žnew𝑠,π‘‘ξ€Έπ‘ξ€·π‘Ÿ,πœ™π‘ βˆ£β„Žπ‘ π‘›,𝑑,πœ™π‘ξ€·π‘Ÿπ‘ βˆ£β„Žnew𝑠,π‘‘ξ€Έπ‘ξ€·π‘Ÿ,πœ™π‘ βˆ£πœ€π‘›π‘ ,β„Žπ‘ π‘›,π‘‘ξ€Έπ‘ξ€·πœ€,πœ™π‘›π‘ βˆ£β„Žπ‘ π‘›,𝑑=π‘ξ€·π‘Ÿ,πœ™π‘‘βˆ£πœ€new𝑑,β„Žnew𝑑,π‘‘ξ€Έπ‘ξ€·πœ€,πœ™newπ‘‘βˆ£β„Žnew𝑑,𝑑,πœ™π‘ξ€·π‘Ÿπ‘‘βˆ£πœ€π‘›π‘‘,β„Žπ‘‘π‘›,π‘‘ξ€Έπ‘ξ€·πœ€,πœ™newπ‘‘βˆ£β„Žπ‘‘π‘›,π‘‘ξ€Έβˆ—,πœ™π‘‡ξ‘π‘ =𝑑+1π‘ξ€·π‘Ÿπ‘ βˆ£πœ€π‘›π‘ ,β„Žnew𝑠,π‘‘ξ€Έπ‘ξ€·πœ€,πœ™π‘›π‘ βˆ£β„Žnew𝑠,𝑑,πœ™π‘ξ€·π‘Ÿπ‘ βˆ£πœ€π‘›π‘ ,β„Žπ‘ π‘›,π‘‘ξ€Έπ‘ξ€·πœ€,πœ™π‘›π‘ βˆ£β„Žπ‘ π‘›,𝑑,,πœ™(3.48) where for 𝑠=𝑑+1,…,𝑇,β„Žnew𝑠,π‘‘ξ€·πœ€=π‘‰π‘ βˆ£πœ€π‘›π‘ βˆ’1,πœ€π‘›π‘ βˆ’2,…,πœ€π‘›π‘‘+1,πœ€new𝑑,πœ€π‘›+1π‘‘βˆ’1,…,πœ€1𝑛+1ξ€Έ,β„Žπ‘ π‘›,π‘‘ξ€·πœ€=π‘‰π‘ βˆ£πœ€π‘›π‘ βˆ’1,πœ€π‘›π‘ βˆ’2,…,πœ€π‘›π‘‘+1,πœ€π‘›π‘‘,πœ€π‘›+1π‘‘βˆ’1,…,πœ€1𝑛+1ξ€Έ(3.49) whileβ„Žnew𝑑,𝑑=β„Žπ‘‘π‘›,𝑑.(3.50)

Nevertheless, each time we revise one πœ€π‘‘, we have also to revise π‘‡βˆ’π‘‘ conditional variances because of the recursive nature of the GARCH model which makes β„Žnew𝑠,𝑑 depend upon πœ€new𝑑 for 𝑠=𝑑+1,…,𝑇. And since 𝑑=1,…,𝑇, it is obvious that we need to calculate 𝑇2 normal densities, and so this algorithm is 𝑂(𝑇2). And this should be done for every πœ™. To avoid this huge computational load, we show how to use the method proposed by Fiorentini et al. [10] and so do MCMC with only 𝑂(𝑇) calculations. The method is described in the following subsection.

3.3. Estimation Method Proposed: Classical and Bayesian Estimation

The method proposed by Fiorentini et al. [10] is to transform the GARCH model into a first-order Markov’s model and so do MCMC with only 𝑂(𝑇) calculations. Following their transformation, we augment the state vector with the variables β„Žπ‘‘+1 and then sample the joint Markov process {β„Žπ‘‘+1,𝑠𝑑}∣𝐫,πœ™βˆˆβ„±π‘‘ where π‘ π‘‘ξ€·πœ€=signπ‘‘ξ€Έβˆ’π›Ύ,(3.51) so that 𝑠𝑑=Β±1 with probability one. The mapping is one to one and has no singularities. More specifically, if we know {β„Žπ‘‘+1} and πœ‘, then we know the value ofξ€·πœ€π‘‘ξ€Έβˆ’π›Ύ2=β„Žπ‘‘+1βˆ’πœ”βˆ’π›½β„Žπ‘‘π›Όβˆ€π‘‘β‰₯1.(3.52)

Hence the additional knowledge of the signs of (πœ€π‘‘βˆ’π›Ύ) would reveal the entire path of {πœ€π‘‘} so long as β„Ž0 (which equals the unconditional value in our case) is known, and, thus, we may now reveal also the unobserved random variable {𝛿𝑑}∣𝐫,πœ™,{β„Žπ‘‘+1}.

Now we have to sample from 𝑝𝑠𝑑,β„Žπ‘‘+1ξ€Ύξ€Έβˆβˆ£π«,πœ™π‘‡ξ‘π‘‘=1π‘ξ€·π‘ π‘‘βˆ£β„Žπ‘‘+1,β„Žπ‘‘ξ€Έπ‘ξ€·β„Ž,πœ™π‘‘+1βˆ£β„Žπ‘‘ξ€Έπ‘ξ€·π‘Ÿ,πœ™π‘‘βˆ£π‘ π‘‘,β„Žπ‘‘,β„Žπ‘‘+1ξ€Έ,πœ™,(3.53) where the second and the third term come from the model, and the first comes from the fact that πœ€π‘‘βˆ£β„±π‘‘βˆ’1βˆΌπ‘(0,β„Žπ‘‘) but πœ€π‘‘βˆ£{β„Žπ‘‘+1},β„±π‘‘βˆ’1 takes valuesπœ€π‘‘=𝛾±𝑑𝑑,(3.54) where 𝑑𝑑=ξ‚™β„Žπ‘‘+1βˆ’πœ”βˆ’π›½β„Žπ‘‘π›Ό.(3.55)

From the above, it is seen that we should first simulate {β„Žπ‘‘+1}∣𝐫,πœ™ since we do not alter the volatility process when we flip from 𝑠𝑑=βˆ’1 to 𝑠𝑑=1 (implying that the signs do not cause the volatility process), but we do alter πœ€π‘‘ and then simulate {𝑠𝑑}∣{β„Žπ‘‘+1},𝐫,πœ™. The second step is a Gibbs sampling scheme whose acceptance rate is always one and also conditional on {β„Žπ‘‘+1},𝐫,πœ™ the elements of {𝑠𝑑} are independent which further simplifies the calculations. We prefer to review first the Gibbs sampling scheme and then the simulation of the conditional variance.

3.3.1. Simulations of {𝑠𝑑}∣{β„Žπ‘‘+1},𝐫, πœ™

First, we see how to sample from {𝑠𝑑}∣{β„Žπ‘‘+1},𝐫,πœ™. To obtain the required conditionally Bernoulli distribution, we establish first some notation. We have the following (see Appendix A): 𝑐𝑑=1βˆšπ‘£π‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξƒ¬πœ‘ξƒ©π›Ύ+π‘‘π‘‘βˆ’πœ€π‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘βˆšπ‘£π‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺ+πœ‘π›Ύβˆ’π‘‘π‘‘βˆ’πœ€π‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘βˆšπ‘£π‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘,πœ€ξƒͺξƒ­π‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξ€·πœ€=πΈπ‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξ€Έ=ξ€·1βˆ’πœ‘2π‘Ÿξ€Έξ€·π‘‘βˆ’π›Ώβ„Žπ‘‘ξ€Έπœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2,π‘£π‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξ€·πœ€=Varπ‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξ€Έ=πœ‘2π‘’β„Ž2π‘‘πœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2.(3.56)

Using the above notation, we see that the probability of drawing 𝑠𝑑=1 conditional on {β„Žπ‘‘+1} is equal to the probability of drawing πœ€π‘‘=𝛾+𝑑𝑑 conditional on β„Žπ‘‘+1,β„Žπ‘‘,π‘Ÿπ‘‘,πœ™, where 𝑑𝑑 is given by (3.70), which is given byπ‘ξ€·π‘ π‘‘ξ€½β„Ž=1βˆ£π‘‘+1ξ€Ύξ€Έξ€·πœ€,𝐫,πœ™=𝑝𝑑=𝛾+π‘‘π‘‘βˆ£β„Žπ‘‘+1,β„Žπ‘‘,π‘Ÿπ‘‘ξ€Έ=1,πœ™π‘π‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘πœ‘ξƒ©π›Ύ+π‘‘π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺ.(3.57)

Similarly for the probability of drawing 𝑠𝑑=βˆ’1. Both of these quantities are easy to compute; for example,πœ‘ξƒ©π›Ύ+π‘‘π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺ=1√⎧βŽͺ⎨βŽͺβŽ©βˆ’12πœ‹exp2𝛾+π‘‘π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺ2⎫βŽͺ⎬βŽͺ⎭(3.58) and so we may simulate {𝑠𝑑}∣{β„Žπ‘‘+1},𝐫,πœ™ using a Gibbs sampling scheme. Specifically, since conditional on {β„Žπ‘‘+1},𝐫,πœ™ the elements of {𝑠𝑑} are independent, we actually draw from the marginal distribution, and the acceptance rate for this algorithm is always one.

The Gibbs sampling algorithm for drawing {𝑠𝑑}∣{β„Žπ‘‘+1},𝐫,πœ™ may be described as below.(1)Specify an initial value 𝑠(0)=(𝑠1(0),…,𝑠𝑇(0)).(2)Repeat for π‘˜=1,…,𝑀.(a)Repeat for 𝑑=0,…,π‘‡βˆ’1.(i)Draw 𝑠(π‘˜)=1 with probability, 1π‘π‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘πœ‘ξƒ©π›Ύ+π‘‘π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺ,(3.59) and 𝑠(π‘˜)=βˆ’1 with probability, 11βˆ’π‘π‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘πœ‘ξƒ©π›Ύ+π‘‘π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺ.(3.60)(3)Return the values {𝑠(1),…,𝑠(𝑀)}.

3.3.2. Simulations of {β„Žπ‘‘+1}∣𝐫, πœ™ (Single Move Samplers)

On the other hand, the first step involves simulating from {β„Žπ‘‘+1}∣𝐫,πœ™. To avoid large dependence in the chain, we use an M-H algorithm where we simulate one β„Žπ‘‘+1 at a time leaving the others unchanged [20, 31]. So if (β„Žπ‘‘+1)𝑛 is the current value of the 𝑛th iteration of a Markov chain, then we draw a candidate value of the Markov chain β„Žnew𝑑+1 by proposing it from a candidate density (proposal density) 𝑔(β„Žπ‘‘+1∣(β„Ž)𝑛/𝑑+1,𝐫,πœ™) where (β„Ž)𝑛/𝑑+1={β„Ž1𝑛+1,β„Ž2𝑛+1,…,β„Žπ‘‘π‘›+1,β„Žπ‘›π‘‘+2,…,β„Žπ‘›π‘‡+1}. We set (β„Žπ‘‘+1)𝑛+1=(β„Žπ‘‘+1)new with acceptance probabilityξƒ¬π‘ξ€·β„Žmin1,new𝑑+1∣(β„Ž)𝑛/𝑑+1ξ€Έπ‘”ξ€·β„Ž,𝐫,πœ™π‘›π‘‘+1∣(β„Ž)𝑛/𝑑+1ξ€Έ,𝐫,πœ™π‘ξ€·β„Žπ‘›π‘‘+1∣(β„Ž)𝑛/𝑑+1ξ€Έπ‘”ξ€·β„Ž,𝐫,πœ™new𝑑+1∣(β„Ž)𝑛/𝑑+1ξ€Έξƒ­,𝐫,πœ™,(3.61) where we have used the fact that𝑝(𝐑∣𝐫,πœ™)=𝑝(β„Ž)/π‘‘ξ€Έπ‘ξ€·β„Žβˆ£π«,πœ™π‘‘βˆ£(β„Ž)/𝑑,𝐫,πœ™.(3.62)

However, we may simplify further the acceptance rate. More specifically, we have thatπ‘ξ€·β„Žπ‘‘+1∣(β„Ž)/𝑑+1ξ€Έξ€·β„Ž,𝐫,πœ™βˆπ‘π‘‘+2βˆ£β„Žπ‘‘+1ξ€Έπ‘ξ€·β„Ž,πœ™π‘‘+1βˆ£β„Žπ‘‘ξ€Έπ‘ξ€·π‘Ÿ,πœ™π‘‘+1βˆ£β„Žπ‘‘+2,β„Žπ‘‘+1ξ€Έπ‘ξ€·π‘Ÿ,πœ™π‘‘βˆ£β„Žπ‘‘+1,β„Žπ‘‘ξ€Έ,πœ™.(3.63)

Now, since the following should hold: β„Žπ‘‘+1β‰₯πœ”+π›½β„Žπ‘‘(3.64) and similarly β„Žπ‘‘+1β‰€π›½βˆ’1ξ€·β„Žπ‘‘+2ξ€Έβˆ’πœ”,(3.65) we have the support of the conditional distribution of β„Žπ‘‘+1 given that β„Žπ‘‘ is bounded from below by πœ”+π›½β„Žπ‘‘, and the same applies to the distribution of β„Žπ‘‘+2 given β„Žπ‘‘+1 (lower limit corresponds to 𝑑𝑑=0 and the upper limit to 𝑑𝑑+1=0). This means that the range of values of β„Žπ‘‘+1 compatible with β„Žπ‘‘ and β„Žπ‘‘+2 in the GQARCH case is bounded from above and below; that is:β„Žπ‘‘+1βˆˆξ€Ίπœ”+π›½β„Žπ‘‘,π›½βˆ’1ξ€·β„Žπ‘‘+2βˆ’πœ”ξ€Έξ€».(3.66)

From the above, we understand that it makes sense to make the proposal to obey the support of the density, and so it is seen that we can simplify the acceptance rate by setting π‘”ξ€·β„Žπ‘‘+1∣(β„Ž)/𝑑+1ξ€Έξ€·β„Ž,𝐫,πœ™=𝑝𝑑+1βˆ£β„Žπ‘‘ξ€Έ,πœ™(3.67) appropriately truncated from above (since the truncation from below will automatically be satisfied). But the above proposal density ignores the information contained in π‘Ÿπ‘‘+1, and so according to Fiorentini et al. [10] we can achieve a substantially higher acceptance rate if we propose from π‘”ξ€·β„Žπ‘‘+1∣(β„Ž)/𝑑+1ξ€Έξ€·β„Ž,𝐫,πœ™=𝑝𝑑+1βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξ€Έ,πœ™.(3.68)

A numerically efficient way to simulate β„Žπ‘‘+1 from 𝑝(β„Žπ‘‘+1βˆ£π‘Ÿπ‘‘,β„Žπ‘‘,πœ™) is to sample an underlying Gaussian random variable doubly truncated by using an inverse transform method. More specifically, we may drawπœ€π‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξƒ©ξ€·,πœ™βˆΌπ‘1βˆ’πœ‘2π‘Ÿξ€Έξ€·π‘‘βˆ’π›Ώβ„Žπ‘‘ξ€Έπœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2,πœ‘2π‘’β„Ž2π‘‘πœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2ξƒͺ(3.69) doubly truncated so that it remains within the following bounds:πœ€newπ‘‘βˆˆξ€Ίπ›Ύβˆ’π‘™π‘‘,𝛾+𝑙𝑑,(3.70) where 𝑙𝑑=ξƒŽβ„Žπ‘‘+2βˆ’πœ”βˆ’π›½πœ”βˆ’π›½2β„Žπ‘‘π›½π›Ό,(3.71) using an inverse transform method and then computeβ„Žnew𝑑+1ξ€·πœ€=πœ”+𝛼newπ‘‘ξ€Έβˆ’π›Ύ2+π›½β„Žπ‘‘,(3.72) which in turn implies a real value for 𝑑new𝑑+1=(β„Žπ‘‘+2βˆ’πœ”βˆ’π›½β„Žnew𝑑+1)/𝛼 and so guarantees that β„Žnew𝑑+1 lies within the acceptance bounds.

The inverse transform method to draw the doubly truncated Gaussian random variable first draws a uniform random numberπ‘’βˆΌπ‘ˆ(0,1)(3.73) and then computes the following:βŽ›βŽœβŽœβŽœβŽπ‘’=(1βˆ’π‘’)Ξ¦π›Ύβˆ’π‘™π‘‘βˆ’ξ€·ξ€·1βˆ’πœ‘2π‘Ÿξ€Έξ€·π‘‘βˆ’π›Ώβ„Žπ‘‘ξ€Έ/ξ€·πœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2ξ€Έξ€Έξ”πœ‘2π‘’β„Ž2𝑑/ξ€·πœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2ξ€ΈβŽžβŽŸβŽŸβŽŸβŽ βŽ›βŽœβŽœβŽœβŽ+𝑒Φ𝛾+π‘™π‘‘βˆ’ξ€·ξ€·1βˆ’πœ‘2π‘Ÿξ€Έξ€·π‘‘βˆ’π›Ώβ„Žπ‘‘ξ€Έ/ξ€·πœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2ξ€Έξ€Έξ”πœ‘2π‘’β„Ž2𝑑/ξ€·πœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2ξ€ΈβŽžβŽŸβŽŸβŽŸβŽ .(3.74)

A draw is then given byπœ€new𝑑=Ξ¦βˆ’1𝑒.(3.75)

However, if the bounds are close to each other (the degree of truncation is small) the extra computations involved make this method unnecessarily slow, and so we prefer to use the accept-reject method where we draw πœ€newπ‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξƒ©ξ€·,πœ™βˆΌπ‘1βˆ’πœ‘2π‘Ÿξ€Έξ€·π‘‘βˆ’π›Ώβ„Žπ‘‘ξ€Έπœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2,πœ‘2π‘’β„Ž2π‘‘πœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2ξƒͺ(3.76) and accept the draw if π›Ύβˆ’π‘™π‘‘β‰€πœ€new𝑑≀𝛾+𝑙𝑑, and otherwise we repeat the drawing (this method is inefficient if the truncation lies in the tails of the distribution). It may be worth assessing the degree of truncation first, and, depending on its tightness, choose one simulation method or the other.

The conditional density of πœ€new𝑑 will be given according to the definition of a truncated normal distribution: π‘ξ€·πœ€newπ‘‘βˆ£||πœ€new𝑑||βˆ’π›Ύβ‰€π‘™π‘‘,π‘Ÿπ‘‘,β„Žπ‘‘ξ€Έ=1,πœ™βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘πœ‘ξƒ©πœ€newπ‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺ×Φ𝛾+π‘™π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺξƒ©βˆ’Ξ¦π›Ύβˆ’π‘™π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺξƒ­βˆ’1,(3.77) where Ξ¦(β‹…) is the cdf of the standard normal.

By using the change of variable formula, we have that the density of β„Žnew𝑑+1 will beπ‘ξ€·β„Žnew𝑑+1βˆ£β„Žnew𝑑+1βˆˆξ€Ίπœ”+π›½β„Žπ‘‘,π›½βˆ’1ξ€·β„Žπ‘‘+2βˆ’πœ”ξ€Έξ€»,π‘Ÿπ‘‘,β„Žπ‘‘ξ€Έ=𝑐,πœ™new𝑑||2𝛼𝑑new𝑑||×Φ𝛾+π‘™π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺξƒ©βˆ’Ξ¦π›Ύβˆ’π‘™π‘‘βˆ’πœ€π‘‘/π‘Ÿπ‘Ÿ,β„Žπ‘‘βˆšπ‘£π‘‘/π‘Ÿπ‘‘,β„Žπ‘‘ξƒͺξƒ­βˆ’1.(3.78)

Using Bayes theorem we have that the acceptance probability will beξƒ©π‘ξ€·β„Žmin1,𝑑+2βˆ£β„Žnew𝑑+1,π‘Ÿπ‘‘+1ξ€Έπ‘ξ€·π‘Ÿ,πœ™π‘‘+1βˆ£β„Žnew𝑑+1ξ€Έ,πœ™π‘ξ€·π‘Ÿπ‘‘+1βˆ£β„Žπ‘›π‘‘+1ξ€Έπ‘ξ€·β„Ž,πœ™π‘‘+2βˆ£β„Žπ‘›π‘‘+1,π‘Ÿπ‘‘+1ξ€Έξƒͺ,πœ™.(3.79)

Since the degree of truncation is the same for old and new, the acceptance probability will be ξƒ©π‘ξ€·π‘Ÿmin1,𝑑+1βˆ£β„Žnew𝑑+1ξ€Έπ‘ξ€·π‘Ÿπ‘‘+1βˆ£β„Žπ‘›π‘‘+1𝑐new𝑑+1𝑐𝑛𝑑+1𝑑𝑛𝑑+1𝑑new𝑑+1ξƒͺ,(3.80) where 𝑝(π‘Ÿπ‘‘+1βˆ£β„Žπ‘‘+1) is a mixture of two univariate normal densities soπ‘Ÿπ‘‘+1βˆ£β„Žπ‘‘+1ξ‚΅βˆΌπ‘π›Ώβ„Žπ‘‘+1,ξ‚΅πœ‘2𝑒1βˆ’πœ‘2β„Žπ‘‘+1ξ‚Άβ„Ž+1𝑑+1ξ‚Ά.(3.81)

Hence,π‘ξ€·π‘Ÿπ‘‘+1βˆ£β„Žπ‘›π‘‘+1ξ€Έ=1ξ”πœ‘2πœ‹ξ€·ξ€·2𝑒/ξ€·1βˆ’πœ‘2β„Žξ€Έξ€Έπ‘›π‘‘+1ξ€Έβ„Ž+1𝑛𝑑+1ξƒ©βˆ’ξ€·π‘Ÿexp𝑑+1βˆ’π›Ώβ„Žπ‘›π‘‘+1ξ€Έ22πœ‘ξ€·ξ€·2𝑒/ξ€·1βˆ’πœ‘2β„Žξ€Έξ€Έπ‘›π‘‘+1ξ€Έβ„Ž+1𝑛𝑑+1ξƒͺ,(3.82) and the acceptance probability becomesβŽ‘βŽ’βŽ’βŽ’βŽ£ξ‚΅β„Žmin1,𝑛𝑑+1β„Žnew𝑑+1ξ‚Ά3/2βˆšβ„Žπ‘›π‘‘+2βˆ’πœ”βˆ’π›½β„Žπ‘›π‘‘+1ξ”β„Žπ‘›π‘‘+2βˆ’πœ”βˆ’π›½β„Žnew𝑑+1πœ…ξ€·β„Žnew𝑑+1ξ€Έπœ…ξ€·β„Žπ‘›π‘‘+1ξ€ΈβŽ€βŽ₯βŽ₯βŽ₯⎦,(3.83) where πœ…ξ€·β„Žπ‘–π‘‘+1ξ€ΈβŽ‘βŽ’βŽ’βŽ’βŽ’βŽ’βŽ’βŽ£βˆ’ξ€·π‘Ÿ=exp𝑑+1βˆ’π›Ώβ„Žπ‘–π‘‘+1ξ€Έ22ξ‚΅πœ‘2𝑒1βˆ’πœ‘2β„Žπ‘–π‘‘+1ξ‚Άβ„Ž+1𝑖𝑑+1βˆ’πœ‘2𝑒1βˆ’πœ‘2β„Žπ‘–π‘‘+1+12πœ‘2𝑒1βˆ’πœ‘2ξ€·β„Žπ‘–π‘‘+1ξ€Έ2βŽ›βŽœβŽœβŽœβŽœβŽœβŽœβŽβŽ›βŽœβŽœβŽœβŽœβŽπ‘Ÿπ›Ύβˆ’π‘‘+1βˆ’π›Ώβ„Žπ‘–π‘‘+1πœ‘2𝑒1βˆ’πœ‘2β„Žπ‘–π‘‘+1⎞⎟⎟⎟⎟⎠+12+β„Žπ‘Ÿπ‘‘+2βˆ’πœ”βˆ’π›½β„Žπ‘–π‘‘+1π›ΌβŽžβŽŸβŽŸβŽŸβŽŸβŽŸβŽŸβŽ βŽ€βŽ₯βŽ₯βŽ₯βŽ₯βŽ₯βŽ₯βŽ¦Γ—βŽ‘βŽ’βŽ’βŽ’βŽ’βŽ’βŽ’βŽ’βŽ’βŽ’βŽ’βŽ£βŽ‘βŽ’βŽ’βŽ’βŽ’βŽ£2πœ‘1+exp2𝑒1βˆ’πœ‘2β„Žπ‘–π‘‘+1+1πœ‘2𝑒1βˆ’πœ‘2ξ€·β„Žπ‘–π‘‘+1ξ€Έ2βŽ›βŽœβŽœβŽœβŽœβŽπ‘Ÿπ›Ύβˆ’π‘‘+1βˆ’π›Ώβ„Žπ‘–π‘‘+1πœ‘2𝑒1βˆ’πœ‘2β„Žπ‘–π‘‘+1βŽžβŽŸβŽŸβŽŸβŽŸβŽ ξƒŽ+1β„Žπ‘›π‘‘+2βˆ’πœ”βˆ’π›½β„Žπ‘–π‘‘+1π›ΌβŽ€βŽ₯βŽ₯βŽ₯βŽ₯βŽ¦βŽ‘βŽ’βŽ’βŽ’βŽ’βŽ£πœ‘exp2𝑒1βˆ’πœ‘2β„Žπ‘–π‘‘+1+1πœ‘2𝑒1βˆ’πœ‘2ξ€·β„Žπ‘–π‘‘+1ξ€Έ2βŽ›βŽœβŽœβŽœβŽœβŽπ‘Ÿπ›Ύβˆ’π‘‘+1βˆ’π›Ώβ„Žπ‘–π‘‘+1πœ‘2𝑒1βˆ’πœ‘2β„Žπ‘–π‘‘+1βŽžβŽŸβŽŸβŽŸβŽŸβŽ ξƒŽ+1β„Žπ‘›π‘‘+2βˆ’πœ”βˆ’π›½β„Žπ‘–π‘‘+1π›ΌβŽ€βŽ₯βŽ₯βŽ₯βŽ₯⎦⎀βŽ₯βŽ₯βŽ₯βŽ₯βŽ₯βŽ₯βŽ₯βŽ₯βŽ₯βŽ₯⎦.(3.84)

Overall the MCMC of {β„Žπ‘‘+1}∣𝐫,πœ™ includes the following steps.(1)Specify an initial value {β„Ž(0)}.(2)Repeat for 𝑛=1,…,𝑀.(a)Repeat for 𝑑=0,…,π‘‡βˆ’1.(i)Use an inverse transform method to simulate πœ€newπ‘‘βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξƒ©ξ€·,πœ™βˆΌπ‘1βˆ’πœ‘2π‘Ÿξ€Έξ€·π‘‘βˆ’π›Ώβ„Žπ‘‘ξ€Έπœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2,πœ‘2π‘’β„Ž2π‘‘πœ‘2π‘’β„Žπ‘‘+1βˆ’πœ‘2ξƒͺ(3.85) doubly truncated.(ii)Calculate β„Žnew𝑑+1ξ€·πœ€=πœ”+π›Όπ‘π‘‘ξ€Έβˆ’π›Ύ2+π›½β„Žπ‘‘.(3.86) Steps (2)(a)(i) and (2)(a)(ii) are equivalent to draw ξ€·β„Žπ‘‘+1ξ€Έnewξ€·β„ŽβˆΌπ‘new𝑑+1βˆ£π‘Ÿπ‘‘,β„Žπ‘‘ξ€Έ,πœ™(3.87) appropriately truncated.(iii)Calculate π›Όπ‘Ÿξƒ¬π‘ξ€·π‘Ÿ=min1,𝑑+1βˆ£β„Žnew𝑑+1ξ€Έπ‘ξ€·π‘Ÿπ‘‘+1βˆ£β„Žπ‘›π‘‘+1𝑐new𝑑+1𝑐𝑛𝑑+1𝑑𝑛𝑑+1𝑑new𝑑+1ξƒ­.(3.88)(iv)Set ξ€·β„Žπ‘‘+1𝑛+1=ξ‚»ξ€·β„Žπ‘‘+1ξ€ΈnewifUnif(0,1)β‰€π›Όπ‘Ÿξ€·β„Žπ‘‘+1𝑛otherwise(3.89)

Remark 3.7. Every time we change β„Žπ‘‘+1, we calculate only one normal density since the transformation is Markovian, and, since 𝑑=0,…,π‘‡βˆ’1, we need 𝑂(𝑇) calculations.

Notice that if we retain β„Žnew𝑑+1, then πœ€new𝑑 is retained and we will not need to simulate 𝑠𝑑 at a later stage. In fact we only need to simulate 𝑠𝑑 at 𝑑=𝑇 since we need to know πœ€π‘‡. The final step involves computing𝛿(𝑛)𝑑+1=π‘Ÿπ‘‘+1βˆ’πœ€(𝑛)𝑑+1β„Ž(𝑛)𝑑+1,𝑑=0,…,π‘‡βˆ’1,𝑛=1,…,𝑀.(3.90)

Using all the above simulated values, we may now take average of simulations and compute the quantities needed for the SEM algortihm. As for the Bayesian inference, having completed Step (2) we may now proceed to the Gibbs sampling and M-H steps to obtain draws from the required posterior density. Thus, the first-order Markov transformation of the model made feasible an MCMC algorithm which allows the calculation of a classical estimator via the simulated EM algorithm and a simulation-based Bayesian inference in 𝑂(𝑇) computational operations.

3.4. A Comparison of the Simulators

In order to compare the performance of the inefficient and the efficient MCMC sampler introduced in the previous subsection, we have generated realizations of size 𝑇=240 for the simple GQARCH(1,1)-M-AR(1) model with parameters 𝛿=0.1, πœ‘=0.85, 𝛼=0.084, 𝛽=0.688, 𝛾=0.314 (which are centered around typical values that we tend to see in the empirical literature). We first examine the increase in the variance of the sample mean of πœ€π‘‘ across 500,000 simulations due to the autocorrelation in the drawings relative to an ideal but infeasible independent sampler.

We do so by recording the inefficient ratios for the observations 𝑑=80 and 𝑑=160 using standard spectral density techniques. In addition, we record the mean acceptance probabilities over all observations and the average CPU time needed to simulate one complete drawing. The behavior of the two simulators is summarized in Table 1 and is very much as one would expect. The computationally inefficient sampler shows high serial correlation for both 𝑑=80 and 𝑑=160 and a low acceptance rate for each individual 𝑑. Moreover, it is extremely time consuming to compute even though our sample size is fairly small. In fact, when we increase 𝑇 from 240 to 2400, and 24000 the average CPU time increases by a factor of 100 and 10000, respectively, as opposed to 10 and 100 for the other one (the efficient), which makes it impossible to implement in most cases of practical interest. On the other hand, the single-move efficient sampler produces results much faster, with a reasonably high acceptance rate but more autocorrelation in the drawings for 𝑑=160.

4. Empirical Application: Bayesian Estimation of Weekly Excess Returns from Three Major Stock Markets: Dow-Jones, FTSE, and Nikkei

In this section we apply the procedures described above to weekly excess returns from three major stock markets: Dow-Jones, FTSE, and Nikkei for the period of the last week of 1979:8 to the second to the last week of 2008:5 (1,500 observations). To guarantee 0≀𝛽≀1βˆ’π›Όβ‰€1 and to ensure that πœ”>0 we also used some accept-reject method for the Bayesian inference. This means that, when drawing from the posterior (as well as from the prior), we had to ensure that 𝛼,𝛽>0,𝛼+𝛽<1 and πœ”>0.

In order to implement our proposed Bayesian approach, we first have to specify the hyperparameters that characterize the prior distributions of the parameters. In this respect, our aim is to employ informative priors that would be in accordance with the β€œreceived wisdom.” In particular, for all data sets, we set the prior mean for 𝛽 equal to 0.7, and for π‘Ž, πœ”, and 𝛾 we decided to set their prior means equal to 0.15, 0.4, and 0.0, respectively. We had also to decide on the prior mean of 𝛿. We set its prior mean equal to 0.05, for all markets. These prior means imply an annual excess return of around 4%, which is a typical value for annualized stock excess returns. Finally, we set the prior mean of πœ‘ equal to 0.75, of πœ‘2𝑒 equal to 0.01, and the hyperparameters 𝜈0 and π‘‘π‘œ equal to 1550 and 3, respectively, for all three datasets, something which is consistent with the β€œcommon wisdom” of high autocorrelation of the price of risk. We employ a rather vague prior and set its prior variance equal to 10,000 for all datasets.

We ran a chain for 200,000 simulations for the three datasets and decided to use every tenth point, instead of all points, in the sample path to avoid strong serial correlation. The posterior statistics for the Dow-Jones, FTSE, and Nikkey are reported in Table 2. Inefficiency factors are calculated using a Parzen window equal to 0.1𝑇 (where, recall, 𝑇 is the number of observations) and indicate that the M-H sampling algorithm has converged and well behaved (This is also justified by the ACFs of the draws. However, they are not presented for space considerations and are available upon request.) With the exception of the constants 𝛿 and the πœ‘2𝑒's, there is low uncertainty with the estimation of the parameters. The estimated persistence, 𝛼+𝛽, for all three markets is close to 0.8 with the highest being the one of FTSE (0.822), indicating that the half life of a shock is around 3.5. The estimated asymmetry parameters are round 0.4 with β€œt-statistics” higher than 3.2, indicating that the leverage effect is important in all three markets. In a nut shell, all estimated parameters have plausible values, which are in accordance with previous results in the literature.

We have also performed a sensitivity analysis with respect to our choice of priors. In particular, we have halved and doubled the dispersion of the prior distributions around their respective means. Figures 1, 2, and 3 show the kernel density estimates of the posterior distributions of all parameters, for all datasets, in the three cases: when the prior variances are 10,000 (baseline posterior), when they are halved (small variance posterior), and when they are doubled (large variance posterior). We used a canonical Epanechnikov kernel, and the optimal bandwidth was determined automatically by the data. The results indicate that the choice of priors does not unduly influence our conclusions.
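A minimal version of the kernel density estimator used in Figures 1, 2, and 3 is sketched below; the rule-of-thumb bandwidth is our simplification of the automatic, data-driven choice mentioned above.

```python
import numpy as np

def epanechnikov_kde(draws, grid, bandwidth=None):
    """Kernel density estimate with the Epanechnikov kernel K(u) = 0.75*(1 - u**2) on |u| <= 1."""
    x = np.asarray(draws, dtype=float)
    g = np.asarray(grid, dtype=float)
    if bandwidth is None:
        # Rule-of-thumb bandwidth for the Epanechnikov kernel (our simplification
        # of the automatic, data-driven choice used in the paper).
        bandwidth = 2.34 * x.std() * x.size ** (-1 / 5)
    u = (g[:, None] - x[None, :]) / bandwidth
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return k.mean(axis=1) / bandwidth
```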

Finally, treating the estimated posterior means as the "true" parameters, we can employ the formulae of Section 2.1 and compare the moments implied by the estimates with the sample ones. One fact is immediately obvious. All autocorrelations of excess returns implied by the estimates are positive but small, with the first one being around 0.04, which is in accordance with stylized fact (ii) (see Section 1). However, for all three markets the sample skewness coefficients are negative, ranging from −0.89 (FTSE) to −0.12 (Nikkei), whereas the implied ones are all positive, ranging from 0.036 (FTSE) to 0.042 (Dow-Jones). Nevertheless, the model matches all the other stylized facts satisfactorily; that is, the estimated parameter values accommodate a high coefficient of variation, leptokurtosis, volatility clustering, and the leverage effect.
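The sample counterparts used in this comparison can be computed directly from the excess-return series; the following sketch (ours) returns the first-order autocorrelation, skewness, and excess kurtosis that are set against the implied moments of Section 2.1.

```python
import numpy as np

def sample_moments(returns):
    """First-order autocorrelation, skewness, and excess kurtosis of a return series."""
    r = np.asarray(returns, dtype=float)
    d = r - r.mean()
    s = d.std()
    rho1 = (d[:-1] @ d[1:]) / (d @ d)
    skew = np.mean(d ** 3) / s ** 3
    exkurt = np.mean(d ** 4) / s ** 4 - 3.0
    return {"rho1": rho1, "skewness": skew, "excess_kurtosis": exkurt}
```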

5. Conclusions

In this paper, we derive exact likelihood-based estimators for our time-varying GQARCH(1,1)-M model. Since in general the expression for the likelihood function is unknown, we resort to simulation methods. In this context, we show that MCMC likelihood-based estimation of such a model can in fact be handled by means of feasible O(T) algorithms. Our samplers involve two main steps. First, we augment the state vector to achieve a first-order Markovian process, in an analogous manner to the way in which GARCH models are simulated in practice. Then we simulate the conditional variances and, given these simulated series, the signs, so that the unobserved in-mean process is revealed as a residual term. We also develop simulation-based Bayesian inference procedures by combining the MCMC simulators within a Gibbs sampler. Furthermore, we derive the theoretical properties of this model, as far as moments and dynamic moments are concerned.

In order to investigate the practical performance of the proposed procedure, we estimate, within a Bayesian context, our time-varying GQARCH(1,1)-M-AR(1) model for weekly excess stock returns on the Dow-Jones, Nikkei, and FTSE indices. With the exception of the returns' skewness, the suggested likelihood-based estimation method and our model produce satisfactory results, as far as a comparison between sample and theoretical moments is concerned.

Although we have developed the method within the context of an AR(1) price of risk, it applies much more widely. For example, we could assume that the market price of risk is a Bernoulli process or a Markov switching process. A Bernoulli-distributed price of risk would allow a negative third moment by appropriately choosing the two values of the in-mean process. However, this would make all computations much more complicated. In an earlier version of the paper, we assumed that the market price of risk follows a normal distribution, and we applied both the classical and the Bayesian procedure to three stock markets (where we set the posterior means as initial values for the simulated EM algorithm). The results suggested that the Bayesian and the classical procedures are very much in agreement (see Anyfantaki and Demos [24]).

Finally, it is known (e.g., [32, pages 84 and 85]) that the EM algorithm slows down significantly in the neighborhood of the optimum. As a result, after some initial EM iterations, it is tempting to switch to a derivative-based optimization routine, which is more likely to converge quickly to the maximum. EM-type arguments can be used to facilitate this switch by allowing the computation of the score. In particular, it is easy to see that
\[
E\!\left(\left.\frac{\partial \ln p\bigl(\boldsymbol{\delta}\mid\mathbf{r},\boldsymbol{\phi},\mathcal{F}_0\bigr)}{\partial\boldsymbol{\phi}}\ \right|\ \mathbf{r},\boldsymbol{\phi}^{(n)},\mathcal{F}_0\right)=0, \tag{5.1}
\]
so it is clear that the score can be obtained as the expected value, given r, ϕ, ℱ_0, of the sum of the unobservable scores corresponding to ln p(r | δ, ϕ, ℱ_0) and ln p(δ | ϕ, ℱ_0). This could be very useful for the classical estimation procedure, not presented here, because even though our algorithm is an O(T) one, it is still rather slow. We leave these issues for further research.
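For completeness, the argument behind this claim is the standard EM decomposition: writing ln p(r | ϕ, ℱ_0) = ln p(r, δ | ϕ, ℱ_0) − ln p(δ | r, ϕ, ℱ_0), differentiating with respect to ϕ, taking expectations given r, ϕ^(n), ℱ_0, and using (5.1) at ϕ = ϕ^(n) gives
\[
\left.\frac{\partial \ln p\bigl(\mathbf{r}\mid\boldsymbol{\phi},\mathcal{F}_0\bigr)}{\partial\boldsymbol{\phi}}\right|_{\boldsymbol{\phi}=\boldsymbol{\phi}^{(n)}}
=E\!\left(\left.\frac{\partial\bigl[\ln p\bigl(\mathbf{r}\mid\boldsymbol{\delta},\boldsymbol{\phi},\mathcal{F}_0\bigr)+\ln p\bigl(\boldsymbol{\delta}\mid\boldsymbol{\phi},\mathcal{F}_0\bigr)\bigr]}{\partial\boldsymbol{\phi}}\ \right|\ \mathbf{r},\boldsymbol{\phi}^{(n)},\mathcal{F}_0\right)_{\boldsymbol{\phi}=\boldsymbol{\phi}^{(n)}},
\]
which is the expression used above and which, in practice, can be approximated by averaging the complete-data scores over the MCMC draws of δ.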

Appendices

A. Proof of (3.69) and (3.82)

Proof of Equation (3.69). This is easily derived using the fact that
\[
r_t=\delta_t h_t+\varepsilon_t, \tag{A.1}
\]
where
\[
r_t\mid h_t\sim N\!\left(\delta h_t,\ \Bigl(\frac{\varphi_u^2}{1-\varphi^2}h_t+1\Bigr)h_t\right), \tag{A.2}
\]
and, consequently,
\[
\begin{pmatrix}\varepsilon_t\\ r_t\end{pmatrix}\Bigg|\,h_t\sim N\!\left(\begin{pmatrix}0\\ \delta h_t\end{pmatrix},\ \begin{pmatrix}h_t & h_t\\ h_t & \bigl(\frac{\varphi_u^2}{1-\varphi^2}h_t+1\bigr)h_t\end{pmatrix}\right), \tag{A.3}
\]
and thus, from the properties of the bivariate normal,
\[
E\bigl(\varepsilon_t\mid r_t,h_t\bigr)=\frac{\bigl(1-\varphi^2\bigr)\bigl(r_t-\delta h_t\bigr)}{\varphi_u^2h_t+1-\varphi^2},\qquad
\operatorname{Var}\bigl(\varepsilon_t\mid r_t,h_t\bigr)=\frac{\varphi_u^2h_t^2}{\varphi_u^2h_t+1-\varphi^2}. \tag{A.4}
\]
Consequently,
\[
\varepsilon_t\mid r_t,h_t,\boldsymbol{\phi}\sim N\!\left(\frac{\bigl(1-\varphi^2\bigr)\bigl(r_t-\delta h_t\bigr)}{\varphi_u^2h_t+1-\varphi^2},\ \frac{\varphi_u^2h_t^2}{\varphi_u^2h_t+1-\varphi^2}\right). \tag{A.5}
\]

Proof of Equation (3.82). We have that
\[
p\bigl(r_{t+1}\mid h^{r}_{t+1}\bigr)=\frac{1}{\sqrt{2\pi\bigl(\bigl(\varphi_u^{2}/\bigl(1-\varphi^{2}\bigr)\bigr)h^{r}_{t+1}+1\bigr)h^{r}_{t+1}}}\exp\!\left(-\frac{\bigl(r_{t+1}-\delta h^{r}_{t+1}\bigr)^{2}}{2\bigl(\bigl(\varphi_u^{2}/\bigl(1-\varphi^{2}\bigr)\bigr)h^{r}_{t+1}+1\bigr)h^{r}_{t+1}}\right), \tag{A.6}
\]
and, thus,
\[
\frac{p\bigl(r_{t+1}\mid h^{\mathrm{new}}_{t+1}\bigr)}{p\bigl(r_{t+1}\mid h^{r}_{t+1}\bigr)}=\frac{\sqrt{\bigl(\bigl(\varphi_u^{2}/\bigl(1-\varphi^{2}\bigr)\bigr)h^{r}_{t+1}+1\bigr)h^{r}_{t+1}}}{\sqrt{\bigl(\bigl(\varphi_u^{2}/\bigl(1-\varphi^{2}\bigr)\bigr)h^{\mathrm{new}}_{t+1}+1\bigr)h^{\mathrm{new}}_{t+1}}}\exp\!\left(\frac{\bigl(r_{t+1}-\delta h^{r}_{t+1}\bigr)^{2}}{2\bigl(\bigl(\varphi_u^{2}/\bigl(1-\varphi^{2}\bigr)\bigr)h^{r}_{t+1}+1\bigr)h^{r}_{t+1}}-\frac{\bigl(r_{t+1}-\delta h^{\mathrm{new}}_{t+1}\bigr)^{2}}{2\bigl(\bigl(\varphi_u^{2}/\bigl(1-\varphi^{2}\bigr)\bigr)h^{\mathrm{new}}_{t+1}+1\bigr)h^{\mathrm{new}}_{t+1}}\right). \tag{A.7}
\]
Also,
\[
d^{\,r}_{t+1}=\sqrt{\frac{h^{r}_{t+2}-\omega-\beta h^{r}_{t+1}}{\alpha}}, \tag{A.8}
\]
and so
\[
\frac{d^{\,r}_{t+1}}{d^{\,\mathrm{new}}_{t+1}}=\frac{\sqrt{h^{r}_{t+2}-\omega-\beta h^{r}_{t+1}}}{\sqrt{h^{r}_{t+2}-\omega-\beta h^{\mathrm{new}}_{t+1}}}. \tag{A.9}
\]
Moreover,
\[
\frac{c^{\mathrm{new}}_{t+1}}{c^{\,r}_{t+1}}=
\frac{h^{r}_{t+1}\sqrt{\varphi_u^{2}h^{\mathrm{new}}_{t+1}+1-\varphi^{2}}
\left[\exp\!\left(-\frac{\varphi_u^{2}h^{\mathrm{new}}_{t+1}+1-\varphi^{2}}{2\varphi_u^{2}\bigl(h^{\mathrm{new}}_{t+1}\bigr)^{2}}\left(\gamma+\sqrt{\frac{h^{r}_{t+2}-\omega-\beta h^{\mathrm{new}}_{t+1}}{\alpha}}-\frac{\bigl(1-\varphi^{2}\bigr)\bigl(r_{t+1}-\delta h^{\mathrm{new}}_{t+1}\bigr)}{\varphi_u^{2}h^{\mathrm{new}}_{t+1}+1-\varphi^{2}}\right)^{2}\right)
+\exp\!\left(-\frac{\varphi_u^{2}h^{\mathrm{new}}_{t+1}+1-\varphi^{2}}{2\varphi_u^{2}\bigl(h^{\mathrm{new}}_{t+1}\bigr)^{2}}\left(\gamma-\sqrt{\frac{h^{r}_{t+2}-\omega-\beta h^{\mathrm{new}}_{t+1}}{\alpha}}-\frac{\bigl(1-\varphi^{2}\bigr)\bigl(r_{t+1}-\delta h^{\mathrm{new}}_{t+1}\bigr)}{\varphi_u^{2}h^{\mathrm{new}}_{t+1}+1-\varphi^{2}}\right)^{2}\right)\right]}
{h^{\mathrm{new}}_{t+1}\sqrt{\varphi_u^{2}h^{r}_{t+1}+1-\varphi^{2}}
\left[\exp\!\left(-\frac{\varphi_u^{2}h^{r}_{t+1}+1-\varphi^{2}}{2\varphi_u^{2}\bigl(h^{r}_{t+1}\bigr)^{2}}\left(\gamma+\sqrt{\frac{h^{r}_{t+2}-\omega-\beta h^{r}_{t+1}}{\alpha}}-\frac{\bigl(1-\varphi^{2}\bigr)\bigl(r_{t+1}-\delta h^{r}_{t+1}\bigr)}{\varphi_u^{2}h^{r}_{t+1}+1-\varphi^{2}}\right)^{2}\right)
+\exp\!\left(-\frac{\varphi_u^{2}h^{r}_{t+1}+1-\varphi^{2}}{2\varphi_u^{2}\bigl(h^{r}_{t+1}\bigr)^{2}}\left(\gamma-\sqrt{\frac{h^{r}_{t+2}-\omega-\beta h^{r}_{t+1}}{\alpha}}-\frac{\bigl(1-\varphi^{2}\bigr)\bigl(r_{t+1}-\delta h^{r}_{t+1}\bigr)}{\varphi_u^{2}h^{r}_{t+1}+1-\varphi^{2}}\right)^{2}\right)\right]}, \tag{A.10}
\]
and the result follows straightforwardly.
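As a computational note, the first two ingredients of this ratio are easy to evaluate numerically; a small sketch (ours, not the paper's code) of (A.6) in logs and of the ratio (A.9) is:

```python
import numpy as np

def log_p_r_given_h(r, h, delta, phi, phi_u2):
    """Log of (A.6): r_{t+1} | h_{t+1} ~ N(delta*h, ((phi_u2/(1-phi**2))*h + 1)*h)."""
    v = (phi_u2 / (1.0 - phi ** 2) * h + 1.0) * h
    return -0.5 * np.log(2.0 * np.pi * v) - (r - delta * h) ** 2 / (2.0 * v)

def d_ratio(h_next, h_r, h_new, omega, beta):
    """(A.9): sqrt(h_{t+2} - omega - beta*h^r_{t+1}) / sqrt(h_{t+2} - omega - beta*h^new_{t+1})."""
    return np.sqrt(h_next - omega - beta * h_r) / np.sqrt(h_next - omega - beta * h_new)
```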

B. Dynamic Moments of the Conditional Variance

We have
\[
\begin{aligned}
\operatorname{Cov}\bigl(h_t,h_{t-k}\bigr)&=\alpha\operatorname{Cov}\bigl(\varepsilon^2_{t-1},h_{t-k}\bigr)-2\alpha\gamma\operatorname{Cov}\bigl(\varepsilon_{t-1},h_{t-k}\bigr)+\beta\operatorname{Cov}\bigl(h_{t-1},h_{t-k}\bigr)=(\alpha+\beta)\operatorname{Cov}\bigl(h_{t-1},h_{t-k}\bigr)=\cdots=(\alpha+\beta)^k V\bigl(h_t\bigr),\\
E\bigl(h_t\varepsilon_{t-k}\bigr)&=\alpha E\bigl(\varepsilon^2_{t-1}\varepsilon_{t-k}\bigr)-2\alpha\gamma E\bigl(\varepsilon_{t-1}\varepsilon_{t-k}\bigr)+\beta E\bigl(h_{t-1}\varepsilon_{t-k}\bigr)=(\alpha+\beta)E\bigl(h_{t-1}\varepsilon_{t-k}\bigr)=\cdots=(\alpha+\beta)^{k-1}E\bigl(h_{t-k+1}\varepsilon_{t-k}\bigr)=-2\alpha\gamma(\alpha+\beta)^{k-1}E\bigl(h_{t-1}\bigr),
\end{aligned} \tag{B.1}
\]
since
\[
E\bigl(h_t\varepsilon_{t-1}\bigr)=E\bigl(\alpha\varepsilon^3_{t-1}-2\alpha\gamma\varepsilon^2_{t-1}+\beta h_{t-1}\varepsilon_{t-1}\bigr)=-2\alpha\gamma E\bigl(\varepsilon^2_{t-1}\bigr)=-2\alpha\gamma E\bigl(h_{t-1}\bigr). \tag{B.2}
\]
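These recursions are easily checked by simulation. The sketch below (ours, with arbitrary admissible parameter values) simulates the variance using the GQARCH(1,1) recursion written as h_t = ω + α(ε_{t−1} − γ)² + βh_{t−1}, the form implicit in (B.2), with ε_t = √h_t z_t and z_t i.i.d. standard normal, and compares the sample moments with (B.1); the two numbers on each printed line should agree up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(3)
omega, alpha, beta, gamma = 0.4, 0.15, 0.7, 0.4   # illustrative admissible values (A < 1)
T, k = 500_000, 3

h = np.empty(T)
eps = np.empty(T)
h[0] = (omega + alpha * gamma ** 2) / (1.0 - alpha - beta)   # unconditional mean of h_t
eps[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = omega + alpha * (eps[t - 1] - gamma) ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()

# Cov(h_t, h_{t-k}) versus (alpha + beta)**k * V(h_t)
print(np.cov(h[k:], h[:-k])[0, 1], (alpha + beta) ** k * h.var())
# E(h_t * eps_{t-k}) versus -2*alpha*gamma*(alpha + beta)**(k-1) * E(h_{t-1})
print(np.mean(h[k:] * eps[:-k]), -2 * alpha * gamma * (alpha + beta) ** (k - 1) * h.mean())
```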

Also,
\[
\begin{aligned}
\operatorname{Cov}\bigl(h^2_t,h_{t-k}\bigr)&=A\operatorname{Cov}\bigl(h^2_{t-1},h_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},h_{t-k}\bigr)\\
&=A^2\operatorname{Cov}\bigl(h^2_{t-2},h_{t-k}\bigr)+AB\operatorname{Cov}\bigl(h_{t-2},h_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},h_{t-k}\bigr)=\cdots\\
&=A^{k-1}\operatorname{Cov}\bigl(h^2_{t-k+1},h_{t-k}\bigr)+A^{k-2}B\operatorname{Cov}\bigl(h_{t-k+1},h_{t-k}\bigr)+\cdots+AB\operatorname{Cov}\bigl(h_{t-2},h_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},h_{t-k}\bigr)\\
&=A^{k-1}\operatorname{Cov}\bigl(h^2_{t-k+1},h_{t-k}\bigr)+(\alpha+\beta)B\frac{(\alpha+\beta)^{k-1}-A^{k-1}}{(\alpha+\beta)-A}V\bigl(h_t\bigr)\\
&=A^k\bigl[E\bigl(h^3_{t-1}\bigr)-E\bigl(h^2_{t-1}\bigr)E\bigl(h_{t-1}\bigr)\bigr]+\frac{(\alpha+\beta)^k-A^k}{(\alpha+\beta)-A}BV\bigl(h_t\bigr),
\end{aligned} \tag{B.3}
\]
as
\[
\operatorname{Cov}\bigl(h^2_t,h_{t-1}\bigr)=A\bigl[E\bigl(h^3_{t-1}\bigr)-E\bigl(h^2_{t-1}\bigr)E\bigl(h_{t-1}\bigr)\bigr]+BV\bigl(h_{t-1}\bigr), \tag{B.4}
\]
where $A=3\alpha^2+\beta^2+2\alpha\beta$ and $B=2\bigl[2\alpha^2\gamma^2+\bigl(\omega+\alpha\gamma^2\bigr)(\alpha+\beta)\bigr]$.

Furthermore,
\[
\begin{aligned}
E\bigl(h^2_t\varepsilon_{t-k}\bigr)&=AE\bigl(h^2_{t-1}\varepsilon_{t-k}\bigr)+BE\bigl(h_{t-1}\varepsilon_{t-k}\bigr)\\
&=A^2E\bigl(h^2_{t-2}\varepsilon_{t-k}\bigr)+ABE\bigl(h_{t-2}\varepsilon_{t-k}\bigr)+BE\bigl(h_{t-1}\varepsilon_{t-k}\bigr)=\cdots\\
&=A^{k-1}E\bigl(h^2_{t-k+1}\varepsilon_{t-k}\bigr)-2\alpha\gamma B\bigl[A^{k-2}+\cdots+A(\alpha+\beta)^{k-3}+(\alpha+\beta)^{k-2}\bigr]E\bigl(h_{t-1}\bigr)\\
&=-4\alpha\gamma(3\alpha+\beta)A^{k-1}E\bigl(h^2_{t-1}\bigr)-4\alpha\gamma\left[\bigl(\omega+\alpha\gamma^2\bigr)\frac{(\alpha+\beta)^{k}-A^{k}}{(\alpha+\beta)-A}+2\alpha^2\gamma^2\frac{(\alpha+\beta)^{k-1}-A^{k-1}}{(\alpha+\beta)-A}\right]E\bigl(h_{t-1}\bigr),
\end{aligned} \tag{B.5}
\]
as
\[
E\bigl(h^2_t\varepsilon_{t-1}\bigr)=-4\alpha\gamma(3\alpha+\beta)E\bigl(h^2_{t-1}\bigr)-4\bigl(\omega+\alpha\gamma^2\bigr)\alpha\gamma E\bigl(h_{t-1}\bigr). \tag{B.6}
\]

Also we have that
\[
\operatorname{Cov}\bigl(h_t,h^2_{t-k}\bigr)=(\alpha+\beta)\operatorname{Cov}\bigl(h_{t-1},h^2_{t-k}\bigr)=(\alpha+\beta)^k\bigl[E\bigl(h^3_{t-1}\bigr)-E\bigl(h^2_{t-1}\bigr)E\bigl(h_{t-1}\bigr)\bigr], \tag{B.7}
\]
as
\[
\operatorname{Cov}\bigl(h_t,h^2_{t-1}\bigr)=(\alpha+\beta)\bigl[E\bigl(h^3_{t-1}\bigr)-E\bigl(h^2_{t-1}\bigr)E\bigl(h_{t-1}\bigr)\bigr]. \tag{B.8}
\]

Now,
\[
\begin{aligned}
\operatorname{Cov}\bigl(h^2_t,h^2_{t-k}\bigr)&=A\operatorname{Cov}\bigl(h^2_{t-1},h^2_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},h^2_{t-k}\bigr)\\
&=A^2\operatorname{Cov}\bigl(h^2_{t-2},h^2_{t-k}\bigr)+AB\operatorname{Cov}\bigl(h_{t-2},h^2_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},h^2_{t-k}\bigr)=\cdots\\
&=A^{k-1}\operatorname{Cov}\bigl(h^2_{t-k+1},h^2_{t-k}\bigr)+A^{k-2}B\operatorname{Cov}\bigl(h_{t-k+1},h^2_{t-k}\bigr)+\cdots+AB\operatorname{Cov}\bigl(h_{t-2},h^2_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},h^2_{t-k}\bigr)\\
&=A^kV\bigl(h^2_{t-1}\bigr)+B\frac{(\alpha+\beta)^k-A^k}{(\alpha+\beta)-A}\bigl[E\bigl(h^3_{t-1}\bigr)-E\bigl(h^2_{t-1}\bigr)E\bigl(h_{t-1}\bigr)\bigr],
\end{aligned} \tag{B.9}
\]
as
\[
\operatorname{Cov}\bigl(h^2_t,h^2_{t-1}\bigr)=AV\bigl(h^2_{t-1}\bigr)+B\bigl[E\bigl(h^3_{t-1}\bigr)-E\bigl(h^2_{t-1}\bigr)E\bigl(h_{t-1}\bigr)\bigr]. \tag{B.10}
\]

Further,
\[
\operatorname{Cov}\bigl(h_t,\varepsilon^2_{t-k}\bigr)=(\alpha+\beta)\operatorname{Cov}\bigl(h_{t-1},\varepsilon^2_{t-k}\bigr)=(\alpha+\beta)^{k-1}\operatorname{Cov}\bigl(h_{t-k+1},\varepsilon^2_{t-k}\bigr)=(\alpha+\beta)^kV\bigl(h_{t-1}\bigr)+2\alpha(\alpha+\beta)^{k-1}E\bigl(h^2_t\bigr), \tag{B.11}
\]
as
\[
\operatorname{Cov}\bigl(h_t,\varepsilon^2_{t-1}\bigr)=\alpha\bigl[2E\bigl(h^2_t\bigr)+V\bigl(h_{t-1}\bigr)\bigr]+\beta V\bigl(h_{t-1}\bigr)=(\alpha+\beta)V\bigl(h_{t-1}\bigr)+2\alpha E\bigl(h^2_t\bigr). \tag{B.12}
\]

Further,
\[
\begin{aligned}
\operatorname{Cov}\bigl(h^2_t,\varepsilon^2_{t-k}\bigr)&=A\operatorname{Cov}\bigl(h^2_{t-1},\varepsilon^2_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},\varepsilon^2_{t-k}\bigr)\\
&=A^2\operatorname{Cov}\bigl(h^2_{t-2},\varepsilon^2_{t-k}\bigr)+AB\operatorname{Cov}\bigl(h_{t-2},\varepsilon^2_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},\varepsilon^2_{t-k}\bigr)=\cdots\\
&=A^{k-1}\operatorname{Cov}\bigl(h^2_{t-k+1},\varepsilon^2_{t-k}\bigr)+A^{k-2}B\operatorname{Cov}\bigl(h_{t-k+1},\varepsilon^2_{t-k}\bigr)+\cdots+AB\operatorname{Cov}\bigl(h_{t-2},\varepsilon^2_{t-k}\bigr)+B\operatorname{Cov}\bigl(h_{t-1},\varepsilon^2_{t-k}\bigr)\\
&=A^{k-1}\operatorname{Cov}\bigl(h^2_{t-k+1},\varepsilon^2_{t-k}\bigr)+B(\alpha+\beta)\frac{(\alpha+\beta)^{k-1}-A^{k-1}}{(\alpha+\beta)-A}V\bigl(h_{t-1}\bigr)+2\alpha B\frac{(\alpha+\beta)^{k-1}-A^{k-1}}{(\alpha+\beta)-A}E\bigl(h^2_t\bigr)\\
&=A^k\bigl[E\bigl(h^3_{t-1}\bigr)-E\bigl(h^2_{t-1}\bigr)E\bigl(h_{t-1}\bigr)\bigr]+B\frac{(\alpha+\beta)^k-A^k}{(\alpha+\beta)-A}V\bigl(h_{t-1}\bigr)+4\alpha(3\alpha+\beta)A^{k-1}E\bigl(h^3_{t-1}\bigr)\\
&\quad+\left[2\alpha B\frac{(\alpha+\beta)^{k-1}-A^{k-1}}{(\alpha+\beta)-A}+4\bigl(2\alpha^2\gamma^2+\bigl(\omega+\alpha\gamma^2\bigr)\alpha\bigr)A^{k-1}\right]E\bigl(h^2_t\bigr),
\end{aligned} \tag{B.13}
\]
as
\[
\operatorname{Cov}\bigl(h^2_t,\varepsilon^2_{t-1}\bigr)=A\bigl[E\bigl(h^3_{t-1}\bigr)-E\bigl(h^2_{t-1}\bigr)E\bigl(h_{t-1}\bigr)\bigr]+4\alpha(3\alpha+\beta)E\bigl(h^3_{t-1}\bigr)+4\bigl(2\alpha^2\gamma^2+\bigl(\omega+\alpha\gamma^2\bigr)\alpha\bigr)E\bigl(h^2_t\bigr)+BV\bigl(h_t\bigr). \tag{B.14}
\]

Further,
\[
E\bigl(h_th_{t-k}\varepsilon_{t-k}\bigr)=(\alpha+\beta)E\bigl(h_{t-1}h_{t-k}\varepsilon_{t-k}\bigr)=\cdots=(\alpha+\beta)^{k-1}E\bigl(h_{t-k+1}h_{t-k}\varepsilon_{t-k}\bigr)=-2\alpha\gamma(\alpha+\beta)^{k-1}E\bigl(h^2_{t-1}\bigr), \tag{B.15}
\]
as
\[
E\bigl(h_th_{t-1}\varepsilon_{t-1}\bigr)=-2\alpha\gamma E\bigl(h^2_{t-1}\bigr). \tag{B.16}
\]

Finally,
\[
\begin{aligned}
E\bigl(h^2_th_{t-k}\varepsilon_{t-k}\bigr)&=AE\bigl(h^2_{t-1}h_{t-k}\varepsilon_{t-k}\bigr)+BE\bigl(h_{t-1}h_{t-k}\varepsilon_{t-k}\bigr)=\cdots\\
&=A^{k-1}E\bigl(h^2_{t-k+1}h_{t-k}\varepsilon_{t-k}\bigr)+A^{k-2}BE\bigl(h_{t-k+1}h_{t-k}\varepsilon_{t-k}\bigr)+\cdots+ABE\bigl(h_{t-2}h_{t-k}\varepsilon_{t-k}\bigr)+BE\bigl(h_{t-1}h_{t-k}\varepsilon_{t-k}\bigr)\\
&=-4\alpha\gamma A^{k-1}\bigl[(3\alpha+\beta)E\bigl(h^3_{t-1}\bigr)+\bigl(\omega+\alpha\gamma^2\bigr)E\bigl(h^2_{t-1}\bigr)\bigr]-2\alpha\gamma B\frac{(\alpha+\beta)^{k-1}-A^{k-1}}{(\alpha+\beta)-A}E\bigl(h^2_{t-1}\bigr),
\end{aligned} \tag{B.17}
\]
as
\[
E\bigl(h^2_th_{t-1}\varepsilon_{t-1}\bigr)=-4\alpha\gamma(3\alpha+\beta)E\bigl(h^3_{t-1}\bigr)-4\bigl(\omega+\alpha\gamma^2\bigr)\alpha\gamma E\bigl(h^2_{t-1}\bigr). \tag{B.18}
\]

Acknowledgments

The authors would like to thank S. Arvanitis, G. Fiorentini, and E. Sentana for their valuable comments. We also wish to thank the participants of the 7th Conference on Research on Economic Theory and Econometrics (Naxos, July 2008), the 14th Spring Meeting of Young Economists (Istanbul, April 2009), the Doctoral Meeting of Montpellier (D.M.M.) in Economics, Management and Finance (Montpellier, May 2009), the 2nd Ph.D. Conference in Economics 2009, in Memory of Vassilis Patsatzis (Athens, May 2009), the 3rd International Conference in Computational and Financial Econometrics (Limassol, October 2009), the 4th International Conference in Computational and Financial Econometrics (London, December 2010), and the seminar participants at the Athens University of Economics and Business for useful discussions. They acknowledge Landis Conrad's help with the collection of the data. Finally, they benefited greatly from the comments of the Associate Editor and an anonymous referee.