Journal of Probability and Statistics, Volume 2012, Article ID 167431, 16 pages | https://doi.org/10.1155/2012/167431

Research Article | Open Access

Bayesian Non-Parametric Mixtures of GARCH(1,1) Models

Academic Editor: Ori Rosen
Received: 02 Mar 2012 | Revised: 16 May 2012 | Accepted: 18 May 2012 | Published: 16 Jul 2012

Abstract

Traditional GARCH models describe volatility levels that evolve smoothly over time, generated by a single GARCH regime. However, nonstationary time series data may exhibit abrupt changes in volatility, suggesting changes in the underlying GARCH regimes. Further, the number and times of regime changes are not always obvious. This article outlines a nonparametric mixture of GARCH models that is able to estimate the number and time of volatility regime changes by mixing over the Poisson-Kingman process. The process is a generalisation of the Dirichlet process typically used in nonparametric models for time-dependent data, provides a richer clustering structure, and its application to time series data is novel. Inference is Bayesian, and a Markov chain Monte Carlo algorithm to explore the posterior distribution is described. The methodology is illustrated on the Standard and Poor's 500 financial index.

1. Introduction

Generalised autoregressive conditional heteroscedastic (GARCH) models estimate time-varying fluctuations around the mean level of a time series, known as the volatility [1, 2]. The standard GARCH model specifies the volatility persistence at time $t$ as a linear combination of previous volatilities and squared residual terms. The persistence is assumed constant for all $t$, resulting in smooth transitions of volatility levels. However, many nonstationary time series exhibit abrupt changes in volatility, suggesting fluctuating levels of volatility persistence. In this case the GARCH parameters undergo regime changes over time. If the maximum number of potential regimes is known, Markov-switching GARCH models are an appealing option [3–8]. However, often the number of volatility regimes is not known and can be difficult to preselect. In this case, Bayesian nonparametric mixture models are attractive because they allow the data to determine the number of volatility regimes or mixture components. For example, nonparametric mixing over the Dirichlet process has recently been described by Lau and So [9] for GARCH(1,1) models, Jensen and Maheu [10] for stochastic volatility models, and Griffin [11] for Ornstein-Uhlenbeck processes.

The Dirichlet process [12] is most widely applied in nonparametric mixture models and within a hierarchical framework is introduced in Lo [13] for independent data. Lau and So [9] extend the work of Lo [13] to time-dependent data where time-varying GARCH parameters are mixed over the Dirichlet process. The additional flexibility of this approach allows a range of GARCH regimes, from all observations generated by a single GARCH model to each observation in the series generated by a unique set of GARCH parameters. Lau and So [9] conclude with a discussion on the possibility of extending their method to alternative random probability measures that provide greater clustering flexibility than the Dirichlet process. We continue this discussion by outlining a novel method for a class of GARCH mixture models mixed over the Poisson-Kingman process [14, 15] derived from the stable subordinator (known henceforth as PKSS). To illustrate the richer clustering mechanisms of the PKSS process, we describe three of its special cases—the Dirichlet process [12], the Poisson-Dirichlet (PD) process [14, 16], and the Normalized Generalized Gamma (NGG) process [17, 18].

Theoretical developments and recent applications of the PKSS process are discussed in Lijoi et al. [19]. However, we note that the PD and the NGG processes have yet to be developed for volatility estimation, or indeed time series applications in general, and in this sense the work in this paper is novel. Although the Dirichlet process has now been used extensively in applications, the implementation of more general nonparametric mixture models for applied work is not always obvious. We therefore describe three Markov chain Monte Carlo (MCMC) algorithms. First, we develop a weighted Chinese restaurant Gibbs-type process for partition sampling to explore the posterior distribution. The basis of this algorithm is developed for time-dependent data in Lau and So [9], and we extend it to allow for the PKSS process. We also note recent developments for the sampling of Bayesian nonparametric models in Walker [20] and Papaspiliopoulos and Roberts [21] and describe how these algorithms can be constructed to estimate our model.

The methodology is illustrated through volatility and predictive density estimation of a GARCH(1,1) model applied to the Standard and Poor's 500 financial index from 2006 to 2009. Results are compared between a no-mixture model and nonparametric mixtures over the Dirichlet, PD, and NGG processes. Under the criterion of marginal likelihood the NGG process performs the best. Also, the PD and NGG processes outperform the previously studied Dirichlet process, which in turn outperforms the no-mixture model. The results suggest that alternatives to the Dirichlet process should be considered for applications of nonparametric mixture models to time-dependent data.

The paper proceeds as follows. Section 2 presents a Bayesian mixture of GARCH models over an unknown mixing distribution, outlines a convenient Bayesian estimator based on quadratic loss, and describes some of the time series properties of our model. Section 3 discusses the class of random probability measures we consider as the mixing distributions and details the clustering mechanisms associated with the three special cases mentioned above via the Pólya Urn representation and the consequences for the posterior distribution of the partitions resulting from the PKSS process. Our MCMC algorithm for sampling the posterior distribution is presented in Section 4, and the alternative MCMC algorithms of Walker [20] and Papaspiliopoulos and Roberts [21] are presented in Section 5. Section 6 describes the application, and Section 7 concludes the paper.

2. The Mixture Model

Let $Y_t$ be the observed variable at time $t$, let $\mathbf{Y}_{t-1}=\{Y_1,\ldots,Y_{t-1}\}$ be the observations from time 1 to time $t-1$, and set $Y_t=0$ for $t\le 0$. The GARCH(1,1) model specifies
$$Y_t \mid \mathbf{Y}_{t-1},(\nu,\chi,\psi) \sim N\bigl(0,\sigma_t^2\bigr), \qquad \sigma_t^2 = \nu + \chi Y_{t-1}^2 + \psi\,\sigma_{t-1}^2, \tag{2.1}$$
where $\nu>0$, $\chi\ge 0$, $\psi\ge 0$, and $\sigma_t^2=0$ for $t\le 0$. In (2.1) the GARCH parameters $(\nu,\chi,\psi)$ are not time varying, implying that volatility persistence is constant over time with smooth transitions of volatility levels. To allow abrupt changes to volatilities, we extend (2.1) by writing $Z_t=\{\nu_t,\chi_t,\psi_t\}$, $\nu_t>0$, $\chi_t\ge 0$, $\psi_t\ge 0$ for $t=1,\ldots,n$ and $\mathbf{Z}_t=\{Z_1,\ldots,Z_t\}$ as joint latent variables from time 1 to time $t$; that is, the model is now a dynamic GARCH model with each observation potentially generated by its own set of GARCH parameters as follows:
$$Y_t \mid \mathbf{Y}_{t-1},\mathbf{Z}_t \sim N\bigl(0,\sigma_t^2\bigr), \qquad \sigma_t^2 = \nu_t + \chi_t Y_{t-1}^2 + \psi_t\,\sigma_{t-1}^2. \tag{2.2}$$
Note that in model (2.2) the data control the maximum potential number of GARCH regimes, the sample size $n$. In contrast, finite switching models preallocate a maximum number of regimes, typically much smaller than the number of observations. As the potential number of regimes grows, estimation of the associated transition probabilities and GARCH parameters in finite switching models becomes prohibitive. However, assuming that the latent variables $\mathbf{Z}_n=\{Z_1,\ldots,Z_n\}$ are independent of each other and completing the hierarchy by modelling the GARCH parameters contained in $\mathbf{Z}_n$ with an unknown mixing distribution $G$ with law $\mathcal{P}$, the model becomes manageable, that is,
$$Z_t \mid G \overset{\text{iid}}{\sim} G(d\nu,d\chi,d\psi), \qquad G \sim \mathcal{P}, \tag{2.3}$$
with $\mathbf{Z}_n$ and the mixing distribution $G$ parameters that we may estimate. Depending on the posterior distribution of the clustering structure associated with the mixing distribution, the results may suggest anything from $Z_t=Z$ for $t=1,2,\ldots,n$ (a single-regime GARCH model) up to a unique $Z_t$ for each $t$, indicating a separate GARCH regime for each time point. This illustrates the flexibility of the model.
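To make the recursion in (2.2) concrete, the following sketch simulates a series from the dynamic GARCH(1,1) model given a sequence of parameter triples $Z_t=(\nu_t,\chi_t,\psi_t)$. The function name, the illustrative parameter values, and the use of NumPy are our own choices for illustration and are not taken from the paper.

```python
import numpy as np

def simulate_mixture_garch(Z, rng=None):
    """Simulate Y_1,...,Y_n from (2.2): sigma_t^2 = nu_t + chi_t*Y_{t-1}^2 + psi_t*sigma_{t-1}^2.

    Z is an (n, 3) array whose t-th row holds (nu_t, chi_t, psi_t); Y_0 and
    sigma_0^2 are taken to be zero, as in the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = Z.shape[0]
    y = np.zeros(n)
    sigma2 = np.zeros(n)
    prev_y2 = prev_sigma2 = 0.0
    for t in range(n):
        nu, chi, psi = Z[t]
        sigma2[t] = nu + chi * prev_y2 + psi * prev_sigma2
        y[t] = rng.normal(0.0, np.sqrt(sigma2[t]))
        prev_y2, prev_sigma2 = y[t] ** 2, sigma2[t]
    return y, sigma2

# Illustrative regime change: a low-volatility regime followed by a higher one.
Z = np.vstack([np.tile([0.05, 0.05, 0.80], (500, 1)),
               np.tile([0.20, 0.15, 0.80], (500, 1))])
y, sigma2 = simulate_mixture_garch(Z)
```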

We write $s(\mathbf{Z}_n,G)$ as a positive integrable function of the latent variables $\mathbf{Z}_n$ and the mixing distribution $G$, to represent various quantities that may be of interest for inference. Under quadratic loss the Bayesian estimator is the posterior expectation $E[s(\mathbf{Z}_n,G)\mid\mathbf{Y}_n]$. For our model this is an appealing estimator because it does not require the posterior of $G$ but only the posterior distribution of the sequence $\{Z_1,\ldots,Z_n\}$, that is,
$$E\bigl[s\bigl(\mathbf{Z}_n,G\bigr)\mid\mathbf{Y}_n\bigr] = \int_{\mathcal{Z}^n} h\bigl(\mathbf{Y}_n,\mathbf{Z}_n\bigr)\,\pi\bigl(d\mathbf{Z}_n\mid\mathbf{Y}_n\bigr), \tag{2.4}$$
because
$$h\bigl(\mathbf{Y}_n,\mathbf{Z}_n\bigr) = \int_{\mathcal{G}} s\bigl(\mathbf{Z}_n,G\bigr)\,\pi\bigl(dG\mid\mathbf{Y}_n,\mathbf{Z}_n\bigr), \tag{2.5}$$
where $\pi(dG\mid\mathbf{Y}_n,\mathbf{Z}_n)$ represents the posterior law of the random probability measure $G$, and $\pi(d\mathbf{Z}_n\mid\mathbf{Y}_n)$ is the posterior distribution of the sequence $\mathbf{Z}_n=\{Z_1,\ldots,Z_n\}$.

In Lau and So [9] the unknown mixing distribution for the GARCH parameters, $G$, is taken to be the Dirichlet process. This paper combines the theoretical groundwork of Lijoi et al. [19, 22] with Lau and So [9] by allowing $G$ to be the PKSS process. The result is a nonparametric GARCH model which contains (among others) the Dirichlet process typically used in time series, as well as the PD and NGG processes, as special cases.

Understanding conditions for stationarity of a time series model is fundamental for statistical inference. Since our model is specified with zero mean over time, we provide a necessary and sufficient condition for the existence of a second-order stationary solution for the infinite mixture of GARCH(1,1) models. The derivation closely follows Embrechts et al. [23] and Zhang et al. [24], and we state the conditions without proof. Letting $\epsilon_t$ be a standard normal random variable and replacing $Y_{t-1}^2$ by $\sigma_{t-1}^2\epsilon_{t-1}^2$, then, conditioned on $Y_i$ for $i=1,\ldots,t-1$ and $\{\nu_i,\chi_i,\psi_i\}$ for $i=1,\ldots,t$, $\sigma_t^2$ in (2.2) becomes
$$\sigma_t^2 = \nu_t + \chi_t Y_{t-1}^2 + \psi_t\sigma_{t-1}^2 = \nu_t + \bigl(\chi_t\epsilon_{t-1}^2 + \psi_t\bigr)\sigma_{t-1}^2. \tag{2.6}$$
Equation (2.6) is a well-known univariate stochastic difference equation, expressed as
$$X_t = B_t + A_t X_{t-1}, \tag{2.7}$$
where $X_t=\sigma_t^2$, $A_t=\chi_t\epsilon_{t-1}^2+\psi_t$, and $B_t=\nu_t$. The stationarity of (2.7) implies the second-order stationarity of (2.2); that is, $X_t \overset{d}{\to} X$ as $t\to\infty$ for some random variable $X$, and $X$ satisfies $X = B + AX$, where the random variable pair $(A_t,B_t)\overset{d}{\to}(A,B)$ as $t\to\infty$ for some random variable pair $(A,B)$. A stationary solution of (2.7) exists if $E[\ln|A_t|]<0$ and $E[\ln^+|B_t|]<\infty$, where $\ln^+|x|=\ln[\max\{x,1\}]$, as given in Embrechts et al. [23, Section 8.4, pages 454–481], Zhang et al. [24, Theorems 2 and 3], Vervaat [25], and Brandt [26]. So the conditions for stationarity in our model are
$$\int_{(0,\infty)}\ln^+|\nu|\,H_1(d\nu)<\infty, \qquad \int_{(0,\infty)^2}E\bigl[\ln\bigl|\chi\epsilon^2+\psi\bigr|\bigr]\,H_2(d\chi,d\psi)<0, \tag{2.8}$$
where the expectation in the second condition is taken only over $\epsilon$, a standard normal random variable, and $H_1(d\nu)=\int_{(0,\infty)^2}H(d\nu,d\chi,d\psi)$ and $H_2(d\chi,d\psi)=\int_{(0,\infty)}H(d\nu,d\chi,d\psi)$ are marginal measures of $H=E[G]$, the mean measure of $G$.
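The second condition in (2.8) can be checked numerically for a given regime. The sketch below is a plain Monte Carlo estimate of $E[\ln|\chi\epsilon^2+\psi|]$ for standard normal $\epsilon$; the parameter values are arbitrary illustrations, not values used in the paper.

```python
import numpy as np

def log_moment(chi, psi, n_draws=200_000, seed=0):
    """Monte Carlo estimate of E[ln|chi*eps^2 + psi|] with eps ~ N(0,1).

    A negative value (together with a finite log^+ moment for nu) points to
    the stationarity condition (2.8) holding for a regime with (chi, psi).
    """
    eps = np.random.default_rng(seed).standard_normal(n_draws)
    return np.mean(np.log(np.abs(chi * eps**2 + psi)))

print(log_moment(0.05, 0.90))   # clearly negative: the log-moment condition holds
print(log_moment(0.50, 0.90))   # positive: the condition fails for this regime
```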

Now consider the first two conditional moments from model (2.2). Obviously, the first conditional moment is zero, and the second conditional moment is identical to $\sigma_t^2$ in (2.6). The distinguishing feature of model (2.2) is that the parameters change over time and have the distribution $G$. Considering $\sigma_t^2$ as a scale of the model results in a scale mixture model over time. From the representation in (2.6), $\sigma_t^2$ can be rewritten as
$$\sigma_t^2 = \nu_t + \sum_{j=1}^{t-1}\nu_{t-j}\prod_{i=t-j+1}^{t}\bigl(\chi_i\epsilon_{i-1}^2+\psi_i\bigr). \tag{2.9}$$

The unconditional second moment could be derived according to this representation by marginalising over all the random variates. Also, $\sigma_t^2$ in (2.9) can be viewed as a weighted sum of the random sequence $\{\nu_t,\ldots,\nu_1\}$, and the random weights decay to zero at a polynomial rate, as long as the model is stationary. In fact, the rate could be irregular over time, and this is a substantial difference between the mixture of GARCH models and the traditional GARCH models.

Finally, one might also be interested in the connection between models such as (2.2), with parameters having the distribution (2.3), and those with the Markov-switching characteristic that results in Markov-switching GARCH models [6, 7]. Markov-switching GARCH models have a similar structure to (2.3), and we can replace (2.3) by
$$Z_t \mid \{S_t=i\} = \bigl(\nu_i,\chi_i,\psi_i\bigr), \qquad S_t\mid S_{t-1} \sim \eta_{ij} = P\bigl(S_t=i\mid S_{t-1}=j\bigr), \quad \text{for } i,j=1,\ldots,K, \tag{2.10}$$
where $S_t$ denotes the state variable, usually latent and unobserved. Marginalising the current state variable $S_t$ in (2.10) yields the conditional distribution for $Z_t$ given the previous state $S_{t-1}$,
$$Z_t\mid S_{t-1}=j \sim \sum_{i=1}^{K}\eta_{ij}\,\delta_{(\nu_i,\chi_i,\psi_i)}. \tag{2.11}$$
So (2.11) can be viewed as a random probability measure, but with a finite number of components and dependent on the previous state $S_{t-1}$.

3. The Random Probability Measures

We now describe the PKSS process and detail the Dirichlet, the PD, and the NGG processes to illustrate how the more general PKSS process allows for richer clustering mechanisms. Let $\mathcal{Z}$ be a complete and separable metric space and $\mathcal{B}(\mathcal{Z})$ the corresponding Borel $\sigma$-field. Let $G\in\mathcal{G}$ be a probability measure on the space $(\mathcal{Z},\mathcal{B}(\mathcal{Z}))$, where $\mathcal{G}$ is the set of probability measures equipped with a suitable $\sigma$-field $\mathcal{B}(\mathcal{G})$ and the corresponding probability measure $\mathcal{P}$ (see Chapter 2 of Ghosh and Ramamoorthi [27] for more details). The random probability measure $G$ is sampled from the law $\mathcal{P}$ and operates as the unknown mixing distribution of the GARCH parameters in (2.2).

All random probability measures within the class of PKSS processes are almost surely discrete and can be represented as
$$G(A) = \sum_{i=1}^{\infty} W_i\,\delta_{Z_i}(A) \quad \text{for } A\in\mathcal{B}(\mathcal{Z}), \tag{3.1}$$
where $\delta_{Z_i}$ denotes the Dirac delta measure concentrated at $Z_i$, the sequence of random variables $\{Z_1,Z_2,\ldots\}$ is drawn from a nonatomic probability measure $H$, and the sequence of random weights $\{W_1,W_2,\ldots\}$ sums to 1 [28]. Also, the mean measure of the process $G$ with respect to $\mathcal{P}$ is $H$:
$$E[G(A)] = H(A) \quad \text{for } A\in\mathcal{B}(\mathcal{Z}). \tag{3.2}$$

A common characterization of (3.1) is the well-known Pólya Urn prediction distribution described in Pitman [28]. For the purposes of this paper the Pólya Urn warrants further discussion for two reasons. First, it is important for developing our MCMC algorithm to explore the posterior distribution discussed in Section 4. Second, it explicitly details how the PKSS process is a generalisation of the Dirichlet, PD, and NGG processes and how the different cluster tuning mechanisms operate.

Let $\{Z_1,\ldots,Z_r\}$ be a sequence of size $r$ drawn from $G$, where $r$ is a positive integer, and let $\mathbf{p}_r$ denote a partition of the integers $\{1,\ldots,r\}$. A partition $\mathbf{p}_r=\{C_{r,1},\ldots,C_{r,N_r}\}$ of size $N_r$ contains disjoint clusters $C_{r,j}$ of size $e_{r,j}$, indicated by the distinct values $\{Z_1^*,\ldots,Z_{N_r}^*\}$. The Pólya Urn prediction distribution for the PKSS process can now be written as
$$\pi\bigl(dZ_{i+1}\mid\mathbf{Z}_i\bigr) = \frac{V_{i+1,N_i+1}}{V_{i,N_i}}\,H\bigl(dZ_{i+1}\bigr) + \frac{V_{i+1,N_i}}{V_{i,N_i}}\sum_{j=1}^{N_i}\bigl(e_{i,j}-\alpha\bigr)\,\delta_{Z_j^*}\bigl(dZ_{i+1}\bigr), \tag{3.3}$$
for $i=1,\ldots,r-1$, with $\pi(dZ_1)=H(dZ_1)$, $V_{1,1}=1$ and
$$V_{i,N_i} = \bigl(i - N_i\alpha\bigr)V_{i+1,N_i} + V_{i+1,N_i+1}. \tag{3.4}$$

The Pólya Urn prediction distribution states that $Z_{i+1}$ takes a new value from $H$ with mass $V_{i+1,N_i+1}/V_{i,N_i}$ and takes one of the existing values $\{Z_1^*,\ldots,Z_{N_i}^*\}$ with total mass $(i-N_i\alpha)V_{i+1,N_i}/V_{i,N_i}$. This yields the joint prior distribution
$$\pi\bigl(dZ_1,dZ_2,\ldots,dZ_n\bigr) = \pi\bigl(dZ_1\bigr)\prod_{i=1}^{n-1}\pi\bigl(dZ_{i+1}\mid Z_1,\ldots,Z_i\bigr), \tag{3.5}$$
as a product of easily managed conditional densities useful for our MCMC scheme below.

The PKSS process can be represented as either the Dirichlet, the PD, or the NGG process by specialising the weights in (3.3) as follows.
(1) Taking $0\le\alpha<1$ and
$$V_{i,N_i} = \frac{\Gamma(\theta)}{\Gamma(\theta+i)}\prod_{j=1}^{N_i}\bigl(\theta+(j-1)\alpha\bigr), \tag{3.6}$$
for $\theta>-\alpha$, results in the PD process.
(2) Setting $\alpha=0$, so that
$$V_{i,N_i} = \frac{\Gamma(\theta)}{\Gamma(\theta+i)}\,\theta^{N_i}, \tag{3.7}$$
the PD process becomes the Dirichlet process.
(3) The NGG process takes $0\le\alpha<1$, such that
$$V_{i,N_i} = \frac{\alpha^{N_i-1}e^{\beta}}{\Gamma(i)}\sum_{k=0}^{i-1}\binom{i-1}{k}(-1)^{k}\beta^{k/\alpha}\,\Gamma\Bigl(N_i-\frac{k}{\alpha},\beta\Bigr), \tag{3.8}$$
for $\beta>0$.

In the above, $\Gamma(\cdot)$ is the complete Gamma function and $\Gamma(\cdot,\cdot)$ is the incomplete Gamma function. Examining the predictive distribution (3.3), the ratios $V_{i+1,N_i+1}/V_{i,N_i}$ and $V_{i+1,N_i}/V_{i,N_i}$ indicate the difference between the Dirichlet process and the other processes. Substituting the values of $V_{i,N_i}$ into the allocation masses shows how the ratios depend on the number of existing clusters, $N_i$. The Dirichlet process assigns probability to a new value independently of the number of existing clusters, so the rate of increment of the partition size is constant. In contrast, the PD and NGG processes assign probability to a new value that depends on the number of existing clusters. The comparison of these three special cases illustrates the richer clustering mechanisms of the PKSS process over the Dirichlet process. Furthermore, the PKSS process contains many other random measures, and the clustering behaviour of these measures would be of interest for further investigation.
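The contrast between the Dirichlet and PD allocation masses has a simple closed form, namely $\theta/(\theta+i)$ for a new cluster under the Dirichlet process and $(\theta+N_i\alpha)/(\theta+i)$ under PD$(\alpha,\theta)$. A minimal sketch, using the parameter values later adopted in Section 6 purely for illustration (the NGG ratio involves incomplete gamma functions and is omitted here):

```python
def prob_new_cluster_dp(i, theta):
    """Dirichlet process: mass on a new cluster after i observations."""
    return theta / (theta + i)

def prob_new_cluster_pd(i, n_i, theta, alpha):
    """PD(alpha, theta): mass on a new cluster after i observations
    when the current partition has n_i clusters."""
    return (theta + n_i * alpha) / (theta + i)

i = 100
print(prob_new_cluster_dp(i, theta=2.3538))                   # fixed, whatever N_i is
for n_i in (2, 10, 30):                                       # grows with N_i
    print(n_i, prob_new_cluster_pd(i, n_i, theta=0.6769, alpha=0.5))
```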

Turning to the distribution of partitions, Pitman [28] shows that the joint prior distribution of the sequence $\{Z_1,\ldots,Z_r\}$ is
$$\pi\bigl(dZ_1,\ldots,dZ_r\bigr) = V_{r,N_r}\prod_{j=1}^{N_r}\frac{\Gamma\bigl(e_{r,j}-\alpha\bigr)}{\Gamma(1-\alpha)}\,H\bigl(dZ_j^*\bigr). \tag{3.9}$$

Notice that the joint distribution depends on the partition $\mathbf{p}_r$ of the $r$ integers $\{1,\ldots,r\}$, and we can decompose (3.9) into $\pi(dZ_1^*,\ldots,dZ_{N_r}^*,\mathbf{p}_r)=\pi(dZ_1^*,\ldots,dZ_{N_r}^*\mid\mathbf{p}_r)\,\pi(\mathbf{p}_r)$. The distribution of the partition, $\pi(\mathbf{p}_r)$, is
$$\pi\bigl(\mathbf{p}_r\bigr) = V_{r,N_r}\prod_{j=1}^{N_r}\frac{\Gamma\bigl(e_{r,j}-\alpha\bigr)}{\Gamma(1-\alpha)} \tag{3.10}$$
and is known as the Exchangeable Partition Probability Function. For many nonparametric models, this representation also helps MCMC construction by expressing the posterior distribution of the partition in the form of an Exchangeable Partition Probability Function. To do so it would be necessary to obtain the posterior distribution of the partition $\pi(\mathbf{p}_n\mid\mathbf{Y}_n)$ analytically; we could then generate $\mathbf{p}_n$ and approximate the posterior expectation. However, this is not possible in general, so we consider the joint distribution of $\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\}$ instead. We write the posterior expectation of $s(\mathbf{Z}_n,G)$ as a marginalization over the joint posterior distribution of $\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\}$:
$$E\bigl[s\bigl(\mathbf{Z}_n,G\bigr)\mid\mathbf{Y}_n\bigr] = \sum_{\mathbf{p}_n}\underbrace{\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}}_{N_n}\,h\bigl(\mathbf{Y}_n,\bigl\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr\}\bigr)\,\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\mid\mathbf{Y}_n\bigr). \tag{3.11}$$
Here the joint posterior distribution of $\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\}$ is given by
$$\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\mid\mathbf{Y}_n\bigr) = \frac{\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\bigl\{\mathbf{Z}^*_{N_t},\mathbf{p}_t\bigr\}\bigr)\bigr)\,\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)}{\sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\bigl\{\mathbf{Z}^*_{N_t},\mathbf{p}_t\bigr\}\bigr)\bigr)\,\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)}, \tag{3.12}$$
where $\phi(x\mid a,b)$ denotes a normal density with mean $a$ and variance $b$ evaluated at $x$. The variance $\sigma_t^2(\{\mathbf{Z}^*_{N_t},\mathbf{p}_t\})$ is identical to $\sigma_t^2$; the notation emphasizes that $\sigma_t^2$ is a function of $\{Z_1,Z_2,\ldots,Z_t\}$, represented by $\{\mathbf{Z}^*_{N_t},\mathbf{p}_t\}$. This representation leads to the development of the MCMC algorithm in the next section. For the sake of simplicity, we use the following interchangeable expressions for the variance:
$$\sigma_t^2\bigl(\bigl\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr\}\bigr) = \sigma_t^2\bigl(\mathbf{Z}_n\bigr) = \sigma_t^2\bigl(\bigl\{\mathbf{Z}^*_{N_t},\mathbf{p}_t\bigr\}\bigr) = \sigma_t^2\bigl(\mathbf{Z}_t\bigr) = \sigma_t^2. \tag{3.13}$$
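As a small numerical illustration of (3.10), the sketch below evaluates the log EPPF for the PD$(\alpha,\theta)$ special case, using the standard form $V_{r,N_r}=\prod_{j=1}^{N_r-1}(\theta+j\alpha)/(\theta+1)_{r-1}$; the function name and the example cluster sizes are our own.

```python
import numpy as np
from scipy.special import gammaln

def pd_log_eppf(cluster_sizes, alpha, theta):
    """Log exchangeable partition probability function (3.10) for PD(alpha, theta)."""
    e = np.asarray(cluster_sizes, dtype=float)
    n, k = e.sum(), len(e)
    log_v = (np.sum(np.log(theta + alpha * np.arange(1, k)))    # prod_{j=1}^{k-1}(theta + j*alpha)
             + gammaln(theta + 1.0) - gammaln(theta + n))       # 1 / (theta + 1)_{n-1}
    return log_v + np.sum(gammaln(e - alpha) - gammaln(1.0 - alpha))

# Partition of r = 10 items into clusters of sizes 6, 3, and 1.
print(pd_log_eppf([6, 3, 1], alpha=0.5, theta=0.6769))
```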

We emphasise that we can always express $\mathbf{Z}_n$ by two elements, namely a partition and the distinct values. In this case $\mathbf{p}_n$ is a partition of the integers $\{1,\ldots,n\}$, which are the indices of $\mathbf{Z}_n$, and $\mathbf{Z}^*_{N_n}=\{Z_1^*,\ldots,Z_{N_n}^*\}$ represents the distinct values of $\mathbf{Z}_n$. The partition $\mathbf{p}_n$ locates the distinct values from $\mathbf{Z}_n$ to $\mathbf{Z}^*_{N_n}$, or vice versa. As a result, we have the following equivalent representations:
$$\mathbf{Z}_n = \bigl\{Z_1,\ldots,Z_n\bigr\} = \bigl\{\bigl\{Z_1^*,\ldots,Z_{N_n}^*\bigr\},\mathbf{p}_n\bigr\} = \bigl\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr\}. \tag{3.14}$$

In time series analysis, we usually consider the first $t$ items, $\mathbf{Z}_t=\{Z_1,\ldots,Z_t\}$, the corresponding partition $\mathbf{p}_t$, and the distinct values $\mathbf{Z}^*_{N_t}$, such that
$$\mathbf{Z}_t = \bigl\{\mathbf{Z}^*_{N_t},\mathbf{p}_t\bigr\}. \tag{3.15}$$
Here $\mathbf{Z}_t$ contains the first $t$ elements of $\mathbf{Z}_n=\{Z_1,\ldots,Z_n\}$, and adding the integers $\{t+1,\ldots,n\}$ to $\mathbf{p}_t$ yields $\mathbf{p}_n$ according to $\mathbf{Z}^*_{N_t}$ and the distinct values of $\{Z_{t+1},\ldots,Z_n\}$. Combining $\mathbf{Z}^*_{N_t}$ and the distinct values of $\{Z_{t+1},\ldots,Z_n\}$ gives $\mathbf{Z}^*_{N_n}$, providing the connection between $\{\mathbf{Z}^*_{N_t},\mathbf{p}_t\}$ and $\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\}$. To simplify the likelihood expression and the sampling algorithm, we replace $\sigma_t^2(\mathbf{Z}_t)=\sigma_t^2(\{\mathbf{Z}^*_{N_t},\mathbf{p}_t\})$ by $\sigma_t^2(\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\})$, since the subscript $t$ in $\sigma_t^2$ already tells us that only the first $t$ items of $\mathbf{Z}_n$ are considered. We then have a more accessible representation of the likelihood function,
$$L\bigl(\mathbf{Y}_n\mid\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr) = \prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\bigl\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr\}\bigr)\bigr), \tag{3.16}$$
and (3.12) becomes
$$\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\mid\mathbf{Y}_n\bigr) = \frac{L\bigl(\mathbf{Y}_n\mid\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)\,\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)}{\sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}L\bigl(\mathbf{Y}_n\mid\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)\,\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)}. \tag{3.17}$$
We are now equipped to describe the MCMC algorithm.
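The likelihood (3.16) only requires running the GARCH recursion with the parameter triple attached to each time point by the partition. A minimal sketch, assuming the partition is stored as a flat vector of integer cluster labels and the distinct values as a mapping from label to $(\nu,\chi,\psi)$:

```python
import numpy as np
from scipy.stats import norm

def garch_mixture_loglik(y, labels, z_star):
    """Log of the likelihood (3.16) for the dynamic GARCH(1,1) mixture.

    y       : observations Y_1,...,Y_n
    labels  : integer cluster label for each t (a flat encoding of p_n)
    z_star  : mapping from each label to its distinct value (nu, chi, psi)
    """
    loglik = 0.0
    prev_y2 = prev_sigma2 = 0.0
    for t in range(len(y)):
        nu, chi, psi = z_star[labels[t]]
        sigma2 = nu + chi * prev_y2 + psi * prev_sigma2
        loglik += norm.logpdf(y[t], loc=0.0, scale=np.sqrt(sigma2))
        prev_y2, prev_sigma2 = y[t] ** 2, sigma2
    return loglik
```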

4. The Algorithm for the Partitions and Distinct Values Sampling

Our Markov chain Monte Carlo (MCMC) sampling procedure alternately generates distinct values and partitions from the posterior distribution $\pi(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\mid\mathbf{Y}_n)$. For $S$ iterations our MCMC algorithm is:
(1) Initialise $\mathbf{Z}^*_{N_n}=(\mathbf{Z}^*_{N_n})^{[0]}$. For $s=1,2,\ldots,S$:
(2) Generate $\mathbf{p}_n^{[s]}$ from $\pi(\mathbf{p}_n\mid(\mathbf{Z}^*_{N_n})^{[s-1]},\mathbf{Y}_n)$.
(3) Generate $(\mathbf{Z}^*_{N_n})^{[s]}$ from $\pi(d\mathbf{Z}^*_{N_n}\mid\mathbf{p}_n^{[s]},\mathbf{Y}_n)$.
End.
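A minimal skeleton of this two-block sampler is sketched below. It is not the authors' implementation: the two update functions are left abstract and would have to implement (4.1)-(4.6), and the flat-label encoding of the partition is our own convention.

```python
def run_sampler(y, labels_init, z_init, sample_partition, sample_distinct, n_iter=20_000):
    """Two-block sampler skeleton targeting pi(dZ*_{N_n}, p_n | Y_n).

    sample_partition(y, labels, z_star) -> labels   implements step (2);
    sample_distinct(y, labels, z_star)  -> z_star   implements step (3).
    labels is a flat integer encoding of p_n; z_star maps labels to (nu, chi, psi).
    """
    labels, z_star = list(labels_init), dict(z_init)
    draws = []
    for _ in range(n_iter):
        labels = sample_partition(y, labels, z_star)   # step (2)
        z_star = sample_distinct(y, labels, z_star)    # step (3)
        draws.append((list(labels), dict(z_star)))
    return draws
```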

To obtain our estimates we use the weighted Chinese restaurant Gibbs-type process introduced in Lau and So [9] for time series models mixed over the Dirichlet process. We have extended this scheme to allow for the more general PKSS process. In what follows, the extension from the Dirichlet process to the PKSS process lies in the weights of the Pólya Urn predictive distribution in (3.3).

The main idea of this algorithm is the "leave one out" principle, which removes item $t$ from the partition and then replaces it. This gives an update on both $\mathbf{Z}^*_{N_n}$ and $\mathbf{p}_n$. The idea has been applied to partition sampling in many Bayesian nonparametric models based on the Dirichlet process (see [17] for a review). The strategy is a simple evaluation of the product of the likelihood function (3.16) and the Pólya Urn distribution (3.3) of $Z_t$, conditioned on the remaining parameters, yielding a joint updating distribution for $\mathbf{Z}^*_{N_n}$ and $\mathbf{p}_n$. We now describe the distributions $\pi(\mathbf{p}_n\mid\mathbf{Z}^*_{N_n},\mathbf{Y}_n)$ and $\pi(d\mathbf{Z}^*_{N_n}\mid\mathbf{p}_n,\mathbf{Y}_n)$ used in the sampling scheme.

Define $\mathbf{p}_{n,-t}$ to be the partition $\mathbf{p}_n$ with item $t$ removed. Then $\mathbf{p}_{n,-t}=\{C_{1,-t},C_{2,-t},\ldots,C_{N_{n,-t},-t}\}$ with corresponding distinct values $\mathbf{Z}^*_{N_{n,-t}}=\{Z^*_{1,-t},Z^*_{2,-t},\ldots,Z^*_{N_{n,-t},-t}\}$. To generate from $\pi(\mathbf{p}_n\mid\mathbf{Z}^*_{N_n},\mathbf{Y}_n)$, for each $t=1,\ldots,n$ the item $t$ is assigned either to a new cluster $C_{N_{n,-t}+1,-t}$ (that is, a cluster empty before $t$ is assigned) with probability
$$\frac{\pi\bigl(\tilde{\mathbf{p}}_{N_{n,-t}+1}\bigr)\int_{\mathcal{Z}}L\bigl(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}^*_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\bigr)H\bigl(dZ_{N_{n,-t}+1,-t}\bigr)}{\pi\bigl(\tilde{\mathbf{p}}_{N_{n,-t}+1}\bigr)\int_{\mathcal{Z}}L\bigl(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}^*_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\bigr)H\bigl(dZ_{N_{n,-t}+1,-t}\bigr)+\sum_{j=1}^{N_{n,-t}}\pi\bigl(\tilde{\mathbf{p}}_j\bigr)L\bigl(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}^*_j,\tilde{\mathbf{p}}_j\bigr)} \tag{4.1}$$
or to an existing cluster $C_{j,-t}$, for $j=1,\ldots,N_{n,-t}$, with probability
$$\frac{\pi\bigl(\tilde{\mathbf{p}}_j\bigr)L\bigl(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}^*_j,\tilde{\mathbf{p}}_j\bigr)}{\pi\bigl(\tilde{\mathbf{p}}_{N_{n,-t}+1}\bigr)\int_{\mathcal{Z}}L\bigl(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}^*_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\bigr)H\bigl(dZ_{N_{n,-t}+1,-t}\bigr)+\sum_{j=1}^{N_{n,-t}}\pi\bigl(\tilde{\mathbf{p}}_j\bigr)L\bigl(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}^*_j,\tilde{\mathbf{p}}_j\bigr)}, \tag{4.2}$$
where
$$\tilde{\mathbf{p}}_j = \mathbf{p}_{n,-t}\cup\bigl\{t\in C_{j,-t}\bigr\}, \qquad \tilde{\mathbf{Z}}^*_j = \bigl\{\mathbf{Z}^*_{N_{n,-t}},\,Z_t=Z^*_{j,-t}\bigr\}, \tag{4.3}$$
for $j=1,\ldots,N_{n,-t}+1$. In addition, if $j=N_{n,-t}+1$ and a new cluster is selected, a sample of $Z^*_{N_{n,-t}+1,-t}$ is drawn from
$$\frac{L\bigl(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}^*_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\bigr)H\bigl(dZ^*_{N_{n,-t}+1,-t}\bigr)}{\int_{\mathcal{Z}}L\bigl(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}^*_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\bigr)H\bigl(dZ^*_{N_{n,-t}+1,-t}\bigr)} \tag{4.4}$$
for the next iteration.
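To convey the structure of (4.1) and (4.2) in code, the sketch below computes unnormalised log-weights for re-seating a single item $t$. It is a simplified illustration rather than the authors' exact scheme: the EPPF and likelihood are passed in as functions (for instance the pd_log_eppf and garch_mixture_loglik sketches above), and the integral over $H$ for a new cluster is approximated by plain Monte Carlo over draws from $H$.

```python
import numpy as np
from collections import Counter

def reseat_logweights(t, y, labels, z_star, log_eppf, loglik, base_draws):
    """Unnormalised log-probabilities for re-seating item t, following (4.1)-(4.2).

    labels[t] is assumed to be None ("leave one out"); log_eppf maps cluster
    sizes to the log EPPF (3.10); loglik is the log of (3.16); base_draws is a
    list of (nu, chi, psi) draws from H used to approximate the new-cluster integral.
    """
    existing = sorted(lab for lab in set(labels) if lab is not None)
    new_label = max(existing, default=-1) + 1
    logw = []
    for j in existing + [new_label]:
        cand = list(labels)
        cand[t] = j
        sizes = list(Counter(cand).values())
        if j in existing:                                   # existing cluster, (4.2)
            logw.append(log_eppf(sizes) + loglik(y, cand, z_star))
        else:                                               # new cluster, (4.1)
            liks = np.array([loglik(y, cand, {**z_star, new_label: z_new})
                             for z_new in base_draws])
            mc = liks.max() + np.log(np.mean(np.exp(liks - liks.max())))
            logw.append(log_eppf(sizes) + mc)
    return np.array(logw)
```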

To generate from $\pi(\mathbf{Z}^*_{N_n}\mid\mathbf{p}_n,\mathbf{Y}_n)$, for $j=1,\ldots,N_n$ generate $Z_j^*$ given $\{Z_1^*,\ldots,Z_{N_n}^*\}\setminus\{Z_j^*\}$, $\mathbf{p}_n$, and $\mathbf{Y}_n$ from the conditional distribution
$$\pi\bigl(dZ_j^*\mid\mathbf{Z}^*_{N_n}\setminus\{Z_j^*\},\mathbf{p}_n,\mathbf{Y}_n\bigr) = \frac{L\bigl(\mathbf{Y}_n\mid\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)H\bigl(dZ_j^*\bigr)}{\int_{\mathcal{Z}}L\bigl(\mathbf{Y}_n\mid\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)H\bigl(dZ_j^*\bigr)}. \tag{4.5}$$
This step uses the standard Metropolis-Hastings algorithm to draw the posterior samples $Z_j^*$. Precisely, (4.5) is given by
$$\pi\bigl(dZ_j^*\mid\mathbf{Z}^*_{N_n}\setminus\{Z_j^*\},\mathbf{p}_n,\mathbf{Y}_n\bigr) = \frac{\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\bigl\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr\}\bigr)\bigr)H\bigl(dZ_j^*\bigr)}{\int_{\mathcal{Z}}\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\bigl\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr\}\bigr)\bigr)H\bigl(dZ_j^*\bigr)}. \tag{4.6}$$

In (4.6) the elements of the sequence $\{Z_1^*,\ldots,Z_{N_n}^*\}$, conditional on $\mathbf{p}_n$ and $\mathbf{Y}_n$, are no longer independent, so they must be sampled individually, each conditional on the remaining elements.

We note that a special case of the above algorithm can be found for independent data under normal mixture models in West et al. [29] (see also [17]). Taking $\chi=0$ and $\psi=0$ in (2.2) yields $Z_t=\nu_t=\sigma_t^2$ for $t=1,\ldots,n$ with distribution $G$. Let $G$ be a Dirichlet process with parameter $\theta H$. Then (4.1) and (4.2) become
$$\frac{\theta\int_{\mathcal{Z}}\phi\bigl(Y_t\mid 0,Z\bigr)H(dZ)}{\theta\int_{\mathcal{Z}}\phi\bigl(Y_t\mid 0,Z\bigr)H(dZ)+\sum_{j=1}^{N_{n,-t}}e_{j,-t}\,\phi\bigl(Y_t\mid 0,Z^*_{j,-t}\bigr)}, \qquad \frac{e_{j,-t}\,\phi\bigl(Y_t\mid 0,Z^*_{j,-t}\bigr)}{\theta\int_{\mathcal{Z}}\phi\bigl(Y_t\mid 0,Z\bigr)H(dZ)+\sum_{j=1}^{N_{n,-t}}e_{j,-t}\,\phi\bigl(Y_t\mid 0,Z^*_{j,-t}\bigr)}. \tag{4.7}$$
Furthermore, the joint distribution of $\{Z_1^*,\ldots,Z_{N_n}^*\}$ conditional on $\mathbf{p}_n$ and $\mathbf{Y}_n$ is given by
$$\prod_{j=1}^{N_n}\prod_{t\in C_j}\phi\bigl(Y_t\mid 0,Z_j^*\bigr)H\bigl(dZ_j^*\bigr). \tag{4.8}$$
In this case $\{Z_1^*,\ldots,Z_{N_n}^*\}$ are independent in both the prior, $\pi(\mathbf{Z}^*_{N_n}\mid\mathbf{p}_n)$, and the posterior, $\pi(\mathbf{Z}^*_{N_n}\mid\mathbf{p}_n,\mathbf{Y}_n)$. However, this is not true in the more general dynamic GARCH model we consider.

Usually, the parameters of interest are both the volatility sequence $\{\sigma_1,\ldots,\sigma_n\}$ and the predictive density $\int_{\mathcal{Z}}k\bigl(Y_{n+1}\mid\mathbf{Y}_n,\mathbf{Z}_n,Z_{n+1}\bigr)G\bigl(dZ_{n+1}\bigr)$. These two sets of parameters are functions of $\mathbf{Z}_n$ under the mixture of GARCH(1,1) models, and the Bayesian estimators are taken to be the posterior expectations outlined in Section 2. That is, writing the volatility as the vector $\sigma_n(\mathbf{Z}_n)=\{\sigma_1(\mathbf{Z}_n),\ldots,\sigma_n(\mathbf{Z}_n)\}$,
$$E\bigl[\sigma_n\bigl(\mathbf{Z}_n\bigr)\mid\mathbf{Y}_n\bigr] = \sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}\sigma_n\bigl(\bigl\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr\}\bigr)\,\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\mid\mathbf{Y}_n\bigr),$$
$$E\biggl[\int_{\mathcal{Z}}k\bigl(Y_{n+1}\mid\mathbf{Y}_n,Z_{n+1},\mathbf{Z}_n\bigr)G\bigl(dZ_{n+1}\bigr)\,\Big|\,\mathbf{Y}_n\biggr] = \sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}\int_{\mathcal{Z}}k\bigl(Y_{n+1}\mid\mathbf{Y}_n,Z_{n+1},\bigl\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr\}\bigr)\times\pi\bigl(dZ_{n+1}\mid\mathbf{Z}^*_{N_n},\mathbf{p}_n\bigr)\,\pi\bigl(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\mid\mathbf{Y}_n\bigr), \tag{4.9}$$
where $\pi(d\mathbf{Z}^*_{N_n},\mathbf{p}_n\mid\mathbf{Y}_n)$ denotes the posterior distribution of $\mathbf{Z}_n=\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\}$ and $\pi(dZ_{n+1}\mid\mathbf{Z}^*_{N_n},\mathbf{p}_n)$ represents the Pólya Urn predictive density of $Z_{n+1}$ given $\mathbf{Z}_n$.
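In practice the expectations in (4.9) are approximated by averaging over the retained MCMC draws. A minimal sketch for the volatility estimate, reusing the flat-label convention from the sketches above:

```python
import numpy as np

def posterior_mean_volatility(y, draws):
    """Approximate E[sigma_t | Y_n] by averaging sigma_t over posterior draws.

    draws is a list of (labels, z_star) pairs from the sampler; sigma_t is
    rebuilt for each draw with the GARCH recursion of (2.2).
    """
    paths = []
    for labels, z_star in draws:
        prev_y2 = prev_sigma2 = 0.0
        path = []
        for t in range(len(y)):
            nu, chi, psi = z_star[labels[t]]
            sigma2 = nu + chi * prev_y2 + psi * prev_sigma2
            path.append(np.sqrt(sigma2))
            prev_y2, prev_sigma2 = y[t] ** 2, sigma2
        paths.append(path)
    return np.mean(np.array(paths), axis=0)
```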

5. Alternative Algorithmic Estimation Procedures

We now outline how the algorithms of Walker [20] and Papaspiliopoulos and Roberts [21] may be applied to our GARCH(1,1) model mixed over the PKSS process. First, consider the approach in Walker [20]. Beginning with (3.1), the weights can be written as
$$W_i = \int_{(0,W_i)}du = W_i\int_{(0,\infty)}W_i^{-1}\,\mathbb{I}_{\{(0,W_i)\}}(u)\,du = W_i\int_{(0,\infty)}U\bigl(u\mid 0,W_i\bigr)\,du, \tag{5.1}$$
where $\mathbb{I}_{\{(0,W_i)\}}(u)$ denotes the indicator function, equal to 1 if $0<u<W_i$ and 0 otherwise, and $U(u\mid 0,W_i)$ represents the uniform density on the interval $(0,W_i)$. Then, substituting (5.1), but without the integral over $u$, into (3.1), we obtain the joint measure
$$G(dz,du) = \sum_{i=1}^{\infty}W_i\,U\bigl(u\mid 0,W_i\bigr)\,du\,\delta_{Z_i}(dz). \tag{5.2}$$

Furthermore, we can take the classification variables $\{\delta_1,\ldots,\delta_n\}$ to indicate the points $\{Z_{\delta_1},\ldots,Z_{\delta_n}\}$ taken from the measure. The classification variables $\{\delta_1,\ldots,\delta_n\}$ take values from the integers $\{1,2,\ldots\}$ and assign a configuration to model (2.2), so that the expression of the likelihood is simpler, without the product of sums. Combining (5.2) with model (2.2) yields
$$L\bigl(\mathbf{Y}_n,\mathbf{u}_n,\boldsymbol{\delta}_n\mid\mathbf{Z},\mathbf{W}\bigr) = \prod_{t=1}^{n}W_{\delta_t}\,U\bigl(u_t\mid 0,W_{\delta_t}\bigr)\,\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr), \tag{5.3}$$
where $\boldsymbol{\delta}_n=\{\delta_1,\ldots,\delta_n\}$ and $\mathbf{Z}_{\boldsymbol{\delta}_t}=\{Z_{\delta_1},\ldots,Z_{\delta_t}\}$ for $t=1,\ldots,n$. Here the random jumps that build up the random measure $G$ in (5.2) can be reexpressed as $W_1\overset{d}{=}V_1$ and $W_i\overset{d}{=}V_i\prod_{j=1}^{i-1}(1-V_j)$ for $i=2,3,\ldots$. This is called the stick-breaking representation. Unfortunately, within the PKSS class this representation is currently available only for the Poisson-Dirichlet process PD$(\alpha,\theta)$, where the $V_i$ are Beta$(1-\alpha,\theta+i\alpha)$ random variables. Further development will be required to fully utilise the approach of Walker [20] for the PKSS process in general.
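The stick-breaking construction for PD$(\alpha,\theta)$ is easy to simulate, which is what makes slice-type samplers workable in that case. A minimal sketch, truncated at a finite number of atoms purely for illustration ($\alpha=0$ recovers the Dirichlet process):

```python
import numpy as np

def pd_stick_breaking_weights(alpha, theta, n_atoms, rng=None):
    """Stick-breaking weights W_i = V_i * prod_{j<i}(1 - V_j) for PD(alpha, theta),
    with V_i ~ Beta(1 - alpha, theta + i*alpha), truncated at n_atoms terms."""
    rng = np.random.default_rng() if rng is None else rng
    i = np.arange(1, n_atoms + 1)
    v = rng.beta(1.0 - alpha, theta + i * alpha)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

w = pd_stick_breaking_weights(alpha=0.5, theta=0.6769, n_atoms=50)
print(w[:5], w.sum())   # the truncated weights sum to just under 1
```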

The likelihood (5.3) can be written as
$$L\bigl(\mathbf{Y}_n,\mathbf{u}_n,\boldsymbol{\delta}_n\mid\mathbf{Z},\mathbf{V}\bigr) = \prod_{t=1}^{n}W_{\delta_t}(\mathbf{V})\,U\bigl(u_t\mid 0,W_{\delta_t}(\mathbf{V})\bigr)\,\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr), \tag{5.4}$$
and MCMC algorithms for sampling $\mathbf{u}_n$, $\boldsymbol{\delta}_n$, and $\mathbf{V}$ are straightforward and already included in Walker [20, Section 3]. To complete the algorithm for our model, it remains to sample $\mathbf{Z}$. This can be achieved by sampling $Z_j$, for all $j$ such that $\delta_t=j$ for some $t$, from
$$\frac{\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr)H\bigl(dZ_j\bigr)}{\int_{\mathcal{Z}}\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr)H\bigl(dZ_j\bigr)}, \tag{5.5}$$
and otherwise sampling $Z_j$ from $H(dZ)$ if no $\delta_t$ equals $j$. Notice that there are infinitely many $Z_j$ contained in $\mathbf{Z}$, but it is only required to sample at most $n$ of them. The number of sampled $Z_j$ varies over iterations (see Walker [20, Section 3] for details).

Papaspiliopoulos and Roberts [21] suggest an approach similar to Walker [20]. Consider the classification variables $\{\delta_1,\ldots,\delta_n\}$ and the stick-breaking representation of $\{W_1,W_2,\ldots\}$ built from $\{V_1,V_2,\ldots\}$ defined above. Then the likelihood is immediately given by
$$L\bigl(\mathbf{Y}_n,\boldsymbol{\delta}_n\mid\mathbf{Z},\mathbf{V}\bigr) = \prod_{t=1}^{n}W_{\delta_t}(\mathbf{V})\,\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr). \tag{5.6}$$
The most challenging task is the reallocation of the $n$ observations over the infinite components in (3.1), equivalent to sampling the classification variables $\{\delta_1,\ldots,\delta_n\}$ over the MCMC iterations. Here we briefly discuss this task when it involves the variance $\sigma_t^2(\mathbf{Z}_{\boldsymbol{\delta}_t})$ for our model (see also Papaspiliopoulos and Roberts [21, Section 3]). Let
$$\boldsymbol{\delta}_n(i,j) = \bigl\{\delta_1,\ldots,\delta_{i-1},j,\delta_{i+1},\ldots,\delta_n\bigr\} \tag{5.7}$$
be the vector produced from $\boldsymbol{\delta}_n$ by substituting the $i$th element by $j$. This is a proposed move from $\boldsymbol{\delta}_n$ to $\boldsymbol{\delta}_n(i,j)$, where $j=1,2,\ldots$. Notice that it is not possible to consider an infinite number of $Z_j$ directly since we only have a finite number. Instead, we can employ a Metropolis-Hastings sampler with a proposal probability mass function that requires only a finite number of $Z_j$. The probabilities for the proposed moves are given by
$$q_n(i,j) = \frac{W_j}{C\bigl(\boldsymbol{\delta}_n\bigr)}\times
\begin{cases}
\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr)\Big|_{\delta_i=j}, & \text{for } j=1,\ldots,\max\{\delta_1,\ldots,\delta_n\},\\[4pt]
M\bigl(\boldsymbol{\delta}_n\bigr), & \text{for } j>\max\{\delta_1,\ldots,\delta_n\},
\end{cases} \tag{5.8}$$
where $M(\boldsymbol{\delta}_n)=\max_{j=1,\ldots,\max\{\delta_1,\ldots,\delta_n\}}\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr)\big|_{\delta_i=j}$, and the normalising constant is
$$C\bigl(\boldsymbol{\delta}_n\bigr) = \sum_{j=1}^{\max\{\delta_1,\ldots,\delta_n\}}W_j\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr)\Big|_{\delta_i=j} + \Bigl(1-\sum_{j=1}^{\max\{\delta_1,\ldots,\delta_n\}}W_j\Bigr)M\bigl(\boldsymbol{\delta}_n\bigr).$$
Then simulate a Uniform$(0,1)$ random variable $U_i$ and select the proposed move to $\boldsymbol{\delta}_n(i,j)=\{\delta_1,\ldots,\delta_{i-1},\delta_i=j,\delta_{i+1},\ldots,\delta_n\}$ if $j$ satisfies $\sum_{\ell=0}^{j-1}q_n(i,\ell)<U_i\le\sum_{\ell=1}^{j}q_n(i,\ell)$, where $q_n(i,0)=0$. The acceptance probability for this Metropolis-Hastings step is given by
$$\alpha\bigl(\boldsymbol{\delta}_n,\boldsymbol{\delta}_n(i,j)\bigr) =
\begin{cases}
1, & \text{if } j\le\max\{\delta_1,\ldots,\delta_n\} \text{ and } \max\boldsymbol{\delta}_n(i,j)=\max\{\delta_1,\ldots,\delta_n\},\\[4pt]
\min\Biggl\{1,\dfrac{C\bigl(\boldsymbol{\delta}_n\bigr)}{C\bigl(\boldsymbol{\delta}_n(i,j)\bigr)}\,\dfrac{M\bigl(\boldsymbol{\delta}_n(i,j)\bigr)}{\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr)}\Biggr\}, & \text{if } j\le\max\{\delta_1,\ldots,\delta_n\} \text{ and } \max\boldsymbol{\delta}_n(i,j)<\max\{\delta_1,\ldots,\delta_n\},\\[4pt]
\min\Biggl\{1,\dfrac{C\bigl(\boldsymbol{\delta}_n\bigr)}{C\bigl(\boldsymbol{\delta}_n(i,j)\bigr)}\,\dfrac{\prod_{t=1}^{n}\phi\bigl(Y_t\mid 0,\sigma_t^2\bigl(\mathbf{Z}_{\boldsymbol{\delta}_t}\bigr)\bigr)\Big|_{\delta_i=j}}{M\bigl(\boldsymbol{\delta}_n\bigr)}\Biggr\}, & \text{if } j>\max\{\delta_1,\ldots,\delta_n\}.
\end{cases} \tag{5.9}$$
This completes the task of sampling $\boldsymbol{\delta}_n$. Finally, similar to Walker [20], sampling $\mathbf{Z}$ only requires (5.5): for all $j$ such that $\delta_t=j$ for some $t$, sample $Z_j$ from (5.5); otherwise sample $Z_j$ from $H(dZ)$.

6. Application to the Standard & Poor’s 500 Financial Index

The methodology is illustrated on the daily logarithmic returns of the S&P500 (Standard & Poor's 500) financial index from 03 January 2006 to 31 December 2009. The data contain a total of 1007 trading days and are available from Yahoo Finance (URL: http://finance.yahoo.com/). The log return is defined as $Y_t=100(\ln I_t-\ln I_{t-1})$, where $I_t$ is the index at time $t$. The algorithm described in Section 4 is used to estimate the nonparametric mixture models.
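For completeness, the return transformation is a one-liner; the index levels below are made-up numbers, whereas the paper uses daily S&P 500 closes downloaded from Yahoo Finance.

```python
import numpy as np

def log_returns(index_levels):
    """Daily log returns in percent: Y_t = 100 * (ln I_t - ln I_{t-1})."""
    levels = np.asarray(index_levels, dtype=float)
    return 100.0 * np.diff(np.log(levels))

print(log_returns([1268.8, 1273.5, 1265.0, 1280.2]))
```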

To compare the three different mixture models in Section 3, $G$ is taken in turn to be the Dirichlet process, the PD process, and the NGG process. In each case the mean measure is denoted $H$ and is a Gamma-Dirichlet distribution given by $H(d\nu,d\chi,d\psi)=H_1(d\nu)H_2(d\chi,d\psi)$, where $H_1(d\nu)$ is the Gamma(1,1) distribution and $H_2(d\chi,d\psi)$ is the Dirichlet(1,1,1) distribution. We set the parameters of each process such that the variance of each process evaluated over the same measure is equal. This results in $\theta=2.3538$ for the Dirichlet process, $\theta=0.6769$ and $\alpha=1/2$ for the PD process, and $\beta=1$ and $\alpha=1/2$ for the NGG process. We also compare the results to a no-mixture GARCH(1,1) model in which the parameters $(\nu,\chi,\psi)$ have the prior distribution $H_1(d\nu)H_2(d\chi,d\psi)$. We initialise the MCMC algorithm with a partition that separates all integers, that is, $\mathbf{p}_n=\{C_1=\{1\},C_2=\{2\},\ldots,C_n=\{n\}\}$. We run the MCMC algorithm for 20,000 iterations, of which the first 10,000 are discarded. The last 10,000 iterations are treated as a sample from the posterior distribution of $\{\mathbf{Z}^*_{N_n},\mathbf{p}_n\}$.

Figure 1 contains the volatility estimates (fitted data) for the no-mixture model, the Dirichlet process, the PD process, and the NGG process. The no-mixture model, the Dirichlet process, and the PD process appear to give similar results. However, it is easy to distinguish the NGG process from the other models, since its volatility estimates appear to better fit the observed spikes in the data. Figure 2 presents the predictive densities for each model. Again, the no-mixture model, the Dirichlet process, and the PD process give similar predictive density estimates in the sense that the distribution tails are all similar. However, the NGG process model estimates a predictive density with substantially wider tails than the other three models. Figures 1 and 2 suggest that the Dirichlet and PD processes allocate fewer clusters and treat the periods of increased volatility as outliers within the data. On the other hand, the NGG process allocates more clusters and incorporates the periods of increased volatility directly into its predictive density.

Finally, we evaluate the goodness of fit in terms of marginal likelihoods. The logarithms of the marginal likelihoods of the no-mixture model, the Dirichlet process model, the PD process model, and the NGG process model are −1578.085, −1492.086, −1446.275, and −1442.269, respectively. Under the marginal likelihood criterion all three mixture models outperform the no-mixture GARCH(1,1) model. Further, the NGG process outperforms the PD process, which in turn outperforms the Dirichlet process model proposed in Lau and So [9]. These results suggest that generalisations of the Dirichlet process mixture model should be further investigated for time-dependent data.
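For reference, the reported log marginal likelihoods translate directly into log Bayes factors against the no-mixture benchmark; the snippet below simply reuses the values quoted above.

```python
# Log marginal likelihoods reported in the text.
log_ml = {"no mixture": -1578.085, "Dirichlet": -1492.086,
          "PD": -1446.275, "NGG": -1442.269}

for name, value in log_ml.items():
    if name != "no mixture":
        print(f"log Bayes factor ({name} vs no mixture): {value - log_ml['no mixture']:.3f}")
```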

7. Conclusion

In this paper we have extended nonparametric mixture modelling for GARCH models to the Poisson-Kingman process. The process includes the previously applied Dirichlet process and also the Poisson-Dirichlet and Normalised Generalised Gamma processes. The Poisson-Dirichlet and Normalised Generalised Gamma processes provide richer clustering structures than the Dirichlet process and have not previously been adapted to time series data. An application to the S&P500 financial index suggests that these more general random probability measures can outperform the Dirichlet process. Finally, we developed an MCMC algorithm that is easy to implement, which we hope will facilitate further investigation into the application of nonparametric mixture models to time series data.

References

  1. R. F. Engle, "Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation," Econometrica, vol. 50, no. 4, pp. 987–1007, 1982.
  2. T. Bollerslev, "Generalized autoregressive conditional heteroskedasticity," Journal of Econometrics, vol. 31, no. 3, pp. 307–327, 1986.
  3. L. Bauwens, A. Preminger, and J. V. K. Rombouts, "Theory and inference for a Markov switching GARCH model," Econometrics Journal, vol. 13, no. 2, pp. 218–244, 2010.
  4. Z. He and J. M. Maheu, "Real time detection of structural breaks in GARCH models," Computational Statistics and Data Analysis, vol. 54, no. 11, pp. 2628–2640, 2010.
  5. R. T. Baillie and C. Morana, "Modelling long memory and structural breaks in conditional variances: an adaptive FIGARCH approach," Journal of Economic Dynamics and Control, vol. 33, no. 8, pp. 1577–1592, 2009.
  6. M. Haas, S. Mittnik, and M. Paolella, "Mixed normal conditional heteroskedasticity," Journal of Financial Econometrics, vol. 2, pp. 211–250, 2004.
  7. M. Haas, S. Mittnik, and M. Paolella, "A new approach to Markov-switching GARCH models," Journal of Financial Econometrics, vol. 2, pp. 493–530, 2004.
  8. S. Kaufmann and S. Frühwirth-Schnatter, "Bayesian analysis of switching ARCH models," Journal of Time Series Analysis, vol. 23, no. 4, pp. 425–458, 2002.
  9. J. W. Lau and M. K. P. So, "A Monte Carlo Markov chain algorithm for a class of mixture time series models," Statistics and Computing, vol. 21, no. 1, pp. 69–81, 2011.
  10. M. J. Jensen and J. M. Maheu, "Bayesian semiparametric stochastic volatility modeling," Journal of Econometrics, vol. 157, no. 2, pp. 306–316, 2010.
  11. J. E. Griffin, "Inference in infinite superpositions of non-Gaussian Ornstein-Uhlenbeck processes using Bayesian nonparametric methods," Journal of Financial Econometrics, vol. 9, no. 3, pp. 519–549, 2011.
  12. T. S. Ferguson, "A Bayesian analysis of some nonparametric problems," Annals of Statistics, vol. 1, pp. 209–230, 1973.
  13. A. Y. Lo, "On a class of Bayesian nonparametric estimates. I. Density estimates," Annals of Statistics, vol. 12, no. 1, pp. 351–357, 1984.
  14. J. Pitman and M. Yor, "The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator," Annals of Probability, vol. 25, no. 2, pp. 855–900, 1997.
  15. J. Pitman, "Poisson-Kingman partitions," in Statistics and Science: A Festschrift for Terry Speed, vol. 40 of IMS Lecture Notes-Monograph Series, pp. 1–34, Institute of Mathematical Statistics, Beachwood, Ohio, USA, 2003.
  16. J. F. C. Kingman, "Random discrete distributions," Journal of the Royal Statistical Society, Series B, vol. 37, pp. 1–22, 1975.
  17. L. F. James, "Poisson process partition calculus with applications to exchangeable models and Bayesian nonparametrics," http://arxiv.org/abs/math/0205093.
  18. L. F. James, "Bayesian Poisson process partition calculus with an application to Bayesian Lévy moving averages," Annals of Statistics, vol. 33, no. 4, pp. 1771–1799, 2005.
  19. A. Lijoi, I. Prünster, and S. G. Walker, "Bayesian nonparametric estimators derived from conditional Gibbs structures," Annals of Applied Probability, vol. 18, no. 4, pp. 1519–1547, 2008.
  20. S. G. Walker, "Sampling the Dirichlet mixture model with slices," Communications in Statistics, vol. 36, no. 1, pp. 45–54, 2007.
  21. O. Papaspiliopoulos and G. O. Roberts, "Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models," Biometrika, vol. 95, no. 1, pp. 169–186, 2008.
  22. A. Lijoi, I. Prünster, and S. G. Walker, "Investigating nonparametric priors with Gibbs structure," Statistica Sinica, vol. 18, no. 4, pp. 1653–1668, 2008.
  23. P. Embrechts, C. Klüppelberg, and T. Mikosch, Modelling Extremal Events for Insurance and Finance, Springer, 1997.
  24. Z. Zhang, W. K. Li, and K. C. Yuen, "On a mixture GARCH time-series model," Journal of Time Series Analysis, vol. 27, no. 4, pp. 577–597, 2006.
  25. W. Vervaat, "On a stochastic difference equation and a representation of nonnegative infinitely divisible random variables," Advances in Applied Probability, vol. 11, pp. 750–783, 1979.
  26. A. Brandt, "The stochastic equation Y_{n+1} = A_n Y_n + B_n with stationary coefficients," Advances in Applied Probability, vol. 18, no. 1, pp. 211–220, 1986.
  27. J. K. Ghosh and R. V. Ramamoorthi, Bayesian Nonparametrics, Springer Series in Statistics, Springer, New York, NY, USA, 2003.
  28. J. Pitman, Combinatorial Stochastic Processes, vol. 1875 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 2006. Lectures from the 32nd Summer School on Probability Theory held in Saint-Flour, July 7–24, 2002; with a foreword by Jean Picard.
  29. M. West, P. Müller, and M. D. Escobar, "Hierarchical priors and mixture models, with applications in regression and density estimation," in A Tribute to D. V. Lindley, pp. 363–386, John Wiley & Sons, New York, NY, USA, 1994.

Copyright © 2012 John W. Lau and Ed Cripps. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
