Abstract

Traditional GARCH models describe volatility levels that evolve smoothly over time, generated by a single GARCH regime. However, nonstationary time series data may exhibit abrupt changes in volatility, suggesting changes in the underlying GARCH regimes. Further, the number and times of regime changes are not always obvious. This article outlines a nonparametric mixture of GARCH models that is able to estimate the number and times of volatility regime changes by mixing over the Poisson-Kingman process. The process generalises the Dirichlet process typically used in nonparametric models for time-dependent data, provides a richer clustering structure, and its application to time series data is novel. Inference is Bayesian, and a Markov chain Monte Carlo algorithm to explore the posterior distribution is described. The methodology is illustrated on the Standard and Poor's 500 financial index.

1. Introduction

Generalised autoregressive conditional heteroscedastic (GARCH) models estimate time-varying fluctuations around the mean level of a time series, known as the volatility of the series [1, 2]. The standard GARCH model specifies the volatility at time $t$ as a linear combination of previous volatilities and squared residual terms. The persistence is assumed constant for all $t$, resulting in smooth transitions of volatility levels. However, many nonstationary time series exhibit abrupt changes in volatility, suggesting fluctuating levels of volatility persistence. In this case the GARCH parameters undergo regime changes over time. If the maximum number of potential regimes is known, Markov-switching GARCH models are an appealing option [3–8]. However, often the number of volatility regimes is not known and can be difficult to preselect. In this case, Bayesian nonparametric mixture models are attractive because they allow the data to determine the number of volatility regimes or mixture components. For example, nonparametric mixing over the Dirichlet process has recently been described by Lau and So [9] for GARCH(1,1) models, Jensen and Maheu [10] for stochastic volatility models, and Griffin [11] for Ornstein-Uhlenbeck processes.

The Dirichlet process [12] is the most widely applied random probability measure in nonparametric mixture models and, within a hierarchical framework, was introduced by Lo [13] for independent data. Lau and So [9] extend the work of Lo [13] to time-dependent data, where time-varying GARCH parameters are mixed over the Dirichlet process. The additional flexibility of this approach allows a range of GARCH regimes, from all observations generated by a single GARCH model to each observation in the series generated by a unique set of GARCH parameters. Lau and So [9] conclude with a discussion of the possibility of extending their method to alternative random probability measures that provide greater clustering flexibility than the Dirichlet process. We continue this discussion by outlining a novel method for a class of GARCH mixture models mixed over the Poisson-Kingman process [14, 15] derived from the stable subordinator (known henceforth as PKSS). To illustrate the richer clustering mechanisms of the PKSS process, we describe three of its special cases—the Dirichlet process [12], the Poisson-Dirichlet (PD) process [14, 16], and the Normalised Generalised Gamma (NGG) process [17, 18].

Theoretical developments and recent applications of the PKSS process are discussed in Lijoi et al. [19]. However, we note that the PD and NGG processes have yet to be developed for volatility estimation, or indeed time series applications in general, and in this sense the work in this paper is novel. Although the Dirichlet process has now been used extensively in applications, the implementation of more general nonparametric mixture models for applied work is not always obvious. We therefore describe three Markov chain Monte Carlo (MCMC) algorithms. First, we develop a weighted Chinese restaurant Gibbs-type process for partition sampling to explore the posterior distribution. The basis of this algorithm is developed for time-dependent data in Lau and So [9], and we extend it to allow for the PKSS process. We also note recent developments in the sampling of Bayesian nonparametric models in Walker [20] and Papaspiliopoulos and Roberts [21] and describe how these algorithms can be constructed to estimate our model.

The methodology is illustrated through volatility and predictive density estimation of a GARCH(1,1) model applied to the Standard and Poor’s 500 financial index from 2006 to 2009. Results are compared between a no-mixture model and nonparametric mixtures over the Dirichlet, PD, and NGG processes. Under the criterion of marginal likelihood the NGG process performs best. Also, the PD and NGG processes outperform the previously studied Dirichlet process, which in turn outperforms the no-mixture model. The results suggest that alternatives to the Dirichlet process should be considered for applications of nonparametric mixture models to time-dependent data.

The paper proceeds as follows. Section 2 presents a Bayesian mixture of GARCH models over an unknown mixing distribution, outlines a convenient Bayesian estimator based on quadratic loss, and describes some of the time series properties of our model. Section 3 discusses the class of random probability measures we consider as mixing distributions, details the clustering mechanisms of the three special cases mentioned above via the Pólya Urn representation, and describes the consequences for the posterior distribution of the partitions under the PKSS process. Our MCMC algorithm for sampling the posterior distribution is presented in Section 4, and the alternative MCMC algorithms of Walker [20] and Papaspiliopoulos and Roberts [21] are presented in Section 5. Section 6 describes the application, and Section 7 concludes the paper.

2. The Mixture Model

Let $Y_t$ be the observed variable at time $t$, let $\mathbf{Y}_{t-1}=\{Y_1,\ldots,Y_{t-1}\}$ be the observations from time 1 to time $t-1$, and let $Y_t=0$ for $t\le 0$. The GARCH(1,1) model specifies
$$Y_t \mid \mathbf{Y}_{t-1},(\nu,\chi,\psi)\sim N\!\left(0,\sigma_t^2\right),\qquad \sigma_t^2=\nu+\chi Y_{t-1}^2+\psi\sigma_{t-1}^2, \quad (2.1)$$
where $\nu>0$, $\chi\ge 0$, $\psi\ge 0$, and $\sigma_t^2=0$ for $t\le 0$. In (2.1) the GARCH parameters $(\nu,\chi,\psi)$ are not time varying, implying that volatility persistence is constant over time with smooth transitions of volatility levels. To allow abrupt changes to volatilities, we extend (2.1) by writing $Z_t=\{\nu_t,\chi_t,\psi_t\}$, $\nu_t>0$, $\chi_t\ge 0$, $\psi_t\ge 0$ for $t=1,\ldots,n$, and $\mathbf{Z}_t=\{Z_1,\ldots,Z_t\}$ as the joint latent variables from time 1 to time $t$; that is, the model is now a dynamic GARCH model, with each observation potentially generated by its own set of GARCH parameters:
$$Y_t \mid \mathbf{Y}_{t-1},\mathbf{Z}_t\sim N\!\left(0,\sigma_t^2\right),\qquad \sigma_t^2=\nu_t+\chi_t Y_{t-1}^2+\psi_t\sigma_{t-1}^2. \quad (2.2)$$
Note that in model (2.2) the data control the maximum potential number of GARCH regimes, namely the sample size $n$. In contrast, finite switching models preallocate a maximum number of regimes, typically much smaller than the number of observations. As the potential number of regimes grows, estimation of the associated transition probabilities and GARCH parameters in finite switching models becomes prohibitive. However, assuming that the latent variables $\mathbf{Z}_n=\{Z_1,\ldots,Z_n\}$ are independent of each other and completing the hierarchy by modelling the GARCH parameters contained in $\mathbf{Z}_n$ with an unknown mixing distribution $G$ with law $\mathcal{P}$, the model becomes manageable, that is,
$$Z_t \mid G \overset{\text{iid}}{\sim} G(d\nu,d\chi,d\psi),\qquad G\sim\mathcal{P}, \quad (2.3)$$
with $\mathbf{Z}_n$ and the mixing distribution $G$ parameters that we may estimate. Depending on the posterior distribution of the clustering structure associated with the mixing distribution, the results may suggest anything from $Z_t=Z$ for $t=1,2,\ldots,n$ (a single-regime GARCH model) up to a unique $Z_t$ for each $t$, indicating a separate GARCH regime for each time point. This illustrates the flexibility of the model.
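To make the data-generating mechanism of (2.2)-(2.3) concrete, the short sketch below (Python; purely illustrative, with hypothetical parameter values and a simple two-atom discrete choice of $G$ that is not part of the model specification above) simulates a series in which every observation carries its own GARCH triple $(\nu_t,\chi_t,\psi_t)$.

```python
import numpy as np

def simulate_mixture_garch(n, atoms, weights, seed=0):
    """Simulate Y_1,...,Y_n from the dynamic GARCH(1,1) model (2.2), with
    Z_t = (nu_t, chi_t, psi_t) drawn i.i.d. from a discrete mixing distribution G
    supported on `atoms` with probabilities `weights` (equation (2.3))."""
    rng = np.random.default_rng(seed)
    labels = rng.choice(len(atoms), size=n, p=weights)      # Z_t ~ G
    y = np.zeros(n)
    sigma2_prev, y_prev = 0.0, 0.0                          # sigma_0^2 = Y_0 = 0
    for t in range(n):
        nu, chi, psi = atoms[labels[t]]
        sigma2 = nu + chi * y_prev**2 + psi * sigma2_prev   # equation (2.2)
        y[t] = rng.normal(0.0, np.sqrt(sigma2))
        sigma2_prev, y_prev = sigma2, y[t]
    return y, labels

# two hypothetical volatility regimes: a calm regime and a turbulent one
atoms = [(0.05, 0.05, 0.90), (0.50, 0.20, 0.70)]
y, regimes = simulate_mixture_garch(1000, atoms, weights=[0.9, 0.1])
```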

We write $s(\mathbf{Z}_n,G)$ for a positive integrable function of the latent variables $\mathbf{Z}_n$ and the mixing distribution $G$, representing various quantities that may be of interest for inference. Under quadratic loss the Bayesian estimator is the posterior expectation $E[s(\mathbf{Z}_n,G)\mid\mathbf{Y}_n]$. For our model this is an appealing estimator because it does not require the posterior of $G$ but only the posterior distribution of the sequence $\{Z_1,\ldots,Z_n\}$; that is,
$$E\!\left[s\!\left(\mathbf{Z}_n,G\right)\mid\mathbf{Y}_n\right]=\int_{\mathcal{Z}^n} h\!\left(\mathbf{Y}_n,\mathbf{Z}_n\right)\pi\!\left(d\mathbf{Z}_n\mid\mathbf{Y}_n\right), \quad (2.4)$$
because
$$h\!\left(\mathbf{Y}_n,\mathbf{Z}_n\right)=\int_{\mathcal{G}} s\!\left(\mathbf{Z}_n,G\right)\pi\!\left(dG\mid\mathbf{Y}_n,\mathbf{Z}_n\right), \quad (2.5)$$
where $\pi(dG\mid\mathbf{Y}_n,\mathbf{Z}_n)$ represents the posterior law of the random probability measure $G$, and $\pi(d\mathbf{Z}_n\mid\mathbf{Y}_n)$ is the posterior distribution of the sequence $\mathbf{Z}_n=\{Z_1,\ldots,Z_n\}$.

In Lau and So [9] the unknown mixing distribution for the GARCH parameters, $G$, is taken to be the Dirichlet process. This paper combines the theoretical groundwork of Lijoi et al. [19, 22] with Lau and So [9] by allowing $G$ to be the PKSS process. The result is a nonparametric GARCH model which contains (among others) the Dirichlet process typically used in time series, as well as the PD and NGG processes, as special cases.

Understanding conditions for stationarity of a time series model is fundamental for statistical inference. Since our model is specified with zero mean over time, we provide a necessary and sufficient condition for the existence of a second-order stationary solution for the infinite mixture of GARCH(1,1) models. The derivation closely follows Embrechts et al. [23] and Zhang et al. [24], and we state the conditions without proof. Letting $\epsilon_t$ be a standard normal random variable and replacing $Y_{t-1}^2$ by $\sigma_{t-1}^2\epsilon_{t-1}^2$, then, conditioned on $Y_i$ for $i=1,\ldots,t-1$ and $\{\nu_i,\chi_i,\psi_i\}$ for $i=1,\ldots,t$, $\sigma_t^2$ in (2.2) becomes
$$\sigma_t^2=\nu_t+\chi_t Y_{t-1}^2+\psi_t\sigma_{t-1}^2=\nu_t+\left(\chi_t\epsilon_{t-1}^2+\psi_t\right)\sigma_{t-1}^2. \quad (2.6)$$
Equation (2.6) is a well-known univariate stochastic difference equation, expressed as
$$X_t=B_t+A_t X_{t-1}, \quad (2.7)$$
where $X_t=\sigma_t^2$, $A_t=\chi_t\epsilon_{t-1}^2+\psi_t$, and $B_t=\nu_t$. The stationarity of (2.7) implies the second-order stationarity of (2.2); that is, $X_t\overset{d}{\to}X$ as $t\to\infty$ for some random variable $X$, and $X$ satisfies $X\overset{d}{=}B+AX$, where the pair $(A_t,B_t)\overset{d}{\to}(A,B)$ as $t\to\infty$ for some random variable pair $(A,B)$. A stationary solution of (2.7) exists if $E[\ln|A_t|]<0$ and $E[\ln^{+}|B_t|]<\infty$, where $\ln^{+}x=\ln[\max\{x,1\}]$, as given in Embrechts et al. [23, Section 8.4, pages 454–481], Zhang et al. [24, Theorems 2 and 3], Vervaat [25], and Brandt [26]. So the conditions for stationarity in our model are
$$\int_{(0,\infty)}\ln^{+}\nu\,H_1(d\nu)<\infty,\qquad \int_{(0,\infty)^2}E\!\left[\ln\!\left(\chi\epsilon^2+\psi\right)\right]H_2(d\chi,d\psi)<0, \quad (2.8)$$
where the expectation in the second condition is taken only over $\epsilon$, a standard normal random variable, and $H_1(d\nu)=\int_{(0,\infty)^2}H(d\nu,d\chi,d\psi)$ and $H_2(d\chi,d\psi)=\int_{(0,\infty)}H(d\nu,d\chi,d\psi)$ are the marginal measures of $H=E[G]$, the mean measure of $G$.
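The two conditions in (2.8) can be checked numerically by Monte Carlo. The sketch below does this for the mean measure used later in Section 6 ($H_1$ a Gamma(1,1) distribution for $\nu$ and $H_2$ a Dirichlet(1,1,1) distribution for $(\chi,\psi)$); the sample size and random seed are arbitrary choices of the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 200_000

# Draws from H_1 and H_2 as in Section 6: nu ~ Gamma(1,1),
# (chi, psi, 1 - chi - psi) ~ Dirichlet(1,1,1); eps is a standard normal.
nu = rng.gamma(1.0, 1.0, size=m)
chi, psi, _ = rng.dirichlet([1.0, 1.0, 1.0], size=m).T
eps = rng.standard_normal(m)

cond1 = np.mean(np.log(np.maximum(nu, 1.0)))     # int ln+ nu H_1(d nu), must be finite
cond2 = np.mean(np.log(chi * eps**2 + psi))      # int E[ln(chi eps^2 + psi)] H_2, must be < 0
print(f"first condition of (2.8):  {cond1:.3f} (finite)")
print(f"second condition of (2.8): {cond2:.3f} (negative, so the condition holds)")
```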

Now consider the first two conditional moments of model (2.2). Obviously, the first conditional moment is zero, and the second conditional moment is identical to $\sigma_t^2$ in (2.6). The distinguishing feature of model (2.2) is that the parameters change over time and have distribution $G$. Considering $\sigma_t^2$ as a scale of the model results in a scale mixture model over time. From the representation in (2.6), $\sigma_t^2$ can be rewritten as
$$\sigma_t^2=\nu_t+\sum_{j=1}^{t-1}\nu_{t-j}\prod_{i=t-j+1}^{t}\left(\chi_i\epsilon_{i-1}^2+\psi_i\right). \quad (2.9)$$

The unconditional second moment could be derived according to this representation by marginalising over all the random variates. Also, 𝜎2𝑡 in (2.9) can be viewed as a weighted sum of the random sequence {𝜈𝑡,,𝜈1}, and the random weights decay to zero at a polynomial rate, as long as the model is stationary. In fact, the rate could be irregular over time, and this is a substantial difference between the mixture of GARCH models and the traditional GARCH models.

Finally, one might also be interested in the connection between models such as (2.2), with parameters distributed according to (2.3), and those with the Markov-switching characteristic that results in Markov-switching GARCH models [6, 7]. Markov-switching GARCH models have a similar structure to (2.3), and we can replace (2.3) by
$$Z_t\mid\{S_t=i\}=\left(\nu_i,\chi_i,\psi_i\right),\qquad S_t\mid S_{t-1}\sim \eta_{ij}=P\!\left(S_t=i\mid S_{t-1}=j\right),\quad \text{for } i,j=1,\ldots,K, \quad (2.10)$$
where $S_t$ denotes the state variable, usually latent and unobserved. Marginalising the current state variable $S_t$ in (2.10) yields the conditional distribution of $Z_t$ given the previous state $S_{t-1}$,
$$Z_t\mid\{S_{t-1}=j\}\sim\sum_{i=1}^{K}\eta_{ij}\,\delta_{(\nu_i,\chi_i,\psi_i)}. \quad (2.11)$$
So (2.11) can be viewed as a random probability measure, but one with a finite number of components and dependent on the previous state $S_{t-1}$.

3. The Random Probability Measures

We now describe the PKSS process and detail the Dirichlet, PD, and NGG processes to illustrate how the more general PKSS process allows for richer clustering mechanisms. Let $\mathcal{Z}$ be a complete and separable metric space and $\mathcal{B}(\mathcal{Z})$ the corresponding Borel $\sigma$-field. Let $G\in\mathcal{G}$ be a probability measure on the space $(\mathcal{Z},\mathcal{B}(\mathcal{Z}))$, where $\mathcal{G}$ is the set of probability measures equipped with a suitable $\sigma$-field $\mathcal{B}(\mathcal{G})$ and the corresponding probability measure $\mathcal{P}$ (see Chapter 2 of Ghosh and Ramamoorthi [27] for more details). The random probability measure $G$ is sampled from the law $\mathcal{P}$ and operates as the unknown mixing distribution of the GARCH parameters in (2.2).

All random probability measures within the class of PKSS processes are almost surely discrete and can be represented as
$$G(A)=\sum_{i=1}^{\infty}W_i\,\delta_{Z_i}(A)\quad\text{for } A\in\mathcal{B}(\mathcal{Z}), \quad (3.1)$$
where $\delta_{Z_i}$ denotes the Dirac measure concentrated at $Z_i$, the sequence of random variables $\{Z_1,Z_2,\ldots\}$ is drawn from a nonatomic probability measure $H$, and the sequence of random weights $\{W_1,W_2,\ldots\}$ sums to 1 [28]. Also, the mean measure of the process $G$ with respect to $\mathcal{P}$ is $H$:
$$E[G(A)]=H(A)\quad\text{for } A\in\mathcal{B}(\mathcal{Z}). \quad (3.2)$$

A common characterisation of (3.1) is the well-known Pólya Urn prediction distribution described in Pitman [28]. For the purposes of this paper the Pólya Urn warrants further discussion for two reasons. First, it is important for developing the MCMC algorithm used to explore the posterior distribution in Section 4. Second, it explicitly details how the PKSS process generalises the Dirichlet, PD, and NGG processes and how the different cluster tuning mechanisms operate.

Let $\{Z_1,\ldots,Z_r\}$ be a sequence of size $r$ drawn from $G$, where $r$ is a positive integer, and let $\mathbf{p}_r$ denote a partition of the integers $\{1,\ldots,r\}$. A partition $\mathbf{p}_r=\{C_{r,1},\ldots,C_{r,N_r}\}$ of size $N_r$ contains disjoint clusters $C_{r,j}$ of size $e_{r,j}$, associated with the distinct values $\{Z_1^{*},\ldots,Z_{N_r}^{*}\}$. The Pólya Urn prediction distribution for the PKSS process can now be written as
$$\pi\!\left(dZ_{i+1}\mid\mathbf{Z}_i\right)=\frac{V_{i+1,N_i+1}}{V_{i,N_i}}H\!\left(dZ_{i+1}\right)+\frac{V_{i+1,N_i}}{V_{i,N_i}}\sum_{j=1}^{N_i}\left(e_{i,j}-\alpha\right)\delta_{Z_j^{*}}\!\left(dZ_{i+1}\right), \quad (3.3)$$
for $i=1,\ldots,r-1$, with $\pi(dZ_1)=H(dZ_1)$, $V_{1,1}=1$, and
$$V_{i,N_i}=\left(i-N_i\alpha\right)V_{i+1,N_i}+V_{i+1,N_i+1}. \quad (3.4)$$

The Pólya Urn prediction distribution states that $Z_{i+1}$ takes a new value drawn from $H$ with mass $V_{i+1,N_i+1}/V_{i,N_i}$, and one of the existing values $\{Z_1^{*},\ldots,Z_{N_i}^{*}\}$ with mass $(i-N_i\alpha)V_{i+1,N_i}/V_{i,N_i}$. This yields the joint prior distribution
$$\pi\!\left(dZ_1,dZ_2,\ldots,dZ_n\right)=\pi\!\left(dZ_1\right)\prod_{i=1}^{n-1}\pi\!\left(dZ_{i+1}\mid Z_1,\ldots,Z_i\right), \quad (3.5)$$
as a product of easily managed conditional densities, which is useful for our MCMC scheme below.

The PKSS process reduces to the Dirichlet, PD, or NGG process in (3.3) as follows.
(1) Taking $0\le\alpha<1$ and
$$V_{i,N_i}=\frac{\Gamma(\theta)}{\Gamma(i+\theta)}\prod_{j=1}^{N_i}\left(\theta+(j-1)\alpha\right), \quad (3.6)$$
for $\theta>-\alpha$ results in the PD process.
(2) Setting $\alpha=0$, so that
$$V_{i,N_i}=\frac{\Gamma(\theta)}{\Gamma(i+\theta)}\theta^{N_i}, \quad (3.7)$$
the PD process becomes the Dirichlet process.
(3) The NGG process takes $0\le\alpha<1$ and
$$V_{i,N_i}=\frac{\alpha^{N_i-1}e^{\beta}}{\Gamma(i)}\sum_{k=0}^{i-1}\binom{i-1}{k}(-1)^{k}\beta^{k/\alpha}\,\Gamma\!\left(N_i-\frac{k}{\alpha},\beta\right), \quad (3.8)$$
for $\beta>0$.

In the above, $\Gamma(\cdot)$ is the complete Gamma function and $\Gamma(\cdot,\cdot)$ is the incomplete Gamma function. Examining the predictive distribution (3.3), the ratios $V_{i+1,N_i+1}/V_{i,N_i}$ and $V_{i+1,N_i}/V_{i,N_i}$ reveal the difference between the Dirichlet process and the other processes. Substituting the values of $V_{i,N_i}$ into the allocation masses shows that, for the Dirichlet process, these ratios do not depend on the number of existing clusters $N_i$: probability is assigned to a new value independently of the number of existing clusters, and the partition size grows at a constant rate. In contrast, the PD and NGG processes assign probability to a new value in a way that depends on the number of existing clusters. The comparison of these three special cases illustrates the richer clustering mechanisms of the PKSS process over the Dirichlet process. Furthermore, the PKSS process contains many other random measures whose clustering behaviour would be of interest for further investigation.
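A small numerical sketch makes the contrast concrete. Using the forms (3.6) and (3.7) (the NGG weights (3.8) involve the incomplete-Gamma sum and are omitted here), the code below evaluates the new-cluster mass $V_{i+1,N_i+1}/V_{i,N_i}$ of (3.3) for several values of $N_i$, with the parameter values later used in Section 6; for the Dirichlet process the mass is constant in $N_i$, while for the PD process it grows with $N_i$.

```python
import numpy as np
from scipy.special import gammaln

def logV_dp(n, k, theta):
    # equation (3.7): V_{n,k} = Gamma(theta)/Gamma(n+theta) * theta^k
    return gammaln(theta) - gammaln(n + theta) + k * np.log(theta)

def logV_pd(n, k, theta, alpha):
    # equation (3.6): V_{n,k} = Gamma(theta)/Gamma(n+theta) * prod_{j=1}^{k}(theta + (j-1)alpha)
    return gammaln(theta) - gammaln(n + theta) + np.log(theta + alpha * np.arange(k)).sum()

def new_cluster_mass(logV, i, N_i, *pars):
    # weight attached to a fresh draw from H in the Polya urn (3.3)
    return np.exp(logV(i + 1, N_i + 1, *pars) - logV(i, N_i, *pars))

for N_i in (1, 5, 20):
    dp = new_cluster_mass(logV_dp, 100, N_i, 2.3538)        # Dirichlet, theta as in Section 6
    pd = new_cluster_mass(logV_pd, 100, N_i, 0.6769, 0.5)   # PD, (theta, alpha) as in Section 6
    print(f"i=100, N_i={N_i:2d}:  Dirichlet {dp:.4f}   PD {pd:.4f}")
```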

Turning to the distribution of partitions, Pitman [28] shows that the joint prior distribution of the sequence $\{Z_1,\ldots,Z_r\}$ is
$$\pi\!\left(dZ_1,\ldots,dZ_r\right)=V_{r,N_r}\prod_{j=1}^{N_r}\frac{\Gamma\!\left(e_{r,j}-\alpha\right)}{\Gamma(1-\alpha)}H\!\left(dZ_j^{*}\right). \quad (3.9)$$

Notice that the joint distribution depends on the partition $\mathbf{p}_r$ of the $r$ integers $\{1,\ldots,r\}$, and we can decompose (3.9) as $\pi(dZ_1^{*},\ldots,dZ_{N_r}^{*},\mathbf{p}_r)=\pi(dZ_1^{*},\ldots,dZ_{N_r}^{*}\mid\mathbf{p}_r)\,\pi(\mathbf{p}_r)$. The distribution of the partition, $\pi(\mathbf{p}_r)$, is
$$\pi\!\left(\mathbf{p}_r\right)=V_{r,N_r}\prod_{j=1}^{N_r}\frac{\Gamma\!\left(e_{r,j}-\alpha\right)}{\Gamma(1-\alpha)} \quad (3.10)$$
and is known as the Exchangeable Partition Probability Function. For many nonparametric models this representation also aids MCMC construction by expressing the posterior distribution in the form of an Exchangeable Partition Probability Function. To do so it is necessary to obtain the posterior distribution of the partition, $\pi(\mathbf{p}_n\mid\mathbf{Y}_n)$, analytically; we could then generate $\mathbf{p}_n$ and approximate the posterior expectation. However, this is not possible in general, so we consider the joint distribution of $\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\}$ instead. We write the posterior expectation of $s(\mathbf{Z}_n,G)$ as a marginalisation over the joint posterior distribution of $\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\}$:
$$E\!\left[s\!\left(\mathbf{Z}_n,G\right)\mid\mathbf{Y}_n\right]=\sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}} h\!\left(\mathbf{Y}_n,\left\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right\}\right)\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\mid\mathbf{Y}_n\right). \quad (3.11)$$
Here the joint posterior distribution of $(\mathbf{Z}_{N_n}^{*},\mathbf{p}_n)$ is given by
$$\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\mid\mathbf{Y}_n\right)=\frac{\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\left\{\mathbf{Z}_{N_t}^{*},\mathbf{p}_t\right\}\right)\right)\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)}{\sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\left\{\mathbf{Z}_{N_t}^{*},\mathbf{p}_t\right\}\right)\right)\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)}, \quad (3.12)$$
where $\phi(x\mid a,b)$ denotes a normal density with mean $a$ and variance $b$ evaluated at $x$. The variance $\sigma_t^2(\{\mathbf{Z}_{N_t}^{*},\mathbf{p}_t\})$ is identical to $\sigma_t^2$; the notation emphasises that $\sigma_t^2$ is a function of $\{Z_1,Z_2,\ldots,Z_t\}$, here represented by $\{\mathbf{Z}_{N_t}^{*},\mathbf{p}_t\}$. This representation leads to the development of the MCMC algorithm in the next section. For the sake of simplicity, we use the following equivalent expressions for the variance:
$$\sigma_t^2\!\left(\left\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right\}\right)=\sigma_t^2\!\left(\mathbf{Z}_n\right)=\sigma_t^2\!\left(\left\{\mathbf{Z}_{N_t}^{*},\mathbf{p}_t\right\}\right)=\sigma_t^2\!\left(\mathbf{Z}_t\right)=\sigma_t^2. \quad (3.13)$$

We emphasise that we can always express $\mathbf{Z}_n$ by two elements, namely a partition and distinct values. In this case $\mathbf{p}_n$ is a partition of the integers $\{1,\ldots,n\}$, which are the indices of $\mathbf{Z}_n$, and $\mathbf{Z}_{N_n}^{*}=\{Z_1^{*},\ldots,Z_{N_n}^{*}\}$ represents the distinct values of $\mathbf{Z}_n$. The partition $\mathbf{p}_n$ maps the distinct values from $\mathbf{Z}_n$ to $\mathbf{Z}_{N_n}^{*}$ or vice versa. As a result, we have the following equivalent representations:
$$\mathbf{Z}_n=\left\{Z_1,\ldots,Z_n\right\}=\left\{Z_1^{*},\ldots,Z_{N_n}^{*},\mathbf{p}_n\right\}=\left\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right\}. \quad (3.14)$$

In time series analysis we usually consider the first $t$ items, $\mathbf{Z}_t=\{Z_1,\ldots,Z_t\}$, the corresponding partition, $\mathbf{p}_t$, and distinct values, $\mathbf{Z}_{N_t}^{*}$, such that
$$\mathbf{Z}_t=\left\{\mathbf{Z}_{N_t}^{*},\mathbf{p}_t\right\}. \quad (3.15)$$
Here $\mathbf{Z}_t$ contains the first $t$ elements of $\mathbf{Z}_n=\{Z_1,\ldots,Z_n\}$, and adding the items $\{t+1,\ldots,n\}$ to $\mathbf{p}_t$ yields $\mathbf{p}_n$, according to $\mathbf{Z}_{N_t}^{*}$ and the distinct values of $\{Z_{t+1},\ldots,Z_n\}$. Combining $\mathbf{Z}_{N_t}^{*}$ with the distinct values of $\{Z_{t+1},\ldots,Z_n\}$ gives $\mathbf{Z}_{N_n}^{*}$, providing the connection between $\{\mathbf{Z}_{N_t}^{*},\mathbf{p}_t\}$ and $\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\}$. To simplify the likelihood expression and the sampling algorithm, we replace $\sigma_t^2(\mathbf{Z}_t)=\sigma_t^2(\{\mathbf{Z}_{N_t}^{*},\mathbf{p}_t\})$ by $\sigma_t^2(\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\})$, since the subscript $t$ in $\sigma_t^2$ already indicates that only the first $t$ items of $\mathbf{Z}_n$ are involved. We then have a more accessible representation of the likelihood function,
$$L\!\left(\mathbf{Y}_n\mid\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)=\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\left\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right\}\right)\right), \quad (3.16)$$
and (3.12) becomes
$$\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\mid\mathbf{Y}_n\right)=\frac{L\!\left(\mathbf{Y}_n\mid\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)}{\sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}L\!\left(\mathbf{Y}_n\mid\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)}. \quad (3.17)$$
We are now equipped to describe the MCMC algorithm.

4. The Algorithm for Sampling the Partitions and Distinct Values

Our Markov chain Monte Carlo (MCMC) sampling procedure alternately generates distinct values and partitions from the posterior distribution $\pi(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\mid\mathbf{Y}_n)$. For $S$ iterations our MCMC algorithm is as follows.
(1) Initialise $\mathbf{Z}_{N_n}^{*}=(\mathbf{Z}_{N_n}^{*})^{[0]}$.
For $s=1,2,\ldots,S$:
(2) Generate $\mathbf{p}_n^{[s]}$ from $\pi(\mathbf{p}_n\mid(\mathbf{Z}_{N_n}^{*})^{[s-1]},\mathbf{Y}_n)$.
(3) Generate $(\mathbf{Z}_{N_n}^{*})^{[s]}$ from $\pi(d\mathbf{Z}_{N_n}^{*}\mid\mathbf{p}_n^{[s]},\mathbf{Y}_n)$.
End.
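In outline, the sampler is a simple two-block Gibbs scheme; the skeleton below shows the control flow only, with the two conditional draws left as problem-specific callables whose construction is the subject of the remainder of this section.

```python
import numpy as np

def run_sampler(y, S, init_distinct, sample_partition, sample_distinct, seed=0):
    """Skeleton of the partition / distinct-value sampler of Section 4.

    `sample_partition(y, distinct, rng)` draws p_n given Z*_{N_n} and Y_n (step 2);
    `sample_distinct(y, partition, distinct, rng)` draws Z*_{N_n} given p_n and Y_n (step 3).
    Both callables are model specific and only sketched here."""
    rng = np.random.default_rng(seed)
    distinct = init_distinct                                     # (Z*_{N_n})^[0]
    draws = []
    for s in range(S):
        partition = sample_partition(y, distinct, rng)           # step (2)
        distinct = sample_distinct(y, partition, distinct, rng)  # step (3)
        draws.append((partition, distinct))
    return draws
```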

To obtain our estimates we use the weighted Chinese restaurant Gibbs-type process introduced in Lau and So [9] for time series models mixed over the Dirichlet process. We extend this scheme to allow for the more general PKSS process. In what follows, the extension from the Dirichlet to the PKSS process lies in the weights of the Pólya Urn predictive distribution in (3.3).

The main idea of this algorithm is the "leave one out" principle, which removes item $t$ from the partition and then replaces it. This updates both $\mathbf{Z}_{N_n}^{*}$ and $\mathbf{p}_n$. The idea has been applied to partition sampling in many Bayesian nonparametric models based on the Dirichlet process (see [17] for a review). The strategy is a simple evaluation of the product of the likelihood function (3.16) and the Pólya Urn distribution (3.3) of $Z_t$, conditioned on the remaining parameters, yielding a joint updating distribution of $\mathbf{Z}_{N_n}^{*}$ and $\mathbf{p}_n$. We now describe the distributions $\pi(\mathbf{p}_n\mid\mathbf{Z}_{N_n}^{*},\mathbf{Y}_n)$ and $\pi(d\mathbf{Z}_{N_n}^{*}\mid\mathbf{p}_n,\mathbf{Y}_n)$ used in the sampling scheme.

Define $\mathbf{p}_{n,-t}$ to be the partition $\mathbf{p}_n$ with item $t$ removed. Then $\mathbf{p}_{n,-t}=\{C_{1,-t},C_{2,-t},\ldots,C_{N_{n,-t},-t}\}$ with corresponding distinct values $\mathbf{Z}_{N_{n,-t}}^{*}=\{Z_{1,-t}^{*},Z_{2,-t}^{*},\ldots,Z_{N_{n,-t},-t}^{*}\}$. To generate from $\pi(\mathbf{p}_n\mid\mathbf{Z}_{N_n}^{*},\mathbf{Y}_n)$, for each $t=1,\ldots,n$ the item $t$ is assigned either to a new cluster $C_{N_{n,-t}+1,-t}$ (that is, a cluster that is empty before $t$ is assigned) with probability
$$\frac{\pi\!\left(\tilde{\mathbf{p}}_{N_{n,-t}+1}\right)\int_{\mathcal{Z}}L\!\left(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\right)H\!\left(dZ_{N_{n,-t}+1,-t}^{*}\right)}{\pi\!\left(\tilde{\mathbf{p}}_{N_{n,-t}+1}\right)\int_{\mathcal{Z}}L\!\left(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\right)H\!\left(dZ_{N_{n,-t}+1,-t}^{*}\right)+\sum_{j=1}^{N_{n,-t}}\pi\!\left(\tilde{\mathbf{p}}_{j}\right)L\!\left(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}_{j},\tilde{\mathbf{p}}_{j}\right)}, \quad (4.1)$$
or to an existing cluster $C_{j,-t}$, for $j=1,\ldots,N_{n,-t}$, with probability
$$\frac{\pi\!\left(\tilde{\mathbf{p}}_{j}\right)L\!\left(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}_{j},\tilde{\mathbf{p}}_{j}\right)}{\pi\!\left(\tilde{\mathbf{p}}_{N_{n,-t}+1}\right)\int_{\mathcal{Z}}L\!\left(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\right)H\!\left(dZ_{N_{n,-t}+1,-t}^{*}\right)+\sum_{j=1}^{N_{n,-t}}\pi\!\left(\tilde{\mathbf{p}}_{j}\right)L\!\left(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}_{j},\tilde{\mathbf{p}}_{j}\right)}, \quad (4.2)$$
where
$$\tilde{\mathbf{p}}_{j}=\mathbf{p}_{n,-t}\ \text{with item}\ t\ \text{added to}\ C_{j,-t},\qquad \tilde{\mathbf{Z}}_{j}=\left\{\mathbf{Z}_{N_{n,-t}}^{*},\,Z_t=Z_{j,-t}^{*}\right\}, \quad (4.3)$$
for $j=1,\ldots,N_{n,-t}+1$. In addition, if $j=N_{n,-t}+1$ and a new cluster is selected, a sample of $Z_{N_{n,-t}+1,-t}^{*}$ is drawn from
$$\frac{L\!\left(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\right)H\!\left(dZ_{N_{n,-t}+1,-t}^{*}\right)}{\int_{\mathcal{Z}}L\!\left(\mathbf{Y}_n\mid\tilde{\mathbf{Z}}_{N_{n,-t}+1},\tilde{\mathbf{p}}_{N_{n,-t}+1}\right)H\!\left(dZ_{N_{n,-t}+1,-t}^{*}\right)} \quad (4.4)$$
for the next iteration.

To generate from $\pi(\mathbf{Z}_{N_n}^{*}\mid\mathbf{p}_n,\mathbf{Y}_n)$, for $j=1,\ldots,N_n$ generate $Z_j^{*}$ given $\{Z_1^{*},\ldots,Z_{N_n}^{*}\}\setminus\{Z_j^{*}\}$, $\mathbf{p}_n$, and $\mathbf{Y}_n$ from the conditional distribution
$$\pi\!\left(dZ_j^{*}\mid\mathbf{Z}_{N_n}^{*}\setminus Z_j^{*},\mathbf{p}_n,\mathbf{Y}_n\right)=\frac{L\!\left(\mathbf{Y}_n\mid\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)H\!\left(dZ_j^{*}\right)}{\int_{\mathcal{Z}}L\!\left(\mathbf{Y}_n\mid\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)H\!\left(dZ_j^{*}\right)}. \quad (4.5)$$
This step uses a standard Metropolis-Hastings algorithm to draw the posterior samples of $Z_j^{*}$. Explicitly, (4.5) is given by
$$\pi\!\left(dZ_j^{*}\mid\mathbf{Z}_{N_n}^{*}\setminus Z_j^{*},\mathbf{p}_n,\mathbf{Y}_n\right)=\frac{\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\left\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right\}\right)\right)H\!\left(dZ_j^{*}\right)}{\int_{\mathcal{Z}}\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\left\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right\}\right)\right)H\!\left(dZ_j^{*}\right)}. \quad (4.6)$$

In (4.6) the elements of the sequence $\{Z_1^{*},\ldots,Z_{N_n}^{*}\}$, conditional on $\mathbf{p}_n$ and $\mathbf{Y}_n$, are no longer independent, so they must be sampled individually, each conditional on the remaining elements.
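A concrete and deliberately simple way to carry out this step is a random-walk Metropolis-Hastings move on each distinct value, targeting (4.6). The sketch below assumes the Gamma(1,1) $\times$ Dirichlet(1,1,1) mean measure of Section 6; the multiplicative step on $\nu$ and the step size are arbitrary tuning choices of the illustration, not part of the algorithm described above.

```python
import numpy as np
from scipy.stats import gamma, dirichlet

def sigma2_path(y, labels, distinct):
    """sigma_t^2 for t = 1..n, with Z_t given by distinct[labels[t]] (equation (2.2))."""
    s2 = np.empty(len(y))
    s2_prev, y_prev = 0.0, 0.0
    for t in range(len(y)):
        nu, chi, psi = distinct[labels[t]]
        s2[t] = nu + chi * y_prev**2 + psi * s2_prev
        s2_prev, y_prev = s2[t], y[t]
    return s2

def log_posterior(y, labels, distinct, j):
    """log of the (unnormalised) conditional (4.6), viewed as a function of Z*_j."""
    s2 = sigma2_path(y, labels, distinct)
    loglik = -0.5 * np.sum(np.log(2.0 * np.pi * s2) + y**2 / s2)
    nu, chi, psi = distinct[j]
    logprior = (gamma.logpdf(nu, a=1.0, scale=1.0)
                + dirichlet.logpdf([chi, psi, 1.0 - chi - psi], [1.0, 1.0, 1.0]))
    return loglik + logprior

def mh_update_distinct(y, labels, distinct, j, rng, step=0.1):
    """One random-walk Metropolis-Hastings move for Z*_j = (nu_j, chi_j, psi_j)."""
    nu, chi, psi = distinct[j]
    prop = (nu * np.exp(step * rng.standard_normal()),       # multiplicative walk on nu
            chi + step * rng.standard_normal(),
            psi + step * rng.standard_normal())
    if prop[1] < 0.0 or prop[2] < 0.0 or prop[1] + prop[2] >= 1.0:
        return distinct                                       # proposal outside the support of H
    proposal = list(distinct); proposal[j] = prop
    # log(prop_nu / nu) is the Jacobian correction for the multiplicative proposal on nu
    log_acc = (log_posterior(y, labels, proposal, j)
               - log_posterior(y, labels, distinct, j)
               + np.log(prop[0] / nu))
    if np.log(rng.uniform()) < log_acc:
        return proposal
    return distinct
```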

We note that a special case of the above algorithm can be found for independent data in the normal mixture models of West et al. [29] (see also [17]). Taking $\chi=0$ and $\psi=0$ in (2.2) yields $Z_t=\nu_t=\sigma_t^2$ for $t=1,\ldots,n$ with distribution $G$. Let $G$ be a Dirichlet process with parameter $\theta H$. Then (4.1) and (4.2) become
$$\frac{\theta\int_{\mathcal{Z}}\phi\!\left(Y_t\mid 0,Z\right)H(dZ)}{\theta\int_{\mathcal{Z}}\phi\!\left(Y_t\mid 0,Z\right)H(dZ)+\sum_{j=1}^{N_{n,-t}}e_{j,-t}\,\phi\!\left(Y_t\mid 0,Z_{j,-t}^{*}\right)},\qquad \frac{e_{j,-t}\,\phi\!\left(Y_t\mid 0,Z_{j,-t}^{*}\right)}{\theta\int_{\mathcal{Z}}\phi\!\left(Y_t\mid 0,Z\right)H(dZ)+\sum_{j=1}^{N_{n,-t}}e_{j,-t}\,\phi\!\left(Y_t\mid 0,Z_{j,-t}^{*}\right)}. \quad (4.7)$$
Furthermore, the joint distribution of $\{Z_1^{*},\ldots,Z_{N_n}^{*}\}$ conditional on $\mathbf{p}_n$ and $\mathbf{Y}_n$ is given by
$$\prod_{j=1}^{N_n}\prod_{t\in C_j}\phi\!\left(Y_t\mid 0,Z_j^{*}\right)H\!\left(dZ_j^{*}\right). \quad (4.8)$$
In this case $\{Z_1^{*},\ldots,Z_{N_n}^{*}\}$ are independent in both the prior, $\pi(\mathbf{Z}_{N_n}^{*}\mid\mathbf{p}_n)$, and the posterior, $\pi(\mathbf{Z}_{N_n}^{*}\mid\mathbf{p}_n,\mathbf{Y}_n)$. This is not true, however, in the more general dynamic GARCH model we consider.
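For completeness, here is the same reassignment step in the simple special case (4.7), where $Z_t=\sigma_t^2$ and $G$ is a Dirichlet process; the Gamma(1,1) choice for $H$ mirrors Section 6, and the Monte Carlo approximation of the marginal integral $\int\phi(Y_t\mid 0,Z)H(dZ)$ is a convenience of the sketch rather than part of the algorithm.

```python
import numpy as np
from scipy.stats import norm

def dp_scale_mixture_reassign(y_t, cluster_sizes, cluster_vars, theta, rng, n_mc=500):
    """One 'leave one out' reassignment for the special case (4.7): chi = psi = 0,
    Z_t = sigma_t^2, and G a Dirichlet process with parameter theta and mean measure H,
    taken here to be a Gamma(1,1) prior on the variance.

    cluster_sizes[j] = e_{j,-t} and cluster_vars[j] = Z*_{j,-t} exclude item t."""
    h_draws = rng.gamma(1.0, 1.0, size=n_mc)                       # Z ~ H
    marginal = np.mean(norm.pdf(y_t, 0.0, np.sqrt(h_draws)))       # int phi(y_t | 0, Z) H(dZ)
    weights = np.concatenate((
        [theta * marginal],                                         # open a new cluster
        cluster_sizes * norm.pdf(y_t, 0.0, np.sqrt(cluster_vars)),  # join existing cluster j
    ))
    probs = weights / weights.sum()
    choice = rng.choice(len(probs), p=probs)
    return choice      # 0 means "new cluster", otherwise existing cluster choice - 1

rng = np.random.default_rng(0)
pick = dp_scale_mixture_reassign(1.8, np.array([10.0, 4.0]), np.array([0.6, 2.5]), 2.3538, rng)
```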

Usually, the quantities of interest are the volatility sequence $\{\sigma_1,\ldots,\sigma_n\}$ and the predictive density $\int_{\mathcal{Z}}k(Y_{n+1}\mid\mathbf{Y}_n,\mathbf{Z}_n,Z_{n+1})G(dZ_{n+1})$. Both are functions of $\mathbf{Z}_n$ under the mixture of GARCH(1,1) models, and the Bayesian estimators are taken to be the posterior expectations outlined in Section 2. That is, writing the volatility as the vector $\boldsymbol{\sigma}_n(\mathbf{Z}_n)=\{\sigma_1(\mathbf{Z}_n),\ldots,\sigma_n(\mathbf{Z}_n)\}$,
$$E\!\left[\boldsymbol{\sigma}_n\!\left(\mathbf{Z}_n\right)\mid\mathbf{Y}_n\right]=\sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}\boldsymbol{\sigma}_n\!\left(\left\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right\}\right)\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\mid\mathbf{Y}_n\right),$$
$$E\!\left[\int_{\mathcal{Z}}k\!\left(Y_{n+1}\mid\mathbf{Y}_n,Z_{n+1},\mathbf{Z}_n\right)G\!\left(dZ_{n+1}\right)\Bigm|\mathbf{Y}_n\right]=\sum_{\mathbf{p}_n}\int_{\mathcal{Z}}\cdots\int_{\mathcal{Z}}\int_{\mathcal{Z}}k\!\left(Y_{n+1}\mid\mathbf{Y}_n,Z_{n+1},\left\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right\}\right)\pi\!\left(dZ_{n+1}\mid\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\right)\pi\!\left(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\mid\mathbf{Y}_n\right), \quad (4.9)$$
where $\pi(d\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\mid\mathbf{Y}_n)$ denotes the posterior distribution of $\mathbf{Z}_n=\{\mathbf{Z}_{N_n}^{*},\mathbf{p}_n\}$ and $\pi(dZ_{n+1}\mid\mathbf{Z}_{N_n}^{*},\mathbf{p}_n)$ represents the Pólya Urn predictive density of $Z_{n+1}$ given $\mathbf{Z}_n$.
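Given retained draws $\{(\mathbf{p}_n^{[s]},(\mathbf{Z}_{N_n}^{*})^{[s]})\}$ from the sampler, the first estimator in (4.9) is approximated by a simple Monte Carlo average of the volatility paths, as in the sketch below (the draw format follows the skeleton sampler shown earlier in this section).

```python
import numpy as np

def volatility_path(y, labels, distinct):
    """sigma_t, t = 1..n, implied by one draw of the partition (labels) and distinct values."""
    s2 = np.empty(len(y))
    s2_prev, y_prev = 0.0, 0.0
    for t in range(len(y)):
        nu, chi, psi = distinct[labels[t]]
        s2[t] = nu + chi * y_prev**2 + psi * s2_prev     # equation (2.2)
        s2_prev, y_prev = s2[t], y[t]
    return np.sqrt(s2)

def posterior_mean_volatility(y, draws):
    """Monte Carlo approximation of E[sigma_n(Z_n) | Y_n] in (4.9)."""
    return np.mean([volatility_path(y, labels, distinct) for labels, distinct in draws], axis=0)
```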

5. Alternative Algorithmic Estimation Procedures

We now outline how the algorithms of Walker [20] and Papaspiliopoulos and Roberts [21] may be applied to our GARCH(1,1) model mixed over the PKSS process. First, consider the approach in Walker [20]. Beginning with (3.1), the weights can be written as
$$W_i=\int_{(0,W_i)}du=W_i\int_{(0,\infty)}W_i^{-1}\mathbb{I}_{(0,W_i)}(u)\,du=W_i\int_{(0,\infty)}U\!\left(u\mid 0,W_i\right)du, \quad (5.1)$$
where $\mathbb{I}_{(0,W_i)}(u)$ denotes the indicator function, equal to 1 if $0<u<W_i$ and 0 otherwise, and $U(u\mid 0,W_i)$ represents the uniform density on the interval $(0,W_i)$. Then, substituting (5.1), but without the integral over $u$, into (3.1), we obtain the joint measure
$$G(dz,du)=\sum_{i=1}^{\infty}W_i\,U\!\left(u\mid 0,W_i\right)du\,\delta_{Z_i}(dz). \quad (5.2)$$

Furthermore, we can take the classification variables $\{\delta_1,\ldots,\delta_n\}$ to indicate the points $\{Z_{\delta_1},\ldots,Z_{\delta_n}\}$ taken from the measure. The classification variables $\{\delta_1,\ldots,\delta_n\}$ take values in the integers $\{1,2,\ldots\}$ and assign a configuration to model (2.2) so that the likelihood has a simpler expression without the product of sums. Combining (5.2) with model (2.2) yields
$$L\!\left(\mathbf{Y}_n,\mathbf{u}_n,\boldsymbol{\delta}_n\mid\mathbf{Z},\mathbf{W}\right)=\prod_{t=1}^{n}W_{\delta_t}\,U\!\left(u_t\mid 0,W_{\delta_t}\right)\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\mathbf{Z}_{\boldsymbol{\delta}_t}\right)\right), \quad (5.3)$$
where $\boldsymbol{\delta}_n=\{\delta_1,\ldots,\delta_n\}$ and $\mathbf{Z}_{\boldsymbol{\delta}_t}=\{Z_{\delta_1},\ldots,Z_{\delta_t}\}$ for $t=1,\ldots,n$. Here the random jumps that build up the random measure $G$ in (5.2) can be reexpressed as $W_1\overset{d}{=}V_1$ and $W_i\overset{d}{=}V_i\prod_{j=1}^{i-1}(1-V_j)$ for $i=2,3,\ldots$. This is called the stick-breaking representation. Unfortunately, within the PKSS class this representation is so far available only for the Poisson-Dirichlet $(\alpha,\theta)$ process, for which the $V_i$ are Beta$(1-\alpha,\theta+i\alpha)$ random variables. Further development will be required to fully utilise the approach of Walker [20] for the PKSS process in general.
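For the Poisson-Dirichlet $(\alpha,\theta)$ case the stick-breaking construction is straightforward to code; the sketch below uses a fixed truncation purely for illustration (Walker's slice sampler instead truncates randomly via the auxiliary variables $u_t$), with the PD parameter values used later in Section 6.

```python
import numpy as np

def stick_breaking_pd(alpha, theta, n_sticks, rng):
    """Stick-breaking weights for the Poisson-Dirichlet (alpha, theta) process:
    V_i ~ Beta(1 - alpha, theta + i*alpha), W_1 = V_1, W_i = V_i * prod_{j<i}(1 - V_j)."""
    i = np.arange(1, n_sticks + 1)
    v = rng.beta(1.0 - alpha, theta + i * alpha)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return w

rng = np.random.default_rng(2)
w = stick_breaking_pd(0.5, 0.6769, 50, rng)   # PD parameters used in Section 6
print(w[:5], w.sum())                          # leading weights; total mass approaches 1 as n_sticks grows
```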

The likelihood (5.3) can then be written as
$$L\!\left(\mathbf{Y}_n,\mathbf{u}_n,\boldsymbol{\delta}_n\mid\mathbf{Z},\mathbf{V}\right)=\prod_{t=1}^{n}W_{\delta_t}(\mathbf{V})\,U\!\left(u_t\mid 0,W_{\delta_t}(\mathbf{V})\right)\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\mathbf{Z}_{\boldsymbol{\delta}_t}\right)\right), \quad (5.4)$$
and MCMC algorithms for sampling $\mathbf{u}_n$, $\boldsymbol{\delta}_n$, and $\mathbf{V}$ are straightforward and already given in Walker [20, Section 3]. To complete the algorithm for our model it remains to sample $\mathbf{Z}$. This can be achieved by sampling $Z_j$, for every $j$ such that $\delta_t=j$ for some $t$, from
$$\frac{\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\mathbf{Z}_{\boldsymbol{\delta}_t}\right)\right)H\!\left(dZ_j\right)}{\int_{\mathcal{Z}}\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\mathbf{Z}_{\boldsymbol{\delta}_t}\right)\right)H\!\left(dZ_j\right)}, \quad (5.5)$$
and otherwise sampling $Z_j$ from $H(dZ)$ if no $\delta_t$ equals $j$. Notice that infinitely many $Z_j$ are contained in $\mathbf{Z}$, but it is only required to sample at most $n$ of them; the number of sampled $Z_j$ varies over iterations (see Walker [20, Section 3] for details).

Papaspiliopoulos and Roberts [21] suggest an approach similar to Walker [20]. Consider the classification variables $\{\delta_1,\ldots,\delta_n\}$ and the stick-breaking representation of $\{W_1,W_2,\ldots\}$ in terms of $\{V_1,V_2,\ldots\}$ defined above. Then the likelihood is immediately given by
$$L\!\left(\mathbf{Y}_n,\boldsymbol{\delta}_n\mid\mathbf{Z},\mathbf{V}\right)=\prod_{t=1}^{n}W_{\delta_t}(\mathbf{V})\,\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\mathbf{Z}_{\boldsymbol{\delta}_t}\right)\right). \quad (5.6)$$
The most challenging task is the reallocation of the $n$ observations over the infinite number of components in (3.1), equivalent to sampling the classification variables $\{\delta_1,\ldots,\delta_n\}$ over the MCMC iterations. Here we briefly discuss this task when it involves the variance $\sigma_t^2(\mathbf{Z}_{\boldsymbol{\delta}_t})$ of our model (see also Papaspiliopoulos and Roberts [21, Section 3]). Let
$$\boldsymbol{\delta}_n(i,j)=\left\{\delta_1,\ldots,\delta_{i-1},j,\delta_{i+1},\ldots,\delta_n\right\} \quad (5.7)$$
be the vector produced from $\boldsymbol{\delta}_n$ by substituting the $i$th element by $j$. This is a proposed move from $\boldsymbol{\delta}_n$ to $\boldsymbol{\delta}_n(i,j)$, where $j=1,2,\ldots$. Notice that it is not possible to consider an infinite number of $Z_j$ directly since we only have a finite number. Instead, we can employ a Metropolis-Hastings sampler whose proposal probability mass function requires only a finite number of $Z_j$. The probabilities for the proposed moves are given by
$$q_n(i,j)=\frac{1}{C\!\left(\boldsymbol{\delta}_n\right)}\times
\begin{cases}
W_j\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\mathbf{Z}_{\boldsymbol{\delta}_t}\right)\right)\big|_{\delta_i=j}, & \text{for } j=1,\ldots,\max\!\left\{\delta_1,\ldots,\delta_n\right\},\\
M\!\left(\boldsymbol{\delta}_n\right), & \text{for } j>\max\!\left\{\delta_1,\ldots,\delta_n\right\},
\end{cases} \quad (5.8)$$
where $M(\boldsymbol{\delta}_n)=\max_{j=1,\ldots,\max\{\delta_1,\ldots,\delta_n\}}\prod_{t=1}^{n}\phi(Y_t\mid 0,\sigma_t^2(\mathbf{Z}_{\boldsymbol{\delta}_t}))\big|_{\delta_i=j}$, and the normalising constant is $C(\boldsymbol{\delta}_n)=\sum_{j=1}^{\max\{\delta_1,\ldots,\delta_n\}}W_j\prod_{t=1}^{n}\phi(Y_t\mid 0,\sigma_t^2(\mathbf{Z}_{\boldsymbol{\delta}_t}))\big|_{\delta_i=j}+\bigl(1-\sum_{j=1}^{\max\{\delta_1,\ldots,\delta_n\}}W_j\bigr)M(\boldsymbol{\delta}_n)$. Then simulate a Uniform$(0,1)$ random variable $U_i$ and propose the move to $\boldsymbol{\delta}_n(i,j)=\{\delta_1,\ldots,\delta_{i-1},\delta_i=j,\delta_{i+1},\ldots,\delta_n\}$ for the $j$ satisfying $\sum_{\ell=0}^{j-1}q_n(i,\ell)<U_i\le\sum_{\ell=1}^{j}q_n(i,\ell)$, where $q_n(i,0)=0$. The acceptance probability for this Metropolis-Hastings step is given by
$$\alpha\!\left(\boldsymbol{\delta}_n,\boldsymbol{\delta}_n(i,j)\right)=
\begin{cases}
1, & \text{if } j\le\max\!\left\{\delta_1,\ldots,\delta_n\right\}\text{ and }\max\!\left\{\boldsymbol{\delta}_n(i,j)\right\}=\max\!\left\{\delta_1,\ldots,\delta_n\right\},\\
\min\!\left\{1,\dfrac{C\!\left(\boldsymbol{\delta}_n\right)M\!\left(\boldsymbol{\delta}_n(i,j)\right)}{C\!\left(\boldsymbol{\delta}_n(i,j)\right)\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\mathbf{Z}_{\boldsymbol{\delta}_t}\right)\right)}\right\}, & \text{if } j\le\max\!\left\{\delta_1,\ldots,\delta_n\right\}\text{ and }\max\!\left\{\boldsymbol{\delta}_n(i,j)\right\}<\max\!\left\{\delta_1,\ldots,\delta_n\right\},\\
\min\!\left\{1,\dfrac{C\!\left(\boldsymbol{\delta}_n\right)\prod_{t=1}^{n}\phi\!\left(Y_t\mid 0,\sigma_t^2\!\left(\mathbf{Z}_{\boldsymbol{\delta}_t}\right)\right)\big|_{\delta_i=j}}{C\!\left(\boldsymbol{\delta}_n(i,j)\right)M\!\left(\boldsymbol{\delta}_n\right)}\right\}, & \text{if } j>\max\!\left\{\delta_1,\ldots,\delta_n\right\}.
\end{cases} \quad (5.9)$$
This completes the task of sampling $\boldsymbol{\delta}_n$. Finally, similarly to Walker [20], sampling $\mathbf{Z}$ only needs (5.5): for every $j$ such that $\delta_t=j$ for some $t$, sample $Z_j$ from (5.5), and otherwise sample $Z_j$ from $H(dZ)$ if no $\delta_t$ equals $j$.

6. Application to the Standard & Poor’s 500 Financial Index

The methodology is illustrated on the daily logarithmic returns of the S&P 500 (Standard & Poor's 500) financial index from 3 January 2006 to 31 December 2009. The data comprise a total of 1007 trading days and are available from Yahoo Finance (URL: http://finance.yahoo.com/). The log return is defined as $Y_t=100(\ln I_t-\ln I_{t-1})$, where $I_t$ is the index at time $t$. The algorithm described in Section 4 is used to estimate the nonparametric mixture models.
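The return construction is a one-liner once the daily closing prices are available; the sketch below assumes a hypothetical CSV export from Yahoo Finance with "Date" and "Close" columns (the file name and column names are illustrative, not prescribed by the paper).

```python
import numpy as np
import pandas as pd

# Hypothetical input: daily S&P 500 closes for 2006-01-03 to 2009-12-31 downloaded from Yahoo Finance.
prices = pd.read_csv("sp500_2006_2009.csv", parse_dates=["Date"]).sort_values("Date")
I = prices["Close"].to_numpy()
Y = 100.0 * np.diff(np.log(I))     # Y_t = 100 (ln I_t - ln I_{t-1})
print(len(Y), Y.mean(), Y.std())
```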

To compare the three different mixture models of Section 3, $G$ is taken in turn to be the Dirichlet process, the PD process, and the NGG process. In each case the mean measure, denoted $H$, is a Gamma-Dirichlet distribution given by $H(d\nu,d\chi,d\psi)=H_1(d\nu)H_2(d\chi,d\psi)$, where $H_1(d\nu)$ is the Gamma$(1,1)$ distribution and $H_2(d\chi,d\psi)$ is the Dirichlet$(1,1,1)$ distribution. We set the parameters of each process such that the variance of each process evaluated over the same measure is equal. This results in $\theta=2.3538$ for the Dirichlet process, $\theta=0.6769$ and $\alpha=1/2$ for the PD process, and $\beta=1$ and $\alpha=1/2$ for the NGG process. We also compare the results to a no-mixture GARCH(1,1) model in which the parameters $(\nu,\chi,\psi)$ have the prior distribution $H_1(d\nu)H_2(d\chi,d\psi)$. We initialise the MCMC algorithm with a partition that separates all integers, that is, $\mathbf{p}_n=\{C_1=\{1\},C_2=\{2\},\ldots,C_n=\{n\}\}$. We run the MCMC algorithm for 20,000 iterations, of which the first 10,000 are discarded; the last 10,000 iterations are treated as a sample from the posterior distribution of $\{\mathbf{Z}_n,\mathbf{p}_n\}$.

Figure 1 contains the volatility estimates (fitted data) for the no-mixture model, the Dirichlet process, the PD process, and the NGG process. The no-mixture model, the Dirichlet process, and the PD process appear to give similar results. However, it is easy to distinguish the NGG process from the other models, since its volatility estimates appear to better fit the observed spikes in the data. Figure 2 presents the predictive densities for each model. Again, the no-mixture model, the Dirichlet process, and the PD process give similar predictive density estimates in the sense that the distribution tails are all similar. However, the NGG process model estimates a predictive density with substantially wider tails than the other three models. Figures 1 and 2 suggest that the Dirichlet and PD processes allocate fewer clusters and treat the periods of increased volatility as outliers within the data. In contrast, the NGG process allocates more clusters and incorporates the periods of increased volatility directly into its predictive density.

Finally, we evaluate the goodness of fit in terms of the marginal likelihoods. The logarithms of the marginal likelihoods of the no-mixture model, the Dirichlet process model, the PD process model, and the NGG process model are −1578.085, −1492.086, −1446.275, and −1442.269, respectively. Under the marginal likelihood criterion all three mixture models outperform the no-mixture GARCH(1,1) model. Further, the NGG process outperforms the PD process, which in turn outperforms the Dirichlet process model proposed in Lau and So [9]. These results suggest that generalisations of the Dirichlet process mixture model should be further investigated for time-dependent data.

7. Conclusion

In this paper we have extended nonparametric mixture modelling for GARCH models to the Poisson-Kingman process. This process includes the previously applied Dirichlet process as well as the Poisson-Dirichlet and Normalised Generalised Gamma processes. The Poisson-Dirichlet and Normalised Generalised Gamma processes provide richer clustering structures than the Dirichlet process and have not previously been adapted to time series data. An application to the S&P 500 financial index suggests that these more general random probability measures can outperform the Dirichlet process. Finally, we developed an MCMC algorithm that is easy to implement, which we hope will facilitate further investigation into the application of nonparametric mixture models to time series data.