Abstract
Traditional GARCH models describe volatility levels that evolve smoothly over time, generated by a single GARCH regime. However, nonstationary time series data may exhibit abrupt changes in volatility, suggesting changes in the underlying GARCH regimes. Further, the number and times of regime changes are not always obvious. This article outlines a nonparametric mixture of GARCH models that is able to estimate the number and times of volatility regime changes by mixing over the Poisson-Kingman process. This process generalises the Dirichlet process typically used in nonparametric models for time-dependent data, provides a richer clustering structure, and its application to time series data is novel. Inference is Bayesian, and a Markov chain Monte Carlo algorithm to explore the posterior distribution is described. The methodology is illustrated on the Standard and Poor's 500 financial index.
1. Introduction
Generalised autoregressive conditional heteroscedastic (GARCH) models estimate the time-varying fluctuations around the mean level of a time series, known as the volatility of the series [1, 2]. The standard GARCH model specifies the volatility persistence at time $t$ as a linear combination of previous volatilities and squared residual terms. The persistence is assumed constant for all $t$, resulting in smooth transitions of volatility levels. However, many nonstationary time series exhibit abrupt changes in volatility, suggesting fluctuating levels of volatility persistence. In this case the GARCH parameters undergo regime changes over time. If the maximum number of potential regimes is known, Markov-switching GARCH models are an appealing option [3–8]. However, often the number of volatility regimes is not known and can be difficult to preselect. In this case, Bayesian nonparametric mixture models are attractive because they allow the data to determine the number of volatility regimes or mixture components. For example, nonparametric mixing over the Dirichlet process has recently been described by Lau and So [9] for GARCH(1,1) models, Jensen and Maheu [10] for stochastic volatility models, and Griffin [11] for Ornstein-Uhlenbeck processes.
The Dirichlet process [12] is the most widely applied random probability measure in nonparametric mixture models and, within a hierarchical framework, was introduced in Lo [13] for independent data. Lau and So [9] extend the work of Lo [13] to time-dependent data, where time-varying GARCH parameters are mixed over the Dirichlet process. The additional flexibility of this approach allows a range of GARCH regimes, from all observations generated by a single GARCH model to each observation in the series generated by a unique set of GARCH parameters. Lau and So [9] conclude with a discussion on the possibility of extending their method to alternative random probability measures that provide greater clustering flexibility than the Dirichlet process. We continue this discussion by outlining a novel method for a class of GARCH mixture models mixed over the Poisson-Kingman process [14, 15] derived from the stable subordinator (known henceforth as PKSS). To illustrate the richer clustering mechanisms of the PKSS process, we describe three of its special cases: the Dirichlet process [12], the Poisson-Dirichlet (PD) process [14, 16], and the normalised generalised gamma (NGG) process [17, 18].
Theoretical developments and recent applications of the PKSS process are discussed in Lijoi et al. [19]. However, we note that the PD and NGG processes have yet to be developed for volatility estimation, or indeed time series applications in general, and in this sense the work in this paper is novel. Although the Dirichlet process has now been used extensively in applications, the implementation of more general nonparametric mixture models for applied work is not always obvious. We therefore describe three Markov chain Monte Carlo (MCMC) algorithms. First, we develop a weighted Chinese restaurant Gibbs-type process for partition sampling to explore the posterior distribution. The basis of this algorithm is developed for time-dependent data in Lau and So [9], and we extend it to allow for the PKSS process. We also note recent developments for the sampling of Bayesian nonparametric models in Walker [20] and Papaspiliopoulos and Roberts [21] and describe how these algorithms can be constructed to estimate our model.
The methodology is illustrated through volatility and predictive density estimation of a GARCH(1,1) model applied to the Standard and Poor's 500 financial index from 2006 to 2009. Results are compared between a no-mixture model and nonparametric mixtures over the Dirichlet, PD, and NGG processes. Under the criterion of marginal likelihood the NGG process performs best. Also, the PD and NGG processes outperform the previously studied Dirichlet process, which in turn outperforms the no-mixture model. The results suggest that alternatives to the Dirichlet process should be considered for applications of nonparametric mixture models to time-dependent data.
The paper proceeds as follows. Section 2 presents a Bayesian mixture of GARCH models over an unknown mixing distribution, outlines a convenient Bayesian estimator based on quadratic loss, and describes some of the time series properties of our model. Section 3 discusses the class of random probability measures we consider as the mixing distributions and details the clustering mechanisms associated with the three special cases mentioned above via the Pólya Urn representation and the consequences for the posterior distribution of the partitions resulting from the PKSS process. Our MCMC algorithm for sampling the posterior distribution is presented in Section 4, and the alternative MCMC algorithms of Walker [20] and Papaspiliopoulos and Roberts [21] are presented in Section 5. Section 6 describes the application, and Section 7 concludes the paper.
2. The Mixture Model
Let $y_t$ be the observed variable at time $t$ and $\mathbf{y}_t = (y_1, \ldots, y_t)$ the observations from time 1 to time $t$, for $t = 1, \ldots, n$. The GARCH(1,1) model specifies
$$ y_t = \sqrt{h_t}\,\epsilon_t, \qquad h_t = \omega + \alpha y_{t-1}^2 + \beta h_{t-1}, \quad (2.1) $$
where the $\epsilon_t$ are independent standard normal random variables, $\omega > 0$, and $\alpha, \beta \geq 0$, for $t = 1, \ldots, n$. In (2.1) the GARCH parameters are not time varying, implying that volatility persistence is constant over time with smooth transitions of volatility levels. To allow abrupt changes to volatilities, we extend (2.1) by writing $\lambda_t = (\omega_t, \alpha_t, \beta_t)$ for $t = 1, \ldots, n$ and $\boldsymbol{\lambda}_n = (\lambda_1, \ldots, \lambda_n)$ as the joint latent variables from time 1 to time $n$; that is, the model is now a dynamic GARCH model with each observation potentially generated by its own set of GARCH parameters as follows:
$$ y_t = \sqrt{h_t}\,\epsilon_t, \qquad h_t = \omega_t + \alpha_t y_{t-1}^2 + \beta_t h_{t-1}. \quad (2.2) $$
Note that in model (2.2) the data controls the maximum potential number of GARCH regimes, up to the sample size $n$. In contrast, finite switching models preallocate a maximum number of regimes typically much smaller than the number of observations. As the potential number of regimes gets larger, estimation of the associated transition probabilities and GARCH parameters in finite switching models becomes prohibitive. However, assuming that the latent variables $\lambda_1, \ldots, \lambda_n$ are independent of each other and completing the hierarchy by modelling the GARCH parameters contained in $\lambda_t$ with an unknown mixing distribution, $G$, with law $\mathcal{P}$, the model becomes manageable, that is,
$$ \lambda_t \mid G \overset{\text{iid}}{\sim} G, \qquad G \sim \mathcal{P}, \quad (2.3) $$
with the latent variables $\boldsymbol{\lambda}_n$ and the mixing distribution, $G$, parameters that we may estimate. Depending on the posterior distribution of the clustering structure associated with the mixing distribution, the results may suggest anything from $\lambda_1 = \lambda_2 = \cdots = \lambda_n$ (a single-regime GARCH model) up to a unique $\lambda_t$ for each $t$, indicating a separate GARCH regime for each time point. This illustrates the flexibility of the model.
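To make the dynamic specification concrete, the following minimal Python sketch simulates from a model of the form (2.2), with the nonparametric mixing over regimes replaced by a simple random switching rule purely for illustration; the function name, the two regimes, and the switching probability are our own assumptions and not part of the model above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dynamic_garch(T, regimes, switch_prob=0.01):
    """Simulate y_t = sqrt(h_t) * eps_t with h_t = w_t + a_t * y_{t-1}^2 + b_t * h_{t-1},
    where the parameter triple (w_t, a_t, b_t) jumps between the supplied regimes."""
    k = rng.integers(len(regimes))            # index of the current regime
    y = np.zeros(T)
    h = np.zeros(T)
    y_prev, h_prev = 0.0, 1.0
    for t in range(T):
        if rng.random() < switch_prob:        # abrupt regime change
            k = rng.integers(len(regimes))
        w, a, b = regimes[k]
        h[t] = w + a * y_prev**2 + b * h_prev
        y[t] = np.sqrt(h[t]) * rng.standard_normal()
        y_prev, h_prev = y[t], h[t]
    return y, h

# two illustrative regimes: calm and turbulent
y, h = simulate_dynamic_garch(1000, regimes=[(0.01, 0.05, 0.90), (0.05, 0.20, 0.75)])
```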
We write $f(\boldsymbol{\lambda}_n, G)$ as a positive integrable function of the latent variables, $\boldsymbol{\lambda}_n$, and the mixing distribution, $G$, to represent various quantities that may be of interest for inference. Under quadratic loss the Bayesian estimator is the posterior expectation $E[f(\boldsymbol{\lambda}_n, G) \mid \mathbf{y}_n]$. For our model this is an appealing estimator because it does not require the posterior of $G$ but only the posterior distribution of the sequence $\boldsymbol{\lambda}_n$, that is,
$$ E[f(\boldsymbol{\lambda}_n, G) \mid \mathbf{y}_n] = \int \left[ \int f(\boldsymbol{\lambda}_n, G)\, \mathcal{P}(dG \mid \boldsymbol{\lambda}_n) \right] \pi(d\boldsymbol{\lambda}_n \mid \mathbf{y}_n), $$
where $\mathcal{P}(dG \mid \boldsymbol{\lambda}_n)$ represents the posterior law of the random probability measure $G$, and $\pi(d\boldsymbol{\lambda}_n \mid \mathbf{y}_n)$ is the posterior distribution of the sequence $\boldsymbol{\lambda}_n$.
In Lau and So [9] the unknown mixing distribution for the GARCH parameters, $G$, is taken to be the Dirichlet process. This paper combines the theoretical groundwork of Lijoi et al. [19, 22] with Lau and So [9] by allowing $G$ to be the PKSS process. The result is a nonparametric GARCH model which contains (among others) the Dirichlet process typically used in time series as well as the PD and NGG processes as special cases.
Understanding conditions for stationarity of a time series model is fundamental for statistical inference. Since our model is specified with zero mean over time, we provide a necessary and sufficient condition for the existence of a second-order stationary solution for the infinite mixture of GARCH(1,1) models. The derivation closely follows Embrechts et al. [23] and Zhang et al. [24], and we state the conditions without giving proof. By letting $\epsilon_t$ be a standard normal random variable and replacing $y_{t-1}^2$ by $h_{t-1}\epsilon_{t-1}^2$, then, conditioned on $\lambda_t$ for $t = 1, \ldots, n$, $h_t$ in (2.2) becomes
$$ h_t = \omega_t + \left(\alpha_t \epsilon_{t-1}^2 + \beta_t\right) h_{t-1}. \quad (2.6) $$
Here (2.6) is well known to be a univariate stochastic difference equation expressed as
$$ X_t = A_t X_{t-1} + B_t, \quad (2.7) $$
where $X_t = h_t$, $A_t = \alpha_t \epsilon_{t-1}^2 + \beta_t$, and $B_t = \omega_t$. The stationarity of (2.7) implies the second-order stationarity of (2.2); that is, $X_t$ converges in distribution to some random variable $X$ as $t \to \infty$, and $X$ satisfies the distributional equation $X \overset{d}{=} AX + B$ for a random variable pair $(A, B)$ distributed as the $(A_t, B_t)$ and independent of $X$. The stationary solution of (2.7) holds if $E[\log A_t] < 0$ and $E[\log^+ B_t] < \infty$, where $\log^+ x = \max(\log x, 0)$, as given in Embrechts et al. [23, Section 8.4, pages 454–481] and Zhang et al. [24, Theorems 2 and 3]; see also Vervaat [25] and Brandt [26]. So the conditions for stationarity in our model are
$$ E\!\left[\log\!\left(\alpha \epsilon^2 + \beta\right)\right] < 0 \quad \text{and} \quad E\!\left[\log^+ \omega\right] < \infty, $$
where the expectation over $\epsilon$, which is a standard normal random variable, is applicable only to the first condition, and the expectations over $(\alpha, \beta)$ and $\omega$ are taken with respect to the marginal measures of the mean measure of $G$.
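As a quick numerical illustration, the following sketch checks the first condition by Monte Carlo; the helper `sample_lambda` and the fixed single-regime values are hypothetical stand-ins for draws from the mean measure of the mixing distribution.

```python
import numpy as np

def stationarity_check(sample_lambda, n_mc=100_000, seed=0):
    """Monte Carlo check of E[log(alpha * eps^2 + beta)] < 0, the key condition
    for a stationary solution of the mixed GARCH(1,1) model. The argument
    sample_lambda(rng, n) should return arrays (omega, alpha, beta) drawn from
    the mean measure of the mixing distribution."""
    rng = np.random.default_rng(seed)
    omega, alpha, beta = sample_lambda(rng, n_mc)
    eps = rng.standard_normal(n_mc)
    return np.mean(np.log(alpha * eps**2 + beta)) < 0

# e.g. all mass on a single regime (0.01, 0.05, 0.90):
fixed = lambda rng, n: (np.full(n, 0.01), np.full(n, 0.05), np.full(n, 0.90))
print(stationarity_check(fixed))  # True: E[log(0.05 * eps^2 + 0.90)] < 0
```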
Now consider the first two conditional moments of model (2.2). Obviously, the first conditional moment is zero, and the second conditional moment is identical to $h_t$ in (2.6). The distinguishing feature of model (2.2) is that the parameters change over time and have the distribution $G$. Considering $h_t$ as a scale of the model results in a scale mixture model over time. From the representation in (2.6), $h_t$ can be rewritten as
$$ h_t = \omega_t + \sum_{i=1}^{\infty} \omega_{t-i} \prod_{j=0}^{i-1} A_{t-j}, \qquad A_t = \alpha_t \epsilon_{t-1}^2 + \beta_t. \quad (2.9) $$
The unconditional second moment can be derived from this representation by marginalising over all the random variates. Also, $h_t$ in (2.9) can be viewed as a weighted sum of the random sequence $\{\omega_{t-i}\}$, with random weights $\prod_{j=0}^{i-1} A_{t-j}$ that decay to zero as $i$ grows, as long as the model is stationary. In fact, the rate of decay can be irregular over time, and this is a substantial difference between the mixture of GARCH models and traditional GARCH models.
Finally, one might also be interested in the connection between models such as (2.2) with parameters having the distribution (2.3) and those having the Markov-switching characteristic that results in Markov-switching GARCH models [6, 7]. Markov-switching GARCH models have a similar structure to (2.3), and we can replace (2.3) by
$$ \lambda_t \mid s_t \sim G_{s_t}, \quad (2.10) $$
where $s_t$ denotes the state variables, usually latent and unobserved. Marginalising the current state variable $s_t$ in (2.10) yields the conditional distribution for $\lambda_t$ given the previous state $s_{t-1}$,
$$ \lambda_t \mid s_{t-1} \sim \sum_{k} \Pr(s_t = k \mid s_{t-1})\, G_k. \quad (2.11) $$
So (2.11) can be viewed as a random probability measure, but with a finite number of components and dependent on the previous state $s_{t-1}$.
3. The Random Probability Measures
We now describe the PKSS process and detail the Dirichlet, PD, and NGG processes to illustrate how the more general PKSS process allows for richer clustering mechanisms. Let $\mathcal{X}$ be a complete and separable metric space and $\mathcal{B}(\mathcal{X})$ the corresponding Borel $\sigma$-field. Let $\mathcal{P}$ be a probability measure on the space of probability measures on $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$, equipped with a suitable $\sigma$-field (see chapter 2 of Ghosh and Ramamoorthi [27] for more details). The random probability measure $G$ is sampled from the law $\mathcal{P}$ and operates as the unknown mixing distribution of the GARCH parameters in (2.2).
All random probability measures within the class of PKSS processes are almost surely discrete and can be represented as
$$ G(\cdot) = \sum_{i=1}^{\infty} P_i\, \delta_{Z_i}(\cdot), \quad (3.1) $$
where $\delta_{Z}$ denotes the Dirac delta measure concentrated at $Z$, the sequence of random variables $Z_1, Z_2, \ldots$ is drawn from a nonatomic probability measure $H$, and the sequence of random weights $P_1, P_2, \ldots$ sums to 1 [28]. Also, the mean measure of the process with respect to a measurable set $B$ is as follows:
$$ E[G(B)] = H(B). $$
A common characterization of (3.1) is the well-known Pólya Urn prediction distribution described in Pitman [28]. For the purposes of this paper the Pólya Urn warrants further discussion for two reasons. First, it is important for developing our MCMC algorithm to explore the posterior distribution discussed in Section 4. Second, it explicitly details how the PKSS process is a generalisation of the Dirichlet, PD, and NGG processes and how the different cluster tuning mechanisms operate.
Let $\lambda_1, \ldots, \lambda_n$ be a sequence of size $n$ drawn from $G$, where $n$ is a positive integer, and let $\mathbf{p} = \{C_1, \ldots, C_k\}$ denote a partition of the integers $\{1, \ldots, n\}$. A partition of size $k$ contains $k$ disjoint clusters $C_j$ of size $e_j$, indicated by the distinct values $\lambda_1^*, \ldots, \lambda_k^*$. The Pólya Urn prediction distribution for the PKSS process can now be written as
$$ \Pr\left(\lambda_{n+1} \in \cdot \mid \lambda_1, \ldots, \lambda_n\right) = \frac{V_{n+1,k+1}}{V_{n,k}}\, H(\cdot) + \frac{V_{n+1,k}}{V_{n,k}} \sum_{j=1}^{k} (e_j - \sigma)\, \delta_{\lambda_j^*}(\cdot), \quad (3.3) $$
for $0 \leq \sigma < 1$ and a set of nonnegative weights $\{V_{n,k}\}$ characterising the process.
The Pólya Urn prediction distribution (3.3) states that $\lambda_{n+1}$ will take a new value drawn from $H$ with mass $V_{n+1,k+1}/V_{n,k}$ and one of the existing values, $\lambda_j^*$, with mass $(e_j - \sigma)\, V_{n+1,k}/V_{n,k}$. This yields a joint prior distribution as the product of easily managed conditional densities useful for our MCMC scheme below.
The PKSS process reduces to the Dirichlet, PD, or NGG process in (3.3) as follows. (1) Taking
$$ V_{n,k} = \frac{\prod_{i=1}^{k-1} (\theta + i\sigma)}{(\theta + 1)_{n-1}}, $$
with $0 \leq \sigma < 1$ and $\theta > -\sigma$, where $(x)_m = x(x+1)\cdots(x+m-1)$ denotes the rising factorial, results in the PD process. (2) Setting $\sigma = 0$, the PD process becomes the Dirichlet process with mass parameter $\theta$. (3) The NGG process takes
$$ V_{n,k} = \frac{e^{\beta}\, \sigma^{k-1}}{\Gamma(n)} \sum_{i=0}^{n-1} \binom{n-1}{i} (-1)^i\, \beta^{i/\sigma}\, \Gamma\!\left(k - \frac{i}{\sigma};\, \beta\right), $$
for $0 < \sigma < 1$ and $\beta > 0$.
In the above, $\Gamma(\cdot)$ is the complete gamma function, and $\Gamma(\cdot\,;\cdot)$ is the incomplete gamma function. Examining the predictive distribution (3.3), the ratios $V_{n+1,k+1}/V_{n,k}$ and $V_{n+1,k}/V_{n,k}$ indicate the difference between the Dirichlet process and the other processes. Substituting the values of $V_{n,k}$ into the allocation masses reveals that for the Dirichlet process the ratios do not depend on the number of existing clusters, $k$: the Dirichlet process assigns probability to a new value independently of the number of existing clusters, and the rate of increment of the partition size is constant. In contrast, the PD and NGG processes assign probability to a new value dependent on the number of existing clusters. The comparison of these three special cases illustrates the richer clustering mechanisms of the PKSS process over the Dirichlet process. Furthermore, the PKSS process contains many other random measures, and these measures would be of interest for their clustering behaviours in further investigation.
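The contrast between the allocation rules is easy to see numerically. The sketch below computes the unnormalised urn masses for the PD process, with $\sigma = 0$ recovering the Dirichlet process; the NGG case is omitted since its weights require the incomplete gamma sums above. Function and argument names are our own.

```python
import numpy as np

def urn_masses(cluster_sizes, sigma=0.0, theta=1.0):
    """Unnormalised Polya-urn allocation masses for the PD(sigma, theta)
    process; sigma = 0 recovers the Dirichlet process. Returns the mass of
    opening a new cluster and the masses of joining each existing cluster."""
    sizes = np.asarray(cluster_sizes, dtype=float)
    k = len(sizes)
    new_cluster = theta + sigma * k   # grows with k unless sigma = 0
    existing = sizes - sigma
    return new_cluster, existing

# Dirichlet process: the new-cluster mass ignores the number of clusters k
print(urn_masses([5, 3, 2], sigma=0.0, theta=1.0))  # (1.0, array([5., 3., 2.]))
# PD process: the new-cluster mass increases with k
print(urn_masses([5, 3, 2], sigma=0.5, theta=1.0))  # (2.5, array([4.5, 2.5, 1.5]))
```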
Turning to the distribution of partitions, Pitman [28] shows that the joint prior distribution of the sequence $\lambda_1, \ldots, \lambda_n$ is
$$ \pi(d\lambda_1, \ldots, d\lambda_n) = V_{n,k} \prod_{j=1}^{k} (1 - \sigma)_{e_j - 1} \prod_{j=1}^{k} H(d\lambda_j^*). \quad (3.9) $$
Notice that the joint distribution is dependent on the partition of integers, $\mathbf{p}$, and we can decompose (3.9) into $\pi(\mathbf{p}) \prod_{j=1}^{k} H(d\lambda_j^*)$. The distribution of the partition, $\pi(\mathbf{p}) = V_{n,k} \prod_{j=1}^{k} (1-\sigma)_{e_j-1}$, is known as the Exchangeable Partition Probability Function. For many nonparametric models, this representation also helps MCMC construction by partitioning the posterior distribution in the form of an Exchangeable Partition Probability Function. To do so it is necessary to obtain the posterior distribution of the partition analytically; we could then generate partitions and approximate the posterior expectation. However, this is not possible in general, so we consider the joint distribution of the partition and the distinct values, $(\mathbf{p}, \boldsymbol{\lambda}^*)$, instead. We write the posterior expectation of $f$ as a marginalization over the joint posterior distribution of $(\mathbf{p}, \boldsymbol{\lambda}^*)$ by
$$ E[f \mid \mathbf{y}_n] = \sum_{\mathbf{p}} \int f\, \pi(d\boldsymbol{\lambda}^*, \mathbf{p} \mid \mathbf{y}_n). $$
Here the joint posterior distribution of $(\mathbf{p}, \boldsymbol{\lambda}^*)$ is given by
$$ \pi(d\boldsymbol{\lambda}^*, \mathbf{p} \mid \mathbf{y}_n) \propto \prod_{t=1}^{n} \phi\big(y_t \mid 0, h_t(\boldsymbol{\lambda}_t)\big)\; V_{n,k} \prod_{j=1}^{k} (1-\sigma)_{e_j-1}\, H(d\lambda_j^*), \quad (3.12) $$
where $\phi(y \mid 0, h)$ represents a normal density with mean 0 and variance $h$ evaluated at $y$. The variance is identical to $h_t$ in (2.2), and the notation $h_t(\boldsymbol{\lambda}_t)$ emphasizes that $h_t$ is a function of $\boldsymbol{\lambda}_t$ represented by $(\mathbf{p}_t, \boldsymbol{\lambda}^*)$. This representation leads to the development of the MCMC algorithm in the next section. For the sake of simplicity, we prefer the following expression for the variance:
$$ h_t(\mathbf{p}_t, \boldsymbol{\lambda}^*) = \omega_t + \alpha_t y_{t-1}^2 + \beta_t\, h_{t-1}(\mathbf{p}_{t-1}, \boldsymbol{\lambda}^*), $$
where $(\omega_t, \alpha_t, \beta_t)$ is the distinct value assigned to time $t$ by the partition.
We emphasise that we can always express $\boldsymbol{\lambda}_n$ by two elements, namely, a partition and distinct values. In this case $\mathbf{p}$ is a partition of the integers $\{1, \ldots, n\}$ that are the indices of $\boldsymbol{\lambda}_n$, and $\boldsymbol{\lambda}^* = (\lambda_1^*, \ldots, \lambda_k^*)$ represents the distinct values of $\boldsymbol{\lambda}_n$. The partition locates the distinct values from $\boldsymbol{\lambda}^*$ to $\boldsymbol{\lambda}_n$ or vice versa. As a result, we have the following equivalent representations:
$$ \boldsymbol{\lambda}_n \longleftrightarrow (\mathbf{p}, \boldsymbol{\lambda}^*). $$
In time series analysis, we usually consider the first $t$ items, $\boldsymbol{\lambda}_t$, the corresponding partition, $\mathbf{p}_t$, and distinct values, $\boldsymbol{\lambda}_t^*$, such that $\boldsymbol{\lambda}_t \longleftrightarrow (\mathbf{p}_t, \boldsymbol{\lambda}_t^*)$. Here $\mathbf{p}_t$ contains the first $t$ elements of $\boldsymbol{\lambda}_n$ as partitioned by $\mathbf{p}$, and adding item $t+1$ to $\mathbf{p}_t$ yields $\mathbf{p}_{t+1}$; combining $\mathbf{p}_{t+1}$ and the distinct values of $\boldsymbol{\lambda}_{t+1}$ gives $\boldsymbol{\lambda}_{t+1}$, providing the connection between $\boldsymbol{\lambda}_t$ and $\boldsymbol{\lambda}_{t+1}$. To simplify the likelihood expression and the sampling algorithm, we replace $\boldsymbol{\lambda}_t^*$ by $\boldsymbol{\lambda}^*$ since the subscript in $\mathbf{p}_t$ already tells us that the first $t$ items are considered. We then have a more accessible representation of the likelihood function as
$$ L(\mathbf{y}_n \mid \mathbf{p}, \boldsymbol{\lambda}^*) = \prod_{t=1}^{n} \phi\big(y_t \mid 0, h_t(\mathbf{p}_t, \boldsymbol{\lambda}^*)\big), \quad (3.16) $$
and (3.12) becomes
$$ \pi(d\boldsymbol{\lambda}^*, \mathbf{p} \mid \mathbf{y}_n) \propto L(\mathbf{y}_n \mid \mathbf{p}, \boldsymbol{\lambda}^*)\; V_{n,k} \prod_{j=1}^{k} (1-\sigma)_{e_j-1}\, H(d\lambda_j^*). $$
We are now equipped to describe the MCMC algorithm.
4. The Algorithm for the Partitions and Distinct Values Sampling
Our Markov chain Monte Carlo (MCMC) sampling procedure generates distinct values and partitions alternately from the posterior distribution, $\pi(d\boldsymbol{\lambda}^*, \mathbf{p} \mid \mathbf{y}_n)$. For $N$ iterations our MCMC algorithm is: (1) initialise $(\mathbf{p}^{(0)}, \boldsymbol{\lambda}^{*(0)})$; for $i = 1, \ldots, N$, (2) generate $\mathbf{p}^{(i)}$ from $\pi(\mathbf{p} \mid \boldsymbol{\lambda}^{*(i-1)}, \mathbf{y}_n)$, and (3) generate $\boldsymbol{\lambda}^{*(i)}$ from $\pi(d\boldsymbol{\lambda}^* \mid \mathbf{p}^{(i)}, \mathbf{y}_n)$.
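In code, this alternating structure is simply a loop over the two conditional updates. The Python skeleton below uses placeholder update functions with hypothetical names; their bodies correspond to the steps described in the remainder of this section.

```python
def update_partition(y, p, lam):        # placeholder for the weighted Chinese
    return p                            # restaurant step described below

def update_distinct_values(y, p, lam):  # placeholder for the Metropolis-
    return lam                          # Hastings step described below

def run_mcmc(y, n_iter, p0, lam0):
    """Alternate the two conditional updates and collect posterior draws."""
    p, lam, draws = p0, lam0, []
    for _ in range(n_iter):
        p = update_partition(y, p, lam)           # step 2: p | lambda*, y
        lam = update_distinct_values(y, p, lam)   # step 3: lambda* | p, y
        draws.append((p, lam))
    return draws
```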
To obtain our estimates we use the weighted Chinese restaurant Gibbs-type process introduced in Lau and So [9] for time series models mixed over the Dirichlet process. We have extended this scheme to allow for the more general PKSS process. In what follows, the extension from the Dirichlet to the PKSS process lies in the weights of the Pólya Urn predictive distribution in (3.3).
The main idea of this algorithm is the "leave one out" principle, which removes an item $t$ from the partition and then replaces it. This gives an update on both $\mathbf{p}$ and $\boldsymbol{\lambda}^*$. This idea has been applied to partition sampling in many Bayesian nonparametric models based on the Dirichlet process (see [17] for a review). The strategy is a simple evaluation of the product of the likelihood function (3.16) and the Pólya Urn distribution (3.3), conditioned on the remaining parameters, yielding a joint updating distribution of $\mathbf{p}$ and $\boldsymbol{\lambda}^*$. We now describe the distributions $\pi(\mathbf{p} \mid \boldsymbol{\lambda}^*, \mathbf{y}_n)$ and $\pi(d\boldsymbol{\lambda}^* \mid \mathbf{p}, \mathbf{y}_n)$ used in the sampling scheme.
Define $\mathbf{p}^{-t}$ to be the partition less item $t$, containing $k^{-t}$ clusters $C_j^{-t}$ of sizes $e_j^{-t}$ with corresponding distinct values given by $\lambda_1^*, \ldots, \lambda_{k^{-t}}^*$. To generate $\mathbf{p}$ from $\pi(\mathbf{p} \mid \boldsymbol{\lambda}^*, \mathbf{y}_n)$, for each $t = 1, \ldots, n$ the item $t$ is assigned either to a new cluster, that is, one empty before $t$ is assigned, with probability proportional to
$$ V_{n, k^{-t}+1} \int L_t(\lambda)\, H(d\lambda), \quad (4.1) $$
or to an existing cluster $C_j^{-t}$, for $j = 1, \ldots, k^{-t}$, with probability proportional to
$$ V_{n, k^{-t}}\, \big(e_j^{-t} - \sigma\big)\, L_t(\lambda_j^*), \quad (4.2) $$
where $L_t(\lambda)$ denotes the likelihood (3.16) evaluated with item $t$ assigned the value $\lambda$ and all other items held fixed. In addition, if a new cluster is selected, a sample of the new distinct value is drawn from the corresponding conditional density of $\lambda$ for the next iteration.
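A minimal sketch of this "leave one out" step is given below for the PD special case, where the ratios of the $V_{n,k}$ weights reduce to $\theta + \sigma k$ for a new cluster and $e_j - \sigma$ for an existing one; the log-likelihood callables and all names are hypothetical stand-ins for the terms in (4.1) and (4.2).

```python
import numpy as np

def reassign_item(t, labels, sigma, theta, loglik_existing, loglik_new, rng):
    """'Leave one out' update for item t under the PD(sigma, theta) urn:
    remove t, then put it in an existing cluster j with mass proportional to
    (e_j - sigma) * L_t(lambda_j*) or in a new cluster with mass proportional
    to (theta + sigma * k) * the integrated likelihood. The callables
    loglik_existing(t, j) and loglik_new(t) supply the log-likelihood terms."""
    labels = np.asarray(labels).copy()
    labels[t] = -1                                  # remove item t
    clusters = [c for c in np.unique(labels) if c >= 0]
    k = len(clusters)
    logw = [np.log(np.sum(labels == j) - sigma) + loglik_existing(t, j)
            for j in clusters]
    logw.append(np.log(theta + sigma * k) + loglik_new(t))
    logw = np.asarray(logw)
    w = np.exp(logw - logw.max())                   # normalise in log space
    w /= w.sum()
    choice = rng.choice(k + 1, p=w)
    labels[t] = clusters[choice] if choice < k else max(clusters, default=-1) + 1
    return labels
```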
To generate $\boldsymbol{\lambda}^*$ from $\pi(d\boldsymbol{\lambda}^* \mid \mathbf{p}, \mathbf{y}_n)$, for $j = 1, \ldots, k$ generate $\lambda_j^*$ given $\mathbf{p}$, $\mathbf{y}_n$, and the remaining distinct values $\boldsymbol{\lambda}^{*(-j)}$ from the conditional distribution
$$ \pi\big(d\lambda_j^* \mid \boldsymbol{\lambda}^{*(-j)}, \mathbf{p}, \mathbf{y}_n\big). \quad (4.5) $$
This step uses the standard Metropolis-Hastings algorithm to draw the posterior samples. Precisely, (4.5) is given by
$$ \pi\big(d\lambda_j^* \mid \boldsymbol{\lambda}^{*(-j)}, \mathbf{p}, \mathbf{y}_n\big) \propto L(\mathbf{y}_n \mid \mathbf{p}, \boldsymbol{\lambda}^*)\, H(d\lambda_j^*). \quad (4.6) $$
In (4.6) the elements of the sequence $\boldsymbol{\lambda}^*$, conditional on $\mathbf{p}$ and $\mathbf{y}_n$, are no longer independent, and they must be sampled individually, each conditional on the remaining elements.
We note that a special case of the above algorithm can be found for independent data in the normal mixture models of West et al. [29] (see also [17]). Taking $\alpha_t = 0$ and $\beta_t = 0$ in (2.2) yields $y_t = \sqrt{\omega_t}\,\epsilon_t$ for $t = 1, \ldots, n$ with $\omega_t$ having distribution $G$. Let $G$ be a Dirichlet process with mass parameter $\theta$. Then (4.1) and (4.2) become proportional to
$$ \theta \int \phi(y_t \mid 0, \omega)\, H(d\omega) \quad \text{and} \quad e_j^{-t}\, \phi\big(y_t \mid 0, \omega_j^*\big), $$
respectively. Furthermore, the joint distribution of the distinct values $\boldsymbol{\omega}^*$ conditional on $\mathbf{p}$ and $\mathbf{y}_n$ is given by
$$ \pi(d\boldsymbol{\omega}^* \mid \mathbf{p}, \mathbf{y}_n) \propto \prod_{j=1}^{k} \prod_{t \in C_j} \phi\big(y_t \mid 0, \omega_j^*\big)\, H(d\omega_j^*). $$
In this case the $\omega_j^*$ are independent in both the prior and the posterior. However, this is not true in the more general dynamic GARCH model we consider.
Usually, the parameters of interest are both the volatility sequence $\mathbf{h}_n = (h_1, \ldots, h_n)$ and the predictive density of $y_{n+1}$ given $\mathbf{y}_n$. These two sets of parameters are functions of $(\mathbf{p}, \boldsymbol{\lambda}^*)$ under the mixture of GARCH(1,1) models, and the Bayesian estimators are taken to be the posterior expectations, as outlined in Section 2. That is, writing the volatility as the vector $\mathbf{h}_n$,
$$ \hat{\mathbf{h}}_n = \int \mathbf{h}_n(\mathbf{p}, \boldsymbol{\lambda}^*)\, \pi(d\boldsymbol{\lambda}^*, \mathbf{p} \mid \mathbf{y}_n), \qquad \hat{f}(y_{n+1} \mid \mathbf{y}_n) = \int f(y_{n+1} \mid \mathbf{p}, \boldsymbol{\lambda}^*, \mathbf{y}_n)\, \pi(d\boldsymbol{\lambda}^*, \mathbf{p} \mid \mathbf{y}_n), $$
where $\pi(d\boldsymbol{\lambda}^*, \mathbf{p} \mid \mathbf{y}_n)$ denotes the posterior distribution of $(\mathbf{p}, \boldsymbol{\lambda}^*)$ and $f(y_{n+1} \mid \mathbf{p}, \boldsymbol{\lambda}^*, \mathbf{y}_n)$ represents the Pólya Urn predictive density of $y_{n+1}$ given $(\mathbf{p}, \boldsymbol{\lambda}^*)$.
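Given retained MCMC draws, both estimators reduce to Monte Carlo averages. The sketch below assumes that draws of the volatility path and of the one-step-ahead variance are available from the sampler; these inputs and the function names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def posterior_mean_volatility(h_draws):
    """Quadratic-loss Bayes estimate of the volatility path: the average of
    the volatility vector (h_1, ..., h_n) over the retained MCMC draws."""
    return np.mean(np.asarray(h_draws), axis=0)

def predictive_density(y_grid, h_next_draws):
    """Rao-Blackwellised predictive density estimate: average the one-step
    conditional normal density over MCMC draws of the next-period variance."""
    dens = [norm.pdf(y_grid, loc=0.0, scale=np.sqrt(h)) for h in h_next_draws]
    return np.mean(dens, axis=0)
```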
5. Alternative Algorithmic Estimation Procedures
We now outline how the algorithms of Walker [20] and Papaspiliopoulos and Roberts [21] may be applied to our GARCH(1,1) model mixed over the PKSS process. First, consider the approach in Walker [20]. Beginning with (3.1), each weight can be written as
$$ P_i = \int \mathbf{1}(u < P_i)\, du, \quad (5.1) $$
where $\mathbf{1}(u < P_i)$ denotes the indicator function that equals 1 if $u < P_i$ and 0 otherwise, so that the integrand is proportional to the uniform density on the interval $(0, P_i)$. Then, substituting (5.1), but without the integral over $u$, into (3.1), we obtain the joint measure
$$ G(du, \cdot) = \sum_{i=1}^{\infty} \mathbf{1}(u < P_i)\, du\; \delta_{Z_i}(\cdot). \quad (5.2) $$
Furthermore, we can take classification variables $z_1, \ldots, z_n$ to indicate the points taken from the measure. The classification variables take values from the integers $\{1, 2, \ldots\}$ and assign a configuration to model (2.2) so that the expression of the likelihood is simpler, without the product of sums. So, combining (5.2) with model (2.2) yields
$$ \pi\big(\mathbf{y}_n, \mathbf{u}, \mathbf{z} \mid \{P_i, Z_i\}\big) = \prod_{t=1}^{n} \mathbf{1}(u_t < P_{z_t})\, \phi\big(y_t \mid 0, h_t\big), \quad (5.3) $$
where $h_t = \omega_{z_t} + \alpha_{z_t} y_{t-1}^2 + \beta_{z_t} h_{t-1}$ and $Z_i = (\omega_i, \alpha_i, \beta_i)$ for $i = 1, 2, \ldots$. Here the random jumps that build up the random measure in (5.2) can be reexpressed as $P_1 = V_1$ and $P_i = V_i \prod_{l=1}^{i-1} (1 - V_l)$ for $i \geq 2$. This is called the stick-breaking representation. Unfortunately, up to now, within the PKSS class this representation is available in tractable form only for the Poisson-Dirichlet process, where the $V_i$ are Beta random variables. Further development will be required to fully utilise the approach of Walker [20] for the PKSS process in general.
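For the PD case the stick-breaking construction is straightforward to simulate; the truncated sketch below draws $V_j \sim \text{Beta}(1-\sigma, \theta + j\sigma)$ and forms the weights (function and variable names are our own).

```python
import numpy as np

def stick_breaking_pd(n_atoms, sigma, theta, rng):
    """Truncated stick-breaking weights for the PD(sigma, theta) process:
    V_j ~ Beta(1 - sigma, theta + j * sigma), P_j = V_j * prod_{l<j}(1 - V_l).
    Setting sigma = 0 gives the Dirichlet-process stick-breaking construction."""
    j = np.arange(1, n_atoms + 1)
    v = rng.beta(1.0 - sigma, theta + j * sigma)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

rng = np.random.default_rng(1)
weights = stick_breaking_pd(50, sigma=0.3, theta=1.0, rng=rng)
print(weights.sum())  # close to 1 for a long enough truncation
```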
The likelihood (5.3) factorises over time, and MCMC algorithms for sampling $\mathbf{u}$, the $V_i$s, and the $Z_i$s are straightforward and already included in Walker [20, Section 3]. To complete the algorithm for our model requires sampling $\mathbf{z}$. This can be achieved by sampling each $z_t$ from
$$ \Pr(z_t = i \mid \cdots) \propto \mathbf{1}(u_t < P_i)\, \phi\big(y_t \mid 0, h_t(z_t = i)\big), \quad (5.5) $$
over the indices $i$ with $P_i > u_t$, drawing a new component from the prior if no existing $P_i$ exceeds $u_t$. Notice that there are infinitely many $P_i$s contained in (5.2), but it is only required to sample at most a finite number of them, since the set $\{i : P_i > u_t\}$ is almost surely finite. The number of sampled $P_i$s varies over iterations (see also [20, Section 3] for details).
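A minimal sketch of this label update, assuming the slice variable $u_t$ and the currently instantiated weights and atoms are at hand, might look as follows; all names are hypothetical.

```python
import numpy as np

def slice_label_update(y_t, u_t, weights, atoms, loglik, rng):
    """Walker-style update of one classification variable: given the slice
    variable u_t, item t may only join atoms whose weight exceeds u_t, with
    probabilities proportional to the likelihood term loglik(y_t, atom).
    Assumes at least one weight exceeds u_t, which is guaranteed when u_t is
    drawn uniformly on (0, weight of the currently assigned atom)."""
    admissible = np.flatnonzero(weights > u_t)
    logp = np.array([loglik(y_t, atoms[j]) for j in admissible])
    p = np.exp(logp - logp.max())
    p /= p.sum()
    return admissible[rng.choice(len(admissible), p=p)]
```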
Papaspiliopoulos and Roberts [21] suggest an approach similar to Walker [20]. Consider the classification variables $\mathbf{z}$ and the stick-breaking representation of the weights $P_i$ contributed by the $V_i$s defined above. Then the likelihood is immediately given by
$$ L\big(\mathbf{y}_n \mid \mathbf{z}, \{Z_i\}\big) = \prod_{t=1}^{n} \phi\big(y_t \mid 0, h_t\big). $$
The most challenging task is the reallocation of observations over the infinite components in (3.1), equivalent to sampling the classification variables over the MCMC iterations. Here we briefly discuss this task as it involves the variance $h_t$ for our model (see also Papaspiliopoulos and Roberts [21, Section 3]). Let $\mathbf{z}^{(t,i)}$ be the vector produced from $\mathbf{z}$ by substituting the $t$th element by $i$; this is a proposed move from $\mathbf{z}$ to $\mathbf{z}^{(t,i)}$, where $i = 1, 2, \ldots$. Notice that it is not possible to consider an infinite number of components directly since only a finite number are ever instantiated. Instead, we can employ a Metropolis-Hastings sampler with a proposal probability mass function that requires only a finite number of components: proposed moves to the instantiated components receive probabilities proportional to their posterior weights, and the remaining mass is assigned to proposing a not-yet-instantiated component from the prior. Then simulate a Uniform(0,1) random variable to select the proposed move to $\mathbf{z}^{(t,i)}$, and accept it with the usual Metropolis-Hastings acceptance probability, which corrects for the truncation of the proposal. This completes the task of sampling $\mathbf{z}$. Finally, similar to Walker [20], sampling only needs (5.5); that is, for all $t$, sample $z_t$ from (5.5), drawing a new component from the prior if no instantiated component is selected.
6. Application to the Standard & Poor’s 500 Financial Index
The methodology is illustrated on the daily logarithmic returns of the S&P 500 (Standard & Poor's 500) financial index, dated from 2006 Jan 03 to 2009 Dec 31. The data contains a total of 1007 trading days and is available from Yahoo Finance (URL: http://finance.yahoo.com/). The log return is defined as $r_t = \log P_t - \log P_{t-1}$, where $P_t$ is the index at time $t$. The algorithm described in Section 4 is used to estimate the nonparametric mixture models.
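The return transformation is a one-liner; the following sketch shows it for a generic price series (the variable names are our own).

```python
import numpy as np

def log_returns(prices):
    """Daily log returns r_t = log(P_t) - log(P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return np.diff(np.log(p))

# e.g. r = log_returns(close) for a series of S&P 500 closing prices
# downloaded from Yahoo Finance.
```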
To compare the three different mixture models in Section 3, $G$ is allocated in turn the Dirichlet process, the PD process, and the NGG process. In each case, the mean measure is denoted $H$ and is a Gamma-Dirichlet distribution, built from a Gamma(1,1) distribution and a Dirichlet(1,1,1) distribution. We set the parameters of each process such that the variance of each process evaluated over the same measure is equal; this results in matched parameter values for the Dirichlet, PD, and NGG processes. We also compare the results to a no-mixture GARCH(1,1) model in which the parameters have the prior distribution $H$. We initialise the MCMC algorithm with a partition that separates all integers, that is, $\mathbf{p} = \{\{1\}, \{2\}, \ldots, \{n\}\}$. We run the MCMC algorithm for 20,000 iterations, of which the first 10,000 iterations are discarded. The last 10,000 iterations are considered a sample from the posterior distribution.
Figure 1 contains the volatility estimates (fitted data) for the no-mixture model, the Dirichlet process, the PD process, and the NGG process. The no-mixture model, the Dirichlet process, and the PD process appear to give similar results. However, it is easy to distinguish the NGG process from the other models since the volatility estimates of the NGG process appear to better fit the observed spikes in the data. Figure 2 presents the predictive densities for each model. Again, the no-mixture model, the Dirichlet process, and the PD process give similar predictive density estimates in the sense that the distribution tails are all similar. However, the NGG process model estimates a predictive density with substantially wider tails than the other three models. Figures 1 and 2 suggest that the Dirichlet and PD processes allocate fewer clusters and treat the periods of increased volatility as outliers within the data. On the other hand, the NGG process allocates more clusters and incorporates the periods of increased volatility directly into its predictive density.
Finally, we evaluate goodness of fit in terms of marginal likelihoods. The logarithms of the marginal likelihoods of the no-mixture model, the Dirichlet process model, the PD process model, and the NGG process model are −1578.085, −1492.086, −1446.275, and −1442.269, respectively. Under the marginal likelihood criterion all three mixture models outperform the no-mixture GARCH(1,1) model. Further, the NGG process outperforms the PD process, which in turn outperforms the Dirichlet process model proposed in Lau and So [9]. These results suggest that generalisations of the Dirichlet process mixture model should be further investigated for time-dependent data.
7. Conclusion
In this paper we have extended nonparametric mixture modelling for GARCH models to the Poisson-Kingman process. The process includes the previously applied Dirichlet process as well as the Poisson-Dirichlet and normalised generalised gamma processes. The Poisson-Dirichlet and normalised generalised gamma processes provide richer clustering structures than the Dirichlet process and have not previously been adapted to time series data. An application to the S&P 500 financial index suggests that these more general random probability measures can outperform the Dirichlet process. Finally, we developed an MCMC algorithm that is easy to implement, which we hope will facilitate further investigation into the application of nonparametric mixture models to time series data.