Nicolás S. Magner, Jaime F. Lavin, Mauricio A. Valle, Nicolás Hardy, "The Volatility Forecasting Power of Financial Network Analysis", Complexity, vol. 2020, Article ID 7051402, 17 pages, 2020. https://doi.org/10.1155/2020/7051402
The Volatility Forecasting Power of Financial Network Analysis
This investigation connects two crucial economic and financial fields: financial networks and forecasting. From the financial network perspective, it is possible to enhance forecasting tools, since standard econometric models do not incorporate second-order effects, nonlinearities, and systemic structural factors. Using daily returns from July 2001 to September 2019, we applied minimum spanning tree and planar maximally filtered graph techniques to forecast the realized stock market volatility of 26 countries. We test the predictive power of our core models against benchmark forecasting models both in and out of sample. Our results show that the length of the minimum spanning tree is relevant for forecasting volatility in European and Asian stock markets, improving forecasting models’ performance. As a new contribution, the evidence from this work establishes a road map for deepening the understanding of how financial networks can improve the quality of prediction of financial variables, the latter being a crucial factor during financial shocks, when uncertainty and volatility skyrocket.
In this paper, we use the minimum spanning tree length (MSTL) and planar maximally filtered graph length (PMFGL) methodologies to improve the forecasting of volatility in financial markets. In times of crisis, uncertainty soars and capital markets evolve quickly and wildly, generating a surge in the volatility of financial assets (a metric for measuring uncertainty in financial markets is the Chicago Board Options Exchange Volatility Index (CBOE VIX). Since its inception, this index has exhibited an average value of nearly 20, but during the subprime crisis of 2008-2009 it reached its historical maximum of 89.53. Similarly, during the COVID-19 crisis, the index reached a second historic maximum of 85.47 units). This surge affects price behavior, risk management, and asset pricing, as well as consumption, savings, and investment decisions in the economy, weakening economic growth and well-being in the short and long run . Consequently, improving volatility forecasting is a priority given its role in portfolio selection, risk management, and derivatives pricing, helping policymakers, institutions, and individuals to better mitigate adverse effects in the postcrisis stage [2, 3].
During the past two decades, forecasting models’ performance has improved with the incorporation of more data, and predictions are now highly reliable on weekly, monthly, and even quarterly horizons . New approaches have emerged, from high-frequency models  to multivariate ARCH models, stochastic volatility models for time-varying return volatilities and conditional distributions [6, 7], and long-memory models . Despite these broad advances, volatility models remain low-dimensional and univariate, failing to incorporate the second-order effects and nonlinear relationships typical of complex systems .
Complexity is a crucial element for understanding the behavior of financial markets and their reaction to disturbances. Capital markets are an example of a complex system, characterized by the presence of multiple economic agents interacting simultaneously (complex systems, in opposition to linear systems, are characterized, among other factors, by being nonlinear: the change in the outputs is not proportional to the change in the inputs, causing the system to appear chaotic, unpredictable, or even contradictory). The existence of multiple entities and various interaction rules (of several degrees and with nonlinearities), among other characteristics, generates collective effects that hinder the understanding and modeling of the whole system .
One way to improve the understanding of complex systems’ behavior is through the use of network methods (they allow modeling the indirect effects in the interconnections of a system’s components or entities. Traditional econometric methods study the direct effects in the relationships among the entities of a system; when using networks, it is feasible to estimate, for example, the distance between two entities or nodes and how likely an indirect effect is between them. These kinds of computations are not feasible using traditional statistics alone. This advantage is one of the reasons why network approaches have been used in economics and financial markets). The literature that examines networks in financial markets focuses on the implications of network properties and their relationships with the stability and fragility of financial systems [10, 11]. The literature has also explored how the distribution of links affects the systemic reaction to shocks and how the connectivity of critical nodes or hub nodes could destabilize and even cause the entire network to collapse [12–15]. Likewise, other relevant topics relate to transaction networks of financial assets, portfolio selection, risk management, overlapped portfolios, integration of financial markets, and financial crises [16–22].
Network methodologies apply correlation networks to analyze the synchronization of returns. For the case of interconnectedness risk, the planar maximally filtered graph (PMFG) and the minimum spanning tree (MST) are used to study the increase in comovement between financial markets (this phenomenon not only negatively affects investors’ possibilities to diversify but is also evidence of an increase in the influence of regional and global economic phenomena on the economies and financial markets involved. For example, Onnela et al.  study correlation networks of the S&P 500, finding the presence of dynamic clusters whose existence is not due exclusively to the industrial sector but to psychological and economic factors captured in the asset network. They also find that the normalized tree length of the MST (MSTL) is dynamic and reaches minimums during financial crises and that the potential for diversification is related to the evolution of the MSTL of the asset network. During crises, the topology of an MST becomes more star-like and compact, making the network less resilient to shocks and more prone to systemic risk ) . We applied the minimum spanning tree length (MSTL) and the planar maximally filtered graph length (PMFGL) to measure the synchronization of returns of global stock markets because both are parsimonious representations of the complex network of interrelationships, and their connections are useful for obtaining direct and indirect information . The ability to represent a dynamic system is a second reason for measuring comovement with these methodologies. This is particularly useful when the phenomenon must be represented for a large number of markets under examination . Despite the differences between MSTL and PMFGL, the advantage of reducing the complexity of networks is that they represent the nonredundant and essential connections in a graphical way.
We intend to explore the possibilities of using the MSTL and PMFGL methodologies to improve the forecasting of volatility in financial markets. We hypothesize that there is Granger causality (Granger causality refers to temporary anticipation of an effect (called forecasting power), which may be due to a black box of explanations. We evaluate the predictive power of equity markets via Granger causality and forecasting regressions, which are useful to assess whether a variable has predictive ability, not whether it “causes” other variables to change. The latter question can only be answered by using a structural model. However, we can study whether PMFGL and MSTL have predictive ability above and beyond that contained in other variables, such as global demand pressures and interest rates, used as proxies for the world business cycle, and we undertake such analysis ) between the global MSTL and global PMFGL and the realized stock market volatility. We think that the global network of correlations between stock markets contains relevant information for forecasting the realized volatility of stock indices. It is vital to note that our paper does not study “causality effects”; in other words, we do not study the structural link between the MSTL (and PMFGL) and the realized volatility of stock markets.
We contribute to the extant financial network literature in two ways. First, we study an application of the PMFGL and MSTL in the field of forecasting, defining a methodology for testing the predictive power of these two network measures. Second, we connect two relevant fields that, to the best of our knowledge, have not been linked: network analysis and forecasting. We believe financial networks offer an excellent opportunity to contribute to the field of financial forecasting. Many papers forecast stock market volatility, but none have used financial network metrics that incorporate correlation-based asset trees as independent factors [27–29].
To test our hypothesis, we examine the predictive power of the global MSTL and global PMFGL on the stock markets of 26 countries of North America, Latin America, Europe, Asia, and Oceania. For this, we collected daily returns from July 2001 to September 2019 for the main stock indices of these countries and calculated their monthly realized volatility. We then applied financial network methodologies to estimate the PMFGL and MSTL metrics, representing the global correlation structure and observing it dynamically over time. Finally, we tested the predictive power of these network metrics using in-sample and out-of-sample tests and applied robustness checks to our results.
Our main finding is that the MSTL helps to predict the realized volatility of stock markets. Specifically, results indicate that there is Granger causality from the MSTL to the realized volatility in most Asian and European markets. Nevertheless, there is no evidence for the North and Latin American markets. This finding suggests that the global correlation network behind the MSTL contains useful information that helps to predict the realized volatility of stock markets. Another relevant result relates to the predictive power of the PMFGL: there is evidence that, compared to the MSTL, its ability is more limited. One explanation could be that this network measure captures more information from the entire asset correlation network than the MSTL does, which appears to be counterproductive to its volatility predictive power. These results are robust in out-of-sample tests against benchmark models with six lags, but the effect disappears when we add the variation of the VIX lagged one month. This is preliminary evidence that the MSTL is an efficient indicator of global stock market volatility information, with predictive power similar to that of the VIX. However, we think the MSTL has advantages over the VIX. First, the MSTL considers information from all stock markets, whereas the VIX is elaborated only from the North American stock market. Second, the MSTL is calculated from realized correlations, whereas the VIX is estimated from expected volatility, which is more sensitive to market sentiment.
The main conclusion of this paper is that the global correlation connections add useful information that helps to predict the realized volatility in a relevant segment of global stock markets. These results imply that policymakers and practitioners could improve their estimations of future volatility in financial markets and, consequently, improve their forecasting and decision-making regarding asset pricing and risk management. From an economic policy point of view, this work could help policymakers to improve financial stability frameworks and design models that consider the underlying structure of the global network of stock indices. Finally, a possible extension of this work is the development of new methods that deepen the study of the connection between correlations’ assets networks and the influence of volatility gauges as the VIX.
This paper is organized as follows. In Section 2, we present the possibilities of expanding the realized volatility forecasting methodologies. In Section 3, we describe the methodology and the data used in the study. Section 4 shows the results of the in-sample, out-of-sample, and robustness tests. Section 5 concludes and provides some future research extensions.
2. Realized Volatility Forecasting Methodologies
Financial crises have attracted considerable attention in the financial networks literature. During the financial crisis of 2008-2009, the synchronization of returns, defined as the tendency of stock markets to display significant comovements , had a negative impact on the contribution of diversification to risk minimization. This high interconnectedness risk (network centrality measures quantify interconnectedness risk; the network is built from some measure of dependence between financial assets, e.g., correlations) is an element that becomes a contagion channel for financial shocks in times of crisis.
Evidence indicates that in high-volatility environments, such as that of financial crises, the network topology of equities markets changes, and the correlation among financial assets rises, diminishing in consequence, the effectiveness of diversification as a risk management tool. This issue is critical for strategies applied in portfolio management, where tactical and strategic asset allocation decisions are based on modeling the correlations of returns of financial assets [22, 23, 31, 32].
Nevertheless, the returns of financial assets, especially stocks, are particularly difficult to predict (see  for a review); however, the volatility of their returns seems relatively easier to forecast. The stylized fact about volatility is that it exhibits slowly decaying persistence. In this sense, the growing literature focused on modeling and forecasting financial volatility is not surprising, given its implications for asset pricing, portfolio management, and risk management.
One of the main problems of volatility measures is that the conditional variance is a “latent” variable. Therefore, it is not directly observable (see  for a discussion). There is a wide range of models to estimate this latent variable, such as autoregressive conditional heteroskedasticity (ARCH or GARCH type models) and stochastic volatility (SV) models. However, as pointed out in [35, 36], these models tend to fail to correctly accommodate some stylized facts regarding financial time series such as high excess kurtosis.
A novel and increasingly popular method is the quantification of realized volatility. The main advantage of this approach is that ex-post volatility becomes observable rather than being treated as a latent variable. In this sense, it becomes straightforward to evaluate the out-of-sample forecasting accuracy of different models when predicting realized volatility, as it can be modeled directly (see [37, 38] for a discussion).
While our out-of-sample forecasting approach using financial networks is somewhat novel, there are recent attempts to establish relationships between local market volatilities and international financial interlinkages. For instance, Bouri et al.  examine the predictive ability of commodity and major developed stock markets in forecasting the implied volatility (IV) of each of the individual BRICS (Brazil, Russia, India, China, and South Africa) stock markets. Using a Bayesian graphical SVAR approach (BGSVAR, see ), they find some evidence of Granger causality, mainly from the global stock markets’ (and some individual markets’) IV to the BRICS IV (notably, they find that the commodity market is important exclusively in South Africa. One possible explanation for this result is the strong relationship between major exporting economies and global commodity prices, extensively reported in [41–45]). This result is somewhat consistent with previous evidence relating global factors and BRICS stock markets .
In related work, Ji et al.  model a dynamic network for the IV transmission among US equities, commodities, and BRICS equities. In general, they show that the integration structure of the information transmission network is somewhat unstable, with changes over time. Their results suggest that the impact of the analyzed events is heterogeneous, e.g., some events have an impact exclusively on the IV of local markets, but others impact global volatility.
In the same line, Ji et al.  use a directed acyclic graph to study the contemporaneous and lagged relationships between bitcoins and other financial assets as commodities, stock markets, and fixed income indices. Notably, they find little evidence of contemporaneous relations between bitcoins and other financial assets, although they find some evidence of predictability in the bear market states of bitcoins.
With respect to market volatility linkages, Aggarwal and Raja  study the cointegration among the stock markets of the BRIC economies. Additionally, they examine IV transmission between the Indian IV index and three international indices; in particular, they study how shocks to the IV of one market may affect other markets’ volatility. Similarly, Ewing et al.  study the effects of the NAFTA agreement on the volatility transmissions of each market. Other empirical papers exploring the linkages among financial markets include [51, 52] for NAFTA economies,  for linkages between the US and European markets, and  for Latin American markets.
Finally, Hussain Shahzad et al.  study spillovers in the Eurozone credit market sector. Using network theory and daily data on 14 sector-level credit default swaps (CDS) indices in the Eurozone, they identify the main sectors transmitting (and receiving) spillovers during regular periods and crisis. In particular, they find that many CDS sectors became strongly interconnected in crisis periods, and this linkage remains for some later periods, suggesting some evidence of contagion.
3.1. The Minimum Spanning Tree Length (MSTL)
We follow the standard procedure to obtain the return correlations and dynamic asset trees based on market price indexes [18, 56]. Let $P_i(t)$ denote the closing price of index $i$ at date $t$. The return of index $i$ is given by $r_i(t) = \ln P_i(t) - \ln P_i(t-1)$, for a consecutive sequence of trading days. For each index $i$, daily returns are calculated within a time window of 1 month. Let $r_i^t$ be the return vector of index $i$ in month $t$; then

$$\rho_{ij}^t = \frac{\langle r_i r_j \rangle - \langle r_i \rangle \langle r_j \rangle}{\sqrt{(\langle r_i^2 \rangle - \langle r_i \rangle^2)(\langle r_j^2 \rangle - \langle r_j \rangle^2)}},$$

where $\rho_{ij}^t$ is the correlation coefficient between the indices $i$ and $j$ and $\langle \cdot \rangle$ indicates the average over all the trading days of month $t$. In this way, we obtain a symmetric $N \times N$ matrix $C^t$ of correlations between market indexes ($N$ is the number of financial indices) with values $-1 \le \rho_{ij}^t \le 1$.
Then, the correlations of $C^t$ are converted to distances $d_{ij}^t = \sqrt{2(1 - \rho_{ij}^t)}$, where $d_{ij}^t$ represents the distance between the market indices $i$ and $j$. Thus, a correlation $\rho_{ij}^t = -1$ indicates a maximum distance of $d_{ij}^t = 2$, whilst $\rho_{ij}^t = 1$ indicates a minimum distance of $d_{ij}^t = 0$.
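As an illustration, the correlation-to-distance mapping can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' code; the function name `correlation_to_distance` is our own, and the clipping of correlations to [−1, 1] is an assumption added to guard against floating-point round-off:

```python
import numpy as np

def correlation_to_distance(returns):
    """Map a window of daily returns (T days x N indices) to the
    distance matrix d_ij = sqrt(2 * (1 - rho_ij))."""
    rho = np.corrcoef(returns, rowvar=False)  # N x N Pearson correlations
    rho = np.clip(rho, -1.0, 1.0)             # guard against round-off outside [-1, 1]
    d = np.sqrt(2.0 * (1.0 - rho))            # rho = 1 -> d = 0, rho = -1 -> d = 2
    np.fill_diagonal(d, 0.0)
    return d
```

The result is a symmetric matrix with zero diagonal and entries in [0, 2], matching the bounds above.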
The minimum spanning tree (hereafter, MST) is a tree graph that connects all the indexes through edges, avoiding loops and cliques, such that the total path length connecting all the nodes is minimal. The MST is constructed using Prim's algorithm . In this way, the MST reduces the information space of the entire network, from a complete graph with $N(N-1)/2$ edges connecting all nodes to a tree with $N-1$ edges.
The sum of the edges of the resulting tree, calculated for each month, forms a time series. We define the normalized length of the MST (MSTL) as follows:

$$\mathrm{MSTL}_t = \frac{1}{N-1} \sum_{d_{ij}^t \in T^t} d_{ij}^t,$$

where $T^t$ is the set of edges of the MST in month $t$.
So, for every month, we have an MSTL. The variation in the MSTL is calculated as $\mathrm{VMSTL}_t = \ln(\mathrm{MSTL}_t) - \ln(\mathrm{MSTL}_{t-1})$, which allows us to work with a stationary time series.
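The MST construction and its normalized length can likewise be sketched. The following is an illustrative pure-NumPy implementation of Prim's algorithm on a dense distance matrix; the helper names `prim_mst` and `mstl` are ours, not the paper's code:

```python
import numpy as np

def prim_mst(d):
    """Prim's algorithm on a dense symmetric distance matrix with zero
    diagonal; returns the N-1 MST edges as (i, j, weight) tuples."""
    n = d.shape[0]
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True           # grow the tree from node 0
    best = d[0].copy()          # cheapest known link from the tree to each node
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        # pick the closest node not yet in the tree
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((int(parent[j]), j, float(d[parent[j], j])))
        in_tree[j] = True
        closer = d[j] < best    # nodes now reached more cheaply via j
        parent[closer] = j
        best = np.minimum(best, d[j])
    return edges

def mstl(d):
    """Normalized MST length: the mean weight of the N-1 tree edges."""
    return sum(w for _, _, w in prim_mst(d)) / (d.shape[0] - 1)
```

Given the sequence of monthly distance matrices, the variation series would then be `np.log(mstl(d_t)) - np.log(mstl(d_prev))` for consecutive months.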
3.2. The Planar Maximally Filtered Graph Length (PMFGL)
The planar maximally filtered graph (PMFG) also filters the complete graph based on the distance matrix, keeping only the main representative links by varying the genus of the graph [58, 59]. In this case, the PMFG retains a bit more information: the MST keeps $N-1$ edges, while the PMFG keeps $3(N-2)$ edges, compared to the $N(N-1)/2$ edges of the complete graph. In addition, the PMFG contains the MST.
The length of the PMFG (hereafter, PMFGL) is simply defined as the sum of all distances over the edges retained in the PMFG. Since the PMFG retains a greater number of edges (thus a greater number of correlations), it is possible that this network better expresses the level of synchronization in the market. For this reason, it is included in the core models.
Since the PMFG supports cycles in the network and may include negatively correlated stocks, the PMFG length will always be greater than that of the MST. Precisely because of this feature of including more information, it is interesting to be able to compare the models that explain the risk of interconnectivity as a measure of robustness. It is worth mentioning that we do not calculate the PMFG length for regions. The PMFG includes the MST and the edges used to join the nodes in the PMFG are of minimum distance; therefore, the length of the regional PMFG is the same as the length of the MST.
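For illustration, the standard greedy PMFG construction (visit candidate edges in ascending distance, keeping each only while the graph stays planar, until $3(N-2)$ edges are reached) can be sketched with `networkx`'s planarity check. This is a sketch under our own naming, assuming `networkx` is available; it is not the authors' implementation:

```python
import itertools

import networkx as nx
import numpy as np

def pmfg(d):
    """Greedy PMFG filter: insert edges in ascending distance, keep each
    one only if the graph stays planar, and stop at 3*(N-2) edges."""
    n = d.shape[0]
    ranked = sorted((d[i, j], i, j) for i, j in itertools.combinations(range(n), 2))
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for w, i, j in ranked:
        g.add_edge(i, j, weight=w)
        if not nx.check_planarity(g)[0]:
            g.remove_edge(i, j)         # adding this edge would break planarity
        if g.number_of_edges() == 3 * (n - 2):
            break
    return g
```

The PMFGL is then simply the total edge weight, e.g., `g.size(weight="weight")`.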
3.3. Realized Variance
We measure volatility as the monthly variance of each stock market using the realized variance estimator :

$$RV_{i,t} = \sum_{j=1}^{M_t} r_{i,j,t}^2,$$

where $r_{i,j,t}$ is the daily return on day $j$ of month $t$ for the market index $i$ and $M_t$ is the number of trading days in month $t$. The realized volatility is our dependent variable. We used daily data provided by Bloomberg from July 2001 to September 2019 for a total of 26 market indexes in America, Europe, Asia, and Oceania. These market indices are part of the benchmarks published by Bloomberg for each stock market at the country and region level. We included the CBOE VIX in our robustness out-of-sample test following , which incorporated the monthly variation of the VIX index as a control for the monthly volatility of each region.
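The realized variance computation reduces to summing squared daily returns within each calendar month. A minimal pandas sketch (function name ours, not the paper's code):

```python
import pandas as pd

def monthly_realized_variance(daily_returns):
    """Monthly realized variance per index: RV_{i,t} = sum over the trading
    days j of month t of the squared daily returns r_{i,j,t}^2.
    `daily_returns` is a DataFrame with a DatetimeIndex, one column per index."""
    sq = daily_returns ** 2
    return sq.groupby([sq.index.year, sq.index.month]).sum()
```

Grouping by (year, month) keeps the sketch robust across pandas versions; the resulting frame has one row per month, one column per market index.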
3.4. Forecasting Model and Evaluation
We use two types of forecasting models to evaluate the predictive power of the MSTL and PMFGL. First, we call “core models” the forecasting models for our in-sample and out-of-sample tests that include the natural logarithm variation of the MSTL (hereafter, VMSTL) or the natural logarithm variation of the PMFGL (hereafter, VPMFGL) (see Table 1 panels A and B). Second, for our out-of-sample tests, we call “benchmark models” the forecasting models inspired by a vast literature showing that AR(p) models are usually difficult benchmarks to beat when forecasting realized volatility [4, 8]. In this sense, we use a heterogeneous autoregressive (HAR) model as our main benchmark (see Table 1 panel C).
Source: authors’ elaboration.
In the table, $RV_{i,t}$ is the realized variance in month $t$ for the market index $i$, $\mathrm{VMSTL}_{t-1}$ is the variation of the global minimum spanning tree length in month $t-1$, $\mathrm{VPMFGL}_{t-1}$ is the variation of the global planar maximally filtered graph length in month $t-1$, $RV_{i,t-1}$, $RV_{i,t-2}$, and $RV_{i,t-3}$ are the first, second, and third lags of the realized volatility, respectively, for the market index $i$, and $\varepsilon_{i,t}$ is the disturbance error.
Our main goal in this paper is to test the existence of Granger causality from the structure of the network to the realized volatility. For this, we focus on testing the null hypothesis that the coefficients on $\mathrm{VMSTL}_{t-1}$ and $\mathrm{VPMFGL}_{t-1}$ are zero; this means that we are comparing our core models to benchmark models (see Table 1). Our null hypothesis, both in sample and out of sample, posits that the VMSTL and VPMFGL have no role in predicting the market index realized volatility. We test these hypotheses both in sample and out of sample, focusing on one-step-ahead forecasts only and leaving the analysis of multistep-ahead forecasts as an extension for future research.
In-sample evaluations are carried out using the t-statistic associated with the coefficient of the minimum spanning tree length. For covariance stationary processes, the central limit theorem requires a proper estimation of the long-run variance; in this sense, we use HAC standard errors as suggested in [61, 62] (Newey and West  propose a Bartlett kernel to ensure a positive definite variance matrix. Additionally, Newey and West  propose an automatic lag selection method for the covariance matrix estimation). In-sample estimates, however, are usually criticized because they are relatively different from a real-time forecasting exercise and also because they are prone to data-mining-induced overfitting. To mitigate these shortcomings, we also consider out-of-sample analysis.
For out-of-sample evaluations, as we are working in an environment with nested models, we use the ENCNEW test proposed in  (other tests for nested models such as [64–66] were also considered, with similar messages, and they are available upon request). Again, for out-of-sample analysis, we consider the null hypothesis that the VMSTL (or VPMFGL) coefficient is zero. In the context of linear models estimated by OLS, Clark and McCracken  derive the correct asymptotic distribution of this test. While the distribution is not standard, critical values for one-step-ahead forecasts are available in their paper. Under general conditions, the asymptotic distribution of the ENCNEW test is a functional of Brownian motions depending on the number of excess parameters of the nesting model, which is 1 in our specifications, and on the parameter π defined as the limit of the ratio P/R, where P is the number of one-step-ahead forecasts and R is the size of the first expanding window used in the out-of-sample analysis (π is defined as the limit of P/R when P, R ⟶ ∞. Clark and McCracken  show that the asymptotic distribution of the ENCNEW depends, among other parameters, on π. In this sense, π = 0.4 can be interpreted as the estimation window being approximately 2.5 times the prediction window’s length). The asymptotic distribution of the test also varies with the scheme used to update the parameter estimates: rolling, recursive, or fixed. Additionally, we emphasize that this test is one-sided; in other words, rejection of the null occurs only when the statistic is greater than a critical value located at the right tail of the distribution (see [67, 68] for excellent reviews and further details about the implementation of out-of-sample tests of predictive ability in nested model environments).
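The ENCNEW statistic itself is simple to compute from the two sets of one-step-ahead forecast errors. The sketch below follows the Clark-McCracken formula ENC-NEW = P · mean(e₁² − e₁e₂) / mean(e₂²), where e₁ are the errors of the nested benchmark and e₂ those of the core model; the function name is ours:

```python
import numpy as np

def enc_new(e_restricted, e_unrestricted):
    """ENC-NEW encompassing statistic of Clark and McCracken:
    P * mean(e1^2 - e1*e2) / mean(e2^2), with e1 the benchmark's
    one-step-ahead forecast errors and e2 the core model's."""
    e1 = np.asarray(e_restricted, dtype=float)
    e2 = np.asarray(e_unrestricted, dtype=float)
    P = e1.size
    return P * np.mean(e1 ** 2 - e1 * e2) / np.mean(e2 ** 2)
```

The resulting value is then compared with the one-sided critical values tabulated by Clark and McCracken for the relevant π and number of excess parameters.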
For the in-sample analysis, we estimate our models with all the available observations. For the out-of-sample analysis, we split the sample in three different ways, as suggested in . For each split, we consider two windows: an initial estimation window of size R and an evaluation window of size P, such that T = P + R, where T is the total number of observations. First, we use the first half of the sample for the initial estimations and the second half to make our predictions. Second, we use approximately the first third of observations for initial estimations and two-thirds for evaluation. Third, we use approximately 70% of the initial observations for estimation and 30% for evaluation. Finally, we update our parameters using recursive windows, although results with rolling windows are very similar. To save space, we only report the results for the third split (π ≈ 0.4); however, the message of predictability is very similar across all splits, and the results are available upon request.
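The recursive (expanding-window) updating scheme can be sketched as follows: re-estimate on observations 1..t and forecast t+1, for every t from R to T−1. This is an illustrative OLS loop under our own naming, not the authors' code:

```python
import numpy as np

def recursive_one_step_forecasts(y, x, R):
    """Expanding-window one-step-ahead OLS forecasts: for each t >= R,
    fit y = a + b*x on observations [0, t) and predict y[t]; returns
    the sequence of forecast errors y[t] - yhat[t]."""
    T = len(y)
    errors = []
    for t in range(R, T):
        Xt = np.column_stack([np.ones(t), x[:t]])      # regressors up to t-1
        beta, *_ = np.linalg.lstsq(Xt, y[:t], rcond=None)
        yhat = np.concatenate(([1.0], np.atleast_1d(x[t]))) @ beta
        errors.append(y[t] - yhat)
    return np.array(errors)
```

Running this loop once for the benchmark and once for the core specification yields the two error series that feed the out-of-sample comparison.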
3.5. The Data
We used daily data provided by Bloomberg from July 2001 to September 2019, totaling 223 months, for a total of 26 market indexes in North America, Latin America, Europe, Asia, and Oceania (see Table 2 for details). These indices belong to regional stock indices published by Bloomberg for each stock market at the country and region level. As mentioned above, we included the CBOE VIX index in our robustness section as part of an out-of-sample test following Lavin et al. , who incorporated the monthly variation of the VIX index as a control for the monthly volatility of each regional market.
Source: authors’ elaboration.
4. Empirical Results
In this section, we first report the in-sample estimation results of our core models (Table 1 panel A) for the 26 market indices. Second, we evaluate forecasting performance against our benchmark models (Table 1 panel C). Finally, we check the robustness of the models by adding a lag of the variation of the VIX (VVIX) to the out-of-sample test. We calculate the ENCNEW out-of-sample test of Clark and McCracken .
Figure 1 shows a representation of the financial indices’ MSTs in three different periods: before, during, and after the financial crisis of 2008. The tendency of assets to remain close to others from the same geographical region is a property that persists through time. This phenomenon produces clusters based on geographical location. For example, Asian market indices tend to cluster together; the same is true of European indices. Only the two Oceania indices (Australia and New Zealand) appear to attach regardless of geographic location. However, the lengths of the MSTs differ: the MST length was 10.97 in the precrisis period, decreased to 10.23 during the crisis, and then increased again to 11.68 in the postcrisis period. In times of financial crisis, markets synchronize, increasing the intercorrelations between financial assets. Consequently, the distances represented on each edge of the network shorten .
4.1. In-Sample Analysis
Tables 3–5 report estimates of the core models in Table 1 panel A. In all of Tables 3–5, we consider monthly frequencies and use HAC standard errors according to [61, 62]. Generally speaking, the VMSTL coefficients are more significant than the VPMFGL coefficients. This evidence suggests that the MSTL is a more efficient measure, because the additional information included in the PMFGL does not translate into higher statistical significance. When comparing the significance of the core models (1) and (2) presented in Table 1 panel A, we observe that the MSTL has greater predictive power, showing statistical significance in 11 of the 26 markets, versus the PMFGL, which shows statistical significance in 7 of the 26 markets, consistent with the idea that the MSTL is more efficient because it does not consider correlations of lesser magnitude.
Note: AR stands for lagged monthly realized volatility; AR (−1) and AR (−2) represent the first and second lags of realized volatility. SPX, CCMP, SPTX, MEXBOL, and IGBVL denote the one-month realized volatility of returns of the respective indices. The estimations from the first equation in Table 1 are presented here. , , and . Source: authors’ elaboration.
Note: AR stands for lagged monthly realized volatility; AR (−1) and AR (−2) represent the first and second lags of realized volatility. SPX, CCMP, SPTX, MEXBOL, and IGBVL denote the one-month realized volatility of returns of the respective indices. The estimations from the first equation in Table 1 are presented here. , , and . Source: authors’ elaboration.
Note: AR stands for lagged monthly realized volatility; AR (−1) and AR (−2) represent the first and second lags of realized volatility. SPX, CCMP, SPTX, MEXBOL, and IGBVL denote the one-month realized volatility of returns of the respective indices. The estimations from the first equation in Table 1 are presented here. , , and . Source: authors’ elaboration.
However, the predictive power of the VMSTL varies by geographic area. Specifically, the VMSTL coefficient is significant at least at the 10% level in most European and Asian stock markets (see Tables 4 and 5), but we did not find significance for the VMSTL coefficient in the models representing the American equity markets (see Table 3). These results imply the existence of Granger causality between the dynamics of the network of correlations formed by the global stock markets and the volatility of the European and Asian stock markets.
Regarding Europe, Table 4 shows that the volatility of the stock markets in France (β = −0.672, ), Spain (β = −0.700, ), Italy (β = −0.838, ), and Sweden (β = −0.822, ) has the strongest relationship with the one-month-lagged global correlation network. Regarding Asia, Table 5 indicates that the realized volatility of the markets of Taiwan (β = −1.083, ), Korea (β = −0.829, ), and Hong Kong (β = −0.672, ) shows the relationship of greatest magnitude and statistical significance with the one-month-lagged global correlation network. Regarding Oceania, Table 5 shows that only the volatility of the New Zealand stock market (β = −0.623, ) shows statistical significance with the one-month-lagged global correlation network.
Several additional features are worth noticing from our in-sample core models. First, the constant term and the coefficient of the first volatility lag are statistically significant in all markets (see Tables 3–5), consistent with the strong autocorrelation of the volatility of the equity indices. Second, the coefficients of the three lags are positive in all markets, consistent with the persistence of financial markets: an increase in volatility is an indicator of an increase in volatility in the following period. This relationship holds for the first lag in all markets except the UK, but statistical significance decreases for the second and third lags. Finally, the adjusted coefficients of determination are between 27.6% and 59.2%, the highest being the KOSPI in Korea and the lowest the SPIPSA in Chile.
Tables 6–8 show the ENCNEW test results  in the out-of-sample exercise for the Americas, Europe, and Asia-Oceania. These tables focus on the core models described in Table 1 panel B, and the results correspond to the statistical difference between the core models presented in Table 1 panel B (with VMSTL and VPMFGL) and the benchmark models presented in Table 1 panel C when the ratio of forecasts to the initial estimation window is P/R = 0.4.
10%, 5%, and 1% critical values are 0.685, 1.079, and 2.098, respectively, when there is only one excess parameter. P represents the number of one-step-ahead forecasts and R the sample size of the first estimation window. The AR(3)-VVIX(1) benchmark corresponds to model 1. , , and . Source: authors’ elaboration.
10%, 5%, and 1% critical values are 0.685, 1.079, and 2.098, respectively, when there is only one excess parameter. P represents the number of one-step-ahead forecasts and R the sample size of the first estimation window. The AR(3)-VVIX(1) benchmark corresponds to model 1. , , and . Source: authors’ elaboration.