Complexity

Volume 2018, Article ID 2834680, 13 pages

https://doi.org/10.1155/2018/2834680

## Information Feedback in Temporal Networks as a Predictor of Market Crashes

^{1}Laboratory for Financial and Risk Analytics, Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia
^{2}Woodrow Wilson School of Public and International Affairs and Department of Economics, Princeton University, Princeton, NJ 08544, USA
^{3}CERGE-EI, Politických vězňů 7, P.O. Box 882, 111 21 Prague, Czech Republic
^{4}Center for Polymer Studies and Department of Physics, Boston University, Boston, MA 02215, USA
^{5}Faculty of Civil Engineering, University of Rijeka, 51000 Rijeka, Croatia
^{6}Zagreb School of Economics and Management, 10 000 Zagreb, Croatia
^{7}Luxembourg School of Business, Luxembourg, Luxembourg

Correspondence should be addressed to Dejan Kovač; dkovac@princeton.edu

Received 2 March 2018; Revised 26 June 2018; Accepted 26 July 2018; Published 17 September 2018

Academic Editor: Maricel Agop

Copyright © 2018 Stjepan Begušić et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In complex systems, statistical dependencies between individual components are often considered one of the key mechanisms which drive the system dynamics observed on a macroscopic level. In this paper, we study cross-sectional time-lagged dependencies in financial markets, quantified by nonparametric measures from information theory, and estimate directed temporal dependency networks in financial markets. We examine the emergence of strongly connected feedback components in the estimated networks, and hypothesize that the existence of information feedback in financial networks induces strong spatiotemporal spillover effects and thus indicates systemic risk. We obtain empirical results by applying our methodology to stock market and real estate data, and demonstrate that the estimated networks exhibit strongly connected components around periods of high volatility in the markets. To further study this phenomenon, we construct a systemic risk indicator based on the proposed approach, and show that it can be used to predict future market distress. Results from both the stock market and real estate data suggest that our approach can be useful in obtaining early-warning signals for crashes in financial markets.

#### 1. Introduction

Connectivity patterns in complex systems and their dynamic properties have been the focus of extensive research in physical, biological, neurological, and social systems [1–4]. In many studies, the identification of strong dependencies between interconnected components has been linked to systemic risk, the risk associated with the collapse of the entire system [5–8]. This line of research is especially prominent in modeling risk in financial systems [9, 10], where dependent components within the system (e.g., banks, companies, and financial assets) are more likely to fail simultaneously and to influence other connected components (a phenomenon also known as the *spillover effect*), thus inducing a potential cascade of failures in the entire system [11]. However, due to the dynamic nature of financial systems and their often complex dependency relationships, the identification and quantification of these effects are generally not trivial [12, 13]. In addition, financial variables and time series (e.g., prices, returns, and volumes) are known to exhibit strongly non-Gaussian characteristics, heavy tails, and long-range dependence, which calls into question standard parametric approaches [14].

The complexity of economic and financial systems has been the focus of research from various perspectives, employing Ising models [15, 16], agent-based models [17–19], and game theory [20]. The instabilities therein, observed as crises and crashes [21, 22], are especially elusive and hard to model and predict [23, 24]. Generally, connectivity patterns in financial markets are modelled by estimating and analyzing graphs of financial assets [25], which have been found to capture the structural properties of these complex systems, such as the hierarchical structures captured by spanning trees in financial markets [26, 27]. Empirical research on connectedness in financial systems and its relation to systemic risk has intensified in the aftermath of the 2008 subprime crisis, focusing either on contagion within the banking sector or on comovement in financial markets [28]. Longin and Solnik [29] first provided formal evidence of increased correlations during bear markets, and recent studies report that volatility cross-correlations exhibit long memory, meaning that once high volatility (risk) spreads across the entire market, it can persist for a long time [30]. The conditional value at risk (CoVaR) of Adrian and Brunnermeier and the systemic expected shortfall (SES) of Acharya et al. [31] quantify the potential distress of financial institutions conditional on other institutions' poor performance, thus measuring the spillover of losses within the financial sector. Kritzman and Li [32] quantify the divergence of the cross section of financial returns from their historical behavior, expressed as the Mahalanobis distance, and find that it can be used similarly to implied volatility in sets of assets without liquid option markets, while accounting for their interactions. To measure the total contribution of assets to the systemic risk of the entire market, Kritzman et al. [33] propose the absorption ratio, based on principal component analysis of the cross section of financial returns.
A more detailed look into the connectivity patterns in financial systems was proposed by Billio et al. [34], who analyzed the dynamic causality patterns in networks of hedge funds, brokers, insurance companies, and banks. They report a highly asymmetric relationship during the subprime crisis of 2008 and find that the proportion of significant causal relationships in the network increases with the financial distress and crises in the market. Recently, Curme et al. [35] introduced a numerical method for validating time-lagged correlation networks of assets and found a growing number of statistically validated links and the rise of instability in financial networks. Although the state-of-the-art approaches employ correlation-based measures and Granger causality tests for inference of dependency relationships in financial networks, the assumptions of linearity and Gaussianity of these methods are often violated with financial data. In addition, the conclusions drawn from such analyses require further long-range historical backtests of the relationship between specific network patterns and systemic risk in the markets which would include more systemic events than the single 2008 subprime crisis.

In this paper, we investigate the dynamic relationships within a network of financial assets, their evolution through time, and their relation to systemic risk in the market. We take a nonparametric approach to identify and validate time-lagged cross-sectional links between pairs of assets in a financial market. Based on the information-theoretic concept of entropy, which quantifies uncertainty, we measure the dependence between and within time series as a reduction in uncertainty, as quantified by Schreiber's transfer entropy [36]. The concept of entropy has been used to measure sequential irregularity in many time series applications, with notable results in finance. Moreover, transfer entropy-based methods yield state-of-the-art results in detecting information flows in computational neuroscience, bioinformatics, and financial economics [37]. Specific financial applications include the estimation of serial irregularities and risk in time series of returns and the inference of global dependence networks of financial indices [20, 38–42]. We expand on previous results by investigating the evolution of dynamic causal networks (sometimes referred to as information flow networks) through time and analyzing their association with systemic risk in the market. Moreover, we focus on the emergence of information feedback within these networks and hypothesize that strongly connected feedback components may indicate future distress in the system. The concept of feedback in financial systems has previously been linked to systemic risk in various studies [22, 43]; most notably, the DebtRank methodology proposed by Battiston et al. [44] uses interbank lending networks to assess the risk within the financial sector. Our approach moves beyond interbank lending networks and relies on time series of returns for any set of financial assets to infer directed dependency networks and study the information feedback within.
In addition, previous approaches often depend on fundamental firm-level financial data, available only at quarterly intervals and often heterogeneous in nature, whereas in this paper we estimate dependency networks using asset prices from financial markets, allowing for wider areas of application.

Based on the proposed methodology for estimating dependency networks of financial assets from time series of asset returns, we examine the general levels of predictability in the market and the topologies of the estimated dependency networks. Furthermore, we study the emergence of information feedback in the networks and introduce a network-based systemic risk indicator to test our hypothesis. We apply the proposed approach to 9 U.S. sector indices over the period from 1999 to 2016 and to a selection of S&P 500 constituent company stocks from 1980 to 2016. In addition, we consider the U.S. House Price Index [45] data and apply our approach to the real estate market as well. Our results suggest that the dynamic dependency networks exhibit strongly connected feedback components, particularly around periods of financial crises. The proposed systemic risk indicator is shown to yield predictive power for future market distress, both for the two stock market datasets (sector indices and individual stocks) and for the real estate market. These results demonstrate the validity of our approach and indicate that the proposed methodology can be used to construct early-warning signals for crashes in financial markets.

#### 2. Methods and Data

##### 2.1. Information Theoretic Causality Measures

In thermodynamics and statistical mechanics, *entropy* is a measure of disorder in a system. In information theory, it quantifies the uncertainty of a process, based on the *Shannon information content* of an event $x$: $h(x) = -\log_2 p(x)$ [46]. For a sequence of events $x \in \mathcal{X}$, entropy is defined as the average information:

$$H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x). \tag{1}$$

In analogy, the joint entropy $H(X, Y)$ for two variables $X$ and $Y$ is defined with the joint probabilities $p(x, y)$ in the sum. The entropy is maximal when all events are equally probable, meaning their distribution is uniform, and minimal (equal to zero) for deterministic processes [47]. In analogy with (1), the conditional entropy measures the uncertainty left in $Y$ after accounting for the context in $X$:

$$H(Y \mid X) = -\sum_{x, y} p(x, y) \log_2 p(y \mid x). \tag{2}$$
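As a concrete illustration of (1) and (2), these entropies can be estimated from short discrete sequences with simple plug-in (count-based) estimators. The sketch below is illustrative only; the function names are our own and this is not necessarily the estimator used in the paper.

```python
import numpy as np
from collections import Counter

def entropy(seq):
    """Plug-in Shannon entropy H(X) = -sum p(x) log2 p(x), in bits."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y):
    """Joint entropy H(X, Y) estimated from paired observations."""
    return entropy(list(zip(x, y)))

def conditional_entropy(y, x):
    """Conditional entropy H(Y | X) = H(X, Y) - H(X)."""
    return joint_entropy(x, y) - entropy(x)

# A balanced binary sequence has maximal entropy (1 bit);
# a constant (deterministic) process has zero entropy.
print(entropy([0, 1, 0, 1]))   # 1.0
print(entropy([1, 1, 1, 1]))   # zero for a deterministic process
```

When $Y$ is fully determined by $X$ (e.g., identical sequences), `conditional_entropy(y, x)` is zero, matching the minimal case described above.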

To measure causality relationships between time series, the most common approach is the Wiener–Granger concept of causality [48]: if the prediction of a time series $Y$ given its own past is improved by using the past values of another time series $X$, then $X$ causes $Y$ [49]. Although the most popular approach to Granger causality is based on the estimation of VAR models, Schreiber [36] proposed the notion of *transfer entropy* as a nonparametric version of Granger causality. In the information-theoretic sense, transfer entropy is defined as the amount of uncertainty in $y_t$ given its own past that is reduced by including the past of $X$:

$$T_{X \to Y} = H(y_t \mid y_{t-1}) - H(y_t \mid y_{t-1}, x_{t-1}), \tag{3}$$

where the uncertainty is quantified using the concept of conditional entropy $H(Y \mid X)$: the uncertainty left in $Y$ after accounting for the context $X$ [37]. Note that owing to the lack of autocorrelation in return time series, we may restrict the past of $Y$ to $k = 1$ and the past of $X$ to $l = 1$ steps and thus reduce the dimensionality of the estimation procedures.

By incorporating Shannon's entropy given in (2), the transfer entropy formula from (3) can be specified as

$$T_{X \to Y} = \sum_{y_t, y_{t-1}, x_{t-1}} p(y_t, y_{t-1}, x_{t-1}) \log_2 \frac{p(y_t \mid y_{t-1}, x_{t-1})}{p(y_t \mid y_{t-1})}. \tag{4}$$

The above expression is also known as the Kullback–Leibler divergence between the distributions $p(y_t \mid y_{t-1}, x_{t-1})$ and $p(y_t \mid y_{t-1})$ [36]. If the distribution of $y_t$ given its own past were independent of $x_{t-1}$, the KL divergence in (4) would equal 0. On the other hand, if $x_{t-1}$ deterministically predicted $y_t$, the KL divergence would reach its maximum value, equal to the entropy $H(y_t \mid y_{t-1})$. In the Wiener–Granger causality sense, expression (4) measures the improvement in predicting $y_t$ when knowing $x_{t-1}$, conditional on the past of $Y$ itself. It has been shown that for Gaussian variables, transfer entropy is equivalent to Granger causality [50], which implies that the standard linear and VAR model-based methods are a special case of the proposed approach.
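A plug-in estimate of the transfer entropy in (3) and (4) follows from expanding each conditional entropy as a difference of joint entropies, $H(A \mid B) = H(A, B) - H(B)$. The sketch below assumes binary (discretized) series and one-step pasts, matching the restriction $k = l = 1$ above; the helper names are our own.

```python
import numpy as np
from collections import Counter

def H(*seqs):
    """Plug-in joint Shannon entropy (bits) of aligned discrete sequences."""
    counts = np.array(list(Counter(zip(*seqs)).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y):
    """One-lag plug-in transfer entropy T_{X->Y} in bits:
    H(y_t | y_{t-1}) - H(y_t | y_{t-1}, x_{t-1}),
    with each conditional entropy expanded via H(A | B) = H(A, B) - H(B)."""
    y_t, y_p, x_p = y[1:], y[:-1], x[:-1]
    return (H(y_t, y_p) - H(y_p)) - (H(y_t, y_p, x_p) - H(y_p, x_p))

# y copies x with a one-step lag, so x's past fully determines y_t
# and T_{X->Y} attains its maximum H(y_t | y_{t-1}):
x = [0, 1, 1, 0, 1, 0, 0, 1] * 20
y = [0] + x[:-1]
print(transfer_entropy(x, y))             # positive: information flows from X to Y
print(transfer_entropy([0] * len(y), y))  # 0.0: a constant source carries no information
```

The constant-source case illustrates the independence limit discussed above: conditioning on $x_{t-1}$ removes no additional uncertainty, so the estimate is exactly zero.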

Due to a positive bias in the estimates $\hat{T}_{X \to Y}$, Marschinski and Kantz [38] propose the *effective transfer entropy* for financial time series, calculated by subtracting the mean of the surrogate measurements $\langle T_{X_{\mathrm{shuffled}} \to Y} \rangle$, obtained using random permutations of the source time series $X$. In order to measure the fraction of the maximum possible value, we employ the concept of *normalized transfer entropy*, used in neurophysiology and computational neuroscience [51, 52]. It is defined as the effective transfer entropy divided by the entropy of $y_t$ given its own past, which is the theoretical maximum of $T_{X \to Y}$ (attained when $x_{t-1}$ deterministically predicts $y_t$):

$$NT_{X \to Y} = \frac{T_{X \to Y} - \langle T_{X_{\mathrm{shuffled}} \to Y} \rangle}{H(y_t \mid y_{t-1})}. \tag{5}$$

This measurement represents the fraction of the information in $y_t$ not explained by its own past which is explained by including the past of $X$.
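The bias correction and normalization in (5) can be sketched as follows, assuming a plug-in entropy estimator and a fixed number of source shuffles (the count of 20 permutations and the function names are our choices for illustration, not values prescribed by the text):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)

def H(*seqs):
    """Plug-in joint Shannon entropy (bits) of aligned discrete sequences."""
    counts = np.array(list(Counter(zip(*seqs)).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def te(x, y):
    """One-lag plug-in transfer entropy T_{X->Y} in bits."""
    y_t, y_p, x_p = y[1:], y[:-1], x[:-1]
    return (H(y_t, y_p) - H(y_p)) - (H(y_t, y_p, x_p) - H(y_p, x_p))

def normalized_effective_te(x, y, n_shuffles=20):
    """NT_{X->Y}: subtract the mean surrogate TE (shuffled source) and
    normalize by H(y_t | y_{t-1}), the theoretical maximum of T_{X->Y}."""
    surrogate = np.mean([te(rng.permutation(x).tolist(), y)
                         for _ in range(n_shuffles)])
    effective = te(x, y) - surrogate
    y_t, y_p = y[1:], y[:-1]
    return effective / (H(y_t, y_p) - H(y_p))

x = [0, 1, 1, 0, 1, 0, 0, 1] * 50
y = [0] + x[:-1]                       # y copies x with a one-step lag
print(normalized_effective_te(x, y))   # close to 1: near-maximal information flow
```

Shuffling the source destroys the temporal alignment while preserving its marginal distribution, so the surrogate mean estimates the finite-sample bias to be subtracted.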

To include the potential serial dependency of each time series on its own past, we also estimate the mutual information $I(y_t; y_{t-1}) = H(y_t) - H(y_t \mid y_{t-1})$. In the discrete case, the expression can be reformulated as follows:

$$I(y_t; y_{t-1}) = \sum_{y_t, y_{t-1}} p(y_t, y_{t-1}) \log_2 \frac{p(y_t, y_{t-1})}{p(y_t)\, p(y_{t-1})}. \tag{6}$$

The above expression again corresponds to the KL divergence [47] between the joint distribution $p(y_t, y_{t-1})$ and the product of the marginal distributions $p(y_t)\,p(y_{t-1})$. If the returns were independent, the joint distribution would equal the product of marginals and the KL divergence would be 0. On the other hand, if subsequent returns were functionally (deterministically) dependent, the KL divergence would reach its maximum value, equal to the entropy $H(y_t)$. In addition, for Gaussian distributions the mutual information is determined by the Pearson correlation coefficient $\rho$, as $I = -\frac{1}{2}\log_2(1 - \rho^2)$, again implying that the proposed approach is a generalization of correlation-based methods. In analogy with (5), we estimate the effective mutual information (where the surrogate mean $\langle I_{\mathrm{shuffled}} \rangle$ is estimated using shuffled time series) and normalize it by $H(y_t)$, its theoretical maximum.
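In the same plug-in fashion, the mutual information in (6) can be estimated via the identity $I(X; Y) = H(X) + H(Y) - H(X, Y)$. A small sketch (the function name is our own):

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) = H(X) + H(Y) - H(X, Y) in bits."""
    def H(*seqs):
        counts = np.array(list(Counter(zip(*seqs)).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    return H(x) + H(y) - H(x, y)

# Serial dependence of a sign series on its own one-step past:
r = [1, -1, 1, -1, 1, -1, 1, -1, 1]       # perfectly alternating signs
print(mutual_information(r[1:], r[:-1]))  # 1.0 bit: the past determines the present
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0: independent
```

The two printed cases correspond to the deterministic and independent limits discussed above: full dependence reaches the maximum $H(y_t)$ (here 1 bit), and independence gives 0.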

##### 2.2. Empirical Estimation

Evidently, the distributions $p(y_t, y_{t-1})$, $p(y_t, y_{t-1}, x_{t-1})$, $p(y_t)$, and $p(y_{t-1})$ need to be estimated, which is commonly not a simple task due to the scarcity of data. Various methods for discretizing asset returns for transfer entropy estimation have been utilized [40, 41], mainly based on binning. Specifically, due to the dynamic nature of financial systems, dependencies are estimated using time windows rather than the entire history, which makes the studied time series considerably short. Since a large number of bins may significantly deteriorate the estimation of multidimensional distributions, we reduce the number of outcomes by limiting returns to just two classes based on their sign: positive and negative. However, rather than simply taking the signs of returns and counting their occurrences in a given time window, we employ concepts from fuzzy set theory [53] and define a *sigmoid membership function* which encodes a realized return $r$ into the two classes (positive and negative) with memberships $\mu^+(r)$ and $\mu^-(r)$:
$$\mu^+(r) = \frac{1}{1 + e^{-\alpha r}}, \qquad \mu^-(r) = 1 - \mu^+(r) = \frac{1}{1 + e^{\alpha r}}, \tag{7}$$

where $\alpha$ is a parameter defining the steepness of the sigmoid function. Note that $\mu^+(r) + \mu^-(r) = 1$. Therefore, the sigmoid functions define the membership of $r$ in the positive and negative classes, depending on its magnitude: very positive returns will have much higher $\mu^+(r)$ (and therefore smaller $\mu^-(r)$), and vice versa, as shown in Figure 1.
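The sigmoid memberships are straightforward to compute. The sketch below uses an illustrative steepness $\alpha = 10$, which is our choice for demonstration and not a value prescribed by the text:

```python
import math

def memberships(r, alpha=10.0):
    """Fuzzy class memberships of a return r; by construction mu_pos + mu_neg = 1."""
    mu_pos = 1.0 / (1.0 + math.exp(-alpha * r))
    return mu_pos, 1.0 - mu_pos

print(memberships(0.0))    # (0.5, 0.5): a zero return is split evenly
print(memberships(0.5))    # roughly (0.993, 0.007): almost entirely positive class
print(memberships(-0.5))   # roughly (0.007, 0.993): almost entirely negative class
```

Larger $\alpha$ sharpens the split toward a hard sign-based classification, while smaller $\alpha$ lets small-magnitude returns contribute to both classes.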