Abstract

The spread of the COVID-19 pandemic has severely impacted all aspects of social and economic life, including the evolution of stock markets. Thus, we advance a methodological framework suitable for assessing the 2020 year-long shifts in markets’ statistical complexity, and we apply this framework to ten major international developed or emerging stock markets. Our research reveals that this crisis has considerably altered markets’ evolutionary patterns. The network description of markets’ multivocal transmission of complex responses changed in 2020, with European and Asian markets playing a pivotal role. Nevertheless, an important regional and time heterogeneity emerges. In addition, we find that the total number of worldwide confirmed COVID-19 cases plays a leading role in the changes in markets’ complexity.

1. Introduction

Since its initial detection in China in early December 2019, the COVID-19 outbreak has rapidly evolved into a global pandemic crisis. Its spread has disturbed virtually the entire societal life and has represented a heavy burden in terms of both human and economic losses.

Recent literature deals with the COVID-19 crisis and its impact on the overall economic activity, as well as on stock markets [1–8]. Other studies address the “safe havens” used by investors during such turbulent times, using a Morlet wavelet theoretical framework [9], and the spillover effect determined by the COVID-19 pandemic, using an ARDL model [10], a DCC-GARCH model [11], or the DY-spillover index [12]. In this context, we propose a twofold contribution to the literature. First, we consider a methodology for assessing the complex behaviour of stock markets. Starting from the approach proposed by Martin et al. [13], we consider a possible improvement aiming to fully retrieve the deterministic structure of financial markets’ series. Second, we apply this methodology to ten major international (developed and emerging) stock markets in order to evaluate the impact exercised by the COVID-19 outbreak on their complexity.

The motivation behind our endeavour can be summarised as follows: even if there is a rich literature dealing with the impact exercised by international financial crises on market mechanisms, as this impact is reflected in changes in various market features (such as price and return distributions, liquidity effects, volatility peaks, herd behaviour, or shifts in information efficiency), less attention is paid to the potential influence of such crises on market complexity and evolutionary patterns. Ultimately, financial markets are a striking example of “complexity in action” [14], and their evolution is the outcome of a vast number of trading agents’ decisions for coping with uncertainty in a rapidly changing environment. When major shocks disrupt the “business as usual” trading mechanisms, significant changes in the market trajectory may occur. Nevertheless, in the framework of complex evolutionary systems, a solid description of market reactions to various exogenous and endogenous shocks is still missing in the existing literature.

Hence, we propose the following underlying argument: COVID-19 represents the largest exogenous shock to markets in many decades. The uncertainty related to the economic environment quickly spreads worldwide as the pandemic crisis worsens. Such uncertainty affects not only the general public but also businesses and investors in different markets. Thus, one can expect various types of “overreactions” from traders as a response to the anxiety surrounding future economic prospects (for a similar argument, see [4]). Such a response by the stock markets to this crisis further translates into significant shifts in their evolutionary patterns. These shifts can be, inter alia, reflected by a level of complexity in prices’ formation mechanisms different from the one observed in prepandemic periods. Thus, an accurate description of stock markets in terms of their associated complexity should reveal such changes and capture their time dynamics. Our results support the existence during 2020 of a pattern shift for the considered financial markets. Therefore, shocks matter in explaining changes in markets’ evolutionary trajectories, and there is room for an explanatory conceptual background grounded in the complex systems theory.

The following section discusses a methodology used in describing evolutions in market complexity. Section 3 presents a set of international stock market data, while Section 4 reports the main results and provides an interpretation of these. The last section provides the conclusion.

2. Methodology

The main operational task consists of selecting an adequate method for complexity assessment. Such a task faces the recurring practical difficulty of distinguishing genuinely complex behaviours and isolating them from noisy evolutions. Moreover, the concept of “complexity” has an elusive nature at a theoretical level. In the words of Kantz et al. [15], “Thus complexity is characterised by the paradoxical situation of complicated dynamics of simple systems. In contrast to that, when the system itself is already complicated… it is obvious that it may support very complicated dynamics, but perhaps without the emergence of clear and characteristic patterns.” Hence, one should carefully choose a methodological framework that can discriminate between various possible cases and identify those whose evolutions are characterised by clear and persistent nontrivial patterns. We consider here the method proposed by Martin et al. [13].

2.1. The Measure of Complexity

The entropy of a random variable is defined as the average amount of “information” or “uncertainty” associated with that variable’s distribution. Suppose that a random variable X has the following cumulative distribution function:

$$F_X(x) = \Pr(X \le x).$$

For a discrete random variable taking the values x_i with probabilities p_i = Pr(X = x_i), the entropy of X will be

$$H(X) = -\sum_{i} p_i \log p_i.$$

For a base 2 logarithm, the Shannon entropy is obtained as

$$H(X) = -\sum_{i} p_i \log_2 p_i.$$

Starting from this concept and for a given probability distribution P and its associated information measure ϒ[P], Martin et al. [13] define an amount of “disorder” H as follows:

$$H[P] = \frac{\Upsilon[P]}{\Upsilon[P_e]},$$

where Pe stands for the probability distribution that maximises the information measure, that is, the uniform distribution. As for the information measure ϒ[P], this can be viewed in terms of information entropy, that is, as a measure of the uncertainty associated with the processes described by the probability distribution P. Whenever ϒ[P] = 0, the uncertainty is minimal: there is maximum foreknowledge of the underlying process. Conversely, the uncertainty is maximal for a uniform distribution, and ϒ[P] = ϒmax. Therefore, real-world processes fall between these two extremes: (a) “perfect knowledge” of their evolutions and outcomes and (b) “maximum ignorance” (or “maximum randomness”). Hence, it appears natural for the measure of “statistical complexity” defined below to imply some kind of distance D to the uniform distribution Pe.

D is not a true distance function, as it does not necessarily satisfy the triangle inequality and need not be symmetric.

However, it captures how much the distribution P differs from the equiprobable one. Of course, Q > 0 for a positive measure of complexity, and Q = 0 in the limit of equiprobability. Therefore, the “disequilibrium” Q can be defined as follows:

$$Q[P] = Q_0 \, D[P, P_e],$$

where Q0 is a normalisation constant. Q gives an idea of the probabilistic hierarchy of the system. It will be different from zero if some of the accessible states are more likely than others. In particular, if one state totally prevails among the accessible ones, the “disequilibrium” will be maximal (see also [16]).

With these components, Martin et al. [13] adopt the following functional form for their statistical complexity measure:

$$C[P] = H[P] \cdot Q[P].$$

This measure reflects the interplay between the amount of information stored in the system and its disequilibrium.

It should be noticed that ϒ is defined in terms of entropy (Shannon, Tsallis, or Rényi), while there are several possibilities for distance D.

For instance, if P = {p_1, p_2, …, p_N} is a discrete distribution, its associated Shannon entropy will be

$$S[P] = -\sum_{j=1}^{N} p_j \ln p_j.$$

Also, Martin et al. [13] call D_J the distance to the uniform distribution associated with the Jensen–Shannon divergence:

$$D_J[P, P_e] = S\!\left[\frac{P + P_e}{2}\right] - \frac{S[P]}{2} - \frac{S[P_e]}{2}.$$

Therefore, based on the functional product form, Martin et al. [13] obtain a family of statistical complexity measures for the distinct disorder H-measures and disequilibrium Q-measures (where K = Shannon (S), Tsallis (T), or Rényi (R) for fixed q—in Shannon’s instance q = 1—while γ indicates that the disequilibrium is evaluated with the appropriate distance measure: Euclidean (E), Wootters (W), Kullback (K), q-Kullback (q-K), Jensen (J), or q-Jensen (q-J)):

$$C^{(K)}_{\gamma}[P] = H_K[P] \cdot Q_{\gamma}[P].$$

From this family of statistical complexity measures, we further consider the case in which K = Shannon and γ = Jensen, that is, the measure based on the normalised Shannon entropy and the Jensen–Shannon disequilibrium.
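To make the construction concrete, the sketch below (in R, the language used for our empirical computations) shows how the normalised Shannon “disorder,” the Jensen–Shannon disequilibrium, and their product could be computed for an arbitrary discrete probability distribution. The function names are ours and purely illustrative; the sketch does not reproduce the exact implementation of Martin et al. [13].

```r
# Minimal sketch (R): statistical complexity C = H * Q_J for a discrete
# probability distribution p (function names are illustrative only).

shannon_entropy <- function(p) {
  p <- p[p > 0]                        # treat 0 * log(0) as 0
  -sum(p * log(p))
}

statistical_complexity <- function(p) {
  n  <- length(p)
  pe <- rep(1 / n, n)                  # uniform (equiprobable) distribution Pe
  H  <- shannon_entropy(p) / log(n)    # normalised Shannon "disorder" in [0, 1]
  # Jensen-Shannon divergence between p and the uniform distribution
  js <- shannon_entropy((p + pe) / 2) -
        shannon_entropy(p) / 2 - shannon_entropy(pe) / 2
  # Normalisation constant Q0: inverse of the maximal divergence, attained
  # when a single state totally prevails, p = (1, 0, ..., 0)
  js_max <- shannon_entropy((c(1, rep(0, n - 1)) + pe) / 2) -
            shannon_entropy(pe) / 2
  Q <- js / js_max                     # Jensen-Shannon disequilibrium in [0, 1]
  H * Q                                # statistical complexity C
}

# Example: a mildly non-uniform distribution over six states
statistical_complexity(c(0.30, 0.25, 0.20, 0.10, 0.10, 0.05))
```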

2.2. A Compound Alternative Measure

Several analytical and geometrical considerations concerning this family are discussed in detail by Martin et al. [13]. Among these properties, the analysis of the C-behaviour in the C versus H plane shows the existence of bounds to C (Cmax and Cmin). Such bounds can be evaluated by recourse to a geometric analysis performed in the space of probabilities. Nevertheless, in practical implementations, the evolution of C between these bounds can be subject to quite significant variations. For instance, this measure can be estimated by transforming a time series into a series of “ordinal patterns” and using their distribution, employing Keller and Sinn’s [17] coding scheme, as in Sippel et al. [18].

For a real-valued time series (x_t), the ordinal pattern of order d and delay τ at time t is defined as the unique permutation (r_0, r_1, …, r_d) of the set {0, 1, …, d} satisfying

$$x_{t - r_0 \tau} \ge x_{t - r_1 \tau} \ge \cdots \ge x_{t - r_d \tau},$$

with r_{l−1} > r_l whenever x_{t − r_{l−1} τ} = x_{t − r_l τ}, so that the permutation is unique even in the presence of ties.

In other words, the ordinal pattern provides a ranking of the times t, t − τ, …, t − dτ such that the corresponding values are in descending order. Once the ordinal patterns and their distribution are identified, one can highlight long-run dynamics in time series as well as the differences between time series and their underlying mechanisms. For instance, Bandt and Pompe [19] built, on this basis, the “permutation entropy,” which yields meaningful results for time series analysis in the presence of observational and dynamical noise. Maszczyk and Duch [20] have proposed a modified algorithm, which overcomes a limitation of the original, since Shannon entropy does not allow for the “exploration of the tradeoff between the probability of different classes and the overall information gain” (p. 650). Nevertheless, a key parameter for the estimation of the ordinal patterns is represented by the “embedding dimension,” that is, the minimal number of observations required for the deterministic state of the system to be unambiguously resolved.
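A minimal sketch of the ordinal-pattern coding (delay 1) is given below; it follows the ranking idea described above and is only illustrative, as our computations rely on the implementation accompanying Sippel et al. [18].

```r
# Minimal sketch (R): empirical distribution of ordinal patterns of order d
# (delay 1) for a time series x. Illustrative only; in practice we rely on
# the implementation of Sippel et al. [18].

ordinal_pattern_distribution <- function(x, d) {
  n_win <- length(x) - d                       # number of (d + 1)-point windows
  # Encode each window by the permutation that sorts it in descending order
  # (ties are broken by order of occurrence, keeping the pattern unique)
  patterns <- vapply(seq_len(n_win), function(t) {
    paste(order(x[t:(t + d)], decreasing = TRUE), collapse = "-")
  }, character(1))
  probs <- as.numeric(table(patterns)) / n_win
  # Pad with zeros for unobserved patterns, so that the distribution is
  # defined over all (d + 1)! possible ordinal patterns
  c(probs, rep(0, factorial(d + 1) - length(probs)))
}

set.seed(1)
p_hat <- ordinal_pattern_distribution(cumsum(rnorm(1000)), d = 3)
statistical_complexity(p_hat)   # reuses the sketch from Section 2.1
```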

The embedding theorems require that the embedding dimension be more than double the “true” dimension of the underlying dynamics to assure topological conjugacy. However, this “true” dimension is rarely known in practice, and different heuristic methods should be involved to properly assess the embedding dimension. All the same, many of these methods are computationally expensive, and all of them break down for short or noisy time series [21]. Also highly relevant here is that these different methods do not provide an “exact” value for the embedding dimension but rather an estimate of it. So one can expect the practical implementations of different methods of determining the C family to be sensitive to both the involved embedding dimensions and the empirical series length. Figure 1 illustrates this idea.

Figure 1: Estimates of the C measure (Jensen–Shannon divergence) for a simulated fractional Brownian motion: (a) embedding dimensions between 2 and 11, with the series length fixed at 1,000 observations; (b) series lengths between 10 and 1,000 observations, with the embedding dimension fixed at 3.

Figure 1(a) simulates the case in which the true generative process for a series of length equal to 1,000 observations is a fractional Brownian motion, but this is not known ex ante, nor is the true embedding dimension. In such a case, the C measure (based on the Jensen–Shannon divergence) is estimated for all embedding dimensions between 2 and 11. This measure appears to increase until the involved embedding dimension reaches a level equal to 6 and slowly declines for higher levels of this dimension. Even while the specific regularity conditions are respected, the C-values may vary between their boundaries with substantial amplitude.

Figure 1(b) shows the changes in the C measure as the series length increases over the [10, 1,000] interval, with the embedding dimension kept unchanged at a level of 3. For very short series (fewer than 100 observations), the variations of C are extremely large, while its evolution becomes smoother for 500 observations and more.

Briefly, in practice, more information than suggested by a particular method for estimating the embedding dimension may be required to fully retrieve a series’ deterministic structure. Biases may occur for short noisy series.

In order to account for this, we consider a compound measure of complexity, C_c, based on this family, as follows:

$$C_c = \sqrt{\sum_{m = m_{\min}}^{m_{\max}} \left( \widetilde{C}_m \right)^2},$$

where m_min and m_max are, respectively, the minimal and the maximal levels of the embedding dimension involved, while the terms under the sum are the normalised complexity measures (in the range [0, 1]) obtained for each embedding dimension m. In other words, C_c is the Euclidean norm of all the values obtained for embedding dimensions between m_min and m_max. This method aims to consider all information relevant for reconstructing the deterministic structure of the series. It does not alter the logic of the Martin et al. [13] measure. Instead, it aims to deal with the practical implementation issues related to the requirement of finding a proper estimate of the embedding dimension when the deterministic structure of different real-world series should be reconstructed.
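A sketch of this compound measure, reusing the illustrative functions above, is given below; the mapping between the embedding dimension m and the ordinal-pattern order (here, order m − 1, so that each window spans m points) is our assumption.

```r
# Minimal sketch (R): compound complexity measure, combining the normalised C
# values obtained for embedding dimensions m_min, ..., m_max = m_min + 2
# through the Euclidean norm (illustrative reading of the procedure).

compound_complexity <- function(x, m_min) {
  dims <- m_min:(m_min + 2)
  c_values <- vapply(dims, function(m) {
    p_hat <- ordinal_pattern_distribution(x, d = m - 1)  # m points per window
    statistical_complexity(p_hat)        # already normalised to [0, 1]
  }, numeric(1))
  sqrt(sum(c_values^2))                  # Euclidean norm over the dimensions
}

set.seed(1)
compound_complexity(cumsum(rnorm(750)), m_min = 3)
```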

By considering for C_c a Gamma distribution with a shape parameter α and an inverse scale parameter β (a “rate parameter”), C_c ∼ Γ(α, β), one can assess the complexity level of an empirical series by involving a one-sided test of the null hypothesis that C_c does not exceed the critical value obtained for trivial (noise-generated) series against the alternative that it does. If the test fails to reject the null, this can indicate that the series is not generated by a nontrivial process.

Table 1 reports the critical values of such a test, at the 0.05 probability level, for some Gaussian/Brownian noises with different lengths. While it appears that the compound measure is still vulnerable to short-run noise, it provides relatively robust values across different types of series for a given series length.
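The critical values can be obtained along the following lines (a sketch, assuming that the Gamma parameters are fitted by maximum likelihood on compound-C values computed for simulated noise series of the relevant length):

```r
# Minimal sketch (R): 0.05-level critical value of the one-sided complexity
# test, from compound-C values computed on simulated Gaussian noise.
library(MASS)

set.seed(123)
c_noise <- replicate(200, compound_complexity(rnorm(500), m_min = 3))
fit <- fitdistr(c_noise, densfun = "gamma")           # ML estimates (shape, rate)
c_crit <- qgamma(0.95, shape = fit$estimate["shape"],
                 rate  = fit$estimate["rate"])
# Reject the null of a trivial (noise-like) process whenever C_c > c_crit
```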

A two-step approach can be considered for practical applications of the proposed compound measure of complexity. First, a minimal embedding dimension can be estimated by involving an appropriate method such as the one proposed by Cao [26]. Second, a reasonable number of higher embedding dimensions can be taken into account up to a maximal one.

In applying this approach to individual stock markets, we contemplate the possibility of distinctive embedding dimensions for each of them. Hence, for a particular market i, we first estimate the corresponding minimal embedding dimension m_min,i separately and then add two further dimensions to obtain m_max,i = m_min,i + 2. With the involvement of the Jensen–Shannon divergence, we will have

$$C_{c,i} = \sqrt{\sum_{m = m_{\min,i}}^{m_{\min,i} + 2} \left( \widetilde{C}^{(S)}_{J,m,i} \right)^2}.$$

2.3. Estimating the Minimal Embedding Dimension: Cao’s [26] Method

In this method, for a time series x = {x_n}, n = 1, …, N, let Y_i(m) denote the ith reconstruction vector with embedding dimension m and time lag τ:

$$Y_i(m) = \left( x_i, x_{i + \tau}, \ldots, x_{i + (m - 1)\tau} \right), \quad i = 1, \ldots, N - (m - 1)\tau.$$

Cao [26] defines

$$a(i, m) = \frac{\left\| Y_i(m + 1) - Y_{n(i, m)}(m + 1) \right\|}{\left\| Y_i(m) - Y_{n(i, m)}(m) \right\|}, \quad i = 1, \ldots, N - m\tau,$$

where ‖·‖ is a measure of distance given by the maximum norm, that is, ‖Y_k(m) − Y_l(m)‖ = max_{0 ≤ j ≤ m−1} |x_{k + jτ} − x_{l + jτ}|, and n(i, m) (1 ≤ n(i, m) ≤ N − mτ) is an integer such that Y_{n(i, m)}(m) is the nearest neighbour of Y_i(m) in the m-dimensional reconstructed phase space, in the sense of the distance ‖·‖.

Cao then defines the mean value of all a(i, m)’s,

$$E(m) = \frac{1}{N - m\tau} \sum_{i = 1}^{N - m\tau} a(i, m),$$

as well as another quantity that is useful to distinguish deterministic signals from stochastic signals,

$$E^{*}(m) = \frac{1}{N - m\tau} \sum_{i = 1}^{N - m\tau} \left| x_{i + m\tau} - x_{n(i, m) + m\tau} \right|,$$

and proposes two tests as follows:

$$E_1(m) = \frac{E(m + 1)}{E(m)}, \qquad E_2(m) = \frac{E^{*}(m + 1)}{E^{*}(m)}.$$

E1(m) stops changing when m is greater than some value m0 if the time series comes from an attractor, and therefore, m0+1 is an estimator of the minimum embedding dimension.

For random data, E2(m) will be equal to 1 for any m. Conversely, for deterministic data, E2(m) is related to m, and it cannot be constant for all m; in other words, there must exist some m’s such that E2(m) ≠ 1.

Cao [26] recommends the involvement of both E1(m) and E2(m) in determining the minimum embedding dimension of a scalar time series and in distinguishing deterministic from random data.

Such method displays several advantages: (1) it does not contain any subjective parameters (apart from the time delay for the embedding); (2) it does not significantly depend on series’ length; (3) it can clearly distinguish deterministic from stochastic signals; (4) it works appropriately for time series from high-dimensional attractors; and (5) it is computationally efficient [26].
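For completeness, a compact base-R sketch of Cao’s quantities E1(m) and E2(m) is given below; it is only illustrative, since our estimates rely on the implementation referenced as Garcia [32].

```r
# Minimal sketch (R): Cao's E1(m) and E2(m) for a scalar time series x and
# time lag tau. Illustrative only; the empirical work uses the implementation
# in Garcia [32].

cao_E <- function(x, m, tau = 1) {
  N <- length(x)
  n_vec <- N - m * tau                    # indices for which Y_i(m + 1) exists
  embed_rows <- function(dim) {           # reconstruction vectors, one per row
    t(vapply(seq_len(n_vec),
             function(i) x[i + (0:(dim - 1)) * tau], numeric(dim)))
  }
  Ym  <- embed_rows(m)
  Ym1 <- embed_rows(m + 1)
  a <- numeric(n_vec); e_star <- numeric(n_vec)
  for (i in seq_len(n_vec)) {
    # Maximum-norm distances from Y_i(m) to all other reconstruction vectors
    d <- apply(abs(Ym - matrix(Ym[i, ], n_vec, m, byrow = TRUE)), 1, max)
    d[i] <- Inf                           # exclude the vector itself
    d[d == 0] <- Inf                      # skip exact duplicates
    nn <- which.min(d)                    # nearest neighbour n(i, m)
    a[i]      <- max(abs(Ym1[i, ] - Ym1[nn, ])) / d[nn]
    e_star[i] <- abs(x[i + m * tau] - x[nn + m * tau])
  }
  c(E = mean(a), Estar = mean(e_star))
}

cao_E1_E2 <- function(x, m, tau = 1) {
  e_m  <- cao_E(x, m, tau)
  e_m1 <- cao_E(x, m + 1, tau)
  c(E1 = unname(e_m1["E"] / e_m["E"]),
    E2 = unname(e_m1["Estar"] / e_m["Estar"]))
}
```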

2.4. Dealing with Data Structure Uncertainty: The Bayesian Network Learning and Inference Framework

Financial globalisation processes lead to an increasingly interlinked architecture of the international stock markets. Although, during “business as usual” periods, this may facilitate international flows of capital and ensure smooth dynamics of the international markets, it contributes to risk propagation when large systemic market shocks emerge. Thus, one can expect a synchronised change in the behaviour of investors from the main international markets during the pandemic crisis, as a direct consequence of specific contagion mechanisms, that is, the propagation of extreme negative returns and the increase in interdependence compared to “normal” times.

Hence, an interesting research question refers to whether or not there is, during the COVID-19 crisis, a spread of negative externalities from one market to another in terms of their dynamics’ complexity.

In order to answer this question, one needs to account for the uncertainty surrounding the cross-market transmission channels.

A possible way to deal with this type of uncertainty is represented by the Bayesian network (BN) framework. As Scutari (2010:1) notes: “In recent years Bayesian networks have been used in many fields…. The high dimensionality of the data sets common in these domains have led to the development of several learning algorithms focused on reducing computational complexity while still learning the correct network.”

Bayesian networks are graphical models where nodes represent random variables (the two terms are used interchangeably in this article) and arrows represent probabilistic dependencies between them [27].

The graphical structure of a Bayesian network is a directed acyclic graph (DAG) G = (V, A), where V is the node (or vertex) set and A is the arc (or edge) set (for more details, see [28]).

The DAG defines a factorisation of the joint probability distribution of V= {X1, X2, …, Xv} often called the global probability distribution, into a set of local probability distributions, one for each variable.

The form of the factorisation is given by the Markov property of Bayesian networks [27] (Section 2.2.4), which states that every random variable X_i directly depends only on its parents Π_{X_i}, as follows:

(a) For discrete variables:

$$\Pr(X_1, X_2, \ldots, X_v) = \prod_{i = 1}^{v} \Pr\left(X_i \mid \Pi_{X_i}\right).$$

(b) For continuous variables:

$$f(X_1, X_2, \ldots, X_v) = \prod_{i = 1}^{v} f\left(X_i \mid \Pi_{X_i}\right).$$

The correspondence between conditional independence (of the random variables) and graphical separation (of the corresponding nodes of the graph) has been extended to an arbitrary triplet of disjoint subsets of V by Pearl [29] with the d-separation (from direction-dependent separation).

Therefore, model selection algorithms try first to learn the graphical structure of the Bayesian network (hence the name of structure learning algorithms) and then to estimate the parameters of the local distribution functions, conditional on the learned structure.

This two-step approach has the advantage of taking one local distribution function at a time, and it does not require modelling explicitly the global distribution function. Another advantage is that learning algorithms are able to scale to fit high-dimensional models without incurring the so-called curse of dimensionality.

In order to implement this framework, we use the R package bnlearn [30].
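As an illustration, a minimal sketch of the structure-learning step is given below, assuming that the rolling compound-complexity estimates for each market are stored as columns of a data frame (the object and column names are ours and purely illustrative).

```r
# Minimal sketch (R): score-based structure learning with bnlearn on the
# markets' compound complexity series (object names are illustrative).
library(bnlearn)

# complexity_df: one column of compound-C estimates per market index,
# e.g. "SP500", "SPTSX", "MERVAL", "IBOVESPA", "FTSE100", "DAX", "BEL20",
# "CAC40", "NIKKEI225", "HANGSENG"
dag       <- tabu(complexity_df)                       # Tabu search
strengths <- arc.strength(dag, data = complexity_df)   # strength of each arc
fitted    <- bn.fit(dag, data = complexity_df)         # local distributions

print(dag)
print(strengths)
```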

3. International Stock Market Data

The daily closing levels of 10 major developed and emerging stock markets’ indexes are collected from Yahoo Finance (https://finance.yahoo.com/) for the period between 2017/01/04 and 2020/08/18, using the R package “BatchGetSymbols” [31]. We remove the nonavailable data, ending with 731 observations.
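For reproducibility, the retrieval step can be sketched as follows; the Yahoo Finance ticker symbols listed below are our assumption of the usual codes for the ten indexes and should be verified before use.

```r
# Minimal sketch (R): retrieving the daily index levels from Yahoo Finance.
# The ticker list is only an assumption of the usual Yahoo codes.
library(BatchGetSymbols)

tickers <- c("^GSPC", "^GSPTSE", "^MERV", "^BVSP",   # Americas
             "^FTSE", "^GDAXI", "^FCHI", "^BFX",     # Europe
             "^N225", "^HSI")                        # Asia

raw <- BatchGetSymbols(tickers    = tickers,
                       first.date = as.Date("2017-01-04"),
                       last.date  = as.Date("2020-08-18"),
                       freq.data  = "daily")

prices <- raw$df.tickers[, c("ref.date", "ticker", "price.close")]
# Keep only the dates available for all ten markets (common trading days)
```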

As Table 2 shows, there are notable differences among stock markets in terms of their corresponding minimal embedding dimension, as estimated by Cao’s [26] method. When the full sample estimates are compared with the 2020 subsample estimates, the pandemic crisis is revealed as associated with some changes in the embedding mechanisms. However, with the notable exceptions of BEL 20, for which the embedding dimension declines 1.75 times, and FTSE 100, for which this dimension increases almost 1.30 times, such changes do not radically modify the full-period picture.

Table 2 reports the estimates of the minimum embedding dimension based on Cao’s [26] method, as implemented in Garcia [32]. One time lag is used to build the Takens vectors needed to estimate the embedding dimension. The embedding dimension is estimated using the E1(m) function: E1(m) stops changing when m is greater than or equal to the embedding dimension, staying close to 1. A threshold is therefore needed for considering that E1(m) is close to 1; this threshold is set equal to 0.90. The full sample covers the period between 2017/01/04 and 2020/08/18.

On this database and using the embedding dimensions reported in column 1 of Table 2 as mmin (while mmax = mmin + 2), we compute the proposed compound measure of complexity on rolling (overlapping) windows, with a length set equal to 500 consecutive observations. With this procedure, we obtain, for each individual market, 232 estimates covering the period between 2019/07/16 and 2020/08/18.
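The rolling-window step can be sketched as below, reusing the illustrative compound_complexity() function from Section 2.2 with a market-specific minimal embedding dimension (names are illustrative).

```r
# Minimal sketch (R): rolling (overlapping) 500-observation windows for one
# market's closing-level series x, reusing compound_complexity() above.
window_len <- 500
n_windows  <- length(x) - window_len + 1        # 731 - 500 + 1 = 232 windows
rolling_c  <- vapply(seq_len(n_windows), function(k) {
  compound_complexity(x[k:(k + window_len - 1)], m_min = m_min_market)
}, numeric(1))                                  # m_min_market: from Table 2
```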

4. Results and Discussion

4.1. Estimates of a Compound Complexity Measure

Figure 2 displays the evolution of the compound measure of complexity, while Table 3 reports its main statistics. One can highlight here several issues. First, as the values of the distribution parameters and the Jarque–Bera statistics for the full sample show, the empirical distribution of the complexity measure is asymmetrical, with right/left fat-tail effects, and platykurtic (with the single exception of the IBOVESPA index). For some markets (but not for all), the test of Zivot and Andrews [33] rejects the unit root with drift null in favour of a stationary process with a one-time break in both trend and intercept. Interestingly, when at least one break exists in the complexity measure’s series, this is usually placed at the end of 2019 or in March 2020.

Notes: The compound measure is computed as described in Section 2.2, and the data covers the full sample. The estimation is performed on a rolling (overlapping) window with a length of 500 consecutive observations. The Martin et al. [13] generalised (global) complexity measure based on the Jensen–Shannon divergence is computed using the implementation of Sippel et al. [18].

Second, the mean values of compound C measure are significantly higher than the corresponding critical values reported by Table 1. Therefore, there is strong evidence of complex patterns for the stock markets included in our data set.

Third, there is regional heterogeneity in markets’ response to the pandemic crisis. Such heterogeneity is reflected at least by: (a) an overall higher level of compound C for Asian markets compared to the European and Americas’ markets, during 2020; (b) a less volatile evolutionary pattern for Americas and Asian markets’ complexity than in the case of the European markets; and (c) nonsynchronised dynamics across regions for compound C measure.

Fourth, there is also important time heterogeneity, as revealed by Table 4, employing pairwise comparisons using t-tests with pooled standard deviations and three adjustment methods for multiple testing. At least three subsamples can be identified in the analysed time horizon. The first one covers the last two quarters of 2019; the second one corresponds to the first quarter of 2020, while the third subsample is the second quarter of 2020. There are only two exceptions when tests fail to identify the distinctiveness between subsamples. These relate to the DAX index, for which the first and third subsamples do not show distinct profiles, and to the HANG SENG index, for which the second and the third subsamples do not display differences at 5% statistical significance.

4.2. Bayesian Networks Analysis

We further turn to the structure of the relationships between stock markets’ complexity, as revealed by the Bayesian networks analysis. Figure 3 shows the network provided by a Tabu search score-based structure-learning algorithm and its arcs according to the strength of the dependencies.

This analysis suggests four relevant groups of markets in terms of complex behaviour transmission. Thus, the linkages between European markets generate the first one, including BEL20, DAX, and, with a lower strength, FTSE 100. The second group occurs due to the connections between DAX and Americas (MERVAL and S&P/TSX) markets. Due to the direct linkages between CAC40 and S&P/TSX on the one hand and S&P/TSX and NIKKEI225 on the other, the third group functionally emerges. The last group associates S&P/TSX, HANG SENG, and S&P 500. In addition, IBOVESPA remains isolated from the rest of the markets, and no arc links this market to the others.

This last result is consistent with other findings from the literature. For instance, Laurini and Chaim [41] show that Brazilian stock markets went through a period of “exuberance” between early 2016 and March 2020, only to crash with the global turmoil caused by health concerns and oil prices. David et al. [42] point toward a strong impact of the pandemic crisis on the IBOVESPA and reveal its poor recovery when compared to other indexes, whereas the findings of Chong et al. [43] suggest that the Brazilian market is significantly influenced by the conditions of other markets, such as the Chinese one. This impact of Chinese market evolutions is not surprising since China is the leading trade partner of many countries from Latin America and the biggest oil importer globally. Henceforth, episodes such as the Chinese Stock Market Crash from 2015 to 2016 [44] or the pandemic crisis are likely to bear considerably on the Latin American markets.

Broadly, the heterogeneity of Latin America’s evolutions is also related to the fact that intraregional integration of local financial markets seems less advanced than in other emerging regions. Moreover, the Integrated Market for Latin America (MILA) initiative, launched in 2011, still requires additional integration efforts [45]. Meanwhile, as Romero-Meza et al. [46] show, several episodes of international turbulence on financial markets induce nonlinear dependencies and contagion effects on these markets (especially for the financial markets of Mexico, Brazil, and Argentina, which tend to present high nonlinear dependence within very similar time frames).

Nevertheless, the picture that surfaces when this analysis focuses only on the 2020 sample differs substantially. Figure 4 displays the corresponding results, showing clearly the transmission channels of the changes in investors’ behaviour leading to shifts in prices’ complexity dynamics.

Most notably, the structure of distinctive groups of markets changes for 2020 compared to the full sample. There is a prominent role played by European markets (especially FTSE100, DAX, and BEL20), together with HANG SENG and NIKKEI225, as crucial markets for the transmission of the effects associated with the pandemic shock. Somewhat surprisingly, the North American markets rather receive than transmit such effects.

4.3. Global Analysis: Bayesian Principal Component Analysis

One can argue that the pandemic crisis is not only a local or regional process but also a global health crisis. Consequently, “one would expect to see not only a rise in country-specific risks in stock markets but also an increase in systemic risks” ([47]: 2). Such an increase in market risks and, broadly, the shifts in price mechanisms during the COVID-19 pandemic should be mirrored by changes in the global forces driving the existence of complex market behaviours. In order to check for this, we employ the Bayesian principal component analysis (Bayesian PCA) approach. The purpose is to estimate a synthetic descriptor of complexity for the international stock markets included in our data set, viewed as a single integrated system. Bayesian PCA is a version of PCA that imposes a prior and encodes a basis selection mechanism (see [48]).
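The extraction of this synthetic descriptor can be sketched as below, using the Bayesian PCA implementation from the pcaMethods package; the package choice and the object names are our assumptions, given only for illustration.

```r
# Minimal sketch (R): Bayesian PCA on the markets' rolling compound-C series,
# using pcaMethods (an assumed implementation choice).
library(pcaMethods)

# complexity_df: rolling compound-C estimates, one column per market
bpca_fit <- pca(scale(as.matrix(complexity_df)), method = "bpca", nPcs = 2)
pc1 <- scores(bpca_fit)[, 1]   # first principal component: the latent driver
```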

Figure 5 displays the first principal component from this analysis for 2020. It highlights the global peak occurring in March 2020 as well as two other local peaks in June and July 2020.

After we obtain an estimate of the latent forces driving the complexity of the international markets, we further collect, by using the R package “covid19.analytics” [51], worldwide data related to the spread of COVID-19 from the Novel CoronaVirus Disease (CoViD-19) data repository of the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE; https://github.com/CSSEGISandData/COVID-19). Since we are interested in global evolutions, we compute, based on this data, the total worldwide number of registered cases. Our purpose is to check for a potential association between the global spread of COVID-19 and stock markets’ changes. With this aim, we involve a “wavelet coherence” analysis. Wavelet coherence can be viewed as a localised correlation coefficient in time-frequency space [52]. The wavelet coherence of two time series can be defined as follows [52, 53]:

$$R_n^2(s) = \frac{\left| S\!\left( s^{-1} W_n^{XY}(s) \right) \right|^2}{S\!\left( s^{-1} \left| W_n^{X}(s) \right|^2 \right) \cdot S\!\left( s^{-1} \left| W_n^{Y}(s) \right|^2 \right)},$$

where S is a smoothing operator, S(W) = S_scale(S_time(W_n(s))), with S_scale denoting smoothing along the wavelet scale axis and S_time denoting smoothing in time, and W^XY is the cross-wavelet transform (which, in a geometrical sense, is the analogue of the covariance). One key advantage of wavelet coherence is that it shows statistical significance only in areas where the involved series share significant periods.
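A sketch of this step is given below; the covid19.analytics call follows the data source named in the text, while the biwavelet implementation of the coherence (and the crude alignment of the two series by index) are our assumptions, shown only for illustration.

```r
# Minimal sketch (R): worldwide confirmed-case series and its wavelet
# coherence with the first principal component pc1 (biwavelet is an assumed
# implementation choice).
library(covid19.analytics)
library(biwavelet)

ts_conf   <- covid19.data(case = "ts-confirmed")      # JHU CSSE time series
date_cols <- grep("^20", names(ts_conf))              # daily date columns
world_total <- colSums(ts_conf[, date_cols], na.rm = TRUE)

# In practice the two series must be matched by calendar date; here they are
# simply aligned by index for illustration.
n  <- min(length(world_total), length(pc1))
d1 <- cbind(1:n, as.numeric(world_total[1:n]))
d2 <- cbind(1:n, as.numeric(pc1[1:n]))

wc <- wtc(d1, d2, nrands = 100)                       # wavelet coherence
plot(wc)                                              # time-frequency plot
```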

The results of the wavelet coherence analysis between the total worldwide number of confirmed cases and the first principal component of compound C measure are displayed in Figure 6, while Figure 7 shows the global phase difference image. These figures highlight the leading role exercised by the global spread of COVID-19, with high significance for both “high” (2 to 4 days) and medium frequencies (6 to 10 days). Nevertheless, especially at the beginning of the analysed period (during January 2020), there occurs an area of significant coherence for lower frequencies (up to 32 days). However, this significance is lost at such frequencies due to the subsequent spread of the pandemic shock. These results may be explained in connection with a possible shortening of the trading horizons, as the uncertainty surrounding the future economic prospects intensifies from February 2020 onwards. This is illustrated by the emergence of “short” and “medium” frequency areas of significant coherence around the end of the analysis period (July–August 2020).

Nevertheless, as the plot of “instantaneous” (or local) phase differences in Figure 7 suggests, the evolution of these differences is driven by the time structure of the COVID-19 spread around the world (notably at periods between 2 and 8 days). Overall, it can be argued that the shocks associated with the pandemic crisis exercise a key impact on the global complexity of the considered financial markets. Still, this impact is not constant over different time spans, and it closely reflects the periods relevant for the pandemic’s propagation.

4.4. Discussion

We find some significant shifts in stock market prices’ patterns during the COVID-19 crisis, which lead, in terms of complexity, to a different evolution than in prepandemic periods.

Nevertheless, one can ask how plausible such findings are. Several aspects can be emphasised here. First, the compound measure of complexity used here is able to capture the March 2020 episode of severe contraction, when the market value of the MSCI World Index (capturing large and midcap representation across 23 developed markets countries) faced a 17.5% drop between 03/06 and 03/18, the DJIA fell by around 26% in only four days, the CSI 300 index in China lost 12.1% of its value during this period, and the FTSE MIB index in Italy lost 27.3%. This can be viewed as evidence, even if limited, supporting the sensitivity of the compound measure to changes in the complexity-leading mechanisms of empirical time series. Still interesting here are the differences in the minimum embedding dimension for markets and the determinants of their changes during 2020.

Second, we obtain confirmation of a network structure among the considered stock markets, implying biunivocal mechanisms for complex behaviour transmission. However, it is highly plausible that such a structure is adaptive, and thus, we highlight the variations occurring during the pandemic exogenous shock. As expected, the developed European and Asian markets play the part of “critical nodes.” At this moment, more research is required to better clarify the role of North American markets during this period.

Third, we emphasise the existence of a global force driving the considered markets during the pandemic period. Its generative mechanism can be associated with international portfolios’ adjustments, as investors’ uncertainty about local market conditions emerges. While such adjustments typically occur during each major financial turmoil, it should be noticed that the COVID-19 crisis is generated by a pure “exogenous to the market” international shock. Therefore, an extensive analysis is necessary to understand investor reactions to such shock and the corresponding changes in their decision-making mechanisms [56]. In addition, for a better picture, research should focus on the indirect effects of COVID-19 containment policies (including business and education units’ shutdowns, public event cancellations, travel restrictions, mandatory quarantines, social distancing, or mask-wearing). It is plausible to expect such measures to inhibit the economic activity, generate a significant uncertainty associated with the forecast of future economic perspectives, and impact different components of financial systems.

Finally, we also find a significant “wavelet coherence” between the total number of COVID-19 cases worldwide and the underlying forces behind stock markets’ complex behaviour. This high wavelet coherence is consistent with studies (e.g., [9]) that discover mostly positive associations between the analysed indexes and the gold-Bitcoin ratio. A high degree of correlation between gold and Bitcoin in the COVID-19 period is documented in the literature [57]. Positive correlations that had not been present before the pandemic were documented by Aslam et al. [58]. The authors highlight the significant impact of the COVID-19 pandemic on financial networks in terms of structural changes in the form of node changes and connectivity and the occurrence of substantial differences in the topological characteristics.

Of course, “coherency” does not necessarily imply “causality.” However, the area of coherence significance in Figure 6 is so extensive that it is implausible that this happens by chance. Additionally, the identified “short” and “medium” frequencies of significance coincide with the natural daily and weekly trading cycles. Furthermore, other findings from the literature suggest that negative market reactions were strong during the early days of confirmed cases [1], while a decreasing (increasing) trend in the number of confirmed coronavirus cases is associated with improving (deteriorating) liquidity in financial markets [6]. Meanwhile, the conditional variance of the European and United States markets during the COVID-19 crisis is higher than during the 2007–2010 financial and real-economy crisis, whereas the conditional variance during the 2007–2010 crisis was greater in Asian markets [2]. The spread of COVID-19 in the USA and Saudi Arabia was shown to negatively influence the volatility of the Saudi stock market indexes [59]. Finally, loser stocks exhibit extreme asymmetric volatility, which correlates negatively with stock returns [3]. This may be associated with investors’ asymmetric reactions to the information related to the pandemic spread and the corresponding adjustments to their risk profiles. Overall, these mechanisms may explain the existence of a possible unidirectional causality running from pandemic evolutions to market reactions. Nevertheless, a more careful examination is required to separate the influence exercised by changes in the local pandemic situation from the one induced by evolutions at the regional/international levels.

5. Conclusions

This study describes some of the effects exercised by the COVID-19 pandemic crisis on the stock markets’ behaviour, as reflected by the changes in the complexity levels. In order to capture the associated shifts in market patterns, we consider a compound version of the complexity estimation method proposed by Martin et al. [13].

Our main result is that this crisis has significantly altered the markets’ evolutionary patterns. We also find that the network description of markets’ multivocal transmission of complex responses changed in 2020, with European and Asian markets playing a key role. Additionally, beyond the individual markets’ dynamics, there is a single latent driving mechanism acting globally during the pandemic crisis. Meanwhile, regional and time heterogeneity effects are present. In particular, March 2020 witnessed extreme downside movements in the markets.

A wavelet coherence analysis involving an estimate of such latent synthetic variable (as provided by a Bayesian PCA analysis) shows the leading role of the total worldwide number of confirmed COVID-19 cases in its dynamics. Such a role is exercised over “short” and “medium” frequencies, which coincide with the regular trading cycles.

The main message of this paper is that the COVID-19 pandemic affects not only individual lives, society, and the economy as a whole but also the stability and efficiency of the financial systems and their components, among which stock markets play a crucial part. Therefore, there is room for policy measures able to promote stability and limit the associated negative externalities.

Data Availability

The research uses exclusively publicly available data, as stated in Section 3 of the paper. Thus, the daily closing levels of 10 major developed and emergent stock markets’ indexes are collected from Yahoo Finance (https://finance.yahoo.com/), for a period between 2017/01/04 and 2020/08/18.

Conflicts of Interest

The authors declare that they have no conflicts of interest.