Market inefficiency is a latent concept, and it is difficult to measure by means of a single indicator. In this paper, following both the adaptive market hypothesis (AMH) and the fractal market hypothesis (FMH), we develop a new time-varying measure of stock market inefficiency. The proposed measure, called the composite efficiency index (CEI), is estimated as the synthesis of the most common efficiency measures, such as the returns’ autocorrelation, liquidity, and volatility, together with a new measure based on the Hurst exponent, called the Hurst efficiency index (HEI). To empirically validate the indicator, we compare different European stock markets in terms of efficiency over time.

1. Introduction

Since Malkiel and Fama’s [1] seminal work, several papers have studied market efficiency. The efficient market hypothesis (EMH) is based on many unrealistic assumptions, such as serial independence, returns’ normality, homoscedasticity, and absence of long memory. Nowadays, it is well accepted that stock returns share some statistical properties, called stylized facts (see Cont [2]), which suggest strong deviations from the EMH. Various methods have been proposed to test the EMH, but the empirical evidence varies according to the specific markets, the periods of time, and the approaches selected to measure efficiency.

The EMH considers the investor to be generic: an investor is anyone who wants to buy, sell, or hold a security on the basis of the available information and is a rational price-taker. This generic approach, in which information and investors are general cases, implies that financial markets are “informationally efficient.” However, it is easy to see that extreme market reactions occur more frequently than expected under the EMH, and noise traders operate to provide the necessary liquidity for the rational investors. Proposed by Peters (1994), building on Mandelbrot’s previous work, the fractal market hypothesis (FMH) provides a new framework to model turbulence in financial markets. The FMH is based on the concept of market liquidity and on how the information that arrives to the market is interpreted by different agents. Thus, the market is stable when investors operate over a large number of investment horizons, so that the market is provided with ample liquidity. When information has the same impact on all investors, market liquidity decreases, and the market becomes unstable. Under the FMH, the market collapses when traders with a single horizon dominate, placing more sell orders than the remaining agents can absorb.

As the EMH, APT, and CAPM are equilibrium models, they are expected to work properly when markets are stable, but not under turbulence. Weron and Weron [3] remarked that the purpose of the FMH is to give a model of investor behavior and market price movements that fits our observations. The key point is that, under the FMH, the market is stable when it has no characteristic timescale or investment horizon.

Market efficiency is surely a latent concept, and it is difficult to measure by means of a single indicator. Nevertheless, it has been measured in different ways. Moreover, from Lo [4], we know that market efficiency changes over time. This is the main idea underlying the adaptive market hypothesis (AMH). As a consequence, a good measure of efficiency has to be time-varying.

Among the several approaches for measuring market efficiency, the most common is based on returns’ autocorrelation (e.g., Ito et al. [5, 6], Noda [7], and Tran and Leirvik [8]). The degree of returns’ autocorrelation is an index of market efficiency because, if present, it reflects a deviation from the random walk hypothesis. In particular, Ito et al. [5, 6] measured time-varying efficiency by considering the autocorrelation in stock monthly returns estimated with a time-variant autoregressive (TV-AR) model. Tran and Leirvik [8] improved the measure of Ito et al. [5, 6] for high-frequency data and for a large number of autocorrelations.

Furthermore, from a historical perspective, volatility has proved to be a good proxy of market efficiency as well (Földvári and Van Leeuwen [9]). In this direction, Lo and MacKinlay [10] proposed a test statistic based on the ratio of variances which is still commonly used in testing for market efficiency. More recently, Liu and Chen [11] studied market efficiency by means of ARMA-GARCH forecasts in order to account for the heteroscedastic nature of stock returns.

Another type of market efficiency measure is related to market liquidity (e.g., Sukpitak and Hengpunya [12]). In this respect, Chordia et al. [13] found that predictability is lower in markets with narrower bid-ask spreads and that prices are closer to a random walk in more liquid markets. Similarly, Chung and Hrazdil [14] showed that the more liquid a market is, the higher its efficiency. There are several ways of measuring liquidity (for a review, see Gabrielsen et al. [15]), but one of the most recognized and used was proposed by Amihud [16].

A fourth approach and, perhaps, the most discussed (e.g., Kristoufek and Vosvrda [17, 18], Sensoy and Tabak [19], and Kristoufek and Vosvrda [20, 21]) is based on the analysis of markets’ long-range dependence, also known as long memory. The long memory property describes if and how much past events influence the future evolution of the process (Ausloos et al. [22]). Several researchers have studied the long memory of financial markets (e.g., Lillo and Farmer [23], Jiang et al. [24], Ferreira and Dionisio [25], Bariviera et al. [26], Sánchez Granero et al. [27], and Dimitrova et al. [28]).

Long memory is commonly studied through the Hurst exponent H, introduced by Hurst [29] and later developed by Mandelbrot and Van Ness [30], which represents the rate of decay of the autocorrelation function of a time series. If H > 0.5, we say that the time series shows persistent behavior; if H < 0.5, we define the time series as antipersistent. Bianchi et al. [31] claimed that different values of the Hurst exponent are related to different regimes: “bull” and “bear” periods and mean reversion.

Overall, the Hurst exponent is closely related to market efficiency, and several authors (e.g., Bianchi [32], Sánchez Granero et al. [27], and Dimitrova et al. [28]) showed that a value of H = 0.5 reflects an efficient market. Therefore, in constructing a measure of inefficiency, the deviations of H from the value 0.5 have to be computed (Kristoufek and Vosvrda [17]). However, a time-invariant Hurst exponent explicitly contradicts the adaptive market hypothesis (AMH). To overcome this limitation, we consider a new efficiency measure based on a time-varying Hurst exponent that is computed by assuming that stock prices follow a multifractional Brownian motion (mBm).

However, each of the aforementioned approaches considers only one aspect of efficiency. Composite indicators are used in several domains of science to summarise in a meaningful way the information coming from different sources. Since efficiency is a concept that is difficult to measure by means of a single indicator, in this paper, we construct a new composite indicator of financial market efficiency, called the composite efficiency index (CEI), by combining all the aforementioned measures.

Building composite indicators is a complex task that involves several steps: (1) the selection of the theoretical framework, (2) the selection of the subindicators, and (3) the aggregation method (i.e., how the single indicators are combined). The theoretical framework of the proposed CEI is obviously represented by financial market efficiency. Regarding the selection of the subindicators, we consider each of the aforementioned approaches to measuring market efficiency. Regarding the last step, our approach consists in estimating a factor model where the composite index is the latent common factor. The composite index is then obtained by the principal component estimator (see Bai and Ng [33]).

As an empirical application, we study the efficiency of different European stock markets, considering also the effects of the COVID-19 pandemic, by means of this new composite index. More in detail, we consider the stock markets of the Netherlands, Austria, Belgium, France, Germany, Spain, and Switzerland and construct country-specific composite indicators. Then, we estimate the inefficiency of each market during the reference period and investigate the deterministic or random nature of this phenomenon. Moreover, the properties of the efficiency process are studied from a statistical point of view through a stationarity test (the Dickey–Fuller test) and the analysis of the (partial) autocorrelation function.

The rest of this paper is organized as follows. In Sections 2 and 3, the information set is presented (i.e., the set of the considered subindicators); in particular, Section 3 discusses in detail a new Hurst-based inefficiency measure. In Section 4, the employed methodology (i.e., the aggregation method) is discussed. In Section 5, we apply the proposed methodology to European stock market data, while some final remarks are discussed in the conclusions.

2. Measuring Market Efficiency

In the following, we discuss in detail the subindices considered for the construction of the composite efficiency index (CEI). The classical single indicators are the returns’ autocorrelation, volatility, and liquidity. Moreover, we also construct a new efficiency measure based on the time-varying Hurst exponent that is used as an additional subindex, which is discussed in more detail in Section 3 of this paper.

2.1. Autocorrelation

The proposed index exploits several aspects of efficiency for its construction. First of all, the returns’ autocorrelation: according to the EMH, an autoregressive process of any order cannot explain the dynamics of the returns r_t. Hence, we consider the following AR(p) process:

r_t = α_0 + α_1 r_{t−1} + ⋯ + α_p r_{t−p} + ε_t. (1)

Let α = (α_1, …, α_p) be the vector of the estimated coefficients in (1). If the market is efficient, the vector α should contain only values very close to zero. Following Noda [7], we consider the magnitude of market inefficiency (MIM):

MIM = Σ_{i=1}^{p} |α_i|, (2)

where absolute values are used to get rid of sign effects. For this index, deviations from zero represent inefficient markets. Clearly, this measure is not free of weaknesses. For example, market efficiency computed in this way, as with any model-based approach, depends on sampling errors. Nevertheless, (2) represents the first element of the subindicator set.
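As a rough illustration of how an autocorrelation-based subindex can be computed, the sketch below fits an AR(p) model by ordinary least squares and sums the absolute estimated coefficients. The aggregation used here (sum of absolute AR coefficients) and all function names are illustrative assumptions, not the exact specification of Noda [7].

```python
import numpy as np

def fit_ar(returns, p):
    """OLS fit of an AR(p): r_t = a0 + a1*r_{t-1} + ... + ap*r_{t-p} + e_t."""
    r = np.asarray(returns, dtype=float)
    y = r[p:]
    # design matrix: constant plus p lagged columns of the return series
    X = np.column_stack([np.ones(len(y))] +
                        [r[p - i:len(r) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1:]  # AR coefficients, constant excluded

def mim(returns, p=1):
    """Illustrative magnitude of market inefficiency:
    sum of absolute AR coefficients (zero under the random walk null)."""
    return float(np.sum(np.abs(fit_ar(returns, p))))
```

In a rolling-window application, `mim` would be re-evaluated on each one-year window of daily returns to obtain a time-varying series.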

2.2. Volatility

Another important proxy for stock market inefficiency that we consider is volatility. Even if there are several ways of computing volatility, Földvári and Van Leeuwen [9] showed that GARCH models provide a good way of appropriately reconstructing it. Therefore, as a volatility measure, we consider the in-sample predictions obtained by the t-GARCH(1, 1) model of Bollerslev [34]:

r_t = μ + ε_t,  ε_t = σ_t z_t,  σ_t² = ω + α ε_{t−1}² + β σ_{t−1}², (3)

where the innovation term z_t is assumed to be an i.i.d. Student’s t process in order to account for the fat tails of stock returns (e.g., Cerqueti et al. [35]).
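The conditional-variance recursion of a GARCH(1,1) can be sketched as below. This is a minimal filter with the parameters (omega, alpha, beta) taken as given; in practice they would be estimated by maximum likelihood with Student's t innovations, e.g. with a dedicated package, which is not reproduced here.

```python
import numpy as np

def garch_volatility(returns, omega, alpha, beta):
    """Conditional volatility path of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}.
    Parameters are assumed given (alpha + beta < 1 for stationarity)."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    # start the recursion at the unconditional variance
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)
```

A large return shock feeds into next-period variance through the `alpha` term and then decays at rate `beta`, which is the persistence pattern visible in crisis periods.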

2.3. Liquidity

Measuring market liquidity is important as well: indeed, illiquid markets are very inefficient. Although there are several ways of measuring liquidity (for a review, see Gabrielsen et al. [15]), we consider the illiquidity measure developed by Amihud [16], which is one of the most recognized since it can be computed by using only prices and volumes. Moreover, it is also used by policy makers to estimate liquidity in financial markets. The Amihud [16] illiquidity measure is obtained with the following equation:

ILL_t = (1 / D_t) Σ_{d=1}^{D_t} |r_{t,d}| / V_{t,d}, (4)

where r_{t,d} is the return on day d of period t and V_{t,d} is the corresponding trading volume. (4) is a realized measure of illiquidity since it can be computed on a yearly basis, monthly as the average of the daily ratios within each month, or daily as the average of the intradaily ratios. A rough daily measure can be obtained by simply considering the daily ratio in (4). Although the Amihud [16] index, differently from the bid-ask spread, is a measure of price impact, its main advantages are the simplicity of calculation and the easy availability of the data.
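Since the Amihud measure only needs returns and volumes, its computation is a one-liner; the sketch below averages the daily ratios over whatever period is passed in (a month, a year, or a rolling window).

```python
import numpy as np

def amihud_illiquidity(returns, volumes):
    """Amihud (2002) illiquidity: average of |r_d| / V_d over the period.
    Higher values mean a larger price impact per unit of volume,
    i.e. a less liquid market."""
    r = np.asarray(returns, dtype=float)
    v = np.asarray(volumes, dtype=float)
    return float(np.mean(np.abs(r) / v))
```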

3. A New Hurst-Based Efficiency Measure

Finally, we consider a very promising measure of market efficiency based on the fractal market hypothesis (FMH). Exploring efficiency through the FMH requires the estimation of the Hurst exponent. Indeed, Hurst-based inefficiency measures are nowadays well established (e.g., Kristoufek and Vosvrda [17, 18], Sensoy and Tabak [19], and Kristoufek and Vosvrda [20, 21]).

Nevertheless, there are some relevant questions regarding the usage of the Hurst exponent in finance. First of all, the way in which the Hurst exponent is computed is crucial. Indeed, even if different approaches to calculating the Hurst exponent have been proposed in the last decades (see López-García and Requena [36] for an interesting review), several authors (e.g., Lo [37], Sánchez Granero et al. [27], and Weron [38]) claimed that Hurst exponent estimation using classical methodologies lacks precision when the length of the series is not large enough. In addition, Mercik et al. [39], Fernández-Martínez et al. [40], and Sánchez et al. [41] proved that most of the classical algorithms used to calculate the Hurst exponent are valid only for fractional Brownian motions and do not work properly for other kinds of distributions, such as stable ones.

Another important issue lies on the fact that assuming a constant value for the Hurst exponent is unrealistic (e.g., see Bianchi [32], Bianchi et al. [31], and Mattera and Sciorio [42]) and explicitly contradicts the adaptive market hypothesis (AMH). To overcome this limitation, we consider an additional efficiency measure based on a time-varying Hurst exponent.

The mathematical representation of a fractal market is based on the fractional Brownian motion (fBm) and assumes a Hurst exponent that can take any value in (0, 1). A very useful generalization of the fractional Brownian motion that is consistent with the adaptive market hypothesis is represented by the multifractional Brownian motion (mBm).

The multifractional Brownian motion (mBm) was introduced to replace the real H ∈ (0, 1) by a function H(t) ranging in (0, 1). The function H(t) is the regularity function of the mBm.

Corlay et al. [43] started from the definition of a fractional Brownian field. Let (Ω, F, P) be a probability space. A fractional Brownian field on [0, ∞) × (0, 1) is a Gaussian field, noted (W(t, H))_{(t,H)}, such that, for every H in (0, 1), the process (W(t, H))_{t ≥ 0} is a fractional Brownian motion with Hurst parameter H.

A multifractional Brownian motion is simply a “path” traced on a fractional Brownian field. More precisely, it is defined as follows.

Let h : [0, ∞) → (0, 1) be a deterministic continuous function and W be a fractional Brownian field. A mBm on [0, ∞) with functional parameter h is the Gaussian process X defined by X(t) = W(t, h(t)) for all t ≥ 0.

The multifractional Brownian motion can be represented by the following formulation (Bianchi [32]):

X_{H(t)}(t) = K(H(t)) ∫_ℝ [((t − s)_+)^{H(t)−1/2} − ((−s)_+)^{H(t)−1/2}] dB(s), (5)

where the normalizing factor K(H(t)) is equal to

K(H(t)) = [Γ(2H(t) + 1) sin(πH(t))]^{1/2} / Γ(H(t) + 1/2), (6)

and its covariance is given by

E[X_{H(t)}(t) X_{H(s)}(s)] = D(H(t), H(s)) [t^{H(t)+H(s)} + s^{H(t)+H(s)} − |t − s|^{H(t)+H(s)}], (7)

where

D(H(t), H(s)) = [Γ(2H(t) + 1) Γ(2H(s) + 1) sin(πH(t)) sin(πH(s))]^{1/2} / (2 Γ(H(t) + H(s) + 1) sin(π(H(t) + H(s))/2)).

In the mBm, long memory is captured by a time-varying Hurst exponent instead of a static one. Values of the Hurst exponent higher than 0.5 indicate persistence of the time series, while lower values indicate antipersistence. Under efficient markets, prices follow a geometric Brownian motion (gBm), and the Hurst exponent assumes a value of 0.5.

We then introduce a new time-varying measure of market inefficiency, called the Hurst-based efficiency index (HEI), given by the absolute deviation of the empirical (time-varying) Hurst exponent from its theoretical value of 0.5 under efficient markets:

HEI_t = |Ĥ_t − 0.5|. (8)

In this paper, we propose to estimate Ĥ_t with the AMBE method of Bianchi et al. [31] and Bianchi and Pianese [44], which works as follows.

Assuming that the time series follows a multifractional Brownian motion (5), the author considers a discrete version X_{i/(n−1)}, i = 0, …, n − 1, which locally behaves like a fractional Brownian motion with a given exponent within a window of proper length δ. Then, Bianchi [32] derived the following estimator:

Ĥ_{δ,n}(j) = −log(√(π/2) S_{δ,n}(j)) / log(n − 1), (9)

with

S_{δ,n}(j) = (1/δ) Σ_{i=j−δ}^{j−1} |X_{(i+1)/(n−1)} − X_{i/(n−1)}|. (10)

In order to apply estimator (9), its parameters have to be chosen. According to Bianchi [32], the choice of the window length, since it affects the estimator’s variance, should be made with a variance-minimization criterion (see Bianchi [32] for the estimator’s variance formula), while for the remaining parameters the author suggests values suited to financial applications.

Therefore, the Hurst-based efficiency index (HEI) is obtained by replacing in (8) the Hurst exponent estimated with (9).
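A rolling HEI can be sketched as below. As a simplified stand-in for the AMBE estimator, the sketch estimates H with a crude variance-of-increments method (Var(X_{t+k} − X_t) grows like k^{2H} for fractal processes); the estimator choice, window length, and lag range are illustrative assumptions.

```python
import numpy as np

def hurst_variance(x, lags=range(2, 20)):
    """Crude Hurst estimate: half the slope of log Var(X_{t+k} - X_t)
    against log k. Illustrative stand-in for the AMBE estimator."""
    x = np.asarray(x, dtype=float)
    lags = np.asarray(list(lags))
    v = np.array([np.var(x[k:] - x[:-k]) for k in lags])
    slope = np.polyfit(np.log(lags), np.log(v), 1)[0]
    return 0.5 * slope

def hei(x, window=250, **kw):
    """Hurst-based efficiency index HEI_t = |H_t - 0.5| on a rolling window
    of log-prices x."""
    return np.array([abs(hurst_variance(x[t - window:t], **kw) - 0.5)
                     for t in range(window, len(x) + 1)])
```

For a pure random walk (an "efficient" price path), the rolling H estimates hover around 0.5, so the HEI stays close to zero.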

4. Methodology

The last step required for building composite indicators is the aggregation/synthesis of the subindices. In this respect, several authors (e.g., Becker et al. [45] and Karagiannis [46]) agree that this is one of the most critical aspects in the definition of a composite index because the weighting procedures reflect the relevance of each indicator in determining the overall composite index.

Different schemes have been proposed in the literature, but none of them is free from weaknesses (for an overview, see Greco et al. [47]).

The first approach is to consider an equal weighting scheme. This scheme is commonly employed when it is assumed that each indicator has the same informative power with respect to the phenomenon under investigation. Despite its simplicity, equal weighting takes into account neither the variability of the different indicators nor their relationship structure.

To avoid equal weighting, the literature has considered specific statistical methods. Some popular approaches are based on correlation analysis and linear regression.

In the correlation-based approach, it is possible to determine the weights by considering the correlation between each indicator and a selected benchmark (Kantiray [48]), such that the stronger the correlation of a given subindex, the higher its weight. The main drawbacks of the correlation-based approach lie in the fact that correlations may be statistically insignificant and do not necessarily imply causation, only taking into account whether different indicators move in the same direction.

Regression analysis, instead, allows exploring the causal linkage between the individual indicators and a benchmark. Indicators’ weights are retrieved by estimating a linear model. However, even if regression analysis exploits causal relationships in weighting the subindicators, as in correlation analysis, the choice of an appropriate endogenous variable is required.

This aspect is problematic since, most of the time, a composite indicator is built with the aim of measuring a latent concept. This is the case of well-being (e.g., Slottje [49] and Haq and Zia [50]); another interesting example is financial market inefficiency, for which several proxies exist (e.g., volatility and market liquidity) but a single measure does not.

Therefore, a third approach for constructing composite indicators is based on factorial analysis and principal component analysis (PCA). In these cases, no benchmark has to be chosen, and the composite indicator is obtained as the synthesis of a set of subindices that roughly explain the phenomenon. The composite indicator proposed in this paper falls within this last class of approaches.

More in detail, we start by considering a factor model structure. Factor models have been widely applied in both theoretical and empirical finance. An important distinction to be made is between observed and latent factor models. Observed factors are known and based on outside information. On the contrary, latent factors are unknown and need to be estimated.

Let T and N denote the sample size in the time series and the number of subindices, respectively. For i = 1, …, N and t = 1, …, T, the observation x_{it} has a factor structure represented as

x_{it} = λ_i′ F_t + e_{it}, (11)

where F_t is the latent common factor, λ_i is the vector of factor loadings, and e_{it} is an idiosyncratic component.

The factor model (11) can be written in matrix form as follows:

X = FΛ′ + e, (12)

where F is the T × r matrix of factors and Λ is the N × r matrix of factor loadings. Our objective is to estimate both F and Λ. The most common approach for estimating the static latent factor is the principal component estimator (e.g., Stock and Watson [51] and Bai and Ng [33]).

By choosing the normalizations F′F/T = I_r and Λ′Λ diagonal, we consider the following objective function minimization:

min_{F,Λ} tr[(X − FΛ′)′(X − FΛ′)], (13)

where tr(·) denotes the matrix trace. The estimator for F, denoted by F̂, is √T times the matrix of unitary eigenvectors associated with the r largest eigenvalues of the matrix XX′, in decreasing order. Then, Λ̂ = X′F̂/T is the N × r matrix of estimated factor loadings. Stock and Watson [51] showed that the sample eigenvectors of XX′ asymptotically behave like their population counterparts, and then used these eigenvectors to consistently estimate F.
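The principal component estimator can be sketched in a few lines of linear algebra: the factor estimate is built from the top eigenvectors of XX′. The function name is an assumption; the normalization follows the F′F/T = I_r convention described above.

```python
import numpy as np

def principal_component_factor(X, r=1):
    """Principal-component estimator of the latent factor model X = F L' + e.
    X is a T x N matrix of (standardized) subindices; returns the T x r
    factor estimate F_hat and the N x r loadings L_hat."""
    X = np.asarray(X, dtype=float)
    T = X.shape[0]
    # eigh returns eigenvalues in ascending order; reverse for the largest r
    eigval, eigvec = np.linalg.eigh(X @ X.T)
    F_hat = np.sqrt(T) * eigvec[:, ::-1][:, :r]   # normalization F'F/T = I_r
    L_hat = X.T @ F_hat / T
    return F_hat, L_hat
```

Applied to the standardized subindex panel (MIM, volatility, ILL, HEI), the first column of `F_hat` would play the role of the CEI; the sign of an eigenvector is arbitrary, so the estimated factor may need to be flipped to align with the subindices.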

We consider the composite efficiency index (CEI) to be the estimated factor. The choice of a single common factor is reasonable because, in the empirical application, we find that the first factor explains most of the total variance. Indeed, it is important to highlight that if the first factor accounts for at least 65% of the total variance, the latent concept is usually considered unidimensional, and the first factor is assumed to be the composite indicator (see Nardo et al. [52]). Moreover, by employing the Bai and Ng [53] procedure, we find that only one latent factor (i.e., r = 1) can be used to describe market efficiency for all the considered stock markets. Alternatively, the Kaiser rule can be applied to identify the number of factors (the Kaiser rule suggests dropping factors with eigenvalues below 1, the motivation being that it makes no sense to add a factor explaining less variability than a single indicator), but the Bai and Ng [53] procedure is more appropriate when dealing with factor models. The Bai and Ng procedure works for both strict and approximate factor models. In approximate factor models, some correlation in the idiosyncratic components is allowed; this is more general than the strict factor model, where the idiosyncratic components are uncorrelated. Their procedure can also be applied to determine the optimal number of factors for strict factor models.

Factor models and PCA are inevitably related concepts. However, differently from PCA, the former involves the estimation of a model where some observable variables are determined by both common factors and unique factors. Moreover, a factor model has its own covariance structure, such that the total data variability can be decomposed into the part accounted for by the common factors and the part due to the unique factors. Nevertheless, PCA is commonly used to estimate the common latent factors.

5. Application to European Stock Markets

5.1. Data and Subindices

To show the usefulness of the proposed index, we provide an application to the major European stock markets. In detail, we consider the price and volume time series of the stock exchange indices of the Netherlands (AEX), Austria (ATX), Belgium (BEL20), France (CAC40), Germany (DAX30), Spain (IBEX35), and Switzerland (SMI). For each time series, we consider daily data from January 1, 2003, to August 1, 2021. For some market indices (i.e., ATX and IBEX), we have shorter time series because of data availability. However, the different time series lengths are not an issue for our application. The time series of the log returns are shown in Figure 1.

The ingredients needed for the computation of the composite indicator are those described in Section 2. To compute the MIM (2), the indicator of market efficiency based on the returns’ autocorrelation structure has been obtained by considering a time-varying AR process (TV-AR) as in [7]. In particular, following a rolling-window approach, we consider an estimation window of one year of daily observations, which is updated day by day. Therefore, for each market, we obtain a MIM time series whose length equals that of the original series minus the estimation window. Figure 2 shows the evolution over time of the MIM for each of the considered stock market indices. The higher the MIM value, the higher the market inefficiency.

Then, another indicator of market efficiency that we consider is based on the estimated conditional volatility. Following authors such as Földvári and Van Leeuwen [9] and Liu and Chen [11], we consider in-sample predictions from the t-GARCH(1, 1) model (3). Figure 3 reports the evolution over time of the estimated conditional volatility.

It is clear from Figure 3 that volatility increases during periods of crisis. Market liquidity is another important measure of market efficiency since, as we have seen, a liquid market is more efficient than an illiquid one. We consider the illiquidity measure of Amihud [16] in (4). As in the case of the MIM index, to compute the ILL index we used a rolling-window approach with an estimation window of one year of observations. Since the ILL index is a realized measure, we compute the market illiquidity over the last year by updating the indicator daily. The results for the considered sample of market returns are shown in Figure 4.

Finally, we have the new measure of market inefficiency based on the time-varying Hurst exponent. As explained in Section 3, the Hurst exponent is computed with the AMBE method proposed by Bianchi [32] and Bianchi et al. [31]. The Hurst estimates are shown in Figure 5.

Figure 5 somehow validates the adaptive market hypothesis (AMH), since we observe many deviations from the value of H = 0.5, under which the market should be efficient. Moreover, the degree of efficiency changes over time as well, and the severity of the deviation is not constant over time. Indeed, while in some time periods we observe strongly persistent behaviors (i.e., H_t > 0.5), in others the markets seem to be antipersistent.

Nevertheless, any deviation from 0.5 represents a deviation from the hypothesis of an efficient market. Therefore, we derive the Hurst-based efficiency index (HEI) as the absolute deviation of the Hurst exponent in Figure 5 from the value 0.5, as shown in (8).

The resulting time series are shown in Figure 6.

A value of HEI_t close to zero means that the market is somehow efficient, and the bigger the HEI value, the higher the degree of market inefficiency. We observe huge spikes in the HEI time series in the presence of crises. In particular, by considering Figures 2–6 together, we can see that the time periods with peaks in the HEI index are associated with peaks in the other inefficiency measures, i.e., MIM, ILL, and volatility. Therefore, all the indices increase with increasing inefficiency, and the HEI index takes higher values when the market becomes more illiquid and highly volatile.

Nonetheless, we aim to measure the complexity of financial markets with a single index that incorporates all this information. To this aim, we developed the composite efficiency index by following the methodology explained in Section 4.

5.2. Composite Efficiency Index: In-Sample Analysis

In the following, we aim to describe the in-sample dynamics of the developed CEI. As previously explained, the CEI is computed as the latent factor underlying the movements of the aforementioned subindices. Such a latent factor is consistently estimated with the principal component (PC) estimator. The CEI time series, for each stock market, is shown in Figure 7.

Clearly, different markets show very different behaviors of the index. This depends on the different importance of each subcomponent as the main driving force of inefficiency across markets.

As a consequence, while some markets present trends in efficiency levels (e.g., the ATX and PSI markets), others show more random behaviors around their mean. This fact raises the question of efficiency predictability. With a predictable market efficiency, policy makers would be able to identify appropriate policies with the aim of making markets more efficient in the near future. Similarly, stock traders could adjust their investment strategies accordingly.

Figure 8 shows the autocorrelation structure of the CEI for the sample of stock markets.

Figure 8 highlights a very strong autocorrelation structure of the CEI across all the considered markets. Hence, the CEI shows a very persistent behavior. This evidence suggests possibly strong predictability. To get further evidence, we test whether the CEIs are random walks (RW) or not. In doing so, we perform the Ljung–Box [54] test on the CEI’s first difference ΔCEI_t. The Ljung–Box test [54], usually used to test the white noise hypothesis, is based on the following test statistic:

Q = T(T + 2) Σ_{k=1}^{ℓ} ρ̂_k² / (T − k), (14)

where T is the length of the time series, ρ̂_k is the k-th autocorrelation coefficient, and ℓ is the number of lags used for testing. Hyndman and Athanasopoulos [55] recommended using ℓ = 10 for nonseasonal data. Autocorrelations at different lags are computed with the usual estimator

ρ̂_k = Σ_{t=k+1}^{T} (y_t − ȳ)(y_{t−k} − ȳ) / Σ_{t=1}^{T} (y_t − ȳ)²,

with ȳ being the sample average of the time series y_t. A large value of the statistic Q, which follows a χ² distribution with ℓ degrees of freedom, indicates that there is a significant autocorrelation structure in the time series. Under the null hypothesis, the time series is a white noise. In applying the Ljung–Box test, we exploit the fact that if CEI_t is a random walk, its first difference must be a white noise process. The results are reported in Table 1.
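The Ljung–Box statistic described above can be computed directly from the sample autocorrelations, as in this sketch (the function name is an assumption):

```python
import numpy as np

def ljung_box(x, lags=10):
    """Ljung-Box Q statistic: Q = T(T+2) * sum_k rho_k^2 / (T - k).
    Under the white noise null, Q ~ chi-squared with `lags` d.o.f."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    # sample autocorrelations rho_1, ..., rho_lags
    rho = np.array([np.sum(xc[k:] * xc[:-k]) / denom
                    for k in range(1, lags + 1)])
    return T * (T + 2) * np.sum(rho ** 2 / (T - np.arange(1, lags + 1)))
```

With the recommended ℓ = 10, Q would be compared with the 5% chi-squared critical value of about 18.31 (10 degrees of freedom): larger values reject the white noise null.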

As Table 1 shows, by rejecting the null hypothesis of white noise for the CEI first differences, we reject the random walk hypothesis for the CEI time series in all markets. Therefore, market efficiency and its variation follow predictable stochastic processes. However, each market has its own data-generating process.

5.3. Composite Efficiency Index: Out-of-Sample Analysis

Another important aspect to study is whether market efficiency can be useful for predicting stock returns. In doing so, we conduct an out-of-sample analysis following the empirical approach of the exchange rate predictability literature (e.g., Meese and Rogoff [56], Rossi [57], Molodtsova and Papell [58], and Mattera et al. [59]). Indeed, we compare the forecasts obtained with a random walk without drift model

r_{t+1} = ε_{t+1}, (15)

with those obtained by the following predictive regression:

r_{t+1} = β_0 + β_1 ΔCEI_t + ε_{t+1}, (16)

where β_0 is the constant term, β_1 represents the impact of ΔCEI_t on the returns, and ε_{t+1} is an error term. Note that we use the first difference of the CEI instead of its levels. This is because it is not guaranteed a priori that the composite index is stationary, since the stationarity of the composite indicator clearly depends on the stationarity of the subindicators. Table 2 contains the results of the augmented Dickey–Fuller test, showing that some of the CEIs present either unit roots or deterministic trends.

This is evident also from the indices’ autocorrelation structure shown in Figure 8. In first differences, instead, all the indices are stationary, and the consistency of the OLS estimator is guaranteed. Moreover, the first difference can be seen as a measure of the CEI variation rate. Hence, we are interested in understanding whether the variation in inefficiency is able to predict the variation in prices, i.e., the returns.

In order to evaluate the out-of-sample forecast accuracy, we split the dataset into a training set, used to estimate the parameters and obtain the forecasts, and a test set with one-third of the observations, used for the predictive accuracy tests. In particular, we conduct an experiment following a rolling-sample approach. Given the dataset, we choose an estimation window of length equal to two-thirds of the dataset. Then, in each period, we use the observations in the current window to estimate the parameters needed for obtaining the one-step-ahead forecast. This process is repeated, adding the return for the next period to the dataset and dropping the earliest one, until the end of the dataset is reached. The outcome is, for each strategy, a time series of out-of-sample forecasts.
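The rolling comparison between the driftless random walk and the predictive regression can be sketched as follows; the function name and the return of plain MSFEs (rather than the full test statistics) are simplifying assumptions.

```python
import numpy as np

def rolling_forecasts(returns, predictor, window):
    """Rolling one-step-ahead forecasts: a driftless random walk (which
    forecasts a zero return) versus the predictive regression
    r_{t+1} = b0 + b1 * x_t + e_{t+1}, re-estimated on each window.
    Returns the MSFE of each strategy."""
    r = np.asarray(returns, dtype=float)
    x = np.asarray(predictor, dtype=float)
    rw, reg, actual = [], [], []
    for t in range(window, len(r) - 1):
        # regress r_{s+1} on x_s over the current window
        X = np.column_stack([np.ones(window), x[t - window:t]])
        b, *_ = np.linalg.lstsq(X, r[t - window + 1:t + 1], rcond=None)
        reg.append(b[0] + b[1] * x[t])   # regression forecast of r_{t+1}
        rw.append(0.0)                   # driftless RW forecast of a return
        actual.append(r[t + 1])
    rw, reg, actual = map(np.array, (rw, reg, actual))
    return np.mean((actual - rw) ** 2), np.mean((actual - reg) ** 2)
```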

Then, for each i-th stock market, we compute the forecasting error as follows:

e_{i,t+1} = r_{i,t+1} − r̂_{i,t+1}. (17)

In the end, given the forecasting error in (17), we compute the mean square forecast error (MSFE) and use the Clark and West [60] test to assess whether the forecasts obtained from model (16) are statistically different from those of the benchmark RW model (15). Under the null hypothesis of the Clark and West [60] test, the two competing forecasting models have statistically equal predictive ability. If the MSFE of model (16) is lower than, and statistically different from, the MSFE of model (15), we can argue that the variation of the market efficiency level, measured by the CEI, is a good predictor of stock market returns. Moreover, for the sake of robustness, we also consider the results of the Diebold and Mariano [61] test of predictive accuracy. The results of the out-of-sample experiment are shown in Table 3.
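A compact sketch of the comparison, assuming the standard Clark–West adjusted statistic for nested models; the forecast series here are synthetic placeholders, not the paper's results.

```python
import numpy as np

def clark_west(actual, pred_null, pred_alt):
    """Clark-West adjusted t-statistic for nested forecast comparison.
    pred_null: forecasts from the parsimonious model (here the RW: zeros).
    Large positive values reject equal predictive ability in favour of the
    larger model (one-sided standard normal critical values)."""
    f = ((actual - pred_null) ** 2
         - (actual - pred_alt) ** 2
         + (pred_null - pred_alt) ** 2)   # adjustment term for nesting
    return f.mean() / (f.std(ddof=1) / np.sqrt(len(f)))

rng = np.random.default_rng(1)
n = 100
actual = rng.normal(size=n)
pred_rw = np.zeros(n)                         # model (15): forecast is zero
pred_reg = actual + 0.1 * rng.normal(size=n)  # stand-in for accurate forecasts

msfe_rw = np.mean((actual - pred_rw) ** 2)
msfe_reg = np.mean((actual - pred_reg) ** 2)
cw_stat = clark_west(actual, pred_rw, pred_reg)
```

Because the RW model is nested in the predictive regression, the unadjusted MSFE difference is biased toward the smaller model; the third term in `f` corrects for this, which is why the Clark–West test is preferred here over a plain MSFE comparison.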

According to the Clark and West test [60], for all stock markets the forecasts obtained through regression (16) are statistically different from those of the random walk benchmark (see columns 3 and 4 of Table 3). Moreover, the MSFE of model (16) is always lower than the random walk MSFE, meaning that the CEI is a useful predictor of market returns. This is a generalized result because the forecast errors of the two competing models are statistically different, i.e., the observed differences are not sample driven.

6. Conclusion

Since Fama's seminal work on the topic, several papers have studied market efficiency. However, market efficiency is a latent concept that is difficult to measure by means of a single indicator. Indeed, market efficiency has been investigated with various methods; among the most important are the random walk hypothesis, the martingale hypothesis, and liquidity.

Nevertheless, most studies in the literature are based on the EMH, which relies on several strong assumptions such as independence, normality, and many others, while several empirical studies have shown that stock returns exhibit statistical properties known as stylized facts. A very important stylized fact is the presence of long-range dependence, also called long memory. The long memory property describes whether and how much past events influence the future evolution of the process.

Long memory is commonly studied through the Hurst exponent, introduced by Hurst [29] and later developed by Mandelbrot and Wallis [62], which governs the rate of decay of the autocorrelation function of a time series.

The fractal market hypothesis (FMH) claims that the Hurst exponent is related to market efficiency, since a value of H = 0.5 reflects efficient market conditions. The mathematical representation of stock prices under the FMH is given by the fractional Brownian motion (fBm). However, a main drawback of the fBm lies in the fact that the Hurst exponent is assumed to be static rather than time-varying. Therefore, constructing an efficiency measure with a static Hurst exponent contradicts the adaptive market hypothesis (AMH).

By assuming a multifractional Brownian motion (mBm) for stock prices, instead, we compute a time-varying Hurst exponent that can potentially explain how efficiency changes over time. In this respect, following Bianchi [32], we use a simple measure of market inefficiency, called the Hurst-based efficiency index (HEI), computed as the absolute deviation of the time-varying Hurst exponent from 0.5.
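The idea can be sketched as follows. This is a rough illustration only: the classical rescaled-range (R/S) regression stands in for the mBm-based estimator of Bianchi [32], and the window length and block sizes are arbitrary choices for the demonstration.

```python
import numpy as np

def hurst_rs(x):
    """Hurst exponent via a simple rescaled-range (R/S) regression:
    log E[R/S](n) grows roughly as H * log(n) over block sizes n."""
    log_n, log_rs = [], []
    for n in (8, 16, 32, 64):
        rs = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviations
            s = chunk.std(ddof=1)
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)  # rescaled range
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_n, log_rs, 1)  # slope estimates H
    return slope

def hei_series(returns, window=128):
    """Time-varying inefficiency HEI_t = |H_t - 0.5| on rolling windows."""
    return np.array([abs(hurst_rs(returns[t - window:t]) - 0.5)
                     for t in range(window, len(returns) + 1)])

rng = np.random.default_rng(2)
white = rng.normal(size=512)  # i.i.d. returns: H should stay close to 0.5
index = hei_series(white)     # HEI near zero signals efficiency
```

For an i.i.d. series the estimated H hovers around 0.5 (small-sample R/S estimates are slightly biased upward), so the HEI stays close to zero; persistent or antipersistent dynamics push it away from zero, signalling inefficiency.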

Then, to capture all the efficiency dimensions more accurately, we computed a composite indicator combining the developed Hurst-based inefficiency measure, conditional volatility, and market liquidity.

We apply the proposed indicator to European stock market indices to study the behavior of inefficiency over time. We find that the proposed index, called CEI, does not follow a random walk process, providing evidence of efficiency predictability. This result could be relevant to both policy makers and traders. We also use the CEI as a predictor in a log-return predictive regression problem and find that the CEI is a relevant predictor for daily stock returns as well.

Data Availability

All the data can be found at https://es.finance.yahoo.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

Juan E. Trinidad-Segovia was supported by the Spanish Ministerio de Ciencia, Project no. PGC2018-101555-B-I00, and Universidad de Almería, Project no. UAL18-FQM-B038-A (UAL/CECEU/FEDER).