Complexity

Volume 2017, Article ID 1768264, 13 pages

https://doi.org/10.1155/2017/1768264

## Efficient Computation of Multiscale Entropy over Short Biomedical Time Series Based on Linear State-Space Models

^{1}BIOtech, Department of Industrial Engineering, University of Trento, Trento, Italy
^{2}Dipartimento di Energia, Ingegneria dell’Informazione e Modelli Matematici (DEIM), University of Palermo, Palermo, Italy
^{3}Department of Biomedical Sciences for Health, University of Milan, Milan, Italy
^{4}Department of Cardiothoracic-Vascular Anesthesia and Intensive Care, IRCCS Policlinico San Donato, San Donato Milanese, Milan, Italy
^{5}Department of Physiology, Jessenius Faculty of Medicine, Comenius University in Bratislava, Mala Hora 4C, 03601 Martin, Slovakia
^{6}Biomedical Center Martin, Jessenius Faculty of Medicine, Comenius University in Bratislava, Mala Hora 4C, 03601 Martin, Slovakia
^{7}Bruno Kessler Foundation, Trento, Italy

Correspondence should be addressed to Luca Faes; faes.luca@gmail.com

Received 18 September 2017; Accepted 13 November 2017; Published 7 December 2017

Academic Editor: Anne Humeau-Heurtier

Copyright © 2017 Luca Faes et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The most common approach to assess the dynamical complexity of a time series across multiple temporal scales makes use of the multiscale entropy (MSE) and refined MSE (RMSE) measures. In spite of their popularity, MSE and RMSE lack an analytical framework allowing their calculation for known dynamic processes and cannot be reliably computed over short time series. To overcome these limitations, we propose a method to assess RMSE for autoregressive (AR) stochastic processes. The method makes use of linear state-space (SS) models to provide the multiscale parametric representation of an AR process observed at different time scales and exploits the SS parameters to quantify analytically the complexity of the process. The resulting linear MSE (LMSE) measure is first tested in simulations, both theoretically to relate the multiscale complexity of AR processes to their dynamical properties and over short process realizations to assess its computational reliability in comparison with RMSE. Then, it is applied to the time series of heart period, arterial pressure, and respiration measured for healthy subjects monitored in resting conditions and during physiological stress. This application to short-term cardiovascular variability documents that LMSE can describe better than RMSE the activity of physiological mechanisms producing biological oscillations at different temporal scales.

#### 1. Introduction

An intrinsic feature of almost all physiological systems, which is clearly visible in the time course of the variables measured from these systems, is their dynamical complexity. It is indeed widely acknowledged that physiological systems such as the brain, the cardiovascular system, and the muscular system produce output signals that exhibit highly complex dynamics, resulting from the combined activity of several mechanisms of physiological regulation which are coupled with each other through structural and functional pathways [1–4]. Since these multiple and simultaneously active mechanisms typically operate across multiple temporal scales, a surge of interest has emerged in the last two decades in computational methods able to connect the complexity of a dynamic oscillation to its temporal scale. The research in this context has been driven by the work of Costa et al. [5], who introduced multiscale entropy (MSE) as a measure of the complexity of a time series evaluated as a function of the time scale at which the series is observed. Since its introduction, MSE has been successfully employed in several fields of science [6], becoming a prevailing method to quantify the complexity of biomedical time series [7, 8] and gaining particular popularity in the analysis of brain [9, 10] and cardiovascular variability [11, 12] signals.

In spite of its acknowledged usefulness, MSE has been shown to present some shortcomings which have led many authors to propose improvements and modifications of the original algorithm [6]. The main problems identified in the original MSE formulation are related to both its defining algorithmic steps, the rescaling procedure that changes the temporal scale of the observed series progressively filtering out the shorter scales and keeping the longer ones, and the computation of the entropy rate of the rescaled time series performed by means of the sample entropy (SampEn) metric [13]. Specifically, it has been reported [11, 14] that the rescaling procedure makes use of a suboptimal low-pass filter (i.e., an averaging filter) which cannot prevent aliasing and that the progressive application of SampEn employs a badly designed coarse-graining step which makes MSE artificially decrease as a function of the time scale. These problems were overcome by the introduction of the so-called refined MSE (RMSE) [11], which exploits well-designed procedures for low-pass filtering and coarse graining.

Even though RMSE and other refinements and extensions [6] have improved the performance and widened the applicability of this methodology, some limitations still remain which hamper a full exploitation of the concept underlying MSE. The main limitation is that MSE requires long time series to be reliably computed: like any nonparametric entropy estimator, SampEn is highly data-demanding, and the issue is exacerbated at increasing time scales, where the rescaled signals get progressively shorter. This issue is only partly addressed by versions of MSE that increase the number of patterns over which SampEn is calculated [15, 16], because the problem ultimately lies in the small number of cycles of the slowest oscillations available for a given data length. Another limitation is that the length of the temporal scales which can be explored is usually expressed in number of samples and is restricted to integer values, thus limiting the possibility of fine-tuning the elimination of the fast temporal scales. These issues strongly limit the applicability of MSE or RMSE to short biomedical time series, such as those typically considered in short-term cardiovascular variability, where the dynamics of interest unfold over a few minutes (~300 samples) [17]. Moreover, in this application context fine-tuning of the filtering process is fundamental to elicit changes in complexity related to the oscillatory rhythms typically observed in cardiovascular variability, that is, low-frequency (LF, from 0.04 Hz to 0.15 Hz) and high-frequency (HF, from 0.15 Hz to 0.5 Hz) oscillations [18, 19].
A further issue with existing MSE techniques is the lack of theoretical procedures to derive exact values of the multiscale complexity for known dynamical processes which one can study analytically and fully control; though theoretical, this issue has practical implications, as analytical methods would allow one to assess the estimation bias of existing MSE measures.

The present study introduces a new approach for the evaluation of multiscale complexity that is explicitly designed to address the limitations of existing MSE methods described above. The approach builds on recent theoretical advances providing exact techniques for the analytical computation of information-theoretic measures, including entropy rates, for linear autoregressive (AR) stochastic processes [20–23]. The key steps of the derivation of the new measure of multiscale complexity, which we denote as linear MSE (LMSE), are the closed-form representation of filtered and downsampled AR processes as state-space (SS) processes and the analytical quantification of complexity from SS parameters. Following these steps, we devise a procedure which allows an exact computation of the refined version of MSE for linear processes, starting from their AR parameters and from the desired scale factor. Additionally, a technique to set the scale factor at any rational number instead of an integer number is devised, thus allowing the fine-tuning of the filtering step of MSE computation. The proposed approach is tested on simulations of linear stochastic processes, first in a theoretical analysis aimed at relating multiscale complexity to the dynamical properties of the process without the unavoidable errors connected with estimation procedures and then in realizations of these simulated processes generated to assess the computational reliability of the LMSE compared with the traditional RMSE estimator. Moreover, the comparative ability of LMSE and RMSE in assessing the multiscale complexity of short-term cardiovascular variability is assessed on the beat-to-beat time series of the heart period, arterial pressure, and respiration measured in a large group of healthy subjects resting in a relaxed state and during two commonly studied conditions of physiological stress, that is, postural stress and mental stress.

#### 2. Approaches to Multiscale Entropy Analysis

##### 2.1. Multiscale Entropy

Multiscale entropy (MSE), originally proposed by Costa et al. [5], is a measure assessing the complexity of a process across multiple temporal scales. Its computation relies first on rescaling the observed process (i.e., focusing on a specific range of temporal scales) and then on assessing the dynamical complexity of the rescaled process through the computation of its rate of entropy production.

Specifically, let us consider a discrete-time, stationary stochastic process $x$ with zero mean and variance $\sigma_x^2$. Let us further define $x(n)$ as the variable obtained sampling the process at the discrete time $n$ and set an integer scale factor $\tau$. The rescaling step is aimed at eliminating the fast temporal scales and is performed applying the following transformation to the original process $x$, which averages $\tau$ consecutive samples to yield the rescaled process $x_\tau$:

$$x_\tau(n) = \frac{1}{\tau} \sum_{i=0}^{\tau-1} x(n\tau - i); \quad (1)$$

the rescaled process is denoted here as $x_\tau$ because it is a downsampled version of the original process $x$; in fact, in (1) one sample of the process $x_\tau$ is obtained by taking one sample every $\tau$ values of the averaged version of $x$, where averaging is performed over *τ* consecutive samples. The second part of the procedure is implemented by calculating the conditional entropy of the present state of the rescaled process, $x_\tau(n)$, given its past states, $x_\tau(n-1), x_\tau(n-2), \ldots$ In MSE, this is accomplished approximating the past history of the rescaled process with the finite-length pattern $x_\tau^m(n-1) = [x_\tau(n-1), \ldots, x_\tau(n-m)]$ and then estimating the conditional entropy by means of the Sample Entropy (SampEn) metric [13]. SampEn is an estimate of the probability that patterns which are detected as similar in the $m$-dimensional space remain similar also in the $(m+1)$-dimensional space when they are incremented with their future values, that is, the probability that $[x_\tau(n), x_\tau^m(n-1)]$ and $[x_\tau(n'), x_\tau^m(n'-1)]$ are similar given that $x_\tau^m(n-1)$ and $x_\tau^m(n'-1)$ are similar; similarity is assessed through a coarse-graining procedure which applies the Heaviside step function with parameter $r$ on the distance between patterns (i.e., $r$ is a threshold such that two patterns are similar if their distance in the $m$-dimensional space is lower than $r$, where the distance is assessed using the maximum norm).

The MSE measure resulting from the above procedure, which we denote as $\mathrm{MSE}(\tau)$, quantifies the dynamical complexity of the original process $x$ observed at scale $\tau$. The free parameters of the MSE estimator are the length $m$ of the patterns used to approximate the past of the process and the threshold $r$ that sets the similarity between patterns. In the application of MSE, the pattern length is limited to a few samples (typically, $m = 2$), while the threshold distance is a fraction of the standard deviation of the original process (typically, $r = 0.2\,\sigma_x$).
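The two algorithmic steps described above — coarse graining as in (1), followed by SampEn estimation on the rescaled series — can be sketched as follows. This is an illustrative implementation (function names are ours, and the quadratic-time pattern search is kept naive for clarity), not the toolbox code:

```python
import math

def coarse_grain(x, tau):
    """Rescaling step of the original MSE: average tau consecutive samples,
    keeping one value per non-overlapping block (downsampling by tau)."""
    return [sum(x[i * tau:(i + 1) * tau]) / tau for i in range(len(x) // tau)]

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that patterns
    similar over m samples (max-norm distance below r) remain similar over
    m + 1 samples; self-matches are excluded. Assumes at least one match."""
    n = len(x)
    if r is None:
        mu = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mu) ** 2 for v in x) / n)  # r = 0.2 * std

    def similar(i, j, length):
        return max(abs(x[i + k] - x[j + k]) for k in range(length)) < r

    matches_m = matches_m1 = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if similar(i, j, m):
                matches_m += 1
                if similar(i, j, m + 1):
                    matches_m1 += 1
    return -math.log(matches_m1 / matches_m)

def mse(x, tau, m=2):
    """MSE at scale tau; the tolerance r stays fixed at 0.2 times the standard
    deviation of the ORIGINAL series, as in the original formulation [5]."""
    mu = sum(x) / len(x)
    r = 0.2 * math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    return sample_entropy(coarse_grain(x, tau), m=m, r=r)
```

For a perfectly regular series (e.g., a ±1 alternation) every match in dimension $m$ also matches in dimension $m+1$, so SampEn is zero; irregular series yield higher values.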

##### 2.2. Refined Multiscale Entropy

The original MSE formulation suffers from two main limitations: the suboptimal procedure for elimination of the fast temporal scales implemented by (1), which tends to introduce spurious oscillations in the rescaled time series, and the fact that the threshold parameter of the coarse graining is kept at a constant value for all time scales, which brings about an artificial reduction of the MSE values with increasing scales. Valencia et al. [11] proposed a refined MSE (RMSE) measure that is aimed at overcoming these limitations. The solution to the first limitation is based on the rationale that the rescaling procedure actually consists of two steps: a filtering step which eliminates the fast temporal scales from the original process and a downsampling step that eliminates the redundancy introduced by the first step. Accordingly, the filtering step was designed to limit as much as possible the aliasing that can occur with the following downsampling step.

In fact, the change of scale of a process *x* is performed first applying a low-pass filter to obtain the filtered process $\tilde{x}(n)$ and then reducing the sampling rate of the filtered process to obtain the rescaled (downsampled) process *x*_{d}(*n*). The two steps yield, respectively, the processes

$$\tilde{x}(n) = \sum_{k=0}^{q} b_k x(n-k) - \sum_{k=1}^{s} a_k \tilde{x}(n-k), \quad (2a)$$

$$x_d(n) = \tilde{x}(n\tau), \quad (2b)$$

where $b_k$ and $a_k$ are the filter coefficients and $q$ and $s$ are the filter orders. As it averages $\tau$ consecutive samples, the original MSE formulation [5] implicitly applies a finite impulse response (FIR) low-pass filter [24] of orders $q = \tau - 1$ and $s = 0$ (the filter is nonrecursive) and with coefficients $b_k = 1/\tau$ for each $k$; the cutoff frequency of the filter is constrained to the value $f_\tau = 1/(2\tau)$. To improve elimination of the fast temporal scales and satisfy the Shannon theorem in the subsequent downsampling step, the RMSE approach implements an infinite impulse response (IIR) filter avoiding ripples in the stop band; specifically, a Butterworth filter [24] is implemented in which the coefficients $b_k$ and $a_k$ are set to realize a low-pass implementation with cutoff frequency $f_\tau = 1/(2\tau)$ [11]. This choice brings also the advantage that, since the cutoff frequency can take any real value, in RMSE the scale factor $\tau$ is not constrained to integer numbers as in MSE; this allows filtering out the oscillatory components of the original process with a better resolution before computing the complexity of the rescaled process.
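The claim that the averaging filter cannot prevent aliasing can be checked numerically from its frequency response. The short sketch below (our own illustration; the function name is not from the paper) evaluates the magnitude response $|H(f)| = |\sin(\pi f \tau)/(\tau \sin(\pi f))|$ of the length-$\tau$ moving-average FIR filter:

```python
import math

def moving_average_gain(f, tau):
    """Magnitude response, at normalized frequency f (cycles/sample,
    0 <= f <= 0.5), of the length-tau averaging FIR filter implicitly
    applied by the original MSE coarse-graining."""
    if f == 0.0:
        return 1.0  # unit DC gain (coefficients sum to 1)
    return abs(math.sin(math.pi * f * tau) / (tau * math.sin(math.pi * f)))
```

For $\tau = 2$ the nominal cutoff is $f_\tau = 0.25$, yet the gain at $f = 0.3$ is still about 0.59, so components well above the cutoff leak through and alias after downsampling; a Butterworth low-pass with the same cutoff attenuates them far more sharply, which is the rationale of the refined rescaling step.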

Besides the type of filter, another crucial difference exists between the original and refined formulations of MSE. Whereas in MSE the parameter $r$ is constant for all scale factors, as it is set at a fraction of the standard deviation of the original process (e.g., $r = 0.2\,\sigma_x$), in RMSE this parameter is set at a fraction of the standard deviation of the rescaled process (e.g., $r = 0.2\,\sigma_{x_d}$). This choice is meant to remove the dependence of the estimated conditional entropy on the decrease of variance due to filtering. As a consequence of this refinement, RMSE does not exhibit a tendency to decline with the time scale caused by the inherent reduction of the variance of the filtered process: for example, for a white noise process MSE decreases with the time scale while RMSE is constant.

##### 2.3. Linear Multiscale Entropy

In this section we present a method to assess the multiscale complexity of linear Gaussian stochastic processes. The method is based on the fact that, if the variables obtained sampling the considered process have a joint Gaussian distribution, all the variability that defines the entropy rate of the process is fully captured by a linear autoregressive (AR) model [25]. Accordingly, let us consider the linear AR representation of the process $x$, defined as

$$x(n) = \sum_{k=1}^{p} A_k x(n-k) + e(n), \quad (3)$$

where $p$ is the order of the process, $A_k$, $k = 1, \ldots, p$, are linear regression coefficients describing the interactions between the process variables as a function of the lag $k$, and $e$ is an innovation process with variance $\sigma_e^2$. The process $x$ is fully described by the AR model (3) when the innovation process *e* is formed by uncorrelated Gaussian variables.

Given the AR representation, the variance of the process and of the innovations can be used to derive an information-theoretic description of the statistical structure of the process [26]. Specifically, the entropy of the process is related to its variance by [27]

$$H(x_n) = \frac{1}{2} \ln \left( 2\pi e \sigma_x^2 \right), \quad (4)$$

and the entropy rate of the process, that is, the conditional entropy of the present $x_n = x(n)$ given the past $x_n^- = [x(n-1), x(n-2), \ldots]$, is related to the variance of the innovations by [28]

$$H(x_n \mid x_n^-) = \frac{1}{2} \ln \left( 2\pi e \sigma_e^2 \right). \quad (5)$$

Then, (4) and (5) can be combined to provide a version of the conditional entropy which quantifies the dynamical complexity of the normalized process as

$$C_x = \frac{1}{2} \ln (2\pi e) + \frac{1}{2} \ln \frac{\sigma_e^2}{\sigma_x^2}; \quad (6)$$

note that applying (6) corresponds to computing the conditional entropy of the original process after normalizing it to unit variance.
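As a worked instance of (6), consider an AR(1) process $x(n) = a\,x(n-1) + e(n)$: its variance is $\sigma_x^2 = \sigma_e^2/(1-a^2)$, so the innovation variance cancels and the complexity depends only on the coefficient. A minimal sketch (the function name is ours):

```python
import math

def complexity_ar1(a):
    """Dynamical complexity (6) for an AR(1) process x(n) = a*x(n-1) + e(n).
    Since sigma_x^2 = sigma_e^2 / (1 - a^2), the ratio sigma_e^2/sigma_x^2
    equals 1 - a^2 and the innovation variance drops out."""
    assert abs(a) < 1  # stationarity condition
    return 0.5 * math.log(2 * math.pi * math.e) + 0.5 * math.log(1 - a * a)
```

For $a = 0$ (white noise) the measure attains its maximum $\frac{1}{2}\ln(2\pi e)$; as $|a|$ grows, the dynamics become more predictable and the complexity decreases.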

Now we turn to show how to compute analytically the variance of the AR innovations after rescaling the original process, in a way such that this variance can be used as in (6) to assess multiscale complexity. First, we perform a virtual upsampling of the original process $x$, by setting an integer upsampling factor $\theta$ and by defining the process

$$\bar{x}(n) = \sum_{k=1}^{\bar{p}} \bar{A}_k \bar{x}(n-k) + \bar{e}(n), \quad (7)$$

where $\bar{p} = p\theta$ is the order of the upsampled process and the coefficients $\bar{A}_k$ are set to be a zero-padded version of the original AR coefficients; that is, we set $\bar{A}_{k\theta} = A_k$ for each $k = 1, \ldots, p$ and $\bar{A}_k = 0$ for each $k \neq j\theta$, $j = 1, \ldots, p$. Note that the innovations for the original and upsampled processes are formally the same, that is, $\bar{e}(n) = e(n)$. Then, we filter the upsampled process using a linear FIR filter, that is, a filter implemented as in (2a) with $a_k = 0$ for each $k$; with some algebraic manipulation we find that, substituting (7) in (2a) applied with $\bar{x}$ in place of $x$, the filtered representation of the upsampled process takes the form

$$\tilde{x}(n) = \sum_{k=1}^{\bar{p}} \bar{A}_k \tilde{x}(n-k) + \sum_{k=0}^{q} b_k e(n-k). \quad (8)$$

Equation (8) shows that the filtering step introduces a moving average (MA) component in the original AR process, transforming it into an ARMA process [21]. Then, we exploit the connection between ARMA processes and state-space (SS) processes [29] to evidence that the ARMA process (8) can be expressed in an SS form as

$$z(n+1) = \mathbf{A} z(n) + \mathbf{K} w(n), \qquad \tilde{x}(n) = \mathbf{C} z(n) + w(n), \quad (9)$$

where $z(n) = [\tilde{x}(n-1) \cdots \tilde{x}(n-\bar{p})\; w(n-1) \cdots w(n-q)]^T$ is a $(\bar{p}+q)$-dimensional state process, $w$ is the scalar SS innovation process defined as $w(n) = b_0 e(n)$, and the vectors $\mathbf{C}$ and $\mathbf{K}$ and the matrix $\mathbf{A}$ are defined as

$$\mathbf{C} = \left[ \bar{A}_1 \cdots \bar{A}_{\bar{p}} \;\; \frac{b_1}{b_0} \cdots \frac{b_q}{b_0} \right], \qquad \mathbf{K} = [\,1\; 0 \cdots 0 \;|\; 1\; 0 \cdots 0\,]^T, \qquad \mathbf{A} = \begin{bmatrix} \mathbf{C} \\ I_{\bar{p}-1} \quad 0 \\ 0 \\ 0 \quad I_{q-1} \quad 0 \end{bmatrix}, \quad (10)$$

where the first row of $\mathbf{A}$ equals $\mathbf{C}$, the next $\bar{p}-1$ rows shift the first $\bar{p}$ state components, row $\bar{p}+1$ is zero, and the last $q-1$ rows shift the remaining components; the two entries equal to 1 in $\mathbf{K}$ are at positions 1 and $\bar{p}+1$. Such a connection between ARMA and SS models allows finding, through (10) and the relation $\Lambda = b_0^2 \sigma_e^2$ for the innovation variance, the parameters of the SS process descriptive of the process $\tilde{x}$. Such a process is a filtered version of the upsampled process $\bar{x}$, obtained passing $\bar{x}$ through a FIR low-pass filter with cutoff frequency equal to $1/(2\tau)$; therefore, since the sampling frequency of the upsampled process is $\theta$ times higher than that of the original process $x$, the cutoff frequency of the low-pass filter applied to $x$ for obtaining $\tilde{x}$ becomes $\theta/(2\tau)$.

The next step of the procedure is to provide a representation for the downsampled process $x_d(n) = \tilde{x}(n\tau)$, where downsampling is applied to the filtered process $\tilde{x}$. To this end, we exploit recent theoretical findings leading to an analytical derivation of the parameters of the SS model which describes the downsampled process [20, 21, 30]. According to these findings, the downsampled process can be represented in SS form involving a vector state process $z_d(n) = z(n\tau)$ as

$$z_d(n+1) = \mathbf{A}_d z_d(n) + v(n), \qquad x_d(n) = \mathbf{C}_d z_d(n) + w_d(n), \quad (11)$$

where the parameters of the SS model (11) are the matrix $\mathbf{A}_d$, the vector $\mathbf{C}_d$, the variance of the scalar noise process $w_d$, $R_d$, the covariance matrix of the vector state noise process $v$, $Q_d$, and the cross-covariance between $v$ and $w_d$, $S_d$. These parameters can be computed at scale $\tau$, in terms of parameters previously obtained, as follows [20, 30]:

$$\mathbf{A}_d = \mathbf{A}^\tau, \quad \mathbf{C}_d = \mathbf{C}, \quad R_d = \Lambda, \quad S_d = \mathbf{A}^{\tau-1} \mathbf{K} \Lambda, \quad Q_d = \sum_{i=0}^{\tau-1} \mathbf{A}^i \mathbf{K} \Lambda \mathbf{K}^T (\mathbf{A}^T)^i. \quad (12)$$

Then, the SS model (11) can be transformed in a form similar to that of (9) which evidences the innovations $w_d$:

$$z_d(n+1) = \mathbf{A}_d z_d(n) + \mathbf{K}_d w_d(n), \qquad x_d(n) = \mathbf{C}_d z_d(n) + w_d(n), \quad (13)$$

by solving a so-called discrete algebraic Riccati equation [20, 30]

$$\mathbf{P} = \mathbf{A}_d \mathbf{P} \mathbf{A}_d^T + Q_d - \left( \mathbf{A}_d \mathbf{P} \mathbf{C}_d^T + S_d \right) \left( \mathbf{C}_d \mathbf{P} \mathbf{C}_d^T + R_d \right)^{-1} \left( \mathbf{A}_d \mathbf{P} \mathbf{C}_d^T + S_d \right)^T \quad (14)$$

applied to the parameters of the SS model (11), to complete the derivation of the parameters of (13) as

$$\Lambda_d = \mathbf{C}_d \mathbf{P} \mathbf{C}_d^T + R_d, \qquad \mathbf{K}_d = \left( \mathbf{A}_d \mathbf{P} \mathbf{C}_d^T + S_d \right) \Lambda_d^{-1}. \quad (15)$$

Finally, the variance of the downsampled process, $\sigma_{x_d}^2$, can be computed analytically solving a discrete-time Lyapunov equation [20] for the state covariance $\mathbf{\Omega}$, as

$$\mathbf{\Omega} = \mathbf{A}_d \mathbf{\Omega} \mathbf{A}_d^T + \mathbf{K}_d \Lambda_d \mathbf{K}_d^T, \qquad \sigma_{x_d}^2 = \mathbf{C}_d \mathbf{\Omega} \mathbf{C}_d^T + \Lambda_d. \quad (16)$$
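The downsampling and Riccati steps can be checked end to end in the scalar case. The sketch below is our own illustration under a simplifying assumption: it downsamples a raw AR(1) process (filtering step omitted), whose SS form is simply $\mathbf{A} = a$, $\mathbf{C} = a$, $\mathbf{K} = 1$, and solves the Riccati equation by fixed-point iteration rather than a dedicated solver:

```python
def downsample_innovation_variance(a, sig2, tau, iters=1000):
    """Innovation variance of x_d(n) = x(n*tau) for the AR(1) process
    x(n) = a*x(n-1) + e(n), var(e) = sig2, following the scalar version of
    the downsampling relations and the discrete algebraic Riccati equation."""
    A, C, K, lam = a, a, 1.0, sig2          # SS form of the AR(1)
    # Parameters of the downsampled SS model
    Ad, Cd = A ** tau, C
    Qd = sum((A ** i * K) ** 2 * lam for i in range(tau))
    Sd = A ** (tau - 1) * K * lam
    Rd = lam
    # Riccati equation solved by fixed-point iteration (converges for |a| < 1)
    P = Qd
    for _ in range(iters):
        g = Ad * P * Cd + Sd
        P = Ad * P * Ad + Qd - g * g / (Cd * P * Cd + Rd)
    # Innovation variance of the downsampled process
    return Cd * P * Cd + Rd
```

The result can be validated against a known closed form: skip-sampling an AR(1) process yields again an AR(1) process with coefficient $a^\tau$ and innovation variance $\sigma_e^2 (1 - a^{2\tau})/(1 - a^2)$, which the iteration reproduces.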

The derivations above allow computing analytically all the parameters of the state-space process (see (13)) which describes the filtered (see (9)) and downsampled (see (13)) versions of the upsampled process (see (7)), which in turn can be analytically described starting from the original AR process given by (3). Among these parameters, the relevant ones for our purpose are the variance of the downsampled process, $\sigma_{x_d}^2$, and that of the relevant innovations, $\Lambda_d$, which are used to compute the complexity of $x_d$ in analogy to (6), thus obtaining our MSE measure:

$$\mathrm{LMSE}(\tau_s) = \frac{1}{2} \ln (2\pi e) + \frac{1}{2} \ln \frac{\Lambda_d}{\sigma_{x_d}^2}. \quad (17)$$

Note that the MSE measure defined in (17), which we denote as linear MSE (LMSE), is relevant to a time scale given by the rational number $\tau_s = \tau/\theta$, obtained setting integer values for the scale factor *τ* and the upsampling factor $\theta$. As this corresponds to applying to the original process a low-pass filter with cutoff frequency $1/(2\tau_s) = \theta/(2\tau)$, such cutoff frequency can be arranged according to the value of *τ*_{s} in order to provide fine-tuning of the filtering process. Moreover, we stress that since we filter the original process with a FIR filter that prevents aliasing and we compute at each time scale the complexity of the normalized process, our state-space MSE measure follows the philosophy of the RMSE method rather than that of the original MSE.
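To make (17) concrete, consider again the AR(1) process, with the anti-aliasing filter omitted (pure skip-sampling — an assumption made here only to obtain closed forms; the full LMSE includes the FIR step and requires the complete state-space machinery). The downsampled innovation variance and Kalman gain are then known analytically, so the Lyapunov step and (17) reduce to scalars:

```python
import math

def lmse_ar1_skip(a, tau):
    """Complexity (17) for an AR(1) process x(n) = a*x(n-1) + e(n) observed at
    integer scale tau, with the low-pass filtering step omitted (skip-sampling).
    Closed forms for the skip-sampled AR(1) are assumed: coefficient a**tau,
    innovation variance lam_d, Kalman gain a**(tau - 1)."""
    sig2 = 1.0                                   # innovation variance (cancels out)
    Ad, Cd = a ** tau, a                         # downsampled SS parameters
    lam_d = sig2 * (1 - a ** (2 * tau)) / (1 - a * a)
    Kd = a ** (tau - 1)
    # Scalar Lyapunov equation for the state variance, then the process variance
    omega = Kd * Kd * lam_d / (1 - Ad * Ad)
    var_d = Cd * Cd * omega + lam_d              # equals sig2 / (1 - a*a)
    # Complexity of the normalized downsampled process
    return 0.5 * math.log(2 * math.pi * math.e) + 0.5 * math.log(lam_d / var_d)
```

Skip-sampling leaves the variance unchanged ($\sigma_{x_d}^2 = \sigma_e^2/(1-a^2)$), so the measure reduces to $\frac{1}{2}\ln(2\pi e) + \frac{1}{2}\ln(1 - a^{2\tau})$: complexity grows monotonically with the scale toward the white-noise maximum $\frac{1}{2}\ln(2\pi e)$, since the correlation between retained samples decays as $a^\tau$.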

##### 2.4. Practical Implementation and Applicability of LMSE and RMSE

The proposed approach for multiscale complexity analysis is implemented in the LMSE MATLAB® toolbox, which includes the algorithms for computing LMSE and RMSE for the simulated processes and illustrative realizations of the cardiovascular data studied in this paper. The toolbox is uploaded as supplementary material to this article and is freely available for download from http://www.lucafaes.net/LMSE.html. The code also allows nonexpert users to implement multiscale complexity analysis without the need to go deep into the mathematical details presented in the previous subsections. Complexity is assessed at a given time scale, after removing the faster temporal scales through low-pass filtering, as the unpredictability of future values of the time series given past observations. Unpredictability is quantified either using a model-free estimation of the conditional entropy (RMSE) or using a linear model (LMSE). As we will see in the next two sections, the analytical expressions underlying the computation of LMSE make this measure much more reliable than RMSE in the presence of short time series and long temporal scales. On the other hand, one should bear in mind that the context of application for LMSE is that of time series which are adequately represented by linear stationary stochastic processes. Linear stochastic processes are nondeterministic dynamic processes for which the relation between samples can be described by linear equations. The range of applicability of this simple generative model for time series data is very broad, including time series collected from many physical, biological, and social systems. Nevertheless, if the observed system is supposed to generate time series with strong nonlinear dynamics, or if significant departures from the null hypothesis of Gaussianity of the distribution are detected, the LMSE measure may capture only part of the complexity of these time series; in such a case the RMSE measure is a more appropriate choice.
Furthermore, the hypothesis of stationarity implies that the statistical properties of the observed process (e.g., mean and variance) do not vary across time. For time series with evident nonstationary behavior, the applicability of the proposed method to short time series makes it easy to adapt to time-varying formulations.

#### 3. Simulation Study

To investigate the theoretical profiles of the dynamical complexity of a stochastic process as a function of the parameters determining its dynamics, as well as to assess the computational reliability of the proposed estimator of the refined MSE, in this section we consider a set of linear processes simulated with varying oscillatory components, for which we compute exact values of MSE and compare LMSE and RMSE estimates obtained from short process realizations.

##### 3.1. Simulation Design

Simulations are designed considering AR processes described by pairs of complex-conjugate poles with assigned modulus $\rho$ and phase $\phi = 2\pi f$, where $f$ is the frequency of the stochastic oscillation associated with the pole pair. Type 1 simulations are designed with one pair of complex-conjugate poles, $\rho_1 e^{\pm j 2\pi f_1}$ (the order of the process is $p = 2$), according to two configurations: (a) fixing the pole phase (i.e., the frequency $f_1$) and varying the modulus $\rho_1$ over a range of values; (b) fixing the pole modulus $\rho_1$ and varying the frequency $f_1$ over a range of values. Type 2 simulations are designed with two pairs of complex-conjugate poles, $\rho_1 e^{\pm j 2\pi f_1}$ and $\rho_2 e^{\pm j 2\pi f_2}$ (the order of the process is $p = 4$), according to two configurations, both with a fixed low-frequency pole pair of modulus $\rho_1$ and frequency $f_1$: (c) fixing the phase of the second pole pair (frequency $f_2$) and varying the modulus $\rho_2$ over a range of values; (d) fixing the modulus of the second pole pair, $\rho_2$, and varying the frequency $f_2$ over a range of values. Type 1 simulations (configurations (a, b)) are set to study how the complexity of a process with a single stochastic oscillatory component varies with the amplitude and the frequency of such component. Type 2 simulations (configurations (c, d)) are set to study how the complexity of a process with two stochastic oscillatory components varies with the mismatch in the amplitude and in the frequency of these two components.

Given the configurations described above, first we determine the theoretical values of the MSE using the procedure described in Section 2.3. To this end, the exact values of the AR coefficients are determined as the coefficients of the polynomial with roots given by the poles set in the complex plane as $\rho_k e^{\pm j 2\pi f_k}$; these coefficients, together with the innovation variance $\sigma_e^2$, are set as parameters of the AR model of (3). From these parameters, MSE values are computed for specific pairs of the upsampling factor $\theta$ and the scale factor $\tau$, chosen in order to yield an approximately uniform range of values for the cutoff frequency of the rescaling low-pass filter, $\theta/(2\tau)$, covering 0.444, 0.4, 0.35, 0.3, 0.278, 0.25, 0.222, 0.2, 0.175, 0.15, 0.125, 0.1, 0.075, and 0.05 cycles/sample; the FIR filter order $q$ is kept fixed across scales.

After the theoretical analysis, practical estimation of MSE was performed choosing two representative parameter settings, one for the Type 1 simulation and one for the Type 2 simulation, generating for each case 100 realizations of the simulation by feeding the model of (3) with realizations of 300 white noise samples taken from the Gaussian distribution with zero mean and unit variance, and computing LMSE and RMSE estimates. For each realization, LMSE was obtained through the procedure described in Section 2.3, identifying an AR model through the standard least squares method, setting the model order according to the Bayesian Information Criterion [31], and using a FIR low-pass filter [24]; RMSE was obtained through the procedure described in Section 2.2, using the parameters of [11], that is, filtering the data with a Butterworth low-pass filter [24] and calculating SampEn with embedding and tolerance parameters $m = 2$ and $r = 0.2\,\sigma_{x_d}$ (where $\sigma_{x_d}^2$ is the variance of the downsampled signal).
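The identification step used for LMSE estimation (least-squares AR fitting; BIC-based order selection is omitted here) can be sketched for the first-order case. Both helpers below are our own illustrations, not toolbox functions:

```python
import random

def simulate_ar1(a, n, seed=0):
    """Generate n samples of x(n) = a*x(n-1) + e(n), with e(n) drawn from a
    zero-mean, unit-variance Gaussian distribution (seeded for reproducibility)."""
    rng = random.Random(seed)
    x, prev = [], 0.0
    for _ in range(n):
        prev = a * prev + rng.gauss(0.0, 1.0)
        x.append(prev)
    return x

def fit_ar1(x):
    """Least-squares estimate of the AR(1) coefficient: regress x(n) on x(n-1)."""
    num = sum(x[n] * x[n - 1] for n in range(1, len(x)))
    den = sum(x[n - 1] ** 2 for n in range(1, len(x)))
    return num / den
```

With a few thousand samples the estimate is close to the true coefficient; with only ~300 samples, as in the simulation above, the estimation variance is correspondingly larger, which is precisely the short-series regime the comparison between LMSE and RMSE addresses.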

##### 3.2. Theoretical Analysis

Results of the theoretical analysis are reported in Figure 1. As a general result, the complexity of the simulated AR processes tends to increase with the time scale (i.e., with decreasing cutoff frequency of the low-pass filter applied to the original process). This result is expected, because filtering removes the oscillatory components that bring regular dynamics to the signal, so that when all stochastic oscillations are removed the signal is left with no regular dynamics and the maximum complexity level is achieved (i.e., $\frac{1}{2}\ln(2\pi e)$, corresponding to uncorrelated white noise).