Computational and Mathematical Methods in Medicine
Volume 2014, Article ID 129248, 11 pages
Research Article

Methodological Framework for Estimating the Correlation Dimension in HRV Signals

1Communications Technology Group (GTC), Aragón Institute for Engineering Research (I3A), IIS Aragón, University of Zaragoza, 50018 Zaragoza, Spain
2CIBER de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 50018 Zaragoza, Spain
3Anaesthesiology Service, Miguel Servet University Hospital, 50009 Zaragoza, Spain
4Medicine School, University of Zaragoza, 50009 Zaragoza, Spain
5Aragón Health Sciences Institute (IACS), 50009 Zaragoza, Spain

Received 28 August 2013; Revised 17 December 2013; Accepted 20 December 2013; Published 30 January 2014

Academic Editor: Mika P. Tarvainen

Copyright © 2014 Juan Bolea et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper presents a methodological framework for robust estimation of the correlation dimension (D2) in HRV signals. It includes (i) a fast algorithm for on-line computation of correlation sums; (ii) fitting of the log-log curves to a sigmoidal function for robust maximum-slope estimation, discarding estimates that do not meet the fitting requirements; (iii) three different approaches for estimating the slope of the linear region based on this fitting; and (iv) exponential fitting for robust estimation of the saturation level of the slope series with increasing embedding dimension, which finally yields the correlation dimension estimate. Each slope-estimation approach leads to a different correlation dimension estimate. The estimates based on the maximum slope of the sigmoid fit and on gradient descent reproduce the theoretical correlation dimension of the Lorenz attractor with a relative error of 4%, and the estimate based on a sample entropy surface with a relative error of 1%. The three approaches are applied to HRV signals of pregnant women before spinal anesthesia for cesarean delivery in order to identify patients at risk of hypotension. The maximum-slope estimate keeps the 81% accuracy previously described in the literature, while the gradient-descent and sample-entropy approaches reach 91% accuracy on the same database.

1. Introduction

Heart rate variability (HRV) has been widely used as a marker of the autonomic nervous system (ANS) regulation of the heart. Classical HRV indices include global descriptive statistics which characterize the HRV distribution in the time domain (mean heart rate and standard deviation of the normal-to-normal beat interval, among others) and in the frequency domain (power in the very low frequency, low frequency (LF), and high frequency (HF) bands). The activity of the two main branches of the ANS, the sympathetic and parasympathetic systems, has been related to the power in the LF and HF bands, respectively [1].

HRV data often present nonlinear characteristics, possibly reflecting intrinsic physiological nonlinearities, such as changes in the gain of baroreflex feedback loops or delays in conduction time, which are not properly described by classical HRV indices.

The most widespread methods used to characterize nonlinear system dynamics are based on chaos theory. The question of whether HRV arises from a low-dimensional attractor associated with a deterministic nonlinear dynamical system or whether it has a stochastic origin is still under debate.

Of great interest is the concept of system complexity, which refers to the richness of process dynamics. Complexity measures are based on the theory of nonlinear systems but may be applied to both linear and nonlinear systems. Several techniques attempting to assess complexity have been developed, such as detrended fluctuation analysis [2], Lempel-Ziv complexity [3], Lyapunov exponents [4], the correlation dimension (D2) [5], and approximate and sample entropies [6].

The reduction of HRV complexity has been associated with age, disease, and unbalanced cardiovascular regulation [7]. Complexity measures have been proven to characterize HRV signals more successfully than linear approaches in certain applications [8]. In [9–11] the point correlation dimension of HRV signals predicted hypotension events in pregnant women during spinal anesthesia for cesarean section, something that time- and frequency-domain indices were unable to do.

While these measurements are of considerable interest, their application to HRV has some pitfalls that could mislead their interpretation. One such limitation arises from their application to limited time series. Correlation dimension estimation is highly dependent on the length of the time series [12]. Several studies have reported the effect of data length on the estimation, as well as proposals to alleviate this effect [13, 14]. Stationarity is another requirement that a time series has to fulfil to obtain reliable results. However, satisfying the constraints of finite series and stationarity at the same time is usually difficult [15]. Yet another limitation of these measurements is the long computational time required: in a classical sequential approach, the computational cost grows steeply with data length. In the case of D2, several attempts to reduce this cost have been reported by Widman et al. [16] and Zurek et al. [17], the latter proposing parallel computing using MPI (Message Passing Interface).

The main goal of this study is to propose a methodological framework for robust and fast estimation of D2 and its application to HRV signals. Section 2 starts with the definition of the correlation dimension and its classical estimation. An algorithm for its fast computation is proposed. Robustness is addressed by fitting the log-log curve to a sigmoid function, after which three alternative approaches for D2 estimation are presented. Section 3 introduces the synthetic and real (HRV) data on which the proposed estimates are evaluated and interpreted. Section 4 presents the results, while Section 5 sets out the discussion and conclusions of the study.

2. Methods

2.1. Correlation Dimension

Let x(n), n = 1, ..., N, be the time series of interest, which in HRV analysis will be the RR interval series normalized to unit amplitude, with N being the total number of beats. A set of m-dimensional vectors x_m(i), called reconstructed vectors, is generated [18]:

x_m(i) = [x(i), x(i + τ), ..., x(i + (m − 1)τ)],  i = 1, ..., N_m,

where τ represents the delay between consecutive samples in the reconstructed space. The number of reconstructed vectors is then N_m = N − (m − 1)τ for each embedding dimension m. The distance between each pair of reconstructed vectors x_m(i) and x_m(j) is denoted by d_m(i, j), and it can be computed as the norm of the difference vector x_m(i) − x_m(j). (In Appendix A different norms and their effect on correlation dimension estimates from finite time series are discussed.) The correlation sum C_m(r), which represents the probability of the reconstructed-vector pair distance being smaller than a certain threshold r, is computed as

C_m(r) = (1 / N_m^2) Σ_{i=1}^{N_m} Σ_{j=1}^{N_m} Θ(r − d_m(i, j)),

where Θ is the Heaviside function, defined as Θ(u) = 1 for u ≥ 0 and Θ(u) = 0 for u < 0.

For deterministic systems, C_m(r) decreases monotonically to 0 as r approaches 0, and it is expected that C_m(r) is well approximated by r^D2 for small r. Thus, D2 can be defined as

D2 = lim (r → 0) log C_m(r) / log r.

For increasing m, the D2 values tend to saturate to a value which constitutes the final correlation dimension estimate.
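As an illustration, the classical correlation-sum computation described above can be sketched in a few lines of NumPy. This is a didactic sketch, not the authors' implementation; the Euclidean norm and the function names are arbitrary choices (Appendix A discusses the effect of the norm):

```python
import numpy as np

def reconstruct(x, m, tau=1):
    """Delay reconstruction: one m-dimensional vector per valid index i."""
    n = len(x) - (m - 1) * tau
    return np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def correlation_sum(x, m, r, tau=1):
    """C_m(r): fraction of reconstructed-vector pairs (self-pairs included)
    whose Euclidean distance does not exceed the threshold r."""
    v = reconstruct(x, m, tau)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    return np.mean(d <= r)
```

The slope of log C_m(r) versus log r over the linear region of this curve is the quantity the rest of the section estimates.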

2.2. Fast Computation of Correlation Sums

One important limitation of D2 estimation is the long computational time required, mainly due to the sequential estimation of correlation sums. This section describes an algorithm for the fast computation of correlation sums based on matrix operations (MO). An N × N matrix D containing the differences between all pairs of samples of x(n) is computed, with elements

D(i, j) = x(i) − x(j),  i, j = 1, ..., N.

For each embedding dimension m, the elements of the difference vector x_m(i) − x_m(j) are the entries D(i + kτ, j + kτ), k = 0, ..., m − 1, so all the difference vectors for a given reconstructed vector x_m(i) can be gathered into an m × N_m matrix Δ_m(i) whose columns are the difference vectors x_m(i) − x_m(j), j = 1, ..., N_m.

The selected norm is applied to the columns of the difference-vector matrix, generating the norm vector whose elements are the distances d_m(i, j), j = 1, ..., N_m. To compute the limit in (5), the distances should be compared with a set of thresholds, which in a sequential implementation implies repeating the whole process as many times as there are thresholds. This repetition is avoided by comparing the distances in the norm vector with the whole set of thresholds r_1, ..., r_K at once, which generates an N_m × K matrix of ones and zeros. The column-wise accumulation of this binary matrix, obtained by multiplying it by a row vector of ones, yields the partial correlation sum of the reconstructed vector x_m(i) for the whole set of thresholds.

Finally, the procedure has to be repeated over the index i, N_m times, to compute C_m(r). This technique saves computational time because the whole set of thresholds is handled in a single step.
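The threshold-broadcasting idea can be sketched as follows; this is an illustrative reimplementation, not the paper's Matlab code, and the max-norm is an assumed choice:

```python
import numpy as np

def correlation_sums_mo(x, m, thresholds, tau=1):
    """Correlation sums C_m(r) for a whole vector of thresholds r in one
    broadcast comparison instead of one pass per threshold (the MO idea)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    v = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    # pairwise distances between reconstructed vectors (max-norm here)
    d = np.abs(v[:, None, :] - v[None, :, :]).max(axis=-1)
    # compare every distance against every threshold in a single step
    return (d[..., None] <= np.asarray(thresholds)).mean(axis=(0, 1))
```

Because the comparison against all thresholds is a single array operation, the cost of adding thresholds is marginal, which is exactly where the speed-up over the sequential approach comes from.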

2.3. New Approaches for D2 Assessment
2.3.1. Sigmoid Curves as Surrogates of Log-Log Curves

D2 has to be estimated from (5), whose numerator and denominator both tend to −∞ as r tends to 0. Therefore, applying L'Hôpital's rule, the equation can be rewritten as [19]

D2 = lim (r → 0) d log C_m(r) / d log r.

Since the size of the time series is finite, choosing small values of r to evaluate this limit is problematic. For values of r close to 0, very few distances contribute to the correlation sum, making the estimation unreliable. The evaluation of this expression is usually done in a linear region of the log C_m(r) versus log r representation, called the log-log curve. The slope of this linear region is considered an estimate of D2. There are different approaches for estimating this slope. Maximum-slope searching can be done by directly computing the increments in the log-log curve. Another approach is to estimate numerically the maximum of the first derivative of the log-log curve. Nevertheless, these approaches encounter limitations due to the usually nonequidistant sampling of r values on the logarithmic scale. Yet another limitation arises in the presence of dynamic systems whose log-log curves display several linear regions, as can be seen in Figure 1, where the data correspond to an RR interval series extracted from a 30-minute ECG recording. In order to estimate the slope of the linear region of the log-log curve, an attempt to artificially extend the linear region is sometimes made by excluding the self-comparisons (i = j) from the correlation sums.

Figure 1: Log-log curves for a dynamic system. Data correspond to an RR interval series extracted from 30 mins of ECG recording.

However, the basis of the approach proposed in this work to improve D2 estimation lies in keeping the self-comparisons. Figure 2 illustrates how the log-log curves behave in both situations, with and without self-comparisons. As shown, both share part of the linear region. Our proposal is to use sigmoidal curve fitting (SCF) of the log-log curves to obtain an analytic function whose maximum slope in the linear region is well defined. These log-log curves are reminiscent of the biasymptotic fractals studied by Rigaut [20] and Dollinger et al. [21], for which exponential fittings were proposed. The sigmoidal fitting is applied to the interpolated log-log curves computed at evenly spaced log r values.

Figure 2: Log-log curves discarding and accepting self-comparisons. Arrows show the slope of the scaling range. Data correspond to an RR interval series of 300 beats.

A modified Boltzmann sigmoid curve was used by Navarro-Verdugo et al. [22] as a model for the phase transition of smart gels. A Boltzmann sigmoid can be written as

f(x) = a + (b − a) / (1 + e^{(c − x)/d}),

where a, b, c, and d are the design parameters (lower asymptote, upper asymptote, inflection point, and transition width, respectively). The first derivative of f is

f'(x) = (b − a) e^{(c − x)/d} / [d (1 + e^{(c − x)/d})^2],

whose maximum, attained at x = c, equals (b − a)/(4d).

In our study the sigmoid curve is fitted to the log-log curve. The first derivative, (14), is determined analytically and its maximum constitutes the slope of the linear range, that is, the SCF-based slope estimate for each embedded dimension. Note that hat notation refers to the use of SCF.

In order to achieve a good fitting, the thresholds r have to guarantee that both asymptotes are reached. In this work r ranges from 0.01 to 3 with a step of 0.01. The upper asymptote is reached when all pair distances are below the threshold, C_m(r) = 1, and the lower one when only the self-comparisons are, C_m(r) = 1/N_m.

The SCF approach is robust in the presence of dynamic systems which exhibit log-log curves with more than one linear region since, when the fitting is not good enough, no estimate is given. In this work, the requirement for a good fitting is a regression factor greater than 0.8.
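A minimal sketch of the SCF step, assuming a standard Boltzmann sigmoid and using SciPy's `curve_fit` (the exact modified sigmoid of [22] and the fitting routine used by the authors are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a, b, c, d):
    """Boltzmann sigmoid: lower asymptote a, upper asymptote b,
    inflection point c, transition width d."""
    return a + (b - a) / (1.0 + np.exp((c - x) / d))

def scf_max_slope(log_r, log_c, r2_min=0.8):
    """Fit the interpolated log-log curve to the sigmoid and return its
    analytic maximum slope (b - a)/(4 d), or None when the regression
    factor falls below r2_min (the SCF rejection rule)."""
    p0 = [log_c.min(), log_c.max(), np.median(log_r), 1.0]
    p, _ = curve_fit(boltzmann, log_r, log_c, p0=p0, maxfev=10000)
    a, b, c, d = p
    resid = log_c - boltzmann(log_r, *p)
    r2 = 1.0 - resid.var() / log_c.var()
    if r2 < r2_min:
        return None          # no estimate when the fit is not good enough
    return (b - a) / (4.0 * d)
```

Returning `None` on a poor fit mirrors the robustness property described in the text: curves with several linear regions simply yield no estimate rather than a misleading one.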

As the embedding dimension increases, the linear regions of the log-log curves tend to become parallel to each other. Thus, the slope estimates tend to saturate to a certain value, which is considered the correlation dimension D2. The correlation dimension is estimated by fitting the slope-versus-m curve with a modified version of the exponential model used by Carvajal et al. [23]; an additional exponential growth factor is introduced in this study so that the fit reaches the saturation level more quickly than the previously proposed model, and the saturation level constitutes the D2 estimate.
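The saturation fit can be illustrated with a generic saturating exponential; the exact modified expression of (15) is not reproduced here, so the parameters a and k below are illustrative stand-ins for the growth factors mentioned in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(m, d_inf, a, k):
    """Generic saturating growth toward the asymptote d_inf; a and k are
    illustrative growth parameters, not the paper's exact ones."""
    return d_inf * (1.0 - a * np.exp(-k * m))

def estimate_d2(ms, slopes):
    """Fit the slope-versus-embedding-dimension series and return the
    saturation level, taken as the correlation dimension estimate."""
    p0 = [slopes[-1], 0.5, 0.5]
    p, _ = curve_fit(saturating_exp, ms, slopes, p0=p0, maxfev=10000)
    return p[0]
```

The asymptote d_inf, rather than the slope at the largest m analyzed, is reported, which makes the estimate less sensitive to the chosen maximum embedding dimension.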

2.3.2. New Approaches for D2 Estimation

As mentioned previously, we chose as slope estimate the maximum slope of each fitted sigmoid curve. Nevertheless, the linear range is composed of more than one point. Instead of considering only one point per curve, in this study we propose a new approach for D2 estimation which considers a set of points extracted from these linear ranges.

The proposed strategy is based on selecting one point of the linear range in the SCF log-log curve of the lowest embedded dimension and moving forward to the next embedded dimension, selecting the point of the corresponding SCF log-log curve with minimum distance to the former curve (i.e., where the perpendicular to the former log-log curve intersects the new one, as in the gradient descent technique); see Figure 3. The procedure is repeated up to the maximum embedded dimension analyzed. Then, several sets of slopes are computed (one for each point in the linear region around the maximum slope of the SCF log-log curve of the lowest embedded dimension), providing a set of correlation dimension estimates, each linked to the threshold r of the first point of its set.

Figure 3: Maximum-slope points are marked with crosses over the fitted sigmoid curves. Points calculated using the gradient descent criterion from two starting points are shown as dots and circles. The cross on the lowest curve is the point which corresponds to the maximum slope in the lowest embedded dimension. The inset illustrates the correlation dimension estimation for the three sets of points.

Finally, the exponential fit (15) is used to estimate the final correlation dimension for each set of points. These estimates are linked to the r value of the starting point in the lowest embedded dimension, and the maximum over all the sets is selected as the new estimate, the gradient-descent-based correlation dimension.

Another new approach for D2 estimation, based on sample entropy (SampEn), is now presented. SampEn was defined by Zurek et al. [17] as

SampEn(m, r) = −ln [C_{m+1}(r) / C_m(r)],

where, in this case, the correlation sum is computed as in (3) but without considering self-comparisons. Let us define the variant of SampEn which keeps the self-pairs, easily computed for all embedded dimensions and a large set of thresholds using the fast algorithm described in Section 2.2. We can generate a SampEn(m, r) surface from the fitted sigmoid curves, as can be seen in Figure 4, an example for a 300-beat RR interval series extracted from one recording of the database used in [10]. For each embedded dimension, the value of r which maximizes SampEn(m, r) is used to evaluate the slope of the linear region of the SCF log-log curves, yielding the estimate called in this paper the SampEn-based correlation dimension.
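Given the correlation sums for consecutive embedding dimensions, the SampEn surface is a single array operation; a minimal sketch (the indexing convention is an assumption for illustration):

```python
import numpy as np

def sampen_surface(corr_sums):
    """corr_sums[m - 1][k]: correlation sum for embedding dimension m and
    threshold index k (self-comparisons kept, as in the text). Returns
    SampEn(m, r) = -ln(C_{m+1}(r) / C_m(r)) for consecutive dimensions."""
    c = np.asarray(corr_sums, dtype=float)
    return -np.log(c[1:] / c[:-1])
```

The threshold at which each row of the surface peaks, `surface[m].argmax()`, marks the r value where the slope of the corresponding SCF log-log curve is evaluated.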

Figure 4: SampEn(m, r) surface for a 300-beat RR interval series. For each embedded dimension the maximum point is marked with a solid triangle. Circles correspond to the values which define the SampEn-based estimate.

3. Materials

The time series chosen to validate the approaches proposed in this paper to estimate D2 are the Lorenz attractor, the MIX(p) process, and real HRV signals.

Lorenz Attractor. The Lorenz system is described by three coupled first-order differential equations whose solution exhibits chaotic behaviour for certain parameter values and initial conditions; this solution is the so-called Lorenz attractor:

dx/dt = σ(y − x),
dy/dt = x(ρ − z) − y,
dz/dt = xy − βz.

For the parameter values σ = 10, ρ = 28, and β = 8/3, the theoretical D2 value is 2.02 [24]. In this study, the system equations are discretized with a time step of 0.01.
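A minimal discretization consistent with the description above, using a plain Euler step with the classical parameter values (the authors' discretization scheme is not specified, so Euler is an assumption); the output is normalized to unit amplitude as the RR series are:

```python
import numpy as np

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-discretized Lorenz system; returns the x component,
    normalized to unit amplitude."""
    x, y, z = 1.0, 1.0, 1.0          # arbitrary initial condition
    out = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return (out - out.min()) / (out.max() - out.min())
```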

MIX(p) Signals. MIX(p) is a family of stochastic processes that samples a sine for p = 0 and becomes more random as p increases (p = 1 being completely random) [5], following the expression

MIX(p)_j = (1 − Z_j) X_j + Z_j Y_j,

where X_j = √2 sin(2πj/12), Y_j are i.i.d. uniform random variables on [−√3, √3], and Z_j are i.i.d. random variables with Z_j = 1 with probability p and Z_j = 0 with probability 1 − p. MIX indicates a mixture of deterministic and stochastic components.
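A generator following this definition can be sketched as (the seed parameter is added for reproducibility only):

```python
import numpy as np

def mix(p, n, seed=0):
    """MIX(p) process of Pincus and Goldberger: the deterministic sine X_j
    for p = 0, i.i.d. uniform noise Y_j for p = 1."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, n + 1)
    x = np.sqrt(2.0) * np.sin(2.0 * np.pi * j / 12.0)   # X_j
    y = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n)     # Y_j
    z = rng.random(n) < p                               # Z_j = 1 w.p. p
    return np.where(z, y, x)
```

Both X_j and Y_j have zero mean and unit variance, so varying p changes the randomness of the process without changing its first two moments.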

HRV Signals. The HRV representation used in this study is the time difference between the occurrences of consecutive normal heart beats, the so-called RR interval. Ectopic beats as well as missed and false detections introduce some extra variability in the RR interval series which is not representative of the ANS activity. Thus, they were detected and corrected [25]. The RR interval series analysed in this study belongs to a database recorded at the Miguel Servet University Hospital in Zaragoza (Spain). That database was used to predict hypotension events during spinal anesthesia in elective cesarean delivery by HRV analysis [10]. It consists of ECG signals from 11 women with programmed cesarean section recorded at a 1000 Hz sampling frequency immediately before the cesarean surgery. Five of them suffered a hypotension event during the surgery (Hyp) and 6 did not (NoHyp). The series analysed correspond to 5 minutes in a lateral decubitus position. See [10] for further database details.

4. Results

All the results presented in this section are computed using a fixed norm; the effect of different norms on D2 estimation is discussed in Appendix A.

The computational time cost of the correlation sums depends on the length of the data, the maximum embedded dimension considered, and the number of thresholds used. The results shown in Table 1 correspond to the computational time required for different data lengths of the Lorenz attractor series, a maximum embedded dimension of 16, and a set of 300 threshold values evenly spaced from 0.01 to 3. The speed-up achieved by the novel approach is defined as the ratio between the time required by the sequential approach and the time required by the proposed technique based on matrix operations. As shown in Table 1, the speed-up increases with data length. For 300-sample data (the usual length of a 5-minute RR interval series), correlation sums are estimated in approximately 1 s.

Table 1: Computational time of correlation sums estimated for Lorenz attractor series of different sample lengths. The speed-up achieved is defined as the ratio between the time demanded by a sequential algorithm and the time demanded by the proposed technique based on matrix operations.

The Lorenz attractor series was used to validate the newly proposed methodologies. Figure 5(a) displays the SCF log-log curves for embedded dimensions from 1 to 10, together with the sets of points where the slope is evaluated, displayed for different starting points. For each starting point, the corresponding set of points is selected following the gradient descent technique. Figure 5(b) shows the slope estimates versus the embedded dimension for each starting point, fitted by the exponential equation (15). Figure 5(c) displays the correlation dimension estimate versus the threshold of each starting point. The maximum constitutes the novel gradient-descent-based estimate.

Figure 5: (a) Sets of points where the slope is estimated from the fitted sigmoid curves with the approach proposed in Section 2.3. (b) Sets of slope estimates for different starting points versus the embedded dimension, fitted by the exponential equation (15). (c) Correlation dimension estimate for each set corresponding to a different starting point. Data extracted from a Lorenz attractor series of 5000 samples.

Table 2 displays the correlation dimension estimates obtained with the different approaches presented in this study. Note that although the three approaches give results close to the theoretical value of the Lorenz attractor correlation dimension, the SampEn-based approach is the closest. The relative errors of the other two approaches are around 4%, while for the SampEn-based approach it is just 1%; the estimate described in [10] is also included for comparison purposes.

Table 2: D2 estimated by the different approaches for the Lorenz attractor series (5000 samples) and HRV signals (300 samples). Data expressed as median | interquartile range.

D2 estimation was applied to a set of MIX(p) series with different p values (0.1, 0.4, and 0.8). These estimates can be considered measures of the randomness of the signals when the signals are finite stochastic processes; see Figure 6.

Figure 6: MIX(p) signals with different degrees of randomness and the effect on the D2 estimates.

The same database for HRV analysis as in [10] was used. The results shown in Table 2 are divided into hypotension and nonhypotension groups. The approaches proposed in this paper were applied, and the classical correlation dimension estimate described in [10] is included for comparison purposes. The data were found not to be normally distributed by the Kolmogorov-Smirnov test, and therefore the Mann-Whitney test was applied to evaluate the statistical differences in medians. The differences between the two groups were statistically significant for all estimates, with p values lower than 0.03. In order to evaluate the discriminant power of the proposed measures, ROC analysis was performed. The area under the ROC curve, accuracy, sensitivity, and specificity for all the proposed approaches and the classical estimate used in [10] are displayed in Table 3. The proposed estimates maintain the accuracy achieved in [10], while the techniques based on the SampEn surface and on gradient descent actually increase it.
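The cut-point selection behind such a table can be sketched as follows; this is a didactic stand-in for a full ROC analysis, not the authors' procedure, and it assumes (as reported in the text) that the Hyp group shows higher values:

```python
import numpy as np

def best_cut_metrics(hyp, nohyp):
    """Sweep candidate cut points over the pooled values of a D2 estimate
    and return (accuracy, sensitivity, specificity) at the best-accuracy
    cut, taking Hyp as the positive class with higher values."""
    hyp, nohyp = np.asarray(hyp, float), np.asarray(nohyp, float)
    best = (0.0, 0.0, 0.0)
    for cut in np.unique(np.concatenate([hyp, nohyp])):
        sens = np.mean(hyp >= cut)                 # true positive rate
        spec = np.mean(nohyp < cut)                # true negative rate
        acc = (sens * len(hyp) + spec * len(nohyp)) / (len(hyp) + len(nohyp))
        if acc > best[0]:
            best = (acc, sens, spec)
    return best
```

Sweeping the pooled observed values as candidate cuts is sufficient here because accuracy only changes when the cut crosses an observed value.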

Table 3: ROC area for all the studied correlation dimension estimates on the database used. Accuracy, sensitivity, and specificity estimated at the corresponding cut points are expressed as percentages.

5. Discussion and Conclusion

In this paper a methodological framework has been proposed to compute the correlation dimension (D2) of limited time series such as HRV signals, including fast computation of the correlation sums, sigmoidal curve fitting of the log-log curves, three approaches for estimating the slope of the linear region, and exponential fitting of the slope-versus-embedding-dimension curves.

One important limitation for the application of D2 to HRV analysis is the long computational time required for the correlation sums. To address this problem, an algorithm based on matrix operations has been proposed. In [17] another approach was described, based on parallel computing, which decreased the time demand. The computational times achieved in the present work, however, were obtained with a regular computer (Windows 7 PC, Intel Core i7 3.5 GHz, 16 GB RAM, Matlab R2011a). As an example, for a signal of 300 samples (the usual length in a typical 5-minute HRV analysis, about 300 beats), the time demand was reduced with respect to the sequential approach from 18 minutes to 1 second, which allows the online computation of D2 in clinical practice. The computational time required by the proposed approaches is discussed in Appendix B.

Another limitation of the D2 estimate is its reliability. One of the system characteristics that can lead to an unreliable measurement of D2 is nonstationarity of the data. Several techniques attempting to characterize such dynamic systems have been reported, mainly focused on changing the delay parameter or even taking into account the time between the vectors [13, 26, 27]. Searching for the linear region of the log-log curves becomes a difficult challenge when the system is nonstationary, since more than one linear region can appear and the classical estimate is unreliable in those cases. The SCF approach is more robust since it does not give any estimate if the fitting is not good enough.

The novel approaches proposed in this study for the estimation of D2 build on the SCF approach. The gradient-descent-based approach exploits the fact that the linear regions of the log-log curves are almost parallel for high embedded dimensions. This allows a set of points surrounding the maximum-slope point to be considered, and therefore several correlation dimension estimates are obtained from these starting points. The SampEn-based approach relies on the differences between consecutive log-log curves, which define the SampEn surface; this surface shows a maximum for each embedded dimension at a specific threshold, providing another estimate of the correlation dimension. The SampEn-based estimate was found to be the closest to the theoretical correlation dimension value for the 5000-sample Lorenz attractor series, with a relative error of 1%, while the other two estimates obtained a relative error of 4% with the same data.

The correlation dimension is known to be a surrogate of the fractal dimension of a chaotic attractor [12]. However, when applied to limited time series, nonzero finite correlation dimension values do not imply the existence of an underlying chaotic attractor. For example, when applied to MIX(p) processes, nonzero finite values were obtained, higher for more random processes. Thus, although D2 cannot be interpreted as the fractal dimension of an underlying chaotic attractor, it still gives a measure of the complexity of the process, at least regarding its unpredictability.

Thus, the D2 estimate in HRV signals may shed light on the degree of complexity of the ANS regulation, or how many degrees of freedom it has. The group of women who suffered hypotension events during programmed cesarean section under spinal anesthesia (Hyp) showed higher D2 values in the lateral decubitus position than the group who did not (NoHyp). As an example, Figure 7 shows one patient of each group and the corresponding estimate. All the proposed correlation dimension estimates maintain the accuracy obtained in [10], and two of them also increase it. Predicting hypotension is a challenge since it occurs in 60% of cases, producing fetal stress [28]. If the goal is to predict those patients who are going to suffer hypotension, then the estimates that achieved 100% specificity should be selected. Otherwise, if the goal is to apply prophylaxis to the smallest number of patients, then the estimates that achieved 100% sensitivity should be chosen. The effect of prophylaxis on patients who will not in fact suffer a hypotension event, and its relation to fetal stress, needs further study.

Figure 7: The left panel shows two RR interval series, one corresponding to a patient who developed a hypotension event (Hyp) and the other to one who did not (NoHyp); the right panel shows the D2 estimation using the perpendicular points on the log-log curves.

The contribution of this paper to the field is the proposal of a methodological framework for reliable estimation of the correlation dimension from limited time series, such as HRV signals, avoiding or at least alleviating the misleading interpretations that can be drawn from classical correlation dimension estimates. The computational speed-up achieved may allow this framework to be considered for monitoring in clinical practice. Nevertheless, the main limitation for the application of these methodologies to HRV analysis lies in their relation to the underlying physiology, which is still unclear and needs further study. Although the framework proposed in this paper is focused on the characterization of HRV signals, its applicability could be extended to a wide range of fields. However, an evaluation would be needed to ascertain whether the proposed approaches are appropriate in each particular case.


A. Use of Norms and Thresholds in Correlation Dimension Estimates

The correlation dimension is considered norm invariant [14]. However, the effect of the norm selection on correlation dimension estimates deserves further attention when they are applied to a finite data set. The norm of the difference vector defines the distance in (2). Norms can be defined from the L1-norm to the L∞-norm. The left panel in Figure 8 shows the unit ball for two different norms and illustrates how the same difference vector can have a distance lower than unity under one norm but not under another. The unit norm is chosen as an example of any threshold used in the correlation dimension algorithm. Therefore, for a fixed set of thresholds, the appearance of the linear region of the log-log curve can be compromised by the norm choice.

Figure 8: In the left panel, the vector differences of pairs of reconstructed vectors are shown; the solid and dashed contours represent the points whose distances equal 1 under two different norms, and the marked dots are the differences that fall below unit distance under each norm. In the right panel, log-log curves of one HRV signal (300 samples) used in the study are shown for several norms.

The right panel in Figure 8 shows how the application of different norms shifts the log-log curves, in some cases losing the entire linear region due to the fixed range of thresholds. The range of these thresholds should be wide enough to ensure that the linear regions are contained therein; thus, the choice of the norm conditions the set of thresholds used.
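The interaction between norm and threshold can be seen on a single difference vector; the numbers below are purely illustrative:

```python
import numpy as np

# The same difference vector can fall inside or outside the unit ball
# depending on the norm, so with a fixed threshold set the norm choice
# changes which pairs enter the correlation sum.
v = np.array([0.8, 0.8])
d1 = np.abs(v).sum()            # L1 norm: 1.6, outside the unit ball
d2 = np.sqrt((v ** 2).sum())    # L2 norm: about 1.13, outside
dinf = np.abs(v).max()          # L-infinity norm: 0.8, inside
```

With the threshold fixed at 1, this pair is counted under the L∞-norm but not under the L1- or L2-norms, which is exactly the shift of the log-log curves visible in the right panel of Figure 8.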

In the SCF approach it is particularly important that the two asymptotic regions be represented in the log-log curve. Therefore, the correct selection of the norm and of the range of the set of thresholds is critical to assure the goodness of the SCF approach. Table 4 shows the correlation dimension estimates for the 5000-sample Lorenz attractor series. The effect of the different norms is reflected in the estimates, since the set of thresholds was fixed. As shown, one of the norms, combined with the fixed set of thresholds used, achieves the closest values with respect to the theoretical correlation dimension value for the Lorenz attractor, 2.01.

Table 4: Correlation dimension estimates for the different proposed approaches for the Lorenz attractor series (5000 samples), using different norms.

B. Computational Time Demand of Novel Approaches to Correlation Dimension Estimates

In Section 2.2 a new technique based on matrix operations (MO) was introduced to compute the correlation sums, which represent the core of the correlation dimension algorithm. Nevertheless, the computational time cost of the newly proposed approaches to correlation dimension estimation has not yet been considered. Table 5 shows the time required for the correlation dimension estimates, including the one used in [10].

Table 5: Computational time cost for correlation dimension estimates by all proposed approaches, considering Lorenz attractor series and HRV signals. Data expressed as mean ± standard deviation.

Ten realizations of Lorenz attractor series were generated, with randomly chosen initial conditions. It is noticeable that the time cost of the gradient-descent-based estimate is higher than that of the others in both cases, the Lorenz series and the HRV signals (11 subjects), since it uses several sets of slope estimates to compute the correlation dimension. Furthermore, the ratio between the two is higher for the HRV signals than for the Lorenz series. Each set of slopes is associated with a starting r value in an interval centred on the maximum-slope point of the lowest embedded dimension. This interval is defined by a 50% decrease from the maximum of the SCF first derivative. The more abrupt the transition zone of the sigmoid, the smaller the number of starting points. Thus, each realization is done with a different number of points, which makes the computational time vary.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work has been supported by the Ministerio de Ciencia e Innovación, Spain, and FEDER under Project TEC2010-21703-C03-02, and by ISCIII, Spain, through Project PI10/02851 (FIS).


References

  1. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, “Heart rate variability. Standards of measurement, physiological interpretation, and clinical use,” European Heart Journal, vol. 17, no. 3, pp. 354–381, 1996.
  2. J. W. Kantelhardt, S. A. Zschiegner, E. Koscielny-Bunde, S. Havlin, A. Bunde, and H. E. Stanley, “Multifractal detrended fluctuation analysis of nonstationary time series,” Physica A, vol. 316, no. 1–4, pp. 87–114, 2002.
  3. A. Lempel and J. Ziv, “On the complexity of finite sequences,” IEEE Transactions on Information Theory, vol. 22, no. 1, pp. 75–81, 1976.
  4. A. Wolf, J. B. Swift, H. L. Swinney, and J. A. Vastano, “Determining Lyapunov exponents from a time series,” Physica D, vol. 16, no. 3, pp. 285–317, 1985.
  5. S. M. Pincus and A. L. Goldberger, “Physiological time-series analysis: what does regularity quantify?” American Journal of Physiology—Heart and Circulatory Physiology, vol. 266, no. 4, pp. H1643–H1656, 1994.
  6. S. Pincus and B. H. Singer, “Randomness and degrees of irregularity,” Proceedings of the National Academy of Sciences of the United States of America, vol. 93, no. 5, pp. 2083–2088, 1996.
  7. A. Porta, S. Guzzetti, R. Furlan, T. Gnecchi-Ruscone, N. Montano, and A. Malliani, “Complexity and nonlinearity in short-term heart period variability: comparison of methods based on local nonlinear prediction,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 1, pp. 94–106, 2007.
  8. S. Cerutti, G. Carrault, P. J. M. Cluitmans et al., “Non-linear algorithms for processing biological signals,” Computer Methods and Programs in Biomedicine, vol. 51, no. 1-2, pp. 51–73, 1996.
  9. D. Chamchad, V. A. Arkoosh, J. C. Horrow et al., “Using heart rate variability to stratify risk of obstetric patients undergoing spinal anesthesia,” Anesthesia & Analgesia, vol. 99, no. 6, pp. 1818–1821, 2004.
  10. L. Canga, A. Navarro, J. Bolea, J. M. Remartínez, P. Laguna, and R. Bailón, “Non-linear analysis of heart rate variability and its application to predict hypotension during spinal anesthesia for cesarean delivery,” in Proceedings of the Computing in Cardiology (CinC '12), pp. 413–416, Kraków, Poland, September 2012.
  11. J. Bolea, R. Bailón, E. Rovira, J. M. Remartínez, P. Laguna, and A. Navarro, “Heart rate variability in pregnant women before programmed cesarean intervention,” in XIII Mediterranean Conference on Medical and Biological Engineering and Computing 2013, M. L. R. Romero, Ed., vol. 41 of IFMBE Proceedings, pp. 710–713, Springer International, 2014. View at Publisher · View at Google Scholar
  12. P. Grassberger and I. Procaccia, “Characterization of strange attractors,” Physical Review Letters, vol. 50, no. 5, pp. 346–349, 1983. View at Publisher · View at Google Scholar · View at Scopus
  13. J. Theiler, “Spurious dimension from correlation algorithms applied to limited time-series data,” Physical Review A, vol. 34, no. 3, pp. 2427–2432, 1986. View at Publisher · View at Google Scholar · View at Scopus
  14. J. Theiler, “Estimating fractal dimension,” Journal of the Optical Society of America A, vol. 7, no. 6, pp. 1055–1073, 1990. View at Publisher · View at Google Scholar
  15. H. Kantz and T. Schreiber, Nonlinear Time Series Analysis, Cambridge University Press, Cambridge, UK, 2004.
  16. G. Widman, K. Lehnertz, P. Jansen, W. Meyer, W. Burr, and C. E. Elger, “A fast general purpose algorithm for the computation of auto- and cross-correlation integrals from single channel data,” Physica D, vol. 121, no. 1-2, pp. 65–74, 1998. View at Google Scholar · View at Scopus
  17. S. Zurek, P. Guzik, S. Pawlak, M. Kosmider, and J. Piskorski, “On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation,” Physica A, vol. 391, no. 24, pp. 6601–6610, 2012. View at Publisher · View at Google Scholar
  18. F. Takens, “Detecting strange attractors in turbulence,” in Dynamical Systems and Turbulence, Warwick 1980, D. Rand and L.-S. Young, Eds., vol. 898 of Lecture Notes in Mathematics, pp. 366–381, Springer, Berlin, Germany, 1981. View at Publisher · View at Google Scholar
  19. J. A. Lee and M. Verleysen, Nonlinear Dimensionality Reduction, Springer, Berlin, Germany, 2007.
  20. J. P. Rigaut, “An empirical formulation relating boundary lengths to resolution in specimens showing “non-ideally fractal” dimensions,” Journal of Microscopy, vol. 133, no. 1, pp. 41–54, 1984. View at Google Scholar · View at Scopus
  21. J. W. Dollinger, R. Metzler, and T. F. Nonnenmacher, “Bi-asymptotic fractals: fractals between lower and upper bounds,” Journal of Physics A, vol. 31, no. 16, pp. 3839–3847, 1998. View at Publisher · View at Google Scholar · View at Scopus
  22. A. L. Navarro-Verdugo, F. M. Goycoolea, G. Romero-Meléndez, I. Higuera-Ciapara, and W. Argüelles-Monal, “A modified Boltzmann sigmoidal model for the phase transition of smart gels,” Soft Matter, vol. 7, no. 12, pp. 5847–5853, 2011. View at Publisher · View at Google Scholar · View at Scopus
  23. R. Carvajal, M. Vallverdú, R. Baranowski, E. Orlowska-Baranowska, J. J. Zebrowski, and P. Caminal, “Dynamical non-linear analysis of heart rate variability in patients with aortic stenosis,” in Proceedings of the Computers in Cardiology, pp. 449–452, September 2002. View at Publisher · View at Google Scholar · View at Scopus
  24. E. N. Lorenz, “Deterministic non-periodic flow,” Journal of Atmospheric Science, vol. 20, no. 2, pp. 130–141, 1963. View at Google Scholar
  25. J. Mateo and P. Laguna, “Analysis of heart rate variability in the presence of ectopic beats using the heart timing signal,” IEEE Transactions on Biomedical Engineering, vol. 50, no. 3, pp. 334–343, 2003. View at Publisher · View at Google Scholar · View at Scopus
  26. L. A. Aguirre, “A nonlinear correlation function for selecting the delay time in dynamical reconstructions,” Physics Letters A, vol. 203, no. 2-3, pp. 88–94, 1995. View at Google Scholar · View at Scopus
  27. H. S. Kim, R. Eykholt, and J. D. Salas, “Nonlinear dynamics, delay times, and embedding windows,” Physica D, vol. 127, no. 1-2, pp. 48–60, 1999. View at Google Scholar · View at Scopus
  28. A. M. Cyna, M. Andrew, R. S. Emmett, P. Middleton, and S. W. Simmons, “Techniques for preventing hypotension during spinal anaesthesia for caesarean section,” Cochrane Database of Systematic Reviews, vol. 18, no. 4, Article ID CD002251, 2006. View at Google Scholar · View at Scopus