Special Issue: New Methods for Analyzing Complex Biomedical Systems and Signals
Research Article | Open Access
On Consistency of Cross-Approximate Entropy in Cardiovascular and Artificial Environments
Cross-approximate entropy (XApEn) quantifies the mutual orderliness of simultaneously recorded time series. Despite being derived from the firmly established solitary entropies, it has never reached their reputation and deployment. The aim of this study is to identify the problems that preclude wider XApEn implementation and to develop a set of solutions. Exact expressions for XApEn and its constitutive parts are derived and compared to values estimated from artificial data. This comparison revealed vast regions within the parameter space that do not guarantee reliable probability estimation, making XApEn estimates inconsistent. A simple correction to one of the XApEn procedural steps is proposed, and three sets of formulae for joint parameter selection are derived: the first maximizes the threshold profile, while the remaining two minimize XApEn instability according to the strong and the weak reliability criteria. The derived expressions are verified using cardiovascular signals recorded from rats (long signals) and from healthy volunteers (short clinical signals), prompting a change of the traditional parameter guidelines.
Approximate entropy (ApEn) is among the most widely exploited nonlinear techniques for quantifying the complexity and unpredictability of time series [1–3]. ApEn is accepted as a supporting tool in preclinical and clinical studies, with the most prominent applications in cardiovascular research.
Despite this vast and firmly established deployment, efforts to improve ApEn have never ceased. Research is oriented towards modifications, including sample entropy (SampEn), corrected conditional entropy (CApEn), k-nearest neighborhood entropy (KNNCE), fuzzy entropy (FuzzyEn), multiscale entropy (MultiScaleEn), distance entropy (DistEn), binarized entropy (BinEn), ApEn based on wave mode, and speeding-up algorithms [13, 14], and towards consistency studies, including threshold selection [15–20], time lag, and consistency in general.
One of the very first ApEn modifications is cross-approximate entropy (XApEn), which estimates the mutual predictability of two simultaneously recorded time series. Since ApEn is a complexity measure for a solitary signal, XApEn should have become its equivalent for paired physiological processes. However, it has never reached the reputation and deployment of its predecessor. Except for a recent improvement, XApEn applications follow classical ApEn guidelines, although (a) the parameters should be adjusted even for ApEn (e.g., [4, 15–20]) and (b) the properties of XApEn are different, so straightforward ApEn guidelines might not be appropriate.
The aim of this paper is to identify the problems that preclude wider XApEn implementation and to propose methods for resolving them. It is proved that the fundamental source of XApEn inconsistency is the unreliable estimation of the conditional probabilities that are its constitutive elements. It is shown that the parameters required for XApEn are mutually dependent, so the traditional guidelines for their choice need an update. In order to locate the unstable parameter regions, XApEn values estimated from artificially simulated data are compared to the true XApEn values calculated by a formula. Following these observations, we derived equations that jointly select stable parameters for reliable XApEn estimation in real signals, for which exact mathematical descriptions are unavailable. The derived equations are tested using a wide variety of cardiovascular signals: typical short records of ambulatory patients and long signals recorded from small laboratory animals.
2. Materials and Methods
2.1. Experimental Data
The study used three types of data: artificial, animal, and human. The purpose of artificial data was to test the stability of estimated values, comparing them to the exact values calculated by the derived formulae, and also to develop a new set of threshold equations. The data recorded from animals were also used for method development and for its repeated testing, enabled by the record length. The data recorded from human volunteers were used to validate the derived methods in typical real applications where the records are short.
The animal data include systolic blood pressure (SBP) and pulse interval (PI) signals recorded from two groups of rats: the first group comprised outbred male Wistar normotensive rats, while the second group comprised outbred male borderline hypertensive rats. Ten days before the recordings, radiotelemetric probes (TA11PA-C40, DSI, Transoma Medical) were implanted in the abdominal aorta. Blood pressure (BP) waveforms were digitally recorded and transmitted using Dataquest A.R.T. 4.0 equipment (DSI, St. Paul, MN, USA) with a sampling frequency of 1000 Hz. The systolic blood pressure (SBP) was determined at the receiver as the waveform local maxima. The pulse interval was determined as the interval between the successive moments of maximal gradient, that is, the local maximal change of the BP waveform per sampling interval. Pulse interval is an acceptable alternative to the R-R interval of an electrocardiogram. During the experiments, the animals were exposed to two types of stress: shaker stress, with rats positioned on a platform shaking at 200 cycles/min, and restraint stress, with rats placed in a Plexiglas restrainer tube (ID 5.5 cm, with pores) in the supine position; a detailed description is given in the cited work. These data were already used for other research purposes [20, 24, 25]. The duration of records in baseline, restraint stress, and shaker stress was 20 minutes, 60 minutes, and 20 minutes, respectively. Such a wide variety of data (different types of rats and different stressing surroundings) ensured the correct validation of the derived methods. The very slow signal component (trend), an outcome of free rat movements, was eliminated using a high-pass filter designed specifically for biomedical signals. The cut-off frequency is tuned to the mean pulse interval of each subject; its value ranges from 0.055 Hz to 0.085 Hz in this experiment.
The time series passed the stationarity test, so the remaining signals were taken from 12 Wistar normotensive rats (a total of 24 signals: 12 baseline, 6 restraint, and 6 shaker) and from 13 borderline hypertensive rats (a total of 26 signals: 13 baseline, 7 restraint, and 6 shaker signal sets). All experimental procedures in this study conformed to the European Communities Council Directive of 24 November 1986 (86/609/EEC) and the Guidelines on Animal Experimentation of the School of Medicine, University of Belgrade.
Short data series were recorded from 41 medically examined healthy male volunteers positioned in a hospital bed, using a Task Force® Monitor with a sampling frequency of 1000 Hz (CNSystems Medizintechnik AG, Graz, http://www.cnsystems.com/products/task-force-monitor). Systolic blood pressure was obtained as the local maxima of the blood pressure waveforms, while R peaks were found as the local maxima of the electrocardiogram waveforms. The intervals between successive R peaks form the R-R interval time series. The sensors were attached by medical professionals and the patients were motionless, ensuring a reduced number of artifacts and stationarity of the signals. Short data lengths, up to 10 minutes, are typical for ambulatory monitoring and also ensure the tranquility of the patient and stationarity of the recorded data. The experiment followed the ethical standards stated by the School of Medicine, University of Belgrade, and signed permission was collected from each volunteer.
Artificial data included samples with uniform distribution, generated using a reliable proprietary machine-independent random number generator, and samples with normal distribution, generated using the polar method. For each parameter set, one hundred artificial data series were generated, each comprising up to the maximal investigated number of data samples. Program code was written in Matlab 2014b.
An illustration of the time series referenced within this study is shown in Figure 1.
2.2. XApEn in Brief
The procedure for XApEn estimation implies u = {u(1), u(2), …, u(N)} as a “master” time series and v = {v(1), v(2), …, v(N)} as its “follower,” where N is the number of signal samples. In biomedical data, SBP is considered the master signal and PI its follower. The series are divided into vectors of length m:

x_m(i) = [u(i), u(i + τ), …, u(i + (m − 1)τ)],
y_m(j) = [v(j), v(j + τ), …, v(j + (m − 1)τ)], i, j = 1, …, N − (m − 1)τ. (1)
The parameter τ (time lag) can be introduced for vector decorrelation, but in most studies it is set to 1 [beat]. A vector x_m(i) from the master series is usually regarded as a “template.”
The distance between two vectors is defined as the maximal absolute sample difference:

d[x_m(i), y_m(j)] = max{|u(i + kτ) − v(j + kτ)|, k = 0, …, m − 1}. (2)
The vectors are considered “similar” if their distance is less than or equal to a predefined threshold (tolerance) r. The procedure of finding the similarities is called “template matching.” Standard scoring of the time series must be performed; otherwise, vectors from different sources would be incomparable. Standard scoring implies centralization and normalization of the series, u(i) ← (u(i) − mean(u))/σ_u and v(i) ← (v(i) − mean(v))/σ_v, where mean(·) denotes the mean and σ the standard deviation. For a correct estimation of the mean and σ, the observed time series must be stationary at least in the wide sense.
A conditional probability that a vector y_m(j) is within the distance r from a particular template x_m(i) is estimated as follows:

Ĉ_i^m(r) = (1/(N − (m − 1)τ)) Σ_j I(d[x_m(i), y_m(j)] ≤ r), (3)

where “^” denotes an estimate and I(·) denotes an indicator function that is equal to 1 if the condition it indicates is fulfilled (distance less than or equal to r). Denoting the number of “ones” by n_i, the conditional probability (3) can also be expressed as

Ĉ_i^m(r) = n_i/(N − (m − 1)τ). (4)
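The template-matching estimate (4) can be sketched in a few lines of code. The following Python fragment is an illustration of the procedure under the definitions above, not the authors' original Matlab implementation; the function and variable names are ours:

```python
import numpy as np

def cond_prob(u, v, i, m, r, tau=1):
    """Estimate C_i^m(r): the fraction of follower vectors y_m(j)
    within Chebyshev distance r of the master template x_m(i).
    Index i is zero-based; both series should be standard-scored."""
    n_vec = len(u) - (m - 1) * tau          # number of vectors of length m
    template = u[i : i + m * tau : tau]     # template x_m(i) from the master
    matches = 0
    for j in range(n_vec):
        follower = v[j : j + m * tau : tau]
        # maximal absolute sample difference, as in (2)
        if np.max(np.abs(template - follower)) <= r:
            matches += 1
    return matches / n_vec                  # n_i / (N - (m - 1) * tau)

rng = np.random.default_rng(0)
u = rng.standard_normal(1000)
v = rng.standard_normal(1000)
u = (u - u.mean()) / u.std()                # obligatory standard scoring
v = (v - v.mean()) / v.std()
p = cond_prob(u, v, 0, m=2, r=0.5)
```

For independent series the returned fraction estimates the product of per-sample match probabilities, and it grows monotonically with the threshold r.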
Probabilities (3) and (4) describe a binary event, as the vector y_m(j) can either match or mismatch its template x_m(i). A precise mathematical derivation of the number of binary events necessary for reliable probability estimation is based on a normal approximation of the binomial distribution, in our case observing the number of matches n_i found during the template matching trials. The resulting tables and graphs point out the probability that the estimated value p̂, for a given number of matching trials, lies within specific boundaries known as the “confidence interval.” From the graphically presented results, it can be seen that a 95% confidence interval that is generally considered a reasonable uncertainty is attained if the expected number of matches is at least 10. So, an expected number of matches exceeding 10 is considered a “weak criterion” that yields satisfactorily good estimates of binary probabilities. An expected number of matches equal to 100 is considered a “strong criterion,” for which the boundaries are tighter and the probability estimation less uncertain, but it is usually more difficult to achieve as it requires long time series. The corresponding 95% confidence intervals, for both the weak and the strong criteria, follow the normal approximation:

p ∈ [p̂ − 1.96·sqrt(p̂(1 − p̂)/N_T), p̂ + 1.96·sqrt(p̂(1 − p̂)/N_T)], (5)

where N_T is the number of matching trials.
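As a numeric illustration of these criteria (our own sketch, using the standard normal approximation rather than the cited tables), the minimal number of matching trials follows directly from the requirement that the expected number of matches reaches 10 (weak) or 100 (strong):

```python
import math

def conf_interval_95(p_hat, n_trials):
    """95% confidence interval for a binomial proportion via the
    normal approximation: p_hat +/- 1.96*sqrt(p_hat*(1-p_hat)/n)."""
    half = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n_trials)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

def min_trials(p, expected_matches=10):
    """Trials needed so that n*p >= expected_matches
    (10 for the weak criterion, 100 for the strong one)."""
    return math.ceil(expected_matches / p)

p = 0.05                       # a typical small matching probability
print(min_trials(p, 10))       # weak criterion  -> 200 trials
print(min_trials(p, 100))      # strong criterion -> 2000 trials
print(conf_interval_95(p, min_trials(p, 10)))
```

This makes the dependence on the matching probability explicit: the smaller p (longer templates, smaller thresholds), the longer the time series needed to satisfy either criterion.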
The next step in XApEn estimation is to average the logarithms of the probability estimates (4) over all template values and to form a summand Φ^m(r):

Φ^m(r) = (1/N′) Σ_{i=1}^{N′} ln Ĉ_i^m(r), N′ = N − (m − 1)τ. (6)

The procedure is repeated for the vectors of length m + 1, yielding the estimate

XApEn(m, r, N) = Φ^m(r) − Φ^{m+1}(r). (7)

Since XApEn compares two different time series, a self-match (matching a template to itself), the alleged source of bias in ApEn, cannot occur. Unfortunately, comparing vectors from different series also means that there is no guarantee that a template of length m in the master time series has at least one match in the follower time series; hence, a lack of matches can occur. The consequence is a logarithm of zero in (6) that makes the XApEn estimate impossible. A usual correction simply ignores and skips the n_i = 0 cases. Such a correction decreases the numerator in (6) with respect to its denominator and considerably underestimates the summand Φ^m(r). We propose two summand corrections, both different from previously proposed ones. The first one is analogous to ApEn and implements artificial self-matches, replacing each zero count with a match of the template to itself (see (8)). The second correction excludes the zero-match templates from both the sum and the normalizing count of (6) (see (9)). If N_0 denotes the total number of zero-match templates, then

Φ^m(r) = (1/N′) [Σ_{i: n_i>0} ln Ĉ_i^m(r) + N_0 ln(1/N′)], (8)
Φ^m(r) = (1/(N′ − N_0)) Σ_{i: n_i>0} ln Ĉ_i^m(r). (9)
A simple abstract experiment analyzes the proposed corrections. Suppose that all nonzero probability estimates are equal to an arbitrary constant, Ĉ_i^m(r) = C. Then the summands given by (6) with the zero cases skipped, by (8), and by (9) are equal to

Φ_(6) = ((N′ − N_0)/N′) ln C,
Φ_(8) = ((N′ − N_0)/N′) ln C + (N_0/N′) ln(1/N′),
Φ_(9) = ln C. (10)

If N′ tends to infinity, N_0/N′ converges to zero and all estimates converge to the same true value, that is, to ln C. But if N′ is limited, the underestimation of Φ in (6) (with the zero cases skipped) and in (8) is proportional to N_0/N′. Equation (8) additionally suffers from a nonlinear bias, ln(1/N′), due to the false self-matching, thus repeating the problem of genuine self-matching in ApEn. The preferred correction is given by (9) and it was used in further evaluation. However, as will be clarified later on, if the XApEn parameters are properly chosen, the need for this adjustment is reduced.
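Putting the pieces together, a compact Python sketch of the full estimator with the preferred correction (9) might look as follows. It is an illustration under the paper's definitions, not the authors' implementation; the function and variable names are ours:

```python
import numpy as np

def _phi(u, v, m, r, tau=1):
    """Summand Phi^m(r) with correction (9): templates with zero
    matches (n_i = 0) are excluded from both the sum and the count."""
    n_vec = len(u) - (m - 1) * tau
    log_sum, n_zero = 0.0, 0
    for i in range(n_vec):
        template = u[i : i + m * tau : tau]
        matches = 0
        for j in range(n_vec):
            if np.max(np.abs(template - v[j : j + m * tau : tau])) <= r:
                matches += 1
        if matches == 0:
            n_zero += 1                     # would yield log(0); excluded
        else:
            log_sum += np.log(matches / n_vec)
    return log_sum / (n_vec - n_zero)

def xapen(u, v, m, r, tau=1):
    """Cross-approximate entropy XApEn(m, r, N) = Phi^m - Phi^(m+1)."""
    u = (u - u.mean()) / u.std()            # obligatory standard scoring
    v = (v - v.mean()) / v.std()
    return _phi(u, v, m, r, tau) - _phi(u, v, m + 1, r, tau)

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = rng.standard_normal(500)
val = xapen(x, y, 2, 1.0)
print(val)
```

For two independent noise series the result is a moderate positive value; the brute-force double loop is kept for clarity and is easily vectorized for long records.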
2.3. Parameter-Induced XApEn Inconsistency
The parameters necessary for XApEn estimation—time series length N, threshold level r, vector size m, and time lag τ—create a huge working space, but one with unstable regions. Recent ApEn contributions have reviewed this problem [4, 19], earlier also noticed in [6, 8, 15–18, 20]. But, to the best of the authors’ knowledge, a parameter study for XApEn does not exist, except for a brief comment. The few contributions that implement XApEn follow the standard guidelines from the original ApEn/XApEn papers [1–3, 22], with vector sizes m = 2 or 3 and thresholds from the traditional range [28, 29]. It should be noted that the threshold value in ApEn studies is calculated with respect to the standard deviation σ of the series, for example, r = 0.2σ. In XApEn studies, the standard deviation is by definition set to σ = 1, due to the obligatory standard scoring.
An illustrative example of contradictory results induced by different working points is presented in Figure 2. The contradictory results are noticed in borderline hypertensive rats exposed to shaker stress. Embedded plots show a “XApEn threshold profile”: XApEn values estimated for a range of threshold values, keeping all the other parameters fixed. To obtain a profile, the threshold level was gradually increased in small increments (0.01 and 0.05, depending on the threshold range). The black line corresponds to XApEn estimated from SBP and PI signals of rats in baseline conditions, while the red line corresponds to the threshold profile estimated from rats under stress. The intersection of these two lines divides the plot area into two regions: in the “gray” region, XApEn values are higher in baseline conditions than under stress, while in the “white” region XApEn contradictorily becomes lower at baseline than under stress. The maximal value of XApEn (the maximum of the threshold profile line) is denoted by a circle; an “empty” circle denotes the lower of the two maximums. The complete threshold-length (r, N) plane is shown in the remaining part of Figure 2. A threshold profile is estimated for each particular time series length N. The threshold positions for which the XApEn profile reaches its maximum are shown by circles. We have assumed a time lag τ = 1.
The symbolic presentation in Figure 2 distinguishes three different sources of contradictory results (the flip-flop effect):
(A) For a fixed length N, XApEn “flips” from being lower during stress at lower thresholds to being higher during stress at higher thresholds.
(B) For a fixed threshold r, an undesirable change of XApEn from being lower under stress to being higher under stress occurs at particular time series lengths.
(C) If the threshold r_max is chosen (r_max is the threshold for which XApEn reaches its maximal value [15–17]), XApEn changes from being higher under stress for some lengths, shown by a full red circle, to being lower under stress for others, shown by empty red circles.
2.4. Explicit Formulae for Error Estimation
The constitutive XApEn elements are the conditional probabilities ((3) and (4)), each subject to an estimation error:

e_C(i) = Ĉ_i^m(r) − C_i^m(r). (11)

These errors accumulate logarithmically to form a summand estimation error,

e_Φ = Φ̂^m(r) − Φ^m(r) = (1/N′) Σ_i [ln Ĉ_i^m(r) − ln C_i^m(r)], (12)

and the XApEn error,

e_XApEn = e_Φ^m − e_Φ^{m+1}. (13)
To find a stable parameter region where the estimation errors would be minimized, the true conditional probabilities must be evaluated for each template x_m(i):

C_i^m(r) = ∏_{k=0}^{m−1} ∫_{u(i+kτ)−r}^{u(i+kτ)+r} f_v(x) dx, (14)

where f_v(x) denotes the probability density function (p.d.f.) of the follower signal. Each factor corresponds to an event where a follower sample is within the distance r from the respective template sample.
Unfortunately, biomedical signals are an outcome of numerous complex physiological inputs and feedbacks, so exact formulae for their p.d.f. have not been determined yet. Instead, we used artificially generated data with normal and uniform distributions (zero mean and unit variance after standard scoring):

f_N(x) = (1/sqrt(2π)) e^{−x²/2},
f_U(x) = 1/(2·sqrt(3)), −sqrt(3) ≤ x ≤ sqrt(3). (15)
The artificial data were used both for the master and for the follower series, generating four experimental data sets: a master and its follower both normal, both uniform, normal master with uniform follower, and uniform master with normal follower.
The corresponding true values of the conditional probabilities, assuming standard-scored data, are

C_i^m(r) = ∏_{k=0}^{m−1} [min(u(i + kτ) + r, sqrt(3)) − max(u(i + kτ) − r, −sqrt(3))]/(2·sqrt(3)) (uniform),
C_i^m(r) = ∏_{k=0}^{m−1} [G(u(i + kτ) + r) − G(u(i + kτ) − r)] (normal). (16)

The normal cumulative distribution function in the lower part of (16) is denoted by G instead of Φ, to avoid confusion with the summand Φ. With (16) known, all the relevant estimation errors (11)–(13) can be found.
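The closed forms in (16) are easy to cross-check against Monte Carlo estimates. The following sketch (our illustration) computes the per-sample factors only, assuming unit-variance standard-scored data so that the uniform support is [−√3, √3]:

```python
import math
import numpy as np

SQ3 = math.sqrt(3.0)                        # support bound of a unit-variance uniform

def p_normal(t, r):
    """P(|v - t| <= r) for a standard normal follower sample:
    G(t + r) - G(t - r), with G the normal c.d.f."""
    G = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return G(t + r) - G(t - r)

def p_uniform(t, r):
    """P(|v - t| <= r) for a follower sample uniform on [-sqrt(3), sqrt(3)]."""
    lo, hi = max(t - r, -SQ3), min(t + r, SQ3)
    return max(0.0, hi - lo) / (2.0 * SQ3)

# Monte Carlo check of the closed forms
rng = np.random.default_rng(2)
t, r = 0.4, 0.2
v_n = rng.standard_normal(200_000)
v_u = rng.uniform(-SQ3, SQ3, 200_000)
diff_n = abs(np.mean(np.abs(v_n - t) <= r) - p_normal(t, r))
diff_u = abs(np.mean(np.abs(v_u - t) <= r) - p_uniform(t, r))
print(diff_n, diff_u)
```

Both differences are on the order of the Monte Carlo standard error (about 10^-3 for 2·10^5 samples), confirming the analytic factors used in (16).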
2.5. Reference Thresholds for Cardiovascular Signals
Error analysis in artificial data has the privilege of evaluating a reference point (true value) using an exact formula. The problem with natural signals is that their data samples are statistically dependent. Besides, the signal nature is not completely revealed and there are no mathematical expressions to describe it. So there exists no calculated reference value to which the estimated results could be compared. The first auxiliary point of reference for ApEn was obtained empirically, when the existence of an ApEn maximum was pointed out, together with the corresponding threshold r_max for which this maximum is reached. r_max is found by performing the tedious task of estimating ApEn for each threshold value separately, thus obtaining a threshold profile. Later, the tedious estimation was substituted by a set of expressions that automatically evaluate a theoretical threshold that is (almost) equal to r_max; the same authors subsequently proposed an alternative set of formulae. It was shown [15, 16] that the empirical threshold level for which the maximal value of ApEn is obtained when plotting the threshold profile depends on the time series length N, on the standard deviation of the time series, and on the standard deviation of the lagged differential signal, where τ is the time lag (delay) between the samples. The observed dependency was shown to be linear with respect to the square root of the ratio of these standard deviations, so the authors performed a computer search for the coefficients that yield the best fit in the least-squares sense to obtain (18). We have adopted the same method to find the case of a theoretical threshold in (18) necessary for our evaluations but not covered in [15, 16].
A method for automatic evaluation of an auxiliary reference point does not exist for XApEn. We observed that the threshold of the master time series influences the threshold of XApEn and that their difference is linearly related to the square root of the ratio of σ_du and σ_dv, the standard deviations of the lagged differential master and follower series. Then, for each artificially created master-follower combination (normal-normal, normal-uniform, uniform-normal, and uniform-uniform), we generated 100 normalized signal pairs, in total 400 signal pairs per parameter set. For each signal pair, we found the maximum of the threshold profile and the corresponding threshold, and we also estimated σ_du and σ_dv. Then least-squares fitting was performed, yielding particular coefficients a, b, and c. The procedure was repeated for different N values, from 300 to 5000 with an increment of 100, and the results were averaged for each particular N, giving the coefficient sets. Since several sets had almost the same averaged least-squares error, ten out of 25 baseline signals from rats were randomly selected for justification, and the coefficient set with minimal absolute error (averaged over all values of N and m) was selected for a threshold r_max (17), for which XApEn approaches its maximal value, where the master-series term is evaluated according to (18).
Here, σ is the standard deviation of a time series (in XApEn it is always equal to one), while σ_du and σ_dv are the standard deviations of the lagged differential series du(i) = u(i + τ) − u(i) and dv(i) = v(i + τ) − v(i). The expression in (18) used here is presented for the first time in this study, while the remaining expressions for r_max are taken from the literature. As shown in Table 3, these expressions yield a lower relative error for the cardiovascular data used in this study, so they are more appropriate than similar expressions derived elsewhere. A subset of the thresholds (17) was presented in our earlier work.
The threshold r_max is only halfway towards a reliable working point. As will be shown, if the threshold r_max is applied, the reliability criteria (weak or strong) are not satisfied. A different interpretation of these criteria states that at least 10 (or 100) matching events should be found in order to have the estimated event probability fit within the confidence bounds (5) with a certainty of 95%.
Obviously, an increased template length m reduces the matching probability of the template and its follower, and thus the threshold increase has to be proportional to it. The coefficients in (19) and (20) were obtained using the same data sets as for (17). In this case, the conditional probabilities were estimated, and the fitting was performed among the coefficients that yield less than 5% of unreliable probabilities, over all values of m and over all data lengths:
In the case of the strong reliability criterion, the relationship of threshold (20) with respect to vector length m is nonlinear:
For the sake of clarity, the different threshold values are summarized in Table 1.
2.6. Reliability Criteria in Statistically Dependent Data
The confidence bounds (5) hold for statistically independent binary events. Template matching in artificial data satisfies this constraint: observing a template x_m(i), its successive matching tests at positions j and j + 1 are independent, as the absolute difference of a constant and independent samples does not destroy independence. A similar line of reasoning holds for lagged vectors. It follows that the estimated probabilities have a scaled binomial distribution, so the reliability criteria and confidence intervals (5) can be applied. If the data were statistically dependent, the only change would be an increase of the confidence bounds, without affecting the estimated values or the reliability criteria. However, true confidence bounds in real biomedical data cannot be defined, as the true probability is unknown, so statistical dependency is of no importance for real data. The reliability can still be tested through the number of matches, showing whether the obtained value is estimated from a sufficient amount of data.
3. Results and Discussion
3.1. Estimation Errors in Artificial Data
For XApEn error estimation, a master series and a hundred of its followers were artificially generated for a range of (N, m, r) parameters, and exactly the same values were calculated using the derived formulae. Due to their size, the results for all four data sets are presented within the Supplementary Material (available online at https://doi.org/10.1155/2017/8365685).
Relative conditional probability errors are plotted in Figure 3 for master and follower series both normally distributed. Satisfactory results are obtained only for an extremely atypical (large) threshold level. For a typical threshold, the error is below 10% only for large N and r. The results are in accordance with the reliability criteria stating that the expected number of matches should be large. Indeed, the probability is defined as a product of per-sample probabilities (14), so it is at its largest if m is small, while an increased threshold augments the number of template matches (4), again enlarging it.
The reliability of an estimate is checked by comparing the data length with the required minimum. The percentage of test failures is presented in Figure 4 in an (N, r) plane, where 0% failures are presented by green color that gradually changes to yellow (100% failures). The white line marks the classical N and r values, revealing that the estimates implementing the classical parameters are unreliable for longer templates. The nonlinear interrelation of N, r, and m in Figure 4 also points out that the choice of parameters that would yield a stable working point is not a straightforward task.
The estimation error of the summand (12) exposes two problems. The first one is a lack of matches (n_i = 0) that induces logarithms of zero in (6); as shown in Figure 5, these constitute a subset of all unreliable probabilities from Figure 4. The second problem is the error accumulation (12).
Figure 6 shows a line graph of the summand Φ and its changes with respect to series length N when it is evaluated by a formula (exact values), when it is estimated in the traditional way, and when correction (9) is applied. This correction does attenuate the estimation errors, even for higher template lengths, but it cannot be used alone, as it removes only a subset of the problem. The convergence of the summand towards its true value depends on the data type: uniform data achieve the correct limit at shorter lengths. The convergence is also m-dependent, requiring different data lengths to achieve the same limit. Since XApEn is defined as a difference of Φ values (7), the fact that each summand adds its own independent estimation error is a further source of instability.
Figure 7 shows two-dimensional XApEn profiles. The values calculated using the derived formulae ((a) and gray lines in (b) and (c)) verify the known fact that XApEn decreases linearly with the logarithm of the threshold value r. The major conclusion is that the true XApEn values are influenced neither by the time series length N nor by the template length m. Frequent notions that XApEn monotonously increases with N until a plateau is reached, or that its value decreases with m, are a consequence of probability underestimation that decreases with series length and eventually disappears. The panels in the second row of (a) show corrected estimates that are in good accordance with the true values for m = 1, but show increased discrepancies for m = 2 and 3. If no correction is applied (the third row of (a)), the estimate substantially diverges from the correct values. (b) and (c) show XApEn as a line graph. The discrepancy between the true and estimated values points out that the generally accepted guidelines for parameter choice do not offer safe estimation.
3.2. Stable Working Point in Cardiovascular Environment
To verify the accuracy of the derived equation (17), we followed the recommendations from [15, 16], where the corresponding equations were validated with respect to the ApEn relative error. For each SBP-PI (or R-R) and PI (or R-R)-SBP signal pair of animals (or volunteers), a threshold was evaluated according to (17) and used as a parameter for XApEn estimation. The true maximal XApEn value was provided by the time-consuming procedure of finding a threshold profile. The corresponding relative error is presented in Table 2. The results estimated from signals recorded in human volunteers are presented in the italic part of Table 2. The relative error is below 2%, except for the very short data lengths, where it reaches 6%; unfortunately, this corresponds to the ambulatory records that cannot be too long.
Results are presented as mean ± SEM. Results of 41 human volunteers are shown in the first three italic rows. Results of 25 rats in different conditions (a total of 50 signals) are presented in the roman rows.
Error is evaluated according to the guidelines from the two cited studies. Results are presented as mean ± SEM, averaged over 100 time series: 50 data series of Wistar and borderline hypertensive rats in baseline and in stress, repeated for both parameter settings.
For the sake of comparison, we estimated the ApEn error from our data as well. We applied r_max calculated according to (18) and according to the alternative formulae. The relative error is shown in Table 3, providing the following conclusions: the error level is lower if the equations behind (18) are used, and a comparison of the results in Tables 2 and 3 verifies that our expressions (17) are equal in performance to the expressions (18) derived by Lu et al. in [15, 16].
The value r_max cannot be applied directly, since the percentage of probabilities that fail the weak and strong criteria is then almost 100% (Figure 8). But this threshold reflects the nature of the data, so it can be used as a reference for the weak- and strong-criterion thresholds (19) and (20). Their implementation reduces the failures, as shown in Figure 8.
The actual threshold values calculated from real signals are shown in Table 4. For the sake of comparison, the threshold values for normally distributed artificial data are also plotted (Figure 9). These thresholds show that the traditional guidelines (0.15, 0.2, and 0.35) considerably underestimate the actual requirements for reliable XApEn estimation. Even for moderate template lengths, the threshold required for strong reliability may exceed the traditional values severalfold. It has been speculated that detailed system information would be lost at large threshold values (for ApEn), meaning that, beyond them, a template would always match its follower; in the investigated cardiovascular signals, this assumption does not hold, as the evaluated high threshold levels are still considerably below the amplitude range of the observed data.
(a) Threshold for which XApEn reaches its maximal value (17)
(b) Threshold for which at least 95% of the probabilities satisfy the weak criterion (19)
(c) Threshold for which at least 95% of the probabilities satisfy the strong criterion (20)
Results are presented as mean ± SEM, averaged over 50 time series of both Wistar and borderline hypertensive rats, both in baseline and in stress, and averaged over 41 time series of volunteers; results from human volunteers are in italics.
The developed sets of threshold expressions are primarily intended for cardiovascular signal analysis. The artificial signals used here are uniform on a bounded segment and Gaussian (with an increased incidence of small amplitudes, but unbounded); both artificial distributions are symmetric. The amplitudes of real signals (PI, SBP, and R-R) are bounded by physiological constraints. Signal asymmetry is expressed by the skewness factor, shown in Table 5. Cardiovascular data can be skewed to the left or to the right, so Table 5 presents the average of the absolute skewness values. SBP and PI signals exhibit a level of asymmetry comparable to the Rayleigh distribution (skewness ≈ 0.631), but below the skewness of the exponential distribution (skewness = 2). Threshold evaluation in signals with highly skewed distributions is a subject for follow-up studies. In particular, it would be valuable to explore the important class of interspike interval time series, with distributions that are close to exponential. The memoryless property of the exponential distribution might affect the values of the differential time series standard deviation, necessary for (17) and (18).
Absolute skewness values are reported; each signal set comprises a couple of highly skewed signals that influence the standard deviation.
Finally, Table 6 presents a brief recap of the steps necessary for reliable XApEn estimation.
This paper performed an analysis of the XApEn parameter space and proposed methods for adjusting both the procedure and its parameter choice. An artificial environment with known data distributions allowed exact mathematical evaluation of reference XApEn values and their comparison to the estimated ones. The comparison indicated vast regions in the XApEn parameter space (threshold r, data length N, and template length m) with unstable estimation of both XApEn and its constitutive components. It is also shown that the parameter interrelationship is nonlinear and depends on the data type. An established relationship between the theory of binary probability estimation and the template matching procedure proved that the source of instability is unreliable probability estimation, a consequence of working within the unstable region. Stable XApEn estimates should remain constant for increased data lengths, and the frequently observed monotonous increase with N is just another form of instability. It is also shown that the traditional guidelines work within the regions of instability. The proposed correction attenuated but did not eliminate the problem.
The solution is to target parameters within the stable region of the parameter space and to perform the parameter choice jointly, owing to the nonlinear nature of their relationship. The real environment does not offer the advantage of exact mathematical error analysis, so a set of parameters for stable and reliable estimation is ensured by deriving a novel set of threshold expressions. The expressions take into account the data length N, the template length m, the statistical data properties, and the reliability criterion (weak or strong). The consistency of the estimates is validated for typical cases of short records (ambulatory patients) and long records (small laboratory animals). It is shown that the classical guidelines underestimate the threshold levels and data lengths required for consistent results.
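The data-length requirement has a simple Monte Carlo interpretation, in the spirit of bit-error-rate estimation in digital communications: a matching probability p estimated from n template comparisons is a binomial proportion with standard error sqrt(p(1-p)/n), so a target relative error fixes a minimum n. The following sketch is illustrative only (the function name and the 95% confidence factor are our choices, not formulas from the paper):

```python
import math

def min_comparisons(p_match, rel_error=0.1, z=1.96):
    """Minimum number of template comparisons n so that the binomial estimate
    of a matching probability p_match has relative error <= rel_error at the
    z-level: z * sqrt(p(1-p)/n) <= rel_error * p  =>
    n >= (z/rel_error)**2 * (1-p)/p."""
    return math.ceil((z / rel_error) ** 2 * (1 - p_match) / p_match)

n_rare = min_comparisons(0.01)   # rare matches (small r, long m) need many comparisons
n_common = min_comparisons(0.5)  # common matches are cheap to estimate
```

The rare-match case demands roughly a hundred times more comparisons than the common-match case, which is why small thresholds and long templates push the required data length far beyond the classical guidelines.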
This analysis can be employed for all entropy measures that rely on probability estimation. The proposed XApEn improvements may contribute to its wider implementation in the domain of related physiological processes, such as baroreflex studies. To facilitate and promote wider XApEn deployment, detailed instructions are included in the Supplementary Material.
Although the method is validated on cardiovascular data, it can equally be applied to find correct parameters for XApEn analysis of processes that are not necessarily biomedical, provided that their distribution is not too skewed. Further investigations will consider signals with highly asymmetric distributions, for example, interspike interval time series, as well as signals with mismatched distributions, where zero probability is not an error but an inherent signal property.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the Serbian Ministry of Education, Science and Technological Development, under Grants TR32040 and III41013. The authors acknowledge the support of the EU COST Action CA15104 Body Communications Group. The authors are also grateful to Professor Dr. Harald Straus, University of Iowa, and to Professor Dr. Suave Lobodzinski, UCLA, for the inspiring discussions in October 2016.
Supplementary Materials
Figures 3, 4, and 5 display all combinations of master and follower artificial time series.
References
- S. M. Pincus, “Approximate entropy as a measure of system complexity,” Proceedings of the National Academy of Sciences of the United States of America, vol. 88, no. 6, pp. 2297–2301, 1991.
- S. Pincus, “Approximate entropy (ApEn) as a complexity measure,” Chaos, vol. 5, no. 1, pp. 110–117, 1995.
- S. Pincus and B. H. Singer, “Randomness and degrees of irregularity,” Proceedings of the National Academy of Sciences of the United States of America, vol. 93, no. 5, pp. 2083–2088, 1996.
- J. M. Yentes, N. Hunt, K. K. Schmid, J. P. Kaipust, D. McGrath, and N. Stergiou, “The appropriate use of approximate entropy and sample entropy with short data sets,” Annals of Biomedical Engineering, vol. 41, no. 2, pp. 349–365, 2013.
- J. S. Richman and J. R. Moorman, “Physiological time-series analysis using approximate entropy and sample entropy,” American Journal of Physiology—Heart and Circulatory Physiology, vol. 278, no. 6, pp. H2039–H2049, 2000.
- A. Porta, G. Baselli, D. Liberati et al., “Measuring regularity by means of a corrected conditional entropy in sympathetic outflow,” Biological Cybernetics, vol. 78, no. 1, pp. 71–78, 1998.
- A. Porta, P. Castiglioni, V. Bari et al., “K-nearest-neighbor conditional entropy approach for the assessment of the short-term complexity of cardiovascular control,” Physiological Measurement, vol. 34, no. 1, article no. 17, pp. 17–33, 2013.
- W. T. Chen, Z. Z. Wang, H. B. Xie, and W. Yu, “Characterization of surface EMG signal based on fuzzy entropy,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 15, no. 2, pp. 266–272, 2007.
- M. Costa, A. L. Goldberger, and C. Peng, “Multiscale entropy analysis of biological signals,” Physical Review E, vol. 71, no. 2, Article ID 021906, 2005.
- P. Li, C. Liu, K. Li, D. Zheng, C. Liu, and Y. Hou, “Assessing the complexity of short-term heartbeat interval series by distribution entropy,” Medical and Biological Engineering and Computing, vol. 53, no. 1, pp. 77–87, 2015.
- T. Skoric, O. Mohamoud, B. Milovanovic, N. Japundzic-Zigon, and D. Bajic, “Binarized cross-approximate entropy in crowdsensing environment,” Computers in Biology and Medicine, vol. 80, pp. 137–147, 2017.
- X. Ning, Y. Xu, J. Wang, and X. Ma, “Approximate entropy analysis of short-term HFECG based on wave mode,” Physica A: Statistical Mechanics and its Applications, vol. 346, no. 3-4, pp. 475–483, 2005.
- Y.-H. Pan, Y.-H. Wang, S.-F. Liang, and K.-T. Lee, “Fast computation of sample entropy and approximate entropy in biomedicine,” Computer Methods and Programs in Biomedicine, vol. 104, no. 3, pp. 382–396, 2011.
- S. Zurek, P. Guzik, S. Pawlak, M. Kosmider, and J. Piskorski, “On the relation between correlation dimension, approximate entropy and sample entropy parameters, and a fast algorithm for their calculation,” Physica A: Statistical Mechanics and its Applications, vol. 391, no. 24, pp. 6601–6610, 2012.
- S. Lu, X. Chen, J. K. Kanters, I. C. Solomon, and K. H. Chon, “Automatic selection of the threshold value r for approximate entropy,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 8, article no. 8, pp. 1966–1972, 2008.
- K. Chon, C. Scully, and S. Lu, “Approximate entropy for all signals,” IEEE Engineering in Medicine and Biology Magazine, vol. 28, no. 6, pp. 18–23, 2009.
- P. Castiglioni and M. Di Rienzo, “How the threshold "r" influences approximate entropy analysis of heart-rate variability,” in Proceedings of the Computers in Cardiology 2008, CAR, pp. 561–564, Bologna, Italy, September 2008.
- C. Liu, C. Liu, P. Shao et al., “Comparison of different threshold values r for approximate entropy: Application to investigate the heart rate variability between heart failure and healthy control groups,” Physiological Measurement, vol. 32, no. 2, pp. 167–180, 2011.
- J. F. Restrepo, G. Schlotthauer, and M. E. Torres, “Maximum approximate entropy and r threshold: A new approach for regularity changes detection,” Physica A: Statistical Mechanics and its Applications, vol. 409, pp. 97–109, 2014.
- A. Boskovic, T. Loncar-Turukalo, O. Sarenac, N. Japundzic-Zigon, and D. Bajic, “Unbiased entropy estimates in stress: A parameter study,” Computers in Biology and Medicine, vol. 42, no. 6, pp. 667–679, 2012.
- F. Kaffashi, R. Foglyano, C. G. Wilson, and K. A. Loparo, “The effect of time delay on approximate & sample entropy calculations,” Physica D. Nonlinear Phenomena, vol. 237, no. 23, pp. 3069–3074, 2008.
- S. M. Pincus, T. Mulligan, A. Iranmanesh, S. Gheorghiu, M. Godschalk, and J. D. Veldhuis, “Older males secrete luteinizing hormone and testosterone more irregularly, and jointly more asynchronously, than younger males,” Proceedings of the National Academy of Sciences of the United States of America, vol. 93, no. 24, pp. 14100–14105, 1996.
- P. Li, K. Li, C. Liu, D. Zheng, Z.-M. Li, and C. Liu, “Detection of coupling in short physiological series by a Joint Distribution Entropy Method,” IEEE Transactions on Biomedical Engineering, vol. 63, no. 11, pp. 2231–2242, 2016.
- O. Šarenac, M. Lozić, S. Drakulić et al., “Autonomic mechanisms underpinning the stress response in borderline hypertensive rats,” Experimental Physiology, vol. 96, no. 6, pp. 574–589, 2011.
- T. Ceranic, T. Loncar-Turukalo, B. Milovanovic, N. Japundzic-Zigon, and D. Bajic, “Cross-entropy of systolic blood pressure - Pulse interval: Automatic threshold and its reliability,” in Proceedings of the 2013 40th Computing in Cardiology Conference, CinC 2013, pp. 1219–1222, Zaragoza, Spain, September 2013.
- M. P. Tarvainen, P. O. Ranta-aho, and P. A. Karjalainen, “An advanced detrending method with application to HRV analysis,” IEEE Transactions on Biomedical Engineering, vol. 49, no. 2, pp. 172–175, 2002.
- M. C. Jeruchim, “Techniques for Estimating the Bit Error Rate in the Simulation of Digital Communication Systems,” IEEE Journal on Selected Areas in Communications, vol. 2, no. 1, pp. 153–170, 1984.
- H.-T. Wu, C.-Y. Lee, C.-C. Liu, and A.-B. Liu, “Multiscale cross-approximate entropy analysis as a measurement of complexity between ECG R-R interval and PPG pulse amplitude series among the normal and diabetic subjects,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 231762, 2013.
- S. Alesci, P. E. Martinez, S. Kelkar et al., “Major depression is associated with significant diurnal elevations in plasma interleukin-6 levels, a shift of its circadian rhythm, and loss of physiological complexity in its secretion: Clinical implications,” Journal of Clinical Endocrinology and Metabolism, vol. 90, no. 5, pp. 2522–2530, 2005.
- H.-T. Wu, C.-C. Liu, M.-T. Lo et al., “Multiscale cross-approximate entropy analysis as a measure of complexity among the aged and diabetic,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 324325, 7 pages, 2013.
Copyright © 2017 Tamara Skoric et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.