Applied Bionics and Biomechanics
Volume 2017, Article ID 5985479, 13 pages
https://doi.org/10.1155/2017/5985479
Research Article

Comparison of Machine Learning Methods for the Arterial Hypertension Diagnostics

1Research Medical and Biological Engineering Centre of High Technologies, Ural Federal University, Mira 19, Yekaterinburg 620002, Russia
2Laboratório de Instrumentação, Engenharia Biomédica e Física da Radiação (LIBPhys-UNL), Departamento de Física, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, Monte da Caparica, 2892-516 Caparica, Portugal

Correspondence should be addressed to Vladimir S. Kublanov; kublanov@mail.ru

Received 4 April 2017; Revised 29 May 2017; Accepted 1 June 2017; Published 31 July 2017

Academic Editor: Justin Keogh

Copyright © 2017 Vladimir S. Kublanov et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents an accuracy analysis of machine learning approaches applied to cardiac activity. The study evaluates the possibility of diagnosing arterial hypertension by means of short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from arterial hypertension of degree II-III. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with a radial basis function kernel, decision trees, and the naive Bayes classifier. In addition, different methods of feature extraction were analyzed: statistical, spectral, wavelet, and multifractal. In total, 53 features were investigated. The results show that discriminant analysis achieves the highest classification accuracy. The suggested approach of searching for a noncorrelated feature set achieved better results than a dataset based on principal components.

1. Introduction

According to World Health Organization data, hypertension affects more than 1 billion people worldwide. Many factors can contribute to hypertension, including occupational stress and job strain [1]. One of the main problems concerning the treatment of arterial hypertension is late detection in apparently healthy people. Some studies have shown that among individuals with hypertension, more than 35% were unaware of their condition [2].

The heart rate variability (HRV) is among the most widely used biomedical signals, owing to the ease of recording the electrical heart activity [3]. HRV analysis can be applied to the task of arterial hypertension diagnostics, since it is well known that various features of the HRV reflect the behavior of different modules of the autonomic nervous system (ANS) [4].

Common HRV analysis implies the application of a variety of analysis methods: statistical, spectral, and nonlinear. Generally, in a single study, a limited number of features is extracted. For example, in [5], 13 nonlinear features were studied for the efficacy of stress state detection. The current paper comprises a study of 53 different features. Usually, during one study, feature sets of a particular method are used. For example, in [6], sets of time-domain features, nonlinear features, and spectral features were studied separately for automatic sleep staging by means of HRV signal analysis. In this study, combinations of different methods were analyzed.

Common uses of machine learning approaches for condition classification based on HRV information imply the usage of several available methods: support vector machine (SVM), discriminant analysis (DA), and ordinal pattern statistics (OPS) [7–9]. However, the selection of a particular approach is not always justified. In the current study, linear and quadratic DA, SVM, k-nearest neighbors, decision trees, and naive Bayes approaches were investigated.

In one of our previous works, an investigation of linear and quadratic discriminant analysis was carried out, studying arterial hypertension diagnostics using single features of short-term HRV signals. In that work, the evaluation of the features and of the classifier efficacy was carried out by means of in-house software written in MATLAB [10]. In the present paper, the machine learning methods were implemented in Python.

In summary, the goal of the present work is to study the efficacy of different machine learning approaches for the diagnostics of arterial hypertension by means of short-term HRV, using combinations of statistical, spectral (Fourier and wavelet transforms), and nonlinear features. By applying feature combinations of different methods, we aim to build more robust and accurate classifiers.

2. Materials and Methods

2.1. Recorded Dataset

The clinical part of the study was performed in the Sverdlovsk Clinical Hospital of Mental Diseases for Military Veterans (Yekaterinburg, Russian Federation). For the HR signals registration, the electroencephalograph-analyzer “Encephalan-131-03” (“Medicom-MTD,” Taganrog, Russian Federation) was used. The rotating table Lojer (Vammalan Konepaja OY, Finland) performed the spatial position change of the patient during passive orthostatic load; the lift of the head end of the table was up to 70° from the horizontal position. The clinical part of the study was approved by the local Ethics Committee of the Ural State Medical University.

Participants of this study were 30 healthy volunteers and 40 patients suffering from arterial hypertension of degree II and III. The electrocardiography (ECG) signals were recorded in two functional states: functional rest (state F) and passive orthostatic load (state O). The length of the signal in each state was about 300 seconds. The HRV signals were subsequently derived from the ECG signals automatically by the "Encephalan-131-03" software. Figure 1 presents a diagram of the functional states.

Figure 1: Diagram of the study.
2.2. Heart Rate Variability Features

Prior to the processing, the original time series were cleaned of artifacts. By artifacts, in this study, we mean values of the R-R intervals that differ from the mean by more than three standard deviations. NN is the abbreviation for the "normal to normal" time series, that is, the series without artifacts. Among all studied time series, less than 2% of the data was removed. For spectral and multifractal analyses, the NN time series were interpolated using cubic spline interpolation at a 10 Hz sampling frequency.
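A minimal sketch of this preprocessing in Python (the paper's implementation language). The R-R series is assumed to be given in seconds; the function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def preprocess_rr(rr, fs=10.0):
    """Reject artifacts (R-R values > 3 SD from the mean) and
    resample the resulting NN series at fs Hz via cubic spline."""
    rr = np.asarray(rr, dtype=float)
    m, sd = rr.mean(), rr.std()
    keep = np.abs(rr - m) <= 3 * sd       # NN series: artifacts removed
    nn = rr[keep]
    t = np.cumsum(nn)                     # beat times of the NN series
    spline = CubicSpline(t, nn)
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    return nn, spline(t_uniform)
```

The uniformly resampled output is what the spectral and multifractal analyses below would operate on.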

The feature dataset is the same as used previously [11], where it was shown that features of the HRV signals recorded in the state O have better classification accuracy for arterial hypertension diagnostics. Therefore, in this study, we analyze data only in the state O. The used features were separated into statistical, geometric, spectral (Fourier based), wavelet, nonlinear, and multifractal. Their description will be given below.

2.2.1. Statistical Features

Statistical methods are used for the direct quantitative evaluation of the HR time series. The main quantitative features are as follows: (i) M, the mean value of the R-R intervals after artifact rejection: M = (1/N) Σ RR_i, where N is the number of elements in the NN time series and RR_i is the ith element of the R-R time series. (ii) HR, the heart rate, is the inverse of M: HR = 60/M beats per minute (with M in seconds). (iii) SDNN, the standard deviation of the NN intervals: SDNN = sqrt((1/N) Σ (RR_i − M)²). (iv) CV, the coefficient of variation, defined as the ratio of the standard deviation SDNN to the mean M, expressed in percent: CV = (SDNN/M) · 100%. (v) RMSSD, the square root of the mean of the squared differences between successive elements in NN: RMSSD = sqrt((1/(N − 1)) Σ (RR_{i+1} − RR_i)²). (vi) NN50, the number of pairs of successive elements in NN that differ by more than 50 ms [12].
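The statistical features above map directly onto a few lines of numpy; this sketch assumes an artifact-free NN series in seconds, with the dictionary keys chosen to match the feature names in the text.

```python
import numpy as np

def statistical_features(nn):
    """Statistical HRV features; nn is the artifact-free R-R series in seconds."""
    nn = np.asarray(nn, dtype=float)
    d = np.diff(nn)                          # successive differences
    M = nn.mean()                            # mean R-R interval
    return {
        "M": M,
        "HR": 60.0 / M,                      # heart rate, beats per minute
        "SDNN": nn.std(),
        "CV": nn.std() / M * 100.0,          # coefficient of variation, %
        "RMSSD": np.sqrt(np.mean(d ** 2)),
        "NN50": int(np.sum(np.abs(d) > 0.05)),  # pairs differing by > 50 ms
    }
```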

2.2.2. Geometric Features

The geometric methods analyze the distribution of the R-R intervals as random numbers. The common features of these methods are as follows: (i) Mo, the mode, is the most frequent value of the R-R interval. In the case of a normal distribution, the mode is close to the mean M. (ii) MxDMn, the variation range, is the difference between the highest and the lowest R-R interval in the time series. MxDMn shows the variability of the R-R interval values and reflects the activity of the parasympathetic division of the ANS. (iii) AMo, the amplitude of the mode, is the number of R-R intervals that correspond to the mode value. AMo shows the stabilizing effect of heart rate regulation, mainly caused by sympathetic activity [12].

The following indexes are derived from the common geometric features: (i) SI, the stress index, reflects the centralization degree of the heart rate and mostly characterizes the activity of the sympathetic division of the ANS: SI = AMo/(2 · Mo · MxDMn). (ii) IAB, the index of the autonomic balance, depends on the relation between the activities of the sympathetic and parasympathetic divisions of the ANS: IAB = AMo/MxDMn. (iii) ARI, the autonomic rhythm index, shows parasympathetic shifts of the autonomic balance: smaller values of ARI correspond to a shift of the autonomic balance toward parasympathetic activity: ARI = 1/(Mo · MxDMn). (iv) IARP, the index of adequacy of regulation processes, reflects the accordance of the autonomic function changes of the sinus node with the sympathetic regulatory effects on the heart: IARP = AMo/Mo.
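A sketch of the geometric features and derived indexes, assuming the standard Baevsky-style definitions (the 50 ms histogram bin and the SI, IAB, ARI, and IARP formulas are conventional in this methodology; the original formulas were not preserved in the text, so treat these as assumptions):

```python
import numpy as np

def geometric_features(nn, bin_width=0.05):
    """Baevsky-style geometric HRV features; nn in seconds, 50 ms histogram bin."""
    nn = np.asarray(nn, dtype=float)
    nbins = max(1, int(np.ceil((nn.max() - nn.min()) / bin_width)))
    hist, edges = np.histogram(
        nn, bins=nbins, range=(nn.min(), nn.min() + nbins * bin_width))
    i = int(np.argmax(hist))
    Mo = (edges[i] + edges[i + 1]) / 2       # mode, s (center of the modal bin)
    AMo = hist[i] / len(nn) * 100.0          # amplitude of the mode, %
    MxDMn = nn.max() - nn.min()              # variation range, s
    return {
        "Mo": Mo, "AMo": AMo, "MxDMn": MxDMn,
        "SI": AMo / (2 * Mo * MxDMn),        # stress index
        "IAB": AMo / MxDMn,                  # index of autonomic balance
        "ARI": 1.0 / (Mo * MxDMn),           # autonomic rhythm index
        "IARP": AMo / Mo,                    # adequacy of regulation processes
    }
```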

2.2.3. Spectral Features

Spectral analysis is used to quantify periodic processes in the heart rate by means of the Fourier transform (Fr). The main spectral components of the HRV signal are high frequency, HF (0.15–0.4 Hz); low frequency, LF (0.04–0.15 Hz); very low frequency, VLF (0.003–0.04 Hz); and ultralow frequency, ULF (lower than 0.003 Hz) [12, 13]. For short-term time series shorter than 300 seconds, the ULF spectral component is not analyzed.

The HF spectral component characterizes the activity of the parasympathetic system of the ANS and the activity of the autonomic regulation loop. High frequencies of the heart rate in the HRV spectrum are associated with breathing and determined by the connection and influence of the vagus nerve on the sinus node.

The LF spectral component mainly characterizes the activity of the sympathetic vascular tone regulation center. Low frequencies reflect modulation of the heart rate by the sympathetic nervous system [4].

The VLF spectral component is defined by the suprasegmental regulation of the heart rate; the amplitude of the VLF waves is related to psycho-emotional strain and the functional state of the cortex. The genesis of the very low frequencies is still a matter of debate. Most likely, the VLF component is influenced by the suprasegmental centers of autonomic regulation that generate slow rhythms. These rhythms are directed to the heart by the sympathetic nervous system and by humoral factors acting on the sinus node. Biological rhythms in the same frequency band are connected with the mechanisms of thermoregulation, fluctuations of the vascular tone, renin activity, and the secretion of leptin [14]. The similarity of the frequencies implies the participation of these mechanisms in the genesis of the VLF spectral component. There is evidence of an increase of VLF activity in cases of central nervous system damage, anxiety, and depression disorders [15].

The studied quantitative features of the spectral analysis are (i) the spectral powers of the HF, LF, and VLF components; (ii) the total power of the spectrum, TP; (iii) the values of the spectral components normalized by the total power: HFn, LFn, and VLFn; (iv) the LF/HF ratio, also known as the autonomic balance exponent; (v) IC, the index of centralization; (vi) ISCA, the index of subcortical nervous centers activation; (vii) Smax, the maximal power of the spectral components; and (viii) RF, the respiration frequency, that is, the frequency that corresponds to the maximum of the HF spectral component [16].
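The paper does not state which PSD estimator was used for the Fourier-based features; a common choice is Welch's method, sketched here on the 10 Hz resampled NN series. Band edges follow the definitions above; names are illustrative.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"VLF": (0.003, 0.04), "LF": (0.04, 0.15), "HF": (0.15, 0.4)}

def spectral_features(rr10, fs=10.0):
    """Band powers, total power, normalized powers, and LF/HF from the
    NN series resampled at fs Hz (Welch PSD, rectangular band integration)."""
    f, pxx = welch(rr10, fs=fs, nperseg=min(len(rr10), 1024))
    df = f[1] - f[0]
    power = {name: np.sum(pxx[(f >= lo) & (f < hi)]) * df
             for name, (lo, hi) in BANDS.items()}
    tp = sum(power.values())
    out = dict(power, TP=tp, LF_HF=power["LF"] / power["HF"])
    out.update({name + "n": p / tp for name, p in power.items()})
    return out
```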

2.2.4. Wavelet Transform

For nonstationary time series, one can also use the wavelet transform (wt) to study time-frequency patterns simultaneously. The general equation for the continuous wavelet transform is as follows: W(a, b) = (1/√a) ∫ x(t) ψ*((t − b)/a) dt, where a is the scale, b is the shift, ψ is the wavelet basis, and x(t) is the analyzed signal [17].

Moreover, the connection between the scale and the analyzed frequency is in accordance with the following: f = (Fc · Fs)/a, where Fc is the central frequency of the wavelet basis (returned by the centfrq function), Fs is the sampling frequency of the analyzed signal, and f is the analyzed frequency. For the wavelet transform computation in this work, the fifth-order Coiflet wavelet was used [18].
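The scale-frequency relation above can be sketched directly; the central frequency of the fifth-order Coiflet used here (≈ 0.69) is an assumed approximation of what centfrq('coif5') returns, not a value stated in the paper.

```python
FC_COIF5 = 0.69  # approx. central frequency of the 5th-order Coiflet (assumed)

def scale_for_frequency(f, fs=10.0, fc=FC_COIF5):
    """Scale a at which the wavelet analyzes frequency f, from f = fc * fs / a."""
    return fc * fs / f

def frequency_for_scale(a, fs=10.0, fc=FC_COIF5):
    """Inverse mapping: frequency analyzed at scale a."""
    return fc * fs / a
```

For example, the HF band (0.15–0.4 Hz) at Fs = 10 Hz would correspond to scales between roughly scale_for_frequency(0.4) and scale_for_frequency(0.15).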

It is possible to acquire the same spectral features by means of the wavelet transform: (i) the spectral powers of the HF(wt), LF(wt), and VLF(wt) components; (ii) the values of the spectral components normalized by the total power: HFn(wt), LFn(wt), and VLFn(wt); (iii) the LF/HF(wt) ratio.

Additionally, standard deviations SDHF(wt), SDLF(wt), and SDVLF(wt) of the HFwt(t), LFwt(t), and VLFwt(t) time series were tested as features. HFwt(t), LFwt(t), and VLFwt(t) are time series of the HF, LF, and VLF spectral components, respectively, acquired by means of the wavelet transform.

Moreover, one can study informational characteristics of the wavelet transform by analyzing the LF/HF(t) function (Figure 2), the continuous time course of the LF/HF ratio. This function does not have a smooth morphology; its "excursions" (local dysfunctions) vary under functional loads. As features of LF/HF(t), it is possible to use the number of dysfunctions Nd, the maximal value of a dysfunction (LF/HF)max, and the intensity of the dysfunctions (LF/HF)int. By a dysfunction, we mean values of the function that exceed the decision threshold ∆; according to previous studies of our research group, ∆ = 10 [19].

Figure 2: Example of the LF/HF(t) function with the decision threshold.
2.2.5. Nonlinear Feature

As the nonlinear feature in this study, we have used the Hurst exponent calculated by the aggregated variance method. The variance of the aggregated series can be written as follows: Var(X^(m)) ∝ m^(2H − 2), where H is the Hurst exponent, X is the time-series vector, and X^(m) is the series aggregated over blocks of size m.

H can also be defined as the slope exponent in the following equation: S(s) = c · s^H, where S(s) is the standard deviation of the increments corresponding to the time period s, and c is a constant [20].

Note that H > 0.5 corresponds to a process with a trend, a so-called persistent process; conversely, H < 0.5 corresponds to antipersistent processes, which have a tendency toward trend change; and H = 0.5 corresponds to a random process [21].
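A minimal sketch of the aggregated variance estimator described above: block means are formed at several aggregation levels m, and H is recovered from the log-log slope of Var(X^(m)) against m. The block sizes are illustrative choices.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Hurst exponent via the aggregated variance method:
    Var(X^(m)) ~ m^(2H - 2), so H = 1 + slope/2 in log-log coordinates."""
    x = np.asarray(x, dtype=float)
    logs_m, logs_v = [], []
    for m in block_sizes:
        n = len(x) // m
        agg = x[:n * m].reshape(n, m).mean(axis=1)   # block means X^(m)
        logs_m.append(np.log(m))
        logs_v.append(np.log(agg.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return 1.0 + slope / 2.0
```

For uncorrelated noise the block-mean variance decays as 1/m, giving a slope of −2·0.5 = −1 and hence H ≈ 0.5, matching the "random process" case in the text.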

2.2.6. Multifractal Features

As a nonlinear method, we adopted the multifractal detrended fluctuation analysis (MFDFA) [22]. The algorithm and the application features of the MFDFA method for the estimation of short-term time series are described in detail in [23].

The main steps of the method include the following:

(i) The detrending procedure with a second-degree polynomial on nonoverlapping segments, where the length of the segments corresponds to the studied time-scale boundaries. In the current study, we investigated time-scale boundaries that correspond to the LF and VLF frequency bands: 6–25 sec and 25–300 sec, respectively. In our earlier works, and by other authors, it was noted that multifractal analysis of the HF component is not informative because of noise [24]. (ii) Determination of the fluctuation functions F_q(s) for q in the range q = [−5, 5]: F_q(s) = {(1/N_s) Σ [F²(v, s)]^(q/2)}^(1/q), where F²(v, s) is the mean squared deviation from the local trend in the segment v, N_s is the number of segments, and s is the scale. (iii) Estimation of the slope exponent h(q) in the log-log plot of the fluctuation function against the scale s for each q: F_q(s) ∝ s^h(q). (iv) Calculation of the scaling exponent τ(q): τ(q) = q · h(q) − 1. (v) Application of the Legendre transform for the estimation of the probability distribution of the spectrum: α = dτ(q)/dq, f(α) = q · α − τ(q).
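Steps (i)-(iii) above can be sketched compactly; this assumes equal-length nonoverlapping segments and quadratic detrending, and omits q = 0 (which requires a separate logarithmic average in the full method). Scales and q values are illustrative.

```python
import numpy as np

def mfdfa_h(x, scales=(16, 32, 64, 128, 256), qs=(-5, -2, 2, 5)):
    """Minimal MFDFA sketch: generalized Hurst exponents h(q),
    second-degree polynomial detrending, q = 0 excluded."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                   # profile
    F = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        ns = len(y) // s
        segs = y[:ns * s].reshape(ns, s)
        t = np.arange(s)
        f2 = np.empty(ns)
        for v in range(ns):                       # local quadratic trend per segment
            coef = np.polyfit(t, segs[v], 2)
            f2[v] = np.mean((segs[v] - np.polyval(coef, t)) ** 2)
        for i, q in enumerate(qs):                # q-th order fluctuation function
            F[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    # h(q): log-log slope of F_q(s) against s
    return {q: np.polyfit(np.log(scales), np.log(F[i]), 1)[0]
            for i, q in enumerate(qs)}
```

For a monofractal signal such as white noise, h(q) stays close to 0.5 for all q, so the resulting multifractal spectrum is narrow.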

Figure 3 represents the main features of the multifractal spectrum estimated by the MFDFA method. Here, f(α)max is the height of the spectrum; α0, the position of the maximum, represents the most probable fluctuations in the investigated time-scale boundary of the signal; h(q) is the generalized Hurst exponent (also known as the correlation degree); αmin represents the behavior of the smallest fluctuations in the spectrum; αmax represents the behavior of the greatest fluctuations in the spectrum; and W = αmax − αmin is the width of the multifractal spectrum, which shows the variability of the fluctuations in the spectrum. Multifractal characteristics are quantitative measures of self-similarity and may characterize functional changes in the regulatory processes of the organism. In addition, we also tested the so-called 1/2-width measure of the spectrum, defined in [25]. Table 1 presents a summary of all features used in this study.

Figure 3: The features of multifractal analysis.
Table 1: List of studied features.
2.3. Machine Learning Approaches

For the machine learning evaluation, the respective functions of the sklearn library were used [26]. The current paper uses supervised machine learning methods. First, the classifiers are trained on a training dataset, for which the class labels are known. After that, the efficacy of the classification is evaluated on a test dataset, by comparing the true labels of the test set with those predicted by the model.

2.3.1. Discriminant Analysis (DA)

In this work, two variants of the discriminant analysis were tested: linear and quadratic discriminant analysis (LDA and QDA). The LDA aims to find the best linear combination of the input features that properly separates the studied classes. In the case of the QDA, the studied classes are separated by a quadratic function [27].

2.3.2. k-Nearest Neighbors (kNN)

The k-nearest neighbors is one of the nonparametric machine learning approaches. In order to predict the class of an object, the method chooses the class that is most common among the k "neighbors" of the object. The "neighbors" are taken from the training dataset. In the present study, different values of k are tested: 3, 4, and 5 [28].

2.3.3. Support Vector Machine (SVM)

The basic idea of the support vector machine method is the creation of a decision hyperplane that separates the different classes such that the margin between the two nearest points on opposite sides of the hyperplane is maximal. In the present study, the radial basis function (RBF) kernel is used. For the implementation in Python, the kernel and its parameters have to be specified [29].

2.3.4. Decision Trees (DT)

The decision tree classification model is built around a sequence of Boolean queries. The sequence of such queries forms the "tree" structure. In the present work, a variant of the classifier with a fixed value of the maximal tree depth (max_depth = 5) was analyzed. The maximal depth parameter sets the maximal number of queries that are allowed before reaching a leaf, that is, a node that has no "children" [30].

2.3.5. Naive Bayes (NB)

This method is based on the application of Bayes' theorem with the assumption of strong (naive) independence between the features. In the current study, a Gaussian distribution of the data is assumed [31].
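The full set of studied classifiers can be instantiated from sklearn as follows. Only the hyperparameters stated in the text (k = 3, 4, 5; max_depth = 5; RBF kernel; Gaussian NB) are set; everything else is left at sklearn defaults, which is an assumption.

```python
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# The machine learning approaches compared in this study.
classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "kNN (k=3)": KNeighborsClassifier(n_neighbors=3),
    "kNN (k=4)": KNeighborsClassifier(n_neighbors=4),
    "kNN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf"),
    "DT (depth 5)": DecisionTreeClassifier(max_depth=5, random_state=0),
    "NB": GaussianNB(),
}
```

Each entry exposes the same fit/predict interface, so the cross-validation procedure of Section 2.4.2 can loop over this dictionary uniformly.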

2.4. Semioptimal Search of the Noncorrelated Feature Space
2.4.1. Feature Set Selection

In the current investigation, all possible combinations of all features were analyzed. However, it is well known that combining correlated features in machine learning may lead to misleading results. Therefore, the first step in this investigation is to select the uncorrelated combinations. For this task, we compute the correlation coefficient. The whole flowchart of the script for noncorrelated feature combination selection is presented in Figure 4.

Figure 4: Flowchart of the noncorrelated combination selection.

The threshold correlation value was set to 0.25. Usually, a correlation higher than 0.75 is considered high; therefore, a value lower than 0.25 is a good benchmark for low correlation. In the current work, combinations of two to five features were formed. For combinations of more than two features, the correlation was checked pairwise. When all calculations were finished, the noncorrelated feature combinations were saved to a file for later use.
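The selection procedure above can be sketched with numpy and itertools; the pairwise check and the 0.25 threshold follow the text, while the use of the Pearson coefficient and the function name are assumptions.

```python
import numpy as np
from itertools import combinations

def noncorrelated_combinations(X, names, n, threshold=0.25):
    """Return all n-feature combinations whose pairwise |correlation| < threshold.
    X: samples x features matrix; names: the feature labels."""
    r = np.abs(np.corrcoef(X, rowvar=False))   # pairwise |Pearson r|
    ok = []
    for idx in combinations(range(len(names)), n):
        # a combination passes only if every feature pair is below threshold
        if all(r[i, j] < threshold for i, j in combinations(idx, 2)):
            ok.append(tuple(names[i] for i in idx))
    return ok
```

Enumerating all n-combinations of 53 features this way is feasible for n up to 5, which matches the combination sizes studied here.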

Table 2 presents the total number of n-feature combinations of the 53 features for n = [2, 3, 4, 5] and the number of selected noncorrelated combinations. Table 2 shows that such selection leads both to more appropriate results and to a significant reduction of the analyzed set of combinations.

Table 2: Noncorrelated combination selection data.
2.4.2. Cross-Validation

Figure 5 presents a complete flowchart of the implemented algorithm for classifier efficacy evaluation.

Figure 5: Flowchart of classifier efficacy evaluation algorithm.

Cross-validation implies the division of the original dataset into m subsets, where m − 1 subsets are used for the classifier training and the remaining subset is used for the classifier test. The procedure is repeated m times. Such an approach allows one to use the dataset evenly [32].

In the current investigation, the number of random folds m was set to 5. For the implementation of the 5-fold cross-validation, we randomly divided the original dataset into 5 subsets. The division is implemented for both groups simultaneously. As a result, each subset included 6 healthy volunteers and 8 patients diagnosed with hypertension.

Many machine learning methods are sensitive to the selection of the training set; in order to remove such influence, the cross-validation procedure was repeated 100 times with different folds. Repeated cross-validation increases the number of classification accuracy estimates [33].
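A sketch of the repeated stratified 5-fold scheme using sklearn; stratification reproduces the simultaneous division of both groups described above. The helper name and the random seed are illustrative.

```python
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def repeated_cv_score(clf, X, y, n_splits=5, n_repeats=100, seed=0):
    """Mean accuracy over n_repeats runs of stratified n_splits-fold CV,
    preserving the class ratio (healthy/hypertensive) in every fold."""
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=seed)
    return cross_val_score(clf, X, y, cv=cv).mean()
```

Averaging the per-fold scores over the 5 × 100 = 500 train/test splits yields the accuracy estimates reported in Section 3.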

Table 3 presents the calculation times spent by each machine learning approach for different numbers of features in the combinations. The calculation times are given for all noncorrelated combinations. According to Table 3, the fastest approach is the decision trees, and the k-nearest neighbors approach is the slowest.

Table 3: Calculation times of classifier efficacy evaluation, sec.

3. Results

The classifier performance was averaged over the 5 cross-validation folds and over the 100 repetitions. Figures 6–9 show an overview of the classifier performance for all combinations for different numbers of features in a combination. All color bars in Figures 6–9 have the same range: from 50 to 100%.

Figure 6: Classifier score for 2-feature combinations.
Figure 7: Classifier score for 3-feature combinations.
Figure 8: Classifier score for 4-feature combinations.
Figure 9: Classifier score for 5-feature combinations.
Figure 10: Maximal scores achieved by each learning machine approach.
Figure 11: Scores of the PCA achieved by each learning machine approach.

Figure 10 presents the maximal accuracy achieved by each classifier for different numbers of features in the set.

According to the data presented in Figures 6–9, the highest classification accuracy is achieved by the discriminant analysis. Moreover, in Figures 6–9, it can be clearly seen that the discriminant analysis approaches have more combinations with a relatively high score than any other approach. Furthermore, for the support vector machine approach, only a few combinations reach an acceptable classification score.

It is worth mentioning that the classification accuracy generally rises as the number of features in the feature set increases. The maximum is achieved for 4-feature sets; the accuracy for 5-feature sets is lower for all machine learning approaches and drops significantly in the case of the support vector machine.

Table 4 presents the best results achieved by all machine learning approaches for 4-feature sets.

Table 4: Best classification scores.

The data in Table 4 show that linear and quadratic DA not only achieve higher classification scores but also have better stability of the results. The naive Bayes classifier also has a relatively high classification score and a low deviation.

Among the 53 studied features, 36 form combinations that have a classification score higher than 85%. Table 5 presents the occurrences of the features among these combinations. The highest occurrences are noted for spectral features associated with the VLF spectral band, the LF/HF ratio, and the statistical feature heart rate.

Table 5: Features occurrences for classification score higher than 85%.

Table 6 presents the 7 features that form combinations with an accuracy higher than 90%. All these combinations consist of the heart rate, one feature associated with the LF/HF ratio, and two features associated with the VLF spectral band.

Table 6: Feature occurrences for classification score higher than 90%.

4. Discussion

For discussion purposes, a comparison of the results of the current study with the results of one of the commonly used procedures, principal component analysis (PCA), was carried out. PCA is a statistical procedure used to reveal the internal structure of a dataset [34]. In our case, features of different amplitudes are used, and PCA is known to be sensitive to the relative scaling of the feature dataset. Therefore, prior to the PCA application, a standardization procedure was implemented for each of the 53 features: subtraction of the mean value followed by division by the standard deviation.
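The standardization and PCA step can be sketched with sklearn; the function name is illustrative, and the 15-component cap mirrors the number of components reported in Table 7.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def pca_variance(X, n_components=15):
    """Standardize each feature (zero mean, unit SD), then return the
    explained and cumulative variance ratios of the leading components."""
    Xs = StandardScaler().fit_transform(X)       # per-feature standardization
    pca = PCA(n_components=min(n_components, min(Xs.shape)))
    pca.fit(Xs)
    evr = pca.explained_variance_ratio_
    return evr, np.cumsum(evr)
```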

Table 7 presents the explained variance as well as the cumulative variance for the first 15 principal components. The first 10 principal components explain 93% of the variance; subsequent principal components add 1% of the variance or less.

Table 7: Dataset analysis by PCA.

In order to compare the results of the semioptimal search of the noncorrelated feature space with PCA, combinations of the first 10 components were subsequently tested for all machine learning approaches using 100 repetitions of 5-fold cross-validation. Figure 11 presents the maximal classification accuracy achieved by each machine learning approach using combinations of the principal components.

Comparing the results of Figures 10 and 11, one can note that features found by the semioptimal search of the noncorrelated feature space reach higher classification accuracies than combinations of the principal components for all tested machine learning approaches.

5. Conclusions

In this work, various machine learning approaches were tested on the task of arterial hypertension diagnostics. In earlier works, the same datasets were used for the investigation of the linear and quadratic DA methods [11]. The present work compares the DA methods with other machine learning approaches: support vector machine, k-nearest neighbors, naive Bayes, and decision trees.

The results of the current investigation showed that, for the studied task, the discriminant analysis (linear and quadratic) proved to be the most appropriate classifier. These approaches have high classification scores and low deviations over different realizations. A set of four features in a combination seems to be the optimal size, as the classification accuracy is higher and more consistent than for two, three, or five features in a combination.

The prevalence of the VLF and LF/HF spectral features among the best combinations might indicate that the sympathetic nervous system plays an important part in the initiation of arterial hypertension and in the maintenance of the increased vascular tone as well as the increased cardiac output. These results are in accordance with the accepted interpretation of arterial hypertension development [35, 36].

The results of the suggested approach were compared with a dataset prepared by the commonly used procedure of principal component analysis. The n-feature noncorrelated sets achieved higher classification accuracies than those based on the dataset of the selected principal components.

In future works, our research group will continue to improve the results on this problem. One of the planned investigations is to analyze the robustness of classifiers based on multiple signals recorded simultaneously. Other promising directions for future investigation include the use of advanced neural networks [37] and genetic algorithms [38] for feature extraction and classification.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The work was supported by the Act 211 Government of the Russian Federation, Contract no. 02.A03.21.0006, and by the FCT project AHA CMUP-ERI/HCI/0046/2013.

References

1. T. Rosenthal and A. Alter, “Occupational stress and hypertension,” Journal of the American Society of Hypertension, vol. 6, no. 1, pp. 2–22, 2012.
2. X. L. Feng, M. Pang, and J. Beard, “Health system strengthening and hypertension awareness, treatment and control: data from the China health and retirement longitudinal study,” Bulletin of the World Health Organization, vol. 92, no. 1, pp. 29–41, 2014.
3. M. V. Kamath, M. Watanabe, and A. Upton, Heart Rate Variability (HRV) Signal Analysis: Clinical Applications, CRC Press, New York, 2012.
4. I. B. Ushakov, O. I. Orlov, R. M. Baevskii, E. Y. Bersenev, and A. G. Chernikova, “Conception of health: space-earth,” Human Physiology, vol. 39, no. 2, pp. 115–118, 2013.
5. P. Melillo, M. Bracale, and L. Pecchia, “Nonlinear heart rate variability features for real-life stress detection. Case study: students under stress due to university examination,” Biomedical Engineering Online, vol. 10, no. 96, pp. 1–13, 2011.
6. F. Ebrahimi, S. K. Setarehdan, J. Ayala-Moyeda, and H. Nazeran, “Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals,” Computer Methods and Programs in Biomedicine, vol. 112, no. 1, pp. 47–57, 2013.
7. A. H. Khandoker, M. Palaniswami, and C. K. Karmakar, “Support vector machines for automated recognition of obstructive sleep apnea syndrome from ECG recordings,” IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 1, pp. 37–48, 2009.
8. U. Parlitz, S. Berg, S. Luther, A. Schirdewan, J. Kurths, and N. Wessel, “Classifying cardiac biosignals using ordinal pattern statistics and symbolic dynamics,” Computers in Biology and Medicine, vol. 42, no. 3, pp. 319–327, 2012.
9. M. O. Mendez, J. Corthout, S. Van Huffel et al., “Automatic screening of obstructive sleep apnea from the ECG based on empirical mode decomposition and wavelet analysis,” Physiological Measurement, vol. 31, no. 3, pp. 273–289, 2010.
10. V. Kublanov, A. Dolganov, and V. Borisov, “Application of the discriminant analysis for diagnostics of the arterial hypertension - analysis of short-term heart rate variability signals,” presented at the 4th International Congress on Neurotechnology, Electronics and Informatics, pp. 45–52, 2016.
11. V. Kublanov, A. Dolganov, and Y. Kazakov, “Diagnostics of the arterial hypertension by means of the discriminant analysis - analysis of the heart rate variability signals features combinations,” presented at BIOSTEC 2017 - Special Session on Neuro-electrostimulation in Neurorehabilitation Tasks, pp. 291–298, 2017.
12. M. Malik, “Heart rate variability: standards of measurement, physiological interpretation, and clinical use,” Circulation, vol. 93, no. 5, pp. 1043–1065, 1996.
13. R. M. Baevskiy, “Analiz variabelnosti serdechnogo ritma pri ispolzovanii razlichnykh ehlektrokardiograficheskikh sistem (metodicheskie rekomendatsii) [Analysis of heart rate variability using different electrocardiographic systems (guidelines)],” Vestnik Aritmologii [Herald of Arrhythmology], no. 24, pp. 65–87, 2001.
14. F. A. Jain, I. A. Cook, A. F. Leuchter et al., “Heart rate variability and treatment outcome in major depression: a pilot study,” International Journal of Psychophysiology, vol. 93, no. 2, pp. 204–210, 2014.
15. N. B. Haspekova, “Diagnosticheskaya informativnost monitorirovaniya variabelnosti serdechnogo ritma serdca [Diagnostic informativeness of heart rate variability monitoring],” Vestnik Aritmologii [Herald of Arrhythmology], vol. 32, pp. 15–23, 2003.
16. M. Adnane, Z. Jiang, and Z. Yan, “Sleep–wake stages classification and sleep efficiency estimation using single-lead electrocardiogram,” Expert Systems with Applications, vol. 39, no. 1, pp. 1401–1413, 2012.
17. P. S. Addison, “Wavelet transforms and the ECG: a review,” Physiological Measurement, vol. 26, no. 5, pp. R155–R199, 2005.
18. S. Mallat, A Wavelet Tour of Signal Processing, 2009.
19. V. S. Kublanov, “A hardware-software system for diagnosis and correction of autonomic dysfunctions,” Biomedical Engineering, vol. 42, no. 4, pp. 206–212, 2008.
20. D. Rubin, T. Fekete, and L. R. Mujica-Parodi, “Optimizing complexity measures for fMRI data: algorithm, artifact, and sensitivity,” PloS One, vol. 8, no. 5, 2013.
21. B. B. Mandelbrot, “Multifractal power law distributions: negative and critical dimensions and other “anomalies,” explained by a simple example,” Journal of Statistical Physics, vol. 110, no. 3–6, pp. 739–774, 2003.
22. H. E. Stanley, L. A. N. Amaral, A. L. Goldberger, S. Havlin, P. C. Ivanov, and C.-K. Peng, “Statistical physics and physiology: monofractal and multifractal approaches,” Physica A: Statistical Mechanics and its Applications, vol. 270, no. 1-2, pp. 309–324, 1999.
23. E. A. F. Ihlen, “Introduction to multifractal detrended fluctuation analysis in Matlab,” Frontiers in Physiology, vol. 3, pp. 141–150, 2012.
24. D. Makowiec, A. Rynkiewicz, J. Wdowczyk-Szulc, and M. Zarczynska-Buchowiecka, “On reading multifractal spectra. Multifractal age for healthy aging humans by analysis of cardiac interbeat time intervals,” Acta Physica Polonica B Proceedings Supplement, vol. 5, no. 1, pp. 159–170, 2012.
25. D. Makowiec, A. Rynkiewicz, R. Gałaska, J. Wdowczyk-Szulc, and M. Żarczyńska-Buchowiecka, “Reading multifractal spectra: aging by multifractal analysis of heart rate,” EPL Europhysics Letters, vol. 94, no. 6, p. 68005, 2011.
26. F. Pedregosa, G. Varoquaux, A. Gramfort et al., “Scikit-learn: machine learning in python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
27. G. McLachlan, Discriminant Analysis and Statistical Pattern Recognition, vol. 544, John Wiley & Sons, 2004.
28. L. E. Peterson, “K-nearest neighbor,” Scholarpedia, vol. 4, no. 2, p. 1883, 2009.
29. J. A. Suykens and J. Vandewalle, “Least squares support vector machine classifiers,” Neural Processing Letters, vol. 9, no. 3, pp. 293–300, 1999.
30. P. H. Swain and H. Hauska, “The decision tree classifier: design and potential,” IEEE Transactions on Geoscience Electronics, vol. 15, no. 3, pp. 142–147, 1977.
31. K. P. Murphy, Naive Bayes Classifiers, University of British Columbia, 2006.
32. R. Kohavi, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in IJCAI, vol. 14, pp. 1137–1145, 1995.
33. P. Refaeilzadeh, L. Tang, and H. Liu, “Cross-validation,” in Encyclopedia of Database Systems, pp. 532–538, Springer, 2009.
34. I. Jolliffe, Principal Component Analysis, Wiley Online Library, 2002.
35. G. Parati and M. Esler, “The human sympathetic nervous system: its relevance in hypertension and heart failure,” European Heart Journal, vol. 33, no. 9, pp. 1058–1066, 2012.
36. G. Mancia, R. Fagard, K. Narkiewicz et al., “2013 ESH/ESC guidelines for the management of arterial hypertension,” European Heart Journal, vol. 34, no. 28, pp. 2159–2219, 2013.
37. H. B. Demuth, M. H. Beale, O. De Jess, and M. T. Hagan, Neural Network Design, Martin Hagan, 2014.
38. A. Fraser and D. Burnell, Computer Models in Genetics, McGraw-Hill Book Co., New York, 1970.