Abstract

The analyses of the electrocardiogram (ECG) and of heart rate variability (HRV) are of primordial interest for cardiovascular diseases. The algorithm used for the detection of the QRS complex is the basis for HRV analysis, and the quality of the HRV depends strongly on it. The aim of this paper is to implement HRV analysis in real time on an ARM microcontroller (MCU). Thus, there is no need to send raw data to a cloud server for real-time HRV monitoring and, consequently, the communication requirements and the power consumption of the local sensor node are far lower. The system would facilitate the integration into edge computing, for instance, in small local networks such as hospitals. A QRS detector based on wavelets is proposed, which is able to autonomously select the coefficients with which the QRS complex will be detected. To validate it, the MITBIH and NSRDB databases were used. This detector was implemented in real time on an MCU, and subsequently HRV analysis was implemented in the time, frequency, and nonlinear domains. When evaluating the QRS detector with the MITBIH database, a positive predictive value (PP) of 99.61%, a sensitivity (SE) of 99.3%, and a detection error rate (DER) of 1.12% were obtained. For the NSRDB database, the results were a PP of 99.95%, an SE of 99.98%, and a DER of 0.0006%. The execution of the QRS detector on the MCU took 52 milliseconds. On the other hand, the time required to calculate the HRV depends on the data size, but it took only a few seconds to analyze several thousand interbeat intervals. The results obtained for the detector were above 99%, so the HRV measures are expected to be reliable. It has also been shown that the detection of the QRS complex can be done in real time using advanced processing techniques such as wavelets.

1. Introduction

The ECG signal is a well-defined waveform that shows the phases through which the heart passes. The signal represents the polarization and depolarization of the atria and the ventricles (see Figure 1) [1]. With the ECG, doctors can detect heart diseases through the heart rate variability (HRV). Bearing in mind that cardiovascular diseases cause 30% of global deaths, the analysis of the ECG is considered a topic of great interest for researchers [2]. The heart waveform is also often referred to as the QRS complex and is the basis for most of the algorithms used for ECG analysis.

Basically, the detection of the QRS complex consists of detecting the R peaks of the ECG signal, that is, the peaks between the Q and S waves (see Figure 1). The variability of the time between R peaks is the basis for HRV analysis. In the literature, the interval between R peaks is usually referred to as interbeat interval (IBI) [3], normal to normal (NN) [4], or RR interval [5]. From now on, any of them will be used interchangeably. The analysis of the HRV is a noninvasive method that allows analyzing the activity of the autonomic nervous system (ANS). Likewise, it has been found that HRV alterations are linked to cardiovascular diseases [6-8] and that meditation can alter HRV patterns [9]. There are several methods to measure the HRV, but the most common belong to three categories: time domain, frequency domain, and nonlinear. In the time domain, statistical and geometric measures are included [5, 6, 10].

The main problem when analyzing the ECG signal is the noise present due to its susceptibility to interferences such as the power line, RF interferences, and muscle artifacts, which complicates the detection of the QRS complex (see Figure 1). This is why in recent years different types of algorithms have been developed for the elimination of noise and the detection of the QRS complex. Pan and Tompkins were pioneers in this topic; their algorithm consisted of a digital band-pass filter and a dynamic threshold [11]. Subsequently, algorithms with more advanced techniques for the detection of the QRS complex emerged, such as the use of wavelets [5, 12], adaptive filters [13], differential thresholds [14], level-crossing sampling [15], Hidden Markov Models [16], the S-transform [17], and many more. The use of wavelets has allowed the detection of R peaks even in difficult scenarios such as varying QRS morphologies and high levels of noise, and it has obtained the best results. The adaptive filters use methodologies based on the leaky-LMS (LLMS) algorithm of the LMS family. The differential threshold algorithms stand out for their low computational requirements. Level-crossing sampling was tested with a hardware implementation, leading to an ECG-monitoring system with low energy consumption, noise cancelation, and a low input current drawn from the leads. The applications of Hidden Markov Models and the S-transform are in a more experimental phase.

Recent studies have shown that HRV analysis in the frequency domain reveals the activity of the sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS), where the high frequency band (HF: 0.15-0.40 Hz) corresponds mainly to the activity of the PNS and the low frequency band (LF: 0.04-0.15 Hz) corresponds to the activity of the SNS [5]. It has also been found that a reduction of the HRV and an increase in the LF/HF ratio are associated with several cardiovascular diseases [5, 6].

In recent years, eHealth and mHealth services have grown and they are expected to continue growing in order to offer more efficient services to patients [18], thanks to the growth of the Internet of Things (IoT) and the improvement of portable health devices. On the other hand, as smart devices are increasingly involved in people’s lives (for example, fall detection systems, and monitors of physical activity, vital signs, or sleep quality), they require wide bandwidths and low latencies. The use of cloud computing is not recommended in applications that require very low latencies between the data sources and the processing unit [19]. Some specific examples try to define strategies to overcome the associated problems. For instance, Gonzalez-Landero et al. [20] developed an intelligent heart rate tracking system that predicts the hours in which the heart rate is high. The heart rate is then measured at a high frequency (every minute) at certain moments and at a low frequency (every 10 minutes) at others, which saves energy in the communication. However, this cannot be generalized to any kind of measurement, since the sampling requirements may be higher or changing the communication rate may not be an option. For HRV, the sampling frequency must be very high (500 Hz is recommended) and the possible reduction of the communication implies computation in a local node, which is the approach proposed in this paper.

To solve the problems of cloud computing, a new processing paradigm has emerged: edge computing. In contrast to cloud computing, in edge computing the data generated by the device are processed at the network edge instead of being transmitted to a centralized cloud, resulting in very low latencies and lower bandwidth requirements [19, 21]. These characteristics make edge computing the most suitable technique for many eHealth and mHealth applications in which sending raw data would not be feasible. Health applications are one of the typical areas of edge computing [22]. In [23], real-time signal processing algorithms are proposed to be implemented in a local node, closer to the sensing environment. They are responsible for all the real-time processing of the health-related collected data to enable a set of personalized services. The proposed scenario includes applications for gas leak detection, fall detection, and detection of abnormal pulse and oxygen levels. Sometimes the computation in local nodes requires algorithms that are effective yet simple enough to be run on low-end processors. In [24], a wireless acoustic sensor for ambient assisted living is proposed in keeping with the philosophy of edge computing. The proposed sensor is able to record audio samples at a sampling frequency of at least 10 kHz and is capable of doing audio signal processing without compromising the sample rate or the energy consumption.

The aim of this paper is to propose a portable system capable of performing a real-time analysis of the HRV using an ARM microcontroller. The adopted solution is energy efficient because it avoids the communication of raw data. Within this aim, we have developed and improved a QRS complex detector based on wavelets. This detector has the capability of autonomously selecting the coefficients used to detect the R peaks. The implementation on the MCU required the optimization and improvement of the functions for HRV analysis. The proposed device is designed to be used in a portable way in small local networks, such as hospitals, where the advantages offered by edge computing become apparent, especially regarding privacy, in addition to the real-time analysis of patients. In this way, the quality of mHealth services could be increased. The system could also be used in applications for remote HRV monitoring like those in [25] or [26].

The QRS detector proposed in this work is an extension of a paper presented at the International Conference on Biomedical Engineering and Applications (ICBEA) [27]. The differences with respect to [27] are the following: (i) in the current paper the detector is analyzed in more detail (block diagram of the detector, more variable symbols so that the text can be followed easily, more figures, and the full set of conditions to find an R peak); (ii) in addition, the HRV is also measured in the current work; (iii) the optimization of the algorithms for a lower RAM consumption is described, giving the possibility of creating applications in embedded systems with limited resources and achieving real-time capability.

2. Materials and Methods

2.1. Complex QRS Detector Algorithm

The proposed algorithm consisted of three stages. The first (preprocessing) was responsible for filtering and adjusting the signal for the detector. In the second stage the detector itself was implemented, which decided whether the found peak was an R peak or not. And finally, in the third stage an adaptive threshold was built, updating its level with the last peak found. Figure 2 shows a block diagram of the complex QRS detector algorithm.

(a) Preprocessing Stage. The first step in this stage was to apply the Discrete Wavelet Transform (DWT). This tool is based on the decomposition of a signal into subbands by means of a pair of digital filters (a low pass and a high pass filter). The outputs of the low pass filter are named approximation coefficients (cAj), while the outputs of the high pass filter are named detail coefficients (cDj), where j represents the level of the subband. This process of decomposition through filtering is repeated N times, and in each iteration the signal is subsampled by a factor of 2. In practice, the DWT is implemented with the Mallat pyramid algorithm [28]. Some studies have shown that the fourth-order Daubechies wavelet is one of the most effective when processing ECG signals [29]. Afterwards, the energy percentage of each level was calculated (see (1)-(3) and Figure 3), and the four levels with the highest energy were selected to reconstruct the signal (Figure 4). In this way it was ensured that the levels with the most information about the ECG signal were selected, because noise and interferences such as those of the electrical network or muscle artifacts are usually found at levels with low energy (high frequencies, generally the first detail levels) (see Figure 3). Therefore, using the energy of the levels to discriminate the noise from the QRS complex was a good option. Finally, the offset of the signal was removed by leaving out the approximation coefficients (cA7). In the present study a db4 mother wavelet with 7 levels of decomposition was selected. The number of levels was chosen because the data were processed in buffers of 1024 samples, and thus the maximum number of iterations allowed was 7. In (1)-(3), ET is the total energy of all the cDj, Ej is the energy of each cDj, Pj is the percentage of energy of each cDj, N is the number of decomposition levels, k is the index of the coefficients within each cDj, Lj is the length of cDj, and Nmax is the maximum number of decomposition levels.
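Equations (1)-(3) are not reproduced in this text; a plausible LaTeX reconstruction consistent with the definitions above is the following (the symbols E_j, E_T, P_j, cD_j, and L_j follow our own notation):

\[
E_j = \sum_{k=1}^{L_j} cD_j[k]^2, \qquad
E_T = \sum_{j=1}^{N} E_j, \qquad
P_j = \frac{E_j}{E_T} \times 100\%.
\]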

The next step was to reconstruct the signal using only the coefficients of the 4 detail levels with the highest energy (see Figure 4). Then, the first difference was applied (4) and the result was squared to emphasize the R peaks (5). Finally, by means of an average filter, the signal was smoothed using a window of 0.2 s (see (6)). The size of the average filter is an important factor: if the window is too wide, the filter will merge the QRS and T complexes; if it is too narrow, the QRS complex will produce several peaks and its detection can become difficult. Generally, the window should be approximately as wide as the QRS complex [11].
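A minimal C sketch of these three operations (first difference, squaring, and 0.2 s moving-average smoothing) is given below; the function and buffer names (emphasize_qrs, sq, sm) are illustrative and not taken from the actual firmware:

void emphasize_qrs(const float *x, float *sq, float *sm, int n, float fs)
{
    int win = (int)(0.2f * fs);   /* 0.2 s window, e.g. 72 samples at 360 Hz */
    float d, run = 0.0f;
    int i;

    for (i = 0; i < n; i++) {
        d = (i > 0) ? (x[i] - x[i - 1]) : 0.0f;   /* first difference (4) */
        sq[i] = d * d;                            /* squaring (5)         */
    }
    for (i = 0; i < n; i++) {                     /* moving average (6)   */
        run += sq[i];
        if (i >= win)
            run -= sq[i - win];
        sm[i] = run / (float)win;   /* the first bins are only partially filled */
    }
}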

(b) QRS Detector Stage. The detector stage required two inputs for its operation: the reconstructed signal (which contains the ECG information) and the smoothed signal, which serves as a basis to find the possible location of the QRS complex (see Figure 5). The smoothed signal had to be shifted by half the averaging window plus one sample (see (7)) so that both signals coincided in the location of the peaks, because the average filter introduces a delay in the signal. The steps followed by the detector are as follows:

(i) To calculate the initial threshold using the smoothed signal. The threshold was set to 15% of the maximum peak located in the first 0.2 s of the smoothed signal.

(ii) To find a peak that exceeded the threshold in the smoothed signal. When this occurred, its index was stored.

(iii) The next step was to find an R peak in the reconstructed signal using that index. A window of 0.4 s was selected around it; the window size was set to 0.4 s because this detector is limited to a range of 40 to 150 BPM, where 0.4 s corresponds to 150 BPM. The maximum peak inside the window was then searched for using (8), and the peak obtained in this way was a candidate for an R peak (a sketch of this search is given after the list). To determine whether the candidate was a true R peak, two criteria were followed: (a) for the first 10 peaks, the time interval between the current peak and the previous one had to be between 0.4 s and 1.5 s; (b) for the rest of the peaks (more than 10), the average of all the previous intervals was taken, and if the current interval was greater than 60% of that average, without being greater than 1.5 s, the current peak was considered an R peak. With this, we avoided confusing the R peak with the T or P waves, which sometimes have similar amplitudes.

(iv) Finally, the index at which the R peak was found was multiplied by the sampling period to obtain the time at which it had occurred, which was then stored. The amplitude of the detected peak will be used in the next stage (adaptive threshold).
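The following C fragment sketches step (iii) under our own naming conventions (recon, idx, mean_rr, and the function name are illustrative; the special handling of the first 10 beats is omitted):

int find_r_candidate(const float *recon, int n, int idx, float fs,
                     float last_r_time, float mean_rr, float *r_time)
{
    int half = (int)(0.2f * fs);                 /* half of the 0.4 s window  */
    int start = (idx - half > 0) ? (idx - half) : 0;
    int end = (idx + half < n) ? (idx + half) : (n - 1);
    int i, imax = start;
    float interval;

    for (i = start + 1; i <= end; i++)           /* maximum in the window (8) */
        if (recon[i] > recon[imax])
            imax = i;

    *r_time = (float)imax / fs;                  /* sample index -> seconds   */
    interval = *r_time - last_r_time;

    /* accept the candidate if the interval is plausible: larger than 60 % of
       the running mean RR and not larger than 1.5 s (40 BPM) */
    if (interval > 0.6f * mean_rr && interval <= 1.5f)
        return imax;                             /* confirmed R-peak index    */
    return -1;                                   /* candidate rejected        */
}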

(c) Adaptive Threshold. In this stage the threshold changed its value according to the following two conditions:

(i) If an R peak was confirmed, its amplitude was used to update the threshold using (9).

(ii) If no R peak was found after a time greater than 1.5 s (corresponding to 40 BPM) starting from the last R peak, the current threshold was stored and then reduced by 10%, and the search from the last R peak was restarted. This reduction could be repeated up to 3 times. If no R peak was found afterwards, the threshold recovered its stored value and the algorithm continued with the search for more peaks without returning to the previous index. With this process we could detect R waves of smaller amplitude, which might not exceed the threshold because the previous peak had a very large amplitude (see Figure 6). A simplified sketch of this logic is shown below.
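The following C sketch illustrates the two conditions above. Since (9) is not reproduced in this text, the update rule used for a confirmed peak is only an assumption, and all the names are ours:

typedef struct {
    float thr;        /* current threshold                       */
    float thr_backup; /* value stored before the 10 % reductions */
    int   reductions; /* how many 10 % reductions were applied   */
} adaptive_thr_t;

void on_r_peak_confirmed(adaptive_thr_t *a, float peak_amp)
{
    /* illustrative version of (9): move the threshold towards a fraction
       of the last confirmed peak amplitude */
    a->thr = 0.15f * peak_amp + 0.85f * a->thr;
    a->reductions = 0;
}

void on_timeout_without_r(adaptive_thr_t *a)
{
    if (a->reductions == 0)
        a->thr_backup = a->thr;      /* remember the original threshold */
    if (a->reductions < 3) {
        a->thr *= 0.9f;              /* reduce by 10 % and search again */
        a->reductions++;
    } else {
        a->thr = a->thr_backup;      /* give up and restore the value   */
        a->reductions = 0;
    }
}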

2.2. HRV Analysis

Table 1 shows the parameters obtained in the different categories of HRV analysis methods. The following subsections explain each category, with emphasis on the nontrivial ones and on the details required to reproduce our results.

2.2.1. Preprocessing for HRV Analysis

This section is devoted to the set of operations required to eliminate ectopic beats, which are known to produce erroneous measures in the HRV analysis if they are not removed. This preprocessing is performed before the calculation of any of the parameters shown in Table 1. We used the method proposed in [30], following these steps:

Step 1. The linear trend of the RR vector was removed using (10). For that purpose, the line that best fits the data was calculated by least squares, whose coefficients were obtained by solving the resulting system of equations.

Step 2. The standard deviation and the mean of the detrended RR vector were calculated.

Step 3. A threshold was set in order to find the ectopic beats. In this case it was equal to three times the standard deviation.

Step 4. Finally, to find the ectopic beats, the mean was subtracted from the absolute value of each detrended RR interval. Ectopic beats were those for which the result was higher than the threshold; in those cases the sample was substituted by the mean of the five preceding and the five following samples. A C sketch combining Steps 2-4 is given below.
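The following sketch illustrates Steps 2-4 under our own naming (remove_ectopic, d for the detrended copy produced in Step 1); it is an illustration of the procedure described above, not the original firmware code:

#include <math.h>

void remove_ectopic(float *rr, const float *d, int n)
{
    /* d[] is the linearly detrended copy of rr[] (Step 1, see Code 2) */
    float mean = 0.0f, sd = 0.0f, thr, acc;
    int i, j, k, cnt;

    for (i = 0; i < n; i++) mean += d[i];
    mean /= (float)n;                                   /* Step 2: mean */
    for (i = 0; i < n; i++) sd += (d[i] - mean) * (d[i] - mean);
    sd = sqrtf(sd / (float)(n - 1));                    /* Step 2: SD   */
    thr = 3.0f * sd;                                    /* Step 3       */

    for (i = 0; i < n; i++) {                           /* Step 4       */
        if (fabsf(d[i]) - mean > thr) {
            acc = 0.0f; cnt = 0;
            for (j = -5; j <= 5; j++) {                 /* 5 before, 5 after */
                k = i + j;
                if (j != 0 && k >= 0 && k < n) { acc += rr[k]; cnt++; }
            }
            if (cnt > 0) rr[i] = acc / (float)cnt;      /* replace the beat */
        }
    }
}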

2.2.2. Analysis of HRV in the Time Domain

After the ectopic beats were removed, the RR vector was converted from seconds to milliseconds (ms) and the statistical parameters shown in Table 1 were calculated, which correspond to widely known statistical measurements (means and standard deviations).

2.2.3. Analysis of HRV in the Frequency Domain

The analysis in the frequency domain had three stages: (a) preprocessing: the signal went through a series of filters to eliminate its offset; (b) interpolation: the signal without offset was interpolated at 4 Hz; (c) spectral analysis: the power spectral density (PSD) was calculated using the Welch method.

Preprocessing

Step 1. It began by smoothing the RR vector. Some authors have shown that the smoothness priors filter is usually very effective for bioelectric signals such as the ECG [37, 38]. It is even used in commercial software such as Kubios for HRV analysis [39]. However, its algorithm requires too much RAM to be implemented on an MCU. Therefore, in this paper we opted to do the filtering using wavelets. The wavelet filtering was done by eliminating the detail coefficients: a wavelet with four decomposition levels was used and, after all the detail coefficients had been eliminated, the signal was reconstructed. Finally, using (12), the signal was smoothed. The purpose of this filtering process was to remove any low-frequency disturbance that affects the RR intervals.

Step 2. Calculate the time vector, which will be used to perform the interpolation. The time vector is the cumulative sum of the RR vector minus its first value (13).

Interpolation. The time vector was converted to ms. Then, the interpolation of the RR series was carried out at 4 Hz using the cubic spline algorithm (14). Once the signal was interpolated, its mean was subtracted (15).

Calculation of the Spectral Density. To calculate the PSD with the Welch method, the implemented algorithm used the fast Fourier transform (FFT) with a 256-point window and an overlap of 128 points. Before applying the FFT to each window, the data were smoothed by multiplying them by a Hamming window of the same width. In this way, abrupt discontinuities at the beginning and end of each window were avoided.
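The equation of the Welch estimate is not reproduced here; a common form consistent with the description above is the following (our notation: L = 256 is the window length, w the Hamming window, x_m the m-th overlapping segment of the interpolated signal, K the number of segments, and f_s = 4 Hz the interpolation frequency):

\[
P(f_k) = \frac{1}{K} \sum_{m=1}^{K}
\frac{\left| \sum_{n=0}^{L-1} w[n]\, x_m[n]\, e^{-j 2 \pi k n / L} \right|^{2}}
{f_s \sum_{n=0}^{L-1} w[n]^{2}}.
\]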

Obtaining the Parameters in the Frequency Domain. After computing the PSD of the signal, the area under the curve was calculated for each frequency band associated with the HRV: the VLF band (0 to 0.04 Hz), the LF band (0.04 to 0.15 Hz), and the HF band (0.15 to 0.4 Hz). Once the areas for each band were calculated, the LF/HF ratio was obtained by dividing the total area of LF by the total area of HF (16). The normalized LF and HF were calculated with (17) and (18).
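A minimal C sketch of the band integration is shown below. The normalized units are computed as percentages of LF + HF, which is a common convention that we assume corresponds to (17) and (18); all the names (band_power, df, P) are illustrative:

float band_power(const float *P, int n, float df, float f_lo, float f_hi)
{
    float area = 0.0f;
    int k;
    for (k = 0; k < n; k++) {
        float f = (float)k * df;              /* frequency of bin k */
        if (f >= f_lo && f < f_hi)
            area += P[k] * df;                /* rectangle rule     */
    }
    return area;
}

/* usage sketch, with df = 4.0f / 256.0f (interpolation frequency / NFFT): */
/* float vlf   = band_power(P, n, df, 0.00f, 0.04f);                       */
/* float lf    = band_power(P, n, df, 0.04f, 0.15f);                       */
/* float hf    = band_power(P, n, df, 0.15f, 0.40f);                       */
/* float lf_hf = lf / hf;                            // ratio (16)         */
/* float lf_nu = 100.0f * lf / (lf + hf);            // assumed (17)       */
/* float hf_nu = 100.0f * hf / (lf + hf);            // assumed (18)       */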

2.2.4. Poincaré Analysis

The Poincaré analysis is a graphical method that evaluates the dynamics of the HRV by plotting each RR interval (RRn) against the next one (RRn+1).

To perform a quantitative analysis and evaluate the HRV, an ellipse was fitted to the data (see Figure 7). The width of the ellipse is known as the standard deviation 1 (SD1) and its length as the standard deviation 2 (SD2). From SD1 and SD2, the area of the ellipse was calculated [6]. SD1 represents the HRV in short times (short term) and is correlated with the ANS, while SD2 represents long periods (long term) and is correlated with the SNS [40]. Due to the correlation between the standard deviation of the successive RR interval differences (SDSD) (see (21)) and SD1, SD1 was calculated using (22)-(24). From SDSD and the standard deviation of the RR intervals (see (23)), SD2 was calculated with (25), the area of the ellipse with (26), and finally the SD1/SD2 ratio with (27) [41].
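Equations (22)-(27) are not reproduced in this text; the relations commonly used in the Poincaré literature, which we assume are the ones applied, are the following (SDNN is the standard deviation of the RR intervals):

\[
SD1 = \sqrt{\tfrac{1}{2}\, SDSD^{2}}, \qquad
SD2 = \sqrt{2\, SDNN^{2} - \tfrac{1}{2}\, SDSD^{2}},
\]
\[
S = \pi \cdot SD1 \cdot SD2, \qquad
\mathrm{ratio} = \frac{SD1}{SD2}.
\]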

In the Poincaré plot, the healthy case is represented by a large ellipse area, whereas critical diseases yield a small one. To perform the Poincaré analysis, periods between 5 and 20 minutes are recommended [6]. On the other hand, [42, 43] observed that the lowest values are present in healthy subjects; in the same way, a small SD1 value in diseased subjects indicates a weakening of the parasympathetic regulation caused by the health disorder, and as SD1 decreases, the SNS activity increases. Figure 7 shows an arrhythmia case from signal 215 (MITBIH), in which it can be observed that most of the samples are concentrated in the center of the plot, resulting in a small ellipse area.

2.2.5. Triangular Geometric Analysis

Triangular geometric analysis is usually considered part of the analysis in the time domain; however, it deserves further explanation since it is more complex than the rest. This analysis is done by calculating the histogram of the RR vector, from which two parameters can be obtained: the triangular index and the triangular interpolation of the histogram base (TINN). The triangular index is equal to the total number of RR intervals divided by the maximum value of the histogram (28), and it gives an overall estimate of the HRV. The width of the bins used in this analysis was set to 7.8125 ms. The TINN index characterizes the distribution of the density of all the RR intervals as the base of a triangle and is usually calculated by means of a least-squares estimate. Thus, the fitting error (see (29)) was minimized and the limiting points N and M were found; then, TINN was calculated with (30); see Figure 8 [44]. In these expressions, the histogram is the output of the binning, the triangle that fits the histogram is zero outside the base defined by the points N and M, and N and M are the limits of that base.
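For reference, the usual definitions, which we assume correspond to (28) and (30), are the following (H is the histogram built with 7.8125 ms bins and max(H) its highest bin):

\[
HRV_{Ti} = \frac{\text{total number of RR intervals}}{\max(H)}, \qquad
TINN = M - N \ \text{(ms)},
\]

where N and M are the base points of the best-fitting triangle obtained by least squares.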

2.3. Parameters and Databases Used to Evaluate the Performance of QRS Detector

In order to compare the performance of the QRS detector with those found in the literature, we used the sensitivity (SE) (see (31)), the positive predictive value (PP) (see (32)), and the detection error rate (DER) (see (33)); the DER was also used to evaluate the accuracy of the algorithm. In these expressions, TP are the true positives in the detection of R peaks, FN are the R peaks that have not been detected, and FP are the peaks that have been mistakenly detected as R peaks.
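Equations (31)-(33) are not reproduced in this text; we assume the standard definitions:

\[
SE = \frac{TP}{TP + FN} \times 100\%, \qquad
PP = \frac{TP}{TP + FP} \times 100\%, \qquad
DER = \frac{FP + FN}{\text{total number of beats}} \times 100\%.
\]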

In order to validate and compare our proposed algorithm for QRS complex detection, the MIT-BIH arrhythmia database (MITBIH) and the normal sinus rhythm database (NSRDB), both available online, were used [45]. Then, the results obtained were compared with those of other algorithms to assess their effectiveness. The MITBIH database consists of 48 recordings. Each recording has two signals extracted from half an hour of a 24-hour recording, sampled at 360 Hz; the recordings belong to 47 patients in total. The NSRDB database contains 18 signals with a duration of 130 minutes, sampled at 128 Hz, belonging to healthy adults aged 20 to 50 years.

2.4. Implementation on an MCU

The MCU used for the implementation of the QRS detector and the HRV analysis was an STM32F407ZET6. Two features can be highlighted for the present study: it has a set of instructions for digital signal processing (DSP) and a floating point unit (FPU), which make it capable of performing advanced digital processing calculations. Likewise, it has 512 Kbytes of flash program memory, 192 Kbytes of SRAM, and a working frequency of 168 MHz.

The DSP libraries provided by CMSIS (Cortex Microcontroller Software Interface Standard) were used for the signal processing and the open source wavelib library [46] was used for the wavelet implementation. The wavelib library had to be modified and optimized in terms of RAM resources to be used in an MCU.

The MCU accessed the MITBIH and NSRDB databases through a microSD memory card, in which each recording was stored in a separate text data file. The recordings were processed in real time in blocks of 1024 samples. The block size was chosen for two reasons: first, because this number of samples is enough to apply the DWT with 7 levels of decomposition; and second, because the amount of RAM required is constant (25 kB) and does not compromise the rest of the calculations. It is worth explaining that, for a real-life application, the MCU could acquire data in real time from two external sources, selecting only one at a time. The first one would involve a serial port through the HC-05 Bluetooth module at a baud rate of 230400. The second one could use the AD8232 module through the analog-to-digital converter (ADC) with a sampling frequency of 500 Hz. In both cases interrupts could be used, being activated whenever a new sample is available; thus, even while the signal is being processed, the MCU can continue to fill new buffers for further processing. To display the data locally, two SSD1306 LCD screens of 128x64 pixels were incorporated using the I2C protocol. Examples of the information displayed on the LCDs are shown in Figures 9 and 10: Figure 9 shows the data displayed in real time, while Figure 10 shows the analysis in the frequency domain for signal 215 (MITBIH database).
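The interrupt-driven acquisition described above could be organized, for instance, with two alternating (ping-pong) buffers. The following C sketch is only an illustration under that assumption; none of the names (ecg_new_sample_isr, qrs_detector_process, BLOCK_SIZE) correspond to the actual firmware:

#include <stdint.h>

#define BLOCK_SIZE 1024

static volatile float    ecg_buf[2][BLOCK_SIZE]; /* two alternating buffers   */
static volatile uint16_t write_idx = 0;          /* position in active buffer */
static volatile uint8_t  active = 0;             /* buffer being filled       */
static volatile uint8_t  buffer_ready = 0;       /* flag for the main loop    */

/* Called from the UART/ADC interrupt whenever a new sample arrives. */
void ecg_new_sample_isr(float sample)
{
    ecg_buf[active][write_idx++] = sample;
    if (write_idx == BLOCK_SIZE) {               /* buffer full: swap buffers */
        write_idx = 0;
        active ^= 1U;
        buffer_ready = 1;                        /* signal the main loop      */
    }
}

/* Main loop: process the full buffer while the other one keeps filling. */
void main_loop_step(void)
{
    if (buffer_ready) {
        buffer_ready = 0;
        /* qrs_detector_process() stands for the 52 ms block processing
           described in the paper (placeholder name):
           qrs_detector_process((float *)ecg_buf[active ^ 1U], BLOCK_SIZE); */
    }
}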

To reduce RAM consumption when processing the signal, the following programming techniques were applied:

(i) Avoiding the creation of arrays or matrices within functions: it is better to use loops and perform the corresponding calculations in each iteration.

(ii) If possible, using the input variables as output variables in functions, so that new variables are not needed.

(iii) Making use of structures and unions to pack data and take control of system flags: in this way the amount of RAM required is fully utilized. For example, in Code 1 it can be seen that less RAM is required for the same number of flags when structures and unions are used.

typedef union
{
    uint8_t AllFlags;
    struct
    {
        unsigned flag_0:1;
        unsigned flag_1:1;
        unsigned flag_2:1;
        unsigned flag_3:1;
        unsigned flag_4:1;
        unsigned flag_5:1;
        unsigned flag_6:1;
        unsigned flag_7:1;
    };
} DeviceStatusFlags;

DeviceStatusFlags Flags;
/* Used bytes in RAM: 1 */

uint8_t flag_0;
uint8_t flag_1;
uint8_t flag_2;
uint8_t flag_3;
uint8_t flag_4;
uint8_t flag_5;
uint8_t flag_6;
uint8_t flag_7;
/* Used bytes in RAM: 8 */

For example, in Code 2 it is assumed that the input variable has a size of 2000 samples. In Matlab, 64047 bytes are required to perform the same calculation (removing the linear trend), since it is done by means of arrays, whereas the algorithm of Code 2 occupied only 48 bytes in the MCU (11 float32_t variables and one int variable, each requiring 4 bytes), since the result was stored in the same input variable.

#include <math.h>        /* isnan, isinf     */
#include "arm_math.h"    /* float32_t        */

void arm_lineal_detrend_f32(float32_t *y, int size)
{
    float32_t a, b, c, d, e, f, x1, x2, C1, C2, determinant;
    int i;

    a = 0; b = 0; c = 0; d = 0; e = 0; f = 0;
    x2 = 1;
    determinant = 0;

    for (i = 0; i < size; i++)
    {
        x1 = ((float32_t)i + 1) / (float32_t)size;   /* create a slope */
        a = a + (x1 * x1);
        b = b + x1;
        c = c + x1;
        d = d + x2;
        e = e + (x1 * y[i]);
        f = f + (x2 * y[i]);
    }

    determinant = a * d - b * c;
    if (isnan(determinant) || isinf(determinant))
    {
        return;
    }
    else
    {
        C1 = (e * d - b * f) / determinant;  /* resolve the system              */
        C2 = (a * f - e * c) / determinant;  /* to find the coefficients C1, C2 */
    }

    for (i = 0; i < size; i++)
    {
        x1 = ((float32_t)i + 1) / (float32_t)size;
        y[i] = y[i] - (x1 * C1 + C2);        /* remove the linear trend */
    }
}
The algorithms optimized following the ideas shown in Code 2 were the average filter, the peak detection and the threshold updating in the QRS detector, the calculation of the statistical parameters, the application of the DWT smoothing (see (12)), the linear trend removal (see (10)), and the calculation of the PSD (see (15)).

3. Results

The results of Table 2 correspond to the performance of the QRS detector when evaluated with the MITBIH database; the table lists the number of detected beats and the total number of beats in each ECG signal. It can be observed that 109086 beats out of a total of 109494 were detected: 108713 were true positives, 781 were false negatives, and 373 were false positives. In addition, the QRS detector showed an SE of 99.30%, a positive predictive value of 99.61%, and a detection error rate of 1.12%.

Table 3 shows the comparison between the different algorithms found in the literature and ours, using the MITBIH database. The results we have obtained are slightly worse; however, the difference with respect to the best algorithm [34] is only 0.57%, the difference with respect to [35] is 0.30%, and the difference with respect to [36] is only 0.84%.

Table 4 shows the comparison between the results obtained by our algorithm and those obtained by [32] using the NSRDB database. In this comparison, [32] obtains better results than ours in SE and PP by 0.01%, while its DER is only 0.0002% better.

The time required by the MCU for the HRV analysis varies according to the size of the IBI vector. Table 5 shows the execution times that were required to analyze signal 215 of the MITBIH database. This particular file was chosen because it contains more QRS complexes than the rest of the signals (3358 found by our algorithm).

On the other hand, the execution times for the QRS detector are always the same (47 ms for the detection and 5 ms for saving on the microSD card the temporal locations of the R peaks found) because the input vector has a constant size (1024 samples).

The time required to fill the 1024-sample buffer is 1024/fs, where fs is the sampling frequency. In the case of the MITBIH database, fs is 360 Hz, so it takes 2.84 s to fill the buffer; in the case of the NSRDB, fs is 128 Hz, so it takes 8 s. In both cases the detection of the QRS complex can be done in real time, since it only takes 52 ms. The detector may operate in real time up to a maximum sampling frequency of about 19.5 kHz. However, some studies show that 500 Hz is a very reliable frequency for HRV analysis [47], so our system can easily achieve real time for this recommended frequency.

Although the times required for the HRV analysis vary depending on the size of the IBI vector, Table 5 shows that the smoothing of the signal using wavelets and the analysis in the frequency domain required the most execution time, whereas the Poincaré analysis was the fastest. The time required for the complete HRV analysis of signal 215 (MITBIH database) was 4.401 s.

Refreshing the LCD screens took 50 ms and it was done while the buffer was filling, so that the update times of the screens had no influence on the calculations of the HRV.

The execution time on a regular PC, with the processing performed in Matlab, is also shown in Table 5 for comparison. It is lower than in the MCU implementation because of the more powerful processor. The idea is not to compete with a PC server in this aspect, but to avoid transmitting data to a server in the cloud, which would be a great burden for the network and would imply a higher power consumption for the sensor node. For instance, the proposed system has the following power consumption contributions: 4.2 mA for the MCU in sleep mode, 4.9 mA while processing a signal from the microSD card, 7.5 mA while processing signals from the AD8232, and 16.9 mA while processing signals from Bluetooth (HC-05 module). Thus, it is clear that avoiding the communication of raw data is of great interest.

4. Conclusions and Future Work

The results obtained with the QRS detector that we have proposed, based on wavelets and with automatic selection of the detail coefficients, have been higher than 99% in SE and PP using the MITBIH database. They are worse than those of previous studies, but the difference does not exceed 0.9% in any of the parameters used in that comparison (SE, PP, and DER). On the other hand, using the NSRDB database the results have been better (SE of 99.98%, PP of 99.95%, and DER of 0.0006%); in comparison with [32], SE and PP have been only 0.01% lower.

The QRS detection algorithm was followed by an HRV analysis. In both stages, several algorithms had to be optimized for the implementation on an MCU. Thanks to the efficient use of RAM, it has been possible to develop the whole HRV analysis as a standalone application embedded on an ARM MCU.

With the execution times shown in Table 5, it has been shown that the QRS detector is capable of running in real time at the most common frequencies used in ECGs. Likewise, given that the time-domain parameters require very little time for their execution, they could be displayed by sending them via Bluetooth or WiFi to a mobile application; in this way, an HRV analysis would be done in real time. In addition, the energy consumption is low, which facilitates the integration of the system into portable devices.

The time required for the complete analysis of the HRV is variable (it depends on the amount of data), but it is in the order of seconds for practical purposes, making the application viable.

Our system could also help in applications like the one shown in [25], in which HRV and location are measured to evaluate wellness and recommend a place to live accordingly. All the RR intervals are sent to the user’s mobile phone, and from it to a server in which a time-domain HRV parameter is calculated. That system could be improved if the wearable device itself measured the HRV, avoiding the battery-consuming communication with the mobile phone. Besides, our system can calculate all the HRV parameters and could improve the determination of the user’s wellness. Another application of our system could be its integration into the system for monitoring asthma presented in [26], in which the heart rate is also a key component.

This system developed on an ARM MCU could be complemented with a mobile application that displays the results of the HRV analysis in a way more suitable for the user, showing comparisons or statistics with respect to previous analyses. In this way the user would get better control over his/her HRV.

As future work, several lines are envisioned. The first is to improve the QRS detector in order to obtain better results than those shown in Tables 3 and 4. The second is the optimization of the proposed system code to integrate it in the most efficient way into an IoT network; it can be tested using WiFi, ZigBee, or Bluetooth networks for short ranges. For instance, it could receive information from a network edge and then send back to a server only the HRV results to be displayed. This will help to improve mHealth and eHealth services using systems embedded in microcontrollers. On the other hand, we will work to make the proposed device as small as possible, similar to a smart band, so that it will be able to track heart rate variability during daily activity.

Data Availability

The data supporting this study are from previously reported studies and datasets, which have been cited or are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank the “Fondo Social Europeo” and the “Diputación General de Aragón” (reference group EduQTech: T49_17R). Victor H. Rodriguez acknowledges a grant from “CONACYT-Gobierno del Estado de Durango, México 330795/386043”.