Journal of Healthcare Engineering

Special Issue: Multiple Criteria Decision-Making Approaches for Healthcare Management Applications

Research Article | Open Access

Fangwan Huang, Xiuyu Leng, Mohan Vamsi Kasukurthi, Yulong Huang, Dongqi Li, Shaobo Tan, Guiying Lu, Juhong Lu, Ryan G. Benton, Glen M. Borchert, Jingshan Huang, "Utilizing Machine Learning Techniques to Predict the Efficacy of Aerobic Exercise Intervention on Young Hypertensive Patients Based on Cardiopulmonary Exercise Testing", Journal of Healthcare Engineering, vol. 2021, Article ID 6633832, 14 pages, 2021. https://doi.org/10.1155/2021/6633832

Utilizing Machine Learning Techniques to Predict the Efficacy of Aerobic Exercise Intervention on Young Hypertensive Patients Based on Cardiopulmonary Exercise Testing

Academic Editor: Hao Chun Lu
Received: 27 Dec 2020
Revised: 08 Mar 2021
Accepted: 05 Apr 2021
Published: 22 Apr 2021

Abstract

Recently, the incidence of hypertension has significantly increased among young adults. While aerobic exercise intervention (AEI) has long been recognized as an effective treatment, individual differences in response to AEI can seriously influence clinicians’ decisions. In particular, only a few studies have been conducted to predict the efficacy of AEI on lowering blood pressure (BP) in young hypertensive patients. As such, this paper aims to explore the implications of various cardiopulmonary metabolic indicators in the field by mining patients’ cardiopulmonary exercise testing (CPET) data before making treatment plans. CPET data are collected “breath by breath” by using an oxygenation analyzer attached to a mask and then divided into four phases: resting, warm-up, exercise, and recovery. To mitigate the effects of redundant information and noise in the CPET data, a sparse representation classifier based on analytic dictionary learning was designed to accurately predict the individual responsiveness to AEI. Importantly, the experimental results showed that the model presented herein performed better than the baseline method based on BP change and traditional machine learning models. Furthermore, the data from the exercise phase were found to produce the best predictions compared with the data from other phases. This study paves the way towards the customization of personalized aerobic exercise programs for young hypertensive patients.

1. Introduction

As a prevalent chronic disease, hypertension has been widely considered a major risk factor for cardio-cerebrovascular events [1]. Strikingly, hypertension incidence is increasing most dramatically in young adults [2, 3]. As an alternative to antihypertensive drugs, lifestyle adjustments, including body weight control, diet, and exercise, can also be used to lower blood pressure (BP) [4, 5]. In particular, aerobic exercise not only directly reduces BP but also indirectly achieves similar effects by controlling body weight, reducing stress, and improving vascular endothelial function, along with other mechanisms [6-8]. Therefore, aerobic exercise intervention (AEI) has been widely recommended for the treatment of hypertension [9, 10]. Unfortunately, specific guidelines for effectively administering aerobic exercise aimed at antihypertension have not been widely accepted, as there is significant individual variation in the BP lowering achieved by the same exercise program, with the same exercise type, time, frequency, and duration [11-13]. Understanding individual responsiveness to AEI before formulating a comprehensive hypertension management plan will help improve both the effectiveness and the efficiency of BP management. To our knowledge, research in this field is still very limited, which motivated the work conducted in this paper.

With clinical feasibility and practicality in mind, this work investigated whether machine learning techniques can predict the efficacy of AEI on young hypertensive patients. Taking into account the prognostic ability of key cardiopulmonary variables, data mining was performed on data generated by cardiopulmonary exercise testing (CPET) before treatment. CPET provides a comprehensive physiological assessment of multiorgan system function, including not only the cardiovascular and pulmonary systems but also the musculoskeletal and hematopoietic systems [14]. It can help clinicians identify the severity of disease and evaluate the response to treatment, thus playing an important role in formulating aerobic exercise training prescriptions and cardiac rehabilitation [15, 16]. The CPET setup used in this paper employs a sensor-equipped electric bicycle (see Figure 1) as the main ergometer to measure how various cardiopulmonary metabolic indicators change over time. To provide the best measure of the response to exercise, these data were collected "breath by breath" by an oxygenation analyzer attached to a mask. The specific test scheme guided by clinicians included four phases: (1) resting for 1 minute to relieve the patient's tension; (2) load-free cycling (no resistance on the pedals) for 3 minutes to warm up; (3) exercise for 5-12 minutes with increasing resistance on the pedals (20-35 watt/min increment) until maximal exertion; and (4) recovery for 6 minutes, with the first 3 minutes of load-free cycling and the second 3 minutes of sitting still.

Based on the professional advice of clinicians, this paper first utilized a simple method as the baseline to predict the BP-lowering effect of AEI for young hypertensive patients. To be clear, BP in this paper equals the sum of systolic blood pressure (SBP) and diastolic blood pressure (DBP). This method compared BP at the 6th minute of recovery (R6BP) with BP at the pre-exercise resting state (PEBP) in a single CPET before AEI. Patients with R6BP < PEBP were predicted to be strong responders to AEI; otherwise, they were predicted to be weak responders. Subsequent experiments showed that the accuracy of this method was typically 50%-60%, closely approximating a random guess and far beneath the requirement for making effective and accurate clinical exercise prescriptions. To meet this challenge, machine learning techniques were utilized to fully capitalize on the information present within several cardiopulmonary metabolic indicators provided by CPET. As such, this work provides useful insights into the formulation of personalized AEI prescriptions for young hypertensive patients. The main contributions of this paper are as follows:

(i) A sparse representation classifier based on analytic dictionary learning was designed to accurately predict the efficacy of AEI on BP lowering. This model not only alleviates the interference of redundant information and noise brought by breath-by-breath collection but also overcomes the deficiency of existing sparse representation-based classifiers, which need a large number of training samples.

(ii) The significance of various cardiopulmonary metabolic indicators at different phases of CPET was examined through comparative experiments. The results showed that the data from the exercise phase produce the best predictions compared with the data from other phases. Among the various metabolic indicators, oxygen pulse (i.e., oxygen intake per heartbeat) was recommended as a powerful indicator for predicting individual responsiveness to AEI.

The remainder of the paper is structured as follows. Section 2 introduces various metabolic indicators of CPET used in this paper. Section 3 briefly introduces the related works, including the development of application scenarios and research methods. Section 4 describes the designed model in detail based on the shortcomings of the existing model. Section 5 reports the experimental results along with analyses. Finally, conclusions and future works are summarized in Section 6.

2. Main Metabolic Indicators of CPET

CPET provides time-varying information regarding multiple indicators related to circulation, respiration, and gas metabolism at different levels of exercise intensity [17]. The nine indicators recommended by professional clinicians for this work are briefly described in the following:

(1) Heart rate (HR): the number of heartbeats per minute. Normally, HR is 60-100 beats per minute at rest and varies individually according to age, sex, and other physiological factors.

(2) Stroke volume (SV): the volume of blood ejected from either ventricle of the heart in a single beat. The main factors affecting SV are myocardial contractility, venous return volume (preload), arterial BP (afterload), and so forth.

(3) Cardiac output (CO): the volume of blood that flows out of the heart in a given period, usually denoted in liters per minute. It is obtained by multiplying the average SV per beat by HR and varies with metabolism and activity; for example, it increases with muscle movement, emotional agitation, pregnancy, and so forth.

(4) Oxygen pulse (VO2/HR): the volume of oxygen intake per heartbeat, that is, the amount of oxygen the body's tissues extract from the oxygen carried by each SV. A higher oxygen pulse suggests better cardiopulmonary function, so it can serve as a comprehensive index of cardiopulmonary function.

(5) Oxygen consumption/kilogram (VO2/kg): the volume of oxygen consumed by the body's metabolic processes over a period of time, usually denoted in milliliters of oxygen per kilogram of body weight per minute. It reflects the body's ability to use oxygen and is determined mainly by the maximal cardiac output, arterial oxygen content, the distribution of cardiac output to the exercising muscle, and muscle oxygen-extraction capacity.

(6) Tidal volume (VT): the volume of air inhaled or exhaled during a normal breath. It is related to age, sex, body size, breathing habits, body metabolism, and so forth.

(7) Ventilation volume/minute (VE): the volume of air inhaled or exhaled from the lungs in a minute, obtained by multiplying VT by the respiratory rate.

(8) Respiratory exchange ratio (R): the ratio of the carbon dioxide (CO2) output to the oxygen (O2) uptake (i.e., VCO2/VO2) during the same period. It reflects not only the gas exchange of tissue metabolism but also the influence of transient changes in gas storage.

(9) Carbon dioxide ventilation equivalent (VE/VCO2): the body's ability to discharge carbon dioxide, calculated as the ratio between the required ventilation volume and the carbon dioxide output.
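Several of the indicators above are simple arithmetic combinations of directly measured quantities (e.g., CO = SV × HR, VE = VT × respiratory rate, R = VCO2/VO2). The sketch below, using hypothetical values rather than CPET data, makes these relations concrete; the helper name `derived_indicators` and the unit choices are illustrative assumptions, not part of the study's pipeline.

```python
# Illustrative sketch: several CPET indicators are ratios/products of
# directly measured quantities. All numeric values here are hypothetical.

def derived_indicators(hr, sv, vo2, vco2, vt, rr):
    """hr: beats/min, sv: L/beat, vo2/vco2: mL/min, vt: L/breath, rr: breaths/min."""
    return {
        "CO": sv * hr,             # cardiac output (L/min) = stroke volume x heart rate
        "VO2/HR": vo2 / hr,        # oxygen pulse (mL O2 per heartbeat)
        "VE": vt * rr,             # minute ventilation (L/min) = tidal volume x resp. rate
        "R": vco2 / vo2,           # respiratory exchange ratio VCO2/VO2
        "VE/VCO2": (vt * rr) / (vco2 / 1000.0),  # ventilatory equivalent for CO2
    }

# CO = 0.08 L/beat x 120 beats/min -> ~9.6 L/min
ind = derived_indicators(hr=120, sv=0.08, vo2=1800, vco2=1620, vt=1.5, rr=30)
print(round(ind["CO"], 3), ind["VO2/HR"], round(ind["R"], 3))  # 9.6 15.0 0.9
```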

To illustrate the characteristics of these indicators more vividly, Figure 2 shows a visualization of the above nine indicators for a patient during the exercise phase of a CPET before AEI. Since each breath represents a sampling point, the information of each metabolic indicator collected by the breath-by-breath technique can be stored as a time series [18].

3. Related Works

CPET is a dynamic, noninvasive diagnostic method for evaluating cardiopulmonary function during incremental-load exercise. Recently, the application of CPET in clinical decision-making for various diseases has developed significantly. For example, CPET is playing a growing role in cardiology, including in heart failure, valve diseases, and ischemic heart disease [19]. Buys et al. evaluated the predictive value of CPET for the incidence of hypertension in patients undergoing aortic coarctation surgery and, through Cox regression analysis, determined the high-risk boundary as a VE/VCO2 slope of 27 and a peak SBP of 220 mmHg [20]. Keller et al. suggested that BP overresponse in CPET might serve as a diagnostic tool for identifying groups at high risk of hypertension [21]. Besides, CPET can be used for preoperative risk stratification of patients (not limited to cardiopulmonary surgery) to predict postoperative adverse outcomes [22, 23]. Currently, one of the most impressive advances is the integration of CPET with other tests to diagnose several diseases [24]. Exercise stress echocardiography and CPET have been successfully combined in the dynamic assessment of heart failure in hypertensive patients [25]. Similarly, CPET combined with echocardiography of the right ventricle has been applied to predict the prognosis of patients with pulmonary arterial hypertension [26].

From the perspective of research methods, in addition to traditional statistical analysis, data mining of CPET using machine learning techniques is gradually becoming a research hotspot. Leopold et al. developed a greedy heuristic algorithm based on feature clustering to study the ability of CPET to predict anaerobic mechanical power outputs [27]. Braccioni et al. used a random forest algorithm to analyze the relationship between symptoms and cardiopulmonary parameters of lung transplant recipients based on incremental CPET [28]. Sakr et al. evaluated the performance of six machine learning techniques in predicting individuals at risk of hypertension through treadmill stress testing in a large cohort [29]. Unfortunately, these works only selected particular values of cardiopulmonary metabolic indicators (such as peaks or slopes) as features for analysis, without taking into account their time-varying characteristics. Our previous work showed that time-varying data of some metabolic indicators obtained through CPET can be used to predict the efficacy of AEI [30], but further improving the predictive accuracy remains a challenge, especially when training samples are insufficient. This encouraged us to perform the research conducted in this paper.

In fact, predicting the BP-lowering effect of AEI from a given metabolic indicator can be cast as a time series classification (TSC) problem. To date, researchers have proposed hundreds of approaches for TSC in different application scenarios. TSC algorithms can be roughly divided into seven categories: (1) whole-series-based methods, (2) interval-based methods, (3) shapelet-based methods, (4) word-frequency-based methods, (5) model-based methods, (6) ensemble-based methods, and (7) deep learning-based methods. Bagnall et al. evaluated the latest progress of TSC algorithms on 85 datasets in the University of California, Riverside (UCR) archive [31]. They recommended 1-nearest neighbor with dynamic time warping (1NN-DTW) and random forest (RF) as baseline classifiers for comparison with other classifiers. They also concluded that ensemble-based methods can achieve high accuracy by utilizing multiple classifiers on one or more feature spaces. For example, Bagnall et al. integrated 35 classifiers over the time, frequency, change, and shapelet transformation domains [32]. On this basis, Lines et al. added two new classifiers, two additional transformation domains, and a hierarchical structure of probability voting to further improve performance [33]. Recently, deep learning-based methods have gradually become a research hotspot [34]. Deep learning is characterized by learning hidden, more abstract representations from the original time series to achieve better classification performance, and it is widely used for end-to-end learning with methods such as convolutional neural networks (CNNs) [35] and echo state networks (ESNs) [36]. The common disadvantage of these methods is that they require large amounts of data and high computational cost for model training. As this work represents the first stage of a larger experiment, the relatively small number of samples means that such approaches are not appropriate here. Moreover, robustness to noise also needs to be considered, because the process of collecting CPET data is usually very noisy. For these reasons, a classifier based on sparse representation is adopted for the task in this paper.

4. Sparse Representation-Based Classifier

In this section, a sparse representation classifier based on analytic dictionary learning is designed to accurately predict the efficacy of AEI on BP lowering. The method first eliminates redundant information and reduces noise through feature extraction based on sparse representation. At the same time, it takes advantage of learning an analytic dictionary and therefore does not require as many training samples as the existing sparse representation-based classifier.

4.1. Brief Introduction for Sparse Representation

Recently, sparse representation has received increasing attention in many fields. While initially developed for use in image analysis and signal processing, sparse representation has been successfully utilized for more general tasks in the machine learning field [37]. Specifically, given a signal $x \in \mathbb{R}^m$ of $m$ observations and an overcomplete dictionary $D = [d_1, d_2, \ldots, d_n] \in \mathbb{R}^{m \times n}$ ($n > m$), in which the column vector $d_i$ ($1 \le i \le n$) is known as an atom, the main goal of sparse representation is to reconstruct the signal perfectly with the least possible number of atoms. Its objective function is as follows:

$$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad x = D\alpha, \tag{1}$$

where $\alpha$ is the sparse representation (or sparse solution) of $x$ and $\|\alpha\|_0$ refers to the number of nonzero elements in $\alpha$. Due to the noise in real signals, the solution of equation (1) can be approximated by either of the following two equations:

$$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|x - D\alpha\|_2^2 \le \delta, \tag{2}$$

$$\min_{\alpha} \|x - D\alpha\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_0 \le k, \tag{3}$$

where $\delta$ can be considered as noise or a reconstruction residual, and the sparse factor $k$ is a predefined integer not less than 1. Besides, based on the Lagrange multiplier theorem, solving the sparse representation can be equivalently transformed into an unconstrained minimization problem:

$$\min_{\alpha} \|x - D\alpha\|_2^2 + \lambda \|\alpha\|_0, \tag{4}$$

where $\lambda$ is a positive constant used to achieve a tradeoff between the reconstruction residual and the sparsity of the solution.

It should be noted that since obtaining the optimal solution under $\ell_0$-norm minimization is an NP-hard problem, many algorithms have been proposed to deal with it. The strategies commonly used in these algorithms mainly include the greedy pursuit strategy and the convex relaxation strategy [38, 39]. The greedy pursuit strategy, represented by the orthogonal matching pursuit (OMP) algorithm, gradually approaches the optimal solution through the sequential selection of column vectors (atoms) until the end of the iteration [40]. For the convex relaxation strategy, the main idea is to replace the $\ell_0$-norm minimization term with an $\ell_1$-norm minimization term. Taking equation (3) as an example, it can be approximated by the lasso problem:

$$\min_{\alpha} \|x - D\alpha\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_1 \le \varepsilon, \tag{5}$$

where $\|\alpha\|_1$ represents the sum of the absolute values of the nonzero elements in $\alpha$ and $\varepsilon$ is a positive constant given beforehand. The advantage of this strategy is that the $\ell_1$-norm minimization problem is convex and can be effectively solved by several methods, such as least angle regression (LAR) [41], the coordinate descent algorithm (CDA) [42], the iterative shrinkage-thresholding algorithm (ISTA) [43], and many variations of them.
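To make the greedy pursuit strategy concrete, the following is a minimal OMP sketch in Python. Note the paper's experiments used MATLAB with the SPAMS toolbox; this standalone toy, with a small hand-built dictionary rather than CPET data, only illustrates the select-then-refit loop.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: returns a k-sparse alpha with x ~ D @ alpha."""
    residual = x.astype(float).copy()
    support = []
    alpha = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0                    # never reselect a chosen atom
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs  # refit on the support, update residual
    alpha[np.array(support)] = coeffs
    return alpha

# Toy overcomplete dictionary: 6 canonical atoms plus two mixed atoms.
I = np.eye(6)
extra = np.stack([(I[:, 0] + I[:, 1]) / np.sqrt(2),
                  (I[:, 2] + I[:, 4]) / np.sqrt(2)], axis=1)
D = np.hstack([I, extra])                      # 6 x 8, all atoms unit-norm
x = 1.5 * I[:, 3] - 2.0 * I[:, 5]              # 2-sparse synthetic signal
alpha = omp(D, x, k=2)
print(np.linalg.norm(x - D @ alpha))           # ~0: exact reconstruction
```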

4.2. The Existing Sparse Representation-Based Classifier

Proposed by Wright et al., the sparse representation-based classifier (SRC) was first applied in the field of face recognition and then successfully extended to TSC [44, 45]. Specifically, the sparse representation of an unlabeled sample is first solved based on the dictionary composed of all labeled samples. Then, the reconstruction residual of each class is calculated using the samples of that class and the corresponding elements in the sparse representation. Finally, classification is performed by examining which class leads to the minimum residual for the unlabeled sample. The steps to implement SRC are as follows:

(1) Each sample of the whole dataset, with $c$ classes in total, is preprocessed with $\ell_2$-norm normalization.

(2) A dictionary $D = [D_1, \ldots, D_j, \ldots, D_c]$ is generated, where $D_j$ ($1 \le j \le c$) is a subdictionary whose atoms (column vectors) are the normalized $j$th-class samples in the training set.

(3) The sparse representation $\hat{\alpha}$ of the unlabeled sample $y$ is obtained by using the algorithms described above.

(4) The unlabeled sample $y$ is reconstructed, respectively, using each $D_j$ and the corresponding $\hat{\alpha}_j$, where $\hat{\alpha}_j$ ($1 \le j \le c$) is a subvector consisting of the elements in $\hat{\alpha}$ corresponding to all atoms in $D_j$. The label is determined by the minimum residual, as shown in the following equation:

$$\operatorname{label}(y) = \arg\min_{j} \|y - D_j \hat{\alpha}_j\|_2. \tag{6}$$

Figure 3 shows the SRC schematic for a two-class problem. The success of the SRC depends on the hypothesis that the unlabeled sample can be best reconstructed by a linear representation of samples within the same class. However, once the samples of different classes look similar to each other, the performance of SRC is very unstable [46]. Besides, the dictionary cannot satisfy the overcompleteness if the number of labeled samples is less than the dimension of samples, which will also affect the performance of the SRC [47]. To overcome the shortcoming of the SRC, a sparse representation classifier based on an analytic dictionary was designed, and then its accuracy was improved by using dictionary learning. For the sake of simplicity, the model was called SRC-AL for short. The principle is described in the following.
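The four SRC steps above can be sketched as follows. This is an illustrative Python toy (the paper's implementation was in MATLAB/SPAMS): `src_predict`, the embedded OMP coder, and the synthetic two-class signals standing in for CPET time series are all assumptions made for demonstration.

```python
import numpy as np

def omp(D, x, k):
    """Greedy sparse coding used inside the classifier."""
    residual, support = x.copy(), []
    alpha = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    alpha[np.array(support)] = coeffs
    return alpha

def src_predict(X_train, labels, y, k=3):
    D = X_train / np.linalg.norm(X_train, axis=0)   # steps 1-2: normalized atoms
    y = y / np.linalg.norm(y)
    alpha = omp(D, y, k)                            # step 3: sparse coding
    classes = sorted(set(labels))
    residuals = []
    for cls in classes:                             # step 4: per-class residual
        mask = np.array([lab == cls for lab in labels])
        residuals.append(np.linalg.norm(y - D[:, mask] @ alpha[mask]))
    return classes[int(np.argmin(residuals))]

# Synthetic stand-ins: "strong" series hover around +1, "weak" series alternate sign.
rng = np.random.default_rng(1)
base_strong, base_weak = np.ones(20), np.tile([1.0, -1.0], 10)
strong = base_strong[:, None] + rng.normal(0, 0.05, (20, 5))
weak = base_weak[:, None] + rng.normal(0, 0.05, (20, 5))
X = np.hstack([strong, weak])
labels = ["strong"] * 5 + ["weak"] * 5
print(src_predict(X, labels, base_strong))          # -> strong
```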

4.3. The Designed Sparse Representation-Based Classifier

In the application domain of sparse representation, an overcomplete dictionary can usually be generated by either a data-driven implementation or an analytic approach [48]. The data-driven approach constructs an explicit dictionary directly from the raw data. This is exactly the way adopted by the SRC, which aims to obtain the residuals of the unlabeled sample reconstructed by the samples of different classes. Unlike SRC, SRC-AL generates an implicit dictionary based on the analytic approach as the initial dictionary. This approach generally utilizes fixed transformations, such as the discrete Fourier transform (DFT), discrete cosine transform (DCT), and discrete wavelet transform (DWT) [49]. Compared to the data-driven approach, the analytic approach has the advantage of allowing an overcomplete dictionary of any size, unconstrained by the number of labeled samples. However, due to its poor adaptability, the analytic dictionary often requires further optimization through dictionary learning. K-singular value decomposition (K-SVD) is a popular dictionary-learning algorithm that updates the used atoms one by one in an iterative manner to train the overcomplete dictionary best suited to the training set [50].

Inspired by the sparse representation predictor for time series proposed in our previous work [51], the workflow of SRC-AL consists of the following six steps:

(1) Generate an initial dictionary $D \in \mathbb{R}^{(m+c) \times n}$ by utilizing the analytic approach, where $m$ is the dimension of the sample, $c$ is the number of classes, and $n$ is an arbitrary integer much larger than $(m + c)$. The upper and lower parts of the dictionary are denoted by $D_{up}$ and $D_{lw}$, respectively.

(2) Normalize each sample of the training dataset with the $\ell_2$-norm, and convert its label into a one-hot encoding. Combine these two parts into the new training sample $x$.

(3) Using the training set composed of the new samples, update the initial dictionary through dictionary learning so as to better reconstruct the samples. The objective function of dictionary learning can be described as

$$\min_{D, \alpha_i} \sum_{i=1}^{r} \|x_i - D\alpha_i\|_2^2 \quad \text{s.t.} \quad \|\alpha_i\|_0 \le k, \tag{7}$$

where $r$ is the number of samples in the training set and $\alpha_i$ is the sparse representation of sample $x_i$.

(4) Normalize the unlabeled sample $y$ with the $\ell_2$-norm, and then obtain its sparse representation $\hat{\alpha}$ based on the upper part of the learned dictionary ($D_{up}$).

(5) Multiply the lower part of the learned dictionary ($D_{lw}$) by the sparse representation $\hat{\alpha}$ to obtain the label vector $L_y$.

(6) Determine the label of $y$ according to the index of the element with the largest absolute value in $L_y$, as shown in the following equation:

$$\operatorname{label}(y) = \arg\max_{i} |L_y(i)|, \tag{8}$$

where $L_y(i)$ represents the $i$th element in the vector $L_y$.

Figure 4 shows the SRC-AL schematic for a two-class problem. Assuming that sample x1 belongs to class 1, the green-filled blocks represent the normalized sample, and the following "10" represents the one-hot encoding of its label. Similarly, the blue-filled blocks represent a normalized sample of class 2, and the following "01" represents the one-hot encoding of its label. The dictionary filled with orange is generated by the analytic approach. To better reconstruct all training samples, a dictionary-learning algorithm (such as K-SVD) is applied to iteratively update the dictionary. Based on the upper part of the learned dictionary (Dup), the sparse representation of the unlabeled sample y (grey-filled blocks) is solved, and then Dlw is used to obtain the label vector Ly. Finally, the element with the largest absolute value in Ly is set to 1 and the other elements are set to 0; this one-hot encoding replaces the question mark in Figure 4 to complete the classification of y.
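The classification mechanics of steps (2) and (4)-(6) can be walked through on a toy example. The real model initializes an overcomplete DCT dictionary and refines it with K-SVD; to keep this sketch deterministic, the dictionary is hand-built so that its first two atoms already reconstruct the two training patterns, as if dictionary learning had converged. All names and signals here are illustrative assumptions, and the paper's implementation was in MATLAB.

```python
import numpy as np

def omp(D, x, k=1):
    """Greedy sparse coding (orthogonal matching pursuit)."""
    residual, support = x.copy(), []
    alpha = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    alpha[np.array(support)] = coeffs
    return alpha

m, c = 8, 2
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1]) / np.sqrt(8)   # class-0 pattern
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1]) / np.sqrt(8)   # class-1 pattern

# Step 2: stack the normalized sample with its one-hot label.
a1 = np.concatenate([p1, [1.0, 0.0]]) / np.sqrt(2)
a2 = np.concatenate([p2, [0.0, 1.0]]) / np.sqrt(2)

# Filler atoms (zero label rows) standing in for the rest of the dictionary.
E = np.eye(m)
filler = [np.concatenate([(E[2 * i] + s * E[2 * i + 1]) / np.sqrt(2), [0.0, 0.0]])
          for s in (1, -1) for i in range(m // 2)]
D = np.stack([a1, a2] + filler, axis=1)          # (m+c) x 10 "learned" dictionary
Dup, Dlw = D[:m, :], D[m:, :]                    # split used in steps 4-5

def classify(y, k=1):
    alpha = omp(Dup, y / np.linalg.norm(y), k)   # step 4: code y over Dup only
    Ly = Dlw @ alpha                             # step 5: label vector
    return int(np.argmax(np.abs(Ly)))            # step 6: largest |entry| wins

print(classify(p1), classify(p2))                # 0 1
```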

5. Experiments and Results

CPET data from 24 young patients with stage I hypertension, collected before AEI treatment, were used for the experiments. The dataset was provided by the Department of Cardiology, First Affiliated Hospital of Sun Yat-sen University, China. All exercise testing was completed under the supervision of professional medical staff in the hospital. Blood pressure before and after exercise was assessed using both dynamic (24-hour) and exercise BP results. Although the cost of acquiring each sample is high, the data are highly comparable and reliable thanks to the controlled amount of exercise and the comprehensive set of monitored indicators.

The performance of various machine learning models based on the data from the exercise phase was compared with the baseline method given by the clinician. Note that the baseline method only focused on BP change between pre-exercise and postexercise within a single CPET, while the machine learning model took into account the time series of metabolic indicators during CPET. After verifying the effectiveness of the designed model, the significance of the data from different phases in CPET for predicting the efficacy of AEI on BP lowering was further evaluated.


Table 1: Real labels of the 24 patients.

Sample | MBPB (mmHg) | MBPA (mmHg) | ri (%) | zi | Label
1 | 242 | 202 | 16.529 | 2.209 | Strong
2 | 229 | 195 | 14.847 | 1.792 | Strong
3 | 221 | 192 | 13.122 | 1.365 | Strong
4 | 241 | 212 | 12.033 | 1.094 | Strong
5 | 235 | 213 | 9.362 | 0.432 | Strong
6 | 253 | 230 | 9.091 | 0.365 | Strong
7 | 223 | 203 | 8.969 | 0.334 | Strong
8 | 249 | 227 | 8.835 | 0.301 | Strong
9 | 209 | 191 | 8.612 | 0.246 | Strong
10 | 244 | 223 | 8.607 | 0.245 | Strong
11 | 214 | 196 | 8.411 | 0.196 | Strong
12 | 246 | 226 | 8.130 | 0.127 | Strong
13 | 204 | 188 | 7.843 | 0.055 | Strong
14 | 244 | 225 | 7.787 | 0.041 | Strong
15 | 244 | 226 | 7.377 | −0.060 | Weak
16 | 231 | 214 | 7.359 | −0.065 | Weak
17 | 240 | 223 | 7.083 | −0.133 | Weak
18 | 231 | 215 | 6.926 | −0.172 | Weak
19 | 214 | 207 | 3.271 | −1.079 | Weak
20 | 221 | 214 | 3.167 | −1.104 | Weak
21 | 222 | 216 | 2.703 | −1.220 | Weak
22 | 211 | 208 | 1.422 | −1.537 | Weak
23 | 207 | 205 | 0.966 | −1.650 | Weak
24 | 207 | 208 | 0.483 | −1.770 | Weak

5.1. Description of the Dataset
(1) Inclusion criteria: between the ages of 18 and 45; stage I hypertension (SBP: 140-160 mmHg; DBP: 90-100 mmHg), either without medication or with discontinuation of antihypertensive drugs for more than two weeks while still presenting stage I hypertension; no regular exercise for four months prior to admission; willingness to participate in follow-ups for more than 6 months.

(2) Treatment prescription: patients underwent aerobic exercise with an Italian COSMED K4 electric bicycle. Training intensity corresponded to a metabolic equivalent of task (MET) of 70% of maximal oxygen consumption (VO2max). Patients performed aerobic exercise 5 times per week, 45 minutes each time (exercise intensity equivalent to 2,000-3,000 kcal per week), lasting 12 weeks.

(3) Classification standard: patients were categorized as strong or weak responders to AEI treatment according to the therapeutic effect. The classification process is as follows:

(a) All patients received 24-hour dynamic BP monitoring before and after AEI to obtain their daily mean BP.

(b) The rate of BP change before and after treatment was calculated for each patient as $r_i = (MBPB_i - MBPA_i)/MBPB_i \times 100\%$, where $MBPB_i$ and $MBPA_i$ denote the 24-hour mean BP of the $i$th patient before and after treatment, respectively.

(c) Z-score standardization was performed as $z_i = (r_i - \mu)/\sigma$, where $\mu$ and $\sigma$ are the mean and standard deviation of the change rates, respectively. The role of $z_i$ is to determine whether the antihypertensive efficacy for the $i$th patient was above average.

(d) Patients were classified according to $z_i$: those with $z_i > 0$ (14 individuals in total) were identified as strong antihypertensive responders to AEI, while those with $z_i < 0$ (10 individuals in total) were classified as weak responders. The real labels of the 24 patients are detailed in Table 1.
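The labeling procedure above (change rate, z-score, strong/weak split) can be sketched in a few lines. The four MBP pairs below are hypothetical, not the study cohort, and `label_patients` is an assumed helper name.

```python
# Sketch of the labeling procedure: percent BP change r_i, z-score z_i,
# and the strong/weak assignment. Hypothetical MBP values, not study data.

def label_patients(mbpb, mbpa):
    r = [(b - a) / b * 100.0 for b, a in zip(mbpb, mbpa)]    # BP change rate (%)
    mu = sum(r) / len(r)
    sigma = (sum((x - mu) ** 2 for x in r) / len(r)) ** 0.5  # population std dev
    z = [(x - mu) / sigma for x in r]
    labels = ["Strong" if zi > 0 else "Weak" for zi in z]    # above/below average
    return labels, r, z

labels, r, z = label_patients([240, 230, 220, 210], [210, 205, 210, 208])
print(labels)   # ['Strong', 'Strong', 'Weak', 'Weak']
```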

As can be seen from Table 1, all patients except the last one exhibited a certain antihypertensive effect following 12 weeks of AEI treatment, with an average change rate of 7.582%. The individual showing the best antihypertensive effect exhibited a 40 mmHg (or 16.529%) BP decrease after AEI. However, the absence of obvious BP changes in some individuals confirms that the efficacy of AEI differs significantly among hypertensive patients.

5.2. Experimental Results

In this paper, accuracy and F1-score (the harmonic mean of precision and recall), both computed from the confusion matrix (see Figure 5), were used to evaluate the performance of the models. For both metrics, higher values indicate better performance.
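As a quick check of these metrics, the snippet below computes accuracy and F1-score from a 2×2 confusion matrix; the counts used here correspond to the baseline method's predictions in Table 2, with strong responders taken as the positive class. The helper name `accuracy_f1` is illustrative.

```python
# Accuracy and F1-score from a 2x2 confusion matrix
# (tp/fp/fn/tn = true/false positives/negatives).

def accuracy_f1(tp, fp, fn, tn):
    acc = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return acc, f1

# Baseline method counts tallied from Table 2 ("Strong" = positive class).
acc, f1 = accuracy_f1(tp=7, fp=4, fn=7, tn=6)
print(round(acc, 3), round(f1, 3))  # 0.542 0.56
```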

5.2.1. The Performance of the Baseline Method Based on BP Change

An intuitive way to predict the BP-lowering effect of AEI is to determine whether a patient's BP after exercise is lower than before exercise in the CPET. Specifically, BP at the 6th minute of the recovery phase (R6BP) was subtracted from the pre-exercise resting BP (PEBP) to obtain the BP change (△BP = PEBP − R6BP) for each patient. A patient with △BP less than 0 was considered unable to benefit from AEI, meaning the predicted label was weak; conversely, a patient with △BP greater than 0 was predicted to exhibit a strong, beneficial antihypertensive response to AEI. The predicted labels of the baseline method based on BP change are shown in Table 2. Using the confusion matrix, the accuracy of the baseline method was 0.542 and the F1-score was 0.56. This means that the baseline method was only slightly superior to a random guess (accuracy = 0.5), far below the requirement for clinical application.


Table 2: Predicted labels of the baseline method based on BP change.

Sample | PEBP (mmHg) | R6BP (mmHg) | △BP (mmHg) | Predicted label | Real label
1 | 242 | 267 | −25 | Weak | Strong
2 | 233 | 260 | −27 | Weak | Strong
3 | 199 | 170 | 29 | Strong | Strong
4 | 236 | 216 | 20 | Strong | Strong
5 | 190 | 223 | −33 | Weak | Strong
6 | 238 | 246 | −8 | Weak | Strong
7 | 209 | 205 | 4 | Strong | Strong
8 | 274 | 256 | 18 | Strong | Strong
9 | 201 | 203 | −2 | Weak | Strong
10 | 224 | 214 | 10 | Strong | Strong
11 | 204 | 211 | −7 | Weak | Strong
12 | 262 | 255 | 7 | Strong | Strong
13 | 216 | 218 | −2 | Weak | Strong
14 | 219 | 194 | 25 | Strong | Strong
15 | 222 | 232 | −10 | Weak | Weak
16 | 245 | 229 | 16 | Strong | Weak
17 | 247 | 250 | −3 | Weak | Weak
18 | 176 | 173 | 3 | Strong | Weak
19 | 233 | 225 | 8 | Strong | Weak
20 | 219 | 227 | −8 | Weak | Weak
21 | 205 | 206 | −1 | Weak | Weak
22 | 223 | 235 | −12 | Weak | Weak
23 | 216 | 242 | −26 | Weak | Weak
24 | 224 | 185 | 39 | Strong | Weak
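The baseline rule can be stated in a few lines; the sample values below are the first three patients of Table 2, and `baseline_predict` is an assumed helper name.

```python
# Baseline rule from a single pre-treatment CPET: predict "Strong" when the
# resting BP exceeds the 6th-minute recovery BP (dBP = PEBP - R6BP > 0).

def baseline_predict(pebp, r6bp):
    return "Strong" if pebp - r6bp > 0 else "Weak"

# First three patients of Table 2:
print([baseline_predict(p, r) for p, r in [(242, 267), (233, 260), (199, 170)]])
# ['Weak', 'Weak', 'Strong']
```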

5.2.2. The Performance of Machine Learning Models Based on the Metabolic Indicators

Time series of the nine metabolic indicators described above, taken from the exercise phase, were selected for analysis with the machine learning models. Of note, patients had distinct exercise durations depending on their physical condition, resulting in different numbers of sampling points per individual (ranging from 85 to 270). As most machine learning models require samples of the same dimension, linear interpolation was first applied to unify the number of sampling points for all patients to 270. Afterward, the SRC-AL model presented herein was compared with SRC and several popular TSC models, including 1NN-DTW, random forest (RF), and support vector machine (SVM). Due to the limited number of samples, leave-one-out cross-validation was adopted for the experiments [52]. All of the above models were implemented in MATLAB. For SRC and SRC-AL, the OMP and K-SVD algorithms in the SPAMS toolbox were used to solve the sparse representation and dictionary learning, respectively. Besides, the optimal sparse factor was obtained by grid search within a specific interval. Finally, for SRC-AL, the initial dictionary was defined as a matrix whose number of columns was twice its number of rows, realized by the discrete cosine transform. The experimental results of each model are shown in Tables 3 and 4, where the last column of each table shows the average performance of each metabolic indicator across the five machine learning models.
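The length-unification step can be sketched with `np.interp`; `resample` is an assumed helper name, and the input series here is synthetic rather than a real breath-by-breath recording.

```python
import numpy as np

def resample(series, target_len=270):
    """Linearly interpolate a 1-D series onto a fixed-length grid."""
    src = np.linspace(0.0, 1.0, num=len(series))
    dst = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(dst, src, series)

short = np.sin(np.linspace(0, 3, 85))   # hypothetical 85-breath indicator series
print(resample(short).shape)            # (270,)
```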


Table 3: Accuracy of each model based on the time series of different metabolic indicators.

Indicator | SRC-AL | SRC | 1NN-DTW | RF | SVM | Mean
VO2/HR | 1.000 | 0.792 | 0.625 | 0.583 | 0.667 | 0.733
VE | 0.875 | 0.708 | 0.625 | 0.458 | 0.542 | 0.642
VO2/kg | 0.917 | 0.667 | 0.500 | 0.500 | 0.583 | 0.633
CO | 0.917 | 0.667 | 0.500 | 0.583 | 0.792 | 0.692
HR | 0.833 | 0.625 | 0.500 | 0.500 | 0.417 | 0.575
SV | 0.667 | 0.542 | 0.458 | 0.542 | 0.542 | 0.550
VE/VCO2 | 0.708 | 0.542 | 0.500 | 0.458 | 0.458 | 0.533
VT | 1.000 | 0.458 | 0.583 | 0.500 | 0.333 | 0.575
R | 0.958 | 0.417 | 0.375 | 0.500 | 0.583 | 0.567


Table 4: F1-score of each model based on the time series of different metabolic indicators.

Indicator | SRC-AL | SRC | 1NN-DTW | RF | SVM | Mean
VO2/HR | 1.000 | 0.815 | 0.667 | 0.667 | 0.714 | 0.773
VE | 0.903 | 0.741 | 0.640 | 0.581 | 0.593 | 0.692
VO2/kg | 0.933 | 0.692 | 0.539 | 0.600 | 0.643 | 0.681
CO | 0.923 | 0.692 | 0.571 | 0.667 | 0.828 | 0.736
HR | 0.846 | 0.640 | 0.539 | 0.600 | 0.462 | 0.617
SV | 0.667 | 0.667 | 0.552 | 0.645 | 0.621 | 0.630
VE/VCO2 | 0.778 | 0.667 | 0.600 | 0.606 | 0.581 | 0.646
VT | 1.000 | 0.381 | 0.615 | 0.571 | 0.429 | 0.599
R | 0.966 | 0.462 | 0.516 | 0.571 | 0.643 | 0.632

5.2.3. The Performance of SRC-AL Based on the Data from Different Phases of CPET

Since SRC-AL performed best among the above models, it was used directly to evaluate the significance of the data generated in the three important phases of CPET (warm-up, exercise, and recovery) for predicting the individual responsiveness to AEI. As in the exercise phase, the data dimensions of different patients were also inconsistent in the other two phases. For the warm-up phase, the shortest time series of metabolic indicators had only 38 sample values, while the longest had 81. For the recovery phase, the shortest had only 113 sample values, while the longest had 195. Therefore, linear interpolation was again applied first to unify the data dimensions across patients. Note also that the dictionary learned in the exercise phase cannot be reused in the other two phases because of the different data dimensions. The experimental results of SRC-AL based on the data from these three phases of CPET are shown in Table 5.
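The initial analytic dictionary described in Section 5.2.2 (a DCT matrix with twice as many columns as rows) can be sketched as follows. The exact construction is not given in the paper, so this assumes the common overcomplete-DCT recipe with unit-norm atoms (Python/NumPy, illustrative only):

```python
import numpy as np

def overcomplete_dct_dictionary(n_rows, factor=2):
    """Overcomplete DCT dictionary: n_rows x (factor * n_rows), unit-norm atoms.

    Assumed construction -- the standard overcomplete DCT, not necessarily
    the exact matrix used in the paper.
    """
    n_atoms = factor * n_rows
    t = np.arange(n_rows)
    D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n_atoms)) / n_atoms)
    D /= np.linalg.norm(D, axis=0)  # normalize each column (atom)
    return D

# Each CPET phase has its own signal length, hence its own dictionary
# (exercise: 270 points; warm-up: up to 81; recovery: up to 195).
D_exercise = overcomplete_dct_dictionary(270)
D_warmup = overcomplete_dct_dictionary(81)
print(D_exercise.shape, D_warmup.shape)  # (270, 540) (81, 162)
```

Because the dictionary dimensions are tied to the signal length, a dictionary learned on one phase cannot be reused on another, which is why each phase is evaluated separately.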

5.3. Analyses of Experimental Results

This work investigated the ability of metabolic indicators to discriminate between strong and weak responses to AEI in patients. From the analysis of the above experimental results, the following insights can be obtained to help clinicians predict the efficacy of AEI on young hypertensive patients based on CPET.

(1) From Tables 3 and 4, SRC-AL and SRC were superior to the other traditional classifiers in predicting the individual responsiveness to AEI based on the time series of metabolic indicators. This is mainly because the process of collecting these metabolic indicators is prone to generating many interference signals, while sparse representation can effectively extract the main features of a time series and thus maximize robustness to noise.

(2) The performance of SRC-AL was significantly better than that of SRC for the time series of every indicator, although both are based on sparse representation. This indicates that SRC needs an adequate set of training samples to form an overcomplete dictionary for good performance. In contrast, SRC-AL can always guarantee overcompleteness because it generates its dictionary analytically. Through dictionary learning, the initial dictionary is then gradually updated to better fit the training samples and their labels.

(3) According to the last column of Table 3, except for the indicator VE/VCO2, the average accuracy of all the other metabolic indicators across the five machine learning models was higher than that of the baseline method based on BP change (accuracy = 0.542). However, when evaluated by the average F1-score, all metabolic indicators were superior to BP change alone (F1-score = 0.56), as shown in the last column of Table 4. This interesting finding suggests that the multipoint characteristics of cardiopulmonary metabolic indicators, formed by collecting breath-by-breath data, can more accurately reflect the individual responsiveness to AEI. Figure 6 visualizes the comparison between the predictive performance of each indicator obtained by the machine learning models and that of BP change obtained by the baseline method: Figure 6(a) shows the average/optimal accuracy and Figure 6(b) the average/optimal F1-score. Note that the optimal performance of every metabolic indicator was obtained by the SRC-AL model designed herein.

(4) Table 5 illustrates the significance of data from different phases of CPET for predicting the BP-lowering effect of AEI. VO2/HR, VE, VO2/kg, VT, and R were most predictive using the time series of the exercise phase, while HR, SV, and VE/VCO2 performed better with the time series of the warm-up phase. The performance of CO was identical in the exercise and warm-up phases. Finally, the data from the recovery phase were less informative than those from the other two phases. The reason may be that the patient is active only for the first three minutes of the recovery phase and remains inactive for the next three minutes. In other words, the metabolic data of patients in the active state are more significant for predicting the individual responsiveness to AEI.


Table 5: Performance of SRC-AL based on the data from different phases of CPET.

Indicator | Warm-up accuracy | Exercise accuracy | Recovery accuracy | Warm-up F1-score | Exercise F1-score | Recovery F1-score
VO2/HR | 0.583 | 1.000 | 0.625 | 0.737 | 1.000 | 0.743
VE | 0.625 | 0.875 | 0.833 | 0.757 | 0.903 | 0.875
VO2/kg | 0.750 | 0.917 | 0.750 | 0.824 | 0.933 | 0.824
CO | 0.917 | 0.917 | 0.708 | 0.923 | 0.923 | 0.720
HR | 0.833 | 0.833 | 0.792 | 0.857 | 0.846 | 0.828
SV | 0.958 | 0.667 | 0.625 | 0.963 | 0.667 | 0.757
VE/VCO2 | 0.750 | 0.708 | 0.708 | 0.800 | 0.778 | 0.788
VT | 0.583 | 1.000 | 0.792 | 0.737 | 1.000 | 0.828
R | 0.917 | 0.958 | 0.708 | 0.923 | 0.966 | 0.759
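The SRC decision rule underlying the comparisons above — sparse-code the test series and assign the class whose atoms give the smallest reconstruction residual — can be sketched as follows. This is a simplified Python illustration on synthetic data using scikit-learn's OMP solver rather than the SPAMS/MATLAB implementation used in the paper, and it omits the K-SVD dictionary-update step of SRC-AL:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(test_signal, class_dicts, sparsity=5):
    """Assign the class whose dictionary reconstructs the signal best."""
    residuals = {}
    for label, D in class_dicts.items():  # D: (signal_len, n_atoms)
        omp = OrthogonalMatchingPursuit(
            n_nonzero_coefs=min(sparsity, D.shape[1]), fit_intercept=False)
        omp.fit(D, test_signal)
        residuals[label] = np.linalg.norm(test_signal - D @ omp.coef_)
    return min(residuals, key=residuals.get)

# Toy data: "Strong" responders near a sine waveform, "Weak" near a cosine.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
strong = np.stack([np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(60)
                   for _ in range(8)], axis=1)
weak = np.stack([np.cos(2 * np.pi * t) + 0.05 * rng.standard_normal(60)
                 for _ in range(8)], axis=1)
dicts = {"Strong": strong, "Weak": weak}
print(src_predict(np.sin(2 * np.pi * t), dicts))  # Strong
```

With only a handful of training samples per class, such data-driven dictionaries are far from overcomplete; SRC-AL sidesteps this by starting from an analytic dictionary and refining it with dictionary learning.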

5.4. Additional Experiments

Considering that the sample size in the aforementioned experiments is rather limited, six datasets from the UCR Time Series Classification Archive were selected for additional experiments to further verify the effectiveness of the proposed model [53]. These datasets share two characteristics: (1) the number of classes is two, and (2) the number of training samples is less than or close to the sample length. As a result, a dictionary built directly from the data is not overcomplete, which may reduce the classification accuracy of SRC. The proposed model based on analytic dictionary learning (SRC-AL), however, should not be affected. A detailed description of the datasets is given in Table 6. According to the results in Table 7, SRC-AL achieved the best classification on all the datasets, indicating that SRC-AL is particularly suitable for datasets with fewer training samples than the sample length.


Table 6: Description of the selected UCR datasets.

Type | Dataset | Classes | Length | Training set | Testing set
ECG | ECGFiveDays | 2 | 136 | 23 | 861
ECG | ECG200 | 2 | 96 | 100 | 100
Sensor | SonyAIBORobotSurface1 | 2 | 70 | 20 | 601
Spectro | Ham | 2 | 431 | 109 | 105
Image | Herring | 2 | 512 | 64 | 64
Image | BeetleFly | 2 | 512 | 20 | 20


Table 7: Classification results of each model on the UCR datasets.

Dataset | SRC-AL | SRC | 1NN-DTW | RF | SVM
ECGFiveDays | 0.974 | 0.971 | 0.768 | 0.787 | 0.974
ECG200 | 0.920 | 0.900 | 0.770 | 0.819 | 0.770
SonyAIBORobotSurface1 | 0.890 | 0.757 | 0.725 | 0.733 | 0.677
Ham | 0.762 | 0.619 | 0.467 | 0.722 | 0.619
Herring | 0.672 | 0.609 | 0.531 | 0.572 | 0.625
BeetleFly | 0.900 | 0.650 | 0.700 | 0.825 | 0.900
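The overcompleteness argument can be checked directly against the dataset sizes in Table 6: a dictionary built from the training data, with one atom per sample, is overcomplete only when the number of training samples exceeds the series length, which fails (or, for ECG200, only barely holds) on these datasets:

```python
# (dataset, series length, training samples), taken from Table 6.
datasets = [
    ("ECGFiveDays", 136, 23),
    ("ECG200", 96, 100),
    ("SonyAIBORobotSurface1", 70, 20),
    ("Ham", 431, 109),
    ("Herring", 512, 64),
    ("BeetleFly", 512, 20),
]
# A data-driven dictionary is overcomplete iff atoms (= training samples)
# outnumber the signal dimension (= series length).
overcomplete = {name: n_train > length for name, length, n_train in datasets}
for name, flag in overcomplete.items():
    print(f"{name}: data-driven dictionary overcomplete? {flag}")
```

Only ECG200 (100 samples vs. length 96) marginally satisfies the condition, which matches the selection criterion stated for these datasets.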

In addition, considering that SRC-AL is an extended sparse representation classifier, an interesting question is whether other machine learning models can also be modified to handle the problem addressed in this paper with better performance. To answer this question, improved versions of several machine learning models were compared with SRC-AL. For example, to reduce the huge feature space of the random forest, time series forest (TSF) was used to divide a time series of length m into random intervals, and the mean, standard deviation, and slope of each interval were then taken as features for classification [54]. Similarly, to improve the classification accuracy of 1NN-DTW, 1NN-shapeDTW was adopted: each time series is first converted into a sequence of shape descriptors, and locally similar structures are then paired [55]. Aiming to capture different characteristics of the domain data, three descriptor functions were utilized to encode local shape information in this paper: raw subsequence (RAWS), discrete wavelet transform (DWT), and slope. Specifically, RAWS directly takes the subsequence of the data around a sampling point of a time series as its shape descriptor. On this basis, DWT decomposes each subsequence into three levels and serializes all the coefficients into a shape descriptor. Alternatively, the slope function divides each subsequence into several intervals and concatenates the slopes of the fitted lines of all the intervals into a shape descriptor. According to the results shown in Table 8, SRC-AL outperformed all the improved versions, which fully demonstrates the significance of sparse representation in feature extraction and noise reduction for CPET data.


Table 8: Accuracy of SRC-AL and the improved versions of the machine learning models.

Indicator | SRC-AL | 1NN-shapeDTW (RAWS) | 1NN-shapeDTW (DWT) | 1NN-shapeDTW (slope) | TSF
VO2/HR | 1.000 | 0.625 | 0.583 | 0.542 | 0.675
VE | 0.875 | 0.583 | 0.583 | 0.667 | 0.392
VO2/kg | 0.917 | 0.458 | 0.500 | 0.500 | 0.450
CO | 0.917 | 0.833 | 0.833 | 0.667 | 0.558
HR | 0.833 | 0.500 | 0.542 | 0.542 | 0.517
SV | 0.667 | 0.583 | 0.625 | 0.542 | 0.592
VE/VCO2 | 0.708 | 0.625 | 0.583 | 0.542 | 0.542
VT | 1.000 | 0.542 | 0.542 | 0.625 | 0.392
R | 0.958 | 0.583 | 0.583 | 0.625 | 0.517
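The TSF features compared above (mean, standard deviation, and least-squares slope over random intervals) can be sketched in Python; the interval count and widths below are illustrative choices, not the settings from the TSF paper:

```python
import numpy as np

def interval_features(series, intervals):
    """TSF-style features: mean, std, and least-squares slope per interval."""
    feats = []
    for start, end in intervals:
        seg = np.asarray(series[start:end], dtype=float)
        x = np.arange(len(seg))
        slope = np.polyfit(x, seg, 1)[0] if len(seg) > 1 else 0.0
        feats.extend([seg.mean(), seg.std(), slope])
    return np.array(feats)

rng = np.random.default_rng(1)
m = 270  # length of the interpolated exercise-phase series
starts = rng.integers(0, m - 10, size=5)
widths = rng.integers(10, 60, size=5)
intervals = [(int(s), int(min(s + w, m))) for s, w in zip(starts, widths)]
series = np.cumsum(rng.standard_normal(m))  # synthetic stand-in for an indicator
print(interval_features(series, intervals).shape)  # (15,)
```

Each tree in the forest would draw its own random intervals; here a single feature vector with three features per interval is produced.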

6. Conclusions and Future Work

In recent years, the incidence of hypertension has shown a clear trend towards presenting in younger patients. Note that AEI has been recognized as an effective treatment among young hypertensive patients. Unfortunately, research regarding how to predict the individual responsiveness to AEI for young hypertensive patients is still lacking. As such, a sparse representation classifier based on analytic dictionary learning, a.k.a. SRC-AL, was designed to mine the time series of multiple cardiopulmonary metabolic indicators from CPET data to accurately estimate the effectiveness of AEI on patients’ BP management.

In summary, the experimental results first showed that machine learning models based on the time series of metabolic indicators, especially SRC-AL, can predict the individual responsiveness to AEI better than the baseline method based on scalar values of BP change alone. Secondly, data from the exercise phase of CPET are the first choice for data mining, with data from the warm-up phase being the second choice. Thirdly, VO2/HR is strongly recommended as a powerful new prognostic indicator for predicting the antihypertensive efficacy of aerobic exercise, with an average accuracy of about 75% and up to 100%. In addition, CO is also a good choice, not only because its average performance is second only to VO2/HR but also because its performance is very stable in both the warm-up and exercise phases. As such, these findings will likely help clinicians select comprehensive antihypertensive treatment measures more accurately without requiring extra clinical testing.

Note that the predictive model in this study gives a qualitative prediction: whether or not an individual hypertensive patient's response to aerobic exercise intervention will be ideal. In future work, a quantitative model predicting the BP reduction caused by AEI is planned. In addition, BP in the current model is defined as the sum of SBP and DBP; it may be more meaningful to analyze SBP and DBP separately in subsequent work. Finally, the work presented here includes data from 24 young patients with stage I hypertension. Given the limited sample size of this dataset, more samples should be collected in the future to establish the robustness of the proposed method. At the same time, further optimization can be attempted through data augmentation techniques.

Data Availability

The data used to support the findings of this study cannot be made freely available in order to protect patient privacy. Requests for access to these data should be made to the corresponding author.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors wish to thank the research team of the Department of Cardiology, First Affiliated Hospital of Sun Yat-sen University, for providing the experimental data. This work was supported by the National Natural Science Foundation of China (Grant nos. 61772136 and 61672159) and the Natural Science Foundation of Fujian Province (Grant no. 2018J07005).

References

1. R. M. Carey and P. K. Whelton, “Prevention, detection, evaluation, and management of high blood pressure in adults: synopsis of the 2017 American college of cardiology/American heart association hypertension guideline,” Annals of Internal Medicine, vol. 168, no. 5, pp. 351–358, 2018.
2. Y. Yano, J. P. Reis, L. A. Colangelo et al., “Association of blood pressure classification in young adults using the 2017 American College of Cardiology/American Heart Association blood pressure guideline with cardiovascular events later in life,” Journal of the American Medical Association, vol. 320, no. 17, pp. 1774–1782, 2018.
3. S. Wu, Y. Song, S. Chen et al., “Blood pressure classification of 2017 associated with cardiovascular disease and mortality in young Chinese adults,” Hypertension, vol. 76, no. 1, pp. 251–258, 2020.
4. R. D. Brook, L. J. Appel, M. Rubenfire et al., “Beyond medications and diet: alternative approaches to lowering blood pressure,” Hypertension, vol. 61, no. 6, pp. 1360–1383, 2013.
5. H. Wen and L. Wang, “Reducing effect of aerobic exercise on blood pressure of essential hypertensive patients: a meta-analysis,” Medicine, vol. 96, no. 11, p. e6150, 2017.
6. L. Cao, X. Li, P. Yan et al., “The effectiveness of aerobic exercise for hypertensive population: a systematic review and meta‐analysis,” The Journal of Clinical Hypertension, vol. 21, no. 7, pp. 868–876, 2019.
7. M. L. Pedralli, B. Eibel, G. Waclawovsky et al., “Effects of exercise training on endothelial function in individuals with hypertension: a systematic review with meta-analysis,” Journal of the American Society of Hypertension, vol. 12, no. 12, pp. e65–e75, 2018.
8. I. Gorostegi-Anduaga, P. Corres, A. MartinezAguirre-Betolaza et al., “Effects of different aerobic exercise programmes with nutritional intervention in sedentary adults with overweight/obesity and hypertension: EXERDIET-HTA study,” European Journal of Preventive Cardiology, vol. 25, no. 4, pp. 343–353, 2018.
9. B. K. Pedersen and B. Saltin, “Exercise as medicine-evidence for prescribing exercise as therapy in 26 different chronic diseases,” Scandinavian Journal of Medicine & Science in Sports, vol. 25, pp. 1–72, 2015.
10. S. Lopes, J. Mesquita-Bastos, A. J. Alves, and F. Ribeiro, “Exercise as a tool for hypertension and resistant hypertension management: current insights,” Integrated Blood Pressure Control, vol. 11, pp. 65–71, 2018.
11. C. Hacke, D. Nunan, and B. Weisser, “Do exercise trials for hypertension adequately report interventions? A reporting quality study,” International Journal of Sports Medicine, vol. 39, no. 12, pp. 902–908, 2018.
12. C. Ozemek and R. Arena, “Precision in promoting physical activity and exercise with the overarching goal of moving more,” Progress in Cardiovascular Diseases, vol. 62, no. 1, pp. 3–8, 2019.
13. R. Ross, B. H. Goodpaster, L. G. Koch et al., “Precision exercise medicine: understanding exercise response variability,” British Journal of Sports Medicine, vol. 53, no. 18, pp. 1141–1153, 2019.
14. K. Albouaini, M. Egred, A. Alahmar, and D. J. Wright, “Cardiopulmonary exercise testing and its application,” Postgraduate Medical Journal, vol. 83, no. 985, pp. 675–682, 2007.
15. G. J. Balady, R. Arena, K. Sietsema et al., “Clinician's guide to cardiopulmonary exercise testing in adults,” Circulation, vol. 122, no. 2, pp. 191–225, 2010.
16. J.-C. Youn and S.-M. Kang, “Cardiopulmonary exercise test in patients with hypertension: focused on hypertensive response to exercise,” Pulse, vol. 3, no. 2, pp. 114–117, 2015.
17. A. Mezzani, “Cardiopulmonary exercise testing: basics of methodology and measurements,” Annals of the American Thoracic Society, vol. 14, no. Supplement_1, pp. S3–S11, 2017.
18. U. Drescher, J. Koschate, and U. Hoffmann, “Oxygen uptake and heart rate kinetics during dynamic upper and lower body exercise: an investigation by time-series analysis,” European Journal of Applied Physiology, vol. 115, no. 8, pp. 1665–1672, 2015.
19. M. Guazzi, F. Bandera, C. Ozemek, D. Systrom, and R. Arena, “Cardiopulmonary exercise testing,” Journal of the American College of Cardiology, vol. 70, no. 13, pp. 1618–1636, 2017.
20. R. Buys, A. Van De Bruaene, J. Müller et al., “Usefulness of cardiopulmonary exercise testing to predict the development of arterial hypertension in adult patients with repaired isolated coarctation of the aorta,” International Journal of Cardiology, vol. 168, no. 3, pp. 2037–2041, 2013.
21. K. Keller, K. Stelzer, M. A. Ostad, and F. Post, “Impact of exaggerated blood pressure response in normotensive individuals on future hypertension and prognosis: systematic review according to PRISMA guideline,” Advances in Medical Sciences, vol. 62, no. 2, pp. 317–329, 2017.
22. P. O. Older and D. Z. H. Levett, “Cardiopulmonary exercise testing and surgery,” Annals of the American Thoracic Society, vol. 14, no. Supplement_1, pp. S74–S83, 2017.
23. D. J. Stubbs, L. A. Grimes, and A. Ercole, “Performance of cardiopulmonary exercise testing for the prediction of post-operative complications in non-cardiopulmonary surgery: a systematic review,” PLoS One, vol. 15, no. 2, p. e0226480, 2020.
24. C. Santoro, R. Sorrentino, R. Esposito et al., “Cardiopulmonary exercise testing and echocardiographic exam: an useful interaction,” Cardiovascular Ultrasound, vol. 17, no. 1, p. 29, 2019.
25. I. Nedeljkovic, M. Banovic, J. Stepanovic et al., “The combined exercise stress echocardiography and cardiopulmonary exercise test for identification of masked heart failure with preserved ejection fraction in patients with hypertension,” European Journal of Preventive Cardiology, vol. 23, no. 1, pp. 71–77, 2016.
26. R. Badagliacca, S. Papa, G. Valli et al., “Echocardiography combined with cardiopulmonary exercise testing for the prediction of outcome in idiopathic pulmonary arterial hypertension,” Chest, vol. 150, no. 6, pp. 1313–1322, 2016.
27. E. Leopold, D. Navot-Mintzer, E. Shargal et al., “Prediction of the Wingate anaerobic mechanical power outputs from a maximal incremental cardiopulmonary exercise stress test using machine-learning approach,” PLoS One, vol. 14, no. 3, p. e0212199, 2019.
28. F. Braccioni, D. Bottigliengo, A. Ermolao et al., “Dyspnea, effort and muscle pain during exercise in lung transplant recipients: an analysis of their association with cardiopulmonary function parameters using machine learning,” Respiratory Research, vol. 21, no. 1, pp. 1–11, 2020.
29. S. Sakr, R. Elshawi, A. Ahmed et al., “Using machine learning on cardiorespiratory fitness data for predicting hypertension: the Henry Ford ExercIse Testing (FIT) Project,” PLoS One, vol. 13, no. 4, p. e0195344, 2018.
30. G. Yang, X. Leng, F. Huang et al., “Use CPET data to predict the intervention effect of aerobic exercise on young hypertensive patients,” in Proceedings of 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1699–1702, IEEE, San Diego, CA, USA, November 2019.
31. A. Bagnall, J. Lines, A. Bostrom, J. Large, and E. Keogh, “The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances,” Data Mining and Knowledge Discovery, vol. 31, no. 3, pp. 606–660, 2017.
32. A. Bagnall, J. Lines, J. Hills, and A. Bostrom, “Time-series classification with COTE: the collective of transformation-based ensembles,” IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 9, pp. 2522–2535, 2015.
33. J. Lines, S. Taylor, and A. Bagnall, “Time series classification with HIVE-COTE: the hierarchical vote collective of transformation-based ensembles,” ACM Transactions on Knowledge Discovery from Data, vol. 12, no. 5, p. 52, 2018.
34. H. I. Fawaz, G. Forestier, J. Weber et al., “Deep learning for time series classification: a review,” Data Mining and Knowledge Discovery, vol. 33, no. 4, pp. 917–963, 2019.
35. B. Zhao, H. Lu, H. Chen, J. Liu, and D. Wu, “Convolutional neural networks for time series classification,” Journal of Systems Engineering and Electronics, vol. 28, no. 1, pp. 162–169, 2017.
36. Q. Ma, L. Shen, W. Chen, J. Wang, J. Wei, and Z. Yu, “Functional echo state network for time series classification,” Information Sciences, vol. 373, pp. 1–20, 2016.
37. Z. Zhang, Y. Xu, J. Yang, X. Li, and D. Zhang, “A survey of sparse representation: algorithms and applications,” IEEE Access, vol. 3, pp. 490–530, 2015.
38. J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: greedy pursuit,” Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.
39. J. A. Tropp, “Algorithms for simultaneous sparse approximation. Part II: convex relaxation,” Signal Processing, vol. 86, no. 3, pp. 589–602, 2006.
40. J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
41. B. Efron, T. Hastie, I. Johnstone et al., “Least angle regression,” The Annals of Statistics, vol. 32, no. 2, pp. 407–499, 2004.
42. T. T. Wu and K. Lange, “Coordinate descent algorithms for lasso penalized regression,” The Annals of Applied Statistics, vol. 2, no. 1, pp. 224–244, 2008.
43. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
44. J. Wright, A. Y. Yang, A. Ganesh et al., “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2008.
45. Z. Chen, W. Zuo, Q. Hu, and L. Lin, “Kernel sparse representation for time series classification,” Information Sciences, vol. 292, pp. 15–26, 2015.
46. L. Zhang, M. Yang, and X. Feng, “Sparse representation or collaborative representation: which helps face recognition,” in Proceedings of International Conference on Computer Vision, pp. 471–478, IEEE, Barcelona, Spain, November 2011.
47. W. Deng, J. Hu, and J. Guo, “Extended SRC: undersampled face recognition via intraclass variant dictionary,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, pp. 1864–1870, 2012.
48. R. Rubinstein, A. M. Bruckstein, and M. Elad, “Dictionaries for sparse representation modeling,” Proceedings of the IEEE, vol. 98, no. 6, pp. 1045–1057, 2010.
49. P. Wang, L. Kong, T. Du, and L. Wang, “Orthogonal sparse dictionary based on Chirp echo for ultrasound imaging,” Applied Acoustics, vol. 156, pp. 359–366, 2019.
50. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
51. Z. Yu, X. Zheng, F. Huang et al., “A framework based on sparse representation model for time series prediction in smart city,” Frontiers of Computer Science, vol. 15, no. 1, pp. 1–13, 2020.
52. M. Alkhodari, D. K. Islayem, F. A. Alskafi, and A. H. Khandoker, “Predicting hypertensive patients with higher risk of developing vascular events using heart rate variability and machine learning,” IEEE Access, vol. 8, pp. 192727–192739, 2020.
53. H. A. Dau, A. Bagnall, K. Kamgar et al., “The UCR time series archive,” IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 6, pp. 1293–1305, 2019.
54. H. Deng, G. Runger, E. Tuv, and M. Vladimir, “A time series forest for classification and feature extraction,” Information Sciences, vol. 239, pp. 142–153, 2013.
55. J. Zhao and L. Itti, “Shapedtw: shape dynamic time warping,” Pattern Recognition, vol. 74, pp. 171–184, 2018.

Copyright © 2021 Fangwan Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
