Special Issue: Bio-Inspired Learning and Adaptation for Optimization and Control of Complex Systems
Research Article | Open Access
Can Wang, Xinyu Wu, Yue Ma, Guizhong Wu, Yuhao Luo, "A Flexible Lower Extremity Exoskeleton Robot with Deep Locomotion Mode Identification", Complexity, vol. 2018, Article ID 5712108, 9 pages, 2018. https://doi.org/10.1155/2018/5712108
A Flexible Lower Extremity Exoskeleton Robot with Deep Locomotion Mode Identification
This paper presents a bioinspired lower extremity exoskeleton robot. The proposed exoskeleton robot can be adjusted in structure to fit wearers of 150–185 cm in height and has good gait stability. For gait control, a method of identifying different locomotion modes is proposed; five common locomotion modes are considered in this paper, including sitting down, standing up, level-ground walking, ascending stairs, and descending stairs. The identification depends on the angle information of the hip, knee, and ankle joints. A deep locomotion mode identification model (DLMIM) based on the long short-term memory (LSTM) architecture is proposed in this paper for exploiting the angle data. We conducted experiments to verify the effectiveness of the proposed method. Experimental results show that the DLMIM is capable of learning the inherent characteristics of joint angles and achieves more accurate identification than the other models. The last experiment demonstrates that the DLMIM can recognize transitions between different locomotion modes in time and that real-time performance varies with each individual.
1. Introduction
Lower extremity exoskeleton robots have drawn increasing attention and developed rapidly in recent decades. They can be used for rehabilitation training in hospitals, walking assistance in daily life, and carrying loads over unstructured terrain such as forests and disaster areas [1, 2]. However, at present, only a handful of robots have been successfully commercialized, such as ReWalk, Ekso, Indego, and HAL. Such exoskeleton robots are generally bulky and expensive, making them unaffordable for the general population. If medical institutions buy exoskeleton robots, they must invest considerable manpower and material resources in maintenance, and users need to make a special trip to the hospital for rehabilitation training, which makes convenient service difficult. Therefore, we intend to propose a lower extremity exoskeleton robot that is suitable for sharing. Realizing this idea raises one key problem: different wearers exhibit different gaits in each locomotion mode. Solving this problem is the key to achieving convenience.
Each locomotion mode has its unique characteristics. Five locomotion modes that we usually encounter in daily life are considered, i.e., sitting down (SD), standing up (SU), level-ground walking (LW), ascending stairs (AS), and descending stairs (DS), as shown in Figure 1. A task is analyzed based on kinematic and biological information and divided into a number of phases according to specific motion intentions like “swing the leg” or “lift the body.” For each task, a sequence of phases is transformed into the motion for the humanoid robot [3–5]. It is necessary for the control system to identify different locomotion modes and deliver the corresponding assistance. In different locomotion modes, the angle of each joint is different. In order to realize the sitting-down function, the exoskeleton robot should have at least flexion/extension degrees of freedom at the hip, knee, and ankle joints. In a study of sitting data from multiple groups of healthy people, the ranges of hip, knee, and ankle joint motion were (0°–120°), (0°–100°), and (−7°–20°), respectively. Walking can be divided into ascending stairs, descending stairs, level walking, and turning. Among these, hip and knee flexion/extension during stepping are the main contributors to body advancement. At the same time, in the double-support phase, plantar flexion of the trailing ankle also plays an important role in shifting the center of gravity. Therefore, considering the turning function, the exoskeleton robot should have at least four degrees of freedom: hip, knee, and ankle flexion/extension and hip rotation. Tests of the ranges of motion of various groups of healthy people during walking give (−18°–28°), (0°–66°), and (−7°–18°). In general, the ranges of hip, knee, and ankle flexion/extension are (−18°–120°), (0°–100°), and (−7°–20°).
Therefore, this paper makes full use of the hip, knee, and ankle joint angles and investigates a more advanced technique based on context modeling to identify locomotion modes, called the deep locomotion mode identification model (DLMIM). The DLMIM uses only joint angles as its input vector, so it greatly reduces the need to install extra sensors. The whole exoskeleton robot is thus simplified and can be applied to different wearers, making it possible to realize a shared exoskeleton robot.
(a) Sitting down
(b) Standing up
(c) Level-ground walking
(d) Ascending stairs
(e) Descending stairs
The remainder of this paper is organized as follows. Section 2 is the related works. Section 3 introduces the structure of exoskeleton robot and gait data collection. Section 4 presents data processing and establishes the DLMIM. Section 5 reports and analyzes experimental results. Conclusions are drawn in the final section.
2. Related Works
The recognition of locomotion mode is a key technology of exoskeleton robots. The electromyography (EMG) signal is one of the most important biological signals for locomotion mode identification. Kim et al. converted acquired EMG signals into six time-domain features and used a transformed correlation feature analysis to recognize different locomotion modes. Joshi et al. collected EMG data from seven leg muscles while able-bodied subjects walked on the floor, ascended stairs, and performed the transition between them; a spectrogram-based approach with prior knowledge was then used to classify these two locomotion modes and their transition. Ground reaction force (GRF) is another frequently used signal, for it can be collected easily and stably [10–12]. GRF is usually merged with other signals as input features, such as EMG, joint angles, and inertial measurement units (IMU). As for locomotion mode identification methods, machine learning methods are usually employed, such as principal component analysis (PCA), linear discriminant analysis (LDA), support vector machine (SVM) [10–13], and dynamic Bayesian networks. Yuan et al. divided five locomotion modes into static modes and dynamic modes by the variation of the relative hip angles. The former were further classified into sitting and standing still according to the absolute hip angle, while a fuzzy logic-based method was proposed for the latter.
In a word, great efforts have been made in locomotion mode identification. However, there are still limitations and challenges. First, as the above analysis shows, EMG is a common and important biological signal in locomotion mode identification, but its electrodes have to be firmly attached to human skin, which makes it inconvenient in practical application. Besides, shifting of the electrodes and sweating skin may degrade the quality of the collected data. Second, GRF is popular for locomotion mode identification because it can be measured directly by various kinds of pressure sensors. However, it becomes invalid while walking on rugged terrain. In addition, the life span of pressure sensors is limited due to constant pressure. Third, there are several widely used methods for classification problems, such as LDA, Bayesian networks, SVM, boosting, C4.5 decision trees, random forests, and neural networks. But most of these models focus on extracting features from each moment, while characteristics based on a period of time may be better and more visible, according to the observation from Figure 1. Thus, a sensor fusion strategy is usually used to improve their recognition performance. However, Yuan et al. pointed out that a better identification model should meet four different requirements, one significant factor being that only minimal sensors should be embedded into the mechanism.
3. The Structure of Exoskeleton Robot and Gait Data Collection
3.1. Structure of the Exoskeleton Robot
In this paper, the SIAT exoskeleton robot is studied. It was independently developed by the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. Its mechanical structure is shown in Figure 2. Similar to humans, the SIAT exoskeleton robot has hip joints, knee joints, and ankle joints, totaling ten joint degrees of freedom (DoF).
In normal locomotion, hip abduction/adduction (A/A) moves the center of body gravity in the lateral direction. The flexion/extension (F/E) of the hips, knees, and ankles mainly moves the body forward in the sagittal plane. The distribution of DoFs and the actuator types of the joints are shown in Table 1. Additionally, the lengths of the thighs and shanks of the SIAT exoskeleton robot are adjustable so as to fit wearers of 150 cm–185 cm in height; the device thus structurally satisfies the requirements of a shared exoskeleton robot. Its mechanical structure is shown in Figure 3.
The SIAT exoskeleton robot is designed for people who have difficulties in walking. It provides gait assistance at the hip and knee joints in the sagittal plane, while the ankle joint behaves passively with a spring. Besides the power-assisted mode, it can also work in a zero-torque mode to collect gait data.
In different locomotion modes, the postures of the leg segments carry very important information. Thus, six encoders are installed at the left hip F/E (notated lh), right hip F/E (rh), left knee F/E (lk), right knee F/E (rk), left ankle F/E (la), and right ankle F/E (ra) to collect angle information. These encoders are produced by Shenzhen PONREED Technology Co. Ltd. (model WDG-AM23-360). They measure rotation angles ranging from 0° to 360° with 12-bit resolution and a precision of 0.087°. The schematic of the whole data collection system is presented in Figure 4.
3.2. Experiment Protocol
To evaluate the performance of the proposed DLMIM, twenty-two male subjects with no physical or cognitive abnormalities were recruited, and all of them provided written informed consent. The subjects have an average height of 172 ± 3 cm and an average age of 25 ± 2 years.
Before wearing the exoskeleton robot, the lengths of the exoskeleton legs were adjusted to the subject’s shank and thigh, so that the joint axes of the exoskeleton robot aligned with those of the subject. Then, the subject wore the exoskeleton robot without actuation and performed random movements to find a comfortable wearing pattern. The observations of the encoders were acquired and saved on the upper computer. Before the experiments, subjects were divided into two groups randomly: the first group contains eighteen subjects, whose experimental data would be used for training the DLMIM; the second group contains the remaining four subjects, whose gait data would help to fully test the performance of the DLMIM.
We carried out three experiments to validate the proposed DLMIM. In the first experiment, subjects were asked to perform the five locomotion modes for five trials, as shown in Figure 2. During each trial, subjects started from standing or sitting still for three seconds and then finished the motion in their own comfortable way. At last, the subjects stopped and stood or sat still for several seconds. The joint angle data of all the trials were recorded for building and testing the DLMIM. In the second experiment, several common machine learning models were established based on the same data acquired in the first experiment, namely SVM, back propagation (BP) neural network, and extreme gradient boosting (XGBoost); comparisons were then made among these models. In the last experiment, the subjects in the second group were asked to perform planned routes for two trials. The first route was “standing still, sitting down, standing up, level-ground walking, ascending stairs, level-ground walking, and standing still,” and the second was “standing still, sitting down, standing up, level-ground walking, descending stairs, level-ground walking, and standing still.” These two planned routes were used for testing the real-time performance of the DLMIM. The subjects started from standing still, finished each route continuously, and finally stopped and stood still.
4. Deep Locomotion Mode Identification Model
4.1. Gait Data Processing
Considering that measurement noises may cause small fluctuations, the joint angle data is first processed by a mean filter with a window length of five:

$$\tilde{\theta}_j(t) = \frac{1}{5} \sum_{k=t-4}^{t} \theta_j(k)$$

where $\theta_j(t)$ is the joint angle at time $t$ before filtering and $\tilde{\theta}_j(t)$ is the corresponding joint angle after filtering. Here, $j \in \{lh, rh, lk, rk, la, ra\}$ denotes one of the six joints and $t$ is the tag of a window.
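A minimal sketch of such a moving-average filter (function name and the shrinking-window treatment of the first few samples are our own illustrative choices, not taken from the paper):

```python
def mean_filter(angles, window=5):
    """Smooth a 1-D joint-angle sequence with a trailing moving average.

    Each output sample averages the current and previous (window - 1)
    samples; the first few samples use a shrunken window.
    """
    out = []
    for t in range(len(angles)):
        lo = max(0, t - window + 1)          # start of the trailing window
        out.append(sum(angles[lo:t + 1]) / (t + 1 - lo))
    return out
```

Applied to a raw encoder stream, single-sample spikes are spread across the window and thus damped.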
Thus, the features of the DLMIM are composed of the six joint angles:

$$x(t) = \left[\tilde{\theta}_{lh}(t), \tilde{\theta}_{rh}(t), \tilde{\theta}_{lk}(t), \tilde{\theta}_{rk}(t), \tilde{\theta}_{la}(t), \tilde{\theta}_{ra}(t)\right]^{T}$$
Then, we normalize the data. When the input vectors have a wide range of values, a neural network may be adversely affected and produce unsatisfactory results. Therefore, we use the following formula to restrict the input vector elements to the range of [−1, 1]:

$$y_i = (y_{\max} - y_{\min}) \, \frac{x_i - x_{\min}}{x_{\max} - x_{\min}} + y_{\min}$$

where $x_i$ is the $i$th element of the input vector, and $y_{\max}$ and $y_{\min}$ are the upper and lower limits of the normalization range, respectively. In this paper, we set $y_{\max} = 1$ and $y_{\min} = -1$; $x_{\max}$ and $x_{\min}$ are the maximal and minimal values of the whole input vector set.
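This min-max rescaling can be sketched as follows (a hedged illustration; the function name is ours):

```python
import numpy as np

def normalize(x, x_min, x_max, y_min=-1.0, y_max=1.0):
    """Linearly rescale values from [x_min, x_max] into [y_min, y_max]."""
    x = np.asarray(x, dtype=float)
    return (y_max - y_min) * (x - x_min) / (x_max - x_min) + y_min
```

With `x_min`/`x_max` taken over the whole input set, every rescaled element falls in [−1, 1].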
Next, we split the gait data of the same locomotion mode with the sliding window method. The main idea is shown in Figure 5. Each time, we move the sliding window forward by one sampling point to obtain a new training sample. In this paper, the fixed length of the sliding window is set to 50, i.e., the time step of the DLMIM is 50.
As for the output, we use a “one-hot” vector to represent the true distribution, i.e., one locomotion mode is denoted as a vector whose dimensions are all “0” except for one “1.” Therefore, the five locomotion modes are denoted as follows:

$$\mathrm{SD} = [1,0,0,0,0]^{T}, \quad \mathrm{SU} = [0,1,0,0,0]^{T}, \quad \mathrm{LW} = [0,0,1,0,0]^{T}, \quad \mathrm{AS} = [0,0,0,1,0]^{T}, \quad \mathrm{DS} = [0,0,0,0,1]^{T}$$
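The sliding-window segmentation and one-hot targets can be sketched together (function names and the SD/SU/LW/AS/DS index order are illustrative assumptions):

```python
import numpy as np

def make_samples(gait, window=50):
    """Split a (T, 6) sequence of normalized joint angles into
    overlapping (window, 6) samples, advancing one sampling point
    per sample, as in the sliding window method."""
    return np.stack([gait[i:i + window]
                     for i in range(len(gait) - window + 1)])

def one_hot(mode, n_modes=5):
    """Encode a locomotion-mode index (assumed order:
    0=SD, 1=SU, 2=LW, 3=AS, 4=DS) as a one-hot target vector."""
    v = np.zeros(n_modes)
    v[mode] = 1.0
    return v
```

A 60-sample recording with a window of 50 thus yields 11 overlapping training samples.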
4.2. LSTM-Based Deep Locomotion Mode Identification Model
To learn the inherent characteristics of joint angles from the perspective of time, a deep locomotion mode identification model based on the long short-term memory (LSTM) architecture is proposed. LSTM, as originally proposed, contained only input and output gates. Later, forget gates were introduced to allow the memory cells to reset themselves whenever the network needs to forget past inputs. Thus, each memory block now consists of input, output, and forget gate units, and one or more self-connected memory cells. The basic architecture of an LSTM memory block with one memory cell is shown in Figure 6. The overall effect of the gate units is that the LSTM memory cells can store and access information over long periods of time and thus successfully avoid the vanishing gradient problem.
From Figure 6, $x_t$ and $h_t$ are the input and output vectors of the LSTM memory block at time $t$. Here, we can compute the values of the input gate $i_t$, the candidate value for the states of the memory cells $\tilde{c}_t$, the activation of the memory cells’ forget gate $f_t$, and the values of the output gate $o_t$:

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$$
$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$$
$$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$$
$$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$$

where $\sigma$ is the sigmoid function; $W_i, W_c, W_f, W_o$ and $U_i, U_c, U_f, U_o$ are weight matrices; $b_i, b_c, b_f, b_o$ are bias vectors.
Then, we can compute $c_t$, the memory cells’ new state at time $t$:

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$
Therefore, the output of the memory block is as follows:

$$h_t = o_t \odot \tanh(c_t)$$
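One step of such a standard forget-gate LSTM memory block can be sketched in NumPy (a hedged illustration of the textbook equations, not the paper's TensorFlow implementation; the dict-based parameter layout is our own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM memory-block step. W, U, b are dicts keyed by gate
    name ('i', 'f', 'o', 'c') holding the input weights, recurrent
    weights, and biases."""
    i = sigmoid(W['i'] @ x + U['i'] @ h_prev + b['i'])        # input gate
    f = sigmoid(W['f'] @ x + U['f'] @ h_prev + b['f'])        # forget gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_prev + b['o'])        # output gate
    c_tilde = np.tanh(W['c'] @ x + U['c'] @ h_prev + b['c'])  # candidate state
    c = f * c_prev + i * c_tilde    # new cell state
    h = o * np.tanh(c)              # block output
    return h, c
```

With all parameters zero, every gate is 0.5, so each step halves the previous cell state; that scaling by the forget gate, rather than a repeated squashing nonlinearity, is what lets gradients survive over long spans.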
Then, “softmax” regression is used to assign probabilities to different locomotion modes based on the outputs of the memory block.
The structure of the proposed DLMIM is shown in Figure 7. We adopt a two-layer LSTM network, and each layer has 128 nodes. At every time step $t$, we get an output $h_t$ of the memory block. To take full advantage of all the $h_t$, the weighted average method is used in this paper. Thus, $\bar{h}$ is calculated as follows:

$$\bar{h} = \frac{1}{T} \sum_{t=1}^{T} h_t$$

where $T$ is the total number of time steps. In this paper, $T = 50$, for the length of the sliding window is set to 50.
Therefore, the evidence $e_k$ for locomotion mode $k$ is as follows:

$$e_k = \sum_{j} W_{k,j} \, \bar{h}_j + b_k, \quad k = 1, \dots, K$$

where $W$ is the weight matrix, $b$ is the bias vector, and $K$ is the total number of locomotion modes. In this paper, we discuss five locomotion modes, so $K = 5$. We then convert the evidence tallies into predicted probabilities $y$ using the “softmax” function:

$$y_k = \mathrm{softmax}(e)_k = \frac{\exp(e_k)}{\sum_{j=1}^{K} \exp(e_j)}$$
In order to train the DLMIM, the “cross-entropy” loss is used in this paper. It is defined as follows:

$$L = -\sum_{k=1}^{K} y'_k \log(y_k)$$

where $y$ is our predicted probability distribution and $y'$ is the true distribution.
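For a one-hot true distribution this loss reduces to the negative log-probability of the correct mode, as a short sketch shows (the clipping constant is an illustrative guard against log(0), not from the paper):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy between the one-hot true distribution y_true
    and the predicted probabilities y_pred."""
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)))
```

For a uniform 5-mode prediction, the loss for any true mode is −log(0.2) ≈ 1.609.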
Finally, the Adadelta optimization algorithm is used to reduce the loss and improve the DLMIM. Adadelta is an adaptive learning rate method widely used for training neural networks.
4.3. Performance Evaluation
To evaluate the proposed DLMIM, we need some criteria. Generally, the identification success rate (ISR) is used for evaluating the accuracy of a classifier, which is defined as follows:

$$\mathrm{ISR} = \frac{N_c}{N_t} \times 100\%$$

where $N_c$ is the number of correctly identified data and $N_t$ is the total number of testing data.
To better illustrate the identification performance and quantify the error distribution, the confusion matrix is defined as follows:

$$M = \begin{bmatrix} m_{1,1} & \cdots & m_{1,5} \\ \vdots & \ddots & \vdots \\ m_{5,1} & \cdots & m_{5,5} \end{bmatrix}$$

where each element is defined as follows:

$$m_{i,j} = \frac{n_{i,j}}{N_i} \times 100\%$$

where $n_{i,j}$ is the number of testing data in locomotion mode $i$ identified as mode $j$, and $N_i$ is the total number of testing data in locomotion mode $i$. It is obvious that the diagonal elements of the confusion matrix are the ISRs and the off-diagonal elements denote the error rates.
In order to judge whether transitions between different locomotion modes can be identified in time, we calculate the time difference between the critical moment (the moment when a new locomotion mode starts) and the identification moment (the moment when the new locomotion mode is recognized for the first time). In prior work, the critical moment is defined as the moment when the leading leg touches the ground. This definition is in line with the actual situation, for we can more reliably recognize whether the locomotion mode is level-ground walking, ascending stairs, or descending stairs at this time. But such a definition is relatively loose. This paper not only employs the above definition but also defines another, strict critical moment: the moment when the wearer begins to change the current locomotion mode. The identification delay rate (IDR) is defined as follows:

$$\mathrm{IDR} = \frac{t_i - t_c}{T_g} \times 100\%$$

where $t_i$ is the identification moment, $t_c$ is the critical moment, and $T_g$ is the average duration of a gait cycle.
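The IDR is a one-line computation; a sketch (argument names are illustrative) makes the sign convention explicit:

```python
def identification_delay_rate(t_identify, t_critical, gait_cycle):
    """IDR as a percentage of the average gait cycle: positive means
    the new mode was recognized after the critical moment, negative
    means it was recognized before it."""
    return 100.0 * (t_identify - t_critical) / gait_cycle
```

For example, recognizing a transition 0.3 s after a critical moment, with a 1 s average gait cycle, gives an IDR of 30%.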
5. Experiments Results and Analysis
5.1. Experiment 1: The Establishment of the DLMIM
Based on the open-source software library TensorFlow, we implemented the DLMIM according to the procedure described in Section 4. 20% of the randomized training samples are used as a validation set for evaluating the DLMIM during training. The number of iterations is set to 500. To avoid overfitting, training is terminated early when the accuracy on the validation set fails to improve after ten consecutive iterations. The weights of the DLMIM are initialized from the uniform distribution on [−0.01, 0.01].
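The early-stopping protocol above can be sketched framework-agnostically (the callable-based interface is our own illustrative choice, not the paper's TensorFlow code):

```python
def train_with_early_stopping(train_step, validate, max_iters=500, patience=10):
    """Run up to max_iters training iterations, stopping early when
    validation accuracy fails to improve for `patience` consecutive
    iterations. `train_step` performs one training iteration;
    `validate` returns the current validation accuracy."""
    best_acc, stale = -1.0, 0
    for _ in range(max_iters):
        train_step()
        acc = validate()
        if acc > best_acc:
            best_acc, stale = acc, 0      # improvement: reset the counter
        else:
            stale += 1                    # no improvement this iteration
            if stale >= patience:
                break                     # early termination
    return best_acc
```

With `max_iters=500` and `patience=10`, this matches the protocol stated above.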
After establishing the DLMIM offline, the testing data from the second-group subjects is fed into it. Considering the randomness of the initial weights, we generate 10 DLMIMs and calculate the mean and unbiased standard deviation of their ISRs. The confusion matrix is shown in Table 2.
As Table 2 shows, the ISRs of the five locomotion modes are all over 95% and the total ISR is up to 98.30%. It can therefore be concluded that the proposed DLMIM achieves accurate recognition. Besides, sitting down and standing up are more likely to be mistaken for each other than for the other locomotion modes. Likewise, level-ground walking, ascending stairs, and descending stairs are hardly ever identified as sitting down or standing up. These results are well explained by the gait curves. As Figure 2 shows, the gait curves of sitting down and standing up are mirror-symmetric, which may lead to confusion between them. Similarly, there are many analogous features among level-ground walking and ascending/descending stairs, so the error identification rate among them is relatively high.
5.2. Experiment 2: Comparison with Other Machine Learning Methods
As for classification tasks, SVM, BP, and XGBoost are all common machine learning methods. Thus, we establish these three models and compare their ISRs with that of the proposed DLMIM. The processed data above is fed into these three models, and the results are shown in Figures 8 and 9. The SVM model is built with the LIBSVM tool, and the XGBoost model is established with the XGBoost library. Besides, to make the comparison more persuasive, the structure of the BP network is set according to the DLMIM, e.g., it has two hidden layers with 128 nodes each.
Figure 8 shows that the SVM, BP, and XGBoost models fail to accurately recognize the sitting-down and standing-up locomotion modes. As analyzed above, the gait curves of these two locomotion modes have left-right mirror symmetry, so their characteristics are similar from the perspective of a single time point, which brings about the high error identification rate of these methods. However, the ISRs of level-ground walking, ascending stairs, and descending stairs are relatively higher.
The total accuracy of these four machine learning models and of each locomotion mode is shown in Figure 9. The ISR of the proposed DLMIM is almost 100%, while most results of the SVM, BP, and XGBoost models are below 80%, especially the ISRs of sitting down and standing up. It is thus clear that the proposed DLMIM can fully discover the features of different locomotion modes and achieve accurate recognition results, so its identification performance is much better than that of the other common machine learning models. Besides, SVM, BP, and XGBoost are all more likely to mistake sitting down for standing up, which is similar to the proposed DLMIM.
5.3. Experiment 3: Real-Time Performance Evaluation
According to the planned routes in Section 3.2, the paper takes six transitions into account: SD to SU, SU to LW, LW to AS, AS to LW, LW to DS, and DS to LW. The IDRs of these transitions are shown in Table 3.
As Table 3 shows, the proposed DLMIM is able to recognize transitions between different locomotion modes before the leading leg contacts the ground. Results under the strict definition show that the identification delay between level-ground walking and ascending/descending stairs is relatively large. This may be accounted for by the similarity of the gait curves of level-ground walking and ascending/descending stairs, so the DLMIM needs more gait data to identify the locomotion mode correctly. Besides, the transitions from ascending stairs to level-ground walking and from descending stairs to level-ground walking behave similarly, as both of their IDRs are approximately 30%. There is a relatively small IDR from sitting down to standing up, which indicates that the proposed DLMIM is good at finding the distinguishing characteristics of these two locomotion modes and obtains excellent real-time results.
Furthermore, the unbiased standard deviation estimate of the IDR is relatively large, which illustrates that real-time performance varies from person to person. For example, during the transition from sitting down to standing up, there is no delay for some wearers, while the IDR of other wearers may be up to 23.08%. Meanwhile, experimental results of the transition from standing up to level-ground walking vary from 1.93% to 56.43%.
6. Conclusions
In this paper, we propose an adjustable lower extremity exoskeleton robot combined with the sharing economy concept. An adjustable structure is designed for the thigh and shank sections, so that the exoskeleton robot fits wearers with heights of 150–185 cm. For the control method, we have proposed an LSTM-based deep locomotion mode identification model to recognize five common locomotion modes: sitting down, standing up, level-ground walking, ascending stairs, and descending stairs. In order to reduce the number of sensors installed on exoskeleton robots, the proposed DLMIM uses only the joint angles from encoders, which are often embedded on the exoskeleton legs. Experimental results show that the DLMIM is able to mine the inherent characteristics of gait curves and achieve satisfactory identification accuracy and real-time performance. Therefore, with the DLMIM, we may avoid installing additional sensors on lower extremity exoskeleton robots for locomotion identification and reduce the difficulty and cost of maintenance, while at the same time enhancing suitability for different wearers. To sum up, the sharing economy can be integrated into the lower extremity exoskeleton robot proposed in this paper, so that the robot can provide convenient service for people who have difficulties in walking.
Although the proposed DLMIM can identify different locomotion modes with high accuracy and low latency, there are some limitations to our current study. First, the DLMIM was trained and tested only with healthy subjects, while disabled and elderly people may generate different locomotion features. Second, the experiments were all conducted in the zero-torque mode of the SIAT exoskeleton robot, but exoskeleton robots will more often work in the power-assisted mode. Therefore, related research will continue.
Data Availability
The joint angle data of the trials used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The work described in this paper is partially supported by the National Key Research and Development Program of China (2017YFB1302303) and the Shenzhen Fundamental Research and Discipline Layout Project (JCYJ20150925163244742).
- T. Yan, M. Cempini, C. M. Oddo, and N. Vitiello, “Review of assistive strategies in powered lower-limb orthoses and exoskeletons,” Robotics and Autonomous Systems, vol. 64, pp. 120–136, 2015.
- D. Novak and R. Riener, “A survey of sensor fusion methods in wearable robotics,” Robotics and Autonomous Systems, vol. 73, pp. 155–170, 2015.
- H. Kawamoto and Y. Sankai, “Power assist method based on phase sequence and muscle force condition for HAL,” Advanced Robotics, vol. 19, no. 7, pp. 717–734, 2005.
- H. Imai, M. Nozawa, Y. Kawamura, and Y. Sankai, “Human motion oriented control method for humanoid robot,” in Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication, pp. 221–226, Berlin, Germany, September 2002.
- Y. Kawamura and Y. Sankai, “Humanoid control method based on human knack for human care service,” in IEEE International Conference on Systems, Man and Cybernetics, Yasmine Hammamet, Tunisia, October 2002.
- R. Riener and T. Fuhr, “Patient-driven control of FES-supported standing up: a simulation study,” IEEE Transactions on Rehabilitation Engineering, vol. 6, no. 2, pp. 113–124, 1998.
- D. Liu, M. Li, and L. Shen, “Experimental study on walking gait of normal young people,” Journal of University of Shanghai for Science and Technology, vol. 1, pp. 67–70, 2008.
- D.-H. Kim, C.-Y. Cho, and J. Ryu, “Real-time locomotion mode recognition employing correlation feature analysis using EMG pattern,” ETRI Journal, vol. 36, no. 1, pp. 99–105, 2014.
- D. Joshi, B. H. Nakamura, and M. E. Hahn, “High energy spectrogram with integrated prior knowledge for EMG-based locomotion classification,” Medical Engineering & Physics, vol. 37, no. 5, pp. 518–524, 2015.
- Z. Peng, C. Cao, J. Huang, and W. Pan, “Human moving pattern recognition toward channel number reduction based on multipressure sensor network,” International Journal of Distributed Sensor Networks, vol. 9, no. 11, 2013.
- Y. Long, Z. J. Du, W. D. Wang et al., “PSO-SVM-based online locomotion mode identification for rehabilitation robotic exoskeletons,” Sensors, vol. 16, no. 9, article 1408, 2016.
- B. Shen, J. Li, F. Bai, and C.-M. Chew, “Motion intent recognition for control of a lower extremity assistive device (lead),” in 2013 IEEE International Conference on Mechatronics and Automation, pp. 926–931, Takamatsu, Japan, August 2013.
- H. Huang, F. Zhang, L. J. Hargrove, Z. Dou, D. R. Rogers, and K. B. Englehart, “Continuous locomotion-mode identification for prosthetic legs based on neuromuscular–mechanical fusion,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 10, pp. 2867–2875, 2011.
- B. Chen, E. Zheng, and Q. Wang, “A locomotion intent prediction system based on multi-sensor fusion,” Sensors, vol. 14, no. 7, pp. 12349–12369, 2014.
- A. J. Young, A. M. Simon, N. P. Fey, and L. J. Hargrove, “Intent recognition in a powered lower limb prosthesis using time history information,” Annals of Biomedical Engineering, vol. 42, no. 3, pp. 631–641, 2014.
- K. Yuan, A. Parri, T. Yan et al., “A realtime locomotion mode recognition method for an active pelvis orthosis,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6196–6201, Hamburg, Germany, September-October 2015.
- E. Zheng, L. Wang, K. Wei, and Q. Wang, “A noncontact capacitive sensing system for recognizing locomotion modes of transtibial amputees,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 12, pp. 2911–2920, 2014.
- E. Zheng, N. Vitiello, and Q. Wang, “Gait phase detection based on non-contact capacitive sensing: preliminary results,” in 2015 IEEE International Conference on Rehabilitation Robotics (ICORR), pp. 43–48, Singapore, August 2015.
- K. Yuan, Q. Wang, and L. Wang, “Fuzzy-logic-based terrain identification with multisensor fusion for transtibial amputees,” IEEE/ASME Transactions on Mechatronics, vol. 20, no. 2, pp. 618–630, 2015.
- J.-Y. Jung, W. Heo, H. Yang, and H. Park, “A neural network-based gait phase classification method using sensors equipped on lower limb exoskeleton robots,” Sensors, vol. 15, no. 11, pp. 27738–27759, 2015.
- M. D. Zeiler, “ADADELTA: an adaptive learning rate method,” 2012 http://arxiv.org/abs/1212.5701.
- C.-C. Chang and C.-J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1–27, 2011.
- D.-X. Liu, W. Du, X. Wu, C. Wang, and Y. Qiao, “Deep rehabilitation gait learning for modeling knee joints of lower-limb exoskeleton,” in 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1058–1063, Qingdao, China, December 2016.
- T. Chen and C. Guestrin, “XGBoost: a scalable tree boosting system,” in Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794, San Francisco, California, USA, August 2016.
Copyright © 2018 Can Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.