Mathematical Problems in Engineering, Volume 2021, Article ID 5564235
Special Issue: Advanced Intelligent Fuzzy Systems Modeling Technologies for Smart Cities
Research Article | Open Access

Min Shi, Chengyi Yang, Dalu Zhang, "A Novel Human-Machine Collaboration Model of an Ankle Joint Rehabilitation Robot Driven by EEG Signals", Mathematical Problems in Engineering, vol. 2021, Article ID 5564235, 8 pages, 2021.

A Novel Human-Machine Collaboration Model of an Ankle Joint Rehabilitation Robot Driven by EEG Signals

Academic Editor: Yi-Zhang Jiang
Received: 09 Feb 2021
Revised: 25 Feb 2021
Accepted: 02 Mar 2021
Published: 12 Mar 2021


As populations age, movement disorders become increasingly common among the elderly; disorders of the ankle joint in particular seriously affect daily life. Rehabilitation robots are of great significance for improving the efficiency of rehabilitation, ensuring its quality, and reducing the labor intensity of therapists. As auxiliary treatment tools, rehabilitation robots should offer rich and effective motion modes, adaptable to patients with different conditions and in different recovery periods. To improve the accuracy of human-computer interaction in ankle joint rehabilitation robots (AJRR), this study proposes a human-machine collaboration model for an EEG-driven AJRR. The model operates at two levels: (1) it establishes the connection between EEG signals and intention so as to identify the intention. In the recognition process, features are first extracted from the preprocessed EEG: a convolutional neural network (CNN) extracts deep features of the EEG signal, and a support vector machine (SVM) classifies these deep features, thereby realizing intent recognition. (2) The result of intention recognition is input to the human-computer interaction (HCI) system, which controls the movement of the rehabilitation robot after receiving the instruction. This approach truly realizes patient-oriented rehabilitation training. Experiments show that the proposed human-machine collaboration model achieves higher intention-recognition accuracy, thereby increasing satisfaction with the AJRR.

1. Introduction

The rapid rise and popularization of robotics technology mean that robots are no longer limited to manufacturing but are also developing towards the field of medical services [1, 2]. At present, many rehabilitation medical institutions abroad use rehabilitation robots to provide rehabilitation training services for patients [3]. The lower limb rehabilitation robot has a high degree of automation. Rehabilitation robots can take over rehabilitation training from physicians, freeing therapists from heavy physical labor so that they can focus on formulating rehabilitation training programs. Moreover, a rehabilitation robot can realize remote rehabilitation treatment, obtain rehabilitation data in time, and improve the quality and efficiency of rehabilitation. Rehabilitation robots can therefore alleviate both the shortage of rehabilitation personnel and the high cost of rehabilitation.

AJRR are mainly used to assist patients with ankle joint injuries in intelligent rehabilitation training. At the end of the 20th century, rehabilitation robots began to enter the practical stage and had already appeared in a few hospitals. Current research on rehabilitation robot technology mainly covers human prostheses, surgical robots, rehabilitation wheelchairs, rehabilitation training robots, and so forth. Rehabilitation medicine research shows that appropriate and reasonable exercise is an essential part of the rehabilitation process after joint injury [4]. The Mega-Ankle robot developed at Rutgers, the State University of New Jersey, is a 6-DOF parallel AJRR [5, 6], and clinical results with this robot have been good [7]. The pneumatic soft parallel robot [8] of the University of Auckland has no spherical joints and is characterized by high sensitivity, compact structure, wear resistance, portability, and light weight. Wearable ankle rehabilitation robots [9, 10] are also becoming more popular: multiangle movement and rotation can fully exercise the ankle joint, which is more conducive to the patient's return to a normal state. However, their control systems do not realize active ankle joint exercise for muscle strength recovery; patients participate in rehabilitation training passively, with little active involvement.

High-quality rehabilitation training requires the active participation of patients. To improve the quality of rehabilitation training, this study proposes a human-machine collaboration model of an AJRR driven by EEG signals. The specific work is as follows:
(1) To establish a precise connection between EEG and intent, a deep-feature method is used to model and analyze EEG signals. The relationship between the EEG signal and the intention is established to realize identification of the intention.
(2) A human-computer interaction system was developed. The result of intention recognition is input to the interactive system to control the rehabilitation robot as it performs rehabilitation training for the patient.
(3) Experiments verify the effectiveness of the human-machine collaboration model designed in this paper. Since the intention is accurately recognized, the patient's training instructions can be accurately sent to the rehabilitation robot. Thus, patient-oriented ankle rehabilitation training is truly realized.

2.1. Ankle Joint Rehabilitation Robot and Its Motion Control Theory

At present, AJRR can be divided into the traditional pedestal type [11, 12], the parallel type [13–16], and the wearable type [17–21] according to structure and installation method. The overall structure of the traditional pedestal AJRR is complex, its use and maintenance costs are high, the equipment is large, and the training method is single. The structure of the parallel ankle joint robot is simple, but its control is more complicated and its control accuracy is not high. The wearable AJRR can assist rehabilitation treatment by detecting the wearer's movement intention. It makes up for the shortcomings of the traditional rehabilitation model and the site limitations of large-scale rehabilitation equipment.

The development of robot motion control theory can be summarized in three stages: classical control theory, modern control theory, and advanced control theory.
(1) Classical control theory takes the Laplace transform as its mathematical basis and mainly analyzes the motion characteristics of a system in the time and frequency domains. The classical control theory represented by PID is the best known. However, it is difficult to obtain a satisfactory control effect for systems with nonlinear and strongly coupled multivariable characteristics, and determining the controller parameters depends on manual tuning.
(2) Modern control theory is based on the state-space method. With optimal control in the time domain as its goal, the control system is analyzed and designed by describing the internal state variables of the system.
(3) Advanced control theory is used for tasks where conventional control strategies cannot achieve satisfactory results. It has great advantages in improving the stability and robustness of the control system.

2.2. CNN

In the process of recognizing the patient's intention, deep feature extraction must be performed on the collected EEG. CNN has stronger advantages here than traditional machine learning algorithms [22–25], so a CNN [26–28] is used for feature extraction. A CNN is a multilayer supervised-learning neural network. Each layer of a CNN consists of several two-dimensional planes, and each plane is made up of multiple independent neurons. Compared with ordinary neural networks, a CNN includes a feature extractor, which mainly comprises a convolutional layer and a pooling layer. The structure of the CNN is shown in Figure 1.

The CNN's architecture is relatively fixed and consists of three parts. The first part is the input layer; the combination of multiple convolutional layers and pooling layers forms the second part; and the third part is a fully connected classifier. In the structure shown in Figure 1, after the input image is convolved, 3 feature maps are generated in the C1 layer; these feature maps are then simplified by the pooling operation and passed through the sigmoid function to obtain the S2 layer, whose number of maps is the same as that of the C1 layer. The C3 layer is acquired by a further convolution of the maps, and the S4 layer is generated in the same way as the S2 layer. Finally, a fully connected layer and a Softmax classifier produce the output result.

The training process of a CNN has two stages: forward propagation of the signal and backward propagation of the error. In the first stage, the input layer receives the raw information, which undergoes step-by-step transformation and is finally transmitted to the output layer. In the second stage, the weights are adjusted using the error back-propagation algorithm.

2.3. Human-Machine Collaboration Model of AJRR Driven by EEG

Based on the results of intent recognition, this study designed an EEG-driven AJRR human-machine collaboration model. The subject sits on a chair and puts one foot on the AJRR. The recognized intention is converted into instructions sent to the robot, and the robot generates corresponding angle changes according to the received control instructions, finally completing the rehabilitation training. The motor-imagery-based brain-controlled AJRR system is divided into software and hardware. The hardware part comprises a signal acquisition module and an ankle robot control module; the software is divided into a signal processing module and a rehabilitation training software module. The system structure is shown in Figure 2.

The working principle of the system is as follows. The signal acquisition module is responsible for collecting, filtering, and amplifying the subject's EEG signal and sends the collected data to the rehabilitation training software in real time. The training data collection function in the rehabilitation training software saves the data and uses it as training samples. The rehabilitation training software calls the signal processing module to analyze the EEG signals collected in real time and return the processing results, and then sends corresponding instructions to the robot control module based on those results. The robot first parses the received instructions into angle information for controlling its rotation. Completion of the subject's motor imagery task thus corresponds to motion of the ankle robot, realizing autonomous rehabilitation driven by the patient's brain.

The movement path of the robot's structural parts is set according to the specific requirements of the rehabilitation training task. Each designed motion path corresponds to a control instruction, and the subject performs the corresponding motor imagery task according to the instruction to be sent. During the experiment, once the subject's motor imagery signal has been processed and the result is consistent with the preset instruction, the rehabilitation training software sends the corresponding instruction. After the robot receives a control instruction from the rehabilitation training software, the angle it needs to rotate is first set, and the robot then drives the ankle through the corresponding rehabilitation training. The subjects imagine the corresponding motion control intentions according to the preset instruction sequence in the training task, and the signal processing classification results are compared with that instruction sequence. If the two are consistent, the instruction is sent to the robot control module; if they are inconsistent, the subject performs the motor imagery task again.
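The command-gating step described above, in which a decoded motor-imagery result is forwarded to the robot only if it matches the instruction the training task expects, can be sketched as follows. All names and the example command labels here are illustrative, not from the paper.

```python
def gate_commands(decoded, expected):
    """Return the list of commands actually forwarded to the robot.

    decoded  -- classifier outputs, one per motor imagery attempt
    expected -- the preset instruction sequence of the training task
    """
    sent = []
    it = iter(decoded)
    for target in expected:
        for result in it:            # keep consuming attempts until one matches
            if result == target:
                sent.append(target)  # consistent: forward the instruction
                break
        else:
            break                    # ran out of attempts for this instruction
    return sent

# Example: the second instruction needs two attempts before it matches.
print(gate_commands(["dorsiflex", "invert", "plantarflex"],
                    ["dorsiflex", "plantarflex"]))
# -> ['dorsiflex', 'plantarflex']
```

In a real session the inner loop would block on the next classified trial rather than iterate over a fixed list; the structure of the match-or-retry logic is the same.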

3. Patient Intention Recognition Based on EEG Signals

3.1. Intent Recognition Process

Different conscious activities produce corresponding changes in EEG signals, and this effect is often used to recognize human intentions [29]. Most existing human-computer interaction systems make use of the analysis results of EEG signals: corresponding results are obtained by processing the EEG, and these results are then converted into control instructions for the equipment. This research mainly uses motor imagery EEG. This type of EEG can be collected through subjective awareness induction and can directly describe the entire process of subjective awareness from formation to execution [30]. From this background it can be concluded that the application of motor imagery EEG signals in medicine is extremely valuable: the technology can assist training with rehabilitation robots to improve patients' enthusiasm for training, induce repair or reconstruction of damaged motor conduction pathways, and promote recovery of patients' limb function. The process of intent recognition is given in Figure 3.

In this paper, the EEG generated by the subject’s motion imagination is used as the input of the interactive system. The EEG signals of different modes are generated by performing various motion imaging tasks and then converted into external actions. The final application of the interactive system is to control the external devices, and the core part of the control is the EEG signal processing algorithm. The signal processing process is divided into three parts, namely, signal preprocessing, feature extraction, and classification. The EEG signal extracted from the brain scalp is analyzed by the signal processing module and converted into control commands for external devices.
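The three-stage pipeline just described (preprocessing, feature extraction, classification) can be sketched as a simple composition. The stage bodies below are deliberately trivial placeholders; the paper's actual stages are the band-pass filter, CNN features, and SVM described in the following subsections.

```python
# Minimal sketch of the preprocess -> extract -> classify pipeline.
# Stage implementations are placeholders, not the paper's methods.
import numpy as np

def preprocess(eeg):
    # placeholder: remove the per-channel DC offset
    return eeg - eeg.mean(axis=1, keepdims=True)

def extract_features(eeg):
    # placeholder: per-channel signal power
    return (eeg ** 2).mean(axis=1)

def classify(features):
    # placeholder: threshold on total power
    return "move" if features.sum() > 1.0 else "rest"

def decode(eeg):
    """eeg: channels x samples array -> control label."""
    return classify(extract_features(preprocess(eeg)))

rng = np.random.default_rng(0)
print(decode(rng.normal(0, 2.0, size=(4, 128))))  # high-power input
print(decode(np.zeros((4, 128))))                  # silent input
```

The point of the sketch is the interface: each stage consumes the previous stage's output, so any stage can be swapped (e.g., placeholder features for CNN features) without touching the others.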

3.2. Collection and Preprocessing of EEG

Ag/AgCl electrodes are used to record the EEG signal on the scalp. The collected data mainly comprise four kinds of motor imagery data for the hands and feet. During collection, the subject operated according to on-screen prompts. Each trial lasts 11 seconds: a countdown of 3, 2, 1 appears on the computer screen, and then data collection officially starts. The subjects imagined controlling their hands and feet according to four arrows in different directions that appeared on the screen; each arrow appears and disappears after 2 seconds, after which the subjects performed motor imagery on their own. After 3 seconds, the word "End" appears, ending the motor imagery, and the subject rests for 3 seconds. Following this collection process, a total of 500 data sets were collected from 10 subjects.
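The 11 s trial structure above (3 s countdown, 2 s cue, 3 s imagery, 3 s rest) maps to sample indices as follows. The 250 Hz sampling rate is an assumption for illustration; the paper does not state one.

```python
# Map the trial timeline onto [start, end) sample-index windows.
FS = 250  # Hz, assumed sampling rate (not given in the paper)

segments = [("countdown", 3), ("cue", 2), ("imagery", 3), ("rest", 3)]

start = 0
index = {}
for name, seconds in segments:
    index[name] = (start, start + seconds * FS)  # [start, end) in samples
    start += seconds * FS

print(index["imagery"])  # -> (1250, 2000)
print(start / FS)        # total trial length in seconds -> 11.0
```

Epoching the recording with the `imagery` window is what isolates the motor imagery segment that the classifier is trained on.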

Owing to technical limitations of the acquisition equipment, some noise inevitably contaminates the EEG signals during acquisition, such as scalp electromyography artifacts, eye movement and blinking artifacts, and other types of interference. Among these, the electromyographic artifacts can be filtered out directly because of their frequency characteristics; electrooculogram artifacts and power frequency interference are the main interference factors when collecting EEG signals. To facilitate subsequent signal processing, the original EEG signal needs to be preprocessed. Artifact interference can be handled by filtering: specifically, the EEG signal is band-pass filtered to retain the 10-25 Hz band.
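A minimal FFT-based band-pass sketch of the preprocessing step above, retaining only the 10-25 Hz band. This stands in for whatever filter design the authors actually used, which the paper does not specify.

```python
# Zero out spectral components outside [lo, hi] Hz, then invert the FFT.
import numpy as np

def bandpass(signal, fs, lo=10.0, hi=25.0):
    """Keep only the [lo, hi] Hz band of a 1-D real signal sampled at fs."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a 5 Hz + 15 Hz mixture keeps only its 15 Hz component.
fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 15 * t)
y = bandpass(x, fs)
```

A practical system would more likely use an IIR/FIR filter (e.g., a Butterworth band-pass) to avoid the edge effects of blockwise FFT filtering, but the frequency-selection idea is the same.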

3.3. Deep Feature Extraction and Classification

A CNN can automatically extract features from and classify input data, reducing the influence of human factors, and its advantages grow with the sample size. However, in the multiclass motor imagery EEG classification task the training sample size is small, so a CNN cannot be fully trained, which leads to overfitting and hurts the final classification effect. To improve the final recognition accuracy, the CNN is used for deep feature extraction, principal component analysis (PCA) is used to reduce the dimensionality of the extracted feature values, and an SVM is used to classify the reduced features. Figure 4 is a flowchart of data feature extraction and classification.

The process of deep feature extraction is as follows:

(1) Input layer. The input data is $X \in \mathbb{R}^{N \times M}$, where N is the number of electrode channels of the EEG signal and M is the number of features of each channel's EEG signal.

(2) Convolutional layer. n convolution kernels are used to learn the features of $X$, and n feature maps $x_i^{C}$ (i = 1, 2, ..., n) are obtained. The size of each convolution kernel is 3 × 3, so convolution fuses the spatial position of the brain electrodes with the frequency-domain information of the EEG signal. The size of each map after convolution is (N−2) × (M−2), and each map on the convolutional layer is calculated as

$x_j^{C} = f\left(X * k_j + b_j^{C}\right),$

where $x_j^{C}$ is the j-th map on the C2 layer, $k_j$ and $b_j^{C}$ are the convolution kernel and bias of the j-th map, and f is the activation function from the input layer to the convolutional layer, taken to be the sigmoid

$f(x) = \dfrac{1}{1 + e^{-x}}.$

(3) Pooling layer. Average pooling is used to downsample each map of the convolutional layer:

$x_j^{S} = f\left(\beta_j\,\mathrm{down}\!\left(x_j^{C}\right) + b_j^{S}\right),$

where $x_j^{C}$ and $x_j^{S}$ correspond to the j-th map of the convolutional layer and the pooling layer, $\beta_j$ and $b_j^{S}$ are the multiplier and bias of the j-th map on the pooling layer, and $\mathrm{down}(\cdot)$ is the downsampling function. After pooling, each side of every map becomes half of that on the convolutional layer.

(4) Fully connected layer. A mapping matrix of size 1 × 1 is used to fully connect the n maps of the pooling layer, generating n × (N−2) × (M−2)/4 neurons:

$x_j^{F} = f\left(w_j\,x_j^{S} + b_j^{F}\right),$

where $x_j^{F}$ and $b_j^{F}$ are the j-th neuron and bias of the fully connected layer and f is the sigmoid activation function defined above.

(5) Output layer. The classification result of the EEG signal is output in the output layer. The error is propagated back by the BP algorithm, thereby updating the parameters of the CNN. The value of each neuron in the output layer is

$y_i = f\left(\sum_j w_{ij}\,x_j^{F} + b_i\right),$

where $b_i$ is the bias of the i-th output neuron, $w_{ij}$ is the weight connecting the j-th neuron of the fully connected layer to the i-th neuron of the output layer, and f is the sigmoid activation function.
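The shape arithmetic in these formulas (a valid 3 × 3 convolution producing an (N−2) × (M−2) map, then 2 × 2 average pooling halving each side) can be checked numerically. Kernel weights and biases here are arbitrary illustrative values.

```python
# One convolution map followed by 2x2 average pooling, matching the
# (N-2) x (M-2) and half-size claims in the layer formulas above.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_map(x, k, b):
    """x: N x M input, k: 3x3 kernel, b: scalar bias -> (N-2) x (M-2) map."""
    N, M = x.shape
    out = np.empty((N - 2, M - 2))
    for i in range(N - 2):
        for j in range(M - 2):
            out[i, j] = (x[i:i + 3, j:j + 3] * k).sum() + b
    return sigmoid(out)

def avg_pool(x):
    """2x2 average pooling: each side of the map is halved."""
    N, M = x.shape
    return x.reshape(N // 2, 2, M // 2, 2).mean(axis=(1, 3))

x = np.arange(36, dtype=float).reshape(6, 6) / 36.0  # toy 6x6 "EEG" input
c = conv_map(x, np.full((3, 3), 0.1), b=-0.5)        # 4 x 4 map
s = avg_pool(c)                                       # 2 x 2 map
print(c.shape, s.shape)  # -> (4, 4) (2, 2)
```

With N = M = 6, each map holds (N−2)(M−2)/4 = 4 values after pooling, consistent with the n × (N−2) × (M−2)/4 neuron count of the fully connected layer.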

The classification steps are as follows.

The EEG data set D is divided into a training set Dt and a test set Dc. The feature matrix $F \in \mathbb{R}^{m \times t}$ of Dt is obtained after feature extraction through the CNN, where m is the number of samples in Dt and t is the number of features extracted by the CNN. PCA [31] is used to reduce the dimensionality of F.

F is first centralized by subtracting its mean $\bar{F}$, and the eigenvectors of the covariance matrix of the centralized features are computed. The eigenvectors whose cumulative variance contribution reaches the threshold $\theta$ are selected to form the projection matrix W. A new feature matrix

$\hat{F} = (F - \bar{F})\,W \qquad (8)$

is obtained, and $\hat{F}$ is used to complete the training of the SVM.

The classification steps for a sample set Xc in Dc are as follows:
Step 1: The features fc of Xc are extracted using the CNN.
Step 2: fc is centralized using the training-set mean: $f_c' = f_c - \bar{F}$.
Step 3: According to equation (8), fc is transformed to obtain the feature $\hat{f}_c = f_c'\,W$ in the projection space.
Step 4: $\hat{f}_c$ is input into the SVM to get the category.
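The PCA fitting and projection steps above can be sketched with an eigendecomposition of the covariance matrix. The cumulative-variance threshold θ and the toy feature matrix are illustrative; the paper does not give θ's value.

```python
# Fit PCA on training features F, keep eigenvectors up to a cumulative
# variance threshold theta, then centre and project with the same mean/W.
import numpy as np

def fit_pca(F, theta=0.95):
    """F: m x t training features -> (mean, projection matrix W)."""
    mean = F.mean(axis=0)
    cov = np.cov(F - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]            # largest variance first
    vals, vecs = vals[order], vecs[:, order]
    k = np.searchsorted(np.cumsum(vals) / vals.sum(), theta) + 1
    return mean, vecs[:, :k]                  # W has k columns

def project(fc, mean, W):
    return (fc - mean) @ W                    # centre, then project (eq. (8))

rng = np.random.default_rng(1)
# Toy features: one high-variance direction, one near-constant direction.
F = rng.normal(size=(50, 2)) @ np.array([[3.0, 0.0], [0.0, 0.1]])
mean, W = fit_pca(F, theta=0.9)
print(W.shape)  # the dominant direction alone passes theta -> (2, 1)
```

The projected features `project(fc, mean, W)` are what would be fed to the SVM; crucially, test samples are centred with the training mean, exactly as in Step 2.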

4. Simulation Experiment Analysis

4.1. Experimental Background

To verify the classification effect of the method used in this article on EEG, the comparison algorithms are support vector machine (SVM) [32], the method of reference [33], TSK [23], and ELM [34]. The experimental data were collected as described in Section 3.2. Table 1 gives the parameter settings of each algorithm.

Method          Parameter settings
SVM             Penalty coefficient C; bandwidth γ of the RBF kernel; the step sizes of C and γ are 5 and 0.01, respectively
Reference [33]  Parameter settings consistent with [33]
TSK             Number of rules
ELM             Neuron number range; regularization scale parameter
Our method      Number of convolution kernels n

The evaluation index is the Kappa coefficient. The calculation formula is as follows:

$\kappa = \dfrac{p_0 - 1/C}{1 - 1/C},$

where $p_0$ is the classification accuracy and C = 4 is the number of classes. The larger the value of Kappa, the better the performance of the method.
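The Kappa coefficient with chance level 1/C for a C-class task is a one-liner; the sketch below just makes the chance correction explicit.

```python
# Kappa coefficient: accuracy corrected for the 1/C chance level.
def kappa(p0, C=4):
    """p0: classification accuracy, C: number of classes."""
    chance = 1.0 / C
    return (p0 - chance) / (1.0 - chance)

print(kappa(1.0))   # perfect accuracy  -> 1.0
print(kappa(0.25))  # chance accuracy for 4 classes -> 0.0
```

For the four-class task here, an accuracy of 0.25 (pure guessing) maps to κ = 0, so Kappa values near 0.8, as in the tables below, correspond to accuracies well above 80%.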
4.2. Experimental Process and Discussion
4.2.1. Parameter Sensitivity Experiment

To study the sensitivity of our method to its parameter, the Kappa values obtained for different values of the parameter n are shown in Table 2. The data in Table 2 show that when n is 4, the Kappa value is the largest, indicating that the performance of the method is best at that setting. Therefore, in the subsequent experiments, the number of convolution kernels n is 4.

Table 2: Kappa values of our method for the number of convolution kernels n = 1, 2, ..., 10, and their mean.
4.2.2. Method Performance Comparison

To demonstrate the superiority of our method, EEG data from 10 subjects were randomly selected for experiments. The experimental results are shown in Table 3.

Subject         1       2       3       4       5       6       7       8       9       10      Mean
Reference [33]  0.8008  0.8032  0.7997  0.8256  0.8102  0.8005  0.7990  0.8025  0.8287  0.8110  0.8060
Our method      0.8562  0.8232  0.8642  0.8117  0.8332  0.8532  0.8422  0.8226  0.8118  0.8041  0.8322

Table 3 shows the classification effect on EEG collected from the 10 subjects. On the whole, the classification effects of SVM, TSK, and ELM are poor, with accuracy below 80%. Reference [33] showed the best effect for subjects 4, 9, and 10. In [33], wavelet packets are used to select the best EEG frequency bands; the OVR-CSP method then extracts features from each band, and finally the features are input into an SVM-BP classifier. This method can extract more accurate frequency band features, thereby improving the classification effect. The overall Kappa coefficient of our method is the highest, verifying that the method used in this study can effectively improve the classification effect.

4.2.3. Method Noise Resistance Comparison

Noise is easily mixed into the data during EEG acquisition, so methods for processing EEG data must have good noise immunity. To verify the antinoise performance of our method, we add 1%, 3%, 5%, 7%, and 9% Gaussian noise to the original data. The experimental results after adding noise are shown in Table 4; each value in the table is the average of the method's results on the data of the 10 subjects above.
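One common way to realize the "x% Gaussian noise" setup above is to add zero-mean Gaussian noise whose standard deviation is the given percentage of the signal's standard deviation; whether the paper scales noise exactly this way is an assumption.

```python
# Add zero-mean Gaussian noise at a given percentage of the signal's std.
import numpy as np

def add_noise(x, percent, rng):
    """Return x plus Gaussian noise with sigma = percent% of x.std()."""
    sigma = (percent / 100.0) * x.std()
    return x + rng.normal(0.0, sigma, size=x.shape)

rng = np.random.default_rng(42)
x = np.sin(np.linspace(0, 10 * np.pi, 1000))  # toy clean signal
noisy_versions = {p: add_noise(x, p, rng) for p in (1, 3, 5, 7, 9)}
```

Running the full pipeline on each `noisy_versions[p]` and comparing Kappa values against the clean baseline is the shape of the robustness experiment reported in Table 4.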

Noise \ Method    SVM    Reference [33]    TSK    ELM    Our method


The data in Table 4 show that, as the noise content increases, the performance of every algorithm declines; a method whose decline is gentle demonstrates good noise resistance and high stability. When the noise is within 5%, the differences in the downward trends of the methods are small. When the noise increases to 9%, the performance of SVM, reference [33], and ELM drops sharply, whereas TSK and our method decline gradually rather than falling off a cliff. Comparing TSK with our method, our method declines less, which further shows that its antinoise performance is the best among the compared methods.

5. Conclusion

Ankle dyskinesia restricts people's movement and disrupts daily life, so methods and techniques for ankle rehabilitation are very important for patients. The AJRR can assist patients in effective ankle joint rehabilitation training. This research designed a more accurate human-computer interaction model for an AJRR that can effectively control the robot according to the patient's intention, making the system's human-computer interaction more convenient and efficient. The study has two main components. One is the effective recognition of exercise intention: a convolutional neural network (CNN) extracts deep features of the EEG signal, and a support vector machine (SVM) classifies those features to identify the user's intention. The other is sending the recognition results to the rehabilitation robot as instructions: the robot moves in different directions and through different angles according to the received instructions, truly realizing patient-oriented rehabilitation training. The human-computer interaction model designed in this research can effectively identify the patient's intention and successfully control the movement of the robot, and it has good prospects for market application. However, because extraction of the deep features takes considerable time, the real-time performance of the system needs further improvement; this is the direction to be optimized in the next step of this study.

Data Availability

The labeled data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.


Acknowledgments

This work was supported by the Research Team Fund of Fuzhou University of International Studies and Trade in 2018, “Interactive Design” (no. 2018KYTD-08).


References

  1. S. Böttger, T.-C. Çallar, A. Schweikard, and E. Rückert, “Medical robotics simulation framework for application-specific optimal kinematics,” Current Directions in Biomedical Engineering, vol. 5, no. 1, pp. 145–148, 2019.
  2. D. S. Elson, K. Cleary, P. Dupont, R. Merrifield, and C. Riviere, “Medical robotics,” Annals of Biomedical Engineering, vol. 46, no. 10, pp. 1433–1436, 2018.
  3. W. Meng, Q. Liu, Z. Zhou, Q. Ai, B. Sheng, and S. Xie, “Recent development of mechanisms and control strategies for robot-assisted lower limb rehabilitation,” Mechatronics, vol. 31, no. 6, pp. 132–145, 2015.
  4. J. A. Saglia, N. G. Tsagarakis, J. S. Dai, and D. G. Caldwell, “A high-performance redundantly actuated parallel mechanism for ankle rehabilitation,” The International Journal of Robotics Research, vol. 28, no. 9, pp. 1216–1227, 2009.
  5. J. Deutsch, J. Latonio, and G. Burdea, “Post-stroke rehabilitation with the Rutgers Ankle system: a case study,” Presence, MIT Press, vol. 10, no. 4, pp. 420–435, 2001.
  6. H. Tourajizadeh and S. Manteghi, “Design and optimal control of dual-stage Stewart platform using Feedback-Linearized Quadratic Regulator,” Advanced Robotics, vol. 30, no. 2, pp. 1305–1321, 2016.
  7. C.-Y. Lin, C.-M. Tsai, P.-C. Shih et al., “Development of a novel haptic glove for improving finger dexterity in poststroke rehabilitation,” Technology & Health Care, vol. 24, no. 1, pp. S97–S103, 2016.
  8. S. Q. Xie and P. K. Jamwal, “An iterative fuzzy controller for pneumatic muscle driven rehabilitation robot,” Expert Systems with Applications, vol. 38, pp. 8128–8137, 2011.
  9. I. D. Loram and M. Lakie, “Direct measurement of human ankle stiffness during quiet standing: the intrinsic mechanical stiffness is insufficient for stability,” The Journal of Physiology, vol. 545, no. 3, pp. 1041–1053, 2002.
  10. J. C. Perez-Ibarra, A. A. G. Siqueira, M. A. Silva-Couto, T. L. De Russo, and H. I. Krebs, “Adaptive impedance control applied to robot-aided neuro-rehabilitation of the ankle,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 185–192, 2019.
  11. K. van Kammen, A. M. Boonstra, L. H. V. Van Der Woude, H. A. Reinders-Messelink, and R. Den Otter, “The combined effects of guidance force, bodyweight support and gait speed on muscle activity during able-bodied walking in the Lokomat,” Clinical Biomechanics, vol. 36, pp. 65–73, 2016.
  12. A. Mayr, E. Quirbach, A. Picelli, M. Kofler, N. Smania, and L. Saltuari, “Early robot-assisted gait retraining in nonambulatory patients with stroke: a single blind randomized controlled trial,” European Journal of Physical & Rehabilitation Medicine, vol. 54, no. 6, pp. 819–826, 2018.
  13. J. Hu, L. Liu, and D.-W. Ma, “Robust nonlinear feedback control of a chaotic permanent-magnet synchronous motor with a load torque disturbance,” Journal of the Korean Physical Society, vol. 65, no. 12, pp. 2132–2139, 2014.
  14. M. Girone, G. Burdea, M. Bouzit, V. Popescu, and J. E. Deutsch, “A stewart platform-based system for ankle telerehabilitation,” Autonomous Robots, vol. 10, no. 2, pp. 203–212, 2001.
  15. M. Girone, G. Burdea, M. Bouzit, V. Popescu, and J. E. Deutsch, “Orthopedic rehabilitation using the “Rutgers ankle” interface,” Studies in Health Technology and Informatics, vol. 70, pp. 89–95, 2000.
  16. J. Yoon, J. Ryu, and K. B. Lim, “Reconfigurable ankle rehabilitation robot for various exercises,” Journal of Robotic Systems, vol. 22, no. S1, pp. 15–33, 2006.
  17. K. W. Hollander, R. Ilg, T. G. Sugar, and D. Herring, “An efficient robotic tendon for gait assistance,” Journal of Biomechanical Engineering, vol. 128, no. 5, pp. 788–791, 2006.
  18. S. E. Irby, K. R. Kaufman, R. W. Wirta, and D. H. Sutherland, “Optimization and application of a wrap-spring clutch to a dynamic knee-ankle-foot orthosis,” IEEE Transactions on Rehabilitation Engineering, vol. 7, no. 2, pp. 130–134, 1999.
  19. K. H. Ha, S. A. Murray, and M. Goldfarb, “An approach for the cooperative control of FES with a powered exoskeleton during level walking for persons with paraplegia,” IEEE Transactions on Neural Systems & Rehabilitation Engineering, vol. 24, no. 4, pp. 455–466, 2015.
  20. P. Milia, F. De Salvo, M. Caserio et al., “Neurorehabilitation in paraplegic patients with an active powered exoskeleton (Ekso),” Digital Medicine, vol. 2, no. 4, pp. 163–169, 2016.
  21. S. Tanabe, E. Saitoh, S. Hirano et al., “Design of the wearable power-assist locomotor (WPAL) for paraplegic gait reconstruction,” Disability and Rehabilitation: Assistive Technology, vol. 8, no. 1, pp. 84–91, 2013.
  22. Y. Zhang, F.-L. Chung, and S. Wang, “Takagi-sugeno-kang fuzzy systems with dynamic rule weights,” Journal of Intelligent & Fuzzy Systems, vol. 37, no. 6, pp. 8535–8550, 2019.
  23. Y. Jiang, D. Wu, Z. Deng et al., “Seizure classification from EEG signals using transfer learning, semi-supervised learning and TSK fuzzy system,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 12, pp. 2270–2284, 2017.
  24. Y. Zhang, S. Wang, K. Xia, Y. Jiang, and P. Qian, “Alzheimer's disease multiclass diagnosis via multimodal neuroimaging embedding feature selection and fusion,” Information Fusion, vol. 66, pp. 170–183, 2021.
  25. Y. Jiang, F. L. Chung, S. Wang, Z. Deng, J. Wang, and P. Qian, “Collaborative fuzzy clustering from multiple weighted views,” IEEE Transactions on Cybernetics, vol. 45, no. 4, pp. 688–701, 2014.
  26. M. Mehran and M. Mohammad Reza, “Restricted convolutional neural networks,” Neural Processing Letters, vol. 50, no. 2, pp. 1705–1733, 2019.
  27. D.-X. Zhou, “Universality of deep convolutional neural networks,” Applied and Computational Harmonic Analysis, vol. 48, no. 2, pp. 787–794, 2020.
  28. R. Xin, J. Zhang, and Y. Shao, “Complex network classification with convolutional neural network,” Tsinghua Science and Technology, vol. 25, no. 4, pp. 447–457, 2020.
  29. L. Hyeon-Seok, Y. Jiang, and W.-Y. Chung, “Motor imagery based application control using 2 channel EEG sensor,” Journal of Sensor Science and Technology, vol. 25, no. 4, pp. 257–263, 2016.
  30. F. Zhengquan, Q. He, J. Zhang, X. Zhu, and M. Qiu, “A hybrid BCI system based on motor imagery and transient visual evoked potential,” Multimedia Tools & Applications, vol. 79, no. 15, pp. 10327–10340, 2020.
  31. G. Jochen, S. Thilo, S. Dirk, W. Daniel, and D. Oliver, “Uncertainty-aware principal component analysis,” IEEE Transactions on Visualization & Computer Graphics, vol. 26, no. 1, pp. 822–831, 2020.
  32. D. Nabil, R. Benali, R. F. Bereksi, and B. R. Fethi, “Epileptic seizure recognition using EEG wavelet decomposition based on nonlinear and statistical features with support vector machine classification,” Biomedical Engineering/Biomedizinische Technik, vol. 65, no. 2, pp. 133–148, 2020.
  33. S. Guan, K. Zhao, and F. Wang, “Multiclass motor imagery recognition of single joint in upper limb based on NSGA-II OVO TWSVM,” Computational Intelligence and Neuroscience, vol. 2018, Article ID 6265108, 11 pages, 2018.
  34. F. Huang, J. Lu, J. Tao, L. Li, X. Tan, and P. Liu, “Research on optimization methods of ELM classification algorithm for hyperspectral remote sensing images,” IEEE Access, vol. 7, pp. 108070–108089, 2019.

Copyright © 2021 Min Shi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
