Computational Intelligence and Neuroscience

Special Issue: Brain-Computer Interface Applications for Improving the Quality of Elderly Living

Research Article | Open Access


David Achanccaray, Shin-Ichi Izumi, Mitsuhiro Hayashibe, "Visual-Electrotactile Stimulation Feedback to Improve Immersive Brain-Computer Interface Based on Hand Motor Imagery", Computational Intelligence and Neuroscience, vol. 2021, Article ID 8832686, 13 pages, 2021. https://doi.org/10.1155/2021/8832686

Visual-Electrotactile Stimulation Feedback to Improve Immersive Brain-Computer Interface Based on Hand Motor Imagery

Academic Editor: Abdelkader Nasreddine Belkacem
Received: 21 Aug 2020
Revised: 09 Feb 2021
Accepted: 15 Feb 2021
Published: 25 Feb 2021

Abstract

In the aging society, the number of people suffering from vascular disorders is rapidly increasing and has become a social problem. The death rate due to stroke, which is the second leading cause of global mortality, has increased by 40% in the last two decades. Stroke can also cause paralysis. Of late, brain-computer interfaces (BCIs) have been garnering attention in the rehabilitation field as assistive technology. A BCI for the motor rehabilitation of patients with paralysis promotes neural plasticity when subjects perform motor imagery (MI). Feedback, such as visual and proprioceptive feedback, influences brain rhythm modulation and contributes to MI learning and motor function restoration. Also, virtual reality (VR) can provide powerful graphical options to enhance feedback visualization. This work aimed to improve an immersive VR-BCI based on hand MI by using visual-electrotactile stimulation feedback instead of visual feedback alone. The MI tasks included grasping, flexion/extension, and their random combination. Moreover, the subjects answered a system perception questionnaire after the experiments. The proposed system was evaluated with twenty able-bodied subjects. Visual-electrotactile feedback improved the mean classification accuracy for the grasping (93.00% ± 3.50%) and flexion/extension (95.00% ± 5.27%) MI tasks. Additionally, the subjects achieved an acceptable mean classification accuracy (maximum of 86.5% ± 5.80%) for the random MI task, which required more concentration. The proprioceptive feedback maintained a lower mean power spectral density in all channels and higher attention levels than those of visual feedback during the test trials for the grasping and flexion/extension MI tasks. Also, this feedback generated greater relative power in the α-band for the premotor cortex, which indicated better MI preparation.
Thus, electrotactile stimulation along with visual feedback enhanced the immersive VR-BCI classification accuracy by 5.5% and 4.5% for the grasping and flexion/extension MI tasks, respectively, retained the subject's attention, and eased MI better than visual feedback alone.

1. Introduction

The number of elderly people suffering from vascular disorders has increased rapidly in developed countries. This has become a social problem, as these disorders can cause paralysis and worsen living conditions. Paralysis could be due to medical conditions such as stroke, spinal cord injury (SCI), amyotrophic lateral sclerosis, or multiple sclerosis [1]. Stroke is one of the most important causes of global mortality, and the death rate due to stroke has increased by 40% in the last two decades [2]. It was reported in 2016 as the second leading cause of death by the World Health Organization [3]; most poststroke patients experience partial paralysis, often of the upper limbs [4].

Lately, brain-computer interfaces (BCIs) have been garnering attention in the rehabilitation field. They focus mostly on improving the quality of life and health condition of paralyzed patients. A BCI for motor rehabilitation is an assistive technology that promotes neural plasticity and eases cortical reorganization in ipsilesional motor brain regions when subjects perform motor imagery (MI) tasks. MI tasks can modulate brain activity in the sensorimotor cortex by eliciting an event-related desynchronization (ERD) and an event-related synchronization similar to those of movement execution during physical therapies. High ERD in the α-(8–12 Hz) and central β-(16–24 Hz) bands can contribute to the recovery process [5, 6]. Also, BCIs can train the motor and cognitive skills of elderly people, with or without physical and cognitive diseases. BCIs have been used in attempts to prevent degenerative changes due to aging. Thereby, an interactive interface could improve multitasking skills, and neurofeedback (or feedback) could enhance cognitive performance [7]. In addition, feedback is a significant factor in reaching high performance and reliability in BCI-based assistive systems [8, 9]. Most BCI applications use visual feedback; however, it could be limited to subjects without visual disability or by the end-effector device. Alternative feedback can be auditory, vibrotactile, or electrical stimulation [10]. BCIs with visual and proprioceptive feedback contribute to MI learning in healthy subjects [11] and motor function restoration in poststroke patients [12].

In a BCI, visual feedback could induce fatigue and lead to poor BCI performance owing to the monotony of performing MI tasks; thereby, subjects would lose interest and concentration [13]. Visual feedback modification can improve BCI performance by increasing the subject's motivation, attention, or engagement. One option is realistic visual feedback, which can induce a sense of embodiment, promoting significant MI learning in the short term. The sense of embodiment, which is the feeling of owning a controlled body, can reinforce the immersive experience of able-bodied subjects [14]. Moreover, virtual reality (VR) provides powerful graphical resources to improve feedback control by enhancing feedback presentation [15]. VR also increases patient engagement during BCI training by enhancing feedback focus [16]. On the other hand, action observation of real or virtual body movements stimulates the corresponding motor-related cortical areas through the mirror neuron system [17–19]. Thus, the ERD is enhanced during MI tasks [20, 21].

BCIs in rehabilitation can replace and restore lost neurological function. On the one hand, BCIs for replacement restore the subject's skills to interact with environments and control devices to perform activities. On the other hand, BCIs for restoration are used with rehabilitation therapies to help restore the central nervous system by inducing neural plasticity, synchronizing brain activity related to movement intent with the motion and feedback provided by end-effector devices [16]. Electrical stimulation also contributes to neural recovery from paralysis; functional electrical stimulation (FES) produces muscle contraction in paretic limbs and activates the sensory-motor system [5, 22], and electrotactile stimulation provides somatosensory feedback on the human skin for sensory restoration [23]. Sensory restoration depends on cutaneous inputs for natural motor control by indicating state transitions and providing information about slip or contact force from manual interactions [24]. Thus, able-bodied and amputee subjects improved the perceptual embodiment of an artificial hand by participating in a modified version of the rubber hand illusion while receiving transcutaneous electrical nerve stimulation [25, 26]. In another study, Wilson et al. [27] proposed lingual electrotactile stimulation feedback as a vision substitution system in a BCI based on MI to move a cursor, where subjects with and without visual disability obtained similar results. Also, a BCI used visual-haptic feedback [28], which comprised a visual scene and electrical stimulation simultaneously. This feedback combination improved sensorimotor cortical activity and BCI performance during MI in able-bodied subjects.

Some studies combined VR with electrical stimulation. A VR hand rehabilitation platform with electrotactile stimulation feedback and surface electromyography modules in a closed-loop control improved the training efficiency and grasp control performance of healthy subjects compared to visual feedback and no feedback [29]. FES-BCI [30, 31] and robot-BMI [32] systems showed virtual hands as realistic visual feedback. Both systems improved upper limb motor functions and increased the motivation of subjects with SCI by achieving higher relative power (RP) [33–35] than those of conventional BCI systems. Other studies used a head-mounted display (HMD) to increase the immersive experience, which is considered the subject's propensity to respond to the VR environment as if it were real [36]. Researchers proposed an embodied BCI based on MI using an HMD to display an immersive VR environment to train able-bodied subjects. These systems improved their MI skills and BCI performance [37, 38], reaching classification results and power spectral density (PSD) [38] better than those of the classical MI approach [39].

The present work proposed an immersive BCI based on electroencephalography (EEG) signals to perform hand MI tasks in a VR environment displayed by an HMD, supplying realistic visual feedback along with electrotactile stimulation. The proposed VR-BCI framework with visual-electrotactile stimulation (VES) feedback could improve the system performance achieved with realistic visual feedback alone. Thereby, our BCI design aims to increase the system's usability for able-bodied subjects. It could also assist in the motor rehabilitation of paralyzed patients.

2. Materials and Methods

2.1. Participants

Twenty able-bodied subjects participated in this study, 5 females and 15 males, aged between 18 and 39 years (mean = 26.20, standard deviation (SD) = 5.37); only one was left-handed. Ten of them participated in experiments with VES feedback, while the rest of them participated in experiments with visual feedback. The Ethical Committee from the School of Engineering at Tohoku University approved the experimental protocol.

All subjects signed an informed-consent document according to the Declaration of Helsinki guidelines before the experiment began. They were naive to performing MI tasks for a BCI and had no previous experience using a similar device. In addition, none had a history of neurological disorders.

2.2. Experimental Setup

The experimental setup is illustrated in Figure 1(a). The brain activity was recorded using a 16-channel g.USBamp amplifier (g.tec Medical Engineering GmbH) with active electrodes. The electrodes were distributed over the scalp according to the 10–20 international system, using electrode positions AF3, AF4, FC3, FCz, FC4, C3, Cz, C4, T7, T8, CP3, CPz, CP4, Pz, O1, and O2; Fz was the ground electrode, and the right ear lobe was the reference electrode. Additionally, an Oculus Rift HMD (Facebook Technologies, LLC) with a display frequency of 90 Hz provided the subjects with a higher perception of immersion. Also, two UnlimitedHand (UH) devices (H2L Inc.) supplied electrotactile stimulation and were mounted over each subject's forearm, as shown in Figure 1(a). This device worked at 40 Hz and provided electrotactile stimulation for 1 second.

The devices were connected to a PC for recording and processing EEG data and developing the VR environment. The PC had the following features: Windows 10 operating system, Intel Core i7-8750H CPU at up to 4.1 GHz, 16 GB RAM, Nvidia GTX 1070 GPU. The VR-BCI system was integrated into Unity (Unity Technologies) and coded in C# language.

The VR environment, which comprised a virtual avatar and a room, was shown through the HMD. The human avatar was designed in MakeHuman (The MakeHuman Team), the VR room was designed using Unity, and the virtual arm animations were made using Blender (Blender Foundation). The animations included a red ball that interacted with the virtual arms, as shown in Figure 1(d). Each arm animation comprised the movement itself (1 second) and the return to the neutral position (1 second).

2.3. Experimental Procedure

The electrotactile stimulation intensity was calibrated before the beginning of the experiment. The pulse width (tw) was 0.2 milliseconds, and the voltage bootup level (hi) was 5 V above the voltage level (hf). The voltage level started at 1 V and was increased by 1 V repeatedly until the subject felt it without muscle contraction; it was between 1 V and 3 V for most subjects. The experiment did not require contracting hand muscles, and electrotactile stimulation was simply a means to provide interaction between the subject and the BCI. The eight electrode positions and their pulse waveforms are shown in Figure 1(b). The electrodes used for grasping were 0, 1, 3, 4, 6, and 7, and those used for flexion and extension were 0, 1, 6, and 7; the stimulation pattern did not differ between the two movements. These electrodes were chosen according to the associated MI task.
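
The calibration ramp described above can be sketched as follows. This is a minimal illustration, not the UnlimitedHand API: the `subject_feels` callback stands in for the subject's verbal report, and the safety cap is an assumption of the sketch.

```python
PULSE_WIDTH_MS = 0.2   # tw, pulse width in milliseconds
BOOTUP_OFFSET_V = 5    # hi is set 5 V above the working level hf
MAX_VOLTAGE_V = 10     # safety cap (assumption of this sketch)

def calibrate_intensity(subject_feels):
    """Raise the voltage in 1 V steps until the subject reports a
    sensation (without muscle contraction); return (hf, hi)."""
    for hf in range(1, MAX_VOLTAGE_V + 1):
        if subject_feels(hf):
            return hf, hf + BOOTUP_OFFSET_V
    raise RuntimeError("no sensation reported within the safe range")

# Example: a subject who first feels the stimulus at 3 V.
hf, hi = calibrate_intensity(lambda v: v >= 3)
```

For most subjects in the study, the loop would stop between 1 V and 3 V.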

Active electrodes were positioned in a cap mounted on the subject's scalp, and conductive gel was applied to each electrode. Then, the HMD was placed over the subject's head. The electrodes' impedance was then checked to be below 10 kΩ. The experimental setup preparation took between 15 and 20 minutes.

During the BCI calibration stage, the experiment was carried out following the timeline shown in Figure 2(a). At the beginning of each trial, a green cross (side cue) was displayed randomly in the left or right position to indicate the limb for the MI task. Then, a virtual arm animation (MI cue) showed the requested MI task. The subject started the kinesthetic MI task when the virtual arm animation stopped and the red ball disappeared; it was performed repeatedly for 6 seconds. Afterward, the virtual arm animation of the performed MI was shown as visual reinforcement (R); the electrotactile stimulation was added at the beginning of the reinforcement if the subject belonged to the electrotactile stimulation group. Finally, a blue line (end cue) indicated the end of the trial. This BCI system was calibrated for hand MI tasks, namely, grasping, flexion, and extension. There were two runs for each MI task, each run of 20 trials (10 trials for each limb), with a break of 1 minute between runs. The flexion and extension MI tasks were performed in the same run. This stage lasted about 22 minutes.

The second stage was training and consisted of feature extraction and classifier training, as shown in Figure 2(b), using the EEG data recorded in the calibration session. The feature extraction comprised common spatial pattern (CSP) filtering and normalized log-variance; then, a support vector machine (SVM) classifier was trained. These methods are detailed in the next section. There was a break of 5 minutes between the calibration and test sessions.

During the test stage, each trial followed the timeline shown in Figure 2(c). In the beginning, a green cross and the virtual arm animation indicated the limb and the MI task, similar to the calibration session. The subject started the kinesthetic MI task when the virtual arm animation stopped and the red ball disappeared; it was performed repeatedly for 2 seconds. Afterward, the virtual arm animation of the classifier's output was shown as visual feedback (F); the electrotactile stimulation was added at the beginning of the feedback if the subject belonged to the electrotactile stimulation group. Then, a blue line indicated the end of the trial. The VR-BCI system was evaluated with a run for each hand MI task: grasping, flexion/extension, and their random combination. The random MI task run used the classifiers trained for the other MI tasks. Each run consisted of 20 trials (10 trials for each limb), with a break of 1 minute between runs. This stage lasted about 12 minutes.

Finally, subjects answered the questionnaire shown in Table 1 about system perception and detailed in Section 2.6. This stage lasted around 5 minutes. The time required to disassemble the experimental setup was around 10 minutes. Each subject carried out the whole experiment on the same day, and the total time was about 1 hour 15 minutes.


Questions

Q1: I felt as if the virtual hands might belong to my body
Q2: I felt that I controlled the virtual hands as if they were my own hands
Q3: Even though it did not look like me, when I saw the virtual hands moving, I felt as if they might be my own hands
Q4: I had the feeling that I was in the virtual environment
Q5: I was concentrating during the experiment
Q6: I felt that moving my visual focus inside the virtual environment was easy
Q7: I was often distracted by the virtual environment objects
Q8: I was frustrated, trying to imagine the movements
Q9: I felt tired because the virtual environment was very bright

2.4. Signal Processing

Figures 2(b) and 2(c) show the signal processing for the training and test stages, respectively. EEG data were sampled at a frequency of 512 Hz and filtered using an eighth-order Butterworth bandpass filter with cutoff frequencies of 0.5 and 30 Hz and a fourth-order 50 Hz notch filter.
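
The filtering stage can be sketched with SciPy (the study's implementation was in C#, so this is only an illustrative reading: note that SciPy's `butter(N, ..., btype="bandpass")` yields a filter of order 2N, and the "fourth-order 50 Hz notch" is realized here as a Butterworth bandstop, one possible interpretation):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 512  # sampling frequency (Hz)

# Eighth-order Butterworth bandpass, 0.5-30 Hz (N=4 -> order 8).
bandpass = butter(4, [0.5, 30], btype="bandpass", fs=FS, output="sos")

# Fourth-order 50 Hz notch, sketched as a narrow bandstop (N=2 -> order 4).
notch = butter(2, [48, 52], btype="bandstop", fs=FS, output="sos")

def preprocess(eeg):
    """Filter EEG of shape (channels, samples) along the time axis."""
    eeg = sosfiltfilt(bandpass, eeg, axis=-1)
    return sosfiltfilt(notch, eeg, axis=-1)

# Quick check: a 10 Hz component passes, a 50 Hz component is removed.
t = np.arange(FS * 4) / FS
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = preprocess(x[None, :])
```

Zero-phase filtering (`sosfiltfilt`) is used here for clarity; an online BCI would use a causal filter instead.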

The feature extraction consisted of CSP and log-variance. CSP filtering is a popular and useful method applied in BCI systems based on oscillatory activity such as MI. It provides high classification performance, is numerically efficient, and is simple to implement [40]. The CSP method is based on the calculation of a transformation matrix W that maximizes the variance of spatially filtered EEG data belonging to one class while minimizing it for the other class: in our case, EEG data of MI tasks of the left and right limbs. The matrix X of EEG data is transformed into a matrix Z:

Z = W^T X.  (1)

W is a square matrix, the dimensions of which depend on the number of channels, and its columns are spatial filters. Three pairs of spatial filters of the 16 × 16 CSP transformation matrix W were applied to EEG data. The first three spatial filters generate the maximum variance in filtered EEG signals during left limb MI, and the last three spatial filters generate it during right limb MI [11, 41–43].

Then, the feature vector f comprised the normalized variance logarithm:

f_p = log( var(Z_p) / Σ_{i=1}^{6} var(Z_i) ),  p = 1, 2, …, 6,  (2)

where Z_p is the p-th spatially filtered signal.
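
Equations (1) and (2) can be sketched in NumPy. This is a minimal illustration assuming the standard whitening-based CSP solution (the paper's C# implementation is not reproduced); `csp_filters` and `logvar_features` are names introduced here:

```python
import numpy as np

def csp_filters(trials_left, trials_right, n_pairs=3):
    """CSP spatial filters from two lists of trials, each of shape
    (channels, samples). The first columns maximize left-class
    variance; the last columns maximize right-class variance."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # trace-normalized covariance
        return np.mean(covs, axis=0)

    c_l, c_r = mean_cov(trials_left), mean_cov(trials_right)
    # Whiten the composite covariance, then diagonalize the left class.
    evals, evecs = np.linalg.eigh(c_l + c_r)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, b = np.linalg.eigh(whiten @ c_l @ whiten.T)
    order = np.argsort(d)[::-1]            # descending left-class variance
    w = (whiten.T @ b)[:, order]
    return np.hstack([w[:, :n_pairs], w[:, -n_pairs:]])

def logvar_features(trial, w):
    """Equation (2): normalized log-variance of Z = W^T X."""
    z = w.T @ trial
    var = np.var(z, axis=1)
    return np.log(var / var.sum())
```

With 16 channels and three filter pairs, `logvar_features` yields the 6-dimensional feature vector fed to the SVM.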

The SVM classifier has been demonstrated to be efficient for discriminating between two motor imagery classes, and it is the standard classification method for binary-class BCIs based on MI owing to its fast and computationally efficient training [43–45]. An SVM classifier was trained to discriminate between the left and right limbs for the grasping and flexion/extension MI tasks. This classifier was configured with a radial basis function kernel using the library LibSVM [46] in C#.

For the SVM classifier training, the EEG dataset of each MI task recorded during the calibration stage was reordered randomly. Then, the dataset was divided into 80% for training and 20% for testing. These subdatasets were reshaped into one-second segments with 90% overlap (sliding window method [47–49]). The random reordering of the subdatasets was optimized by genetic algorithms in MATLAB (The MathWorks, Inc.); the function to maximize was the classification accuracy of the trained SVM on the test subdataset. Then, the SVM model was validated using fivefold cross-validation on the training subdataset.
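
The one-second, 90%-overlap segmentation can be sketched as follows (a small illustration; `sliding_windows` is a name introduced here):

```python
def sliding_windows(n_samples, fs=512, win_s=1.0, overlap=0.9):
    """Start/end sample indices of one-second segments with 90%
    overlap, as used to reshape the training and test subdatasets."""
    win = int(win_s * fs)                    # 512 samples per window
    step = max(1, int(win * (1 - overlap)))  # ~51-sample hop
    return [(s, s + win) for s in range(0, n_samples - win + 1, step)]
```

At 512 Hz, a 2-second test trial yields 11 overlapping segments, each classified independently.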

The optimized SVM classifier predicted MI tasks in the test stage; the EEG data recorded during test trials were also reshaped into one-second segments with 90% overlap. The virtual arm animation displayed as visual feedback was shown partially (biased feedback [14]) depending on the percentage of correctly classified one-second segments. Thus, the trial prediction depended on the accuracy over the one-second segments. In addition, no animation was shown if the prediction was wrong (error-ignoring [14]).
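
The biased-feedback and error-ignoring rules can be sketched as below. The majority-vote trial decision and the binary class labels {0, 1} are assumptions of this sketch (the text only states that the trial prediction depended on the segment accuracy):

```python
def biased_feedback(segment_predictions, cue_class):
    """Biased feedback with error-ignoring: the animation is shown
    only up to the fraction of correctly classified one-second
    segments, and not at all when the trial prediction is wrong.
    Classes are assumed binary (0/1); the trial prediction is a
    majority vote over segments (an assumption of this sketch)."""
    correct = sum(p == cue_class for p in segment_predictions)
    fraction = correct / len(segment_predictions)
    trial_prediction = cue_class if fraction >= 0.5 else 1 - cue_class
    show_fraction = fraction if trial_prediction == cue_class else 0.0
    return trial_prediction, show_fraction
```

For example, three of four correct segments would show 75% of the animation, while a mostly wrong trial shows none.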

2.5. Analysis

First, the classification accuracy and the overall BCI performance were calculated for the test stage. The overall BCI performance was measured by the information transfer rate (ITR) [42]. The ITR depends on the accuracy and is defined by [50]

B = (60/T) [ log2(N) + P log2(P) + (1 − P) log2((1 − P)/(N − 1)) ].

Here, N is the number of classes, P is the classification accuracy (as a proportion), T is the time required for one classification in seconds, and B is the ITR in bits per minute (bpm).
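
The ITR definition above translates directly to code (a small sketch of the standard Wolpaw formula; the function name is introduced here):

```python
from math import log2

def itr_bpm(n_classes, accuracy, t_seconds):
    """Wolpaw information transfer rate in bits per minute.
    `accuracy` is the classification accuracy as a proportion (0-1]."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = log2(n)  # limit of the formula as P -> 1
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return (60.0 / t_seconds) * bits
```

A perfect binary classifier at one decision per minute gives 1 bpm, while chance-level accuracy (50% for two classes) gives 0 bpm.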

Second, the frequency spectrum was calculated and assessed by the following measurements. The generated ERD power was evaluated by the relative power for all channels. The relative power normalizes the PSD, eliminating offsets and reducing the power variability. The relative power was computed using [33–35]

RP = ((P_x − P_ref) / P_ref) × 100%.

Here, P_x is the PSD in dB during the event x, and P_ref is the PSD during the reference period.
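
The relative power computation is a one-liner; this sketch assumes the common ERD convention in which the event PSD is compared against a reference-period PSD:

```python
def relative_power(psd_event, psd_reference):
    """Relative power (%) of the event PSD with respect to a
    reference period; negative values indicate ERD (a power drop)."""
    return 100.0 * (psd_event - psd_reference) / psd_reference
```

For example, an event-period power of 2 dB against a reference of 8 dB gives −75%, a strong desynchronization.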

The coefficient of determination r² was computed to find power differences [51, 52] between the grand average PSDs of two groups. r² is defined as follows [53, 54]:

r² = cov(x, y)² / (σ_x² σ_y²),

where x is the observed signal, y is the predicted signal, σ_x² is the variance of x, σ_y² is the variance of y, and cov(x, y) is the covariance between x and y. The r² range is from 0 to 1 [55]. If the r² value is close to 1, there is very good discrimination, whereas a value close to 0 indicates that the signals can hardly be distinguished [54].
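
The r² definition above can be computed directly from its terms (a minimal sketch; the function name is introduced here):

```python
def r_squared(x, y):
    """Squared Pearson correlation: cov(x, y)^2 / (var(x) * var(y)),
    ranging from 0 (indistinguishable) to 1 (well discriminated)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov * cov / (vx * vy)
```

Any exact linear relation between the two signals gives r² = 1, regardless of scale or offset.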

Additionally, the hemispheric asymmetry is related to the performance of fine motor tasks, and left hemisphere changes are related to motor learning. Then, the hemispheric asymmetry was calculated over the motor brain regions as the difference between the mean PSD of the right (FC4, C4, and CP4) and left (FC3, C3, and CP3) channels [38, 56].
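
The hemispheric asymmetry measure above reduces to a channel-wise difference of means (a small sketch using the channel groups named in the text):

```python
RIGHT = ("FC4", "C4", "CP4")  # right motor brain region
LEFT = ("FC3", "C3", "CP3")   # left motor brain region

def hemispheric_asymmetry(mean_psd):
    """Difference between the mean PSD over the right and left motor
    channels; `mean_psd` maps channel name -> mean PSD in dB."""
    right = sum(mean_psd[ch] for ch in RIGHT) / len(RIGHT)
    left = sum(mean_psd[ch] for ch in LEFT) / len(LEFT)
    return right - left
```

A positive value indicates higher power over the right motor region than the left.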

Then, the statistical analysis looked into differences between groups. The Shapiro–Wilk (S–W) test (p > 0.05) was applied to verify the normal distribution; the S–W test is commonly used for small samples of fewer than 50 data points. If the S–W test was passed, analysis of variance (ANOVA) [55] was then used to find statistically significant differences. The repeated measures ANOVA evaluated the overall differences between groups; if the repeated measures model failed Mauchly's test of sphericity (p < 0.05), the Greenhouse–Geisser correction was computed. Then, the post hoc Bonferroni correction was used for pairwise comparisons. The significance level was 5% for the methods mentioned above [38].

On the other hand, if the S–W test was not passed, nonparametric statistical tests were applied; the Friedman test assessed the overall differences between groups. Then, the nonparametric Wilcoxon rank-sum test was adopted to find statistically significant differences in pairwise comparisons. The significance level was also 5% for the nonparametric methods [38, 55].
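
The normality-gated branching can be sketched with `scipy.stats`. This is a simplification for two independent groups (plain one-way ANOVA stands in for the repeated measures ANOVA and its corrections; the function name is introduced here):

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Shapiro-Wilk on each group, then a parametric test if both
    pass normality, else the Wilcoxon rank-sum test. Returns the
    normality flag, test statistic, p-value, and significance."""
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if normal:
        stat, p = stats.f_oneway(a, b)   # one-way ANOVA (simplified)
    else:
        stat, p = stats.ranksums(a, b)   # Wilcoxon rank-sum test
    return normal, stat, p, p < alpha
```

Either branch reports no difference when a group is compared with itself, and a clear difference when the groups are well separated.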

Finally, the Spearman correlation was used to find relationships between the relative power of each channel and the perception levels, with a significance level of 5% [38].

2.6. Questionnaire

Each subject responded to the questionnaire presented in Table 1. The questionnaire consisted of nine questions, and the answers were on the Likert scale from 1 to 7 [57]. The questionnaire was directed to know the subject’s perception about the system during BCI sessions; the questions were designed to get levels for the ownership perception (mean of Q1 and Q2), immersion perception (mean of Q3 and Q4), attention (mean of Q5 and Q6), and difficulty (mean of Q7, Q8, and Q9).
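
The four perception levels are simple means over the questionnaire answers (a small sketch; the function name is introduced here):

```python
def perception_levels(answers):
    """Perception levels from the nine Likert-scale answers;
    `answers` maps 'Q1'..'Q9' to scores from 1 to 7."""
    mean = lambda qs: sum(answers[q] for q in qs) / len(qs)
    return {
        "ownership": mean(("Q1", "Q2")),
        "immersion": mean(("Q3", "Q4")),
        "attention": mean(("Q5", "Q6")),
        "difficulty": mean(("Q7", "Q8", "Q9")),
    }
```
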

3. Results

All subjects performed the three MI tasks (grasping, flexion/extension, and random) during the test stage; flexion and extension were grouped as one MI task for this analysis owing to the similar brain response. The random MI task attempted to evaluate the subject’s ability to perform MI tasks of different nature in the same run.

3.1. Classification Performance

The mean cross-validation accuracy of both feedback groups for the grasping and flexion/extension MI tasks was above 85%, with SD lower than 8%. These results validated the SVM classifier model.

Table 2 presents the accuracy and F1-score of both feedback groups during the test stage for the grasping, flexion/extension, and random MI tasks. The mean accuracy was close to the mean F1-score for all MI tasks; i.e., there was a balance of correct classifications. The VES feedback group achieved greater mean accuracy and lower SD for the grasping (93.00% ± 3.50%) and flexion/extension (95.00% ± 5.27%) MI tasks than those of the visual feedback group (grasping: 87.50% ± 4.25%, flexion/extension: 91.50% ± 7.09%). On the other hand, the mean accuracies for the random MI task of both feedback groups were close. However, the variability of the VES feedback group was greater than that of the visual feedback group.


Table 2: Accuracy (Acc) and F1-score (F1), in %, of both feedback groups during the test stage for the grasping (G), flexion/extension (F/E), and random (R) MI tasks.

Feedback  Subject   G: Acc    G: F1     F/E: Acc  F/E: F1   R: Acc    R: F1
VIS       1         80.00     83.33     80.00     80.00     90.00     90.91
VIS       2         85.00     82.35     80.00     83.33     90.00     90.00
VIS       3         85.00     85.71     100.00    100.00    85.00     86.96
VIS       4         90.00     90.91     90.00     90.00     85.00     85.71
VIS       5         90.00     88.89     95.00     95.24     90.00     90.91
VIS       6         95.00     94.74     95.00     94.74     90.00     90.00
VIS       7         85.00     86.96     90.00     88.89     95.00     94.74
VIS       8         85.00     86.96     100.00    100.00    85.00     86.96
VIS       9         90.00     90.00     95.00     94.74     75.00     76.19
VIS       10        90.00     88.89     90.00     90.00     80.00     75.00
VIS       Mean      87.50     87.87     91.50     91.69     86.50     86.74
VIS       SD        4.25      3.66      7.09      6.58      5.80      6.41
VES       11        90.00     88.89     90.00     90.91     90.00     90.91
VES       12        100.00    100.00    100.00    100.00    100.00    100.00
VES       13        95.00     95.24     100.00    100.00    95.00     95.24
VES       14        90.00     88.89     100.00    100.00    80.00     75.00
VES       15        90.00     88.89     95.00     95.24     70.00     62.50
VES       16        95.00     95.24     95.00     95.24     75.00     73.68
VES       17        95.00     95.24     90.00     88.89     80.00     80.00
VES       18        90.00     90.00     100.00    100.00    70.00     76.92
VES       19        90.00     90.00     85.00     84.21     95.00     94.74
VES       20        95.00     94.74     95.00     94.74     100.00    100.00
VES       Mean      93.00     92.71     95.00     94.92     85.50     84.90
VES       SD        3.50      3.87      5.27      5.48      11.89     12.95

The S–W test verified the normal distribution of the accuracy; the VES feedback group did not pass the S–W test (p < 0.05) for the grasping MI task. Then, the Friedman test found overall statistical differences (χ² = 4.52, p < 0.05) between the accuracies of both feedback groups. Thus, the nonparametric Wilcoxon rank-sum test found statistical differences between both feedback groups for the grasping MI task (p < 0.05); however, there were no differences for the flexion/extension and random MI tasks (p > 0.05).

Additionally, Table 3 shows the overall VR-BCI performance calculated by the information transfer rate. The mean information transfer rate of the VES feedback group was higher than that of the visual feedback group for all MI tasks; however, the SD of the VES feedback group for the random MI task was higher.


Table 3: Mean ITR ± SD (bpm) of both feedback groups for the grasping (G), flexion/extension (F/E), and random (R) MI tasks.

Feedback   G             F/E           R
VIS        2.81 ± 0.74   3.77 ± 1.51   2.68 ± 0.90
VES        3.91 ± 0.92   4.56 ± 1.38   2.96 ± 2.08

3.2. Frequency Spectrum

The frequency spectrum was calculated from the spectrogram of the EEG data recorded from each subject during the test stage for the grasping and flexion/extension MI tasks of both feedback groups. The PSD decreases in the α- and central β-bands mainly owing to movement execution or MI [33, 42]. Thereby, the spectrograms verified that the PSD decreased during both MI tasks. Also, the PSDs for both MI tasks of the VES feedback group were less intense than those of the visual feedback group. Then, the VES feedback group reached a lower ERD level than that of the visual feedback group.

On the other hand, the coefficient of determination [51–53] between the grand average PSDs in the α- and central β-bands of both feedback groups for both MI tasks in each channel was close to 1; thus, there was good discrimination between both feedback groups [54].

3.3. Relative Power

The mean relative power was calculated using the PSD of the EEG data recorded from each subject during the test stage for the grasping and flexion/extension MI tasks of both feedback groups. The grand average relative power in the α- and central β-bands of both feedback groups in the premotor (FC3 and FC4 channels), primary motor (C3 and C4 channels), and somatosensory (CP3 and CP4 channels) cortices is shown in Figures 3 and 4, without considering the negative sign. The premotor cortex is related to movement preparation, the primary motor cortex is related to motor execution and motor imagery [58, 59], and the somatosensory cortex is related to sensory perception [60].

The VES feedback group had a grand average relative power in the α-band (approximately 72.71%) for the premotor cortex greater than that of the visual feedback group for both MI tasks. Thereby, the ERD over the movement preparation region of the VES feedback group was more intense. The grand average relative power in both bands (approximately 75.45%) for the primary motor cortex was similar for both MI tasks. Thus, the MI performance was also similar for both feedback groups, whereas the visual feedback group had a grand average relative power in both bands (approximately 77.89%) for the somatosensory cortex greater than that of the VES feedback group. Thereby, the ERD over the sensory perception region of the VES feedback group was weaker, verifying the results of [61]. Also, both feedback groups achieved relative powers higher than those of similar approaches (approximately 40%) [34, 35] in the α- and central β-bands.

The mean relative power in each channel for both feedback groups did not pass the S–W test of normal distribution (p < 0.05). The Friedman test found overall statistical differences (p < 0.05) between both feedback groups for both MI tasks and both bands. Thus, the Wilcoxon rank-sum test found statistical differences (p < 0.05) between both feedback groups in channel CP3 for the grasping and flexion/extension MI tasks in both the α- and central β-bands. Also, there were statistical differences in channel CP4 in the central β-band (p < 0.05) for the grasping MI task. The sensory perception region was affected by the type of feedback, as mentioned previously.

3.4. Hemispheric Asymmetry

The mean hemispheric asymmetry was calculated using the mean PSD of the EEG data recorded over the left and right motor brain regions of each subject during the calibration and test stages for the grasping and flexion/extension MI tasks of both feedback groups. Figures 5 and 6 show the grand average hemispheric asymmetry in both bands for both feedback groups. The increase of hemispheric asymmetry in sessions with feedback (test stage) compared with sessions without feedback (calibration stage) was verified [8, 9]. Also, the hemispheric asymmetry of the VES feedback group was greater than that of the visual feedback group.

The mean hemispheric asymmetry in both bands for both stages and feedback groups did not pass the S–W test of normal distribution (p < 0.05). Thus, the Friedman test did not find overall statistical differences (p > 0.05) between both feedback groups.

3.5. System Perception

The subjects of both feedback groups answered the questionnaire about system perception; their answers on the Likert scale are shown in Figure 7. Also, Table 4 shows the mean perception levels of both feedback groups. The mean immersion and ownership perception levels of the visual feedback group were higher than those of the VES feedback group; besides, the VES feedback group had a higher mean attention level and a lower mean difficulty level than those of the visual feedback group. Most of the subjects in both feedback groups felt very high levels of immersion perception and attention.


Table 4: Mean perception levels ± SD of both feedback groups.

Feedback   Ownership     Immersion     Attention     Difficulty
VIS        4.45 ± 0.98   5.50 ± 0.75   5.75 ± 0.68   3.37 ± 1.59
VES        4.15 ± 0.75   4.80 ± 1.30   5.90 ± 0.84   2.80 ± 1.20

The attention level did not pass the S–W test of normal distribution (p < 0.05). Then, the Friedman test did not find overall statistical differences (χ² = 0.93, p > 0.05) between both feedback groups.

On the other hand, the Spearman correlation found significant correlations between the perception levels and the channels' relative power in both bands and for both MI tasks, as shown in Table 5. Regarding the grasping MI task, the ownership perception level was correlated with the premotor cortex, and the attention level was correlated with the somatosensory and prefrontal cortices. The difficulty level was correlated with the somatosensory, prefrontal, and visual cortices. For the flexion/extension MI task, the attention level was correlated with the somatosensory cortex, and the difficulty level was correlated with the premotor, somatosensory, prefrontal, and visual perception regions. Additionally, the attention and difficulty levels were inversely correlated (ρ = −0.45, p < 0.05).


MI task   Band   Channel   Level   ρ        p

G                FC3       O       −0.46    0.0419
                 AF3       A        0.52    0.0193
                 CP3       A        0.45    0.0493
                 AF3       D       −0.49    0.0272
                 CP3       D       −0.61    0.0044
                 O1        D       −0.47    0.0388
          C.     AF3       A        0.58    0.0077
                 AF3       D       −0.48    0.0309
                 CP3       D       −0.62    0.0033

F/E              AF3       A        0.51    0.0221
                 AF3       D       −0.54    0.0135
                 FCz       D       −0.47    0.0350
                 CP3       D       −0.67    0.0011
                 O1        D       −0.46    0.0433
          C.     AF3       A        0.57    0.0082
                 CP3       A        0.49    0.0278
                 AF3       D       −0.50    0.0246
                 CP3       D       −0.64    0.0023

G = grasping; F/E = flexion/extension; O = ownership; A = attention; D = difficulty.

4. Discussion

This work investigated the feasibility of combining electrotactile stimulation with visual feedback by comparing VES feedback against visual feedback alone.

Regarding classification accuracy, the VES feedback group achieved the best results for the grasping and flexion/extension MI tasks; however, statistical differences between the two feedback groups were found only for the grasping MI task. The response to the electrotactile stimulation feedback could be due to the muscles involved in the movements. Grasping is a more complex movement than flexion/extension: it activates brain regions related to finger movement that are adjacent to, or overlap with, those related to the wrist movement of flexion/extension [62, 63]. Thus, performing flexion/extension MI was easier than grasping MI and may not require additional stimulation during feedback. On the other hand, both feedback groups had similar mean classification accuracy for the random MI task; however, the VES feedback group showed greater dispersion than the visual feedback group. Performing two MI tasks in random order required more concentration, and the electrotactile stimulation could distract the subject. Nevertheless, most subjects achieved significant accuracy (above 80%) for the random MI task. This shows the feasibility of increasing the BCI control options by combining different MI tasks for the left and right limbs in an immersive environment, as in similar motor rehabilitation BCI [37, 38] and cognitive training BCI systems [64, 65].

In the frequency spectrum, there were differences between the PSD of the two feedback groups. The channels' PSD of the VES feedback group was lower than that of the visual feedback group; the lower PSD in the occipital α-band (8–12 Hz) for the VES feedback group indicated a higher attention level during test trials [66]. Thus, subjects experiencing proprioceptive feedback paid more attention to the VR-BCI system than those with visual feedback [11]. There were also statistical differences between the somatosensory cortex relative powers of the two feedback groups. The VES feedback group had a lower relative power than the visual feedback group, consistent with the view that β-band activity tends to maintain the current sensorimotor state; the electrical stimulation thus produced a change of the status quo by decreasing β-band activity [67]. On the other hand, the hemispheric asymmetry of the PSD can be modulated and increased during feedback sessions [8], improving the performance of fine motor tasks and triggering motor learning changes [56]. Thereby, our system could promote interhemispheric interaction in patients with affected hemispheric differences [38]. It could also contribute to motor learning transfer by using the healthy hand to improve paretic hand movements [9]. Therefore, the attained relative power and hemispheric asymmetry can contribute to different learning processes [68] and to restoring motor and cognitive functions [64, 69].
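Relative band power, as discussed above (power within a band divided by total power), can be computed from a Welch PSD estimate; the 128 Hz sampling rate and band edges here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def relative_power(signal, fs, band):
    """Fraction of the total PSD power of one channel that falls inside `band` (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() / psd.sum()

fs = 128
t = np.arange(0, 4, 1 / fs)
# toy signal: a strong 10 Hz (alpha) rhythm plus broadband noise
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(3).standard_normal(t.size)
print(f"alpha (8-12 Hz) relative power: {relative_power(x, fs, (8, 12)):.2f}")
```

Comparing such per-channel fractions between groups is one simple way to contrast band activity independently of overall signal amplitude.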

The questionnaire answers confirmed that subjects in the VES feedback group paid more attention, which was related to the decrease in the difficulty level of performing MI tasks. The attention and difficulty levels were correlated with the relative power of the somatosensory channels, verifying the effect of electrotactile stimulation in decreasing the relative power of this brain region [67]. On the other hand, the body ownership illusion results from the interaction between sensory inputs and internal body models; however, the immersion and ownership perception levels influenced by the VR, as mentioned in [37, 38], were higher without proprioceptive feedback. Then, realistic visual feedback generated a higher ownership perception than cutaneous stimulation; it could attenuate proprioceptive signals from the skin, altering the subject's somatosensory perception [70].

Our VR-BCI system with realistic visual feedback provided richer and more explanatory feedback during BCI training than conventional BCI systems, which could reduce BCI illiteracy and performance deficiency in naive subjects [71]. Also, the modulation of sensorimotor rhythms was influenced by realistic visual feedback, positively biased feedback, and the sense of embodiment, which reinforced MI learning in the short term and improved BCI performance [14]. In addition, the electrotactile stimulation and positively biased feedback enhanced interaction with the VR environment through their influence on the subject's motivation and confidence, thereby improving MI learning. Hence, the VES feedback enhanced the VR-BCI with visual feedback in several aspects, such as classification accuracy of the MI tasks, the subject's attention during MI, and MI preparation.

This work was limited by the absence of pre-training sessions, the small number of test stage sessions, and the number of subjects. Subjects can struggle to practice MI skills without pre-training sessions, in which they can explore strategies and then learn better in the subsequent calibration sessions [14, 71]. They could also improve MI performance with feedback over more test sessions, so the differences between visual and VES feedback could increase. Moreover, the results would be more reliable with a larger number of subjects; however, other works have evaluated BCI systems with a similar number of subjects [11, 15, 20, 21, 57].

The proposed immersive system will be assessed with poststroke patients in future research, considering the limitations mentioned above. The VR-BCI would be updated according to the patients’ constraints. In addition, cognitive background and spatial abilities should be considered to predict the subject’s response to the feedback and the BCI performance [72, 73].

5. Conclusions

The visual-electrotactile feedback was assessed and compared with visual feedback to discriminate MI tasks between the two limbs; the MI tasks were grasping, flexion/extension, and a random combination of the two. Visual-electrotactile feedback improved the mean classification accuracy for the grasping (93.00% ± 3.50%) and flexion/extension (95.00% ± 5.27%) MI tasks and reached higher information transfer rates for the three MI tasks (maximum of 4.56 ± 1.38 bpm). In addition, subjects achieved a significant mean classification accuracy (maximum of 86.5% ± 5.80%) for the random MI task, although it was lower than that of the other MI tasks. Since the random MI task required more concentration from the subjects, electrotactile stimulation could distract them during MI performance, generating greater dispersion of the classification accuracy. Statistical differences between the classification accuracies of the two feedback groups were found only for the grasping MI task.
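The information transfer rate cited above is, in most MI-BCI work, computed with the Wolpaw formula [50]; whether the authors used exactly this form is an assumption, but a sketch under that definition (bits per selection scaled by selections per minute) is:

```python
import math

def wolpaw_itr(accuracy, n_classes, trials_per_min):
    """Wolpaw ITR in bits/min: bits per selection times selections per minute."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * trials_per_min

# e.g., 95% accuracy on a 2-class MI task; 8 trials/min is an assumed pace
print(f"{wolpaw_itr(0.95, 2, 8):.2f} bits/min")
```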

The proprioceptive feedback kept a lower mean PSD in all channels and higher attention levels than the visual feedback during test trials for the grasping and flexion/extension MI tasks. This feedback also generated greater relative power in the -band for the premotor cortex, which indicated better MI preparation. On the other hand, the hemispheric asymmetry was lower for the visual-electrotactile feedback, although there were no statistical differences between the two feedback groups; thus, both feedback types can contribute to motor and cognitive learning processes. Also, the questionnaire confirmed a higher attention level and a lower difficulty level for the visual-electrotactile feedback, whereas the immersion and ownership perception levels were higher for the visual feedback. However, there were also no statistical differences between the system perception levels of the two feedback groups.

Therefore, the use of electrotactile stimulation along with visual feedback enhanced the immersive VR-BCI classification performance. It also retained the subject’s attention and eased motor imagery better than visual feedback alone.

Data Availability

The datasets recorded and used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by FONDECYT from CONCYTEC, Peru, under Contract 112-2017, and in part by the Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research (B) under Project 18H01399.

References

  1. J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
  2. E. J. Benjamin, S. S. Virani, C. W. Callaway et al., “Heart disease and stroke statistics-2018 update: a report from the American Heart Association,” Circulation, vol. 137, no. 12, pp. e67–e492, 2018.
  3. WHO, Global Health Estimates 2016: Deaths by Cause, Age, Sex, by Country and by Region, 2000–2016, World Health Organization, Geneva, Switzerland, 2018.
  4. I. Faria-Fortini, S. M. Michaelsen, J. G. Cassiano, and L. F. Teixeira-Salmela, “Upper extremity function in stroke subjects: relationships between the international classification of functioning, disability, and health domains,” Journal of Hand Therapy, vol. 24, no. 3, pp. 257–265, 2011.
  5. U. Chaudhary, N. Birbaumer, and M. R. Curado, “Brain-machine interface (bmi) in paralysis,” Annals of Physical and Rehabilitation Medicine, vol. 58, no. 1, pp. 9–13, 2015.
  6. G. Pfurtscheller and C. Neuper, “Motor imagery and direct brain-computer communication,” Proceedings of the IEEE, vol. 89, no. 7, pp. 1123–1134, 2001.
  7. A. N. Belkacem, N. Jamil, and J. A. Palmer, “Brain computer interfaces for improving the quality of life of older adults and elderly patients,” Frontiers in Neuroscience, vol. 14, p. 692, 2020.
  8. C. Neuper, A. Schlögl, and G. Pfurtscheller, “Enhancement of left-right sensorimotor eeg differences during feedback-regulated motor imagery,” Journal of Clinical Neurophysiology, vol. 16, no. 4, pp. 373–382, 1999.
  9. N. Takeuchi, Y. Oouchida, and S.-I. Izumi, “Motor control and neural plasticity through interhemispheric interactions,” Neural Plasticity, vol. 2012, Article ID 823285, 2012.
  10. I. N. Angulo-Sherman and D. Gutiérrez, “Effect of different feedback modalities in the performance of brain-computer interfaces,” in Proceedings of the 2014 International Conference on Electronics, Communications and Computers (CONIELECOMP), pp. 14–21, Cholula, Mexico, February 2014.
  11. S. Bhattacharyya, M. Clerc, and M. Hayashibe, “Augmenting motor imagery learning for brain-computer interfacing using electrical stimulation as feedback,” IEEE Transactions on Medical Robotics and Bionics, vol. 1, no. 4, pp. 247–255, 2019.
  12. P. D. Ganzer, S. C. Colachis, and M. A. Schwemmer, “Restoring the sense of touch using a sensorimotor demultiplexing neural interface,” Cell, vol. 181, no. 4, pp. 763–773, 2020, http://www.sciencedirect.com/science/article/pii/S0092867420303470.
  13. R. Foong, K. K. Ang, C. Quek et al., “Assessment of the efficacy of eeg-based mi-bci with visual feedback and eeg correlates of mental fatigue for upper-limb stroke rehabilitation,” IEEE Transactions on Biomedical Engineering, vol. 67, no. 3, pp. 786–795, 2020.
  14. M. Alimardani, S. Nishio, and H. Ishiguro, “Brain-computer interface and motor imagery training: the role of visual feedback and embodiment,” in Evolving BCI Therapy, D. Larrivee, Ed., IntechOpen, London, UK, 2018.
  15. R. Ron-Angevin and A. Díaz-Estrella, “Brain-computer interface: changes in performance using virtual reality techniques,” Neuroscience Letters, vol. 449, no. 2, pp. 123–127, 2009.
  16. M. A. Bockbrader, G. Francisco, R. Lee, J. Olson, R. Solinsky, and M. L. Boninger, “Brain computer interfaces in rehabilitation medicine,” PM&R, vol. 10, no. 9, pp. S233–S243, 2018.
  17. M. Tani, Y. Ono, M. Matsubara et al., “Action observation facilitates motor cortical activity in patients with stroke and hemiplegia,” Neuroscience Research, vol. 133, pp. 7–14, 2018.
  18. S. P. Tipper, “Eps mid-career award 2009: from observation to action simulation: the role of attention, eye-gaze, emotion, and body state,” Quarterly Journal of Experimental Psychology, vol. 63, no. 11, pp. 2081–2105, 2010.
  19. J. Kim, B. Lee, H. S. Lee, K. H. Shin, M. J. Kim, and E. Son, “Differences in brain waves of normal persons and stroke patients during action observation and motor imagery,” Journal of Physical Therapy Science, vol. 26, no. 2, pp. 215–218, 2014.
  20. T. Ono, A. Kimura, and J. Ushiba, “Daily training with realistic visual feedback improves reproducibility of event-related desynchronisation following hand motor imagery,” Clinical Neurophysiology, vol. 124, no. 9, pp. 1779–1786, 2013.
  21. Y. Ono, K. Wada, M. Kurata, and N. Seki, “Enhancement of motor-imagery ability via combined action observation and motor-imagery training with proprioceptive neurofeedback,” Neuropsychologia, vol. 114, pp. 134–142, 2018.
  22. D. B. Popovic, “Advances in functional electrical stimulation (fes),” Journal of Electromyography and Kinesiology, vol. 24, no. 6, pp. 795–802, 2014.
  23. K. Li, Electrotactile Feedback for Sensory Restoration: Modelling and Application, University of Portsmouth, Portsmouth, UK, 2018.
  24. R. Johansson and J. Flanagan, “Coding and use of tactile signals from the fingertips in object manipulation tasks,” Nature Reviews Neuroscience, vol. 10, pp. 345–359, 2009.
  25. M. R. Mulvey, H. J. Fawkner, H. E. Radford, and M. I. Johnson, “Perceptual embodiment of prosthetic limbs by transcutaneous electrical nerve stimulation,” Neuromodulation: Technology at the Neural Interface, vol. 15, no. 1, pp. 42–47, 2012.
  26. M. Isaković, M. Belić, and M. Štrbac, “Electrotactile feedback improves performance and facilitates learning in the routine grasping task,” European Journal of Translational Myology, vol. 26, p. 6, 2016.
  27. J. A. Wilson, L. M. Walton, M. Tyler, and J. Williams, “Lingual electrotactile stimulation as an alternative sensory feedback pathway for brain-computer interface applications,” Journal of Neural Engineering, vol. 9, no. 4, p. 45007, 2012.
  28. Z. Wang, Y. Zhou, L. Chen et al., “A BCI based visual-haptic neurofeedback training improves cortical activations and classification performance during motor imagery,” Journal of Neural Engineering, vol. 16, no. 6, p. 66012, 2019.
  29. K. Li, P. Boyd, Y. Zhou, Z. Ju, and H. Liu, “Electrotactile feedback in a virtual hand rehabilitation platform: evaluation and implementation,” IEEE Transactions on Automation Science and Engineering, vol. 16, no. 4, pp. 1556–1565, 2019.
  30. F. Trincado-Alonso, E. López-Larraz, and F. Resquín, “A pilot study of brain-triggered electrical stimulation with visual feedback in patients with incomplete spinal cord injury,” Journal of Medical and Biological Engineering, vol. 38, pp. 790–803, 2017.
  31. A. Moldoveanu, O.-M. Ferche, F. Moldoveanu et al., “The travee system for a multimodal neuromotor rehabilitation,” IEEE Access, vol. 7, pp. 8151–8171, 2019.
  32. G. Onose, C. Grozea, and A. Anghelescu, “On the feasibility of using motor imagery eeg-based brain-computer interface in chronic tetraplegics for assistive robotic arm control: a clinical test and long-term post-trial follow-up,” Spinal Cord, vol. 50, pp. 599–608, 2012.
  33. G. Pfurtscheller and F. H. Lopes da Silva, “Event-related eeg/meg synchronization and desynchronization: basic principles,” Clinical Neurophysiology, vol. 110, no. 11, pp. 1842–1857, 1999.
  34. Z. Wang, L. Chen, and W. Yi, “Enhancement of cortical activation for motor imagery during bci-fes training,” in Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2527–2530, Honolulu, Hawaii, July 2018.
  35. I. Choi, K. Bond, and C. S. Nam, “A hybrid bci-controlled fes system for hand-wrist motor function,” in Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2324–2328, Budapest, Hungary, October 2016.
  36. C. Nam, A. Nijholt, and F. Lotte, Brain-Computer Interfaces Handbook: Technological and Theoretical Advances, CRC Press, Boca Raton, FL, USA, 2018.
  37. F. Škola, S. Tinková, and F. Liarokapis, “Progressive training for motor imagery brain-computer interfaces using gamification and virtual reality embodiment,” Frontiers in Human Neuroscience, vol. 13, p. 329, 2019.
  38. A. Vourvopoulos and S. Bermúdez i Badia, “Motor priming in virtual reality can augment motor-imagery training efficacy in restorative brain-computer interaction: a within-subject analysis,” Journal of NeuroEngineering and Rehabilitation, vol. 13, no. 1, p. 69, 2016.
  39. J. Kalcher, D. Flotzinger, C. Neuper, S. Gölly, and G. Pfurtscheller, “Graz brain-computer interface ii: towards communication between humans and computers based on online classification of three different eeg patterns,” Medical & Biological Engineering & Computing, vol. 34, no. 5, pp. 382–388, 1996.
  40. M. Clerc, L. Bougrain, and F. Lotte, Brain-Computer Interfaces 1: Foundations and Methods, ISTE Ltd, London, UK, 2016.
  41. J. Müller-Gerking, G. Pfurtscheller, and H. Flyvbjerg, “Designing optimal spatial filters for single-trial eeg classification in a movement task,” Clinical Neurophysiology, vol. 110, no. 5, pp. 787–798, 1999.
  42. B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Muller, “Optimizing spatial filters for robust eeg single-trial analysis,” IEEE Signal Processing Magazine, vol. 25, no. 1, pp. 41–56, 2008.
  43. D. Achanccaray and M. Hayashibe, “Decoding hand motor imagery tasks within the same limb from eeg signals using deep learning,” IEEE Transactions on Medical Robotics and Bionics, vol. 2, no. 4, pp. 692–699, 2020.
  44. C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, NY, USA, 2006.
  45. P. B. Junior, W. R. B. M. Nunes, and A. E. Lazzaretti, “Classifier for motor imagery during parametric functional electrical stimulation frequencies on the quadriceps muscle,” in Proceedings of the 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 526–529, San Francisco, CA, USA, March 2019.
  46. C.-C. Chang and C.-J. Lin, “Libsvm,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1–27, 2011.
  47. A. I. Sburlea, L. Montesano, and J. Minguez, “Continuous detection of the self-initiated walking pre-movement state from EEG correlates without session-to-session recalibration,” Journal of Neural Engineering, vol. 12, no. 3, p. 36007, 2015.
  48. S. Ren, W. Wang, Z.-G. Hou, X. Liang, J. Wang, and W. Shi, “Enhanced motor imagery based brain-computer interface via fes and vr for lower limbs,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 8, pp. 1846–1855, 2020.
  49. M. A. Romero-Laiseca, D. Delisle-Rodriguez, V. Cardoso et al., “A low-cost lower-limb brain-machine interface triggered by pedaling motor imagery for post-stroke patients rehabilitation,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 4, pp. 988–996, 2020.
  50. D. J. McFarland, W. A. Sarnacki, and J. R. Wolpaw, “Brain-computer interface (BCI) operation: optimizing information transfer rates,” Biological Psychology, vol. 63, no. 3, pp. 237–251, 2003.
  51. X. Yong and C. Menon, “Eeg classification of different imaginary movements within the same limb,” PLoS One, vol. 10, no. 4, pp. 1–24, 2015.
  52. M. Tavakolan, Z. Frehlick, X. Yong, and C. Menon, “Classifying three imaginary states of the same upper extremity using time-domain features,” PLoS One, vol. 12, no. 3, pp. 1–18, 2017.
  53. C. B. Tabernig, C. A. Lopez, and L. C. Carrere, “Neurorehabilitation therapy of patients with severe stroke based on functional electrical stimulation commanded by a brain computer interface,” Journal of Rehabilitation and Assistive Technologies Engineering, vol. 5, 2018.
  54. R. Aldea and O. Eva, “Detecting sensorimotor rhythms from the eeg signals using the independent component analysis and the coefficient of determination,” in Proceedings of the International Symposium on Signals, Circuits and Systems ISSCS2013, pp. 1–4, Iasi, Romania, July 2013.
  55. J. Devore, Probability and Statistics for Engineering and the Sciences, Cengage Learning, Boston, MA, USA, 2015.
  56. M. I. Garry, G. Kamen, and M. A. Nordstrom, “Hemispheric differences in the relationship between corticomotor excitability changes following a fine-motor task and motor learning,” Journal of Neurophysiology, vol. 91, no. 4, pp. 1570–1578, 2004.
  57. S. Kishore, M. González-Franco, C. Hintemüller et al., “Comparison of ssvep bci and eye tracking for controlling a humanoid robot in a social environment,” Presence: Teleoperators and Virtual Environments, vol. 23, no. 3, pp. 242–252, 2014.
  58. M. Jeannerod and V. Frak, “Mental imaging of motor activity in humans,” Current Opinion in Neurobiology, vol. 9, no. 6, pp. 735–739, 1999.
  59. K. J. Miller, G. Schalk, E. E. Fetz, M. den Nijs, J. G. Ojemann, and R. P. N. Rao, “Cortical activity during motor execution, motor imagery, and imagery-based online feedback,” Proceedings of the National Academy of Sciences, vol. 107, no. 9, pp. 4430–4435, 2010.
  60. G. Pfurtscheller and A. Berghold, “Patterns of cortical activation during planning of voluntary movement,” Electroencephalography and Clinical Neurophysiology, vol. 72, no. 3, pp. 250–258, 1989.
  61. C. Reynolds, B. A. Osuagwu, and A. Vuckovic, “Influence of motor imagination on cortical activation during functional electrical stimulation,” Clinical Neurophysiology, vol. 126, no. 7, pp. 1360–1369, 2015.
  62. J. Sanes, J. Donoghue, V. Thangaraj, R. Edelman, and S. Warach, “Shared neural substrates controlling hand movements in human motor cortex,” Science, vol. 268, no. 5218, pp. 1775–1777, 1995.
  63. E. B. Plow, P. Arora, M. A. Pline, M. T. Binenstock, and J. R. Carey, “Within-limb somatotopy in primary motor cortex-revealed using fMRI,” Cortex, vol. 46, no. 3, pp. 310–321, 2010.
  64. J. Gomez-Pilar, R. Corralejo, L. F. Nicolas-Alonso, D. Álvarez, and R. Hornero, “Neurofeedback training with a motor imagery-based bci: neurocognitive improvements and eeg changes in the elderly,” Medical & Biological Engineering & Computing, vol. 54, no. 11, pp. 1655–1666, 2016.
  65. S. Jirayucharoensak, P. Israsena, S. Pan-Ngum, S. Hemrungrojn, and M. Maes, “A game-based neurofeedback training system to enhance cognitive performance in healthy elderly subjects and in patients with amnestic mild cognitive impairment,” Clinical Interventions in Aging, vol. 14, pp. 347–360, 2019.
  66. J. Frey, C. Mühl, F. Lotte, and M. Hachet, “Review of the use of electroencephalography as an evaluation method for human-computer interaction,” in Proceedings of the PhyCS-International Conference on Physiological Computing Systems, Scitepress, Lisbon, Portugal, January 2014.
  67. A. K. Engel and P. Fries, “Beta-band oscillations-signalling the status quo?” Current Opinion in Neurobiology, vol. 20, no. 2, pp. 156–165, 2010.
  68. J. A. Pineda, D. S. Silverman, A. Vankov, and J. Hestenes, “Learning to control brain rhythms: making a brain-computer interface possible,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 181–184, 2003.
  69. T.-S. Lee, S. J. A. Goh, and S. Y. Quek, “A brain-computer interface based cognitive training system for healthy elderly: a randomized control pilot study for usability and preliminary efficacy,” PLoS One, vol. 8, no. 11, pp. 1–8, 2013.
  70. M. Alimardani, S. Nishio, and H. Ishiguro, “Removal of proprioception by bci raises a stronger body ownership illusion in control of a humanlike robot,” Scientific Reports, vol. 6, p. 33514, 2016.
  71. C. Jeunet, E. Jahanpour, and F. Lotte, “Why standard brain-computer interface (BCI) training protocols should be changed: an experimental study,” Journal of Neural Engineering, vol. 13, no. 3, p. 36024, 2016.
  72. K. Pacheco, K. Acuña, E. Carranza, D. Achanccaray, and J. Andreu-Perez, “Performance predictors of motor imagery brain-computer interface based on spatial abilities for upper limb rehabilitation,” in Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1014–1017, Jeju Island, Korea, July 2017.
  73. N. Leeuwis and M. Alimardani, “High aptitude motor-imagery bci users have better visuospatial memory,” in Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1518–1523, Toronto, Canada, October 2020.

Copyright © 2021 David Achanccaray et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

