Abstract

In an aging society, the number of people suffering from vascular disorders is rapidly increasing and has become a social problem. The death rate due to stroke, which is the second leading cause of global mortality, has increased by 40% in the last two decades. Stroke can also cause paralysis. Of late, brain-computer interfaces (BCIs) have been garnering attention in the rehabilitation field as assistive technology. A BCI for the motor rehabilitation of patients with paralysis promotes neural plasticity when subjects perform motor imagery (MI). Feedback, such as visual and proprioceptive stimulation, influences brain rhythm modulation and contributes to MI learning and motor function restoration. Also, virtual reality (VR) can provide powerful graphical options to enhance feedback visualization. This work aimed to improve an immersive VR-BCI based on hand MI by using visual-electrotactile stimulation feedback instead of visual feedback alone. The MI tasks included grasping, flexion/extension, and their random combination. Moreover, the subjects answered a system perception questionnaire after the experiments. The proposed system was evaluated with twenty able-bodied subjects. Visual-electrotactile feedback improved the mean classification accuracy for the grasping (93.00% ± 3.50%) and flexion/extension (95.00% ± 5.27%) MI tasks. Additionally, the subjects achieved an acceptable mean classification accuracy (maximum of 86.5% ± 5.80%) for the random MI task, which required more concentration. The proprioceptive feedback maintained lower mean power spectral density in all channels and higher attention levels than those of visual feedback during the test trials for the grasping and flexion/extension MI tasks. Also, this feedback generated greater relative power in the β-band for the premotor cortex, which indicated better MI preparation. Thus, electrotactile stimulation along with visual feedback enhanced the immersive VR-BCI classification accuracy by 5.5% and 4.5% for the grasping and flexion/extension MI tasks, respectively, retained the subject's attention, and eased MI better than visual feedback alone.

1. Introduction

The number of elderly people suffering from vascular disorders has increased rapidly in developed countries. This has become a social problem, as these disorders can cause paralysis and worsen living conditions. Paralysis can result from medical conditions such as stroke, spinal cord injury (SCI), amyotrophic lateral sclerosis, or multiple sclerosis [1]. Stroke is one of the most important causes of global mortality, and the death rate due to stroke has increased by 40% in the last two decades [2]. It was reported in 2016 as the second leading cause of death by the World Health Organization [3]; most poststroke patients mainly experience partial paralysis, often of the upper limbs [4].

Lately, brain-computer interfaces (BCIs) have been garnering attention in the rehabilitation field. They focus mostly on improving the quality of life and health condition of paralyzed patients. A BCI for motor rehabilitation is an assistive technology that promotes neural plasticity and eases cortical reorganization in ipsilesional motor brain regions when subjects perform motor imagery (MI) tasks. MI tasks can modulate brain activity in the sensorimotor cortex by eliciting an event-related desynchronization (ERD) and an event-related synchronization, similar to movement execution during physical therapies. High ERD in the μ- (8–12 Hz) and central β- (16–24 Hz) bands can contribute to the recovery process [5, 6]. Also, BCIs can train the motor and cognitive skills of elderly people, with or without physical and cognitive diseases, in attempts to prevent degenerative changes caused by aging. Thereby, an interactive interface could improve multitasking skills, and neurofeedback (or feedback) could enhance cognitive performance [7]. In addition, feedback is a significant factor in reaching high performance and reliability in BCI-based assistive systems [8, 9]. Most BCI applications use visual feedback; however, this may be limited to subjects without visual disability or constrained by the end-effector device. Alternative feedback modalities include auditory, vibrotactile, and electrical stimulation [10]. BCIs with visual and proprioceptive feedback contribute to MI learning in healthy subjects [11] and to the restoration of motor function in poststroke patients [12].

In a BCI, visual feedback could induce fatigue and lead to poor BCI performance owing to the monotony of performing MI tasks; thereby, subjects would lose interest and concentration [13]. Visual feedback modification can improve BCI performance by increasing the subject's motivation, attention, or engagement. One option is realistic visual feedback, which can induce a sense of embodiment, promoting significant MI learning in the short term. The sense of embodiment, which is the feeling of owning a controlled body, can reinforce the immersive experience of able-bodied subjects [14]. Moreover, virtual reality (VR) provides powerful graphical resources to improve feedback control by enhancing feedback presentation [15]. VR also increases patient engagement during BCI training owing to an enhanced focus on the feedback [16]. On the other hand, action observation of real or virtual body movements stimulates the corresponding motor-related cortical areas through the mirror neuron system [17–19]. Thus, the ERD is enhanced during MI tasks [20, 21].

BCIs in rehabilitation can replace and restore lost neurological function. On the one hand, BCIs for replacement restore the subject's skills to interact with environments and control devices to perform activities. On the other hand, BCIs for restoration are used with rehabilitation therapies to aid the restoration of the central nervous system by inducing neural plasticity and by synchronizing brain activity related to movement intent with the motion and feedback provided by end-effector devices [16]. Electrical stimulation also contributes to neural recovery from paralysis; functional electrical stimulation (FES) produces muscle contraction in paretic limbs and activates the sensory-motor system [5, 22], and electrotactile stimulation provides somatosensory feedback on the human skin for sensory restoration [23]. Sensory restoration depends on cutaneous inputs for natural motor control by indicating state transitions and providing information about slip or contact force from manual interactions [24]. Thus, able-bodied and amputee subjects improved the perceptual embodiment of an artificial hand by participating in a modified version of the rubber hand illusion while receiving transcutaneous electrical nerve stimulation [25, 26]. In another study, Wilson et al. [27] proposed lingual electrotactile stimulation feedback as a vision substitution system in an MI-based BCI for moving a cursor, where subjects with or without visual disability obtained similar results. Also, a BCI used visual-haptic feedback [28], which comprised a visual scene and electrical stimulation delivered simultaneously. This feedback combination improved sensorimotor cortical activity and BCI performance during MI in able-bodied subjects.

Some studies have combined VR with electrical stimulation. A VR hand rehabilitation platform with electrotactile stimulation feedback and surface electromyography modules in a closed-loop control improved the training efficiency and grasp control performance of healthy subjects compared to visual feedback and no feedback [29]. FES-BCI [30, 31] and robot-BMI [32] systems showed virtual hands as realistic visual feedback. Both systems improved upper limb motor functions and increased the motivation of subjects with SCI by achieving higher relative power (RP) [33–35] than conventional BCI systems. Other studies used a head-mounted display (HMD) to increase the immersive experience, which is considered the subject's propensity to respond to the VR environment as if it were real [36]. Researchers proposed an embodied BCI based on MI using an HMD to display an immersive VR environment to train able-bodied subjects. These systems improved the subjects' MI skills and BCI performance [37, 38], reaching better classification results and power spectral density (PSD) [38] than the classical MI approach [39].

The present work proposed an immersive BCI based on electroencephalography (EEG) signals to perform hand MI tasks in a VR environment displayed by an HMD, supplying realistic visual feedback along with electrotactile stimulation. The proposed VR-BCI framework with visual-electrotactile stimulation (VES) feedback could improve the system performance achieved with realistic visual feedback alone. Thereby, our BCI design aims to increase the system's usability for able-bodied subjects. It could also assist in the motor rehabilitation of paralyzed patients.

2. Materials and Methods

2.1. Participants

Twenty able-bodied subjects participated in this study, 5 females and 15 males, aged between 18 and 39 years (mean = 26.20, standard deviation (SD) = 5.37); only one was left-handed. Ten of them participated in experiments with VES feedback, while the rest of them participated in experiments with visual feedback. The Ethical Committee from the School of Engineering at Tohoku University approved the experimental protocol.

All subjects signed an informed-consent document, according to the Declaration of Helsinki guidelines, before the experiment began. They were naive to performing MI tasks for a BCI and had no previous experience using a similar device. In addition, none had a history of neurological disorders.

2.2. Experimental Setup

The experimental setup is illustrated in Figure 1(a). Brain activity was recorded using a 16-channel g.USBamp amplifier (g.tec Medical Engineering GmbH) with active electrodes. The electrodes were distributed over the scalp according to the 10–20 international system, using electrode positions AF3, AF4, FC3, FCz, FC4, C3, Cz, C4, T7, T8, CP3, CPz, CP4, Pz, O1, and O2; Fz was the ground electrode, and the right ear lobe was the reference. Additionally, an Oculus Rift HMD (Facebook Technologies, LLC) with a display frequency of 90 Hz provided subjects with a higher perception of immersion. Also, two UnlimitedHand (UH) devices (H2L Inc.) supplied electrotactile stimulation and were mounted over each subject's forearms, as shown in Figure 1(a). This device operated at 40 Hz and provided electrotactile stimulation for 1 second.

The devices were connected to a PC used for recording and processing the EEG data and for running the VR environment. The PC had the following features: Windows 10 operating system, Intel Core i7-8750H CPU at up to 4.1 GHz, 16 GB RAM, and an Nvidia GTX 1070 GPU. The VR-BCI system was integrated into Unity (Unity Technologies) and coded in C#.

The VR environment, which comprised a virtual avatar and a room, was shown through the HMD. The human avatar was designed in MakeHuman (The MakeHuman Team), the VR room was designed in Unity, and the virtual arm animations were created in Blender (Blender Foundation). The scene included a red ball that interacted with the virtual arms, as shown in Figure 1(d). Each arm animation comprised the movement itself (lasting 1 second) and the return to the neutral position (lasting 1 second).

2.3. Experimental Procedure

The electrotactile stimulation intensity was calibrated before the beginning of the experiment. The pulse width (tw) was 0.2 milliseconds, and the voltage boot-up level (hi) was 5 V above the voltage level (hf). The voltage level started at 1 V and was increased in 1 V steps until the subject felt the stimulation without producing muscle contraction; it was between 1 V and 3 V for most subjects. The experiment did not require contracting the hand muscles, and the electrotactile stimulation was simply a means to provide interaction between the subject and the BCI. The eight electrode positions and their pulse waveforms are shown in Figure 1(b). The electrodes used for grasping were 0, 1, 3, 4, 6, and 7, and those used for flexion and extension were 0, 1, 6, and 7; the stimulation pattern did not differ between the two movements. These electrodes were chosen according to the associated MI task.

Active electrodes were positioned in a cap and mounted on the subject's scalp, and conductive gel was applied to each electrode. An HMD was then placed over the subject's head. The impedance of each electrode was checked to be below 10 kΩ. The experimental setup preparation took between 15 and 20 minutes.

During the BCI calibration stage, the experiment was carried out following the timeline shown in Figure 2(a). At the beginning of each trial, a green cross (side cue) was displayed randomly at the left or right position to indicate the limb for the MI task. Then, a virtual arm animation (MI cue) showed the requested MI task. The subject started the kinesthetic MI task when the virtual arm animation stopped and the red ball disappeared; the MI was performed repeatedly for 6 seconds. Afterward, the virtual arm animation of the performed MI was shown as a visual reinforcement (R); the electrotactile stimulation was added at the beginning of the reinforcement if the subject belonged to the electrotactile stimulation group. Finally, a blue line (end cue) indicated the end of the trial. This BCI system was calibrated for hand MI tasks, namely grasping, flexion, and extension. There were two runs for each MI task, each run comprising 20 trials (10 trials for each limb) with a 1-minute break between runs. The flexion and extension MI tasks were performed in the same run. This stage lasted about 22 minutes.

The second stage was training and consisted of feature extraction and classifier training, as shown in Figure 2(b), using the EEG data recorded in the calibration session. The feature extraction comprised common spatial pattern (CSP) filtering and normalized log-variance; then, a support vector machine (SVM) classifier was trained. These methods are detailed in the next section. There was a break of 5 minutes between the calibration and test sessions.

During the test stage, each trial followed the timeline shown in Figure 2(c). At the beginning, a green cross and a virtual arm animation indicated the limb and the MI task, similar to the calibration session. The subject started the kinesthetic MI task when the virtual arm animation stopped and the red ball disappeared; the MI was performed repeatedly for 2 seconds. Afterward, the virtual arm animation corresponding to the classifier's output was shown as visual feedback (F); the electrotactile stimulation was added at the beginning of the feedback if the subject belonged to the electrotactile stimulation group. Finally, a blue line indicated the end of the trial. The VR-BCI system was evaluated with one run for each hand MI task: grasping, flexion/extension, and a random combination of the two. The random MI task run used the classifiers trained for the other two MI tasks. Each run consisted of 20 trials (10 trials for each limb) with a 1-minute break between runs. This stage lasted about 12 minutes.

Finally, the subjects answered the questionnaire about system perception shown in Table 1 and detailed in Section 2.6. This stage lasted around 5 minutes. The time required to disassemble the experimental setup was around 10 minutes. Each subject carried out the whole experiment on the same day, and the total time was about 1 hour and 15 minutes.

2.4. Signal Processing

Figures 2(b) and 2(c) show the signal processing for the training and test stages, respectively. EEG data were sampled at a frequency of 512 Hz and filtered using an eighth-order Butterworth bandpass filter with cutoff frequencies of 0.5 and 30 Hz and a fourth-order 50 Hz notch filter.
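
The following is a minimal C# sketch, not the authors' implementation, of how such an IIR filter (Butterworth bandpass or notch) could be applied to one EEG channel. The coefficient arrays b and a are assumed to be designed offline (e.g., in MATLAB), with a[0] = 1 and equal lengths; the routine is a standard Direct Form II transposed realization.

```csharp
// Illustrative IIR filtering helper (assumed implementation, not the authors' code).
public static class IirFilter
{
    // b, a: filter coefficients of length order + 1, with a[0] = 1 (assumed, order >= 1).
    // x: one EEG channel; returns the filtered signal.
    public static double[] Apply(double[] b, double[] a, double[] x)
    {
        int order = b.Length - 1;
        var z = new double[order];          // internal filter state
        var y = new double[x.Length];
        for (int n = 0; n < x.Length; n++)
        {
            double xn = x[n];
            double yn = b[0] * xn + z[0];
            for (int k = 1; k < order; k++)
                z[k - 1] = b[k] * xn - a[k] * yn + z[k];
            z[order - 1] = b[order] * xn - a[order] * yn;
            y[n] = yn;
        }
        return y;
    }
}
```

In practice, the same routine would be applied channel by channel, once with the bandpass coefficients and once with the notch coefficients.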

The feature extraction consisted of CSP filtering and the log-variance. CSP filtering is a popular and useful method applied in BCI systems based on oscillatory activity such as MI. It provides high classification performance, is numerically efficient, and is simple to implement [40]. The CSP method is based on the calculation of a transformation matrix W that maximizes the variance of the spatially filtered EEG data belonging to one class while minimizing it for the other class; in our case, the classes were the EEG data of the left and right limb MI tasks. The EEG data matrix X is transformed into a matrix Z of spatially filtered signals as $Z = W^{\top}X$ (equation (1)). W is a square matrix, the dimensions of which depend on the number of channels, and its columns are spatial filters. Three pairs of spatial filters of the 16 × 16 CSP transformation matrix W were applied to the EEG data. The first three spatial filters generate the maximum variance in the filtered EEG signals during left limb MI, and the last three spatial filters generate it during right limb MI [11, 41–43].

Then, the feature vector f comprised the normalized log-variance of the spatially filtered signals, as shown in equation (2):

$$f_p = \log\!\left(\frac{\operatorname{var}(Z_p)}{\sum_{i=1}^{6}\operatorname{var}(Z_i)}\right), \quad p = 1, 2, \ldots, 6,$$

where $Z_p$ is the signal obtained with the p-th selected spatial filter.
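
As an illustration of this feature extraction step, the C# sketch below (an assumed implementation, not the authors' code) projects one trial through six precomputed CSP spatial filters and computes the normalized log-variance features of equation (2). The data layout (channels × samples), array shapes, and function name are assumptions for illustration.

```csharp
using System;

// Illustrative CSP projection and normalized log-variance features (equation (2)).
public static class CspFeatures
{
    // W: 6 x channels matrix of selected spatial filters (assumed precomputed from calibration data).
    // X: channels x samples EEG segment. Returns the 6-dimensional feature vector f.
    public static double[] Extract(double[][] W, double[][] X)
    {
        int nFilters = W.Length;            // 6 spatial filters (3 per class)
        int nChannels = X.Length;           // 16 EEG channels
        int nSamples = X[0].Length;

        var variance = new double[nFilters];
        double varianceSum = 0.0;
        for (int p = 0; p < nFilters; p++)
        {
            // Spatially filtered signal z_p[n] = sum_c W[p][c] * X[c][n]
            var z = new double[nSamples];
            double mean = 0.0;
            for (int n = 0; n < nSamples; n++)
            {
                double s = 0.0;
                for (int c = 0; c < nChannels; c++) s += W[p][c] * X[c][n];
                z[n] = s;
                mean += s;
            }
            mean /= nSamples;

            double sqSum = 0.0;
            for (int n = 0; n < nSamples; n++) sqSum += (z[n] - mean) * (z[n] - mean);
            variance[p] = sqSum / nSamples;
            varianceSum += variance[p];
        }

        // f_p = log(var(z_p) / sum_i var(z_i)), p = 1..6
        var f = new double[nFilters];
        for (int p = 0; p < nFilters; p++) f[p] = Math.Log(variance[p] / varianceSum);
        return f;
    }
}
```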

The SVM classifier has been demonstrated to be efficient for discriminating between two motor-imagery classes, and it is the standard classification method for binary-class BCIs based on MI owing to its fast and computationally efficient training [43–45]. An SVM classifier was trained to discriminate between the left and right limbs for the grasping and flexion/extension MI tasks. This classifier was configured with a radial basis function kernel using the LibSVM library [46] in C#.
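
As a hypothetical illustration of this configuration, the sketch below assumes a C# LibSVM port that mirrors the reference LibSVM Java/C API (svm_node, svm_problem, svm_parameter, svm.svm_train); the actual class and method names depend on the specific binding used, and the gamma and C values here are placeholders, not the authors' settings.

```csharp
using libsvm;   // assumed namespace of the LibSVM port (hypothetical)

// Illustrative C-SVC training with an RBF kernel on the CSP log-variance features.
public static class MiClassifier
{
    // features: one 6-dimensional vector per one-second segment; labels: -1 = left limb, +1 = right limb.
    public static svm_model Train(double[][] features, double[] labels)
    {
        var problem = new svm_problem
        {
            l = features.Length,
            y = labels,
            x = new svm_node[features.Length][]
        };
        for (int i = 0; i < features.Length; i++)
        {
            problem.x[i] = new svm_node[features[i].Length];
            for (int j = 0; j < features[i].Length; j++)
                problem.x[i][j] = new svm_node { index = j + 1, value = features[i][j] };
        }

        var param = new svm_parameter
        {
            svm_type = svm_parameter.C_SVC,
            kernel_type = svm_parameter.RBF,   // radial basis function kernel, as stated above
            gamma = 1.0 / 6.0,                 // placeholder: 1 / number of features
            C = 1.0,                           // placeholder regularization constant
            cache_size = 100,
            eps = 1e-3
        };
        return svm.svm_train(problem, param);
    }
}
```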

For the SVM classifier training, the EEG dataset recorded for each MI task during the calibration stage was reordered randomly. Then, the dataset was divided into 80% for training and 20% for testing. These subdatasets were reshaped into one-second segments with 90% overlap (sliding window method [47–49]). The random reordering of the subdatasets was optimized by genetic algorithms in MATLAB (The MathWorks, Inc.); the function to maximize was the classification accuracy of the trained SVM on the test subdataset. Then, the SVM model was validated using fivefold cross-validation on the training subdataset.
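
The sliding-window segmentation can be sketched as follows. The one-second window at 512 Hz and the 90% overlap follow the description above, while the data layout (channels × samples) and the function name are assumptions for illustration only.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sliding-window segmentation of a continuous trial (assumed implementation).
public static class SlidingWindow
{
    // trial: channels x samples EEG data; returns one-second segments with the given overlap.
    public static List<double[][]> Segment(double[][] trial, int fs = 512, double overlap = 0.9)
    {
        int windowLength = fs;                              // one-second segments
        int step = (int)(windowLength * (1.0 - overlap));   // 90% overlap -> ~51-sample step
        int nChannels = trial.Length;
        int nSamples = trial[0].Length;

        var segments = new List<double[][]>();
        for (int start = 0; start + windowLength <= nSamples; start += step)
        {
            var segment = new double[nChannels][];
            for (int c = 0; c < nChannels; c++)
            {
                segment[c] = new double[windowLength];
                Array.Copy(trial[c], start, segment[c], 0, windowLength);
            }
            segments.Add(segment);
        }
        return segments;
    }
}
```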

The optimized SVM classifier predicted the MI tasks in the test stage; the EEG data recorded during the test trials were also reshaped into one-second segments with 90% overlap. The virtual arm animation displayed as visual feedback was shown partially (biased feedback [14]) depending on the percentage of one-second segments classified correctly; thus, the trial prediction depended on the accuracy over the one-second segments. In addition, no animation was shown if the prediction was wrong (error ignoring [14]).
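
One possible reading of this biased-feedback and error-ignoring logic is sketched below. The class coding (±1), the majority-vote threshold, and the names are illustrative assumptions rather than the authors' exact rules.

```csharp
// Illustrative per-trial decision from the one-second segment predictions (assumed logic).
public static class FeedbackDecision
{
    // segmentPredictions: classifier output per one-second segment (+1 = right limb, -1 = left limb).
    // cuedClass: the class requested by the cue. Returns the trial prediction and how much of
    // the feedback animation to show (0 means no animation, i.e., error ignoring).
    public static (int predictedClass, double animationFraction) Decide(int[] segmentPredictions, int cuedClass)
    {
        int votesForCue = 0;
        foreach (int p in segmentPredictions)
            if (p == cuedClass) votesForCue++;

        double fractionForCue = (double)votesForCue / segmentPredictions.Length;
        int predictedClass = fractionForCue >= 0.5 ? cuedClass : -cuedClass;

        // Biased feedback: animation proportional to the correctly classified segments;
        // error ignoring: no animation when the trial prediction is wrong.
        double animationFraction = predictedClass == cuedClass ? fractionForCue : 0.0;
        return (predictedClass, animationFraction);
    }
}
```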

2.5. Analysis

First, the classification accuracy and the overall BCI performance were calculated for the test stage. The overall BCI performance was measured by the information transfer rate (ITR) [42]. The ITR depends on the accuracy and is defined by [50]

$$B = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1 - P)\log_2\!\left(\frac{1 - P}{N - 1}\right)\right].$$

Here, N is the number of classes, P is the classification accuracy (expressed as a proportion), T is the time required for one classification in seconds, and B is the ITR in bits per minute (bpm).
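
The ITR can be computed directly from this definition; the helper below follows the formula as reconstructed above and assumes the accuracy is passed as a proportion (e.g., 0.93) rather than a percentage.

```csharp
using System;

// Wolpaw-style ITR in bits per minute, following the definition above.
public static class InformationTransferRate
{
    public static double BitsPerMinute(int nClasses, double accuracy, double secondsPerClassification)
    {
        double bitsPerTrial = Log2(nClasses);
        if (accuracy > 0) bitsPerTrial += accuracy * Log2(accuracy);
        if (accuracy < 1) bitsPerTrial += (1 - accuracy) * Log2((1 - accuracy) / (nClasses - 1));
        return bitsPerTrial * 60.0 / secondsPerClassification;
    }

    private static double Log2(double x) => Math.Log(x) / Math.Log(2.0);
}
```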

Second, the frequency spectrum was calculated and assessed by the following measurements. The generated ERD power was evaluated by the relative power for all channels. The relative power can normalize the PSD, eliminating offsets and reducing the power variability. Then, the relative power (RP) was computed using [33–35]

$$\mathrm{RP} = \frac{P_{\mathrm{MI}} - P_{\mathrm{rest}}}{P_{\mathrm{rest}}} \times 100\%.$$

Here, $P_x$ is the PSD in dB during the event x, with x denoting the MI period or the rest reference period.
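
A small helper matching this relative-power definition could look as follows; taking the MI-period and reference-period PSD values as inputs reflects the reconstruction above and is an assumption, not the authors' exact code.

```csharp
// Relative power (%) of the MI period with respect to the reference period, per the definition above.
public static class RelativePower
{
    public static double Compute(double psdDuringMi, double psdDuringReference)
    {
        return (psdDuringMi - psdDuringReference) / psdDuringReference * 100.0;
    }
}
```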

The coefficient of determination $r^2$ was computed to find power differences [51, 52] between the grand average PSDs of the two groups. $r^2$ is defined as follows [53, 54]:

$$r^2 = \frac{\operatorname{cov}(x, y)^2}{\sigma_x^2\,\sigma_y^2},$$

where x is the observed signal, y is the predicted signal, $\sigma_x^2$ is the variance of x, $\sigma_y^2$ is the variance of y, and $\operatorname{cov}(x, y)$ is the covariance between x and y. The $r^2$ range is from 0 to 1 [55]. If the $r^2$ value is close to 1, there is very good discrimination, whereas an $r^2$ value close to 0 indicates that the signals can hardly be distinguished [54].
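
The $r^2$ measure can be computed from the sample variances and covariance, as in this sketch; estimating them sample by sample over two equally long signals is an illustrative assumption.

```csharp
// Coefficient of determination r^2 between two equally long signals, per the definition above.
public static class CoefficientOfDetermination
{
    public static double Compute(double[] x, double[] y)
    {
        int n = x.Length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n; meanY /= n;

        double varX = 0, varY = 0, cov = 0;
        for (int i = 0; i < n; i++)
        {
            varX += (x[i] - meanX) * (x[i] - meanX);
            varY += (y[i] - meanY) * (y[i] - meanY);
            cov  += (x[i] - meanX) * (y[i] - meanY);
        }
        varX /= n; varY /= n; cov /= n;

        return (cov * cov) / (varX * varY);   // r^2 in [0, 1]
    }
}
```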

Additionally, the hemispheric asymmetry is related to the performance of fine motor tasks, and left hemisphere changes are related to motor learning. Then, the hemispheric asymmetry was calculated over the motor brain regions as the difference between the mean PSD of the right (FC4, C4, and CP4) and left (FC3, C3, and CP3) channels [38, 56].
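
This asymmetry measure can be expressed compactly; the sketch below assumes the mean PSD per channel is already available as a channel-name-to-value map, which is an illustrative data structure rather than the authors' code.

```csharp
using System.Collections.Generic;

// Hemispheric asymmetry: mean PSD of right motor-region channels minus that of left ones.
public static class HemisphericAsymmetry
{
    private static readonly string[] RightChannels = { "FC4", "C4", "CP4" };
    private static readonly string[] LeftChannels  = { "FC3", "C3", "CP3" };

    public static double Compute(IReadOnlyDictionary<string, double> psdByChannel)
    {
        double Mean(string[] channels)
        {
            double sum = 0;
            foreach (var ch in channels) sum += psdByChannel[ch];
            return sum / channels.Length;
        }
        return Mean(RightChannels) - Mean(LeftChannels);
    }
}
```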

Then, the statistical analysis looked into differences between groups. The Shapiro–Wilk (S–W) test was applied to verify the normal distribution; the S–W test is commonly used for small samples of fewer than 50 data points. If the data passed the S–W test, analysis of variance (ANOVA) [55] was used to find statistically significant differences. The repeated measures ANOVA evaluated the overall differences between groups; if the repeated measures model failed Mauchly's test of sphericity, the Greenhouse–Geisser correction was computed. Then, the post hoc Bonferroni correction was used for pairwise comparisons. The significance level was 5% for the methods mentioned above [38].

On the other hand, if the data did not pass the S–W test, nonparametric statistical tests were used; the Friedman test assessed the overall differences between groups. Then, the nonparametric Wilcoxon rank-sum test was adopted to find statistically significant differences in pairwise comparisons. The significance level was also 5% for the nonparametric methods [38, 55].

Finally, the Spearman correlation was used to find relationships between the relative power of each channel and the perception levels, with a significance level of 5% [38].

2.6. Questionnaire

Each subject responded to the questionnaire presented in Table 1. The questionnaire consisted of nine questions, and the answers were given on a Likert scale from 1 to 7 [57]. The questionnaire was designed to assess the subject's perception of the system during the BCI sessions; the questions yielded levels for the ownership perception (mean of Q1 and Q2), immersion perception (mean of Q3 and Q4), attention (mean of Q5 and Q6), and difficulty (mean of Q7, Q8, and Q9).

3. Results

All subjects performed the three MI tasks (grasping, flexion/extension, and random) during the test stage; flexion and extension were grouped as one MI task for this analysis owing to their similar brain responses. The random MI task was intended to evaluate the subject's ability to perform MI tasks of a different nature in the same run.

3.1. Classification Performance

The mean cross-validation accuracy of both feedback groups for the grasping and flexion/extension MI tasks was above 85%, with SD lower than 8%. These results validated the SVM classifier model.

Table 2 presents the accuracy and F1-score of both feedback groups during the test stage for the grasping, flexion/extension, and random MI tasks. The mean accuracy was close to the mean F1-score for all MI tasks; i.e., there was a balance of correct classifications. The VES feedback group achieved a greater mean accuracy and a lower SD for the grasping (93.00% ± 3.50%) and flexion/extension (95.00% ± 5.27%) MI tasks than the visual feedback group (grasping: 87.50% ± 4.25%, flexion/extension: 91.50% ± 7.09%). On the other hand, the mean accuracy for the random MI task was similar in both feedback groups; however, the variability of the VES feedback group was greater than that of the visual feedback group.

The S–W test verified the normal distribution of the accuracy; the VES feedback group did not pass the S–W test for the grasping MI task. Then, the Friedman test found overall statistical differences (χ² = 4.52, p < 0.05) between the accuracies of both feedback groups. Thus, the nonparametric Wilcoxon rank-sum test found statistical differences between both feedback groups for the grasping MI task (p < 0.05); however, there were no differences for the flexion/extension and random MI tasks (p > 0.05).

Additionally, Table 3 shows the overall VR-BCI performance calculated by the information transfer rate. The mean information transfer rate of the VES feedback group was higher than that of the visual feedback group for all MI tasks; however, the SD of the VES feedback group for the random MI task was higher.

3.2. Frequency Spectrum

The frequency spectrum was calculated from the spectrogram of the EEG data recorded from each subject during the test stage for the grasping and flexion/extension MI tasks of both feedback groups. The PSD decreases in the μ- and central β-bands mainly owing to movement execution or MI [33, 42]. Thereby, the spectrograms verified that the PSD decreased during both MI tasks. Also, the PSD for both MI tasks of the VES feedback group was less intense than that of the visual feedback group. Thus, the VES feedback group reached a lower ERD level than the visual feedback group.

On the other hand, the coefficient of determination [51–53] between the grand average PSDs in the μ- and central β-bands of both feedback groups for both MI tasks in each channel was close to 1; thus, there was good discrimination between the two feedback groups [54].

3.3. Relative Power

The mean relative power was calculated using the PSD of the EEG data recorded from each subject during the test stage for the grasping and flexion/extension MI tasks of both feedback groups. The grand average relative power in the μ- and central β-bands of both feedback groups in the premotor (FC3 and FC4 channels), primary motor (C3 and C4 channels), and somatosensory (CP3 and CP4 channels) cortices is shown in Figures 3 and 4, without considering the negative sign. The premotor cortex is related to movement preparation, the primary motor cortex is related to motor execution and motor imagery [58, 59], and the somatosensory cortex is related to sensory perception [60].

The VES feedback group had a greater grand average relative power in the β-band (approximately 72.71%) for the premotor cortex than the visual feedback group for both MI tasks. Thereby, the ERD over the movement preparation region of the VES feedback group was more intense. The grand average relative power of both bands (approximately 75.45%) for the primary motor cortex was similar for both MI tasks; thus, the MI performance was also similar for both feedback groups. In contrast, the visual feedback group had a greater grand average relative power of both bands (approximately 77.89%) for the somatosensory cortex than the VES feedback group. Thereby, the ERD over the sensory perception region of the VES feedback group was weaker, verifying the results of [61]. Also, both feedback groups achieved relative powers higher than those of similar approaches (approximately 40%) [34, 35] in the μ- and central β-bands.

The mean relative power in each channel for both feedback groups did not pass the S–W test of normal distribution. The Friedman test found overall statistical differences (p < 0.05) between both feedback groups for both MI tasks and both bands. Thus, the Wilcoxon rank-sum test found statistical differences (p < 0.05) between both feedback groups in channel CP3 for the grasping and flexion/extension MI tasks in both the μ- and central β-bands. Also, there were statistical differences in channel CP4 in the central β-band (p < 0.05) for the grasping MI task. The sensory perception region was thus affected by the type of feedback, as mentioned previously.

3.4. Hemispheric Asymmetry

The mean hemispheric asymmetry was calculated using the mean PSD of the EEG data recorded over the left and right motor brain regions of each subject during the calibration and test stages for the grasping and flexion/extension MI tasks of both feedback groups. Figures 5 and 6 show the grand average hemispheric asymmetry in both bands for both feedback groups. An increase in hemispheric asymmetry in the sessions with feedback (test stage) compared with the sessions without feedback (calibration stage) was verified [8, 9]. Also, the hemispheric asymmetry of the VES feedback group was greater than that of the visual feedback group.

The mean hemispheric asymmetry in both bands for both stages and feedback groups did not pass the S–W test of normal distribution. Thus, the Friedman test did not find overall statistical differences (p > 0.05) between both feedback groups.

3.5. System Perception

The subjects of both feedback groups answered the questionnaire about system perception; the answers on the Likert scale are shown in Figure 7. Also, Table 4 shows the mean perception levels of both feedback groups. The mean immersion and ownership perception levels of the visual feedback group were higher than those of the VES feedback group; besides, the VES feedback group had a higher mean attention level and a lower mean difficulty level than the visual feedback group. Most subjects of both feedback groups felt very high levels of immersion perception and attention.

The attention level did not pass the S–W test of normal distribution. Then, the Friedman test did not find overall statistical differences (χ² = 0.93, p > 0.05) between both feedback groups.

On the other hand, the Spearman correlation found significant correlations between the perception levels and the channels' relative power in both bands for both MI tasks, as shown in Table 5. Regarding the grasping MI task, the ownership perception level was correlated with the premotor cortex, and the attention level was correlated with the somatosensory and prefrontal cortices. The difficulty level was correlated with the somatosensory, prefrontal, and visual cortices. For the flexion/extension MI task, the attention level was correlated with the somatosensory cortex, and the difficulty level was correlated with the premotor, somatosensory, prefrontal, and visual perception regions. Additionally, the attention and difficulty levels were inversely correlated (ρ = −0.45, p < 0.05).

4. Discussion

This work investigated the feasibility of applying electrotactile stimulation along with visual feedback by comparing VES feedback with visual feedback alone.

Regarding the classification accuracy, the VES feedback group had the best results for the grasping and flexion/extension MI tasks; however, there were statistical differences between the classification accuracies of both feedback groups only for the grasping MI task. The response to the electrotactile stimulation feedback could be related to the muscles involved in the movements. Grasping is a more complex movement than flexion/extension; it activates brain regions related to finger movements that are proximate to or overlap with those related to flexion/extension, which involve wrist movement [62, 63]. Thus, performing flexion/extension MI was easier than grasping MI and may not have needed additional stimulation during feedback. On the other hand, both feedback groups had a similar mean classification accuracy for the random MI task; however, the VES feedback group had a greater dispersion than the visual feedback group. Performing two MI tasks in a random order required more concentration, and the electrotactile stimulation could distract the subject. Moreover, most subjects achieved high accuracy (above 80%) for the random MI task. This showed the feasibility of increasing the BCI control options by combining different MI tasks for the left and right limbs in an immersive environment, as in similar motor rehabilitation BCI [37, 38] and cognitive training BCI systems [64, 65].

In the frequency spectrum, there were differences between the PSDs of both feedback groups. The channels' PSD of the VES feedback group was lower than that of the visual feedback group; the lower PSD in the occipital α-band (8–12 Hz) for the VES feedback group indicated a higher attention level during the test trials [66]. Thus, subjects experiencing proprioceptive feedback paid more attention to the VR-BCI system than those with visual feedback [11]. Also, there were statistical differences between the somatosensory cortex relative powers of both feedback groups. The VES feedback group had a lower relative power than the visual feedback group, verifying that the β-band tends to maintain the current sensorimotor state; the electrical stimulation produced a change of the status quo by decreasing the β-band activity [67]. On the other hand, the hemispheric asymmetry of the PSD can be modulated and increased during feedback sessions [8], improving the performance of fine motor tasks and triggering motor learning changes [56]. Thereby, our system could promote interhemispheric interaction in patients with affected hemispheric differences [38]. It could also contribute to motor learning transfer by using the healthy hand to improve the paretic hand movements [9]. Therefore, the reached relative power and hemispheric asymmetry can contribute to different learning processes [68] and to restoring motor and cognitive functions [64, 69].

The questionnaire answers confirmed that the subjects of the VES feedback group paid more attention, which was related to the decrease in the difficulty level of performing MI tasks. The attention and difficulty levels were correlated with the relative power of the somatosensory channels, verifying the effect of electrotactile stimulation in decreasing the relative power in this brain region [67]. On the other hand, the body ownership illusion results from the interaction between sensory inputs and internal body models; however, the immersion and ownership perception levels influenced by the VR, as mentioned in [37, 38], were higher without proprioceptive feedback. Thus, realistic visual feedback generated a higher ownership perception than cutaneous perception did; it could attenuate proprioceptive signals from the skin, altering the subject's somatosensory perception [70].

Our VR-BCI system with realistic visual feedback provided richer and more explanatory feedback during BCI training than conventional BCI systems, which could reduce BCI illiteracy and performance deficiency in naive subjects [71]. Also, the modulation of the sensorimotor rhythms was influenced by the realistic visual feedback, the positively biased feedback, and the sense of embodiment, which reinforced MI learning in the short term to improve the BCI performance [14]. In addition, the electrotactile stimulation and the positively biased feedback contributed to enhancing the interaction with the VR environment through their influence on the subject's motivation and confidence; thus, they improved MI learning. Overall, the VES feedback enhanced the VR-BCI with visual feedback in several respects, such as the classification accuracy of MI tasks, the subject's attention during MI, and the MI preparation.

This work was limited by the lack of pretraining sessions, the small number of test-stage sessions, and the number of subjects. In pretraining sessions, subjects can experience the difficulty of MI and practice MI skills; they can then explore strategies and learn better in the subsequent calibration sessions [14, 71]. Also, they could improve their MI performance with feedback over more test sessions; thus, the differences between the visual and VES feedback could increase. Moreover, the results would be more reliable with a larger number of subjects. However, other works evaluated BCI systems with a similar number of subjects [11, 15, 20, 21, 57].

The proposed immersive system will be assessed with poststroke patients in future research, considering the limitations mentioned above. The VR-BCI would be updated according to the patients’ constraints. In addition, cognitive background and spatial abilities should be considered to predict the subject’s response to the feedback and the BCI performance [72, 73].

5. Conclusions

The visual-electrotactile feedback was assessed and compared with visual feedback to discriminate MI tasks between both limbs; the MI tasks were grasping, flexion/extension, and a random combination of the two. Visual-electrotactile feedback improved the mean classification accuracy for the grasping (93.00% ± 3.50%) and flexion/extension (95.00% ± 5.27%) MI tasks and reached higher information transfer rates for the three MI tasks (maximum of 4.56 ± 1.38 bpm). In addition, the subjects achieved an acceptable mean classification accuracy (maximum of 86.5% ± 5.80%) for the random MI task; however, it was lower than that of the other MI tasks. Since the random MI task required more concentration, the electrotactile stimulation could distract the subject during MI performance, generating a greater dispersion of the classification accuracy. There were statistical differences between the classification accuracies of both feedback groups only for the grasping MI task.

The proprioceptive feedback maintained a lower mean PSD in all channels and higher attention levels than the visual feedback during the test trials for the grasping and flexion/extension MI tasks. This feedback also generated a greater relative power in the β-band for the premotor cortex, which indicated better MI preparation. On the other hand, the hemispheric asymmetry was lower for the visual-electrotactile feedback; however, there were no statistical differences between both feedback groups. Thus, both feedback types can contribute to motor and cognitive learning processes. Also, the questionnaire confirmed a higher attention level and a lower difficulty level for the visual-electrotactile feedback, whereas the immersion and ownership perception levels were higher for the visual feedback. However, there were also no statistical differences between the system perception levels of both feedback groups.

Therefore, the use of electrotactile stimulation along with visual feedback enhanced the immersive VR-BCI classification performance. It also retained the subject’s attention and eased motor imagery better than visual feedback alone.

Data Availability

The datasets recorded and used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by FONDECYT from CONCYTEC, Peru, under Contract 112-2017, and in part by the Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research (B) under Project 18H01399.