Computational Intelligence and Neuroscience / 2019

Special Issue: Ergonomic Issues in Brain-Computer Interface Technologies: Current Status, Challenges, and Future Direction

Review Article | Open Access | Volume 2019 | Article ID 3807670

Zina Li, Shuqing Zhang, Jiahui Pan, "Advances in Hybrid Brain-Computer Interfaces: Principles, Design, and Applications", Computational Intelligence and Neuroscience, vol. 2019, Article ID 3807670, 9 pages, 2019.

Advances in Hybrid Brain-Computer Interfaces: Principles, Design, and Applications

Guest Editor: Hyun J. Baek
Received: 20 Jun 2019
Revised: 09 Sep 2019
Accepted: 17 Sep 2019
Published: 08 Oct 2019


Conventional brain-computer interface (BCI) systems face two fundamental challenges: limited detection performance and the difficulty of generating multiple control commands. To address these challenges, researchers have proposed hybrid brain-computer interfaces (hBCIs). This paper discusses the research progress of hBCIs and reviews three types: hBCIs based on multiple brain patterns, multisensory hBCIs, and hBCIs based on multimodal signals. By analyzing the general principles, paradigm designs, experimental results, advantages, and applications of recent hBCI systems, we find that hBCI technology can improve the detection performance of a BCI and achieve multidegree/multifunctional control, making it significantly superior to single-mode BCIs.

1. Introduction

A brain-computer interface (BCI) is a technology that translates signals generated by brain activity into control signals without the involvement of peripheral nerves and muscles and uses these signals to control external devices [1]. In recent years, BCIs have attracted increasing attention from academia and the public due to their potential clinical applications. For example, a BCI can provide augmented or restored motor function, which can be of great help to patients with severe motor impairment. The most commonly used methods of acquiring brain signals are noninvasive, including functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS) [2]. Although EEG has a low signal-to-noise ratio and low spatial resolution, it has been widely used in BCIs because of its noninvasiveness, portability, low cost, good real-time performance, and lower technical requirements compared with other brain signal acquisition methods. This paper mainly describes EEG-based BCIs. Brain patterns used in EEG-based hybrid BCIs typically include the P300 visual-evoked potential, first exploited by Farwell and Donchin in 1988 [3]; steady-state evoked potentials, such as the steady-state visual-evoked potential (SSVEP) [4]; and event-related desynchronization/synchronization (ERD/ERS) generated by motor imagery (MI) [5].

Conventional EEG-based BCIs generally rely on a single signal input (such as EEG, electromyography (EMG), or electrooculogram (EOG)), a single sensory stimulus (visual only, auditory only, or tactile only), or a single brain pattern (such as the P300 potential or SSVEP described above). Single-mode BCI systems have achieved great progress in paradigm design, brain signal processing algorithms, and applications. However, these systems still face multiple challenges, including low information transfer rates (ITRs), low man-machine adaptability, and the high dynamics/nonstationarity of brain signals [6, 7]. Here, we consider two fundamental challenges and introduce the hybrid BCI technique intended to address them:

(1) Multidegree/multifunction control: multidegree/multifunctional control is necessary for many devices, such as wheelchairs, robots, or artificial limbs. For instance, wheelchair control includes speed, direction, and start/stop functions. However, it is difficult for a conventional single-mode BCI to generate multiple effective control signals [8].

(2) Improvement of detection performance: although many efforts have been made over the years to improve the detection performance of BCIs, performance in terms of classification accuracy, information transfer rate (ITR), and false-positive rate (FPR) is still far from practical in many applications, especially for patients. Approximately 13% of healthy users suffer from BCI illiteracy and do not reach the criterion for controlling a BCI application [9]. Moreover, user acceptability and the complexity of BCI systems should also be reported as important performance criteria.

To overcome these two fundamental challenges, some researchers have proposed the hybrid BCI (hBCI). As described by Allison [8], an hBCI system consists of one BCI system and an add-on system, which can be a second BCI system, designed to achieve specific goals better than a conventional BCI. The main goal of the hBCI is to overcome the existing limitations and disadvantages of conventional BCI systems. In this paper, recent progress in hBCIs is reviewed to illustrate how hBCI techniques can address these challenges. The definition of the hybrid BCI is updated and extended, and three main types of hBCIs are identified. For each type, we summarize the principle and highlight several representative hybrid BCI systems by analyzing their paradigm designs, control methods, and experimental results. Finally, future prospects and research directions for hBCIs are discussed.

2. Hybrid BCI Overview

Although the concept of the hBCI emerged before 2010, its development has accelerated in recent years. A search of the Web of Science on title-abstract-keyword ((“brain-computer interface” or “BCI”) and (“hybrid” or “multimodal”)) finds only three journal papers before 2010. This number rose to 148 and 293 in the periods 2010–2014 and 2015–2019, respectively, so the number of publications on hBCIs has clearly grown rapidly in recent years. Note that studies of single-mode BCIs that improve performance merely by combining features or algorithms are excluded from this count. In fact, “hybrid BCI” and “multimodal BCI” are two highly related concepts; Li et al. [9] even considered “hybrid BCI” and “multimodal BCI” to be interchangeable terms with the same definition.

Pfurtscheller et al. [10] argued that, beyond being a simple combination of different BCIs, a hybrid BCI should meet the following four criteria: (1) the activity comes directly from the brain; (2) at least one brain signal acquisition method must be used to capture this activity, which may take the form of electrical, magnetic, or hemodynamic changes; (3) the signals must be processed in real time/online to establish communication between the brain and the computer and to generate control commands; (4) feedback based on the brain activity must be provided for communication and control.

The signal flow of an hBCI system is described in Figure 1 and includes two stages of brain signal processing. (1) In signal acquisition, the input can come from multiple signals (e.g., EEG and NIRS) or multiple brain patterns (e.g., P300 and SSVEP), which may be evoked by multisensory stimuli (e.g., audiovisual stimuli). (2) In signal processing, an hBCI system can provide either a single output/control signal or multiple output/control signals. In the former case, when multiple brain patterns or multiple signals are involved, data fusion is generally required at the feature or decision level. In the latter case, multiple control signals may be separately manipulated by different brain patterns detected by the system, and fusion of these brain patterns is generally unnecessary. As shown in Figure 1, hBCIs can be divided into three main categories:

(1) hBCIs based on multiple brain patterns: this type uses at least two brain patterns (e.g., P300 and SSVEP, or MI and P300), induced by a single sensory stimulus. Several studies have indicated that hybrid integration associated with multimodal stimuli has the potential to enhance brain patterns, which may benefit BCI performance [11].

(2) hBCIs with multisensory stimuli: one or more brain patterns are simultaneously induced by multiple sensory stimuli, such as audiovisual stimuli. Some researchers believe that multisensory BCIs may offer more versatile and user-friendly paradigms for control and feedback [12].

(3) hBCIs based on multiple signals: two or more input signals, such as EEG, MEG, fMRI, fNIRS, EOG, or EMG, are combined in one hybrid BCI system. Different signals have different characteristics and can serve different functions.
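The two fusion options for a single-output hBCI described above, feature-level and decision-level fusion, can be sketched with toy data and scikit-learn classifiers (all shapes, feature values, and the logistic-regression choice here are hypothetical, for illustration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-trial features from two brain patterns
# (e.g., P300 time-domain features and SSVEP frequency-domain features).
n_trials = 200
X_p300 = rng.normal(size=(n_trials, 20))
X_ssvep = rng.normal(size=(n_trials, 8))
y = rng.integers(0, 2, size=n_trials)
# Inject a weak class-dependent shift so the toy problem is learnable.
X_p300[y == 1, :5] += 0.8
X_ssvep[y == 1, :2] += 0.8

# (1) Feature-level fusion: concatenate feature vectors, train one classifier.
X_fused = np.hstack([X_p300, X_ssvep])
clf_fused = LogisticRegression(max_iter=1000).fit(X_fused, y)

# (2) Decision-level fusion: train one classifier per modality,
# then average their posterior probabilities before deciding.
clf_a = LogisticRegression(max_iter=1000).fit(X_p300, y)
clf_b = LogisticRegression(max_iter=1000).fit(X_ssvep, y)
p_fused = (clf_a.predict_proba(X_p300) + clf_b.predict_proba(X_ssvep)) / 2
y_decision = p_fused.argmax(axis=1)

print(clf_fused.score(X_fused, y), (y_decision == y).mean())
```

Feature-level fusion lets one classifier learn cross-modality interactions, while decision-level fusion keeps the modality-specific classifiers independent, which is convenient when one input can degrade (e.g., muscle fatigue).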

The state of the art of these three types of hBCI is introduced in the following sections, including their general principles, stimulus paradigms, control methods, corresponding experimental results, and advantages.

3. hBCI Based on Multiple Brain Patterns

The first class of hBCIs combines multiple brain patterns, such as P300, SSVEP, and MI. It has been designed for a variety of applications, such as spellers [13], idle state detection [14], orthotics [15], wheelchair navigation, and control of computer components, including a two-dimensional (2D) cursor [16], a mouse [17], and a mail client [18]. Table 1 lists representative hBCI applications based on multiple brain patterns in recent years. In this section, we mainly describe hBCIs based on P300 and SSVEP, on MI and SSVEP, and on MI and P300.

Reference | Hybrid mode | Application | Classifiers | Commands | Accuracy (%) | Improvements
[19] | SSVEP, P300, MI | Humanoid machine navigation | CCA | 6 | P300: 84.6; SSVEP: 84.1 | Better command performance in navigation and exploration
[20] | SSVEP, P300 | Wheelchair control with stop command | SVM | 2 | >80 | Higher detection accuracy and low response time
[21] | SSVEP, P300 | Target selection speller | SW-LDA | 9 | 93.3 | More effective in target discrimination
[22] | SSVEP, P300 | Cursor control | SVM | 9 | >90 | Higher accuracy and better command performance
[11] | SSVEP, P300 | Multiple option selection | CCA, LDA | 4 | P300: 99.9; SSVEP: 67.2 | Better performance and user-friendly
[23] | P300, SSVEP | Speller | SW-LDA | 36 | 93.85 | Higher accuracy
[24] | MI, SSVEP | Playing Tetris in an MI-SSVEP paradigm | LDA, CSP, CCA | 4 | MI: 87.01; SSVEP: 90.26 | Higher accuracy
[25] | MI, SSVEP | Hybrid BCI system of MI and SSVEP | LDC | 2 | 85.6 ± 7.7 | Better classification performance
[9] | MI, SSVEP, visual, auditory | Wheelchair control | SVM | 6 | — | Multidegree control commands
[26] | MI, SSVEP | Hybrid BCI system with feedback | LDA | 2 | ≥83 | Better MI training performance
[27] | SSVEP, MI | Control commands | CCA | 5 | MI: 93.3 | Better performance and ease of use
[16] | MI, P300 | 2D cursor control | SVM | 2 | >80 | Multidegree control
[17] | P300, MI | BCI mouse-based web browser | SVM | 3 | 93.21 | Multidegree control with a feasible BCI mouse
[28] | P300, MI | BCI wheelchair with direction and speed control | LDA | 4 | 83.10 ± 2.12 | Direction and speed control

3.1. P300- and SSVEP-Based hBCIs

Both the P300 potential and SSVEP can be elicited by visual stimuli, allowing subjects to evoke both brain patterns by performing a single visual attention task without extra mental load. The P300 and SSVEP features lie in different domains (time domain versus frequency domain), and the two brain patterns are largely independent. Performance improvements may therefore result from utilizing both P300 and SSVEP features: the additional feature can provide complementary information that facilitates the classification of targets versus nontargets.
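To illustrate how the two feature types live in different domains, here is a toy single-channel sketch: a time-domain P300-style feature (a decimated epoch) and a frequency-domain SSVEP-style feature (spectral power at candidate flicker frequencies), concatenated into one hybrid vector. The sampling rate, feature sizes, and synthetic signal are all assumptions:

```python
import numpy as np

fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic single-channel epoch: a 7.5 Hz SSVEP-like oscillation plus a
# P300-like positive deflection around 300 ms, plus noise.
epoch = (0.5 * np.sin(2 * np.pi * 7.5 * t)
         + 1.5 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
         + 0.2 * rng.normal(size=t.size))

# Time-domain P300 feature: block-average (decimate) the epoch.
p300_feat = epoch.reshape(-1, 10).mean(axis=1)   # 25 values

# Frequency-domain SSVEP feature: spectral power near candidate frequencies.
spectrum = np.abs(np.fft.rfft(epoch)) ** 2
freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
ssvep_feat = np.array([spectrum[np.argmin(np.abs(freqs - f))]
                       for f in (6.0, 7.5)])

# One trial's hybrid feature vector, as used for joint classification.
hybrid = np.concatenate([p300_feat, ssvep_feat])
print(hybrid.shape)
```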

Bi et al. [22] proposed a hybrid paradigm based on SSVEP and P300 for developing speed-direction-based cursor control. In this study, the stimulation of the P300 was distributed on the upper and lower sides of the screen, and the stimulus for detecting SSVEP (which can rotate the control device clockwise or counterclockwise) was displayed on the left and right sides of the screen. The results using the method based on the support vector machine classification showed that the accuracy of the hBCI was higher than 90%.

Pan et al. [29] detected consciousness in eight patients with disorders of consciousness (DOC) using a hybrid paradigm of SSVEP and P300. Following the instructions, left- and right-hand photos flickered on a black background at fixed frequencies of 6.0 and 7.5 Hz, respectively, to evoke the patient’s SSVEP. Meanwhile, each of the two photo frames was randomly flashed five times to evoke P300, with each flash lasting 200 ms and the interval between two consecutive flashes being 800 ms. The BCI system used the P300 and SSVEP features to detect which photo the patient was attending to. Eight patients (four in the vegetative state (VS), three in the minimally conscious state (MCS), and one with locked-in syndrome (LIS)) participated in the experiment. Using an SVM-based classifier, one VS patient, one MCS patient, and the LIS patient were able to select photos of themselves or others (classification accuracy, 66%–100%), which indicates that these patients could follow commands using a hybrid BCI and further demonstrates that they retained certain cognitive abilities and awareness.

3.2. MI- and SSVEP-Based hBCIs

There are four reasons to combine SSVEP and MI: (1) SSVEP- and MI-related brain patterns can be produced simultaneously; (2) SSVEP is an evoked potential that can be stably detected in new subjects with little training, whereas most new users find it difficult to master the MI task; (3) SSVEP can be detected from a single trial of EEG data, without an averaging process; (4) training without effective feedback can frustrate subjects, while SSVEP offers a possible way to keep subjects engaged in the MI task.

Based on these principles, Yu et al. [26] combined SSVEP and MI to provide effective continuous feedback for MI training in 24 subjects. Initially, the classifier assigns a greater weight to the SSVEP in order to deliver correct feedback at the beginning of training. As training goes on, participants reduce their visual attention to the SSVEP stimuli while maintaining sustained attention to the MI mental task. Once subjects can modulate their sensorimotor rhythms, the classifier shifts the weight to MI. The results showed that an hBCI can be used to improve MI training and produce distinguishable brain patterns after only five sessions (about 1.5 hours).
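A minimal sketch of such a weight schedule, assuming a linear shift from SSVEP to MI over five sessions (the exact schedule used in [26] is not specified above, so this is illustrative):

```python
import numpy as np

def fused_posterior(p_ssvep, p_mi, session, n_sessions=5):
    """Weighted fusion of SSVEP and MI class posteriors that shifts
    reliance from SSVEP to MI as training progresses
    (hypothetical linear schedule)."""
    w_mi = session / (n_sessions - 1)        # 0 at the first session, 1 at the last
    return (1 - w_mi) * p_ssvep + w_mi * p_mi

# Early session: feedback is driven entirely by the SSVEP posterior.
print(fused_posterior(np.array([0.9, 0.1]), np.array([0.5, 0.5]), session=0))
# Final session: feedback is driven entirely by the MI posterior.
print(fused_posterior(np.array([0.9, 0.1]), np.array([0.7, 0.3]), session=4))
```

The point of the schedule is that novices still receive accurate (SSVEP-driven) feedback while their MI patterns are too weak to classify reliably.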

3.3. MI- and P300-Based hBCIs

An important aspect of EEG-based BCI systems is multidimensional control, which requires multiple independent control signals. These control signals can be obtained from multiple brain patterns, such as MI and P300. The P300 is a reliable brain pattern for generating discrete control commands, while MI is more suitable for generating continuous, sequential control commands.

Li and colleagues [16] proposed an hBCI combining MI brain patterns and P300 potentials for 2D cursor control and target selection. The GUI is shown in Figure 2, in which the circle and square represent the cursor and target, respectively; the initial position of the cursor and the initial position and color (green or blue) of the target are provided randomly. Three “UP” buttons, three “DOWN” buttons, and two “STOP” buttons flash in random order to evoke P300 potentials. The task of the user is to move the cursor to the target and then to select or reject the green/blue target. The control strategy is as follows: the user moves the cursor left or right by imagining left- or right-hand movement, respectively, and moves the cursor up or down by focusing on one of the three flashing “UP” or “DOWN” buttons to evoke P300 potentials. If the user does not intend to move the cursor vertically, he or she can focus on one of the two “STOP” buttons.

To further implement a BCI mouse, target selection and rejection functions are required. Specifically, once the cursor hits the target of interest (green square), the user can select the target by focusing the attention on a flashing “STOP” button and simultaneously maintaining an idle state of motor imagery. If the target is not of interest (blue square), the user can reject it by continuing to imagine left- or right-hand movement without focusing on any flashing buttons.

The algorithm for the 2D cursor control includes two parts: P300 detection for vertical movement control and motor-imagery detection for horizontal movement control, with the details presented in [19]. The signal processing procedure for P300 detection consists of three stages: low-pass filtering, P300 feature extraction, and SVM classification. For motor-imagery detection, the signal processing stages include common average reference (CAR) spatial filtering, band-pass filtering of the specific mu rhythm band (8–13 Hz), feature extraction based on a CSP algorithm, and SVM classification. The algorithm for target selection or rejection was based on the hybrid features of P300 potentials and MI. After extracting the features of the P300 potentials and MI using the same algorithms described above, a hybrid feature vector for each trial is constructed by concatenating the feature vector of the MI with the feature vector of the P300 potentials, which is then fed into the SVM for classification.
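The motor-imagery branch of this pipeline (CAR spatial filtering, mu-band filtering, CSP feature extraction, SVM classification) can be sketched on synthetic data as follows. The channel counts, trial counts, synthetic class structure, and the small ridge regularization are assumptions for the sketch, not details from [16]:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

def car(trials):
    """Common average reference: subtract the across-channel mean per sample."""
    return trials - trials.mean(axis=1, keepdims=True)

def bandpass(trials, lo=8, hi=13, fs=250):
    """Zero-phase band-pass filtering of the mu rhythm band (8-13 Hz)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(X0, X1, m=2):
    """Top/bottom m CSP spatial filters from two classes of trials
    (trials x channels x samples)."""
    C0 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X0], axis=0)
    C1 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X1], axis=0)
    reg = 1e-6 * np.eye(C0.shape[0])  # ridge: CAR leaves covariances rank-deficient
    _, W = eigh(C0 + reg, C0 + C1 + 2 * reg)  # generalized eigenproblem, ascending
    return np.concatenate([W[:, :m], W[:, -m:]], axis=1).T

def csp_features(trials, W):
    """Normalized log-variance of the CSP-projected signals."""
    Z = np.einsum("fc,tcs->tfs", W, trials)
    v = Z.var(axis=-1)
    return np.log(v / v.sum(axis=1, keepdims=True))

rng = np.random.default_rng(3)
n, ch, s = 40, 8, 500
X0 = rng.normal(size=(n, ch, s)); X0[:, 0] *= 3   # class 0: channel 0 more active
X1 = rng.normal(size=(n, ch, s)); X1[:, 1] *= 3   # class 1: channel 1 more active
X0, X1 = bandpass(car(X0)), bandpass(car(X1))

W = csp_filters(X0, X1)
X = np.vstack([csp_features(X0, W), csp_features(X1, W)])
y = np.array([0] * n + [1] * n)
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))
```

For the hybrid target-selection step, the CSP log-variance vector would simply be concatenated with the P300 feature vector before the final SVM, as the paragraph above describes.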

Eleven healthy subjects attended the online experiment, which included one session of 80 trials per subject. Each trial included two sequential tasks. In the first task, subjects were instructed to move the cursor to a target presented at a randomized position on the screen. After the cursor hit the target, the subject performed the second task of selecting or rejecting the target according to its color (green for selection, blue for rejection). The time interval for the second task was set to 2 s. Across all subjects, the average time for one trial was 18.96 s, the average rate of successful trials was 92.84%, and the average target selection accuracy, given that the cursor had successfully reached the target, was 93.99%. Additionally, several datasets were collected for offline analysis to demonstrate the advantage of the hybrid P300-MI features for target selection/rejection over P300 or MI features alone. The experimental results showed that the accuracy with the hybrid features was significantly higher than with MI or P300 features only (hybrid features: 83.10 ± 2.12%; MI features: 71.68 ± 2.41%; P300 features: 80.44 ± 1.82%). Building on this BCI cursor, Long et al. [28] proposed a hybrid MI-P300 paradigm that allowed 5 subjects to operate an actual wheelchair through direction (left or right) and speed (acceleration and deceleration) commands.

These hybrid systems share three advantages. First, two independent control signals are generated from MI and the P300 potential. Second, the user can move the cursor from any position to a randomly located target. Third, the hybrid control strategy using MI and the P300 potential provides better detection performance than MI-only or P300-only control strategies.

4. Multisensory hBCIs

Humans have multiple senses that provide pathways for processing information about the external world. The integration of multiple sensory stimuli enhances top-down attention, and this enhancement may help improve the performance of BCI systems. With this in mind, audiovisual and visual-tactile hBCIs have been proposed, in which bimodal stimulation is used to improve system performance. Table 2 lists representative applications of multisensory hBCIs in recent years.

Reference | Hybrid mode | Application | Classifiers | Commands | Accuracy (%) | Improvements
[30] | P300, visual, audio | P300 audiovisual speller | Regularized linear LR | — | >80 | Improvement in performance
[31] | Visual, audio | Consciousness detection in patients with DOC | SVM | 2 | >64 | Better performance and feasible for patients with DOC
[32] | Visual, audio | Visual-auditory speller | LDA | 30 | 87.7 (chance level <3%) | Better BCI performance
[33] | Visual, audio | Awareness detection | SVM | 2 | 95.67 | Better performance than auditory-only and visual-only systems
[34] | Auditory, tactile, visual, P300 | Visual saccade-independent BCI | BLDA | 4 | 88.67 | Better online performance
[35] | Auditory, tactile, P300 | Tactile and bone-conduction BCI | SW-LDA | 6 | 70 | Higher classification accuracy
[36] | Audio, tactile | Robot gesture | FGMMs, SVM | 10 | 92.75 | Better performance over framework

4.1. Audiovisual hBCIs

Belitski et al. [30] proposed an offline audiovisual P300 speller and reported the corresponding data analysis. Their study of 7 healthy subjects showed that the P300 response was stronger under the audiovisual condition than under the visual or auditory condition alone. Similarly, An et al. [32] explored gaze-independent parallel spellers for healthy subjects, in which the auditory and visual streams are independent of each other. Their results showed that 15 users could spell online, with an average accuracy of 87.7%. These results suggest that audiovisual integration may be a promising way to enhance brain patterns and further improve BCI performance.

Wang et al. [33] proposed a novel audiovisual BCI system based on temporally synchronized visual and auditory stimuli. In the GUI of this audiovisual BCI, two number buttons (two numbers randomly drawn from 0 to 9) are located on the left and right sides, and two speakers are placed lateral to the monitor. The two buttons flash in an alternating manner. When a number button is visually intensified, the corresponding spoken number is played from the ipsilateral speaker. In this way, the user is presented with a temporally, spatially, and semantically congruent audiovisual stimulus lasting 300 ms, with the interstimulus interval randomized from 700 to 1500 ms. Ten healthy subjects participated in the experiment, which consisted of three sessions administered in random order, corresponding to the visual-only, auditory-only, and audiovisual conditions. In each session, the subject first performed a training run of 10 trials and then a test run of 30 trials. The online average accuracy of the audiovisual, visual-only, and auditory-only sessions across all healthy subjects was 95.67%, 86.33%, and 62.33%, respectively; the audiovisual BCI significantly outperformed the visual-only and auditory-only BCIs. This audiovisual hBCI system was then applied to consciousness detection in 7 patients with DOC, and the experimental results indicated that the audiovisual BCI can provide more sensitive results than behavioral observation scales.
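The stimulus timing described above (300 ms audiovisual stimuli with a 700–1500 ms randomized interstimulus interval, buttons flashing alternately) can be sketched as a simple schedule generator; the function name and the exact alternation are illustrative, not from [33]:

```python
import random

def audiovisual_schedule(n_stimuli=10, seed=0):
    """Onset times (s) for alternating left/right audiovisual stimuli:
    each stimulus lasts 300 ms, and the inter-stimulus interval is drawn
    uniformly from 700-1500 ms, per the paradigm described above."""
    rng = random.Random(seed)
    t, events = 0.0, []
    for i in range(n_stimuli):
        side = "left" if i % 2 == 0 else "right"
        events.append((round(t, 3), side))   # visual flash + ipsilateral audio
        t += 0.3 + rng.uniform(0.7, 1.5)     # stimulus duration + random ISI
    return events

for onset, side in audiovisual_schedule(4):
    print(onset, side)
```

Randomizing the ISI prevents the subject from anticipating stimulus onsets, which would otherwise distort the evoked P300.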

4.2. Audio-Tactile hBCIs

The bimodal BCIs above require visual interaction to attend to stimuli and feedback, which limits their applicability to users with good vision and intact gaze control. Since auditory and tactile BCIs do not require visual interaction, a bimodal auditory-tactile approach may enable BCIs that are independent of vision. Yin et al. [34] proposed a bimodal P300 BCI in which auditory and tactile stimuli were presented simultaneously from the same spatial direction. Rutkowski and Mori [35] studied tactile and auditory BCIs in 11 users with visual and hearing impairments.

These results reveal several advantages of auditory-tactile BCIs. First, the auditory-tactile bimodal BCI achieves better overall system performance than auditory-only or tactile-only P300 BCIs. Second, auditory-tactile hBCIs offer the attractive possibility of evoking the target potentials without relying on visual stimuli, although the performance of such systems remains lower than that of gaze-dependent BCIs. Third, the auditory-tactile hBCI is an alternative for users with impaired vision.

5. hBCI Based on Multimodal Signals

hBCI systems can be constructed from multimodal signals, including EEG, MEG, fMRI, EOG, fNIRS, and EMG. Different signals have different characteristics and can serve different functions. Several hybrid BCIs based on multiple signals have been reported recently and are described below. Table 3 lists representative hBCI applications based on multimodal signals in recent years.

Reference | Hybrid mode | Application | Classifiers | Commands | Accuracy (%) | Improvements
[37] | EMG, EEG | Motor imagery hybrid BCI speller | GMM | 2 | End users: 91; able-bodied users: 94 | Better command accuracy
[38] | EEG, EMG | Home environmental control system | CCA | 4 | 96.3 | Higher control accuracy, security, and interactivity
[39] | EEG, EOG | AIDS recovery | AR | 4 | 62.28 | Substantially better control of assistive devices
[40] | EEG, EOG | Mobile robot control | LDA | 9 | 87.3 | Reduced best completion time
[41] | EEG, EOG | Hybrid speller system | LDA | 1 | 97.6 | Better performance and usability
[42] | fNIRS, EEG, eye movement | Online quadcopter control | LDA | 8 | fNIRS: 75.6; EEG: 86 | Higher decoding accuracy
[43] | EEG, fNIRS | Hand movement and recognition | LDA | 2 | 94.2 | Reduced fNIRS delay time in detection
[44] | EEG, fNIRS | Left- and right-hand motion imagination | DL | 2 | — | Reduced response time
[45] | EEG, NIRS | Decoding of four movements | LDA | 5 | >80 | Higher classification accuracy
[46] | EEG, NIRS | Mental state recognition | Meta | 6 | 65.6 | Better performance on mental state classification
[47] | EEG, MEG | Left- and right-hand motor imagery | CSP, LR | 2 | MEG: 70.6; EEG: 67.7 | Good within-subject accuracy
[48] | EEG, NIRS | Classification of mental arithmetic, MI, and idle states | LDA | 3 | 82.2 ± 10.2 | Higher classification accuracy
[49] | EEG, MEG | Intersubject decoding of left- vs. right-hand motor imagery | LR, L2/1-norm regularization | 4 | MEG: 70; EEG: 67.7 | Higher within-subject accuracy

5.1. EEG- and EMG-Based hBCIs

Leeb et al. [50] proposed an hBCI combining EEG and EMG. In each trial, 12 healthy subjects were instructed to repeatedly clench their left or right fist for five seconds, based on visual cues (arrows pointing left or right). The researchers processed and classified the EEG and EMG signals separately and then fused them. Canonical variate analysis was used to select subject-specific features that maximized the separability between tasks, and stable features were determined by cross-validation of a Gaussian classifier on the training data. The resulting features were thresholded, normalized, and classified by maximum distance in a subject-specific manner. Finally, a Bayesian method was used to fuse the probabilities of the two classifiers into a single control signal. The accuracy with EEG alone was 73% and with EMG alone was 87%, whereas the hBCI improved accuracy to 91%. In addition, to simulate muscle fatigue, the amplitude of the EMG channel was attenuated during operation (from 10% to 100%); as the simulated fatigue increased, EEG activity became increasingly important in the fused decision. The results showed a significant advantage for EEG- and EMG-based BCI systems.
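A weighted-product rule in the spirit of the Bayesian fusion described above can be sketched as follows; the exact rule and weights used in [50] are not given here, so this is an illustrative sketch in which a reliability weight lets the system discount a fatigued EMG channel:

```python
import numpy as np

def bayesian_fusion(p_eeg, p_emg, w_eeg=1.0, w_emg=1.0):
    """Fuse two classifiers' class posteriors by a weighted product
    (a common Bayesian-style fusion rule); lowering a weight toward 0
    discounts that modality's vote."""
    fused = (p_eeg ** w_eeg) * (p_emg ** w_emg)
    return fused / fused.sum()          # renormalize to a probability vector

p_eeg = np.array([0.6, 0.4])   # hypothetical EEG posterior (left vs right)
p_emg = np.array([0.8, 0.2])   # hypothetical EMG posterior

print(bayesian_fusion(p_eeg, p_emg))               # EMG fully trusted
print(bayesian_fusion(p_eeg, p_emg, w_emg=0.2))    # EMG down-weighted (fatigue)
```

As the EMG weight shrinks, the fused posterior moves toward the EEG-only posterior, mirroring the observation above that EEG dominates the fused decision as the muscles tire.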

5.2. EEG- and EOG-Based hBCIs

Recently, some studies have combined EEG and EOG to construct hBCIs. Since many people with disabilities retain control of their eye movements, EOG signals are an appropriate choice for many BCI users. Lee et al. [41] applied an EEG-EOG hBCI to a speller system with fast typing speed. The hBCI system comprised a conventional ERP-based speller, an EOG-based command detector, and a visual feedback module. The online ERP speller computed classification probabilities for all candidate characters from each EEG epoch, and the character with the highest probability was presented as visual feedback. The accuracy of the novel speller system was 97.6%, and its ITR was 39.6 ± 13.2 bits/min across 20 participants. These results showed that the EEG- and EOG-based speller outperforms the conventional ERP-based speller.
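The ITR figures quoted throughout this review follow the standard Wolpaw definition, which can be computed directly. In the example below, the 36-class alphabet and 97.6% accuracy echo numbers appearing in this review, but the 4 s selection time is an assumed value for illustration, not one reported above:

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits/min:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), scaled by 60/T."""
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60 / trial_seconds

# Hypothetical speller: 36 classes, 97.6% accuracy, one selection per 4 s.
print(round(itr_bits_per_min(36, 0.976, 4.0), 1))
```

The formula shows why ITR depends on the number of commands and trial duration as well as accuracy, which is why hybrid systems that add commands or shorten trials can raise ITR even at the same accuracy.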

5.3. Other hBCIs Based on Multimodal Signals

Other hybrid BCIs based on multiple signals have also been reported. One way to make full use of the spatial and temporal information of brain activity is to combine fMRI with EEG in a BCI. A key advantage of a combined EEG-fMRI BCI is that EEG can provide online slow cortical potential (SCP) feedback to subjects. Such a combination also reveals basic psychophysiological mechanisms, such as the correlation between local BOLD responses and SCP changes, which helps in developing new training procedures and paradigms. Although fNIRS has poor spatial resolution compared with fMRI, it is portable and reflects the hemodynamic response of brain activity.

The authors in [45] showed that the performance of an MI-based BCI was significantly improved by combining EEG and NIRS, allowing users who cannot operate an EEG-only BCI to achieve meaningful classification rates. EEG is easily distorted by the inhomogeneities of the extracerebral tissues, while MEG is not affected as long as the electrical inhomogeneities are concentric. MEG signals are therefore more local than the corresponding EEG signals and can provide more spatial information. In [47], MEG and EEG signals generated in the sensorimotor cortex were used to decode finger movements in three tetraplegic patients.

6. Discussion and Conclusion

This paper has focused on several hBCI types, their stimulus designs, and their performance. We first summarized three classes of hBCIs: hBCIs based on multiple brain patterns, multisensory hBCIs, and hBCIs based on multimodal signals. For each type, we reviewed several representative hybrid BCI systems, including their design principles, stimulus paradigms, control methods, experimental results, and corresponding advantages. Below, we offer concluding remarks on the benefits of hybrid BCI systems and on future studies.

Considering the three types of hybrid BCI and their respective applications, the advantages of hybrid BCIs can be summarized in two aspects. First, an hBCI system can provide a single control signal or output with improved classification performance. The two main strategies for achieving this are (1) combining multiple brain patterns (such as MI, P300, and SSVEP) or fusing multiple signals (such as EEG, EMG, EOG, and NIRS) at the feature level and (2) enhancing brain patterns by presenting multiple sensory stimuli, such as audiovisual stimuli. Second, when multiple control signals or outputs are available, hBCI systems can implement multidegree object control. Two main methods can be adopted: (1) combining multiple brain patterns to obtain multiple independent control signals, such as 2D cursor control based on MI and P300 or orthosis control based on MI and SSVEP, and (2) using different signal characteristics to perform different functions, such as robot control based on EEG and EOG.

Here, we consider several challenging problems for further study.

6.1. Design and Implementation for hBCIs

From the user’s point of view, the complexity of an hBCI system is usually higher than that of a conventional BCI, so user acceptability is an important criterion that must be carefully considered in hBCI design and implementation. In designing an hBCI based on multiple brain patterns, one challenge is determining the best combination of brain patterns for the desired goals, and this combination can vary from user to user; for example, long-term use of SSVEP and P300 increases visual fatigue. When designing a multisensory hBCI, the challenge is to ensure that the desired brain patterns are actually enhanced by the multiple sensory stimuli. Previous studies [33] have found that combining a visual P300-based BCI with naturally spoken audio stimuli can help reduce mental workload; future research could therefore consider more combinations of sensory stimuli involving auditory and tactile modalities. For an hBCI based on multiple signals, one challenge is how to make full use of the characteristics of the different signals to maximize the improvement in system performance. In addition, when designing a real-time hBCI based on EEG and fMRI, the high noise introduced into the EEG data by the fMRI scanner, the high dimensionality of the data, the slow hemodynamic response, and the low temporal resolution of fMRI are not negligible.

6.2. Brain Mechanisms for hBCIs

An hBCI system may involve multiple brain patterns, multiple sensory modalities, or multimodal signal inputs. To ensure that these components are effectively coordinated within the hBCI system, the relevant brain mechanisms need to be studied. For example, cross-modal integration/interaction in the brain can provide a mechanistic basis for multisensory BCIs. However, there have been few studies of the brain mechanisms underlying hBCIs so far.

6.3. Clinical Application

Until now, most hBCI systems (such as BCI browsers and BCI wheelchairs) have been designed for healthy subjects. These systems need to be extended to patients so that their value can be demonstrated in clinical applications. In recent years, more and more hBCIs have been applied clinically, for example, in the rehabilitation and treatment of patients with hemiplegia [51, 52] and DOC [53]. When designing hBCI systems for patients, the differences between patients and healthy subjects need to be fully considered; in some cases, a design tailored to an individual patient may even be necessary. The application of hBCIs to patients with DOC has only just begun, and hBCI-based communication and rehabilitation is an important topic for future research. In addition, a variety of intelligent technologies, such as automatic navigation systems and intelligent robots, have been combined with BCIs. This combination not only greatly reduces the user’s workload but also makes the BCI system more reliable, flexible, and powerful by allowing the subject to focus on the final goal and ignore the low-level details of executing the action. This is promising for patients with limited recognition and control capabilities. Therefore, future research should focus on such systems developed for patients.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


Acknowledgments

This study was supported by the National Natural Science Foundation of China (Grant no. 61876067), the Pearl River S&T Nova Program of Guangzhou (Grant no. 201710010038), and the Guangdong Natural Science Foundation (Grant no. 2014A030310244).


References

  1. J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
  2. S. Fazli, J. Mehnert, J. Steinbrink et al., “Enhanced performance by a hybrid NIRS-EEG brain computer interface,” NeuroImage, vol. 59, no. 1, pp. 519–529, 2012.
  3. L. A. Farwell and E. Donchin, “Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials,” Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.
  4. G. R. Müller-Putz, R. Scherer, C. Neuper, and G. Pfurtscheller, “Steady-state somatosensory evoked potentials: suitable brain signals for brain-computer interfaces?” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 14, no. 1, pp. 30–37, 2006.
  5. G. Pfurtscheller and F. H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles,” Clinical Neurophysiology, vol. 110, no. 11, pp. 1842–1857, 1999.
  6. K. S. Hong and M. J. Khan, “Hybrid brain–computer interface techniques for improved classification accuracy and increased number of commands: a review,” Frontiers in Neurorobotics, vol. 11, p. 35, 2017.
  7. A. Vučković and F. Sepulveda, “A two-stage four-class BCI based on imaginary movements of the left and the right wrist,” Medical Engineering & Physics, vol. 34, no. 7, pp. 964–971, 2012.
  8. B. Z. Allison, “Toward ubiquitous BCIs,” in Brain-Computer Interfaces: Revolutionizing Human-Computer Interaction, B. Graimann, G. Pfurtscheller, and B. Allison, Eds., pp. 357–387, Springer, Berlin, Germany, 2010.
  9. J. Li, H. Ji, L. Cao et al., “Evaluation and application of a hybrid brain computer interface for real wheelchair parallel control with multi-degree of freedom,” International Journal of Neural Systems, vol. 24, no. 4, Article ID 1450014, 2014.
  10. G. Pfurtscheller, B. Z. Allison, C. Brunner et al., “The hybrid BCI,” Frontiers in Neuroscience, vol. 4, no. 3, 2010.
  11. B. Z. Allison, J. Jin, Y. Zhang, and X. Wang, “A four-choice hybrid P300/SSVEP BCI for improved accuracy,” Brain-Computer Interfaces, vol. 1, no. 1, pp. 17–26, 2014.
  12. I. C. Wagner, I. Daly, and A. Väljamäe, “Non-visual and multisensory BCI systems: present and future,” in Towards Practical Brain-Computer Interfaces, Springer, Berlin, Germany, 2012.
  13. R. C. Panicker, S. Puthusserypady, and Y. Sun, “An asynchronous P300 BCI with SSVEP-based control state detection,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 6, pp. 1781–1788, 2011.
  14. Y. Li, J. Pan, J. Long et al., “Multimodal BCIs: target detection, multidimensional control, and awareness evaluation in patients with disorder of consciousness,” Proceedings of the IEEE, vol. 104, no. 2, pp. 332–352, 2016.
  15. G. Pfurtscheller, T. Solis-Escalante, R. Ortner, P. Linortner, and G. R. Muller-Putz, “Self-paced operation of an SSVEP-based orthosis with and without an imagery-based ‘brain switch’: a feasibility study towards a hybrid BCI,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 18, no. 4, pp. 409–414, 2010.
  16. Y. Li, J. Long, T. Yu et al., “An EEG-based BCI system for 2-D cursor control by combining mu/beta rhythm and P300 potential,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 10, pp. 2495–2505, 2010.
  17. T. Yu, Y. Li, J. Long, and Z. Gu, “Surfing the internet with a BCI mouse,” Journal of Neural Engineering, vol. 9, no. 3, Article ID 036012, 2012.
  18. T. Yu, Y. Li, J. Long, and F. Li, “A hybrid brain-computer interface-based mail client,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 750934, 9 pages, 2013.
  19. B. Choi and S. Jo, “A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition,” PLoS One, vol. 8, no. 9, Article ID e74583, 2013.
  20. Y. Li, J. Pan, F. Wang, and Z. Yu, “A hybrid BCI system combining P300 and SSVEP and its application to wheelchair control,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 11, pp. 3156–3166, 2013.
  21. M. Xu, H. Qi, B. Wan, T. Yin, Z. Liu, and D. Ming, “A hybrid BCI speller paradigm combining P300 potential and the SSVEP blocking feature,” Journal of Neural Engineering, vol. 10, no. 2, Article ID 026001, 2013.
  22. L. Bi, J. Lian, K. Jie, R. Lai, and Y. Liu, “A speed and direction-based cursor control system with P300 and SSVEP,” Biomedical Signal Processing and Control, vol. 14, pp. 126–133, 2014.
  23. E. Yin, T. Zeyl, R. Saab, T. Chau, D. Hu, and Z. Zhou, “A hybrid brain-computer interface based on the fusion of P300 and SSVEP scores,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 23, no. 4, pp. 693–701, 2015.
  24. Z. Wang, Y. Yu, M. Xu, Y. Liu, E. Yin, and Z. Zhou, “Towards a hybrid BCI gaming paradigm based on motor imagery and SSVEP,” International Journal of Human-Computer Interaction, vol. 35, no. 3, pp. 197–205, 2019.
  25. L.-W. Ko, S. S. K. Ranga, O. Komarov, and C.-C. Chen, “Development of single-channel hybrid BCI system using motor imagery and SSVEP,” Journal of Healthcare Engineering, vol. 2017, Article ID 3789386, 7 pages, 2017.
  26. T. Yu, J. Xiao, F. Wang et al., “Enhanced motor imagery training using a hybrid BCI with feedback,” IEEE Transactions on Biomedical Engineering, vol. 62, no. 7, pp. 1706–1717, 2015.
  27. F. Duan, D. Lin, W. Li, and Z. Zhang, “Design of a multimodal EEG-based hybrid BCI system with visual servo module,” IEEE Transactions on Autonomous Mental Development, vol. 7, no. 4, pp. 332–341, 2015.
  28. J. Long, Y. Li, H. Wang, T. Yu, J. Pan, and F. Li, “A hybrid brain computer interface to control the direction and speed of a simulated or real wheelchair,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 5, pp. 720–729, 2012.
  29. J. Pan, Q. Xie, Y. He et al., “Detecting awareness in patients with disorders of consciousness using a hybrid brain–computer interface,” Journal of Neural Engineering, vol. 11, no. 5, Article ID 056007, 2014.
  30. A. Belitski, J. Farquhar, and P. Desain, “P300 audio-visual speller,” Journal of Neural Engineering, vol. 8, no. 2, Article ID 025022, 2011.
  31. J. Pan, Q. Xie, H. Huang et al., “Emotion-related consciousness detection in patients with disorders of consciousness through an EEG-based BCI system,” Frontiers in Human Neuroscience, vol. 12, p. 198, 2018.
  32. X. An, J. Höhne, D. Ming, and B. Blankertz, “Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces,” PLoS One, vol. 9, no. 10, Article ID e111070, 2014.
  33. F. Wang, Y. He, J. Pan et al., “Erratum: a novel audiovisual brain-computer interface and its application in awareness detection,” Scientific Reports, vol. 5, no. 1, p. 9962, 2015.
  34. E. Yin, T. Zeyl, R. Saab, D. Hu, Z. Zhou, and T. Chau, “An auditory-tactile visual saccade-independent P300 brain-computer interface,” International Journal of Neural Systems, vol. 26, no. 1, Article ID 1650001, 2016.
  35. T. M. Rutkowski and H. Mori, “Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users,” Journal of Neuroscience Methods, vol. 244, pp. 45–51, 2015.
  36. Z. Ju and H. Liu, “Human hand motion analysis with multisensory information,” IEEE/ASME Transactions on Mechatronics, vol. 19, no. 2, pp. 456–466, 2014.
  37. S. Perdikis, R. Leeb, J. Williamson et al., “Clinical evaluation of BrainTree, a motor imagery hybrid BCI speller,” Journal of Neural Engineering, vol. 11, no. 3, Article ID 036003, 2014.
  38. X. Chai, Z. Zhang, Y. Lu, G. Liu, T. Zhang, and H. Niu, “A hybrid BCI-based environmental control system using SSVEP and EMG signals,” in World Congress on Medical Physics and Biomedical Engineering 2018, Springer, Singapore, 2019.
  39. S. R. Soekadar, M. Witkowski, N. Vitiello, and N. Birbaumer, “An EEG/EOG-based hybrid brain-neural computer interaction (BNCI) system to control an exoskeleton for the paralyzed hand,” Biomedical Engineering, vol. 60, no. 3, pp. 199–205, 2015.
  40. J. Ma, Y. Zhang, A. Cichocki, and F. Matsuno, “A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control,” IEEE Transactions on Biomedical Engineering, vol. 62, no. 3, pp. 876–889, 2015.
  41. M. H. Lee, J. Williamson, D. O. Won, S. Fazli, and S. W. Lee, “A high performance spelling system based on EEG-EOG signals with visual feedback,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 7, 2018.
  42. M. J. Khan and K. S. Hong, “Hybrid EEG-fNIRS-based eight-command decoding for BCI: application to quadcopter control,” Frontiers in Neurorobotics, vol. 11, p. 6, 2017.
  43. A. P. Buccino, H. O. Keles, and A. Omurtag, “Hybrid EEG-fNIRS asynchronous brain-computer interface for multiple motor tasks,” PLoS One, vol. 11, no. 1, Article ID e0146610, 2016.
  44. A. M. Chiarelli, P. Croce, A. Merla, and F. Zappasodi, “Deep learning for hybrid EEG-fNIRS brain-computer interface: application to motor imagery classification,” Journal of Neural Engineering, vol. 15, no. 3, Article ID 036028, 2018.
  45. M. J. Khan, M. J. Hong, and K.-S. Hong, “Decoding of four movement directions using hybrid NIRS-EEG brain-computer interface,” Frontiers in Human Neuroscience, vol. 8, no. 1, p. 244, 2014.
  46. J. Shin, A. von Luhmann, B. Blankertz et al., “Open access dataset for EEG+NIRS single-trial classification,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 10, pp. 1735–1745, 2017.
  47. H. L. Halme and L. Parkkonen, “Across-subject offline decoding of motor imagery from MEG and EEG,” Scientific Reports, vol. 8, no. 1, p. 10087, 2018.
  48. J. Shin, J. Kwon, and C.-H. Im, “A ternary hybrid EEG-NIRS brain-computer interface for the classification of brain activation patterns during mental arithmetic, motor imagery, and idle state,” Frontiers in Neuroinformatics, vol. 12, no. 5, 2018.
  49. H.-L. Halme and L. Parkkonen, “Across-subject offline decoding of motor imagery from MEG and EEG,” Scientific Reports, vol. 8, no. 1, p. 10087, 2018.
  50. R. Leeb, H. Sagha, R. Chavarriaga, and J. del R Millán, “A hybrid brain-computer interface based on the fusion of electroencephalographic and electromyographic activities,” Journal of Neural Engineering, vol. 8, no. 2, Article ID 025011, 2011.
  51. M. Hassan, H. Kadone, T. Ueno, Y. Hada, Y. Sankai, and K. Suzuki, “Feasibility of synergy-based exoskeleton robot control in hemiplegia,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 6, pp. 1233–1242, 2018.
  52. K. Kato, K. Takahashi, N. Mizuguchi, and J. Ushiba, “Online detection of amplitude modulation of motor-related EEG desynchronization using a lock-in amplifier: comparison with a fast Fourier transform, a continuous wavelet transform, and an autoregressive algorithm,” Journal of Neuroscience Methods, vol. 293, pp. 289–298, 2018.
  53. F. Wang, Y. He, J. Qu et al., “Enhancing clinical communication assessments using an audiovisual BCI for patients with disorders of consciousness,” Journal of Neural Engineering, vol. 14, no. 4, Article ID 046024, 2017.

Copyright © 2019 Zina Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
