Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2021 |Article ID 6685672 | https://doi.org/10.1155/2021/6685672

Anna Lekova, Ivan Chavdarov, "A Fuzzy Shell for Developing an Interpretable BCI Based on the Spatiotemporal Dynamics of the Evoked Oscillations", Computational Intelligence and Neuroscience, vol. 2021, Article ID 6685672, 21 pages, 2021. https://doi.org/10.1155/2021/6685672

A Fuzzy Shell for Developing an Interpretable BCI Based on the Spatiotemporal Dynamics of the Evoked Oscillations

Academic Editor: Rodolfo E. Haber
Received: 17 Nov 2020
Revised: 05 Mar 2021
Accepted: 17 Mar 2021
Published: 12 Apr 2021

Abstract

Researchers in neuroscience computing experience difficulties when they try to carry out neuroanalysis in practice or when they need to design an explainable brain-computer interface (BCI) with quick setup and a minimal training phase. There is a need for interpretable computational intelligence techniques and new brain state decoding that make the interpretation of sensory, cognitive, and motor brain processing more understandable. We propose a general-purpose fuzzy software system shell for developing a custom EEG-based BCI system. It relies on the bursts of the ongoing EEG frequency power synchronization/desynchronization at scalp level and supports quick BCI setup through linguistic features, ad hoc fuzzy membership construction, explainable IF-THEN rules, and the concept of the Internet of Things (IoT), which makes the BCI system device and service independent. It has the potential for designing both passive and event-related BCIs, with options for visual representation at scalp-source level over time. The feasibility of the proposed system has been proven in real experiments, in which bursts of θ and α frequency power were detected in real time in response to evoked visuospatial selective attention. The proposed new brain state decoding can serve as a feasible metric for interpreting the spatiotemporal dynamics of passive or evoked neural oscillations.

1. Introduction

An EEG-based brain-computer interface (BCI) uses an electrophysiological monitoring method to measure the scalp electrical potentials resulting from ionic currents within the neurons of the brain. By placing multiple electrodes on the scalp, the brain signals correlated with the user’s emotions and intentions can be registered, featured, classified, and translated into artificial commands for control of, or communication with, the surrounding digital devices and services. The stages of pattern recognition and classification in BCIs call for elements of Artificial Intelligence (AI). Because humans are involved, interpretable and explainable Artificial Intelligence (XAI) [1] needs to be included in the system design. Interpretability means that cause and effect can be easily determined in machine learning (ML) models. XAI is a new trend in AI that explains the black-box approaches in ML by context-specific methods in order to make humans understand the reasoning behind their predictions and the errors they make.

The BCI community is a multidisciplinary research field where neuroscientists, biomedical engineers, and computer scientists need to work together. Very often this collaboration is impossible, and researchers experience difficulties when they try to use the available BCI software tools, such as BCILAB [2], EEGLAB [3], OpenVibe [4], BCI2000 [5], Neuromore [6], and other tools surveyed in [7]. These general-purpose software applications aid the design and testing of both passive and event-related BCIs in different applications. BCILAB [2], an open source MATLAB-based toolbox, provides an organized collection of 100 preimplemented methods and method variants. EEGLAB [3] is an open source toolbox for analysis of EEG dynamics by Independent Component Analysis (ICA). OpenVibe [4] is a user-friendly BCI software with a graphical drag-and-drop user interface. BCI2000 [5] is a general-purpose BCI system for different task-specific BCI methods. Some of these tools are optimized for real-time EEG data processing using a Python, C++, or MATLAB scripting box for online processing: OpenVibe Acquisition Server [http://openvibe.inria.fr/acquisition-server], BCI2000 Webserver [http://www.bci2000.org], and MatRiver, a MATLAB DataRiver client [7]. Neuromore [6] is biodata acquisition, streaming, processing, and visualization software that allows users to connect to biosensors such as EEG. Its drag-and-drop user interface lets users view the raw EEG data in different ways simultaneously. Neuromore is open source and has a cloud-based platform that connects with wearable devices and provides cloud-based collaborative research and cloud data management. OpenVibe is considered the most user-friendly tool and can be used without much programming skill. However, using the real-time WebSocket for online BCI operation requires IT skills in client-server programming and is time-consuming.
Although these tools are “general-purpose” ones, they lack comprehensible features and models for designing a custom EEG-based BCI, and considerable skill in MATLAB and other programming languages is required to support the neuroscientists’ brain computations. In experimental sciences such as neuroscience, it is widely accepted that research claims be based on statistical tests [8]. Therefore, null-hypothesis significance testing by Analysis of Variance (ANOVA) and multiple-comparison statistics should be easy to configure and embedded in BCIs for neuroresearch in order to support neuroscientists in their experiments.

On the other hand, computer scientists use these tools easily but face difficulties in designing custom BCI applications. The portable brain-listening headsets available today come with accessible brain-measuring hardware, and computer scientists commonly use BCIs to control digital devices or services. The task-evoked underlying neural activity needs to be translated into artificial commands, and before designing a specific BCI the computer scientists need to find the published neuroexpertise and map it into features, patterns, and control actions. It is helpful if this mapping is human interpretable. Because expert opinion is involved in classification, black-box ML classification models are not sufficient. We support the computer scientists with a quick setup of a fuzzy system. Fuzzy rule-based classifiers can be built using expert opinion, data, or both, and are considered more intuitive and explainable [9].

We seek to close the gap between computer scientists and neuroscientists by providing a general-purpose fuzzy software system shell for designing a real-time EEG-based BCI system with ad hoc brain state decoding by linguistic variables and fuzzy sets participating in interpretable fuzzy IF-THEN rules. The decision-making response is based on the neurons involved in a particular neural computation, in terms of the trend (derivatives) in the evoked oscillatory rhythms and the neuronal assembly at scalp locations over time. One of our goals in the design of the proposed BCI fuzzy shell (BCIFS) was to profile the data analysis of the software: ready-to-use data for post hoc interpretation by ANOVA and multiple-comparison statistics for the neuroscientists, and post hoc training and optimization of the fuzzy system parameters for the computer scientists. Thus, the neuroscientists can test their hypotheses and perform initial experiments testing multiple predictions with contradictory assumptions simultaneously, while the computer scientists can digitally translate the published neuroscience findings into interpretable IF-THEN rules for fuzzy interpretation of the spatiotemporal dynamics of the passive or evoked oscillatory rhythms. Additionally, the computer scientists have the freedom to exploit the resulting ready-to-use data in MATLAB for post hoc training and optimization of the fuzzy sets, fuzzy membership functions, and the fuzzy rule-based classifier with different ML algorithms.

The rest of the paper is organized as follows: in Section 2, related works and the proposed solutions are presented. Section 3 summarizes the mathematical background and gives a detailed description of the proposed BCI fuzzy shell. Section 4 presents materials and methods. In Section 5, the results are presented and discussed. Finally, conclusions follow.

2. Current Status, Problem Statement, and the Proposed Solution

2.1. Existing Solutions

EEG signals are subject-specific and with nonlinear behaviour. The real-time brain state evaluation during sensory processing, memory, or decision-making is a challenging task. The current status of the BCIs, sensing technologies, and computational intelligence approaches are surveyed in [10–14]. An overview of recently used EEG noninvasive devices can be found in [10, 14].

2.1.1. Feature Extraction Approaches

The BCI features most commonly extracted from time-series signals come from the time, frequency, time-frequency, statistical, and spatial domains. EEG features are described and compared in detail in [12–14]. Temporal features frequently extracted from EEG signals include maximum, minimum, zero-crossing rate, linear regression, interquartile range, absolute integral, and others. Commonly extracted statistical features are median, mean, standard deviation, absolute deviation, root mean square, skewness, kurtosis, histogram, and total energy. Commonly extracted spectral features are total energy, spectral centroid, spread, slope, decrease, and variation, while the most used spatial features are Common Spatial Patterns (CSP). Other BCI features in use are nonlinear: Lyapunov exponent, Shannon entropy, correlation dimension, detrended fluctuation analysis, recurrence rate, and others.
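To make these definitions concrete, here is a minimal sketch of a few of the listed temporal and statistical features, computed for a single EEG channel with NumPy/SciPy. The function and variable names are ours, for illustration only, and are not taken from any of the cited toolboxes.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def temporal_statistical_features(x, fs):
    """Compute a few of the temporal/statistical features listed above
    for one EEG channel x (1-D array) sampled at fs Hz."""
    zero_crossings = np.sum(np.abs(np.diff(np.sign(x))) > 0)
    return {
        "max": np.max(x),
        "min": np.min(x),
        "mean": np.mean(x),
        "median": np.median(x),
        "std": np.std(x),
        "rms": np.sqrt(np.mean(x ** 2)),                      # root mean square
        "iqr": np.percentile(x, 75) - np.percentile(x, 25),   # interquartile range
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "zero_crossing_rate": zero_crossings / (len(x) / fs), # crossings per second
        "abs_integral": np.trapz(np.abs(x)) / fs,
    }
```

Such a feature dictionary, computed per channel and per time window, is the kind of low-cost, human-readable representation the simpler feature-extraction studies above argue for.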

Most papers focus on the use of temporal and spectral domain characteristics. BCIs can be based on evoked potentials such as the Event-Related Potential (ERP) [15] or on the power of the EEG rhythms and the spectral density [16]. ERP components reflect brief bursts of neuronal activity, time-locked to the eliciting event. However, time-domain interpretation neglects spectral characteristics that may be important to the classifier. In oscillatory-based BCIs, researchers usually decompose the EEG signal into five frequency bands using the FFT. They use a specific band as a correlate of a cognitive task or of sensory or emotional responses. For instance, θ (4–7 Hz) and α (8–14 Hz) band rhythms correlate with brain activity during working memory tasks. Similarly, using spectral characteristics for feature extraction loses temporal information. To overcome these deficiencies, tracking the dynamics of the oscillatory rhythms with millisecond time resolution has been proposed in the literature. Frequency-band-specific features that reflect the changes of the ongoing evoked oscillatory rhythms are Event-Related Desynchronization (ERD) and Event-Related Synchronization (ERS) [17–22]. Decreases in power from the reference to the activation interval are expressed as negative values (i.e., desynchronization), whereas task-related increases in power (i.e., synchronization) are expressed as positive values [18]. For instance, the changes of the ongoing evoked oscillations for sensory and cognitive processing result in different ERD/ERS. However, ERS/ERD alone cannot feature the functional connectivity (FC) in the human brain, i.e., the temporal correlation between the time series from different brain regions [23].
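The ERD/ERS quantification just described, i.e., the percentage band-power change from a reference interval to the activation interval following the sign convention of [18], can be sketched as follows (the function name and interval handling are illustrative):

```python
import numpy as np

def erd_ers_percent(power_reference, power_activation):
    """Classic ERD/ERS quantification: percentage band-power change from
    a reference interval to the activation interval.
    Negative values -> desynchronization (ERD),
    positive values -> synchronization (ERS)."""
    r = np.mean(power_reference)
    a = np.mean(power_activation)
    return (a - r) / r * 100.0
```

For example, a drop of the α band power from 10 to 5 (arbitrary units) yields −50%, i.e., a 50% desynchronization.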

Feature extraction methods are also categorized according to the domain they are derived from. A typical time-domain feature extraction approach is autoregressive modelling. Among frequency-domain techniques are the Fast Fourier Transform (FFT), Power Spectral Density (PSD), band power, and the spectral centroid. The most widespread time-frequency feature extraction approaches are the Short-Time Fourier Transform (STFT), Continuous Wavelet Transform (CWT), and Discrete Wavelet Transform (DWT). Another powerful feature extraction approach, CSP, utilizes localized spatial filters, each of which converts brain waves into a different domain where the variance of one group is magnified. The choice of feature extraction method depends on the type of brain processes being captured and on the BCI system design. Some feature extraction methods, such as Principal Component Analysis (PCA), are unsupervised: they do not use raw EEG data labelled with features to learn from, focusing instead on the differences in the data. Other methods like CSP are supervised and require a set of labelled data to determine the specific spatial features. Applying spatial-domain feature extraction could significantly reduce the number of electrodes under consideration; however, the computational cost of the improved performance is high. Although a comprehensive comparison of the published feature extraction methods has yet to be reported, studies like [13, 24, 25] illustrated that the high-dimensional and noisy nature of EEG may limit the advantage of nonlinear extraction methods over linear ones. Findings in [24] suggested that multiclass complex BCI task discrimination could benefit more from analyzing simple and symbolic features such as Time-Domain Parameters (TDP) than from more complex features such as CSP and Power Spectral Density. Moreover, the comparison in [24] concluded that the complex ones produced only slightly better classification results. Wen et al. in [25] proposed genetic algorithm-based frequency-domain features and showed that they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance.
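As a concrete instance of the frequency-domain techniques listed above, band power can be estimated by integrating a Welch PSD over the band of interest. This sketch is illustrative and not tied to any of the cited toolboxes; the segment length choice is an assumption:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Frequency-domain feature: power of one EEG channel in the band
    [f_lo, f_hi] Hz, estimated by integrating the Welch PSD."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs * 2))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[mask], freqs[mask])
```

For a signal dominated by a 10 Hz rhythm, the α band (8–14 Hz) power computed this way will greatly exceed the θ band (4–7 Hz) power, which is exactly the kind of band-specific correlate described earlier.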

To sum up, the selection of the EEG electrodes and their features for classification is critical for both classifier accuracy and calculation cost. Some features are not cost-effective or suitable for real-time operation. For example, the evaluation of Independent Components (ICs) derived by ICA or of Common Spatial Patterns (CSP) is more often performed offline, and the calculation cost is high. The resulting features are not human interpretable, and the ML models for their extraction are not explainable. Since the more intuitive and human-interpretable features usable in both passive and event-related BCIs are ERS/ERD, our starting assumption originated from [18, 20].

2.1.2. Cross-Subject Training

Cross-subject training is another challenge linked with the BCI system. Sometimes the training takes a long time, which causes users to become fatigued. Motor imagery- (MI-) based BCIs need the most training, while BCIs based on steady-state (motion) visually evoked potentials have the most robust performance across users. When the user receives either a motion-reversal frequency or a fixed frequency of flashing visual stimuli, the evoked brain activity oscillates at the same frequency in response. For instance, user training was not required in the flicker-free Steady-State Motion Visually Evoked Potential- (SSMVEP-) based BCI system proposed in [26]. The developed paradigm used new ring-shaped motion checkerboard patterns with oscillating expansion and contraction motions as visual stimuli. The frequency energy of SSMVEPs was concentrated, and the visual stimuli evoked “single fundamental peak” responses after FFT signal processing and canonical correlation analysis. This method has shown highly interactive performance as a zero-training BCI paradigm. However, the authors in [27] questioned “to train or not to train,” and although this is a survey on training of feature extraction methods for Steady-State Visually Evoked Potential- (SSVEP-) based BCIs, it analyzed the main challenge of how to reduce user training while maintaining good BCI performance. The criticism here is that training-less systems are more practical but have limited performance due to intersubject variability.
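The steady-state idea above (the evoked brain activity oscillates at the stimulation frequency) can be illustrated with a toy spectral-peak detector. This is not the pipeline of [26], which additionally applies canonical correlation analysis; the sketch only shows the “single fundamental peak” step, and all names are ours:

```python
import numpy as np

def detect_ssvep_frequency(x, fs, candidate_freqs):
    """Toy 'single fundamental peak' detector: return the candidate
    stimulation frequency whose FFT bin has the largest amplitude."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amps = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(amps))]
```

With one candidate frequency per interface command, picking the dominant peak translates the evoked response directly into a command, which is why such paradigms need little or no user training.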

In general, less training and high performance make the ERP-based BCI system widely used. Some recent studies have tried to reduce the calibration time with new subject-independent training methods. In [28], the authors introduced the concept of a generic model set. They used ERP data from 116 participants to train the generic model set and trained ten models by weighted linear discriminant analysis. The results from testing the validity of the generic model set demonstrated that all new participants matched the best generic model.

PSD- or ERD-based BCIs also need to be trained, by either session-independent or subject-independent training methods. Different methods have been surveyed in [10]; however, the training is either feature or application specific. Recently, Transfer Learning (TL) in EEG decoding has shown great potential in processing signals across sessions and subjects, as can be seen in [10, 29]. The principle of TL is to transfer knowledge from different but related tasks, using existing knowledge learned from already accomplished tasks to help with new ones. In order to train the feature extraction or classification model, large-scale, high-quality datasets are used to obtain strong robustness and high classification accuracy on the new tasks. Many case studies surveyed in [29] show how TL improves cross-subject transfer and the practicality of real-world BCI applications for different features and tasks.

2.1.3. Classification Methods

Regression or classification algorithms can be utilized to identify various brain activity patterns in the BCI system and translate them into commands. The most explored machine learning techniques for classifying EEG signals are based on supervised learning, where a model is created from a training set that maps EEG signal features to labels. Example algorithms are k-nearest neighbor (K-NN) [30–32], support vector machine (SVM) [32–35], naive Bayes (NB) [32], linear discriminant analysis (LDA) [36, 37], convolutional neural networks (CNN) [38], deep belief network (DBN) [39], AdaBoost ensemble learning [40], lattice computing [31], and fuzzy logic-based classifiers [32, 41, 43–46]. Algorithms that learn without labelled examples include reinforcement learning (RL), k-means, affinity propagation, spectral clustering, hierarchical clustering, and others. Among them, RL is a prominent one [47]. RL is flexible and general in its applicability and efficient for real-time and personalized learning in a complex stochastic environment that requires control actions to optimize the system parameters. The RL “agent” acts on the digital world through actions and receives rewards in order to learn what action to take in the current situation. RL is less dependent on the quality of label information, which results in high efficiency of data utilization. Q-learning is a model-free kind of RL and does not require a model of the environment. It handles problems with stochastic transitions and rewards. The Q-learning algorithm has a function “Q” that calculates the quality of a state-action combination according to the maximum expected rewards. RL and its variations, such as deep Q-learning, are applied in different BCI contexts to dynamically analyze the EEG data captured in an experiment. The reward in an EEG-based BCI system can be based explicitly on the EEG signals or implicitly on system-response parameters.
The EEG response from the BCI system serves as a reward for the RL agent to learn the features or control actions. Some studies are based on the reward prediction error theory of dopamine [48]. Other studies use the EEG signal as the error signal underlying the mechanisms of human error processing [49]. In [50], the system performance is the indicator for the reward calculation. In that study, the authors introduced deep reinforcement Q-learning to study the correlation between drowsiness and driving performance. The authors indirectly measure the mind state through indicators recorded during system operation, such as Response Time (RT). RT measures how quickly the subject reacts to a stimulus and yields the reward. RT is used to assess the current action against the current state, which is the EEG data in the current time window. An optimal policy was always assumed and exploited for action selection, and the results showed that the trained model could trace the variations of mind state satisfactorily against the EEG data. Usually RL measures the future reward to assess the current action. The specific feature of the Q-learning proposed in [50] is that the reward (in terms of RT) is measured with some latency. However, this brings in elements of supervised learning, such as the transition weight beta and history-dependent prediction. Therefore, although RL is intuitive and does not need extra XAI, the paradigm of RL requires abstraction and instantiation of the agent, environment, state, action, and reward according to the specifics of the learning problem. Q-learning is capable of solving problems with limited states and actions; however, in order to evaluate optimal policies, the value function Q needs to be defined precisely, which is outside the scope of neuroscientists’ expertise. More information on deep learning and on unsupervised and semisupervised learning algorithms can be found in [12, 14].
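The tabular Q-learning update discussed above can be sketched as follows. The BCI framing (discretized EEG windows as states, interface commands as actions) and all names, parameters, and the reward scheme are illustrative assumptions, not the scheme of the studies cited above:

```python
import numpy as np

# Tabular Q-learning update, sketched for a toy BCI-style setting in which
# discretized EEG windows are states and interface commands are actions.

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(s, a) toward the immediate reward plus
    the discounted best value of the next state (model-free, off-policy)."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q
```

Starting from a zero table, a single step with reward 1.0 raises the visited entry by alpha × 1.0 = 0.1, which illustrates how the quality of a state-action pair is accumulated from rewards.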

A fuzzy rule-based classifier (FRBC) is a fuzzy system specifically configured for performing classification tasks; it consists of a number of classification rules and utilizes fuzziness only in the reasoning mechanism of the classifier [51]. An FRBC can be built using expert opinion, data, or both. When the FRBC is created directly from numerical data, simple heuristic procedures, neurofuzzy techniques, clustering methods, fuzzy nearest neighbor methods, and genetic algorithms can be used [52]. A majority vote based on a single winner rule (the class with the maximum total strength of the vote) usually classifies the new pattern. BCIs with fuzzy rule-based inference for brain state pattern recognition and classification can be found in [41–46, 53]. Almost all reported results showed that fuzzy rule-based classifiers were not necessarily less accurate than other classifiers. Gu et al. in [41] extracted power spectral features that were labelled for each trial. The classification model consisted of four types of fuzzy rules that determined the final predicted label. Five well-known supervised ML classification methods (SVM, K-NN, NB, ensemble boosting, and a discriminant analysis classifier (DAC)) were trained, and the comparison showed that the fuzzy rule-based classifier outperformed them. However, the proposed fuzzy model is task specific; i.e., it was applied to classify motor imagery (MI) data in a passive BCI. Nguyen et al. in [45] introduced a multiclass type-2 fuzzy logic (FL) classifier whose fuzzy parameters were trained using a metaheuristic population-based particle swarm optimization algorithm. CSP is used to extract significant features that are then fed as inputs for classification. The proposed model was also applied to an MI BCI; the benchmark four-class MI dataset from BCI competition IV was used in the analysis of the study. The experimental results showed the high accuracy of the combination of CSP and type-2 FL compared to LDA, NB, K-NN, ensemble learning AdaBoost, and SVM. Bhattacharyya et al. in [44] proposed two types of multiclass classification algorithms by fusing interval type-2 FL and Adaptive-Network-based Fuzzy Inference Systems (ANFIS). The experimental results showed that the proposed algorithms performed better than LDA, SVM, and NB when dealing with uncertain EEG data. Das et al. in [46] also proposed an interval type-2 fuzzy system using an extended Kalman filter based learning algorithm. The BCI competition MI data were used in the analysis, and the performance evaluation of the FL model showed higher accuracy than SVM, as well as several other fuzzy systems including the evolving fuzzy rule-based classifier, online sequential ANFIS, the metacognitive neurofuzzy inference system, and the metacognitive interval type-2 fuzzy system. Tsai et al. in [53] proposed a Takagi-Sugeno fuzzy neural network-based algorithm with a single-channel EEG signal for discriminating between light and deep sleep stages and reported high classification accuracy.
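The single-winner FRBC reasoning described above can be sketched with invented rules and membership parameters (illustrative only, not those of the cited studies). Each rule fires with the min t-norm over its antecedent memberships, and the class of the strongest rule wins:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Each rule: IF feature1 is MF1 AND feature2 is MF2 THEN class k.
# The membership parameters and class labels below are invented for
# illustration; a real FRBC would learn or elicit them.
rules = [
    {"mfs": [(0.0, 0.2, 0.5), (0.0, 0.3, 0.6)], "cls": "rest"},
    {"mfs": [(0.4, 0.7, 1.0), (0.3, 0.6, 1.0)], "cls": "task"},
]

def classify(features):
    """Single-winner FRBC: fire each rule with the min t-norm and
    return the class of the strongest rule."""
    strengths = [min(trimf(x, *mf) for x, mf in zip(features, r["mfs"]))
                 for r in rules]
    return rules[int(np.argmax(strengths))]["cls"]
```

Because each prediction is traceable to one winning IF-THEN rule and its firing strength, the class decision can be argued in plain language, which is the interpretability advantage discussed above.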

To sum up, the proposed FRBC models are brain headset or application specific. Some of them use black-box machine learning approaches; however, in combination with fuzzy rules, reasoning can be performed in order to argue the inner logic of the classifiers. The combination of antecedents and consequences persists in the discipline of argumentation [1]. Although interpretability and accuracy are considered contradictory requirements, the recent tendency is to increase explainability without hurting model performance much. The authors in [54] proposed a fuzzy classifier built by a combination of rule granulation and rule consolidation methods. They obtained the maximum possible classification accuracy with as simple a classifier as possible, and the method offers the possibility of finding a good compromise between interpretability and performance. To the best of our knowledge, this classifier has not yet been applied in BCI research; however, the cross-validation results in [54] were promising. They did not confirm the frequent claim that “naive Bayes often outperforms more sophisticated classification methods”: on nine benchmark datasets and four classifiers, the naive Bayes classifier was the winner in only three cases, while the proposed method won in four.

2.2. Problem Statement

Summarizing the bibliographic research, both feature extraction and classification of EEG data are brain headset or application specific and depend on the custom BCI task. For instance, the highly interactive performance of the evoked-related BCIs proposed in [26] cannot be applied to developing passive BCIs that do not rely on external stimuli, such as those for emotion recognition, mental workload assessment, or driver drowsiness. Feature extraction techniques and model training are not always human interpretable and are often performed offline. Studying the temporal correlation between the EEG time series from different brain regions is not supported online. Big EEG data imposes large feature dimensions, extensive sets of training data, and machine learning models with a black-box approach. Some ML models perform better than others; however, they are harder to explain. Users want to trust the systems they use and to know why a model comes up with the predictions it outputs. This has led to significant growth of XAI over the last few years; however, XAI is extra work, and sometimes it may be difficult to find the cause.

Considering the problems above, we define the requirements to be met as follows:
(i) A framework that can adaptively support a wide range of EEG-based BCI applications
(ii) New brain state decoding with a more understandable interpretation of the spatiotemporal dynamics of neuronal activations and neuronal assembly
(iii) An explainable classification model with interpretable linguistic features to support the development of practical BCI applications
(iv) A traceable and comprehensible process of class prediction
(v) A ubiquitous EEG-based BCI operating in real time, locally or remotely, on different platforms and EEG devices
(vi) Daily-life use with practical online artifact cleaning, less subject specificity, and a minimal training phase

2.3. Proposed Solution

We first searched for a new brain state decoding and a more understandable interpretation of the functional connectivity of the neurons involved in brain processing. Our starting assumption, from [20], is the interpretation of ERD as a correlate of activated cortical areas with increased excitability. We consider the rate of change of the oscillatory rhythms to be a steady-state endogenous or exogenous brain process with specific functional significance for the evoked neuronal activation and assembly. Thus, we feature the ERS/ERD within a specific ongoing EEG band power by the bursts in its change over a certain time window. We denote the rate of increase/decrease (a burst) as ERS”/ERD”. Endogenous and exogenous ERS”/ERD” differ in latency, which we define as the delay between the evoked brain activities. ERS”/ERD” with a peak latency within 250 to 350 msec reflect elicited internal neural processes (endogenous) and are consistently observed in various executive or memory tasks. ERS”/ERD” with a peak latency within 100 to 150 msec are attributable to external stimuli or emotional reactions and are typically associated with sensory systems, e.g., steady-state visual or auditory responses.
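The ERS”/ERD” burst idea can be sketched numerically as a thresholded second derivative of an ongoing band-power series. The finite-difference estimator and the threshold handling here are our illustrative assumptions, not the exact computation of the shell:

```python
import numpy as np

def burst_indicator(band_power, dt, threshold):
    """Approximate the second derivative of an ongoing band-power series
    with finite differences and flag samples where the rate of change of
    the power trend exceeds a threshold (a burst in the ERS''/ERD'' sense)."""
    d2 = np.gradient(np.gradient(band_power, dt), dt)  # second derivative estimate
    return d2, np.abs(d2) > threshold
```

On a quadratically accelerating power trend the estimator recovers a constant second derivative, so a single threshold separates steady drift (no burst) from an accelerating synchronization or desynchronization (burst).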

In order to observe and interpret the ERS”/ERD” in a human-readable way, we use linguistic variables and IF-THEN rules in which the second derivatives participate and feature the changes in brain rhythms at scalp locations over time. The ERS”/ERD” are described by linguistic variables and discriminated by fuzzy membership functions. The functional connectivity of all brain regions correlated with the evoked event or stimuli is described in the IF part of the rules by a specific combination of linguistic variables. The proposed technique for developing a fuzzy BCI system is the Sugeno fuzzy model (also known as the TSK fuzzy model) [55]. It has been chosen for its flexibility in fuzzy system design. The TSK model can be used to generate fuzzy rules from a given input-output dataset and thus to train a fuzzy rule-based classifier. A four-layer Takagi-Sugeno fuzzy neural network classifier with satisfactory accuracy has been reported in the BCI research field [53]. However, in order to move from the neural-network black-box approaches to an interpretable and explainable model, we studied and implemented the approach proposed in [54] for simple supervised training of a fuzzy classifier via a combination of rule granulation and rule consolidation methods (RGRC). With a slight modification of the criteria for rule consolidation, we embedded this method in the proposed BCI system to train a fuzzy classifier offline.

Although classification is a basic task in EEG pattern recognition, sometimes the fuzzy system has to be used in an operation mode, with the brain activity characterized by formulas that participate in the consequent part of the fuzzy rules. For instance, the EEG_W score (1), known to be related to cognitive processes such as workload, engagement, attention, and fatigue [56], is computed from Ne electrodes placed in the occipital lobe for visual processing evaluation, in the frontal lobe for emotional processing, and in the temporal lobe, which processes auditory information.

Another functional relation (2), which evaluates the current emotional state based on high/low arousal and positive/negative valence [57], is computed from four electrodes placed over the prefrontal cortex (AF3, AF4, F3, and F4). The associated β/α ratio is a reasonable indicator of a person’s arousal state, while valence (2) is estimated by computing and comparing α and β power in the frontal channels F3 and F4.
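Since equations (1) and (2) are not reproduced here, the following sketch only shows a commonly used formulation of frontal arousal/valence estimation from band powers. It is an assumption for illustration and may differ from the exact equations of [56, 57]:

```python
def arousal_valence(alpha, beta):
    """Hedged sketch of a common formulation behind relations like (2):
    arousal as the beta/alpha ratio over the prefrontal channels, valence
    as a left/right frontal asymmetry of alpha and beta power. `alpha` and
    `beta` are dicts of band power per channel. This is an illustrative
    assumption, not a reproduction of the cited equations."""
    chans = ["AF3", "AF4", "F3", "F4"]
    arousal = sum(beta[c] for c in chans) / sum(alpha[c] for c in chans)
    valence = alpha["F4"] / beta["F4"] - alpha["F3"] / beta["F3"]
    return arousal, valence
```

In the TSK setting described next, such a formula would sit in the consequent part of a rule, so the crisp output remains a weighted combination of interpretable quantities.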

In both cases, the TSK fuzzy model can be used as a universal approximator of known functions with specified error bounds and a computationally efficient defuzzification process. The crisp output value in the TSK model is a mathematical combination of the rule outputs and the rule strengths. The fuzzy membership functions can be defined experimentally or statistically. We automatically build trapezoidal fuzzy membership functions during the baseline phase. This shape is chosen because the upper base of a trapezoid absorbs the small scattering, due to the oscillatory nature of EEG, that causes false featuring. The mean and standard deviation participate together with several coefficients that are tuned according to the brain headset used.
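The trapezoidal membership construction from the baseline mean and standard deviation can be sketched as follows. The coefficients k_core and k_support stand in for the headset-specific tuning coefficients mentioned above; their values here are illustrative:

```python
import numpy as np

def baseline_trapezoid(baseline, k_core=1.0, k_support=3.0):
    """Build a trapezoidal membership function from a baseline recording:
    the flat top absorbs small oscillatory scattering around the baseline
    mean, and the slopes fall to zero further out."""
    m, s = np.mean(baseline), np.std(baseline)
    a, b = m - k_support * s, m - k_core * s   # left foot, left shoulder
    c, d = m + k_core * s, m + k_support * s   # right shoulder, right foot
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)
    return mu, (a, b, c, d)
```

Values within one standard deviation of the baseline mean then receive full membership (the trapezoid's upper base), so the small oscillatory scatter of EEG does not trigger false featuring, while values further out fall off linearly toward zero.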

In the proposed BCIFS, we exploited the concept behind the Internet of Things and the Node-RED approach [58] to integrate all sensing, computation, and memory into a single standalone platform. Node-RED uses visual programming for “wiring together” code blocks into “flows” that carry out tasks by connecting nodes (input, processing, output, and UI nodes) in a browser-based flow editor (Figures 1 and 2). Contributions 5 to 8 listed below follow from the fact that BCIFS is built in Node-RED, a cross-platform environment based on the Node.js event-driven model. The flows are stored as JSON and can easily be imported and exported for sharing with others. All this makes the system device-, task-, and service-independent and portable enough to operate locally or remotely. The standard Node-RED front-end graphical user interfaces are used for ERS”/ERD” monitoring, which can be observed on a live data dashboard (Figure 3) or heard in the background by passing the result values into a code block for an audio player. The bar and gauge graphs in the dashboard monitor the electrode levels for the brain oscillatory rhythms and their features. For instance, the band power and ERS’ values for electrode O2 are passed to the gauge graphs O2 A and O2 A’ (left and centre gauges in Figure 3), while the ERS” values are passed to the bar graph O2A”. The colour of each gauge depends on the value being passed into it and changes from green through yellow to red as the value moves from the reference level to a burst.

The main contributions of this study are the following: (1) a general software system shell for developing both passive and event-related BCIs with quick setup and a short training phase, suitable for real-time application in different contexts such as executive or memory tasks, sensory processing, neurofeedback, and BCI control; (2) a new brain state decoding for human-interpretable feature extraction in terms of bursts in the change of neuronal synchronization or desynchronization at scalp-region level; (3) digital twin-based optimization for tuning the parameters of the fuzzy membership functions; (4) practical, real-time artifact collection and cleaning; (5) easy adaptation to different EEG-based brain headsets; (6) easy adaptation to a variety of digital devices and services operating in the IoT; (7) real-time analysis of the recorded EEG rhythms with options for visual or audio representation at scalp-region level in response to time; (8) remote use of the developed BCI either for operation or for performing experiments; (9) a proof of concept: the spatiotemporal dynamics of brain connectivity during evoked visuospatial selective attention.

3. Fuzzy Shell for Developing a Custom EEG BCI

The BCIFS is built in the Node-RED platform, taking full advantage of its low-code programming for event-driven applications and its ability to wire together hardware devices, APIs, and online services. The streaming of EEG time series and the TSK fuzzy model are developed in a browser-based flow and can run on low-cost hardware such as the Raspberry Pi, in the cloud, and in the IoT. Integrating data streams from different EEG-based headsets is done via a custom Node-RED library of input nodes, which allows interfacing the headset technology with other Node-RED nodes.

3.1. Featuring of the EEG-Time Series

EEG devices for measuring the ongoing brain activity provide a stream of constantly changing brain time series. Suppose that we are given an EEG dataset denoted by S that contains N trials for one subject. Each trial contains EEG records with respect to time and electrode locations, belonging to one or several classes (patterns) of the brain activity under consideration. The dataset S is denoted by S = {(Si, Li)}, where i = 1, ..., N.

Si represents the EEG records in the ith trial. N is the number of trials. L is a column vector that assigns each trial to one of the associated labels for a single class (C). In the case of multiple classes, the multilabel classification for the ith trial is presented as a row vector whose length corresponds to the number of classes k.

Each trial has to be presented as an input matrix Si of size E × T, where E is the number of electrodes and T is the number of time samples per trial. Since the time resolution of the brain signals is in the order of milliseconds and a trial is in the range of seconds, time windowing is applied to each trial in order to be more informative for classification. Let w denote the size of the window. After referencing the studies in [59–61], we found that w is usually 128 msec, with no overlapping windows or with an overlap of 5 msec. Experimentally, we defined similar sizes: w = 125 msec for exogenous and w = 150 msec for endogenous brain processes. The features (F) used for categorizing the brain activity reduce the dataspace dimension in the windows. Thus, Si can be represented by the features F computed over the windows.

F may be any kind of temporal, statistical, spectral, or nonlinear feature over a certain time window. The window’s length defines long-term or short-term interpretation.
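The windowing step above can be sketched as follows; the function name and the average-power feature are illustrative assumptions (the shell itself takes band power from the headset's FFT output):

```python
def window_features(trial, win, step, feature=lambda seg: sum(seg) / len(seg)):
    """Slice each electrode's time series of a trial (an E x T list of
    lists) into windows of `win` samples moved by `step` samples, and
    reduce each window with `feature` (default: average power).
    Returns an E x n_windows feature matrix."""
    out = []
    for channel in trial:
        T = len(channel)
        out.append([feature(channel[s:s + win])
                    for s in range(0, T - win + 1, step)])
    return out
```

At 256 Hz, a 300 msec window with a 150 msec step corresponds to roughly win = 77 and step = 38 samples.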

The third-party software or hardware integrated through Node-RED might be robots, neurofeedback training programs, serious games, etc. The featured brain electrical activity is translated into different types of commands, such as robot navigation, touching digital objects on the screen, or switching home appliances. However, no uniform place exists in the brain where a command is stored as a set of neurons. Memory and thinking during command generation evoke distributed neuronal activity; we can only use approximate reasoning over this EEG activity in order to map it to a digital command. This led us to discriminate these brain patterns and describe them by linguistic variables, fuzzy sets, and fuzzy rules according to the location of the scalp electrodes and the bandwidth. The scalar strength in the premises of the fuzzy rules and the crisp values in the consequences designate the specific functional significance of the evoked neuronal activation and connectivity.

3.2. Mathematical Background of the TSK Fuzzy Model

Fuzzy logic takes decisions and recognizes patterns using linguistic variables, “degrees of membership,” and fuzzy inference. It maps an input space to an output space using a series of fuzzy IF-THEN rules. Uncertainties are presented as fuzzy sets (Ai), which are often expressed by words and interpreted by their membership functions μAi(x). The TSK structure consists of rules of the form Ri: IF x is Ai THEN yi = ai x + bi, i = 1, ..., p, where x holds the brain signal responses at scalp level representing the inputs defined in the domain (S); Ai is a fuzzy set defined on (S); yi is a scalar output corresponding to rule i; and ai and bi are the consequence parameters associated with rule i. For a zero-order TSK model, the output level yi is a constant (ai = 0), where p is the number of fuzzy rules.

The simplest fuzzy rule-based classifier is a fuzzy IF-THEN system with a class label in the consequences. A fuzzy classifier is constructed by specifying classification rules of the form Ri: IF x is Ai THEN class is ci, where ci functions as a label.

An example rule is “IF x1 is A1 AND x2 is A2 THEN class label is 1.” Such argumentation is easier to elicit from neuroscientists. The actual numerical value in the consequences is irrelevant because the class is a nominal variable. All rules “vote” for the class in their consequent part, and the majority of these votes discriminates the class; i.e., the maximum aggregation method is applied.

A useful special case of “voting” [51] expresses the support for each class as a single constant value, usually within the interval [0, 1]: Ri: IF x is Ai THEN class 1 support is di1 AND ... AND class l support is dil, where dij are constants within the interval [0, 1] and l is the number of class labels.

In this model, every rule votes for all the classes, and the rules are aggregated and defuzzified by using the weighted average y = Σi τi yi / Σi τi, where τi is the degree of fulfillment of the ith rule, τi = T(μAi1(x1), ..., μAin(xn)), where n is the number of input variables in the ith rule and T is a t-(co)norm, such as minimum or product. Since each rule has a crisp output, the overall output is obtained via the weighted average, thus avoiding the time-consuming defuzzification process required in the Mamdani fuzzy model.
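The zero-order TSK inference with product t-norm and weighted-average defuzzification can be sketched as below; the data structures (rules as pairs of antecedent lists and crisp outputs) are an illustrative assumption:

```python
def tsk_infer(rules, inputs):
    """Zero-order TSK inference. Each rule is (antecedents, y), where
    antecedents is a list of (input_name, membership_function) pairs
    and y is the rule's crisp output. The firing strength tau is the
    product t-norm of the memberships; the overall output is the
    weighted average of the rule outputs."""
    num = den = 0.0
    for antecedents, y in rules:
        tau = 1.0
        for name, mf in antecedents:
            tau *= mf(inputs[name])
        num += tau * y
        den += tau
    return num / den if den else 0.0
```

Because every rule contributes a crisp value weighted by its firing strength, no Mamdani-style defuzzification over an output fuzzy set is needed.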

First, BCIFS separates the registered EEG data in terms of band average power according to the location of the scalp electrodes. After the preprocessing, BCIFS evaluates the variations (derivatives) in ERS/ERD at scalp-source level in response to time. Then, fuzzy reasoning is performed according to the current fuzzy rule base (FRB) and inference. The chosen zero-order TSK fuzzy model uses a set of simple functions that require low CPU and memory resources and has a short response time. Another advantage of this model is that the fuzzy rules can be generated from a given input-output dataset for training a fuzzy rule-based classifier.

3.3. Generating the Fuzzy Rule-Based Classifier

During the design of BCIFS, the premise and consequence parameters have to be identified. The electrodes of interest are described by linguistic variables, whereas the rate of change of the evoked oscillatory rhythms is described by fuzzy sets. For instance, following the brain maps of coherence proposed in [62], there is significantly higher coherence at the frontal and right parietal sites for the θ band when watching a negative film compared to the neutral state: IF F4_T_ERS”300 is high AND O2_T_ERS”300 is high AND P8_T_ERS”300 is high THEN valence is 0.1.

Here, the output label (valence on the scale [0, 1]) interprets the valence according to the chosen sliding window. ERS”300 means that the rate-of-change peak (burst) comes with a latency of 150 msec over a 300 msec window shifted every 150 msec.

The FRB is set up according to the significance of the underlying role of each electrode and frequency. Only the critical changes have to be described in the IF-THEN rules. Rules with contradictions in their assumptions can be separated into several FRBs working in parallel. We use trapezoidal fuzzy membership functions, which are simple and fast to calculate. The upper base of the trapezoid absorbs the small scattering that causes false ERS”/ERD” and wrong baseline recordings during the reference phase. The upper base ignores the wave scattering in the frequency domain, while the legs of the trapezoid eliminate the spikes due to artifacts. When the right leg is perpendicular to the base, the right slope for fuzziness is eliminated and the high bursts produced by artifacts are not evaluated. If we are interested in any “smart artifact,” we can add an additional fuzzy membership function, so-called “artifact,” with appropriate trapezoid parameters. Thus, events related to a facial expression can be classified according to the evoked artifacts.

4. Materials and Methods

In this section, the feasibility of the proposed BCI fuzzy shell is illustrated. It was proven by real experiments evaluating the spatiotemporal dynamics of neural oscillations during evoked top-down visuospatial selective attention.

4.1. Scientific Context

Attentional processes are the brain’s way to cope with information overload and to focus on some stimuli while suppressing others. Attention is commonly categorized into top-down (or endogenous) attention, an internally induced mental focus on self-thoughts, memories, or abstractions, and bottom-up (or exogenous) attention, an externally induced mechanism directed by stimuli from the surroundings. Top-down attention is under voluntary control and is also known as “goal-directed” attention, whereas bottom-up attention is “data-directed.” Endogenous oscillations are attributable to internal neural processes and include a well-known set of frequencies [63]. Exogenous oscillations are driven by external stimuli and are typically associated with sensory systems, e.g., the auditory steady-state response [64]. Attention in the visual system has been studied extensively over the past decades. Visual spatial attention can either be exogenously captured by a salient stimulus that overrides internal goals or be endogenously allocated by voluntary effort while processing multiple targets [65]. The brain regions actively involved are the prefrontal, parietal, and occipital cortex [59, 60, 66–69]. The prefrontal lobe is thought to be involved in the executive functions of the brain: problem solving, judgement, attention, working memory (WM), and motor programming. Many studies have indicated that frontal activity is closely related to enhanced attention and that sustained neuronal activity is necessary to maintain the WM representations [22, 70, 71]. Other studies report increases observed in the occipital, parietal, and temporal lobes during a short-term memory task [72, 73]. The ongoing EEG oscillatory rhythm in the higher frequencies is considered in [69] a correlate of high-speed WM comparison during recall (see Figure 4).

Following the above neuroscience findings, our main hypothesis is that the ongoing EEG oscillatory rhythms during top-down visuospatial selective attention show specific evoked bursts in the higher frequency bands at particular electrode positions and are functionally connected in different ways during attentional states compared with passive view.

4.2. Participants

Data were collected from 11 healthy participants (3 females), all right-handed, with normal vision and a mean age of 33.09 years. The EEG session lasted half an hour in total. All participants signed informed consent before the experiments.

4.3. Stimulus Presentation

Participants were seated in front of a Dell laptop with a 14-inch flat screen with a resolution of 1366 × 768 pixels. So-called Porteus mazes (https://www.mazes.ws/mazes-hard-puzzle-one.htm) were displayed on it, although any other website for playing hard mazes online can be used. The same wired mouse was used by all participants. A Python script is used for detecting whether the mouse is moving or not.

4.4. Method of Registration

In Section 3, the proposed fuzzy shell for developing a custom EEG BCI was implemented for studying the neuronal activity during top-down visuospatial selective attention and the information processing during navigation. We tested different hypotheses while subjects solved a Virtual Maze Navigation Task (VMNT) and proved the concept in [74] that the VMNT is well suited to evoke brain responses. We examined the brain activity during spatial exploration, path planning, and navigation, which rely on forming cognitive maps (CMs). According to Tolman [75], CMs enable one to acquire, encode, store, recall, and decode information about the relative locations in one’s everyday or symbolic spatial environment. Therefore, many top-down attentional systems participate during spatial navigation in order to gather information and evaluate options: attention to sensory observations over multiple spatial locations, attention to the mental representation of paths with their temporal order, and attention to encoding and retrieving information from WM or visual short-term memory.

4.5. Experimental Task and Trial Setup

In the present experiment, participants had to solve hard virtual mazes #14 and #15. Maze #13 was used as an exercise to familiarize them with the laptop, mouse, and software.

The experimental VMNT task responses in three conditions are as follows:
(i) Condition 1: forming of cognitive maps; top-down visuospatial selective attention underlying spatial exploration, path planning, mental navigation, and evaluation of options, as well as encoding and storing the cognitive map in WM
(ii) Condition 2: memory-guided visuospatial traversing; high-speed WM comparison during recall and decoding of routes for traversing
(iii) Condition 3: instant visuospatial traversing; instant spatial exploration when the path is not a bottleneck in the neighborhood or trial and error guides the route traversing

The mouse events (handled by a Python script in Node-RED) are used for identifying the current condition and depend on whether the mouse is moving or not. We assume that holding the mouse button presumes forming of CMs, while changing mouse coordinates indicate that the second or third condition occurred. We set two top-down visuospatial attentional conditions: forming of cognitive maps (FCMs) and visuospatial traversing (VST). We distinguish them according to the mouse events and saved the ongoing EEG oscillatory activity in separate CSV files.
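The labeling heuristic described above could look roughly like the following sketch; the article's actual Python script is not published, so the function name and logic are assumptions mirroring the stated rule:

```python
# Condition codes from the protocol: 1 = AVA, 2 = FCMs, 3 = VST
FCM, VST = 2, 3

def label_condition(prev_pos, cur_pos, button_down):
    """Heuristic from the protocol: a held button with a stationary
    cursor presumes forming of cognitive maps (FCMs); changing mouse
    coordinates indicate visuospatial traversing (VST)."""
    if button_down and cur_pos == prev_pos:
        return FCM
    if cur_pos != prev_pos:
        return VST
    return None  # idle: keep the previous condition
```

Each incoming mouse event would then route the current window's ERS” values to the CSV file of the returned condition.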

4.6. EEG Acquisition

EEG data were continuously recorded with the “EPOC+” neuroheadset by the EMOTIV Bioinformatics Company [76]. Recordings were collected from sites AF3, AF4, F3, F4, F7, F8, T7, T8, P7, P8, O1, and O2. The EEG signals were then preprocessed in frequencies from 4 to 50 Hz. EMOTIV EPOC+ categorizes brainwaves by frequency into four main types, beta, alpha, theta, and delta, using FFT. The FFT output is converted to power density (µV²/Hz).

4.7. Experimental Protocol

At the beginning of the experiment, two baseline phases, for the anterior and posterior cortex, were performed. Electrode positions were topographically aggregated as frontotemporal cortex and parietooccipital cortex. In a passive view condition, the average power of the frequency bands was recorded, and after calculating the ERS” for the electrodes of interest, the passive view baseline for each participant was set up.

Each trial starts with audio tones for 10 seconds, which prompt the subject to listen to his/her brain and alert that the upcoming maze-solving task is starting. He/she is encouraged to click the left button of the mouse, which sets the start of the trial and of the ERS” recordings for the two conditions, depending on the current mouse event. Auditory stimulation time-locked to the start of the trial evokes bottom-up audio attention. This additional condition, bottom-up audio attention with passive view (AVA), is used as a neurophysiological indicator for bottom-up audio attention during an amplitude-modulated tone in the range of 290 to 790 Hz. By mapping the corresponding frequency bands of interest to specific tones in the human hearing range, it is evaluated whether the participant is stressed/excited or relaxed. The corresponding frequency in Hz, played by a PC player, correlates with the active brain rhythms: lower tones indicate ERS” in the lower bands and higher tones indicate ERS” in the high-speed bands, while the middle tones indicate modulation of the band powers. Windows with a length of 250 or 300 msec and a step of 125 or 150 msec at a resolution of 256 Hz are used. Depending on the length of the step, the latency changes; e.g., the oscillatory rhythms in the higher-frequency bands are accessed every 125 msec, while the oscillatory rhythms in the lower-frequency bands are accessed every 150 msec.

The integration of the data streaming from the EPOC+ headset into Node-RED is via a custom library of input nodes, the EmotivBCI Node-RED toolbox [77]. The installation and the node descriptions are presented in [58]. Node-RED flows showing how to design an example of the proposed BCIFS are uploaded in the Node-RED flow library to be shared with the community [78]. The first flow consecutively registers the EEG data from the headset for the electrodes and frequency bands of interest. The second flow initializes the linguistic variables and fuzzy sets and performs the reference phase for the parietooccipital cortex in order to generate the membership functions. The third flow optimizes the parameters of the fuzzy membership functions by DT. The fourth flow initializes the fuzzy rules and performs fuzzy inference based on the chosen type of Sugeno-style aggregation. The last flow illustrates how to create a front-end graphical user interface.

4.8. Data Analysis

Event-related bursts in the average power of the oscillatory rhythms, relative to a preevent baseline period, at the four frequencies and in EEG epochs are analyzed. The evoked ERS” were averaged across participants to produce a grand average in order to discriminate the bursts and important electrodes and/or to train the fuzzy membership functions and the FIS. First, we concatenate all experiments for the per-user electrode arrays (column vectors) into one big matrix and label them for the three conditions. Then, the data are ready for post hoc interpretation of the results by statistical and ML models in MATLAB.

4.8.1. EEG Data Analysis

The proposed BCIFS was used for the analysis of bursts in the average power of the frequency bands of interest in each single epoch. The EEG power has been labelled for each trial and each epoch from the status of the mouse event. Thus, each training sample has a label associating the current condition of one single class C (visuospatial attention). The 1st label associates the AVA condition, the 2nd the FCMs condition, and the 3rd the VST condition. The ERS” (bursts) over scalp site and bandwidth were evaluated in windows with a length of 250 (300) msec and a step of 125 (150) msec. These overall latencies are suggested for developers and are in line with [79], which reported short narrowband bursts (<150 msec), and with the authors in [80], who stated that the duration of bursts is 100–200 msec with a similar duration of a cycle. Only the critical changes at source and scalp level are described in the fuzzy rules. The output is used to differentiate the condition. For instance, the next rule expresses the functional connectivity at temporoparietal level: IF P8_T_ERS”300 is high AND T8_G_ERS”250 is high THEN …, where P8_T_ERS” is a linguistic variable with fuzzy set “high.” Other options are “desync,” “low,” “ref,” and “artifact.” Artifacts that arise from either low device connectivity or blink/ocular/muscle movements showed ERS” with values over 150 units for the low frequencies and over 10 for the higher ones. Artifacts were removed at the reference phase and corrected during the test. P8_T_ERS”300 means that a positive-going θ power over a right parietal electrode site displays a maximum rate of power increase with a peak latency of 300 msec.
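The burst featuring above amounts to taking discrete second differences of the per-window band power relative to a baseline. The sketch below illustrates this; the crisp threshold stands in for the article's fuzzy sets and is an assumption for illustration, as are the function names:

```python
def ers(power, baseline):
    """Event-related synchronization (%) relative to a pre-event baseline."""
    return (power - baseline) / baseline * 100.0

def second_difference(series):
    """Discrete second derivative of a per-window series:
    x''[k] = x[k] - 2*x[k-1] + x[k-2]."""
    return [series[k] - 2 * series[k - 1] + series[k - 2]
            for k in range(2, len(series))]

def burst_windows(band_power, baseline, threshold):
    """Indices of windows whose ERS'' exceeds a burst threshold."""
    ers_series = [ers(p, baseline) for p in band_power]
    return [k + 2 for k, d in enumerate(second_difference(ers_series))
            if d > threshold]
```

A sharp rise of band power in a single window then shows up as a large positive ERS” value in that window, which is what the “high” fuzzy set captures.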

The membership functions for the fuzzy sets are built during the baseline phase from trapezoids whose breakpoints are placed at ci multiples of the standard deviation around the mean (equations (12) and (13)), where ci are tuning parameters, m is the mean, and std is the standard deviation over 100 samples. The values of ci can be predefined based on experience or obtained by an optimization procedure, such as a genetic algorithm (GA) or particle swarm optimization. We first defined the parameters experimentally as c1 = 1.2, c2 = 1.4, and c3 = 2. Then, a digital twin-based optimization was designed and implemented based on the concept described in [81]. A GA was used as the heuristic and linked to the digital twin (DT). The population evaluation is performed in the DT with real EEG data in consecutive time windows. We constructed individual cost functions that define the individual error for each electrode and frequency band of interest. Figure 5 illustrates the flow diagram of the digital twin-based optimization procedure, where the GA and DT tune the fuzzy membership functions for electrode i, window size w, and band f of interest.

For each solution in the population, the fuzzy membership functions (12) and (13) in the fuzzy rules (15) and (16) are updated according to the new genes (tuning parameters), and each chromosome in the population is evaluated by the DT. The GA minimizes the error by (17), which measures the difference between the simulated and real ERS”. The idea of the cost function is to compute the error for noncompliance with the EEG oscillatory rhythms already measured during the baseline phase. The fuzzy membership functions for the fuzzy sets “reference” and “low” (Aref and Alow) should map adequately the dynamics of the BCI system in a passive view condition; i.e., Aref should be 1 and Alow should be 0. Thus, the temporal behaviour of the error within several consecutive time windows serves as a cost function, and after the last generation it should be minimal and close to zero, where the summation in (17) runs over the electrodes of interest and f is the frequency of interest for the ith electrode.

The cost function is described by TSK fuzzy rules of zero order (15) and (16), and the rules are aggregated and defuzzified by using the weighted average (10). Then, the GA receives the costs of the current population. The error E is 0 when the ERS” is in the baseline interval, which results in a membership function value close to 1, and by tuning the parameters c1, c2, and c3 in (12) and (13) the GA tries to minimize the cost function by selecting the best offspring after parents’ mating and mutation. The optimization iterates to some ending condition; however, we found that decent results could be obtained after only 10 generations with the following settings: population size (the number of chromosomes in each generation) 40, mating parents 20, and 3 genes (the tuning parameters c1, c2, and c3). The duration was about 2.5 minutes, which did not cause users to become fatigued during the training stage. By analogy, the DT-based optimization was used to refine the coefficient c3 for the condition FCMs (forming of cognitive maps). During this training, the fuzzy rules R3 (18) and R4 (19) were used. In order to minimize the error by (17), Ahigh (14) should be 1 and Aref should be 0.
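A minimal GA loop matching the reported settings (population 40, 20 mating parents, 3 genes, 10 generations) could be sketched as below; the chromosome encoding, one-point crossover, and the toy digital-twin cost are assumptions for illustration, not the article's implementation:

```python
import random

def cost(genes, baseline_ers, ref_mf_factory):
    """Toy digital-twin cost: replay recorded baseline ERS'' windows
    through the 'reference' membership built from candidate
    coefficients; in passive view the membership should be 1,
    so the per-window error is 1 - mu."""
    c1, c2, c3 = genes
    mu = ref_mf_factory(c1, c2, c3)
    return sum(1.0 - mu(x) for x in baseline_ers) / len(baseline_ers)

def ga_tune(fitness, pop_size=40, n_parents=20, generations=10, seed=0):
    """Minimal elitist real-coded GA over 3 genes (c1, c2, c3)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.5, 3.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # best (lowest cost) first
        parents = pop[:n_parents]
        children = []
        while len(children) < pop_size - n_parents:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 3)          # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(3)] += rng.gauss(0, 0.1)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

In the real system, `fitness` would call the DT with the current EEG windows instead of a synthetic cost.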

Since the coefficients fluctuate slightly across the different electrodes, bands, and users, they typically were in the following ranges: c1 = [0.9 ÷ 1.3], c2 = [1.4 ÷ 1.6], and c3 = [1.9 ÷ 2.2]. The implementation in Node-RED is illustrated in Figure 6. The flow in JSON format is available in the Node-RED library [78].

After tuning the parameters of the fuzzy sets, the FRB was set up to test the three conditions. The following FRB is an example of how to test the EEG oscillatory rhythms correlating with label 1, with contradictions in the assumptions:
FR1: IF T7_A_ERS”300 is high THEN …
FR14: IF T7_T_ERS”300 is high AND P7_T_ERS”300 is high THEN …
FR15: IF T7_BH_ERS”250 is high AND T8_BH_ERS”250 is high THEN …

We analyzed further rules that mirror the different bursts in ERS/ERD during the processing of external audio inputs, because the band underlying audio stimulation is still strongly debated in the literature. Such rules that run simultaneously are FR11 (T8BH), FR19 (P6G), FR21 (F7G), FR34 (F8T), FR35 (T7T), FR36 (T8T), and FR39 (R-T).

4.8.2. Statistical Data Analysis

For the statistical analysis of the EEG data, the statistics package of MATLAB was used. An ANOVA compared the ERS” for the bands of interest at EEG scalp level between the AVA, FCMs, and VST conditions. The statistical p value is commonly used to express the significance of research findings; however, following the criticism in [82] that a single p value cannot meaningfully determine which pairs of means (groups) are significantly different for a given hypothesis, we used the statistics and machine learning methods in MATLAB for multiple comparisons based on the Bonferroni approach to perform multiple t tests with a statistically highly significant p < 0.01.

From the post hoc scientific hypothesis testing, the electrodes and frequency bands of interest are discriminated and the irrelevant features discarded. Thus, the system designers reduce the high-dimensional input space (resulting from the multiple channels and frequency bands) in the antecedents of the fuzzy rules.
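The article performs these tests with MATLAB's statistics tools; a dependency-free Python sketch of the same pipeline (one-way ANOVA F statistic plus Bonferroni-adjusted pairwise t statistics; the function names are my own) might look like:

```python
import math
from itertools import combinations

def f_oneway(*groups):
    """One-way ANOVA F statistic: between-group / within-group mean squares."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

def bonferroni_pairs(named_groups, alpha=0.01):
    """Pairwise Welch t statistics with a Bonferroni-adjusted
    significance level (alpha divided by the number of comparisons)."""
    pairs = list(combinations(named_groups, 2))
    adj_alpha = alpha / len(pairs)
    results = []
    for (na, ga), (nb, gb) in pairs:
        ma, mb = sum(ga) / len(ga), sum(gb) / len(gb)
        va = sum((x - ma) ** 2 for x in ga) / (len(ga) - 1)
        vb = sum((x - mb) ** 2 for x in gb) / (len(gb) - 1)
        t = (ma - mb) / math.sqrt(va / len(ga) + vb / len(gb))
        results.append((na, nb, t, adj_alpha))
    return results
```

Each t statistic would then be compared against the critical value for the adjusted level, mirroring MATLAB's `multcompare` with the Bonferroni critical value type.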

4.8.3. Machine Learning Data Analysis

Since the EEG power has been labelled for each trial and stimulus, we used supervised machine learning-based classification approaches. In order to classify the three conditions during spatial navigation, we trained the fuzzy classifier proposed in [54] from the input-output data for ERS”. We slightly modified the criteria for the rule consolidation in order to keep all sparse data, which are important when the bursts in the oscillatory rhythms are evaluated. During the consolidation stage, the rules are ranked not only by their strength (the number of samples they govern) but also by the global classification error of the rule. The number of iterations was 15, defined based on the consolidation stabilization (i.e., no more transfers are accepted). The fuzzy classifier by RGRC was implemented with the single-winner approach [51].

5. Results and Discussion

5.1. EEG Data

The bursts in the cut-off frequencies were obtained statistically by ANOVA and post hoc multiple comparison tests. The ANOVA yielded a significant effect for the three conditions, showing an increase of ERS” in relation to baseline. The VMNT activated functional connectivity from the frontal to the parietal and occipital regions, as can be seen in Figures 7 and 8, reflecting the visual information processing and path-finding processing, with significant differences between the left and right hemispheres. The sites exhibiting high positive ERS” far exceeded the baseline at each frequency, showing that the dominant rhythm is the high-frequency β band (18–25 Hz) in the right parietal site (Figures 9 and 10). We explain this by the type of mazes, which are “hard” and pose discrimination difficulties. This is in line with the results in [83], claiming that increased parietal activity in the right temporoparietal region correlates with improved perception of crowded stimuli. Similar results, namely that a short narrowband burst of β waves correlated with memory and movement, were reported in [79].

The detected high parietal activity is thought to reflect visuospatial processing, and the ERS” increases with the difficulty of the task; thus, a higher burst is associated with the FCMs condition (compare the values of the bursts in Figures 7(b) and 7(c)). Other bursts in activity were observed in the temporal and occipital regions, in line with studies reporting oscillatory activity during spatial navigation [68, 69, 84, 85]. Riddle et al. [68] reported high frontal and parietal oscillatory activity during visual search tasks. Herrmann et al. [84] proved that early evoked activity (50–150 msec) reflected allocating attention to a selected object and comparison with the templates in WM. Hong et al. [69] proved that functional brain networks in the respective frequency bands were integrated in different ways during the attentional state compared to the passive view state. Howard et al. [85] showed that γ band power in the PFC increased directly and approximately linearly with WM load, which is in line with the decreased burst in Figure 7(c).

Consistent with White [86], we localized functional connectivity of θ and γ bursts in the right temporal and parietal regions during spatial navigation in both conditions. We achieved this by designing fuzzy rules that consist of more than one linguistic variable in order to evaluate the coherence among different electrodes and bands in one IF-THEN rule. This functional connectivity between θ and γ can be observed from FR4 (T8TG) in Figures 9–11. FR4 combines the linguistic variables for T8_T_ERS”250 and T8_G_ERS”250 with fuzzy sets “high.” Thus, this fuzzy rule has a specific functional significance for the evoked neuronal assembly. The ERS” shows a high positive increase that far exceeded the AVA condition (p < 0.001). The ERS” for the temporal θ and γ is higher in FCMs than in VST. This can also be seen in Tables 1 and 2, where the p values for T8TG are statistically significant (in bold). We explain this by the increased task difficulty and the corresponding working memory load, in line with Meltzer et al. [87], who found increases in θ power with memory load to be most prevalent in the frontal midline cortex. Our post hoc multiple comparison with critical value type “Bonferroni” (which rejects the null hypothesis at the 1% significance level) showed an increased ERS” in θ and increased frontal γ with increasing cognitive demand. The p values (Table 2) that proved this were for the rules AF3T, AF4T, F4T, FT (frontal θ), O2G, and P8G. This is in line with Lisman et al. [88], who mirrored the effects of working memory load with θ power increases and determined θ/γ coupling as a neural coding system beyond the hippocampus, most common in the occipital lobe.


Table 1
T7A p = 0.16 | AFT p < 0.01 | P8T p < 0.01 | T8TG p = 0.99 | FT p < 0.01
FRT p < 0.01 | AF4B p < 0.01 | F8B p < 0.01 | FB p < 0.01 | F7B p = 0.16
T8B p < 0.01 | FRRT p = 0.15 | P8A p < 0.01 | TPLT p < 0.01 | TB p = 0.27
F3B p < 0.01 | F4B p < 0.01 | P7B p = 0.95 | T8G p < 0.01 | P8G p < 0.05
TPRT p < 0.01 | F7G p = 0.56 | F8G p = 0.68 | O1G p = 0.20 | O2G p < 0.05
OT p = 0.24 | O2T p < 0.05 | P8B p = 0.37 | AF3T p < 0.01 | AF4T p < 0.01
F3T p < 0.01 | F4T p < 0.01 | F8T p = 0.11 | T7T p < 0.01 | T8T p = 0.57
T8A p = 0.06 | TOGT p = 0.20 | RT p = 0.054 | T7G p = 0.94 | AF4G p < 0.01
F3G p < 0.01 | P7A p < 0.01 | O1A p < 0.01 | O2A p = 0.16


Table 2. p values of the post hoc multiple comparison for the fuzzy rules.

T7A  p = 0.55  | AFT  p < 0.01 | P8T  p < 0.01 | T8TG p = 0.43  | FT   p < 0.01
FRT  p < 0.01  | AF4B p < 0.01 | F8B  p < 0.01 | FB   p < 0.01  | F7B  p < 0.01
T8B  p = 0.15  | FRRT p = 0.30 | P8A  p < 0.01 | TPLT p = 0.50  | TB   p < 0.01
F3B  p < 0.01  | F4B  p < 0.01 | P7B  p < 0.01 | T8G  p = 0.265 | P8G  p < 0.01
TPRT p = 0.77  | F7G  p < 0.05 | F8G  p = 0.69 | O1G  p = 0.33  | O2G  p < 0.01
OT   p = 0.42  | O2T  p = 0.33 | P8B  p < 0.01 | AF3T p < 0.01  | AF4T p < 0.01
F3T  p = 0.34  | F4T  p < 0.01 | F8T  p = 0.22 | T7T  p = 0.86  | T8T  p = 0.75
T8A  p < 0.01  | TOGT p = 0.46 | RT   p < 0.01 | T7G  p = 0.52  | AF4G p < 0.01
F3G  p = 0.38  | P7A  p = 0.92 | O1A  p = 0.61 | O2A  p = 0.95

We did not discover main frontal θ bursts, which is not in contradiction with Caplan et al. [89], who reported that the dominant frequency during virtual maze learning occurred within the θ band. We explain this with the findings in [72] that θ increases at the start of encoding in WM and does not decrease until the end of a trial. To detect such sustained activity, we would need to evaluate the first derivative as well, not only the second one. Meanwhile, we found θ bursts in the occipital and temporoparietal regions (see the p values for TPRT, T7T, and O2T). The parietal θ showed significant differences between the left and right lobes across conditions. Also, the ERS” for the parietal θ was higher in the FCMs condition than in VST (Table 2: p values for P8T). Figure 10 illustrates that almost all θ bursts increased in ERS” and that the electrode AF3 shows the highest burst. This is in line with the results in [63, 72] associating θ rhythms with the maintenance of stored information during the “retention” process.

α increased in the temporoparietal sites and showed clear lateralization, with a higher ERS” in the parietal regions of the right hemisphere. This can be seen in Tables 1 and 2 (p values for P8A) and is in line with the results in [90] reporting the functional significance of the EEG α power increases observed in various memory and internal attention tasks.

After analyzing whether the audio stimuli evoked a high ERS”, we confirmed the neuroscience findings in [91] that the low γ responses (40 Hz) were evident 80–120 msec after an amplitude-modulated tone and were localized over the lateral right temporal region. We also observed a high ERS” in other frequency bands, as well as synchronization in the temporal regions. According to our results, the electrodes contributing to hearing are distributed along the posterior right (T8, P8, and O2) and posterior left (T7 and P7) scalp sites.

5.2. System Performance

We designed a BCI system and translated the published neuroexpertise that correlates with sensory-evoked and event-related cognitive tasks in visuospatial navigation into 44 interpretable fuzzy rules. After averaging the data from the experimental sessions across all participants, the data was ready to use for post hoc interpretation of the results by different statistical or ML models developed in MATLAB.

The MATLAB scripts of how to average the data across all participants and how to perform the multiple comparison statistics can be seen in [92]. Based on the post hoc statistical analysis in MATLAB, we determined the candidate antecedents of significance.
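The published MATLAB scripts are not reproduced here, but the per-rule averaging across participants and the Bonferroni decision they implement can be sketched in a few lines (a stdlib Python sketch with a hypothetical data layout, not the authors' script):

```python
def average_across_participants(sessions):
    """Average each rule's ERS'' value over all participants.
    `sessions` is a list of per-participant dicts {rule_name: value}."""
    n = len(sessions)
    return {rule: sum(s[rule] for s in sessions) / n for rule in sessions[0]}

def bonferroni_reject(p_values, alpha=0.01):
    """Bonferroni correction: each of the m comparisons is tested
    against alpha / m, so the family-wise error rate stays below alpha."""
    m = len(p_values)
    return {rule: p < alpha / m for rule, p in p_values.items()}
```

With this decision rule, a fuzzy rule becomes a candidate antecedent of significance only if its p value survives division of the 1% level by the number of simultaneous comparisons.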

We tested in MATLAB different fuzzy membership functions and the feasibility of several fuzzy, neurofuzzy, and fuzzy clustering approaches for post hoc evaluation of the neuronal activity and connectivity during top-down visuospatial selective attention. We developed in MATLAB the fuzzy rule-based classifier by RGRC (described in Subsection 4.8.3) and used the membership functions from [54]. They are built upon two Gaussian curves defined by the positions of the peaks and the standard deviations. The Sugeno-type Adaptive Neurofuzzy Inference System (ANFIS) embedded in MATLAB [93] was tested for training the membership function parameters. ANFIS combines the least-squares and backpropagation gradient descent methods. We considered the fuzzy C-means (FCM) clustering embedded in MATLAB [94], as well.
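The two-sided Gaussian membership shape from [54] behaves like MATLAB's gauss2mf: one Gaussian curve below the left peak, the other above the right peak, and a plateau of 1 between them. A minimal Python equivalent (the parameter values in any call are hypothetical):

```python
import math

def gauss2mf(x, sigma1, c1, sigma2, c2):
    """Membership built from two Gaussian curves with peaks c1 <= c2:
    left Gaussian below c1, right Gaussian above c2, plateau of 1 between."""
    if x < c1:
        return math.exp(-((x - c1) ** 2) / (2 * sigma1 ** 2))
    if x > c2:
        return math.exp(-((x - c2) ** 2) / (2 * sigma2 ** 2))
    return 1.0
```

The plateau lets a linguistic value such as “high” cover a whole range of ERS” values with full membership, while the two independent widths control how quickly membership decays on each side.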

We trained these three fuzzy classifiers with the averaged numerical data from the experiments. The observations are as follows: (1) an interval type-2 fuzzy membership function cannot be applied; (2) the ANFIS output did not match the training data even after increasing the number of membership functions to 5 and the training epochs to 40 (Figure 12); (3) the RGRC showed better accuracy and was the most suitable for discriminating the bursts in the oscillatory rhythms. The results obtained by RGRC (Figure 13(b)) can be compared with the FCM clustering in Figure 13(a). The classification accuracy of the models probably depends on the properties of the dataset: ANFIS (Figure 12) and FCM clustering (Figure 13(a)) treated the sparse data as outliers, while the metaheuristic RGRC approach (Figure 13(b)) evaluated and weighted the sparse data as specific information about functional connectivity.
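The FCM behaviour noted above, soft memberships that get diluted for points far from every cluster centre, comes down to the standard FCM membership update. A one-dimensional sketch, assuming the usual fuzzifier m = 2 and given centres (not the MATLAB implementation):

```python
def fcm_memberships(points, centers, m=2.0):
    """One membership-update step of fuzzy C-means (Bezdek) for 1-D data:
    u_ik = 1 / sum_j (d_ik / d_jk) ** (2 / (m - 1)).
    Memberships in each row sum to 1; points far from every centre get
    diluted membership, which is why FCM treats sparse data like outliers."""
    rows = []
    for x in points:
        d = [abs(x - c) + 1e-12 for c in centers]  # avoid division by zero
        row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                         for j in range(len(centers)))
               for i in range(len(centers))]
        rows.append(row)
    return rows
```

A point lying on a centre receives membership near 1 for that cluster, while a point midway between two centres is split evenly between them.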

In the future, the principles of transductive reasoning [95] will be tested in MATLAB by analogy with the algorithm proposed in [96]. It generates a local model at a single point of the workspace: for each new datum to be processed, the closest examples are selected from the known data. The main idea is to assign more importance (weight) to the specific information related to the data to be processed than to the general information provided by the entire training set [96].
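A minimal sketch of that transductive idea, assuming 1-D inputs and inverse-distance weighting; the actual method in [96] builds a neuro-fuzzy local model, so this toy version only illustrates the weighting principle:

```python
def transductive_predict(x, examples, k=3):
    """Predict y at query x from the k nearest known examples only,
    weighting each example inversely by its distance to x, so specific
    nearby information dominates the general training set."""
    nearest = sorted(examples, key=lambda e: abs(e[0] - x))[:k]
    weights = [1.0 / (abs(xi - x) + 1e-9) for xi, _ in nearest]
    return sum(w * y for w, (xi, y) in zip(weights, nearest)) / sum(weights)
```

Because the model is rebuilt around each query point, an example coinciding with the query dominates the prediction entirely, which is the behaviour a local model is meant to have.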

Through the experimental results, we confirmed the published neuroscience findings and provided causal evidence that top-down visuospatial attention is mirrored in the oscillatory rhythms and that the θ and γ rhythms have distinct functional roles. The detected time-locked presence of the ERS” at source and scalp level can be used as a general metric for interpreting the spatiotemporal dynamics of passive or evoked oscillatory rhythms. These results can be used for real-time attentional state classification in navigational tasks or for neurofeedback training of top-down visuospatial attention.

6. Conclusion

A new brain state decoding is proposed that can be used as a feasible metric for interpreting the spatiotemporal dynamics of evoked neurooscillations. This new metric is exploited in the proposed BCI fuzzy shell for developing either passive or event-related BCIs, which can be used for control, monitoring, or research. The designed BCI works in the IoT, in real time, and is device and service independent. The feasibility of the proposed BCI fuzzy shell was proven by real experiments, from which we observed that θ and γ bursts can be detected in real time, and we strongly believe in the reproducibility and ubiquity of the newly proposed features that rate the increase of the evoked synchronization and desynchronization of brain rhythms at scalp level in response to time. The proposed BCIFS is intended to support a wide range of EEG-based BCI applications, and neuroscientists need few skills in MATLAB or other programming languages to carry out their brain computations with it. Furthermore, the proposed software can be used for performing EEG experiments remotely, which is rather valuable nowadays.

Data Availability

The data used to support the findings of this study are available (in anonymized form) upon request submitted to Anna Lekova (a.lekova@ir.bas.bg).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has received funding from the National Scientific Research Fund, Project under Grant DH17/10, and from the H2020 Project CybSPEED under Grant 777720.

References

  1. G. Vilone and L. Longo, “Explainable artificial intelligence: a systematic review,” CoRR, vol. abs/2006.00093, 2020. View at: Google Scholar
  2. C. Kothe and S. Makeig, “BCILAB: a platform for brain computer interface development,” Journal of Neural Engineering, vol. 10, no. 5, 2013. View at: Publisher Site | Google Scholar
  3. A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004. View at: Publisher Site | Google Scholar
  4. Y. Renard, F. Lotte, G. Gibert et al., “OpenViBE: an open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments,” Presence: Teleoperators and Virtual Environments, vol. 19, no. 1, pp. 35–53, 2010. View at: Publisher Site | Google Scholar
  5. G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw, “BCI2000: a general-purpose brain-computer interface (BCI) system,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1034–1043, 2004. View at: Publisher Site | Google Scholar
  6. Neuromore EEG streaming and processing studio.
  7. A. Delorme, T. Mullen, C. Kothe et al., “EEGLAB, SIFT, NFT, BCILAB, ERICA-new tools for advanced EEG processing,” IEEE Transactions on Biomedical Engineering, vol. 23, 2011. View at: Google Scholar
  8. A. Stuart, J. Ord, and S. Arnold, “Kendall’s advanced theory of statistics,” Classical Inference and the Linear Model, vol. 2A, 1999. View at: Google Scholar
  9. C. Mencar, C. Castiello, R. Cannone, and A. M. Fanelli, “Interpretability assessment of fuzzy knowledge bases: a cointension based approach,” International Journal of Approximate Reasoning, vol. 52, no. 4, pp. 501–518, 2011. View at: Publisher Site | Google Scholar
  10. X. Gu, Z. Cao, A. Jolfaei et al., “EEG-based brain-computer interfaces (BCIs): a survey of recent studies on signal sensing technologies and computational intelligence approaches and their applications,” 2020. View at: Google Scholar
  11. S. M. Alarcão and M. J. Fonseca, “Emotions recognition using EEG signals: a survey,” IEEE Transactions on Affective Computing, vol. 10, no. 3, pp. 374–393, 2019. View at: Publisher Site | Google Scholar
  12. P. J. Bota, C. Wang, A. L. N. Fred, and H. Placido Da Silva, “A review, current challenges, and future possibilities on emotion recognition using machine learning and physiological signals,” IEEE Access, vol. 7, pp. 140990–141020, 2019. View at: Publisher Site | Google Scholar
  13. D. Garrett, D. A. Peterson, C. W. Anderson, and M. H. Thaut, “Comparison of linear, nonlinear, and feature selection methods for EEG signal classification,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 141–144, 2003. View at: Publisher Site | Google Scholar
  14. M. Rashid, N. Sulaiman, A. Abdul Majeed et al., “Current status, challenges, and possible solutions of EEG-based brain-computer interface: a comprehensive review,” Frontiers in Neurorobotics, vol. 23, pp. 14–25, 2020. View at: Google Scholar
  15. S. Luck and J. Steven, An Introduction to the Event-Related Potential Technique, The MIT Press, London, UK, 2005.
  16. S. Chiappa and S. Bengio, “HMM and IOHMM modeling of EEG rhythms for asynchronous BCI systems,” 2004. View at: Google Scholar
  17. G. Pfurtscheller, D. Flotzinger, and J. Kalcher, “Brain-Computer Interface-a new communication device for handicapped persons,” Journal of Microcomputer Applications, vol. 16, no. 3, pp. 293–299, 1993. View at: Publisher Site | Google Scholar
  18. G. Pfurtscheller and F. Lopes da Silva, Functional Brain Imaging, Hans Huber Publishers, London, UK, 1988.
  19. G. Pfurtscheller, A. Stancak, and C. Neuper, “Event-related synchronization (ERS) in the alpha band—an electrophysiological correlate of cortical idling: a review,” International Journal of Psychophysiology, vol. 24, no. 1-2, pp. 39–46, 1996. View at: Publisher Site | Google Scholar
  20. G. Pfurtscheller, “Functional brain imaging based on ERD/ERS,” Vision Research, vol. 41, no. 10-11, pp. 1257–1260, 2001. View at: Publisher Site | Google Scholar
  21. E. Başar, “Brain oscillations in neuropsychiatric disease,” Dialogues Clin Neuroscience, vol. 15, pp. 291–300, 2013. View at: Google Scholar
  22. W. Klimesch, M. Doppelmayr, H. Russegger, and T. Pachinger, “Theta band power in the human scalp EEG and the encoding of new information,” NeuroReport, vol. 7, no. 7, pp. 1235–1240, 1996. View at: Publisher Site | Google Scholar
  23. K. Stephan and K. Friston, “Functional connectivity,” Academic Press, vol. 2009, 2009. View at: Google Scholar
  24. S.-B. Lee, H.-J. Kim, H. Kim, J.-H. Jeong, S.-W. Lee, and D.-J. Kim, “Comparative analysis of features extracted from EEG spatial, spectral and temporal domains for binary and multiclass motor imagery classification,” Information Sciences, vol. 502, pp. 190–200, 2019. View at: Publisher Site | Google Scholar
  25. T. Wen and Z. Zhang, “Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multi classification,” Medicine, vol. 96, no. 19, 2017. View at: Publisher Site | Google Scholar
  26. C. Han, G. Xu, J. Xie et al., “Highly interactive brain-computer interface based on flicker-free steady-state motion visual evoked potential,” Scientific Reports, vol. 8, 2018. View at: Publisher Site | Google Scholar
  27. R. Zerafa, T. Camillieri, O. Falzon et al., “To train or not to train? A survey on training of feature extraction methods for SSVEP-based BCIs,” Journal Neural Engineering, vol. 15, 2018. View at: Publisher Site | Google Scholar
  28. J. Jin, S. Li, I. Daly et al., “The study of generic model set for reducing calibration time in P300-based brain-computer interface,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 1, pp. 3–12, 2020. View at: Publisher Site | Google Scholar
  29. K. Zhang, G. Xu, X. Zheng et al., “Application of transfer learning in EEG decoding based on brain-computer interfaces: a review,” Sensors, vol. 20, 2020. View at: Publisher Site | Google Scholar
  30. B. Blankertz, G. Curio, and K. Müller, “Classifying single trial EEG: towards brain computer interfacing,” Advances in Neural Information Processing Systems, vol. 1, pp. 157–164, 2002. View at: Google Scholar
  31. C. Lytridis, A. Lekova, C. Bazinas, M. Manios, and V. G. Kaburlasos, “WINkNN: windowed intervals’ number kNN classifier for efficient time-series applications,” Mathematics, vol. 8, no. 3, p. 413, 2020. View at: Publisher Site | Google Scholar
  32. B. Myroniv, C. Wu, Y. Reny et al., “Analyzing user emotions via physiology signals,” Data Science Pattern Recognition, vol. 1, no. 2, pp. 11–25, 2017. View at: Google Scholar
  33. F. Lee, R. Scherer, R. Leeb et al., “A comparative analysis of multi-class EEG classification for brain computer interface,” 2005. View at: Google Scholar
  34. N. Zhuang, Y. Zeng, L. Tong, C. Zhang, H. Zhang, and B. Yan, “Emotion recognition from EEG signals using multidimensional information in EMD domain,” BioMed Research International, vol. 2017, pp. 1–9, 2017. View at: Publisher Site | Google Scholar
  35. C. He, Y.-J. Yao, and X.-S. Ye, “An emotion recognition system based on physiological signals obtained by wearable sensors,” Wearable Sensors and Robots, vol. 23, 2017. View at: Google Scholar
  36. F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, “A review of classification algorithms for EEG-based brain-computer interfaces,” Journal of Neural Engineering, vol. 4, no. 2, 2007. View at: Publisher Site | Google Scholar
  37. R. Fu, Y. Tian, T. Bao et al., “Improvement motor imagery EEG classification based on regularized linear discriminant analysis,” Journ. Med Syst., vol. 43, no. 6, 2019. View at: Publisher Site | Google Scholar
  38. M. Alom, T. Taha, C. Yakopcic et al., “A state-of-the-art survey on deep learning theory and architectures,” Electronics, vol. 8, 2019. View at: Publisher Site | Google Scholar
  39. Q. Abbas, M. E. A. Ibrahim, and M. A. Jaffar, “A comprehensive review of recent advances on deep vision systems,” Artificial Intelligence Review, vol. 52, no. 1, pp. 39–76, 2019. View at: Publisher Site | Google Scholar
  40. O. AlZoubi, I. Koprinska, and R. Calvo, “Classification of brain-computer interface data,” Australasian Data Mining Conference, vol. 87, pp. 123–131, 2008. View at: Google Scholar
  41. X. Gu and Z. Cao, “An interpretative fuzzy rule-based eeg classification system for discrimination of hand motor attempts in stroke patients,” Signal Processing, vol. 2020, 2020. View at: Google Scholar
  42. E. Vrochidou, C. Lytridis, C. Bazinas et al., “Fuzzy lattice reasoning for brain signal classification,” Journal of Universal Computer Science, vol. 26, no. 9, pp. 1175–1195, 2020. View at: Google Scholar
  43. S. Bhattacharyya, D. Basu, A. Konar, and D. N. Tibarewala, “Interval type-2 fuzzy logic based multiclass ANFIS algorithm for real-time EEG based movement control of a robot arm,” Robotics and Autonomous Systems, vol. 68, pp. 104–115, 2015. View at: Publisher Site | Google Scholar
  44. T. Nguyen, I. Hettiarachchi, A. Khosravi et al., “Multiclass EEG data classification using fuzzy systems,” 2017. View at: Google Scholar
  45. A. Das, S. Suresh, and N. Sundararajan, “A fully tuned sequential interval type-2 fuzzy inference system for motorimagery task classification,” 2016. View at: Google Scholar
  46. R. Sutton and A. Barto, “Reinforcement learning: an introduction,” 1998. View at: Google Scholar
  47. P. Glimcher, “Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis,” Proceedings of the National Academy of Sciences of the United States of America, vol. 108, no. 3, pp. 15647–15654, 2011. View at: Publisher Site | Google Scholar
  48. I. Iturrate, L. Montesano, and J. Minguez, “Robot reinforcement learning using EEG-based reward signals,” 2010. View at: Google Scholar
  49. Y. Ming, D. Wu, Y. Wang, Y. Shi, and C. Lin, “EEG-based drowsiness estimation for driving safety using deep Q-learning,” 2020. View at: Google Scholar
  50. L. Kuncheva, Fuzzy Classifier Design, Springer-Verlag, Heidelberg, Germany, 2000.
  51. H. Ishibuchi and T. Nakashima, “Effect of rule weights in fuzzy rule-based classification systems,” IEEE Transactions on Fuzzy Systems, vol. 9, no. 4, pp. 506–515, 2001. View at: Publisher Site | Google Scholar
  52. F. Lotte, “The use of fuzzy inference systems for classification in EEG-Based BCI,” 2006. View at: Google Scholar
  53. T. Tsai, L. Kau, and K. Chao, “A Takagi-Sugeno fuzzy neural network-based algorithm with single-channel EEG signal for the discrimination between light and deep sleep stages,” In IEEE Biomedical Circuits and Systems Conference, vol. 23, pp. 532–535, 2016. View at: Google Scholar
  54. A. Riid and J.-S. Preden, “Design of fuzzy rule-based classifiers through granulation and consolidation,” Journal of Artificial Intelligence and Soft Computing Research, vol. 7, no. 2, pp. 137–147, 2017. View at: Publisher Site | Google Scholar
  55. T. Takagi and M. Sugeno, “Fuzzy identification of systems and its applications to modeling and control,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 15, no. 1, pp. 116–132, 1985. View at: Publisher Site | Google Scholar
  56. C. Berka, “Real-time analysis of EEG indexes of alertness, cognition, and memory acquired with a wireless EEG headset,” International Journal of Human-Computer Interaction, vol. 17, 2020. View at: Publisher Site | Google Scholar
  57. R. Ramirez and Z. Vamvakousis, “Detecting emotion from EEG signals using the emotive epoc device,” in Brain Informatics, Lecture Notes in Computer Science, F. M. Zanzotto, S. Tsumoto, N. Taatgen, and Y. Yao, Eds., vol. 7670, Springer, Berlin, Heidelberg, 2012. View at: Google Scholar
  58. IBM NodeRED Flow-based programming for the Internet of Things.
  59. A. B. Chica, P. Bartolomeo, and J. Lupiáñez, “Two cognitive and neural systems for endogenous and exogenous spatial attention,” Behavioural Brain Research, vol. 237, pp. 107–123, 2013. View at: Publisher Site | Google Scholar
  60. M. Corbetta and G. L. Shulman, “Control of goal-directed and stimulus-driven attention in the brain,” Nature Reviews Neuroscience, vol. 3, no. 3, pp. 201–215, 2002. View at: Publisher Site | Google Scholar
  61. S. Banerjee, S. Grover, and D. Sridharan, “Unraveling causal mechanisms of top-down and bottom-up visuospatial attention with non-invasive brain stimulation,” Journal of the Indian Institute of Science, vol. 97, no. 4, pp. 451–475, 2017. View at: Publisher Site | Google Scholar
  62. Y. Lee and S. Hsieh, “Classifying different emotional states by means of EEG-based functional connectivity patterns,” PLoS ONE, vol. 9, no. 4, 2014. View at: Publisher Site | Google Scholar
  63. E. Niedermeyer and F. Lopes da Silva, Electroencephalography: Basic Principles, Clinical Applications, and Related Fields, Williams & Wilkins, Baltimore, MD, USA, 1993.
  64. T. W. Picton, M. S. John, A. Dimitrijevic, and D. Purcell, “Human auditory steady-state responses: respuestas auditivas de estado estable en humanos,” International Journal of Audiology, vol. 42, no. 4, pp. 177–219, 2003. View at: Publisher Site | Google Scholar
  65. E. Guzman-Martinez, M. Grabowecky, G. Palafox et al., “A unique role of endogenous visual-spatial attention in rapid processing of multiple targets,” J. of Experimental Psychology: Human Perception and Performance, vol. 27, no. 1, pp. 1–15, 2012. View at: Google Scholar
  66. T. J. Buschman and E. K. Miller, “Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices,” Science, vol. 315, no. 5820, pp. 1860–1862, 2007. View at: Publisher Site | Google Scholar
  67. F. Katsuki and C. Constantinidis, “Bottom-up and top-down attention,” The Neuroscientist, vol. 20, no. 5, pp. 509–521, 2014. View at: Publisher Site | Google Scholar
  68. J. Riddle, K. Hwang, D. Cellier, S. Dhanani, and M. D’Esposito, “Causal evidence for the role of neuronal oscillations in top-down and bottom-up attention,” Journal of Cognitive Neuroscience, vol. 31, no. 5, pp. 768–779, 2019. View at: Publisher Site | Google Scholar
  69. X. Hong, J. Sun, and S. Tong, “Functional brain networks for sensory maintenance in top-down selective attention to audiovisual inputs,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 5, 2013. View at: Google Scholar
  70. O. Jensen, J. Gelfand, J. Kounios et al., “Oscillations in the Alpha band (9-12 Hz) increase with memory load during retention in a short-term memory task,” Cerebral Cortex, vol. 12, no. 8, pp. 877–882, 2002. View at: Publisher Site | Google Scholar
  71. A. Gevins and M. E. Smith, “Neurophysiological measures of cognitive workload during human-computer interaction,” Theoretical Issues in Ergonomics Science, vol. 4, no. 1-2, pp. 113–131, 2003. View at: Publisher Site | Google Scholar
  72. S. Raghavachari, J. E. Lisman, M. Tully, J. R. Madsen, E. B. Bromfield, and M. J. Kahana, “Theta oscillations in human cortex during a working-memory task: evidence for local generators,” Journal of Neurophysiology, vol. 95, no. 3, pp. 1630–1638, 2006. View at: Publisher Site | Google Scholar
  73. P. Fries, J.-H. Schröder, P. R. Roelfsema, W. Singer, and A. K. Engel, “Oscillatory neuronal synchronization in primary visual cortex as a correlate of stimulus selection,” The Journal of Neuroscience, vol. 22, no. 9, pp. 3739–3754, 2002. View at: Publisher Site | Google Scholar
  74. S. Sandkühler and J. Bhattacharya, “Deconstructing insight: EEG correlates of insightful problem solving,” PLoS ONE, vol. 3, no. 1, article e1459, 2008. View at: Google Scholar
  75. E. C. Tolman, “Cognitive maps in rats and men,” Psychological Review, vol. 55, no. 4, pp. 189–208, 1948. View at: Publisher Site | Google Scholar
  76. Emotiv EPOC, “The most credible and cost-effective mobile EEG Brainwear device,” 2020. View at: Google Scholar
  77. EmotivBCI Node-RED Toolbox.
  78. Fuzzy shell for developing a custom EEG BCI, https://flows.nodered.org/collection/-bU3rdvsYIpL.
  79. G. Karvat, A. Schneider, M. Alyahyay et al., “Real-time detection of neural oscillation bursts allows behaviourally relevant neurofeedback,” Communication in Biology, vol. 3, 2020. View at: Publisher Site | Google Scholar
  80. M. F. Carr, M. P. Karlsson, and L. M. Frank, “Transient slow gamma synchrony underlies hippocampal memory replay,” Neuron, vol. 75, no. 4, pp. 700–713, 2012. View at: Publisher Site | Google Scholar
  81. R. H. Guerra, R. Quiza, A. Villalonga, J. Arenas, and F. Castano, “Digital twin-based optimization for ultraprecision motion systems with backlash and friction,” IEEE Access, vol. 7, pp. 93462–93472, 2019. View at: Publisher Site | Google Scholar
  82. B. Alger, “Scientific hypothesis-testing strengthens neuroscience research,” Eneuro, vol. 8, no. 7, 2020. View at: Google Scholar
  83. L. Battaglini, A. Ghiani, C. Casco, and L. Ronconi, “Parietal tACS at beta frequency improves vision in a crowding regime,” Neuroimage, vol. 208, 2020. View at: Publisher Site | Google Scholar
  84. C. S. Herrmann and A. Mecklinger, “Gamma activity in human EEG is related to highspeed memory comparisons during object selective attention,” Visual Cognition, vol. 8, no. 3–5, pp. 593–608, 2001. View at: Publisher Site | Google Scholar
  85. M. W. Howard, “Gamma oscillations correlate with working memory load in humans,” Cerebral Cortex, vol. 13, no. 12, pp. 1369–1374, 2003. View at: Publisher Site | Google Scholar
  86. D. J. White, M. Congedo, J. Ciorciari, and R. B. Silberstein, “Brain oscillatory activity during spatial navigation: theta and gamma activity link medial temporal and parietal regions,” Journal of Cognitive Neuroscience, vol. 24, no. 3, pp. 686–697, 2012. View at: Publisher Site | Google Scholar
  87. J. A. Meltzer, H. P. Zaveri, I. I. Goncharova et al., “Effects of working memory load on oscillatory power in human intracranial EEG,” Cerebral Cortex, vol. 18, no. 8, pp. 1843–1855, 2008. View at: Publisher Site | Google Scholar
  88. J. E. Lisman and O. Jensen, “The theta-gamma neural code,” Neuron, vol. 77, no. 6, pp. 1002–1016, 2013. View at: Publisher Site | Google Scholar
  89. J. B. Caplan, J. R. Madsen, and S. Raghavachari, “Distinct patterns of brain oscillations underlie two basic parameters of human maze learning,” Journal of Neurophysiology, vol. 86, no. 1, pp. 368–380, 2001. View at: Publisher Site | Google Scholar
  90. M. Benedek, R. J. Schickel, E. Jauk, A. Fink, and A. C. Neubauer, “Alpha power increases in right parietal cortex reflects focused internal attention,” Neuropsychologia, vol. 56, no. 100, pp. 393–400, 2014. View at: Publisher Site | Google Scholar
  91. M. C. Cervenka, S. Nagle, and D. Boatman-Reich, “Cortical high-gamma responses in auditory processing,” American Journal of Audiology, vol. 20, no. 2, pp. 171–180, 2011. View at: Publisher Site | Google Scholar
  92. ANOVA MATLAB script for BCIFS, http://alekova.aabg.eu/index.php?option=com_content&view=article&id=8.
  93. J. Jang, “ANFIS: adaptive-network-based fuzzy inference system,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 665–685, 1993. View at: Publisher Site | Google Scholar
  94. J. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, NY, USA, 1981.
  95. Q. Song and N. Kasabov, “A neuro-fuzzy inference method for transductive reasoning,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 6, pp. 799–808, 2005. View at: Google Scholar
  96. A. Gajate, R. E. Haber, P. I. Vega, and J. R. Alique, “A transductive neuro-fuzzy controller: application to a drilling process,” IEEE Transactions on Neural Networks, vol. 21, no. 7, pp. 1158–1167, 2010. View at: Publisher Site | Google Scholar

Copyright © 2021 Anna Lekova and Ivan Chavdarov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

