Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2020 | Article ID 7985010 | https://doi.org/10.1155/2020/7985010

Felix W. Gembler, Aya Rezeika, Mihaly Benda, Ivan Volosyak, "Five Shades of Grey: Exploring Quintary m-Sequences for More User-Friendly c-VEP-Based BCIs", Computational Intelligence and Neuroscience, vol. 2020, Article ID 7985010, 11 pages, 2020. https://doi.org/10.1155/2020/7985010

Five Shades of Grey: Exploring Quintary m-Sequences for More User-Friendly c-VEP-Based BCIs

Academic Editor: Fabio Solari
Received: 28 Oct 2019
Revised: 03 Feb 2020
Accepted: 04 Feb 2020
Published: 10 Mar 2020

Abstract

Responsive EEG-based communication systems have been implemented with brain-computer interfaces (BCIs) based on code-modulated visual evoked potentials (c-VEPs). The BCI targets are typically encoded with binary m-sequences because of their autocorrelation property; the digits one and zero correspond to different target colours (usually black and white), which are updated every frame according to the code. While binary flickering patterns enable high communication speeds, they are perceived as annoying by many users. Quintary (base 5) m-sequences, where the five digits correspond to different shades of grey, may yield a more subtle visual stimulation. This study explores two approaches to reduce the flickering sensation: (1) adjusting the flickering speed via refresh rates and (2) applying quintary codes. In this respect, six flickering modalities are tested using an eight-target spelling application: binary patterns and quintary patterns generated with 60, 120, and 240 Hz refresh rates. This study was conducted with 18 nondisabled participants. For all six flickering modalities, a copy-spelling task was conducted. According to questionnaire results, most users favoured the proposed quintary over the binary pattern while achieving similar performance to it (no statistical differences between the patterns were found). Mean accuracies across participants were above 95%, and information transfer rates were above 55 bits/min for all patterns and flickering speeds.

1. Introduction

Maximum length sequences (m-sequences) are special pseudorandom binary sequences that have been used in various research fields including encryption, signal recovery, and brain-computer interfaces (BCIs) [1–3].

A BCI is an interface between a user’s brain and a computer; it translates brain activity into commands, allowing the control of external devices without muscle activity [4]. The BCI paradigm based on code-modulated visual evoked potentials (c-VEPs) interprets the responses to rapidly flickering patterns corresponding to special code sequences [5–8]. Each c-VEP target is coded with an individual sequence, where bits are mapped to different contrasts. To encode targets on computer monitors, usually black and white patterns are used [9].

The brain responses to these patterns (the c-VEPs) can be recorded via electroencephalography (EEG). A typical c-VEP application is a communication tool, where a target letter fixated by the user is determined via template matching [10].

Although c-VEP spelling applications can achieve high communication speeds (around 20 error-free characters per minute [5]), some issues with regard to user friendliness need to be addressed.

A key aspect in terms of usability is the flickering speed. In general, the number of bit flips per second impacts the classification accuracy [11]. Numerous BCI studies investigated stimulus choice for the steady-state VEP (SSVEP) approach, where targets are coded with distinct frequencies [12, 13]. According to Herrmann [14], brain responses of up to 90 Hz can be recognized in EEG recordings. Low-frequency and medium-frequency sets between 6 and 30 Hz are predominantly used for spelling applications in SSVEP research [10, 15] as they elicit large SSVEP amplitudes.

However, BCI users may perceive low flickering speed as annoying and tiring [16, 17]. This also applies to flicker patterns based on m-sequences. Stimulus-induced fatigue reduces the applicability of these systems. Moreover, the low-frequency flicker patterns may trigger photosensitivity-based epileptic seizures [17].

Because of these problems, high-frequency BCI applications have been developed [13, 16]. For example, Chen et al. [13] implemented a 45-target SSVEP BCI speller using high-frequency stimuli (ranging from 35.6 to 44.4 Hz). The authors reported a promising average information transfer rate (ITR) of 61 bits/min. Armengol-Urpi and Sarma [18] integrated high-frequency stimuli (42, 43, 44, and 45 Hz) in a virtual reality menu navigation tool. The authors stated that users reported a satisfactory overall experience as the flickering did not cause annoyance. Even higher, imperceptible flickers around 60 Hz have also been tested: Sakurada et al. [16] used three LED stimuli (61, 63, and 65 Hz) and reported an average accuracy of 90% while eliminating visual fatigue. More recently, Jiang et al. [19] used four phase-shifted 60 Hz stimuli presented on a 240 Hz monitor.

For c-VEP BCIs, the flickering speed can be manipulated by changing the monitor refresh rate. When using standard 60 Hz monitors, the stimulus duration of a 63-bit m-sequence is 63/60 = 1.05 s, a time window that is reasonably fast while still sufficiently long for reliable classification. Higher refresh rates allow for higher flickering rates, which can potentially improve user friendliness. However, the target stimuli might be harder to distinguish from each other due to the shorter lag between consecutive targets. Previous research indicates that c-VEP stimuli generated with a 120 Hz refresh rate yield good performance [20–22], but with a 240 Hz setup, a performance drop has been observed [23]. In terms of bit flips per second, the m-sequence generated at 240 Hz is comparable to a 59 Hz SSVEP stimulus. Due to runs of up to 6 consecutive identical bits, the flickering pattern generated by the m-sequence is still visually perceivable.

Beside higher flickering rates, research on SSVEP-BCIs has found other methods to reduce discomfort induced by the flickering. For example, with the sinusoidal stimulus modulation method [24], which is realised by varying the luminance each frame, more subtle sine-shaped stimulus patterns can be realised. Recently, we compared the stimulus presentation paradigms SSVEP and c-VEP in terms of system performance and user friendliness [25]. While c-VEP slightly outperformed SSVEP in terms of offline accuracy, SSVEP was rated as the more user-friendly approach (thanks to the more subtle sinusoidal stimulus presentation).

Due to the binary stimulation pattern of the m-sequence, the visual stimuli switch between two colours (most commonly black and white). Other code patterns could offer a more subtle stimulation while maintaining good autocorrelation. Recently, Shirzhiyan et al. [26] employed chaotic codes generated from a one-dimensional logistic map. While there was no significant difference in the classification accuracies in comparison with conventional m-sequences, the chaotic code reduced subjective fatigue.

In this study, quintary (base 5) m-sequences are explored. Instead of switching between black and white, the flickering targets cycle through five different shades of grey. We compared the BCI performance of the conventional binary and the proposed quintary pattern with refresh rate setups of 60, 120, and 240 Hz. The six different code patterns were tested with 18 participants using an earlier-developed spelling application [27, 28] that allows for the selection of letters in two steps (see Figure 1).

2. Methods

In the following, the generation of the binary and quintary m-sequence patterns and the respective stimulus designs are explained. Following that, details about the signal classification, the spelling application, and the experimental protocol are provided.

2.1. Participants

Eighteen nondisabled participants were recruited for this experiment, eight females and ten males (average age 24.3 years, SD 2.8, ranging from 18 to 29). All of them had normal or corrected-to-normal vision. This research was approved by the Ethical Committee of the Medical Faculty of the University of Duisburg-Essen. Before the experiment, the participants were informed about the purpose, risks, and experimental protocol of the study. The participants gave informed consent in accordance with the Declaration of Helsinki and were informed that they could opt out of the study without providing reasons at any time. The information needed for the analysis of the experiments was stored anonymously. All participants received a financial reward for taking part in the experiment.

2.2. Hardware

Stimulus presentation and signal identification ran on the same computer, a Dell Precision 3630 Tower equipped with an NVIDIA GeForce GTX 1080 graphics card and an Intel Core i7-8700K processor (3.70 GHz), running Microsoft Windows 10 Education. The c-VEP targets were presented on a liquid crystal display screen (Acer Predator XB252Q, 1920 × 1080 pixels, 240 Hz refresh rate). For signal acquisition, an EEG amplifier (g.USBamp, Guger Technologies, Graz, Austria) was used, employing all of its 16 signal channels, which were placed according to the international 10/5 system of electrode placement (see, e.g., [29]): PZ, P3, P4, P5, P6, PO3, PO4, PO7, PO8, POO1, POO2, O1, O2, OZ, O9, and O10. The reference electrode was placed at CZ and the ground electrode at AFZ. Standard abrasive electrolytic electrode gel was applied between the electrodes and the scalp to reduce impedances during the preparation phase. A bandpass filter (between 2 and 100 Hz) and a notch filter (around 50 Hz) were applied. The sampling rate of the amplifier was set to 600 Hz.

2.3. Generation of m-Sequences

A maximal-length sequence (m-sequence) is a periodic sequence with a noise-like waveform that can be generated using a linear-feedback shift register (LFSR) [30, 31] (see Figure 2). LFSRs are special shift registers consisting of N memory cells (also called stages), labelled r_(N−1), …, r_1, r_0. The input digit stored in the leftmost cell, r_(N−1), is the value of a linear function f that performs modulo-p additions on a weighted subset of the register entries.

The memory stages of the LFSR are controlled by a timing clock. At each pulse of the clock, the state of each stage is shifted to the next stage: the entry in cell r_i is passed to cell r_(i−1), for i = 1, …, N − 1. The entry in stage r_0 (the rightmost register) determines the output of the LFSR. The sequence of output digits is called the output stream of the LFSR.

A p-ary code of length N can assume p^N values. However, the period of the code produced by the LFSR can have a maximal length of at most p^N − 1. In this case, the LFSR cycles through all states except the one where all digits are zeros. If all digits were zeros, the register would never leave this state; such a sequence could not be used as a code sequence for stimuli, as there would be no state changes and thus no brain response evoked by the stimuli. An output stream of maximal length is an m-sequence.

In Figure 2, a generic LFSR is displayed. The register positions that influence the next state (those with nonzero weights c_i) are called taps. The combination of the register pins can also be expressed in finite field arithmetic as a modulo-p polynomial, which is referred to as a generator polynomial or feedback polynomial:

f(x) = x^N − (c_1 x^(N−1) + c_2 x^(N−2) + ⋯ + c_N) (mod p),

where the coefficients c_i correspond to the weights of the register pins, i = 1, …, N; equivalently, the register implements the recurrence a_k = c_1 a_(k−1) + c_2 a_(k−2) + ⋯ + c_N a_(k−N) (mod p).

An LFSR must be initialised with a so-called seed which describes the N initial digits of the register cells. The seed and the generator polynomial uniquely determine the resulting sequence.

If the LFSR is represented by a primitive polynomial and initiated with a nonzero seed, it will generate an m-sequence [32].

The binary m-sequence used in the experiment was determined by a primitive generator polynomial of degree 6 over GF(2) and a nonzero seed. The quintary m-sequence used in the experiment was determined analogously by a primitive generator polynomial over GF(5) and a nonzero seed. The period length, and thus the length of the m-sequence, was 2^6 − 1 = 63 digits for the binary pattern; for the quintary pattern, the period length follows analogously as 5^N − 1.
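The LFSR generation of m-sequences described above can be sketched in a few lines of Python. The generator polynomials and seeds below (x^6 + x + 1 over GF(2) and x^2 + x + 2 over GF(5)) are illustrative primitive polynomials chosen for brevity, not necessarily the ones used in the study:

```python
# Sketch of p-ary m-sequence generation via an LFSR recurrence.
# The polynomials/seeds are illustrative, not the study's actual parameters.

def m_sequence(coeffs, seed, p):
    """One period of a p-ary m-sequence.

    Iterates a_k = c_1*a_{k-1} + ... + c_N*a_{k-N} (mod p) until one full
    period of p**N - 1 digits is produced. `coeffs` must come from a
    primitive polynomial and `seed` must be nonzero.
    """
    N = len(coeffs)
    a = list(seed)
    for k in range(N, p**N - 1):
        a.append(sum(c * a[k - i] for i, c in enumerate(coeffs, start=1)) % p)
    return a

# Binary: x^6 + x + 1 is primitive over GF(2), i.e. a_k = a_{k-5} + a_{k-6}.
binary_seq = m_sequence([0, 0, 0, 0, 1, 1], [1, 0, 0, 0, 0, 0], p=2)

# Quintary: x^2 + x + 2 is primitive over GF(5), i.e. a_k = 4*a_{k-1} + 3*a_{k-2}.
quintary_seq = m_sequence([4, 3], [1, 0], p=5)
```

For a primitive polynomial of degree N over GF(p) and any nonzero seed, the generated period is p^N − 1 (63 digits in the binary example, 24 in the quintary one), and each nonzero digit occurs exactly p^(N−1) times per period.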

The m-sequences have a number of desirable properties (see, e.g., [33]). For BCIs, the most interesting feature is the autocorrelation function. For the binary sequence, a single peak at a lag of 0 can be observed: the autocorrelation is equal to 1 at lag 0, and the correlation coefficient is −1/n at every other lag, where n refers to the period length of the sequence (here n = 63 for the binary m-sequence). It should be noted that the quintary m-sequence has two phase shifts for which the sequences are anticorrelated (see also [3]). These shifts are avoided in the implementation of the BCI. The autocorrelation functions of the binary and quintary m-sequences used in the experiment are displayed in Figure 3.

2.4. Stimulus Design

To test the code sequences in an online spelling scenario, we implemented them into a spelling application with eight targets arranged as a stimulus matrix (see Figure 1). Each target corresponded to one of the code sequences.

For the binary flickering paradigm, the first code, c_1, was generated as described in the previous section, and the remaining codes c_2, …, c_8 were generated by employing left circular shifts of c_1. For the quintary flickering paradigm, the eight codes were generated analogously.

The flickering patterns were modulated utilising alpha blending [34]. Alpha blending allows for transparency effects in computer graphics by applying a convex combination of two colours (a translucent foreground colour and a background colour). Using alpha blending, the translucent foreground colour of the stimulus (here white) was combined with the background colour (here black), yielding a blended colour (here different shades of grey). The degree of translucency, α, ranges from 0.0 to 1.0. When the foreground colour is completely transparent (i.e., α = 0.0), the combined colour is the background colour (here black). On the contrary, if the foreground colour is completely opaque (i.e., α = 1.0), the combined colour is the foreground colour (here white).

The degree of translucency of the stimuli was updated every frame; the values for α were derived from the code pattern. In case of the binary m-sequences, α was set to 0 or 1 in accord with the binary code sequence yielding a black and white pattern. For the quintary m-sequences, the quintary digits 0, 1, 2, 3, and 4 were mapped to the corresponding α-values 0, 0.25, 0.5, 0.75, and 1, yielding a pattern that goes through five shades of grey.
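As a minimal sketch of this digit-to-grey mapping (the 8-bit luminance values are illustrative, not taken from the study):

```python
# Alpha blending of a white foreground over a black background: a code digit
# selects the translucency alpha, which selects the grey level.
# Luminance values 0..255 are an illustrative assumption.

def blend(alpha, fg=255, bg=0):
    """Convex combination of foreground and background luminance."""
    return round(alpha * fg + (1 - alpha) * bg)

def digit_to_luminance(d, base=5):
    """Map a code digit 0..base-1 to alpha 0.0..1.0, then blend."""
    return blend(d / (base - 1))

grey_levels = [digit_to_luminance(d) for d in range(5)]
# five shades: black, dark grey, mid grey, light grey, white
```

With five digits, the blended luminances are evenly spaced between black and white, which is what produces the five shades of grey; the binary pattern is the special case where only α = 0.0 and α = 1.0 occur.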

The update rate, and therefore the speed of the flickering pattern, depends on the monitor refresh rate. At high vertical refresh rates, a more subtle visual stimulation can be achieved. Here, for both code patterns, update rates of 60, 120, and 240 Hz were tested; the stimulus colour was updated approximately every 16.7, 8.3, and 4.2 ms, respectively.

2.5. Experimental Protocol

Each participant took part in six sessions using the two flickering patterns at three different update rates, 60 Hz, 120 Hz, and 240 Hz. The order was binary at 60 Hz, quintary at 60 Hz, binary at 120 Hz, quintary at 120 Hz, and binary at 240 Hz, quintary at 240 Hz, for half of the participants, and quintary at 60 Hz, binary at 60 Hz, quintary at 120 Hz, binary at 120 Hz, and quintary at 240 Hz, binary at 240 Hz, for the other half. In between sessions, participants took a small break. Each session consisted of a training and a copy-spelling phase. Participants sat in a comfortable chair approximately 70 cm away from the screen which presented the 8-target interface (arranged as a matrix, see Figure 1), showing numbers 1–8 in the training phase and a letter grid in the copy-spelling phase.

For the generation of c-VEP templates, labelled responses for every stimulus were recorded in the training phase, where all eight targets were presented simultaneously to the user. For each of the eight targets, several trials were recorded; in this respect, the training phase was grouped into blocks. For the binary pattern, each trial lasted 2.1 s; the stimulation cycle repeated 2, 4, and 8 times for the 60, 120, and 240 Hz setups, respectively. Analogously, for the quintary pattern, the stimulation cycle repeated 1, 2, and 4 times for the 60, 120, and 240 Hz setups. The different flickering patterns are illustrated in Figure 4. The trials were stored as m × n matrices, where m denotes the number of recording EEG electrodes (here m = 16) and n denotes the number of sample points per trial (the trial duration multiplied by the 600 Hz sampling rate, e.g., 1260 samples for the 2.1 s binary trials).

The box at which the user needed to gaze was outlined by a green frame. The boxes were highlighted in sequence (from the upper left to the lower right). After each trial, the flickering paused for 1 s. After each block, the user could rest for a longer time, until he or she initiated the next recording block by pressing the space bar on the keyboard.

After each training phase, participants filled out a brief questionnaire. The subjective impressions of the flickering patterns were assessed with two 7-point Likert scales, one ranging from relaxing (1) to exhausting (7) and one from comfortable (1) to annoying (7); the points 2–6 were left unlabelled.

In the online session, a brief familiarisation run was conducted, where participants learned how to use the speller. Thereafter, a copy-spelling task was performed. Misclassifications needed to be corrected by gazing at the box representing the UNDO function. The copy-spelling task was to spell the word POWERFUL. In this phase, the gaze-shifting phase was 2 s, giving the participant enough time to identify the location of the next character. (During this gaze-shifting phase, the flickering and data recording paused). The entire experiment lasted approximately 1 h.

2.6. Signal Classification

A template-matching method using spatial filters generated via canonical correlation analysis (CCA) was used for online signal classification [27]. A filter bank design was used to further increase the discrimination of targets [35]. On the basis of the training data, templates were calculated by averaging over the target-specific trials. In addition, for each target, a CCA-based spatial filter was determined as described, e.g., in [28].

This was done for each of the filter banks; in this regard, M bandpass filters (described in the following) were applied to the recorded trials, resulting in spatial filter weights w_(i,j) and templates x̄_(i,j) for classes i = 1, …, 8 and subbands j = 1, …, M.

The three filter banks were designed using 8th-order Butterworth bandpass filters. The upper and lower cutoffs were set as follows:
(1) The first subband covered the alpha, beta, and gamma bands (a bandpass filter between 8 and 60 Hz was applied)
(2) The second subband covered the beta and gamma bands (a bandpass filter between 12 and 60 Hz was applied)
(3) The third subband covered the gamma band (a bandpass filter between 30 and 60 Hz was applied)

For classification, ensemble correlations between the spatially filtered reference signals and the spatially filtered EEG data buffer Y were calculated for each subband j = 1, …, M independently. This yielded a set of correlation coefficients,

ρ_(i,j) = corr(Y w_(i,j), x̄_(i,j) w_(i,j)),

which were calculated for all classes i and averaged across the number of filter banks:

ρ̄_i = (1/M) Σ_(j=1…M) ρ_(i,j).

To identify the intended target, the class label C was determined as the class with the highest averaged correlation, C = argmax_i ρ̄_i, where ρ̄_i denotes the correlation coefficient for class i averaged across subbands.

For the online classification, a sliding window mechanism was implemented as described in [25]. The amplifier transferred the EEG data in blocks of 30 samples per channel, which were collected in a buffer. The number of columns, n_b, of the buffer changed dynamically as new data were added each calculation interval (n_b increased in increments of 30 samples until it reached the template length n). After a new block was received, a class label was calculated using submatrices of the templates (containing only the first n_b columns). A system output was only produced if a threshold criterion was met: the distance between the highest and the second highest correlation needed to exceed 0.15; for some participants, this threshold was adjusted slightly during the familiarisation to increase accuracy. If this threshold criterion was met, the output was produced, the data buffer was cleared, and a gaze-shifting period of two seconds followed. If the criterion was not met, further data were added to the buffer. Once the buffer was full, the oldest data were shuffled out.
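A simplified sketch of this decision rule, with synthetic correlation values standing in for the CCA-filtered EEG correlations, might look as follows; the 0.15 margin is the default threshold from the text:

```python
# Threshold-based output rule: average correlations over subbands, then emit
# the winning class only if it beats the runner-up by a margin.
# Input correlations here are synthetic placeholders, not real EEG results.

def average_subbands(corrs_per_band):
    """Average per-class correlations across the M filter-bank subbands."""
    return [sum(col) / len(corrs_per_band) for col in zip(*corrs_per_band)]

def classify(avg_corrs, margin=0.15):
    """Return the winning class index, or None if the decision is withheld."""
    order = sorted(range(len(avg_corrs)), key=lambda i: avg_corrs[i], reverse=True)
    best, second = order[0], order[1]
    if avg_corrs[best] - avg_corrs[second] > margin:
        return best
    return None  # threshold not met: keep buffering EEG data
```

Here `classify` returning None mirrors the case where no output is produced and the data buffer keeps growing until the margin criterion is satisfied.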

3. Results

In the following, the results from the evaluation of the online spelling performance and the questionnaire are presented; Table 1 provides an overall summary of the results. The BCI performance was evaluated by comparing ITR and classification accuracy. The significance levels of the differences between the binary and quintary patterns were evaluated using paired t-tests. We used Wilcoxon signed-rank tests and Friedman’s analysis to evaluate the questionnaires.


                         60 Hz                    120 Hz                   240 Hz
                         Binary      Quintary     Binary      Quintary     Binary      Quintary
Offline accuracy (%)     97.7 (2.8)  98.7 (3.1)   99.0 (2.3)  96.9 (9.8)   94.7 (9.6)  96.3 (12.4)
Online accuracy (%)      99.4 (1.8)  98.5 (2.5)   97.6 (6.0)  97.5 (5.0)   97.9 (3.6)  97.6 (4.8)
Experiment time (s)      45.2 (7.0)  45.3 (4.4)   46.4 (9.8)  50.7 (12.0)  50.1 (12.5) 59.7 (37.8)
ITR (bits/min)           64.8 (8.8)  63.9 (6.1)   63.7 (11.5) 59.2 (11.4)  59.5 (12.5) 55.9 (14.8)
Relaxing/exhausting      4           2.5          3.5         3            3           3
Comfortable/annoying     4           3            3.5         2.5          3.5         3

Performance values are mean (SD); the questionnaire rows give median ratings. The provided values for the offline accuracies were achieved with a classification time window of 1 s.

3.1. Offline Performance Evaluation

The offline classification accuracies of the binary and quintary flickering paradigms were compared via leave-one-out cross-validation (see, e.g., [36]). All but one of the recorded blocks (each containing eight trials) were used for training, and the left-out block was used as validation data. The cross-validation was repeated so that each recording block was used once as validation data, and the resulting accuracies were averaged. For the performance analysis, this process was conducted for different classification time windows of up to 1 s. Figure 5 presents the mean classification accuracies for all tested patterns.
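The block-wise cross-validation loop can be sketched generically; `train` and `evaluate` below are placeholders for the template/spatial-filter estimation and the classification step, which are not reproduced here:

```python
# Leave-one-block-out cross-validation: each recording block serves once as
# validation data while the remaining blocks form the training set.
# `train` and `evaluate` are hypothetical callables supplied by the caller.

def leave_one_block_out(blocks, train, evaluate):
    """Average validation accuracy over all train/validation splits."""
    accs = []
    for i, held_out in enumerate(blocks):
        training = [b for j, b in enumerate(blocks) if j != i]
        model = train(training)            # e.g., build templates and filters
        accs.append(evaluate(model, held_out))  # accuracy on the held-out block
    return sum(accs) / len(accs)
```

The number of folds equals the number of recorded blocks, so every trial is used for validation exactly once, as in the text.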

For the time window of 1 s, the mean (SD) classification accuracies for the binary flickering pattern were 97.7 (2.76)%, 99.0 (3.3)%, and 94.7 (9.6)% for the 60, 120, and 240 Hz update rates, respectively; for the quintary flickering pattern, accuracies were 98.7 (3.1)%, 96.9 (9.8)%, and 95.3 (12.4)%. According to paired t-tests, no significant differences between the binary and quintary patterns were found for any of the three refresh rates.

In general, the accuracy achieved with the fastest flickering rate (using the 240 Hz refresh rate) was lower than those achieved with the 60 and 120 Hz refresh rates. No statistical differences between the binary and quintary patterns were observed.

3.2. Online Spelling Performance Evaluation

Figure 6 shows the individual performance in the online experiment. The commonly used ITR and classification accuracies were calculated. The ITR in bits/min, B, is given as follows:

B = (60/t) [log2(K) + P log2(P) + (1 − P) log2((1 − P)/(K − 1))],

where K denotes the number of classes (here K = 8), P denotes the classification accuracy, which is calculated as the number of correctly classified selections divided by the total number of selections, and t denotes the average time to make a selection (in s). An online calculation tool for the ITR can be found at https://bci-lab.hochschule-rhein-waal.de/en/itr.html.
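The ITR computation can be checked numerically; this sketch assumes the standard Wolpaw definition with K classes, accuracy P, and selection time t in seconds:

```python
import math

# Wolpaw-style ITR in bits/min. The degenerate terms P*log2(P) and
# (1-P)*log2((1-P)/(K-1)) are omitted when they vanish in the limit.

def itr_bits_per_min(p, k, t):
    """ITR in bits/min for accuracy p, k classes, t seconds per selection."""
    bits = math.log2(k)
    if p > 0:
        bits += p * math.log2(p)
    if p < 1:
        bits += (1 - p) * math.log2((1 - p) / (k - 1))
    return 60.0 / t * bits
```

For example, perfect accuracy with eight targets and 3 s per selection gives log2(8) = 3 bits per selection at 20 selections/min, i.e., 60 bits/min.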

All participants completed the task for all six tested flickering patterns. The average (SD) online classification accuracies for the binary flickering pattern were 99.4 (1.85)%, 97.6 (6.0)%, and 97.9 (3.6)% for the 60, 120, and 240 Hz update rates, respectively; for the quintary flickering pattern, accuracies were 98.5 (2.5)%, 97.5 (5.0)%, and 97.6 (4.8)%. The mean ITRs achieved with the binary pattern were 64.8 (8.8), 63.7 (11.5), and 59.5 (12.5) bits/min; the mean ITRs achieved with the quintary pattern were 63.9 (6.1), 59.2 (11.4), and 55.9 (14.8) bits/min. On average, the spelling times for the binary pattern were 45.2 (7.0), 46.4 (9.8), and 50.1 (12.5) s; the spelling times for the quintary pattern were 45.3 (4.4), 50.7 (12.0), and 59.7 (37.8) s. Figure 7 shows the achieved ITRs per flickering pattern. Regarding the differences between the binary and quintary patterns per refresh rate, analysis with paired t-tests did not reveal statistically significant differences for either the accuracy or the ITR.

3.3. Questionnaire Results

Figure 8 summarizes the questionnaire responses. Regarding the first question (relaxing/exhausting), the median ratings for the binary pattern were 4, 3.5, and 3 for the 60 Hz, 120 Hz, and 240 Hz update rates, respectively; the median ratings for the quintary pattern were 2.5, 3, and 3.

The medians of the binary and quintary patterns were significantly different only for the 60 Hz setup; the p values of Wilcoxon signed-rank tests were 0.003, 0.065, and 0.67 for 60, 120, and 240 Hz, respectively. According to the Friedman analysis, the differences between refresh rate settings were not significant for either the binary or the quintary pattern.

Regarding the second question (comfortable/annoying), the median ratings for the binary pattern were 4, 3.5, and 3.5, and for the quintary pattern, the ratings were 3, 2.5, and 3.

Again, only for the 60 Hz comparison were the medians of the binary and quintary patterns significantly different; the p values of Wilcoxon signed-rank tests were 0.009, 0.084, and 0.077 for 60, 120, and 240 Hz, respectively. According to the Friedman analysis, the differences between refresh rate settings were not significant for either the binary or the quintary pattern.

We further grouped the scores into relaxing (1–3), neither relaxing nor exhausting (4), and exhausting (5–7). Analogously, for the second question, we grouped the scores into comfortable (1–3), neither comfortable nor annoying (4), and annoying (5–7). For all refresh rate setups, the quintary pattern was rated less exhausting and less annoying. The quintary pattern at 60 Hz was rated the least exhausting; only two out of the eighteen participants (i.e., 11%) found this flickering design exhausting. The binary pattern was rated exhausting by five participants (28%) for all refresh rates. Regarding the second question, the quintary pattern at 120 Hz was rated least annoying (two out of eighteen, i.e., 11%).

Overall, answers indicate that the quintary flickering patterns are perceived as less annoying. According to additional comments from the participants, with the quintary pattern, it was easier to focus on the target letters. One participant found that the 60 Hz binary pattern caused headaches during the training stage. Several participants commented that the quintary flickering was less fatiguing.

4. Discussion

The aim of the study was to explore more user-friendly flickering patterns for c-VEP-based BCIs. Two flickering patterns, binary and quintary m-sequences, were tested at different flickering speeds. Both code sequences are nearly orthogonal to time-shifted versions of themselves. While the binary m-sequence is well established in BCI research, quintary m-sequences have so far not been tested. Due to the nonlinearity of the visual system (e.g., due to bifurcation or period doubling), the responses elicited by visual stimulation with the orthogonal patterns are themselves not orthogonal (see, e.g., [23]). Previous studies with online BCI systems showed that the accuracies obtained with m-sequence-based flickering patterns are nonetheless quite high [5, 28, 37]. In a previous study, we compared SSVEP and c-VEP flickering patterns and observed that the latter yielded on average higher offline accuracies [25].

The acceptance of BCIs based on visual evoked potentials may depend on two factors, the user friendliness and the BCI performance. A major focus of this study was on the aspect of user friendliness. The presented quintary sequence allowed for a more subtle stimulation in comparison with the conventionally used binary pattern and was rated as slightly more user-friendly according to our questionnaire.

The stimulus colour is a key parameter for BCIs based on visual stimulation. In this study, black and white or different shades of grey were used for the binary and quintary stimulus patterns, respectively. Humans respond differently to stimuli of different colours. The human retina contains two types of photoreceptors, rods and cones. The rod cells are responsible for black-and-white vision at low light levels; the cones are responsible for colour vision. There are three subtypes of cones, blue, green, and red, which respond preferentially to different wavelengths of light. As noted by Wei et al. [6], white stimulates all three types of cones and therefore may lead to the strongest VEP response. Aminaka et al. [38] implemented a c-VEP flickering paradigm with four green and blue chromatic flashing targets in order to reduce the risk of photosensitive epilepsy. In terms of performance, the authors did not observe any significant differences compared with the conventional black and white flashing pattern. Instead of different shades of grey, the digits of the m-sequence could thus be encoded with different colours, stimulating the different types of cones.

In addition to the colour of the targets, the flickering speed impacts the load on the visual channel. While high-frequency systems are less fatiguing, they tend to yield lower selection speeds. In this study, eight targets were used, which is a comparably low number for c-VEP studies. Still, due to the short classification time windows employed, ITRs between 55 and 65 bits/min were achieved with the different flickering modalities.

The achieved ITRs are slightly higher than those in low-target high-frequency SSVEP BCIs: Armengol-Urpi and Sarma [18] reported a mean ITR of 15.7 bits/min for strong flickering and 13.6 bits/min for weak flickering using a four-target SSVEP system with frequencies ranging from 40 to 45 Hz in a virtual reality application. Jiang et al. [19] reported a mean ITR of 18.8 bits/min in an online experiment using a 4-target system with phase-shifted 60 Hz stimuli.

Recently, a multitarget c-VEP system with fast flickering speed was tested: Başaklar [3] implemented a 36-target c-VEP system employing a 127 bit m-sequence at refresh rates of 60 Hz, 120 Hz, and 240 Hz. The authors reported average ITRs and accuracies of 85.9 bits/min and 92% for 60 Hz, 94.2 bits/min and 97% for 120 Hz, and 78.7 bits/min and 87% for 240 Hz. The authors concluded that the 120 Hz refresh rate setup is best to use in multitarget BCIs, whereas the 240 Hz refresh rate may be a good choice for low-target systems. Indeed, in this study, the differences in BCI performance between the tested patterns were not significant. According to the within-subject comparison, the tested flickering patterns were equally effective. Further tests of the quintary pattern with multitarget systems are planned.

5. Conclusions

This study explored the usage of quintary m-sequences for BCIs based on c-VEPs. The conventional binary and the proposed quintary patterns were compared in an online spelling experiment with different refresh rate setups. In terms of user friendliness, we found that the quintary pattern was perceived as more comfortable and relaxing than the binary pattern. In particular, the typically used 60 Hz binary pattern was perceived as annoying by more than a quarter of the participants. In terms of BCI performance, no significant differences between the patterns were found, suggesting that further c-VEP experiments could be designed with the proposed quintary pattern.

Data Availability

The recorded data sets cannot be shared according to legal guidelines. All participants were informed that the information needed for the analysis of the experiments was stored anonymously and will be deleted after a certain time period.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the European Fund for Regional Development (EFRD or EFRE in German) under Grant IT-1-2-001. The authors thank the participants of this research study and the student assistants for their help, stimulating comments, and suggestions.

References

  1. S. W. Golomb, Shift Register Sequences, OCLC, Dublin, OH, USA, 1982.
  2. E. E. Sutter, “The visual evoked response as a communication channel,” in Proceedings of the IEEE 1984 Symposium on Biosensors, pp. 95–100, Los Angeles, CA, USA, September 1984.
  3. G. T. Buračas and G. M. Boynton, “Efficient design of event-related fMRI experiments using M-sequences,” NeuroImage, vol. 16, no. 3, pp. 801–813, 2002.
  4. J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
  5. M. Spüler, W. Rosenstiel, and M. Bogdan, “Online adaptation of a c-VEP brain-computer interface (BCI) based on error-related potentials and unsupervised learning,” PLoS One, vol. 7, no. 12, Article ID e51077, 2012.
  6. Q. Wei, S. Feng, and Z. Lu, “Stimulus specificity of brain-computer interfaces based on code modulation visual evoked potentials,” PLoS One, vol. 11, no. 5, Article ID e0156416, 2016.
  7. S. Nagel and M. Spüler, “Modelling the brain response to arbitrary visual stimulation patterns for a flexible high-speed brain-computer interface,” PLoS One, vol. 13, no. 10, Article ID e0206107, 2018.
  8. S. Nagel and M. Spüler, “Asynchronous non-invasive high-speed BCI speller with robust non-control state detection,” Scientific Reports, vol. 9, no. 1, p. 8269, 2019.
  9. D. Zhu, J. Bieger, G. Garcia Molina, and R. M. Aarts, “A survey of stimulation methods used in SSVEP-based BCIs,” Computational Intelligence and Neuroscience, vol. 2010, Article ID 702357, 12 pages, 2010.
  10. A. Rezeika, M. Benda, P. Stawicki, F. Gembler, A. Saboor, and I. Volosyak, “Brain-computer interface spellers: a review,” Brain Sciences, vol. 8, no. 4, 2018.
  11. S. Nagel, W. Rosenstiel, and M. Spüler, “Finding optimal stimulation patterns for BCIs based on visual evoked potentials,” in Proceedings of the 7th International Brain-Computer Interface Meeting, pp. 164–165, BCI Society, Pacific Grove, CA, USA, May 2018.
  12. I. Volosyak, C. Hubert, and A. Gräser, “Impact of frequency selection on LCD screens for SSVEP based brain-computer interfaces,” in Bio-Inspired Systems: Computational and Ambient Intelligence, J. Cabestany, F. Sandoval, A. Prieto, and J. M. Corchado, Eds., pp. 706–713, Springer, Berlin, Germany, 2009.
  13. X. Chen, Z. Chen, S. Gao, and X. Gao, “A high-ITR SSVEP-based BCI speller,” Brain-Computer Interfaces, vol. 1, no. 3-4, pp. 181–191, 2014.
  14. C. S. Herrmann, “Human EEG responses to 1–100 Hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena,” Experimental Brain Research, vol. 137, no. 3-4, pp. 346–353, 2001.
  15. X. Gao, D. Xu, M. Cheng, and S. Gao, “A BCI-based environmental controller for the motion-disabled,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 137–140, 2003.
  16. T. Sakurada, T. Kawase, T. Komatsu, and K. Kansaku, “Use of high-frequency visual stimuli above the critical flicker frequency in a SSVEP-based BMI,” Clinical Neurophysiology, vol. 126, no. 10, pp. 1972–1978, 2015.
  17. D.-O. Won, H.-J. Hwang, S. Dähne, K.-R. Müller, and S.-W. Lee, “Effect of higher frequency on the classification of steady-state visual evoked potentials,” Journal of Neural Engineering, vol. 13, Article ID 016014, 2016.
  18. A. Armengol-Urpi and S. E. Sarma, “Sublime: a hands-free virtual reality menu navigation system using a high-frequency SSVEP-based brain-computer interface,” in Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology (VRST '18), pp. 1–8, ACM Press, Tokyo, Japan, December 2018.
  19. L. Jiang, Y. Wang, W. Pei, and H. Chen, “A four-class phase-coded SSVEP BCI at 60 Hz using refresh rate,” in Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6331–6334, Berlin, Germany, July 2019.
  20. B. Wittevrongel, E. Van Wolputte, and M. M. Van Hulle, “Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding,” Scientific Reports, vol. 7, no. 1, 2017.
  21. F. Gembler, P. Stawicki, A. Rezeika, A. Saboor, M. Benda, and I. Volosyak, “Effects of monitor refresh rates on c-VEP BCIs,” in Symbiotic Interaction, J. Ham, A. Spagnolli, B. Blankertz, L. Gamberini, and G. Jacucci, Eds., vol. 10727, pp. 53–62, Springer, Berlin, Germany, 2018.
  22. F. Gembler, P. Stawicki, A. Rezeika, and I. Volosyak, “A comparison of cVEP-based BCI-performance between different age groups,” in Advances in Computational Intelligence, I. Rojas, G. Joya, and C. Andreu, Eds., vol. 11506, pp. 394–405, Springer, Berlin, Germany, 2019.
  23. T. Başaklar, Y. Tuncel, and Y. Ziya Ider, “Effects of high stimulus presentation rate on EEG template characteristics and performance of c-VEP based BCIs,” Biomedical Physics & Engineering Express, vol. 5, no. 3, Article ID 035023, 2019.
  24. N. V. Manyakov, N. Chumerin, A. Robben, A. Combaz, M. van Vliet, and M. M. Van Hulle, “Sampled sinusoidal stimulation profile and multichannel fuzzy logic classification for monitor-based phase-coded SSVEP brain-computer interfacing,” Journal of Neural Engineering, vol. 10, no. 3, Article ID 036011, 2013.
  25. F. Gembler, P. Stawicki, A. Saboor, and I. Volosyak, “Dynamic time window mechanism for time synchronous VEP-based BCIs—performance evaluation with a dictionary-supported BCI speller employing SSVEP and c-VEP,” PLoS One, vol. 14, no. 6, Article ID e0218177, 2019.
  26. Z. Shirzhiyan, A. Keihani, M. Farahi et al., “Introducing chaotic codes for the modulation of code modulated visual evoked potentials (c-VEP) in normal adults for visual fatigue reduction,” PLoS One, vol. 14, no. 3, Article ID e0213197, 2019.
  27. F. Gembler, P. Stawicki, A. Saboor et al., “A dictionary driven mental typewriter based on code-modulated visual evoked potentials (cVEP),” in Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 619–624, Miyazaki, Japan, October 2018.
  28. F. Gembler and I. Volosyak, “A novel dictionary-driven mental spelling application based on code-modulated visual evoked potentials,” Computers, vol. 8, no. 2, 2019.
  29. R. Oostenveld and P. Praamstra, “The five percent electrode system for high-resolution EEG and ERP measurements,” Clinical Neurophysiology, vol. 112, no. 4, pp. 713–719, 2001.
  30. N. Zierler, “Linear recurring sequences,” Journal of the Society for Industrial and Applied Mathematics, vol. 7, no. 1, pp. 31–48, 1959.
  31. E. E. Sutter, “The brain response interface: communication through visually-induced electrical brain responses,” Journal of Microcomputer Applications, vol. 15, no. 1, pp. 31–45, 1992.
  32. P. Z. Marmarelis and V. Z. Marmarelis, Analysis of Physiological Systems, Plenum Press, New York, NY, USA, 1978.
  33. A. Mitra, “On the properties of pseudo noise sequences with a simple proposal of randomness test,” International Journal of Electrical and Computer Engineering, vol. 3, no. 3, pp. 164–169, 2008.
  34. A. R. Smith and J. F. Blinn, “Blue screen matting,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96), pp. 259–268, ACM Press, New Orleans, LA, USA, August 1996.
  35. X. Chen, Y. Wang, S. Gao, T.-P. Jung, and X. Gao, “Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface,” Journal of Neural Engineering, vol. 12, no. 4, Article ID 046008, 2015.
  36. R. Kohavi, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI '95), vol. 2, pp. 1137–1143, Morgan Kaufmann Publishers Inc., Quebec, Canada, August 1995.
  37. G. Bin, X. Gao, Y. Wang, B. Hong, and S. Gao, “VEP-based brain-computer interfaces: time, frequency, and code modulations,” IEEE Computational Intelligence Magazine, vol. 4, no. 4, pp. 22–26, 2009.
  38. D. Aminaka and T. M. Rutkowski, “A sixteen-command and 40 Hz carrier frequency code-modulated visual evoked potential BCI,” in Brain-Computer Interface Research, C. Guger, B. Allison, and M. Lebedev, Eds., pp. 97–104, Springer, Berlin, Germany, 2017.

Copyright © 2020 Felix W. Gembler et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

