Computational and Mathematical Methods in Medicine

Volume 2016, Article ID 2029791, 14 pages

http://dx.doi.org/10.1155/2016/2029791

## P300 Detection Based on EEG Shape Features

^{1}Graduate Program in Computer Science and Engineering, Universidad Nacional Autónoma de México, 04510 Mexico City, Mexico^{2}Department of Computer Science, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, 04510 Mexico City, Mexico^{3}Neuroimaging Laboratory, Department of Electrical Engineering, Universidad Autónoma Metropolitana, 09340 Mexico City, Mexico

Received 21 August 2015; Revised 18 November 2015; Accepted 22 November 2015

Academic Editor: Joao Cardoso

Copyright © 2016 Montserrat Alvarado-González et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We present a novel approach to describing a P300 by a shape-feature vector, which offers several advantages over the feature vector used by the BCI2000 system. Additionally, we present a calibration algorithm that reduces the dimensionality of the shape-feature vector, the number of trials, and the number of electrodes needed by a Brain Computer Interface to accurately detect P300s; we also define a method to find, for a given electrode, a template that best represents the subject’s P300 based on his/her own acquired signals. Our experiments with 21 subjects showed that the SWLDA’s performance using our shape-feature vector was higher than the one obtained with the BCI2000 feature vector. The shape-feature vector is 34-dimensional for every electrode; however, it is possible to significantly reduce its dimensionality while keeping a high sensitivity. The validation of the calibration algorithm showed a high averaged area under the ROC curve (AUROC). Also, most of the subjects needed few trials to reach a high AUROC. Finally, we found that the electrode C4 also leads to better classification.

#### 1. Introduction

The P300 is an endogenous event-related potential (ERP) component: a positive deflection in the scalp-recorded electroencephalogram (EEG) that is typically elicited approximately 300 ms after the presentation of an infrequent stimulus (visual, auditory, or somatosensory) [1]. The specific set of circumstances for eliciting a P300 is known as the Oddball Paradigm, which consists of presenting a target stimulus amid more frequent standard background stimuli. Under this paradigm, a P300, among other ERPs, is unconsciously elicited every time a subject’s brain detects the target stimulus (the rare event). In fact, the P300 is a reasonable input signal, with desirable properties and stability, to control Brain Computer Interfaces (BCIs) [2], applications requiring precise real-time detection as well as memory and computation optimization [3, 4]. Feature vector dimensionality reduction has been a popular way to achieve these goals within the BCI community because it decreases the complexity of classifiers [5].

The features of a P300 have been represented in the time, frequency, time-frequency, and shape domains by using, among others, the Wavelet Transform [6], Genetic Algorithms [7], and Common Spatial Patterns [8]. Additionally, the approaches most commonly used for P300 classification are Linear Discriminant Analysis, Stepwise Linear Discriminant Analysis [9], and Support Vector Machines [10].

In this work, we are interested in the shape domain because we assume that (i) every subject produces P300 signals whose waveform can be consistently represented by template curves and (ii) such template curves from a subject are more similar to curves with a P300 than to curves produced by EEG background activity [11]. Most techniques based on these ideas are classified into Cross Correlation Alignment (e.g., Woody’s [12] and Maximum Likelihood (ML) [13] methods), Dynamic Time Warping (DTW) alignment [14, 15], and linear methods such as coherent averaging [11]. Although the latter is the most controversial of all, it is the fastest and the most commonly used averaging method because of the following argument: a P300 can be considered as a well-defined component since the alignment of its peaks “is most likely linear even though the distortion is nonlinear” [16]. For this reason, it is common practice to repeat the stimulation procedure to improve its signal-to-noise ratio (SNR) by coherently averaging several segments of filtered EEG signals generated after the stimulation (i.e., trials); the number of stimulations may vary from subject to subject for reasons explained in [17]. Coherent averaging implies that ERP components are unaffected by the averaging procedure and that any variability is due to noise [18]. However, P300’s amplitude, latency, and waveshape vary not only between electrodes but also in time. The first variation is due to its position; that is, the farther the electrode is from the cortical area, the lower the amplitude is. Thus, if we average all the electrode signals without taking into account the latter consideration, we will damage the P300’s properties; for this reason, usually, the electrode signals are processed individually. The variation in time is due to either biological determinants (e.g., increasing difficulty in perception and cognition of a task), subject’s attention level, or experimenter-dependent variables [17]. 
Thus, the coherent average does distort most ERP’s components [15, 19]; however, for a given subject, the averaged P300 remains consistent [20]. The previous considerations can be summarized in the following statement by Knuth et al. [18]: “Of course, waveshape variability also exists, but robust single-trial amplitude and latency estimates are nonetheless obtainable with the assumption of fixed component waveshapes.”

The novelty of this paper lies in the detection of P300 trials by applying pattern recognition techniques to their shape, represented as a feature vector. Specifically, we use a contour representation based on an adapted version of the Slope Chain Code (SCC) and some of its properties (e.g., the tortuosity measure) [21], as well as some general descriptors, such as differences of areas, to describe the differences between curves. Importantly, chain codes have been successfully used to describe and classify other biosignals such as electrocardiograms [22]. The advantages of using the SCC are as follows: (i) it is self-contained, which implies that a chain does not need decoding, and (ii) it is finite, which means that the resulting chains can be classified using grammatical techniques, syntactic analysis [23], or algebraic operations. Because the SCC is computationally expensive, we adapted it to make it less demanding. In addition to the adapted SCC, we also present an offline calibration algorithm that reduces the dimensionality of the shape-feature vector, the number of subject’s stimulations, and the number of electrodes needed by a BCI to accurately detect a subject’s P300.

We organized the paper as follows. In Section 2, we define the shape-feature vector and explain the details of the proposed algorithm. Then, in Section 2.3, we present our methodology to set the Oddball Paradigm and the experiments to define the parameters needed for the proposed algorithm. In Section 3, we present key results and a discussion of the experiments designed to evaluate the classification performance. Finally, in Section 4, we provide some conclusions.

#### 2. Materials and Methods

In this section, we describe the features of the ERP’s waveform that we use as the vector of characteristics. Additionally, we present an offline calibration algorithm that reduces the dimensionality of the shape-feature vector, the number of trials for a subject, and the number of electrodes needed by a BCI to detect a subject’s P300.

##### 2.1. Feature Vector Based on ERP’s Waveform

As we mentioned before, the vector of characteristics obtained from the waveform of a P300 is central to our work. A first step towards producing such a vector is the coherent averaging of a set of trials.

###### 2.1.1. Coherent Averaging

It is a well-known fact that coherent averaging increases the SNR in signals and we take advantage of this fact to enhance the small amplitude signals immersed in an EEG. We and other groups [25] assume that the coherent averaging is feasible because (i) there is no correlation between the ERP signal and the rest of the EEG, (ii) the stimulation time and the response reflected in the EEG signal are known, (iii) there exists a consistently detectable component (e.g., a P300), and (iv) the EEG is a random signal with zero mean.

In a common BCI experiment, a number of electrodes are used to acquire EEG signals; we refer to this number as $E$. The signal from an electrode is acquired $N$ times (i.e., $N$ trials). We will refer to the resulting set of all acquired signals (i.e., $N \times E$ signals) as $S$, and we divide it into two nonoverlapping subsets $S_{\mathrm{train}}$ and $S_{\mathrm{val}}$. We use the set $S_{\mathrm{train}}$ to train the calibration algorithm (which is discussed in Section 2.2) and the set $S_{\mathrm{val}}$ to validate its performance (see Section 3.2). Furthermore, every EEG signal recorded by an electrode is discretized into $M$ samples. Consequently, the $M$-dimensional vector $\mathbf{x}_i$ representing the $i$th recorded EEG signal can be written as

$$\mathbf{x}_i = \mathbf{s} + \mathbf{n}_i, \quad i = 1, \dots, N, \qquad (1)$$

where $\mathbf{s}$ and $\mathbf{n}_i$ are also $M$-dimensional vectors, representing the ERP signal and the EEG background (associated with the rest of the brain’s activity), respectively. By coherently averaging the $N$ signals of a single electrode, we have

$$\bar{\mathbf{x}} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_i = \mathbf{s} + \frac{1}{N}\sum_{i=1}^{N}\mathbf{n}_i = \mathbf{s} + \bar{\mathbf{n}}. \qquad (2)$$

In practice, the averaged vector $\bar{\mathbf{n}}$ is considered to be the zero vector (the vector whose element values are all equal to zero) because the EEG is a random signal with zero mean and little autocorrelation.

Because we intend to use the waveform of the recorded ERP signals to generate the vector of features, we represent a recorded signal as the sequence of ordered pairs $(t_j, y_j)$, for $j = 1, \dots, M$, where $t_j$ is a nonnegative integer corresponding to the sample number and $y_j$ is a real number representing the measured amplitude of the ERP at position $t_j$. As a result, the coherent average of (2) produces the vector $\bar{\mathbf{x}} = ((t_1, \bar{y}_1), (t_2, \bar{y}_2), \dots, (t_M, \bar{y}_M))$.
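The coherent-averaging step above can be sketched in a few lines of NumPy. The array shapes, variable names, and toy signal below are illustrative choices of ours, not part of the authors' pipeline:

```python
import numpy as np

def coherent_average(trials):
    """Coherently average a set of single-electrode trials.

    trials: array of shape (n_trials, n_samples); every row is one
    post-stimulus EEG segment aligned to the stimulation onset.
    Returns the averaged waveform of shape (n_samples,).
    """
    return np.asarray(trials, dtype=float).mean(axis=0)

# Toy demonstration: a fixed "ERP" buried in zero-mean Gaussian noise.
rng = np.random.default_rng(0)
n_trials, n_samples = 40, 128
t = np.linspace(0.0, 1.0, n_samples)
erp = np.exp(-((t - 0.3) ** 2) / 0.005)   # P300-like bump near t = 0.3 s
trials = erp + rng.normal(0.0, 1.0, (n_trials, n_samples))

avg = coherent_average(trials)
# Averaging N trials shrinks the noise amplitude by roughly 1/sqrt(N),
# so the bump stands out in `avg` while it is buried in single trials.
```

Because the noise terms are zero-mean and uncorrelated with the ERP, the averaged waveform converges to the ERP template as the number of trials grows, which is exactly the assumption behind (2).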

###### 2.1.2. Slope Horizontal Chain Code

Chain codes are alphanumeric sequences; integer alphabets are the most common choice because chains built on them are easier and faster to process than those based on alphanumeric alphabets. Several integer-alphabet chain codes have been proposed [21, 26–32], as well as methods to represent analog signals with sequences of bits (e.g., pulse code modulation [33]); however, the SCC is the most useful for the purposes of this paper because it divides the curve into straight-line segments placed onto the curve and preserves the contour shape with high resolution. By using the ordered-pair representation for ERP signals, we can obtain a chain code representing the contour of the curve described by its sequence of ordered pairs [34].

In this work, we adapted the SCC to represent ERP signals and called the result the Slope Horizontal Chain Code (SHCC). The main differences between the SCC and our code are the following. The SHCC adjusts a segment’s length to avoid interpolation; this adjustment takes advantage of the sampling uniformity during biosignal acquisition to keep the sampling points as the endpoints of segments. Contrary to the SCC, the SHCC does not compute the angle between two adjacent segments; instead, it computes the slope between a segment and the horizontal, in the continuous range $(-1, 1)$. Consequently, the segments are independent, which means that if the signal from one electrode is disturbed (e.g., due to noise or loss of information), this will not affect more than one chain element. Furthermore, the SHCC does not require rotation invariance, since it is not designed for closed curves, or scale invariance. These differences make the SHCC algorithm computationally less expensive and very useful for real-time applications. Moreover, the SHCC can be easily implemented in hardware, thus allowing the classifier to be integrated into signal acquisition devices.

On the other hand, the SHCC and the SCC share the following properties, which are very useful for our application: both place line segments onto the curve to preserve the contour shape with high resolution; both are translation-invariant, which is relevant since the SHCC can then adequately represent the P300’s variability; and both allow feature dimensionality and data reduction. These are very desirable properties in BCI applications [35].

A first step to transform the curve into a chain by the SHCC is to resample the vector $\bar{\mathbf{x}}$ with a new sampling distance given by

$$d = \frac{M - 1}{m}, \qquad (3)$$

where $m$ is a nonnegative integer representing the desired number of line segments used to represent the curve (in Section 2.3.4, we will explain the procedure to select the value of $m$). The new rediscretized vector is the sequence of ordered pairs $(u_j, v_j)$, where $u_j = jd$, for $j = 0, 1, \dots, m$, and $v_j$ is the amplitude of $\bar{\mathbf{x}}$ at position $u_j$. An alternative to this rediscretization process would be to change the sampling rate (i.e., subsampling) during the acquisition process, but this can potentially distort the ERP signal, due to aliasing, and produce regions of the signal similar to a P300, which in turn could produce false positives in the classification stage.
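This rediscretization can be sketched as follows, under our reading that segment endpoints are snapped to existing samples so that no interpolation is needed; the nearest-index rounding rule is our assumption, not taken from the paper:

```python
import numpy as np

def rediscretize(y, m):
    """Pick m + 1 endpoints from a sampled signal y of length M, spaced
    as evenly as possible while falling on existing samples, so the m
    resulting line segments require no interpolation."""
    M = len(y)
    # Ideal spacing is d = (M - 1) / m; round each ideal endpoint
    # position to the nearest available sample index.
    idx = np.rint(np.linspace(0, M - 1, m + 1)).astype(int)
    return idx, np.asarray(y, dtype=float)[idx]

# Toy usage: 16 segments over a 257-sample averaged waveform.
y = np.sin(np.linspace(0.0, np.pi, 257))
idx, v = rediscretize(y, 16)   # 17 endpoints; first = 0, last = 256
```

Snapping to existing samples is what lets the method sidestep the aliasing risk mentioned above: the values come straight from the averaged signal, never from an interpolated or re-acquired one.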

Before obtaining the alphabet symbols, the SHCC normalizes every element of $(u_j, v_j)$ as follows:

$$\hat{\mathbf{u}} = \frac{\mathbf{u}}{u_m}, \qquad \hat{\mathbf{v}} = \frac{\mathbf{v} - v_{\min}\mathbf{1}}{v_{\max} - v_{\min}}, \qquad (4)$$

where $\mathbf{1}$ is a vector whose element values are all equal to one.

These operations produce new coordinate vectors $\hat{\mathbf{u}}$ and $\hat{\mathbf{v}}$, whose elements lie in $[0, 1]$. With these coordinates, the SHCC produces a chain whose $j$th element represents the code associated with the slope between the horizontal axis and the segment joining the ordered pairs $(\hat{u}_{j-1}, \hat{v}_{j-1})$ and $(\hat{u}_j, \hat{v}_j)$, for $j = 1, \dots, m$. To compute the members of the alphabet, we use a precision of two decimals when computing the individual slopes, resulting in an alphabet of 200 elements. To exemplify this process, we show in Figure 1 a discretized ERP whose chain is $c$ = (0.06 −0.02 −0.06 0.06 0.05 0.02 0.01 −0.04 0.04 −0.03 −0.09 0.04 0.05 −0.01 −0.02 0.04).
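Putting the pieces together, a minimal SHCC encoder might look like the sketch below. The min-max normalization, the unit-length treatment of each segment's horizontal run (which bounds every slope to [−1, 1], consistent with a 200-element two-decimal alphabet), and the rounding rule follow our reconstruction of the text; they are illustrative rather than the authors' reference implementation:

```python
import numpy as np

def shcc(y, m):
    """Encode a sampled curve y as a slope-horizontal chain of m symbols.

    Each symbol is the slope of one straight-line segment with respect
    to the horizontal axis, after min-max normalizing the amplitudes to
    [0, 1], treating each segment's horizontal run as unit length, and
    rounding to two decimals (a 200-element alphabet).
    """
    y = np.asarray(y, dtype=float)
    M = len(y)
    idx = np.rint(np.linspace(0, M - 1, m + 1)).astype(int)  # endpoints
    v = y[idx]
    v = (v - v.min()) / (v.max() - v.min())  # amplitudes in [0, 1]
    # With unit horizontal runs, each slope is just the amplitude
    # difference across the segment, hence bounded by [-1, 1].
    return np.round(np.diff(v), 2)

# Toy usage: encode a noisy P300-like bump into a 16-symbol chain.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 257)
y = np.exp(-((t - 0.3) ** 2) / 0.01) + 0.05 * rng.normal(size=t.size)
chain = shcc(y, 16)
```

Because each chain element depends only on its own segment's endpoints, a disturbance in one region of the signal perturbs at most one symbol, which is the independence property argued for the SHCC above.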