Computational and Mathematical Methods in Medicine

Volume 2016, Article ID 6450126, 8 pages

http://dx.doi.org/10.1155/2016/6450126

## Generalized Information Equilibrium Approaches to EEG Sleep Stage Discrimination

Todd Zorick^{1,2} and Jason Smith^{3}

^{1}Department of Psychiatry, Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, CA 90073, USA

^{2}Department of Psychiatry and Biobehavioral Sciences, UCLA, Los Angeles, CA, USA

^{3}The Boeing Company, Seattle, WA 98124, USA

Received 15 March 2016; Revised 28 May 2016; Accepted 19 June 2016

Academic Editor: Valeri Makarov

Copyright © 2016 Todd Zorick and Jason Smith. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Recent advances in neuroscience have raised the hypothesis that the pattern of neuronal activation underlying electroencephalography (EEG) signals consists of power-law distributed neuronal avalanches, and that EEG signals are nonstationary. Therefore, spectral analysis of EEG may miss many properties inherent in such signals. A complete understanding of such dynamical systems requires knowledge of the underlying nonequilibrium thermodynamics. In recent work by Fielitz and Borchardt (2011, 2014), the concept of information equilibrium (IE) in information transfer processes has successfully characterized many different systems far from thermodynamic equilibrium. We utilized a publicly available database of polysomnogram EEG data from fourteen subjects with eight different one-minute tracings of sleep stage 2 and waking and an overlapping set of eleven subjects with eight different one-minute tracings of sleep stage 3. We applied principles of IE to model EEG as a system that transfers (equilibrates) information from the time domain to scalp-recorded voltages. We find that waking consciousness is readily distinguished from sleep stages 2 and 3 by several differences in mean information transfer constants. Principles of IE applied to EEG may therefore prove to be useful in the study of changes in brain function more generally.

#### 1. Introduction

In electroencephalography (EEG), scalp electrodes measure electrical potential as a function of time [1]. EEG measures the sum of local field potentials in the region of cortex below the electrode, comprising ~10^{9} cortical neurons [1]. EEG is typically analyzed by spectral analysis (Fourier transform), which assesses power in frequency bands [1]. However, many studies over the last 20 years have demonstrated that the underlying cortical neuronal dynamics are nonlinear and that EEG signals are nonstationary (the mean and variance change over time unpredictably). This has been most convincingly demonstrated both *in vivo* and *in vitro* using multielectrode arrays on cortical tissue, which revealed the presence of “neuronal avalanches” [2, 3].

Given that the cortical neuronal dynamics largely responsible for the summed local field potentials that comprise EEG are characterized by scale-free avalanches consistent with a system at a critical state that is well described by power-law dynamics, many attempts have been made to analyze EEG using methods derived from fractal and other nonlinear theories, with some degree of success [4–9]. Another avenue of physical understanding of cortical avalanche dynamics would be via statistical physics and thermodynamics; however, the relatively large magnitude changes in scalp-recorded voltages in EEG clearly could not be characteristic of a system in thermodynamic equilibrium [10]. Therefore, a thorough statistical physics understanding of EEG would involve a complete description of cortical nonequilibrium thermodynamics, which is not possible for a noninvasive technique such as EEG [10, 11]. Similarly, previously published information-theoretic shortcuts to a thermodynamic understanding (such as maximum entropy approaches) for EEG suffer from insufficient knowledge of appropriate constraints for microscopic variables [12, 13].

Instead, we propose to utilize the concept of generalized information transfer, where EEG could be modeled as an information transfer process [11, 14]. Generalized information equilibrium (IE) has been proposed as a system-independent mechanism to study systems far from thermodynamic equilibrium, with applications to astrophysics, economics, materials science, Newtonian physics, and thermodynamics [11, 14, 15]. The principles of IE were developed from Hartley’s original description [16] of an amount of information ($H$):

$$H = Kn, \quad (1)$$

where $n$ is the number of selected symbols and $K$ is a constant which depends on the number of symbols ($s$) available at each selection. Note that we use the natural logarithm, so that our natural information measure is in “nats” instead of “bits.” Following Fielitz and Borchardt (2014) we will use the Hartley definition of information to say that the information in a given process is

$$I = n \ln s, \quad (2)$$

where $s$ is the size of the alphabet of symbols used to encode and $n$ is the number of symbols we select. A key assumption is that $s \gg 1$ (which we have from the 10^{8} to 10^{9} neurons in the cortex underlying an electrode).

Note that the more commonly utilized Shannon entropy ($H_S$), defined as [17]

$$H_S = -\sum_{i=1}^{s} p_i \ln p_i, \quad (3)$$

reduces to the Hartley definition of information ($\ln s$) when the probability of each symbol in the alphabet is equal (i.e., $p_i = 1/s$ is a constant). The use of Hartley’s information theory, lacking any probabilistic assumptions, thus allows an estimation of information flow in any system even without access to knowledge of microscopic states or appropriate constraints in the case of maximum entropy approaches [11, 14]. It should also be noted here that Hartley information is a special case of the Rényi entropy for $\alpha = 0$ [18]:

$$H_{\alpha} = \frac{1}{1-\alpha} \ln \left( \sum_{i=1}^{s} p_i^{\alpha} \right). \quad (4)$$

It has been demonstrated that one can use Hartley’s information theory to define a natural amount of information for any system [11, 14]:

$$I(\Delta q) = \frac{|\Delta q|}{\delta q} \ln s, \quad (5)$$

where $(\ln s)/\delta q$ plays the role of an information transfer constant, $|\Delta q|$ is the absolute value of the change, and $\delta q$ is the signal of the process variable $q$, with $s \gg 1$. Using this relationship, virtually any system where information flows from a source ($q^{s}$) to a destination ($q^{d}$) can be considered from the point of view of information transfer [11, 14]. The important point is, however, that the amount of information ($I$) must generally obey the inequality

$$I(\Delta q^{d}) \leq I(\Delta q^{s}) \quad (6)$$

when the process variable $q^{d}$ is related to the information destination and the process variable $q^{s}$ to the information source. For the current study, we assume ideal information transfer ($I(\Delta q^{d}) = I(\Delta q^{s})$) and, hence, information equilibrium (IE). Considering (5), and taking the signal of each process variable to be proportional to the variable itself ($\delta q \propto q$), one gets

$$\frac{|\Delta q^{d}|}{q^{d}} \ln s^{d} = \frac{|\Delta q^{s}|}{q^{s}} \ln s^{s}. \quad (7)$$

For convenience we will denote the ratio $\ln s^{s} / \ln s^{d}$ as $\kappa$ and call it the information transfer constant for ideal information transfer or for IE. For EEG, we use (7) to define an information transfer constant ($\kappa$) for each time interval ($\Delta t$) to the voltage reading ($\Delta V$). We analyze the distribution of $\kappa$ values to see if they are peaked around a well-defined mean. In that case we can interpret (7) (for small changes in the process variables $t$ and $V$) as a differential equation:

$$\frac{dV}{dt} = \kappa \frac{V}{t}, \quad (8)$$

which has the solution

$$V(t) = V(t_0) \left( \frac{t}{t_0} \right)^{\kappa}. \quad (9)$$

We will make a few observations here about the IE approach and its relationship to other physical descriptions of dynamic systems. 
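The entropy relationships above can be checked numerically. The following sketch (in Python, rather than the R used elsewhere in this study; the alphabet size and distributions are illustrative) verifies that the Shannon entropy of an equiprobable alphabet reduces to the Hartley information $\ln s$, and that the Rényi entropy approaches the same value as $\alpha \to 0$:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats: H = -sum(p_i * ln p_i)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * ln 0 = 0
    return -np.sum(p * np.log(p))

def renyi_entropy(p, alpha):
    """Renyi entropy in nats: H_alpha = ln(sum(p_i ** alpha)) / (1 - alpha)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

s = 16                                # illustrative alphabet size (s >> 1 in the EEG setting)
hartley = np.log(s)                   # Hartley information per selection, in nats

# Equiprobable symbols: Shannon entropy reduces to Hartley information
uniform = np.full(s, 1.0 / s)
print(np.isclose(shannon_entropy(uniform), hartley))               # True

# Renyi entropy at alpha -> 0 equals Hartley information for any
# distribution that uses all s symbols, even a nonuniform one
nonuniform = np.arange(1, s + 1) / np.arange(1, s + 1).sum()
print(np.isclose(renyi_entropy(nonuniform, alpha=1e-9), hartley))  # True
```

For the nonuniform distribution the Shannon entropy is strictly below $\ln s$, which is why Hartley information, not Shannon entropy, provides the probability-free upper bound used here.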
For general information equilibrium, the solution to (8) can be rewritten as

$$V(t) = V(t_0)\, e^{\kappa \ln (t/t_0)}. \quad (10)$$

Let us now set a new parameter, $\lambda = \kappa / t_0$. Over short time scales ($t - t_0 \ll t_0$), (10) reduces to

$$V(t) \approx V(t_0)\, e^{\lambda (t - t_0)}. \quad (11)$$

Equation (11) is precisely the form of a Lyapunov exponent if the voltage measurement is considered as a superposition of a large number of neurons at different distances from the EEG sensor (i.e., $V$ is a sum over individual neuron voltages near the sensor, mapping a *4n*-dimensional “phase space” to a voltage measurement $V(t)$). Lyapunov exponents are deeply related to the study of chaotic dynamical systems, with positive values indicating a chaotic system with exponential divergence from initial conditions [19]. For systems with power-law sensitivity to initial conditions, Lyapunov exponent analysis has been generalized to the scale-dependent Lyapunov exponent, which has been utilized to successfully describe many dynamic physical systems, including EEG-based seizure identification in humans (e.g., [5, 20–22]).
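The short-time reduction of the power-law IE solution to an exponential (Lyapunov-like) form can be confirmed numerically; a minimal Python sketch with arbitrary illustrative values of $V_0$, $t_0$, and $\kappa$:

```python
import numpy as np

V0, t0, kappa = 1.0, 10.0, 0.8
lam = kappa / t0  # Lyapunov-like rate, lambda = kappa / t0

def V_ie(t):
    """General IE solution: V(t) = V(t0) * (t / t0)**kappa."""
    return V0 * (t / t0) ** kappa

def V_lyap(t):
    """Short-time exponential form: V(t) ~ V(t0) * exp(lam * (t - t0))."""
    return V0 * np.exp(lam * (t - t0))

# Short time scale (t - t0 << t0): the two forms agree closely,
# since kappa * ln(t / t0) ~ (kappa / t0) * (t - t0).
err_short = abs(V_ie(10.1) - V_lyap(10.1)) / V_ie(10.1)

# Long time scale: the exponential approximation breaks down.
err_long = abs(V_ie(30.0) - V_lyap(30.0)) / V_ie(30.0)
print(err_short < 1e-3, err_long > 0.1)  # True True
```

The relative error at $t - t_0 = 0.01\,t_0$ is on the order of $10^{-5}$, while at $t = 3 t_0$ the two forms differ by more than a factor of two, illustrating why the Lyapunov reading of (11) is a short-time-scale statement only.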

For the current study, we utilize a publicly available database of polysomnographic data for fourteen subjects with eight minutes each of waking and sleep stage 2 EEG (and eleven subjects with eight minutes of sleep stage 3 EEG) to assess for differences in patterns of $\kappa$ values and to assess the utility of IE in distinguishing different states of consciousness. Our hypothesis is that different states of consciousness can be identified by different distributions of $\kappa$ and different mean $\kappa$ values.

#### 2. Materials and Methods

##### 2.1. Database

We utilized a publicly available EEG dataset (slpdb) http://www.physionet.org/, which was a polysomnogram study of patients with severe sleep apnea [23]. There were fourteen subjects with 8 min of waking EEG and sleep stage 2 EEG and eleven subjects with 8 min of sleep stage 3 EEG. An additional dataset of subjects with waking EEG, REM sleep EEG, and sleep stage 1 EEG (1 minute each, nonoverlapping with the larger 8 min EEG dataset) was also generated from the larger dataset. The exact dataset used has previously been described in a prior unrelated study [9]. EEG segments chosen for further analysis were selected on the basis of the absence of movement artifacts and disordered breathing, which limited the amount of suitable tracings. No demographic and only limited clinical information was available from the dataset. Digitized 250 Hz EEG recordings on a 10–20 international system were used with a single EEG lead for each subject, which differed among subjects; no information was provided about reference electrode placement [9]. Use of the dataset for this study was approved by the VA West Los Angeles IRB.

##### 2.2. Estimation

EEG is a time series of voltage readings $V_i$ at times $t_i$, where $i = 1, \ldots, N$ ($N$ = length of series). Given a time interval $\Delta t$, the $\kappa$ values for each instant can be calculated for each value of $i$ up to $N - \Delta t$:

$$\kappa_i = \frac{\left| \ln \left( V_{i+\Delta t} / V_i \right) \right|}{\ln \left( t_{i+\Delta t} / t_i \right)}. \quad (12)$$

Therefore, each segment of EEG would be characterized by a series of information transfer constant ratios $\kappa_i$ for different values of the time interval $\Delta t$ (i.e., 1, 2, 4, 8 time steps, etc.), and for each segment the mean was calculated:

$$\bar{\kappa} = \frac{1}{N - \Delta t} \sum_{i=1}^{N - \Delta t} \kappa_i. \quad (13)$$

Code for extracting $\kappa$ values from EEG was written in R [24]. We used the natural log for transformation throughout. Values where $V_i = 0$ were excluded from $\kappa$ estimation (as the logarithm of zero is undefined).
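The estimation procedure can be sketched compactly. The original analysis was written in R; the following is a hypothetical Python re-implementation, in which the 250 Hz sampling rate, the use of absolute voltages for signed EEG values, and the random stand-in signal are all illustrative assumptions:

```python
import numpy as np

def kappa_series(V, lag, fs=250.0):
    """Compute kappa_i = |ln(V[i+lag] / V[i])| / ln(t[i+lag] / t[i]) for a
    voltage series sampled at fs Hz; lag is the time interval in steps.
    Signed voltages are folded by absolute value (illustrative assumption)."""
    V = np.abs(np.asarray(V, dtype=float))
    t = np.arange(1, len(V) + 1) / fs          # time stamps, starting at 1/fs
    num = np.abs(np.log(V[lag:] / V[:-lag]))
    den = np.log(t[lag:] / t[:-lag])
    kappa = num / den
    return kappa[np.isfinite(kappa)]           # drop values where V = 0 (log undefined)

# Mean kappa at several time intervals (1, 2, 4, 8 steps at 250 Hz)
rng = np.random.default_rng(0)
V = rng.normal(size=2500)                      # stand-in for 10 s of 250 Hz "EEG"
kbar = {lag: kappa_series(V, lag).mean() for lag in (1, 2, 4, 8)}
```

Each EEG segment is thus summarized by one $\bar{\kappa}$ per time interval, which is the quantity compared across sleep stages below.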

##### 2.3. Analyses

Probability density function (PDF) estimation was done using the R *density* function [24]. Lomb-Scargle periodograms were computed using the R package *cts* [25], designed to follow [26]. To assess statistically significant periodogram peaks, we utilized a significance threshold, heuristically estimating the maximum possible number of frequencies in the input PDF as twice the number of data points in the PDF [26]. All statistics were done in R [24]. For the REM sleep and sleep stage 1 analysis with the reduced size dataset, we utilized generalized linear mixed modeling (GLMM) with unstructured covariance matrices to account for subject-specific effects, using the R package *nlme* [27].
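The density-then-periodogram pipeline can be sketched as follows. This is a SciPy stand-in for the R *density*/*cts* tools actually used, with a synthetic bimodal $\kappa$ sample in place of real segment data:

```python
import numpy as np
from scipy.signal import lombscargle
from scipy.stats import gaussian_kde

# Synthetic kappa sample with two modes, standing in for one EEG segment
rng = np.random.default_rng(1)
kappa = np.concatenate([rng.normal(0.5, 0.05, 500),
                        rng.normal(0.8, 0.05, 500)])

# Kernel density estimate of the kappa PDF on an even grid
grid = np.linspace(kappa.min(), kappa.max(), 512)
pdf = gaussian_kde(kappa)(grid)

# Lomb-Scargle periodogram of the mean-removed PDF over angular frequencies
omegas = np.linspace(0.1, 200.0, 1000)
power = lombscargle(grid, pdf - pdf.mean(), omegas)
peak_omega = omegas[np.argmax(power)]
```

The dominant periodogram frequency summarizes any regular structure in the estimated $\kappa$ PDF; significance assessment against the heuristic frequency count would follow as described above.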

#### 3. Results

##### 3.1. Waking Differs from Sleep Stages 2 and 3 in $\bar{\kappa}$ Values at Multiple Time Scales

We calculated the mean $\kappa$ value ($\bar{\kappa}$) for each segment in our database with a range of different $\Delta t$ values (0.004, 0.04, 0.4, and 4 seconds; Figure 1, Table 1). An example of the comparison of the PDFs of $\kappa$ values for all three states of consciousness, for 1 min each of EEG from a single subject at $\Delta t$ values from 0.004 to 4 sec, is shown in Figure 1.