
ISRN Biomathematics

Volume 2013 (2013), Article ID 871472, 37 pages

http://dx.doi.org/10.1155/2013/871472

## Modeling Neural Activity

Departments of Pediatrics and Neurology, Committee on Computational Neuroscience, Computation Institute, KCBD Room 4124, 900 E 57th Street, Chicago, IL 60637, USA

Received 13 September 2012; Accepted 4 November 2012

Academic Editors: T. Liu, D. Mogul, and M. R. Roussel

Copyright © 2013 Wim van Drongelen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper provides an overview of different types of models for studying activity of nerve cells and their networks with a special emphasis on neural oscillations. One part describes the neuronal models based on the Hodgkin and Huxley formalism first described in the 1950s. It is discussed how further simplifications of this formalism enable mathematical analysis of the process of neural excitability. The focus of the paper’s second component is on network activity. Understanding network function is one of the important frontiers remaining in neuroscience. At present, experimental techniques can only provide global recordings or samples of the activity of the huge networks that form the nervous system. Models in neuroscience can therefore play a critical role by providing a framework for integration of necessarily incomplete datasets, thereby providing insight into the mechanisms of neural function. Network models can either explicitly contain individual network nodes that model the neurons, or they can be based on representations of compound population activity. The latter approach was pioneered by Wilson and Cowan in the 1970s. Finally I provide an overview and discuss how network models are employed in the study of neuronal network pathology such as epilepsy.

#### 1. Introduction

A fundamental and famous model in neuroscience was described by Hodgkin and Huxley in 1952 [1]. They generated a formalism for the dynamics of the membrane potential of the giant axon of the squid, describing measurements of sodium, potassium, and leakage currents. Their model can be represented by an equivalent electronic circuit of the axon’s membrane in which a capacitor models the membrane’s phospholipids and several voltage-dependent resistors represent its ion channels. Much later, after computer technology became readily available, their formalism was widely adopted and extended to include other types of channels. In addition, it is used to create detailed cell models in which the cell is divided into a set of coupled compartments, each represented by a separate membrane model. In many studies these computational models are embedded in networks (e.g., [2–7]). In this approach, the individual nodes of the network and their connections are simulated with the purpose of finding the properties associated with emergent network activities such as the generation of synchronized bursts and oscillations. The enormous advantage of these models is that they are close to the experimental domain; that is, they may include recorded ion conductance values, the morphology of observed cell types, and known intercellular connectivity. In spite of the many parameters in these models, an overwhelming amount of detail is still missing. As a consequence, due to nonlinearities in neuronal function (e.g., membrane conductance dynamics, the coupling function between neurons), small inaccuracies in the model parameters may lead to large prediction errors of the model. In spite of this potential shortcoming, some models perform surprisingly well in predicting or mimicking experimental outcomes (e.g., [8]).

An additional problem with these detailed models is that, because of their complexity, they cannot be analyzed mathematically and may therefore not lead to deeper understanding of the processes they model. Further model reduction is required before mathematical analysis can be employed. For example, the Hodgkin and Huxley model is a four-dimensional nonlinear model (the dynamic variables are membrane potential V and the gating variables m, h, and n). Both FitzHugh [9] and Nagumo et al. [10], inspired by the nonlinear van der Pol oscillator [11], reduced the Hodgkin and Huxley formalism to one pair of dynamic variables (membrane potential and a recovery variable) to gain insight into the dynamics of the excitable membrane. Later a similar simplified approach was published for the excitation in the barnacle giant muscle fiber by Morris and Lecar [12].

A further simplification for simulation of nerve cell activity is the leaky integrate-and-fire (IF) model [13]. The basic IF model describes a neuron’s subthreshold behavior as a leaky integrator (a resistor and capacitor) and adds an *ad hoc* superthreshold spike when a preset threshold is exceeded. In between the spike events, the IF circuit is a one-dimensional linear model for the cell’s subthreshold activity. There are additions to this model: for instance, resonance properties can be included by making the model two-dimensional, and a quadratic function has been proposed to add a nonlinear component that is capable of generating the spike’s upward deflection [14]. More recently, Izhikevich [15] extended and combined these approaches into what he defines as the simple model of choice (SMC). In this model there is no threshold for the spike generator, and only the downward deflection (reset) of the spike is generated in an *ad hoc* fashion.
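The LIF mechanics described above can be sketched in a few lines of Python; the parameter values (capacitance, leak resistance, threshold) are illustrative assumptions, not taken from a particular study.

```python
import numpy as np

def simulate_lif(I=1.5e-9, C=200e-12, R=100e6, v_rest=-70e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, dt=1e-4, t_end=0.5):
    """Leaky integrate-and-fire: C dV/dt = -(V - v_rest)/R + I,
    with an ad hoc reset when V crosses v_thresh (illustrative values)."""
    n = int(t_end / dt)
    v = np.empty(n)
    v[0] = v_rest
    spike_times = []
    for k in range(1, n):
        v[k] = v[k-1] + dt * (-(v[k-1] - v_rest) / R + I) / C
        if v[k] >= v_thresh:
            spike_times.append(k * dt)   # record the spike time...
            v[k] = v_reset               # ...and paste in the reset
    return v, spike_times

v, spikes = simulate_lif()
```

With this constant drive the model fires regularly; note that the spike itself is never modeled, only its time is recorded before the ad hoc reset.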

When modeling neuronal networks, one can start from the network nodes, that is, the neurons, or make direct assumptions at a higher organization level, that is, the population activity. In the first case, the bottom-up approach, one must decide how to model the nodes. Simple on/off switches are the most abstract models of neurons in networks. The underlying hypothesis of this approach is that, for some of the aggregate behavior, the network properties themselves are of principal interest while the details of the cellular properties (beyond having active and inactive states) are irrelevant [16–20]. Many network models that use the bottom-up approach include more complex representations for the nerve cells and their connections. These representations may vary from a simple IF model to compartmental models including biophysically realistic channels (e.g., [2–7, 21–27]). Recently, a network model of IF neurons was connected to an artificial, simulated eye and arm, thereby creating a system that is capable of replicating human behavior [28].

A different family of network models describes aggregate behavior of neuronal populations. Although the components of these models are derived from or inspired by neuronal function, they do not explicitly consist of a network of individual agents. This approach was pioneered by Wilson and Cowan [29, 30]. Their result is a nonlinear, two-dimensional model describing the dynamics of excitatory and inhibitory activity. The expressions for the population activity are based on assumptions about cellular statistics such as the distribution of neuronal excitability. The initial 1972 Wilson-Cowan model describes only the time-dependent dynamics of the neural populations, whereas their 1973 paper adds a spatial component to this model. In the same decade, Nunez proposed models of cortical networks to explain and describe the electroencephalogram (EEG) representing compound activity of millions of nerve cells [31–34]. This work focused on the synaptic activities in the cortical pyramidal cell population since they are considered to be the major contributors to the mammalian EEG. To describe cortical activity, Nunez included inhibitory and excitatory populations and used linearized relationships to describe their interactions. Since the early 1970s, many models have been inspired by the Wilson and Cowan approach and the models by Nunez. Examples are the models by Lopes da Silva et al. [35] and van Rotterdam et al. [36], the Jansen neural mass model [37, 38], and Freeman’s models of the olfactory system [39, 40]. More recent examples are the models by Wendling and coworkers (e.g., [41]), Liley and Bojak [42], van Albada and Robinson [43], and Hindriks and van Putten [44]. A model developed by Jirsa and Haken [45] produces a field equation for the network activity generated by a cortical surface and can be considered a synthesis of the different population model approaches.
In part due to increased access to computer technology facilitating the analysis of nonlinear dynamics, the original Wilson-Cowan approach and its derived models are being “rediscovered” [46]. Due to the deterministic character of these models, they fail to model fluctuations. Stochastic approaches are capable of including higher order moments that characterize the neural activity such as synaptic input and spike trains (e.g., [47, 48]). Recently, Buice and Cowan [49] used a statistical mechanics approach to include fluctuations in the population model.

Understanding the function of neuronal networks is a frontier in current neuroscience. Our lack of fundamental understanding of network function in the nervous system prevents us from linking neuronal activity to higher level physiologic and pathologic behavior. Although compound activity of huge networks can be captured by the EEG or possibly by functional magnetic resonance imaging (fMRI), current experimental techniques in neuroscience cannot record the individual activity of cells forming larger networks (>1000 neurons) with sufficient resolution in the time domain (<1 ms). Due to the lack of such techniques, there is a real need for theoretical and computational models in neuroscience because they can provide a framework for the integration of experimental data. In addition, models provide an efficient tool for generating and testing series of hypotheses; to cite Abbott [50], “A skillful theoretician can formulate, explore, and often reject models at a pace that no experimental program can match.”

The purpose of this paper is to provide a summary of different models for neural activity with a focus on oscillatory patterns, to discuss the relationship between modeling approaches, and to place them in a logical and a historical context. Section 2 summarizes the Hodgkin-Huxley formalism and its simplifications. The next part, Section 3, describes the network models with nodes that are based on the models for the individual neurons, followed by Section 4 about population cell models without explicit individual neurons. Section 5 discusses a procedure and effects of adding a stochastic component. The application of neuronal models in studying pathology such as epilepsy is evaluated in Section 6. The final section contains the concluding remarks.

#### 2. Cellular Models

The Hodgkin and Huxley (H&H) equations [1] describe the nonlinear dynamics of the electrical properties of the excitable membrane of the giant nerve fiber that innervates the muscles involved in water jet propulsion in the squid. The inside and outside of the nerve fiber are separated by a biomembrane (Figure 1(a)). The principal component of the biomembrane is a double layer of phospholipids that serves as an insulator between the conductive media inside and outside of the fiber; this component can be modeled as a capacitor (C in Figure 1(a)). Ion pumps in the membrane create gradients of different ion species, such as sodium and potassium, across the membrane, and ion channels allow passive leak of the ions.

Because ions are charged, an ion pump creates a potential difference and can therefore be modeled as a potential source (E in Figure 1(a)). For individual ion species, these potentials are known as Nernst potentials and can be computed with the Nernst equation: for example, at 25°C, the Nernst potential for a positively charged monovalent ion species X, such as sodium or potassium, is determined from its intra- and extracellular concentrations ([X]_i and [X]_o, resp.) as E_X = (RT/F) ln([X]_o/[X]_i) ≈ 59 log₁₀([X]_o/[X]_i) mV. An ion channel conducts one or more ion species and may be modeled as a resistor (R in Figure 1(a)). The resistances of the sodium and potassium channels depend on the potential across the biomembrane. This membrane potential dependence is modeled by the H&H equations using activation and inactivation processes that govern the degree of opening of the channel and consequently its conductance. It is of interest to note that Hodgkin and Huxley proposed their formalism in the 1950s as an empirical model, and that molecular evidence for the existence of the different membrane components came decades later!
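The Nernst potentials are straightforward to compute. A minimal sketch, assuming textbook-style mammalian concentrations (the specific values are illustrative, not taken from this paper):

```python
import math

def nernst(conc_out, conc_in, z=1, T=298.15):
    """Nernst potential E = (R*T)/(z*F) * ln([out]/[in]), in volts."""
    R = 8.314      # gas constant, J/(mol K)
    F = 96485.0    # Faraday constant, C/mol
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Illustrative concentrations in mM (assumed values):
E_Na = nernst(145.0, 15.0)   # sodium: high outside -> positive potential
E_K  = nernst(5.0, 140.0)    # potassium: high inside -> negative potential
```

High extracellular sodium yields a positive E_Na (about +58 mV here), while high intracellular potassium yields a negative E_K (about −86 mV).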

In the H&H formalism, the potential difference between the fiber’s in- and outside, the membrane potential V, is related to activation and inactivation parameters of the sodium and potassium ion channels (m, h, and n). The differential equations that H&H used to describe the dynamics are

\[ C\frac{dV}{dt} = -\bar{g}_{Na}m^3h\,(V - E_{Na}) - \bar{g}_{K}n^4\,(V - E_{K}) - \bar{g}_{L}\,(V - E_{L}) + I \tag{1} \]

or

\[ C\frac{dV}{dt} + \bar{g}_{Na}m^3h\,(V - E_{Na}) + \bar{g}_{K}n^4\,(V - E_{K}) + \bar{g}_{L}\,(V - E_{L}) - I = 0. \tag{2} \]

The parameters’ dynamics are governed by three first-order differential equations

\[ \frac{dm}{dt} = \frac{m_\infty(V) - m}{\tau_m(V)}, \tag{3} \]

\[ \frac{dh}{dt} = \frac{h_\infty(V) - h}{\tau_h(V)}, \tag{4} \]

\[ \frac{dn}{dt} = \frac{n_\infty(V) - n}{\tau_n(V)}. \tag{5} \]

Here, the variables τ_m, τ_h, and τ_n are the time constants of parameters m, h, and n. Equation (2) is based on Kirchhoff’s law stating that the sum of all membrane currents is equal to zero. An equivalent membrane circuit is depicted in Figure 1(b). Based on the equation Q = CV (with Q—charge of the capacitor, C—its capacitance, and V—potential across the capacitor), we determine that the capacitive current equals C dV/dt. Further we determine that each ion current is described by Ohm’s law I_k = g_k (V − E_k), with C—membrane capacitance; V—membrane potential; g_k—conductance of ion species *k*, which is represented in (2) by the product of the maximum conductance ḡ_k and activation and inactivation parameters; E_k—equilibrium potential for ion species *k*. Equations (3)-(4) describe the dynamics for the activation (m) and inactivation (h) parameters of the sodium channel. Equation (5) describes the dynamics of the activation (n) of potassium. In the following, we often represent activation/inactivation variables generically by x. It can be seen in (3)–(5) that the parameters x_∞ and τ_x depend on the membrane potential V. These V-dependent relationships are all nonlinear, as are the contributions of the activation parameters m and n, since H&H determined experimentally that the model fits best if they used m³h and n⁴ in (2). The dynamics of the parameters in (3)–(5) were originally presented by H&H in the form dx/dt = α_x(V)(1 − x) − β_x(V)x, where α_x and β_x are rate constants that are membrane potential (voltage) sensitive. These presentations can be linked to the one in (3)–(5) by τ_x = 1/(α_x + β_x) and x_∞ = α_x/(α_x + β_x). The voltage-dependent functions for α_x and β_x were determined by H&H using the so-called voltage-clamp technique. This technique enables the experimenter to keep V constant, at the holding potential, while measuring membrane current. By repeating this measurement for a range of holding potentials, H&H determined the *I-V* relationship. This relationship was determined for sodium and potassium currents (a specific ion current can be determined by disabling other ion currents pharmacologically).
In their model, H&H used these *I-V* relationships to determine α_x(V) and β_x(V). As can be seen in (2), the total membrane current is proportional to the time derivative of the membrane potential, dV/dt; therefore, the experimentally determined *I-V* relationship can be interpreted as the phase space for membrane potential dynamics (e.g., [15, 51]).
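The link between the (α, β) rate description and the (x_∞, τ_x) form of (3)–(5) is easy to make concrete. The sketch below uses the commonly quoted modern-convention H&H rate functions (resting potential near −65 mV); these specific expressions are standard textbook values and are an assumption here, not taken from this paper’s tables.

```python
import math

# Standard H&H rate functions alpha_x(V), beta_x(V) (V in mV, modern
# convention with rest near -65 mV); resulting time constants are in ms.
def alpha_beta(V):
    am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return {"m": (am, bm), "h": (ah, bh), "n": (an, bn)}

def steady_state_and_tau(V):
    """Convert rate constants to x_inf = a/(a+b) and tau_x = 1/(a+b)."""
    out = {}
    for x, (a, b) in alpha_beta(V).items():
        out[x] = (a / (a + b), 1.0 / (a + b))
    return out

rest = steady_state_and_tau(-65.0)   # gating values at the resting potential
```

At rest this reproduces the familiar values m_∞ ≈ 0.05, h_∞ ≈ 0.6, and n_∞ ≈ 0.32.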

##### 2.1. Linearization of the Hodgkin and Huxley Equations

A linearized version of the H&H equations is useful to examine subthreshold oscillatory properties of the biomembrane around a resting or holding potential (say V₀). The mathematical details of the following can be found in Appendix A. Here I describe the details of the linearization procedure so that the physiological parameters in the full model can be related to its linearized version; readers not interested in this aspect can skip to Section 2.1.3.

In order to linearize the nonlinear equations, we rewrite (2)–(5) in a more compact form as C dV/dt = −F(V, m, h, n) + I and dx/dt = (x_∞(V) − x)/τ_x(V), with x representing an activation/inactivation variable m, h, or n, and F the total ionic current. Then we linearize about an equilibrium potential V₀. We define small variations in the parameters about this equilibrium as δV = V − V₀, δm = m − m₀, δh = h − h₀, δn = n − n₀, and δI = I − I₀. Using this, we obtain a linearized equation (2):

\[ C\frac{d\,\delta V}{dt} = -g_0\,\delta V - a_m\,\delta m - a_h\,\delta h - a_n\,\delta n + \delta I, \tag{6} \]

with g₀ = ∂F/∂V, a_m = ∂F/∂m, a_h = ∂F/∂h, and a_n = ∂F/∂n, all evaluated at the equilibrium (V₀, m₀, h₀, n₀).

For the gating parameters ((3)–(5)), using x = x₀ + δx, we get the linearized expression

\[ \tau_x(V_0)\frac{d\,\delta x}{dt} = -\delta x + x_\infty'(V_0)\,\delta V, \tag{7} \]

where x_∞′ = dx_∞/dV. In the literature, it is common to further simplify the notation of (6) and (7). For example, in Richardson et al. [52], (6) and (7) are simplified to (Appendix A)

\[ C\frac{dv}{dt} = -g\,v - \sum_x g_x w_x + \delta I, \qquad \tau_x\frac{dw_x}{dt} = v - w_x, \tag{8} \]

where v = δV, each w_x = δx/x_∞′(V₀) has the dimension of voltage, and g_x = a_x x_∞′(V₀) is the effective conductance contributed by gating variable x.

###### 2.1.1. Linearization of the Potassium Channel

Now we consider the potassium current, model small perturbations, and show that we can do so using a two-branch electrical circuit with resistors R_K and r_K and inductance L_K (Figure 2(a)). A small perturbation in the circuit with the two parallel branches is governed by

\[ \delta I = \frac{\delta V}{R_K} + \delta I_L, \qquad r_K\,\delta I_L + L_K\frac{d\,\delta I_L}{dt} = \delta V. \tag{9} \]

Equation (9) is just Ohm’s law (δV = Z δI) applied to the circuit in Figure 2(a). Here Z, the inverse of the admittance Y, represents the impedance of the circuit that links the fluctuations in potential and current (r_K and L_K are in series, and together they are parallel to R_K). Now if we consider a single ion current, the potassium current, we show that we can write this in the same form as (9). (Experimentally, a single ion current can be measured by disabling all other currents pharmacologically.)

First we restate, from (2), that the potassium current is given by

\[ I_K = \bar{g}_K n^4 (V - E_K). \tag{10} \]

A small perturbation approximated by a linearization around potential V₀ is

\[ \delta I_K = g_K\,\delta V + a_n\,\delta n, \tag{11} \]

with g_K = ḡ_K n₀⁴ and a_n = 4ḡ_K n₀³ (V₀ − E_K).

We can now use (7), substituting n for x, to relate δn to δV; in the frequency domain (replacing d/dt by jω) this gives

\[ \delta n = \frac{n_\infty'(V_0)}{1 + j\omega\tau_n}\,\delta V. \tag{12} \]

Substituting this result into (11) gives

\[ \delta I_K = \left(g_K + \frac{a_n\,n_\infty'(V_0)}{1 + j\omega\tau_n}\right)\delta V. \tag{13} \]

This result can be written as

\[ \delta I_K = \frac{\delta V}{R_K} + \frac{\delta V}{r_K + j\omega L_K}, \tag{14} \]

with R_K, r_K, and L_K as identified below.

Now we can directly relate the linearized potassium channel to the circuit in Figure 2(a). Comparing (9) and (14), we can determine that both expressions are equivalent if R_K = 1/g_K, r_K = 1/(a_n n_∞′(V₀)), and L_K = τ_n r_K.

Combining the above we find that

\[ R_K = \frac{1}{\bar{g}_K n_0^4}, \qquad r_K = \frac{1}{4\bar{g}_K n_0^3 (V_0 - E_K)\,n_\infty'(V_0)}, \qquad L_K = \tau_n r_K. \tag{15} \]

Thus we can successfully model small perturbations in the potassium channel with the equivalent circuit in Figure 2(a). Note that a_n n_∞′(V₀) is positive because a_n and n_∞′(V₀) are both positive at physiological values of V₀. Therefore r_K is positive and (because τ_n is also >0) L_K is positive.

###### 2.1.2. Linearization of the Sodium Channel

Now we apply the linearization to the inward current

\[ I_{Na} = \bar{g}_{Na} m^3 h (V - E_{Na}). \tag{16} \]

A small perturbation approximated by a linearization around potential V₀ is

\[ \delta I_{Na} = g_{Na}\,\delta V + a_m\,\delta m + a_h\,\delta h. \tag{17} \]

We simplify notation using: g_Na = ḡ_Na m₀³h₀, a_m = 3ḡ_Na m₀²h₀(V₀ − E_Na), a_h = ḡ_Na m₀³(V₀ − E_Na), m_∞′ = dm_∞/dV at V₀, and h_∞′ = dh_∞/dV at V₀.

From the three terms in (17), it is obvious that we need three branches in an equivalent electrical circuit for the sodium channel: one resistor R_Na and two branches each with a resistor and inductor, one for activation (r_m, L_m) and one for inactivation (r_h, L_h) (Figure 2(b)). From the circuit properties in Figure 2(b), we determine that a perturbation of the potential is governed by

\[ \delta I = \frac{\delta V}{R_{Na}} + \delta I_m + \delta I_h, \qquad r_m\,\delta I_m + L_m\frac{d\,\delta I_m}{dt} = \delta V, \qquad r_h\,\delta I_h + L_h\frac{d\,\delta I_h}{dt} = \delta V, \tag{18} \]

or, in the frequency domain,

\[ \delta I = \left(\frac{1}{R_{Na}} + \frac{1}{r_m + j\omega L_m} + \frac{1}{r_h + j\omega L_h}\right)\delta V. \tag{19} \]

We find that the expressions in (18) and (19) are equivalent to the linearized channel (17) if R_Na = 1/g_Na, r_m = 1/(a_m m_∞′(V₀)), and r_h = 1/(a_h h_∞′(V₀)), with L_m = τ_m r_m and L_h = τ_h r_h.

Combining the above we find r_m = 1/(3ḡ_Na m₀²h₀(V₀ − E_Na) m_∞′(V₀)) and r_h = 1/(ḡ_Na m₀³(V₀ − E_Na) h_∞′(V₀)).

So we can successfully model small perturbations in the sodium channel with the equivalent circuit in Figure 2(b). The model can easily be evaluated by simulation; however, building the model from hardware components cannot be done as simply because some of the conductance and inductance values are negative over the range we are interested in. Note that in the expressions above r_m is negative because m_∞′(V₀) is positive and a_m is negative at physiological values of V₀ (the factor V₀ − E_Na is negative). Therefore r_m is negative and (because τ_m is >0) L_m is also negative. In contrast, note that r_h is positive because both a_h and h_∞′(V₀) are negative at V₀. Therefore r_h is positive and (because τ_h is >0) L_h is also positive. Mauro et al. [53] and Koch [54] proposed to replace the negative inductance by a positive capacitance and to correct for the negative resistance in one of the other branches of the circuit. They show that a branch with negative resistance and inductance parallel to a true (positive valued) resistor can be replaced with an RC circuit without changing the effective impedance of the two branches. The equivalent diagrams of the circuitry are shown in Figure 2(c). In the original circuit (Figure 2(c1)) with the positive resistor R, negative resistance r, and negative inductor L, we have impedance in the original circuit

\[ Z_1 = \frac{R\,(r + j\omega L)}{R + r + j\omega L}. \tag{20} \]

In the replacement circuit (Figure 2(c2)), with resistors R_a = Rr/(R + r) and R_b = −r and the capacitor C′ = −L/r², we have impedance

\[ Z_2 = \frac{R_a\left(R_b + \frac{1}{j\omega C'}\right)}{R_a + R_b + \frac{1}{j\omega C'}}. \tag{21} \]

After substituting the component values and multiplying numerator and denominator by jωC′ we get

\[ Z_2 = \frac{R_a\,(1 + j\omega C' R_b)}{1 + j\omega C'(R_a + R_b)} = \frac{R\,(r + j\omega L)}{R + r + j\omega L}. \tag{22} \]

This result is identical to the original impedance Z₁; it will therefore also have the same frequency response: absolute value and phase (i.e., the same Bode plot).
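The impedance equivalence can also be verified numerically. The sketch below assumes illustrative component values and uses one consistent realization of the substitution (R_a = Rr/(R+r), R_b = −r, C = −L/r², all positive for the chosen values); the labels are mine, not the paper’s.

```python
import numpy as np

def z_parallel(za, zb):
    """Impedance of two branches in parallel."""
    return za * zb / (za + zb)

# Branch with negative resistance r and negative inductance L,
# in parallel with a positive resistor R (illustrative values).
R, r, L = 100.0, -150.0, -2.0

# Replacement: positive resistor R_b in series with a positive
# capacitor Cc, in parallel with resistor R_a.
R_a = R * r / (R + r)       # positive here because |r| > R
R_b = -r
Cc = -L / r**2              # positive capacitance replacing the negative L

w = np.logspace(-2, 4, 200)               # angular frequencies, rad/s
Z1 = z_parallel(R, r + 1j * w * L)        # original circuit
Z2 = z_parallel(R_a, R_b + 1.0 / (1j * w * Cc))  # RC replacement
max_err = np.max(np.abs(Z1 - Z2))
```

The maximum deviation between the two impedances over the whole frequency range is at the level of floating-point noise, confirming that the Bode plots coincide.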

###### 2.1.3. Generalized Equivalent Circuit

In the previous sections, it was shown that the channels in a linearized version of the H&H equations can be represented by equivalent circuits consisting of a network of resistors and inductors (Figure 2). Even in the case where negative values occur for these components (due to positive feedback between sodium conductance and membrane depolarization), they can be represented by positive valued alternative electronic components (Figure 2(c)). When combining these findings, we simplify the equivalent circuit in Figure 1 and consider the RCL network in Figure 3 as a linearized representation of the membrane.

The inductor channel in the circuit represents the stabilizing effects of the outward current activation (e.g., potassium) and inward current inactivation (e.g., sodium). The resistor and capacitor represent membrane resistance and capacitance plus corrections for negative impedances. Since we look at small perturbations around an equilibrium, which we conveniently set at zero, the driving forces of the difference between Nernst potential and resting potential can be ignored.

The pair of differential equations that govern the circuit in Figure 3 is

\[ C\frac{dV}{dt} = -\frac{V}{R} - I_L + I, \qquad L\frac{dI_L}{dt} = V. \tag{23} \]

We now simplify and nondimensionalize the system. First we substitute w = R I_L and get C dV/dt = −(V + w)/R + I and (L/R) dw/dt = V. Then we change to a timescale such that τ = t/(RC); substituting this we get the following pair of ODEs:

\[ \frac{dV}{d\tau} = -(V + w) + RI, \qquad \frac{dw}{d\tau} = \beta V, \tag{24} \]

with β = R²C/L. Finally, differentiating the first equation in (24) with respect to τ and using the second to eliminate w (note that β > 0), we obtain the following 2nd-order ODE:

\[ \frac{d^2V}{d\tau^2} + \frac{dV}{d\tau} + \beta V = R\frac{dI}{d\tau}. \tag{25} \]

This result for V and w is identical to the equations analyzed by Erchova et al. [55] (their equations (A16) and (A17)). An alternative route to create a single equation from the two original ones is by direct substitution while leaving dimensionality intact. If we differentiate the first expression in (23) with respect to time we have C d²V/dt² = −(1/R)(dV/dt) − dI_L/dt + dI/dt. We can use the second expression in (23) to substitute dI_L/dt, which as a function of V is given by dI_L/dt = V/L. This yields

\[ C\frac{d^2V}{dt^2} + \frac{1}{R}\frac{dV}{dt} + \frac{V}{L} = \frac{dI}{dt}. \tag{26} \]

This result is identical to the expression used by Erchova et al. [55] (their Equation (A1)). Interestingly, this equation is also similar to the network equation by Jirsa and Haken [45] in which the spatial derivative is set to zero (see Section 4.5). This similarity across the scale from neuron to network is seen frequently. In one example, investigators even use a cellular model for action potential generation to represent activity of a population [56]. Although Curto et al. [56] do not explicitly describe this, in this case the inward and outward currents of the cell membrane can be considered to represent the network’s excitation and inhibition, respectively.

###### 2.1.4. The Frequency Response of the Linearized Model

In electrophysiology, cellular properties are often probed with hyperpolarizing and depolarizing current steps. The unit impulse response, the derivative of the unit step response, is the inverse Fourier transform or inverse Laplace transform of the system’s frequency response or transfer function, respectively. Since we deal with a linearized system with current as input and membrane potential as output, the frequency response is just the impedance (i.e., Z(jω) = δV(jω)/δI(jω), with ω—frequency in rad/s). We can use the Laplace transform of (26) to obtain the transfer function of the linearized model:

\[ Z(s) = \frac{s}{Cs^2 + s/R + 1/L}. \]

In this form we can apply the Matlab *freqs* command to depict the Bode plot of the model (Figure 4).

An interesting observation is that the Bode plot in Figure 4 shows a peak in the magnitude of the impedance, indicating that neurons and their networks display resonance [55, 57–60]. With an appropriate, realistic choice of parameters, this peak is located in the alpha-frequency range of the EEG at 75 rad/s (~12 Hz).
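This resonance is easy to reproduce numerically. Matlab’s *freqs* has an open-source counterpart in scipy.signal.freqs; the sketch below simply evaluates the impedance directly with NumPy, using nondimensional component values chosen (as an assumption) to place the peak near 75 rad/s:

```python
import numpy as np

# Impedance from the transfer function Z(s) = s / (C s^2 + s/R + 1/L);
# nondimensional values chosen so that 1/sqrt(L*C) = 75 rad/s.
C = 1.0
R = 5.0
L = 1.0 / 75.0**2

w = np.linspace(1.0, 300.0, 10000)       # angular frequency grid, rad/s
s = 1j * w
Z = s / (C * s**2 + s / R + 1.0 / L)

w_peak = w[np.argmax(np.abs(Z))]         # resonance frequency (rad/s)
f_peak_hz = w_peak / (2.0 * np.pi)       # ~12 Hz, the EEG alpha band
```

For a current-driven parallel RLC circuit the magnitude peak sits exactly at 1/sqrt(LC), independent of R; R only controls the sharpness of the resonance.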

##### 2.2. Models That Can Be Derived from the Hodgkin and Huxley Formalism

The H&H representation and its linearized version can be seen as a basis for many different neuronal models. Of course one could either add complexity to the H&H formalism or further reduce it. The following sections summarize representative examples of both types.

###### 2.2.1. Complex Cell Models

Using the H&H formalism one might extend the sodium and potassium channels of the model neurons to also include different types of potassium channels (such as the A-current, a transient potassium current), persistent sodium channels, and different types of Ca^{++} channels. In addition to the voltage-sensitive channels, it is common practice to also include Ca^{++}- and/or Na^{+}-sensitive channels. Beyond adding channels, the cell model can also be divided into multiple coupled compartments (e.g., [4]). Each compartment represents a small part of the cell over which the membrane potential can be approximated as uniform. The rationale for the use of multiple equipotential compartments is to mimic the spatial separation of different functions: synaptic input, passive and active propagation of signals, and generation of action potentials. The more complex cell models are also frequently used to create networks in order to determine emergent behavior of neuronal populations (e.g., [3, 5, 6, 26, 61]). Special neural simulation packages such as Genesis and Neuron have been developed to simulate such models, even on parallel machines [62, 63]. A recent example in which many details are incorporated in the individual cell models is the Blue Brain Project [7]. The advantage of these models is that they are based on experimental data, enabling the modeler to link the model’s behavior to measurements. This close relationship can also be a disadvantage of these complex models; due to the complexity, the simulation results can be as difficult to interpret as real recorded data.

###### 2.2.2. Simplified Cell Models

One can simplify the H&H equations while preserving the nonlinearity, or one might use the linearization results described in Section 2.1. In fact both these approaches have been employed. In one case, the four dimensional H&H model can be reduced to a lower dimensional one and in the other case, the equivalent circuit of a linearized membrane model (Figure 3) is coupled to an ad hoc spike generator (Figure 5). The advantage of preserving the nonlinearity is that the spike generation mechanisms remain (at least in part) intact, but mathematical analysis of such a model can be fairly complicated. The linearized cell models are more accessible for analysis but require an ad hoc spike generator (Table 1).

(1) *FitzHugh-Nagumo/Morris and Lecar.* The simplifications used independently by FitzHugh [9], Nagumo et al. [10], and Morris and Lecar [12] are all based on the observation that the cell has a relatively fast activating inward current and a slower activating outward current. This allows one to simplify (2)–(5) from a four-dimensional model into a model with only two dynamic variables. In the FitzHugh-Nagumo approach, the inward current is sodium, whereas in the Morris and Lecar model for a barnacle muscle cell, the inward current is calcium. For the following discussion, it is not critical which ion species carries the inward current, so I will focus on simplification of the H&H formalism with a Na^{+} current. In the original equations (2)–(5) we have V, m, h, and n. The simplified version ignores activation of the inward current because it is so fast that it is considered instantaneous. The effects of h and n are similar; that is, they repolarize the membrane, so they can either be lumped or one of the two can be ignored so that only one repolarization parameter (often indicated by w) remains. This generates a nonlinear dynamical system with two variables. Although a few variations exist, the original equations in FitzHugh [9] were

\[ \dot{x} = c\left(y + x - \frac{x^3}{3} + z\right), \qquad \dot{y} = -\frac{x - a + by}{c}. \tag{27} \]

Here x, y, and z are membrane potential, recovery variable, and injected current, respectively; a, b, and c are constants.
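The two-variable dynamics are simple to integrate. The sketch below uses the widely used modern parameterization dv/dt = v − v³/3 − w + I, dw/dt = (v + a − bw)/τ (an equivalent rescaling of FitzHugh’s original variables) with the standard parameter set a = 0.7, b = 0.8, τ = 12.5; the drive I = 0.5 is chosen so the model sits in its oscillatory regime.

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, t_end=300.0):
    """Forward-Euler integration of the two-variable model
    dv/dt = v - v^3/3 - w + I,  dw/dt = (v + a - b*w)/tau."""
    n = int(t_end / dt)
    v = np.empty(n)
    w = np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for k in range(1, n):
        v[k] = v[k-1] + dt * (v[k-1] - v[k-1]**3 / 3.0 - w[k-1] + I)
        w[k] = w[k-1] + dt * (v[k-1] + a - b * w[k-1]) / tau
    return v, w

v, w = fitzhugh_nagumo()
# count upward crossings of v = 1 as "spikes"
spikes = int(np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0)))
```

With this drive the resting state is unstable and the model produces a periodic spike train, the relaxation-oscillation analogue of repetitive firing.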

(2) *Reduced Spiking Models. *From the linearization procedure we can appreciate that the neural membrane can be represented by the RCL-circuit (Figure 3). Of course, this representation only works for studying subthreshold behavior such as integration of inputs or resonance properties. This type of study explains the propensity of some neurons to resonate with specific input frequencies. If, as an additional simplification, the ion channel properties are completely ignored, we may even further simplify the membrane to an RC-circuit; in this configuration there are no resonant properties. Both these simplifications, the RCL- and RC-circuits, ignore the nonlinear properties of the excitable membrane. In the so-called integrate-and-fire (IF) models the nonlinearity is again introduced, but in an ad hoc fashion by employing a firing threshold. When this threshold is exceeded, a spike + reset are pasted into the signal. Although such an ad-hoc firing process can be included in both RCL- and RC-circuits, it is commonly applied to the latter.

Historically, the RC-based integrate-and-fire model was introduced by Lapicque in 1907 [13], long before the H&H formalism. Because of the leakage current through the resistor, it is also often called the leaky integrate-and-fire neuron (LIF). Much later a nonlinear variant of the LIF was introduced in which the *I-V* relationship is not linear as in (23), but quadratic: the QIF model (e.g., [14]). Because of this quadratic relationship, the *I-V* curve contains a stable and an unstable equilibrium, so the spike’s upstroke can be generated by the model itself and only an ad hoc spike reset is required (Figure 6).
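A minimal QIF sketch (normalized units, illustrative parameters):

```python
def qif(I=1.0, v_reset=-10.0, v_peak=10.0, dt=1e-4, t_end=30.0):
    """Quadratic integrate-and-fire in normalized units: dv/dt = v^2 + I.
    The upstroke is produced by the quadratic term itself; only the
    downward reset is pasted in ad hoc when v reaches v_peak."""
    n = int(t_end / dt)
    v = v_reset
    spike_times = []
    for k in range(n):
        v += dt * (v * v + I)
        if v >= v_peak:
            spike_times.append(k * dt)
            v = v_reset
    return spike_times

spikes = qif()
```

The quadratic term carries v to the peak on its own; only the return from v_peak to v_reset is ad hoc.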

Finally, one can combine the QIF scenario with the RCL-circuit to produce a model that is capable of both resonance and spike upstroke generation. This is defined as the simple model of choice (SMC) by Izhikevich and was shown to be able to produce a plethora of neuronal behavior (e.g., [15]).

##### 2.3. Summary of the Cell Models

We can consider the different models and summarize their relationships by counting the dimensions (Table 1). The H&H model is four dimensional whereas the subthreshold component of the IF model is only one dimensional. Of course, dimensions are added when the ad hoc spike generator is included but most studies using this model will focus on the subthreshold behavior. Table 1 summarizes the model dimensions and provides an overview of the spiking mechanisms of the models we evaluated above.

#### 3. Populations of Cell Models

Network models containing concrete nodes come in a wide variety, ranging from nodes represented by an on/off switch to a detailed multicompartmental cell model with multiple ion channels. Similarly, the connections between the nodes can be modeled by relatively simple rules or they can be simulated by biophysically realistic synaptic ion channels. In general, the purpose of these models is to provide simulation results that can be compared to a result from mathematical analysis and/or an experimental measurement.

##### 3.1. On/Off Switches

The first simple network model using coupled switches was described by McCulloch and Pitts [16]. Their approach was to represent network elements as gates in a logical circuit (e.g., AND, OR, etc.). Others have used a similar approach and also compared the network on/off node with the spins in a spin-glass lattice (e.g., [17]). The so-called Ising spins s_i can be up or down, indicated by +1 and −1, respectively [64]. Each spin tends to align with its neighbors. However, at high temperatures, the probability of this alignment decreases to the chance level of ½. We model the cortical surface by a two-dimensional lattice of N Ising spins. The overall state of this lattice can be characterized by its magnetization M, the average of the individual spins: M = (1/N) Σ_i s_i. Below a certain critical temperature T_c, the lattice will retain the global magnetization induced by an external magnetic field. Above the critical temperature (also called the Curie temperature), thermal fluctuations will cause demagnetization. Although this is a mathematical framework in statistical mechanics, it has been used to represent neuronal networks where the spins are the neuronal nodes in the network; in these models, the up and down states are often represented by 1 (active) and 0 (inactive), respectively. These networks with N nodes are typically simulated in discrete time and follow the following update rules.
(1) At each time step, the local field h_i of node i is determined by the states of the set N(i) of its connected neighbors: h_i = Σ_{j∈N(i)} w_{ji} s_j, with w_{ji} representing a weighting function from node j to node i (in many models this weight is set to one).
(2) The update of s_i now takes place in a probabilistic fashion; the probability that s_i will be in the up state in the next time step is P(s_i = +1) = 1/(1 + e^{−2h_i/T}); variable T is the temperature. It can be seen that P → ½ as T → ∞ and that the update is a deterministic process (P = 0 or 1) as T → 0.

This update process is a stochastic variant of one of the rules described by McCulloch and Pitts [16], where a neuron’s output *y* is updated by the states of its input neurons in a deterministic fashion; that is, output *y* is set to 1 when its input exceeds a threshold and to 0 otherwise. A mean field approach for the analysis of the Ising spin model is described in Section 4.1.
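The two update rules above can be sketched in a few lines. The following fragment is an illustration rather than a model from the literature: the network size, random connectivity, temperature values, and unit weights are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ising_network(n=100, temperature=1.0, steps=200, p_connect=0.1):
    """Stochastic on/off network with Ising-style update rules (a sketch;
    size, connectivity, and temperature are illustrative choices)."""
    # Random symmetric 0/1 connectivity with unit weights, no self-connections.
    w = (rng.random((n, n)) < p_connect).astype(float)
    w = np.triu(w, 1)
    w = w + w.T
    s = rng.choice([-1.0, 1.0], size=n)        # spins up (+1) or down (-1)
    activity = []
    for _ in range(steps):
        h = w @ s                              # input from connected neighbors
        p_up = 1.0 / (1.0 + np.exp(-2.0 * h / temperature))
        s = np.where(rng.random(n) < p_up, 1.0, -1.0)
        activity.append(s.mean())              # magnetization of the network
    return np.array(activity)

m_cold = simulate_ising_network(temperature=0.1)
m_hot = simulate_ising_network(temperature=50.0)
```

At high temperature the update is nearly random, so the magnetization hovers near zero; at low temperature the update becomes an almost deterministic majority rule.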

A model similar to the Ising spin lattice was described by Hopfield [18]. He also considered switches interconnected in a network in which the strength of the connection between any neuron pair i and j is determined by the synaptic weight w_ij (no unit has a connection with itself, w_ii = 0, and, unrealistically, he also assumed that the weight matrix is symmetric, w_ij = w_ji). The principal difference between Hopfield’s approach and the ones described above is that the updates occur randomly and asynchronously; that is, at every time step a node is randomly selected and the update rule is applied to that node only. This update rule is similar to the deterministic one employed by McCulloch and Pitts [16]; that is, the selected neuron is active if the sum of the activities of its connected neurons exceeds a threshold. More on stochastic approaches is discussed in Section 5.
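A minimal sketch of Hopfield's scheme, using ±1 states, Hebbian weights with zero diagonal, and the random asynchronous threshold update; the network size, the number of corrupted bits, and the number of update steps are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_hopfield(patterns):
    """Hebbian weights: symmetric, zero diagonal (as in Hopfield's assumptions)."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=2000, threshold=0.0):
    """Random asynchronous updates: one randomly chosen unit per time step."""
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if w[i] @ s > threshold else -1.0
    return s

# Store one pattern, then recall it from a corrupted version.
pattern = rng.choice([-1.0, 1.0], size=64)
w = train_hopfield(pattern[None, :])
noisy = pattern.copy()
flip = rng.choice(64, size=8, replace=False)
noisy[flip] *= -1
recovered = recall(w, noisy)
```

With a single stored pattern and a small number of flipped bits, each asynchronous update moves the state toward the stored pattern, so the corrupted input is cleaned up.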

##### 3.2. Networks of Reduced Spiking Models

As described in Section 2.2.2 (2), there is a variety of simplified models ranging from the simple IF to rather complete SMC representations (Table 1). The IF model especially is frequently used for network simulations. One advantage of this model is that it is linear in between spikes. By employing this approach, one can not only simulate the network but also, to some extent, analyze its properties mathematically. Using the simplification in Figure 3 and further neglecting induction effects, we can simplify (23). One can then describe the network dynamics by the membrane potential v_i, the activity level of each neuron, by dv_i/dt = a − v_i + g E_i(t) (in dimensionless form)
(e.g., [21, 22]). The injected current is separated into a spontaneous drive and a time-dependent input. Parameter a represents the spontaneous drive and E_i(t) is the time-dependent input. The presynaptic spike train can be represented by a train of unit impulses δ(t − t_n). Each incoming impulse generates a postsynaptic signal that can be modeled by a so-called alpha function; that is, for an impulse arriving at time t_n, its postsynaptic effect is given by H(t − t_n) α²(t − t_n)exp(−α(t − t_n)). The Heaviside function H is included because the alpha function does not have a physical meaning for negative arguments, t < t_n. Combining these considerations, one can conclude that input E_i is the weighted sum of all alpha functions resulting from the incoming spike train (e.g., [23, 24])
Here we determine the input by using the synaptic weight matrix C, in which each entry C_ij is the connectivity strength from neuron j to neuron i. Scaling factor K_i represents the number of presynaptic inputs of neuron i. We now determine the Laplace transform of (29) and rearrange terms, which can also be inverse transformed into the time domain in the form of a differential equation
Since the first-order differential equation (28) and the second-order one (30) are coupled via input *E*, they create a three-dimensional nonlinear system, a system capable of exhibiting oscillatory and chaotic behavior. To study the activity in the model, Olmi et al. [23] rewrite the system as three 1st-order differential equations and integrate between successive pulses t_n and t_{n+1}. This creates an event-driven map that can be further investigated. Tattini et al. [24] examine the propensity for chaotic behavior by evaluating the attractors characterizing the network activities, and they show that both connectivity and network size are critical properties: unsurprisingly, the level of chaos, reflected by the maximum Lyapunov exponent, decreases with network size, although less so for sparsely connected networks.
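The ingredients above can be illustrated with a direct Euler simulation of a single IF neuron driven by alpha-function postsynaptic potentials; all parameter values below are illustrative choices, not those of [23, 24].

```python
import numpy as np

def lif_with_alpha_synapse(pre_spike_times, t_end=1.0, dt=1e-4,
                           a=0.9, tau_m=0.02, tau_s=0.005, weight=0.4):
    """Leaky IF neuron driven by alpha-function PSPs (illustrative parameters).
    Subthreshold: tau_m * dv/dt = a - v + E(t); spike and reset at v = 1."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    # Input E(t): weighted sum of alpha functions, one per presynaptic spike.
    # The Heaviside step (np.where) zeroes the function for negative arguments.
    E = np.zeros(n)
    for ts in pre_spike_times:
        d = t - ts
        E += weight * np.where(d >= 0, (d / tau_s) * np.exp(1 - d / tau_s), 0.0)
    v = np.zeros(n)
    out_spikes = []
    for k in range(1, n):
        dv = (a - v[k - 1] + E[k - 1]) / tau_m
        v[k] = v[k - 1] + dt * dv
        if v[k] >= 1.0:            # threshold crossing: emit spike, reset
            out_spikes.append(t[k])
            v[k] = 0.0
    return t, v, out_spikes
```

With the drive a below threshold the neuron is silent until synaptic input pushes it over; a short burst of presynaptic spikes then produces output spikes.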

More complicated cell models have also been employed to investigate network activity. Recently, Izhikevich and Edelman [25] used the SMC model to create a large network while using connectivity based on the tracts determined by a diffusion tensor image of the brain.

##### 3.3. Networks of Ion Channel Models Based on Biophysical Considerations

Using the H&H formalism combined with approximation of the cell morphology by segments, one can create realistic representations of neurons. Addition of ligand-sensitive, synaptic channels allows one to create networks. Equation (2) is then adapted to include currents associated with other voltage-sensitive channels, the synaptic currents, and intercompartmental currents. Interestingly, by applying Kirchhoff’s first law, it is apparent that intracellular intercompartmental currents must be equal to the extracellular current between the compartments, thereby providing a tool to model extracellular measurements. The dynamics of the voltage-sensitive channels are governed by expressions similar to (3)–(5). The intercompartmental current, following Ohm’s law, is determined by the potential difference between the isopotential compartments divided by the axial resistance between them. Traub and co-workers were amongst the first to develop this approach (e.g., [2, 26]); the Blue Brain Project [7] is a famous recent attempt to determine properties that govern network activity at mesoscopic levels. The advantage of the approach is that the neurons in the network are modeled with realistic ion channels, which allows a direct comparison with experimental data. However, well-known problems with the approach are associated with the complexity and large number of parameters, many of which have not yet been determined experimentally. The large parameter space renders this type of model ill posed, and the complexity can make interpretation of the results very difficult. A combined approach with associated detailed and global models can be very useful to understand the principles underpinning the simulation results of the detailed models [5, 6, 27, 65, 66].

The detailed model of neocortex outlined in Figure 7(a) contains pyramidal cell populations and the inhibitory basket and chandelier cells. The network generates an EEG signal based on the weighted sum of the neuronal currents. Surprisingly, when excitatory coupling is reduced, the model starts to oscillate (top traces in Figures 7(a) and 7(b)) [5]. The mechanism generating these oscillations is difficult to grasp from the individual rasters of the superficial and deep pyramidal neurons (grey panels in Figure 7(b)): only a few neurons are active during each cycle; that is, most neurons show cycle skipping (Wallace et al., 2012). However, when the superficial and deep pyramidal activities are each lumped (Figure 7(c)), it can be seen that the simulated EEG oscillation is associated with alternating oscillations in the two populations. The mechanism of the EEG oscillation is thus much better understood by considering population activity patterns while ignoring the details of the individual neural activity.

Not only the oscillatory activity pattern itself but also the sudden transition between nonoscillatory and oscillatory network behavior can be understood from bifurcation analysis of a global model of two coupled excitatory-inhibitory populations as depicted in Figure 7(b) [65, 66].

#### 4. Mean Field Network Models

The network models discussed in this section do not explicitly simulate each network node but start directly by modeling populations of nerve cells. Some of the models explicitly use the mean field approach commonly employed in statistical mechanics; others create neural populations with functionality inspired by single neuron function. The final results of these approaches can be fairly similar since both usually contain excitatory and inhibitory components (the Ising spin model is an exception). As I will point out repeatedly, this results in a strong similarity between the equations produced by the various network models, and even between these and the equations describing single neurons.

##### 4.1. The Ising Spin Model

Two- or three-dimensional models of the Ising-spin lattice, where the spins represent the neurons, are not easy to solve. A mean field approach is the simplest approximation to describe such a system. In the lattice, each node experiences the sum of the magnetic forces created by the other nodes and, if present, an external magnetic field. In the mean field approach, we replace the internal field generated by the spins by its average value. It is beyond the scope of this paper to derive the mean field relationship, which can be found in many physics textbooks (e.g., [67, Chapter 7]). Using the mean field expression, we can describe the equilibrium state of the lattice of spins—or network of model neurons—by the expression
m = tanh[(Jqm + h)/(kT)], with k the Boltzmann constant (in the following, *k* is set to unity), T the temperature, J the magnetic coupling strength, q the number of neighbors, m the magnetization of the lattice, and h an externally applied magnetic field. As shown in the following, (32) shows sudden transitions in behavior according to the so-called cusp catastrophe [68]. First we can determine the fixed points of the expression graphically by finding the intersections of the two sides, m and tanh[(Jqm + h)/T], plotted as functions of m. When depicting these functions for different T and h, it can be seen that they intersect at either three points or one point. In the case of three intersections, the middle one is an unstable fixed point and the other two are stable ones; therefore the system is bistable in one area of the (T, h) plane and monostable elsewhere.

The transition between bi- and monostability occurs at the bifurcation where the straight line intersects the sigmoid tangentially; this indicates that two conditions must be satisfied at the transition point (m*, T*): m* = tanh[(Jqm* + h)/T*] and, since the curves must be tangent, 1 = (Jq/T*) sech²[(Jqm* + h)/T*]. From these two expressions, one can find expressions for h and T (as functions of m*). The plot of these functions in the (T, h) plane is the so-called stability diagram (Figure 8(a)). These lines mark the border where the system transitions from mono- (one fixed point) to bistability (three fixed points, with one of the points unstable). Another common presentation is a bifurcation diagram; here we have a codimension-2 case with two independent variables T and h. Note that for h = 0, m = 0 is always a solution (fixed point) of (32). If we use h = 0, we have a stable fixed point at m = 0 for T > T_c = Jq and an unstable one plus two stable fixed points for T < T_c. This is depicted in Figure 8(b); we can see the characteristic of a supercritical pitchfork bifurcation. This plot can be considered as cross-sections of the 3D plot of m as a function of T and h shown in Figure 8(c); this surface depicts the so-called cusp catastrophe.
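The graphical fixed-point argument is easy to reproduce numerically. The sketch below scans f(m) = tanh((Jqm + h)/T) − m for sign changes and bisects each bracket; with h = 0 it returns one fixed point above T_c = Jq and three below it.

```python
import numpy as np

def fixed_points(T, J=1.0, q=1.0, h=0.0):
    """Locate solutions of m = tanh((J*q*m + h)/T) by scanning for sign
    changes of f(m) = tanh((J*q*m + h)/T) - m, then bisecting each bracket."""
    def f(m):
        return np.tanh((J * q * m + h) / T) - m
    grid = np.linspace(-1.0, 1.0, 2001)
    roots = []
    for lo, hi in zip(grid[:-1], grid[1:]):
        flo, fhi = f(lo), f(hi)
        if flo == 0.0:
            roots.append(lo)
        elif flo * fhi < 0.0:
            for _ in range(60):                # plain bisection on [lo, hi]
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return sorted(roots)

# With J = q = 1 and h = 0, T_c = 1: one fixed point above T_c,
# and the supercritical pitchfork gives three below it.
hot = fixed_points(T=1.5)
cold = fixed_points(T=0.5)
```

The two nonzero roots of the cold case are the stable branches of the pitchfork; the root at m = 0 is the unstable one.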

##### 4.2. The Wilson and Cowan Model

The first model by Wilson and Cowan [29] is exclusively temporal and describes the behavior of two neuronal populations, one excitatory and one inhibitory, embedded in neural tissue. It is more realistic than the Ising model, which only considers the equivalent of excitation. In the Wilson-Cowan model, population properties are derived from assumed statistics of the neurons [29]. Each population receives input from itself, the other population, and external sources. The input to a population affects the neurons of that population that are available (the inactive proportion of the population). The model is nonlinear because the incoming activity of a population is related to its output through a nonlinear sigmoid function *S*. In a second version of the model, which will not be further discussed here, Wilson and Cowan [30] extended their model to include the spatial component as well. Instead of formulating the model from underlying statistical properties, a coarse-grained version of the model can be derived directly from the diagram in Figure 9. Two populations, the excitatory and the inhibitory, are coupled with strengths c1–c4. The external population affects the excitatory and inhibitory networks with activities P and Q, respectively. If we denote excitatory activity by E and inhibitory activity by I, the excitatory population in Figure 9 receives input that can be quantified as c1E − c2I + P. This input is converted to an output expression via a sigmoid function S. In most later applications of their approach, such a sigmoid curve is employed to represent the conversion from membrane potential to spike rate and/or vice versa. In Wilson and Cowan’s approach, the sigmoid curve represents the cumulative distribution of the probability that a neuron will fire at a given input, that is, at a given distance between membrane potential and firing threshold.
Further, assuming that the unavailable part of the population, that is, the active plus refractory portion, is proportional (by a fraction r) to the activity level E, the receptive population (the available portion of the excitatory population) is 1 − rE.

In Wilson and Cowan [29], the authors use a value slightly less than unity for this maximum, reflecting that the maximum value of the sigmoid function will be a little less than one (see pages 9-10 in the original paper); denoting it by k, the available population becomes k − rE. The formalism for population activity must also describe spontaneous decay proportional to *E*, reflecting that a nonstimulated group of neurons converges to a base level of activity (zero, reflecting a low level of spontaneous activity in this model). Finally, all changes are weighted by the time constant τ of the neuronal population. Putting this all together, the coarse-grained equation for excitatory activity becomes τ dE/dt = −E + (k − rE) S(c1E − c2I + P).
A similar approach for the inhibitory population, with its own sigmoid S, fraction r, maximum k, and time constant τ, yields τ dI/dt = −I + (k − rI) S(c3E − c4I + Q).
In their 1972 paper [29], Wilson and Cowan derive these relationships from first principles followed by a coarse-graining procedure (their equations (11) and (12)). The two-dimensional nonlinear Wilson-Cowan model exhibits a variety of behaviors, including steady activity, oscillatory activity with damping, and limit cycles (Figure 10).
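A minimal numerical sketch of the coupled pair of equations (Euler integration). The constants follow the limit-cycle parameter set commonly quoted from the 1972 paper, but they should be checked against the original before reuse; k is set to 1 here for simplicity.

```python
import numpy as np

def wilson_cowan(P=1.25, Q=0.0, t_end=100.0, dt=0.01):
    """Coarse-grained Wilson-Cowan pair (a sketch; constants are the commonly
    quoted limit-cycle set, with k = 1, and are illustrative here)."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
    a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
    r_e = r_i = 1.0
    tau_e = tau_i = 1.0

    def S(x, a, th):
        # logistic response shifted so that S(0) = 0
        return 1/(1 + np.exp(-a*(x - th))) - 1/(1 + np.exp(a*th))

    n = int(t_end/dt)
    E = np.zeros(n); I = np.zeros(n)
    for k in range(1, n):
        e, i = E[k-1], I[k-1]
        dE = (-e + (1 - r_e*e) * S(c1*e - c2*i + P, a_e, th_e)) / tau_e
        dI = (-i + (1 - r_i*i) * S(c3*e - c4*i + Q, a_i, th_i)) / tau_i
        E[k] = e + dt*dE
        I[k] = i + dt*dI
    return E, I

E, I = wilson_cowan()
```

Varying the external drive P moves the system between steady states, damped oscillations, and sustained limit-cycle activity.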

###### 4.2.1. Linearization of the Wilson-Cowan Equations

To study resonance phenomena, we will consider the effect on *E* and *I* of small sinusoidal perturbations (e.g., evoked by electrical stimulation) close to an equilibrium state. When stimulating electrically in the extracellular space and assuming a similar spatial distribution of excitatory and inhibitory cells, it is reasonable to assume that both populations receive the same stimulus current; thus the external inputs P and Q are considered equal. Further simplifying the sigmoid curves to lines with slopes s_e and s_i, the linearized versions of (35) and (36) become
Rewriting these results and using and , we get
We now further linearize and consider small perturbations around the equilibrium indicated by
The partial derivatives determined at equilibrium are:
Combining these expressions, substituting variables, and making the perturbation sinusoidal (with a small amplitude), we can simplify the notation
These two 1st order differential equations can also be written as a single 2nd order one. Taking the derivative of (42) and substituting the expression in (41) gives
Using (41), we can substitute once more, and we get the standard expression for a 2nd order differential equation with a driving force
This equation is again of the same form as the equation that governs a parallel RCL circuit as shown in Figure 3. The 2nd-order ODE governing this RCL circuit is (26). This equation is also very similar to the network equation by Jirsa and Haken [45] in which the spatial derivative is set to zero (Section 4.5). Thus the linearized network model is capable of resonance behavior, described in Section 2.1.4. Since we deal with a linearized version of a nonlinear system, unstable solutions of the linearized case may relate to bifurcations in the full nonlinear version of the model (see Section 6.1 for further discussion).
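The resonance behavior of the linearized system can be illustrated with the steady-state amplitude of a generic damped, driven 2nd-order equation of the same form; the natural frequency ω0, the damping ratio ζ, and the drive F below are arbitrary illustrative values, not quantities from the text.

```python
import numpy as np

def amplitude(omega, omega0=10.0, zeta=0.1, F=1.0):
    """Steady-state amplitude of the damped driven 2nd-order system
    x'' + 2*zeta*omega0*x' + omega0**2 * x = F*sin(omega*t),
    the generic form shared by the linearized E-I pair and the RCL circuit."""
    return F / np.sqrt((omega0**2 - omega**2)**2 + (2*zeta*omega0*omega)**2)

w = np.linspace(0.1, 30.0, 1000)
A = amplitude(w)
w_peak = w[np.argmax(A)]
```

For light damping the response peaks just below ω0, the familiar resonance curve; for heavy damping the peak disappears, mirroring the transition from oscillatory to nonoscillatory network behavior.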

##### 4.3. Models Inspired by the Wilson-Cowan Approach

The model described by Lopes da Silva et al. [35] is a model of the thalamus and one of the first inspired by the Wilson-Cowan approach. A similar treatment of the mesoscopic model, with altered synaptic function, was given by van Rotterdam et al. [36]. The base model describes a thalamic network; however, the populations are identical to the ones in the cortical Wilson-Cowan model: one excitatory and one inhibitory cell group. Similar to the Wilson-Cowan approach, they assume an invertible function *f* that translates the membrane potential (i.e., the input) of the excitatory and inhibitory populations into a rate of action potentials for each population (i.e., the output). For the excitatory population we then get E = f(V) and V = f⁻¹(E), where f and f⁻¹ are the function and its inverse that translate membrane potential into spike rate and vice versa. Both this function and its equivalent for the inhibitory population are basically the static nonlinearities in their model. Each nonlinear component is subsequently approximated by a Taylor series about the mean value of its argument. Lopes da Silva et al. then simplify notation by considering only the deviation from the mean. This is done for all variables in the model, and next they linearize about the mean by neglecting the 2nd and higher order terms in the Taylor series, so that the original equations can be approximated by linear relationships. External input, or its demeaned version, perturbs the system (top left in Figure 11(a)). For the excitatory component in the diagram, in the Laplace domain, we obtain an input-output relationship involving the corresponding Laplace transform pairs; here h_e(t) and H_e(s) are the unit impulse response and transfer function of the excitatory network.

In a similar fashion we include the effect of the inhibition in the diagram in Figure 11(a) and get
Following the same procedure for the linearized inhibitory component (bottom part in Figure 11(a)) gives
If we substitute the expression for *I*(*s*) (obtained from (46)) into (45), we obtain the following expression for *E*(*s*):
Then the authors use a linear system to model synaptic input, with unit impulse responses (UIRs) h_e and h_i. In Lopes da Silva et al. [35], the synaptic UIR is a dual exponential, A[exp(−at) − exp(−bt)], and in van Rotterdam et al. [36] it is an alpha function, A·a·t·exp(−at); in both cases, these functions are only considered for t ≥ 0. The associated Laplace and Fourier transforms are, of course, slightly different. Here we follow the paper of Lopes da Silva et al. [35] and use the dual exponential unit impulse response for both the excitatory and inhibitory populations. These functions and their Laplace transforms are
We can substitute this into (47) and simplify
with the combined constants as defined by this substitution.

We can substitute s by iω in (49) to obtain an expression for the spectrum of E. The spectrum can be further quantified by assuming that the thalamic input is white noise, so that its spectrum is constant. The numerical values used for Figure 11(b) are the same as the ones in the example given in Lopes da Silva et al. [35]. It can be seen that the spectrum peaks around 10 Hz, the alpha rhythm frequently encountered in the EEG of subjects with eyes closed.
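The shape of such a spectrum can be reproduced with a simplified closed-loop transfer function: an excitatory dual-exponential stage with negative feedback through an identical inhibitory stage, driven by white noise. The rate constants and loop gain below are illustrative choices tuned to place the resonance near 10 Hz; they are not the parameter values of [35].

```python
import numpy as np

def closed_loop_spectrum(freqs, a=36.0, b=108.0, K=1.2e4):
    """|G(i*omega)|^2 for a lumped excitatory-inhibitory feedback loop with
    dual-exponential synaptic responses h(t) = exp(-a*t) - exp(-b*t) in both
    branches. Constants a, b, K are illustrative, not from [35]."""
    s = 2j * np.pi * freqs
    He = (b - a) / ((s + a) * (s + b))   # transfer function of the dual exponential
    Hi = He                              # same kinetics assumed for inhibition
    G = He / (1 + K * He * Hi)           # negative feedback around the loop
    return np.abs(G) ** 2

f = np.linspace(0.1, 40.0, 2000)
power = closed_loop_spectrum(f)          # flat (white) input assumed
f_peak = f[np.argmax(power)]
```

Increasing the loop gain K sharpens the resonance; pushing it past the stability limit of the loop would correspond to self-sustained oscillation rather than a filtered-noise peak.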

###### 4.3.1. Neural Mass Models

The Wilson-Cowan equations [29] and the Lopes da Silva et al. approach [35] inspired many other models. Instead of considering individual neurons, these models determine the activity of coupled populations of nerve cells, each population representing a neural mass. A very basic model, using two populations representing a coupled excitatory and inhibitory network, is described by Dayan and Abbott [69, pages 265–270]. The rate of change of each population includes a decay proportional to its activity level and a term proportional to the input it receives from itself, the other population, and an external source. This results in a set of two coupled differential equations. Although the model seems linear, it is actually nonlinear because the population activities are limited to positive values only; this nonlinearity makes the model capable of generating limit cycles.

Jansen’s neural mass model [37] is applied by Grimbert and Faugeras [38] to create a cortical network unit (Figure 12(a)). It is a temporal model of neuronal populations inspired by the Lopes da Silva et al. [35] approach, where each population is represented by a Wiener cascade ([70], Chapters 2–5): a dynamic linear component, modeling the synaptic conversion process, and a static nonlinear component that represents the conversion of membrane potential to spike rate (Figure 12(b)). As can be seen in Figure 12(b), the input spike rate is first convolved with the unit impulse response (UIR) of the linear dynamic component; the result, the population membrane potential, is subsequently converted with a static sigmoid to produce the output spike rate. The structure of the model by Grimbert and Faugeras [38] is similar to the canonical cortical model of Douglas and Martin [71] in the sense that it also includes two excitatory populations, one inhibitory population, and an external input. The development of the equations that govern the activities is straightforward. First we use the UIR to write a convolution expression. Again, the synapse is considered a linear system, and in this model the synaptic impulse response function is defined as an alpha function (as in [36]):
in which A and a are population-specific constants. The Laplace transform of this unit impulse response is H(s) = Aa/(s + a)². In the time/Laplace domain we may link the input x(t)/transformed input X(s) to the output y(t)/transformed output Y(s) of any linear system by the transform pair y(t) = h(t) ⊗ x(t) ↔ Y(s) = H(s)X(s). Second, we write the convolution relationship as an ODE. Substituting the above expression for H(s) in the Laplace-transformed relationship, we get Y(s) = Aa X(s)/(s + a)². This result can be rewritten as (s² + 2as + a²)Y(s) = Aa X(s). Recall that sY(s) and s²Y(s) denote the Laplace transforms of the 1st and 2nd derivatives of y in the time domain; using this, we can transform the above expression into a 2nd order ODE for the linear synaptic process, d²y/dt² = Aa·x − 2a·dy/dt − a²y, which can be written as two 1st order ODEs
Now we add the static nonlinearity employed for the membrane potential to pulse rate conversion, a sigmoid curve (Figure 12(b)). In this study, it is defined as a Boltzmann function with maximum rate 2e0, slope r, and a 50% level threshold v0: S(v) = 2e0/[1 + exp(r(v0 − v))]. This completes the formalism for one population, a single Wiener cascade in Figure 12(b). The pair of ODEs plus the static nonlinearity can be defined for each of the three populations in the neural mass model (two excitatory and one inhibitory, Figure 12(a)). This means we will have a pair of 1st order differential equations per population, generating a total of six equations (Equation (3) in [38])
The variables y0–y2 denote the membrane potentials (with y3–y5 their time derivatives), and A, a, B, b are the constants in the population-specific alpha functions. Note that in these equations, some outputs and sigmoid functions are weighted by connectivity strengths C1–C4. The focus of the paper is on the variable *y* = y1 − y2, the aggregate membrane potential of the main population, which is thought to be proportional to the local field potential. A simulation using the standard parameter values given in [38] shows a signal that is similar to the EEG alpha rhythm (Figure 12(a)).
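A compact Euler-integration sketch of the six equations, using the standard Jansen-Rit constants as analyzed in [38]; the constant drive p = 200 and the integration settings are illustrative choices.

```python
import numpy as np

def jansen_rit(p=200.0, t_end=5.0, dt=1e-4):
    """Jansen-Rit neural mass (standard constant set as analyzed by Grimbert
    and Faugeras [38]); the constant drive p is an illustrative choice."""
    A, B = 3.25, 22.0              # excitatory/inhibitory synaptic gains (mV)
    a, b = 100.0, 50.0             # inverse synaptic time constants (1/s)
    C = 135.0
    C1, C2, C3, C4 = C, 0.8*C, 0.25*C, 0.25*C
    e0, v0, r = 2.5, 6.0, 0.56     # sigmoid: max rate 2*e0, threshold v0, slope r

    def S(v):
        return 2*e0 / (1 + np.exp(r*(v0 - v)))

    n = int(t_end/dt)
    y = np.zeros(6)                # y0..y2 and their derivatives y3..y5
    out = np.empty(n)
    for k in range(n):
        y0, y1, y2, y3, y4, y5 = y
        dy = np.array([
            y3, y4, y5,
            A*a*S(y1 - y2) - 2*a*y3 - a*a*y0,
            A*a*(p + C2*S(C1*y0)) - 2*a*y4 - a*a*y1,
            B*b*C4*S(C3*y0) - 2*b*y5 - b*b*y2,
        ])
        y = y + dt*dy
        out[k] = y[1] - y[2]       # aggregate membrane potential ~ local field
    return out

lfp = jansen_rit()
```

For drives in the oscillatory range of the bifurcation diagram in [38], the output settles into an alpha-band limit cycle.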

###### 4.3.2. Modeling the Olfactory System

The models created by Freeman in the 1980s used a combined network modeling and experimental approach to study the electrical activity of the rabbit olfactory system. Although Freeman’s approach may not have been directly inspired by the Wilson-Cowan equations and/or the models by Lopes da Silva and co-workers, the fundamental building blocks of his model are neural masses. The appeal of Freeman’s approach is that it is strongly based on the anatomy of the olfactory system and that he relates modeling results to electrophysiological experiments. Compared to the previous models, a unique addition is that it contains latencies L1–L4, modeling the conduction delays occurring between populations [39]. Differential equations without delays need initial values, whereas equations with delays require initial functions to determine the delayed response of the system. Equations with delays can exhibit behavior that cannot be displayed by equations without delay of the same order. Delays in the nervous system occur because of the finite conduction velocity of nerve fiber activity. In addition to Freeman’s model, several others have explored the role of these delays or included delays in their formalism (e.g., [65, 66, 72–74]).

In his models, Freeman describes different population levels. Within the most basic population type, which is defined as K0, he includes pulse-to-wave conversion, linear spatiotemporal integration, wave-to-pulse conversion, and conduction of the output. This basic population is modeled with a 2nd order linear ODE coupled to a nonlinear static sigmoid function, the Wiener cascade (Figure 12(b)). In general, Freeman [39] (and many other papers from Freeman and co-workers) defines the following neuronal population levels (Figure 12(c)).

(1) K0: a subset of noninteracting neurons, either all excitatory (K0e) or all inhibitory (K0i). The smallest unit in Figure 12(c) signifies a K0 subset. In each K0 subset *j*, the wave variable v_j is governed by a 2nd order ODE with rate constants a and b. For the output there is a sigmoid function to convert the membrane potential to pulse rate, characterized by constants determining its maximum rate, slope, and threshold.

(2) KI: coupled K0 networks create the KI level. In Freeman’s terminology, two K0e modules give a KIe module and a pair of K0i modules gives one KIi (Figure 12(c)). For each population, the coupling is governed by a driving function representing the incoming pulses originating from the connected populations. As shown above, the dynamics of the system’s basic K0 population is described by a 2nd order differential equation with a driving term. If we Laplace transform the ODE while assuming that the driving force is a unit impulse, we obtain the transfer function of the population. The inverse transform of the transfer function, the UIR, is the dual exponential exp(−at) − exp(−bt) (up to a scaling constant), also used in other models such as Lopes da Silva et al. [35].

(3) KII: this type of population arises when a KIe and a KIi set are coupled (Figure 12(c)). The distance between KII populations can be large enough that conduction delays cannot be neglected. These delays are indicated by the latencies in Figure 12(c).

(4) KIII: when KII sets are interconnected, a KIII set is created. Note that there is just one such set in Figure 12(c). In Freeman [39] there are three coupled KII sets: the olfactory bulb (OB), the anterior olfactory nucleus (AON), and the prepyriform cortex (PC). The connections in this set are longer range, so that lags (latencies) were incorporated. These latencies are included in his expressions of the driving force by a backward counter *J*. For example, the effect of a subpopulation from AON on one in OB is included as
This term represents the weighted sum of the AON population’s pulse values in the past 14–22 steps. Note that the coefficient is divided by 9, the number of backward steps that are included in the sum.
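The K0 building block described under (1) and (2), viewed as a Wiener cascade, can be sketched as a dual-exponential linear stage followed by a static sigmoid. The rate constants below are of the order Freeman reported, but all values are illustrative, and a generic logistic sigmoid stands in for Freeman's asymmetric one.

```python
import numpy as np

def k0_cascade(pulse_times, t_end=0.5, dt=1e-4, a=220.0, b=720.0,
               gain=5.0, q_max=5.0, slope=1.0, theta=1.0):
    """One K0 population as a Wiener cascade: a linear 2nd-order stage whose
    unit impulse response is the dual exponential exp(-a*t) - exp(-b*t),
    followed by a static sigmoid for wave-to-pulse conversion.
    All constants are illustrative."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    v = np.zeros(n)                    # wave variable (membrane potential)
    for ts in pulse_times:             # superpose one UIR per input pulse
        d = t - ts
        v += gain * np.where(d >= 0, np.exp(-a * d) - np.exp(-b * d), 0.0)
    # generic logistic sigmoid standing in for Freeman's asymmetric curve
    p = q_max / (1.0 + np.exp(-slope * (v - theta)))
    return t, v, p

t, v, p = k0_cascade([0.05, 0.1])
```

Chaining such cascades with coupling terms (KI, KII) and delayed driving terms like the backward-counter sum above reproduces the hierarchy of Figure 12(c).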

Simulations of Freeman’s model were implemented in Matlab and typical results are depicted in the bottom panel in Figure 12(c). To show that the system is initially quiet, we deliver a first stimulus a little after onset of the simulation and to demonstrate that the response of the active intact network is slightly different, we deliver another stimulus at about the halfway point of the simulation (arrows in Figure 12(c)).

##### 4.4. Model for the Electroencephalogram

The electrical signal that can be recorded from the scalp, the electroencephalogram (EEG), reflects the currents generated by the brain’s nerve cells. If we ignore electrical signals from other, more remote sources (e.g., muscle, heart) and nonbiological artifacts, we can assume that the EEG is the weighted sum of the activity of the underlying neural network, in which the weights are determined by the geometry. The EEG signal is used both in research and clinical settings. Because a single EEG signal includes the activity of millions of nerve cells, the relationship between smaller networks of the brain and the EEG signal is not necessarily a trivial one. Therefore this relationship has been the target of many modeling studies; the framework developed by physicist Paul Nunez is an example. Because it is generally thought that the slower synaptic potentials, especially those of the neocortical pyramidal cells, contribute most to the EEG signals on the surface of cortex and scalp, Nunez’ model focuses on describing synaptic activity across cortex (e.g., [31, 32]). The spatiotemporal model of cortical activity is based on the schematic of neocortex shown in Figure 13. This diagram shows the effect of the action potential firing activity *g* produced by the left cube on the synaptic activity *h* of the right cube. The synaptic activity has subscripts *e* and *i* for the excitatory and inhibitory synapses, respectively. Both activity variables *g* and *h* are functions of location, x′ for the left cube and x for the right cube, and of time *t*. The remaining variables in Figure 13 are *v* for the conduction velocity and the external, subcortical input. Consequently, the delay for action potentials generated in the left cube arriving at the right cube is |x − x′|/v; therefore, the action potential firing function from the left cube arriving at the right cube is indicated as g(x′, t − |x − x′|/v). Furthermore, the connectivity within the cortex is governed by connectivity functions for the excitatory and inhibitory connections, respectively.
These functions also depend on the locations x and x′ and on the fiber system (with its conduction velocity v). The equation that describes the excitatory synaptic activity as a function of the input to the right cube can now be formulated as
For the inhibitory synaptic activity a simpler expression can be used, since there are only local inhibitory fibers, all with similar conduction velocity. Further, because of the local character of inhibition, one can neglect the delays for the inhibitory fibers; that is, the action potential firing function is now simply g(x′, t). For the inhibitory expression, we get

The external input represents subcortical input, which is assumed constant for a given state. Because the physiological basis for this input is not obvious, Nunez dropped this term in later versions of the model. In the following, it is not very important since its effect vanishes when linearizing the expression. Note that in both (54) and (55) the connectivity functions and/or the activity function *g* must contain a component that translates the action potential rate into a level of synaptic activity. In his development, Nunez assumes this component to be linear by using a simple gain/attenuation factor. The same can be said for the external input functions: they implicitly include a conversion from action potential rate to synaptic activity, and (again) if this relationship is linear, one can use a simple factor. The expressions in (54) and (55) relate spike input to synaptic activity (pulse-to-wave conversion); of course, there is also the effect of the synaptic activity on the spike output (wave-to-pulse conversion). The nonlinear dependence of the action potential rate function *g* on both the excitatory and inhibitory synaptic activities is commonly modeled by a sigmoid function (e.g., [32]). Nunez linearizes this relationship about an assumed fixed state. If we consider a small change of the action potential rate around this state and relate it to small changes in the synaptic activities, we obtain a linear relationship between the perturbations. If we now use (54) to write an expression for the perturbation of the excitatory synaptic activity, we get
with the changes of notation indicated by the horizontal curly brackets, we obtain (Equation (11.5) in [32]):
Following the same procedure for (55) gives (Nunez’ Equation (11.6))
A strength of Nunez’ model is that it acknowledges different conduction velocities; a weakness is that it linearizes the relationship between synaptic activity and action potential rate, and vice versa.

##### 4.5. A Field Equation for the EEG

An excellent summary of macroscopic modeling is provided in Jirsa and Haken [45]. These authors integrate temporal and spatial aspects, the firing rate models first described by Wilson and Cowan [29, 30], and the synaptic fields described by Nunez (e.g., [31, 32]). Their approach is straightforward. First they develop expressions for the synaptic and action potential activities. Then they simplify by zooming out, determine the expression for the synaptic field, and write that expression in the form of a convolution operation. Finally, they take the Fourier transform of the convolution and transform this result back into the spatiotemporal domain as a partial differential equation for the synaptic field of the excitatory neurons. Like Nunez (Section 4.4), Jirsa and Haken assume that this field is representative of the EEG.
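The convolution-to-Fourier step the derivation relies on can be illustrated numerically. The kernel and activity profile below are arbitrary examples (not the model's actual parameters); the point is only that a spatial convolution with a connectivity kernel becomes a plain product of transforms:

```python
import numpy as np

# Sketch of the Fourier step in the Jirsa-Haken derivation: a spatial
# convolution with a connectivity kernel equals a product in Fourier
# space.  Kernel and activity profile are arbitrary illustrations.
n = 256
x = np.linspace(-10.0, 10.0, n, endpoint=False)
dx = x[1] - x[0]

sigma = 1.0
kernel = np.exp(-np.abs(x) / sigma) / (2.0 * sigma)  # exponential connectivity
signal = np.exp(-x**2)                               # example activity profile

# Circular convolution evaluated directly on the grid...
conv_direct = np.array(
    [np.sum(signal * np.roll(kernel[::-1], k + 1)) for k in range(n)]
) * dx

# ...and via the convolution theorem: FFT, multiply, inverse FFT.
conv_fft = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel))) * dx
```

Both routes agree to machine precision; this equivalence is what licenses rewriting the synaptic-field convolution as an algebraic relation in Fourier space and transforming the result back as a differential equation.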

###### 4.5.1. Spatiotemporal Expression for the Synaptic Field

The excitatory (*e*) and inhibitory (*i*) populations each have a pulse rate, similar to the approach by Wilson and Cowan [29], and a wave (synaptic activity) variable, similar to the development by Nunez. In the above, the spatial component is three dimensional; in the remainder of the text, the spatial variable is reduced to a single coordinate. In the following text, we follow the derivation of the macroscopic field equation as it is visualized in Figure 14. In order to find the expression for the synaptic activity, we determine the synaptic component (top, Figure 14) while including the spike rate generator (bottom, Figure 14). The left part of Figure 14 shows the details, whereas the right part shows the zoomed-out version, representing a macroscopic view in which some detail is lost.

(1) *Synaptic Component*. The synaptic activity resulting from the incoming spike activity for the excitatory population is:
For the inhibitory population, a similar expression can be presented. In the equations for synaptic activity, the contribution from the pulse activity over the whole cortex is included: at each target location, this contribution is the converted cortical activity multiplied by the connection distribution between source and target. Synaptic activity is then computed by integrating over the cortical surface (Figure 14). The conversion function determines how incoming action potentials originating from excitatory cells are converted into excitatory synaptic activity (excitatory postsynaptic potentials). This is represented by a sigmoid function *S* that can be linearized over most of its trajectory using a constant slope.
Note that the delayed firing rate function describes all action potentials generated in the past that could have traveled to the target location from an originating location (Figure 14, top row) within a time interval determined by the distance between the two locations and the conduction velocity. Which of these actually arrive depends on the connectivity between the two locations; this is described by the connectivity distribution function.
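The structure just described can be written schematically as follows; the symbol names are illustrative (synaptic field, connectivity distribution, sigmoidal conversion, pulse rate, and conduction velocity), not a verbatim reproduction of the paper's equation:

```latex
% Excitatory synaptic activity at x: sum over all source locations X of
% connectivity f_e(x,X) times the sigmoidally converted pulse rate E,
% evaluated at the retarded time t - |x-X|/v_e.
\psi_e(x,t) \;=\; a_e \int_{\Gamma} f_e(x,X)\;
  S_e\!\left[\,E\!\left(X,\;t - \frac{|x-X|}{v_e}\right)\right] dX .
```

The retarded argument encodes the travel time of spikes from the source to the target, and the connectivity distribution weights which of those spikes actually arrive.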

(2) *Spike Rate Component*. The expression for the spike rate for the excitatory population *E* is similar to (59):
The output (wave-to-pulse) conversion becomes an expression that reflects contributions from (a) the excitatory synapses, (b) the inhibitory synapses, and (c) a term for the extracortical activity. The synaptic activities of the three wave sources are simply added, and a (static) nonlinearity converts the sum to a pulse rate. As in (60), there is a temporal delay term that corrects for conduction velocity. The expression for the wave-to-pulse conversion becomes

The inclusion of a conduction delay is correct when assuming active propagation of synaptic input; for passive propagation (at the speed of light) such a delay is not needed. For the remainder of the derivation by Jirsa and Haken [45], this point is irrelevant because these delays are neglected. The conversion function describes the contributions from all cortical areas to the target location, and the distribution function in (61) determines the strength of the connections.
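As a concrete sketch of such a static wave-to-pulse nonlinearity and of the linearization used later in the derivation; the function names, gain, and operating point below are assumptions for illustration, not the paper's parameters:

```python
import math

def wave_to_pulse(v, v0=0.0, gain=1.0, qmax=1.0):
    """Static sigmoid converting summed synaptic activity v to a pulse rate.
    Illustrative logistic form; v0 is the operating point, qmax the
    maximal rate."""
    return qmax / (1.0 + math.exp(-gain * (v - v0)))

def linearized(v, v0=0.0, gain=1.0, qmax=1.0):
    """First-order Taylor expansion of wave_to_pulse about v0."""
    slope = qmax * gain / 4.0  # derivative of the logistic at its midpoint
    return wave_to_pulse(v0, v0, gain, qmax) + slope * (v - v0)
```

Near the operating point, the sigmoid and its linear approximation are nearly indistinguishable, which is why replacing the nonlinearity by a constant slope is harmless for small deviations about a fixed state.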

###### 4.5.2. Zooming Out

In the model, cortical connectivity is described by normalized exponential distributions, that is, functions for which the area under the curve evaluates to unity. The advantage is that when we zoom out to a global overview of the cortex, these distributions can be replaced by delta functions. In the model, the short-range distributions are represented by delta functions because they are local; this procedure is depicted in Figure 14, bottom row. The distribution describing the excitatory connections over a longer range remains in the form of the exponential expression (Figure 14, top row).
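The zoom-out step can be checked numerically: a normalized exponential kernel with a small spatial range behaves like a delta function, so convolving it with a smooth activity profile approximately returns the profile. The kernel form (1/(2σ))·exp(−|x|/σ) and the profile below are illustrative choices:

```python
import numpy as np

# "Zooming out": as the connectivity range sigma shrinks relative to the
# scale of the activity profile, the normalized exponential kernel acts
# like a delta function and convolution reduces to the sifting property.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
profile = np.exp(-x**2 / 4.0)  # smooth example activity profile

def exp_kernel(x, sigma):
    """Normalized exponential connectivity distribution."""
    return np.exp(-np.abs(x) / sigma) / (2.0 * sigma)

errs = []
for sigma in (2.0, 0.5, 0.1):
    smoothed = np.convolve(profile, exp_kernel(x, sigma), mode="same") * dx
    err = np.max(np.abs(smoothed - profile))
    errs.append(err)
    print(f"sigma={sigma:4.1f}  max deviation from the original profile: {err:.4f}")
```

The deviation shrinks with the kernel range, which justifies replacing the local (short-range) distributions by delta functions while keeping the long-range excitatory distribution in exponential form.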

(1) *The Expression for the Excitatory Synaptic Field*. Now we use the above to formulate the expression for the synaptic activity, because this variable is considered proportional to the EEG. In this approach, we focus on solving for the excitatory component since this is considered the major contributor to the EEG signal. First we substitute the linearized version of (60) into (59):
In this expression, we substitute (61) after a change of integration variable. Subsequently we substitute the expression from (62), replacing the local distribution with a delta function:
Now we apply the sifting property to evaluate the second integral and get
For the inhibitory population we follow a similar procedure and get
If we now evaluate the integral in this expression, replacing the distribution function with a delta function so that we can use the sifting property, we obtain the result for the inhibitory synaptic activity. Subsequently, we follow the procedure by Jirsa and Haken [45], take only the linear part of the sigmoid (using a constant slope), and finally solve for the inhibitory synaptic activity:
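The sifting property invoked twice above is simply the defining action of the delta function on any smooth test function:

```latex
% Sifting property: the delta function collapses a spatial integral
% to the local value of the integrand.
\int_{\Gamma} \delta(X - x)\, g(X)\, dX \;=\; g(x).
```

Replacing a sharply localized connectivity distribution by a delta function therefore collapses the corresponding spatial integral to a purely local term, which is what turns the inhibitory contribution into an algebraic (rather than integral) expression.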
Substitution of (67) into (65) gives