Abstract

This paper provides an overview of different types of models for studying the activity of nerve cells and their networks, with a special emphasis on neural oscillations. One part describes neuronal models based on the Hodgkin and Huxley formalism, first described in the 1950s. I discuss how further simplifications of this formalism enable mathematical analysis of the process of neural excitability. The paper's second component focuses on network activity. Understanding network function is one of the important frontiers remaining in neuroscience. At present, experimental techniques can only provide global recordings or samples of the activity of the huge networks that form the nervous system. Models in neuroscience can therefore play a critical role by providing a framework for integration of necessarily incomplete datasets, thereby providing insight into the mechanisms of neural function. Network models can either explicitly contain individual network nodes that model the neurons, or they can be based on representations of compound population activity. The latter approach was pioneered by Wilson and Cowan in the 1970s. Finally, I provide an overview of, and discuss, how network models are employed in the study of neuronal network pathologies such as epilepsy.

1. Introduction

A fundamental and famous model in neuroscience was described by Hodgkin and Huxley in 1952 [1]. They developed a formalism for the dynamics of the membrane potential of the giant axon of the squid that describes measurements of sodium, potassium, and leakage currents. Their model can be represented by an equivalent electronic circuit of the axon's membrane in which a capacitor models the membrane's phospholipids and several voltage-dependent resistors represent its ion channels. Much later, after computer technology became readily available, their formalism was widely employed and extended to include other types of channels. In addition, it is used to create detailed cell models in which the cell is divided into a set of coupled compartments, each represented by a separate membrane model. In many studies these computational models are embedded in networks (e.g., [2–7]). In this approach, the individual nodes of the network and their connections are simulated with the purpose of finding the properties associated with emergent network activities such as the generation of synchronized bursts and oscillations. The enormous advantage of these models is that they are close to the experimental domain; that is, they may include recorded ion conductance values, the morphology of observed cell types, and known intercellular connectivity. In spite of the many parameters in these models, an overwhelming amount of detail is still missing. As a consequence, due to nonlinearities in neuronal function (e.g., membrane conductance dynamics, the coupling function between neurons), small inaccuracies in the model parameters may lead to large prediction errors of the model. In spite of this potential shortcoming, some models perform surprisingly well in predicting or mimicking experimental outcomes (e.g., [8]).

An additional problem with these detailed models is that, because of their complexity, they cannot be analyzed mathematically and may therefore not lead to deeper understanding of the processes they model. Further model reduction is required before mathematical analysis can be employed. For example, the Hodgkin and Huxley model is a four-dimensional nonlinear model (the dynamic variables are membrane potential $V$ and gating variables $m$, $h$, and $n$). Both FitzHugh [9] and Nagumo et al. [10], inspired by the nonlinear van der Pol oscillator [11], reduced the Hodgkin and Huxley formalism to one pair of dynamic variables (membrane potential $V$ and a recovery variable $W$) to gain insight into the dynamics of the excitable membrane. Later a similar simplified approach was published for the excitation in the barnacle giant muscle fiber by Morris and Lecar [12].

A further simplification for simulation of nerve cell activity is the leaky integrate-and-fire (IF) model [13]. The basic IF model describes a neuron's subthreshold behavior as a leaky integrator (a resistor and capacitor) and adds an ad hoc suprathreshold spike when a preset threshold is exceeded. In between the spike events, the IF circuit is a one-dimensional linear model for the cell's subthreshold activity. There are additions to this model: for instance, resonance properties are included by making the model two-dimensional, and a quadratic function has been proposed to add a nonlinear component that is capable of generating the spike's upward deflection [14]. More recently, Izhikevich [15] extended and combined these approaches into what he defines as the simple model of choice (SMC). In this model there is no threshold for the spike generator and only the downward deflection (reset) of the spike is generated in an ad hoc fashion.
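To make the IF mechanism concrete, here is a minimal sketch of a leaky integrate-and-fire neuron; the threshold, reset value, spike amplitude, and input step are illustrative choices rather than values from any particular study.

```matlab
% Leaky integrate-and-fire sketch: linear RC dynamics between spikes,
% with an ad hoc spike pasted in at threshold. Parameters are illustrative.
R = 10; C = 1;                          % membrane resistance and capacitance
Vth = 1; Vreset = 0; Vspike = 5;        % ad hoc threshold, reset, spike height
dt = 0.1; t = 0:dt:100;
I = 0.15*(t > 10);                      % step input current
V = zeros(size(t));
for k = 2:numel(t)
    if V(k-1) >= Vth
        V(k) = Vreset;                  % ad hoc reset after the spike
    else
        V(k) = V(k-1) + dt*(I(k) - V(k-1)/R)/C;   % leaky integration
        if V(k) >= Vth, V(k) = Vspike; end        % paste in the spike
    end
end
plot(t, V); xlabel('time (ms)'); ylabel('V (a.u.)');
```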

When modeling neuronal networks, one can start from the network nodes, that is, the neurons, or make direct assumptions at a higher organization level, that is, the population activity. In the first case, the bottom-up approach, one must decide how to model the nodes. Simple on/off switches are the most abstract models of neurons in networks. The underlying hypothesis of this approach is that, for some of the aggregate behavior, the network properties themselves are of principal interest while the details of the cellular properties (beyond having active and inactive states) are irrelevant [16–20]. Many network models that use the bottom-up approach include more complex representations for the nerve cells and their connections. These representations may vary from a simple IF model to compartmental models including biophysically realistic channels (e.g., [2–7, 21–27]). Recently, a network model of IF neurons was connected to an artificial, simulated eye and arm, thereby creating a system that is capable of replicating human behavior [28].

A different family of network models describes aggregate behavior of neuronal populations. Although the components of these models are derived from or inspired by neuronal function, they do not explicitly consist of a network of individual agents. This approach was pioneered by Wilson and Cowan [29, 30]. Their result is a nonlinear, two-dimensional model describing the dynamics of excitatory and inhibitory activity. The expressions for the population activity are based on assumptions about cellular statistics such as the distribution of neuronal excitability. The initial 1972 Wilson-Cowan model describes only the time-dependent dynamics of the neural populations, whereas their 1973 paper adds a spatial component to this model. In the same decade, Nunez proposed models of cortical networks to explain and describe the electroencephalogram (EEG) representing compound activity of millions of nerve cells [31–34]. This work focused on the synaptic activities in the cortical pyramidal cell population since they are considered to be the major contributors to the mammalian EEG. To describe cortical activity, Nunez included inhibitory and excitatory populations and used linearized relationships to describe their interactions. Since the early 1970s, many models have been inspired by the Wilson and Cowan approach and the models by Nunez. Examples are the models by Lopes da Silva et al. [35] and van Rotterdam et al. [36], the Jansen neural mass model [37, 38], and Freeman's models of the olfactory system [39, 40]. More recent examples are the models by Wendling and coworkers (e.g., [41]), Liley and Bojak [42], van Albada and Robinson [43], and Hindriks and van Putten [44]. A model developed by Jirsa and Haken [45] produces a field equation for the network activity generated by a cortical surface and can be considered a synthesis of the different population model approaches. In part due to increased access to computer technology facilitating the analysis of nonlinear dynamics, the original Wilson-Cowan approach and its derived models are being "rediscovered" [46]. Because these models are deterministic, they fail to capture fluctuations. Stochastic approaches are capable of including higher order moments that characterize neural activity such as synaptic input and spike trains (e.g., [47, 48]). Recently, Buice and Cowan [49] used a statistical mechanics approach to include fluctuations in the population model.

Understanding the function of neuronal networks is a frontier in current neuroscience. Our lack of fundamental understanding of network function in the nervous system prevents us from linking neuronal activity to higher level physiologic and pathologic behavior. Although compound activity of huge networks can be captured by the EEG or possibly by functional magnetic resonance imaging (fMRI), current experimental techniques in neuroscience cannot record the individual activity of cells forming larger networks (>1000 neurons) with sufficient resolution in the time domain (<1 ms). Due to the lack of such techniques, there is a real need for theoretical and computational models in neuroscience because they can provide a framework for the integration of experimental data. In addition, models provide an efficient tool for generating and testing series of hypotheses; to cite Abbott [50], “A skillful theoretician can formulate, explore, and often reject models at a pace that no experimental program can match.”

The purpose of this paper is to provide a summary of different models for neural activity with a focus on oscillatory patterns, to discuss the relationship between modeling approaches, and to place them in a logical and a historical context. Section 2 summarizes the Hodgkin-Huxley formalism and its simplifications. The next part, Section 3, describes the network models with nodes that are based on the models for the individual neurons, followed by Section 4 about population cell models without explicit individual neurons. Section 5 discusses a procedure and effects of adding a stochastic component. The application of neuronal models in studying pathology such as epilepsy is evaluated in Section 6. The final section contains the concluding remarks.

2. Cellular Models

The Hodgkin and Huxley (H&H) equations [1] describe the nonlinear dynamics of the electrical properties of an excitable membrane, specifically of the giant nerve fiber that innervates the muscles involved in water jet propulsion in the squid. The inside and outside of the nerve fiber are separated by a biomembrane (Figure 1(a)). The principal component of the biomembrane is a double layer of phospholipids that serves as an insulator between the conductive media in- and outside of the fiber; this component can be modeled as a capacitor ($C$ in Figure 1(a)). Ion pumps in the membrane create gradients of different ion species, such as sodium and potassium, across the membrane, and ion channels allow passive leak of the ions.

Because ions are charged, an ion pump creates a potential difference and can therefore be modeled as a potential source ($E$ in Figure 1(a)). For individual ion species, these potentials are known as Nernst potentials and can be computed by the Nernst equation: for example, at 25°C, the Nernst potential for the positively charged sodium and potassium ions is determined from their intra- and extracellular concentrations ($[C]_i$ and $[C]_o$, resp.) as $E = 59\log_{10}([C]_o/[C]_i)$ mV. An ion channel conducts one or more ion species and may be modeled as a resistor ($R$ in Figure 1(a)). The resistances of the sodium and potassium channels depend on the potential across the biomembrane. This membrane potential dependence is modeled by the H&H equations using activation and inactivation processes that govern the degree of opening of the channel and consequently its conductance. It is of interest to note that Hodgkin and Huxley proposed their formalism in the 1950s as an empirical model, and that molecular evidence for the existence of the different membrane components came decades later!

In the H&H formalism, the potential difference between the fiber's in- and outside, the membrane potential ($V$), is related to activation and inactivation parameters of sodium and potassium ion channels ($m$, $h$, $n$). The differential equations that H&H used to describe the dynamics are

\[ C\frac{dV}{dt} + I_{Na} + I_K + I_L = I \tag{1} \]

or

\[ C\frac{dV}{dt} + \bar{g}_{Na}m^3h(V - E_{Na}) + \bar{g}_Kn^4(V - E_K) + g_L(V - E_L) = I. \tag{2} \]

The parameters' dynamics are governed by three first-order differential equations

\[ \frac{dm}{dt} = \frac{m_\infty(V) - m}{\tau_m}, \tag{3} \]
\[ \frac{dh}{dt} = \frac{h_\infty(V) - h}{\tau_h}, \tag{4} \]
\[ \frac{dn}{dt} = \frac{n_\infty(V) - n}{\tau_n}. \tag{5} \]

Here, the variables $\tau_m$, $\tau_h$, and $\tau_n$ are the time constants of parameters $m$, $h$, and $n$. Equation (2) is based on Kirchhoff's law stating that the sum of all membrane currents is equal to zero. An equivalent membrane circuit is depicted in Figure 1(b). Based on the equation $q = CV$ (with $q$—charge of the capacitor, $C$—its capacitance, and $V$—potential across the capacitor), we determine that the capacitive current equals $C\,dV/dt$. Further we determine that each ion current is described by Ohm's law $I_k = g_k(V - E_k)$, with $C$—membrane capacitance; $V$—membrane potential; $g_k$—conductance of ion species $k$, which is represented in (2) by the product of the maximum conductance $\bar{g}_k$ and activation and inactivation parameters; $E_k$—equilibrium potential for ion species $k$. Equations (3)-(4) describe the dynamics for the activation ($m$) and inactivation ($h$) parameters of the sodium channel. Equation (5) describes the dynamics of the activation ($n$) of potassium. In the following, we often represent an activation/inactivation variable generically by $x$. It can be seen in (3)–(5) that the parameters $x_\infty$ and $\tau_x$ depend on the membrane potential $V$. These $V$-dependent relationships are all nonlinear, as are the activation parameters $m$ and $n$, since H&H determined experimentally that the model fits best if they used $m^3h$ and $n^4$ in (2). The dynamics of the parameters in (3)–(5) were originally presented by H&H in the form $dx/dt = \alpha_x(1 - x) - \beta_xx$, where $\alpha_x$ and $\beta_x$ are rate constants that are membrane potential (voltage) sensitive. These presentations can be linked to the one in (3)–(5) by $x_\infty = \alpha_x/(\alpha_x + \beta_x)$ and $\tau_x = 1/(\alpha_x + \beta_x)$. The voltage-dependent functions for $\alpha_x$ and $\beta_x$ were determined by H&H using the so-called voltage-clamp technique. This technique enables the experimenter to keep $V$ constant, the holding potential, while measuring membrane current. By repeating this measurement for a range of holding potentials, H&H determined the I-V relationship. This relationship was determined for sodium and potassium currents (a specific ion current can be determined by disabling other ion currents pharmacologically). In their model, H&H used these I-V relationships to determine $\alpha_x$ and $\beta_x$. As can be seen in (2), the total membrane current is proportional to the time derivative of the membrane potential $V$; therefore, the experimentally determined I-V relationship can be interpreted as the phase space for membrane potential dynamics (e.g., [15, 51]).
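As a concrete illustration of (1)–(5), the sketch below integrates the model with the rate functions and parameter values from the original 1952 paper (membrane potential in mV relative to rest); the forward Euler scheme, step size, and stimulus amplitude are simplifications chosen for brevity.

```matlab
% Hodgkin-Huxley sketch with the original 1952 rate constants.
% V is in mV relative to rest; conductances in mS/cm^2, C in uF/cm^2.
C = 1; gNa = 120; gK = 36; gL = 0.3; ENa = 115; EK = -12; EL = 10.6;
aM = @(V) 0.1*(25-V)./(exp((25-V)/10)-1);  bM = @(V) 4*exp(-V/18);
aH = @(V) 0.07*exp(-V/20);                 bH = @(V) 1./(exp((30-V)/10)+1);
aN = @(V) 0.01*(10-V)./(exp((10-V)/10)-1); bN = @(V) 0.125*exp(-V/80);
dt = 0.01; t = 0:dt:50;                    % time in ms
V = 0;                                     % start at rest
m = aM(0)/(aM(0)+bM(0)); h = aH(0)/(aH(0)+bH(0)); n = aN(0)/(aN(0)+bN(0));
I = 10*(t > 5);                            % step current, uA/cm^2
Vtrace = zeros(size(t));
for k = 1:numel(t)
    INa = gNa*m^3*h*(V-ENa); IK = gK*n^4*(V-EK); IL = gL*(V-EL);
    V = V + dt*(I(k) - INa - IK - IL)/C;   % Eq. (2), Kirchhoff's current law
    m = m + dt*(aM(V)*(1-m) - bM(V)*m);    % rate-constant form of Eqs. (3)-(5)
    h = h + dt*(aH(V)*(1-h) - bH(V)*h);
    n = n + dt*(aN(V)*(1-n) - bN(V)*n);
    Vtrace(k) = V;
end
plot(t, Vtrace); xlabel('time (ms)'); ylabel('V (mV, relative to rest)');
```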

2.1. Linearization of the Hodgkin and Huxley Equations

A linearized version of the H&H equations is useful to examine subthreshold oscillatory properties of the biomembrane around a resting or holding potential (say $V_0$). The mathematical details of the following can be found in Appendix A. Here I describe the details of the linearization procedure so that the physiological parameters in the full model can be related to its linearized version; readers not interested in this aspect can skip to Section 2.1.3.

In order to linearize the nonlinear equations, we rewrite (2)–(5) in a more compact form as $C\,dV/dt = I - \sum_kI_k(V, m, h, n)$ and $dx/dt = (x_\infty(V) - x)/\tau_x(V)$, with $x$ representing an activation/inactivation variable $m$, $h$, or $n$. Then we linearize about an equilibrium potential $V_0$. We define small variations in the parameters about this equilibrium as $\delta V = V - V_0$, $\delta m = m - m_0$, $\delta h = h - h_0$, $\delta n = n - n_0$, $\delta I = I - I_0$. Using this, we obtain a linearized equation (2):

\[ C\frac{d\,\delta V}{dt} = \delta I - g_0\,\delta V - K_m\,\delta m - K_h\,\delta h - K_n\,\delta n, \tag{6} \]

with $g_0 = \bar{g}_{Na}m_0^3h_0 + \bar{g}_Kn_0^4 + g_L$, $K_m = 3\bar{g}_{Na}m_0^2h_0(V_0 - E_{Na})$, $K_h = \bar{g}_{Na}m_0^3(V_0 - E_{Na})$, and $K_n = 4\bar{g}_Kn_0^3(V_0 - E_K)$; that is, the partial derivatives of the total ionic current evaluated at the equilibrium.

For the gating parameters ((3)–(5)), using $\delta x = x - x_0$, we get the linearized expression

\[ \tau_x(V_0)\frac{d\,\delta x}{dt} = \frac{dx_\infty}{dV}\bigg|_{V_0}\delta V - \delta x. \tag{7} \]

In the literature, it is common to further simplify the notation of (6) and (7). For example in Richardson et al. [52], (6) and (7) are simplified to (Appendix A)

\[ C\frac{d\,\delta V}{dt} = \delta I - g_0\,\delta V - \sum_xg_xw_x, \qquad \tau_x\frac{dw_x}{dt} = \delta V - w_x, \tag{8} \]

where each gating variable is rescaled to a variable $w_x$ with the dimension of voltage and $g_x$ is the corresponding effective conductance.

2.1.1. Linearization of the Potassium Channel

Now we consider the potassium current and model small perturbations, and we show that we can model this by using a two-branch electrical circuit with resistors $R$, $r$, and inductance $L$ (Figure 2(a)). A small perturbation in the circuit with the two parallel branches is governed by

\[ \delta I = \left(\frac{1}{R} + \frac{1}{r + i\omega L}\right)\delta V. \tag{9} \]

Equation (9) is just Ohm's law ($\delta V = Z\,\delta I$) applied to the circuit in Figure 2(a). Here the term in parentheses is the admittance $Y$, the inverse of the impedance $Z$ that links the fluctuations in potential and current ($r$ and $L$ are in series and they are parallel to $R$). Now if we consider a single ion current, the potassium current, we show that we can write this in the same form as (9). (Experimentally, a single ion current can be measured by disabling all other currents pharmacologically.)

First we restate, from (2), that the potassium current is given by

\[ I_K = \bar{g}_Kn^4(V - E_K). \tag{10} \]

A small perturbation approximated by a linearization around potential $V_0$ is

\[ \delta I_K = g_{K0}\,\delta V + K_n\,\delta n \tag{11} \]

with $g_{K0} = \bar{g}_Kn_0^4$, and $K_n = 4\bar{g}_Kn_0^3(V_0 - E_K)$.

We can now use (7), substituting $n$ for $x$, to relate $\delta n$ to $\delta V$

\[ \delta n = \frac{dn_\infty/dV|_{V_0}}{1 + i\omega\tau_n}\,\delta V. \tag{12} \]

Substituting this result into (11) gives

\[ \delta I_K = \left(g_{K0} + \frac{K_n\,dn_\infty/dV|_{V_0}}{1 + i\omega\tau_n}\right)\delta V. \tag{13} \]

This result can be written as

\[ \delta I_K = \left(g_{K0} + \frac{1}{r_n + i\omega L_n}\right)\delta V \tag{14} \]

with $r_n = 1/(K_n\,dn_\infty/dV|_{V_0})$ and $L_n = \tau_nr_n$.

Now we can directly relate the linearized potassium channel to the circuit in Figure 2(a). Comparing (9) and (14), we can determine that both expressions are equivalent if $R = 1/g_{K0}$, $r = r_n$, and $L = L_n$.

Combining the above we find that

\[ r_n = \frac{1}{4\bar{g}_Kn_0^3(V_0 - E_K)\,dn_\infty/dV|_{V_0}}, \qquad L_n = \tau_nr_n. \tag{15} \]

Thus we can successfully model small perturbations in the potassium channel with the equivalent circuit in Figure 2(a). Note that $r_n$ is positive because $(V_0 - E_K)$ and $dn_\infty/dV|_{V_0}$ are both positive at physiological values of $V_0$. Therefore $r_n$ is positive and (because $\tau_n$ is also >0) $L_n$ is positive.

2.1.2. Linearization of the Sodium Channel

Now we apply the linearization to the inward current

\[ I_{Na} = \bar{g}_{Na}m^3h(V - E_{Na}). \tag{16} \]

A small perturbation approximated by a linearization around potential $V_0$ is

\[ \delta I_{Na} = g_{Na0}\,\delta V + K_m\,\delta m + K_h\,\delta h. \tag{17} \]

We simplify notation using: $g_{Na0} = \bar{g}_{Na}m_0^3h_0$, $K_m = 3\bar{g}_{Na}m_0^2h_0(V_0 - E_{Na})$, $K_h = \bar{g}_{Na}m_0^3(V_0 - E_{Na})$, $\delta m = \frac{dm_\infty/dV|_{V_0}}{1 + i\omega\tau_m}\,\delta V$, and $\delta h = \frac{dh_\infty/dV|_{V_0}}{1 + i\omega\tau_h}\,\delta V$ (the latter two from (7)).

From the three terms in (17), it is obvious that we need three branches in an equivalent electrical circuit for the sodium channel: one resistor and two branches each with a resistor and inductor, one for activation and one for inactivation (Figure 2(b)). From the circuit properties in Figure 2(b), we determine that a perturbation of the potential is governed by

\[ \delta I = \left(\frac{1}{R} + \frac{1}{r_1 + i\omega L_1} + \frac{1}{r_2 + i\omega L_2}\right)\delta V. \tag{18} \]

Rewriting (17) in the same form gives

\[ \delta I_{Na} = \left(g_{Na0} + \frac{1}{r_m + i\omega L_m} + \frac{1}{r_h + i\omega L_h}\right)\delta V. \tag{19} \]

We find that the expressions in (18) and (19) are equal if $R = 1/g_{Na0}$, $r_1 = r_m$, $L_1 = L_m$, $r_2 = r_h$, and $L_2 = L_h$, with $r_m = 1/(K_m\,dm_\infty/dV|_{V_0})$, $L_m = \tau_mr_m$, $r_h = 1/(K_h\,dh_\infty/dV|_{V_0})$, and $L_h = \tau_hr_h$.

Combining the above we find

\[ r_m = \frac{1}{3\bar{g}_{Na}m_0^2h_0(V_0 - E_{Na})\,dm_\infty/dV|_{V_0}}, \qquad L_m = \tau_mr_m \]

and

\[ r_h = \frac{1}{\bar{g}_{Na}m_0^3(V_0 - E_{Na})\,dh_\infty/dV|_{V_0}}, \qquad L_h = \tau_hr_h. \]

So we can successfully model small perturbations in the sodium channel with the equivalent circuit in Figure 2(b). The model can easily be evaluated by simulation; however, building the model using hardware components cannot be done as simply, because some of the conductance and inductance values are negative over the range we are interested in. Note that in the expressions above $r_m$ is negative because $dm_\infty/dV$ is positive and $(V_0 - E_{Na})$ is negative at physiological values of $V_0$. Therefore $r_m$ is negative and (because $\tau_m$ is >0) $L_m$ is also negative. In contrast, note that $r_h$ is positive because both $(V_0 - E_{Na}) < 0$ and $dh_\infty/dV < 0$ at $V_0$. Therefore $r_h$ is positive and (because $\tau_h$ is >0) $L_h$ is also positive. Mauro et al. [53] and Koch [54] proposed to replace the negative inductance by a positive capacitance and correct for negative resistance in one of the other branches of the circuit. They show that a branch with negative conductance and inductance parallel to a true (positive valued) resistor can be replaced with an RC circuit without changing the effective impedance of the two branches. The equivalent diagrams of the circuitry are shown in Figure 2(c). In the original circuit (Figure 2(c1)) with the positive resistor $R$, negative resistance $r$, and negative inductor $L$, we have impedance in the original circuit

\[ Z_1(\omega) = \frac{R(r + i\omega L)}{R + r + i\omega L}. \tag{20} \]

In the replacement circuit (Figure 2(c2)) with the capacitor $C'$ we have impedance

\[ Z_2(\omega) = \frac{R'\left(r' + \dfrac{1}{i\omega C'}\right)}{R' + r' + \dfrac{1}{i\omega C'}}. \tag{21} \]

After substituting $R' = Rr/(R + r)$, $r' = -r$, and $C' = -L/r^2$ and multiplying numerator and denominator by $i\omega C'$ we get

\[ Z_2(\omega) = \frac{R(r + i\omega L)}{R + r + i\omega L}. \tag{22} \]

This result is identical to the original impedance $Z_1(\omega)$; it will therefore also have the same frequency response: absolute value and phase (i.e., the same Bode plot).

2.1.3. Generalized Equivalent Circuit

In the previous sections, it was shown that the channels in a linearized version of the H&H equations can be represented by equivalent circuits consisting of a network of resistors and inductors (Figure 2). Even in the case where negative values occur for these components (due to positive feedback between sodium conductance and membrane depolarization), they can be represented by positive valued alternative electronic components (Figure 2(c)). When combining these findings, we simplify the equivalent circuit in Figure 1 and consider the RCL network in Figure 3 as a linearized representation of the membrane.

The inductive branch in the circuit represents the stabilizing effects of outward current activation (e.g., potassium) and inward current inactivation (e.g., sodium). The resistor and capacitor represent membrane resistance and capacitance, plus corrections for negative impedances. Since we look at small perturbations around an equilibrium, which we conveniently set at zero, the driving forces given by the difference between Nernst potential and resting potential can be ignored.

The pair of differential equations that govern the circuit in Figure 3 are

\[ C\frac{dV}{dt} = I - \frac{V}{R} - I_L, \qquad L\frac{dI_L}{dt} = V, \tag{23} \]

where $I_L$ is the current through the inductor. We now simplify and we nondimensionalize the system. First we substitute $v = V$ and $w = RI_L$ and get $RC\frac{dv}{dt} = RI - v - w$ and $\frac{L}{R}\frac{dw}{dt} = v$. Then we change to a timescale such that $\hat{t} = t/RC$; substituting this we get the following pair of ODEs:

\[ \frac{dv}{d\hat{t}} = RI - v - w, \qquad \frac{dw}{d\hat{t}} = \beta v, \tag{24} \]

with $\beta = R^2C/L$ (note that $\beta$ is dimensionless). Finally, using $\frac{dw}{d\hat{t}} = \beta v$ and $\frac{dv}{d\hat{t}} = RI - v - w$, we obtain the following 2nd order ODE:

\[ \frac{d^2v}{d\hat{t}^2} + \frac{dv}{d\hat{t}} + \beta v = R\frac{dI}{d\hat{t}}. \tag{25} \]

This result for $v$ and $w$ is identical to the equations analyzed by Erchova et al. [55] (their equations (A16) and (A17)). An alternative route to create a single equation from the two original ones is by direct substitution while leaving dimensionality intact. If we differentiate the first expression in (23) with respect to time we have $C\ddot{V} = \dot{I} - \dot{V}/R - \dot{I}_L$. Here $\ddot{V}$, $\dot{V}$, and $\dot{I}_L$ indicate the second and first time derivatives of $V$ and $I_L$. We can use the second expression in (23) to substitute $\dot{I}_L = V/L$. This yields

\[ C\ddot{V} + \frac{1}{R}\dot{V} + \frac{1}{L}V = \dot{I}. \tag{26} \]

This result is identical to the expression used by Erchova et al. [55] (their Equation (A1)). Interestingly this equation is also similar to the network equation by Jirsa and Haken [45] in which the spatial derivative is set to zero (see Section 4.5). This similarity across the scale from neuron to network is seen frequently. In one example, investigators even use a cellular model for action potential generation to represent activity of a population [56]. Although Curto et al. [56] do not explicitly describe this, in this case the inward and outward currents of the cell membrane can be considered to represent the network's excitation and inhibition, respectively.

2.1.4. The Frequency Response of the Linearized Model

In electrophysiology, cellular properties are often probed with hyperpolarizing and depolarizing current steps. The unit impulse response, the derivative of the unit step response, is the inverse Fourier transform or inverse Laplace transform of the system's frequency response or transfer function, respectively. Since we deal with a linearized system with current as input and membrane potential as output, the frequency response is just the impedance (i.e., $Z(i\omega)$, with $\omega$—frequency in rad/s). We can use the Laplace transform of (26) to obtain the transfer function of the linearized model: $H(s) = \frac{V(s)}{I(s)} = \frac{s/C}{s^2 + s/RC + 1/LC}$. In this form we can apply the Matlab freqs command to depict the Bode plot of the model (Figure 4).
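A short sketch of this computation is given below; the R, C, and L values are illustrative choices that place the resonance near 75 rad/s and do not come from a specific preparation (freqs requires the Signal Processing Toolbox).

```matlab
% Bode plot of the linearized membrane, H(s) = (s/C)/(s^2 + s/(RC) + 1/(LC)).
% R, C, and L are illustrative values tuned to resonate near 75 rad/s.
R = 3e8;                             % membrane resistance, Ohm
C = 1e-10;                           % membrane capacitance, F
L = 1.8e6;                           % phenomenological inductance, H
b = [1/C 0];                         % numerator coefficients:   s/C
a = [1 1/(R*C) 1/(L*C)];             % denominator: s^2 + s/(RC) + 1/(LC)
w = logspace(0, 3, 500);             % frequency axis, rad/s
H = freqs(b, a, w);                  % impedance (frequency response)
subplot(2,1,1); loglog(w, abs(H)); ylabel('|Z| (Ohm)');
subplot(2,1,2); semilogx(w, angle(H)); ylabel('phase (rad)'); xlabel('\omega (rad/s)');
```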

An interesting observation is that the Bode plot in Figure 4 shows a peak in the magnitude of the impedance, indicating that neurons and their networks display resonance [55, 57–60]. With an appropriate, realistic choice of parameters, this peak is located in the alpha-frequency range of the EEG at 75 rad/s (~12 Hz).

2.2. Models That Can Be Derived from the Hodgkin and Huxley Formalism

The H&H representation and its linearized version can be seen as a basis for many different neuronal models. Of course one could either add complexity to the H&H formalism or further reduce it. The following sections summarize representative examples of both types.

2.2.1. Complex Cell Models

Using the H&H formalism one might extend the sodium and potassium channels of the model neurons to also include different types of potassium channels (such as the A-current, a transient potassium current), persistent sodium channels, and different types of Ca++ channels. In addition to the voltage sensitive channels, it is common practice to also include Ca++- and/or Na+-sensitive channels. Beyond the addition of channels, the cell model can also be divided into multiple coupled compartments (e.g., [4]). Each compartment represents a small part of the cell that can be approximated as having a uniform membrane potential. The rationale for the use of multiple equipotential compartments is to mimic the spatial separation of different functions: synaptic input, passive and active propagation of signals, and generation of action potentials. The more complex cell models are also frequently used to create networks in order to determine emergent behavior of neuronal populations (e.g., [3, 5, 6, 26, 61]). Special neural simulation packages such as Genesis and Neuron have been developed to simulate such models, even on parallel machines [62, 63]. A recent example in which many details are incorporated in the individual cell models is the Blue Brain Project [7]. The advantage of these models is that they are based on experimental data, enabling the modeler to link the model's behavior to measurements. This close relationship can also be a disadvantage of these complex models; due to the complexity, the simulation results can be as difficult to interpret as real recorded data.

2.2.2. Simplified Cell Models

One can simplify the H&H equations while preserving the nonlinearity, or one might use the linearization results described in Section 2.1. In fact both these approaches have been employed. In one case, the four dimensional H&H model can be reduced to a lower dimensional one and in the other case, the equivalent circuit of a linearized membrane model (Figure 3) is coupled to an ad hoc spike generator (Figure 5). The advantage of preserving the nonlinearity is that the spike generation mechanisms remain (at least in part) intact, but mathematical analysis of such a model can be fairly complicated. The linearized cell models are more accessible for analysis but require an ad hoc spike generator (Table 1).

(1) FitzHugh-Nagumo/Morris and Lecar. The simplifications used independently by FitzHugh [9], Nagumo et al. [10], and Morris and Lecar [12] are all based on the observation that the cell has a relatively fast activating inward current and a slower activating outward current. This allows one to simplify (2)–(5) from a four dimensional model into a model with only two dynamic variables. In the FitzHugh-Nagumo approach, the inward current is sodium whereas in the Morris and Lecar model for a barnacle muscle cell, the inward current is calcium. For the following discussion, it is not critical which ion species carries the inward current, so I will focus on simplification of the H&H formalism with a Na+ current. In the original equations (2)–(5) we have $V$, $m$, $h$, and $n$. The simplified version ignores the dynamics of the activation of the inward current because it is so fast it is considered instantaneous ($m = m_\infty(V)$). The effects of $h$ and $n$ are similar; that is, they repolarize the membrane, so they can either be lumped or one of the two can be ignored so that only one repolarization parameter (often indicated by $W$) remains. This generates a nonlinear dynamical system with two variables. Although a few variations exist, the original equations in FitzHugh [9] were

\[ \frac{dV}{dt} = c\left(W + V - \frac{V^3}{3} + I\right), \qquad \frac{dW}{dt} = -\frac{V - a + bW}{c}. \tag{27} \]

Here $V$, $W$, and $I$ are membrane potential, recovery variable, and injected current, respectively; $a$, $b$, and $c$ are constants.
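A minimal sketch of (27) follows, using FitzHugh's classic constants ($a = 0.7$, $b = 0.8$, $c = 3$); the injected current value is an illustrative choice that yields a limit cycle.

```matlab
% FitzHugh's equations (27); I = -0.4 is an illustrative stimulus
% that places the system on a limit cycle (repetitive spiking).
a = 0.7; b = 0.8; c = 3; I = -0.4;
f = @(t,u) [c*(u(2) + u(1) - u(1)^3/3 + I);   % dV/dt
            -(u(1) - a + b*u(2))/c];          % dW/dt
[t, u] = ode45(f, [0 100], [0; 0]);
plot(t, u(:,1)); xlabel('time (a.u.)'); ylabel('V');
```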

(2) Reduced Spiking Models. From the linearization procedure we can appreciate that the neural membrane can be represented by the RCL-circuit (Figure 3). Of course, this representation only works for studying subthreshold behavior such as integration of inputs or resonance properties. This type of study explains the propensity of some neurons to resonate with specific input frequencies. If, as an additional simplification, the ion channel properties are completely ignored, we may even further simplify the membrane to an RC-circuit; in this configuration there are no resonant properties. Both these simplifications, the RCL- and RC-circuits, ignore the nonlinear properties of the excitable membrane. In the so-called integrate-and-fire (IF) models the nonlinearity is reintroduced, but in an ad hoc fashion by employing a firing threshold. When this threshold is exceeded, a spike plus reset are pasted into the signal. Although such an ad hoc firing process can be included in both RCL- and RC-circuits, it is commonly applied to the latter.

Historically, the RC-based integrate-and-fire model was introduced by Lapicque in 1907 [13], long before the H&H formalism. Because of the leakage current in the resistor, it is also often called the leaky integrate-and-fire neuron (LIF). Much later a nonlinear variant of the LIF was introduced where the I-V relationship is not linear as in (23), but quadratic: the QIF model (e.g., [14]). Because of this quadratic relationship, the I-V curve contains a stable and an unstable equilibrium, the spike's upstroke can be generated by the model itself, and only an ad hoc spike reset is required (Figure 6).

Finally, one can combine the QIF scenario with the RCL-circuit to produce a model that is capable of both resonance and spike upstroke generation. This is defined as the simple model of choice (SMC) by Izhikevich and was shown to be able to produce a plethora of neuronal behavior (e.g., [15]).
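The sketch below shows the SMC dynamics using the widely cited quadratic form and the regular-spiking parameter example from Izhikevich's 2003 paper; note that the test at +30 mV is a spike-peak cutoff that triggers the ad hoc downward reset, not a firing threshold.

```matlab
% Izhikevich's simple model, regular-spiking parameters (a,b,c,d).
a = 0.02; b = 0.2; c = -65; d = 8;
v = -65; u = b*v; dt = 0.5; T = 0:dt:200; vt = zeros(size(T));
for k = 1:numel(T)
    I = 10*(T(k) > 20);                          % step input current
    v = v + dt*(0.04*v^2 + 5*v + 140 - u + I);   % quadratic upstroke
    u = u + dt*a*(b*v - u);                      % recovery variable
    if v >= 30                                   % spike peak reached:
        vt(k) = 30; v = c; u = u + d;            % ad hoc downward reset
    else
        vt(k) = v;
    end
end
plot(T, vt); xlabel('time (ms)'); ylabel('v (mV)');
```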

2.3. Summary of the Cell Models

We can consider the different models and summarize their relationships by counting the dimensions (Table 1). The H&H model is four dimensional whereas the subthreshold component of the IF model is only one dimensional. Of course, dimensions are added when the ad hoc spike generator is included but most studies using this model will focus on the subthreshold behavior. Table 1 summarizes the model dimensions and provides an overview of the spiking mechanisms of the models we evaluated above.

3. Populations of Cell Models

Network models containing concrete nodes come in a wide variety, ranging from nodes represented by an on/off switch to a detailed multicompartmental cell model with multiple ion channels. Similarly, the connections between the nodes can be modeled by relatively simple rules or they can be simulated by biophysically realistic synaptic ion channels. In general, the purpose of these models is to provide simulation results that can be compared to a result from mathematical analysis and/or an experimental measurement.

3.1. On/Off Switches

The first simple network model using coupled switches was described by McCulloch and Pitts [16]. Their approach was to represent network elements as gates in a logical circuit (e.g., AND, OR, etc.). Others have used a similar approach and also compared the network on/off node with the spins in a spin-glass lattice (e.g., [17]). The so-called Ising spins can be up or down, indicated by +1 and −1, respectively [64]. Each spin tends to align with its neighbors. However, at high temperatures, the probability of this alignment decreases toward chance level. We model the cortical surface by a two dimensional lattice of Ising spins $S_i$. The overall state of this lattice can be characterized by its magnetization $M$, the average of the individual spins: $M = \frac{1}{N}\sum_iS_i$. Below a certain critical temperature $T_c$, the lattice will retain global magnetization induced by an external magnetic field $B$. Above the critical temperature (also called the Curie temperature), thermal fluctuations will cause demagnetization. Although this is a mathematical framework in statistical mechanics, it has been used to represent neuronal networks where the spins are the neuronal nodes in the network; in these models, the up and down states are often presented by 1 (active) and 0 (inactive), respectively. These networks with $N$ nodes are typically simulated in discrete time and follow the following update rules.

(1) At each time step, the field $h_i$ experienced by node $S_i$ is determined by the state of the set ($\Gamma_i$) of its connected neighbors, reflected by the function $h_i = \sum_{j\in\Gamma_i}w_{ij}S_j$, with $w_{ij}$ representing a weighting function from node $j$ to node $i$ (in many models this weight is set to one).

(2) The update of $S_i$ now takes place in a probabilistic fashion; the probability that $S_i$ will be in the up state in the next time step is $P(S_i = +1) = 1/(1 + e^{-2h_i/T})$; variable $T$ is the temperature. It can be seen that $P \rightarrow 1/2$ as $T \rightarrow \infty$ and that the update is a deterministic process ($P = 0$ or $P = 1$) as $T \rightarrow 0$.

This update process is a stochastic variant of one of the rules described by McCulloch and Pitts [16] where a neuron’s output y is updated by the state of its input from neurons in a deterministic fashion, that is, output y is set to 1 when its input exceeds a threshold and to 0 otherwise. A mean field approach for the analysis of the Ising spin model is described in Section 4.1.
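A minimal sketch of the probabilistic update rules (1) and (2) above on a two-dimensional lattice follows; the grid size, temperature, and ±1 state convention are illustrative choices, and the update here is synchronous (unlike Hopfield's asynchronous scheme).

```matlab
% Stochastic lattice of +/-1 'neurons' following rules (1)-(2),
% nearest-neighbor weights set to one, periodic boundaries.
n = 50; T = 2.0; steps = 200;
S = sign(rand(n) - 0.5);                     % random initial spins
M = zeros(1, steps);
for s = 1:steps
    h = circshift(S,[1 0]) + circshift(S,[-1 0]) ...
      + circshift(S,[0 1]) + circshift(S,[0 -1]);  % neighbor field, rule (1)
    P = 1 ./ (1 + exp(-2*h/T));              % up-state probability, rule (2)
    S = sign((rand(n) < P) - 0.5);           % probabilistic update
    M(s) = mean(S(:));                       % lattice magnetization
end
plot(M); xlabel('time step'); ylabel('magnetization M');
```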

A model similar to the Ising spin lattice was described by Hopfield [18]. He also considered switches interconnected in a network in which the strength of the connections between any neuron pair $i$ and $j$ is determined by the synaptic weight $w_{ij}$ of their connection (no unit has a connection with itself, $w_{ii} = 0$, and, unrealistically, he also assumed that the weight matrix is symmetric, $w_{ij} = w_{ji}$). The principal difference between Hopfield's approach and the ones described above is that the updates occur randomly and asynchronously; that is, for every time step a node is randomly selected and then an update rule is applied for that node only. This update rule is similar to the deterministic one employed by McCulloch and Pitts [16]; that is, the selected neuron is active if the sum of activities of its connected neurons exceeds a threshold. More on stochastic approaches is discussed in Section 5.

3.2. Networks of Reduced Spiking Models

As described in Section 2.2.2 (2), there is a variety of simplified models ranging from simple IF to rather complete SMC representations (Table 1). Especially the IF-model is frequently used for network simulations. One advantage of this model is that it is simply linear in between the spikes. By employing this approach, one can not only simulate the network but also, to some extent, analyze its properties mathematically. Using the simplification in Figure 3 and a further simplification by neglecting induction effects, we can simplify (23): $C\,dV/dt = I - V/R$. By simplifying this notation, one can describe the network dynamics by membrane potential $v_i$, the activity level of each neuron $i$, by

\[ \dot{v}_i = a - v_i + gE_i(t) \tag{28} \]

(e.g., [21, 22]). The injected current is separated into $a$ and $gE_i(t)$. Parameter $a$ represents spontaneous drive and $E_i(t)$ is the time-dependent synaptic input with coupling strength $g$. The presynaptic spike train can be represented by a train of unit impulses $\sum_n\delta(t - t_n)$. Each incoming impulse generates a postsynaptic signal that can be modeled by a so-called alpha function; that is, for an impulse arriving at time $t_n$, its postsynaptic effect is given by $\alpha^2(t - t_n)e^{-\alpha(t - t_n)}H(t - t_n)$. The Heaviside function $H$ is included because the alpha-function does not have a physical meaning for negative arguments, $t < t_n$. Combining these considerations, one can conclude that input $E_i$ is the weighted sum of all alpha functions resulting from the incoming spike train (e.g., [23, 24])

\[ E_i(t) = \frac{1}{K_i}\sum_jC_{ij}\sum_n\alpha^2\left(t - t_n^{(j)}\right)e^{-\alpha(t - t_n^{(j)})}H\left(t - t_n^{(j)}\right). \tag{29} \]

Here we determine the input by using the synaptic weight matrix $C$, in which each entry $C_{ij}$ is the connectivity strength of neuron $j$ to neuron $i$. Scaling factor $K_i$ represents the number of presynaptic inputs of neuron $i$. We now determine the Laplace transform of (29) and rearrange terms, which can also be inverse transformed into the time domain in the form of a differential equation

\[ \ddot{E}_i + 2\alpha\dot{E}_i + \alpha^2E_i = \frac{\alpha^2}{K_i}\sum_jC_{ij}\sum_n\delta\left(t - t_n^{(j)}\right). \tag{30} \]

Since the first order differential equation (28) and the second order one (30) are coupled, via input $E$, they create a three dimensional nonlinear system; a system capable of exhibiting oscillatory and chaotic behavior. To study the activity in the model, Olmi et al. [23] rewrite the system as three 1st-order differential equations and integrate between consecutive pulses $t_n$ and $t_{n+1}$. This creates an event-driven map that can be further investigated. Tattini et al. [24] examine the propensity for chaotic behavior by evaluating the attractors characterizing the network activities and they show that both connectivity and network size are critical properties—unsurprisingly, the level of chaos, reflected by the maximum Lyapunov exponent, decreases with network size, although less so for sparsely connected networks.
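The sketch below simulates a small version of this network, Eqs. (28)–(30), by direct Euler integration rather than the event-driven map; network size, coupling, and the alpha-function rate are illustrative values.

```matlab
% Pulse-coupled LIF network: dv_i/dt = a - v_i + g*E_i(t), Eq. (28),
% with E_i driven through the 2nd-order synaptic filter of Eq. (30).
N = 100; a = 1.3; g = 0.2; alpha = 9; dt = 1e-4;
Cmat = double(rand(N) < 0.2);            % sparse 0/1 connectivity matrix
K = max(sum(Cmat, 2), 1);                % number of presynaptic inputs
v = rand(N,1); E = zeros(N,1); Ed = zeros(N,1);
rate = zeros(1, 5000);
for k = 1:numel(rate)*10
    s = double(v >= 1); v(s == 1) = 0;   % threshold crossing and reset
    v = v + dt*(a - v + g*E);            % Eq. (28)
    Ed = Ed + dt*(-2*alpha*Ed - alpha^2*E) + alpha^2*(Cmat*s)./K;  % Eq. (30)
    E  = E + dt*Ed;
    if mod(k,10) == 0, rate(k/10) = mean(s)/dt; end   % population rate
end
plot(rate); xlabel('sample'); ylabel('mean firing rate (a.u.)');
```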

More complicated cell models have also been employed to investigate network activity. Recently, Izhikevich and Edelman [25] used the SMC model to create a large network while using connectivity based on the tracts determined by diffusion tensor imaging of the brain.

3.3. Networks of Ion Channel Models Based on Biophysical Considerations

Using the H&H formalism combined with approximation of the cell morphology by segments, one can create realistic representations of neurons. Addition of ligand-sensitive, synaptic channels allows one to create networks. Equation (2) is then adapted to include currents associated with other voltage sensitive channels, the synaptic currents, and intercompartmental currents

\[ C\frac{dV}{dt} + \sum_kI_k + \sum I_{syn} + \sum I_{comp} = I. \tag{31} \]

Interestingly, by applying Kirchhoff's first law, it is apparent that intracellular intercompartmental currents must be equal to the extracellular current between the compartments, thereby providing a tool to model extracellular measurements. The dynamics of the voltage-sensitive channels are governed by expressions similar to (3)–(5). The intercompartmental current, following Ohm's law, is determined by the potential difference between the isopotential compartments divided by the axial resistance between them. Traub and co-workers were amongst the first to develop this approach (e.g., [2, 26]); the Blue Brain Project [7] is a famous recent attempt to determine properties that govern network activity at mesoscopic levels. The advantage of the approach is that the neurons in the network are modeled with realistic ion channels, which allows a direct comparison with experimental data. However, well-known problems with the approach are associated with the complexity and large number of parameters, many of which have not been determined experimentally yet. The large parameter space renders this type of model ill-posed and the complexity can make interpretation of the results very difficult. A combined approach with associated detailed and global models can be very useful to understand the principles underpinning the simulation results of the detailed models [5, 6, 27, 65, 66].

The detailed model of neocortex outlined in Figure 7(a) shows pyramidal cell populations and the inhibitory basket and chandelier cells. The network generates an EEG signal based on the weighted sum of the neuronal currents. Surprisingly, during reduced excitatory coupling, the model starts to oscillate (top trace in Figures 7(a) and 7(b)) [5]. The mechanism generating these oscillations is difficult to grasp when studying the individual rasters of the superficial and deep pyramidal neurons (grey panels in Figure 7(b)). However, when lumping each of the superficial and deep pyramidal activities (Figure 7(c)), it can be seen that the simulated EEG oscillation is associated with alternating oscillations in the two populations. The mechanism of the EEG oscillation can thus be much better understood if one considers population activity patterns while ignoring the details of the individual neural activity. If one considers an individual neuron's activity as depicted in the rasters in Figure 7(b), one might not see the relationship between neuron and network activity since only a few neurons are active during each cycle; that is, most neurons show cycle skipping (Wallace et al., 2012).

Not only the oscillatory activity pattern itself but also the sudden transition between nonoscillatory and oscillatory network behavior can be understood from bifurcation analysis of a global model of two coupled excitatory-inhibitory populations as depicted in Figure 7(b) [65, 66].

4. Mean Field Network Models

The network models discussed in this section do not explicitly simulate each network node, but start directly by modeling populations of nerve cells. Some of the models explicitly use the mean field approach commonly employed in statistical mechanics, others create neural populations with functionality inspired by the single neuron function. The final result of these approaches can be fairly similar since they both usually contain excitatory and inhibitory components (the Ising spin model is an exception). As I will point out repeatedly, this results in a strong similarity between the equations that result from the various network models and even between these and the equations describing single neurons.

4.1. The Ising Spin Model

Two- or three-dimensional models of the Ising-spin lattice, where the spins represent the neurons, are not easy to solve. A mean field approach is the simplest approximation to describe such a system. In the lattice, each node experiences the sum of the magnetic forces created by the other nodes and, if present, an external magnetic field. In the mean field approach, we replace the internal field generated by the spins by its average value. It is beyond the scope of this paper to derive the mean field relationship, which can be found in many physics textbooks (e.g., [67, Chapter 7]). Using the mean field expression, we can describe the equilibrium state of the lattice of spins—or network of model neurons—by the expression

\[ M = \tanh\left(\frac{JnM + B}{kT}\right) \tag{32} \]

with $k$—Boltzmann constant (in the following $k$ is set to unity), $T$—temperature, $J$—magnetic coupling strength, $n$—number of neighbors, $M$—magnetization of the lattice, and $B$—an externally applied magnetic field. As shown in the following, (32) shows sudden transitions in behavior according to the so-called cusp catastrophe [68]. First we can determine the fixed points for the expression graphically by finding the intersections of both terms, $M$ and $\tanh((JnM + B)/T)$, as a function of $M$. When depicting these functions for different $T$ and $B$, it can be seen that they intersect at either 3 or at 1 point(s). In case of the intersection at 3 points, the middle one is an unstable fixed point, the other two are stable ones; therefore the system is bistable in one area of the $(T, B)$ plane and monostable elsewhere.

The transition between bi- and mono-stability occurs at the bifurcation where the straight line intersects the sigmoid tangentially; this indicates that two conditions must be satisfied at the transition in point $M^*$, $B^*$:

\[ M^* = \tanh\left(\frac{JnM^* + B^*}{T}\right) \tag{33} \]

and since they must be tangent

\[ 1 = \frac{Jn}{T}\left[1 - \tanh^2\left(\frac{JnM^* + B^*}{T}\right)\right]. \tag{34} \]

From these two expressions, one can find the expression for $M^*$ and $B^*$ (as a function of $T$). The plot of these functions in the $(T, B)$ plane is the so-called stability diagram (Figure 8(a)). These lines mark the border where the system transitions from mono- (1 fixed point) to bi-stability (3 fixed points with one of the points unstable). Another common presentation is a bifurcation diagram; here we have a codimension-2 case with two independent variables $T$ and $B$. Note that for $B = 0$, $M = 0$ is always a solution (fixed point) for (32). If we use $B = 0$, we have a stable fixed point at $M = 0$ for $T > T_c = Jn$ and an unstable one at $M = 0$ plus two stable fixed points for $T < T_c$. This is depicted in Figure 8(b); we can see that we have the characteristic of a supercritical pitchfork bifurcation. This plot can be considered as cross-sections of a 3D plot of $M(T, B)$ shown in Figure 8(c); this surface depicts the so-called cusp catastrophe.
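The pitchfork for $B = 0$ can be reproduced with a few lines of fixed-point iteration on (32); setting $k = Jn = 1$ (so that $T_c = 1$) is an illustrative normalization.

```matlab
% Stable fixed points of the mean-field equation (32) with B = 0.
Jn = 1; Ts = 0.1:0.01:1.5; M = zeros(size(Ts));
for idx = 1:numel(Ts)
    m = 1;                                   % start from the magnetized state
    for it = 1:1000
        m = tanh(Jn*m/Ts(idx));              % fixed-point iteration of (32)
    end
    M(idx) = m;
end
plot(Ts, M, Ts, -M);                         % branches merge at Tc = Jn
xlabel('T'); ylabel('M');
```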

4.2. The Wilson and Cowan Model

The 1st model by Wilson and Cowan [29] is exclusively temporal and describes the behavior of two neuronal populations—one excitatory and one inhibitory—embedded in neural tissue. It is more realistic than the Ising model, which only considers the equivalent of excitation. In the Wilson-Cowan model, population properties are derived from assumed statistics of the neurons [29]. Each population receives input from itself, the other population, and external sources. The input to a population affects the neurons of that population that are available (the inactive proportion of the population). The model is nonlinear because the incoming activity for the population is related to its output through a nonlinear sigmoid function $S$. In a 2nd version of the model that will not be further discussed here, Wilson and Cowan [30] extended their model to include the spatial component as well. Instead of formulating the model from underlying statistical properties, a coarse-grained version of the model can be directly derived from the diagram in Figure 9. Two populations, the excitatory and inhibitory, are coupled with strengths $w_{EE}$, $w_{EI}$, $w_{IE}$, and $w_{II}$. The external population affects the excitatory and inhibitory networks with activities $P$ and $Q$. We denote excitatory activity by $E$ and inhibitory activity by $I$; the excitatory population in Figure 9 receives input that can be quantified as $w_{EE}E - w_{IE}I + P$. This input of the population is converted to an output expression via a sigmoid function $S_e$. In most followers of their approach, such a sigmoid curve is employed to represent conversion from membrane potential to spike rate and/or vice versa. In Wilson and Cowan's approach, the sigmoid curve represents the cumulative distribution of the probability that a neuron will fire at a given input, that is, the distance between membrane potential and firing threshold. Further, assuming that the unavailable part of the population, that is, the active + refractory portion, is proportional (by a fraction $r$) to the activity level $E$, then the receptive population (available portion of $E$) is $1 - rE$.

In Wilson and Cowan [29], the authors use a value $k$ a little bit less than unity, reflecting that the maximum values for the sigmoid function will be a little bit less than one (see pages 9-10 in the original paper). Therefore, the available population becomes $k - rE$. The formalism for population activity also must describe spontaneous decay proportional to $E$, reflecting that a nonstimulated group of neurons converges to a base level of activity (zero, reflecting a low level of spontaneous activity in this model). Finally all changes are weighted by the time constant of the neuronal population $\tau$. Putting this all together the coarse grained equation for excitatory activity becomes

\[ \tau\frac{dE}{dt} = -E + (k_e - r_eE)\,S_e(w_{EE}E - w_{IE}I + P). \tag{35} \]

A similar approach for the inhibitory population yields

\[ \tau\frac{dI}{dt} = -I + (k_i - r_iI)\,S_i(w_{EI}E - w_{II}I + Q). \tag{36} \]

In their 1972 paper [29], Wilson and Cowan derive these relationships from first principles followed by a coarse graining procedure (their equations (11) and (12)). When looking into the two dimensional nonlinear Wilson-Cowan model, one finds a variety of behaviors including a steady activity scenario, oscillatory activity with damping, and limit cycles (Figure 10).
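For concreteness, the sketch below integrates (35)-(36) with a parameter set in the style of the 1972 paper's limit-cycle example; the exact constants (including $k_e$, $k_i$, and the drive $P$) should be treated as illustrative.

```matlab
% Wilson-Cowan equations (35)-(36) in an oscillatory regime.
S = @(x,a,th) 1./(1 + exp(-a*(x - th))) - 1./(1 + exp(a*th));  % S(0) = 0
wee = 16; wie = 12; wei = 15; wii = 3;       % coupling strengths
ke = 0.97; ki = 0.98; re = 1; ri = 1; tau = 1;
P = 1.25; Q = 0;                             % external drives
f = @(t,y) [(-y(1) + (ke - re*y(1))*S(wee*y(1) - wie*y(2) + P, 1.3, 4))/tau;
            (-y(2) + (ki - ri*y(2))*S(wei*y(1) - wii*y(2) + Q, 2.0, 3.7))/tau];
[t, y] = ode45(f, [0 100], [0.1; 0.05]);
plot(t, y); legend('E', 'I'); xlabel('time'); ylabel('activity');
```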

4.2.1. Linearization of the Wilson-Cowan Equations

To study resonance phenomena, we will consider the effect on $E$ and $I$ of small sinusoidal perturbations (e.g., evoked by electrical stimulation) close to an equilibrium state. When stimulating electrically in the extracellular space and assuming similar spatial distribution of excitatory and inhibitory cells, it is reasonable to assume that both the excitatory and inhibitory populations receive the same stimulus current. Thus the external inputs $P$ and $Q$ are considered equal. Further simplifying the sigmoid curves $S_e$ and $S_i$ to lines with slopes $s_e$ and $s_i$, the linearized versions of (35) and (36) become

\[ \tau\frac{dE}{dt} = -E + (k_e - r_eE)\,s_e(w_{EE}E - w_{IE}I + P), \tag{37} \]
\[ \tau\frac{dI}{dt} = -I + (k_i - r_iI)\,s_i(w_{EI}E - w_{II}I + P). \tag{38} \]

Rewriting these results and using $F$ and $G$ for the right-hand sides, we get

\[ \tau\frac{dE}{dt} = F(E, I, P), \qquad \tau\frac{dI}{dt} = G(E, I, P). \tag{39} \]

We now further linearize and consider small perturbations $e$, $i$, $p$ around the equilibrium $(E_0, I_0, P_0)$ indicated by

\[ F(E_0, I_0, P_0) = G(E_0, I_0, P_0) = 0. \tag{40} \]

The partial derivatives determined at equilibrium are $F_E$, $F_I$, $F_P$, $G_E$, $G_I$, and $G_P$ (e.g., $F_E = \partial F/\partial E$ evaluated at the equilibrium). Combining these expressions, substituting $e$ and $i$ for the perturbations of $E$ and $I$, and making the perturbation sinusoidal, $p = \varepsilon\sin\omega t$ (where amplitude $\varepsilon$ is a small value), we can simplify notation

\[ \tau\dot{e} = F_Ee + F_Ii + F_P\varepsilon\sin\omega t, \tag{41} \]
\[ \tau\dot{i} = G_Ee + G_Ii + G_P\varepsilon\sin\omega t. \tag{42} \]

These two 1st order differential equations can also be written as a single 2nd order one. Taking the derivative of (42) and substituting the expression in (41) for $\dot{e}$ gives

\[ \tau^2\ddot{i} = G_E(F_Ee + F_Ii + F_P\varepsilon\sin\omega t) + \tau G_I\dot{i} + \tau G_P\varepsilon\omega\cos\omega t. \tag{43} \]

Using (42), we can substitute $G_Ee = \tau\dot{i} - G_Ii - G_P\varepsilon\sin\omega t$, and we get the standard expression for a 2nd order differential equation with a driving force

\[ \tau^2\ddot{i} - \tau(F_E + G_I)\dot{i} + (F_EG_I - F_IG_E)i = (G_EF_P - F_EG_P)\varepsilon\sin\omega t + \tau G_P\varepsilon\omega\cos\omega t. \tag{44} \]

This equation is again of the same form as the equation that governs a parallel RCL circuit as shown in Figure 3. The 2nd-order ODE governing this RCL circuit is (26). This equation is also very similar to the network equation by Jirsa and Haken [45] in which the spatial derivative is set to zero (Section 4.5). Thus the linearized network model is capable of the resonance behavior described in Section 2.1.4. Since we deal with a linearized version of a nonlinear system, unstable solutions of the linearized case may relate to bifurcations in the full nonlinear version of the model (see Section 6.1 for further discussion).

4.3. Models Inspired by the Wilson-Cowan Approach

The model described by Lopes da Silva et al. [35] is a model of the thalamus and one of the first inspired by the Wilson-Cowan approach. A similar treatment of the mesoscopic model, with altered synaptic function, was given by van Rotterdam et al. [36]. The base model describes a thalamic network; however, the populations are identical to the ones in the cortical Wilson-Cowan model: one excitatory and one inhibitory cell group. Similar to the Wilson-Cowan approach they assume an invertible function $f$ that translates membrane potential, $V_e$ and $V_i$ (i.e., the input) for the excitatory and inhibitory populations, into a rate of action potentials for each population, $E$ and $I$ (i.e., the output). For the excitatory population we then get the following relationships: $E = f(V_e)$ and $V_e = f^{-1}(E)$. Here $f$ and $f^{-1}$ are the function and its inverse that translate membrane potential into spike rate and vice versa. Both the function $f$ and its equivalent for the inhibitory population are basically the static nonlinearities in their model. Each nonlinear component is subsequently approximated by a Taylor series about the mean value of $V_e$, denoted as $\bar{V}_e$: $E = f(\bar{V}_e) + f'(\bar{V}_e)(V_e - \bar{V}_e) + \cdots$. Subsequently Lopes da Silva et al. simplify notation by considering the deviation from the mean only, $v_e = V_e - \bar{V}_e$. This is done for all variables in the model, and next they linearize about the mean by neglecting the 2nd and higher order terms in the Taylor series so that the original equations can be approximated by $e = q_ev_e$ and $i = q_iv_i$, with $q_e = f'(\bar{V}_e)$ and $q_i$ its inhibitory equivalent. External input $P$ or its demeaned version $p$ perturbs the system (top left in Figure 11(a)). For the excitatory component in the diagram in the Laplace domain we get $V_e(s) = H_e(s)P(s)$ (momentarily ignoring inhibition), with the following Laplace transform pairs: $v_e(t) \leftrightarrow V_e(s)$, $p(t) \leftrightarrow P(s)$, and $h_e(t) \leftrightarrow H_e(s)$. Here $h_e(t)$ and $H_e(s)$ are the unit impulse response and transfer function of the excitatory network.

In a similar fashion we include the effect of the inhibition in the diagram in Figure 11(a) and get

\[ V_e(s) = H_e(s)[P(s) - C_iI(s)], \tag{45} \]

where $C_i$ is the connectivity constant of the inhibitory-to-excitatory projection. Following the same procedure for the linearized inhibitory component (bottom part in Figure 11(a)) gives

\[ I(s) = q_iH_i(s)C_eE(s), \tag{46} \]

with $C_e$ the excitatory-to-inhibitory connectivity constant. If we substitute the expression for $I(s)$ (obtained from (46)) into (45) and use $E(s) = q_eV_e(s)$, we obtain the following expression for $V_e(s)$:

\[ \frac{V_e(s)}{P(s)} = \frac{H_e(s)}{1 + q_eq_iC_eC_iH_e(s)H_i(s)}. \tag{47} \]

Then the authors use a linear system to model synaptic input with unit impulse responses (UIRs) $h_e(t)$ and $h_i(t)$. In Lopes da Silva et al. [35], the synaptic UIR is a dual exponential and in van Rotterdam et al. [36] it is an alpha-function $Ate^{-at}$; in both cases, these functions are only considered for $t \geq 0$. The associated Laplace and Fourier transforms are, of course, slightly different. Here we follow the paper of Lopes da Silva et al. [35] and use the dual exponential unit impulse response for both the excitatory and inhibitory populations. These functions and their Laplace transforms are

\[ h_e(t) = A\left(e^{-a_1t} - e^{-a_2t}\right) \leftrightarrow H_e(s) = A\left(\frac{1}{s + a_1} - \frac{1}{s + a_2}\right), \]
\[ h_i(t) = B\left(e^{-b_1t} - e^{-b_2t}\right) \leftrightarrow H_i(s) = B\left(\frac{1}{s + b_1} - \frac{1}{s + b_2}\right). \tag{48} \]

We can substitute this into (47) and simplify with $G = q_eq_iC_eC_i$.

We can substitute $s$ by $i\omega$ in (47) to obtain an expression for the spectrum of $v_e$. The spectrum can be further quantified by assuming that the thalamic input is white noise and therefore $|P(i\omega)|$ is constant. The numerical values we use for Figure 11(b) are the same as the ones in the example given in Lopes da Silva et al. [35]: the synaptic amplitudes $A$ and $B$ in mV, the connectivity constants $C_e$ and $C_i$ in numbers of cells, and the rate constants $a_1$, $a_2$, $b_1$, and $b_2$ in s−1. It can be seen that the spectrum peaks around 10 Hz, the alpha rhythm frequently encountered in the EEG of subjects with eyes closed.
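The spectral shape can be sketched in a few lines by evaluating (47) and (48) on the imaginary axis; the rate constants and the lumped loop gain $G$ below are illustrative stand-ins, not the published values.

```matlab
% Spectrum of the linearized alpha-rhythm model, Eqs. (47)-(48), for
% white-noise input. All numerical values are illustrative.
He = @(s) 1.6*(1./(s + 55)   - 1./(s + 605));   % excitatory dual exponential
Hi = @(s) 3.2*(1./(s + 27.5) - 1./(s + 55));    % inhibitory dual exponential
G  = 3000;                                      % lumped loop gain q_e*q_i*C_e*C_i
f  = 0:0.1:30; s = 1i*2*pi*f;
Tf = He(s) ./ (1 + G*He(s).*Hi(s));             % transfer function, Eq. (47)
plot(f, abs(Tf).^2); xlabel('frequency (Hz)'); ylabel('power (a.u.)');
```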

4.3.1. Neural Mass Models

The Wilson-Cowan Equations [29] and Lopes da Silva et al. approach [35] inspired many other models. Instead of considering individual neurons, these models determine the activity of coupled populations of nerve cells, each population representing a neural mass. A very basic model, using two populations representing a coupled excitatory and inhibitory network is described by Dayan and Abbott [69, pages 265–270]. The rate of change for each network is inversely proportional to its activity level, and proportional to the input it receives from itself, the other population, and an external input. This results in a set of two coupled differential equations. Although the model seems to be linear, their model is actually nonlinear because the population activities are limited to positive values only. Therefore their model is capable of generating limit cycles.

Jansen's neural mass model [37] is applied by Grimbert and Faugeras [38] to create a cortical network unit (Figure 12(a)). It is a temporal model of neuronal populations inspired by the Lopes da Silva et al. [35] approach where each population is represented by a Wiener cascade ([70], Chapters 2–5): a dynamic linear component, modeling the synaptic conversion process, and a nonlinear static component that represents the conversion of membrane potential to spike rate (Figure 12(b)). As can be seen in Figure 12(b), the input (input spike rate) is first convolved with the unit impulse response (UIR) of the linear dynamic component; the result (population membrane potential; $*$—convolution) is subsequently converted with a static sigmoid to produce the output (output spike rate). The structure of the model by Grimbert and Faugeras [38] is similar to the canonical cortical model of Douglas and Martin [71] in the sense that it also includes two excitatory populations, one inhibitory population, and an external input $p$. The development of the equations that govern the activities is straightforward. First we use the UIR to write a convolution expression. Again, the synapse is considered a linear system and in this model, the synaptic impulse response function is defined as an alpha function (as in [36]): $h(t) = Aate^{-at}$ for $t \geq 0$, in which $A$ and $a$ are population-specific constants. The Laplace transform of the unit impulse response is: $H(s) = Aa/(s + a)^2$. In the time/Laplace domain we may link the input $x(t)$/transformed input $X(s)$ to the output $y(t)$/transformed output $Y(s)$ of any linear system by the transform pair: $y(t) = h(t)*x(t) \leftrightarrow Y(s) = H(s)X(s)$. Second, we write the convolution relationship as an ODE. Substituting the above expression for $H(s)$ in the Laplace transformed expression $Y(s) = H(s)X(s)$ we get $Y(s) = \frac{Aa}{(s + a)^2}X(s)$. This result can be rewritten as $s^2Y(s) + 2asY(s) + a^2Y(s) = AaX(s)$. Recall that $sY(s)$ and $s^2Y(s)$ denote the Laplace transforms of the 1st and 2nd derivatives of $y(t)$ in the time domain; using this, we can transform the above expression into a 2nd order ODE for the linear synaptic process: $\ddot{y} + 2a\dot{y} + a^2y = Aax$, which can be written as two 1st order ODEs

\[ \dot{y} = z, \qquad \dot{z} = Aax - 2az - a^2y. \tag{49} \]

Now we add the static nonlinearity employed for the membrane potential to pulse rate conversion, a sigmoid curve (Figure 12(b)). In this study, it is defined as a Boltzmann function with maximum rate $2e_0$, slope $r$, and a 50% level threshold $v_0$:

\[ S(v) = \frac{2e_0}{1 + e^{r(v_0 - v)}}. \tag{50} \]

This completes the formalism for one population, a single Wiener cascade in Figure 12(b). The pair of ODEs plus the static nonlinearity can be defined for each of the three populations in the neural mass model (two excitatory and one inhibitory, Figure 12(a)). This means we will have a pair of 1st order differential equations per population, generating a total of six equations (Equation (3) in [38])

\[ \dot{y}_0 = y_3, \qquad \dot{y}_3 = Aa\,S(y_1 - y_2) - 2ay_3 - a^2y_0, \]
\[ \dot{y}_1 = y_4, \qquad \dot{y}_4 = Aa[p + C_2S(C_1y_0)] - 2ay_4 - a^2y_1, \]
\[ \dot{y}_2 = y_5, \qquad \dot{y}_5 = Bb\,C_4S(C_3y_0) - 2by_5 - b^2y_2. \tag{51} \]

The parameters $y_0$, $y_1$, and $y_2$ denote the membrane potentials, and $A$, $B$, $a$, $b$ are the constants in the population-specific alpha functions. Note that in these equations, some outputs and sigmoid functions are weighted by connectivity strengths $C_1$–$C_4$. The focus of the paper is on the variable $y = y_1 - y_2$, the aggregate membrane potential of the main population, which is thought to be proportional to the local field potential. A simulation using the standard parameters, $A = 3.25$ mV, $B = 22$ mV, $a = 100$ s−1, $b = 50$ s−1, $C_1 = 135$, $C_2 = 0.8C_1$, $C_3 = C_4 = 0.25C_1$, $e_0 = 2.5$ s−1, $r = 0.56$ mV−1, and $v_0 = 6$ mV, shows a signal that is similar to the EEG alpha rhythm (Figure 12(a)).
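A compact simulation of (51) is sketched below with the standard Jansen-Rit constants; the uniformly distributed input $p$ and the forward Euler step are simplifications of the original implementation.

```matlab
% Jansen-Rit neural mass, the six ODEs of Eq. (51); standard parameters.
A = 3.25; B = 22; a = 100; b = 50;
C1 = 135; C2 = 0.8*C1; C3 = 0.25*C1; C4 = 0.25*C1;
e0 = 2.5; r = 0.56; v0 = 6;
S = @(v) 2*e0 ./ (1 + exp(r*(v0 - v)));          % sigmoid, Eq. (50)
dt = 1e-3; T = 0:dt:5; y = zeros(6,1); out = zeros(size(T));
for k = 1:numel(T)
    p = 120 + 200*rand;                          % random input pulse rate
    dy = [y(4); y(5); y(6);
          A*a*S(y(2) - y(3))      - 2*a*y(4) - a^2*y(1);
          A*a*(p + C2*S(C1*y(1))) - 2*a*y(5) - a^2*y(2);
          B*b*C4*S(C3*y(1))       - 2*b*y(6) - b^2*y(3)];
    y = y + dt*dy;                               % forward Euler
    out(k) = y(2) - y(3);                        % y = y1 - y2, LFP proxy
end
plot(T, out); xlabel('time (s)'); ylabel('y_1 - y_2 (mV)');
```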

4.3.2. Modeling the Olfactory System

The models created by Freeman in the 1980s used a combined network modeling and experimental approach to study the electrical activity in the rabbit olfactory system. Although Freeman's approach may not have been directly inspired by the Wilson-Cowan equations and/or the models by Lopes da Silva and co-workers, the fundamental building blocks of his model are neural masses. The appeal of Freeman's approach is that it is strongly based on the anatomy of the olfactory system, and that he relates modeling results to electrophysiological experiments. As compared to the previous models, a unique addition is that it contains latencies L1–L4, modeling the conduction delays occurring between populations [39]. Differential equations without delays need initial values, whereas equations with delays require initial functions to determine the delayed response of the system. Equations with delays can exhibit behavior that cannot be displayed by equations without delay of the same order. Delays in the nervous system are a given because of the finite conduction velocity of nerve fiber activity. In addition to Freeman's model, several others have explored the role of these delays or included the delays in the formalism (e.g., [65, 66, 72–74]).

In his models, Freeman describes different population levels. Within the most basic population type, which is defined as K0, he includes pulse-to-wave conversion, linear spatiotemporal integration, wave-to-pulse conversion, and conduction of the output. This basic population is modeled with a 2nd order linear ODE coupled to a nonlinear static sigmoid function, the Wiener cascade (Figure 12(b)). In general, Freeman [39] (and many other papers from Freeman and co-workers) defines the following neuronal population levels (Figure 12(c)).

(1) K0: a subset of noninteracting neurons, either all excitatory ($\mathrm{K0}_e$) or all inhibitory ($\mathrm{K0}_i$). The smallest unit in Figure 12(c) signifies a K0 subset. In each K0 subset $j$, the wave variable $v_j$ is governed by a 2nd order ODE:
$\ddot v_j + (a+b)\dot v_j + ab\,v_j = ab\,F_j(t),$
with $a$ and $b$ rate constants. For the output there is a sigmoid function to convert the membrane potential to pulse rate: $q_j = Q(v_j) = Q_m\left(1 - e^{-(e^{v_j}-1)/Q_m}\right)$, with $Q_m$ the constant maximum pulse rate.

(2) KI: coupled K0 networks create the KI level. In Freeman’s terminology, two $\mathrm{K0}_e$ modules give a $\mathrm{KI}_e$ module, and a pair of $\mathrm{K0}_i$ modules gives one $\mathrm{KI}_i$ (Figure 12(c)). For each population $j$, the coupling is governed by $F_j(t)$, a function representing the incoming pulses originating from the connected populations. As shown above, the dynamics of the system’s basic K0 population is described by a 2nd order differential equation with a driving term (i.e., $F_j$). If we Laplace transform the ODE while assuming that the driving force is a unit impulse $\delta(t)$, we get $V_j(s) = \frac{ab}{(s+a)(s+b)}$, with $\frac{ab}{(s+a)(s+b)}$ the transfer function. The inverse transform of the transfer function, the UIR, is the dual exponential $\frac{ab}{b-a}\left(e^{-at} - e^{-bt}\right)$ (also used in other models such as Lopes da Silva et al. [35]).

(3) KII: this type of population arises when a $\mathrm{KI}_e$ and a $\mathrm{KI}_i$ set are coupled (Figure 12(c)). The distance between KII populations can be large enough that conduction delays cannot be neglected. These delays are indicated by the latencies L1–L4 in Figure 12(c).

(4) KIII: when KII sets are interconnected, a KIII set is created. Note that there is just one such set in Figure 12(c). In Freeman [39] there are three coupled KII sets: the olfactory bulb (OB), the anterior olfactory nucleus (AON), and the prepyriform cortex (PC). The connections in this set are longer range, so lags (latencies) were incorporated. These latencies are included in his expressions of the driving force by a backward counter J. For example, the effect of a subpopulation in AON on one in OB is included as a term of the form
$\frac{K}{9}\sum_{J=14}^{22} q_{\mathrm{AON}}(t - J).$
This term represents the weighted sum of the AON population’s pulse values in the past 14–22 steps. Note that the weighting coefficient $K$ is divided by 9, the number of backward steps that are included in the sum.

Simulations of Freeman’s model were implemented in Matlab and typical results are depicted in the bottom panel in Figure 12(c). To show that the system is initially quiet, we deliver a first stimulus a little after onset of the simulation and to demonstrate that the response of the active intact network is slightly different, we deliver another stimulus at about the halfway point of the simulation (arrows in Figure 12(c)).
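
A minimal sketch of a single K0 unit illustrates the Wiener cascade that underlies these simulations. The rate constants below are chosen to reproduce the eigenvalues of −0.22 and −0.72 per ms quoted in Section 6.2; the brief input pulse and the maximum rate Qm are arbitrary choices for this illustration, not Freeman’s values.

% k0_sketch.m - impulse response of a single (noninteracting) K0 set:
% a 2nd order linear ODE followed by Freeman's asymmetric sigmoid
a=0.22; b=0.72; Qm=5;               % rate constants (1/ms), assumed max rate
dt=0.01; T=50; N=round(T/dt);       % time base in ms
F=zeros(1,N); F(1:round(1/dt))=1;   % brief (1 ms) input pulse
x=0; z=0; v=zeros(1,N); q=zeros(1,N);
for i=1:N
  % x'' + (a+b)x' + a*b*x = a*b*F(t)  ->  eigenvalues -a and -b
  dz=a*b*F(i)-(a+b)*z-a*b*x;
  z=z+dz*dt; x=x+z*dt;
  v(i)=x;                                 % wave variable
  q(i)=Qm*(1-exp(-(exp(v(i))-1)/Qm));     % wave-to-pulse conversion
end
plot((1:N)*dt,v,(1:N)*dt,q); xlabel('t (ms)'); legend('wave v','pulse q')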

4.4. Model for the Electroencephalogram

The electrical signal that can be recorded from the scalp, the electroencephalogram (EEG), reflects the currents generated by the brain’s nerve cells. If we ignore electrical signals from other, more remote sources (e.g., muscle, heart) and nonbiological artifacts, we can assume that the EEG is a weighted sum of the activity of the underlying neural networks, in which the weights are determined by the geometry. The EEG signal is used both in research and clinical settings. Because a single EEG signal includes the activity of millions of nerve cells, the relationship between smaller networks of the brain and the EEG signal is not necessarily a trivial one. Therefore this relationship has been the target of many modeling studies; the framework developed by physicist Paul Nunez is an example. Because it is generally thought that the slower synaptic potentials, especially those of the neocortical pyramidal cells, contribute most to the EEG signals on the surface of cortex and scalp, Nunez’ model focused on describing synaptic activity across cortex (e.g., [31, 32]). The spatiotemporal model of cortical activity is based on the schematic of neocortex shown in Figure 13. This diagram shows the effect of action potential firing activity g produced by the left cube on the synaptic activity h of the right cube. The synaptic activity has subscripts e and i for the excitatory and inhibitory synapses, respectively. Both activity variables g and h are a function of location, $x'$ (left cube) and $x$ (right cube), and time t. The remaining variables in Figure 13 are v for conduction velocity and $p$ for external, subcortical input. Consequently, the delay for action potentials arriving at the right cube that were generated in the left cube is $|x - x'|/v$; therefore, the action potential firing function from the left cube arriving at the right cube is indicated as $g(x', t - |x-x'|/v)$. Furthermore, the connectivity within the cortex is governed by functions $R_e$ and $R_i$ for the excitatory and inhibitory connections, respectively. These functions also depend on the locations $x$, $x'$, and the fiber system (with conduction velocity $v$), that is, $R_e(x, x', v)$. The equation that describes the excitatory synaptic activity as a function of the input to the right cube can now be formulated as
$h_e(x,t) = p_e(x,t) + \int\!\!\int R_e(x,x',v)\,g\!\left(x', t - \frac{|x-x'|}{v}\right)dv\,dx'.$ (54)
For the inhibitory synaptic activity a simpler expression can be used, since there are only local inhibitory fibers, all with similar conduction velocity, that is, $R_i(x,x')$. Further, because of the local character of inhibition, one can neglect the delays for the inhibitory fibers; that is, now the action potential firing function is simply $g(x', t)$. For the inhibitory expression, we get
$h_i(x,t) = p_i(x,t) + \int R_i(x,x')\,g(x',t)\,dx'.$ (55)

The external input $p$ represents subcortical input, which is assumed constant for a given state. Because the physiological basis for this input is not obvious, Nunez dropped this term in later versions of the model. In the following, it is not very important, since the effect of $p$ vanishes when linearizing the expression. Note that in both (54) and (55) the connectivity functions $R_e$ and $R_i$ and/or the activity function g must contain a component that translates the action potential rate into a level of synaptic activity. In his development, Nunez assumes this component to be linear by using a simple gain/attenuation factor. The same can be said for the external input functions $p_e$ and $p_i$. They implicitly include a conversion from action potential rate to synaptic activity; the external input in terms of the action potential rate must be corrected to generate synaptic activity and (again) if this relationship is linear, one can use a simple factor. The above expressions in (54) and (55) relate spike input to synaptic activity (pulse-to-wave conversion); of course there is also the effect of the synaptic activity on the spike output (wave-to-pulse conversion). The nonlinear dependence of the action potential rate function g on both synaptic activities $h_e$ and $h_i$ is commonly modeled by a sigmoid function (e.g., [32]). Nunez linearizes this relationship about an assumed fixed state $g^*$. If we consider a small change of the action potential rate $\delta g = g - g^*$ around state $g^*$ and relate this to small changes in the synaptic activities $\delta h_e$ and $\delta h_i$, we get $\delta g = \frac{\partial g}{\partial h_e}\delta h_e + \frac{\partial g}{\partial h_i}\delta h_i$. Defining the partial derivatives evaluated at the fixed state as constant gains $Q_e$ and $Q_i$, we can simplify notation and get $\delta g = Q_e\,\delta h_e + Q_i\,\delta h_i$. If we now use (54) to write an expression for $\delta h_e$ and substitute the linearized expression for $\delta g$, we obtain (Equation (11.5) in [32]):
$\delta h_e(x,t) = \int\!\!\int R_e(x,x',v)\left[Q_e\,\delta h_e + Q_i\,\delta h_i\right]\!\left(x', t - \frac{|x-x'|}{v}\right)dv\,dx'.$ (56)
Following the same procedure for (55) gives (Nunez’ Equation (11.6)):
$\delta h_i(x,t) = \int R_i(x,x')\left[Q_e\,\delta h_e + Q_i\,\delta h_i\right](x',t)\,dx'.$ (57)
One of the strong aspects of Nunez’ model is that it acknowledges different conduction velocities; the weak part is that it linearizes the relationship between synaptic activity and action potential rate, and vice versa.
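
The linearization step is easy to visualize. In the sketch below, the logistic sigmoid and all parameter values are illustrative assumptions, not Nunez’ actual conversion function; it simply shows the tangent-line approximation about a fixed operating state.

% sigmoid_linearization.m - linearize a wave-to-pulse sigmoid about a fixed state
gmax=100; r=0.1; h0=0;                        % assumed max rate, slope, midpoint
S=@(h) gmax./(1+exp(-r*(h-h0)));              % sigmoid conversion g = S(h)
hstar=10;                                     % assumed operating state h*
slope=(S(hstar+1e-3)-S(hstar-1e-3))/2e-3;     % numerical derivative Q at h*
h=-60:60;
plot(h,S(h),h,S(hstar)+slope*(h-hstar),'--')
xlabel('synaptic activity h'); ylabel('firing rate g')
legend('sigmoid','linearization at h^*')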

4.5. A Field Equation for the EEG

An excellent summary of macroscopic modeling is provided by Jirsa and Haken [45]. These authors integrate temporal and spatial aspects: the firing rate models first described by Wilson and Cowan [29, 30] and the synaptic fields described by Nunez (e.g., [31, 32]). Their approach is straightforward. First they develop expressions for the synaptic and action potential activities. Then they simplify by zooming out, determine the expression for the synaptic field, and write that expression in the form of a convolution operation. Finally, they take the Fourier transform of the convolution and transform this result back into the spatiotemporal domain as a partial differential equation for the synaptic field of the excitatory neurons. Like Nunez (Section 4.4), Jirsa and Haken assume that this field is representative of the EEG.

4.5.1. Spatiotemporal Expression for the Synaptic Field

The excitatory (e) and inhibitory (i) populations each have a pulse rate, $E(x,t)$ and $I(x,t)$, similar to the approach by Wilson and Cowan [29], and a wave expression, $\psi_e(x,t)$ and $\psi_i(x,t)$, similar to the development by Nunez. In the above, the spatial component is three dimensional; in the remainder of the text, the spatial variable is reduced to one dimension, represented by $x$. In the following text, we follow the derivation of the macroscopic field equation as it is visualized in Figure 14. In order to find the expression for the synaptic activity, we determine the synaptic component (top, Figure 14), while including the spike rate generator (bottom, Figure 14). The left part of Figure 14 shows the details, whereas the right part shows the zoomed-out version representing a macroscopic view where some detail is lost.

(1) Synaptic Component. The synaptic activity resulting from the incoming spike activity for the excitatory population is
$\psi_e(x,t) = \int_\Gamma f_e(x,x')\,S_e\!\left[E\!\left(x', t - \frac{|x-x'|}{v_e}\right)\right]dx'.$ (59)
For the inhibitory population a similar expression can be presented. In the equations for synaptic activity, the contribution from the pulse activity over the whole cortex is included: at location $x$, this contribution is the product of all converted cortical activity multiplied by the connection distribution for $x$. Synaptic activity is then computed by integrating (using integration variable $x'$) over the cortical surface $\Gamma$ (Figure 14). The conversion function $S_e$ determines how incoming action potentials originating from excitatory cells are converted into excitatory synaptic activity (excitatory postsynaptic potentials). This is represented by a sigmoid function $S$ that can be linearized over most of the trajectory using the constant slope $a_e$:
$S_e\!\left[E\!\left(x', t - \frac{|x-x'|}{v_e}\right)\right] \approx a_e\,E\!\left(x', t - \frac{|x-x'|}{v_e}\right).$ (60)
Note that the function $E(x', t - |x-x'|/v_e)$ describes all action potentials generated in the past that could have traveled to location $x$ originating from location $x'$ (Figure 14, top row) in a time interval $|x-x'|/v_e$, with $v_e$ the conduction velocity. The ones that have really arrived at $x$ depend on the connectivity between $x$ and $x'$; this is described by the function $f_e(x,x')$.

(2) Spike Rate Component. The expression for the spike rate for the excitatory population $E$ is similar to (59):
$E(x,t) = \int_\Gamma g_e(x,x')\,H_e\!\left(x', t - \frac{|x-x'|}{v_e}\right)dx'.$ (61)
The output conversion $H_e$ (wave-to-pulse) becomes an expression that reflects the conversion from (a) the excitatory synapses, (b) the inhibitory synapses, and (c) a term for the extracortical activity. The synaptic activities of the three wave sources are simply added, and a (static) nonlinearity converts the sum to pulse rate. As in the expression in (60), there is a temporal delay term that corrects for conduction velocity. The expression for the wave-to-pulse conversion becomes
$H_e\!\left(x', \tilde t\,\right) = H_e\!\left[\psi_e(x',\tilde t\,) - \psi_i(x',\tilde t\,) + p_e(x',\tilde t\,)\right], \quad \tilde t = t - \frac{|x-x'|}{v_e}.$ (62)

The inclusion of a conduction delay is correct when assuming active propagation of synaptic input; for passive propagation (at the speed of light) such a delay is not needed. For the remainder of the derivation by Jirsa and Haken [45] this point is irrelevant, because these delays will be neglected. The conversion function $H_e$ describes the contributions from all cortical areas to location $x$, and the distribution function $g_e$ in (61) determines the strength of the connection.

4.5.2. Zooming out

In the model, cortical connectivity is described by exponential distributions of the form
$f(x,x') = \frac{1}{2\sigma}\,e^{-|x-x'|/\sigma},$
functions where the area under the curve evaluates to unity: $\int_{-\infty}^{\infty}\frac{1}{2\sigma}e^{-|x-x'|/\sigma}\,dx' = 1$. The advantage is that, when we zoom out to a global overview of the cortex, the local distributions can be replaced by delta functions. In the model, the distributions $f_i$, $g_e$, and $g_i$ are represented by a delta function because they are local (see, for example, $f_i$ in Figure 14, bottom row). This procedure for $f_i$ is depicted in Figure 14, bottom row: $f_i(x,x') \to \delta(x-x')$. Distribution $f_e$, describing the excitatory connections over a longer range, remains in the form of the exponential expression (Figure 14, top row).

(1) The Expression for $\psi_e$. Now we use the above to formulate the expression for the synaptic activity $\psi_e$, because this variable is considered proportional to the EEG. In this approach, we focus on solving for the excitatory component, since this is considered the major contributor to the EEG signal. First we substitute the linearized version of (60) into (59):
$\psi_e(x,t) = a_e\int_\Gamma f_e(x,x')\,E\!\left(x', t - \frac{|x-x'|}{v_e}\right)dx'.$ (63)
In this expression, we substitute (61) for $E$, while changing the integration variable of (61) from $x'$ to $x''$. Subsequently we substitute the expression for $H_e$ from (62) and replace $g_e$ with a delta function:
$\psi_e(x,t) = a_e\int_\Gamma f_e(x,x')\int_\Gamma \delta(x'-x'')\,H_e\!\left[\psi_e - \psi_i + p_e\right]\!\left(x'', \tilde t\,\right)dx''\,dx'.$ (64)
Now we apply the sifting property to evaluate the 2nd integral and get
$\psi_e(x,t) = a_e\int_\Gamma f_e(x,x')\,H_e\!\left[\psi_e - \psi_i + p_e\right]\!\left(x', t - \frac{|x-x'|}{v_e}\right)dx'.$ (65)
For the inhibitory population we follow a similar procedure and get
$\psi_i(x,t) = a_i\int_\Gamma f_i(x,x')\,H_i\!\left[\psi_e - \psi_i + p_i\right](x',t)\,dx'.$ (66)
If we now evaluate the integral in this expression, while replacing the distribution function $f_i$ with the delta function $\delta(x-x')$ so we can use the sifting property, we obtain the result for the inhibitory synaptic activity: $\psi_i(x,t) = a_iH_i\!\left[\psi_e(x,t) - \psi_i(x,t) + p_i(x,t)\right]$. Subsequently we follow the procedure by Jirsa and Haken [45], take only the linear aspect of the $H_i$ function using a slope $b_i$, and finally solve for $\psi_i$:
$\psi_i = \frac{a_ib_i}{1 + a_ib_i}\left(\psi_e + p_i\right).$ (67)
Substitution of (67) into (65) gives
$\psi_e(x,t) = a_e\int_\Gamma f_e(x,x')\,H_e\!\left[\psi_e - \frac{a_ib_i}{1+a_ib_i}\left(\psi_e + p_i\right) + p_e\right]\!\left(x', t - \frac{|x-x'|}{v_e}\right)dx'.$ (68)

Using the simplifications indicated above (collecting the $\psi_e$ terms and combining the external inputs into a single term $p = p_e - \frac{a_ib_i}{1+a_ib_i}p_i$), we have the expression for the excitatory synaptic activity:
$\psi_e(x,t) = a_e\int_\Gamma f_e(x,x')\,H_e\!\left[\frac{1}{1+a_ib_i}\,\psi_e + p\right]\!\left(x', t - \frac{|x-x'|}{v_e}\right)dx'.$ (69)

4.5.3. Expression for $\psi_e$ as a Convolution

The next step is to write the result for $\psi_e$ in the form of a convolution. First, the notation for the integrand in (69) can be simplified by defining
$\rho(x,t) \equiv a_e\,H_e\!\left[\frac{1}{1+a_ib_i}\,\psi_e(x,t) + p(x,t)\right],$ (70)
so that $\psi_e(x,t) = \int_\Gamma f_e(x,x')\,\rho\!\left(x', t - |x-x'|/v_e\right)dx'$. Because the excitatory connections cover a larger area, its distribution function cannot be replaced by a delta function. Therefore we use the exponential distribution for $f_e$ and get: $f_e(x,x') = \frac{1}{2\sigma_e}e^{-|x-x'|/\sigma_e}$. Although it seems that we make things more complicated, we rewrite the delayed contribution as an integral over time: $\rho\!\left(x', t - \frac{|x-x'|}{v_e}\right) = \int \rho(x',t')\,\delta\!\left(t - t' - \frac{|x-x'|}{v_e}\right)dt'$. Now we substitute this result and the expression in (70) in (69) and rearrange the order of the terms:
$\psi_e(x,t) = \int\!\!\int \frac{1}{2\sigma_e}\,e^{-|x-x'|/\sigma_e}\,\delta\!\left(t - t' - \frac{|x-x'|}{v_e}\right)\rho(x',t')\,dt'\,dx'.$ (71)
The part $\frac{1}{2\sigma_e}e^{-|x-x'|/\sigma_e}\,\delta\!\left(t - t' - \frac{|x-x'|}{v_e}\right)$ can be interpreted as a unit impulse response or Green’s function $G(x-x', t-t')$.

Now we complete this step and write the expression in the form of a convolution with respect to space and time:
$\psi_e(x,t) = \int\!\!\int G(x-x', t-t')\,\rho(x',t')\,dt'\,dx' = G \ast \rho.$ (72)

4.5.4. From Convolution to the Field Equation

Now we take advantage of the convolution form of (72). With the spatiotemporal Fourier transforms of $\psi_e$, $G$, and $\rho$, we can write the transform of the convolution as a product in the frequency domain. To summarize, we have the following transform pairs:
$\psi_e(x,t) \leftrightarrow \Psi_e(k,\omega), \quad G(x,t) \leftrightarrow \hat G(k,\omega), \quad \rho(x,t) \leftrightarrow \hat\rho(k,\omega).$ (73)
Using $\psi_e = G \ast \rho$, we get:
$\Psi_e(k,\omega) = \hat G(k,\omega)\,\hat\rho(k,\omega).$ (74)
Note that in the temporal Fourier transform $(i\omega)$ and $(i\omega)^2$ represent the transformed first and second derivatives in the time domain, while the representatives of the first and second spatial derivatives are $(ik)$ and $(ik)^2$. Transforming (74) back, we complete our last step and get the equation for the spatiotemporal excitatory synaptic activity:
$\frac{\partial^2\psi_e}{\partial t^2} + 2\omega_0\frac{\partial\psi_e}{\partial t} + \left(\omega_0^2 - v_e^2\frac{\partial^2}{\partial x^2}\right)\psi_e = \left(\omega_0^2 + \omega_0\frac{\partial}{\partial t}\right)\rho(x,t), \quad \omega_0 = \frac{v_e}{\sigma_e}.$ (75)
An example of simulated activity is depicted in Figure 15. In this example, we show the model’s response to a half sine wave pulse in the center of a cortical surface. As expected, the evoked activity is damped over both space and time. The result in (75) is also similar to earlier results obtained for linearized cellular and network models: if we ignore the spatial component in (75), we get a second order ODE with a force term, similar to (26) and (44), although in this case a nonlinearity is included in the factor $\rho$ of the force term.
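
To illustrate the behavior of (75), the sketch below integrates a damped wave equation of the same family with finite differences, replacing the (nonlinear) force term by a prescribed half sine pulse in the center of the domain; all parameter values are illustrative and not taken from [45].

% field_sketch.m - finite-difference sketch of a damped wave equation:
% psi_tt = v^2*psi_xx - 2*w0*psi_t - w0^2*psi + f(x,t)
v=1; w0=5; Lx=10; nx=201; dx=Lx/(nx-1);
dt=0.4*dx/v; nt=150;                 % CFL-stable time step
psi=zeros(1,nx); psi_old=psi; Psi=zeros(nt,nx);
mid=ceil(nx/2);
for k=1:nt
  f=zeros(1,nx);
  if k*dt<0.2, f(mid)=100*sin(pi*k*dt/0.2); end  % half sine pulse at center
  lap=[0, diff(psi,2), 0]/dx^2;                  % 1D Laplacian, clamped ends
  psi_new=(2*psi-psi_old+dt^2*(v^2*lap-w0^2*psi+f) ...
          +w0*dt*psi_old)/(1+w0*dt);             % semi-implicit damping term
  psi_old=psi; psi=psi_new; Psi(k,:)=psi;
end
imagesc((1:nt)*dt,linspace(0,Lx,nx),Psi'); axis xy
xlabel('t'); ylabel('x'); title('\psi_e damped over space and time')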

5. Models with a Stochastic Component

With the exception of the spin model and Hopfield’s approach (Section 3.1), all models we discussed are deterministic. Most of the deterministic approaches can easily be modified to include a stochastic component [48]. Such a component may represent intrinsic noise in the membrane and/or the synaptic input originating from sources external to the modeled unit. In the 1960s and 1970s, membrane noise was investigated to quantify electrical channel properties (e.g., [75]). Especially when modeling a network embedded in the brain, where the input from the rest of the brain is generated by enormous networks, a stochastic approach to this input is appropriate, and also significant because it affects network function (e.g., [76]). A critical shortcoming of the deterministic models (e.g., the mean field models discussed in Section 4), which can be mitigated with a stochastic approach, is the absence of fluctuations and correlations of the neural activity, especially spike trains (e.g., [77, 78]). Moreover, statistical approaches to spike trains are applied to study information coding in networks in general [47, 79]. Mathematically, stochastic modeling often results in a diffusion equation approach, where the models are frequently approximated by either the Fokker-Planck and/or the Langevin equation (e.g., Appendix in [48]). Further discussion of these approaches is beyond the scope of this paper.

Recently, Benayoun et al. [19] and Wallace et al. [20] used a network of switches in which the update rule was asynchronous and stochastic, known as Gillespie’s exact stochastic simulation algorithm [80, 81]. To show this approach, I describe a simplified version. Active neurons ($A$) are created from an infinite pool of inactive ones ($Q$) at a rate $\tilde f$, and they become inactive again with a rate $\alpha$. The rate function $\tilde f$ depends on the activity level in the network; however, for the sake of the example, we will not worry about this “detail” here. Rate $\alpha$ is the intrinsic rate at which active cells become inactive (this also includes the refractory period). The deterministic kinetic rate equation for this process is
$\frac{dA}{dt} = \tilde f Q - \alpha A.$ (77)
We assume here that Q is available in large excess (i.e., an infinite pool) so that it can be considered constant. This equation can be solved both analytically and numerically.
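
For example, with constant $\tilde f Q$ the analytic solution is a simple exponential approach to the equilibrium $\tilde f Q/\alpha$. The sketch below compares this analytic solution with a forward Euler integration; all rate values are illustrative.

% rate_sketch.m - kinetic rate equation dA/dt = f*Q - alpha*A,
% solved analytically and with forward Euler
f=5; Q=10; alpha=10; A0=0;          % illustrative rates (1/s) and pool size
dt=1e-3; t=0:dt:1;
Aan=(f*Q/alpha)+(A0-f*Q/alpha)*exp(-alpha*t);   % analytic solution
A=zeros(size(t)); A(1)=A0;
for i=1:numel(t)-1
  A(i+1)=A(i)+(f*Q-alpha*A(i))*dt;  % Euler step
end
plot(t,Aan,'k',t,A,'r--'); xlabel('t (s)'); ylabel('A(t)')
legend('analytic','Euler')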

The next step is to consider the so-called master equation for this process. In contrast to the deterministic approach above, we now use a stochastic approach and describe the probability $P_n(t)$ that there are $n$ active neurons (i.e., $n$ cells from our infinite pool are in the active state $A$). We may visualize this by a line on which we see all the possible states for the number of active neurons, that is, $n = 0, 1, 2, \ldots$. If we update the system fast enough, we can safely assume that a state change only involves a single step, $n \to n \pm 1$.

Under this assumption, the probabilities describing the dynamics around state $n$ are easily seen from the line of possible states: state $n$ is entered from $n-1$ by activation (rate $\tilde f Q$) and from $n+1$ by decay (rate $\alpha(n+1)$), and it is left by activation toward $n+1$ (rate $\tilde f Q$) or by decay toward $n-1$ (rate $\alpha n$). This leads to the master equation describing the dynamics of the probability $P_n$ for the state with $n$ active neurons:
$\frac{dP_n}{dt} = \tilde f Q\,P_{n-1} + \alpha(n+1)\,P_{n+1} - \left(\tilde f Q + \alpha n\right)P_n.$ (79)
To simplify this further, we can apply shift operators $\mathbb{E}$ and $\mathbb{E}^{-1}$ that operate on a function of $n$ and increase or decrease the number $n$ by one, respectively [82]. This allows one to rewrite (79) as a function of $n$:
$\frac{dP_n}{dt} = \left(\mathbb{E}^{-1} - 1\right)\left[\tilde f Q\,P_n\right] + \left(\mathbb{E} - 1\right)\left[\alpha n\,P_n\right].$ (80)
The problem is that in most cases the master equation is difficult to solve analytically or cannot be solved at all. Gillespie [80, 81] formulated a remarkable and exact algorithm to simulate chemical systems in a stochastic fashion. Within this approach, one starts from the system in a particular state at time t and determines the probability density function (PDF) for the next reaction. Briefly, consider $M$ possible reactions $R_\mu$; if one simulates such a system over time, we only need to know (a) when the next reaction (any of the $R_\mu$) will occur and (b) what kind of reaction ($R_\mu$) it will be.

In order to resolve these questions, Gillespie follows the system of reactions from any time $t$ to $t + \tau + d\tau$. Accordingly, he formulates the joint probability $P(\tau,\mu)\,d\tau$ as the probability that none of the reactions occurs in the interval between $t$ and $t+\tau$, and that reaction $R_\mu$ does occur in the interval between $t+\tau$ and $t+\tau+d\tau$ (Figure 16(a)). It is straightforward to show that $P(\tau,\mu)$ is characterized by the product of both probabilities. Let us first determine the first component, that no reaction occurs in the interval between $t$ and $t+\tau$. Consider a time axis set at 0 immediately after one of the reactions occurred, and let $a_\nu\,d\tau'$ be the probability that reaction $R_\nu$ occurs in an infinitesimal interval $d\tau'$. Consequently, the probability that none of the reactions occurs in $d\tau'$ is
$1 - \sum_{\nu=1}^{M} a_\nu\,d\tau'.$ (81)
Thus one can define the probability of no reaction at $\tau + d\tau'$ as the probability of no reaction at $\tau$ multiplied by the expression in (81),
$P_0(\tau + d\tau') = P_0(\tau)\left(1 - \sum_\nu a_\nu\,d\tau'\right),$ (82)
which can be written as
$\frac{dP_0(\tau)}{d\tau} = -a_0\,P_0(\tau),$ (83)
where $a_0$ is the sum over all $a_\nu$. Integration of (83) from 0 to $\tau$ results in
$P_0(\tau) = e^{-a_0\tau}.$ (84)
Component (b), that reaction $R_\mu$ does occur in the interval between $\tau$ and $\tau + d\tau$, is given by $a_\mu\,d\tau$. Thus the probability $P(\tau,\mu)$ is the product of this result and (84); therefore
$P(\tau,\mu) = a_\mu\,e^{-a_0\tau}$ (85)
for all values of $\mu$ (the reaction number) and time steps $\tau \ge 0$. Now we go back to the initial questions (a) and (b) above, and we state that
$P(\tau,\mu) = P(\tau)\,P(\mu).$ (86)
The first factor indicates the probability density for the time of the next reaction, no matter which one; therefore
$P(\tau) = \sum_{\mu=1}^{M} P(\tau,\mu) = a_0\,e^{-a_0\tau}.$ (87)
From substitution of (87) in (85), we have
$P(\mu) = \frac{P(\tau,\mu)}{P(\tau)} = \frac{a_\mu}{a_0}.$ (88)
So now we have the probabilities for both $\tau$ and $\mu$, and the only question that remains is how we use this knowledge to implement a simulation. Usually the particular probability functions in (87) and (88) are not readily available in a simulation environment, but most computers do have a random number generator based on a uniform distribution between 0 and 1. In a simulation, the uniform distribution can be used to generate values taken from any arbitrary PDF by using the uniform distribution to pick random points on the cumulative distribution of the target PDF. The procedures for $\tau$ and $\mu$ are shown in Figures 16(c) and 16(d). Because $\tau$ is a continuous variable, we can compute the cumulative function $F(\tau)$ by integration of (87):
$F(\tau) = \int_0^\tau a_0\,e^{-a_0\tau'}\,d\tau' = 1 - e^{-a_0\tau}.$ (89)
In order to determine the values for $\tau$ and $\mu$, we generate two random numbers, $r_1$ and $r_2$, respectively. Now we can connect the random number $r_1$ to the cumulative distribution $F(\tau)$ (Figure 16(c)). However, since the cumulative function ranges between 0 and 1, we can also connect $r_1$ directly to $e^{-a_0\tau}$ (Figure 16(c); note that 0 and 1 are inverted in this case). If we now solve for $\tau$, we get
$\tau = \frac{1}{a_0}\ln\!\left(\frac{1}{r_1}\right).$ (90)
Subsequently we create the cumulative distribution for the reactions from (88) and use $r_2$ to update the system (Figure 16(d)). This procedure of picking a time step and updating the system can be repeated for the simulation epoch. Examples of two simulations of our simplified system are depicted in Figure 16(b). For the sake of this example (see Matlab script in Appendix B), we update individual switches (neurons) at each time step. In homogeneous all-to-all networks, the simulation algorithm for the number of active neurons in each population may be simplified along the lines of Gillespie's original presentation for a well-mixed chemical system, since the downward and upward transition rates are identical for all neurons. This simplification will result in a faster computation, since the time steps will increase as compared to our approach in the example.

Because the master equation can be used to recover the deterministic rate equation, the stochastic approach is more general than the mean field models we discussed in Section 4. We recover the mean field equation by determining the mean number of active neurons from the master equation via its definition
$\langle n \rangle = \sum_{n=0}^{\infty} n\,P_n.$ (91)
In our case we assumed an infinite pool of nerve cells, so there is no finite limit to the number of active neurons; therefore the sum in (91) is taken from $n = 0$ to $\infty$. Other models have assumed a limited number of inactive and active nerve cells.

We now plug (91) into (79) (i.e., we multiply each term by $n$ and sum over all $n$), and we get
$\frac{d\langle n\rangle}{dt} = \sum_{n=0}^{\infty} n\,\tilde f Q\,P_{n-1} - \sum_{n=0}^{\infty} n\,\tilde f Q\,P_n + \sum_{n=0}^{\infty} n\,\alpha(n+1)\,P_{n+1} - \sum_{n=0}^{\infty} \alpha n^2\,P_n.$ (92)
This generates an expression for the time derivative of the mean (the term left of the equal sign). We now substitute $m = n-1$ and $m = n+1$ for the summation indices of the 1st and 3rd terms, respectively, and we have
$\frac{d\langle n\rangle}{dt} = \sum_{m=0}^{\infty} (m+1)\,\tilde f Q\,P_m - \sum_{n=0}^{\infty} n\,\tilde f Q\,P_n + \sum_{m=0}^{\infty} (m-1)\,\alpha m\,P_m - \sum_{n=0}^{\infty} \alpha n^2\,P_n.$ (93)
Note that we changed the range over which we sum in the 1st and 3rd terms: we now sum again from 0 to $\infty$. We can make this change in the summation interval since $(m-1)\alpha m\,P_m = 0$ for $m = 0$ in the 3rd term, and $P_{-1} = 0$ in the 1st term. Furthermore, we expand the first and third terms and get
$\frac{d\langle n\rangle}{dt} = \sum m\,\tilde f Q\,P_m + \sum \tilde f Q\,P_m - \sum n\,\tilde f Q\,P_n + \sum \alpha m^2 P_m - \sum \alpha m\,P_m - \sum \alpha n^2 P_n.$ (94)
Terms 1 and 3, 4 and 6 cancel and terms 2 and 5 remain, that is,
$\frac{d\langle n\rangle}{dt} = \tilde f Q\sum_m P_m - \alpha\sum_m m\,P_m.$ (95)
Using (91), and knowing that all probabilities must add up to one, we obtain an expression similar to the kinetic rate equation (77), where the mean of $n$ is the variable $A$:
$\frac{d\langle n\rangle}{dt} = \tilde f Q - \alpha\langle n\rangle.$ (96)
Thus the mean number of active neurons in the master equation of the process produces the kinetic rate equation. Accordingly, the work by Benayoun et al. [19] and Wallace et al. [20] showed that the mean activity generated by the stochastic model, in which f depended on the network activity, corresponded to the mean field Wilson-Cowan model (Section 4.2). In addition, the simulations of their stochastic model made predictions about the fluctuations of the neural activity. These fluctuations followed avalanche statistics as observed in real neuronal networks and even in the scalp or intracranial EEG [83–85].
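
This equivalence can also be checked numerically: integrating a truncated version of the master equation (79) and tracking ⟨n⟩ reproduces the solution of the rate equation (96). The truncation level and all rate values in the sketch below are arbitrary choices for this check.

% master_mean.m - integrate the truncated master equation and compare <n>
% with the kinetic rate equation
f=5; Q=10; alpha=10; nmax=50;       % truncate the state space at nmax
P=zeros(nmax+1,1); P(1)=1;          % start with zero active neurons
dt=1e-4; T=1; n=(0:nmax)';
nt=round(T/dt); Amean=zeros(1,nt);
for k=1:nt
  up=f*Q*P;                         % activation flux out of each state
  down=alpha*n.*P;                  % decay flux out of each state
  dP=-up-down;
  dP(2:end)=dP(2:end)+up(1:end-1);      % gain from state n-1 (activation)
  dP(1:end-1)=dP(1:end-1)+down(2:end);  % gain from state n+1 (decay)
  P=P+dP*dt;
  Amean(k)=n'*P;                    % mean <n> from the distribution
end
t=(1:nt)*dt;
plot(t,Amean,t,(f*Q/alpha)*(1-exp(-alpha*t)),'--')
legend('<n> from master equation','rate equation'); xlabel('t (s)')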

A recent, novel approach to solving the master equation for a neuronal network model uses the equation of motion of the generating function in operator notation:
$|G(t)\rangle = \sum_{n} P_n(t)\,|n\rangle.$ (97)
Here, we use the Dirac bra-ket notation (for a didactic overview of the notation see [86]). The ket $|G\rangle$ represents the probability for each state $n$, represented by the ket $|n\rangle$, in an infinite dimensional (Hilbert) space. The advantage of this approach is that the extensive mathematics developed for the operator notation can now be used to analyze neuronal networks (e.g., [49, 87]). Using (79), multiplying with the ket $|n\rangle$ and summing over $n$, we get
$\frac{d}{dt}\sum_n P_n|n\rangle = \sum_n \tilde f Q\,P_{n-1}|n\rangle + \sum_n \alpha(n+1)\,P_{n+1}|n\rangle - \sum_n \left(\tilde f Q + \alpha n\right)P_n|n\rangle.$ (98)
Now we use the equality in (97), substitute $m = n-1$ and $m = n+1$ for the summation indices of the first and second sums, respectively, and rewrite the expression in (98):
$\frac{d}{dt}|G\rangle = \sum_m \tilde f Q\,P_m|m+1\rangle + \sum_m \alpha m\,P_m|m-1\rangle - \sum_n \left(\tilde f Q + \alpha n\right)P_n|n\rangle.$ (99)
For the same reasons as we did above, we summate $m$ from 0 to $\infty$ and, in addition, we use the raising and lowering operators $a^\dagger$ and $a$:
$a^\dagger|n\rangle = |n+1\rangle, \qquad a|n\rangle = n\,|n-1\rangle.$ (100)
Note that the eigenvalues are 1 and $n$ instead of the conventional $\sqrt{n+1}$ and $\sqrt{n}$.

Now we get
$\frac{d}{dt}|G\rangle = \tilde f Q\,a^\dagger|G\rangle + \alpha\,a|G\rangle - \tilde f Q\,|G\rangle - \alpha\,a^\dagger a|G\rangle.$ (101)
Using (97), we can simplify this to
$\frac{d}{dt}|G\rangle = \left[\tilde f Q\left(a^\dagger - 1\right) + \alpha\left(1 - a^\dagger\right)a\right]|G\rangle.$ (102)
With a bit of algebra, we may rewrite this and recognize it as a linear equation of motion:
$\frac{d}{dt}|G\rangle = L\,|G\rangle, \qquad L = \tilde f Q\left(a^\dagger - 1\right) + \alpha\left(1 - a^\dagger\right)a.$ (103)
If we use the eigenfunctions $|\phi_j\rangle$ of L, we can expand
$|G(t)\rangle = \sum_j c_j(t)\,|\phi_j\rangle.$ (104)
For each eigenfunction there is a corresponding eigenvalue $\lambda_j$ of $L$, and therefore
$\frac{dc_j}{dt} = \lambda_j\,c_j.$ (105)
The solution of this differential equation is
$c_j(t) = c_j(0)\,e^{\lambda_j t}.$ (106)
Now we can retrieve the PDF by projecting on the bra $\langle n|$, and using the expressions in (104)–(106):
$P_n(t) = \langle n|G(t)\rangle = \sum_j c_j(0)\,e^{\lambda_j t}\,\langle n|\phi_j\rangle.$ (107)
The coefficients $c_j(0)$ are determined from the initial distribution $P_n(0)$.

6. Epileptic Seizures

Epilepsy is a serious neurological disorder characterized by spontaneous recurrent seizures. In the electroencephalogram (EEG) of patients with epilepsy, one may observe seizures (ictal events) and interictal events, for example, epileptic spikes or spike-waves. The real problem is that about 30% of the current population of 60 million patients with epilepsy do not respond to any treatment [88]. This problem is mostly due to an incomplete understanding of the mechanisms that underlie this pathology (e.g., [89]). As is the case for many other neurological diseases, this directly relates to a poor understanding of network function in general. Because of the lack of experimental tools for studying network behavior at sufficient scale with the associated detail (e.g., [70, 90]), there is a significant role for modeling in this field (e.g., [3, 5, 6, 27, 41, 61, 91–102]).

6.1. Modeling Epileptic Activity

Although there are many clinical types of seizures [104], the seizure activity in the EEG is often characterized by rhythmic activity usually lasting for minutes. Examples of typical seizure onset activity are depicted in Figure 17. In contrast, interictal spikes and waves are sudden, short bursts of electrical activity. Although there are exceptions, in between the epileptiform events (i.e., seizures, spikes, waves), the EEG of patients with epilepsy cannot be distinguished from normal EEG patterns. Because epilepsy is characterized by EEG patterns representing large networks, it is impossible to define epileptiform activity at the level of single neurons or even small networks. Although there seem to be cellular activity patterns frequently found in epileptic brain tissue, such as the paroxysmal depolarization shift (PDS), it is by no means clear how these PDSs relate to the typical network phenomena in epilepsy. The oscillatory activity of cells and small networks is of general interest to the neuroscientist, but equally of interest in epilepsy research. However, at this level, the distinction between normal and pathologic oscillatory activity is undefined. At the scale of large networks, as reflected in the EEG signal, the distinction between normal and abnormal is not without problems, but definitely easier to make. This makes the study of networks at mesoscopic and macroscopic scales of particular interest in epilepsy research. Based on the activity patterns observed in patients with epilepsy, an effective network model should be able to explain (1) normal and epileptiform states, (2) prototypical oscillations and bursts, and (3) transitions between the states.

From these properties, a few general conclusions about the nature of the model can be drawn. First, a model of network activity must be at least two dimensional to explain oscillations. Second, transitions between normal and epileptiform activity can be modeled by a bifurcation in a nonlinear dynamical system. Of course, there are many bifurcations that could explain a sudden onset and offset of seizure-like oscillations. Examples using the simplest candidates, the co-dimension-1 sub- and supercritical Hopf bifurcations that transition a system between steady state and oscillatory behavior [105], are shown in Figure 17. In Figure 17, the two top panels show the bifurcation superimposed on seizure activity; the bottom panel indicates that, in the case of the subcritical scenario, the transition can also occur if the system exceeds the boundary between the basins of the attractors (one attractor being the stable node and the other the stable limit cycle) [103].

Some of the models employed in epilepsy research go beyond the minimalistic approach of two dimensions. Detailed simulation models built from compartmental neuron models with realistic ion channels may have hundreds of parameters. Independent of the level of complexity of the network model, it is important to know whether the neuron model itself is capable of oscillatory activity, so that the individual network node can act as a pacemaker for oscillation.

6.2. Population Models and Epileptiform Activity

Here we examine the population models we described in Section 4 for their capability of generating seizure-like behavior. The Wilson and Cowan [29] model is both nonlinear and two-dimensional and is therefore capable of showing a range of behaviors that include limit cycles (Figure 10) and transitions from stable to oscillatory behavior. The Wilson and Cowan model thus satisfies the three criteria of an effective network model for studying epilepsy described in the previous section. In addition, it is also a minimal model, because the two dimensions ($E$ and $I$, (35), (36)) are minimally required to include oscillation, and the nonlinearities (the sigmoids $S_e$ and $S_i$) are essential for the existence of sudden transitions (bifurcations) between nonoscillatory and oscillatory states. An example of how a gradually changing excitatory coupling (the strength of the connections from the excitatory population, represented by the constants $c_1$ and $c_3$ in Figure 9) can cause sudden transitions of the population activity is depicted in Figure 18. Interestingly, it can be seen that, depending on the initial state, oscillations may occur with both increasing and decreasing strength of the coupling (arrows, Figure 18).

The modeling approach by Lopes da Silva et al. [35] was used to formulate the Jansen neural mass model, which was later examined for epileptic behavior by Grimbert and Faugeras [38]. Several versions of neural mass models were recently investigated [41, 91, 93, 98, 102]. Because these neural mass models are at least two dimensional and nonlinear (see for example (52)), they are capable of displaying transitions between activities associated with a stable node and oscillations, or even chaotic behavior if they have three or more dimensions. The neural mass models by Freeman [39] are capable of displaying these behaviors as well, since they are both relatively high dimensional and nonlinear. However, due to Freeman’s parameter choices, the isolated K0 populations (Section 4.3.2) show stable behavior only. The nondriven K0 system obeys the ODE $\ddot v - \tau\dot v + \Delta v = 0$, in which $\tau$ is the sum and $\Delta$ the product of the eigenvalues. In Freeman’s simulations the eigenvalues are $-0.72$ and $-0.22$; thus their sum $\tau = -0.94$ and product $\Delta \approx 0.16$. Since $\tau$ is negative, $\Delta$ positive, and $\tau^2 - 4\Delta > 0$, the K0 units have the behavior of a stable node (e.g., [105, Chapter 5]). Therefore, any behavior in the model other than that associated with a stable node must be generated externally by the driving force $F_j(t)$, that is, the external input and the interconnections between the populations. Using Freeman’s choice of parameters, the model displays chaotic behavior similar to ongoing discharges during a seizure (Figure 12(c)).
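
This classification is easy to verify numerically; the following lines (a small sketch using the trace-determinant criteria of [105]) confirm the stable node.

% k0_stability.m - classify the nondriven K0 unit from its eigenvalues
lam=[-0.72 -0.22];                  % Freeman's eigenvalues (1/ms)
tau=sum(lam); Delta=prod(lam);      % trace (sum) and determinant (product)
disc=tau^2-4*Delta;                 % positive -> node, negative -> spiral
fprintf('tau=%.2f, Delta=%.4f, disc=%.2f\n',tau,Delta,disc)
% tau<0, Delta>0, disc>0  =>  stable node (no intrinsic oscillation)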

Linearized models do not support bifurcations and therefore fall short in explaining a critical aspect of epilepsy (seizure onset and offset). The linearized model of Nunez [31, 32] has this shortcoming. Nonetheless, it is useful to check if it supports oscillatory activity. First, the oscillatory activity can model ongoing seizure activity. Second, the point where an unstable oscillation arises in a linearized system is an indicator for a Hopf bifurcation in the full nonlinear model. Models in neuroscience are often represented by electronic equivalent circuits (e.g., Figures 1, 2, 3, and 5). In contrast, Nunez compares the spatiotemporal behavior of his cortical model with the mechanics of a string (e.g., [32]). If we consider a string in vacuum attached by its ends and subject to a tension $T$ (but unattached to any other structure, Figure 19(a)), we only need to consider forces acting in the string itself. At rest the string is straight, but now we perturb the string a little in the vertical direction $u(x,t)$. Let us analyze the forces by considering a small part of the string of length $\Delta x$. In the detail of Figure 19(a), we can see that the forces acting on each side of this small piece are equal to the tension $T$ in the string. Because of the curve adopted by the string, the two forces at each side of $\Delta x$ are not exactly opposed and therefore they do not cancel. First we consider the horizontal component of the force: $T\cos\theta_2 - T\cos\theta_1$. This horizontal component can be neglected since, for small values of the angles $\theta_1$ and $\theta_2$, we have $\cos\theta_1 \approx 1$ and $\cos\theta_2 \approx 1$. Therefore we find that the net horizontal force is approximately zero. The upward force is $T\sin\theta_2$ and the downward force is $T\sin\theta_1$. The vertical force acting on $\Delta x$ is
$F = T\sin\theta_2 - T\sin\theta_1.$ (108)
We divide this equation by $T$ and use the small-angle approximations (recall that $\cos\theta_1 \approx 1$ and $\cos\theta_2 \approx 1$, so $\sin\theta \approx \tan\theta$), and we get
$\frac{F}{T} = \tan\theta_2 - \tan\theta_1.$ (109)
According to Newton’s second law, the net force must equal the product of the mass of $\Delta x$ and its net acceleration $\partial^2 u/\partial t^2$. If we define the mass per unit length of the string by $\mu$, the mass of our piece is $\mu\Delta x$. Using the above for $F$, using (109), and rearranging terms, we get
$\frac{\mu}{T}\frac{\partial^2 u}{\partial t^2} = \frac{\tan\theta_2 - \tan\theta_1}{\Delta x}.$ (110)
Because the tangents can be written as the slope of the curve adopted by the string, $\tan\theta = \partial u/\partial x$, we can rewrite the expression right of the equal sign as
$\frac{1}{\Delta x}\left(\left.\frac{\partial u}{\partial x}\right|_{x+\Delta x} - \left.\frac{\partial u}{\partial x}\right|_{x}\right),$ (111)
and for $\Delta x \to 0$, this expression becomes
$\frac{\partial^2 u}{\partial x^2}.$ (112)
Using this result and substituting the propagation speed $c = \sqrt{T/\mu}$ in (110), we obtain the wave equation
$\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}.$ (113)
Now we add a local spring connected to our little piece of string and an extra external force $F_{ext}$, and we embed our piece of string in a damping environment (e.g., molasses); this situation is depicted in Figure 19(b). Now we have to include these forces in Newton’s law for $\Delta x$ in (113):
$\mu\Delta x\frac{\partial^2 u}{\partial t^2} = T\frac{\partial^2 u}{\partial x^2}\Delta x + F_{spring} + F_{damping} + F_{ext}.$ (114)
The local force due to the spring is proportional to the position of the spring $u$; that is, $F_{spring} = -Ku\Delta x$, where K is a spring constant, and there is a minus sign because a positive deflection of the piece of string causes a force in the opposite direction. The damping force is proportional to the vertical velocity of the piece of string $\partial u/\partial t$; also in this case there is a minus sign because a damping force works in a direction opposite to the velocity: $F_{damping} = -B\frac{\partial u}{\partial t}\Delta x$. Thus both the local spring and damping forces are restoring forces. Putting this all into (114) and dividing by $\mu\Delta x$ yields
$\frac{\partial^2 u}{\partial t^2} + \frac{B}{\mu}\frac{\partial u}{\partial t} + \frac{K}{\mu}u - c^2\frac{\partial^2 u}{\partial x^2} = \frac{F_{ext}}{\mu\Delta x}.$ (115)
It can be seen that this result and the one from Jirsa and Haken [45], (75), are very similar. Equation (75) explicitly includes a nonlinear component in the force term, whereas here $F_{ext}$ may be either linear or nonlinear. The result in (115) is an expression that can explain multiple types of behavior. If B (damping) is very large, the equation starts to resemble the diffusion equation, whereas for very small B the equation resembles more the wave equation [32]. If, on the other hand, the spatial derivative and the external force term dominate, the expression becomes similar to Poisson’s equation, $\frac{\partial^2 u}{\partial x^2} = -\frac{F_{ext}}{T\Delta x}$.

Finally we examine the result (75) from Jirsa and Haken [45] for a special case, a generalized seizure (e.g., Figures 17(b) and 17(c)). Since we have generalized activity we assume that there is spatial homogeneity, and since the seizure sustains itself we assume there is no external input; due to these assumptions, the partial derivative term $v_e^2\,\partial^2\psi_e/\partial x^2$ vanishes and so does the external input part in the variable $\rho$ (see (70) for the definition of $\rho$). The Taylor series approximation of this variable is $\rho \approx \rho_0 + \rho_1\psi_e$. Substituting this approximation into (75) and using the above considerations, we can simplify:
$\ddot\psi_e + \left(2\omega_0 - \omega_0\rho_1\right)\dot\psi_e + \left(\omega_0^2 - \omega_0^2\rho_1\right)\psi_e = \omega_0^2\rho_0.$ (116)
The general solution of this ODE is of the form $c_1e^{\lambda_1 t} + c_2e^{\lambda_2 t}$ (plus a constant offset), with $\lambda_{1,2} = \frac{1}{2}\left(-B \pm \sqrt{B^2 - 4C}\right)$, where $B = 2\omega_0 - \omega_0\rho_1$ and $C = \omega_0^2 - \omega_0^2\rho_1$. Therefore, we may expect intrinsic global oscillations only if $\lambda_{1,2}$ are imaginary, that is, if the condition $B^2 - 4C < 0$ is satisfied. Now note that, in the simplified equation above, substitution of $B$ and $C$ into the condition gives $B^2 - 4C = \left(2\omega_0 - \omega_0\rho_1\right)^2 - 4\left(\omega_0^2 - \omega_0^2\rho_1\right) = \left(\omega_0\rho_1\right)^2$. Since the outcome is a quadratic expression, the condition will never be satisfied and therefore, without oscillatory input (e.g., an external network or a set of pacemaker neurons in the network), oscillations cannot occur in this generalized and linearized version of the model.

7. Concluding Remarks

In the previous sections, I presented a variety of modeling approaches used in neuroscience. As shown in Section 6, most models are capable of explaining aspects of epileptiform behavior. Interestingly, some of these explanations are counterintuitive (e.g., [96, 106]). Considering the rich repertoire of model behaviors, it is not surprising that a single approach to treatment, such as adjusting the excitation-inhibition balance, is likely to fail. A common theme in the different models is that oscillatory behavior arises from the force term in the differential equations; in the neuronal networks, these force terms may represent membrane currents of the cells that create an oscillatory input and/or the properties of the network that embeds them. A logical next step would be to create a systematic summary of hypothetical mechanisms leading to epileptiform activity and to examine them in experimental models and clinical cases.

In general, the interest in and acceptance of modeling in neuroscience is growing (e.g., [46, 50]). However, an important roadblock for the application of modeling is that, in part due to the complexity of the nervous system, there is little consensus amongst neuroscientists on what properties are to be considered critical in a theoretical approach. For this reason, traditionally, the neuroscientist prefers “realistic models” in which many experimental details are represented. Of course, in this sense, the skeptical neuroscientist is right that all models are wrong, that is, no model fully captures reality, both by design and because some details are simply not known. To cite Rosenblueth and Wiener [107], “the best material model for a cat is another, or preferably the same cat.” It is obvious that such a “best material model” is not the most helpful tool for a theoretical study of specific aspects of cat behavior because it clearly contains too much detail. Similarly, if one models traffic flow, one probably would not include some details (e.g., color of the cars) while including others (e.g., number of traffic lanes between suburbs and industrial zones). Just like in physics, when modeling in biology, it is good to recall Einstein's dictum: “models should be as simple as possible, but not more so.” (e.g., [108]). Due to simplifications, models do not include all possible parameters, but they can nonetheless be very useful because they allow us to make predictions and they may generate uncluttered insight into the critical components of the underlying mechanisms of the modeled process. Of course, the above general statements are valid for modeling in neuroscience, whether we deal with models that describe the microscopic details of cellular function or ones that deal with the macroscopic networks of billions of nerve cells in the human brain.

Because of the necessary simplifications required in modeling, there are limitations in the models reviewed here; they neglect many aspects that play important roles. A critical part of the dynamics of neuronal networks is determined by synaptic plasticity, that is, the change in network connection strength. These changes play a crucial role in the process of learning but, when pathologic, may also contribute to diseases such as epilepsy. It was more than half a century ago that Hebb [109] recognized this and postulated his law. Hebb’s law describes how synaptic coupling strength depends on local activity. It is often presented in a simplified form as “neurons that fire together wire together.” Much later, an experimental approach confirmed the existence of activity-dependent plasticity at the level of the coupling between nerve cells, that is, the synapse (e.g., [110, 111]). Here the plastic changes appeared to depend on the timing of the action potential (spike) activity of the pre- and postsynaptic neurons; this form of plasticity is therefore also called spike-time-dependent plasticity (STDP). STDP has been used in simulations to find appropriate synaptic strengths (e.g., [25]), effectively as a parameter estimation technique. It is fairly obvious that excitatory neurons that fire together and consequently wire together (i.e., strengthen their connections) constitute a positive feedback mechanism, which may lead to instability and possibly seizure-like hyperactivity [112]. Therefore, under normal physiological conditions, there must be additional mechanisms counteracting this positive feedback. Recently, an interesting attempt was made to analyze the effects of plasticity of inhibitory synapses at the population level and successfully relate it to the network’s balance between excitation and inhibition [113]. Since this model describes strengthening of the inhibitory connection, the instability issue does not arise here. Although plasticity is an important aspect of network behavior, the models reviewed here did not include it as one of their properties. Plasticity and many other aspects involved in nervous system function, such as the role of glia (e.g., [114]), remain to be investigated.

Finally, one important conclusion is that there is not “a best approach” when modeling neural function. For example, nonlinear models can help us to understand specifics such as sudden transitions, but linear ones are more suitable for analyzing subthreshold properties such as oscillation. The linearized versions of the models also make it easier to see similarities between the models across different levels of organization. Another example: complex models can be studied with simulations; they are more complete, since they can deal with more parameters than the more abstract models used for mathematical analysis. However, both have a place and they can be complementary. Analysis of simplified mathematical models can generate fundamental insight in spite of the fact that they lack detail. Simulations can be too complex to directly generate such fundamental insight into the interaction of the ongoing processes, but they have the advantage of being closer to reality. This property appeals to the experimenter, since a simulation may produce data with a temporal-spatial resolution that is not (yet) feasible experimentally. In this context, the ultimate goal in understanding neural networks in physiological and pathological states is to create a framework of experimental models, detailed simulations, and mathematical formalisms that allows us to understand and to predict the dynamics of network activities, including state transitions; that is, results in one model can be used to inspire work in another type of model. In the case of analyzing a heterogeneous network disease such as epilepsy, modeling can provide an overview of hypothetical network mechanisms involved in ictal activity that can be employed as a guide for experimental and clinical investigation.

Appendices

A. Linearization of the Hodgkin and Huxley Equations

In order to linearize the nonlinear equations, we rewrite (2)–(5) compactly as $C\frac{dV}{dt} = F(V, m, h, n)$ and $\frac{dx}{dt} = G_x(V, x)$, with $x$ representing an activation/inactivation variable $m$, $h$, or $n$. Then we linearize about an equilibrium potential $V_0$ (either the resting potential with injected current $I = 0$, or a holding potential implying $I \ne 0$). Small variations in the parameters about this equilibrium are $\delta V$, $\delta m$, $\delta h$, $\delta n$, and $\delta I$. Using the compact form of (2)–(5), we can evaluate them for small changes, that is, $C\frac{d\,\delta V}{dt} = \delta F$ and $\frac{d\,\delta x}{dt} = \delta G_x$. For example, the effect due to perturbation $\delta V$ is
$\delta F_{\delta V} = -\left(\bar g_{Na}m^3h + \bar g_Kn^4 + g_L\right)\delta V.$ (A.1)
The expression in parentheses is evaluated at equilibrium $V_0$. Similarly, a perturbation $\delta m$ yields:
$\delta F_{\delta m} = -3\bar g_{Na}m^2h\left(V_0 - E_{Na}\right)\delta m.$ (A.2)
Combining the perturbation effects, we obtain a linearized version of (2):
$C\frac{d\,\delta V}{dt} = -\left(\bar g_{Na}m^3h + \bar g_Kn^4 + g_L\right)\delta V - 3\bar g_{Na}m^2h\left(V_0 - E_{Na}\right)\delta m - \bar g_{Na}m^3\left(V_0 - E_{Na}\right)\delta h - 4\bar g_Kn^3\left(V_0 - E_K\right)\delta n + \delta I.$ (A.3)
Here we simplified notation by writing $m$, $h$, and $n$ instead of their equilibrium values $m_0$, $h_0$, and $n_0$; further, using (2), we find that the zeroth order terms cancel at equilibrium. For the gating parameters, using $\frac{dx}{dt} = \alpha_x(1-x) - \beta_x x$, we get the linearized expression:
$\frac{d\,\delta x}{dt} = \left[\frac{d\alpha_x}{dV}(1-x) - \frac{d\beta_x}{dV}x\right]\delta V - \left(\alpha_x + \beta_x\right)\delta x - \left(\frac{d\alpha_x}{dV} + \frac{d\beta_x}{dV}\right)\delta V\,\delta x.$
Note that $\alpha_x$ and $\beta_x$ are functions of $V$ only, and consequently the partial differentials are replaced by ordinary ones. Putting it all together and ignoring the nonlinear cross term $\delta V\,\delta x$, the above equation simplifies to
$\frac{d\,\delta x}{dt} = \left[\frac{d\alpha_x}{dV}(1-x) - \frac{d\beta_x}{dV}x\right]\delta V - \left(\alpha_x + \beta_x\right)\delta x.$ (A.7)
In the literature, it is common to simplify the notation of (A.3) and (A.7).

For example, in Richardson et al. [52], Equation (A.7) is simplified by multiplying both sides of the expression by $\tau_x = \frac{1}{\alpha_x + \beta_x}$ and substituting $x_\infty = \frac{\alpha_x}{\alpha_x + \beta_x}$. This results in
$\tau_x\frac{d\,\delta x}{dt} = \frac{dx_\infty}{dV}\,\delta V - \delta x.$ (A.8)
In the same manner, the notation of (A.3) can be simplified. The first term on the right hand side of (A.3) becomes
$-\left(\bar g_{Na}m^3h + \bar g_Kn^4 + g_L\right)\delta V = -g_0\,\delta V.$ (A.9)
The notation for each of the conductivity terms with $x = m$, $h$, or $n$ can also be simplified. For example, the expression for the $\delta n$ term becomes
$-4\bar g_Kn^3\left(V_0 - E_K\right)\frac{dn_\infty}{dV}\,\delta\tilde n = -g_n\,\delta\tilde n.$ (A.10)
In the above, we multiplied with $\frac{dn_\infty}{dV}$ to allow the change of variable from $\delta n$ to $\delta\tilde n = \delta n\big/\frac{dn_\infty}{dV}$, a quantity with the dimension of voltage. Similarly, we can simplify the expressions for the $\delta m$ and $\delta h$ terms in $g_m$ and $g_h$. Consequently, (A.3) can be simplified to (e.g., [52])
$C\frac{d\,\delta V}{dt} = -g_0\,\delta V - \sum_x g_x\,\delta\tilde x + \delta I.$ (A.11)
Here $x$ runs over the gating variables of Na, K, and all other ion species included in the model.
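
As an illustration of this quasi-active formalism, the sketch below evaluates the linearized membrane admittance corresponding to (A.11) in the frequency domain, using the standard squid axon rate functions and parameters; the numerical derivative and the frequency range are choices of this sketch rather than of [52].

% hh_linear_sketch.m - impedance of the Hodgkin-Huxley membrane linearized
% about rest; standard squid axon parameters (mS/cm2, mV, uF/cm2) assumed
gNa=120; gK=36; gL=0.3; ENa=50; EK=-77; EL=-54.4; Cm=1;
an=@(V).01*(V+55)./(1-exp(-(V+55)/10)); bn=@(V).125*exp(-(V+65)/80);
am=@(V).1*(V+40)./(1-exp(-(V+40)/10));  bm=@(V)4*exp(-(V+65)/18);
ah=@(V).07*exp(-(V+65)/20);             bh=@(V)1./(1+exp(-(V+35)/10));
xinf=@(a,b,V)a(V)./(a(V)+b(V)); taux=@(a,b,V)1./(a(V)+b(V));
V0=-65; dV=1e-3;                        % resting potential, derivative step
n0=xinf(an,bn,V0); m0=xinf(am,bm,V0); h0=xinf(ah,bh,V0);
f=logspace(-1,3,200); w=2*pi*f/1000;    % frequency axis, rad/ms
% each gating variable contributes dx/dV = (dxinf/dV)/(1+jw*taux), cf. (A.8)
dn=(xinf(an,bn,V0+dV)-n0)/dV./(1+1i*w*taux(an,bn,V0));
dm=(xinf(am,bm,V0+dV)-m0)/dV./(1+1i*w*taux(am,bm,V0));
dh=(xinf(ah,bh,V0+dV)-h0)/dV./(1+1i*w*taux(ah,bh,V0));
Y=1i*w*Cm + gL + gK*n0^4 + gNa*m0^3*h0 ...     % capacitive and passive part
  + gK*4*n0^3*(V0-EK)*dn ...                   % K activation branch
  + gNa*(3*m0^2*h0*dm + m0^3*dh)*(V0-ENa);     % Na activation/inactivation
semilogx(f,abs(1./Y)); xlabel('f (Hz)'); ylabel('|Z| (k\Omega cm^2)')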

B. Matlab Scripts

The following Matlab scripts were used to create some of the figures in this paper. They can be copied into the Matlab command window or (via the Matlab editor) saved as an m-file.

B.1. Figure 4 Script

% Bodeplt.m
% frequency response of the RLC equivalent circuit of Figure 4
C=1e-10;            % membrane capacitance
R=2e8;              % leak resistance
RL=2e7;             % resistance in the inductive branch
L=2e6;              % inductance
gam=(1/R)+(RL*C/L);
eps=(1/L)*(1+(RL/R));   % (note: shadows Matlab's built-in eps)
B=[1 RL/L];         % numerator coefficients of the transfer function
A=[C gam eps];      % denominator coefficients
w=0:1000;           % frequency axis (rad/s)
freqs(B,A,w)        % plot the frequency response

B.2. Figures 8(a) and 8(c) Script

% IsingModel.m
% h - external magnetic field
% J and n are coupling strength and number of neighbors resp.
clear; close all;
J=1; n=1;
%%%%%%%%%%%%%%%%%%
% Stability Diagram
% To find this diagram we can solve for
% 0=T*atanh(m)-(J*n*m+h) and
% d/dm(T*atanh(m))=d/dm(J*n*m+h)
figure; hold;
i=1;
for mm=-1:.01:1
  Tc(i)=J*n*(1-mm^2);    % tangency condition: T/(1-m^2)=J*n
  h_star(i)=Tc(i)*atanh(mm)-J*n*mm;
  i=i+1;
end;
plot(h_star,Tc,'k')
title('Stability Diagram')
xlabel('h - Magnetic Field')
ylabel('T - Temperature')
%%%%%%%%%%%%%%%%
% 3D
figure; hold;
for mmm=-1:.01:1
  for T=0:.1:1.5
    h=T*atanh(mmm)-J*n*mmm;
    plot3(h,T,mmm)
  end;
end;
plot(h_star,Tc,'k')
axis([-1.5 1.5 -.1 1.5 -1.1 1.1])
title('3D diagram - blue; Stability Diagram - black')
xlabel('h')
ylabel('T')
zlabel('m')
view(-20,70)

B.3. Figure 11(b) Script

% ModelSpectrum_LdS1974.m
% Fig. 8 in Lopes da Silva et al., 1974
clear
% Parameters (see p. 36 in Lopes da Silva et al., 1974)
A=1.65e-3;      % in V
B=32e-3;        % in V
C1=32; C2=3;
a1=55; a2=605; b1=27.5; b2=55;
qe1qi1=4.55e6;
P=1;            % assume random input (i.e. same for all frequencies)
K=A*B*C1*C2*qe1qi1*(a2-a1)*(b2-b1);
n=0;
for f=0:.1:30   % Frequency range
  n=n+1;
  F(n)=f;
  w=2*pi*f;     % radial frequency
  % j*w substituted for s
  V(n)=((a2-a1)*A*P*(j*w+b1)*(j*w+b2))/((j*w+a1)*(j*w+a2)*(j*w+b1)*(j*w+b2)+K);
end;
figure;
semilogy(F,abs(V))
xlabel('Frequency (Hz)')
ylabel('abs(V(jw))')
title('Fig. 8 Lopes da Silva et al., 1974')

B.4. Figure 16(b) Script

% netsim.m
clear; close all
% Parameters and initial values
N_cells=10;       % number of cells
N=round((rand(1)/2+.1)*rand(1,N_cells));  % randomized initial state of vector N
AD(1)=sum(N);     % initial value for the deterministic case
alpha=10;         % decay rate a -> q (set around 1/100 ms)
f_tilde=5;        % q -> a (set around 5 Hz)
dt=.001;          % time step for the deterministic model
T=1;              % Epoch (in s)
timeD=0:dt:T;     % deterministic timebase
steps=floor(T/dt);% deterministic # of steps
% Deterministic model
% - - - - - - - - - -
% this simulation based on kinetic rate equation
% note that equilibrium is at dA=0 --> A_eq=f_tilde*N_cells/(f_tilde+alpha)
for i=1:steps
  dAD=(f_tilde*(N_cells-AD(i))-alpha*AD(i))*dt;
  AD(i+1)=AD(i)+dAD;
end;
% Equivalent Stochastic model
% - - - - - - - - - - - - - -
% this simulation based on stochastic model
% Specific parameters and initial values
% background of the method can be found in Gillespie (1976, 1977)
count=1;          % counter for the AS array
cum_t=0;          % initial time
timeS(count)=cum_t;   % stochastic time base
while cum_t < T   % Main LOOP
  AS(count)=sum(N);   % Stochastic value of A; # of active cells
  % compute the overall rates in the variables a#
  a1=AS(count)*alpha;              % rate for the decay
  a2=(N_cells-AS(count))*f_tilde;  % rate for cells becoming active
  a0=a1+a2;
  % pick two random # from uniform distribution (following Gillespie's algorithm)
  r=rand(1,2);
  % compute time step tau (based on exponential distribution)
  tau=(1/a0)*log(1/r(1));          % Gillespie (1977)
  % Now we compute the cumulative distribution of all cells;
  % this could be done more efficiently here in the all-to-all connection
  % case, where we only have two variables to update (Q and A).
  % However, the following approach of updating individual cells is more general.
  P=N*alpha;       % all active cells multiplied by alpha
  for i=1:length(P)
    if P(i)==0; P(i)=f_tilde; end; % all inactive cells become f
  end;
  %plot(P)
  %pause;
  P=P./sum(P);     % normalize P
  F=cumsum(P);     % cumulative function F of all probabilities
                   % associated with the cells in N
  % now pick a cell # using the cumulative distribution
  pick=0;
  i=0;
  while pick == 0
    i=i+1;
    if (F(i)>=r(2)); mu=i; pick=1; end;  % Gillespie (1977)
  end;
  % Update time, cell and counter
  if N(mu)==1; N(mu)=0; else; N(mu)=1; end;  % Update cell
  count=count+1;        % update the counter
  cum_t=cum_t+tau;      % update the time
  timeS(count)=cum_t;   % and stochastic time base
end;
% plot results
figure; hold
plot(timeD,AD,'k')
plot(timeS(1:length(AS)),AS)
title('Deterministic model (black); stochastic (blue)')
xlabel('time (s)')
ylabel('# of Active Cells')

B.5. Figure 18 Script

% HopfBifurcation.m
% Depict a Hopf Bifurcation in the WC Eqs.
% x - E pop; y - I pop
% P - External input to E pop
clear; close all;
x(1)=0; y(1)=0;
P=4.0;
WE(1)=1;      % excitatory coupling, ramped up during the simulation
dWE=.01;
dt=.1;
T=300;
N=T/dt;
tim=0:dt:T-dt;
for i=1:N-1
  He=(1/(1+exp(-1.3*((WE(i)*x(i)-12*y(i)+1.25)-P))))-(1/(1+exp(1.3*P)));
  Hi=(1/(1+exp(-2.0*((WE(i)*x(i)-3*y(i)+0)-3.7))))-(1/(1+exp(2.0*3.7)));
  dx=-x(i)+(1-1*x(i))*He;
  dy=-y(i)+(1-1*y(i))*Hi;
  x(i+1)=x(i)+dx*dt;
  y(i+1)=y(i)+dy*dt;
  WE(i+1)=WE(i)+dWE;
end;
figure;
subplot(2,1,2), plot(tim,x,'r')
hold;
subplot(2,1,2), plot(tim,y,'g')
axis([0 300 -.05 0.55])
subplot(2,1,1), plot(tim,WE)
title('Hopf Bifurcation in the WC Equations: E-red, I-green, External Input to E-blue')
xlabel('Time')

Acknowledgments

This work was supported by the Dr. Ralph and Marian Falk Medical Research Trust. Thanks are due to Albert Wildeman for providing the data for Figure 15 and Dr. Jack Cowan, Dr. Michel van Putten, Dr. Marc Benayoun, Jeremy Neuman, and the anonymous reviewers for valuable suggestions.