Abstract

Spontaneous emergence of neuronal activity avalanches characterized by power-law distributions is known to occur in different types of nervous tissue, suggesting that nervous systems may operate in a critical regime. Here, we explore the possible relation of this dynamical state with the underlying topology in a small network of interconnected Morris-Lecar neurons. Studying different topological configurations numerically, we find that, very close to the efficient small-world situation, the system self-organizes near a critical branching process, with observable distributions close to a power law with exponents similar to those reported in the experimental literature. We therefore conclude that the observed scaling is intimately related to the small-world topology.

1. Introduction

The spontaneous emergence of neuronal activity avalanches showing power-law distributions has been found in superficial layers of cortex, both in vitro [1–3] and in vivo [4, 5], in the spike activity of dissociated cortex cultures [6, 7], across the sleep-wake cycle [8], and in several other cases. Although the network architecture causing this behavior is not currently known, avalanches have been found in simulations of networks with scale-free [7, 9, 10], fully connected [11], random [12], and nearest-neighbor [10, 13] topologies. In this work, we show that, even in a small neural network composed of 64 neurons, activity close to a power law results from a small-world (SW) network topology that also operates close to critical branching dynamics. This is achieved by considering the effect of long-range connections (shortcuts) on the spontaneous activity of a network of Morris-Lecar neurons. Simulations show that, for a certain number of added shortcuts, the system reaches an SW regime where close-to-power-law avalanches of activity are observed without requiring parameter tuning. In the SW regime, the numerically calculated avalanche size and lifetime probability distributions are quite close to the experimentally observed power laws [1, 2, 6, 7], with exponents −3/2 and −2, respectively.

2. Materials and Methods

Consider a set of coupled Morris-Lecar neurons forming a ring through weighted synaptic interactions. In particular, we use the version of the Morris-Lecar (ML) model [14] proposed by Rinzel and Ermentrout [15]. The ML model results from a dimensional reduction of the original Hodgkin and Huxley (HH) model [16] of membrane conductivity. In the ML model, two kinds of channels are present: voltage-gated Ca++ channels and voltage-gated delayed-rectifier K+ channels. The calcium current plays the role of the Na+ current in the original HH system. Thus, the membrane potential equation, under the action of an input $I_i$, has the following form:

$$C\,\frac{dV_i}{dt} = -g_{Ca}\,M_\infty(V_i)\,(V_i - V_{Ca}) - g_K\,W_i\,(V_i - V_K) - g_L\,(V_i - V_L) + I_i, \qquad \frac{dW_i}{dt} = \phi\,\frac{W_\infty(V_i) - W_i}{\tau_W(V_i)}. \tag{1}$$

The functions $M_\infty$, $W_\infty$, and $\tau_W$ are

$$M_\infty(V) = \frac{1}{2}\left[1 + \tanh\!\left(\frac{V - V_1}{V_2}\right)\right], \qquad W_\infty(V) = \frac{1}{2}\left[1 + \tanh\!\left(\frac{V - V_3}{V_4}\right)\right], \qquad \tau_W(V) = \frac{1}{\cosh\!\left[(V - V_3)/(2V_4)\right]}. \tag{2}$$

In these equations, $W$ is the fraction of open K+ channels; $M_\infty$ acts as an instantaneous activation of the Ca++ channels; $\tau_W$ is a relaxation constant for each given value of the membrane potential $V$; $C$ is the membrane capacitance; $g_{Ca}$, $g_K$, and $g_L$ are constants with conductance dimension; $V_{Ca}$, $V_K$, and $V_L$ are the resting potentials for the two different kinds of ions and for the leakage current; $V_1$, $V_2$, $V_3$, $V_4$, and $\phi$ are constants.

The system described by (1)-(2) is known to be a sufficiently accurate model of biologically plausible spiking activity. In particular, the ML model reproduces quite correctly the all-or-none signal emission, oscillations associated with a Hopf bifurcation leading to spike trains, bistability given by the coexistence of a stable steady state and a stable oscillation, and bursting under a slowly varying injected current. Figure 1 shows the temporal evolution of a neuron's membrane potential. In the following, for a set of coupled neurons, each neuron is submitted to synaptic stimulation from the rest of the neurons in the network via the input term $I_i$ in (1).
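
As an illustration of the single-neuron dynamics described above, the following minimal sketch integrates equations (1)-(2) for one uncoupled neuron with a simple Euler scheme. The parameter values are a standard Rinzel-Ermentrout set chosen for concreteness; they are assumptions, not necessarily the values used in this study.

```python
import numpy as np

# Standard Rinzel-Ermentrout Morris-Lecar parameters (assumed for illustration;
# not necessarily the values used in the simulations reported here).
C = 20.0                              # membrane capacitance
g_Ca, g_K, g_L = 4.4, 8.0, 2.0        # maximal conductances
V_Ca, V_K, V_L = 120.0, -84.0, -60.0  # reversal potentials (mV)
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
phi = 0.04                            # rate constant of the recovery variable

def m_inf(V):   # instantaneous Ca++ activation, eq. (2)
    return 0.5 * (1.0 + np.tanh((V - V1) / V2))

def w_inf(V):   # steady-state fraction of open K+ channels, eq. (2)
    return 0.5 * (1.0 + np.tanh((V - V3) / V4))

def tau_w(V):   # voltage-dependent relaxation time of W, eq. (2)
    return 1.0 / np.cosh((V - V3) / (2.0 * V4))

def ml_rhs(V, W, I_ext):
    """Right-hand side of the membrane and recovery equations, eq. (1)."""
    dV = (-g_Ca * m_inf(V) * (V - V_Ca)
          - g_K * W * (V - V_K)
          - g_L * (V - V_L) + I_ext) / C
    dW = phi * (w_inf(V) - W) / tau_w(V)
    return dV, dW

# Euler integration of a single neuron driven by a constant current.
dt, T, I_ext = 0.05, 1000.0, 100.0    # time step and total time in ms
V, W, trace = -60.0, 0.0, []
for _ in range(int(T / dt)):
    dV, dW = ml_rhs(V, W, I_ext)
    V, W = V + dt * dV, W + dt * dW
    trace.append(V)                   # membrane potential trace, cf. Figure 1
```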

The synaptic interaction is described as

$$I_i = \sum_{j \neq i} g_{ij}\, S(V_j)\, \Delta V_{ij}. \tag{3}$$
The network is built under the assumption that each neuron is connected to all the remaining neurons. The connection strength depends on the synaptic weights, chosen to obey a Gaussian dependence on the distance $d_{ij}$ separating neuron $j$ from the reference neuron $i$; that is, $g_{ij} = \alpha\, e^{-d_{ij}^{2}/\lambda^{2}}$. The parameter $\lambda$ controls the range of interactions between cells; its value is chosen such that the mutual interaction vanishes beyond a fixed separation. The amplitude factor $\alpha$ sets the efficiency of the synaptic contact with respect to the size of the network. In (3), the term $S(V_j)$ is associated with the saturation level of the presynaptic neuron and is represented as a sigmoid; the sign of the voltage difference $\Delta V_{ij}$ determines whether the synapse acts as inhibitory or excitatory. In this setup, there is no fixed number of inhibitory (or excitatory) neurons; the effect of a presynaptic neuron depends on the sign of this term. However, from snapshots of a neuron's activity, inhibition is found to occur during approximately 10% of the time course, so the network evolves with a dynamically changing inhibition of roughly this proportion. The network is mainly excitatory: it is tuned to produce self-sustained oscillations so that its activity does not turn off indefinitely; that is, if the activity turns off, it later turns on again autonomously, without external intervention. The driving current has been chosen such that single neurons are in a limit-cycle oscillation; neurons can be on or off depending on its value. We simulate $N = 64$ neurons with periodic boundary conditions. At the nearest-neighbor level, the interaction describes a ring, as in the starting configuration of [17]. To avoid isolated groups, long-range interaction is achieved by randomly adding, rather than rewiring, a fixed number of shortcuts [18]. The effective distance between two neurons is reduced when a shortcut is included; such a long-range connection increases the synaptic interaction because the connected pair is treated as being at unit separation. In our simulations no noise is present; our goal is to isolate the effect of long-range connectivity on the appearance of critical, or close-to-critical, behavior.
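
A minimal sketch of this network construction, under the assumptions just stated (Gaussian weights $g_{ij} = \alpha e^{-d_{ij}^2/\lambda^2}$ on a ring with periodic boundaries, shortcuts added at unit effective distance rather than rewired), could look as follows. Parameter names and default values are illustrative, not those of the original simulations.

```python
import numpy as np

def build_network(N=64, lam=2.0, alpha=1.0, n_shortcuts=10, seed=None):
    """Ring of N neurons with Gaussian distance-dependent synaptic weights
    plus n_shortcuts randomly added (not rewired) long-range links.
    lam (interaction range), alpha (synaptic efficiency), and the defaults
    are illustrative choices, not the values used in the original study."""
    rng = np.random.default_rng(seed)
    idx = np.arange(N)
    # Separation on the ring with periodic boundary conditions.
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, N - d).astype(float)
    # Gaussian dependence of the synaptic weight on separation, g_ij.
    g = alpha * np.exp(-(d / lam) ** 2)
    np.fill_diagonal(g, 0.0)
    # Each shortcut makes a randomly chosen distant pair behave as if it
    # were at unit separation, which raises its synaptic weight accordingly.
    added = 0
    while added < n_shortcuts:
        i, j = rng.integers(0, N, size=2)
        if i != j and d[i, j] > 1.0:
            d[i, j] = d[j, i] = 1.0
            g[i, j] = g[j, i] = alpha * np.exp(-(1.0 / lam) ** 2)
            added += 1
    return g, d
```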

3. Results and Discussion

Temporal evolution starts from a set of initial conditions selected randomly from a uniform distribution of membrane potentials. After a transient time, the system reaches a bistable behavior characterized by the alternation of on states, where at least one neuron is firing, and off states, where no neuron fires. It is known that, for excitatory integrate-and-fire neurons, self-sustained activity depends on the proportion of shortcuts [19]. In our case, for the parameters selected and for networks with too few shortcuts, bistability was not observed; the system dynamics describe a state of perpetual firing with almost no silent episodes. We therefore focused on larger numbers of shortcuts, which were able to generate a persistent alternation between avalanches of activity and silent episodes. Here, an avalanche is understood as the dynamical state where at least one neuron is on. An avalanche starts when all neurons are off and at least one neuron turns on. An avalanche stops when at least one neuron is on and, in the next step, all neurons are off, starting a silent period or interavalanche interval. Figure 2 shows a raster plot of a temporal segment of a long avalanche.
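
Operationally, these definitions reduce to a scan over the time series of the number of on neurons. The sketch below is an assumed implementation consistent with the definitions above; the function and variable names are ours.

```python
def detect_avalanches(on_count, dt=1.0):
    """Scan the time series on_count[t] (number of on neurons at integration
    step t) and return avalanche sizes (summed on-counts) and lifetimes.
    An avalanche starts when the network goes from all-off to at least one
    neuron on, and ends at the first step where all neurons are off again."""
    sizes, lifetimes = [], []
    size, steps = 0, 0
    for n in on_count:
        if n > 0:                      # inside an avalanche
            size += n
            steps += 1
        elif steps > 0:                # avalanche just terminated
            sizes.append(size)
            lifetimes.append(steps * dt)
            size, steps = 0, 0
    return sizes, lifetimes
```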

For each proportion of shortcuts, we calculated the avalanche size, measured as the number of on neurons per avalanche, and the avalanche lifetime. The evolution of the avalanche size with the avalanche number is plotted in Figure 3. Many avalanches were simulated over a long time span for each number of shortcuts. For a fixed number of shortcuts, the avalanche size distribution was calculated by averaging over very long realizations; as an illustrative case, three realizations of 1373, 3968, and 5367 avalanches were averaged, and similar values were used for the other cases. The avalanche size distribution reveals a strong dependence on the number of shortcuts, as can be seen in the left column of Figure 4. For low numbers of shortcuts (but high enough to sustain persistent avalanche dynamics), the distribution shows a decaying behavior. This decay softens around a particular number of added shortcuts, where the distribution of avalanche sizes gets closer to a power law with exponent −3/2. Note that we are not claiming that the calculated distribution is a power law, but rather that, for a particular amount of added long-range links, the distribution approaches the power law characterizing critical behavior in experimental studies [1, 2, 6, 7]. Increasing the number of shortcuts further reduces the probability of small sizes compared with large-size avalanches, as seen in the analyzed limit case (top left of Figure 4). Notably, the avalanche sizes obtained were very large, quite different from those reported in experimental cases [1]. To understand this, consider the following: in our simulations, the number of on neurons is counted at each integration step. This means that, if a peak lasts in the on state around 6.25 msec (Figure 1), in a second we are counting 6250 points per neuron. If the avalanche includes 16 neurons, an avalanche size of the order of 16 × 6250 = 100,000 will be measured. This exemplifies how a huge size can be obtained for an avalanche lasting only one second and involving only one quarter of the network.
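
As a sketch of how such distributions can be compared with a power law, the following assumed helper builds a logarithmically binned probability density of avalanche sizes (or lifetimes), onto which a reference line of slope −3/2 (or −2) can be overlaid on a log-log plot. The binning choices are ours, not those of the original analysis.

```python
import numpy as np

def log_binned_pdf(values, n_bins=30):
    """Logarithmically binned probability density of avalanche sizes or
    lifetimes (values must be positive), for comparison with a power law
    on a log-log plot."""
    values = np.asarray(values, dtype=float)
    edges = np.logspace(np.log10(values.min()), np.log10(values.max()), n_bins + 1)
    counts, _ = np.histogram(values, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centers
    density = counts / (counts.sum() * np.diff(edges))
    return centers, density

# A reference power law, e.g. density ~ centers**(-1.5) for sizes or
# centers**(-2.0) for lifetimes, can then be over-plotted for comparison.
```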

The lifetime distribution behaves in a remarkably similar fashion, as shown in Figure 5. It can be seen that, for the same number of added shortcuts, the distribution approaches a power law, now with an exponent close to −2, as observed experimentally [1]. The inclusion of long-range connections modifies the network topology and the system dynamics, which, according to our results, are “driven” toward a regime close to one characterized by power laws, that is, a candidate critical operating regime.

Next, we characterize the topology of the weighted network for different numbers of shortcuts. For weighted nets, the typical measures of clustering and mean path length are not available in the same form as for nonweighted networks. However, recent works have developed adequate metrics for weighted networks. This is the case for the global efficiency of information transmission, $E_{glob}$, which is inversely proportional to the average minimum path length, and the local efficiency or fault tolerance, $E_{loc}$, which is proportional to the clustering coefficient [20]. An additional and useful measure is the cost, that is, the cost involved in adding further links to a network, defined simply as the sum of edges between the nodes in the graph [21]; a costly network has many edges. In terms of information flow, small-world networks have high $E_{glob}$ and $E_{loc}$: they are very efficient in global and local communication [20]. Many small-world networks in biology and sociology have been shown to have the economic property of delivering high global and local efficiency at relatively low cost [21] and have been called efficient SW networks. Details about the calculation of the efficiencies and the cost can be found in [20, 21]. In our system, adding shortcuts affects both local and global efficiencies, as shown in Figure 6(a). Adding shortcuts increases $E_{glob}$ monotonically, reaching its maximum at the largest number of shortcuts considered. Meanwhile, adding shortcuts reduces $E_{loc}$ until it reaches a minimum, after which it starts increasing for larger numbers of shortcuts; at that point so many links have been added, compared with the size of the network, that local information flow is also improved. Adding links represents almost no relative additional cost for numbers of shortcuts smaller than the network size; beyond this range, however, the cost increases almost exponentially. From these curves alone, the identification of a small-world regime is not clear: what criterion should be used to decide that both $E_{glob}$ and $E_{loc}$ are high? This problem can be solved by defining a simple new measure that summarizes the characteristics of an efficient SW. We propose a quantity combining $E_{glob}$, $E_{loc}$, and the cost, which is expected to peak for high global and local efficiencies and low cost. For the type of networks considered in this study, this measure behaves as depicted in Figure 6(b). Surprisingly, it shows a clear maximum located at the same number of shortcuts for which the size and lifetime distributions were closest to a power law. Accordingly, we identify this configuration as an economic small world [21], showing both high global and local efficiencies at low cost.
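
As a sketch of how these three quantities can be estimated from the weighted adjacency matrix of the simulated network, in the spirit of [20, 21], one possibility is the following. The mapping from synaptic weight to edge length (here 1/weight) and the threshold for considering a link present are assumptions of this sketch.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def efficiencies_and_cost(weights, w_threshold=1e-6):
    """Global efficiency, local efficiency, and cost of a weighted graph,
    in the spirit of [20, 21]. Edge 'length' is taken as 1/weight (an
    assumption; any decreasing function of the weight would do), and a
    link is considered present when its weight exceeds w_threshold."""
    N = weights.shape[0]
    present = weights > w_threshold
    np.fill_diagonal(present, False)
    lengths = np.full_like(weights, np.inf, dtype=float)   # inf = no edge
    lengths[present] = 1.0 / weights[present]
    off_diag = ~np.eye(N, dtype=bool)

    # Global efficiency: mean inverse shortest path length over all pairs.
    D = shortest_path(lengths, method="FW", directed=False)
    E_glob = np.sum(1.0 / D[off_diag]) / (N * (N - 1))

    # Local efficiency: average efficiency of each node's neighbourhood subgraph.
    E_loc = 0.0
    for i in range(N):
        nb = np.flatnonzero(present[i])
        k = len(nb)
        if k < 2:
            continue
        D_sub = shortest_path(lengths[np.ix_(nb, nb)], method="FW", directed=False)
        off_sub = ~np.eye(k, dtype=bool)
        E_loc += np.sum(1.0 / D_sub[off_sub]) / (k * (k - 1))
    E_loc /= N

    # Cost: fraction of all possible links that are actually present.
    cost = present.sum() / (N * (N - 1))
    return E_glob, E_loc, cost
```

The combined small-world measure proposed above can then be formed from the three returned values.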

Further insight can be obtained by studying the system's branching parameter, $\sigma$, defined as the average number of descendants from one ancestor [22]. Given $n_a$ on neurons at a given time step and $n_d$ in the following time step, it is calculated as $\sigma = \langle n_d / n_a \rangle$. The branching parameter was monitored continuously for each avalanche. $\sigma$ shows considerable variability and a clear dependence on the number of shortcuts, as shown in the right column of Figure 4. Most of the values obtained are above, but close to, the critical value $\sigma = 1$, and in most cases the most probable value lies slightly above it. Notably, in the small-world case the distribution gets closer than any other to the critical value $\sigma = 1$, while still being weakly supercritical. This result reinforces the idea that, in the SW regime, the behavior of this network gets closest to the critical state: the network self-organizes at a mean branching parameter corresponding to the weakest supercritical state.
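
A minimal sketch of this estimate from the time series of on-neuron counts, under the definition above (the naming is ours), is:

```python
import numpy as np

def branching_parameter(on_count):
    """Branching parameter sigma [22]: the ratio of the number of on neurons
    at step t+1 (descendants) to the number at step t (ancestors), averaged
    over all steps with at least one ancestor."""
    on_count = np.asarray(on_count, dtype=float)
    ancestors, descendants = on_count[:-1], on_count[1:]
    valid = ancestors > 0                 # only steps inside an avalanche
    return float(np.mean(descendants[valid] / ancestors[valid]))
```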

4. Conclusions

It is surprising to us that such a small network can produce such marked differences in the distributions of sizes and lifetimes. It is revealing that, for the case identified as an efficient SW, the branching parameter approaches the value expected for the critical state. Indeed, it is remarkable that, although our approach is not a realistic one, results close to those obtained experimentally can be mimicked. This is particularly true considering the inclusion of nonrealistic connectivity factors, suggesting that sustained avalanche activity close to power-law behavior does not require realistic neuronal connectivity terms. An interesting possibility evoked by these results is that one of the underlying dynamical origins of critical avalanche dynamics in neuronal systems may not be the nature of the interconnectivity but the underlying SW network topology. There is an increasing body of evidence showing that nervous systems are connected as small-world networks at macro- and microscales [23]. Some works have formally identified SW characteristics in the neural network of Caenorhabditis elegans [17], in macaque and cat cortical connectivity [24], in macaque brain functional networks [25, 26], in functional networks of cat cortical neurons [27], and in human brain functional networks [28–31], or have compared the neural case with the behavior of other complex systems [32]. Existing approaches trying to explain the emergence of power laws in the spontaneous activity of neural networks present different limitations, such as the necessity of an a priori definition of a branching process and its consequent fine-tuning to obtain power laws [1, 10, 12], the assumption of a topology not supported by neurological evidence [9, 11, 13], or the absence of inhibitory neurons [13].

Our work with this tiny network (when compared with a real nervous system) suggests that there is no need to define an a priori branching process to be tuned; rather, the neural network itself spontaneously self-organizes into a weakly supercritical branching process close to criticality. While topologically driven phase transitions with power-law-distributed avalanches at criticality have been reported in non-SW networks [33, 34], we think the results of this work may reveal that self-organized criticality in neuronal systems could be caused by the SW topology, an aspect already implicit in previous works [35].

Acknowledgments

Juan Luis Cabrera gratefully thanks R. Kuske for her hospitality during a sabbatical leave at the Mathematics Department of the University of British Columbia, where this work was initially written. The authors thank H. de Nobrega for his contribution to the initial versions of the software. Juan Luis Cabrera acknowledges research support from grant IVIC-141; Johans Hoenicka acknowledges support from IVIC's undergraduate fellowships. Correspondence and requests for materials should be addressed to Juan Luis Cabrera.