Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 484812, 17 pages
http://dx.doi.org/10.1155/2012/484812
Research Article

Structural Models of Cortical Networks with Long-Range Connectivity

1Faculty of Biology, University of Freiburg, 79104 Freiburg, Germany
2Eye Clinic, University Medical Center Freiburg, 79106 Freiburg, Germany
3Bernstein Center Freiburg, University of Freiburg, Hansastraße 9a, 79104 Freiburg, Germany

Received 4 July 2011; Accepted 17 August 2011

Academic Editor: Zidong Wang

Copyright © 2012 Nicole Voges et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Most current studies of neuronal activity dynamics in cortex are based on network models with completely random wiring. Such models are chosen for mathematical convenience, rather than biological grounds, and additionally reflect the notorious lack of knowledge about the neuroanatomical microstructure. Here, we describe some families of new, more realistic network models and explore some of their properties. Specifically, we consider spatially embedded networks and impose specific distance-dependent connectivity profiles. Each of these network models can cover the range from purely local to completely random connectivity, controlled by a single parameter. Stochastic graph theory is then used to describe and analyze the structure and the topology of these networks.

1. Introduction

The architecture of any network can be an essential determinant of its respective function. Signal processing in the brain, for example, relies on a large number of mutually connected neurons that establish a complex network [1]. Since the seminal work of Ramón y Cajal more than a hundred years ago, enormous efforts have been put into uncovering the microcircuitry of the various parts of the brain, including the neocortex [2–6]. On the level of networks, however, our knowledge is still quite fragmentary, rendering computational network models for cortical function notoriously underdetermined.

Networks with a probabilistically defined structure represent, from a modeler’s perspective, a viable method to deal with this lack of detailed knowledge concerning cell-to-cell connections [7]. In such models, data from statistical neuroanatomy (e.g., coupling probabilities) are directly used to define ensembles of networks where only a few parameters are needed to define relatively complex network structures. Properties that all members of such a statistical ensemble have in common are then regarded as “generic” for this type of network.

Random graphs [8, 9] and more general stochastic graph models have been mathematically analyzed in great detail. The main motivation was the striking threshold behavior and phase transitions that could be observed when certain parameters of such systems were varied. Recently, the theory of “complex networks” began to attract even more interest, as it was discovered that real-world networks of very different nature (e.g., social networks, the Internet, and metabolic networks) share a number of universal properties [10–12]. Large-scale brain organization was among the earliest applications of the new concepts [13–15]. Here, we suggest importing some of the ideas and methods that emerged in the abstract theory of complex networks and applying them to neuronal networks at the cellular level (Figure 1). Specifically, we provide several parametric models for spatially embedded networks. These models allow us to synthesize biologically realistic networks with controlled statistical properties, which serve as candidate models for cortical networks. Providing such models supports the joint structural analysis of synthetic and biological networks.

Figure 1: Left: reconstruction of a pyramidal cell stained in a tangential slice of the rat neocortex (top view). Middle: schematic 2D section representing a spatially embedded network composed of locally (red lines) connected pyramidal cells (black triangles). Right: different types of abstract networks.

The graph-theoretic analysis of cortical networks raises the following problem: graphs usually do not deal with space (right part of Figure 1), even though the spatial embedding of the physical network implicitly determines some of its properties. Horizontal wiring between cortical neurons, for example, exhibits a clear dependence on the distance between the involved cells, as indicated by the left part of Figure 1. Many synaptic contacts are formed between close neighbors, in accordance with, and constrained by, the geometry of neuronal dendrites and local axons [16–18]. However, there is also an appreciable number of axons that travel for longer distances within the gray matter before making synaptic contacts with cells further away [6, 7, 19, 20]. Absolute numbers of local and nonlocal synaptic connections are still a matter of debate among neuroanatomists, and the same is true for the details of the spatial organization of synaptic projections [1, 6, 7]. Here, we consider three different candidate network models, each representing one possible concept for the geometric layout of distance-dependent connectivity. The uncertainty concerning the ratio of local versus nonlocal synapses is reflected by the systematic variation of a suitable parameter in each model. Moreover, when spatial aspects are included in simulating and analyzing cortical network dynamics, neurons are commonly placed on the grid points of a regular lattice [21, 22]. Cortical neurons, however, are unlikely to be arranged in such a crystal-like fashion [1], neither in three dimensions nor in a two-dimensional projection.

Altogether, we face a spatially embedded and very sparsely connected network, where only a very small fraction of neuron pairs are synaptically coupled to each other directly. What is the impact of these general structural features of synaptic wiring in the cortex? Do these features matter in determining the global topology of the network? Sparse couplings save cable material, but they also constrain communication in the network. Can the sparsity, in principle, be overcome by smart circuit design? Likewise, admitting only neighborhood couplings saves cable length but increases the topological distance between nodes in the network, that is, the number of synapses engaged in transmitting a signal between remote neurons becomes quite large [23, 24]. On the other hand, allowing for distant projections reduces the topological distance, but it induces a higher consumption of wiring material. These wires occupy space that is clearly limited within the skull. Has cortex optimized its design by making use of such tricks? Here, we approach these and related biological questions by establishing suitable parametric families of stochastic network models and by exploring their properties numerically.

Preliminary results of this study have been presented previously in abstract form [25, 26].

2. Methods

We considered network models that comprised neurons with directed synaptic connections. Our cortical networks were therefore represented by directed graphs $G$ (see Figure 2, left), specified by nonsymmetric adjacency matrices $A(G) = (a_{ij})$, with $a_{ij} = 1$ if a link $i \to j$ existed and $a_{ij} = 0$ otherwise (see Figure 2, middle). We allowed neither autapses (self-coupling) nor multiple synapses between any pair of neurons. Also, our choice of the adjacency matrix approach did not allow us, at this point, to differentiate between excitatory and inhibitory synaptic contacts. Our networks were composed of $N = 1024$ sparsely connected nodes. On average, only a fraction $c \approx 0.012$ of all $N(N-1)$ possible links was realized in each particular network. These synaptic connections were established according to probabilistic rules common to all neurons. In general, the expected number of both incoming and outgoing synapses was fixed to $k = 12$; see Table 1. The same distributions of incoming ($P(k_{\mathrm{in}})$) and outgoing ($P(k_{\mathrm{out}})$) links, respectively, held for all nodes. However, in any specific network realization, each node had random in- and out-degrees. Along the same lines, all other network properties assumed random values if computed from individual networks. To obtain characteristic mean values, we generated 20 independent realizations for each type of network and calculated the corresponding averages and the standard errors of the means (SEM).
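To make the ensemble construction concrete, the following minimal sketch (our illustration, not the original implementation) builds the baseline, distance-independent case with NumPy: a directed adjacency matrix with $N = 1024$ nodes and connection probability $c = k/(N-1)$, excluding autapses, and a check of the resulting mean degrees.

```python
import numpy as np

def random_directed_graph(N=1024, k_mean=12, seed=None):
    """Baseline random directed graph: each off-diagonal entry a_ij = 1
    independently with probability c = k_mean / (N - 1); no autapses."""
    rng = np.random.default_rng(seed)
    c = k_mean / (N - 1)
    A = (rng.random((N, N)) < c).astype(np.uint8)
    np.fill_diagonal(A, 0)            # no self-coupling (autapses)
    return A

# One realization; the paper averages 20 such realizations (mean and SEM).
A = random_directed_graph(seed=1)
print(A.sum(axis=1).mean(), A.sum(axis=0).mean())   # mean out- and in-degree, both close to 12
```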

Table 1: List of randomness parameters used to construct the spatially embedded RPNs. 𝜙 is the rewiring probability for the SW model, (𝑟,𝑝) describe range and probability of connectivity in the FN model, and (𝜎,𝑝0) are the parameters for the adjusted GN networks.
Figure 2: Left: simple ring graph composed of 6 nodes. Middle: the corresponding adjacency matrix. Right: scheme describing the construction of spatially embedded networks with distance-dependent connectivity. Each node (red dots) has a connectivity disk (filled circle); blue arrows indicate periodic boundary conditions (torus topology).
2.1. Spatially Embedded Graphs

Each neuron was situated in a quadratic domain of extent $R = 1$, wrapped to a torus to avoid boundary effects (see Figure 2, right). We considered two types of 2D spatially embedded networks: random position networks (RPNs) and lattice position networks (LPNs). In RPNs, the positions of all nodes were drawn independently from the same uniform probability distribution. In LPNs, the nodes were placed on the grid points of a rectangular lattice. For comparison, we also analyzed the corresponding 1D ring graphs.

In a network with no long-range connections, nodes placed within a circular neighborhood of radius $r$ were linked to the center node with connection probability $p$, according to
$$ c R^2 = p\,\pi r^2 \quad \text{with } r \ll R. \tag{2.1} $$
For the LPNs, the smallest possible neighborhood compatible with this rule was obtained for full connectivity ($p = 1$), implying a radius $r_{\min} = R\sqrt{c/\pi} \approx 0.061$. This neighborhood consisted of 8 nearest neighbors and 4 additional next-to-nearest neighbors, compatible with $k = 12$ for all networks considered in this study.
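As a consistency check, using $c = k/(N-1) = 12/1023$ from the network parameters above and solving (2.1) for the fully connected case $p = 1$ reproduces the quoted radius:
$$ r_{\min} = R\sqrt{\frac{c}{\pi}} = \sqrt{\frac{12/1023}{\pi}} \approx 0.061. $$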

We considered the following three families of networks, each spanning the full range from regular to random connectivity; a construction sketch is given after the list.

(i) Fuzzy neighborhood (FN) network: this model assumed uniform connectivity of probability $p$ within a circular neighborhood of radius $r$. No connections were established with nodes further away. Starting from a symmetric adjacency matrix $A(G)$ with $(r, p) = (r_{\min}, 1)$, the transition to a completely random graph was induced by simultaneously increasing $r$ and decreasing $p$ accordingly, ending at $(r, p) = (0.5, 0.015)$.

(ii) Small-world- (SW-) like network: again starting from $(r, p) = (r_{\min}, 1)$, we applied a rewiring procedure in order to introduce long-range links, that is, connections spanning larger distances than $r_{\min}$. Each individual link of the graph was, with probability $\phi$, replaced by a randomly selected one. For $\phi = 1$ we again ended up with a completely random graph.

(iii) Gaussian neighborhood (GN) network: Gaussian profiles were used to define a smooth distance-dependent connection probability, adjusted to the connectivity parameters of the FN networks. The corresponding parameter pairs were $(\sigma, p_0)$, where $\sigma$ was the width of the Gaussian profile used. For technical reasons, we confined our investigation here to RPN models. In contrast to the FN and SW models, the initial adjacency matrix $A(G)$ for $(\sigma, p_0) = (0.043, 1)$ was nonsymmetric. Motivated by neuroanatomical data [16], GN models represent a biologically more realistic connectivity model.
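The sketch below (our illustration, not the original implementation) shows one way to generate RPN instances of the three families with NumPy. The helper names and the exact Gaussian parameterization $p(d) = p_0 \exp(-d^2/(2\sigma^2))$ are assumptions; the paper only specifies that $\sigma$ is the profile width and that the parameters are matched to a mean degree of $k = 12$.

```python
import numpy as np

def torus_distance(pos, R=1.0):
    """Pairwise Euclidean distances on the unit torus (periodic boundaries)."""
    d = np.abs(pos[:, None, :] - pos[None, :, :])      # coordinate differences, shape (N, N, 2)
    d = np.minimum(d, R - d)                            # wrap around the torus
    return np.hypot(d[..., 0], d[..., 1])

def fn_network(pos, r, p, rng):
    """Fuzzy neighborhood: connect i -> j with probability p if the distance is below r."""
    D = torus_distance(pos)
    A = (rng.random(D.shape) < p) & (D < r)
    np.fill_diagonal(A, False)                          # no autapses
    return A.astype(np.uint8)

def sw_network(pos, r_min, phi, rng):
    """Small-world-like: fully connected local disk, then rewire each link with probability phi."""
    A = fn_network(pos, r_min, 1.0, rng)
    N = A.shape[0]
    src, tgt = np.nonzero(A)                            # snapshot of the original links
    for s, t in zip(src, tgt):
        if rng.random() < phi:
            A[s, t] = 0
            new = rng.integers(N)
            while new == s or A[s, new]:                # avoid autapses and double links
                new = rng.integers(N)
            A[s, new] = 1
    return A

def gn_network(pos, sigma, p0, rng):
    """Gaussian neighborhood: distance-dependent connection probability (assumed form)."""
    D = torus_distance(pos)
    P = p0 * np.exp(-D**2 / (2.0 * sigma**2))
    A = rng.random(D.shape) < P
    np.fill_diagonal(A, False)
    return A.astype(np.uint8)

rng = np.random.default_rng(0)
N, k = 1024, 12
pos = rng.random((N, 2))                                # RPN: uniformly random node positions
r = 0.1                                                 # example FN radius ...
p = (k / (N - 1)) / (np.pi * r**2)                      # ... with p chosen via (2.1)
A_fn = fn_network(pos, r, p, rng)
A_sw = sw_network(pos, r_min=0.061, phi=0.1, rng=rng)
A_gn = gn_network(pos, sigma=0.043, p0=1.0, rng=rng)
```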

2.2. Characteristic Network Properties

The following descriptors were used to characterize and compare the network models described above. Most quantities are well established in the context of graph theory (see, e.g., [10, 11]).

(a) Degree distributions and correlations: counting the incoming and outgoing links of each node of a graph yields an estimate of the distribution of in-degrees $P_{\mathrm{in}}(k)$ and out-degrees $P_{\mathrm{out}}(k)$, respectively. Here, we only used the out-degree for the analysis. The two-node degree correlation $K_c = \langle k_i k_j \rangle_{i \to j}$ describes out-degree correlations between connected nodes $i \to j$. In addition, to account for the spatial embedding of our graphs, we considered histograms of the number of links between any two nodes depending on their spatial distance.
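Both descriptors can be estimated directly from an adjacency matrix and the node positions. The sketch below is our illustration and assumes the mean-over-links reading of $K_c$ given above (consistent with the limiting values reported in Section 3).

```python
import numpy as np

def degree_correlation(A):
    """Two-node out-degree correlation: mean of k_i * k_j over all links i -> j."""
    k_out = A.sum(axis=1)
    src, tgt = np.nonzero(A)
    return float(np.mean(k_out[src] * k_out[tgt]))

def link_distance_histogram(A, pos, bins=50, R=1.0):
    """Histogram P(d): number of links as a function of the torus distance of the linked nodes."""
    d = np.abs(pos[:, None, :] - pos[None, :, :])
    d = np.minimum(d, R - d)                             # periodic boundary conditions
    dist = np.hypot(d[..., 0], d[..., 1])
    return np.histogram(dist[A.astype(bool)], bins=bins, range=(0.0, R / np.sqrt(2)))
```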

(b) Small-world characteristics: the cluster coefficient describes the probability that two nodes, both connected to a common third node, are also directly linked to each other. Let $C_i$ be the fraction of links actually established between any two nodes receiving a link from node $i$. We considered the mean cluster coefficient $C = (1/N)\sum_i C_i$. Additionally, we calculated the degree-dependent cluster coefficient $C(k)$, where the average was formed over all nodes with a given out-degree [27]. The shortest path $L_{ij}$ is the minimal number of hops necessary to get from node $i$ to node $j$, respecting link directions. We considered the average shortest path length $L = \frac{1}{N(N-1)}\sum_{i \neq j} L_{ij}$ over all pairs of distinct nodes, referred to as the “characteristic” path length. If delays in a neuronal network are mainly generated by synaptic and dendritic integration times, $L$ is a natural measure for the total delay to transmit a signal from neuron $i$ to neuron $j$. The two measures $C$ and $L$ together constitute the so-called small-world characteristics [10–12].
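Both small-world measures can be computed from the adjacency matrix alone; the following straightforward (not optimized) sketch uses SciPy's shortest-path routine for the directed, unweighted case.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def mean_cluster_coefficient(A):
    """Mean of C_i: the fraction of links realized among the out-neighbors of node i."""
    C = []
    for i in range(A.shape[0]):
        nb = np.nonzero(A[i])[0]                 # out-neighbors of node i
        k = len(nb)
        if k < 2:
            continue                             # C_i undefined for fewer than two neighbors
        C.append(A[np.ix_(nb, nb)].sum() / (k * (k - 1)))
    return float(np.mean(C))

def characteristic_path_length(A):
    """Average shortest path length L over ordered node pairs, respecting link directions."""
    L = shortest_path(A, method='D', directed=True, unweighted=True)
    off_diag = ~np.eye(A.shape[0], dtype=bool)
    return float(L[off_diag].mean())             # infinite if the graph is not strongly connected
```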

(c) Wiring length: since we deal with spatially embedded networks, any pair of nodes $i$ and $j$ can be assigned a spatial distance $D_{ij}$. Of interest here was the mean pairwise distance of connected nodes, $D = \langle D_{ij} \rangle_{i \to j}$. For a fixed number of links, it provides a measure of the total wiring length of the network, assuming straight cables [28, 29]. If delays in a neuronal network are mainly generated by axonal conductance times, $D$ is a natural measure for the total delay to transmit a signal from neuron $i$ to neuron $j$.
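A corresponding sketch, again assuming the torus geometry used throughout:

```python
import numpy as np

def mean_connection_distance(A, pos, R=1.0):
    """Mean Euclidean torus distance D over all connected node pairs i -> j."""
    d = np.abs(pos[:, None, :] - pos[None, :, :])
    d = np.minimum(d, R - d)                     # periodic boundary conditions
    dist = np.hypot(d[..., 0], d[..., 1])
    return float(dist[A.astype(bool)].mean())
```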

(d) Eigenvalues and eigenvectors: for any graph $G$ with $N$ nodes, we numerically determined the $N$ (complex) eigenvalues $\lambda$ of its adjacency matrix $A(G)$ and estimated the eigenvalue density $P(\lambda)$ based on 20 samples of graphs of the same type [10, 30, 31]. The corresponding eigenvectors $v$ of $A(G)$ were also numerically determined [10]. To quantify the spatial spread of a normalized eigenvector $v$, we used three different measures. Firstly, the weighted 2D circular variance
$$ V = 4 - 2\left|\sum_k |v_k|^2 e^{2\pi i x_k / R}\right| - 2\left|\sum_k |v_k|^2 e^{2\pi i y_k / R}\right|, \tag{2.2} $$
where $v_k$ are the components of $v$ satisfying $\sum_k |v_k|^2 = 1$ and $(x_k, y_k)$ denotes the spatial coordinates of node $k$. Complex numbers were used here to conveniently account for the fact that the neurons in our model are arranged on a torus. The circular mean [32, 33] of the $x$-coordinates across all neurons, $\mu_c = \frac{1}{N}\sum_k e^{2\pi i x_k / R}$, was used to obtain the average $x$-coordinate in a consistent manner. The circular variance $\sigma_c^2 = 2(1 - |\mu_c|)$ provides a measure of the dispersion of the $x$-coordinates; small values of $\sigma_c^2$ indicate a high concentration on the circle. For any eigenvector $v$, we considered the sum of the circular variances of the $x$- and $y$-coordinates, respectively, each weighted according to the participation of the individual nodes $k$ as described by the coefficients $|v_k|^2$. This definition gives values $0 \leq V \leq 4$. Small values of $V$ indicate that the “mass” encoded by the squared components of $v$ is concentrated in a compact spatial region (see Figure 3 (top-left)), while larger values of $V$ imply that it is more uniformly spread over the whole domain (see Figure 3 (bottom-right)). For comparison, we also considered two other measures, the entropy $H$ and the inverse participation ratio $I$:
$$ H = -\sum_{k=1}^{N} |v_k|^2 \log |v_k|^2, \qquad I = \sum_{k=1}^{N} |v_k|^4. \tag{2.3} $$
The entropy $H$ assumes its maximal value $H_{\max} = \log N$ if the mass encoded by the squared coefficients of $v$ is uniformly distributed over its $N$ components. Its minimal value $H_{\min} = 0$ is assumed if the mass is concentrated in one point in space. The inverse participation ratio was suggested for the analysis of 1D ring graphs [31]. In contrast to $H$, it assumes its minimal value $I_{\min} = 1/N$ if the mass encoded by the squared coefficients of $v$ is uniformly distributed over its $N$ components. Its maximal value $I_{\max} = 1$ is assumed if the mass is concentrated in one point in space. Like the circular variance, both measures were used to assess the spatial concentration of eigenvectors. Figure 3 shows four sample eigenvectors arising from different networks, with the corresponding values of the three locality measures indicated above each plot.
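The sketch below (our illustration) computes the eigenvalue spectrum and the three locality measures for all eigenvectors of a given adjacency matrix, following the definitions (2.2) and (2.3) as reconstructed above.

```python
import numpy as np

def eigen_locality(A, pos, R=1.0):
    """Eigenvalues of A plus the locality measures V, H, I of each eigenvector."""
    lam, vecs = np.linalg.eig(A.astype(float))
    w = np.abs(vecs) ** 2
    w /= w.sum(axis=0)                              # enforce sum_k |v_k|^2 = 1 per eigenvector
    phase_x = np.exp(2j * np.pi * pos[:, 0] / R)    # x-coordinates mapped onto the circle
    phase_y = np.exp(2j * np.pi * pos[:, 1] / R)
    V = 4 - 2 * np.abs(w.T @ phase_x) - 2 * np.abs(w.T @ phase_y)   # weighted circular variance
    H = -(w * np.log(np.maximum(w, 1e-300))).sum(axis=0)            # entropy, H_max = log N
    I = (w ** 2).sum(axis=0)                                        # inverse participation ratio
    return lam, V, H, I
```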

Figure 3: Localization of four sample eigenvectors of spatially embedded graphs. The squared value of each component of an eigenvector is represented by a rectangle of proportional area, centered at the position of the corresponding node. Top left: FN random position network with 𝑝=0.9. Bottom right: FN lattice position network with 𝑝=0.5. Right top and bottom: Two different eigenvectors of a SW RPN for 𝜙=0.1.

3. Results

We employed several characteristic network properties to compare different types of spatially embedded networks (FN, SW, and GN). Comparing FN and SW connectivity, we aimed to analyze the effect of unconstrained long-range connections, as opposed to the compact FN connectivity. We also asked if GN connectivity provides an appropriate compromise, involving long-range links combined with a compact local connectivity range. We focused on networks with random node positions (RPN), while the results for lattice position networks (LPNs) and the corresponding 1D ring graphs are only discussed in case of a significant deviation.

3.1. Degree Distributions and Degree Correlations

In the FN, SW, and GN scenarios, networks with random node positions exhibited binomial distributions for both the in- and out-degree (Figure 4 (top-left)), irrespective of the relative abundance of nonlocal connections. For networks with nodes positioned on a regular lattice, however, these distributions were binomial only in the case of random connectivity. Here, a more regular wiring (𝜙<0.5, 𝑝>0.5), that is, fewer nonlocal connections, implied less spread in the distribution [28]. For RPNs, the variability of the degree of each node depended both on the randomness parameter characterizing its connectivity and on the fluctuations of the number of nodes located within its neighborhood (connectivity disk). In case of LPNs, however, this variability was only determined by the randomness parameter.

Figure 4: Top left: binomial out-degree distributions (gray) for RPNs based on FN and SW connectivity for different parameter settings. The fitted binomial distribution is superimposed (black). Top right: degree correlations depending on the randomness parameters $p$ and $\phi$, respectively, for FN (blue), SW (red), and GN (green) RPNs. The results for the corresponding 1D ring graphs are also indicated, for both SW (magenta) and FN (light blue). Each data point represents the mean outcome of 20 simulations; the largest occurring SEM is 0.54. Bottom: three histograms $P(d)$ of the number of links as a function of their spatial distance $d$. Each histogram represents one connectivity type (SW, FN, or GN). The specific parametric realizations are chosen such that the mean distance of connected nodes is approximately equal, $D = 0.15$, corresponding to a horizontal line in Figure 5, bottom left.

Figure 4 (top-right) shows the two-node degree correlations of RPNs for the three types of connectivity considered in this study (FN, GN, and SW). Additionally shown are the results of calculating $K_c$ for 1D ring graphs with SW and FN connectivity. These were comparable to those of the LPN models but much less influenced by the strong sensitivity to fluctuations in $k$ that occurred for LPNs with FN connectivity. For regularly connected 1D ring graphs ($p = 1$ or $\phi = 0$) the degree correlations exhibited smaller values ($K_c = \langle k \rangle \langle k \rangle$) than for random connections ($p \to 0$, $\phi = 1$), resulting in an increasing $K_c$ curve. In contrast, for RPNs, $K_c$ started with rather high values and decreased with increasing randomness parameter, terminating at the same value of approximately $K_c = 156$ as observed for randomly connected 1D ring graphs. RPNs with GN connectivity exhibited rather small $K_c$ values for $\phi < 0.2$ compared to the other two models. Thus, $P(k)$ and $K_c$ clearly depended on the type of spatial embedding (RPN versus LPN), whereas there were only small deviations with respect to the type of connectivity (FN versus SW versus GN).

The three histograms in Figure 4 (bottom) indicate the frequency of connections $P(d)$ at a given distance $d$ for SW, FN, and GN RPNs, respectively. Each of these networks was established with the same mean distance of connected nodes, $D = 0.15$ (cf. Figure 5). In contrast to the out-degree distribution $P(k)$, the distributions $P(d)$ reflect the specific distance-dependent connectivity profiles. For a uniform connection probability, as given in the local neighborhood $d < r_{\min} = 0.0611$ of SW networks, $P(d)$ exhibited a linear slope; see Figure 4, bottom-left. We also observed a linear increase of $P(d)$ within the connectivity range $r$ of FN networks, as well as for the number of long-range links ($r_{\min} < d < R$) in SW models. This feature is due to the linear increase of the circumference of a circle with increasing radius: in a 2D spatial embedding with uniformly distributed node positions and a constant connection probability, the number of nodes connected at a given distance grows linearly with that distance. For GN networks, however, the connection probability is not constant but decreases with increasing distance, leading to the nonlinear rise and decline displayed in Figure 4, bottom-right.
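The circumference argument can be made explicit. For nodes distributed uniformly with density $N/R^2$ and a constant connection probability $p$ up to range $r$, the expected number of links made by a given node at distances in $(d, d + \mathrm{d}d)$, with $d < r$, is
$$ p \, \frac{N}{R^2} \, 2\pi d \, \mathrm{d}d \;\propto\; d, $$
so $P(d)$ grows linearly with $d$ within the connectivity range.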

3.2. Small-World Characteristics and Wiring Length

In this section, most results are shown for RPNs. Concerning the average shortest path length and the mean distance of connected nodes, any differences between RPNs and LPNs were negligible. Only the cluster coefficient was significantly higher in case of RPNs; for a detailed analysis of this issue, see [28].

The well-known characteristic feature of small-world networks is the behavior of the ratio of $L$ and $C$ as a function of the rewiring probability $\phi$. Starting from a regular graph, with increasing $\phi$ the average shortest path length $L$ initially decreases much more than the cluster coefficient $C$. This is exactly what we observed for our spatially embedded SW networks; see Figure 5, top left (red curves). In contrast, we found $C \approx L$ (in normalized units) for all $p$ in the case of FN connectivity (blue curves). These findings are summarized in Figure 5, top-right, plotting $C$ versus $L$ for the three types of connectivity. Randomness now progresses from top-right to bottom-left. Spatially embedded SW networks showed a strong small-world effect, according to which very few long-range connections sufficed to dramatically decrease the characteristic path length $L$, while the cluster coefficient $C$ remained relatively high. Neither FN nor GN networks shared this behavior. Any given clustering $C$ was associated with much shorter paths $L$ in SW networks than in FN or GN networks.

Figure 5: Top left: cluster coefficient $C$ and average shortest path length $L$ for RPNs with SW (upper) and FN (lower) connectivity, for different parameters $p$ and $\phi$, respectively. All curves are normalized to the common maximum ($C_{\max} = 0.586$, $L_{\max} = 8.53$). Shown are mean values obtained from 20 simulations for each parameter; the SEM is always below 0.003. Top right: scatter plot of the normalized values of $C$ versus $L$ for RPNs with FN, SW, and GN connectivity. The leftward bending of the SW curve reflects the small-world effect: strong clustering (high values of $C$) coexists with short paths linking pairs of nodes (low values of $L$). Bottom left: mean distance of connected nodes $D$ for FN, GN, and SW RPNs, depending on $p$ or $\phi$, respectively. All values are normalized as described above ($D_{\max} = 0.38$). The SEM is below 0.00009. Bottom right: scatter plot of $D$ versus $L$ for the same FN, GN, and SW RPNs.

Figure 5, bottom, shows the mean Euclidean distance $D$ between pairs of connected nodes, again depending on the randomness parameters $p$ and $\phi$. For SW connectivity, $D$ increased linearly, while both FN and GN curves again exhibited a different behavior: initially, there was a comparably weak increase, which became steeper at $p_0 = 0.8$ and $p = 0.8$, respectively. Wiring length $D$ and graph distance $L$ are jointly displayed for all networks considered here in Figure 5, bottom-right. For all network models, $D$ increased as $L$ decreased from regular (bottom-right) to random (top) connectivity. Nonlocal connections decreased the graph-theoretic path length $L$, but they increased the total wiring length $D$. To realize a given graph-theoretic path length, SW networks had the smallest wiring expenses, followed by GN and FN networks, the latter making the least effective use of cables.

We also computed the degree-dependent cluster coefficient $C(k)$, another well-established measure for 1D networks [27]. For random graphs, $C(k)$ is known to be independent of the degree $k$. This is what we observed for RPNs, independently of the type of connectivity, as well as for LPNs with FN and GN connectivity, as indicated by the horizontal lines in Figure 6. Only for LPNs with a less random SW connectivity ($\phi < 0.5$) did we find a decreasing $C(k)$ for degrees $k$ larger than a certain threshold (depending on the specific value of $\phi$). This effect cannot be traced back to the degree distribution, since $P(k)$ is identical for the corresponding SW and FN LPNs (see above). In LPNs, thus, the nonconstant $C(k)$ carries information about deviations from uniform connectivity. In turn, for more regular connectivity, $C(k)$ behaved differently for RPNs and LPNs, respectively.

Figure 6: Degree-dependent cluster coefficient for RPNs (left) and LPNs (right) with SW (red symbols) and FN (blue symbols) connectivity. Each figure shows the results for several different values of the parameters 𝑝 and 𝜙, respectively.

To summarize, FN and GN models did not exhibit any small-world characteristics. There was no reduction of 𝐿 compared to 𝐶 with increasing randomness because unconstrained long-range connections were only present in the SW model. Long-range links also induced the strong increase of 𝐷 in the SW model, as well as the decrease in 𝐶(𝑘) in case of LPNs.

3.3. Eigenvalues and Eigenvectors

Concerning the eigenvalue distribution of the adjacency matrix, we again found characteristic differences due to the spatial embedding, especially in the case of near-regular connectivity. We observed, however, again only small deviations between different types of connectivity.

Figure 7, bottom rows, shows the density of eigenvalues (real part on the $x$-axis, imaginary part on the $y$-axis) for FN (left) and SW (right) RPNs. From top to bottom, randomness progresses, indicated by $p$ ranging from 0.95 to 0.015 and $\phi$ ranging from 0.05 to 1.0. For regular networks ($p = 1$ or $\phi = 0$), these networks had a symmetric adjacency matrix and, therefore, only real eigenvalues. The corresponding eigenvalue spectrum $P(\lambda)$ is shown in Figure 7, top. Note the prominent peak at $\lambda = -1$ in an otherwise smooth and asymmetric distribution. The GN network, however, exhibited an asymmetric disk-like structure even at $p_0 = 1$, due to the initially nonsymmetric adjacency matrix (data not shown). In contrast to the smooth distributions of RPNs, the eigenvalue density of LPNs was rugged, with many peaks; see Figure 8.

Figure 7: Eigenvalue density of FN (left) and SW (right) RPNs ranging from local (top) to random (bottom) networks. Top: real eigenvalue spectrum of a (symmetric) locally connected network (𝑟=0.061, 𝑝=1, 𝜙=0). Note the exceptional peak of the density at small negative values around −1. Bottom: complex eigenvalue density for (nonsymmetric) RPNs with FN and SW connectivity. The corresponding parameters 𝑝 and 𝜙 are indicated within the plot. The logarithmic gray scale indicates densities up to about 16 per unit square (black).
Figure 8: Complex eigenvalue density for LPNs with FN (𝑝=0.95) and SW (𝜙=0.05) connectivity. The logarithmic gray scale indicates densities up to about 16 per unit square (black).

For both the FN and SW scenarios, the distribution of eigenvalues smoothly changed its shape from circular (most eigenvalues complex) with radius $\sqrt{Nc(1-c)}$ in the case of a completely randomly connected network to degenerate (all eigenvalues real) with a heavy tail of large positive eigenvalues for networks with only local couplings; see Figure 7. Additionally, both distributions exhibited a prominent peak at $\mathrm{Re}(\lambda) = -1$, clearly visible only for $p > 0.9$ and $\phi < 0.1$, respectively. In the FN model, there were more large real eigenvalues, corresponding to the prominent horizontal line. For the SW model, particularly in the range of $\phi = 0.5$, we observed a higher frequency of eigenvalues with $2.5 < \mathrm{Re}(\lambda) < 7.5$ and $-1.5 < \mathrm{Im}(\lambda) < 1.5$.
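For the parameters used here, and assuming the circular-law estimate of the radius quoted above, the bulk of the spectrum of the completely random network is expected to lie within
$$ \sqrt{Nc(1-c)} = \sqrt{1024 \times 0.012 \times 0.988} \approx 3.5 $$
of the origin, while the mean degree $Nc \approx 12$ is expected to set the scale of the largest real eigenvalue.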

Although the spectra of FN and SW networks were quite similar, the average spatial concentration of eigenvectors turned out to be a rather sensitive indicator of both the type of spatial embedding (RPN versus LPN) and the type of wiring (FN versus SW) assumed in the construction of the graph. Figure 9 shows the results of calculating the locality of all eigenvectors. As explained in Section 2, we considered three measures, two of which are displayed in Figure 9. The top row presents the entropy $H$ for RPNs, the middle row shows the square root of the circular variance, $V^{1/2}$, for RPNs, and the bottom row shows the same quantity for LPNs. In each figure, dots correspond to the eigenvectors and eigenvalues of one particular network realization. The different colors represent different randomness parameters $p$ and $\phi$, respectively.

Figure 9: Locality of eigenvectors for RPNs and LPNs, either with FN (left) or SW (right) connectivity. Top: entropy 𝐻 and square-root of circular variance 𝑉1/2 for RPNs. Middle: circular variance for LPNs. In each plot the randomness parameters 𝑝 or 𝜙 range from purely local (red) to random (blue) connectivity. Bottom: mean 𝑉1/2 of all analyzed networks in dependence of 𝑝 or 𝜙, respectively. Shown are FN and SW connectivity for RPN and LPN and GN connectivity for RPNs only. Each data point represents the mean of 20 simulations; error bars indicate the SEM.

In general, the eigenvectors corresponding to the largest absolute values of $\lambda$ were the most local ones. Additionally, we found that the more regular the connectivity, the more spatial concentration occurred: there were more localized eigenvectors in the regular connectivity range (red), and these exhibited smaller values of both $V^{1/2}$ and $H$ (the $I$ measure behaved similarly to $H$; data not shown). In addition to the features discussed above, we again found a prominent aggregation at $\lambda = -1$, which can be traced back to the peak at $\mathrm{Re}(\lambda) = -1$ in the eigenvalue spectrum. Comparing the FN and SW connectivity models, we found that the eigenvectors of the FN networks were clearly more local than those of the SW networks: there were more points with small $V^{1/2}$ values on the left than on the right side of Figure 9.

Figure 9, bottom, indicates a prominent difference between RPNs and LPNs. Lattice positions produced significantly less spatial concentration: we found only very few eigenvectors with $V^{1/2} < 1.5$. The most local eigenvectors were not found in the regular connectivity range but for values of $p$ and $\phi$ around 0.5. Hence, a certain amount of randomness was important for the emergence of locality. There was less locality in the LPNs and less locality in the SW model. However, for more random connectivity these differences were less pronounced. With respect to the circular variance, the GN model indeed showed intermediate behavior: its eigenvectors were not as local as in the FN networks, but there was more spatial concentration than in the SW model.

4. Discussion and Conclusions

We introduced two families of network models, each describing a sheet, or layer, of cortical tissue with different types of horizontal connections. We assumed no particular structure in the vertical dimension. Neurons were situated in space, and the probability for a synaptic coupling between any two cells depended only on their distance. Both models could be made compatible with basic neuroanatomy by adjusting the parameters of the coupling appropriately. Both families of networks spanned the full range from purely local, or regular, to completely random connectivity by variation of a single parameter. The paths they took through the high-dimensional manifold of possible networks, however, were very different.

The first model (fuzzy neighborhood) assumed a homogeneous coupling probability for neurons within a disk of a given diameter centered at the source neuron. The probability was matched to the size of the disk such that the total connectivity assumed a prescribed value. The related Gaussian neighborhood model was based on similar assumptions but its smoothly decreasing connection probabilities were defined by Gaussian profiles, adapted to those of the fuzzy neighborhood model. For very small disks, only close neighbors formed synapses with each other, and, for very large disks spanning the whole network, couplings were effectively random. The second model (small world) started with the same narrow neighborhoods but departed in a different direction by replacing more and more local connections with nonlocal ones, randomly selecting targets that were located anywhere in the network.

For most models considered in this study, the initial random positioning of neurons in space guaranteed that both in-degrees and out-degrees always had the same binomial distribution, irrespective of the size of the disk defining the neighborhood and irrespective of the number of nonlocal connections. This means that none of the statistical differences between the various candidate models described in this paper can be due to specific degree distributions. This is in marked contrast to the original demonstration of the small-world effect in ring graphs [34], where the locally coupled networks were at the same time completely regular, that is, all degrees were identical. Finally, we also relaxed the random positioning of neurons before linking them and put them on a (jittered) grid instead [28]. It is striking to see (and a warning to the modeler) that this indeed had a strong impact on several of the parameters considered (Figures 6 and 9).

The first main result of this study is that networks residing in two dimensions, very much like one-dimensional ring graphs [34], can also exhibit the small-world effect (Figure 5). As a prerequisite, though, the nonlocal shortcut links must be allowed to reach even remote parts of the network, without any constraint on the distance they may need to travel. In our small-world model, strong clustering (provided by intense neighborhood coupling) and short graph-theoretical paths (provided by the long-distance bridges) coexisted for certain parameter constellations. Such a smart circuit design is the prevailing assumption for cortical connectivity [14, 15, 29]. In the fuzzy neighborhood models, in contrast, the global limit imposed on the physical length of connections strictly prohibited this combination of properties. The Gaussian neighborhood model, finally, showed comparably weak clustering but shorter graph-theoretical paths than the corresponding fuzzy neighborhood model.

It seems reasonable to assume that, in neocortex, the length of a cable realizing a connection is roughly proportional to the physical distance it has to bridge. The second main result of this study is that the length of the average shortest graph-theoretical path was always inversely related to the total length of cable necessary to realize it (Figure 5 (bottom part)), considering networks with fixed global connectivity. Completely random networks had very short graph-theoretical paths, but they needed a lot of cable to be wired up. In contrast, networks with local couplings were very economical in terms of cable, but at the price of quite long graph-theoretical paths. Networks from the small-world regime with short graph-theoretical paths were relatively inefficient in terms of necessary cable length, compared to the fuzzy and Gaussian neighborhood models (Figure 5). Only networks with patchy long-range connectivity [7, 20, 29] provide a near-to-optimal solution, since they are very efficient in terms of both cable and graph-theoretical path lengths, in addition to high clustering. Networks with patchy connectivity are, however, beyond the scope of this paper. In view of the results presented here, an optimized model would employ a Gaussian connectivity profile for local connections, combined with some long-distance bridges to overcome the sparsity of cortical connectivity.

What conclusions can be drawn from graph spectral analyses? First of all, the complex eigenvalue spectrum of the adjacency matrix of a graph is a true graph invariant in the sense that any equivalent graph (obtained by renaming the nodes) has exactly the same spectrum. To some degree, the opposite is also the case: significantly different graphs give rise to differently shaped eigenvalue spectra. Empirically, it seems that similar graphs also yield similar spectra, but a rigorous mathematical foundation of such a result would be very difficult to establish. So we informally state the result that the shape of eigenvalue spectra reflects characteristic properties of graph ensembles, very much like a fingerprint. With an appropriate catalog at hand, major characteristics of a network might be recognized from its eigenvalue spectrum.

More can be said once the eigenvalue spectrum is interpreted in an appropriate dynamical context. Linearizing the firing rate dynamics about a stationary state allows the direct interpretation of eigenvalues in terms of the transient dynamic properties of an eigenstate. Real parts give the damping time constant, and imaginary parts yield the oscillation frequency. Although some care must be taken to correctly account for inhibition in the network [35], it is safe to predict that networks with more local connections tend to have a greater diversity with respect to the life times of their states and a reduced tendency to produce fast oscillations (Figure 7). The spatial properties of the eigenstates (Figures 3 and 9) are potentially relevant for describing network-wide features of activity, which can be observed in the brain using modern methods like real-time optical imaging. More specific predictions about the network dynamics based on a network model, however, would certainly depend on the precise neuron model, further parameters describing the circuit, in particular synaptic transmission delays [36], but also on the type of signal the dynamic properties of which are considered [37].

Finally, we would like to stress once more the importance of identifying characteristic parameters in stochastic graphs and their potential yield for the analysis of neuroanatomical data. Measurable quantities, or combinations of such characteristic numbers, could be of invaluable help to find signatures and to eventually identify the type of neuronal network represented by neocortex.

Acknowledgments

The authors thank A. Schüz and V. Braitenberg for stimulating discussions. This work was funded by a grant to N. Voges from the IGPP Freiburg. Further support was received from the German Federal Ministry for Education and Research (BMBF; Grant no. 01GQ0420 to BCCN Freiburg) and the 6th RFP of the EU (Grant no. 15879-FACETS).

References

  1. V. Braitenberg and A. Schüz, Cortex: Statistics and Geometry of Neuronal Connectivity, Springer, Berlin, Germany, 2nd edition, 1998.
  2. R. J. Douglas and K. A. C. Martin, “Neuronal circuits of the neocortex,” Annual Review of Neuroscience, vol. 27, pp. 419–451, 2004.
  3. J. S. Lund, A. Angelucci, and C. P. Bressloff, “Anatomical substrates for functional columns in macaque monkey primary visual cortex,” Cerebral Cortex, vol. 13, no. 1, pp. 15–24, 2003.
  4. A. M. Thomson, “Selectivity in the inter-laminar connections made by neocortical neurones,” Journal of Neurocytology, vol. 31, no. 3–5, pp. 239–246, 2002.
  5. K. S. Rockland, “Non-uniformity of extrinsic connections and columnar organization,” Journal of Neurocytology, vol. 31, no. 3–5, pp. 247–253, 2002.
  6. C. Boucsein, M. P. Nawrot, and P. Schnepel, “Beyond the cortical column: abundance and physiology of horizontal connections imply a strong role for inputs from the surround,” Frontiers in Neuroscience, vol. 5, article 32, 2011.
  7. N. Voges, A. Schüz, A. Aertsen, and S. Rotter, “A modeler's view on the spatial structure of intrinsic horizontal connectivity in the neocortex,” Progress in Neurobiology, vol. 92, no. 3, pp. 277–292, 2010.
  8. P. Erdős and A. Rényi, “On random graphs. I,” Publicationes Mathematicae Debrecen, vol. 6, pp. 290–297, 1959.
  9. B. Bollobás, Random Graphs, vol. 73 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 2nd edition, 2001.
  10. R. Albert and A. L. Barabási, “Statistical mechanics of complex networks,” Reviews of Modern Physics, vol. 74, no. 1, pp. 47–97, 2002.
  11. M. E. J. Newman, “The structure and function of complex networks,” SIAM Review, vol. 45, no. 2, pp. 167–256, 2003.
  12. S. H. Strogatz, “Exploring complex networks,” Nature, vol. 410, no. 6825, pp. 268–276, 2001.
  13. O. Sporns, G. Tononi, and G. M. Edelman, “Theoretical neuroanatomy: relating anatomical and functional connectivity in graphs and cortical connection matrices,” Cerebral Cortex, vol. 10, no. 2, pp. 127–141, 2000.
  14. D. S. Bassett and E. Bullmore, “Small-world brain networks,” Neuroscientist, vol. 12, no. 6, pp. 512–523, 2006.
  15. F. Gerhard, G. Pipa, B. Lima, S. Neuenschwander, and W. Gerstner, “Extraction of network topology from multi-electrode recordings: is there a small-world effect?” Frontiers in Computational Neuroscience, vol. 5, article 4, 2011.
  16. B. Hellwig, “A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex,” Biological Cybernetics, vol. 82, no. 2, pp. 111–121, 2000.
  17. N. Kalisman, G. Silberberg, and H. Markram, “Deriving physical connectivity from neuronal morphology,” Biological Cybernetics, vol. 88, no. 3, pp. 210–218, 2003.
  18. A. Stepanyants, J. A. Hirsch, L. M. Martinez, Z. F. Kisvárday, A. S. Ferecskó, and D. B. Chklovskii, “Local potential connectivity in cat primary visual cortex,” Cerebral Cortex, vol. 18, no. 1, pp. 13–28, 2008.
  19. H. Ojima, C. N. Honda, and E. G. Jones, “Patterns of axon collateralization of identified supragranular pyramidal neurons in the cat auditory cortex,” Cerebral Cortex, vol. 1, no. 1, pp. 80–94, 1991.
  20. Z. F. Kisvarday and U. T. Eysel, “Cellular organization of reciprocal patchy networks in layer III of cat visual cortex (area 17),” Neuroscience, vol. 46, no. 2, pp. 275–286, 1992.
  21. C. Mehring, U. Hehl, M. Kubo, M. Diesmann, and A. Aertsen, “Activity dynamics and propagation of synchronous spiking in locally connected random networks,” Biological Cybernetics, vol. 88, no. 5, pp. 395–408, 2003.
  22. A. Kumar, S. Rotter, and A. Aertsen, “Propagation of synfire activity in locally connected networks with conductance-based synapses,” in Proceedings of the Computational and Systems Neuroscience (Cosyne '06), 2006.
  23. D. B. Chklovskii, “Optimal sizes of dendritic and axonal arbors in a topographic projection,” Journal of Neurophysiology, vol. 83, no. 4, pp. 2113–2119, 2000.
  24. D. B. Chklovskii, “Synaptic connectivity and neuronal morphology: two sides of the same coin,” Neuron, vol. 43, no. 5, pp. 609–617, 2004.
  25. N. Voges, A. Aertsen, and S. Rotter, “Statistical analysis and modeling of cortical network architecture based on neuroanatomical data,” Göttingen Neurobiology Report, Thieme, 2005.
  26. N. Voges, A. Aertsen, and S. Rotter, “Anatomy-based network models of cortex and their statistical analysis,” in Proceedings of the 15th Annual Computational Neuroscience Meeting (CNS '06), Edinburgh, UK.
  27. A. L. Barabási and Z. N. Oltvai, “Network biology: understanding the cell's functional organization,” Nature Reviews Genetics, vol. 5, no. 2, pp. 101–113, 2004.
  28. N. Voges, A. Aertsen, and S. Rotter, “Statistical analysis of spatially embedded networks: from grid to random node positions,” Neurocomputing, vol. 70, no. 10–12, pp. 1833–1837, 2007.
  29. N. Voges, C. Guijarro, A. Aertsen, and S. Rotter, “Models of cortical networks with long-range patchy projections,” Journal of Computational Neuroscience, vol. 28, no. 1, pp. 137–154, 2010.
  30. I. J. Farkas, I. Derényi, A. L. Barabási, and T. Vicsek, “Spectra of “real-world” graphs: beyond the semicircle law,” Physical Review E, vol. 64, no. 2, Article ID 026704, 2001.
  31. I. Farkas, I. Derényi, H. Jeong et al., “Networks in life: scaling properties and eigenvalue spectra,” Physica A, vol. 314, no. 1–4, pp. 25–34, 2002.
  32. E. Batschelet, Circular Statistics in Biology, Mathematics in Biology, Academic Press, London, UK, 1981.
  33. N. I. Fisher, Statistical Analysis of Circular Data, Cambridge University Press, Cambridge, UK, 1993.
  34. D. J. Watts and S. H. Strogatz, “Collective dynamics of ‘small-world’ networks,” Nature, vol. 393, no. 6684, pp. 440–442, 1998.
  35. B. Kriener, T. Tetzlaff, A. Aertsen, M. Diesmann, and S. Rotter, “Correlations and population dynamics in cortical networks,” Neural Computation, vol. 20, no. 9, pp. 2185–2226, 2008.
  36. N. Voges and L. Perrinet, “Phase space analysis of networks based on biologically realistic parameters,” Journal of Physiology Paris, vol. 104, no. 1-2, pp. 51–60, 2010.
  37. M. Timme, F. Wolf, and T. Geisel, “Topological speed limits to network synchronization,” Physical Review Letters, vol. 92, no. 7, Article ID 074101, 2004.