Computational Intelligence and Neuroscience

Special Issue: Signal Processing for Neural Spike Trains

Research Article | Open Access

Volume 2010 | Article ID 752428

Michael Krumin, Shy Shoham, "Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains", Computational Intelligence and Neuroscience, vol. 2010, Article ID 752428, 9 pages, 2010.

Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains

Academic Editor: Andrzej Cichocki
Received: 03 Apr 2009
Revised: 13 Aug 2009
Accepted: 11 Jan 2010
Published: 29 Apr 2010


Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis which have proven to be very powerful in the modeling and analysis of continuous neural signals like EEG signals. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting "hidden" Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method.

1. Introduction

The analysis of multivariate neurophysiological signals at the cellular (spike trains) and population scales (EEG/MEG, LFP, and ECOG) has developed almost independently, largely due to the mathematical differences between continuous and point-process signals. The analysis of multiple neural spike train data [1] has gained tremendous relevance recently with the advent and widespread application of arrays of microelectrodes in both basic and applied Neurosciences. Furthermore, emerging optical methods for network activity imaging [2] and control [3] are likely to further compound this growth.

Currently, the analysis of multichannel spike trains is still largely limited to single-channel analyses, to bivariate cross-correlation and metric-space analyses [4], and to spike train filtering ("decoding"). In contrast, much of EEG/MEG time series analysis has revolved around linear and nonlinear models and analyses that are essentially multivariate, most prominently the multivariate autoregressive (MVAR) model. The MVAR framework is associated with a powerful set of time- and frequency-domain statistical tools for inferring directional and causal information flow based on Granger’s framework [5], including linear and nonlinear Granger causality, directed transfer function, directed coherence, and partial directed coherence (see [6–8] for reviews). Scattered attempts at applying this general framework to neural spike trains have relied on smoothing the spike trains to obtain a continuous process that can be fit with an MVAR model [9–12]. This approach has the clear disadvantage of being highly kernel dependent and of introducing unwanted distortions. The inability to estimate multivariate autoregressive models for spike trains has recently motivated Nedungadi et al. [13] to develop an alternative nonparametric procedure for computing Granger causality based on spectral matrix factorization (without fitting the data with an autoregressive model).

The purpose of this paper is to bridge this divide in neurophysiological data analysis by introducing a correlation-distortion-based framework for applying multivariate autoregressive models to multichannel spike trains. The primary aim of making this connection is to enable direct identification of causal information flow among populations of neurons using the powerful Granger causality analyses, which have been tried and tested in numerous studies of continuous neural signals. The framework is based on our recent analytical results [14, 15] on correlation distortions in (multiple) Linear-Nonlinear-Poisson (LNP) models when the inputs are white Gaussian noise processes and the nonlinearities are exponential, square, or absolute values. The essential idea in this approach is that the nonlinearity (which produces the firing rates) systematically distorts the correlation structure of the correlated Gaussian outputs of the linear stage, and that the spike trains carry essentially the same expected correlation structure. By deriving formulas for these distortions, we were able to generate synthetic spike trains with a fully-controllable correlation structure by choosing FIR linear kernels that “predistort” the Gaussian processes to cancel out the subsequent distortion. Such spike trains can be applied, for example, in pattern photo-stimulation of synthetic input activity onto a neuron, or for controlling neuron populations in artificial neuroprosthetic interfaces [3, 16]. Although we noted in [14] that the linear stage could generally have a recursive MVAR structure, the required estimation steps were not presented or tested.

The remainder of the paper is organized as follows. Section 2 introduces the methods used for generating the spike trains used in this paper and for evaluating statistical significance. In Section 3 we present and evaluate the procedure for estimating the MVAR-nonlinear-Poisson model. In Section 4 we provide an overview of linear Granger causality analyses and apply them to estimating information flow in bi- and trivariate spike trains. In Section 5 we conclude by discussing the new framework’s relation to previous work, and its prospects and limitations.

2. Methods

2.1. Synthetic Spike Train Generation

Spike trains were generated in two different ways in order to mimic two basic scenarios encountered in neural data recordings: distributed population activity with relatively wide correlation functions, and local networks of directly interconnected neurons.

Population activity was simulated using a Linear-Nonlinear-Poisson (LNP) generative neural model with a multichannel linear stage modeled by a Multivariate Autoregressive model (see Section 3). Causal connectivity structures were generated by choosing appropriate coefficients for the MVAR model (details provided for each example in Section 4). The desired mean firing rates and firing rate variability were obtained by adjusting the parameters of the nonlinearity [14].
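The generative model described above can be sketched in a few lines of Python. The function name and the parameterization of the exponential nonlinearity (as exp(gain·x + offset)) are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def simulate_mvar_lnp(A, sigma_eps, gain, offset, n_bins, dt=0.001, seed=0):
    """Sketch of an MVAR-Nonlinear-Poisson spike-train generator.

    A         : (m, p, p) MVAR coefficient matrices A_1..A_m
    sigma_eps : (p, p) innovation covariance of the Gaussian noise
    gain, offset : parameters of the exponential nonlinearity exp(gain*x + offset)
    n_bins, dt   : number of time bins and bin width in seconds
    """
    rng = np.random.default_rng(seed)
    m, p, _ = A.shape
    x = np.zeros((n_bins, p))
    eps = rng.multivariate_normal(np.zeros(p), sigma_eps, size=n_bins)
    for t in range(m, n_bins):
        # hidden linear (MVAR) stage
        x[t] = sum(A[i] @ x[t - 1 - i] for i in range(m)) + eps[t]
    rates = np.exp(gain * x + offset)      # static nonlinearity -> firing rates (Hz)
    spikes = rng.poisson(rates * dt)       # inhomogeneous Poisson spike counts
    return x, rates, spikes
```

For example, a bivariate model with a single lag-1 connection from channel 1 to channel 2 corresponds to `A = np.array([[[0.5, 0.0], [0.4, 0.5]]])`; causal structures are imposed by choosing which off-diagonal coefficients are nonzero.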

Local networks of directly interconnected neurons were simulated using integrate-and-fire (IF) neuron models proposed by Izhikevich [17] with various interconnection schemes. Routines based on freely available code from the ModelDB [18, 19] database (accession number 115968) were used to generate this network activity. This approach provided networks of realistic spiking neurons with no assumptions of Poisson firing or LNP-based activity. For the three examples used in this work, three different networks were simulated with the connectivity structures depicted in Figure 1.

2.2. Surrogate Data Generation

To perform a statistical test on the results of the proposed method as part of Granger causality analyses, surrogate datasets were generated. The surrogate data was generated by nullifying one causal connection (coefficient) at a time in the estimated underlying MVAR model of the linear stage. This new MVAR model (with one artificially “broken” connection) was used for generating spike trains with no direct causal relation between the two tested channels. Each of these spike trains was then analyzed by the proposed method for Granger causality. The resultant coefficients (6) or (8) provided a “null” distribution, to which the corresponding coefficients calculated from the real data were compared.
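The coefficient-nulling step can be sketched as follows. In this hypothetical helper, all lags of the tested directed connection are zeroed at once; the modified coefficient array is then fed back into the spike-train generator to produce the surrogate data:

```python
import numpy as np

def null_connection(A, src, dst):
    """Return a copy of the MVAR coefficients (m, p, p) with the directed
    connection src -> dst removed, i.e. A_i[dst, src] = 0 for all lags i."""
    A_null = A.copy()
    A_null[:, dst, src] = 0.0
    return A_null
```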

3. Identifying an MVAR-Nonlinear-Poisson Model

We consider the problem of identifying a p-dimensional multivariate (vector) autoregressive model of order m,

x(t) = Σ_{i=1..m} A_i x(t − i) + ε(t),  (1)

from observations of Poisson spike trains whose rate λ(t) depends nonlinearly on x(t):

λ(t) = f(x(t)).  (2)

The matrices A_1, …, A_m are p × p coefficient matrices, each corresponding to a specific lag, and ε(t) is a zero-mean Gaussian noise process with covariance matrix Σ_ε.

The parameters of an MVAR model (A_1, …, A_m, Σ_ε) can be estimated directly from the autocorrelation function R_x(k) = E[x(t) x(t − k)ᵀ] of its output by solving the multivariate Yule-Walker equations [8]:

R_x(k) = Σ_{i=1..m} A_i R_x(k − i),  k = 1, …, m,  with R_x(−k) = R_x(k)ᵀ.  (3)

How can this framework be adapted to our case? Although the correlation matrices R_x(k) of the hidden Gaussian process are not directly measurable, they can be indirectly estimated from the spike-train correlation matrices R_N(k) and the expectations of the firing rates for certain choices of the nonlinearity f. These "predistortion" relationships were derived in [14] for exponential, square, and absolute-value transformations by considering the effect of these nonlinearities on pairs of correlated, normally distributed random variables. For the case of doubly-stochastic Poisson processes, the spike-train correlation structure is identical to that of the firing rates λ(t), except at zero lag [14, 20]: R_N(k) = R_λ(k) for all k ≠ 0.
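For the exponential case the predistortion has a particularly simple closed form: if λ_i = exp(x_i) with (x_i, x_j) jointly Gaussian, then Cov(λ_i, λ_j) = E[λ_i]E[λ_j](exp(Cov(x_i, x_j)) − 1), which is inverted by a logarithm. The sketch below assumes this standard lognormal identity; the exact tabulated formulas, including the square and absolute-value cases, are in [14]:

```python
import numpy as np

def predistort_exponential(cov_rates, mean_rates):
    """Map firing-rate covariances back to hidden-Gaussian covariances
    for the exponential nonlinearity lambda = exp(x).

    cov_rates  : (p, p) covariance matrix of the rates at some lag,
                 with the zero-lag Poisson point-process term already removed
    mean_rates : (p,) mean rates E[lambda_i]
    """
    outer = np.outer(mean_rates, mean_rates)
    # Cov(x_i, x_j) = ln(1 + Cov(l_i, l_j) / (E[l_i] E[l_j]))
    return np.log1p(cov_rates / outer)
```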

The parameter estimation algorithm is summarized in Algorithm 1 for the exponential and square nonlinearities (the detailed derivation and formulas of absolute-value pre-distortions appear in [14]).

(1) Estimate mean rates ⟨λ⟩ and pairwise auto- and cross-correlation matrices R_N(k) from the spike trains. Note that, for zero-lag autocorrelations, the Poisson point-process contribution must be removed: R_λ(0) = R_N(0) − diag(⟨λ⟩). A smoothing spline can optionally be used to remove variance from the noisy spike-train correlation estimates.
(2) Estimate the nonlinearity parameters and predistort the correlations (see Table 1).
(3) Solve the Yule-Walker equations (3). This step can be performed by directly inverting a block-Toeplitz matrix, or more efficiently and robustly using the Levinson-Wiggins-Robinson (LWR) algorithm [21].
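The direct block-Toeplitz solution of the final step can be sketched as follows (function name hypothetical; a production implementation would use the LWR recursion instead of explicit matrix inversion):

```python
import numpy as np

def solve_mv_yule_walker(R):
    """Solve the multivariate Yule-Walker equations by block-Toeplitz inversion.

    R : (m+1, p, p) covariance matrices R(0), R(1), ..., R(m),
        with R(k) = E[x(t) x(t-k)^T] and R(-k) = R(k)^T by stationarity.
    Returns (A, sigma): A[i] is the coefficient matrix at lag i+1,
    sigma is the innovation (prediction error) covariance.
    """
    R = np.asarray(R)
    m, p = R.shape[0] - 1, R.shape[1]
    Rlag = lambda k: R[k] if k >= 0 else R[-k].T
    # Block (i, k) of the Toeplitz matrix is R(k - i), for i, k = 1..m
    G = np.block([[Rlag(k - i) for k in range(1, m + 1)]
                  for i in range(1, m + 1)])
    rhs = np.hstack([Rlag(k) for k in range(1, m + 1)])   # [R(1) ... R(m)]
    W = rhs @ np.linalg.inv(G)                             # [A_1 ... A_m]
    A = W.reshape(p, m, p).transpose(1, 0, 2)
    sigma = R[0] - sum(A[i] @ Rlag(i + 1).T for i in range(m))
    return A, sigma
```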

The main algorithm assumes that the model order m is known. Several criteria for the automatic determination of an "optimal" model order m have been developed. In our implementation we determined the model order by minimizing the Bayesian Information Criterion (BIC),

BIC(m) = ln det(Σ_ε(m)) + p² m ln(N) / N,  (4)

where N is the total number of data points and the prediction error covariance matrix is given by

Σ_ε(m) = R_x(0) − Σ_{i=1..m} A_i R_x(i)ᵀ.  (5)

Figure 2 presents an illustrative example of an MVAR-Nonlinear-Poisson model (Figure 2(a)) of order 3 estimated from three correlated spike trains. The correlations between the three processes, which can be qualitatively appreciated from the firing rates (Figure 2(b)), are accurately captured by the estimated model (Figure 2(c)).
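A minimal order-scoring helper, assuming one common form of the BIC for MVAR models (the exact constants differ slightly between references; the function name is hypothetical):

```python
import numpy as np

def bic_mvar(sigma_eps, m, n_points):
    """BIC score for a p-dimensional MVAR(m): a log-likelihood term
    (log-determinant of the prediction error covariance) plus a penalty
    on the p^2 * m estimated coefficients."""
    p = sigma_eps.shape[0]
    _, logdet = np.linalg.slogdet(sigma_eps)
    return logdet + p * p * m * np.log(n_points) / n_points
```

The model order is then chosen as the minimizer of this score over candidate orders m = 1, 2, …, with the prediction error covariance at each order taken from the Yule-Walker solution.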

4. Granger Causality Analysis

Granger causality is based on the general concept due to Norbert Wiener that a causal influence should manifest in improving the predictability of the driven process when the driving process is observed. A measurable reduction in the unexplained variance of the driven process (say X) as a result of inclusion of the causal (driving) process (say Y) in linear autoregressive modeling marks the existence of a causal influence from Y to X in the time domain [5]. Pairwise linear Granger causality in the time domain is defined as

G_{Y→X} = ln( Σ_{X|X} / Σ_{X|X,Y} ),  (6)

where Σ_{X|X} is the unexplained variance (prediction error covariance) of X in its own autoregressive model, whereas Σ_{X|X,Y} is its unexplained variance when a joint MVAR model for both X and Y is constructed. It is expected that G_{Y→X} > 0 when Y influences X, and G_{Y→X} ≈ 0 when it does not. In practice, G_{Y→X} is compared to a threshold value, which can be determined using a variety of methods (typically using surrogate data or by shuffling channels).
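A least-squares sketch of the pairwise index, applied here to continuous-valued series such as the estimated hidden modulating processes (helper names are hypothetical):

```python
import numpy as np

def _lags(series, m):
    """Design matrix with lags 1..m of each column of `series` (T x p)."""
    T = series.shape[0]
    return np.column_stack([series[m - k:T - k] for k in range(1, m + 1)])

def _residual_var(y, X):
    """Residual variance of a least-squares linear prediction of y from X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger_index(x, y, m):
    """Pairwise time-domain Granger causality G_{y->x} = ln(restricted/full)."""
    target = x[m:]
    restricted = _residual_var(target, _lags(x[:, None], m))  # x's own past only
    full = _residual_var(target, _lags(np.column_stack([x, y]), m))
    return np.log(restricted / full)
```

With a unidirectional coupling y → x, `granger_index(x, y, m)` is clearly positive while `granger_index(y, x, m)` stays near zero.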

To evaluate the Granger analysis framework, we first tested its ability to estimate the causality structure in a system of two coupled synthetic spike trains with unidirectional influence; the structure of the MVAR model used to generate the data is shown in Figure 3(a). We simulated a 10-minute-long dataset where the mean firing rate of the spike trains was 20 Hz (set by adjusting the exponent’s parameters). The strength of each connection was calculated using (6), and their statistical significance was tested by applying the same calculation scheme to surrogate data where the tested connection was removed (for details on generating surrogate data see Section 2).

Pairwise causal analyses are important, but cannot distinguish, for example, between direct and indirect influences in more elaborate connectivity schemes, such as trivariate networks [8]. To address inference in this more complex scenario, we can perform a series of “leave-one-out” analyses, using the multivariate extension of the linear Granger causality index [7, 8]. For example, to assess the direct influence exerted by Y on X in the presence of a third process Z, we use

G_{Y→X|Z} = ln( Σ_{X|X,Z} / Σ_{X|X,Y,Z} ).  (8)
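The leave-one-out index follows the same recipe, comparing a model on the past of x and z with one that also includes the past of y (self-contained sketch; helper names hypothetical):

```python
import numpy as np

def _lags(series, m):
    """Design matrix with lags 1..m of each column of `series` (T x p)."""
    T = series.shape[0]
    return np.column_stack([series[m - k:T - k] for k in range(1, m + 1)])

def _residual_var(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def conditional_granger_index(x, y, z, m):
    """Conditional Granger causality G_{y->x | z}: extra predictability of x
    gained from y's past, beyond what x's and z's past already provide."""
    target = x[m:]
    restricted = _residual_var(target, _lags(np.column_stack([x, z]), m))
    full = _residual_var(target, _lags(np.column_stack([x, y, z]), m))
    return np.log(restricted / full)
```

In a chain y → z → x, the pairwise index from y to x is positive, but the conditional index given z is close to zero, exposing the influence as indirect.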

To test this approach (Figure 4), we applied it to 10-minute-long datasets (20 Hz average rate) synthetically generated from two different MVAR models that represent two basic triple-unit activity patterns. The first one (Figures 4(a)–4(c)) models a sequential (chain) connection between the three units. The second (Figures 4(d)–4(f)) is a “common input” example, where unit #1 drives the activity of units #2 and #3 with different time delays.

To test the robustness of the MVAR-N-P model estimation under a nonlinearity mismatch, we simulated Model I from Figure 4 using a square and an absolute-value nonlinearity, and estimated the model assuming an exponential nonlinearity as before. As can be seen in Figure 5, the estimated MVAR models are extremely similar to the model estimated without the mismatch for the nonzero kernels (each model is illustrated by its respective impulse responses).

Finally, we tested the method on data that comes from simulations of realistic local network activity. The spike trains were simulated using interconnected networks of integrate-and-fire neurons [17]. Examples similar to those presented in Figures 3 and 4 were analyzed by the proposed approach based on the MVAR-Nonlinear-Poisson model. Exponential nonlinearity was used as it is more flexible in capturing correlation structures than the other two nonlinearities [14]. The method successfully determines the connectivity pattern in all three examples, even though the spike trains are far from being Poissonian and therefore cannot be generated by an LNP model (Figure 6).

We note that, as a result of absolute and relative refractory periods of non-Poisson spiking, the correlation structure has strong negative peaks that cannot be directly captured by the MVAR models used in our framework. To fit a stable and representative MVAR model to these spike trains, we applied a basic regularization procedure to their correlation structure that consisted of truncating the negative peaks in the auto- and cross-covariance functions and adding a diagonal matrix to the correlation matrix (in order to obtain a positive semidefinite correlation matrix).
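This regularization can be sketched as follows. The version below is a hypothetical minimal variant that clips negative covariance values and diagonally loads only the zero-lag matrix; the procedure described above operates on the full correlation structure:

```python
import numpy as np

def regularize_covariances(C, eps=1e-6):
    """Truncate negative peaks in lagged covariance matrices and add a
    diagonal loading until the zero-lag matrix is positive semidefinite.

    C : (K, p, p) array of lagged covariance matrices C(0), ..., C(K-1).
    """
    C = np.clip(C, 0.0, None)                      # truncate negative peaks
    p = C.shape[1]
    lam = 0.0
    while np.linalg.eigvalsh(C[0] + lam * np.eye(p)).min() < 0.0:
        lam = eps if lam == 0.0 else 2.0 * lam     # grow the loading until PSD
    C[0] = C[0] + lam * np.eye(p)
    return C
```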

5. Discussion

In this paper, we introduced a new method for modeling multineuron spike train data, and its application to the identification of information flow structure among interacting neuronal populations. The identification of neural systems from multineuron spike train data can be used for experimental inference of underlying network connections [22–27], or more generally of “effective” connectivity. It is also indirectly related to nonparametric methods for identifying high-order synchronous interactions [28–31], and metrics of (dis)similarity between spike trains [4, 32–36]. In our approach, the data is fit to a model based on an underlying MVAR process with Gaussian statistics which is nonlinearly transformed to firing rates that modulate Poisson spike trains. Our approach thus departs from the classical model-based identification of multivariate spike train data which assumes a specific, biophysically-motivated, linear or nonlinear interaction scheme between neurons [22–27]. In our approach, there is no explicit modeling of the interaction exerted through the spike trains, but rather the modulating processes interact through the multivariate recursive structure of the MVAR. In practice, the strict assumptions of neural-interaction models are challenged by the dramatic under-sampling of a large population in real neural recordings, as well as by oscillations and modulations exerted by unmodeled elements. Recently, this “classical” approach has been generalized into generalized linear models (GLMs) that include modulation by a dynamic stimulus or behavior [37–39], which in our framework can be done by adding an additional external input or set of inputs (and thus moving from an AR to an ARX model). In addition, the new framework can easily be extended to analyze mixed datasets containing both spiking and continuous neurophysiological (and behavioral) data, as well as to time-varying (nonstationary) scenarios using the MVAR framework and its standard adaptive extensions.

Rather than explicitly modeling neuron-to-neuron interactions, our approach benefits from the MVAR-based Granger causality framework for inferring directed information flow using the concept of increased predictability of one time series when another is observed. While we have only illustrated the use of linear Granger causality, this framework now includes a number of different statistical measures that can be used for inference, including directed transfer function, directed coherence, and partial directed coherence. A thorough overview of these methods [6–8] is clearly outside the scope of the current paper; however, we note that their ability to infer cortical network connectivity patterns has been systematically tested and compared in a number of studies (e.g., see [40]). It is important to note that a major difference between our approach and “classical” MVAR models is that the MVAR model here is hidden and only indirectly observed through the noisy spike trains. As a result, it was crucial that both the MVAR parameter estimates and the prediction errors can be estimated from auto- and cross-correlations, equations (3) and (5).

Finally, we note that the proposed framework has some limitations that could potentially benefit from additional work. First, the LNP nonlinearity is required in order to transform the Gaussian processes into nonnegative stochastic intensities (rates) that modulate the Poisson processes. Having to select a specific nonlinearity is clearly a shortcoming of this approach versus the complete generality (and uniqueness) of the MVAR framework in the context of continuous processes. Second, when the MVAR-N-P is used as a generative model for spike trains, the doubly-stochastic Poisson processes that are produced have a larger-than-Poisson dispersion (variance), which may not always be desirable. This limitation can be addressed by considering alternative models for the transformation from a Gaussian process to spike trains. For example, the singly-stochastic processes analyzed by Macke et al. [41] produce spikes deterministically whenever a Gaussian process is higher than a threshold value, while Tchumatchenko et al. [42] analyzed spike trains produced during threshold crossings of Gaussian processes (both solutions can only be numerically inverted).



Acknowledgments

This work was supported by Israeli Science Foundation Grant no. 1248/06 and European Research Council Starting Grant no. 211055.


References

1. E. N. Brown, R. E. Kass, and P. P. Mitra, “Multiple neural spike train data analysis: state-of-the-art and future challenges,” Nature Neuroscience, vol. 7, no. 5, pp. 456–461, 2004.
2. W. Göbel and F. Helmchen, “In vivo calcium imaging of neural network function,” Physiology, vol. 22, no. 6, pp. 358–365, 2007.
3. S. Shoham, D. H. O'Connor, D. V. Sarkisov, and S. S.-H. Wang, “Rapid neurotransmitter uncaging in spatially defined patterns,” Nature Methods, vol. 2, no. 11, pp. 837–843, 2005.
4. J. D. Victor, “Spike train metrics,” Current Opinion in Neurobiology, vol. 15, no. 5, pp. 585–592, 2005.
5. C. W. J. Granger, “Investigating causal relations by econometric models and cross-spectral methods,” Econometrica, vol. 37, no. 3, pp. 424–438, 1969.
6. E. Pereda, R. Q. Quiroga, and J. Bhattacharya, “Nonlinear multivariate analysis of neurophysiological signals,” Progress in Neurobiology, vol. 77, no. 1-2, pp. 1–37, 2005.
7. B. Gourévitch, R. L. Bouquin-Jeannès, and G. Faucon, “Linear and nonlinear causality between signals: methods, examples and neurophysiological applications,” Biological Cybernetics, vol. 95, no. 4, pp. 349–369, 2006.
8. M. Ding, Y. Chen, and S. L. Bressler, “Granger causality: basic theory and application to neuroscience,” in Handbook of Time Series Analysis, B. Schelter, M. Winterhalder, and J. Timmer, Eds., pp. 437–460, Wiley, Weinheim, Germany, 2007.
9. K. Sameshima and L. A. Baccalá, “Using partial directed coherence to describe neuronal ensemble interactions,” Journal of Neuroscience Methods, vol. 94, no. 1, pp. 93–103, 1999.
10. E. E. Fanselow, K. Sameshima, L. A. Baccalá, and M. A. L. Nicolelis, “Thalamic bursting in rats during different awake behavioral states,” Proceedings of the National Academy of Sciences of the United States of America, vol. 98, no. 26, pp. 15330–15335, 2001.
11. M. Kamiński, M. Ding, W. A. Truccolo, and S. L. Bressler, “Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance,” Biological Cybernetics, vol. 85, no. 2, pp. 145–157, 2001.
12. L. Zhu, Y.-C. Lai, F. C. Hoppensteadt, and J. He, “Probing changes in neural interaction during adaptation,” Neural Computation, vol. 15, no. 10, pp. 2359–2377, 2003.
13. A. G. Nedungadi, G. Rangarajan, N. Jain, and M. Ding, “Analyzing multiple spike trains with nonparametric Granger causality,” Journal of Computational Neuroscience, vol. 27, no. 1, pp. 55–64, 2009.
14. M. Krumin and S. Shoham, “Generation of spike trains with controlled auto- and cross-correlation functions,” Neural Computation, vol. 21, no. 6, pp. 1642–1664, 2009.
15. M. Krumin, A. Shimron, and S. Shoham, “Correlation-distortion based identification of Linear-Nonlinear-Poisson models,” Journal of Computational Neuroscience. In press.
16. N. Farah, I. Reutsky, and S. Shoham, “Patterned optical activation of retinal ganglion cells,” in Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '07), pp. 6368–6370, Cité Internationale, Lyon, France, August 2007.
17. E. M. Izhikevich, “Simple model of spiking neurons,” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569–1572, 2003.
18. M. L. Hines, T. Morse, M. Migliore, N. T. Carnevale, and G. M. Shepherd, “ModelDB: a database to support computational neuroscience,” Journal of Computational Neuroscience, vol. 17, no. 1, pp. 7–11, 2004.
19. E. M. Izhikevich, “Polychronization: computation with spikes,” Neural Computation, vol. 18, no. 2, pp. 245–282, 2006.
20. C. K. Knox, “Signal transmission in random spike trains with applications to the statocyst neurons of the lobster,” Biological Cybernetics, vol. 7, no. 5, pp. 167–174, 1970.
21. P. Whittle, “On the fitting of multivariate autoregressions, and the approximate canonical factorization of a spectral density matrix,” Biometrika, vol. 50, no. 1-2, pp. 129–134, 1963.
22. G. N. Borisyuk, R. M. Borisyuk, A. B. Kirillov, E. I. Kovalenko, and V. I. Kryukov, “A new statistical method for identifying interconnections between neuronal network elements,” Biological Cybernetics, vol. 52, no. 5, pp. 301–306, 1985.
23. H. van den Boogaard, “Maximum likelihood estimations in a nonlinear self-exciting point process model,” Biological Cybernetics, vol. 55, no. 4, pp. 219–225, 1986.
24. E. S. Chornoboy, L. P. Schramm, and A. F. Karr, “Maximum likelihood identification of neural point process systems,” Biological Cybernetics, vol. 59, no. 4-5, pp. 265–275, 1988.
25. X. Yang and S. A. Shamma, “Identification of connectivity in neural networks,” Biophysical Journal, vol. 57, no. 5, pp. 987–999, 1990.
26. D. R. Brillinger, “Nerve cell spike train data analysis: a progression of technique,” Journal of the American Statistical Association, vol. 87, no. 418, pp. 260–271, 1992.
27. M. Okatan, M. A. Wilson, and E. N. Brown, “Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity,” Neural Computation, vol. 17, no. 9, pp. 1927–1961, 2005.
28. L. Martignon, G. Deco, K. Laskey, M. Diamond, W. Freiwald, and E. Vaadia, “Neural coding: higher-order temporal patterns in the neurostatistics of cell assemblies,” Neural Computation, vol. 12, no. 11, pp. 2621–2653, 2000.
29. H. Nakahara and S.-I. Amari, “Information-geometric measure for neural spikes,” Neural Computation, vol. 14, no. 10, pp. 2269–2316, 2002.
30. R. Gütig, A. Aertsen, and S. Rotter, “Analysis of higher-order neuronal interactions based on conditional inference,” Biological Cybernetics, vol. 88, no. 5, pp. 352–359, 2003.
31. S. Schrader, S. Grün, M. Diesmann, and G. L. Gerstein, “Detecting synfire chain activity using massively parallel spike train recording,” Journal of Neurophysiology, vol. 100, no. 4, pp. 2165–2176, 2008.
32. J. D. Victor and K. P. Purpura, “Metric-space analysis of spike trains: theory, algorithms and application,” Network: Computation in Neural Systems, vol. 8, no. 2, pp. 127–164, 1997.
33. M. C. W. van Rossum, “A novel spike distance,” Neural Computation, vol. 13, no. 4, pp. 751–763, 2001.
34. S. Schreiber, J. M. Fellous, D. Whitmer, P. Tiesinga, and T. J. Sejnowski, “A new correlation-based measure of spike timing reliability,” Neurocomputing, vol. 52–54, pp. 925–931, 2003.
35. J. D. Hunter and J. G. Milton, “Amplitude and frequency dependence of spike timing: implications for dynamic regulation,” Journal of Neurophysiology, vol. 90, no. 1, pp. 387–394, 2003.
36. J. Dauwels, F. Vialatte, T. Weber, and A. Cichocki, “Quantifying statistical interdependence by message passing on graphs—part I: one-dimensional point processes,” Neural Computation, vol. 21, no. 8, pp. 2152–2202, 2009.
37. L. Paninski, S. Shoham, M. R. Fellows, N. G. Hatsopoulos, and J. P. Donoghue, “Superlinear population encoding of dynamic hand trajectory in primary motor cortex,” Journal of Neuroscience, vol. 24, no. 39, pp. 8551–8561, 2004.
38. W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown, “A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects,” Journal of Neurophysiology, vol. 93, no. 2, pp. 1074–1089, 2005.
39. J. W. Pillow, J. Shlens, L. Paninski et al., “Spatio-temporal correlations and visual signalling in a complete neuronal population,” Nature, vol. 454, no. 7207, pp. 995–999, 2008.
40. L. Astolfi, F. Cincotti, D. Mattia et al., “Comparison of different cortical connectivity estimators for high-resolution EEG recordings,” Human Brain Mapping, vol. 28, no. 2, pp. 143–157, 2007.
41. J. H. Macke, P. Berens, A. S. Ecker, A. S. Tolias, and M. Bethge, “Generating spike trains with specified correlation coefficients,” Neural Computation, vol. 21, no. 2, pp. 397–423, 2009.
42. T. Tchumatchenko, A. Malyshev, T. Geisel, M. Volgushev, and F. Wolf, “Correlations and synchrony in threshold neuron models,” Physical Review Letters, vol. 104, no. 5, Article ID 058102, 2010.

Copyright © 2010 Michael Krumin and Shy Shoham. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
