Advances in Artificial Intelligence, vol. 2010, Article ID 209254

Special Issue: Artificial Intelligence in Neuroscience and Systems Biology: Lessons Learnt, Open Problems, and the Road Ahead

Research Article | Open Access

Antonio Novellino, José-Manuel Zaldívar, "Recurrence Quantification Analysis of Spontaneous Electrophysiological Activity during Development: Characterization of In Vitro Neuronal Networks Cultured on Multi Electrode Array Chips", Advances in Artificial Intelligence, vol. 2010, Article ID 209254, 10 pages, 2010.

Recurrence Quantification Analysis of Spontaneous Electrophysiological Activity during Development: Characterization of In Vitro Neuronal Networks Cultured on Multi Electrode Array Chips

Academic Editor: Naoyuki Sato
Received: 24 Aug 2009
Revised: 14 Oct 2009
Accepted: 28 Oct 2009
Published: 28 Jan 2010


The combination of a nonlinear time series analysis technique, Recurrence Quantification Analysis (RQA) based on Recurrence Plots (RPs), with traditional statistical analysis of neuronal electrophysiology is proposed in this paper as an innovative paradigm for studying the variation of the spontaneous electrophysiological activity of in vitro Neuronal Networks (NNs) coupled to Multielectrode Array (MEA) chips. Recurrence, determinism, entropy, distance of activity patterns, and correlation, together with spike and burst parameters (e.g., mean spiking rate, mean bursting rate, burst duration, spikes per burst), have been computed to characterize and assess the daily changes of the neuronal electrophysiology during neuronal network development and maturation. The results show the similarities/differences between several channels and time periods as well as the evolution of the spontaneous activity in the MEA chip. RPs can be used for graphically exploring possible breaking/changing points in the neuronal dynamics, whereas RQA parameters are suited for locating them. The combination of RQA with traditional approaches improves the identification, description, and prediction of electrophysiological changes, and it will be used to allow intercomparison between results obtained from different MEA chips. The results suggest that the proposed processing paradigm is a valuable tool to analyze neuronal activity for screening purposes (e.g., toxicology, neurodevelopmental toxicology).

1. Introduction

In vitro neuronal networks are a simplified and accessible model of the central nervous system (CNS). Moreover, they exhibit morphological and physiological properties [1] and activity-dependent, path-specific synaptic modification [2, 3] similar to the in vivo tissue. In vitro Neuronal Networks (NNs) of cortical cells grown on multielectrode array (MEA) chips have been shown to be a valuable tool to study fundamental properties of network activity patterns [4–8], plasticity [2, 9], learning in vitro [10–12], and pharmacological screening [13–18]. While growing, the networks of cortical cells manifest different spontaneous electrophysiological dynamics [7, 19, 20], and therefore analysis and characterisation of the network dynamics can provide important information for understanding network behaviour and optimising experimental design.

The spike trains can be accurately extracted from MEA recordings, for example, [21, 22], and thus neuronal activity is translated into time series of discrete events. The ensembles of spike trains simultaneously recorded from many channels represent multidimensional nonstationary point-process time series, which makes the data analysis highly challenging [23]. Identification, description, and prediction of the changes of such dynamics during neuronal network development are even more complex and challenging.

A promising tool for the qualitative and quantitative analysis of nonstationary signals is Recurrence Quantification Analysis (RQA) [24], which was developed to quantify the parameters of Recurrence Plots (RPs) [25]. An RP is a two-dimensional graph which reveals all the moments at which the state space of the dynamical system recurs, that is, visits the same region in state space (recurrence of states). The recurrence of states, meaning that states become arbitrarily close after some time, is a fundamental property of deterministic dynamical systems and is typical for nonlinear or chaotic systems. RQA is a method for assessing the dynamics of noisy, short, and nonstationary signals typical of a dynamical system (e.g., neuronal electrophysiology). RPs of neuronal spike trains were analyzed by Kalużny and Tarnecki [26], who showed changes in the structure of the interspike interval sequences recorded from the cerebellum and red nucleus of anesthetized cats. The application of RPs was also proposed by Novák and Schmidt [27] for the analysis of the time series generated by their model of neuronal stochastic activity. More recently, Marwan and Meinke [28] used RQA to detect transitions in event-related brain potentials, and Bergner et al. [29] used it to analyze synchronization in neuronal networks.

In this work, we used a combination of well-known electrophysiological parameters (e.g., mean firing rate, mean bursting rate, etc.) and RQA parameters (e.g., percentage of recurrence, of determinism, laminarity, etc.) for identifying and characterizing the key events that underlie the electrophysiological dynamics as it changes daily during the neuronal network growth.

The analysis showed the similarities and differences of the neuronal dynamics in space (recording channels) and time (different days). Our results suggest that the combination of RQA with traditional approaches improves the identification, description, and prediction of electrophysiological changes in neuronal networks coupled to MEA chips. The presented methodology could be used to perform intercomparison between results obtained from different MEA chips, when the characterization and quantification of electrophysiological endpoints are needed for further assessment (e.g., toxicology).

Finally, our results suggest that the spontaneous activity of in vitro networks of neurons is less noisy than expected, and the identified deterministic and laminar behaviours represent a main advantage for developing more robust and accurate in silico models of neuronal networks.

2. Materials and Methods

2.1. Neuronal Network

The experiments were performed with cryopreserved neurons from mouse cortex (Clonetics C57 black, E14-E16) (Lonza, M-CX-300). Cryopreserved cells are very delicate: we used cryovials containing 4 million cells and recorded a viability ranging from 15% to 25% (Trypan Blue assessment). The culture medium was PNBM supplemented with L-Glutamine, Gentamycin, Amphotericin B, and NSF-1 (cryo cell bullet kit, Lonza CC-4461). The final cell density was about 35,000 cells per chip. The chips were then incubated in 5% CO2, 20% O2. Starting at 3 DIV (Day In Vitro), half of the culture medium was exchanged twice a week under the laminar hood. The study of the electrophysiology started after the second day of in vitro incubation.

2.2. Chip Preparation

Standard 60-electrode MEA chips (30 μm diameter electrodes, 200 μm interelectrode spacing, with an integrated reference electrode) were employed. Prior to plating cells, the MEA chip was sterilized (2 hours in an oven). The sterilized chip was coated with 10% New Born Calf Serum (NBCS, Invitrogen/Gibco 16010-159). Laminin (Sigma L2020) and Poly-D-Lysine (Sigma P6407) were then applied to the surface of the MEA to render the surface cellphilic and to promote neurite outgrowth. Each coating deposition was carried out at room temperature in the laminar hood. The Laminin was incubated for three hours in the incubator and then gently removed; the Poly-D-Lysine was left overnight. Once the second coating was removed, the chip was ready to host cells.

2.3. Recording System

The activity was recorded with the MEA120-2-System from Multichannel Systems (MCS GmbH, Reutlingen, Germany). In particular, the MEA was fed into the MEA Amplifier (gain 1000x), and data were recorded by the MC_Rack software at a sampling rate of 10 kHz. A bandpass digital filter (60 Hz–4000 Hz) was also applied. Figure 1 shows an example of the neuronal network coupled to the MEA chip and the typical recorded electrophysiological signal. The system also included a temperature controller (TC02, MCS GmbH) that allowed heating the MEA chips, and thus the medium, from the bottom.

2.4. Signal Processing
2.4.1. Characterization of the Spontaneous Electrophysiology

Spikes were extracted when the raw signal exceeded a threshold set at 6.5 times the standard deviation of the root mean square noise. The raw signal was thus translated into a time series of discrete events, that is, the spike train

$$s(t) = \sum_{k=1}^{N} \delta(t - t_k),$$

where $N$ is the number of spikes, $t_k$ is the occurrence time of the $k$th spike, and $\delta$ is the Dirac delta function.
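The thresholding step can be illustrated with a minimal Python/NumPy sketch; the function name, the dead-time refractory parameter, and the use of the whole-trace standard deviation as the noise estimate are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def detect_spikes(raw, fs, k=6.5, dead_time_ms=1.0):
    """Flag a spike whenever |signal| exceeds k times an estimate of the
    noise standard deviation; a short dead time prevents one spike from
    being counted several times."""
    thr = k * np.std(raw)                   # crude noise estimate (assumption)
    dead = int(dead_time_ms * 1e-3 * fs)    # refractory window in samples
    spike_times, last = [], -dead
    for i, v in enumerate(np.abs(raw)):
        if v > thr and i - last >= dead:
            spike_times.append(i / fs)      # spike time in seconds
            last = i
    return np.array(spike_times)
```

On a synthetic trace of Gaussian noise with a few large deflections, the detector returns the event times of those deflections.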

The recorded spike trains were processed to extract descriptors of the spontaneous electrophysiology at both the spike and burst levels. In particular, we extracted the Mean Firing Rate (MFR) and Mean Burst Rate (MBR), and we also studied the number of spikes per burst (i.e., Burst Amplitude (BA)), the Burst Duration (BD), and the Interburst Interval (IBI). Bursts were extracted from spike trains according to previously presented methods [30]. Briefly, a burst is a dense sequence of spikes, and we applied the following burst detection parameters: a burst is composed of at least 5 spikes, the interspike interval between two successive spikes belonging to the same burst is at most 100 milliseconds, and two serial bursts are separated by at least 100 milliseconds of non-bursting activity.
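The burst-detection rules above (at least 5 spikes per burst, intra-burst interspike intervals of at most 100 ms, bursts separated by at least 100 ms of silence) can be sketched as follows; this is a hypothetical re-implementation for illustration, not the code of [30]:

```python
import numpy as np

def detect_bursts(spike_times, min_spikes=5, max_isi=0.100):
    """Group a spike train (times in seconds) into bursts: runs of spikes
    whose successive intervals are <= max_isi; runs shorter than
    min_spikes are discarded.  A gap > max_isi both closes a burst and
    guarantees the >= 100 ms of non-bursting activity between bursts."""
    bursts, current = [], [spike_times[0]]
    for t_prev, t in zip(spike_times, spike_times[1:]):
        if t - t_prev <= max_isi:
            current.append(t)
        else:
            if len(current) >= min_spikes:
                bursts.append(current)
            current = [t]
    if len(current) >= min_spikes:
        bursts.append(current)
    # (start time, Burst Duration, Burst Amplitude = number of spikes)
    return [(b[0], b[-1] - b[0], len(b)) for b in bursts]
```

Applied to a train with one dense run of six spikes followed by sparse activity, the sketch returns a single burst with its start time, duration, and amplitude.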

It is then possible to describe the Burst Train as

$$B(t) = \sum_{n=1}^{N_B} A_n\, r(t - t_n, d_n),$$

where $N_B$ is the number of bursts within the train, $t_n$ denotes the starting time of the $n$th burst, $r(t - t_n, d_n)$ is the rectangular function denoting the occurrence of a burst at time $t_n$ and lasting $d_n$, and $A_n$ is the number of spikes per burst (i.e., the Burst Amplitude):

$$A_n = \sum_{k\,:\,t_k \in [t_n,\, t_n + d_n]} 1,$$

where $d_n$ is the burst duration and $A_n$ counts the spikes belonging to that burst.

2.4.2. Linear Correlation Assessment

To check whether the spike time series or the original raw signals were linearly correlated, we used the correlation matrix and the Cross Correlation Function (CCF). The correlation coefficient matrix represents the normalized measure of the strength of the linear relationship between variables. The correlation coefficient $R$ of two variables $x$ and $y$ is given by

$$R(x, y) = \frac{C(x, y)}{\sqrt{C(x, x)\, C(y, y)}},$$

where $C$ is the covariance matrix.

The correlation coefficients range from −1 to 1, where values close to 1 suggest that there is a positive linear relationship between the data columns, and values close to −1 suggest that one column of data has a negative linear relationship to another column of data (anticorrelation). Values close or equal to 0 suggest that there is no linear relationship between the data columns.

The significance of each correlation was evaluated by the t-test.

Cross correlation is a generalization of the correlation coefficient and is a standard method for estimating the degree to which two series are correlated when one is shifted with respect to the other [31].

Consider two series $x_i$ and $y_i$, where $i = 0, 1, \ldots, N-1$. The cross correlation at delay $d$ is defined as

$$r(d) = \frac{\sum_{i}\,(x_i - \bar{x})(y_{i+d} - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}.$$

If the above is computed for all delays $d = -(N-1), \ldots, -1, 0, 1, \ldots, (N-1)$, the result is a cross correlation series of twice the length of the original series. For $d = 0$ it reduces to the linear correlation coefficient $R$.
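The definition above can be sketched directly (a hypothetical helper; the numerator runs over the overlapping samples at lag d, while the denominator normalizes by the full-series variances, one of several common conventions):

```python
import numpy as np

def cross_correlation(x, y, d):
    """Normalized cross correlation r(d) between two equal-length series.
    With this normalization, r(0) equals Pearson's linear correlation
    coefficient R."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    xm, ym = x - x.mean(), y - y.mean()
    denom = np.sqrt((xm ** 2).sum() * (ym ** 2).sum())
    if d >= 0:
        num = (xm[:n - d] * ym[d:]).sum()   # overlap for non-negative lags
    else:
        num = (xm[-d:] * ym[:n + d]).sum()  # overlap for negative lags
    return num / denom
```

At zero lag the value coincides with `numpy.corrcoef`, which provides a simple sanity check.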

2.4.3. Recurrence Plot Analysis

A recurrence plot (RP) is a two-dimensional graph which reveals all the moments at which the state space of the dynamical system recurs, that is, visits the same region in state space. Recurrence Plots (RPs) were introduced by Eckmann et al. [25] to provide insight into periodic structures and clustering properties that are not apparent in the original time series. In particular, the RP is based on the computation of the distance matrix between the reconstructed points in the phase space, $\vec{x}_i = (u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau})$, that is,

$$D_{i,j} = \lVert \vec{x}_i - \vec{x}_j \rVert,$$

where $\tau$ and $m$ are the reconstruction parameters, that is, the time delay (the lag between data when reconstructing the phase space) and the embedding dimension (the dimension of the space required to unfold the dynamics).

The result is an array of distances forming an $N \times N$ square matrix, $D$, where $N$ is the number of points under study. The distance between any pair of points $\vec{x}_i$, $\vec{x}_j$ can be represented by a pixel with coordinates $(i, j)$. Eckmann et al. [25] showed that if we darken the pixels corresponding to distances lower than a predetermined cutoff threshold $\varepsilon$, that is, a ball of radius $\varepsilon$ centred at $\vec{x}_i$, and if we require $\varepsilon_i = \varepsilon_j$, then the plot is symmetric and presents a darkened main diagonal. The darkened diagonal corresponds to the identity line, and the darkened points individuate the recurrences of the dynamical system.
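The construction just described (delay embedding, distance matrix, thresholding) can be sketched in a few lines of Python; `recurrence_plot` is an illustrative name, and the Euclidean norm is assumed:

```python
import numpy as np

def recurrence_plot(series, dim, delay, eps):
    """Delay-embed a scalar series into dim-dimensional points
    x_i = (u_i, u_{i+delay}, ..., u_{i+(dim-1)*delay}), build the matrix
    of pairwise Euclidean distances, and mark True every pair whose
    distance is below the cutoff radius eps."""
    s = np.asarray(series, float)
    n = len(s) - (dim - 1) * delay   # number of reconstructed points
    emb = np.column_stack([s[i * delay : i * delay + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return dist <= eps               # symmetric, with a darkened main diagonal
```

For a periodic series the plot shows the expected structure: every point recurs exactly one period later, producing recurrences parallel to the main diagonal.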

The RP method was then made more quantitative by Zbilut and Webber [24] who defined several measures of complexity to quantify the small scale structures in RP. These measures are based on the recurrence point density and the diagonal and vertical line structures of the RP (i.e., recurrence quantification analysis RQA).

A computation of these measures in small windows (submatrices) of the RP moving along the main diagonal yields the time dependent behaviour of these variables [32]. Studies based on RQA measures show that they are able to identify bifurcation points, especially chaos-order [33] and chaos-to-chaos transitions [34].

In order to quantify complexity of RPs, Zbilut and Webber [24] and Marwan et al. [35] proposed the following parameters.

(i) Measures Based on Recurrence Density
The percentage of recurrence, RR, is the percentage of darkened pixels in the recurrence plot:

$$RR = \frac{1}{N^2} \sum_{i,j=1}^{N} R_{i,j},$$

where $R_{i,j}$ is one if the state of the system at time $i$ and the one at time $j$ have a distance lower than $\varepsilon$, and zero otherwise. RR is a measure of the density of recurrence points in an RP.

(ii) Measures Based on Diagonal Lines
Let $P(l)$ be the histogram of diagonal lines of length $l$. The ratio of recurrence points that form diagonal structures longer than $l_{\min}$ is called the percentage of determinism (DET):

$$DET = \frac{\sum_{l \ge l_{\min}} l\, P(l)}{\sum_{l=1}^{N} l\, P(l)}.$$

The percentage of determinism depends on the value of $l_{\min}$. If $l_{\min} = 1$, then DET = 100%. If $l_{\min}$ is too large, the histogram is likely to be sparse and, thus, the reliability of DET decreases.
A further RQA measure considers the length $L_{\max}$ of the longest diagonal line found in the RP, or its reciprocal, the divergence ($DIV = 1/L_{\max}$).
$L_{\max}$ and the divergence are related to the exponential divergence of the phase space trajectory: the faster the trajectory segments diverge, the shorter the diagonal lines are, and the higher the divergence is.
The entropy (ENTR) is the Shannon information entropy of the probability $p(l)$ of finding a diagonal line of length $l$ in the RP, with $p(l) = P(l)/\sum_l P(l)$:

$$ENTR = -\sum_{l \ge l_{\min}} p(l) \ln p(l).$$

ENTR tries to capture the complexity of the diagonal lines in the RP. Uncorrelated noise produces small ENTR values, indicating its low complexity.
TREND measures the paling of recurrence points away from the main diagonal. It is the linear regression coefficient of the recurrence point density of the diagonals parallel to the main diagonal as a function of their time distance from the main diagonal. It provides information about nonstationarity in the process. TREND is highly correlated to the size of the window [35].

(iii) Measures Based on Vertical Lines
Vertical lines appear in the presence of laminar states in intermittence regimes. Let the total number of vertical lines of length $v$ in the RP be given by the histogram $P(v)$; then the ratio between the recurrence points forming vertical structures and the entire set of recurrence points can be computed:

$$LAM = \frac{\sum_{v \ge v_{\min}} v\, P(v)}{\sum_{v=1}^{N} v\, P(v)}.$$

This is the percentage of laminarity (LAM), computed for those vertical lines of length $v$ that exceed a minimal length $v_{\min}$. LAM represents the occurrence of laminar states in the system without describing the length of these laminar phases. LAM decreases if the RP consists of more single recurrence points than vertical structures.
The average length of the vertical structures,

$$TT = \frac{\sum_{v \ge v_{\min}} v\, P(v)}{\sum_{v \ge v_{\min}} P(v)},$$

is called the trapping time (TT). TT estimates the mean time that the system will be trapped in a specific state.
According to Marwan et al. [34], measures based on vertical lines are able to find chaos-chaos transitions, because these measures are close to zero in periodic dynamics; that is, the RP has only diagonal lines.
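Under the definitions above, RR, DET, LAM, and TT can be computed directly from a boolean recurrence matrix. The sketch below is a naive reference implementation (excluding the identity line from RR and DET is an assumed convention; established tools such as the CRP Toolbox or pyunicorn should be preferred in practice):

```python
import numpy as np

def line_lengths(column):
    """Lengths of consecutive True runs in a 1-D boolean array."""
    lengths, run = [], 0
    for v in column:
        if v:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths

def rqa_measures(R, l_min=2, v_min=2):
    """RR, DET, LAM, and TT from a boolean recurrence matrix R."""
    n = R.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    rr = R[off_diag].sum() / (n * n - n)        # recurrence density
    # diagonal lines (upper triangle only; R is symmetric)
    diag = [l for k in range(1, n) for l in line_lengths(np.diagonal(R, k))]
    det = sum(l for l in diag if l >= l_min) / sum(diag) if diag else 0.0
    # vertical lines
    vert = [l for j in range(n) for l in line_lengths(R[:, j])]
    long_v = [l for l in vert if l >= v_min]
    lam = sum(long_v) / sum(vert) if vert else 0.0
    tt = sum(long_v) / len(long_v) if long_v else 0.0   # trapping time
    return rr, det, lam, tt
```

On a checkerboard-like matrix (recurrences only on even diagonals), all diagonal lines are long (DET = 1) while every vertical line has length one (LAM = TT = 0), illustrating how the two families of measures capture different structures.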

3. Results

The experimental results reported here were obtained from a culture continually monitored for 6 weeks. Spontaneous activity monitoring started at 2 DIV (day in vitro) and continued up to DIV 42. The chip was monitored for at least 30 minutes, at least three times per week. Results refer to ten minutes of spontaneous activity starting five minutes after the beginning of the electrophysiological recording; we waited five minutes to obtain a more stable electrophysiological behavior.

Random spiking activity was evident even at a very early stage of network development; however, only a few channels were active, and the spiking was very low in frequency (Figure 2) and unsynchronized.

After one week in culture (DIV 10) the number of active channels was higher and it was possible to record both local spiking and bursting. Until the third week, bursts were generated at low frequency (Figure 3) and low amplitude (i.e., few spikes belonging to the same burst) (see Table 1). Burst Duration is the parameter that showed the least variation over time (see Table 1). If we consider the bursts at the network level and compute a weighted burst duration that takes into account the number of active bursting channels (mean burst duration × number of bursting channels), then the network was seen to be most active during the third week (Figure 3).

[Table 1: Mean Burst Rate [burst/min], Mean Burst Number [burst], Burst Amplitude [spike], Burst Duration [ms], Weighted Burst Duration [ms], Mean Inter Burst Interval [ms], and Weighted IBI [ms], measured at DIV 10, 13, 14, 15, 16, 17, 20, 21, 22, 23, 28, 29, 30, and 42; the numerical entries are not reproduced here.]
The mean interburst interval was variable when considered at the single-channel level, while the weighted interburst interval (interburst interval/number of active bursting channels) showed its lowest values during the third week in vitro (Figure 3), confirming that the rate of burst events was higher in this period.

Before performing RQA it is necessary to determine the time delay $\tau$ and the embedding dimension $m$ that define the reconstructed state space, and different methods have been developed for this purpose (the interested reader is referred to the books of Abarbanel [36] and Kantz and Schreiber [37]). In this work we used the first minimum of the average mutual information (AMI) to calculate the time delay [38] and the False Nearest Neighbours (FNNs) [39] to compute the embedding dimension. Since we were interested in comparing all recorded time series at different periods (DIV), we used average values and obtained $\tau = 2$ and $m = 5$.

Another important parameter for the recurrence quantification analysis is the cutoff radius $\varepsilon$. If $\varepsilon$ is too small, almost no recurrence points exist; conversely, if $\varepsilon$ is too large, almost every point is close to every other point. Hence, a compromise value of $\varepsilon$ has to be found. Some “rules of thumb” can be applied [35].

(1) $\varepsilon$ should not exceed 10% of the mean or the maximum phase space diameter [40].
(2) $\varepsilon$ should be such that the recurrence point density in the RP is approximately 1% [41].
(3) To avoid problems related to noise, $\varepsilon$ has to be chosen such that it is five times larger than the standard deviation of the observational noise, that is, $\varepsilon > 5\sigma$ [42].

We applied the second rule to the first set of time series with enough points (at DIV 10) and kept the same value of $\varepsilon$ for all subsequent recording periods. Even though the rule then no longer holds exactly, it is obviously necessary to use the same parameters to allow intercomparison between the spike interval time series.
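Rule (2) amounts to choosing $\varepsilon$ as a low quantile of the pairwise distances; a hypothetical helper illustrates the idea:

```python
import numpy as np

def radius_for_density(dist, target=0.01):
    """Choose the cutoff radius eps as the quantile of the off-diagonal
    pairwise distances that yields the requested recurrence-point
    density (e.g., target=0.01 for a ~1% dense recurrence plot)."""
    n = dist.shape[0]
    off_diag = dist[~np.eye(n, dtype=bool)]
    return np.quantile(off_diag, target)
```

By construction, thresholding the distance matrix at the returned radius produces a recurrence-point density close to the requested target.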

An example of a Recurrence Plot is presented in Figure 4, where both the x- and y-axis report the interspike interval (isi) after reconstruction; that is, each point is $(isi_i, isi_{i+2}, isi_{i+4}, isi_{i+6}, isi_{i+8})$, because the embedding dimension and delay of the spike trains are 5 and 2, respectively, and dark points correspond to pairs whose Euclidean distance is lower than the cutoff radius (i.e., $\varepsilon$ = 1300).

Only channels with sufficient activity (a spiking rate of at least 0.33 spikes/s) were considered for recurrence quantification analysis (Figure 5). Following the evolution of the RQA parameters over the entire experiment was possible only on four channels (namely, 35, 58, 67, and 74; channel names are in accordance with the MCS MEA chip layout) that were highly active since the early days of the experiment. Linear correlation analysis of the spike interval time series showed that these four channels were weakly correlated. The maximum R value, found at DIV 42, was 0.26 (p < .0005) between channels 58 and 67. Shifting the spike interval time series, using the cross correlation function, did not improve the maximum correlation between these channels.

To investigate whether there was high correlation between channels for which we did not have enough data to carry out recurrence quantification analysis, the raw signals (one minute, i.e., 60000 points, five minutes after the start of the recording) for all channels at DIV 9 and DIV 14 were analyzed. The maximum correlation coefficient between the 60 recorded channels at DIV 9 was 0.0786, between channels 46 and 57, whereas for DIV 14 it was 0.26, between channels 76 and 77. In general, there was a slight increase in the average correlation coefficient between the two days; however, the values were quite low. Also in this case, shifting the time series using the CCF did not improve the correlation results.

4. Discussion

In vitro neuronal networks of cortical cells grown on Multielectrode Array (MEA) chips are becoming a widely used means to investigate basic neuronal properties (neural coding) [4–8], higher properties (learning and plasticity) [2, 9], and electrophysiological response to pharmacological manipulation [13, 17, 18].

During development, neuronal networks change in morphology and express different activity patterns [7, 20, 30]: activity that ranges from sporadic spiking (when the network presents immature types of synapses and a very low synaptic density) [43, 44] to complex synchronous activity packages (when neurons start exploiting “far” connections and the network manifests its collective behavior, i.e., network bursts) [4, 7, 30, 45, 46].

While spike trains can be accurately extracted from MEA recordings, for example, [21, 22], identification, description, and prediction of changes in electrophysiological dynamics are less accessible by means of standard processing techniques for neuronal electrophysiology. The introduction of Recurrence Plots [25] and Recurrence Quantification Analysis [24] has made available innovative tools for investigating the complex dynamics of nonstationary systems. As far as we know, this paper is the first attempt to investigate the changes in the spontaneous electrophysiological activity of a growing neuronal network of mouse cortical cells coupled to a multielectrode array chip in terms of Recurrence Quantification Analysis.

The structures of the obtained RPs point out the occurrence of several regime shifts and transitions, in which the system stays for a certain period in a certain state (dark zones) and afterwards moves to another state [43]. This behavior occurs several times for each channel during the analyzed period at each DIV. Concerning the evolution of the MEA chip during the experiment, it is possible to observe that the percentage of recurrence (RR) follows the same trend in all channels: it sharply increases until about DIV 20, stays on a plateau between DIV 20 and DIV 30, and then starts decreasing. DET and LAM reveal a transition from less to more laminar states in channel 67 that occurs around DIV 15, and with a lower amplitude and slightly delayed at channel 58. This is an example of a transition between two states, as shown by Marwan et al. [35] using the logistic map with DET and LAM to detect periodic-chaos/chaos-periodic transitions. The absence of correlation in stochastic processes produces RPs with no or only very short diagonals; long diagonals are the signature of a predetermined trajectory in phase space. Although DET does not really reflect the determinism of the system, an increase in DET can be related to an increase in close trajectories at different times. This is confirmed by the parallel increase in $L_{\max}$, which can be interpreted as an increase in the mean prediction time (or its inverse as the maximal positive Lyapunov exponent, assuming a chaotic system, which is not the case here), as well as by the increase in the entropy (ENTR), which reflects the complexity of the diagonal lines in the RP. In addition to the detection of changes by LAM, the trapping time (TT), which specifies the time in which a system is trapped in a specific state, also increases after DIV 20.

In addition, in order to verify that our model was consistent with what had already been reported in the literature, we processed the recorded electrophysiology to describe the network dynamics in terms of spikes and bursts.

Our recordings confirmed that the behavior of the network changed in the third week in vitro: bursting activity became more dominant, synchronized over all the active channels, and frequent (Figure 3), and the changes do not follow chaotic laws (Figure 5).

Besides pointing to the same results as those observed with more traditional approaches, RQA parameters provide further details about neuronal electrophysiological dynamics, with a particular focus on the periodic structures and clustering properties that are difficult to determine in the original time series: the evolution of spontaneous neuronal electrophysiology is less nonstationary than one could expect. In other words, it will be possible to design in silico models that consider and mimic those states and transitions, and experimental design can benefit from models in which the electrophysiological activity can be better predicted.

We believe that the application of RQA to in vitro electrophysiology is a very promising tool to improve the quality of the results, as well as a read-out in itself. In particular, the technique could be used to allow the intercomparison between results obtained from different MEA chips, providing a tool for toxicity-testing standardization.


Acknowledgments

The authors wish to acknowledge Dr. Anna Price and Dr. Taina Paloosari for cell preparation and maintenance and Professor Maurice Whelan and Professor Alfons Schuster for their very helpful advice.


References

1. A. R. Kriegstein and M. A. Dichter, “Morphological classification of rat cortical neurons in cell culture,” Journal of Neuroscience, vol. 3, no. 8, pp. 1634–1647, 1983.
2. Y. Jimbo, T. Tateno, and H. P. C. Robinson, “Simultaneous induction of pathway-specific potentiation and depression in networks of cortical neurons,” Biophysical Journal, vol. 76, no. 2, pp. 670–678, 1999.
3. G.-Q. Bi and M.-M. Poo, “Distributed synaptic modification in neural networks induced by patterned stimulation,” Nature, vol. 401, no. 6755, pp. 792–796, 1999.
4. E. Maeda, H. P. C. Robinson, and A. Kawana, “The mechanisms of generation and propagation of synchronized bursting in developing networks of cortical neurons,” Journal of Neuroscience, vol. 15, no. 10, pp. 6834–6845, 1995.
5. G. W. Gross, J. M. Kowalski, and B. K. Rhoades, “Spontaneous and evoked oscillations in cultured neuronal networks,” in Oscillations in Neural Systems, D. Levine, V. Brown, and T. Shirey, Eds., pp. 3–29, Erlbaum, New York, NY, USA, 1999.
6. J. M. Beggs and D. Plenz, “Neuronal avalanches in neocortical circuits,” Journal of Neuroscience, vol. 23, no. 35, pp. 11167–11177, 2003.
7. J. van Pelt, M. A. Corner, P. S. Wolters, W. L. C. Rutten, and G. J. A. Ramakers, “Long-term stability and developmental changes in spontaneous network burst firing patterns in dissociated rat cerebral cortex cell cultures on multielectrode arrays,” Neuroscience Letters, vol. 361, no. 1–3, pp. 86–89, 2004.
8. V. Pasquale, P. Massobrio, L. L. Bologna, M. Chiappalone, and S. Martinoia, “Self-organization and neuronal avalanches in networks of dissociated cortical neurons,” Neuroscience, vol. 153, no. 4, pp. 1354–1369, 2008.
9. E. Maeda, Y. Kuroda, H. P. C. Robinson, and A. Kawana, “Modification of parallel activity elicited by propagating bursts in developing networks of rat cortical neurones,” European Journal of Neuroscience, vol. 10, no. 2, pp. 488–496, 1998.
10. G. Shahaf and S. Marom, “Learning in networks of cortical neurons,” Journal of Neuroscience, vol. 21, no. 22, pp. 8782–8788, 2001.
11. D. Eytan, N. Brenner, and S. Marom, “Selective adaptation in networks of cortical neurons,” Journal of Neuroscience, vol. 23, no. 28, pp. 9349–9356, 2003.
12. A. Novellino, P. D'Angelo, L. Cozzi, M. Chiappalone, V. Sanguineti, and S. Martinoia, “Connecting neurons to a mobile robot: an in vitro bidirectional neural interface,” Computational Intelligence and Neuroscience, vol. 2007, Article ID 12725, 13 pages, 2007.
13. S. I. Morefield, E. W. Keefer, K. D. Chapman, and G. W. Gross, “Drug evaluations using neuronal networks cultured on microelectrode arrays,” Biosensors & Bioelectronics, vol. 15, no. 7-8, pp. 383–396, 2000.
14. A. Gramowski, D. Schiffmann, and G. W. Gross, “Quantification of acute neurotoxic effects of trimethyltin using neuronal networks cultured on microelectrode arrays,” NeuroToxicology, vol. 21, no. 3, pp. 331–342, 2000.
15. E. W. Keefer, A. Gramowski, D. A. Stenger, J. J. Pancrazio, and G. W. Gross, “Characterization of acute neurotoxic effects of trimethylolpropane phosphate via neuronal network biosensors,” Biosensors & Bioelectronics, vol. 16, no. 7-8, pp. 513–525, 2001.
16. K. V. Gopal, “Neurotoxic effects of mercury on auditory cortex networks growing on microelectrode arrays: a preliminary analysis,” Neurotoxicology & Teratology, vol. 25, no. 1, pp. 69–76, 2003.
17. M. Chiappalone, A. Vato, M. Tedesco, M. Marcoli, F. A. Davide, and S. Martinoia, “Networks of neurons coupled to microelectrode arrays: a neuronal sensory system for pharmacological applications,” Biosensors & Bioelectronics, vol. 18, no. 5-6, pp. 627–634, 2003.
18. A. Gramowski, K. Jügelt, S. Stüwe et al., “Functional screening of traditional antidepressants with primary cortical neuronal networks grown on multielectrode neurochips,” European Journal of Neuroscience, vol. 24, no. 2, pp. 455–465, 2006.
19. M. Chiappalone, M. Bove, A. Vato, M. Tedesco, and S. Martinoia, “Dissociated cortical networks show spontaneously correlated activity patterns during in vitro development,” Brain Research, vol. 1093, no. 1, pp. 41–53, 2006.
20. D. A. Wagenaar, J. Pine, and S. M. Potter, “An extremely rich repertoire of bursting patterns during the development of cortical cultures,” BMC Neuroscience, vol. 7, article 11, 2006.
21. A. Maccione, M. Gandolfo, P. Massobrio, A. Novellino, S. Martinoia, and M. Chiappalone, “A novel algorithm for precise identification of spikes in extracellularly recorded neuronal signals,” Journal of Neuroscience Methods, vol. 177, no. 1, pp. 241–249, 2009.
22. A. Novellino, M. Chiappalone, A. Maccione, and S. Martinoia, “Neural signal manager: a collection of classical and innovative tools for multi-channel spike train analysis,” International Journal of Adaptive Control and Signal Processing, vol. 23, no. 11, pp. 999–1013, 2009.
23. E. N. Brown, R. E. Kass, and P. P. Mitra, “Multiple neural spike train data analysis: state-of-the-art and future challenges,” Nature Neuroscience, vol. 7, no. 5, pp. 456–461, 2004.
24. J. P. Zbilut and C. L. Webber Jr., “Embeddings and delays as derived from quantification of recurrence plots,” Physics Letters A, vol. 171, no. 3-4, pp. 199–203, 1992.
25. J. P. Eckmann, S. O. Kamphorst, and D. Ruelle, “Recurrence plots of dynamical systems,” Europhysics Letters, vol. 4, pp. 973–977, 1987.
26. P. Kalużny and R. Tarnecki, “Recurrence plots of neuronal spike trains,” Biological Cybernetics, vol. 68, no. 6, pp. 527–534, 1993.
27. V. Novák and J. Schmidt, “Changes of the inner time structures in the sequences of interspike intervals produced by the activity of excitatory and inhibitory synapses: simulation with Gaussian input processes,” Physiological Research, vol. 46, no. 6, pp. 497–505, 1997.
28. N. Marwan and A. Meinke, “Extended recurrence plot analysis and its application to ERP data,” International Journal of Bifurcation and Chaos, vol. 14, no. 2, pp. 761–771, 2004.
29. A. Bergner, M. C. Romano, J. Kurths, and M. Thiel, “Synchronization analysis of neuronal networks by means of recurrence plots,” in Lectures in Supercomputational Neuroscience: Dynamics in Complex Brain Networks, P. Graben, C. Zhou, M. Thiel, and J. Kurths, Eds., pp. 177–191, Springer, Berlin, Germany, 2008.
30. M. Chiappalone, A. Novellino, I. Vajda, A. Vato, S. Martinoia, and J. van Pelt, “Burst detection algorithms for the analysis of spatio-temporal patterns in cortical networks of neurons,” Neurocomputing, vol. 65-66, pp. 653–662, 2005.
  31. S. J. Orfanidis, Optimum Signal Processing: An Introduction, Prentice-Hall, Englewood Cliffs, NJ, USA, 2nd edition, 1996.
  32. C. L. Webber Jr. and J. P. Zbilut, “Dynamical assessment of physiological systems and states using recurrence plot strategies,” Journal of Applied Physiology, vol. 76, no. 2, pp. 965–973, 1994. View at: Google Scholar
  33. L. L. Trulla, A. Giuliani, J. P. Zbilut, and C. L. Webber Jr., “Recurrence quantification analysis of the logistic equation with transients,” Physics Letters A, vol. 223, no. 4, pp. 255–260, 1996. View at: Publisher Site | Google Scholar | MathSciNet
  34. N. Marwan, N. Wessel, U. Meyerfeldt, A. Schirdewan, and J. Kurths, “Recurrence-plot-based measures of complexity and their application to heart-rate-variability data,” Physical Review E, vol. 66, no. 2, Article ID 026702, 8 pages, 2002. View at: Publisher Site | Google Scholar
  35. N. Marwan, M. C. Romano, M. Thiel, and J. Kurths, “Recurrence plots for the analysis of complex systems,” Physics Reports, vol. 438, no. 5-6, pp. 237–329, 2007. View at: Publisher Site | Google Scholar | MathSciNet
  36. H. D. I. Abarbanel, Analysis of Observed Chaotic Data, Springer, New York, NY, USA, 1996.
  37. H. Kantz and T. Shreiber, Nonlinear Time Series Analysis, Cambridge University Press, Cambridge, UK, 1997.
  38. A. M. Fraser and H. L. Swinney, “Independent coordinates for strange attractors from mutual information,” Physical Review A, vol. 33, no. 2, pp. 1134–1140, 1986. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
  39. M. B. Kennel, R. Brown, and H. D. I. Abarbanel, “Determining embedding dimension for phase-space reconstruction using a geometrical construction,” Physical Review A, vol. 45, no. 6, pp. 3403–3411, 1992. View at: Publisher Site | Google Scholar
  40. M. Koebbe and G. Mayer-Kress, “Use of recurrence plots in the analysis of time-series data,” in Proceedings of SFI Studies in the Science of Complexity, M. Casdagli and S. Eubank, Eds., vol. 21, pp. 361–378, Adison-Wesley, Reading, Mass, USA, 1992. View at: Google Scholar
  41. J. P. Zbilut, J.-M. Zaldívar-Comenges, and F. Strozzi, “Recurrence quantification based Liapunov exponents for monitoring divergence in experimental data,” Physics Letters A, vol. 297, no. 3-4, pp. 173–181, 2002. View at: Publisher Site | Google Scholar
  42. M. Thiel, M. C. Romano, J. Kurths, R. Meucci, E. Allaria, and F. T. Arecchi, “Influence of observational noise on the recurrence quantification analysis,” Physica D, vol. 171, no. 3, pp. 138–152, 2002. View at: Publisher Site | Google Scholar | MathSciNet
  43. M. Ichikawa, K. Muramoto, K. Kobayashi, M. Kawahara, and Y. Kuroda, “Formation and maturation of synapses in primary cultures of rat cerebral cortical cells: an electron microscopic study,” Neuroscience Research, vol. 16, no. 2, pp. 95–103, 1993. View at: Publisher Site | Google Scholar
  44. K. Muramoto, M. Ichikawa, M. Kawahara, K. Kobayashi, and Y. Kuroda, “Frequency of synchronous oscillations of neuronal activity increases during development and is correlated to the number of synapses in cultured cortical neuron networks,” Neuroscience Letters, vol. 163, no. 2, pp. 163–165, 1993. View at: Publisher Site | Google Scholar
  45. S. Marom and G. Shahaf, “Development, learning and memory in large random networks of cortical neurons: lessons beyond anatomy,” Quarterly Reviews of Biophysics, vol. 35, no. 1, pp. 63–87, 2002. View at: Publisher Site | Google Scholar
  46. J. van Pelt, I. Vajda, P. S. Wolters, M. A. Corner, and G. J. A. Ramakers, “Dynamics and plasticity in developing neuronal networks in vitro,” Progress in Brain Research, vol. 147, pp. 171–188, 2005. View at: Publisher Site | Google Scholar

Copyright © 2010 Antonio Novellino and José-Manuel Zaldívar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
