Computational Intelligence and Neuroscience

Volume 2009 (2009), Article ID 658474, 13 pages

http://dx.doi.org/10.1155/2009/658474

## On the Relation between Bursts and Dynamic Synapse Properties: A Modulation-Based Ansatz

Chair for Parallel VLSI Systems and Neural Circuits, Dresden University of Technology, 01062 Dresden, Germany

Received 12 June 2008; Revised 31 January 2009; Accepted 11 March 2009

Academic Editor: Rodrigo Quiroga

Copyright © 2009 Christian Mayr et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

When entering a synapse, presynaptic pulse trains are filtered according to the recent pulse history at the synapse and also with respect to their own pulse time course. Various behavioral models have tried to reproduce these complex filtering properties. In particular, the quantal model of neurotransmitter release has been shown to be highly selective for particular presynaptic pulse patterns. However, since the original, pulse-iterative quantal model does not lend itself to mathematical analysis, investigations have only been carried out via simulations. In contrast, we derive a comprehensive explicit expression for the quantal model. We show the correlation between the parameters of this explicit expression and the preferred spike train pattern of the synapse. In particular, our analysis of the transmission of modulated pulse trains across a dynamic synapse links the original parameters of the quantal model to the transmission efficacy of two major spiking regimes, that is, bursting and constant-rate ones.

#### 1. Introduction

The main computational function of artificial neural networks has traditionally been modeled as an adjustment of the coupling weight between neurons. In biological nets, this coupling weight is provided by the synapse, where an incoming (presynaptic) pulse causes a release of neurotransmitters, which in turn generate a postsynaptic current (PSC) that charges the postsynaptic (i.e., receiving) neuron membrane [1]. The synaptic weight (the size of the PSC) can be modeled as a function of three different variables [2]:

$$w = n_{rel} \cdot p_{rel} \cdot q, \tag{1}$$

that is, the number of release sites $n_{rel}$, the neurotransmitter release probability $p_{rel}$, and the quantity $q$ of neurotransmitter released per site.

Mechanisms acting on the number of release sites seem to be targeted at long-term learning, while plasticity of the neurotransmitter release probability and release quantity both act on timescales of 0.1–1 seconds and are therefore well suited for extracting temporal fine structure of presynaptic pulse trains [3, 4]. Even for long-term learning, this short-term synaptic filtering may influence the type of learning [5]. Thus, dynamic synapses carry out various crucial signal transformations; for a review, see [3]. These transformations are used for processing sensory information, for example, in the auditory cortex [6].

Dynamic synapses also interact in a complex manner with another important component of neural information transmission, modulated pulse trains [7, 8], that is, spike trains characterized by regular shifts between high and low pulse rates [9]. In biology, these bursting spike trains have been implicated in the rapid transmission of information, encoding of stimuli, and population synchrony [7]. This interaction has been shown in simulations of models describing the plasticity of the synaptic release probability [4] and also in models of the plasticity of the release quantity [8, 10]. With regard to this interaction, it is often postulated as a general neural principle that a new stimulus is favorably transmitted over a steady-state one. This would mean that modulated spike trains, in which the stimulus continuously changes, would be favored over regular-rate stimuli. We will critically examine this assumption, extending the plasticity-based calculations of Natschlaeger and Maass [10].

This short-term plasticity has been modeled in an influential manuscript by Markram et al. [11]. They introduced a formulation of quantal neurotransmitter release based on a descriptive model of biological mechanisms and measurements (in the following referred to as *quantal model*). Over the intervening years, the quantal model has been extensively studied with respect to its information transmission properties [3, 8, 10, 12]. It has also been combined with other synaptic plasticity mechanisms to investigate possible interrelations with long-term learning [3, 5] or probabilistic release models [8]. Various state-of-the-art neuroscience efforts still employ the original model, for example, in studies of pain reception [5], the differing modes of memory retrieval [13], or in the ongoing effort to fully characterize the model itself and its various processing characteristics [5, 13, 14]. Most of this work has been carried out via simulations, probably owing to the iterative, pulse-based nature of the model, which makes a closed solution, that is, some kind of transfer function, intractable. However, especially the causal dependency of the model's behavior on its parameters cannot be fully explained with simulations such as the ones in [10]. Rather, some kind of analytical expression is needed. This is especially interesting since biological synapses show very complex interdependencies between their state variables and behavior [15, 16]; so an analytical expression of the biophysical model in [11] could be employed to identify the governing variables and mechanisms.

To derive this expression, we show that for regular pulse rates, the model by Markram et al. can be expressed explicitly as an exponential decay function. We use this function in Section 2 to deduce the response of a dynamic synapse to frequency modulated pulse trains. The veracity of the explicit expression is shown by comparison to simulations of the original quantal model in Section 3. Furthermore, we extend the optimality analysis of [10] to a wider parameter spectrum and give an explanation for the favored transmission of modulated spike trains in dynamic synapses.

#### 2. Synaptic Transmission of Modulated Pulse Trains

##### 2.1. Model of Activity-Dependent Synapses

The model developed by Markram et al. [11] is governed by two parameters, the utilization of synaptic efficacy $u_n$ and the available synaptic efficacy $R_n$. These are normalized as fractions of the overall efficacy at pulse $n$ of the pulse train. The model is based on a formulation of the refractoriness of neurotransmitter release, where available synaptic efficacy is dependent on the fraction used up in previous pulses. This increased usage is counteracted by a facilitation mechanism, which increases the utilization of synaptic efficacy (i.e., the available neurotransmitter amount) with rising pulse rate. Thus, utilization is increased (facilitated) with each pulse and recovers with a time constant $\tau_{facil}$, while synaptic efficacy recovers with $\tau_{rec}$, dependent on the current utilization. The iterative equations governing the evolution of $u_n$ and $R_n$ are as follows [11] (for (3), we use the index correction stated by Natschlaeger and Maass [10]):

$$u_{n+1} = u_n\, e^{-\Delta t_n/\tau_{facil}} + U_{SE}\left(1 - u_n\, e^{-\Delta t_n/\tau_{facil}}\right), \tag{2}$$

$$R_{n+1} = R_n\, (1 - u_{n+1})\, e^{-\Delta t_n/\tau_{rec}} + 1 - e^{-\Delta t_n/\tau_{rec}}, \tag{3}$$

where $\Delta t_n$ denotes the time elapsed between pulses $n$ and $n+1$ of the pulse train. The starting terms for (2) and (3) are computed from the utilization $U_{SE}$ of a relaxed synapse as $u_1 = U_{SE}$ or $R_1 = 1$, respectively [11]. The PSC caused by a presynaptic pulse is defined as the product of $u_n$ and $R_n$, weighted with the absolute synaptic efficacy $A_{SE}$ (the ratio between release quantity and resultant PSC):

$$\mathrm{PSC}_n = A_{SE}\, u_n R_n. \tag{4}$$

The effect of this adaption can best be described as transmission of transients, that is, changes in the presynaptic pulse rate are transmitted with their full dynamic range to the postsynaptic neuron, but the response to steady-state input pulse rates diminishes. This seems to be a universal feature of biological neural nets, where novel stimuli receive increased responses compared to static ones [1, 3].
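To make the pulse-iterative nature of the model concrete, the update rule can be sketched in a few lines of Python. The sketch below follows the standard facilitation/depression recursion described above; the symbol names and parameter values (`U`, `tau_rec`, `tau_facil`) are illustrative placeholders, not the fitted biological values of [11]:

```python
import numpy as np

def quantal_pscs(spike_times, U=0.5, tau_rec=0.8, tau_facil=0.05, A=1.0):
    """Per-pulse PSC amplitudes of the iterative quantal model (sketch).

    U: baseline utilization of a relaxed synapse, tau_* in seconds,
    A: absolute synaptic efficacy. All values are illustrative.
    """
    u, R = U, 1.0                      # relaxed synapse before the first pulse
    pscs = [A * u * R]
    for dt in np.diff(spike_times):
        ef = np.exp(-dt / tau_facil)
        er = np.exp(-dt / tau_rec)
        u = u * ef + U * (1.0 - u * ef)      # facilitation of utilization
        R = R * (1.0 - u) * er + 1.0 - er    # depletion and recovery
        pscs.append(A * u * R)
    return np.array(pscs)
```

For a regular pulse train, this reproduces the adaptation toward a steady-state amplitude described in the text.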

For a steady-state signal, the above response can be thought of as a signal compression, so that the high dynamic range of, for example, sensory input is adapted to the limited range of the pulse response of a neuron [3]. The steady-state values that $u$ and $R$ settle to for a given pulse rate (Figure 1) can be computed by equating $u_{n+1}$ and $u_n$ in (2) for a fixed pulse rate, that is, a fixed interspike interval $\Delta t$ [11]:

$$u_\infty = \frac{U_{SE}}{1 - (1 - U_{SE})\, e^{-\Delta t/\tau_{facil}}}. \tag{5}$$

Using this and a similar equalization approach for (3), the convergent $R_\infty$ is derived as

$$R_\infty = \frac{1 - e^{-\Delta t/\tau_{rec}}}{1 - (1 - u_\infty)\, e^{-\Delta t/\tau_{rec}}}. \tag{6}$$

As expected from the model, steady-state utilization increases with higher pulse rate, whereas available synaptic efficacy decreases. Due to the different time constants, these changes do not cancel out completely but lead to a maximum single PSC at around 20 Hz with slight decay for pulse rates below or above this value [11].
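These converged values can be evaluated directly. The closed forms below follow from equating successive $u$ and $R$ values in the recursion for a fixed interspike interval; symbol names and parameter values are again illustrative:

```python
import numpy as np

def steady_state(rate, U=0.5, tau_rec=0.8, tau_facil=0.05):
    """Converged (u, R) of the quantal model at a fixed pulse rate (sketch)."""
    dt = 1.0 / rate                       # fixed interspike interval
    ef = np.exp(-dt / tau_facil)
    er = np.exp(-dt / tau_rec)
    u_inf = U / (1.0 - (1.0 - U) * ef)    # fixed point of the u recursion
    R_inf = (1.0 - er) / (1.0 - (1.0 - u_inf) * er)  # fixed point of R
    return u_inf, R_inf
```

As described in the text, utilization grows and available efficacy shrinks with increasing rate; for very low rates the relaxed values are recovered.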

However, this steady-state analysis does not do justice to the complex transmission characteristics across a dynamic synapse. Consequently, in the following we analyze the response of a synapse to a single transient pulse rate transition.

Figure 2 shows the response of the quantal synapse to a step change in pulse rate. The synapse starts out with one of three initial converged values, for pulse rates of 1 Hz, 15 Hz, and 30 Hz, respectively. We then transition to a new pulse rate, as denoted on the abscissa, for 5 consecutive pulses. The mean PSC of these 5 pulses after the step in pulse rate was then normalized. The reference value for normalization is the converged PSC that would result from the steady-state PSC response for the pulse rate after the step, that is, the response if the new pulse train were not stopped after 5 pulses.

For decreasing pulse rate, the PSC response will continuously decrease, making the transient response bigger than the converged value. At first glance, one would expect the opposite for increasing pulse rate: if the PSC continuously increased, the transient PSC should be smaller than the converged value, and the quotient between both values should diminish for larger steps in pulse rate, due to the relation of the two time constants as well as the shorter time window. In contrast to that, Figure 2 shows transient PSCs higher than equilibrium for bigger step-ups in pulse rate; in particular, for an initial 30 Hz rate, this is the case for all frequencies after the step change. This effect is caused by two processes: first, the time constant for utilization considerably decreases with higher pulse rate, making it roughly equal to the time constant for efficacy (see (A.7) in the appendix); second, the value of a single PSC decreases above a pulse rate of approximately 20 Hz [11], so that the resulting mean PSC sharply increases with the frequency step-up due to the higher number of releases per time but is then regulated down by the decreasing amplitude of a single PSC. This effect is also visible from the PSC time course in Figure 1.
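The step experiment of Figure 2 can be replicated numerically: start from the converged state at an initial rate, apply five pulses at the new rate, and normalize the mean PSC by the steady-state PSC of the new rate. All parameter values below are illustrative, so the numbers (not the qualitative effect) will differ from Figure 2:

```python
import numpy as np

def step_response(r0, r1, U=0.5, tau_rec=0.8, tau_facil=0.05, n_pulses=5):
    """Mean PSC of n_pulses at rate r1, starting from the state converged
    at rate r0, normalized by the steady-state PSC at r1 (sketch)."""
    def steady(rate):
        dt = 1.0 / rate
        ef, er = np.exp(-dt / tau_facil), np.exp(-dt / tau_rec)
        u = U / (1.0 - (1.0 - U) * ef)
        R = (1.0 - er) / (1.0 - (1.0 - u) * er)
        return u, R
    u, R = steady(r0)                  # converged state before the step
    dt = 1.0 / r1
    ef, er = np.exp(-dt / tau_facil), np.exp(-dt / tau_rec)
    pscs = []
    for _ in range(n_pulses):
        u = u * ef + U * (1.0 - u * ef)
        R = R * (1.0 - u) * er + 1.0 - er
        pscs.append(u * R)
    u_inf, R_inf = steady(r1)
    return float(np.mean(pscs) / (u_inf * R_inf))
```

With these parameters, stepping up from a relaxed (1 Hz) state yields a transient mean PSC well above the new equilibrium, mirroring the amplified transient response discussed above.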

As shown in this section, the response of dynamic synapses cannot be fully characterized by the transmission characteristics for regular pulse rates. The response for most cases of transients is amplified compared to the steady-state response. This is as expected from biology, where changes in pulse rate are a source of information, while static stimuli should be attenuated in favor of these transients [1, 3].

##### 2.2. Analytical Approach to Synaptic Transmission

In a generalization of the analysis carried out in Figure 2, in this section we derive the response of the quantal model to fully transient stimuli. Thus, we do not start from converged values of $u$ and $R$ as in Figure 2, but we use repeating transient stimuli that result in regular variations in pulse rate (i.e., a modulated pulse signal; see Figure 3).

A modulated pulse rate can be thought of as a sequence of bursts and as such represents a generic model for various types of neural pulse signaling, where the information is encoded in the temporal fine structure of the pulse signal [8, 9] or where bursts represent mechanisms in memory retrieval [13].

In the upper part of Figure 3, we generate a sine-modulated stochastic pulse train using a Poisson process [1] with time-variable pulse rate:

$$f(\Delta t) = r\, e^{-r\, \Delta t}, \tag{7}$$

where $f(\Delta t)$ is the probability density function of the time between two successive spikes. In contrast to [1], we do not employ a fixed pulse rate $r$, but one periodically sine-modulated between a high pulse rate $r_{high}$ and a low pulse rate $r_{low}$. We use this formulation for the simulations carried out in Section 2.3. However, for the mathematical analysis, we further simplify the stochastic bursting spike train in the upper part of Figure 3 to one that switches with a period of $T = 1/f_{mod}$ between two fixed pulse rates (see lower part of Figure 3). We additionally introduce a duty cycle $d$ as the fraction of high-rate stimulation per period. This enables a close approximation of different spiking modes (bursting, stuttering, etc.). For the approximation of the sine wave, a suitable duty cycle is chosen.
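The sine-modulated stochastic train in the upper part of Figure 3 can be generated with the standard thinning method for inhomogeneous Poisson processes; the function and parameter names below are our own, not taken from the paper:

```python
import numpy as np

def sine_modulated_poisson(r_low, r_high, f_mod, T, rng=None):
    """Spike times from an inhomogeneous Poisson process whose rate swings
    sinusoidally between r_low and r_high at frequency f_mod (thinning)."""
    rng = np.random.default_rng() if rng is None else rng
    n_cand = rng.poisson(r_high * T)            # candidates at the peak rate
    t_cand = np.sort(rng.uniform(0.0, T, n_cand))
    rate = r_low + 0.5 * (r_high - r_low) * (1.0 + np.sin(2.0 * np.pi * f_mod * t_cand))
    keep = rng.uniform(0.0, 1.0, n_cand) < rate / r_high
    return t_cand[keep]
```

Each candidate spike drawn at the peak rate is kept with probability proportional to the instantaneous rate, which yields the desired time-varying statistics.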

Figure 4 qualitatively shows the time course of $u$ for this switched modulated stimulus. Its value oscillates inside a fixed amplitude interval that depends on the modulation frequency $f_{mod}$, the duty cycle $d$, the convergence limits for low and high pulse rate, as well as the effective time constants defined by (A.7) in the appendix.

For the derivation of the PSC's modulation dependency, we start with the explicit expression of (2) as derived in the appendix:

$$u(t) = u_\infty + (u_{start} - u_\infty)\, e^{-t/\tau_u}, \tag{8}$$

where $\tau_u$ is the effective time constant of (A.7) and $u_{start}$ the value of $u$ at the beginning of the respective interval.

Dependent on the sign of the term $(u_{start} - u_\infty)$, this equation describes one increasing or decreasing part of the time course, respectively. For a complete formulation, the initial values for each cycle must be calculated. These are generally not the limits of convergence, but intermediate values, as can be seen from Figure 4. Their calculation will be shown as an example for $u_1$, the value of $u$ at the start of the high-rate interval, in the following. Our approach is based on the observation that the value of $u$ at points 1 and 3 in Figure 4 is the same in a steady state. Following the time course of $u$ beginning at point 1 (assuming $u = u_1$ there) gives

$$u(t) = u_{\infty,h} + (u_1 - u_{\infty,h})\, e^{-t/\tau_{u,h}}, \qquad u_2 = u_{\infty,h} + (u_1 - u_{\infty,h})\, e^{-dT/\tau_{u,h}}, \tag{9}$$

with the second equation determining the value $u_2$ of $u$ at the end of the high-rate interval. An analogous relation for the low-rate interval, that is, the time course from point 2 to 3, results in

$$u_1 = u_{\infty,l} + (u_2 - u_{\infty,l})\, e^{-(1-d)T/\tau_{u,l}}. \tag{10}$$

Evaluating (9) and (10) leads to the following expression for $u_1$:

$$u_1 = \frac{u_{\infty,l}\left(1 - e^{-(1-d)T/\tau_{u,l}}\right) + u_{\infty,h}\left(1 - e^{-dT/\tau_{u,h}}\right) e^{-(1-d)T/\tau_{u,l}}}{1 - e^{-dT/\tau_{u,h}}\, e^{-(1-d)T/\tau_{u,l}}}. \tag{11}$$

Results for $u_2$ and for the corresponding boundary values of $R$ can be derived with similar approaches.

Now, the mean synaptic release quantity can be calculated. This is done by integrating the product $u(t)R(t)$ and normalizing the result with the integration interval. For the high-rate interval, that is, the time course between points 1 and 2, the following holds:

$$\overline{uR}_h = \frac{1}{dT} \int_0^{dT} u(t)\, R(t)\, dt. \tag{12}$$

Evaluating this integral results in

$$\overline{uR}_h = u_{\infty,h} R_{\infty,h} + \frac{1}{dT}\Big[ u_{\infty,h} (R_1 - R_{\infty,h})\, \tau_{R,h} \left(1 - e^{-dT/\tau_{R,h}}\right) + R_{\infty,h} (u_1 - u_{\infty,h})\, \tau_{u,h} \left(1 - e^{-dT/\tau_{u,h}}\right) + (u_1 - u_{\infty,h}) (R_1 - R_{\infty,h})\, \tau_p \left(1 - e^{-dT/\tau_p}\right) \Big], \qquad \tau_p = \frac{\tau_{u,h}\, \tau_{R,h}}{\tau_{u,h} + \tau_{R,h}}. \tag{13}$$

Integrating over the low-rate interval, that is, the time course between points 2 and 3, in the same way yields the corresponding value $\overline{uR}_l$.

As mentioned together with Figure 1, these mean values must be weighted by the number of pulses that occurred in the corresponding time interval. This can be done by using the ratio between the total time any pulse was active and the length of the time interval:

$$\rho = \frac{n_P\, \Delta}{T_{int}}, \tag{14}$$

where $\Delta$ is the pulse width and $T_{int}$ the length of the interval.

For the high-rate interval, $T_{int} = dT$, whereas for the low-rate interval, $T_{int} = (1-d)T$. Using the corresponding constant pulse rate $r$, the number of pulses can be calculated for each interval as $n_P = r\, T_{int}$. For calculations, we will set the pulse width $\Delta$ to a fixed value in the millisecond range, which is in agreement with the parameters used in [11].

When calculating an overall mean PSC, the duty cycle (i.e., the fraction of the period each rate was active) has to be taken into account. This results in a weighted average formula:

$$\overline{\mathrm{PSC}} = A_{SE}\left[ d\, \rho_h\, \overline{uR}_h + (1-d)\, \rho_l\, \overline{uR}_l \right]. \tag{15}$$

##### 2.3. Results

The explicit expressions derived in Section 2.2 describe the behavior of PSC transmission dependent on the modulation frequency. To evaluate these equations, we compare our model to numerical simulations of the original iterative equations (2) and (3). In particular, Natschlaeger et al. [10] subject the quantal model to a rigorous numerical analysis; so we apply our model to their framework. Since the optimal spike trains of [10] differ from our modulated pulse rate assumption, we have to validate that the sum over the product $u_n R_n$, that is, the PSC efficacy criterion, has the same quantitative and qualitative behavior for the modulated rate as for the optimized spike train. An initial validation can be done by extracting a sample spike train for a single parameter set from [10], applying a jitter to account for extraction errors, and comparing it to a modulated spike train which is parameterized to exhibit a similar burstiness. This is shown in Figure 5.

The parameters were chosen to resemble the experiment of Figure 5 in [10], with 20 pulses distributed in a one-second interval. All spike trains were processed with the original quantal model. As can be seen, the original optimized spike train shows a strong burstiness, so, as expected, the regular spike train has a much lower synaptic efficacy. Also, the modulated spike rate is well within the bandwidth of the statistical variations of the optimized spike train and also shows significantly larger synaptic efficacy than the regular rate. From this limited example (and others below), the initial assumption for our derivation seems valid, that is, a modulated pulse rate exhibits the same behavior with respect to the Markram model as a more precisely optimized one.
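For such comparisons, a direct numerical reference is useful: simulate the iterative model on a deterministic switched-rate train and average the per-pulse PSCs. The sketch below restarts the pulse phase at each interval boundary for simplicity; all names and parameter values are illustrative assumptions:

```python
import numpy as np

def mean_psc_modulated(r_low, r_high, f_mod, d=0.5, n_periods=50,
                       U=0.5, tau_rec=0.8, tau_facil=0.05):
    """Mean per-pulse PSC of a train switching between r_high (fraction d
    of each period 1/f_mod) and r_low, under the iterative quantal model."""
    times = []
    for k in range(n_periods):
        t0 = k / f_mod
        times.extend(t0 + np.arange(0.0, d / f_mod, 1.0 / r_high))
        times.extend(t0 + d / f_mod + np.arange(0.0, (1.0 - d) / f_mod, 1.0 / r_low))
    u, R = U, 1.0
    total = u * R
    for dt in np.diff(times):
        ef, er = np.exp(-dt / tau_facil), np.exp(-dt / tau_rec)
        u = u * ef + U * (1.0 - u * ef)
        R = R * (1.0 - u) * er + 1.0 - er
        total += u * R
    return total / len(times)
```

Setting `r_low == r_high` yields the regular-rate reference of the same mean frequency.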

In the following, we will thus apply the derivation of Section 2.2, in particular the new noniterative time constants, to extend the analysis of [10] and especially test the predictive and explanatory power of our analytical expressions. Two major activity regimes can be discerned from Figure 5 of [10]: one where the grouping of pulses into short activity bursts results in a large synaptic efficacy, and one where, in contrast, a regular distribution of all 20 pulses across the time interval is advantageous. If we relate this back to our model, the pulse regime is determined by the modulation frequency. Thus, with the explicit expression for the mean PSC (15), we can state an optimality approach alternative to [10]. Maximizing the mean PSC over the modulation frequency corresponds to finding the optimum pulse regime for a synapse. The optimum modulation frequency $f_{mod,opt}$ in that sense can be derived using the necessary condition:

$$\frac{\partial\, \overline{\mathrm{PSC}}}{\partial f_{mod}} = 0. \tag{16}$$

In general, this equation cannot be brought into an explicit form. Approximate explicit expressions could be derived, for example, by assuming particular relations between the effective time constants, but the respective approximations are not valid over the entire synapse parameter space and optimization range. Thus, we will solve the optimization equation numerically. Modulated (i.e., bursty) spike trains are only generated in a limited $f_{mod}$-range; other values result in a regular spike train. Thus, if the partial derivative does not change sign inside this interval, that is, if no local maximum exists therein, a regular spike train will result in a maximum mean PSC.
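Since the optimality condition has no closed form, the optimum modulation frequency can also be located by a simple numerical scan, using the iterative model itself as the objective. Everything below (grid bounds, rates, duty cycle, parameter values) is an illustrative assumption:

```python
import numpy as np

def mean_psc(r_low, r_high, f_mod, d, U, tau_rec, tau_facil, n_periods=40):
    """Mean per-pulse PSC of a switched-rate train under the iterative model."""
    times = []
    for k in range(n_periods):
        t0 = k / f_mod
        times.extend(t0 + np.arange(0.0, d / f_mod, 1.0 / r_high))
        times.extend(t0 + d / f_mod + np.arange(0.0, (1.0 - d) / f_mod, 1.0 / r_low))
    u, R = U, 1.0
    total = u * R
    for dt in np.diff(times):
        ef, er = np.exp(-dt / tau_facil), np.exp(-dt / tau_rec)
        u = u * ef + U * (1.0 - u * ef)
        R = R * (1.0 - u) * er + 1.0 - er
        total += u * R
    return total / len(times)

def optimal_f_mod(U, tau_rec, tau_facil, r_low=2.0, r_high=38.0, d=0.5):
    """Grid search for the modulation frequency maximizing the mean PSC."""
    grid = np.linspace(0.5, 8.0, 16)
    scores = [mean_psc(r_low, r_high, f, d, U, tau_rec, tau_facil) for f in grid]
    return grid[int(np.argmax(scores))]
```

A grid search sidesteps the sign analysis of the partial derivative; for a regular-favoring parameter set, the maximum simply lands at the edge of the scanned range.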

To resemble the optimization regime of [10], we adjust the duty cycle such that the mean frequency is 20 Hz. The results of [10] show that a modulated regime is optimal for low values of $U_{SE}$ and $\tau_{rec}$, whereas for higher values, a regular spike train is favorable. Figure 6 confirms this result with our analysis for an illustrative example: for the low-value case (left), a maximum at approximately 4 Hz is present, whereas for the high-value case, the synaptic efficacy monotonically increases with modulation frequency, which ultimately leads to a regular spike train as an optimum.

Of course, as we have shown in the previous section and the appendix, the preference of the quantal model depends not only on $U_{SE}$, $\tau_{rec}$, and $\tau_{facil}$, but also on the spike train characteristics, that is, the duty cycle $d$, the high rate $r_{high}$, and the low rate $r_{low}$. To show these dependencies, we extend the analysis of Figure 6 to a full sweep across $d$, $r_{high}$, and $r_{low}$, employing the synapse parameter set of Figure 6, left.

Figure 7 shows the optimal modulation frequency $f_{mod,opt}$, derived similarly to Figure 6, in grey-scale. Data points are only depicted if a distinct optimal $f_{mod}$ is found, that is, if the maximum, as shown in Figure 6, left, lies sufficiently above the value of the mean PSC for the high modulation frequency (right side of both graphs in Figure 6). Thus, nonsignificant maxima and cases where a regular spike train is preferred (Figure 6, right) are omitted. A good correspondence between the simulation of the original quantal model and the mean PSC as derived from the analysis in the previous section can be observed, showing the validity of our derivations.

There is almost no dependence of the optimal modulation frequency and the burst preference on the low spike rate $r_{low}$. This may be due to the fact that there is only a certain level of relaxation that can be obtained by the synapse during the low-rate intervals. This means that, while the relaxation is important to obtain a high mean PSC, as will be explained together with Figure 9, the exact low rate during this relaxation is not important, only the fact that there is such a relaxation phase. However, there is a clear dependence between the duty cycle $d$ and $f_{mod,opt}$, where $f_{mod,opt}$ rises linearly with $d$ at a given $r_{high}$ (see columns in the plots of Figure 7). In other words, this can be thought of as

$$d = \frac{T_{burst}}{T} = T_{burst}\, f_{mod}, \tag{17}$$

with $T = 1/f_{mod}$ being the duration of a period and $T_{burst}$ being the duration of the high-rate interval therein, that is, the length of a burst. Thus, if $d/f_{mod}$ is constant, the number of pulses during a burst for a given high rate is also constant. An explanation for this could be that there exists an optimal burst profile which maximizes the mean PSC for a given parameter set $U_{SE}$, $\tau_{rec}$, $\tau_{facil}$, and a given $r_{high}$. Accordingly, if $d$ is subjected to a sweep, $f_{mod,opt}$ must rise with it to keep this optimal profile. At the same time, bursts are shifted closer together, so that the mean number of pulses in a fixed time interval rises linearly with $f_{mod}$. Equation (16) thus searches not so much for an optimal $f_{mod}$ but rather for an optimal burst profile.

Another interesting characteristic of the above plot is the decrease, with increasing $r_{high}$, of the maximum duty cycle at which a significant $f_{mod,opt}$ can be found. This inverse relationship between the maximum duty cycle for a bursty spike train and $r_{high}$ may hint at an optimal profile or number of pulses in a burst that is almost independent of $r_{high}$. According to (17), the number of spikes in a burst can be computed as $n_{burst} = r_{high}\, d/f_{mod}$, resulting in a (mean) number of pulses per burst of 4.3 and 3.6, respectively, for the two high rates examined. These similar values could be explained by the fact that the optimum is governed by the evolution of $u$ and $R$ during the burst. These in turn depend on the absolute time constants derived in the appendix, which scale with the pulse rate; thus, the scalings of the time constants and of $r_{high}$ cancel each other at least partially, resulting in very similar optimal burst profiles despite the change in $r_{high}$. So the absolute value of $f_{mod,opt}$ may vary with $d$, $r_{high}$, and $r_{low}$, but the qualitative behavior, that is, the burst profile for which the mean PSC is maximal, seems to be constant for a given synapse type. Interestingly, there exists no optimal modulation frequency above 8 Hz, that is, in this range a regular spike train is always better than a modulated one. This is probably due to the fact that at this modulation frequency, there is a natural transition between bursty and regular spike trains in any case. That means the burst phases are too short to allow a real grouping of spikes, while the low-rate phases are too short to obtain a significant recovery of $u$ and $R$, so that the same number of pulses achieves a higher synaptic efficacy if it is spaced regularly across the given time span.

As already stated, one of the main questions behind such analyses is for which synapse types (i.e., parameter combinations $U_{SE}$, $\tau_{rec}$, and $\tau_{facil}$) a modulated spike train is favored over a regular spike train in terms of transmission. This question was tackled in [10] only exemplarily, for single-value sweeps. Here, we perform a sweep over the full three-dimensional parameter space of the quantal model, as shown in Figure 8. Thereby we use the relative difference of the mean PSC of a modulated spike train and a regular spike train as a measure for the favored spike mode. The spike train parameters $r_{high}$, $r_{low}$, and the mean rate were again chosen as before, together with a fixed modulation frequency, comparing to the results of Figure 5 in [10].
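Such a sweep can be organized as follows: for each parameter triple, compare the mean PSC of a modulated train against a regular train of the same mean rate (20 Hz in this sketch) and record the relative difference. Grid values, rates, and the modulation frequency are illustrative assumptions, not the values used for Figure 8:

```python
import numpy as np
from itertools import product

def rel_difference(U, tau_rec, tau_facil,
                   r_low=2.0, r_high=38.0, r_reg=20.0, f_mod=2.0, d=0.5):
    """Relative mean-PSC difference between a modulated and a regular spike
    train of the same mean rate (sketch; rates and f_mod are assumptions)."""
    def mean_psc(times):
        u, R = U, 1.0
        total = u * R
        for dt in np.diff(times):
            ef, er = np.exp(-dt / tau_facil), np.exp(-dt / tau_rec)
            u = u * ef + U * (1.0 - u * ef)
            R = R * (1.0 - u) * er + 1.0 - er
            total += u * R
        return total / len(times)
    mod_times = []
    for k in range(40):                      # 40 modulation periods
        t0 = k / f_mod
        mod_times.extend(t0 + np.arange(0.0, d / f_mod, 1.0 / r_high))
        mod_times.extend(t0 + d / f_mod + np.arange(0.0, (1.0 - d) / f_mod, 1.0 / r_low))
    reg_times = np.arange(0.0, 40.0 / f_mod, 1.0 / r_reg)
    m, r = mean_psc(mod_times), mean_psc(reg_times)
    return (m - r) / r

# coarse sweep over the three quantal parameters
grid = {(U, tr, tf): rel_difference(U, tr, tf)
        for U, tr, tf in product([0.1, 0.5], [0.1, 0.8], [0.05, 0.5])}
```

The sign of each entry indicates the favored spike mode for that parameter triple (positive: modulated, negative: regular).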

We also used this parameter sweep to compare our analytical calculation with the original iterative formula. This is a hard test case, because even small deviations in the calculations, for example, those caused by the continuous-time idealization and the approximations made in the derivation of the effective time constants, can lead to marked changes in the relative difference value calculated for comparison. Taking this sensitivity into account, our derivation is in good agreement with the simulation. In particular, the discrimination between a favored modulated or regular spike train is well replicated.

The principal dependencies of the favored spike mode on the synapse parameters as suggested by Figure 5 of [10] are also present in the whole parameter space exploration: a modulated spike train is only favored if $U_{SE}$ or $\tau_{rec}$ are low. Also, for certain fixed values of $U_{SE}$ and $\tau_{rec}$, a transition from regular-favored to modulation-favored transmission with increasing $\tau_{facil}$ is present in the plot, which is in agreement with [10]. This again shows that even with the assumption of a fixed modulated, and potentially nonoptimal, spike train, essentially the same predictions can be derived as with a single-spike optimization, but with much less computational complexity. Further dependencies can also be extracted from the parameter sweep: if both $U_{SE}$ and $\tau_{rec}$ decrease to low values, the relative difference of the response to modulated and regular spike trains becomes more and more independent of $\tau_{facil}$. Additionally, to a certain extent, higher values for $U_{SE}$ can be compensated with lower values for $\tau_{rec}$, and vice versa.

In the following, we try to analyze the above parameter dependencies, based on our modeling of $u$ and $R$ as exponential decays. In particular, based on our noniterative time constants $\tau_u$ and $\tau_R$ for $u$ and $R$ and on the converged values $u_\infty$ and $R_\infty$ that scale the exponential decay functions, we postulate the following mechanisms for a preference of either regular or grouped (bursty) spike trains by the synapse.

There is a dependency of this preference on the relation between the time constants $\tau_u$ and $\tau_R$ for the convergence during the high rates/bursts (Figure 9(c)). A bursty spike train benefits if the time constant $\tau_u$ is relatively low, so that $u$ rises fast to its converged value, which is a factor of five above its relaxed value (i.e., its value at the end of the low-rate interval, Figure 9(a)), as this markedly increases the total value of $uR$. On the other hand, $R$ diminishes to a small value for the high rate, so its convergence time constant $\tau_R$ should be large relative to $\tau_u$, so that most of the spikes during the burst still “see” the high relaxed value (Figure 9(a)). Compared to a parameter regime which preferentially transmits a regular rate (Figures 9(b) and 9(d)), a low time constant $\tau_R$ diminishes the value of $R$ during a burst, and a correspondingly high time constant $\tau_u$ would prevent $u$ from rising to compensate this decrease in $R$, especially if at the same time $u$ has a smaller dynamic range (Figure 9(b)). Thus, for this parameter regime and its resulting time constants, a bursty regime would result in a lower synaptic efficacy compared with a regular rate.

A second criterion, based on which it can be predicted whether a bursty or regular regime is preferred by the synapse, would be the relation between the convergence time constants of $R$ for low and high rate, $\tau_{R,l}$ and $\tau_{R,h}$. This relation expresses the basic intuition that $\tau_{R,l}$ should be relatively low compared to $\tau_{R,h}$, so that $R$ can relax very fast to a high value during the low-rate intervals. In contrast, $\tau_{R,h}$ should be high compared to the burst duration, so that $R$ does not decrease too much during the high-rate intervals. So a parameter regime that results in a low $\tau_{R,l}$ relative to $\tau_{R,h}$ should preferentially transmit bursts.

From the above postulates, two criteria can be derived by which the time constants derived in this paper allow one to predict whether a grouped/bursty or a regular regime is preferred by the synapse. The first one would be the difference between the convergence time constants for the high rates, that is, $\tau_{R,h} - \tau_{u,h}$ (see Figure 10(a)).

As can be seen, there is a definite correlation in the way suggested above: $\tau_{R,h}$ should be larger than $\tau_{u,h}$ in order for the synapse to transmit a bursty spike train better than a regular one. In the figure, this is expressed on the x-axis by the normalized difference between the synaptic efficacy for a regular and for a bursty spike train, respectively (for the same quantal parameter set). The second criterion would be the quotient between the convergence time constants of $R$ for low and high rate, as expressed by $\tau_{R,l}/\tau_{R,h}$. Figure 10(b) shows a plot of this criterion against the same synaptic efficacy criterion as in Figure 10(a). For clarity, the natural logarithm of the above quotient is plotted rather than the quotient itself. Again, as postulated above, there is a clear correlation between a measure based on the convergence time constants and the amount by which a bursty spike train evokes more or less synaptic efficacy compared to a regular spike train. Interestingly, there also seems to be some parameter which causes a change in slope as well as a shift of the correlation curve. When plotting the data points based on their parameter values, it becomes evident that this parameter is the facilitation time constant $\tau_{facil}$, that is, for larger $\tau_{facil}$, the spike trains enter the bursty regime earlier. This trend towards burstiness with increasing $\tau_{facil}$ can be explained based on Figures 9(a) and 9(c). As can be seen, for larger $\tau_{facil}$ the slope of the $u$ curve increases, so that the converged value of $R$ for a regular rate diminishes, while for the short high-rate episodes characteristic of a burst, the relaxed $R$ is still close to one. Due to the fact that the time constant of $R$ for the high rate does not decrease, the burst benefits from this high relaxed value in the same way as it did for lower $\tau_{facil}$. At the same time, $\tau_u$ increases with increasing $\tau_{facil}$, so that there is a more pronounced trend towards long periods of little activity in the spike train, so that $u$ can reach its relaxed value even if its convergence time constant becomes larger.
That the synapse exhibits a mechanism which prefers modulated spike trains for certain parameter sets (as shown above) might also provide an alternative, synapse-based way for bursting behavior to emerge. This could complement the conductance-based bursting behavior shown in [7].

How could these results be applied in the wider neuroscience context? One important topic of current interest is the interaction of the different forms of plasticity at the same synapse, especially with regard to the different temporal timescales of expression [5, 15–17]. Some studies which employ both kinds of plasticity act on an abstract idea of weight, but with essentially unchanging parameters of the dynamic synapse [5]. On the other hand, spike-timing-dependent plasticity (STDP) is postulated to depend on the modulation of the neurotransmitter release probability [16, 17], which in the model discussed herein is expressed as the initial release quantity $U_{SE}$ [11]. As evidenced by our analysis, this directly influences the spike pattern preferences through the mechanisms postulated in Figure 9. So these are not just plasticity mechanisms overlaid on the same synaptic weight [17]; instead, STDP might govern the operating regime of the short-term dynamics. Thus, STDP might not only provide a basis for static, weight-based memory formation [18], but also serve as a substrate for memory and computation in dynamical models. Examples for this could be attractor neural networks [13] or liquid computing [19], which rely heavily on the short-term dynamics of synapses. In this respect, our analysis indicates several ways in which a $U_{SE}$ modulated by some other plasticity mechanism might in turn govern the absolute temporal dynamics of a synapse, namely through the effective time constants and converged values of $u$ and $R$. Pushing this speculation further, there might also be a feedback path back towards STDP, in which the absolute synaptic time constants $\tau_u$ and $\tau_R$ of our derivation influence the time course of the STDP learning window. Of course, classical STDP relies on coincidence between pre- and postsynaptic spikes; so the quantal release mechanism, which only acts on presynaptic spikes, would not work in this context.
However, several newer forms of STDP rely on dendritic spikes [20], which depend on coincident, heterosynaptically expressed presynaptic spike transmission rather than on postsynaptic spikes. Thus, this form of STDP could, through its influence on $U_{SE}$, change $\tau_u$ and $\tau_R$, and these in turn would impact the temporal learning window. This could form some kind of metaplasticity or homeostasis [21], in which STDP influences its own expression at the synapse.

#### 3. Conclusion

We have derived an explicit expression for the iterative quantal model [11] describing the short-term plasticity of dynamic synapses. With this model, a wide range of naturally occurring pulse trains can be subjected to detailed mathematical analysis. For example, our analysis remains valid if the pulse rate during a burst is not constant (see Figure 5). Thus, the selective treatment of bursts by dynamic synapses derived in Section 2 could also be extended to cases where the information is contained in the fine structure of the bursts [4, 8, 12]. Also, the modulation does not have to be constant, that is, the pauses between bursts could vary, so that pulse trains such as those of [8, 14] could also be treated with a more rigorous, global approach rather than an analysis via simulations.

We have shown how the filtering characteristics might be determined from the synaptic parameters. Specifically, we have explained how the filtering characteristic of a dynamic synapse depends on the effective time constants and on their interaction with the converged values of the model variables (see Figures 5 and 10). Also, in extension of [10], we have provided a more complete picture of how the filtering characteristic relates back to the original model parameters (see Figure 8). The mechanisms and correlations shown in Figure 10 could be applied to characterize the transfer/decoding function of synaptic networks such as the ones used in [4]. Also, the closed expression for the transfer function developed in this manuscript could be employed to derive the synaptic parameter set for the optimal coding of stimuli in, for example, the auditory cortex [6]. We have shown a limited example of this in Figure 6, where we use our transfer expression to derive the optimal modulation frequency for a given parameter set, in good agreement with the numerical simulations of [10].

Our noniterative expression for the behavior of the dynamic synapses of [11] could also have consequences for plasticity mechanisms. Synapses exhibit very diverse modulatory and plastic behaviors, whose interdependencies and governing variables often cannot be clearly determined [15, 16]. Since the time constants of neural processes are not very amenable to change [1], it might be assumed that the temporal dynamics and preferences of a synapse are relatively fixed. In contrast, our derivations in this paper predict mechanisms by which a synapse could change its effective time constants and spike pattern preference through the release probability parameter, even though the basic temporal parameters of the dynamic synapse remain constant. In this context, we have speculated on possible repercussions of this modulation of the effective time constants on models of long-term plasticity (STDP), especially with regard to extending STDP to dynamical models of computation and learning/memory.

#### Appendices

#### A. Transient Analytical Description of Quantal Plasticity

The convergence of iterative equations like those of the quantal model [11] can only be expressed explicitly for some special cases [22]. Whereas the convergence limits for a constant presynaptic pulse rate can be derived with relatively little effort [11], the time course and speed of convergence are difficult to define, especially due to the constant parts of the iterative equations for $u$, (2), and $R$, (3).

However, as Figure 11 shows, for regular spike trains with a defined pulse rate, the iterative descriptions of $u$ and $R$ in (2) and (3) can be interpreted as the settling of transient responses to a steady-state value, comparable to the exponential convergence of, for example, an RC voltage settling curve.
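This settling can be reproduced numerically. The following sketch assumes the standard iterative formulation of the quantal model from [11]; the parameter and variable names (`U_SE`, `tau_facil`, `tau_rec`, `u`, `R`) and the example values are ours, chosen for illustration:

```python
import math

def iterate_quantal(f, U_SE, tau_facil, tau_rec, n_pulses):
    """Iterate the release probability u and the available resources R
    of the quantal model for a constant pulse rate f (pulses per second)."""
    dt = 1.0 / f                          # constant inter-spike interval
    u, R = U_SE, 1.0                      # values at the first pulse
    trace = [(u, R)]
    for _ in range(n_pulses - 1):
        # facilitation: u relaxes toward 0 with tau_facil, then jumps by U_SE*(1 - u)
        decay = math.exp(-dt / tau_facil)
        u = u * decay + U_SE * (1.0 - u * decay)
        # depression: R recovers toward 1 with tau_rec, depleted by the release u*R
        rec = math.exp(-dt / tau_rec)
        R = R * (1.0 - u) * rec + 1.0 - rec
        trace.append((u, R))
    return trace

# illustrative values (our choice), roughly a facilitating synapse
trace = iterate_quantal(f=20.0, U_SE=0.1, tau_facil=0.5, tau_rec=0.3, n_pulses=200)
u_last, R_last = trace[-1]
```

For a constant-rate train, both variables settle exponentially to their steady-state values, mirroring the RC-like convergence described above.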

In this case, an absolute time constant for this settling may be derived, which is likely to depend on the fundamental time constants of the quantal model.

In the following, an explicit expression for the convergence of $u$ toward its steady-state value will be derived. Equation (2), recursively describing the value of $u$ after inter-spike interval (ISI) $\Delta t_n$, can be rewritten as

$$u_{n+1} = (1 - U_{SE})\, e^{-\Delta t_n / \tau_{facil}}\, u_n + U_{SE}. \tag{A.1}$$

Note that all variables are shifted by one ISI compared to the original formulation. To derive an explicit expression, we restrict ourselves to pulse trains with a constant rate $f$, so that $\Delta t_n = \Delta t = 1/f$ for all $n$. Recursively extending (A.1) by one ISI yields

$$u_{n+1} = \left[(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}\right]^2 u_{n-1} + U_{SE}\,(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}} + U_{SE}. \tag{A.2}$$

The further recursion back to the first pulse is obvious from (A.2), resulting in

$$u_{n+1} = \left[(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}\right]^n u_1 + U_{SE} \sum_{k=0}^{n-1} \left[(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}\right]^k. \tag{A.3}$$

Because the term $(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}$ always lies in the interval $(0, 1)$, the geometric series of the second term converges, and its sum can be calculated to yield [22]

$$u_{n+1} = \left[(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}\right]^n u_1 + U_{SE}\, \frac{1 - \left[(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}\right]^n}{1 - (1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}}. \tag{A.4}$$

The limit for $n \to \infty$ is the same as the steady-state value $u_\infty$ calculated in (5). In fact, (A.4) can be rewritten in the following form to make this limit obvious:

$$u_{n+1} = u_\infty + (u_1 - u_\infty) \left[(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}\right]^n. \tag{A.5}$$

The speed of convergence is determined by the term dependent on $n$. To introduce the notion of a time constant, we extend $n$ in (A.5) to a continuous-time variable $t$ that is equal to $n \Delta t$ at the time of pulse $n$, which means $t = n/f$ for a constant pulse rate. At this point in time, equality between the discrete and continuous formulations holds, which we use to reformulate the term dependent on $n$:

$$\left[(1 - U_{SE})\, e^{-\Delta t / \tau_{facil}}\right]^n = e^{n \ln(1 - U_{SE})}\, e^{-n \Delta t / \tau_{facil}} = e^{-t / \tau_u}, \tag{A.6}$$

with the time constant $\tau_u$ describing the speed of convergence following as

$$\frac{1}{\tau_u} = \frac{1}{\tau_{facil}} - f \ln(1 - U_{SE}). \tag{A.7}$$

The time constant $\tau_u$ thus depends on both the time constant of the iteration, $\tau_{facil}$, and the pulse rate $f$. Therefore, (A.5) can be modeled as

$$u(t) = u_\infty + (u_1 - u_\infty)\, e^{-t / \tau_u}. \tag{A.8}$$
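The agreement between the iteration and the continuous-time settling expression can be checked numerically. The sketch below assumes the quantal model of [11] with illustrative parameter values of our choosing; `tau_u` and `u_inf` name the settling time constant and the converged value discussed above:

```python
import math

f, U_SE, tau_facil = 20.0, 0.1, 0.5        # pulse rate [1/s], initial release, facilitation [s]
dt = 1.0 / f

# effective settling time constant: 1/tau_u = 1/tau_facil - f*ln(1 - U_SE)
tau_u = 1.0 / (1.0 / tau_facil - f * math.log(1.0 - U_SE))
# converged value of u for constant rate f
u_inf = U_SE / (1.0 - (1.0 - U_SE) * math.exp(-dt / tau_facil))

def u_closed(n):
    """Closed-form u after n ISIs (t = n*dt), starting from u_1 = U_SE."""
    return u_inf + (U_SE - u_inf) * math.exp(-n * dt / tau_u)

# the iteration and the closed form agree at every pulse time
u = U_SE
for n in range(1, 51):
    u = u * (1.0 - U_SE) * math.exp(-dt / tau_facil) + U_SE
    assert abs(u - u_closed(n)) < 1e-12
```

That the two agree to machine precision reflects the fact that the continuous-time expression passes exactly through the iterated values at the pulse times $t = n/f$.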

An explicit expression for $R$ can be derived in a similar way, starting with an equation analogous to (A.2):

$$R_{n+1} = (1 - u_{n+1})(1 - u_n)\, e^{-2 \Delta t / \tau_{rec}}\, R_{n-1} + (1 - u_{n+1})\, e^{-\Delta t / \tau_{rec}} \left(1 - e^{-\Delta t / \tau_{rec}}\right) + 1 - e^{-\Delta t / \tau_{rec}}. \tag{A.9}$$

This again makes the further recursion back to the first pulse clear:

$$R_{n+1} = e^{-n \Delta t / \tau_{rec}}\, R_1 \prod_{j=2}^{n+1} (1 - u_j) + \left(1 - e^{-\Delta t / \tau_{rec}}\right) \sum_{k=0}^{n-1} e^{-k \Delta t / \tau_{rec}} \prod_{j=n+2-k}^{n+1} (1 - u_j). \tag{A.10}$$

Because of the varying $u_n$, the terms in the product are not constant as in the derivation for $u$, so that the resulting series is not geometric, making a straightforward derivation impossible. Nevertheless, an exponential decay with a fixed time constant, like that for $u$, is the dominant behavior also for $R$, as can be seen from Figure 11. We will therefore use a heuristic approximation to derive an explicit expression for the time constant of $R$. We first insert the explicit expression (A.8) for $u$ and the starting value $R_1$ to derive expression (A.11). The constant factor outside the sum does not influence the convergence speed of $R$. When the product inside the sum is neglected, the expression reduces to the sum of a geometric series, whose time constant may be determined analogously to (A.4) and (A.5). The deviations from the original formulation then have to be corrected by additional or modified terms in the time constant. The exponential function and the term for $u_\infty$ in the numerator of the simplified expression result in corresponding terms in the time constant. By using the approximation $1 - u_\infty \approx e^{-u_\infty}$, the term in the denominator can be transformed to yield an inverse proportional relation between $u_\infty$ and the time constant for $R$.

Now, the product term in (A.11) has to be accounted for. It introduces a quadratic dependency of the time constant for $R$ on the pulse rate $f$, due to the additional exponentiation with $n$. At the same time, the term $1 - u_\infty$ is again transformed into $e^{-u_\infty}$. The constant factor in front of the product results in an additional inverse linear dependency of the time constant on $u_\infty$. One remaining term can be neglected for high pulse rates $f$, but has to be taken into account for low rates; this can be done by combining the constant terms while adding a frequency-dependent correction factor.

With all principal dependencies identified, the resulting time constant $\tau_R$ of the convergence of $R$ follows as given in (A.12).

Analogously to the derivation for $u$, an explicit formulation for $R$ can be stated using the time constant $\tau_R$ defined by (A.12):

$$R(t) = R_\infty + (R_1 - R_\infty)\, e^{-t / \tau_R}. \tag{A.13}$$

Equations (A.8) for $u$ and (A.13) for $R$ were verified against simulations of the iteration formulae (2) and (3) over a wide range of model parameters and pulse rates (see results). Despite the continuous-time generalization and the approximations made in the derivation of $\tau_R$, analytical and simulation results are in good agreement. Figure 11 shows an example of the resulting time courses. The differences between the simulated and analytically derived $u$ are due to the discrete nature of the original iterative equations, which were generalized to continuous time for the analytical formulation. Slightly larger, but still negligible, deviations are visible for $R$; these are due to the approximations made.
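As a minimal numerical cross-check of the converged values (not of the heuristic time constant for $R$ itself), one can iterate the quantal model of [11] for a constant-rate train and compare against the steady-state limits; parameter names and example values are ours:

```python
import math

# illustrative parameters (our choice): 20 Hz train
f, U_SE, tau_facil, tau_rec = 20.0, 0.1, 0.5, 0.3
dt = 1.0 / f

# steady-state limits for constant rate f
u_inf = U_SE / (1.0 - (1.0 - U_SE) * math.exp(-dt / tau_facil))
a = math.exp(-dt / tau_rec)
R_inf = (1.0 - a) / (1.0 - (1.0 - u_inf) * a)

# full iteration of the quantal model (u updated before R at each pulse)
u, R = U_SE, 1.0
for _ in range(400):
    u = u * (1.0 - U_SE) * math.exp(-dt / tau_facil) + U_SE
    R = R * (1.0 - u) * a + 1.0 - a

# both variables have converged to their analytical limits
assert abs(u - u_inf) < 1e-9 and abs(R - R_inf) < 1e-9
```

Checking only the fixed points keeps the test independent of the approximations made in (A.12); the transient deviations are the quantity compared in Figure 11.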

#### Acknowledgment

The authors would like to gratefully acknowledge financial support by the European Union in the framework of the Information Society Technologies program, Biologically Inspired Information Systems branch, project FACETS (No. 15879).

#### References

- C. Koch, *Biophysics of Computation: Information Processing in Single Neurons*, Oxford University Press, New York, NY, USA, 2004.
- A. M. Thomson, “Presynaptic frequency- and pattern-dependent filtering,” *Journal of Computational Neuroscience*, vol. 15, no. 2, pp. 159–202, 2003.
- L. F. Abbott and W. G. Regehr, “Synaptic computation,” *Nature*, vol. 431, no. 7010, pp. 796–803, 2004.
- J. Jenča and J. Pavlásek, “Some implications of the short-term synaptic plasticity for neuronal computation: a model study,” *Biologia*, vol. 62, no. 4, pp. 498–506, 2007.
- A. Farajidavar, S. Saeb, and K. Behbehani, “Incorporating synaptic time-dependent plasticity and dynamic synapse into a computational model of wind-up,” *Neural Networks*, vol. 21, no. 2-3, pp. 241–249, 2008.
- K. M. MacLeod, T. K. Horiuchi, and C. E. Carr, “A role for short-term synaptic facilitation and depression in the processing of intensity information in the auditory brain stem,” *Journal of Neurophysiology*, vol. 97, no. 4, pp. 2863–2874, 2007.
- A. Kepecs, X.-J. Wang, and J. Lisman, “Bursting neurons signal input slope,” *The Journal of Neuroscience*, vol. 22, no. 20, pp. 9053–9062, 2002.
- V. Matveev and X.-J. Wang, “Differential short-term synaptic plasticity and transmission of complex spike trains: to depress or to facilitate?” *Cerebral Cortex*, vol. 10, no. 11, pp. 1143–1153, 2000.
- F. Gabbiani and W. Metzner, “Encoding and processing of sensory information in neuronal spike trains,” *The Journal of Experimental Biology*, vol. 202, no. 10, pp. 1267–1279, 1999.
- T. Natschläger and W. Maass, “Computing the optimally fitted spike train for a synapse,” *Neural Computation*, vol. 13, no. 11, pp. 2477–2494, 2001.
- H. Markram, Y. Wang, and M. Tsodyks, “Differential signaling via the same axon of neocortical pyramidal neurons,” *Proceedings of the National Academy of Sciences of the United States of America*, vol. 95, no. 9, pp. 5323–5328, 1998.
- G. Fuhrmann, I. Segev, H. Markram, and M. Tsodyks, “Coding of temporal information by activity-dependent synapses,” *Journal of Neurophysiology*, vol. 87, no. 1, pp. 140–148, 2002.
- J. J. Torres, J. M. Cortes, J. Marro, and H. J. Kappen, “Competition between synaptic depression and facilitation in attractor neural networks,” *Neural Computation*, vol. 19, no. 10, pp. 2739–2755, 2007.
- B. Sengupta and D. Halliday, “Neuronal dynamics of dynamic synapses,” in *Proceedings of the 27th IEEE-EMBS Annual International Conference of the Engineering in Medicine and Biology Society (EMBS '05)*, vol. 7, pp. 3636–3639, Shanghai, China, September 2005.
- M. A. Xu-Friedman and W. G. Regehr, “Structural contributions to short-term synaptic plasticity,” *Physiological Reviews*, vol. 84, no. 1, pp. 69–85, 2004.
- P. J. Sjöström, E. A. Rancz, A. Roth, and M. Häusser, “Dendritic excitability and synaptic plasticity,” *Physiological Reviews*, vol. 88, no. 2, pp. 769–840, 2008.
- R. Legenstein, C. Naeger, and W. Maass, “What can a neuron learn with spike-timing-dependent plasticity?” *Neural Computation*, vol. 17, no. 11, pp. 2337–2382, 2005.
- F. Wörgötter and B. Porr, “Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms,” *Neural Computation*, vol. 17, no. 2, pp. 245–319, 2005.
- W. Maass, T. Natschläger, and H. Markram, “Real-time computing without stable states: a new framework for neural computation based on perturbations,” *Neural Computation*, vol. 14, no. 11, pp. 2531–2560, 2002.
- K. Holthoff, Y. Kovalchuk, and A. Konnerth, “Dendritic spikes and activity-dependent synaptic plasticity,” *Cell and Tissue Research*, vol. 326, no. 2, pp. 369–377, 2006.
- W. C. Abraham, “Metaplasticity: tuning synapses and networks for plasticity,” *Nature Reviews Neuroscience*, vol. 9, no. 5, pp. 387–399, 2008.
- I. Bronshtein, K. Semendyayev, G. Musiol, and H. Muehlig, *Handbook of Mathematics*, Springer, New York, NY, USA, 1997.