Computational Intelligence and Neuroscience
Volume 2012 (2012), Article ID 968272, 15 pages
Spike-Timing-Dependent Plasticity and Short-Term Plasticity Jointly Control the Excitation of Hebbian Plasticity without Weight Constraints in Neural Networks
1Information Science and Control Engineering, Graduate School of Engineering, Nagaoka University of Technology, 1603-1 Kamitomioka-machi, Nagaoka, Niigata 940-2188, Japan
2Management and Information Systems Science, Faculty of Engineering, Nagaoka University of Technology, 1603-1 Kamitomioka-machi, Nagaoka, Niigata 940-2188, Japan
Received 3 September 2012; Accepted 28 November 2012
Academic Editor: Vince D. Calhoun
Copyright © 2012 Subha Fernando and Koichi Yamada. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Hebbian plasticity precisely describes how synapses increase their synaptic strengths according to the correlated activities between two neurons; however, it fails to explain how these activities dilute the strength of the same synapses. Recent literature has proposed spike-timing-dependent plasticity and short-term plasticity on multiple dynamic stochastic synapses as mechanisms that can control synaptic excitation and remove many user-defined constraints. Under this hypothesis, a network model was implemented that gives more computational power to receptors, with the behavior at a synapse defined by the collective dynamic activities of stochastic receptors. An experiment was conducted to analyze whether spike-timing-dependent plasticity can interplay with short-term plasticity to balance the excitation of Hebbian neurons without weight constraints and, if so, what underlying mechanisms help the neurons maintain such excitation in a computational environment. According to our results, both plasticity mechanisms work together to balance the excitation of the neural network, as our neurons stabilized their weights for Poisson inputs with mean firing rates from 10 Hz to 40 Hz. The behavior generated by the two neurons was similar to the behavior discussed under synaptic redistribution: synaptic weights were stabilized while there was a continuous increase of the presynaptic probability of release and a higher turnover rate of postsynaptic receptors.
Even though Hebbian synaptic plasticity is a powerful concept that explains how correlated activity between presynaptic and postsynaptic neurons increases synaptic strength, its value as a learning postulate has been diminished because it does not adequately explain how synaptic weakening occurs. In a simple mathematical interpretation of the Hebbian learning algorithm, the synaptic strength between two neurons increases if their activity is correlated and decreases otherwise. This interpretation of Hebbian plasticity allows unbounded growth or weakening of the synaptic strength between the two neurons. Even though Hebbian plasticity has been supported by biological experiments on long-term plasticity, it is still not completely understood how Hebbian plasticity can avoid synaptic saturation and bring about the competition between synapses that balances the excitation of Hebbian neurons. Normalization of weights, BCM theory, and spike-timing-dependent plasticity (STDP) are the most biologically significant mathematical mechanisms discussed in the literature to address this issue. Weight normalization has been introduced in either additive or multiplicative mode to scale the synaptic weights and to control the continuous growth or weakening of synaptic strength; however, these user-defined weight constraints significantly affect the dynamic behavior of the applied neural network and limit the performance of learning. BCM theory is another significant approach, which explains synaptic activity as a temporal competition between input patterns: synaptic inputs that drive postsynaptic firing above a threshold rate result in an increase of synaptic strength, while inputs that drive postsynaptic firing below the threshold result in a decrease of synaptic strength.
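The two weight-normalization modes mentioned above can be sketched as follows. This is a minimal Python illustration; the function names and the target-sum convention are ours, not the paper's.

```python
# Hedged sketch of additive (subtractive) versus multiplicative weight
# normalization. Additive mode shifts all weights by the same amount;
# multiplicative mode rescales them proportionally.

def normalize_additive(weights, target_sum):
    """Additive mode: shift every weight equally so they sum to target_sum.
    Preserves differences between weights."""
    shift = (target_sum - sum(weights)) / len(weights)
    return [w + shift for w in weights]

def normalize_multiplicative(weights, target_sum):
    """Multiplicative mode: rescale every weight proportionally.
    Preserves ratios between weights."""
    scale = target_sum / sum(weights)
    return [w * scale for w in weights]
```

Note the different competition each induces: additive normalization lets strong synapses grow at the direct expense of weak ones, whereas multiplicative normalization tends to preserve the relative ordering of weights.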
The BCM approach mainly considers instantaneous postsynaptic firing frequencies for its threshold-updating mechanism instead of spike arrival times at the synapses. Recent literature has recognized STDP as a key mechanism of information processing in the brain. STDP is a form of long-term plasticity that depends merely on the relative timing of presynaptic and postsynaptic action potentials [6, 7]. Although the process and the role of STDP in information passing in some areas of the human brain during developmental stages are still not clear [8, 9], it has been shown that average-case versions of the perceptron convergence theorem hold for STDP in simple models of spiking neurons for both uncorrelated and correlated Poisson input spike trains. It has further been shown that STDP not only changes the weights of synapses but also modulates the initial release probability of dynamic synapses. Moreover, STDP has been tested in a variety of computational environments, especially to balance the excitation of Hebbian neurons by introducing synaptic competition [11–13] and to identify repetitive patterns in continuous spike trains [14, 15]. These experimental studies on synaptic competition using STDP were conducted in two forms: additive and multiplicative. In the additive form, for example as in , synapses competed against each other to control the timing of postsynaptic firing, but this approach assumed that synaptic strength does not scale synaptic efficacy, and hard constraints were used to define the efficacy boundaries. In the multiplicative form, synaptic scaling was introduced separately to the synaptic weight as a function of postsynaptic activity [12, 13]. However, because of the reduced competition between synapses, all synapses stabilized at similar equilibria for strongly correlated spike inputs.
In sum, many STDP-based approaches to controlling the excitation of Hebbian neurons depend on user-defined constraints in the weight algorithm, which ultimately limit the performance of learning. To alleviate the limitation imposed by hard weight constraints on the learning process, another significant approach discussed in the literature removes the correlation in input spike trains by using recurrent neural networks . Their results claim that the correlation in spike inputs can be reduced by recurrent network dynamics. The experiment was conducted on two types of recurrent neural networks: one with purely inhibitory neurons and one with mixed inhibitory-excitatory neurons. At low firing frequencies, response fluctuations were reduced in the recurrent neural network with inhibitory neurons compared to a feed-forward network with inhibitory neurons. Moreover, in the case of homogeneous excitatory and inhibitory subpopulations, negative feedback helped to suppress the population rate in both the recurrent and the feed-forward network. Because inhibitory feedback effectively suppresses pairwise correlations and population rate fluctuations in recurrent neural networks, they suggested using inhibitory neurons to decorrelate the input spikes. Moving one step further by combining the underlying concepts in [17, 18], that is, nonlinear temporally asymmetric Hebbian plasticity and the recent experimental observation of STDP at inhibitory synapses, Luz and Shamir  discussed the stability of Hebbian plasticity in feed-forward networks. Their findings supported the view that temporally asymmetric Hebbian STDP of inhibitory synapses is responsible for balancing transient feed-forward excitation and inhibition. Using STDP rules, the stochastic weights on inhibitory synapses were defined to generate negative feedback and stabilized into a unimodal weight distribution.
The approach was tested on two forms of network structure: a feed-forward inhibitory synaptic population and a feed-forward population of both inhibitory and excitatory synapses. The former structure converged to a uniform solution for correlated input spikes, whereas the latter destabilized and the excitatory synaptic weights segregated according to the correlation structure of the input spike train. Even though learning in the proposed model is more sensitive to the correlation structure in the presence of inhibitory neurons, the stability of the network still needs to be validated when correlation between the excitatory and inhibitory synapses is present.
However, the specifics of a biologically plausible model of plasticity that can account for the observed synaptic patterns have remained elusive. To obtain a biologically plausible model and remove the instability in Hebbian plasticity, many mechanisms have been discussed in recent findings. One remarkable suggestion is to combine STDP with multiple dynamic and stochastic synaptic connections, which enable neurons to contact each other simultaneously through multiple synaptic communication pathways that are highly sensitive to dynamic updates and stochastically adjust their states according to the activity history. Furthermore, the strength of these individual connections between neurons is necessarily a function of the number of synaptic contacts, the probability of neurotransmitter release, and postsynaptic depolarization . These synapses are further capable of adjusting their own probability of neurotransmitter release according to the history of short-term activity [20, 21], which provides an elegant way of introducing activity-dependent modifications to synapses and of generating competition between synapses . Based on this hypothesis, many approaches have been proposed that model the behavior at synapses stochastically [23, 24]; the model we propose here differs from the others because of the computational power granted to the modeled receptors, so that the behavior at a single synapse is determined by the collective activities of these dynamic stochastic receptors. Using this model, an experiment was conducted to answer the following two questions: first, can STDP and short-term plasticity control the excitation of Hebbian neurons in neural networks without weight constraints? Second, if the excitation was controlled, what parameters help STDP achieve such control?
A fully connected neural network was developed with two neurons, in which each neuron consisted of thousands of computational units. These computational units were categorized as transmitters and receptors according to the role they played in the network. A unit was called a transmitter if it transmitted signals to other neurons and a receptor if it received signals into the neuron. The receptors of a given neuron were clustered into receptor groups. According to the excitation and inhibition of the model neuron, these computational units could dynamically update their states from active to inactive or vice versa. Only when a computational unit was in the active state could it successfully transmit signals between neurons. Transmitters of the presynaptic neuron and receptors of the corresponding receptor group of the postsynaptic neuron together simulated the process of a single synapse. A transmitter at a presynaptic neuron can be considered a synaptic vesicle that releases only a single neurotransmitter at a time, and the model receptors can be considered postsynaptic receptors at the synaptic cleft. With these features, the excitation of a neuron at a particular synapse in our network was determined as a function of the number of active transmitters in the presynaptic neuron, the transmitters' release probability, and the number of active receptors in the corresponding receptor group of the postsynaptic neuron. First, in order to analyze how the two-neuron network could balance its excitation when Poisson inputs with mean rates of 10 Hz and 40 Hz were applied, only one neuron was fed by the Poisson inputs while the other neuron was left to adjust itself according to the presynaptic fluctuations. The neurons stabilized their weights for both Poisson inputs, and the weights stabilized in a higher range for the 10 Hz inputs than for the 40 Hz inputs.
The analysis of the internal dynamics of the neurons shows that they behaved similarly to the process discussed under synaptic redistribution, in which long-term plasticity interacts with short-term depression. Further, the neurons played complementary roles to maintain the network's excitation at an operational level. These compensatory roles did not damage the network's biological plausibility, as we could see that the neurons worked as integrators that integrate highly weighted synaptic inputs into lower outputs and vice versa. Finally, the network behavior was evaluated for other Poisson inputs with mean rates in the range of 10 Hz to 40 Hz, and we observed that as the mean rate of the Poisson inputs increases, the immediate postsynaptic neuron increases its synaptic weights, while the immediate presynaptic neuron of those inputs settles into a state complementary to the immediate postsynaptic neuron.
A fully connected network with two neurons was created. Each neuron was attached to thousands of computational units that were either in the active or the inactive state according to the excitation and inhibition of the attached neuron. Units attached to a neuron were classified into two groups based on the role they played for the neuron: a computational unit that transmitted signals from the attached neuron to other neurons was called a transmitter, and a computational unit that received signals into the attached neuron from other neurons was called a receptor. Further, the receptors attached to a neuron were clustered into groups so that transmitters of a presynaptic neuron could contact the postsynaptic neuron simultaneously through multiple synaptic connections. Figure 1 shows the structure of our modeled neuron, with receptor groups and a transmitter set. Moreover, the transmitters of our presynaptic neurons were similar to synaptic vesicles in real neurons, each with a single neurotransmitter. The states, either active or inactive, of these transmitters and receptors were modeled using a two-state stochastic process, as explained in the next section. Only when the units were in active states could they reliably transmit or receive signals to or from other neurons.
The transmitters of a presynaptic neuron contacted the receptors of a particular receptor group of the postsynaptic neuron, forming a synapse between the two neurons; see Figure 2. Through the multiple receptor groups of the postsynaptic neuron, presynaptic transmitters could make multiple synaptic connections simultaneously, forming dynamic and stochastic synapses. As depicted in Figure 2, each receptor group of the postsynaptic neuron and the transmitter set of the presynaptic neuron jointly measured the excitation at the attached synapse and balanced it using a threshold, as discussed in the next section.
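The structure described above can be sketched as a simple data layout: each neuron owns a transmitter set and several receptor groups, and a synapse pairs the presynaptic transmitter set with one postsynaptic receptor group. The class and field names below are our own illustrative assumptions, not the paper's implementation.

```python
import random

# Illustrative data-structure sketch of the two-neuron network: each
# unit is a boolean (True = active); a small fraction starts active.

class Neuron:
    def __init__(self, n_transmitters, n_groups, receptors_per_group,
                 init_active_fraction=0.01):
        def units(n):
            # Each computational unit independently starts active with
            # the given (assumed) probability.
            return [random.random() < init_active_fraction for _ in range(n)]
        self.transmitters = units(n_transmitters)
        self.receptor_groups = [units(receptors_per_group)
                                for _ in range(n_groups)]

    def active_transmitters(self):
        return sum(self.transmitters)

    def active_receptors(self, group_index):
        return sum(self.receptor_groups[group_index])

# Two fully connected neurons; ten receptor groups each, so a
# presynaptic neuron reaches its partner through ten synapses at once.
neuron_a = Neuron(1000, 10, 100)
neuron_b = Neuron(1000, 10, 100)
```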
2.1. Process at Dynamic Stochastic Synapses
In defining the process at dynamic stochastic synapses, we are concerned only with the properties and mechanisms of use-dependent plasticity on time scales from a few milliseconds to several minutes. Therefore, use-dependent activity was introduced into our modeled network using short-term plasticity: facilitation and depletion [25, 26]. When defining the probability of neurotransmitter release at our modeled transmitters, it was assumed that facilitation at biological synapses depends only on the external ions that flow into the synapse after the arrival of an action potential and on the residual ion concentration that the synapse already has, whereas depletion has no influence on ion concentrations and depends merely on the use activity of the synapse. The signal release probability at a transmitter was then adopted from the model proposed in , which determines the signal release probability as a function of the ion influx into the synapse, vesicle depletion, and the signal arrival time at the transmitter. Only the influx of ions after the arrival of neurotransmitters at the receptors of the postsynaptic neuron was considered when determining the states of the receptors.
Let p_S(t) be the probability that a signal is released by a transmitter S at time t. If the train t = (t_1, t_2, ...) consists of the exact signal arrival times at S, then S(t) consists of the sequence of times at which S has successfully released a signal. The map t → S(t) forms a stochastic process with two states, that is, Release (R) and Failure of Release (F). The probability in (1) describes the signal release probability at time t by S as a function of the facilitation C(t) in (2) and the depletion V(t) in (4) at time t:

(1) p_S(t) = 1 − exp(−C(t) · V(t)),
(2) C(t) = C_0 + Σ_{t_i < t} c(t − t_i),
(3) c(s) = α · e^{−s/τ_C},
(4) V(t) = max(0, V_0 − Σ_{t_i ∈ S(t), t_i < t} v(t − t_i)),
(5) v(s) = e^{−s/τ_V}.

C_0 and V_0 are the facilitation and depression constants, respectively. The function c(s) in (3) defines the response of C(t) to a presynaptic signal that reached S at time t − s; α is the magnitude of the response. Similarly, v(s) in (5) models the response of V(t) to the preceding releases of the synapse, and τ_C and τ_V are the time decay constants of facilitation and depression. Maass and Zador  allowed S to release a received signal at time t stochastically, with probability p_S(t). We updated this rule by introducing a threshold θ, so that a transmitter is allowed to release the received signal if p_S(t) > θ, in which case we call the transmitter active. Receptors in the postsynaptic neuron were modeled using the same model of Maass and Zador, except that they are not involved in the process of vesicle depletion; therefore, the states of the receptors were determined by setting the depletion term in (1) to unity. Following the recent biological findings of , the parameters C_0, V_0, τ_C, and τ_V were initialized accordingly.
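The release-probability model of Maass and Zador, including the thresholded activation rule described above, can be sketched as follows. The parameter values here are illustrative assumptions, not the initialization used in the paper.

```python
import math

# Sketch of the dynamic stochastic synapse model: facilitation C(t)
# grows with recent signal arrivals; depletion V(t) shrinks with recent
# releases; release probability p_S(t) = 1 - exp(-C(t) * V(t)).
C0, V0 = 1.0, 1.0            # facilitation and depression constants (assumed)
ALPHA = 0.5                  # magnitude of the facilitation response (assumed)
TAU_C, TAU_V = 0.005, 0.010  # decay constants in seconds (assumed)
THETA = 0.3                  # release threshold, the paper's modification (assumed)

def release_probability(t, arrivals, releases):
    """p_S(t) from facilitation over past arrivals and depletion over
    past successful releases (times in seconds)."""
    C = C0 + sum(ALPHA * math.exp(-(t - ti) / TAU_C)
                 for ti in arrivals if ti < t)
    V = max(0.0, V0 - sum(math.exp(-(t - ti) / TAU_V)
                          for ti in releases if ti < t))
    return 1.0 - math.exp(-C * V)

def transmitter_is_active(t, arrivals, releases, theta=THETA):
    """Thresholded rule: the transmitter releases (is active) if p_S(t) > theta."""
    return release_probability(t, arrivals, releases) > theta
```

A recent arrival raises the release probability (facilitation), while a recent release lowers it (depletion), reproducing the short-term dynamics the section describes.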
A modeled neuron maintained threshold values for each of its receptor groups and for its set of transmitters. Consider the i-th receptor group of a postsynaptic neuron that contacts the transmitters of the presynaptic neuron, together with its output and its threshold value at time step t; similarly, consider the transmitters of a neuron with their output and threshold value at time step t. The threshold value of a receptor group was defined as in (6): it increased exponentially as the activity from the presynaptic transmitters toward the group increased (and decreased when that activity decreased). The threshold value for the transmitters of a neuron was defined in (7) as a function of the total synaptic input arriving over all of the neuron's synaptic connections relative to the total output of the neuron. The threshold values of both neurons were updated every 60 time steps. The quantities entering (6) and (7) are the number of receptor groups a neuron has, the number of components in each group, and the number of active components in the group at time step t.
Moreover, signals were propagated between neurons according to the following predefined behavioral rule.
Rule 1. When a receptor receives a signal from the corresponding presynaptic neuron at time step t, the signal is propagated within the network according to the following conditions.
Condition 1. Once a received signal is applied to a receptor, if the receptor is updated to the inactive state, then the received signal is inactivated; otherwise, the signal is propagated to a randomly selected transmitter of the same neuron.
Condition 2. Once a transmitter of a particular neuron receives a signal at time step t, the signal is transmitted to a randomly selected receptor of a randomly selected receptor group of the postsynaptic neuron if the updated state of the transmitter is active; otherwise, the received signal is inactivated.
The above behavioral rule defines the underlying mechanism of signal transmission between the presynaptic and postsynaptic neurons; that is, a signal is successfully transmitted only when the related computational units of both neurons are active. Therefore, the number of active receptors in a receptor group of the postsynaptic neuron and the number of active transmitters in the presynaptic neuron jointly define the efficacy of a given synapse. In addition to these short-term plasticity and homeostatic synaptic plasticity [28, 29] adjustments (it was shown that, under similar conditions, the neurons behaved like Hebbian neurons  and that the defined threshold mechanism functioned as a homeostatic synaptic plasticity process; see ), our dynamic stochastic synapses are subject to long-term plasticity induced by STDP, as discussed next.
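Rule 1 and its two conditions can be sketched as two small propagation steps: a signal survives a hop only if the unit it lands on is active. The class shape below is our own illustration.

```python
import random

# Minimal sketch of Rule 1: signal propagation through receptors
# (Condition 1) and transmitters (Condition 2).

class Unit:
    def __init__(self, active=False):
        self.active = active

def propagate_through_receptor(receptor, transmitters):
    """Condition 1: an inactive receptor inactivates the signal;
    otherwise the signal moves to a random transmitter of the same neuron."""
    if not receptor.active:
        return None                        # signal inactivated
    return random.choice(transmitters)     # forwarded within the neuron

def propagate_through_transmitter(transmitter, receptor_groups):
    """Condition 2: an active transmitter forwards the signal to a random
    receptor of a random receptor group of the postsynaptic neuron."""
    if not transmitter.active:
        return None                        # signal inactivated
    group = random.choice(receptor_groups)
    return random.choice(group)
```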
2.2. Binning the Process at Synapses
The process at each synapse, where transmitters of the presynaptic neuron contacted the receptors of a particular receptor group of the postsynaptic neuron, was binned to analyze the synapse's excitation. A bin is an array of seven columns that stores data of a given synapse over seven successive time steps. A single cell of a bin contains the data at one time step, namely, the number of active transmitters in the presynaptic neuron, the number of active transmitters in the postsynaptic neuron, the number of active receptors in the corresponding receptor group of the postsynaptic neuron, and the mean release probability of the transmitters in the presynaptic neuron. The time gap between two consecutive cells of a bin is set to 5 ms, as in (8).
This allowed us to define the time represented by each cell in a bin relative to its first cell, as in (9); see Figure 3. This arrangement of the bin was necessary in our model to satisfy the condition on the membrane time constants for potentiation and depression (discussed later):
Let X_1, …, X_7 be random variables for the number of active transmitters in the presynaptic neuron at the seven successive time steps of a bin; similarly, let Y_1, …, Y_7 be random variables for the number of active transmitters in the postsynaptic neuron, and let Z_1, …, Z_7 be random variables for the number of active receptors in the receptor group that corresponds to the synapse in the j-th bin. Since the activity of the presynaptic transmitters and the receptors in the receptor group is not independent, we defined the mean and the variance of the j-th bin on the synapse as in (10) and (11), in terms of the means and variances of the transmitter and receptor counts. These means and variances were estimated using maximum likelihood estimators, so that (10) can be written as in (12) using the sample means, and (11) can be written as in (13) using the sample variances. The covariance of the two counts is defined in (14):
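Because the two counts are dependent, the bin variance carries a covariance term. The exact estimators in (10)–(14) are not fully recoverable here, so the sketch below shows the standard sample mean, variance, and covariance of a sum of two dependent counts over one seven-cell bin; the function names are ours.

```python
# Sketch of bin-level statistics: mean and variance of the summed
# activity of presynaptic transmitters and postsynaptic receptors,
# with Var(X + Z) = Var(X) + Var(Z) + 2*Cov(X, Z).

def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_cov(xs, ys):
    """Unbiased sample covariance; sample_cov(x, x) is the sample variance."""
    mx, my = sample_mean(xs), sample_mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def bin_stats(pre_counts, rec_counts):
    """Mean and variance of the combined activity over one seven-cell bin."""
    assert len(pre_counts) == len(rec_counts) == 7
    mean = sample_mean(pre_counts) + sample_mean(rec_counts)
    var = (sample_cov(pre_counts, pre_counts)
           + sample_cov(rec_counts, rec_counts)
           + 2 * sample_cov(pre_counts, rec_counts))
    return mean, var
```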
The mean release probability of the presynaptic transmitters within a given bin can be defined as in (15), given the mean release probability of the transmitters in the presynaptic neuron at each time step of the bin:
2.3. Defining Synapse’s Activity Using Bins’ Activity
STDP is a form of long-term modification of synaptic strength that depends on the relative arrival timing of action potentials between the presynaptic and postsynaptic neurons  and can be described by the weight window function defined in (16). This weight function defines how the strength between the two neurons is adjusted for a single pair of action potentials within the time window. As defined in (16), if the presynaptic action potential occurs before the postsynaptic action potential, the synaptic strength is increased, which is called long-term potentiation. Conversely, if the postsynaptic action potential occurs before the presynaptic action potential, the synaptic strength is weakened, which is called long-term depression:
Here τ+ and τ− are the membrane time constants of long-term potentiation and long-term depression. Their values need to satisfy the condition that the integral of the weight window is negative, as this is required to generate stable synaptic strengths based on STDP . Furthermore, recent biological observations  have estimated τ+ and τ− to be roughly equal; thus, in order to generate stable synaptic strengths, the depression amplitude is required to exceed the potentiation amplitude. In our model, the weight window function was applied at the bin level at each synapse in order to apply long-term modifications to the neuron. Let the highest number of active transmitters recorded from the presynaptic neuron during a bin, and the cell at which it occurred, be defined as in (17); similarly, let the highest number of active transmitters recorded from the postsynaptic neuron during the bin, and its cell, be defined as in (18). The STDP weight window function was then applied at the bin level by treating the presynaptic maximum as an action potential occurring in the presynaptic neuron during the bin, which could significantly update the synaptic strength presynaptically at the corresponding synapse, and the postsynaptic maximum as an action potential occurring in the postsynaptic neuron during the bin, which could significantly update the synaptic strength postsynaptically at the same synapse. Here we assumed that, within the duration of a bin, only the highest hitter of that bin can significantly update the synaptic strength. Subsequently, the two cells were mapped to spike times via (9). Therefore, if the postsynaptic hitter occurs after the presynaptic hitter, it leads to potentiation, and if the postsynaptic hitter precedes the presynaptic hitter, it depresses the synapse during the given bin:
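The standard exponential STDP window referenced in (16) can be sketched as follows. The amplitudes and time constants are illustrative assumptions chosen so that the window's integral is negative, as the stability condition above requires.

```python
import math

# Sketch of the STDP weight window W(dt), dt = t_post - t_pre.
# A_MINUS * TAU_MINUS > A_PLUS * TAU_PLUS makes the integral negative,
# which is the stability condition stated in the text. Values assumed.
A_PLUS, A_MINUS = 1.0, 1.05
TAU_PLUS, TAU_MINUS = 0.020, 0.020  # seconds

def stdp_window(dt):
    """Positive dt (pre before post) potentiates; negative dt depresses."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)
```

In the paper's scheme, dt would be the time difference between the presynaptic and postsynaptic "highest hitter" cells of a bin, obtained via (9).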
2.4. Mean and Variance of a Synapse
Learning based on STDP was implemented on the synapses under the assumptions that the bins of a given synapse are mutually independent and that the impact each bin makes on the synapse sums linearly. The mean and variance of a synapse can then be defined as in (19) and (20) after the j-th bin has interacted with the synapse. The mean and the variance of the synapse were estimated using maximum likelihood estimators, as shown in (21) and (22), respectively. Further, the total mean release probability at the synapse was defined using the bins' mean release probabilities, as in (23):
In order to generate action potentials, real neurons need to be in a nonquiescent state; if a neuron is in a quiescent state, it cannot generate action potentials that change the synaptic strength significantly. Therefore, STDP was applied to a synapse only if the model presynaptic and postsynaptic neurons were both in nonquiescent states. We defined a neuron to be nonquiescent during a bin when the average output it produced during that bin was greater than the average output it had produced so far; that is, if the overall mean number of active transmitters of a particular neuron was less than the mean number of active transmitters during the j-th bin, the neuron was recognized as nonquiescent at bin j. When this held for both neurons, the weight of the synapse was updated at bin j, as discussed next.
2.5. Learning Based on STDP and Release Probability
According to the model proposed in , the amplitude of the excitatory postsynaptic current of the i-th spike in a spike train is proportional to the weight of the synapse and to the release probability at the i-th spike. In our approach, the corresponding amplitude is proportional to the impact made on the synapse during a bin by the transmitters of the presynaptic neuron and the receptors of the corresponding receptor group of the postsynaptic neuron. Applying the model proposed in  to the j-th bin instead of the i-th spike, we can express this as in (24). Moreover, biological evidence supports the fact that the amount of weight change also depends on the initial synaptic size : depression is independent of the synaptic strength, whereas strong synapses are less potentiated than weak synapses. By assuming an inverse relationship between the initial synaptic strength and the amount of potentiation, the potentiation during the j-th bin at a synapse can be expressed as in (25):
Combining (16), (24), and (25), the amount of weight update during the j-th bin at a synapse can be defined as in (26), and the synaptic weight at the end of the bin is determined as in (27), where the potentiation and depression learning rates appear as constants . The amplitude during the j-th bin was estimated as the ratio of the deviation made by the bin relative to its mean to the deviation the synapse had made so far relative to its overall mean; statistically, the amplitude during the j-th bin can be expressed as the ratio of the coefficient of variation of the j-th bin to the coefficient of variation of the synapse, as given in (28). The release probability during the j-th bin was determined as the ratio of the mean release probability during the j-th bin to the total mean release probability of the synapse, as in (29). The median of the weight distribution at the synapse was taken as an estimator of the initial synaptic strength, as in (30), simply because the median approximates the center of the weight distribution better than the mean: the median is not affected by outliers, whereas the mean is:
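The ingredients of the bin-level update in (24)–(30) can be sketched as follows. The exact functional composition in the paper is not fully recoverable, so this hedged sketch simply multiplies the stated factors: an STDP window value, the coefficient-of-variation ratio of (28), the release-probability ratio of (29), and potentiation scaled inversely by the median weight of (30). All names and the learning-rate values are ours.

```python
import statistics

# Hedged sketch of the bin-level STDP weight update. Depression is left
# independent of the current weight; potentiation is divided by the
# median of the weight history (the initial-strength estimator in (30)).

ETA_POT, ETA_DEP = 0.01, 0.01  # learning rates (assumed values)

def bin_weight_update(window_value, bin_mean, bin_std, syn_mean, syn_std,
                      bin_release_p, syn_release_p, weight_history):
    amplitude = (bin_std / bin_mean) / (syn_std / syn_mean)  # CV ratio, cf. (28)
    p_ratio = bin_release_p / syn_release_p                  # cf. (29)
    w0 = statistics.median(weight_history)                   # cf. (30)
    if window_value >= 0:
        # potentiation, scaled inversely with the initial strength
        return ETA_POT * window_value * amplitude * p_ratio / w0
    # depression, independent of the initial strength
    return ETA_DEP * window_value * amplitude * p_ratio
```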
3. Balancing the Excitation of the Network
An experiment was arranged to test whether STDP and short-term plasticity together can balance the excitation of a network of two Hebbian neurons without defining any constraints in the weight-learning algorithm. A fully connected network with two neurons, say neuron A and neuron B, was developed; each neuron had ten receptor groups, so a presynaptic neuron contacted the postsynaptic neuron through ten dynamic stochastic synapses simultaneously. Both neurons had equal numbers of transmitters and receptors, and the receptors attached to each neuron were uniformly distributed among its receptor groups. At the onset, one percent of the transmitters and one percent of the receptors in each receptor group were set to the active state. Poisson inputs with mean firing rates of 10 Hz and 40 Hz were applied simultaneously to all the receptor groups of neuron A, while neuron B was given enough room to adjust itself according to the feedback of neuron A; see Figure 4. Each input was applied continuously to neuron A for around two hours, and the behavior of the network was analyzed after the synaptic connections had established the effect of the altered activity and the network activity had developed. The inputs were fed to the system according to the following rule: the generated Poisson distribution was converted to a byte stream in which a generated value greater than the median of the Poisson distribution represented the value 1 and any other value represented 0. A signal was generated and fed to neuron A only when the represented value was equal to 1.
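The input-conversion rule above can be sketched as follows: sample a Poisson value per time step and emit a signal (1) only when the value exceeds the median of the generated values. The 5 ms step and the Knuth-style sampler are our own assumptions.

```python
import math
import random
import statistics

# Sketch of the Poisson-to-binary input rule used to drive neuron A.

def poisson_sample(lam):
    """Knuth's algorithm, adequate for small lambda."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def poisson_input_stream(mean_rate_hz, n_steps, dt=0.005):
    """Return a 0/1 signal stream: 1 where the sampled Poisson value
    exceeds the median of the generated distribution."""
    lam = mean_rate_hz * dt  # expected events per (assumed 5 ms) step
    samples = [poisson_sample(lam) for _ in range(n_steps)]
    med = statistics.median(samples)
    return [1 if s > med else 0 for s in samples]
```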
Figures 5 and 6 show the distributions of weights of both neurons A and B for Poisson inputs with mean rates of 10 Hz and 40 Hz. As shown in these figures, the weight distributions of both neurons at each synapse stabilized after around 175 bins. After the weight distributions had stabilized, the medians of the weight distributions were calculated; these median values are shown in Figure 7. As shown in the figure, for Poisson inputs with the low mean rate, that is, 10 Hz, the medians of all the synapses of the postsynaptic neurons reached higher values than when Poisson inputs with the higher mean rate, that is, 40 Hz, were applied. The network balanced its excitation by pushing the synaptic weights toward higher values for inputs with a low mean rate and pulling them down to lower values for inputs with a higher mean rate. This dynamic behavior of both neurons is a necessary adjustment to balance the neurons' excitation, and subsequently the network excitation, while adjusting to external manipulations.
Next, we were interested in what makes a neuron stabilize its activity without becoming overexcited or overdepressed in a network that has no controlling constraints. To understand this, we analyzed the internal behaviors of neurons A and B in terms of their mean release probability (the mean of the release probabilities of the transmitters attached to the neuron) and the coefficient of variation (CV, the ratio of standard deviation to mean in (28)), which measures a given synapse's excitation in terms of the number of active transmitters in the presynaptic neuron and the number of active receptors in the corresponding receptor group of the synapse. The value of the CV thus shows the extent of variability of the given synapse relative to the synapse's mean and effectively portrays the synapse's internal dynamics: a higher CV value implies higher internal fluctuation and a larger deviation from the synaptic mean. Figure 5 shows the mean release probability of the presynaptic neurons at 10 Hz, while Figure 8 depicts what happens inside the synapses in terms of the CV at 10 Hz. As shown in these figures, the neuron that developed the higher synaptic weights maintained a higher CV than the other neuron in the network. Notably, the neuron with the higher synaptic weights also produced a lower mean release probability. For example, as shown in Figure 7, the synapses onto one postsynaptic neuron scored higher synaptic weights at 10 Hz than the synaptic weights of the other postsynaptic neuron, and that neuron maintained a higher CV at all its synapses than the CVs of the synapses of the other postsynaptic neuron. In contrast, the same neuron, acting as a presynaptic neuron, maintained a lower mean release probability at all its synapses for the 10 Hz Poisson inputs than the other neuron. These opposing and balancing behaviors of the two neurons are consistent for Poisson inputs with a mean firing rate of 40 Hz, as shown in Figures 6 and 9.
Moreover, if the difference between the CV values of the two neurons is considered, Figures 8 and 9 clearly show that this difference was reduced to about 0.0001 after the two neurons had adjusted to the external input and stabilized. Most importantly, however, even though the activity of the two neurons stabilized in terms of synaptic weights and CV, the mean release probabilities never reached a stable value but instead kept drifting, either continuously increasing or continuously decreasing. The positive correlation between synaptic weights and CV, and the negative correlation between synaptic weights and mean release probability at the same neuron, show that a neuron can act as an integrator: it integrates the excited synaptic weights, controls the excitation via higher CV fluctuations, and produces a balanced output that helps balance the network activity. This reduced excitation in the output flow allowed the other neuron to play a compensatory role and balance the network activity.
Finally, we wanted to understand the behavior of the network for Poisson inputs in the range between 10 Hz and 40 Hz. Poisson inputs with mean rates of 15 Hz, 20 Hz, 25 Hz, 30 Hz, and 35 Hz were also presented to the receptor groups of the first neuron, and the behavior of both neurons on the same network was studied. Figure 10 shows the average of the medians of the synapses of each neuron. When the magnitude of the mean rate of the Poisson inputs is greater than the STDP potentiation and depression time constants, the medians of the stabilized synaptic weights of the neuron that is the immediate postsynaptic neuron of the external inputs continuously increased as the mean rate of the Poisson inputs increased. Again, compensatory behavior from the other neuron could be seen, as it generally decreased the medians of its stabilized synaptic weights as the mean rate increased. These complementary behaviors of the two neurons appear necessary to stabilize the overall network activity. Outside this regime, both neurons worked together to control the overall excitement of the network. Intriguingly, when the mean rate lay near both the potentiation and the depression time constants, the excitation of the entire network was equally balanced between the two neurons, as the average values of the medians of their stabilized synaptic weights became almost equal. This might be an effect of the values we selected for the STDP potentiation and depression time constants. This is an important observation: it suggests that postsynaptic neurons can be excited and stabilized to the same level as the presynaptic neuron if the STDP time constants are highly correlated with the mean rate of the applied Poisson inputs. Therefore, STDP with different time constants for potentiation and depression might be a good way to scale external inputs down to the neuronal level effectively.
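The temporally asymmetric STDP window with separate potentiation and depression time constants, as discussed above, can be sketched as follows (the amplitudes and time constants here are illustrative placeholders, not the values used in the paper):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012,
            tau_plus_ms=20.0, tau_minus_ms=20.0):
    """Temporally asymmetric STDP: potentiate when the presynaptic spike
    precedes the postsynaptic spike (dt_ms > 0), depress otherwise.
    The change decays exponentially with the spike-time difference."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus_ms)
    return -a_minus * math.exp(dt_ms / tau_minus_ms)

print(stdp_dw(+10.0) > 0)   # pre-before-post: potentiation
print(stdp_dw(-10.0) < 0)   # post-before-pre: depression
```

Choosing `tau_plus_ms` and `tau_minus_ms` independently gives the asymmetric window the text suggests as a way to match the STDP time scale to the mean rate of the external inputs.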
According to the literature, a synapse can be strengthened either presynaptically, by increasing the probability of transmitter release, or postsynaptically, by increasing the number of active receptors. This general functionality at the synapse can be altered by the interplay between long-term plasticity and short-term dynamics, especially short-term depression. Short-term depression is mainly based on vesicle depletion, the use-dependent reduction of neurotransmitter release from the readily releasable pool. The interaction of long-term plasticity with short-term depression is called synaptic redistribution. The role of synaptic redistribution has not yet been clearly identified; however, it allows the presynaptic neuron to increase its probability of release and thereby increase signal transmission between the two neurons. In our network, the two neurons exhibited behavior similar to the effect of synaptic redistribution. The immediate postsynaptic neuron of the external inputs scored higher synaptic weights than the other neuron; that is, the synapses where the transmitters from its immediate presynaptic neuron contact its receptor groups scored higher weights than those of the other neuron. In this process the presynaptic neuron maintained a higher mean release probability. Therefore, first, the synaptic weights of this neuron were increased presynaptically by increasing the probability of neurotransmitter release. Second, the analysis of the CV of these synapses shows that it lies in a higher range than the CV of the other neuron, confirming that synaptic weights can also be increased by a higher turnover rate of the active receptor component of the postsynaptic neuron. This behavior of the postsynaptic neuron is also supported for Poisson inputs with a mean firing rate of 40 Hz.
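The vesicle-depletion mechanism behind short-term depression can be sketched with a simple depleting-resource model of the kind used in the short-term-plasticity literature (this is a generic sketch, not the paper's own synapse model; `U` and `tau_rec_ms` are illustrative parameters):

```python
import math

def depressing_synapse(spike_times_ms, U=0.5, tau_rec_ms=800.0):
    """Vesicle-depletion model of short-term depression: each spike
    releases a fraction U of the available resources x; the released
    resources then recover exponentially with time constant tau_rec_ms."""
    x, last_t, releases = 1.0, None, []
    for t in spike_times_ms:
        if last_t is not None:
            # partial recovery since the previous spike
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec_ms)
        release = U * x      # effective release for this spike
        x -= release         # depletion of the readily releasable pool
        releases.append(release)
        last_t = t
    return releases

r = depressing_synapse([0, 20, 40, 60])      # high-frequency train
print(all(a > b for a, b in zip(r, r[1:])))  # successive releases shrink
```

Under a high-frequency train, successive releases shrink because the pool is drained faster than it recovers, which is exactly the use-dependent reduction of release described above.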
Intriguingly, when the behavior of the postsynaptic neuron at 40 Hz was analyzed after it had adjusted to the external inputs and stabilized, we observed higher fluctuations of CV and comparatively smaller synaptic weights than for the same neuron at Poisson inputs with a mean rate of 10 Hz. This suggests that synaptic redistribution can increase synaptic weights at steady state for Poisson inputs with a low mean rate but not for those with a higher mean rate; at higher mean rates, only a higher turnover rate of active receptor components remains. Further, the STDP potentiation and depression time constants had a strong impact on the behavior of the two neurons: they controlled the level of excitation of each neuron equally when the magnitude of the mean rate lay near the magnitude of the STDP time constants. As the mean rate of the Poisson inputs increased, the two neurons took on complementary roles.
STDP successfully interplayed with short-term plasticity to control the excitation or inhibition of the neural network in response to external adjustments. Notably, these adjustments are consistent and also biologically plausible. Stabilizing synaptic weights at an operational level without controlling constraints appears possible if STDP, as long-term plasticity, interacts with short-term dynamics. The dynamic behavior of short-term activity is necessary to propagate and balance the excitation of the neural network without damaging the synaptic weight distribution, much as CV and the probability of release interacted with STDP to balance excitation here. Compared with the findings of Luz and Shamir, instead of using inhibitory neurons specifically to generate negative feedback that stabilizes the excitation and inhibition of the network, we used basic plasticity mechanisms observed in biology to regulate the network's excitation and inhibition. Even though the two approaches use different derivatives of temporally asymmetric STDP to implement the stochastic response of neurons, both have again demonstrated that the network excitation arising from Hebbian plasticity can be stabilized using STDP. However, our approach differs from their mechanism through its integration of the sensitivity of the release probability and the turnover rate of the active components attached to a synapse. Instead of inhibition imposed on the network by an inhibitory neuron, whose negative feedback counteracts the excitation generated by correlated spikes, our mechanism absorbed high-firing-frequency excitation, or overcame low-firing-frequency inhibition, through an appropriate turnover rate of the active components attached to a given synapse and by adjusting the release probabilities of the attached active transmitters.
The mechanism underlying this manipulation of excitation and inhibition is similar to the synaptic redistribution discussed in biology, which moves our approach further towards biological plausibility. However, both systems still need to be evaluated on larger networks, and the approach of Luz and Shamir needs to be tested when correlation between inhibitory and excitatory synapses is present. Moreover, we compared our findings on how excitation was balanced against an earlier model. In that model, excitation was balanced by introducing synaptic competition, in which synapses competed against each other to control postsynaptic firing times; this competition was introduced by scaling the synaptic efficacy using hard boundary conditions. That model balanced the excitation of 10 Hz and 40 Hz Poisson inputs so that, for 10 Hz, more synapses approached the upper limit of synaptic efficacy, whereas for higher-rate inputs more synapses remained near the lower limit. However, once that system reached stability, it was hardly disturbed by the presynaptic firing frequency; the stability it reached is therefore moderately stronger than in our case. Although our model also exhibited similar characteristics for 10 Hz and 40 Hz Poisson inputs, no boundary conditions were defined to achieve this stability. Furthermore, in contrast to their moderately strong equilibrium expressed in terms of synaptic efficacy, the internal dynamics of our neurons continuously fluctuated around the equilibrium, allowing the neurons to remain dynamically active even at equilibrium, as in many natural systems.
The model proposed in this research is a computational model for investigating the internal dynamics of neural networks when STDP, Hebbian plasticity, and short-term plasticity interact with each other. The model has a few drawbacks. First, our neural network needed around 150 bins to adjust to external modifications, mainly because we chose the median of the weight distribution as the amount of synaptic potentiation applied in response to a pair of presynaptic and postsynaptic spikes (in (30)). This statistic is not very sensitive to sudden changes occurring in the tail of the distribution until those changes become visible through many elements of the distribution. On the other hand, it effectively summarizes the distribution by the range in which most of its elements lie. The mean could also serve as such an indicator, but it is very sensitive to sudden changes and easily forgets the history of the distribution. The median is therefore better than the mean, but it remains necessary to find an unbiased statistical quantifier of the potentiation applied in response to a presynaptic-postsynaptic spike pair, one that represents both the history of the weight distribution and sudden changes to it. The other main drawback of our approach is the use of bins to chunk the processes at the synapses; the bin size may be a constraint that limits the performance of STDP on short-term dynamics. Nevertheless, the model proposed in this research successfully balanced the synaptic excitation of the two neurons at an operational level without compromising their biological plausibility.
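The trade-off between median and mean described above is easy to demonstrate: a single outlier in the tail of a (hypothetical) weight distribution barely moves the median but shifts the mean substantially.

```python
import statistics

# hypothetical stabilized synaptic weights
weights = [0.2, 0.21, 0.19, 0.2, 0.22, 0.18, 0.2, 0.21]
base_median = statistics.median(weights)
base_mean = statistics.mean(weights)

# a sudden change in the tail of the distribution: one outlier weight
perturbed = weights + [0.9]
print(abs(statistics.median(perturbed) - base_median))  # barely moves
print(abs(statistics.mean(perturbed) - base_mean))      # jumps
```

This robustness is why the median tracks the bulk of the distribution well but reacts slowly to tail changes, which is exactly the latency the model exhibits.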
- D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory, John Wiley & Sons, New York, NY, USA, 1949.
- K. D. Miller and D. J. C. MacKay, “The role of constraints in Hebbian learning,” Neural Computation, vol. 6, pp. 100–126, 1994.
- E. L. Bienenstock, L. N. Cooper, and P. W. Munro, “Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex,” Journal of Neuroscience, vol. 2, no. 1, pp. 32–48, 1982.
- L. F. Abbott and W. Gerstner, “Homeostasis and learning through spike-timing dependent plasticity,” presented at the Summer School in Neurophysics, Les Houches, France, July 2004.
- G. J. Goodhill and H. G. Barrow, “The role of weight normalization in competitive learning,” Neural Computation, vol. 6, no. 2, pp. 255–269, 1993.
- G. Q. Bi and M. M. Poo, “Synaptic modification by correlated activity: Hebb's postulate revisited,” Annual Review of Neuroscience, vol. 24, pp. 139–166, 2001.
- L. F. Abbott and S. B. Nelson, “Synaptic plasticity: taming the beast,” Nature Neuroscience, vol. 3, pp. 1178–1183, 2000.
- J. Lisman and N. Spruston, “Postsynaptic depolarization requirements for LTP and LTD: a critique of spike timing-dependent plasticity,” Nature Neuroscience, vol. 8, no. 7, pp. 839–841, 2005.
- D. A. Butts and P. O. Kanold, “The applicability of spike dependent plasticity to development,” Frontiers in Synaptic Neuroscience, vol. 2, p. 30, 2010.
- R. Legenstein, C. Naeger, and W. Maass, “What can a neuron learn with spike-timing-dependent plasticity?” Neural Computation, vol. 17, no. 11, pp. 2337–2382, 2005.
- S. Song, K. D. Miller, and L. F. Abbott, “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity,” Nature Neuroscience, vol. 3, no. 9, pp. 919–926, 2000.
- M. C. W. van Rossum, G. Q. Bi, and G. G. Turrigiano, “Stable Hebbian learning from spike timing-dependent plasticity,” Journal of Neuroscience, vol. 20, no. 23, pp. 8812–8821, 2000.
- M. C. W. van Rossum and G. G. Turrigiano, “Correlation based learning from spike timing dependent plasticity,” Neurocomputing, vol. 38-40, pp. 409–415, 2001.
- T. Masquelier, R. Guyonneau, and S. J. Thorpe, “Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains,” PLoS ONE, vol. 3, no. 1, Article ID e1377, 2008.
- F. Henry, E. Daucé, and H. Soula, “Temporal pattern identification using spike-timing dependent plasticity,” Neurocomputing, vol. 70, no. 10–12, pp. 2009–2016, 2007.
- T. Tetzlaff, M. Helias, G. T. Einevoll, and M. Diesmann, “Decorrelation of neural-network activity by inhibitory feedback,” PLoS Computational Biology, vol. 8, no. 8, Article ID e1002596, 2012.
- R. Gütig, R. Aharonov, S. Rotter, and H. Sompolinsky, “Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity,” Journal of Neuroscience, vol. 23, no. 9, pp. 3697–3714, 2003.
- J. S. Haas, T. Nowotny, and H. D. I. Abarbanel, “Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex,” Journal of Neurophysiology, vol. 96, no. 6, pp. 3305–3313, 2006.
- Y. Luz and M. Shamir, “Balancing feed-forward excitation and inhibition via Hebbian inhibitory synaptic plasticity,” PLoS Computational Biology, vol. 8, no. 1, Article ID e1002334, 2012.
- T. Branco and K. Staras, “The probability of neurotransmitter release: variability and feedback control at single synapses,” Nature Reviews Neuroscience, vol. 10, no. 5, pp. 373–383, 2009.
- T. Branco, K. Staras, K. J. Darcy, and Y. Goda, “Local dendritic activity sets release probability at hippocampal synapses,” Neuron, vol. 59, no. 3, pp. 475–485, 2008.
- R. S. Zucker and W. G. Regehr, “Short-term synaptic plasticity,” Annual Review of Physiology, vol. 64, pp. 355–405, 2002.
- P. A. Appleby and T. Elliott, “Multispike interactions in a stochastic model of spike-timing-dependent plasticity,” Neural Computation, vol. 19, no. 5, pp. 1362–1399, 2007.
- H. S. Seung, “Learning in spiking neural networks by reinforcement of stochastic synaptic transmission,” Neuron, vol. 40, no. 6, pp. 1063–1073, 2003.
- A. M. Thomson, “Facilitation, augmentation and potentiation at central synapses,” Trends in Neurosciences, vol. 23, no. 7, pp. 305–312, 2000.
- L. F. Abbott and W. G. Regehr, “Synaptic computation,” Nature, vol. 431, no. 7010, pp. 796–803, 2004.
- W. Maass and A. M. Zador, “Dynamic stochastic synapses as computational units,” Neural Computation, vol. 11, no. 4, pp. 903–917, 1999.
- G. G. Turrigiano, “Homeostatic plasticity in neuronal networks: the more things change, the more they stay the same,” Trends in Neurosciences, vol. 22, no. 5, pp. 221–227, 1999.
- G. G. Turrigiano and S. B. Nelson, “Homeostatic plasticity in the developing nervous system,” Nature Reviews Neuroscience, vol. 5, no. 2, pp. 97–107, 2004.
- S. D. Fernando, K. Yamada, and A. Marasinghe, “Observed Stent's anti-Hebbian postulate on dynamic stochastic computational synapses,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '11), pp. 1336–1343, San Jose, Calif, USA, 2011.
- S. Fernando, K. Yamada, and A. Marasinghe, “New threshold updating mechanism to stabilize activity of Hebbian neuron in a dynamic stochastic ‘multiple synaptic’ network, similar to homeostatic synaptic plasticity process,” International Journal of Computer Applications, vol. 36, no. 3, pp. 29–37, 2011.
- G. Q. Bi and M. M. Poo, “Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type,” Journal of Neuroscience, vol. 18, no. 24, pp. 10464–10472, 1998.
- H. Markram, Y. Wang, and M. Tsodyks, “Differential signaling via the same axon of neocortical pyramidal neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 95, no. 9, pp. 5323–5328, 1998.
- G. G. Turrigiano, K. R. Leslie, N. S. Desai, L. C. Rutherford, and S. B. Nelson, “Activity-dependent scaling of quantal amplitude in neocortical neurons,” Nature, vol. 391, no. 6670, pp. 892–896, 1998.