Special Issue: Mathematical Modeling and Analysis of Soft Computing
Dynamics of Moment Neuronal Networks with Intra- and Inter-Interactions
A framework of moment neuronal networks with intra- and inter-interactions is presented, showing how spontaneous activity is propagated across homogeneous and heterogeneous networks. The input-output firing relationship and the stability are first explored for a homogeneous network. For a heterogeneous network without the constraint of the correlation coefficients between neurons, a more sophisticated dynamics is then explored. With random interactions, the network gets synchronized easily. However, desynchronization is produced by a lateral interaction such as the Mexican hat function. It is the external intralayer input unit that offers more sophisticated and unexpected dynamics than its predecessors. Hence, this work further opens up the possibility of carrying out stochastic computation in neuronal networks.
The theory of moment neuronal networks (MNN) was developed recently; it takes into account both the first- and second-order statistics of spike trains, generalizing the case of Poisson synaptic inputs to more biologically plausible renewal processes. The MNN framework can therefore be considered an attempt toward a general framework of computation with stochastic systems. Furthermore, a more biologically reasonable MNN with intra- and interlayer interactions was developed by introducing intralayer inputs, analogous to the "networks with context units" in ANNs. It was shown that even a single unit of such a system, analogous to [4–6], is able to perform various complex nonlinear tasks such as the XOR problem and that it can reach a trade-off between the output bias and variance by determining the optimal penalty factor for a specific learning task.
In this letter we explore how spontaneous activity is propagated across a feedforward network with both homogeneous and heterogeneous connections, derived from the intralayer inputs. The input-output firing relationship and the stability are first explored for a homogeneous network. Synchronization or desynchronization is then presented for a heterogeneous network with random or lateral interactions. It is the external intralayer input unit that offers more sophisticated and unexpected dynamics than the MNN.
This paper is organized as follows. The framework of MNN with intra- and interlayer interactions is presented in Section 2. The dynamics of homogeneous and heterogeneous networks are explored in Section 3.
2. Moment Neuronal Networks with Intra- and Inter-Interactions
For the decay rate , when the membrane potential of the th neuron in the th layer is between its resting state and its threshold , it satisfies the following dynamics, where the synaptic input is given as follows. Here and are the magnitudes of excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs), and are renewal processes arriving from the th and th synapses, and and are the total numbers of active excitatory and inhibitory synapses in the th layer. All notations with the superscript ia imply that they receive the external stimulus from intralayer interactions rather than from the network itself, similar to the "networks with context units". This presents a more biologically reasonable network architecture than the original moment neuronal networks. When the membrane potential crosses the threshold from below, a spike is generated and the membrane resets to its resting potential .
The renewal process can be approximated as [1, 2], where and are the mean and variance of the interspike intervals (ISIs) of the renewal process , respectively, the superscript ix may be either ir or ia, and is the refractory period.
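Since the display equations were lost here, the key point is that each renewal input is summarized by the mean and variance of its ISIs. As a minimal illustrative sketch (function names and the choice of a gamma renewal process are ours, not the paper's), one can generate a renewal spike train and estimate these moments empirically:

```python
import random
import statistics

def gamma_isi(shape, rate, n, seed=0):
    """Draw n interspike intervals (s) from a gamma renewal process.
    shape = 1 gives a Poisson train; shape > 1 gives a more regular one."""
    rng = random.Random(seed)
    return [rng.gammavariate(shape, 1.0 / (shape * rate)) for _ in range(n)]

def isi_stats(isis):
    """Return (mean, variance, CV) of a list of interspike intervals."""
    m = statistics.fmean(isis)
    v = statistics.pvariance(isis)
    return m, v, (v ** 0.5) / m

# A Poisson train at 20 Hz: mean ISI should be near 0.05 s and CV near 1.
isis = gamma_isi(shape=1.0, rate=20.0, n=20000)
m, v, cv = isi_stats(isis)
```

These two moments (and the refractory period) are exactly the quantities the approximation in the text carries from layer to layer.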
For conciseness of notation, we take the intrainput from the th layer as an external context input parallel to the th layer, denoted by the superscript ia. Furthermore, we suppose that , , , and , , ; then (2) can be approximated as follows (see ). Here is the ratio between inhibitory and excitatory inputs from the intralayer, and is the correlation coefficient between the th and th inputs from the intralayer.
In terms of Siegert's expression , we have the following expressions for the mean and variance of the output ISIs. As discussed in , for the correlation between the inputs and outputs, we assume that the following heuristic relationship holds. Note that the right-hand side of (10) is the correlation of the inputs to the th and th neurons in the th layer, which includes the intra- and interinput correlations.
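The paper's equation was lost in extraction, but Siegert's expression for a leaky integrate-and-fire neuron with decay rate L, drift mu, and diffusion sigma has the standard form <T> = t_ref + (2/L) ∫ e^{x²} (√π/2)(1 + erf(x)) dx between (V_r·L − mu)/(sigma·√L) and (V_th·L − mu)/(sigma·√L). A sketch under that standard form follows; all parameter values are placeholders, since the paper's numbers did not survive extraction:

```python
import math

def siegert_mean_isi(mu, sigma, L=0.05, v_th=20.0, v_r=0.0, t_ref=5.0, steps=4000):
    """Mean output ISI (ms) of a leaky integrate-and-fire neuron via the
    standard Siegert formula, evaluated with a composite trapezoidal rule."""
    x_r = (v_r * L - mu) / (sigma * math.sqrt(L))
    x_th = (v_th * L - mu) / (sigma * math.sqrt(L))
    def g(x):
        # e^{x^2} * integral_{-inf}^{x} e^{-u^2} du, written with erf
        return math.exp(x * x) * (math.sqrt(math.pi) / 2.0) * (1.0 + math.erf(x))
    h = (x_th - x_r) / steps
    s = 0.5 * (g(x_r) + g(x_th)) + sum(g(x_r + k * h) for k in range(1, steps))
    return t_ref + (2.0 / L) * s * h

# Stronger drift should shorten the mean ISI (raise the firing rate).
t_slow = siegert_mean_isi(mu=0.8, sigma=1.0)
t_fast = siegert_mean_isi(mu=1.5, sigma=1.0)
```

The output firing rate used throughout the paper is then 1000/<T> Hz, and the output CV follows from the corresponding second-moment formula.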
The above input-output relationships lay the foundation of the moment neuronal networks with intra- and interinteractions (MNNIII).
3. Dynamics of Networks
The question we intend to address here is how spontaneous activity can be maintained in a feedforward network. For all simulations, the values of the decay rate, threshold, and resting potential were set equal to ms−1, mV, and mV, respectively. The same parameters have been employed elsewhere and are thought to be in the physiological range for visual cortex cells, in agreement with most published results [9, 10]. The coefficient of variation, , is used to quantify the irregularity of a spike train. If , the spike train is regular; otherwise the spike train is random. Analogously to , we stop the simulation when the firing rate is slower than Hz. All simulations were carried out with MATLAB.
3.1. Dynamics of Homogeneous Network
In this section, we focus on a homogeneous network, where all weights, afferent means, and variances are identical. The quantities in (6) then reduce to , , , , and , , . As discussed before, the propagation of correlation becomes trivial in this case, since all cells become fully correlated after the first layer. To avoid this, we clamped the correlation coefficient; that is, we set , , . In the simulations we set or , in agreement with experimental data reported in the literature [1, 11, 12], and we assume that each intralayer input, including the numbers of neurons, afferent means, and variances, is identical; that is, , , and .
First we choose , , to illustrate how the activity is propagated across the network, varying and , on the assumption that . Unless otherwise specified, the initial CV in all simulations is 1 and the weight is 0.5. In Figure 1, we show the results obtained for various values of and (we report the coefficient of variation ). As stated in , each data point () is connected with () to illustrate how the activity is propagated across the network. After the first few layers, for (top), neurons are found to be either silent or firing at a relatively high frequency for all and . For (middle), however, the situation changes. On the one hand, the left panel (middle) shows that even with no inhibitory input () and a small number of intraneurons (), the network is certain to be silent with fewer intrainput neurons. On the other hand, the right panel (middle) shows that not all networks are certain to be silent. As stated earlier, for with fewer intrainput neurons (), the network is certain to be silent (left). However, with an increasing size of the intrainput unit, the network may instead fire at a low or relatively high frequency (right); for example, for . This shows that the network can be stable at a low frequency; see the discussion later. In addition, for (bottom), there is another kind of behavior: neurons fire at a relatively low frequency after a few oscillations. Unlike a homogeneous MNN , neurons are certain to be silent with a larger ratio (e.g., ) after the first few layers. It is obvious that the intrainputs produce more sophisticated network dynamics than the MNN.
Figure 1 implies that a stable solution exists in some networks. Thus we explore the stable solution of such a network. Figure 2 shows how the input-output firing rate relationship varies with and , for and . Analogously to , roughly speaking, the stable solution appears at a low firing rate for networks of increasing size (either or ) with strong inhibition (either or ). However, the preferred appearance of a stable solution at low frequency does not require increasing the size of both the inter- and intralayer; for (top right, dashed lines), for example, any intralayer ratio (from to ) yields a fixed point at a low firing rate when , (top right), instead of , (bottom right).
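The search for such a stable solution can be made concrete with a simple layer-to-layer iteration: the firing rate of layer k+1 is the image of layer k under the input-output map, and a stable solution is a fixed point of that map. In the sketch below, the logistic transfer function is a hypothetical stand-in for the actual Siegert-based map; only the fixed-point iteration itself reflects the procedure in the text:

```python
import math

def iterate_rate_map(f, nu0, n_layers=50, tol=1e-9):
    """Propagate a firing rate through identical layers, nu_{k+1} = f(nu_k),
    stopping once successive layers agree to within tol."""
    traj = [nu0]
    for _ in range(n_layers):
        traj.append(f(traj[-1]))
        if abs(traj[-1] - traj[-2]) < tol:
            break
    return traj

# Hypothetical sigmoidal stand-in (in Hz) for the Siegert input-output map.
f = lambda nu: 100.0 / (1.0 + math.exp(-(nu - 40.0) / 8.0))
traj = iterate_rate_map(f, nu0=95.0)
fixed = traj[-1]  # approximate fixed point: f(fixed) ~= fixed
```

Whether the iteration converges to silence, to a low-rate fixed point, or to a high-rate fixed point is exactly what Figures 2-5 map out as the ratios and sizes vary.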
We further explore the stability of such a network by showing how the stable outputs vary with . Figure 3 shows the stable outputs versus and , with a fixed ratio of interlayer inputs (top) and (bottom) and with the number of neurons from interlayer inputs (left) and (right). The initial input is fixed at Hz in the current and later simulations. It is found that the intralayer input unit plays a sophisticated role in the stability of such a network, depending on the ratio and the size of the context unit, together with the ratio and the size of the network itself, that is, the interlayer. For example, for (top), the stable output appears to increase with the increasing size and the decreasing ratio of the context unit, which plays a general role; for and (bottom right panel), however, the situation changes and it plays a negative role; see .
We also explore the stability of such a network by showing how the stable outputs vary with the numbers of intralayer neurons and interlayer neurons . Figure 4 shows that a network with small intra- and interlayers will be either silent or firing at a low frequency. The top panel shows that the stable output of such a network seems to be symmetric between the numbers of intralayer and interlayer neurons when . However, there is obvious asymmetry for or ; see the middle and bottom panels in Figure 4. The middle panel shows that a network with a relatively large interlayer, for , will fire at a relatively high rate and that the size of the intralayer has a negligible effect on the stable outputs, such as . The bottom panel presents the unexpected result that a relatively high firing frequency occurs for a large intralayer and a small interlayer, rather than for a large intralayer and a large interlayer. In particular, for and , the network unexpectedly fires at a low frequency when and ; see the left of the bottom panel. All of this implies that the current network can produce more sophisticated dynamics than the MNN and that the intrainput unit plays a very sophisticated role in the dynamics of such a network.
Finally, we explore the stability of such a network by showing how the stable outputs vary with the ratios of both the inter- and intralayers, that is, and . Figure 5(a) shows the stable outputs versus and with . It can be seen that the network will be either silent or firing at a low frequency when and will fire at a relatively high rate when , which implies that the ratio (between inhibitory and excitatory input) from the network itself plays a more important role than that from the intrainput unit with such parameters. Moreover, the network will converge to silence or to a low firing frequency with strong inhibition , or to a high frequency with weak inhibition . Unlike the stable output of the MNN in  (where a high level of inhibition is necessary to maintain spontaneous activity, that is, a low output firing rate), weaker inhibition ( approaching ), even with weaker intrainhibition ( approaching ), can converge to a low firing frequency; see, for example, and . It is the intrainput unit that maintains the low-frequency activity. Thus there is an important gain in computational power over the MNN. Note the appearance of oscillation for a homogeneous MNNIII as a singularity, for example, with , , , and (Figure 5(b)).
3.2. Dynamics of Heterogeneous Network
We now turn to the more sophisticated dynamics of a heterogeneous network. We shed light on, for instance, how synchronization or desynchronization is produced by random or lateral connections when the constraint on the correlations between neurons is removed.
In accordance with the results reported in previous publications, we fixed the total number of neurons in each interlayer to . To show the influence of intralayer inputs on the network, we also fixed the total number of neurons in each intralayer to . We further assumed that each intralayer receives the same inputs as the first intralayer; that is, , , , and for .
First we generated random connections and inputs Hz. We also assumed that and that for .
Figure 6 shows the output CV versus the mean firing rate in the first layers of the network itself (left) and the correlation coefficients for one cell in the first, second, and fourth (or fifth) layer (right) for , ; , ; and , . Simulations show that all neurons synchronize after the th layer, that is, for , and that the network is stable. The top panels show that the network will be stable at a relatively low frequency (<100 Hz) after the th layer; the middle panels show that the network will be stable at a relatively high rate (>100 Hz) after the th layer; the bottom panels, however, show that the network can be stable at a very low frequency (<1 Hz). For example, Figure 7 shows the mean firing rate for different layers, corresponding to Figure 6 (top); note the different scale for each layer. It can be seen from Figures 6 and 10 (top) that the ratios and change the stable outputs but do not change the synchronization. This result is, in general, in agreement with numerical experiments on feedforward spiking neuronal networks, showing that neurons get synchronized quite easily . In fact, desynchronization rather than synchronization seems to be the major problem for a spiking neuronal network.
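Synchronization here is read off the pairwise correlation coefficients: values near 1 mean the cells' activities have locked together. A minimal Pearson-correlation helper (our illustration, not the paper's code) makes the criterion concrete:

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Identical activity traces are perfectly synchronized (rho = 1);
# sign-flipped traces are perfectly anticorrelated (rho = -1).
trace = [3.0, 5.0, 2.0, 8.0, 6.0]
rho_same = pearson(trace, trace)
rho_anti = pearson(trace, [-v for v in trace])
```

Applied to the activities of every cell pair in a layer, correlations clustered near 1 signal the synchronized regime of Figure 6, while correlations scattered around 0 signal the desynchronized regime discussed next.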
In order to avoid synchronization, we used the same Mexican hat weight distribution as in . To this end, we first rearranged all neurons of the interlayer on a two-dimensional square lattice by assigning to neuron the coordinates , with . Then, for , we set the weights according to the Mexican hat function , defined for as , with , being modulation parameters. For the simulations we used the same modulation , as  and the same other parameters as in the previous subsection.
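Because the exact definition and modulation parameters were lost in extraction, the sketch below uses a common difference-of-Gaussians form of the Mexican hat (short-range excitation, longer-range inhibition) on the square lattice; sigma_e, sigma_i, and c are illustrative values, not the paper's:

```python
import math

def lattice_coords(i, n):
    """Place neuron i of an n*n layer on a 2-D square lattice."""
    return (i % n, i // n)

def mexican_hat(d, sigma_e=1.0, sigma_i=3.0, c=0.5):
    """Difference-of-Gaussians 'Mexican hat' profile over lattice distance d:
    positive (excitatory) near d = 0, negative (inhibitory) at mid range."""
    return math.exp(-d * d / (2 * sigma_e ** 2)) - c * math.exp(-d * d / (2 * sigma_i ** 2))

def hat_weights(n):
    """Layer-to-layer weight matrix for n*n neurons based on lattice distance."""
    N = n * n
    W = [[0.0] * N for _ in range(N)]
    for i in range(N):
        xi, yi = lattice_coords(i, n)
        for j in range(N):
            xj, yj = lattice_coords(j, n)
            W[i][j] = mexican_hat(math.hypot(xi - xj, yi - yj))
    return W

W = hat_weights(10)
```

Since the weight now depends on lattice distance, nearby cells excite each other while mid-range cells inhibit each other, which is what spreads the firing rates apart and prevents the global synchronization seen with random connections.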
Figure 8 shows the propagation of activity in a heterogeneous MNNIII with Mexican-hat-type connections for , . The results indicate that the activity in the network becomes stable after the th layer (top left). Moreover, as indicated by the values of the correlation coefficients in the top right panel, the neurons do not synchronize. The middle and bottom panels show the mean firing rate for different layers. Figure 9 presents the correlation coefficients between the central cell and the other cells in layer and in layer , corresponding to Figure 8; on average, the correlation coefficient between neurons was around zero.
It is interesting to note that, from the first to the second layer, there is a general reduction in firing rate. Then, after a few transition layers, the neuronal activity becomes stable. In summary, the activity in a heterogeneous MNNIII becomes stationary after a few layers. With random interactions, the network gets synchronized easily. However, with lateral interactions given by the Mexican hat function, desynchronization is obtained; that is, the firing rates of individual neurons tend to spread out.
To compare these results with others, Figure 10 shows the propagation of activity in a heterogeneous MNNIII with random connections (top) and with Mexican-hat-type connections (bottom) for , . It shows only the output CV versus the mean firing rate in the first layers (left) and the stable mean firing rate in layer (right). Figure 10 implies that the MNNIII with random connections gets synchronized after the th layer, while with Mexican-hat-type connections it becomes stationary and fires with bell-like firing rates after the th layer.
First, the weight connections are key to propagating the activity of such a network. A homogeneous network with clamped weight connections, as stated in Section 3.1, converges to silence or to a low/high firing frequency. A heterogeneous network, however, gets synchronized or desynchronized according to whether the weight connections are random or lateral. It is the lateral weight connections introduced by the Mexican hat that push cells to fire with more widely spread firing rates than in a network with random interactions; see the earlier discussion in this subsection. In other words, the weight connections determine the type of network dynamics (convergence, synchronization, or desynchronization) and subsequently change the size of the stable outputs. For , , for example, a homogeneous network with initial input Hz converges to Hz after the th layer (approximately , in the left panel of Figure 5); Figure 10 shows that a network with random connections gets synchronized at Hz after the th layer (top) and that with Mexican-hat-type connections it gets desynchronized at stable bell-like outputs with a mean value of Hz after the th layer (bottom). It is the weight connections that change the stable outputs of the network. Additionally, the network simulations reported in this subsection show that highly irregular output firing is also a result of correlations between neuronal activities.
Second, all simulations show that the ratio of inhibition from the inter- or intralayer ( or ) does not determine the type of dynamics but influences the stable outputs of the network. For the homogeneous network, this was verified in Section 3.1. For a network with random connections, Figures 6 and 10 (top) show that the ratio or changes the stable outputs but does not change the synchronization. Figures 8 and 10 (bottom) show that the lateral weight connections introduced by the Mexican hat push cells to fire with more widely spread firing rates than in a network with random interactions and that the ratio only changes the stable outputs; for example, the mean of the stable bell-like outputs in Figure 8 ( Hz) is larger than that in the bottom panel of Figure 10 ( Hz). It is the inhibition, from the network itself or from the external intrainput, that can enhance or weaken the (stable) firing outputs of the network.
Finally, all parameters influence how quickly the network stabilizes. For the homogeneous network, this was covered in the discussion in Section 3.1. For the heterogeneous network, the weight connections can influence it when the other parameters are identical; see, for example, Figure 8, which shows that synchronization under random connections is slightly faster than desynchronization under lateral connections. Further, the ratio of inhibition influences it; see, for example, Figures 6 and 10 (top), which show this for random connections, while Figures 8 and 10 (bottom) do so for the Mexican-hat-type connections. In addition, the heterogeneous MNNIII simulations reported in this subsection show that synchronization/desynchronization seems to be slightly slower or faster than in the MNN  with otherwise identical parameters, since the external intralayer input unit can enhance or weaken the firing outputs. Thus, the network reported here offers more sophisticated and unexpected dynamics than the MNN.
4. Conclusions
In the current paper, we have focused on the dynamics of the feedforward MNNIII, akin to the dynamics of the MNN. Synchronization or desynchronization is obtained with random or lateral interactions. Owing to the more biologically reasonable intralayer inputs, such a network offers more sophisticated and unexpected dynamics than the MNN. As stated earlier, the intrainput unit plays a very sophisticated role in the dynamics of such a network: it can enhance or weaken the firing outputs by varying the ratio and/or the size of the intralayer, in combination with those of the interlayer, that is, and/or . In the future we may shed light on engineering applications, the dynamics, and/or the learning rule of the MNNIII with other network architectures such as recurrent and RBF networks .
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank the anonymous reviewers for their useful comments and suggestions. This work was supported by the National Natural Science Foundation of China (11171101 and 61403136), the Provincial Natural Science Foundation of Hunan (13JJ3114), the Scientific Research Fund of Hunan University of Arts and Science (13ZD02), and the Construction Program of the Key Discipline in Hunan University of Arts and Science—Applied Mathematics.
References
[1] J. Feng, Y. Deng, and E. Rossoni, "Dynamics of moment neuronal networks," Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, vol. 73, no. 4, Article ID 041906, 2006.
[2] X. Y. Xiang and Y. C. Deng, "Training moment neuronal networks with intra- and interlayer interactions," Information: An International Interdisciplinary Journal, vol. 15, pp. 363–374, 2012.
[3] T. P. Trappenberg, Fundamentals of Computational Neuroscience, Oxford University Press, Oxford, UK, 2002.
[4] X. Y. Xiang, Y. C. Deng, and X. Q. Yang, "Second order spiking perceptrons," Soft Computing, vol. 13, pp. 1219–1230, 2009.
[5] X. Y. Xiang and Y. C. Deng, "The learning of moment neuronal networks," Neurocomputing, vol. 73, pp. 2597–2613, 2010.
[6] X. Y. Xiang, Y. Chen, and L. F. Liu, "An updated learning rule for moment neuronal networks," Journal of Information and Computational Science, vol. 8, no. 13, pp. 2509–2516, 2011.
[7] J. F. Feng, Computational Neuroscience: A Comprehensive Approach, Chapman & Hall/CRC Press, London, UK, 2003.
[8] W. Gerstner and W. Kistler, Spiking Neuron Models, Cambridge University Press, Cambridge, UK, 2003.
[9] V. Litvak, H. Sompolinsky, I. Segev, and M. Abeles, "On the transmission of rate code in long feedforward networks with excitatory-inhibitory balance," The Journal of Neuroscience, vol. 23, pp. 3006–3015, 2003.
[10] M. N. Shadlen and W. T. Newsome, "Noise, neural codes and cortical organization," Current Opinion in Neurobiology, vol. 4, pp. 569–579, 1994.
[11] J. F. Feng and D. Brown, "Impact of correlated inputs on the output of the integrate-and-fire models," Neural Computation, vol. 12, pp. 671–692, 2000.
[12] E. Zohary, M. N. Shadlen, and W. T. Newsome, "Correlated neuronal discharge rate and its implications for psychophysical performance," Nature, vol. 370, pp. 140–143, 1994.
[13] H. C. Tuckwell, Introduction to Theoretical Neurobiology, Cambridge University Press, Cambridge, UK, 1988.
[14] I. Sadeghkhani, A. Ketabi, and R. Feuillet, "Radial basis function neural network application to measurement and control of shunt reactor overvoltages based on analytical rules," Mathematical Problems in Engineering, vol. 2012, Article ID 647305, 2012.