Mathematical Problems in Engineering

Volume 2015 (2015), Article ID 381271, 13 pages

http://dx.doi.org/10.1155/2015/381271

## Dynamics of Moment Neuronal Networks with Intra- and Inter-Interactions

^{1}College of Mathematics and Computational Science, Hunan University of Arts and Science, Changde 415000, China

^{2}Department of Mathematics, Hunan College of Finance and Economics, Changsha 410205, China

Received 6 July 2014; Accepted 7 September 2014

Academic Editor: Shifei Ding

Copyright © 2015 Xuyan Xiang and Jianguo Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A framework of moment neuronal networks with intra- and interlayer interactions is presented, with the aim of showing how spontaneous activity propagates across homogeneous and heterogeneous networks. The input-output firing relationship and the stability are first explored for a homogeneous network. A more sophisticated dynamics is then explored for a heterogeneous network without constraints on the correlation coefficients between neurons. With random interactions, the network synchronizes easily; desynchronization, however, is produced by a lateral interaction such as a Mexican hat function. It is the external intralayer input unit that yields a more sophisticated and unexpected dynamics than its predecessors. The work thus further opens up the possibility of carrying out stochastic computation in neuronal networks.

#### 1. Introduction

The theory of moment neuronal networks (MNN) was developed recently [1]; it takes into account both the first- and second-order statistics of spike trains, generalizing the case of Poisson synaptic inputs to more biologically plausible renewal processes. The MNN framework can therefore be considered an attempt toward a general framework of computation with stochastic systems. Furthermore, a more biologically plausible MNN with intra- and interlayer interactions was developed by introducing intralayer inputs [2], analogous to the “networks with context units” [3] in artificial neural networks (ANNs). It was shown that even a single unit of such a system, as in [4–6], is able to perform various complex nonlinear tasks such as the XOR problem, and that it can reach a trade-off between output bias and variance by determining the optimal penalty factor for a specific learning task.

In this paper we explore how spontaneous activity propagates across a feedforward network with both homogeneous and heterogeneous connections, derived from the intralayer inputs. The input-output firing relationship and the stability are first explored for a homogeneous network. Synchronization and desynchronization under random or lateral interactions are then presented for a heterogeneous network. It is the external intralayer input unit that yields a more sophisticated and unexpected dynamics than the original MNN.

This paper is organized as follows. The framework of MNN with intra- and interlayer interactions is presented in Section 2. The dynamics of homogeneous and heterogeneous networks are explored in Section 3.

#### 2. Moment Neuronal Networks with Intra- and Inter-Interactions

For the decay rate $L$, when the membrane potential $v^k_i$ of the $i$th neuron in the $k$th layer is between its resting state $V_{\mathrm{rest}}$ and its threshold $V_{\mathrm{th}}$, it satisfies the following dynamics:
$$ dv^k_i(t) = -L\left(v^k_i(t)-V_{\mathrm{rest}}\right)dt + dI^k_{\mathrm{syn},i}(t), \tag{1} $$
where the synaptic input is given by
$$ dI^k_{\mathrm{syn},i}(t) = a\sum_{j=1}^{N^k_E} dE^{ir}_j(t) - b\sum_{j=1}^{N^k_I} dI^{ir}_j(t) + a\sum_{j=1}^{N^{k,ia}_E} dE^{ia}_j(t) - b\sum_{j=1}^{N^{k,ia}_I} dI^{ia}_j(t). \tag{2} $$
Here $a$ and $b$ are the magnitudes of excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs), $E_j$ and $I_j$ are renewal processes arriving from the $j$th excitatory and $j$th inhibitory synapse, and $N^k_E$ and $N^k_I$ are the total numbers of active excitatory and inhibitory synapses in the $k$th layer. All notations with the superscript *ia* indicate that they receive the external stimulus from intralayer interactions rather than from the network itself, similar to the “networks with context units” [3]. This presents a more biologically plausible network architecture than the original moment neuronal networks. When $v^k_i(t)$ crosses the membrane threshold $V_{\mathrm{th}}$ from below, a spike is generated and the membrane resets to its resting potential $V_{\mathrm{rest}}$.
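As a concrete illustration, the leaky integrate-and-fire dynamics above can be simulated directly, approximating each renewal input train by a Poisson process. This is only a minimal sketch: the parameter values and function names below are our own assumptions, not taken from the paper.

```python
import random

def simulate_lif(L=0.05, v_rest=0.0, v_th=20.0, a=1.0, b=1.0,
                 n_exc=100, n_inh=20, rate=0.02, dt=1.0,
                 t_max=5000.0, seed=1):
    """Euler simulation of dv = -L (v - v_rest) dt + a dN_E - b dN_I.

    Each of the n_exc excitatory and n_inh inhibitory synapses fires
    as an (approximate) Poisson process with `rate` spikes per ms.
    Returns the list of output spike times (ms).
    """
    rng = random.Random(seed)
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        # Poisson spike counts arriving in dt from all synapses
        n_e = sum(rng.random() < rate * dt for _ in range(n_exc))
        n_i = sum(rng.random() < rate * dt for _ in range(n_inh))
        v += -L * (v - v_rest) * dt + a * n_e - b * n_i
        if v >= v_th:              # threshold crossing from below
            spikes.append(t)       # emit a spike...
            v = v_rest             # ...and reset to rest
        t += dt
    return spikes

spikes = simulate_lif()
```

With this excitation-dominated input the neuron fires steadily; raising `n_inh` toward `n_exc` drives it toward silence, mirroring the role played by the inhibition ratio later in the paper.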

The renewal process $N^{ix}_j(t)$ can be approximated as [1, 2]
$$ dN^{ix}_j(t) \approx \lambda^{ix}_j\,dt + \left(\lambda^{ix}_j\right)^{3/2}\sigma^{ix}_j\,dB^{ix}_j(t), \tag{3} $$
with
$$ \lambda^{ix}_j = \frac{1}{\mu^{ix}_j + T_{\mathrm{ref}}}, \tag{4} $$
where $\mu^{ix}_j$ and $\left(\sigma^{ix}_j\right)^2$ are the mean and variance of the interspike intervals (ISIs) of the renewal process $N^{ix}_j(t)$, respectively, the superscript *ix* could be either *ir* or *ia*, and $T_{\mathrm{ref}}$ is the refractory period.
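In code, this approximation amounts to mapping each renewal train's ISI statistics (together with the refractory period) to a drift and a diffusion coefficient. A minimal helper under this reading (names are ours):

```python
import math

def renewal_diffusion(mu_isi, var_isi, t_ref=0.0):
    """Diffusion approximation of a renewal process N(t):
    dN(t) ~ lam dt + sqrt(lam**3 * var_isi) dB_t, where
    lam = 1 / (mu_isi + t_ref) is the effective firing rate.
    Returns (drift, diffusion) per unit time."""
    lam = 1.0 / (mu_isi + t_ref)
    return lam, math.sqrt(lam ** 3 * var_isi)

# For a Poisson train the ISI variance equals mu_isi**2, and the
# diffusion coefficient reduces to sqrt(rate), as expected.
drift, diffusion = renewal_diffusion(mu_isi=10.0, var_isi=100.0)
```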

For conciseness of notation, we take the intrainput of the $k$th layer as an external context input parallel to the $k$th layer, denoted by the superscript *ia*. Furthermore, under the homogeneity assumptions on magnitudes, synapse numbers, and weights adopted in [2], (2) can be approximated as (see [2])
$$ dI^k_{\mathrm{syn},i}(t) \approx \check{\mu}^k_i\,dt + \check{\sigma}^k_i\,dB_t, \tag{5} $$
where
$$ \check{\mu}^k_i = a(1-r)\sum_j w_{ij}\lambda_j + a\left(1-r^{ia}\right)\sum_j w^{ia}_{ij}\lambda^{ia}_j, \qquad \left(\check{\sigma}^k_i\right)^2 = a^2(1+r)\sum_{j,j'} w_{ij}w_{ij'}\rho_{jj'}\hat{\sigma}_j\hat{\sigma}_{j'} + a^2\left(1+r^{ia}\right)\sum_{j,j'} w^{ia}_{ij}w^{ia}_{ij'}\rho^{ia}_{jj'}\hat{\sigma}^{ia}_j\hat{\sigma}^{ia}_{j'}, \tag{6} $$
with $\hat{\sigma}_j = \lambda_j^{3/2}\sigma_j$. Here $r^{ia}$ is the ratio between inhibitory inputs and excitatory inputs from the intralayer; $\rho^{ia}_{jj'}$ is the correlation coefficient between the $j$th and $j'$th inputs from the intralayer.
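These moments can be computed mechanically: the mean is linear in the weights, while the variance is a correlation-weighted double sum over afferents. The sketch below follows that recipe for one neuron and a single afferent population (the `a`, `r` scaling and all names are our own shorthand, not the paper's notation):

```python
import math

def input_moments(weights, rates, diffs, rho, a=1.0, r=0.0):
    """Mean and standard deviation of the summed synaptic input.

    weights -- synaptic weights w_j
    rates   -- drift coefficients (firing rates) of the afferents
    diffs   -- diffusion coefficients of the afferents
    rho     -- rho[j][k]: correlation between afferents j and k
    a, r    -- EPSP magnitude and inhibitory/excitatory ratio
    """
    n = len(weights)
    mu = a * (1.0 - r) * sum(w * lam for w, lam in zip(weights, rates))
    var = a * a * (1.0 + r) * sum(
        weights[j] * weights[k] * rho[j][k] * diffs[j] * diffs[k]
        for j in range(n) for k in range(n))
    return mu, math.sqrt(var)

# two uncorrelated unit-rate afferents with unit weights
mu, sigma = input_moments([1.0, 1.0], [1.0, 1.0], [1.0, 1.0],
                          [[1.0, 0.0], [0.0, 1.0]])
```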

In terms of Siegert’s expression [7], we have the mean and variance of the output ISIs:
$$ \left\langle T^k_{\mathrm{out},i} \right\rangle = T_{\mathrm{ref}} + \frac{2}{L}\int_{x_-}^{x_+} g(x)\,dx, \tag{7} $$
$$ \mathrm{Var}\left(T^k_{\mathrm{out},i}\right) = \frac{8}{L^2}\int_{x_-}^{x_+} e^{x^2}\int_{-\infty}^{x} e^{u^2}\left(\int_{-\infty}^{u} e^{-v^2}\,dv\right)^2 du\,dx, \tag{8} $$
where
$$ g(x) = e^{x^2}\int_{-\infty}^{x} e^{-u^2}\,du, \qquad x_- = \frac{V_{\mathrm{rest}}L-\check{\mu}^k_i}{\check{\sigma}^k_i\sqrt{L}}, \qquad x_+ = \frac{V_{\mathrm{th}}L-\check{\mu}^k_i}{\check{\sigma}^k_i\sqrt{L}}. \tag{9} $$
As discussed in [2], for the correlation between the inputs and outputs we can assume that the following *heuristic* relationship holds:
$$ \rho^{\mathrm{out}}_{ij} \approx \rho^{\mathrm{in}}_{ij}. \tag{10} $$
Note that the right-hand side of (10) is the correlation of the inputs to the $i$th and the $j$th neurons in the $k$th layer, which includes the intra- and interinput correlations.
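Under this heuristic, the output correlation is just the correlation of the two neurons' aggregated inputs, which for weighted sums over shared afferents is a standard covariance computation. A sketch (assuming unit-variance afferent trains; names are hypothetical):

```python
import math

def output_correlation(w_i, w_j, rho):
    """Correlation of the summed inputs w_i . X and w_j . X, where the
    afferent trains X have unit variance and correlation matrix rho.
    Per the heuristic, this is taken as the output correlation."""
    n = len(w_i)
    def cov(u, v):
        return sum(u[p] * v[q] * rho[p][q]
                   for p in range(n) for q in range(n))
    return cov(w_i, w_j) / math.sqrt(cov(w_i, w_i) * cov(w_j, w_j))

# partially overlapping weights onto weakly correlated afferents
corr = output_correlation([1.0, 0.5], [0.5, 1.0], [[1.0, 0.2], [0.2, 1.0]])
```

Identical weight vectors give a correlation of one, while disjoint weights onto uncorrelated afferents give zero, so the heuristic interpolates between fully shared and fully independent input.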

The above relationships between inputs and outputs lay the foundation of the moment neuronal networks with intra- and interinteractions (MNNIII).

#### 3. Dynamics of Networks

The question we intend to address here is how spontaneous activity can be maintained in a feedforward network. For all simulations, the decay rate (in ms^{−1}), threshold (in mV), and resting potential (in mV) were fixed at the same values employed elsewhere [8], which are thought to be in the physiological range for visual cortex cells and in agreement with most published results [9, 10]. The coefficient of variation (CV), the ratio of the standard deviation to the mean of the ISIs, is used to quantify the irregularity of a spike train: a CV of zero indicates a regular spike train, whereas larger values indicate a random one. As in [1], we stop a simulation when the firing rate falls below a fixed cutoff in Hz. All simulations were carried out with MATLAB.
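The CV is computed as the standard deviation of the ISIs divided by their mean. A small helper consistent with that definition (our own code, not from the paper):

```python
import math

def cv_of_spike_train(spike_times):
    """Coefficient of variation of the interspike intervals:
    std(ISI) / mean(ISI). Zero for a perfectly regular train,
    close to one for a Poisson train."""
    isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean

regular = cv_of_spike_train([0.0, 10.0, 20.0, 30.0])   # 0.0
```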

##### 3.1. Dynamics of Homogeneous Network

In this section, we focus on a homogeneous network, in which all weights, afferent means, and variances are set to be identical, so that the quantities in (6) reduce to common values independent of the neuron index. As we discussed before, the propagation of correlation becomes trivial in this case, since all cells become fully correlated after the first layer. To avoid this, we clamped the correlation coefficient; that is, we set $\rho^k_{ij} = \rho$ for all $i \neq j$ in every layer $k$. In simulations the clamped value was chosen in agreement with experimental data reported in the literature [1, 11, 12], and we assume that each intralayer input, including the numbers of neurons, afferent means, and variances, is identical.
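The homogeneous reduction makes the layer-to-layer map essentially one-dimensional in the firing rate, and it can be iterated numerically with the standard Siegert-type mean first-passage formula. The sketch below does this for Poisson-like afferents with a clamped correlation; all parameter values (decay rate, threshold, refractory period, layer size) are illustrative assumptions, not the paper's settings:

```python
import math

L, V_TH, V_REST, T_REF = 0.05, 20.0, 0.0, 5.0   # assumed: per ms / mV

def g(x):
    """g(x) = exp(x^2) * int_{-inf}^x exp(-u^2) du, with guards:
    an asymptotic branch for large negative x, infinity once
    exp(x^2) would overflow."""
    if x < -4.0:
        return -1.0 / (2.0 * x) * (1.0 - 1.0 / (2.0 * x * x))
    if x > 25.0:
        return math.inf
    return 0.5 * math.sqrt(math.pi) * math.exp(x * x) * (1.0 + math.erf(x))

def mean_isi(mu, sigma, n=4000):
    """Siegert-type mean first-passage time for input drift mu and
    noise sigma (midpoint rule on the rescaled integral)."""
    lo = (V_REST * L - mu) / (sigma * math.sqrt(L))
    hi = (V_TH * L - mu) / (sigma * math.sqrt(L))
    h = (hi - lo) / n
    return T_REF + (2.0 / L) * h * sum(g(lo + (k + 0.5) * h) for k in range(n))

def next_rate(rate, n_syn=100, w=0.5, a=1.0, r=0.5, rho=0.1):
    """One layer of the homogeneous map: input rate -> output rate."""
    if rate <= 0.0:
        return 0.0                       # a silent layer stays silent
    mu = a * (1.0 - r) * n_syn * w * rate
    # Poisson-like afferents (CV = 1): per-train diffusion variance = rate
    var = (a * a * (1.0 + r) * w * w * rate
           * (n_syn + n_syn * (n_syn - 1) * rho))
    return 1.0 / mean_isi(mu, math.sqrt(var))

rate = 0.02                              # 20 Hz initial rate (per ms)
for _ in range(10):                      # propagate through ten layers
    rate = next_rate(rate)
```

With this inhibition ratio the rate decays layer by layer toward the silent fixed point; lowering `r` (or adding the intralayer drive of the full model) moves the map toward the sustained-firing regimes discussed below.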

First, to illustrate how the activity is propagated across the networks, we vary the inhibition ratio and the number of intralayer input neurons. Unless otherwise specified, the initial CV in all simulations is always 1, and the weight is 0.5. In Figure 1 we show the results obtained for these parameter values (we report the coefficient of variation as well). As in [1], consecutive data points are connected to illustrate how the activity is propagated across the networks. After the first few layers, for the smallest ratio (top), neurons are found to be either silent or firing at relatively high frequency for all parameter values. For the intermediate ratio (middle), however, the situation changes. On the one hand, the left panel (middle) shows that, even without inhibitory input, the network is certain to become silent when the number of intrainput neurons is small. On the other hand, the right panel (middle) shows that not all networks are certain to be silent: with an increasing number of intrainput units, the network may instead fire at low or relatively high frequency. This actually shows that the networks can be stable at low frequency; see the discussion later. In addition, for the largest ratio (bottom), another kind of behavior appears, in which neurons fire at relatively low frequency after a few oscillations. Unlike in a homogeneous MNN [1], neurons are certain to be silent for a larger ratio after the first few layers. It is clear that the intrainputs give rise to a more sophisticated network dynamics than in the original MNN.