Research Article | Open Access
Xiangyu Li, Chunhua Yuan, Bonan Shan, "System Identification of Neural Signal Transmission Based on Backpropagation Neural Network", Mathematical Problems in Engineering, vol. 2020, Article ID 9652678, 8 pages, 2020. https://doi.org/10.1155/2020/9652678
System Identification of Neural Signal Transmission Based on Backpropagation Neural Network
In this paper, a backpropagation (BP) neural network identification method is adopted to approximate the mapping between the input and output of neurons based on their firing trajectories. First, input and output data from a neural model are used to train the BP neural network, so that the identified network captures the transfer characteristics of the model and can precisely predict its firing trajectory. The method is then applied to electrophysiological recordings of real neurons: the output of the identified BP neural network not only accurately fits the firing trajectories of neurons that participated in the network training but also predicts, with high accuracy, the firing trajectories and spike times of neurons not involved in the training process.
1. Introduction
Large-scale, high-throughput data acquisition is revolutionizing the field of neuroscience [1]. Linking experimental data to models featuring various biophysical phenomena is central to understanding neuronal function and network behavior [2]. System identification of quantitative mathematical models has proved to be an essential tool in exploring this issue [3, 4]. Detailed computational models are required to fit models to electrophysiological recordings [5, 6].
Complex conductance-based neuronal models such as the Hodgkin–Huxley model describe neurons with multi-ion-channel mathematical formulations [7–9]. These models can reproduce most characteristics of neuronal signals, but their complex structure and large number of parameters make system identification difficult. The cascade model, by contrast, is a simple phenomenological model that abstracts the complex dynamics of a neuron into a combination of linear and nonlinear processes and requires no knowledge of specific structural details [10–12]. Only the input and output signals need to be identified by certain algorithms, so that the model can fit the spiking signals of neurons with a certain precision under a given input.
An artificial neural network is a mathematical model inspired by synaptic connection and information processing in the biological nervous system. It consists of interconnected neuron nodes and connection weights. Compared with traditional information processing methods, this model overcomes the shortcomings of logic-symbol-based artificial intelligence in processing intuitive and unstructured information, and it is adaptive, self-organizing, and capable of real-time learning. The BP network is a multilayer feedforward network trained by the error backpropagation algorithm proposed by a team of scientists led by Rumelhart and McClelland [13–15]. It is one of the most widely used neural network models [16–18]. It can learn and store a large number of input-output mapping relationships without the mathematical equations describing those mappings being revealed in advance. The BP network uses the steepest descent method, continuously adjusting the weights and thresholds of the network by backpropagating the output error until the sum of squared errors between the network output and the given output is minimized. The BP network model includes a linear process, the weighted summation of input signals, and a linear or nonlinear function applied at the network nodes, so it can be regarded as a cascade model.
In this paper, the input and output trajectories of neurons are used as identification features, and the signal transmission process of real neurons is identified using a BP neural network. The output of the trained network is consistent with the real signal, and for test input signals that did not participate in training, the real output signal can also be predicted with a certain accuracy. Furthermore, the influence of the relevant network parameters on the identification is studied.
2. Materials and Methods
A BP neural network is a multilayer feedforward network whose topology comprises an input layer, a hidden layer, and an output layer. The network is characterized by forward signal transmission and backward propagation of errors. In the forward pass, the input signal is processed layer by layer from the input layer to the output layer, and the state of each layer of neurons affects only the state of the next layer. If the output layer does not produce the desired output, the network switches to backpropagation and adjusts its weights and thresholds according to the prediction error, so that the predicted output continuously approaches the expected output. The topology of the BP neural network is shown in Figure 1.
In Figure 1, $x_1, x_2, \ldots, x_n$ are the input values of the BP neural network, $y_1, y_2, \ldots, y_m$ are its predicted values, and $\omega_{ij}$ and $\omega_{jk}$ are the weights of the BP neural network. Each layer of nodes applies a linear or nonlinear excitation function $f$. Common excitation functions include the logsig, tansig, and purelin functions, whose graphs are shown in Figure 2. Only a suitable combination of excitation functions achieves good results when fitting data. When the number of input nodes is $n$ and the number of output nodes is $m$, the BP neural network can be regarded as a nonlinear mapping from $n$ independent variables to $m$ dependent variables.
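For concreteness, the three excitation functions named above can be written out directly. This is a small Python sketch using the MATLAB-style names the text adopts; the function bodies are the standard definitions, not code from the paper:

```python
import math

def logsig(x):
    """Log-sigmoid excitation function: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """Tan-sigmoid excitation function (hyperbolic tangent): maps into (-1, 1)."""
    return math.tanh(x)

def purelin(x):
    """Pure linear excitation function: the identity, used for linear nodes."""
    return x
```

The l-t combination selected later in the paper corresponds to logsig hidden layers feeding a tansig output layer.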
System identification with a BP neural network involves three main steps, namely, BP neural network construction, training, and testing, as shown in Figure 3. Construction means selecting suitable inputs and outputs, numbers of nodes, and excitation functions according to the characteristics of the system to be identified. Because the network parameters are initially given randomly, the network must then be trained on the input and output data according to certain rules until the network output agrees with the expected output. Finally, to test whether the trained network reproduces the identified system's characteristics, inputs that were not involved in training are fed to the network, the closeness of the network output to the given output is calculated, and this prediction performance is used as the index of the system identification effect.
Before system identification with a BP neural network, the network must be trained so that it possesses associative memory and predictive capacity. The training steps are as follows:

(1) Network Initialization. Determine the number of input layer nodes $n$, hidden layer nodes $l$, and output layer nodes $m$ according to the system input and output sequences. Initialize the connection weights $\omega_{ij}$ between the input layer and the hidden layer and $\omega_{jk}$ between the hidden layer and the output layer. Initialize the hidden layer threshold $a$ and the output layer threshold $b$, and set the learning rate $\eta$ and the neuron excitation function.

(2) Hidden Layer Output Calculation. From the input variables $x_i$, the connection weights $\omega_{ij}$, and the hidden layer threshold $a$, calculate the hidden layer output

$H_j = f\left( \sum_{i=1}^{n} \omega_{ij} x_i - a_j \right), \quad j = 1, 2, \ldots, l,$ (1)

where $f$ is the hidden layer excitation function.

(3) Output Layer Output Calculation. From the hidden layer output $H$, the connection weights $\omega_{jk}$, and the threshold $b$, calculate the predicted output of the BP neural network

$O_k = \sum_{j=1}^{l} H_j \omega_{jk} - b_k, \quad k = 1, 2, \ldots, m.$ (2)

(4) Error Calculation. From the network prediction output $O$ and the expected output $Y$, calculate the network prediction error

$e_k = Y_k - O_k, \quad k = 1, 2, \ldots, m.$ (3)

(5) Weight Update. Update the network connection weights $\omega_{ij}$ and $\omega_{jk}$ according to the network prediction error:

$\omega_{ij} \leftarrow \omega_{ij} + \eta H_j (1 - H_j) x_i \sum_{k=1}^{m} \omega_{jk} e_k, \qquad \omega_{jk} \leftarrow \omega_{jk} + \eta H_j e_k,$ (4)

where $\eta$ is the learning rate and $H_j (1 - H_j)$ is the derivative of the logsig excitation function.

(6) Threshold Update. Update the network node thresholds $a$ and $b$ according to the network prediction error:

$a_j \leftarrow a_j - \eta H_j (1 - H_j) \sum_{k=1}^{m} \omega_{jk} e_k, \qquad b_k \leftarrow b_k - \eta e_k.$ (5)

(7) Determine whether the algorithm iteration is finished; if not, return to step (2).
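The seven training steps above can be sketched as a minimal single-hidden-layer implementation. This is an illustrative NumPy sketch, not the authors' code (they use a double-hidden-layer structure later); the function names `train_bp` and `predict` are hypothetical:

```python
import numpy as np

def train_bp(X, Y, l=20, eta=0.01, epochs=100, seed=0):
    """Steps (1)-(7): single-hidden-layer BP network with a logsig hidden
    layer and a linear output layer, trained by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    n, m = X.shape[1], Y.shape[1]
    # (1) Initialization: random weights, zero thresholds.
    w_ih = rng.uniform(-0.5, 0.5, (n, l))
    w_ho = rng.uniform(-0.5, 0.5, (l, m))
    a = np.zeros(l)
    b = np.zeros(m)
    for _ in range(epochs):
        for x, y in zip(X, Y):
            # (2) Hidden layer output: H_j = logsig(sum_i w_ij x_i - a_j).
            H = 1.0 / (1.0 + np.exp(-(x @ w_ih - a)))
            # (3) Output layer output: O_k = sum_j H_j w_jk - b_k.
            O = H @ w_ho - b
            # (4) Prediction error: e = Y - O.
            e = y - O
            # (5) Weight update; H*(1-H) is the logsig derivative.
            grad_H = H * (1 - H) * (w_ho @ e)   # backpropagated hidden error
            w_ho += eta * np.outer(H, e)
            w_ih += eta * np.outer(x, grad_H)
            # (6) Threshold update.
            b -= eta * e
            a -= eta * grad_H
    return w_ih, w_ho, a, b

def predict(X, w_ih, w_ho, a, b):
    """Forward pass of the trained network."""
    H = 1.0 / (1.0 + np.exp(-(X @ w_ih - a)))
    return H @ w_ho - b
```

Step (7) is the `epochs` loop; in practice the iteration would also stop early when a target error is reached.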
3. Neural System Identification Based on BP Network
3.1. Data Sources
The network fitting data in this paper are the input and output signals of pyramidal neurons, obtained by patch-clamp recordings from rat cerebral cortex slices. The input signal is a time series of fluctuating synaptic currents that simulates the in vivo environment; the output data are the time series of the corresponding membrane potentials of the pyramidal cells. The fluctuating input imitating the in vivo environment is constructed from two Poisson firing sequences, which imitate 100 excitatory input neurons with a spike frequency of 10 Hz and 200 inhibitory input neurons with a spike frequency of 20 Hz, respectively. The fluctuating input current is formed after artificial synapse filtering.
The membrane potential at a given time is used as the output of the BP neural network. The current membrane potential is determined not only by the current input current but also by the input current and membrane potential values at previous moments, as shown in Figure 4. The n input data at a sampling point (the values at the circles in Figure 4) are selected as I(t−a), ..., I(t) and V(t−b), ..., V(t−1) (a + b + 1 = n), and the output datum (the value at the cross in Figure 4) is V(t) (m = 1), where n = 30 and a = 20.
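The windowing just described can be sketched as follows. `build_samples` is a hypothetical helper name; b is derived from a + b + 1 = n with n = 30 and a = 20, giving b = 9:

```python
import numpy as np

def build_samples(I, V, n=30, a=20):
    """Assemble BP-network training pairs: each input row is
    I(t-a), ..., I(t) together with V(t-b), ..., V(t-1), where
    a + b + 1 = n, and the target is V(t)."""
    b = n - a - 1
    X, Y = [], []
    for t in range(max(a, b), len(I)):
        x = np.concatenate([I[t - a:t + 1],   # a+1 current values
                            V[t - b:t]])      # b past membrane potentials
        X.append(x)
        Y.append(V[t])
    return np.array(X), np.array(Y)
```

At prediction time, the same window slides forward one sample at a time, with the network's own output V(t) feeding back into the next input vector.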
It is necessary to normalize the network input and output data before training, that is, to map the data linearly into the interval [−1, 1], so as to avoid large prediction errors caused by differences in order of magnitude between the data. The normalization formula is

$x' = \dfrac{2\left(x - x_{\min}\right)}{x_{\max} - x_{\min}} - 1,$ (6)

where $x_{\min}$ and $x_{\max}$ are the minimum and maximum values in the sequence, respectively. Data sequences generated by the network must be denormalized to recover values in the normal range; denormalization simply inverts formula (6). In addition, to standardize the evaluation, the subsequent error statistics are also computed on the normalized data.
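The normalization into [−1, 1] and its inverse might be implemented as below; `normalize` and `denormalize` are illustrative names:

```python
import numpy as np

def normalize(x):
    """Linearly map a sequence into [-1, 1]; return the mapped sequence
    plus the min/max needed to invert the mapping later."""
    x_min, x_max = x.min(), x.max()
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0, x_min, x_max

def denormalize(y, x_min, x_max):
    """Invert the normalization to recover the original physical range."""
    return (y + 1.0) * (x_max - x_min) / 2.0 + x_min
```

The min/max of the training sequence would be reused when denormalizing network outputs, so that training and test data share one scale.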
3.2. Network Construction
After determining the number of nodes in the input and output layers, we need to select the structure of the hidden layer and the type of node excitation function to determine the basic architecture of the neural network. The structure of the hidden layer has a great impact on the prediction accuracy of the BP neural network: if the number of nodes is too small, the network learning effect is not good, more training times are needed to reach the goal, and the accuracy of the training will be affected; if the number of nodes is too large, the training time will be too long and overfitting may occur, that is, the fitting effect on the untrained test data of the network will be reduced due to the excessive pursuit of the accuracy of the training data. For simpler function mapping relationships, neural networks with a single hidden layer can speed up network training while ensuring the accuracy of network fitting; for more complex mapping relationships, increasing the number of layers of the hidden layer network can improve the prediction accuracy of the network to a certain extent. In addition, the choice of the excitation function for each layer of the BP neural network also has an important influence on the identification process. Generally speaking, the input of the hidden layer is the weighted sum of the input data, that is, the input layer node excitation function is the purelin function, and only a suitable combination of the hidden layer and output layer functions is needed.
To construct a suitable BP neural network, we conducted multiple sets of simulation experiments, compared the test results with the expected data, and statistically analyzed the errors. The mean absolute error ab_error and the error variance sq_error of each test are computed as

$\mathrm{ab\_error} = \dfrac{1}{num} \sum_{i=1}^{num} \left| Y_i - O_i \right|, \qquad \mathrm{sq\_error} = \dfrac{1}{num} \sum_{i=1}^{num} \left( Y_i - O_i \right)^2,$ (7)

where $num$ is the number of sample input-output data sets and $Y_i$ and $O_i$ are the expected output and the network output, respectively, both taken as normalized values.
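Interpreting ab_error as the mean absolute error and sq_error as the mean squared deviation over the num normalized sample pairs, the two statistics can be sketched as:

```python
import numpy as np

def fit_errors(expected, output):
    """Mean absolute error (ab_error) and mean squared deviation (sq_error)
    between expected outputs Y_i and network outputs O_i, both normalized."""
    expected = np.asarray(expected, dtype=float)
    output = np.asarray(output, dtype=float)
    num = len(expected)
    ab_error = np.sum(np.abs(expected - output)) / num
    sq_error = np.sum((expected - output) ** 2) / num
    return ab_error, sq_error
```

Both statistics are averaged over the 10 repeated identification runs described next.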
Because the initial parameters of the network are given randomly, the identification results are highly random. The identification process is repeated 10 times in each case, and the average of the average absolute error and error variance is taken as the identification effect index in this case.
Figure 5(a) shows the error statistics of the identification results of a double-hidden-layer network with a 20 ∗ 20 structure and of a single-hidden-layer network. The mean absolute error and error variance of the double-hidden-layer network are reduced by nearly half compared with the single-hidden-layer network. Figure 5(b) compares four of the better-performing combinations of hidden layer and output layer excitation functions; l-p denotes the case where the hidden layer uses the logsig function and the output layer the purelin function, and so on. The fitting error of the l-t combination was the smallest. Therefore, a 20 ∗ 20 double-hidden-layer structure with the l-t excitation function combination was finally selected to construct the BP neural network for identifying the pyramidal neuron signal transmission system.
3.3. Network Training
The training steps of the BP neural network were described in detail above. After choosing the network structure, the combination of excitation functions, the learning rate (η = 0.01), and the maximum number of training epochs (100), training of the neural network can begin. In this paper, 10,000 sets of data are extracted from the input and output time series of pyramidal neurons as training data, and their temporal order is shuffled for network training.
Generally speaking, the smaller the training target error, the better the fit to the training data. However, as training time increases, the network may overfit. To avoid this, part of the training data is set aside as validation data, which does not participate in training and is used only for error calculation. As shown in Figure 6(a), the error on the training data decreases continuously as training progresses, while the error on the validation data stops decreasing after the fifth epoch and then rises slightly. Therefore, training is stopped once the validation error has failed to decrease for two consecutive epochs, and the network state at the minimum validation error is taken as the best identification result, thereby avoiding overfitting in subsequent training.
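The early-stopping rule described above can be sketched as a small helper; `best_epoch` is a hypothetical name, and the actual training was presumably run with standard neural network tooling:

```python
def best_epoch(val_errors):
    """Early stopping: scan per-epoch validation errors, stop once the
    error has failed to decrease for two consecutive epochs, and return
    the index of the epoch with the lowest validation error seen so far."""
    best = float("inf")
    best_i = 0
    fails = 0
    for i, e in enumerate(val_errors):
        if e < best:
            best, best_i, fails = e, i, 0
        else:
            fails += 1
            if fails >= 2:   # two consecutive non-improving epochs
                break
    return best_i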
Figure 6(b) shows a regression graph of the training data and the validation data. The closer the slope of the line fitted to the sample points is to 1, the better the fit. The slopes of the fitted lines for the training and validation data are 1 and 0.974, respectively, showing that the network output can almost completely overlap the training output data and also achieves a good fit on the validation data.
4. Network Testing and Result Analysis
After network training is completed, data that did not participate in training are required to test the identification effect. 10,000 consecutive sets of data were extracted from the pyramidal neuron input and output time series not used in training and taken as test data. To observe the fitting effect intuitively, the training data, test data, and network output data are denormalized and restored to time series, as shown in Figures 7 and 8.
Figures 7(a) and 7(b) show the input and output time series of the training data, respectively, and Figure 7(c) records the errors between the training data and the network output. The output of the network basically coincides with the training data; only a small fraction of spike times are not exactly matched. The prediction results are shown in Figure 8. Although the prediction cannot reach the accuracy achieved on the training data, the error between the output voltage trajectory of the test data and the network output is small: the voltage trajectories are basically consistent, and most of the errors appear at spike times.
Neural signals have many firing characteristics, and the spike times are considered to be the main carriers of information [19, 20]. The spike time is therefore an important object of study in neural signal coding, and it is necessary to evaluate the network's prediction of spike times in addition to the various error indicators. The spike times of the test data and of the network's predicted firing sequence are marked in Figure 8(b). For these two spike time series, the coincidence rate $\Gamma$ is calculated as

$\Gamma = \dfrac{N_{\mathrm{coinc}} - \langle N_{\mathrm{coinc}} \rangle}{\frac{1}{2}\left( N_{\mathrm{model}} + N_{\mathrm{data}} \right)} \cdot \dfrac{1}{\mathcal{N}},$ (8)

where $\Gamma$ is the coincidence rate, $N_{\mathrm{coinc}}$ denotes the number of spike times that are accurately predicted, $\langle N_{\mathrm{coinc}} \rangle$ is the number of spike times that would coincide by chance if the network output were a Poisson process, $N_{\mathrm{model}}$ and $N_{\mathrm{data}}$ are the total numbers of spike times of the network output and the test data, respectively, and $\mathcal{N}$ is a normalization factor.
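A coincidence measure of this form (the coincidence factor Γ of Kistler et al.) can be sketched as follows; the greedy nearest-neighbor matching used here is an assumption, and the exact matching rule in the paper may differ:

```python
import numpy as np

def coincidence_rate(t_data, t_model, delta, duration):
    """Coincidence factor Gamma: count model spikes with a data spike
    within +/-delta, subtract the chance level expected from a Poisson
    train of the same rate, and normalize to 1 for a perfect match."""
    t_data = np.sort(np.asarray(t_data, dtype=float))
    t_model = np.sort(np.asarray(t_model, dtype=float))
    n_data, n_model = len(t_data), len(t_model)
    # Greedy matching: each data spike may be matched at most once.
    n_coinc = 0
    used = np.zeros(n_data, dtype=bool)
    for t in t_model:
        j = int(np.argmin(np.abs(t_data - t)))
        if not used[j] and abs(t_data[j] - t) <= delta:
            n_coinc += 1
            used[j] = True
    rate_model = n_model / duration
    expected = 2.0 * rate_model * delta * n_data   # chance coincidences
    norm = 1.0 - 2.0 * rate_model * delta          # normalization factor N
    return (n_coinc - expected) / (0.5 * (n_data + n_model) * norm)
```

By construction, identical spike trains give Γ = 1, while a Poisson train at the same rate gives Γ ≈ 0 on average.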
For the prediction of the test data in Figure 8, the spike coincidence rate is 80% even when the time window for judging coincident spikes is reduced to 1% of the mean interspike interval. The identified BP neural network can therefore effectively predict the firing information of pyramidal neurons.
5. Conclusions
This paper uses a BP neural network to identify the signal transmission process of pyramidal neurons. First, appropriate network inputs and outputs are selected and the data are normalized. Then, an appropriate network structure is chosen to complete the construction of the BP neural network, and the constructed network is trained with the training data. Finally, the prediction performance of the trained network is tested. In the test results, the BP neural network closely matches the voltage trajectories of untrained pyramidal neurons and predicts their spike times with high precision.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the Natural Science Foundation of Liaoning Province of China (grant no. 20180540127) and the Scientific Research Fund of Liaoning Provincial Education Department (grant nos. LG201930 and LG202028).
- M. Liang, M. A. Kramer, S. J. Middleton et al., “A unified approach to linking experimental, statistical and computational analysis of spike train data,” PLoS ONE, vol. 9, no. 1, Article ID e85269, 2014.
- C. Yuan, J. Wang, G. Yi et al., “Estimation of key parameters in adaptive neuron model according to firing patterns based on improved particle swarm optimization algorithm,” Modern Physics Letters B, vol. 31, no. 7, Article ID 1750060, 2017.
- C. D. Meliza, M. Kostuk, H. Huang, A. Nogaret, D. Margoliash, and H. D. I. Abarbanel, “Estimating parameters and predicting membrane voltages with conductance-based neuron models,” Biological Cybernetics, vol. 108, no. 4, pp. 495–516, 2014.
- C. Liu, J. Wang, B. Deng, X.-L. Wei, H.-T. Yu, and H.-Y. Li, “Variable universe fuzzy closed-loop control of tremor predominant Parkinsonian state based on parameter estimation,” Neurocomputing, vol. 151, no. 3, pp. 1507–1518, 2015.
- J. Hass, L. Hertäg, S. C. Q. Lombard et al., “A computational model of prefrontal cortex based on physiologically derived cellular parameter distributions,” BMC Neuroscience, vol. 14, no. 1, pp. 1-2, 2013.
- M. C.-K. Wu, S. V. David, and J. L. Gallant, “Complete functional characterization of sensory neurons by system identification,” Annual Review of Neuroscience, vol. 29, no. 1, pp. 477–505, 2006.
- A. L. Hodgkin and A. F. Huxley, “The dual effect of membrane potential on sodium conductance in the giant axon of Loligo,” The Journal of Physiology, vol. 116, no. 4, pp. 497–506, 1952.
- V. Baysal, Z. Saraç, and E. Yilmaz, “Chaotic resonance in Hodgkin-Huxley neuron,” Nonlinear Dynamics, vol. 97, no. 2, pp. 1275–1285, 2019.
- M. Khodashenas, G. Baghdadi, and F. Towhidkhah, “A modified Hodgkin-Huxley model to show the effect of motor cortex stimulation on the trigeminal neuralgia network,” The Journal of Mathematical Neuroscience, vol. 9, p. 4, 2019.
- S. Ostojic and N. Brunel, “From spiking neuron models to linear-nonlinear models,” PLoS Computational Biology, vol. 7, no. 1, Article ID e1001056, 2011.
- W. Pannakkong and V. N. Huynh, “A new hybrid linear-nonlinear model based on decomposition of discrete wavelet transform for time series forecasting,” in Proceedings of the International Symposium on Knowledge & Systems Sciences, Springer, Singapore, 2017.
- L. Hertäg, J. Hass, T. Golovko et al., “An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data,” Frontiers in Computational Neuroscience, vol. 6, p. 62, 2012.
- R. Hecht-Nielsen, “Theory of the backpropagation neural network,” in Proceedings of the International Joint Conference on Neural Networks, San Diego, CA, USA, February 1989.
- S.-i. Horikawa, T. Furuhashi, and Y. Uchikawa, “On fuzzy modeling using fuzzy neural networks with the back-propagation algorithm,” IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 801–806, 1992.
- L. X. Wang and J. M. Mendel, “Back-propagation fuzzy system as nonlinear dynamic system identifiers,” in Proceedings of the IEEE International Conference on Fuzzy Systems, pp. 1409–1418, San Diego, CA, USA, 1992.
- Z. Guo, M. Liu, and B. Li, “Circuit breaker fault analysis based on wavelet packet time-frequency entropy and LM algorithm to optimize BP neural network,” in Proceedings of the 37th Chinese Control Conference, Wuhan, China, July 2018.
- X. Tan, Z. Ji, and Y. Zhang, “Non-invasive continuous blood pressure measurement based on mean impact value method, BP neural network, and genetic algorithm,” Technology and Health Care, vol. 26, no. 6, pp. 1–15, 2018.
- B. Li and C. Liu, “Parallel BP neural network on single-chip cloud computer,” in IEEE International Conference on High Performance Computing & Communications, IEEE, New York, NY, USA, August 2015.
- R. VanRullen, R. Guyonneau, and S. J. Thorpe, “Spike times make sense,” Trends in Neurosciences, vol. 28, no. 1, pp. 1–4, 2005.
- D. Roy, P. Panda, and K. Roy, “Synthesizing images from spatio-temporal representations using spike-based backpropagation,” Frontiers in Neuroscience, vol. 13, p. 621, 2019.
- W. M. Kistler, W. Gerstner, and J. L. van Hemmen, “Reduction of the Hodgkin-Huxley equations to a single-variable threshold model,” Neural Computation, vol. 9, no. 5, pp. 1015–1045, 1997.
Copyright © 2020 Xiangyu Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.