Abstract

In this paper, the identification method of the backpropagation (BP) neural network is adopted to approximate the input-output mapping of neurons from their firing trajectories. First, the input and output data of a neuron model are used to train the BP network, so that the identified network reproduces the transfer characteristics of the model and can accurately predict the model's firing trajectory. The method is then applied to electrophysiological recordings of real neurons: the output of the identified BP network not only accurately fits the firing trajectories of the neurons used for training but also predicts, with high accuracy, the firing trajectories and spike times of neurons that were not involved in the training process.

1. Introduction

Large-scale, high-throughput data acquisition is revolutionizing the field of neuroscience [1]. Linking experimental data with models featuring various biophysical phenomena is central to understanding neuronal function and network behavior [2], and system identification of quantitative mathematical models has proved to be an essential tool for exploring this issue [3,4]. Doing so requires fitting detailed computational models to electrophysiological recordings [5,6].

Complex conductance-based neuronal models such as the Hodgkin–Huxley model describe neurons with multi-ion-channel mathematical formulations [7–9]. They can reproduce most characteristics of neuronal signals, but their complex structure and large number of parameters make system identification difficult. The cascade model is a simple phenomenological alternative that abstracts the complex dynamics of a neuron into a combination of a linear process and a nonlinear process and does not require knowledge of the structural details [10–12]. Only the input and output signals are needed: once identified by a suitable algorithm, the model can fit the spiking signals of neurons with a certain precision under a given input.

An artificial neural network is a mathematical model inspired by synaptic connection and information processing in the biological nervous system. It consists of interconnected neuron nodes and connection weights. Compared with traditional information processing methods, it overcomes the limitations of logic-symbol-based artificial intelligence in processing intuitive and unstructured information and is adaptive, self-organizing, and capable of real-time learning. The BP network is a multilayer feedforward network trained by the error backpropagation algorithm proposed by a team of scientists led by Rumelhart and McClelland [13–15]. It is one of the most widely used neural network models [16–18]. It can learn and store a large number of input-output mappings without requiring the mathematical equations describing these mappings to be specified in advance. The BP network uses the steepest descent method, continuously adjusting the weights and thresholds by backpropagating the output error until the sum of squared errors between the network output and the given output is minimized. Because the BP network combines a linear weighted summation of input signals with linear or nonlinear functions at the network nodes, it can itself be regarded as a cascade model.

In this paper, the input and output trajectories of neurons are used as identification features, and the signal transmission process of real neurons is identified using a BP neural network. The output of the trained network is consistent with the real signal, and for test input signals that did not participate in the training, the real output can also be predicted with a certain accuracy. Furthermore, the influence of the relevant network parameters on the identification is studied.

2. Materials and Methods

The BP neural network is a multilayer feedforward network whose topology comprises an input layer, hidden layers, and an output layer. The network is characterized by forward signal transmission and backward propagation of errors. In forward transmission, the input signal is processed layer by layer from the input layer to the output layer, and the state of each layer of neurons affects only the state of the next layer. If the output layer does not produce the desired output, the network switches to backpropagation and adjusts the weights and thresholds according to the prediction error, so that the predicted output continuously approaches the expected output. The topology of the BP neural network is shown in Figure 1.

In Figure 1, $X_1, X_2, \ldots, X_n$ are the input values of the BP neural network, $Y_1, Y_2, \ldots, Y_m$ are the predicted values, and $\omega_{ij}$ and $\omega_{jk}$ are the weights of the BP neural network. Each layer of nodes contains a linear or nonlinear excitation function $f$. Common excitation functions include the logsig, tansig, and purelin functions, whose input-output relations are shown in Figure 2. Only a suitable combination of excitation functions achieves good results when fitting data. When the number of input nodes is n and the number of output nodes is m, the BP neural network can be regarded as a nonlinear mapping from n independent variables to m dependent variables.
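As a point of reference, the three excitation functions named above (using their conventional MATLAB names) are transcriptions of standard definitions:

```python
import numpy as np

def logsig(x):
    # Logistic sigmoid: maps any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    # Hyperbolic-tangent sigmoid: maps any real input into (-1, 1).
    return np.tanh(x)

def purelin(x):
    # Linear (identity) transfer function.
    return x
```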

The process of system identification with a BP neural network is divided into three main steps: network construction, network training, and network testing, as shown in Figure 3. Network construction refers to selecting appropriate inputs and outputs, numbers of nodes, and excitation functions according to the characteristics of the system to be identified. Because the network parameters are initially assigned randomly, the network must then be trained on input and output data according to certain rules until its output is consistent with the expected output. Finally, to test whether the trained network reproduces the characteristics of the identified system, the agreement between the network output and the expected output must be computed on inputs that were not involved in training, and the resulting prediction performance serves as the index of the identification effect.

Before system identification through the BP neural network, the network should be trained so that it acquires associative memory and predictive capacity. The training steps are as follows (a minimal implementation is sketched after this list):

(1) Network Initialization. Determine the number of input layer nodes n, hidden layer nodes l, and output layer nodes m according to the system input and output sequences. Initialize the connection weights $\omega_{ij}$ between the input layer and the hidden layer and $\omega_{jk}$ between the hidden layer and the output layer, initialize the hidden layer threshold a and the output layer threshold b, and set the learning rate $\eta$ and the neuron excitation function.

(2) Hidden Layer Output Calculation. From the input vector $x$, the connection weights $\omega_{ij}$, and the hidden layer threshold $a$, calculate the hidden layer output:

$$H_j = f\Bigl(\sum_{i=1}^{n} \omega_{ij} x_i - a_j\Bigr), \quad j = 1, 2, \ldots, l. \qquad (1)$$

(3) Output Layer Output Calculation. Calculate the predicted output of the BP neural network from the hidden layer output $H$, the connection weights $\omega_{jk}$, and the threshold $b$:

$$O_k = \sum_{j=1}^{l} H_j \omega_{jk} - b_k, \quad k = 1, 2, \ldots, m. \qquad (2)$$

(4) Error Calculation. Calculate the network prediction error $e$ from the predicted output $O$ and the expected output $Y$:

$$e_k = Y_k - O_k, \quad k = 1, 2, \ldots, m. \qquad (3)$$

(5) Weight Update. Update the network connection weights $\omega_{ij}$ and $\omega_{jk}$ according to the prediction error (the $H_j(1 - H_j)$ factor is the derivative of a logsig hidden layer activation):

$$\omega_{ij} \leftarrow \omega_{ij} + \eta\, H_j (1 - H_j)\, x_i \sum_{k=1}^{m} \omega_{jk} e_k, \qquad \omega_{jk} \leftarrow \omega_{jk} + \eta\, H_j e_k, \qquad (4)$$

where $\eta$ is the learning rate.

(6) Threshold Update. Update the network node thresholds $a$ and $b$ according to the prediction error:

$$a_j \leftarrow a_j - \eta\, H_j (1 - H_j) \sum_{k=1}^{m} \omega_{jk} e_k, \qquad b_k \leftarrow b_k - \eta\, e_k. \qquad (5)$$

(7) Determine whether the iteration has converged, that is, whether the summed squared error has reached its target or the iteration limit has been reached. If not, return to step (2).
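To make these steps concrete, the following is a minimal NumPy sketch of one training iteration for a single hidden layer with a logsig activation and a linear (purelin) output. The toy target function, layer sizes, and initialization ranges are illustrative assumptions, not the paper's final configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: network initialization (n inputs, l hidden nodes, m outputs).
n, l, m, eta = 30, 20, 1, 0.01
w_ih = rng.uniform(-0.5, 0.5, (l, n))   # input-to-hidden weights
w_ho = rng.uniform(-0.5, 0.5, (m, l))   # hidden-to-output weights
a = np.zeros(l)                          # hidden layer thresholds
b = np.zeros(m)                          # output layer thresholds

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, y):
    """One pass of steps (2)-(6) for a single (input, target) pair."""
    global w_ih, w_ho, a, b
    H = logsig(w_ih @ x - a)             # step 2: hidden layer output
    o = w_ho @ H - b                     # step 3: linear output layer
    e = y - o                            # step 4: prediction error
    # Step 5: gradient-descent weight updates for the squared error.
    w_ih += eta * np.outer(H * (1 - H) * (w_ho.T @ e), x)
    w_ho += eta * np.outer(e, H)
    # Step 6: threshold updates.
    a -= eta * H * (1 - H) * (w_ho.T @ e)
    b -= eta * e
    return float(e @ e)

# Step 7: iterate until the summed squared error is small enough.
X = rng.uniform(-1, 1, (1000, n))
Y = X[:, :1] ** 2                        # toy target, for demonstration only
for epoch in range(100):
    sse = sum(train_step(x, y) for x, y in zip(X, Y))
    if sse < 1e-3:
        break
```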

3. Neural System Identification Based on BP Network

3.1. Data Sources

The network fitting data in this paper are the input and output signals of pyramidal neurons, obtained by patch-clamp recordings from rat cerebral cortex slices. The input signal is a time series of fluctuating synaptic current that simulates the in vivo environment. The output data are the time series of the corresponding membrane potentials of the pyramidal cells. The fluctuating input imitating the in vivo environment is constructed from two Poisson firing sequences, which imitate 100 excitatory input neurons firing at 10 Hz and 200 inhibitory input neurons firing at 20 Hz, respectively. The fluctuating input current is formed after artificial synaptic filtering.
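The paper does not specify the synaptic filter or simulation constants, but the construction can be sketched as follows: two Poisson spike count sequences are generated and passed through an assumed exponential synaptic filter. The time step, duration, time constant, and amplitudes below are all placeholder values.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 1.0                      # 0.1 ms step, 1 s of input (assumed)
steps = int(T / dt)

def poisson_population(n_neurons, rate_hz):
    # Summed spike count of n independent Poisson neurons per time bin.
    return rng.poisson(n_neurons * rate_hz * dt, steps)

exc = poisson_population(100, 10.0)    # 100 excitatory inputs at 10 Hz
inh = poisson_population(200, 20.0)    # 200 inhibitory inputs at 20 Hz

def syn_filter(spikes, tau=5e-3, amp=1.0):
    # Exponential synaptic filter (assumed form): each spike injects a
    # current pulse that decays with time constant tau.
    current = np.zeros(steps)
    decay = np.exp(-dt / tau)
    for t in range(1, steps):
        current[t] = current[t - 1] * decay + amp * spikes[t]
    return current

# Net fluctuating input: excitation minus inhibition (arbitrary amplitudes).
I = syn_filter(exc, amp=1.0) - syn_filter(inh, amp=0.5)
```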

The membrane potential at a given time is used as the output of the BP neural network. The current value of the membrane potential is determined not only by the present input current but also by the input current values and membrane potential values at previous moments, as shown in Figure 4. The n input data at a sampling point (the values at the circles in Figure 4) are chosen as I(t−a), ..., I(t) and V(t−b), ..., V(t−1) (with a+b+1 = n), and the output datum (the value at the cross in Figure 4) is V(t) (m = 1), where n = 30 and a = 20 (hence b = 9).
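A sketch of assembling these sliding-window samples from the recorded current and voltage series (the array names and helper function are illustrative):

```python
import numpy as np

def build_samples(I, V, a=20, b=9):
    """Assemble (input, output) pairs as described above:
    inputs are I(t-a)...I(t) plus V(t-b)...V(t-1) (a+b+1 = n = 30),
    and the output is V(t)."""
    X, Y = [], []
    for t in range(max(a, b), len(I)):
        x = np.concatenate([I[t - a:t + 1], V[t - b:t]])  # 21 + 9 = 30 values
        X.append(x)
        Y.append(V[t])
    return np.array(X), np.array(Y)
```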

It is necessary to normalize the network input and output data before training, that is, to map the data linearly into the interval [−1, 1], so as to avoid large prediction errors caused by differences in order of magnitude between the data. The normalization formula is

$$x' = \frac{2\,(x - x_{\min})}{x_{\max} - x_{\min}} - 1, \qquad (6)$$

where $x_{\min}$ and $x_{\max}$ are the minimum and maximum values in the sequence, respectively. The data sequence generated by the network must be denormalized to recover values in the original range; denormalization simply inverts formula (6). In addition, to standardize the effect evaluation, the subsequent error statistics are also based on the normalized data.
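Formula (6) and its inverse can be implemented directly; the function names are illustrative:

```python
import numpy as np

def normalize(x):
    # Map the sequence linearly into [-1, 1], per formula (6).
    xmin, xmax = x.min(), x.max()
    return 2.0 * (x - xmin) / (xmax - xmin) - 1.0, (xmin, xmax)

def denormalize(x_norm, bounds):
    # Invert formula (6) to recover the original value range.
    xmin, xmax = bounds
    return (x_norm + 1.0) * (xmax - xmin) / 2.0 + xmin
```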

3.2. Network Construction

After determining the number of nodes in the input and output layers, the structure of the hidden layer and the type of node excitation function must be selected to fix the basic architecture of the network. The hidden layer structure has a great impact on the prediction accuracy of the BP neural network: with too few nodes, the network learns poorly, needs more training iterations to reach the goal, and achieves lower training accuracy; with too many nodes, training time grows and overfitting may occur, that is, the fit to untrained test data degrades because the network pursues accuracy on the training data too aggressively. For simpler mapping relationships, a single hidden layer speeds up training while preserving fitting accuracy; for more complex mappings, adding hidden layers can improve prediction accuracy to a certain extent. The choice of excitation function for each layer also has an important influence on the identification process. Generally, the input to the hidden layer is the weighted sum of the input data, that is, the input layer nodes use the purelin function, and only a suitable combination of the hidden layer and output layer functions needs to be chosen.

In order to construct a suitable BP neural network, we conducted multiple sets of simulation experiments, compared the test results with the expected data, and statistically analyzed the errors. The mean absolute error ab_error and the error variance sq_error of each test are recorded as

$$\mathrm{ab\_error} = \frac{1}{num}\sum_{k=1}^{num}\left|y_k - o_k\right|, \qquad (7)$$

$$\mathrm{sq\_error} = \frac{1}{num}\sum_{k=1}^{num}\left(y_k - o_k\right)^2, \qquad (8)$$

where num is the number of sample input-output data sets and $y_k$ and $o_k$ are the expected output and the network output, respectively, both normalized.
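Both statistics in formulas (7) and (8) are one-liners on the normalized outputs:

```python
import numpy as np

def ab_error(y_expected, y_network):
    # Mean absolute error over the normalized samples, formula (7).
    return np.mean(np.abs(y_expected - y_network))

def sq_error(y_expected, y_network):
    # Mean squared error over the normalized samples, formula (8).
    return np.mean((y_expected - y_network) ** 2)
```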

Because the initial parameters of the network are given randomly, the identification results are highly random. The identification process is repeated 10 times in each case, and the average of the average absolute error and error variance is taken as the identification effect index in this case.

Figure 5(a) shows the error statistics of the identification results for a double hidden layer network with 20 nodes per layer (20 × 20) and a single hidden layer network. The mean absolute error and error variance of the double hidden layer network are reduced by nearly half compared with the single hidden layer network. Figure 5(b) compares four better-performing combinations of hidden layer and output layer excitation functions; l-p denotes the case where the hidden layer uses the logsig function and the output layer the purelin function, and so on. The fitting error of the l-t combination was found to be the smallest. Therefore, a 20 × 20 double hidden layer structure with the l-t excitation function combination was finally selected to construct the BP neural network for identifying the pyramidal neuron signal transmission system.
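For reference, a forward pass through the selected 30-20-20-1 architecture with the l-t function combination might look as follows; the weights shown are random placeholders standing in for trained values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Layer sizes: 30 inputs, two hidden layers of 20 logsig nodes,
# and 1 tansig output node (the "l-t" combination selected above).
sizes = [30, 20, 20, 1]
weights = [rng.uniform(-0.5, 0.5, (sizes[i + 1], sizes[i])) for i in range(3)]
biases = [np.zeros(sizes[i + 1]) for i in range(3)]

logsig = lambda x: 1.0 / (1.0 + np.exp(-x))
tansig = np.tanh

def forward(x):
    h1 = logsig(weights[0] @ x - biases[0])   # first hidden layer
    h2 = logsig(weights[1] @ h1 - biases[1])  # second hidden layer
    return tansig(weights[2] @ h2 - biases[2])

v_pred = forward(rng.uniform(-1, 1, 30))      # one normalized prediction
```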

3.3. Network Training

The training steps of the BP neural network were described in detail above. After choosing the network structure, the combination of excitation functions, the learning rate (η = 0.01), and the maximum number of training epochs (100), training of the neural network can begin. In this paper, 10,000 sets of data are extracted from the input and output time series of pyramidal neurons as training data, and their temporal order is shuffled for network training.

Generally speaking, the smaller the training target error, the better the fit to the training data. However, as training proceeds, the neural network may overfit. To avoid this problem, part of the training data is set aside as validation data that does not participate in training and is used only for error calculation. As shown in Figure 6(a), the error on the training data decreases continuously as training proceeds, while the error on the validation data stops decreasing after the fifth epoch and rises slightly thereafter. Therefore, training is stopped once the validation error fails to decrease for two consecutive epochs, and the network state preceding this point is taken as the best identification result, avoiding the overfitting of later training. A sketch of this early stopping rule is given below.
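In the following sketch, `net` is a hypothetical object with `train_epoch`, `evaluate`, `snapshot`, and `restore` methods, standing in for whatever training framework is used; only the stopping logic itself follows the rule described above.

```python
def train_with_early_stopping(net, train_data, val_data,
                              max_epochs=100, patience=2):
    """Stop when the validation error fails to improve for `patience`
    consecutive epochs; keep the best network state seen so far."""
    best_err, best_state, bad_epochs = float("inf"), None, 0
    for epoch in range(max_epochs):
        net.train_epoch(train_data)
        val_err = net.evaluate(val_data)
        if val_err < best_err:
            best_err, best_state, bad_epochs = val_err, net.snapshot(), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    if best_state is not None:
        net.restore(best_state)       # roll back to the best epoch
    return net
```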

Figure 6(b) shows a regression plot of the training data and the validation data. The closer the slope of the fitted line through the sample points is to 1, the better the fit. The slopes of the fitted lines for the training and validation data are 1 and 0.974, respectively. The network output thus almost completely overlaps the training output data and also achieves a good fit on the validation data.

4. Network Testing and Result Analysis

After network training is completed, data that did not participate in training are required to test the identification effect. 10,000 consecutive sets of data were extracted from the pyramidal neuron input and output time series outside the training set as test data. To observe the fitting effect intuitively, the training data, test data, and network output data are denormalized and restored to time series, as shown in Figures 7 and 8.

Figures 7(a) and 7(b) show the input and output time series of the training data, respectively, and Figure 7(c) records the error between the training data and the network output. The network output essentially coincides with the training data; only a small fraction of spike times is not exactly matched. The prediction results are shown in Figure 8. Although the prediction cannot reach the accuracy achieved on the training data, the error between the output voltage trajectory of the test data and the network output is small: the voltage trajectories are basically consistent, and most of the errors occur at spike times.

The neural signal has many firing characteristics, and the spike times are considered the main carriers of information [19,20]. Spike timing is therefore an important object of study in neural coding, and it is necessary to evaluate the network's prediction of spike times in addition to the various error indicators. The spike times of the test data and of the network's predicted firing sequence are marked in Figure 8(b). For these two spike trains, the coincidence rate [21] is calculated as

$$\Gamma = \frac{N_{\mathrm{coinc}} - \langle N_{\mathrm{coinc}}\rangle}{\frac{1}{2}\left(N_{\mathrm{output}} + N_{\mathrm{data}}\right)}\cdot\frac{1}{\mathcal{N}}, \qquad (9)$$

where Γ is the coincidence rate, $N_{\mathrm{coinc}}$ is the number of spike times that are accurately predicted (within a small coincidence window), $\langle N_{\mathrm{coinc}}\rangle$ is the number of coincidences expected by chance if the network output were a Poisson train of the same rate, $N_{\mathrm{output}}$ and $N_{\mathrm{data}}$ are the total numbers of spikes in the network output and the test data, respectively, and $\mathcal{N}$ is a normalization factor.
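A sketch of formula (9), assuming the chance term ⟨N_coinc⟩ = 2fΔN_data and the normalization 𝒩 = 1 − 2fΔ of the standard coincidence factor, where f is the mean firing rate of the network output and Δ the coincidence window:

```python
import numpy as np

def coincidence_rate(t_data, t_model, delta):
    """Coincidence factor Gamma between recorded and predicted spike
    times (arrays of times in seconds); delta is the coincidence window,
    assumed small relative to the mean interspike interval."""
    n_data, n_model = len(t_data), len(t_model)
    # Count predicted spikes falling within +/- delta of a recorded spike.
    n_coinc = sum(np.any(np.abs(t_data - t) <= delta) for t in t_model)
    duration = (max(t_data.max(), t_model.max())
                - min(t_data.min(), t_model.min()))
    rate_model = n_model / duration            # f, mean output firing rate
    expected = 2.0 * rate_model * delta * n_data   # chance coincidences
    norm = 1.0 - 2.0 * rate_model * delta          # normalization factor
    return (n_coinc - expected) / (0.5 * (n_data + n_model) * norm)
```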

For the prediction on the test data in Figure 8, the spike coincidence rate is 80% even when the window for counting a coincidence is reduced to 1% of the mean interspike interval. The identified BP neural network can therefore effectively predict the firing information of pyramidal neurons.

5. Conclusions

This paper uses a BP neural network to identify the signal transmission process of pyramidal neurons. First, appropriate network inputs and outputs are selected and the data are normalized. Then, a suitable network structure is chosen and the BP neural network is constructed. The constructed network is subsequently trained with the training data, and finally the prediction performance of the trained network is tested. In the test results, the BP neural network closely matches the untrained pyramidal neuron voltage trajectory and predicts the spike times with high precision.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Natural Science Foundation of Liaoning Province of China (grant no. 20180540127) and the Scientific Research Fund of Liaoning Provincial Education Department (grant nos. LG201930 and LG202028).