Research Article | Open Access

Masaaki Takahashi, Kiyohisa Natsume, "Automatic Estimation of the Dynamics of Channel Conductance Using a Recurrent Neural Network", *Advances in Artificial Neural Systems*, vol. 2009, Article ID 724092, 10 pages, 2009. https://doi.org/10.1155/2009/724092

# Automatic Estimation of the Dynamics of Channel Conductance Using a Recurrent Neural Network

**Academic Editor:** Akira Imada

#### Abstract

In order to simulate neuronal electrical activities, we must estimate the dynamics of channel conductances from physiological experimental data. However, this approach requires the formulation of differential equations that express the time course of channel conductance. If, on the other hand, the dynamics can be estimated automatically, neuronal activities can be simulated easily. By using a recurrent neural network (RNN), it is possible to estimate the dynamics of channel conductances without formulating the differential equations. In the present study, we estimated the dynamics of the Na^{+} and K^{+} conductances of a squid giant axon using two different fully connected RNNs and were able to reproduce various neuronal activities of the axon. The reproduced activities were an action potential, a threshold, a refractory phenomenon, a rebound action potential, and periodic action potentials under constant stimulation. RNNs can also be trained on channels other than the Na^{+} and K^{+} channels. Therefore, using our RNN estimation method, the dynamics of channel conductance can be estimated automatically, and neuronal activities can be simulated using the channel RNNs. An RNN is thus a useful tool for estimating the dynamics of the channel conductance of a neuron, and the method presented here makes it possible to simulate neuronal activities more easily than previous methods do.

#### 1. Introduction

The electrical activities of a neuron can be studied by simulating it, that is, without performing a neurophysiological experiment on a real neuron. NEURON [1] and GENESIS [2] are well-known simulators that simulate neuronal activities and neuronal network activities. Neuronal activities are induced by many channels located in the plasma membrane of a neuron. Channel currents flow through these channels, change the neuronal membrane potential, and induce neuronal activities. In order to simulate neuronal activities, it is first necessary to estimate the dynamics of channel conductance.

There are two methods for estimating the dynamics of channel conductance. One method is to estimate the dynamics from the time course of the membrane potential of a neuron [3]. In this method, it is first assumed that some putative channels exist in a neuron, and then some parameters of the channel dynamics are estimated from the time course of the membrane potentials. The other method is to estimate the dynamics from the time course of the channel conductance at a clamped voltage. The former is called a top-down approach, while the latter is a bottom-up approach. In the top-down approach, the dynamics of the conductance of a putative channel are simply estimated from the activities of a neuron, and it is unknown whether the putative channel actually exists. In the bottom-up approach, on the other hand, the dynamics of the conductance of a real channel are estimated from experimental electrophysiological data. Our ultimate objective is to develop a system for simulating neuronal activities easily, in which one can select several channels from a database, implement them in a putative neuron, and simulate the neuronal activities. Hence, we have taken the bottom-up approach to estimating the channel dynamics. In addition, when a new channel is found, our system can automatically estimate the channel dynamics, and these dynamics can easily be stored in the database of the system.

In order to determine conductance dynamics from the experimentally recorded time course of the channel conductance of a real neuron, it is conventionally necessary to formulate differential equations that express the relationship between the membrane potential and the channel conductance. One method of estimating the dynamics is to approximate them using Boltzmann and Gaussian functions [4]. However, in this method, some of the parameters are not determined automatically and must therefore be adjusted by hand. In addition, this method is appropriate only for certain channels, not for others. The other approach, which neuroscientists usually follow, is to provide channel data to neurocomputational researchers, who formulate the equations and adjust the parameters. However, if we can automatically estimate channel dynamics from experimental data, we can simulate neuronal activities very easily using the resulting channels. Hence, we have started developing an estimation method for channel dynamics. To estimate channel dynamics, we have used a recurrent neural network (RNN), because an RNN can learn and reproduce a time sequence. To our knowledge, this method of estimating the dynamics of the channel conductance of neurons by an RNN is unique to our work.

In a previous study, we simulated the neuronal activities of a squid giant axon using RNNs that learned the dynamics of the gate variables of the Na^{+} and K^{+} channels, and we were thus able to reproduce the neuronal activities of the axon [5]. However, the time course of the gate variables cannot usually be determined by experiment. What can usually be recorded is the channel conductance at clamped membrane potentials in a voltage-clamp experiment, in which the membrane potential is held constant while the currents across the cell membrane are measured. Hence, we need to develop an automatic estimation method that employs channel conductance data. The purpose of the present study is therefore to develop an RNN estimation method in which channel dynamics are estimated from the channel conductance determined by voltage-clamp experiments. Note that our aim in this paper is not to construct a Hodgkin-Huxley model using RNNs.

#### 2. Methods

##### 2.1. Squid Giant Axon

In this study, we simulated the neuronal activities of the squid giant axon using the dynamics of channel conductances estimated by RNNs. The squid giant axon has two channels, namely, Na^{+} and K^{+}. In order to train the RNNs, we used data on the conductances of these two channels. However, we could not obtain actual conductance data recorded by neuroscientists; hence, for training the RNNs, we used the channel conductances calculated using the Hodgkin-Huxley (HH) model [6]. The details of the HH model are described in the appendix.
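Because the training data come from the HH model rather than from recordings, the voltage-clamp protocol can be mimicked numerically. The sketch below is our own minimal illustration, not the authors' code: it integrates the HH gate variables at a clamped potential with the 4th-order Runge-Kutta method and returns the Na^{+} and K^{+} conductance time courses. The rate-constant expressions and maximum conductances are the standard HH values in the modern sign convention (rest near -65 mV), which may differ from the exact parameterization used in the paper.

```python
import numpy as np

# Standard HH rate constants (V in mV, modern convention, rest ~ -65 mV).
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

G_NA_MAX, G_K_MAX = 120.0, 36.0  # maximum conductances, mS/cm^2

def gate_derivs(V, gates):
    """dm/dt, dh/dt, dn/dt at a fixed membrane potential V."""
    m, h, n = gates
    return np.array([
        alpha_m(V) * (1 - m) - beta_m(V) * m,
        alpha_h(V) * (1 - h) - beta_h(V) * h,
        alpha_n(V) * (1 - n) - beta_n(V) * n,
    ])

def clamped_conductances(V_clamp, t_max=10.0, dt=0.01, V_rest=-65.0):
    """4th-order Runge-Kutta integration of the gates at a clamped
    potential; returns the g_Na and g_K time courses (mS/cm^2)."""
    # Start from the steady state at the resting potential.
    def inf(a, b, V): return a(V) / (a(V) + b(V))
    gates = np.array([inf(alpha_m, beta_m, V_rest),
                      inf(alpha_h, beta_h, V_rest),
                      inf(alpha_n, beta_n, V_rest)])
    g_na, g_k = [], []
    for _ in range(int(t_max / dt)):
        k1 = gate_derivs(V_clamp, gates)
        k2 = gate_derivs(V_clamp, gates + 0.5 * dt * k1)
        k3 = gate_derivs(V_clamp, gates + 0.5 * dt * k2)
        k4 = gate_derivs(V_clamp, gates + dt * k3)
        gates = gates + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        m, h, n = gates
        g_na.append(G_NA_MAX * m**3 * h)   # g_Na = g_Na_max * m^3 * h
        g_k.append(G_K_MAX * n**4)          # g_K  = g_K_max  * n^4
    return np.array(g_na), np.array(g_k)
```

Calling `clamped_conductances(V)` for a family of clamped potentials yields conductance traces of the kind shown in Figures 2 and 3: a transient Na^{+} conductance and a sustained K^{+} conductance.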

##### 2.2. Recurrent Neural Network

An RNN is a neural network having recurrent connections. By means of these connections, past outputs of units contribute to the current outputs, thereby making it possible to estimate the dynamics of a phenomenon [7]; to do so, the RNN is trained on a time series of the phenomenon. There are several types of RNNs, for example, a fully connected RNN (FRNN), an Elman-type RNN (ERNN) [8], and a time-delayed neural network (TDNN) [9]. In this study, we used an FRNN. Figure 1 shows the structure of the RNN.

The present RNN has eleven input units, ten hidden units, one output unit, and a bias unit. The hidden and output units have recurrent connections. In the following, the suffix *i* indicates the unit number, *t* denotes time, *I* and *H* represent the assemblies of input and hidden units, respectively, and *B* and *O* represent the bias unit and the output unit, respectively; x_{j}(t) is the output of an input unit or the bias unit at time *t*, and y_{j}(t) is the output of a hidden unit or the output unit. The output of a hidden unit *i* or the output unit *i* at time *t* is calculated from the following equations:

$$y_i(t) = f\bigl(u_i(t)\bigr), \quad i \in H \cup O, \tag{1}$$

$$u_i(t) = \sum_{j \in I \cup B} w_{ij}\, x_j(t) + \sum_{j \in H \cup O} w_{ij}\, y_j(t-1). \tag{2}$$

Here, u_{i}(t) is the net input to a hidden unit or the output unit at time *t*, w_{ij} denotes the connection weight from unit *j* to unit *i*, and *f* is the output function of the hidden and output units; a sigmoid function was adopted as the output function in this study.
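As a concrete illustration of the forward computation just described, the following minimal sketch (our own, with an assumed random weight initialization, not the authors' implementation) performs one step of a fully connected RNN with eleven inputs, ten hidden units, one output unit, and a bias unit:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

class FRNN:
    """Minimal fully connected RNN matching the description: external
    inputs plus a bias feed every hidden/output unit, and all
    hidden/output units are recurrently connected to one another."""
    def __init__(self, n_in=11, n_hidden=10, n_out=1, seed=0):
        rng = np.random.default_rng(seed)
        n_rec = n_hidden + n_out
        # w_in[i, j]: weight from input/bias unit j to recurrent unit i
        self.w_in = rng.normal(0.0, 0.1, (n_rec, n_in + 1))
        # w_rec[i, j]: weight from recurrent unit j to recurrent unit i
        self.w_rec = rng.normal(0.0, 0.1, (n_rec, n_rec))
        self.n_out = n_out

    def step(self, x, y_prev):
        """One time step: net input is the weighted sum of the current
        external inputs and the previous hidden/output activations."""
        x = np.append(x, 1.0)          # bias unit outputs a constant 1
        u = self.w_in @ x + self.w_rec @ y_prev
        return sigmoid(u)              # activations of all hidden+output units
```

In use, the last `n_out` entries of the returned vector are the network's output; the whole vector is fed back as `y_prev` at the next step.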

##### 2.3. Learning Algorithm

The algorithm called back propagation through time (BPTT) [10] was used to train the RNN. The RNN was trained by using the time course of channel conductance as the teacher signal. The error *E* between the RNN output and the teacher signal between times t_{0} and t_{1} is defined by

$$E = \frac{1}{2} \sum_{t=t_0}^{t_1} \bigl(d(t) - y(t)\bigr)^2. \tag{3}$$

In (3), d(t) and y(t) denote the teacher signal and the output of the output unit, respectively. In order to decrease the error *E*, the connection weights were updated. The quantity by which the connection weight w_{ij} is updated is denoted by Δw_{ij} and is obtained by partially differentiating (3) with respect to the weight:

$$\Delta w_{ij} = -\eta\, \frac{\partial E}{\partial w_{ij}}. \tag{4}$$

Here, η is called the learning rate and is a positive constant. The gradient can be calculated recursively in the reverse order of time using the error signal δ_{i}(t) = -∂E/∂u_{i}(t):

$$\delta_i(t) = f'\bigl(u_i(t)\bigr)\Bigl[e_i(t) + \sum_{k \in H \cup O} w_{ki}\, \delta_k(t+1)\Bigr], \tag{5}$$

where e_{i}(t) = d(t) - y_{i}(t) if *i* is the output unit and e_{i}(t) = 0 otherwise, with δ_{k}(t_{1}+1) = 0. Until the average error decreased to a criterion, the weights were updated using (4).
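To make the weight update (4) concrete, the toy sketch below trains a single-unit recurrent net on a constant target. For brevity it estimates ∂E/∂w numerically by central differences instead of the analytic backward recursion of BPTT; it is an illustration of gradient descent on the error (3), not the authors' implementation.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def run(w, xs):
    """Unroll a one-unit recurrent net: y(t) = f(w0*x(t) + w1*y(t-1))."""
    y, ys = 0.0, []
    for x in xs:
        y = sigmoid(w[0] * x + w[1] * y)
        ys.append(y)
    return np.array(ys)

def error(w, xs, ds):
    """E = 1/2 * sum_t (d(t) - y(t))^2, as in (3)."""
    return 0.5 * np.sum((ds - run(w, xs)) ** 2)

def train(xs, ds, eta=0.2, epochs=1000, eps=1e-6):
    """Gradient descent: delta w = -eta * dE/dw, as in (4).
    The gradient through time is taken numerically here, standing in
    for the analytic BPTT recursion."""
    w = np.array([0.1, 0.1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for i in range(len(w)):
            wp, wm = w.copy(), w.copy()
            wp[i] += eps
            wm[i] -= eps
            grad[i] = (error(wp, xs, ds) - error(wm, xs, ds)) / (2 * eps)
        w -= eta * grad
    return w
```

After training, the unrolled output settles near the teacher value, showing the error (3) being driven down by repeated applications of (4).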

##### 2.4. RNN Model of Squid Giant Axon

The squid giant axon has two channels, namely, Na^{+} and K^{+}. In our RNN model, the dynamics of the Na^{+} and K^{+} conductances were implemented in two different RNNs; the RNNs trained on the dynamics of the Na^{+} and K^{+} conductances are called the Na^{+}-RNN and the K^{+}-RNN, respectively. In a physiological experiment, channel conductances are recorded by clamping the membrane potential at different voltages. In our model, we mimicked this experimental method: the clamped membrane potential was the input, and the channel conductance at that potential was the output of an RNN. The pairs of clamped potentials and the respective channel conductances recorded in the previous five time steps were also used as input data to the RNN; thus, eleven values were input to the RNN. After an RNN is trained, inputting a membrane potential to it produces the time course of the channel conductance at that potential. In this study, the training data for the Na^{+} and K^{+} conductances were calculated from (A.2) and (A.3) of the appendix by using the 4th-order Runge-Kutta (RK) method (Figures 2 and 3).

The calculation time step was millisecond. The input data, the membrane potential and the channel conductances, were scaled into the range 0.01 to 0.99 so that they lay within the output range of the RNN units. The error criterion for the Na^{+} conductance was , and that for the K^{+} conductance was . The criterion for the K^{+} conductance was smaller than that for the Na^{+} conductance because the K^{+} conductance was easier to estimate.
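The scaling of the inputs into the range 0.01 to 0.99 can be sketched as a simple affine map. Normalizing each trace by its own minimum and maximum is our assumption here; the paper does not state which reference values were used.

```python
import numpy as np

def rescale(x, lo=0.01, hi=0.99):
    """Affine map of a trace into [lo, hi], keeping targets inside the
    open output range of the sigmoid units."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())
```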

In a conventional method (see the appendix), when we calculate the neuronal activities of the squid giant axon, we first determine the differential equations of the gate variables and channel conductances and then calculate the membrane potential using them. In our RNN model, on the other hand, the RNNs take the place of the differential equations of the gate variables and channel conductances, while the membrane potential is calculated using a differential equation ((A.1) in the appendix), as in a conventional method. In order to compare the data calculated using RNNs with those calculated using a conventional method, we also calculated the neuronal activities of the squid giant axon from the differential equations of the gate variables and channel conductances by using the RK method. The neurons whose activities were calculated using the RNNs and the RK method are called the RNN-neuron and the squid giant (SG-) neuron, respectively. The calculation flows of the SG-neuron and the RNN-neuron are shown in Figure 4.

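The RNN-neuron's calculation flow can be sketched as a loop in which, at each time step, the trained channel networks supply the conductances and the membrane equation is advanced by one step. The sketch below is a schematic of that flow, not the authors' code: the two channel models are represented by placeholder callables, forward Euler stands in for the RK method, and the constants are standard HH values.

```python
import numpy as np

# Standard HH constants: capacitance (uF/cm^2), equilibrium potentials
# (mV), and leak conductance (mS/cm^2).
C_M, E_NA, E_K, E_L, G_L = 1.0, 50.0, -77.0, -54.4, 0.3

def simulate(g_na_model, g_k_model, i_inj, dt=0.01, v0=-65.0):
    """Sketch of the RNN-neuron loop: at each step the channel models
    map the recent membrane-potential history to conductances, and the
    membrane equation (A.1) updates the potential.

    `g_na_model` / `g_k_model` stand in for the trained RNNs: callables
    taking the potential history (current value plus five past steps)
    and returning a conductance."""
    v, vs = v0, []
    history = [v0] * 6
    for i_t in i_inj:
        g_na = g_na_model(history)
        g_k = g_k_model(history)
        dv = (i_t - g_na * (v - E_NA) - g_k * (v - E_K)
                   - G_L * (v - E_L)) / C_M
        v += dt * dv                       # forward Euler in place of RK
        history = history[1:] + [v]
        vs.append(v)
    return np.array(vs)
```

With the channel models replaced by constants (e.g. `simulate(lambda h: 0.0, lambda h: 0.0, np.zeros(1000))`), the potential simply relaxes toward the leak equilibrium; with trained models it would trace out the action potentials of Figures 7 to 10.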

#### 3. Results

##### 3.1. Training of the Na^{+}- and K^{+}-RNNs

An RNN can be trained on the dynamics of the Na^{+} conductance; in this study, the Na^{+}-RNN was able to approximate the training data, as shown in Figure 5.

In addition to the Na^{+}-RNN, we were also able to train the K^{+}-RNN (Figure 6).

##### 3.2. Neuronal Activities of the RNN-Neuron and SG-Neuron

We now present the results for the RNN-neuron. Initially, a weak stimulus () and a strong stimulus () were provided for millisecond in order to verify the presence of a threshold in the RNN-neuron. The weak stimulus did not induce an action potential, but the strong stimulus did (Figure 7(a)). These results indicate the presence of a threshold in the RNN-neuron, as in the SG-neuron. Figure 7(b) shows the relation between stimulation intensities and the peak values of the action potentials. The RNN- and SG-neurons generated action potentials when the stimulation intensity was greater than or equal to . Figure 7(c) shows the time course of the channel conductances of the RNN-neuron. Interestingly, these time courses were not included in the training data of the RNNs.


Next, we examined whether the RNN-neuron has a refractory period (Figure 8). The first stimulation was provided at milliseconds with an intensity of for millisecond, and the second stimulation was provided or milliseconds after the first one. The duration of these stimuli was the same as that shown in Figure 7. The first stimulus evoked an action potential. When the second stimulus was provided milliseconds later at an intensity of , no action potential was generated in the RNN-neuron (Figure 8(a)). Even in the case of a larger intensity () at the second stimulus, no action potential was generated (Figure 8(b)). Further, when the onset time of the second stimulus was milliseconds later than that of the first stimulus, the action potential was not generated in the case of the small intensity (), either (Figure 8(c)), but it was generated in the case of the larger intensity () (Figure 8(d)). Thus, the RNN-neuron has an absolute and relative refractory period, similar to the SG-neuron.


The squid giant axon exhibits a phenomenon in which a hyperpolarizing pulse of 5-millisecond duration induces a rebound action potential [6]. The RNN-neuron also exhibited a rebound action potential, as shown in Figure 9, though the peak latency of this action potential was slightly longer than that of the SG-neuron.

In the last part of our experiment, a constant stimulus with an intensity of was applied to the neurons. Periodic action potentials were generated in both the RNN- and SG-neurons (Figure 10(a)). When the stimulation intensity was increased to , the frequency of the action potentials increased in both neurons (Figure 10(b)). The periods of the action potentials generated at different stimulation intensities are shown in Figure 10(c). The RNN- and SG-neurons had almost the same characteristics.


##### 3.3. Dynamics of Another Channel Conductance by an RNN

We applied our RNN method to a channel other than those of the squid giant axon. The Ca^{2+} channel plays a role in burst generation in hippocampal pyramidal cells and in the entry of extracellular Ca^{2+} into the neuron. The Ca^{2+} channel found in hippocampal pyramidal cells by Kay and Wong [11] is voltage-dependent. An RNN was trained on the dynamics of the Ca^{2+} channel conductance, and the time course of the conductance as a function of the membrane potential was reproduced by the RNN, as shown in Figures 11 and 12.

#### 4. Discussion

We were able to train the RNNs on the dynamics of the Na^{+} and K^{+} channel conductances of the squid giant axon (Figures 2, 3, 4, and 5), and we were able to reproduce the various neuronal activities of the axon: the threshold phenomenon, the refractory period, the rebound action potential, and the constant-stimulation-induced action potentials (Figures 7, 8, 9, and 10). During these neuronal activities, the Na^{+}- and K^{+}-RNNs produced time courses of the channel conductances different from those on which they were trained. Other channel dynamics could also be reproduced by this method: the dynamics of the Ca^{2+} channel conductance in the hippocampus were reproduced by an RNN trained on them (Figures 11 and 12). Hence, our proposed RNN method can potentially serve as a general method for the automatic estimation of the dynamics of channel conductance determined by a physiological experiment.

In the present study, we used the mean square error (MSE) between the training data and the output of an RNN to evaluate the training error (3). When the average MSE between the Na^{+} conductance and the RNN output had a high value such as 10^{-2}, which is larger than the criterion, the neuronal activities could not be reproduced well. The error criterion of the Na^{+}-RNN used in this study was below , and the various neuronal activities could then be reproduced. Thus, it is appropriate to evaluate the reproduction level of neuronal activities using the MSE.

We used an FRNN to estimate the dynamics of channel conductance in the present study. We also reproduced some neuronal activities using other types of RNNs, namely, an Elman-type RNN (ERNN) [8] and a time-delayed neural network (TDNN) [9]; in both cases, the inputs and outputs were the same as those in the present study. To compare the ability of the RNNs to reproduce neuronal activities, we prepared ten Na^{+}-RNNs and ten K^{+}-RNNs, combined them, and made one hundred RNN-neurons of each of three types: TDNN, ERNN, and FRNN. We set the numbers of hidden units of the TDNN, the ERNN, and the FRNN to twenty, eleven, and ten, respectively, so that the total number of connections was equal across the three networks. We then calculated the maximum value of the cross-correlation function (MVCCF) between the outputs of the SG-neuron and those of the three types of RNN-neurons in order to evaluate the reproduction level of the neuronal activities. The stimuli used for this purpose were the same as those shown at the bottom of Figure 7(a). The average MVCCFs of the TDNN, ERNN, and FRNN were 0.154, 0.895, and 0.951, respectively, the FRNN average being the maximum. The differences were significant (; Kruskal-Wallis test with the Steel-Dwass test as a post-hoc test). Therefore, the FRNN was the most effective of the three RNNs for the automatic estimation of the dynamics of channel conductance.
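The MVCCF measure can be sketched as follows. Normalizing each trace to zero mean and unit variance before correlating is our assumption of the usual definition, under which a trace correlated with a shifted copy of itself gives an MVCCF of 1:

```python
import numpy as np

def mvccf(a, b):
    """Maximum value of the normalized cross-correlation function
    between two equal-length traces. A value near 1 means one trace is
    (up to scale, offset, and shift) a copy of the other."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() * len(a))   # normalize, fold in 1/N
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full"))  # max over all lags
```

A trace compared with itself scores 1.0 at zero lag, while an unrelated trace scores much lower, matching the ordering of the reported averages (FRNN 0.951 > ERNN 0.895 > TDNN 0.154).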

When an RNN was trained using training data other than those used in the present study, the threshold of the SG-neuron could not be reproduced in the RNN-neuron. In this study, we refined the training data, and the characteristics of the neuronal activities could then be reproduced. This result suggests that the reproduction of neuronal activities by an RNN depends on the training data. During training, the error produced by the K^{+}-RNN always became low, while it was difficult to reduce the error produced by the Na^{+}-RNN. The ability to reproduce neuronal activities thus depends on the reproduction of the dynamics of the Na^{+} channel conductance: the more correctly a Na^{+}-RNN predicts the time course of the Na^{+} channel conductance, the more accurately the RNN-neuron can reproduce the neuronal activities.

The results obtained using the RNN-neuron differed somewhat from those obtained using the SG-neuron. We believe that there are two causes for this difference. First, it is possible that the RNNs could not extrapolate the time course of channel conductance well enough to simulate the neuronal activities; as described above, the reproduction ability depends on the training data, so in a future study we will use better training data from which an RNN can learn the dynamics of channel conductance more correctly. Second, the Na^{+}-RNN could not reproduce the training data as perfectly as the K^{+}-RNN could (Figures 4 and 5). The Na^{+} channel conductance is calculated from two gate variables: an activation gate and an inactivation gate (see the appendix). At some membrane potentials, the conductance remains constant while the two gate variables change in different ways. The effect of such changes can be seen, for example, at about 150 and 170 milliseconds, at the two arrows in Figures 2 and 5. Just before the first arrow, the conductance remains constant while the membrane potential changes. At the first arrow, the conductance in the training data was double that at the second arrow (Figure 2); however, the RNN could not reproduce this feature (the arrows in Figure 5). We analyzed the time courses of the two gate variables of the Na^{+} channel in the period from just before the first arrow to the second one and found that this behavior is caused by an increase in the value of the inactivation gate variable of the Na^{+} channel due to the hyperpolarization of the membrane potential before the first arrow. We would like to revise the RNN structure so that it can learn such behavior.

Our estimation method was used to estimate the conductance dynamics not only of the Na^{+} and K^{+} channels of the squid giant axon but also of another channel (Figures 11 and 12). An action potential that cannot be reproduced by the HH model has recently been reported [12], namely, one with a rapid initiation slope and variable onset times. It should be possible for an RNN to reproduce such an action potential when the RNN is trained using appropriate training data.

In the future, we intend to find more appropriate training data to improve the reproduction ability of our estimation method, and to estimate the dynamics of several other channel conductances by using RNNs in order to simulate many types of neurons.

#### 5. Conclusion

In order to simulate neuronal activities, it is normally necessary to estimate and formulate the dynamics of neuronal channel conductance as differential equations based on experimental data. We developed a method for automatically estimating the dynamics of channel conductance by using a fully connected recurrent neural network. Two RNNs were trained using the Na^{+} and K^{+} channel conductance data of the squid giant axon, and by using these trained RNNs, the neuronal activities of the axon could be reproduced. Thus, our RNN estimation method can automatically estimate the conductance dynamics of a new channel from experimental data and can easily simulate neuronal activities using the estimated dynamics.

#### Appendix

The equations of the Hodgkin-Huxley model are as follows:

$$C_m \frac{dV}{dt} = I_{\mathrm{inj}} - I_{\mathrm{Na}} - I_{\mathrm{K}} - I_{\mathrm{L}}, \tag{A.1}$$

$$I_{\mathrm{Na}} = \bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}), \quad \frac{dm}{dt} = \alpha_m (1-m) - \beta_m m, \quad \frac{dh}{dt} = \alpha_h (1-h) - \beta_h h, \tag{A.2}$$

$$I_{\mathrm{K}} = \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}}), \quad \frac{dn}{dt} = \alpha_n (1-n) - \beta_n n, \tag{A.3}$$

$$I_{\mathrm{L}} = \bar{g}_{\mathrm{L}}\,(V - E_{\mathrm{L}}).$$
In these equations, *V* denotes the membrane potential; C_{m} is the membrane capacity; and I_{Na}, I_{K}, and I_{L} denote the Na^{+}, K^{+}, and leak currents, respectively. I_{inj} denotes the injected current; E_{Na} is the Na^{+} equilibrium potential; E_{K} is the K^{+} equilibrium potential; and E_{L} is the leak equilibrium potential. g_{Na} = ḡ_{Na}m^{3}h is the Na^{+} conductance, whose activation and inactivation gate variables are referred to as *m* and *h*, and g_{K} = ḡ_{K}n^{4} is the K^{+} conductance, whose activation gate variable is *n*. The gate variables change with time and voltage. In the steady state, an activation gate variable tends to increase with the membrane potential; conversely, the inactivation gate variable decreases with the potential. Here, ḡ_{Na} is the maximum Na^{+} conductance, ḡ_{K} is the maximum K^{+} conductance, and ḡ_{L} is the maximum leak conductance. α_{m} and β_{m} are the rate constants for the *m* gate; α_{h} and β_{h} are those for the *h* gate; and α_{n} and β_{n} are those for the *n* gate.

#### Acknowledgments

This study was supported in part by a grant from the 21st COE Program in 2003 to the Kyushu Institute of Technology (KIT), by the Ministry of Education, Culture, Sports, Science, and Technology of Japan. In addition, the authors would like to thank Mr. Cai Youbo and Professor Masumi Ishikawa at KIT for providing them with the source code of the algorithm “back propagation through time” and Professor Keiichi Horio at KIT for advice about the RNN.

#### References

- M. L. Hines and N. T. Carnevale, “The NEURON simulation environment,”
*Neural Computation*, vol. 9, no. 6, pp. 1179–1209, 1997. View at: Publisher Site | Google Scholar - J. M. Bower and D. Beeman,
*The Book of GENESIS: Exploring Realistic Neural Models with the General Neural Simulation System*, Springer, New York, NY, USA, 2nd edition, 1998. - K. Doya, A. I. Selverston, and P. F. Rowat, “A Hodgkin-Huxley type neuron model that learns slow non-spike oscillation,” in
*Advances in Neural Information Processing Systems*, vol. 6, pp. 566–573, Morgan Kaufmann, San Francisco, Calif, USA, 1994. View at: Google Scholar - E. M. Izhikevich,
*Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting*, The MIT Press, Cambridge, Mass, USA, 2007. - M. Takahashi, C. Youbo, M. Ishikawa, and K. Natsume, “Simulation of neuronal activity using recurrent neural network,” in
*Proceedings of the 15th IEEE International Workshop on Nonlinear Dynamics of Electronic Systems (NDES '07)*, pp. 78–81, Tokushima, Japan, July 2007. View at: Google Scholar - A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,”
*The Journal of Physiology*, vol. 117, pp. 500–544, 1952. View at: Google Scholar - M. I. Jordan, “Serial order: a parallel distributed processing approach,”
*ICS Report* 8604, Institute for Cognitive Science, University of California at San Diego, La Jolla, Calif, USA, 1986. View at: Google Scholar - J. L. Elman, “Finding structure in time,”
*Cognitive Science*, vol. 14, no. 2, pp. 179–211, 1990. View at: Publisher Site | Google Scholar - A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, “Phoneme recognition using time-delay neural networks,”
*IEEE Transactions on Acoustics, Speech and Signal Processing*, vol. 37, no. 3, pp. 328–339, 1989. View at: Publisher Site | Google Scholar - D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” in
*Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations*, chapter 8, pp. 318–362, The MIT Press, Cambridge, Mass, USA, 1986. View at: Google Scholar - A. R. Kay and R. K. S. Wong, “Calcium current activation kinetics in isolated pyramidal neurones of the CA1 region of the mature guinea-pig hippocampus,”
*The Journal of Physiology*, vol. 392, pp. 603–616, 1987. View at: Google Scholar - B. Naundorf, F. Wolf, and M. Volgushev, “Unique features of action potential initiation in cortical neurons,”
*Nature*, vol. 440, no. 7087, pp. 1060–1063, 2006. View at: Publisher Site | Google Scholar

#### Copyright

Copyright © 2009 Masaaki Takahashi and Kiyohisa Natsume. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.