Robust Reservoir Generation by Correlation-Based Learning
Reservoir computing (RC) is a new framework for neural computation. A reservoir is usually a recurrent neural network with fixed random connections. In this article, we propose an RC model in which the connections in the reservoir are modifiable. Specifically, we consider correlation-based learning (CBL), which modifies the connection weight between a given pair of neurons according to the correlation of their activities. We demonstrate that CBL enables the reservoir to reproduce almost the same spatiotemporal activity patterns in response to an identical input stimulus in the presence of noise. This result suggests that CBL enhances the robustness of spatiotemporal activity pattern generation against noise in input signals. We apply our RC model to trace eyeblink conditioning. The reservoir bridged the interstimulus interval between the conditioned and unconditioned stimuli, and a readout neuron was able to learn and express the timed conditioned response.
Liquid state machines (LSMs) and echo state networks (ESNs) are an emerging framework for neural computation. They were proposed independently and are now unified under the name of reservoir computing (RC). An RC model consists of a recurrent network called a “reservoir” that maps input signals into a higher dimension, and a set of neurons called “readouts” that receive inputs from the reservoir to extract time-varying information. The computational power of RC is outstanding, despite the simplicity of its structure and learning algorithm compared with conventional recurrent networks.
In standard RC, the reservoir is assumed to have fixed random connections, although synaptic weights in the brain can change dynamically depending on internal states and external inputs. In this study, we examine how the computational performance of an RC model changes when the model possesses learnability in its reservoir. Specifically, we apply correlation-based learning (CBL), which has been widely assumed in self-organizing neural systems [5, 6], to the synaptic weights in the reservoir to examine whether the computational performance of the RC model is improved.
The reservoir generates a spatiotemporal activity pattern of neurons in response to an input stimulus, and it reproduces the same spatiotemporal activity pattern when the same stimulus is given under the noise-free condition. If noise is added to the input stimulus, however, the generated activity pattern may change across repeated trials. We demonstrate that CBL modifies the connection weights in the reservoir under repeated presentations of input stimuli so that the model comes to generate almost the same spatiotemporal activity patterns even in the noisy situation. This result suggests that CBL makes the computational performance of the RC model noise-tolerant.
We also conduct a simulation of trace eyeblink conditioning to demonstrate the performance of our RC model. In trace eyeblink conditioning, a subject is exposed to paired presentations of a transient conditioned stimulus (CS, e.g., a brief tone) and an unconditioned stimulus (US, e.g., an airpuff to the eye) that induces eyelid closure, with a temporal gap between the two stimuli. After conditioning by repeated CS-US presentation, the subject learns to close its eye in response to the transient CS with a delay equal to the interstimulus interval (ISI) between the CS and US onsets (conditioned response: CR). In this conditioning paradigm, how to bridge the gap is an important issue. We demonstrate that our RC model bridges the gap by sustained activation of the reservoir and learns the CS-US association.
2. Materials and Methods
2.1. Model Description
Figure 1 illustrates the schematic of our network model. The model is built on the basis of our previous model, with one new feature: the synaptic weights between excitatory neurons are updated according to CBL. The calculation of neural activity patterns and the update of synaptic weights are repeated a number of times; we call each repetition a trial.

The model consists of $N$ excitatory neurons and the same number of inhibitory neurons. Let $z_i^X(t;n)$ be the activity of neuron $i$ of type $X$ at time $t$ in trial $n$, where $X \in \{\mathrm{E}, \mathrm{I}\}$ indicates the excitatory or inhibitory neuron type. Using $u_i^X(t;n)$, which represents the internal state of the neuron, $z_i^X(t;n)$ is defined as

$$z_i^X(t;n) = \Theta\bigl(u_i^X(t;n) - \theta\bigr),$$

where $\theta$ is the constant threshold and $\Theta$ is the Heaviside step function. The dynamics of the internal states are given by the following equations:

$$\tau \frac{du_i^{\mathrm{E}}(t;n)}{dt} = -u_i^{\mathrm{E}}(t;n) + \sum_j w_{ij}^{\mathrm{EE}}(n)\, z_j^{\mathrm{E}}(t;n) - \sum_j w_{ij}^{\mathrm{EI}}\, z_j^{\mathrm{I}}(t;n) + I_i(t;n),$$

$$\tau \frac{du_i^{\mathrm{I}}(t;n)}{dt} = -u_i^{\mathrm{I}}(t;n) + \sum_j w_{ij}^{\mathrm{IE}}\, z_j^{\mathrm{E}}(t;n) - \sum_j w_{ij}^{\mathrm{II}}\, z_j^{\mathrm{I}}(t;n),$$

where $\tau$ is the time constant and $I_i(t;n)$ is the external input to excitatory neuron $i$ at time $t$ in trial $n$. A uniform noise with a constant intensity was added to $I_i(t;n)$ at each time step and for each neuron. $w_{ij}^{XY}$ is the weight of the synaptic connection from neuron $j$ of type $Y$ to neuron $i$ of type $X$, where $X, Y \in \{\mathrm{E}, \mathrm{I}\}$. The synaptic weights among excitatory neurons are updated after every trial based on CBL as follows:

$$w_{ij}^{\mathrm{EE}}(n+1) = \frac{c}{T} \sum_{t=1}^{T} z_i^{\mathrm{E}}(t;n)\, z_j^{\mathrm{E}}(t;n) + \beta\, w_{ij}^{\mathrm{EE}}(n)$$

for $i \neq j$, where $w_{ij}^{\mathrm{EE}}(1)$ is set at $w_0$ for any $i$ and $j$, and $w_0$ is a constant denoting the initial connection weight. The first term on the right-hand side represents the correlation between the activities of excitatory neurons $i$ and $j$, where $T$ is the total number of simulation steps in one trial and $c$ is a constant scaling factor, introduced because the correlation takes a value between 0 and 1. The second term represents a memory effect of the excitatory synaptic weights over trials with a decay constant $\beta$. We assume that $w_{ii}^{\mathrm{EE}}(n) = 0$ for any $i$ and $n$, indicating the absence of self-excitatory connections. Each $w_{ij}^{\mathrm{IE}}$ is drawn from a binomial distribution. $w_{ij}^{\mathrm{EI}}$ is set at a constant value if $i = j$ and at 0 otherwise: each inhibitory neuron inhibits its corresponding excitatory neuron. $w_{ij}^{\mathrm{II}}$ is set at a constant value for any $i$ and $j$ for simplicity, indicating all-to-all inhibition.
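As a concrete illustration, the trial dynamics and the CBL update described above can be sketched in NumPy. All numerical values here (population size, trial length, time constant, threshold, initial weight, CBL constants, connection probability and strengths) are assumptions for illustration only, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# All values below are illustrative assumptions, not the paper's parameters.
N = 100            # number of excitatory (and inhibitory) neurons
T = 200            # simulation steps per trial
tau = 10.0         # time constant
theta = 0.1        # firing threshold
w0 = 0.1           # initial excitatory-to-excitatory weight
c, beta = 1.0, 0.5 # CBL scaling factor and decay constant
noise = 0.1        # uniform-noise intensity on the external input

# Connectivity, following the text: W_EE is plastic and initially uniform
# with no self-connections, W_IE is random binary, W_EI is one-to-one
# (each inhibitory neuron inhibits its partner), W_II is all-to-all.
W_EE = w0 * (1.0 - np.eye(N))
W_IE = rng.binomial(1, 0.1, size=(N, N)).astype(float)
W_EI = 1.0 * np.eye(N)
W_II = np.full((N, N), 0.01)

def run_trial(W_EE, I_ext):
    """One trial: leaky integration of internal states u, thresholding
    into binary activities z; returns the (T, N) excitatory pattern."""
    u_e = np.zeros(N)
    u_i = np.zeros(N)
    Z = np.zeros((T, N))
    for t in range(T):
        z_e = (u_e > theta).astype(float)
        z_i = (u_i > theta).astype(float)
        inp = I_ext[t] + noise * rng.uniform(-1.0, 1.0, N)
        u_e += (-u_e + W_EE @ z_e - W_EI @ z_i + inp) / tau
        u_i += (-u_i + W_IE @ z_e - W_II @ z_i) / tau
        Z[t] = z_e
    return Z

def cbl_update(W_EE, Z):
    """CBL: the new weight is the trial-averaged correlation of pre- and
    postsynaptic activities plus a decayed copy of the previous weight."""
    W_new = c * (Z.T @ Z) / T + beta * W_EE
    np.fill_diagonal(W_new, 0.0)  # no self-excitatory connections
    return W_new
```

Alternating `run_trial` and `cbl_update` over trials reproduces the structure of the learning loop described above.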
We defined two stimuli, CS1 and CS2, to be fed into the network. It should be noted that CS1 and CS2 activate different excitatory neurons in the reservoir.
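The exact stimulus definitions are given in the original article; as a rough sketch, two transient external inputs driving disjoint groups of excitatory neurons might be built as follows (all indices, durations, and amplitudes are hypothetical):

```python
import numpy as np

N, T = 100, 200  # population size and trial length (assumed)

def make_cs(first, last, onset, offset, amp=1.0):
    """Hypothetical transient stimulus: excitatory neurons with indices in
    [first, last) receive a constant input amp during time steps
    [onset, offset); all other entries are zero."""
    I_ext = np.zeros((T, N))
    I_ext[onset:offset, first:last] = amp
    return I_ext

# CS1 and CS2 drive disjoint groups of excitatory neurons.
CS1 = make_cs(0, 10, onset=0, offset=20)
CS2 = make_cs(10, 20, onset=0, offset=20)
```

Keeping the target groups disjoint matches the requirement that the two stimuli activate different neurons in the reservoir.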
2.2. Data Analysis
We study how the activity pattern of excitatory neurons in the reservoir evolves after the trigger of an input stimulus. To do this, we first define the autocorrelation of the activities of the excitatory neuron population at times $t_1$ and $t_2$ in trial $n$ as follows:

$$C(t_1, t_2; n) = \frac{\sum_i z_i^{\mathrm{E}}(t_1;n)\, z_i^{\mathrm{E}}(t_2;n)}{\sqrt{\sum_i z_i^{\mathrm{E}}(t_1;n)}\,\sqrt{\sum_i z_i^{\mathrm{E}}(t_2;n)}}.$$

The numerator represents the inner product of the two populations of active excitatory neurons at times $t_1$ and $t_2$, and the denominator normalizes the autocorrelation so that it falls in the range of $[0, 1]$, because $z_i^{\mathrm{E}}$ takes only nonnegative values. It takes a value of 1 if these populations are identical, and it becomes 0 if no neurons are active at both times. We then define the similarity index at trial $n$ by

$$S(\Delta t; n) = \frac{1}{T - \Delta t} \sum_{t=1}^{T - \Delta t} C(t, t + \Delta t; n).$$

The similarity index is the average of the autocorrelation with respect to $t$; that is, this index represents how strongly two populations separated by $\Delta t$ are correlated on average. In particular, $S(0; n) = 1$ holds for any $n$ because of the trivial identity. $S(\Delta t; n) < 1$ holds for some $\Delta t$ if the population of active excitatory reservoir neurons changes with time and two populations separated by $\Delta t$ are uncorrelated. $S(\Delta t; n) = 0$ holds for some $\Delta t$ if any two populations separated by $\Delta t$ have no overlap. Moreover, if the generation of the population of active excitatory reservoir neurons is nonrecurrent, the index decreases monotonically with $\Delta t$. We define the standard deviation of the similarity index as well:

$$\sigma(\Delta t; n) = \sqrt{\frac{1}{T - \Delta t} \sum_{t=1}^{T - \Delta t} \bigl( C(t, t + \Delta t; n) - S(\Delta t; n) \bigr)^2}.$$

We also define the reproducibility index at trial $n$ as follows:

$$R(t; n) = \frac{\sum_i z_i^{\mathrm{E}}(t;n)\, z_i^{\mathrm{E}}(t;n-1)}{\sqrt{\sum_i z_i^{\mathrm{E}}(t;n)}\,\sqrt{\sum_i z_i^{\mathrm{E}}(t;n-1)}}.$$

The reproducibility index represents how similar the activity patterns of neurons at two successive trials are at identical time steps. If the two activity patterns are completely identical, the index is 1 for any $t$. If the two activity patterns gradually diverge with time, the index gradually decreases.
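These indices are straightforward to compute from a binary activity matrix. A minimal NumPy sketch (function names are ours, chosen for illustration):

```python
import numpy as np

def autocorr(Z, t1, t2):
    """Normalized overlap of the active populations at times t1 and t2.
    Z is a (T, N) binary activity matrix; the result lies in [0, 1]."""
    a, b = Z[t1], Z[t2]
    denom = np.sqrt(a.sum()) * np.sqrt(b.sum())
    return float(a @ b / denom) if denom > 0 else 0.0

def similarity(Z, dt):
    """Similarity index: the autocorrelation averaged over all pairs of
    time steps separated by dt."""
    T = Z.shape[0]
    return float(np.mean([autocorr(Z, t, t + dt) for t in range(T - dt)]))

def reproducibility(Z_prev, Z_curr, t):
    """Reproducibility index: overlap of the active populations of two
    successive trials at the identical time step t."""
    a, b = Z_prev[t], Z_curr[t]
    denom = np.sqrt(a.sum()) * np.sqrt(b.sum())
    return float(a @ b / denom) if denom > 0 else 0.0
```

For a pattern whose active population never changes, `similarity` is 1 for every lag; for two trials with disjoint active populations, `reproducibility` is 0.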
The values of the basic parameters are the same as those we used previously, with one exception.
2.3. Simulation of Trace Eyeblink Conditioning
We carry out a simulation of trace eyeblink conditioning to demonstrate the performance of our RC model.
We consider a simple model that has one readout neuron. The neuron is fed with the output of all excitatory neurons in the reservoir through modifiable connections. The activity of the readout neuron is given by

$$v(t) = \sum_i w_i\, z_i^{\mathrm{E}}(t),$$

where $w_i$ is the connection weight from excitatory neuron $i$ in the reservoir to the readout neuron. The readout neuron receives the binary instruction signal $s(t)$ as well. The connection weights are updated at each time step as follows:

$$w_i \leftarrow w_i + \eta_{+}\, z_i^{\mathrm{E}}(t)\, s(t) - \eta_{-}\, z_i^{\mathrm{E}}(t)\, \bigl(1 - s(t)\bigr),$$

where $\eta_{+}$ and $\eta_{-}$ are positive constants. This rule represents that the connection from a neuron is strengthened if the neuron is active and the instruction is given simultaneously, and that the connection is weakened if the neuron is active but the instruction is not given. In other words, conjunctive stimulation of the afferent and instruction inputs makes the connection stronger, whereas stimulation of the afferent input alone makes the connection weaker.
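The readout rule described in the preceding paragraph can be sketched as follows; the two learning rates are assumed values, not taken from the article:

```python
import numpy as np

ETA_PLUS = 0.1   # potentiation rate (assumed)
ETA_MINUS = 0.05 # depression rate (assumed)

def train_readout(Z, s, eta_plus=ETA_PLUS, eta_minus=ETA_MINUS):
    """Update readout weights at every time step: a weight grows when its
    afferent neuron is active while the binary instruction s(t) = 1 is
    given, and shrinks when the afferent is active without instruction."""
    T, N = Z.shape
    w = np.zeros(N)
    for t in range(T):
        w += eta_plus * Z[t] * s[t] - eta_minus * Z[t] * (1.0 - s[t])
    return w

def readout(Z, w):
    """Readout activity v(t) = sum_i w_i z_i(t) at every time step."""
    return Z @ w
```

Because each weight only grows at time steps when its afferent neuron and the instruction coincide, the trained readout responds most strongly at the instructed time.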
We use CS1 and CS2 as CSs, and define two USs (US1 and US2). The CSs and USs are paired and presented to the network as described below.
3. Results

3.1. Generation of a Sequence of Active Neuron Populations
First, we studied the basic properties of our reservoir. Figure 2(a) shows the activity pattern of the first 100 excitatory neurons when the network received CS1. These neurons became active immediately after the stimulus onset and underwent random repetition of transitions between active and inactive states. The random repetition of state transitions was sustained even after the cessation of the input stimulus, owing to the recurrent connections among excitatory neurons. Thus, the transient input stimulus evoked sustained neural activity. Different neurons exhibited different temporal transition patterns, so the population of active neurons changed gradually with time. This random transition of neural activity was produced by the random recurrent inhibitory connections between excitatory and inhibitory neurons: these connections repeatedly projected one population of active neurons into another, uncorrelated population, generating a nonrecurrent sequence of populations of active neurons. To confirm the nonrecurrence of this sequence, we calculated its similarity index (Figure 2(b)). The index takes 1 at $\Delta t = 0$ because of the trivial identity, and it monotonically decreased as $\Delta t$ deviated from 0, indicating that two populations of active neurons temporally separated by $\Delta t$ became dissimilar as $\Delta t$ increased. Taken together, our reservoir generated a nonrecurrent sequence of populations of active excitatory neurons triggered by a transient input stimulus. Because there is a one-to-one correspondence between a time step from the stimulus onset and the population of active neurons at that time, the sequence of active neuron populations can represent the passage of time from the stimulus onset.
We then examined the robustness of the sequence generation against noise in input signals. We ran two simulations with different noise realizations and calculated the reproducibility index between the two resulting sequences of active neuron populations. As shown in Figure 2(c), the reproducibility was highest at the stimulus onset and monotonically decreased towards 0.85 as time elapsed, suggesting that the two sequences were almost identical at first and then gradually diverged into different sequences with time. This decrease ended after about 1000 time steps, and after that the reproducibility index was almost constant (data not shown), suggesting that although we conducted the simulation for 3000 time steps, the present model was capable of representing only the first 1000 time steps under the current parameter setting. In the following, therefore, we show simulation results only for the first 1000 time steps. We also calculated the reproducibility index between the activity pattern generated in response to CS1 and a random activity pattern to obtain a lower bound of the index. As the random activity pattern, we used the activity pattern generated in response to CS2. We found that the index quickly increased from 0 to 0.8 within 500 time steps and was kept constant thereafter (data not shown). Therefore, the reproducibility index with a random activity pattern is lower than the value of 0.85 obtained in the steady state.
Some other properties, including how to stop the sustained neural activity, have been reported previously .
3.2. Enhanced Robustness of Sequence Generation by CBL
Next, to examine the effect of CBL on the robustness of sequence generation, we carried out 21 consecutive trials with CS1 as the input stimulus. We calculated 20 reproducibility indices for the 20 pairs of successive trials and 20 similarity indices for 20 single trials.
Figure 3 shows the similarity (a) and reproducibility (b) indices at the 1st, 10th, and 20th trials. The similarity did not change much across trials, whereas the reproducibility increased trial by trial. This result indicates that CBL makes the sequence generation robust without affecting the nonrecurrence property.
3.3. Embedding Two Sequences in a Single Reservoir
Reservoirs have the ability to map different inputs into different spatiotemporal activity patterns in a higher dimension [9, 10]. We asked whether our reservoir has the same ability and conducted the following simulation. Figure 4(a) shows the schematic of the alternating presentation of the two CSs to the reservoir. We repeated 10 sets of trials of CS presentation, where each set consists of a pair of CS1 trials followed by a pair of CS2 trials. For each set of trials, we calculated the reproducibility indices for the responses to CS1 and CS2 using the pair of CS1 trials and the pair of CS2 trials, respectively.
Figure 4(b) shows these reproducibility indices. Here, RI1 and RI2 denote the indices for CS1 and CS2 at the first set of trials, and RI3 and RI4 denote the corresponding indices at the final set. Comparing RI1 with RI3 and RI2 with RI4, the reproducibility of the responses to both CS1 and CS2 had evidently increased by the 10th set of trials. This result suggests that CBL enhanced the robustness of the sequence generation for two different stimuli simultaneously.
3.4. Simulation of Trace Eyeblink Conditioning
We carried out two simulations of trace eyeblink conditioning. In the first simulation, CS1 was paired with both US1 and US2 to examine whether our RC model can learn multiple timings. In the second simulation, CS1 was paired with US1, whereas CS2 was paired with US2. With this simulation, we studied whether the two sequences generated by CS1 and CS2 represent different memory traces.
Figure 5(a) shows the readout activities when CS1 was paired with both US1 and US2, with or without CBL. In both cases, the first peak appeared clearly at the time instructed by US1. Building-up activity then started and reached its maximum at the time instructed by US2 (data not shown); namely, there was a second peak at the later instructed time. The width of the second peak was larger than that of the first peak, suggesting that the precision of the temporal representation becomes worse for longer intervals. On the other hand, CBL improved the reproducibility of the activity-pattern generation in the reservoir by strengthening connections between well-correlated neurons while weakening connections between less-correlated neurons. This manipulation increased the overall activity of the reservoir and made the output signal to the readout stronger. As a result, the amplitude of the readout activity was larger with CBL than without CBL.
Figure 5(b) shows the activities of the readout neuron after the separate conditioning of CS1-US1 and CS2-US2. The activity of the readout neuron in response to CS1 reached its maximum around the time instructed by US1 and then gradually decreased. In contrast, the readout activity in response to CS2 increased steadily and reached its maximum at the time instructed by US2. Both cases showed single-peaked activity around the instructed times, indicating that there was little crosstalk between the two sequences of active neuron populations for CS1 and CS2. Thus, our reservoir with CBL is able to generate distinct sequences of active neuron populations for different input stimuli.
4. Discussion

In conventional RC, a reservoir is composed of neurons interconnected with fixed connection weights, and it acts as a static nonlinear filter that maps input signals into a higher dimension. Because the dynamics are deterministic owing to the fixed connections, a reservoir generates identical output signals for an identical input stimulus in the noise-free condition. In a noisy condition, however, the reservoir can generate different output signals for the identical input stimulus. To generate almost identical output signals despite the presence of noise, learning must be incorporated. In this article, we parsimoniously extended conventional RC so that the synaptic weights between excitatory neurons are allowed to change according to CBL. CBL improved the reproducibility of the output signals for the identical input stimulus in the noisy condition. This made the output signals to the readout neurons stronger and eventually increased the amplitude of the readout activity. Therefore, CBL was demonstrated to be useful for making RC robust against noise. We also conducted the simulation of trace eyeblink conditioning and confirmed that the readout neuron was able to learn and represent the instructed time.
In the construction of our reservoir, connections between excitatory neurons work to sustain neural activity even after the cessation of input stimuli, those between inhibitory neurons control the total activity of neurons, and recurrent random connections between excitatory and inhibitory neurons cause random repetition of transitions between active and inactive states of the neural population. This construction seems consistent with experimental studies suggesting that inhibitory neurons play an important role in generating complex spatiotemporal activity patterns of excitatory neurons in the olfactory bulb during odour discrimination and in the prefrontal cortex during working memory tasks. Moreover, these functional roles of the connections in our RC model elucidate properties of the reservoir beyond a black box generating spatiotemporal activity patterns, as in the previously proposed RC models [1, 2].
The present model can represent up to 1000 time steps under the current parameter setting. We could extend the maximal number of time steps by, for example, scaling the time constants. We could also reduce the inhibition to excitatory neurons to increase the reproducibility index; this, however, increases the similarity index as well and results in worse separation of two time steps in the simulation of trace eyeblink conditioning. There is thus a tradeoff between increasing the reproducibility and decreasing the similarity.
Correlation-based learning incorporated in the reservoir modified the connections among excitatory neurons such that connections between well-correlated neurons were strengthened, whereas those between less-correlated neurons were weakened. Owing to CBL, the network self-organized, and the generation of the sequence of active neuron populations became noise-tolerant. Furthermore, it would be of interest to examine the properties of the resulting connection matrix after learning in terms of, for example, sparseness or scale-freeness.
Although RC is a promising concept in the field of modern neural computation, many problems remain to be solved before its practical use. One important problem is how to determine neural connections so as to achieve robust and efficient reservoirs. The incorporation of CBL that we presented here provides a simple solution for constructing robust reservoirs automatically.
H. Jaeger, “Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the ‘echo state network’ approach,” GMD Report, Fraunhofer Institute AIS, St. Augustin, Germany, 2002.