Abstract

We consider the problem of reconstructing finite energy stimuli encoded with a population of spiking leaky integrate-and-fire neurons. The reconstructed signal satisfies a consistency condition: when passed through the same neuron, it triggers the same spike train as the original stimulus. The recovered stimulus must also minimize a quadratic smoothness criterion. We formulate the reconstruction as a spline interpolation problem for scalar as well as vector-valued stimuli and show that the recovery has a unique solution. We provide explicit reconstruction algorithms for stimuli encoded with single neurons as well as with populations of integrate-and-fire neurons. We demonstrate how our reconstruction algorithms can be applied to stimuli encoded with ON-OFF neural circuits with feedback. Finally, we extend the formalism to multi-input multi-output neural circuits and demonstrate that vector-valued finite energy signals can be efficiently encoded by a neural population provided that its size is beyond a threshold value. Examples are given that demonstrate the potential applications of our methodology to systems neuroscience and neuromorphic engineering.

1. Introduction

Formal spiking neuron models, such as integrate-and-fire (IAF) neurons, encode information in the time domain [1]. Assuming that the input is bandlimited with a known bandwidth, a perfect recovery of the stimulus from the train of spikes is possible provided that the spike density is above the Nyquist rate [2]. Using results from frame theory [3] and statistics [4], these findings were extended to (i) bandlimited stimuli encoded with a population of IAF neurons with receptive fields modeled as linear filterbanks [5], (ii) multivariate (e.g., space-time) bandlimited stimuli encoded with a population of IAF neurons with Gabor spatiotemporal receptive fields [6], and (iii) sensory stimuli encoded with a population of leaky integrate-and-fire (LIF) neurons with random thresholds [7].

These results are based on the key insight that neural encoding of a stimulus with a population of LIF neurons is akin to taking a set of measurements on the stimulus. These measurements or encodings can be represented as projections (inner products) of the stimulus on a set of sampling functions. Stimulus recovery therefore calls for the reconstruction of the encoded stimuli from these inner products. These findings have shown that sensory information can be faithfully encoded into the spike trains of a neural ensemble and can serve as a theoretical basis for the modeling of sensory systems (e.g., audition, vision) [8].

In this paper we investigate the problem of reconstructing scalar and vector stimuli from a population of spike trains on a finite time horizon. The encoding circuits considered are either single-input multi-output or multi-input multi-output (MIMO). The increasing availability of multi-neuron population recordings has led to a paradigm shift towards population-centric approaches of neural coding and processing. Examples of MIMO models in systems neuroscience include extensive investigations of spike train transformations between neuron populations [9] as well as the analysis of the causal relationships between neurons in a population [10]. In neuromorphic engineering MIMO models have been used for brain-machine interfaces [11], as well as silicon retinas and related hardware applications [12].

The stimuli considered in this paper have finite energy and are defined on a finite time horizon. Even though restricted to finite time intervals, finite energy signals have infinitely many degrees of freedom. Consequently, the formal stimulus recovery problem is ill-defined. We cast the stimulus reconstruction problem within the framework of abstract spline theory [13] and recover the stimulus as the unique solution to an interpolation spline problem. Splines serve as a valuable mathematical tool for interpolation problems, and their applications arise in many areas such as data smoothing in statistics [4], computer graphics [14], and digital signal processing [15].

Through the formulation of the interpolation spline problem, the reconstructed signal will give the same measurements as the original one. We show that this leads to a signal recovery that is consistent in the sense that the reconstructed signal triggers exactly the same spike train when passed through the same neuron as the original stimulus. The reconstructed signal is also required to achieve a maximum degree of smoothness gauged by a quadratic criterion. This condition ensures that the problem has a unique optimal solution.

A preliminary version of some of the ideas presented here appears in [16]. The analysis there was based on results arising in generalized sampling [17]. Here the theory is presented in a more general setting using the spline theoretic framework, and all proofs are included. We apply our theoretical results to stimuli encoded with a number of spiking neural circuits of interest. These include populations of integrate-and-fire neurons with linear receptive fields that arise in hearing, ON-OFF neural circuits with feedback that arise in vision, and multi-input multi-output (MIMO) neural circuits that arise in olfaction.

Formally, MIMO neural circuits encode $M$-dimensional vector-valued finite energy stimuli into the spike trains of a population of $N$ neurons. Their architecture consists of an $N \times M$ linear, time-invariant filtering kernel that feeds into an ensemble of $N$ neurons. For this novel neural circuit we formulate and solve the problem of optimal consistent recovery and also discuss some of the key conditions that the filtering kernel has to satisfy in order to get a good reconstruction.

The paper is organized as follows. Section 2 formulates the problem of consistent reconstruction on a finite time horizon as a spline interpolation problem and presents its general solution. In Section 3 the reconstruction problem is addressed for stimuli encoded with a population of LIF neurons. Section 4 presents general MIMO neural encoding circuits and the corresponding optimal consistent stimulus reconstruction. A neuroscience inspired example is presented where the filtering kernel performs arbitrary (but known) delays and scalings to input stimuli akin to simple synaptic models. Finally Section 5 concludes our work.

2. Encoding with LIF Neurons and Consistent Stimulus Recovery

In this section we formulate and solve the problem of optimal consistent reconstruction for the simple case of a stimulus encoded with a single LIF neuron. We show how the spiking of an LIF neuron can be associated with a series of projections in the general $L^2$ space. We impose intuitive constraints on the desired reconstructed signal and show that the reconstruction algorithm can be reduced to a spline interpolation problem.

2.1. Neural Encoding with Single LIF Neurons

Let $u = u(t)$, $t \in [0, T]$, be a signal (or stimulus) of finite length and energy, that is, $u \in L^2([0, T])$. In what follows we assume that the stimulus is the input to a Leaky Integrate-and-Fire (LIF) neuron. Throughout this paper $(t_k)_{k=1}^{n}$ denotes the set of recorded spike times. As in the case of bandlimited signals [5], neuron encoding can be associated with the projection (measurement) of the stimulus on a set of functions. By applying the $t$-transform [2], we can determine both the sampling functions and the projections of the stimulus on these functions using only the knowledge of the spike times.

Assume that the encoder is an LIF neuron with threshold $\delta$, capacitance $C$, resistance $R$, and constant bias $b$. The membrane potential $V = V(t)$ of the LIF neuron is governed by the differential equation

$$C\,\frac{dV}{dt} = -\frac{V(t)}{R} + b + u(t),$$

with the initial condition $V(0) = 0$ and resetting conditions $\lim_{t \downarrow t_k} V(t) = 0$ for all $k = 1, 2, \dots, n$. By solving the differential equation in each interspike interval, the $t$-transform of the LIF neuron is given by

$$\int_{t_k}^{t_{k+1}} \bigl(u(s) + b\bigr)\, e^{-\frac{t_{k+1}-s}{RC}}\, ds = C\delta$$

for all $k = 1, 2, \dots, n-1$. The $t$-transform can be rewritten as $\mathcal{L}_k u = q_k$, where $\mathcal{L}_k$ is a linear functional given by

$$\mathcal{L}_k u = \int_{t_k}^{t_{k+1}} u(s)\, e^{-\frac{t_{k+1}-s}{RC}}\, ds \qquad \text{and} \qquad q_k = C\delta - bRC\left(1 - e^{-\frac{t_{k+1}-t_k}{RC}}\right)$$

for all $k = 1, 2, \dots, n-1$. Therefore, we have the following result.

Lemma 1. The $t$-transform of the LIF neuron can be written in inner-product form as

$$\langle u, \phi_k \rangle = q_k,$$

where

$$\phi_k(t) = e^{-\frac{t_{k+1}-t}{RC}}\,\mathbf{1}_{[t_k,\, t_{k+1}]}(t)$$

and $\langle u, \phi_k \rangle = \int_0^T u(t)\,\phi_k(t)\, dt$ is the standard inner product restricted to the domain $[0, T]$, for all $k = 1, 2, \dots, n-1$.

Remark 2. The inner products or projections in (7) represent a set of measurements or encodings of the signal $u$ on $[0, T]$. Since the $q_k$ and $\phi_k$ in (6) can be readily derived from knowledge of the spike times and the neuron parameters, the signal encodings are available to an observer reading the spike times $(t_k)_{k=1}^{n}$.
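To make the encoding and measurement steps concrete, the following is a minimal numerical sketch in Python, assuming the formulas of Lemma 1 above; the function names, the forward-Euler discretization, and the sampling step are illustrative choices, not part of the paper.

```python
# Minimal sketch of LIF encoding and measurement extraction; helper names
# and the Euler discretization are illustrative, not from the paper.
import numpy as np

def lif_encode(u, dt, delta, C, R, b):
    """Encode stimulus samples u (spaced dt seconds apart) into spike times."""
    V, spikes = 0.0, []
    for i, ui in enumerate(u):
        # Forward-Euler step of C dV/dt = -V/R + b + u(t).
        V += dt * (-V / (R * C) + (b + ui) / C)
        if V >= delta:
            spikes.append((i + 1) * dt)  # record spike time t_k
            V = 0.0                      # reset the membrane potential
    return np.array(spikes)

def measurements(spikes, delta, C, R, b):
    """q_k = C*delta - b*R*C*(1 - exp(-(t_{k+1} - t_k)/(RC))), k = 1..n-1."""
    isi = np.diff(spikes)
    return C * delta - b * R * C * (1.0 - np.exp(-isi / (R * C)))
```

An observer with access only to the spike times and the neuron parameters can thus form the pairs $(\phi_k, q_k)$ that drive the reconstruction below.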

2.2. Consistent Stimulus Recovery

The problem of stimulus reconstruction calls for estimating the signal $u$ given the set of spikes $(t_k)_{k=1}^{n}$. This problem is, for the class of stimuli $u \in L^2([0, T])$, ill-defined. (Signals that lie in an $L^2$ space have, in general, infinite degrees of freedom and thus cannot be perfectly recovered from a finite number of observations.) A remedy is provided by introducing a set of constraints on the recovery. The first constraint considered here requires the reconstructed signal to generate the same spikes as the original stimulus. The second constraint requires choosing, among the consistent reconstructions, the one with the maximum degree of smoothness. The latter is formulated as an optimization criterion.

Definition 3. A reconstruction $\hat u$ of $u$ based on the spike times $(t_k)_{k=1}^{n}$ is said to be consistent if $\hat u$ triggers exactly the same spike train as the original stimulus $u$ when passed through the same neuron.

Remark 4. As before, assume that at time 0 the membrane potential of the LIF neuron is set to the resting potential 0. Then the consistency condition above is equivalent to $\langle \hat u, \phi_k \rangle = q_k$ for all $k = 1, 2, \dots, n-1$.

Definition 5. A consistent reconstruction $\hat u$ that minimizes the quadratic criterion $\left\| \frac{d^2 \hat u}{dt^2} \right\|_{L^2}^2$ is called the optimal consistent reconstruction of $u$.

Remark 6. $\left\| \frac{d^2 \hat u}{dt^2} \right\|_{L^2}$ is the $L^2$ norm of the second derivative of the reconstructed stimulus.

Lemma 7. The optimal consistent reconstruction solves the spline interpolation problem

$$\hat u = \arg\min_{\substack{u \,\in\, L^2([0,T]) \\ \langle u, \phi_k \rangle \,=\, q_k,\; k = 1, \dots, n-1}} \left\| \frac{d^2 u}{dt^2} \right\|_{L^2}^2,$$

where $\|\cdot\|_{L^2}$ is the standard $L^2$-norm restricted to the interval $[0, T]$.

Proof. It follows directly from Definitions 3 and 5.

Remark 8. An introduction to splines and the general solution to spline interpolation problems is presented in Appendix A.

Theorem 9. The optimal consistent reconstruction $\hat u$ is unique and is given by

$$\hat u(t) = \sum_{k=1}^{n-1} c_k\, \psi_k(t) + d_1 + d_2\, t,$$

where

$$\psi_k(t) = \bigl(\phi_k * g\bigr)(t), \qquad g(t) = \frac{|t|^3}{12},$$

$*$ denotes the convolution, and $|\cdot|$ denotes the absolute value. With $\mathbf c = [c_1, \dots, c_{n-1}]^T$, $\mathbf d = [d_1, d_2]^T$, and $\mathbf q = [q_1, \dots, q_{n-1}]^T$, the coefficients $\mathbf c$ and $\mathbf d$ satisfy the matrix equations

$$\begin{bmatrix} \mathbf G & \mathbf p & \mathbf r \\ \mathbf p^T & 0 & 0 \\ \mathbf r^T & 0 & 0 \end{bmatrix} \begin{bmatrix} \mathbf c \\ d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} \mathbf q \\ 0 \\ 0 \end{bmatrix}.$$

Moreover, $\mathbf G$ is an $(n-1) \times (n-1)$ matrix, and $\mathbf p$, $\mathbf r$ are column vectors with entries given by

$$[\mathbf G]_{kl} = \langle \psi_l, \phi_k \rangle, \qquad p_k = \langle 1, \phi_k \rangle, \qquad r_k = \langle t, \phi_k \rangle,$$

where all the inner products are restricted to the interval $[0, T]$.

Proof. The proof follows from Theorem 4 in Appendix A. Note that the function $g$ is, up to a multiplicative constant, the Green's function for the second-order iterated Laplacian (see Lemma 5 in Appendix B).
The representation functions $\psi_k$ in (13) can be given explicitly in analytical form by evaluating the convolution integral

$$\psi_k(t) = \int_{t_k}^{t_{k+1}} e^{-\frac{t_{k+1}-s}{RC}}\, \frac{|t-s|^3}{12}\, ds,$$

and the entries of the matrix $\mathbf G$ and of the vectors $\mathbf p$ and $\mathbf r$ follow similarly by direct integration.
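For illustration, the reconstruction of Theorem 9 can also be sketched numerically by evaluating the inner products with quadrature on a uniform grid instead of the closed-form expressions; everything below (grid size, use of a pseudoinverse, function name) is an implementation choice, not the paper's algorithm verbatim.

```python
# Sketch of the recovery in Theorem 9 with inner products approximated by
# quadrature on a grid; illustrative only, the paper's closed forms are exact.
import numpy as np

def reconstruct_lif(spikes, q, R, C, T, n_grid=2000):
    t = np.linspace(0.0, T, n_grid)
    dt = t[1] - t[0]
    # Sampling functions phi_k(s) = exp(-(t_{k+1}-s)/(RC)) on [t_k, t_{k+1}].
    phi = np.zeros((len(spikes) - 1, n_grid))
    for k in range(len(spikes) - 1):
        mask = (t >= spikes[k]) & (t <= spikes[k + 1])
        phi[k, mask] = np.exp(-(spikes[k + 1] - t[mask]) / (R * C))
    g = np.abs(t[None, :] - t[:, None]) ** 3 / 12.0  # kernel |t - s|^3 / 12
    psi = phi @ g * dt                               # psi_k = phi_k * g
    G = psi @ phi.T * dt                             # G_{kl} = <psi_l, phi_k>
    B = np.stack([phi @ np.ones(n_grid) * dt,        # column p: <1, phi_k>
                  phi @ t * dt], axis=1)             # column r: <t, phi_k>
    n = G.shape[0]
    A = np.block([[G, B], [B.T, np.zeros((2, 2))]])
    sol = np.linalg.pinv(A) @ np.concatenate([q, np.zeros(2)])
    c, d = sol[:n], sol[n:]
    return c @ psi + d[0] + d[1] * t, t              # u_hat on the grid
```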

Remark 10. By letting $R \to \infty$, one obtains the representation of the optimal consistent reconstruction for stimuli encoded with the ideal IAF neuron. The parameters and representation functions take the simple form

$$q_k = C\delta - b\,(t_{k+1} - t_k), \qquad \phi_k(t) = \mathbf 1_{[t_k,\, t_{k+1}]}(t), \qquad \psi_k(t) = \int_{t_k}^{t_{k+1}} \frac{|t - s|^3}{12}\, ds.$$

2.3. Example

The input to an LIF neuron is a bandlimited signal with bandwidth  Hz. The neuron encodes the stimulus during a time interval of  second. A bias  is added to the input. The parameters of the LIF neuron are . Under these conditions the neuron generated 78 spikes. The recovered signal is shown in Figure 1. To quantify the quality of the recovery, we used the signal-to-noise ratio (SNR) defined by

$$\mathrm{SNR} = 10\,\log_{10} \frac{\int_0^T u^2(t)\, dt}{\int_0^T \bigl(u(t) - \hat u(t)\bigr)^2\, dt}\ \mathrm{dB}.$$

In this definition of the SNR, the noise corresponds to the error between the original and the reconstructed signal. The SNR of the recovery was  dB.
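The SNR computation is a one-liner; the following transcription of the definition above is given for completeness, assuming both signals are sampled on the same uniform grid.

```python
# SNR of the recovery; the error between original and reconstructed signal
# is treated as noise, as in the definition above.
import numpy as np

def snr_db(u, u_hat):
    return 10.0 * np.log10(np.sum(u ** 2) / np.sum((u - u_hat) ** 2))
```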

3. Single-Input Multi-Output Encoding and Consistent Stimulus Recovery

In this section we consider the reconstruction of a stimulus encoded with a population of LIF neurons. We demonstrate that the consistent recovery can again be formulated as a spline interpolation problem and provide the reconstruction algorithm. We also show how the methodology developed in this section can be applied to a simple encoding circuit consisting of two coupled ON-OFF neurons with feedback.

3.1. Encoding with a Population of LIF Neurons

In what follows we consider a neural encoding circuit consisting of $N$ leaky integrate-and-fire (LIF) neurons. Neuron $j$ has threshold $\delta^j$, bias $b^j$, resistance $R^j$, and capacitance $C^j$, for all $j = 1, 2, \dots, N$. After each spike, every neuron resets its membrane potential to 0. Let $t^j_k$ denote the $k$th spike of neuron $j$, with $k = 1, 2, \dots, n_j$, where $n_j$ is the number of spikes that neuron $j$ generates, $j = 1, 2, \dots, N$.

The $t$-transform of the population of LIF neurons is given by

$$\int_{t^j_k}^{t^j_{k+1}} \bigl(u(s) + b^j\bigr)\, e^{-\frac{t^j_{k+1}-s}{R^j C^j}}\, ds = C^j \delta^j, \qquad k = 1, \dots, n_j - 1, \quad j = 1, \dots, N.$$

Let

$$q^j_k = C^j \delta^j - b^j R^j C^j \left(1 - e^{-\frac{t^j_{k+1}-t^j_k}{R^j C^j}}\right)$$

for all $k = 1, \dots, n_j - 1$ and $j = 1, \dots, N$. As in the previous section, we have the following.

Lemma 11. The $t$-transform of the $j$th LIF neuron can be written in inner-product form as

$$\langle u, \phi^j_k \rangle = q^j_k,$$

where

$$\phi^j_k(t) = e^{-\frac{t^j_{k+1}-t}{R^j C^j}}\,\mathbf 1_{[t^j_k,\, t^j_{k+1}]}(t)$$

and $\langle \cdot, \cdot \rangle$ is the standard inner product restricted to the domain $[0, T]$, for all $k = 1, \dots, n_j - 1$ and $j = 1, \dots, N$.

3.2. Consistent Stimulus Recovery

Let $\mathbf q^j$ be a column vector defined as $\mathbf q^j = [q^j_1, \dots, q^j_{n_j - 1}]^T$, with $j = 1, \dots, N$. The vectors $\mathbf c^j$ have the same dimension and are similarly defined. The matrix $\mathbf G$ is a block square matrix defined as $\mathbf G = [\mathbf G^{ij}]_{i, j = 1, \dots, N}$, with $[\mathbf G^{ij}]_{kl} = \langle \psi^j_l, \phi^i_k \rangle$.

The following theorem first appeared in [16]; its proof is presented here for the first time.

Theorem 12. Assume that at time 0 the membrane potential of all neurons is at the rest value 0. The optimal consistent reconstruction $\hat u$ is unique and can be written as

$$\hat u(t) = \sum_{j=1}^{N} \sum_{k=1}^{n_j - 1} c^j_k\, \psi^j_k(t) + d_1 + d_2\, t,$$

where

$$\psi^j_k = \phi^j_k * g, \qquad g(t) = \frac{|t|^3}{12}.$$

The reconstruction coefficients are given in matrix form by

$$\begin{bmatrix} \mathbf c \\ \mathbf d \end{bmatrix} = \begin{bmatrix} \mathbf G & \mathbf p & \mathbf r \\ \mathbf p^T & 0 & 0 \\ \mathbf r^T & 0 & 0 \end{bmatrix}^{+} \begin{bmatrix} \mathbf q \\ 0 \\ 0 \end{bmatrix},$$

where $^{+}$ denotes the pseudoinverse and

$$\mathbf c = \bigl[(\mathbf c^1)^T, \dots, (\mathbf c^N)^T\bigr]^T, \quad \mathbf q = \bigl[(\mathbf q^1)^T, \dots, (\mathbf q^N)^T\bigr]^T, \quad p^j_k = \langle 1, \phi^j_k \rangle, \quad r^j_k = \langle t, \phi^j_k \rangle.$$

Proof. The proof is notationally more complex but closely follows the proof of Theorem 9. The representation functions $\psi^j_k$ in (27) can be computed analytically as the convolution integrals

$$\psi^j_k(t) = \int_{t^j_k}^{t^j_{k+1}} e^{-\frac{t^j_{k+1}-s}{R^j C^j}}\, \frac{|t - s|^3}{12}\, ds,$$

for all $k = 1, \dots, n_j - 1$ and $j = 1, \dots, N$.
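Computationally, the only change relative to the single-neuron case is a bookkeeping step: the sampling functions and measurements of all neurons are stacked before forming the matrices of Theorem 12. A sketch of this stacking step, under the same discretization assumptions as before:

```python
# Population version of the solver (sketch): per-neuron sampling functions
# and measurements are stacked before forming G, p, r, and q; the
# pseudoinverse step of Theorem 12 is then unchanged.
import numpy as np

def stack_population(phi_list, q_list):
    """phi_list[j]: (n_j - 1, n_grid) array of neuron j's sampling functions;
    q_list[j]: its measurement vector. Returns the stacked (phi, q)."""
    return np.vstack(phi_list), np.concatenate(q_list)
```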

3.3. Example: Encoding with an ON-OFF Neuron Pair

We consider an encoding circuit consisting of two interconnected integrate-and-fire neurons with feedback. For simplicity we assume that the IAF neurons are ideal, that is, $R \to \infty$. Figure 2 depicts the circuit. Whenever a spike is generated, the firing neuron is reset and a feedback signal is added to its membrane potential. In addition, the firing of each spike is communicated to the other neuron through cross-feedback. The two neurons in Figure 2 arise as models of ON and OFF bipolar cells in the retina and their connections through the nonspiking horizontal cells [18].

Let $t^j_k$ denote the $k$th spike of the $j$th neuron, $j = 1, 2$ and $k = 1, \dots, n_j$. The $t$-transform of the neural circuit amounts to an equation of the form

$$\int_{t^j_k}^{t^j_{k+1}} u(s)\, ds = q^j_k$$

and can be written in inner product form as $\langle u, \phi^j_k \rangle = q^j_k$, with $q^j_k$ the right-hand side of (31), which collects the bias, threshold, and (cross-)feedback contributions, and $\phi^j_k = \mathbf 1_{[t^j_k,\, t^j_{k+1}]}$, for all $k = 1, \dots, n_j - 1$ and $j = 1, 2$. With the $t$-transform in inner product form, the optimal consistent reconstruction is given by Theorem 12.

A simple example consisting of two symmetric neurons is considered here. The cross-feedback filters are identical for the two neurons, and no self-feedback is present, that is, $h^{11} = h^{22} = 0$. Note that the impulse response of the cross-feedback filter has mean value zero. If the mean value were nonzero, the spike density of the ideal IAF neurons could be driven to zero or infinity.

The input $u$ was chosen to be the temporal contrast of an artificial (positive) input photocurrent. With $I(t)$ denoting the input photocurrent, the temporal contrast is defined as

$$u(t) = \frac{1}{I(t)}\,\frac{dI}{dt}(t).$$

Clearly, even when the bandwidth of the input photocurrent is known, the effective bandwidth of the actual neuron input cannot be analytically estimated. The input photocurrent was bandlimited with bandwidth  Hz and had a duration of  milliseconds. Each neuron generated 75 spikes. The result of the recovery is shown in Figure 3. The SNR is equal to 28.75 dB.
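A discrete version of the temporal contrast is immediate; the sketch below assumes the definition given above and a strictly positive, uniformly sampled photocurrent.

```python
# Temporal contrast of a positive photocurrent I(t) sampled every dt
# seconds: u = (dI/dt) / I, following the definition reconstructed above.
import numpy as np

def temporal_contrast(I, dt):
    return np.gradient(I, dt) / I
```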

4. Multi-Input Multi-Output Encoding and Consistent Stimulus Recovery

In this section we present our model of consistent information representation of $M$-dimensional vector signals using an $N \times M$ filtering kernel and an ensemble of $N$ integrate-and-fire neurons (see Figure 4). We assume without loss of generality that the neurons are ideal (nonleaky). Each component filter of the kernel receives input from one of the component inputs, and its output is additively coupled into a single neuron. Finally, we describe an algorithm for stimulus reconstruction that is based on spline interpolation.

4.1. MIMO Model for Neural Encoding

Let $L^2([0, T]; \mathbb R^M)$ denote the space of $M$-dimensional, vector-valued functions of finite energy over the domain $[0, T]$. The general element of this space is $\mathbf u = [u^1, \dots, u^M]^T$, with $u^m = u^m(t)$, $t \in [0, T]$, modeling the $m$th component of the input signal, for all $m = 1, \dots, M$. The space endowed with the inner product and norm defined by

$$\langle \mathbf u, \mathbf v \rangle = \sum_{m=1}^{M} \int_0^T u^m(t)\, v^m(t)\, dt \qquad \text{and} \qquad \|\mathbf u\| = \sqrt{\langle \mathbf u, \mathbf u \rangle},$$

respectively, is a Hilbert space. Let $\mathbf H$ be a filtering kernel defined as

$$\mathbf H(t) = \bigl[h^{jm}(t)\bigr], \qquad j = 1, \dots, N, \quad m = 1, \dots, M.$$

Filtering the signal $\mathbf u$ with the kernel $\mathbf H$ leads to an $N$-dimensional vector-valued signal $\mathbf v = [v^1, \dots, v^N]^T$ defined by

$$v^j(t) = \sum_{m=1}^{M} \bigl(h^{jm} * u^m\bigr)(t), \qquad j = 1, \dots, N.$$

Equation (37) can also be written in vector notation as $v^j = \mathbf h^j * \mathbf u$, where $\mathbf h^j = [h^{j1}, \dots, h^{jM}]$ is the filtering vector of neuron $j$. A bias $b^j$ is added to the component $v^j$ of the signal $\mathbf v$, and the sum is passed through an integrate-and-fire neuron with integration constant (capacitance) $C^j$ and threshold $\delta^j$, for all $j = 1, \dots, N$ (see Figure 4). For simplicity we assume that the IAF neurons are ideal, that is, $R^j \to \infty$. Let $t^j_k$ denote the $k$th spike of neuron $j$, with $k = 1, \dots, n_j$, where $n_j$ is the number of spikes generated by neuron $j$, $j = 1, \dots, N$. The Time Encoding Machine in Figure 4 maps, therefore, the input vector $\mathbf u$ into the vector time sequence $\bigl((t^j_k)_{k=1}^{n_j}\bigr)_{j=1}^{N}$.

The $t$-transform for the $j$th neuron can be written as

$$\int_{t^j_k}^{t^j_{k+1}} \bigl(v^j(s) + b^j\bigr)\, ds = C^j \delta^j$$

or

$$\int_{t^j_k}^{t^j_{k+1}} v^j(s)\, ds = q^j_k,$$

where $q^j_k = C^j \delta^j - b^j\,(t^j_{k+1} - t^j_k)$, for all $k = 1, \dots, n_j - 1$ and all $j = 1, \dots, N$. Note that, without any loss of generality, after firing all neurons are reset to the zero state. The $t$-transform (40) can be written in an inner product form as

$$\langle \mathbf u, \boldsymbol\phi^j_k \rangle = q^j_k,$$

where

$$\boldsymbol\phi^j_k(t) = \bigl(\tilde{\mathbf h}^j * \mathbf 1_{[t^j_k,\, t^j_{k+1}]}\bigr)(t)$$

for all $k$, $j$, and $\tilde{\mathbf h}^j$ denotes the involution (time reversal) of $\mathbf h^j$, that is, $\tilde{\mathbf h}^j(t) = \mathbf h^j(-t)$. Note that the impulse response of the filtering kernel is not restricted to the interval $[0, T]$ and can possibly have infinite support.

Remark 13. An implicit assumption in writing the $t$-transform in the inner product form (41) is that the sampling functions $\boldsymbol\phi^j_k$ belong to $L^2([0, T]; \mathbb R^M)$. A sufficient condition for the latter is that all filters are bounded-input bounded-output (BIBO) stable, that is, $\int_{\mathbb R} |h^{jm}(t)|\, dt < \infty$ for all $j = 1, \dots, N$ and all $m = 1, \dots, M$.
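The following sketch summarizes the MIMO encoding path of Figure 4 (filter, sum, bias, integrate-and-fire); the discretization, the truncation of the convolution, and all names are illustrative assumptions rather than the paper's specification.

```python
# Sketch of the MIMO encoder: each of N ideal IAF neurons integrates a
# filtered mixture of the M input components (illustrative discretization).
import numpy as np

def mimo_encode(u, h, dt, b, delta, C):
    """u: (M, n) stimulus samples; h: (N, M, n_h) filter impulse responses;
    b, delta, C: length-N parameter vectors. Returns N spike-time arrays."""
    N, M, _ = h.shape
    spike_trains = []
    for j in range(N):
        # v^j = sum_m h^{jm} * u^m, truncated to the stimulus support.
        v = sum(np.convolve(h[j, m], u[m])[:u.shape[1]] * dt
                for m in range(M))
        V, spikes = 0.0, []
        for i, vi in enumerate(v):
            V += dt * (vi + b[j]) / C[j]   # ideal IAF: no leak term
            if V >= delta[j]:
                spikes.append((i + 1) * dt)
                V = 0.0
        spike_trains.append(np.array(spikes))
    return spike_trains
```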

4.2. Consistent Stimulus Recovery

The optimal consistent reconstruction is given by the solution of the following spline interpolation problem:

$$\hat{\mathbf u} = \arg\min_{\substack{\mathbf u \,\in\, L^2([0,T];\,\mathbb R^M) \\ \langle \mathbf u, \boldsymbol\phi^j_k \rangle \,=\, q^j_k}} \; \sum_{m=1}^{M} \left\| \frac{d^2 u^m}{dt^2} \right\|_{L^2}^2.$$

We have the following result.

Theorem 14. Assume that at time 0 the membrane potential of all neurons is at the rest value 0. The optimal consistent reconstruction $\hat{\mathbf u}$ is unique and can be written as

$$\hat{\mathbf u}(t) = \sum_{j=1}^{N} \sum_{k=1}^{n_j - 1} c^j_k\, \boldsymbol\psi^j_k(t) + \mathbf d_1 + \mathbf d_2\, t,$$

where

$$\boldsymbol\psi^j_k = \boldsymbol\phi^j_k * g \qquad \text{and} \qquad g(t) = \frac{|t|^3}{12}.$$

With $\mathbf c = [(\mathbf c^1)^T, \dots, (\mathbf c^N)^T]^T$, $\mathbf q = [(\mathbf q^1)^T, \dots, (\mathbf q^N)^T]^T$, and $\mathbf d = [\mathbf d_1^T, \mathbf d_2^T]^T$ similarly defined, the reconstruction coefficients are given in matrix form by

$$\begin{bmatrix} \mathbf c \\ \mathbf d \end{bmatrix} = \begin{bmatrix} \mathbf G & \mathbf B \\ \mathbf B^T & \mathbf 0 \end{bmatrix}^{+} \begin{bmatrix} \mathbf q \\ \mathbf 0 \end{bmatrix},$$

with

$$[\mathbf G^{ij}]_{kl} = \langle \boldsymbol\psi^j_l, \boldsymbol\phi^i_k \rangle, \qquad [\mathbf B^j]_k = \bigl[\langle \mathbf e_1, \boldsymbol\phi^j_k \rangle, \dots, \langle \mathbf e_M, \boldsymbol\phi^j_k \rangle, \langle t\,\mathbf e_1, \boldsymbol\phi^j_k \rangle, \dots, \langle t\,\mathbf e_M, \boldsymbol\phi^j_k \rangle\bigr],$$

where $\mathbf e_m$ denotes the $m$th canonical basis vector of $\mathbb R^M$ and the inner products are restricted to the interval $[0, T]$.

Proof. The proof is based on Theorem 4 in Appendix A.

Remark 15. Note that since the signal reconstruction is set up as a spline interpolation problem, the algorithm presented in Theorem 14 above will produce a solution that depends on both the filtering kernel and the spiking mechanism of the population of neurons. We briefly mention here conditions for no information loss due to filtering. If $\hat{\cdot}$ denotes the Fourier transform, we have

$$\hat{\mathbf v}(\omega) = \hat{\mathbf H}(\omega)\, \hat{\mathbf u}(\omega).$$

The requirement for no information loss implies that $\hat{\mathbf H}(\omega)$, the filtering kernel in the Fourier domain, has rank $M$ for all frequencies of interest (here for all $\omega$ in the support of $\hat{\mathbf u}$). A trivial necessary condition that comes out of the rank condition is that $N \geq M$; that is, the number of neurons that encode the stimulus must be at least equal to the number of its components. This intuitive argument has important ramifications in experimental neuroscience, as it shows that, in general, multivariate stimuli (e.g., video sequences) cannot be efficiently represented by the spike train of a single neuron or a small neural population. Rather, the spike trains from a larger population of neurons that encode the same stimulus need to be used.
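The rank condition can be checked numerically for a given kernel by evaluating its Fourier transform on a frequency grid; the grid, the tolerance, and the function name below are illustrative choices.

```python
# Numerical check of the rank condition of Remark 15: H_hat(omega) must
# have rank M at every frequency of interest (illustrative grid/tolerance).
import numpy as np

def kernel_has_full_rank(h, dt, omega_max, n_freq=256, tol=1e-8):
    """h: (N, M, n_h) filter impulse responses sampled every dt seconds."""
    N, M, n_h = h.shape
    t = np.arange(n_h) * dt
    for omega in np.linspace(-omega_max, omega_max, n_freq):
        # Riemann-sum approximation of the Fourier transform at omega.
        H_hat = (h * np.exp(-1j * omega * t)).sum(axis=2) * dt  # (N, M)
        if np.linalg.matrix_rank(H_hat, tol=tol) < M:
            return False
    return True
```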

4.3. Example: Delay Filter Bank

We present the realization of the recovery algorithm for a filtering kernel that induces arbitrary, but known, delays and weights on the stimulus. The kernel models dendritic tree latencies in sensory neurons (motor, olfactory) [8] or, in general, delays and synaptic weights between groups of pre- and postsynaptic neurons. In order to incorporate these delays, we assume that the stimuli are defined on a time window larger than $[0, T]$. The inner product, however, is restricted to the time interval $[0, T]$.

Each filter delays the stimulus in time by an arbitrary positive amount $\tau^{jm}$ and scales it by an arbitrary real number $a^{jm}$, for all $j = 1, \dots, N$ and all $m = 1, \dots, M$. Consequently, $h^{jm}(t) = a^{jm}\,\delta(t - \tau^{jm})$ and $\tilde h^{jm}(t) = a^{jm}\,\delta(t + \tau^{jm})$, where $\delta(\cdot)$ here denotes the Dirac delta. From now on, the $m$th component of the sampling function $\boldsymbol\phi^j_k$ is therefore $\phi^{jm}_k(t) = a^{jm}\,\mathbf 1_{[t^j_k - \tau^{jm},\, t^j_{k+1} - \tau^{jm}]}(t)$ for all $k$, $j$, and $m$.
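For this kernel the sampling functions need not be computed by convolution: each component is a scaled indicator on a shifted interspike interval, as in the expression above. A one-line sketch (names illustrative):

```python
# Component phi^{jm}_k(t) = a^{jm} * 1_{[t_k - tau, t_{k+1} - tau]}(t) of a
# delay-filter-bank sampling function, evaluated on a time grid.
import numpy as np

def phi_delay(t_grid, tk, tk1, a_jm, tau_jm):
    return a_jm * ((t_grid >= tk - tau_jm) &
                   (t_grid <= tk1 - tau_jm)).astype(float)
```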

The representation functions $\boldsymbol\psi^j_k$ of (45) are given componentwise by

$$\psi^{jm}_k(t) = a^{jm} \int_{t^j_k - \tau^{jm}}^{t^j_{k+1} - \tau^{jm}} \frac{|t - s|^3}{12}\, ds$$

for all $j$, $k$, and $m$. Note that the general term of (49) can be expressed analytically similarly to (19).

The entries of $\mathbf G$ in (46) can be computed from (47) as

$$[\mathbf G^{ij}]_{kl} = \sum_{m=1}^{M} a^{im}\, a^{jm} \int_{t^i_k - \tau^{im}}^{t^i_{k+1} - \tau^{im}} \int_{t^j_l - \tau^{jm}}^{t^j_{l+1} - \tau^{jm}} \frac{|t - s|^3}{12}\, ds\, dt.$$

Note that the entries of $\mathbf G$ can also be computed analytically.

The vector-valued signal $\mathbf u$ has three bandlimited components ($M = 3$), each with the same bandwidth  Hz and time length  millisecond. In total, 9 IAF neurons were used to recover the signal $\mathbf u$. The delays $\tau^{jm}$ were drawn randomly from an exponential distribution with mean  millisecond. The biases $b^j$ and thresholds $\delta^j$, $j = 1, \dots, 9$, were drawn from uniform distributions. Finally, $a^{jm} = 1$ for all $j$ and all $m$.

The recovered stimuli using the spikes from 3, 6, and 9 neurons, respectively, are depicted from top to bottom in Figure 5. For each component, the recovered signal converges to the original one as more neurons are used.

Figure 6 shows the SNR corresponding to the recovery of each stimulus component as the number of neurons used increases. Figure 6 demonstrates that, overall, as more neurons are added to the representation of the stimulus, the SNR of all stimulus components increases. An exception is observed in the recovery of one of the components: increasing the number of neurons from three to four leads to a decrease of its SNR. Note, however, that the SNR for the recovery of the entire vector-valued stimulus increases with the addition of the fourth neuron from 7.71 dB to 12.23 dB (data not shown).

5. Discussion

The methodology of interpolating splines presented here applies to the deterministic case where the input stimulus and the LIF neurons have low noise levels. It ties in naturally with theoretical results that show that neural encoding of bandlimited signals leads to perfect signal reconstruction if Nyquist-type rate conditions are satisfied [5].

In neuromorphic engineering applications the noise levels can be kept low. Neuronal spike trains, however, often exhibit strong variability in response to identical inputs due to various noise sources. For stimuli encoded with neural circuits the problem of optimal reconstruction can be formulated as a smoothing spline problem [4]. This case is presented analytically in [7] for a slightly less general setup, where the signals belong to a Reproducing Kernel Hilbert Space [19]. A reconstruction of stimuli encoded with LIF neurons using both smoothing and interpolating splines offers an additional alternative. Thus, the methodology of spline theory provides a general framework for the optimal reconstruction of signals on a finite time horizon.

The methodology presented here can be applied to the reconstruction of stimuli encoded with neurons that belong to other model classes of interest. An example is provided by neuron models with level crossing spiking mechanisms and feedback that have been investigated in [16]. More generally, the $t$-transform of any neuron model with piecewise linear dynamics can be described by a set of linear projections. Neurons with linear dynamics have been shown to exhibit complex spiking behaviors [20-22].

The MIMO architecture presented here consists of a linear, time-invariant filtering kernel that is separated from the neural spiking mechanism. By relaxing the time-invariance property and embedding spike-triggered resetting mechanisms at the level of the filtering kernel, more complex transformations can be modeled. Consequently, dendritic trees incorporating compartmental neuron models and spike backpropagation [23] can be analyzed with the methodology advanced in this paper. The aforementioned architectures will be the subject of future research.

Appendices

A. Interpolation Splines in Hilbert Spaces

We assume throughout that stimuli belong to the space $H = L^2(D)$ of functions of finite energy over a domain $D$, that is, $u \in L^2(D)$. The information available to a decoder is a set of measurements

$$\langle u, \phi_k \rangle = q_k, \qquad k = 1, \dots, n,$$

where $\phi_k \in H$ are known functions and $q_k \in \mathbb R$. The inner products can be written in operator form as

$$\Phi u = \mathbf q,$$

where $\mathbf q = [q_1, \dots, q_n]^T$, and $\Phi : H \to \mathbb R^n$ is a linear operator defined by

$$\Phi u = \bigl[\langle u, \phi_1 \rangle, \dots, \langle u, \phi_n \rangle\bigr]^T.$$

Finding $u$ by inverting (A.2) is, in general, an ill-posed problem. Additional "smoothness" conditions are needed. We introduce these by requiring that the reconstructed signal minimizes a quadratic criterion $\|T u\|^2$, where $T$ is a bounded linear operator, $R(T)$ is the range of $T$, and $\|\cdot\|$ denotes the standard $L^2$-norm over $R(T)$.

Definition 1. The solution to the interpolation problem

$$\hat u = \arg\min_{\{u \,:\, \Phi u = \mathbf q\}} \|T u\|^2$$

is called an interpolation spline corresponding to the initial data $\mathbf q$, the measurement operator $\Phi$, and the energy operator $T$.

We restrict ourselves to the case where the operator $T$ has a finite-dimensional kernel $\ker T$ of dimension $p$. The following standard theorem establishes conditions for the existence and uniqueness of the interpolation spline. For a proof see [13].

Theorem 2. If $\ker T \cap \ker \Phi = \{0\}$ and the range of $T$ is closed, then there exists a unique interpolation spline corresponding to the data $\mathbf q$, the measurement operator $\Phi$, and the energy operator $T$.

In order to derive the general form of the interpolation spline, we introduce the notion of a reproducing kernel for a Hilbert space $H$ with respect to the energy operator $T$. This notion generalizes the reproducing kernels associated with Reproducing Kernel Hilbert Spaces [19].

Definition 3. The function $K : D \times D \to \mathbb R$ is called the reproducing kernel of the space $H$ with respect to the energy operator $T$, if (1) for any bounded linear functional $\lambda$ on $H$, the function $t \mapsto \lambda_{(s)} K(s, t)$ lies in $H$; (2) any bounded linear functional $\lambda$ that vanishes on the kernel of $T$, that is, with $\lambda u = 0$ for all $u \in \ker T$, can be represented as

$$\lambda u = \bigl[u,\, \lambda_{(s)} K(s, \cdot)\bigr].$$

Here $[\cdot, \cdot]$ is a bilinear form defined by $[u, v] = \langle T u, T v \rangle$.

Theorem 4. The solution to the spline interpolation problem is given by

$$\hat u(t) = \sum_{k=1}^{n} c_k\, \psi_k(t) + \sum_{j=1}^{p} d_j\, \beta_j(t),$$

where the set of functions $\beta_j$, $j = 1, \dots, p$, forms an orthogonal basis for $\ker T$ and

$$\psi_k(t) = \int_D \phi_k(s)\, K(s, t)\, ds.$$

With $\mathbf c = [c_1, \dots, c_n]^T$, $\mathbf d = [d_1, \dots, d_p]^T$, and $\mathbf q = [q_1, \dots, q_n]^T$, the coefficients $\mathbf c$ and $\mathbf d$ satisfy the matrix equations

$$\mathbf G \mathbf c + \mathbf B \mathbf d = \mathbf q, \qquad \mathbf B^T \mathbf c = \mathbf 0.$$

Moreover, $\mathbf G$ is an $n \times n$ matrix and $\mathbf B$ is an $n \times p$ matrix with entries given by

$$[\mathbf G]_{kl} = \langle \psi_l, \phi_k \rangle, \qquad [\mathbf B]_{kj} = \langle \beta_j, \phi_k \rangle,$$

for all $k, l = 1, \dots, n$ and all $j = 1, \dots, p$.

Proof. For the representation result (A.8) see [13]. By substituting (A.8) into (A.2), we obtain the first of the matrix equations in (A.10). For the rest of the equations, define the Lagrangian

$$L(u, \boldsymbol\mu) = \|T u\|^2 + \boldsymbol\mu^T \bigl(\Phi u - \mathbf q\bigr),$$

where the entries of the vector $\boldsymbol\mu$ are the Lagrange multipliers. If $\hat u$ is the optimal consistent reconstruction and $v \in \ker T$, then the optimality of $\hat u$, together with the Cauchy-Schwarz inequality, implies that the weighted combination of the measurement functionals with coefficients $c_k$ must vanish on $\ker T$. Therefore, for each of the basis functions $\beta_j$ of $\ker T$,

$$\sum_{k=1}^{n} c_k\, \langle \beta_j, \phi_k \rangle = 0, \qquad j = 1, \dots, p,$$

which in matrix form can be written as $\mathbf B^T \mathbf c = \mathbf 0$, with $\mathbf B$ defined as in (A.11). Combining the two sets of equations we obtain (A.10). For more information see [13].
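In computational terms, Theorem 4 reduces the reconstruction to one symmetric linear system. A generic solver sketch, assuming $\mathbf G$, $\mathbf B$, and $\mathbf q$ have been assembled as in (A.11) and (A.12):

```python
# Generic solver for the spline system of Theorem 4 (a sketch; G, B, q are
# assumed precomputed; the pseudoinverse handles rank-deficient cases).
import numpy as np

def solve_spline_system(G, B, q):
    n, p = B.shape
    A = np.block([[G, B], [B.T, np.zeros((p, p))]])
    sol = np.linalg.pinv(A) @ np.concatenate([q, np.zeros(p)])
    return sol[:n], sol[n:]   # coefficient vectors c and d
```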

B. Reproducing Kernels for MIMO Signal Reconstruction

Let $H = L^2(D; \mathbb R^M)$ be the space of $M$-dimensional vector-valued functions defined over the domain $D = [0, T]$. The space equipped with the inner product and norm given by (35) is a Hilbert space. Suppose that we seek a consistent reconstruction that also minimizes the energy criterion $\|T \mathbf u\|^2$, where the energy operator $T$ takes the $m$th-order derivative of each component, that is, $T \mathbf u = \bigl[\frac{d^m u^1}{dt^m}, \dots, \frac{d^m u^M}{dt^m}\bigr]^T$.

Lemma 5. The reproducing kernel for the Hilbert space $H$ with respect to the energy operator $T$ is given by

$$K(s, t) = \frac{|t - s|^{2m-1}}{2\,(2m-1)!}\; \mathbf I_M,$$

where $\mathbf I_M$ denotes the $M \times M$ identity matrix.

Proof. It can be shown that the reproducing kernel can be written in the form $K(s, t) = E(t - s)\, \mathbf I_M$, where $E$ is the fundamental solution for the $m$th-iterated Laplacian, that is, $\Delta^m E = \delta$. For univariate functions the $m$th-iterated Laplacian is equal to the $2m$th-order derivative of each component, and the result follows. A complete proof can be found in a general setting in [24].

Note that for $m = 2$ the kernel becomes

$$K(s, t) = \frac{|t - s|^3}{12}\; \mathbf I_M.$$

In the general representation result (A.8), the scalar factor can be absorbed into the coefficients.

Acknowledgments

This work was supported by NIH Grant R01 DC008701-01 and NSF Grant CCF-06-35252. E. A. Pnevmatikakis was also supported by the Onassis Public Benefit Foundation. The authors would like to thank the reviewers for their suggestions for improving the presentation of this paper.