Computational Intelligence and Neuroscience

Volume 2012 (2012), Article ID 209590, 20 pages

http://dx.doi.org/10.1155/2012/209590

## Channel Identification Machines

Department of Electrical Engineering, Columbia University, New York, NY 10027, USA

Received 8 March 2012; Revised 29 June 2012; Accepted 16 July 2012

Academic Editor: Cheng-Jian Lin

Copyright © 2012 Aurel A. Lazar and Yevgeniy B. Slutskiy. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We present a formal methodology for identifying a channel in a system consisting of a communication channel in cascade with an asynchronous sampler. The channel is modeled as a multidimensional filter, while models of asynchronous samplers are taken from neuroscience and communications and include integrate-and-fire neurons, asynchronous sigma/delta modulators and general oscillators in cascade with zero-crossing detectors. We devise channel identification algorithms that recover a projection of the filter(s) onto a space of input signals loss-free for both scalar and vector-valued test signals. The test signals are modeled as elements of a reproducing kernel Hilbert space (RKHS) with a Dirichlet kernel. Under appropriate limiting conditions on the bandwidth and the order of the test signal space, the filter projection converges to the impulse response of the filter. We show that our results hold for a wide class of RKHSs, including the space of finite-energy bandlimited signals. We also extend our channel identification results to noisy circuits.

#### 1. Introduction

Signal distortions introduced by a communication channel can severely affect the reliability of communication systems. If properly utilized, knowledge of the channel response can lead to a dramatic improvement in the performance of a communication link. In practice, however, information about the channel is rarely available a priori and the channel needs to be identified at the receiver. A number of channel identification methods [1] have been proposed for traditional clock-based systems that rely on the classical sampling theorem [2, 3]. However, there is a growing need to develop channel identification methods for asynchronous nonlinear systems, of which time encoding machines (TEMs) [4] are a prime example.

TEMs naturally arise as models of early sensory systems in neuroscience [5, 6] as well as models of nonlinear samplers in signal processing and analog-to-discrete (A/D) converters in communication systems [4, 6]. Unlike traditional clock-based amplitude-domain devices, TEMs encode analog signals as a strictly increasing sequence of irregularly spaced times $(t_k)_{k \in \mathbb{Z}}$. As such, they are closely related to irregular (amplitude) samplers [4, 7] and, due to their asynchronous nature, are inherently low-power devices [8]. TEMs are also readily amenable to massive parallelization [9]. Furthermore, under certain conditions, TEMs faithfully represent analog signals in the time domain; given the parameters of the TEM and the time sequence at its output, a time decoding machine (TDM) can recover the encoded signal loss-free [4, 5].

A general TEM of interest is shown in Figure 1. An analog multidimensional signal is passed through a channel with memory that models physical communication links. We assume that the effect of this channel on the signal can be described by a linear multidimensional filter. The output of the channel is then mapped, or encoded, by a nonlinear asynchronous sampler into the time sequence $(t_k)_{k \in \mathbb{Z}}$. A few examples of samplers include asynchronous A/D converters such as the one based on an asynchronous sigma/delta modulator (ASDM) [4], nonlinear oscillators such as the van der Pol oscillator in cascade with a zero-crossing detector (ZCD) [6], and spiking neurons such as the integrate-and-fire (IAF) or the threshold-and-fire (TAF) neurons [9]. The above-mentioned asynchronous samplers incorporate the temporal dynamics of spike (pulse) generation and allow one to consider, in particular for neuroscience applications, more biologically plausible nonlinear spike generation (sampling) mechanisms.

In this paper, we investigate the following *nonlinear* identification problem: given both the input signal and the time sequence at the output of a TEM, what is the channel filter? System identification problems of this kind are key to understanding the nature of neural encoding and processing [10–14], process modeling and control [15], and, more generally, methods for constructing mathematical models of dynamical systems [16].

Identification of the channel from a time sequence is to be contrasted with existing methods for rate-based models in neuroscience (see [10] for an extensive review). In such models the output of the system is taken to be its instantaneous response rate and the nonlinear generation of a time sequence is not explicitly modeled. Furthermore, in order to fit model parameters, identification methods for such models typically require the response rate to be known [17]. This is often difficult in practice since the same experiment needs to be repeated a large number of times to estimate the response rate. Moreover, the use of the same stimulus typically introduces a systematic bias during the identification procedure [10].

The channel identification methodology presented in this paper employs test signals that are neither white nor have stationary statistics (e.g., Gaussian with a fixed mean/variance). This is a radical departure from the widely employed nonlinear system identification methods [10], including the spike-triggered average [18] and the spike-triggered covariance [19] methods. We carry out the channel identification using input signals that belong to reproducing kernel Hilbert spaces (RKHSs), and, in particular, spaces of bandlimited functions, that is, functions that have a finite support in the frequency domain. The latter signals are extensively used to describe sensory stimuli in biological systems and to model signals in communications. We show that for such signals the channel identification problem becomes mathematically tractable. Furthermore, we demonstrate that the choice of the input signal space profoundly affects the type of identification results that can be achieved.

The paper is organized as follows. In Section 2, we introduce three application-driven examples of the system in Figure 1 and formally state the channel identification problem. In Section 3, we present the single-input single-output (SISO) channel identification machine (CIM) for the finite-dimensional input signal space of trigonometric polynomials. Using analytical methods and simulations, we demonstrate that it is possible to identify the projection of the filter onto the input space loss-free and show that the SISO CIM algorithm can recover the original filter with arbitrary precision, provided that both the bandwidth and the order of the input space are sufficiently high. Then, in Section 4, we extend our methodology to multidimensional systems and present multi-input single-output (MISO) CIM algorithms for the identification of vector-valued filters modeling the channel. We generalize our methods to classes of RKHSs of input signals in Section 5.1 and work out in detail channel identification algorithms for infinite-dimensional Paley-Wiener spaces. In Section 5.2 we discuss extensions of our identification results to noisy systems, where additive noise is introduced either by the channel or the asynchronous sampler. Finally, Section 6 concludes our work.

#### 2. The Channel Identification Problem

We investigate a general I/O system comprised of a filter or a bank of filters (i.e., a linear operator) in cascade with an asynchronous (nonlinear) sampler (Figure 1). The I/O circuit belongs to the class of [Filter]-[Asynchronous Sampler] circuits. In general terms, the input to such a system is a vector-valued analog signal $\mathbf{u} = [u^1, u^2, \dots, u^M]^T$, with components $u^m$, $m = 1, \dots, M$, and the output is a time sequence $(t_k)_{k=1}^{n}$ generated by its asynchronous sampling mechanism. In the neural coding literature, such a system is called a time encoding machine (TEM) [4] as it encodes an unknown signal $\mathbf{u}$ into an observable time sequence $(t_k)_{k=1}^{n}$.

##### 2.1. Examples of Asynchronous SISO and MISO Systems

An instance of the TEM in Figure 1 is the SISO [Filter]-[Ideal IAF] neural circuit depicted in Figure 2(a). Here the filter is used to model the aggregate processing of a stimulus performed by the dendritic tree of a sensory neuron. The output of the filter is encoded into the sequence of spike times $(t_k)_{k=1}^{n}$ by an ideal integrate-and-fire neuron. Identification of dendritic processing in such a circuit is an important problem in systems neuroscience and was first investigated in [20]. Another instance of the system in Figure 1 is the SISO [Filter]-[Nonlinear Oscillator-ZCD] circuit shown in Figure 2(b). In contrast to the first example, where the input was coupled additively, in this circuit the biased filter output is coupled multiplicatively into a nonlinear oscillator. The zero-crossing detector then generates a time sequence by extracting zeros from the observable modulated waveform at the output of the oscillator. Called a TEM with multiplicative coupling [6], this circuit is encountered in generalized frequency modulation [21].

An example of a MISO system is the [Filter]-[ASDM-ZCD] circuit shown in Figure 2(c). Similar circuits arise in practically all modern-day A/D converters and constitute important front-end components of measurement and communication systems. Each signal $u^m$, $m = 1, \dots, M$, is transmitted through a communication channel and the effect of the channel on each signal is modeled using a linear filter with an impulse response $h^m$, $m = 1, \dots, M$. The aggregate channel output $v = \sum_{m=1}^{M} u^m * h^m$, where $*$ denotes the convolution of $u^m$ with $h^m$, is additively coupled into an ASDM. Specifically, $v$ is passed through an integrator and a noninverting Schmitt trigger to produce a binary output $z(t) \in \{-b, b\}$, $t \in \mathbb{R}$. A zero-crossing detector is then used to extract the sequence of zero-crossing times $(t_k)_{k=1}^{n}$ from $z$. Thus, the output of this [Filter]-[ASDM-ZCD] circuit is the time sequence $(t_k)_{k=1}^{n}$.

##### 2.2. Modeling the Input Space

We model channel input signals $u^m$, $m = 1, \dots, M$, as elements of the space $\mathcal{H}$ of trigonometric polynomials (see Section 5.1 for more general input spaces).

*Definition 1. *The space of trigonometric polynomials $\mathcal{H}$ is a Hilbert space of complex-valued functions
$$u(t) = \sum_{l=-L}^{L} u_l\, e_l(t), \quad t \in [0, T], \tag{1}$$
where $u_l \in \mathbb{C}$, $\Omega$ is the bandwidth, $L$ is the order and $T = 2\pi L/\Omega$ is the period, endowed with the inner product
$$\langle u, w \rangle = \int_0^T u(t)\, \overline{w(t)}\, dt. \tag{2}$$
Given the inner product in (2), the set of elements
$$e_l(t) = \frac{1}{\sqrt{T}} \exp\!\left(\frac{jl\Omega t}{L}\right), \quad l = -L, \dots, L, \tag{3}$$
forms an orthonormal basis in $\mathcal{H}$. Thus, any element $u \in \mathcal{H}$ and any inner product can be compactly written as $u = \sum_{l=-L}^{L} u_l e_l$ and $\langle u, w \rangle = \sum_{l=-L}^{L} u_l \overline{w_l}$. Moreover, $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS) with a reproducing kernel (RK) given by
$$K(s, t) = \sum_{l=-L}^{L} e_l(s)\, \overline{e_l(t)}, \tag{4}$$
also known as a Dirichlet kernel [22].

We note that a function $u \in \mathcal{H}$ satisfies $u(0) = u(T)$. There is a natural connection between functions on an interval of length $T$ that take on the same values at the interval end-points and functions on $\mathbb{R}$ that are $T$-periodic: both provide equivalent descriptions of the same mathematical object, namely a function on a circle. By abuse of notation, in what follows $u$ will denote both a function defined on an interval of length $T$ and a function defined on the entire real line. In the latter case, the function $u$ is simultaneously periodic with period $T$ and bandlimited with bandwidth $\Omega$, that is, it has a finite spectral support $\operatorname{supp}(\mathcal{F}u) \subseteq [-\Omega, \Omega]$, where $\mathcal{F}$ denotes the Fourier transform. In what follows we will assume that $u_l \neq 0$ for all $l = -L, \dots, L$, that is, a signal $u \in \mathcal{H}$ contains all frequency components.
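The reproducing property of the Dirichlet kernel can be checked numerically. The sketch below is illustrative only: the bandwidth and order values are hypothetical, and the basis and kernel follow the notation of Definition 1. The rectangle rule over a full period is exact (to roundoff) for trigonometric polynomials, so the inner product can be computed on a grid.

```python
import numpy as np

# Numerical check (hypothetical parameters) of the reproducing property
# u(t0) = <u, K(., t0)> for a random element u of the space H.
Omega = 2 * np.pi * 25.0     # bandwidth, rad/s (assumed value)
L = 5                        # order of the space (assumed value)
T = 2 * np.pi * L / Omega    # period T = 2*pi*L/Omega

def e(l, t):
    """Orthonormal basis element e_l(t) = exp(j*l*Omega*t/L)/sqrt(T)."""
    return np.exp(1j * l * Omega * t / L) / np.sqrt(T)

def K(s, t):
    """Reproducing (Dirichlet) kernel K(s, t) = sum_l e_l(s) * conj(e_l(t))."""
    return sum(e(l, s) * np.conj(e(l, t)) for l in range(-L, L + 1))

rng = np.random.default_rng(0)
u_l = rng.standard_normal(2 * L + 1) + 1j * rng.standard_normal(2 * L + 1)

def u(t):
    return sum(u_l[l + L] * e(l, t) for l in range(-L, L + 1))

# <u, K(., t0)> = int_0^T u(s) conj(K(s, t0)) ds via the periodic rectangle
# rule, which is exact for band-limited periodic integrands.
t0 = 0.3 * T
s = np.linspace(0.0, T, 4096, endpoint=False)
inner = np.sum(u(s) * np.conj(K(s, t0))) * T / 4096
assert abs(inner - u(t0)) < 1e-8
```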

##### 2.3. Modeling the Channel and Channel Identification

The channel is modeled as a bank of filters with impulse responses $h^m$, $m = 1, \dots, M$. We assume that each filter is linear, causal, BIBO-stable, and has a finite temporal support of length $S$, that is, it belongs to the space $L^1([0, S])$. Since the length of the filter support is smaller than or equal to the period of an input signal, we effectively require that for a given $S$ and a fixed input signal bandwidth $\Omega$, the order $L$ of the space $\mathcal{H}$ satisfies $T = 2\pi L/\Omega \ge S$. The aggregate channel output is given by $v = \sum_{m=1}^{M} u^m * h^m$. The asynchronous sampler maps the input signal $v$ into the output time sequence $(t_k)_{k=1}^{n}$, where $n$ denotes the total number of spikes produced on an interval $[0, T]$.

*Definition 2. *A signal $u$ at the input to a [Filter]-[Asynchronous Sampler] circuit together with the resulting output $(t_k)_{k=1}^{n}$ of that circuit is called an input/output (I/O) pair and is denoted by $(u, (t_k)_{k=1}^{n})$.

We are now in a position to define the channel identification problem.

*Definition 3. *Let $u^i$, $i = 1, \dots, N$, be a set of signals from a test space $\mathcal{H}$. A channel identification machine implements an algorithm that estimates the impulse response of the filter from the I/O pairs $(u^i, (t_k^i)_{k=1}^{n^i})$, $i = 1, \dots, N$, of the [Filter]-[Asynchronous Sampler] circuit.

*Remark 4. *We note that a CIM recovers the impulse response of the filter based on the knowledge of the I/O pairs $(u^i, (t_k^i)_{k=1}^{n^i})$, $i = 1, \dots, N$, and the sampler circuit. In contrast, a time decoding machine recovers an encoded signal based on the knowledge of the entire TEM circuit (both the channel filter and the sampler) and the output time sequence $(t_k)_{k=1}^{n}$.

#### 3. SISO Channel Identification Machines

As already mentioned, the circuits under investigation consist of a channel and an asynchronous sampler. Throughout this paper, we will assume that the structure and the parameters of the asynchronous sampler are known. We start by formally describing asynchronous channel measurements in Section 3.1. Channel identification algorithms from asynchronous measurements are given in Section 3.2. Examples characterizing the performance of the identification algorithms are discussed in Section 3.3.

##### 3.1. Asynchronous Measurements of the Channel Output

Consider the SISO [Filter]-[Ideal IAF] neural circuit in Figure 2(a). In this circuit, an input signal $u$ is passed through a filter with an impulse response (or kernel) $h$ and then encoded by an ideal IAF neuron with a bias $b$, a capacitance $C$, and a threshold $\delta$. The output of the circuit is a sequence of spike times $(t_k)_{k=1}^{n}$ on the time interval $[0, T]$ that is available to an observer. This neural circuit is an instance of a TEM and its operation can be described by a set of equations
$$\int_{t_k}^{t_{k+1}} (u * h)(s)\, ds = q_k, \quad k = 1, \dots, n - 1, \tag{5}$$
where $q_k = C\delta - b(t_{k+1} - t_k)$. Intuitively, at every spike time $t_{k+1}$ the ideal IAF neuron is providing a measurement $q_k$ of the signal $v = u * h$ on the time interval $[t_k, t_{k+1}]$.
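The encoding just described is easy to simulate. The sketch below uses hypothetical values for the bias, capacitance, and threshold, and a stand-in sinusoid for the filter output; it integrates the biased signal, emits spikes on threshold crossings, and checks the t-transform relation in the form assumed here, namely that the integral of the filter output between consecutive spikes equals $C\delta - b(t_{k+1} - t_k)$.

```python
import numpy as np

# Toy ideal IAF encoder (hypothetical parameters and input waveform).
b, C, delta = 2.0, 1.0, 0.01          # bias, capacitance, threshold
dt = 1e-6
t = np.arange(0.0, 0.2, dt)
v = 0.5 * np.sin(2 * np.pi * 30 * t)  # stand-in for the filter output (u*h)(t)

spikes, y = [], 0.0
for k, vk in enumerate(v):
    y += (vk + b) / C * dt            # integrate the biased filter output
    if y >= delta:
        spikes.append(t[k])           # record a spike time
        y -= delta                    # reset, keeping the tiny overshoot

def trap(y_, x_):                     # simple trapezoid rule
    return np.sum((y_[1:] + y_[:-1]) / 2 * np.diff(x_))

# Check: int_{t_1}^{t_2} v(s) ds = C*delta - b*(t_2 - t_1)
m = (t >= spikes[0]) & (t <= spikes[1])
lhs = trap(v[m], t[m])                           # measurement from the signal
rhs = C * delta - b * (spikes[1] - spikes[0])    # measurement from spike times
assert abs(lhs - rhs) < 1e-4
```

Note that the measurement is computed from the spike times and the known sampler parameters alone, which is exactly what a CIM has access to.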

*Definition 5. *The mapping of an analog signal $v$, $t \in \mathbb{R}$, into an increasing sequence of times $(t_k)_{k=1}^{n}$ (as in (5)) is called the $t$-transform [4].

*Definition 6. *The operator $\mathcal{P} : L^1([0, T]) \to \mathcal{H}$ given by
$$(\mathcal{P}h)(t) = \sum_{l=-L}^{L} h_l\, e_l(t), \qquad h_l = \int_0^T h(s)\, \overline{e_l(s)}\, ds, \tag{6}$$
is called the projection operator.

Proposition 7 (conditional duality). *For all $u \in \mathcal{H}$, a [Filter]-[Ideal IAF] TEM with a filter kernel $h$ is I/O-equivalent to a [Filter]-[Ideal IAF] TEM with the filter kernel $\mathcal{P}h$. Furthermore, the CIM algorithm for identifying the filter kernel $\mathcal{P}h$ is equivalent to the TDM algorithm for recovering the input signal $\mathcal{P}h$ encoded by a [Filter]-[Ideal IAF] TEM with the filter kernel $u$. *

*Proof. *Since $u \in \mathcal{H}$, we can write $u(t - s) = \sum_{l=-L}^{L} u_l\, e_l(t - s) = \sqrt{T} \sum_{l=-L}^{L} u_l\, e_l(t)\, \overline{e_l(s)}$. Hence,
$$(u * h)(t) = \int_{\mathbb{R}} h(s)\, u(t - s)\, ds = \sqrt{T} \sum_{l=-L}^{L} u_l \left( \int_{\mathbb{R}} h(s)\, \overline{e_l(s)}\, ds \right) e_l(t) = \sqrt{T} \sum_{l=-L}^{L} u_l\, h_l\, e_l(t) = (u * \mathcal{P}h)(t),$$
where we used the commutativity of convolution, the expansion of $u$ in the basis $\{e_l\}$, the definition of $\mathcal{P}$ in (6), and the definition of convolution for periodic functions [23] (the last equality follows since $e_l * e_l = \sqrt{T}\, e_l$ and $e_l * e_m = 0$ for $l \neq m$). It follows that on the interval $[0, T]$, (5) can be rewritten as
$$\int_{t_k}^{t_{k+1}} (u * \mathcal{P}h)(s)\, ds = \int_{t_k}^{t_{k+1}} (\mathcal{P}h * u)(s)\, ds = q_k \tag{7}$$
for all $k = 1, \dots, n - 1$, where the first equality comes from the commutativity of convolution. The right-hand side of (7) is the $t$-transform of a [Filter]-[Ideal IAF] TEM with an input $\mathcal{P}h$ and a filter that has an impulse response $u$. Hence, a TDM can identify $\mathcal{P}h$, given a filter-output pair $(u, (t_k)_{k=1}^{n})$.

The conditional duality between time encoding and channel identification is visualized in Figure 3. First, we note the *conditional I/O equivalence* between the circuit in Figure 3(a) and the original circuit in Figure 2(a). The equivalence is conditional since $\mathcal{P}h$ is a projection onto a particular space $\mathcal{H}$ and the two circuits are I/O-equivalent only for input signals in that space. Second, identifying the filter $\mathcal{P}h$ of the circuit in Figure 3(a) is the same as decoding the signal $\mathcal{P}h$ encoded with the circuit in Figure 3(b). Note that the filter projection $\mathcal{P}h$ is now treated as the input to the [Filter]-[Ideal IAF] circuit and the signal $u$ appears as the impulse response of the filter. Effectively, we have transformed the channel identification problem into a time decoding problem and we can use the TDM machinery of [5] to identify the filter projection $\mathcal{P}h$ on $[0, T]$.

##### 3.2. Channel Identification from Asynchronous Measurements

Given the parameters of the asynchronous sampler, the measurements $q_k$ of the channel output can be readily computed from the spike times $(t_k)_{k=1}^{n}$ using the definition of the $t$-transform ((5) for the IAF neuron). Furthermore, as we will now show, for a known input signal, these measurements can be reinterpreted as measurements of the channel itself.

Lemma 8. * There is a function $\phi_k \in \mathcal{H}$, such that the $t$-transform of the [Filter]-[Ideal IAF] neuron in (7) can be written as
$$\langle \mathcal{P}h, \phi_k \rangle = q_k \tag{8}$$
with $\phi_k = \sum_{l=-L}^{L} \phi_{l,k}\, e_l$ and $\phi_{l,k} = \sqrt{T}\, \overline{u_l} \int_{t_k}^{t_{k+1}} \overline{e_l(s)}\, ds$ for all $k = 1, \dots, n - 1$ and $l = -L, \dots, L$.*

*Proof. * The linear functional $\mathcal{L}_k : \mathcal{H} \to \mathbb{R}$ defined by
$$\mathcal{L}_k(\mathcal{P}h) = \int_{t_k}^{t_{k+1}} (u * \mathcal{P}h)(s)\, ds, \tag{9}$$
where $k = 1, \dots, n - 1$, is bounded. Thus, by the Riesz representation theorem [22], there exists a function $\phi_k \in \mathcal{H}$ such that $\mathcal{L}_k(\mathcal{P}h) = \langle \mathcal{P}h, \phi_k \rangle$, $k = 1, \dots, n - 1$, and $\langle \mathcal{P}h, \phi_k \rangle = q_k$. Since $\phi_k \in \mathcal{H}$, we have $\phi_k = \sum_{l=-L}^{L} \phi_{l,k}\, e_l$ for some $\phi_{l,k} \in \mathbb{C}$, $l = -L, \dots, L$. To find the latter coefficients, we note that $\phi_{l,k} = \langle \phi_k, e_l \rangle = \overline{\mathcal{L}_k(e_l)}$. By definition of $\mathcal{L}_k$ in (9), $\phi_{l,k} = \overline{\int_{t_k}^{t_{k+1}} (u * e_l)(s)\, ds} = \sqrt{T}\, \overline{u_l} \int_{t_k}^{t_{k+1}} \overline{e_l(s)}\, ds$.

Since $q_k = \int_{t_k}^{t_{k+1}} v(s)\, ds$, the measurements are projections of the channel output $v = u * \mathcal{P}h$ onto the corresponding sampling functions, $k = 1, \dots, n - 1$. Assuming that $u$ is known and there are enough measurements available, $\mathcal{P}h$ can be obtained by first recovering $v$ from these projections and then deconvolving it with $u$. However, this two-step procedure does not work when the circuit is not producing enough measurements and one cannot recover $v$. A more direct route is suggested by Lemma 8, since the measurements can also be interpreted as the projections of $\mathcal{P}h$ onto the functions $\phi_k$, that is, $q_k = \langle \mathcal{P}h, \phi_k \rangle$, $k = 1, \dots, n - 1$. A natural question then is how to identify $\mathcal{P}h$ directly from the latter projections.

Lemma 9. * Let $u \in \mathcal{H}$ be the input to a [Filter]-[Ideal IAF] circuit with a filter kernel $h$. If the number of spikes $n$ generated by the neuron in a time interval of length $T$ satisfies $n \ge 2L + 2$, then the filter projection $\mathcal{P}h$ can be perfectly identified from the I/O pair $(u, (t_k)_{k=1}^{n})$ as $(\mathcal{P}h)(t) = \sum_{l=-L}^{L} h_l\, e_l(t)$, where $\mathbf{h} = [h_{-L}, \dots, h_L]^T = \Phi^+ \mathbf{q}$ with $\mathbf{q} = [q_1, \dots, q_{n-1}]^T$, and $\Phi^+$ denotes the Moore-Penrose pseudoinverse of $\Phi$. The matrix $\Phi$ is of size $(n - 1) \times (2L + 1)$ and its elements are given by
$$[\Phi]_{kl} = \sqrt{T}\, u_l \int_{t_k}^{t_{k+1}} e_l(s)\, ds = \begin{cases} \sqrt{T}\, \dfrac{u_l L}{jl\Omega} \left( e_l(t_{k+1}) - e_l(t_k) \right), & l \neq 0, \\[1mm] u_0\, (t_{k+1} - t_k), & l = 0. \end{cases}$$
*

*Proof. * Since $\mathcal{P}h \in \mathcal{H}$, it can be written as $\mathcal{P}h = \sum_{l=-L}^{L} h_l\, e_l$. Then from (8) we have
$$q_k = \langle \mathcal{P}h, \phi_k \rangle = \sum_{l=-L}^{L} h_l\, \overline{\phi_{l,k}}. \tag{11}$$
Writing (11) for all $k = 1, \dots, n - 1$, we obtain $\mathbf{q} = \Phi \mathbf{h}$ with $[\mathbf{q}]_k = q_k$, $[\Phi]_{kl} = \overline{\phi_{l,k}}$, and $[\mathbf{h}]_l = h_l$. This system of linear equations can be solved for $\mathbf{h}$, provided that the rank $r(\Phi)$ of the matrix $\Phi$ satisfies $r(\Phi) = 2L + 1$. A necessary condition for the latter is that the number of measurements is at least $2L + 1$, or, equivalently, the number of spikes $n \ge 2L + 2$. Under this condition, the solution can be computed as $\mathbf{h} = \Phi^+ \mathbf{q}$.

*Remark 10. * If the signal $u$ is fed directly into the neuron, then $h(t) = \delta(t)$, $t \in \mathbb{R}$, and $(\mathcal{P}h)(t) = K(t, 0)$, $t \in [0, T]$. In other words, if there is no processing on the input signal $u$, then the kernel $K(\cdot, 0)$ is identified as the filter projection. This is also illustrated in Figure 7.

In order to ensure that the neuron produces at least $2L + 2$ spikes in a time interval of length $T$, it suffices to have $t_{k+1} - t_k \le T/(2L + 2)$ for all $k$. Since $t_{k+1} - t_k \le C\delta/(b - c)$ for $\max_t |v(t)| \le c < b$, it suffices to have $C\delta/(b - c) \le T/(2L + 2)$. Using the definition of $T = 2\pi L/\Omega$ and taking the limit as $L \to \infty$, we obtain the familiar Nyquist-type criterion $C\delta/(b - c) \le \pi/\Omega$ for a bandlimited stimulus [4, 20] (see also Section 5.1).

Ideally, we would like to identify the impulse response $h$ of the filter. Note that unlike $h$, the projection $\mathcal{P}h$ belongs to the space $\mathcal{H}$. Nevertheless, under quite natural conditions on $h$ (see Section 3.4), $\mathcal{P}h$ approximates $h$ arbitrarily closely on $[0, T]$, provided that both the bandwidth $\Omega$ and the order $L$ of the signal are sufficiently large (see also Figure 9).

The requirement of Lemma 9 that the number of spikes produced by the system in Figure 2(a) has to satisfy $n \ge 2L + 2$ is quite stringent and may be hard to meet in practice, especially if the order $L$ of the space is high. In that case we have the following result.

Theorem 11 (SISO channel identification machine). *Let $\{u^i\}_{i=1}^{N}$ be a collection of $N$ linearly independent stimuli at the input to a [Filter]-[Ideal IAF] circuit with a filter kernel $h$. If the total number of spikes generated by the neuron satisfies $\sum_{i=1}^{N} (n^i - 1) \ge 2L + 1$, then the filter projection $\mathcal{P}h$ can be perfectly identified from the collection of I/O pairs $\{(u^i, (t_k^i)_{k=1}^{n^i})\}_{i=1}^{N}$ as
$$(\mathcal{P}h)(t) = \sum_{l=-L}^{L} h_l\, e_l(t),$$
where $\mathbf{h} = [h_{-L}, \dots, h_L]^T = \Phi^+ \mathbf{q}$. Furthermore, $\mathbf{q} = [\mathbf{q}^1; \dots; \mathbf{q}^N]$ and $\Phi = [\Phi^1; \dots; \Phi^N]$ with each $\mathbf{q}^i$ of size $(n^i - 1) \times 1$ and $\Phi^i$ of size $(n^i - 1) \times (2L + 1)$. The elements of the matrices $\Phi^i$ are given by
$$[\Phi^i]_{kl} = \sqrt{T}\, u_l^i \int_{t_k^i}^{t_{k+1}^i} e_l(s)\, ds, \qquad [\mathbf{q}^i]_k = q_k^i,$$
for all $k = 1, \dots, n^i - 1$, $l = -L, \dots, L$, and $i = 1, \dots, N$. *

*Proof. * Since $\mathcal{P}h \in \mathcal{H}$, it can be written as $\mathcal{P}h = \sum_{l=-L}^{L} h_l\, e_l$. Furthermore, since the stimuli $u^i$ are linearly independent, the measurements $(q_k^i)$ provided by the IAF neuron are distinct. Writing (5) for a stimulus $u^i$, we obtain
$$\int_{t_k^i}^{t_{k+1}^i} (u^i * \mathcal{P}h)(s)\, ds = q_k^i,$$
or $\mathbf{q}^i = \Phi^i \mathbf{h}$, with $[\mathbf{q}^i]_k = q_k^i$, $[\Phi^i]_{kl} = \overline{\phi_{l,k}^i}$, and $[\mathbf{h}]_l = h_l$. Repeating for all $i = 1, \dots, N$, we get $\mathbf{q} = \Phi \mathbf{h}$ with $\mathbf{q} = [\mathbf{q}^1; \dots; \mathbf{q}^N]$ and $\Phi = [\Phi^1; \dots; \Phi^N]$. This system of linear equations can be solved for $\mathbf{h}$, provided that the rank of the matrix $\Phi$ satisfies $r(\Phi) = 2L + 1$. A necessary condition for the latter is that the total number of measurements generated in response to all $N$ signals satisfies $\sum_{i=1}^{N} (n^i - 1) \ge 2L + 1$. Then, the solution can be computed as $\mathbf{h} = \Phi^+ \mathbf{q}$. To find the coefficients $\phi_{l,k}^i$, we note that $\phi_{l,k}^i = \sqrt{T}\, \overline{u_l^i} \int_{t_k^i}^{t_{k+1}^i} \overline{e_l(s)}\, ds$ (see Lemma 8). Hence, the result follows.

The time encoding interpretation of the channel identification problem for a SISO [Filter]-[Ideal IAF] circuit is shown in Figure 4(a). The block diagram of the SISO CIM in Theorem 11 is shown in Figure 4(b). Note that the key idea behind the SISO CIM is the introduction of multiple linearly independent test signals $u^i$, $i = 1, \dots, N$. When the [Filter]-[Ideal IAF] circuit is producing very few measurements of $\mathcal{P}h$ in response to any given test signal $u^i$, we use more signals to obtain additional measurements. We can do so and identify $\mathcal{P}h$ because the filter $h$ is fixed. In contrast, identifying $\mathcal{P}h$ in a two-step deconvolving procedure requires reconstructing at least one channel output $v^i = u^i * h$. This is an ill-posed problem since each $v^i$ is signal-dependent and has a small number of associated measurements.
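The effect of pooling I/O pairs can be illustrated with a rank computation. The setup below is hypothetical and reuses the form of the matrices $\Phi^i$ assumed in this rewrite: one I/O pair with few spikes cannot reach rank $2L + 1$, but stacking the blocks of several linearly independent stimuli can.

```python
import numpy as np

# Rank illustration for Theorem 11 (assumed notation, hypothetical values).
rng = np.random.default_rng(2)
Omega, L = 2 * np.pi * 25.0, 5
T = 2 * np.pi * L / Omega
ls = np.arange(-L, L + 1)

def int_e(l, a, b):                  # closed form of int_a^b e_l(s) ds
    if l == 0:
        return (b - a) / np.sqrt(T)
    w = 1j * l * Omega / L
    return (np.exp(w * b) - np.exp(w * a)) / (w * np.sqrt(T))

def block(u_l, tk):                  # [Phi^i]_{kl} = sqrt(T) u_l int e_l ds
    return np.array([[np.sqrt(T) * u_l[l + L] * int_e(l, a, b) for l in ls]
                     for a, b in zip(tk[:-1], tk[1:])])

N = 5                                # stimuli; only 4 measurements each
blocks = [block(rng.standard_normal(2 * L + 1)
                + 1j * rng.standard_normal(2 * L + 1),
                np.sort(rng.uniform(0, T, 5))) for _ in range(N)]
assert np.linalg.matrix_rank(blocks[0]) < 2 * L + 1   # single pair: deficient
Phi = np.vstack(blocks)                               # stacked: 20 x 11
assert np.linalg.matrix_rank(Phi) == 2 * L + 1        # full column rank
```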

##### 3.3. Examples

We now demonstrate the performance of the identification algorithms in Lemma 9 and Theorem 11. First, we identify a filter in the SISO [Filter]-[Ideal IAF] circuit (Figure 2(a)) from a single I/O pair when this circuit produces a sufficient number of measurements in an interval of length $T$. Second, we identify the filter using multiple I/O pairs for the case when the number of measurements produced in response to any given input signal is small. Finally, we consider the SISO [Filter]-[Nonlinear Oscillator-ZCD] circuit with multiplicative coupling (Figure 2(b)) and identify its filter from multiple I/O pairs.

###### 3.3.1. SISO [Filter]-[Ideal IAF] Circuit, Single I/O Pair

We model the dendritic processing filter using a causal linear kernel $h$ whose general form was suggested in [24] as a plausible approximation to the temporal structure of a visual receptive field. Since the filter support has length $S$, we will need to use a signal with a period $T \ge S$. In Figure 5(a), we apply a signal $u$ that is bandlimited to $\Omega/(2\pi)$ Hz and has a period of $T$ s, with a corresponding order $L$ of the space $\mathcal{H}$. The biased output of the filter $v + b$ is then fed into an ideal integrate-and-fire neuron (Figure 5(b)). Here the bias $b$ guarantees that the output of the integrator reaches the threshold value $\delta$ in finite time. Whenever the biased filter output is above zero (Figure 5(b)), the membrane potential is increasing (Figure 5(c)). If the membrane potential reaches the threshold $\delta$, a spike is generated by the neuron at a time $t_k$ and the potential is reset to zero (Figure 5(c)). The resulting spike train at the output of the [Filter]-[Ideal IAF] circuit is shown in Figure 5(d). Note that the circuit generated a total of $n$ spikes in an interval of length $T$ s. According to Lemma 9, we need at least $2L + 2$ spikes, corresponding to $2L + 1$ measurements, in order to identify the projection $\mathcal{P}h$ of the filter loss-free. Hence, for this particular example, it will suffice to use a single I/O pair $(u, (t_k)_{k=1}^{n})$.

In Figure 5(e), we plot the original impulse response $h$ of the filter, the filter projection $\mathcal{P}h$, and the identified filter $\hat{h}$. The latter was identified using the algorithm in Lemma 9. Notice that the identified impulse response $\hat{h}$ (red) is quite different from $h$ (dashed black). In contrast, and as expected, the blue and red curves corresponding, respectively, to $\mathcal{P}h$ and $\hat{h}$ are indistinguishable. The mean squared error (MSE) between $\hat{h}$ and $\mathcal{P}h$ amounts to −77.5 dB.

The difference between $h$ and $\mathcal{P}h$ is further evaluated in Figures 5(f)–5(h). By definition of $\mathcal{P}$ in (6), $\mathcal{P}h = h * K(\cdot, 0)$, since $K(s, t) = K(s - t, 0)$. Hence both the projection $\mathcal{P}h$ and the identified filter $\hat{h}$ will contain frequencies that are present in the reproducing kernel $K$, or equivalently in the input signal $u$. In Figure 5(f) we show the double-sided Fourier amplitude spectrum of $K(\cdot, 0)$. As expected, we see that the kernel is bandlimited to $\Omega/(2\pi)$ Hz and contains $2L + 1$ distinct frequencies. On the other hand, as shown in Figure 5(g), the original filter $h$ is not bandlimited (since it has a finite temporal support). As a result, the input signal $u$ explores $h$ in a limited spectrum of $[-\Omega, \Omega]$ rad/s, effectively projecting $h$ onto the space $\mathcal{H}$. The Fourier amplitude spectrum of the identified projection $\hat{h}$ is shown in Figure 5(h).

###### 3.3.2. SISO [Filter]-[Ideal IAF] Circuit, Multiple I/O Pairs

Next, we identify the projection of $h$ onto the space of functions that are bandlimited to 100 Hz and have the same period $T$ s as in the first example. This means that the order $L$ of the space of input signals is correspondingly higher. In order to identify the projection loss-free, the neuron has to generate at least $2L + 1$ measurements. If the neuron produces about 13 spikes (12 measurements) on an interval of length $T$, as in the previous example, a single I/O pair will not suffice. However, we can still recover the projection if we use multiple I/O pairs.

In Figure 6 we illustrate the identification of the filter using the algorithm in Theorem 11. A total of $\sum_{i=1}^{4} n^i$ spikes were produced by the neuron in response to four different signals $u^i$. Since the resulting number of measurements satisfies the rank condition of Theorem 11, the MSE between the identified filter $\hat{h}$ (red) and the projection $\mathcal{P}h$ (blue) is −73.3 dB.

###### 3.3.3. SISO [Filter]-[Ideal IAF] Circuit, $h(t) = \delta(t)$

Now we consider the special case when the channel does not alter the input signal, that is, when $h(t) = \delta(t)$, $t \in \mathbb{R}$, is the Dirac delta function. As explained in Remark 10, the CIM should identify the projection of $\delta$ onto $\mathcal{H}$, that is, it should identify the kernel $K(\cdot, 0)$. This is indeed the case, as shown in Figure 7.

###### 3.3.4. SISO [Filter]-[Nonlinear Oscillator-ZCD] Circuit, Multiple I/O Pairs

Next we consider a SISO circuit consisting of a channel in cascade with a nonlinear dynamical system that has a stable limit cycle. We assume that the (positive) biased output of the channel $b + v(t)$, $v = u * h$, is multiplicatively coupled to the dynamical system (Figure 2(b)), so that the circuit is governed by a set of equations
$$\frac{d\mathbf{x}}{dt}(t) = \mathbf{f}(\mathbf{x}(t))\,\bigl(b + v(t)\bigr). \tag{16}$$
The system (16) followed by a zero-crossing detector is an example of a TEM with multiplicative coupling and has been previously investigated in [6]. It can be shown that such a TEM is input/output-equivalent to an IAF neuron with a threshold that is equal to the period of the dynamical system on a stable limit cycle [6].

As an example, we consider a [Filter]-[van der Pol-ZCD] TEM with the van der Pol oscillator described by the set of equations
$$\frac{dx_1}{dt} = \mu \left( x_1 - \frac{x_1^3}{3} - x_2 \right) \bigl(b + v(t)\bigr), \qquad \frac{dx_2}{dt} = \frac{x_1}{\mu}\, \bigl(b + v(t)\bigr),$$
where $\mu$ is the damping coefficient [6]. We assume that $x_1$ is the only observable state of the oscillator and without loss of generality we choose the zero phase of the limit cycle to be the peak of $x_1$.
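A minimal simulation of this sampler is sketched below. The exact form of the multiplicative coupling is an assumption of the sketch, not a statement of the paper's equations; with zero input the zero-crossings of $x_1$ should be evenly spaced by half the limit-cycle period, which is what the final check asserts.

```python
import numpy as np

# Van der Pol oscillator (Lienard form) whose vector field is scaled by the
# positive signal b + v(t) (assumed coupling); a ZCD records the
# zero-crossing times of the observable state x1.
mu, b = 1.0, 1.0
dt, n = 1e-3, 100_000
v = np.zeros(n)                      # zero input: constant rate on the cycle

x1, x2, zeros = 2.0, 0.0, []
for k in range(n - 1):
    g = b + v[k]                     # multiplicative coupling (assumed form)
    nx1 = x1 + mu * (x1 - x1 ** 3 / 3 - x2) * g * dt
    nx2 = x2 + (x1 / mu) * g * dt
    if (x1 > 0) != (nx1 > 0):        # sign change of x1 -> zero crossing
        zeros.append(k * dt)
    x1, x2 = nx1, nx2

gaps = np.diff(np.array(zeros[5:]))  # discard the initial transient
assert len(gaps) > 3
assert np.std(gaps) / np.mean(gaps) < 0.05   # near-constant spacing
```

A nonzero $v$ would locally speed up or slow down the oscillator, shifting the crossing times, which is precisely the information the CIM exploits.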

In Figure 8, we show the results of a simulation in which a SISO CIM was used to identify the channel. Input signals $u^i$ (Figure 8(a)) were bandlimited to 50 Hz and had a period of $T$ s, with a corresponding order $L$. In the absence of an input, that is, when $v(t) = 0$, a constant bias $b$ (Figure 8(b)) resulted in a period of 34.7 ms on a stable limit cycle (Figure 8(e)). As seen in Figures 8(b) and 8(c), downward/upward deviations of $b + v(t)$ in response to $u^i$ resulted in the slowing-down/speeding-up of the oscillator. In order to identify the filter projection $\mathcal{P}h$ onto a space of order $L$ loss-free, we used a total number of zeros at the output of the zero-crossing detector (Figure 8(d)) that is one more than required by the rank condition, or equivalently by the number of measurements. The MSE between the identified filter $\hat{h}$ (red) and the projection $\mathcal{P}h$ (blue) is −66.6 dB.

##### 3.4. Convergence of the SISO CIM Estimate

Recall that the original problem of interest is that of recovering the impulse response $h$ of the filter. The CIM lets us identify the projection $\mathcal{P}h$ of that filter onto the input space. A natural question to ask is whether $\mathcal{P}h$ converges to $h$ and, if so, how and under what conditions. We formalize this below.

Proposition 12. * If $h \in L^2([0, T])$, then $\mathcal{P}h \to h$ in the $L^2$ norm and almost everywhere on $[0, T]$ with increasing $\Omega$ and $L$, and fixed $T = 2\pi L/\Omega$. *

*Proof. * Let $h \in L^2([0, T])$. Then, by definition of $\mathcal{P}$ in (6), we have
$$(\mathcal{P}h)(t) = \sum_{l=-L}^{L} h_l\, e_l(t) = (S_L h)(t),$$
where $S_L h$ is the $L$th partial sum of the Fourier series of $h$ and $h_l = \langle h, e_l \rangle$ is the $l$th Fourier coefficient. Hence the problem of convergence of $\mathcal{P}h$ to $h$ is the same as that of the convergence of the Fourier series of $h$. We thus have convergence in the $L^2$ norm, and convergence almost everywhere follows from Carleson's theorem [23].

*Remark 13. *More generally, if $h \in L^p([0, T])$, $p \in (1, \infty)$, then $\mathcal{P}h \to h$ in the $L^p$ norm and almost everywhere by Hunt's theorem [23].

It follows from Proposition 12 that $\mathcal{P}h$ approximates $h$ arbitrarily closely (in the $L^2$ norm, or MSE sense), given an appropriate choice of $\Omega$ and $L$. Since the number of measurements needed to identify the projection increases linearly with $L$, a single channel identification problem leads us to consider a countably infinite number of time encoding problems in order to identify the impulse response of the filter with arbitrary precision. To provide further intuition about the relationship between $h$ and $\mathcal{P}h$, we compare the two in the time and frequency domains for multiple values of $\Omega$ and $L$ in Figure 9.
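Since on $[0, T]$ the projection is just a partial Fourier sum, the convergence statement can be illustrated directly. The test kernel and all parameter values below are hypothetical; the check asserts that the squared $L^2$ error shrinks as the order grows.

```python
import numpy as np

# Projection error vs. order L for a toy causal kernel on [0, T].
T = 0.2
t = np.linspace(0.0, T, 4001)
h = np.exp(-20 * t) * (20 * t) ** 2            # hypothetical test kernel

def trap(y, x):                                # simple trapezoid rule
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

def proj(h, t, T, L):
    """Lth partial Fourier sum of h on [0, T]."""
    Ph = np.zeros_like(h, dtype=complex)
    for l in range(-L, L + 1):
        e_l = np.exp(2j * np.pi * l * t / T) / np.sqrt(T)
        Ph += trap(h * np.conj(e_l), t) * e_l
    return Ph.real

errs = [trap((h - proj(h, t, T, L)) ** 2, t) for L in (2, 5, 10, 20)]
assert errs[0] > errs[1] > errs[2] > errs[3]   # error shrinks with the order
```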

#### 4. MISO Channel Identification Machines

In this section we consider the identification of a bank of filters with impulse responses $h^m$, $m = 1, \dots, M$. We present a MISO CIM algorithm in Section 4.1, followed by an example demonstrating its performance in Section 4.2.

##### 4.1. An Identification Algorithm for MISO Channels

Consider now the MISO ASDM-based circuit in Figure 2(c), where the vector-valued signal $\mathbf{u} = [u^1, \dots, u^M]^T$, $u^m \in \mathcal{H}$, $m = 1, \dots, M$, is transformed into the time sequence $(t_k)_{k=1}^{n}$. This circuit is also an instance of a TEM and (assuming $\max_t |v(t)| \le c < b$) its $t$-transform is given by
$$\int_{t_k}^{t_{k+1}} v(s)\, ds = q_k, \tag{19}$$
where $v = \sum_{m=1}^{M} u^m * h^m$, with $q_k = (-1)^k \left[ 2C\delta - b(t_{k+1} - t_k) \right]$ and $k = 1, \dots, n - 1$. One simple way to identify the filters $h^m$, $m = 1, \dots, M$, is to identify them one by one as in Theorem 11. For instance, this can be achieved by applying signals of the form $\mathbf{u} = [0, \dots, 0, u^m, 0, \dots, 0]^T$ when identifying the filter $h^m$. In a number of applications, most notably in early olfaction [25], this model of system identification cannot be applied. An alternative procedure that allows one to identify all filters at once is given below.

Theorem 14 (MISO channel identification machine). *Let $\{\mathbf{u}^i\}_{i=1}^{N}$ be a collection of $N$ linearly independent vector-valued signals at the input of a MISO [Filter]-[ASDM-ZCD] circuit with filters $h^m$, $m = 1, \dots, M$. The filter projections $\mathcal{P}h^m$ can be perfectly identified from the collection of I/O pairs $\{(\mathbf{u}^i, (t_k^i)_{k=1}^{n^i})\}_{i=1}^{N}$ as
$$(\mathcal{P}h^m)(t) = \sum_{l=-L}^{L} h_l^m\, e_l(t), \qquad m = 1, \dots, M.$$
Here the coefficients $h_l^m$ are given by $\mathbf{h} = \Phi^+ \mathbf{q}$ with $\mathbf{h} = [(\mathbf{h}^1)^T, \dots, (\mathbf{h}^M)^T]^T$, $[\mathbf{h}^m]_l = h_l^m$, and $\mathbf{q} = [(\mathbf{q}^1)^T, \dots, (\mathbf{q}^N)^T]^T$, $[\mathbf{q}^i]_k = q_k^i$, provided that the matrix $\Phi$ has rank $r(\Phi) = M(2L + 1)$. The matrix $\Phi$ is given by
$$\Phi = \begin{bmatrix} \Phi^{11} & \cdots & \Phi^{1M} \\ \vdots & \ddots & \vdots \\ \Phi^{N1} & \cdots & \Phi^{NM} \end{bmatrix}, \tag{21}$$
where $\Phi^{im}$ is of size $(n^i - 1) \times (2L + 1)$, $i = 1, \dots, N$, $m = 1, \dots, M$. Finally, the elements of the matrix $\Phi^{im}$ are given by
$$[\Phi^{im}]_{kl} = \sqrt{T}\, u_l^{im} \int_{t_k^i}^{t_{k+1}^i} e_l(s)\, ds.$$
*

*Proof. * Since $\mathcal{P}h^m \in \mathcal{H}$ for all $m = 1, \dots, M$, it can be written as $\mathcal{P}h^m = \sum_{l=-L}^{L} h_l^m\, e_l$. Hence, for the $m$th component of the stimulus $\mathbf{u}^i$ we get $u^{im} * h^m = u^{im} * \mathcal{P}h^m$ and
$$(u^{im} * \mathcal{P}h^m)(t) = \sqrt{T} \sum_{l=-L}^{L} u_l^{im}\, h_l^m\, e_l(t). \tag{23}$$
Using the definition of $v$ and substituting (23) into the $t$-transform (19), we obtain
$$q_k^i = \sum_{m=1}^{M} \sum_{l=-L}^{L} h_l^m \left[ \sqrt{T}\, u_l^{im} \int_{t_k^i}^{t_{k+1}^i} e_l(s)\, ds \right],$$
or $\mathbf{q}^i = \Phi^i \mathbf{h}$ with $\Phi^i = [\Phi^{i1}, \dots, \Phi^{iM}]$, $[\Phi^{im}]_{kl} = \sqrt{T}\, u_l^{im} \int_{t_k^i}^{t_{k+1}^i} e_l(s)\, ds$, $[\mathbf{q}^i]_k = q_k^i$, and $\mathbf{h} = [(\mathbf{h}^1)^T, \dots, (\mathbf{h}^M)^T]^T$. Repeating for all stimuli $\mathbf{u}^i$, $i = 1, \dots, N$, we obtain $\mathbf{q} = \Phi \mathbf{h}$ with $\Phi$ as specified in (21). This system of linear equations can be solved for $\mathbf{h}$, provided that the rank of $\Phi$ satisfies the condition $r(\Phi) = M(2L + 1)$. The coefficients are then computed as $\mathbf{h} = \Phi^+ \mathbf{q}$. Hence, the result follows.

The MIMO time-encoding interpretation of the channel identification problem for a MISO [Filter]-[ASDM-ZCD] circuit is shown in Figure 10(a). The block diagram of the MISO CIM in Theorem 14 is shown in Figure 10(b).

*Remark 15. *From (23), we see that each measurement constrains the filter coefficients only through the sums $\sum_{m=1}^{M} u_l^{im} h_l^m$. Writing these for all $i = 1, \dots, N$, we obtain systems of the form $\mathbf{U}_l \mathbf{h}_l$, where $[\mathbf{U}_l]_{im} = u_l^{im}$ and $[\mathbf{h}_l]_m = h_l^m$. In order to identify the multidimensional channel, this system of equations must have a unique solution for every $l = -L, \dots, L$. A necessary condition for the latter is that $N \ge M$, that is, the number of test signals is at least the number of signal components.
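The necessary condition of Remark 15 can be seen in a two-line rank check, with a hypothetical $M$ and random coefficient matrices standing in for $[u_l^{im}]$:

```python
import numpy as np

# Remark 15 illustration (hypothetical M and random stand-in coefficients):
# resolving the M unknowns h_l^1..h_l^M requires the N x M matrix U_l to
# have rank M, which is impossible when N < M.
rng = np.random.default_rng(3)
M = 3
U_few = rng.standard_normal((2, M))     # N = 2 < M test signals
U_enough = rng.standard_normal((4, M))  # N = 4 >= M test signals
assert np.linalg.matrix_rank(U_few) < M         # underdetermined
assert np.linalg.matrix_rank(U_enough) == M     # identifiable
```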

*Remark 16. *The rank condition $r(\Phi) = M(2L + 1)$ can be satisfied by increasing the number of input signals $N$. Specifically, if on average the system is providing $\nu$ measurements in a time interval of length $T$, then the minimum number of test signals is $N = \lceil M(2L + 1)/\nu \rceil$.

##### 4.2. Example: MISO [Filter]-[ASDM-ZCD] Circuit

We now describe simulation results for identifying the channel in the MISO [Filter]-[ASDM-ZCD] circuit of Figure 2(c). We use $M = 3$ different filters $h^1$, $h^2$, and $h^3$. All signals are bandlimited to 100 Hz and have a period of $T$ s, with a corresponding order $L$ of the space. According to Theorem 14, the ASDM has to generate a total of at least $M(2L + 1) + N$ trigger times in order to identify the projections $\mathcal{P}h^1$, $\mathcal{P}h^2$, and $\mathcal{P}h^3$ loss-free. We use all five triplets $\mathbf{u}^i = [u^{i1}, u^{i2}, u^{i3}]^T$, $i = 1, \dots, 5$, to produce 131 trigger times.

A single such triplet is shown in Figure 11(a). The corresponding biased aggregate channel output is shown in Figure 11(b). Since the Schmitt trigger output switches between and (Figure 11(d)), the signal is piece-wise continuous. Figure 11(c) shows the integrator output. Note that when , the channel output is positively biased and the integrator output is compared against a threshold . As soon as that threshold is reached, the Schmitt trigger output switches to and the negatively-biased channel output is compared to a threshold . Passing the ASDM output through a zero-crossing device (Figure 11(d)), we obtain a corresponding sequence of trigger times . The set of all 131 trigger times is shown in Figure 11(e). Three identified filters , and are plotted in Figures 11(f)–11(h). The MSE between filter projections and filters recovered by the algorithm in Theorem 14 is on the order of −60 dB.
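The encoding loop just described, integrate the biased channel output, flip the Schmitt trigger when the integrator reaches a threshold, and record the switching times with a zero-crossing device, can be sketched in a few lines. The parameter names `kappa`, `delta`, `b` and the toy sinusoidal input are illustrative assumptions, not the filters or stimuli of the simulation above.

```python
import numpy as np

# Hedged sketch of an ASDM followed by a zero-crossing detector. The
# integrator accumulates the biased input (u - b*z)/kappa; when it reaches
# -delta with z = +1 (or +delta with z = -1), the Schmitt trigger output z
# flips sign and the switching time is recorded as a trigger time.
def asdm_zcd(u, dt, kappa=1.0, delta=0.01, b=1.5):
    v, z = 0.0, 1.0              # integrator state and Schmitt trigger output
    triggers = []
    for k, uk in enumerate(u):
        v += dt * (uk - b * z) / kappa   # Euler step of the biased integrator
        if z > 0 and v <= -delta:        # lower threshold reached
            z, v = -1.0, -delta
            triggers.append(k * dt)
        elif z < 0 and v >= delta:       # upper threshold reached
            z, v = 1.0, delta
            triggers.append(k * dt)
    return triggers

dt = 1e-5
t = np.arange(0, 0.05, dt)
u = 0.5 * np.sin(2 * np.pi * 100 * t)    # toy 100 Hz input with |u| < b
trigger_times = asdm_zcd(u, dt)
print(len(trigger_times) > 0)            # the circuit produces trigger times
```

Note that the bias `b` must exceed the amplitude of the input so that the integrator output always reaches the next threshold, mirroring the biased channel output discussed above.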

#### 5. Generalizations

We shall briefly generalize the results presented in the previous sections in two important directions. First, we consider a general class of signal spaces for test signals in Section 5.1. Then we discuss channel models with noisy observations in Section 5.2.

##### 5.1. Hilbert Spaces and RKHSs for Input Signals

Until now we have presented channel identification results for a particular space of input signals, namely the space of trigonometric polynomials. The finite dimensionality of this space and the simplicity of the associated inner product make it an attractive space to work with when implementing a SISO or a MISO CIM algorithm. Fundamentally, however, the identification methodology relied on the geometry of the Hilbert space of test signals [5, 26]; computational tractability was based on kernel representations in an RKHS.

Theorem 17. *Let be a collection of linearly independent and bounded stimuli at the input of a [Filter]-[Asynchronous Sampler] circuit with a linear processing filter and the -transform
**
where is a bounded linear functional mapping into a measurement . Then there is a set of sampling functions , in such that
**
for all , . Furthermore, if is an RKHS with a kernel , , then . Let the set of representation functions , span the Hilbert space . Then
**
Finally, if and are orthogonal bases or frames for , then the filter coefficients amount to , where with , and with for all , and . *

*Proof. *By the Riesz representation theorem, since the linear functional is bounded, there is a set of sampling functions in such that . If is an RKHS, a sampling function can be computed using the reproducing property of the kernel as in
Finally, writing all inner products yields, with the notation above, a system of linear equations, and the filter coefficients amount to .

###### 5.1.1. Example: Paley-Wiener Space

As an example, we consider the Paley-Wiener space which is closely related to the space of trigonometric polynomials. Specifically, the finite-dimensional space can be thought of as a discretized version of the infinite-dimensional Paley-Wiener space in the frequency domain. An element has a line spectrum at frequencies , . This spectrum becomes dense in as . The space with the inner product given by is also an RKHS with an RK [22] with . Defining the projection of the filter onto as , we find that Lemma 8 still holds with and we can extend Theorem 11 to the following.
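The reproducing property in the Paley-Wiener setting can be checked numerically. This sketch assumes a bandwidth Ω (here 2π·25 rad/s, an illustrative choice) and uses a kernel itself as the test signal, since kernels lie in the space by construction; the grid approximation of the real-line inner product is only approximate.

```python
import numpy as np

# Hedged sketch: the reproducing kernel of the Paley-Wiener space with
# bandwidth Om is K(s, t) = sin(Om (t - s)) / (pi (t - s)), and for any u in
# the space <u, K(., t)> = u(t). We verify this on a fine grid with
# u = K(., t1), which is in the space by construction.
Omega = 2 * np.pi * 25.0                  # assumed 25 Hz bandwidth
s = np.arange(-100.0, 100.0, 1e-3)        # grid approximating the real line

def kernel(s, t, Om):
    # np.sinc(x) = sin(pi x)/(pi x), so this equals sin(Om (t - s))/(pi (t - s))
    return (Om / np.pi) * np.sinc(Om * (t - s) / np.pi)

t1, t0 = 0.05, 0.123
u = kernel(s, t1, Omega)                  # a signal in the Paley-Wiener space
inner = np.trapz(u * kernel(s, t0, Omega), s)   # <u, K(., t0)> on the grid
expected = kernel(t1, t0, Omega)          # u(t0) = K(t1, t0)
print(abs(inner - expected) < 1e-2)       # the kernel reproduces u(t0)
```

The agreement is limited only by grid resolution and truncation of the integration domain; as the grid is refined and extended, the inner product converges to u(t0) exactly.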

Proposition 18. * Let be a collection of linearly independent and bounded stimuli at the input of a [Filter]-[Ideal IAF] neural circuit with a dendritic processing filter . If , then can be perfectly identified from the collection of I/O pairs as
**
where , , and . Finally, , where with , and with for all , and . *

*Proof. * As before, the spikes in response to each test signal , , represent distinct measurements of . Thus we can think of the 's as projections of onto , where . Since the signals are linearly independent and bounded [5], it follows that, if or equivalently if the number of test signals , the set of functions with , is a frame for [5, 26]. Hence
If the set of functions forms a frame for , we can find the coefficients , , by taking the inner product of (34) with each element of : , for , . Letting , we obtain
for , . Writing (35) in matrix form, we have with . Finally, the coefficients , and , amount to .

Simulation results of a SISO CIM for a Paley-Wiener space of test signals are shown in Figure 12. Input signals were bandlimited to 100 Hz and the circuit generated a total of 38 spikes. The MSE between the identified filter (red) and the projection (blue) is −71.1 dB.

##### 5.2. Channels with Noisy Observations

In the derivations above we implicitly assumed that the I/O system was noiseless. In practice, noise is introduced either by the channel or the sampler itself. Here we revisit the -transform in (5) and show that the analysis/methodology employed in the previous sections can be extended within an appropriate mathematical setting to I/O systems with noisy measurements.

Recall that the -transform of an ideal IAF neuron is given by , , where is the number of spikes generated by the neuron in an interval of length . The measurements were obtained by applying a piece-wise linear operator on the channel output . If either the channel or the sampler introduces an error, we can model it by adding a noise term to the -transform [9]: Here we will assume that , , are i.i.d.
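The t-transform of the ideal IAF neuron can be illustrated with a minimal simulation. The parameter names `b` (bias), `kappa` (integration constant), `delta` (threshold), and the toy input are assumptions for illustration; the standard t-transform for an ideal IAF neuron with bias states that the integral of the channel output over an inter-spike interval equals κδ minus b times the interval length.

```python
import numpy as np

# Hedged sketch of an ideal IAF neuron: the membrane integrates the biased
# channel output v(t) + b; when the integral reaches kappa*delta, a spike is
# emitted and the integrator is reset. Each inter-spike interval yields the
# measurement
#   q_k = integral of v over [t_k, t_{k+1}] = kappa*delta - b*(t_{k+1} - t_k).
def iaf_encode(v, dt, b=2.0, kappa=1.0, delta=0.005):
    y, spikes = 0.0, []
    for k, vk in enumerate(v):
        y += dt * (vk + b)
        if y >= kappa * delta:
            y -= kappa * delta           # reset, carrying the tiny overshoot
            spikes.append(k * dt)
    return np.array(spikes)

dt = 1e-6
t = np.arange(0, 0.1, dt)
v = 0.5 * np.sin(2 * np.pi * 25 * t)     # toy channel output with |v| < b
tk = iaf_encode(v, dt)

# Verify the t-transform on one inter-spike interval.
k = 3
lhs = np.trapz(v[(t >= tk[k]) & (t < tk[k + 1])], dx=dt)
rhs = 1.0 * 0.005 - 2.0 * (tk[k + 1] - tk[k])
print(abs(lhs - rhs) < 1e-4)             # both sides agree up to grid error
```

Modeling a noisy sampler then amounts to perturbing the threshold (or, equivalently, adding a noise term to each measurement q_k), which is exactly the random-threshold model used in the example of Section 5.2.1.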

In the presence of noise it is not possible to identify the projection loss-free. However, we can still identify an estimate of that is optimal for an appropriately defined cost function. For example, we can formulate a bi-criterion Tikhonov regularization problem where the scalar provides a trade-off between the faithfulness of the identified filter projection to measurements and its norm .

Theorem 19. * Problem (37) can be solved explicitly in analytical form. The optimal solution is achieved by
**
with , and , , as defined in (13). *

*Proof. *Since the minimizer is in , it is of the form given in (38). Substituting this into (37), we obtain
where with , , as defined in (13). This quadratic optimization problem can be solved analytically by expressing the objective as a convex quadratic function with denoting the conjugate transpose. A vector minimizes if and only if , that is, .
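The final step of the proof, minimizing a convex quadratic by solving its normal equations, is ordinary ridge (Tikhonov) regression in the coefficient domain. This is a minimal sketch with stand-in matrices: `Phi`, `G`, `q`, and `lam` are illustrative placeholders, not the paper's specific quantities.

```python
import numpy as np

# Hedged sketch of the Tikhonov-regularized estimate in Theorem 19: the
# objective ||Phi c - q||^2 + lam * c^H G c is a convex quadratic, so its
# minimizer solves the normal equations (Phi^H Phi + lam G) c = Phi^H q.
rng = np.random.default_rng(1)
n_meas, n_coef, lam = 30, 11, 1e-3
Phi = rng.standard_normal((n_meas, n_coef))      # stand-in measurement matrix
q = Phi @ rng.standard_normal(n_coef) + 0.05 * rng.standard_normal(n_meas)
G = np.eye(n_coef)                               # stand-in for the Gram matrix

c = np.linalg.solve(Phi.conj().T @ Phi + lam * G, Phi.conj().T @ q)

# Sanity check: the closed-form solution beats nearby perturbations.
def cost(c):
    return np.linalg.norm(Phi @ c - q) ** 2 + lam * c.conj() @ G @ c

print(all(cost(c) <= cost(c + 1e-3 * rng.standard_normal(n_coef))
          for _ in range(5)))                    # True: c is the minimizer
```

The regularization weight `lam` plays the role of the trade-off scalar in (37): larger values shrink the identified coefficients toward zero, smaller values fit the noisy measurements more closely.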

*Remark 20. *In Section 3.2, identification of the projection amounted to finding such that the sum of the residuals was minimized [9]. In other words, we were solving an unconstrained convex optimization problem of the form
where and with , , as defined in (13).

###### 5.2.1. Example: Noisy SISO [Filter]-[Ideal IAF] Circuit

In the following example, we assume that noise is added to the measurements , , by the neuron and we model that noise by introducing random thresholds that are normally distributed with a mean and a standard deviation , that is, , where . Thus random thresholds result in additive noise , .

In Figure 13(a) we show two stimuli that were used to probe the [Filter]-[Ideal IAF] circuit. Both stimuli are bandlimited to 25 Hz and have a period of s, that is, the order of the space is . The response of the neuron to a biased filter output (Figure 13(b)) is shown in Figure 13(c). Note the significant deviations in thresholds around the mean value of . Although a significant amount of noise is introduced into the system, we can identify an optimal estimate that is still quite close to the true projection . The MSE of identification is −31.8 dB.

#### 6. Conclusion

In this paper we presented a class of channel identification problems arising in the context of communication channels in [Filter]-[Asynchronous Sampler] circuits. Our results are based on a key structural result: a conditional duality between time decoding and channel identification. This duality shows that, given a class of test signals, the projection of the filter onto the space of input signals can be recovered loss-free. Moreover, the channel identification problem can be converted into a time decoding problem. We considered a number of channel identification problems that arise both in communications and in neuroscience. We presented CIM algorithms that allow one to recover projections of both one-dimensional and multidimensional filters in such problems and demonstrated their performance through numerical simulations. Furthermore, we showed that under natural conditions on the impulse response of the filter, the filter projection converges to the original filter almost everywhere and in the mean-squared sense ( norm), with increasing bandwidth and order of the space. Thus, in order to identify the impulse response of the filter with arbitrary precision, we are led to consider a countably infinite number of time encoding problems. Finally, we generalized our results to a large class of test signal spaces and to channel models with noisy observations.

#### Acknowledgments

This work was supported in part by the NIH under grant no. R01 DC008701-05 and in part by AFOSR under grant no. FA9550-12-1-0232. The authors' names are listed in alphabetical order.

#### References

1. L. Tong, B. M. Sadler, and M. Dong, “Pilot-assisted wireless transmissions: general model, design criteria, and signal processing,” *IEEE Signal Processing Magazine*, vol. 21, no. 6, pp. 12–25, 2004.
2. M. Unser, “Sampling—50 years after Shannon,” *Proceedings of the IEEE*, vol. 88, no. 4, pp. 569–587, 2000.
3. J. J. Benedetto and P. J. S. G. Ferreira, Eds., *Modern Sampling Theory, Mathematics and Applications*, Birkhäuser, 2001.
4. A. A. Lazar and L. T. Tóth, “Perfect recovery and sensitivity analysis of time encoded bandlimited signals,” *IEEE Transactions on Circuits and Systems I*, vol. 51, no. 10, pp. 2060–2073, 2004.
5. A. A. Lazar and E. A. Pnevmatikakis, “Faithful representation of stimuli with a population of integrate-and-fire neurons,” *Neural Computation*, vol. 20, no. 11, pp. 2715–2744, 2008.
6. A. A. Lazar, “Population encoding with Hodgkin-Huxley neurons,” *IEEE Transactions on Information Theory*, vol. 56, no. 2, pp. 821–837, 2010.
7. H. G. Feichtinger and K. Gröchenig, “Theory and practice of irregular sampling,” in *Wavelets: Mathematics and Applications*, Studies in Advanced Mathematics, pp. 305–363, CRC Press, 1994.
8. S. Yan Ng, *A Continuous-Time Asynchronous Sigma Delta Analog to Digital Converter for Broadband Wireless Receiver with Adaptive Digital Calibration Technique*, Ph.D. thesis, Department of Electrical and Computer Engineering, Ohio State University, 2009.
9. A. A. Lazar, E. A. Pnevmatikakis, and Y. Zhou, “Encoding natural scenes with neural circuits with random thresholds,” *Vision Research*, vol. 50, no. 22, pp. 2200–2212, 2010. Special Issue on Mathematical Models of Visual Coding.
10. M. C.-K. Wu, S. V. David, and J. L. Gallant, “Complete functional characterization of sensory neurons by system identification,” *Annual Review of Neuroscience*, vol. 29, pp. 477–505, 2006.
11. U. Friederich, D. Coca, S. Billings, and M. Juusola, “Data modelling for analysis of adaptive changes in fly photoreceptors,” *Neural Information Processing*, vol. 5863, no. 1, pp. 34–48, 2009.
12. T. W. Berger, D. Song, R. H. M. Chan, and V. Z. Marmarelis, “The neurobiological basis of cognition: identification by multi-input, multioutput nonlinear dynamic modeling,” *Proceedings of the IEEE*, vol. 98, no. 3, pp. 356–374, 2010.
13. T. W. Berger, D. Song, R. H. M. Chan et al., “A hippocampal cognitive prosthesis: multi-input, multi-output nonlinear modeling and VLSI implementation,” *IEEE Transactions on Neural Systems and Rehabilitation Engineering*, vol. 20, no. 2, pp. 198–211, 2012.
14. Z. Song, M. Postma, S. A. Billings, D. Coca, R. C. Hardie, and M. Juusola, “Stochastic, adaptive sampling of information by microvilli in fly photoreceptors,” *Current Biology*, vol. 22, pp. 1–10, 2012.
15. F. J. Doyle III, R. K. Pearson, and B. A. Ogunnaike, *Identification and Control Using Volterra Models*, Springer, 2002.
16. L. Ljung, “Perspectives on system identification,” *Annual Reviews in Control*, vol. 34, no. 1, pp. 1–12, 2010.
17. F. E. Theunissen, S. V. David, N. C. Singh, A. Hsu, W. E. Vinje, and J. L. Gallant, “Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli,” *Network*, vol. 12, no. 3, pp. 289–316, 2001.
18. R. de Boer and P. Kuyper, “Triggered correlation,” *IEEE Transactions on Biomedical Engineering*, vol. 15, no. 3, pp. 169–179, 1968.
19. O. Schwartz, E. J. Chichilnisky, and E. P. Simoncelli, “Characterizing neural gain control using spike-triggered covariance,” *Advances in Neural Information Processing Systems*, vol. 14, pp. 269–276, 2002.
20. A. A. Lazar and Y. B. Slutskiy, “Identifying dendritic processing,” *Advances in Neural Information Processing Systems*, vol. 23, pp. 1261–1269, 2010.
21. W. P. Torres, A. V. Oppenheim, and R. R. Rosales, “Generalized frequency modulation,” *IEEE Transactions on Circuits and Systems I*, vol. 48, no. 12, pp. 1405–1412, 2001.
22. A. Berlinet and C. Thomas-Agnan, *Reproducing Kernel Hilbert Spaces in Probability and Statistics*, Kluwer Academic Publishers, 2004.
23. L. Grafakos, *Modern Fourier Analysis*, vol. 250 of *Graduate Texts in Mathematics*, Springer, 2008.
24. E. H. Adelson and J. R. Bergen, “Spatiotemporal energy models for the perception of motion,” *Journal of the Optical Society of America A*, vol. 2, no. 2, pp. 284–299, 1985.
25. A. J. Kim, A. A. Lazar, and Y. B. Slutskiy, “System identification of *Drosophila* olfactory sensory neurons,” *Journal of Computational Neuroscience*, vol. 30, no. 1, pp. 143–161, 2011.
26. O. Christensen, *An Introduction to Frames and Riesz Bases*, Applied and Numerical Harmonic Analysis, Birkhäuser, 2003.