Abstract

An array of nonidentical and locally connected chaotic biological neurons is modelled by a single representative chaotic neuron model based on an extension of the Hindmarsh-Rose neuron. This model is then employed in conjunction with the unscented Kalman filter to study the associated state estimation problem. The archetypal system, which was deliberately chosen to be chaotic, was corrupted with noise, and the influence of the noise seemed to annihilate the chaotic behaviour. It was nevertheless observed that the filter performs quite well in reconstructing the states of the system, even though the introduction of relatively low levels of noise had a profound effect on the system. Neither the noise-corrupted process model nor the filter gave any indications of chaos. We believe that this behaviour can be generalised and expect that unscented Kalman filtering of the states of a biological neuron is entirely feasible even when the uncorrupted process model exhibits chaos. Finally, the methodology of the unscented Kalman filter is applied to filter a typical simulated ECG signal using a synthetic model-based approach.

1. Introduction

Oscillatory signals in the cardiovascular region either originate directly from the sinoatrial node or from one of the neurons traversed by an action potential on its way to the ventricular myocytes. Alternatively, they are functions or weighted sums of action potentials arising at spatially distributed points. To consider a range of oscillatory measurements in the cardiovascular region, it is therefore important to consider the output of a typical neuronal cell.

Neural information is mainly encoded in the various firing patterns of a neuron, such as periodic spiking (or bursting) and chaotic spiking (or bursting), travelling among coupled neurons within a physiological domain of neurons such as the heart. The “action potential” is a spontaneously and rhythmically produced electrical impulse in the membrane of a neuron that occurs when the neuron fires, due to an exchange of charged ions between the inside and the outside of the cell. Although not a definition, a dynamic system may be considered chaotic if it exhibits (i) sensitive dependence on the initial conditions and (ii) a number of dense orbits with a multiplicity of periods for a range of parameters. Two nonlinear dynamic systems with chaotic responses can sometimes exhibit the phenomenon of synchronization, when the responses of the two lock in and seem to drive each other with a common feature such as the phase, phase lag, amplitude, or envelope, or even some generalised property that can be described in terms of a functional of the features of the response. Physiological observations have confirmed the existence of synchronous motion of neurons in different areas of the heart (Elson et al. [1], Pinto et al. [2], and Szucs et al. [3]). Synchronization of neurons is possible when a single neuron faithfully encodes the timing of successive peaks, bursts, or spikes and a group of neurons can respond collectively to a common synaptic current. Moreover, a group of interacting coupled neurons can display various synchronous cardiovascular rhythms. Several types of synchronization of coupled neurons have been studied under the influence of parameter changes, and it is observed that when the coupling strength is above a critical value, certain synchronization mechanisms between neurons can be achieved. This applies both to bursting neurons and to neurons exhibiting periodic spikes. The presence of noise can have a profound effect and can enhance synchronization between neurons under certain conditions. Thus it was felt that one could employ a nonlinear filter such as the unscented Kalman filter (UKF) to estimate the states and parameters of an archetypal neuron.

In this paper the state and parameter estimation of an array of nonidentical, locally connected chaotic biological neuronal models is considered. It is known that, under certain conditions, even a single biological neuron can exhibit chaotic behaviour. Chaos may be achieved by introducing the nonlinear effects of the chemical and electrical synapses. Alternatively, the chaotic behaviour of a single biological neuron may be achieved by driving it with periodic excitations. The global behaviour of an array of biological neurons may then be investigated by considering a spatial distribution of identical neurons, where spatiotemporal chaos emerges, as well as in the presence of spatial diversity, generated by a distribution law which could be stochastic or chaotic. In the latter case, it has been observed that the introduction of spatial disorder enhances the self-organization or synchronisation capability. In particular, in agreement with the results presented in the works of Elson et al. [1], Pinto et al. [2], and Szucs et al. [3], the introduction of spatial diversity generated by such a distribution leads to an improvement in synchronization. Although the phenomenon of synchronization in dynamical systems has been observed for a long time, it is now known that two or more chaotic systems can be synchronized by linking them with mutual coupling or with a common signal or signals. Ideal synchronisation can be induced by mutually coupling a pair of identical chaotic systems, in which case all trajectories converge to the same value and remain in step with each other during further evolution. Linking chaotic systems described by identical differential-dynamic models but with different system parameters can lead to practical synchronization involving phase synchronization. Initially unexcited biological neural models, subsequently externally excited by periodic oscillators, can synchronize both in chaotic and periodic regimes. Provided the amplitudes and frequencies of certain modes are within certain limits, it has been observed that a number of independent neurons can exhibit periodic or chaotic behaviour and achieve a regime of complete synchronization, including phase synchronization.

In this paper, we consider a typical extended four-state Hindmarsh-Rose (HR) model (Hindmarsh and Rose [4]) as a representation of an ensemble of biological neurons. This is preferred over the two-dimensional map model of Rulkov [5] and Shilnikov and Rulkov [6], although the map may be easier to implement in a filter. The neuron model was subjected to the same type of periodic forcing as the biological neurons. The autonomous periodic bursting pattern of the four-dimensional neuron model was observed to be similar to that of a biological neuron. The fact that the HR model represents an ensemble of biological models is accounted for by introducing low-level process noise. Thus both the process and the measurement were assumed to be corrupted by the introduction of very low levels of white noise. The noise had a profound effect on the response of the model as it seemed to annihilate the chaos. The unscented Kalman filtering method was applied to estimate the states of the model. It was observed that the filter performs quite well in reconstructing the states of the system, which was deliberately chosen to be chaotic. Neither the filter nor the noise-corrupted process model gave any indications of chaos.

Finally, the methodology is applied to electrocardiogram (ECG) measurements, which are modelled as oscillatory signals using a synthetic model first proposed by McSharry et al. [7]. Like the Hindmarsh-Rose model, it exhibits limit cycle oscillations and chaos and can represent the primary characteristic (P, Q, R, S, T) points in an ECG. The methodology of the UKF is used to filter and reconstruct a measured ECG signal and is validated by simulation.

2. Chaotic Model of a Neuron

Since the analysis of biological neurons has shown that they can be modelled with only three or four states, we chose initially to use the familiar simplified model put forward by Hindmarsh and Rose [4]. The general form of this model consists of the three state equations given in (2.1). It is hard to establish a one-to-one correspondence between the states of the HR neuron and the states of a biological neuron. Yet the HR neuron model seems to reproduce the overall behaviour of the action potential fairly accurately. After appropriate scaling, the output of the HR neuron model can be made to lie within the same nominal limits as that of a biological neuron. Furthermore, the other principal states of the HR neuron show the same behaviour as the principal compartmental currents and gating variables that can be established by considering the diffusion of ionic charge carriers from one compartment to the other. The net result of this type of diffusion is the generation of a potential difference across the two compartments, which can be described by the Nernst equation. For this reason the HR neuron may be employed as a representative model for constructing reduced-order observers of the neuron dynamics. The three equations in (2.1) represent the original HR model, where x corresponds to the membrane voltage, y represents a "fast" current and z, obtained by making the associated rate parameter small, a "slow" current. These three equations (the 3-state model) can produce several modes of spiking-bursting activity, including a regime of chaos that appears similar to that seen in biological neurons. However, the parameter space for the chaotic behaviour is much more restricted than that observed in real neurons. Following Szucs et al. [3], the chaotic regime is greatly expanded by the incorporation of a fourth state equation into the model, giving the extended model (2.2).
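To make the structure of (2.1) and (2.2) concrete, the following sketch integrates a four-state HR-type neuron with sinusoidal forcing. The coefficient values, the coupling constant g, and the forcing amplitude and frequency are illustrative assumptions in the spirit of the model; they are not the parameter set used in this paper.

```python
# Illustrative sketch of a four-state (extended) Hindmarsh-Rose-type neuron
# with sinusoidal forcing. All coefficient values, the coupling constant g
# and the forcing amplitude/frequency are assumed for demonstration only;
# they are not the parameter set used in the paper.
import numpy as np
from scipy.integrate import solve_ivp

def extended_hr(t, state, I=3.0, a=3.0, b=5.0, mu=0.006, s=4.0, x_rest=-1.6,
                g=0.02, nu=0.0009, k=0.9, ell=1.6, A=0.1, f=0.05):
    x, y, z, w = state
    dx = y + a * x**2 - x**3 - z + I + A * np.sin(2.0 * np.pi * f * t)  # membrane voltage
    dy = 1.0 - b * x**2 - y - g * w                                     # "fast" current, coupled to w
    dz = mu * (s * (x - x_rest) - z)                                    # "slow" current
    dw = nu * (-w + k * (y + ell))                                      # very slow, calcium-like process
    return [dx, dy, dz, dw]

t = np.linspace(0.0, 2000.0, 40001)
sol = solve_ivp(extended_hr, (t[0], t[-1]), [0.1, 0.0, 0.0, 0.0], t_eval=t, max_step=0.05)
x, y, z, w = sol.y
```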

Adding the term w(t) to introduce an even slower process is intended to represent the dynamics of intracellular calcium (Ca2+) ions. To couple the additional equation to the original three-state HR model, a coupling term proportional to w is included in the second equation. When this term is taken into account, the model produces simulations of intracellular activity that are even more similar to the biological observations. However, it is not yet known whether the term actually represents the ion kinetics in sinoatrial node (SAN) and ventricular neurons, and numerical simulations are currently under way to compare the transients in HR neurons with those of realistic biological models. Because of its relative simplicity, the extended HR model was extremely useful in constructing a simulation model that could perform the computations necessary to emulate SAN neurons in real time. A similar model has been employed by Mayer et al. [8] to model thalamocortical circuits. Although this simplified model is difficult to compare with biological neurons, which are made up of a multitude of individual conductances and compartments, we expect the model to provide us with the experience needed to estimate the states of a real biological model. Since the estimation of these states and parameters is crucial in establishing physiological mechanisms, we also developed several multicompartmental-type models that provide a more biologically realistic representation of the nonlinear voltage-current relations than that of Hindmarsh and Rose. Röbenack and Goel [9] and Goel and Röbenack [10] demonstrated that it was possible to reconstruct the currents and gate dynamics from measurements of the action potential by using a "reduced order observer." An observer is an electronic circuit that is expected to reconstruct the internal dynamics of a system, whatever the nature of the dynamics may be, solely from the measurements, in such a way that the error between the actual signal and its reconstruction is asymptotically stable. Goel and Röbenack employed a four- and a six-state model to construct their observer. While in this work the HR neuron model has been employed to demonstrate the viability of successfully observing the states of a neuron, the application of the methodology to a multicompartmental biologically inspired model will presumably facilitate the reconstruction of the internal dynamics within the cell using measurements of the action potential. We accordingly employed a modified Hodgkin and Huxley type (Hodgkin and Huxley [11]) seven-state model to reconstruct all the states of the system. When this model, as well as several other biologically inspired models, was used in our first attempt to construct UKF-based state estimators from a biological neural measurement, all of these models exhibited filter instability. Further analysis indicated that this could be due to one of three reasons: (i) the chaotic nature of the dynamics, (ii) unobservability due to inadequate measurements, and (iii) the nonlinear functions arising from the Nernst equations for the compartmental currents due to the ionic concentrations and the sigmoid-like functions associated with the gate time constants and final values, which must lie within prescribed limits. The question of unobservability was dealt with by including a range of simulated measurements. It was then essential to identify which of the remaining reasons was the predominant cause of the filter instability.
It was therefore decided first to eliminate the possibility that the chaotic dynamics were the primary cause of the filter instability. For this reason, before employing these models, the UKF was applied to the simplified extended HR model. In this context we note that observers and extended Kalman filtering have been applied in the past to construct neural estimators by Cruz and Nijmeijer [12]. The reconstruction of the neural dynamics has been considered by Steur et al. [13], and Tyukin et al. [14] have considered the application of adaptive observers to neural systems. However, our objective is to reconstruct the action potential and its features, such as its duration, particularly for a group of spatially distributed neurons, over an extended time frame, and ultimately to extend the application to complex multicompartmental models of biological neurons. An adaptive nonlinear observer, wherein the gain of the observer is continually modified in accordance with the magnitude of the measurements and the noise statistics by an appropriate adaptation law, would be more suitable in this case than a conventional nonlinear observer like the UKF.

The neuron model described by (2.2), which represents a typical nonlinear oscillator, can be characterised in a parameter plane whose coordinates are the amplitude and frequency of the forcing (Glass and Mackey [15]). A study of the response characteristics of this model reveals subharmonic and superharmonic synchronization or chaotic behaviour, depending on the amplitude and frequency of the forcing. In some cases the chaos occurs after a period-doubling bifurcation. For the parameter set considered in (2.2), the response is chaotic. However, the addition of a relatively small level of noise to the initial conditions seemed to completely annihilate the chaos. A typical response of the model is shown in Figure 1(a), with a magnified plot in Figure 1(b), and it illustrates the fact that the response is chaotic. In Figure 1(a) the second state is scaled down by a factor of 400 so that it can be plotted on the same figure. The chaotic nature of the response can be demonstrated by a one-dimensional Poincaré plot (Abarbanel [16]); the Poincaré map corresponding to Figure 1(b) is shown in Figure 1(c) and confirms that the system is chaotic.
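A simple way to construct the one-dimensional Poincaré (return) map referred to above is to collect the successive local maxima of the membrane-voltage trace and plot each maximum against the next. The snippet below sketches this, reusing the trajectory x computed in the simulation sketch above; a scattered, non-repeating cloud of points is the signature of chaotic bursting, whereas a periodic response collapses onto a few isolated points.

```python
# Sketch of a one-dimensional Poincare (return) map constructed from the
# successive local maxima of the membrane-voltage trace x computed above.
import numpy as np
import matplotlib.pyplot as plt

idx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1  # indices of local maxima of x
peaks = x[idx]

plt.plot(peaks[:-1], peaks[1:], '.', markersize=2)
plt.xlabel('x_max(n)')
plt.ylabel('x_max(n + 1)')
plt.title('Return map of successive voltage maxima')
plt.show()
```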

3. The Unscented Kalman Filter

Most dynamic models employed for the purpose of estimating neural action potential signals are not linear. To overcome the limitations of linear models, a number of techniques, such as the extended Kalman filter (EKF), have been proposed in the literature for nonlinear estimation. Unlike the Kalman filter, the EKF may diverge if the consecutive linearizations are not a good approximation of the nonlinear model over the entire uncertainty domain. Yet the EKF provides a simple and practical approach to dealing with essential nonlinear dynamics.

The main difficulty in applying the EKF algorithm to problems related to the estimation of a neural action potential signal lies in determining the proper Jacobian matrices. The UKF, proposed by Julier et al. [17] as an effective way of applying the Kalman filter to nonlinear systems, is a feasible alternative that overcomes this difficulty. It is based on the intuitive concept that it is easier to approximate a probability distribution than to approximate an arbitrary nonlinear function or transformation of a random variable.

The UKF gets its name from the unscented transformation, which is a method of calculating the mean and covariance of a random variable undergoing a nonlinear transformation. Although it is a derivative-free approach, it does not really address the divergence problem. In essence, the method constructs a set of sigma vectors and propagates them through the same nonlinear function. The mean and covariance of the transformed vector are then approximated as weighted sums of the transformed sigma vectors and of their covariance matrices.

Consider a random variable x with dimension L which is propagated through the nonlinear transformation y = f(x). The initial conditions are that x has a mean \bar{x} and a covariance P_x. To calculate the statistics of y, a matrix of 2L + 1 sigma vectors \chi_i is formed. The sigma vector points are calculated according to the following equations:

\chi_0 = \bar{x}, \quad
\chi_i = \bar{x} + \big(\sqrt{(L+\lambda)P_x}\big)_i, \ i = 1, \dots, L, \quad
\chi_i = \bar{x} - \big(\sqrt{(L+\lambda)P_x}\big)_{i-L}, \ i = L+1, \dots, 2L,

where \lambda = \alpha^2 (L + \kappa) - L, \alpha is a scaling parameter between 0 and 1, and \kappa is a secondary scaling parameter; \big(\sqrt{(L+\lambda)P_x}\big)_i is the ith column of the matrix square root, which can be obtained by Cholesky factorization. The weights associated with the sigma vectors are calculated from the following [18]:

W_0^{(m)} = \frac{\lambda}{L+\lambda}, \quad
W_0^{(c)} = \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta), \quad
W_i^{(m)} = W_i^{(c)} = \frac{1}{2(L+\lambda)}, \ i = 1, \dots, 2L,

where \beta is chosen as 2 for Gaussian distributed variables. We have chosen to use the scaled unscented transformation proposed by Julier [18], as this transformation gives one the added flexibility of scaling the sigma points to ensure that the covariance matrices are always positive definite. The mean, covariance, and cross-covariance of y calculated using the unscented transformation are given by

\bar{y} \approx \sum_{i=0}^{2L} W_i^{(m)} f(\chi_i), \quad
P_y \approx \sum_{i=0}^{2L} W_i^{(c)} \big(f(\chi_i) - \bar{y}\big)\big(f(\chi_i) - \bar{y}\big)^T, \quad
P_{xy} \approx \sum_{i=0}^{2L} W_i^{(c)} \big(\chi_i - \bar{x}\big)\big(f(\chi_i) - \bar{y}\big)^T,

where W_i^{(m)} and W_i^{(c)} are the sets of weights defined in such a manner that the approximations of the mean and covariance are accurate up to the third order for Gaussian inputs for all nonlinearities, and to at least the second order for non-Gaussian inputs. The sigma points in the sigma vectors are updated using the nonlinear model equations without any linearization.
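The scaled unscented transformation described above can be sketched compactly as follows. The default values of alpha, beta, and kappa are common illustrative choices, not the tuning used in this paper.

```python
# Sketch of the scaled unscented transformation. The defaults for alpha,
# beta and kappa are common illustrative choices, not the paper's tuning.
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    L = mean.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * cov)            # matrix square root via Cholesky
    pts = np.vstack([mean, mean + S.T, mean - S.T])    # the 2L + 1 sigma vectors
    Wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc

def unscented_transform(f, mean, cov, **kw):
    pts, Wm, Wc = sigma_points(mean, cov, **kw)
    Y = np.array([f(p) for p in pts])                  # propagate each sigma vector through f
    y_mean = Wm @ Y
    dY = Y - y_mean
    dX = pts - mean
    y_cov = (Wc[:, None] * dY).T @ dY                  # transformed covariance
    xy_cov = (Wc[:, None] * dX).T @ dY                 # cross-covariance
    return y_mean, y_cov, xy_cov
```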

Consider a general discrete nonlinear dynamic system of the form

x_{k+1} = f(x_k, u_k) + w_k, \quad y_k = h(x_k) + v_k,

where x_k is the state vector, u_k is the known input vector, and y_k is the output vector at time k; w_k and v_k are, respectively, the disturbance (process noise) and sensor noise vectors, which are assumed to be Gaussian white noise with zero mean. Furthermore, Q_k and R_k are assumed to be the covariance matrices of the process noise sequence w_k and the measurement noise sequence v_k, respectively. The unscented transformations of the states are denoted by \chi_{i,k|k-1}, while the transformed covariance and cross-covariance matrices are denoted by P_{yy,k} and P_{xy,k}, respectively. The UKF estimator can then be expressed in a compact form. The state time-update equation, the propagated covariance, the Kalman gain, the state estimate, and the updated covariance are, respectively, given by

\hat{x}_k^{-} = \sum_{i=0}^{2L} W_i^{(m)} \chi_{i,k|k-1}, \quad
P_k^{-} = \sum_{i=0}^{2L} W_i^{(c)} \big(\chi_{i,k|k-1} - \hat{x}_k^{-}\big)\big(\chi_{i,k|k-1} - \hat{x}_k^{-}\big)^T + Q_{k-1}, \quad
K_k = P_{xy,k} P_{yy,k}^{-1}, \quad
\hat{x}_k = \hat{x}_k^{-} + K_k \big(y_k - \hat{y}_k^{-}\big), \quad
P_k = P_k^{-} - K_k P_{yy,k} K_k^T.    (3.7)

Equations (3.7) are in the same form as those of the traditional Kalman filter and the EKF. Thus higher-order nonlinear models capturing significant aspects of the dynamics may be employed to ensure that the Kalman filter algorithm can be implemented to effectively estimate the states in practice. For our purposes we adopt the UKF approach to estimate the neuron states in the process model.
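A minimal UKF cycle built on the unscented transformation sketched in the previous block might then take the following form. The additive-noise formulation and this interface are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of one UKF time/measurement update with additive process
# and measurement noise, built on unscented_transform defined above. The
# additive-noise form and this interface are illustrative assumptions.
import numpy as np

def ukf_step(x_est, P_est, y_meas, f, h, Q, R, **ut_kw):
    # Time update: propagate the estimate through the process model f.
    x_pred, P_pred, _ = unscented_transform(f, x_est, P_est, **ut_kw)
    P_pred = P_pred + Q

    # Measurement update: propagate the prediction through the sensor model h.
    y_pred, P_yy, P_xy = unscented_transform(h, x_pred, P_pred, **ut_kw)
    P_yy = P_yy + R

    K = P_xy @ np.linalg.inv(P_yy)              # Kalman gain
    x_new = x_pred + K @ (y_meas - y_pred)      # updated state estimate
    P_new = P_pred - K @ P_yy @ K.T             # updated covariance
    return x_new, P_new
```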

The UKF is based on approximating the probability distribution function rather than on approximating a nonlinear function, as is done in the case of the EKF. The state distributions are approximated by a Gaussian probability density, which is represented by a set of deterministically chosen sample points; nonlinear filtering using this Gaussian representation of the posterior probability density is the basis for the UKF. The UKF relies on statistical linearization of the state dynamics rather than analytical linearization (as in the EKF). The statistical linearization is performed by employing linear regression using a set of regression (sample) points, and the sigma points are chosen as the regression points. The mean and covariance at the sigma points then represent the true mean and covariance of the random variable with the particular Gaussian probability density. However, when propagated through the nonlinear system, they represent the true mean and covariance accurately only to the second order of the nonlinearity. This can be a severe limitation of the UKF unless the nonlinearities in the process model are essentially of first or second order.

4. UKF Estimation Applied to a Neuron Model

The success of the application of the UKF depends largely on the approximation to the covariance, which is estimated as a weighted linear sum of the covariances at the sigma points. When this approximation is such that the covariance is not positive definite, the UKF algorithm fails as the Cholesky decomposition is not possible. To ensure that this covariance remains positive definite, it is essential to adjust the scaling parameter \alpha, if and when necessary. In the example illustrated, \alpha was chosen to be a very small positive number. First, to establish the need for the UKF, the traditional extended Kalman filter (EKF) was also applied to the same responses and the two sets of results were compared. These comparisons are shown in Figure 2, which also illustrates the simulated neuron model states plotted to the same scale. While the state estimates obtained by the UKF and the EKF are almost the same in the case of the first state (which was measured), the EKF estimates of all the other states tend to zero. Although in the case of the third state the EKF seems to perform better than the UKF, the state estimate in this case also tends to zero in the steady state. This may be due to the inadequacy of the number of measurements, but it is natural to assume that the internal states cannot be measured. Given that only the first state can be measured, the UKF definitely tends to perform better than the traditional EKF.
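The remedy described above is to adjust the scaling parameter \alpha. A complementary numerical safeguard, not taken from the paper, is to symmetrize the covariance and, if necessary, add a small diagonal jitter before attempting the Cholesky factorization, as sketched below.

```python
# Sketch of a numerical safeguard applied before the Cholesky factorization
# in the sigma-point construction: symmetrize the covariance and, if it is
# still not positive definite, add a small diagonal jitter. This complements
# (and does not replace) the scaling-parameter adjustment described in the text.
import numpy as np

def make_positive_definite(P, jitter=1e-9, max_tries=8):
    P = 0.5 * (P + P.T)                          # enforce symmetry
    for _ in range(max_tries):
        try:
            np.linalg.cholesky(P)
            return P
        except np.linalg.LinAlgError:
            P = P + jitter * np.eye(P.shape[0])  # add and escalate the jitter
            jitter *= 10.0
    raise np.linalg.LinAlgError("covariance could not be regularised")
```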

Figure 3 shows the corresponding errors between the simulated states and the UKF-estimated states over the same time frame, and Figure 4 shows the simulated measurement error of a typical sensor. Finally, it must be said that the filter was run over a much longer time frame and its performance did not deteriorate in spite of this long-term operation. Thus the implementation of a UKF-based state estimator for the HR neuron is successfully demonstrated over a relatively long time frame.

In particular, we observe the relatively large error in the third state. We also note that this error does not significantly influence the error in the estimate of the first state. The addition of a relatively small level of noise to the initial conditions seems to have the effect of generating a response that completely shrouds and annihilates the chaotic behaviour, and this appears to be a consequence of the sensitive dependence on the initial conditions as well. What appears to be noise in the response may well be a combination of both noise and chaos, but it is not possible to distinguish between the two. The significant change in the response of this state in the estimator, which can be recognized by comparing Figures 1 and 2, explains why the chaos is annihilated, as this state plays a key role in the appearance of the chaotic response in the first state. In fact it acts like a switch or gate, and the addition of noise to its state equation changes the dynamics of its mean value quite significantly, which in turn is responsible for switching off the chaos. However, we also observe that this is not a feature of the estimator but a result of the addition of noise to the Hindmarsh-Rose model of the neuron dynamics. We had also observed that, when no chaos was present and this state was already well behaved, the introduction of noise was not so significant.

5. Application to ECG Estimation

McSharry et al. [7] have proposed a theoretical nonlinear dynamic model that is capable of emulating an ECG and is characterised by several parameters that are adaptable to many measured ECG signals. A typical ECG signal, shown in Figure 5, is characterised by six important points labelled P, Q, R, S, T, and U. These points define the "fiducial" points, which are the landmarks on the ECG signal such as the isoelectric line (PQ junction), the onset of the individual waves such as the QRS complex and the P and T waves, and the PQ, QT, and ST time intervals. The ECG signal is periodic, and the period is the elapsed time between two R-R peaks. The circular (radian) frequency of this periodicity is related to the heart rate. The heart rate is by no means steady, as several rhythmic variations are known to influence it. Coupling between the heart rate and the respiratory cycle causes oscillations in the heart rate at about 0.25 Hz and is termed respiratory sinus arrhythmia. Heart rate variability (HRV) influences the fiducial points and is controlled by the baroreflex regulatory feedback. The baroreflex feedback mechanism is modelled by a nonlinear delay-differential equation by McSharry et al. [19], based on a model by Fowler and McGuinness [20], to capture and describe the interactions between the heart rate and the blood pressure. The model gives rise to oscillations in the blood pressure, known as Mayer waves, with a time period ranging from 10 to 25 seconds, due to the presence of a time delay. The model maintains an intrinsically stable heart rate in the absence of nervous control and features baroreflex influence on both heart rate and peripheral resistance. Irregularities in the baroreflex feedback, which can create disturbances in the blood pressure such as the Mayer waves, manifest themselves in some form in the ECG signal. The Mayer waves and heart rate variability modelling have also been studied by Seydnejad and Kitney [21]. Analysis of heart rate variability is also the basis for the assessment of the sympathetic and parasympathetic responses of the autonomic nervous system, with the sympathetic tone influencing only the low-frequency part of the spectrum, while the parasympathetic response also influences the high-frequency component of the ECG spectrum. Consequently, heart rate estimation generally involves both the ECG and additional measurements of the arterial blood pressure and/or features associated with the respiratory system. For this reason, in this paper, the heart rate is assumed to be either known or independently estimated.

The original model proposed by McSharry et al. [7] generates a trajectory in a three-dimensional state space with coordinates (x, y, z). The ECG is a plot of the z coordinate with respect to time. An observation of the responses shows that they exhibit limit-cycle behaviour and that the limit cycle is not sinusoidal. The dynamical equations of motion are given by a set of three ordinary differential equations,

\dot{x} = \alpha x - \omega y,    (5.1a)
\dot{y} = \alpha y + \omega x,    (5.1b)
\dot{z} = -\sum_{i \in \{P,Q,R,S,T\}} a_i \Delta\theta_i \exp\big(-\Delta\theta_i^2 / 2 b_i^2\big) - (z - z_0),    (5.1c)

where \alpha = 1 - \sqrt{x^2 + y^2}, \Delta\theta_i = (\theta - \theta_i) \bmod 2\pi, \theta = \operatorname{atan2}(y, x) is the four-quadrant inverse tangent (arctangent) given the sine (y) and cosine (x) of the angle, defined in the range [-\pi, \pi], and \omega is the angular velocity of the trajectory as it moves around the limit cycle, which is assumed to be either measured or estimated adaptively and hence is treated as a known parameter. The baseline value z_0 in (5.1c) is assumed to be driven by the respiratory circular frequency f_2 according to z_0(t) = A \sin(2\pi f_2 t), where the constant A is expressed in mV. These equations of motion may be integrated numerically using the MATLAB built-in m-file ode45, which is based on an explicit Dormand-Prince Runge-Kutta (4, 5) pair of formulae, over each fixed time step \Delta t = 1/f_s, where f_s is the sampling frequency. Equation (5.1c) may be expressed in the modified form given by (5.3); the parameters of this modified representation of (5.1c) are defined in Table 1.
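A sketch of the synthetic ECG generator defined by (5.1a)-(5.1c) is given below. The angular positions, amplitudes, and widths of the P, Q, R, S, and T events are illustrative values in the spirit of McSharry et al. [7] and are not the entries of Table 1; the sampling frequency and heart rate are likewise assumptions.

```python
# Sketch of the McSharry et al. synthetic ECG model (5.1a)-(5.1c). The angular
# positions theta_i, amplitudes a_i and widths b_i of the P, Q, R, S, T events
# below are illustrative values, not the entries of Table 1; omega = 2*pi
# corresponds to a heart rate of 60 beats per minute.
import numpy as np
from scipy.integrate import solve_ivp

THETA = np.array([-np.pi / 3, -np.pi / 12, 0.0, np.pi / 12, np.pi / 2])  # P, Q, R, S, T
A_I = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
B_I = np.array([0.25, 0.1, 0.1, 0.1, 0.4])

def ecg_model(t, state, omega=2.0 * np.pi, f2=0.25, A=0.15):
    x, y, z = state
    alpha = 1.0 - np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    dtheta = np.remainder(theta - THETA + np.pi, 2.0 * np.pi) - np.pi   # wrapped phase differences
    z0 = A * np.sin(2.0 * np.pi * f2 * t)                               # respiratory baseline wander
    dx = alpha * x - omega * y
    dy = alpha * y + omega * x
    dz = -np.sum(A_I * dtheta * np.exp(-dtheta**2 / (2.0 * B_I**2))) - (z - z0)
    return [dx, dy, dz]

fs = 256.0                                       # assumed sampling frequency in Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
sol = solve_ivp(ecg_model, (t[0], t[-1]), [-1.0, 0.0, 0.04], t_eval=t, max_step=1.0 / fs)
ecg = sol.y[2]                                   # the z coordinate is the synthetic ECG
```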

As rightly pointed out by Sameni et al. [22], the first two equations, (5.1a) and (5.1b), can be transformed into two other dynamic equations expressed in terms of the polar coordinates r = \sqrt{x^2 + y^2} and \theta = \operatorname{atan2}(y, x).

The r-dynamics take the form \dot{r} = r(1 - r); they are decoupled from the remaining states and are essentially unobservable. Consequently (5.1a) and (5.1b) may be replaced by the single phase equation \dot{\theta} = \omega. Thus (5.1a), (5.1b), and (5.3) may now be augmented by additional state equations and expressed in the form of (5.5). Equations (5.5a) and (5.5b) represent a classic pair of first-order equations that exhibit both limit-cycle and chaotic behaviour. The complete set of 13 equations, characterised by eight parameters, represents a dynamic model of the ECG with typical initial conditions as illustrated in Table 2. In addition, one could assume that the state-space dynamics include a number of disturbances. The state-space equations including the random white noise disturbances are given by (5.6), with the disturbances assumed to be zero-mean white noise processes with a known covariance matrix.

Given a set of continuously sampled ECG measurements, the measurements may be expressed by a measurement equation in which the measurement disturbance is a zero-mean white noise with a known covariance. The UKF may then be employed to estimate the states \theta and z together with the augmented states.
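Under these assumptions, the sampled ECG could drive the UKF step sketched in Section 3 in a loop of the following form. The reduced (\theta, z) process model is discretised by a simple forward Euler step, the respiratory baseline wander is neglected, and the noise levels, initial covariance, and sigma-point scaling are illustrative choices; the block reuses ukf_step, THETA, A_I, B_I, fs, and ecg from the earlier sketches.

```python
# Sketch of driving the UKF step of Section 3 with the sampled synthetic ECG,
# using a reduced (theta, z) process model discretised by a forward Euler step.
# The respiratory baseline wander is neglected, and the step size, noise
# levels, initial covariance and sigma-point scaling are illustrative choices.
import numpy as np

dt, omega = 1.0 / fs, 2.0 * np.pi

def f_proc(s):
    theta, z = s
    dtheta = np.remainder(theta - THETA + np.pi, 2.0 * np.pi) - np.pi
    dz = -np.sum(A_I * dtheta * np.exp(-dtheta**2 / (2.0 * B_I**2))) - z
    return np.array([theta + omega * dt, z + dz * dt])    # Euler step of the phase and z dynamics

def h_meas(s):
    return np.array([s[1]])                                # only z (the ECG) is measured

Q = np.diag([1e-6, 1e-4])                                  # assumed process noise covariance
R = np.array([[1e-4]])                                     # assumed measurement noise covariance
x_est, P_est = np.array([0.0, 0.0]), 0.1 * np.eye(2)

noisy_ecg = ecg + 0.01 * np.random.randn(ecg.size)
estimates = []
for y_k in noisy_ecg:
    x_est, P_est = ukf_step(x_est, P_est, np.array([y_k]), f_proc, h_meas, Q, R, alpha=0.5)
    estimates.append(x_est.copy())
estimates = np.array(estimates)                            # columns: estimated phase and ECG
```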

In Figure 6 a typical set of simulated and estimated responses of the states \theta and z is compared. In Figure 7 the errors in the estimates over 10 000 time steps are shown.

In Figure 8 a typical estimated error in the measurement is shown. Thus the UKF is capable of performing extremely well, given measurements with well-behaved noise covariance characteristics. When the noise covariance matrices are unknown, it is possible to estimate the states adaptively. The filter is currently undergoing extensive tests with actual measured ECG data, and in this case the adaptive estimation appears not only to be more appropriate but also to perform better than the nonadaptive UKF. A complete discussion of the application of the adaptive UKF to ECG measurements, where the process and measurement noise covariance matrices are recursively updated, is beyond the scope of this paper and will be presented elsewhere.

6. Conclusions and Discussion

The unscented Kalman filtering method was applied to estimate the states of an HR-like neuron model which, in the absence of noise, were deliberately chosen to be chaotic. The process and measurement models were then corrupted by the introduction of very low levels of white noise. The noise had a profound effect on the response of the model as it seemed to annihilate the chaos. It was observed that the filter performs quite well in reconstructing the states of the system. Neither the filter nor the noise-corrupted process model gave any indications of chaos. Moreover, the exercise gave us valuable experience in applying the UKF to a biological neuron. Preliminary studies of the application of the UKF to a Hodgkin-Huxley type model indicated that the successful application of the unscented approach to an ensemble of biological neurons is feasible, provided the sigma points are scaled according to certain scaling laws related to the gate constants. Finally, the methodology of the unscented Kalman filter was successfully applied to filter a typical simulated ECG signal using a synthetic model-based approach.