A filtering algorithm based on the unscented transformation is proposed to estimate the state of a nonlinear system from noisy measurements which can be randomly delayed by one sampling time. The state and observation equations are perturbed by correlated nonadditive noises, and the delay is modeled by independent Bernoulli random variables.

1. Introduction

The signal estimation problem in time-delay stochastic systems plays an important role in different application fields. For example, in engineering applications involving communication networks with heavy network traffic, the available measurements may not be up to date. Although the delay can sometimes be interpreted as a known deterministic function of time, the numerous sources of uncertainty make it preferable to interpret it as a stochastic process and to include its statistical properties in the system model. This fact must be considered in the study of the signal estimation problem, since the conventional algorithms are then not applicable.

In the past few years, attention has been focused on investigating estimation problems from measurements subject to a random delay which does not exceed one sampling time, modeling the delay values by a zero-one white noise with known probabilities indicating that the measurements either arrive on time or are delayed. In linear systems, Ray et al. [1] first modified the conventional algorithms to fit this observation model and since then many results based on this model have been reported (see, among others, [2-5] and references therein). Literature on nonlinear filtering from randomly delayed observations is less extensive. Recently, generalizations of extended and unscented Kalman filters, using one- and two-step randomly delayed observations, have been proposed and compared in [6, 7], respectively, for a class of nonlinear discrete-time systems with independent additive noises.

In this paper, we address the problem of estimating the state of a nonlinear system from measurements subject to a random delay which does not exceed one sampling time, when the last available measurement is used for the estimation at any time. This situation is modeled by considering Bernoulli random variables whose value one indicates that the corresponding observation is not updated. Concretely, we propose an extension of the unscented filter in [6] to the case of correlated and nonadditive signal and measurement noises.

2. State and Observation Models

In this section, we present the nonlinear systems with one-step randomly delayed observations to be considered and we describe the assumptions about the underlying processes.

The considered nonlinear discrete-time model is represented by the equations

$x_{k+1} = f_k(x_k, w_k), \quad k \ge 0,$
$z_k = h_k(x_k, v_k), \quad k \ge 1,$   (2.1)

where $x_k$ and $z_k$ are random vectors which describe the system state and output at time $k$, respectively. The process $\{w_k;\ k \ge 0\}$ is the state noise, $\{v_k;\ k \ge 1\}$ is the measurement noise, and, for all $k$, $f_k$ and $h_k$ are known analytic (not necessarily linear) functions.

We assume that at time $k = 1$ the real observation $z_1$ is always available for the estimation but, as indicated previously, we consider the possibility that the observation available at any time $k \ge 2$, $y_k$, is either the current system output, $z_k$, with probability $1 - p_k$, or the previous one, $z_{k-1}$, with probability $p_k$ (the delay probability). Thus, the available observations for the estimation are $y_1 = z_1$ and, for $k \ge 2$, the delayed observation model can be described as [6]

$y_k = (1 - \gamma_k)\, z_k + \gamma_k\, z_{k-1}, \quad k \ge 2,$   (2.3)

where $\{\gamma_k;\ k \ge 2\}$ are Bernoulli random variables (a binary switching sequence taking the values 0 or 1) with $P(\gamma_k = 1) = p_k$, which model the delays in the observations. Indeed, if $\gamma_k = 1$ (which occurs with probability $p_k$), then $y_k = z_{k-1}$ and the measurement is delayed by one sampling period; otherwise, $\gamma_k = 0$ implies that $y_k = z_k$ or, equivalently, that the measurement is updated (which occurs with probability $1 - p_k$).
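This delay mechanism is easy to simulate; a minimal Python sketch (the scalar output sequence, the constant delay probability, and the helper name are ours, purely for illustration):

```python
import random

def delayed_observations(z, p, rng):
    """Apply the one-step random-delay model y_k = (1 - g_k) z_k + g_k z_{k-1}.

    z   : list of real outputs z_1, ..., z_N (index 0 holds z_1)
    p   : delay probability P(g_k = 1)
    rng : random.Random instance (for reproducibility)
    """
    y = [z[0]]  # the first observation is always up to date
    for k in range(1, len(z)):
        g = 1 if rng.random() < p else 0  # Bernoulli delay indicator
        y.append(z[k - 1] if g else z[k])
    return y

rng = random.Random(0)
z = [0.1, 0.4, 0.9, 0.3]
y = delayed_observations(z, p=0.5, rng=rng)
```

With `p = 0` every measurement arrives on time and `y` coincides with `z`; with `p = 1` every measurement is delayed by exactly one sampling period.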

In communication network applications, the sequence $\{\gamma_k\}$ usually represents the random delay from sensor to controller, and the assumption of one-step sensor delay is based on the reasonable supposition that the induced data latency from the sensor to the controller is restricted so as not to exceed the sampling period.

To deal with the state estimation problem, the following assumptions about the processes involved in (2.1) and (2.3) are considered.

Assumption 1. The initial state in (2.1), $x_0$, is a random vector with $E[x_0] = \bar{x}_0$ and $\mathrm{Cov}[x_0] = P_0$.

Assumption 2. The noises $\{w_k;\ k \ge 0\}$ and $\{v_k;\ k \ge 1\}$ are correlated zero-mean white processes with $E[w_k w_k^T] = Q_k$ and $E[v_k v_k^T] = R_k$, with $E[w_k v_k^T] = S_k$ and $E[w_k v_s^T] = 0$ for $s \ne k$.

Assumption 3. $\{\gamma_k;\ k \ge 2\}$ is a sequence of independent Bernoulli random variables with known probabilities, $P(\gamma_k = 1) = p_k$, for all $k$.

Assumption 4. The initial state, $x_0$, and the processes $\{(w_k, v_k)\}$ and $\{\gamma_k\}$ are mutually independent.

3. Unscented Filtering Algorithm

The unscented transformation (see [8] for details) approximates the distribution of an $n$-dimensional random vector $x$ by sample distributions with the same mean and covariance, $\bar{x}$ and $P_x$. The distributions correspond to a set of $2n + 1$ sigma-points defined as

$\mathcal{X}_0 = \bar{x}, \qquad \mathcal{X}_i = \bar{x} + \big(\sqrt{(n+\lambda)P_x}\big)_i, \qquad \mathcal{X}_{n+i} = \bar{x} - \big(\sqrt{(n+\lambda)P_x}\big)_i, \quad i = 1, \dots, n,$   (3.1)

(the expression $(\sqrt{A})_i$ denotes the $i$th column of the matrix square root of $A$) whose mean and covariance are $\bar{x}$ and $P_x$, respectively, when the following weights, $W_i^{(m)}$ for the mean and $W_i^{(c)}$ for the covariance, are used:

$W_0^{(m)} = \frac{\lambda}{n+\lambda}, \qquad W_0^{(c)} = \frac{\lambda}{n+\lambda} + 1 - \alpha^2 + \beta, \qquad W_i^{(m)} = W_i^{(c)} = \frac{1}{2(n+\lambda)}, \quad i = 1, \dots, 2n.$   (3.2)

The parameter $\lambda = \alpha^2(n+\kappa) - n$ in (3.1) and (3.2) is defined from a scaling parameter $\alpha$, which determines the spread of the sigma-points, and from the tuning parameters $\kappa$ and $\beta$. When the mean and covariance of a nonlinear transformation $y = f(x)$ are approximated by the sample mean and covariance of the transformed sigma-points, $\mathcal{Y}_i = f(\mathcal{X}_i)$, weighted with $W_i^{(m)}$ and $W_i^{(c)}$, respectively, these approximations are accurate up to the second- and first-order terms of their Taylor series expansions, respectively.
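For a scalar random variable ($n = 1$) the construction reduces to three sigma-points; a minimal Python sketch (the values $\alpha = 1$, $\beta = 2$, $\kappa = 0$ are common defaults in the literature, not values prescribed here):

```python
import math

def sigma_points_1d(xbar, Px, alpha=1.0, beta=2.0, kappa=0.0):
    """Sigma points and weights of the unscented transformation for n = 1."""
    n = 1
    lam = alpha**2 * (n + kappa) - n           # scaling parameter lambda
    s = math.sqrt((n + lam) * Px)              # scalar "column" of the matrix square root
    points = [xbar, xbar + s, xbar - s]
    wm = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * 2   # weights for the mean
    wc = list(wm)
    wc[0] += 1 - alpha**2 + beta               # extra term in the zeroth covariance weight
    return points, wm, wc

# The weighted sample mean and covariance reproduce (xbar, Px) exactly,
# which is the defining property of the construction:
pts, wm, wc = sigma_points_1d(2.0, 0.25)
mean = sum(w * x for w, x in zip(wm, pts))
cov = sum(w * (x - mean) ** 2 for w, x in zip(wc, pts))
```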

On the basis of this procedure, unscented filtering uses the state equation (which provides $x_{k+1}$ as a nonlinear function of $x_k$ and $w_k$) to approximate the conditional mean and covariance of $x_{k+1}$ given the observations $y^k = \{y_1, \dots, y_k\}$ from those of $x_k$ and $w_k$. These statistics are then updated with the new observation $y_{k+1}$ using the Kalman equations to obtain approximations of the conditional mean and covariance of $x_{k+1}$ given $y^{k+1}$.

For the update, the mean and covariance of $y_{k+1}$ given $y^k$, and hence those of $z_{k+1}$ and $z_k$, are approximated by again using the unscented procedure and, for this purpose, the conditional statistics of the vectors $x_{k+1}$, $v_{k+1}$, and $z_k$ must be known. Therefore, in view of the requirements in the prediction and update steps, the starting point for obtaining the filter of $x_k$ is the knowledge of the approximated conditional mean and covariance of the stacked vector $a_k = (x_k^T, z_k^T)^T$ given $y^k$; these statistics, denoted by $\hat{a}_{k|k}$ and $P_{k|k}$, respectively, provide the approximations of the conditional statistics of $a_{k+1}$ given $y^k$ which, in turn, provide those of $a_{k+1}$ given $y^{k+1}$. The procedure is now detailed in the following two steps.

Prediction step
From the independence hypotheses of the model, the conditional mean and covariance of $x_{k+1}$ given $y^k$ are obtained from the state equation (2.1) and, then, the problem is to obtain approximations for the conditional means and covariances of $x_{k+1}$ and $z_k$, as well as for the conditional cross-covariance of $x_{k+1}$ and $z_k$.
Since $x_{k+1} = f_k(x_k, w_k)$ and $z_k$ are both functions of the augmented vector $(x_k^T, z_k^T, w_k^T)^T$, in order to approximate their conditional statistics we use sigma-points defined from the conditional mean and covariance of this augmented vector as in (3.1) (here $n$ is the dimension of the augmented vector), and the required statistics in (3.3) and (3.4) are approximated by those corresponding to the transformed sigma-points, using the weights defined in (3.2).
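To make the augmented-vector idea concrete, the following Python sketch forms sigma points for a stacked vector (state, state noise) and propagates them through a nonlinear state function with nonadditive noise. The function f and all numerical values are illustrative, and the state and noise are taken as uncorrelated so that the joint covariance is diagonal:

```python
import math

def predict_mean_cov(xbar, Px, Qw, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Unscented prediction for x' = f(x, w) with scalar x and w.

    The augmented vector (x, w) has mean (xbar, 0) and, with x and w
    uncorrelated, diagonal covariance diag(Px, Qw); its matrix square
    root is then diagonal as well.
    """
    n = 2                                        # dimension of (x, w)
    lam = alpha**2 * (n + kappa) - n
    c = math.sqrt(n + lam)
    cols = [(c * math.sqrt(Px), 0.0), (0.0, c * math.sqrt(Qw))]
    pts = [(xbar, 0.0)]                          # central sigma point
    for dx, dw in cols:
        pts.append((xbar + dx, dw))
    for dx, dw in cols:
        pts.append((xbar - dx, -dw))
    wm = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * (2 * n)
    wc = list(wm)
    wc[0] += 1 - alpha**2 + beta
    fx = [f(x, w) for x, w in pts]               # transformed sigma points
    mean = sum(w * v for w, v in zip(wm, fx))
    cov = sum(w * (v - mean) ** 2 for w, v in zip(wc, fx))
    return mean, cov

# Example with multiplicative (nonadditive) noise: f(x, w) = x * (1 + w)
m, P = predict_mean_cov(1.0, 0.1, 0.01, lambda x, w: x * (1 + w))
```

For a linear function such as f(x, w) = x + w the unscented approximation is exact, so the predicted mean and variance equal xbar and Px + Qw.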

Update step
As previously commented, $\hat{a}_{k+1|k+1}$ and $P_{k+1|k+1}$ are obtained from $\hat{a}_{k+1|k}$ and $P_{k+1|k}$ by using the Kalman filter equations; hence, the mean and covariance of $y_{k+1}$ given $y^k$, as well as the conditional cross-covariance of $a_{k+1}$ and $y_{k+1}$, need to be approximated. Next, we describe the approximation procedure.
Taking into account (2.3) and since, from the independence hypotheses, $E[\gamma_{k+1} \mid y^k] = p_{k+1}$, the conditional statistics of $y_{k+1}$ given $y^k$ are expressed in terms of those corresponding to $z_{k+1}$ and $z_k$ as follows:

$\hat{y}_{k+1|k} = (1 - p_{k+1})\, \hat{z}_{k+1|k} + p_{k+1}\, \hat{z}_{k|k},$   (3.5)

where, again applying the independence hypotheses of the model,

$P^{y}_{k+1|k} = (1 - p_{k+1})\, P^{z_{k+1}}_{k+1|k} + p_{k+1}\, P^{z_k}_{k|k} + p_{k+1}(1 - p_{k+1}) \big(\hat{z}_{k+1|k} - \hat{z}_{k|k}\big)\big(\hat{z}_{k+1|k} - \hat{z}_{k|k}\big)^T,$   (3.6)

and, analogously, the conditional cross-covariance of $a_{k+1}$ and $y_{k+1}$ is obtained as $P^{ay}_{k+1|k} = (1 - p_{k+1})\, P^{a z_{k+1}}_{k+1|k} + p_{k+1}\, P^{a z_k}_{k+1|k}$.
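Because the Bernoulli indicator g in (2.3) satisfies g^2 = g and g(1 - g) = 0, the mean and variance of the available observation are mixtures of the statistics of the current and previous outputs. A scalar numerical check of these mixture formulas (the symbols u and s for the two outputs are ours):

```python
def delayed_moments(mu_u, var_u, mu_s, var_s, p):
    """Mean and variance of y = (1 - g) u + g s, g ~ Bernoulli(p) independent of (u, s)."""
    mean = (1 - p) * mu_u + p * mu_s
    var = (1 - p) * var_u + p * var_s + p * (1 - p) * (mu_u - mu_s) ** 2
    return mean, var

def delayed_moments_raw(mu_u, var_u, mu_s, var_s, p):
    """Same moments via E[y^2] - E[y]^2, using E[y^2] = (1-p) E[u^2] + p E[s^2]."""
    m2 = (1 - p) * (var_u + mu_u**2) + p * (var_s + mu_s**2)
    mean = (1 - p) * mu_u + p * mu_s
    return mean, m2 - mean**2
```

Both routines agree, and for p = 0 they reduce to the moments of the up-to-date output, as expected.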

As in the prediction step, the conditional statistics involving $z_k$ are approximated from the sigma-points associated with the augmented vector $(x_k^T, z_k^T, w_k^T)^T$ and their transformed values, using the weights defined in (3.2); the resulting approximations constitute (3.7).

However, to approximate the statistics of $z_{k+1} = h_{k+1}(x_{k+1}, v_{k+1})$, which is a function of the vector $(x_{k+1}^T, v_{k+1}^T)^T$, we use the information given in (3.3) and (3.4) about the conditional statistics of $x_{k+1}$. Thus, we consider a new set of sigma-points (their number determined by the dimension of the vector $(x_{k+1}^T, v_{k+1}^T)^T$), defined in a similar way to those in (3.1) from the conditional statistics of its block components, with weights for the mean and covariance defined as in (3.2); the resulting approximations constitute (3.8).

The conditional statistics of $z_{k+1}$ and $z_k$ are substituted in (3.5) and (3.6) to obtain those of $y_{k+1}$, which are used in the following equations, providing the filter $\hat{a}_{k+1|k+1}$ and the error covariance $P_{k+1|k+1}$:

$\hat{a}_{k+1|k+1} = \hat{a}_{k+1|k} + K_{k+1}\big(y_{k+1} - \hat{y}_{k+1|k}\big),$
$P_{k+1|k+1} = P_{k+1|k} - K_{k+1}\, P^{y}_{k+1|k}\, K_{k+1}^T, \qquad K_{k+1} = P^{ay}_{k+1|k}\, \big(P^{y}_{k+1|k}\big)^{-1}.$   (3.9)
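The correction that produces the filter and the error covariance is the standard Kalman update driven by the innovation; a scalar Python sketch (argument names and numbers are illustrative):

```python
def kalman_update(a_pred, P_pred, y, y_pred, Py, Pay):
    """Scalar Kalman correction: update the prediction with the innovation y - y_pred.

    a_pred, P_pred : predicted mean and variance
    y, y_pred      : received observation and its predicted mean
    Py             : predicted variance of the observation
    Pay            : predicted cross-covariance between state and observation
    """
    K = Pay / Py                      # filter gain
    a_filt = a_pred + K * (y - y_pred)
    P_filt = P_pred - K * Py * K      # equals P_pred - Pay**2 / Py
    return a_filt, P_filt

a, P = kalman_update(1.0, 0.5, 1.2, 1.0, 0.4, 0.2)
```

Note that the posterior variance never exceeds the prior one, since the subtracted term is nonnegative.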

The initial conditions of the proposed algorithm, $\hat{a}_{1|1}$ and $P_{1|1}$, are easily obtained from the independence hypotheses and the initial conditions of the model, taking into account that $y_1 = z_1$.

Summarizing, given $\hat{a}_{k|k}$ and $P_{k|k}$, the computation procedure of the proposed unscented filter is as follows:

Step 1. Compute the sigma-points defined from $\hat{a}_{k|k}$ and $P_{k|k}$ as in (3.1), and
(i) compute the predicted statistics of $x_{k+1}$ and $z_k$ by (3.3) and (3.4);
(ii) compute the remaining conditional statistics involving $z_k$ by (3.7).

Step 2. Compute the sigma-points associated with $x_{k+1}$ and $v_{k+1}$ as in (3.1), and compute the conditional statistics of $z_{k+1}$ by (3.8).

Step 3. Compute $\hat{y}_{k+1|k}$ and $P^{y}_{k+1|k}$ by (3.5) and (3.6).

Step 4. Compute the conditional cross-covariance of $a_{k+1}$ and $y_{k+1}$, $P^{ay}_{k+1|k}$.

Step 5. Compute $\hat{a}_{k+1|k+1}$ and $P_{k+1|k+1}$ by (3.9).

Finally, by extracting the first block components of $\hat{a}_{k+1|k+1}$ and $P_{k+1|k+1}$, the filter of the original state vector $x_{k+1}$ and its error covariance are obtained.

4. Simulation Example

To illustrate the performance of the proposed unscented filter, we consider a logistic type of transition and measurement equations, used previously in [9] to compare the performance of various nonlinear filters in the case of mutually independent noises and nondelayed observations. The initial state is a random variable with uniform distribution between zero and one, and the state and observation noises are assumed to be zero-mean jointly Gaussian processes with known variances and known correlation coefficient, for all $k$.

To apply the proposed algorithm, we assume that the observations available for the estimation can be randomly delayed by one sampling period, as in (2.3), and that the noise modeling the delays is a sequence of independent Bernoulli variables with known constant delay probability $p$, for all $k$.

We have implemented a MATLAB program which simulates the state, $x_k$, and the real, $z_k$, and delayed, $y_k$, measurements, for $k = 1, \dots, 50$, for different values of the noise correlation and the delay probability $p$, and which provides the unscented filtering estimates of $x_k$. The root mean square error (RMSE) criterion has been used to quantify the performance of the estimates.

Considering 1000 independent simulations and denoting by $x_k^{(s)}$ the $s$th set of artificially simulated states and by $\hat{x}_{k|k}^{(s)}$ the filtering estimate at time $k$ in the $s$th simulation run, the RMSE of the filter at time $k$ is calculated by

$\mathrm{RMSE}_k = \sqrt{\dfrac{1}{1000} \sum_{s=1}^{1000} \big(x_k^{(s)} - \hat{x}_{k|k}^{(s)}\big)^2}.$
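The RMSE criterion translates directly into code; a minimal Python sketch (the nested-list layout and function name are ours):

```python
import math

def rmse_per_time(states, estimates):
    """RMSE at each time instant k over S independent simulation runs.

    states[s][k]    : simulated state at time k in run s
    estimates[s][k] : filtering estimate at time k in run s
    """
    S = len(states)
    T = len(states[0])
    return [math.sqrt(sum((states[s][k] - estimates[s][k]) ** 2 for s in range(S)) / S)
            for k in range(T)]

# Tiny example: two runs, three time instants
r = rmse_per_time([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]],
                  [[1.0, 2.5, 3.0], [1.0, 1.5, 3.0]])
```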

Let us first examine the performance of the algorithm with respect to the noise correlation; Figure 1 illustrates the RMSE when the delay probability is fixed and several values of the correlation coefficient are considered. This figure shows, as expected, that the higher the correlation between the state and the observations, the smaller the RMSE and, consequently, the better the performance of the estimators. Analogous results are obtained for other values of the delay probability.

Moreover, in order to compare the performance of the estimators as a function of the delay probability $p$, the arithmetic average of the RMSE over the 50 time instants was calculated for several values of $p$. The results are shown in Figure 2, from which it is apparent that the RMSE means increase as $p$ increases, the increase being greater when $p$ is greater and, consequently, as expected, the performance of the estimators deteriorates as the delay probability rises. From this figure it is also inferred that, for each fixed value of $p$, the RMSE means decrease as the noise correlation increases, which extends the result in Figure 1 to different values of $p$.

Finally, to compare the performance of the proposed algorithm with the extended Kalman filter (EKF), the latter was applied to the observation data of the simulation example for different values of the noise correlation and the delay probability. The results show that the proposed algorithm outperforms the EKF, and the improvement is greater when the delay probability is greater and, also, when the correlation increases. Table 1, showing the RMSE means for both algorithms, illustrates this fact.


Acknowledgments

This work has been partially supported by the Ministerio de Ciencia e Innovación and the Junta de Andalucía through Projects MTM2008-05567 and P07-FQM-02701, respectively.