Abstract

The least-squares linear estimation problem using covariance information is addressed for discrete-time linear stochastic systems with bounded random observation delays, which can lead to bounded packet dropouts. A recursive algorithm, including the computation of the predictor, filter, and fixed-point smoother, is obtained using an innovation approach. The random delays are modeled by introducing Bernoulli random variables with known distributions into the system description. The derivation of the proposed estimation algorithm does not require full knowledge of the state-space model generating the signal to be estimated, but only the delay probabilities and the covariance functions of the processes involved in the observation equation.

1. Introduction

Originally, signal estimation problems were addressed under the assumption that the sensor data are transmitted over perfect communication channels and are therefore received at the data processing center either instantaneously or with a known deterministic delay. However, the use of communication networks for transmitting measured data makes it necessary to consider possible transmission delays and/or packet losses, which arise from numerous causes, such as random failures in the transmission mechanism, accidental loss of some measurements, or data inaccessibility at certain times. These network uncertainties are often random in nature; hence, an appropriate model for such situations describes the sensor delays or packet dropouts by a stochastic process whose statistical properties are included in the system description. Estimation problems with bounded random delays in the observations and packet dropouts are therefore challenging problems in networked control systems and have attracted much research interest.

Assuming that the state-space model of the signal to be estimated is known, many results have been reported on systems with random delays and packet dropouts. For example, Ray et al. [1] proposed a recursive linear filtering algorithm which modifies the conventional one to fit situations where the arrival of sensor data at the controller terminal may be randomly delayed. In [2], the state estimation problem for a model involving randomly varying bounded sensor delays is treated by reformulating it as an estimation problem in systems with stochastic parameters. More recently, Matveev and Savkin [3] have proposed a recursive minimum variance state estimator for linear discrete-time partially observed systems perturbed by white noises, in which the observations are transmitted via communication channels with random transmission times and different signal measurements may incur independent delays. Wang et al. [4] have designed a robust linear filter for linear uncertain discrete-time stochastic systems with randomly varying sensor delay. Su and Lu [5] have designed an extended Kalman filtering algorithm which provides optimal estimates of interconnected network states for systems in which some or all measurements are delayed. Hermoso-Carazo and Linares-Pérez [6] have proposed a filtering algorithm based on the unscented transformation to estimate the state of a nonlinear system from randomly delayed measurements. All of the above-mentioned papers on signal estimation from randomly delayed observations assume that the random delay does not exceed one sampling time. Recently, Hounkpevi and Yaz [7] have considered the estimation problem with observations coming from multiple sensors with different one-step delay rates, indicating that the model considered could be generalized to the case of multiple-sample delays.

For the packet-dropout problem, many recent papers can be mentioned. For example, the optimal filtering problem in networked control systems with random delays of one sampling period, multiple packet dropouts, and uncertain observations is studied under a unified framework in [8], and the optimal filtering problem in networked control systems with multiple packet dropouts is addressed in [9]. The optimal linear minimum variance estimation problem for linear discrete-time stochastic systems with possibly infinite packet dropouts, together with the corresponding steady-state estimators, is studied in [10]. Also, for these systems with possibly infinite packet dropouts, the optimal full-order and reduced-order estimators in the linear minimum variance sense are obtained in [11]. In practice, the number of consecutive packet dropouts cannot be infinite but is bounded by a finite number; consequently, the above-mentioned papers may lead to conservative results. A novel model describing the case when the number of consecutive packet dropouts is limited by a known upper bound is considered in [12], where the linear filtering problem is addressed assuming that the filter has a recursive structure similar to that of the Kalman filter; since the filter is constrained to this fixed structure, it is only suboptimal. This study is completed in [13] by considering the optimal estimation problem (including filtering, prediction, and smoothing) in the linear least-mean-square sense. The above-mentioned papers consider only packet dropouts; recently, Sun [14] has addressed the linear filtering, prediction, and smoothing problems in discrete-time linear systems with finite random measurement delays and packet dropouts, assuming that the delay and the number of consecutive dropouts do not exceed a known upper bound.

On the other hand, when the state-space model of the signal to be estimated is not available, it is necessary to use alternative information, for example, about the covariance functions of the processes involved in the observation equation. In this context, the least-squares (LS) linear and second-order polynomial estimation problems from randomly delayed observations based on covariance information have been addressed in [15, 16], respectively, under the assumption that the Bernoulli random variables modeling the delays are independent; also, linear and polynomial estimation algorithms from observations featuring correlated random delays have been proposed in [17, 18], respectively. Recently, the LS linear filtering problem of discrete-time signals using one- and two-step randomly delayed observations coming from multiple sensors with different delay rates has been studied in [19] using covariance information.

In this paper, the LS linear estimation problem in systems with bounded random measurement delays and packet dropouts is addressed. The proposed estimators depend only on the delay probabilities at each sampling time and do not require knowledge of whether a particular measurement is delayed or updated. Moreover, the estimation algorithm is derived using only covariance information. Consequently, considering the case of sensors with the same delay characteristics, the current study generalizes the results in [19] to the case of measurements with bounded multiple-step random delays and packet dropouts.

The paper is organized as follows. In Section 2, the observation model considered and the hypotheses on the signal and noise processes are presented. The LS linear estimation problem is formulated in Section 3, where the innovation technique used to address this problem is also described. The LS linear estimation algorithm is derived in Section 4, which includes recursive formulas for the estimation-error covariance matrices; these matrices provide a global measure of the accuracy of the LS estimators. Finally, in Section 5, a numerical simulation example is presented to show the effectiveness of the proposed estimation algorithm.

2. Observation Model

In networked systems, such as telephone networks, cable TV networks, cellular networks, or the Internet, the system output is measured at every sampling time and the measurement is transmitted to a data processing center that produces the signal estimates. During transmission, delays and packet dropouts are unavoidable due to occasional communication failures; to reduce their effect without overloading the network traffic, each sensor measurement is transmitted for several consecutive sampling times, but only one measured output is processed for the estimation at each sampling time.

In this paper it is assumed that the largest delay is upper bounded (the bound is a known constant denoted by ); hence, the current study is a generalization of that performed in [19] for the estimation problem from one- or two-step randomly delayed observations. Note that this observation model includes bounded packet dropouts since, if a measurement is not processed after sampling times, such measurement is lost in transmission.

In this section, the observation model with bounded random measurement delays and packet dropouts is presented, together with the assumptions about the signal and noise processes involved.

Consider a signal vector, , whose measured output at the sampling time , denoted by , is perturbed by an additive noise vector ; that is,

The measured output is transmitted during the sampling times , and but, at each sampling time , only one of the measurements is processed; consequently, at time , the processed measurement can be either delayed by sampling periods with a known probability , or updated with probability . At the initial time , the measured output is always available () and, hence, the processed measurement is equal to the real measurement . At any time , the processed measurement can only be delayed by sampling periods, since only are available. Also, it is assumed that the delays at different times are independent.

Therefore, the following model for the processed measurements to estimate the signal is considered: where, for denote sequences of mutually independent Bernoulli random variables with and .

This fact guarantees that if , for all , which means that, with probability , the th measurement is received and processed on time. If , then there exists one and only one such that , which means that the measurement is delayed by sampling periods with probability . Note that the measured output at any time can be received on time, delayed, or lost in transmission. Also note that some measured output can be re-received, since the output at each instant is transmitted for consecutive times (see Figure 1 in Section 5 for specific results when ).
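
The random-delay mechanism described above can be simulated directly. The following sketch is only an illustration under assumed values (a delay bound of 3 and fixed delay probabilities; the paper's symbols and actual parameters are not reproduced here): at each time, exactly one Bernoulli indicator equals one and the corresponding delayed measurement is selected.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 3                       # assumed upper bound on the delay
p = [0.5, 0.3, 0.1, 0.1]    # assumed probabilities of a 0-, 1-, 2-, 3-step delay
T = 200

z = rng.standard_normal(T)  # real measured outputs
y = np.empty(T)             # processed measurements
for k in range(T):
    # at time k only delays up to min(k, D) are possible, so the
    # probabilities are renormalized over the feasible delays
    d_max = min(k, D)
    pk = np.array(p[: d_max + 1]) / sum(p[: d_max + 1])
    d = rng.choice(d_max + 1, p=pk)  # exactly one Bernoulli indicator equals 1
    y[k] = z[k - d]
# any output never selected within D sampling times is lost (packet dropout)
```

At k = 0 the renormalization forces an on-time measurement, matching the model's initial condition.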

The signal estimation problem is addressed based on the following assumptions.

Assumption 1. The -dimensional signal process has zero mean and autocovariance function where and are known matrix functions.

Assumption 2. The noise process is a zero-mean white sequence with known autocovariance function for all

Assumption 3. For are sequences of independent Bernoulli random variables with and .

Moreover, is independent of for all .

Assumption 4. For each , the processes , and are mutually independent.

Remark 2.1. The estimation problem is addressed without assuming that the evolution model of the signal is available, using only information about the covariance functions of the processes involved in the observation equation. Note that, although a state-space model can be generated from covariances, when only this kind of information is available it is preferable to address the estimation problem directly using covariances, thus obviating the need for a previous identification of the state-space model.

Remark 2.2. Although Assumption 1 might seem restrictive, it covers many practical situations; for example, when the system matrix in the state-space model of a stationary signal is available, the signal autocovariance function can be expressed as and Assumption 1 is clearly satisfied, taking and . Also, processes with finite-dimensional, possibly time-variant, state-space models have semiseparable covariance functions (see [20]), and this structure is a particular case of that assumed, just taking and . Consequently, this structural assumption on the signal autocovariance function covers both stationary and nonstationary signals.
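
For instance, for a stationary scalar signal with autocovariance K(k, s) = sigma2 * a**(k - s) for s <= k (the values of a and sigma2 below are assumed purely for illustration), the factorization required by Assumption 1 can be checked numerically:

```python
# hypothetical stationary scalar signal with autocovariance
# K(k, s) = sigma2 * a**(k - s) for s <= k (a and sigma2 assumed)
a, sigma2 = 0.95, 1.0

def K(k, s):
    """True autocovariance for s <= k."""
    return sigma2 * a ** (k - s)

def A(k):
    """First factor of the semiseparable form K(k, s) = A(k) * B(s)."""
    return sigma2 * a ** k

def B(s):
    """Second factor of the semiseparable form."""
    return a ** (-s)

# Assumption 1: K(k, s) = A(k) * B(s) whenever s <= k
for k in range(10):
    for s in range(k + 1):
        assert abs(K(k, s) - A(k) * B(s)) < 1e-12
```

The same check applies verbatim to the semiseparable (possibly nonstationary) case, with A and B replaced by the corresponding time-variant factors.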

3. Problem Statement

Given the observation model (2.1)-(2.2) with random delays and packet dropouts, our purpose is to find the least-squares (LS) linear estimator, , of the signal based on the observations . Specifically, our aim is to obtain the filter and the fixed-point smoother ( fixed and ) from recursive formulas.

The estimator is the orthogonal projection of the vector onto , the -dimensional linear space spanned by . Since the observations are generally nonorthogonal vectors, we use an innovation approach, based on an orthogonalization procedure by means of which the observation process is transformed into an equivalent one (innovation process) of orthogonal vectors , equivalent in the sense that each set spans the same linear subspace as .

Since the innovations constitute a white process, this methodology allows us to find the orthogonal projection of the vector onto by separately projecting onto each of the previous orthogonal vectors; that is, the replacement of the observation process by the innovation one leads to the following expression of the signal estimators: where and .

Hence, to obtain the signal estimators, it is first necessary to find an explicit formula for the innovations and their covariance matrices.
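
The whiteness of the innovation sequence, which is what makes the term-by-term projection above valid, can be illustrated with a toy numerical example. In the sketch below the 3x3 observation covariance is an arbitrary assumed positive definite matrix; the innovations correspond to a Gram-Schmidt-type (unit lower triangular) factorization of that covariance, and their covariance is verified to be diagonal.

```python
import numpy as np

# assumed covariance of three scalar observations (an arbitrary PD matrix)
R = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.2],
              [0.5, 1.2, 2.5]])

L = np.linalg.cholesky(R)     # R = L @ L.T
Lu = L / np.diag(L)           # unit-lower-triangular factor: R = Lu @ D @ Lu.T
D = np.diag(np.diag(L) ** 2)  # diagonal of innovation covariances

# y = Lu @ nu, so the innovation covariance is Lu^{-1} R Lu^{-T} = D: diagonal,
# i.e., the innovations are mutually orthogonal (white)
cov_nu = np.linalg.inv(Lu) @ R @ np.linalg.inv(Lu).T
assert np.allclose(cov_nu, D)
```

Because Lu is unit lower triangular, each innovation is the corresponding observation minus its projection onto the previous ones, and each set of innovations spans the same subspace as the observations it replaces.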

Innovation and Covariance Matrix
The innovation at time is defined as where is the one-stage linear predictor of . Since the LS linear estimator is the projection of onto , from (2.1) and (2.2) and model assumptions, it is clear that where denotes the orthogonal projection of the vector onto .

Similarly to (3.1), by denoting , the noise estimators are expressed as follows: From the model assumptions, the noise is independent of and hence, for , we have that ; thus, the one-stage predictor of the noise is . Consequently, the innovation is given by and it is necessary to obtain the signal predictor and the estimators and for , that is, the filters and the smoothers of the signal and noise, respectively.

Remark 3.1. From the model assumptions, it is clear that and, consequently, the filter of the noise is

To obtain the covariance matrix , from (2.1), (2.2), and (3.2) and taking into account that , the innovation is expressed as follows:

Using this expression and the model assumptions, we have where and denote the estimation-error covariance matrices of the state and noise, respectively, and the estimation-error cross-covariance matrix between the state and noise. Hence, the filtering and smoothing error covariance and cross-covariance matrices of the state and noise must be calculated.

4. Recursive Estimation Algorithm

In this section we obtain the state predictor (Section 4.1), the filtering and smoothing estimators of the signal and noise (Section 4.2), and the filtering and smoothing error covariance and cross-covariance matrices of the state and noise (Section 4.3), which together with (3.4) and (3.6) constitute the proposed recursive estimation algorithm.

4.1. Predictor of the Signal

From (3.1), to determine the signal predictor, , it is necessary to calculate the coefficients so and must be calculated for

(i) On the one hand, using (2.1)-(2.2) and the model assumptions, it is clear that where .

(ii) On the other hand, from (3.2) and taking into account that , we have that so and must be calculated for .
(a) Using (3.1) for ,
(b) Using (3.3) for , and since , for , we obtain

Substituting into (4.1) the expectations calculated in and , we obtain If we now introduce a function satisfying that we conclude that

Substituting now (4.8) into (3.1) for , the following expression for the state predictor is obtained: where the vectors are defined by thus satisfying the recursive relation

Hence, an expression for must be derived.

Remark 4.1. By substituting (4.8) into (3.1), it is also clear that the -stage predictors are given by .

Expression for
First, expression (4.7) is just rewritten for : For , using that and , and denoting , it is immediately clear that For , we examine separately the sums that appear in as follows.
(i) Since , for , by denoting , we have that
(ii) Since , we have that By substituting the above two sums in , it is deduced that Finally, note that and are the smoothing gain matrices (which will be obtained in the next section), and the matrix is recursively obtained by

4.2. Filtering and Smoothing Estimators of the Signal and Noise

Clearly, in view of (3.1) and (3.3), the filters and fixed-point smoothers of the signal and noise are obtained by the following recursive expressions: with initial conditions given by the one-stage predictors and , respectively. Hence, the gain matrices and must be calculated for .

Gain Matrices
Using (3.5) for and the model assumptions, it is derived that and, since the estimation-errors are orthogonal to the estimators, it is concluded that where and are the covariance and cross-covariance matrices of the errors with and , respectively.

A similar reasoning leads to the following expression: where and are the covariance and cross-covariance matrices of the errors with and , respectively.

Remark 4.2. Note that, from (4.8), the gain matrix of the signal filter is and, from (3.1), the filter is .

4.3. Error Covariance and Cross-Covariance Matrices

From the recursive relations (4.18), the estimation-errors admit the following expressions: Now, since the estimation-errors are orthogonal to the estimators, we have that and , thus deducing the following recursive expressions for the error covariance and cross-covariance matrices:

The initial conditions of these equations, the prediction error covariance and cross-covariance matrices, are obtained as follows: where denotes the Kronecker delta function.

In fact, by writing and using that and , the expression for is immediate from Assumption 1. Analogously, using that , for , and Assumptions 2 and 4, the expressions for , , and are derived.

4.4. Computational Procedure

At the sampling time , once the iteration is finished and the new observation is available, the proposed estimation algorithm operates as follows.
(i) Compute the innovation and its covariance matrix by (3.4) and (3.6), respectively.
(ii) Compute by (4.16) and, from it, the filtering gain matrix .
(iii) Compute the filter and the filtering error covariance matrix .
(iv) To implement the above steps at time , we need to
(a) compute and, from it, the signal predictor ,
(b) compute the noise filter ,
(c) for , compute the smoothing gain matrices and from (4.20) and (4.21), respectively, and the smoothers and .
Then, calculate the innovation . To obtain its covariance matrix, , compute the error covariance and cross-covariance matrices of the previous estimators of the signal and noise (predictors, filters, and smoothers) using the formulas established in Section 4.3. Finally, compute to calculate and, from it, the filtering gain , which provides the filter and the error covariance matrix .

5. Computer Simulation Results

In this section, the application of the proposed signal estimation algorithm is illustrated by a simulation example. Consider a zero-mean scalar signal with autocovariance function given by which is factorizable according to Assumption 1 just taking and . For the simulation, the signal is assumed to be generated by an autoregressive model, , where is a zero-mean white Gaussian noise with , for all .
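
An autoregressive generator of this kind can be sketched as follows; the coefficient and noise variance below are assumed for illustration (the paper's numerical values are not reproduced here), and the sample autocovariances are checked against the stationary AR(1) formulas.

```python
import numpy as np

rng = np.random.default_rng(1)

# assumed AR(1) parameters (illustrative only)
a, sigma_w, N = 0.9, 1.0, 200_000

x = np.empty(N)
x[0] = rng.normal(0.0, sigma_w / np.sqrt(1 - a**2))  # stationary start
for k in range(1, N):
    x[k] = a * x[k - 1] + sigma_w * rng.standard_normal()

gamma0 = x.var()                  # sample variance
gamma1 = np.mean(x[1:] * x[:-1])  # sample lag-1 autocovariance

# stationary AR(1) theory: gamma0 = sigma_w**2 / (1 - a**2), gamma1 = a * gamma0
assert abs(gamma0 - sigma_w**2 / (1 - a**2)) < 0.5
assert abs(gamma1 - a * gamma0) < 0.1
```

The lag-1 relation gamma1 = a * gamma0 is exactly the factorizable structure exploited by Assumption 1 for a stationary signal.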

Using the proposed filtering and fixed-point smoothing algorithms, we have estimated the signal from observations with bounded random measurement delays and packet dropouts, assuming that the largest delay and the maximum number of successive dropouts are . For this purpose, we implemented a MATLAB program which simulates the values of the signal, , the real measurements, , and the available ones, , considering different delay probabilities, and provides the filtering and fixed-point smoothing estimates of , as well as the corresponding error variances.

As in the theoretical study, it is assumed that, at the initial time, the available measurement is equal to the real one, ; at time , the real measurement may be one-step randomly delayed; at time it may be randomly delayed either by one or two steps and, at any sampling time , the available measurement, , may be randomly delayed by one, two or three steps. The random delays in the observations have been simulated considering three independent sequences of independent Bernoulli random variables, , , and with constant probabilities and defining the available measurements of the signal as Hence, for , the sequences in the theoretical study are given by

If , then and the th measurement is updated; if , the th measurement is one-step delayed when and two-step delayed when and ; finally, if , for , the th measurement is three-step delayed.

Note that represents the probability of receiving a delayed observation at each sampling time. Moreover, note that , , and are decreasing functions of , , and , respectively, while is an increasing function of , . Actually, as increases, decreases, but the delay probabilities , and increase; as increases, decreases, but the two- and three-step delay probabilities and increase; and, as increases, decreases, but the three-step delay probability increases.
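
From the case analysis above, the probabilities of processing an updated, one-, two-, or three-step delayed measurement at any time with all delays feasible follow as products of the Bernoulli parameters of the three simulated sequences. The numerical values below are assumed for illustration; only the product formulas come from the case analysis.

```python
# assumed Bernoulli parameters of the three simulated sequences
p1, p2, p3 = 0.5, 0.5, 0.5

# case analysis of Section 5:
P_update = p1                               # first indicator equals 1
P_delay1 = (1 - p1) * p2                    # one-step delay
P_delay2 = (1 - p1) * (1 - p2) * p3         # two-step delay
P_delay3 = (1 - p1) * (1 - p2) * (1 - p3)   # three-step delay

# the four cases are exhaustive and mutually exclusive
assert abs(P_update + P_delay1 + P_delay2 + P_delay3 - 1.0) < 1e-12
```

These products also make the monotonicity discussed above easy to verify for any chosen parameter values.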

For , Figure 1 displays the processed measurements, , taking , ; that is, when the probability that the real measurement is used in the estimation is for , the one-step delay probability is for , and the two- and three-step delay probabilities are both equal for . This figure shows that, in fact, the measured output at time can be received on time, delayed, or lost in transmission. Actually, the available measurement being processed is
(i) updated () at ,
(ii) delayed by one sampling period () at ,
(iii) delayed by two sampling periods () at ,
(iv) delayed by three sampling periods () at ,

while , and are dropped out and thus not processed at any sampling time. Also, it is worth noting that some measurements are re-received, since the output at each instant is transmitted for consecutive times (for example, is re-received at , is re-received at , and is re-received at , among others).

Next, the performance of the estimators is analyzed by examining the error variances for . Figure 2 shows the error variances of the filter and of the smoothers and . As with nondelayed observations, the error variances of the smoothing estimators are smaller than those of the filter; hence, the estimation accuracy of the smoothers is superior to that of the filter and also improves as the number of iterations in the fixed-point smoothing algorithm increases. A simulated signal together with the filtering and smoothing estimates, and , is shown in Figure 3. In agreement with the previous results, this figure shows that the evolution of the signal is followed more accurately by the smoothing estimates.

To analyze the performance of the proposed estimators versus the delay probabilities, the error variances have been calculated for different values of , , which provide different values of the probabilities , . In all the cases examined, the estimation-error variances present insignificant variation from the iteration onwards and, consequently, only the error variances at a specific iteration are shown here. Figure 4 displays the filtering and smoothing error variances at versus (for and ). From this figure it is deduced that, as increases (and, consequently, the nondelay probability decreases), the estimation-error variances become greater and, hence, worse estimations are obtained.

On the other hand, for each value of (keeping fixed), the error variances become smaller as decreases, which means that the estimates are better. This was to be expected since, as decreases, the one-step delay probability increases and the two- and three-step delay probabilities decrease.

Also, as expected, this improvement is more significant as increases (or, equivalently, as the delay probability increases), and it is more noticeable for the filter than for the smoother.

Next, we compare the filtering error variances at versus , for and (similar results are obtained for other fixed values of the delay probability , but, as indicated above, the comparison is clearer when this probability is large). The results are displayed in Figure 5, which shows that, as increases (which means that the one-step delay probability decreases, but the two- and three-step delay probabilities increase), the estimation-error variances become greater and, consequently, the accuracy of the estimators is worse. Moreover, for each value of , the estimators perform worse as increases, which is reasonable since the three-step delay probability increases with . Nevertheless, for small values of , the difference is almost imperceptible since, as becomes close to zero, the one-step delay probability tends to (fixed), and the two- and three-step delay probabilities both tend to zero. This observation is confirmed in Figure 6, where the filtering and smoothing error variances at versus (for and ) are displayed.

6. Conclusions

In this paper, a recursive least-squares linear estimation algorithm is proposed to estimate signals from observations which can be randomly delayed or lost in transmission, a realistic and common assumption in networked control systems where, generally, transmission delays and packet losses are unavoidable due to the unreliable network characteristics. The largest delay and the maximum number of consecutive dropouts are assumed to be upper bounded by a known constant , and the random measurement delays and packet dropouts are described by introducing sequences of Bernoulli random variables, whose parameters represent the delay probabilities. Thus, the current study generalizes the results in [19] to the case of multiple-sample delays in the observations.

Using an innovation approach, the estimation algorithm is derived without requiring the knowledge of the signal state-space model, but only the covariance functions of the processes involved in the observation equation, as well as the delay probabilities. To measure the performance of the estimators, the estimation-error covariance matrices are also calculated.

To illustrate the theoretical results established in this paper, a simulation example is presented, in which the proposed algorithm is applied to estimate a signal from bounded random measurement delays and packet dropouts, assuming that the largest delay and the maximum number of successive dropouts is .

Acknowledgments

This research is supported by Ministerio de Educación y Ciencia (Grant no. MTM2008-05567) and Junta de Andalucía (Grant no. P07-FQM-02701).