Abstract

The optimal least-squares linear estimation problem is addressed for a class of discrete-time multisensor linear stochastic systems subject to randomly delayed measurements with different delay rates. For each sensor, a different binary sequence is used to model the delay process. The measured outputs are perturbed by both random parameter matrices and one-step autocorrelated and cross-correlated noises. Using an innovation approach, computationally simple recursive algorithms are obtained for the prediction, filtering, and smoothing problems, without requiring full knowledge of the state-space model generating the signal process, but only the information provided by the delay probabilities and the mean and covariance functions of the processes (signal, random parameter matrices, and noises) involved in the observation model. The accuracy of the estimators is measured by their error covariance matrices, which allow us to analyze the estimator performance in a numerical simulation example that illustrates the feasibility of the proposed algorithms.

1. Introduction

In the past decades, the development of network technologies has promoted the study of the estimation problem in multisensor systems, where the observations provided by all the sensors of the network are transmitted to a fusion center to be processed, thus gathering the whole available information on the signal. This kind of system is becoming an interesting research topic due to its broad scope of application, since multiple sensors can provide more information than traditional single-sensor systems. This form of transmission has several advantages, such as low cost or simple installation and maintenance; however, due to the imperfection of the communication channels, random sensor delays and/or multiple packet dropouts often occur during the transmission process. Standard observation models are not appropriate under these random uncertainties, and classical estimation algorithms, in which the measurements generated by the system are available in real time, cannot be applied directly. Therefore, new algorithms are needed and, recently, the estimation problem in multisensor systems with some of the aforementioned random uncertainties has become a research topic of growing interest (see, e.g., [1–6] and references therein).

There are many current applications, for example, networked multiple sensor systems with measurement-based output feedback, where the measurements may be randomly delayed due to network congestion or random failures in the transmission mechanism. Several modifications of the standard estimation algorithms have been proposed to incorporate the effects of randomly delayed measurements, in both linear and nonlinear systems; among those assuming full knowledge of the state-space model of the signal process to be estimated, we can mention [7–10], and among those using covariance information, [11, 12]. Although all the papers mentioned above involve systems with randomly delayed sensors, a major limitation is that all the sensors are assumed to have the same delay characteristics. Nevertheless, such an assumption is not realistic in many practical situations, where the information is gathered by an array of heterogeneous sensors and the delay probability at each individual sensor can be different from the others. In recent years, this approach has been generalized by considering multiple delayed sensors with different delay characteristics (see, e.g., [13, 14], using the state-space model, and [15, 16], using covariance information).

Furthermore, in many sensor network applications the measured outputs present uncertainties which cannot be described only by the usual additive disturbances, and multiplicative noises must be included in the observation equations to model such uncertainties (see, e.g., [17, 18]). Also, in the context of missing and fading measurements, the observation equations include multiplicative noises described by scalar random variables with arbitrary discrete probability distribution over the interval [0, 1] (see, e.g., [19–21]). The above systems are a special case of systems with random parameter matrices, which have important practical significance and arise in areas such as digital control of chemical processes, systems with human operators, economic systems, and stochastically sampled digital control systems [22].

In [22, 23], the optimal linear filtering problem in systems with independent random state transition and measurement matrices is addressed by transforming the original system into one with deterministic parameter matrices and state-dependent process and measurement noises, to which the Kalman filter is applied. Although in [22] the Kalman filter is applied without providing any theoretical justification, in [23] it is shown that, under mild conditions, the transformed system satisfies the Kalman filter requirements and, hence, optimal linear estimators are obtained for systems with independent random parameter matrices. In [24], systems with deterministic transition matrices and one-step correlated measurement matrices are considered, and the optimal recursive state estimation problem is addressed by converting the observation equation into one with deterministic measurement matrices and applying the optimal Kalman filter for the case of one-step correlated measurement noise. In the above-mentioned papers, although the noises of the transformed system with deterministic matrices depend on the system state and can therefore be correlated, the original system noises are assumed to be independent white processes. This assumption can be restrictive in many real-world problems, in which correlation and cross-correlation of the noises may be present. For this reason, the estimation problem in systems with correlated and cross-correlated noises is becoming an active research topic (see [25–29] for systems with deterministic matrices and [30, 31] for systems with random parameter matrices, among others). In [30] a locally optimal filter in the class of Kalman-type recursive filters is presented and, in [31], the optimal least-squares linear filter is derived.

Motivated by the above analysis, in this paper we address the signal estimation problem from measurements coming from multiple sensors which are randomly delayed by one sampling time with different delay characteristics, under the assumption that the measured outputs are perturbed by both random parameter matrices and one-step autocorrelated and cross-correlated observation noises. The main contributions of this paper can be highlighted as follows: the observation model considers simultaneously randomly delayed measurements and both random parameter matrices and correlated noises (one-step autocorrelation and also one-step cross-correlations between different sensor noises are considered) in the measured outputs; optimal LS linear recursive filtering and smoothing algorithms are obtained without requiring a signal augmentation approach, thus avoiding its expensive computational cost; the proposed algorithms are obtained without requiring full knowledge of the state-space model generating the signal process; and the innovation technique is used, which substantially simplifies the derivation of the algorithms since the innovation process is a white noise.

The rest of the paper is organized as follows. In Section 2, we present the delayed measurement model to be considered and the assumptions and properties under which the LS linear estimation problem is addressed. The innovation approach which, as mentioned above, yields a straightforward derivation of the estimation algorithms is given in Section 3. The recursive filtering and smoothing algorithms are derived in Sections 4 and 5, respectively. In Section 6, the performance of the proposed filtering algorithms is illustrated by a numerical simulation example where the signal of a first-order autoregressive model is estimated from delayed observations coming from two sensors with different delay characteristics, considering two kinds of measured outputs with correlated noises. The paper concludes with some final comments in Section 7.

Notation. The notation used throughout the paper is standard. $\mathbb{R}^{n}$ denotes the $n$-dimensional Euclidean space and $\mathbb{R}^{m \times n}$ is the set of all $m \times n$ real matrices. $A^{T}$ and $A^{-1}$ denote the transpose and inverse of a matrix $A$, respectively. The shorthand $\mathrm{Diag}(a_{1},\dots,a_{n})$ denotes a diagonal matrix whose diagonal entries are $a_{1},\dots,a_{n}$. $\mathbf{1}$ denotes the all-ones vector and $I$ the identity matrix. If the dimensions of matrices are not explicitly stated, they are assumed to be compatible with algebraic operations. The notation $\circ$ denotes the Hadamard product ($[A \circ B]_{ij} = A_{ij}B_{ij}$). $\delta_{k,s}$ represents the Kronecker delta function, which is equal to one if $k = s$ and zero otherwise. Moreover, for arbitrary random vectors $\alpha$ and $\beta$, we will denote $\mathrm{Cov}[\alpha, \beta] = E[(\alpha - E[\alpha])(\beta - E[\beta])^{T}]$, where $E[\cdot]$ stands for the mathematical expectation operator.

2. Problem Formulation

The aim of this paper is to find recursive algorithms for the optimal least-squares (LS) linear filtering and smoothing problems of an $n$-dimensional discrete-time random signal using measurements perturbed by random observation matrices and correlated additive noises, which are transmitted by multiple sensors where one-step random delays with different rates may occur during the transmission process.

The estimation problem is addressed under the assumption that the evolution model of the signal to be estimated is unknown and only information about its mean and covariance functions is available; this information is specified in the following assumption.

Assumption 1. The $n$-dimensional signal process $\{x(k);\ k \ge 0\}$ has zero mean and its autocovariance function is expressed in a separable form, $E[x(k)x^{T}(s)] = A(k)B^{T}(s)$, $s \le k$, where $A$ and $B$ are known matrix functions.

Remark 2. Although Assumption 1 might seem restrictive, it covers many practical situations; for example, when the system matrix $\Phi$ in the state-space model of a stationary signal is available, the signal autocovariance function is $E[x(k)x^{T}(s)] = \Phi^{k-s}E[x(s)x^{T}(s)]$, $s \le k$, and Assumption 1 is clearly satisfied for nonsingular $\Phi$, taking $A(k) = \Phi^{k}$ and $B^{T}(s) = \Phi^{-s}E[x(s)x^{T}(s)]$. Also, processes with finite-dimensional, possibly time-variant, state-space models have semiseparable covariance functions (see [32]), and this structure is a particular case of that assumed. Consequently, the structural assumption on the signal autocovariance function covers both stationary and nonstationary signals.
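For instance, a scalar stationary AR(1) signal makes the factorization explicit; the symbols $\phi$ and $\sigma^{2}$ below are generic placeholders rather than values fixed by the model:

```latex
% Scalar illustration of Assumption 1 (generic symbols, not the paper's values).
% For x(k+1) = \phi x(k) + w(k) in steady state with Var[x(k)] = \sigma^2:
\[
  K_x(k,s) \;=\; \sigma^{2}\,\phi^{\,k-s}
           \;=\; \underbrace{\sigma^{2}\phi^{\,k}}_{A(k)}\,
                 \underbrace{\phi^{-s}}_{B(s)},
  \qquad s \le k,\ \phi \neq 0 .
\]
```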

Next, the observation model with one-step random delays is described and the assumptions under which the LS linear estimation problem will be addressed are presented.

2.1. Delayed Observation Model

Let $\{x(k);\ k \ge 0\}$ be the signal process satisfying Assumption 1 and consider $m$ sensors which provide scalar measurements of the signal according to the following model:

$$z_{i}(k) = H_{i}(k)x(k) + v_{i}(k), \quad k \ge 1,\ i = 1,\dots,m, \qquad (1)$$

where $z_{i}(k)$ is the measurement provided by the $i$th sensor at time $k$ (actual output), $H_{i}(k)$ are random parameter matrices, and $v_{i}(k)$ are measurement noises. The following assumptions are established on this model.

Assumption 3. For $i = 1,\dots,m$, $\{H_{i}(k)\}$ are random parameter matrices with known means, $E[H_{i}(k)] = \bar{H}_{i}(k)$; $H_{i}(k)$ and $H_{j}(s)$ are independent for $k \ne s$; the covariances and cross-covariances at the same time, $\mathrm{Cov}[h_{ip}(k), h_{jq}(k)]$, are also known ($h_{ip}(k)$ denotes the $p$th entry of $H_{i}(k)$, for $p = 1,\dots,n$).

Assumption 4. The additive measurement noises $\{v_{i}(k)\}$, $i = 1,\dots,m$, are zero-mean processes with known second-order moments $R_{ij}(k,s) = E[v_{i}(k)v_{j}(s)]$, satisfying $R_{ij}(k,s) = 0$ for $|k - s| > 1$.

Remark 5. From Assumption 4, the measurement noises of any two sensors are correlated at the same sampling time and at consecutive sampling times, and uncorrelated otherwise; the cross-covariances of $v_{i}(k)$ with $v_{j}(k-1)$, $v_{j}(k)$, and $v_{j}(k+1)$ are $R_{ij}(k,k-1)$, $R_{ij}(k,k)$, and $R_{ij}(k,k+1)$, respectively.

It is assumed that, at any sampling time, the outputs are transmitted from the different sensors to a data processing center producing the signal estimation and, as a consequence of possible failures during the transmission process, one-step delays may occur randomly in the measurements used for estimation. These measurement delays are modelled by introducing $m$ different sequences of Bernoulli variables whose values, zero or one, indicate whether the current measurement is up-to-date or delayed, respectively. Specifically, assume that, at the initial time $k = 1$, the actual outputs, $z_{i}(1)$, are always available for the estimation but, at any time $k > 1$, the available measurements coming from each sensor may be randomly delayed by one sampling time according to different delay rates. Therefore, if $\{\gamma_{i}(k);\ k > 1\}$, $i = 1,\dots,m$, denote sequences of Bernoulli random variables, the available measurements from the $i$th sensor are described by

$$y_{i}(1) = z_{i}(1); \qquad y_{i}(k) = (1 - \gamma_{i}(k))\,z_{i}(k) + \gamma_{i}(k)\,z_{i}(k-1), \quad k > 1. \qquad (2)$$

Remark 6. Model (2) is commonly used to describe measurements coming from multiple sensors which are one-step randomly delayed with different delay rates (see, e.g., [13] using the state-space model and [15] using covariance information). From (2) it is clear that if , which occurs with a certain probability , then and the measurement from the th sensor is delayed by one sampling period; otherwise, and , which means that the measurement is up-to-date with probability . Therefore, the variables model the random delays of the th sensor and the following assumption is made.
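The delay mechanism in (2) is straightforward to simulate. The following minimal Python sketch (illustrative sensor outputs and delay probabilities; none of these values come from the paper) generates the available measurements of one sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

def delayed_measurements(z, p):
    """Apply the one-step random delay model (2) to a sequence of actual
    sensor outputs z[0], z[1], ... (0-based indices; z[0] plays the role
    of z_i(1)): the first output is always received on time, and for k > 0
    y[k] = z[k-1] if gamma[k] = 1 (delay, probability p), else z[k]."""
    gamma = rng.random(len(z)) < p      # Bernoulli delay indicators
    y = z.copy()
    for k in range(1, len(z)):
        if gamma[k]:
            y[k] = z[k - 1]
    return y

# Placeholder sensor outputs and delay rates (illustrative values only).
z = rng.standard_normal(100)
y1 = delayed_measurements(z, p=0.3)     # sensor 1: low delay rate
y2 = delayed_measurements(z, p=0.7)     # sensor 2: high delay rate
```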

Assumption 7. For $i = 1,\dots,m$, the process $\{\gamma_{i}(k);\ k > 1\}$ is a sequence of independent Bernoulli random variables with known probabilities $P[\gamma_{i}(k) = 1] = p_{i}(k)$, $k > 1$. For $i \ne j$, the variables $\gamma_{i}(k)$ and $\gamma_{j}(s)$ are independent for $k \ne s$, and the expectations $E[\gamma_{i}(k)\gamma_{j}(k)]$ are known.

Note that this assumption is more general than that considered in [13, 15], where the processes $\{\gamma_{i}(k);\ k > 1\}$, for $i = 1,\dots,m$, are assumed to be mutually independent.

Finally, the following independence hypothesis is also assumed.

Assumption 8. For $i = 1,\dots,m$, the signal process, $\{x(k)\}$, and the processes $\{H_{i}(k)\}$, $\{v_{i}(k)\}$, and $\{\gamma_{i}(k)\}$ are mutually independent.

To address the optimal LS linear estimation problem of the signal based on the measurements coming from all the sensors, the centralized fusion method will be used. For this purpose, the observation equations of the different sensors (1) and (2) are combined, yielding the following vectorial observation model:

$$z(k) = H(k)x(k) + v(k), \quad k \ge 1; \qquad y(1) = z(1); \qquad y(k) = (I - \Gamma(k))\,z(k) + \Gamma(k)\,z(k-1), \quad k > 1, \qquad (3)$$

where $z(k) = (z_{1}(k),\dots,z_{m}(k))^{T}$, $H(k) = (H_{1}^{T}(k),\dots,H_{m}^{T}(k))^{T}$, $v(k) = (v_{1}(k),\dots,v_{m}(k))^{T}$, and $\Gamma(k) = \mathrm{Diag}(\gamma_{1}(k),\dots,\gamma_{m}(k))$.

Hence, the problem is to obtain the LS linear estimator of the signal, $x(k)$, based on the randomly delayed observations given in (3). Next, we present the statistical properties of the processes involved in observation model (3), from which the LS linear filtering and fixed-point smoothing algorithms of the signal will be derived; these properties are easily inferred from Assumptions 3–8 previously established.
(i) $\{H(k)\}$ are independent random parameter matrices with known means, $E[H(k)] = \bar{H}(k)$, and known covariances, $\mathrm{Cov}[h_{ij}(k), h_{pq}(k)]$, where $h_{ij}(k)$ denotes the $(i,j)$th entry of $H(k)$, for $i, p = 1,\dots,m$ and $j, q = 1,\dots,n$.
(ii) $\{v(k)\}$ is a zero-mean process with $E[v(k)v^{T}(s)] = R(k,s)$, where $R(k,s) = (R_{ij}(k,s))_{i,j=1,\dots,m}$ vanishes for $|k - s| > 1$.
(iii) The random matrices $\Gamma(k)$ are independent or, equivalently, the $m$-dimensional process $\{\gamma(k)\}$, where $\gamma(k) = (\gamma_{1}(k),\dots,\gamma_{m}(k))^{T}$, is a white sequence. The first- and second-order moments of these processes are known.
(iv) The signal process, $\{x(k)\}$, and the processes $\{H(k)\}$, $\{v(k)\}$, and $\{\gamma(k)\}$ are mutually independent.

Remark 9. From the above properties, the following ones, which will be frequently used in the derivation of the algorithms, are obtained.
(a) The covariances of the vectors $z(k)$, $E[z(k)z^{T}(s)]$ with $s \le k$, are given by (5). This identity is easily obtained from the conditional expectation properties, using the independence of $H(k)$ and $x(k)$, and Assumption 1. From (5), these matrices are known and their entries are given by (7).
(b) The covariances of the vectors $y(k)$ are given by (8).
(c) If $C$ is a random matrix independent of $\gamma(k)$, the Hadamard product properties and (iii) lead to $E[\Gamma(k)\,C\,\Gamma(k)] = E[\gamma(k)\gamma^{T}(k)] \circ E[C]$. (9)
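A quick numerical check of this Hadamard identity can be run as follows; it is a minimal illustrative sketch (dimensions, probabilities, and the matrix $C$ are placeholder choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 3, 50_000
p = np.array([0.3, 0.5, 0.7])           # illustrative delay probabilities

# Monte Carlo check of the Hadamard identity in Remark 9(c): with
# Gamma = Diag(gamma) and C a matrix independent of gamma,
#   E[Gamma C Gamma] = E[gamma gamma^T] o E[C],   o = Hadamard product.
Cbar = rng.standard_normal((m, m))      # C taken deterministic, C = E[C]
acc = np.zeros((m, m))
for _ in range(N):
    g = (rng.random(m) < p).astype(float)   # independent Bernoulli components
    G = np.diag(g)
    acc += G @ Cbar @ G
mc = acc / N

Egg = np.outer(p, p)                    # E[gamma_i gamma_j] = p_i p_j, i != j
np.fill_diagonal(Egg, p)                # E[gamma_i^2] = p_i
print(np.max(np.abs(mc - Egg * Cbar)))  # small, up to Monte Carlo error
```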

To simplify future formulas and expressions, the observation model (3) will be written equivalently as

$$y(k) = u(k) + w(k), \quad k > 1, \qquad (10)$$

where $u(k) = (I - \Gamma(k))H(k)x(k) + \Gamma(k)H(k-1)x(k-1)$ collects the signal-dependent terms and $w(k) = (I - \Gamma(k))v(k) + \Gamma(k)v(k-1)$ collects the noise terms. Taking into account the model properties and those specified in Remark 9, the first- and second-order properties of the processes $\{u(k)\}$ and $\{w(k)\}$ and, consequently, those of the observation process, $\{y(k)\}$, are easily obtained. They are established in the following lemmas.

Lemma 10. The process $\{u(k)\}$ has zero mean and its covariance function, $E[u(k)u^{T}(s)]$, is given by (11).

Lemma 11. The process $\{w(k)\}$ has zero mean and $E[w(k)w^{T}(s)] = 0$ for $|k - s| > 1$; its nonzero covariances, for $s = k-1, k, k+1$, are given by (12).

Lemma 12. The processes $\{u(k)\}$ and $\{w(k)\}$ are uncorrelated and, consequently, the covariance function of the observations, $E[y(k)y^{T}(s)]$, given in (13), is the sum of the corresponding expressions in (11) and (12).

3. Innovation Approach to the LS Linear Estimation Problem

To obtain a recursive algorithm for the LS linear estimator, $\hat{x}(k|k)$, of the signal, $x(k)$, based on the randomly delayed observations, $y(1),\dots,y(k)$, an innovation approach will be used [32]. This approach consists of transforming the observation process into an equivalent one (the innovation process) of orthogonal vectors $\{\nu(k)\}$, defined by $\nu(k) = y(k) - \hat{y}(k|k-1)$, where $\hat{y}(k|k-1)$ is the orthogonal projection of $y(k)$ onto the linear space generated by $y(1),\dots,y(k-1)$. The orthogonality property of the new process allows us to simplify the estimators' expressions (which also simplifies the derivation of the algorithms) in comparison to those obtained when the estimators are expressed directly as linear combinations of the observations.

Specifically, if $\zeta$ denotes a random vector to be estimated, the LS linear estimator of $\zeta$ based on the observations $y(1),\dots,y(L)$, denoted by $\hat{\zeta}(L)$, agrees with that based on the innovations $\nu(1),\dots,\nu(L)$ or, equivalently, with the orthogonal projection of $\zeta$ onto the linear space generated by $\nu(1),\dots,\nu(L)$. Hence, $\hat{\zeta}(L) = \sum_{j=1}^{L} h(j)\nu(j)$, and the impulse-response function, $h(j)$, $j = 1,\dots,L$, is calculated from the orthogonality property, $E[(\zeta - \hat{\zeta}(L))\nu^{T}(j)] = 0$, $j = 1,\dots,L$, which leads to the Wiener-Hopf equation; taking into account that $E[\nu(j)\nu^{T}(i)] = 0$ for $i \ne j$, and denoting $\Pi(j) = E[\nu(j)\nu^{T}(j)]$, the following general expression for the LS linear estimators of $\zeta$ is obtained:

$$\hat{\zeta}(L) = \sum_{j=1}^{L} E[\zeta\nu^{T}(j)]\,\Pi^{-1}(j)\,\nu(j). \qquad (16)$$
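The equivalence between estimation from the observations and from the innovations can be verified numerically. The sketch below (an illustrative scalar example with an arbitrary joint covariance; it is not the paper's algorithm) obtains the innovations through the LDL^T factorization of the observation covariance and checks that expression (16) reproduces the batch LS weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Exact joint covariance of (x, y_1, ..., y_L) from a random linear model
# (illustrative dimensions; all quantities are scalars here).
L_obs = 4
T = rng.standard_normal((L_obs + 1, L_obs + 1))
S = T @ T.T
Sxy = S[0, 1:]                    # E[x y_j]
Syy = S[1:, 1:]                   # E[y_i y_j]

# Innovations nu = A y with A unit lower-triangular and E[nu nu^T] diagonal:
# this is the LDL^T factorization of Syy, Syy = C D C^T, nu = C^{-1} y.
Cfac = np.linalg.cholesky(Syy)
d = np.diag(Cfac)
C = Cfac / d                      # unit lower-triangular factor
D = d ** 2                        # innovation variances Pi_j
A = np.linalg.inv(C)

# General expression (16): xhat = sum_j E[x nu_j] Pi_j^{-1} nu_j,
# i.e., weights on the original observations w = (E[x nu] / Pi) A.
Sxnu = Sxy @ A.T                  # E[x nu_j]
w_innov = (Sxnu / D) @ A

# Batch LS weights for comparison: w = Syy^{-1} Sxy.
w_batch = np.linalg.solve(Syy, Sxy)
print(np.max(np.abs(w_innov - w_batch)))   # ~ 1e-15: the same estimator
```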

3.1. Innovation Process

As indicated above, the innovation at time $k$ is defined as $\nu(k) = y(k) - \hat{y}(k|k-1)$, where $\hat{y}(k|k-1)$, the orthogonal projection of $y(k)$ onto the linear space generated by $y(1),\dots,y(k-1)$, is the LS one-stage linear predictor of $y(k)$. From (10) and the orthogonal projection lemma, this estimator can be expressed as $\hat{y}(k|k-1) = \hat{u}(k|k-1) + \hat{w}(k|k-1)$, so we need the one-stage predictors $\hat{u}(k|k-1)$ and $\hat{w}(k|k-1)$ which, by using the general expression (16) for the LS linear estimators, are obtained as follows.
(1) From the independence property (iv), the coefficients $E[u(k)\nu^{T}(j)]$ reduce to expressions involving the corresponding signal coefficients; then, from (16), the predictor $\hat{u}(k|k-1)$ is obtained in terms of the signal predictors, as stated in (21).
(2) The uncorrelation properties of the noises at nonconsecutive sampling times and the independence property (iv) guarantee that only the most recent innovations contribute to $\hat{w}(k|k-1)$, leading to expression (22).

Now, from (21) and (22), it is immediately clear that the innovation at time $k$ can be expressed as in (23) and, hence, its determination requires that of the linear signal predictors involved there. The derivation of the linear predictors is analogous to that of the filter, $\hat{x}(k|k)$, so both are obtained simultaneously in the following section.

4. Prediction and Filtering Recursive Algorithm

The following theorem presents a recursive algorithm for the signal LS linear predictor and filter based on the delayed observation model given in Section 2.

Theorem 13. The signal predictors, $\hat{x}(k|L)$, $L < k$, and the signal filter, $\hat{x}(k|k)$, are obtained as in (24), where the vectors involved are recursively calculated from (25). The matrix function appearing in (24) is given by (26), with the matrices therein recursively obtained from (27). The innovation, $\nu(k)$, satisfies (28), where the required expectations are recursively obtained from (29). The innovation covariance matrix, $\Pi(k)$, is given by (30). The covariance matrices required are given in (5) and (13); the noise covariances involved, for the relevant instants, are given in (12). Finally, the remaining matrices are defined by (31).

Proof. From the general expression (16), $\hat{x}(k|L) = \sum_{j=1}^{L} E[x(k)\nu^{T}(j)]\Pi^{-1}(j)\nu(j)$, for $L \le k$, so the coefficients $E[x(k)\nu^{T}(j)]$, $j = 1,\dots,L$, must be calculated in order to determine the predictors and filter of $x(k)$. Using expression (23) for $\nu(j)$, we obtain the following.
(a) On the one hand, from (10) and the independence hypotheses, the coefficients are expressed in terms of the covariances between the signal and the observations, where Assumption 1 and expression (31) have been used.
(b) On the other hand, using again (16) for the signal predictors, the coefficients satisfy a relation which guarantees that they admit a separable form, in which one factor satisfies a simple recursion. Hence, denoting the corresponding vectors, which obviously satisfy (25), expression (24) for the predictors and filter is proved.
Next, taking into account (36), expression (26) is easily derived from (37), and the recursive formula (27) is immediately clear from (39).
Expression (28) for $\nu(k)$ is derived by substituting the predictors $\hat{u}(k|k-1)$ and $\hat{w}(k|k-1)$ in (23) and considering expression (31).
To prove the recursive expression (29), both expectations involved are calculated as follows.
(1) From expression (10) for $y(k)$, using the independence properties, the first expectation is obtained; the analogous one follows in the same way.
(2) Using the uncorrelation of the noises at nonconsecutive sampling times, the remaining expectations are derived.
So expression (29) is proved.
Finally, formula (30) for the innovation covariance matrix is obtained by writing $\Pi(k) = E[\nu(k)\nu^{T}(k)] = E[\nu(k)y^{T}(k)]$, using the expression for the observation predictor, $\hat{y}(k|k-1)$, and taking into account the orthogonality between the innovation and this predictor.

4.1. Filtering Error Covariance Matrices

The performance of the LS estimators $\hat{x}(k|L)$, $L = k-1, k$, is measured by the covariance matrices of the estimation errors, $P(k|L) = E[(x(k) - \hat{x}(k|L))(x(k) - \hat{x}(k|L))^{T}]$. Since the error of an LS linear estimator is orthogonal to the estimator, using Assumption 1, these matrices are given by $P(k|L) = A(k)B^{T}(k) - E[\hat{x}(k|L)\hat{x}^{T}(k|L)]$. Then, by using expression (24) for the estimators, the prediction and filtering error covariance matrices are obtained in terms of the matrices recursively computed in (27).

Note that the computation of the prediction and filtering error covariance matrices does not depend on the current set of observations, as it only needs the known matrices of the model and the matrices recursively calculated from (27); hence, the error covariance matrices provide a measure of the estimator performance even before any observed data are available.

5. Fixed-Point Smoothing Algorithm

In this section, we present a recursive algorithm for the LS linear fixed-point smoothers, $\hat{x}(k|L)$, $L > k$, where $k$ is fixed and recursions for increasing $L$ are proposed. Starting from the general expression (16) for the LS linear estimator of the signal, it is clear that the linear fixed-point smoothers can be recursively calculated as

$$\hat{x}(k|L) = \hat{x}(k|L-1) + E[x(k)\nu^{T}(L)]\,\Pi^{-1}(L)\,\nu(L), \quad L > k,$$

with the linear filter, $\hat{x}(k|k)$, as initial condition.

Hence, to calculate the fixed-point smoothing estimators, $\hat{x}(k|L)$, for $L > k$ ($k$ fixed), we need a recursive relation in $L$ for the coefficients $E[x(k)\nu^{T}(L)]$.

On the one hand, as in the proof of Theorem 13, using (10) and taking into account the independence hypotheses, together with Assumption 1 and expression (31), a first expression for the coefficients $E[x(k)\nu^{T}(L)]$ is obtained.

On the other hand, the expression of $\nu(L)$ obtained from (28) yields a second, equivalent expression for these coefficients.

Therefore, defining an appropriate auxiliary matrix function, a recursive expression in $L$ holds for the coefficients, with initial conditions obtained from (36).

Finally, we need a recursive expression for the auxiliary matrices involved, $L > k$. Taking into account the orthogonality property of the innovations and using (24) and (25), the corresponding recursive formula is immediately deduced.

Summarizing these results, the following recursive fixed-point smoothing algorithm is obtained.

Theorem 14. The fixed-point smoother, $\hat{x}(k|L)$, with $L > k$ and $k$ fixed, of the signal is calculated recursively in $L$ as indicated above, with initial condition given by the filter, $\hat{x}(k|k)$, and with the corresponding initial matrices specified in the previous derivation.
The auxiliary matrices satisfy the recursive formula derived above. The filter, $\hat{x}(k|k)$, the matrices involved, and the innovations and their covariance matrices are obtained from the linear filtering algorithm given in Theorem 13.

Using the recursive formula of the fixed-point smoother, the following recursive expression for the fixed-point smoothing error covariance matrices, $P(k|L)$, is immediately deduced:

$$P(k|L) = P(k|L-1) - E[x(k)\nu^{T}(L)]\,\Pi^{-1}(L)\,E[\nu(L)x^{T}(k)], \quad L > k,$$

with the filtering error covariance matrix, $P(k|k)$, as initial condition.

6. Numerical Simulation Example

In this section, the applicability of the proposed prediction, filtering, and fixed-point smoothing algorithms is shown by a numerical simulation example with two kinds of measured outputs. For this purpose, the signal values and their observations have been simulated in MATLAB and the signal estimates have been calculated, as well as the corresponding error variances, which provide a measure of the estimation accuracy.

It is assumed that the signal $\{x(k)\}$ is a zero-mean scalar process whose autocovariance function is factorizable in the separable form required by Assumption 1. For the simulations, the signal is generated by a first-order autoregressive model, $x(k+1) = \phi\,x(k) + w(k)$, where $\{w(k)\}$ is a zero-mean white Gaussian noise with constant variance.
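For readers who wish to reproduce a setting of this kind, the following sketch generates such a signal and empirically checks the separable autocovariance; the constants $\phi = 0.95$ and $\mathrm{Var}[w] = 0.1$ are illustrative stand-ins, since the example's exact values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative stand-ins for the example's constants (assumed, not the
# paper's values): AR(1) model x(k+1) = phi*x(k) + w(k).
phi, q = 0.95, 0.1
sigma2 = q / (1 - phi ** 2)        # stationary signal variance

K, runs = 50, 20_000
x = np.zeros((runs, K))
x[:, 0] = rng.normal(0.0, np.sqrt(sigma2), runs)   # start in steady state
for k in range(1, K):
    x[:, k] = phi * x[:, k - 1] + rng.normal(0.0, np.sqrt(q), runs)

# Empirical check of the separable autocovariance of Assumption 1:
# K_x(k,s) = sigma2 * phi**(k-s) = A(k)*B(s), with A(k) = sigma2*phi**k
# and B(s) = phi**(-s), for s <= k.
k, s = 30, 25
print(np.mean(x[:, k] * x[:, s]), sigma2 * phi ** (k - s))
```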

Measurements coming from two sensors are considered and, according to the proposed observation model, it is assumed that, at any sampling time $k$, the measured output from the $i$th sensor, $z_{i}(k)$, can be randomly delayed by one sampling period during network transmission; that is, $y_{i}(1) = z_{i}(1)$ and $y_{i}(k) = (1 - \gamma_{i}(k))\,z_{i}(k) + \gamma_{i}(k)\,z_{i}(k-1)$, $k > 1$, where $\{\gamma_{i}(k)\}$, $i = 1, 2$, are independent sequences of independent Bernoulli random variables with constant probabilities $P[\gamma_{i}(k) = 1] = p_{i}$.

Case 1 (systems with observation multiplicative noises). Consider measurements coming from two sensors in which the measurement coefficients are perturbed by multiplicative noises $\{\varepsilon_{i}(k)\}$, $i = 1, 2$, which are independent zero-mean Gaussian white processes with unit variance; the additive noises $\{v_{i}(k)\}$, $i = 1, 2$, are defined as scaled sums of two consecutive values of a common zero-mean Gaussian white process $\{\eta(k)\}$. Clearly, according to Assumption 4, the additive noises are one-step autocorrelated and cross-correlated.
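A possible simulation of the Case 1 measured outputs and their delayed versions is sketched below; the coefficients, noise variances, and delay probabilities are assumed placeholder values, and building the additive noises from a common white process is one simple way to obtain the one-step autocorrelation and cross-correlation required by Assumption 4:

```python
import numpy as np

rng = np.random.default_rng(4)
K = 100

# Signal path x[0..K-1], generated as in the previous snippet (assumed values).
phi, q = 0.95, 0.1
x = np.zeros(K)
for k in range(1, K):
    x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))

# Illustrative Case 1 outputs (none of these constants come from the paper):
# random measurement coefficient 1 + eps_i(k) with eps_i zero-mean, unit
# variance, and additive noises v_i(k) = c_i * (eta(k) + eta(k-1)), which
# are one-step autocorrelated and cross-correlated across sensors.
c = np.array([0.5, 0.75])                   # assumed scale constants
eta = rng.normal(0.0, np.sqrt(0.5), K + 1)  # assumed Var[eta] = 0.5
z = np.zeros((2, K))
for i in range(2):
    eps = rng.standard_normal(K)            # multiplicative noise, unit variance
    v = c[i] * (eta[1:] + eta[:-1])         # correlated additive noise
    z[i] = (1.0 + eps) * x + v              # measured output of sensor i

# One-step random delays with different rates, as in model (2).
p = np.array([0.4, 0.7])                    # assumed delay probabilities
y = z.copy()
for i in range(2):
    gamma = rng.random(K) < p[i]
    y[i, 1:] = np.where(gamma[1:], z[i, :-1], z[i, 1:])
```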

Firstly, to compare the performance of the predictor, $\hat{x}(k|k-1)$, the filter, $\hat{x}(k|k)$, and the fixed-point smoothers, $\hat{x}(k|k+N)$, for several values of $N$, the corresponding error variances are calculated considering constant delay probabilities $p_{1}$ and $p_{2}$. The results are displayed in Figure 1, which shows that the error variances corresponding to the fixed-point smoothers are less than those of the filter, and the filtering error variances are smaller than the prediction ones, thus confirming that the smoother has the best performance while the predictor has the worst. This figure also shows that the performance of the fixed-point smoothers improves as the number of available observations increases. Analogous results are obtained for other values of the probabilities $p_{1}$ and $p_{2}$.

Next, we study the filtering error variances when the delay probabilities $p_{1}$ and $p_{2}$ are varied from 0.1 to 0.9. In all cases, the filtering error variances show insignificant variation from the 10th iteration onwards and, consequently, only the error variances at a specific iteration are shown here. Figure 2(a) displays the filtering error variances at that iteration versus $p_{1}$ (for constant values of $p_{2}$) and Figure 2(b) shows these variances versus $p_{2}$ (for constant values of $p_{1}$).

From these figures it is concluded that the performance of the filter improves as the delay probabilities, $p_{1}$ and $p_{2}$, decrease. Consequently, more accurate estimations are obtained as the delay probabilities come nearer to zero, the case in which all the observations arrive on time.

Case 2 (systems with missing measurements). As in [28], consider missing measurements from two sensors, with different missing characteristics and noise correlation, where the additive noise processes $\{v_{i}(k)\}$, $i = 1, 2$, are the same as those in Case 1. Two different independent sequences of random variables $\{\theta_{i}(k)\}$, $i = 1, 2$, with probability distributions over the interval $[0, 1]$ are used to model the missing phenomenon: $\{\theta_{1}(k)\}$ is a sequence of independent discrete random variables taking values in $[0, 1]$ with known probabilities, and $\{\theta_{2}(k)\}$ is a sequence of independent Bernoulli variables with known success probability. For all $k$, the means and variances of these variables are known.
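The Case 2 outputs can be simulated along the same lines; the distributions of $\theta_{1}(k)$ and $\theta_{2}(k)$ below are assumed for illustration (the example's exact probabilities are not reproduced), and the one-step delays of model (2) would then be applied to $z_{i}(k)$ exactly as in the Case 1 sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
K = 100

# Missing-measurement multipliers (assumed distributions): theta_1 is
# discrete over [0, 1] and theta_2 is Bernoulli, as described in the text.
theta1 = rng.choice([0.0, 0.5, 1.0], size=K, p=[0.2, 0.3, 0.5])  # assumed
theta2 = (rng.random(K) < 0.75).astype(float)                    # assumed

# Signal and correlated additive noises, as in the Case 1 sketch (assumed).
phi, q = 0.95, 0.1
x = np.zeros(K)
for k in range(1, K):
    x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
c = np.array([0.5, 0.75])
eta = rng.normal(0.0, np.sqrt(0.5), K + 1)
v = np.vstack([ci * (eta[1:] + eta[:-1]) for ci in c])

# Measured outputs with missing measurements: the signal is present with
# multiplier theta_i(k), a special scalar case of random parameter matrices.
z = np.vstack([theta1 * x + v[0],
               theta2 * x + v[1]])
```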

For different values of the missing probabilities and the delay probabilities $p_{1}$ and $p_{2}$, a comparative analysis, similar to that carried out in Case 1, based on the estimation error variances of the predictor, filter, and smoother was performed. For all the values considered, the results were similar to those given in Figure 1, showing that the fixed-point smoothing error variances are less than the filtering ones which, in turn, are smaller than the prediction error variances, thus confirming the comments on Figure 1.

Next, considering a fixed missing probability for the second sensor, the filtering error variances have been calculated for several values of the delay probabilities $p_{1}$ and $p_{2}$. The results are given in Figure 3, which shows that, as the delay probability $p_{1}$ or $p_{2}$ increases, the filtering error variances become greater and, consequently, worse estimations are obtained. Also, a study similar to that in Figure 2 has been carried out in this case; specifically, for a fixed delay probability in one of the sensors, the filtering error variances have been analyzed for different delay probabilities in the other sensor. The results are omitted as they are completely analogous to those displayed in Figure 2.

Also, to analyze the performance of the proposed estimators versus the probability that the signal is present in the measurements of the second sensor, the filtering error variances have been calculated for several values of this probability, with the delay probabilities held fixed. The results are displayed in Figure 4; this figure shows that, as the presence probability increases, the filtering error variances become smaller and, hence, better estimations are obtained. Analogous conclusions are deduced for other values of the fixed parameters.

Finally, we present a comparative analysis of the proposed filter and the following filters:
(a) the suboptimal Kalman-type filter [13] for systems with uncorrelated white noises and one-step random delays,
(b) the optimal linear filter based on covariance information [15] for the same class of systems considered in [13],
(c) the centralized Kalman-type filter [26] for systems with correlated and cross-correlated noises,
(d) the optimal centralized filter [28] for systems with missing measurements and correlated and cross-correlated noises.

Considering fixed values of the delay and missing probabilities and using one thousand independent simulations, the different filtering estimates were compared using the mean square error (MSE) at each time instant $k$, calculated as $\mathrm{MSE}(k) = \frac{1}{1000}\sum_{r=1}^{1000}\bigl(x^{(r)}(k) - \hat{x}^{(r)}(k|k)\bigr)^{2}$, where $\{x^{(r)}(k)\}$ denotes the $r$th set of artificially simulated data and $\hat{x}^{(r)}(k|k)$ is the filter at the sampling time $k$ in the $r$th simulation run. The results are displayed in Figure 5, which shows that (a) the proposed filtering algorithm provides better estimations than the other four filtering algorithms; (b) the performance of the optimal filter [15] is better than that of the suboptimal filter [13]; (c) the performance of the filters [13, 15] is better than that of the filters [26, 28], since the latter ignore any delay assumption; (d) the filtering algorithm in [26] provides the worst estimations, as this filter considers correlated and cross-correlated noises but takes neither missing observations nor delayed measurements into account.
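For reference, the MSE criterion used in this comparison is immediate to compute from the simulated runs; the helper below only assumes that the signal and the filtered estimates are stored as (runs × time) arrays:

```python
import numpy as np

def mse_per_instant(x_runs, xhat_runs):
    """Mean square error at each time instant k over R independent runs:
    MSE(k) = (1/R) * sum_r (x_r(k) - xhat_r(k))**2, as used for Figure 5.
    x_runs, xhat_runs: arrays of shape (R, K) holding the simulated signal
    and the filtered estimates (from any of the compared algorithms)."""
    return np.mean((x_runs - xhat_runs) ** 2, axis=0)
```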

7. Conclusions

The optimal least-squares linear estimation problem from randomly delayed measurements has been investigated for discrete-time multisensor linear stochastic systems with both random parameter matrices and correlated noises in the measured outputs. The main contributions are summarized as follows.
(1) The current multisensor observation model considers simultaneously one-step randomly delayed measurements with different delay rates and both random parameter matrices and correlated noises in the measured outputs. This observation model covers situations where the sensor noises are one-step autocorrelated and one-step cross-correlations between different sensor noises are present. This correlation assumption is valid in a wide spectrum of applications, for example, in target tracking systems where a target is observed by multiple sensors and all of them operate in the same noisy environment. A study similar to that performed in this paper would allow us to generalize the current results to more general situations in which the signal and the observation noises are correlated. This extension would cover systems where the sensor and process noises are correlated and would constitute an interesting research topic.
(2) The random delay in each sensor is modelled by a sequence of independent Bernoulli random variables, whose parameters represent the delay probabilities. Another interesting future direction would be to complement the current study by considering randomly delayed measurements correlated at consecutive sampling times, thus covering situations where two successive observations cannot both be delayed. This kind of delay is usual in situations such as network congestion, random failures in the transmission mechanism, or data inaccessibility at certain times.
(3) Using covariance information, recursive optimal LS linear prediction, filtering, and smoothing algorithms, with a simple computational procedure, are derived by an innovation approach, without requiring full knowledge of the state-space model generating the signal process.
(4) The applicability of the proposed algorithms is illustrated by a numerical simulation example, where a scalar signal generated by a first-order autoregressive model is estimated from delayed measurements coming from two sensors, in the following cases: systems with observation multiplicative noises and systems with missing measurements, both with correlated observation noises.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research is supported by Ministerio de Educación y Ciencia (Grant no. MTM2011-24718).