Abstract

This paper addresses the least-squares centralized fusion estimation problem of discrete-time random signals from measured outputs, which are perturbed by correlated noises. These measurements are obtained by different sensors, which send their information to a processing center, where the complete set of data is combined to obtain the estimators. Due to random transmission failures, some of the data packets processed for the estimation may either contain only noise (uncertain observations), be delayed (randomly delayed observations), or even be definitely lost (random packet dropouts). These multiple random transmission uncertainties are modelled by sequences of independent Bernoulli random variables with different probabilities for the different sensors. By an innovation approach and using the last observation that successfully arrived when a packet is lost, a recursive algorithm is designed for the filtering estimation problem. The proposed algorithm is easily implemented and does not require knowledge of the signal evolution model, as only the first- and second-order moments of the processes involved are used. A numerical simulation example illustrates the feasibility of the proposed estimators and shows how the probabilities of the multiple random failures influence their performance.

1. Introduction

Least-squares (LS) estimation has traditionally been a fertile research field, with important repercussions in the study of stochastic systems. Over the past few decades, researchers have been especially concerned with networked systems, where multiple sensors observe a signal in a noisy environment and provide information that must be sent to a fusion center to be combined and processed in order to provide a signal estimate.

Traditionally, fusion estimation algorithms were concerned with conventional systems, where the sensors transmit their measured outputs to the fusion center over perfect connections (see, e.g., [1, 2] and references therein). However, the network characteristics are usually not completely reliable, and some anomalies (e.g., uncertain observations or missing measurements, random delays, and/or packet dropouts) may arise when the sensor measurements are transmitted to the fusion center. Ignoring these random phenomena in the derivation of the estimators may deteriorate their accuracy and performance; for this reason, the design of new fusion estimation algorithms for linear systems featuring one of these uncertainties (see, e.g., [3–5] and references therein) or even several of them simultaneously (see, e.g., [6–9] and references therein) has become an active research topic. Specifically, multisensor systems subject to random packet dropouts are dealt with in [3], a packet-dropping network is considered in [4], and networked systems in the presence of stochastic sensor gain degradations are the object of study in [5]. Networked multisensor systems with random transmission delays and packet dropouts are considered in [6–9], in addition to missing sensor measurements in [6] and uncertain observations in transmission in [8].

Also, in nonlinear systems, the interest in the estimation problem from observations featuring random uncertainties is increasing. For example, in [10], the recursive finite-horizon filtering problem for a class of nonlinear time-varying systems subject to multiplicative noises, missing measurements, and quantisation effects is addressed; the recursive state estimation problem is investigated in [11] for time-varying complex networks with missing measurements; and [12] is concerned with the recursive filtering problem for a class of time-varying nonlinear stochastic systems in the presence of event-triggered transmissions and multiple missing measurements.

Depending on the methodology by which the multisensor measurements are combined and processed, there are different classes of fusion filtering algorithms. The most commonly used fusion methods are the centralized and the distributed ones. In centralized fusion filtering algorithms (see, e.g., [8]), all the raw data from the different sensors are sent to a single location, where they are fused and processed to provide optimal estimators; hence, when there are no sensor errors and the connections are perfect, centralized fusion estimators have the best accuracy. On the other hand, in distributed fusion filtering algorithms (see, e.g., [13]), the filtering process is divided among several local filters working in parallel to obtain individual sensor-based estimates and one main filter where these local estimates are combined to yield an improved global signal estimate. Owing to its relation to the current research, it must be pointed out that the centralized fusion estimation problem for systems with mixed uncertainties including sensor delays, packet dropouts, and uncertain observations is considered in [8], using the state-space model and the state augmentation approach.

The study of the signal estimation problem has traditionally hinged on the assumption of uncorrelated measurement noises. However, this is not a realistic consideration in many practical situations; for this reason, this conservative assumption is commonly weakened in many papers concerning the signal estimation problem in networked systems, and the presence of correlated noise in the sensor data makes the design of signal estimation algorithms more challenging and interesting. The optimal Kalman filtering fusion problem in systems with cross-correlated noises at consecutive sampling times is addressed, for example, in [14] for systems with multistep transmission delays and multiple packet dropouts, by transforming the system into a stochastic parameterized one. Also, centralized and distributed fusion algorithms are obtained in [15] for uncertain systems with correlated noises and in [5] for systems where the measurements might randomly contain only partial information about the signal. In [16], the centralized and distributed fusion estimation problems are addressed for systems with multiplicative noise and two-step random transmission delays by using the state augmentation approach; even though white noises are considered in the original model, the observation noises of the augmented system are correlated. Autocorrelated and cross-correlated noises have also been considered in systems with random parameter matrices and transmission uncertainties; some results on the fusion estimation problems in these systems can be found in [17, 18], among others.

This paper addresses the LS centralized fusion estimation problem in networked systems with correlated noises, in the presence of multiple uncertainties during transmission, including random delays, packet dropouts, and/or uncertain observations. To the best of the authors’ knowledge, the simultaneous consideration of these uncertainties has not been investigated yet in the framework of covariance information and, therefore, it constitutes an interesting research challenge. We emphasize that, unlike some existing work on estimation from observations with mixed uncertainties (random delays, packet dropouts, and/or uncertain observations), in this paper the centralized fusion estimation problem is solved without requiring the evolution model generating the signal process; only the mean and covariance functions of the processes involved in the observation equation are needed. Let us also note that the proposed recursive algorithm is obtained without using the state augmentation approach, thus reducing the computational cost in comparison with the augmentation method.

The rest of the paper is organized as follows. In Section 2, we formulate the LS linear estimation problem and the observation model with correlated noises and multiple random failures in transmission. In Section 3, the LS linear centralized fusion filtering algorithm is designed by using the innovation theory. Section 4 contains a simulation example which illustrates the applicability of the proposed estimators in comparison with the local ones and shows the influence of the transmission failures on the estimation accuracy. A concluding remark is presented in Section 5. Finally, the local LS linear filtering algorithm, only used for comparison purposes in Section 4, is presented as an Appendix.

Notations. The notations used throughout the paper are standard. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. For a matrix $A$, $A^T$ and $A^{-1}$ denote its transpose and inverse, respectively. If a matrix dimension is not specified, it is assumed to be compatible with algebraic operations. The Kronecker and Hadamard products of matrices will be denoted by $\otimes$ and $\circ$, respectively. $\delta_{k,s}$ denotes the Kronecker delta function. For any natural numbers $a$ and $b$, $a \wedge b$ is used to mean the minimum of $a$ and $b$. Finally, for any function $G_{k,s}$, depending on the sampling times $k$ and $s$, for simplicity we will write $G_k = G_{k,k}$; analogously, $G^{(i)} = G^{(ii)}$ will be written for any function $G^{(ij)}$, depending on the sensors $i$ and $j$.

2. Problem Statement

This paper is concerned with the least-squares (LS) estimation problem of discrete-time random signals from multisensor noisy measurements transmitted through different channels with mixed uncertainties during the transmission process. More precisely, it is assumed that the observations processed for the estimation may either contain only noise, be one-step randomly delayed, or be dropped out, in which case the last observation that successfully arrived will be used for the estimation. Our aim is to find a recursive algorithm for the LS linear filtering problem using the centralized fusion method.

As it is well known, the LS linear estimation problem requires the existence of the second-order moments of the random vectors involved and, hence, second-order random signals must be considered. Also, the existence of the second-order moments of the observation noise vectors is necessary to guarantee that the observations used for the estimation have second-order moments. Next, we present the observation model and the hypotheses on the signal and noise processes under which the estimation problem will be addressed.

Signal Process. The estimation algorithm will be obtained under the assumption that the evolution model of the signal to be estimated is unknown and only covariance information (that is, information about its mean and covariance functions) is available; specifically, the following hypothesis is required.

Hypothesis 1. The $n$-dimensional signal process $\{x_k;\, k \geq 1\}$ has zero mean and its autocovariance function is factorizable as $E[x_k x_s^T] = A_k B_s^T$, $s \leq k$, where $A_k$ and $B_s$ are known matrices.
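For intuition, the following minimal sketch illustrates the kind of separable covariance structure required by Hypothesis 1 using a hypothetical stationary AR(1)-type signal; all coefficients below are assumptions for illustration only and are unrelated to the simulation example of Section 4.

```python
# Minimal sketch (hypothetical parameters): a stationary AR(1) signal
# x_k = a*x_{k-1} + w_k has autocovariance E[x_k x_s] = sigma2 * a**(k-s), s <= k,
# which factorizes as A_k * B_s with A_k = sigma2 * a**k and B_s = a**(-s),
# matching the separable structure of Hypothesis 1 (scalar case).
import numpy as np

a, q = 0.95, 0.1                       # AR coefficient and driving-noise variance (assumed)
sigma2 = q / (1.0 - a**2)              # stationary variance of the signal

def A(k):                              # first covariance factor
    return sigma2 * a**k

def B(s):                              # second covariance factor
    return a**(-s)

rng = np.random.default_rng(0)
n_runs, n_steps = 100000, 30
x = np.zeros((n_runs, n_steps))
x[:, 0] = rng.normal(0.0, np.sqrt(sigma2), n_runs)
for k in range(1, n_steps):
    x[:, k] = a * x[:, k - 1] + rng.normal(0.0, np.sqrt(q), n_runs)

k, s = 20, 15                          # any pair of times with s <= k
print(np.mean(x[:, k] * x[:, s]))      # Monte Carlo estimate of E[x_k x_s]
print(A(k) * B(s))                     # separable form A_k * B_s
```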

Multisensor Measured Outputs. Consider $m$ sensors, whose measurements obey the following equations:
$$z_k^{(i)} = H_k^{(i)} x_k + v_k^{(i)}, \quad k \geq 1, \; i = 1, \ldots, m, \quad (1)$$
where $z_k^{(i)}$ is the measured output of the $i$th sensor at time $k$ and $v_k^{(i)}$ is the noise vector. We assume the following hypothesis on the noise processes.

Hypothesis 2. The measurement noises $\{v_k^{(i)};\, k \geq 1\}$, $i = 1, \ldots, m$, are second-order zero-mean processes with known second-order moments (covariance and cross-covariance functions).

Observation Model with Mixed Uncertainties. It is assumed that, at any sampling time, the sensor outputs are transmitted from the different sensors to a data processing center, where the signal estimation is performed and, as a consequence of possible failures during the transmission process, random one-step delays, packet dropouts, and uncertain observations may occur in the transmission. White sequences of Bernoulli random variables with different parameters for the different sensors are introduced to depict these random transmission uncertainties. Namely, the following model is considered for the measurement from the th local sensor, , : where , and . We denote , and the following hypothesis on the random variables , , is assumed.

Hypothesis 3. For , and , the process is a sequence of independent Bernoulli random variables with known probabilities , . Also, we assume that is independent of the sequences , for and .

From this assumption, it is clear that, for , and , the correlation of the variables and is known, and it is given by

Finally, the following independence hypothesis is also required.

Hypothesis 4. For , and , the signal, , and the processes and are mutually independent.
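To fix ideas, the following is a minimal single-sensor simulation sketch of the kind of transmission channel described above: at each time the transmitted output may arrive correctly, arrive as pure noise (uncertain observation), arrive one-step delayed, or be lost, in which case the last observation that successfully arrived is reused. The Bernoulli probabilities and the priority among failure types below are hypothetical simplifications, not the exact combination of Bernoulli variables prescribed by model (2).

```python
# Hypothetical single-sensor channel: correct reception, noise-only reception,
# one-step delay, or packet dropout with reuse of the last received observation.
import numpy as np

rng = np.random.default_rng(1)
p_uncertain, p_delay, p_drop = 0.1, 0.2, 0.2   # assumed failure probabilities

def transmit(z, noise_std=0.5):
    """z: sensor outputs over time; returns the observations processed at the center."""
    y = np.zeros_like(z)
    last_received = 0.0                        # value reused when a packet is dropped
    for k in range(len(z)):
        if k > 0 and rng.random() < p_drop:    # dropout: reuse the last received data
            y[k] = last_received
            continue
        if k > 0 and rng.random() < p_delay:   # one-step random delay
            y[k] = z[k - 1]
        elif rng.random() < p_uncertain:       # uncertain observation: only noise arrives
            y[k] = rng.normal(0.0, noise_std)
        else:                                  # correct reception of the current output
            y[k] = z[k]
        last_received = y[k]
    return y

z = rng.normal(size=50)                        # placeholder sensor outputs
print(transmit(z)[:10])
```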

2.1. Stacked Observation Model

To address the estimation problem by the centralized fusion method, the observations from the different sensors are gathered and jointly processed at each sampling time; for this purpose, the observation equations (1) and (2) are combined, yielding the stacked observation model (4), whose vectors and matrices are obtained by stacking or block-arranging the corresponding quantities of the $m$ sensors, for $k \geq 1$.

Hence, the problem is to obtain the LS linear estimator of the signal, , based on the observations , given in (4). Next, we present the statistical properties of the processes involved in the observation model (4), from which the LS linear filtering algorithm of the signal will be derived; these statistical properties are easily inferred from the previously established model Hypotheses 1–4.

(P1) is a zero-mean process with , for , where .

(P2) , , are sequences of independent random matrices with known means, , and if we denote , the correlation matrices , for , are also known matrices whose entries are given in (3). Moreover, for any deterministic matrix , the Hadamard product properties guarantee that

(P3) For , the signal, , and the processes and are mutually independent.
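Regarding the Hadamard product properties invoked in (P2), one identity commonly used in this setting states that, for a diagonal random matrix $\Theta = \mathrm{diag}(\theta)$ and any deterministic matrix $S$, $E[\Theta S \Theta^T] = E[\theta \theta^T] \circ S$. Whether this is the exact expression omitted in (P2) cannot be confirmed from the text, so the following numerical check (with assumed Bernoulli probabilities) is offered only as an illustration.

```python
# Numerical check of a standard Hadamard-product identity: for Theta = diag(theta)
# random and S deterministic, E[Theta S Theta^T] = E[theta theta^T] ∘ S (entrywise).
# The Bernoulli probabilities are assumed values; the identity itself is generic.
import numpy as np

rng = np.random.default_rng(2)
p = np.array([0.9, 0.7, 0.8])          # hypothetical Bernoulli probabilities, one per sensor
S = rng.normal(size=(3, 3))            # arbitrary deterministic matrix

n_runs = 10000
E_TST = np.zeros((3, 3))               # Monte Carlo estimate of E[Theta S Theta^T]
K = np.zeros((3, 3))                   # Monte Carlo estimate of E[theta theta^T]
for _ in range(n_runs):
    theta = rng.binomial(1, p).astype(float)
    Theta = np.diag(theta)
    E_TST += Theta @ S @ Theta.T
    K += np.outer(theta, theta)
E_TST /= n_runs
K /= n_runs

print(np.max(np.abs(E_TST - K * S)))   # ≈ 0 (floating-point level), confirming the identity
```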

2.1.1. Observation Covariance Matrices

From the previous properties, it is easy to check that is a zero-mean process and the covariance matrices , for , are obtained by the following expressions: where and , for , are given by

3. Centralized Fusion Filtering Estimators

Our aim in this section is to address the LS linear centralized estimation problem of the signal from a set of available observations. Specifically, a recursive algorithm for the LS linear centralized filtering problem will be derived. For this purpose, we will use an innovation approach.

3.1. Innovation Approach to the LS Linear Estimation Problem

The innovation approach basically consists of transforming the observation process into an equivalent one of orthogonal vectors, called the innovation process, which will be denoted by $\{\mu_k;\, k \geq 1\}$ and defined as $\mu_k = y_k - \hat{y}_{k/k-1}$, where $\hat{y}_{k/k-1}$ is the orthogonal projection of $y_k$ onto the linear space generated by $\{y_1, \ldots, y_{k-1}\}$. These processes are equivalent in the sense that each set $\{\mu_1, \ldots, \mu_L\}$ spans the same linear subspace as $\{y_1, \ldots, y_L\}$; hence, the estimation problem can be addressed by replacing the observation process by the innovation one. So, the LS linear estimator of any random vector $a_k$ based on the observations $\{y_1, \ldots, y_L\}$, denoted as $\hat{a}_{k/L}$, agrees with that based on the innovations and, denoting $\Pi_s = E[\mu_s \mu_s^T]$, the following general expression for the LS linear estimators of $a_k$ based on the observations is obtained:
$$\hat{a}_{k/L} = \sum_{s=1}^{L} E\bigl[a_k \mu_s^T\bigr] \Pi_s^{-1} \mu_s. \quad (7)$$

This general expression is derived from the Orthogonal Projection Lemma (OPL), which establishes that the estimation error is uncorrelated with all the observations or, equivalently, with all the innovations. From (7), the first step to obtain the signal estimators is to find an explicit formula for the innovation or, equivalently, for the one-stage linear predictor of the observation.
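As a sanity check of this approach, the following sketch (with made-up second-order moments) verifies that the estimator built from the orthogonalized innovations through the general expression (7) coincides with the batch LS linear estimator computed directly from the observation covariances; all quantities are illustrative, and a scalar plays the role of the vector to be estimated.

```python
# Innovation approach on a toy problem with assumed second-order moments:
# the LS linear estimator of a scalar x from observations y_1,...,y_L obtained via
# innovations coincides with the batch estimator Cov(x,y) Cov(y)^{-1} y.
import numpy as np

rng = np.random.default_rng(3)
L = 5
M = rng.normal(size=(L + 1, L + 1))
Sigma = M @ M.T                          # joint covariance of (x, y_1,...,y_L), assumed known
Sxy, Syy = Sigma[0, 1:], Sigma[1:, 1:]   # cross-covariance with x and observation covariance

# Innovations mu_s = y_s - proj(y_s | y_1,...,y_{s-1}); each one is a linear combination
# c_s^T y, and only second-order moments are needed to find the coefficients c_s.
C = np.zeros((L, L))                     # row s holds the coefficients of mu_s over y
Pi = np.zeros(L)                         # innovation variances
for s in range(L):
    c = np.zeros(L)
    c[s] = 1.0
    for j in range(s):
        c -= (Syy[s] @ C[j]) / Pi[j] * C[j]   # subtract E[y_s mu_j] Pi_j^{-1} mu_j
    C[s] = c
    Pi[s] = c @ Syy @ c

# General expression (7): x_hat = sum_s E[x mu_s] Pi_s^{-1} mu_s, written as weights on y.
w_innov = sum((Sxy @ C[s]) / Pi[s] * C[s] for s in range(L))
w_batch = np.linalg.solve(Syy, Sxy)      # weights of the batch LS linear estimator

print(np.max(np.abs(w_innov - w_batch))) # ~0: both constructions give the same estimator
```

Representing each innovation by its coefficient vector over the observations makes explicit that only second-order moments are needed, which is precisely the covariance-information setting adopted in this paper.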

3.1.1. One-Stage Observation Predictor

To simplify future formulas and expressions, the observation model (4) will be equivalently written as follows: Then, applying orthogonal projections, we have Now, from the general expression (7), denoting , and taking into account the fact that for , we obtain and, hence, the observation predictor is given by Note that the determination of the one-stage observation predictor, and consequently that of the innovation, requires the calculation of the linear one-stage predictor and the filter of the signal, and , respectively. Since both derivations are analogous, they are simultaneously addressed in the following subsection.

3.2. Centralized Prediction and Filtering Recursive Algorithm

The following theorem presents a recursive algorithm for the LS linear centralized fusion estimators , , of the signal based on the observations given in (4) or, equivalently, in (8).

Theorem 5. The centralized predictor and filter, , , and the corresponding error covariance matrices, , are obtained by where the vectors and the matrices are recursively obtained from and the matrices satisfy The innovations, , and their covariance matrices, , are given by The coefficients , , satisfy where and are given by Finally, the matrices and are given in (5), and the matrices , with , are defined by

Proof. From the general expression (7), to obtain the LS linear estimators , , it is necessary to calculate the coefficients The independence hypotheses and the factorization of the signal covariance (Hypothesis 1) lead to , with given in (19). Now, using expression (11) for , together with (7) for and , we obtain that the coefficients , , can be expressed as follows: which guarantees that , , with given by Then, by defining and , for , with and , and taking into account the fact that , for , and , it is easy to obtain expressions (12)–(15) of Theorem 5.
To obtain expression (16) for , we apply the OPL to write By definition, and, again from the OPL, we have Now, using that in the expectation and since expression (16) for is easily obtained.
Next, expression (17) for , , with given in (8), is derived. Taking into account the fact that is uncorrelated with , , we have that Now, using (7) for in , we have , with . Now, from (8) and the independence between the signal and the observation noise, the first expectation involved in the previous formula is given by and so, expression (17) is proved.
To complete the proof, expression (18) for and is obtained. Using (8) for , the expression for is clear, and the expression for is easily computed taking into account the fact that, from the OPL, and using expression (11) for Then the proof of Theorem 5 is complete.

The difficulties caused by the simultaneous consideration of correlated noises in the measured outputs and multiple uncertainties during transmission are mainly related to the derivation of the one-stage observation predictor and, consequently, the innovation process, and also the calculations involving this process. Specifically, due to the noise correlation, the noise predictor and filter are nonzero, so they must be taken into account when deriving the innovations. Also, some additional difficulties are met when obtaining simple formulas for the innovation covariance, for example, when computing the observation covariance matrices , and the matrices , .

4. Numerical Simulation Example

In this section, the application of the centralized fusion filtering algorithm proposed in the current paper is illustrated by a simulation example. Let us consider a zero-mean scalar signal process, , with autocovariance function , , which is factorizable according to Hypothesis 1 just taking, for example, and

The measured outputs of this signal, which are provided by four different sensors, are described by (1): where , , , , and the additive noises are defined as , , where , , , and is a zero-mean Gaussian white process with unit variance. These noises are clearly correlated, with , ,

Next, according to the theoretical observation model, suppose that random one-step delays, packet dropouts, and uncertain observations with different rates exist in the data transmissions from the individual sensors to the local processors. Specifically, let us consider the observation model (2): where, for and , are sequences of independent Bernoulli variables. Initially, we consider the following probabilities:(i)For and (ii)For with

This model considers the possibility of uncertain observations, delays, and packet dropouts simultaneously in transmissions from sensor 4, while the measurements transmitted by sensors 1, 2, and 3 are only subject to one single random transmission failure, specifically, missing measurements in sensor 1, one-step delays in sensor 2, and packet dropouts in sensor 3. Moreover, these probabilities are assumed to be time-invariant, for .
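To make this configuration concrete, the sketch below generates data under a simplified version of the described setup: four sensors with noises driven by a common white sequence (hence correlated) and the per-sensor failure regimes listed above. Every numerical value (signal coefficients, noise coefficients, and probabilities) is hypothetical, since the paper's own values appear in the expressions above, and the way the three failure types are combined for sensor 4 is a simplification of model (2).

```python
# Hypothetical data generation: sensor 1 -> uncertain observations, sensor 2 -> one-step
# delays, sensor 3 -> packet dropouts (last received value reused), sensor 4 -> all three.
import numpy as np

rng = np.random.default_rng(4)
n_steps, a, q = 50, 0.95, 0.1                  # assumed signal coefficients
c = [1.0, 0.5, 0.75, 1.2]                      # assumed sensor noise coefficients

x = np.zeros(n_steps)                          # scalar signal (AR(1) surrogate)
eta = rng.normal(size=n_steps + 1)             # common white noise shared by all sensors
for k in range(1, n_steps):
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.normal()
z = np.array([x + ci * (eta[:-1] + eta[1:]) for ci in c])   # correlated measured outputs

# Per-sensor (uncertain, delayed, dropped) probabilities, assumed values
probs = {0: (0.2, 0.0, 0.0), 1: (0.0, 0.3, 0.0), 2: (0.0, 0.0, 0.3), 3: (0.1, 0.2, 0.2)}

y = np.zeros_like(z)                           # observations processed at the fusion center
for i, (p_unc, p_del, p_drop) in probs.items():
    last = 0.0
    for k in range(n_steps):
        if k > 0 and rng.random() < p_drop:    # dropout: reuse the last received observation
            y[i, k] = last
            continue
        if k > 0 and rng.random() < p_del:     # one-step random delay
            y[i, k] = z[i, k - 1]
        elif rng.random() < p_unc:             # uncertain observation: only the noise arrives
            y[i, k] = c[i] * (eta[k] + eta[k + 1])
        else:                                  # correct reception
            y[i, k] = z[i, k]
        last = y[i, k]

print(y[:, :5])
```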

To illustrate the feasibility and effectiveness of the estimators proposed in the current paper, the centralized and local filtering algorithms were implemented in MATLAB, and fifty iterations were run. The local LS linear filtering algorithm is given in the Appendix; the derivation of this local algorithm is omitted since it is totally analogous to that of the centralized filtering algorithm (Theorem 5).

Figure 1 compares the filtering error variances of the local filters and the centralized fusion filter and shows that the error variances of the centralized fusion filtering estimators are significantly smaller than those of all the local estimators; consequently, in agreement with the theoretical results, the centralized fusion filter has better accuracy than the local filters, as it is the optimal one based on the information from all contributing sensors.

Next, in order to show how the estimation accuracy is influenced by the missing measurements, random delays, and packet dropouts of sensors 1, 2, and 3, the centralized filtering error variances are displayed in Figure 2 for different values of the probabilities , for . From this figure, we see that the performance of the filters is indeed influenced by these uncertainties and, as expected, it is confirmed that the centralized error variances become smaller as some of the probabilities increase, which means that the performance of the centralized filter improves when the probabilities , for (the missing measurement probability in sensor 1, the delay probability in sensor 2, and the packet dropout probability in sensor 3), decrease.

Finally, in order to analyze the performance of the proposed centralized filter in comparison with the Kalman filter, the filtering mean-square error (MSE) at each time instant is used. For this analysis, one thousand independent simulations were run, assuming the same probabilities as in Figure 1. Figure 3 shows the MSE of these two filters. As expected, the MSE values for the proposed filter are smaller than those of the Kalman filter, since the latter does not take into account the phenomena of missing measurements in sensor 1, one-step delays in sensor 2, packet dropouts in sensor 3, and the simultaneous presence of uncertain observations, delays, and packet dropouts in transmissions from sensor 4.
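For reference, a generic way to compute this per-time-instant mean-square error from simulation data is sketched below; the array names are illustrative, and any filter's output can be plugged in.

```python
# Per-time-instant mean-square error over N independent simulation runs.
import numpy as np

def mse_per_time(x_runs, xhat_runs):
    """x_runs, xhat_runs: arrays of shape (N_runs, N_steps) with the true signal
    and the filtering estimates; returns the MSE at each time instant."""
    return np.mean((x_runs - xhat_runs) ** 2, axis=0)
```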

5. Conclusions

In this paper, we studied the signal estimation problem in networked systems with correlated measurement noises. We proposed a general framework to deal simultaneously with multiple random failures (specifically, missing measurements, random delays, and packet dropouts) in the output transmission from each sensor to the fusion center. A centralized architecture was used to design an LS linear filtering algorithm, which does not require the evolution model generating the signal process, since it is based on covariance information; nonetheless, the current results are also applicable to the conventional formulation using the state-space model. The accuracy of the proposed LS filtering estimator is measured by the error covariance matrices, which can be calculated offline as they do not depend on the current observed data set. Numerical results show a good performance of the proposed algorithm and illustrate the impact of the transmission random uncertainties on the estimation accuracy.

In future work, in order to reduce the computational burden at the fusion center, we will develop a decentralized architecture where the optimization procedure is locally carried out by the sensors themselves in a distributed way, and the local filters are then fused (according to some optimality criterion) to yield a global distributed fusion filter. Furthermore, the consideration of random measurement matrices will allow us to address the estimation problem in a wide variety of systems featuring different uncertainties in the measurement mechanism of the network sensors, which usually arise in practice.

Appendix

Local LS Linear Filtering Recursive Algorithm

For each , the local LS linear filters, , and their error covariance matrices, , are given by where the vectors and the matrices are obtained from and the matrices satisfy The innovations, , are given by and the innovation covariance matrices, , are calculated by The coefficients , , satisfy where and are given by The matrices are obtained by with ,

Finally, the matrices with are defined by

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research is supported by Ministerio de Economía y Competitividad and Fondo Europeo de Desarrollo Regional FEDER (Grant no. MTM2014-52291-P).