Abstract

The optimal least-squares linear estimation problem is addressed for a class of discrete-time multisensor linear stochastic systems with missing measurements and autocorrelated and cross-correlated noises. The stochastic uncertainties in the measurements coming from each sensor (missing measurements) are described by scalar random variables with arbitrary discrete probability distribution over the interval [0, 1]; hence, at each single sensor the information may be partially missing, and different sensors may have different missing probabilities. The noise correlation assumptions considered are (i) the process noise and all the sensor noises are one-step autocorrelated; (ii) different sensor noises are one-step cross-correlated; and (iii) the process noise and each sensor noise are two-step cross-correlated. Under these assumptions and by an innovation approach, recursive algorithms for the optimal linear filter are derived using the two basic estimation fusion structures; more specifically, both centralized and distributed fusion estimation algorithms are proposed. The accuracy of these estimators is measured by their error covariance matrices, which allow us to compare their performance in a numerical simulation example that illustrates the feasibility of the proposed filtering algorithms and includes a comparison with other existing filters.

1. Introduction

For a long time, the least-squares (LS) estimation problem in linear stochastic systems from measurements perturbed by additive noises has received considerable attention in the scientific community due to its wide applicability in many practical situations (e.g., video and laser tracking systems, satellite navigation, radar, and meteorological applications [1]). As is well known, one of the major contributions to this problem is the Kalman filter, which provides a recursive algorithm for the optimal LS estimator when the additive white noises and the initial state are Gaussian and mutually independent (or, equivalently, uncorrelated, due to the Gaussianity assumption), in which case the optimal LS estimator coincides with the optimal LS linear estimator. Since the publication of the Kalman filter [2] in 1960, numerous results and several solution methods have been reported in the literature to address the state estimation problem from noisy observations, which depend on the models representing the possible relationships between the unknown state and the observable variables and also on the assumptions on the noise processes.

Specifically, during the past decades, there has been increasing interest in the filtering problem in multisensor systems, where sensor networks are used to obtain all the available information on the system state and its estimation must be carried out from the observations provided by all the sensors. A basic issue for this class of systems is how to fuse the measurement data from the different sensors to address the estimation problem. Commonly, two methods are used to process the measured data coming from multiple sensors: centralized and distributed fusion methods. In the centralized fusion method, all the measured data from the sensors are communicated to the fusion center to be processed; nevertheless, as is widely known, centralized estimators have many computational disadvantages, which motivates the research into other fusion methods. In the distributed fusion method, each sensor estimates the state based on its own measurement data alone and then sends that estimate to the fusion center for fusion according to a certain information fusion criterion. Although the use of sensor networks offers several advantages, unreliable network characteristics usually cause problems during data transmission from the sensors to the fusion center, such as missing measurements and random communication packet losses and/or delays. Taking these network uncertainties into account, the models representing the relationships between the state and the measurements do not allow the Kalman filter to be applied, and modifications of conventional estimation algorithms have been proposed (see, e.g., [3–9] and references therein).

As in the Kalman filter, independent white noises are assumed in all the aforementioned papers; however, this assumption may not be realistic and can be a limitation in many real-world problems in which noise correlation is present. This situation arises, for example, when a target deploys an electronic countermeasure such as noise jamming [10], or when the process noise and the sensor measurement noises depend on the system state, in which case there may be cross-correlation between different sensor noises and between the process noise and the sensor noises. Also, if all the sensors operate in the same noisy environment, the measurement noises of different sensors are usually correlated.

For these reasons, the estimation problem in systems with correlated noises has received significant research interest in recent years. For example, the optimal Kalman filtering fusion problem in systems with cross-correlated sensor noises is addressed in [10], while [11, 12] study the same problem in systems with cross-correlated process and measurement noises; in these papers, noises correlated at the same sampling time are considered. In general, the assumption of correlation and cross-correlation of the process noise and measurement noises at different sampling times makes the identification of optimal estimators difficult; this limitation has encouraged wider research into suboptimal Kalman-type estimation problems. In [13], a Kalman-type recursive filter is presented for systems with finite-step correlated process noises, and the filtering problem with multistep correlated process and measurement noises is investigated in [14]. The optimal robust nonfragile Kalman-type recursive filtering problem is studied in [15] for a class of uncertain systems with finite-step autocorrelated measurement noises and multiple packet dropouts. The problem of distributed weighted robust Kalman filter fusion is studied in [16] for a class of uncertain systems with autocorrelated and cross-correlated noises. In [17], a stochastic singular system with noises correlated at the same sampling time is transformed into an equivalent nonsingular system with noises correlated at the same and neighboring sampling times. Also, in [18], an augmented parameterized system with noises correlated at the same and neighboring sampling times is used to describe the sensor delay, packet dropout, and uncertain observation phenomena.

On the other hand, as noted above, the use of communication networks for transmitting measured data motivates the need to consider stochastic uncertainties. Missing measurements have been widely treated owing to their ability to model a large class of real-world problems, such as fading phenomena in propagation channels, target tracking or, in general, situations where there are intermittent failures in the observation mechanism, accidental loss of some measurements, or inaccessibility of the data during certain times. The state estimation problem from missing measurements transmitted by multiple sensors has been studied under the assumption that all the sensors are identical (see, e.g., [19–22]); however, this assumption can be unreasonable, since real systems usually involve multiple sensors with different characteristics. Recently, the filtering problem with missing measurements whose statistical properties are not assumed to be the same in all the sensors has been addressed by several authors under different approaches and hypotheses on the processes involved (see, e.g., [23–27]). In all the above papers, Bernoulli random variables are used to model the missing measurement phenomenon; hence, it is assumed that the measurement signal is either completely lost (if the corresponding Bernoulli variable takes the value zero) or successfully transferred (when the Bernoulli variable is equal to one). Recently, this missing measurement model has been generalized by considering any discrete distribution on the interval [0, 1], which makes it possible to cover practical applications where only partial information is missing (see [28, 29] and references therein).

Motivated by the above considerations, our attention is focused on investigating the optimal LS linear centralized and distributed fusion estimation problems in multisensor systems with missing measurements and autocorrelated and cross-correlated noises. In each sensor, the missing measurement phenomenon is governed by a scalar random variable with arbitrary discrete probability distribution over the interval [0, 1], and different sensors may have different missing probabilities. It is assumed that the process noise and all the sensor noises are one-step autocorrelated, that different sensor noises are one-step cross-correlated, and that the process noise and each sensor noise are two-step cross-correlated. The contribution of this paper is twofold: (1) unlike most previous results with correlated noises, in which suboptimal Kalman-type estimators are proposed, in this paper optimal LS linear estimators are obtained by using an innovation approach, which provides a simple derivation of the estimation algorithms because the innovations constitute a white process; and (2) our missing measurement model allows, at each sensor, observations containing only partial information about the state, or even only noise.

The paper is organized as follows. In Section 2 the system model with autocorrelated and cross-correlated noises and missing measurements coming from multiple sensors is described. Also, the suitable properties on the state and noise processes are specified and a brief description of the innovation approach to the optimal LS linear estimation problem is included. In Section 3 a recursive algorithm for the centralized optimal linear filter is presented for the considered model (the derivation has been deferred to Appendix A). Next, in Section 4, the local LS linear filters and their corresponding error covariance matrices between any two local estimates are provided, and then the distributed optimal weighted fusion estimators and their error covariance matrices are obtained by applying the optimal information fusion criterion weighted by matrices in the linear minimum variance sense. Finally, in Section 5, a numerical simulation example is presented to show the effectiveness of the estimation algorithms proposed in the current paper, and some conclusions are drawn in Section 6.

Notation. The notation used throughout the paper is standard. For any matrix A, the symbols A^T and A^{-1} represent its transpose and inverse, respectively; R^n denotes the n-dimensional Euclidean space and R^{m×n} is the set of all real m × n matrices. The shorthand Diag(a_1, …, a_L) denotes a diagonal matrix whose diagonal entries are a_1, …, a_L. If the dimensions of matrices are not explicitly stated, they are assumed to be compatible for algebraic operations. δ_{k,s} is the Kronecker delta function, which is equal to one if k = s and zero otherwise. Moreover, for arbitrary random vectors α and β, we will denote Cov[α, β] = E[(α − E[α])(β − E[β])^T] and Cov[α] = Cov[α, α], where E[·] stands for the mathematical expectation operator. Finally, x̂ denotes the estimator of x and x̃ = x − x̂ the estimation error.

2. Problem Formulation

Our aim is to obtain recursive algorithms for the optimal LS linear filtering problem in a class of discrete-time stochastic systems with missing measurements coming from multiple sensors, by using centralized and distributed fusion methods. In this section, firstly the system model and the assumptions about the state and noise processes are presented and, secondly, the optimal LS linear estimation problem is formulated using an innovation approach.

2.1. Stochastic System Model

Consider a discrete-time linear stochastic system with autocorrelated and cross-correlated noises and missing measurements coming from multiple sensors. The phenomenon of missing measurements occurs randomly and, for each sensor, a different sequence of scalar random variables with discrete distribution over the interval [0, 1] is used to model this phenomenon. Specifically, the following system is considered: where is the state, is the process noise, and , for , are known matrices with compatible dimensions.

Consider sensors which, at any time , provide scalar measurements of the system state, perturbed by additive and multiplicative noises according to the following model: where are the measured data; are measurement noises; are sequences of scalar random variables; , for , are known time-varying matrices with compatible dimensions; the superscript denotes the th sensor, and is the number of sensors.

Next, the statistical properties assumed on the initial state and the noise processes involved in (1) and (2) are specified.
(i) The initial state is a random vector with and .
(ii) The process noise, , and the measurement noises, , , are zero-mean sequences with covariances and cross-covariances:
(iii) The multiplicative noises , , are white sequences of scalar variables with discrete distribution over the interval [0, 1], with and .
(iv) The initial state and the multiplicative noises , for , are mutually independent, and they are independent of the additive noises and , for .

Remark 1. From assumption (ii) the following correlation properties of the additive noises are easily deduced.
(1) The noise vectors and are correlated at consecutive sampling times, , and independent otherwise; the covariance matrices of with , and are , and , respectively.
(2) For , the measurement noises and are cross-correlated at the same sampling time and at consecutive sampling times, , and independent otherwise; the cross-covariances of with , and are , and , respectively.
(3) For , the measurement noises are correlated with the noise vectors , for , and independent otherwise; the cross-covariance matrices of with , and are , and , respectively.
The correlation conditions on the process noise and the measurement noises considered in this paper are the same as those in [16]. Systems with only finite-step correlated process noises or with multistep correlated process and measurement noises are considered in [13–15], among others. The current study can be extended to more general systems involving finite-step autocorrelated and cross-correlated noises without difficulty, apart from a greater complexity in the mathematical derivations.

Remark 2. From the state equation (1) and assumptions (ii) and (iv), it is easy to deduce that is recursively calculated by

Also, it is easy to see that the state is correlated with the measurement noises , for , and the expectations satisfy

Remark 3. According to assumption (iii), the scalar random variables take values in the interval [0, 1] and may follow any arbitrary discrete probability distribution over that interval, for instance, a Bernoulli distribution. Usually, Bernoulli random variables have been used to model the phenomenon of missing measurements (see, e.g., [25] and references therein), with meaning that the state is present in the measurement coming from the th sensor at time , while means that the state is missing from the measured data at time or, equivalently, that such an observation contains only additive noise . However, in practice, the information transmitted at a sampling time is often neither completely missing nor completely successful; only part of the information may get through. In such situations, only partial information is missing and the proportion of missing data at a given moment is a fraction other than 0 or 1 (see, e.g., [28, 29] and references therein).
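As a sketch of this missing-measurement model, the snippet below simulates two sensors of the kind described above: one with a hypothetical three-point distribution on [0, 1] (so partial information loss is possible) and one with a Bernoulli distribution (all-or-nothing loss). The distribution values and probabilities are assumed purely for illustration; they are not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Hypothetical three-point distribution on [0, 1] for sensor 1:
#   gamma = 0   -> measurement contains only noise (state fully missing)
#   gamma = 0.5 -> only partial state information is transmitted
#   gamma = 1   -> state fully present
values = np.array([0.0, 0.5, 1.0])
probs = np.array([0.2, 0.3, 0.5])           # assumed probabilities
gamma1 = rng.choice(values, size=n_samples, p=probs)

# Bernoulli model for sensor 2 (all-or-nothing loss), p = P(state present)
p = 0.8
gamma2 = rng.binomial(1, p, size=n_samples).astype(float)

# Theoretical mean and variance for sensor 1's variables
mean1 = probs @ values
var1 = probs @ values**2 - mean1**2
print(mean1, var1)
print(gamma1.mean(), gamma2.mean())         # empirical means agree
```

Note that, with these assumed probabilities, 30% of the measurements from the first sensor carry only partial state information, a situation the Bernoulli model of the second sensor cannot represent.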

2.2. Stacked Measurement Equation

As noted above, our aim is to solve the optimal LS linear estimation problem of the state based on the measurements , for , by using centralized and distributed fusion methods to process the measured sensor data. The centralized fusion method assumes that all the measurement data coming from the sensors are transmitted to a fusion center to be processed; for this purpose, and to simplify the notation, the measurement equation (2) is rewritten in the following stacked form: where , , and .

The following properties of the noises in (6) are easily inferred from the model assumptions (ii)–(iv) stated above.
(i) The additive noise is a zero-mean process satisfying where and .
(ii) The state vector and the measurement noise vector are correlated, with satisfying
(iii) The random matrices satisfy and ; also, denoting , it is clear that . Moreover, for any random matrix independent of , it is easily deduced that where denotes the Hadamard product [23].
(iv) The initial state and are independent, and they are independent of and .
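The Hadamard-product identity in property (iii) can be checked numerically: since the stochastic matrix is diagonal with the sensor variables on its diagonal, the (i, j) entry of the product picks up the factor gamma_i gamma_j, and independence lets the expectation factorize. In the sketch below, the sensor variables are taken i.i.d. Uniform(0, 1) and the independent random matrix G has an arbitrary assumed mean; both choices are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_mc = 3, 200_000

# Sensor variables gamma^i, i.i.d. Uniform(0,1) (illustrative choice on [0,1])
gammas = rng.uniform(0, 1, size=(n_mc, m))
M0 = np.arange(1.0, m * m + 1).reshape(m, m)       # assumed E[G]
Gs = M0 + rng.normal(size=(n_mc, m, m))            # random G, independent of gamma

# Since Theta = Diag(gamma), (Theta G Theta)_{ij} = gamma_i * G_{ij} * gamma_j;
# Monte Carlo estimate of E[Theta G Theta]:
lhs = np.mean(gammas[:, :, None] * Gs * gammas[:, None, :], axis=0)

# Hadamard-product identity: E[Theta G Theta] = K ∘ E[G], K = E[gamma gamma^T].
# For i.i.d. Uniform(0,1): E[gamma_i gamma_j] = 1/4 (i != j), E[gamma_i^2] = 1/3.
K = np.full((m, m), 0.25) + np.diag([1 / 3 - 0.25] * m)
rhs = K * M0                                       # '∘' is elementwise product

print(np.max(np.abs(lhs - rhs)))                   # small Monte Carlo error
```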

2.3. Innovation Approach to the Optimal LS Linear Estimation Problem

To address the optimal LS linear estimation problem of the state based on the measurements , , the centralized and distributed fusion methods will be used. In both cases, recursive algorithms for the LS linear estimators will be established using an innovation approach and the orthogonal projection Lemma (OPL); more specifically we have the following.

Centralized Fusion Estimation Problem. Our aim is to obtain the optimal LS linear filter, , of the state based on the measurements , given in (6), by recursive algorithms.

As is known, the LS linear filter is the orthogonal projection of the state onto the linear space spanned by . These observations are generally nonorthogonal vectors, but the Gram–Schmidt orthogonalization procedure allows us to replace them with a set of orthogonal vectors, called innovations, each defined as the difference between an observation and its one-stage predictor. Owing to the orthogonality of the innovations, and since the innovation process is uniquely determined by the observations, the LS linear filter, , can be calculated as a linear combination of the innovations; namely, where are the innovation vectors, with the one-stage observation predictor, , and .
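The innovation idea can be illustrated in the simplest setting. The sketch below uses a scalar model with assumed parameters (F, H, Q, R) and conventional independent white noises, not the correlated noises of this paper: the innovations are computed as the differences between each observation and its one-stage predictor, and their sample lag-1 correlation is checked to be approximately zero, confirming that they form a white sequence.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5000
F, H, Q, R = 0.9, 1.0, 0.5, 1.0    # assumed scalar model, independent white noises

# Simulate state and observations
x = np.zeros(T); y = np.zeros(T)
x[0] = rng.normal()
for k in range(T):
    y[k] = H * x[k] + rng.normal(scale=np.sqrt(R))
    if k + 1 < T:
        x[k + 1] = F * x[k] + rng.normal(scale=np.sqrt(Q))

# Standard Kalman recursion; innovation mu_k = y_k - H * (one-stage predictor)
xf, Pf = 0.0, 1.0
mus = np.zeros(T)
for k in range(T):
    xp = F * xf if k > 0 else xf          # one-stage state predictor
    Pp = F * Pf * F + Q if k > 0 else Pf
    mus[k] = y[k] - H * xp                # innovation
    S = H * Pp * H + R
    K = Pp * H / S
    xf = xp + K * mus[k]
    Pf = (1 - K * H) * Pp

# The innovations form an (approximately) white sequence
rho1 = np.corrcoef(mus[1:], mus[:-1])[0, 1]
print(rho1)                               # close to zero
```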

Distributed Fusion Estimation Problem. To address the distributed fusion estimation problem, firstly, recursive algorithms to obtain local LS linear filters, , for , and the error cross-covariance matrices between any two local estimates, are derived. Secondly, the distributed fusion filter, , is established by applying the optimal information fusion criterion weighted by matrices in the linear minimum variance sense [30].

Analogously to (10), denoting , , and , the local filter is expressed as

3. Optimal LS Linear Centralized Fusion Estimation

In this section a recursive algorithm for the centralized optimal (under the LS criterion) linear filter is derived. This algorithm is obtained using (10) and the OPL, and it is presented in Theorem 5. First, in order to simplify the proof of Theorem 5, the following lemma is established.

Lemma 4. Under assumptions (i)–(iv), the following results hold:

Proof. Since is independent of , and hence . Now, using (1) and (6), can be calculated as follows: Taking into account that is independent of , the calculation of is similar to that of , and hence the proof is omitted.

Theorem 5. For the system model (1) and measurement model (6), under assumptions (i)–(iv), the optimal LS linear filter is obtained as where the state predictor, , satisfies
The innovation, , is given by The matrix is calculated by where satisfies The prediction error covariance matrix, , is obtained by where is calculated by The filtering error covariance matrix, , is given by The innovation covariance matrix, , satisfies The matrices , , , and are given in (4), (8), (12), and (13), respectively.

Proof. See Appendix A.

Remark 6. In conventional estimation problems in systems with missing measurements and uncorrelated additive white noises, the one-stage state and observation predictors are calculated as and , respectively. However, this is not true for the problem at hand since, due to the correlation assumption (ii), the noise estimators and must be taken into account for the derivation of the predictors. Besides the fact of considering missing measurements, this is the main difference between the optimal estimators proposed in the current paper and the suboptimal Kalman-type ones proposed in [16], where the noise estimators are considered to be equal to zero.
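The role of the noise estimators can be shown numerically. In the sketch below (model and parameters are assumed purely for illustration, in the spirit of the simulation example of Section 5), the process and measurement noises are built from a common white sequence, so they are autocorrelated and cross-correlated. Running a conventional Kalman filter that ignores this correlation leaves a clearly nonzero sample correlation between the process noise and the innovations; this residual correlation is precisely the information that the noise estimators in the optimal predictors exploit.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 20_000
v = rng.normal(size=T + 1)

# Illustrative noises built from a single white sequence v: w is one-step
# autocorrelated, and w and e are cross-correlated at the same sampling time.
w = v[1:] + 0.5 * v[:-1]           # process noise
e = v[1:]                          # measurement noise

F, H = 0.9, 1.0
x = np.zeros(T); y = np.zeros(T)
for k in range(T - 1):
    y[k] = H * x[k] + e[k]
    x[k + 1] = F * x[k] + w[k]
y[T - 1] = H * x[T - 1] + e[T - 1]

# Conventional Kalman filter that (wrongly) assumes independent white noises
Q, R = np.var(w), np.var(e)
xf, Pf = 0.0, 1.0
mus = np.zeros(T)
for k in range(T):
    xp, Pp = F * xf, F * Pf * F + Q
    mus[k] = y[k] - H * xp
    S = H * Pp * H + R
    K = Pp * H / S
    xf, Pf = xp + K * mus[k], (1 - K * H) * Pp

# Nonzero sample correlation between w and the innovations: the optimal
# predictors must therefore include the noise estimators.
print(np.corrcoef(w, mus)[0, 1])
```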

4. Distributed Fusion Estimation

One of the main disadvantages of the centralized fusion estimators derived in Section 3 is that they may have a high computational cost due to augmentation. Moreover, as is widely known, the centralized approach has several other drawbacks, such as poor reliability and flexibility and difficulty in fault detection and isolation. To overcome these disadvantages, our aim in this section is to address the optimal distributed fusion estimation problem, in which each single sensor provides its local LS linear estimator and the corresponding estimation error covariance matrix; these local estimators, along with the covariance and cross-covariance matrices of the estimation errors between any two sensors, are then sent to the fusion center, where they are fused according to the matrix-weighted fusion estimation criterion in the linear minimum variance sense [30].

4.1. Local LS Linear Filtering Algorithms

For each single sensor subsystem of systems (1) and (2), the following theorem provides recursive formulas for the local LS linear filters, , and their corresponding error covariance matrices, .

Theorem 7. For the th sensor subsystem of systems (1) and (2) under assumptions (i)–(iv), the local LS linear filter, , is calculated by where the local LS linear predictor, , satisfies with .
The innovation, , is given by with .
The vector is calculated from the following expression where .
The local prediction error covariance matrix, , is obtained by where , and , the filtering error covariance matrix, is given by
The innovation variance, , satisfies The matrix and the vector are given in (4) and (5), respectively.

Proof. The proof, based on the innovation approach and the OPL, is omitted since it is analogous to that of Theorem 5. Nevertheless, it should be noted that, in this proof, the Hadamard product is not used since, instead of the diagonal stochastic matrix , the scalar variable is involved in the derivation of the estimators.

Remark 8. As indicated in Remark 6 for the centralized estimators, it must be noted that, due to the correlation assumption (ii) of the additive noises and , the estimators and are not equal to zero, and hence the optimal local state predictor, , and the observation predictor, , are quite different from conventional filtering algorithms with uncorrelated white noises. This issue, along with the consideration of missing measurements at each single sensor, constitutes the main difference between the current optimal local estimators and the suboptimal local estimators proposed in [16].

4.2. Cross-Covariance Matrices of Local Estimation Errors

To apply the optimal fusion criterion weighted by matrices in the linear minimum variance sense, the filtering, , and prediction, , error cross-covariance matrices between local estimators of any two subsystems must be calculated.

For simplicity, besides the notation of Theorem 7, for , , we introduce the following notation: Also, in order to simplify the calculation of the error cross-covariance matrices, the following lemmas are given.

Lemma 9. Under assumptions (i)–(iv), the following results hold. (a)The expectation satisfies (b)The expectation satisfies where .(c)The expectation satisfies

Proof. (a) From (25) for and (24) for , we have and since , expression (32) is proved.
(b) Analogously, taking into account that , we have and expression (33) is immediately obtained. Finally, the derivation of expression is similar to that of (13) and hence it is omitted.
(c) Taking into account expression (26) for , with (2) for , we have and using (33) for , expression (34) is obtained.

Lemma 10. Under assumptions (i)–(iv), for , , the expectations are recursively obtained by with initial condition .

Proof. Taking into account expression (26) for , with (2) for , we have From the OPL, ; then, taking into account (32) for , and (33) for , it is enough to prove that which is easily deduced since

Lemma 11. Under assumptions (i)–(iv), for , , the innovation cross-covariance satisfies where is given by

Proof. Taking into account expression (26) for , with (2) for , we have and, from (34) for , expression for is clear.
Analogously, taking into account expression (26) for , with (2) for , we have and, from (32) for , expression for is immediately derived.

In the following theorem, recursive formulas to calculate the filtering and prediction error cross-covariance matrices, and , respectively, are derived.

Theorem 12. Under assumptions (i)–(iv), the cross-covariance matrices, , of the filtering errors between the th and the th sensor subsystems are recursively computed by where , the cross-covariance matrix of the prediction error between the th and the th sensor subsystems, satisfies where , . The vectors and the innovation cross-covariances are given in Lemmas 10 and 11, respectively.

Proof. By using (24) for and , we have Taking into account that and , the recursive expression for the cross-covariance matrices of the local filtering errors is immediately deduced.
Following an analogous reasoning, using now (25) and taking into account that and , it is easy to see that Finally, using again (24) for , and since , we have and the expression for the cross-covariance matrices of the local prediction errors is easily obtained.

4.3. Distributed Fusion Filtering Estimators

Once the local LS linear filtering estimators and their error covariance matrices , given in Theorem 7, along with the error cross-covariance matrices, , given in Theorem 12, are available, the distributed optimal weighted fusion estimators and their error covariance matrices are obtained by applying the optimal information fusion criterion weighted by matrices in the linear minimum variance sense [30].

Theorem 13. For the system model (1) and measurement model (2), under assumptions (i)–(iv), the distributed optimal fusion filter, , is given by where the local estimators are calculated by the recursive algorithm established in Theorem 7.
The optimal matrix weights are computed by where the matrices and are both matrices, and is an positive definite symmetric block matrix, whose matrix entries are given in Theorems 7 and 12.
The error covariance matrices of the distributed weighted fusion filtering estimators are computed by and the following inequality holds: .

Proof. The proof is omitted because it follows directly from the optimal information criterion weighted by matrices in the linear minimum variance sense [30].
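A minimal numerical sketch of the matrix-weighted fusion criterion of [30] follows. A hypothetical joint error covariance Sigma stands in for the blocks produced by Theorems 7 and 12; the weight formula is the standard one for the linear minimum variance criterion, with e the stacked block column of identity matrices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, L = 2, 3                                   # state dimension, number of sensors

# Hypothetical joint covariance Sigma of the stacked local filtering errors
# (its n x n blocks would be supplied by Theorems 7 and 12).
M = rng.normal(size=(n * L, n * L))
Sigma = M @ M.T + n * L * np.eye(n * L)       # symmetric positive definite

# Optimal matrix weights in the linear minimum variance sense [30]:
#   [A_1 ... A_L] = (e^T Sigma^{-1} e)^{-1} e^T Sigma^{-1},  e = [I ... I]^T
e = np.tile(np.eye(n), (L, 1))
Si = np.linalg.inv(Sigma)
P0 = np.linalg.inv(e.T @ Si @ e)              # fused error covariance
W = P0 @ e.T @ Si                             # block row of weights A_1 ... A_L

# Unbiasedness constraint: the weight matrices sum to the identity
print(np.round(W @ e, 10))

# P0 <= P_i for every local covariance P_i (fusion never degrades accuracy)
for i in range(L):
    Pi = Sigma[i * n:(i + 1) * n, i * n:(i + 1) * n]
    assert np.all(np.linalg.eigvalsh(Pi - P0) >= -1e-9)
```

The eigenvalue check at the end verifies the inequality stated in Theorem 13: the fused error covariance is never larger (in the matrix sense) than any local one.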

Remark 14. The proposed distributed optimal LS linear fusion filter requires the computation of an inverse matrix, with the dimension of the system state and the number of sensors. Consequently, the proposed distributed fusion method has a computational complexity of , equal to that of the distributed Kalman-type filter in [16] and lower than that of distributed fusion filters based on the state augmentation approach. Hence, our distributed fusion method is superior to the filter proposed in [16] (since it has the same computational burden but better accuracy) and also to distributed fusion filters based on state augmentation (since it has lower computational complexity).

5. Numerical Simulation Example

In this section, a numerical simulation example is presented to illustrate the effectiveness of the centralized and distributed filtering algorithms proposed in this paper. Consider a scalar first-order autoregressive model with missing measurements coming from two sensors with autocorrelated and cross-correlated noises. According to the proposed observation model, two different independent sequences of random variables with a certain probability distribution over the interval [0, 1] are used to model the missing phenomenon. Specifically, the following model is considered: where the initial state is a zero-mean Gaussian variable with variance . The noise processes and , , are defined by where the sequence of variables is a zero-mean Gaussian white process with variance . Clearly, according to assumption (ii), the additive noises and are one-step autocorrelated and two-step cross-correlated with
The phenomenon of missing measurements for each sensor is described as follows.
(1) In the first sensor, a sequence of independent and identically distributed (i.i.d.) random variables, , is considered, with probability distribution given by If , which occurs with probability , the state is missing and the observation contains only noise ; if , only partial information on the state is missing in such an observation, which happens with probability ; and, finally, the state is present in the observation with probability when . The mean and variance of these variables are easily calculated, being and , for all .
(2) In the second sensor, a sequence of i.i.d. Bernoulli random variables, , is considered, with ; in this case, if the state is present in the measurement with probability , whereas if such an observation contains only additive noise, , with probability . So, no partial loss of information is considered in this sensor. Clearly, for all , and .

To illustrate the feasibility and effectiveness of the proposed estimators, a MATLAB program was run in which fifty iterations of the proposed algorithms were performed for different values of and . Using simulated values of the state and the corresponding observations, both distributed and centralized filtering estimates of the state are calculated, as well as the corresponding error variances, which provide a measure of the estimation accuracy.

Firstly, for , the local, centralized, and distributed filtering error variances are displayed in Figure 1 for the values and . In agreement with Theorem 13, this figure corroborates that the optimal distributed fusion filter performs considerably better than each local filter, but slightly worse than the centralized filter. Nevertheless, although the distributed fusion filter is slightly less accurate than the centralized one, both filters perform similarly and provide good estimates. Moreover, this slight difference is compensated by the fact that the distributed fusion structure is in general more robust, reduces the computational cost, and improves reliability thanks to its parallel structure. For these reasons, the distributed filter is generally preferred in practice.

Next, to analyze the performance of the proposed estimators versus the probability that the state is present in the measurements of the second sensor, the centralized and distributed filtering error variances have been calculated for , and different values of the probability and . The results are displayed in Figure 2; analysis of this figure reveals that, as increases (or, equivalently, as the probability that the state is missing from the observations of the second sensor decreases), the filtering error variances become smaller and, hence, better estimates are obtained. Also, this figure shows that, for all the probability values considered, the error variances of the centralized filter are always lower than those of the distributed filter. Analogous results are obtained for other values of and the probability .

On the other hand, to compare the performance of the estimators for different degrees of correlation between the state and the observation noises, the centralized and distributed filtering error variances have been calculated considering , and different values of , specifically , and 1. These values yield different correlations between the process noise and the first-sensor observation noise and, consequently, different correlations, , between the state and the first-sensor observation noise. The error variances are displayed in Figure 3, from which it is inferred that the error variances are smaller (and, consequently, the estimators perform better) as increases; these results were expected, since the correlation between the state and the observations increases with . Analogous results are obtained for different values of and other values of the probability .

Now, completing the results of the two previous figures, the performance of the filters is analyzed when , the probability is varied from 0.1 to 0.9, and the values , and are considered. It must be noted that, in all the cases examined, the error variances show insignificant variation from a certain iteration on and, consequently, only the values at a specific iteration (viz., ) are shown. The results are presented in Figure 4, which, for the sake of clarity, displays only the distributed filtering error variances. In agreement with the comments on Figures 2 and 3, this figure shows that, for a fixed value of , the performance of the estimators improves as becomes greater and, for a fixed value of , more accurate estimations are also obtained as increases. Hence, from this figure it can be concluded that, as or decreases (which means that the correlation between the state and the first sensor observation noise decreases or the probability that the state is present in the second sensor measurements decreases, respectively), the filtering error variances become greater and, consequently, worse estimations are obtained.

Finally, a comparative analysis is presented between the classical Kalman filter [2], the Kalman-type filter with correlated and cross-correlated noises given in [16], the filter proposed in [23] for systems with different failure rates in multisensor networks, and the centralized and distributed filters proposed in this paper. For the comparison, the same parameter values as in Figure 1 are considered (, , and ).

On the basis of one thousand independent simulations of the mentioned algorithms, a comparison between the different filtering estimates is performed using the mean square error (MSE) criterion. For , let denote the th set of artificially simulated data (which is taken as the th set of true values of the state) and the filtering estimate at the sampling time in the th simulation run. For each algorithm, the filtering MSE at time is calculated by .
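The MSE criterion described above (averaging the squared filtering errors over the independent simulation runs at each time instant) can be computed as in the following sketch, where the array shapes are an assumption about how the runs are stored:

```python
import numpy as np

def filtering_mse(true_states, estimates):
    """Per-time-step mean square filtering error over simulation runs.

    true_states, estimates: arrays of shape (S, K), with S independent
    simulation runs (S = 1000 in the text) and K time steps. Returns the
    length-K array whose k-th entry is the average over the S runs of the
    squared difference between the true state and its filtering estimate
    at time k.
    """
    true_states = np.asarray(true_states, dtype=float)
    estimates = np.asarray(estimates, dtype=float)
    return ((true_states - estimates) ** 2).mean(axis=0)  # average over runs
```

Applying this function to each algorithm's estimates over the same one thousand simulated state trajectories yields the curves compared in Figure 5.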

The values , for , are displayed in Figure 5, which shows that, for all , the proposed centralized and distributed filters have approximately the same values, which in turn are smaller than those of the filter in [23] and considerably smaller than those of the filters in [2, 16]. Hence, we can conclude that, according to the MSE criterion, the proposed filtering estimates perform significantly better than other filters in the literature.

6. Conclusions

The LS linear estimation problem from missing measurements has been investigated for multisensor linear discrete-time systems with autocorrelated and cross-correlated noises. The main contributions are summarized as follows.

(1) Using both centralized and distributed fusion methods to process the measurement data from the different sensors, recursive optimal LS linear filtering algorithms are derived by an innovation approach.

(2) At each sensor, the possibility of missing measurements (i.e., observations containing only partial information about the state or even only noise) is modelled by a sequence of independent random variables taking discrete values over the interval .

(3) The multisensor system model considered in the current paper covers those situations where the sensor and process noises are one-step autocorrelated and two-step cross-correlated. Also, one-step cross-correlations between different sensor noises are considered. This correlation assumption is valid in a wide spectrum of applications, for example, in target tracking systems with process and measurement noises dependent on the system state, or in situations where a target is observed by multiple sensors that all operate in the same noisy environment. Nevertheless, the current study can be extended to more general systems involving finite-step autocorrelated and cross-correlated noises with no difficulty, except for a greater complexity in the mathematical expressions.

(4) The applicability of the proposed centralized and distributed filtering algorithms is illustrated by a numerical simulation example, where a scalar state process generated by a first-order autoregressive model is estimated from missing measurements coming from two sensors with autocorrelated and cross-correlated noises.
The results confirm that the centralized and distributed fusion estimators have approximately the same error variances, with a slight inferiority of the distributed one that is compensated by its reduced computational burden and lower communication demands in sensor networks. Also, compared with some existing estimation methods, the proposed algorithms provide better estimations in the mean square error sense.

Appendix

Proof of Theorem 5

From (10), expression (15) for the state filter in terms of the one-stage predictor is immediately clear.

Expression (16) for the state predictor is obtained as follows: and clearly, .

Now we show expression (17) for the innovation, , for which it is enough to obtain an expression for . A similar reasoning to that used to prove (16) leads to with . Hence, and expression (17) for the innovation is clear.

Next, expression (18) for the matrix is derived. From (6) and the independence assumption, it is clear that . From expression (A.3) for and the OPL, is calculated as follows: . By subtraction of the above expectations, expression (18) for is obtained. From (1), expression (19) for is immediately clear.

Expression (20) for the prediction error covariance matrix, , is easily obtained by using (1) and (16); and, from (1) and (15), expression (21) for is also immediate. Expression (22) for the filtering error covariance matrix, , is derived directly from (15).

Finally, we prove expression (23) for the innovation covariance matrix . From (6) and using (9), we have that . Using now (A.3) for and property (9), and taking into account that, from the OPL, , the following identity holds:

From the above expectations and after some manipulations, expression (23) for the innovation covariance matrix is obtained.

Acknowledgments

This research is supported by Ministerio de Ciencia e Innovación (Programa FPU and Grant no. MTM2011-24718) and Junta de Andalucía (Grant no. P07-FQM-02701).