The distributed fusion state estimation problem is addressed for sensor network systems with random state transition matrices and random measurement matrices, which provide a unified framework to cover several network-induced random phenomena. The process noise and all the sensor measurement noises are assumed to be one-step autocorrelated, different sensor noises are one-step cross-correlated, and the process noise and each sensor measurement noise are two-step cross-correlated. These correlation assumptions cover many practical situations where the classical independence hypothesis is not realistic. Using an innovation methodology, local least-squares linear filtering estimators are recursively obtained at each sensor. The distributed fusion method is then used to form the optimal matrix-weighted sum of these local filters according to the mean squared error criterion. A numerical simulation example shows the accuracy of the proposed distributed fusion filtering algorithm and illustrates some of the network-induced stochastic uncertainties that the current system model can deal with, such as sensor gain degradation, missing measurements, and multiplicative noise.

1. Introduction

In recent years, information and communication technologies have developed rapidly, making sensor networks very popular for measurement acquisition and data processing, as they usually provide more information than traditional single-sensor systems. For this reason, the estimation problem in sensor network stochastic systems has attracted great interest in many research fields of engineering, computing, and mathematics, mainly because of its broad scope of applications (target tracking, environment observation, habitat monitoring, animal tracking, communications, etc.).

Although fusion algorithms have been proposed according to different methods (see, e.g., [1–3]), most existing results do not consider the new problems that inevitably arise in sensor network systems due to the restrictions of the physical equipment, mainly limited channel bandwidth and uncertainties in the external environment, affecting both the modeling process and the transmission of information. These situations can dramatically worsen the quality of the designed fusion estimators. Multiplicative noise uncertainties, random delays, packet dropouts, and missing measurements are some of the most common problems that motivate the need to develop new estimation algorithms. Therefore, it is not surprising that, in the past few years, the study of the state estimation problem in network systems with one or several of the aforementioned uncertainties has become an active research area (see, e.g., [4–9] and the references therein).

Clearly, some of these situations with network-induced phenomena are special cases of systems with random transition and/or measurement parameter matrices, which have important practical significance and arise in many application areas such as digital control of chemical processes, radar control, navigation systems, or economic systems [10]. On the one hand, random state transition matrices arise in the context of systems with state-dependent multiplicative noise, of great interest for applications in aerospace systems, communication, image processing, and so forth [11]. On the other hand, systems with multiplicative observation noises [12] are clearly special cases of systems with random measurement matrices. Also, networked systems with stochastic sensor gain degradation, as considered in [13], or the systems with state and measurement multiplicative noises in [9] can be rewritten in terms of random transition and measurement matrices. It must be mentioned that, in many papers (see, e.g., [8]), systems with random delays and packet dropouts are transformed into systems with random parameter matrices. Consequently, these kinds of systems can model a great variety of real situations and, for this reason, the estimation problem in this type of system has gained considerable interest in recent years (see, e.g., [14–18] and the references therein).

Furthermore, in the latest research on signal estimation, the fairly conservative assumption that the process and measurement noises are uncorrelated is commonly weakened since, in many practical situations, such noises are usually correlated. For example, when all the sensors operate in the same noisy environment, the sensor noises are usually correlated. Likewise, when the process noise and the sensor noises are state dependent, there may be cross-correlation between them, as well as between different sensor noises. Also, the augmented systems used to describe random delays and measurement losses are systems with correlated noises, and discretized continuous-time systems have inherently correlated noises as well. Hence, in both systems with deterministic matrices and systems with random parameter matrices, the estimation problem with correlated and cross-correlated noises has become a challenging research topic. In the first case, the optimal Kalman filtering fusion problem in systems with process and measurement noises cross-correlated at the same sampling time is addressed, for example, in [19], and at consecutive sampling times in [8]. Under different correlation assumptions on the noises, centralized and distributed fusion algorithms are obtained in [11] for systems with multiplicative noise in the state equation, in [9] when multiplicative noises exist in both the state and observation equations, and in [13] for systems where the measurements might contain only partial information about the signal. For systems with random parameter matrices and autocorrelated and cross-correlated noises, many research efforts have been devoted to the centralized fusion estimation problem [15–18]. Centralized algorithms are based on a fusion center able to receive and process the measured data from all the sensors; they provide optimal estimators from the measurements of all the sensors and, hence, when all the sensors work correctly, they have the best accuracy.
Nevertheless, as is well known, the centralized approach has several drawbacks, such as poor robustness, survivability, and reliability, heavy communication load, and expensive computational cost, which can be overcome by using distributed approaches. In the distributed fusion method, each sensor estimates the state based on its own measurement data, and these local estimators are combined according to a certain information fusion criterion. To the best of the authors' knowledge, the distributed fusion estimation problem in networked systems with both random parameter matrices and autocorrelated and cross-correlated noises has not been investigated.

Motivated by the above considerations, this paper deals with the distributed fusion estimation problem in sensor network systems including simultaneously random parameter matrices and correlated noises in the state-space model. The main contributions of our study can be highlighted as follows. (1) The network system model with random parameter matrices considered provides a unified framework to treat some network-induced phenomena, such as multiplicative noise uncertainties, missing measurements, or sensor gain degradation, and, hence, the proposed distributed fusion filter has wide applicability. (2) One-step autocorrelation of the noises and, also, two-step cross-correlation between the process noise and different sensor noises are considered. (3) The innovation technique is used to obtain algorithms for the local least-squares linear filtering estimators which are recursive and computationally simple. (4) The proposed distributed fusion filter is generated by a matrix-weighted linear combination of the local filtering estimators using the mean squared error as optimality criterion, requiring the cross-covariance matrices between any two local filters, but not the error cross-covariance matrices, as in [1].

The rest of the paper is organized as follows. The system model with multiple sensors and random parameter matrices is presented in Section 2, including a brief description of the traditional centralized and distributed fusion estimation methods. The local least-squares linear filtering algorithms are derived in Section 3, using an innovation approach. In Section 4, the proposed distributed fusion filter is obtained by a matrix-weighted linear combination of the local filtering estimators using the mean squared error as optimality criterion. A simulation example is given in Section 5 to show the performance of the proposed estimation algorithms, and some conclusions are drawn in Section 6.

Notation. The notation used throughout the paper is standard. R^n denotes the n-dimensional Euclidean space. A^T and A^{-1} denote the transpose and inverse of a matrix A, respectively. The shorthand (A_ij) denotes a matrix partitioned into submatrices A_ij. If a matrix dimension is not explicitly stated, it is assumed to be compatible for algebraic operations. Moreover, for any function G(k, s), depending on the time instants k and s, we write G(k) = G(k, k) for simplicity. Analogously, for any function G^(ij), depending on sensors i and j, we write G^(i) = G^(ii). δ_{k,s} represents the Kronecker delta function, which is equal to one if k = s and zero otherwise. The Orthogonal Projection Lemma is abbreviated as OPL. Finally, all the random vectors are defined on the probability space (Ω, F, P), and for arbitrary random vectors α and β, we denote Cov[α, β] = E[(α − E[α])(β − E[β])^T] and Cov[α] = Cov[α, α], where E[·] stands for the mathematical expectation operator.

2. System Formulation and Problem Statement

Consider the following discrete-time linear stochastic system with m different sensors:

x_{k+1} = F_k x_k + w_k, k ≥ 0, (1)

y_k^(i) = H_k^(i) x_k + v_k^(i), k ≥ 1, i = 1, …, m, (2)

where x_k is the state vector and y_k^(i) is the output measured by sensor i, both at time k; {F_k} and {H_k^(i)} are sequences of random parameter matrices with compatible dimensions; {w_k} is the process noise and {v_k^(i)} is the measurement noise of the i-th sensor.

2.1. Model Assumptions

The assumptions about the initial state, the random parameter matrices, and the noises involved in the system model (1)-(2), under which the fusion filtering problem will be addressed, are as follows:
(i) The initial state x_0 is a random vector with known mean E[x_0] and known covariance matrix Cov[x_0] = P_0.
(ii) {F_k} and {H_k^(i)}, i = 1, …, m, are sequences of independent random parameter matrices with known means E[F_k] and E[H_k^(i)], and the covariances of their entries are also assumed to be known; f_{pq}(k) denotes the (p, q) entry of the matrix F_k, and h_{pq}^(i)(k) denotes the (p, q) entry of H_k^(i), for i = 1, …, m.
(iii) The noises {w_k} and {v_k^(i)}, i = 1, …, m, are zero-mean sequences with known covariances and cross-covariances; specifically, they are one-step autocorrelated, different sensor noises are one-step cross-correlated, and the process noise and each sensor noise are two-step cross-correlated.
(iv) For i = 1, …, m, the initial state x_0 and the processes {F_k} and {H_k^(i)} are mutually independent, and they are independent of the additive noises {w_k} and {v_k^(i)}.

Remark 1. Let S denote an arbitrary deterministic matrix. From assumption (ii), the following identity holds for the entries of the expectation of a quadratic form in the random matrix F_k:

(E[F_k S F_k^T])_{pp'} = (E[F_k] S E[F_k]^T)_{pp'} + Σ_{q,q'} Cov[f_{pq}(k), f_{p'q'}(k)] S_{qq'},

and the analogous identity holds for E[H_k^(i) S H_k^(i)T], with the entries h_{pq}^(i)(k).
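Identities of this Remark-1 type can be illustrated numerically. The following sketch assumes a particular state-dependent multiplicative-noise structure, F = F̄ + εC with ε a scalar zero-mean, unit-variance variable (one common special case; the general formula works entrywise with the covariances of assumption (ii)). The matrices `F_bar`, `C`, and `X` are illustrative values, not taken from the paper:

```python
import numpy as np

def expected_quadratic(F_bar, C, X):
    """E[F X F^T] for the random matrix F = F_bar + eps*C, where eps is a
    scalar, zero-mean, unit-variance random variable independent of X.
    Expanding and using E[eps] = 0, E[eps^2] = 1, the cross terms vanish
    and only the two terms below remain."""
    return F_bar @ X @ F_bar.T + C @ X @ C.T

# Check against an exact two-point distribution: eps = +1 or -1, each with
# probability 1/2 (zero mean, unit variance), for which the expectation can
# be computed directly as the average over the two outcomes.
F_bar = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[0.2, 0.0], [0.1, 0.3]])
X = np.array([[2.0, 0.5], [0.5, 1.0]])
exact = 0.5 * ((F_bar + C) @ X @ (F_bar + C).T
               + (F_bar - C) @ X @ (F_bar - C).T)
assert np.allclose(expected_quadratic(F_bar, C, X), exact)
```

The two-point check is exact, not Monte Carlo: the cross terms cancel pairwise, which is precisely why only the mean term and the covariance term survive in the identity.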

Remark 2. Assumptions (i)–(iii) lead to a recursive formula for the correlation matrix of the state vector, D_k = E[x_k x_k^T], in which the expectations involving the random matrix F_k are computed as indicated in Remark 1 (see, e.g., [15]).

Remark 3. The following correlation properties of the noise vectors {w_k} and {v_k^(i)} are easily inferred from assumptions (iii)-(iv):
(i) The noise vector w_k is uncorrelated with the observations prior to a certain time and correlated with the subsequent ones, with cross-covariances determined by the correlation structure in assumption (iii).
(ii) Analogously, the noise vector v_k^(i) is uncorrelated with sufficiently past observations and correlated with the most recent ones.

Our aim is to address the optimal least-squares (LS) linear filtering problem of the state by effectively fusing the observations of the m sensors; specifically, we use the traditional centralized and distributed fusion methods. In the centralized fusion method, all the measurement data coming from the sensors are used in the fusion center for the state estimation, while in the distributed fusion method, the observations in the fusion center are replaced by locally computed estimates. As is well known, the main drawbacks of the former are its expensive computational cost and poor robustness and flexibility, while the latter overcomes these disadvantages and provides greater accuracy than the local estimators.

2.2. Centralized Fusion Algorithm

By combining the measurement equations given by (2), that is, by stacking the m sensor outputs into a single augmented measurement vector, the discrete-time multisensor system with random parameter matrices and correlated additive noises (1)-(2) considered in this paper becomes a special case of the discrete-time (single-sensor) stochastic system with random parameter matrices and correlated additive noises considered in [17]. Hence, the optimal centralized fusion filter can be obtained by the optimal LS linear filtering algorithm in [17].
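In code, this augmentation amounts to stacking the sensor outputs and measurement matrices into single block objects; a minimal sketch follows (the names and dimensions are illustrative, and the corresponding stacking of the noise covariance blocks is omitted):

```python
import numpy as np

def stack_measurements(ys, Hs):
    """Centralized fusion: stack the m sensor outputs into one augmented
    measurement y = [y_1; ...; y_m] with block observation matrix
    H = [H_1; ...; H_m], so that a single filter processes all sensors."""
    return np.concatenate(ys), np.vstack(Hs)

# Two hypothetical scalar sensors observing different components of a 2-D state:
y, H = stack_measurements(
    [np.array([1.0]), np.array([2.0])],
    [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])],
)
```

After stacking, the augmented pair (y, H) plays the role of a single-sensor measurement model, which is what allows the single-sensor algorithm of [17] to be reused.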

2.3. Distributed Fusion Algorithm

The distributed fusion method computes, at each sensor, a local optimal LS linear state filter using its own measurement data, and, subsequently, the fusion center computes the LS matrix-weighted linear combination of the local filtering estimators. Hence, the distributed fusion filtering algorithm is performed in two steps. In the first one (Section 3), for each i = 1, …, m, a local LS linear estimator of the signal, x̂_k^(i), is produced by a recursive algorithm using the measurements y_1^(i), …, y_k^(i). In the second step (Section 4), a distributed fusion estimator, x̂_k^(D), is generated by a matrix-weighted linear combination of the local estimators, x̂_k^(i), i = 1, …, m, using the mean squared error as optimality criterion.
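The two-step structure can be sketched as follows, with a hypothetical scalar local filter whose constants `a` and `g` stand in for the optimal predictor and gain computations of Theorem 4, and scalar fusion weights in place of the matrix weights of Section 4:

```python
import numpy as np

def local_filter(ys, a=0.9, g=0.5):
    """Step 1 (per sensor): at each time, form the one-stage predictor and
    correct it with the innovation (observation minus its prediction).
    The constants a and g are illustrative, not the optimal values."""
    xh, out = 0.0, []
    for y in ys:
        xp = a * xh               # one-stage predictor
        xh = xp + g * (y - xp)    # filter update via the innovation
        out.append(xh)
    return np.array(out)

def fuse(local_estimates, weights):
    """Step 2 (fusion center): weighted sum of the local filters
    (scalar weights here; matrix weights in the paper)."""
    return sum(w * x for w, x in zip(weights, local_estimates))

# Two sensors observing the same constant signal, fused with equal weights:
fused = fuse([local_filter([1.0, 1.0]), local_filter([1.0, 1.0])],
             [0.5, 0.5])
```

The point of the sketch is the data flow, not the numbers: each sensor runs its own recursion on its own data, and only the local estimates travel to the fusion center.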

3. Local LS Linear Filtering Algorithm

This section is concerned with the problem of obtaining a recursive algorithm for the local LS linear filter at each sensor i, for i = 1, …, m, by using an innovation approach.

Theorem 4. For system (1)-(2), under assumptions (i)–(iv), the local LS linear filter is given by (8), where the one-stage state predictor satisfies (9). The filtering error covariance matrix is given by (10), where the prediction error covariance matrix is calculated by (11) with (12). The matrix involved in the filter gain is obtained by (13), where an auxiliary matrix is given by (14) with (15). The innovation is given by (16), and the innovation covariance matrix satisfies (17). The remaining matrices are given in (5), (6), and (7), respectively.

Proof. See Appendix.

The computational procedure of the proposed local LS linear filtering algorithm can be summarized as follows.

The matrices appearing in expressions (6) and (7) are computed first. The state correlation matrix is obtained recursively by (5), where the expectations involving the random parameter matrices are calculated as indicated in Remark 1. The matrix required for the innovation covariance matrix (17) is also computed at this stage. Note that all these matrices depend only on the system model information and can be obtained before the observations are available.

At the sampling time k, once iteration k − 1 is finished and the new observation y_k^(i) is available, starting with the prior knowledge (the predictors, covariance matrices, and auxiliary matrices from the previous iteration), the proposed filtering algorithm operates as follows.

Step 1. From (15), the corresponding matrix is computed; from it, the correlation between the prediction error and the noise is provided by (14), and then the matrix in (13) is obtained.

Step 2. The innovation and its covariance matrix are computed by (16) and (17), respectively.

Step 3. The filter and the filtering error covariance matrix are computed by (8) and (10), respectively.

Step 4. To implement the above steps at time k + 1, we must (1) compute the state predictor by (9); (2) compute the matrix in (12) and, from this, the prediction error covariance matrix by (11).
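For orientation, the shape of one recursion of Steps 1–4 can be sketched in the simplified setting of deterministic (mean) parameter matrices and uncorrelated noises; the paper's algorithm adds to each of these quantities the correction terms induced by the random matrices and the noise correlations:

```python
import numpy as np

def correct(x_pred, P_pred, y, H_bar, R):
    """Steps 1-3 in simplified form: innovation, innovation covariance,
    gain, and filter update (mean measurement matrix, uncorrelated noises).
    This is the standard Kalman correction, not the paper's full recursion."""
    innov = y - H_bar @ x_pred                  # innovation
    Pi = H_bar @ P_pred @ H_bar.T + R           # innovation covariance
    L = P_pred @ H_bar.T @ np.linalg.inv(Pi)    # filter gain
    x_filt = x_pred + L @ innov                 # filter
    P_filt = P_pred - L @ H_bar @ P_pred        # filtering error covariance
    return x_filt, P_filt

def predict(x_filt, P_filt, F_bar, Q):
    """Step 4: one-stage state predictor and its error covariance,
    again with the mean transition matrix only."""
    return F_bar @ x_filt, F_bar @ P_filt @ F_bar.T + Q
```

A usage pass alternates `correct` and `predict` over the observation sequence; in the paper's setting, the noise filter and one-stage noise predictor (nonzero because of assumption (iii)) enter both the predictor and the innovation.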

4. Distributed Fusion Filtering Estimators

Once the local LS linear filters for each sensor have been obtained, our objective in this section is to design a distributed fusion filter, x̂_k^(D), as a matrix-weighted linear combination of such estimators which minimizes the mean squared estimation error. To simplify the derivation of the proposed fusion estimators, we first present some useful lemmas that provide the expectations necessary for subsequent calculations: the cross-covariance matrices between local state predictors, the expectations between the state and the innovations, and the innovation cross-covariance matrices. The assumptions and notation in these lemmas are those of Section 3.

4.1. Preliminary Results

Lemma 5. For i, j = 1, …, m, the cross-covariance matrix between any two local state predictors satisfies a recursive formula.

Proof. Using (A.7) and the independence and correlation properties of the model, the proof of this lemma is immediate.

Lemma 6. For i, j = 1, …, m, the expectation between the state and the innovation satisfies a recursive expression involving an auxiliary matrix, which is itself given recursively.

Proof. Taking expression (16) for the innovation into account, and then using (2), (A.7), and the OPL, the expression for the required expectation is immediately derived. Using (A.7) again, the expression for the auxiliary matrix is also immediately clear, and the proof is completed.

Lemma 7. For i, j = 1, …, m with i ≠ j, the innovation cross-covariance matrix satisfies an expression involving two auxiliary matrices, which are in turn obtained recursively.

Proof. Using (16) for the innovation and reasoning similarly to the derivation of (17), the expression for the innovation cross-covariance matrix is immediately derived. The expressions for the auxiliary matrices are obtained by analogous reasoning.

4.2. Distributed Fusion Filter Design

As we have already indicated, our goal is to obtain a distributed fusion filter, x̂_k^(D), generated by a weighted sum of the local estimators, in which the weight matrices are computed to minimize the mean squared estimation error.

So, by denoting Ξ_k the stacked vector of local filters and M_k the matrix of weights, the aim is to find M_k such that the estimator M_k Ξ_k minimizes the mean squared error E[(x_k − M_k Ξ_k)^T (x_k − M_k Ξ_k)]. As is well known, the solution of this problem is given by the matrix M_k = E[x_k Ξ_k^T] (E[Ξ_k Ξ_k^T])^{−1}.
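This least-squares solution can be sketched directly; the two-sensor scalar numbers below are a hypothetical example, not the paper's simulation:

```python
import numpy as np

def optimal_weight_matrix(K, Sigma):
    """LS-optimal weights M = E[x Xi^T] (E[Xi Xi^T])^{-1} for the fused
    estimator M @ Xi, where Xi is the stacked vector of local filters."""
    return K @ np.linalg.inv(Sigma)

# Hypothetical scalar local estimators x_i = x + e_i with Var(x) = 1 and
# independent errors of variances 1 and 4:
#   E[x Xi^T] = [1, 1],  E[Xi Xi^T] = [[2, 1], [1, 5]].
K = np.array([[1.0, 1.0]])
Sigma = np.array([[2.0, 1.0],
                  [1.0, 5.0]])
M = optimal_weight_matrix(K, Sigma)   # -> [[4/9, 1/9]]
```

As expected, the less noisy local estimator receives the larger weight; note that the LS criterion also shrinks the fused estimate, so the weights need not sum to one.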

The following theorem provides the proposed distributed fusion filtering estimators and their error covariance matrices.

Theorem 8. Let Ξ_k be the vector formed by stacking the local LS filtering estimators calculated in Theorem 4. Then, the distributed fusion filter is given by (30), where the weight matrices and the cross-covariance matrices between local filters are computed by (31) and (32), respectively, with the necessary expectations given in Lemmas 5, 6, and 7.
The error covariance matrices of the distributed fusion filtering estimators are computed by (33).

Proof. Expressions (30) and (33) for the distributed estimators and their error covariance matrices, respectively, are immediately derived from (29). Expression (32) for the cross-covariance matrices between local filtering estimators follows easily using expression (8) of such estimators.

5. Numerical Simulation Example

Consider a discrete-time linear networked system with state-dependent multiplicative noise in the state equation and scalar measurements from four sensors, where the multiplicative noise is a zero-mean Gaussian white process with unit variance. The additive process and measurement noises are defined as linear combinations of a common zero-mean Gaussian white process, which induces the autocorrelation and cross-correlation structure of assumption (iii).

For i = 1, …, 4, the random parameter measurement matrices H_k^(i) are defined through scalar random multipliers as follows:
(i) In sensor 1, the multiplier is a sequence of independent and identically distributed (i.i.d.) random variables uniformly distributed over a fixed interval.
(ii) In sensor 2, it is a sequence of i.i.d. discrete random variables taking a finite set of values with known probabilities.
(iii) In sensor 3, it is a Bernoulli process with a known success probability.
(iv) In sensor 4, it combines a Bernoulli process with a zero-mean Gaussian white multiplicative noise with unit variance.

Note that the random parameter matrices at each sensor allow modeling different types of uncertainty. Namely, in sensors 1 and 2, as in [13], the scalar random variables take values over a bounded interval and represent continuous and discrete stochastic sensor gain degradation, respectively. In sensor 3, the multipliers are Bernoulli random variables, thus covering the phenomenon of missing measurements: the value one means that the signal is present in the measurement coming from the third sensor at that time, while the value zero means that the signal is missing in the measured data or, equivalently, that such observation is only noise. Finally, as in [5], both missing measurements and multiplicative noise are considered in sensor 4.
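A draw of the four sensors' scalar measurement coefficients can be sketched as follows; every support and probability here is an illustrative placeholder, since the example's actual values are not reproduced in this excerpt:

```python
import numpy as np

def sensor_coefficients(rng, p3=0.5, p4=0.5):
    """One draw of the four scalar measurement multipliers. All supports and
    probabilities are hypothetical placeholders, not the example's values."""
    th1 = rng.uniform(0.5, 1.0)            # sensor 1: continuous gain degradation
    th2 = rng.choice([0.0, 0.5, 1.0])      # sensor 2: discrete gain degradation
    th3 = float(rng.random() < p3)         # sensor 3: missing measurements (Bernoulli)
    th4 = float(rng.random() < p4) * (1.0 + 0.5 * rng.standard_normal())
    # sensor 4: missing measurements combined with multiplicative noise
    return th1, th2, th3, th4

rng = np.random.default_rng(0)
th1, th2, th3, th4 = sensor_coefficients(rng)
```

Drawing fresh coefficients at every sampling time and multiplying them into the nominal measurement map is what turns these uncertainty phenomena into random measurement matrices.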

To illustrate the feasibility and effectiveness of the proposed algorithms, they were implemented in MATLAB, and one hundred iterations of the algorithms were run. Using simulated values, both centralized and distributed filtering estimates were calculated, as well as the corresponding error variances, in order to measure the estimation accuracy.

The error variances of the local, centralized, and distributed fusion filters are compared for fixed values of the Bernoulli probabilities of sensors 3 and 4. In Figure 1, we can see that the error variances of the distributed fusion filter are significantly smaller than those of every local filter but slightly greater than those of the centralized filter. Nevertheless, although the centralized fusion filter outperforms the distributed one, the difference is slight and both filters perform similarly and provide good estimations. Besides, this slight difference is compensated by the fact that the distributed fusion structure reduces the computational cost and has the advantage of better robustness and fault tolerance. For example, assuming that the fourth sensor is faulty, so that its measurement is corrupted by an additional fault term over a certain time interval, Figure 2 displays the corresponding filtering mean square errors of one thousand independent simulations at each sampling time, showing that the distributed fusion method has better fault-tolerance abilities than the centralized one.

Next, we analyze the centralized and distributed filtering accuracy as a function of the probabilities of the Bernoulli variables that model the uncertainties of the observations coming from sensors 3 and 4. Specifically, the filter performances are analyzed when the probability for sensor 3 is varied from 0.1 to 0.9 and different values of the probability for sensor 4 are considered. Since the behavior of the error variances is analogous at all the iterations, only the results at a specific iteration are shown here. In Figure 3, the centralized and distributed filtering error variances are displayed versus the probability for sensor 3, for several values of the probability for sensor 4. As expected, this figure shows that both the centralized and distributed error variances become smaller as either probability increases and, hence, the performance of both filters improves with these probabilities.

Finally, to evaluate the performance of the proposed filters in comparison with other filters reported in the literature, the filtering mean square error (MSE) at each sampling time is calculated. To compute the MSE, one thousand independent simulations were considered and one hundred iterations of each algorithm were performed for fixed values of the sensor probabilities. A comparative analysis was carried out between the proposed centralized and distributed filters and the following filters:
(i) The centralized Kalman filter for systems with independent additive noises.
(ii) The centralized filter for multisensor systems with multiplicative noises in the state and observation equations, missing measurements, and noise correlation at the same sampling time [5].
(iii) The centralized filter for networked systems with multiplicative noise in the state equation, stochastic sensor gain degradation, and independent additive noises [13].

The results of these comparisons are displayed in Figure 4, which shows that the proposed centralized and distributed filters perform better than the other three centralized filters. The Kalman filter provides the worst estimations, since it takes neither multiplicative noises nor missing measurements into account. The performance of the filter in [5] is better than that of the filter in [13], since the latter ignores the noise correlations and does not account for the multiplicative observation noise in sensor 4.

6. Conclusion

The distributed fusion filtering problem has been investigated for multisensor stochastic systems with random parameter matrices and correlated noises. The main outcomes and results can be summarized as follows:
(i) Recursive algorithms for the local LS linear filters of the system state, based on the measured output data coming from each sensor, have been designed by an innovation approach. The computational procedure of these local filtering algorithms is very simple and suitable for online applications.
(ii) Once the local filters have been obtained, a distributed fusion filter has been designed as the matrix-weighted sum of such local estimators which minimizes the mean squared estimation error. The error covariance matrices of the distributed fusion filter have also been derived.
(iii) A numerical simulation example has illustrated the usefulness of the proposed results. The error variance comparison has shown that both the centralized and the distributed filters outperform the local ones; this example has also shown that the slight superiority of the centralized filter over the distributed one is compensated by the better robustness and fault-tolerance abilities of the latter. Moreover, the example has highlighted the applicability of the proposed algorithm to a great variety of multisensor systems featuring network-induced stochastic uncertainties, such as sensor gain degradation, missing measurements, or multiplicative observation noises, all of which can be dealt with in the observation model considered in this paper.

A challenging further research topic is to address the estimation problem for this kind of system with random parameter matrices, considering a sensor network whose nodes are distributed according to a given topology, characterized by a directed graph. Also, an interesting future research topic is to consider other kinds of stochastic uncertainties which often appear in networked systems, such as random delays and packet dropouts.


Appendix

Proof of Theorem 4

This appendix provides the proof of Theorem 4 by an innovation approach. For the i-th sensor, the innovation at time k is defined as the difference between the observation y_k^(i) and its one-stage LS linear predictor based on the measurements y_1^(i), …, y_{k−1}^(i). Replacing the observation process by the innovation one, the LS linear estimator of a random vector based on the observations can be calculated as a linear combination of the innovations, as stated in (A.1).

From system equations (1)-(2) and the OPL, the state predictor and the observation predictor verify (A.2) and (A.3), respectively. Because of the correlation assumption (iii), the noise filter and the one-stage noise predictor are not equal to zero and, hence, expressions for such estimators must be calculated.

LS Linear Noise Estimators. From the general expression for estimators (A.1), taking into account the correlation properties established in Remark 3, the noise filter and the one-stage noise predictors satisfy (A.4)–(A.6).

Now, we will prove the local filtering algorithm by several steps.

(I) Derivation of the Filter (8), One-Stage State Predictor (9), and Their Error Covariance Matrices (10) and (11), Respectively. From (A.1), by introducing suitable notation for the filter gain, expression (8) for the filter is obvious, and from this relation and the OPL, we obtain (10) for the filtering error covariance matrix.

From (A.2) and (A.5), expression (9) for the state predictor, with the coefficient matrix satisfying (6), is immediately obtained. This expression, together with state equation (1), leads to (11) for the prediction error covariance matrix, where the state correlation matrix is given by (5) and the remaining term clearly satisfies (12).

(II) Derivation of (13). From (8) and (9), the recursive expression (A.7) for the state predictor is obtained, where, from (1), the matrix involved clearly verifies (15).

Using relation (A.7) and again (1), we obtain that the correlation between the prediction error and the noise satisfies (14). This correlation property allows us to easily obtain expression (13). Indeed, by the OPL, the required expectation can be rewritten in terms of the prediction error and, from (2), expression (13) is obtained.

(III) Derivation of Innovation (16) and Its Covariance Matrix (17). From (A.3) and (A.6), it is clear that the innovation is given by (16), with the coefficient matrix satisfying (7). Next, expression (17) for the innovation covariance matrix is derived. From the OPL and (2), the covariance is expanded in (A.8); the expectations involved are then computed (i) from the OPL, (2), and the conditional expectation properties and (ii) using (16) for the innovation together with (2). Substituting these expectations into (A.8), we easily obtain (17), and the proof is completed.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This research is supported by Ministerio de Economía y Competitividad (Grant no. MTM2014-52291-P and FPU programme).