Abstract

The least-squares quadratic estimation problem of signals from observations coming from multiple sensors is addressed when there is a nonzero probability that each observation does not contain the signal to be estimated. We assume that, at each sensor, the uncertainty about the signal being present or missing in the observation is modelled by correlated Bernoulli random variables, whose probabilities are not necessarily the same for all the sensors. A recursive algorithm is derived without requiring knowledge of the signal state-space model, but using only the moments (up to the fourth-order ones) of the signal and observation noise, the uncertainty probabilities, and the correlation between the variables modelling the uncertainty. The estimators require the autocovariance and cross-covariance functions of the signal and its second-order powers, expressed in a semidegenerate kernel form. The recursive quadratic filtering algorithm is derived from a linear estimation algorithm for a suitably defined augmented system.

1. Introduction

In many real systems the signal to be estimated can be randomly missing in the observations due, for example, to intermittent failures in the observation mechanism, fading phenomena in propagation channels, target tracking, accidental loss of some measurements, or data inaccessibility during certain times. Usually, these situations are characterized by including in the observation equation not only an additive noise, but also a multiplicative noise consisting of a sequence of Bernoulli random variables taking the value one if the observation is state plus noise, or the value zero if it is only noise (uncertain observations). Since these models are appropriate in many practical situations with random failures in the transmission, the estimation problem in systems with uncertain observations has been widely studied in the literature under different hypotheses and approaches (see e.g., [1, 2] and references therein).

On the other hand, in some practical situations the state-space model of the signal is not available and another type of information must be processed for the estimation. In recent years, the estimation problem from uncertain observations has been investigated using covariance information, and algorithms with a simpler structure than those obtained when the state-space model is known have been derived (see, e.g., [3]).

Recently, the least-squares linear estimation problem using uncertain observations transmitted by multiple sensors, whose statistical properties are assumed not to be the same, has been studied by several authors under different approaches and hypotheses on the processes (see, e.g., [4, 5] for a state-space approach and [6, 7] for a covariance approach).

In this paper, using covariance information, recursive algorithms for the least-squares quadratic filtering problem from correlated uncertain observations coming from multiple sensors with different uncertainty characteristics are proposed. This paper extends the results in [6] in two directions: on the one hand, correlation at times 𝑘 and 𝑘+𝑟 between the random variables modelling the uncertainty in the observations is considered, and, on the other, the quadratic estimation problem is addressed. The quadratic estimation problem is also new with respect to the results in Hermoso-Carazo et al. [7], which also concern observations with uncertainty modelled by Bernoulli variables correlated at times 𝑘 and 𝑘+𝑟 with arbitrary 𝑟, but coming from a single sensor. Furthermore, the current paper differs from [5] in the correlation model considered and in the information used to derive the algorithms (state-space model in [5] and covariance information in the current paper).

To address the quadratic estimation problem, augmented signal and observation vectors are introduced by assembling the original vectors with their second-order powers defined by the Kronecker product, thus obtaining a new augmented system and reducing the quadratic estimation problem in the original system to the linear estimation problem in the augmented system. By using an innovation approach, the linear estimator of the augmented signal based on the augmented observations is obtained, thus providing the required quadratic estimator.

The performance of the proposed filtering algorithms is illustrated by a numerical simulation example where the state of a first-order autoregressive model is estimated from uncertain observations coming from two sensors with different uncertainty characteristics correlated at times 𝑘 and 𝑘+𝑟, considering several values of 𝑟. The linear and quadratic estimation error covariance matrices are compared, showing the superiority of the quadratic estimators over the linear ones.

2. Observation Model and Hypotheses

The problem at hand is to determine the least-squares (LS) quadratic estimator of an $n$-dimensional discrete signal, $z_k$, from noisy measurements coming from multiple sensors which, with different probabilities, may not contain the signal. In this section, we present the observation model and the hypotheses about the signal and noise processes involved.

Consider $m$ scalar sensors whose measurements at each sampling time $k$, denoted by $y_k^i$, may either contain the signal to be estimated, $z_k$, or be noise only, $v_k^i$; the uncertainty about the signal being present or missing in the observation is modelled by Bernoulli variables, $\gamma_k^i$. The observation model is thus described as follows:
$$y_k^i = \gamma_k^i H_k^i z_k + v_k^i, \quad k \ge 1, \; i = 1, \ldots, m. \tag{2.1}$$
If $\gamma_k^i = 1$, then $y_k^i = H_k^i z_k + v_k^i$ and the measurement coming from the $i$th sensor contains the signal; otherwise, if $\gamma_k^i = 0$, then $y_k^i = v_k^i$, which means that such a measurement is noise only. Therefore, the variables $\{\gamma_k^i;\ k \ge 1\}$ model the uncertainty of the observations coming from the $i$th sensor.
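For concreteness, the following minimal NumPy sketch simulates model (2.1) for scalar sensors. All numerical values are illustrative and, for simplicity, the Bernoulli variables are drawn independently over time here; the correlated construction actually used in this paper is described in Section 6.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_observations(z, H, p, R, rng):
    """Simulate the uncertain observations (2.1) for m scalar sensors.

    z : (T,) signal samples; H : (m,) observation gains;
    p : (m,) probabilities that each measurement contains the signal;
    R : (m,) additive-noise variances.
    """
    T, m = z.shape[0], H.shape[0]
    gamma = (rng.random((T, m)) < p).astype(float)   # Bernoulli uncertainty
    v = rng.normal(0.0, np.sqrt(R), size=(T, m))     # additive sensor noises
    y = gamma * (z[:, None] * H) + v                 # y_k^i = gamma_k^i H_k^i z_k + v_k^i
    return y, gamma

# hypothetical example: two sensors with different uncertainty probabilities
z = rng.normal(size=100)
y, gamma = simulate_observations(z, np.ones(2), np.array([0.9, 0.75]), np.array([0.5, 0.9]), rng)
```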

To simplify the notation, observation equation (2.1) is rewritten in compact form as follows:
$$y_k = \Upsilon_k H_k z_k + v_k, \quad k \ge 1, \tag{2.2}$$
where $y_k = (y_k^1, \ldots, y_k^m)^T$, $H_k = (H_k^{1T}, \ldots, H_k^{mT})^T$, $\Upsilon_k = \mathrm{Diag}(\gamma_k^1, \ldots, \gamma_k^m)$, and $v_k = (v_k^1, \ldots, v_k^m)^T$.

It is known that if the signal $z_k$ and the observations $y_1, \ldots, y_k$ have finite second-order moments, the LS linear filter of $z_k$ is the orthogonal projection of $z_k$ onto the space of $n$-dimensional random variables obtained as linear transformations of $y_1, \ldots, y_k$. So, by defining the random vectors $y_i^{[2]} = y_i \otimes y_i$ ($\otimes$ denotes the Kronecker product [8]) and assuming $E[y_i^{[2]T} y_i^{[2]}] < \infty$, the LS quadratic estimator of $z_k$ based on the observations up to the sampling time $k$ is the orthogonal projection of $z_k$ onto the space of $n$-dimensional linear transformations of $y_1, \ldots, y_k$ and their second-order powers $y_1^{[2]}, \ldots, y_k^{[2]}$. To guarantee the existence of the second-order moments of the vectors $y_i^{[2]}$, the pertinent assumptions about the processes in (2.1) are now stated.
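The second-order powers are ordinary Kronecker squares, so they are immediate to form numerically; a small sketch:

```python
import numpy as np

def second_order_power(y):
    """Second-order power y^[2] = y (Kronecker) y of a measurement vector."""
    return np.kron(y, y)

y = np.array([1.0, -2.0])            # an m = 2 observation
y2 = second_order_power(y)           # length m^2 = 4: [y1*y1, y1*y2, y2*y1, y2*y2]
augmented = np.concatenate([y, y2])  # stacked vector used in Section 3
```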

2.1. Hypotheses about the Model
(H1) The $n \times 1$ signal process $\{z_k;\ k \ge 1\}$ has zero mean, and its autocovariance function, $K^z_{k,s}$, as well as the autocovariance function of its second-order powers, $K^{z^{[2]}}_{k,s}$, is expressed in a semidegenerate kernel form:
$$K^z_{k,s} = A_k B_s^T, \quad s \le k; \qquad K^{z^{[2]}}_{k,s} = a_k b_s^T, \quad s \le k, \tag{2.3}$$
where the $n \times M$ matrix functions $A$, $B$ and the $n^2 \times L$ matrix functions $a$, $b$ are known. Moreover, it is assumed that the cross-covariance function of the signal and its second-order powers, $K^{zz^{[2]}}_{k,s}$, can also be expressed as
$$K^{zz^{[2]}}_{k,s} = \begin{cases} \alpha_k \beta_s^T, & s \le k, \\ \varepsilon_k \delta_s^T, & k \le s, \end{cases} \tag{2.4}$$
where $\alpha$, $\beta$, $\varepsilon$, and $\delta$ are $n \times N$, $n^2 \times N$, $n \times P$, and $n^2 \times P$ known matrix functions, respectively.

(H2) For $i = 1, \ldots, m$, the sensor additive noises $\{v_k^i;\ k \ge 1\}$ are zero-mean white processes, and their moments, up to the fourth-order ones, are known; we will denote $R_k = \mathrm{Cov}[v_k]$, $R_k^{(3)} = \mathrm{Cov}[v_k, v_k^{[2]}]$, and $R_k^{(4)} = \mathrm{Cov}[v_k^{[2]}]$.

(H3) For $i = 1, \ldots, m$, the noises $\{\gamma_k^i;\ k \ge 1\}$ are sequences of Bernoulli random variables with $P[\gamma_k^i = 1] = p_k^i$; the variables $\gamma_k^i$ and $\gamma_s^i$ are independent for $|k - s| \ge 2$, and $\mathrm{Cov}[\gamma_k^i, \gamma_{k+1}^i]$ are assumed to be known.

(H4) The signal process $\{z_k;\ k \ge 1\}$ and the noise processes $\{\gamma_k;\ k \ge 1\}$ and $\{v_k;\ k \ge 1\}$, where $\gamma_k = (\gamma_k^1, \ldots, \gamma_k^m)^T$, are mutually independent.
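As an illustration of the factorized structure required in (H1), the following sketch evaluates a scalar autocovariance of semidegenerate kernel type; the numerical factors anticipate the example of Section 6 and are otherwise arbitrary.

```python
import numpy as np

# semidegenerate kernel factors for a scalar stationary signal (see Section 6);
# K_z(k, s) = A(k) * B(s) for s <= k
A = lambda k: 1.025641 * 0.95**k
B = lambda s: 0.95**(-s)

def K_z(k, s):
    """Signal autocovariance in semidegenerate kernel form (hypothesis (H1))."""
    return A(max(k, s)) * B(min(k, s))   # symmetric extension for s > k

assert np.isclose(K_z(10, 7), 1.025641 * 0.95**3)
```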

3. Augmented System

Given observation model (2.1) with hypotheses (H1)–(H4), the problem is to find the LS quadratic filter of the signal $z_k$, which will be denoted by $\hat{z}^q_{k/k}$. The technique used to obtain this estimator consists of augmenting the signal and observation vectors by assembling the original vectors and their second-order powers, $\mathcal{Z}_k = (z_k^T, z_k^{[2]T})^T$ and $\mathcal{Y}_k = (y_k^T, y_k^{[2]T})^T$, and deriving $\hat{z}^q_{k/k}$ as the vector constituted by the first $n$ entries of the LS linear filter of $\mathcal{Z}_k$ based on $\mathcal{Y}_1, \ldots, \mathcal{Y}_k$.

To obtain this linear estimator, the first- and second-order statistical properties of the augmented vectors 𝒵𝑘 and 𝒴𝑘 are now analyzed.

3.1. Properties of the Augmented Vectors

By using the Kronecker product properties and denoting $D^\gamma_k = \mathrm{Diag}(\Upsilon_k, \Upsilon_k^{[2]})$, $\mathcal{H}_k = \mathrm{Diag}(H_k, H_k^{[2]})$, and
$$\mathcal{V}_k = \begin{pmatrix} v_k \\ (I_{m^2} + K_{m^2})(\Upsilon_k H_k z_k \otimes v_k) + v_k^{[2]} \end{pmatrix} \tag{3.1}$$
($I_{m^2}$ is the $m^2 \times m^2$ identity matrix and $K_{m^2}$ is the $m^2 \times m^2$ commutation matrix [8]), the following model with uncertain observations is obtained:
$$\mathcal{Y}_k = D^\gamma_k \mathcal{H}_k \mathcal{Z}_k + \mathcal{V}_k, \quad k \ge 1. \tag{3.2}$$
It should be noted that the signal, $\mathcal{Z}_k$, and the noise, $\mathcal{V}_k$, in this new model have nonzero means. Nevertheless, this handicap can be overcome by considering the centered augmented vectors $Z_k = \mathcal{Z}_k - E[\mathcal{Z}_k]$ and $Y_k = \mathcal{Y}_k - E[\mathcal{Y}_k]$ which, taking into account that $E[D^\gamma_k \mathcal{H}_k \mathcal{Z}_k] = E[D^\gamma_k]\mathcal{H}_k E[\mathcal{Z}_k]$, satisfy
$$Y_k = D^\gamma_k \mathcal{H}_k Z_k + V_k, \quad k \ge 1, \tag{3.3}$$
where
$$V_k = \begin{pmatrix} v_k \\ (I_{m^2} + K_{m^2})(\Upsilon_k H_k z_k \otimes v_k) + v_k^{[2]} - \mathrm{vec}(R_k) \end{pmatrix} + (D^\gamma_k - D^p_k)\mathcal{H}_k E[\mathcal{Z}_k], \tag{3.4}$$
with $D^p_k = E[D^\gamma_k]$ and $\mathrm{vec}$ the operator that vectorizes a matrix [8].
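The only nonstandard object in (3.1) is the commutation matrix $K_{m^2}$; a minimal sketch of its construction and defining property:

```python
import numpy as np

def commutation_matrix(m):
    """K_{m^2}: K @ vec(A) = vec(A.T) for any m x m matrix A (column-major vec)."""
    K = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            K[i * m + j, j * m + i] = 1.0
    return K

m = 2
K = commutation_matrix(m)
A = np.arange(m * m, dtype=float).reshape(m, m)
assert np.allclose(K @ A.flatten(order='F'), A.T.flatten(order='F'))
```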

Note that the LS linear estimator of $\mathcal{Z}_k$ based on $\mathcal{Y}_1, \ldots, \mathcal{Y}_k$ is obtained from the LS linear estimator of $Z_k$ based on $Y_1, \ldots, Y_k$ by just adding the mean vector $E[\mathcal{Z}_k] = (0^T_{n \times 1}, (\mathrm{vec}(A_k B_k^T))^T)^T$. Hence, since the first $n$ components of $E[\mathcal{Z}_k]$ are zero, the required quadratic estimator $\hat{z}^q_{k/k}$ is just the vector constituted by the first $n$ entries of the LS linear filter of $Z_k$. Henceforth, these centered vectors will be referred to as the augmented signal and observation vectors, respectively.

The signal and noise processes $\{Z_k;\ k \ge 1\}$ and $\{V_k;\ k \ge 1\}$ involved in model (3.3) are zero mean. In the following propositions, the second-order statistical properties of these processes are established.

Proposition 3.1. If the signal process $\{z_k;\ k \ge 1\}$ satisfies (H1), the autocovariance function of the augmented signal process $\{Z_k;\ k \ge 1\}$ can be expressed in a semidegenerate kernel form, namely,
$$K^Z_{k,s} = E[Z_k Z_s^T] = \mathcal{A}_k \mathcal{B}_s^T, \quad s \le k, \tag{3.5}$$
where
$$\mathcal{A}_k = \begin{pmatrix} A_k & \alpha_k & 0_{n \times P} & 0_{n \times L} \\ 0_{n^2 \times M} & 0_{n^2 \times N} & \delta_k & a_k \end{pmatrix}, \qquad \mathcal{B}_k = \begin{pmatrix} B_k & 0_{n \times N} & \varepsilon_k & 0_{n \times L} \\ 0_{n^2 \times M} & \beta_k & 0_{n^2 \times P} & b_k \end{pmatrix}. \tag{3.6}$$

Proof. It is immediate from hypothesis (H1) on the covariance functions of the signal and its second-order powers.
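A sketch of how the factors (3.6) can be assembled with NumPy, assuming the individual factor functions return arrays with the dimensions stated in (H1):

```python
import numpy as np

def augmented_factors(Ak, ak, alphak, betak, epsk, deltak, Bk, bk):
    """Assemble the factors (3.6) of K_Z from the factors in (H1).

    Ak, Bk: (n, M); alphak: (n, N); betak: (n^2, N);
    epsk: (n, P); deltak: (n^2, P); ak, bk: (n^2, L).
    """
    n, M = Ak.shape
    n2, L = ak.shape
    N, P = alphak.shape[1], epsk.shape[1]
    calA = np.block([[Ak, alphak, np.zeros((n, P)), np.zeros((n, L))],
                     [np.zeros((n2, M)), np.zeros((n2, N)), deltak, ak]])
    calB = np.block([[Bk, np.zeros((n, N)), epsk, np.zeros((n, L))],
                     [np.zeros((n2, M)), betak, np.zeros((n2, P)), bk]])
    return calA, calB
```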

Proposition 3.2. Under (H1)–(H4), the noise $\{V_k;\ k \ge 1\}$ is a sequence of random vectors with covariance matrices $E[V_k V_s^T] = R^V_{k,s}$ given by
$$R^V_{k,s} = \begin{cases} \mathcal{R}_k + \mathrm{Cov}[\Gamma_k] \circ (\mathcal{H}_k E[\mathcal{Z}_k] E[\mathcal{Z}_k^T] \mathcal{H}_k^T), & s = k, \\ \mathrm{Cov}[\Gamma_k, \Gamma_{k+1}] \circ (\mathcal{H}_k E[\mathcal{Z}_k] E[\mathcal{Z}_{k+1}^T] \mathcal{H}_{k+1}^T), & s = k + 1, \\ 0, & |k - s| \ne 0, 1, \end{cases} \tag{3.7}$$
where $\circ$ denotes the Hadamard product, $\Gamma_k = (\gamma_k^T, \gamma_k^{[2]T})^T$, and
$$\mathcal{R}_k = \begin{pmatrix} R_k & R_k^{(3)} \\ R_k^{(3)T} & R_k^{22} \end{pmatrix} \tag{3.8}$$
with
$$R_k^{22} = (I_{m^2} + K_{m^2})\left[\left(E[\gamma_k \gamma_k^T] \circ (H_k A_k B_k^T H_k^T)\right) \otimes R_k\right](I_{m^2} + K_{m^2}) + R_k^{(4)}. \tag{3.9}$$
Moreover, $\{V_k;\ k \ge 1\}$ is uncorrelated with the processes $\{Z_k;\ k \ge 1\}$ and $\{D^\gamma_k \mathcal{H}_k Z_k;\ k \ge 1\}$.

Proof. It is obvious that $E[V_k] = 0$ for all $k \ge 1$. On the other hand, since $V_k = \mathcal{V}_k - E[\mathcal{V}_k] + (D^\gamma_k - D^p_k)\mathcal{H}_k E[\mathcal{Z}_k]$ and the processes $\{z_k;\ k \ge 1\}$, $\{v_k;\ k \ge 1\}$, and $\{\gamma_k;\ k \ge 1\}$ are mutually independent, it is easy to see that $E[(\mathcal{V}_k - E[\mathcal{V}_k])((D^\gamma_s - D^p_s)\mathcal{H}_s E[\mathcal{Z}_s])^T] = 0$ for all $k, s$, and hence
$$E[V_k V_s^T] = \mathrm{Cov}[\mathcal{V}_k, \mathcal{V}_s] + E\left[(D^\gamma_k - D^p_k)\mathcal{H}_k E[\mathcal{Z}_k]\left((D^\gamma_s - D^p_s)\mathcal{H}_s E[\mathcal{Z}_s]\right)^T\right]. \tag{3.10}$$
Firstly, we prove that
$$\mathrm{Cov}[\mathcal{V}_k, \mathcal{V}_s] = \begin{pmatrix} R^{11}_{k,s} & R^{12}_{k,s} \\ R^{12T}_{k,s} & R^{22}_{k,s} \end{pmatrix} = \begin{pmatrix} R_k & R_k^{(3)} \\ R_k^{(3)T} & R_k^{22} \end{pmatrix}\delta_{k,s}, \tag{3.11}$$
where $\delta$ denotes the Kronecker delta function.

Indeed, since $\{v_k;\ k \ge 1\}$ is a zero-mean white sequence with covariances $R_k$, it is clear that $R^{11}_{k,s} = R_k \delta_{k,s}$. Moreover, from the mutual independence, the Kronecker and Hadamard product properties lead to
$$R^{12}_{k,s} = \left((\Upsilon^p_s H_s E[z_s])^T \otimes R_k \delta_{k,s}\right)(I_{m^2} + K_{m^2}) + R_k^{(3)}\delta_{k,s} = R_k^{(3)}\delta_{k,s},$$
$$R^{22}_{k,s} = (I_{m^2} + K_{m^2})\left[\left(E[\gamma_k\gamma_s^T] \circ (H_k A_k B_s^T H_s^T)\right) \otimes R_k \delta_{k,s}\right](I_{m^2} + K_{m^2}) + R_k^{(4)}\delta_{k,s} = R_k^{22}\delta_{k,s}. \tag{3.12}$$

On the other hand,
$$E\left[(D^\gamma_k - D^p_k)\mathcal{H}_k E[\mathcal{Z}_k]\left((D^\gamma_s - D^p_s)\mathcal{H}_s E[\mathcal{Z}_s]\right)^T\right] = \mathrm{Cov}[\Gamma_k, \Gamma_s] \circ (\mathcal{H}_k E[\mathcal{Z}_k] E[\mathcal{Z}_s^T] \mathcal{H}_s^T), \tag{3.13}$$
and since $\mathrm{Cov}[\Gamma_k, \Gamma_s] = 0$ for $|k - s| \ge 2$, the covariance matrices $R^V_{k,s}$ are obtained.

The uncorrelation between $\{V_k;\ k \ge 1\}$ and the processes $\{Z_k;\ k \ge 1\}$ and $\{D^\gamma_k \mathcal{H}_k Z_k;\ k \ge 1\}$ is derived in a similar way, taking into account that $\{z_k;\ k \ge 1\}$, $\{v_k;\ k \ge 1\}$, and $\{\gamma_k;\ k \ge 1\}$ are mutually independent and using the Kronecker and Hadamard product properties.

4. Quadratic Filtering Algorithm

As indicated above, to obtain the LS quadratic estimator of the signal $z_k$ based on observations (2.1), we consider the LS linear estimator of the augmented signal $Z_k$ based on the augmented observations (3.3). As is known, the LS linear filter of $Z_k$ is the orthogonal projection of $Z_k$ onto $\mathcal{L}(Y_1, \ldots, Y_k)$, the linear space spanned by $\{Y_1, \ldots, Y_k\}$; so the Orthogonal Projection Lemma (OPL) states that the estimator $\hat{Z}_{k/k}$ is the only linear combination of $Y_1, \ldots, Y_k$ satisfying the orthogonality property
$$E[(Z_k - \hat{Z}_{k/k})Y_s^T] = 0, \quad s = 1, \ldots, k. \tag{4.1}$$

Since the observations are generally nonorthogonal vectors, we will use an innovation approach, which consists of transforming the observation process $\{Y_k;\ k \ge 1\}$ into an equivalent process of orthogonal vectors $\{\nu_k;\ k \ge 1\}$ (the innovation process), equivalent in the sense that each set $\{\nu_1, \ldots, \nu_k\}$ spans the same linear subspace as $\{Y_1, \ldots, Y_k\}$; that is, $\mathcal{L}(\nu_1, \ldots, \nu_k) = \mathcal{L}(Y_1, \ldots, Y_k)$.

The innovation process is constructed by the Gram–Schmidt orthogonalization procedure, using an inductive reasoning. Starting with $\nu_1 = Y_1$, the projection of the next observation, $Y_2$, onto $\mathcal{L}(\nu_1)$ is given by $\hat{Y}_{2/1} = E[Y_2\nu_1^T](E[\nu_1\nu_1^T])^{-1}\nu_1$; then, the vector $\nu_2 = Y_2 - \hat{Y}_{2/1}$ is orthogonal to $\nu_1$, and clearly $\mathcal{L}(\nu_1, \nu_2) = \mathcal{L}(Y_1, Y_2)$. Let $\{\nu_1, \ldots, \nu_{k-1}\}$ be the set of orthogonal vectors satisfying $\mathcal{L}(\nu_1, \ldots, \nu_{k-1}) = \mathcal{L}(Y_1, \ldots, Y_{k-1})$; if now we have an additional observation $Y_k$, we project it onto $\mathcal{L}(\nu_1, \ldots, \nu_{k-1})$, and the orthogonality allows us to find the projection by separately projecting onto each of the previous orthogonal vectors; that is,
$$\hat{Y}_{k/k-1} = \sum_{j=1}^{k-1} E[Y_k\nu_j^T]\left(E[\nu_j\nu_j^T]\right)^{-1}\nu_j; \tag{4.2}$$
so the next vector, $\nu_k = Y_k - \hat{Y}_{k/k-1}$, is orthogonal to the previous ones and $\mathcal{L}(\nu_1, \ldots, \nu_k) = \mathcal{L}(Y_1, \ldots, Y_k)$.
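The following Monte Carlo sketch illustrates this construction, with the expectations in (4.2) replaced by sample averages over independent realizations; this is purely for illustration, since the algorithm of Section 4 computes these expectations exactly from the model statistics.

```python
import numpy as np

def innovations(Y):
    """Gram-Schmidt construction (4.2) with expectations estimated by averages.

    Y: array of shape (trials, T, dim) holding independent realizations of the
    observation process. Returns the innovation realizations, same shape.
    """
    trials, T, dim = Y.shape
    nu = np.zeros_like(Y)
    for k in range(T):
        proj = np.zeros((trials, dim))
        for j in range(k):
            C = np.einsum('ti,tj->ij', Y[:, k], nu[:, j]) / trials    # E[Y_k nu_j^T]
            Pi = np.einsum('ti,tj->ij', nu[:, j], nu[:, j]) / trials  # E[nu_j nu_j^T]
            proj += nu[:, j] @ np.linalg.solve(Pi, C.T)               # gain applied to nu_j
        nu[:, k] = Y[:, k] - proj
    return nu

# sample check: innovations at different times are (approximately) uncorrelated
rng = np.random.default_rng(1)
Y = np.cumsum(rng.normal(size=(20000, 5, 2)), axis=1)  # a correlated toy process
nu = innovations(Y)
cross = np.einsum('ti,tj->ij', nu[:, 0], nu[:, 3]) / Y.shape[0]
assert np.abs(cross).max() < 0.05
```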

Note that the projection $\hat{Y}_{k/k-1}$ is the part of the observation $Y_k$ that is determined by knowledge of $\{Y_1, \ldots, Y_{k-1}\}$; thus the remainder $\nu_k = Y_k - \hat{Y}_{k/k-1}$ can be regarded as the "new information" or the "innovation" provided by $Y_k$, and the process $\{\nu_k;\ k \ge 1\}$ as the innovation process associated with $\{Y_k;\ k \ge 1\}$. The causal and causally invertible linear relation existing between the observation and innovation processes makes the innovation process unique.

Next, taking into account that the innovations constitute a white process, we derive a general expression for the LS linear estimator of the augmented signal $Z_k$ based on $\{Y_1, \ldots, Y_L\}$, which will be denoted by $\hat{Z}_{k/L}$. Replacing $\{Y_1, \ldots, Y_L\}$ by the equivalent set of orthogonal vectors $\{\nu_1, \ldots, \nu_L\}$, the signal estimator is
$$\hat{Z}_{k/L} = \sum_{j=1}^{L} h_{k,j}\nu_j, \tag{4.3}$$
where the impulse-response function $h_{k,j}$, $j = 1, \ldots, L$, is calculated from the orthogonality property
$$E[(Z_k - \hat{Z}_{k/L})\nu_s^T] = 0, \quad s \le L, \tag{4.4}$$
which leads to the Wiener–Hopf equation
$$E[Z_k\nu_s^T] = \sum_{j=1}^{L} h_{k,j}E[\nu_j\nu_s^T], \quad s \le L. \tag{4.5}$$
Due to the whiteness of the innovation process, $E[\nu_j\nu_s^T] = 0$ for $j \ne s$, and the Wiener–Hopf equation reduces to
$$E[Z_k\nu_s^T] = h_{k,s}E[\nu_s\nu_s^T], \quad s \le L; \tag{4.6}$$
consequently,
$$h_{k,s} = E[Z_k\nu_s^T]\left(E[\nu_s\nu_s^T]\right)^{-1}, \quad s \le L, \tag{4.7}$$
and, therefore, the following general expression for the LS linear filter of the augmented signal is obtained:
$$\hat{Z}_{k/L} = \sum_{i=1}^{L} S_{k,i}\Pi_i^{-1}\nu_i, \tag{4.8}$$
where $S_{k,i} = E[Z_k\nu_i^T]$ and $\Pi_i = E[\nu_i\nu_i^T]$.

Using the properties of the processes involved in (3.3), as established in Propositions 3.1 and 3.2, and expression (4.8) for the filter, we derive a recursive algorithm for the linear filtering estimators, $\hat{Z}_{k/k}$, of the augmented signal $Z_k$. As indicated above, the first $n$ entries of these estimators provide the required quadratic filter of the original signal $z_k$.

Theorem 4.1. The quadratic filter, $\hat{z}^q_{k/k}$, of the original signal $z_k$ is given by
$$\hat{z}^q_{k/k} = \Theta \hat{Z}_{k/k}, \quad k \ge 1, \tag{4.9}$$
where $\Theta$ is the operator which extracts the first $n$ entries of $\hat{Z}_{k/k}$, the linear filter of the augmented signal $Z_k$, which is obtained by
$$\hat{Z}_{k/k} = \mathcal{A}_k O_k, \quad k \ge 1, \tag{4.10}$$
where the vectors $O_k$ are recursively calculated from
$$O_k = O_{k-1} + J_k\Pi_k^{-1}\nu_k, \quad k \ge 1; \qquad O_0 = 0. \tag{4.11}$$
The innovation, $\nu_k$, satisfies
$$\nu_k = Y_k - D^p_k\mathcal{H}_k\mathcal{A}_k O_{k-1} - \Xi_{k,k-1}\nu_{k-1}, \quad k \ge 2; \qquad \nu_1 = Y_1, \tag{4.12}$$
with
$$\Xi_{k,k-1} = \left[\mathrm{Cov}[\Gamma_k, \Gamma_{k-1}] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_{k-1}^T\mathcal{H}_{k-1}^T) + R^V_{k,k-1}\right]\Pi_{k-1}^{-1}, \quad k \ge 2, \tag{4.13}$$
and $\Pi_k$, the covariance matrix of the innovation, verifies
$$\begin{aligned} \Pi_k ={}& E[\Gamma_k\Gamma_k^T] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_k^T\mathcal{H}_k^T) - D^p_k\mathcal{H}_k\mathcal{A}_k r_{k-1}\mathcal{A}_k^T\mathcal{H}_k^T D^p_k - \Xi_{k,k-1}\Pi_{k-1}\Xi_{k,k-1}^T \\ & - D^p_k\mathcal{H}_k\mathcal{A}_k J_{k-1}\Xi_{k,k-1}^T - \Xi_{k,k-1}J_{k-1}^T\mathcal{A}_k^T\mathcal{H}_k^T D^p_k + R^V_{k,k}, \quad k \ge 2, \\ \Pi_1 ={}& E[\Gamma_1\Gamma_1^T] \circ (\mathcal{H}_1\mathcal{A}_1\mathcal{B}_1^T\mathcal{H}_1^T) + R^V_{1,1}. \end{aligned} \tag{4.14}$$
The matrix function $J$ is given by
$$J_k = \left[\mathcal{B}_k^T - r_{k-1}\mathcal{A}_k^T\right]\mathcal{H}_k^T D^p_k - J_{k-1}\Xi_{k,k-1}^T, \quad k \ge 2; \qquad J_1 = \mathcal{B}_1^T\mathcal{H}_1^T D^p_1, \tag{4.15}$$
where $r_k$ is recursively obtained from
$$r_k = r_{k-1} + J_k\Pi_k^{-1}J_k^T, \quad k \ge 1; \qquad r_0 = 0. \tag{4.16}$$

Proof. We start by obtaining an explicit formula for the innovation, $\nu_k = Y_k - \hat{Y}_{k/k-1}$, or, equivalently, for the one-stage predictor of $Y_k$, which, denoting $T_{k,i} = E[Y_k\nu_i^T]$, is given by
$$\hat{Y}_{k/k-1} = \sum_{i=1}^{k-1} T_{k,i}\Pi_i^{-1}\nu_i, \quad k \ge 2; \qquad \hat{Y}_{1/0} = 0. \tag{4.17}$$
Using the hypotheses on the model, it is deduced that
$$T_{k,i} = D^p_k\mathcal{H}_k S_{k,i}, \quad i < k - 1, \tag{4.18}$$
and hence
$$\hat{Y}_{k/k-1} = D^p_k\mathcal{H}_k\hat{Z}_{k/k-1} + \left[T_{k,k-1} - D^p_k\mathcal{H}_k S_{k,k-1}\right]\Pi_{k-1}^{-1}\nu_{k-1}. \tag{4.19}$$
Using again the hypotheses on the model, we obtain
$$T_{k,k-1} - D^p_k\mathcal{H}_k S_{k,k-1} = \mathrm{Cov}[\Gamma_k, \Gamma_{k-1}] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_{k-1}^T\mathcal{H}_{k-1}^T) + R^V_{k,k-1}; \tag{4.20}$$
consequently,
$$\nu_k = Y_k - D^p_k\mathcal{H}_k\hat{Z}_{k/k-1} - \Xi_{k,k-1}\nu_{k-1}, \quad k \ge 2; \qquad \nu_1 = Y_1, \tag{4.21}$$
with $\Xi_{k,k-1}$ given by expression (4.13).

Next, expression (4.10) for the filter $\hat{Z}_{k/k}$ is derived. For this purpose, taking into account expression (4.8), we obtain formulas to calculate the coefficients $S_{k,i} = E[Z_k\nu_i^T]$, $i \le k$. From the hypotheses on the model, replacing $\nu_i$ by its expression in (4.21), and using (4.8) for $\hat{Z}_{i/i-1}$, we have
$$S_{k,i} = \mathcal{A}_k\mathcal{B}_i^T\mathcal{H}_i^T D^p_i - \sum_{j=1}^{i-1} S_{k,j}\Pi_j^{-1}S_{i,j}^T\mathcal{H}_i^T D^p_i - S_{k,i-1}\Xi_{i,i-1}^T, \quad 2 \le i \le k; \qquad S_{k,1} = \mathcal{A}_k\mathcal{B}_1^T\mathcal{H}_1^T D^p_1, \tag{4.22}$$
or, equivalently,
$$S_{k,i} = \mathcal{A}_k J_i, \quad i \le k, \tag{4.23}$$
where $J$ is a function satisfying
$$J_i = \mathcal{B}_i^T\mathcal{H}_i^T D^p_i - \sum_{j=1}^{i-1} J_j\Pi_j^{-1}S_{i,j}^T\mathcal{H}_i^T D^p_i - J_{i-1}\Xi_{i,i-1}^T, \quad 2 \le i \le k; \qquad J_1 = \mathcal{B}_1^T\mathcal{H}_1^T D^p_1. \tag{4.24}$$
Then, from (4.8) and (4.23), expression (4.10) for the filter is deduced, where $O_k$, defined by
$$O_k = \sum_{i=1}^{k} J_i\Pi_i^{-1}\nu_i, \quad k \ge 1; \qquad O_0 = 0, \tag{4.25}$$
satisfies the recursive relation (4.11). Analogously, it is obtained that the one-stage predictor of the signal is given by $\hat{Z}_{k/k-1} = \mathcal{A}_k O_{k-1}$, which, substituted into (4.21), leads to formula (4.12) for the innovation.

Expression (4.15) for $J_k$ is derived by making $i = k$ in (4.24), using (4.23), and defining the function $r_k = E[O_k O_k^T] = \sum_{j=1}^{k} J_j\Pi_j^{-1}J_j^T$, $k \ge 1$; $r_0 = 0$. From this definition, the recursive relation (4.16) is also immediately derived.

Finally, we obtain expression (4.14) for the innovation covariance matrix; from the hypotheses on the model, expression (4.12), and the definition of $r_k$, the following equation is obtained:
$$\begin{aligned} \Pi_k ={}& E[\Gamma_k\Gamma_k^T] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_k^T\mathcal{H}_k^T) + R^V_{k,k} - D^p_k\mathcal{H}_k\mathcal{A}_k r_{k-1}\mathcal{A}_k^T\mathcal{H}_k^T D^p_k - \Xi_{k,k-1}\Pi_{k-1}\Xi_{k,k-1}^T \\ & - D^p_k\mathcal{H}_k\mathcal{A}_k E[O_{k-1}\nu_{k-1}^T]\Xi_{k,k-1}^T - \Xi_{k,k-1}E[\nu_{k-1}O_{k-1}^T]\mathcal{A}_k^T\mathcal{H}_k^T D^p_k, \quad k \ge 2. \end{aligned} \tag{4.26}$$
So, expression (4.14) for $\Pi_k$ is deduced taking into account that $E[O_{k-1}\nu_{k-1}^T] = J_{k-1}$, which follows from (4.11) using that the vector $O_{k-2}$ is orthogonal to $\nu_{k-1}$.

To conclude, as a measure of the estimation accuracy, we calculate the filtering error covariance matrices $\Sigma_{k/k} = E[Z_k Z_k^T] - E[\hat{Z}_{k/k}\hat{Z}_{k/k}^T]$, which are obtained by $\Sigma_{k/k} = \mathcal{A}_k\left[\mathcal{B}_k^T - r_k\mathcal{A}_k^T\right]$, $k \ge 1$.
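The recursions (4.10)–(4.16) translate almost line by line into code. The following NumPy sketch implements one pass of the filter under the stated hypotheses; the inputs (the factors calA, calB of (3.6), the matrices H and Dp, the uncertainty moments EGG and covG, and the noise covariances RV0, RV1 of Proposition 3.2) are assumed to be supplied by the user, and all names are illustrative.

```python
import numpy as np

def quadratic_filter(Y, calA, calB, H, Dp, EGG, covG, RV0, RV1, n):
    """Recursive quadratic filter of Theorem 4.1 (sketch, illustrative names).

    Y[k]            : centered augmented observation (3.3), k = 0..T-1
    calA[k], calB[k]: factors (3.6) of the augmented signal covariance
    H[k]            : augmented observation matrix (script H in the text)
    Dp[k]           : D^p_k = E[D^gamma_k]
    EGG[k], covG[k] : E[Gamma_k Gamma_k^T] and Cov[Gamma_k, Gamma_{k-1}]
    RV0[k], RV1[k]  : R^V_{k,k} and R^V_{k,k-1} (Proposition 3.2)
    Returns the quadratic estimates, i.e., the first n entries of A_k O_k.
    """
    T = len(Y)
    dA = calA[0].shape[1]
    O = np.zeros(dA)
    r = np.zeros((dA, dA))
    z_hat = []
    for k in range(T):
        DH = Dp[k] @ H[k]
        if k == 0:
            nu = Y[0]
            Pi = EGG[0] * (H[0] @ calA[0] @ calB[0].T @ H[0].T) + RV0[0]  # (4.14), '*' = Hadamard
            J = calB[0].T @ H[0].T @ Dp[0]                                # (4.15)
        else:
            Xi = (covG[k] * (H[k] @ calA[k] @ calB[k - 1].T @ H[k - 1].T)
                  + RV1[k]) @ np.linalg.inv(Pi_prev)                      # (4.13)
            nu = Y[k] - DH @ calA[k] @ O - Xi @ nu_prev                   # (4.12)
            J = (calB[k].T - r @ calA[k].T) @ H[k].T @ Dp[k] - J_prev @ Xi.T  # (4.15)
            Pi = (EGG[k] * (H[k] @ calA[k] @ calB[k].T @ H[k].T) + RV0[k]
                  - DH @ calA[k] @ r @ calA[k].T @ DH.T
                  - Xi @ Pi_prev @ Xi.T
                  - DH @ calA[k] @ J_prev @ Xi.T
                  - Xi @ J_prev.T @ calA[k].T @ DH.T)                     # (4.14)
        O = O + J @ np.linalg.solve(Pi, nu)                               # (4.11)
        r = r + J @ np.linalg.solve(Pi, J.T)                              # (4.16)
        z_hat.append((calA[k] @ O)[:n])                                   # (4.9)-(4.10)
        J_prev, Pi_prev, nu_prev = J, Pi, nu
    return np.array(z_hat)
```

The filtering error covariance of the preceding paragraph is available at no extra cost inside the loop as calA[k] @ (calB[k].T - r @ calA[k].T).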

5. Generalization to Correlation at Times 𝑘 and 𝑠, with |𝑘 − 𝑠| = 𝑟

The observation model considered in Section 2 assumes that the uncertainty is modelled by Bernoulli variables correlated at consecutive sampling times, but independent otherwise. In this section, such a model is generalized by assuming correlation in the uncertainty at times $k$ and $s$ differing by $r$ units of time. Specifically, hypothesis (H3) is replaced by the following one.

(H3') For $i = 1, \ldots, m$, the noises $\{\gamma_k^i;\ k \ge 1\}$ are sequences of Bernoulli random variables with $P[\gamma_k^i = 1] = p_k^i$. For $i, j = 1, \ldots, m$, the variables $\gamma_k^i$ and $\gamma_s^j$ are assumed to be independent for $|k - s| \ne 0, r$, and $\mathrm{Cov}[\gamma_k^i, \gamma_s^j]$ are known for $|k - s| = r$.

This correlation model allows us to consider certain situations where the signal cannot be missing at 𝑟+1 consecutive observations.

Similar considerations to those made in Section 3 for the case of consecutive sampling times lead now to the following expression for the covariance matrices of the noise $\{V_k;\ k \ge 1\}$:
$$R^V_{k,s} = \begin{cases} \mathcal{R}_k + \mathrm{Cov}[\Gamma_k] \circ (\mathcal{H}_k E[\mathcal{Z}_k] E[\mathcal{Z}_k^T] \mathcal{H}_k^T), & s = k, \\ \mathrm{Cov}[\Gamma_k, \Gamma_s] \circ (\mathcal{H}_k E[\mathcal{Z}_k] E[\mathcal{Z}_s^T] \mathcal{H}_s^T), & |k - s| = r, \\ 0, & |k - s| \ne 0, r. \end{cases} \tag{5.1}$$
Then, performing the same steps as in the proof of Theorem 4.1, the following algorithm is deduced.

Theorem 5.1. The quadratic filter, $\hat{z}^q_{k/k}$, of the original signal $z_k$ is given by
$$\hat{z}^q_{k/k} = \Theta \hat{Z}_{k/k}, \quad k \ge 1, \tag{5.2}$$
where $\Theta$ is the operator which extracts the first $n$ entries of $\hat{Z}_{k/k}$, the linear filter of the augmented signal $Z_k$, which is obtained by
$$\hat{Z}_{k/k} = \mathcal{A}_k O_k, \quad k \ge 1, \tag{5.3}$$
where the vectors $O_k$ are recursively calculated from
$$O_k = O_{k-1} + J_k\Pi_k^{-1}\nu_k, \quad k \ge 1; \qquad O_0 = 0. \tag{5.4}$$
The innovation, $\nu_k$, satisfies
$$\begin{aligned} \nu_1 &= Y_1, \\ \nu_k &= Y_k - D^p_k\mathcal{H}_k\mathcal{A}_k O_{k-1}, \quad 2 \le k \le r, \\ \nu_k &= Y_k - D^p_k\mathcal{H}_k\mathcal{A}_k O_{k-1} - \Xi_{k,k-r}\Bigl[\nu_{k-r} + \sum_{i=k-r+1}^{k-1} T_{i,k-r}^T\Pi_i^{-1}\nu_i\Bigr], \quad k > r, \end{aligned} \tag{5.5}$$
with
$$\begin{aligned} \Xi_{k,k-r} &= \left[\mathrm{Cov}[\Gamma_k, \Gamma_{k-r}] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_{k-r}^T\mathcal{H}_{k-r}^T) + R^V_{k,k-r}\right]\Pi_{k-r}^{-1}, \\ T_{i,k-r} &= D^p_i\mathcal{H}_i S_{i,k-r} + \Xi_{i,i-r}T_{k-r,i-r}^T, \quad i = k - r + 1, \ldots, k - 1. \end{aligned} \tag{5.6}$$
The covariance matrix of the innovation, $\Pi_k$, verifies
$$\begin{aligned} \Pi_1 ={}& E[\Gamma_1\Gamma_1^T] \circ (\mathcal{H}_1\mathcal{A}_1\mathcal{B}_1^T\mathcal{H}_1^T) + R^V_{1,1}, \\ \Pi_k ={}& \mathrm{Cov}[\Gamma_k] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_k^T\mathcal{H}_k^T) + R^V_{k,k} + D^p_k\mathcal{H}_k\mathcal{A}_k J_k, \quad 2 \le k \le r, \\ \Pi_k ={}& \mathrm{Cov}[\Gamma_k] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_k^T\mathcal{H}_k^T) + R^V_{k,k} + D^p_k\mathcal{H}_k\mathcal{A}_k J_k \\ & - \Xi_{k,k-r}\Bigl[\Pi_{k-r} + \sum_{i=k-r+1}^{k-1} T_{i,k-r}^T\Pi_i^{-1}T_{i,k-r}\Bigr]\Xi_{k,k-r}^T \\ & - \left[D^p_k\mathcal{H}_k(\mathcal{B}_k - \mathcal{A}_k r_{k-1}) - J_k^T\right]\mathcal{A}_k^T\mathcal{H}_k^T D^p_k, \quad k > r. \end{aligned} \tag{5.7}$$
The matrix function $J$ is given by
$$\begin{aligned} J_1 &= \mathcal{B}_1^T\mathcal{H}_1^T D^p_1, \\ J_k &= \left[\mathcal{B}_k^T - r_{k-1}\mathcal{A}_k^T\right]\mathcal{H}_k^T D^p_k, \quad 2 \le k \le r, \\ J_k &= \left[\mathcal{B}_k^T - r_{k-1}\mathcal{A}_k^T\right]\mathcal{H}_k^T D^p_k - \Bigl[J_{k-r} + \sum_{j=k-r+1}^{k-1} J_j\Pi_j^{-1}T_{j,k-r}\Bigr]\Xi_{k,k-r}^T, \quad k > r, \end{aligned} \tag{5.8}$$
where $r_k$ is recursively obtained from
$$r_k = r_{k-1} + J_k\Pi_k^{-1}J_k^T, \quad k \ge 1; \qquad r_0 = 0. \tag{5.9}$$

Proof. It is analogous to that of Theorem 4.1, taking into account that, in this case, the one-stage predictor of $Y_k$ satisfies
$$\hat{Y}_{k/k-1} = D^p_k\mathcal{H}_k\hat{Z}_{k/k-1} + \sum_{i=k-r}^{k-1}\left[T_{k,i} - D^p_k\mathcal{H}_k S_{k,i}\right]\Pi_i^{-1}\nu_i, \tag{5.10}$$
and, from the model hypotheses,
$$T_{k,k-r} - D^p_k\mathcal{H}_k S_{k,k-r} = \mathrm{Cov}[\Gamma_k, \Gamma_{k-r}] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_{k-r}^T\mathcal{H}_{k-r}^T) + R^V_{k,k-r}, \tag{5.11}$$
and, for $i > k - r$,
$$T_{k,i} - D^p_k\mathcal{H}_k S_{k,i} = \left[\mathrm{Cov}[\Gamma_k, \Gamma_{k-r}] \circ (\mathcal{H}_k\mathcal{A}_k\mathcal{B}_{k-r}^T\mathcal{H}_{k-r}^T) + R^V_{k,k-r}\right]\Pi_{k-r}^{-1}T_{i,k-r}^T. \tag{5.12}$$
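Structurally, the only change with respect to Theorem 4.1 is that the innovation (5.5) involves the last $r$ stored innovations rather than only the last one. A minimal sketch of this update, assuming the matrices of (5.6) have already been computed (all names illustrative):

```python
import numpy as np

def innovation_update(Yk, DHA_k, O_prev, Xi, T_list, Pi_list, nu_hist, k, r):
    """Innovation (5.5), sketch with illustrative names.

    nu_hist : list [nu_{k-r}, ..., nu_{k-1}] of the last r innovations;
    T_list, Pi_list : matrices T_{i,k-r} of (5.6) and covariances Pi_i,
    i = k-r+1, ..., k-1, assumed precomputed; DHA_k = D^p_k H_k A_k.
    """
    pred = DHA_k @ O_prev                              # D^p_k H_k A_k O_{k-1}
    if k > r:
        corr = nu_hist[0].copy()                       # nu_{k-r}
        for T_i, Pi_i, nu_i in zip(T_list, Pi_list, nu_hist[1:]):
            corr += T_i.T @ np.linalg.solve(Pi_i, nu_i)
        pred = pred + Xi @ corr
    return Yk - pred
```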

6. Numerical Simulation Example

To illustrate the application of the proposed filtering algorithms, a numerical simulation example is now presented. To check the effectiveness of the proposed quadratic filter, we ran a MATLAB program which, at each iteration, simulates the signal and the observed values and provides the linear and quadratic filtering estimates, as well as the corresponding error covariance matrices.

For the simulations, this program has been applied to a scalar signal $\{z_k;\ k \ge 1\}$ generated by the following first-order autoregressive model:
$$z_k = 0.95 z_{k-1} + w_{k-1}, \quad k \ge 1, \tag{6.1}$$
where the initial state is a zero-mean Gaussian variable with $\mathrm{Var}[z_0] = 1.025641$ and $\{w_k;\ k \ge 0\}$ is a zero-mean white Gaussian noise with $\mathrm{Var}[w_k] = 0.1$, so that the signal is stationary (these values are the ones consistent with the covariance functions below).

The autocovariance functions of the signal and of its second-order powers are given in a semidegenerate kernel form, specifically,
$$K^z_{k,s} = 1.025641 \times 0.95^{k-s}, \qquad K^{z^2}_{k,s} = 2.1038795 \times 0.95^{2(k-s)}, \quad s \le k, \tag{6.2}$$

and the cross-covariance function of the signal and its second-order powers is $K^{zz^2}_{k,s} = 0$ for all $s, k$ (the third-order moments of a zero-mean Gaussian variable vanish). According to hypothesis (H1), the functions constituting these covariance functions can be defined as follows:
$$A_k = 1.025641 \times 0.95^k, \quad B_k = 0.95^{-k}, \quad a_k = 2.1038795 \times 0.95^{2k}, \quad b_k = 0.95^{-2k}, \quad \alpha_k = \beta_k = \varepsilon_k = \delta_k = 0. \tag{6.3}$$
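A quick numerical consistency check (sketch) of the factors (6.3): for the stationary AR(1) model (6.1) with noise variance 0.1, the signal variance is $0.1/(1 - 0.95^2) = 1.025641$ and, for zero-mean jointly Gaussian variables, $\mathrm{Cov}[z_k^2, z_s^2] = 2\,\mathrm{Cov}[z_k, z_s]^2$:

```python
import numpy as np

# semidegenerate kernel factors (6.3)
A = lambda k: 1.025641 * 0.95**k
B = lambda k: 0.95**(-k)
a = lambda k: 2.1038795 * 0.95**(2 * k)
b = lambda k: 0.95**(-2 * k)

var_w = 0.1
var_z = var_w / (1.0 - 0.95**2)                           # stationary variance = 1.025641
assert np.isclose(A(7) * B(3), var_z * 0.95**4)           # K_z(7, 3)
assert np.isclose(a(7) * b(3), 2 * (var_z * 0.95**4)**2)  # K_{z^2}(7, 3) = 2 K_z(7, 3)^2
```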

Consider two sensors whose measurements, according to our theoretical study, are perturbed by sequences of Bernoulli random variables $\{\gamma_k^{(i)};\ k \ge 1\}$, $i = 1, 2$, and by additive white noises $\{v_k^{(i)};\ k \ge 1\}$, $i = 1, 2$; that is,
$$y_k^i = \gamma_k^{(i)} z_k + v_k^{(i)}, \quad k \ge 1, \; i = 1, 2. \tag{6.4}$$

Assume that the additive noises $\{v_k^{(1)};\ k \ge 1\}$ and $\{v_k^{(2)};\ k \ge 1\}$ are mutually independent sequences of independent, zero-mean random variables with known discrete probability distributions, given by (6.5) and (6.6), respectively.

Now, in accordance with the proposed uncertain observation model, we assume that the uncertainty at any time $k$ is correlated with the uncertainty at time $s$ only if $|k - s| = r$, and independent otherwise.

To model the uncertainty in this way, we can consider two independent sequences of independent Bernoulli random variables, $\{\theta_k^{(i)};\ k \ge 1\}$, $i = 1, 2$, and define $\gamma_k^{(i)} = 1 - \theta_k^{(i)}(1 - \theta_{k+r}^{(i)})$, $k \ge 1$. It is obvious that the variables $\gamma_k^{(i)}$ take the value zero if and only if $\theta_k^{(i)} = 1$ and $\theta_{k+r}^{(i)} = 0$; otherwise, $\gamma_k^{(i)} = 1$. Therefore, $\{\gamma_k^{(i)};\ k \ge 1\}$ are Bernoulli variables with $P[\gamma_k^{(i)} = 1] = 1 - P[\theta_k^{(i)} = 1]P[\theta_{k+r}^{(i)} = 0]$. Note that $\gamma_k^{(i)} = 0$ implies $\theta_{k+r}^{(i)} = 0$ and, consequently, $\gamma_{k+r}^{(i)} = 1$; this fact implies that if the signal is missing at time $k$, then the observation at time $k + r$ is guaranteed to contain the signal; therefore, the signal cannot be missing in $r + 1$ consecutive observations.

For the application, we have assumed that the variables $\theta_k^{(i)}$ in each sensor have the same distribution; that is, $P[\theta_k^{(i)} = 1] = \theta_i$, independent of $k$. So, in each sensor, the probability that the observation contains the signal, $p_i = P[\gamma_k^{(i)} = 1] = 1 - \theta_i(1 - \theta_i)$, is constant for all the observations.

Since $\{\theta_k^{(1)};\ k \ge 1\}$ is independent of $\{\theta_k^{(2)};\ k \ge 1\}$, the sequence $\{\gamma_k^{(1)};\ k \ge 1\}$ is also independent of $\{\gamma_k^{(2)};\ k \ge 1\}$ and hence, for all $k, s$, $\mathrm{Cov}[\gamma_k^{(i)}, \gamma_s^{(j)}] = 0$ for $i \ne j$. For fixed $i = 1, 2$, the variance of $\gamma_k^{(i)}$ is $\mathrm{Var}[\gamma_k^{(i)}] = p_i(1 - p_i)$, and the correlation between two different variables $\gamma_k^{(i)}$ and $\gamma_s^{(i)}$ is obtained as follows.

(I) For $|k - s| \ne 0, r$, the vectors $(\theta_k^{(i)}, \theta_{k+r}^{(i)})$ and $(\theta_s^{(i)}, \theta_{s+r}^{(i)})$ are independent; hence the variables $\gamma_k^{(i)} = 1 - \theta_k^{(i)}(1 - \theta_{k+r}^{(i)})$ and $\gamma_s^{(i)} = 1 - \theta_s^{(i)}(1 - \theta_{s+r}^{(i)})$ are also independent and, consequently, uncorrelated; that is, $\mathrm{Cov}[\gamma_k^{(i)}, \gamma_s^{(i)}] = E[\gamma_k^{(i)}\gamma_s^{(i)}] - E[\gamma_k^{(i)}]E[\gamma_s^{(i)}] = 0$.

(II) For $|k - s| = r$, $E[\theta_k^{(i)}(1 - \theta_{k+r}^{(i)})\theta_s^{(i)}(1 - \theta_{s+r}^{(i)})] = 0$, since $k = s + r$ or $s = k + r$ and $\theta^{(i)}(1 - \theta^{(i)}) = 0$ for any Bernoulli variable. Then $E[\gamma_k^{(i)}\gamma_s^{(i)}] = 1 - E[\theta_k^{(i)}(1 - \theta_{k+r}^{(i)})] - E[\theta_s^{(i)}(1 - \theta_{s+r}^{(i)})] = 1 - 2\theta_i(1 - \theta_i) = 2p_i - 1$, which implies that $\mathrm{Cov}[\gamma_k^{(i)}, \gamma_s^{(i)}] = E[\gamma_k^{(i)}\gamma_s^{(i)}] - E[\gamma_k^{(i)}]E[\gamma_s^{(i)}] = 2p_i - 1 - p_i^2 = -(1 - p_i)^2$.

Summarizing, the correlation function of $\{\gamma_k^{(i)};\ k \ge 1\}$ is given by
$$\mathrm{Cov}[\gamma_k^{(i)}, \gamma_s^{(i)}] = \begin{cases} 0, & |k - s| \ne 0, r, \\ p_i(1 - p_i), & |k - s| = 0, \\ -(1 - p_i)^2, & |k - s| = r, \end{cases} \tag{6.7}$$

and, hence, the measurements described above are in accordance with the proposed correlation model.
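This construction is easy to verify empirically; the following sketch checks the three cases of (6.7) by simulation (parameter values are hypothetical):

```python
import numpy as np

# empirical check of (6.7): gamma_k = 1 - theta_k * (1 - theta_{k+r})
rng = np.random.default_rng(2)
theta_i, r, T, trials = 0.3, 4, 60, 200000
p_i = 1.0 - theta_i * (1.0 - theta_i)

theta = rng.random((trials, T + r)) < theta_i
gamma = 1 - theta[:, :T] * (1 - theta[:, r:T + r])

cov_0 = np.cov(gamma[:, 10], gamma[:, 10])[0, 1]      # lag 0:  p_i (1 - p_i)
cov_r = np.cov(gamma[:, 10], gamma[:, 10 + r])[0, 1]  # lag r:  -(1 - p_i)^2
cov_1 = np.cov(gamma[:, 10], gamma[:, 11])[0, 1]      # other lags: ~ 0
assert np.isclose(cov_0, p_i * (1 - p_i), atol=5e-3)
assert np.isclose(cov_r, -(1 - p_i) ** 2, atol=5e-3)
assert np.isclose(cov_1, 0.0, atol=5e-3)
```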

To analyze the performance of the proposed estimators, the linear and quadratic filtering error variances have been calculated for different values of $r$ and also for different $\theta_1$ and $\theta_2$, which provide different values of the probabilities $p_1$ and $p_2$. Since $p_i$ is the same if the value $1 - \theta_i$ is considered instead of $\theta_i$, only the case $\theta_i \le 0.5$ is examined here (note that, in such a case, $p_i$ is a decreasing function of $\theta_i$); more specifically, the values $\theta_i = 0.1, 0.2, 0.3, 0.4, 0.5$ (which lead to $p_i = 0.91, 0.84, 0.79, 0.76, 0.75$, resp.) have been used.

First, considering $r = 4$, the linear and quadratic filtering error variances are calculated for the values $\theta_1 = \theta_2 = 0.1$; $\theta_1 = 0.2$ and $\theta_2 = 0.3$; and $\theta_1 = 0.4$ and $\theta_2 = 0.5$. Figure 1 shows the results obtained; for all the values of $\theta_i$, the error variances corresponding to the quadratic filter are always considerably smaller than those of the linear filter, thus confirming the superior estimation accuracy of the quadratic filter. This figure also shows that, as $\theta_1$ or $\theta_2$ increases (which means that the probability that the signal is present in the observations coming from the corresponding sensor decreases), the filtering error variances become greater and, hence, worse estimations are obtained.

Next, we compare the performance of the linear and quadratic filtering estimators for the values $\theta_i = 0.1, 0.2, 0.3, 0.4, 0.5$; since the linear and quadratic filtering error variances show insignificant variation from the 5th iteration onwards, only the error variances at a specific iteration are considered.

In Figure 2 the linear and quadratic filtering error variances at $k = 50$ are displayed versus $\theta_1$ (for constant values of $\theta_2$), and, in Figure 3, these variances are shown versus $\theta_2$ (for constant values of $\theta_1$). From these figures it can be seen that, as $\theta_1$ or $\theta_2$ decreases (and, consequently, the probability that the signal is not present in the observations coming from the corresponding sensor, $1 - p_i$, decreases), the filtering error variances become smaller and, hence, better estimations are obtained. Note that this improvement is more significant for small values of $\theta_1$ or $\theta_2$, that is, when the probability that the signal is present in the observations coming from one of the sensors is large. On the other hand, both figures show that, for all the values of $\theta_1$ and $\theta_2$, the error variances corresponding to the quadratic filter are always considerably smaller than those of the linear filter, confirming again the superiority of the quadratic filter over the linear one.

Finally, for $\theta_1 = \theta_2 = 0.5$ (the values that maximize the probability that the signal is not present in the observations coming from both sensors) and considering different values of $r$, specifically $r = 1, \ldots, 16$, the error variances at $k = 50$ for the linear and quadratic filters are displayed in Figure 4. From this figure it is deduced that the performance of the estimators improves for smaller values of $r$; hence, a greater distance between the correlated variables produces worse estimations (in the mean squared error sense). As expected, this figure also shows that the estimation accuracy of the quadratic filters is superior to that of the linear filters, and that the error variances show insignificant variation for the greater values of $r$.

7. Conclusion

A recursive quadratic filtering algorithm has been proposed for signal estimation from correlated uncertain observations coming from multiple sensors with different uncertainty characteristics. This is a realistic assumption in situations concerning sensor data transmitted over communication networks where, generally, multiple sensors with different properties are involved. The uncertainty in each sensor is modelled by a sequence of Bernoulli random variables which are correlated at times $k$ and $k + r$. A real application of such an observation model arises, for example, in signal transmission problems where a failure in one of the sensors at time $k$ is detected and the faulty sensor is replaced at time $k + r$, thus avoiding the possibility of the signal being missing in $r + 1$ consecutive observations.

Using covariance information, the algorithm is derived by applying the innovation technique to suitably defined augmented signal and observation vectors, and the LS quadratic estimator of the signal is obtained from the LS linear estimator of the augmented signal based on the augmented observations.

The performance of the proposed filtering algorithm is illustrated by a numerical simulation example where the state of a first-order autoregressive model is estimated from uncertain observations coming from two sensors with different uncertainty characteristics correlated at times 𝑘 and 𝑘+𝑟, considering several values of 𝑟.

Acknowledgment

This work was supported by the Ministerio de Educación y Ciencia (Grant no. MTM2008-05567) and the Junta de Andalucía (Grant no. P07-FQM-02701).