Research Article  Open Access
Recursive Linear Estimation for Discrete-Time Systems in the Presence of Different Multiplicative Observation Noises
Abstract
This paper describes the design of a least mean square error estimator for discrete-time systems in which the components of the state vector, in the measurement equation, are corrupted by different multiplicative noises in addition to the observation noise. We show how known results can be considered particular cases of the algorithm stated in this paper.
1. Introduction
It was back in 1960 when Kalman [1] introduced his well-known filter. Assuming that the dynamic system is described by a state-space model, Kalman considered the problem of optimum linear recursive estimation. Since then, much further research has been developed under different hypothesis frameworks on the system noises [2–5].
In all the studies mentioned above, the estimated signal (state vector) in the measurement equation is corrupted only by additive noise. Rajasekaran et al. [6] consider the problem of linear recursive estimation of stochastic signals in the presence of multiplicative noise in addition to measurement noise. When the multiplicative noise is a Bernoulli random variable, the system is called a system with uncertain observations. Hadidi and Schwartz [7] investigate the existence of recursive least-squares state estimators where the uncertainty about the observations is caused by a binary switching sequence, specified by a conditional probability distribution, which enters the observation equation. The proposed solution is revisited by Wang [8], who proposes new formulations for the optimal filter and the one-step predictor. The estimation problem for these systems has been treated extensively [9–11]. There have been other approaches, such as that of Zhang et al. [12], in which the authors consider the infinite horizon mixed $H_2/H_\infty$ control for discrete-time stochastic systems with state- and disturbance-dependent noise. In a recent study [13], the optimal $H_2$ filtering problems associated, respectively, with a possible delay of one sampling period, uncertain observations, and multiple packet dropouts are studied under a unified framework. In particular, Sahebsara et al. [13] propose the observation equation
$$y(k) = \xi(k)\,z(k) + \big(1 - \xi(k)\big)\,y(k-1),$$
where $z(k)$ is the $m$-dimensional real-valued measured output, $y(k)$ is the measurement received by the estimator to be designed, and $\xi(k)$ is a white binary distributed random variable with $P[\xi(k)=1] = \alpha$ and $P[\xi(k)=0] = 1-\alpha$, uncorrelated with the other random variables. The model introduced by Sahebsara et al. [13] describes packet dropouts in networked estimation: the latest measurement received is used when the current measurement is lost.
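A minimal simulation sketch of this packet-dropout mechanism may help fix ideas; the function and parameter names below are ours, chosen for illustration only, and the recursion follows the description of the model above.

```python
import numpy as np

def received_measurements(z, alpha, rng):
    """Simulate the packet-dropout observation model of Sahebsara et al. [13]:
        y(k) = xi(k) * z(k) + (1 - xi(k)) * y(k-1),
    with xi(k) ~ Bernoulli(alpha).  When the current packet is lost
    (xi(k) = 0), the estimator reuses the latest received measurement."""
    y = np.empty_like(z)
    y_prev = z[0]                      # assume the first packet arrives
    for k in range(len(z)):
        xi = rng.random() < alpha      # packet-arrival indicator xi(k)
        y[k] = z[k] if xi else y_prev
        y_prev = y[k]
    return y

rng = np.random.default_rng(0)
z = rng.normal(size=1000)              # true measurements z(k)
y = received_measurements(z, alpha=0.8, rng=rng)
```

With $\alpha = 1$ every packet arrives and $y \equiv z$; with $\alpha = 0$ the estimator keeps reusing the first measurement.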
Other authors, like Nakamori [14], focus their attention on the recursive estimation technique using covariance information in linear stationary discrete-time systems when uncertain observations are given.
We propose in this paper a design for a least mean square error (LMSE) estimator in discrete-time systems where the components of the state vector, in the measurement equation, are corrupted by different multiplicative noises in addition to the observation noise. The estimation problems treated include one-stage prediction and filtering.
The presented algorithm can be considered a general algorithm because, under particular specifications, it degenerates into known results such as those of Kalman [1], Rajasekaran et al. [6], Nahi [9], and Sánchez-González and García-Muñoz [11]. Note also that if the multiplicative noises are Bernoulli random variables, the resulting situation is not, properly speaking, a system with uncertain observations, because the components of the state can be present in the observation with different probabilities. Therefore, the presented algorithm solves the estimation problems in this new system specification with complete uncertainty about the signals.
2. Statement and Notation
We now introduce the symbols and definitions used throughout the paper. Consider a linear discrete-time dynamic system whose $n$-dimensional state vector $x(k)$ evolves according to the

State Equation:
$$x(k+1) = F(k+1,k)\,x(k) + w(k), \quad k \ge 0,$$

and whose $m$-dimensional observation vector $z(k)$ is given by the

Observation Equation:
$$z(k) = H(k)\,U(k)\,x(k) + v(k), \quad k \ge 1,$$

where $F(k+1,k)$ and $H(k)$ are known matrices with appropriate dimensions.

The usual and specific hypotheses on the probabilistic behavior of the random variables are introduced to formalize the model as follows:

(H.1) $x(0)$ is a centered random vector with variance-covariance matrix $P(0)$.

(H.2) $\{w(k);\ k \ge 0\}$ is a centered white noise sequence with $E[w(k)w(k)'] = Q(k)$.

(H.3) $U(k) = \mathrm{diag}(u_1(k),\dots,u_n(k))$ is a diagonal matrix, where each $\{u_i(k);\ k \ge 1\}$ is a scalar white sequence with nonzero mean $m_i(k)$ and variance $\sigma_i^2(k)$, $i = 1,\dots,n$. It is supposed that $u_i(k)$ and $u_j(k)$ are correlated at the same instant and independent at different instants. We write $\bar{U}(k) = E[U(k)] = \mathrm{diag}(m_1(k),\dots,m_n(k))$. The following matrix will be used later on: $M(k) = \big(E[u_i(k)u_j(k)]\big)_{i,j=1,\dots,n}$.

(H.4) $\{v(k);\ k \ge 1\}$ is a centered white noise sequence with variance $R(k)$.

(H.5) $\{w(k)\}$, $\{v(k)\}$, and $\{u_i(k)\}$, $i = 1,\dots,n$, are mutually independent.

(H.6) The sequences $\{w(k)\}$, $\{v(k)\}$, and $\{u_i(k)\}$ are independent of the initial state $x(0)$.
As can be observed, the components of the state vector in the observation equation are corrupted by different multiplicative noises in addition to the additive measurement noise.
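To make the model concrete, the following sketch simulates the state and observation equations for a hypothetical two-dimensional system. All numerical values ($F$, $H$, $Q$, $R$, and the means and variances of the $u_i(k)$) are illustrative choices of ours, not taken from the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n = 200, 2

# Hypothetical system matrices and noise parameters (illustrative only).
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])               # state transition F(k+1, k)
H = np.array([[1.0, 1.0]])               # observation matrix H(k)
Q = 0.1 * np.eye(n)                      # covariance of w(k)
R = np.array([[0.5]])                    # variance of v(k)
u_means = np.array([2.0, 3.0])           # nonzero means of u_1(k), u_2(k)
u_sigmas = np.array([0.4, 0.6])          # standard deviations of u_i(k)

x = rng.normal(scale=np.sqrt(0.5), size=n)       # centered initial state (H.1)
states, observations = [], []
for k in range(n_steps):
    U = np.diag(rng.normal(u_means, u_sigmas))   # U(k) = diag(u_1(k), ..., u_n(k))
    v = rng.normal(scale=np.sqrt(R[0, 0]), size=1)
    z = H @ U @ x + v                            # z(k) = H(k) U(k) x(k) + v(k)
    states.append(x)
    observations.append(z)
    w = rng.multivariate_normal(np.zeros(n), Q)  # additive state noise w(k)
    x = F @ x + w                                # x(k+1) = F(k+1, k) x(k) + w(k)
```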
Let $\hat{x}(k|s)$ denote the LMSE estimate of $x(k)$ given the observations $\{z(1),\dots,z(s)\}$, let $\tilde{x}(k|s) = x(k) - \hat{x}(k|s)$ denote the estimation error, and let $P(k|s) = E[\tilde{x}(k|s)\tilde{x}(k|s)']$ be the corresponding covariance matrix.
The LMSE linear filter and one-step ahead predictor of the state are presented in the next section.
3. Prediction and Filter Algorithm
Theorem 3.1. The one-step ahead predictor and filter are given by
$$\hat{x}(k+1|k) = F(k+1,k)\,\hat{x}(k|k), \quad k \ge 0, \qquad \hat{x}(0|0) = E[x(0)] = 0,$$
$$\hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\big[z(k) - H(k)\bar{U}(k)\hat{x}(k|k-1)\big], \quad k \ge 1.$$
The filter gain matrix verifies
$$K(k) = P(k|k-1)\,\bar{U}(k)\,H(k)'\,\Pi(k)^{-1},$$
where
$$\Pi(k) = H(k)\big[M(k)\circ\Gamma(k)\big]H(k)' + R(k) - H(k)\bar{U}(k)\big[\Gamma(k) - P(k|k-1)\big]\bar{U}(k)H(k)',$$
with $\circ$ denoting the Hadamard (entrywise) product, $\bar{U}(k) = E[U(k)]$ and $M(k) = \big(E[u_i(k)u_j(k)]\big)_{i,j}$ as in (H.3), and $\Gamma(k) = E[x(k)x(k)']$ computed recursively by
$$\Gamma(k+1) = F(k+1,k)\,\Gamma(k)\,F(k+1,k)' + Q(k), \quad \Gamma(0) = P(0).$$
The prediction and filter error covariance matrices satisfy
$$P(k+1|k) = F(k+1,k)\,P(k|k)\,F(k+1,k)' + Q(k), \qquad P(k|k) = P(k|k-1) - K(k)\,\Pi(k)\,K(k)', \quad P(0|0) = P(0).$$
Proof. By the state equation, it is easy to prove that the predictor $\hat{x}(k+1|k) = F(k+1,k)\hat{x}(k|k)$ satisfies the orthogonal projection lemma (OPL) [15]. At the initial instant, the estimate of $x(0)$ is its mean, so that $\hat{x}(0|0) = E[x(0)] = 0$.

As a consequence of the orthogonal projection theorem [15], the state filter can be written as a function of the one-step ahead predictor as
$$\hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\,\nu(k),$$
where $\nu(k) = z(k) - \hat{z}(k|k-1)$ is the innovation process. Its expression is obtained below.

Since $\hat{z}(k|k-1)$ is the orthogonal projection of $z(k)$ onto the subspace generated by the observations $\{z(1),\dots,z(k-1)\}$, we know that it is the only element of that subspace verifying
$$E\big[\big(z(k) - \hat{z}(k|k-1)\big)z(s)'\big] = 0, \quad s = 1,\dots,k-1.$$
Then, by the observation equation and hypotheses (H.3)–(H.6), it can be seen that $\hat{z}(k|k-1) = H(k)\bar{U}(k)\hat{x}(k|k-1)$, and the innovation process for the problem we are solving is given by
$$\nu(k) = z(k) - H(k)\bar{U}(k)\hat{x}(k|k-1).$$
To obtain the gain matrix $K(k)$, we observe that, given that the OPL holds, $E[\tilde{x}(k|k)\nu(k)'] = 0$, and we have
$$K(k) = E[x(k)\nu(k)']\,\Pi(k)^{-1},$$
where $\Pi(k) = E[\nu(k)\nu(k)']$ is the covariance matrix of the innovation. From the observation equation and hypotheses (H.2)–(H.6), it can easily be checked that
$$E[x(k)\nu(k)'] = P(k|k-1)\,\bar{U}(k)\,H(k)',$$
and therefore
$$K(k) = P(k|k-1)\,\bar{U}(k)\,H(k)'\,\Pi(k)^{-1}.$$
To obtain the covariance matrix of the innovation process, it can be seen that
$$\Pi(k) = E[z(k)z(k)'] - E[z(k)\hat{x}(k|k-1)']\bar{U}(k)H(k)' - H(k)\bar{U}(k)E[\hat{x}(k|k-1)z(k)'] + H(k)\bar{U}(k)E[\hat{x}(k|k-1)\hat{x}(k|k-1)']\bar{U}(k)H(k)'.$$
Let us work out each of the terms in the previous expression. By the observation equation we have
$$E[z(k)z(k)'] = H(k)\,E[U(k)x(k)x(k)'U(k)]\,H(k)' + H(k)\,E[U(k)x(k)v(k)'] + E[v(k)x(k)'U(k)]\,H(k)' + R(k),$$
and according to hypotheses (H.4)–(H.6) the cross terms cancel. If we label $\Gamma(k) = E[x(k)x(k)']$ for $k \ge 0$, then, by the independence of $U(k)$ and $x(k)$ ensured by (H.5) and (H.6), $E[U(k)x(k)x(k)'U(k)] = M(k)\circ\Gamma(k)$, where $\circ$ denotes the Hadamard (entrywise) product. Therefore
$$E[z(k)z(k)'] = H(k)\big[M(k)\circ\Gamma(k)\big]H(k)' + R(k).$$
On the other hand, by the observation equation, (H.4)–(H.6), and the OPL,
$$E[z(k)\hat{x}(k|k-1)'] = H(k)\bar{U}(k)E[x(k)\hat{x}(k|k-1)'] = H(k)\bar{U}(k)E[\hat{x}(k|k-1)\hat{x}(k|k-1)'].$$
For the same reasons,
$$E[\hat{x}(k|k-1)z(k)'] = E[\hat{x}(k|k-1)\hat{x}(k|k-1)']\bar{U}(k)H(k)'.$$
In short, the covariance matrix of the innovation process verifies
$$\Pi(k) = H(k)\big[M(k)\circ\Gamma(k)\big]H(k)' + R(k) - H(k)\bar{U}(k)E[\hat{x}(k|k-1)\hat{x}(k|k-1)']\bar{U}(k)H(k)',$$
and, since the OPL gives $E[\hat{x}(k|k-1)\hat{x}(k|k-1)'] = \Gamma(k) - P(k|k-1)$, the expression stated in the theorem follows.
To obtain the components of the matrix $E[U(k)x(k)x(k)'U(k)]$, we only need to observe that, by (H.6),
$$\big(E[U(k)x(k)x(k)'U(k)]\big)_{ij} = E[u_i(k)u_j(k)]\,E[x_i(k)x_j(k)] = M_{ij}(k)\,\Gamma_{ij}(k),$$
where $x(k) = (x_1(k),\dots,x_n(k))'$ and $u(k) = (u_1(k),\dots,u_n(k))'$. The following recursive expression for $\Gamma(k)$ is immediate, given that $\{w(k)\}$ is a white noise sequence independent of $x(k)$:
$$\Gamma(k+1) = F(k+1,k)\,\Gamma(k)\,F(k+1,k)' + Q(k), \quad \Gamma(0) = P(0).$$
The expression for the prediction error covariance matrix,
$$P(k+1|k) = F(k+1,k)\,P(k|k)\,F(k+1,k)' + Q(k),$$
is immediate since $\tilde{x}(k+1|k) = F(k+1,k)\tilde{x}(k|k) + w(k)$.
On the other hand, given that $\tilde{x}(k|k) = \tilde{x}(k|k-1) - K(k)\nu(k)$, then
$$P(k|k) = P(k|k-1) - E[\tilde{x}(k|k-1)\nu(k)']K(k)' - K(k)E[\nu(k)\tilde{x}(k|k-1)'] + K(k)\Pi(k)K(k)'.$$
It can be observed that
$$E[\tilde{x}(k|k-1)\nu(k)'] = E[x(k)\nu(k)'] - E[\hat{x}(k|k-1)\nu(k)'],$$
where the second term cancels according to the OPL, and by the expression of $E[x(k)\nu(k)']$ obtained above we get
$$E[\tilde{x}(k|k-1)\nu(k)'] = P(k|k-1)\,\bar{U}(k)\,H(k)' = K(k)\Pi(k),$$
and then
$$P(k|k) = P(k|k-1) - K(k)\,\Pi(k)\,K(k)'.$$
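The recursions above can be sketched in code as follows. This is our own reading of the algorithm (the function name, argument names, and the initialization $\hat{x}(0|0)=0$, $\Gamma(0)=P(0)$ follow the centered-initial-state hypothesis); it is a sketch under these notational assumptions, not the authors' reference implementation.

```python
import numpy as np

def predict_and_filter(zs, F, H, Q, R, P0, u_means, u_second_moments):
    """Sketch of the Theorem 3.1 recursions.

    zs               : iterable of observations z(1), z(2), ...
    u_means          : vector (m_1, ..., m_n) of E[u_i(k)] -> Ubar = diag(u_means)
    u_second_moments : n x n matrix with entries E[u_i(k) u_j(k)] (M(k) in the text)
    """
    n = F.shape[0]
    Ubar = np.diag(u_means)
    x_f = np.zeros(n)            # filter:  x^(0|0) = E[x(0)] = 0
    P_f = P0.copy()              # P(0|0) = P(0)
    Gamma = P0.copy()            # Gamma(0) = E[x(0) x(0)'] = P(0)
    filtered = []
    for z in zs:
        # one-step ahead prediction
        x_p = F @ x_f                        # x^(k|k-1)
        P_p = F @ P_f @ F.T + Q              # P(k|k-1)
        Gamma = F @ Gamma @ F.T + Q          # Gamma(k) = E[x(k) x(k)']
        # innovation and its covariance
        nu = z - H @ Ubar @ x_p
        E_pred = Gamma - P_p                 # E[x^(k|k-1) x^(k|k-1)']
        Pi = (H @ (u_second_moments * Gamma) @ H.T + R
              - H @ Ubar @ E_pred @ Ubar @ H.T)
        # gain and filter update
        K = P_p @ Ubar @ H.T @ np.linalg.inv(Pi)
        x_f = x_p + K @ nu
        P_f = P_p - K @ Pi @ K.T
        filtered.append(x_f)
    return np.array(filtered)
```

When every $u_i(k) \equiv 1$ (means of one and second moments of one), the innovation covariance reduces to $H P(k|k-1) H' + R$ and the recursion coincides with the standard Kalman filter, consistent with the particular cases discussed next.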
Next, we see how some known results can be considered particular specifications of the general model proposed in this paper.

(i) If $U(k) = I$ (that is, $u_i(k) \equiv 1$), the state vector is not corrupted by multiplicative noise; then $\bar{U}(k) = I$ and $M(k)\circ\Gamma(k) = \Gamma(k)$, so that $\Pi(k) = H(k)P(k|k-1)H(k)' + R(k)$, and our algorithm degenerates into Kalman's [1] algorithm.

(ii) If $U(k) = u(k)I$, where $\{u(k)\}$ is a scalar white sequence with nonzero mean $m(k)$ and variance $\sigma^2(k)$, we end up with the framework of Rajasekaran et al. [6], where all the components of the state vector are corrupted by the same multiplicative noise. In this case $\bar{U}(k) = m(k)I$ and $M_{ij}(k) = \sigma^2(k) + m^2(k)$ for all $i,j$, and the presented algorithm collapses into Rajasekaran's.

(iii) If $U(k) = \gamma(k)I$, where $\{\gamma(k)\}$ is a sequence of independent Bernoulli random variables with $P[\gamma(k)=1] = p(k)$, we end up with Nahi's [9] framework, where the state vector is present in the observation with probability $p(k)$. In this case $\bar{U}(k) = p(k)I$ and $M_{ij}(k) = p(k)$ for all $i,j$, and the new algorithm collapses into Nahi's.

(iv) If $u_i(k) \equiv 1$ for some components and $u_i(k) = \gamma(k)$ for the remaining ones, where $\{\gamma(k)\}$ is a sequence of independent Bernoulli random variables with $P[\gamma(k)=1] = p(k)$, the observations always include some components of the state vector while the presence of the remaining ones is uncertain (the framework of Sánchez-González and García-Muñoz [11]), and the new algorithm degenerates into theirs.
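Each particular case amounts to a specific choice of the mean matrix $\bar{U}(k) = E[U(k)]$ and the second-moment matrix $M(k)$. The sketch below writes out those parameterizations for a hypothetical two-dimensional state; the numeric values of $p$, $m$, and $\sigma^2$ are illustrative choices of ours.

```python
import numpy as np

n, p = 2, 0.7       # illustrative state dimension and Bernoulli probability
m, s2 = 2.0, 0.4    # illustrative mean and variance for case (ii)

# (i) Kalman [1]: u_i(k) = 1 deterministically.
U_mean_i, M_i = np.ones(n), np.ones((n, n))

# (ii) Rajasekaran et al. [6]: u_i(k) = u(k) for every i, mean m, variance s2,
#      so E[u(k)^2] = s2 + m**2.
U_mean_ii, M_ii = m * np.ones(n), (s2 + m**2) * np.ones((n, n))

# (iii) Nahi [9]: u_i(k) = gamma(k), Bernoulli with P[gamma(k) = 1] = p,
#       so E[gamma(k)^2] = E[gamma(k)] = p.
U_mean_iii, M_iii = p * np.ones(n), p * np.ones((n, n))

# (iv) Sánchez-González and García-Muñoz [11]: the first component is always
#      observed (u_1 = 1) while the second is present with probability p.
U_mean_iv = np.array([1.0, p])
M_iv = np.array([[1.0, p],
                 [p,   p]])
```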
Another interesting situation appears when the components of the state vector are present in the observation but with different probabilities. Such a situation is not, properly speaking, a system with uncertain observations. The presented algorithm solves the estimation problems in this type of system; it suffices to suppose that the multiplicative noises are different Bernoulli random variables.
4. Some Numerical Simulation Examples
We now show some numerical examples to illustrate the filtering and prediction algorithm presented in Theorem 3.1.
Example 4.1. We consider a two-dimensional linear system described by the dynamic equation (4.1), where $w(k)$ is centered Gaussian white noise; the initial state components $x_1(0)$ and $x_2(0)$ are centered Gaussian random variables with variances equal to 0.5; the multiplicative noises $u_1(k)$ and $u_2(k)$ are Gaussian white noises with means 2 and 3 and variances $\sigma_1^2$ and $\sigma_2^2$, respectively; $u_1(k)$ and $u_2(k)$ are independent; and $v(k)$ is centered Gaussian white noise.
Using the estimation algorithm of Theorem 3.1, we can compute the filtering estimate of the state recursively. Figures 1 and 2 illustrate the state components and their filtering estimates versus $k$ for the multiplicative Gaussian observation noises $u_1(k)$ and $u_2(k)$. The state is represented in black and the filter in red.
Tables 1 and 2 show the mean-square values (MSVs) of the filtering errors for $x_1(k)$ and $x_2(k)$, corresponding to the multiplicative white observation noises.
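The MSVs reported in the tables can be computed as time averages of the squared filtering errors. A small helper (the name and the one-value-per-component convention are our assumptions) is:

```python
import numpy as np

def msv(x_true, x_filtered):
    """Mean-square value of the filtering errors over a simulated horizon:
    MSV_i = (1/K) * sum_{k=1..K} (x_i(k) - xhat_i(k|k))**2,
    returning one value per state component."""
    err = np.asarray(x_true) - np.asarray(x_filtered)
    return (err ** 2).mean(axis=0)
```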


Example 4.2. We consider the linear system described by (4.1), where now $u_1(k)$ and $u_2(k)$ are sequences of independent Bernoulli random variables taking the value 1 with probabilities $p_1$ and $p_2$, respectively.
Figures 3 and 4 illustrate the state components and their filtering estimates versus $k$ for the multiplicative Bernoulli observation noises $u_1(k)$ and $u_2(k)$. The state is represented in red and the filter in black.
Tables 3 and 4 show the mean-square values (MSVs) of the filtering errors for $x_1(k)$ and $x_2(k)$, corresponding to the multiplicative white observation noises.


As can be observed, the simulation graphs and the MSVs of the filtering errors in both examples show the effectiveness of the new algorithm.
5. Conclusions
For linear discrete-time stochastic systems where the components of the state vector in the measurement equation are corrupted by different multiplicative noises in addition to the observation noise, we have derived the optimal linear estimators, including the filter and the one-step predictor, in the minimum variance sense by applying the innovation analysis approach. Our solutions are given in terms of general expressions for the innovations, covariance matrices, and gain, and they have the interesting property of reducing to several well-known results as particular cases: under appropriate specifications of the multiplicative noises, the algorithm collapses into those of Kalman [1], Rajasekaran et al. [6], Nahi [9], and, more recently, Sánchez-González and García-Muñoz [11].
References
[1] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Journal of Basic Engineering, vol. 82, pp. 35–45, 1960.
[2] R. E. Kalman, “New methods in Wiener filtering theory,” in Proceedings of the 1st Symposium on Engineering Applications of Random Function Theory and Probability, pp. 270–387, John Wiley & Sons, New York, NY, USA, 1963.
[3] J. S. Meditch, Stochastic Optimal Linear Estimation and Control, McGraw-Hill, New York, NY, USA, 1969.
[4] A. H. Jazwinski, Stochastic Processes and Filtering Theory, Academic Press, New York, NY, USA, 1970.
[5] A. Kowalski and D. Szynal, “Filtering in discrete-time systems with state and observation noises correlated on a finite time interval,” IEEE Transactions on Automatic Control, vol. 31, no. 4, pp. 381–384, 1986.
[6] P. K. Rajasekaran, N. Satyanarayana, and M. D. Srinath, “Optimum linear estimation of stochastic signals in the presence of multiplicative noise,” IEEE Transactions on Aerospace and Electronic Systems, vol. 7, no. 3, pp. 462–468, 1971.
[7] M. T. Hadidi and S. C. Schwartz, “Linear recursive state estimators under uncertain observations,” IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 944–948, 1979.
[8] X. D. Wang, “Recursive algorithms for linear LMSE estimators under uncertain observations,” IEEE Transactions on Automatic Control, vol. 29, no. 9, pp. 853–854, 1984.
[9] N. E. Nahi, “Optimal recursive estimation with uncertain observation,” IEEE Transactions on Information Theory, vol. 15, pp. 457–462, 1969.
[10] A. H. Hermoso and J. L. Pérez, “Linear estimation for discrete-time systems in the presence of time-correlated disturbances and uncertain observations,” IEEE Transactions on Automatic Control, vol. 39, no. 8, pp. 1636–1638, 1994.
[11] C. Sánchez-González and T. M. García-Muñoz, “Linear estimation for discrete systems with uncertain observations: an application to the correction of declared incomes in inquiry,” Applied Mathematics and Computation, vol. 156, no. 1, pp. 211–233, 2004.
[12] W. Zhang, Y. Huang, and L. Xie, “Infinite horizon stochastic $H_2/H_\infty$ control for discrete-time systems with state and disturbance dependent noise,” Automatica, vol. 44, no. 9, pp. 2306–2316, 2008.
[13] M. Sahebsara, T. Chen, and S. L. Shah, “Optimal $H_2$ filtering with random sensor delay, multiple packet dropout and uncertain observations,” International Journal of Control, vol. 80, no. 2, pp. 292–301, 2007.
[14] S. Nakamori, “Estimation technique using covariance information with uncertain observations in linear discrete-time systems,” Signal Processing, vol. 58, no. 3, pp. 309–317, 1997.
[15] A. P. Sage and J. L. Melsa, Estimation Theory with Applications to Communications and Control, McGraw-Hill Series in Systems Science, McGraw-Hill, New York, NY, USA, 1971.
Copyright
Copyright © 2010 C. SánchezGonzález and T. M. GarcíaMuñoz. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.