Journal of Applied Mathematics
Volume 2012 (2012), Article ID 717504, 19 pages
http://dx.doi.org/10.1155/2012/717504
Research Article

On Kalman Smoothing for Wireless Sensor Networks Systems with Multiplicative Noises

Xiao Lu,1,2ย Haixia Wang,1ย and Xi Wang1

1Key Laboratory for Robot & Intelligent Technology of Shandong Province, Shandong University of Science and Technology, Qingdao 266510, China
2School of Control Science and Engineering, Shandong University, Jinan 250100, China

Received 12 March 2012; Revised 21 May 2012; Accepted 21 May 2012

Academic Editor: Baocang Ding

Copyright © 2012 Xiao Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The paper deals with the Kalman (or $H_2$) smoothing problem for wireless sensor networks (WSNs) with multiplicative noises. Packet loss occurs in the observation equations, and multiplicative noises occur both in the system state equation and the observation equations. The Kalman smoothers, which include the Kalman fixed-interval smoother, Kalman fixed-lag smoother, and Kalman fixed-point smoother, are given by solving Riccati equations and Lyapunov equations based on the projection theorem and innovation analysis. An example is also presented to demonstrate the effectiveness of the approach. Furthermore, the proposed three Kalman smoothers are compared.

1. Introduction

The linear estimation problem has been one of the key research topics of the control community [1]. As is well known, two indexes are used to investigate linear estimation: one is the $H_2$ index, and the other is the $H_\infty$ index. Under the $H_2$ index, Kalman filtering [2–4] is an important approach to linear estimation besides Wiener filtering. In general, Kalman filtering, which usually uses a state-space equation, is better than Wiener filtering, since it is recursive and can deal with time-variant systems [1, 2, 5]. This has motivated many researchers to employ Kalman filtering to study linear time-variant or time-invariant estimation, and Kalman filtering has become a popular and efficient approach for the normal linear system. However, standard Kalman filtering cannot be directly applied to estimation over wireless sensor networks (WSNs), since packet loss occurs and sometimes multiplicative noises occur as well [6, 7].

Linear estimation for systems with multiplicative noises under the $H_2$ index has been well studied in [8, 9]. Reference [8] considered the optimal state estimation algorithm for singular systems with multiplicative noise, where dynamic-noise and measurement-noise estimators were proposed. In [9], we presented linear filtering for continuous-time systems with time-delayed measurements and multiplicative noises under the $H_2$ index.

Wireless sensor networks have become popular in recent years, and the corresponding estimation problem has attracted many researchers' attention [10–15]. It should be noted that in the above works, only packet loss occurs. Reference [10], an important and ground-breaking work, considered the Kalman filtering problem for a WSN with intermittent packet loss, and a Kalman filter together with upper and lower bounds of the error covariance was presented. Reference [11] developed the work of [10]: the measurements are divided into two parts which are sent over different channels with different packet-loss rates, and the Kalman filter together with the covariance matrix is given. Reference [14] further develops the result of [11], and the stability of the Kalman filter with Markovian packet loss is established.

However, the above references mainly focus on linear systems with packet loss and are not applicable when there are multiplicative noises in the system models [6, 7, 16–18]. For the Kalman filtering problem for wireless sensor networks with multiplicative noises, [16–18] give preliminary results: [16] gives a Kalman filter, [17] deals with the Kalman filter for a wireless sensor network system with two multiplicative noises and two measurements sent over different channels with different packet-drop rates, and [18] gives the information-fusion Kalman filter for wireless sensor networks with packet loss and multiplicative noises.

In this paper, the Kalman smoothing problem, including fixed-point smoothing [6], fixed-interval smoothing, and fixed-lag smoothing [7], for WSNs with packet loss will be studied. Multiplicative noises occur in both the state equation and the observation equation, which extends the work of [8], where multiplicative noises occur only in the state equations. Three Kalman smoothers will be given by recursive equations. The smoother error covariance matrices of fixed-interval smoothing and fixed-lag smoothing are given by a Riccati equation without recursion, while the smoother error covariance matrix of fixed-point smoothing is given by a recursive Riccati equation and a recursive Lyapunov equation, which develops the work of [6, 7], where some main theorems on Kalman smoothing contain errors.

The rest of the paper is organized as follows. In Section 2, we present the system model and state the problem dealt with in the paper. The main results on the Kalman smoothers are given in Section 3. In Section 4, a numerical example is given to show the performance of the smoothers. Conclusions are drawn in Section 5.

2. Problem Statement

Consider the following discrete-time wireless sensor network system with multiplicative noises:
$$\mathbf{x}(t+1) = A\mathbf{x}(t) + B_1\mathbf{x}(t)\mathbf{w}(t) + B_2\mathbf{u}(t), \tag{2.1}$$
$$\bar{\mathbf{y}}(t) = C\mathbf{x}(t) + D\mathbf{x}(t)\mathbf{w}(t) + \mathbf{v}(t), \tag{2.2}$$
$$\mathbf{y}(t) = \gamma(t)\bar{\mathbf{y}}(t), \tag{2.3}$$
with initial state $\mathbf{x}(0)$, where $\mathbf{x}(t)\in\mathcal{R}^{n}$ is the state, $\mathbf{y}(t)\in\mathcal{R}^{p}$ is the measurement, $\mathbf{u}(t)\in\mathcal{R}^{r}$ is the input sequence, $\mathbf{v}(t)\in\mathcal{R}^{p}$ is zero-mean additive white noise, and $\mathbf{w}(t)\in\mathcal{R}^{1}$ is zero-mean multiplicative white noise. $A\in\mathcal{R}^{n\times n}$, $B_1\in\mathcal{R}^{n\times n}$, $B_2\in\mathcal{R}^{n\times r}$, $C\in\mathcal{R}^{p\times n}$, and $D\in\mathcal{R}^{p\times n}$ are known time-invariant matrices.

Assumption 2.1. $\gamma(t)\ (t\ge 0)$ is a Bernoulli random variable with probability distribution
$$p(\gamma(t)) = \begin{cases} \lambda(t), & \gamma(t)=1,\\ 1-\lambda(t), & \gamma(t)=0. \end{cases} \tag{2.4}$$
$\gamma(t)$ is independent of $\gamma(s)$ for $s\ne t$, and $\gamma(t)$ is uncorrelated with $\mathbf{x}(0)$, $\mathbf{u}(t)$, $\mathbf{v}(t)$, and $\mathbf{w}(t)$.

Assumption 2.2. The initial state $\mathbf{x}(0)$ and the noises $\mathbf{u}(t)$, $\mathbf{v}(t)$, and $\mathbf{w}(t)$ are mutually uncorrelated white noises with zero means and known covariance matrices, that is,
$$\left\langle \begin{bmatrix} \mathbf{x}(0)\\ \mathbf{u}(t)\\ \mathbf{v}(t)\\ \mathbf{w}(t) \end{bmatrix}, \begin{bmatrix} \mathbf{x}(0)\\ \mathbf{u}(s)\\ \mathbf{v}(s)\\ \mathbf{w}(s) \end{bmatrix} \right\rangle = \begin{bmatrix} \Pi(0) & 0 & 0 & 0\\ 0 & Q\delta_{t,s} & 0 & 0\\ 0 & 0 & R\delta_{t,s} & 0\\ 0 & 0 & 0 & M\delta_{t,s} \end{bmatrix}, \tag{2.5}$$
where $\langle a,b\rangle = \mathcal{E}[ab^{*}]$, and $\mathcal{E}$ denotes the mathematical expectation.
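To make the model concrete, the dynamics (2.1)–(2.3) and the Bernoulli arrival model (2.4) can be simulated directly. The following Python sketch uses illustrative matrices and an assumed arrival rate $\lambda = 0.7$; none of these numbers are taken from the paper's example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system matrices (assumptions, not the paper's example)
A  = np.array([[0.8, 0.3], [0.0, 0.6]])
B1 = np.array([[0.2, 0.0], [0.1, 0.9]])
B2 = np.array([[0.5], [0.8]])
C  = np.array([[1.0, 2.0]])
D  = np.array([[2.0, 2.0]])
Q, M, R, lam, T = 1.0, 1.0, 0.01, 0.7, 50   # noise covariances, arrival rate, horizon

x = np.array([1.0, 0.5])
xs, ys, gammas = [], [], []
for t in range(T):
    w = rng.normal(0.0, np.sqrt(M))             # scalar multiplicative noise w(t)
    u = rng.normal(0.0, np.sqrt(Q), size=1)     # input noise u(t)
    v = rng.normal(0.0, np.sqrt(R))             # additive observation noise v(t)
    g = rng.binomial(1, lam)                    # Bernoulli packet arrival, (2.4)
    y_bar = (C @ x + (D @ x) * w).item() + v    # sensor output, (2.2)
    xs.append(x.copy()); ys.append(g * y_bar); gammas.append(g)  # received y(t), (2.3)
    x = A @ x + (B1 @ x) * w + B2 @ u           # state update, (2.1)
```

Note how a lost packet ($\gamma(t)=0$) makes the received measurement carry no information about the state, which is what the smoothers later have to cope with.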
The Kalman smoothing problem considered in the paper for the system model (2.1)โ€“(2.3) can be stated in the following three cases.

Problem 1 (fixed-interval smoothing). Given the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(N)\}$ and scalars $\{\gamma(0),\ldots,\gamma(N)\}$ for a fixed scalar $N$, find the fixed-interval smoothing estimate $\hat{\mathbf{x}}(t\mid N)$ of $\mathbf{x}(t)$, such that
$$\min\ \mathcal{E}\left\{ \left[\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid N)\right]' \left[\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid N)\right] \mid \mathbf{y}_{0}^{N}, \gamma_{0}^{N}, \mathbf{x}(0), \Pi(0) \right\}, \tag{2.6}$$
where $0\le t\le N$, $\mathbf{y}_{0}^{N}\triangleq\{\mathbf{y}(0),\ldots,\mathbf{y}(N)\}$, $\gamma_{0}^{N}\triangleq\{\gamma(0),\ldots,\gamma(N)\}$.

Problem 2 (fixed-lag smoothing). Given the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ and scalars $\{\gamma(0),\ldots,\gamma(t)\}$ for a fixed scalar $l$, find the fixed-lag smoothing estimate $\hat{\mathbf{x}}(t-l\mid t)$ of $\mathbf{x}(t-l)$, such that
$$\min\ \mathcal{E}\left\{ \left[\mathbf{x}(t-l)-\hat{\mathbf{x}}(t-l\mid t)\right]' \left[\mathbf{x}(t-l)-\hat{\mathbf{x}}(t-l\mid t)\right] \mid \mathbf{y}_{0}^{t}, \gamma_{0}^{t}, \mathbf{x}(0), \Pi(0) \right\}, \tag{2.7}$$
where $\mathbf{y}_{0}^{t}\triangleq\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$, $\gamma_{0}^{t}\triangleq\{\gamma(0),\ldots,\gamma(t)\}$.

Problem 3 (fixed-point smoothing). Given the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(N)\}$ and scalars $\{\gamma(0),\ldots,\gamma(N)\}$ for a fixed time instant $t$, find the fixed-point smoothing estimate $\hat{\mathbf{x}}(t\mid j)$ of $\mathbf{x}(t)$, such that
$$\min\ \mathcal{E}\left\{ \left[\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid j)\right]' \left[\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid j)\right] \mid \mathbf{y}_{0}^{j}, \gamma_{0}^{j}, \mathbf{x}(0), \Pi(0) \right\}, \tag{2.8}$$
where $0\le t<j\le N$, $\mathbf{y}_{0}^{j}\triangleq\{\mathbf{y}(0),\ldots,\mathbf{y}(j)\}$, $\gamma_{0}^{j}\triangleq\{\gamma(0),\ldots,\gamma(j)\}$, $j=t+1,t+2,\ldots,N$.

3. Main Results

In this section, we give the Kalman fixed-interval smoother, Kalman fixed-lag smoother, and Kalman fixed-point smoother for the system model (2.1)–(2.3) and compare them with each other. Before giving the main results, we first give the Kalman filter for the system model (2.1)–(2.3), which will be useful in this section. We begin with the following definition.

Definition 3.1. Given time instants $t$ and $j$, the estimator $\hat{\boldsymbol{\xi}}(t\mid j)$ is the optimal estimate of $\boldsymbol{\xi}(t)$ given the observation sequence
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t),\mathbf{y}(t+1),\ldots,\mathbf{y}(j);\ \gamma(0),\ldots,\gamma(t),\gamma(t+1),\ldots,\gamma(j)\}, \tag{3.1}$$
and the estimator $\hat{\boldsymbol{\xi}}(t)$ is the optimal estimate of $\boldsymbol{\xi}(t)$ given the observations
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t-1);\ \gamma(0),\ldots,\gamma(t-1)\}. \tag{3.2}$$

Remark 3.2. It should be noted that the linear space $\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t-1);\ \gamma(0),\ldots,\gamma(t-1)\}$ denotes the linear space $\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t-1)\}$ under the condition that the scalars $\gamma(0),\ldots,\gamma(t-1)$ are known. The same holds for the linear space $\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t),\mathbf{y}(t+1),\ldots,\mathbf{y}(j);\ \gamma(0),\ldots,\gamma(t),\gamma(t+1),\ldots,\gamma(j)\}$.
We introduce the following notation:
$$\tilde{\mathbf{y}}(t) \triangleq \mathbf{y}(t)-\hat{\mathbf{y}}(t), \tag{3.3}$$
$$\mathbf{e}(t) \triangleq \mathbf{x}(t)-\hat{\mathbf{x}}(t), \tag{3.4}$$
$$P(t+1) \triangleq \mathcal{E}\left[\mathbf{e}(t+1)\mathbf{e}^{T}(t+1)\mid \mathbf{y}_{0}^{t},\gamma_{0}^{t}\right], \tag{3.5}$$
$$\Pi(t+1) \triangleq \mathcal{E}\left[\mathbf{x}(t+1)\mathbf{x}^{T}(t+1)\mid \mathbf{y}_{0}^{t},\gamma_{0}^{t}\right]. \tag{3.6}$$
It is clear that $\tilde{\mathbf{y}}(t)$ is the Kalman filtering innovation sequence for the system (2.1)–(2.3). We have the following relationship:
$$\tilde{\mathbf{y}}(t) = \gamma(t)C\mathbf{e}(t) + \gamma(t)D\mathbf{x}(t)\mathbf{w}(t) + \gamma(t)\mathbf{v}(t). \tag{3.7}$$
The following lemma shows that $\{\tilde{\mathbf{y}}\}$ is the innovation sequence. For simplicity of discussion, we omit $\{\gamma(0),\ldots,\gamma(t-1)\}$ as in Remark 3.2.

Lemma 3.3. The sequence
$$\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\} \tag{3.8}$$
is the innovation sequence, which spans the same linear space as
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}. \tag{3.9}$$

Proof. Firstly, from (3.2) in Definition 3.1, we have
$$\hat{\mathbf{y}}(t) = \operatorname{Proj}\{\mathbf{y}(t)\mid \mathbf{y}(0),\ldots,\mathbf{y}(t-1)\}. \tag{3.10}$$
From (3.3) and (3.10), we have $\tilde{\mathbf{y}}(t)\in\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ for $t=0,1,2,\ldots$. Thus,
$$\mathcal{L}\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\} \subset \mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}. \tag{3.11}$$
Secondly, by induction, we have
$$\begin{aligned} \mathbf{y}(0) &= \tilde{\mathbf{y}}(0)+\hat{\mathbf{y}}(0) \in \mathcal{L}\{\tilde{\mathbf{y}}(0)\},\\ \mathbf{y}(1) &= \tilde{\mathbf{y}}(1)+\operatorname{Proj}\{\mathbf{y}(1)\mid\mathbf{y}(0)\} \in \mathcal{L}\{\tilde{\mathbf{y}}(0),\tilde{\mathbf{y}}(1)\},\\ &\ \,\vdots\\ \mathbf{y}(t) &= \tilde{\mathbf{y}}(t)+\operatorname{Proj}\{\mathbf{y}(t)\mid\mathbf{y}(0),\ldots,\mathbf{y}(t-1)\} \in \mathcal{L}\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}. \end{aligned} \tag{3.12}$$
Thus,
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\} \subset \mathcal{L}\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}. \tag{3.13}$$
So
$$\mathcal{L}\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\} = \mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}. \tag{3.14}$$
Next, we show that $\tilde{\mathbf{y}}$ is an uncorrelated sequence. In fact, for any $t,s\ (t\ne s)$, we can assume $t>s$ without loss of generality, and it follows from (3.7) that
$$\mathcal{E}\left[\tilde{\mathbf{y}}(t)\tilde{\mathbf{y}}^{T}(s)\right] = \mathcal{E}\left[\gamma(t)C\mathbf{e}(t)\tilde{\mathbf{y}}^{T}(s)\right] + \mathcal{E}\left[\gamma(t)D\mathbf{x}(t)\mathbf{w}(t)\tilde{\mathbf{y}}^{T}(s)\right] + \mathcal{E}\left[\gamma(t)\mathbf{v}(t)\tilde{\mathbf{y}}^{T}(s)\right]. \tag{3.15}$$
Note that $\mathcal{E}[\gamma(t)D\mathbf{x}(t)\mathbf{w}(t)\tilde{\mathbf{y}}^{T}(s)]=0$ and $\mathcal{E}[\gamma(t)\mathbf{v}(t)\tilde{\mathbf{y}}^{T}(s)]=0$. Since $\mathbf{e}(t)$ is the state prediction error, it follows that $\mathcal{E}[\gamma(t)C\mathbf{e}(t)\tilde{\mathbf{y}}^{T}(s)]=0$, and thus $\mathcal{E}[\tilde{\mathbf{y}}(t)\tilde{\mathbf{y}}^{T}(s)]=0$, which implies that $\tilde{\mathbf{y}}(t)$ is uncorrelated with $\tilde{\mathbf{y}}(s)$. Hence, $\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}$ is an innovation sequence. This completes the proof of the lemma.

For the convenience of derivation, we state the orthogonal projection theorem in the form of the next theorem without proof; readers can refer to [4, 5].

Theorem 3.4. Given the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ and the corresponding innovation sequence $\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}$ in Lemma 3.3, the projection of the state $\mathbf{x}$ can be given as
$$\operatorname{Proj}\{\mathbf{x}\mid\mathbf{y}(0),\ldots,\mathbf{y}(t)\} = \operatorname{Proj}\{\mathbf{x}\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\} = \operatorname{Proj}\{\mathbf{x}\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t-1)\} + \mathcal{E}\left[\mathbf{x}\tilde{\mathbf{y}}^{T}(t)\right] \left(\mathcal{E}\left[\tilde{\mathbf{y}}(t)\tilde{\mathbf{y}}^{T}(t)\right]\right)^{-1} \tilde{\mathbf{y}}(t). \tag{3.16}$$
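A quick numerical sanity check of the decomposition in (3.16): once the observations are orthogonalized into innovation-like uncorrelated components, the joint projection equals the sum of one-dimensional projections. The random sample vectors below simply stand in for the stochastic quantities; this is an illustrative sketch, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.normal(size=(4, 200))   # rows play the role of observations y(0),...,y(3)
x = rng.normal(size=200)        # target vector playing the role of the state

# Gram-Schmidt on the rows produces mutually orthogonal, innovation-like components.
Yt = np.zeros_like(Y)
for i in range(4):
    resid = Y[i] - sum((Y[i] @ Yt[j]) / (Yt[j] @ Yt[j]) * Yt[j] for j in range(i))
    Yt[i] = resid

# Joint projection onto span{y(0),...,y(3)} via least squares ...
proj_joint = Y.T @ np.linalg.lstsq(Y.T, x, rcond=None)[0]
# ... equals the sum of one-dimensional projections onto the orthogonal components.
proj_sum = sum((x @ Yt[i]) / (Yt[i] @ Yt[i]) * Yt[i] for i in range(4))
```

Because the spans coincide and the components are orthogonal, `proj_joint` and `proj_sum` agree to machine precision.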

Following a reviewer's suggestion, we give the Kalman filter for the system model (2.1)–(2.3), which has been given in [7].

Lemma 3.5. Consider the system model (2.1)–(2.3) with scalars $\gamma(0),\ldots,\gamma(t+1)$ and the initial condition $\hat{\mathbf{x}}(0)=0$; the Kalman filter can be given as
$$\hat{\mathbf{x}}(t+1\mid t+1) = \hat{\mathbf{x}}(t+1) + P(t+1)C^{T} \left[CP(t+1)C^{T} + D\Pi(t+1)MD^{T} + R\right]^{-1} \left[\mathbf{y}(t+1)-\gamma(t+1)C\hat{\mathbf{x}}(t+1)\right], \tag{3.17}$$
$$\hat{\mathbf{x}}(t+1) = A\hat{\mathbf{x}}(t) + K(t)\left[\mathbf{y}(t)-\gamma(t)C\hat{\mathbf{x}}(t)\right], \quad \hat{\mathbf{x}}(0)=0, \tag{3.18}$$
$$\begin{aligned} P(t+1) ={}& \left[A-\gamma(t)K(t)C\right]P(t)\left[A-\gamma(t)K(t)C\right]^{T}\\ &+ \left[B_{1}-\gamma(t)K(t)D\right]\Pi(t)\left[B_{1}-\gamma(t)K(t)D\right]^{T}M\\ &+ B_{2}QB_{2}^{T} + \gamma(t)K(t)RK^{T}(t), \quad P(0)=\mathcal{E}\left[\mathbf{x}(0)\mathbf{x}^{T}(0)\mid\gamma(0)\right], \end{aligned} \tag{3.19}$$
$$\Pi(t+1) = A\Pi(t)A^{T} + B_{1}\Pi(t)B_{1}^{T}M + B_{2}QB_{2}^{T}, \quad \Pi(0)=P(0), \tag{3.20}$$
where
$$K(t) \triangleq \left[AP(t)C^{T} + B_{1}M\Pi(t)D^{T}\right] \left[CP(t)C^{T} + D\Pi(t)MD^{T} + R\right]^{-1}. \tag{3.21}$$
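As a concrete reading of the recursions (3.17)–(3.21), the following Python function sketches one forward pass of the filter for a scalar measurement ($p=1$). The matrices and data it is run on are illustrative assumptions, not the paper's example.

```python
import numpy as np

def kalman_filter(A, B1, B2, C, D, Q, M, R, P0, ys, gammas):
    """One forward pass of Lemma 3.5, (3.17)-(3.21), for a scalar measurement."""
    c, d = C.ravel(), D.ravel()
    x = np.zeros(A.shape[0])          # predictor x_hat(0) = 0
    P, Pi = P0.copy(), P0.copy()      # P(0) and Pi(0) = P(0)
    preds, filts, Ps, Pis = [], [], [], []
    for y, g in zip(ys, gammas):
        s = float(c @ P @ c + M * (d @ Pi @ d)) + R   # C P C' + D Pi M D' + R
        innov = y - g * float(c @ x)                  # y(t) - gamma(t) C x_hat(t)
        preds.append(x.copy()); Ps.append(P.copy()); Pis.append(Pi.copy())
        filts.append(x + (P @ c) * innov / s)         # filter update (3.17)
        K = (A @ P @ c + M * (B1 @ Pi @ d)) / s       # gain (3.21)
        x = A @ x + K * innov                         # predictor (3.18)
        F, G = A - g * np.outer(K, c), B1 - g * np.outer(K, d)
        P = F @ P @ F.T + M * (G @ Pi @ G.T) + Q * (B2 @ B2.T) + g * R * np.outer(K, K)  # (3.19)
        Pi = A @ Pi @ A.T + M * (B1 @ Pi @ B1.T) + Q * (B2 @ B2.T)                       # (3.20)
    return preds, filts, Ps, Pis
```

Note that on a lost packet ($\gamma(t)=0$) the innovation reduces to the raw zero measurement and the covariance update drops the correction terms, so $P(t+1)$ grows toward the open-loop value, in line with the recursions.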

Proof. Firstly, according to the projection theorem (Theorem 3.4), we have
$$\begin{aligned} \hat{\mathbf{y}}(t) &= \operatorname{Proj}\{\mathbf{y}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t-1);\ \gamma(0),\ldots,\gamma(t-1)\}\\ &= \operatorname{Proj}\{\gamma(t)C\mathbf{x}(t)+\gamma(t)D\mathbf{x}(t)\mathbf{w}(t)+\gamma(t)\mathbf{v}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t-1);\ \gamma(0),\ldots,\gamma(t-1)\}\\ &= \gamma(t)C\hat{\mathbf{x}}(t), \end{aligned} \tag{3.22}$$
then from (3.7), we have
$$Q_{\tilde{\mathbf{y}}}(t) \triangleq \langle\tilde{\mathbf{y}}(t),\tilde{\mathbf{y}}(t)\rangle = \gamma(t)CP(t)C^{T} + \gamma(t)D\Pi(t)MD^{T} + \gamma(t)R. \tag{3.23}$$
Secondly, according to the projection theorem, we have
$$\begin{aligned} \hat{\mathbf{x}}(t+1) &= \operatorname{Proj}\{\mathbf{x}(t+1)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t);\ \gamma(0),\ldots,\gamma(t)\}\\ &= \operatorname{Proj}\left\{A\mathbf{x}(t)+B_{1}\mathbf{x}(t)\mathbf{w}(t)+B_{2}\mathbf{u}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t-1);\ \gamma(0),\ldots,\gamma(t-1)\right\}\\ &\quad + \operatorname{Proj}\left\{A\mathbf{x}(t)+B_{1}\mathbf{x}(t)\mathbf{w}(t)+B_{2}\mathbf{u}(t)\mid\tilde{\mathbf{y}}(t);\ \gamma(t)\right\}\\ &= A\hat{\mathbf{x}}(t) + \gamma(t)\left[AP(t)C^{T}+B_{1}M\Pi(t)D^{T}\right]Q_{\tilde{\mathbf{y}}}^{-1}(t)\tilde{\mathbf{y}}(t)\\ &= A\hat{\mathbf{x}}(t) + \gamma(t)K(t)\left[C\mathbf{e}(t)+D\mathbf{x}(t)\mathbf{w}(t)+\mathbf{v}(t)\right]\\ &= A\hat{\mathbf{x}}(t) + K(t)\left[\mathbf{y}(t)-\gamma(t)C\hat{\mathbf{x}}(t)\right], \end{aligned} \tag{3.24}$$
which is (3.18) by considering (3.21), (3.22), and (3.23).
Thirdly, by considering (3.7) and Theorem 3.4, we also have
$$\begin{aligned} \hat{\mathbf{x}}(t+1\mid t+1) &= \operatorname{Proj}\{\mathbf{x}(t+1)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t),\tilde{\mathbf{y}}(t+1);\ \gamma(0),\ldots,\gamma(t),\gamma(t+1)\}\\ &= \hat{\mathbf{x}}(t+1) + \operatorname{Proj}\{\mathbf{x}(t+1)\mid\tilde{\mathbf{y}}(t+1);\ \gamma(t+1)\}\\ &= \hat{\mathbf{x}}(t+1) + P(t+1)C^{T}\left[CP(t+1)C^{T}+D\Pi(t+1)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+1)-\gamma(t+1)C\hat{\mathbf{x}}(t+1)\right], \end{aligned} \tag{3.25}$$
which is (3.17).
From (2.1) and (3.24), we have
$$\mathbf{e}(t+1) = \mathbf{x}(t+1)-\hat{\mathbf{x}}(t+1) = \left[A-\gamma(t)K(t)C\right]\mathbf{e}(t) + \left[B_{1}-\gamma(t)K(t)D\right]\mathbf{x}(t)\mathbf{w}(t) + B_{2}\mathbf{u}(t) - \gamma(t)K(t)\mathbf{v}(t), \tag{3.26}$$
then we have
$$\begin{aligned} P(t+1) &= \mathcal{E}\left[\mathbf{e}(t+1)\mathbf{e}^{T}(t+1)\mid\gamma_{0}^{t+1}\right]\\ &= \left[A-\gamma(t)K(t)C\right]P(t)\left[A-\gamma(t)K(t)C\right]^{T} + \left[B_{1}-\gamma(t)K(t)D\right]\Pi(t)\left[B_{1}-\gamma(t)K(t)D\right]^{T}M\\ &\quad + B_{2}QB_{2}^{T} + \gamma(t)K(t)RK^{T}(t), \end{aligned} \tag{3.27}$$
which is (3.19).
From (2.1), (3.20) can be given directly, which completes the proof.

Remark 3.6. Equation (3.19) is a recursive Riccati equation, and (3.20) is a Lyapunov equation.

3.1. Kalman Fixed-Interval Smoother

In this subsection, we will present the Kalman fixed-interval smoother by the projection theorem. First, we define
$$P(t,k) \triangleq \mathcal{E}\left[\mathbf{x}(t)\mathbf{e}^{T}(t+k)\mid\mathbf{y}_{0}^{t+k},\gamma_{0}^{t+k}\right], \tag{3.28}$$
$$\mathbf{e}(t\mid t+k) \triangleq \mathbf{x}(t)-\hat{\mathbf{x}}(t\mid t+k), \tag{3.29}$$
$$P(t\mid t+k) \triangleq \mathcal{E}\left[\mathbf{e}(t\mid t+k)\mathbf{e}^{T}(t\mid t+k)\mid\mathbf{y}_{0}^{t+k},\gamma_{0}^{t+k}\right]. \tag{3.30}$$
Then we can give the theorem, which develops [6], as follows.

Theorem 3.7. Consider the system (2.1)–(2.3) with the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(N)\}$ and scalars $\{\gamma(0),\ldots,\gamma(N)\}$; the Kalman fixed-interval smoother can be given by the following backwards recursive equations:
$$\hat{\mathbf{x}}(t\mid N) = \hat{\mathbf{x}}(t\mid t) + \sum_{k=1}^{N-t} P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+k)-\gamma(t+k)C\hat{\mathbf{x}}(t+k)\right], \quad t=0,1,\ldots,N, \tag{3.31}$$
and the corresponding smoother error covariance matrix can be given as
$$P(t\mid N) = P(t) - \sum_{k=0}^{N-t} \gamma(t+k)P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}CP^{T}(t,k), \quad t=0,1,\ldots,N, \tag{3.32}$$
where
$$P(t,k) = P(t,k-1)\left[A-\gamma(t+k-1)K(t+k-1)C\right]^{T}, \quad k=1,\ldots,N-t, \tag{3.33}$$
with $P(t,0)=P(t)$, and $P(t+k)$, $P(t)$, $\hat{\mathbf{x}}(t)$ can be given from (3.18), (3.19), and (3.17).

Proof. From the projection theorem, we have
$$\begin{aligned} \hat{\mathbf{x}}(t\mid N) &= \operatorname{Proj}\{\mathbf{x}(t)\mid\mathbf{y}(0),\ldots,\mathbf{y}(t),\mathbf{y}(t+1),\ldots,\mathbf{y}(N)\}\\ &= \operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t),\tilde{\mathbf{y}}(t+1),\ldots,\tilde{\mathbf{y}}(N)\}\\ &= \operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\} + \operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(t+1),\ldots,\tilde{\mathbf{y}}(N)\}\\ &= \hat{\mathbf{x}}(t\mid t) + \sum_{k=1}^{N-t}\operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(t+k)\}\\ &= \hat{\mathbf{x}}(t\mid t) + \sum_{k=1}^{N-t}\operatorname{Proj}\left\{\mathbf{x}(t)\mid\gamma(t+k)\left[C\mathbf{e}(t+k)+D\mathbf{x}(t+k)\mathbf{w}(t+k)+\mathbf{v}(t+k)\right]\right\}\\ &= \hat{\mathbf{x}}(t\mid t) + \sum_{k=1}^{N-t} P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+k)-\gamma(t+k)C\hat{\mathbf{x}}(t+k)\right], \end{aligned} \tag{3.34}$$
which is (3.31).
Noting that $\mathbf{x}(t)$ is uncorrelated with $\mathbf{w}(t+k-1)$, $\mathbf{u}(t+k-1)$, and $\mathbf{v}(t+k-1)$ for $k=1,\ldots,N-t$, then from (3.5), we have
$$\begin{aligned} P(t,k) &= \mathcal{E}\left[\mathbf{x}(t)\mathbf{e}^{T}(t+k)\mid\mathbf{y}_{0}^{t+k},\gamma_{0}^{t+k}\right]\\ &= \left\langle\mathbf{x}(t),\ \left[A-\gamma(t+k-1)K(t+k-1)C\right]\mathbf{e}(t+k-1) + \left[B_{1}-\gamma(t+k-1)K(t+k-1)D\right]\mathbf{x}(t+k-1)\mathbf{w}(t+k-1)\right.\\ &\qquad \left.{}+ B_{2}\mathbf{u}(t+k-1) - \gamma(t+k-1)K(t+k-1)\mathbf{v}(t+k-1)\right\rangle\\ &= P(t,k-1)\left[A-\gamma(t+k-1)K(t+k-1)C\right]^{T}, \end{aligned} \tag{3.35}$$
which is (3.33).
Next, we will give the proof of the covariance matrix $P(t\mid N)$. From the projection theorem, we have $\mathbf{x}(t)=\hat{\mathbf{x}}(t\mid N)+\mathbf{e}(t\mid N)=\hat{\mathbf{x}}(t)+\mathbf{e}(t)$, and from (3.34), we have
$$\mathbf{e}(t\mid N) = \mathbf{e}(t) - \sum_{k=0}^{N-t} P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\tilde{\mathbf{y}}(t+k), \tag{3.36}$$
that is,
$$\mathbf{e}(t) = \mathbf{e}(t\mid N) + \sum_{k=0}^{N-t} P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\tilde{\mathbf{y}}(t+k). \tag{3.37}$$
Thus, (3.32) can be given.

Remark 3.8. The proposed theorem is based on the theorem in [6]. However, the condition of the theorem in [6] is wrong since multiplicative noises ๐ฐ(0),โ€ฆ,๐ฐ(๐‘ก) are not known. In addition, the proposed theorem gives the fixed-interval smoother error covariance matrix ๐‘ƒ(๐‘กโˆฃ๐‘) which is an important index in Problem 1 and also useful in the comparison with fixed-lag smoother.

3.2. Kalman Fixed-Lag Smoother

Let $t_{l}=t-l$; we can then give the Kalman fixed-lag smoothing estimate for the system model (2.1)–(2.3), which develops [6], in the following theorem.

Theorem 3.9. Consider the system (2.1)–(2.3) with the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ and scalars $\{\gamma(0),\ldots,\gamma(t)\}$ for a fixed scalar $l\ (l<t)$; the Kalman fixed-lag smoother can be given by the following recursive equations:
$$\hat{\mathbf{x}}(t_{l}\mid t) = \hat{\mathbf{x}}(t_{l}\mid t_{l}) + \sum_{k=1}^{l} P(t_{l},k)C^{T}\left[CP(t_{l}+k)C^{T}+D\Pi(t_{l}+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t_{l}+k)-\gamma(t_{l}+k)C\hat{\mathbf{x}}(t_{l}+k)\right], \quad t>l, \tag{3.38}$$
and the corresponding smoother error covariance matrix $P(t_{l}\mid t)$ can be given as
$$P(t_{l}\mid t) = P(t_{l}) - \sum_{k=0}^{l} \gamma(t_{l}+k)P(t_{l},k)C^{T}\left[CP(t_{l}+k)C^{T}+D\Pi(t_{l}+k)MD^{T}+R\right]^{-1}CP^{T}(t_{l},k), \quad t>l, \tag{3.39}$$
where
$$P(t_{l},k) = P(t_{l},k-1)\left[A-\gamma(t_{l}+k-1)K(t_{l}+k-1)C\right]^{T}, \quad k=1,\ldots,l, \tag{3.40}$$
with $P(t_{l},0)=P(t_{l})$, and $P(t_{l}+k)$, $P(t_{l})$, $\hat{\mathbf{x}}(t_{l}+k)$, and $\hat{\mathbf{x}}(t_{l}\mid t_{l})$ can be given from Lemma 3.5.

Proof. From the projection theorem, we have
$$\begin{aligned} \hat{\mathbf{x}}(t_{l}\mid t) &= \operatorname{Proj}\{\mathbf{x}(t_{l})\mid\mathbf{y}(0),\ldots,\mathbf{y}(t_{l}),\mathbf{y}(t_{l}+1),\ldots,\mathbf{y}(t)\}\\ &= \operatorname{Proj}\{\mathbf{x}(t_{l})\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t_{l}),\tilde{\mathbf{y}}(t_{l}+1),\ldots,\tilde{\mathbf{y}}(t)\}\\ &= \operatorname{Proj}\{\mathbf{x}(t_{l})\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t_{l})\} + \operatorname{Proj}\{\mathbf{x}(t_{l})\mid\tilde{\mathbf{y}}(t_{l}+1),\ldots,\tilde{\mathbf{y}}(t)\}\\ &= \hat{\mathbf{x}}(t_{l}\mid t_{l}) + \sum_{k=1}^{l}\operatorname{Proj}\{\mathbf{x}(t_{l})\mid\tilde{\mathbf{y}}(t_{l}+k)\}\\ &= \hat{\mathbf{x}}(t_{l}\mid t_{l}) + \sum_{k=1}^{l}\operatorname{Proj}\left\{\mathbf{x}(t_{l})\mid\gamma(t_{l}+k)\left[C\mathbf{e}(t_{l}+k)+D\mathbf{x}(t_{l}+k)\mathbf{w}(t_{l}+k)+\mathbf{v}(t_{l}+k)\right]\right\}\\ &= \hat{\mathbf{x}}(t_{l}\mid t_{l}) + \sum_{k=1}^{l} P(t_{l},k)C^{T}\left[CP(t_{l}+k)C^{T}+D\Pi(t_{l}+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t_{l}+k)-\gamma(t_{l}+k)C\hat{\mathbf{x}}(t_{l}+k)\right], \end{aligned} \tag{3.41}$$
which is (3.38).
Noting that $\mathbf{x}(t_{l})$ is uncorrelated with $\mathbf{w}(t_{l}+k-1)$, $\mathbf{u}(t_{l}+k-1)$, and $\mathbf{v}(t_{l}+k-1)$ for $k=1,\ldots,l$, then from (3.5), we have
$$\begin{aligned} P(t_{l},k) &= \mathcal{E}\left[\mathbf{x}(t_{l})\mathbf{e}^{T}(t_{l}+k)\mid\mathbf{y}_{0}^{t_{l}+k},\gamma_{0}^{t_{l}+k}\right]\\ &= \left\langle\mathbf{x}(t_{l}),\ \left[A-\gamma(t_{l}+k-1)K(t_{l}+k-1)C\right]\mathbf{e}(t_{l}+k-1) + \left[B_{1}-\gamma(t_{l}+k-1)K(t_{l}+k-1)D\right]\mathbf{x}(t_{l}+k-1)\mathbf{w}(t_{l}+k-1)\right.\\ &\qquad \left.{}+ B_{2}\mathbf{u}(t_{l}+k-1) - \gamma(t_{l}+k-1)K(t_{l}+k-1)\mathbf{v}(t_{l}+k-1)\right\rangle\\ &= P(t_{l},k-1)\left[A-\gamma(t_{l}+k-1)K(t_{l}+k-1)C\right]^{T}, \end{aligned} \tag{3.42}$$
which is (3.40).
Next, we will give the proof of the covariance matrix $P(t_{l}\mid t)$. Since $\mathbf{x}(t_{l})=\hat{\mathbf{x}}(t_{l}\mid t)+\mathbf{e}(t_{l}\mid t)=\hat{\mathbf{x}}(t_{l})+\mathbf{e}(t_{l})$, from (3.41), we have
$$\mathbf{e}(t_{l}\mid t) = \mathbf{e}(t_{l}) - \sum_{k=0}^{l} P(t_{l},k)C^{T}\left[CP(t_{l}+k)C^{T}+D\Pi(t_{l}+k)MD^{T}+R\right]^{-1}\tilde{\mathbf{y}}(t_{l}+k), \tag{3.43}$$
that is,
$$\mathbf{e}(t_{l}) = \mathbf{e}(t_{l}\mid t) + \sum_{k=0}^{l} P(t_{l},k)C^{T}\left[CP(t_{l}+k)C^{T}+D\Pi(t_{l}+k)MD^{T}+R\right]^{-1}\tilde{\mathbf{y}}(t_{l}+k). \tag{3.44}$$
Thus, (3.39) can be given.

Remark 3.10. It should be noted that the result of Kalman smoothing is better than that of Kalman filtering for normal systems without packet loss, since more measurement information is available. However, this is not the case if the measurement is lost, which can be verified in the next section. In addition, in Theorems 3.7 and 3.9, we have changed the predictor form $\hat{\mathbf{x}}(t)$ or $\hat{\mathbf{x}}(t_{l})$ in [6] to the filter form $\hat{\mathbf{x}}(t\mid t)$ or $\hat{\mathbf{x}}(t_{l}\mid t_{l})$, which is more convenient for comparison with the Kalman fixed-point smoother (3.45).

3.3. Kalman Fixed-Point Smoother

In this subsection, we will present the Kalman fixed-point smoother by the projection theorem and innovation analysis. We can directly give the theorem which develops [7] as follows.

Theorem 3.11. Consider the system (2.1)–(2.3) with the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(j)\}$ and scalars $\{\gamma(0),\ldots,\gamma(j)\}$; then the Kalman fixed-point smoother can be given by the following recursive equations:
$$\hat{\mathbf{x}}(t\mid j) = \hat{\mathbf{x}}(t\mid t) + \sum_{k=1}^{j-t} P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+k)-\gamma(t+k)C\hat{\mathbf{x}}(t+k)\right], \quad j=t+1,\ldots,N, \tag{3.45}$$
and the corresponding smoother error covariance matrix $P(t\mid j)$ can be given as
$$P(t\mid j) = P(t\mid j-1) - \gamma(j)K(t\mid j)\left[CP(j)C^{T}+D\Pi(j)MD^{T}+R\right]K^{T}(t\mid j), \tag{3.46}$$
where
$$P(t,k) = P(t,k-1)\left[A-\gamma(t+k-1)K(t+k-1)C\right]^{T}, \quad k=1,\ldots,j-t, \tag{3.47}$$
$$K(t\mid j) = \gamma(j)P(t)\Psi_{1}^{T}(j,t)\left[CP(j)C^{T}+D\Pi(j)MD^{T}+R\right]^{-1}, \tag{3.48}$$
$$\Psi_{1}(j,t) = \Psi_{1}(j-1)\cdots\Psi_{1}(t), \tag{3.49}$$
with $P(t,0)=P(t)$, $P(t\mid t-1)=P(t)$, $\Psi_{1}(t)=A-\gamma(t)K(t)C$, and $P(t+k)$, $P(t)$, and $\hat{\mathbf{x}}(t)$ can be given from (3.19) and (3.18).

Proof. The proof of (3.45) and (3.47) can be found in [7], and we only give the proof of the covariance matrix $P(t\mid j)$. From the projection theorem, we have
$$\begin{aligned} \hat{\mathbf{x}}(t\mid t+k) &= \operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t),\tilde{\mathbf{y}}(t+1),\ldots,\tilde{\mathbf{y}}(t+k)\}\\ &= \operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t+k-1)\} + \operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(t+k)\}\\ &= \hat{\mathbf{x}}(t\mid t+k-1) + K(t\mid t+k)\tilde{\mathbf{y}}(t+k), \end{aligned} \tag{3.50}$$
where
$$K(t\mid t+k) = \mathcal{E}\left[\mathbf{x}(t)\tilde{\mathbf{y}}^{T}(t+k)\right]Q_{\tilde{\mathbf{y}}}^{-1}(t+k). \tag{3.51}$$
Define $\Psi_{1}(t)=A-\gamma(t)K(t)C$ and $\Psi_{2}(t)=B_{1}-\gamma(t)K(t)D$; then (3.26) can be rewritten as
$$\mathbf{e}(t+1) = \Psi_{1}(t)\mathbf{e}(t) + \Psi_{2}(t)\mathbf{x}(t)\mathbf{w}(t) + B_{2}\mathbf{u}(t) - \gamma(t)K(t)\mathbf{v}(t), \tag{3.52}$$
and recursively computing (3.52), we have
$$\mathbf{e}(t+k) = \Psi_{1}(t+k,t)\mathbf{e}(t) + \Psi_{2}(t+k,t)\mathbf{x}(t)\mathbf{w}(t) + \sum_{i=t+1}^{t+k}\Psi_{1}(t+k,i)\left[B_{2}\mathbf{u}(i-1)-\gamma(i-1)K(i-1)\mathbf{v}(i-1)\right], \tag{3.53}$$
where
$$\Psi_{1}(t+k,i)=\Psi_{1}(t+k-1)\cdots\Psi_{1}(i),\ i<t+k,\qquad \Psi_{2}(t+k,t)=\Psi_{1}(t+k-1)\cdots\Psi_{1}(t+1)\Psi_{2}(t),\qquad \Psi_{1}(t+k,t+k)=I_{n}. \tag{3.54}$$
By considering (3.7), (3.53), and Theorem 3.4,
$$\mathcal{E}\left[\mathbf{x}(t)\tilde{\mathbf{y}}^{T}(t+k)\right] = \gamma(t+k)P(t)\Psi_{1}^{T}(t+k,t), \tag{3.55}$$
so
$$K(t\mid t+k) = \gamma(t+k)P(t)\Psi_{1}^{T}(t+k,t)\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}, \tag{3.56}$$
which is (3.48) by setting $t+k=j$.
From (3.50), considering $\mathbf{x}(t)=\hat{\mathbf{x}}(t\mid t+k)+\mathbf{e}(t\mid t+k)=\hat{\mathbf{x}}(t\mid t+k-1)+\mathbf{e}(t\mid t+k-1)$, we have
$$\mathbf{e}(t\mid t+k) = \mathbf{e}(t\mid t+k-1) - K(t\mid t+k)\tilde{\mathbf{y}}(t+k), \tag{3.57}$$
that is,
$$\mathbf{e}(t\mid t+k-1) = \mathbf{e}(t\mid t+k) + K(t\mid t+k)\tilde{\mathbf{y}}(t+k). \tag{3.58}$$
Then according to (3.30), we have
$$P(t\mid t+k-1) = P(t\mid t+k) + K(t\mid t+k)Q_{\tilde{\mathbf{y}}}(t+k)K^{T}(t\mid t+k). \tag{3.59}$$
Combined with (3.23), we have (3.46) by setting $t+k=j$.

3.4. Comparison

In this subsection, we compare the three smoothers. It can easily be seen from (3.31), (3.38), and (3.45) that the smoothers are all given by a Kalman filter plus an updating part. For convenient comparison, the smoother error covariance matrices are given in (3.32), (3.39), and (3.46), which develops the main results in [6, 7], where only the Kalman smoothers are given.

It can easily be seen from (3.31) and (3.38) that the Kalman fixed-interval smoother is similar to the fixed-lag smoother. For (3.31), the number of update terms is $N-t$, while it is $l$ in (3.38). For the Kalman fixed-interval smoother, $N$ is fixed and $t$ is variable, so for $t=0,1,\ldots,N-1$, the corresponding smoother $\hat{\mathbf{x}}(t\mid N)$ can be given. For the Kalman fixed-lag smoother, $l$ is fixed and $t$ is variable, so for $t=l+1,l+2,\ldots$, the corresponding smoother $\hat{\mathbf{x}}(t_{l}\mid t)$ can be given. The two smoothers are similar in the form of (3.31) and (3.38). However, it is hard to see which smoother is better from the smoother error covariance matrices (3.32) and (3.39), which will be examined in the numerical example.

For the Kalman fixed-point smoother in Theorem 3.11, the time $t$ is fixed, and in this case the Kalman fixed-point smoother differs inherently from the fixed-interval and fixed-lag smoothers. Here $j$ can be equal to $t+1,t+2,\ldots$.

4. Numerical Example

In this section, we give an example to show the effectiveness of the presented results and to compare them.

Consider the system (2.1)–(2.3) with $N=80$, $l=20$,
$$A=\begin{bmatrix}0.8 & 0.3\\ 0 & 0.6\end{bmatrix},\quad B_{1}=\begin{bmatrix}0.2 & 0\\ 0.1 & 0.9\end{bmatrix},\quad B_{2}=\begin{bmatrix}0.5\\ 0.8\end{bmatrix},\quad C=\begin{bmatrix}1 & 2\end{bmatrix},\quad D=\begin{bmatrix}2 & 2\end{bmatrix},\quad \gamma(t)=\frac{1+(-1)^{t}}{2}. \tag{4.1}$$
The initial state value is $\mathbf{x}(0)=\begin{bmatrix}1 & 0.5\end{bmatrix}^{T}$, and the noises $\mathbf{u}(t)$ and $\mathbf{w}(t)$ are uncorrelated white noises with zero means and unit covariances, that is, $Q=1$, $M=1$. The observation noise $\mathbf{v}(t)$ has zero mean and covariance $R=0.01$.
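The parameters in (4.1) transcribe directly into code. Note that the chosen deterministic sequence $\gamma(t)=(1+(-1)^{t})/2$ alternates $1,0,1,0,\ldots$, so every other packet is lost.

```python
import numpy as np

# Parameters of the numerical example, transcribed from (4.1)
N, l = 80, 20
A  = np.array([[0.8, 0.3], [0.0, 0.6]])
B1 = np.array([[0.2, 0.0], [0.1, 0.9]])
B2 = np.array([[0.5], [0.8]])
C  = np.array([[1.0, 2.0]])
D  = np.array([[2.0, 2.0]])
Q, M, R = 1.0, 1.0, 0.01
x0 = np.array([1.0, 0.5])
gamma = [(1 + (-1) ** t) // 2 for t in range(N + 1)]  # 1, 0, 1, 0, ...
```

This loss pattern is the worst case the comparison in Figure 7 is run under: half of all measurements carry no information.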

Our aim is to calculate the Kalman fixed-interval smoother $\hat{\mathbf{x}}(t\mid N)$ of the signal $\mathbf{x}(t)$, the Kalman fixed-lag smoother $\hat{\mathbf{x}}(t_{l}\mid t)$ of the signal $\mathbf{x}(t_{l})$ for $t=l+1,\ldots,N$, and the Kalman fixed-point smoother $\hat{\mathbf{x}}(t\mid j)$ of the signal $\mathbf{x}(t)$ for $j=t+1,\ldots,N$, based on the observations $\{\mathbf{y}(i)\}_{i=0}^{N}$, $\{\mathbf{y}(i)\}_{i=0}^{t}$, and $\{\mathbf{y}(i)\}_{i=0}^{j}$, respectively. For the Kalman fixed-point smoother $\hat{\mathbf{x}}(t\mid j)$, we set $t=30$.

According to Theorem 3.7, the computation of the Kalman fixed-interval smoother $\hat{\mathbf{x}}(t\mid N)$ can be summarized in the following three steps.

Step 1. Compute $\Pi(t+1)$, $P(t+1)$, $\hat{\mathbf{x}}(t+1)$, and $\hat{\mathbf{x}}(t+1\mid t+1)$ by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for $t=0,\ldots,N-1$, respectively.

Step 2. Fix $t\in[0,N]$; compute $P(t,k)$ by (3.33) for $k=1,\ldots,N-t$ with the initial value $P(t,0)=P(t)$.

Step 3. Compute the Kalman fixed-interval smoother $\hat{\mathbf{x}}(t\mid N)$ by (3.31) for $t=0,\ldots,N$ with fixed $N$.
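Steps 1–3 above can be sketched end-to-end in Python. The function below runs the forward filter of Lemma 3.5 (a scalar measurement is assumed for brevity) and then applies (3.33) and (3.31) for every $t$; the test data are illustrative assumptions, not the paper's simulation.

```python
import numpy as np

def fixed_interval_smoother(A, B1, B2, C, D, Q, M, R, P0, ys, gammas):
    """Sketch of Theorem 3.7 for a scalar measurement: filter forward, then smooth."""
    c, d = C.ravel(), D.ravel()
    n, T = A.shape[0], len(ys)
    # Step 1: forward Kalman filter (Lemma 3.5), storing what the smoother needs.
    x, P, Pi = np.zeros(n), P0.copy(), P0.copy()
    preds, filts, Ps, ss, Fs = [], [], [], [], []
    for y, g in zip(ys, gammas):
        s = float(c @ P @ c + M * (d @ Pi @ d)) + R   # C P C' + D Pi M D' + R
        innov = y - g * float(c @ x)
        preds.append(x.copy()); Ps.append(P.copy()); ss.append(s)
        filts.append(x + (P @ c) * innov / s)         # x_hat(t|t), (3.17)
        K = (A @ P @ c + M * (B1 @ Pi @ d)) / s       # gain (3.21)
        F = A - g * np.outer(K, c)                    # A - gamma(t) K(t) C
        Fs.append(F)
        x = A @ x + K * innov                         # (3.18)
        G = B1 - g * np.outer(K, d)
        P = F @ P @ F.T + M * (G @ Pi @ G.T) + Q * (B2 @ B2.T) + g * R * np.outer(K, K)  # (3.19)
        Pi = A @ Pi @ A.T + M * (B1 @ Pi @ B1.T) + Q * (B2 @ B2.T)                       # (3.20)
    # Steps 2-3: for each t, recurse P(t,k) by (3.33) and accumulate (3.31).
    smoothed = []
    for t in range(T):
        xs, Ptk = filts[t].copy(), Ps[t].copy()       # P(t,0) = P(t)
        for k in range(1, T - t):
            Ptk = Ptk @ Fs[t + k - 1].T               # (3.33)
            innov = ys[t + k] - gammas[t + k] * float(c @ preds[t + k])
            xs = xs + (Ptk @ c) * innov / ss[t + k]   # one update term of (3.31)
        smoothed.append(xs)
    return smoothed, filts
```

At the final time there are no future measurements, so the smoothed estimate coincides with the filtered one, which is a convenient consistency check.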
Similarly, according to Theorem 3.9, the computation of the Kalman fixed-lag smoother $\hat{\mathbf{x}}(t_{l}\mid t)$ can be summarized in the following three steps.

Step 1. Compute $\Pi(t_{l}+k+1)$, $P(t_{l}+k+1)$, $\hat{\mathbf{x}}(t_{l}+k+1)$, and $\hat{\mathbf{x}}(t_{l}+k+1\mid t_{l}+k+1)$ by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for $t>l$ and $k=1,\ldots,l$, respectively.

Step 2. Fix $t\in[0,N]$; compute $P(t_{l},k)$ by (3.40) for $k=1,\ldots,l$ with the initial value $P(t_{l},0)=P(t_{l})$.

Step 3. Compute the Kalman fixed-lag smoother x̂(t−l|t) by (3.38) for t > l.
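In the same standard-Kalman analogue (the paper's equations (3.38)–(3.40), which account for packet loss and multiplicative noise, are not shown in this section), a fixed-lag estimate x̂(t−l|t) can be obtained by filtering forward over y(0..t) and then taking l backward smoothing steps. The sketch below does exactly that and is illustrative only.

```python
import numpy as np

def fixed_lag_estimate(A, C, Q, R, x0, P0, ys, t, l):
    """Illustrative standard-Kalman fixed-lag smoother: filter on y(0..t)
    (Step 1 analogue), then l backward RTS-type steps down to time t - l
    (Steps 2-3 analogue). Returns the estimate of x(t - l) given y(0..t)."""
    xp, Pp, xf, Pf = [], [], [], []
    x, P = x0, P0
    for y in ys[: t + 1]:                        # use observations up to t only
        xp.append(x); Pp.append(P)
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x, P = x + K @ (y - C @ x), P - K @ C @ P
        xf.append(x); Pf.append(P)
        x, P = A @ x, A @ P @ A.T + Q
    xs, Ps = xf[t], Pf[t]
    for k in range(t - 1, t - l - 1, -1):        # l backward steps to t - l
        G = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])
        xs = xf[k] + G @ (xs - xp[k + 1])
        Ps = Pf[k] + G @ (Ps - Pp[k + 1]) @ G.T
    return xs, Ps
```

With l = 0 the backward loop is empty and the function reduces to the filter, which makes the filter/smoother comparison of the example easy to reproduce.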
According to Theorem 3.11, the computation of the Kalman fixed-point smoother x̂(t|j) can be summarized in three steps as shown below.

Step 1. Compute Π(t+1), P(t+1), x̂(t+1), and x̂(t+1|t+1) by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for t = 0, …, N−1, respectively.

Step 2. Compute P(t, k) by (3.47) for k = 1, …, j−t with the initial value P(t, 0) = P(t).

Step 3. Compute the Kalman fixed-point smoother x̂(t|j) by (3.45) for j = t+1, …, N.
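The fixed-point behaviour discussed next, the estimate x̂(t|j) settling as j grows, can be reproduced in the standard-Kalman analogue of (3.45)–(3.47) by re-smoothing over y(0..j) for each j and reading off the value at the fixed time t; the error covariance P(t|j) is then nonincreasing in j. This brute-force sketch trades efficiency for transparency and is not the paper's recursion.

```python
import numpy as np

def fixed_point_estimate(A, C, Q, R, x0, P0, ys, t, j):
    """Illustrative fixed-point smoother x_hat(t | j): filter on y(0..j),
    then smooth backward from j to the fixed time t."""
    xp, Pp, xf, Pf = [], [], [], []
    x, P = x0, P0
    for y in ys[: j + 1]:                        # use observations up to j
        xp.append(x); Pp.append(P)
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x, P = x + K @ (y - C @ x), P - K @ C @ P
        xf.append(x); Pf.append(P)
        x, P = A @ x, A @ P @ A.T + Q
    xs, Ps = xf[j], Pf[j]
    for k in range(j - 1, t - 1, -1):            # backward to the fixed point t
        G = Pf[k] @ A.T @ np.linalg.inv(Pp[k + 1])
        xs = xf[k] + G @ (xs - xp[k + 1])
        Ps = Pf[k] + G @ (Ps - Pp[k + 1]) @ G.T
    return xs, Ps
```

For a fixed t, tracking the trace of P(t|j) as j = t, t+1, … grows shows the estimate stabilizing, which mirrors the settling seen after j = 35 in Figures 1 and 2.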
The tracking performance of the Kalman fixed-point smoother x̂(t|j) = [x̂1(t|j) x̂2(t|j)]^T is drawn in Figures 1 and 2, where the time t = 30 is fixed and j varies. It can easily be seen from the two figures that the smoother changes considerably at first, and after j = 35 the fixed-point smoother settles; that is, the observations y(j), y(j+1), … have little effect on the estimate at time t = 30 due to packet loss. In addition, at j = 30 the estimate (the filter) is closer to the original signal than for any j > 30, which shows that the Kalman filter outperforms the fixed-point smoother for a WSN with packet loss. In fact, the smoothers below are also not as good as the filter.
The fixed-interval smoother x̂(t|N) = [x̂1(t|N) x̂2(t|N)]^T is given in Figures 3 and 4, and the tracking performance of the fixed-lag smoother x̂(t−l|t) = [x̂1(t−l|t) x̂2(t−l|t)]^T is given in Figures 5 and 6. As these figures show, both smoothers track the original signal reasonably well.
In addition, following the comparison at the end of the last section, we compare the sums of the error covariances of the fixed-interval and fixed-lag smoothers (the fixed-point smoother differs from the other two, so its error covariance need not be compared, as explained at the end of the last section), together with the sum of the error covariance of the Kalman filter; all are drawn in Figure 7. As seen from Figure 7, due to packet loss it is hard to say which smoother is better, and neither smoother outperforms the filter.

Figure 1: The original signal and its fixed-point smoother x̂1(30|j), where the blue line is the original signal and the red line is the smoother.
Figure 2: The original signal and its fixed-point smoother x̂2(30|j), where the blue line is the original signal and the red line is the smoother.
Figure 3: The original signal and its fixed-interval smoother x̂1(t|N), where the blue line is the original signal and the red line is the smoother.
Figure 4: The original signal and its fixed-interval smoother x̂2(t|N), where the blue line is the original signal and the red line is the smoother.
Figure 5: The original signal and its fixed-lag smoother x̂1(t−l|t), where the blue line is the original signal and the red line is the smoother.
Figure 6: The original signal and its fixed-lag smoother x̂2(t−l|t), where the blue line is the original signal and the red line is the smoother.
Figure 7: The sum of the error covariance for the filter, the fixed-interval smoother, and the fixed-lag smoother, where the blue line is for the filter, the green line is for the fixed-interval smoother, and the red line is for the fixed-lag smoother.

5. Conclusion

In this paper, we have studied Kalman fixed-interval smoothing and fixed-lag smoothing [6], and fixed-point smoothing [7], for wireless sensor network systems with packet loss and multiplicative noises. The smoothers are given by recursive equations. The smoother error covariance matrices of fixed-interval and fixed-lag smoothing are given by Riccati equations without recursion, while the smoother error covariance matrix of fixed-point smoothing is given by a recursive Riccati equation and a recursive Lyapunov equation. A comparison among the fixed-point, fixed-interval, and fixed-lag smoothers has been given, and a numerical example has verified the proposed approach. The proposed approach will be useful for studying more difficult problems, for example, WSNs with random delay and packet loss [19].

Disclosure

X. Lu is affiliated with Shandong University of Science and Technology and also with Shandong University, Qingdao, China. H. Wang and X. Wang are affiliated with Shandong University of Science and Technology, Qingdao, China.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (60804034), the Scientific Research Foundation for the Excellent Middle-Aged and Youth Scientists of Shandong Province (BS2012DX031), the Natural Science Foundation of Shandong Province (ZR2009GQ006), the SDUST Research Fund (2010KYJQ105), the Project of Shandong Province Higher Educational Science and Technology Program (J11LG53), and the "Taishan Scholarship" Construction Engineering.

References

  1. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, The Technology Press and Wiley, New York, NY, USA, 1949.
  2. R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
  3. T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation, Prentice-Hall, Englewood Cliffs, NJ, USA, 1999.
  4. B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice-Hall, Englewood Cliffs, NJ, USA, 1979.
  5. X. Lu, H. Zhang, W. Wang, and K.-L. Teo, "Kalman filtering for multiple time-delay systems," Automatica, vol. 41, no. 8, pp. 1455–1461, 2005.
  6. X. Lu, W. Wang, and M. Li, "Kalman fixed-interval and fixed-lag smoothing for wireless sensor systems with multiplicative noises," in Proceedings of the 24th Chinese Control and Decision Conference (CCDC '12), Taiyuan, China, May 2012.
  7. X. Lu and W. Wang, "Kalman fixed-point smoothing for wireless sensor systems with multiplicative noises," in Proceedings of the 24th Chinese Control and Decision Conference (CCDC '12), Taiyuan, China, May 2012.
  8. D. S. Chu and S. W. Gao, "State optimal estimation algorithm for singular systems with multiplicative noise," Periodical of Ocean University of China, vol. 38, no. 5, pp. 814–818, 2008.
  9. H. Zhang, X. Lu, W. Zhang, and W. Wang, "Kalman filtering for linear time-delayed continuous-time systems with stochastic multiplicative noises," International Journal of Control, Automation, and Systems, vol. 5, no. 4, pp. 355–363, 2007.
  10. B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. I. Jordan, and S. S. Sastry, "Kalman filtering with intermittent observations," IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1453–1464, 2004.
  11. X. Liu and A. Goldsmith, "Kalman filtering with partial observation losses," in Proceedings of the 15th International Symposium on the Mathematical Theory of Networks and Systems, pp. 4180–4183, Atlantis, Bahamas, December 2004.
  12. A. S. Leong, S. Dey, and J. S. Evans, "On Kalman smoothing with random packet loss," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3346–3351, 2008.
  13. L. Schenato, "Optimal estimation in networked control systems subject to random delay and packet drop," IEEE Transactions on Automatic Control, vol. 53, no. 5, pp. 1311–1317, 2008.
  14. M. Huang and S. Dey, "Stability of Kalman filtering with Markovian packet losses," Automatica, vol. 43, no. 4, pp. 598–607, 2007.
  15. L. Schenato, B. Sinopoli, M. Franceschetti, K. Poolla, and S. S. Sastry, "Foundations of control and estimation over lossy networks," Proceedings of the IEEE, vol. 95, no. 1, pp. 163–187, 2007.
  16. X. Lu and T. Wang, "Optimal estimation with observation loss and multiplicative noise," in Proceedings of the 8th World Congress on Intelligent Control and Automation (WCICA '10), pp. 248–251, July 2010.
  17. X. Lu, M. Li, and Q. Pu, "Kalman filtering for wireless sensor network with multiple multiplicative noises," in Proceedings of the 23rd Chinese Control and Decision Conference (CCDC '11), pp. 2376–2381, May 2011.
  18. X. Lu, X. Wang, and H. Wang, "Optimal information fusion Kalman filtering for WSNs with multiplicative noise," in Proceedings of the International Conference on System Science and Engineering (ICSSE '12), June 2012.
  19. L. Schenato, "Kalman filtering for networked control systems with random delay and packet loss," in Proceedings of the Conference on Mathematical Theory of Networks and Systems (MTNS '06), Kyoto, Japan, July 2006.