Abstract

This paper deals with the Kalman (or $H_2$) smoothing problem for wireless sensor networks (WSNs) with multiplicative noises. Packet loss occurs in the observation equations, and multiplicative noises occur both in the system state equation and in the observation equations. The Kalman smoothers, which include the Kalman fixed-interval smoother, the Kalman fixed-lag smoother, and the Kalman fixed-point smoother, are given by solving Riccati equations and Lyapunov equations based on the projection theorem and innovation analysis. An example is presented to illustrate the effectiveness of the approach. Furthermore, the three proposed Kalman smoothers are compared.

1. Introduction

The linear estimation problem has been one of the key research topics of the control community [1]. As is well known, two indexes are used to investigate linear estimation: one is the $H_2$ index, and the other is the $H_\infty$ index. Under the $H_2$ index, Kalman filtering [2–4] is an important approach to linear estimation besides Wiener filtering. In general, Kalman filtering, which is based on a state-space model, is preferable to Wiener filtering, since it is recursive and can handle time-variant systems [1, 2, 5]. This has motivated many researchers to employ Kalman filtering for linear time-variant or time-invariant estimation, and Kalman filtering has become a popular and efficient approach for normal linear systems. However, standard Kalman filtering cannot be directly used for estimation over wireless sensor networks (WSNs), since packet loss occurs and sometimes multiplicative noises also occur [6, 7].

Linear estimation for systems with multiplicative noises under the $H_2$ index has been studied extensively in [8, 9]. Reference [8] considered the optimal state estimation algorithm for singular systems with multiplicative noise, and estimators of the dynamic noise and the measurement noise were proposed. In [9], we presented linear filtering for continuous-time systems with time-delayed measurements and multiplicative noises under the $H_2$ index.

Wireless sensor networks have become popular in recent years, and the corresponding estimation problem has attracted many researchers' attention [10–15]. It should be noted that in the above works only packet loss occurs. Reference [10], an important and ground-breaking work, considered the Kalman filtering problem for WSNs with intermittent packet loss, and the Kalman filter together with upper and lower bounds on the error covariance was presented. Reference [11] extended the work of [10]: the measurements are divided into two parts which are sent over different channels with different packet-loss rates, and the Kalman filter together with its covariance matrix is given. Reference [14] further develops the result of [11] and establishes the stability of the Kalman filter with Markovian packet loss.

However, the above references mainly focus on linear systems with packet loss, and they are not applicable when multiplicative noises appear in the system model [6, 7, 16–18]. For the Kalman filtering problem for wireless sensor networks with multiplicative noises, [16–18] give preliminary results: [16] derives the Kalman filter, [17] deals with the Kalman filter for WSN systems with two multiplicative noises and two measurements sent over different channels with different packet-drop rates, and [18] gives the information fusion Kalman filter for wireless sensor networks with packet loss and multiplicative noises.

In this paper, the Kalman smoothing problem, including fixed-point smoothing [6], fixed-interval smoothing, and fixed-lag smoothing [7], for WSNs with packet loss is studied. Multiplicative noises occur both in the state equation and in the observation equation, which extends the work of [8], where multiplicative noises occur only in the state equation. Three Kalman smoothers are given by recursive equations. The smoother error covariance matrices of fixed-interval smoothing and fixed-lag smoothing are given by a Riccati equation without recursion, while the smoother error covariance matrix of fixed-point smoothing is given by a recursive Riccati equation and a recursive Lyapunov equation, which develops the work of [6, 7], where some of the main theorems on Kalman smoothing contain errors.

The rest of the paper is organized as follows. In Section 2, we present the system model and state the problem to be dealt with in the paper. The main results on the Kalman smoothers are given in Section 3. In Section 4, a numerical example is given to show the performance of the smoothers. Some conclusions are drawn in Section 5.

2. Problem Statement

Consider the following discrete-time wireless sensor network system with multiplicative noises:
$$\mathbf{x}(t+1)=A\mathbf{x}(t)+B_1\mathbf{x}(t)\mathbf{w}(t)+B_2\mathbf{u}(t), \tag{2.1}$$
$$\bar{\mathbf{y}}(t)=C\mathbf{x}(t)+D\mathbf{x}(t)\mathbf{w}(t)+\mathbf{v}(t), \tag{2.2}$$
$$\mathbf{y}(t)=\gamma(t)\bar{\mathbf{y}}(t), \tag{2.3}$$
with given initial state $\mathbf{x}(0)$, where $\mathbf{x}(t)\in\mathbb{R}^{n}$ is the state, $\bar{\mathbf{y}}(t)\in\mathbb{R}^{p}$ is the ideal sensor output, $\mathbf{y}(t)\in\mathbb{R}^{p}$ is the measurement received through the network, $\mathbf{u}(t)\in\mathbb{R}^{r}$ is the input sequence, $\mathbf{v}(t)\in\mathbb{R}^{p}$ is a zero-mean white noise (the additive noise), and $\mathbf{w}(t)\in\mathbb{R}$ is a zero-mean white noise (the multiplicative noise). $A\in\mathbb{R}^{n\times n}$, $B_1\in\mathbb{R}^{n\times n}$, $B_2\in\mathbb{R}^{n\times r}$, $C\in\mathbb{R}^{p\times n}$, and $D\in\mathbb{R}^{p\times n}$ are known time-invariant matrices.

Assumption 2.1. $\gamma(t)\ (t\ge 0)$ is a Bernoulli random variable with probability distribution
$$P(\gamma(t))=\begin{cases}\lambda(t), & \gamma(t)=1,\\ 1-\lambda(t), & \gamma(t)=0.\end{cases} \tag{2.4}$$
$\gamma(t)$ is independent of $\gamma(s)$ for $s\neq t$, and $\gamma(t)$ is uncorrelated with $\mathbf{x}(0)$, $\mathbf{u}(t)$, $\mathbf{v}(t)$, and $\mathbf{w}(t)$.

Assumption 2.2. The initial state $\mathbf{x}(0)$ and the white noises $\mathbf{u}(t)$, $\mathbf{v}(t)$, and $\mathbf{w}(t)$ are mutually uncorrelated, with zero means and known covariance matrices, that is,
$$\left\langle \begin{bmatrix}\mathbf{x}(0)\\ \mathbf{u}(t)\\ \mathbf{v}(t)\\ \mathbf{w}(t)\end{bmatrix}, \begin{bmatrix}\mathbf{x}(0)\\ \mathbf{u}(s)\\ \mathbf{v}(s)\\ \mathbf{w}(s)\end{bmatrix} \right\rangle = \begin{bmatrix}\Pi(0) & 0 & 0 & 0\\ 0 & Q\delta_{t,s} & 0 & 0\\ 0 & 0 & R\delta_{t,s} & 0\\ 0 & 0 & 0 & M\delta_{t,s}\end{bmatrix}, \tag{2.5}$$
where $\langle a,b\rangle=\mathcal{E}[ab^{*}]$ and $\mathcal{E}$ denotes the mathematical expectation.
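To make the setup concrete, the following Python sketch simulates one trajectory of the model (2.1)–(2.3) under Assumptions 2.1 and 2.2. It is only an illustration of the signal model; the function name and argument conventions (covariance matrices Q, R, the scalar M, and the loss probability lam) are ours, not the paper's.

```python
import numpy as np

def simulate_wsn(A, B1, B2, C, D, Q, R, M, Pi0, lam, N, rng=None):
    """Simulate (2.1)-(2.3): scalar multiplicative noise w(t), additive noise v(t),
    input noise u(t), and Bernoulli packet loss gamma(t) with P(gamma(t)=1)=lam."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    x = rng.multivariate_normal(np.zeros(n), Pi0)        # initial state x(0)
    xs, ys, gammas = [], [], []
    for t in range(N + 1):
        w = rng.normal(0.0, np.sqrt(M))                  # multiplicative noise, variance M
        v = rng.multivariate_normal(np.zeros(C.shape[0]), R)
        u = rng.multivariate_normal(np.zeros(B2.shape[1]), Q)
        gamma = rng.binomial(1, lam)                     # packet received (1) or lost (0)
        y_bar = C @ x + D @ x * w + v                    # ideal measurement (2.2)
        xs.append(x); ys.append(gamma * y_bar); gammas.append(gamma)   # received measurement (2.3)
        x = A @ x + B1 @ x * w + B2 @ u                  # state update (2.1)
    return np.array(xs), np.array(ys), np.array(gammas)
```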
The Kalman smoothing problem considered in this paper for the system model (2.1)–(2.3) can be stated in the following three cases.

Problem 1 (fixed-interval smoothing). Given the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(N)\}$ and scalars $\{\gamma(0),\ldots,\gamma(N)\}$ for a fixed scalar $N$, find the fixed-interval smoothing estimate $\hat{\mathbf{x}}(t\mid N)$ of $\mathbf{x}(t)$ such that
$$\min\ \mathcal{E}\left\{\left[\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid N)\right]'\left[\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid N)\right]\mid\mathbf{y}_0^{N},\gamma_0^{N},\mathbf{x}(0),\Pi(0)\right\}, \tag{2.6}$$
where $0\le t\le N$, $\mathbf{y}_0^{N}\triangleq\{\mathbf{y}(0),\ldots,\mathbf{y}(N)\}$, and $\gamma_0^{N}\triangleq\{\gamma(0),\ldots,\gamma(N)\}$.

Problem 2 (fixed-lag smoothing). Given the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ and scalars $\{\gamma(0),\ldots,\gamma(t)\}$ for a fixed scalar $l$, find the fixed-lag smoothing estimate $\hat{\mathbf{x}}(t-l\mid t)$ of $\mathbf{x}(t-l)$ such that
$$\min\ \mathcal{E}\left\{\left[\mathbf{x}(t-l)-\hat{\mathbf{x}}(t-l\mid t)\right]'\left[\mathbf{x}(t-l)-\hat{\mathbf{x}}(t-l\mid t)\right]\mid\mathbf{y}_0^{t},\gamma_0^{t},\mathbf{x}(0),\Pi(0)\right\}, \tag{2.7}$$
where $\mathbf{y}_0^{t}\triangleq\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ and $\gamma_0^{t}\triangleq\{\gamma(0),\ldots,\gamma(t)\}$.

Problem 3 (fixed-point smoothing). Given the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(N)\}$ and scalars $\{\gamma(0),\ldots,\gamma(N)\}$ for a fixed time instant $t$, find the fixed-point smoothing estimate $\hat{\mathbf{x}}(t\mid j)$ of $\mathbf{x}(t)$ such that
$$\min\ \mathcal{E}\left\{\left[\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid j)\right]'\left[\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid j)\right]\mid\mathbf{y}_0^{j},\gamma_0^{j},\mathbf{x}(0),\Pi(0)\right\}, \tag{2.8}$$
where $0\le t<j\le N$, $\mathbf{y}_0^{j}\triangleq\{\mathbf{y}(0),\ldots,\mathbf{y}(j)\}$, $\gamma_0^{j}\triangleq\{\gamma(0),\ldots,\gamma(j)\}$, and $j=t+1,t+2,\ldots,N$.

3. Main Results

In this section, we give the Kalman fixed-interval smoother, the Kalman fixed-lag smoother, and the Kalman fixed-point smoother for the system model (2.1)–(2.3) and compare them with each other. Before giving the main results, we first present the Kalman filter for the system model (2.1)–(2.3), which will be used throughout this section. We begin with the following definition.

Definition 3.1. Given time instants $t$ and $j$, the estimator $\hat{\boldsymbol{\xi}}(t\mid j)$ is the optimal estimate of $\boldsymbol{\xi}(t)$ given the observation space
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t),\mathbf{y}(t+1),\ldots,\mathbf{y}(j);\gamma(0),\ldots,\gamma(t),\gamma(t+1),\ldots,\gamma(j)\}, \tag{3.1}$$
and the estimator $\hat{\boldsymbol{\xi}}(t)$ is the optimal estimate of $\boldsymbol{\xi}(t)$ given the observation space
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t-1);\gamma(0),\ldots,\gamma(t-1)\}. \tag{3.2}$$

Remark 3.2. It should be noted that the linear space $\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t-1);\gamma(0),\ldots,\gamma(t-1)\}$ means the linear space $\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t-1)\}$ under the condition that the scalars $\gamma(0),\ldots,\gamma(t-1)$ are known. The same holds for the linear space $\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t),\mathbf{y}(t+1),\ldots,\mathbf{y}(j);\gamma(0),\ldots,\gamma(t),\gamma(t+1),\ldots,\gamma(j)\}$.
Introduce the following notation:
$$\tilde{\mathbf{y}}(t)\triangleq\mathbf{y}(t)-\hat{\mathbf{y}}(t), \tag{3.3}$$
$$\mathbf{e}(t)\triangleq\mathbf{x}(t)-\hat{\mathbf{x}}(t), \tag{3.4}$$
$$P(t+1)\triangleq\mathcal{E}\left[\mathbf{e}(t+1)\mathbf{e}^{T}(t+1)\mid\mathbf{y}_0^{t},\gamma_0^{t}\right], \tag{3.5}$$
$$\Pi(t+1)\triangleq\mathcal{E}\left[\mathbf{x}(t+1)\mathbf{x}^{T}(t+1)\mid\mathbf{y}_0^{t},\gamma_0^{t}\right]. \tag{3.6}$$
It is clear that $\tilde{\mathbf{y}}(t)$ is the Kalman filtering innovation sequence for the system (2.1)–(2.3), and we have the relationship
$$\tilde{\mathbf{y}}(t)=\gamma(t)C\mathbf{e}(t)+\gamma(t)D\mathbf{x}(t)\mathbf{w}(t)+\gamma(t)\mathbf{v}(t). \tag{3.7}$$
The following lemma shows that $\{\tilde{\mathbf{y}}\}$ is indeed the innovation sequence. For simplicity of the discussion, we omit $\{\gamma(0),\ldots,\gamma(t-1)\}$ as in Remark 3.2.

Lemma 3.3. The sequence
$$\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\} \tag{3.8}$$
is the innovation sequence, which spans the same linear space as
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}. \tag{3.9}$$

Proof. Firstly, from (3.2) in Definition 3.1, we have
$$\hat{\mathbf{y}}(t)=\operatorname{Proj}\{\mathbf{y}(t)\mid\mathbf{y}(0),\ldots,\mathbf{y}(t-1)\}. \tag{3.10}$$
From (3.3) and (3.10), we have $\tilde{\mathbf{y}}(t)\in\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ for $t=0,1,2,\ldots$. Thus,
$$\mathcal{L}\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}\subset\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}. \tag{3.11}$$
Secondly, by induction, we have
$$\begin{aligned}
\mathbf{y}(0)&=\tilde{\mathbf{y}}(0)+\mathcal{E}\mathbf{y}(0)\in\mathcal{L}\{\tilde{\mathbf{y}}(0)\},\\
\mathbf{y}(1)&=\tilde{\mathbf{y}}(1)+\operatorname{Proj}\{\mathbf{y}(1)\mid\mathbf{y}(0)\}\in\mathcal{L}\{\tilde{\mathbf{y}}(0),\tilde{\mathbf{y}}(1)\},\\
&\ \,\vdots\\
\mathbf{y}(t)&=\tilde{\mathbf{y}}(t)+\operatorname{Proj}\{\mathbf{y}(t)\mid\mathbf{y}(0),\ldots,\mathbf{y}(t-1)\}\in\mathcal{L}\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}.
\end{aligned} \tag{3.12}$$
Thus,
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}\subset\mathcal{L}\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}. \tag{3.13}$$
So
$$\mathcal{L}\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}=\mathcal{L}\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}. \tag{3.14}$$
Next, we show that $\tilde{\mathbf{y}}$ is an uncorrelated sequence. In fact, for any $t,s$ $(t\neq s)$ we can assume $t>s$ without loss of generality, and it follows from (3.7) that
$$\mathcal{E}\left[\tilde{\mathbf{y}}(t)\tilde{\mathbf{y}}^{T}(s)\right]=\mathcal{E}\left[\gamma(t)C\mathbf{e}(t)\tilde{\mathbf{y}}^{T}(s)\right]+\mathcal{E}\left[\gamma(t)D\mathbf{x}(t)\mathbf{w}(t)\tilde{\mathbf{y}}^{T}(s)\right]+\mathcal{E}\left[\gamma(t)\mathbf{v}(t)\tilde{\mathbf{y}}^{T}(s)\right]. \tag{3.15}$$
Note that $\mathcal{E}[\gamma(t)D\mathbf{x}(t)\mathbf{w}(t)\tilde{\mathbf{y}}^{T}(s)]=0$ and $\mathcal{E}[\gamma(t)\mathbf{v}(t)\tilde{\mathbf{y}}^{T}(s)]=0$. Since $\mathbf{e}(t)$ is the state prediction error, it follows that $\mathcal{E}[\gamma(t)C\mathbf{e}(t)\tilde{\mathbf{y}}^{T}(s)]=0$, and thus $\mathcal{E}[\tilde{\mathbf{y}}(t)\tilde{\mathbf{y}}^{T}(s)]=0$, which implies that $\tilde{\mathbf{y}}(t)$ is uncorrelated with $\tilde{\mathbf{y}}(s)$. Hence, $\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}$ is an innovation sequence. This completes the proof of the lemma.

For the convenience of the derivation, we state the orthogonal projection theorem in the form of the next theorem without proof; readers may refer to [4, 5].

Theorem 3.4. Given the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ and the corresponding innovation sequence $\{\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}$ in Lemma 3.3, the projection of the state $\mathbf{x}$ can be given as
$$\operatorname{Proj}\{\mathbf{x}\mid\mathbf{y}(0),\ldots,\mathbf{y}(t)\}=\operatorname{Proj}\{\mathbf{x}\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}=\operatorname{Proj}\{\mathbf{x}\mid\mathbf{y}(0),\ldots,\mathbf{y}(t-1)\}+\mathcal{E}\left[\mathbf{x}\tilde{\mathbf{y}}^{T}(t)\right]\left\{\mathcal{E}\left[\tilde{\mathbf{y}}(t)\tilde{\mathbf{y}}^{T}(t)\right]\right\}^{-1}\tilde{\mathbf{y}}(t). \tag{3.16}$$

Following a reviewer's suggestion, we now recall the Kalman filter for the system model (2.1)–(2.3), which has been given in [7].

Lemma 3.5. Consider the system model (2.1)–(2.3) with scalars $\gamma(0),\ldots,\gamma(t+1)$ and the initial condition $\hat{\mathbf{x}}(0)=0$; the Kalman filter can be given as
$$\hat{\mathbf{x}}(t+1\mid t+1)=\hat{\mathbf{x}}(t+1)+P(t+1)C^{T}\left[CP(t+1)C^{T}+D\Pi(t+1)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+1)-\gamma(t+1)C\hat{\mathbf{x}}(t+1)\right], \tag{3.17}$$
$$\hat{\mathbf{x}}(t+1)=A\hat{\mathbf{x}}(t)+K(t)\left[\mathbf{y}(t)-\gamma(t)C\hat{\mathbf{x}}(t)\right],\quad \hat{\mathbf{x}}(0)=0, \tag{3.18}$$
$$P(t+1)=\left[A-\gamma(t)K(t)C\right]P(t)\left[A-\gamma(t)K(t)C\right]^{T}+\left[B_1-\gamma(t)K(t)D\right]\Pi(t)\left[B_1-\gamma(t)K(t)D\right]^{T}M+B_2QB_2^{T}+\gamma(t)K(t)RK^{T}(t),\quad P(0)=\mathcal{E}\left[\mathbf{x}(0)\mathbf{x}^{T}(0)\mid\gamma(0)\right], \tag{3.19}$$
$$\Pi(t+1)=A\Pi(t)A^{T}+B_1\Pi(t)B_1^{T}M+B_2QB_2^{T},\quad \Pi(0)=P(0), \tag{3.20}$$
where
$$K(t)\triangleq\left[AP(t)C^{T}+B_1\Pi(t)D^{T}M\right]\left[CP(t)C^{T}+D\Pi(t)MD^{T}+R\right]^{-1}. \tag{3.21}$$

Proof. Firstly, according to the projection theorem (Theorem 3.4), we have
$$\hat{\mathbf{y}}(t)=\operatorname{Proj}\{\mathbf{y}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t-1);\gamma(0),\ldots,\gamma(t-1)\}=\operatorname{Proj}\{\gamma(t)C\mathbf{x}(t)+\gamma(t)D\mathbf{x}(t)\mathbf{w}(t)+\gamma(t)\mathbf{v}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t-1);\gamma(0),\ldots,\gamma(t-1)\}=\gamma(t)C\hat{\mathbf{x}}(t); \tag{3.22}$$
then from (3.7), we have
$$Q_{\tilde{\mathbf{y}}}(t)\triangleq\langle\tilde{\mathbf{y}}(t),\tilde{\mathbf{y}}(t)\rangle=\gamma(t)CP(t)C^{T}+\gamma(t)D\Pi(t)MD^{T}+\gamma(t)R. \tag{3.23}$$
Secondly, according to the projection theorem, we have
$$\begin{aligned}
\hat{\mathbf{x}}(t+1)&=\operatorname{Proj}\{\mathbf{x}(t+1)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t);\gamma(0),\ldots,\gamma(t)\}\\
&=\operatorname{Proj}\{A\mathbf{x}(t)+B_1\mathbf{x}(t)\mathbf{w}(t)+B_2\mathbf{u}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t-1);\gamma(0),\ldots,\gamma(t-1)\}+\operatorname{Proj}\{A\mathbf{x}(t)+B_1\mathbf{x}(t)\mathbf{w}(t)+B_2\mathbf{u}(t)\mid\tilde{\mathbf{y}}(t);\gamma(t)\}\\
&=A\hat{\mathbf{x}}(t)+\gamma(t)\left[AP(t)C^{T}+B_1\Pi(t)D^{T}M\right]Q_{\tilde{\mathbf{y}}}^{-1}(t)\tilde{\mathbf{y}}(t)\\
&=A\hat{\mathbf{x}}(t)+\gamma(t)K(t)\left[C\mathbf{e}(t)+D\mathbf{x}(t)\mathbf{w}(t)+\mathbf{v}(t)\right]=A\hat{\mathbf{x}}(t)+K(t)\left[\mathbf{y}(t)-\gamma(t)C\hat{\mathbf{x}}(t)\right],
\end{aligned} \tag{3.24}$$
which is (3.18) by considering (3.21), (3.22), and (3.23).
Thirdly, by considering (3.7) and Theorem 3.4, we also have
$$\begin{aligned}
\hat{\mathbf{x}}(t+1\mid t+1)&=\operatorname{Proj}\{\mathbf{x}(t+1)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t),\tilde{\mathbf{y}}(t+1);\gamma(0),\ldots,\gamma(t),\gamma(t+1)\}\\
&=\hat{\mathbf{x}}(t+1)+\operatorname{Proj}\{\mathbf{x}(t+1)\mid\tilde{\mathbf{y}}(t+1);\gamma(t+1)\}\\
&=\hat{\mathbf{x}}(t+1)+P(t+1)C^{T}\left[CP(t+1)C^{T}+D\Pi(t+1)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+1)-\gamma(t+1)C\hat{\mathbf{x}}(t+1)\right],
\end{aligned} \tag{3.25}$$
which is (3.17).
From (2.1) and (3.24), we have
$$\mathbf{e}(t+1)=\mathbf{x}(t+1)-\hat{\mathbf{x}}(t+1)=\left[A-\gamma(t)K(t)C\right]\mathbf{e}(t)+\left[B_1-\gamma(t)K(t)D\right]\mathbf{x}(t)\mathbf{w}(t)+B_2\mathbf{u}(t)-\gamma(t)K(t)\mathbf{v}(t); \tag{3.26}$$
then we have
$$P(t+1)=\mathcal{E}\left[\mathbf{e}(t+1)\mathbf{e}^{T}(t+1)\mid\gamma_0^{t+1}\right]=\left[A-\gamma(t)K(t)C\right]P(t)\left[A-\gamma(t)K(t)C\right]^{T}+\left[B_1-\gamma(t)K(t)D\right]\Pi(t)\left[B_1-\gamma(t)K(t)D\right]^{T}M+B_2QB_2^{T}+\gamma(t)K(t)RK^{T}(t), \tag{3.27}$$
which is (3.19).
From (2.1), (3.20) can be given directly, which completes the proof.

Remark 3.6. Equation (3.19) is a recursive Riccati equation, and (3.20) is a Lyapunov equation.
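As a complement to Lemma 3.5 and Remark 3.6, the following Python sketch transcribes the recursions (3.17)–(3.21), with the time index shifted so that the measurement update at time $t$ uses $P(t)$, $\Pi(t)$, and $\mathbf{y}(t)$. It assumes the realized packet-loss sequence $\gamma(t)$ is known, as in the lemma; the function and variable names are ours.

```python
import numpy as np

def kalman_filter(A, B1, B2, C, D, Q, R, M, P0, y, gamma):
    """Filter of Lemma 3.5 over measurements y(0..N) with known loss sequence gamma.
    Returns predictions xp(t), filtered estimates xf(t), P(t), Pi(t), and gains K(t)."""
    n, N = A.shape[0], len(y) - 1
    xp = np.zeros((N + 2, n))                   # x_hat(t), one-step predictions, x_hat(0)=0
    xf = np.zeros((N + 1, n))                   # x_hat(t|t)
    P = np.zeros((N + 2, n, n)); P[0] = P0      # prediction error covariance (3.19)
    Pi = np.zeros((N + 2, n, n)); Pi[0] = P0    # state second moment (3.20), Pi(0)=P(0)
    K = np.zeros((N + 1, n, C.shape[0]))
    for t in range(N + 1):
        S = C @ P[t] @ C.T + D @ Pi[t] @ D.T * M + R       # bracketed term in (3.17), (3.21)
        xf[t] = xp[t] + P[t] @ C.T @ np.linalg.solve(S, y[t] - gamma[t] * (C @ xp[t]))  # (3.17)
        K[t] = (A @ P[t] @ C.T + B1 @ Pi[t] @ D.T * M) @ np.linalg.inv(S)               # (3.21)
        xp[t + 1] = A @ xp[t] + K[t] @ (y[t] - gamma[t] * (C @ xp[t]))                  # (3.18)
        Ac = A - gamma[t] * K[t] @ C
        Bc = B1 - gamma[t] * K[t] @ D
        P[t + 1] = Ac @ P[t] @ Ac.T + Bc @ Pi[t] @ Bc.T * M \
                   + B2 @ Q @ B2.T + gamma[t] * K[t] @ R @ K[t].T                       # (3.19)
        Pi[t + 1] = A @ Pi[t] @ A.T + B1 @ Pi[t] @ B1.T * M + B2 @ Q @ B2.T             # (3.20)
    return xp, xf, P, Pi, K
```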

3.1. Kalman Fixed-Interval Smoother

In this subsection, we present the Kalman fixed-interval smoother by the projection theorem. First, we define
$$P(t,k)\triangleq\mathcal{E}\left[\mathbf{x}(t)\mathbf{e}^{T}(t+k)\mid\mathbf{y}_0^{t+k},\gamma_0^{t+k}\right], \tag{3.28}$$
$$\mathbf{e}(t\mid t+k)\triangleq\mathbf{x}(t)-\hat{\mathbf{x}}(t\mid t+k), \tag{3.29}$$
$$P(t\mid t+k)\triangleq\mathcal{E}\left[\mathbf{e}(t\mid t+k)\mathbf{e}^{T}(t\mid t+k)\mid\mathbf{y}_0^{t+k},\gamma_0^{t+k}\right]. \tag{3.30}$$
Then we can give the following theorem, which develops [6].

Theorem 3.7. Consider the system (2.1)–(2.3) with the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(N)\}$ and scalars $\{\gamma(0),\ldots,\gamma(N)\}$. The Kalman fixed-interval smoother can be given by the following backward recursive equations:
$$\hat{\mathbf{x}}(t\mid N)=\hat{\mathbf{x}}(t\mid t)+\sum_{k=1}^{N-t}P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+k)-\gamma(t+k)C\hat{\mathbf{x}}(t+k)\right],\quad t=0,1,\ldots,N, \tag{3.31}$$
and the corresponding smoother error covariance matrix can be given as
$$P(t\mid N)=P(t)-\sum_{k=0}^{N-t}\gamma(t+k)P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}CP^{T}(t,k),\quad t=0,1,\ldots,N, \tag{3.32}$$
where
$$P(t,k)=P(t,k-1)\left[A-\gamma(t+k-1)K(t+k-1)C\right]^{T},\quad k=1,\ldots,N-t, \tag{3.33}$$
with $P(t,0)=P(t)$, and $P(t+k)$, $P(t)$, and $\hat{\mathbf{x}}(t)$ can be given from (3.18), (3.19), and (3.17).

Proof. From the projection theorem, we have
$$\begin{aligned}
\hat{\mathbf{x}}(t\mid N)&=\operatorname{Proj}\{\mathbf{x}(t)\mid\mathbf{y}(0),\ldots,\mathbf{y}(t),\mathbf{y}(t+1),\ldots,\mathbf{y}(N)\}=\operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t),\tilde{\mathbf{y}}(t+1),\ldots,\tilde{\mathbf{y}}(N)\}\\
&=\operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t)\}+\operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(t+1),\ldots,\tilde{\mathbf{y}}(N)\}\\
&=\hat{\mathbf{x}}(t\mid t)+\sum_{k=1}^{N-t}\operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(t+k)\}\\
&=\hat{\mathbf{x}}(t\mid t)+\sum_{k=1}^{N-t}\operatorname{Proj}\{\mathbf{x}(t)\mid\gamma(t+k)\left[C\mathbf{e}(t+k)+D\mathbf{x}(t+k)\mathbf{w}(t+k)+\mathbf{v}(t+k)\right]\}\\
&=\hat{\mathbf{x}}(t\mid t)+\sum_{k=1}^{N-t}P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+k)-\gamma(t+k)C\hat{\mathbf{x}}(t+k)\right],
\end{aligned} \tag{3.34}$$
which is (3.31).
Noting that $\mathbf{x}(t)$ is uncorrelated with $\mathbf{w}(t+k-1)$, $\mathbf{u}(t+k-1)$, and $\mathbf{v}(t+k-1)$ for $k=1,\ldots,N-t$, then from (3.5) we have
$$\begin{aligned}
P(t,k)&=\mathcal{E}\left[\mathbf{x}(t)\mathbf{e}^{T}(t+k)\mid\mathbf{y}_0^{t+k},\gamma_0^{t+k}\right]\\
&=\left\langle\mathbf{x}(t),\left[A-\gamma(t+k-1)K(t+k-1)C\right]\mathbf{e}(t+k-1)+\left[B_1-\gamma(t+k-1)K(t+k-1)D\right]\mathbf{x}(t+k-1)\mathbf{w}(t+k-1)\right.\\
&\qquad\left.+B_2\mathbf{u}(t+k-1)-\gamma(t+k-1)K(t+k-1)\mathbf{v}(t+k-1)\right\rangle\\
&=P(t,k-1)\left[A-\gamma(t+k-1)K(t+k-1)C\right]^{T},
\end{aligned} \tag{3.35}$$
which is (3.33).
Next, we give the proof of the covariance matrix $P(t\mid N)$. From the projection theorem, we have $\mathbf{x}(t)=\hat{\mathbf{x}}(t\mid N)+\mathbf{e}(t\mid N)=\hat{\mathbf{x}}(t)+\mathbf{e}(t)$, and from (3.34) we have
$$\mathbf{e}(t\mid N)=\mathbf{e}(t)-\sum_{k=0}^{N-t}P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\tilde{\mathbf{y}}(t+k), \tag{3.36}$$
that is,
$$\mathbf{e}(t)=\mathbf{e}(t\mid N)+\sum_{k=0}^{N-t}P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\tilde{\mathbf{y}}(t+k). \tag{3.37}$$
Thus, (3.32) can be given.

Remark 3.8. The proposed theorem is based on the theorem in [6]. However, the condition of the theorem in [6] is incorrect, since the multiplicative noises $\mathbf{w}(0),\ldots,\mathbf{w}(t)$ are not known. In addition, the proposed theorem gives the fixed-interval smoother error covariance matrix $P(t\mid N)$, which is an important index in Problem 1 and is also useful in the comparison with the fixed-lag smoother.
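A minimal sketch of Theorem 3.7, assuming the filter quantities have already been produced by the kalman_filter sketch above, is as follows; the names are ours, and the loop structure mirrors (3.31) and (3.33).

```python
import numpy as np

def fixed_interval_smoother(A, C, D, R, M, xf, xp, P, Pi, K, y, gamma):
    """Fixed-interval smoother x_hat(t|N) of Theorem 3.7, built on the
    outputs (xf, xp, P, Pi, K) of the kalman_filter sketch."""
    N = len(y) - 1
    x_s = xf[:N + 1].copy()                    # start from x_hat(t|t)
    for t in range(N + 1):
        Ptk = P[t].copy()                      # P(t,0) = P(t)
        for k in range(1, N - t + 1):
            Ptk = Ptk @ (A - gamma[t + k - 1] * K[t + k - 1] @ C).T      # (3.33)
            S = C @ P[t + k] @ C.T + D @ Pi[t + k] @ D.T * M + R
            x_s[t] += Ptk @ C.T @ np.linalg.solve(
                S, y[t + k] - gamma[t + k] * (C @ xp[t + k]))            # (3.31)
    return x_s
```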

3.2. Kalman Fixed-Lag Smoother

Let $t_l=t-l$. The Kalman fixed-lag smoothing estimate for the system model (2.1)–(2.3), which develops [6], is given in the following theorem.

Theorem 3.9. Consider the system (2.1)–(2.3), and suppose the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(t)\}$ and scalars $\{\gamma(0),\ldots,\gamma(t)\}$ are given for a fixed scalar $l$ $(l<t)$. Then the Kalman fixed-lag smoother can be given by the following recursive equations:
$$\hat{\mathbf{x}}(t_l\mid t)=\hat{\mathbf{x}}(t_l\mid t_l)+\sum_{k=1}^{l}P(t_l,k)C^{T}\left[CP(t_l+k)C^{T}+D\Pi(t_l+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t_l+k)-\gamma(t_l+k)C\hat{\mathbf{x}}(t_l+k)\right],\quad t>l, \tag{3.38}$$
and the corresponding smoother error covariance matrix $P(t_l\mid t)$ can be given as
$$P(t_l\mid t)=P(t_l)-\sum_{k=0}^{l}\gamma(t_l+k)P(t_l,k)C^{T}\left[CP(t_l+k)C^{T}+D\Pi(t_l+k)MD^{T}+R\right]^{-1}CP^{T}(t_l,k),\quad t>l, \tag{3.39}$$
where
$$P(t_l,k)=P(t_l,k-1)\left[A-\gamma(t_l+k-1)K(t_l+k-1)C\right]^{T},\quad k=1,\ldots,l, \tag{3.40}$$
with $P(t_l,0)=P(t_l)$, and $P(t_l+k)$, $P(t_l)$, $\hat{\mathbf{x}}(t_l+k)$, and $\hat{\mathbf{x}}(t_l\mid t_l)$ can be given from Lemma 3.5.

Proof. From the projection theorem, we have
$$\begin{aligned}
\hat{\mathbf{x}}(t_l\mid t)&=\operatorname{Proj}\{\mathbf{x}(t_l)\mid\mathbf{y}(0),\ldots,\mathbf{y}(t_l),\mathbf{y}(t_l+1),\ldots,\mathbf{y}(t)\}=\operatorname{Proj}\{\mathbf{x}(t_l)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t_l),\tilde{\mathbf{y}}(t_l+1),\ldots,\tilde{\mathbf{y}}(t)\}\\
&=\operatorname{Proj}\{\mathbf{x}(t_l)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t_l)\}+\operatorname{Proj}\{\mathbf{x}(t_l)\mid\tilde{\mathbf{y}}(t_l+1),\ldots,\tilde{\mathbf{y}}(t)\}\\
&=\hat{\mathbf{x}}(t_l\mid t_l)+\sum_{k=1}^{l}\operatorname{Proj}\{\mathbf{x}(t_l)\mid\tilde{\mathbf{y}}(t_l+k)\}\\
&=\hat{\mathbf{x}}(t_l\mid t_l)+\sum_{k=1}^{l}\operatorname{Proj}\{\mathbf{x}(t_l)\mid\gamma(t_l+k)\left[C\mathbf{e}(t_l+k)+D\mathbf{x}(t_l+k)\mathbf{w}(t_l+k)+\mathbf{v}(t_l+k)\right]\}\\
&=\hat{\mathbf{x}}(t_l\mid t_l)+\sum_{k=1}^{l}P(t_l,k)C^{T}\left[CP(t_l+k)C^{T}+D\Pi(t_l+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t_l+k)-\gamma(t_l+k)C\hat{\mathbf{x}}(t_l+k)\right],
\end{aligned} \tag{3.41}$$
which is (3.38).
Noting that $\mathbf{x}(t_l)$ is uncorrelated with $\mathbf{w}(t_l+k-1)$, $\mathbf{u}(t_l+k-1)$, and $\mathbf{v}(t_l+k-1)$ for $k=1,\ldots,l$, then from (3.5) we have
$$\begin{aligned}
P(t_l,k)&=\mathcal{E}\left[\mathbf{x}(t_l)\mathbf{e}^{T}(t_l+k)\mid\mathbf{y}_0^{t_l+k},\gamma_0^{t_l+k}\right]\\
&=\left\langle\mathbf{x}(t_l),\left[A-\gamma(t_l+k-1)K(t_l+k-1)C\right]\mathbf{e}(t_l+k-1)+\left[B_1-\gamma(t_l+k-1)K(t_l+k-1)D\right]\mathbf{x}(t_l+k-1)\mathbf{w}(t_l+k-1)\right.\\
&\qquad\left.+B_2\mathbf{u}(t_l+k-1)-\gamma(t_l+k-1)K(t_l+k-1)\mathbf{v}(t_l+k-1)\right\rangle\\
&=P(t_l,k-1)\left[A-\gamma(t_l+k-1)K(t_l+k-1)C\right]^{T},
\end{aligned} \tag{3.42}$$
which is (3.40).
Next, we give the proof of the covariance matrix $P(t_l\mid t)$. Since $\mathbf{x}(t_l)=\hat{\mathbf{x}}(t_l\mid t)+\mathbf{e}(t_l\mid t)=\hat{\mathbf{x}}(t_l)+\mathbf{e}(t_l)$, from (3.41) we have
$$\mathbf{e}(t_l\mid t)=\mathbf{e}(t_l)-\sum_{k=0}^{l}P(t_l,k)C^{T}\left[CP(t_l+k)C^{T}+D\Pi(t_l+k)MD^{T}+R\right]^{-1}\tilde{\mathbf{y}}(t_l+k), \tag{3.43}$$
that is,
$$\mathbf{e}(t_l)=\mathbf{e}(t_l\mid t)+\sum_{k=0}^{l}P(t_l,k)C^{T}\left[CP(t_l+k)C^{T}+D\Pi(t_l+k)MD^{T}+R\right]^{-1}\tilde{\mathbf{y}}(t_l+k). \tag{3.44}$$
Thus, (3.39) can be given.

Remark 3.10. It should be noted that, for normal systems without packet loss, Kalman smoothing outperforms Kalman filtering since more measurement information is available. However, this is not necessarily the case when measurements are lost, which will be verified in the next section. In addition, in Theorems 3.7 and 3.9 we have changed the predictor form $\hat{\mathbf{x}}(t)$ or $\hat{\mathbf{x}}(t_l)$ used in [6] to the filter form $\hat{\mathbf{x}}(t\mid t)$ or $\hat{\mathbf{x}}(t_l\mid t_l)$, which makes the comparison with the Kalman fixed-point smoother (3.45) more convenient.
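Analogously, a sketch of the fixed-lag smoother of Theorem 3.9, again reusing the hypothetical kalman_filter outputs, could look as follows.

```python
import numpy as np

def fixed_lag_smoother(A, C, D, R, M, xf, xp, P, Pi, K, y, gamma, l):
    """Fixed-lag smoother x_hat(t_l | t) of Theorem 3.9 for t > l,
    where t_l = t - l; returns a dict keyed by t_l."""
    N = len(y) - 1
    x_lag = {}
    for t in range(l + 1, N + 1):
        tl = t - l
        est = xf[tl].copy()                    # x_hat(t_l | t_l)
        Ptk = P[tl].copy()                     # P(t_l, 0) = P(t_l)
        for k in range(1, l + 1):
            Ptk = Ptk @ (A - gamma[tl + k - 1] * K[tl + k - 1] @ C).T    # (3.40)
            S = C @ P[tl + k] @ C.T + D @ Pi[tl + k] @ D.T * M + R
            est += Ptk @ C.T @ np.linalg.solve(
                S, y[tl + k] - gamma[tl + k] * (C @ xp[tl + k]))         # (3.38)
        x_lag[tl] = est
    return x_lag
```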

3.3. Kalman Fixed-Point Smoother

In this subsection, we present the Kalman fixed-point smoother by the projection theorem and innovation analysis. We directly give the following theorem, which develops [7].

Theorem 3.11. Consider the system (2.1)–(2.3) with the measurements $\{\mathbf{y}(0),\ldots,\mathbf{y}(j)\}$ and scalars $\{\gamma(0),\ldots,\gamma(j)\}$. Then the Kalman fixed-point smoother can be given by the following recursive equations:
$$\hat{\mathbf{x}}(t\mid j)=\hat{\mathbf{x}}(t\mid t)+\sum_{k=1}^{j-t}P(t,k)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}\left[\mathbf{y}(t+k)-\gamma(t+k)C\hat{\mathbf{x}}(t+k)\right],\quad j=t+1,\ldots,N, \tag{3.45}$$
and the corresponding smoother error covariance matrix $P(t\mid j)$ can be given as
$$P(t\mid j)=P(t\mid j-1)-\gamma(j)K(t\mid j)\left[CP(j)C^{T}+D\Pi(j)MD^{T}+R\right]K^{T}(t\mid j), \tag{3.46}$$
where
$$P(t,k)=P(t,k-1)\left[A-\gamma(t+k-1)K(t+k-1)C\right]^{T},\quad k=1,\ldots,j-t, \tag{3.47}$$
$$K(t\mid j)=\gamma(j)P(t)\Psi_1^{T}(j,t)C^{T}\left[CP(j)C^{T}+D\Pi(j)MD^{T}+R\right]^{-1}, \tag{3.48}$$
$$\Psi_1(j,t)=\Psi_1(j-1)\cdots\Psi_1(t), \tag{3.49}$$
with $P(t,0)=P(t)$, $P(t\mid t-1)=P(t)$, $\Psi_1(t)=A-\gamma(t)K(t)C$, and $P(t+k)$, $P(t)$, and $\hat{\mathbf{x}}(t)$ can be given from (3.19) and (3.18).

Proof. The proof of (3.45) and (3.47) can be found in [7], and we only give the proof of the covariance matrix $P(t\mid j)$. From the projection theorem, we have
$$\hat{\mathbf{x}}(t\mid t+k)=\operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t),\tilde{\mathbf{y}}(t+1),\ldots,\tilde{\mathbf{y}}(t+k)\}=\operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(0),\ldots,\tilde{\mathbf{y}}(t+k-1)\}+\operatorname{Proj}\{\mathbf{x}(t)\mid\tilde{\mathbf{y}}(t+k)\}=\hat{\mathbf{x}}(t\mid t+k-1)+K(t\mid t+k)\tilde{\mathbf{y}}(t+k), \tag{3.50}$$
where
$$K(t\mid t+k)=\mathcal{E}\left[\mathbf{x}(t)\tilde{\mathbf{y}}^{T}(t+k)\right]Q_{\tilde{\mathbf{y}}}^{-1}(t+k). \tag{3.51}$$
Define $\Psi_1(t)=A-\gamma(t)K(t)C$ and $\Psi_2(t)=B_1-\gamma(t)K(t)D$; then (3.26) can be rewritten as
$$\mathbf{e}(t+1)=\Psi_1(t)\mathbf{e}(t)+\Psi_2(t)\mathbf{x}(t)\mathbf{w}(t)+B_2\mathbf{u}(t)-\gamma(t)K(t)\mathbf{v}(t), \tag{3.52}$$
and recursively computing (3.52), we have
$$\mathbf{e}(t+k)=\Psi_1(t+k,t)\mathbf{e}(t)+\Psi_2(t+k,t)\mathbf{x}(t)\mathbf{w}(t)+\sum_{i=t+1}^{t+k}\Psi_1(t+k,i)\left[B_2\mathbf{u}(i-1)-\gamma(i-1)K(i-1)\mathbf{v}(i-1)\right], \tag{3.53}$$
where
$$\Psi_1(t+k,i)=\Psi_1(t+k-1)\cdots\Psi_1(i),\ i<t+k,\qquad \Psi_2(t+k,t)=\Psi_1(t+k-1)\cdots\Psi_1(t+1)\Psi_2(t),\qquad \Psi_1(t+k,t+k)=I_n. \tag{3.54}$$
By considering (3.7), (3.53), and Theorem 3.4,
$$\mathcal{E}\left[\mathbf{x}(t)\tilde{\mathbf{y}}^{T}(t+k)\right]=\gamma(t+k)P(t)\Psi_1^{T}(t+k,t)C^{T}, \tag{3.55}$$
so
$$K(t\mid t+k)=\gamma(t+k)P(t)\Psi_1^{T}(t+k,t)C^{T}\left[CP(t+k)C^{T}+D\Pi(t+k)MD^{T}+R\right]^{-1}, \tag{3.56}$$
which is (3.48) by setting $t+k=j$.
From (3.50) and considering $\mathbf{x}(t)=\hat{\mathbf{x}}(t\mid t+k)+\mathbf{e}(t\mid t+k)=\hat{\mathbf{x}}(t\mid t+k-1)+\mathbf{e}(t\mid t+k-1)$, we have
$$\mathbf{e}(t\mid t+k)=\mathbf{e}(t\mid t+k-1)-K(t\mid t+k)\tilde{\mathbf{y}}(t+k), \tag{3.57}$$
that is,
$$\mathbf{e}(t\mid t+k-1)=\mathbf{e}(t\mid t+k)+K(t\mid t+k)\tilde{\mathbf{y}}(t+k). \tag{3.58}$$
Then according to (3.30), we have
$$P(t\mid t+k-1)=P(t\mid t+k)+K(t\mid t+k)Q_{\tilde{\mathbf{y}}}(t+k)K^{T}(t\mid t+k). \tag{3.59}$$
Combined with (3.23), we have (3.46) by setting $t+k=j$.
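The recursions of Theorem 3.11 can be sketched in the same style; here the identity $P(t,k)=P(t)\Psi_1^{T}(t+k,t)$, which follows from (3.47) and (3.49), is used so that $P(t,k)$ need not be stored separately. The names are ours.

```python
import numpy as np

def fixed_point_smoother(A, C, D, R, M, xf, xp, P, Pi, K, y, gamma, t):
    """Fixed-point smoother of Theorem 3.11: x_hat(t|j) and P(t|j) for j = t+1,...,N
    at a fixed t, built on the kalman_filter sketch outputs."""
    N = len(y) - 1
    est, cov = xf[t].copy(), P[t].copy()       # x_hat(t|t) and P(t|t-1) = P(t)
    Psi1 = np.eye(A.shape[0])
    history = []
    for j in range(t + 1, N + 1):
        Psi1 = (A - gamma[j - 1] * K[j - 1] @ C) @ Psi1    # Psi_1(j,t), (3.49)
        Ptk = P[t] @ Psi1.T                                # P(t, j-t), cf. (3.47)
        S = C @ P[j] @ C.T + D @ Pi[j] @ D.T * M + R
        est = est + Ptk @ C.T @ np.linalg.solve(S, y[j] - gamma[j] * (C @ xp[j]))  # (3.45)
        Ksm = gamma[j] * Ptk @ C.T @ np.linalg.inv(S)      # K(t|j), (3.48)
        cov = cov - gamma[j] * Ksm @ S @ Ksm.T             # (3.46)
        history.append((est.copy(), cov.copy()))
    return history
```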

3.4. Comparison

In this subsection, we compare the three smoothers. It can easily be seen from (3.31), (3.38), and (3.45) that each smoother consists of a Kalman filter plus an updating part. To make the comparison convenient, the smoother error covariance matrices are given in (3.32), (3.39), and (3.46), which develops the main results in [6, 7], where only the Kalman smoothers themselves are given.

It can easily be seen from (3.31) and (3.38) that the Kalman fixed-interval smoother is similar to the fixed-lag smoother. In (3.31) the update sum contains $N-t$ terms, whereas in (3.38) it contains $l$ terms. For the Kalman fixed-interval smoother, $N$ is fixed and $t$ is variable, so the smoother $\hat{\mathbf{x}}(t\mid N)$ can be given for $t=0,1,\ldots,N-1$. For the Kalman fixed-lag smoother, $l$ is fixed and $t$ is variable, so the smoother $\hat{\mathbf{x}}(t_l\mid t)$ can be given for $t=l+1,l+2,\ldots$. The two smoothers are thus similar in the form of (3.31) and (3.38). However, it is hard to see from the smoother error covariance matrices (3.32) and (3.39) which smoother is better; this will be examined in the numerical example.

For the Kalman fixed-point smoother in Theorem 3.11, the time $t$ is fixed and $j$ ranges over $t+1,t+2,\ldots$; in this sense, the Kalman fixed-point smoother is intrinsically different from the fixed-interval and fixed-lag smoothers.

4. Numerical Example

In this section, we give an example to show the effectiveness of the presented results and to compare the smoothers.

Consider the system (2.1)–(2.3) with $N=80$, $l=20$,
$$A=\begin{bmatrix}0.8 & 0.3\\ 0 & 0.6\end{bmatrix},\quad B_1=\begin{bmatrix}0.2 & 0\\ 0.1 & 0.9\end{bmatrix},\quad B_2=\begin{bmatrix}0.5\\ 0.8\end{bmatrix},\quad C=\begin{bmatrix}1 & 2\end{bmatrix},\quad D=\begin{bmatrix}2 & 2\end{bmatrix},\quad \gamma(t)=\frac{1+(-1)^{t}}{2}. \tag{4.1}$$
The initial state value is $\mathbf{x}(0)=\begin{bmatrix}1 & 0.5\end{bmatrix}^{T}$, and the noises $\mathbf{u}(t)$ and $\mathbf{w}(t)$ are uncorrelated white noises with zero means and unit covariances, that is, $Q=1$, $M=1$. The observation noise $\mathbf{v}(t)$ has zero mean and covariance $R=0.01$.
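For concreteness, the parameters in (4.1) translate into the following constants, written to plug into the Python sketches of Section 3. The 1x1 covariance matrices, the variable names, and the assumed initial covariance P0 are our conventions; P(0) is not specified in the example.

```python
import numpy as np

# Parameters of the numerical example (4.1)
N, l, t_fixed = 80, 20, 30
A  = np.array([[0.8, 0.3], [0.0, 0.6]])
B1 = np.array([[0.2, 0.0], [0.1, 0.9]])
B2 = np.array([[0.5], [0.8]])
C  = np.array([[1.0, 2.0]])
D  = np.array([[2.0, 2.0]])
Q, R, M = np.array([[1.0]]), np.array([[0.01]]), 1.0
x0 = np.array([1.0, 0.5])
P0 = np.eye(2)                                   # assumed initial covariance P(0)
gamma = np.array([(1 + (-1) ** t) // 2 for t in range(N + 1)])   # gamma(t) = (1+(-1)^t)/2
```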

Our aim is to compute the Kalman fixed-interval smoother $\hat{\mathbf{x}}(t\mid N)$ of the signal $\mathbf{x}(t)$, the Kalman fixed-lag smoother $\hat{\mathbf{x}}(t_l\mid t)$ of the signal $\mathbf{x}(t_l)$ for $t=l+1,\ldots,N$, and the Kalman fixed-point smoother $\hat{\mathbf{x}}(t\mid j)$ of the signal $\mathbf{x}(t)$ for $j=t+1,\ldots,N$, based on the observations $\{\mathbf{y}(i)\}_{i=0}^{N}$, $\{\mathbf{y}(i)\}_{i=0}^{t}$, and $\{\mathbf{y}(i)\}_{i=0}^{j}$, respectively. For the Kalman fixed-point smoother $\hat{\mathbf{x}}(t\mid j)$, we set $t=30$.

According to Theorem 3.7, the computation of the Kalman fixed-interval smoother $\hat{\mathbf{x}}(t\mid N)$ can be summarized in the following three steps.

Step 1. Compute $\Pi(t+1)$, $P(t+1)$, $\hat{\mathbf{x}}(t+1)$, and $\hat{\mathbf{x}}(t+1\mid t+1)$ by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for $t=0,\ldots,N-1$, respectively.

Step 2. For each fixed $t\in[0,N]$, compute $P(t,k)$ by (3.33) for $k=1,\ldots,N-t$, with the initial value $P(t,0)=P(t)$.

Step 3. Compute the Kalman fixed-interval smoother $\hat{\mathbf{x}}(t\mid N)$ by (3.31) for $t=0,\ldots,N$ with fixed $N$.
Similarly, according to Theorem 3.9, the computation of the Kalman fixed-lag smoother $\hat{\mathbf{x}}(t_l\mid t)$ can be summarized in the following three steps.

Step 1. Compute $\Pi(t_l+k+1)$, $P(t_l+k+1)$, $\hat{\mathbf{x}}(t_l+k+1)$, and $\hat{\mathbf{x}}(t_l+k+1\mid t_l+k+1)$ by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for $t>l$ and $k=1,\ldots,l$, respectively.

Step 2. For each fixed $t>l$, compute $P(t_l,k)$ by (3.40) for $k=1,\ldots,l$, with the initial value $P(t_l,0)=P(t_l)$.

Step 3. Compute the Kalman fixed-lag smoother $\hat{\mathbf{x}}(t_l\mid t)$ by (3.38) for $t>l$.
According to Theorem 3.11, the computation of the Kalman fixed-point smoother $\hat{\mathbf{x}}(t\mid j)$ can be summarized in the following three steps.

Step 1. Compute $\Pi(t+1)$, $P(t+1)$, $\hat{\mathbf{x}}(t+1)$, and $\hat{\mathbf{x}}(t+1\mid t+1)$ by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for $t=0,\ldots,N-1$, respectively.

Step 2. Compute $P(t,k)$ by (3.47) for $k=1,\ldots,j-t$, with the initial value $P(t,0)=P(t)$.

Step 3. Compute the Kalman fixed-point smoother $\hat{\mathbf{x}}(t\mid j)$ by (3.45) for $j=t+1,\ldots,N$.
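Assuming the sketches of Section 3 and the constants above, the three procedures reduce to one filter pass followed by the three smoother calls. The following driver is a usage illustration only, not the code used to produce the figures; it simulates one trajectory with the deterministic loss sequence of (4.1).

```python
import numpy as np

rng = np.random.default_rng(0)
xs, ys, x = [], [], x0.copy()
for t in range(N + 1):
    w = rng.normal(0.0, np.sqrt(M))                       # multiplicative noise
    v = rng.multivariate_normal([0.0], R)                 # observation noise
    u = rng.multivariate_normal([0.0], Q)                 # input noise
    ys.append(gamma[t] * (C @ x + D @ x * w + v))         # received measurement (2.2)-(2.3)
    xs.append(x)                                          # true state kept for comparison
    x = A @ x + B1 @ x * w + B2 @ u                       # state update (2.1)
ys = np.array(ys)

# Step 1 of each procedure: one Kalman filter pass, then the three smoothers.
xp, xf, P, Pi, K = kalman_filter(A, B1, B2, C, D, Q, R, M, P0, ys, gamma)
x_interval = fixed_interval_smoother(A, C, D, R, M, xf, xp, P, Pi, K, ys, gamma)
x_lag      = fixed_lag_smoother(A, C, D, R, M, xf, xp, P, Pi, K, ys, gamma, l)
x_point    = fixed_point_smoother(A, C, D, R, M, xf, xp, P, Pi, K, ys, gamma, t_fixed)
```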
The tracking performance of the Kalman fixed-point smoother $\hat{\mathbf{x}}(t\mid j)=\begin{bmatrix}\hat{\mathbf{x}}_1(t\mid j)\\ \hat{\mathbf{x}}_2(t\mid j)\end{bmatrix}$ is drawn in Figures 1 and 2; the curves are based on the fixed time $t=30$, with $j$ as the variable. It can easily be seen from these two figures that the smoother changes considerably at first, and after time $j=35$ the fixed-point smoother remains fixed; that is, due to packet loss, the measurements $\mathbf{y}(j),\mathbf{y}(j+1),\ldots$ have little effect on the smoother at time $t=30$. In addition, at time $j=30$ the estimate (the filter) is closer to the original signal than for $j>30$, which shows that the Kalman filter is better than the fixed-point smoother for a WSN with packet loss. In fact, the smoothers below are also not as good as the filter.
The fixed-interval smoother $\hat{\mathbf{x}}(t\mid N)=\begin{bmatrix}\hat{\mathbf{x}}_1(t\mid N)\\ \hat{\mathbf{x}}_2(t\mid N)\end{bmatrix}$ is given in Figures 3 and 4, and the tracking performance of the fixed-lag smoother $\hat{\mathbf{x}}(t_l\mid t)=\begin{bmatrix}\hat{\mathbf{x}}_1(t_l\mid t)\\ \hat{\mathbf{x}}_2(t_l\mid t)\end{bmatrix}$ is given in Figures 5 and 6. As the figures show, both smoothers can track the original signal in general.
In addition, following the comparison at the end of the last section, we compare the sums of the error covariances of the fixed-interval and fixed-lag smoothers (the fixed-point smoother is intrinsically different from the other two, as explained at the end of the last section, so its error covariance is not compared here); we also give the sum of the error covariance of the Kalman filter, and they are all drawn in Figure 7. As seen from Figure 7, it is hard to say which smoother is better under packet loss, and the smoothers do not outperform the filter.

5. Conclusion

In this paper, we have studied Kalman fixed-interval smoothing, fixed-lag smoothing [6], and fixed-point smoothing [7] for wireless sensor network systems with packet loss and multiplicative noises. The smoothers are given by recursive equations. The smoother error covariance matrices of fixed-interval smoothing and fixed-lag smoothing are given by a Riccati equation without recursion, while the smoother error covariance matrix of fixed-point smoothing is given by a recursive Riccati equation and a recursive Lyapunov equation. The comparison among the fixed-point, fixed-interval, and fixed-lag smoothers has been given, and a numerical example has verified the proposed approach. The proposed approach will be useful for studying more difficult problems, for example, WSNs with random delay and packet loss [19].

Disclosure

X. Lu is affiliated with Shandong University of Science and Technology and also with Shandong University, Qingdao, China. H. Wang and X. Wang are affiliated with Shandong University of Science and Technology, Qingdao, China.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (60804034), the Scientific Research Foundation for the Excellent Middle-Aged and Youth Scientists of Shandong Province (BS2012DX031), the Natural Science Foundation of Shandong Province (ZR2009GQ006), the SDUST Research Fund (2010KYJQ105), the Project of Shandong Province Higher Educational Science and Technology Program (J11LG53), and the "Taishan Scholarship" Construction Engineering.