Abstract
The paper deals with the Kalman (or H2) smoothing problem for wireless sensor networks (WSNs) with multiplicative noises. Packet loss occurs in the observation equations, and multiplicative noises occur in both the system state equation and the observation equations. The Kalman smoothers, which include the Kalman fixed-interval smoother, the Kalman fixed-lag smoother, and the Kalman fixed-point smoother, are given by solving Riccati equations and Lyapunov equations based on the projection theorem and innovation analysis. An example is also presented to demonstrate the effectiveness of the approach. Furthermore, the three proposed Kalman smoothers are compared.
1. Introduction
The linear estimation problem has been one of the key research topics of the control community according to [1]. As is well known, two indexes are used to investigate linear estimation: one is the H2 index, and the other is the H∞ index. Under the H2 performance index, Kalman filtering [2–4] is an important approach to studying linear estimation besides Wiener filtering. In general, Kalman filtering, which usually uses a state-space equation, is better than Wiener filtering, since it is recursive and can deal with time-variant systems [1, 2, 5]. This has motivated many previous researchers to employ Kalman filtering to study linear time-variant or linear time-invariant estimation, and Kalman filtering has become a popular and efficient approach for normal linear systems. However, standard Kalman filtering cannot be directly used for estimation on wireless sensor networks (WSNs), since packet loss occurs, and sometimes multiplicative noises also occur [6, 7].
Linear estimation for systems with multiplicative noises under the H2 index has been studied in depth in [8, 9]. Reference [8] considered the state optimal estimation algorithm for singular systems with multiplicative noise, and estimators of the dynamic noise and the measurement noise have been proposed. In [9], we presented the linear filtering for continuous-time systems with time-delayed measurements and multiplicative noises under the H2 index.
Wireless sensor networks have become popular in recent years, and the corresponding estimation problem has attracted many researchers' attention [10–15]. It should be noted that in the above works, only packet loss occurs. Reference [10], which is an important and ground-breaking reference, considered the problem of Kalman filtering for WSNs with intermittent packet loss, and the Kalman filter together with the upper and lower bounds of the error covariance is presented. Reference [11] developed the work of [10]: the measurements are divided into two parts, which are sent over different channels with different packet-loss rates, and the Kalman filter together with the covariance matrix is given. Reference [14] further develops the result of [11], and the stability of the Kalman filter with Markovian packet loss has been established.
However, the above references mainly focus on linear systems with packet loss, and they are not applicable when there are multiplicative noises in the system models [6, 7, 16–18]. For the Kalman filtering problem for wireless sensor networks with multiplicative noises, [16–18] give preliminary results: [16] has given a Kalman filter, [17] deals with the Kalman filter for wireless sensor network systems with two multiplicative noises and two measurements which are sent over different channels with different packet-drop rates, and [18] has given the information fusion Kalman filter for wireless sensor networks with packet loss and multiplicative noises.
In this paper, the Kalman smoothing problem, including fixed-point smoothing [6], fixed-interval smoothing, and fixed-lag smoothing [7], for WSNs with packet loss will be studied. Multiplicative noises occur in both the state equation and the observation equation, which extends the results of [8], where multiplicative noises occur only in the state equations. Three Kalman smoothers will be given by recursive equations. The smoother error covariance matrices of fixed-interval smoothing and fixed-lag smoothing are given by Riccati equations without recursion, while the smoother error covariance matrix of fixed-point smoothing is given by a recursive Riccati equation and a recursive Lyapunov equation, which develops the work of [6, 7], where some of the main theorems on Kalman smoothing contain errors.
The rest of the paper is organized as follows. In Section 2, we present the system model and state the problem to be dealt with in the paper. The main results on the Kalman smoothers are given in Section 3. In Section 4, a numerical example is given to illustrate the smoothers. Some conclusions are drawn in Section 5.
2. Problem Statement
Consider the following discrete-time wireless sensor network systems with multiplicative noises: where is the state, is the measurement, is the input sequence, is a zero-mean white noise (the additive noise), and is a zero-mean white noise (the multiplicative noise). , , , , and are known time-invariant matrices.
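Since the displayed equations (2.1)–(2.3) are not reproduced in this version of the text, the following is a minimal simulation sketch of a scalar instance of such a model; all symbol names and parameter values (a, b, c, alpha, beta, lam, q, r) are hypothetical stand-ins, not the paper's actual matrices:

```python
import random

def simulate(T=300, a=0.8, b=0.5, c=1.0, alpha=0.2, beta=0.1,
             lam=0.8, q=1.0, r=1.0, seed=1):
    """Simulate an assumed scalar instance of a model like (2.1)-(2.3):
        x[k+1] = (a + alpha*w[k]) * x[k] + b*u[k] + n[k]
        y[k]   = gamma[k] * (c + beta*v[k]) * x[k] + m[k]
    where gamma[k] is a Bernoulli packet-arrival indicator with rate
    lam, w and v are zero-mean multiplicative noises, and n, m are
    zero-mean additive noises.  All names here are hypothetical."""
    rng = random.Random(seed)
    x = 0.0
    xs, ys, gammas = [], [], []
    for k in range(T):
        gamma = 1 if rng.random() < lam else 0   # Bernoulli arrival
        y = gamma * (c + beta * rng.gauss(0.0, 1.0)) * x \
            + rng.gauss(0.0, r ** 0.5)
        xs.append(x); ys.append(y); gammas.append(gamma)
        u = 0.0                                  # known input, zero here
        x = (a + alpha * rng.gauss(0.0, 1.0)) * x + b * u \
            + rng.gauss(0.0, q ** 0.5)
    return xs, ys, gammas
```

Here gamma plays the role of the Bernoulli packet-arrival variable of Assumption 2.1; setting it to zero models a lost packet.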
Assumption 2.1. is a Bernoulli random variable with probability distribution , is independent of for , and is uncorrelated with , , , and .
Assumption 2.2. The initial state and the noises , , and are mutually uncorrelated white noises with zero means and known covariance matrices; that is,
where , and denotes the mathematical expectation.
The Kalman smoothing problem considered in the paper for the system model (2.1)–(2.3) can be stated in the following three cases.
Problem 1 (fixed-interval smoothing). Given the measurements and scalars , for a fixed scalar , find the fixed-interval smoothing estimate of , such that where , , .
Problem 2 (fixed-lag smoothing). Given the measurements and scalars for a fixed scalar , find the fixed-lag smoothing estimate of , such that where , .
Problem 3 (fixed-point smoothing). Given the measurements and scalars for a fixed time instant , find the fixed-point smoothing estimate of , such that where , , , .
3. Main Results
In this section, we will give the Kalman fixed-interval smoother, the Kalman fixed-lag smoother, and the Kalman fixed-point smoother for the system model (2.1)–(2.3) and compare them with each other. Before giving the main results, we first present the Kalman filter for the system model (2.1)–(2.3), which will be used throughout this section. We first give the following definition.
Definition 3.1. Given time instants and , the estimator is the optimal estimation of given the observation sequences and the estimator is the optimal estimation of given the observation
Remark 3.2. It should be noted that the linear space means the linear space under the condition that the scalars are known. Thus, is the linear space .
Introduce the following notation:
it is clear that is the Kalman filtering innovation sequence for the system (2.1)–(2.3). We now have the following relationships:
The following lemma shows that is the innovation sequence. For simplicity of discussion, we omit the as in Remark 3.2.
Lemma 3.3. is the innovation sequence, which spans the same linear space; consider that
Proof. Firstly, from (3.2) in Definition 3.1, we have
From (3.3) and (3.10), we have for . Thus,
Secondly, by induction, we have
Thus,
So
Next, we show that is an uncorrelated sequence. In fact, for any , we can assume that without loss of generality, and it follows from (3.7) that
Note that , . Since is the state prediction error, it follows that , and thus , which implies that is uncorrelated with . Hence, is an innovation sequence. This completes the proof of the lemma.
For the convenience of derivation, we give the orthogonal projection theorem in the form of the next theorem without proof; readers can refer to [4, 5].
Theorem 3.4. Given the measurements and the corresponding innovation sequence in Lemma 3.3, the projection of the state can be given as
Following a reviewer's suggestion, we restate the Kalman filter for the system model (2.1)–(2.3), which has been given in [7].
Lemma 3.5. Consider the system model (2.1)–(2.3) with scalars and the initial condition ; the Kalman filter can be given as where
Proof. Firstly, according to the projection theorem (Theorem 3.4), we have
then from (3.7), we have
Secondly, according to the projection theorem, we have
which is (3.18) by considering (3.21), (3.22), and (3.23).
Thirdly, by considering (3.7) and Theorem 3.4, we also have
which is (3.17).
From (2.1) and (3.24), we have
then we have
which is (3.19).
From (2.1), (3.20) can be given directly, which completes the proof.
Remark 3.6. Equation (3.19) is a recursive Riccati equation, and (3.20) is a Lyapunov equation.
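As a concrete illustration of Remark 3.6, the sketch below implements a scalar Kalman-type filter in the spirit of Lemma 3.5 under an assumed model: a Lyapunov recursion for the state second moment S (playing the role of (3.20)) feeds a Riccati-type recursion for the prediction covariance Pp (the role of (3.19)); when a packet is lost (gamma = 0), the filter simply propagates the prediction. All parameters and the exact noise terms are assumptions, since the paper's equations are not visible here:

```python
import random

# Hypothetical scalar stand-in for (2.1)-(2.3); the paper's actual
# matrices and noise statistics are not shown in this text.
a, c, alpha, beta, lam, q, r, T = 0.8, 1.0, 0.2, 0.1, 0.8, 1.0, 1.0, 300

rng = random.Random(1)
x, xs, ys, gammas = 0.0, [], [], []
for k in range(T):
    g = 1 if rng.random() < lam else 0           # Bernoulli arrival
    ys.append(g * (c + beta * rng.gauss(0, 1)) * x + rng.gauss(0, r ** 0.5))
    xs.append(x); gammas.append(g)
    x = (a + alpha * rng.gauss(0, 1)) * x + rng.gauss(0, q ** 0.5)

# Filter: S is the state second moment (Lyapunov part, role of (3.20)),
# Pp the one-step prediction covariance (Riccati part, role of (3.19)).
S, xp, Pp = 0.0, 0.0, 0.0
xh_list, P_list = [], []
for k in range(T):
    if gammas[k]:
        Re = c * c * Pp + beta * beta * S + r    # innovation covariance
        K = Pp * c / Re                          # filter gain
        xh = xp + K * (ys[k] - c * xp)
        P = Pp - K * c * Pp
    else:                                        # packet lost: no update
        xh, P = xp, Pp
    xh_list.append(xh); P_list.append(P)
    Pp = a * a * P + alpha * alpha * S + q       # Riccati-type prediction
    S = (a * a + alpha * alpha) * S + q          # Lyapunov recursion
    xp = a * xh

mse = sum((e - v) ** 2 for e, v in zip(xh_list, xs)) / T
var = sum(v * v for v in xs) / T                 # baseline: predict zero
```

Note how the multiplicative noises enter only through the second moment S, which is exactly why a Lyapunov equation accompanies the Riccati equation in this setting.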
3.1. Kalman Fixed-Interval Smoother
In this subsection, we will present the Kalman fixed-interval smoother by the projection theorem. First, we define Then we can give the theorem, which develops [6], as follows.
Theorem 3.7. Consider the system (2.1)–(2.3) with the measurements and scalars ; the Kalman fixed-interval smoother can be given by the following backward recursive equations: and the corresponding smoother error covariance matrix can be given as where with , and can be given from (3.18), (3.19), and (3.17).
Proof. From the projection theorem, we have
which is (3.31).
Noting that is uncorrelated with , , and for , then from (3.5), we have
which is (3.33).
Next, we will give the proof of covariance matrix . From the projection theorem, we have , and from (3.34), we have
that is,
Thus, (3.32) can be given.
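To make the backward structure of Theorem 3.7 concrete, the sketch below runs the filter forward and then a classical Rauch–Tung–Striebel-style backward pass. It is only an approximation of (3.31)–(3.33): the multiplicative-noise correction terms are omitted, and the scalar model parameters are assumed, since the paper's equations are not visible here:

```python
import random

# Assumed scalar model and forward filter (same sketch as for Lemma 3.5);
# all parameters are hypothetical.
a, c, alpha, beta, lam, q, r, T = 0.8, 1.0, 0.2, 0.1, 0.8, 1.0, 1.0, 300
rng = random.Random(1)
x, xs, ys, gammas = 0.0, [], [], []
for k in range(T):
    g = 1 if rng.random() < lam else 0
    ys.append(g * (c + beta * rng.gauss(0, 1)) * x + rng.gauss(0, r ** 0.5))
    xs.append(x); gammas.append(g)
    x = (a + alpha * rng.gauss(0, 1)) * x + rng.gauss(0, q ** 0.5)

S, xp, Pp = 0.0, 0.0, 0.0
xh_list, P_list, Pp_list = [], [], []
for k in range(T):
    if gammas[k]:
        Re = c * c * Pp + beta * beta * S + r
        K = Pp * c / Re
        xh, P = xp + K * (ys[k] - c * xp), Pp - K * c * Pp
    else:
        xh, P = xp, Pp
    xh_list.append(xh); P_list.append(P); Pp_list.append(Pp)
    Pp = a * a * P + alpha * alpha * S + q
    S = (a * a + alpha * alpha) * S + q
    xp = a * xh

# Backward pass: fixed-interval smoothing over the whole record 0..T-1.
xs_s, Ps_s = [0.0] * T, [0.0] * T
xs_s[-1], Ps_s[-1] = xh_list[-1], P_list[-1]     # smoother = filter at T-1
for k in range(T - 2, -1, -1):
    G = P_list[k] * a / Pp_list[k + 1]           # backward smoother gain
    xs_s[k] = xh_list[k] + G * (xs_s[k + 1] - a * xh_list[k])
    Ps_s[k] = P_list[k] + G * G * (Ps_s[k + 1] - Pp_list[k + 1])
```

In this classical form the smoothed covariance never exceeds the filtered one; the paper's point is that under packet loss and multiplicative noises this advantage can shrink or vanish, which the exact recursions (3.31)–(3.33) quantify.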
Remark 3.8. The proposed theorem is based on the theorem in [6]. However, the condition of the theorem in [6] is incorrect, since the multiplicative noises are not known. In addition, the proposed theorem gives the fixed-interval smoother error covariance matrix, which is an important index in Problem 1 and is also useful in the comparison with the fixed-lag smoother.
3.2. Kalman Fixed-Lag Smoother
Let , and we can give the Kalman fixed-lag smoothing estimate for the system model (2.1)–(2.3), which develops [6], in the following theorem.
Theorem 3.9. Consider the system (2.1)–(2.3); given the measurements and scalars for a fixed scalar , the Kalman fixed-lag smoother can be given by the following recursive equations: and the corresponding smoother error covariance matrix can be given as where with , and , and can be given from Lemma 3.5.
Proof. From the projection theorem, we have
which is (3.38).
Noting that is uncorrelated with , , and for , then from (3.5), we have
which is (3.40).
Next, we will give the proof of covariance matrix . Since , from (3.41), we have
that is,
Thus, (3.39) can be given.
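A fixed-lag smoother of the type of Theorem 3.9 can be sketched by re-running a d-step backward sweep at every time t, so that x(t-d) is estimated from the measurements up to t; the per-step cost grows with the lag d rather than with the interval length. Again, this uses the classical backward gains and assumed scalar parameters rather than the paper's exact recursions (3.38)–(3.41):

```python
import random

# Assumed scalar model and forward filter (hypothetical parameters).
a, c, alpha, beta, lam, q, r, T = 0.8, 1.0, 0.2, 0.1, 0.8, 1.0, 1.0, 300
rng = random.Random(1)
x, xs, ys, gammas = 0.0, [], [], []
for k in range(T):
    g = 1 if rng.random() < lam else 0
    ys.append(g * (c + beta * rng.gauss(0, 1)) * x + rng.gauss(0, r ** 0.5))
    xs.append(x); gammas.append(g)
    x = (a + alpha * rng.gauss(0, 1)) * x + rng.gauss(0, q ** 0.5)

S, xp, Pp = 0.0, 0.0, 0.0
xh_list, P_list, Pp_list = [], [], []
for k in range(T):
    if gammas[k]:
        Re = c * c * Pp + beta * beta * S + r
        K = Pp * c / Re
        xh, P = xp + K * (ys[k] - c * xp), Pp - K * c * Pp
    else:
        xh, P = xp, Pp
    xh_list.append(xh); P_list.append(P); Pp_list.append(Pp)
    Pp = a * a * P + alpha * alpha * S + q
    S = (a * a + alpha * alpha) * S + q
    xp = a * xh

# Fixed-lag smoothing: at each t >= d, sweep d steps backward.
d = 5
est = []                                         # est[i] ~ x[i] given y[0..i+d]
for t in range(d, T):
    z = xh_list[t]                               # start from the filter at t
    for k in range(t - 1, t - d - 1, -1):        # d backward steps
        G = P_list[k] * a / Pp_list[k + 1]
        z = xh_list[k] + G * (z - a * xh_list[k])
    est.append(z)

mse_lag = sum((e - v) ** 2 for e, v in zip(est, xs)) / len(est)
var = sum(v * v for v in xs) / T
```

The double loop makes the O(d)-per-step cost explicit, in contrast to the fixed-interval smoother, whose backward pass length depends on the whole interval.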
Remark 3.10. It should be noted that the result of Kalman smoothing is better than that of Kalman filtering for normal systems without packet loss, since more measurement information is available. However, this is not necessarily the case when measurements are lost, which will be verified in the next section. In addition, in Theorems 3.7 and 3.9, we have changed the predictor form of or in [6] to the filter form of or , which is more convenient for comparison with the Kalman fixed-point smoother (3.45).
3.3. Kalman Fixed-Point Smoother
In this subsection, we will present the Kalman fixed-point smoother by the projection theorem and innovation analysis. We can directly give the theorem, which develops [7], as follows.
Theorem 3.11. Consider the system (2.1)–(2.3) with the measurements and scalars ; the Kalman fixed-point smoother can be given by the following recursive equations: and the corresponding smoother error covariance matrix can be given as where with , , , and , and can be given from (3.19) and (3.18).
Proof. The proofs of (3.45) and (3.47) can be found in [7], so we only give the proof of the covariance matrix . From the projection theorem, we have
where
Define and , then (3.26) can be rewritten as
and recursively computing (3.52), we have
where
By considering (3.7), (3.53), and Theorem 3.4,
so
which is (3.48) by setting .
From (3.50) and considering , then we have
that is,
Then according to (3.30), we have
Combined with (3.23), we have (3.46) by setting .
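The fixed-point structure of Theorem 3.11 can be sketched with the classical gain-product form: the estimate of x(t0) is refreshed by each new innovation, scaled by an accumulated product B of backward gains; when a packet is lost, the estimate is simply held, which matches the behaviour discussed in Section 4. The recursion below ignores the multiplicative-noise corrections of (3.45)–(3.48) and uses assumed scalar parameters:

```python
import random

# Assumed scalar model and forward filter (hypothetical parameters).
a, c, alpha, beta, lam, q, r, T = 0.8, 1.0, 0.2, 0.1, 0.8, 1.0, 1.0, 300
rng = random.Random(1)
x, xs, ys, gammas = 0.0, [], [], []
for k in range(T):
    g = 1 if rng.random() < lam else 0
    ys.append(g * (c + beta * rng.gauss(0, 1)) * x + rng.gauss(0, r ** 0.5))
    xs.append(x); gammas.append(g)
    x = (a + alpha * rng.gauss(0, 1)) * x + rng.gauss(0, q ** 0.5)

S, xp, Pp = 0.0, 0.0, 0.0
xh_list, P_list, Pp_list, xp_list, S_list = [], [], [], [], []
for k in range(T):
    xp_list.append(xp); S_list.append(S)
    if gammas[k]:
        Re = c * c * Pp + beta * beta * S + r
        K = Pp * c / Re
        xh, P = xp + K * (ys[k] - c * xp), Pp - K * c * Pp
    else:
        xh, P = xp, Pp
    xh_list.append(xh); P_list.append(P); Pp_list.append(Pp)
    Pp = a * a * P + alpha * alpha * S + q
    S = (a * a + alpha * alpha) * S + q
    xp = a * xh

# Fixed-point smoothing of x(t0): refresh with each new innovation.
t0 = 50                                          # fixed smoothing point
xfp, B = xh_list[t0], 1.0                        # start from the filter at t0
traj = [xfp]                                     # x-hat(t0 | t) for t >= t0
for t in range(t0 + 1, T):
    B *= P_list[t - 1] * a / Pp_list[t]          # accumulated backward gains
    if gammas[t]:                                # packet arrived: refresh
        Re = c * c * Pp_list[t] + beta * beta * S_list[t] + r
        K = Pp_list[t] * c / Re
        xfp += B * K * (ys[t] - c * xp_list[t])  # scaled innovation update
    traj.append(xfp)
```

Because B shrinks geometrically, later measurements contribute less and less, so the fixed-point estimate settles after a while, consistent with the qualitative discussion of Figures 1 and 2.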
3.4. Comparison
In this subsection, we compare the three smoothers. It can easily be seen from (3.31), (3.38), and (3.45) that the smoothers are all given by a Kalman filter plus an updating term. For convenient comparison, the smoother error covariance matrices are given in (3.32), (3.39), and (3.46), which develops the main results in [6, 7], where only the Kalman smoothers are given.
It can easily be seen from (3.31) and (3.38) that the Kalman fixed-interval smoother is similar to the fixed-lag smoother. For (3.31), the computation time is , and it is in (3.38). For the Kalman fixed-interval smoother, is fixed and is variable, so when , the corresponding smoother can be given. For the Kalman fixed-lag smoother, is fixed and is variable, so when , the corresponding smoother can be given. The two smoothers are similar in the forms of (3.31) and (3.38). However, it is hard to see which smoother is better from the smoother error covariance matrices (3.32) and (3.39), which will be examined in the numerical example.
For the Kalman fixed-point smoother in Theorem 3.11, the time is fixed, and in this case the Kalman fixed-point smoother is intrinsically different from the fixed-interval and fixed-lag smoothers. Note that can be equal to .
4. Numerical Example
In this section, we give an example to show the efficiency of the presented results and to compare them.
Consider the system (2.1)–(2.3) with . The initial state value is , and the noises and are uncorrelated white noises with zero means and unit covariance matrices, that is, . The observation noise has zero mean and covariance matrix .
Our aim is to calculate the Kalman fixed-interval smoother of the signal , Kalman fixed-lag smoother of the signal for , and Kalman fixed-point smoother of the signal for based on observations , and , respectively. For the Kalman fixed-point smoother , we can set .
According to Theorem 3.7, the computation of the Kalman fixed-interval smoother can be summarized in three steps as shown below.
Step 1. Compute , , , and by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for , respectively.
Step 2. is set invariant; compute by (3.33) for with the above initial values .
Step 3. Compute the Kalman fixed-interval smoother by (3.31) for with fixed N.
Similarly, according to Theorem 3.9, the computation of the Kalman fixed-lag smoother can be summarized in three steps as shown below.
Step 1. Compute , , , and by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for and , respectively.
Step 2. is set invariant; compute by (3.40) for with the above initial values .
Step 3. Compute the Kalman fixed-lag smoother by (3.38) for .
According to Theorem 3.11, the computation of Kalman fixed-point smoother can be summarized in three steps as shown below.
Step 1. Compute , , , and by (3.20), (3.19), (3.18), and (3.17) in Lemma 3.5 for , respectively.
Step 2. Compute by (3.47) for with the initial value .
Step 3. Compute the Kalman fixed-point smoother by (3.45) for .
The tracking performance of the Kalman fixed-point smoother is shown in Figures 1 and 2; the line is based on the fixed time , and the variable is . It can easily be seen from the two figures that the smoother changes considerably at first, and after time , the fixed-point smoother is essentially fixed; that is, the measurement at time has little effect on due to packet loss. In addition, at time , the estimate (filter) is closer to the original signal than the other estimates , which shows that the Kalman filter outperforms the fixed-point smoother for WSNs with packet loss. In fact, the smoothers below are also not as good as the filter.
The fixed-interval smoother is shown in Figures 3 and 4, and the tracking performance of the fixed-lag smoother is shown in Figures 5 and 6. As the figures show, both smoothers can track the original signal in general.
In addition, following the comparison at the end of the last section, we compare the sums of the error covariances of the fixed-interval and fixed-lag smoothers (the fixed-point smoother is intrinsically different from the other two, so its error covariance need not be compared, as explained at the end of the last section). We also give the sum of the error covariance of the Kalman filter; all three are drawn in Figure 7. As seen from Figure 7, it is hard to say which smoother is better due to packet loss, and the smoothers do not outperform the filter.
5. Conclusion
In this paper, we have studied Kalman fixed-interval smoothing, fixed-lag smoothing [6], and fixed-point smoothing [7] for wireless sensor network systems with packet loss and multiplicative noises. The smoothers are given by recursive equations. The smoother error covariance matrices of fixed-interval smoothing and fixed-lag smoothing are given by Riccati equations without recursion, while the smoother error covariance matrix of fixed-point smoothing is given by a recursive Riccati equation and a recursive Lyapunov equation. A comparison among the fixed-point, fixed-interval, and fixed-lag smoothers has been given, and a numerical example verifies the proposed approach. The proposed approach will be useful for studying more difficult problems, for example, WSNs with random delay and packet loss [19].
Disclosure
X. Lu is affiliated with Shandong University of Science and Technology and also with Shandong University, Qingdao, China. H. Wang and X. Wang are affiliated with Shandong University of Science and Technology, Qingdao, China.
Acknowledgment
This work is supported by the National Natural Science Foundation of China (60804034), the Scientific Research Foundation for the Excellent Middle-Aged and Youth Scientists of Shandong Province (BS2012DX031), the Natural Science Foundation of Shandong Province (ZR2009GQ006), the SDUST Research Fund (2010KYJQ105), the Project of Shandong Province Higher Educational Science and Technology Program (J11LG53), and the "Taishan Scholarship" Construction Engineering.