Abstract

This paper studies the remote filtering problem over a packet-dropping network. A general multiple-input multiple-output (MIMO) discrete-time system is considered. The multiple measurements are sent over different communication channels at every time step, and the packet loss phenomenon in every communication channel is described by an independent and identically distributed (i.i.d.) Bernoulli process. A suboptimal filter that minimizes the mean squared estimation error is obtained. The convergence properties of the estimation error covariance are studied, and the mean square stability of the suboptimal filter is proved under standard assumptions. A simulation example is provided to demonstrate the effectiveness of the results.

1. Introduction

Filtering systems which transmit data packets through communication networks are called network-based filtering systems [1]. The introduction of networks brings many attractive advantages, such as low cost, fast deployment, and flexible installation. However, communication networks are usually unreliable and may give rise to packet losses and network-induced delays due to their inherently limited bandwidth. Packet losses and network-induced delays can degrade the performance of such systems or even cause instability. Hence, it is not surprising that, in the past few years, the state estimation problem for network-based filtering systems with packet losses and time delays has been an active research area; see, for example, [2–7]. This paper is concerned with the design of a filter for networked discrete-time systems with random observation losses.

In the literature, there have commonly been two approaches to modeling the packet loss phenomenon in network-based filtering systems. The first approach is to use a system with Markovian jumping parameters to represent the random packet loss model [1]. Note that such systems form a special class of Markovian jumping systems; hence, some results on control synthesis [8, 9] and filtering methods [10] for Markovian jumping systems may be extended to them. The second approach is to model the packet loss as an independent Bernoulli process. This approach has been used to deal with estimation problems for network-based filtering systems with missing or intermittent observations [11–13].

As is well known, Kalman filtering [14] is one of the most popular and useful approaches to the filtering problem. In the literature, a number of results have been reported on the Kalman filtering problem with observation losses [11–13, 15–17]. The early studies on Kalman filtering with uncertain observations can be traced back to [15], where the linear minimum mean squared error (LMMSE) estimation method is considered. More recently, the LMMSE filter was obtained for systems with multiple packet dropouts in [16], where the number of possible consecutive packet dropouts is limited by a known bound. Moreover, by state augmentation, the LMMSE optimal filter, predictor, and smoother are designed for systems with finite consecutive packet dropouts in [17]. LMMSE filtering only uses the statistics of the unobserved uncertainty sequence. In fact, in networked filtering systems the filter can obtain the information on whether a packet has been delivered or not. In [11], a new filtering method, called Kalman filtering with intermittent observations, was proposed. The filter in [11] exploits the additional information carried by the packet arrival indicator sequence and can therefore give better performance. Nonetheless, owing to the complexity of the analysis, only bounds on the performance of that filter are available [11, 12]. Motivated by the above analysis, [13] proposes a new suboptimal estimator under a new performance index, which improves the performance of the LMMSE Kalman filter and possesses better convergence and stability properties than the Kalman filter with intermittent observations.

Our paper extends the results in [13] to a more general case. In [13], the traditional assumption is made that all the measurements are encoded together and transmitted to the remote filter via a common communication channel; thus the measurements are either received in full or lost completely. However, in practical networked filtering systems, the measurements usually cannot be encapsulated into one data packet, and multiple measurements must be transmitted through different communication channels. Moreover, the packet loss processes in different channels are often distinct. This observation motivates the present paper. We investigate the suboptimal filtering problem for discrete-time systems with a multichannel transmission mechanism. The convergence of the estimation error covariance and the mean square stability of the filter are proved. It should be pointed out that the presented results can also be applied to systems in which all measurements are sent via one common communication channel.

The remainder of the paper is organized as follows. Section 2 formulates the problem and gives some preliminaries. The main results of this paper are presented in Section 3: the suboptimal filter is derived, and the convergence and stability of the suboptimal filter are proved under standard assumptions. A simulation example is given in Section 4 to demonstrate the effectiveness of the approach. Section 5 concludes the paper.

2. Problem Statements and Preliminaries

Consider the following network-based filtering system:
$$x_{k+1} = A x_k + w_k, \quad (1)$$
$$y_k = C x_k + v_k, \quad (2)$$
where $x_k$ is the system state, $y_k$ is the measurement, and $w_k$ and $v_k$ are, respectively, the system noise and the measurement noise with zero mean and covariance matrices $Q \ge 0$ and $R > 0$, in which $E[w_k w_j^T] = Q\,\delta_{kj}$, $E[v_k v_j^T] = R\,\delta_{kj}$, $E[w_k v_j^T] = 0$, and $\delta_{kj}$ is the Kronecker delta function. The initial state $x_0$ is also a random vector, with mean $\bar{x}_0$ and covariance $P_0$. $A$ and $C$ are constant matrices of appropriate dimensions. The scenario under consideration is illustrated in Figure 1, where the measurement $y_k$ is partitioned into $N$ parts; that is, $y_k = [(y_k^1)^T, (y_k^2)^T, \ldots, (y_k^N)^T]^T$. The $N$ measurement components are transmitted to the remote filter through $N$ different communication channels. The independent and identically distributed (i.i.d.) Bernoulli random variables $\gamma_k^i$ ($i = 1, 2, \ldots, N$) are employed to describe, respectively, the packet dropout phenomenon in the $N$ channels, with $\mathrm{Prob}\{\gamma_k^i = 1\} = \lambda_i$ and $\mathrm{Prob}\{\gamma_k^i = 0\} = 1 - \lambda_i$. It is further assumed that the packet arrival indicators $\gamma_k^i$ and $\gamma_k^j$ are independent for every $i \neq j$ and every $k$; then we have $E[\gamma_k^i \gamma_k^j] = \lambda_i \lambda_j$ for $i \neq j$. From the above assumptions, the measurement received by the filter is $\bar{y}_k = \Lambda_k y_k$, where the multiplicative noise matrix $\Lambda_k$ can be expressed by the following diagonal binary random matrix with entries of either 1 or 0 on the diagonal:
$$\Lambda_k = \mathrm{diag}\{\gamma_k^1 I_{m_1}, \gamma_k^2 I_{m_2}, \ldots, \gamma_k^N I_{m_N}\}, \quad (3)$$
where $m_i$ is the dimension of $y_k^i$. It is noted that, as shown in Figure 1, the $N$ measurements are transmitted to the filter via $N$ channels. In fact, some measurements may be encoded together and sent over the network in a single packet; that is to say, the number of communication channels can be smaller than the number of measurements. The model proposed in this paper can easily be adjusted to describe this case. For example, assume that the number of measurements is 4, the first two measurements are encoded together and transmitted over one common channel, and the last two measurements are transmitted over another common channel. Then $\Lambda_k$ can be written as
$$\Lambda_k = \mathrm{diag}\{\gamma_k^1, \gamma_k^1, \gamma_k^2, \gamma_k^2\}. \quad (4)$$
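The following Python sketch illustrates how the arrival matrix $\Lambda_k$ is formed, including the grouped-channel case of (4); the arrival probabilities and block sizes below are hypothetical values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def arrival_matrix(gammas, block_sizes):
    """Build the diagonal binary matrix Lambda_k of (3): each channel's
    Bernoulli indicator gamma_k^i is repeated over the measurement
    components carried by that channel."""
    diag = np.concatenate([np.full(m, g, dtype=float)
                           for g, m in zip(gammas, block_sizes)])
    return np.diag(diag)

# Example corresponding to (4): four scalar measurements sent over two
# channels (measurements 1-2 share channel 1, measurements 3-4 share channel 2).
lam = np.array([0.8, 0.6])              # hypothetical arrival probabilities
gammas = rng.random(2) < lam            # one i.i.d. Bernoulli draw per channel
Lambda_k = arrival_matrix(gammas, block_sizes=[2, 2])
print(Lambda_k)                         # diag(g1, g1, g2, g2) with entries 0 or 1
```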

Throughout this paper, without loss of generality, the following assumptions are made for technical convenience.

Assumption 1. The random processes $w_k$, $v_k$, and $\gamma_k^i$ for all $i$ and $k$, and the initial state $x_0$, are mutually independent.

Assumption 2. The remote filter can obtain the information regarding the packet arrival indicators $\gamma_k^i$ (and hence $\Lambda_k$) by employing the time-stamp technique.

Similar to [13], we introduce the following innovation sequence $\varepsilon_k$:
$$\varepsilon_k = \bar{y}_k - \Lambda_k C \hat{x}_k, \quad (5)$$
where $\hat{x}_k$ is the state estimate generated by
$$\hat{x}_{k+1} = A \hat{x}_k + K_k \varepsilon_k, \quad (6)$$
and $K_k$ is the gain matrix of the filter, which can be chosen such that the following is minimized:
$$J_{k+1} = E\big[(x_{k+1} - \hat{x}_{k+1})^T (x_{k+1} - \hat{x}_{k+1})\big]. \quad (7)$$

According to [13, Lemma 1], it is easy to obtain that $\varepsilon_k$ is a mutually uncorrelated noise sequence with zero mean. The filtering problem considered in this paper is to find the state estimate $\hat{x}_{k+1}$ in (6), in which the gain $K_k$ is chosen such that (7) is minimized.

Remark 3. Our paper extends the results in [13], where all the measurements are transmitted to the filter via a common communication channel. If the packet loss processes in all the channels are identical, the filter proposed in this paper is equivalent to that in [13], which means that the suboptimal filter in [13] can be regarded as a special case of our filter.

Remark 4. Our estimation problem is different from that of [12], where the expectation is taken only over the system noise $w_k$ and the measurement noise $v_k$. In this paper, the expectation is taken over not only $w_k$ and $v_k$ but also the multiplicative noise matrix $\Lambda_k$ with binary entries on the diagonal. Moreover, in [12] it is assumed, for the sake of simplicity, that the measurements are sent to the filter via two communication channels, while we address the filtering problem in the more general case in which the observations can be sent over an arbitrary number of communication channels.

Remark 5. The filter proposed in this paper is also different from the LMMSE optimal Kalman filter with multiplicative noise [18], where the innovation sequence is defined as
$$\tilde{\varepsilon}_k = \bar{y}_k - \bar{\Lambda} C \hat{x}_k, \quad (8)$$
in which $\bar{\Lambda} = E[\Lambda_k] = \mathrm{diag}\{\lambda_1 I_{m_1}, \ldots, \lambda_N I_{m_N}\}$ is the mean of the random matrix $\Lambda_k$. The state estimate in [18] can be written as
$$\hat{x}_{k+1} = A \hat{x}_k + K_k \tilde{\varepsilon}_k. \quad (9)$$
The aim of [18] is to find $K_k$ such that (7) is minimized. Obviously, this filter only uses the statistics of the multiplicative noise matrix $\Lambda_k$, while the filter proposed in this paper exploits the additional information carried by the packet arrival indicator sequence. Hence, our filter may give better performance.
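To make the distinction concrete, the snippet below contrasts the two innovations for one time step: the LMMSE innovation (8), which uses only the mean $\bar{\Lambda}$, and the arrival-aware innovation (5), which uses the realized $\Lambda_k$. All numerical values are hypothetical and serve only as an illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
C = np.eye(2)                             # hypothetical measurement matrix
lam = np.array([0.7, 0.5])                # hypothetical arrival probabilities
Lambda_bar = np.diag(lam)                 # mean of Lambda_k, used in (8)

x_hat = np.zeros(2)                       # current state estimate
y = np.array([1.0, -0.5])                 # raw measurement y_k
gam = (rng.random(2) < lam).astype(float)
Lambda_k = np.diag(gam)                   # realized arrival matrix
ybar = Lambda_k @ y                       # measurement actually received

eps_lmmse   = ybar - Lambda_bar @ C @ x_hat  # innovation (8): statistics only
eps_arrival = ybar - Lambda_k   @ C @ x_hat  # innovation (5): realized arrivals
print(eps_lmmse, eps_arrival)
```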

3. Main Results

3.1. Suboptimal Filter Design

The following theorem gives the recursive equations of the suboptimal filter defined in (6).

Theorem 6. Consider the system (1)-(2); the suboptimal filter defined in (6) is given as follows:
$$\hat{x}_{k+1} = A \hat{x}_k + K_k (\bar{y}_k - \Lambda_k C \hat{x}_k), \quad (10)$$
where the gain matrix is calculated as
$$K_k = A P_k C^T \bar{\Lambda} \Phi_k^{-1}, \quad (11)$$
and the state estimation error covariance $P_k = E[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T]$ is given by
$$P_{k+1} = A P_k A^T + Q - A P_k C^T \bar{\Lambda} \Phi_k^{-1} \bar{\Lambda} C P_k A^T, \quad (12)$$
in which $\Phi_k$ is defined as in (15) below.

Proof. Let $e_{k+1} = x_{k+1} - \hat{x}_{k+1}$ and note (1), (5), and (6); we have
$$e_{k+1} = (A - K_k \Lambda_k C) e_k + w_k - K_k \Lambda_k v_k. \quad (13)$$
Taking expectation over $w_k$, $v_k$, and $\Lambda_k$, we can obtain
$$P_{k+1} = A P_k A^T + Q - A P_k C^T \bar{\Lambda} K_k^T - K_k \bar{\Lambda} C P_k A^T + K_k E\big[\Lambda_k (C P_k C^T + R) \Lambda_k\big] K_k^T, \quad (14)$$
where $\bar{\Lambda} = E[\Lambda_k] = \mathrm{diag}\{\lambda_1 I_{m_1}, \ldots, \lambda_N I_{m_N}\}$.
Define
$$\Phi_k = E\big[\Lambda_k (C P_k C^T + R) \Lambda_k\big], \quad (15)$$
where, by the independence of the channels,
$$[\Phi_k]_{ij} = \begin{cases} \lambda_i \lambda_j\, [C P_k C^T + R]_{ij}, & i \neq j, \\ \lambda_i\, [C P_k C^T + R]_{ii}, & i = j, \end{cases} \quad (16)$$
with $[\cdot]_{ij}$ denoting the $(i,j)$th block. Then we can rewrite (14) as
$$P_{k+1} = A P_k A^T + Q - A P_k C^T \bar{\Lambda} \Phi_k^{-1} \bar{\Lambda} C P_k A^T + \Delta_k, \quad (17)$$
$$\Delta_k = \big(K_k - A P_k C^T \bar{\Lambda} \Phi_k^{-1}\big)\, \Phi_k\, \big(K_k - A P_k C^T \bar{\Lambda} \Phi_k^{-1}\big)^T \ge 0. \quad (18)$$
If we choose $K_k = A P_k C^T \bar{\Lambda} \Phi_k^{-1}$, (18) can be minimized (it equals zero), and hence $P_{k+1}$ is minimized, which gives (11) and (12). This completes the proof.
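For concreteness, the following sketch implements one step of the filter as reconstructed in (10)-(15), allowing each channel to carry a block of measurement components; the symbols and the block scaling used for $\Phi_k$ follow the reconstruction above and should be checked against the original expressions.

```python
import numpy as np

def phi_matrix(P, C, R, lam, sizes):
    """Phi_k of (15): E[Lambda_k (C P C^T + R) Lambda_k].  With independent
    channels, block (i, j) is scaled by lam_i*lam_j for i != j and by lam_i
    for i == j (since E[(gamma^i)^2] = lam_i)."""
    S = C @ P @ C.T + R
    l = np.concatenate([np.full(m, li) for li, m in zip(lam, sizes)])
    W = np.outer(l, l)
    start = 0
    for li, m in zip(lam, sizes):
        W[start:start + m, start:start + m] = li
        start += m
    return W * S

def filter_update(x_hat, P, ybar, Lambda_k, A, C, Q, R, lam, sizes):
    """One step of the suboptimal filter: gain (11), estimate (10), covariance (12)."""
    Lbar = np.diag(np.concatenate([np.full(m, li) for li, m in zip(lam, sizes)]))
    Phi_inv = np.linalg.inv(phi_matrix(P, C, R, lam, sizes))
    K = A @ P @ C.T @ Lbar @ Phi_inv
    x_next = A @ x_hat + K @ (ybar - Lambda_k @ C @ x_hat)
    P_next = A @ P @ A.T + Q - A @ P @ C.T @ Lbar @ Phi_inv @ Lbar @ C @ P @ A.T
    return x_next, P_next
```

Note that the gain $K_k$ depends only on the arrival probabilities, not on the realized indicators, whereas the innovation uses the realized $\Lambda_k$; this is what makes the filter suboptimal but analytically tractable.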

3.2. Convergence and Stability of the Suboptimal Filter

In this subsection, the convergence and stability of the proposed filter are studied. The following preliminary lemmas are introduced before presenting the main objectives of this paper.

Lemma 7. Consider the following operators:
$$\Phi(K, P) = A P A^T + Q - A P C^T \bar{\Lambda} K^T - K \bar{\Lambda} C P A^T + K \Phi_P K^T, \qquad g(P) = \Phi(K_P, P), \quad (21)$$
where
$$\Phi_P = E\big[\Lambda_k (C P C^T + R) \Lambda_k\big], \qquad K_P = A P C^T \bar{\Lambda} \Phi_P^{-1}. \quad (22)$$
Assume $P$ is symmetric and positive semidefinite; then the following facts are true.
(i) With $K = K_P$, $\Phi(K_P, P) = g(P) = A P A^T + Q - A P C^T \bar{\Lambda} \Phi_P^{-1} \bar{\Lambda} C P A^T$.
(ii) $g(P) \le \Phi(K, P)$, for all $K$.
(iii) If $P_1 \le P_2$, then $g(P_1) \le g(P_2)$.

Proof. (i) Fact (i) can be obtained by directly substituting $K = K_P$ into (21); the details are therefore omitted.
(ii) The proof of fact (ii) is somewhat more technical, since the gain appearing in the operator $g(P)$ is an implicit function of $P$. In order to facilitate the proof, we transform (21) into an expression that is explicit in the gain. First, we rewrite (21) in the form (23), with the auxiliary quantities defined in (24).
Then, let us define the decomposition of $\Lambda_k$ in terms of the diagonal matrices $D_i$ ($i = 1, \ldots, N$), which have all diagonal elements equal to zero except for those corresponding to the $i$th channel, which equal 1. Noting (23), the term (28) can then be described by (29). The above analysis leads to the explicit expression (30) of (24), which is explicit in the gain. We are now ready to derive fact (ii).
Following an idea similar to that in [11], the expression (30) is quadratic and convex in the gain. Therefore, the minimizer can be found by setting the derivative with respect to the gain to zero, which gives (32). From (25), (27), and (29), we can rewrite (32) as (33), which corresponds to $K_P$ defined above, so fact (ii) follows from fact (i).
(iii) The monotonicity of $g(\cdot)$ can be obtained directly from (30): if $P_1 \le P_2$, then $g(P_1) \le g(P_2)$. This completes the proof.
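As a quick sanity check of facts (ii) and (iii), the following sketch evaluates the operators $\Phi(K, P)$ and $g(P)$ as reconstructed in (21)-(22) on hypothetical data (one scalar measurement per channel) and verifies the two matrix inequalities numerically; all matrices and probabilities are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0.95, 0.2], [0.0, 0.9]])  # hypothetical data for the check
C = np.eye(2)
Q = 0.1 * np.eye(2)
R = 0.2 * np.eye(2)
lam = np.array([0.7, 0.5])               # one scalar measurement per channel

def phi_P(P):
    """Phi_P = E[Lambda (C P C^T + R) Lambda] for scalar channels."""
    S = C @ P @ C.T + R
    W = np.outer(lam, lam)
    np.fill_diagonal(W, lam)             # E[(gamma^i)^2] = lam_i on the diagonal
    return W * S

def phi_op(K, P):
    """Phi(K, P) of (21): covariance map for a fixed gain K."""
    Lbar = np.diag(lam)
    return (A @ P @ A.T + Q - A @ P @ C.T @ Lbar @ K.T
            - K @ Lbar @ C @ P @ A.T + K @ phi_P(P) @ K.T)

def g_op(P):
    """g(P) = Phi(K_P, P) with the minimizing gain K_P of (22)."""
    K_P = A @ P @ C.T @ np.diag(lam) @ np.linalg.inv(phi_P(P))
    return phi_op(K_P, P)

P1, P2 = 0.5 * np.eye(2), 2.0 * np.eye(2)           # P1 <= P2
K = rng.standard_normal((2, 2))                      # an arbitrary gain
print(np.all(np.linalg.eigvalsh(phi_op(K, P1) - g_op(P1)) >= -1e-9))  # fact (ii)
print(np.all(np.linalg.eigvalsh(g_op(P2) - g_op(P1)) >= -1e-9))       # fact (iii)
```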

Lemma 8. For a fixed gain $K$, let the operator
$$\mathcal{L}(Y) = E\big[(A - K \Lambda_k C)\, Y\, (A - K \Lambda_k C)^T\big]. \quad (35)$$
Suppose there exists $\bar{Y} > 0$ such that $\bar{Y} > \mathcal{L}(\bar{Y})$.
(i) For all $W \ge 0$, $\lim_{k \to \infty} \mathcal{L}^k(W) = 0$.
(ii) Define the linear system
$$Y_{k+1} = \mathcal{L}(Y_k) + U_k, \quad (36)$$
where $Y_0 \ge 0$ and $U_k \ge 0$ is a bounded sequence. Then the sequence $\{Y_k\}$ is bounded.
The proof can be derived similarly to that of [11, Lemma 3]; it is therefore omitted.

Theorem 9. Suppose there exist a matrix $\bar{K}$ and a positive definite matrix $\bar{P}$ such that $\bar{P} > \Phi(\bar{K}, \bar{P})$; then, for any initial condition $P_0 \ge 0$, (12) converges to a unique positive semidefinite matrix $\bar{P}_\infty$. That is to say, $\lim_{k \to \infty} P_k = \bar{P}_\infty$.

Proof. Consider the operator
$$\mathcal{L}_{\bar{K}}(Y) = E\big[(A - \bar{K} \Lambda_k C)\, Y\, (A - \bar{K} \Lambda_k C)^T\big].$$
Then we have
$$\Phi(\bar{K}, P) = \mathcal{L}_{\bar{K}}(P) + Q + \bar{K}\, E[\Lambda_k R \Lambda_k]\, \bar{K}^T.$$
Note that $Q \ge 0$ and $\bar{K} E[\Lambda_k R \Lambda_k] \bar{K}^T \ge 0$; we can easily obtain that $\mathcal{L}_{\bar{K}}(P) \le \Phi(\bar{K}, P)$. Then, in view of the assumption $\bar{P} > \Phi(\bar{K}, \bar{P})$, we have $\bar{P} > \mathcal{L}_{\bar{K}}(\bar{P})$. Therefore, $\mathcal{L}_{\bar{K}}$ meets the condition of Lemma 8.
Further, noting (12) and (20), we get $P_{k+1} = g(P_k)$. It follows from Lemma 7(ii) that
$$P_{k+1} = g(P_k) \le \Phi(\bar{K}, P_k) = \mathcal{L}_{\bar{K}}(P_k) + Q + \bar{K}\, E[\Lambda_k R \Lambda_k]\, \bar{K}^T.$$
And using Lemma 8(ii), we have that the sequence $\{P_k\}$ is bounded.
In the following, we show that (12) converges to the same limit for three types of initial conditions: $P_0 = 0$, $P_0 = \Theta$ for a suitably chosen matrix $\Theta$, and arbitrary $P_0 \ge 0$.
(i) $P_0 = 0$.
It is noteworthy that $0 = P_0 \le P_1$, which means $P_0 \le P_1$. It follows from Lemma 7(iii) that $P_1 = g(P_0) \le g(P_1) = P_2$. By mathematical induction, we can see that $P_k \le P_{k+1}$ for any time step $k$. So far, we can conclude that the sequence $\{P_k\}$ is bounded and monotonically increasing with $k$, which implies that the limit $\bar{P}_\infty = \lim_{k \to \infty} P_k$ exists and is positive semidefinite. Moreover, by taking limits in (11), (12), (15), and (16), one obtains the corresponding steady-state gain $\bar{K}_\infty$ and the steady-state version of the covariance equation satisfied by $\bar{P}_\infty$.
(ii) $P_0 = \Theta$.
In the following, we show that (12) initialized at $\Theta$ also converges to the same limit $\bar{P}_\infty$.
It follows from Lemma 7(iii) that the ordering between the sequence initialized at $\Theta$ and the sequence initialized at $0$ is preserved at the first step; by induction, it holds for any $k$.
Now we define the difference between the sequence initialized at $\Theta$ and the limit $\bar{P}_\infty$, together with an associated linear operator, and this operator meets the condition of Lemma 8. Using Lemma 7(i), (ii), and (30), we can bound this difference by a sequence generated by the linear operator. Since the operator meets the condition of Lemma 8, from the above analysis, the difference converges to zero. Thus we claim that (12) initialized at $\Theta$ also converges to the same limit $\bar{P}_\infty$.
(iii) Arbitrary $P_0 \ge 0$.
We now establish that the sequence $\{P_k\}$ converges to $\bar{P}_\infty$ for all initial conditions $P_0 \ge 0$.
Choosing $\Theta$ such that $0 \le P_0 \le \Theta$ and applying Lemma 7(iii) at every step, the sequence started at $P_0$ is bounded below by the sequence started at $0$ and above by the sequence started at $\Theta$. Since both of these converge to the same limit $\bar{P}_\infty$, $\lim_{k \to \infty} P_k = \bar{P}_\infty$ holds. This completes the proof.
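The convergence claimed by Theorem 9 can be observed numerically by iterating the (reconstructed) recursion (12) from very different initial conditions; the sketch below uses hypothetical matrices and checks that both trajectories reach the same fixed point.

```python
import numpy as np

A = np.array([[0.95, 0.2], [0.0, 0.9]])  # hypothetical example data
C = np.eye(2)
Q = 0.1 * np.eye(2)
R = 0.2 * np.eye(2)
lam = np.array([0.7, 0.5])               # one scalar measurement per channel

def g_op(P):
    """One step of the covariance recursion (12), P_{k+1} = g(P_k)."""
    S = C @ P @ C.T + R
    W = np.outer(lam, lam)
    np.fill_diagonal(W, lam)
    M = A @ P @ C.T @ np.diag(lam)
    return A @ P @ A.T + Q - M @ np.linalg.solve(W * S, M.T)

def iterate(P0, steps=300):
    P = P0
    for _ in range(steps):
        P = g_op(P)
    return P

P_from_zero  = iterate(np.zeros((2, 2)))    # initial condition P_0 = 0
P_from_large = iterate(100.0 * np.eye(2))   # a large initial condition
print(np.allclose(P_from_zero, P_from_large, atol=1e-8))  # same limit P_bar
```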

Taking limits in (10), that is, replacing the gain $K_k$ with its limit $\bar{K}_\infty$, we can rewrite the estimator (10) as
$$\hat{x}_{k+1} = A \hat{x}_k + \bar{K}_\infty (\bar{y}_k - \Lambda_k C \hat{x}_k). \quad (52)$$
Next, we present the result that the steady-state filter (52) is mean square stable.

Theorem 10. Suppose there exist a matrix $\bar{K}$ and a positive definite matrix $\bar{P}$ such that $\bar{P} > \Phi(\bar{K}, \bar{P})$; then the filter (52) is mean square stable if the limit $\bar{P}_\infty$ of (12) is positive definite.

Proof. First, it is obvious that the mean square stability of the following unforced system is equivalent to that of the filter (52):
$$z_{k+1} = (A - \bar{K}_\infty \Lambda_k C) z_k. \quad (53)$$
From the Lyapunov inequality in [19], we conclude that if we can find a positive definite matrix $X$ satisfying
$$E\big[(A - \bar{K}_\infty \Lambda_k C)\, X\, (A - \bar{K}_\infty \Lambda_k C)^T\big] - X < 0, \quad (54)$$
then (53) is mean square stable. Choosing $X = \bar{P}_\infty$ and considering (44), we can rewrite (54) as (55). From (45) and the assumption $\bar{P}_\infty > 0$, it is obvious that (55) holds. Hence the proof is complete.
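A numerical illustration of this Lyapunov-type argument is sketched below, again on hypothetical data: it iterates the (reconstructed) recursion (12) to obtain approximations of $\bar{P}_\infty$ and $\bar{K}_\infty$, evaluates the expectation in (54) exactly by enumerating the arrival patterns of the two channels, and checks that $X = \bar{P}_\infty$ certifies mean square stability.

```python
import numpy as np
from itertools import product

A = np.array([[0.95, 0.2], [0.0, 0.9]])  # hypothetical example data
C = np.eye(2)
Q = 0.1 * np.eye(2)
R = 0.2 * np.eye(2)
lam = np.array([0.7, 0.5])

def g_op(P):
    S = C @ P @ C.T + R
    W = np.outer(lam, lam)
    np.fill_diagonal(W, lam)
    M = A @ P @ C.T @ np.diag(lam)
    return A @ P @ A.T + Q - M @ np.linalg.solve(W * S, M.T)

# approximate steady-state covariance P_bar and gain K_bar (limits of (12), (11))
P = np.zeros((2, 2))
for _ in range(500):
    P = g_op(P)
S = C @ P @ C.T + R
W = np.outer(lam, lam)
np.fill_diagonal(W, lam)
K_bar = A @ P @ C.T @ np.diag(lam) @ np.linalg.inv(W * S)

# evaluate E[(A - K_bar Lambda C) X (A - K_bar Lambda C)^T] - X with X = P_bar,
# averaging exactly over all arrival patterns of the two channels
X = P
EFXF = np.zeros_like(X)
for bits in product([0.0, 1.0], repeat=len(lam)):
    prob = np.prod([l if b else 1 - l for b, l in zip(bits, lam)])
    F = A - K_bar @ np.diag(bits) @ C
    EFXF += prob * F @ X @ F.T
print(np.max(np.linalg.eigvalsh(EFXF - X)) < 0)  # True: certificate (54) holds
```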

4. Simulation Example

In this section, for the purpose of illustrating the effectiveness of the filter proposed in this paper, we present a simulation example of the form (1)-(2), where $w_k$ and $v_k$ are Gaussian random noises with zero means and given covariances. Assume that two sensors which are not located together are used to measure the outputs of the system; thus the two measurements are transmitted to the filter via two distinct communication channels. The packet loss processes in the two channels are different from each other and are described by two i.i.d. Bernoulli random processes $\gamma_k^1$ and $\gamma_k^2$, respectively. Moreover, the processes $\gamma_k^1$ and $\gamma_k^2$ are mutually independent, which means $E[\gamma_k^1 \gamma_k^2] = \lambda_1 \lambda_2$. Furthermore, the arrival probabilities $\lambda_1$ and $\lambda_2$ of the two channels are given. With the given initial condition, the tracking performance of the proposed suboptimal filter is shown in Figure 2, which shows that our filter is effective. Then we design the LMMSE filter for this example and compare it with the proposed filter in Figure 3. Note that the error covariance matrix is symmetric, so only its distinct entries are shown in Figure 3. From Figure 3, we can see that the covariance of our filter is convergent, while the covariance of the LMMSE filter is divergent. Moreover, the error covariance matrix of the newly proposed filter converges to a positive semidefinite matrix.
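Since the numerical data of this example are not reproduced above, the following sketch uses hypothetical stand-in matrices and arrival probabilities to show how such a simulation can be set up: the system is propagated, each of the two measurements is dropped independently, the (reconstructed) filter of Theorem 6 is run, and the empirical mean squared error is compared with the trace of the steady-state covariance.

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical stand-ins for the example data (two scalar measurements,
# two independent channels); not the paper's numerical values
A = np.array([[0.95, 0.2], [0.0, 0.9]])
C = np.eye(2)
Q = 0.1 * np.eye(2)
R = 0.2 * np.eye(2)
lam = np.array([0.8, 0.6])
Lbar = np.diag(lam)

def gain_and_next_cov(P):
    """Gain (11) and covariance update (12) of the reconstructed filter."""
    S = C @ P @ C.T + R
    W = np.outer(lam, lam)
    np.fill_diagonal(W, lam)
    Phi_inv = np.linalg.inv(W * S)
    K = A @ P @ C.T @ Lbar @ Phi_inv
    P_next = A @ P @ A.T + Q - A @ P @ C.T @ Lbar @ Phi_inv @ Lbar @ C @ P @ A.T
    return K, P_next

T = 2000
x, x_hat, P = np.zeros(2), np.zeros(2), np.zeros((2, 2))
sq_err = []
for k in range(T):
    K, P_next = gain_and_next_cov(P)
    gam = (rng.random(2) < lam).astype(float)
    Lam = np.diag(gam)                                # realized arrival matrix
    y = C @ x + rng.multivariate_normal(np.zeros(2), R)
    ybar = Lam @ y                                    # measurements that arrive
    x_hat = A @ x_hat + K @ (ybar - Lam @ C @ x_hat)  # filter (10)
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    P = P_next
    sq_err.append(float(np.sum((x - x_hat) ** 2)))
print("empirical MSE:", np.mean(sq_err[200:]), "  trace(P_bar):", np.trace(P))
```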

5. Conclusions

This paper extends the suboptimal estimation method of [13] to a more general and practical case in which the measurements are allowed to be transmitted through distinct communication channels with packet losses, and each measurement loss process is described by an i.i.d. Bernoulli process. If the packet loss processes in all the communication channels are identical, our filter is equivalent to the estimator in [13]. A suboptimal filter is designed which minimizes the mean squared estimation error. Furthermore, under standard assumptions, the convergence properties of the error covariance are studied and the designed suboptimal filter is proved to be mean square stable.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Fundamental Research Funds for the Central Universities under Grant 2014QNB21.