Abstract

This paper is concerned with optimal linear estimation for a class of discrete-time Markov jump systems with missing observations. An observer-based fault detection and isolation (FDI) approach is investigated as a mechanism for detecting the faulty case. For systems with known information, a conditional prediction of the observation is applied so that faulty observations are replaced and isolated; then, an FDI linear minimum mean square error estimator (LMMSE) can be developed by comprehensively utilizing the correct information offered by the system. A recursive filtering equation based on geometric arguments is obtained. Moreover, the stability of the state estimator is guaranteed under appropriate assumptions.

1. Introduction

Discrete-time Markov jump linear systems (MJLSs) are basically linear discrete-time systems whose parameters evolve according to a finite-state Markov chain. They can be used to model systems with abrupt structural changes, for example, those found in signal processing, fault detection [1, 2], and subsystem switching. One classical application is maneuvering target tracking, in which the signals of interest are modeled by MJLSs [3]. In these fields, state estimation for MJLSs plays an essential role in recovering desired variables from noisy observations of the output variables. Many approaches have been proposed for state estimation of MJLSs, including the generalized pseudo-Bayesian (GPB) algorithm [4, 5], interacting multiple model (IMM) filtering [6], stochastic sampling based methods [7, 8], and the LMMSE filter. These methods differ from each other in their estimation criteria and mechanisms [2, 9–12]. Among them, the LMMSE filter has been well studied for MJLSs in many works in the literature [9].

On the other hand, since applications of sensor networks are becoming ubiquitous in practical systems, wireless or wireline communication channels are essential for data communication. Examples range from advanced aircraft and spacecraft to manufacturing processes. Because communication channels are time varying and unreliable, random time delays and random packet dropouts usually occur in these networked systems. Hence, more and more attention has been paid to systems with observation faults during the past years. For example, studies on optimal recursive filters for systems with intermittent observations can be traced back to Nahi [13], whose work assumed that the uncertainty of the observations is independent and identically distributed. Afterwards, by using linear matrix inequality (LMI) techniques, various performance criteria, including finite-time performance, have been well studied for solving filtering and control problems in stochastic systems with uncertain elements [14–21]. In [22–25], the stability of the random Riccati equation arising from Kalman filtering with intermittent observations was investigated elaborately. Filtering algorithms [26–28] have been developed for discrete systems with random packet losses in [29, 30]. In [31], a robust filtering algorithm was developed for state estimation of MJLSs with randomly missing observations by applying the basic IMM approach. Reference [32] dealt with fault detection filter (FDF) design within a stochastic filtering framework for a class of discrete-time nonlinear Markov jump systems with lost measurements.

Although the aforementioned references provide efficient and practical tools to deal with filtering problems for systems with packet dropout, the results given by methods constructed on LMI techniques are sometimes too conservative. What is more, the IMM approach mentioned above requires online calculations. Inspired by the effectiveness of the LMMSE mechanism used in solving the state estimation problem of MJLSs with random time delays in [33], the problem of state estimation of MJLSs with randomly missing observations is formulated within the LMMSE filtering framework. This framework leads to a time-varying linear filter that is easy to implement, and most of the calculations can be performed off-line.

Aiming at the issue of uncertain observations in MJLSs, this paper provides a heuristic method for detecting faults in the process of transmitting observations. An approach of fault detection and isolation (FDI) [32, 34] for a class of MJLSs with missing observations is investigated. The key point of FDI is to construct the residual generator and to determine the residual evaluation function and the threshold. Then, by comparing the value of the evaluation function with the prescribed threshold, we judge whether a fault alarm should be generated. In this way, the uncertainty of the observations can be naturally and conveniently reflected. With the information of the faulty case known, a conditional prediction of the observation is obtained, which can be used as a replacement of the faulty one. At this point, we can use the optimal state estimate of the previous instant and the parameters of the system observer to estimate the observation at the current time. In this way, the faulty observation can be skipped and avoided.

Accordingly, by applying the basic FDI approach and the basic LMMSE algorithm, an FDI-LMMSE filtering algorithm is developed for state estimation of MJLSs with randomly missing observations. In order to solve the optimal estimation problem, the measurement loss process is modeled as a Bernoulli distributed white sequence taking the values 0 and 1 randomly. The estimation problem is then reformulated as optimal linear filtering of a class of MJLSs with randomly missing observations and the necessary model compensation, via state augmentation [35–38]. A recursive filter is formulated in terms of Riccati difference equations. At the same time, we show that the estimator is stable under the assumptions made in this paper.

This paper is organized as follows. Section 2 gives the problem formulation. A recursive optimal solution is given in Section 3, and its stability is discussed in Section 4. In Section 5, a numerical example is presented to illustrate the effectiveness of the proposed approach. Finally, conclusions are drawn in Section 6.

2. Problem Formulation

On the stochastic basis , consider the following jump Markov linear system model: where is the continuous-valued base-state sequence with known initial distribution , is assumed to be a known time-varying constant for each value of , is the noisy observation sequence, is the process noise sequence with distribution , and is a white measurement noise sequence, independent of the process noise, with distribution .
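For concreteness, a minimal sketch of the class of models considered here is the following, where the symbols ($x_k$ for the base state, $\theta_k$ for the modal state, $u_k$ for the compensation term, $w_k$ and $v_k$ for the noises) are introduced only for illustration:
\[
x_{k+1} = A(\theta_k)\,x_k + u_k + B(\theta_k)\,w_k, \qquad
y_k = C(\theta_k)\,x_k + D(\theta_k)\,v_k,
\]
with $x_0$ drawn from the known initial distribution and $w_k$, $v_k$ zero-mean white sequences with known covariances.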

Remark 1. The compensation term accounts for the mismatch between practical systems and the models applied in this paper.
The modal state is an unknown discrete-valued Markov chain with a finite state space . The transition probability matrix is , where . We set . The initial state, the noise sequences, and the modal-state sequence are assumed to be mutually independent for all . The matrices , , , are assumed to be known time-varying system matrices for each value of . For notational simplicity, the following notations and definitions hold in the rest of the paper:
In this paper, consider that the observations are sent to the estimator via a Gilbert-Elliot channel, where the packet arrival is modeled using a binary random variable , with probability , and with independent of if . Let be independent of ; that is, according to this model, the measurement equation consists of noise alone or noise plus signal, depending on whether is 0 or 1.
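As an illustrative sketch of this measurement model (with $\gamma_k$ denoting the Bernoulli packet-arrival variable and $\lambda$ its success probability, symbols used here only for illustration), the observation actually received by the estimator can be written as
\[
y_k = \gamma_k\, C(\theta_k)\,x_k + D(\theta_k)\,v_k, \qquad
\Pr\{\gamma_k = 1\} = \lambda, \quad \Pr\{\gamma_k = 0\} = 1-\lambda,
\]
so that the measurement consists of noise alone when $\gamma_k = 0$ and of signal plus noise when $\gamma_k = 1$.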

Notation 1. Some notation that will be used throughout the paper is presented first. We will denote by the space of real matrices and by the space of -dimensional real vectors. The superscript indicates the transpose of a matrix. For a collection of matrices , with , represents the diagonal matrix formed by on the diagonal.

Notation 2. Define and . For , , if for each , we have .

3. Recursive Optimal Solution

In this section, a solution to the optimal estimation problem is presented via projection theory and state augmentation in Hilbert space.

3.1. Preliminaries

First, we denote by the linear space spanned by the observation . If for some , , the random variable .

Let represent an indicator of the Markov process, defined as follows: and call . Define also as the projection of onto the linear space and Then, we define the following second-moment matrices associated with the aforementioned variables; they play a key role in deriving the covariance matrices of the estimation errors and the optimal estimator: Considering the following augmented matrices: the system can be described as follows: Note that .
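A minimal sketch of the augmentation used in this type of LMMSE construction (in the illustrative notation above, with $N$ modes) is, for each mode $i$,
\[
\mathbf{1}_k(i) = \begin{cases} 1, & \theta_k = i,\\ 0, & \theta_k \neq i,\end{cases}
\qquad
z_k(i) = x_k\,\mathbf{1}_k(i), \qquad
Z_k(i) = \mathbb{E}\!\left[z_k(i)\,z_k(i)^{T}\right],
\]
so that $x_k = \sum_{i=1}^{N} z_k(i)$ and the second-moment matrices $Z_k(i)$ enter the covariance recursions of the estimator.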

Assumption 2. For all , , where is the convergence value of , which will be given in Section 3.

3.2. Optimal Estimator

Following the geometric arguments in [39], the LMMSE filter for MJLSs with uncertain observations is derived in this section. The following lemmas present conditions that are needed for the derivation of the FDI-LMMSE filter.

Lemma 3. For any given time instant , one has where .
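As an illustration of the type of second-moment recursion asserted by Lemma 3 (written in the sketch notation above, with transition probabilities $p_{ij}$, mode probabilities $\pi_k(i)=\Pr\{\theta_k=i\}$, and the compensation term omitted for brevity), one has
\[
Z_{k+1}(j) \;=\; \sum_{i=1}^{N} p_{ij}\Big( A_i\,Z_k(i)\,A_i^{T} + \pi_k(i)\,B_i\,Q_k\,B_i^{T} \Big),
\]
initialized from the joint distribution of $x_0$ and $\theta_0$.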

Proof. For any given instant , we have from (8) that Recalling that , the initial covariance matrix is .
To derive the optimal filter, we first define the innovation sequence as where the conditional prediction is the projection of onto the linear space of . Consider Then, according to (4) and (8), the generated residual is obtained as
In the following, an FDI scheme is constructed, which can detect whether the observation at instant is lost. In this paper, we choose the following mean square of the residual as the residual evaluation function to measure the energy of the residual: From (12)-(13) we get that
Suppose that has converged to at instant ; then, from Assumption 2, we have that If , we have that If , then from (8) we have that The FDI scheme in the following lemma will play a key role in deriving the main results of this paper.

Lemma 4. With the above derivation, one can decide whether the observation of the system was lost and detect the lost information at instant according to the following rule: where
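In the sketch notation above, the quantities entering this decision rule take the following typical form (an illustration only, not the paper's numbered equations): the innovation and its mean-square evaluation are
\[
\nu_k = y_k - \hat y_{k|k-1}, \qquad J_k = \mathbb{E}\!\left[\nu_k^{T}\nu_k\right],
\]
and a fault alarm is generated at instant $k$ whenever $J_k$ deviates from its fault-free level by more than the prescribed threshold.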

With the fault detected, the missing information can be taken into consideration when designing the FDI-LMMSE filter. The faulty observation can be replaced and isolated by . In this way, we can skip the erroneous information at instant and directly use the correct information of the previous instant to estimate the value of the state at instant .
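A minimal sketch of this replacement step (with $\tilde y_k$ denoting the observation actually fed to the filter; the notation is ours) is
\[
\tilde y_k =
\begin{cases}
y_k, & \text{no fault alarm at instant } k,\\
\hat y_{k|k-1}, & \text{fault alarm at instant } k,
\end{cases}
\]
so that the filter is always driven either by the received observation or by its one-step prediction.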

Theorem 5. Consider the system represented by (8). Then the LMMSE is given by where satisfies the recursive equation where .
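For intuition, a single-mode analogue of such an LMMSE recursion has the familiar Kalman-type structure (a sketch in our illustrative notation; the theorem itself is stated in the augmented, Markov-modulated variables):
\[
\hat x_{k|k} = \hat x_{k|k-1} + K_k\big(\tilde y_k - C\,\hat x_{k|k-1}\big), \qquad
K_k = P_{k|k-1} C^{T}\big(C P_{k|k-1} C^{T} + R\big)^{-1}, \qquad
\hat x_{k+1|k} = A\,\hat x_{k|k},
\]
with $P_{k|k-1}$ propagated by the Riccati-type recursion discussed below.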

Proof. Recall that the observation estimator is given by (12).
Now, can be rewritten as the following equation:
Considering the geometric argument as in [39], the estimator satisfies the following equations:
From (26), we get that
Because is independent of , we have that
showing that is orthogonal to . Similar reasoning shows the orthogonality between and . Recalling that and are orthogonal to , we can obtain that is orthogonal to . Then, from (27), the result can be obtained as follows: From (11), (28) and (26), (29), we get that
The positive-semidefinite matrices are obtained from and the recursive equation for is given as follows: where .
can be derived directly as a recursive Riccati equation. In the following, we denote the linear operator by , in which is

Theorem 6. satisfies the following recursive Riccati equation: where is given by the recursive equation (9) from Lemma 3.
Unlike the classical case, the sequence is now random, which results from its dependence on the random sequence .
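In the same single-mode sketch notation, a Riccati-type recursion of the kind referred to in Theorem 6 reads (an illustration of the form only, not the paper's equation):
\[
P_{k|k} = P_{k|k-1} - K_k\,C\,P_{k|k-1}, \qquad
P_{k+1|k} = A\,P_{k|k}\,A^{T} + Q,
\]
where here the gain $K_k$ depends on whether the observation at instant $k$ was accepted or isolated, which is the source of the randomness mentioned above.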

Proof. Rewrite state equation in (8) as follows: where
From (32), we define
From (25) and (32), we have that
Then from (41) and (38), we get that
Therefore, at this point, we obtain the recursive equation for as follows:
By a series of algebraic manipulations, we have
Substituting (44) into (43) yields the recursive equation for as

4. Stability of the State Estimator

As is well known, intermittent observations are a source of potential instability. From Theorem 6, however, the error covariance matrix obtained from the LMMSE can be rewritten in terms of a recursive Riccati equation of . In this section, provided that the following assumptions hold, we show that the proposed estimator is stable.

Assumption 7. is assumed to be an ergodic Markov chain.

Assumption 8. System (1) is mean square stable (MSS) according to the definition in [35].

First, (37) describes a recursive Riccati equation for . We now establish its convergence as . It follows from Assumption 2 that exists and is independent of . We define We redefine the matrix as follows:

Then, we give the following facts and lemmas for the system, which will be used in the proof of the stability of the covariance matrix of the estimation error.

With regard to Assumptions 2 and 7 and Proposition 3.36 in [35], as , where is the unique solution that satisfies

Then we have holding for all (since , we have as ). Defining , we get At the same time, exponentially fast.

From (37), as , we obtain the mean state covariance as follows:

An operator is introduced for any positive-semidefinite matrix as follows:

Then

Define now with , and

Lemma 9 (see [35]). and for each , one can get that
Now, one defines where , .
From the definition of and condition of , one notices that the inverse of exists.

Lemma 10. For each , one gets that

Proof. In order to deduce (56), we define
Then, if ,
Obviously, when , it yields , . Similarly if , based on (49) and (54), we have Since , the induction argument is completed for .

Theorem 11. Suppose that Assumptions 7 and 8 hold. Consider that the algebraic Riccati equation satisfies (48), where . Then, there exists a unique nonnegative definite solution to (60). , and for any , , , and , one has given by (50) satisfying .
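For orientation, the single-mode analogue of such an algebraic Riccati equation is (a sketch in our illustrative notation, not equation (60) itself)
\[
P \;=\; A P A^{T} + Q \;-\; A P C^{T}\big(C P C^{T} + R\big)^{-1} C P A^{T},
\]
whose unique positive-semidefinite solution plays the role of the stationary error covariance.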

Proof. Due to MSS (see 5.38 in [35]), we have from Proposition 3.6 in Chapter 3 of [35] that . According to the standard results for algebraic Riccati equations, there is a unique positive-semidefinite solution to (60), and moreover .
From Theorem 11, we get that satisfies
Define and
Then (50) can be rewritten as
Suppose that ; then we have that
By definition, . Suppose that . From (64), we have that . Therefore, we have shown by induction that for all . From the MSS and the ergodicity of the Markov chain, we have that , , and exponentially fast. From and the same reasoning as in the proof of Proposition 3.36 in [35], we have that as , where satisfies
Moreover, is the unique solution to (65). Recalling that satisfies (62), we get that is also a solution to (65) and, from uniqueness, . Then, we obtain that
Also, . From (66) and (56) in Lemma 10, it follows that . Thus, we can conclude that whenever for some . Moreover, from the fact that and , we have that satisfies (60).
From uniqueness of the positive-semidefinite solution to (60), we can conclude that . From (66) and (56), and since and as , we get that .

The convergence of the upper bound of the error covariance matrix to a stationary value for linear minimum mean square error (LMMSE) estimation can thus be easily obtained. That is, if the system is MSS and the missing information is detected, then the error covariance matrix converges to the unique nonnegative definite solution of an algebraic Riccati equation associated with the problem.

5. Numerical Example

In order to evaluate the performance of our method, in this section, we are going to use a scalar MJLS described by the following equations: where , , and denote the target position, velocity, and acceleration, respectively. The initial state is normally distributed with mean 10 and variance 1. , and , are independent white noise sequences with covariance of , and . The transition probability matrix for the finite-state Markov chain is
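A minimal simulation sketch of this kind of experiment is given below. It is illustrative only: the two-mode dynamics, noise levels, packet-arrival probability, and detection threshold are placeholders rather than the paper's actual parameters, the Markov mode is assumed known to the filter, and a scalar Kalman-type recursion with a residual test stands in for the full FDI-LMMSE filter. The average RMS error it accumulates is defined in the following paragraph.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder two-mode scalar MJLS (not the paper's parameters)
A = {0: 0.95, 1: 0.80}                 # mode-dependent state coefficients
C = 1.0                                 # observation coefficient
Q, R = 0.1, 0.5                         # process / measurement noise variances
P_trans = np.array([[0.9, 0.1],
                    [0.2, 0.8]])        # Markov transition matrix (placeholder)
lam = 0.9                               # packet-arrival probability
T, M = 500, 100                         # time steps, Monte Carlo runs
J_th = 9.0                              # illustrative residual threshold

sq_err = np.zeros(T)
for _ in range(M):
    x = rng.normal(10.0, 1.0)           # initial state: mean 10, variance 1
    mode = 0
    x_hat, P = 10.0, 1.0                # filter initialization
    for k in range(T):
        # True system: mode transition, state update, lossy observation
        mode = rng.choice(2, p=P_trans[mode])
        x = A[mode] * x + np.sqrt(Q) * rng.standard_normal()
        gamma = rng.random() < lam
        y = (C * x if gamma else 0.0) + np.sqrt(R) * rng.standard_normal()

        # Prediction step (mode assumed known to the filter)
        x_pred = A[mode] * x_hat
        P_pred = A[mode] * P * A[mode] + Q
        nu = y - C * x_pred              # innovation (residual)
        S = C * P_pred * C + R           # innovation variance

        # Residual test: isolate the observation if the alarm is raised
        if nu * nu / S > J_th:
            x_hat, P = x_pred, P_pred    # keep the one-step prediction
        else:
            K = P_pred * C / S           # Kalman-type gain
            x_hat = x_pred + K * nu
            P = (1.0 - K * C) * P_pred

        sq_err[k] += (x - x_hat) ** 2

rms = np.sqrt(sq_err / M)                # average RMS error per time step
print("mean RMS over the horizon:", rms.mean())
```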

To assess the performance of the algorithms, the average root mean square (RMS) error based on Monte Carlo simulation runs is defined as where the number of time steps is chosen as 500, .
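A typical form of this average RMS error (sketch notation: $M$ Monte Carlo runs, with $x_k^{(m)}$ and $\hat x_k^{(m)}$ the true and estimated position in run $m$) is
\[
\mathrm{RMS}_k \;=\; \sqrt{\frac{1}{M}\sum_{m=1}^{M}\big(x_k^{(m)} - \hat x_k^{(m)}\big)^{2}}, \qquad k = 1,\dots,500.
\]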

The simulation results are obtained as follows. Figure 5 presents the real states and their estimates in the fault-free case and the faulty case, respectively, based on the given path. Figure 1 shows the observations with lost data from the unreliable channel and the observations from the reliable channel. As the proposed algorithm can be regarded as a generalization of the well-known LMMSE filtering, we denote it by FDI-LMMSE filtering in the simulation. The RMS error in position of FDI-LMMSE filtering in the faulty case is compared with that of LMMSE filtering in the fault-free case in Figure 4. It can be seen in Figures 2 and 3 that the residual delivers fault alarms soon after a fault occurs. From the simulation results, we can see that the obtained linear estimator for systems with randomly missing data tracks the real state value well, which shows that the estimation scheme proposed in this paper produces good performance.

6. Conclusions

This paper has addressed the estimation problem for MJLSs with randomly missing data. The random missing data introduced by the network are modeled as a Bernoulli distributed variable. By using an observer-based FDI scheme as a residual generator, the design of the FDI-LMMSE filter has been formulated in the framework of LMMSE filtering. A complete analytical solution has been obtained by solving the recursive Riccati equations. Both the theoretical derivation and a numerical simulation example show that the proposed state estimator is effective.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China under Grant 61273087 and by the Program for Excellent Innovative Team of Jiangsu Higher Education Institutions.