Mathematical Problems in Engineering
Volume 2010, Article ID 343586, 17 pages
http://dx.doi.org/10.1155/2010/343586
Research Article

Unbiased Minimum-Variance Filter for State and Fault Estimation of Linear Time-Varying Systems with Unknown Disturbances

1Electrical Engineering Department, ESSTT-C3S, 5 Avenue Taha Hussein, BP 56, 1008 Tunis, Tunisia
2Electrical Engineering Department, CRAN (CNRS UMR 7039), 2 Avenue de la forêt de Haye, 54516 Vandœuvre-les-Nancy Cedex, France

Received 5 October 2009; Revised 2 January 2010; Accepted 11 January 2010

Academic Editor: J. Rodellar

Copyright © 2010 Fayçal Ben Hmida et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a new recursive filter for joint fault and state estimation of linear time-varying discrete-time systems in the presence of unknown disturbances. The method is based on the assumption that no prior knowledge about the dynamical evolution of the fault or the disturbance is available: the fault affects both the state and the output equations, while the disturbance affects only the state equation. We first study the particular case where the direct feedthrough matrix of the fault has full rank, and then extend this result to the case where the direct feedthrough matrix has arbitrary rank. The resulting filter is optimal in the sense of the unbiased minimum-variance (UMV) criterion. A numerical example illustrates the proposed method.

1. Introduction

This paper is concerned with the problem of joint fault and state estimation of linear time-varying discrete-time stochastic systems in the presence of unknown disturbances. Despite the presence of unknown inputs, a robust estimate of the state and the fault enables the implementation of Fault Tolerant Control (FTC). A simple idea consists of using an FTC architecture based on compensating the effect of the fault; see, for example, [1].

Initially, we refer to the unknown input filtering problem, which has been treated at length in the literature by two different approaches. The first approach is based on augmenting the state vector with the unknown input vector. However, this approach assumes that a model for the dynamical evolution of the unknown inputs is available. When the statistical properties of the unknown input are perfectly known, the augmented state Kalman filter (ASKF) is an optimal solution. To reduce the computational cost of the ASKF, Friedland [2] developed the two-stage Kalman filter (TSKF), which is optimal only for a constant bias. Many authors have extended Friedland's idea to treat a stochastic bias, for example, [3–5]. Recently, Kim et al. [6, 7] developed an adaptive two-stage Kalman filter (ATSKF). The second approach treats the case where no prior knowledge about the dynamical evolution of the unknown input is available. Kitanidis [8] was the first to solve this problem using the linear unbiased minimum-variance (UMV) approach. Darouach et al. [9] extended Kitanidis's filter using a parameterizing technique to obtain an optimal estimator filter (OEF). Hsieh [10] developed an equivalent to Kitanidis's filter, termed the robust two-stage Kalman filter (RTSKF). Later, Hsieh [11] developed an optimal minimum-variance filter (OMVF) to solve the performance degradation problem encountered in the OEF. Gillijns and De Moor [12] treated the problem of estimating the state in the presence of unknown inputs that affect the system model; they developed a recursive filter that is optimal in the minimum-variance sense. This filter was extended by the same authors [13] to joint input and state estimation for linear discrete-time systems with direct feedthrough, where the state and unknown input estimation are interconnected. This filter is called the recursive three-step filter (RTSF) and is limited to a direct feedthrough matrix with full rank. Recently, Cheng et al. [14] proposed a recursive optimal filter with global optimality in the sense of unbiased minimum-variance over all linear unbiased estimators, but this filter is limited to state estimation (i.e., it does not estimate the unknown input). In [15], the author extended the RTSF (denoted ERTSF) to solve the general case where the direct feedthrough matrix has arbitrary rank.

In this paper, we develop a new recursive filter for joint fault and state estimation of linear stochastic, discrete-time, time-varying systems in the presence of unknown disturbances. We assume that the unknown disturbances affect only the state equation, while the fault affects both the state and the output equations; moreover, we allow the direct feedthrough matrix of the fault to have arbitrary rank [15].

This paper is organized as follows. Section 2 states the problem of interest. Section 3 is dedicated to the design of the proposed filter. In Section 4, the obtained filter is summarized. An illustrative example is presented in Section 5. Finally, Section 6 gives concluding remarks.

2. Statement of the Problem

Assume the following linear stochastic discrete-time system:

$$x_{k+1} = A_k x_k + B_k u_k + F^x_k f_k + E^x_k d_k + w_k,$$
$$y_k = H_k x_k + F^y_k f_k + v_k, \tag{2.1}$$

where $x_k \in \mathbb{R}^n$ is the state vector, $y_k \in \mathbb{R}^m$ is the observation vector, $u_k \in \mathbb{R}^r$ is the known control input, $f_k \in \mathbb{R}^p$ is the additive fault vector, and $d_k \in \mathbb{R}^q$ is the unknown disturbance. $w_k$ and $v_k$ are uncorrelated zero-mean white noise sequences with covariance matrices $Q_k \geq 0$ and $R_k > 0$, respectively. The disturbance $d_k$ is assumed to have no stochastic description and must be decoupled. The initial state $x_0$ is uncorrelated with the white noise processes $w_k$ and $v_k$, and is a Gaussian random variable with $E[x_0] = \hat{x}_0$ and $E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T] = P^x_0$, where $E[\cdot]$ denotes the expectation operator. The matrices $A_k$, $B_k$, $F^x_k$, $E^x_k$, $H_k$, and $F^y_k$ are known and have appropriate dimensions. We consider the following assumptions:

(i) $A1$: $(H_k, A_k)$ is observable;
(ii) $A2$: $n > m \geq p + q$;
(iii) $A3$: $0 < \operatorname{rank}(F^y_k) \leq p$;
(iv) $A4$: $\operatorname{rank}(H_k E^x_{k-1}) = \operatorname{rank}(E^x_{k-1}) = q$.

The objective of this paper is to design an unbiased minimum-variance linear estimator of the state 𝑥𝑘 and the fault 𝑓𝑘 without any information concerning the fault 𝑓𝑘 and the unknown disturbances 𝑑𝑘. We can consider that the filter has the following form:

$$\hat{x}_{k/k-1} = A_{k-1}\hat{x}_{k-1} + B_{k-1}u_{k-1} + F^x_{k-1}\hat{f}_{k-1}, \tag{2.2}$$
$$\hat{f}_k = K^f_k\left(y_k - H_k\hat{x}_{k/k-1}\right), \tag{2.3}$$
$$\hat{x}_k = \hat{x}_{k/k-1} + K^x_k\left(y_k - H_k\hat{x}_{k/k-1}\right), \tag{2.4}$$

where the gain matrices $K^f_k \in \mathbb{R}^{p \times m}$ and $K^x_k \in \mathbb{R}^{n \times m}$ are determined to satisfy the following criteria.

Unbiasedness
The estimator must satisfy
$$E[\tilde{f}_k] = E[f_k - \hat{f}_k] = 0, \tag{2.5}$$
$$E[\tilde{x}_k] = E[x_k - \hat{x}_k] = 0. \tag{2.6}$$

Minimum-Variance
The estimator is determined such that (i) the mean square error $E[\tilde{f}_k \tilde{f}^T_k]$ is minimized under the constraint (2.5); (ii) $\operatorname{trace}\{P^x_k\}$, with $P^x_k = E[\tilde{x}_k \tilde{x}^T_k]$, is minimized under the constraints (2.5) and (2.6).

3. Filter Design

In this section, fault and state estimation in the presence of the unknown disturbance is considered in two cases with respect to assumption $A3$. Section 3.1 is dedicated to deriving a UMV state and fault estimation filter when the matrix $F^y_k$ has full rank (i.e., $\operatorname{rank}(F^y_k) = p$). The general case is solved by an extension of this filter in Section 3.2.

3.1. UMV Fault and State Estimation

In this subsection, we study the particular case where $\operatorname{rank}(F^y_k) = p$. The gain matrices $K^f_k$ and $K^x_k$ are determined so that (2.3) and (2.4) give unbiased estimates of $f_k$ and $x_k$. Then, the UMV fault and state estimation problems are solved.

3.1.1. Unbiased Estimation

The innovation error has the following form:

$$\tilde{y}_k = y_k - H_k\hat{x}_{k/k-1} = F^y_k f_k + H_k E^x_{k-1} d_{k-1} + e_k, \tag{3.1}$$

where

$$e_k = H_k\tilde{x}_{k/k-1} + v_k, \tag{3.2}$$
$$\tilde{x}_{k/k-1} = A_{k-1}\tilde{x}_{k-1} + F^x_{k-1}\tilde{f}_{k-1} + w_{k-1}. \tag{3.3}$$

The fault estimation error and the state estimation error are, respectively, given by

$$\tilde{f}_k = f_k - \hat{f}_k = \left(I - K^f_k F^y_k\right) f_k - K^f_k H_k E^x_{k-1} d_{k-1} - K^f_k e_k, \tag{3.4}$$
$$\tilde{x}_k = x_k - \hat{x}_k = \left(I - K^x_k H_k\right)\tilde{x}_{k/k-1} - K^x_k F^y_k f_k + \left(E^x_{k-1} - K^x_k H_k E^x_{k-1}\right) d_{k-1} - K^x_k v_k. \tag{3.5}$$

The estimators $\hat{x}_k$ and $\hat{f}_k$ are unbiased if $K^f_k$ and $K^x_k$ satisfy the following constraints:

$$K^f_k G_k = \Pi_k, \tag{3.6}$$
$$K^x_k G_k = \Gamma_k, \tag{3.7}$$

where $G_k = \left[F^y_k \;\; H_k E^x_{k-1}\right]$, $\Pi_k = \left[I_p \;\; 0\right]$, and $\Gamma_k = \left[0 \;\; E^x_{k-1}\right]$.

Lemma 3.1. Let $\operatorname{rank}(F^y_k) = p$; under assumptions $A2$ and $A4$, a necessary and sufficient condition for the estimators (2.3) and (2.4) to be unbiased is that the matrix $G_k$ has full column rank, that is,
$$\operatorname{rank}(G_k) = \operatorname{rank}\left[F^y_k \;\; H_k E^x_{k-1}\right] = p + q. \tag{3.8}$$

Proof. Equations (3.6) and (3.7) can be written as
$$\begin{bmatrix} K^f_k \\ K^x_k \end{bmatrix} G_k = \begin{bmatrix} \Pi_k \\ \Gamma_k \end{bmatrix}. \tag{3.9}$$
A necessary and sufficient condition for the existence of a solution to (3.9) is
$$\operatorname{rank}\begin{bmatrix} \Pi_k \\ \Gamma_k \\ G_k \end{bmatrix} = \operatorname{rank}(G_k). \tag{3.10}$$
Expanding (3.10), we obtain
$$\operatorname{rank}\begin{bmatrix} I_p & 0 \\ 0 & E^x_{k-1} \\ F^y_k & H_k E^x_{k-1} \end{bmatrix} = \operatorname{rank}\left[F^y_k \;\; H_k E^x_{k-1}\right]. \tag{3.11}$$
The matrix on the left of the equality has rank $p + q$: according to assumptions $A2$, $A4$ and $\operatorname{rank}(F^y_k) = p$, this is easily justified by considering that the faults and the unknown disturbances have independent influences. The condition to satisfy is thus given by (3.8).
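As a numerical sanity check, the rank condition (3.8) can be verified directly. The matrices below are hypothetical stand-ins for $F^y_k$ and $H_k E^x_{k-1}$, chosen for illustration only:

```python
import numpy as np

# Hypothetical m x p fault feedthrough and m x q mapped disturbance matrix.
Fy = np.array([[2.0, 1.4],
               [0.6, 0.3],
               [0.2, 1.6]])          # rank 2, so p = 2
HEx = np.array([[2.0],
                [2.0],
                [3.0]])              # q = 1

G = np.hstack([Fy, HEx])             # G_k = [F_y  H E_x]
p, q = Fy.shape[1], HEx.shape[1]

# Condition (3.8): G_k must have full column rank p + q for the
# unbiasedness constraints (3.6)-(3.7) to be solvable.
assert np.linalg.matrix_rank(G) == p + q
```

If the disturbance direction $H_k E^x_{k-1}$ were a linear combination of the fault directions, the assertion would fail and no unbiased gain pair would exist.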

3.1.2. UMV Estimation

In this subsection, we propose to determine the gain matrices 𝐾𝑓𝑘 and 𝐾𝑥𝑘 by satisfying the unbiasedness constraints (2.5) and (2.6).

(a) Fault Estimation
Equation (3.1) can be written as
$$\tilde{y}_k = G_k \begin{bmatrix} f_k \\ d_{k-1} \end{bmatrix} + e_k. \tag{3.12}$$
Since $e_k$ does not have unit variance, $\tilde{y}_k$ does not satisfy the assumptions of the Gauss-Markov theorem [16], so the least-squares (LS) solution does not have minimum variance. Nevertheless, the covariance matrix of $e_k$ has the following form:
$$C_k = E[e_k e^T_k] = H_k P^x_{k/k-1} H^T_k + R_k, \tag{3.13}$$
where $P^x_{k/k-1} = E[\tilde{x}_{k/k-1}\tilde{x}^T_{k/k-1}]$. Hence, $\hat{f}_k$ can be obtained by a weighted least-squares (WLS) estimation with weighting matrix $C^{-1}_k$.

Theorem 3.2. Let $\tilde{x}_{k/k-1}$ be unbiased, let the matrix $C_k$ be positive definite, and let the matrix $G_k$ have full column rank; then the UMV fault estimate is obtained with the gain matrix
$$K^f_k = \Pi_k G^*_k, \tag{3.14}$$
where $G^*_k = \left(G^T_k C^{-1}_k G_k\right)^{-1} G^T_k C^{-1}_k$.

Proof. Since $C_k$ is positive definite, there exists an invertible matrix $S_k \in \mathbb{R}^{m \times m}$ verifying $S_k S^T_k = C_k$, so we can rewrite (3.12) as follows:
$$S^{-1}_k \tilde{y}_k = S^{-1}_k G_k \begin{bmatrix} f_k \\ d_{k-1} \end{bmatrix} + S^{-1}_k e_k. \tag{3.15}$$
If the matrix $G_k$ has full column rank, that is, $\operatorname{rank}(G_k) = p + q$, then the matrix $G^T_k C^{-1}_k G_k$ is invertible. Solving (3.15) by LS estimation is equivalent to solving (3.12) by WLS:
$$\hat{f}_k = \Pi_k \left(G^T_k C^{-1}_k G_k\right)^{-1} G^T_k C^{-1}_k \tilde{y}_k. \tag{3.16}$$
Since $S^{-1}_k e_k$ has unit variance, (3.15) satisfies the assumptions of the Gauss-Markov theorem. Hence, (3.16) is the UMV estimate of $f_k$.

In this case, the fault estimation error is rewritten as follows:

$$\tilde{f}_k = -K^f_k e_k. \tag{3.17}$$

Using (3.17), the covariance matrix $P^f_k$ is given by

$$P^f_k = E[\tilde{f}_k \tilde{f}^T_k] = K^f_k C_k K^{fT}_k = \Pi_k \left(G^T_k C^{-1}_k G_k\right)^{-1} \Pi^T_k. \tag{3.18}$$
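The WLS construction of (3.14) and (3.18) can be sketched numerically. The matrices below (a full-column-rank $F^y_k$, one mapped disturbance column, and a stand-in covariance $C_k$) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative dimensions (assumed): m = 3, p = 1, q = 1.
Fy  = np.array([[2.0], [0.6], [0.2]])        # full column rank: Section 3.1 case
HEx = np.array([[2.0], [2.0], [3.0]])        # H_k E^x_{k-1}
G   = np.hstack([Fy, HEx])                   # G_k
C   = np.diag([0.5, 0.2, 0.1])               # stand-in for C_k = H P H^T + R (SPD)

# K_f from (3.14): K_f = Pi G*, with G* the C^{-1}-weighted pseudo-solution.
Ci    = np.linalg.inv(C)
Gstar = np.linalg.inv(G.T @ Ci @ G) @ G.T @ Ci
Pi    = np.hstack([np.eye(1), np.zeros((1, 1))])   # Pi_k = [I_p  0]
Kf    = Pi @ Gstar

# Unbiasedness constraints (3.6): K_f F_y = I_p and K_f H E_x = 0.
assert np.allclose(Kf @ Fy, np.eye(1))
assert np.allclose(Kf @ HEx, np.zeros((1, 1)))

# Fault error covariance (3.18): K_f C K_f^T = Pi (G^T C^{-1} G)^{-1} Pi^T.
Pf = Pi @ np.linalg.inv(G.T @ Ci @ G) @ Pi.T
assert np.allclose(Kf @ C @ Kf.T, Pf)
```

The last assertion checks the identity $K^f_k C_k K^{fT}_k = \Pi_k (G^T_k C^{-1}_k G_k)^{-1}\Pi^T_k$ used in (3.18).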

(b) State Estimation
In this part, we derive the unbiased minimum-variance state estimator by calculating the gain matrix $K^x_k$ that minimizes the trace of the covariance matrix $P^x_k$ under the unbiasedness constraint (3.7).

Theorem 3.3. Let $G^T_k C^{-1}_k G_k$ be nonsingular; then the state gain matrix $K^x_k$ is given by
$$K^x_k = P^x_{k/k-1} H^T_k C^{-1}_k \left(I - G_k G^*_k\right) + \Gamma_k G^*_k. \tag{3.19}$$

Proof. Considering (3.7) and (3.5), we determine $P^x_k$ as follows:
$$P^x_k = \left(I - K^x_k H_k\right) P^x_{k/k-1} \left(I - K^x_k H_k\right)^T + K^x_k R_k K^{xT}_k = K^x_k C_k K^{xT}_k - 2 P^x_{k/k-1} H^T_k K^{xT}_k + P^x_{k/k-1}. \tag{3.20}$$
The optimization problem can be solved using Lagrange multipliers:
$$\operatorname{trace}\left\{K^x_k C_k K^{xT}_k - 2 P^x_{k/k-1} H^T_k K^{xT}_k + P^x_{k/k-1}\right\} - 2\operatorname{trace}\left\{\left(K^x_k G_k - \Gamma_k\right)\Lambda^T_k\right\}, \tag{3.21}$$
where $\Lambda_k$ is the matrix of Lagrange multipliers. Differentiating (3.21) with respect to $K^x_k$, we obtain
$$C_k K^{xT}_k - H_k P^x_{k/k-1} - G_k \Lambda^T_k = 0. \tag{3.22}$$
Equations (3.7) and (3.22) form the linear system
$$\begin{bmatrix} C_k & G_k \\ G^T_k & 0 \end{bmatrix} \begin{bmatrix} K^{xT}_k \\ -\Lambda^T_k \end{bmatrix} = \begin{bmatrix} H_k P^x_{k/k-1} \\ \Gamma^T_k \end{bmatrix}. \tag{3.23}$$
If $G^T_k C^{-1}_k G_k$ is nonsingular, (3.23) has a unique solution.
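As a sketch, the saddle-point system (3.23) can be solved directly and compared against the closed form (3.19). All matrix values below are hypothetical, chosen only so the system is well posed:

```python
import numpy as np

# Illustrative data (assumed): n = 3 states, m = 3 measurements, p = q = 1.
n, m = 3, 3
H   = np.array([[1., 1., 0.], [0., 1., 0.], [0., 1., 1.]])
Ppr = np.diag([1.0, 0.8, 0.5])                  # stand-in for P^x_{k/k-1}
R   = 0.01 * np.eye(m)
C   = H @ Ppr @ H.T + R                         # C_k from (3.13)

Fy  = np.array([[2.0], [0.6], [0.2]])
Ex  = np.array([[0.], [2.], [1.]])              # E^x_{k-1}
G   = np.hstack([Fy, H @ Ex])                   # G_k = [F_y  H E_x]
Gam = np.hstack([np.zeros((n, 1)), Ex])         # Gamma_k = [0  E^x_{k-1}]

# Solve the saddle-point system (3.23):
#   [C    G ] [K_x^T     ]   [H P^x_{k/k-1}]
#   [G^T  0 ] [-Lambda^T ] = [Gamma^T      ]
KKT = np.block([[C, G], [G.T, np.zeros((2, 2))]])
rhs = np.vstack([H @ Ppr, Gam.T])
sol = np.linalg.solve(KKT, rhs)
Kx  = sol[:m, :].T                              # K^x_k

# Closed form (3.19): K_x = P H^T C^{-1} (I - G G*) + Gamma G*.
Ci    = np.linalg.inv(C)
Gstar = np.linalg.inv(G.T @ Ci @ G) @ G.T @ Ci
Kx_cf = Ppr @ H.T @ Ci @ (np.eye(m) - G @ Gstar) + Gam @ Gstar
assert np.allclose(Kx, Kx_cf)

# Unbiasedness constraint (3.7): K_x G = Gamma.
assert np.allclose(Kx @ G, Gam)
```

Eliminating the multiplier block from (3.23) by substitution reproduces (3.19), which is what the first assertion verifies numerically.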

3.1.3. The Filter Time Update

From (3.3), the prior covariance matrix $P^x_{k/k-1} = E[\tilde{x}_{k/k-1}\tilde{x}^T_{k/k-1}]$ has the following form:

$$P^x_{k/k-1} = \begin{bmatrix} A_{k-1} & F^x_{k-1} \end{bmatrix} \begin{bmatrix} P^x_{k-1} & P^{xf}_{k-1} \\ P^{fx}_{k-1} & P^f_{k-1} \end{bmatrix} \begin{bmatrix} A^T_{k-1} \\ F^{xT}_{k-1} \end{bmatrix} + Q_{k-1}, \tag{3.24}$$

where $P^{xf}_k = E[\tilde{x}_k \tilde{f}^T_k]$ is calculated by using (2.3) and (2.4):

$$P^{xf}_k = -\left(I - K^x_k H_k\right) P^x_{k/k-1} H^T_k K^{fT}_k + K^x_k R_k K^{fT}_k. \tag{3.25}$$
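The time update (3.24) is a covariance propagation through the stacked matrix $[A_{k-1}\;\; F^x_{k-1}]$ applied to the joint covariance of the state and fault errors. A minimal sketch, with hypothetical values for the time $k-1$ quantities:

```python
import numpy as np

# Hypothetical k-1 quantities (for illustration only).
A   = np.array([[0.5, 0.1, 0.2], [0.1, 0.6, 0.3], [0.5, 0.1, 0.25]])
Fx  = np.array([[0.5, 0.7], [1.5, 1.1], [0.8, 0.9]])
Px  = np.eye(3)                                # P^x_{k-1}
Pf  = 0.1 * np.eye(2)                          # P^f_{k-1}
Pxf = np.zeros((3, 2))                         # cross term from (3.25)
Q   = 0.1 * np.eye(3)

AF  = np.hstack([A, Fx])                       # [A_{k-1}  F^x_{k-1}]
P   = np.block([[Px, Pxf], [Pxf.T, Pf]])       # joint covariance of (x~, f~)
Ppr = AF @ P @ AF.T + Q                        # P^x_{k/k-1} from (3.24)

assert np.allclose(Ppr, Ppr.T)                 # propagation preserves symmetry
```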

3.2. Extended UMV Fault and State Estimation

In this section, we consider that $0 < \operatorname{rank}(F^y_k) \leq p$. To solve this more general problem we use the approach proposed by Hsieh [15]. Introducing (3.2) and (3.3) into (3.4), the fault estimation error can be written as follows:

$$\tilde{f}_k = \left(I - K^f_k F^y_k\right) f_k - K^f_k H_k E^x_{k-1} d_{k-1} - K^f_k \left(H_k \tilde{x}_{k/k-1} + v_k\right) = -K^f_k H_k F^x_{k-1}\tilde{f}_{k-1} - K^f_k H_k A_{k-1}\tilde{x}_{k-1} + \left(I - K^f_k F^y_k\right) f_k - K^f_k H_k E^x_{k-1} d_{k-1} - K^f_k H_k w_{k-1} - K^f_k v_k. \tag{3.26}$$

Assuming that $E[\tilde{x}_{k-1}] = 0$, we define the following notations:

$$\Phi_k = K^f_k F^y_k = I_p - \Sigma_k, \qquad G^f_k = K^f_k H_k F^x_{k-1}, \qquad G^d_k = K^f_k H_k E^x_{k-1}, \tag{3.27}$$

where $\Sigma_k = I - \left(F^y_k\right)^+ F^y_k$.
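The matrix $\Sigma_k$ isolates the fault directions that the output does not see. A short sketch (the matrices mirror the rank structure of the later example, but the values here are only illustrative):

```python
import numpy as np

# Sigma_k = I - (F^y_k)^+ F^y_k: the orthogonal projector onto the part of
# the fault space that F^y_k maps to zero.
Fy_full = np.array([[2.0, 1.4], [0.6, 0.3], [0.2, 1.6]])   # rank 2 (= p)
Fy_def  = np.array([[0.0, 1.4], [0.0, 0.3], [0.0, 1.6]])   # rank 1 (< p)

Sig_full = np.eye(2) - np.linalg.pinv(Fy_full) @ Fy_full
Sig_def  = np.eye(2) - np.linalg.pinv(Fy_def) @ Fy_def

assert np.allclose(Sig_full, 0)               # full rank: Sigma_k = 0 (Section 3.1)
assert np.linalg.matrix_rank(Sig_def) == 1    # deficient: one unseen fault direction
```

When $\Sigma_k = 0$, $\Phi_k = I_p$ and the constraints reduce to those of the full-rank case.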

Using the same technique presented in [15], the expectation of $\tilde{f}_k$ is given by

$$E[\tilde{f}_k] = \Sigma_k f_k - G^f_k \Sigma_{k-1} f_{k-1} + G^f_k G^f_{k-1}\Sigma_{k-2} f_{k-2} + \cdots + (-1)^k G^f_k \times\cdots\times G^f_2 G^f_1 \Sigma_0 f_0 - G^d_k d_{k-1} + G^f_k G^d_{k-1} d_{k-2} + \cdots + (-1)^k G^f_k \times\cdots\times G^f_2 G^d_1 d_0. \tag{3.28}$$

When we assume that $G^f_i \Sigma_{i-1} = 0$ and $G^d_i = 0$ for $i = 1, \ldots, k$, we obtain

$$E[\tilde{f}_k] = \Sigma_k f_k. \tag{3.29}$$

To obtain an unbiased estimate of the fault, the gain matrix $K^f_k$ must respect the following constraints:

$$K^f_k F^y_k = \Phi_k, \qquad K^f_k H_k F^x_{k-1}\Sigma_{k-1} = 0, \qquad K^f_k H_k E^x_{k-1} = 0. \tag{3.30}$$

Equation (3.30) can be written as

$$K^f_k G_k = \Pi_k, \tag{3.31}$$

where

$$G_k = \begin{bmatrix} F^y_k & H_k F^x_{k-1}\Sigma_{k-1} & H_k E^x_{k-1} \end{bmatrix}, \qquad \Pi_k = \begin{bmatrix} \Phi_k & 0 & 0 \end{bmatrix}. \tag{3.32}$$

Using (3.31), we can determine the gain matrix $K^f_k$ as follows:

$$K^f_k = \Pi_k G^*_k, \tag{3.33}$$

where $G^*_k = \left(G^T_k C^{-1}_k G_k\right)^+ G^T_k C^{-1}_k$ and $X^+$ denotes the Moore-Penrose pseudoinverse of $X$.
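A sketch of the pseudoinverse-based gain (3.33) for a rank-deficient feedthrough. The matrices follow the structure of example case 3 below, but the covariance $C_k$ and the time-invariance of $F^y_k$ (so $\Sigma_{k-1} = \Sigma_k$) are illustrative assumptions:

```python
import numpy as np

Fy  = np.array([[2.0, 0.0], [0.6, 0.0], [0.2, 0.0]])     # rank 1 < p = 2
H   = np.array([[1., 1., 0.], [0., 1., 0.], [0., 1., 1.]])
Fx  = np.array([[0.5, 0.7], [1.5, 1.1], [0.8, 0.9]])
Ex  = np.array([[0.], [2.], [1.]])
C   = np.diag([0.5, 0.2, 0.1])                           # stand-in for C_k (SPD)

Sig = np.eye(2) - np.linalg.pinv(Fy) @ Fy                # Sigma_k, here diag(0, 1)
Phi = np.eye(2) - Sig                                    # Phi_k = I_p - Sigma_k
Sig_prev = Sig                                           # time-invariant F^y assumed

G  = np.hstack([Fy, H @ Fx @ Sig_prev, H @ Ex])          # G_k from (3.32)
Pi = np.hstack([Phi, np.zeros((2, 2)), np.zeros((2, 1))])
Ci = np.linalg.inv(C)
Gstar = np.linalg.pinv(G.T @ Ci @ G) @ G.T @ Ci          # note pinv, not inv
Kf = Pi @ Gstar                                          # K^f_k from (3.33)

# Constraints (3.30): only the observable fault direction is reconstructed.
assert np.allclose(Kf @ Fy, Phi)
assert np.allclose(Kf @ (H @ Fx @ Sig_prev), 0)
assert np.allclose(Kf @ (H @ Ex), 0)
```

The pseudoinverse replaces the plain inverse of Theorem 3.2 because $G^T_k C^{-1}_k G_k$ is singular whenever $F^y_k$ is rank deficient.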

The state estimation error is given by

$$\tilde{x}_k = \left(I - K^x_k H_k\right)\tilde{x}_{k/k-1} - K^x_k F^y_k f_k + \left(E^x_{k-1} - K^x_k H_k E^x_{k-1}\right) d_{k-1} - K^x_k v_k = \left(I - K^x_k H_k\right) A_{k-1}\tilde{x}_{k-1} + \left(I - K^x_k H_k\right) F^x_{k-1}\tilde{f}_{k-1} + \left(E^x_{k-1} - K^x_k H_k E^x_{k-1}\right) d_{k-1} - K^x_k F^y_k f_k + \left(I - K^x_k H_k\right) w_{k-1} - K^x_k v_k. \tag{3.34}$$

To have an unbiased estimate of the state, the gain matrix $K^x_k$ must satisfy the following constraints:

$$K^x_k F^y_k = 0, \qquad K^x_k H_k F^x_{k-1}\Sigma_{k-1} = F^x_{k-1}\Sigma_{k-1}, \qquad K^x_k H_k E^x_{k-1} = E^x_{k-1}. \tag{3.35}$$

From (3.35), we obtain

$$K^x_k G_k = \Gamma_k, \tag{3.36}$$

where

$$\Gamma_k = \begin{bmatrix} 0 & F^x_{k-1}\Sigma_{k-1} & E^x_{k-1} \end{bmatrix}. \tag{3.37}$$

Referring to (3.34), we calculate the state error covariance matrix:

$$P^x_k = \left(I - K^x_k H_k\right) P^x_{k/k-1}\left(I - K^x_k H_k\right)^T + K^x_k R_k K^{xT}_k = K^x_k C_k K^{xT}_k - 2 P^x_{k/k-1} H^T_k K^{xT}_k + P^x_{k/k-1}. \tag{3.38}$$

The gain matrix $K^x_k$ is determined by minimizing the trace of the covariance matrix $P^x_k$ subject to (3.36). Using the Kitanidis method [8], we obtain

$$\begin{bmatrix} C_k & G_k \\ G^T_k & 0 \end{bmatrix}\begin{bmatrix} K^{xT}_k \\ -\Lambda^T_k \end{bmatrix} = \begin{bmatrix} H_k P^x_{k/k-1} \\ \Gamma^T_k \end{bmatrix}, \tag{3.39}$$

where $\Lambda_k$ is the matrix of Lagrange multipliers.

If $G^T_k C^{-1}_k G_k$ is nonsingular, (3.39) has a unique solution, and the gain matrix $K^x_k$ is given by

$$K^x_k = P^x_{k/k-1} H^T_k C^{-1}_k\left(I - G_k G^*_k\right) + \Gamma_k G^*_k. \tag{3.40}$$

The filter time update is the same as that given by (3.24) and (3.25). The obtained filters are tested with an illustrative example in Section 5.

4. Summary of Filter Equations

We assume the following are known:

(i) the known input $u_k$;
(ii) the matrices $A_k$, $B_k$, $H_k$, $F^x_k$, $F^y_k$, and $E^x_k$;
(iii) the covariance matrices $Q_k$ and $R_k$;
(iv) the initial values $\hat{x}_0$ and $P^x_0$.

We assume that the estimate of the initial state is unbiased and take the initial covariance matrix $P^x_{0/-1} = P^x_0$.

Step 1. Fault estimation:
$$C_k = H_k P^x_{k/k-1} H^T_k + R_k,$$
$$G_k = \begin{bmatrix} F^y_k & H_k F^x_{k-1}\Sigma_{k-1} & H_k E^x_{k-1} \end{bmatrix}, \qquad \Pi_k = \begin{bmatrix} \Phi_k & 0 & 0 \end{bmatrix},$$
$$G^*_k = \left(G^T_k C^{-1}_k G_k\right)^+ G^T_k C^{-1}_k,$$
$$K^f_k = \Pi_k G^*_k, \qquad \hat{f}_k = K^f_k\left(y_k - H_k\hat{x}_{k/k-1}\right), \qquad P^f_k = K^f_k C_k K^{fT}_k. \tag{4.1}$$

Step 2. Measurement update:
$$\Gamma_k = \begin{bmatrix} 0 & F^x_{k-1}\Sigma_{k-1} & E^x_{k-1} \end{bmatrix},$$
$$K^x_k = P^x_{k/k-1} H^T_k C^{-1}_k\left(I - G_k G^*_k\right) + \Gamma_k G^*_k,$$
$$\hat{x}_k = \hat{x}_{k/k-1} + K^x_k\left(y_k - H_k\hat{x}_{k/k-1}\right),$$
$$P^x_k = \left(I - K^x_k H_k\right) P^x_{k/k-1}\left(I - K^x_k H_k\right)^T + K^x_k R_k K^{xT}_k,$$
$$P^{xf}_k = -\left(I - K^x_k H_k\right) P^x_{k/k-1} H^T_k K^{fT}_k + K^x_k R_k K^{fT}_k. \tag{4.2}$$

Step 3. Time update:
$$\hat{x}_{k+1/k} = A_k\hat{x}_k + B_k u_k + F^x_k\hat{f}_k,$$
$$P^x_{k+1/k} = \begin{bmatrix} A_k & F^x_k \end{bmatrix}\begin{bmatrix} P^x_k & P^{xf}_k \\ P^{fx}_k & P^f_k \end{bmatrix}\begin{bmatrix} A^T_k \\ F^{xT}_k \end{bmatrix} + Q_k. \tag{4.3}$$
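The three steps above can be collected into a single recursion. The following is a minimal sketch, not the authors' code, run on a hypothetical time-invariant system with full-rank $F^y$ (so $\Sigma_k = 0$ and $\Phi_k = I_p$); all numerical values are illustrative:

```python
import numpy as np

def umv_step(xh_pr, Px_pr, y, u, A, B, Fx, Fy, Ex, H, Q, R, Sig_prev):
    """One recursion of the summarized filter (Steps 1-3), assuming
    time-invariant matrices so F^x_{k-1} and E^x_{k-1} equal F^x_k, E^x_k."""
    n, m = A.shape[0], H.shape[0]
    p, q = Fy.shape[1], Ex.shape[1]
    # Step 1: fault estimation (4.1)
    C = H @ Px_pr @ H.T + R
    Ci = np.linalg.inv(C)
    Sig = np.eye(p) - np.linalg.pinv(Fy) @ Fy
    Phi = np.eye(p) - Sig
    G = np.hstack([Fy, H @ Fx @ Sig_prev, H @ Ex])
    Pi = np.hstack([Phi, np.zeros((p, p)), np.zeros((p, q))])
    Gs = np.linalg.pinv(G.T @ Ci @ G) @ G.T @ Ci
    Kf = Pi @ Gs
    innov = y - H @ xh_pr
    fh = Kf @ innov
    Pf = Kf @ C @ Kf.T
    # Step 2: measurement update (4.2)
    Gam = np.hstack([np.zeros((n, p)), Fx @ Sig_prev, Ex])
    Kx = Px_pr @ H.T @ Ci @ (np.eye(m) - G @ Gs) + Gam @ Gs
    xh = xh_pr + Kx @ innov
    IKH = np.eye(n) - Kx @ H
    Px = IKH @ Px_pr @ IKH.T + Kx @ R @ Kx.T
    Pxf = -IKH @ Px_pr @ H.T @ Kf.T + Kx @ R @ Kf.T
    # Step 3: time update (4.3)
    xh_next = A @ xh + B @ u + Fx @ fh
    AF = np.hstack([A, Fx])
    Px_next = AF @ np.block([[Px, Pxf], [Pxf.T, Pf]]) @ AF.T + Q
    return xh, fh, Kf, Kx, xh_next, Px_next, Sig

# Hypothetical time-invariant system in the spirit of the example.
rng = np.random.default_rng(1)
A  = np.array([[0.4, 0.1, 0.2], [0.1, 0.6, 0.3], [0.5, 0.1, 0.25]])
B  = np.array([[2.0], [1.5], [0.5]])
Fx = np.array([[0.5, 0.7], [1.5, 1.1], [0.8, 0.9]])
Ex = np.array([[0.0], [2.0], [1.0]])
H  = np.array([[1., 1., 0.], [0., 1., 0.], [0., 1., 1.]])
Fy = np.array([[2.0, 1.4], [0.6, 0.3], [0.2, 1.6]])   # full column rank
Q, R = 0.1 * np.eye(3), 0.01 * np.eye(3)

x, xh_pr, Px_pr = np.array([1.0, 1.0, 2.0]), np.zeros(3), np.eye(3)
Sig_prev = np.zeros((2, 2))
for k in range(30):
    u = np.array([np.sin(0.2 * k)])
    f = np.array([5.0 * (k >= 10), 4.0 * (k >= 15)])  # step faults
    d = np.array([4.0 * (k >= 5)])                    # step disturbance
    y = H @ x + Fy @ f + 0.1 * rng.standard_normal(3)
    xh, fh, Kf, Kx, xh_pr, Px_pr, Sig_prev = umv_step(
        xh_pr, Px_pr, y, u, A, B, Fx, Fy, Ex, H, Q, R, Sig_prev)
    x = A @ x + B @ u + Fx @ f + Ex @ d + np.sqrt(0.1) * rng.standard_normal(3)

# With full-rank F^y the gains satisfy the unbiasedness constraints exactly.
assert np.allclose(Kf @ Fy, np.eye(2))
assert np.allclose(Kx @ Fy, 0)
assert np.allclose(Kx @ H @ Ex, Ex)
```

Note that the disturbance $d$ enters the simulated dynamics but never the filter, which is the point of the decoupling constraints.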

Remark 4.1. If $\operatorname{rank}(F^y_k) = p$, then $\Sigma_k = 0$ for all $k \geq 0$ and it is simpler to use the filter obtained in Section 3.1. In this case, the gain matrices $K^f_k$ and $K^x_k$ are given by (3.14) and (3.19), respectively.

Remark 4.2. The following remarks give the relationships with existing results in the literature.
(i) If $E^x_k = 0$ and $0 < \operatorname{rank}(F^y_k) \leq p$, the obtained filter is equivalent to the ERTSF developed in [15].
(ii) If $E^x_k = 0$ and $\operatorname{rank}(F^y_k) = p$, then $\Sigma_k = 0$ for all $k \geq 0$ and the obtained filter is equivalent to the RTSF proposed in [13].
(iii) If $F^x_k = 0$ and $F^y_k = 0$, the filter of [8] is obtained.
(iv) If $F^x_k = 0$, $F^y_k = 0$, and $E^x_k = 0$, we obtain the standard Kalman filter.

5. An Illustrative Example

To apply the proposed filters, we treat different cases with respect to assumption $A3$. The parameters of the system (2.1) are given by

$$x_k = \begin{bmatrix} x_{1,k} \\ x_{2,k} \\ x_{3,k} \end{bmatrix}, \qquad A_k = \begin{bmatrix} a_k & 0.1 & 0.2 \\ 0.1 & 0.6 & 0.3 \\ 0.5 & 0.1 & 0.25 \end{bmatrix}, \qquad a_k = 0.4 + 0.3\sin(0.2k),$$
$$B_k = \begin{bmatrix} 2 \\ 1.5 \\ 0.5 \end{bmatrix}, \qquad F^x_k = \begin{bmatrix} 0.5 & 0.7 \\ 1.5 & 1.1 \\ 0.8 & 0.9 \end{bmatrix}, \qquad E^x_k = \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix}, \qquad H_k = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix},$$
$$Q_k = 0.1 I_{3\times 3}, \qquad R_k = 0.01 I_{3\times 3}, \qquad x_0 = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}, \qquad \hat{x}_0 = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \qquad P^x_0 = I_{3\times 3}. \tag{5.1}$$

In this simulation, four cases of $F^y_k$ are considered as follows:

$$\left(F^y_k\right)_1 = \begin{bmatrix} 2 & 1.4 \\ 0.6 & 0.3 \\ 0.2 & 1.6 \end{bmatrix}, \qquad \left(F^y_k\right)_2 = \begin{bmatrix} 2 & 1 \\ 0.6 & 0.3 \\ 0.2 & 0.1 \end{bmatrix}, \qquad \left(F^y_k\right)_3 = \begin{bmatrix} 2 & 0 \\ 0.6 & 0 \\ 0.2 & 0 \end{bmatrix}, \qquad \left(F^y_k\right)_4 = \begin{bmatrix} 0 & 1.4 \\ 0 & 0.3 \\ 0 & 1.6 \end{bmatrix}. \tag{5.2}$$

We assume that the fault and the disturbance are given by

$$f_{1,k} = 5u_s(k-10) - 5u_s(k-70), \qquad f_{2,k} = 4u_s(k-30) - 4u_s(k-65), \qquad d_k = 4u_s(k-15) - 4u_s(k-55), \tag{5.3}$$

where $u_s(k)$ is the unit-step function.
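For concreteness, the four feedthrough cases of (5.2), as transcribed here, and the step signals of (5.3) can be set up as follows (the rank of each case determines which filter branch applies):

```python
import numpy as np

# The four F^y cases of (5.2): case 1 has full column rank, cases 2-4 are
# rank deficient in different ways (values as transcribed from the example).
Fy_cases = [
    np.array([[2.0, 1.4], [0.6, 0.3], [0.2, 1.6]]),   # rank 2 (full)
    np.array([[2.0, 1.0], [0.6, 0.3], [0.2, 0.1]]),   # rank 1 (proportional columns)
    np.array([[2.0, 0.0], [0.6, 0.0], [0.2, 0.0]]),   # rank 1 (zero second column)
    np.array([[0.0, 1.4], [0.0, 0.3], [0.0, 1.6]]),   # rank 1 (zero first column)
]
assert [np.linalg.matrix_rank(F) for F in Fy_cases] == [2, 1, 1, 1]

# Fault and disturbance profiles of (5.3) over the 100-step horizon.
u_s = lambda k: (k >= 0).astype(float)                # unit-step function
k = np.arange(100)
f1 = 5 * u_s(k - 10) - 5 * u_s(k - 70)
f2 = 4 * u_s(k - 30) - 4 * u_s(k - 65)
d  = 4 * u_s(k - 15) - 4 * u_s(k - 55)
assert f1[40] == 5 and f1[80] == 0 and f2[40] == 4 and d[40] == 4
```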

Figure 1 presents the input sequence of the system (2.1). The simulation time is 100 time steps.

Figure 1: Known input $u_k$.

In Figure 2, we have plotted the actual and the estimated value of the first element of the state vector $x_k = [x_{1k} \; x_{2k} \; x_{3k}]^T$. Figures 3 and 4 present the actual and the estimated values of the first and second elements of the fault vector $f_k = [f_{1k} \; f_{2k}]^T$, respectively. The convergence of the traces of the covariance matrices $P^x_k$ and $P^f_k$ is shown in Figures 5 and 6, respectively.

Figure 2: Actual state $x_{1k}$ and estimated state $\hat{x}_{1k}$.
Figure 3: Actual fault $f_{1k}$ and estimated fault $\hat{f}_{1k}$.
Figure 4: Actual fault $f_{2k}$ and estimated fault $\hat{f}_{2k}$.
Figure 5: Trace of the covariance matrix $P^x_k$.
Figure 6: Trace of the covariance matrix $P^f_k$.

The simulation results in Table 1 show the average root mean square errors (RMSEs) in the estimated states and faults.

Table 1: RMSE values.

According to Figures 2–6 and Table 1, we can conclude that when the matrix $F^y_k$ has full rank, we obtain a good estimate of both the state and the fault (Figures 2(a), 3(a), and 4(a)). On the other hand, when the matrix $F^y_k$ does not have full rank, it is not possible to obtain a good estimate of all components of the fault (Figures 3(b), 3(c), 3(d), 4(b), 4(c), and 4(d)), but the state estimate remains acceptable (Figures 2(b), 2(c), and 2(d)).

6. Conclusion

In this paper, the problem of joint state and fault estimation is solved for stochastic linear discrete-time, time-varying systems. A recursive unbiased minimum-variance (UMV) filter is proposed for the case where the direct feedthrough matrix of the fault has arbitrary rank. The advantages of this filter are especially important when no prior information about the unknown disturbances or the fault is available. The proposed filter has been applied to an illustrative example. This recursive filter achieves a robust, unbiased minimum-variance estimate of the state and the fault despite the presence of unknown disturbances.

References

  1. M. Blanke, M. Kinnaert, J. Lunze, and M. Staroswiecki, Diagnosis and Fault-Tolerant Control, Springer, Berlin, Germany, 2006.
  2. B. Friedland, "Treatment of bias in recursive filtering," IEEE Transactions on Automatic Control, vol. 14, pp. 359–367, 1969.
  3. A. T. Alouani, T. R. Rice, and W. D. A. Blair, "Two-stage filter for state estimation in the presence of dynamical stochastic bias," in Proceedings of the American Control Conference, vol. 2, pp. 1784–1788, Chicago, Ill, USA, 1992.
  4. J. Y. Keller and M. Darouach, "Two-stage Kalman estimator with unknown exogenous inputs," Automatica, vol. 35, no. 2, pp. 339–342, 1999.
  5. C. S. Hsieh and F. C. Chen, "Optimal solution of the two-stage Kalman estimator," IEEE Transactions on Automatic Control, vol. 44, no. 1, pp. 194–199, 1999.
  6. K. H. Kim, J. G. Lee, and C. G. Park, "Adaptive two-stage Kalman filter in the presence of unknown random bias," International Journal of Adaptive Control and Signal Processing, vol. 20, no. 7, pp. 305–319, 2006.
  7. K. H. Kim, J. G. Lee, and C. G. Park, "The stability analysis of the adaptive two-stage Kalman filter," International Journal of Adaptive Control and Signal Processing, vol. 21, no. 10, pp. 856–870, 2007.
  8. P. K. Kitanidis, "Unbiased minimum-variance linear state estimation," Automatica, vol. 23, no. 6, pp. 775–778, 1987.
  9. M. Darouach, M. Zasadzinski, and M. Boutayeb, "Extension of minimum variance estimation for systems with unknown inputs," Automatica, vol. 39, no. 5, pp. 867–876, 2003.
  10. C. S. Hsieh, "Robust two-stage Kalman filters for systems with unknown inputs," IEEE Transactions on Automatic Control, vol. 45, no. 12, pp. 2374–2378, 2000.
  11. C. S. Hsieh, "Optimal minimum-variance filtering for systems with unknown inputs," in Proceedings of the 6th World Congress on Intelligent Control and Automation (WCICA '06), vol. 1, pp. 1870–1874, Dalian, China, 2006.
  12. S. Gillijns and B. De Moor, "Unbiased minimum-variance input and state estimation for linear discrete-time systems with direct feedthrough," Automatica, vol. 43, no. 5, pp. 934–937, 2007.
  13. S. Gillijns and B. De Moor, "Unbiased minimum-variance input and state estimation for linear discrete-time systems," Automatica, vol. 43, no. 1, pp. 111–116, 2007.
  14. Y. Cheng, H. Ye, Y. Wang, and D. Zhou, "Unbiased minimum-variance state estimation for linear systems with unknown input," Automatica, vol. 45, no. 2, pp. 485–491, 2009.
  15. C. S. Hsieh, "Extension of unbiased minimum-variance input and state estimation for systems with unknown inputs," Automatica, vol. 45, no. 9, pp. 2149–2153, 2009.
  16. T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation, Prentice-Hall, Englewood Cliffs, NJ, USA, 2000.